entry_id: http://arxiv.org/abs/2407.13630v1
published: 2024-07-18 16:01:40
title: Exploration to early universe by Josephson Junction Switching Current Detector
authors: [ "Dan Kondo" ]
primary_category: hep-ph
categories: [ "hep-ph", "cond-mat.mes-hall", "gr-qc", "quant-ph" ]
Exploration to early universe by Josephson Junction Switching Current Detector
Dan Kondo
July 22, 2024
========================================================

§ ABSTRACT In this paper, we propose a method to probe a Stochastic Gravitational Wave Background (SGWB) with a Josephson Junction Switching Current Detector (JJSCD). The sensitivity to the shear can reach h≃ 10^-19 realistically, 10^-21 in the near future, and 10^-24 optimistically. If we utilize the enhancement factor from the ratio of the frequencies, it is possible to reach further below the Big Bang Nucleosynthesis (BBN) bound. It will be interesting if we can access this region and discover a footprint of new physics.

§ INTRODUCTION Ultra-High-Frequency Gravitational Waves (UHF-GWs) can be a frontier for exploring new physics in the next generation. On one hand, there are no known astrophysical sources of gravitational waves with frequency higher than kHz <cit.>. On the other hand, there are proposals for possible sources of UHF-GWs such as Primordial Black Holes (PBHs) <cit.>, inflation <cit.>, reheating <cit.>, oscillons <cit.> and topological defects <cit.>. Detection of such a GW can be a frontier for the next generation of physics. There are several proposals for hunting these gravitational waves. Thanks to the fact that the frequency of interest is close to the typical mass of the QCD axion <cit.>, they mainly come from applying axion detection proposals, such as haloscopes, magnons, and Rydberg atom sensors, to GW detection <cit.>. Besides axion-based methods, we can consider ways that utilize the properties of gravity itself. One such way is to use the preciseness of atomic clocks <cit.>. In that paper, the authors comment that a way to probe the Stochastic Gravitational Wave Background (SGWB) is needed. In this paper, we propose a way to pursue this possibility with an application of the Josephson Junction Switching Current Detector (JJSCD) <cit.>. The key to obtaining the small signal is to apply the technique of heterodyne (homodyne) detection <cit.>. The idea is simple. The first step is to obtain a modulation signal from the combination of an EM background wave and the SGWB. Second, we pass the modulation signal through a filter to drop the other contributions. Finally, the signal stimulates the JJSCD, which counts it. The concept of the JJSCD is similar to a transition edge sensor (TES). The benefit of the UHF-GW search is that we can probe with a compact apparatus because of the shortness of the wavelength. The organization of this paper is as follows. In <ref>, we review the relation between electromagnetic (EM) waves and gravitational waves. In <ref>, we describe a way to pick up the modulation signal. In <ref>, we introduce the concept of the Josephson junction switching current detector. In <ref>, we estimate the sensitivity to the gravitational wave shear in three cases: non-resonant (realistic), resonant (near future), and optimistic.

§ EM AND GW In this section, we review the relation between electromagnetic waves and gravitational waves based on <cit.>. There are mainly two ways to see the GW effects on propagating EM waves. One is the shift of the frequency; the other is the modulation. In the former case, the EM waves are red-shifted in the presence of gravity at the level of 𝒪(h). We describe this in <ref>. In the latter case, we can see the modulated EM waves: the EM field contains tiny components, mixed in by the GW, with shifted frequency. We explore this in <ref>. 
§.§ Propagation of EM wave in curved spacetime Then Maxwell equation in perturbed curved spacetime is <cit.> ∂_μ F^μν =-∂_μ[1/2hF^μν-h^μαF_α^ν-F^μ_β h^βν] Here, we expand F_μν=F^0_μν+Δ F_μν as 𝒪(h^0) part and 𝒪(h) part. F^0_μν is the solution of Maxwell equation in flat spacetime ∂_μ F^μν=0. To the level of 𝒪(h), the Maxwell equation is ∂_μΔ F^μν =-∂_μ[1/2h(F_0)^μν-h^μα(F_0)_α^ν-(F_0)^μ_β h^βν]=j_eff^ν. From this equation, we can read out that the presence of GW and background EM wave induce an EM wave with amplitude 𝒪(h) compared to original one. In TT gauge, h^TT(x^μ)=h_+ s^2_θcos[ω_g(t-c_θ x-s_θ z)+φ_0] In transverse traceless gauge (TT gauge), a plame GW propagating in x^3-x^1 plane with angle θ can be written as <cit.> h^TT_tμ =h^TT_μ t=0, h^TT_ij =(h_+e^+_ij+h_× e^×_ij)cos[ω_g(x^0-k̂_g·x)+φ_0], k̂_g = (c_θ,0,s_θ)^T, e^+_ij = [ s_θ^2 0 -s_θ c_θ; 0 -1 0; -s_θ c_θ 0 c^2_θ ],e^×_ij= [ 0 c_θ 0; c_θ 0 -c_θ; 0 -c_θ 0 ]. Let us consider the following EM wave A(t,x) = A_0cos[ω_γ(t-x)]e_z+ΔA(t,x) with field strength for the flat part (F_0)_tx =(F_0)_ty=(F_0)_xy=(F_0)_yz=0, (F_0)_tz = -A_0ω_γsinω_γ(t-x), (F_0)_xz = A_0ω_γsinω_γ(t-x). With combination of h^TT, the right hand side of <ref> is the linear combination of cos_- = cos[(ω_g-ω_γ)t-(ω_gc_θ-ω_γ)x], cos_+ = cos[(ω_g+ω_γ)t-(ω_gc_θ+ω_γ)x]. We assume that ΔA takes the following form ΔA_i(t,x) = a_i^-cos_-+a_i^+cos_+. We can evaluate the both the left hand side and the right hand side of the equation <ref> to obtain the EM amplitude of 𝒪(h). You can see the <ref> for the calculation process. a^±_x = -1/4A_0h_+s_2θω_γ(ω_γ±ω_g(1-c_θ))/(ω_γ±ω_g)^2≃ -1/4A_0h_+s_2θ+𝒪(ω_g/ω_γ), a^±_y = ± A_0h_×ω_γω_g c_θ/2{ω_g^2(1+c_θ)± 2ω_gω_γ}≃1/4A_0h_× c_θ+𝒪(ω_g/ω_γ), a^±_z = A_0h_+ω_γ{ω_γ(1+c_θ)± c_θ^2ω_g}/2{ω_g^2(1+c_θ)± 2ω_gω_γ}≃±1/4A_0h_+ω_γ/ω_g(1+c_θ)+𝒪(ω_g/ω_γ). The point here is that all of them are 𝒪(h) relative to background EM wave amplitude. This means that background EM wave and GW produce EM wave with amplitude 𝒪(h) compared to original amplitude with shifted frequency. Also, along the direction of input EM wave, there is an enhancement factor ω_γ/ω_g. It is more likely that we can hunt a signal from h_+ for high frequency input EM wave and relatively low frequency GW. Note that both h_+ and h_× arise in the direction perpendicular to the input EM wave. If there are chances to detect the signal in the future, by comparing the signal orthogonal to the input direction, we will be able to study the ratio of h_+ and h_×, which can be a hint of early universe. § DETECTION SCHEME We will describe how to detect the effect from GW. We can write the photon wave at the detector as <cit.> A(t,L) = A_γcos(ω^s_γ t+ϕ'_γ)+A_γh_+/4ω^s_γ/ω_g[cos[ω^+(t-L)+ϕ'_γ]-cos[ω^-(t-L)+ϕ'_γ]] + A_γh_+/4ω^s_γ/ω_g[-cos[ω^+t-ω_γ L+ϕ'_γ]+cos[ω^-t-ω_γ L+ϕ'_γ]] - A_γh_+/2ω^s_γ/ω_g[sin[ω_gL]sin[ω_γ(t-L)+ϕ'_γ]] This is linear combination of each basic waves, we will pick up a second term as representative wave. We write it as A(t,L) = A_γcos(ω_γ t+ϕ'_γ) -A_γh_+/4ω^s_γ/ω_g[cos[ω^-(t-L)+ϕ'_γ]] The electric field an be obtained as E = -∂ A(t,L)/∂ t =A_γω^s_γsin(ω^s_γ t+ϕ'_γ) -A_γω^s_γh_+/4ω^-/ω_g[sin[ω^-(t-L)+ϕ'_γ]] = E_0sin(ω^s_γ t+ϕ'_γ) -E_0h[sin[ω^-(t-L)+ϕ'_γ]] where we wrote h=h_+/4ω^-/ω_g. From this expression, we can see that if GW comes, we can see the modulation in the signal. To test the possibility, we propose the way to detect the effect shown in <ref>. The light source emit laser beam with frequency ω^s_γ. The beam is split into half by half by 50% beam splitter. 
The first beam arrives at detector1 as E_1 = E_0 e^iω^s_γ t+ϕ'_γ. The second wave affected by GW arrives at detector2 as E_2 = E_0e^ω^s_γ t+ϕ'_γ +E_0he^ω^-(t-L)+ϕ'_γ The intensity of the laser is Poynting vector S=E×H, I_1 = cϵ_0/2|E_1|^2 =cϵ_0/2E_0^2 ≡ P_0 I_2 = cϵ_0/2|E_2|^2 =cϵ_0/2E_0^2+cϵ_0E_0^2hcos(ω^-(t-L)-ω^s_γ t)+cϵ_0/2E_0^2h^2 =P_0+ 2P_0 hcos(ω^-(t-L)-ω^s_γ t)+P_0h^2 Here, we use a low pass filter which shuts out the original laser with frequency ω_γ^s. The last term is negligible considering that h<10^-20 from BBN bound <cit.>. In practice, only second term which is sourced by EM background field and GW pass the filter. The frequency of the modulated wave is |(ω_γ-ω_g)-ω_γ|=ω_g. If we can detect the microwave with this frequency ω_g, we can get an evidence of GW. Homodyne signal: We show the schematic picture of the signal from homodyne measuremnt in <ref>. For simplicity, we discuss for classical case, but the essence of quantum case is the same <cit.>. Let r,t be reflection, transmission coefficient of the splitter, respectively. They obey a constraint |r|^2+|t|^2 =1, rt^*+r^*t =0. From the second relation, we set r^*t=i|rt|. The amplitude of the diode 1,2 E_D1,E_D2 are E_D1 = rE_1+tE_2, E_D2 = tE_1+rE_2. The difference of output signal is S_hom = |E_D2|^2-|E_D1|^2 =(1-2|t|^2)|E_1|^2-(1-2|t|^2)|E_2|^2+4|tr|[E_1^*E_2]. For ideal beam splitter with |r|^2=|t|^2=1/2, the signal is S_hom = 2[E_1^*E_2]. It is possible to obtain the modulation signal originating from the gravitational wave background. Signal from Lock-in technique: Lock-in technique is widely used technique. We show the schematic picture in <ref>. By injecting the signal and local oscillator field into mixer to pass the band pass filter, one can pick up a signal with power E_0^2h and frequency ω_g. This signal propagates through the coaxial cable to the Josephson junction. This type of measurement are considered by wide range of people <cit.> because it plays an important role in various field from quantum information to astronomy. § JOSEPHSON JUNCTION SWITCHING CURRENT DETECTOR (JJSCD) Here we describe the Josephson junction (JJ) Switching current detector (SCD) based on <cit.>. josephson junction is given by Kirchhhoff law C (Φ_0/2π)^2φ̈+(Φ_0/2π)^2φ̇/R+∂ U/∂φ=(Φ_0/2π)(I_b+I_m) where C is capacitance, R is resistivity, Φ_0=2.067× 10^-15Wb is quantum flux, and φ is the phase difference of Josephson junction. In typical cases, the potential is Washboard potential <cit.> U(φ)=E_J0 (-iφ-cosφ) where E_J0=(Φ_0/2π)I_0 is Josephson energy, i=I/I_0. The point is that we can regard the equation as particle in metastable potential in one dimension. In <ref>, we show the potential for specific values. As you can see, as the current flows, the tilt of the potential increases and current is likely to flow. Let us see typical values related to the setup. ω_p0=(2π I_c0/Φ_0C)^1/2 is plasma frequency with zero bias. Φ_0=2.06× 10^-15[Wb] is flux quantum. For example, take C=1pF, I_c0=50μA ω_p0 = 389 GHz( I_c0/50μA)^1/2( C/1pF)^-1/2 We can reach to MHz region if we can use a condensor with capacitance 1kF. Dynamics of JJ is described by washboard potential <ref>. The barrier height is Δ U(φ)=2E_J0(√(1-i^2)-iarccos i), i=I/I_c0, E_J0=Φ_0 I_c0/2π is Josephson energy. E_J0 =1.19× 10^3 K( I_c0/50μA) Typical parameter values are I(t)=I_b cosω_bt, ω_b/2π=150Hz, T=1K, I_c0=50μA. The point is that the bias current is effectively DC because the frequency is low. 
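The junction scales quoted above are easy to reproduce. The following minimal Python sketch (not from the paper; the bias ratio i = 0.9 is an arbitrary illustration) evaluates the zero-bias plasma frequency, the Josephson energy in temperature units, and the washboard barrier height for C = 1 pF and I_c0 = 50 μA.

```python
import numpy as np

# Quick numerical reproduction of the junction scales quoted above (SI units).
Phi0 = 2.067e-15          # flux quantum [Wb]
kB = 1.380649e-23         # Boltzmann constant [J/K]
C = 1e-12                 # junction capacitance [F]
Ic0 = 50e-6               # critical current [A]

omega_p0 = np.sqrt(2 * np.pi * Ic0 / (Phi0 * C))   # zero-bias plasma frequency [rad/s]
EJ0 = Phi0 * Ic0 / (2 * np.pi)                     # Josephson energy [J]

print(f"omega_p0 = {omega_p0:.3e} rad/s")          # ~3.9e11 rad/s, the quoted 389 GHz
print(f"E_J0 / k_B = {EJ0 / kB:.3e} K")            # ~1.19e3 K, as quoted

# Washboard barrier height Delta U(i) = 2 E_J0 (sqrt(1 - i^2) - i * arccos(i))
i = 0.9                                            # illustrative bias ratio I / I_c0
dU = 2 * EJ0 * (np.sqrt(1 - i**2) - i * np.arccos(i))
print(f"Delta U(i=0.9) / k_B = {dU / kB:.3e} K")
```

The printed values reproduce the 389 GHz (angular frequency) and 1.19×10^3 K figures above.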
It is assumed that the thermal noise dominates and the escape rate is Γ_0(I) = a(I) ω_p(I) /2πexp[ -Δ U(I)/k_bT] a(I) = 4/ [ 1+√(1+Q(I)k_bT/1.8Δ U(I) ) ]^2 The probability density (escape rate) for switching and the gain are obtained as g(I) = Γ(I) /dI/dt [1-G(I)], G(I) = ∫_0^I dx g(x)dx. From these two equations, we can obtain the gain as G(I) = 1-exp[ -∫ _0^I dI' Γ(I') /dI'/dt]. With microwave, there is an extra current associated with thid microwave i_MW, and input current is oscillating I(t)=I_b sinω_bt+I_MWsinω t. and escape rate is enhanced. Γ_MW=γΓ_0 Since this contributes additionary to the bias current, it is expected that switching current is smaller than original value without i_MW. For non-resonant case ω≪ω_p, the escape rate is obtained by averaging over the period τ=2π/ω Γ_MW = 1/τ∫_0^τΓ_0(t)dt. For resonsnt case, For high Q≫1, when resonance happens, the enhancement factor is given by the fitting formula <cit.> logγ ≃5E_J0Δ U/(k_BT)^2i_MW^2Q/(ω_p/ω_p0)^2 f(x), x=ω/ω_p-1 f(x<0) = Q [ e^9x/2Q+9(1-2x+2/2Q+9) +e^2Qx-9x/9-2Q( 1+2/9-2Q)+2xe^9x/9-2Q] f(x>0) = Qe^-2Qx[ 1/9+2Q+2/(9+2Q)^2] The point is that this enhancement factor has a sharp cutoff beyond ω_p. From the detector viewpoint, this is more sensitive to lower frequency with respect to resonant frequency ω_p. We can freely tune the ω_p to hunt the signal originating from GW with frequency ω_g. § SENSITIVITY ESTIMATE We make an estimate for the modulated microwave input from the background EM wave and SGWB based on <cit.>. If the induced power is larger than that from thermal noise, we can see the signal. This determines the Noise Equivalent Power (NEP). According to the paper <cit.>, the typical value of NEPs are NEP≃ 10^-19W/√(Hz) for non-resonant case 10^-21W/√(Hz) for resonant case 10^-24W/√(Hz) for best value As noted in the paper <cit.>, the above two values can be achieved by realistic experimental setup <cit.>. Even the best value might be achievable to the level of NEP ∼10^-25W/√(Hz) in the future <cit.>. In particular, these values are realized experimentally <cit.>. We comment that non-resonant sensitivity can be achieved for photo diode <cit.>, also the resonant sensitivity is expected to reach using technology such as squeezing <cit.>. Therefore, we expect that the resonant sensitivity is possible in near future and hope that we can use the technology with optimistic level. We use a input laser With 1mW as an example [This value is typical value as large as laser pointer <http://www.pmaweb.caltech.edu/ phy003/labs/interferometryhandout5hqM1.pdf>. Note that it might be possible to use 𝒪(10^2)W laser <cit.>, or <https://lightcon.com/product/carbide-femtosecond-lasers/> in the future or 𝒪(1)W laser <cit.>. If it is possible, we can gain an enhancement from these laser input. I do not know what will happen in the future.], the Signal to Noise Ratio (SNR) is accordingly SNR≃(P_laser/10^-3W)· 10^16h√(Hz) for non-resonant case 10^18h√(Hz) for resonant case 10^21h√(Hz) for best value If we use 1 day for each frequency band SNR≃ 3× 10^18h for non-resonant case 3× 10^20h for resonant case 3× 10^23h for best value Therefore requiring SNR=1 go to h∼3× 10^-19, 3×10^-22, 3×10^-24, respectively. We show these sensitivities in <ref>. It seems that the present estimate does not win the BBN bound. However, it might be possible to beat this bound if we utilize the ω_γ/ω_g∼10^5 enhancement factor with THz input EM wave in 𝒪(10)MHz region. We show the sensitivity including the enhancement factor in <ref>. 
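A minimal numerical version of this estimate (assuming the simple scaling SNR ≈ P_laser h √(t_int)/NEP used above, without the ω_γ/ω_g enhancement) is sketched below.

```python
import numpy as np

# Order-of-magnitude sensitivity from the NEP values above: the modulation power
# is ~ P_laser * h, integrated for t_int, so SNR = 1 gives
#   h_min ~ NEP / (P_laser * sqrt(t_int)).
P_laser = 1e-3                 # input laser power [W]
t_int = 86400.0                # one day per frequency band [s]

for label, NEP in [("non-resonant", 1e-19),
                   ("resonant",     1e-21),
                   ("best",         1e-24)]:   # NEP in [W / sqrt(Hz)]
    h_min = NEP / (P_laser * np.sqrt(t_int))
    print(f"{label:>12}: h_min ~ {h_min:.1e}")
```

The non-resonant and optimistic cases reproduce the h ∼ 3×10^-19 and 3×10^-24 quoted above; the resonant case comes out at ∼3×10^-21, in line with the 10^-21 figure in the abstract.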
Although this will be allowed only for h_+, it will be fun to explore the region bellow the BBN bound, where we might be able to see the footprint of the early universe region where we cannot see. § CONCLUSION In this paper, we explored the possibility to detect the signal of stochastic gravitational wave background (SGWB) based on heterodyne (homodyne) measurement with Josephson Junction Switching Current Detector (JJSCD). The conceptual strength of the paper is that the resonant frequency is tunable from MHz to THz. The moderate sensitivity is competitive to other proposals. In the optimistic case, the sensitivity can be close to the BBN bound, furthermore, in the case of high frequency input and low frequency target GW enhancement ω_γ/ω_g might help to go to the deeper region below the BBN bound. If such case is realized, we might be able to see the footprint of the early universe from the region where we can get access only through the GW. This direction can be one of the ways to search for new physics hidden in early universe for the next generation. § ACKNOWLEDGEMENT We appreciate Vladmir Krasnov, Hitoshi Murayama, Elisa Ferreira, Tom Melia, Ippei Obata, Kloian Luzanov, Genta Osaki, Akira Miyazawa, Kosuke Aizawa, Ryosuke Akizawa, and Tomotake Matsumura for conversation, encouragement. This work was supported by JSPS KAKENHI Grant Number 24KJ0613. § PHOTON FREQUENCY SHIFT Let us consider a situation that we emit a photon with frequency ω_γ from the source (S) to the detector (D). We assume that D is on positive x axis. The photon 4 momentum is p^μ|_t=0=(ω_0,ω_0,0,0) The frame with metric g_μν, an observer with four-velocity u^μ will measure a photon frequency ω_γ =-g_μνp^μ u^ν. We would like to study a tiny effect from GW that passes S and D initially at rest. We can write the metric, four momentum and four velocity as g_μν = η_μν+h_μν, p^μ =(ω_0,ω_0,0,0)+δ p^μ, u^μ =(1,0,0,0)+δ u^μ, η = [ -1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1 ], where h_μν, δ p^μ, δ u^μ are 𝒪(h) correction. Here h is the GW amplitude (strain). The inverse metric is g^μν =η_μν-h_μν We can obtain ω_γ = -(g_ttp^tu^t+g_txp^tu^x+g_xtp^xu^t+g_xxp^xu^x) =ω_0(1+δ u^t-δ u^x-h_tt-h_tx)+𝒪(h^2) (many terms are in the 𝒪(h^2)) Basically, the photon propagates the vacuum, the momentum p^0 obeys the geodesic equation dp^t/dλ = -Γ^t_μνp^μ p^ν =-ω^2_0(Γ^t_tt+2Γ^t_xt+Γ^t_xx) where Γ^μ_νκ =1/2g^μλ[∂_κ g_λν+∂_ν g_λκ-∂_λ g_νκ] = 1/2η^μλ[∂_κ h_λν+∂_ν h_λκ-∂_λ h_νκ] is a connection for the background metric <ref>. Specifically, Γ^t_tt =1/2η^tλ[∂_t h_λ t+∂_t h_λ t-∂_λ h_tt] =-1/2∂_th_tt Γ^x_tt =η^xλ[∂_t h_λ t+∂_t h_λ t-∂_λ h_tt] =∂_th_tx-1/2∂_xh_tt Γ^t_xt =1/2η^tλ[∂_t h_λ x+∂_x h_λ t-∂_λ h_xt] =-1/2∂_x h_tt Γ^t_xx =1/2η^tλ[∂_x h_λ x+∂_x h_λ x-∂_λ h_xx] =-∂_x h_tx+1/2∂_t h_xx We can think that the light pass the light cone. By substituting the above expressions, we can obtain the master formula for the frequency by integrating dp^0/dλ ω^D_γ-ω^S_γ/ω^D_γ =-ω_0/2∫_0^λ_Ddλ' ∂_t[h_tt+2h_xt+h_xx]_x^μ=x^μ_λ',0 +[δ u^t-δ u^x](λ_D)-[δ u^t-δ u^x](λ_S). One way to test GW is to see this frequency shift that is explored in <cit.>. § CALCULATION PROCESS OF THE EM WAVE PROPAGATION IN GRAVITATIONAL WAVE BACKGROUND In this section, we write about the calculation process in <ref>. Maxwell equation in curved spacetime without external source is <cit.> ∇_μ (g^μαF_αβg^βν) =∂_μ (√(-g)g^μαF_αβg^βν)=0 here, g_μν =η_μν+h_μν. 
Since (1+h)= exp(1+h)=exp(Tr(1+h))=1+Tr(h), √(-g) =1+1/2h Then Maxwell equation becomes 0 =∂_μ[ (1+1/2h)g^μαF_αβg^βν] = ∂_μ[ (η^μα-h^μα)F_αβ(η^βν-h^βν) ]+1/2∂_μ [hF^μν] = ∂_μ F^μν-∂_μ [h^μαF_α^ν+F^μ_β h^βν]+1/2∂_μ [hF^μν] = ∂_μ F^μν+∂_μ[1/2hF^μν-h^μαF_α^ν-F^μ_β h^βν] ≡∂_μ F^μν-j_eff^ν. We evaluate the left hand side and right hand side to compare each other We can evaluate the left hand side ∂_μΔ F^μν = -∂_t^2Δ A^ν+∂_x^2Δ A^ν -∂_x∂^νΔ A^x For ν=t ∂_μΔ F^μ t =∂_x∂_t Δ A^x =(ω_g-ω_γ)(ω_gc_θ-ω_γ)a_-^xcos_- +(ω_g+ω_γ)(ω_gc_θ+ω_γ)a_+^xcos_+ For ν=x ∂_μΔ F^μ x = -∂_t^2Δ A^x =a_x^-(ω_g-ω_γ)^2cos_-+a_x^+(ω_g+ω_γ)^2cos_+ For ν=y ∂_μΔ F^μ y = (-∂_t^2+∂_x^2)Δ A^y = -a_y^-[-(ω_g-ω_γ)^2+(ω_g c_θ-ω_γ)^2]cos_- -a_y^+[-(ω_g+ω_γ)^2+(ω_gc_θ+ω_γ)^2]cos_+ = a_y^-[ω_g^2s_θ^2-2ω_gω_γ(1-c_θ)]cos_-+a_y^+[ω_g^2s_θ^2+2ω_gω_γ(1-c_θ)]cos_+ For ν=z ∂_μΔ F^μ z = (-∂_t^2+∂_x^2)Δ A^z = -a_z^-[-(ω_g-ω_γ)^2+(ω_g c_θ-ω_γ)^2]cos_- -a_z^+[-(ω_g+ω_γ)^2+(ω_gc_θ+ω_γ)^2]cos_+ = a_z^-[ω_g^2s_θ^2-2ω_gω_γ(1-c_θ)]cos_-+a_z^+[ω_g^2s_θ^2+2ω_gω_γ(1-c_θ)]cos_+ In TT gauge, right hand side of <ref> is simplified to be -∂_μ[1/2hF_0^μν-h^μαF_0α^ν-F^μ_0β h^βν] = h^μα∂_μ F^ν_0α+F^μ_0β∂_μ h^βν =h^ij∂_i(F_0)^ν_j-(F_0)_tj∂_t h^jν+(F_0)_ij∂_i h^jν The first term vanished because of traceless h=0 and Maxwell equation ∂_μ (F^0)^μν=0 For ν=t, the right hand side is h_+s_θ c_θ A_0ω_γ^2 [1/2cos_++1/2cos_-] For ν=x the right hand side is h^13∂_x (F_0)_xz-(F_0)_tz∂_t h^zx+(F_0)_xz∂_xh^zx = -1/2A_0ω_γ h_+s_θ c_θ[{ω_γ+ω_g(1-c_θ)}cos_++{ω_γ-ω_g(1-c_θ)}cos_-] For ν=y h^ij∂_i(F_0)^y_j-(F_0)_tj∂_th^jy+(F_0)_ij∂_ih^jy = -1/2A_0h_×ω_γω_g c_θ(1-c_θ)[cos_+-cos_-] For ν=z h^ij∂_i(F_0)^z_j-(F_0)_tj∂_t h^jz+(F_0)_ij∂_i h^jz = 1/2A_0h_+ω_γ [s_θ^2ω_γ (cos_++cos_-)+ω_gc_θ^2(1-c_θ)(-cos_++cos_-)] = 1/2A_0h_+ω_γ(1-c_θ)[(ω_γ(1+c_θ)-c_θ^2ω_g)cos_++(ω_γ(1+c_θ)+c_θ^2ω)cos_-] To compare each equation, we can obtain the EM wave amplitude of 𝒪(h). a^±_x = -1/4A_0h_+s_2θω_γ(ω_γ±ω_g(1-c_θ))/(ω_γ±ω_g)^2≃ -1/4A_0h_+s_2θ+𝒪(ω_g/ω_γ), a^±_y = ± A_0h_×ω_γω_g c_θ/2{ω_g^2(1+c_θ)± 2ω_gω_γ}≃1/4A_0h_× c_θ+𝒪(ω_g/ω_γ), a^±_z = A_0h_+ω_γ{ω_γ(1+c_θ)± c_θ^2ω_g}/2{ω_g^2(1+c_θ)± 2ω_gω_γ}≃±1/4A_0h_+ω_γ/ω_g(1+c_θ)+𝒪(ω_g/ω_γ).
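As a numerical sanity check of these amplitudes (not part of the original derivation), the sketch below compares the exact expressions with their leading approximations for an illustrative optical carrier and a much lower GW frequency; all parameter values are arbitrary choices, and the result is linear in A_0, h_+ and h_×.

```python
import numpy as np

# Check of the O(h) modulation amplitudes a^{+/-}_{x,y,z} against their
# leading approximations for omega_gamma >> omega_g.
A0, hp, hx = 1.0, 1.0, 1.0        # overall factors (results are linear in them)
theta = np.pi / 3
omega_gamma = 1.0e15              # EM carrier angular frequency [rad/s], illustrative
omega_g = 1.0e8                   # GW angular frequency [rad/s], illustrative

c, s2 = np.cos(theta), np.sin(2 * theta)

for sign in (+1, -1):
    wg = sign * omega_g
    ax = -A0 * hp * s2 * omega_gamma * (omega_gamma + wg * (1 - c)) / (4 * (omega_gamma + wg) ** 2)
    ay = sign * A0 * hx * omega_gamma * omega_g * c / (2 * (omega_g**2 * (1 + c) + sign * 2 * omega_g * omega_gamma))
    az = A0 * hp * omega_gamma * (omega_gamma * (1 + c) + sign * c**2 * omega_g) / (2 * (omega_g**2 * (1 + c) + sign * 2 * omega_g * omega_gamma))

    ax_approx = -A0 * hp * s2 / 4
    ay_approx = A0 * hx * c / 4
    az_approx = sign * A0 * hp * (omega_gamma / omega_g) * (1 + c) / 4

    print(f"sign {sign:+d}: ax {ax:+.6e} (approx {ax_approx:+.6e}), "
          f"ay {ay:+.6e} (approx {ay_approx:+.6e}), "
          f"az {az:+.3e} (approx {az_approx:+.3e})")
```

The relative differences are of order ω_g/ω_γ, and the z-component displays the ω_γ/ω_g enhancement discussed in the text.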
entry_id: http://arxiv.org/abs/2407.13627v1
published: 2024-07-18 16:00:39
title: A restricted model for the bounded derived category of gentle algebras
authors: [ "Esha Gupta" ]
primary_category: math.RT
categories: [ "math.RT", "math.CO", "16G10, 16E35" ]
§ ABSTRACT We present a restricted model for the bounded derived category of gentle algebras that encodes the indecomposable objects and positive extensions between them. The model is then used to count the number of d-term silting objects for linearly oriented A_n, recovering the result that they are counted by the Pfaff-Fuss-Catalan numbers. Systematic moment expansion for electroweak baryogenesis [ July 22, 2024 ======================================================== § INTRODUCTION In recent years, (graded) gentle algebras have come to gather a lot of attention because of their ubiquity in several areas of mathematics. These include cluster theory, where they appear as cluster-tilted algebras and Jacobian algebras associated to triangulations of marked surfaces <cit.>, as well as geometry, where they appear as endomorphism rings of formal generators of some partially wrapped Fukaya categories <cit.>. A geometric model for the module categories of all gentle algebras was provided in <cit.>, where the authors realised them as tiling algebras associated to partial triangulations of unpunctured surfaces with marked points on the boundary. In <cit.>, the authors provided a complete model for the bounded derived category of a gentle algebra, which encoded information such as the indecomposable objects, morphisms, mapping cones, Auslander-Reiten translation for perfect objects, and Auslander–Reiten triangles. A model for studying support τ-tilting modules, or equivalently 2-term silting complexes, of all locally gentle algebras was provided in <cit.>. In this work, we modify the model of <cit.> to give a model for the bounded derived category of a gentle algebra that encodes the indecomposable objects and their positive extensions as (ungraded) arcs on a surface and their intersections (Proposition <ref>). We can then restrict this model to study the truncated homotopy categories ^[-d+1,0]() of d-term complexes of projectives for a gentle algebra . Since the model encodes positive extensions, it is particularly suited for the study of d-term silting complexes generalizing the case of 2-term silting complexes studied in <cit.> (Corollary <ref>, Remark <ref>). Applying the model to the case of linearly oriented A_n quiver, we recover the result of <cit.> that the number of d-term silting complexes in ^b( kA_n) is given by the Pfaff-Fuss-Catalan number C^d_n+1 ( <ref>). § MARKED SURFACES AND ADMISSIBLE DISSECTIONS Throughout this work, k will denote an algebraically closed field. We recall here the special definition of marked surfaces as used in <cit.>, the admissible dissections of which are in bijection with gentle algebras. <cit.> A marked surface is a triple (S, M, P), where * S is an oriented closed smooth surface with non-empty boundary ∂ S; * M = M_∘∪ M_∙ is a finite set of marked points on ∂ S. The elements of M_∘ and M_∙ will be represented by symbols ∘ and ∙, respectively. They are required to alternate on each connected component of ∂ S, and each such component is required to contain at least one marked point; * P = P_∙ is a finite set of marked points in the interior of S, called punctures. The elements of P_∙ will also be represented by ∙. <cit.> A ∘-arc (or ∙-arc) is a smooth map γ from [0, 1] to S ∖ P such that its endpoints γ(0) and γ(1) are in M_∘ (or in M_∙, respectively). The curve γ is required not to be contractible to a point in M_∘ (or M_∙, respectively). We will usually consider arcs up to homotopy and inverses. 
Two arcs are said to intersect if any choice of homotopic representatives intersect. <cit.> A collection of pairwise non-intersecting and pairwise different ∘-arcs {γ_1, ⋯ , γ_r} on the surface (S, M, P) is called admissible if the arcs {γ_1, ⋯ , γ_r} do not enclose a subsurface containing no punctures of P_∙ and with no boundary segment on its boundary. A maximal admissible collection of ∘-arcs is called an admissible ∘-dissection. The notion of admissible ∙-dissection is defined similarly. To any admissible ∘-dissection, we can associate a dual ∙-dissection in the following sense. <cit.> Let (S, M, P) be a marked surface, and let Δ be an admissible ∘-dissection. There exists a unique admissible ∙-dissection Δ^* (up to homotopy) such that each arc of Δ^* intersects exactly one arc of Δ. <cit.> The dissections Δ and Δ^* are called dual dissections. § ADMISSIBLE DISSECTIONS AND GENTLE ALGEBRAS <cit.> Let Δ be an admissible ∘-dissection of a marked surface (S, M, P). The k-algebra A(Δ) is the quotient of the path algebra of the quiver Q(Δ) by the ideal I(Δ) defined as follows: * the vertices of Q(Δ) are in bijection with the ∘-arcs in Δ; * there is an arrow i → j in Q(Δ) whenever the ∘-arcs i and j meet at a marked point ∘, with i preceding j in the counter-clockwise order around ∘, and with no other arc coming to ∘ between i and j. * the ideal I(Δ) is generated by the following relations: whenever i and j meet at a marked point as above, and the other end of j meets k at a marked point as above, then the composition of the corresponding arrows i → j and j → k is a relation. Let (S,M,P) be the marked surface in Figure <ref>, and Δ the ∘-dissection dual to the depicted ∙-dissection. Then A(Δ) is the path algebra of the quiver with relations shown in Figure <ref>. <cit.> The assignment ((S, M, P), Δ)→ A(Δ) defines a bijection from the set of homeomorphism classes of marked surfaces (S, M, P) with an admissible dissection to the set of isomorphism classes of gentle algebras. Let Λ be a gentle algebra of rank n. Let ((S,M,P),Δ) be the marked surface with an admissible dissection obtained from the previous theorem. We will give another model of the bounded derived category of Λ using this surface. We start by choosing a labelling of the -points in M. We replace each -point v on a boundary component between two -points with a collection of ℤ-indexed ×-points T={T_(i,v)| i∈} arranged in descending order along the orientation of the component. Moreover, we do so in such a way that the set T has precisely two limit points given by the two -points. For a point T_(i,v), we define ϵ(T_(i,v)):=i and σ(T_(i,v)):=v. We consider special arcs joining these ×-points which we call `slaloms'. The above process transforms the marked surface in Figure <ref> to the surface in Figure <ref>. A ×-arc is a smooth map γ from the interval [0, 1] to S ∖ P such that its endpoints γ(0) and γ(1) are ×-points. The curve γ is required not to be contractible to a ×-point. Let γ be a ×-arc. We assume γ intersects the arcs of the dual -dissection Δ^* at a finite number of points, minimally and transversally. Define f_γ:γ∩Δ^*→ℤ as follows. The orientation of γ induces a total order on the points in γ∩Δ^*. Let p_0 be the initial point in this order. Then f(p_0):=ϵ(s(γ)). Next, if p and q are in γ∩Δ^* and q is the successor of p, then γ enters a polygon enclosed by ∙-arcs of Δ^* via p and leaves it via q. If the ×-points in this polygon are to the left of γ, then f(q) := f(p) + 1; otherwise, f(q) := f(p) - 1. 
A ×-arc γ is called a slalom if f(q_0)=ϵ(t(γ)), where q_0 is the last point of γ∩Δ^*. Let γ_1 be the slalom in Figure <ref>. Then the corresponding function f_γ_1 is given as f_γ_1(A)=0, f_γ_1(B)=-1, f_γ_1(C)=0. Let γ be a slalom connecting T_(i,v_1) to T_(j,v_2). We consider the homotopy class of γ where the homotopies are allowed to move the endpoints of γ along the boundary without crossing a -point. Then this homotopy class contains a unique (up to homotopy) ∘-arc connecting v_1 and v_2. We denote this ∘-arc by σ_γ. Moreover, the function f_γ naturally defines a grading on the arc σ_γ. We will denote the graded ∘-arc (σ_γ,f_γ) as Σ(γ). Let γ_1 be the slalom in Figure <ref>. Then using the procedure described above, γ_1 is transformed into the graded arc Σ(γ_1) shown in Figure <ref>. The map Σ defines a bijection between the set of homotopy classes of slaloms with the set of homotopy classes of graded ∘-arcs, as defined in <cit.>, which are, in turn, in bijection with certain indecomposable objects of K^-,b() called (finite) string objects. We denote by P^∙_γ the object associated to Σ(γ). For two graded ∘-arcs (μ_1,f_1) and (μ_2,f_2), we will use the description of (P^∙_(μ_1,f_1),P^∙_(μ_2,f_2)) in terms of the `graded intersection points' of (μ_1,f_1) and (μ_2,f_2), as given in <cit.>. Let γ_1 and γ_2 be two slaloms and T an intersection point of γ_1 and γ_2 lying in the interior of S. Then T is called contractible if (one of) the region(s) bounded by the following three segments is contractible (Figure <ref>): * the part of γ_1 between T and the boundary. * the part of γ_2 between T and the boundary. * the part of the boundary between the endpoints of γ_1 and γ_2. Let γ_1 and γ_2 be two slaloms. Then we have the following: * There is a bijection between the set of interior intersection points of Σ(γ_1) and Σ(γ_2), and the set of non-contractible intersection points of γ_1 and γ_2. * There is a bijection between the set of boundary intersection points of Σ(γ_1) and Σ(γ_2) of positive degree, and the set of contractible intersection points of γ_1 and γ_2. * There is a bijection between the set of boundary intersection points of Σ(γ_1) and Σ(γ_2) of degree 0, and the boundary intersection points of γ_1 and γ_2. Let (μ_1,f_1):=Σ(γ_1) and (μ_2, f_2):=Σ(γ_2). Assume that we have chosen homotopy representatives such that μ_1 and μ_2 intersect minimally, and so do γ_1 and γ_2. We prove the three statements one by one. * Let p be an interior intersection point of Σ(γ_1) and Σ(γ_2). It is possible to choose homotopy representatives of γ_1 and γ_2 such that p is also an interior intersection point of γ_1 and γ_2. Moreover, p is non-contractible because, otherwise, μ_1 and μ_2 do not intersect minimally as we can choose homotopy representatives that do not intersect at p (Figure <ref>). Conversely, if p is a non-contractible intersection point of γ_1 and γ_2, then it is still an interior intersection point of Σ(γ_1) and Σ(γ_2) when they intersect minimally. * Suppose p is a boundary intersection point of Σ(γ_1) and Σ(γ_2) of degree >0. Then, without loss of generality, we can assume that we have the configuration around p as shown in Figure <ref> with f_2(p_2)>f_1(p_1). This implies that ϵ(q')>ϵ(q), where q (resp. q') is the endpoint of γ_1 (resp. γ_2) such that σ(q)=p (resp. σ(q')=p). Thus, the configuration of γ_1 and γ_2 is as shown in Figure <ref>. 
Since μ_1 and μ_2 are homotopic to γ_1 and γ_2 respectively, and they intersect on the boundary, the point T has to be a contractible intersection point of γ_1 and γ_2. Conversely, let T be a contractible intersection point of γ_1 and γ_2. Then, without loss of generality, we can assume that γ_1 and γ_2 are as shown in Figure <ref>. In particular, the part of the boundary between q and q' does not contain any ∙-points, which implies that σ(q)=σ(q'), and hence Σ(γ_1) and Σ(γ_2) intersect on the boundary. Moreover, this intersection point has a positive degree because f_2(p_2)=ϵ(q')>ϵ(q)=f_1(p_1). * Suppose p is a boundary intersection point of Σ(γ_1) and Σ(γ_2) of degree 0. Then we can assume that the configuration around p is as shown in Figure <ref> such that f_1(p_1)=f_2(p_2). This implies that ϵ(q)=ϵ(q')=f_1(p_1), where q (resp. q') is the endpoint of γ_1 (resp. γ_2) such that σ(q)=p (resp. σ(q')=p). Thus, q=q' is a boundary intersection point of γ_1 and γ_2. Conversely, if q is a boundary intersection point of γ_1 and γ_2, then, by definition, Σ(γ_1) and Σ(γ_2) have an intersection point on the boundary of degree 0. Two slaloms γ_1,γ_2 intersect in the interior of S if and only if either ^i(P^∙_γ_1,P^∙_γ_2)≠ 0 for some i>0 or ^i(P^∙_γ_2,P^∙_γ_1)≠ 0 for some i>0. Suppose γ_1 and γ_2 intersect in the interior of S at a point T. If T is a contractible intersection point, then using Theorem <ref>, it corresponds to a boundary intersection point of Σ(γ_1) and Σ(γ_2) of positive degree. This implies that either ^i(P^∙_γ_1,P^∙_γ_2)≠ 0 for some i>0 or ^i(P^∙_γ_2,P^∙_γ_1)≠ 0 for some i>0. On the other hand, if T is a non-contractible intersection point, then it corresponds to an interior intersection point of Σ(γ_1) and Σ(γ_2). Using <cit.>, we get that either ^i(P^∙_γ_1,P^∙_γ_2)≠ 0 for some i>0 or ^i(P^∙_γ_2,P^∙_γ_1)≠ 0 for some i>0. Conversely, suppose γ_1 and γ_2 do not intersect in the interior of S. Then either they do not intersect at all or they intersect on the boundary. In both cases, Σ(γ_1) and Σ(γ_2) can only have zero or negative degree boundary intersection points using Theorem <ref>. This implies that ^i(P^∙_γ_1,P^∙_γ_2)= 0 for all i>0 and ^i(P^∙_γ_2,P^∙_γ_1)= 0 for all i>0. The above proposition helps us to give a characterization of presilting and silting objects in K^-,b(). Say that a collection γ_1,γ_2,…, γ_m of slaloms is mutually non-intersecting if γ_i does not intersect γ_j in the interior of for all 1≤ i,j≤ n. There is a bijection between basic presilting objects in K^-,b() and mutually non-intersecting collections of slaloms in S_. Suppose T=⊕_i=1^mT_i is a basic presilting object in K^-,b() with T_i indecomposable. Then, using <cit.>, we get that T_i is of the form P^∙_γ_i for some slalom γ_i. Since T is presilting, we get that ^j(T_i_1,T_i_2)= 0 for all j>0 and for all 1≤ i_1, i_2≤ m. Using the previous proposition, this gives that {γ_1,⋯,γ_m} is a mutually non-intersecting collection of slaloms in S_. Conversely, given a mutually non-intersecting collection of slaloms {γ_1,⋯,γ_m} in S_, the previous proposition immediately implies that T=⊕_i=1^mP^∙_γ_i is a basic presilting object in K^-,b(). Using <cit.>, we know that for a gentle algebra , a basic presilting object X is silting if and only if it has n=|| many indecomposable summands. This gives us the following corollary. There is a bijection between basic silting objects in K^-,b() and mutually non-intersecting collections of n slaloms in S_. 
We can also restrict the above model to obtain a characterization of d-term presilting objects of . To do this, we keep only the ×-points labelled from 0 to -d+1 between each pair of neighbouring -points. A slalom in the restricted model is a slalom γ in the original model for which f_γ(γ∩Δ^*)⊆ [-d+1,0], as this ensures that P^∙_γ is concentrated in [-d+1,0]. We will denote this restricted model by S_^d. It is easy to see that Theorem <ref> and Proposition <ref> still hold in this restricted model, which gives us the following corollary. There is a bijection between basic d-term silting objects of and mutually non-intersecting collections of n slaloms in S^d_. Note that in the full model S_, for every finite arc starting at a ×-point, there exists a unique choice for the other endpoint which makes it a slalom. However, this is not the case in the restricted model S^d_, because of the additional condition that γ(γ∩Δ^*)⊆ [-d+1,0]. In <cit.>, the authors introduced a model for basic support τ-tilting modules by considering blossom points on the boundary of a marked surface and slaloms connecting these points. Viewing these blossom points as ×-points, one precisely recovers the model S^2_ described above, which is consistent with the fact that basic support τ-tilting modules are in bijection with basic 2-term silting complexes <cit.>. § COUNTING D-TERM SILTING OBJECTS IN LINEARLY ORIENTED A_N Let kA_n be the path algebra of the linearly oriented quiver of type A_n, that is, kA_n=k(1→2→⋯→ n). In this section, we will give a recursive formula for the number of d-term silting objects in kA_n using the above corollary. This will recover the result of <cit.> that these are counted by the Pfaff-Fuss-Catalan numbers. The marked surface (S,M,P) obtained from Theorem <ref> for =kA_n is shown in Figure <ref>. By replacing each ∘-point in this figure with d ×-points, we obtain the surface S^d_kA_n defined above (Fig <ref>). Let B^d_n denote the number of basic silting objects in kA_n. Using Corollary <ref>, we get that B^d_n is also the number of collections of n mutually non-intersecting slaloms in Figure <ref>. To calculate B^d_n, we first count the number of such collections containing some fixed slalom γ. This is done by cutting the disc along this slalom, and relabelling (one of) the parts thus obtained to get S^d_kA_m for some m<n. This allows us to build a recursive formula for B^d_n. We explain the detailed process below. Let Γ be a collection of n mutually non-intersecting slaloms. Such a collection will be maximal with respect to the property of mutual non-intersection as the number of indecomposable summands of any presilting object is less than or equal to n <cit.>. Our first claim is that in such a collection of n slaloms, at least one of (0,0), (0,-1), (0,-2), ⋯ (0,-d+1) has to be an endpoint of some slalom. This is because otherwise, the slalom in Figure <ref> will contradict the maximality of the collection. Let γ be a slalom in S^d_n as shown in Figure <ref>. Then a ×-arc lying in the disc D' is a slalom in S^d_n if and only if it is a slalom in S^d_i_2-i_1-1 obtained by relabelling the points of D' as shown in Figure <ref>. Since there are no slaloms in S^d_n starting at (i_1,0) and lying in D', we can remove it. Moreover, since the only slaloms in D' starting at (i_1,m) for some j≤ m≤ -1 end at (l, m+1) for some i_1+1≤ l≤ i_2-1, removing the -arc i_1 and relabelling as in Fig <ref> gives a bijection between the set of slaloms starting at these points. 
Similarly, since the only slaloms in D' ending at (i_2,m) for some -d+1≤ m ≤ j+1 start at (l,m-1) for some i_1+1≤ l≤ i_2-1, removing the -arc i_2, the point (i_2,-d+1), and relabelling as in Fig <ref> gives a bijection between the set of slaloms ending at these points. Combining this with the map that sends the slalom connecting (s_1,t) to (s_2,t+1) for some i_1<s_1<s_2<i_2 in D' to the slalom connecting (s_1-i_1,t) to (s_2-i_1,t+1) in the relabelled figure, we get the required bijection. Let γ be a slalom in S^d_n as shown in Figure <ref>. Then a ×-arc lying in the disc D' is a slalom in S^d_n if and only if it is a slalom in S^d_n-i obtained by relabelling the points of D' as shown in Figure <ref>. Since there are no slaloms in S^d_n starting at (i,0) and lying in D' (other than possibly γ when j=0), we can remove it. Moreover, since the only slaloms in D' starting at (i,m) for some j≤ m ≤ -1 end at (l, m+1) for some i+1≤ l≤ n, removing the -arc i and relabelling as in Fig <ref> gives a bijection between the set of slaloms starting at these points. Combining this with the map that sends the slalom connecting (s_1,t) to (s_2,t+1) for some i<s_1<s_2≤ n in D' to the slalom connecting (s_1-i,t) to (s_2-i,t+1) in the relabelled figure, and the slalom connecting (s,t) to (0,t), for some i<s_1≤ n and -d+1≤ t≤ j, in D' to the slalom connecting (s-i,t) to (0,t) in the relabelled figure, we get the required bijection. The number of mutually non-intersecting collections of i_2-i_1-1 slaloms in S^d_n lying in D' is B^d_i_2-i_1-1. It is easy to see that a collection of i_2-i_1-1 slaloms in S^d_n lying in D' is mutually non-intersecting if and only if it is mutually non-intersecting in S^d_i_2-i_1-1 under the above bijection. Using Corollary <ref>, the number of such collections is equal to B^d_i_2-i_1-1. We can now divide the problem into two cases. * (0,0) is the endpoint of a slalom: Let γ_d be the rightmost such slalom. Since this is the rightmost slalom connected to (0,0), we need to block the slaloms connecting (0,0) to some (j,0) for j<i_d-1. This gives that at least one of (1,-1), ⋯ (1,-d+1) is the endpoint of some slalom in Γ. Suppose (1,-d+1) is the endpoint of some slalom in Γ and let γ_1 be the leftmost such slalom. To block the slalom connecting (0,0) to (i_1,0), we need that at least one of (i_1,0), ⋯ (i_1,-d+2) is the endpoint of some slalom in Γ. Repeating this argument with the assumption that at each step the point with the lowest index is the endpoint of some slalom in Γ, we get a collection of d slaloms in Γ arranged as shown in Figure <ref>. Using Lemmas <ref> and <ref>, each of the parts of the discs cut by these slaloms can be relabelled to obtain S^d_m for some m<n. The exact value of m for each of the parts is shown in Figure <ref>. Thus the number of collections of non-mutually intersecting n slaloms containing the above collection of d slaloms is given by the sum of the product of the number of collections of non-mutually intersecting a_1,⋯, a_d slaloms in S^d_kA_i_1-2, S^d_kA_i_2-i_1-1, ⋯, S^d_kA_i_d-1-i_d-2-1, S^d_kA_n-i_d-1 respectively. Now, since we know that a_1≤ i_1-2, a_l≤ i_l-i_l-1-1 for 2≤ l≤ d-1, a_d≤ n-i_d-1, and Σ_m=1^da_d=n-d, we get that a_1=i_1-2, a_l= i_l-i_l-1-1 for 2≤ l≤ d-1, and a_d= n-i_d-1. This means that the above number is equal to the product B^d_i_1-2B^d_i_2-i_1-1⋯ B^d_i_d-1-i_d-2-1B^d_n-i_d-1. 
Taking into account all possible values of i_1, ⋯ i_d-1, we get the sum Σ_i_d-1=2^nΣ_i_d-2=2^i_d-1-1⋯Σ_i_1=2^i_2-1B^d_i_1-2B^d_i_2-i_1-1⋯ B^d_i_d-1-i_d-2-1B^d_n-i_d-1. Now in the penultimate step of the above argument, when one of (i_d-2,0) or (i_d-2,-1) has to be connected to something in Γ, suppose (i_d-2,-1) is not connected to anything. Then (i_d-2,0) has to be connected to something, which is only possible if i_d-2=i_d-1 (Figure <ref>). Again, in this case, the parts of the disc cut by the d-1 slaloms can be relabelled by Lemmas <ref> and <ref> to get S^d_m for some m<n. And we will get that the number of collections Γ containing this set of d-1 slaloms is given by the product of d-1 terms B^d_i_1-2, B^d_i_2-i_1-1,⋯ ,B^d_i-i_d-3-1,B^d_n-i. Repeating this argument at each step, we get that the total number of collections Γ in which (0,0) is connected to something is given by Σ_k=0^d-1d-1kΣ_i_k=2^nΣ_i_k-1=2^i_k-1⋯Σ_i_1=2^i_2-1B^d_i_1-2B^d_i_2-i_1-1⋯ B^d_i_k-i_k-1-1B^d_n-i_k, where the sum over 0 elements is taken to be B^d_n-1, corresponding to the boundary case in Figure <ref>. * (0,0) is not the endpoint of a slalom: First suppose that (0,-1) is connected to something. It can be easily seen that the above cutting and relabelling procedure works in this case as well, except that the maximum number of parts in which the disc is cut is now d-1. This gives that the number of collections Γ in this case is Σ_k=0^d-2d-2C_kΣ_i_k=2^nΣ_i_k-1=2^i_k-1⋯Σ_i_1=2^i_2-1B^d_i_1-2B^d_i_2-i_1-1⋯ B^d_i_k-i_k-1-1B^d_n-i_k. In the general case, let m be the smallest integer such that (0,-m) is connected to something. Then the number of collections is given by Σ_k=0^d-1-md-1-mkΣ_i_k=2^nΣ_i_k-1=2^i_k-1⋯Σ_i_1=2^i_2-1B^d_i_1-2B^d_i_2-i_1-1⋯ B^d_i_k-i_k-1-1B^d_n-i_k. Thus, we get that the total number of d-term silting objects in kA_n is given by the recursive formula B^d_n=Σ_k=0^d-1Σ_m=k^d-1mkΣ_i_k=2^nΣ_i_k-1=2^i_k-1⋯Σ_i_1=2^i_2-1B^d_i_1-2B^d_i_2-i_1-1⋯ B^d_i_k-i_k-1-1B^d_n-i_k, where B_0^d=1, and B_1^d=d. Since the above recursive formula also counts the number of complete ordered d-ary trees with n+1 internal nodes, which are known to be counted by the Pfaff-Fuss-Catalan numbers <cit.>, we recover the result that the number of d-term silting objects in kA_n is given by the Pfaff-Fuss-Catalan number C^d_n+1.
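A small numerical cross-check of this count (not from the paper): the sketch below counts complete ordered d-ary trees with n+1 internal nodes directly, which the argument above identifies with B^d_n, and compares with the standard Fuss–Catalan closed form binom(d(n+1), n+1)/((d-1)(n+1)+1); reading the paper's C^d_{n+1} as this expression is our assumption about the notation.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def trees(d, m):
    """Complete ordered d-ary trees with m internal nodes (leaves are free)."""
    if m == 0:
        return 1                      # a single leaf
    return forests(d, d, m - 1)       # a root plus d ordered subtrees

@lru_cache(maxsize=None)
def forests(d, k, m):
    """Ordered forests of k complete d-ary trees with m internal nodes in total."""
    if k == 0:
        return 1 if m == 0 else 0
    return sum(trees(d, i) * forests(d, k - 1, m - i) for i in range(m + 1))

def fuss_catalan(d, m):
    # standard count of complete d-ary trees with m internal nodes
    return comb(d * m, m) // ((d - 1) * m + 1)

for d in (2, 3, 4):
    row = []
    for n in range(6):
        B = trees(d, n + 1)           # B^d_n in the notation above
        assert B == fuss_catalan(d, n + 1)
        row.append(B)
    print(f"d = {d}: B^d_n for n = 0..5 -> {row}")
```

For d = 2 this returns the Catalan numbers 1, 2, 5, 14, ..., consistent with the count of support τ-tilting modules over linearly oriented A_n, and the base cases B^d_0 = 1, B^d_1 = d of the recursion are recovered.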
entry_id: http://arxiv.org/abs/2407.12732v1
published: 2024-07-17 16:51:10
title: New Integrable Chiral Cosmological Models with Two Scalar Fields
authors: [ "Vsevolod R. Ivanov", "Sergey Yu. Vernov" ]
primary_category: gr-qc
categories: [ "gr-qc", "astro-ph.CO", "math-ph", "math.MP" ]
vsvd.ivanov@gmail.com Physics Department, Lomonosov Moscow State University Leninskie Gory 1, 119991, Moscow, Russia svernov@theory.sinp.msu.ru Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Leninskie Gory 1, 119991, Moscow, Russia § ABSTRACT We construct integrable chiral cosmological models with two scalar fields and potentials represented in terms of hyperbolic functions. Using the conformal transformation of the metric and the corresponding models with induced gravity terms, we obtain the general solutions in the spatially flat, open and closed Friedmann universes and the corresponding integrals of motion. The obtained general solutions can be written in terms of the Jacobi elliptic functions of the conformal time. 04.20.Jb, 04.50.Kd, 98.80.-k New Integrable Chiral Cosmological Models with Two Scalar Fields Sergey Yu. Vernov July 22, 2024 ================================================================ § INTRODUCTION Most cosmological models, which describe the global evolution of the Universe, include scalar fields. To describe only one epoch of the Universe evolution, single field models can be used. Using two-field models, one can describe multiple epochs or multiple physical phenomenons. Models with a single scalar field nonminimally coupled to gravity can be transformed to models with a minimally coupled scalar field with a canonical kinetic term by the metric and scalar field transformations. On the other hand, it is not possible to transform a model with two scalar fields nonminimally coupled to gravity to a model with two minimally coupled scalar fields and a standard kinetic part of the Lagrangian in the most general case <cit.>. After the metric transformation, one obtains a general relativity model with nonstandard kinetic terms of scalar fields, a so-called chiral cosmological model (CCM) <cit.>. In the Friedmann–Lemaître–Robert­son–Walker (FLRW) metric, the Einstein's equations reduce to a system of ordinary differential equations. The problem of searching for integrable cosmological models with scalar fields is being actively investigated, essentially in the case of single-field cosmological models, but such models have been found only for a few specific forms of the scalar field potential <cit.>. A two-field CCM with a constant potential is trivially integrated not only in the spatially-flat FLRW metric, but also in the Bianchi I metric <cit.>. At the same time, finding integrable cosmological models with multiple scalar fields is, in general, an extremely complicated task. We know only a few examples of integrable cosmological models with two or more fields <cit.>. The goal of this paper is to construct new integrable CCM with two fields and potentials written in terms of hyperbolic functions. There are a few methods to integrate evolution equations in the FLRW metric. The integrability can be proven by using a suitable parametric time <cit.>, but it is not immediately clear as to how to find this parametric time. The superpotential method, which is also known as the Hamilton-Jacobi equation approach, is useful to integrate not only single-field models with the exponential potential <cit.>, but also two-field models with the exponential potential <cit.>. Note that this method is actively used to study inflation and to construct cosmological models with particular exact solutions <cit.>. 
Another method for constructing one-field integrable models is based on the conformal metric transformation and the construction of the corresponding model in the Jordan frame <cit.>. In Ref. <cit.>, the authors construct two-field models with one ordinary scalar field, one phantom field and induced gravity terms to prove the integrability of a single-field cosmological model with a potential written in terms of hyperbolic functions. Note that integrability of this model has been proven in the closed and open Friedmann universes as well <cit.>. In Refs. <cit.>, the proposed cosmological model has been intensively studied in the context of the early Universe evolution, with special emphasis on inflation and the possibility of crossing the Big Bang-Big Crunch singularity. In Ref. <cit.>, the authors have proven integrability of the same model in a simpler way by using a single-field integrable model with non-minimal coupling constructed in Ref. <cit.>. Note that this model belongs to a wide class of integrable models with minimally coupled scalar fields and potentials in terms of hyperbolic functions. Their integrability have been proven by using a suitable parametric time in Ref. <cit.> (see also Ref. <cit.>). To construct a new integrable CCM with two fields, we start from models with two nonminimally coupled scalar fields. The single-field integrable model proposed in Ref. <cit.> has an interesting feature: the Ricci scalar is an integral of motion. Recently, N-field cosmological models with the same property have been found and their integrability in the spatially flat FLRW metric have been proven <cit.>. In this paper, we find general solutions of evolution equations in the Friedmann universe with arbitrary spatial curvature for such two-field integrable models with non-minimal coupling. After finding such solutions, we obtain two-field chiral cosmological models in the Einstein frame by the conformal transformation of the metric. For a few two-field potentials, we get general solutions. In the conformal time, we get analytic expressions of the general solutions in terms of the Jacobi elliptic functions. The structure of the paper is as follows. In Section II, we consider two-field models with induced gravity terms. In Section III, we construct the corresponding CCMs by the conformal transformation of the metric and the introduction of new fields. In Section IV, we find solutions in the FLRW metric for the model with the nonminimal coupling. Examples of integrated CCMs and their general solutions in the conformal time are given in Section V. The results are summarized in Section VI. § TWO-FIELD MODEL WITH NONMINIMAL COUPLING Let us consider models with two nonminimally coupled scalar fields, described by an action S=∫ d^4 x √(-g)[U(σ^1, σ^2) R - 1/2g^μν(C_1∂_μσ^1∂_νσ^1+C_2∂_μσ^2∂_νσ^2) - V(σ^1, σ^2)], where the functions U and V are differentiable, R is the Ricci scalar, and C_A are constants. Positive values of C_A correspond to the case of standard scalar fields. By varying the action (<ref>), we obtain evolution equations U(R_μν - 1/2 g_μν R) = ∇_μ∇_ν U - g_μν U + 1/2 T_μν, where T_μν =C_1∂_μσ^1∂_νσ^1+C_2∂_μσ^2∂_νσ^2-g_μν(1/2g^αβ(C_1∂_ασ^1∂_βσ^1+C_2∂_ασ^2∂_βσ^2) + V) . and the field equations are C_Aσ^A - V_,σ^A + R U_,σ^A = 0, A=1,2; F_,σ^A≡dF/dσ^A for any function F(σ^1,σ^2), and there is no summation in A. From Eq. (<ref>), it follows 3 U -UR=1/2 g^μνT_μν= - 1/2 g^αβ∑_A=1^2C_A∂_ασ^A∂_βσ^A - 2V. Our goal is to find such functions U and V that Eq. (<ref>) has an integral of motion. 
Using the field equations (<ref>), we get U=∑_A=1^21/C_A(U_,σ^AV_,σ^A-R(U_,σ^A)^2)+∑_A,B=1^2U_,σ^Aσ^Bg^αβ∂_ασ^A∂_βσ^B. Substituting Eq. (<ref>) into Eq. (<ref>), we obtain 2V+∑_A=1^2 3/C_AU_,σ^AV_,σ^A=R(U+∑_A=1^23/C_A[U_,σ^A]^2)+g^αβ/2∑_A,B=1^2[6 U_,σ^Aσ^B+C_Aδ_AB]∂_ασ^A ∂_βσ^B. To solve Eq. (<ref>), we choose such a function U(σ^1,σ^2) that 6 U_,σ^Aσ^B+C_Aδ_AB=0, for all values of A and B, and U+∑_A=1^23/C_A[U_,σ^A]^2=U_0, where U_0 is a constant. The solution of Eqs. (<ref>) and (<ref>) is U(σ^1, σ^2)=U_0-C_1/12(σ^1-σ^1_0)^2-C_2/12(σ^2-σ^2_0)^2, where σ^A_0 are constants. With such a function U, Eq. (<ref>) is 2V+∑_A=1^2 1/2σ^AV_,σ^A=RU_0. If R is a constant, R=R_0, then Eq. (<ref>) is a linear first-order partial differential equation with V(σ^1,σ^2) as an unknown function. Equation (<ref>) has the following general solution: V = R_0 U_0/2 + (σ^1 - σ_0^1)^4 f(σ^2 - σ_0^2/σ^1 - σ_0^1), where f is an arbitrary differentiable function. Note that for models with functions U and V given by Eqs. (<ref>) and (<ref>), the Ricci scalar R is the integral of motion in an arbitrary metric. This result can be generalized on models with an arbitrary number of scalar fields <cit.>. We consider the case of U_0>0, when the function U>0 for some values of σ^A. The case of U_0<0 corresponds to antigravity. If U_0=0, then Eq. (<ref>) restricts the potential without the assumption that R is a constant. It means that nontrivial solutions of the Friedmann equations can exist only for potential (<ref>) with U_0=0. In the case of single-field models, an analogous restriction on the potential has been obtained in Ref. <cit.>. § THE CORRESPONDING MODELS IN THE EINSTEIN FRAME Now we construct models in the Einstein frame that correspond to the previously chosen functions U and V. To get a model with two standard scalar fields, we take C_A=1. Also, we can take σ^A_0=0 without loss of generality. By doing a conformal transformation of the initial metric g_μν of the form g^E_μν = 2 U/M^2_Pl g_μν, one arrives at the following action in the Einstein frame: S_E = ∫ d^4 x √(-g^E)(M^2_Pl/2 R_E - g_E^μν/2∑_A,B=1^2 K_AB∂_μσ^A ∂_νσ^B - V_E), where M_Pl is the reduced Planck mass, K_AB = M^2_Pl/2 U(δ_AB + 3/U U_,σ^A U_,σ^B), V_E = M^4_Pl/4 U^2 V. For the chosen function U(σ^1,σ^2) the matrix K_AB is not diagonal: K_11 = M^2_Pl/2 U^2(U + 1/12(σ^1)^2), K_12 = K_21 = M^2_Pl/24 U^2σ^1 σ^2, K_22 = M^2_Pl/2 U^2(U + 1/12(σ^2)^2). It is possible to diagonalize the matrix K_AB by introducing new fields χ^1 and χ^2. In general, one has K_CD∂_μσ^C ∂_νσ^D = K_CD∂σ^C/∂χ^A∂σ^D/∂χ^B∂_μχ^A ∂_νχ^B = G_AB∂_μχ^A ∂_νχ^B, where G_AB is the new “mass” matrix. In our particular case, we take σ^1 = √(12 U_0)tanhχ^2/√(6) M_Pl, σ^2 = √(12 U_0)/coshχ^2/√(6) M_Pltanhχ^1/√(6) M_Pl, and get the following matrix G_AB: G_11 = 1, G_12 = G_21 = 0, G_22 = cosh^2(χ^1/√(6) M_Pl). In terms of the new fields, the function U takes the form U(χ^1, χ^2) = U_0/cosh^2(χ^1/√(6) M_Pl)cosh^2 (χ^2/√(6) M_Pl). Action (<ref>) can be rewritten as S_E = ∫ d^4 x √(-g^(E))(M^2_Pl/2 R^(E) - g^(E)μν/2[ ∂_μχ^1 ∂_νχ^1 + cosh^2(χ^1/√(6) M_Pl) ∂_μχ^2 ∂_νχ^2] - V_E), where V_E=M^4_Pl(coshχ^1/√(6) M_Plcoshχ^2/√(6) M_Pl)^2/4U_0(R_0/2 + 144U_0[tanhχ^2/√(6) M_Pl]^4 f(tanhχ^1/√(6) M_Pl/sinhχ^2/√(6) M_Pl)) . 
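Before varying the action, the diagonalization claimed above can be verified directly. The following is a minimal SymPy sketch (not part of the paper; the numeric spot-check values are arbitrary) that builds K_AB for the quadratic U with C_A = 1 and σ^A_0 = 0, applies the field redefinition σ^A(χ^1, χ^2), and checks that the pulled-back kinetic matrix equals diag(1, cosh²(χ^1/(√6 M_Pl))).

```python
import sympy as sp

chi1, chi2, M, U0 = sp.symbols('chi1 chi2 M U0', positive=True)
s1, s2 = sp.symbols('sigma1 sigma2', real=True)

# Jordan-frame coupling U (C_A = 1, sigma_0^A = 0) and Einstein-frame kinetic
# matrix K_AB = (M^2 / 2U) (delta_AB + (3/U) U_{,sigma^A} U_{,sigma^B}).
U = U0 - sp.Rational(1, 12) * (s1**2 + s2**2)
dU = [sp.diff(U, s) for s in (s1, s2)]
K = sp.Matrix(2, 2, lambda A, B: M**2 / (2 * U)
              * (sp.KroneckerDelta(A, B) + 3 / U * dU[A] * dU[B]))

# Field redefinition sigma^A(chi^1, chi^2) proposed in the text
x, y = chi1 / (sp.sqrt(6) * M), chi2 / (sp.sqrt(6) * M)
sigma = [sp.sqrt(12 * U0) * sp.tanh(y),
         sp.sqrt(12 * U0) * sp.tanh(x) / sp.cosh(y)]
chis = [chi1, chi2]

# Pull-back: G_AB = (d sigma^C / d chi^A) K_CD (d sigma^D / d chi^B)
J = sp.Matrix(2, 2, lambda Cc, A: sp.diff(sigma[Cc], chis[A]))
G = J.T * K.subs({s1: sigma[0], s2: sigma[1]}) * J

target = sp.diag(1, sp.cosh(x)**2)
# Expected to reduce to the zero matrix (the numeric check below is the robust test).
print(sp.simplify((G - target).rewrite(sp.exp)))

vals = {chi1: 0.7, chi2: -1.3, M: 1.0, U0: 2.0}   # arbitrary spot-check values
print((G - target).subs(vals).evalf())             # expect entries ~ 0
```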
By varying the action (<ref>), we get the following equations: R^(E)_μν - 1/2 g^(E)_μν R^(E) = 1/M^2_Pl T^E_μν, χ^1 - √(6)/12M_Plsinh(χ^1/3√(6)M_Pl) g^(E)μν∂_μχ^2 ∂_νχ^2 - ∂ V_E/∂χ^1 = 0; cosh^2(χ^1/√(6) M_Pl) χ^2 + √(6)/6M_Plsinh(χ^1/3√(6)M_Pl) g^(E)μν∂_μχ^1 ∂_νχ^2 - ∂ V_E/∂χ^2 = 0, where T^E_μν = ∂_μχ^1 ∂_νχ^1 + cosh^2(χ^1/√(6) M_Pl) ∂_μχ^2 ∂_νχ^2 - g^(E)_μν(1/2 g^(E)αβ∂_αχ^1 ∂_βχ^2 + 1/2cosh^2(χ^1/√(6) M_Pl)g^(E)αβ∂_αχ^2 ∂_βχ^2 + V_E(χ^1, χ^2)). In the Einstein frame, the d'Alembert operator acting on a scalar is =1/√(g^(E))∂_μ(√(g^(E))g^(E)μν∂_ν). § INTEGRABLE COSMOLOGICAL MODELS WITH NONMINIMAL COUPLING §.§ The FLRW metric with conformal time To get cosmological solutions in the analytic form, we consider the FLRW metric with conformal time ds^2=a^2(τ)(-dτ^2+dr^2/1-Kr^2+r^2dθ^2+r^2sin^2(θ)dφ^2), where a(τ) is the scale function. A positive curvature index K is associated with a closed Universe, K=0 with a flat Universe and a negative K with an open one. In this metric, Eqs. (<ref>) and (<ref>) with C_A=1 have the following form: 6U[h^2+K]+6hU̇=1/2[(σ̇^1)^2+(σ̇^2)^2]+a^2V, U[2ḣ+h^2+K]+Ü+HU̇=1/2 a^2V-1/4[(σ̇^1)^2+(σ̇^2)^2], σ̈^A+2hσ̇^A-6U_,σ^A'a^2R+a^2V_,σ^A = 0, where h=ȧ/a, and “dots” denote derivatives with respect to the parametric time τ. The Ricci scalar is R=6/a^2(ä/a+K). Integrating equation R=R_0, namely, ä+Ka=R_0/6a^3, we obtain ȧ^2+Ka^2=R_0/12a^4+C, where C is an integration constant. Equation (<ref>) has the general solution in terms of the Jacobi elliptic function: a(τ) = √(6C)/√(3K+√(9K^2-3R_0C )) × sn(1/6√(18K+6√(9K^2-3R_0C ))(τ-τ_0)|√(R_0C(6K^2-R_0C+2K√(9K^2-3R_0C )))/R_0C-6K^2-2K√(9K^2-3R_0C ).), if R_0>0. For R_0=0, we get a(τ)={ a_1sin(√(K)τ)+a_2cos(√(K)τ), K>0; a_1τ+a_0, K=0; a_1sinh(√(K)τ)+a_2cosh(√(K)τ), K<0; . where a_0, a_1 and a_2 are arbitrary constants. To find solutions for scalar fields, we substitute σ^A=y^A/a into Eq. (<ref>) and obtain a system ÿ^A+Ky^A+ a^3 V_,ϕ^A(y^1/a,y^2/a) = 0. For V given by (<ref>), we get the following system of two equations: ÿ^A+Ky^A + ∂Ṽ/∂ y^A = 0, where A=1,2 and Ṽ(y^1,y^2)=(y^1)^4f(y^2/y^1). System (<ref>) has the first integral 1/2(ẏ^1)^2+1/2(ẏ^2)^2+K/2[(y^1)^2+(y^2)^2] + Ṽ= E, where E is an integration constant. To analyze the cosmological evolution we consider equations in the cosmic time, which is defined by the equation dt=a(τ)dτ. Using h=aH, U̇=adU/dt, ẏ^A=a^2dσ^A/dt+a^2Hσ^A, where H is the Hubble parameter, we get Eq. (<ref>) in the following form: 6U[H^2+K/a^2]+6HdU/dt=1/2[dσ^1/dt]^2+1/2[dσ^2/dt]^2+V. Using Eq. (<ref>), we get from Eq. (<ref>) the following equation: 6U_0H^2+6U_0K/a^2=1/2 U_0R_0+E/a^4. Equation (<ref>) has the following solutions for R_0>0: t-t_0=±√(3)/√(R_0)ln(R_0 U_0a^2-6U_0K/√(R_0)+√(R_0 U_0^2a^4-12KU_0^2a^2+2EU_0 )) . At R_0=0, we obtain the following nonconstant solutions: a(t)={ ±√(K(E-6K^2U_0^3(t-t_0)^2))/√(6)U_0K, K≠ 0, ±√(3)/3U_0√(±√(6EU_0 )U_0(t-t_0) ), K=0 . . Equation (<ref>) coincides with the equation obtained in Ref. <cit.> for K=0 and in Ref. <cit.> for an arbitrary K in the case of the corresponding single-field model. Thus, the analysis of the Universe evolution in depending on the values of the constants K, R_0 and E, including conditions of the existence of bounce solutions given in Ref. <cit.>, is also valid for the considered two-field models. For this reason, we do not repeat it in this paper. Note only that in the case K=0, the Hubble parameter has the following simple form: H(t)={ √(3R_0)(e^2√(3R_0)t/3+B)/6(e^2 √(3R_0)t/3-B) , R_0>0 1/2(t-t_0), R_0=0 . . where B is a constant. 
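As a quick numerical check that ȧ² + Ka² − (R_0/12)a⁴ is indeed conserved along solutions of ä + Ka = (R_0/6)a³, here is a minimal sketch with arbitrary illustrative values of R_0, K and the initial data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Background equation in conformal time for constant Ricci scalar R0:
#   a'' + K a = (R0/6) a^3,  with first integral  a'^2 + K a^2 - (R0/12) a^4 = C.
R0, K = 1.0, 1.0                         # illustrative values (closed universe)

def rhs(tau, state):
    a, ap = state
    return [ap, -K * a + R0 / 6.0 * a**3]

a0, ap0 = 0.3, 0.5                       # illustrative initial data
C = ap0**2 + K * a0**2 - R0 / 12.0 * a0**4

sol = solve_ivp(rhs, (0.0, 20.0), [a0, ap0], rtol=1e-10, atol=1e-12)
a, ap = sol.y
drift = np.max(np.abs(ap**2 + K * a**2 - R0 / 12.0 * a**4 - C))
print(f"max drift of the first integral: {drift:.2e}")   # small, set by the tolerances
```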
Note that explicit form of the a(τ) does not depend on the form of the potential and is given by Eq. (<ref>) for R_0>0 and Eq. (<ref>) for R_0=0. To get the general solutions in the conformal time, so it is suitable to write out expressions for H(τ) and a(τ) in explicitly real forms. We present formulas for the case of K=0 and R_0>0 only. For K=0 and R_0>0, we obtain * if E > 0, then H(τ) = √(R_0/3)dn(.a_0√(R_0/3)(τ-τ_0) |1/2)/1 - cn(.a_0√(R_0/3)(τ-τ_0) |1/2), a(τ) = a_0 sn(.a_0√(R_0/3)(τ-τ_0) |1/2)/1+cn(.a_0√(R_0/3)(τ-τ_0) |1/2), a_0 = (2E/U_0 R_0)^1/4, τ(t) = τ_0 + 1/a_0√(3/R_0)F(arccos(a_0^2 - a^2/a_0^2 + a^2)|1/2.), a_0√(R_0/12)(τ-τ_0) ∈(0, K(1/2)); * if E < 0, then a(τ) = a_0/cn(.a_0√(R_0/6)(τ-τ_0) |1/2), a_0 = (-2E/U_0 R_0)^1/4, H(τ) = √(R_0/6) sn(.a_0√(R_0/6)(τ-τ_0) |1/2)dn(.a_0√(R_0/6)(τ-τ_0) |1/2), τ(t) = τ_0 + 1/a_0√(6/R_0)F(arcsin(√(a^2 - a_0^2/a^2))|1/2.), a_0√(R_0/6)(τ-τ_0) ∈(-K(1/2), K(1/2)); * if E = 0, then H(τ) = ±√(R_0/12), a(τ) = a_0/1 ∓ a_0 √(R_0/12)(τ-τ_0), τ(t) = τ_0 ±1/a_0√(12/R_0)(1 - e^∓√(R_0/12)(t-t_0)) =τ_0 ±1/a_0√(12/R_0)(1 -a_0/a), ± a_0√(R_0/12)(τ-τ_0) ∈(-∞, 1). Here F is the incomplete elliptic integral of the first kind, defined as F(.φ |m) = ∫_0^φdθ/√(1 - m sin^2 θ), K(m) ≡ F(π / 2 | m.) is the complete elliptic integral of the first kind, and the Jacobi elliptic functions sn, cn and dn have been used. In the Einstein frame, the FLRW metric (<ref>) becomes ds^2 =a_E^2(τ)( - dτ^2 + dr^2/1-Kr^2+r^2dθ^2+r^2sin^2(θ)dφ^2), where a_E(τ) = √(U/U_0)a(τ). § POTENTIALS AND SOLUTIONS OF THE FIELD EQUATIONS §.§ The way to get the general cosmological solution of the CCM In the FLRW metric with the conformal time, Eqs. (<ref>)–(<ref>) have the following form: 3 M_Pl^2 (h_E^2+K) = 1/2(χ̇^1)^2 + 1/2 G_22(χ̇^2)^2 +a^2_E V_E, -M_Pl^2 (2 ḣ_E + h_E^2+K) = 1/2(χ̇^1)^2 + 1/2 G_22(χ̇^2)^2 - a^2_E V_E, χ̈^1 + 2 h_Eχ̇^1 - 1/2dG_22/dχ^1(χ̇^2)^2 + a^2_E∂ V_E/∂χ^1 = 0, G_22χ̈_2 + 2 h_E G_22χ̇_2 + dG_22/dχ^1χ̇_1 χ̇_2 + a^2_E∂ V_E/∂χ^2 = 0. Here “dots” denote d/dτ, and h_E≡ȧ_E/a_E = a_E H_E. Our goal is to get the general solutions of these equations for the potentials given by the formula (<ref>). If we know the general solutions for the corresponding model in the Jordan frame, or, in other words, if the functions a(τ), σ^1(τ), and σ^2(τ) are known, then the scale function a_E is given by Eq. (<ref>) and the Hubble parameter H_E(τ) = M_Pl/√(2 U(σ^1, σ^2))(H(τ) + 1/2 a(τ) U(σ^1, σ^2)d U/d τ). Here we use the fact that conformal times for both frames coincide (up to an additive constant, which can be set to be zero without loss of generality), since dt / a = dt_E / a_E. Having these expressions for a_E, H_E, To get χ^1(τ) and χ^2(τ), we use the relations χ^1 = √(6) M_Pl arctanh(σ^2/√(12 U_0 - (σ^1)^2)), χ^2 = √(6) M_Pl arctanh(σ^1/√(12 U_0 )). It is not always possible to integrate Eq. (<ref>). To integrate field equations in the Jordan frame we must firstly choose a particular form of the potential V. Below we present the general solutions for three types of the potential V. The obtained solutions allow us to get the general solutions for the corresponding CCM. §.§ Potential V=1/2R_0 U_0 We start from the case of a constant potential V_0=1/2R_0 U_0. The corresponding potential in the Einstein frame is V_E=M^4_PlR_0/8U_0cosh^2(χ^1/√(6) M_Pl)cosh^2(χ^2/√(6) M_Pl) . For Ṽ≡ 0, Eq. (<ref>) has the following solutions: y^A={ B_1^Asin(√(K)τ)+B_0^Acos(√(K)τ), K>0; B_1^Aτ+B_0^A, K=0; B_1^Asinh(√(K)τ)+B^A_0cosh(√(K)τ), K<0; . where B_0^A and B_1^A are integration constants. 
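The simplest branch above, E=0 with K=0 and R_0>0, can be cross-checked in a few lines: substituting τ(t) into a(τ) should give a constant Hubble rate √(R_0/12), i.e. an exponential (de Sitter) expansion in cosmic time, and dτ/dt should equal 1/a since dt=a dτ. The sketch below uses the upper signs; all numerical values are placeholders chosen only for illustration.

# Consistency check of the E = 0, K = 0, R0 > 0 branch (upper signs): a(tau(t))
# should reduce to a0 * exp(sqrt(R0/12) * (t - t0)), and dtau/dt should equal 1/a.
import numpy as np

R0, a0, t0, tau0 = 2.0, 0.7, 0.0, 0.0     # assumed illustrative values
H0 = np.sqrt(R0 / 12.0)

t = np.linspace(0.0, 3.0, 400)
tau = tau0 + (1.0 / a0) * np.sqrt(12.0 / R0) * (1.0 - np.exp(-H0 * (t - t0)))
a_conf = a0 / (1.0 - a0 * H0 * (tau - tau0))   # a(tau) from the formula above
a_cosmic = a0 * np.exp(H0 * (t - t0))          # expected de Sitter form

print("max |a(tau(t)) - a0 exp(H0 (t-t0))| =", np.max(np.abs(a_conf - a_cosmic)))

dtau_dt = np.gradient(tau, t)                  # dt = a dtau  =>  dtau/dt = 1/a
print("max |dtau/dt - 1/a| =", np.max(np.abs(dtau_dt - 1.0 / a_conf)))

The first difference vanishes to round-off; the second is limited only by the finite-difference step.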
Using the expression (<ref>), we get σ^A(τ). It allows us to get the general solution for the CCM with the potential (<ref>) by Eqs. (<ref>) and (<ref>). §.§ Potential V=1/2R_0 U_0 + c_1 (σ^1)^4+ c_2 (σ^2)^4 Let us choose the potential V as V_1(σ^1, σ^2) = R_0 U_0/2 + c_1 (σ^1)^4+ c_2 (σ^2)^4, where c_1 and c_2 are constant. For this potential V, system (<ref>) is completely separable, and we have 2 independent integrals of motion: 1/2(d y^A/d τ)^2+K/2(y^A)^2 + c_A (y^A)^4= E_A, where E_A are integration constants. The form of solutions of (<ref>) depends on the signs of c_A and E_A. For K=0, we obtain: * E_A > 0, c_A > 0: y^A(τ) = (E_A/c_A)^1/4cn.(± 2(E_A c_A)^1/4 (τ - τ_i) + cn^-1(u_n|1/2.)| 1/2); * E_A < 0, c_A < 0: y^A(τ) = (E_A/c_A)^1/4/cn.(± 2(E_A c_A)^1/4 (τ - τ_i) + cn^-1.(1/u_A|1/2)| 1/2); * E_A > 0, c_A < 0: y^A(τ) = |E_A/c_A|^1/4sn(±.2√(2)|E_A c_A|^1/4 (τ - τ_i) + sn^-1(.2u_A/1+u_A^2|1/2)| 1/2)/1+cn(±.2√(2)|E_A c_A|^1/4 (τ - τ_i) + cn^-1(.1-u_A^2/1+u_A^2|1/2)| 1/2). Here u_A = |c_A/E_A|^1/4 y^A(τ_i), and the choice of sign in “±” is determined by initial conditions. Obviously, there are no solutions when E_A < 0 and c_A > 0. For c_A = 0, E_A has to be non-negative, and we have y^A(τ) given by Eq. (<ref>). For the potential V given by  (<ref>), the corresponding potential V_E has the following form in terms of the new fields: V_E = M^4_Pl/4 U_0^2[1/2 R_0 U_0 cosh^4(χ^1/√(6) M_Pl) cosh^4(χ^2/√(6) M_Pl) . . + (12 U_0)^2 c_1 cosh^4(χ^1/√(6) M_Pl) sinh^4(χ^2/√(6) M_Pl) + (12 U_0)^2 c_2 sinh^4(χ^1/√(6) M_Pl)]. If we put χ^2=0, then V_E coincides with the potential from single-field model proposed by Bars and Chen in Ref. <cit.>: V_BC = M^4_Pl/4 U_0^2[1/2 R_0 U_0 cosh^4(χ^1/√(6) M_Pl) + (12 U_0)^2 c_2 sinh^4(χ^1/√(6) M_Pl)]. So, the proposed integrable model with potential (<ref>) is two-field generalization of the known single-field integrable model <cit.>. §.§ Potential V=1/2R_0 U_0 + c((σ^1)^2 + (σ^2)^2)^2 For the potential V_2=R U_0/2 + c((σ^1)^2 + (σ^2)^2)^2, where c is a nonzero constant, general solutions can also be explicitly written in terms of the Jacobi elliptic functions. In this case, Eq. (<ref>) with K=0 takes the form 1/2(d y^1/d τ)^2 + 1/2(d y^2/d τ)^2 + c((y^1)^2 + (y^2)^2)^2 = E. By introducing “polar coordinates”, y^1(τ) = ρ(τ) cos(θ(τ)), y^2(τ) = ρ(τ) sin(θ(τ)), it can be rewritten as 1/2(d ρ/d τ)^2 + 1/2ρ^2 (d θ/d τ)^2 + c ρ^4 = E. In analogy with classical mechanics, one may observe that θ is a cyclic coordinate, which means that ρ^2 d θ/d τ = L = const, and the equation for ρ reads 1/2(d ρ/d τ)^2 + L^2/2 ρ^2 + c ρ^4 = E. In what follows, we restrict ourselves to the case of c > 0 only. In such a case, the solution for ρ reads ρ(τ) = √(ρ_0^2 - (ρ_0^2 - ρ_1^2) sn^2(.√(v_0-v_2)/2βτ + C_ρ| v_0-v_1/v_0-v_2)). Here, C_ρ is a constant of integration, ρ^2_0, 1 = 2 √(E/3c) v_0, 1, v_k = cos(1/3arccos(-(E_min/E)^3/2) - 2 π k/3), k = 0, 1, 2, β = 1/4(3/cE)^1/4, E_min = 3/2(c L^4/2)^1/3. Note that since E ⩾ E_min, we always have v_0 ⩾ v_1 > 0 > v_2. The equality v_0 = v_1 holds when E = E_min. The solution for θ is θ(τ) =sgn(L)/2 v_0 √(v_0 - v_2)(E_min/E)^3/4Π(.1-v_1/v_0; √(v_0-v_2)/2βτ + C_ρ| v_0-v_1/v_0-v_2) + C_θ. Here Π(.n; u|m) ≡∫_0^ud w/1 - n sn^2(.w|m) is the incomplete elliptic integral of the third kind, and C_θ is a constant of integration. If E = E_min, then ρ = ρ_min = (L^2 / 4 c)^1/6 = const, and θ(τ) = L/ρ_min^2τ + C_θ. 
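The two conserved quantities used above, the energy E and the angular momentum L=ρ^2 dθ/dτ, are easy to confirm numerically for the K=0 system ÿ^A+4c((y^1)^2+(y^2)^2)y^A=0. The sketch below integrates this system for assumed initial data and monitors both integrals of motion; it is a numerical illustration only, with no attempt to reproduce the closed-form elliptic solution.

# The planar system y_A'' + 4 c (y1^2 + y2^2) y_A = 0 (K = 0 reduction for the
# potential V_2) conserves the energy E and the angular momentum L used above.
import numpy as np
from scipy.integrate import solve_ivp

c = 0.5                               # assumed self-coupling, c > 0
state0 = [1.0, 0.0, 0.2, 0.8]         # assumed (y1, y2, y1', y2') at tau = 0

def rhs(tau, s):
    y1, y2, v1, v2 = s
    rho2 = y1**2 + y2**2
    return [v1, v2, -4.0 * c * rho2 * y1, -4.0 * c * rho2 * y2]

sol = solve_ivp(rhs, (0.0, 50.0), state0, rtol=1e-10, atol=1e-12)
y1, y2, v1, v2 = sol.y

E = 0.5 * (v1**2 + v2**2) + c * (y1**2 + y2**2)**2   # first integral (energy)
L = y1 * v2 - y2 * v1                                # rho^2 * dtheta/dtau
print("E drift:", E.max() - E.min())
print("L drift:", L.max() - L.min())

Conservation of L is what makes θ cyclic and reduces the problem to the single quadrature for ρ solved above in terms of Jacobi elliptic functions.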
For the potential V_2, the corresponding potential V_E takes the form V_E = M^4_Pl/4 U_0^2[1/2 R_0 U_0 cosh^4(χ^1/√(6) M_Pl) cosh^4(χ^2/√(6) M_Pl) . . + (12 U_0)^2 c ( cosh^2(χ^1/√(6) M_Pl) sinh^2(χ^2/√(6) M_Pl) + sinh^2(χ^1/√(6) M_Pl))^2]. The general solution for the CCM with this potential can be found by Eqs. (<ref>) and (<ref>). § CONCLUSIONS In this paper, we have constructed new integrable chiral cosmological models with two-fields and potentials in terms of hyperbolic functions. These potentials are presented in Eqs. (<ref>), (<ref>), and (<ref>). The general solutions have been found in terms of the Jacobi elliptic functions of the conformal time. To get these solutions we have used the corresponding models in the Jordan frame, for which the Ricci scalar is an integral of motion. The integration of the initial system of equations reduces to the integration of system (<ref>) of two second-order equations, that has one integral of motion for any potential V given by Eq. (<ref>). In this paper, we present examples of the potential V, for which system (<ref>) has two independent integrals of motion and, hence, is integrable. Two-field CCMs actively investigated in cosmology to describe the evolution of the observable Universe, including inflation, primordial black hole formation and dark energy <cit.>. The constructed models can be considered as two-field generalizations of the single-field integrable model proposed in Ref. <cit.>, which has been actively investigated as a possible description of the evolution of the early Universe. We hope that the proposed integrable models will be useful to describe the Universe evolution. § ACKNOWLEDGEMENTS V.R.I. is supported by the “BASIS” Foundation grant No. 22-2-2-6-1. § REFERENCES utphys
http://arxiv.org/abs/2407.13044v1
20240717224847
DropKAN: Regularizing KANs by masking post-activations
[ "Mohammed Ghaith Altarabichi" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Axions in Andromeda: Searching for Minicluster - Neutron Star Encounters with the Green Bank Telescope Luca Visinelli July 22, 2024 ======================================================================================================== § ABSTRACT We propose DropKAN (Drop Kolmogorov-Arnold Networks) a regularization method that prevents co-adaptation of activation function weights in Kolmogorov-Arnold Networks (KANs). DropKAN operates by randomly masking some of the post-activations within the KANs computation graph, while scaling-up the retained post-activations. We show that this simple procedure that require minimal coding effort has a regularizing effect and consistently lead to better generalization of KANs. We analyze the adaptation of the standard Dropout with KANs and demonstrate that Dropout applied to KANs' neurons can lead to unpredictable performance in the feedforward pass. We carry an empirical study with real world Machine Learning datasets to validate our findings. Our results suggest that DropKAN is consistently a better alternative to Dropout, and improves the generalization performance of KANs. § INTRODUCTION Kolmogorov-Arnold Networks (KANs) <cit.> are recently proposed as an alternative to Multi-Layer Perceptrons (MLPs). The computation graph of Kolmogorov-Arnold Networks (KANs) is different form the standrad Multi-Layer Perceptrons (MLPs) in two fundamental ways: 1) On edges: KANs use trainable activation functions, unlike MLPs that rely on linear weights. 2) On neurons ("nodes"): KANs sum the incoming signals, different to MLPs that apply non-linear activation functions e.g., ReLU. This change in the computation graph indicates that many of the techniques used in MLPs might not be directly transferable to KANs, or at least may not necessarily give the same desired effect. In this work we explore whether KANs could benefit from using Dropout <cit.> for regularization, and propose DropKAN based on our analysis as a method to regularize KANs by randomly masking post-activations. DropKAN is efficient at regularizing KANs and is easy to incorporate within any implementation of KANs using a single line of code. Our contributions in this paper can be categorized in two folds. First, we analyze the behavior of Dropout applied to KANs and show that it behaves differently from how it was originally designed to operate with MLPs. Our second and main contribution is proposing DropKAN, an alternative to Dropout applied to KANs, as we show that DropKAN consistently outperforms Dropout with KANs using a number of real world datasets. § MOTIVATION We start by formalizing the definition of a KAN layer and the adaptation of Dropout to KANs. We denote the input dimension of the l^th layer of a KAN model consisting of L layers by n_l. The activation functions connecting the layer l to the following layer l+1 can be expressed as a 1D matrix of functions: Φ_l = {ϕ_l,j,i}, l = 1, 2, …, L, i = 1, 2, …, n_l, j = 1, 2, …, n_l+1 where ϕ_l,j,i is an arbitrary function represented as a spline with trainable parameters. We define x_l,i as the pre-activation (input) of the function ϕ_l,j,i; the post-activation of ϕ_l,j,i is denoted by x̃_l,j,i = ϕ_l,j,i(x_l,i). The neurons in the KAN layer performs a sum of all incoming post-activations to output x̃_l,j,i. x_l+1,j = ∑_i=1^n_inx̃_l,j,i = ∑_i=1^n_inϕ_l,j,i(x_l,i), j = 1, 2, …, n_l+1 By adding a Dropout layer between the KAN layers l and l+1 as in Figure <ref>, a binary dropout mask m_j is applied to the l layer outputs' x_l+1,j in training time. 
The mask is usually sampled independently from a Bernoulli distribution with a probability p of being 1 (indicating that the neuron j is retained) and 1-p of being 0 (indicating that the neuron j is dropped). The output of the KAN layer with the inclusion of Dropout becomes: x^'_l+1,j = m_jx_l+1,j/1-p, j = 1, 2, …, n_l+1 The output of the Dropout layer in Equation <ref> is usually scaled-up by a factor of 1/1-p, we denote this factor by s. This procedure is done to compensate the effect of dropping out some nodes, with the rational of ensuring that a similar level of signal will continue to propagate through the network with the presence of Dropout in training time. §.§ Why is Dropout problematic with KANs? The key motivation of Dropout is to prevent co-adaptations <cit.> of weights by sampling a thinned network through dropping out nodes. If a node in a MLP is dropped by a Dropout layer then all the weights connected to it will take no part in the feedforward and backward passes during the training step. The weights of the dropped neuron in a MLP have no impact on feedforward because they are multiplied by an input of zero from the dropped node, while on the backward pass they have no influence on the loss calculation, and will consequently get a gradient of zero in that training step. However, applying Dropout to the outputs of a KAN layer is not enough to exclude the masked nodes from actively participating in the feedforward and backward passes. The zero outputs from the masked nodes will be used as inputs to the activation functions Φ_l+1 in the following layer l+1, given that for an arbitrary activation function ϕ^' from Φ_l+1, the output of ϕ^'(x=0) is not necessarily zero, the nodes will still propagate the corresponding value of ϕ^'(x=0) into the network during feedforward. Consequently, the weights of activation function (the spline coefficients) ϕ^' in the layer l+1 following the dropout will also be updated in the backward pass. Another key issue with applying Dropout to KANs is the compensation of dropping out some nodes by scaling-up. As we have observed in Equation <ref>, the outputs of the kept nodes are scaled by a factor of s. This procedure is ineffective with KANs as the arbitrary activation function from Φ_l+1 are not necessarily homogeneous function of degree 1, ϕ^'(sx) ≠ sϕ^'(x). The behavior of ϕ^'(s x_l+1,j) is unpredictable at training time, and it is not trivial to identify s that could lead to the proper scaling-up effect, as this procedure must be done for each layer using Dropout in the KANs. § METHODS We design DropKAN to address the previous issues of Dropout by applying a binary dropout mask M_i to the post-activations x̃_l,j,i as observed in Figure <ref> in training time. This is different to Dropout where the mask is applied to the output of the neurons x_l+1,j. Similar to Dropout, we sample the mask independently from a Bernoulli distribution with a probability p of being 1 (indicating that the post-activation is retained) and 1-p of being 0 (indicating that the post-activation is dropped). The output of the KAN layer when DropKAN is applied becomes: x^'_l+1,j = 1/1-p∑_i=1^n_in M_ix̃_j,i = 1/1-p∑_i=1^n_in M_iϕ_j,i(x_i), j = 1, 2, …, n_out In DropKAN the output is scaled-up by a factor of 1/1-p to compensate for the dropped post-activations. In test time DropKAN behaves exactly as a regular KAN layer as in Equation <ref>, effectively serving as an identity function. 
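To make the placement of the mask concrete, the toy layer below applies a DropKAN-style mask directly to the post-activations and rescales the retained ones by 1/(1-p) before the node sum, as in Equation <ref>. This is only an illustrative sketch, not the authors' code: the edge functions ϕ_{l,j,i} are stood in for by small learnable polynomials rather than splines, and all names and hyperparameters are placeholders.

# Toy sketch of DropKAN masking (PyTorch).  Edge functions are represented by
# learnable polynomials instead of B-splines, which is enough to show where the
# mask acts; this is not the spline parameterization used by actual KANs.
import torch
import torch.nn as nn

class ToyDropKANLayer(nn.Module):
    def __init__(self, n_in, n_out, p_drop=0.5, degree=3):
        super().__init__()
        # coeffs[j, i, k]: k-th coefficient of the edge function phi_{j,i}
        self.coeffs = nn.Parameter(0.1 * torch.randn(n_out, n_in, degree + 1))
        self.p = p_drop
        self.degree = degree

    def forward(self, x):                       # x: (batch, n_in)
        powers = torch.stack([x**k for k in range(self.degree + 1)], dim=-1)
        post = torch.einsum('bik,jik->bji', powers, self.coeffs)  # phi_{j,i}(x_i)
        if self.training and self.p > 0:
            # DropKAN: mask individual post-activations, scale the kept ones
            mask = torch.bernoulli(torch.full_like(post, 1.0 - self.p))
            post = post * mask / (1.0 - self.p)
        return post.sum(dim=-1)                 # node j sums its incoming edges

layer = ToyDropKANLayer(n_in=4, n_out=3, p_drop=0.5)
x = torch.randn(8, 4)
layer.train(); y_train = layer(x)    # stochastic, masked forward pass
layer.eval();  y_eval = layer(x)     # identical to a plain KAN layer
print(y_train.shape, y_eval.shape)

In evaluation mode the mask is skipped, so the layer reduces to the plain KAN sum of Equation <ref>.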
Applying the mask to the post-activations in DropKAN, combined with scaling-up, ensures that the expected values of the sums in KAN nodes during training are approximately the same with and without DropKAN: 𝔼[1/1-p∑_i=1^n_in M_i x̃_j,i] ≈ 𝔼[∑_i=1^n_in x̃_j,i] Unlike Dropout, DropKAN does not mask the nodes after the sum; instead, some of the post-activations are masked while the kept ones are scaled up, which ensures that the expected value of the summation taking place in the node is approximately the same with and without DropKAN, as observed in Equation <ref>. § RESULTS AND DISCUSSION This section describes the experimental design used to evaluate DropKAN, along with the results obtained in our experiments. The first experiment evaluates the expected value of a KAN function using DropKAN layers in training mode and compares it to a network with Dropout enabled between the KAN layers. In the second experiment, we compare the performance of a network using DropKAN layers against a regular KAN and a KAN regularized with Dropout on a number of classification problems. §.§ Experimental Setup Our experiments involve 10 popular <cit.> data sets from the UCI Machine Learning Database Repository[http://archive.ics.uci.edu/ml]. We have included datasets of varying sizes from different domains. Table <ref> provides a summary of the number of instances, features, and classes of all data sets used in our experiments. Each data set is divided into training (60%), validation (20%), and testing (20%) splits. A unified preprocessing approach is adopted for all data sets, including categorical feature encoding, imputation of missing values, and shuffling of instances. Accuracy is the metric used for evaluation in all experiments. All reported accuracies are those realized on the testing set. For the KAN hyperparameters, we adopted the default values, besides setting the grid to 3 and turning off the trainable parameters of the base activation function. The KAN architecture is unified across all experiments for all datasets as [n_in, 10, 1], where n_in is the number of input features in the dataset, unless explicitly mentioned otherwise. The networks are trained for 2000 steps using the Adam optimizer with a learning rate of 0.01 and a batch size of 32. In describing the experiments, we use KANs with five different settings: using Dropout, DropKAN, and neither (No-Drop). For Dropout and DropKAN we report results in two modes, with scale-up (Dropout w/ scale) and without (Dropout w/o scale). §.§ Experiment I - Forward Pass Evaluation In this experiment we evaluate the impact of using DropKAN and Dropout on the forward pass of backpropagation for a KAN network. We train the KAN network [6, 2, 2, 1] five times for a total of 100 steps. We carry out five forward passes using the validation split every 10 steps, one for each of the five settings: No-Drop, Dropout w/ scale, Dropout w/o scale, DropKAN w/ scale, and DropKAN w/o scale. We fix the drop rate to 0.5 for all drop modes on each layer. We average the value at the output neuron (x_3,1) to estimate the expected value of the output using the validation split.
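Before turning to the results, the expectation argument of Equation <ref>, and the scaling problem it avoids, can be reproduced with a few lines of Monte Carlo on synthetic numbers. The toy distribution of post-activations, the choice of tanh as a stand-in for a downstream edge function ϕ', and the drop rate are all assumptions made purely for illustration.

# (1) Masking post-activations and rescaling by 1/(1-p) preserves the mean of the
#     node sum (DropKAN).  (2) Rescaling the *input* of a nonlinear edge function
#     phi' does not reproduce phi' of the unscaled input (the Dropout issue).
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                            # drop rate
post = rng.normal(1.0, 0.5, size=(200_000, 16))    # toy post-activations x_tilde

node_sum = post.sum(axis=1)                                  # no drop
mask = rng.random(post.shape) > p                            # keep prob = 1 - p
dropkan_sum = (post * mask).sum(axis=1) / (1.0 - p)          # DropKAN w/ scale
print("mean node sum    :", node_sum.mean())
print("mean DropKAN sum :", dropkan_sum.mean())              # ~ the same

phi = np.tanh                                      # arbitrary nonlinear phi'
keep = rng.random(node_sum.shape) > p
scaled_in = keep * node_sum / (1.0 - p)            # Dropout-style masked, scaled input
print("mean phi(node sum)       :", phi(node_sum).mean())
print("mean phi(masked, scaled) :", phi(scaled_in).mean())   # noticeably off

The DropKAN estimate matches the undropped node sum on average, whereas zeroing and rescaling the input of a nonlinear ϕ' visibly shifts its mean output, which is precisely the behavior probed in Experiment I.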
The plots in Figure <ref> show the results of this experiment, and while it is clear that the propagated signals for both Dropout w/o scale and DropKAN w/o scale become weaker due to the process of zeroing-out, the scaling is allowing DropKAN w/ scale to pretty accurately approximate the strength of the signal comparing to the No-Drop mode consistent with Equation <ref>. On the other hand, the procedure of scaling-up the input for Dropout is consistently off, and in this case is causing to over-scale the signal. As we have explained earlier, scaling up the input of the non-linear activation function ϕ^' would cause an unpredictable behaviour at training time since ϕ^'(sx) ≠ sϕ^'(x). §.§ Experiment II - Classification Problems In this experiment, we compare KANs equipped with DropKAN layers against KANs with standard KAN layers and KANs regularized using Dropout. The goal of the experiment is to validate the regularizing effect of DropKAN and to compare it against using standard Dropout with KANs. We use a random search to optimize the rates of drop for the DropKAN and Dropout settings, we ran the search for 50 evaluation per setting, and choose the setting with the highest accuracy performance on the validation split. For evaluation we train every setting five times and report the average of the runs. The results in Table <ref> suggest that regularization methods improved the standard KAN performance (No-Drop) in 8 scenarios apart from two datasets semeion, and car. DropKAN w/ scale is clearly the most successful setting recording the highest test accuracy in six of the ten datasets, while the standard Dropout w/ scale was only the best choice in one dataset. The results suggest that scaling is showing a positive effect with DopKAN as the scaled settings performed better than w/o scaling in seven scenarios, consistent with the results of the previous experiment and Equation <ref>. However, scaling is not as successful in the case of Dropout showing better performance comparing to no scaling in only 4 scenarios. § DROPKAN IMPLEMENTATION DropKAN can not be implemented as a standalone layer, but instead it must be applied to the post-activations within the KAN layer. Luckily, the implementation is straightforward, and could be realized with a single line of code, allowing DropKAN to be easily incorporated into any custom KAN implementation. We demonstrate here the line of code to be added assuming library is used for the KAN layer implementation: Where postspline is the post-activations (spline) variable, and the variable rate is the dropping probability of DropKAN. It would be interesting to implement and test DropKAN for the newly-emergeing KAN-based architectures such as graph <cit.>, convolutional <cit.>, and transformer <cit.>. For instance in FourierKAN-GCF (Graph Collaborative Filtering ) <cit.>, regularization strategies like node and message dropout were tested, we believe DropKAN could be directly extended to such architectures. § RELATED WORK Expanding upon the Dropout technique, various methodologies <cit.> have been proposed to refine the training regularization of Deep Neural Networks (DNNs) for supervised learning. The fundamental idea is to introduce noise into intermediate layers during training. Recent advancements have focused on enhancing regularization in Convolutional Neural Networks (CNNs) through the introduction of structured noise. 
For instance, the SpatialDropout method <cit.> selectively removes entire channels from activation maps, the DropPath scheme <cit.> opts to discard entire layers, and the DropBlock algorithm <cit.> zeros out multiple contiguous regions within activation maps. Techniques inspired by Dropout are not limited to the feedforward stage of backpropagation, as in Gradient Dropout, proposed in the context of meta-learning by <cit.> and later generalized by <cit.> to the supervised training setting; <cit.> showed that masking the gradient during the backward pass prevents the network from memorizing the data and learning overly simple patterns early in training. Similar ideas could potentially be extended to KANs. § CONCLUSIONS We present DropKAN, an effective method to regularize KANs that leads to reliable generalization improvements over both standard KANs and KANs regularized with Dropout. Our analysis of adapting Dropout to KANs indicates that using it between KAN layers is problematic due to the particular structure of the KAN computation graph. We design DropKAN to mask the post-activations instead of masking the neurons/nodes. We show that a KAN network constructed using DropKAN layers consistently outperforms a KAN of the same architecture using standard KAN layers, even when the latter is regularized with Dropout between its layers.
http://arxiv.org/abs/2407.12121v1
20240716191507
FoodMem: Near Real-time and Precise Food Video Segmentation
[ "Ahmad AlMughrabi", "Adrián Galán", "Ricardo Marques", "Petia Radeva" ]
cs.CV
[ "cs.CV" ]
1 .001 A. AlMughrabi et al. mode = title]FoodMem: Near Real-time and Precise Food Video Segmentation 1]Ahmad AlMughrabi[orcid=0000-0002-9336-3200] sci.mughrabi@gmail.com [url]https://amughrabi.github.io/ This work was partially funded by the EU project MUSAE (No. 01070421), 2021-SGR-01094 (AGAUR), Icrea Academia’2022 (Generalitat de Catalunya), Robo STEAM (2022-1-BG01-KA220-VET000089434, Erasmus+ EU), DeepSense (ACE053/22/000029, ACCIÓ), CERCA Programme/Generalitat de Catalunya, and Grants PID2022141566NB-I00 (IDEATE), PDC2022-133642-I00 (DeepFoodVol), and CNS2022-135480 (A-BMC) funded by MICIU/AEI/10.13039/501100 011033, by FEDER (UE), and by European Union NextGenerationEU/ PRTR. R. Marques acknowledges the support of the Serra Húnter Programme. A. AlMughrabi acknowledges the support of FPI Becas, MICINN, Spain. [1]organization=Universitat de Barcelona, addressline=Gran Via de les Corts Catalanes, 585, city=Barcelona, postcode=08007, country=Spain 1]Adrián Galán adriangalan0204@gmail.com [url] [2]organization=Computer Vision Center, Generalitat de Catalunya and the Universitat Autònoma de Barcelona, addressline=Campus UAB, Edifici O, s/n, city=Barcelona, postcode=08193, country=Spain 1,2]Ricardo Marques[orcid=0000-0001-8261-4409] ricardo.marques@ub.edu [url]https://mat.ub.edu/departament/professors/marques-ricardo/ [3]organization=Institut de Neurosciències, University of Barcelona, addressline=Passeig de la Vall d’Hebron, 171, city=Barcelona, postcode=08035, country=Spain 1,3]Petia Radeva[orcid=0000-0003-0047-5172] petia.ivanova@ub.edu [url]http://www.ub.edu/cvub/petiaradeva/ § ABSTRACT Food segmentation, including in videos, is vital for addressing real-world health, agriculture, and food biotechnology issues. Current limitations lead to inaccurate nutritional analysis, inefficient crop management, and suboptimal food processing, impacting food security and public health. Improving segmentation techniques can enhance dietary assessments, agricultural productivity, and the food production process. This study introduces the development of a robust framework for high-quality, near-real-time segmentation and tracking of food items in videos, using minimal hardware resources. We present FoodMem, a novel framework designed to segment food items from video sequences of 360-degree unbounded scenes. FoodMem can consistently generate masks of food portions in a video sequence, overcoming the limitations of existing semantic segmentation models, such as flickering and prohibitive inference speeds in video processing contexts. To address these issues, FoodMem leverages a two-phase solution: a transformer segmentation phase to create initial segmentation masks and a memory-based tracking phase to monitor food masks in complex scenes. Our framework outperforms current state-of-the-art food segmentation models, yielding superior performance across various conditions, such as camera angles, lighting, reflections, scene complexity, and food diversity. This results in reduced segmentation noise, elimination of artifacts, and completion of missing segments. Here, we also introduce a new annotated food dataset encompassing challenging scenarios absent in previous benchmarks. Extensive experiments conducted on Nutrition5k and Vegetables & Fruits datasets demonstrate that FoodMem enhances the state-of-the-art by 2.5% mean average precision in food video segmentation and is 58× faster on average. The source code is available at: [https://amughrabi.github.io/foodmem]. 
* Introduces FoodMem, the first framework to explore near-real-time food segmentation and tracking in videos. * Introduces a novel framework that leverages a segmentation transformer model for initial mask creation and a memory-based model for consistent mask refinement. * Outperforms existing models like FoodSAM, the state-of-the-art, across various conditions, including different camera angles, lighting, reflections, and scene complexity. * Achieves high-quality segmentation with minimal hardware resources. * Demonstrates superior performance, with 2.5% higher mean average precision and 58 times faster processing than current models. * Presents a novel annotated food dataset featuring challenging scenarios. * Enhances applications in dietary assessments, agricultural productivity, and food processing efficiency. * Source code and data are publicly available to facilitate further research. Video Food Segmentation Fast Segmentation Food Tracking Segmentation Transformer Memory-based Models Near Real-time Segmentation [ [ July 22, 2024 ================= § INTRODUCTION Video Object Segmentation (VOS) <cit.> stands as a prevalent task within the domain of computer vision, finding applications in various fields, including object recognition, scene comprehension, medical imaging, and enhancing filter effects in video communication. While automated methodologies leveraging pretrained models for object segmentation are widely utilized, interactive user input is frequently incorporated to annotate novel training datasets or accomplish precise rotoscoping for intricate footage, particularly within visual effects contexts. This becomes particularly beneficial when videos present challenging lighting conditions and dynamic scenes or necessitate partial region segmentation. Although automatic VOS techniques strive to delineate entire objects with clearly defined semantic boundaries, Interactive Video Object Segmentation (IVOS) and Semi-supervised Video Object Segmentation (SSVOS) approaches <cit.> offer enhanced flexibility. These typically entail the utilization of scribble or contour drawing interfaces for manual refinement. Cutting-edge IVOS and SSVOS methodologies rely on memory-based models <cit.> and have shown impressive segmentation outcomes in complex scenes through user-supplied mask annotations. However, these techniques are primarily designed to improve individual annotation performances <cit.>, making them unsuitable for practical production scenarios. They tend to over-segment recognized semantic contours (such as individuals, hair, faces, and entire objects) while struggling to accurately delineate partial regions (such as parts of a person's face or a dog's tail), and face challenges with lighting and extreme object orientations. As a result, when given only one annotated frame, inconsistent segmentations arise due to the inherent ambiguity in the object's appearance, particularly when subjected to significant variations in viewing angles and complex lighting conditions. This limitation limits the scalability of these techniques, as it is unclear which frame annotations are a priority, especially in extended sequences. A new SSVOS framework, XMem++ <cit.>, stores annotated frames for reference and handles multiple frame annotations iteratively. XMem++ <cit.> adopted XMem <cit.> as a backbone. It improves object segmentation in complex scenes with fewer annotated frames without re-training. 
Using attention-based frame suggestion, it predicts the next annotations, supporting both sparse and dense annotations for better quality scaling. The system achieves real-time video segmentation with instant consideration of frame annotations. In contrast, a novel zero-shot framework, FoodSAM <cit.>, is designed to integrate the original semantic mask with category-agnostic masks generated by the Segment Anything (SAM) <cit.>. While SAM exhibits impressive food image segmentation capabilities, its lack of class-specific information limits its effectiveness <cit.>. Conversely, conventional segmentation methods maintain category information but often sacrifice segmentation quality <cit.>. FoodSAM proposes fusing the original segmentation output with SAM's generated masks to enhance semantic segmentation quality. Identifying the mask's category based on its predominant elements represents a novel and practical approach to improving the semantic segmentation process <cit.>. Still, FoodSAM fails to generalize the segmentation across multiple frames in a given video; for instance, the estimated mask IDs (i.e., mask colors) are assigned differently for the same food portion in different frames for the same scene. Moreover, FoodSAM fails to segment perfectly and generates masks from different camera views, such as missing food parts, and segments non-food objects, such as plates or tables. Furthermore, FoodSAM is slow, which makes it an unsuitable scene segmentation framework for production scenarios. Due to these limitations, we propose FoodMem, a novel framework that leverages FoodSAM and SSVOS (XMem++). Our framework generates one or a few annotated images as a first step and then tracks them throughout the scene. We further introduce a dataset for benchmarking purposes, which includes the new challenging scenes and food use cases, including: * different capturing settings such as 360^o food scenes; high and low speed capturing; blurry images; complex and simple backgrounds; different illumination; * food diversity and complexity such as basic ingredients (e.g., apple) or complex ingredients (e.g., cuisine). * bounded and unbounded scenes such as scene has been captured by Mobile phone in a free movement; while some scene has been captured by Intel Realsense camera in a restricted movement . We extensively evaluate and compare our framework on our dataset, which is built from real-world and challenging datasets <cit.> that are publicly available to show how our framework outperforms the current state-of-the-art. § RELATED WORK Various approaches have been proposed to tackle the food segmentation challenge. FoodSAM <cit.> has achieved a groundbreaking milestone as the first system to accomplish instance segmentation, panoptic segmentation, and promptable segmentation. FoodSAM utilizes the ViTh <cit.> as SAM’s image encoder, employing the same hyperparameters outlined in the original paper. Object detection is performed using UniDet <cit.> with Unified learned COIM RS200. For semantic segmentation, SETR <cit.> serves as FoodSAM baseline, incorporating ViT-16/B as the encoder and MLA as a decoder in FoodSeg103 <cit.>, which SETR focuses on extracting richer feature representations. Following a comprehensive evaluation of the FoodSeg103 <cit.> and UECFoodPix Complete <cit.> benchmarks, FoodSAM has outperformed the state-of-the-art methods on both datasets, demonstrating the exceptional potential of SAM as a powerful tool for food image segmentation. 
However, FoodSAM <cit.> exhibits considerable performance drawbacks, including significant latency and high memory consumption, attributable to the integration of multiple models within its framework. Additionally, FoodSAM cannot track food items throughout a video, as it segments each image independently. In contrast, the k-Means++ <cit.> technique is extensively utilized in various food segmentation tasks <cit.> due to its ease of implementation, high speed, and the fact that it is an analytical solution requiring no training, prior knowledge or GPU to run. This method is particularly effective when the camera is positioned above the food, and the table background is simple, containing only food, a fork, a spoon, and a plate. It primarily serves as an RGB image segmentation method employing K-means clustering to differentiate the object from the background, although additional processing, such as median filtering, is necessary to address issues like uneven illumination. Subsequent steps involve converting the clustered image to grayscale, applying median filtering, Otsu thresholding, and morphological operations (erosion, dilation, opening, and closing) to enhance segmentation quality and accuracy in volume prediction, particularly for brightly colored food objects. For video segmentation, DEVA <cit.> is a decoupled video segmentation approach that uses task-specific image-level segmentation and class/task-agnostic bi-directional temporal propagation, allowing it to 'track anything' without needing video data training for each individual task. This method only requires training an image-level model for the target task and a universal temporal propagation model once, which generalizes across tasks. DEVA outperforms end-to-end approaches in data-scarce tasks, including large-vocabulary video panoptic segmentation, open-world video segmentation, referring video segmentation, and unsupervised video object segmentation. Using DEVA with the prompt "food" demonstrates good overall model performance and speed; however, it exhibits limitations in recognizing a diverse range of food items and consumes a high amount of memory. To overcome these limitations, we propose FoodMem, a production-grade video food segmentation framework that accepts a video or a set of images and segments food in near-real-time performance and memory-friendly. Our main contributions are as follows: * We build a novel near-real-time food segmentation architecture for videos that combine Segmentation Transformer (SETR) <cit.> and memory-based model <cit.> frameworks as a first exploration for video food segmentation. * We introduce a novel dataset tailored for food image segmentation tasks. Our dataset comprises a comprehensive selection of dishes sourced from the Nutrition5k <cit.> dataset, encompassing 31 distinct dishes with a total of 1356 annotated frames. Additionally, we include 11 dishes from the Vegetable and Fruits <cit.> dataset, augmented with 2308 annotated frames. Our dataset features 42 diverse dishes, accompanied by 3664 meticulously annotated frames. We believe this expansive dataset is a valuable resource for advancing research in video food segmentation and related computer vision tasks. * We conducted an extensive series of experiments on our dataset to assess the effectiveness and flexibility of our framework against the baselines. * Our framework outperforms the state-of-the-art performance in video food segmentation for 2.5% mean average precision with similar recall. 
* Our framework is 58 times faster than the baseline's <cit.> inference time. The rest of the paper is structured as follows: We present the theoretical background in Sec. <ref>. We define our proposed methodology in Sec. <ref>. A thorough set of experimental results is presented in Sec. <ref>. Finally, we present our conclusions and future work in Sec. <ref>. § PROPOSED METHODOLOGY This section details the structure of our proposed method for automated food portion segmentation and tracking. Sec.  <ref> provides a high-level overview of the methodology, with reference to Fig. <ref>. Initially, we employ the SETR model <cit.> to generate one or several food masks per scene. Subsequently, these masks are tracked within complex scenes using the XMem++ model <cit.>. The following sections delve into the specifics of each step in our approach. §.§ Overview Our framework leverages a two-phase framework, combining SETR and memory-based frameworks to achieve efficient and rapid video segmentation, addressing the critical time complexity issue. This combination is inherently complex due to their distinct architectural designs and operational methodologies. Merging these models is non-trivial and involves significant challenges in aligning their functionalities and optimizing their interoperability. We have developed robust solutions to seamlessly integrate these components, ensuring precise segmentation and tracking while maintaining computational efficiency. §.§ Our Proposal: FoodMem Our methodology starts with semantic segmentation with the pre-trained SETR model for food segmentation in video data. Specifically, we utilized the Sequence-to-Sequence Transformer (SETR) model <cit.>, leveraging pre-trained weights to perform initial semantic segmentation on the first n frames {I_t}_t=1^n of the video sequence. Here, I_t denotes the t-th frame of the video sequence. The video frames are partitioned into N non-overlapping patches {p_t,i}_i=1^N, each of size P × P × C. In this context, P represents the height and width of each square patch, and C denotes the number of color channels (e.g., C = 3 for RGB images). These patches are embedded into feature vectors E_t,i∈ℝ^D using a linear projection, where D denotes the dimensionality of the embedded feature vector: E_t,i = W_p ·flatten(p_t,i) + b_p where W_p and b_p are the learnable parameters of the linear projection. These feature vectors are augmented with positional embeddings P_i to encode spatial relationships. The sequence of embedded patches Z_t,0 = {E_t,i + P_i}_i=1^N undergoes multiple layers of transformer encoding, leveraging pre-trained weights to enhance feature extraction capabilities. Here, Z_t,0 represents the initial sequence of embedded patches with positional embeddings. The output of the transformer encoder Z_t,L is subsequently fed into the transformer decoder to refine feature representations. Segmentation predictions are generated by applying a linear projection followed by a softmax activation to produce the probability distribution <cit.>: Ŝ_t,i = softmax(W_s Z_t,L^i + b_s) for each patch i. In this case, Z_t,L, where L denotes the number of layers in the transformer encoder, is the output of the transformer encoder after L layers, and W_s and b_s are the learnable parameters for the linear projection in the decoder. The predicted segmentation probability distribution for patch i in frame t is denoted by Ŝ_t,i, as shown in Fig. <ref> (a). 
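The segmentation phase described above can be condensed into a self-contained structural sketch: frames are cut into non-overlapping P×P patches, linearly embedded and augmented with positional embeddings, passed through a transformer encoder, and classified per patch with a softmax head. The dimensions, layer counts, random weights, and the assumption of a 224×224 input (hence 196 patches of size 16) are illustrative placeholders; the actual pipeline relies on pre-trained SETR weights.

# Structural sketch of the SETR-style phase (PyTorch); toy sizes, random weights.
import torch
import torch.nn as nn

class ToySETRHead(nn.Module):
    def __init__(self, patch=16, channels=3, dim=128, n_layers=4, n_classes=2):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(patch * patch * channels, dim)    # W_p, b_p
        self.pos = nn.Parameter(torch.zeros(1, 196, dim))        # positional emb. P_i (196 patches assumed)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.classify = nn.Linear(dim, n_classes)                # W_s, b_s

    def forward(self, frames):                  # frames: (B, C, H, W)
        B, C, H, W = frames.shape
        P = self.patch
        patches = frames.unfold(2, P, P).unfold(3, P, P)         # (B, C, H/P, W/P, P, P)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * P * P)
        z = self.embed(patches) + self.pos[:, :patches.shape[1]]
        z = self.encoder(z)                                      # Z_L
        return torch.softmax(self.classify(z), dim=-1)           # S_hat per patch

model = ToySETRHead()
probs = model(torch.randn(2, 3, 224, 224))      # 224/16 = 14, so 196 patches/frame
print(probs.shape)                              # (2, 196, n_classes)

In the full system the per-patch predictions are decoded to pixel-level masks, which then seed the memory-based tracking phase described next.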
We leverage Long-Term Tracking with XMem++ <cit.> to maintain temporal coherence in segmentation across frames. We integrate the XMem model without training, focusing on memory mechanisms for tracking and refining segmentation masks. For each frame I_t, feature maps F_t = ϕ(I_t) are extracted and used to populate Short-Term Memory (STM) and Long-Term Memory (LTM). Here, ϕ denotes the feature extraction function, and F_t denotes the feature map extracted from frame I_t. STM retains recent feature maps <cit.>: STM_t = {F_t-n, …, F_t} while LTM accumulates feature maps selectively based on their significance <cit.>: LTM_t+1 = LTM_t ∪{F_t} To track segmentation masks over time, similarities S_t,i between the current feature map F_t and memory features M_i are computed using dot products, normalized to obtain attention weights <cit.>: α_t,i = exp(S_t,i)/∑_jexp(S_t,j) where S_t,i represents the similarity between the current feature map F_t and memory feature M_i, and α_t,i denotes the attention weight for memory feature M_i. The memory read R_t is computed as a weighted sum of memory features <cit.>: R_t = ∑_iα_t,i M_i Fig. <ref> (b) illustrates Long-Term Tracking method, which takes the mask and a set of images as input and generates masks for all frames. §.§ Our proposed dataset We utilized the Nutrition5k <cit.> dataset, which comprises approximately 5000 dishes, offering a wide array of food items for analysis. For comprehensive testing, we carefully selected 31 plates from this dataset. Each plate was chosen to encompass diverse compositions of ingredients, ensuring a broad scope for evaluating our segmentation model's performance across various food categories. Each selected plate contains between 130 and 560 frames, totaling several hundred per plate. To streamline processing and reduce computational overhead, we employed an image processing pipeline to minimize the number of frames, specifically Imagededup <cit.>. This reduction consolidated the frames to approximately 20 to 65 keyframes per plate, preserving critical visual information essential for accurate segmentation analysis <cit.>. Additionally, we incorporated the Vegetables & Fruits (V&F) dataset into our evaluation <cit.>. Like Nutrition5k, V&F is a substantial dataset containing multiple dishes and frames. We strategically selected a subset of dishes from both datasets based on food variation, video complexity, and ingredient diversity to ensure efficient validation of our framework. Similarly, we employed a near-image similarity approach <cit.>, eliminating highly overlapped images and optimizing dataset integrity <cit.>. Furthermore, we manually annotated each selected dish frame using LabelMe <cit.>. This meticulous annotation resulted in a dataset comprising 31 dishes from Nutrition5k with 1356 annotated frames and 11 dishes from V&F with 2308 annotated frames, totaling 42 dishes with 3664 annotated frames. This curated dataset forms a robust foundation for rigorous testing and validation of our segmentation framework, facilitating accurate assessment across diverse culinary compositions. Our methodology ensures robust food segmentation in video data by systematically integrating pre-trained SETR for initial segmentation and memory-based XMem for tracking without retraining. Evaluated using mean average precision and recall, our approach provides quantitative measures of segmentation accuracy and temporal coherence, demonstrating its efficacy in real-world applications such as the FoodSeg103 dataset. 
This combination of transformer-based segmentation and memory-based tracking advances state-of-the-art video segmentation techniques, offering reliable food recognition and analysis tools. § EXPERIMENTAL RESULTS Our framework is evaluated on two public datasets: Nutrition5k dataset <cit.> and Vegetables and Fruits dataset <cit.>. We apply two quality metrics: mAP and recall metrics. §.§ Implementation settings We ran the experiments using an NVIDIA GeForce RTX 2080 Ti with 11GB of VRAM. For SETR, we set one-shot image segmentation; for keyframe selection, we set the hamming distance to 12. §.§ Quality Metrics We outline the quality metrics used to evaluate the performance and effectiveness of our framework. These metrics are crucial for assessing the model's accuracy, efficiency, and reliability. Mean Average Precision (mAP) <cit.> and recall <cit.> are the primary metrics for evaluating the FoodMem model's performance. By focusing on these metrics, we aim to ensure that our model is accurate and reliable for practical applications in food segmentation tasks. These metrics provide insights into the model's capability to detect and segment food items effectively, which is essential for dietary assessment and culinary automation applications. The subsections below detail the two key metrics employed in our evaluation. §.§.§ Mean Average Precision mAP is a comprehensive measure that combines precision and recall to evaluate the accuracy of object detection models. It calculates the average precision across different recall levels, providing a metric that reflects the model’s performance in detecting food items. The mAP is defined as: mAP = 1/N∑_i=1^NAP_i where N is the number of classes, and AP_i is the average precision for class i. Using mAP, we can assess how well our model identifies and localizes food items in images, ensuring high accuracy and effectiveness. §.§.§ Recall measures the model's ability to correctly identify all relevant instances within a dataset. It is the ratio of true positive detections to the total number of relevant instances. Recall is defined as: Recall = TP/TP + FN where TP is the number of true positives, and FN is the number of false negatives. High recall indicates that the model effectively detects the majority of food items, minimizing the occurrence of missed detections. This metric is crucial for applications where it is essential to capture all instances of food items, such as in dietary assessment, where missing an item could lead to inaccurate dietary intake records. §.§ FoodMem Results We extensively evaluated our model in two experimental categories: * the bounded scenes of the Nutrition 5k dataset, and * the Vegetables & Fruits benchmark for unbounded scenes. For the experiments on the datasets, we compared it to FoodSAM <cit.>. Using the same quality metrics, we reproduced the DEVA, kMean++, and FoodSAM results. Table <ref> presents mAP and recall for each dataset. Briefly, we calculated quality metrics for each image in each considered scene to ensure consistency with previous works. We then determine the quality metrics at the scene level by averaging the quality of all images of the same scene. Next, we average the quality metrics across all scenes to compute the final quality values per method. This process is repeated for the two datasets. Notably, our model requires only one day to complete training, which is 58 times faster than the baseline, as shown in Fig. <ref>. The qualitative results on the Nutrition 5k are shown in Fig. <ref>. 
Additionally, Fig. <ref> shows the qualitative results on the V&F dataset. The figures show that our model excels in texture details, artifact correction, and missing data handling across different scene parts, surpassing other models. Table <ref> shows the quantitative results of our comparison. The table shows that our model performs better in food segmentation on different datasets. §.§ Ablation study To understand the factors contributing to FoodMem's success, we studied the impact of altering the masks generated by SETR. Specifically, we examined the changes resulting from increasing the number of masks to 3, 6, or 9. We aimed to present both qualitative and quantitative outcomes of these modifications. For qualitative analysis, Fig. <ref> compared the masks visually. For quantitative analysis, Table <ref> assessed the execution time and evaluated performance using mean average precision and recall metrics. §.§ Discussion While our framework demonstrates robust performance in maintaining temporal coherence in segmentation tasks, our findings indicate that it generates artifacts in the resulting masks when multiple clean masks are provided. This issue arises from the model's reliance on feature similarity for memory read operations. When the features of multiple clean masks are highly similar, the dot product-based similarity measure used by XMem++ can lead to ambiguous attention weight assignments, resulting in incorrect weighting of memory features. Consequently, portions of the mask may be incorrectly assigned or blended, reducing segmentation accuracy and causing visual inconsistencies, as shown in Fig. <ref> for Pear and Apple objects. These artifacts are particularly problematic in complex scenes where precise segmentation is critical. To mitigate these limitations, further research is needed to enhance the feature differentiation capabilities of XMem++. Potential approaches include improving the feature extraction process to generate more distinct feature vectors, incorporating advanced similarity measures to better handle subtle differences between features, or refining the memory update mechanism to selectively store and retrieve more discriminative features. Addressing this limitation is crucial for improving the reliability and accuracy of segmentation in complex video sequences. Limitations. Similar to the baselines, several fundamental limitations of our framework are identified, which are pivotal for comprehending our findings and guiding future research directions. * Our framework's reliance on SETR, trained on the FoodSeg103 dataset, may restrict its performance when confronted with food items or variations not adequately represented in the training data, potentially leading to inaccuracies in segmentation, especially for less common or novel food items. * Leveraging XMem++ introduces inherited constraints related to its algorithmic design and computational requirements, which could impact scalability and efficiency when processing large datasets or high-resolution videos. * Our framework exhibits sensitivity to lighting conditions, particularly in low-light environments or scenarios with significant shadows, posing challenges in accurately detecting and segmenting food objects. Addressing these limitations will enhance the framework's reliability and applicability across varied real-world conditions and datasets in future research endeavors. 
§ CONCLUSIONS AND FUTURE WORK We have developed a robust framework for video semantic segmentation that effectively segments food items across diverse datasets with high accuracy, efficiency, and flexibility, making it well-suited for practical applications in food recognition and analysis. Despite its strengths, challenges such as sensitivity to low-light conditions and difficulty recognizing less common food items have been identified for improvement. Our future research focuses on enhancing the framework's capabilities and addressing these challenges through targeted initiatives: expanding the training dataset to cover a wider range of food items and improve segmentation accuracy across culinary categories, implementing robust low-light detection mechanisms, and enhancing the memory capabilities to better adapt to varying visual contexts. Additionally, integrating volume and calorie estimation will provide comprehensive insights into food composition and nutritional content, extending the framework's utility in dietary assessment, health monitoring, and culinary analysis. By pursuing these avenues of research, we aim to improve the framework's overall performance, versatility, and practical applicability in real-world scenarios, contributing to advances in semantic food segmentation and enabling more accurate and insightful applications in food-related domains.
http://arxiv.org/abs/2407.12603v1
20240717143119
Generalization of the Central Limit Theorem to Critical Systems: Revisiting Perturbation Theory
[ "Sankarshan Sahu", "Bertrand Delamotte", "Adam Rançon" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "hep-th" ]
Sorbonne Université, CNRS, Laboratoire de Physique Théorique de la Matière Condensée, LPTMC, 75005 Paris, France Sorbonne Université, CNRS, Laboratoire de Physique Théorique de la Matière Condensée, LPTMC, 75005 Paris, France Univ. Lille, CNRS, UMR 8523 – PhLAM – Laboratoire de Physique des Lasers Atomes et Molécules, F-59000 Lille, France § ABSTRACT The Central Limit Theorem does not hold for strongly correlated stochastic variables, as is the case for statistical systems close to criticality. Recently, the calculation of the probability distribution function (PDF) of the magnetization mode has been performed with the functional renormalization group in the case of the three-dimensional Ising model [Balog et al., Phys. Rev. Lett. 129, 210602 (2022)]. It has been shown in that article that there exists an entire family of universal PDFs parameterized by ζ=lim_L,ξ_∞→∞ L/ξ_∞ which is the ratio of the system size L to the bulk correlation length ξ_∞ with both the thermodynamic limit and the critical limit being taken simultaneously. We show how these PDFs or, equivalently, the rate functions which are their logarithm, can be systematically computed perturbatively in the ϵ=4-d expansion. We determine the whole family of universal PDFs and show that they are in good qualitative agreement with Monte Carlo data. Finally, we conjecture on how to significantly improve the quantitative agreement between the one-loop and the numerical results. Generalization of the Central Limit Theorem to Critical Systems: Revisiting Perturbation Theory Adam Rançon July 22, 2024 =============================================================================================== § INTRODUCTION The physics of many-body systems has seen many important advances over the twentieth century. For both particle and statistical physics, field theory has provided a unified formalism, particularly when the physics involves length scales much larger than the microscopic scales at which the models are formulated. This is the usual case in particle physics where the ultraviolet (UV) cutoff Λ of the theory is in general unknown, that is, it is so large that its effects on low-energy physics are almost invisible <cit.>. This is also the case in statistical mechanics when the system is close to a second-order phase transition. In this case, the correlation length is very large compared to the microscopic scale – a lattice spacing, an inter-molecular distance, etc – and here again, the long-distance physics is almost insensitive to its UV cutoff. In both cases, renormalization plays a crucial role. Historically, renormalization was invented to eliminate UV divergences appearing at all orders of perturbation theory. When the theory is either renormalizable or superrenormalizable, it consists of a reparametrization, called renormalization, of coupling constants, masses, and fields leading to finite correlation functions. In the renormalization process, all reference to this UV scale is eliminated, and this is precisely what makes it possible to get rid of the UV cutoff. It is replaced by the scale where the renormalized couplings are defined. As a consequence, these couplings become scale-dependent, and their evolution with length or energy scale is given by the renormalization group (RG) equations. Renormalization is thus one of the cornerstones of the study of many-body systems. 
A second cornerstone of the study of many-body systems is the Central Limit Theorem (CLT) which states that for a set of n independent and identically distributed (iid) random variables, the probability distribution function (PDF) of their properly normalized sum is a Gaussian when n→∞, provided that their mean and variance exist <cit.>. This theorem can easily be generalized to the case of weakly correlated random variables by considering groups of correlated variables that, on scales larger than the correlation length, are effectively uncorrelated and identically distributed <cit.>. In this case, the variance of the individual degrees of freedom is replaced by the susceptibility of the system, and the CLT still holds. The two cornerstones described above are in fact two facets of the same key concept of many-body systems: Universality. For a many-body system, universality is most often described as the complete loss of memory of microscopic details at the macroscopic level, the idea being that the microscopic details of a physical system are averaged out at large scales and that what only matters for the long-distance physics is the tensorial nature of the order parameter – for the symmetry group of the system – and the space dimensionality. It is often stated that universality requires scale invariance which, existing only at criticality, makes universality the main and most striking feature of second-order phase transitions. This is however too restrictive. Universality in the above sense is in fact at work everywhere in physics, and is precisely what makes physics possible; otherwise, for a gas of weakly interacting molecules, instead of using the ideal gas law, we would have to solve the dynamics of all the quarks and electrons in all the atoms of the gas, not to mention their quantum gravitational interactions. Although below the molecular scale, the physics of the elementary degrees of freedom of the gas – electrons, quarks, etc. – is non-trivial, beyond this scale, it is the CLT that enables us to overcome this difficulty and is, therefore, the most common manifestation of universality. The difficulty with criticality is that all degrees of freedom are long-range correlated and, consequently, the CLT does not apply. Therefore, the real surprise with universality is not that it is specific to criticality, but that, on the contrary, criticality shows universality. One of the aims of the present work is to show by perturbative means that the CLT can be generalized to critical systems and that the PDF of the sum of the system's stochastic variables is no longer Gaussian.[In the following, we call for short PDF the PDF of the properly normalized sum of the system's stochastic variables.] More precisely, using the ϵ-expansion, we show that at criticality there are infinitely many such PDF depending on the way the critical limit, T→ T_c, and the thermodynamic limit, L→∞, are taken. Here T is the system's temperature, T_c its critical temperature and L the (linear) system size. We also show that all these PDFs, indexed by the ratio ζ=lim_L,ξ_∞→∞L/ξ_∞ where ξ_∞ is the correlation length of the infinite size system, are universal. Clearly, renormalization has to do with universality because the very fact that the UV cutoff can be eliminated in a renormalizable theory while maintaining fixed the physics at large scales means that microscopic details of the model do not play any crucial role at scales much larger than the cutoff. 
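To make the CLT statement recalled above concrete before turning to the critical case, the following minimal numerical sketch (our own illustration, not part of the original analysis; the lattice size, dimension and sample count are arbitrary) checks that the normalized sum of independent ±1 spins has variance χ/L^d with χ=1 and essentially vanishing excess kurtosis, i.e. that it is Gaussian to a very good approximation.

```python
import numpy as np

# Illustration of the CLT for independent +/-1 spins (our own example, not part
# of the paper's calculation): the normalized sum s_hat = L^{-d} sum_i sigma_i
# has variance chi / L^d with chi = Var(sigma) = 1 and becomes Gaussian.
rng = np.random.default_rng(0)
L, d, n_samples = 8, 3, 20000
N = L**d

sigma = rng.integers(0, 2, size=(n_samples, N), dtype=np.int8) * 2 - 1  # iid spins
s_hat = sigma.mean(axis=1)                                              # one sum per sample

print("sample variance of s_hat:", s_hat.var())
print("CLT prediction chi/L^d  :", 1.0 / N)

# An (approximately) vanishing excess kurtosis is a quick check of Gaussianity.
excess_kurtosis = ((s_hat - s_hat.mean())**4).mean() / s_hat.var()**2 - 3.0
print("excess kurtosis         :", excess_kurtosis)
```

At criticality this check fails: the excess kurtosis of the normalized sum stays finite in the L→∞ limit, and this residual non-Gaussianity is precisely what the universal rate functions computed below quantify.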
The speculation of the relationship between renormalization group and probability theory goes back at least as far as 1973 when Bleher and Sinai <cit.> hinted towards it in the context of hierarchical models, which are nontrivial statistical systems specially designed such that they can be solved exactly with the RG. This idea was extended by Jona-Lasinio <cit.>, where he further established the connection between limiting theorems in probability theory and RG, e.g. see <cit.>. In the language of the RG, the relationship goes as follows: The coarse-graining procedure at the heart of Wilson's RG consists in integrating out fluctuations of wavenumber | q|∈ ]k,Λ] and obtaining the effective Hamiltonian H_k for the modes of wavenumber smaller than k. Therefore, when all modes have been integrated out except the zero momentum mode, i.e. the sum of all variables as in the CLT, the PDF of this sum is given by exp(- H_k=0) up to a normalization factor. The flow of probability measures is thus directly related to the flow of the coupling constants in the potential part of H_k. It has already been shown in <cit.> that when the critical limit is taken first and only then the thermodynamic limit, the log of the PDF is close to (albeit slightly different from) the fixed-point potential, reinstating the connection between RG and probability theory for correlated random variables. The PDF calculation is not only interesting from a conceptual point of view but also from a pragmatic one. Obviously, universal quantities are much easier to compare between theory and experiments than nonuniversal quantities.[It should be noted that nonuniversal quantities have also been successfully calculated for some simple models such as the 3D Ising model <cit.>, quantum lattice models <cit.> and reaction-diffusion systems <cit.> using a modern nonperturbative version of Wilson's RG. However, this remains a challenge in general and is mostly impossible by perturbative or conformal bootstrap means.] Critical exponents are the most famous and most studied universal quantities. Six critical exponents rule the leading scaling behavior of the thermodynamic quantities, among which only two are independent. They have been computed with striking accuracy (in both the perturbative as well as nonperturbative setting). However, they remain difficult to compute numerically and measure experimentally, not least because, in most cases, scaling only takes place over a finite interval of the control parameter, there is disorder and/or impurities, etc. Moreover, in many cases, only one or two of these exponents can be measured. On the theoretical side, an accurate perturbative determination of these exponents is difficult because it requires going to rather large orders of perturbation theory and performing resummations. Conformal bootstrap is free of these difficulties but has been implemented so far on only a limited number of models. The same holds true for the functional RG. Correction to scaling exponents and amplitude ratios are also universal <cit.>. They have been computed perturbatively for rather simple models, e.g. in O(N) models, with similar difficulties. Even though in principle there are infinitely many of these universal quantities, what we have in the end are, most of the time, only two numbers to characterize a universality class. Of course, universal functions also exist and in principle provide a better characterization of universality classes. 
However, in practice, they are often both difficult to compute accurately and to measure experimentally with the prime example being that of the structure factor which is the universal part of the correlation function <cit.>. The PDF is the other obvious example of a universal function. A lot of efforts have been made in the past decades to devise such a PDF for strongly correlated systems most of which are rather unsystematic and riddled with mathematical discrepancies <cit.>. Recently, the PDF has been computed for the three-dimensional Ising model using a modern version of Wilson's RG called either the functional RG or the nonperturbative RG (NPRG) <cit.>. We show in this article, harnessing the power of the RG, that the PDF can be calculated perturbatively although in essence this calculation remains functional. A first perturbative calculation of the PDF in the Ising model was done long ago by Eisenriegler and Tomaschitz <cit.>. In our opinion, this article has not received the attention it deserves, probably because it is not an easy read and the comparison with numerical simulations was not easy to make at the time (we disagree with their comparison shown in their Fig. 1, see below), and the tail of the PDF was unreachable by Monte Carlo simulations. In this article, we present a systematic method, hopefully in a pedagogical way, for deriving at one loop the entire family of PDFs, indexed by the ratio ζ in the Ising and O(N) models. As in <cit.>, we show that these PDFs do not exhibit the correct large field behavior which calls for an improvement of the tail of the PDFs. Our work generalizes <cit.> by computing the entire ζ-dependent family of universal PDFs and rate functions. It shows that the PDFs can be unimodal or bimodal depending on ζ. Using RG arguments, it is shown how to perform improvements of the tails of the rate functions for the Ising model for all values of ζ and how, using the Principle of Minimal Sensitivity, these improvements can be optimized. By comparing with Monte Carlo data, we also show that our results are qualitatively correct for all values of ζ, including the negative values corresponding to the approach to criticality from the low-temperature phase. However, the agreement is not quantitative. Finally, we show that if we rescale the entire family of rate functions by one number only, the agreement between the Monte Carlo and the improved rate functions becomes excellent for all values of the field and for all values of ζ. We therefore formulate the conjecture that the main inaccuracy of the one-loop approximation is concentrated in that one number which is the overall scale of the rate functions. The manuscript is organized as follows. The general formalism is described in Sec. <ref>, and two approaches to compute the rate function at one loop are given in Secs. <ref> and <ref>. The RG improvement to correctly capture the large field behavior is described in Sec. <ref>, while Sec. <ref> discusses a generalization of our calculation to approach to criticality from the low-temperature phase. Finally, we compare our results to Monte Carlo simulations in Sec. <ref> and discuss our conjecture in Sec. <ref>. Our conclusions are given in Sec. <ref>. § THE FIELD THEORETIC FORMALISM Let us first recall what the PDF is in the case of weakly correlated random variables. We consider in the following the ferromagnetic Ising model or more generally the O(N) model. 
We therefore consider random variables σ̂_i that are N-component unit vectors defined on each site i of a hypercubic lattice in d dimensions with periodic boundary conditions of linear size L and lattice spacing a. Their joint probability distribution is given by the Boltzmann law: exp(- H/k_BT)/Z with Hamiltonian H= -J∑_⟨ ij⟩σ̂_i·σ̂_j, where J>0. The system is said to be weakly correlated if the correlation matrix G_ij∝⟨σ̂_iσ̂_j⟩ decays sufficiently fast at large distance such that ∑_i G_ij is finite. Typically, when L→∞ and T>T_c where T_c is the critical temperature of the infinite volume system, G_ij decays exponentially with the distance between sites i and j with a characteristic length scale given by the (bulk) correlation length ξ_∞. Physically, the picture becomes that of finite clusters of spins of size ξ_∞ which are strongly correlated within the cluster, with each cluster being independent of each other <cit.>. What we are interested in is the PDF of the total normalized sum of spins defined as ŝ=L^-d∑_iσ̂_i, the average of which is the magnetization per spin m=⟨ŝ⟩. We represent this quantity by P(ŝ=s). The fluctuations of ŝ are measured by the quantity ⟨ŝ^2⟩=L^-dχ with χ the magnetic susceptibility. In the limit of infinite volume (L→∞) and for a weakly correlated system, i.e. for T>T_c, one recovers the Central Limit Theorem, i.e. P(ŝ=s)∝exp(-L^d s^2/2χ). Obviously, this result cannot hold at criticality because χ diverges and it is our aim to devise a formalism allowing us to compute this PDF perturbatively in the critical limit. Since the PDF is a universal quantity, we choose to work in the continuum with the ϕ^4 theory and with periodic boundary conditions. In the following, we present the formalism for the Ising case, the generalization to O(N) being straightforward. The object of interest is therefore P(ŝ=s) ∝∫ Dϕ̂ δ(ŝ-s)exp(-∫_Vℋ[ϕ̂]), where ŝ=1/L^d∫_V ϕ̂(x), with ∫_V=∫ d^dx and the integral is limited to the volume V=L^d. The Hamiltonian is ℋ(ϕ̂(x))= 1/2(∇ϕ̂(x))^2+1/2Z_2 t ϕ̂^2(x)+1/4!Z_4 g ϕ̂^4(x). The bare couplings, that is, the couplings defined at the ultra-violet scale Λ, e.g. the inverse lattice spacing Λ∼ a^-1, are t_0=Z_2 t and g_0=Z_4 g while t and g are dimensionful renormalized couplings defined at a scale μ which is unspecified for the moment. In the following, we use dimensional regularization and the Minimal Subtraction scheme (MS) and Z_2 and Z_4 have been introduced in the previous equation to cancel the divergences coming out of the loop-expansion when it is performed at fixed t_0 and g_0.[Notice that in MS, the bare critical mass is vanishing so that criticality is reached when t_0→0, that is, t∝ T-T_c.] Dimensionless renormalized quantities, to be defined later in Eq. (<ref>), will be denoted with a bar and for μ=L^-1 by a tilde, see Eq. (<ref>). [Notice that the normalization factor in front of the PDF in Eq. (<ref>) has been omitted for simplicity.] Replacing the Dirac delta function in Eq. (<ref>) by an infinitely peaked Gaussian, i.e. δ(z)∝exp(-M^2z^2/2) with M→∞, the PDF can be interpreted as the partition function (𝒩 a normalization constant) 𝒵_M,s[h] =𝒩∫ Dϕ̂exp(-∫_V ℋ_M,s+∫_V h(x)ϕ̂(x)) at vanishing magnetic field h=0 of a system with Hamiltonian ℋ_M,s(ϕ̂(x))=ℋ(ϕ̂(x))+ M^2/2(∫_V(ϕ̂(x)-s))^2. Following <cit.>, we now show that P(s) can be conveniently written in terms of the modified Legendre transform defined by Γ_M[ϕ]+log𝒵_M,s[h] = ∫_V h(x)ϕ(x)-M^2/2(∫_V(ϕ(x)-s))^2, with ϕ(x) = δlog𝒵_M,s[h]/δ h(x) , and δΓ_M/δϕ(x) = h(x)-M^2∫_y (ϕ(y)-s) . 
Using Eq. (<ref>), we find e^-Γ_M[ϕ(x)] =𝒩∫ Dϕ̂ e^-∫_V ℋ(ϕ̂(x))+∫_V δΓ_M/δϕ·(ϕ̂-ϕ)-M^2/2(∫_V(ϕ̂-ϕ))^2. Note that although it is not explicit from its definition, Eq. (<ref>), Γ_M is independent of s as can be checked from Eq. (<ref>) or by deriving Eq. (<ref>) with respect to s. The PDF P(s) can be recovered from Eq. (<ref>) in the limit M→∞ lim_M→∞ e^-Γ_M[ϕ(x)=s] = 𝒩∫ Dϕ̂ e^-∫ℋ[ϕ̂]+h.∫ (ϕ̂-s)δ(∫ϕ̂(x)-s ), ∝ P(ŝ=s). The rate function I(s) is defined by P(ŝ=s) ∝ e^-L^d I(s, ξ_∞, L). and thus lim_M→∞Γ_M[ϕ(x)=s] =L^d I(s), where the parameters on which Γ_M depends have not been specified, see below. § THE ONE-LOOP CALCULATION The one-loop calculation of I(s) requires to expand 𝒵_M,s[h] at leading order around the mean field configuration ϕ̂_0 defined by δℋ_M,s/δϕ̂|_ϕ̂=ϕ̂_0=h, and 𝒵_M,s[h] = 𝒩exp(-∫_V ℋ_M,s(ϕ̂_0)+∫_V h.ϕ̂_0-1/2logℋ^(2)_M,s+…). Using Eqs. (<ref>) and (<ref>), we find at one-loop Γ_M[ϕ] = ∫_V ℋ(ϕ)+1/2logℋ^(2)_M,s[ϕ]-log(𝒩). The equation above is a priori ill-defined because the log term is divergent. However, once it is dimensionally regularized and the log(𝒩) term is chosen such that it is equal to the value of 1/2logℋ^(2)_M,s[ϕ] computed at vanishing field and vanishing mass (i.e. at t∝ T-T_c=0) with an addition of a constant counter-term, it becomes well-defined. We thus find (up to a constant divergent term) logℋ^(2)_M,s[ϕ]-log(𝒩)=1/L^d∑_qlog(1+t+gϕ^2/q^2+M^2L^dδ_q, 0) , where q is a d-dimensional vector with components q_i=2π n_i/L and n_i∈ℤ. We thus obtain lim_M→∞ Γ_M[ϕ(x)=s] = L^d(1/2Z_2t s^2+1/4!Z_4g s^4 +1/2 L^d∑_q≠ 0log(1+t+g s^2/2/q^2)-t^ 2L^ϵ/32π^2ϵ), where the last term in Eq. (<ref>) is precisely the constant counter-term mentioned before Eq. (<ref>). The renormalized coupling g a priori depends on a scale μ. Since there cannot be any fluctuation with wavelengths larger than the system size, it is natural to choose the scale μ=L^-1 so as to include all fluctuations between the UV scale Λ and the infrared scale L^-1. This is implicit in Eq. (<ref>). Using the dimensionless variables s̃=sL^d-2+η/2 , g̃=L^4-dg(μ=L^-1), we obtain at first order in ϵ=4-d, putting the anomalous dimension η=0 at this order lim_M→∞Γ_M[s̃, μ=L^-1] = 1/2Z_2(tL^2)s̃^ 2+1/4!Z_4g̃s̃^ 4-t^ 2L^ϵ/32π^2ϵ +1/2∑_n≠ 0log(1+2(tL^2)+g̃s̃^ 2/8π^2n^2), where n∈ℤ^d. In dimension d=4-ϵ (see Appendix <ref> and <cit.>), ∑_n≠ 0log(1+x/n^2)+(xπ)^d/2Γ(-d/2)= Δ(x)+O(ϵ), with Δ(x)=θ(x)-θ(0), and θ(z)=-∫_0^∞ dσe^-σ z/σ(ϑ^d(σ)-1-σ^-d/2), where ϑ(x) is the Jacobi ϑ function. By definition, Z_2 and Z_4 are series in the dimensionless coupling constant g̃ designed in the minimal scheme to cancel out the poles coming out of the term (xπ)^d/2Γ(-d/2). Thus, we have lim_M→∞Γ_M [s̃, L^-1] = 1/2(tL^2)s̃^ 2+2π^2 /3ũs̃^ 4 +π^2/4(tL^2/4π^2 +2ũs̃^ 2)^2[γ+log2π-3/2. +.log(tL^2/8π^2+ũs̃^ 2)]+1/2Δ(tL^2/4π^2+2ũs̃^ 2) + O(ϵ), where γ is the Euler-Mascheroni constant, g̃=16π^2ũ and the counter terms Z_2 and Z_4 are found to be Z^-1_2=1+g̃/16π ^2ϵ+O(g̃^2) , Z^-1_4=1+3g̃/16π ^2ϵ+O(g̃^2). Universality takes place in the double limit of infinite volume, L→∞, and infinite bulk correlation length, ξ_∞→∞, that behaves as ξ_∞∼t^-1/ν. We show below that this limit is not unique and that keeping the ratio ζ=lim_L,ξ_∞→∞L/ξ_∞ fixed, there are infinitely many inequivalent ways of taking this double limit and thus infinitely many universal rate functions indexed by ζ I_ζ(s̃)=lim_M,L,t^-1→∞Γ_M[s̃, L^-1] at fixed ζ=lim_L,ξ_∞→∞L/ξ_∞ . The derivation of I_ζ(s̃) proceeds in two steps. 
Firstly, if we were running the RG flow, we would obviously find that the dimensionless coupling constant ũ flows to its fixed point value u_*=3ϵ/N+8 (with N=1 for Ising model) when μ=L^-1→ 0. Secondly, since for μ≪Λ, ξ_∞^-1=μ(t/μ ^2)^ν and we have chosen μ=L^-1, we find that tL^2=ζ^1/ν. Here and in the following, in all one-loop expressions we use the one-loop exponents, e.g. for ν, 1/ν=2-ϵ/3 and η=0. Thus, by defining x=√(u_*) s̃, we finally find I_ζ(x) = 3/ϵ(1/2ζ^1/ν x^2+2π^2 /3x^4) +π^2/4(ζ^1/ν/4π^2+2x^2)^2(γ+log2π-3/2 +log(ζ^1/ν/8π^2+x^2)) +1/2Δ(ζ^1/ν/4π^2+2x^2) + O(ϵ), which shows that the universal rate functions are only functions of ζ and not of L and ξ_∞ separately. Notice that using the variable x instead of s̃ is on one hand a finite redefinition of the field when ϵ is fixed and nonvanishing and on the other hand is what guarantees that Eq. (<ref>) is the beginning of a systematic ϵ-expansion <cit.>. The above result can be easily extended to the O(N) case I_ζ, N (x) = N+8/3ϵ(1/2ζ^1/ν x^2+2π^2 /3x^4)+1/2Δ(ζ^1/ν/4π^2+2x^2) +π^2/4(ζ^1/ν/4π^2+2x^2)^2(γ+log2π-3/2 +log(ζ^1/ν/8π^2+x^2)) +(N-1)[π^2/4(ζ^1/ν/4π^2+2x^2/3)^2(γ+log2π-3/2 +log(ζ^1/ν/8π^2+x^2/3))+1/2Δ(ζ^1/ν/4π^2+2x^2/3)]+ O(ϵ), The calculation of the rate functions (<ref>) and (<ref>) has the advantage of simplicity and is based on our intuition that choosing L^-1 as the renormalization scale is the best we can do. It has the disadvantage of hiding the fact that these functions should be scaling functions which is not fully clear in (<ref>) and (<ref>). Moreover, we shall see in Section <ref> that the large field behavior of the one-loop rate function is qualitatively incorrect, see Fig. <ref> for the special case of ζ=0. The reason is clear: at large field, the rate function behaves as the fixed point potential, that is, as s̃^ δ+1 with δ=d+2-η/d-2+η, <cit.> whereas perturbation theory yields the same result but fully expanded in ϵ, that is, both δ and s̃^δ+1 are expanded. This explains the presence of logarithms, visible in Eq. (<ref>) and (<ref>), that become large at large fields and that invalidate perturbation theory in this range of fields. They are an internal signal of perturbation theory that it fails at large fields. The recipe for overcoming this problem is well known: these logarithms must be resummed to recreate the power law s̃^ δ+1 where δ takes its one-loop value. This is automatic in the NPRG formalism, but not in perturbation theory. In what follows, we show how to improve our determination of I_ζ(s̃) given in Eq. (<ref>) at large values of the field. § DERIVATION OF SCALING FUNCTION: AN ALTERNATIVE ROUTE Before looking at possible improvements to the rate functions at large fields, we must first show that they are scaling functions and determine them. To do this, we generalize the calculation of the previous section by keeping μ arbitrary, which will enable us to put these functions into a scaling form. We will then show that this is not sufficient to recover the correct large-field behavior and, in a second step, we will present various improvement schemes that will enable us to recover the correct large-field power-law behavior. The fact that the rate function I_ζ(s) is a scaling function is a consequence of Eq. (<ref>) that shows that it can be seen as the free energy of the model with Hamiltonian ℋ_M,s in the limit of large M. 
Defining δΓ_M(s,T,L)=Γ_M(s,T,L)-Γ_M(0,T_c,L), its singular part is universal and its density δγ_M is proportional to δΓ_M(s,T,L)/(L μ)^d=δγ_M(s,T,g,L), with δγ_M(s,T,g,L) a dimensionless function. This function can be written in terms of dimensionless renormalized parameters defined at a scale μ by g̅= μ^-ϵg =μ^-ϵ16π^2 u̅, s̅= μ^-1+ϵ/2Z^-1/2s, t̅= μ^-2t, L̅= μ L, where Z=Z(u̅) is the field renormalization. Taking first the limit M→∞, it satisfies μ^d δγ_∞(s̅,t̅,g̅,L̅)= μ'^d δγ_∞(s̅',t̅ ',g̅',L̅'), where the prime variables are defined at scale μ'. Defining l=μ'/μ, the evolution of the couplings and fields with l is given by their RG flow. Provided we take the thermodynamic limit (L̅→∞) as well as the critical limit (t̅→0) and we choose l small, that is, we consider large length scales compared to the UV cutoff, the coupling u̅ runs to its fixed point value u^* and Z(u^*)∼μ^-η where η is the anomalous dimension. Choosing l=L̅^-1, we obtain from Eqs. (<ref>) and (<ref>) δγ_∞(s̅,t̅,u̅,L̅) =l^dδγ_∞(s̅ l^-β/ν,t̅ l^-1/ν,u^*, L̅ l), = L̅^-dδγ_∞(s̃, ζ^1/ν,u^*,1), where s̃ is defined in Eq. (<ref>) and β/ν=(d-2+η)/2. Equation (<ref>) shows that the rate function is a scaling function of the two variables s̃ and ζ^1/ν. To make explicit the fact that I_ζ(s̃) is a scaling function, we repeat our one-loop result for Γ_M[ϕ(x)=s] but this time with couplings defined at an arbitrary scale μ instead of μ=L^-1, as we did in the previous section. At this order, Γ_M becomes lim_M→∞ Γ_M[ϕ(x) =μ^1-ϵ/2s̅] = L̅^d(1/2Z_2t̅s̅^2+2π^2/3Z_4u̅s̅^ 4 +1/2L̅^d∑_n≠ 0log(1+t̅L̅^2/4π^2+2u̅s̅^2L̅^2/n^2)+t̅^2/64π^4ϵ), where once again Z_2 and Z_4 are counterterms expanded in powers of u̅ to cancel out the poles coming out of the one-loop term and n∈ℤ^d. One can then once again use Eq. (<ref>), to compute the discrete sum in the above expression in 4-ϵ dimensions, and thus end up with lim_M→∞ L̅^ -dΓ_M[ϕ(x) =μ^1-ϵ/2s̅] = (1/2t̅s̅^ 2+2π^2/3u̅s̅^ 4+π^2/4(t̅/4π^2+2u̅s̅^ 2)^2(γ+log2π -3/2+log(t̅/8π^2+u̅s̅^ 2))) + 1/2L̃^ 4Δ([t̅/4π^2+2u̅s̅^ 2]L̅^ 2). Now defining the variable x̅=√(u̅)s̅ and taking the limit of criticality and infinite volume together by keeping the ratio ζ=lim_L,ξ_∞→∞L/ξ_∞=lim_t̅→ 0, L̅→∞t̅^ νL̅ fixed and sending u̅ to its fixed point value u_*=3ϵ/n+8, with n=1, we get at one loop, that is, up to terms of order ϵ lim_M→∞L̅^ -dΓ_M(x̅, ζ,L̅) = 3/ϵ[1/2(ζ/L̅)^1/νx̅^2+2π^2/3x̅^4] + π^2/4(1/4π^2(ζ/L̅)^1/ν+2x̅^2)^2[γ+log2π -3/2+log((ζ/L̅)^1/ν/8π^2+x̅^2)] + 1/2L̅^ 4Δ([(ζ/L̅)^1/ν/4π^2+2x̅^2]L̅^ 2) . From the discussion above, we know that the right scaling variable for the rate function is of the form s̃=L^β/νs=L^d-2+η/2s, see Eq. (<ref>), or equivalently x̃=L̅^β/νx̅. From now on, for notational ease, we trade x̃ for x. Thus, what we want to actually create is a scaling function of the variables ζ^1/ν and x which has a systematic ϵ-expansion. For any function f of scaling variables x and ζ, one finds at order ϵ f(ζ^1/ν, x) = f(ζ^2-ϵ/3, L̅^1-ϵ/2x̅), = f(ζ^2, x̅L̅)-ζ^2ϵ/3logζ∂ f/∂ζ^2|_ζ^2, x̅L̅ -ϵx̅L̅/2logL̅∂ f/∂ (x̅L̅)|_ζ^2, x̅L̅+O(ϵ^2), Now, since the scaling function f itself has an ϵ-expansion, i.e. one can write f as f(x)=f_0(x)/ϵ+f_1(x)+O(ϵ). Thus, Eq. (<ref>), can be written as f(ζ^1/ν, x) = f_0(ζ^2, x̅L̅)/ϵ+f_1(ζ^2, x̅L̅)-ζ^2/3logζ∂ f_0/∂ζ^2|_ζ^2, x̅L̅ -x̅L̅/2logL̅∂ f_0/∂ (x̅L̅)|_ζ^2, x̅L̅+O(ϵ), Using Eqs. (<ref>), (<ref>) and  (<ref>), we find f_0(ζ^1/ν, x)=3/2ζ^1/νx^2+2π^2 x^4, and f_1(ζ^1/ν, x) = π^2/4(ζ^1/ν/4π^2+2x^2)^2[γ+log2π-3/2+log(ζ^1/ν/8π^2+x^2)] +1/2Δ(ζ^1/ν/4π^2+2x^2). 
Thus, one can write the total rate function I_ζ(x) as [ There is a little subtlety here. Actually, there is an extra term of the form ζ^4logL̅ in the expression below which cannot be recast in the scaling form. However, if one considers the perturbative expansion of the quantity f(ζ^1/ν, x)-f(ζ^1/ν, 0), this term no longer appears and the identification of f_0 and f_1 works fine. This does not cause any problem since this constant term can be easily absorbed in the normalization constant of the PDF and we can in principle just work with the perturbative expansion of f(ζ^1/ν, x)-f(ζ^1/ν, 0) instead of just f(ζ^1/ν, x). This is reminiscent of the choice of the log𝒩 term in Eq. (<ref>).] I_ζ(x) = 3/ϵ(1/2ζ^1/ν x^2+2π^2 /3x^4)+π^2/4(ζ^1/ν/4π^2+2x^2)^2(γ+log2π -3/2+log(ζ^1/ν/8π^2+x^2))+1/2Δ(ζ^1/ν/4π^2+2x^2) + O(ϵ), An important remark is in order here: Eq. (<ref>) is exactly the same as Eq. (<ref>). We have checked that this also holds true in the O(N) case and that Eq. (<ref>) corresponds to a scaling function expanded in ϵ. This means that our choice of scale μ=L^-1, performed in Eq. (<ref>) on physical ground to optimize the determination of the rate function, is in fact what makes it a scaling function. It also means that recasting I_ζ under the form of a scaling function is not sufficient to get rid of the large logarithms appearing at large fields. § THE LARGE FIELD BEHAVIOR AND THE RG IMPROVEMENTS To analyze the field dependence of the rate function, it is more convenient to work with I_ζ(x)-I_ζ(0) instead of I_ζ(x), since the constant I_ζ(0) only changes the PDF's normalization (see also footnote <ref>). Using Eq. (<ref>), we find that at one loop, this quantity is of the form I_ζ(x)-I_ζ(0)=S_0(x)/ϵ+S_1(x)+O(ϵ), with S_i(x)=f_i(x)-f_i(0) and f_i introduced in the previous section. This perturbation expansion is only valid if S_1(x)/S_0(x) is finite for all values of x. However, this quantity diverges S_1(x)/S_0(x) at large field which invalidates perturbation theory in this domain. This calls for an improvement and the aim of this section is to cure the divergence that appears in the large field behaviour of I_ζ(x) using RG. To do so, we need to take one step back. Our starting point is Eq. (<ref>). Near criticality where u can be replaced by its fixed point value u_*, it reads lim_M→∞ L̅^-dΓ_M(x̅, t̅,L̅) = 3/ϵ[1/2t̅x̅^2+2π^2/3x̅^4] + π^2/4(1/4π^2t̅+2x̅^2)^2(γ+log2π-3/2 +log(t̅/8π^2+x̅^2))+ 1/2L̅^4Δ([t̅/4π^2+2x̅^2]L̅^2) + O(ϵ) , with x̅≃√(u_*)s̅. As explained above, we compute from this expression the ratio S_1(x̅, ζ,L̅)/S_0(x̅, ζ,L̅) in the limit t̅→ 0 and L̅→∞ and study when it becomes large. Let us notice that, as already said above, instead of Γ_M(x̅, t̅, L̅), it is more convenient to consider Γ_M(x̅, t̅, L̅)-Γ_M(0, t̅,L̅). We find that two different ranges of parameters are problematic for the ratio S_1/S_0 in the double limit t̅→ 0 and L̅→∞. They are summarized in Table <ref>, where we show that since we are working in the thermodynamic limit, the condition x̅^2L̅^2≫1 corresponds to the large field limit while that of x̅^2L̅^2≪1 corresponds to the small field region, i.e. x̅→ 0. We see in Table <ref> that in both cases the perturbation expansion is riddled with logarithmic divergences. The present section aims to show how to get rid of them. We again use s̅' =s̅ l^-β/ν, t̅ ' =t̅ l^-1/ν, L̅' =L̅ l . Then, choosing l small enough, Eq. 
(<ref>) can be written at criticality lim_M→∞Γ_M[x̅', t̅ ', L̅'] =L̅'^d(3/ϵ[1/2t̅ 'x̅'^2+2π^2/3x̅'^4] +π^2/4(t̅ '/4π^2 +2 x̅'^2)^2(γ+log2π -3/2+log(t̅ '/8π^2+x̅'^2)) + 1/2L̅'^4Δ([t̅ '/4π^2+2x̅'^2]L̅'^2)), with x̅'=√(u_*)s̅', just as in the previous section, where once again we have a systematic ϵ-expansion. We now fix L̅' to get rid of the divergences that appear in both the large-field and small-field limits. One crucial thing that we have already noticed, is that when we set L̅'=1, that is, when the RG flow is run down to the momentum scale μ'=L^-1, Eq. (<ref>) reverts back to Eq. (<ref>) where there is no longer any problem at small field. However, the problem survives at large field because the s̃^δ+1 behaviour is not reproduced. In principle, the presence of large logarithms in perturbative series satisfying scaling is cured by the RG, which allows the resummation of these logarithms, transforming the expansion of the quantity studied into an expansion of the exponent of its power law behavior. The difficulty here lies in the fact that the PDF is a function, not a number, and that this RG improvement must be functional. In other words, the L̅' scale must depend on the field s̃ in such a way that the logx̅^2 term shown in Table <ref> disappears. This means, in terms of the variable x defined in Eq. (<ref>), the behaviour of L̅' at large value of x should go as x^ν/β. From this, we come to the conclusion that L̅'(x) should behave as 1 in the small field region and as x^ν/β in the large field region. Writing the final result in terms of the quantities that are kept fixed in the critical limit and in the thermodynamic limit, that is, x and ζ, we get for the improved rate function I_ζ(x) =3L̅'^d/ϵ[1/2(ζ/L̅')^1/νx^2/L̅'^2β/ν+2π^2/3x^4/L̅'^4β/ν] +L̅'^d[π^2/4(1/4π^2(ζ/L̅')^1/ν+2x^2/L̅'^2β/ν)^2(γ+log2π-3/2 +log((ζ/L̅')^1/ν/8π^2+x^2/L̅'^2β/ν))] +L̅'^d[1/2L̅'^4Δ([(ζ/L̅')^1/ν/4π^2+2x^2/L̅'^2β/ν]L̅'^2)] + O(ϵ), where L̅' has the desirable small field and large field behavior as discussed above. One must notice that there is not just one but infinitely many ways to carry out the RG improvement depending upon the functional form of L̅'(x). We choose a family of RG improvements indexed by a parameter α L̅'(x, α)=1+α x^ν/βe^-ζ^1/ν/8π^2x^2 . Notice that at one loop and in d=3, ν/β=2. Notice also that we have chosen a ζ-dependent function L̅'. The reasoning behind this choice is as follows. As we increase the value of ζ, the rate at which the rate-function grows increases. We therefore need to design an improvement in such a way that it can smoothly interpolate between the large field and the small field behaviour. Due to this we must encode in our improvement some kind of damping with ζ. As is evident in Eqs. (<ref>) and (<ref>), there always exists a competition between ζ^1/ν/8π^2 and x^2 and it is only after x^2 becomes much larger than ζ^1/ν/8π^2 that the large field limit is truly reached. It is by keeping these factors in mind that the improvement in (<ref>) has been designed. Let us notice that the family of improvements parameterized by α, Eq. (<ref>), differ quantitatively but they all yield the correct large field behavior of the rate function which is proportional to s̃^δ+1, with δ+1=6 (instead of 5.79), whereas the unimproved rate function behaves at large field as s̃^4 with large logarithmic corrections. Before comparing with Monte Carlo simulations, we have to decide how to fix α. 
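Before turning to the choice of α, a minimal numerical sketch of this improvement family may help fix ideas (our own illustration: it assumes the one-loop values 1/ν=2-ϵ/3 and ν/β=2 at ϵ=1, reads the exponential damping of Eq. (<ref>) as exp[-ζ^1/ν/(8π^2x^2)], and uses arbitrary sample values of x, α and ζ).

```python
import numpy as np

# One-loop exponents in d = 3 (epsilon = 1): 1/nu = 2 - eps/3 and nu/beta = 2.
EPS = 1.0
INV_NU = 2.0 - EPS / 3.0
NU_OVER_BETA = 2.0

def L_improved(x, alpha, zeta):
    """Improvement scale Lbar'(x, alpha): close to 1 at small field and growing
    as alpha * x^(nu/beta) at large field, with the crossover delayed until
    x^2 dominates zeta^(1/nu) / (8 pi^2), as discussed in the text."""
    damping = np.exp(-zeta**INV_NU / (8.0 * np.pi**2 * x**2))
    return 1.0 + alpha * x**NU_OVER_BETA * damping

# Interpolation between the two regimes for the PMS value alpha = 6.78 quoted below.
alpha, zeta = 6.78, 1.0
for x in (0.01, 0.1, 1.0, 10.0):
    print(f"x = {x:6.2f}   Lbar'(x) = {L_improved(x, alpha, zeta):10.3e}")
```

The printout shows L̅'≈1 well below the crossover and L̅'≈α x^ν/β well above it, which is exactly the interpolation the improvement was designed to produce.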
We use the Principle of Minimal Sensitivity (PMS) that states that since the exact rate function should be independent of the RG improvements, the best we can do for fixing α is to choose the value where the dependence of I_ζ(x) on α is the smallest, that is, the value of α for which it is stationary. This principle has been used in the NPRG context and other contexts, see <cit.> and references therein, and it has made it possible to determine the values of critical exponents of the O(N) models very accurately <cit.>. For I_ζ(x) considered as a function of α, the difficulty is that it is a function and not a number that has to be stationary and it is a priori not trivial that I_ζ(x) is stationary in α for all values of x simultaneously. However, this is what we find with high precision and it allows us to select a single optimal value of α, see Section <ref>. The RG improvement also enables us to predict a non-trivial behavior of the tails of the PDF. Consider Eq. (<ref>), one can see that in the large field limit i.e. x→∞, the leading behavior of I_ζ(x) is given by the power law x^δ+1. This is what is expected since that is what we designed the RG improvement for. What is striking is that, in the large field regime, i.e. x→∞ there exists a sub-leading behavior of the rate function which goes as -log(x)=-δ-1/2log(x)+O(ϵ). This means that in the large field regime, the PDF must go as x^δ-1/2e^-x^δ+1. This behavior has been previously predicted in the Ising model some time ago <cit.>, and is, in fact, generic at and out-of- equilibrium <cit.>. § ANALYTIC CONTINUATION TO THE LOW-TEMPERATURE PHASE In the present section, we discuss how our results can be generalized to the approach to criticality from the low-temperature phase: T→ T_c^-. This corresponds to the case t<0, which we parametrize by a negative ζ, i.e. we generalize it to ζ = sgn(t)L/ξ_∞(|t|).[Note that for t<0, ξ_∞(|t|) is not the correlation length of the system since it is infinite for N>1 and different from that of the disordered phase for N=1. It is however a convenient way to universally parameterize the family of rate functions in this phase.] As t is traded for ζ^1/ν on the disordered side, we trade it for -|ζ|^1/ν on the ordered side. Although the entire family of rate functions for negative ζ's still remains inaccessible in the current approach, we show that not all is lost in the low-temperature phase. In essence, we show how to analytically continue Eq. (<ref>) for negative values of the argument in ϵ=4-d expansion. Leaving the details to Appendix <ref>, in 4-ϵ expansion one can rewrite Eq. (<ref>) as ∑_n≠ 0log(1+ω/π/n^2) = -ω^2/ϵ -3/4ω ^2+∫_1^∞dx 1/x(1-e^-ω x)[ϑ^4(x) -1]+∫_1^∞dx x(1-e^-ω x^-1)[ϑ^4( x)-1] -[E_1(ω)+log(ω)+γ] -1/2[1-(1-ω)e^-ω -ω^2(E_1(ω)+log(ω)+γ)]+ O(ϵ), where E_n=∫_1^∞ dσσ^-ne^-σ z and we have used E_3(ω)=1/2((1-ω)e^-ω+ω^2E_1(ω)) in Eq. (<ref>). Using lim_δ→0^±E_1(-|x|± iδ)=-Ei(|x|)∓ i π and analytically continuing log(-|x|) such that log(-|x|)=log(|x|)± iπ in upper or lower half plane respectively, one can write for ω<0 E_1(ω)+log(ω) = -Ei(-ω) +log(-ω). Thus for ω<0, Eq. (<ref>) becomes ∑_n≠ 0log(1+ω/π/n^2) = -ω^2/ϵ -3/4ω ^2+∫_1^∞dx 1/x(1-e^-ω x)[ϑ^4( x) -1]+∫_1^∞dx x(1-e^-ω x^-1)[ϑ^4(x)-1] -[-Ei(-ω)+log(-ω)+γ]-1/2[1-(1-ω)e^-ω -ω^2(-Ei(-ω)+log(-ω)+γ)]+ O(ϵ). The expression (<ref>) can be dimensionally regularized as is done in the case of ω>0 by adding counterterms and removing the pole singularity in ϵ. 
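This continuation can be checked numerically with standard special functions (a sketch assuming SciPy's exp1, expi and quad; the sample values of ω are arbitrary): on both sides of ω=0, the combination E_1(ω)+log(ω)+γ, continued for ω<0 to -Ei(-ω)+log(-ω)+γ as above, should coincide with the entire function ∫_0^1 dσ(1-e^-ωσ)/σ quoted in Appendix <ref>.

```python
import numpy as np
from scipy.special import exp1, expi
from scipy.integrate import quad

EULER_GAMMA = 0.5772156649015329

def continued_combo(omega):
    """E_1(omega) + log(omega) + gamma for omega > 0 and its real analytic
    continuation -Ei(-omega) + log(-omega) + gamma for omega < 0 (the +/- i pi
    pieces of E_1 and of the logarithm cancel, as discussed in the text)."""
    if omega > 0:
        return exp1(omega) + np.log(omega) + EULER_GAMMA
    return -expi(-omega) + np.log(-omega) + EULER_GAMMA

def reference(omega):
    """Entire function of omega: integral_0^1 dsigma (1 - exp(-omega*sigma))/sigma."""
    return quad(lambda s: (1.0 - np.exp(-omega * s)) / s, 0.0, 1.0)[0]

for omega in (-2.0, -0.5, 0.5, 2.0):
    print(f"omega = {omega:+.1f}   continuation = {continued_combo(omega):+.6f}"
          f"   reference = {reference(omega):+.6f}")
```

The two columns should agree to quadrature accuracy on both sides of ω=0, confirming that the ±iπ contributions indeed cancel in the regularized sum.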
Notice that ∫_1^∞dx 1/x(1-e^-ω x)[ϑ^4(x)-1] diverges for ω≤-π (even after the dimensional regularization is performed). This is expected since the sum ∑_n≠ 0log(1+ω/π/n^2) itself is not defined for ω≤-π. Since in practice, from Eq. (<ref>), the term ω/π is actually (2x̃^2-|ζ|^1/ν/(4π ^2)), a continuous function of x, the sum is only defined for (2x̃^2-|ζ|^1/ν/(4π ^2))>-1. The worst case scenario is that of x̃=0, where the argument of the discrete sum is the most negative. Hence for our analytic continuation to hold we must have |ζ|^1/ν<4π^2, which translates to ζ⪆-9. The expression for I_ζ(x) for -9<ζ<0 can thus be obtained by replacing ζ^1/ν with -|ζ|^1/ν in Eq. (<ref>). Also, we must point out that the improvement that works for positive values of ζ does not work for the negative ones. The reason for this is rather simple. In the low-temperature phase, there exists a non-vanishing magnetization and hence the true large field limit is reached for higher values of x̃. Due to this, even though the large field behavior remains the same for any value of ζ (positive or negative), i.e. I_ζ(x̃)∝x̃^δ+1 for large x̃, the way to interpolate between the small field and the large field behavior must be different. Thus, getting rid of the large logarithms occurring at large fields is in principle possible, as in the high-temperature phase, but the matching with moderate values of the field is less trivial because of the presence of a nonvanishing spontaneous magnetization. Designing a family of improvements for negative values of ζ must therefore be possible but remains beyond the scope of our present work. § COMPARISON WITH MONTE-CARLO SIMULATIONS We compare below the Monte Carlo (MC) data of Ref. <cit.> and the one-loop results with and without RG improvement, both for the rate function and the PDF. All curves have been obtained in d=3 in the Ising case. The interest in showing these two quantities is that the PDF is of order unity at small fields and extremely small at large fields, making the structure of this function invisible in the large field regime, and vice versa for the rate function. In particular, the discussion about the RG improvement will focus only on the rate function. §.§ The two-scale-factor universality The comparison between the PDF obtained from the lattice Ising model and the PDF obtained from the ϕ^4 model requires a priori getting rid of several non-universal factors. Once these factors are eliminated, the different PDFs are universal and can be compared. This section aims to analyze how to match results obtained for instance in the lattice Ising model and the ϕ^4 theory. The general RG theory tells us that two non-universal amplitudes survive in the renormalized theory: this is called the two-scale-factor universality <cit.>. These two scales are related on one hand to the scale of the order parameter, which is of course different between the Ising model and a scalar field theory, and, on the other hand, to the amplitude in front of the parameter t in Eq. (<ref>), which means that the distance to the critical temperature is a priori not measured in the same way in the two models. This second amplitude translates to an amplitude for the correlation length when t is eliminated in favor of ξ_∞ and this amplitude is of course also nonuniversal. However, it is possible to decide on a protocol to define in the same way the correlation lengths of the two models. 
We can, for instance, decide to define the correlation length from the long-distance behavior of the two-point correlation function: ⟨σ̂_i σ̂_j⟩∝exp(-r_ij/ξ_∞) for r_ij→∞.[Note that there are many ways of defining the correlation length, all of which are quantitatively different, but all of which behave like t^-ν close to T_c, and are related to each other by a universal amplitude ratio.] Of course, these two correlation lengths are measured in two different units, the lattice spacing a for the Ising model and either the UV cutoff or an arbitrary scale μ^-1 in the ϕ^4 theory such as that introduced in Eq. (<ref>). Thus, of course, even if these two quantities are defined according to the same definition in the two models, ξ_∞/a and ξ_∞μ are different. However, this discrepancy disappears in the ratio ζ because here again the system size L is measured in the same unit, either a or μ^-1. For instance, if ζ=1 this means in both systems that ξ_∞=L and the ambiguity in comparing the two systems has disappeared. The situation is somewhat similar for the scale of the order parameter because ŝ does not mean the same thing in the two systems. Noting that this scale is independent of ζ, it can be fixed for instance by expressing the PDF as a function of s/⟨ s^2⟩_0 instead of s, where ⟨ s^2⟩_0=∫ ds s^2 P_L(s,ζ=0) is the variance of the order parameter at ζ=0. In this case √(⟨ s^2⟩_0) P_L(s,ζ)= P̃(s/√(⟨ s^2⟩_0),ζ), where P̃ is the truly universal PDF, and the above equality holds only in the large L limit. Since √(⟨ s^2⟩_0)∼ L^(-d+2-η)/2, the argument of P̃ in Eq. (<ref>) is proportional to s̃ defined in Eq. (<ref>). Thus, to compare two PDFs it is convenient to make a change of variables from s to s' such that ⟨s'^2⟩=1 <cit.>. In this case, P_L(s') is directly the universal PDF for L→∞. The normalization conditions satisfied by the universal PDF are therefore: (i) it is normalized to unity as it must be a probability distribution and (ii) its second moment is set to unity by a rescaling of the field. Alternatively, one can rescale s such that the maxima of the PDF at ζ=0 are at s=±1. Since the positions of the maxima of P̃ are universal, this amounts to a universal rescaling of the argument of P̃. It is the latter procedure that we use below. §.§ Comparison with the unimproved results We show in Fig. <ref> how the PDF evolves as a function of ζ. The first observation we can make when looking at Fig. <ref> is that the concavity of the PDF at the origin changes for ζ between 2 and 3. This means that the PDF changes from a double-peak shape to a single-peak shape. We have determined the critical value ζ_c where this change occurs and found ζ_c^ 1L=2.12, while for the Monte Carlo simulations this occurs for ζ_c^ MC between 2.05 and 2.1.[This bound on ζ_c^ MC has been obtained by Monte Carlo simulations as described in <cit.>, collecting ∼ 150× 10^6 configurations of magnetization and energy for L=16,32,64. Using histogram reweighting to change the temperature and thus ζ, we can assert that ζ_c^ MC∈[2.05,2.1] independent of the system size.] This shows the very good accuracy of the one-loop determination of this highly nontrivial universal quantity. Let us notice that since the RG improvement is only effective at large fields, the value of ζ_c^ 1L is not modified by it. Second, we observe that for ζ>ζ_c, the PDF is unimodal and could thus seem Gaussian. This is not too surprising for the typical values of the field because a large ζ means that L≫ξ_∞→∞. 
In this case, for the typical values of s̃, the PDF is indeed well approximated by a Gaussian around s̃=0, even at criticality. It is only in the tail of the distribution, that is, in the large field limit that the difference to strict Gaussianity shows up, something almost invisible on the PDF. Furthermore, both numerically and theoretically, the non-Gaussian behavior of the tails can be shown to hold for the rate functions, whose exact leading behavior is s̃^δ+1 <cit.>. We show in Figs. <ref> and <ref> the plots of the rate function and the PDF for ζ=0, where they are compared to the Monte Carlo data of Ref. <cit.>. We observe that qualitatively, i.e. for the overall shape, with a single or double peak, the comparison between the PDF or the rate function for ζ=0 with the Monte Carlo data is good but that the quantitative error is rather large. We have checked that typically the same holds true for all positive values of ζ. It is thus rather surprising that the one-loop determination of ζ_c is so accurate while the overall shape of the PDFs is not. Let us notice that the comparison of the PDFs at ζ=0 shown in Fig. <ref> differs from that given in Fig. 1 of Ref. <cit.>. We agree with the one-loop calculation of the PDF at ζ=0 derived in this reference and that is rederived above. However, we disagree with the fact that the curve obtained by Monte Carlo simulations in Fig. 15 of Ref. <cit.>, and which is used in Fig. 1 of Ref. <cit.>, corresponds to ζ=0, as assumed in Ref. <cit.>. We have found from the data given in Ref. <cit.> that the value of ζ to which this PDF corresponds is ζ=1.18. By performing a Monte Carlo simulation for ζ=1.18, we have confirmed the Monte Carlo data given in Ref. <cit.> and in Fig. 1 of Ref. <cit.>. Notice also that our Monte Carlo data at ζ=0 agree with the large-scale simulations of Ref. <cit.>. It is therefore a numerical coincidence that the PDF found at one loop for ζ=0 coincides with the PDF found in Monte Carlo simulations for ζ=1.18 and we reaffirm that our Fig. <ref> gives the correct comparison between the one-loop and Monte Carlo results. As already said in the previous section, the large field behavior is not reproduced by the one-loop results. This is the reason for the RG improvement proposed in Section <ref> and we show now how it indeed improves the comparison with Monte Carlo data. §.§ Comparison with the RG improved results The reason for the necessity of the improvement and the way the one-loop results can be improved at large field values have been explained in Section <ref>. What remains to be shown is how the PMS allows us to select an optimal value of α in Eq. (<ref>). We find that as α increases from α=0, the tail of the curve I_ζ(x) moves globally upwards, then stops moving and by further increasing α, finally moves globally downwards. This is shown in Fig. <ref>. This is the typical scenario where the PMS is useful for optimizing a function because there exists a particular value α_ opt where the entire function I_ζ(x) is stationary. We find α_ opt=6.78. One crucial point to notice here is that α_ opt is independent of ζ: it is always for the same value of α that I_ζ(x) becomes stationary. More about this functional PMS is discussed in Appendix <ref>. We can see on Fig. <ref> that the tail of the rate function is better reproduced with the improved curve than with the unimproved one. 
It is in fact possible to show that at large field, the improved curve follows the right power law, that is, s̃^δ+1 with δ+1=6 (instead of 5.79), which is not the case of the unimproved curve. However, the prefactor of this power law is clearly incorrect. The moderate-field region is not well reproduced either. We have checked that this is the case for all values of ζ. We conclude that although the RG improvement does improve the large field behavior, the quantitative discrepancy with the Monte Carlo data persists. This is clearly a consequence of the one-loop approximation. § A CONJECTURE ABOUT THE ONE-LOOP RESULTS We would now like to point out an intriguing fact about the one-loop calculation which is worth mentioning. As we have seen in Section <ref>, the magnitude of the field should be rescaled to compare the universal PDFs obtained either from Monte Carlo simulations of the Ising model or from the one-loop calculation performed from the ϕ^4 theory. However, it is not allowed to rescale the rate function itself. If we nevertheless perform such a rescaling for one value of ζ, say ζ=0, and we perform the same rescaling for all positive values of ζ, then we find that there is an almost perfect match between the numerical data and the one-loop results for all values of ζ. Even more remarkable, we find that the improved one-loop rate functions optimized by the PMS that selects the value of the parameter α defined in Eq. (<ref>), that is, α_ opt.=6.78, is the best fit of the numerical data for all values of s̃ and all values of ζ. This turns out to be true also for the analytic continuation to negative values of ζ. In practice, we rescale the one-loop rate function I_ζ(s̃) such that the rescaled rate function I_ζ^R(s̃) has the same position and value at its minimum as Monte-Carlo Data at ζ=0. Thus this amounts to two rescalings instead of one which we are permitted to do as discussed in Sec. <ref>. We show in Figs. <ref> to <ref> some of these rescaled rate functions I^R_ζ(s̃). Therefore, we conjecture that, unexpectedly, the error from the one-loop approximation in d=3, improved at large fields by the choice made in Eq. (<ref>) and optimized with the PMS,[We have tested the robustness of our results by using another family of improvements. Here again, the PMS yields the best possible rate functions compared to the numerical results and for the optimal value of the parameter α of this family these rate functions fit again very well the numerical data for all values of ζ.] is almost entirely concentrated in one number: the scale of the rate functions. If this conjecture is correct, the one-loop computation has a much stronger predictive power than expected, at least in three dimensions. Obviously, this conjecture needs to be tested on many other models before it can be relied upon. Note that the rescaling of the rate functions does not alter their concavity at the origin. The value of ζ_c is therefore unchanged by this rescaling. This was of course a prerequisite of our conjecture, as the value of ζ_c at one loop is very precise and should hardly vary at two loops and beyond. § CONCLUSION In the present paper we study the perturbative calculation at one loop of the PDF of the properly normalized sum of spins of an Ising model in dimension d=4-ϵ. We find that at criticality there is an infinite number of universal PDFs in the thermodynamic limit depending on how the critical and thermodynamic limits are taken and whether the limit is taken from the high- or low-temperature phase. 
This infinity of universal PDFs can be parameterized by the ratio ζ=L/ξ_∞ of the system size to the bulk correlation length in the double limit L,ξ_∞→∞ at fixed ζ. We find that the shape of the PDF changes as ζ increases and goes from a bimodal shape at small ζ to a unimodal shape at large ζ. The value ζ_c where this change occurs is a universal number and it turns out that its one-loop value compares very well with its Monte Carlo value. While the tails of the PDFs are qualitatively incorrect in the perturbative calculation, it is possible to design RG improvements for these tails that restore the correct leading behavior. Not only is the correct leading behavior restored but also a subleading logarithmic correction is found in agreement with <cit.>. We show that there are infinitely many such improvements and that using the Principle of Minimal Sensitivity selects one optimal improvement that turns out to be indeed the best improvement when compared to numerical data. However, even after improvement, the comparison between the one-loop and the numerical results remains at a qualitative level. We show that by assuming the existence of one more fit parameter, namely the scale of one rate function for a given value of ζ, it is possible to drastically improve the agreement between the one-loop improved results and the Monte Carlo data. We, therefore, assume that the error from the one-loop approximation is mostly concentrated on one number, the scale of the rate function, and that subsequent orders of perturbation theory simply correct this scale. One of the questions that naturally arises in connection with our work is that of its range of applicability. For the CLT it is well known: it applies to all systems involving only iid variables having a finite mean and variance. For the 3d Ising model, it is clear that all systems belonging to the Ising universality class share the same universal family of PDFs. The question is thus that of the contour of the 3d Ising universality class. On one hand, the answer to this question is easy: all 3d and ℤ_2 invariant systems undergoing a second-order phase transition are in the Ising universality class and thus share the same universal PDFs. In this sense, we can speak of a generalization of the CLT. On the other hand, it is difficult to know a priori whether a given system undergoes a second-order phase transition and the phase diagram of all ℤ_2 invariant systems in 3d is probably impossible to fully characterize. For example, the phase diagram of scalar field theories involving a ϕ^6 term in their Hamiltonian in addition to the terms already present in the Eq. (<ref>) is not known because these are theories undergoing either a first-order phase transition or a second-order phase transition with a very non-trivial boundary between the two regions of the parameter space.[Notice that systems on this boundary can undergo a tricritical phase transition not described by the Wilson-Fisher fixed point but by the Gaussian fixed point.] Therefore, the range of applicability of the above approach is not fully under control but is exactly at the same level as the characterization of a universality class. Our study paves the way for many others. First, the extension to O(N) models is straightforward at one-loop although the RG improvements must be further studied to take into account the N-dependence of the rate functions. It will be interesting to see whether ζ_c at one loop also agrees with the Monte Carlo data and whether our conjecture works again for N>1. 
Second, the approach to d=4 is interesting for its connection to the triviality problem. Here, the problem is to understand how the large field limit and the d→4 limit have to be taken since they a priori do not commute. The approach to criticality from the low temperature phase will also be interesting since it is probable that the PDFs are again bimodal in d=4 for ζ<0. Third, it should be generalized to universality classes with more complex order parameters such as matrices. Not only will it be interesting to see whether the same phenomenon of PDF shape modification occurs when ζ is varied, but confirmation of our conjecture will also be an interesting challenge. Of course, the study should also be generalized to non-equilibrium systems undergoing continuous phase transitions or showing generic scaling. We wish to acknowledge Ivan Balog for discussions and collaboration on this topic. This work was supported by the Croatian Science fund project HRZZ-IP-10-2022-9423, an IEA CNRS project, and by the “PHC COGITO” program (project number: 49149VE) funded by the French Ministry for Europe and Foreign Affairs, the French Ministry for Higher Education and Research, and The Croatian Ministry of Science and Education. § COMPUTING THE DISCRETE SUM We compute the discrete sum that appears in (<ref>). We start by considering the sum I_d(r)=1/L^d∑_q≠ 01/r+q^2, with q=2π n/L and n∈ℤ^d, which is divergent in the UV. We thus regularize it, and add and subtract a divergent (if not regularized) part ∫d^d k/(2π)^d1/r+k^2, which corresponds to the thermodynamic limit of I_d(r) (i.e. rL^2≫ 1). The now convergent sum reads Ĩ_d(r) =1/L^d∑_q≠ 01/r+q^2-∫d^d k/(2π)^d1/r+k^2, =L^2/4π∫_0^∞ dσ e^-σL^2r/4π(L^-d∑_n≠ 0 e^-σπ n^2-∫d^dk/(2π)^d e^-σL^2 k^2/4π). Note that now, all integrals and sums are convergent for r≥0. Performing the integral over k and introducing the Jacobi ϑ function, ϑ(σ)=∑_j=-∞^∞e^-j^2 πσ, we obtain Ĩ_d(r) =L^2-d/4π∫_0^∞ dσ e^-σ L^2r/4π(ϑ^d( σ)-1-σ^-d/2). This expression is well defined both in the IR (σ→∞, for d>2) and the UV (σ→0, for d≤4) thanks to the property ϑ(σ)=√(1/σ)ϑ(1/σ) shown by Poisson summation. Integrating I_d(r) with respect to r, we obtain 1/L^d∑_q≠ 0ln(1+r/q^2)=1/L^d(θ(L^2r/4π)-θ(0))+∫d^dk/(2π)^dln(1+r/k^2), with θ(z)=-∫_0^∞ dσe^-σ z/σ(ϑ^d(σ)-1-σ^-d/2). To perform the analytic continuation to negative z, it is convenient to recast θ(z)-θ(0) as follows, θ(z)-θ(0) = lim_δ→ 0(∫_δ^∞dσ/σ(ϑ^d(σ)-1-σ^-d/2) - ∫_δ^∞e^-zσ/σ(ϑ^d(σ)-1-σ^-d/2)) = lim_δ→ 0(∫_δ^1dσ/σ(ϑ^d(σ)-1)(1-e^-zσ) +∫_1^∞dσ/σ(ϑ^d(σ)-1)(1-e^-zσ) -∫_δ^∞ ds(1-e^-zσ)σ^-d/2-1), where we have regularized the lower bound of the integral to ensure convergence during the various manipulations. Making the change of variables σ→1/σ in the first integral that appears in the expression (<ref>) and once again using the identity ϑ(σ)=√(1/σ)ϑ(1/σ), we get θ(z)-θ(0) = lim_δ→ 0(∫_1^1/δdσ  σ^d/2-1(ϑ^d(σ)-1)(1-e^-zσ^-1) +∫_1^∞dσ/σ(ϑ^d(σ)-1)(1-e^-zσ) - ∫_1^∞dσ (1-e^-zσ)σ^-d/2-1-∫_δ^1dσ/σ(1-e^-zσ)). This allows us to take the limit δ→ 0 safely. Using the relations E_n(z)=∫_1^∞dσe^-zσ/σ^n and E_1(z)+log(z)+γ=∫_0^1dσ/σ(1-e^-zσ), we obtain θ(z)-θ(0) =∫_1^∞dσ  σ^d/2-1(ϑ^d(σ)-1)(1-e^-zσ^-1) +∫_1^∞dσ/σ(ϑ^d(σ)-1)(1-e^-zσ) -(E_1(z)+logz+γ)-[2/d-E_d/2+1(z)]. Defining ω=rL^2/4π, and performing the change of variable k→kL/2π, Eq. (<ref>) becomes ∑_n≠ 0ln(1+ω/π/n^2) = ∫ d^d kln(1+ω/π/k^2) + ∫_1^∞dσ  σ^d/2-1(ϑ^d(σ)-1)(1-e^-ωσ^-1) +∫_1^∞dσ/σ(ϑ^d(σ)-1)(1-e^-ωσ) -(E_1(ω)+logω+γ)-[2/d-E_d/2+1(ω)]. § THE FUNCTIONAL PMS In Fig. 
<ref>, we show that as the parameter α of the functional optimization, see Eq. (<ref>), is increased, I_ζ(s̃) first moves upward then reaches a stationary curve at α_ opt=6.78 before starting to move downward. A key point to note here is that this phenomenon happens for all values of ζ, and with the same value α_ opt. A close examination of what happens, value of s̃ by value of s̃, shows that the value of α where I_ζ(α,s̃) is stationary depends slightly on s̃: α_ opt=α_ opt(s̃). However, this dependence on s̃ is very weak and we find that for s̃>2.5, α_ opt(s̃) varies between 6.31 and 6.78, see the value of α_ opt(s̃=3.8)=6.58 in Fig. <ref>. We have plotted I_ζ(α,s̃) for α=6.31 and α=6.78 and found that these curves are almost on top of each other. This also holds true for all values of ζ. Notice that if we keep on increasing the value of α, the rate function becomes a strictly decreasing function as can be seen in Fig. <ref>, suggesting an unphysical choice of α.
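As a numerical companion to these appendices, the sketch below (our own transcription, assuming SciPy quadrature and special functions; truncations and sample points are arbitrary) evaluates θ(z)-θ(0) from the regularized representation above at d=4 and uses the resulting Δ to tabulate the unimproved one-loop Ising rate function I_ζ(x) of the main text at ϵ=1 and ζ=0.

```python
import numpy as np
from scipy.special import exp1, expn
from scipy.integrate import quad

EULER_GAMMA = 0.5772156649015329
D = 4            # the O(eps^0) pieces are evaluated at the upper critical dimension
EPS = 1.0        # one-loop results used directly in d = 3
INV_NU = 2.0 - EPS / 3.0

def jacobi_theta(sigma, jmax=40):
    """Jacobi theta function  vartheta(sigma) = sum_j exp(-pi j^2 sigma), sigma >= 1."""
    j = np.arange(1, jmax + 1)
    return 1.0 + 2.0 * np.sum(np.exp(-np.pi * j**2 * sigma))

def delta(z):
    """Delta(z) = theta(z) - theta(0) from the regularized representation of the
    Appendix, evaluated for z > 0 at d = 4."""
    f1 = quad(lambda s: s**(D / 2 - 1) * (jacobi_theta(s)**D - 1.0)
              * (1.0 - np.exp(-z / s)), 1.0, np.inf)[0]
    f2 = quad(lambda s: (jacobi_theta(s)**D - 1.0) / s
              * (1.0 - np.exp(-z * s)), 1.0, np.inf)[0]
    return f1 + f2 - (exp1(z) + np.log(z) + EULER_GAMMA) - (2.0 / D - expn(3, z))

def rate_function(x, zeta=0.0):
    """One-loop Ising rate function I_zeta(x) as written in the main text
    (up to the field-independent normalization constant)."""
    a = zeta**INV_NU
    val = 3.0 / EPS * (0.5 * a * x**2 + 2.0 * np.pi**2 / 3.0 * x**4)
    val += np.pi**2 / 4.0 * (a / (4 * np.pi**2) + 2 * x**2)**2 * (
        EULER_GAMMA + np.log(2 * np.pi) - 1.5 + np.log(a / (8 * np.pi**2) + x**2))
    val += 0.5 * delta(a / (4 * np.pi**2) + 2 * x**2)
    return val

for x in (0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:4.2f}   I_0(x) = {rate_function(x): .4f}")
```

Evaluating rate_function on a grid of x for several values of ζ should reproduce the unimproved one-loop curves discussed in the main text; the improved curves are obtained by inserting the field-dependent scale L̅'(x,α) as in Eq. (<ref>).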
http://arxiv.org/abs/2407.13680v1
20240718165402
HPix: Generating Vector Maps from Satellite Images
[ "Aditya Taparia", "Keshab Nath" ]
cs.CV
[ "cs.CV", "cs.AI", "eess.IV" ]
http://arxiv.org/abs/2407.12306v1
20240717040254
Splatfacto-W: A Nerfstudio Implementation of Gaussian Splatting for Unconstrained Photo Collections
[ "Congrong Xu", "Justin Kerr", "Angjoo Kanazawa" ]
cs.CV
[ "cs.CV" ]
MEDFuse: Multimodal EHR Data Fusion with Masked Lab-Test Modeling and Large Language Models Wen-Chih Peng =========================================================================================== 2 § ABSTRACT Novel view synthesis from unconstrained in-the-wild image collections remains a significant yet challenging task due to photometric variations and transient occluders that complicate accurate scene reconstruction. Previous methods have approached these issues by integrating per-image appearance features embeddings in Neural Radiance Fields (NeRFs). Although 3D Gaussian Splatting (3DGS) offers faster training and real-time rendering, adapting it for unconstrained image collections is non-trivial due to the substantially different architecture. In this paper, we introduce Splatfacto-W, an approach that integrates per-Gaussian neural color features and per-image appearance embeddings into the rasterization process, along with a spherical harmonics-based background model to represent varying photometric appearances and better depict backgrounds. Our key contributions include latent appearance modeling, efficient transient object handling, and precise background modeling. Splatfacto-W delivers high-quality, real-time novel view synthesis with improved scene consistency in in-the-wild scenarios. Our method improves the Peak Signal-to-Noise Ratio (PSNR) by an average of 5.3 dB compared to 3DGS, enhances training speed by 150 times compared to NeRF-based methods, and achieves a similar rendering speed to 3DGS. Additional video results and code integrated into Nerfstudio are available at https://kevinxu02.github.io/splatfactow/https://kevinxu02.github.io/splatfactow/. § INTRODUCTION Novel view synthesis from a collection of 2D images has garnered significant attention for its wide-ranging applications including virtual reality, augmented reality, and autonomous navigation. Traditional methods such as Structure-from-Motion (SFM) <cit.> and Multi-View Stereo (MVS), and more recently Neural Radiance Fields <cit.> and its extensions <cit.> have laid the groundwork for 3D scene photometric reconstruction. However, these approaches often struggle with image collections captured at the same location under different appearances, for example time-of-day or weather variations, which exhibit photometric variations, transient occluders, or scene inconsistency. Extensions to NeRF like NeRF-W <cit.> or others <cit.> are able to capture these variations by optimizing per-image appearance embeddings and conditioning its rendering on these. These methods however are slow to optimize and render, On the other hand, 3D Gaussian Splatting <cit.> has emerged as a promising alternative, offering faster training and real-time rendering capabilities. 3DGS represents scenes using explicit 3D Gaussian points and employs a differentiable rasterizer to achieve efficient rendering. However, the explicit nature of 3DGS makes handling in-the-wild cases via per-image appearance embedding non-trivial. In this paper, we introduce a simple, straightforward approach for handling in-the-wild challenges with 3DGS, called Splatfacto-W, implemented in Nerfstudio. Our method achieves a significant improvement in PSNR, with an average increase of 5.3 dB compared to 3DGS. Splatfacto-W maintains a rendering speed comparable to 3DGS, enabling real-time performance on commodity GPUs such as the RTX 2080Ti. Additionally, our approach effectively handles background representation, addressing a common limitation in 3DGS implementations. 
There have been efforts to handle in-the-wild scenarios with 3DGS, such as SWAG <cit.> and GS-W <cit.>. However, these approaches have limitations. SWAG's implicit color prediction slows down rendering due to the need to query latent embeddings, while GS-W's reliance on 2D models restricts both training and inference speed. In contrast, Splatfacto-W offers several key contributions: * Latent Appearance Modeling: We assign appearance feature for each Gaussian point, enabling effective Gaussian color adaptation to variations in reference images. This can later be converted to explicit colors, ensuring the rendering speed. * Transient Object Handling: An efficient heuristic based method for masking transient objects during the optimization process, improving the focus on consistent scene features, without reliance on 2D pretrained models. * Background Modeling: A spherical harmonics-based background model that accurately represents the sky and background elements, ensuring improved multi-view consistency. Our approach can handle in-the-wild challenges such as diverse lighting at PSNR 17% higher than NeRF-W, while enabling Real-Time interaction, as illustrated in Figure <ref>, and videos on our https://kevinxu02.github.io/splatfactow/website. § RELATED WORK §.§ Neural Rendering in the Wild Pioneering approaches such as NeRF-W <cit.>, proposed disentangling static and transient occluders by employing two per-image embeddings (appearance and transient) alongside separate radiance fields for the static and transient components of the scene. In contrast, Ha-NeRF <cit.> uses a 2D image-dependent visibility map to eliminate occluders, bypassing the need for a decoupled radiance field since transient phenomena are only observed in individual 2D images. This simplification helps reduce the blurry artifacts encountered by NeRF-W  <cit.> when reconstructing transient phenomena with a 3D transient field. Building on previous methods, CR-NeRF <cit.> improves performance by leveraging interaction information from multiple rays and integrating it into global information. This method employs a lightweight segmentation network to learn a visibility map without the need for ground truth segmentation masks, effectively eliminating transient parts in 2D images. Another recent advancement, RefinedFields <cit.>, utilizes K-Planes and generative priors for in-the-wild scenarios. This approach alternates between two stages: scene fitting to optimize the K-Planes <cit.> representation and scene enrichment to finetune a pre-trained generative prior and infer a new K-Planes representation. Implicit-field representations have seen diverse adaptations for in-the-wild scenarios. However, their training and inference processes are time-consuming, posing a significant challenge to achieving real-time rendering. This limitation hinders their application in practical scenarios where fast rendering speed is essential, particularly in various interactive 3D applications. Inspired by the appearance embeddings in NeRF-W, Splatfacto-W uses a image wise appearance embedding to handle lighting variations. §.§ Gaussian Splatting in the Wild Recent advancements in 3D Gaussian Splatting (3DGS) <cit.> have shown promise for efficient and high-quality novel view synthesis, particularly for static scenes. However, the challenge remains to adapt these methods for unconstrained, in-the-wild image collections that include photometric variations and transient occluders. 
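The appearance-embedding mechanism referenced above is easy to sketch; the snippet below is a minimal, hypothetical PyTorch illustration of conditioning a color prediction on a learnable per-image embedding. The 48-/72-dimensional sizes echo the configuration reported later in this paper, but the real model predicts spherical-harmonics coefficients rather than RGB directly, so this should be read as a schematic, not the actual architecture.

    import torch
    import torch.nn as nn

    class AppearanceConditionedColor(nn.Module):
        # one learnable embedding per training image, concatenated with a
        # per-point feature and decoded by an MLP into an RGB color
        def __init__(self, num_images, embed_dim=48, feat_dim=72):
            super().__init__()
            self.appearance = nn.Embedding(num_images, embed_dim)
            self.mlp = nn.Sequential(
                nn.Linear(embed_dim + feat_dim, 256), nn.ReLU(),
                nn.Linear(256, 3), nn.Sigmoid(),
            )

        def forward(self, point_features, image_index):
            # point_features: (N, feat_dim); image_index: LongTensor of shape (1,)
            emb = self.appearance(image_index)               # (1, embed_dim)
            emb = emb.expand(point_features.shape[0], -1)    # share across all points
            return self.mlp(torch.cat([point_features, emb], dim=-1))

    model = AppearanceConditionedColor(num_images=100)
    rgb = model(torch.randn(1024, 72), torch.tensor([7]))    # colors under image 7's appearance

At evaluation time, as in NeRF-W-style pipelines, the embedding of an unseen image can be optimized on part of that image while the rest of the network stays frozen.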
Two significant contributions to this field are SWAG (Splatting in the Wild images with Appearance-conditioned Gaussians) <cit.> and GS-W (Gaussian in the Wild) <cit.>. SWAG <cit.> extends 3DGS by introducing appearance-conditioned Gaussians. This method models the appearance variations in the rendered images by learning per-image embeddings that modulate the colors of the Gaussians via a multilayer perceptron (MLP). Additionally, SWAG addresses transient occluders using a new mechanism that trains transient Gaussians in an unsupervised manner, improving the scene reconstruction quality and rendering efficiency compared to previous methods <cit.>. However, the color prediction for each Gaussian in SWAG is implicit, requiring a query of the latent embedding for each Gaussian in the hash grid, which slows down the rendering speed to about 15 FPS form the 181 FPS of 3DGS. Similarly, GS-W <cit.> proposes enhancements for handling in-the-wild scenarios by equipping each 3D Gaussian point with separate intrinsic and dynamic appearance features. This separation allows GS-W to better model the unique material attributes and environmental impacts for each point in the scene. Moreover, GS-W introduces an adaptive sampling strategy to capture local and detailed information more effectively and employs a 2D visibility map to mitigate the impact of transient occluders. However, this method introduces 2D U-Nets that slow down both the training and inference speed, and it also limits the rendering resolution. Our method improves on the speed limitations of both SWAG and GS-W and introduces a spherical harmonics based background model to address the background issue, ensuring improved multiview consistency. § PRELIMINARIES 3D Gaussian Splatting <cit.> is a method for reconstructing 3D scenes from static images with known camera poses. It represents the scene using explicit 3D Gaussian points (Gaussians) and achieves real-time image rendering through a differentiable tile-based rasterizer. The positions (μ) of these Gaussian points are initialized with point clouds extracted by Structure-from-Motion (SFM) <cit.> from the image set. The 3D covariance (Σ) models the influence of each Gaussian point on the color anisotropy of the surrounding area: G(x - μ, Σ) = e^-1/2 (x - μ)^T Σ^-1 (x - μ) Each Gaussian point is also equipped with opacity (α) and color (c) attributes, with the color represented by third-order spherical harmonic coefficients. When rendering, the 3D covariance (Σ) is projected to 2D (Σ') using the viewing transformation (W) and the Jacobian of the affine approximation of the projective transformation (J): Σ' = JW Σ W^T J^T The color of each pixel is aggregated using α-blending: σ_i = G(px' - μ_i, Σ'_i) C(r) = ∑_i ∈ G_r c_i σ_i ∏_j=1^i-1 (1 - σ_j) Here, r represents the position of a pixel, and G_r denotes the sorted Gaussian points associated with that pixel. The final rendered image is used to compute the loss with reference images for training, optimizing all Gaussian attributes. Additionally, a strategy for point growth and pruning based on gradients and opacity is devised. § SPLATFACTO-W We now present Splatfacto-W, a system for reconstructing 3D scenes from in-the-wild photo collections. We build on top of Splatfacto in Nerfstudio <cit.> and introduce three modules explicitly designed to handle the challenges of unconstrained imagery. An illustration of the whole pipeline can be found in Fig. <ref>. 
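The α-blending rule in the preliminaries above can be made concrete with a tiny front-to-back compositing routine for a single pixel. This is a toy numpy illustration of C(r) = Σ_i c_i σ_i Π_{j<i}(1-σ_j), not the tile-based CUDA rasterizer that 3DGS actually uses; the optional background term anticipates the compositing with a background color used later in this paper.

    import numpy as np

    def composite_pixel(colors, sigmas, background=None):
        # colors: (N, 3) colors of the depth-sorted Gaussians covering this pixel
        # sigmas: (N,) opacities already weighted by the projected 2D Gaussian falloff
        out = np.zeros(3)
        transmittance = 1.0                      # prod_{j<i} (1 - sigma_j)
        for c, s in zip(colors, sigmas):
            out += transmittance * s * c         # c_i * sigma_i * prod_{j<i} (1 - sigma_j)
            transmittance *= (1.0 - s)
        if background is not None:               # C_final = C + (1 - alpha) * C_background
            out += transmittance * background
        return out

    # toy usage: two Gaussians in front of a sky-colored background
    print(composite_pixel(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                          np.array([0.6, 0.5]),
                          background=np.array([0.2, 0.4, 0.9])))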
§.§ Latent Appearance Modeling 3D Gaussian Splatting <cit.> is designed for reconstructing scenes from consistent image sets and employs spherical harmonic coefficients for color modeling. In our approach, we deviate from this convention. Instead, we introduce a new appearance feature f_i for each Gaussian point, adapting to variations in the reference images along with the appearance embedding vector ℓ_j of dimension n. We predict the spherical harmonics coefficients 𝐛_i of each Gaussian using a multi-layer perceptron (MLP), parameterized by θ: 𝐛_i = MLP_θ(ℓ_j, f_i) where 𝐛_i = (b_i,ℓ^m) 0 ≤ℓ≤ℓmax, -ℓ≤ m ≤ℓ. The {ℓ_j}_j=1^N_img and {fi}_i=1^Ngs embeddings are optimized alongside θ, where N_img is the number of images and N_gs is the number of gaussian points. We then recover the color c_i for gaussian point i from the SH coeffcients 𝐛_i: c_i = Sigmoid(∑_ℓ=0^ℓ_max∑_m=-ℓ^ℓ b_i,ℓ^m Y_ℓ^m(𝐝_i)) Here, 𝐝_i is the viewing direction for gaussian point i. Y_ℓ^m are the spherical harmonic basis functions. This approach allows us to prevent inputting the viewing directions into the MLP, hence we can cache the gaussian status with a single inference for any appearance embedding, allowing us to have the same rendering speed as 3DGS. §.§ Transient Handling with Robust Mask Our objective is to develop an efficient method for mask creation that addresses transient objects within the optimization process of Gaussian Splatting. Gaussian Splatting's dependence on initialized point clouds results in suboptimal performance for transient object representation, leading to increased loss in affected regions. By strategically masking pixels, we aim to enhance the model's focus on more consistent scene features. We adopt a strategy similar to RobustNeRF <cit.>. We hypothesize that residuals surpassing a certain percentile between the ground truth and the rendered image indicate transient objects, and thus, their corresponding pixels should be masked. Additionally, we posit that a lower loss between the ground truth and the predicted image signifies a more accurate representation, implying fewer transient objects. According to the previous assumption, we record the maximum, minimum, and the current L1 loss between the ground truth image and the predicted image before any masking. We then linearly interpolate the current mask percentage between the maximum and minimum masking percentages (Per_max and Per_min). As optimization progresses, images with fewer transient objects exhibit lower loss, thereby reducing the mask percentage. Conversely, images with more transient objects retain higher loss. The threshold for masking is determined as follows. 𝒯_ϵ = (1 - k)% percentile of residuals for all pixels where k = L1_current - L1_min/L1_max - L1_min× (Per_max - Per_min) + Per_min We start by creating a per-pixel mask ω̃(𝐫), where inlier (i.e., pixels to be learned by the model) is 1 and outlier (i.e., pixels to be masked and not learned by the model) is 0. To ensure more efficient model convergence, we introduce an additional condition: always mark the pixels belonging to the upper n% of the image as inlier, as this region typically corresponds to the sky in most images. We define an upper n% (choosing n=40 in practice) region mask: U(𝐫) = 1 if r_y ≤ 0.4H, 0 otherwise, where H is the height of the image and r_y is the row coordinate of a pixel. Thus, ω̃(𝐫) is activated (marking the pixel as inlier) when the loss ϵ(𝐫) at pixel 𝐫 is less than or equal to 𝒯_ϵ or belongs to the upper 40% of the image. 
ω̃(𝐫) = (ϵ(𝐫) ≤𝒯_ϵ) U(𝐫), where denotes the logical OR operation. Furthermore, to capture the spatial smoothness of transient objects, we spatially blur inlier/outlier labels ω̃ with a 5 × 5 box kernel ℬ_5 × 5. The final mask 𝒲 is expressed as: 𝒲(𝐫) = (ω̃∗ℬ_5 × 5)(𝐫) ≥𝒯_∗, 𝒯_∗ = 0.4. This tends to remove high-frequency details from being classified as pixels for transient objects, allowing them to be captured during optimization. §.§ Background Modeling Since 3DGS lacks depth perception for images and outdoor images often feature large areas of solid color in the background, it is challenging to accurately represent the background in outdoor scenes. Furthermore, the initial point cloud inadequately represents the spatial positions of the sky. This leads to inconsistent representation of the sky during the 3DGS optimization process, where sky elements may appear close to the camera or adjacent to building structures and tree leaves. This occurs as new Gaussians, intended to represent the background, are split from those representing foreground objects, resulting in a scattered and inaccurate depiction of the sky and overall background. Moreover, images from in-the-wild collections exhibit varied appearances of the sky, further exacerbating this issue. Since 3DGS focuses only on image space matching, the sky often connects with the optimized scene structure, thereby losing multi-view consistency. Although we can introduce 2D depth model priors or background segmentation to force the Gaussians to represent the background in the distance, this undoubtedly increases the computational overhead and additional model dependency. Furthermore, it is unwise to use tens of thousands of Gaussians to represent relatively simple background parts of the image. To address this issue, we introduce a simple yet effective prior: the background should be represented at infinity. Given that the sky portion is typically characterized by low-frequency variations, we found that using only three levels of Spherical Harmonics (SH) basis functions can accurately model the sky. For scenes with consistent backgrounds, we can directly optimize a set of SH coefficients 𝐛 to efficiently model the background. However, in in-the-wild scenarios, backgrounds often vary across different images. To accommodate this variability, we employ a Multi-Layer Perceptron (MLP) that takes an appearance embedding vector ℓ_j as input and predicts the SH coefficients 𝐛 for the background of the current image: 𝐛 = MLP(ℓ_j), where 𝐛 = (b_ℓ^m)0 ≤ℓ≤ℓmax, -ℓ≤ m ≤ℓ. We then derive the color of the sky at infinity for each pixel's ray direction 𝐝_ray(𝐫). For a pixel at position 𝐫, the background color C_background(𝐫) is predicted as: C_background(𝐫) = Sigmoid(∑_ℓ=0^ℓ_max∑_m=-ℓ^ℓ b_ℓ^m Y_ℓ^m(𝐝_ray(𝐫))), where Y_ℓ^m are the spherical harmonic basis functions. To compute the final color for each pixel, we use alpha blending between the foreground color C(𝐫) and the background color: C_final(𝐫) = C(𝐫) + (1 - α(𝐫)) C_background(𝐫) where α(𝐫) is the alpha value (opacity) at pixel position 𝐫. Furthermore, we introduce a new loss term: the alpha loss. This loss is designed to penalize Gaussians (representing potential foreground objects) that incorrectly occupy pixels well represented by the background model. We start by picking out pixels p_i are well presented by the background model (i.e., the residual between the background and the ground truth is below a certain threshold). 
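A minimal sketch of the spherical-harmonics background evaluation described above is given next. It uses the standard real SH basis up to degree 2 (the "three levels" mentioned in the text); the coefficient values are random placeholders standing in for the MLP(ℓ_j) prediction, so the snippet only illustrates how C_background is obtained from the per-pixel ray directions.

    import numpy as np

    def sh_basis_deg2(dirs):
        # dirs: (N, 3) unit ray directions; returns the (N, 9) real SH values up to l = 2
        x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
        return np.stack([
            0.282095 * np.ones_like(x),                       # l = 0
            0.488603 * y, 0.488603 * z, 0.488603 * x,         # l = 1
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3.0 * z**2 - 1.0),
            1.092548 * x * z, 0.546274 * (x**2 - y**2),       # l = 2
        ], axis=-1)

    def background_color(ray_dirs, sh_coeffs):
        # sh_coeffs: (9, 3), e.g. predicted by the MLP from the appearance embedding
        logits = sh_basis_deg2(ray_dirs) @ sh_coeffs          # (N, 3)
        return 1.0 / (1.0 + np.exp(-logits))                  # sigmoid, as in the C_background formula

    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(4, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    print(background_color(dirs, rng.normal(scale=0.5, size=(9, 3))))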
To avoid false positives and utilize the low frequency nature of the background, we ensure that the surrounding pixels of each selected pixel also belong to the background. Otherwise, we deselect that pixel. We encourage the alpha of the gaussians corresponding to these pixels to be low. Specifically, the alpha loss L_α can be expressed as: L_α = λ×∑_𝐫∈ p_iα(𝐫) where α(𝐫) is the accumulation of the Gaussians at pixel r, and λ is a scaling factor. The set p_i is defined as: p_i = { (𝐫') : M'(𝐫') > 0.6 } where M'(𝐫) is the result of applying a 3 × 3 box filter to the residual mask M, computed as: M(r) = 1_|Ground Truth(𝐫) - Predicted Background(𝐫)| < Threshold and M'(𝐫) = (M ∗ℬ_3 × 3)(𝐫) This approach considers the smoothness of the background, ensures that only those pixels significantly represented by the background model, as confirmed by the filtered mask, contribute to the alpha loss. § EXPERIMENTS §.§ Implementation Details We minimize the L_1 loss combined with D-SSIM term and the alpha loss term to optimize the 3D Gaussians parameters alongside with the MLP F_θ weights, appearance embedding and gaussian appearance features altogether. We train 65000 iterations on a single RTX2080Ti. The appearance embedding is configured with 48 dimensions, while the gaussian appearance features are set to 72 dimensions. The architecture for the Appearance Model incorporates a three-layer MLP with a width of 256, and the Background Model employs a three-layer MLP with a width of 128. §.§ Quantitative Results We provide quantitative results using common rendering metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). Following the NeRF-W <cit.> evaluation approach, where only the embeddings for images are optimized during training, we optimize an embedding on the left half of each test image and report the metrics on the right half. We train on all the images in the datasets and pick the same test image sets as NeRF-W <cit.> for evaluation. The final quantitative evaluation is provided in Table <ref>. In this section, we also compare two variants of our method to analyze the contribution of each component of Splatfacto-W: * Splatfacto-W-A, a variation with only appearance model enabled. * Splatfacto-W-T, a variation with only appearance model and robust mask enabled. Our experiments demonstrate that our method yields competitive results. Remarkably, even without caching the SH coefficients for background and gaussian points, our method achieves real-time rendering at over 40 frames per second (fps) and supports dynamic appearance changes. With the current hyperparameters, our training process requires less than 6 GB of GPU memory and achieves the fastest performance on a single RTX 2080Ti, making training feasible on home computers. Additional image evaluation results are presented in Figure  <ref>. §.§ Background Modeling Our background model is also applicable in Splatfacto. Our method eliminates the majority of background floaters, providing greater background and depth consistency across different viewpoints without 2D guidance, as shown in Figure <ref>. More video results are available on our https://kevinxu02.github.io/gsw.github.io/webpage. § DISCUSSION Due to the lack of compression and understanding of image information by 2D models like U-Net, our method converges slowly on images with special lighting conditions, such as shadows and highlights caused by sunlight at specific times. 
Introducing additional networks and gaussian point features to learn the residuals between the image highlights and the current prediction results can alleviate this problem. However, this approach also introduces additional computational and storage overhead, which contradicts our initial objectives. Therefore, we ultimately did not adopt this method. Although our masking strategy is effective in most cases and has minimal impact on training duration, the shadows and highlights in the aforementioned scenarios can result in significant loss, leading our model to overlook these parts and further complicating their convergence. Another issue is that our SH background model can only model low-frequency backgrounds, making it less effective at representing cloud portions, which also leads to a decline in PSNR. § CONCLUSION In this paper, we introduced Splatfacto-W, an approach that significantly enhances the capabilities of 3D Gaussian Splatting (3DGS) for novel view synthesis in in-the-wild scenarios. By integrating latent appearance modeling, an efficient transient object handling mechanism, and a robust neural background model, our method addresses the limitations of existing approaches such as SWAG and GS-W. Our experiments demonstrate that Splatfacto-W achieves better performance in terms of PSNR, SSIM, and LPIPS metrics across multiple challenging datasets, while also ensuring real-time rendering capabilities. The introduction of appearance features and robust masking strategies enables our model to effectively handle photometric variations and transient occluders, providing more consistent and high-quality scene reconstructions. Additionally, the neural background model ensures improved multiview consistency by accurately representing sky and background elements, eliminating the issues associated with background floaters and incorrect depth placements. Despite these advancements, there remain challenges such as slow convergence in special lighting conditions and limitations in representing high-frequency background details. Future work will focus on addressing these issues by exploring more sophisticated neural architectures and additional network components to refine transient phenomena and enhance background modeling further. § ACKNOWLEDGMENTS This work would not have been possible without the incredible support from the Nerfstudio team. Thanks to Professor Angjoo Kanazawa for her insightful guidance and mentorship. Special thanks to Justin Kerr for his pivotal role in hinting at this research direction, providing critical feedback on my ideas, and offering continuous guidance throughout the entire project. Thanks to Ruilong Li for testing and optimizing the appearance model for general datasets. And thanks to ShanghaiTech for offering the computing resources for running the experiments.
http://arxiv.org/abs/2407.12744v1
20240717170236
Negligible Normal Fluid in Superconducting State of Heavily Overdoped Bi$_2$Sr$_2$CaCu$_2$O$_{8+δ}$ Detected by Ultra-Low Temperature Angle-Resolved Photoemission Spectroscopy
[ "Chaohui Yin", "Qinghong Wang", "Yuyang Xie", "Yiwen Chen", "Junhao Liu", "Jiangang Yang", "Junjie Jia", "Xing Zhang", "Wenkai Lv", "Hongtao Yan", "Hongtao Rong", "Shenjin Zhang", "Zhimin Wang", "Nan Zong", "Lijuan Liu", "Rukang Li", "Xiaoyang Wang", "Fengfeng Zhang", "Feng Yang", "Qinjun Peng", "Zuyan Xu", "Guodong Liu", "Hanqing Mao", "Lin Zhao", "Xintong Li", "Xingjiang Zhou" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
^1National Lab for Superconductivity, Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China ^2University of Chinese Academy of Sciences, Beijing 100049, China ^3Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Beijing 100190, China ^4Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China ^†These authors contributed equally to this work. ^*Corresponding author: XJZhou@iphy.ac.cn, xintongli@iphy.ac.cn, lzhao@iphy.ac.cn Negligible Normal Fluid in Superconducting State of Heavily Overdoped Bi_2Sr_2CaCu_2O_8+δ Detected by Ultra-Low Temperature Angle-Resolved Photoemission Spectroscopy Chaohui Yin^1,2,†, Qinghong Wang^1,2,†, Yuyang Xie^1,2, Yiwen Chen^1,2, Junhao Liu^1,2, Jiangang Yang^1,2, Junjie Jia^1,2, Xing Zhang^1,2, Wenkai Lv^1,2, Hongtao Yan^1,2, Hongtao Rong^1,2, Shenjin Zhang^3, Zhimin Wang^3, Nan Zong^3, Lijuan Liu^3, Rukang Li^3, Xiaoyang Wang^3, Fengfeng Zhang^3, Feng Yang^3, Qinjun Peng^3, Zuyan Xu^3, Guodong Liu^1,2,4, Hanqing Mao^1,2,4, Lin Zhao^1,2,4,*, Xintong Li^1,2,4,* and Xingjiang Zhou^1,2,4,* July 22, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================== In high temperature cuprate superconductors, it was found that in the overdoped region the superfluid density decreases with the increase of hole doping. One natural question is whether there exists normal fluid in the superconducting state in the overdoped region. In this paper, we have carried out high-resolution ultra-low temperature laser-based angle-resolved photoemission measurements on a heavily overdoped Bi2212 sample with a T_c of 48 K. We find that this heavily overdoped Bi2212 remains in the strong coupling regime with 2 Δ_0 / k_B T_c=5.8. The single-particle scattering rate is very small along the nodal direction (∼5 meV) and increases as the momentum moves from the nodal to the antinodal regions. A hard superconducting gap opening is observed near the antinodal region with the spectral weight at the Fermi level fully suppressed to zero. The normal fluid is found to be negligibly small in the superconducting state of this heavily overdoped Bi2212. These results provide key information to understand the high T_c mechanism in the cuprate superconductors.   
Introduction The high temperature cuprate superconductors exhibit anomalous normal state properties as well as unconventional superconducting properties.<cit.> In the underdoped region, with the increase of the doping level, the superconducting transition temperature (T_c) and the superfluid density increase while the superconducting gap decreases, indicating that superconductivity is closely related to superfluid density.<cit.> In the overdoped region, the superconducting transition temperature, the superconducting gap and superfluid density all decrease with the increase of the doping level.<cit.> One prominent issue arises as to whether there is normal fluid in the superconducting state due to the missing charge carriers if fewer and fewer carriers are condensed into the superconducting state in the overdoped region.<cit.> In Bi_2Sr_2CuO_6+δ (Bi2201) superconductors, normal electrons are observed in the superconducting state by scanning tunneling microscopy (STM) both in the real space<cit.> and in the reciprocal space<cit.> which are strongly affected by disorder. Whether it is universal that the normal fluid always exists in the superconducting state in the overdoped samples remains to be investigated. Angle-resolved photoemission spectroscopy (ARPES) is a powerful tool to study the normal state and superconducting state of the cuprate superconductors.<cit.> It can also detect the normal electrons in the superconducting state. Here we report high-resolution ultra-low temperature ARPES studies on heavily overdoped Bi_2Sr_2CaCu_2O_8+δ (Bi2212) with a T_c at 48 K. By performing ARPES measurements near the antinodal region at ultra-low temperature (1.2 K), we find that the normal fluid is negligibly small in the superconducting state in this heavily overdoped Bi2212 sample. These results provide key information for understanding the mechanism of superconductivity in cuprate superconductors. High-quality Bi2212 single crystals were grown using the traveling solvent floating zone method.<cit.> The crystals were annealed at 500 ^∘C under high oxygen pressure over 300 bar for seven days.<cit.> Magnetic susceptibility measurement (Fig. 1(a)) indicates that the prepared sample has a T_c at 48.5 K which is among the most heavily overdoped samples that can be achieved in bulk Bi2212 (Fig. 1(b)). For convenience, the sample will be referred to as Bi2212OD48K hereafter. High-resolution ARPES measurements were conducted using our laboratory-based ARPES system. It is equipped with a vacuum-ultraviolet laser operating at a photon energy of hν =6.994 eV and a R8000 hemispherical electron energy analyzer (Scienta Omicron). Employing ^3He pumping technology, the system can cool samples to the lowest temperature of ∼0.8 K. The energy resolution was set to 0.5 meV and the angular resolution is 0.3° which corresponds to a momentum resolution of 0.004 Å^-1 at the photon energy of 6.994 eV. All samples were cleaved in situ at low temperatures and measured in ultrahigh vacuum with a base pressure better than 1 × 10^-10 mbar. The Fermi level was carefully referenced by measuring polycrystalline gold which was well connected with the sample. The work function of the heavily overdoped Bi2212 was measured to be 4.35 eV. In the STM measurements, the sample was cleaved at room temperature in an ultrahigh vacuum and the data were obtained at 4.7 K. The tip used in the measurements was an etched tungsten tip which has been calibrated on the Au(111) surface. 
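The quoted momentum resolution follows from the stated photon energy, work function, and angular resolution through the standard free-electron relation k_∥ = 0.5123 √(E_kin[eV]) sinθ Å^-1. The short check below assumes emission near normal incidence and neglects the binding energy, so it is only a consistency check of the quoted numbers.

    import numpy as np

    h_nu, work_function = 6.994, 4.35           # eV, values quoted in the text
    e_kin = h_nu - work_function                # ~2.64 eV for electrons at the Fermi level
    dtheta = np.deg2rad(0.3)                    # angular resolution in radians
    dk = 0.5123 * np.sqrt(e_kin) * dtheta       # near normal emission, cos(theta) ~ 1
    print(f"{dk:.4f} A^-1")                     # ~0.0044, consistent with the quoted 0.004 A^-1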
The differential conductance (dI/dV) spectra were obtained by using the standard lock-in technique with a modulation of 3 mV and 488.137 Hz.   Results and discussion Figure 1 presents detailed Fermi surface measurements of the Bi2212OD48K sample at 1.2 K. Figure 1(c) shows the Fermi surface mapping while Figs. 1(d) and 1(e) show the constant energy contours at binding energies of 5 meV and 10 meV, respectively. With increasing binding energy, the spectral weight spreads from the nodal region to the antinodal region due to an anisotropic superconducting gap opening. In the constant energy contour with a binding energy of 10 meV (Fig. 1(e)), in addition to the strong antibonding Fermi surface, the bonding Fermi surface becomes visible near the antinodal region. But it is rather weak due to the photoemission matrix element effect. Combined with the measured band structures (Fig. 2), these results give a Fermi surface topology that is plotted in Fig. 1(f). It consists of the bonding Fermi surface and the antibonding Fermi surface. Analyzing the area of the measured Fermi surface, the doping level of the antibonding Fermi surface is ∼0.36 hole/Cu while it is ∼0.20 hole/Cu for the bonding Fermi surface. These yield an overall doping level of ∼0.28 hole/Cu, which is slightly higher than 0.24 hole/Cu estimated from empirical formula (Fig. 1(b)).<cit.> We note that, even though the antibonding Fermi surface has a doping level of ∼0.36 hole/Cu, it remains hole-like centered around (π,π). This indicates that even up to this heavily overdoped Bi2212 sample, the antibonding Fermi surface has not undergone a Lifshitz transition from a hole-like to an electron-like topology, as reported before. <cit.> Figure 2 shows the band structures of the Bi2212OD48K sample measured at 1.2 K. The antibonding band is strong and clear in all momentum cuts. The bonding band appears clearly near the antinodal region although its intensity is relatively weak. They split more and more when the momentum cuts approach the antinodal region. The spectral weight near the Fermi level gets increasingly suppressed as the momentum cuts move from the nodal to the antinodal regions. Signatures of the Bogoliubov bands for both the bonding and antibonding bands, caused by the superconducting gap opening, are clearly seen. Figure 3 shows the representative photoemission spectra of the Bi2212OD48K sample measured at 1.2 K along the antibonding Fermi surface. Fig. 3(a) shows the original photoemission spectra (energy dispersion curves, EDCs) at five Fermi momenta and corresponding background EDCs are also plotted. The observed EDCs are very sharp with a width (full width at half maximum, FWHM) of ∼5 meV for the nodal EDC (leftmost panel in Fig. 3(a)) and ∼12 meV for the EDC near the antinodal region (rightmost panel in Fig. 3(a)). The nodal EDC in our heavily overdoped Bi2212 sample is among the sharpest spectra so far observed in Bi-based cuprates. This indicates the scattering rate along the nodal direction is extremely small which corresponds to a long quasiparticle lifetime, suggesting that the material is subject to minimal scattering from disorder or other defects. To quantitatively analyze the superconducting gap, scattering rate and the fraction of the normal fluid in the superconducting state, the EDCs along the Fermi surface in Fig. 3(a) are symmetrized as shown in Fig. 3(b). We also subtracted the corresponding background (BG) from the EDCs along the Fermi surface in Fig. 
3(a) and the symmetrized background-subtracted EDCs are shown in Fig. 3(c). By subtracting the background, the EDC intensity near the antinodal region approaches zero at the Fermi level, as shown in the insets of the right three panels in Fig. 3(c). The hard gap opening is observed where the spectral intensity is nearly zero within an energy range around the Fermi level. The symmetrized EDCs are fitted by a phenomenological gap formula,<cit.> taking the self-energy Σ(k, ω)=-i Γ_1+Δ^2 /[ω+ϵ(k)+iΓ_0] where Γ_1 is a single-particle scattering rate, Γ_0 is the inverse pair lifetime, Δ is the superconducting gap and ϵ(k) represents the band dispersion which is zero at the Fermi momentum. This fitting procedure is applied to all the symmetrized background-subtracted EDCs along the antibonding Fermi surface and the fitting results are presented in Fig. 4. Figure 4(a) shows the superconducting gap measured along the antibonding Fermi surface. The superconducting gap can be well fitted by a standard d-wave gap form Δ = Δ_0 |cos (k_x a)-cos (k_y a)| / 2 where Δ_0 equals 12.0 meV. This is consistent with our STM measurement on the same sample (bottom-right inset in Fig. 4(a)) where the coherence peak lies at 12.5 meV. This results in a ratio of the gap to the critical temperature 2 Δ_0 / k_B T_c=5.8. It is significantly larger than the value of 4.3 for the weak coupling d-wave BCS superconductor.<cit.> This indicates that the heavily overdoped Bi2212 up to T_c∼48 K remains in the strong coupling regime. Figure 4(b) presents the momentum-dependent single-particle scattering rate (Γ_1) and inverse pair lifetime (Γ_0) along the antibonding Fermi surface. Γ_0 is quite small (less than 1.5 meV) across the entire Fermi surface. The single-particle scattering rate is minimal (∼5 meV) along the nodal direction, and slightly increases as the momentum moves to the antinodal region and shoots up near the antinodal region. The antinodal electrons appear to experience some additional scattering channels. Previous ARPES measurements found that, in the overdoped (La_2-xSr_x)Cu_2O_4 (LSCO)<cit.> and Tl_2Ba_2CuO_6+δ (Tl2201)<cit.>, the scattering rate is maximal along the nodal direction and decreases with the momentum moving from the nodal to the antinodal regions. The scattering rate we observed in our heavily overdoped Bi2212 sample exhibits a different momentum dependence from those observed in LSCO<cit.> and Tl2201<cit.>. We note that the scattering rate we observed in our overdoped Bi2212 sample is much smaller than those observed in LSCO<cit.> and Tl2201<cit.>. This indicates that the momentum dependence of the scattering rate we observed is more intrinsic to the overdoped cuprates. In the overdoped region, the superfluid density is found to decrease with the increasing hole doping.<cit.> This naturally raises the question of whether there exist normal electrons in the superconducting state in the overdoped samples. The normal electrons, if they exist, may be present in the form of phase separation either in the real space or reciprocal space as shown in the STM measurement of the overdoped Bi2201 samples.<cit.> In principle, ARPES can distinguish normal electrons from superconducting electrons because they behave differently in the superconducting state. The normal electrons will have a band that crosses the Fermi level and produce spectral weight at the Fermi level. On the other hand, the superconducting electrons will form a gap near the Fermi level and cause spectral weight suppression at the Fermi level. 
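Both the strong-coupling ratio and the emergence of a hard gap can be checked with a few lines. In the sketch below, the Δ_0 and T_c values are those quoted in the text, while the Γ values are representative of the fitted numbers rather than exact fit results; the spectral function is evaluated at k_F (ε(k)=0) from the phenomenological self-energy above.

    import numpy as np

    # strong-coupling ratio 2*Delta_0 / (k_B * T_c)
    k_B = 8.617e-2                          # meV / K
    delta0, t_c = 12.0, 48.0                # meV, K
    print(2 * delta0 / (k_B * t_c))         # ~5.8, vs 4.3 for a weak-coupling d-wave BCS superconductor

    # spectral function at k_F: A(w) = -(1/pi) Im[1 / (w - Sigma(w))],
    # with Sigma(w) = -i*Gamma_1 + Delta^2 / (w + i*Gamma_0)
    def spectral(omega, delta, gamma1, gamma0):
        sigma = -1j * gamma1 + delta**2 / (omega + 1j * gamma0)
        return -np.imag(1.0 / (omega - sigma)) / np.pi

    omega = np.linspace(-40.0, 40.0, 801)                              # meV
    antinodal = spectral(omega, delta=12.0, gamma1=8.0, gamma0=1.0)    # Delta > Gamma_1: hard gap
    nodal = spectral(omega, delta=0.0, gamma1=5.0, gamma0=1.0)         # Delta = 0: peak at E_F
    print(antinodal[400], nodal[400])                                  # weight at omega = 0 (E_F)

With Δ larger than Γ_1, the weight at ω=0 is suppressed by more than an order of magnitude relative to the gapless case, which is the criterion used below to bound the normal-fluid fraction.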
Under the condition that the superconducting gap is larger than the single-particle scattering rate, the spectral weight at the Fermi level can be fully suppressed to zero. In this case, if there are normal electrons coexisting with the superconducting electrons, they will produce spectral weight at the Fermi level. It can then be used to determine whether there are normal electrons and the fraction of normal electrons in the superconducting state. Figure 4(c) shows the momentum dependence of the ratio of the EDC intensity at the Fermi energy, I_E_F to that at the peak position, I_Peak in the symmetrized EDC, as illustrated in the inset of Fig. 4(c). The ratio is 1 along the nodal direction, decreases as the momentum moves away from the nodal direction and approaches nearly zero near the antinodal region. For superconducting electrons, the ratio depends on the relative magnitude of the superconducting gap (Δ) and the single particle scattering rate (Γ_1). The ratio is 1 because the superconducting gap is zero along the nodal direction. Away from the nodal direction, the superconducting gap increases rapidly while the scattering rate changes slowly, giving rise to the decrease of the ratio I_E_F/I_Peak. Upon approaching the antinodal region, the superconducting gap becomes larger than the scattering rate and a hard gap is formed around the Fermi level where the spectral weight is fully suppressed to zero. These results indicate that in the momentum space, except for the nodal region, no other regions are observed to have a zero superconducting gap. This rules out the possibility of phase separation in the reciprocal space around the antinodal region in our heavily overdoped Bi2212 sample. Our present results can also rule out the possibility of phase separation in real space in our heavily overdoped Bi2212 sample. This is made possible because we can observe a hard gap opening near the antinodal region where the scattering rate is smaller than the superconducting gap. Three factors are essential to this observation: a clean sample, ultra-low temperature and ultra-high instrumental resolution. If the superconducting phase has a hard gap with the spectral weight at the Fermi level fully suppressed to zero, the presence of the normal phase can be detected because it will produce extra spectral weight at the Fermi level. As shown in Fig. 3(c) and Fig. 4(c), the spectral weight at the Fermi level near the antinodal region is nearly zero. This indicates that the normal phase is negligible in the superconducting state in our heavily overdoped Bi2212 sample. Our present results indicate that overdoped Bi2212 behaves differently from the overdoped Bi2201. The normal fluid is clearly observed by STM in the superconducting state of the overdoped Bi2201.<cit.> But in heavily overdoped Bi2212, the normal fluid is negligible in the superconducting state. In overdoped LSCO, it is found that the superfluid density decreases with increasing doping.<cit.> In overdoped Bi2212, whether the superfluid density follows the same doping evolution remains to be investigated. Our observation of the absence of normal fluid in the superconducting state of the heavily overdoped Bi2212 can be understood if the superfluid density keeps increasing with increasing doping in overdoped Bi2212.<cit.> In that case, it provides important information to understand the origin of superconductivity in the overdoped region.   
Summary In summary, we have carried out high-resolution ultra-low temperature laser-based ARPES measurements on the heavily overdoped Bi2212 sample with a T_c of 48 K. We find that this heavily overdoped Bi2212 remains in the strong coupling regime with 2 Δ_0 / k_B T_c=5.8. The single-particle scattering rate is very small along the nodal direction (∼5 meV) and increases as the momentum moves from the nodal to the antinodal regions. A hard superconducting gap opening is observed near the antinodal region with the spectral weight at the Fermi level fully suppressed to zero. The normal fluid is found to be negligibly small in the superconducting state in this heavily overdoped Bi2212 sample. These results provide key information to understand the high T_c mechanism in the cuprate superconductors.   References 10 keimer2015quantum Keimer B, Kivelson S A, Norman M R, Uchida S and Zaanen J http://www.nature.com/articles/nature141652015 Nature 518 179–186 uemura1989universal Uemura Y J, Luke G M, Sternlieb B J et al. https://link.aps.org/doi/10.1103/PhysRevLett.62.2317 1989 Phys. Rev. Lett. 62 2317 hashimoto2014energy Hashimoto M, Vishik I M, He R H, Devereaux T P and Shen Z X http://www.nature.com/articles/nphys3009 2014 Nat. Phys. 10 483–495 bovzovic2016dependence Božović I, He X, Wu J and Bollinger A T https://www.nature.com/articles/nature19061 2016 Nature 536 309–311 mahmood2019locating Mahmood F, He X, Božović I and Armitage N P https://link.aps.org/doi/10.1103/PhysRevLett.122.027003 2019 Phys. Rev. Lett. 122 027003 tromp2023puddle Tromp W O, Benschop T, Ge J F, Battisti I, Bastiaans K M, Chatzopoulos D, Vervloet A H M, Smit S, van Heumen E, Golden M S, Huang Y K, Kondo T, Takeuchi T, Yin Y, Hoffman J E, Sulangi M A, Zaanen J and Allan M P https://www.nature.com/articles/s41563-023-01497-1 2023 Nat. Mater. 22 703–709 ye2023emergent Ye S S, Xu M, Yan H T, Li Z X, Zou C W, Li X T, Hao Z Q, Yin C H, Chen Y W, Zhou X J, Lee D H and Wang Y Y https://doi.org/10.1038/s41467-024-49325-7 2024 Nat. Commun. 15 4939 damascelli2003angle Damascelli A, Hussain Z and Shen Z X https://link.aps.org/doi/10.1103/RevModPhys.75.473 2003 Rev. Mod. Phys. 75 473 sobota2021angle Sobota J A, He Y and Shen Z X http://arxiv.org/abs/2008.02378 2021 Rev. Mod. Phys. 93 025006 liang2002growth Liang B and Lin C T https://www.sciencedirect.com/science/article/pii/S0022024801020103 2002 Journal of Crystal Growth 237 756–761 wen2008large Wen J S, Xu Z J, Xu G Y, Hücker M, Tranquada J M and Gu G D https://www.sciencedirect.com/science/article/pii/S0022024807007865 2008 Journal of Crystal Growth 310 1401–1404 zhang2016reproducible Zhang Y X, Zhao L, Gu G D and Zhou X J https://iopscience.iop.org/article/10.1088/0256-307X/33/6/067403 2016 Chin. Phys. Lett. 33 067403 presland1991general Presland M R, Tallon J L, Buckley R G, Liu R S and Flower N E https://linkinghub.elsevier.com/retrieve/pii/0921453491907009 1991 Physica C: Superconductivity 176 95–105 kaminski2006change Kaminski A, Rosenkranz S, Fretwell H M, Norman M R, Randeria M, Campuzano J C, Park J M, Li Z Z and Raffy H https://link.aps.org/doi/10.1103/PhysRevB.73.174511 2006 Phys. Rev. B 73 174511 norman1998 Norman M R, Randeria M, Ding H and Campuzano J C https://link.aps.org/doi/10.1103/PhysRevB.57.R11093 1998 Phys. Rev. 
B 57 R11093 he2018rapid He Y, Hashimoto M, Song D, Chen S D, He J, Vishik I M, Moritz B, Lee D H, Nagaosa N, Zaanen J, Devereaux T P, Yoshida Y, Eisaki H, Lu D H and Shen Z X https://www.science.org/doi/10.1126/science.aar3394 2018 Science 362 62–65 zhou2004dichotomy Zhou X J, Yoshida T, Lee D H, Yang W L, Brouet V, Zhou F, Ti W X, Xiong J W, Zhao Z X, Sasagawa T, Kakeshita T, Eisaki H, Uchida S, Fujimori A, Hussain Z and Shen Z X https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.92.187001 2004 Phys. Rev. Lett. 92 187001 plate2005fermi Platé M, Mottershead J D F, Elfimov I S, Peets D C, Liang R X, Bonn D A, Hardy W N, Chiuzbaian S, Falub M, Shi M, Patthey L and Damascelli A https://link.aps.org/doi/10.1103/PhysRevLett.95.077001 2005 Phys. Rev. Lett. 95 077001 hwang2007doping Hwang J, Timusk T and Gu G D https://iopscience.iop.org/article/10.1088/0953-8984/19/12/125208 2007 J. Phys.: Condens. Matter 19 125208 Acknowledgements This work is supported by the National Natural Science Foundation of China (Grant Nos. 12488201, 12374066, 12074411 and 12374154), the National Key Research and Development Program of China (Grant Nos. 2021YFA1401800, 2022YFA1604200, 2022YFA1403900 and 2023YFA1406000), the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (Grant No. XDB25000000 and XDB33000000), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301800), the Youth Innovation Promotion Association of CAS (Grant No. Y2021006) and the Synergetic Extreme Condition User Facility (SECUF). Author Contributions X.J.Z., X.T.L., L.Z. and C.H.Y. proposed and designed the research. C.H.Y. contributed to the sample growth and the magnetic measurements. C.H.Y., Y.Y.X., Y.W.C., J.G.Y., J.J.J., X.Z., W.K.L., H.T.Y., H.T.R., S.J.Z., Z.M.W., N.Z., L.J.L., R.K.L., X.Y.W., F.F.Z., F.Y., Q.J.P., Z.Y.X., G.D.L., H.Q.M., L.Z., and X.J.Z. contributed to the development and maintenance of the ARPES systems and related software development. Q.H.W., J.H.L., X.Z., H.Q.M. and X.T.L. contributed to the development and maintenance of the STM. C.H.Y. and Q.H.W. carried out the ARPES and STM experiments with Y.Y.X., J.H.L., J.G.Y., J.J.J., X.Z., W.K.L., L.Z., X.T.L. and X.J.Z.. C.H.Y., Q.H.W., Y.W.C., J.H.L., L.Z., X.T.L. and X.J.Z. analysed the data. X.J.Z., L.Z., X.T.L. and C.H.Y. wrote the paper. All authors participated in discussion and commented on the paper. Competing interests The authors declare no competing interests.
http://arxiv.org/abs/2407.13047v1
20240717230324
A Novel GAN Approach to Augment Limited Tabular Data for Short-Term Substance Use Prediction
[ "Nguyen Thach", "Patrick Habecker", "Bergen Johnston", "Lillianna Cervantes", "Anika Eisenbraun", "Alex Mason", "Kimberly Tyler", "Bilal Khan", "Hau Chan" ]
cs.LG
[ "cs.LG" ]
[ [ July 22, 2024 ================= § ABSTRACT Substance use is a global issue that negatively impacts millions of persons who use drugs (PWUDs). In practice, identifying vulnerable PWUDs for efficient allocation of appropriate resources is challenging due to their complex use patterns (e.g., their tendency to change usage within months) and the high acquisition costs for collecting PWUD-focused substance use data. Thus, there has been a paucity of machine learning models for accurately predicting short-term substance use behaviors of PWUDs. In this paper, using longitudinal survey data of 258 PWUDs in the U.S. Great Plains collected by our team, we design a novel GAN that deals with high-dimensional low-sample-size tabular data and survey skip logic to augment existing data to improve classification models' prediction on (A) whether the PWUDs would increase usage and (B) at which ordinal frequency they would use a particular drug within the next 12 months. Our evaluation results show that, when trained on augmented data from our proposed GAN, the classification models improve their predictive performance (AUROC) by up to 13.4% in Problem (A) and 15.8% in Problem (B) for usage of marijuana, meth, amphetamines, and cocaine, which outperform state-of-the-art generative models. § INTRODUCTION Substance use can create short- and long-term negative consequences for persons who use drugs (PWUDs) <cit.>. These consequences include mental illness, HIV/AIDS, hepatitis, drug overdose, and death. Within the U.S. alone, an estimated 161.8 million people aged 12 or older used a substance (out of which 40.0 million used an illicit drug) in the past month before being interviewed in 2021 <cit.>. Furthermore, according to <cit.>, substance-involved overdose deaths, including those related to illicit drugs and prescription opioids, continue to increase over the years, with 106,699 deaths in 2021 compared to 91,799 (+16%) in 2020 and 70,630 (+51%) in 2019. This alarming trend also applies globally, with the estimated number of PWUDs in the past 12 months reaching 296 million in 2021 from 240 million in 2011 (out of which 39.5 million and 27.3 million had drug use disorders, respectively) <cit.>. In response to these large-scale negative impacts, various public and private organizations around the globe have prompted initiatives for preventing and reducing substance use at both population and individual levels. Prominent examples are the United Nation’s Sustainability Goal 3: “Strengthen the prevention and treatment of substance abuse" <cit.> and U.S. Department of Health and Human Services (HHS)'s Healthy People 2030: “Reduce misuse of drugs and alcohol" <cit.>. Generally, approaches toward these initiatives focus on designing and deploying intervention and outreach programs/resources (e.g., rehabs and consulting services) for PWUDs, with the main goal of reducing and eliminating their usage of certain substances <cit.>. While these programs and resources have shown to be effective to some extent <cit.>, they often require volunteer participation from PWUDs, who face the difficulties and reluctances of (self-)evaluating and (self-)determining whether they want or need help <cit.>. Even when PWUDs agree to use these programs/resources, they may have already experienced prior harms such as overdose and mental illness <cit.>. 
Therefore, it is important to prevent harm from occurring in the first place by carefully identifying PWUDs at the highest risk (i.e., those who are prone to drastically increase usage of some drug) and allocating them appropriate resources to reduce or eliminate potential harms. Unfortunately, forecasting individual substance use behavior is challenging due to its complex patterns (e.g., drug use frequency and co-use of multiple drugs) and tendency to change over time at short timescales (i.e., within months) <cit.>, as well as the lack of appropriate data and models for predicting short-term future substance use (see Existing Efforts and Limitations and additional contents in the full paper attached in the supplementary material). Thus, our goal is to design accurate predictive models for modeling short-term drug usage (i.e., within months) from PWUDs to aid healthcare agencies, local communities, policymakers, and other stakeholders in the efficient allocation of resources to PWUDs who need the most help. An effective predictive model has the potential to improve the well-being of millions of PWUDs and hampers the ongoing rapid growth of drug use prevalence and drug overdose death rates. Our Approach and Associated Challenges. To address the above shortcomings of substance use data and models, we formed a collaboration between computer scientists and domain experts in substance use research, including social scientists, intervention specialists, and survey interviewers from University of Nebraska-Lincoln and Rural Drug Addiction Research Center. With IRB approval, our team recruited a local sample of 258 PWUDs in the Great Plains of the U.S. from which longitudinal survey data were collected and stored in tabular format (detailed below in Section <ref>). Despite its high value and relevance to our stated goal, the sample size is small due to various challenges during data collection (mainly disruptions due to COVID-19 and the lack of visible harm reduction movements and punitive laws in the region), As a result, our preliminary predictive models trained on currently available data achieved subpar performance with respect to the baseline (demonstrated in our full paper). These models seem to overfit due to the limited training examples (<200) with respect to the number of (preprocessed) features (>600). One effective way to tackle overfitting on small datasets is via data augmentation <cit.> i.e., creating synthetic samples based on real data to increase its sample size. Deep generative models such as the popular Generative Adversarial Network (GAN) <cit.> provide powerful tools for this purpose due to their flexibility in representing complex and high-dimensional data distributions. In recent years, these models have been extended to generate tabular data, especially healthcare records, using specialized architectures (e.g., GOGGLE <cit.>) and/or novel training algorithms (CTGAN <cit.>), which in turn achieved promising performance across different evaluation criteria. However, these models were benchmarked mostly on moderate to large datasets that typically contain at least thousands of training examples. 
In rare cases when being applied on small datasets with less than 1000 samples, e.g., https://www.openml.org/search?type=data sort=runs id=37 status=activeDiabetes (768) and Breast (569) <cit.> as reported respectively in the work for GOGGLE and TabDDPM <cit.>, the proposed models and its selected comparatives often either fail to or narrowly outperform simple baselines such as Bayesian networks <cit.> and SMOTE <cit.> in terms of their considered evaluation metrics. The benchmark datasets for state-of-the-art models also contain no more than 200 features, which is substantially smaller than the size of our feature set (>600 after preprocessing). Additionally, the survey used for collecting our tabular data contains skip logic, a commonly employed functionality in survey design for social science applications <cit.>, which has not been addressed by the aforementioned state-of-the-art models. Our Contributions. In this paper, we design a novel specialized generative model to augment our tabularized survey data with skip logic in order to improve the predictive performance of our classification models, which ultimately predict the following two short-term substance use behaviors: for a given PWUD and a certain drug, (A) whether they would increase its usage and (B) at which frequency (on a pre-defined ordinal scale) they would use it within the next 12 months. We summarize our contributions as follows: * We believe ours is the first work that addresses skip logic from surveys in tabular data generation. We demonstrate its practical value by showing both conceptually and empirically how enforcing constraints from skip logic (or skip constraints for brevity) positively affects the training of our generative model. We are also one of the first to investigate the real-world feasibility of deep generative models in settings where the number of features exceed the sample size i.e., high-dimension low-sample-size (HDLSS). (See Related Work in our full paper.) * We design a novel GAN that deals with small tabular data containing <258 samples and 210 features (209 plus one target variable). Specifically, we leverage CTGAN <cit.>, a well-known tabular GAN, by incorporating an auxiliary classifier <cit.> within its architecture to generate high-quality samples conditioned on the corresponding target variable. Since the transformed feature space has high dimensionality (over 600), we also embed a global feature selection mechanism while training the auxiliary classifier's by employing the novel approach from <cit.> that was shown to perform well on even smaller and higher dimensional data than ours. Finally, to enforce the skip constraints stated in (i), we take advantage of CTGAN's built-in conditional vector. * We implement and train the proposed GAN and use it to augment our (training) data. The augmented data is then used to train binary and multiclass classification models for predictive problems (A) and (B), respectively, for each of the following drugs: marijuana, methamphetamine, amphetamines, and cocaine. Our experimental results show that the average Area under the Receiver Operating Characteristic curve (AUROC) evaluated on multiple distinct sets of test data is improved by up to 13.4% in Problem (A) and 15.8% in Problem (B) when the data is augmented, which is significantly higher than what yielded using state-of-the-art generative models. § PROBLEM DESCRIPTION §.§ Background Our Tabular Data. 
The recruitment of PWUDs started in 2019 under the respondent-driven sampling scheme <cit.> in the Great Plains of the U.S. and has continued to the present time. Enrolled PWUDs were followed up within 4–12 months after their initial visit and took the same survey as before. In the survey, each PWUD answered questions on a computer regarding their individual attributes, including the drug use behavior of 18 different drugs. Use frequencies of considered drugs (e.g., marijuana, cocaine, amphetamines, and methamphetamine) were recorded on an ordinal scale (1–8) of {never, less than once a month, once a month, once a week, 2–6 times a week, once a day, 2–3 times a day, 4 or more times a day}. Collectively, the responses to these questions form the (raw) features stored in a 2-D table. There are 258 samples (each represented as a row) and 151 features (each represented as a column) in total. (See supplementary material for the survey questions.) After being preprocessed (detailed in Section <ref>), the table contains 209 features (plus the target variable for teaching the desired classification models): 2 continuous and 207 categorical. Definition 1 (Skip Logic). In survey design, skip logic is a set of automated navigational rules that allows respondents to skip to relevant questions depending on their prior answers <cit.>. Each of the rules is defined as a skip constraint, which restricts the possible values of a subset of features A in accordance with the value of some feature a. We say the skip constraint on the chain A is imposed by a and denote this as a→ A. For the example in Figure <ref>, the skip constraint on A= {TB4} is imposed by a= TB3 i.e., TB3 → {TB4}, in which TB4 is omissible when TB3=“No". Definition 2 (Tabular Data Generation). Following the notations from <cit.>, given a table 𝐓_real that is partitioned row-wise into training set 𝐓_train and test set 𝐓_test, the task involves training a data generator G on 𝐓_train and then independently sampling rows using the learned G to generate a synthetic table 𝐓_syn such that |𝐓_syn|=|𝐓_train| and the two tables have a similar probability mass function for some target variable y (defined in Section <ref>), which allows us to fairly evaluate the efficacy of G. Note that a capable G can satisfy the latter requirement without affecting the overall quality of the generated synthetic samples: that is, the features in each generated row should be consistent with the label in that row. For our setting, 𝐓_real and its derivatives consist of N_c=2 continuous features and N_d=207+1 categorical features.
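As a concrete illustration of Definition 1 (and of the kind of consistency any generator G from Definition 2 should respect), a skip constraint a→ A can be represented as a mapping from a triggering answer to its chain of omissible features, and a row can be checked against it. The sketch below is illustrative only and is not part of our implementation; the feature names and the [BLANK] sentinel are placeholders.

```python
# A minimal sketch (not our actual codebase) of encoding and checking a skip
# constraint a -> A from Definition 1 on a single row of tabular survey data.
BLANK = "[BLANK]"

# Each constraint: (imposing feature, triggering answer) -> chain of omissible features.
SKIP_CONSTRAINTS = {
    ("TB3", "No"): ["TB4"],  # if TB3 == "No", TB4 must be left blank
}

def violates_skip_logic(row: dict) -> bool:
    """Return True if any skip constraint is violated in this row."""
    for (feature, trigger), chain in SKIP_CONSTRAINTS.items():
        if row.get(feature) == trigger:
            if any(row.get(dep) != BLANK for dep in chain):
                return True
    return False

# Example: a consistent row and an inconsistent one.
print(violates_skip_logic({"TB3": "No", "TB4": BLANK}))                # False
print(violates_skip_logic({"TB3": "No", "TB4": "Half a pack a day"}))  # True
```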
Definition 3 (GAN and Its Extensions). GANs are deep generative models that have recently found success in modeling tabular data <cit.> in addition to images and text. A GAN typically consists of two separate networks: a generator 𝒢 that maps a noise distribution (typically Gaussian) to the data distribution and a discriminator 𝒟 that estimates the probability that an input sample came from the data distribution. The learning process is defined as an adversarial game between 𝒢 and 𝒟 in which 𝒢 attempts to consistently fool 𝒟 <cit.>. To stabilize the training of GANs, <cit.> introduce WGAN, which provides a meaningful loss, the Wasserstein distance, for quantifying the difference between the generated and real data distributions. Using the value function from WGAN-GP <cit.> (an improvement of WGAN), <cit.> design CTGAN, which aims to tackle class imbalance in categorical features of tabular data by modifying 𝒢 to additionally take a vector as input. This so-called conditional vector represents a certain class/category of some categorical feature and is used to condition both the generated samples and the real training samples. The model can thus efficiently learn proper conditional distributions for each feature. We further describe the main components of CTGAN in Section <ref>. §.§ Problem Formulation In this work, we focus on data augmentation by designing a novel GAN to generate high-quality synthetic samples that resemble our tabular data with small sample size and skip logic incorporated. The augmented data is then used to train classifiers for predicting PWUDs' usage of a given drug within the next 12 months, including (A) whether they would increase its usage and (B) at which ordinal frequency they would use it. The former is a binary classification problem wherein PWUDs exhibiting an increase and non-increase (i.e., decrease or unchanged) in usage belong to the positive and negative class, respectively; the latter is a multiclass classification problem in which PWUDs are labeled according to their ordinal usage of the corresponding drug within the following 12 months. We are mainly concerned with predicting usage of the most prevalent drugs (i.e., used by at least half of the PWUDs in our tabular data), which include marijuana, methamphetamine (or meth for brevity), amphetamines, and cocaine. There are two unique properties of our tabularized survey data that impede existing generative and predictive models from achieving desirable performance: being high-dimension low-sample-size (HDLSS) and the presence of skip logic (whose impact is demonstrated in Challenges and Observations of the full paper). § OUR PROPOSED GAN §.§ Overview We focus on GAN architectures due to their prevalence in the tabular data generation literature <cit.> and their convenient properties (e.g., flexibility for conditional generation) that help us approach the discussed challenges systematically. Overall, we extend the popular CTGAN (reviewed in Section <ref>) as follows: * Due to the large number of (categorical) features, it is difficult for the conditional generator 𝒢 to learn to generate samples that are conditioned on a particular feature. Therefore, during the training of 𝒢, we increase the number of synthetic samples generated in order to ensure adequate training for conditional generation on a wide range of features. Furthermore, we prioritize the generation of synthetic samples that are conditioned on the empirical distributions of the target variable. * To enhance the quality of synthetic samples conditioned on the target variable y, we incorporate an auxiliary classifier 𝒞 <cit.> into the architecture of CTGAN for learning the correlation between y and other features. That is, 𝒞 is trained to predict the label of an input (synthetic or real) sample with y removed. * Within 𝒞, which is originally a multilayer perceptron (MLP), we add two auxiliary networks that together perform global feature selection and compute the weights of the first hidden layer of 𝒞 <cit.>. The objectives of this addition are twofold: to facilitate effective learning by 𝒞 on HDLSS data and to provide a means to approximate the feature importance scores. * Lastly, we enforce skip logic on synthetic samples during training by leveraging CTGAN's conditional vector. §.§ Conditional Generator We first revisit the two main components of CTGAN, the conditional generator and the training-by-sampling procedure.
Formally, the conditional generator takes a vector cond as additional input, which represents the condition (D_i^*=k^*) for some category k^* of the i^*th categorical feature D_i^*∈{D_1,…,D_N_d} (^* denotes the selected feature/category in cond). It is worth noting that each condition only involves one category of one categorical feature and does not involve any continuous features. The sampling of cond is twofold: sampling for the i^*th categorical feature and sampling for the k^*th category of that feature. Let ⊕ denote vector concatenation e.g., given 𝐱_1=[0,0,0] and 𝐱_2=[1,0], 𝐱_1⊕𝐱_2=[0,0,0,1,0]. If each categorical feature D_i is encoded as a one-hot vector 𝐝_i=[𝐝_i^(1),…,𝐝_i^(k),…,𝐝_i^(|D_i|)], where |D_i| is the number of categories in D_i and 𝐝_i^(k)∈{0,1}, then cond is defined as cond=𝐦_1⊕…⊕𝐦_N_d where 𝐦_i=[𝐦_i^(1),…,𝐦_i^(k),…,𝐦_i^(|D_i|)] is the mask vector of zeros associated with 𝐝_i with 𝐦_i^(k)=1 at i=i^* and k=k^*. To ensure the generator produces samples in accordance with the given conditions, a cross-entropy term measuring the difference between 𝐦_i^* from the input cond and the generated (denoted by the hat symbol) one-hot feature 𝐝̂_i^* is added to its loss. To help the model evenly explore all possible categories in categorical features, a procedure for sampling the cond vector, termed training-by-sampling, is employed in CTGAN as follows: randomly choose a categorical feature D_i^* with uniform probability; construct the probability mass function across the categories available in D_i^* by taking the logarithm of their frequencies in that feature (with respect to all training examples in 𝐓_train); then sample a category k^* accordingly and calculate the corresponding 𝐦_i^* and cond. Afterward, the cond vector is used to condition both synthetic and real training samples in order for the discriminator to properly estimate the (Wasserstein) distance between the learned and real conditional distributions P_𝒢(row|cond) and P(row|cond), respectively. We design our GAN to place more emphasis on generating samples conditioned on y by leveraging the conditional generator (discussed in the following paragraph) as well as other known techniques (discussed in subsequent subsections). Proper Conditional Generation in HDLSS Setting. In each iteration of CTGAN, cond is sampled twice during the respective training of the discriminator 𝒟 and the generator 𝒢, with sample size equal to the specified batch size in both cases. For our HDLSS setting with |𝐓_train|<200 and over 600 columns/available conditions to consider, even when the batch size is maximally set to |𝐓_train|, any condition (D_i^*=k^*) would either be missing or inadequately sampled in the minibatch, and hence it would be impossible for 𝒢 to properly learn to produce samples that preserve the input conditions. Therefore, we increase the sample size of cond and hence the number of synthetic samples to generate during the training of 𝒢 by a factor of q>1 with respect to the batch size (q=1 in CTGAN). Moreover, instead of randomly sampling D_i^* while constructing cond, we want 𝒢 to sample certain features more frequently than others in order to prioritize learning their conditional distributions, particularly for the target variable y since our main purpose of generating synthetic samples is to help classification models improve their prediction on y. Therefore, we dedicate a portion of cond to conditions solely for y. We discuss how we sample the remaining D_i^*'s in Section <ref>.
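For clarity, the following minimal sketch shows how a single cond vector can be assembled under training-by-sampling, with the category drawn according to the logarithm of its frequency in 𝐓_train. It is illustrative only and not our actual implementation; the feature layout and category counts are made-up placeholders.

```python
import numpy as np

# Illustrative sketch of CTGAN-style training-by-sampling; the feature layout
# and frequency counts below are assumed example values, not the real data.
rng = np.random.default_rng(0)

# Categories per categorical feature and their counts in T_train.
category_counts = [
    np.array([120, 40, 15]),   # feature D_1 with 3 categories
    np.array([90, 85]),        # feature D_2 with 2 categories
]
offsets = np.cumsum([0] + [len(c) for c in category_counts])
cond_dim = offsets[-1]

def sample_cond():
    """Pick a feature uniformly, then a category by its log-frequency."""
    i_star = rng.integers(len(category_counts))
    logf = np.log(category_counts[i_star])
    probs = logf / logf.sum()
    k_star = rng.choice(len(probs), p=probs)
    cond = np.zeros(cond_dim)
    cond[offsets[i_star] + k_star] = 1.0   # mask vector m_{i*} with a single 1
    return cond, i_star, k_star

cond, i_star, k_star = sample_cond()
print(cond, i_star, k_star)
```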
§.§ Auxiliary Classifier Incorporating an auxiliary classifier 𝒞 into a GAN architecture has been shown to improve conditional generation quality for both image and tabular data <cit.>. 𝒞 takes either a synthetic or a real sample having its label removed as input and aims to predict that label. Its loss, termed classification loss <cit.>, quantifies the discrepancy between the label of a real sample and the label predicted by 𝒞 for that same sample, which is formulated as binary and categorical cross entropy for Problems (A) and (B), respectively. The addition of 𝒞 also introduces an extra loss term into the loss function of 𝒢, which has the same form as the classification loss but concerns synthetic samples instead. We refer to this loss as the downstream loss as in <cit.>. Altogether, 𝒞 is trained to learn the actual correlation between the label and the features, and then teaches 𝒢 how to generate realistic samples accordingly. The synthetic samples that are fed to 𝒞 are conditioned via the cond vector that is sampled earlier during the training of 𝒢 (we iteratively train 𝒟▸𝒢▸𝒞). Since we are mainly concerned with learning the conditional distribution for the target variable P_𝒢(row|y), we only feed synthetic samples that were conditioned on y to 𝒞 when computing the downstream loss. The proper conditional generation step in Section <ref> ensures that we have a sufficient amount of such samples for training 𝒞. §.§ Learning Important Features The auxiliary classifier 𝒞 was originally proposed to be an MLP having the same architecture as 𝒟 <cit.>. On HDLSS data, however, 𝒞 is very likely to overfit, especially during the first few iterations and epochs when 𝒞 encounters few training (either real or synthetic) examples, which in turn would negatively impact 𝒢. Recently, <cit.> propose a way to overcome overfitting on HDLSS tabular classification problems by adding two auxiliary networks before the first hidden layer of some classification neural network in order to reduce the number of its learnable parameters and simultaneously perform global feature selection. We generalize this idea by integrating these networks into 𝒞 to prevent it from overfitting. Furthermore, by leveraging the global importance scores s=[s_1,…,s_N]∈(0,1)^N (higher indicates greater importance) for N features, which are learned by one of the auxiliary networks, we can inform the conditional generator of important categorical features during training-by-sampling. More specifically, while sampling the cond vector, we sample features D_i (excluding the target variable y) each with probability proportional to its importance score s_i. Hence, conditions for features that have significant effects on the prediction of y are sampled more frequently, allowing 𝒢 to prioritize generating synthetic samples conditioned on those features. §.§ Enforcing Skip Logic We first provide an intuitive explanation of how enforcing skip logic on synthetic samples can benefit the training of our GAN. Recall that CTGAN attempts to minimize the Wasserstein distance between P_𝒢(row|cond) and P(row|cond), where cond represents some condition (D_i^*=k^*) as defined earlier in Section <ref>. Let cond_1 and cond_2 be two distinct samples of cond with different sampled features e.g., D_1 (i^*=1) and D_2 (i^*=2), respectively (and hence different sampled categories).
With the presence of skip logic, if D_1 and D_2 both belong to the same chain imposed by another feature D_3 and the corresponding skip constraint D_3→{D_1,D_2} is enforced such that both D_1,D_2 are omissible, then cond_1 and cond_2 must be redefined to match the same conditional vector, cond^*, that satisfies such a constraint, i.e., (D_1=[BLANK]) AND (D_2=[BLANK]). It follows that on real data, P(row|cond_1)=P(row|cond_2)=P(row|cond^*). This implies that the corresponding synthetic samples associated with either condition should follow the same distribution P_𝒢(row|cond^*). Therefore, enforcing skip logic by inferring cond^* effectively reduces the search space for P_𝒢, which leads to more efficient and stable learning and hence more consistency in the quality of the generated samples. We empirically demonstrate this claim in Section <ref>. Existing methods for enforcing column-wise constraints require either creating customized transformation functions coupled with validity checks or using rejection sampling <cit.>, both of which are ad hoc and highly inefficient for a large number of constrained columns. Instead, we leverage the cond vector to enforce our constraints (see the sketch at the end of this subsection). Formally, recall from Section <ref> that cond is constructed as ⊕_i=1^N_d𝐦_i with 𝐦_i^*^(k^*) set to 1 and all other entries set to 0 for representing the condition (D_i^*=k^*). Let us assume the corresponding categorical feature D_i^* from cond imposes some skip constraint κ (e.g., D_i^*= TB3 and k^*= “No") on a chain of features {D_i'}_i'∈ M, where M⊆{1,…,N_d} such that |M| is the size of that chain. Let k'_i'∈{1,…,|D_i'|} be a specific category that each D_i' takes in accordance with κ (e.g., D_i'= TB4 and k'_i'= [BLANK] under TB3=“No"). Then, we can define κ as the extension of a condition as follows: κ = [D_i^*→{D_i'}_i'∈ M] = [ (D_i^*=k^*) ⇒ ⋀_i'∈ M (D_i'=k'_i') ]. Therefore, whenever applicable, we can restrict cond ↦ ζ(cond) by reconstructing the individual mask vectors of cond (with a slight abuse of notation) as 𝐦_i^(k) ↦ ζ(𝐦_i^(k)) = 1 if (i=i^* and k=k^*) or (i=i'∈ M and k=k'_i'), and 0 otherwise. Note that κ is only defined for a fixed set of {k^*,k'_i'} and hence can take various forms in practice. For instance, the following is an exhaustive list of valid expressions for κ=TB3→{TB4}:
▸ TB3=“No" ⇒ TB4=[BLANK] (omissible)
▸ TB3=“Yes" ⇒ TB4=“Not at all"
▸ TB3=“Yes" ⇒ TB4=“Less than 1 cigarettes a day”
▸ TB3=“Yes" ⇒ TB4=“1-5 cigarettes a day”
▸ TB3=“Yes" ⇒ TB4=“Half a pack a day”
▸ TB3=“Yes" ⇒ TB4=“A pack or more a day”.
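The restriction ζ(·) above amounts to flipping a few additional mask entries whenever the sampled condition triggers a skip constraint. The following minimal sketch makes this explicit; it is illustrative only, and the feature layout, index offsets, and constraint are placeholder assumptions rather than our actual configuration.

```python
import numpy as np

# Minimal sketch of the mask restriction cond -> zeta(cond): when the sampled
# condition (D_{i*} = k*) imposes a skip constraint, the mask entry of every
# feature in its chain is fixed to the category required by the constraint.
offsets = np.array([0, 3, 5, 7])   # 3 categorical features with 3, 2, 2 categories
cond_dim = 7

# Assumed constraint: (feature 0, category 2) forces feature 2 to category 0 ("[BLANK]").
SKIP = {(0, 2): [(2, 0)]}

def restrict_cond(cond, i_star, k_star):
    """Apply zeta(.) to a sampled cond vector."""
    cond = cond.copy()
    for i_prime, k_prime in SKIP.get((i_star, k_star), []):
        cond[offsets[i_prime] + k_prime] = 1.0
    return cond

cond = np.zeros(cond_dim)
cond[offsets[0] + 2] = 1.0           # sampled condition: D_0 takes category 2
print(restrict_cond(cond, 0, 2))     # [0. 0. 1. 0. 0. 1. 0.]
```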
§.§ The Complete Model The complete training procedure via minibatch stochastic gradient descent (SGD) is summarized in Algorithm <ref>. q was previously defined in Section <ref>. We denote the conditions for y as cond^(y). ω∈[0,1] is the ratio for controlling the prevalence of such conditions in some sample of cond. Let ℐ be the probability mass function across the categorical features (excluding y) wherein the probability for selecting a feature D_i^* is defined as its normalized feature importance score: s_i^*/∑_i=1^N_d-1s_i (-1 for excluding y). The remaining conditions in cond are sampled following the training-by-sampling procedure but with features selected according to ℐ (the categories in each categorical feature are still sampled according to their log probabilities in 𝐓_train as before), which we express as cond_j∼ℐ with a slight abuse of notation. The loss for 𝒟, L^𝒟, is defined as in CTGAN, i.e., the WGAN-GP loss. The loss function for 𝒢 is L^𝒢=L^𝒢_orig+L^𝒢_dstream where L^𝒢_orig is the original loss function for 𝒢 in CTGAN and L^𝒢_dstream is the downstream loss defined in Section <ref> along with the classification loss L^𝒞. At each iteration of an epoch, the training sequence is as follows: train the discriminator (lines 3-6), train the generator (lines 7-10), and train the auxiliary classifier while simultaneously updating ℐ according to s (lines 11-13). § EXPERIMENTS §.§ Methodology §.§.§ Implementation Details All experiments were conducted using PyTorch 1.13.1, CUDA 11.7, and scikit-learn 1.3.2. Our implementation of the proposed GAN[<https://github.com/AnonyMouse3005/HDLSS-GAN>] is based on CTGAN's. We refer readers to our full paper for further implementation details, particularly on our data preprocessing step. Hyperparameters. For all considered generative models, unless otherwise stated, we adopt the same specifications as in the cited original work. We use a batch size of |B|=30 to train each model. For CTGAN, we set the pac size <cit.> to 3. 𝒞 is a 2-layer MLP with (256, 256) neurons and either a sigmoid (for Problem A) or a softmax (for Problem B) activation in the last layer. Each of the two attached auxiliary networks prior to the first hidden layer of 𝒞 is a 4-layer MLP (256, 256, 256, 256) with a tanh and a sigmoid activation in the last layer, respectively. We manually tuned q and ω (via 10-fold cross validation on 𝐓_train) before fixing their values to 20 and 0.5, respectively. Each model is trained for 100 epochs, each containing ⌊|𝐓_train|/|B|⌋ iterations. The ratio for partitioning 𝐓_real is 80:20 for 𝐓_train and 𝐓_test, respectively, with a total of 100 distinct seeds. §.§.§ Evaluation Metrics and Framework Figure <ref> illustrates our evaluation framework. We evaluate the efficacy of generative models[For every measure of 𝐓_syn's quality while evaluating some trained generator G, we use G to generate 10 samples of 𝐓_syn (each satisfying the two requirements specified in Definition 2) and evaluate their quality independently, then average the respective scores.] using three criteria: conflict, compatibility <cit.>, and utility. Conflict. Every row of a synthetic table generated by 𝒢 should not contain too many entries that violate skip logic. Given the jth row of 𝐓_syn that is represented as 𝐫̂_j = Ĉ_j ⊕𝐝̂_1,j⊕…⊕𝐝̂_N_d,j where 𝐝̂_i,j is the one-hot vector of the ith categorical feature and Ĉ_j is the representation of continuous features in that row[which varies across generative models, e.g., CTGAN uses the proposed mode-specific normalization (and so does our GAN)], we check for each skip constraint κ whether the columns of 𝐫̂_j satisfy the condition (D_i^*=k^*) i.e., match cond (left-hand side of Equation <ref>). If so, the one-hot vectors of the features within the chain linked by κ must exactly match the mask vectors associated with said features in the restricted cond vector, ζ(𝐦_i) (whose construction is defined in Equation <ref>). Hence, we quantify the degree of κ-violation by computing the Hamming distance between the vectors (𝐫̂_i,j)_i∈ M=⊕_i∈ M𝐝̂_i,j and ⊕_i∈ Mζ(𝐦_i), where M contains the indices for the features within the chain linked by κ. The conflict metric of a single row 𝐫̂_j is the average Hamming distance across all applicable skip constraints, and we compute the conflict of 𝐓_syn by taking its average across all rows. Thus, a synthetic table whose rows adequately conform to skip logic yields a low conflict score in [0,1]. Compatibility.
The classification models trained on synthetic data should output predictions for unseen examples in the test data as accurately as those trained on real (training) data. We train classification models on 𝐓_train and on 𝐓_syn (both having the same size as defined in Section <ref>), then test them using 𝐓_test and compare their predictive performance, which is measured using the standard Area under the Receiver Operating Characteristic curve (AUROC). We train each of the multiclass classification models in Problem (B) using the one-vs-all strategy and we compute the resulting AUROC also using the one-vs-all strategy <cit.>. We report the average difference in AUROC (across different classification models <cit.> and different partitionings of 𝐓_train and 𝐓_test) of models trained on 𝐓_syn and those trained on 𝐓_train. The classification models trained on 𝐓_syn are expected to score a lower AUROC than those trained on 𝐓_train, ideally with a margin as small as possible. Therefore, the compatibility score of 𝐓_syn should be negative and close to zero. We consider the following classifiers, the same for both Problems (A) and (B): elastic-net logistic regression <cit.>, decision tree i.e., CART <cit.>, random forest <cit.>, XGBoost <cit.>, CatBoost <cit.>, 3-layer MLP (100, 100, 10) with sigmoid/softmax activation in the last layer, and WPFS <cit.>. Utility. Ultimately, when the real training data is augmented by synthetic samples, the classification models trained on it should outperform the models trained on real data only. We report the average AUROC (across different classification models and different partitionings of 𝐓_train and 𝐓_test) of the models trained on the augmented data, 𝐓_train+𝐓_syn, and compare it against the average AUROC of those trained on 𝐓_train. We consider the same set of classifiers listed above. Unlike compatibility, we do not compute the difference in AUROC scores. Hence, the utility of 𝐓_syn follows the same scale as AUROC ∈[0,1] and should ideally yield values as high as possible.
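In practice, the compatibility and utility comparisons reduce to training the same classifier on different tables and scoring AUROC on the held-out 𝐓_test. The sketch below illustrates this loop for a single classifier; it is a hedged illustration only, a random forest stands in for the full model set, and X_train, y_train, X_syn, y_syn, X_test, y_test are placeholders for the corresponding preprocessed tables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Illustrative AUROC scoring used for both compatibility and utility; binary
# labels correspond to Problem (A) and ordinal multiclass labels to Problem (B).
def auroc(X_fit, y_fit, X_test, y_test):
    clf = RandomForestClassifier(random_state=0).fit(X_fit, y_fit)
    proba = clf.predict_proba(X_test)
    if proba.shape[1] == 2:                                   # Problem (A)
        return roc_auc_score(y_test, proba[:, 1])
    return roc_auc_score(y_test, proba, multi_class="ovr")    # Problem (B)

# compatibility ~ auroc(X_syn, y_syn, X_test, y_test) - auroc(X_train, y_train, X_test, y_test)
# utility       ~ auroc(np.vstack([X_train, X_syn]),
#                       np.concatenate([y_train, y_syn]), X_test, y_test)
```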
§.§ Results We use our evaluation framework to evaluate CTGAN, TVAE <cit.>, TableGAN <cit.>, CTABGAN+ <cit.>, and TabDDPM <cit.> in addition to the proposed GAN, which are considered state-of-the-art for tabular data generation. As a baseline, we use Bayesian networks <cit.>. Compatibility of Synthetic Data. Figure <ref> shows the performance of all considered generative models in terms of compatibility. Across all considered drugs in both problems, we see that the drop in predictive performance of classification models trained on synthetic data generated from our GAN (<0.05 and <0.07 lower in AUROC for Problems A and B, respectively) is considerably smaller, with lower uncertainty, compared to the performance drop of those trained on synthetic data from other generative models. This demonstrates the efficacy of our GAN in generating synthetic samples that are comparable to real training data. Utility of Synthetic Data. In terms of utility, the proposed GAN also performs well in each problem relative to the benchmarks as shown in Figure <ref>. More specifically, the classification models trained using augmented data from our GAN gain from 8.35% up to 13.4% in AUROC in Problem (A) and 8.66% up to 15.8% in Problem (B), whereas those trained using augmented data from other models barely show any improvement. This implies that our GAN is capable of augmenting existing HDLSS data in order for classification models to effectively improve their predictive performance. Violation of Skip Logic in Synthetic Data. The evaluation of the degree of skip logic violation is summarized in Table <ref>. Since our work is the first to take this criterion into account, our proposed GAN has a clear advantage over existing models, with 𝐓_syn consistently having lower average conflict by a large margin in both considered problems. The additional runtime for enforcing all 26 skip constraints is negligible, i.e., approximately 5% longer training time. Impact of Enforcing Skip Logic. We also perform an ablation study to understand the practical benefits of enforcing skip logic during our GAN training. As shown in Table <ref>, when we enforce skip logic, the training phase exhibits not only higher stability but also higher efficiency, as evidenced by the loss of 𝒢 at 50 and 100 epochs. Note that the adoption of WGAN (via CTGAN) in our GAN allows us to interpret the loss in a meaningful way. As a result, the trained GAN is able to consistently generate high-quality samples, which positively affects the scores for all three evaluation metrics to some extent. Similar results for other considered problems can be found in the appendix of our full paper. § CONCLUSION In this paper, using HDLSS tabular data collected by our team via a survey on short-term substance use behavior that employs skip logic, we design a novel GAN for augmenting our limited tabular data in order to help classification models accurately predict short-term substance use behaviors of PWUDs: (A) whether they would increase usage of a certain drug and (B) at which ordinal frequency they would use it within the next 12 months. Our evaluation results demonstrate the efficacy of the proposed GAN. The resulting predictions for the two defined problems can ultimately be leveraged by relevant substance use organizations as a complementary forecasting tool when determining the most appropriate resources to allocate to PWUDs who need the most help. § ACKNOWLEDGEMENTS This project is supported by the National Institute of General Medical Sciences of the National Institutes of Health [P20GM130461], the Rural Drug Addiction Research Center at the University of Nebraska-Lincoln, and the National Science Foundation under grant IIS:RI #2302999. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies. § ETHICS STATEMENT Due to the confidentiality agreement and IRB approval for this study, we do not have access to sensitive information such as full name and gender identity. Data will be made available to applicants (with an IRB protocol and ethical research plan) upon request.
http://arxiv.org/abs/2407.12898v1
20240717175315
JWST reveals a rapid and strong day side variability of 55 Cancri e
[ "J. A. Patel", "A. Brandeker", "D. Kitzmann", "D. J. M. Petit dit de la Roche", "A. Bello-Arufe", "K. Heng", "E. Meier Valdés", "C. M. Persson", "M. Zhang", "B. -O. Demory", "V. Bourrier", "A. Deline", "D. Ehrenreich", "M. Fridlund", "R. Hu", "M. Lendl", "A. V. Oza", "Y. Alibert", "M. J. Hooton" ]
astro-ph.EP
[ "astro-ph.EP" ]
Patel et al. Department of Astronomy, Stockholm University, AlbaNova University Center, 10691 Stockholm, Sweden Weltraumforschung und Planetologie, Physikalisches Institut, Universität Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland Center for Space and Habitability, Universität Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland Observatoire astronomique de l'Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91011, USA Faculty of Physics, Ludwig Maximilian University, Scheinerstrasse 1, D-81679, Munich, Bavaria, Germany ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, CH-3008, Bern, Switzerland University College London, Department of Physics & Astronomy, Gower St, London, WC1E 6BT, United Kingdom University of Warwick, Department of Physics, Astronomy & Astrophysics Group, Coventry CV4 7AL, United Kingdom Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, 439 92 Onsala, Sweden Department of Astronomy and Astrophysics, The University of Chicago, Chicago, IL 60637, USA Centre Vie dans l'Univers, Faculté des sciences de l'Université de Genève, Quai Ernest-Ansermet 30, 1205 Geneva, Switzerland Leiden Observatory, University of Leiden, PO Box 9513, 2300 RA Leiden, The Netherlands Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125, USA Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE, UK The nature of the close-in rocky planet 55 Cnc e is puzzling despite having been observed extensively. Its optical and infrared occultation depths show temporal variability, in addition to a phase curve variability observed in the optical. We wish to explore the possibility that the variability originates from the planet being in a 3:2 spin-orbit resonance, thus showing different sides during occultations. We proposed and were awarded Cycle 1 time at the James Webb Space Telescope (JWST) to test this hypothesis. JWST/NIRCam observed five occultations (secondary eclipses), of which four were observed within a week, of the planet simultaneously at 2.1 and 4.5 μm. While the former gives band-integrated photometry, the latter provides a spectrum between 3.9–5.0 μm. We find that the occultation depths in both bandpasses are highly variable and change between a non-detection (-5 ± 6 ppm and 7 ± 9 ppm) to 96±8 ppm and 119^+34_-19 ppm at 2.1 μm and 4.5 μm, respectively. Interestingly, the variations in both bandpasses are not correlated and do not support the 3:2 spin-orbit resonance explanation. The measured brightness temperature at 4.5 μm varies between 873–2256 K and is lower than the expected dayside temperature of bare rock with no heat re-distribution (2500 K) which is indicative of an atmosphere. Our atmospheric retrieval analysis of occultation depth spectra at 4.5 μm finds that different visits statistically favour various atmospheric scenarios including a thin outgassed CO/CO_2 atmosphere and a silicate rock vapour atmosphere. Some visits even support a flat line model. The observed variability could be explained by stochastic outgassing of CO/CO2, which is also hinted by retrievals. Alternatively, the variability, observed at both 2.1 and 4.5 μm, could be the result of a circumstellar patchy dust torus generated by volcanism on the planet. JWST reveals a rapid and strong day side variability of 55 Cancri e J. A. 
Patel1 (jayshil.patel@astro.su.se, ORCID 0000-0001-5644-6624), A. Brandeker1, D. Kitzmann2,3 (ORCID 0000-0003-4269-3311), D. J. M. Petit dit de la Roche4, A. Bello-Arufe5, K. Heng6,7,8,9, E. Meier Valdés3, C. M. Persson10, M. Zhang11, B.-O. Demory3,2, V. Bourrier4, A. Deline4, D. Ehrenreich4,12, M. Fridlund13,10, R. Hu5,14 (ORCID 0000-0003-2215-8485), M. Lendl4, A. V. Oza14,5, Y. Alibert2,3, M. J. Hooton15 Received XX; accepted XX § INTRODUCTION Ultra-short-period planets (USPs) provide a unique opportunity to study planets in extreme environments that have no counterparts in our Solar system <cit.>. Many USPs are consistent with a bare rock composition, while some of them might have a secondary metal-rich atmosphere or a disintegrating surface <cit.>. Being in an orbit around the nearby (d=12.6 pc), bright naked eye star 55 Cancri (V=5.95 mag), 55 Cancri e (hereafter 55 Cnc e) is one of the best targets for investigating the nature of a USP. Out of the five known planets in the system, planet e is the only one transiting the star. 55 Cnc e was discovered by <cit.> with an orbital period of ∼ 2.8 d, which was later found to be an alias of the true 0.74 d period <cit.>. This was confirmed by the detection of planetary transits in the optical and infrared (IR) independently <cit.>, enabling its radius measurement. Together with mass estimates derived from radial velocity measurements, the earlier works attempted to constrain the internal structure of the planet and found that the planetary density was consistent with either a purely rocky planet, a rocky planet with a thick super-critical water envelope, or a carbon-rich interior with no envelope <cit.>. More recently, <cit.> refined planetary mass (8.3 M_⊕) and radius (1.88 R_⊕) using radial velocity data and HST/STIS transit observations. Their internal structure modelling, based on these updated mass-radius measurements, suggests a rocky planet surrounded by a heavyweight (high mean molecular weight) atmosphere. A low mean molecular weight, or lightweight, atmosphere on the planet is not possible because of intense radiation from its host star. Atmospheric escape simulations also imply that lightweight atmospheres (made of H, He) would not survive on 55 Cnc e for a long time period <cit.>. Other attempts to model the internal structure of the planet <cit.> indicate a rocky interior with a gas or water envelope. Soon after its discovery, <cit.> used Spitzer to detect thermal emission from 55 Cnc e and determined its day-side temperature to be around 2300 K. <cit.> constructed a temperature map of the planet using Spitzer/IRAC phase curve measurements at 4.5 μm. They calculated the average day-side temperature to be around 2350 K with a maximum ∼ 2700 K.
Curiously, the hottest location of the planet was found to be shifted by ∼ 41^∘ to the east compared to the sub-stellar point, indicating a strong heat re-distribution. On the other hand, the day-night temperature difference was found to be as large as 1300 K, a sign of inefficient heat transport to the night side. These conflicting results led <cit.> to speculate that perhaps efficient heat transport is only happening on the day side of the planet by a thick atmosphere, or alternatively, a molten lava flow is responsible for the heat transport. The inefficiency of energy transport to the night side could be due to gases becoming cold enough to condense. Similarly, a lava stream could be hindered by the surface solidifying at the night side. <cit.> re-analysed the phase-curve data and confirmed the findings of <cit.>. Their physical model of the phase curve allowed them to show that the radiative and advective time scales must be of the same order to reproduce the observed phase curve. This disfavours the lava ocean scenario since a lava flow would have a too large advective time scale <cit.> to be an efficient heat transporter <cit.>. <cit.> further propose that a CO or N_2 dominated atmosphere on the day side could explain the phase curve. This claim was corroborated by a 3D global circulation climate model by <cit.> that could potentially describe the observations, assuming a H_2 + N_2 dominated atmosphere with a trace source of opacity at 4.5 μm (such as CO_2 or H_2O), coupled with the presence of night-side clouds. A recent re-reduction and re-analysis of the Spitzer phase curve by <cit.> yielded an even larger day-night temperature difference with a smaller phase offset, more consistent with the poor heat transport typically found on USPs. The heavyweight atmosphere on the planet, which was implied by the Spitzer phase curve, climate modelling and mass-radius constraints, is challenging to detect. Numerous observations have tried but failed to detect any atmosphere on the planet. The singular claim of detection of gas on 55 Cnc e comes from <cit.> who identified HCN in the atmosphere using HST/WFC3 transit observations. However, subsequent observations using high-resolution spectroscopy from the ground could not reproduce the detection of HCN <cit.>. Furthermore, the transit observation of 55 Cnc e in the Ly α band by <cit.> resulted in a non-detection, suggesting the absence of an extended H upper atmosphere. This was supported by the non-detection of He in the upper atmosphere by <cit.>. A lack of H and He in the atmosphere could mean that both gases escaped if they were initially accreted from the disk. In addition to this, several studies attempted but could not detect other atmospheric species such as H_2O, TiO, NH_3, C_2H_2, Fe, Ca, Mg, K, Na, H <cit.>. These non-detections mean that those species are either absent from the atmosphere or present only at very low volume mixing ratios, unless the mean molecular weight of the atmosphere is too high for them to be detected by the transit observations. Another possibility is that the atmosphere of the planet is cloudy <cit.>. The IR observations of 55 Cnc e in emission posed another challenge for understanding the behaviour of the planet. <cit.> monitored the occultation depths of 55 Cnc e with Spitzer at 4.5 μm during 2012–2013 and found a variable occultation depth ranging from 47 ppm to 176 ppm. This translates into a corresponding change of brightness temperature of 1370 K to 2530 K.
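As a rough illustration of how an occultation depth maps onto a brightness temperature, the sketch below inverts the single-wavelength blackbody relation depth ≈ (R_p/R_s)^2 B_λ(T_p)/B_λ(T_s). It is a simplification only: the published values are derived with band-integrated, empirically calibrated stellar spectra, so the stellar temperature and radius ratio adopted here are approximate assumptions and the resulting temperatures differ from the quoted ones.

```python
import numpy as np
from scipy.optimize import brentq

# Simplified single-wavelength inversion of an occultation depth into a
# brightness temperature, treating both star and planet as blackbodies.
# T_star and rprs are approximate, assumed values for 55 Cnc (A/e).
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl, T):
    """Blackbody spectral radiance at wavelength wl [m] and temperature T [K]."""
    return (2.0 * h * c**2 / wl**5) / np.expm1(h * c / (wl * kB * T))

def brightness_temperature(depth_ppm, wl=4.5e-6, T_star=5170.0, rprs=0.0182):
    """Solve depth = (Rp/Rs)^2 * B(wl, Tp) / B(wl, T_star) for Tp."""
    target = depth_ppm * 1e-6
    f = lambda Tp: rprs**2 * planck(wl, Tp) / planck(wl, T_star) - target
    return brentq(f, 300.0, 5000.0)

# Example: map the low and high ends of the measured 4.5 micron depths.
for depth in (47.0, 176.0):
    print(depth, "ppm ->", round(brightness_temperature(depth)), "K")
```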
Variability was also observed in the optical bandpass by MOST (Microvariability and Oscillations of STars) that discovered significant changes in phase curves over several seasons <cit.>. While the optical observations with MOST found a significant phase curve amplitude, the secondary occultation remained undetected. More recently, CHEOPS (CHaracterising ExOPlanet Satellite) extensively observed 55 Cnc <cit.> in the optical (G band) and confirmed significant variability not only in phase amplitude but also in phase offset and occultation depth, where the occultation depths at some epochs were consistent with zero. TESS (the Transiting Exoplanet Survey Satellite) also observed 55 Cnc and found a hint of weak variability in occultation depths over three observing sectors <cit.>. In contrast to the variability of the occultation depths, no optical or IR variability has been observed in the transit depths <cit.>. Multiple studies in the literature propose various hypotheses to explain the observed variability of the occultation depth of 55 Cnc e in the optical and IR. <cit.> suggested that plumes from volcanic outgassing on the dayside could explain the observed variability in emission. Assuming an Earth-like composition for the interior, it can release gases such as CO or CO_2 that are a significant source of opacity around 4.5 μm. Gas plumes evolving at different atmospheric pressure levels could be inferred as varying temperatures during occultation observations in the IR. Given that the variability was observed throughout the optical and IR, it was suggested by <cit.> that a circumstellar inhomogeneous dusty torus could provide a variable source of opacity. <cit.> studied the dusty torus scenario in detail and concluded that such a torus made up of certain species of a narrow range of particle sizes could indeed reproduce the level of observed variability in the optical. However, a dusty torus should extent out to its Hill sphere and, if opaque, is inconsistent with the observed transit depths <cit.>. <cit.> argued that a thin, transient outgassed atmosphere is consistent not only with the observed optical and infrared occultation depths, but also provides a plausible explanation for their variability. <cit.> demonstrated that CO-CO2 atmospheres are outgassed under a broad range of conditions (surface pressures, oxygen fugacity, temperatures). Since 55 Cnc e is in a very close-in orbit around its host star, <cit.> showed that the planet's orbit is inside the stellar Alfvén surface. This means that star-planet interactions (SPI) are plausible for the system, potentially causing variability-inducing star spots. <cit.> proposed coronal rain, a kind of SPI, as a reason for the variability in chromospheric lines that they observed with HST <cit.>. <cit.> ruled out star spot creation by the planet as a plausible mechanism to explain the optical variability observed by CHEOPS but this does not prohibit other possible forms of SPIs, such as coronal rain. Although multiple hypotheses have been provided to describe the thermal phase curve and variability from 55 Cnc e, each has difficulties in fully explaining all observed features. The observations with the James Webb Space Telescope (JWST), presented here, were in part motivated by exploring an alternative hypothesis that the planet rotates at an asynchronous rate to its orbit, potentially explaining both the hot-spot shift into the afternoon and the rapid orbit-to-orbit variability. 
The idea and the observations motivated by it are presented in Section <ref> followed by results in Section <ref>. We show the results from atmospheric retrieval analysis in Section <ref>. Finally, we interpret the results from our observations and present our conclusions in Section <ref> and <ref>, respectively. Details of the data analysis methods used are put into Appendix <ref>. § ASYNCHRONOUS ROTATION SCENARIO FOR 55 CNC E, OBSERVATIONS AND METHODS §.§ 55 Cnc e in 3:2 spin-orbit resonance The planet 55 Cnc e orbits its host star in about 17.7 h with a semi-major axis of 0.015 AU <cit.>. When a planet is orbiting this close to its host star, it is usually assumed to be in a tidally locked synchronous spin-orbit configuration because of strong tidal forces. However, if the planet is part of a multi-planetary system, gravitational interactions with the other planets can perturb the planet from its synchronous 1:1 spin-orbit configuration. <cit.> simulated the tidal evolution of the orbit of 55 Cnc e and showed that there is a reasonable likelihood for the planet to be trapped in an asynchronous spin-orbit resonance, with the 3:2 spin-orbit resonance being the most likely after 1:1 synchronous rotation. Asynchronous rotation can thus not be ruled out for 55 Cnc e. The consequence is that the planet would show different faces to the star during the orbit. This in turn means that the hottest point on the planet would not necessarily be the sub-stellar point. Just as on Earth the hottest time of the day is in the afternoon and not at noon, so could thermal inertia on 55 Cnc e shift its hottest spot to the afternoon (east). The thermal inertia could, like on Earth, be provided by the atmosphere. In the case of a bare rock, thermal inertia could be provided by the heating, melting and evaporation of the rock in the morning with subsequent condensation and crystallisation in the afternoon. Quantitative models of these scenarios are sensitive to the detailed assumptions of the mass and composition of the atmosphere that, in turn, depends on the material equation of state. Using simplified models, <cit.> showed that the observations up until then could indeed be explained by using reasonable assumptions on the physical properties of the planet, meaning that the asynchronous rotation scenario could not be excluded. Assuming that the planet is rotating asynchronously in the most probable 3:2 spin-orbit resonance, the planet will show the same face only at every second occultation instead of showing the same face every time. That means the two opposite sides will be seen during consecutive occultations. If there are semi-stable surface features, e.g., due to volcanic activity, on different sides of the planet they will show up differently during alternate occultations. In this case, the observed occultation depths would be expected to highly correlate with the occultation number over a short period, while this correlation could be broken over a longer time scale due to surface changes. The variability in occultation depths observed by <cit.> can then be attributed simply to the planet showing different faces during occultations. Notably, <cit.>, who confirmed the Spitzer variability of occultation depths, found the variability to be well fitted by a sinusoidal with a period as short as 2 days, but discarded this solution as being unphysical. 
However, if the planet is indeed in a 3:2 spin-orbit resonance, it is expected that the period of variability should be equivalent to the synodic period (∼ 35.5 hr), close to the period of 2 days. To further test this intriguing hypothesis of asynchronous rotation and simultaneously sensitively measure potential atmospheric signatures, we designed an observation programme for JWST as detailed in the next section. §.§ Observations If the planet is indeed in a 3:2 spin-orbit resonance, it will show two opposite sides in consecutive occultations. Assuming the planetary surface to evolve slowly, we would then expect every second consecutive occultation to be strongly correlated. Enumerating the occultations by orbit number, we thus requested two “odd” and two “even” occultations within a short time-constrained span of two weeks, to rule out significant surface evolution within that time. Since 55 Cnc is a very bright IR target (K = 4 mag), avoiding saturation while observing it with JWST is challenging. From pre-launch estimates, our options were essentially limited to a grism time-series mode of the Near Infrared Camera (NIRCam). The proposal was awarded time in JWST Cycle 1 as GO 2084 <cit.>. The observation log is provided in Table <ref>. Due to technical difficulties, only three occultations of the programme were observed within the time constraint of two weeks; the fourth was postponed until five months later. Fortunately, a different programme <cit.> that also targeted 55 Cnc had an occultation observed in the same instrument mode and within the same first week <cit.>. In the following, we thus present an analysis of all five visits. NIRCam offers simultaneous observations in short-wave (SW) and long-wave (LW) channels at 0.6–2.3 μm and 2.4–5.0 μm, respectively. The SW channel allows the use of a weak lens with a filter providing photometric monitoring of the target, while the LW channel provides a spectroscopic mode using a grism and a filter. Our observations in the LW channel used the F444W filter with GRISMR element and RAPID readout mode. On the other hand, the WLP4/F212N2 weak lens/filter with RAPID readout mode was used in the SW channel. Both channels employed the SUBGRISM64 subarray that has 2048 columns and 64 rows. This gave us spectroscopic data between 3.9–5 μm (centred at around 4.5 μm) in the LW channel (or, 4.5 μm channel) and one single photometric data point in a narrow-band (2.3%) bandpass at 2.12 μm from the SW channel (also referred to as the 2.1 μm channel). Given the brightness of the host star, we chose two groups per integration with a total integration time of about 1.03 s. We used five independent pipelines to reduce and analyse the spectroscopic data at 4.5 μm and two different pipelines to analyse the short-wave photometric data. The details of these methods are described in Appendix <ref>. §.§ Retrieval model and atmospheric scenarios We chose two representative independent reductions of occultation depth spectra, from and pipelines, to perform atmospheric retrieval. Both reductions differ in their treatment of correlated noise and thus produce slightly different results, which was the reason for choosing two different reductions for retrieval (see Appendix <ref> for more details). To interpret the observational data, we used the open-source atmospheric retrieval code <cit.>, which uses the nested sampling algorithm <cit.> implemented in the library <cit.>. 
For the atmospheric characterisation, we tested four different models with varying levels of complexity. The simplest model tries to fit the observational data with a flat line, while the second one assumes the planet to emit like a pure blackbody of temperature T_bb. Since observations by, for example, <cit.> and <cit.> rule out the presence of a thick primordial hydrogen-helium atmosphere, a potential atmosphere has to be secondary in nature. There are two essential pathways to create a secondary atmosphere for a hot planet such as 55 Cnc e. The atmosphere can either be dominated by outgassing from the planetary interior <cit.> or be created through evaporation of mantle material, or a combination thereof. Thus, for the two atmospheric scenarios, we assumed a secondary atmosphere with outgassed carbon monoxide (CO)/carbon dioxide (CO2) <cit.> or an atmosphere produced by an evaporating mantle with a bulk silicate earth composition that is composed of silicon monoxide (SiO), silicon dioxide (SiO2), and magnesium oxide (MgO) <cit.>. Nested sampling allows Occam's Razor <cit.> to be enforced via the calculation of the Bayesian evidence (or marginalised likelihood function), see <cit.>. In practice, this allows us to favour simpler explanations for some of the data (e.g. flat line or blackbody function). To provide good constraints on the Bayesian evidence values, within MultiNest we used 5000 live points <cit.> for each retrieval calculation. Increasing this value further did not alter the resulting evidence values to a significant degree. The atmosphere was considered to be isothermal with the surface pressure p_surf as a free parameter in the retrieval model. The atmosphere and surface were allowed to have their own distinct temperatures, T_atm and T_surf, respectively. The cross sections of CO, CO2, SiO, SiO2, and MgO were taken from <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, respectively. All temperature and pressure-dependent cross sections were calculated with the open-source opacity calculator <cit.>. The atmospheric composition in the retrieval model was described through a centred-log-ratio prior, which allows a more optimised sampling of the parameter space when the dominant background gas is not known <cit.>. For a given mixture of n gases, the centred-log-ratio conversion (clr) for the mixing ratio x_j of a given molecule j in the mixture is given by ξ_j = clr(x_j) = ln( x_j / g(𝐱) ), where g(𝐱) is the geometric mean of all mixing ratios 𝐱: g(𝐱) = ( ∏_j=1^n x_j )^1/n. Due to the constraint that ∑_j=1^n x_j = 1, or equivalently ∑_j=1^n ξ_j = 0, only n-1 free parameters are needed in the retrieval. We used uniform priors to produce ξ_j values subject to the constraints that min(𝐱) = 10^-10 and max(𝐱) = 1, see <cit.> for details. We note that the prior boundaries for ξ_j depend on the number of molecules in the retrieval and the chosen value of the smallest allowed mixing ratio (a short numerical sketch of this parametrisation is given below). For the retrieval of the data from the reduction, we performed the calculations on the relative occultation depths. Thus, for these calculations, we needed to add an additional free parameter to the retrieval: the white-light occultation depth d_wl. For these, we used Gaussian priors with the values provided in Table <ref>. Since the reduction provides absolute occultation depths, this additional parameter was not needed. Additionally, we binned the data provided by which uses the instrument's native resolution to about 30 spectral bins. All retrieval parameters for the different models are summarised in Table <ref>.
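To illustrate the centred-log-ratio parametrisation numerically, the following short sketch (illustrative only; the mixing ratios are arbitrary example values, not retrieved results, and the code is not part of the retrieval framework) applies the transform and its inverse to a three-gas mixture:

```python
import numpy as np

# Small numerical sketch of the centred-log-ratio (clr) parametrisation of an
# atmospheric composition; the mixing ratios are arbitrary illustrative values.
def clr(x):
    """Centred-log-ratio transform of a composition vector x (sums to 1)."""
    g = np.exp(np.mean(np.log(x)))    # geometric mean of the mixing ratios
    return np.log(x / g)

def clr_inverse(xi):
    """Map clr coordinates back to mixing ratios that sum to one."""
    x = np.exp(xi)
    return x / x.sum()

x = np.array([0.89, 0.10, 0.01])      # e.g. CO, CO2, and a trace filler gas
xi = clr(x)
print(xi, xi.sum())                   # the clr values sum to ~0 by construction
print(clr_inverse(xi))                # recovers the original mixing ratios
```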
The empirically calibrated stellar spectrum of 55 Cnc from <cit.> was used to transform the emission spectra calculated by the retrieval model to wavelength-dependent occultation depths. § RESULTS §.§ Wide band occultation depths We used six pipelines to reduce and fit our JWST/NIRCam dataset. The methods are described in detail in Appendix <ref>. Here we present results from our primary analysis from the pipeline (Appendix <ref>). A summary of our results, along with the observation log, is tabulated in Table <ref>. Our main finding is the strong variability in occultation depths. The white-light occultation depths (computed by fitting an occultation model to the band-averaged occultation time series) at 4.5 μm are highly variable even during the short time scale of a week (Table <ref>). During the time span of 6 days (8 planetary orbits), the measured occultation depths at 4.5 μm continuously increased from essentially a non-detection in Visit 1 (7±9 ppm) to 119^+34_-19 ppm in Visit 4. The occultation depth from our final visit (Visit 5), observed 5 months after the other visits, is ∼ 95±16 ppm and consistent with the depths from Visits 3 and 4 but differs significantly from the depths from Visits 1 and 2. Fig. <ref> shows occultation depths as a function of time, illustrating this point. The best-fitted occultation models along with the de-trended data are shown in Fig. <ref> for all visits. We used an empirically calibrated stellar spectrum of 55 Cnc from <cit.>, stellar and planetary parameters from <cit.> and the NIRCam response function[<http://svo2.cab.inta-csic.es/theory/fps/>] to compute brightness temperatures using the measured white-light occultation depths at 4.5 μm. As shown in Table <ref>, the brightness temperature changes significantly from 873 K to 2256 K within a week. Notably, the brightness temperature almost doubled from Visit 1 to 2, i.e., only after three planetary orbits. Similarly, the 2.1 μm channel occultation depths are also variable. Within a week, the 2.1 μm occultation depths remained almost constant at around 40 ppm for Visits 1, 3 and 4, while we found a non-detection of the occultation for Visit 2, which was observed between Visits 1 and 3 (see Fig. <ref>). However, the final observation that was taken 5 months later (Visit 5) shows a significantly higher occultation depth of 96 ± 8 ppm, which is almost equal to the depth observed at 4.5 μm in the same epoch. The corresponding brightness temperatures vary significantly between 1247 K and 3138 K (see Table <ref>). Interestingly, there is no correlation between the occultation depth variability observed at 2.1 and 4.5 μm (Fig. <ref>). Fig. <ref> presents the de-trended SW data with best-fitted models. The variability, plotted in Fig. <ref>, is clearly not correlated with the parity of the orbit number. Occultation depths are also variable between occultations from orbits of the same parity, e.g., in even (Visits 1, 3, 4) or odd (Visits 2, 5) visits. The rapid variability thus cannot be explained by simply alternating between two sides of the planet. This does not rule out that the planet rotates asynchronously but means that an explanation for the rapid variability has to be found elsewhere. All visits showed various degrees of significant correlated noise of unknown origin, in both the 2.1 and 4.5 μm channels. The leftover correlated noise can be seen in Fig. <ref> and is also quantified in the Allan deviation plots in Fig. <ref>.
We perform an injection-retrieval test to estimate proper uncertainties on occultation depths in the presence of correlated noise (see, Sec. <ref>). We report uncertainties from this analysis in Table <ref>. We, however, found that various methods to account for correlated noise could somewhat change the results of occultation depths and emission spectra (see, Appendix <ref> for more details). §.§ Occultation depth spectra and atmospheric retrieval We computed the relative occultation depth spectra as outlined in Appendix <ref> using the reduction and absolute occultation depth spectra from pipeline as described in Appendix <ref>. Since different methods to handle the correlated noise could lead to different results, we chose to perform atmospheric retrieval analysis on results from two pipelines, and , which use two representative techniques to deal with the correlated noise (see, Appendix <ref> for details). The occultation depth spectra, shown in Fig. <ref>, are also variable from visit to visit and do not show any consistent spectral features. §.§.§ Summary of the retrieval results The retrieval results for the two different reductions across all five visits and for the four different model scenarios described in Sect. <ref> are summarised in Table <ref>. The table shows the resulting Bayesian evidence values ln𝒵 and the Bayes factors B with respect to the models with the highest likelihood value. The former are marked in bold for every visit. Fig. <ref> additionally shows the posterior spectra for all models, visits, and reductions. The detailed posterior distributions for all atmospheric retrievals can be found in Fig. <ref> & <ref>, as well as in Appendix <ref>. The results presented in Table <ref> suggest that for the reduction, the planetary blackbody model is always the preferred model. This is likely caused by the relatively large errors of the reduction that results in the retrieval favouring a simpler model as can clearly be noticed in the spectra shown in the right column of Fig. <ref>. However, for most visits, the preference for the simple blackbody model is not statistically significant. The more complex atmospheric scenarios usually have a Bayes factor of less than three, which suggests that they are essentially equally likely. For the first three visits, a flat-line fit to the measured spectrum is effectively ruled out by the Bayesian evidence. The last two visits, on the other hand, can be fit with any of the four models. There seems to be little statistical preference for any of the different modelling scenarios. The results for the reduction show a much broader range of different models that are statistically preferred. As suggested by Table <ref>, the first visit strongly prefers a CO/CO2 atmosphere, the second visit can be explained by either a CO/CO2 atmosphere, a planetary blackbody, or a siliciate vapour atmosphere, while the third model overwhelmingly prefers the SiO/SiO2 scenario. The fourth visit is consistent with a planetary blackbody spectrum, as well as an atmosphere with CO & CO2, or SiO, SiO2 & MgO. Finally, the last visit strongly prefers a flat-line model. §.§.§ Detailed posterior distributions Detailed posterior distributions for the preferred model from the reduction of Visit 1 (CO & CO2) and Visit 3 (SiO, SiO2 & MgO), where atmospheric models are favoured, are shown in Fig. <ref> & <ref>. The posterior distributions for the first visit reveal a bimodal distribution for the surface pressure p_surf and the abundances of CO and CO2. 
As the two-dimensional correlation plots suggest, the surface pressure has a solution with a very low value of about 10^-6.5 bar that is dominated by CO in composition, as well as a higher-pressure mode at about 10^-3 bar that contains mostly CO2. For comparison, if the outgassing flux were to be balanced by flux-limited atmospheric escape then the implied surface pressure is ∼ 10^-7 bar <cit.>. At about 2000 K, the atmosphere temperature is much warmer than the retrieved temperature for the surface. It is also important to note, that the posterior distribution for the white-light occultation depths d_wl is shifted from its prior value of 7± 9 ppm, though they are both still within their 1-σ intervals. The posterior distribution for the SiO/SiO2/MgO model shown in Fig. <ref> for the third visit, on the other hand, exhibits a unimodal pressure distribution with a median value of about 0.1 bar. Here, the atmosphere is clearly dominated by SiO2 with only an upper limit for SiO and essentially no constraints on MgO. The posterior spectra shown in Fig. <ref> clearly show the drop-off in the occulation depth near a wavelength of 4.8 μm caused by SiO2. Just like in the previous CO/CO2 scenario for Visit 1, the retrieved atmosphere temperature is again much higher than the one of the surface. §.§.§ Blackbody temperatures The resulting posterior distributions of the blackbody temperature models are shown in Fig. <ref> for all visits and the two different reductions. In the case of the reduction, the blackbody is always the preferred model according to the Bayesian evidence, though, as previously mentioned, this preference is statistically not very significant. As the distributions depicted in the figure suggest, the temperatures retrieved from the observational data are found in two different clusters. A low-temperature mode near 750 K is found for Visits 1 and 2 and a second one at about 1200 K to 1300 K for the other three visits. The temperatures are quite well constrained with 1-σ intervals usually in the range of about ± 100 K, despite the rather large errors on the observational data points (see Fig. <ref>). For , the temperatures are clustered much closer together around a mean temperature of 1500 K. In comparison to the reduction, however, these temperatures are less well constrained with 1-σ intervals typically covering a range of several 100 K. This is likely caused by the white-light occultation depths that are directly correlated with these temperatures. Following Table <ref>, they have in general quite large associated errors that translate into less well-constrained temperatures. §.§.§ Surface pressures For the two model scenarios that involve atmospheres, we also retrieve the surface pressure. For the CO/CO2 model, the corresponding posterior distributions are shown in Fig. <ref>, while those for the SiO/SiO2/MgO scenario are shown in Fig. <ref>. In general, the reduction only weakly constrains the surface pressure with posteriors that usually cover the entire prior range of the pressure from 10^-10 bar to 500 bar. The posterior distributions seem to be essentially bimodal for almost every visit, with a very low-pressure mode and a high-pressure one. These more or less unconstrained pressures are the result of the rather large errors of the observational data from the reduction. Those make it difficult to provide good constraints for actual atmospheric models. For the reduction, the results are more diverse. Some visits seem to result in very well-constrained surface pressures. 
This includes Visits 1 and 5 for the CO/CO2 model (see upper panel of Fig. <ref>) and Visits 1 and 3 for the SiO/SiO2/MgO case (see upper panel of Fig. <ref>). Other visits show the same behaviour as for the reduction: rather unconstrained surface pressures with usually a bimodal posterior distribution. Even though not very visible in Fig. <ref>, the posterior distribution for Visit 1 is also bimodal in shape, with a smaller, high-pressure mode of an atmosphere dominated by CO2, as discussed above. We note that our retrieved surface pressures differ from the one reported by <cit.>, which corresponds to our Visit 4 and is based on the JWST program by <cit.>. However, given that even the two reductions of the same data in our study produce different results regarding the atmospheric properties, this is not too surprising. Furthermore, <cit.> employed a different retrieval approach. This includes not using the white-light eclipse depths of the NIRCam data, imposing a lower limit on the surface temperature and allowing for a non-radiatively interacting background gas. Especially the latter assumption will affect the posterior distributions of the surface pressure. §.§.§ Surface and atmosphere temperatures For the CO/CO2 model, we present the posteriors for the surface and atmosphere temperatures in Fig. <ref>. As discussed in Sect. <ref>, we allow these two temperatures to have distinct values. We only present the posteriors for the reduction since, as shown above, the one does not provide good constraints on the atmospheric properties. Just like the surface pressure, the temperatures are rather well-constrained for some visits, such as the surface temperatures for Visits 4 and 5. Observational data from other visits yield much broader distributions, such as Visit 2, some of which also seem to possess a bimodal shape or only provide upper limits. Visit 1 is the only case where the atmosphere seems to have a distinctly higher temperature than the surface. For other visits, this trend is less clear. For example, Visit 5 yields a very high surface temperature but the atmospheric one is less well-constrained and only seems to provide an upper limit that is roughly equal to the surface temperature. In the case of Visit 3, this situation is reversed. Here, the atmosphere temperature is constrained with a median value of roughly 1400 K, while the surface temperature only has an upper limit of about the same value. § INTERPRETATION OF OBSERVATIONS As mentioned in Section <ref>, if the variability in the emission from the planet is caused by the planet showing different faces during consecutive occultations, we would expect the occultation depth to be correlated with the orbit number. However, Fig. <ref>, which plots the occultation depths as a function of orbit number, shows that this is not the case. This means that the observations give no support for a 3:2 spin-orbit resonance being the root cause for the variability. It is still possible that the planet is trapped in some higher-order spin-orbit resonance, but to show this by establishing a pattern would require many more occultation observations than we currently have. There are several hypotheses that could potentially explain the full or part of the observations. We outline two such models in the subsections below: a transient outgassing atmosphere model and a circumstellar material supported by the volcanism model. 
Moreover, the NIRCam data also constrain the presence of spectral features from a mineral atmosphere resulting from a purported lava ocean, as described in Section. <ref> below. §.§ Constraints on silicate atmosphere on 55 Cnc e Being in proximity to its host, the substellar temperature on 55 Cnc e can reach > 2000 K. The surface of the planet at such a high temperature is expected to be molten if there is no atmosphere on the planet. A molten surface on the planet could then produce a thin rock vapour atmosphere on the planet. <cit.> recently calculated self-consistent models of outgassed atmospheres for all USPs at the time. They solved the radiative transfer equations along with equilibrium chemistry models for the outgassed atmosphere to compute temperature-pressure profile and emission spectra. They showed that gases such as SiO, SiO_2, Na, MgO etc. are some of the main constituents of these outgassed atmospheres. Their models for 55 Cnc e[All models are publicly available at <https://github.com/zmantas/LavaPlanets>] are shown in Fig. <ref> overplotted with our observations. The models assume bulk silicate (oxidised) Earth (BSE) composition for the planet with unevolved and evolved surface with 80% outgassed efficiency (evolved BSE composition). All of their models with different outgassing efficiencies predict occultation depths of 70–80 and 145–150 ppm for NIRCam 2.1 and 4.5 μm channels. As depicted in Fig. <ref> these values are larger compared to our observations. Some occultation depths are, however, consistent with models at 1–3σ. One occultation depth at 2.1 μm in Visit 5 produces a larger depth compared to the models. This hints towards a lack of short-wave absorbers such as SiO and/or SiO_2 from the atmosphere that are responsible for thermal inversion and, in turn, larger occultation depths in NIRCam bandpasses. Indeed, only one visit (Visit 3) favoured the SiO/SiO2/MgO model in the retrieval analysis. The band-averaged occultation depth for this visit at 4.5 μm agrees with the model prediction (145 ppm for BSE case) at 2.4σ. However, the short-wave occultation depth in this visit is inconsistent with the model prediction at 7σ. We here note that <cit.> found that the occultation depths in the MIRI bandpass are significantly lower than what is predicted by <cit.> models and thus do not support the presence of the silicate-rich atmosphere. At the same time, lower occultation depths in the NIRCam bandpasses could imply the presence of a gaseous species that have opacity sources in our NIRCam bandpasses. Alternatively, the lower occultation depths, translated into lower brightness temperatures, suggest a thick atmosphere with a strong heat re-distribution <cit.>. The estimated dayside brightness temperatures (see, Table <ref>) at 4.5 μm (Table <ref>) in all visits are smaller than the expected dayside temperature[Computed using T_day = T_⋆√(R_⋆/a) (1-A_B)^1/4 f^1/4, while using zero bond albedo and the heat re-distribution factor f=2/3, for a bare rock with no heat re-distribution <cit.>.] of 2537 K indicating the presence of heat transfer. In either case, our observations seem to indicate the existence of volatiles in the atmosphere of 55 Cnc e. However, it is still challenging to explain the very large occultation depth (and, thus, hot brightness temperature — 3138 K; see, Table <ref>) observed at 2.1 μm in Visit 5. 
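To make the comparison in the footnote explicit, the short Python sketch below evaluates the dayside temperature formula for a few heat re-distribution factors f at zero Bond albedo. The stellar effective temperature, stellar radius and semi-major axis used here are indicative values for the 55 Cnc system and are assumptions of this sketch, not the parameters adopted in the paper; measured brightness temperatures well below the f = 2/3 value point towards heat re-distribution (or a non-negligible albedo).

import numpy as np

# Indicative (assumed) parameters for 55 Cnc / 55 Cnc e, not the fitted values used in the paper.
R_SUN, AU = 6.957e8, 1.496e11        # metres
T_STAR = 5250.0                      # stellar effective temperature [K]
R_STAR = 0.95 * R_SUN                # stellar radius [m]
A_ORBIT = 0.01544 * AU               # orbital semi-major axis [m]

def dayside_temperature(bond_albedo, f):
    """T_day = T_* sqrt(R_*/a) (1 - A_B)^(1/4) f^(1/4)."""
    return T_STAR * np.sqrt(R_STAR / A_ORBIT) * (1.0 - bond_albedo) ** 0.25 * f ** 0.25

# f = 2/3: bare rock with no heat re-distribution; f = 1/4: full re-distribution.
for f in (2.0 / 3.0, 0.5, 0.25):
    print(f"f = {f:.2f}: T_day ~ {dayside_temperature(0.0, f):.0f} K")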
§.§ Constraints on an outgassed secondary atmosphere <cit.> previously suggested that a transient, outgassed secondary atmosphere is capable of simultaneously explaining the observed variability of 55 Cnc e in both the optical/visible and infrared range of wavelengths. Specifically, atmospheres of several tens of bars of pure carbon monoxide (CO) are capable of producing occultation depths of about 21 ppm in the CHEOPS and TESS bandpasses, which are consistent with most of the occultation depths measured by CHEOPS <cit.> and TESS <cit.>. However, a change in atmospheric surface pressure of several tens of bars through loss processes or outgassing over the observed variability time scale in the CHEOPS data is difficult to explain. Such outgassed atmospheres are incapable of producing occultation depths as high as ≈ 40–50 ppm, which were measured thrice in Fig. 3 of <cit.>. Similarly, they cannot produce phase variations as high as 110 ppm as measured by MOST <cit.>. It cannot be ruled out that these anomalously high occultation depths are associated with stellar activity. For the first data reduction (), the outgassed atmosphere with CO and CO2 is associated with the highest Bayesian evidence in Visits 1 & 2. Bayesian model comparison does not disfavour this interpretation of Visit 4 as well. Fig. <ref> shows the interpretation of the spectrum from Visit 1 using a CO+CO_2 atmosphere. For Visit 3, a silicate-vapour atmosphere is strongly preferred over an outgassed atmosphere (with the logarithm of the Bayes factor being 9.8; Fig. <ref>). For the more conservative second data reduction (), the retrieval associated with the highest Bayesian evidence is a blackbody curve over all 5 visits. The simplest interpretation of the spectra is using a blackbody curve, which is consistent with the data in Visits 2 and 4 of the reduction and all 5 visits of the reduction. Fig. <ref> shows the posterior distributions of the blackbody temperature. For Visits 2 and 4 of the reduction, the blackbody temperature is broadly between 1500 K and 2000 K. Note that a blackbody curve does not automatically imply that one is probing a bare rocky surface, since an optically thick, isothermal atmosphere may also produce a blackbody curve <cit.>. For the reductions, the blackbody temperature is about 750 K for Visits 1 and 2 and increases to about 1250 K for Visits 3, 4 and 5 over a period of about 2.2 days (between Visits 2 and 3). Such a duration is not inconsistent with the radiative timescale, which is under an Earth day for ∼ 1 bar atmospheres <cit.>. If 55 Cnc e has a bare rocky surface and negligible albedo, then its temperature would be the equilibrium temperature of about 2000 K. If we take these blackbody temperatures (750 K and 1250 K) seriously, then it implies that the observations are not probing a bare rocky surface that has reached a steady state with the stellar instellation, unless one assumes implausibly high surface albedos. If we focus on the interpretation of the spectra using CO-CO_2 atmospheres, then Figs. <ref> and <ref> show the posterior distributions of surface pressures, atmospheric temperatures and surface pressures. For the data reductions, the surface pressure is unconstrained. For Visits 1, 2 and 4 of the reduction, the inferred surface pressure is ∼ 1 μbar. The surface temperature is ∼ 1000 K, which is only possible if the surface has not come to radiative equilibrium with the stellar instellation because of the presence of an atmosphere. 
The atmospheric temperature jumps from ∼ 2000 K to ∼ 2500 K to ∼ 1500 K from Visits 1 to 2 to 3. While this is not implausible because of the short radiative timescales, we do not have a mechanism to explain how and why this happens. §.§ Can a circumstellar inhomogeneous dusty torus explain variability? Two of our observations, Visit 1 at 4.5 μm and Visit 2 at 2.1 μm, show occultation depths that are consistent with zero at 1-σ. These non-detections are challenging to explain with any kind of atmospheric phenomena. Moreover, the occultation depths observed at 2.1 μm and 4.5 μm are not correlated with each other (Fig. <ref>), which potentially hints towards different origins of variability in different wavelength channels. A grey absorber could explain the optical and 2.1 μm channel variability. A natural candidate for this grey absorber is a circumstellar dust torus <cit.>. The progenitor of the dusty torus could be the volcanism on 55 Cnc e developed by the extreme tidal heating akin to Io <cit.>. The most common gases from volcanism seen on the Earth, Io, and Venus - e.g., SO_2, CO_2, generate a tenuous atmosphere on the planet. Volcanism, supported by significant tidal heating, is expected to expel a prodigious quantity of dust grains into the upper atmosphere, which ultimately escape the planet's gravitational sphere of influence due to impinging stellar ions. Upon escape, such a mechanism may eventually generate a patchy, circumstellar dust torus, which has been shown to be sufficiently opaque in visible light to produce optical variability <cit.>. Volcanic gases are additional non-trivial sources of opacity in our NIRCam 4.5 μm channel. Analytical models showed that an optically thin <cit.> SO_2 atmosphere with a range of pressures can produce the IR variability observed with Spitzer. Since the Spitzer/IRAC bandpass at 4.5 μm and our NIRCam/F444W bandpass have a large overlap in wavelength, it remains a possibility that a similar thin SO_2 (or, any other volcanic gases such as CO2 which also absorbs at 4.5 μm) atmosphere with several tens of μbar could explain the observed variability in our NIRCam dataset. To evaluate this idea in detail is however beyond the scope of the present work and instead planned for an upcoming publication (Oza et al., in prep.). The variability at 2.1 μm is difficult to explain with a thin atmosphere consisting volcanic gases such as SO2 or CO2 since they do not have significant opacity in the 2.1 μm bandpass. Instead, the dust grains present in the torus could be a cause of this variability, which was also hypothesised by <cit.>. If the grain size is larger than 0.3 μm from the size range of 0.1–0.7 μm discussed in <cit.> and <cit.>, the particles will be opaque in the 2.1 μm channel, but transparent in the 4.5 μm channel. Although many Earth-like dust species do not survive long enough in the circumstellar environment, dust made of quartz, silicon carbide and graphite can survive a significant fraction of an orbit to generate a patchy torus <cit.>. Following the same formalism from <cit.>, the mass loss needed to account for the maximum change in occultation depth (95.9 ppm, in visit 5) 2.5–5.7 × 10^6 kg s^-1 is within a factor of two of the maximum escape rate derived by CHEOPS, reported to be as large as ∼ 2.9 × 10^6 kg s^-1 <cit.>. If the particle size is larger than 0.7 μm, they can, in principle, even explain the variability at 4.5 μm channel. 
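A rough way to see why grains in this size range can absorb at 2.1 μm while remaining largely transparent at 4.5 μm is to compare the Mie size parameter x = 2πa/λ in the two bandpasses, since extinction is efficient for x ≳ 1 and falls off steeply for x ≪ 1. The Python sketch below simply evaluates x for the grain radii quoted above; it is an order-of-magnitude argument, not a substitute for a full Mie calculation with realistic optical constants.

import numpy as np

wavelengths_um = np.array([2.1, 4.5])        # NIRCam channel centres [micron]
grain_radii_um = [0.1, 0.3, 0.7]             # grain radii discussed above [micron]

for a in grain_radii_um:
    x = 2.0 * np.pi * a / wavelengths_um     # size parameter at each wavelength
    print(f"a = {a:.1f} um: x(2.1 um) = {x[0]:.2f}, x(4.5 um) = {x[1]:.2f}")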
However, the non-correlation of occultation depths in the 2.1 μm and 4.5 μm channels suggests that although the two sources may be linked, they are indeed distinct absorbers, e.g., grains and gas at 2.1 and 4.5 μm, respectively, as mentioned above. The effect of the dust torus on transit observations, however, has yet to be established observationally. In particular, if the dust escape happens during a transit event, dust could float in the Hill sphere of the planet or form a comet-like tail <cit.>. Both processes should affect the transit light curve in the form of a significantly larger transit depth and an asymmetric transit shape, respectively, unless dust very quickly leaves the vicinity of the planet. It is unknown what escape mechanism is currently operating at 55 Cnc e, and therefore more phase curve observations, especially at shorter wavelengths where Si in the dust has emission lines, are needed to monitor the variability. Multiple phase curves would scan the whole circumstellar region over time to determine the location of the dusty torus and how it evolves, helping in a better understanding of the escape mechanism and thus the variability. However, based on its close proximity, several mechanisms, including canonical photoevaporation and boil-off <cit.>, are able to reproduce the estimated escape rate. For close-in rocky bodies like 55 Cnc e, more energetic plasma escape mechanisms are also plausible, including ion-neutral interactions such as atmospheric sputtering <cit.>, which, similar to Io, drive a feedback process sourced by the melting and degassing of the rocky body itself via induction heating <cit.> and two-body tidal heating <cit.>. The aforementioned escape mechanisms are source-limited by geological activity and expected to vary on orbital timescales in phase-curve observations <cit.>. Source-limited here means that the escape rate is ultimately limited by the outgassing rate below the escape layer, such that if the supply rate were zero, escape would not occur. Effectively, the discussed energetic escape mechanisms naturally generate extended neutral and grain clouds that provide a toroidal opacity source in the circumstellar environment. §.§ Can stellar activity cause the occultation depth variability? Stellar activity can, in principle, cause the occultation depth variability of 55 Cnc e. <cit.> checked whether stellar granulation could explain the optical occultation depth variability found with CHEOPS. They, however, rejected stellar activity as a source of variability due to the very low occultation depths in some visits and their detection of a sinusoidal temporal trend of the variability. Furthermore, the photometric monitoring of the star for about 11 years in the optical from the ground revealed a photometric variability of 0.006 mag, which is too small to explain the ∼ 50 ppm occultation depth variability observed with CHEOPS <cit.>. The stellar activity signal is expected to decrease at longer wavelengths. This means that it is challenging to explain the IR variability with the mmag-level photometric variation observed by <cit.> in the optical. Moreover, the activity would have to occur each time during the short time window around the occultation, which is improbable. In any case, the inflation of uncertainties with the injection-retrieval method accounts for any noise, including the correlated noise. 
The fact that the maximum difference in the occultation depths is significant even with inflated uncertainties suggests that the origin of the occultation depth variability is not related to the star. § CONCLUSIONS We obtained time on JWST/NIRCam to study the dayside emission variability of 55 Cnc e (GO 2084: PI Brandeker and GO 1952: PI Hu). In particular, we test the hypothesis that 55 Cnc e is in a 3:2 spin-orbit resonance, thus showing different faces at every occultation and thereby explaining the observed dayside variability and also the hot-spot displacement from the sub-stellar location. The prediction was that this would result in occultation depths highly correlated with their orbital number parity, at least over short time scales. We observed five occultations of 55 Cnc e in two wavelength bands, or channels, a spectroscopic band at 4.5 μm and a single photometric band at 2.1 μm. Four of them are observed within a week, i.e., in the duration of eight planetary orbits, while the last was observed after five months. We analysed the data using six different pipelines. Our main finding is that the occultation depths change strongly, from a non-detection to 100 ppm, and rapidly (within a week). The variability is however not observed to correlate with the occultation number parity, implying that a planet 3:2 spin-orbit resonance is not the reason for its variability. The variability is observed in both 2.1 and 4.5 μm channels, but is curiously not correlated between channels. The estimated brightness temperature at 4.5 μm varies between 873 K – 2256 K. These values are less than the predicted dayside temperature in case of zero heat re-distribution and zero albedo, 2537 K, which hints at the presence of a planetary atmosphere enabling the heat re-distribution. The spectroscopic data at 4.5 μm is affected by correlated noise of unknown origin. Although the results from different reductions overall agree well with each other, there are several differences in white-light occultation depths and emission spectra that can be attributed to different treatments of correlated noise. We select two representative reductions, and , to perform atmospheric retrieval. Our atmospheric retrieval was performed using two simple atmospheric models containing an isothermal atmosphere made up of either CO/CO2 or SiO/SiO2/MgO. Additionally, we also tested a blackbody model and a flat line model with no atmospheric features. Retrievals performed with results mainly favour a blackbody model owing to larger errorbars on the occultation depths. However, other models with CO/CO2 or SiO/SiO2/MgO were not discarded either, statistically. The retrievals with prefer CO/CO2 atmospheres in at least two visits, SiO/SiO2/MgO atmosphere in one visit and blackbody and flat line models in the remaining two visits. The CO/CO2 atmosphere could be generated from outgassing of the surface <cit.>. The outgassing could be stochastic and thus can potentially explain the variability. As already advocated by <cit.>, simultaneous observations in the optical and infrared are needed to corroborate (or refute) the presence of a transient outgassed CO/CO2 atmosphere. The occultation depth variability in the 2.1 μm channel, especially its uncorrelated behaviour with its 4.5 μm channel counterpart, is challenging to explain with a simple atmospheric model. It is possible that the variability seen at 2.1 μm and that at 4.5 μm have different origins. 
A circumstellar inhomogeneous cloud of dust could potentially describe the variability at 2.1 μm. Volcanism induced by extreme tidal heating of 55 Cnc e could be a natural source of dust in the atmosphere of the planet which would eventually escape the planet and generate a patchy dusty torus in the circumstellar environment. The presence of dust in the circumstellar environment could also be helpful in the interpretation of several non-detection of occultation depths found in our observations as it could hide our view of the planet. More observations at shorter wavelengths, e.g., in ultraviolet, would help to more strongly constrain the presence of a circumstellar patchy dust torus. Simultaneous observations in near and mid-IR around 4 and 8 μm where volcanic gases CO2/SO2 have opacity would be helpful in constraining their presence. Such multiple observations in the optical and IR would not only constrain the presence of a circumstellar dust torus and atmosphere on the planet but also probe how these components evolve with time, essentially distinguishing both scenarios discussed in this work. While we do find a hint of an atmosphere on the planet in at least some visits, corroborating <cit.>, the simple picture of a static atmosphere cannot explain all observational features. A more complex model, including an outgassed atmosphere, circumstellar material, and perhaps dynamical processes in the atmosphere, would probably be needed to explain the entire range of observations. Moreover, given the strong variability of the system, simultaneous multi-wavelength observations would go a long way to distinguish between possible explanations and help probe the true nature of 55 Cnc e. We would like to thank an anonymous referee for their detailed referee report and suggestions which significantly improved the manuscript. JAP acknowledges Néstor Espinoza for discussing the peculiarities of JWST data analysis. JAP would like to thank Ludmila Carone for an insightful dialogue about theoretical models of the planet. JAP and ABr were supported by the Swedish National Space Agency (SNSA). The contributions of DP and ML have been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. DP and ML also acknowledge support of the Swiss National Science Foundation under grant number PCEFP2_194576. EMV acknowledges support from the Centre for Space and Habitability (CSH). This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grant 51NF40_205606. EMV acknowledges the financial support of the SNSF. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project Spice Dune, grant agreement No 947634, and Four Aces; grant agreement No 724427). ADe and DEh have received funding from the Swiss National Science Foundation for project 200021_200726. This work has also been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grant 51NF40_205606. This research has made use of the Spanish Virtual Observatory (<https://svo.cab.inta-csic.es>) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00. CMP and MF gratefully acknowledge the support of the SNSA (DNR 65/19, 177/19). 
BOD acknowledges support from the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00046. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). Part of the High Performance Computing resources used in this investigation were provided by funding from the JPL Information and Technology Solutions Directorate. Finally, we thank ERASMUS student Charlotte Zimmermann for her contributions to the initial studies of this work. aa § DATA ANALYSIS METHODS This section details six independent methods of analysing the JWST/NIRCam data. In Table <ref>, we summarise the white-light occultation depths between about 4 and 5 μm (see, below for exact wavelength range for different methods) and photometric occultation depths at 2.1 μm. Figure <ref> compares the relative occultation depth spectra for all visits from different methods. It can be seen from Figure <ref> and Table <ref> that the results obtained with various independent analysis methods overall agree with each other, however, there are some differences which could be attributed to the different handling of correlated noise in the data. For example, reduction uses Gaussian processes (GP) to model the correlated noise and thus produces results, white-light and spectroscopic occultation depths, that are the most distinct from the rest of the methods. On the other hand, reduction methods from, e.g., , inflate errorbars on occultation depths to account for correlated noise. We use results from and as two representative methods in our atmospheric retrieval analysis and interpretation. We describe each analysis method below. §.§ As described in Section <ref>, the observations were carried out using NIRCam grism timeseries observing mode, which has two channels, a long-wave (LW, 4.5 μm channel) spectroscopic channel and a short-wave (SW, 2.1 μm channel) photometric channel. We analysed both datasets with our pipeline. §.§.§ Long-wave data analysis We downloaded uncalibrated data files ( files) from the MAST archive and used the official pipeline to produce calibrated from them. We ran Stage 1 of the pipeline with some modifications to the files. The main change in Stage 1 is that we skipped the step and step. This is justified because the dark current level in NIRCam detectors is low. Furthermore, since our observations were carried out using only two groups per integration, the step would become obsolete. Once we have data from Stage 1 processing, we replace all values in data and error arrays with average values of their neighbouring pixels. We add these pixels to the default bad-pixel map generated by the pipeline. We performed a column-by-column and row-by-row background subtraction to reduce 1/f noise from the data. In this process, we subtracted a median of background pixels from each row while we fitted a line to the column background pixels and subtracted the estimated background from each column pixel. We then searched for cosmic ray events in the data file by comparing each frame with a median frame. We replaced all detected events with the mean of neighbouring pixels. However, we added these events to the bad-pixel map in the end. We did not run Stage 2 of the pipeline because it does not change the science images. Once we have corrected timeseries data, we used an open-source package [<https://stark-package.readthedocs.io/en/latest/>] to extract spectra. 
fits one and two-dimensional splines to the spectral data to find a robust estimate of PSF which can later be used to extract the spectrum. Before spectral extraction, we computed the location of the spectral trace using the centre-of-flux method. We found that the location of the trace on the detector remains extremely stable and varies only within 0.03 pixels. To estimate the stellar spectrum, we first need to compute the stellar PSF, which we did by fitting splines to the data. As a first approximation, we assume that the PSF does not change with wavelength and with time, so we fitted a 1D spline to the data as a function of distance from trace (known as pixel coordinates). This is a poor assumption because while the PSF stays constant in time, it varies significantly with wavelength. We improved our PSF estimate by fitting a 2D spline to the data as a function of pixel coordinates and wavelength. This robust PSF is then used to find stellar timeseries spectra. We used aperture half-widths of 9 and 2 pixels to fit PSF and extract spectra, respectively. We ran this procedure iteratively. At the end of each iteration, we subtracted the median static residual noise from the raw data. The median static noise is defined as a median difference between data and synthetic images constructed using stellar PSF and spectra. Only two iterations were sufficient to find robust stellar spectra. We compute the white-light light curve by taking a weighted average of light curves in all spectroscopic channels between 3.8612 and 4.9771 μm. The raw white-light light curves for all visits are shown in Figure <ref>. Now that we have generated light curves we can fit an occultation model to the data. The light curves show a strong ramp in the beginning of each visit (see, Figure <ref>), so we discarded the first 35 min of the data before the analysis. In the light curve analysis, we fixed all planetary parameters except occultation depth to their values from the literature <cit.>. We used a wide uniform prior between -500 to 500 ppm to the occultation depth parameter. We analysed white-light light curves from all five visits together. We used <cit.> to fit an occultation model to the data, which uses an occultation model from <cit.> and samples posteriors using <cit.>. In addition to the planetary model, we added linear and quadratic polynomials in time to correct for long-term trends seen in the light curve. The best-fitted values of white-light occultation depths are tabulated in Table <ref>. We could not, however, model hour-long correlated noise (see, e.g., Fig. <ref>), with this simple polynomial model. This is also evident from the Allan deviation plots, shown in Fig. <ref>, of residuals that show additional noise at larger bin sizes. The presence of uncorrected correlated noise means that the uncertainties found on the occultation depths are underestimated. We could not determine the origin of this noise: we searched engineering data but could not find any parameter that correlates with the noise, pointing towards a possible astrophysical origin. However, recent transit observations of a bright star <cit.> with the same observing mode also show a similar noise as our dataset (see, their Fig. 2). So, the correlated noise could be a previously unknown systematics of the instrument. We looked at the 2D spectral data at the group level to further test this possibility. Generally, the data from the first and last groups are discarded as they could be unreliable. 
We cannot do this since our dataset has only two groups. We took the 2D spectral data for both groups independently and extracted spectral timeseries from them in exactly the same manner described earlier. We finally computed and analysed white-light light curves from both groups. We found that the correlated noise similar to the integration level light curve is also present at “group level” white-light light curves. This suggests that the correlated noise does not originate from unreliable first and last groups (see also our companion paper for more details, Patel & Brandeker, in prep). We perform injection-retrieval tests on the white-light light curves to estimate proper uncertainties on the occultation depths in the presence of correlated noise. We first subtract the normalised planetary signal from the raw white-light light curve keeping the long-term trend and the correlated noise as it is in the data. We next produced 1000 realisations of light curves by injecting an occultation signal at random times in the data. The depth of the signal is equal to the median value from the full light curve analysis presented earlier. In this process, we made sure that the full signal remained inside the data. We fit a full model, consisting of an occultation model and polynomial – linear and quadratic – trend, using to each of the realisations. We build a posterior of occultation depth using randomly selected samples from the posteriors of occultation depth in each realisation. These posteriors, shown in Fig. <ref> for all visits, are clearly not Gaussian for most of the cases illustrating the effect of correlated noise. A 68-percentile confidence interval of this posterior should be more representative of uncertainties on white-light occultation depths. In the cases where the uncertainties obtained this way were smaller than the “white” uncertainties from the light curve analysis, we choose to report the larger value. The correlated noise is also present in the spectroscopic light curves of each column. We first boosted the estimated errors of the spectroscopic light curves and the white-light light curve according to the scatter in the light curves. Then we divided spectroscopic light curves from each column with the white-light light curve to remove the correlated noise from the spectroscopic data. This mostly removed correlated noise from the spectroscopic light curves. Finally, we computed relative occultation depths as 1 - (F_in / F_out), where F_in and F_out are the flux inside and outside of the occultation duration, respectively. Before computing this, we made sure that the baseline before and after the occultation signal was the same. Note that we compute relative occultation depths at the native resolution of the instrument before binning them to a lower resolution. This method minimizes the impact of any leftover 1/f noise in the data <cit.>. §.§.§ Short-wave data analysis The Stage 1 processing of 2.1 μm channel files was mostly done in the same way as for the 4.5 μm channel files described above. The main difference is that here we only perform a row-by-row background subtraction. The short-wave PSF spreads to almost all pixel ranges along the column so that there are very few background pixels along the column making it impossible to perform background subtraction along columns. Once we got data, we performed simple aperture photometry to 2.1 μm channel data to obtain a photometric light curve. Before doing this, we computed the centroids of the PSF using the centre-of-flux method. 
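For reference, the centre-of-flux (flux-weighted mean position) of a background-subtracted stamp can be computed as in the Python sketch below; the synthetic Gaussian image is only a placeholder for the actual NIRCam short-wave frames.

import numpy as np

def centre_of_flux(image):
    """Flux-weighted centroid (x, y) of a 2D, background-subtracted image."""
    image = np.asarray(image, dtype=float)
    total = image.sum()
    y_idx, x_idx = np.indices(image.shape)
    x_c = (x_idx * image).sum() / total
    y_c = (y_idx * image).sum() / total
    return x_c, y_c

# Example: a synthetic Gaussian PSF centred at (x, y) = (32.3, 30.7).
yy, xx = np.mgrid[0:64, 0:64]
psf = np.exp(-((xx - 32.3) ** 2 + (yy - 30.7) ** 2) / (2.0 * 3.0 ** 2))
print(centre_of_flux(psf))  # approximately (32.3, 30.7)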
We then computed a growth function – flux inside an aperture as a function of increasing aperture radius – to optimally select an aperture radius. We find that the growth function flattens out at around 45 pixel radius that we eventually used in our analysis. We adapted the [<https://photutils.readthedocs.io/en/stable/index.html>] <cit.> package to compute aperture photometry. simply calculates the total flux inside the aperture. Since we already did a row-by-row background subtraction we did not perform another sky annulus subtraction. Uncorrected short-wave photometric light curves are plotted in Figures <ref>. We fitted an occultation model to thus-obtained SW light curves in almost the same manner as for the occultation model fitting of LW white-light light curves. The instrumental model used here was different from what was used in the LW case. Here we used a linear polynomial in time and PSF centroids as decorrelation vectors. Additionally, light curves from two of our visits (Visits 1 and 4) show abrupt flux jumps analogous to what was found in <cit.> (see, Figure <ref>). These flux jumps may or may not be caused by mirror tilting events as described in <cit.> — a thorough investigation of the origin of these jumps is ongoing (see also our companion work Patel & Brandeker, in prep.). Here we model these flux jumps using multiple step functions; since the jumps are abrupt and affect only a few integrations, it is fairly easy to set the boundaries of step functions. For certainty, we masked all integrations near jumps, which is safe because the masked integrations consist of only a few per cent of the total number of data points and none of these are near the ingress or egress. Another source of noise in the SW light curves is the high-frequency periodic noise possibly caused by the thermal cycling of heaters in the Integrated Science Instrument Module on JWST <cit.>. This is clearly visible in the power spectrum of the light curve as a peak period near 3.8 min in all visits. We performed a principal component analysis (PCA) of the PSF time series to see if we could capture this noise as a principal component (PC) or not. Indeed, one of the first PCs in all visits show a periodic pattern with a period of about 3.8 min. While we are uncertain about the origin of this noise, we simply use this PC as a decorrelation vector in our light curve analysis. In summary, our total model fitted to the SW light curve includes an occultation model, linear models in time, PSF centroids and a PC. Step functions were also included as decorrelation vectors in Visits 1 and 4. We used to fit the light curve data. The best-fitted occultation depths can be found in Table <ref>. These data are also affected by a correlated noise that we could not model using our simple model. This is also evident from the Allan deviation of the residuals shown in Figure <ref>. We performed injection-retrieval tests similar to the LW data analysis described in Appendix <ref> to properly estimate the uncertainties on the occultation depths. §.§ Eureka! — Reduction 1 Here we provide an independent reduction of the short-wave (SW) observations of NIRCam. To reduce the files we used <cit.> pipeline. Stage 1 consists of running default detector processing steps, but we skip the saturation step. On stage 2 we only correct for the flat field. On Stage 3, we crop the full array to a window between pixels 1400 and 2000 in the x-axis and between pixels 1 and 64 in the y-axis. 
We also mask pixels flagged as bad quality and reject outliers above 7σ along time axis. We interpolate bad pixels with a linear function and perform row-by-row background subtraction and 1/f noise correction. Aperture photometry is performed using a circular 40 pixel radius aperture. We subtract the background region with an annulus with an inner edge of 45 pixels and an outer edge of 60 pixels. Finally, Stage 4 uses the calibrated files to produce the light-curve. Visit 1 and 4 exhibit strong discontinuities, dividing the light-curve into five and six clearly defined segments, respectively. To correct the discontinuities, first, we mask the occultation. To flatten the light-curve, we fit a linear function to each segment and then fit an occultation model with in a Hamiltonian Monte Carlo algorithm with . The rest of the visits did not exhibit such discontinuities and thus we fit only one linear function in time. The resulting occultation depths are shown in Table <ref>. Compared to the reduction and analysis, all occultation depths are consistent within 1σ. §.§ — Reduction 2 We produced an independent reduction of the NIRCam spectra using the <cit.> and <cit.> pipelines, including purpose-built steps that we describe here. Starting from the uncalibrated raw data, we ran the default detector processing steps up to (and including) the dark current step. Prior to the ramp fitting step, we subtracted from each row the median of the left-most 650 pixels in the corresponding row and group. By using these unilluminated pixels as a reference of the level of noise added during readout, this helps reduce 1/f noise. We then applied the remaining calibration steps. We ran the resulting calibrated files through . We extracted columns 850 through 1945 and discarded the reference pixels. To straighten the trace, we vertically slid each detector column by an integer number of pixels. We performed background subtraction using the average value of each column, rejecting 7σ outliers and excluding a window with a half-width of 15 pixels centred on the trace. Constructing the spatial profile from the median frame, we performed optimal extraction on a region centred on the source and with a half-width of 5 pixels. We generated 30 spectroscopic light curves between 3.9365 and 4.9265 μm, each spanning 0.033 μm. In each light curve, we discarded values farther than 4σ from the mean of a sliding window. The flux in the light curves follows a downward trend with time, and they show significant time-correlated noise. After trimming the initial 20 min of data, where the ramp is the steepest, we modelled the white light curve in each visit as the product of an exponential ramp, a linear polynomial and a occultation model, where the occultation depth acted as a free parameter. The fits included an estimated error multiplier to match the scatter in the residuals. We assumed a circular orbit, and fixed the orbital period and mid-transit time to the values in <cit.>, and planet radius, orbital inclination and scaled semi-major axis to those reported by <cit.>. For each visit, we also calculated the relative occultation depths following the methodology outlined in Appendix <ref>. §.§ The (atmospHeric trANsmission SpectrOscopy anaLysis cOde) pipeline was originally developed to analyse ground-based transmission spectra observed with 8m-class telescopes, but has been adapted to also enable its use on NIRCam data <cit.>. takes calibrated outputs of the pipeline Stage 1 as input. 
We used the LACOSMIC algorithm <cit.> to remove cosmic ray effects from the two-dimensional images and identified the spectral trace by using a Moffat function fit to each column. The sky background was calculated on a column-by-column basis by calculating a linear trend in the column background, which was defined as at least 20 pixels away from the centre of the spectral trace. This linear trend was then subtracted from the whole column. We extracted the spectrum by summing over an aperture with a half-width of 4 pixels. Consistent with the other reductions, we generated a white light curve and 30 spectroscopic light curves from which we clipped the first 35 min to remove the worst of the ramp that is present in all the data. For each light curve we applied a 5σ outlier rejection filter. We used the light curve and RV fitting code CONAN to fit the white light curves with an occultation model and a GP with a 3/2 Matern kernel to account for both the remaining ramp and the correlated red noise. We leave the occultation depth and the GP parameters (amplitude, lengthscale and a white noise factor) as free parameters and fix all orbital parameters to the literature values found by <cit.>. The white light occultation depths are presented in Table <ref>. We then calculate the common mode for each visit by removing the fitted occultation from the white light curve and divide the common mode out of the spectroscopic light curves. Since the spectroscopic light curves still show some correlated noise even with the common mode removed, we then fit each spectroscopic light curve individually in the same way as the white light curves, with the orbital parameters held fixed and the occultation depth and GP parameters as free parameters. The resulting emission spectra are shown in Figure <ref>. §.§ We take the corrected timeseries data from long-wave analysis and use an open-source tool <cit.>[<https://github.com/nespinoza/transitspectroscopy>]. We first use a centre of flux method to find the location of trace on the detector. We used the optimal extraction algorithm from <cit.> to extract 1D stellar spectra from the timeseries data. In this procedure, we used an aperture half-width of 3 pixels. The optimal extraction naturally clips all outliers not identified by the pipeline. We masked all such 10σ outliers. White-light light curves for each visit were computed by taking a weighted average of spectroscopic light curves between 3.8612 and 4.9771 μm. We used to fit the occultation model to the white-light light curve data. In addition to the occultation model <cit.>, our full model includes linear, quadratic and cubic polynomials to model a long-term decreasing trend. We also added white noise to the errors on the flux. We fixed all planetary parameters except occultation depth from the literature <cit.>. The median and 68-percentile confidence intervals for the best-fitted occultation depths are tabulated in Table <ref>. We also determined relative occultation depth spectra using the procedure described in Appendix <ref> and plotted in Fig. <ref>. §.§ Our SPARTA reduction is very similar to that used in <cit.>, which analyzed the one occultation observed by GO 1952 (PI Hu). The steps that we used to go from the uncalibrated files to the spectroscopic light curves are identical. In stage 1, we perform superbias subtraction, reference pixel subtraction, non-linearity correction, dark subtraction, and up-the-ramp fitting (which amounted to subtracting the two reads since we only have two). 
In stage 2, we remove the background, which also removes some of the 1/f noise because we perform row-by-row subtraction in addition to column-by-column subtraction. In stage 3, we perform sum extraction with a window half-width of 2 pixels, obtaining spectroscopic light curves. Using , we fit the white light curve with a model that has the occultation time and occultation depth as astrophysical free parameters, while the light curve normalization factor, exponential ramp amplitude and timescale, x and y linear correlation parameters, linear slope with time, and error inflation multiple are free systematics parameters. We save the systematics model corresponding to the best fit to the white light curve. To fit the spectroscopic light curves, we first divide each light curve by the aforementioned systematics model, and then fit the result with a model that includes every parameter in the white light curve fit except the occultation time (which we fix to the white light value). § PROPERTIES OF THE STAR §.§ Observed stellar spectrum We produced files from uncalibrated data using the pipeline using the same procedure as described in Appendix <ref>. We then ran Stage 2 of the pipeline with some modifications, namely skipping the and steps, to produce calibrated spectrum files. This was followed by correcting data and error files for and cosmic rays as described in Appendix <ref>. Despite being classified as a point source by the pipeline, the physical unit of calibrated 2D spectrum data is given as MJy/sr. We converted the units to Jy using the pixel area quoted in a header file of data products from Stage 2 of the pipeline. We finally extracted the spectrum using as described in Appendix <ref>. We extracted a timeseries of spectra from part of the data from our most recent visit, Visit 5. A median spectrum of these timeseries spectra is plotted in Figure <ref> and compared with the <cit.> empirical spectrum and black body spectrum. We found that similar to <cit.>, the NIRCam observed spectrum is discrepant with the <cit.> empirical spectrum. We think that this may be because of improper photometric correction for bright stars provided by the pipeline. Furthermore, <cit.> found that their MIRI observed spectrum agrees very well with <cit.> spectrum. Here, we use the <cit.> spectrum in our atmospheric retrieval analysis. §.§ Stellar parameters from modelling We modelled 85 publically available spectra from the High Accuracy Radial velocity Planet Searcher <cit.> spectrograph with a resolution of 115 000. The spectra were co-added and modelled with Spectroscopy Made Easy[<http://www.stsci.edu/ valenti/sme.html>] <cit.> version 5.2.2 and the stellar atmosphere grid Atlas12 <cit.>. SME computes synthetic spectra and adjusts the chosen free parameters based on comparison with the observed spectrum. We modelled one parameter at a time, utilising spectral features sensitive to different photospheric parameters and iterating until all parameters converged. Throughout the modelling, we held the macro- and micro-turbulent velocities, V_ mac and V_ mic, fixed at 2.7 km s^-1 <cit.> and 0.95 km s^-1 <cit.>. A description of the modeling procedure is detailed in <cit.>. The results are listed in Table <ref>. The stellar radius was modelled with the SED fitting software astroARIADNE[<https://github.com/jvines/astroARIADNE>] <cit.> using priors from SME and photometry from the Johnson B and V magnitudes (APASS), G G_ BP G_ RP (DR3), JHK_S magnitudes (2MASS), WISE W1-W2, and the Gaia DR3 parallax. 
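The update of the interferometric radius with a new parallax follows directly from R = θ_LD d / 2. The Python sketch below illustrates this scaling; the limb-darkened angular diameter (∼0.71 mas) and the two parallax values are approximate, assumed numbers rather than the published measurements.

import numpy as np

PC_M, R_SUN_M = 3.0857e16, 6.957e8
MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)

def stellar_radius(theta_ld_mas, parallax_mas):
    """Stellar radius in solar units from a limb-darkened angular diameter and a parallax."""
    distance_m = (1000.0 / parallax_mas) * PC_M      # d [pc] = 1 / parallax [arcsec]
    radius_m = 0.5 * theta_ld_mas * MAS_TO_RAD * distance_m
    return radius_m / R_SUN_M

# Assumed values: theta_LD ~ 0.711 mas for 55 Cnc, with a Hipparcos-era parallax (~81.0 mas)
# and a Gaia DR3-like parallax (~79.4 mas).
for plx in (81.0, 79.4):
    print(f"parallax = {plx:.1f} mas -> R ~ {stellar_radius(0.711, plx):.3f} R_sun")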
We utilized three different atmospheric model grids from Phoenix v2 <cit.>, <cit.>, and <cit.>. The final radius was computed with Bayesian Model Averaging and was found to be 0.953± 0.011 R_⊙. The luminosity is 0.63± 0.02 L_⊙, and the visual extinction is consistent with zero (0.03± 0.03). We derived a stellar mass of 0.639^+0.021_-0.020 M_⊙ interpolating the MIST <cit.> isochrones with astroARIADNE. Our results are very close to previous results; <cit.> derive a stellar radius of 0.943± 0.010 R_⊙ based on interferometric measurements and the parallax from <cit.>. Updating this calculation with the Gaia DR3 parallax, this radius becomes 0.962± 0.010 R_⊙ in good agreement with our results. § DETAILED RETRIEVAL POSTERIOR DISTRIBUTIONS In this appendix we present all posterior distributions from our retrieval calculations for the CO/CO2 and SiO/SiO2/MgO cases. The posterior distributions are ordered in chronological order and shown for the and reductions. Due to the fact that for the reduction, the retrievals are performed on absolute occultation depths, the posterior distributions do not include the white-light occultation depths parameter d_wl. It is also important to note that the depicted centre-log-ratio posterior ξ_j for the last molecule is not a free parameter in the retrieval as mentioned in Sect. <ref>. Instead, we calculated the corresponding posterior distribution following the requirement that for each posterior sample, the sum of all ξ values must be zero. For Visits 1 and 3, the posterior distributions are already shown in Figs. <ref> & <ref> in the main text and are not repeated here. The corresponding posterior spectra for the posteriors are shown in Fig. <ref>.
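As an illustration of this constraint, the Python sketch below reconstructs the centre-log-ratio coordinate of the last species from posterior samples of the free ξ parameters and maps them to mixing ratios via the inverse (softmax) transform. The synthetic samples and the choice of three species are placeholders, not the actual retrieval output.

import numpy as np

def clr_to_mixing_ratios(xi_free):
    """Map free centre-log-ratio samples (n_samples, K-1) to mixing ratios (n_samples, K).

    The CLR coordinate of the last species is fixed by the zero-sum constraint,
    xi_K = -sum_j xi_j, and the inverse CLR transform is a softmax.
    """
    xi_free = np.atleast_2d(xi_free)
    xi_last = -xi_free.sum(axis=1, keepdims=True)
    xi_full = np.hstack([xi_free, xi_last])
    weights = np.exp(xi_full)
    return xi_full, weights / weights.sum(axis=1, keepdims=True)

# Synthetic posterior samples for two free CLR parameters (e.g. two molecules plus a third species).
rng = np.random.default_rng(0)
xi_full, x = clr_to_mixing_ratios(rng.normal(size=(1000, 2)))
print(x.mean(axis=0), xi_full.sum(axis=1).max())  # mixing ratios sum to 1, CLRs sum to ~0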
http://arxiv.org/abs/2407.13314v1
20240718091603
NIRVAR: Network Informed Restricted Vector Autoregression
[ "Brendan Martin", "Francesco Sanna Passino", "Mihai Cucuringu", "Alessandra Luati" ]
stat.ME
[ "stat.ME", "stat.AP" ]
Corresponding author: Brendan Martin – § ABSTRACT High-dimensional panels of time series arise in many scientific disciplines such as neuroscience, finance, and macroeconomics. Often, co-movements within groups of the panel components occur. Extracting these groupings from the data provides a coarse-grained description of the complex system in question and can inform subsequent prediction tasks. We develop a novel methodology to model such a panel as a restricted vector autoregressive process, where the coefficient matrix is the weighted adjacency matrix of a stochastic block model. This network time series model, which we call the Network Informed Restricted Vector Autoregression (NIRVAR) model, yields a coefficient matrix that has a sparse block-diagonal structure. We propose an estimation procedure that embeds each panel component in a low-dimensional latent space and clusters the embedded points to recover the blocks of the coefficient matrix. Crucially, the method allows for network-based time series modelling when the underlying network is unobserved. We derive the bias, consistency and asymptotic normality of the NIRVAR estimator. Simulation studies suggest that the NIRVAR estimated embedded points are Gaussian distributed around the ground truth latent positions. On three applications to finance, macroeconomics, and transportation systems, NIRVAR outperforms competing factor and network time series models in terms of out-of-sample prediction. § INTRODUCTION Panels of stochastic processes, {(X_1,t,…,X_N,t)^'}_t ∈ℤ, which exhibit co-movements between components are central to many scientific disciplines such as environmental science, econometrics, and neuroscience. Often, X_i,t depends not only on its own past values, but also on the past values of a subset 𝒮_i of the other panel components, written {X_j,s : j ∈𝒮_i ⊆{1,…,N}, s <t}. For large N, modelling via the time series vector autoregression (VAR) framework becomes prohibitive as the number of model parameters grows as O(N^2) and can quickly exceed the number of observations. Techniques from high-dimensional statistics that introduce some form of sparsity or dimensionality reduction are therefore required in this setting. Many variants of the factor modelling approach of <cit.>, in which the large panel of time series is modelled as stemming from a relatively small number of common latent factors, have been proposed in statistics and econometrics <cit.>. Sparse regression methods using various regularised estimation procedures have also been proposed as a way to reduce the number of model parameters. Common methods are the Least Absolute Shrinkage and Selection Operator <cit.>, Smoothly Clipped Absolute Deviation <cit.>, and Least Angle Regression <cit.>. Network time series approaches in which each univariate time series is observed on a vertex of a graph are also popular in the literature <cit.>. The graph can be observed or inferred from data, with the edges of the graph encoding the co-dependence structure of the multivariate time series. For example, <cit.> utilise an observed graph to model the multivariate time series, whereas <cit.> infer the underlying graph via a graph neural network. Often, each edge corresponds to a parameter in the time series model. 
Therefore, if the graph is sparse, the number of model parameters can be smaller than the number of observations, enabling estimation. The estimated parameters can then be used for prediction. In the case that it is inferred from data, the graph itself is of interest for tasks such as clustering, link prediction, and estimation of the Granger causal linkages between panel components <cit.>. This paper proposes a method for inferring a graph structure from multivariate time series data and using it to impose restrictions on the coefficient matrix of a VAR model. Since the network determines the restricted VAR, we call the model the Network Informed Restricted VAR (NIRVAR) model. There is an extensive literature on incorporating network effects into a VAR modelling framework. <cit.> and <cit.> independently propose a VAR model in which the network effect on a panel component is the average of its connected neighbours in an observed network. <cit.> propose a Generalised Network Autoregressive (GNAR) model which allows for a time-varying observed network, and they show the consistency of generalised least squares estimation of the model parameters. <cit.> combine the dimensionality reduction of factor modelling with the parsimony of sparse linear regression and develop a hypothesis testing framework to determine the partial covariance structure. <cit.> extend this approach to the setting of dynamic factors, and propose an L_1-regularised Yule-Walker method for estimating a factor adjusted, idiosyncratic VAR model. The estimated VAR coefficients are then used for network estimation. <cit.> introduce a network VAR model that is similar to that of <cit.> but allows for network effects between groups of panel components. They assume that the network adjacency matrix is observable and generated by a stochastic block model <cit.>. <cit.> introduce a stochastic block VAR model in which the time series are partitioned into latent groups such that the spillover effects are determined by a SBM. A group detection algorithm is developed in which the VAR coefficients are estimated using ordinary least squares (OLS) and used to obtain an embedding on which K-means clustering can be applied. The method we propose differs from existing works as it involves firstly detecting latent groups from a latent space representation of each time series, and secondly estimating the VAR coefficients. Carrying out estimation in this order is a key contribution of this paper. We avoid estimating a dense VAR parameter matrix as in <cit.> or specifying tuning and thresholding parameters as in <cit.>. We also emphasise that, in contrast to <cit.>, <cit.>, <cit.> and <cit.>, the NIRVAR estimator does not require the underlying network to be observable, which is a realistic setting in many real-world applications. NIRVAR models a panel of multivariate time series as a VAR(1) in which the VAR coefficient matrix Φ is the weighted adjacency matrix of some random graph, 𝒢 = (𝒱,ℰ), where each of the N vertexes in the node set 𝒱 has an associated d-dimensional latent position θ_i∈ℋ, i ∈𝒱, with ℋ⊆ℝ^d being some latent space, and where ℰ⊆𝒱×𝒱 is a set of random edges. In particular, we model 𝒢 as a stochastic block model. Typically, in the random graphs literature, the latent positions are estimated through spectral embedding of the observed adjacency matrix. However, in the settings we consider, the realised graph and its adjacency matrix are unobserved. 
It is therefore necessary to construct latent positions directly from the observed time series rather than from an observed adjacency matrix. Once latent positions have been constructed, they are clustered into K communities and these communities are used to enforce zero constraints on the VAR coefficients. In particular, if the constructed latent positions of two panel components i and j belong to different clusters, then Φ_ij is set to 0. A VAR model with such zero constraints is called a subset VAR model <cit.>. The remaining unrestricted parameters are estimated via OLS. The NIRVAR estimation procedure draws inspiration from the workflow rationalised in <cit.>: linear dimension reduction via principal component analysis, embedding, clustering and graph construction. In particular, we compute the singular value decomposition (SVD) of concatenated sample covariance matrices, consider the corresponding left singular vectors embedding, and cluster the embedded points via a Gaussian mixture model with K components. The embedding dimension is chosen as the number of eigenvalues of the sample covariance matrix that are larger than the upper bound of the support of the Marčenko-Pastur distribution <cit.>. The motivation for this approach is that large eigenvalues that lie beyond the right edge of the Marčenko-Pastur spectrum are deemed significant and correspond to informative eigenvectors, while eigenvalues below the right edge of the spectrum correspond to noisy eigenvectors. A graph with K cliques is then constructed based on a binary allocation of each embedded point to its most probable Gaussian mixture component. The estimated communities are reconstructions of the blocks of the SBM. The NIRVAR estimator assumes that the probability p^(in) of an edge forming within a block is 1, and the probability p^(out) of an edge forming between blocks is 0. If p^(out) > p^(in), the data generating SBM will have a large proportion of inter-block edges and the NIRVAR estimation framework will not capture any of these edges. We therefore restrict our attention to assortative SBMs <cit.>. Since the NIRVAR estimator does not recover edges between vertices in different blocks of the data generating SBM, the OLS estimator of the unrestricted VAR parameters will be biased whenever such inter-block edges are present. We derive the bias and show that the NIRVAR estimator is consistent and asymptotically normal. Each step of the NIRVAR estimation framework can be modified easily. For example, we also consider embedding via the SVD of the precision matrix. Although we do not pursue it here, one could imagine choosing the embedding dimension using, for example, the profile likelihood method of <cit.> or the ScreeNOT method of <cit.>. The motivation for using a Gaussian mixture model instead of say, K-means clustering, is due to the literature on random dot product graphs <cit.> which shows that spectral embedding yields uniformly consistent latent position estimates with asymptotically Gaussian error (up to identifiability). NIRVAR can model data that is represented on a multiplex network. A multiplex network is one that has multiple types of edges, with the corresponding graph 𝒢 containing multiple layers of connectivity, one layer for each type of edge <cit.>. If there are Q layers, then the time series data we consider consists of NQ univariate series, {X_i,t^(q)}_t ∈ℤ, where q ∈{1,…,Q}. In this paper, we refer to the q-th layer as the q-th feature of the matrix time series. 
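For concreteness, the following minimal Python sketch (NumPy is our choice of library here; the dimensions, array names and white-noise data are purely illustrative rather than output of the model) shows how such a multiplex panel can be stored and how the concatenated sample covariance matrix used later for embedding is assembled:

import numpy as np

# Illustrative dimensions: N panel components, Q features (layers), T time points.
N, Q, T = 50, 2, 500
rng = np.random.default_rng(0)

# Placeholder data: X[q, i, t] holds the observation of component i, feature q,
# at time t (white noise here, purely for illustration).
X = rng.standard_normal((Q, N, T))

# Feature-specific sample covariance matrices S^(q) = X^(q) X^(q)' / T.
S_list = [X[q] @ X[q].T / T for q in range(Q)]

# Column-wise concatenation S = (S^(1) | ... | S^(Q)), an N x NQ matrix.
S = np.concatenate(S_list, axis=1)
print(S.shape)  # (50, 100)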
These types of matrix time series have been considered by <cit.>, for example, who propose a one-pass estimation procedure for a tensor canonical polyadic decomposition model of the matrix time series. <cit.> also employ a tensor-based principal component approach to a factor network autoregressive model. In contrast, to obtain latent embeddings in the case of multiple features, we unfold the tensor and compute the SVD of S = (S^(1)|⋯|S^(q)) where S^(q) is the sample covariance matrix across feature q and |⋯| denotes column concatenation. This choice is motivated by the stability properties of the unfolded adjacency spectral embedding procedure from <cit.>. The rest of the paper is organised as follows. Section <ref> defines the NIRVAR model, outlines the proposed estimation method, and discusses alternative embedding methods. Additionally, we link the NIRVAR model and its corresponding estimator to the literature on spectral embedding of weighted stochastic block models. In Section <ref>, we derive the bias of the NIRVAR estimator and determine the conditions for which it is unbiased. Furthermore, we show the consistency and asymptotic normality of the estimator and compare its asymptotic efficiency with that of a restricted VAR least-squares estimator with known restrictions. In Section <ref>, a simulation study is conducted to compare the large sample distribution of NIRVAR parameter estimates with the asymptotic results derived in Section <ref> and we also investigate how NIRVAR cluster recovery and parameter estimates vary with different model hyperparameters. Furthermore, we compare the empirical distribution of the NIRVAR estimated embedded points with the ground truth latent positions. In Section <ref>, we apply NIRVAR to the prediction of financial returns of a universe of US assets, the prediction of US industrial production, and the prediction of daily bicycle rides from Santander stations in London, and conclude that NIRVAR obtains the best performance across all data sets and tasks. The data and code are available in the Github repository https://github.com/bmartin9/NIRVAR. §.§ Notation For an integer k, let [k] denote the set {1,…,k}. The n × n identity matrix is written I_n, and the indicator function is {ℬ} = 1 if the event ℬ occurs and 0 otherwise. For vectors, v_1,…,v_n∈ℝ^p, let M = (v_1,…,v_n) ∈ℝ^p × n be the matrix whose columns are given by v_1,…,v_n. The column space of M is colsp(M), its rank is (M), and its transpose is M^'. We write M_i,: and M_:,j to denote the i-th row and j-th column of M considered as vectors, respectively. The element-wise ℓ_0 norm is defined as |M|_0 = ∑_i=1^p∑_j=1^n1{ M_ij≠ 0 }. The Frobenius norm is ‖ M ‖_F = (∑_i=1^p∑_j=1^n |M_ij|^2)^1/2. The spectral radius of an n × n matrix A is ρ(A) = max{|λ_1|,…,|λ_n|} where λ_1,…,λ_n are the eigenvalues of A. The vectorisation operator (·) transforms a p × n matrix M into a pn × 1 vector by stacking its columns. Let ⊙ denote the Hadamard product between two matrices of the same dimensions, and ⊗ denote the Kronecker product between two matrices, not necessarily of the same dimension. For matrices M_1,…,M_r, let (M_1|⋯|M_r) and (M_1;…;M_r) denote the column-wise and row-wise concatenation of the matrices, respectively. The set of n × d matrices with orthogonal columns is 𝕆(n × d). For a distribution F the support of F is (F). § MODEL AND ESTIMATION §.§ Model Let X_t = (X_i,t^(q)) be a matrix consisting of NQ random variables at each time t ∈ℤ. 
For example, i ∈ [N] could label an individual and q ∈ [Q] a particular variable or feature associated with that individual. Consider an associated multiplex network with Q layers each having N vertices. The random variable X_i,t^(q) is observed at vertex i of layer q of the multiplex network. A feature is then just a layer of the multiplex network. For each feature q ∈ [Q] we associate a random graph 𝒢^(q) = (𝒱,ℰ^(q)) where 𝒱 = [N]. 𝒢^(q) is modelled as a SBM. We utilise the random dot product graph <cit.> representation of the SBM since our subsequent estimation method involves constructing latent positions for each node, which are then clustered in order to recover the blocks of the SBM, providing a convenient mathematical framework. Let F be a d-dimensional distribution with support (F) = ℋ⊂ℝ^d, such that x^'y∈ [0,1] for all x, y∈ℋ. F is also known as inner product distribution <cit.>. Also, let θ_1,…,θ_N∼ F collected in the rows of the matrix Θ = (θ_1,…,θ_N)^'∈ℝ^N × d. Suppose A is a random adjacency matrix such that, conditional on Θ: p(A|Θ) = ∏_i=1^N∏_ j=1 j≠ i^N (θ_i^'θ_j)^A_ij(1 - θ_i^'θ_j)^1 - A_ij∏_h=1^N {A_hh=1}, where p(·) represents a probability mass function. Then we say that A is the adjacency matrix of a directed RDPG with self-loops of rank at most d and with latent positions given by the rows of Θ, written (A,Θ)∼RDPG(F). In contrast to the definition given by <cit.>, Definition <ref> does not restrict A to be symmetric and have all of the principal diagonal elements equal to zero. This is to allow for directed graphs with self loops. With a slight abuse of notation, RDPG will be used in the rest of this work to denote a directed random dot product graph with self-loops. We say that (A,Θ)∼RDPG(F) is a SBM with K blocks or communities if the number of distinct rows in Θ is K. We define the block membership function z : [N] ↦ [K] to be a function such that z_i = z_j if and only if θ_i = θ_j (where z(i) ≡ z_i). Under this representation, F(θ)=∑_k=1^K π_kδ(θ - ν_k), where ν_k∈ℝ^d, k∈[K] are community-specific latent positions chosen such that ν_k^'ν_ℓ∈ [0,1] for all k,ℓ∈[K], and π=(π_1,…,π_K) represent the prior probabilities of each node to belong to the k-th community, with π_k≥0 for all k∈[K] and ∑_k=1^Kπ_k=1. Here, δ is the d-dimensional Dirac delta function. The between-community connection probabilities can be collected in a matrix B∈[0,1]^K× K with B_kℓ=ν_k^'ν_ℓ, ν_k,ν_ℓ∈ℝ^d for k,ℓ=1,…,K, and we write (A,Θ)∼SBM(B,π). Note that Definition <ref> only covers the case in which the matrix B∈[0,1]^K× K of between-block connection probabilities is positive semi-definite. In our work, we assume that the graph is assortative, corresponding to the case of a positive semi-definite matrix B. The motivation for this assumption is to have a multivariate time series model in which there is greater co-movement between panel components within the same community than between communities. The generalised RDPG <cit.> provides an extension to the indefinite case. In the NIRVAR model, it is assumed that each feature-specific graph 𝒢^(q)=(𝒱,ℰ^(q)) with adjacency matrix A^(q) has an associated latent position matrix Θ^(q) such that (A^(q),Θ^(q))∼SBM(B^(q),π^(q)), q=1,…,Q, where B^(q)∈[0,1]^K× K and π^(q) are feature-specific SBM parameters. The probability of an edge forming between vertices i and j, i≠ j, is p_ij^(q) = B_z_iz_j = θ_i^(q)^'θ_j^(q), and the adjacency matrix corresponding to 𝒢^(q) is A_ij^(q)∼Bernoulli(p_ij^(q)). Given 𝒢^(q), we define the NIRVAR model as follows. 
For some fixed q∈[Q], let {X_t^(q)}_t ∈ℤ denote a zero mean, second order stationary stochastic process where X_t^(q) = (X_1,t^(q),⋯,X_N,t^(q))^'∈ℝ^N and q ∈ [Q]. The NIRVAR model for the q-th feature is X_t^(q) = ∑_r=1^Q (A_q^(r)⊙Φ_q^(r)) X_t-1^(r) + ϵ_t^(q), ϵ_t^(q)∼𝒩(0,Σ), in which the generic element of A_q^(r), r∈[Q] is given by Equation (<ref>) and Φ_q^(r), r ∈ [Q] is an N × N matrix of fixed weights. The covariance matrix of the noise process, Σ, is assumed to be positive definite. Defining Φ_q^(r) A_q^(r)⊙Φ_q^(r) and Φ_q (Φ_q^(1)|⋯|Φ_q^(Q)), we write X_t^(q)∼NIRVAR(Φ_q). If, for each q ∈ [Q], X_t^(q)∼NIRVAR(Φ_q), then Z_t:= (X_t^(1)^',…,X_t^(Q)^')^'∈ℝ^NQ is a VAR(1) process with coefficient matrix Ξ (Φ_1;…;Φ_Q) ∈ℝ^NQ × NQ. Therefore, X_t^(q) will be stable, and hence stationary, if ρ(Ξ) < 1 <cit.>. In Equation (<ref>) in Definition <ref>, we define the NIRVAR model only for the q-th feature, which acts as the response or endogenous variable, whereas the features 𝒞_-q=[Q]∖{q} are interpreted as covariates or exogenous variables. Since we focus on the response of the q-th feature, we drop the subscript on A_q^(r), Φ_q^(r), Φ_q^(r), and Φ_q for the remainder of the paper. It will be convenient when discussing estimation of the NIRVAR model to write Equation (<ref>) in vectorised form. For this purpose, we define Ψ^(q):= (X_1^(q),…,X_T^(q)), Z := (Z_0,…,Z_T-1), U^(q):= (ϵ_1^(q),…,ϵ_T^(q)), A := (A^(1)|⋯|A^(Q)). We can then write Equation (<ref>) for t=1,…,T as Ψ^(q) = Φ Z + U^(q), or, in vectorised form, ψ^(q) = (Φ Z) + (U^(q)) = (Z^'⊗ I_N)β + 𝐮^(q), where we define ψ^(q)(Ψ^(q)), β(Φ), and 𝐮^(q)(U^(q)). The number of non-zero elements of β is given by M = | A |_0. The model can be written in terms of an unrestricted M-dimensional vector γ(A) whose elements are those in the set {β_i : (A)_i≠ 0, 9mu i=1,…,N^2Q }. Additionally, an N^2Q × M matrix R(A) is defined via the mapping R : {0,1}^N × NQ→{0,1}^N^2Q × M, where [R(A)]_ij(A)_i×{∑_k=1^i-1(A)_k = j-1}. The constraints on β can then be written as β = R(A)γ(A). Combining Equation (<ref>) and Equation (<ref>) finally yields ψ^(q) = (Z^'⊗ I_N)R(A)γ(A) + 𝐮^(q). §.§ Estimation The NIRVAR estimation method proceeds by firstly imposing subset VAR restrictions through a binary matrix, Â∈{0,1}^N × NQ, and then estimating γ(Â). To obtain Â, a low-dimensional latent representation of each panel component is found by embedding the sample covariance matrix in some latent space. Clustering the latent positions then allows for the construction of a graph with adjacency matrix Â. The embedding and clustering method used here follows the workflow suggested by <cit.> who propose a statistical model called the latent metric model to explain the manifold hypothesis. The manifold hypothesis posits that many high-dimensional data sets are in fact samples from a lower-dimensional manifold <cit.>. <cit.> give theoretical justification for the commonly employed approach of linear dimensionality reduction via principal component analysis, a subsequent embedding, and graph construction using the embedded points. The next sections describe these steps in details. §.§.§ Embedding Let X^(q) = (x_1^(q),…,x_T^(q)) be the N × T design matrix of feature q where x_t^(q) = (x_1,t^(q),…,x_N,t^(q))^' is a realisation of the random variable X_t^(q). To construct an embedding ŷ_i^(q)∈ℝ^d we are motivated by the unfolded adjacency spectral embedding of <cit.>, which obtains embeddings Y^(q)∈ℝ^N × d by considering the SVD of A = (A^(1)|⋯|A^(Q)). 
Unfolded adjacency spectral embedding has two key stability properties shown by <cit.>: it assigns the same position, up to noise, to vertices behaving similarly for a given feature (cross-sectional stability) and a constant position, up to noise, to a single vertex behaving similarly across different features (longitudinal stability). Since we do not observe A^(q), we instead consider the SVD of S = (S^(1)|⋯|S^(Q)) ∈ℝ^N × NQ where S^(q) X^(q) X^(q)^'/T ∈ℝ^N × N is the sample covariance matrix for feature q. We can write S as S = UDV^' + U_⊥D_⊥V_⊥^', where D ∈ℝ^d × d is a diagonal matrix containing the d largest singular values of S, and the columns of U ∈𝕆(N × d) and V ∈𝕆(NQ × d) are the corresponding d left and right singular vectors, respectively. The choice of of d is discussed below. Assuming S admits a low-rank approximation, unfolded adjacency spectral embedding uses only Ŝ = UDV^' to produce embeddings. In particular, the q-th right unfolded adjacency spectral embedding is the matrix Ŷ^(q)∈ℝ^N × d obtained by dividing Ŷ = VD^1/2∈ℝ^NQ × d into Q equal blocks, Ŷ = (Ŷ^(1);…;Ŷ^(Q) ). The feature-q embedding of time series i is thus ŷ_i^(q) = (Ŷ^(q) )_i,:. It is of interest to relate ŷ_i^(q) to the ground truth latent positions θ_i^(q), since clustering of ŷ_i^(q) will be used to recover the blocks of the data generating SBM. We derive a connection between the covariance matrix Γ^(q) = 𝔼{(X_t^(q))(X_t^(q))^'} of a NIRVAR(Φ) process and θ_i^(q) for the case Q = 1. Under a technical assumption of symmetric Φ, Proposition <ref> shows that the rank d spectral embedding of Γ^(q) and Φ are equivalent. Since we assume Q=1 in Proposition <ref>, the superscript q is dropped for readability. Let X_t∼NIRVAR(Φ) where Φ is assumed to be symmetric. Consider the eigendecomposition Φ = U_ΦΛ_Φ U_Φ^' + U_Φ,⊥Λ_Φ,⊥U_Φ,⊥^', where U_Φ∈𝕆(N × d) and Λ_Φ is a d × d diagonal matrix comprising the d largest eigenvalues in absolute value of Φ. Then the rank d truncated eigendecomposition of the covariance matrix Γ = 𝔼(X_tX_t^') is Γ = U_ΦΛ_Γ U_Φ^' in which Λ_Γ is a d × d diagonal matrix with diagonal elements (λ_Γ)_i = 1/{1-(λ_Φ)_i^2} where (λ_Φ)_i is the corresponding diagonal entry of Λ_Φ. The proof of Proposition <ref> is in Appendix A.1 of the online supplementary material. Under the assumptions of Proposition <ref>, the eigenvectors corresponding to the d largest eigenvalues of Γ and Φ are the same. Therefore, the rank d spectral embedding of Φ can be constructed, up to identifiability, from the eigenvectors and scaled eigenvalues of Γ, where the scaling is given by (λ_Φ)_i = ±√(1-1/(λ_Γ)_i). For the case where Φ is symmetric, <cit.> prove a central limit theorem for the asymptotic behaviour of Y_Φ = U_ΦΛ_Φ^1/2. They prove that, up to a sequence of orthogonal transformations, (Y_Φ)_i is asymptotically normally distributed around the ground truth latent positions, θ_i. The relation between Y_Γ = U_ΦΛ_Γ^1/2 and Y_Φ shown by Proposition <ref> suggests that there may be a similar connection between (Y_Γ)_i and θ_i. A simulation study comparing (Y_Γ)_i and θ_i is discussed in Section <ref>. §.§.§ Clustering Clustering via a Gaussian mixture model can be used to assign a label ẑ_i^(q)∈{1,…,K} to each panel component based on ŷ_i^(q). In particular, we use Expectation-Maximisation <cit.> to maximise the Gaussian mixture model incomplete log-likelihood and estimate the cluster assignments ẑ_i^(q). We define  = (Â^(1)|⋯|Â^(Q)) ∈{0,1}^N × NQ as Â_ij^(q) = {ẑ_j^(q) = ẑ_i^(q)}. 
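The embedding and clustering steps described above can be sketched as follows, assuming the panel is stored as in the earlier sketch. The truncation level d is taken as given (its data-driven choice via the Marčenko-Pastur bound is described below), K = d mixture components are used, and only the block of Â corresponding to the response feature q is constructed; scikit-learn's Gaussian mixture implementation is one possible choice, and all names are illustrative:

import numpy as np
from sklearn.mixture import GaussianMixture

def nirvar_embed_cluster(X, d, q=0, seed=0):
    """Spectral embedding of the concatenated sample covariances and GMM clustering.

    X : array of shape (Q, N, T); d : embedding dimension (number of clusters K = d);
    q : index of the response feature whose embedding is clustered.
    """
    Q, N, T = X.shape
    # Concatenated sample covariance S = (S^(1) | ... | S^(Q)).
    S = np.concatenate([X[r] @ X[r].T / T for r in range(Q)], axis=1)
    # Rank-d truncated SVD of S; the right singular vectors give the unfolded embedding.
    U, svals, Vt = np.linalg.svd(S, full_matrices=False)
    Y = Vt[:d].T * np.sqrt(svals[:d])       # Y_hat = V D^{1/2}, an NQ x d matrix
    Y_q = Y[q * N:(q + 1) * N]              # feature-q embedding, N x d
    # Gaussian mixture clustering with K = d components.
    labels = GaussianMixture(n_components=d, random_state=seed).fit_predict(Y_q)
    # Feature-q block of the restriction matrix: entry (i, j) is 1 iff i, j share a cluster.
    A_hat_q = (labels[:, None] == labels[None, :]).astype(int)
    return Y_q, labels, A_hat_q

# Example with white-noise placeholder data.
rng = np.random.default_rng(1)
X = rng.standard_normal((2, 60, 400))
Y_q, labels, A_hat_q = nirvar_embed_cluster(X, d=3)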
From this definition, it can be seen that  defines a graph with K cliques and therefore cannot recover inter-block edges of the data generating SBM. We use the following argument to motivate our choice of K. For a RDPG, A_ij^(q)∼Bernoulli(θ_i^(q)^'θ_j^(q)) and thus 𝔼(A^(q)) = Θ^(q)Θ^(q)^', where Θ^(q) = (θ_1^(q),…,θ_N^(q))^'∈ℝ^N × d. Assuming that Θ^(q) is full rank, then rank{𝔼(A^(q))} = rank(Θ^(q)Θ^(q)^') = rank(Θ^(q)) = d. Now for a SBM, Θ^(q) has K distinct rows (corresponding to the K latent positions), and thus rank(Θ^(q)) = K. Since we are assuming 𝒢^(q) is a SBM, we set K=d. §.§.§ OLS estimation of VAR coefficients The subset VAR restrictions we impose correspond to the zero entries of Â. Given Â, there are M = |  |_0 remaining unrestricted parameters that we estimate via OLS. Let γ(Â) be the M-dimensional vector of unrestricted parameters whose elements are those in the set {β_i : (Â)_i≠ 0, 9mu i=1,…,N^2Q }. Let R(Â) ∈{0,1}^N^2Q ×M be the restrictions matrix corresponding to  where R is defined by Equation (<ref>). The model corresponding to the estimated restrictions  is ψ_Â^(q) = (Z^'⊗ I_N)R(Â)γ(Â) + 𝐮^(q). Comparing with Equation (<ref>), we see that Equation (<ref>) will be misspecified if (Â)_i = 0 but (A)_i = 1 for any i=1,…,NQ. In this case, ψ_Â^(q)≠ψ^(q). Following <cit.>, we look for the estimator of γ(Â) that minimises the objective S{γ(Â)} = {ψ^(q)- (Z^'⊗ I_N) R(Â) γ(Â)}^'(I_T⊗Σ^-1){ψ^(q)- (Z^'⊗ I_N) R(Â) γ(Â)} = ψ^(q)^'(I_T⊗Σ^-1)ψ^(q)+ γ(Â)^' R(Â)^'(ZZ^'⊗Σ^-1) R(Â) γ(Â) -2 γ(Â)^' R(Â)^'(Z ⊗Σ^-1) ψ^(q). Hence, ∂ S{γ(Â)}/∂γ(Â) = 2 R(Â)^'(ZZ^'⊗Σ^-1) R(Â) γ(Â) - 2 R(Â)^' (Z ⊗Σ^-1) ψ^(q). Equating to zero yields the generalised least-squares estimator γ̂(Â) = {R(Â)^'(ZZ^'⊗Σ^-1) R(Â)}^-1 R(Â)^' (Z ⊗Σ^-1) ψ^(q). For the simulation studies in Section <ref> and applications in Section <ref>, we assume that Σ = σ^2I_N, in which case the generalised least-squares estimator is the same as the multivariate least-squares estimator γ̂(Â) = {R(Â)^'(ZZ^'⊗ I_N) R(Â)}^-1 R(Â)^' (Z ⊗ I_N) ψ^(q). Finally, the least-squares estimator of β is β̂(Â) = R(Â) γ̂(Â). §.§.§ Determining the embedding dimension We employ tools from random matrix theory to estimate d^(q), the rank of the RDPG 𝒢^(q). Following standard methods in the literature <cit.>, our estimate d̂^(q) is the number of eigenvalues of S^(q) that are larger than the upper bound of the support of the Marčenko-Pastur distribution <cit.>. The Marčenko-Pastur distribution is the limiting eigenvalue distribution of an N × N Wishart matrix whose dimension to sample size ratio is a constant, η = N/T, and is given by f_MP(x;η,σ^2) = √((x-x_-)(x_+-x)) / (2πσ^2η) if x_-≤ x ≤ x_+, 0 otherwise, with σ^2 being the scale parameter and x_± = σ^2 (1 ±√(η))^2 <cit.>. If λ_1^(q),…,λ_N^(q) are the eigenvalues of S^(q), then d̂^(q) = ∑_j=1^N{λ_j^(q) > x_+}. The interpretation is that all eigenvalues that are greater that x_+ cannot be attributed to random noise and are thus deemed as “informative”. Another approach to determining the embedding dimension using random matrix theory is to consider the distribution of the largest eigenvalue of a Wishart matrix, which, when centered and scaled, tends towards the Tracy-Widom distribution <cit.>. The 95th percentile of the Tracy-Widom distribution could then be chosen as the cut-off above which eigenvalues of the sample covariance matrix are deemed informative. §.§ Alternative embeddings Alternatives to the sample covariance matrix could be chosen for the embedding procedure described in Section <ref>. 
Here, we discuss embedding the precision matrix, Ω^(q) = (S^(q))^-1. One motivation for the use of the precision matrix is its relation to the partial correlation Cor( Ψ_i,:^(q) , Ψ_j,:^(q)|Ψ_-(i,j),:^(q)) = -Ω_ij^(q)/Ω_ii^(q)Ω_jj^(q), where Ψ_-(i,j),:^(q) denotes the set of all rows of Ψ^(q) except rows i and j. We can thus interpret Ω_ij^(q) as being proportional to the remaining correlation between time series i and time series j for feature q after the residual effect of all other time series has been removed. The unfolded adjacency spectral embedding and Gaussian mixture model clustering steps in the estimation procedure remain unchanged using the precision matrix. However, we need to slightly modify our method for estimating d^(q). Instead of counting the number of eigenvalues of S^(q) that are larger than the upper bound of the support of the Marčenko-Pastur distribution, we count the number of eigenvalues of Ω^(q) that are smaller than the lower bound of the support of the inverse Marčenko-Pastur distribution, which is the distribution of the reciprocal of a Marčenko-Pastur distributed random variable. The inverse Marčenko-Pastur distribution is derived in Appendix A.5 of the online supplementary material and is given by f_IMP(y;η,σ^2) = (1-η)√((y_+ - y)(y - y_-)) / (2 πη y^2) if y_-≤ y ≤ y_+, 0 otherwise, where y_± = 1/σ^2( 1 ±√(η)/1-η)^2. If ζ_1^(q),…,ζ_N^(q) are the eigenvalues of Ω^(q), then our estimate of d^(q) when using the precision matrix becomes d̂^(q) = ∑_j=1^N{ζ_j^(q) < y_-}. § ASYMPTOTIC PROPERTIES In this section we derive the bias, consistency and asymptotic normality of the NIRVAR generalised least-squares estimator given by Equation (<ref>). We show that it is biased whenever an incorrect restriction is placed on the VAR matrix, that is, whenever Â_ij^(q) = 0 but A_ij^(q) = 1, and unbiased otherwise. The proof of consistency and asymptotic normality is an extension of the theory of restricted VAR models <cit.> to the case where the restrictions may be misspecified. Furthermore, for the case in which a proper subset of the true restrictions are known, we prove that the asymptotic variance of the corresponding estimator is greater than or equal to that of a restricted estimator that uses all restrictions. Substituting Equation (<ref>) into Equation (<ref>) using Equation (<ref>) yields γ̂(Â) = {R(Â)^'(ZZ^'⊗Σ^-1) R(Â)}^-1 R(Â)^' (Z ⊗Σ^-1){(Z^'⊗ I_N) R(A) γ(A) + 𝐮^(q)} = {R(Â)^'(ZZ^'⊗Σ^-1) R(Â)}^-1 R(Â)^'(ZZ^'⊗Σ^-1) R(A)γ(A) + {R(Â)^'(ZZ^'⊗Σ^-1) R(Â)}^-1 R(Â)^' (I_NQ⊗Σ^-1) (U^(q)Z^'). Noting that 𝔼{(U^(q)Z^')} = (Z ⊗ I_N) 𝔼(𝐮^(q)) = 0, we see that the bias of the estimator, given Â, is 𝔼{β̂(Â)|Â} = R(Â) C γ(A), where C {R(Â)^'(ZZ^'⊗Σ^-1) R(Â)]^-1 R(Â)^'(ZZ^'⊗Σ^-1) R(A). The following proposition gives the conditions under which β̂(Â) is an unbiased estimator of β. Conditional on the estimated restrictions Â, the NIRVAR estimator β̂(Â) given by Equation (<ref>) is unbiased if and only if colsp{R(A)}⊆colsp{R(Â)}, where A specifies the true restrictions and  specifies the estimated restrictions. The proof of Proposition <ref> is in Appendix A.2 of the online supplementary material. We now consider the asymptotic properties of γ̂(Â). The NIRVAR estimator γ̂(Â) given by Equation (<ref>) is a consistent estimator of C γ(A) where C is defined by Equation (<ref>), and √(T){γ̂(Â) - C γ(A)}𝒩(0, {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1), where Γ𝔼(Z_t Z_t^') = ZZ^'/T. The proof of Proposition <ref> is in Appendix A.3 of the online supplementary material. 
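The bias matrix C and the unbiasedness condition introduced above are easy to examine numerically. The sketch below (small dimensions, Q = 1, Σ = σ²I_N, Gaussian placeholder regressors; all names are illustrative) constructs the selection matrices R(A) and R(Â) and checks that the conditional mean R(Â)Cγ(A) coincides with β when Â misses no true edge, and generally differs from β when a true edge is wrongly restricted to zero:

import numpy as np

def restriction_matrix(A):
    """Selection matrix R(A): maps the M unrestricted coefficients into vec(Phi)."""
    a = A.flatten(order="F")             # the vec operator stacks columns
    idx = np.flatnonzero(a)
    R = np.zeros((a.size, idx.size))
    R[idx, np.arange(idx.size)] = 1.0
    return R

rng = np.random.default_rng(2)
N, T = 5, 2000
A_true = (rng.random((N, N)) < 0.4).astype(int)
np.fill_diagonal(A_true, 1)              # self-loops are always present
A_hat_good = np.maximum(A_true, (rng.random((N, N)) < 0.2).astype(int))  # no missed edge
A_hat_bad = A_true.copy()
rows, cols = np.nonzero(A_true)
A_hat_bad[rows[0], cols[0]] = 0          # wrongly restricts one true edge to zero

Z = rng.standard_normal((N, T))          # stand-in for the lagged regressors
G = np.kron(Z @ Z.T, np.eye(N))          # (ZZ' kron Sigma^{-1}) with Sigma = I_N

gamma_true = rng.standard_normal(int(A_true.sum()))
beta_true = restriction_matrix(A_true) @ gamma_true
for A_hat in (A_hat_good, A_hat_bad):
    R_hat, R_true = restriction_matrix(A_hat), restriction_matrix(A_true)
    C = np.linalg.solve(R_hat.T @ G @ R_hat, R_hat.T @ G @ R_true)
    conditional_mean = R_hat @ (C @ gamma_true)     # E[beta_hat | A_hat]
    print("max |bias| =", np.abs(conditional_mean - beta_true).max())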
It is informative to compare the asymptotic efficiency of γ̂(A) (the generalised least-squares estimator given that the true restrictions are known) with γ̂(Â) for the case where colsp{R(A)}⊆colsp{R(Â)} (that is,  contains no misspecified restriction, but may not include all of the true restrictions). The following proposition proves that, in this case, γ̂(A) is asymptotically never less efficient than γ̂(Â). Let β̂(A) = R(A)γ̂(A) and β̂(Â) = R(Â)γ̂(Â) be the generalised least-squares estimators given by Equation (<ref>) for the case where A specifies the true restrictions of the data generating process and  is another set of restrictions that satisfies colsp{R(A)}⊆colsp{R(Â)}. Then the asymptotic variance of β̂(Â) is greater than or equal to that of β̂(A) in the sense that var[ √(T){β̂(Â) - β}] - var[ √(T){β̂(A) - β}] ≽ 0, where ≽ denotes positive semi-definite. The proof of Proposition <ref> is in Appendix A.4 of the online supplementary material. § SIMULATION STUDY In this section, we conduct an extended simulation study. We simulate the underlying network from a stochastic block model with B_kh=p^(in)∈[0,1] for k=h and B_kh=p^(out)∈[0,1] for k≠ h, k,h∈[K]. Firstly, we check whether NIRVAR cluster recovery and parameter estimates improve as the spectral radius of the ground truth VAR coefficient matrix ρ(Φ) increases. We expect this to be the case since ρ(Φ) controls the signal-to-noise ratio of the data generating process. Secondly, the ability of the NIRVAR estimator to obtain the correct restrictions should decrease as the number of edges between blocks increases. We thus count the number of errors in  as a function of the probability p^(out) of an edge forming between blocks in the simulated data generating process. Thirdly, we compare the NIRVAR estimated latent positions with the ground truth latent positions of the data generating SBM. Lastly, the asymptotic distribution given by Proposition <ref> is compared to the distribution of parameter estimates γ̂(Â) obtained from a large finite sample. §.§ Spectral radius The hyperparameters used to generate data from the NIRVAR model were N=100, Q=1, K ∈{2,10}, and T ∈{250,500,750,1000}. To set Φ, we sampled each entry Φ_ij independently from a Uniform(0,1) distribution and then normalised such that ρ(Φ) was the desired value. The normalised root mean squared error (NRMSE) and Adjusted Rand Index <cit.> for ρ(Φ) ∈{0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95} are shown in Figure <ref>. The ARI is a measure of the similarity between two clusterings of data. It ranges from -1 to 1, with an ARI of 1 indicating a perfect agreement between the two clusterings, 0 indicating a random agreement and -1 indicating that the two clusterings are completely different. For each combination of ρ(Φ) and T, we sampled X_t^(q)∼NIRVAR(Φ), obtained an estimate Φ̂ of Φ from the simulated data, and computed the NRMSE between Φ̂ and Φ, defined as NRMSE = ‖Φ̂ - Φ‖_F/M̂ρ(Φ). Repeating this 15 times for each combination of ρ(Φ) and T, we computed the mean NRMSE and mean ARI along with the corresponding standard error of the mean, respectively. Figure <ref> (fig:sim-spectral-sub1) and (fig:sim-spectral-sub2) show that the NRMSE decreases both as ρ(Φ) increases and as T increases. Figure <ref> (fig:sim-spectral-sub3) and (fig:sim-spectral-sub4) show that the ARI approaches 1 as the spectral radius approaches 1, meaning that the ground truth clusters are recovered in a high signal-to-noise regime. 
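For reproducibility, the data generating process used in this experiment can be sketched as follows. The way the SBM mask is combined with the Uniform(0,1) weights, the rescaling of the masked matrix to the target spectral radius, and the particular parameter values reflect one possible reading of the description above; the estimate Φ̂ used in the NRMSE illustration is a perturbation of the truth rather than output of the NIRVAR estimator:

import numpy as np

def simulate_nirvar(N, T, K, rho_target, p_in=1.0, p_out=0.0, seed=0):
    """Simulate a single-feature (Q = 1) NIRVAR(Phi) path.

    Blocks are assigned uniformly at random; edges form with probability p_in within
    blocks and p_out between blocks, with self-loops always present; Uniform(0,1)
    weights are masked by the adjacency matrix and rescaled so that rho(Phi) = rho_target.
    """
    rng = np.random.default_rng(seed)
    z = rng.integers(0, K, size=N)                        # block memberships
    probs = np.where(z[:, None] == z[None, :], p_in, p_out)
    A = (rng.random((N, N)) < probs).astype(float)
    np.fill_diagonal(A, 1.0)
    Phi = A * rng.random((N, N))                          # weighted adjacency matrix
    Phi *= rho_target / np.max(np.abs(np.linalg.eigvals(Phi)))
    X = np.zeros((N, T))
    eps = rng.standard_normal((N, T))                     # Sigma = I_N noise
    for t in range(1, T):
        X[:, t] = Phi @ X[:, t - 1] + eps[:, t]
    return X, Phi, z

X, Phi, z = simulate_nirvar(N=100, T=1000, K=10, rho_target=0.9, p_in=0.75, p_out=0.05, seed=3)

# Normalised RMSE between an estimate Phi_hat and Phi (Phi_hat is a placeholder here).
support = Phi != 0
Phi_hat = Phi + 0.01 * support * np.random.default_rng(4).standard_normal(Phi.shape)
nrmse = np.linalg.norm(Phi_hat - Phi) / (np.count_nonzero(Phi_hat) * 0.9)
print(round(nrmse, 5))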
§.§ Between block probability Choosing simulation hyperparameters N=100, T=1000, Q=1, K=10, and setting Φ^(1) by sampling each entry from a Uniform(0,1) and normalising so that ρ(Φ^(1)) = 0.9, we computed the percentage of incorrect entries of  as a function of p^(out). In particular, the percentage error was calculated as 100×∑_i,j=1^N{Â_ij≠ A_ij}/N^2. The percentage variable selection error of a LASSO estimator was computed for comparison, where the LASSO penalty hyperparameter was chosen using the Akaike Information Criterion (AIC) <cit.>. Figure <ref>(fig:sim-lasso-bias-sub1) shows that the percentage error increases with p^(out) and surpasses that of the LASSO estimator for p^(out) > 0.05. This is expected since the NIRVAR estimation procedure always constructs a graph with K cliques and cannot recover edges between blocks, which are deemed as noise. §.§ Latent position recovery We conduct a study in which we fix two ground truth latent positions θ_B1 = (0.05,0.95)^' and θ_B2 = (0.95,0.05)^' of the data generating SBM and compare the (rotated) NIRVAR embedded points Ŷ^(1) to θ_B1 and θ_B2. The hyperparameters used to generate the data were N=150, T=2000, Q=1, K=2, z_1, … ,z_75 = 1 and z_76, … ,z_150 = 2. The weights Φ were set to the constant 0.9/ρ(ΘΘ^') so that the spectral radius of the expected value of Φ was 0.9. With X_t^(1)∼NIRVAR(Φ) for t=1,…,T, the left singular vectors and singular values of S^(1) were computed, and the singular values were scaled using (λ_Φ)_i = ±√(1-1/(λ_Γ)_i) from Proposition <ref>. Procrustes alignment <cit.> was then used to compare the rotated sample embedded points Ŷ_1:75 to θ_B1 and Ŷ_76:150 to θ_B2. Figure <ref>(fig:sim-latent-recovery-sub1) shows Ŷ_i coloured according to the block membership z_i for i=1,…,N. Figure <ref>(fig:sim-latent-recovery-sub2) shows a Q-Q plot comparing the (standardised) sample distribution Ŷ_1:75 with a standard normal distribution. The sample data is in good agreement with a normal distribution. We also conduct a further study in which we replicate the above procedure 4000 times and plot Ŷ_i for i=21 against θ_B1 across the 4000 replicas. Since ρ(Φ_k) (with k ∈{1,…,4000} labelling the replica) has spectral radius less than 1 only in expectation, we only retain the first 4000 samples that have ρ(Φ_k) < 1. Figure <ref>(fig:sim-latent-recovery-sub3) shows that the large sample distribution of Ŷ_21 is approximately normally distributed around θ_B_1. Using the R package <cit.>, we performed the Henze-Zirkler test of multivariate normality on the 4000 sample points yielding a test statistic of 0.56 and an observed p-value of 0.96 which supports the null hypothesis that the data is multivariate normally distributed. §.§ Large sample distribution of the NIRVAR estimator We simulate data with N=50, Q=1, T=5000, K=5, and set Φ^(1) by sampling each entry from a Uniform(0,1) and normalising so that ρ(Φ^(1)) = 0.9. The probability of an edge forming within each block was set to p^(in) = 0.75 and we set p^(out) = 0.2. Creating 10,000 replica datasets, each having these hyperparameters, and estimating Φ for each replica provides an empirical distribution for √(T)γ̂(Â)_i. For each i=1,…,M̂, we performed a one-sample Kolmogorov-Smirnov (KS) test of whether the empirical distribution of √(T)γ̂(Â)_i is normal with mean √(T)C_∞γ(A) and variance {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1 as claimed by Proposition <ref> to be its asymptotic distribution. The bias term, C_∞ is defined as C_∞(C) = {R(Â)^'(Γ⊗Σ^-1) R(Â)}^-1 R(Â)^'(Γ⊗Σ^-1) R(A). 
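A sketch of this marginal Kolmogorov-Smirnov check is given below, using SciPy; the replica estimates are assumed to have already been collected in an array, the asymptotic mean and standard deviation of the chosen coordinate are treated as inputs, and the synthetic replicas are drawn from the target normal itself purely to illustrate the mechanics:

import numpy as np
from scipy import stats

def ks_against_asymptotic(gamma_reps, i, T, mean_asym, sd_asym):
    """One-sample KS test of sqrt(T) * gamma_hat_i across replicas against the
    claimed asymptotic N(mean_asym, sd_asym^2) distribution."""
    sample = np.sqrt(T) * gamma_reps[:, i]
    return stats.kstest(sample, "norm", args=(mean_asym, sd_asym))

rng = np.random.default_rng(5)
T, n_reps = 5000, 10000
mean_asym, sd_asym = 0.3 * np.sqrt(T), 1.2
gamma_reps = rng.normal(mean_asym, sd_asym, size=(n_reps, 4)) / np.sqrt(T)
print(ks_against_asymptotic(gamma_reps, i=0, T=T, mean_asym=mean_asym, sd_asym=sd_asym))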
Figure <ref>(fig:sim-lasso-bias-sub2) shows a box plot of the KS statistic for each i=1,…,M̂, whilst Figure <ref>(fig:sim-lasso-bias-sub3) shows a histogram of the observed sample for one particular i plotted alongside the corresponding asymptotic normal curve. The low value of δ^(KS) shown by the box plot in Figure <ref>(fig:sim-lasso-bias-sub2) indicates that the large sample distribution of √(T)γ̂(Â)_i is well captured by the asymptotic distribution of Proposition <ref>. § APPLICATIONS In this section, we apply the NIRVAR model to three different data sets: (1) the market excess returns of a panel of 648 financial assets, (2) the August 2022 vintage of the FRED-MD database <cit.> which consists of 122 monthly macroeconomic variables, and (3) the number of daily bicycle rides leaving each of 774 Santander Cycles docking stations in central London between 07/03/2018 and 10/03/2020 (735 days). For each data set, we compare the predictive performance of NIRVAR against the Factor Augmented Regression Model (FARM) of <cit.>, the Factor-Adjusted Network Estimation and Forecasting for High-Dimensional Time Series (FNETS) model of <cit.>, and the Generalised Network Autoregressive Processes (GNAR) model of <cit.>. The details of each data set and the specific predition task at hand are given below. For convenience, we briefly summarise below the three benchmark models from the literature. * FARM – In FARM, the stochastic process X_i,t is modelled as X_i,t = μ_i + λ_i∑_ℓ = 1^Lτ_ℓ f_t-ℓ + ∑_ℓ=1^L∑_j=1^N (Δ_ℓ)_ijξ_j,t-ℓ + ϵ_i,t, where μ_i is the sample mean of series i, f_t is the leading factor at time t with λ_i the corresponding loading on this factor for series i, and ξ_j,t is the residual value of series i after subtracting the contribution of the factor. The parameters τ_ℓ and Δ_ℓ are AR and VAR coefficients for the static factor model and idiosyncratic component, respectively. The factors and loadings are estimated using PCA, τ_ℓ is estimated via OLS, and Δ_j,ℓ is estimated using LASSO, with a penalty parameter selected via BIC. * FNETS – In FNETS, the stochastic process X_t is modelled as a sum of two latent components: a factor-driven common component χ_t, modelled as a generalised dynamic factor model <cit.>, and an idiosyncratic component ξ_t, modelled as a VAR process. This results in the following model X_t = χ_t + ξ_t, χ_t = ∑_ℓ=1^∞Λ_ℓf_t-ℓ, ξ_t = ∑_l=1^LΔ_lξ_t-l + Γ^1/2ϵ_t, where 𝔼(ϵ_t) = 0 and cov(ϵ_t) = I_N. Estimation of χ_t proceeds by dynamic PCA, whilst for ξ_t, <cit.> propose a ℓ_1-regularised Yule-Walker estimator that requires second-order moments only. * GNAR – In GNAR, a simplified model for X_i,t takes the form X_i,t = ∑_ℓ=1^L( α_i,ℓ X_i,t-ℓ + β_ℓ∑_j ∈𝒩(i) X_j,t-ℓ) + ϵ_i,t, where 𝒩(i) denotes the set of vertices that are connected to vertex i of a known graph and 𝔼(ϵ_i,t)=0 with var(ϵ_i,t) = σ_i^2. The full GNAR model in <cit.> extends Equation (<ref>) allowing for a time varying known network, edge weights, multiple edge covariates, and stage-neighbours. A key difference between NIRVAR and GNAR is that GNAR assumes the network structure to be known, which is typically not the case in most real-world applications. §.§ Financial Returns Prediction The open-to-close (OPCL) and previous close-to-close (pvCLCL) price returns of 648 financial assets between 03/01/2000 and 31/12/2020 were derived from databases provided by the Center for Research in Security Prices, LLC, an affiliate of the University of Chicago Booth School of Business. 
Using the S&P 500 index as a proxy for the market, we construct the OPCL and pvCLCL market excess returns by subtracting the return of SPY, the exchange traded fund which tracks the S&P 500 index. The task is to predict the next day pvCLCL market excess returns. If the predicted next day pvCLCL market excess return of an asset has a positive (resp., negative) sign, then a long (resp., short) position in the asset is taken. We compare seven different models: NIRVAR using the covariance matrix embedding method with Q=1 (pvCLCL returns only) and Q=2 (pvCLCL and OPCL), labelled as NIRVAR C1 and NIRVAR C2, respectively; NIRVAR using the precision matrix embedding method with Q=1 and Q=2, labelled as NIRVAR P1 and NIRVAR P2, respectively; FARM with L=1; FNETS with L=1; GNAR with L=1 and j ∈𝒩(i) iff asset i and asset j share the same Standard Industrial Classification (SIC) division. The SIC assigns a four-digit code to businesses according to their industry; the codes are available at <https://siccode.com/>. Every business is in one of the following 10 sector divisions: (i) Agriculture, Forestry, and Fishing, (ii) Mining, (iii) Construction, (iv) Manufacturing, (v) Transportation and Public Utilities, (vi) Wholesale Trade, (vii) Retail Trade, (viii) Finance, Insurance, and Real Estate, (ix) Services, and (x) Public Administration. We backtest each model using a rolling window between 01/01/2004 and 31/12/2020 with a look-back window of four years, and we evaluate the performance via a number of commonly used metrics in the financial literature <cit.>. Each day, we compute a measure of profit and loss (PnL), defined on day t as PnL_t = ∑_i=1^Nsign(ŝ_i^(t)) × s_i^(t), where ŝ_i^(t) is the predicted return of asset i on day t, and s_i^(t) is the realized return of asset i on day t. Note that we assign an equal portfolio weighting to each asset in our definition of PnL_t; an alternative approach would be to consider value-weighted portfolio construction methods, that account for the liquidity of each asset. Doing this every day for T backtesting days, we obtain the time series {PnL_t}_t=1,…,TPnL. The cumulative PnL for each model is shown in Figure <ref>(fig:CumPnL-sub1). The NIRVAR models outperform FNETS, GNAR and FARM in terms of cumulative PnL, and NIRVAR P1 attains the highest cumulative PnL overall. We plot the cumulative PnL of NIRVAR P1 for different levels of transaction costs in Figure <ref>(fig:CumPnL-sub2), where a flat transaction cost (given in basis points, denoted bpts, with 1% = 100bpts) is applied every time we flip our position in a given asset from long (short) to short (long). From Figure <ref>(fig:CumPnL-sub2) we conclude that, even with a transaction cost of 4bpts, NIRVAR is profitable, especially during the global financial crisis and the COVID-19 pandemic which were periods of high market volatility. Table <ref> compares each model across 8 statistics often used in the literature on financial returns prediction <cit.>. One of the most common metrics used in the financial industry is the annualised Sharpe ratio, defined as SR = mean(PnL)/stdev(PnL)×√(252), where the scaling is due to the fact that there are 252 trading days within a year. Definitions of the other statistics used are provided in Appendix B.1 of the online supplementary material. Figure <ref>(fig:returns-groups-sub1) compares the NIRVAR estimated clusters with the 10 SIC sectors on one particular backtesting day. Figure <ref>(fig:returns-groups-sub2) shows the corresponding ARI for every backtesting day. 
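The profit-and-loss, Sharpe ratio and hit ratio computations defined above reduce to a few lines; the sketch below (random numbers in place of model predictions and realised returns, equal weighting, no transaction costs) is intended only to make the definitions concrete:

import numpy as np

def daily_pnl(pred, realised):
    """Equal-weighted PnL_t = sum_i sign(pred_i) * realised_i for each backtest day.

    pred, realised : arrays of shape (T, N) of predicted and realised excess returns."""
    return np.sum(np.sign(pred) * realised, axis=1)

def sharpe_ratio(pnl):
    """Annualised Sharpe ratio, assuming 252 trading days per year."""
    return pnl.mean() / pnl.std(ddof=1) * np.sqrt(252)

def hit_ratio(pred, realised):
    """Mean percentage of correctly predicted signs."""
    return 100.0 * np.mean(np.sign(pred) == np.sign(realised))

rng = np.random.default_rng(6)
pred = rng.standard_normal((250, 648))
realised = 0.01 * rng.standard_normal((250, 648))
pnl = daily_pnl(pred, realised)
print(sharpe_ratio(pnl), hit_ratio(pred, realised))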
The ARI never breaches 0.17, from which we conclude that the NIRVAR estimated clusters do not recover the SIC groupings, which, by no means, should be treated as ground truth. The bar chart does suggest, however, that NIRVAR tends to separate Finance, Insurance, and Real Estate companies from the other SIC groups. Figure <ref>(fig:returns-groups-sub3) provides the ARI between the NIRVAR estimated clusters at time t and the NIRVAR estimated clusters at time s, where t and s run over every pair of backtesting days. There are two distinct blocks along the diagonal of this heatmap, the first one between 2004-2008 and the second one between 2008-2013, indicating that the NIRVAR estimated clusters underwent a shift after the global financial crisis. Table <ref> provides the Sharpe ratio of each model on five equally-spaced intervals of the backtesting period. NIRVAR performs particularly well around the time of the global financial crisis. We also note that NIRVAR P2 and NIRVAR C2 are the best performing models in the fourth sub-period considered, suggesting that the extra feature is beneficial in this period. §.§ Forecasting US Industrial Production FRED-MD is a publicly accessible database of monthly observations of macroeconomic variables. It is updated in real-time and is available from Michael McCracken's webpage[<https://research.stlouisfed.org/econ/mccracken/fred-databases/>]. Also available from this webpage is an R package to transform the raw data into a stationary series, which we use. The details of the transformations used can be found at <cit.>. To allow direct comparison with <cit.>, we choose the August 2022 vintage of the FRED-MD database which extends from January 1960 until December 2019 (719 observations). Only variables with no missing values (122 variables) are used. The prediction task is one-step ahead forecasts of the first-order difference of the logarithm of the monthly industrial production (IP) index. We backtest each model from January 2000 until December 2019 using a rolling window framework with a lookback window of 480 observations. We choose the covariance matrix embedding method of NIRVAR. Following <cit.>, we set L = 24 for FARM. For FNETS and GNAR we set L=1. The FRED-MD variables are divided into eight groups: (i) output and income; (ii) labour market; (iii) housing; (iv) consumption, orders, and inventories; (v) money and credit; (vi) interest and exchange rates; (vii) prices; and (viii) stock market. The network we choose for GNAR is defined by j ∈𝒩(i) if and only if i,j are in the same FRED-MD group. Figure <ref> shows the realised difference of the logarithm of IP against the corresponding prediction of each model. Figure <ref>(fig:fred-gt-sub1) and Figure <ref>(fig:fred-gt-sub4) show that both NIRVAR and FARM are more reactive to extreme values than FNETS and GNAR, shown in Figure <ref>(fig:fred-gt-sub2) and Figure <ref>(fig:fred-gt-sub3), respectively. Table <ref> reports the overall MSE, the sum of the MSE at points for which the model over (under)-estimated the realised value, and the sum of the MSE in extreme and non-extreme regimes where a realised value is defined to be extreme if its magnitude is above the 90-th quantile. In Figure <ref>(fig:fred-ratio-groups-ari-corr-sub1), the ratio of cumulative MSEs between NIRVAR and each of FNETS, GNAR, and FARM is plotted. In particular, letting Δ_t^(m) be the ratio for model m ∈{FNETS,GNAR,FARM} at time t ∈{1,…,T}, we have Δ_t^(m)∑_s =1 ^tMSE_s^(NIRVAR)/∑_s =1 ^tMSE_s^(m). 
A value of Δ_t^(m) that is less than 1 indicates that NIRVAR is outperforming model m in terms of cumulative MSE. Δ_t^(m) is always below 1 for GNAR and FNETS, and for FARM it is below 1 the majority of the time. Figure <ref>(fig:fred-ratio-groups-ari-corr-sub2) compares the NIRVAR estimated clusters with the eight FRED groups on one particular backtesting day. Overall, the NIRVAR clusters do not match the FRED-MD groups closely, although NIRVAR clusters 2,4,10,11, and 12 consist almost entirely of a single FRED-MD group. This low-to-medium level matching between NIRVAR clusters and FRED-MD groups is captured by Figure <ref>(fig:fred-ratio-groups-ari-corr-sub3), which shows that the ARI for every backtesting day lies in the range [0.24,0.36]. We conclude that the human defined FRED-MD groups are not necessarily the most useful course grained description if one is interested in co-movements between time series of the 122 macroeconomic variables. §.§ Forecasting Santander bicycle rides The log daily number of bicycle rides from 774 Santander stations in London from 07/03/2018 until 10/03/2020 (T = 735) were obtained using data from <https://cycling.data.tfl.gov.uk/>. First differences were then taken in order to obtain stationary time series. NIRVAR, FARM, FNETS and GNAR were used to predict the next day log number of bicycle rides using a rolling window backtesting framework from 09/02/2020 until 10/03/2020 (30 days). Since N > T in this example, the covariance matrix is not invertible and therefore the NIRVAR precision matrix embedding method is not feasible here. FARM, FNETS and GNAR are modelled using a lag of L = 1. The network we choose for GNAR is defined by j ∈𝒩(i) if and only if the Euclidean distance between station j and station i is less than 3 kilometers. The one-step-ahead MSE, ‖X̂_t - X_t‖_2^2, is shown for each model in Table <ref>. NIRVAR achieves the lowest overall MSE. There were K=7 clusters estimated by NIRVAR. Figure <ref>(fig:santander-sub1) shows the geographic locations of the stations in NIRVAR clusters 3 and 5. Cluster 5 is concentrated around central London whereas the stations in cluster 3 are located further from the center near parks and more residential areas. In contrast to the human defined epsilon ball construction of the GNAR network, the NIRVAR clusters do not always lie within some ball of a given radius. Figure <ref>(fig:santander-sub2) shows the mean number of bicycle rides across each of the 7 NIRVAR clusters for every day of the week. The clusters are differentiated by their mean number of bicycle rides as well as by the change in the number of bicycle rides on weekdays compared with weekends. For example, stations in clusters 2 and 5, which are located mainly in central London, have high usage during the week and lower usage during the weekend. This likely corresponds to usage by professionals commuting to work. In contrast, the usage of bicycles from stations in cluster 3 increases at the weekends. This may correspond to usage by tourists and residential home owners that live outside of central London. To understand the flow of bicycle rides between NIRVAR clusters, we computed a K × K matrix F, where F_ij is the number of bicycle rides that start from any station in cluster i and end in any station in cluster j. A matrix F with elements F_ij = F_ij - F_ji/F_ij + F_ji, provides a normalised measure of the imbalance in the flow of bicycle rides between NIRVAR clusters. Figure <ref>(fig:santander-sub3) shows F_ij. 
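The flow matrix F and its normalised imbalance can be computed directly from a table of rides labelled by their start and end clusters; the sketch below uses randomly generated cluster labels in place of the Santander trip data and writes the imbalance matrix as F_tilde:

import numpy as np

def flow_imbalance(start_cluster, end_cluster, K):
    """F[i, j]: number of rides starting in cluster i and ending in cluster j.
    F_tilde[i, j] = (F[i, j] - F[j, i]) / (F[i, j] + F[j, i]): normalised imbalance."""
    F = np.zeros((K, K))
    np.add.at(F, (start_cluster, end_cluster), 1)
    with np.errstate(invalid="ignore", divide="ignore"):
        F_tilde = (F - F.T) / (F + F.T)
    return F, np.nan_to_num(F_tilde)

rng = np.random.default_rng(7)
start = rng.integers(0, 7, size=1000)    # illustrative cluster labels, K = 7
end = rng.integers(0, 7, size=1000)
F, F_tilde = flow_imbalance(start, end, K=7)
print(F_tilde.round(2))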
Rows 2 and 5 of F are negative (red) which corresponds to a net inflow into NIRVAR clusters 2 and 5, whose stations are located mainly in central London. § DISCUSSION In this work, we introduce NIRVAR, a framework for modelling a panel of multivariate time series as a network VAR process when the underlying network is unobserved. The NIRVAR estimation method differs from existing network VAR methods by proposing a network representation of the panel components prior to estimation of the VAR model parameters. This circumvents the need to specify tuning or thresholding hyperparameters when constructing the network, and it avoids estimation of the full, dense VAR parameter matrix. NIRVAR can accommodate multiple edge types in the form of a multiplex graph and has better predictive performance compared with other network time series methods on three applications in finance, macroeconomics and transportation. Due to the regularisation induced by the network, NIRVAR is computationally much faster than a VAR. For example, backtesting NIRVAR on the Santander experiment in Section <ref> for a single forecast took approximately 10 seconds compared with approximately 70 seconds for a VAR[Both experiments were run using a single Intel Core i5-1145G7 CPU.]. A drawback of the estimation method is that it will be biased whenever there are edges between blocks of the data generating SBM. Therefore, it is only suitable for assortative graphs. One possible solution to this is to consider a mixed membership SBM <cit.> in which each graph vertex can be associated with multiple clusters. We leave this for future research. The NIRVAR model assumes the observed random graph does not change over time. Extending the model to incorporate a time varying adjacency matrix could add flexibility and improve both the cluster assignment and the predictive performance of the framework. It would be a natural extension to incorporate a factor model into the NIRVAR framework. The advantage of this would be to isolate the idiosyncratic interactions that are due to network effects from the co-movements that are due to common factors. For example, in the financial application, we removed the market factor prior to NIRVAR estimation. It would be straightforward to include more factors, such as those of the Fama-French five factor model <cit.>, before applying NIRVAR estimation. Incorporating a factor model would also likely improve the forecasting performance of NIRVAR on the FRED-MD dataset. <cit.> find eight factors in the sample they consider; removing these factors could give prominence to network effects, thus improving NIRVAR estimation and prediction. § ACKNOWLEDGEMENTS Brendan Martin acknowledges funding from the Engineering and Physical Sciences Research Council (EPSRC), grant number EP/S023151/1. Francesco Sanna Passino acknowledges funding from the EPSRC, grant number EP/Y002113/1. § DATA AVAILABILITY All data and Python code are available in the GitHub repository https://github.com/bmartin9/NIRVAR. rss § THEORY §.§ Proof of Proposition <ref> We prove that the covariance matrix Γ = 𝔼(X_tX_t^') can be written in terms of U_Φ. From <cit.>, we have that Γ - ΦΓΦ^' = Σ. This is an example of a Lyapunov matrix equation and its formal solution is given by <cit.>: Γ = ∑_k=0^∞(Φ)^kΣ(Φ^')^k, which converges when ρ(Φ) < 1 <cit.>. Since we assume a stationary NIRVAR process, then indeed ρ(Φ) < 1 and the series converges. With Σ = σ^2 I_N we have Γ = σ^2∑_k=0^∞(Φ)^k(Φ^')^k = σ^2 (I_N-ΦΦ^')^-1. 
Recalling the eigendecomposition Φ = U_ΦΛ_Φ U_Φ^' + U_Φ,⊥Λ_Φ,⊥U_Φ,⊥^', we can substitute Equation (<ref>) into Equation (<ref>), Γ = σ^2∑_k=0^∞ (U_ΦΛ_Φ U_Φ^' + U_Φ,⊥Λ_Φ,⊥U_Φ,⊥^')^k[(U_ΦΛ_Φ U_Φ^' + U_Φ,⊥Λ_Φ,⊥U_Φ,⊥^')^']^k = σ^2( I_N - U_ΦU_Φ^' - U_Φ,⊥U_Φ,⊥^'+ U_Φ∑_k=0^∞Λ_Φ^2k U_Φ^' + U_Φ,⊥∑_k=0^∞Λ_Φ,⊥^2k U_Φ,⊥^') = σ^2( I_N - U_ΦU_Φ^' - U_Φ,⊥U_Φ,⊥^'+ U_ΦΛ_Γ U_Φ^' + U_Φ,⊥Λ_Γ,⊥ U_Φ,⊥^'), where Λ_Γ is a diagonal matrix whose diagonal entries are (λ_Γ)_i = 1/(1-(λ_Φ)_i^2) with (λ_Φ)_i being the i-th diagonal entry of Λ_Φ. Note that (λ_Φ)_i∈ (-1,1) since we assume ρ(Φ) < 1. Now I_N - U_ΦU_Φ^' and U_Φ,⊥U_Φ,⊥^' are both projection operators onto the column space of U_Φ,⊥. Therefore, the rank d eigendecomposition of Γ is Γ = U_ΦΛ_Γ U_Φ^'. §.§ Proof of Proposition <ref> We have that colsp{R(A)}⊆colsp{R(Â)} R(Â)γ(Â) = R(A) γ(A) = β. This is true if and only if 𝔼{β̂(Â) | Â) = 𝔼(R(Â)γ̂(Â) | Â) = R(Â) 𝔼[{R(Â)^'(ZZ^'⊗Σ^-1) R(Â)}^-1 R(Â)^'(ZZ^'⊗Σ^-1) R(A) γ(A)| Â] = R(Â) 𝔼[{R(Â)^'(ZZ^'⊗Σ^-1) R(Â)}^-1 R(Â)^'(ZZ^'⊗Σ^-1) R(Â)γ(Â)| Â] = R(Â) γ(Â) = β. §.§ Proof of Proposition <ref> The NIRVAR estimator, γ̂(Â) = {R(Â)^'(ZZ^'⊗Σ^-1) R(Â)}^-1 R(Â)^'(ZZ^'⊗Σ^-1) R(A)γ(A) + {R(Â)^'(ZZ^'⊗Σ^-1) R(Â)}^-1 R(Â)^' (I_NQ⊗Σ^-1) (U^(q)Z^'). can be written as √(T){γ̂(Â) - C γ(A)} = {R(Â)^'(ZZ^'/T⊗Σ^-1) R(Â) }^-1 R(Â)^' (I_NQ⊗Σ^-1) 1/√(T)(U^(q)Z^'). By Lemma 3.1 of <cit.>, Γ exists and is nonsingular. Also by Lemma 3.1 of <cit.>, 1/√(T)(U^(q)Z^') 𝒩(0, Γ⊗Σ). By Equation (<ref>), {γ̂(Â) - C γ(A)} = [ {R(Â)^'(ZZ^'/T⊗Σ^-1) R(Â) }^-1 R(Â)^' (I_NQ⊗Σ^-1) (U^(q)Z^'/T) ] = [ {R(Â)^'(ZZ^'/T⊗Σ^-1) R(Â) }^-1 R(Â)^' (I_NQ⊗Σ^-1)] {(U^(q)Z^'/T) }, where the second line follows from Slutsky's Theorem <cit.> and the continuous mapping theorem. But Equation (<ref>) implies that {(UZ^'/T) } = 0. Thus {γ̂(Â) - C γ(A)} = 0 (weak consistency). For convenience, we define Ĝ{R(Â)^'(ZZ^'/T⊗Σ^-1) R(Â) }^-1 R(Â)^' (I_NQ⊗Σ^-1). Then G (Ĝ) = {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1 R(Â)^' (I_NQ⊗Σ^-1). By Proposition C.15(1) of <cit.>, Equation (<ref>) implies that Ĝ1/√(T)(U^(q)Z^') 𝒩(0, G (Γ⊗Σ)G^'). Thus, √(T){γ̂(Â) - C γ(A)} is asymptotically normal with asymptotic variance !G (Γ⊗Σ)G^' = [ {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1 R(Â)^' (I_NQ⊗Σ^-1)] (Γ⊗Σ) [ {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1 R(Â)^' (I_NQ⊗Σ^-1)]^' = {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1 R(Â)^' (I_NQ⊗Σ^-1) (Γ⊗Σ) (I_NQ⊗Σ^-1) R(Â) {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1 = {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1{R(Â)^'(Γ⊗Σ^-1) R(Â) }{R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1 = {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1. §.§ Proof of Proposition <ref> By Proposition C.15(1) and Proposition 5.1 of <cit.>, var{√(T)(β̂(A) - β)} = R(A)[R(A)^'{Γ⊗Σ^-1} R(A) ]^-1R(A)^'. By Proposition C.15(1) of <cit.> and Proposition <ref> of this paper, var( √(T)(β̂(Â) - β)) = R(Â){R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1R(Â). Since the covariance matrix Γ⊗Σ^-1 is positive semi-definite, {R(A)^'(Γ⊗Σ^-1) R(A) }^-1 and {R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1 are positive semi-definite matrices of rank M and M̂, respectively. Therefore, R(A){R(A)^'(Γ⊗Σ^-1) R(A) }^-1R(A)^' and R(Â){R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1R(Â) are also positive semi-definite. Given this, and the assumption that colsp{R(A)}⊆colsp{R(Â)}, we have that colsp[R(A){R(A)^'(Γ⊗Σ^-1) R(A) }^-1R(A)^'] ⊆colsp[R(Â){R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1R(Â)]. Thus P R(Â){R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1R(Â) - R(A){R(A)^'(Γ⊗Σ^-1) R(A) }^-1R(A)^' has [R(Â){R(Â)^'(Γ⊗Σ^-1) R(Â) }^-1R(Â)^'] - [R(A){R(A)^'(Γ⊗Σ^-1) R(A) }^-1R(A)^'] nonzero eigenvalues with all other eigenvalues being 0. Hence, P is positive semi-definite and var[ √(T){β̂(Â) - β}] - var[ √(T){β̂(A) - β}] ≽ 0. 
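The eigenvalue relation established in Appendix A.1 is straightforward to verify numerically for a small symmetric Φ with σ² = 1; the following sanity-check sketch is not part of the formal argument:

import numpy as np

rng = np.random.default_rng(8)
N = 6
B = rng.standard_normal((N, N))
Phi = (B + B.T) / 2                                   # symmetric weight matrix
Phi *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(Phi)))  # enforce rho(Phi) < 1

# Stationary covariance with Sigma = I_N: Gamma = (I - Phi Phi')^{-1}.
Gamma = np.linalg.inv(np.eye(N) - Phi @ Phi.T)

lam_phi = np.linalg.eigvalsh(Phi)
lam_gamma = np.linalg.eigvalsh(Gamma)

# Check lambda_Gamma = 1 / (1 - lambda_Phi^2), comparing the two spectra as sorted sets.
print(np.allclose(np.sort(1.0 / (1.0 - lam_phi**2)), np.sort(lam_gamma)))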
§.§ Derivation of the Inverse Marčenko-Pastur Distribution Suppose X ∼ f_MP(η,σ^2) where f_MP(x;η,σ^2) = 1/2πσ^2η√((x-x_-)(x_+-x)) if x_-≤ x ≤ x_+, 0 otherwise, is the Marčenko-Pastur distribution with N/T η∈ (0,1) as N,T ∞, σ^2 being the variance, and x_± = σ^2 (1 ±√(η))^2 being the upper and lower bounds of the support of the distribution. Let Y = 1/X. Then the distribution, f_IMP, of Y, called the inverse Marčenko-Pastur distribution, has finite support over which its density is f_IMP(y;η,σ^2) = f_MP(y^-1;η,σ^2) 1/y^2 = 1/2πσ^2η y^-1√((y^-1-x_-)(y^-1-x_+))1/y^2 = 1/2πσ^2η y^2√([1 - yσ^2 (1 - √(η))^2][yσ^2 (1 + √(η)(^2 - 1)]) = 1/2πσ^2η y^2√(σ^4(1-y)^2(1/σ^2[ 1+√(η)/1-η]^2 - y ) ( y -1/σ^2[ 1-√(η)/1-η]^2)) = 1-η/2 πη y^2√((y_+ - y)(y - y_-)), where y_± = 1/σ^2( 1 ±√(η)/1-η)^2. Therefore, the distribution of Y is f_IMP(y;η,σ^2) = 1-η/2 πη y^2√((y_+ - y)(y - y_-)) if y_-≤ y ≤ y_+, 0 otherwise. . § APPLICATIONS §.§ Statistics used in the application to financial returns prediction Here, we define statistics used in Table <ref>. Letting PnL_t^-{PnL_t : PnL_t < 0 } and PnL^-{PnL_t^-}_t=1,…,T, we define the Sortino ratio <cit.> as SortR = mean(PnL)/stdev(PnL^-)×√(252). The maximum drawdown is defined as MaxDrawdown = max_t,s{CPnL_t - CPnL_s/CPnL_t : t<s }, where CPnL_t = ∑_r=1^tPnL_r. The hit ratio is the mean daily percentage of correct predictions, whereas the long ratio is the mean daily percentage of long predicted positions. The mean absolute error (MAE) and the root mean squared error (MSE) are defined as MAE = 1/TN∑_t=1^T∑_i=1^N|ŝ_i^(t) - s_i^(t)|, RMSE = √(1/TN∑_t=1^T∑_i=1^N(ŝ_i^(t) - s_i^(t))^2). §.§ Estimating the scale parameter of the Marčenko-Pastur distribution The scale parameter of the Marčenko-Pastur distribution is chosen as the value of σ^2 that minimises the Kolmogorov-Smirnov statistic: σ_*^2 = _σ^2sup_x |F_MP(x;η,σ^2) - F_N(x)|, where F_MP(x;η,σ^2) is the Marčenko-Pastur cumulative distribution and F_N(x) = k/N, where k is the number of observations less than or equal to x, is the observed cumulative step-function of the sample eigenvalues of the covariance matrix. The Marčenko-Pastur cumulative distribution is obtained through numerical integration of f_MP(x;η,σ^2). The minimisation is done using the Broyden–Fletcher–Goldfarb–Shanno algorithm <cit.>. One can also consider embedding the correlation matrix instead of the covariance matrix. In this case, σ^2 is set equal to 1. As an example, Figure <ref>(fig:MP-fitting-sub1) shows a histogram of the eigenvalues of the covariance matrix used in the FRED-MD application in Section <ref> after the rows and columns of the design matrix were randomly permuted. The best fit Marčenko-Pastur distribution obtained using Equation (<ref>) is also plotted in Figure <ref>(fig:MP-fitting-sub1) and fits the histogram well. Figure <ref>(fig:MP-fitting-sub2) shows a histogram of the eigenvalues of the correlation matrix corresponding to the FRED-MD application in Section <ref> after the rows and columns of the design matrix were randomly permuted. The Marčenko-Pastur distribution with σ^2 = 1 is also plotted in Figure <ref>(fig:MP-fitting-sub2) and fits the histogram well.
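A sketch of this scale-parameter fit and of the resulting embedding-dimension rule is given below. The Marčenko-Pastur density is written in its standard form with the 1/x factor, its cumulative distribution is obtained by numerical integration on a grid, and the Kolmogorov-Smirnov distance is minimised over σ² by a simple grid search (the text above uses BFGS; the grid is an illustrative simplification), with white noise standing in for the data:

import numpy as np

def mp_cdf(x, eta, sigma2, grid_size=2000):
    """CDF of the Marchenko-Pastur law with density
    sqrt((x_+ - t)(t - x_-)) / (2 pi sigma2 eta t) on [x_-, x_+], by numerical integration."""
    x_m = sigma2 * (1 - np.sqrt(eta)) ** 2
    x_p = sigma2 * (1 + np.sqrt(eta)) ** 2
    t = np.linspace(x_m, x_p, grid_size)
    dens = np.zeros_like(t)
    inner = (t > x_m) & (t < x_p)
    dens[inner] = np.sqrt((x_p - t[inner]) * (t[inner] - x_m)) / (2 * np.pi * sigma2 * eta * t[inner])
    cdf_grid = np.cumsum(dens) * (t[1] - t[0])
    cdf_grid /= cdf_grid[-1]                      # normalise away discretisation error
    return np.interp(x, t, cdf_grid, left=0.0, right=1.0)

def fit_sigma2(eigvals, eta, candidates):
    """sigma^2 minimising the KS distance between the MP cdf and the empirical eigenvalue cdf."""
    xs = np.sort(eigvals)
    emp = np.arange(1, xs.size + 1) / xs.size
    ks = [np.max(np.abs(mp_cdf(xs, eta, s2) - emp)) for s2 in candidates]
    return float(candidates[int(np.argmin(ks))])

rng = np.random.default_rng(9)
N, T = 100, 1000
X = rng.standard_normal((N, T))                   # white noise in place of real data
eigvals = np.linalg.eigvalsh(X @ X.T / T)
eta = N / T
sigma2 = fit_sigma2(eigvals, eta, candidates=np.linspace(0.5, 2.0, 31))
x_plus = sigma2 * (1 + np.sqrt(eta)) ** 2
d_hat = int(np.sum(eigvals > x_plus))
print(sigma2, d_hat)                              # for pure noise, sigma2 near 1 and d_hat near 0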
http://arxiv.org/abs/2407.12961v1
20240717190254
Graph-theoretical estimates of the diameters of the Rubik's Cube groups
[ "So Hirata" ]
math.CO
[ "math.CO", "cs.DM", "math.GR", "math.PR" ]
sohirata@illinois.edu Department of Chemistry, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA § ABSTRACT A strict lower bound for the diameter of a symmetric graph is proposed, which is calculable with the order (n) and other local parameters of the graph such as the degree, even girth g (≥ 4), and number of cycles of length g passing through a vertex, which are easily determined by inspecting a small portion of the graph (unless the girth is large). It is applied to the symmetric Cayley graphs of the Rubik's Cube groups of various sizes and metrics, yielding reasonably tight lower bounds, which range from 60% to 77% of the correct diameters of large-n graphs. Graph-theoretical estimates of the diameters of the Rubik's Cube groups So Hirata July 22, 2024 ======================================================================= § INTRODUCTION The fewest number of turns to solve the Rubik's Cube from its hardest initial configuration is colloquially referred to as God's number.<cit.> It is equal to the diameter of the Cayley graph of the Rubik's Cube group. Previously,<cit.> we estimated these diameters for the Cubes of various sizes in different metrics (the set of allowed combinations of rotations, each counted as one turn) using probabilistic logic. The estimates are accurate to within 2 of the actual values and are in exact agreement with the latter for both the 2×2×2 and 3×3×3 Cubes when the quarter-turn metric is adopted. The Cayley graphs of the Rubik's Cube group in some metrics are a symmetric graph. For example, the 3×3×3 Cube group in the square-slice-turn metric (R^2L^-2, D^2U^-2, and B^2F^-2 in the Singmaster notation<cit.>) has the Cayley graph that is a cubic symmetric graph known as the generalized Petersen graph G(4,1) (Fig. <ref>). The Cayley graph of the 2×2×2 Cube group in the square-turn metric<cit.> (R^2, D^2, and B^2) is another cubic symmetric graph, which is identified as the generalized Petersen graph G(12,5) (Fig. <ref>). The Cayley graphs of the 2×2×2 and 3×3×3 Cube groups in the quarter-turn metrics should also be symmetric graphs because all configurations and all turns are symmetrically equivalent. The Cayley graph of the 3×3×3 Cube group in the square metric is another symmetric graph, which turns out to be locally isomorphic to the graph of the 2×2×2 Cube group in the quarter-turn metric, with the only difference being their orders (numbers of vertexes). In this article, we introduce the formula for a strict lower bound for the diameter of a general symmetric graph derived with algebraic-graph-theoretical arguments.<cit.> It can be evaluated with the order and some easily determined local parameters of the graph, such as the degree, girth (g), and number of cycles with length g passing through a vertex. We then apply this formula to the diameters of the Cube groups of different sizes and metrics and compare the lower bounds with the probabilistic estimates<cit.> and actual values. § SYMMETRIC CAYLEY GRAPHS Following the notation of Biggs,<cit.> a distance-partition graph Γ_i(v) for any vertex v of a symmetric graph Γ of order n and even girth g (≥ 4) is defined by its vertex set, VΓ_i(v) = { u ∈ VΓ | ∂(u,v) = i }, where ∂(u,v) is the distance between vertexes u and v. Here, VΓ_0(v) = { v} and |VΓ_0(v)| = 1 as well as |VΓ_i(1)| = |VΓ_i(2)| = … = |VΓ_i(n)|, at any i (0 ≤ i ≤ d). 
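For the two cubic symmetric graphs introduced above, G(4,1) and G(12,5), these distance partitions can be computed directly by breadth-first search. The Python sketch below is illustrative only; the vertex labelling of the generalized Petersen graph (outer cycle, inner star polygon, spokes) is an assumption of the example rather than anything taken from the Cube groups themselves.

from collections import deque

def petersen(n, k):
    """Generalized Petersen graph G(n, k): outer cycle 0..n-1, inner vertices n..2n-1."""
    adj = {v: set() for v in range(2 * n)}
    for i in range(n):
        adj[i].add((i + 1) % n); adj[(i + 1) % n].add(i)                  # outer cycle
        adj[n + i].add(n + (i + k) % n); adj[n + (i + k) % n].add(n + i)  # inner star polygon
        adj[i].add(n + i); adj[n + i].add(i)                              # spokes
    return adj

def distance_partition(adj, origin=0):
    """Orders |V Gamma_i(origin)| for i = 0..d, found by breadth-first search."""
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    diameter = max(dist.values())
    return [sum(1 for v in dist if dist[v] == i) for i in range(diameter + 1)]

print(distance_partition(petersen(4, 1)))   # [1, 3, 3, 1]: diameter 3
print(distance_partition(petersen(12, 5)))  # G(12,5), the Nauru graph: list length 5, i.e. diameter 4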
Its corresponding edge set is defined by EΓ_i(v) = {{ u,v }∈ EΓ | u ∈ VΓ_i, v ∈ VΓ_i-1}, with EΓ_0(v) = ∅ and |EΓ_0(v)| = 0 as well as |EΓ_i(1)| = |EΓ_i(2)| = … = |EΓ_i(n)|, at any i. If the graph Γ has a diameter d, |EΓ_d(v)| = 0, for any v. Henceforth, we drop the spurious argument v. We infer ∑_i=0^d |VΓ_i| = n, ∑_i=0^d |EΓ_i| = nk, which are self-evident, as well as ∑_i=0^d (-1)^i |VΓ_i| = 0, which can be proven by noting |EΓ_i| + |EΓ_i+1| = k|VΓ_i|. Let d be an array of orders of the distance-partition graph VΓ_i, i.e., d = { |VΓ_0|, |VΓ_1|, …, |VΓ_d| } . The d_i (0 ≤ i ≤ d) is therefore the number of vertexes with distance i from an arbitrary origin vertex; d_0 = 1 and d_1 = k. Let r be an array of “branching ratios” defined by r = {|VΓ_1|/|VΓ_0|, |VΓ_2|/|VΓ_1|, …, |VΓ_d|/|VΓ_d-1|} . The r_i (1 ≤ i ≤ d) is the growth rate of the number of vertexes with increasing distance, i.e., r_1 = k, r_2 = k-1, and generally, r_i+1≤ r_i. These can be understood as follows: From the origin vertex (distance 0), k edges emanate and reach k new vertexes at distance 1 (r_1 = k). From each of the k vertexes at distance 1, one edge connects the origin vertex and the remaining k-1 edges spawn k-1 new vertexes at distance 2 (r_2 = k-1). From each of the k-1 vertexes at distance 2, at most k-1 new vertexes at distance 3 can be reached (r_3 ≤ k-1). The equal sign applies when the girth is greater than 4. When the girth is equal to 4, some of the vertexes are a shared destination of two edges and therefore the number of new vertexes at distance 3 is less (r_3 < k-1). In subsequent steps, possible existence of longer cycles decreases the branching ratio even further (r_i+1≤ r_i). § DIAMETERS OF SYMMETRIC GRAPHS In a symmetric graph with order n and degree k, let g (≥ 4) be the even girth, d be the diameter, and η be the number of cycles of length g passing through a given vertex. We seek a lower bound for d, whose precise determination is generally hard, given in terms of n and local parameters of the graph—k, g, and η—which are usually readily available. The branching ratios r up to distance g/2-1 are r_1 = k, r_h = k-1 (2 ≤ h < g/2-1), and r_g/2-1 = k(k-1)^g/2-1 - η/k(k-1)^g/2-2≡ r_max. As per Eq. (<ref>), this r_g/2-1 is the upper bound of r for greater distances. We call it r_max, which can take any real positive value. An upper bound for n is then obtained by assuming that the vertexes multiply at the same highest branching ratio of r_max all the way to distance d. n_max = ⌊ 1+ k+k(k-1) + … + k(k-1)^g/2-2. . + k(k-1)^g/2-2 r_max + … + k(k-1)^g/2-2 r_max^d-g/2+1⌋. When r_max≠ 1, the above geometric series can be summed to n_max = ⌊ n_0 + k(k-1)^g/2-2r_max^d-g/2+2-r_max/r_max-1⌋, with n_0 = k(k-1)^g/2-1-2/k-2, which is related to the Moore bound (and equal to the latter when g=2d and η=0).<cit.> When r_max=1, we instead have n_max = ⌊ n_0 + k(k-1)^g/2-2 (d-g/2+1) ⌋ . The corresponding lower bound for d is obtained by equating n with n_max. d_min = ⌈g/2 - 2 + log_r_max{ (n-n_0)(r_max-1)/k(k-1)^g/2-2 + r_max}⌉, for r_max≠ 1 or d_min = ⌈g/2 - 1 + n-n_0/k(k-1)^g/2-2⌉, for r_max = 1. We have confirmed that they indeed give a lower bound for the diameter for each of the cubic symmetric graphs (with an even girth, conservatively assuming η = 1) listed in a Foster Census.<cit.> A similar argument can be used to determine an upper bound for d by assuming that the number of vertexes grows at the slowest pace after distance g/2-1, i.e., just one vertex at each distance. 
Unfortunately, this gives a uselessly conservative upper bound and we shall not consider it any further. In our previous study,<cit.> the diameter of a Rubik's Cube group is estimated by the following formula, d_probab = ln n/ln r_max + ln n/r_max. which is neither a lower nor upper bound, but works for nonsymmetric Cayley graphs or for any metric that does not form a mathematically well-defined group. It has been derived on the basis of a probabilistic argument or a modified version of the coupon collector's problem. Equation (<ref>) can also be viewed as a probabilistic estimate of the diameter of a symmetric group, where r_max is given by Eq. (<ref>) (in Ref. Hirata2024, a more accurate estimate of r was computationally determined and used instead). The first term approximately counts the distance of the growth phase, where r ≈ r_max and the number of vertexes increases geometrically. The second term measures the distance of the subsequent rapid decay phase. Our graph-theoretical lower bound [Eq. (<ref>)] considers only the growth phase and its formula thus has a similar functional form with the first term of Eq. (<ref>). § CAYLEY GRAPHS OF RUBIK'S CUBE GROUPS The reader is referred to our previous report<cit.> for the terminologies, group/graph orders, and other details of the Rubik's Cube groups of various sizes and metrics. Table <ref> summarizes the results of applying above formulas. §.§ The 3×3×3 Cube in the square-slice-turn metric The Cayley graph of the 3×3×3 Rubik's Cube group in the square-slice-turn metric<cit.> (R^2L^-2, D^2U^-2, and B^2F^-2) is a cubic symmetric graph known as the generalized Petersen graph G(4,1) depicted in Fig. <ref>. The objective here is to determine the diameter d of this graph from its global (n) and local parameters (k, g, and η) that are easily determined without having the whole graph. (In this tiny example, we immediately know the answer: d=3.) The order of the graph n is a global parameter and is potentially hard to determine. However, for the Cayley graph of a Rubik's Cube group, it can be worked out by a careful combinatorial argument<cit.> or with the aid of group theory.<cit.> In the square-slice metric,<cit.> each of the three center slices is allowed to independently take one of two orientations and thus n=2^3=8. Other local parameters such as the degree k, girth g, and the number η of cycles of length g per vertex can be gleaned from the small neighborhood of a vertex in the partial graph. Inspecting Fig. <ref>, we find that k=3, g=4, and η = 3. The distance array is d = { 1, 3, 3, 1}, which obeys Eqs. (<ref>) and (<ref>), as it must. The branching ratios are r = {3,1,1/3}, satisfying Eq. (<ref>). Using Eq. (<ref>), r_max = 1, which corresponds to the predicted distance array of d={1, 3, 3, 3}. Equation (<ref>) yields d_min = ⌈ 2.3 ⌉ = 3, which is a tight lower bound for the actual diameter of 3. The probabilistic estimate [Eq. (<ref>)] breaks down in this tiny case because n=8 is too few for probabilistic logic. §.§ The 2×2×2 Cube in the square-turn metric The Cayley graph of the 2×2×2 Rubik's Cube group in the square-turn metric<cit.> (R^2, D^2, and B^2) is also a cubic symmetric graph known as the generalized Petersen graph G(12,5) drawn in Fig. <ref>. It is also called the Nauru graph. In the absence of the whole Cayley graph, its order (n) can be determined to be 24 by the following combinatorial argument: The ulf cubie<cit.> is held fixed as the orientation anchor. 
The urf cubie and the three cubies (ulf, drb, dlb) that are diagonal from it can interchange their positions in 4! = 24 different ways. Their orientations and the other four cubies' positions and orientations are completely determined by the positions of the first four. Hence, n=24. Focusing on any one vertex and its small neighborhood in Fig. <ref>, we find k=3, g=6, and η=3. Substituting these values into Eq. (<ref>), we obtain d_min = ⌈ 3.4 ⌉ = 4, which is a tight lower bound for the actual diameter of 4 (Table <ref>). §.§ The 2×2×2 Cube in the quarter-turn metric Figure <ref> shows a local Cayley graph within distance 3 from an arbitrary origin vertex for the 2×2×2 Cube group in the quarter-turn metric.<cit.> The order of the graph can be determined by combinatorial logic to be n = 3.67 × 10^6.<cit.> Without drawing such a huge graph, and based on the facts that all Cayley graphs are vertex-transitive and that all turns are symmetrically equivalent with one another, the Cayley graph of this group should also be edge-transitive and thus symmetric.<cit.> This is supported by the appearance of the local graph of Fig. <ref>, in which every vertex is equivalent to and interchangeable with every other, as is every edge. A small portion of the Cayley graph is sufficient to determine that k=6, g=4, and η=3. Unlike these easily determined parameters, the diameter (d) is hard to come by. For the 2×2×2 Cube, a brute-force computation of the distance array d is possible,<cit.> as shown in Fig. <ref>. The computed d indicates that Eq. (<ref>) is obeyed and that d=14. It is our objective, however, to estimate d from the easily obtained local information of the graph as well as n. Using Eq. (<ref>), we find d_min = ⌈ 9.7 ⌉ = 10, which bounds the actual d = 14 from below, as it should. The probabilistic estimate [Eq. (<ref>)] is d_probab = 13.4, which is much closer to the correct value of 14, as expected. §.§ The 3×3×3 Cube in the square-turn metric The square-turn metric of the 3×3×3 Rubik's Cube allows six 180^∘ rotations denoted by R^2, D^2, B^2, L^2, U^2, and F^2 in the Singmaster notation.<cit.> The order of the Cayley graph can be determined by group theory to be 663 552.<cit.> Combinatorial logic explains the same number as follows: n = (4!· 4) ·4!4!4!/2 = 663 552. The product of the first two factors, 4!· 4, is the number of ways in which the eight corner cubies can be positioned, with their orientations fixed by the positions. They can be viewed as the four diagonal corner cubies of the 2×2×2 Cube in the square metric, which has 4!=24 different configurations; this is further multiplied by the 4 possible positions of the ulf cubie (which was held fixed for the 2×2×2 Cube in the square metric, but can now move) reachable by the L^2, U^2, and F^2 turns. The remaining 3 corner cubies' positions and orientations are completely determined by the first five. The quotient 4!4!4!/2 is the number of ways the three groups of 4 edge cubies (each sharing a common center slice) can be positioned, with their orientations fixed. Since the number of correctly oriented edge cubies is always even, there are two orbits and the number must be divided by two. In short, the global parameter n can be known analytically. What makes this group interesting is the fact that its Cayley graph is locally isomorphic with the graph of the 2×2×2 Cube group in the quarter-turn metric (Sec. <ref>). The only difference is the orders of these graphs; one is 5.5 times smaller than the other.
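As a quick arithmetic check of the counting argument for the square-turn metric (nothing here is specific to the Cube beyond the factorials):

from math import factorial

n = factorial(4) * 4 * factorial(4) ** 3 // 2
print(n)  # 663552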
Figure <ref> shows the local (up to distance 3) Cayley graph, which is indeed identical with Fig. <ref>; it is a symmetric graph with k=6, g=4, and η = 3. Surprisingly, the diameter (d=15) of this graph with n=6.63 × 10^5 is greater than the diameter (d=14) of the 2×2×2 Cube group in the quarter-turn metric with n=3.67 × 10^6. With all the other parameters being the same, it is impossible to correctly predict this inversion by the simple graph-theoretical or probabilistic estimations presented here and earlier.<cit.> These two graphs, therefore, underscore the diversity of symmetric graph structures differing only in their orders. Using Eq. (<ref>), we obtain as the lower bound of this graph d_min=9, which is lower than d_min=10 of the 2×2×2 Cube group in the quarter-turn metric. Likewise, the probabilistic estimate d_probab = 11.9 is also lower than that (d_probab = 13.4) of the 2×2×2 Cube group in the quarter-turn metric and is uncharacteristically underestimated. Hence, the inversion is not reproduced. Figure <ref> compares the distance arrays d of these two locally isomorphic Cayley graphs. The two curves initially rise at the same rate for they have the same r_max. The curve for the 3×3×3 Cube group in the square metric (the red open circles) begins to slow the growth earlier, which is expected because its n is 5.5-times smaller. The peak in d occurs slightly earlier (by one half distance unit) in the 3×3×3 Cube group in the square metric with the smaller n. What causes the inversion is the decay phase; the d of the 3×3×3 Cube group in the square metric decays unusually slowly (see other plots of d in Ref. Hirata2024 also), considerably delaying reaching the diameter. The cause of this slow decay is unknown. §.§ The 3×3×3 Cube in the quarter-turn metric Figure <ref> is the local (up to distance 2) of the Cayley graph of the 3×3×3 Rubik's Cube group in the quarter-turn metric.<cit.> It is a symmetric graph with k=12, g=4, and η = 18. The order of the graph is n=4.33×10^19 according to combinatorial logic.<cit.> The order of this group/graph is so high that an explicit enumeration of distance array d is nearly impossible. Nonetheless, by their Herculean computations, Rokicki et al. determined the diameter to be 26.<cit.> Equation (<ref>) places the lower bound for d at d_min = 20, whereas the probabilistic estimation [Eq. (<ref>)] suggests d_probab = 24.8. They are, respectively, consistent with and in reasonable agreement with the actual diameter of 26. § CONCLUSIONS Precisely determining the diameter of a large graph of the order n in excess of millions or billions poses a major computational challenge. When n can be known by a combinatorial or other argument and furthermore the graph has a high symmetry with its local parameters easily determined by inspection, one may estimate the diameter using an algebraic, geometrical, or probabilistic argument. In our previous study, we proposed a probabilistic estimate of the diameter for a large Cayley graph, which reproduces the diameters of the Rubik's Cube groups within 2 of the actual values or often exactly. Here, we introduced a strict lower bound for the diameter for a symmetric Cayley graph, calculable with n and other easily determined local parameters such as the degree k, girth g, and number η of cycles with length g per vertex. 
Similar, but more crude estimates of the diameters of the Rubik's Cube groups have been made,<cit.> but this study is distinguished from them for presenting a rigorous lower bound also applicable to other symmetric graphs. When applied to symmetric Cayley graphs of some Rubik's Cube groups, it yields reasonably tight lower bounds for the diameters, which are 60% to 77% of the correct values for large-n graphs. The author is a Guggenheim Fellow of the John Simon Guggenheim Memorial Foundation.
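For reference, both the lower bound d_min and the probabilistic estimate d_probab are elementary functions of (n, k, g, η). The short Python sketch below transcribes the formulas as printed in the preceding sections (the group orders are entered as quoted in the text, so the last two are rounded values) and reproduces the numbers discussed for the various Cube groups.

import math

def r_max(k, g, eta):
    """Largest branching ratio r_{g/2-1} from degree k, even girth g, and eta g-cycles per vertex."""
    return (k * (k - 1) ** (g // 2 - 1) - eta) / (k * (k - 1) ** (g // 2 - 2))

def d_min(n, k, g, eta):
    """Graph-theoretical lower bound for the diameter of a symmetric graph of order n."""
    r = r_max(k, g, eta)
    n0 = (k * (k - 1) ** (g // 2 - 1) - 2) / (k - 2)
    c = k * (k - 1) ** (g // 2 - 2)
    if math.isclose(r, 1.0):
        return math.ceil(g / 2 - 1 + (n - n0) / c)
    return math.ceil(g / 2 - 2 + math.log((n - n0) * (r - 1) / c + r, r))

def d_probab(n, k, g, eta):
    """Probabilistic estimate ln n / ln r_max + ln n / r_max."""
    r = r_max(k, g, eta)
    return math.log(n) / math.log(r) + math.log(n) / r

print(d_min(8, 3, 4, 3))                                                  # 3       (3x3x3, square-slice metric)
print(d_min(24, 3, 6, 3))                                                 # 4       (2x2x2, square metric)
print(d_min(3.67e6, 6, 4, 3), round(d_probab(3.67e6, 6, 4, 3), 1))        # 10 13.4 (2x2x2, quarter-turn metric)
print(d_min(663552, 6, 4, 3), round(d_probab(663552, 6, 4, 3), 1))        # 9 11.9  (3x3x3, square metric)
print(d_min(4.33e19, 12, 4, 18), round(d_probab(4.33e19, 12, 4, 18), 1))  # 20 24.8 (3x3x3, quarter-turn metric)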
http://arxiv.org/abs/2407.12585v1
20240717141019
A global view on star formation: The GLOSTAR Galactic plane survey. XI. Radio source catalog IV: $2^\circ < \ell < 28^\circ$, $36^\circ < \ell < 60^\circ$ and $|b| < 1^\circ$
[ "S. -N. X. Medina", "S. A. Dzib", "J. S. Urquhart", "A. Y. Yang", "A. Brunthaler", "K. M. Menten", "F. Wyrowski", "W. D. Cotton", "A. Cheema", "R. Dokara", "Y. Gong", "S. Khan", "H. Nguyen", "G. N. Ortiz-Leon", "M. R. Rugel", "V. S. Veena", "H. Beuther", "T. Csengeri", "J. D. Pandian", "N. Roy" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.HE", "astro-ph.SR" ]
German Aerospace Center, Scientific Information, 51147 Cologne, Germany Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany Centre for Astrophysics and Planetary Science, University of Kent, Canterbury, CT2 7NH, UK National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing, 100101, P. R. China Key Laboratory of Radio Astronomy and Technology, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing, 100101, P. R. China National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903, USA Instituto Nacional de Astrofísica, Óptica y Electrónica, Apartado Postal 51 y 216, 72000 Puebla, Mexico Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Köln, Germany Max Planck Institute for Astronomy, Koenigstuhl 17, 69117 Heidelberg, Germany Laboratoire d'astrophysique de Bordeaux, Univ. Bordeaux, CNRS, B18N, allée Geoffroy Saint-Hilaire, 33615 Pessac, France Department of Earth & Space Sciences, Indian Institute of Space Science and Technology, Trivandrum 695547, India Department of Physics, Indian Institute of Science, Bangalore 560012, India The GLOSTAR survey studies star formation with the Very Large Array (VLA) and the Effelsberg 100 meter radio telescope in the Galactic plane between -2 < ℓ < 60 and |b| < 1, and the Cygnus X region (76 < ℓ < 83 and -1 < b< 2) with unprecedented sensitivity in both flux density (1σ∼50 μJy beam^-1) and the capability of detecting emission with angular scales in the range from 1.”0 to the largest radio structures in the Galaxy on the order of a few degrees size. Here, we provide a complete GLOSTAR-VLA D-configuration radio source catalog for the covered part of the Galactic disk. A catalog for the “pilot region” between 28 < ℓ < 36 has been published in a previous paper and here we present the complementary catalog for the area within 2 < ℓ < 28, 36 < ℓ < 60 and |b| < 1. Observations were taken with the VLA in a 4 – 8 GHz band to image 100 square degrees of the inner Galactic disk at a reference frequency of 5.8 GHz, using a total of 260 hours of telescope time. We determined spectral indices (α; S_ν∝ν^α) inside the observed band and in the frequency range 1.4 – 5.8 GHz by complementing our results with those from the THOR survey, which covers 1 – 2 GHz. The final images have an angular resolution of 18” and an average sensitivity of 123 μJy beam^-1. The sensitivity is better (∼ 60 μJy beam^-1) in areas free of extended emission. The complementary Galactic disk catalog, presented in this work, consists of radio sources. Of these, are known large-scale structure sources such as star-forming region complexes, well-known supernova remnants (SNRs), SNR candidates or parts thereof. The remaining are discrete individual sources. Source parameters, namely flux densities, sizes, spectral indices, and classifications are reported. We identify 769 region candidates, 359 of which are newly classified as such. The mean value of spectral indices of 225 regions is +0.14±0.02, consistent with most of them emitting optically thin thermal radio emission. Combining our results with the previously published catalog of the pilot region, the final GLOSTAR-VLA D-configuration catalog contains 12 981 radio sources. GLOSTAR Catalog IV Medina, S.-N. X. et al. A global view on star formation: The GLOSTAR Galactic plane survey. XI. 
Radio source catalog IV: 2 < ℓ < 28, 36 < ℓ < 60 and |b| < 1 The full version of Table 7, and Fig. 1 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via <https://glostar.mpifr-bonn.mpg.de> S.-N. X. Medina 1,2E-mail: smedina@mpifr-bonn.mpg.de, S. A. Dzib2 E-mail: sdzib@mpifr-bonn.mpg.de, J. S. Urquhart3, A. Y. Yang4, 5, A. Brunthaler2, K. M. Menten 2, F. Wyrowski 2, W. D. Cotton6, A. Cheema2, R. Dokara 2, Y. Gong 2, S. Khan2, H. Nguyen2, G. N. Ortiz-León7,2, M. R. Rugel2,6,8, V. S. Veena2,9, H. Beuther10, T. Csengeri11, J. D. Pandian12, N. Roy13 Received February 2024; Accepted July 2024 ================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION The wide-field radio continuum surveys have been fundamental in significantly increasing the number of known radio objects, particularly sources representing high-mass young stellar objects still embedded in or close to dense gas. These objects, when far away in the Galactic plane, are typically obscured by large volumes of dust, making observations of stars in the earliest evolutionary stages difficult (see a review in high-mass star formation by ). Therefore, the GLObal view of the STAR formation in the Milky Way (GLOSTAR)[https://glostar.mpifr-bonn.mpg.de/glostar/] survey <cit.> focuses on finding and characterizing star-forming regions in the Galactic plane at radio frequencies with unprecedented sensitivity using the Karl G. Jansky Very Large Array (VLA) and the Effelsberg 100 meter telescope. The GLOSTAR survey detects tell-tale tracers of star formation: compact, ultra- and hyper-compact regions and class II methanol masers that trace different stages of early stellar evolution. The capabilities of the GLOSTAR survey helps to locate the center of early stages of star-forming activity and complements previous radio surveys like MAGPIS, CORNISH, and THOR <cit.> that have also largely contributed to star formation studies. Furthermore, the sensitivity achieved by GLOSTAR will highly increase and confirm the number of known sources within the Galactic plane. To achieve its goal, the GLOSTAR survey covers a large area of the Galactic plane within 145 ^2, covering -2 < ℓ < 60; |b| < 1, which includes the Galactic center, and, in addition, the Cygnus X region (76 < ℓ < 83; -1 < b < 2). It uses the results of complementary set of observations with the VLA of the NRAO[The National Radio Astronomy Observatory is operated by Associated Universities Inc. under cooperative agreement with the National Science Foundation.] and the Effelsberg 100 m telescope. With this combination, the GLOSTAR survey can detect compact radio sources and recover extended emission. The GLOSTAR-VLA observations used the wideband C-band (4–8 GHz) receivers. They were performed using both the compact D-configuration (as well as complementary DnC- and C-configurations) and the extended B-configuration (as well as complementary BnA- and A-configurations) to obtain good surface brightness sensitivity and a range of resolutions to map the radio emission over a large range of angular scales. 
The C-band also comprises the frequency of the prominent class II CH_3OH (methanol) maser line (6.668 GHz), the 4.829 GHz transition line H_2CO (formaldehyde), and several radio recombination lines (RRLs), spectrally resolved data have also been taken within the GLOSTAR survey. The series of GLOSTAR survey results can be found in a series of publications, starting with the first catalog of radio continuum sources by <cit.> based on the VLA D-configuration data of the GLOSTAR “pilot region” (defined within the following limits: 28 < ℓ < 36 and |b| < 1). The overview of the GLOSTAR survey is presented by <cit.>, who additionally provided examples of the combination of the VLA and the Effelsberg observations. At the same time, <cit.> presented a study of Galactic SNRs, <cit.> presented radio continuum detections of young stellar objects (YSOs) in the Galactic center, and <cit.> reported 6.7 GHz methanol maser detections in the Cygnus X region, followed by <cit.> who reported methanol maser detections within -2 < ℓ < 60 and |b| < 1. The GLOSTAR VLA B-configuration data were used by <cit.> to provide the first radio source catalog of compact sources in the pilot region and by <cit.> for the region within 2 < ℓ < 28, 36 < ℓ < 40, 56 < ℓ < 60 and |b| < 1. Moreover, GLOSTAR VLA and Effelsberg data were recently used by <cit.> to analyze nonthermal synchrotron emission from SNR candidates in the GLOSTAR pilot region, by <cit.> and a deep analysis of formaldehyde absorption in the Cygnus X region, while <cit.> reports RRLs related to bright regions of the full survey. The focus of this paper is the characterisation of continuum radio emission from GLOSTAR VLA D-configuration data (angular resolution ∼18” and mean sensitivity ∼128 μ Jy beam^-1) to provide a reliable catalog of radio sources with angular sizes of ∼ 18 to a few arcminutes. The work presented in this paper extends the analysis already conducted on the radio continuum map of the pilot region <cit.> to the rest of the inner Galactic disk covered by the survey. We will follow a similar set of steps to create a complete catalog of the Galactic disk covered by the GLOSTAR survey (2 < ℓ < 60 and |b| < 1). This paper is organized as follows: Section <ref> gives a description of the observations and the obtained data. Section <ref> describes the resulting image of the radio continuum emission and the method used for source extraction. Section <ref> discusses the catalog construction and section <ref> summarizes and discusses the catalog properties. In Section <ref> we give a summary of the work and outline our main findings. § OBSERVATIONS AND DATA REDUCTION The observations were performed with the VLA in the D-, and C-configurations (see Table <ref>). The hybrid DnC configuration was preferred for targets with low elevation (that is, ℓ<12) to recover a synthesized beam similar in shape to that obtained with D-configuration observations at higher elevations. However, the use of the hybrid configurations was discontinued in 2016. To compensate for this, the regions not observed in the DnC-configuration were observed in both D- and C-configurations as suggested by NRAO[<https://science.nrao.edu/facilities/vla/docs/manuals/propvla/array_configs>]. Though diverse array configurations were used, for the simplicity of convention, we will refer to all these low-resolution VLA observations as the GLOSTAR VLA D-configuration observations. 
A detailed overview of the observing strategy, data reduction, and imaging of the observed data is given in <cit.>, and more specific details on the calibration and imaging of the D-configuration data are described in <cit.>. While we refer the reader to these works for details, a summary is provided below. The observed band corresponds to the so-called C-band, covering a frequency range from 4.0 to 8.0 GHz. In particular, the correlator was tuned to record two 1.0 GHz wide sub-bands centered at 4.7 and 6.9 GHz in a semi-continuum mode[The observations also comprised additional high spectral resolution correlation setups covering spectral lines <cit.>. In this paper, however, we focus on the continuum data.]. The portion of the Galactic disk for which results are reported in this paper has a size of 100 square degrees and is delimited by |b| ≤ 1 and 2≤ℓ≤28 and 36≤ℓ≤60. The results for the areas corresponding to the Galactic center region (-2<ℓ<+2) and the Cygnus X region are not included in this work and are left for future GLOSTAR catalog releases. A total of 52 observing sessions were performed to cover smaller areas using the hexagonal mosaicing scheme of pointed observations <cit.>[Note that the values given for the spacing and primary beam size in Sect. 2.1 of <cit.> have incorrect units; they were given in arcseconds, whereas the correct units should be arcminutes. The hexagonal grid spacing is θ_ hex=3.'25 and the FWHM primary beam size at the higher frequency band, 6.9 GHz, is θ_ B=45/(ν  [GHz])=6.'5. ]. The sessions observed while the VLA was in its D- and DnC-configurations covered rectangular areas with sizes of two degrees in Galactic latitude direction and one degree in Galactic longitude direction, using ∼650 target pointings (spending ∼22s per target pointing). C-configuration observations, on the other hand, covered 1.5 degrees in the Galactic longitude direction and observed ∼950 target pointings (spending ∼11s per target pointing). Quasars were observed for calibration purposes, particularly 3C286 and 3C48 were observed for amplitude and bandpass calibrations. A gain calibrator is observed for each session and chosen to be at an angular distance of ≲10.0^∘ from the region covered in that session. The observations are part of the VLA projects with IDs 11B-168, 13A-334, 15B-175, and 17A-197. The total telescope time used for these observations is 260 hours (that is, five hours per session). The observations occurred from 2011 December to 2017 June. Individual observation dates, corresponding VLA project ID, VLA configuration, and used gain calibrators are listed in Table <ref>. The data sets were edited, calibrated, and imaged using the software package <cit.>, designed for handling radio astronomical data[ applications can be accessed through a Python interface (ObitTalk). also inter-operates with classic AIPS <cit.> and has access to its tasks.]. Prior to calibration, the data sets were edited to remove the first 10 seconds of the scans on calibrators, which are affected by the antennas slewing. Then, a standard calibration is applied. This includes amplitude corrections based on the switched power signal, group delay, bandpass, amplitude and phase, and polarization calibration. The standard calibration is alternated with the editing of data affected by instrumental problems or radio frequency interference. Then, the data are reset and calibrated again, excluding the identified problematic data. The images are produced using the task MFImage. 
First, the data for every target pointing are imaged individually, covering the FWHM primary beam corresponding to the lowest continuum band, 4.7 GHz, which is 8.'4. In cases for which the target pointing covered strong sources located outside the primary beam that are bright enough to affect the imaging, we add outlier fields at the location of these sources to account for them in the cleaning process. The observed bandwidth is divided into frequency bins that are narrow enough such that the effects of variable spectral index and antenna pattern variations are minor, allowing a joint spectral deconvolution. MFImage then generates an image for each individual frequency bin, that are then weighted averaged to produce the final map of the target field. The images are produced using a Briggs' weighting with robustness parameter 0.0. Finally, the frequency bins and the weighted averaged map are combined to produce the mosaiced images with a restoring beam of 18” and a pixel size of 2.”5 as described by <cit.> and <cit.>. The shortest baseline of the VLA in the D-configuration is 35 m. Thus, sources with angular sizes larger than ∼4' are filtered out from the images. In order to conserve computational resources, the total observed area was divided into 6 sub-mosaics that constitute the final radio continuum map. The full continuum image of the Galactic disk area covered by the GLOSTARS survey, 2 < ℓ < 60 and |b| < 1 (i.e., including the pilot region), is shown in Figure <ref>. § ANALYSIS OF THE CONTINUUM MAP §.§ Radio Continuum Map The observational data previously described were used to image 100 square degrees of the Galactic plane by producing six sub-mosaics (see Table <ref> for each longitude range, including the pilot region). Together with the sub-mosaic of the pilot region, previously reported by <cit.>, we have a complete map of 116 square degrees of the Galactic plane (that is, 2 < ℓ < 60 and |b| < 1). Figure <ref> presents the radio continuum map of the complete region produced from the D, DnC, and C-configuration data. This radio map has an effective central frequency of 5.8 GHz and has been restored using a circular beam of 18”. Although the observed region extends slightly beyond |b|=1.0, the noise increases significantly in these outer regions because of the primary beam attenuation and because these areas do not overlap with other pointings. Hence, sources detected beyond this latitude range have been excluded from the analysis presented here, although they are listed in our final catalog. Inspection of Fig. <ref> reveals the presence of thousands of compact sources and many large-scale sources (LSSs, see Sect. <ref> for details) associated, for example, with prominent star-forming complexes (e.g., W31, W33, W49, and W51) or supernova remnants (e.g., W28 and W44). The emission towards these more complex regions is not fully recovered due to the lack of short (u,v) baselines, which also results in strong negative bowls that affect the noise level around them. Therefore, these LSSs cause significantly higher noise levels in the Galactic mid-plane and the inner part of the Galactic disk, where they are more densely concentrated, and affect the quality of the imaging around them. This is demonstrated in Table <ref> where the mean noise values determined from each of the mosaics can be seen to decrease with longitude from the Galactic center to the outer part of the disk where less star formation is occurring. In Fig. 
<ref> we present a histogram of the pixel values from the whole survey region. The significant noise variations across the GLOSTAR VLA D-configuration map produce the non-Gaussian profile of the pixel noise values. We use a Gaussian fit to this pixel distribution to estimate the overall sensitivity of the final map. The standard deviation of the distribution of the pixel values, σ_rms, is 123 μJy beam^-1; however, the noise values can be significantly higher towards prominent complexes concentrated towards the mid-plane. The noise level in regions free of extended emission is approximately 60 μJy beam^-1. §.§ Source extraction The range of detected structures, level of complexity, variation, and survey coverage make source extraction and analysis difficult in the GLOSTAR low-resolution images presented here. In our previous analysis of the pilot region, we developed a number of steps to automatically identify sources and confirm them visually and by spatially correlating them with published source catalogs at different wavelengths <cit.>. In this paper, we will follow the same process to create a catalog of radio sources for the rest of the covered Galactic plane, which will then be combined with the sources detected in the pilot field to create a complete catalog of radio sources in the GLOSTAR VLA D-configuration radio map (excluding the Galactic center and the Cygnus X regions); that is 2 < ℓ < 60 and |b| < 1. We have again used the software package to perform the source extraction; see <cit.> for a detailed description of this Python code. This code has also been previously used for the source extraction of a complementary 21 cm radio continuum survey (The HI/OH/Recombination line survey of the inner Milky Way (THOR); ), which will facilitate further comparisons. Below we provide a brief outline of the method used and refer the reader to <cit.> for more details. The methodology steps are also visualized in Fig. <ref>. Creation of the noise maps. To perform the automatic source extraction, we first need to produce an independent noise map of the region due to the position-dependent nature of the noise in GLOSTAR VLA D-configuration radio continuum maps. We use the rms estimation algorithm within the SExtractor package <cit.>. This algorithm defines the rms value for each pixel in an image by determining the distribution of pixel values within a local mesh until all values are around a chosen sigma value. Following our previous work <cit.>, for the calculations we used a detection and analysis threshold of 5σ, a minimum size of 5 pixels, and a mesh size of 80 × 80 pixels^2. As a result, most real emission is removed from the noise image, and the determined noise map represents the correct noise level. Automatic source extraction with . We employ the software package that uses an algorithm to detect and identify blobs, or islands of pixels representing sources, in two-dimensional astronomical radio-wavelength images <cit.>. The algorithm makes a pixel-by-pixel comparison between the main image and the background noise map, locates pixels above a given threshold, and defines “blobs” around these pixels. Then, a first source parameter set is obtained by fitting a 2D elliptical Gaussian. Later, the peak and integrated flux densities are corrected by considering biases from the Gaussian fits, such as source morphology <cit.>. 
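The sensitivity estimate quoted earlier in this section (a Gaussian fit to the histogram of map pixel values) can be prototyped as follows; the "map" here is synthetic Gaussian noise at 123 μJy beam^-1 plus an arbitrary bright-pixel tail, so the numbers are illustrative only.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Synthetic pixel values (Jy/beam): Gaussian noise plus a sparse positive tail from bright sources.
noise = rng.normal(0.0, 123e-6, size=2_000_000)
tail = rng.exponential(2e-3, size=5_000)
pixels = np.concatenate([noise, tail])

counts, edges = np.histogram(pixels, bins=400, range=(-1e-3, 1e-3))
centres = 0.5 * (edges[1:] + edges[:-1])

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

popt, _ = curve_fit(gauss, centres, counts, p0=(counts.max(), 0.0, 1e-4))
print(f"sigma_rms ~ {abs(popt[2]) * 1e6:.0f} uJy/beam")  # recovers roughly the input 123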
returns several parameters from which we use the following (and their corresponding errors) for our catalog construction: the 2D source position ( and ; notice these are software internal names and not the image coordinate system), peak flux density (), integrated flux density (), and the number of pixels covered by the source (). Following our previous work, we applied a detection threshold (dSNR) of 4σ and a minimum source size of 5 pixels in diameter (∼13). This resulted in the extraction of blobs potentially associated with radio sources in the area delimited by 2 < ℓ < 28, 36 < ℓ < 60 and |b| < 1. All of them have flux above four times that in the local noise map and have a size comparable to or larger than the beam. However, this sample contains a significant number of artifacts that need to be removed, and also emission from large-scale structures, that is, sources fragmenting into multiple components that need to be grouped. Furthermore, at the edges of the map, the noise increases significantly, therefore, the sources located at |b|>1 are excluded from further analysis and their identification names are marked with an *. The following steps will address these issues. Additionally, in our previous work, we also flagged (that is labeled) sources that appeared to consist of two or more distinct emission peaks; however, in this work, we have not attempted to flag them or to separate the emission because in our previous work, only around 3% of the sources are affected by this problem, and those are resolved in the GLOSTAR VLA B-configuration images <cit.>. Finally, as in our previous works <cit.>, we considered the Y-factor (Y_ factor = S_ν, Int/S_ν, Peak) and roughly divided the sources into extended (Y_ factor >2.0), compact (1.2<Y_ factor≤2.0) and unresolved (Y_ factor≤1.2). Identification of Large Scale Structures (LSSs). In our previous work, we defined LSSs as over-resolved structures whose emission is broken up into multiple independent fragments of radio sources. To identify LSSs, we performed a visual inspection and compared the radio emission distribution with their corresponding 8.0 μm maps extracted from the GLIMPSE archive. Most GLOSTAR sources with corresponding 8.0 μm have thermal radio emission, that is they are regions. SNRs are also generally faint or even not detected at mid-infrared wavelengths <cit.>. In Fig. <ref> we show some examples of these LSSs. We delimited a rectangular area around the coherent infrared structures morphologically correlated with the radio emission (see the left panel of Fig. <ref>). Also, we have delimited a rectangular area around radio emission sources with shell-like morphology that are clearly part of one source even if there is no clear correlation with the 8.0 μm emission, which is the case for many SNRs (see the lower right panel of Fig. <ref>). Finally, we also classified as LSSs the complexes with two or more independent sources that clearly belong to the same star-forming region as seen in the 8.0 μm maps (see the top right panel of Fig. <ref>). All extended radio sources in these boxed regions are considered to be part of one LSS. Individual fragment components and their properties are given in the catalog, however, they have been excluded from the statistical analysis presented here. Compact and unresolved radio sources that are considered unlikely to be associated with LSSs have been retained in the final catalog as distinct sources, since these are possible individual sources and not part of the extended emission. 
They are included in our further analysis. In this work, we have visually identified a total of large-scale radio emission regions containing individual fragments within 2 < ℓ < 28, 36 < ℓ < 60 and |b| < 1, which includes the SNRs like W28, W30, and W49 <cit.> and star-forming region complexes such as W31 <cit.>, W33 <cit.>, M16 <cit.>, M17 <cit.>, W39 <cit.>, and W51 <cit.>. In Table <ref>, we give the list of LSSs, including the 27 from the pilot region. The name of each structure has been constructed from the central coordinates of the enclosing box. The table includes the extent of the boxed region in ℓ and b and the integrated flux of the total emission in the box minus the emission from any compact sources that are considered as distinct sources. The total integrated flux density of each LSS, is calculated by adding the flux densities of all components together. The measured parameters of LSSs should be used with caution as the flux densities and sizes are unlikely to correspond to discrete sources, but rather to source complexes. We also point the readers to <cit.> and <cit.> for a discussion on the calculation of flux densities of sources identified as SNRs. Crossmatch with recently identified SNR candidates. SNRs are sources of nonthermal radio emission <cit.> that can also be detected with the GLOSTAR observations <cit.>. The radio emission from most of these radio sources is extended and, in many cases, has a weak surface brightness. Because of these properties, automatic source extraction is not optimal to fully recover these sources. Their identification and total flux determination require a more detailed visual inspection that goes beyond the scope of this work. SNRs and SNR candidates in the observed area of the GLOSTAR survey are the subject of the studies presented by <cit.> and <cit.>. However, our procedure recovered sources located in the regions containing well-known SNRs and previously identified SNR candidates. Previously, well-known SNRs with large angular scales were identified as LSSs (see also Table <ref>). Additionally, we also identified extended sources (Y_ factor>2.0) located in areas of other SNR candidates, including the 80 new SNR candidates from the GLOSTAR survey <cit.>. These sources are labeled in the final catalog. Cross match with previous catalogs. In order to verify the remaining sources, the next step was to assume that any radio source that has a counterpart in a published radio or mid-infrared catalog is real. We used several complementary multi-wavelength surveys to cross match with our detected radio sources to identify possible counterparts. The selected surveys are better described in the following sections. They either have comparable angular resolution: GLIMPSE <cit.>, WISE <cit.>, HiGAL <cit.>, ATLASGAL <cit.>, GLOSTAR VLA B-configuration <cit.>, CORNISH <cit.>, RMS <cit.>, and THOR <cit.> all have angular resolutions between 6.0 and 20. The CORNISH and RMS VLA surveys, both at 5.0 GHz, comparable to the GLOSTAR survey's wavelength, have resolutions of 1.5 and 2, respectively. We used an initial cross match radius of 10, the same as chosen in our previous paper <cit.>. The offset distribution between GLOSTAR VLA D-configuration sources and the above catalogs was analyzed based on the empirical cumulative distribution function (ECDF), see Figure <ref>. The offsets between GLOSTAR and other radio centimeter wavelength surveys (THOR, CORNISH, and RMS) in Fig. <ref> show a good position match. 
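Positional cross-matching of this kind is routinely done with astropy; the following sketch assumes astropy is available and uses a handful of made-up Galactic coordinates in place of the real catalogs.

import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Hypothetical source positions (degrees, Galactic frame) standing in for the two catalogs.
glostar = SkyCoord(l=[10.0010, 25.3000, 40.1230] * u.deg,
                   b=[0.1200, -0.4500, 0.8000] * u.deg, frame='galactic')
other = SkyCoord(l=[10.0015, 40.1235, 55.0000] * u.deg,
                 b=[0.1195, 0.8001, -0.3000] * u.deg, frame='galactic')

# Nearest neighbour in 'other' for every GLOSTAR source, kept only if closer than the match radius.
idx, sep2d, _ = glostar.match_to_catalog_sky(other)
for radius in (10 * u.arcsec, 6 * u.arcsec):   # 10" in general, 6" for WISE and GLIMPSE
    matched = sep2d < radius
    print(radius, int(np.count_nonzero(matched)))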
This is because they are tracing radio emission with similar morphology. For ATLASGAL, GLIMPSE, and WISE, the situation is different. ATLASGAL 870 μm submillimeter continuum sources trace compact high column density emission of cold dust (and associated molecular gas) in star-forming regions and are quite rare. Thus, a radio source the direction of an ATLASGAL source may be related to it even if the angular separation between the ATLASGAL and GLOSTAR positions is larger than 10. The cases of the WISE and GLIMPSE catalogs are more complex: due to the high density of foreground field stars detected in these survey there is a significant probability of false line of sight matches with the GLOSTAR detections. As a precaution, a smaller matching radius of 6” was used for WISE and GLIMPSE in our final counterpart search. We have found radio sources with at least one counterpart at a 10 cross match radius. In Table <ref>, we summarize the counterparts of the GLOSTAR-VLA sources with the complementary multi-wavelength surveys. Visual inspection. After our cross match analysis, we find blobs to be unrelated to extended sources and to have no counterparts in any other survey. To understand the nature of these sources, we first compare the statistics with the expected number of false detections. As discussed in our previous works <cit.> the number of false detections can be calculated using the complementary cumulative distribution function Φ(x)=1- ϕ(x), where ϕ(x) is the cumulative distribution function. Assuming that the noise in our radio maps follows a Gaussian distribution, ϕ(x)=1/2 [ 1 + erf ( x/√(2)) ] where erf is the error function given by erf(x)= 1/√(π)∫_-x^x e^-t^2 dt. Φ(x) is the probability that the value of a random variable x with a standard normal distribution will exceed the value x. Considering the 100 square degrees of the analyzed radio map, the expected false detection is calculated like θ^2×ϕ(x), where θ is the synthesized beam size. Therefore, considering sources above 4σ, we expect only 127 false detections, which is much smaller than the sources without counterparts. Next, we carried out a visual inspection to identify and remove side lobes (which appear as elongated blobs in the direction of a very bright source) and artifacts (weak sources, between 4–5σ, and have a size much smaller than the beam size). This led to the removal of blobs. We also identified sources as unclear; these sources have a signal-to-noise (S/N) ratio between 4.0 and 5.0; they have no counterpart and do not comply with the characteristics of a side-lobe or an artifact; however, they are located in a noisy region or close to a very bright source. Finally, we identified as real sources, and included all detections with a S/N ratio ≥ 5 that are not elongated (and thus a not part of sidelobes). Table <ref> summarizes the numbers of detected sources ascribed to the different categories. § FINAL CATALOG After removing all artifacts extracted by , the final catalog consists of entries. From these, 9 254 represent discrete sources; the remaining are part of LSS and SNR candidates. In this section, we will use the discrete sources and compare them with previous surveys in order to quantify the reliability of their properties based on our source extraction process (see Sect. <ref>). Additionally, we use the information from the counterparts of previous radio and infrared surveys to classify the sources. 
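The expected number of false detections quoted above (about 127 at 4σ) follows if the expression is read as the number of independent resolution elements, (survey area)/θ^2, multiplied by the one-sided Gaussian tail probability Φ(x) = 1 - ϕ(x). A short check of this reading (treating θ^2 as the beam solid angle, which is only an approximation):

from scipy.stats import norm

survey_area_deg2 = 100.0
theta_arcsec = 18.0                                            # restoring-beam FWHM

n_beams = survey_area_deg2 * 3600.0 ** 2 / theta_arcsec ** 2   # ~4e6 resolution elements
for snr in (4.0, 5.0):
    print(f"{snr:.0f} sigma: ~{n_beams * norm.sf(snr):.0f} expected false detections")
# 4 sigma: ~127, 5 sigma: ~1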
§.§ Astrometry To check the quality of the GLOSTAR source positions, we have compared the positions of unresolved (Y_ factor< 1.2) sources that have counterparts in the radio fundamental catalog of compact radio sources[This catalog is provided via the project webpage at <https://astrogeo.smce.nasa.gov>. Responsible NASA official: L. Petrov.]. These radio sources are quasars that were observed with Very Long Baseline Interferometry (VLBI) and their celestial positions are known with accuracies better than a few milli-arcseconds. Using a position matching of 6” between both catalogs, we found 55 sources in common. In the upper panel of Fig. <ref>, we show the ℓ and b position offsets between GLOSTAR D-configuration and VLBI sources. The mean values of the position offsets are -0.”34±0.”10[The errors reported on the mean values, here and through the manuscript, are estimated using the standard error of means (SEM=σ/√(N), where σ is the standard deviation and N is the number of elements in the sample).] and +0.”31±0.”11 in Galactic longitude and latitude, respectively. The standard deviations of the offsets are 0.”8 in both directions. Thus, the positions of the GLOSTAR catalog presented in this paper are accurate to within 1”. A second test can be done by comparing the GLOSTAR sources with the CO-Ordinate Radio 'N' Infrared Survey for High-mass star formation <cit.> catalog. CORNISH used the VLA prior to its upgrade to observe at 5.0 GHz an area similar to that covered by us. Most of the sources in this catalog are expected to be background extragalactic objects, which is also the case observed for GLOSTAR. No detectable proper motions are expected from these sources. We have also used a matching radius of 6”, and we have restricted our analysis to sources that are unresolved in both catalogs. In the lower panel of Fig. <ref>, the measured position offsets of 668 sources meeting these criteria are shown. The mean values of the position offsets are -0.”30±0.”04 and +0.”28±0.”04 in ℓ and b, respectively. The standard deviation of the offsets is 0.”80 in both Galactic coordinates. This result is in agreement with that found for the VLBI sources. §.§ Flux density levels In this section, we will check the accuracy of flux density measurements. By using the 668 CORNISH unresolved sources detected in the GLOSTAR survey, we can check the quality of the measured flux densities in GLOSTAR <cit.>. This can be done since most of these sources are background extragalactic objects whose radio emission is expected to have a low degree of variability. In the upper panel of Fig. <ref>, the flux densities of the CORNISH sources are plotted against the GLOSTAR flux densities. For a visual guide, the equality line is also shown. This panel shows that the radio emission from both catalogs is consistent. In the lower panel of Fig. <ref>, the distribution of the ratios of the flux densities of sources from both catalogs are shown, with mean and standard deviation values of 1.01±0.02 and 0.42, respectively. Taking a 3σ value, it is then concluded that the fluxes of the GLOSTAR survey have an accuracy better than 6%. §.§ Source effective radius The BLOBCAT software returns the number of pixels within a source (output parameter npix). This number does not contain any information on the structure, elongation, or position angle of the source. While source size cannot be recovered from , following the strategy by <cit.> we can determine the effective radius of a source. 
We aim to estimate the radius of a circular source that covers the same number of pixels as the source extracted using BLOBCAT. The area of a single pixel in the GLOSTAR D-configuration images is 2.”5×2.”5. The area, A, covered by a source is npix times the pixel size squared, and the effective source radius is given by R_source=√(A/π). The effective radius, as obtained from this procedure, is listed for all sources in Table <ref>. §.§ Spectral indices The flux density of radio sources as a function of the observed frequency can be expressed in power-law form: S_ν∝ν^α, where S_ν is the flux density at the observed frequency, ν, and α is the spectral index. The sensitive and wideband GLOSTAR-VLA observations allow determining an in-band spectral index (α_GLOSTAR). We will follow the successful strategy used in the other GLOSTAR-VLA catalogs <cit.>. First, to avoid possible spectral index biases, we constrain our spectral index determinations to sources with S/N ratio >10 and Y_factor≤2.0, that is, compact sources. The lower limit for the S/N ratio was chosen to ensure that the source could be detected in most of the seven frequency bin images. For instance, as the noise level per frequency bin image is expected to be √(7)× larger than that of the combined image, for sources with S/N=10 we expect detections of the order of 10/√(7)=3.8σ_bin in each frequency bin image. This rough value ensures detections in most frequency bin images. For extended sources, as the (u, v)-coverage is different across the band, the fraction of total flux density recovered at the lower sideband can greatly exceed that at the upper sideband. In these cases, the derived spectra will be apparently steeper (more negative), with values that can be physically impossible. Second, source flux densities are measured on the imaged frequency bins (see Sect. 2). We use the logarithmic form of Equation <ref>, which is a linear equation, and the measured flux densities in the individual frequency bin images to perform a least-squares fit and determine the spectral index. Following these steps, we have estimated the spectral indices of 3 968 sources. In <cit.> and <cit.>, we compared the spectral indices determined from GLOSTAR data with those determined by the THOR survey <cit.> for compact common sources, and found good consistency between the spectral index determinations of the two surveys. The THOR survey covered the full L-band (1.0–2.0 GHz) with the VLA, and it also split the observed bandwidth into smaller frequency chunks to estimate an in-band spectral index <cit.>. Though THOR observed at lower frequencies than GLOSTAR, the resulting angular resolution of its images is similar to that of the GLOSTAR-VLA D-configuration images (18” for GLOSTAR and 25” for THOR). Thus, following <cit.>, we now proceed to determine source spectral indices from the peak flux densities measured by GLOSTAR and THOR (α_GLOSTAR-THOR). We consider mid-frequencies of 1.5 GHz and 5.8 GHz for THOR and GLOSTAR, respectively. As only two flux density values are considered, the spectral index can be determined as α_GLOSTAR-THOR= ln(S_GLOSTAR/S_THOR)/ln(5.8/1.5), where S_THOR and S_GLOSTAR are the peak flux densities from the respective surveys. The spectral index uncertainty is calculated using σ_α_GLOSTAR-THOR = √((σ_S_THOR/S_THOR)^2 + (σ_S_GLOSTAR/S_GLOSTAR)^2)/ln(5.8/1.5), which follows from standard error propagation.
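The three quantities defined in the last two subsections (the effective radius, the in-band spectral index from a weighted least-squares fit to the frequency-bin flux densities, and the two-point GLOSTAR-THOR index with its propagated uncertainty) can be sketched in a few lines of Python. The frequency-bin centres and all flux densities below are hypothetical placeholders, and the weighted fit with numpy.polyfit is only one possible implementation of the least-squares step described in the text.

import numpy as np

def effective_radius(npix, pixel_arcsec=2.5):
    """Radius (arcsec) of a circle with the same area as npix pixels of 2.5" x 2.5"."""
    return np.sqrt(npix * pixel_arcsec ** 2 / np.pi)

def inband_alpha(freqs_ghz, fluxes, flux_errs):
    """Spectral index from a weighted linear fit of log S versus log nu."""
    w = fluxes / flux_errs                       # 1/sigma of log S, since sigma_logS ~ sigma_S / S
    coeff, cov = np.polyfit(np.log(freqs_ghz), np.log(fluxes), 1, w=w, cov=True)
    return coeff[0], np.sqrt(cov[0, 0])

def alpha_glostar_thor(s_glostar, e_glostar, s_thor, snr_thor, nu_g=5.8, nu_t=1.5):
    """Two-point spectral index and its uncertainty from standard error propagation."""
    e_thor = s_thor / snr_thor                   # THOR lists S/N rather than flux-density errors
    log_nu = np.log(nu_g / nu_t)
    alpha = np.log(s_glostar / s_thor) / log_nu
    err = np.sqrt((e_thor / s_thor) ** 2 + (e_glostar / s_glostar) ** 2) / log_nu
    return alpha, err

# Hypothetical source: peak flux densities (mJy) in seven illustrative frequency bins.
freqs = np.array([4.2, 4.7, 5.2, 5.7, 6.4, 6.9, 7.4])
s = np.array([2.10, 2.02, 1.95, 1.90, 1.82, 1.78, 1.73])
e = np.full_like(s, 0.05)

print(effective_radius(25))                      # about 7 arcsec for a 25-pixel blob
print(inband_alpha(freqs, s, e))                 # alpha roughly -0.34 for these made-up flux densities
print(alpha_glostar_thor(3.2, 0.1, 5.0, 25.0))   # about (-0.33, 0.04)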
σ_S_ GLOSTAR and σ_S_ THOR are the source flux density uncertainties for GLOSTAR and THOR. In the case of the THOR survey, their catalog does not contain direct values for flux density uncertainties. However, since their S/N ratio is given, the THOR flux density uncertainty can be estimated as σ_S_ THOR= S_ THOR/ S/N. With this procedure, we estimate α_ GLOSTAR-THOR for 4 127 radio sources, of which 1 308 do not have an estimated value of α_ GLOSTAR. The total number of radio sources with an estimated radio spectral index, either the GLOSTAR-inband or GLOSTAR-THOR, is 5 276. Results of both α_ GLOSTAR and α_ GLOSTAR-THOR on individual sources are listed in Table <ref> and they will be compared in Sect. 5.3. §.§ Counterparts in other surveys During the last few decades, Galactic Plane surveys have addressed many aspects of high-mass star (>8M_⊙) formation. As these surveys image large areas of the Galaxy, they provide unbiased samples of star-forming regions with different properties <cit.>. As we have seen in previous sections, information from other radio surveys, such as CORNISH and THOR, can corroborate and complement the source parameters from our catalog. Additionally, information at shorter wavelengths can be used to give insights into the nature of the observed radio sources. In the context of GLOSTAR, particularly interesting radio sources are those related to star formation. CORNISH and THOR were briefly described in previous sections and in the following we describe the other Galactic plane surveys used to characterize the GLOSTAR radio sources. Recently, we have presented the GLOSTAR VLA B-configuration catalog of the Galactic plane using higher angular resolution (1) images <cit.>. The imaged field covers the area |b| ≤ 1.0^∘, and the Galactic longitude ranges 2^∘< ℓ < 40^∘ and 56^∘< ℓ < 60^∘; i.e., 32 square degrees less than the GLOSTAR VLA D-configuration. About 5 500 radio sources were reported and classified. Using a crossmatch radius of 10, we found a match for 2 497 compact radio sources. The Red MSX Source (RMS) survey was a multi-wavelength project aiming to identify massive young stellar objects (MYSO) in the Galactic plane <cit.>. Using a multi-wavelength classification scheme, RMS identifies the nature of radio sources; particularly within the northern hemisphere they identified 79 PNe and 391 regions with radio emission <cit.>. We retrieved 150 of the RMS sources within a search radius of 10. Emission at sub-millimeter wavelengths is dominated by dense cool dust and gas, which is intimately related to star formation. At these wavelengths, the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL; ) is the first high-resolution (≈ 20” FWHM) ground-based submillimeter (870 μm) survey of the thermal dust emission in the entire inner Galactic plane. The ATLASGAL survey has presented >10 000 dense clumps in the Galactic plane <cit.>. Correlating radio sources (such as those detected in GLOSTAR) with ATLASGAL clumps is an excellent way to identify embedded or dust-enshrouded objects such as regions <cit.>. Correlating our radio source catalog with the ATLASGAL compact source catalog <cit.>, we find cross match 281 sources within 10”. The Herschel Infrared Galactic plane survey (HiGAL) is a photometric survey in five far-infrared (FIR) bands between 70 and 500 μm <cit.>. Its observations covered the whole Galactic plane with a varying latitude range <cit.>. The FWHM beam sizes range from 6” to 35”, and the mean position uncertainty is 1.”2 <cit.>. 
Using a cross matching radius of 10” we found 1527 sources in both our GLOSTAR and the HiGAL catalogs. The Wide-field Infrared Survey Explorer (WISE) was a NASA infrared-wavelength astronomical space telescope mission <cit.> that mapped the entire sky in four mid-infrared (MIR) bands W1, W2, W3, and W4 centered at 3.4, 4.6, 12, and 22 μm, respectively, using a 40 cm telescope feeding an array with a total of 4 million pixels; these wavelengths correspond to angular resolutions of 6.”1, 6.”4, 6.”5, and 12. The WISE All-sky release source catalog contains the source properties of ∼563 million sources. The WISE All-sky release source catalog has been filtered out for sources with S/N<5, spurious detections, and image artifacts[Detailed information on WISE data processing and source catalogs (including rejected detections) can be found in <https://wise2.ipac.caltech.edu/docs/release/allsky/>.]. Using a radius of 10” for cross matching GLOSTAR radio sources and sources from the WISE All-sky release source catalog, we found 7 298 common sources. However, as mentioned earlier, because of the high density of WISE detected sources, mostly foreground field stars, there is a significant probability of false matches and we have reduced the cross matching criteria to 6 and found 3 034 matching sources. Finally, we have also used the point source catalog[The GLIMPSE point Source Catalog (GLMC) is the most reliable of GLIMPSE catalogs with a reliability ≥95% <cit.>. The GLMC can be found at the following sites: <http://ssc.spitzer.caltech.edu/legacy/glimpsehistory.html> and <http://irsa.ipac.caltech.edu/data/SPITZER/GLIMPSE>.] from the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire <cit.>. This legacy project was conducted with the Spitzer space telescope and observed the shorter wavelength part of the MIR range; that is, at 3.6, 4.5, 5.8, 8.0 μm. GLIMPSE is composed of several surveys, and particularly interesting for GLOSTAR are GLIMPSEI and GLIMPSEII which, combined, effectively observed the Galactic plane from -70^∘ < ℓ < 65^∘ and |b|<1^∘. Using a cross matching criterion of 10, we found 7 326 GLIMPSE counterparts to radio sources. However, similar to the WISE catalog, a large number of these are expected to be foreground objects. Thus, we reduced the cross match radius to 6 for the final counterpart search and found 5 525 counterparts. §.§ Source classification The present catalog is composed of radio sources, and thus one-by-one source classification, as performed in our previous work <cit.>, is a time-consuming task. Furthermore, in our previous catalog work, we have shown that the classification of radio sources detected within the GLOSTAR-VLA project is largely (>90%) compatible with the classifications made by other radio surveys <cit.>. Notably, in the GLOSTAR-VLA pilot region, the source classifications in the high-resolution and low-resolution maps are also highly consistent <cit.>. Thus, our source classification approach for the present catalog is slightly different and is presented below. We focus the source classification analysis on the sources that are not related to LSSs ( sources) or SNRs ( sources) or Unclear (). §.§.§ Source classification from other surveys First, we identify sources with counterparts in other catalogs and use their classification. We prioritize the catalogs in the following order: GLOSTAR-VLA B-configuration <cit.>, CORNISH <cit.>, RMS <cit.>, WISE regions <cit.>, and THOR <cit.>. 
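The counterpart searches above, as well as the priority scheme for adopting classifications, reduce to fixed-radius positional matching on the sky followed by a simple look-up. The sketch below illustrates this with astropy; the array names, catalogue labels, and the classification dictionary are ours, purely for illustration, and are not part of the GLOSTAR pipeline.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=10.0):
    """Nearest-neighbour sky match of catalogue 1 against catalogue 2.
    Returns the matched indices into catalogue 2 and a boolean mask of
    matches closer than the chosen radius (10 arcsec in most cases above)."""
    c1 = SkyCoord(ra=np.asarray(ra1) * u.deg, dec=np.asarray(dec1) * u.deg)
    c2 = SkyCoord(ra=np.asarray(ra2) * u.deg, dec=np.asarray(dec2) * u.deg)
    idx, d2d, _ = c1.match_to_catalog_sky(c2)
    return idx, d2d < radius_arcsec * u.arcsec

def adopt_classification(counterpart_classes):
    """Adopt the first available classification following the priority order;
    counterpart_classes maps a (hypothetical) catalogue label to a class or None."""
    for catalogue in ("GLOSTAR-B", "CORNISH", "RMS", "WISE-HII", "THOR"):
        if counterpart_classes.get(catalogue):
            return counterpart_classes[catalogue], catalogue
    return None, None
```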
We also check the GLOSTAR-VLA 6.7 GHz methanol maser catalog <cit.>, as this maser line is only detected toward MYSOs. The above catalogs use different names for different classes of sources. To homogenize them, we have used the classifications chosen in previous GLOSTAR-VLA catalogs. These classifications are HII for regions and region candidates, PN for planetary nebulae and planetary nebula candidates, EgC for background extragalactic candidate sources, Psr for pulsars, Radio-star for stars with radio emission, and other for sources that could not be classified within our classification scheme. The CORNISH survey used the terms UCHII and HII-Region, which, in our final classification, are renamed as HII. Sources CORNISH classified as Radio Galaxy (Central Source), Radio Galaxy (lobe), Galaxy, and IR-Quiet are renamed as EgC. Sources classified as region and HII/YSO in the RMS survey are categorized as HII, while YSO and Evolved star classifications are renamed as Radio-star in our catalog. Regardless of the region sub-classification from <cit.>, we rename these sources as HII. From the THOR survey, we discarded the classification X-ray (which only accounts for a counterpart at another wavelength) and renamed their jet classification to EgC <cit.>. Sources associated with methanol masers are labeled as HII, as they represent objects in the very early phases of high-mass star formation. While most class II methanol masers are not associated with compact radio emission, some of them are: in the GLOSTAR D array data, <cit.> find that toward 12% of the 554 detected methanol masers also radio continuum is detected (while 97% are associated with dust emission. The number of classified radio sources from other surveys is 3 302. The numbers of adopted classifications are 2 286 from GLOSTAR-VLA B-configuration, 537 from CORNISH, 57 from RMS, 167 from WISE regions, 249 from THOR, and six sources from the 6.7 GHz methanol maser catalog. Classifications and references to the classification origins are given in Table <ref>. We note that a large fraction of GLOSTAR radio sources could not be classified by only using the information from other surveys. §.§.§ Source classification from information at shorter wavelengths After the cross match classification, there are still 5 925 unclassified sources. To classify them, we use the information obtained in surveys conducted at other than radio wavelengths. We follow similar criteria as used in the previous GLOSTAR-VLA catalogs. First, compact (Y_ factor≤2.0) radio sources that do not have counterparts at submm, FIR, and MIR wavelengths are likely to be background extragalactic sources <cit.>, and thus they are classified as EgC. By matching these criteria, we classified 1 569 radio sources as EgC. The HII region candidate classification is assigned to radio sources that have a counterpart at submm wavelengths or that have a counterpart only at FIR wavelengths <cit.>. There are 32 radio sources with counterparts in the ATLASGAL submm survey, and 82 sources with a counterpart only in the HiGAL FIR survey. All of these 114 sources are classified as region candidates. If radio sources have counterparts only at FIR and MIR wavelengths, additional criteria are needed to classify them. Extended sources can, for example, represent photo dissociated regions (PDRs), parts of SNRs, or parts of extended regions and complicate the classification procedure. 
While these may include interesting cases, for the purpose of this paper we concentrate on compact radio sources with the signatures of early phases of star formation. Thus, in the following, we will focus on the classification of compact radio sources (Y_ factor≤2.0). The 1 463 sources with Y_ factor>2.0 that have not been previously classified, are classified as Unclassified in our final catalog. The 2 779 remaining unclassified compact radio sources have counterparts at FIR or MIR wavelengths. <cit.> constructed [12] vs. [12]–[22] color-magnitude diagram using mid IR magnitudes from the WISE catalog and showed that in the resulting diagram the regions and PNe occupy a different area than radio stars and other unclassified objects <cit.>. As discussed by <cit.>, regions and PNe fulfill the constraint (hereafter the color-magnitude constraint): [12] Mag. < 1.74× ([12]-[22])+2, while Radio-stars and EgC do not. To illustrate the situation for the GLOSTAR radio sources discussed in this work, we use the flux information from WISE. A total of 2 401 GLOSTAR radio sources have counterparts detected in the 12 and 22 μm WISE bands. These include 178 regions, 175 PNe, 452 EgC, 159 radio-stars, and 1 437 unclassified sources. In Fig. <ref>, we plot the color-magnitude diagram [12] vs. [12]–[22] of these sources, where the source types are distinguished with different markers and colors. The line that follows the color-magnitude constraint is also plotted as a dashed line. Fig. <ref> clearly shows that regions and PNe occupy different areas in this diagram than the radio stars and EgC objects. From the already classified sources, we notice that 360 lie above the color-magnitude constraint line, and these are divided into 178 (50%) regions, 152 (42%) PNe, 7 (2%) EgC, and 23 (6%) radio stars. Thus, we can assume that sources that fall above the color-magnitude constraint are either regions or PNe. There are 153 unclassified sources fulfilling these criteria, and in this work we classify them as region candidates, labeled as HII in the signal catalog; we notice that up to 50% will likely be PNe; however, given the current analysis, it is not possible to definitively characterize them. On the other hand, 604 classified sources do not comply with the color-magnitude constraint and these are divided into 23 (4%) PNe, 445 (74%) EgC, and 136 (22%) radio stars. We notice that EgC sources constitute the majority in this region of the color-magnitude diagram. Following these counts, we classify the 1 284 previously unclassified sources in this area as EgC, noting that about 25% of them may likely be galactic sources; however, it is not possible to differentiate them with the current data. The IR information of the remaining sources is scarcer than for the previous cases, and MIR colors cannot be obtained as fluxes are only reported in a single band. We noticed that 86 of these sources have at least a counterpart in one of the FIR bands. As FIR emission may trace cold dust, radio sources associated with FIR emission might be related to regions, and accordingly we classify them as region candidates (labeled as HII in the final catalog). However, further studies are required to confirm the nature of these sources. The 1 256 remaining sources have counterparts only in the MIR range. These can be either EgC or radio stars, however our previous work <cit.> has shown that most sources with only MIR and near-infrared counterparts are background extragalactic sources. 
Accordingly, these 1256 sources are classified as EgC. A summary of the classified sources per method is given in Table <ref>. Class and classification methods are given in the catalog and Table <ref>. § SUMMARY OF FULL CATALOG The final catalog consists of entries, of which are related to extended structures (for example, SNRs or extended regions) and are sources of uncertain nature that could not be identified as real or spurious sources. Source classification was possible for most of the remaining sources, but 1468 extended sources (Y_ factor>2) were classified as Unclassified as their extended structures require a deeper analysis to classify them, which goes beyond the scope of the present work. In the following, we will discuss the remaining classified sources. This discussion will focus on the spatial distribution, flux densities, sizes, and spectral indices of the sources. Also, we will discuss the identified region candidates. §.§ Spatial distribution The Galactic disk hosts most of its star-forming regions in its inner parts. Basic source properties, namely intensity and angular size, depend on their distance, which is determined by their location in the Galaxy. Figures <ref> and <ref> show the distribution of all detected GLOSTAR sources. The overall distribution in the observed Galactic latitude range is robustly uniform (Fig. <ref>). When we only consider the EgC sources (yellow bars), it can be noticed that their number diminishes towards the Galactic mid-plane (b=0). Moreover, as can be seen from Fig. <ref>, most of the extended sources are found around this latitude line, and it is also where most of the star formation is expected. The extended structures have two effects on source identification: they confuse background sources and, in interferometric images with no zero-spacing information, they also increase the noise levels. Thus, background extragalactic sources that are intrinsically uniformly distributed in the sky show a decreasing number towards the mid-plane. On the contrary, regions (red bars) are more numerous towards low Galactic latitudes; that is, where most of the star formation in the Galaxy is occurring. The source distribution in Galactic longitude, Fig. <ref>, also looks uniform for ℓ<56^∘, with the source counts increasing at larger longitudes. The area with ℓ>56^∘ contains a small number of LSSs, resulting in an improvement of the noise level (see also Table <ref>) and less compromised sources. As a consequence, a higher number of sources are detected at these longitudes. However, the number of regions in this area is also lower, indicating a lower number of star-forming sites. This is in concordance with the dearth of MYSOs that <cit.> noted in the segment of the Perseus arm stretching between ℓ = ∼50^∘ and 90^∘. Finally, in Fig. <ref>, we show the complete spatial distribution of radio sources detected in the course of this work. We have included the sources with LSS and SNRs. This figure shows that the number of sources is lower in the presence of extended sources because of the effects mentioned earlier. §.§ Fluxes and angular sizes The source flux density is one of the most critical parameters in the GLOSTAR source catalog. By comparing our results with the CORNISH survey, we have shown that the flux density parameter from the GLOSTAR survey is accurate within 6% (see Section <ref>). The CORNISH survey, however, had a nominal noise level of ∼0.4 mJy beam^-1, significantly higher than the GLOSTAR D-configuration images. In the top panel of Fig. 
<ref>, we show the source peak flux density distribution for GLOSTAR and CORNISH for the Galactic plane area covered by this work. The better sensitivity of the GLOSTAR survey resulted in a larger number of detected sources, indicating that most sources with a brightness lower than a few mJy beam^-1 are new detections. The ratio S_ int/S_ peak is also known as the Y-factor, a parameter that can be used to infer the size of a source. In the GLOSTAR survey, we have used it to define an unresolved source when the Y_ factor≤1.2, a compact source when 1.2<Y_ factor≤2.0, and an extended source when Y_ factor>2.0. In Fig. <ref>, we plot the Y-factor distribution of the GLOSTAR sources. Similarly, Fig. <ref> shows the distribution of source effective radius. Both results show that our catalog mainly comprises compact and unresolved sources. §.§ Spectral indices Sources that exhibit radio emission produced from ionized thermal gas with temperature > 10^4 K (such as regions and PNe) have spectral indices ranging from –0.1 to 2.0, representing optical thin and thick bremsstrahlung, respectively. On the other hand, non-thermal radio emission is produced in high-energy processes. The most common, resulting in synchrotron radiation, is produced by relativistic electrons spiralling around magnetic field lines and reaches brightness temperatures above 10^6 K. Active galactic nuclei, for example, are common nonthermal radio emitters. Other compact Galactic nonthermal radio sources are pulsars (PSRs) and micro-quasars. Interestingly, also in the star formation context nonthermal radio emission may arise from compact sources, such as strong wind collision regions in massive multiple stellar systems <cit.>, strong shocks of jets of very young massive stars <cit.>, and magnetically active young low-mass stars <cit.>. Typical spectral index values for non-thermal radio sources related to massive stars are about -0.5 <cit.>. However, the spectral index of radio emission from magnetically active stars ranges from -2.0 to +2.0 <cit.>. In this work, we have measured the spectral index of over 5 000 sources. First, by using the upgraded capabilities of the VLA interferometer that offer a total bandwidth of 4 GHz in C-band, we have produced images in smaller frequency bins and determined in-band spectral indices of compact sources detected with S/N ratio >10.0. Second, we have used the flux densities from the THOR survey and computed GLOSTAR-THOR spectral indices of compact sources detected in both surveys. Using the flux densities reported by THOR has two advantages. First, it increases the frequency baseline of the spectral index determination. Second, the total flux densities of both surveys can be used without splitting the data into smaller and less sensitive frequency bins. By using the full-band images of both surveys, we optimize the (u, v)-coverage of both surveys. This results in smaller errors in the spectral index determination. Quantitatively, this is reflected by the fact that we obtain mean error values for the in-band and the GLOSTAR-THOR spectral indices of 0.2 and 0.07, respectively. We also notice the recent findings by <cit.>, whose results indicate that broadband spectral indices are more reliable than in-band spectral indices. For the 2 819 sources in which both spectral indices could be determined, we plot their values in the top panel of Fig. <ref>. In the middle panel of Fig. 
<ref> we show the distribution of the differences between these two spectral index determinations, and in the low panel we plot this difference against the GLOSTAR source effective radius. We found a mean difference of -0.08±0.01 and a standard deviation of 0.35. To check if the small negative difference is correlated with size, we ran a Pearson correlation test between the source effective radius and the difference of spectral indices. We obtain a correlation coefficient r=-0.14, with a significance value of p=6×10^-14. This indicates a weak negative correlation (larger sources have more negative values) and could explain the small negative difference between the two spectral index determinations. While the in-band spectral indices are slightly more negative than the GLOSTAR-THOR spectral index, the independent spectral index determinations are consistent considering the mean values of the errors obtained for both methods. Because the error bars are smaller, however, a preference is given to the GLOSTAR-THOR spectral indices. Figure <ref> shows the GLOSTAR-THOR spectral index distribution of GLOSTAR sources. The distribution shows two peaks, the first at a value ∼-0.6 and the second at a value ∼0.0, pointing to two populations of radio sources. The distribution of EgC sources (yellow histogram) shows only one peak and the mean is ∼-0.6, similar to the canonical value of -0.7 for extragalactic nonthermal radio sources <cit.>. For the 225 regions with measured spectral index (red histogram), the distribution also shows a single peak at α=0.14±0.02, consistent with the values for (almost) optically thin free-free emission (thermal radio emission). We will discuss regions in more detail in the following section. §.§ regions Following the classification criteria discussed in Section <ref>, we have identified 769 regions, of which 359 are new region candidates. Most of them are located close to the mid-plane of the Galactic disk (b≃0.0, see Figure <ref>). Most of the newly identified region candidates are compact, with an effective radius smaller than 20” (see Fig. <ref>). We have estimated the spectral index for 225 region candidates, and their mean spectral index value is +0.14±0.02 (Fig. <ref>). Surprisingly, 55 (24%) of the region candidates have a spectral index <-0.2, which is smaller than -0.1, the minimum spectral index value expected for optically thin free-free radio emission. In the GLOSTAR higher angular resolution images, we have also found such sources <cit.>. In those cases, we have speculated that in regions associated with radio sources with negative spectral indices, the radio sources do not trace radio emission from the ionized gas of the region, which will be resolved out, but rather trace stellar processes producing non-thermal radio emission or a mixture of emission produced by thermal and non-thermal processes. Sensitive radio observations have indeed revealed compact non-thermal radio sources in the vicinity of the regions proper, up to several tens in some cases <cit.>. <cit.> argue that the non-thermal compact radio sources they find around the archetypal ultracompact region W3(OH), several of which are time variable, represent low-mass stars in the stellar cluster that surrounds the MYSO that excites the region. In our VLA D-configuration images, we do not expect that regions related to compact radio sources (Y_ factor≤2.0) have resolved out emission. This excludes the possibility of imaging artifacts. In Fig. 
<ref> we plot the 225 spectral indices measured for region candidates as a function of their signal-to-noise ratio from the GLOSTAR maps. From this plot, it can be seen that most of the sources with negative spectral index also have a low S/N ratio. Most of them also have a low S/N ratio in the THOR survey. The low brightness of these sources could have biased the spectral index determination. However, there are still some sources with a S/N ratio >10 in both GLOSTAR and THOR, that clearly have a negative spectral index. These radio sources require further studies to confirm or refute their classification as regions. §.§ Extragalactic background sources Along with Galactic sources, our observations are expected to detect a large number of extragalactic radio sources. In fact, it has been shown that most of the unclassified sources in our previous catalog can indeed be related to galaxies with radio emission <cit.>. While selections effects could play a role on their distribution, the early studies by <cit.> show that, at 5 GHz, the rough number of expected background sources per square arcminute, with flux levels above S[μ Jy], is described by: N(S[μ Jy])=(0.42±0.05)(S[μ Jy]/30)^-1.18±0.19. Considering a mean noise level of σ=123 μJy, and a 4 σ threshold for our detected sources, the number of expected extragalactic radio sources in our images is 6 400±3 750. This number is in excellent agreement with the 6 312 sources that we have classified as EgC. § SUMMARY AND CONCLUSIONS The GLOSTAR-VLA Galactic plane survey <cit.> is currently the most sensitive mid-radio wavelength survey covering a large fraction of the Galactic plane observed from the northern hemisphere. Its main objective is unveiling signatures of recent massive star formation. However, as many of these sources are expected to be extragalactic background sources, it is necessary to classify the detected sources, a task that has many challenges. In this work, we have presented radio images of a 100 square degree area of the Galactic plane, with an angular resolution of 18”. The mapped region covers the area delimited by the coordinates 2 < ℓ < 28, 36 < ℓ < 60 and |b| < 1. We have used a combination of the software and visual inspection procedures to identify radio sources in the GLOSTAR map. We have identified that are part of very extended sources ( region complexes or SNRs). The radio source catalog presents the results of the source extraction performed with , such as positions, S/N, flux densities, Y-factor, and effective radius. We have also obtained the spectral indices of 5 276 radio sources, which are also listed in the catalog. We have cross-matched the GLOSTAR radio sources with the radio sources reported from other radio surveys; for example, THOR <cit.>, CORNISH <cit.>, and RMS <cit.>. These radio surveys were used to, first, verify the measured source parameters and, second, to classify sources. Source classification was also performed with the information on (sub)millimeter and IR wavelength counterparts. Source classes are also listed in the catalog, in which a large number (6 312) are extragalactic background sources. With the performed multi-wavelength analysis, we identified 769 region candidates. Previous works have reported 410 of these as regions or region candidates, and the remaining 359 we identify as candidates for the first time. 
The spatial distribution of these sources is concentrated around the Galactic mid-plane and their numbers decrease in the outer parts of the Galactic disk (ℓ>56^∘), indicating zones with higher and lower star formation, respectively. Using additional flux density measurements at 1.4 GHz from the THOR survey, we have also determined spectral indices of 225 region candidates. Their mean spectral index is ∼0.1, consistent with thermal free-free radio emission. Interestingly we found several region candidates with negative spectral index, although this could be partly due to their low S/N ratio in GLOSTAR and THOR. However, there is an interesting sample of region candidates with S/N>10 and negative spectral indices that need to be studied further to establish their true nature. Combining the information from large surveys is an excellent way to obtain an unbiased look for tracers of early star formation. The derived properties and classification of several thousands of new and known radio sources are invaluable information and truly show the legacy nature of the GLOSTAR radio survey. This research was partially funded by the ERC Advanced Investigator Grant GLOSTAR (247078). S.A.D. acknowledge the M2FINDERS project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant No 101018682). AYY acknowledges support from the NSFC grants No. 11988101 and No. NSFC 11973013. M.R.R. is a Jansky Fellow of the National Radio Astronomy Observatory. This work uses information from the GLOSTAR databases at <http://glostar.mpifr-bonn.mpg.de> supported by the MPIfR (Max-Planck-Institut für Radioastronomie), Bonn, which is based on observations with the Karl G. Jansky Very Large Array (VLA) of NRAO (The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.) and 100-m telescope of the MPIfR at Effelsberg. It also made use of information from the ATLASGAL database at <http://atlasgal.mpifr-bonn.mpg.de/cgi-bin/ATLASGAL_DATABASE.cgi> supported by the MPIfR, Bonn, as well as information from the CORNISH database at <http://cornish.leeds.ac.uk/public/index.php> which was constructed with support from the Science and Technology Facilities Council of the UK. This work has used data from GLIMPSE survey of the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This publication also makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This paper used the data products from the Hi-GAL survey of the Herschel telescope which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. This document was prepared using the collaborative tool Overleaf available at: <https://www.overleaf.com/>. aa
LLM-Empowered State Representation for Reinforcement Learning

Boyuan Wang*, Yun Qu*, Yuhang Jiang, Jianzhun Shao, Chang Liu, Wenming Yang, Xiangyang Ji (Tsinghua University; *equal contribution). Correspondence to: Xiangyang Ji <xyji@tsinghua.edu.cn>.

§ ABSTRACT Conventional state representations in reinforcement learning often omit critical task-related details, presenting a significant challenge for value networks in establishing accurate mappings from states to task rewards. Traditional methods typically depend on extensive sample learning to enrich state representations with task-specific information, which leads to low sample efficiency and high time costs. Recently, surging knowledgeable large language models (LLM) have provided promising substitutes for prior injection with minimal human intervention. Motivated by this, we propose LLM-Empowered State Representation (LESR), a novel approach that utilizes LLM to autonomously generate task-related state representation codes which help to enhance the continuity of network mappings and facilitate efficient training. Experimental results demonstrate that LESR exhibits high sample efficiency and outperforms state-of-the-art baselines by an average of 29% in accumulated reward in Mujoco tasks and 30% in success rates in Gym-Robotics tasks. Codes of LESR are accessible at <https://github.com/thu-rllab/LESR>. § INTRODUCTION Traditional reinforcement learning (RL) algorithms <cit.> generally require a large number of samples to converge <cit.>. Compounding this challenge, in most cases, the intricate nature of RL tasks significantly hampers sample efficiency <cit.>. For handling complex tasks and enhancing generalization, neural networks are utilized to approximate value functions <cit.>. However, value networks often lack smoothness even when they are trained to converge, leading to instability and low sample efficiency throughout the training process <cit.>. Considering that in deep RL, state vectors serve as the primary input to value networks, sub-optimal state representations can result in limited generalization capabilities and non-smoothness of value network mappings <cit.>. In most RL environments <cit.>, source state representations typically embody environmental information but lack specific task-related details. However, the learning of value networks depends on accurately capturing task dynamics through rewards <cit.>. The absence of task-related representations may impede the establishment of network mappings from states to rewards, affecting network continuity. Previous research leverages diverse transitions, intensifying the issue of time-consuming data collection and training <cit.>. Consequently, a question arises regarding the existence of a more efficient method for identifying task-related state representations. Indeed, incorporating task-specific information into the state representation can be achieved through introducing expert knowledge <cit.>. Recently, significant strides have been accomplished in the field of Large Language Models (LLM) <cit.>. The exceptional performance of LLM in various domains <cit.>, particularly in sequential decision-making tasks, showcases their extensive knowledge and sufficient reasoning abilities <cit.>. This motivates the idea that the prior knowledge embedded in LLM can be exploited to enhance the generation of task-related state representations. To preliminarily validate LLM's capacity in enhancing state representations, an illustrative toy example is provided.
In Figure <ref>, results demonstrated that the state representation generated by LLM can help to enhance the continuity of value networks and expedite the convergence of policy learning. To substantiate this, we employ the Lipschitz constant <cit.> for smoothness assessment. The LLM-generated state representations enhance the Lipschitz continuity of value networks, accounting for the improvement of sample efficiency and performance. In this paper, we propose a novel method named LLM-Empowered State Representation (LESR). We utilize LLM's coding proficiency and interpretive capacity for physical mechanisms to generate task-related state representation function codes. LLM is then employed to formulate an intrinsic reward function based on these generated state representations. A feedback mechanism is devised to iteratively refine both the state representation and intrinsic reward functions. In the proposed algorithm, LLM consultation takes place only at the beginning of each iteration. Throughout both training and testing stages, LLM is entirely omitted, ensuring significant time savings and flexibility. In summary, our main contributions are: * We propose a novel method employing LLM to generate task-related state representations accompanied by intrinsic reward functions for RL. These functions are demonstrated to exhibit robustness when transferred to various underlying RL algorithms. * We have theoretically demonstrated that enhancing Lipschitz continuity improves the convergence of the value networks and empirically validated more task-related state representations can enhance Lipschitz continuity. * LESR is a general framework that accommodates both continuous and discontinuous reward scenarios. Experimental results demonstrate that LESR significantly surpass state-of-the-art baselines by an average improvement of 29% in Mujoco tasks and 30% in Gym-Robotics tasks. We have also experimentally validated LESR's adaptability to novel tasks. § RELATED WORK Incorporating LLM within RL Architecture Since the advent of LLM, researchers have endeavored to harness the extensive common-sense knowledge and efficient reasoning abilities inherent in LLMs within the context of RL environments. Challenges have arisen due to the misalignment between the high-level language outputs of LLMs and the low-level executable actions within the environment. To address this, <cit.> have sought to employ environments where observation and action spaces can be readily translated into natural language <cit.>. Alternative approaches employ language models as the policy network with fine-tuning <cit.>. Meanwhile, other endeavors focus on leveraging LLMs as high-level planners, generating sub-goals for RL agents <cit.>. Nevertheless, these works encounter a common challenge: the tight coupling of LLMs with RL agents, leading to frequent communication between the two even during the testing stage, a process that proves time-consuming and inefficient. State Representation Derived from LLM Some researchers use LLM for state representations that contain more information in partially observable scenarios. <cit.> employs LLMs to provide extra details about road conditions in traffic control areas. Similarly, <cit.> focuses on robot locomotion in unstructured environments, using LLM as translators to extract environmental properties and generate contextual embeddings for training. 
<cit.> tracks key objects and attributes in open-world household environments via LLM, expanding and updating object attributes based on historical trajectory information. These methods mainly addressed the issue of missing state information in partially observable scenarios. In contrast, our method emphasizes exploring correlations among internal state features, recognizing meaningful correlations reflecting underlying physical relationships and generating more task-related representations. Reward Design via LLM For the purpose of effectively bridging the gap between high-level language instructions and low-level robot actions, some researchers employ rewards as the intermediate interface generated by LLM. Works in this domain can be categorized into the following three types: (1) Sparse Reward: <cit.>, aiming to design sparse rewards at the trajectory level. (2) Dense Reward: <cit.>, aiming to design dense rewards for every interactive step of the agent. (3) Intrinsic Reward: <cit.>, aiming to design intrinsic rewards for reducing ineffective exploration and improving sample efficiency. In this paper, we also utilize LLM to generate intrinsic reward function codes. The primary difference between our methodology and prior research lies in that our reward design serves as an auxiliary mechanism for encouraging the agent to better comprehend the state representations generated by LLM. This aids the policy and critic networks in establishing correlations between the state representations and intrinsic rewards. § METHOD §.§ Problem Statement We consider a Markov decision process <cit.> defined by a tuple (𝒮, 𝒜, ℛ, 𝒫, p_0, γ), where 𝒮 denotes the source state space and 𝒜 denotes the action space. Given a specific task, ℛ is the source extrinsic reward function of the environment. 𝒫(s'|s, a) denotes the dynamic transition function, p_0 is the initial state distribution, and γ is the discount factor. The primary objective is to learn an RL policy π(a|s) that maximizes the cumulative reward expectation, which is defined as the value function Q_π(s_t, a_t) = 𝔼_π[ ∑_t=0^∞γ^t r_t |s_t, a_t ]. In order to assess the impact of LESR on network continuity, we introduce the Lipschitz constant <cit.>: Denote the data space as 𝒳⊂ℝ^d and the label space as 𝒴⊂ℝ. Consider a dataset 𝒳_0 ⊂𝒳, and the labels 𝒴_0 = {y_i | y_i=u(x_i), where x_i ∈𝒳_0}⊂𝒴. Here, x_i represents a sequence of i.i.d. random variables on 𝒳 sampled from the probability distribution ρ, and u:𝒳_0 ⊂𝒳→𝒴 is a mapping. The Lipschitz constant of u on 𝒳_0 is given by Lip(u;𝒳_0) = sup_x_1,x_2 ∈𝒳_0‖ u(x_1)-u(x_2)‖_2/‖ x_1 - x_2‖_2. When 𝒳_0 is all of 𝒳, we write Lip(u;𝒳) = Lip(u). A lower Lipschitz constant indicates a smoother mapping u. §.§ LLM-Empowered State Representation In many RL settings <cit.>, source state representations usually contain general environmental information, while often lacking specific details related to current tasks, which are critical to the training of the value networks <cit.>. The absence of task-related representations may hinder network mappings from states to rewards, impacting the continuity of networks. Recognizing this limitation, the identification and incorporation of additional task-related state representations emerge as a pivotal strategy. This strategic augmentation can expedite the establishment of network mappings, subsequently boosting the smoothness of networks and augmenting training efficiency. Due to the extensive knowledge and priors embedded in LLM, utilizing it for generating task-related state representations can be promising.
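Before describing the method, we note that the Lipschitz constant introduced above can be estimated directly from a finite sample; the numpy sketch below (ours, purely illustrative) computes the empirical Lip(u; 𝒳_0) from paired samples, and the same per-dimension quantity is what the feedback mechanism described later evaluates.

```python
import numpy as np

def lipschitz_constant(xs, ys):
    """Empirical Lipschitz constant Lip(u; X_0) of a mapping u, estimated as the
    largest ratio ||u(x1)-u(x2)||_2 / ||x1-x2||_2 over all sample pairs."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    if xs.ndim == 1:
        xs = xs[:, None]              # single input dimension
    ys = ys.reshape(len(xs), -1)      # single or multiple output dimensions
    dx = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)
    dy = np.linalg.norm(ys[:, None, :] - ys[None, :, :], axis=-1)
    mask = dx > 1e-12                 # skip coincident inputs
    ratios = dy[mask] / dx[mask]
    return float(ratios.max()) if ratios.size else 0.0
```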
In this context, we present a direct and efficient method named LLM-Empowered State Representation (LESR). The whole framework is depicted in Figure <ref>. Our methodology hinges on leveraging LLM to facilitate the generation of more task-specific state representations. Herein, we denote LLM as ℳ and a descriptor translating symbolic information into natural language as d. Consequently, the input to LLM ℳ is represented as d(𝒮), which constitutes four parts: (1) Task Description: information about the RL environment and the specific task. (2) State Details: information pertaining to each dimension of the source state. (3) Role Instruction: assignments that require LLM to generate task-related state representation and intrinsic reward codes. (4) Feedback: historical information from previous iterations. For a full comprehensive descriptions of our prompts, refer to Appendix <ref>. Our primary objective is to harness LLM ℳ to formulate a python function ℱ: 𝒮→𝒮^r, where 𝒮^r denotes the LLM-empowered state representation space. ℱ is sampled from ℳ( d(𝒮) ) and d(𝒮) explicitly embeds the task information into LLM. The state representation function ℱ utilizes source state dimensions for calculations, generating task-specific state dimensions. At each timestep t, when the agent get the current state s_t from the environment, the corresponding state representation s^r_t = ℱ(s_t) will be concatenated to the source state as the input for the policy π(a_t | s_t, s^r_t) and value Q(s_t, s^r_t, a_t). Once the state representation function ℱ is obtained, LLM ℳ is subsequently required to provide an intrinsic reward function 𝒢: 𝒮^c→ℝ in python code format based on s_t^c = (s_t, s_t^r)∈𝒮^c, where 𝒮^c = 𝒮×𝒮^r is the joint state space. More precisely, we stipulate in the prompt that LLM is obliged to incorporate the LLM-empowered state representations s_t^r to calculate the intrinsic rewards, and it also retains the option to incorporate the source state s_t for a better intrinsic reward design. We formulate the joint optimization objective as: max_ℱ, 𝒢max_π 𝔼_ℱ, 𝒢, π[ ∑_t=0^∞γ^t ( r + w · r^i) | r^i = 𝒢(s_t, ℱ(s_t)) ]. where ℱ, 𝒢∼ℳ(d(𝒮)), and w is the weight of the intrinsic reward. §.§ Lipschitz Constant for Feedback In practice, to enhance the robustness of state representations, we iteratively query LLM multiple times, incorporating previous training results as feedback. During each training iteration, we sample K state representation and intrinsic reward function codes ℱ_k, 𝒢_k, k=1, …, K from LLM ℳ. Subsequently, we concurrently execute K training processes over N_small training timesteps to evaluate the performance of each function ℱ_k, 𝒢_k. Here, N_small is intentionally set to be smaller than the total timesteps N employed in the final evaluation stage. ∙ Continuous Scenarios For scenarios with continuous extrinsic reward, we maintain a Lipschitz constant array C_k∈ℝ^|𝒮^c| for each of the K training instances. Each element of C_k signifies the Lipschitz constant of a mapping u_i, i=1, …, |𝒮^c|, which maps each dimension of 𝒮^c to the extrinsic rewards. Note: u_i is introduced to signify the Lipschitz constant computed independently for each state dimension concerning the extrinsic reward. This assessment is crucial for guiding the LLM in identifying and eliminating undesired dimensions within the state representation. 
Given a trajectory T={s_t^c, r_t}_t=1^H of length H, the current Lipschitz constant array is calculated as follows: C_k^T = [ Lip(u_i;T_i) ]_i=1^|𝒮^c|, where T_i = {s^c_t[i], r_t}_t=1^H, s^c_t[i] denotes the i-th dimension of the joint state representations s^c, and C_k^T denotes the Lipschitz constant array of the current trajectory. We soft-update C_k over trajectories: C_k = τ C_k + (1 - τ) C_k^T, where τ∈ [0, 1] is the soft-update weight. At the end of each training iteration, the Lipschitz constant array C_k and policy performance ν_k are provided to LLM for CoT <cit.> suggestions, which, along with the training results, serve as feedback for subsequent iterations. The feedback information helps LLM to generate task-related state representations that exhibit a lower Lipschitz constant with respect to the extrinsic rewards, as elaborated in Section <ref> where we discuss the theoretical advantages of a lower Lipschitz constant for network convergence. In the subsequent iterations, LLM leverages all historical feedback to iteratively refine and generate improved state representation and intrinsic reward function codes {ℱ_k}_k=1^K, {𝒢_k}_k=1^K. The whole algorithm is summarized in Algorithm <ref>. ∙ Discontinuous Scenarios Dealing with scenarios with discontinuous extrinsic reward using conventional RL baselines is notably challenging and constitutes a specialized research area <cit.>. Despite this challenge, LESR remains effective in such scenarios. We provide two ways of estimating the Lipschitz constant as feedback in discontinuous extrinsic reward settings. LESR with Discounted Return In Equation <ref>, u_i initially maps each dimension of 𝒮^c to dense extrinsic rewards. In sparse reward settings, we substitute these extrinsic rewards with the discounted episode return ∑_tγ^t r. Thus, u_i now maps each dimension of 𝒮^c to the discounted episode returns, consistent with the algorithmic framework and theoretical scope proposed in LESR. LESR with Spectral Norm Theorems <ref> and <ref> show that reducing the Lipschitz constant of the reward function lowers the upper bound of Lip(V; 𝒮) and improves the convergence of the value functions. Therefore, we can use the spectral norm to estimate Lip(V; 𝒮) <cit.> as feedback to LLM. By calculating the spectral norm of the N weight matrices W_1, …, W_N of the value functions, the Lipschitz constant of the value function is bounded by ∏_i=1^N‖ W_i‖_2, which is then presented to the LLM as feedback. §.§ Theoretical Analysis In this section, we present an analysis of the theoretical implications of the Lipschitz constant on convergence in neural networks, inspired by <cit.>. Consider the dataset and labels 𝒳_0, 𝒴_0 defined in Definition <ref> with N=|𝒳_0| elements. The true mapping from 𝒳 to 𝒴 is denoted as u_0^*: 𝒳→𝒴. Let f:𝒳→𝒳 be a function transforming a source x∈𝒳 into a more task-related f(x). Denote u(x;ψ) as a neural network mapping parameterized by ψ. Consider the empirical loss for 𝒳_0, 𝒴_0, where ℓ : 𝒴×𝒴→ℝ is a loss function satisfying (i) ℓ≥ 0, (ii) ℓ(y_1,y_2) = 0 if and only if y_1 = y_2: min_u: 𝒳_0 →𝒴ℒ(u, 𝒳_0) = 1/N∑_i=1^Nℓ(u(x_i; ψ), y_i). The mapping f:𝒳→𝒳 only swaps the order of x in the dataset 𝒳_0, which means 𝒳_1 = {f(x_i) | f(x_i) ∈𝒳_0, i =1,…,N} and 𝒳_1 = 𝒳_0. While for x∈𝒳_2={x| x∈𝒳, x ∉𝒳_0}, f(x) = x. Under f, the true mapping from f(𝒳) to 𝒴 is denoted as u_1^*: f(𝒳)→𝒴. It can be derived that u_1^* = u_0^* ∘ f^-1. We suppose under f a lower Lipschitz constant is achieved: Lip(u_1^*) ≤ Lip(u_0^*).
Under Assumption <ref>, Given 𝒳_0, 𝒴_0 and 𝒳_1, 𝒴_1={y_i | y_i=u_1^*(x_i), x_i ∈𝒳_1}, u_0∈𝒰_0 is any minimizer of ℒ(u, 𝒳_0) and u_1∈𝒰_1 is any minimizer of ℒ(u, 𝒳_1), where 𝒰_0 and 𝒰_1 denote the solution set of ℒ(u, 𝒳_0) and ℒ(u, 𝒳_1) in Definition <ref> relatively, then on the same condition when (u_0) = (u_1): sup_u_1∈𝒰_1𝔼_x ∼ρ_fu_1^* - u_1_2 ≤sup_u_0∈𝒰_0𝔼_x ∼ρu_0^* - u_0_2, ρ and ρ_f denote the source probability distribution on 𝒳 and probability distribution on f(𝒳), relatively. In Theorem <ref>, it is demonstrated that the mapping f exhibiting a lower Lipschitz constant can attain superior convergence. This observation underscores the significance of identifying task-related state representations characterized by lower Lipschitz constants with respect to the associated rewards. Such analysis to some extent sheds light on why smoother network mappings exhibit improved convergence performance. Proofs of Theorem <ref> can be referred to Appendix <ref>. We delve deeper into the significance of the Lipschitz constant of the reward concerning state representations in RL. We introduce two additional theorems, namely Theorem <ref> and Theorem <ref>, establishing a strong correlation between Lip(r; 𝒮) and Lip(V; 𝒮) and, consequently, the convergence of RL's value functions. Theorem <ref> indicates that reducing the Lipschitz constant of the reward function lowers the upper bound of Lip(V; 𝒮). Theorem <ref> illustrates how decreasing Lip(r; 𝒮) can enhance the convergence of RL algorithms' value functions. These theorems collectively emphasize our focus on minimizing the Lipschitz constant of the reward function to improve RL algorithms' convergence. Detailed proofs are available in Appendix <ref>. § EXPERIMENTS In this section, we will assess LLM-Empowered State Representation (LESR) through experiments on two well-established reinforcement learning (RL) benchmarks: Mujoco <cit.> and Gym-Robotics <cit.>. For more information about the tasks, see Appendix <ref>. The following questions will guide our investigation: •Q1: Can LESR generate task-related state representations characterized by lower Lipschitz constants in relation to extrinsic environmental rewards? (Section <ref>) •Q2: Can LESR achieve higher sample efficiency and outperform RL baselines? Does each component contribute to the final performance? (Section <ref>, <ref>) •Q3: Are functions ℱ_best and 𝒢_best algorithm-agnostic, and transferable directly to other RL algorithms? (Section <ref>) •Q4: Do ℱ_best and 𝒢_best possess semantic-physical significance and exhibit consistency across different runs of LLM? Does LESR exhibit robustness to the variations in hyperparameters? (Section <ref>, <ref>) §.§ Implementation Details LLM and Prompts We employ the gpt-4-1106-preview as LLM to generate the state representation and intrinsic reward functions. There are three well-designed prompt templates and details of prompts are available in Appendix <ref>. Baseline Algorithm We employ the SOTA RL algorithm TD3 <cit.> as the foundational Deep Reinforcement Learning (DRL) algorithm. Building upon the implementation of TD3 as provided in the source paper[https://github.com/sfujim/TD3], we have formulated LESR (Ours). We also employ EUREKA <cit.> for comparison, which incorporates LLM for human-level reward designs. It is noteworthy that, in order to maintain comparative fairness, we have adhered to the hyperparameters of TD3 without introducing any modifications. Both EUREKA and LESR are grounded in the common RL algorithm TD3. 
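To make concrete what the generated functions look like in practice, the toy sketch below mimics a state representation ℱ and an intrinsic reward 𝒢 of the kind produced for a planar locomotion task, together with the reward combination r + w·r^i. It is our own illustration with made-up state indices and coefficients, not an actual output of the prompts used here.

```python
import numpy as np

def state_representation(s):
    """Toy F: S -> S^r for a planar locomotion task whose source state holds
    joint angles s[0:3], joint velocities s[3:6] and the forward velocity s[6]."""
    angles, vels, x_vel = s[0:3], s[3:6], s[6]
    rel_angles = np.diff(angles)           # relative angles of adjacent links
    kinetic = 0.5 * np.sum(vels ** 2)      # proxy for kinetic energy
    return np.concatenate([np.cos(angles), np.sin(angles),
                           rel_angles, [kinetic, x_vel]])

def intrinsic_reward(s, s_r):
    """Toy G: reward forward progress while discouraging excessive motion,
    computed from the LLM-empowered representation s_r."""
    kinetic, x_vel = s_r[-2], s_r[-1]
    return x_vel - 0.05 * kinetic

w = 0.02                                   # intrinsic-reward weight
s = np.random.randn(7)                     # placeholder source state
s_r = state_representation(s)
s_c = np.concatenate([s, s_r])             # joint state s^c = (s, F(s)) fed to networks
extrinsic_r = 0.0                          # placeholder for the environment reward r
r_total = extrinsic_r + w * intrinsic_reward(s, s_r)   # r + w * r^i
```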
For a comprehensive list of hyperparameters, please refer to Appendix <ref>. §.§ LESR Can Enhance Lipschitz Continuity The reward function ℛ commonly serves as an indicator of the specific task within the same environment <cit.>. In RL, value function learning is also predicated on rewards. Specifically, when the discount factor γ equals to 0, the value function directly learns the rewards. Additionally, the state representations form a part of the input to the value function. Task-related state representations might contribute to enhancing the Lipschitz continuity of the value function, thereby expediting the learning process. Therefore, to validate whether LESR can help enhance the Lipschitz continuity between the generated state representations and the value function, we execute the final policy of LESR in the Mujoco environment for 20 episodes of length H=1000 and establish two datasets: (a) T_1 = {s_t, r_t}_t=1^H and (b) T_2 = {ℱ_best(s_t), r_t}_t=1^H. For states within each dataset, we employ t-SNE <cit.> to visualize the states on the 2D graph with coloring the data with corresponding rewards and calculate the Lipschitz constant between states and rewards. As shown in Figure <ref>, it is illustrated that in T_1 the Lipschitz constant of the mapping from state representations generated by LESR to the extrinsic environment rewards (ℛ:ℱ(s)→ r, (ℛ;T_2) = 168.1) is much lower than that of the source(ℛ:s→ r, (ℛ;T_1) = 560.2). This indicates that the task-related state representations generated by LESR can enhance the Lipschitz continuity. §.§ LESR Can Achieve High Sample Efficiency Performance Comparison We validate LESR and the results are presented in Table <ref>. In Mujoco environments, LESR outperforms the state-of-art baselines in 4 out of 5 tasks. In Gym-Robotics environments, it outperforms the baselines in 5 out of 6 tasks. Particularly noteworthy are the results on challenging antmaze tasks, where TD3 fails to train, resulting in all final success rates of TD3 remaining at zero. Conversely, the performance exhibited by LESR is marked by excellence, highlighting its proficiency in overcoming the challenges posed by these intricate tasks. LESR yields an average performance improvement of 29% over the 5 tasks in Mujoco and an average success improvement of 30% over the 7 tasks in Gym-Robotics, thus substantiating the efficacy of the employed methodology. More information of training can be referenced in Appendix <ref>. Sample Efficiency For the purpose of assessing the sample efficiency of LESR, we have presented the performance for a limited scope of 300k training steps, as depicted in Table <ref>. The results demonstrate that LESR exhibits superior performance within significantly fewer training steps, excelling comprehensively across all tasks in comparison to the SOTA baseline, thereby manifesting heightened sample efficiency. This is further supported by the information of experimental details presented in Appendix <ref>. §.§ Ablation Study Experimental Setting In this section, we conduct ablative analyses on distinct components of LESR to evaluate their respective contributions. Three ablation types are considered in total. Ours w/o IR: This entails the removal of the intrinsic reward component, with training relying solely on state representation. Ours w/o SR: Here, we exclude the state representation part. 
It is crucial to note that despite this ablation, the state representation function is still necessary due to the intrinsic reward calculation process r^i = 𝒢(s, ℱ(s)) outlined in Equation (<ref>). However, in this context, only the source state s is provided as input to the policy, instead of the concatenated state s^c. Ours w/o FB: In this instance, the `iteration_count' specified in Appendix <ref> is set to 1 and the `sample_count' is increased to 10, eliminating the feedback component. Results and Analysis The ablation results are presented in Table <ref>. It is elucidated that regardless of the component subjected to ablation, a substantial decline in final performance ensues, underscoring the indispensability of all components for the final efficacy of our method. Crucially, our findings demonstrate that when only the intrinsic rewards part is ablated, namely Ours w/o IR, the performance results exhibit minimal influence, particularly in Mujoco environment tasks. This observation unveils the pivotal role of state representation component in our method, highlighting its significant contribution to the primary performance enhancement and further substantiating the assertions made in Section <ref>. Furthermore, since LESR requires that the input of networks be the concatenation of the source state and the generated state representations (i.e., s_t^c=(s_t, s_t^r)), we have conducted experiments to substantiate the indispensability of the source state s_t. As depicted in Figure <ref>, it is evident that the source state and the generated state representations work synergistically, both playing pivotal roles in achieving optimal performance. Besides, in Appendix <ref>, we also validate the role of the Lipschitz constant of LESR through more ablation experiments. Furthermore, we showcase LESR's robustness through experiments solely utilizing intrinsic reward functions, affirming its reliability. Additionally in Appendix <ref> experiments on the novel tasks 'Walker Jump' and 'Walker Split Legs' underscore LESR's adaptability to new scenarios. §.§ Directly Transfer to Other Algorithms As there is no algorithm-specific information provided in the iteration prompts, we hypothesize that the state representation and intrinsic reward functions are algorithm-agnostic. This suggests that they can be directly transferred and integrated with other RL algorithms without the iteration process in Algorithm <ref>. To substantiate this hypothesis, we retain the best state representation and intrinsic reward function ℱ_best, 𝒢_best and combine them with two other widely employed RL algorithms PPO <cit.> and SAC <cit.> for validation. The results in Table <ref> highlight that the state representations and their associated intrinsic reward functions, acquired through the training of TD3, exhibit the potential to be integrated with alternative algorithms, still resulting in improved outcomes. This validates our initial hypothesis. Furthermore, these results further emphasize the efficacy and adaptability of our approach. Consequently, by simply employing a fundamental algorithm to explore state representation and intrinsic reward functions for a given task, it becomes possible to significantly diminish the training complexity associated with that task for other algorithms. 
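Because ℱ_best and 𝒢_best only modify observations and rewards, such a transfer can be implemented as a thin environment wrapper. The Gymnasium-style sketch below is our own illustration (the function handles, the weight, and the observation-space handling are assumptions); once the environment is wrapped, an off-the-shelf PPO or SAC implementation can be trained without further changes.

```python
import numpy as np
import gymnasium as gym

class LESRWrapper(gym.Wrapper):
    """Concatenate F_best(s) to the observation and add w * G_best(s, F_best(s))
    to the extrinsic reward, so any RL algorithm sees s^c and r + w * r^i."""

    def __init__(self, env, f_best, g_best, w=0.02):
        super().__init__(env)
        self.f_best, self.g_best, self.w = f_best, g_best, w
        # Enlarge the observation space to match the concatenated state s^c.
        rep_dim = len(np.asarray(f_best(env.observation_space.sample())))
        low = np.concatenate([env.observation_space.low, np.full(rep_dim, -np.inf)])
        high = np.concatenate([env.observation_space.high, np.full(rep_dim, np.inf)])
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)

    def _augment(self, obs):
        return np.concatenate([obs, np.asarray(self.f_best(obs))])

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        return self._augment(obs), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        s_r = np.asarray(self.f_best(obs))
        reward = reward + self.w * self.g_best(obs, s_r)
        return np.concatenate([obs, s_r]), reward, terminated, truncated, info
```

Wrapping, e.g., a Mujoco environment in this way leaves the downstream algorithm untouched, which is precisely what makes the generated functions algorithm-agnostic.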
§.§ Semantic Analysis and Consistency Verification To elucidate the precise function of the state representations and comprehend the rationale behind their superior performance, we meticulously analyze the state representation functions produced by LLM. We use the task from Mujoco as an example for semantic analysis. Please refer to Appendix <ref> for details about the state representation functions generated by LLM. Semantic Analysis In the case of the task, the goal is to move as fast as possible towards the right by applying torque on the rotors and using the fluids friction, without taking any drastic action. The state representations in this task can be categorized into five groups, whose names are derived from the comments provided by LLM: cosine or sine of the angles (c1), relative angles between adjacent links (c2), kinetic energy (c3), distance moved (c4), and sum of torques applied (c5). It is apparent that these representations exhibit a strong correlation with the task's objective: c1 and c2 contribute to enhanced understanding of posture, c3 and c5 signify the necessity to avoid excessive actions, while c4 corresponds directly to the objective of advancing towards the target. Consistency of LLM Through semantic analysis, it is demonstrable that LLM is capable of generating significantly task-related state representations of a physically meaningful nature, stemming from the source state. Then, how consistent are the answers from LLM? We carried out experiments with four random seeds for the task, and took out the final state representation functions in each experiment for statistics. As demonstrated in Table <ref>, the functions generated by LLM exhibit a pronounced level of consistency across diverse experiments, e.g. c_1, c_2 and c_3 are included in all experiments. This reaffirms the importance and universality of the extended state while ensuring the robustness and stability of our methodology. §.§ Robustness To validate the robustness and stability of varying hyperparameters, we conducted experiments in the environment. We systematically altered the values of two key hyperparameters: the weight of the intrinsic reward w in Equation (<ref>) and the sample count K. For each experiment, we modified one hyperparameter while keeping the others. The outcomes of these experiments are illustrated in Figure <ref>. The results indicate that an increase in the sample count K can lead to a performance improvement, for instance, from K = 1 to K = 3. However, elevating the value of K from 3 to 9 does not yield further performance enhancement. Regarding the variation in the weight of the intrinsic reward w, the findings illustrate that the final performance is not significantly affected by changes in w and remains stable. In summary, the final performance across all hyperparameter tuning experiments remains consistently within a stable range, significantly surpassing the baseline. This observation underscores the stability of our approach. § CONCLUSION In this paper, we introduce LESR, an algorithm leveraging the coding proficiency and interpretive capacity for physical mechanism of LLM to generate state representation and intrinsic reward function codes for reinforcement learning. Demonstrating its efficacy on benchmarks Mujoco and Gym-Robotics, we illustrate LLM's capability to produce task-specific state representations alongside meaningful intrinsic reward functions. 
These representations enhance Lipschitz continuity in networks, resulting in superior efficiency and outperforming SOTA baselines. In-depth ablations and additional experiments show the consistency and robustness of LESR. We believe that LESR can effectively contribute to various real-world interaction tasks. However, our work still suffers limitations. A primary constraint lies in our attempt to derive task-related representations solely from the source state features using LLM, without incorporating external information. This approach may restrict the information available to the network in partially observable environments. Besides, the quality of state representations generated by LLM is constrained by its capabilities, and there is no absolute guarantee that it can produce more task-related representations. This limitation is anticipated to be mitigated as LLM evolves. In the future, we are interested in exploring the integration of additional information to establish a more comprehensive framework for state representations. We anticipate that our work may serve as an inspiration for further exploration in this promising area among researchers. § IMPACT STATEMENT This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. § ACKNOWLEDGMENT This work was supported by the National Key R&D Program of China under Grant 2018AAA0102801. icml2024 § THEORETICAL ANALYSIS We firstly define the identity projection and closet point projection map :𝒳→𝒳, σ_𝒳_k:𝒳→𝒳_k = {x_1,…,x_n} that satisfies ∀ x∈𝒳, (x) = x ∀ x∈𝒳, x-σ_𝒳_k_2 = min_1≤ i≤ n{x-x_i_2 } Now we investigate the convergence under the mapping f in Theorem <ref>: If u_1∈𝒰_1 is any minimizer of ℒ(u, 𝒳_1), where 𝒰_1 denotes the solution set of ℒ(u, 𝒳_1) in Definition <ref>, then for any t>0, there exists C ≥ 1 and 0 < c < 1 satisfies the following with probability at least 1-Ct^-1N^-(ct-1): 𝔼_x ∼ρ_fu_1^* - u_1_2 ≤ C[ (u_1^*) + (u_1) ] ( tlog N/N)^1/d Since u_1∈𝒰_1 is a minimizer of ℒ(u, 𝒳_1), we must have u_1(x_i)=u_1^*(x_i) for all 1 ≤ i ≤ N. Then for any x∈ X we have u_1^*(x) - u_1(x)_2 =u_1^*(x) - u_1^*(σ_𝒳_1(x)) + u_1^*(σ_𝒳_1(x)) - u_1(σ_𝒳_1(x))_=0 + u_1(σ_𝒳_1(x)) - u_1(x)_2 ≤u_1^*(x) - u_1^*(σ_𝒳_1(x))_2 + u_1(σ_𝒳_1(x)) - u_1(x)_2 ≤[ (u_1^*) + (u_1) ] x - σ_𝒳_1(x)_2 Now we provide a bound between the identity projection and closet point projection: For any t>0, the following holds with probability at least 1-Ct^-1N^-(ct-1): 𝔼_x ∼ρ_f x - σ_𝒳_1(x)_2 ≤ C(tlog N/N)^1/d The proof is completed by combining Lemma <ref> into Eq <ref>. Next, we can prove that the generalization loss converges based on Theorem <ref>: Assume that for some q≥1 the loss ℓ in Definition <ref> satisfies ℓ(y_i,y_k) ≤ C y_i-y_k^q_2 for all y_i,y_k∈𝒴. Then under Theorem <ref>, the following bound of the loss ℒ[u_1, 𝒳] in Definition <ref> holds with probability at least 1-Ct^-1N^-(ct-1): ℒ[u_1, 𝒳] ≤ C[ (u_1^*) + (u_1) ]^q ( tlog N/N)^q/d We can bound the loss as: ℒ[u_1, 𝒳] = ∫_x∈𝒳ρ_f(x)ℓ(u_1^*(x), u_1(x)) dx≤ C 𝔼_x ∼ρ_fu_1^* - u_1^q_2 The proof is completed by combining Theorem <ref> into Eq <ref>. 
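As a side note, the (t log N/N)^{1/d} rate appearing in the bounds above can be visualised numerically. The following minimal sketch assumes points sampled uniformly from [0,1]^d — an illustrative special case, not the general setting of the assumptions — and compares the empirical nearest-neighbour (closest-point projection) distance with the (log N/N)^{1/d} rate:
[language=Python]
import numpy as np

rng = np.random.default_rng(0)
d = 3                                    # ambient dimension (illustrative choice)
test = rng.random((200, d))              # draws standing in for x ~ rho

for N in (100, 1_000, 10_000):
    data = rng.random((N, d))            # dataset X_1 = {x_1, ..., x_N}
    # closest-point projection sigma_{X_1}: distance from each test x to its nearest x_i
    gaps = np.linalg.norm(test[:, None, :] - data[None, :, :], axis=-1).min(axis=1)
    rate = (np.log(N) / N) ** (1.0 / d)  # the (log N / N)^{1/d} rate in the bound
    print(f"N={N:6d}  E||x - sigma(x)|| ~ {gaps.mean():.4f}  rate ~ {rate:.4f}")
The ratio of the two printed columns stays roughly constant as N grows, which is the behaviour predicted by the lemma up to the constant C.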
Now we turn to the proof of Theorem <ref>, we start with the following lemma: Under Assumption <ref>, ρ and ρ_f denote the source probability distribution on 𝒳 and probability distribution on f(𝒳): 𝔼_x ∼ρ_f x - σ_𝒳_1(x)_2 = 𝔼_x ∼ρ x - σ_𝒳_0(x)_2 Let 𝒳_2 = {x | x ∈𝒳, x ∉𝒳_0}, from Definition <ref>, when x ∈𝒳_0, σ_𝒳_0(x) = x, then we have: ∫_x ∈𝒳ρ(x) x - σ_𝒳_0(x)_2 dx = ∫_x ∈𝒳_2ρ(x) x - σ_𝒳_0(x)_2 dx + ∫_x ∈𝒳_0ρ(x) x - σ_𝒳_0(x)_2 dx_ = 0 = ∫_x ∈𝒳_2ρ(x) x - σ_𝒳_0(x)_2 dx Consider the mapping f:𝒳→𝒳 only swaps the order of x in the dataset 𝒳_0, which means: f(x) = x' | x, x' ∈𝒳_0 x | x∈𝒳_2 𝒳_1 = {f(x_i) | f(x_i) ∈𝒳_0, i=1,…,N}𝒳_0 = 𝒳_1 Therefore we get ∀ x∈𝒳_2, ρ_f(x) = ρ(x). Hence: 𝔼_x ∼ρ x - σ_𝒳_0(x)_2 = ∫_x ∈𝒳_2ρ(x) x - σ_𝒳_0(x)_2 dx = ∫_x ∈𝒳_2ρ_f(x) x - σ_𝒳_0(x)_2 dx = 𝔼_x ∼ρ_f x - σ_𝒳_1(x)_2 Since u_0∈𝒰_0 is a minimizer of ℒ(u, 𝒳_0), we must have u_0(x_i)=u_0^*(x_i) for all 1 ≤ i ≤ n. Then for any x∈ X we have u_0^*(x) - u_0(x)_2 =u_0^*(x) - u_0^*(σ_𝒳_0(x)) + u_0^*(σ_𝒳_0(x)) - u_0(σ_𝒳_0(x)) + u_0(σ_𝒳_0(x)) - u_0(x)_2 ≤u_0^*(x) - u_0^*(σ_𝒳_0(x))_2 + u_0(σ_𝒳_0(x)) - u_0(x)_2 ≤[ (u_0^*) + (u_0) ] x - σ_𝒳_0(x)_2 Therefore, combined with Theorem <ref>, we have: sup_u_1∈𝒰_1𝔼_x ∼ρ_fu_1^* - u_1_2 = [ (u_1^*) + (u_1) ] 𝔼_x ∼ρ x - σ_𝒳_1(x)_2 ≤[ (u_0^*) + (u_0) ] 𝔼_x ∼ρ_f x - σ_𝒳_1(x)_2 = sup_u_0∈𝒰_0𝔼_x ∼ρu_0^* - u_0_2 () § WHY LIPSCHITZ CONSTANT IS CRUCIAL FOR RL In this section we main focus on the relationship between the Lipschitz constant of the reward function and the continuity of the value function in RL. Firstly we make some assumptions. In reinforcement learning, given a policy π, γ denotes the discounted factor, r (r: s→ℝ) denotes the reward function, H denotes the length of trajectory, the definition of the value function V(s) is: V^π(s) = 𝔼[ ∑_t = 0^Hγ^tr |s_0 = s, a_t ∼π(s_t) ]. Similar to <cit.>, we make the deterministic assumption of the environment and RL policy. Besides, the policy of RL algorithm TD3 utilized in our method is also deterministic. The environment transition and policy are deterministic. We also make the assumptions of the Lipschitz constant of reward function and environment dynamic transition which is similar to previous work <cit.>. There exists constants K_1, K_2 such that the lipschitz constant of reward function r (r: s→ℝ) is K_1, where s∈𝒮 is the state. In other words, (r) = K_1, and 𝒫(s, a):𝒮×𝒜→𝒮 denotes the environment dynamic transition function, K_2 that satisfies: (𝒫) = K_2. ∙ Relationship between (r; 𝒮) and (V; 𝒮) Drawing upon the aforementioned definitions and assumptions, let us commence our analysis: Under Assumption <ref>, <ref>, given a RL policy π, the value function of π, namely V^π, satisfies: ∀ s_1, s_2 ∈𝒮, V^π(s_1) - V^π(s_2)≤[1 - (γ K_2)^H]K_1/1 - γ K_2 s_1 - s_2 . Because the policy π and environment dynamic transition function 𝒫 are deterministic, for ∀ s_1, s_2 ∈𝒮, two trajectories τ_1, τ_2 can be generated: τ_1 = { s_1,i | t = 0,…,H s_1,0 = s_1, s_1, t+1 = 𝒫(s_1, t, π(s_1, t)) }, τ_2 = { s_2,i | t = 0,…,H s_2,0 = s_2, s_2, t+1 = 𝒫(s_2, t, π(s_2, t)) }. Under Definition <ref>: V^π(s_1) - V^π(s_2) = ∑_t = 0^Hγ^tr(s_1, t) - ∑_t = 0^Hγ^tr(s_2, t) = ∑_t = 0^Hγ^t[ r(s_1, t) - r(s_2, t) ] ≤∑_t = 0^Hγ^t [ r(s_1, t) - r(s_2, t) ] ≤∑_t = 0^Hγ^t K_1 s_1, t - s_2, t. The state dynamic distances s_1, t - s_2, t can be estimated as follows: s_1, t - s_2, t≤ K_2 s_1, t - 1 - s_2, t - 1≤ K_2^2 s_1, t - 2 - s_2, t - 2…≤ K_2^t s_1 - s_2 . 
Combined Equation <ref> with <ref>: V^π(s_1) - V^π(s_2) ≤∑_t = 0^H(γ K_2)^t K_1 s_1 - s_2 = [1 - (γ K_2)^H]K_1/1 - γ K_2 s_1 - s_2 . Proof of Theorem <ref> is completed. A direct corollary from Theorem <ref> is: (V^π; 𝒮) ≤[1 - (γ K_2)^H]K_1/1 - γ K_2. Corollary <ref> demonstrates that reducing the Lipschitz constant K_1 of the reward function directly decreases the upper bound of the Lipschitz constant of value functions in RL. This confirms that our approach effectively improves the continuity of the value functions. ∙ How does (V; 𝒮) work for RL algorithms? We provide additional definitions to aid in understanding the subsequent theorems. Inspired by <cit.>: Denote environment dynamic transition function 𝒫 defined in  <ref>. For a given value function V, the Bellman operator as 𝒯, policy transition matrix as P^π: 𝒮×𝒮→ℝ, the greedy policy π_g of V, and the optimal policy π^* and optimal value funtion V^* are defined as follows: 𝒯V(s) = r(s) + max_a 𝒫(s, a) V(𝒫(s, a)), 𝒯^π V = r + γ P^π V, P^π(s_1, s_2) = 0, s_2 ≠𝒫(s_1, π(s_1)) 1, s_2 = 𝒫(s_1, π(s_1)) , ∀ s ∈𝒮, π_g = max_a [ r(s) + 𝒫(s, a) V(𝒫(s, a)) ], ∀ s ∈𝒮, V^π^*(s) = V^*(s) = max_π V^π(s). Denote π_1, π_2, …, π_m as any m policies arranged in any order, ∀ m ≥ 1, u and v are two state distributions, define C(u) ∈ℝ^+ ∪+∞, c(m) ∈ℝ^+ ∪+∞ and c(0) = 1: c(m) = max_π_1, π_2, …π_m, s∈𝒮( vP^π_1P^π_2… P^π_m)(s)/u(s), C_1(v, u) = (1 - γ) ∑_m≥ 0γ^m c(m). c(m), as defined in Definition <ref>, quantifies the discrepancy between transitions of m policies from an initial state distribution v over m time steps and a target distribution u, while C_1(v, u) quantifies the discounted discrepancy. Denote π_g as the greedy policy of a given value function V, V is the value function of some policy π_0. u and v are two state distributions. The norm V_q, u is the q-norm of V weighted by u which is defined as V_q, u = [∑_s u(s)(V(s))^q]^1/q. Suppose ∀ s_1, s_2 ∈𝒮, there exists a constant D_1 that satisfies s_1 - s_2 ≤ D_1 and under assumption <ref>, <ref>: ||V^* - V^π_g||_q, v≤2γ K_1 D_1 (1 - (γ K_2)^H)/(1 - γ)(1 - γ K_2)[C_1(v, u)]^1/q. We first present two lemmas that aid in the subsequent proof. Denote matrix I as the identity matrix(here V_1≼ V_2 means ∀ s ∈𝒮, V_1(s) ≤ V_2(s)): V^* - V^π_g≼[ (I-γ P^π^*)^-1 + (I-γ P^π_g)^-1] |𝒯V - V|. It can be derived that 𝒯^π_g V = 𝒯V, 𝒯^π_g V^π_g = V^π_g: V^* - V^π_g = 𝒯^π^*V^* - 𝒯^π^*V + 𝒯^π^*V - 𝒯^π_gV_≼0 + 𝒯^π_gV - 𝒯^π_gV^π_g ≼𝒯^π^*V^* - 𝒯^π^*V + 𝒯^π_gV - 𝒯^π_gV^π_g = γ P^π^*(V^* - V^π_g + V^π_g - V) + γ P^π_g(V - V^π_g). ⇒ (I - γ P^π^*)(V^* - V^π_g) ≼γ (P^π^* - P^π_g)(V^π_g - V). ⇒ V^* - V^π_g≼ (I - γ P^π^*)^-1γ (P^π^* - P^π_g)(V^π_g - V). (I - γ P^π_g)(V^π_g - V) = V^π_g - V - γ P^π_gV^π_g + γ P^π_gV = r + γ P^π_gV - ( r + γ P^π_gV^π_g) + V^π_g - V = 𝒯^π_gV - 𝒯^π_gV^π_g + V^π_g - V = 𝒯V - V. Combine Inequation <ref> and Equation <ref>: V^* - V^π_g ≼ (I - γ P^π^*)^-1γ (P^π^* - P^π_g)(V^π_g - V) = (I - γ P^π^*)^-1γ (P^π^* - P^π_g)(I - γ P^π_g)^-1(𝒯V - V) = (I - γ P^π^*)^-1[ (I - γ P^π_g) - (I - γ P^π^*) ](I - γ P^π_g)^-1(𝒯V - V) = [ (I - γ P^π^*)^-1 - (I - γ P^π_g)^-1](𝒯V - V) ≼[ (I - γ P^π^*)^-1 + (I - γ P^π_g )^-1]|𝒯V - V|. Proof of Lemma <ref> is completed. For a given policy π: (I - γ P^π)^-1 = ∑_k=0^∞( γ P^π)^k. (I - γ P^π)∑_k=0^∞( γ P^π)^k = ∑_k=0^∞[ ( γ P^π)^k - ( γ P^π)^k+1] = I - lim_k→∞( γ P^π)^k+1 = I. Proof of Lemma <ref> is completed. Now we turn to the proof of Theorem <ref>, we rewrite Lemma <ref> as follows: V^* - V^π_g ≼2/1 - γ A |𝒯V - V| A = 1 - γ/2[ (I-γ P^π^*)^-1 + (I-γ P^π_g)^-1]. 
Under the q-norm weighted by v: V^* - V^π_g_q, v^q ≤ [2/1 - γ]^q ∑_s∈𝒮 v(s) [ A(s) |𝒯V(s) - V(s)| ]^q ≤ [2/1 - γ]^q ∑_s∈𝒮 v(s) A(s) |𝒯V(s) - V(s)|^q. Equation <ref> stems from Jensen's inequality, exploiting the convexity of the function f(x) = x^q for q ≥ 1. Furthermore, when combined with Lemma <ref> and Equation <ref>, each row of matrix A attains a sum of 1 - γ/2[1/1 - γ + 1/1 - γ] = 1, meeting the prerequisites of Jensen's inequality. Next we focus on the v(s)A(s) in Equation <ref>: vA = 1 - γ/2 v [ (I-γ P^π^*)^-1 + (I-γ P^π_g)^-1] = 1 - γ/2∑_m=0^∞γ^m ( vP^π^* + vP^π_g) ≼1 - γ/2( ∑_m≥ 0γ^m c(m)u + ∑_m≥ 0γ^m c(m)u ) = C_1(v, u)u. Now we return to Equation <ref>, combined with Equation <ref>: V^* - V^π_g_q, v^q ≤ [2/1 - γ]^q C_1(v, u) ∑_s∈𝒮 u(s) |𝒯V(s) - V(s)|^q. In the description of Theorem <ref>, Because π_g is denoted as the greedy policy of a given value function V, V is the value function of some policy π_0. Therefore under assumption <ref>: 𝒯V(s) - V(s) = r(s) + γ V(s_1) - r(s) - γ V(s_2) = γ (V(s_1) - V(s_2)), s_1 = 𝒫(s, π_g(s)), s_2 = 𝒫(s, π_0(s)). Continuing Equation <ref>: V^* - V^π_g_q, v^q ≤ [2/1 - γ]^q C_1(v, u) ∑_s∈𝒮 u(s) |𝒯V(s) - V(s)|^q = [2/1 - γ]^q C_1(v, u) ∑_s∈𝒮 u(s) |γ (V(s_1) - V(s_2)|^q ≤ [2/1 - γ]^q C_1(v, u) ∑_s∈𝒮 u(s) [γ(V; 𝒮)s_1 - s_2]^q ≤[ 2 γ/1 - γ(1 - (γ K_2)^H)K_1 D_1/1 - γ K_2]^q C_1(v, u). Proof of Theorem <ref> is completed. A direct corollary from Theorem <ref> is: Under the setting of Theorem <ref>: ||V^* - V^π_g||_∞≤2γ K_1 D_1 (1 - (γ K_2)^H)/(1 - γ)(1 - γ K_2). § ALL OF OUR PROMPTS There are three prompt templates in total. The initial prompt for the first iteration. This prompt is designed to use at the commencement of the first iteration, which is the letter `p' in Algorithm <ref>. LLM is required to output the state representation and intrinsic reward functions in python code format. Notably, the `task_description' and `detail_content of each dimensions' in the prompt are derived from the official document of Mujoco[https://www.gymlibrary.dev/environments/mujoco/] and Gym-Robotics[https://robotics.farama.org/]. Prompt for Chain-of-thought Feedback Analysis. This prompt is formulated to facilitate the examination by LLM of the outcomes from all training experiments during each iteration in a chain-of-thought process<cit.>. This prompt corresponds to the variable p_feedback in Algorithm <ref>. LLM is expected to provide suggestions about how to enhance the performance of the state representation function codes. It is noteworthy that the `iteration_results' referred to in the prompt encompasses both policy performance and correlation coefficients, as elaborated in Section <ref>. Subsequent Prompt for Later Iterations. This prompt is similar to `The initial prompt for the first iteration'. However, the difference is that it contains the information of history iterations, as well as LLM's suggestions about how to enhance the performance of the state representation and intrinsic reward functions. Here are the prompt templates: [frametitle=Initial Prompt for the First Iteration] Revise the state representation for a reinforcement learning agent. ========================================================= The agent’s task description is: {task_description} ========================================================= The current state is represented by a {total_dim}-dimensional Python NumPy array, denoted as `s`. 
Details of each dimension in the state `s` are as follows: {detail_content} You should design a task-related state representation based on the source {total_dim} dim to better for reinforcement training, using the detailed information mentioned above to do some caculations, and feel free to do complex caculations, and then concat them to the source state. Besides, we want you to design an intrinsic reward function based on the revise_state python function. That is to say, we will: 1. use your revise_state python function to get an updated state: updated_s = revise_state(s) 2. use your intrinsic reward function to get an intrinsic reward for the task: r = intrinsic_reward(updated_s) 3. to better design the intrinsic_reward, we recommond you use some source dim in the updated_s, which is between updated_s[0] and updated_s[{total_dim - 1}] 4. however, you must use the extra dim in your given revise_state python function, which is between updated_s[{total_dim}] and the end of updated_s Your task is to create two Python functions, named `revise_state`, which takes the current state `s` as input and returns an updated state representation, and named `intrinsic_reward`, which takes the updated state `updated_s` as input and returns an intrinsic reward. The functions should be executable and ready for integration into a reinforcement learning environment. The goal is to better for reinforcement training. Lets think step by step. Below is an illustrative example of the expected output: “`python import numpy as np def revise_state(s): # Your state revision implementation goes here return updated_s def intrinsic_reward(updated_s): # Your intrinsic reward code implementation goes here return intrinsic_reward “` [frametitle=Prompt for Chain-of-thought Feedback Analysis] We have successfully trained Reinforcement Learning policy using {args.sample_count} different state revision codes and intrinsic reward function codes sampled by you, and each pair of code is associated with the training of a policy relatively. Throughout every state revision code's training process, we monitored: 1. The final policy performance(accumulated reward). 2. Most importantly, every state revise dim's Lipschitz constant with the reward. That is to say, you can see which state revise dim is more related to the reward and which dim can contribute to enhancing the continuity of the reward function mapping. Here are the results: {iteration_results(performance and Lipschitz constants)} You should analyze the results mentioned above and give suggestions about how to imporve the performace of the "state revision code". Here are some tips for how to analyze the results: (a) if you find a state revision code's performance is very low, then you should analyze to figure out why it fail (b) if you find some dims' Lipschitz constant very large, you should analyze to figure out what makes it fail (c) you should also analyze how to imporve the performace of the "state revision code" and "intrinsic reward code" later Lets think step by step. Your solution should aim to improve the overall performance of the RL policy. [frametitle=Subsequent Prompt for Later Iterations] Revise the state representation for a reinforcement learning agent. ========================================================= The agent’s task description is: {task_description} ========================================================= The current state is represented by a {total_dim}-dimensional Python NumPy array, denoted as `s`. 
Details of each dimension in the state `s` are as follows: {detail_content} You should design a task-related state representation based on the source {total_dim} dim to better for reinforcement training, using the detailed information mentioned above to do some caculations, and feel free to do complex caculations, and then concat them to the source state. For this problem, we have some history experience for you, here are some state revision codes we have tried in the former iterations: {former_histoy} Based on the former suggestions. We are seeking an improved state revision code and an improved intrinsic reward code that can enhance the model's performance on the task. The state revised code should incorporate calculations, and the results should be concatenated to the original state. Besides, We are seeking an improved intrinsic reward code. That is to say, we will: 1. use your revise_state python function to get an updated state: updated_s = revise_state(s) 2. use your intrinsic reward function to get an intrinsic reward for the task: r = intrinsic_reward(updated_s) 3. to better design the intrinsic_reward, we recommond you use some source dim in the updated_s, which is between updated_s[0] and updated_s[{total_dim - 1}] 4. however, you must use the extra dim in your given revise_state python function, which is between updated_s[{total_dim}] and the end of updated_s Your task is to create two Python functions, named `revise_state`, which takes the current state `s` as input and returns an updated state representation, and named `intrinsic_reward`, which takes the updated state `updated_s` as input and returns an intrinsic reward. The functions should be executable and ready for integration into a reinforcement learning environment. The goal is to better for reinforcement training. Lets think step by step. Below is an illustrative example of the expected output: “`python import numpy as np def revise_state(s): # Your state revision implementation goes here return updated_s def intrinsic_reward(updated_s): # Your intrinsic reward code implementation goes here return intrinsic_reward “` § ADDITIONAL ENVIRONMENT INFORMATION §.§ Mujoco Environment Tasks HalfCheetah This environment is based on the work by P. Wawrzyński in <cit.>. The HalfCheetah is a 2-dimensional robot consisting of 9 body parts and 8 joints connecting them (including two paws). The goal is to apply a torque on the joints to make the cheetah run forward (right) as fast as possible, with a positive reward allocated based on the distance moved forward and a negative reward allocated for moving backward. The torso and head of the cheetah are fixed, and the torque can only be applied on the other 6 joints over the front and back thighs (connecting to the torso), shins (connecting to the thighs) and feet (connecting to the shins). Hopper This environment is based on the work done by Erez, Tassa, and Todorov in <cit.>. The environment aims to increase the number of independent state and control variables as compared to the classic control environments. The hopper is a two-dimensional one-legged figure that consist of four main body parts - the torso at the top, the thigh in the middle, the leg in the bottom, and a single foot on which the entire body rests. The goal is to make hops that move in the forward (right) direction by applying torques on the three hinges connecting the four body parts. 
Walker This environment builds on the Hopper environment by adding another set of legs making it possible for the robot to walk forward instead of hop. Like other Mujoco environments, this environment aims to increase the number of independent state and control variables as compared to the classic control environments. The walker is a two-dimensional two-legged figure that consist of seven main body parts - a single torso at the top (with the two legs splitting after the torso), two thighs in the middle below the torso, two legs in the bottom below the thighs, and two feet attached to the legs on which the entire body rests. The goal is to walk in the in the forward (right) direction by applying torques on the six hinges connecting the seven body parts. Ant This environment is based on the environment introduced by Schulman, Moritz, Levine, Jordan and Abbeel in <cit.>. The ant is a 3D robot consisting of one torso (free rotational body) with four legs attached to it with each leg having two body parts. The goal is to coordinate the four legs to move in the forward (right) direction by applying torques on the eight hinges connecting the two body parts of each leg and the torso (nine body parts and eight hinges). Swimmer This environment corresponds to the Swimmer environment described in Rémi Coulom’s PhD thesis <cit.>. The environment aims to increase the number of independent state and control variables as compared to the classic control environments. The swimmers consist of three or more segments (’links’) and one less articulation joints (’rotors’) - one rotor joint connecting exactly two links to form a linear chain. The swimmer is suspended in a two dimensional pool and always starts in the same position (subject to some deviation drawn from an uniform distribution), and the goal is to move as fast as possible towards the right by applying torque on the rotors and using the fluids friction. §.§ Gym-Robotics Environment Tasks AntMaze This environment was refactored from the D4RL repository, introduced by Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine in <cit.>. The task in the environment is for an ant-agent to reach a target goal in a closed maze. The ant is a 3D robot consisting of one torso (free rotational body) with four legs attached to it with each leg having two body parts. The goal is to reach a target goal in a closed maze by applying torques on the eight hinges connecting the two body parts of each leg and the torso (nine body parts and eight hinges). Fetch This environment was introduced in <cit.>. The robot is a 7-DoF Fetch Mobile Manipulator with a two-fingered parallel gripper. The robot is controlled by small displacements of the gripper in Cartesian coordinates and the inverse kinematics are computed internally by the MuJoCo framework. The gripper is locked in a closed configuration in order to perform the push task. The task is also continuing which means that the robot has to maintain the block in the target position for an indefinite period of time. Notably, in “FetchPush" or “FetchSlide" when the absolute value of the reward is smaller than 0.05, namely |r| < 0.05, we consider the task to be terminated. This is because the returned dense reward is the negative Euclidean distance between the achieved goal position and the desired goal. Adroit Hand This environment was introduced in <cit.>. 
The environment is based on the Adroit manipulation platform, a 28 degree of freedom system which consists of a 24 degrees of freedom ShadowHand and a 4 degree of freedom arm. Notably, in “AdroitDoor" when the absolute value of the reward is greater than 20, namely |r| > 20, we consider the task to be terminated. This is because a total positive reward of 20 is added if the door hinge is opened more than 1.35 radians. In “AdroitHammer" when the absolute value of the reward is greater than 25, namely |r| > 25, we consider the task to be terminated. This is because a total positive reward of 25 is added if the euclidean distance between both body frames is less than 0.02 meters. For more information about the tasks, please refer to the official document and the links are presented in Appendix <ref>. §.§ Toy Example Settings This subsection introduces the settings of the toy example in Figure <ref>. Task Description We employ the tailored interface of the environment within Gym-Robotics<cit.>. As illustrated in Figure <ref>, the maze has dimensions of 10×10, and the agent commences its navigation from the bottom-left corner, aiming to reach the top-right corner. Observation Space The source observation only contains an array of shape (4, ). The entire observation space is continuous, where obs[0] and obs[1] denotes the x, y coordinates of the agent's current position, while obs[2] and obs[3] represent the x, y coordinates of the target location. Action Space The source action only contains an array of shape (2, ). The entire action space is continuous, where action[0] and action[1] denote the linear force in the x, y direction to the agent. Rewards We use dense rewards. The returned reward is the negative Euclidean distance between the current agent's location and the target location. Training Parameters We use TD3 <cit.> as the RL algorithm. The values of hyperparameters for TD3<cit.> are derived from their original implementation[https://github.com/sfujim/TD3]. In Figure <ref> `1 Iteration', the `Episode Return' is normalized to 0-100. § ADDITIONAL EXPERIMENTS §.§ Could LESR work if it uses the LLM-coded reward function as the only reward signal? We have conducted several experiments in Mujoco tasks. Directly Intrinsic means directly using the best state representation and intrinsic reward codes ℱ_best, 𝒢_best to train without extrinsic reward. LESR w/o ER means throughout the entire iterative process of LESR, extrinsic rewards are entirely removed. LESR w/o LC means throughout the entire iterative process of LESR, Lipschitz constant is entirely removed. Results under five random seeds are demonstrated in the following Table <ref>. Directly Intrinsic: When we directly use ℱ_𝒷ℯ𝓈𝓉, 𝒢_𝒷ℯ𝓈𝓉 to train without extrinsic reward. It's observed that the performance of our method is subject to perturbations but remains relatively stable, achieving comparable results to TD3 in environments such as HalfCheetah, Hopper, and Ant. LESR w/o ER: Particularly, when we iterate without extrinsic rewards throughout the process, LESR demonstrates superior final performance over TD3 in most tasks. LESR w/o LC: In experiments where the Lipschitz constant is removed, there is a performance decrease compared to LESR, yet still outperforming TD3, further indicating the effectiveness of utilizing the Lipschitz constant for feedback in our approach. 
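The Lipschitz feedback referred to here is, in essence, a finite-difference estimate of how sharply the extrinsic reward can change per unit change in each state dimension, computed from logged transitions. A minimal sketch of such an estimator is shown below; the pairwise-sampling scheme, the constants, and the toy check are illustrative assumptions rather than our exact implementation.
[language=Python]
import numpy as np

def per_dim_lipschitz(states, rewards, n_pairs=5000, eps=1e-8, seed=0):
    """Finite-difference estimate of a Lipschitz constant of the extrinsic reward
    with respect to each state dimension, from logged (state, reward) samples."""
    rng = np.random.default_rng(seed)
    states, rewards = np.asarray(states, float), np.asarray(rewards, float)
    i = rng.integers(0, len(states), size=n_pairs)
    j = rng.integers(0, len(states), size=n_pairs)
    dr = np.abs(rewards[i] - rewards[j])[:, None]     # |r_i - r_j|
    ds = np.abs(states[i] - states[j]) + eps          # |s_{i,k} - s_{j,k}| per dim k
    return (dr / ds).max(axis=0)                      # one constant per dimension

# Toy check: dimension 0 drives the reward smoothly, dimension 1 is pure noise,
# so dimension 1 is reported with a much larger (worse) constant.
s = np.random.default_rng(1).normal(size=(2000, 2))
r = np.sin(s[:, 0])
print(per_dim_lipschitz(s, r))
Dimensions with small constants are the ones that make the reward mapping more continuous, and it is exactly this per-dimension summary that is fed back to the LLM in the feedback prompt.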
§.§ Two New Task Decriptions To further validate the generalization capability of LESR, we conduct experiments on two entirely new task descriptions: (1) requiring the walker agent to learn to jump in place, and (2) requiring the walker agent to learn to stand with its legs apart. We abstain from any iterations or feedback, training solely on the ℱ, 𝒢 produced in the first round of LESR, and only for 300k time steps. We generate the final GIFs in the github repository of LESR[https://github.com/thu-rllab/LESR], and from the GIFs on the webpage, it is observed that the walker can perform the jumping and standing with legs apart actions as per the task descriptions, which further highlights the generalization significance of LESR. §.§ Different Choices of Lipschitz constant As mentioned in Section <ref>, apart from estimating the Lipschitz constant computed independently for each state dimension concerning the extrinsic reward, we can also employ the discouted return (LESR(DR)) or spectral norm (LESR(SN)) instead of the extrinsic reward. We have conducted experimental evaluations on these various feedback signals in Mujoco tasks in Table <ref>. Note: In this setting the sample count K is set to 3. It's illustrated that both LESR(DR) and LESR(SN) are feasible for estimating the Lipschitz constant. However, LESR demonstrates comparatively more stable performance with lower variance in the context of dense reward environments. Nonetheless, this does not discount the utility of employing discounted return and spectral norm estimation methods. Specifically, in scenarios involving discontinuous rewards, the necessity of employing LESR(DR) and LESR(SN) becomes apparent. §.§ More Training Details of LESR We have presented the performance data for a limited scope of 300k environment interaction training steps in Table <ref>. The results demonstrate that LESR exhibits superior performance within significantly fewer training steps, excelling comprehensively across all tasks in comparison to the RL baseline algorithm TD3, thereby manifesting heightened sample efficiency. We have also presented the final evaluating curves in Figure <ref> using all tasks' best state representation and intrinsic reward functions ℱ_best, 𝒢_best. This further substantiates that our approach outperforms the baseline in terms of both sample efficiency and ultimate performances. In Figure <ref> we have presented performance improments compared with the baseline using every iteration's best state representation and intrinsic reward functions ℱ_best, 𝒢_best. It is demonstrated that the feedback part plays a critical role in our method. The final performance demonstrated a gradual amelioration across successive iterations. §.§ Semantic Analysis Here are the state representation functions for the Swimmer task across four random seeds. The state representations in this task can be categorized into five groups, whose names are derived from the comments provided by the LLM: cosine or sine of the angles (c1), relative angles between adjacent links (c2), kinetic energy (c3), distance moved (c4), and sum of torques applied (c5). 
[frametitle=State representation functions for the Swimmer task] seed_1: [language=Python] def revise_state(s): x_velocity_squared = s[3]**2 total_angular_velocity = np.sum(s[5:]**2) cos_angles = np.cos(s[:3]) sin_angles = np.sin(s[:3]) relative_angles = np.diff(s[1:3]) updated_s = np.concatenate((s, [x_velocity_squared, total_angular_velocity], cos_angles, sin_angles, relative_angles)) return updated_s seed_2: [language=Python] def revise_state(s): if len(s) > 3: relative_angles = [s[i+1] - s[i] for i in range(1, len(s)-3)] else: relative_angles = [] kinetic_energy = 0.5 * (s[3]**2 + s[4]**2) + 0.5 * sum(s[5:]**2) distance_moved = s[3] sum_torques = sum(abs(s[5:])) updated_s = np.concatenate((s, relative_angles, [kinetic_energy, distance_moved, sum_torques])) return updated_s seed_3: [language=Python] def revise_state(s): relative_angle_1_2 = s[1] - s[2] total_kinetic_energy = 0.5 * (s[5]**2 + s[6]**2 + s[7]**2) mass = 1.0 length = 1.0 gravity = 9.81 potential_energy = mass * gravity * length * np.sin(s[0]) dt = 1.0 distance_x = s[3] * dt updated_s = np.concatenate((s, [relative_angle_1_2, total_kinetic_energy, potential_energy, distance_x])) return updated_s seed_4: [language=Python] def revise_state(s): cos_angles = np.cos(s[:3]) sin_angles = np.sin(s[:3]) relative_angles = np.diff(s[:3]) kinetic_energy = 0.5 * (s[3]**2 + s[4]**2) + 0.5 * np.sum(s[5:]**2) distance_moved = s[3] updated_s = np.concatenate((s, cos_angles, sin_angles, relative_angles, [kinetic_energy, distance_moved])) return updated_s § FUTURE WORK Methodology framework viability Our methodology framework in LESR remains viable for image tasks, where Vision-Language Models (VLMs)<cit.> can be employed to extract semantic features from images, followed by further processing under the LESR framework. We anticipate utilizing VLMs for future research. General applicability beyond symbolic environments While our primary focus lies on symbolic environments, our method extends beyond them. LESR serves as a general approach for leveraging large models to generate Empowered State Representations, offering potential applicability across various environments. Offline reinforcement learning LESR is also feasible for offline reinforcement learning scenarios<cit.>. The LESR framework is versatile and not limited to online RL. In the future, we aim to explore various applications and possibilities. Within LESR, we utilize LLMs to generate Empowered State Representations, showcasing their effectiveness in enhancing the Lipschitz continuity of value networks in reinforcement learning. Our experimental results, along with supplementary theorems, validate these advantages. We believe that LESR holds promise in inspiring future research endeavors. § HYPERPARAMETERS We have listed the hyperparameters of our algorithm in Table <ref>, which encompasses the parameters of the training process, algorithm and optimizer settings. Notably, for PPO and SAC, we adopt the code in <https://github.com/Lizhi-sjtu/DRL-code-pytorch>. For EUREKA, to ensure comparative fairness, we utilized the same hyperparameters as LESR, namely the sample count K and the iteration count I.
http://arxiv.org/abs/2407.12305v1
20240717040155
SHARC-VQE: Simplified Hamiltonian Approach with Refinement and Correction enabled Variational Quantum Eigensolver for Molecular Simulation
[ "Harshdeep Singh", "Sonjoy Majumder", "Sabyashachi Mishra" ]
quant-ph
[ "quant-ph" ]
harshdeeps@kgpian.iitkgp.ac.in Center of Computational and Data Sciences, Indian Institute of Technology, Kharagpur, India sonjoym@phy.iitkgp.ac.in Department of Physics, Indian Institute of Technology, Kharagpur, India mishra@chem.iitkgp.ac.in Department of Chemistry, Indian Institute of Technology, Kharagpur, India § ABSTRACT Quantum computing is finding an increasing number of applications in quantum chemistry, particularly for simulating the electronic structure and molecular properties of simple systems. The transformation of a molecular Hamiltonian from the fermionic space to the qubit space results in a series of Pauli strings. Calculating the energy then involves evaluating the expectation value of each of these strings, which presents a significant bottleneck for applying variational quantum eigensolvers (VQEs) in quantum chemistry. Unlike fermionic Hamiltonians, the terms in a qubit Hamiltonian are additive. This work leverages this property to introduce a novel method for extracting information from the partial qubit Hamiltonian, thereby enhancing the efficiency of VQEs. We introduce the SHARC-VQE (Simplified Hamiltonian Approach with Refinement and Correction enabled VQE) method, where the full molecular Hamiltonian is partitioned into two parts based on the ease of quantum execution. The easy-to-execute part constitutes the partial Hamiltonian, and the remaining part, while more complex to execute, is generally less significant. The latter is approximated by a refined operator and added as a correction to the partial Hamiltonian. SHARC-VQE significantly reduces computational costs for molecular simulations. The cost of a single energy measurement can be reduced from 𝒪(N^4/ϵ^2) to 𝒪(1/ϵ^2) for a system of N qubits and accuracy ϵ, while the overall cost of VQE can be reduced from 𝒪(N^7/ϵ^2) to 𝒪(N^3/ϵ^2). Furthermore, measurement outcomes using SHARC-VQE are less prone to errors induced by noise from quantum circuits, reducing the errors from 20-40% to 5-10% without any additional error correction or mitigation technique. Additionally, SHARC-VQE is demonstrated as an initialization technique, where the simplified partial Hamiltonian is used to identify an optimal starting point for a complex problem. Overall, this method improves the efficiency of VQEs and enhances the accuracy and reliability of quantum simulations by mitigating noise and overcoming computational challenges. § INTRODUCTION Quantum computing utilizes the concepts of quantum mechanics to carry out computations that are fundamentally distinct from those performed by classical computing <cit.>. Quantum computing relies on qubits, which, unlike classical bits, can exist in a superposition of states. In addition, qubits can become entangled, meaning that the state of one qubit can be correlated with the state of another, regardless of the distance between them. Entanglement, together with superposition, allows quantum computers to generate and manipulate large amounts of information simultaneously, offering a potential exponential speedup for specific problems <cit.>.
Its capacity for catalyzing substantial change is notably evident in the discipline of chemistry <cit.>. Although there have been significant improvements in the classical simulation of quantum chemical systems <cit.>, classical computing faces difficulties in accurately simulating the intrinsic quantum properties of electrons, particularly in large molecules and materials. As a result, its ability to predict and simulate molecular interactions with a high level of accuracy is limited <cit.>. Quantum computers are particularly suitable for simulating structure and properties in the areas of catalysis, materials research, and drug discovery <cit.>. In this early era of quantum computing, hybrid classical-quantum algorithms such as Variational Quantum Eigensovlers (VQE) have become the most efficient methods for doing molecular electronic structure calculations <cit.>. These algorithms have undergone experimental testing on several quantum hardware, such as photonic processors (photonic) and trapped-ion processors <cit.>. The use of efficient ansatzes such as the unitary coupled cluster methods <cit.>, and the advances in error-resistant algorithms, and quantum error correction <cit.> add further to the utility of these variational algorithms. Despite the potential for revolutionizing computing, the current generation of quantum computers, the so-called Noisy Intermediate-Scale Quantum (NISQ) computers, are susceptible to external noise and face many other challenges to be fully utilized. The major bottlenecks for scaling up the size and length of quantum computations are the availability of a limited number of qubits and the short coherence time of the employed qubits. Recently, we benchmarked the performance of different classical optimizers for quantum chemistry calculation with variational quantum algorithms <cit.>. Despite using the best available classical optimizer, the error in the ground-state energy calculation across different simple molecules in the presence of noise was about 10-20%, which does not bode well for the applicability and scalability of the VQAs. Two major hurdles that contribute the most to the noise are identified as the high-depth ansatz, which increases the number of quantum gates in the quantum circuits, and the size of the molecular Hamiltonian, which increases the number of measurements of the quantum circuits. A lot of work has been done in the modification and construction of efficient ansatz specific to quantum chemistry problems with VQAs <cit.>. In the pursuit of increasing the accuracy and flexibility of the wavefunctions, these ansatzes invariably increase the depth of the quantum circuits. This adds further to the cost and error in the measurements needed to estimate the expectation value of the Hamiltonian using VQE <cit.>. Quantum error mitigation (QEM) can be one of the potential solutions for overcoming the measurement problem, and a significant amount of work has been done in that area <cit.>. However, most of the error mitigation techniques significantly increase the resource requirement of the VQEs. To overcome this, scalable QEM techniques have been devised <cit.>. There have also been efforts to quantify and mitigate the errors introduced by the quantum gates in the VQEs <cit.>. However, the focus has to be placed on the growing size of the molecular Hamiltonian and the consequent exponential rise in resource requirements, as that would negate all other endeavors in error mitigation. 
While some work has been done in devising efficient methods of qubit Hamiltonian measurement, many approaches and techniques are yet to be explored. A typical N-qubit Hamiltonian consists of several Pauli operators P_i with their respective coefficients c_i, such that, H =∑^n_i=1c_iP_i, where each Pauli operator P_i is a N-dimensional Pauli string. The number of terms in a N-qubit Hamiltonian scales as 𝒪(N^4). Each string is measured individually to determine its contribution to the overall energy, resulting in a high measurement overhead. Since quantum measurements are probabilistic and noise-susceptible, they amplify inaccuracies when aggregating results from 𝒪(N^4) Pauli strings. In this work, we introduce a simplified Hamiltonian approach to improve the efficiency of VQE. The method employs a divide-and-conquer strategy, partitioning the full molecular Hamiltonian into smaller fragments <cit.> based on their ease of computation and contribution to the overall Hamiltonian. While the simplified Hamiltonian containing significant and easy-to-compute fragments is solved exactly, the insignificant and difficult-to-compute fragments are approximated by some easy-to-compute refined operators within the VQE scheme. Post variational optimization, the final corrected energy is obtained by a subtractive scheme where the contribution of the refined operators is replaced by the true operators. This Simplified Hamiltonian Approach with Refinement and Correction enabled VQE (SHARC-VQE) is tested for ground state energy and wave functions of molecules with varying sizes, described by 4 to 10 qubits in ideal and noisy quantum environments. The approach is also extended to the excited states of molecules. The versatility of the method as an initialization technique is demonstrated for a general Fermi-Hubbard model. Further study also revealed that replacing a full Hamiltonian with a simpler partial Hamiltonian does not worsen the performance of the VQEs against the barren plateau problems, as suspected in literature <cit.>. § SHARC-VQE §.§ Algorithm The Hamiltonian of a molecular system with N electrons and M nuclei under the Born-Oppenheimer approximation is given by, H=-∑_i=1^N( ∇_i^2/2 + ∑_A=1^MZ_A/r_iA) + ∑_j>i1/r_ij + ∑_B>AZ_AZ_B/R_AB, where the first sum represents the one-electron operators (electron kinetic energy and electron potential energy), the second sum accounts for the two-electron interaction operators, and the last sum reflects the constant inter-nuclear repulsion. The molecular Hamiltonian can be expressed equivalently in the Fock space in the second quantization form as, H = ∑_p,q h_pq a_p^† a_q + 1/2∑_p,q,r,sh_pqrsa_p^† a_q^† a_s a_r. In a given electronic basis set {X(x)}, the one- and the two-electron integrals (h_pq and h_pqrs, respectively) are given by, h_pq = ∫ d x⃗ X_p^*(x⃗)(-∇^2/2-∑_AZ_A/r_A x⃗)X_q(x⃗) and h_pqrs = ∬ d x⃗_⃗1⃗d x⃗_⃗2⃗X_p^*(x⃗_⃗1⃗)X_q^*(x⃗_⃗2⃗)X_r(x⃗_⃗1⃗)X_s(x⃗_⃗2⃗)/r_12. The Fock space representation of the wavefunction is expressed in terms of occupation numbers vector, | s ⟩ = | s_1, ⋯, s_p, ⋯ ,s_N ⟩, with s_p= {0, 1} represents if the p^th spin-orbital is unoccupied or occupied, respectively. The fermionic operators a^†_p and a_p in the Fock-space representation are then employed to create and annihilate an electron in the s_p spin-orbital by, a^†_p | s⟩ = (1-δ_s_p,1) Γ^s_p | s_1, ⋯ ,1_p, ⋯, s_n⟩ a_p | s⟩ = δ_s_p,1Γ^s_p | s_1, ⋯ ,0_p,⋯ ,s_n⟩. 
Here Γ^s_p = (-1)^∑_m<ps_m ensures the antisymmetric nature of the wave function with respect to electron exchange and the delta function δ_s_p,1 enforces the Pauli's exclusion principle. In the Fock-space representation, the one-to-one correspondence between the fermionic state |s⟩ and the qubit state |q⟩ is straightforward, i.e., | s⟩ = | s_1, ⋯, s_N⟩⟺ | q⟩ = | q_1, ⋯ ,q_N⟩, where, the occupancy of a spin-orbital s_p ({0,1}) maps to the binary state ({↑ , ↓}) of qubit q_p. The transformation between the fermionic space and the qubit space can be achieved by various transformation schemes, such as the Jordan-Wigner <cit.>, Parity <cit.>, and Brayvi-Kitaev <cit.> schemes. In the Jordan-Wigner transformation, the fermionic operators are expressed in terms of the Pauli spin operators (X, Y, Z) as, a_p^† = 1/2 (X_p - iY_p) ⊗_q<p Z_q a_p = 1/2 (X_p + iY_p) ⊗_q<p Z_q . Using the above expressions in Equation <ref>, the molecular Hamiltonian in qubit representation appears as a sum-of-products of Pauli operators (Equation <ref>). For example, the Hamiltonian for H_2 in a 4-qubit space appears as <cit.>, H_ qubit^ H_2 = -0.81054 IIII + ⋯ + 0.12091 ZZII, with the complete list given in Table <ref>. A quantum measurement of the above Hamiltonian involves the measurement of each Pauli string separately (i.e., 15 separate measurements in this case). The most conspicuous way to divide the molecular Hamiltonian would be by transforming the one and two-electron terms in the fermionic Hamiltonian (Equation <ref>) separately via the Equation <ref> (see Table <ref>). It is obvious that ignoring a particular type of interaction altogether leads to highly incorrect results (see Figure S1 in the supporting information). Hence, instead of ad-hoc fragmenting the fermionic Hamiltonian, we explore a systematic approach to selecting terms from the qubit Hamiltonian. The sum-of-products form of the qubit Hamiltonian (Table <ref>) can be fragmented based on two factors, namely, the contribution of each term to the overall Hamiltonian (c_i) and the cost of execution of the Pauli string as described in FIG. <ref>. Considering a certain cut-off value c_0, one can divide the terms in the Hamiltonian as significant (c_i ≥ c_0) and insignificant (c_i < c_0). The Pauli strings with exclusively I and Z gates are easy to execute since all such operators commute with each other and require no additional basis transformation for measurement. Therefore, the entire group can be measured using a single quantum circuit. On the other hand, each term containing X or Y gate(s) requires appropriate basis transformation, thus raising the execution cost. Therefore, the qubit Hamiltonian of any system can be divided into four segments, e.g., the significant and easy-to-execute terms (H_ es), the significant and hard-to-execute terms (H_ ds), the insignificant and easy-to-execute terms (H_ ei), and the insignificant and hard-to-execute terms (H_ di), see FIG. <ref>. The qubit Hamiltonian can be simplified in such a way that requires the minimum number of circuit executions, thereby reducing the quantum load of the VQA. The simplest way of achieving this is to consider a partial Hamiltonian H_ p (as H_ es+H_ ei), by including the terms with I or Z gates alone, notwithstanding their relative contribution. After all, a single circuit measurement shall suffice their evaluation. 
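Since the easy/hard classification depends only on the Pauli letters in each string, the four fragments can be produced by a few lines of classical pre-processing once the qubit Hamiltonian is available as a list of (coefficient, Pauli string) pairs. The sketch below illustrates this; apart from the IIII and ZZII values quoted above, the H_2 coefficients and the cut-off c0 are placeholder choices.
[language=Python]
# Classical pre-processing: split a qubit Hamiltonian sum_i c_i P_i into the four
# fragments of FIG. <ref>, given a cut-off c0 separating significant terms.
def partition(hamiltonian, c0=0.05):
    frags = {"es": [], "ds": [], "ei": [], "di": []}
    for coeff, pauli in hamiltonian:
        easy = set(pauli) <= {"I", "Z"}           # measurable in the computational basis
        key = ("e" if easy else "d") + ("s" if abs(coeff) >= c0 else "i")
        frags[key].append((coeff, pauli))
    return frags

# Schematic 4-qubit H2 terms: only the IIII and ZZII coefficients are quoted in the
# text; the remaining entries are placeholders for illustration.
h2_terms = [(-0.81054, "IIII"), (0.17218, "ZIII"), (0.12091, "ZZII"),
            (0.16893, "ZIZI"), (0.04523, "XXYY"), (0.04523, "YYXX")]

frags = partition(h2_terms)
H_p = frags["es"] + frags["ei"]                   # partial Hamiltonian (I/Z strings only)
H_d = frags["ds"] + frags["di"]                   # hard-to-execute remainder
print(len(H_p), "easy terms,", len(H_d), "hard terms")
The resulting H_p is the operator handed to the VQE in the next step, while H_d is set aside for refinement and correction.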
A VQE optimization using the partial Hamiltonian yields the wave function in terms of the optimized rotation parameters {θ^p} and energy E^', such that, H_ p | Ψ_ p (θ^p) ⟩ =E^' |Ψ_ p (θ^p) ⟩. The reliability of the state of the system optimized with the partial Hamiltonian needs to be tested against the results from VQE optimization with the full Hamiltonian within a desired level of accuracy (ϵ). In other words, we ask if, | ⟨Ψ_ full| H_ full|Ψ_ full⟩ - ⟨Ψ_ partial| H_ full|Ψ_ partial⟩ | ≤ϵ In most cases, the above relation is unlikely to hold true as the H_ p has left out the H_ di and H_ ds terms of the Hamiltonian. The terms that are left out mainly arise from the two-electron interactions. Their number is expected to grow significantly with molecular size. On the other hand, the inclusion of the difficult-to-execute terms H_ d within the VQE scheme offers a marginal contribution to the overall energy (note the small coefficients of the H_ d terms in Table <ref>) all the while introducing a significant amount of measurement noise. Hence, it is not only necessary to add corrections to the energy obtained from partial Hamiltonian optimization but also to devise a method to incorporate the information of the left-out terms in some way in the VQE simulation itself. One way of achieving this is to approximate the difficult-to-execute part of the Hamiltonian with an easy-to-execute operator (H^'_d). Finding H^'_d that can approximate H_d at all possible points can be tedious. If |Ψ(θ^F)⟩ is the final optimized state at the end of a VQE simulation with the full Hamiltonian, we need the operator H^'_d such that, ⟨Ψ(θ^F) | H_d |Ψ(θ^F) ⟩≈⟨Ψ(θ^F) | H^'_d |Ψ(θ^F) ⟩. Of course, finding θ^F is also an expensive task by itself. Under the assumption that the true state of a molecular system is not far from the Hartree-Fock state |Ψ(θ^ HF)⟩, we can construct a refined operator as a sum of one or more easy-to-evaluate Pauli gates whose coefficients can be fitted to reproduce the expectation value of the H_ d operator in the close vicinity of the Hartree-Fock state, that is, ⟨Ψ(θ^ HF+δθ) | H_d |Ψ(θ^ HF+δθ) ⟩≈ ⟨Ψ(θ^ HF+δθ)) | H^'_d |Ψ(θ^ HF+δθ) ⟩ where, δθ is a small random perturbation. The advantage of invoking the HF state lies in the fact that it is very easy to initialize the HF state in a quantum circuit, Based on these assumptions, we introduce the refined operator H^' ij_d, where i represents the number of easy-to-execute Pauli strings employed in place of the H_ d terms and j indicates the number of points in the close vicinity of HF state at which the operator has been approximated. Using the refined operator H^' ij_d, the refined partial Hamiltonian can be written as, H_p^ij = H_p + H^' ij_d The VQE optimization with the refined Hamiltonian H_p^ij yields the final optimized parameters θ^p. The final energy is then estimated from a subtractive correction scheme in which the contribution of the refined operator H^' ij_d to the overall energy is substituted by the energy contribution of the true operator H_ d, i.e., E^ij_ SHARC-VQE = ⟨Ψ(θ^p) | H_p^ij - H_d^' ij + H_d |Ψ(θ^p) ⟩. Equation <ref> ensures that the final energy accounts for the contribution from the true operator H_ d, whereas, during the iterative procedure of the VQE optimization, its approximate form (H_d^'ij) is used for state optimization. 
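In practice, the refinement and correction steps reduce to a handful of expectation-value evaluations around the Hartree-Fock point plus one ordinary VQE run on the cheap operator. The sketch below spells this out for a single refined Pauli string (i = 1); `expval` and `vqe_minimize` are hypothetical callables standing in for a (simulated) measurement of an operator at given circuit parameters and for a standard VQE optimization, and operators are represented as lists of (coefficient, Pauli string) pairs as in the previous sketch.
[language=Python]
import numpy as np

def fit_refined_coefficient(expval, H_d, P_k, theta_hf, dtheta=0.1, j=1):
    """Coefficient c^(j)_1 matching <H_d> with c * <P_k> near the Hartree-Fock state."""
    ratios = []
    for m in range(1, j + 1):
        theta = theta_hf + m * dtheta                        # small shift away from HF
        ratios.append(expval(H_d, theta) / expval([(1.0, P_k)], theta))
    return float(np.mean(ratios))

def sharc_vqe_energy(expval, vqe_minimize, H_p, H_d, P_k, theta_hf):
    c = fit_refined_coefficient(expval, H_d, P_k, theta_hf)  # refinement
    H_p_ij = H_p + [(c, P_k)]                                # H_p^ij = H_p + H'^ij_d
    theta_p = vqe_minimize(H_p_ij, theta_hf)                 # cheap optimization loop
    # Subtractive correction: remove the refined term, measure the true H_d once.
    return (expval(H_p_ij, theta_p)
            - c * expval([(1.0, P_k)], theta_p)
            + expval(H_d, theta_p))
Note that the single expensive measurement of H_d enters only once, at the very end, while every iteration of the optimization loop touches only easy-to-execute strings.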
This simplified Hamiltonian approach (SHARC-VQE) can be employed for any partial Hamiltonian by replacing the left-out terms with easy-to-compute refined operators during the VQE optimization and by correcting the final energy according to Equation <ref>. In a similar way, the H_p^00 method can also be defined, where no approximation operator is used, but the contribution from H_d is added at the end. The schematic of the SHARC-VQE algorithm is presented in FIG. <ref>. §.§ Implementation All VQE simulations employed the standard unitary coupled cluster with singles and doubles (UCCSD) ansatz <cit.> with ideal and noisy quantum circuit settings. For noisy simulations, the quantum circuit simulator (qasm_simulator) is employed with additional noise embedded from a noise model sampled from the IBM Cairo quantum device, which mimics the performance of a realistic noisy (NISQ-era) quantum computer. Based on our earlier benchmarking study <cit.>, we used the SLSQP optimizer <cit.> for ideal simulations and the SPSA optimizer <cit.> for the noisy simulations. Under ideal conditions, the maximum number of iterations for SLSQP was set to 200, while for SPSA under noisy conditions, it was set to 500 <cit.>. We considered H_2, LiH, BeH_2, and H_2O with 4, 6, 8, and 10 qubits, respectively, to test the performance of SHARC-VQE. With the increase in the size of the molecule (and the number of qubits), the total number of terms in the qubit Hamiltonian increases. Consequently, the number of gates with high-cost execution increases rapidly (Table <ref>). These terms are approximated by the easy-to-execute refined operators listed in Table <ref>. The refined operators can be written as, H^' ij_d = ∑^i_k=1 c^(j)_k P_k where the coefficients c^(j)_k are evaluated by comparing the expectation values of H^' ij_d and H_d in the vicinity of the Hartree-Fock state, averaged over j points. The P_k are chosen in accordance with the Hartree-Fock state of the molecule under consideration. For example, for the H_2 molecule, a single Pauli string is chosen as ZIZI, and the coefficient for H^' 11_d is evaluated as, c^(1)_1 = ⟨Ψ(θ^HF+δθ) | H_d |Ψ(θ^HF+δθ) ⟩/⟨Ψ(θ^HF+δθ) | ZIZI |Ψ(θ^HF+δθ) ⟩ . Similarly, for H^' 12_d, the coefficient c^(2)_1 is taken as the average of the two values evaluated at the points θ^HF+δθ and θ^HF+2δθ. Here, δθ can be a randomly chosen small value (0.1 in the present study). It is straightforward to extend this approach to larger values of j (i.e., averaging over j points) and of i (i.e., the number of distinct easy-to-evaluate refined Pauli operators in Equation <ref>). In the present case, i=1 and j=1, 2 were found to be sufficient for the chosen problems. The performance of SHARC-VQE under ideal and noisy conditions is tested with respect to the total energy and wavefunction of the molecule obtained from VQE optimization with the full Hamiltonian. The accuracy of the energy evaluation is quantified by the relative energy error of the different methods, Δ E = | (E^ VQE - E^ ref)/E^ ref| where E^ VQE is the energy evaluated by the VQE method, while E^ ref is the reference energy, evaluated exactly by the NumPy eigensolver. In this case, the chemical accuracy can be defined as a dimensionless quantity, Chemical Accuracy = | 1.6 mH/E^ ref| To evaluate the accuracy of the wavefunction, we computed the fidelity of the approximate wavefunction (| Ψ_ approx.⟩) with the exact wavefunction (| Ψ_ exact⟩) by, F (| Ψ_ approx.⟩, | Ψ_ exact⟩) = | ⟨Ψ_ approx. | Ψ_ exact⟩|^2.
The fidelity of two quantum states measures their degree of similarity with 0≤ F≤ 1. Any deviation from 1 indicates a loss of accuracy. § RESULTS AND DISCUSSION §.§ Performance of SHARC-VQE under Ideal Condition FIG. <ref> compares the performance of all three variants of SHARC-VQE (differing by the refined operators employed to approximate the hard-to-execute terms of the Hamiltonian, Table <ref>) under ideal conditions. In all cases, energy and the wave function converge within the chemical accuracy to their corresponding values with the full Hamiltonian VQE (FH-VQE). The relative energy errors show that different variants of SHARC-VQE require different numbers of iterations to achieve convergence. The corrected partial Hamiltonians, H^11_p, and H^12_p do not show any particular difference from H^00_p (without any approximation operator for H_d) in H_2. However, in LiH, BeH_2, the corrected partial Hamiltonians converge much faster than the uncorrected partial Hamiltonian and the full Hamiltonian (FIG. <ref>). The approximation operators ensure that the SHARC-VQE method remains scalable to larger molecules. §.§.§ Applicability of the Approximation Operators FIG. <ref> shows the expectation values of the hard-to-execute terms in the qubit Hamiltonian (H_d) and those of their approximate form (H^'ij_d) along the VQE simulation of the full Hamiltonian of LiH. The error introduced by the approximation operator, ϵ(H^'ij_d), can be defined as the deviation of its expectation value from that of the true operator at a certain point f, ϵ(H^'ij_d) = |⟨ H_d⟩_f - ⟨ H^'ij_d⟩_f|. It is found that the error is typically one or just a few orders less than the expectation value of the operator H_d, that is, 𝒪(ϵ(H^'ij_d)) = 𝒪(⟨ H_d⟩_f)× 10^-k with, k ∈{1,2}. The hard-to-execute terms in the qubit Hamiltonian (H_ d) include Pauli strings with significant and insignificant coefficients. The insignificant part of this operator H_ di can be safely estimated with the approximation operator. However, the same may not hold up for the hard-to-execute terms with significant contribution (H_ ds). The actual contribution of a particular term in the Hamiltonian to the final energy depends on the expectation value of the corresponding Pauli string (|⟨ P_i ⟩| ≤ 1), weighted by the coefficient (c_i). As long as the overall expectation value |c_i⟨ P_i ⟩| is less than a particular threshold value (c'_0, e.g., 1 mH as the chemical accuracy), the operator H_ ds can be estimated using the approximation operator H^'ij_d. If that is not the case, one can use the SHARC-VQE method as an initialization technique to resolve the Hamiltonian (see Section <ref>). §.§.§ SHARC-VQE for Excited States with Variational Quantum Deflation While VQE offers the solution for the ground state of a problem, the solution of the excited states can be obtained with some modification to the VQE algorithm. One such algorithm employed in the current work is the variational quantum deflation (VQD) method <cit.>. Starting from the optimized VQE solution, the VQD method finds the k^th-excited state of the system by optimizing the parameters θ_k for the state |Ψ_k⟩ such that the function F(θ_k) is variationally optimized, where F(θ_k) = ⟨Ψ(θ_k)|H|Ψ(θ_k)⟩ + ∑_j=0^k-1γ_j |⟨Ψ(θ_k)|Ψ(θ_j) ⟩|^2 = E(θ_k) + ∑_j=0^k-1γ_j |⟨Ψ(θ_k)|Ψ(θ_j) ⟩|^2. Here, {θ_k} is the optimized parameters of the k^th energy state. 
VQD method optimizes E(θ_k) with an additional constraint that the current excited state |Ψ(θ_k)⟩ is orthogonal to the previous states |Ψ(θ_0)⟩, ⋯, |Ψ(θ_k-1)⟩. Here, γ balances the contribution of each overlap term to the cost function and is generally computed as the mean square sum of the coefficients of the observable. This is equivalent to finding the ground state energy of a modified Hamiltonian at a stage k, H_k = H + ∑_l=0^k-1γ_l |l⟩⟨ l| where, |l⟩ is the l^th eigenstate of the Hamiltonian H with energy E_l = ⟨ l|H|l⟩. The VQD method has been successfully applied in various problems <cit.>. Here, we have extended the VQD approach within the SHARC scheme (FIG. <ref>), where the Hamiltonian H_k in Equation <ref> can be modified to, H^ij_kp = H^ij_p + ∑_l=0^k-1γ_l |l⟩⟨ l| where, H^ij_p is the partial Hamiltonian constructed from the initial full Hamiltonian H of different molecules. No additional changes were made with regard to the ∑_l=0^k-1γ_l |l⟩⟨ l| terms. In Table <ref>, we demonstrate the performance of various SHARC-VQD schemes for the lowest four states of the systems under consideration. The results highlight that the SHARC-VQE can be an effective tool for exploring the entire energy spectrum of the molecules with the partial Hamiltonian H^ij_kp. The averaged relative energy energies with respect to the exact energies show that in all the cases, the VQD algorithm works equally well with the full and partial Hamiltonians. §.§ Performance of SHARC-VQE under Noisy Condition Quantum noise is a fundamental hindrance in existing quantum technology, leading to inaccuracies in qubit states and operations, thus affecting the reliability of calculations. The noise arises from multiple sources, such as decoherence (which leads to the loss of the quantum state in qubits over time); gate errors (which occur during the implementation of quantum gates and result in imperfect operations); and measurement errors (which cause inaccuracies when determining the final state of qubits). The effect of quantum noise can be particularly damaging to the variational algorithms. The iterative process of VQAs, which includes multiple iterations of preparing, evolving, and measuring quantum states for optimizing a cost function, amplifies the effect of quantum noise. The accumulation of errors on each iteration hinders the ability to get accurate results. As quantum computers get more complicated or require more qubits, these flaws accumulate, which further restricts their practical applicability. Accurately modeling and characterizing noise sources in quantum hardware is essential for understanding their effects and developing effective mitigation strategies. While the success of SHARC-VQE under ideal conditions demonstrates its proof of principle, its real test lies in its performance under noisy conditions. Noise can be particularly damaging to chemistry applications with variational quantum algorithms. For a N-qubit Hamiltonian, the number of expectation values required to be evaluated scales as 𝒪(N^4). This massively increases error accumulation over a large number of measurements and consequent VQE iterations. This is where SHARC-VQE can be really helpful. Since all the Pauli strings in the partial Hamiltonian can be grouped, a single circuit measurement would suffice for energy evaluation. Hence, for a particular iteration of the simulation, the SHARC-VQE method effectively reduces the complexity from 𝒪(N^4/ϵ^2) to 𝒪(1/ϵ^2) for a certain desired precision ϵ. FIG. 
<ref> compares the performance of all three variants of SHARC-VQE under noisy conditions. For each system, 25 VQE simulations were run to sample the effect of noise on simulations adequately. The individual results, the average, and the standard deviations are highlighted in the figure. The SHARC-VQE technique outperforms the full Hamiltonian method, with the errors in energy dropping from 30-40% (in the latter) to 5-10% (in the former), see FIG. <ref>. A similar performance is also seen in the fidelity calculations, where the fidelity is evaluated against the exact wavefunction obtained from the ideal quantum circuit evaluations, FH-VQE. The fidelity calculations, in particular, highlight the strength of the SHARC-VQE technique. SHARC-VQE not only estimates the energy more accurately, but it is also able to reproduce the wave function reliably. This shall have cascading benefits when molecular properties are evaluated from the quantum states of the system. It is noteworthy, that the excellent performance of the SHARC-VQE method is obtained without using any error mitigation technique, therefore, with no overhead cost. §.§ Hamiltonian Simplification Prior to Qubit Transformation In the present work, we have employed a Hamiltonian simplification approach after transforming the fermionic Hamiltonian (Equation <ref>) to its qubit form. However, the Hamiltonian simplification is also possible prior to the transformation. In the second quantized form of the Hamiltonian (Equation <ref>), the one- and two-electron integrals (h_pq and h_pqrs for H_2 given in Table S2 in the supporting information) correspond to the one-electron operators a^†_ia_j, and the two-electron operators a^†_ia^†_ja_ka_l, respectively. The qubit transformation (denoted by T) of the one-electron operators to the corresponding Pauli strings, i.e., T(a^†_i a_j) = ∑_i P_i, results in Pauli strings with only I and Z Pauli gates when i=j. Hence, they can be categorized as easy-to-execute. Similarly, for the two-electron operator transformation (T(a^†_ia^†_j a_k a_l) = ∑_i P_i), when (k,l) = (i,j) × (i,j), the resulting qubit strings shall fall in the easy-to-execute category. Further, any operator with a repeated index (on either the creation or the annihilation operator, e.g., a^†_ia^†_j a_k a_k, or a^†_ia^†_i a_k a_l) shall vanish. Any second-quantized molecular Hamiltonian can be simplified based on these criteria and then subjected to qubit transformation followed by evaluation using the SHARC-VQE approach. However, while constructing the partial Hamiltonian before qubit transformation, the ease of computation can not be decided a priori. Here, one may ignore Pauli strings that are easy to execute, thus losing accuracy with no cost benefit. To counter this, the post-qubit transformation of the molecular Hamiltonian needs to be assessed to keep track of the gates being excluded in the partial Hamiltonian construction. §.§ Barren Plateau Problem in SHARC-VQE The barren plateau problem presents a significant challenge in the execution of variational quantum algorithms <cit.>. The barren plateau problem refers to a situation where the cost landscape becomes extremely flat, which poses a challenge for optimization algorithms in locating the global minimum. 
While the barren plateau problem is mainly linked to the size of the system, that is, the number of qubits, it can be linked to other hyperparameters of the quantum circuit, such as, the entanglement of the wave-function <cit.> or its non-locality <cit.> and the quantum noise <cit.>, making the current NISQ hardware highly susceptible. Addressing the barren plateau problem is crucial for effective application and scaling up of VQAs in addressing real-life issues <cit.>. While replacing a full Hamiltonian with a partial Hamiltonian simplifies the problem and intuitively makes it more efficient, a recent study finds that the simplified Hamiltonians are more likely to be susceptible to barren plateaus problem due to their complicated cost landscape <cit.>. In that context, we explore the energy surfaces of different molecules and identify the consequences of the approximations made under SHARC-VQE. To illustrate this, we evaluate the potential energy surface (PES) of H_2 and LiH, each as a 4-qubit system initiated from their corresponding Hartree-Fock state. In these cases, the one-electron excitations do not contribute to the energy due to the Brillouin theorem <cit.>. Therefore, only one parameter corresponding to the single two-electron excitation is required. FIG. <ref> shows the PES of H_2 and LiH as a function of the internuclear distances and the excitation parameter. The PES for both molecules were obtained using full Hamiltonian VQE and SHARC-VQE. While the overall cost landscape for the full and partial Hamiltonians appear similar (FIG. <ref>), a closer comparison of the surface at the equilibrium and dissociated geometry reveals some interesting results (FIG. <ref> c and f). At the equilibrium geometry, the potential-energy curves are nearly identical, barring the small shift in the minima in H_2, which gets smaller in LiH. However, SHARC-VQE yields considerably different results from the full Hamiltonian at the dissociated geometry. This could be because the approximation operator within the SHARC-VQE scheme was parameterized in the vicinity of the Hartree-Fock state at the equilibrium geometry, and the same parameterized operator was used at all other geometries, including the dissociated geometry. This assumption fails since the electronic structure at the dissociated geometry differs substantially from that at the equilibrium geometry. §.§ SHARC-VQE as an Initialization Technique: The Fermi-Hubbard Model The Fermi-Hubbard model describes the dynamics of interacting fermions on a lattice structure. It serves as a fundamental framework for understanding the behavior of electrons in strongly correlated materials. In the Fermi-Hubbard Hamiltonian <cit.>, H = ∑_i,j∑_σ = ↑, ↓ t_ij a^†_iσ a_jσ + U∑_i n_i↑n_i↓ t_ij represents the kinetic energy term associated with the hopping of an electron (in σ spin-state) from a site i to an adjacent site j, and U accounts for the strength of onsite interactions among the electrons. The number operator n_iσ, defined as a^†_i σ a_iσ, accounts for the occupancy of electron in σ spin-state at the lattice site i. In this case, the chosen Fermi-Hubbard Hamiltonian was constructed with open boundary condition and a uniform interaction, t_ij = -1.0, while the onsite potential, U was chosen to be 5.0. The resulting qubit Hamiltonian of the lattice is presented in TABLE <ref>. Unlike the molecular Hamiltonians (Table <ref>), all terms in this Hamiltonian are significant. 
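As a concrete illustration, the sketch below builds a small Fermi-Hubbard lattice with the parameters quoted above (uniform hopping with t_ij = -1.0, U = 5.0, open boundary), maps it to qubits with the Jordan-Wigner transformation, and splits the resulting Pauli strings into easy- and hard-to-execute sets. It assumes the OpenFermion package; the sign convention of its fermi_hubbard helper may differ slightly from the table in the text.

```python
from openfermion.hamiltonians import fermi_hubbard
from openfermion.transforms import jordan_wigner

# 2-site, 1D Fermi-Hubbard model with open boundary conditions.
# tunneling=1.0 produces hopping terms with coefficient -1.0 in this convention.
fermionic_h = fermi_hubbard(x_dimension=2, y_dimension=1,
                            tunneling=1.0, coulomb=5.0, periodic=False)

qubit_h = jordan_wigner(fermionic_h)   # QubitOperator: a sum of Pauli strings

easy, hard = {}, {}
for pauli_string, coeff in qubit_h.terms.items():
    # pauli_string is a tuple like ((0, 'Z'), (2, 'Z')); the empty tuple is identity.
    gates = {g for _, g in pauli_string}
    target = easy if gates <= {"Z"} else hard
    target[pauli_string] = coeff

print(f"{len(easy)} easy-to-execute terms, {len(hard)} hard-to-execute terms")
```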
In this case, Hamiltonian simplification based on the significance of the terms is not straightforward. In such cases, a SHARC-VQE can be used as an initialization technique. One of the major challenges in the variational algorithms is to find a good starting point in the simulation <cit.>. The success of SHARC-VQE in reliably obtaining the full Hamiltonian's true ground state with high fidelity establishes it as an ideal initialization technique for a variety of other VQE problems. Initially, a simplified partial Hamiltonian (PH) is constructed only based on the ease of execution of the gates, and additional terms can be added as required. The extent of the excluded information from the partial Hamiltonian can be quantified as, k_PH = ∑^m_i|c_i|/∑^n_j|c_j|, where, the sum of the coefficients of the m and n Pauli strings in the partial and the full Hamiltonians, respectively, quantifies a degree of similarity between them. The optimized state from the partial Hamiltonian VQE is then used as the initial state for a new simulation, with the full Hamiltonian, in the so-called partial Hamiltonian initialized-VQE (PHI-VQE). Figure <ref> highlights the performance of the PHI-VQE method as an initialization technique for the Fermi-Hubbard model. The partial Hamiltonian prepared by just considering the easy-to-execute terms (7 out of 11 terms, with k_PH=0.875 fails to come close to the actual energy (Figure <ref>a), whereas the addition of one or two more terms in the partial Hamiltonian improves the situation (Figure <ref>b and c). The energy approaches its true value in the latter cases, although the convergence rate is slower than the full Hamiltonian VQE. However, with a good starting point provided by the PH-VQE, the convergence is achieved in PHI-VQE within a few iterations (15-30), as opposed to the random initialized parameters (>120), see Figure <ref>b and c. For the present model, we observe that once k_PH≥ 0.85, the PH-VQE method can reach a good initial state for the standard VQE. For different problems, the ideal value of k_PH needs to be determined. However, more research on different problems is required for a general prescription of the optimized value of k_PH. §.§ Comparison of SHARC-VQE with other Methods There have been several efforts to enhance the performance of quantum computing for complex problems. In this section, we compare SHARC-VQE with other methods with shared objectives. McClean et al. have demonstrated that incorporating spatial locality with the Bravyi−Kitaev transformation leads to enhanced scalability of the existing quantum algorithms for quantum chemistry <cit.>. Considering the proximity of physical interactions, it is conceivable that numerous terms in the Hamiltonian can be deemed insignificant for a finite desired accuracy ϵ. By exploiting the Gaussian product rule, i.e., the product of two spherical Gaussian functions is a rapidly decaying Gaussian function along the line segment connecting the two centers, one can reduce the overall scaling of the two-electron integrals to O(N^2). This approach, while laying the foundation for quadratic scaling methods of electronic structure, is basis-dependent. Another drawback of the locality-based method is that the insignificant two-electron integrals are not included in the VQE simulation, possibly making the method difficult to scale for larger molecules. 
Both of these drawbacks have been addressed in the SHARC-VQE method, which is basis-independent and introduces an approximation operator for the insignificant terms. Basis rotation grouping <cit.> relies on decomposing the two-body operator via tensor analysis. It provides a method to reduce the total number of terms, particularly the joint terms, to be measured in the Hamiltonian. This method, based on a two-stage decomposition of the interaction tensor <cit.>, has been employed to minimize the overall gate depth of the whole UCCSD Ansatz, as well as the Trotter steps <cit.>. This reduction results in a linear scaling with the size of the system, where the measurement is conducted on a distinct basis for each term. This requires an additional O(N) gate depth to execute the orbital rotation for each grouped term of the decomposed Hamiltonian before the measurement itself. The Hamiltonian partitioning scheme is based on the concept that a Pauli string's Abelian group can be diagonalized simultaneously by employing a unitary rotation of the measurement basis <cit.>. This process effectively reduces the number of terms to be measured to accurately calculate the expectation value of a Hamiltonian. Using the principles of the stabilizer theory, individual measurement values for all the terms in a particular Abelian group are evaluated by performing a single joint measurement on the whole qubit register. Both the partitioning and grouping techniques help in reducing the overall number of Hamiltonian measurements but require additional quantum resources, generally of the order of O(N), hence reducing the overall complexity to O(N^2 ∼ N^3). However, additional quantum gates lead to worse performance of the VQEs against noise. This was also addressed with the SHARC-VQE technique, where no additional quantum gates are required. Other potential methods of reducing the measurement overhead of a molecular Hamiltonian include machine learning and tomography techniques. Intending to minimize the number of shots needed to attain a specific degree of accuracy in measuring a Hamiltonian, machine learning on a sequence of shot results has been used to lower the variability of the expectation value <cit.>. A replica of the state is first generated by an ansatz by using an unsupervised restricted Boltzmann machine (RBM) <cit.>. The replica is subsequently employed to calculate the expectation values of quantum observables. RBMs have demonstrated efficacy as effective models for representing quantum states in the domain of condensed matter physics <cit.>. By limiting the tomography problem <cit.> to the estimation of a Hamiltonian, the expense of measurements can be greatly reduced. Considering the Pauli weight of the Hamiltonian can aid in decreasing the sampling prerequisites. It is important to mention that the methods discussed here are primarily intended for Hamiltonian characterization, rather than ground state estimation. Therefore, they may not be immediately optimized for usage in the context of the VQE. This is why the exact advantage and computational complexity of these methods are not known. Table <ref> compares the strength of SHARC-VQE vis-a-vis other available methods of Hamiltonian approximation and measurement reduction. SHARC-VQE presents an easily applicable, and most efficient method to reduce the number of the Hamiltonian measurements without requiring any additional quantum resources. 
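To see why the partial Hamiltonian needs only a single measurement circuit while grouping schemes applied to the full Hamiltonian still produce several groups, the following sketch greedily collects Pauli strings into qubit-wise commuting groups. The strings are illustrative, and the greedy heuristic is only one of the possible grouping strategies mentioned above.

```python
def qubitwise_commute(p1, p2):
    """Two Pauli strings commute qubit-wise if, on every qubit,
    they either match or at least one of them acts as identity."""
    return all(a == b or a == "I" or b == "I" for a, b in zip(p1, p2))

def greedy_grouping(pauli_strings):
    """Greedy qubit-wise-commuting grouping; each group can be
    measured with a single (rotated) measurement circuit."""
    groups = []
    for p in pauli_strings:
        for group in groups:
            if all(qubitwise_commute(p, q) for q in group):
                group.append(p)
                break
        else:
            groups.append([p])
    return groups

full_h = ["IIII", "ZIII", "IZII", "ZZII", "ZIZI",      # easy: I/Z only
          "XXYY", "YYXX", "XYYX", "YXXY"]              # hard: need basis changes
partial_h = [p for p in full_h if set(p) <= {"I", "Z"}]

print("groups for full Hamiltonian:   ", len(greedy_grouping(full_h)))
print("groups for partial Hamiltonian:", len(greedy_grouping(partial_h)))  # -> 1
```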
§.§ Computational Advantage of SHARC-VQE Measurements of different Pauli strings, when not grouped together are independent and therefore uncorrelated, resulting in the mean squared error for the energy estimate for the qubit Hamiltonian (Equation <ref>) <cit.>, ϵ = √(∑^n_ic^2_i Var[P̂_i]/S_i) where S_i is the number of shots employed in the measurement of the Pauli string P_i. Since the Pauli matrices are self-inverse, the variance can be written as, Var[P̂_i] = ⟨ψ | P̂^2_i | ψ⟩ - ⟨ψ | P̂_i | ψ⟩^2 = 1 - ⟨ψ | P̂_i | ψ⟩^2 ≤ 1. Hence, the mean squared error, as a function of the number of shots becomes, ϵ = √(∑^n_i c^2_i ( 1 - ⟨ψ | P̂_i | ψ⟩^2/S_i) ). For the ungrouped Pauli strings, FIG. <ref> shows the mean squared error in the energy calculation for H_2 against the number of shots of the quantum circuit, where SHARC-VQE shows a faster convergence. This performance would get better with larger systems. Interestingly, when grouping is considered, the computational advantage of SHARC-VQE is even better. Considering the fact that all the strings in the partial Hamiltonian contain either the `I', or, `Z' gate, they can be grouped together and a single circuit would be sufficient to calculate the energy. Hence, the number of Pauli string measurements can be reduced from 𝒪(n^4/ϵ^2) to 𝒪(1/ϵ^2) for a particular precision ϵ per VQE iteration. The other grouping or measurement reduction techniques can help reduce the complexity to 𝒪(n^3 ∼ n^2/ϵ^2). Even then, SHARC-VQE remains computationally most inexpensive. For a VQE simulation with a Hamiltonian having n terms, a required precision of ϵ in k number of optimizer iterations, the number of shots scale as, S ∼𝒪( kn/ϵ^2). Scaling of the maximum number of optimizer iterations required with the number of terms in the Hamiltonian is not straightforward to evaluate. However, based on our previous study <cit.>, the number of iterations needed scales worse than second-order with the number of qubits (N). In the best-case scenario, it can be assumed to be k∼𝒪(N^3). Hence, the overall computational complexity of the VQE problem as the function of the number of terms in the Hamiltonian can be written as, S ∼𝒪( N^3 n/ϵ^2). For a general ungrouped Hamiltonian (n ∝ N^4), S ∼𝒪( N^7/ϵ^2) With other grouping techniques, n can be reduced to n ∝ N^2 ∼ N^3. In that case, S ∼𝒪( N^5∼ N^6/ϵ^2). With SHARC-VQE, the k-iteration VQE simulation is done with the partial Hamiltonian, which can be measured with a single circuit. This makes k scale linearly with the number of terms in the partial Hamiltonian m (<n). Since the partial Hamiltonian mainly consists of easy-to-execute terms, it nearly scales quadratically with the number of qubits. In SHARC-VQE, two additional measurements are required for the correction terms. Therefore we have, S_SHARC∼𝒪( N^3/ϵ^2 + 2 N^4/ϵ^2) It should be noted that other grouping techniques can still be employed for the corrections in the SHARC-VQE method (denoted by SHARC+). This further reduces the cost, S_SHARC+∼𝒪( N^3/ϵ^2 + 2( N^2 ∼ N^3)/ϵ^2). The superior scaling of SHARC-VQE methods is highlighted in Figure <ref>. The reduced number of measurements helps in the error resistance of the SHARC-VQE algorithm, as highlighted by the improved performance against noise in Figure <ref>. 
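The shot-count arithmetic above can be made concrete with a few lines of NumPy. Given coefficients, worst-case Pauli variances, and a fixed shot budget, the sketch below compares the statistical error when the budget is spread over every ungrouped term with the error when a single grouped circuit receives the whole budget; the coefficient values are placeholders, and covariances between grouped terms are ignored in this rough estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_std(coeffs, variances, shots_per_term):
    """epsilon = sqrt( sum_i c_i^2 * Var[P_i] / S_i ) for ungrouped terms."""
    return np.sqrt(np.sum(coeffs**2 * variances / shots_per_term))

n_terms = 200                       # ungrouped Pauli strings in the full Hamiltonian
coeffs = rng.normal(scale=0.1, size=n_terms)
variances = np.ones(n_terms)        # worst case: Var[P_i] <= 1
total_shots = 2_000_000

# Full Hamiltonian: the shot budget is split over every term.
eps_full = energy_std(coeffs, variances, total_shots / n_terms)

# Partial Hamiltonian (all I/Z strings): one grouped circuit gets the whole
# budget, so every term is estimated from the same `total_shots` bitstrings.
eps_partial = energy_std(coeffs, variances, total_shots)

print(f"ungrouped error ~  {eps_full:.2e}")
print(f"grouped error  <=  {eps_partial:.2e}  (same budget, single circuit)")
```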
§ CONCLUSIONS In this work, we have presented the SHARC-VQE (Simplified Hamiltonian approach with refinement and correction enabled variational quantum eigensolver) method, which offers a significant advancement in performing quantum chemical calculations on noisy intermediate-scale quantum devices. By partitioning the full molecular Hamiltonian into a more manageable Partial Hamiltonian and a less significant correction term, this method addresses the primary bottleneck of evaluating expectation values of a large number of Pauli strings in VQEs. The partial Hamiltonian approach introduced by SHARC-VQE reduces computational costs and enhances the accuracy and reliability of molecular simulations by mitigating noise from quantum circuits. SHARC-VQE performs exceptionally well in ideal quantum settings, converging to the ground state energy well within chemical accuracy with the full Hamiltonian VQE. SHARC-VQE also performs well in the presence of noise, outperforming the full Hamiltonian method, with the errors in energy dropping from 30-40% (in the latter) to 5-10% (in the former). SHARC-VQE is shown to reduce the computational costs for molecular simulations significantly. The cost of a single energy measurement can be reduced from 𝒪(N^4/ϵ^2) to 𝒪(1/ϵ^2) for a system of N qubits and accuracy ϵ. The overall cost of VQE can be reduced from 𝒪(N^7/ϵ^2) to 𝒪(N^3/ϵ^2), which is much better than the existing simplification techniques, scaling at best at 𝒪(N^5/ϵ^2). Furthermore, the method's utility as an initialization technique for the Fermi Hubbard model underscores its potential to optimize starting points for more elaborate quantum simulations. Overall, SHARC-VQE improves the efficiency, accuracy, and practicality of VQEs in quantum chemistry, paving the way for more effective and scalable quantum simulations of molecular properties. § CONFLICT OF INTEREST There are no conflicts to declare. § ACKNOWLEDGEMENTS This work used the Supercomputing facility of IIT Kharagpur established under the National Supercomputing Mission (NSM), Government of India, and supported by the Centre for Development of Advanced Computing (CDAC), Pune. HS acknowledges the Ministry of Education, Govt. of India, for the Prime Minister's Research Fellowship (PMRF). § AUTHOR CONTRIBUTION HS: Data Curation, Formal Analysis, and Original Draft Writing. S. Majumder and S. Mishra: Supervision, Review & Editing.
http://arxiv.org/abs/2407.13309v1
20240718091308
Exposure Completing for Temporally Consistent Neural High Dynamic Range Video Rendering
[ "Jiahao Cui", "Wei Jiang", "Zhan Peng", "Zhiyu Pan", "Zhiguo Cao" ]
cs.CV
[ "cs.CV", "cs.MM" ]
School of AIA, Huazhong University of Science and Technology, smartcjh@hust.edu.cn; School of AIA, Huazhong University of Science and Technology, jwei0303@hust.edu.cn; School of AIA, Huazhong University of Science and Technology, peng_zhan@hust.edu.cn; School of AIA, Huazhong University of Science and Technology, zhiyupan@hust.edu.cn; Corresponding author: School of AIA, Huazhong University of Science and Technology, zgcao@hust.edu.cn § ABSTRACT High dynamic range (HDR) video rendering from low dynamic range (LDR) videos whose frames are captured with alternating exposures encounters significant challenges, due to the exposure change and absence at each time stamp. The exposure change and absence make existing methods generate flickering HDR results. In this paper, we propose a novel paradigm to render HDR frames by completing the absent exposure information, so that the exposure information is complete and consistent. Our approach involves interpolating neighbor LDR frames in the time dimension to reconstruct LDR frames for the absent exposures. Combining the interpolated and given LDR frames, the complete set of exposure information is available at each time stamp. This benefits the fusing process for HDR results, reducing noise and ghosting artifacts and therefore improving temporal consistency. Extensive experimental evaluations on standard benchmarks demonstrate that our method achieves state-of-the-art performance, highlighting the importance of absent exposure completing in HDR video rendering. The code is available at https://github.com/cuijiahao666/NECHDR. [500]Computing methodologies Artificial intelligence [500]Computing methodologies Computer vision [500]Computing methodologies Image and video acquisition [500]Computing methodologies Computational photography [Figure: (a) Indoor scene with saturation and motions. (b) Outdoor scene with noise and motions.] Qualitative comparison between the proposed method and HDRFlow <cit.> on two different scenes. The input low dynamic range (LDR) videos consist of frames with alternate exposures, meaning that the exposure changes and part of the exposure information is missing at every time stamp (1-st row). Prior arts, e.g., the HDRFlow, struggle to achieve temporally consistent high dynamic range (HDR) results due to the exposure change and the motion that causes information loss (2-nd row). In contrast, the proposed method achieves temporally consistent HDR reconstruction results (4-th row) by completing the absent exposure frames (3-rd row) at each time stamp. “EV”: the exposure value for each input LDR frame.
“EC”: our exposure completing results for the absent exposure corresponding to the input LDR frames. § INTRODUCTION Compared with the high dynamic range (HDR) of natural scenes, the standard digital camera can only capture low dynamic range (LDR) information. To achieve the goal of “what you see is what you get” in computational photography <cit.>, delicate optical systems <cit.> are designed to simultaneously capture LDR images with different exposures which cover the whole dynamic range to reconstruct HDR results. However, these systems are expensive and not portable, making HDR content not easily accessible to general users <cit.>. To address this limitation, computational methods <cit.> are designed to render HDR videos from LDR videos whose frames are exposed alternately with different exposures (as shown in the 1-st row in Fig. <ref>). In this setting, the incomplete exposure information varies along the time dimension. Existing neural HDR rendering methods <cit.> align neighbor LDR frames according to the LDR frame of the current time stamp (called the reference frame in this paper) and fuse the aligned neighbor LDR frames into the reference frame to supplement the missing exposure information. The exposure of the reference frame changes at every time stamp, which means that reference frames with different exposures may have different defects: saturation for highly exposed LDR frames, and noise for lowly exposed ones. Because the methods mentioned above depend heavily on reference frames, HDR results from these methods may inherit the defects of the reference frames, suffering from artifacts, e.g., ghosting (2-nd row in Fig. <ref>(a)) and noise (2-nd row in Fig. <ref>(b)), thereby causing temporal inconsistency. To tackle this problem, we revisit early traditional methods <cit.> for HDR rendering and are motivated to complete the missing exposure information for each time stamp <cit.>. This idea follows the philosophy of optical HDR systems <cit.> that are designed to obtain differently exposed LDR frames at exactly the same time. In this way, the defects of differently exposed frames can be mutually compensated. In this paper, we propose the Neural Exposure Completing HDR (NECHDR) framework to reconstruct the LDR frames with missing exposure information. In this way, full exposure information at each time stamp can be covered by the reference and completed LDR frames; therefore, the exposure information is complete and consistent along the time dimension, which benefits the temporal consistency of the rendered HDR videos. In the proposed NECHDR framework, pyramid features of input LDR frames are extracted by the feature encoder and then fed into the exposure completing decoder and the HDR rendering decoder. The exposure completing decoder interpolates the features of neighbor LDR frames at every level of the feature pyramid. The interpolated LDR features are combined with the features from the input LDR images as the inputs to the HDR rendering decoder. The HDR rendering decoder estimates coarse HDR results and the optical flows at every level. The flows facilitate feature interpolation in the exposure completing decoder.
At the end of NECHDR, a simple blending network is used to integrate interpolated LDR frames, input frames, and coarse HDR frames, which can achieve high-quality and temporally consistent HDR reconstruction results. Extensive experimental results on multiple public benchmarks demonstrate the superiority of the proposed NECHDR: by completing the missing exposure information (3-rd row in Fig. <ref>), our method mitigates ghosting resulting from large motions for the time stamps with highly exposed reference frames (4-th row in Fig. <ref>(a)), and reduces the noise level for the lowly exposed reference frames (4-th row in Fig. <ref>(b)), hence achieves better temporal consistency. Our work sheds light again on the exposure completing for HDR video rendering. The contributions can be summarized as follows: ∙ Our work firstly implements the idea of exposure completing for neural HDR rendering. ∙ Our work proposes a novel HDR video rendering framework, a.k.a., the NECHDR, which completes the missing exposure information by interpolating LDR frames. ∙ Our NECHDR achieves new state-of-the-art performance on current benchmarks. § RELATED WORK §.§ HDR Image Rendering HDR imaging technology aims to extend the dynamic range of images by merging LDR images with different exposures. Both the HDR image and video tasks encounter the misalignment between LDR images. Pioneer studies for HDR image rendering employ image alignment techniques to address this issue. Early works <cit.> often align LDR images globally, then detect and discard unaligned regions. Directly discarding these regions leads to significant information loss, posing challenges for HDR image rendering. To address these issues, optical-flow-based <cit.> and patch-based methods <cit.> are proposed. However, large motion still makes these methods fail to produce artifact-free HDR results. With the rapid development of deep learning, many studies for HDR image rendering switch their focus to neural networks <cit.>, achieving promising results. Kalantari  <cit.> propose using neural networks to align input LDR images with predicted optical flow. Wu  <cit.> directly map LDR images to HDR images, thereby avoiding alignment errors. Yan  <cit.> use spatial attention to implicitly align LDR images, achieving further improvement. Liu <cit.> introduce a ghost-free imaging model based on swin-transformer <cit.>. Yan  <cit.> utilize patch-based and pixel-based fusion to search for information to complement the reference frame from the other frames. However, these HDR image rendering methods assume a fixed exposure for reference frames, typically medium exposure, therefore is not suitable for HDR video rendering where reference frames have alternating exposures. §.§ HDR Video Rendering HDR video can be obtained by delicate optical systems, such as scanline exposure/ISO <cit.>, internal/external beam splitter <cit.>, modulo camera <cit.> and neuromorphic camera <cit.>. However, these expensive systems are hardly accessible to general users. Therefore, rendering HDR videos from LDR videos is investigated. Kang  <cit.> utilize both global and local alignment for reference frames with neighbor frames. Kalantari  <cit.> propose a patch-based optimization algorithm to reconstruct HDR video by synthesizing missing exposures for each frame. These traditional methods offer a fundamental idea for completing missing exposures, but they require iterative optimization and are prone to producing artifacts. 
Recently, Kalantari  <cit.> introduce an end-to-end neural network consisting of an optical flow prediction model and a fusion network. Chen  <cit.> propose a coarse-to-fine alignment framework, which utilizes optical flow for coarse alignment and then employs deformable convolution <cit.> for fine alignment. Chung  <cit.> employ spatial attention instead of optical flow to achieve alignment from adjacent frames to the reference frame. Xu  <cit.> propose the HDR-domain loss, which uses optical flow supervised by HDR frames for LDR alignment, yielding favorable results. However, when complex situations such as large motion, saturation, and others occur simultaneously, these methods struggle to achieve precise alignment, resulting in artifacts in the final outputs. Besides, as the exposures of reference frames alternate, the reconstructed HDR frames tend to inherit the defects of reference frames, leading to flicker in the output videos. The fundamental issue lies in the lack of exposure information at every time stamp. Different from previous methods, we propose a renaissance work which focuses on completing missing exposures for the reference frame in neural HDR video rendering. Through compensating for the defects of the alternatively exposed reference frames by utilizing the completed results, our approach avoids artifacts of ghost and noise and obtains a temporally consistent HDR video result. § PROPOSED METHODS §.§ Overview Each frame l_i in the LDR video L={l_i|i=1,...,n} is expected to have multiple exposures 𝐄={ϵ_i|i=1,...,z} to cover the whole dynamic range of HDR video H = {h_i|i = 1,...n}, where n is the length of video, and z is the number of exposures. However, in our task, only one exposure can be obtained in one certain time stamp, which creates an exposure sequence E={e_i|i = 1,...n} for the LDR video, where e_i = ϵ_(i⊘ z)+1. a⊘ b indicates the remainder of a divided by b. Following previous work <cit.>, we try to render a high-quality HDR video with LDR video under two settings of different alternate exposures: two alternate exposures (z = 2) with three frames as input, and three alternate exposures (z = 3) with five frames. We use the two-exposure setting as an example to demonstrate how to achieve HDR rendering. In Sec. <ref>, we introduce how to extend our approach to the three-exposure setting. The frame l_t corresponding to the current time stamp t, is treated as the reference frame. l_t is combined with its neighbors as input {l_t-1,l_t,l_t+1}. To render HDR frames, we explicitly complete frames l_t with absent exposure ϵ_i, where i≠ (t⊘ z)+1. Therefore, with complete exposure information, we can render a high-quality HDR video. Our method consists of two processes: exposure completing for the frame with the absent exposure at the time stamp t and coarse-to-fine HDR rendering. These two processes are coupled together through an encoder-decoder architecture, as illustrated in the Fig. <ref>. The feature encoder ℰ takes LDR frames {l_t-1, l_t, l_t+1} as input and generates pyramid features for each frame. As the spatial dimensions decrease and the feature channels increase, the encoder outputs feature sets {Φ_t-1,Φ_t,Φ_t+1} for input frames, each of which is a set of k-level pyramid features Φ={ϕ^k|k=1,2,3,4}. Considering the distinct characteristics of the two processes, we devise two dedicated decoders. Initially, an exposure completing decoder 𝒟_I is employed to interpolate frame l_t and its corresponding features Φ_t^k={ϕ_t^k|k=1,2,3}. 
Subsequently, a coarse-to-fine HDR rendering decoder 𝒟_R takes {Φ_t-1,Φ_t,Φ_t+1} and Φ_t as input, and outputs optical flows F_t-1 → t = {f_t-1 → t^k|k=0,1,2,3}, F_t+1 → t = {f_t+1 → t^k|k=0,1,2,3}, coarse HDR features Ψ_t={ψ_t^k|k=1,2,3} and coarse HDR frame h_t^c. The optical flows here are utilized to warp {Φ_t-1,Φ_t+1} or {l_t-1,l_t+1} to time stamp t, yielding warped neighbor features Φ_t-1={ϕ_t-1^k|k=1,2,3}, Φ_t+1={ϕ_t+1^k|k=1,2,3} or frames {l_t-1,l_t+1}. The warped neighbor features {Φ_t-1, Φ_t+1} and frames {l_t-1,l_t+1} are fed into 𝒟_I. Subsequently, the exposure completing features Φ_t from 𝒟_I are fed into the HDR rendering decoder 𝒟_R, assisting it in better restoring lost details due to saturation and noise when decoding coarse HDR features Ψ_t and coarse HDR frame h_t^c. This design coupling two decoders enables mutual enhancement of the two processes, thus obtaining more accurate exposure completing frame l_t and better coarse HDR result h_t^c. To obtain the final HDR rendering result, we map {l_t-1, l_t, l_t+1} and l_t to the linear HDR domain. The function defining the mapping of LDR frames to the linear HDR domain is as follows: x_t=l_t^γ / e_t, where γ is a hyperparameter and e_t is the exposure time of l_t. Finally, the ultimate high-quality HDR rendering result h_t is obtained by fusing the input and completed frames with coarse HDR frame in linear HDR domain using a simple blending network. The details of each process will be presented in the following sections. §.§ Exposure Completing Under the setting that input LDR frames are with alternating exposures, completing the absent exposure information is essential for rendering HDR frames. However, existing neural HDR rendering methods overlook this essential problem, making it difficult for these methods to accurately recover detailed information when motion occurs with saturation or noise simultaneously, potentially leading to artifacts. Furthermore, due to these methods heavily depend on reference frames, the alternate appearance of noise and saturation caused by the alternate exposures in reference frames affects the temporal consistency of the video. Therefore, to better address the aforementioned issues, we explicitly tackle the fundamental problem of HDR rendering by completing frames with absent exposure corresponding to reference frames. Given the LDR frame input{l_t-1, l_t, l_t+1} with alternate exposures {e_t-1, e_t, e_t+1} , where the reference frame and neighbor frames have two different exposures ( e_t=ϵ_1 and e_t-1=e_t+1=ϵ_2 ), our task is to complete the frame l_t with absent exposure ϵ_2 corresponding to reference frame through interpolation. Our exposure completing process takes the neighbor features {Φ_t-1,Φ_t+1} and the optical flows {F_t-1 → t, F_t+1 → t} as input. During this process for the features with absent exposure information, the k-th level neighbor features {ϕ_t-1^k,ϕ_t+1^k} and optical flows {f_t-1 → t^k, f_t+1 → t^k} are processed to obtain completed feature ϕ_t^k: ϕ_t-1^k,ϕ_t+1^k=𝒲(ϕ_t-1^k,f_t-1 → t^k),𝒲(ϕ_t+1^k,f_t+1 → t^k), ϕ_t^3 = 𝒟^3_I([ ϕ_t-1^3, ϕ_t+1^3]), ϕ_t^k = 𝒟^k_I([ ϕ_t-1^k, ϕ_t+1^k,𝒰_2(ϕ_t^k+1)]), where 𝒲(·, ·) denotes using optical flow to warp neighbor features to the reference feature, 𝒰_2 represents the bilinear upsampling operation with scale factor 2, 𝒟_I^k(k = 1, 2) represents the middle levels of the exposure completing decoder 𝒟_I, and [·] indicates concatenation operation. 
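A minimal PyTorch sketch of one level of this flow-guided completion step is given below. The backward-warping routine and the small convolutional block stand in for 𝒲 and 𝒟_I^k respectively, and the channel sizes are illustrative rather than those of the actual NECHDR implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(feat, flow):
    """Warp `feat` (B,C,H,W) towards the reference frame with `flow` (B,2,H,W)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)      # (2,H,W)
    coords = grid.unsqueeze(0) + flow                                 # sampling positions
    # Normalise coordinates to [-1, 1] as expected by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)             # (B,H,W,2)
    return F.grid_sample(feat, grid_norm, align_corners=True)

class CompletionLevel(nn.Module):
    """One level of D_I: fuse the two warped neighbour features and the
    upsampled completion feature coming from the coarser level."""
    def __init__(self, ch_in, ch_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch_in, ch_out, 3, padding=1), nn.PReLU(),
            nn.Conv2d(ch_out, ch_out, 3, padding=1),
        )

    def forward(self, feat_prev, feat_next, flow_prev, flow_next, coarse):
        warped = [backward_warp(feat_prev, flow_prev),
                  backward_warp(feat_next, flow_next),
                  F.interpolate(coarse, scale_factor=2, mode="bilinear",
                                align_corners=False)]
        return self.net(torch.cat(warped, dim=1))

# Example shapes for an intermediate level (illustrative channel counts).
feat_prev = torch.randn(1, 64, 32, 32)
feat_next = torch.randn(1, 64, 32, 32)
flow = torch.zeros(1, 2, 32, 32)
coarse = torch.randn(1, 64, 16, 16)
level = CompletionLevel(ch_in=64 * 3, ch_out=64)
completed = level(feat_prev, feat_next, flow, flow, coarse)   # (1,64,32,32)
```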
In the final step of the exposure completing process, the completed frame l_t is explicitly interpolated with neighbor frames l_t-1, l_t+1 and optical flow f_t-1 → t^0, f_t+1 → t^0 as input: l_t-1,l_t+1=𝒲(l_t-1,f_t-1 → t^0),𝒲(l_t+1,f_t+1 → t^0), l_t = 𝒟^0_I([ l_t-1, l_t+1,𝒰_2(ϕ_t^1)]), where 𝒟_I^0 represents the highest level of the exposure completing decoder 𝒟_I. As illustrated in the Fig. <ref>, with the above design, we have obtained realistic exposure completing result for the challenging areas with motion, saturation and noise, providing accurate and complete exposure information for subsequent HDR rendering. §.§ Coarse-to-fine HDR Rendering Building upon the results of above process, we designed a coarse-to-fine HDR rendering network based on completed features Φ_t and frame l_t. Taking the original features {Φ_t-1, Φ_t, Φ_t+1}, completed features Φ_t and warped neighbor features {Φ_t-1, Φ_t+1} as input, the HDR rendering decoder 𝒟_R reconstructs HDR features Ψ_t and optical flows {F_t-1 → t, F_t+1 → t} in a coarse-to-fine manner, and finally obtains the coarse HDR frame h_t^c: ψ_t^3,f_t-1 → t^3, f_t+1 → t^3 =𝒟_R^4([ϕ_t-1^4, ϕ_t^4, ϕ_t+1^4]), ψ_t^k-1, f_t-1 → t^k-1, f_t+1 → t^k-1= 𝒟^k([ϕ_t^k, ϕ_t^k, ψ_t^k, f_t-1 → t^k, f_t+1 → t^k, ϕ_t-1^k, ϕ_t+1^k]), h_t^c, f_t-1 → t^0, f_t+1 → t^0= 𝒟^1([ϕ_t^1, ϕ_t^1, ψ_t^1, f_t-1 → t^1, f_t+1 → t^1, ϕ_t-1^1, ϕ_t+1^1]), where 𝒟_R^k(k=2,3) stand for the k-th level of the HDR rendering decoder 𝒟_R. The warped neighbor features {ϕ_t-1^k, ϕ_t+1^k} help 𝒟_R to identify regions with poor alignment and subsequently obtain the refined optical flow {f_t-1 → t^k-1, f_t+1 → t^k-1}. Based on the completed frame l_t and the coarse HDR frame h_t^c, we combine them with original LDR frames to obtain the final HDR rendering result h_t. We adopted a blending network with U-net architecture from Xu  <cit.>. Thus, original and completed LDR frames {l_t-1,l_t,l_t+1,l_t}, along with their corresponding frames {x_t-1,x_t,x_t+1,x_t} in linear HDR domain, and coarse HDR frame h_t^c are fed into the blending network. The blending network calculates fusion weights W={w_i|i=0,1,2,3,4} for the input five HDR domain frames {x_t-1,x_t,x_t+1,x_t, h_t^c} and obtains the final HDR rendering result h_t through a weighted average based on the computed weights W: h_t=w_0 h_t^c + w_1 x_t + w_2 x_t +w_3 x_t-1+w_4 x_t+1/∑_j=0^4 w_j. Following Xu  <cit.>, we also fuse with the neighbor frames {x_t-1, x_t+1} to provide information about static regions. As depicted in Fig. <ref>, when encountering complex scenes involving both motion and saturation or noise, the fused exposure completion frames effectively provide the missing exposure information, thereby obtaining ghost-free and low-noise HDR results. In summary, based on completing the frame with missing exposure information, we finally achieve a high-quality HDR rendering process. §.§ Training Loss We calculate the losses for the completed LDR features Φ_t and frame l_t, rendered HDR features Ψ_t and frame h_t, and optical flows {F_t-1 → t, F_t+1 → t}: The losses for images. The widely adopted ℒ_1 loss is used to supervise the completed LDR frame l_t and rendered HDR frame h_t, and is defined as follows: ℒ^com_I = l_t-l_t_1, ℒ^ren_I = 𝒯(h_t)-𝒯(h_t)_1, ℒ_I = ℒ^com_I + ℒ^ren_I, where l_t and h_t are the corresponding ground truth for the completed LDR frame l_t and rendered HDR frame h_t, respectively. 
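The image losses can be written compactly as in the sketch below; the tone-mapping operator 𝒯 used in the rendering term is spelled out in the next paragraph, and the implementation here assumes the conventional log(1 + μ) normalisation of the μ-law mapping.

```python
import torch

def mu_law_tonemap(h, mu=5000.0):
    """Differentiable mu-law mapping T(h) = log(1 + mu*h) / log(1 + mu),
    assuming h has been normalised to [0, 1]."""
    return torch.log(1.0 + mu * h) / torch.log(torch.tensor(1.0 + mu))

def image_losses(ldr_completed, ldr_gt, hdr_pred, hdr_gt):
    """L_I = ||l_hat - l||_1 + ||T(h_hat) - T(h)||_1."""
    loss_completion = torch.mean(torch.abs(ldr_completed - ldr_gt))
    loss_rendering = torch.mean(torch.abs(mu_law_tonemap(hdr_pred) -
                                          mu_law_tonemap(hdr_gt)))
    return loss_completion + loss_rendering

# Example with random tensors standing in for a small batch of frames.
l_hat, l_gt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
h_hat, h_gt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
loss = image_losses(l_hat, l_gt, h_hat, h_gt)
```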
The 𝒯(·) is a widely used function to map HDR frame to the tone-mapped HDR domain, since HDR images are typically displayed after tone mapping. This simple differentiable μ-law function 𝒯(·) is defined as follows: 𝒯(h)=log (1+μ h)/log (1+h), where μ is a hyperparameter. The losses for features. Feature space geometry loss proposed in <cit.> is employed to ensure that the intermediate features obtained from frame interpolation and HDR reconstruction can be more effectively refined to conform to geometrically structured features. The parameter shared encoder ℰ is used to obtain corresponding pyramid features Φ_t = {ϕ_t^k|k=1,2,3} and Ψ_t = {ψ_t^k|k=1,2,3} from the ground truth of completed LDR and rendered HDR frames. Then, the loss function for supervising the features of completed LDR and rendered HDR frames can be written as: ℒ^com_G=∑_k=1^3 ℒ_c e n(ϕ_t^k, ϕ_t^k), ℒ^ren_G=∑_k=1^3 ℒ_c e n(ψ_t^k, ψ_t^k), ℒ_G = ℒ^com_G + ℒ^ren_G, where the ℒ_c e n is census loss <cit.> and computed using the soft Hamming distance between census-transformed <cit.> feature maps, considering 3×3 patches in a channel-by-channel manner. The loss for optical flows. The HDR-domain alignment loss <cit.> is used hierarchically to supervise the learning process of optical flow, which is defined as follows: ℒ_F^t-1 → t=∑_k=0^3 𝒲(𝒯(h_t-1), 𝒰_2^k(f_t-1 → t^k)) - 𝒯(h_t)_1, ℒ_F^t+1 → t=∑_k=0^3 𝒲(𝒯(h_t+1), 𝒰_2^k(f_t+1 → t^k)) - 𝒯(h_t)_1, ℒ_F =(1-m_t) ⊙(ℒ_F^t-1 → t + ℒ_F^t+1 → t), where 𝒲(·, ·) denotes using optical flow to warp neighbor frames to the reference frame, 𝒰_s represents the bilinear upsampling operation with scale factor s. The mask m_t indicates well-exposed regions in reference frame. First, l_t is converted to YCbCr color space to extract luminance y. Then, m_t is defined as δ_low < y < δ_high, where δ_low and δ_high are the low and high luminance thresholds, respectively. This allows the optical flow computation to focus more on regions with poor exposure in the reference frame. Total loss. Our total loss can be summarized as follows: ℒ_total = ℒ_I+α * ℒ_G + β * ℒ_F . §.§ Extension to Three Exposures In the alternate exposure setting with three exposures, five LDR frames {l_t-2, l_t-1, l_t, l_t+1, l_t+2} are used as input to the network for HDR h_t rendering. The five LDR frames are with corresponding exposure sequences {e_t-2, e_t-1, e_t, e_t+1, e_t+2}. In this way, the reference frame has only one certain exposure e_t=ϵ_3. Specifically, the first frame l_t-2 and the fourth frame l_t+1 have the same exposure e_t-2=e_t+1=ϵ_1, while the second frame l_t-1 and the fifth frame l_t+2 also share the same exposure e_t-1=e_t+2=ϵ_2. Therefore, the exposure completion process takes {l_t-2, l_t,l_t+1} and {l_t-1, l_t, l_t+2} as inputs, generates exposure completing results {l_t^ϵ_1, l_t^ϵ_2} through the flow-guided completing process, and obtains coarse HDR results h_t^c through the coarse-to-fine HDR rendering process. A total of fifteen images, including the exposure completing frames and multiple original LDR frames, along with their corresponding frames in linear HDR domain, and the coarse HDR result, are fed into the blending network. This blending network calculates seven weights to fuse the seven HDR domain images in a weighted average manner and obtain the final HDR rendering result. § EXPERIMENTS §.§ Experimental Setup Datasets. We utilize synthetic training data generated from the Vimeo-90K dataset <cit.>. 
To adapt the Vimeo90K dataset for HDR video reconstruction, we follow prior research <cit.> to convert the original data into LDR sequences with alternate exposures. To create the ground truth of completed LDR frames, we also generated LDR sequences with missing exposures in the same way. Our framework is tested on the Cinematic Video dataset <cit.> and DeepHDRVideo dataset <cit.>. The Cinematic Video dataset has two synthetic videos from indoor and outdoor scenes. The DeepHDRVideo dataset <cit.> contains both real-world dynamic scenes and static scenes with random global motion augmentation. The HDRVideo dataset <cit.> is employed solely for qualitative evaluation, as it lacks ground truth. Implementation details. We implement our approach using PyTorch and conduct experiments on an NVIDIA RTX3090 GPU. We employ AdamW optimizer <cit.> with β_1 = 0.9 and β_2 = 0.999. The learning rate is set to 10^-4. In our experiments, we set γ in Eq. <ref> to 2.2 and μ in Eq. <ref> to 5000. Following HDRFlow <cit.>, we set δ_low to 0.2 and δ_high to 0.8. The weighting hyperparameters α and β for the loss function in Eq. <ref> are set to 0.01. Evaluation metrics. PSNR_T, SSIM_T and HDR-VDP-2 <cit.> are adopted as the evaluation metrics. PSNR_T and SSIM_T are computed on the tone-mapped images. HDR-VDP-2 is computed with the number of pixels per visual degree set to 30, which means the angular resolution of the image. §.§ Comparisons with State-of-the-art Quantitative comparisons between our and other state-of-the-art methods on the Cinematic Video <cit.> and DeepHDRVideo <cit.> datasets are shown in Table <ref> and Table <ref>, respectively. Compared to state-of-the-art methods, our approach consistently achieves superior or comparable performance. Especially, our approach achieves state-of-the-art performance on Cinematic Video <cit.> dataset, outperforming the second-best method by 1.29dB and 0.59dB in terms of PSNR_T for the 2-exposure and 3-exposure settings, respectively. Qualitative comparisons are shown in Fig. <ref> and Fig. <ref>. we compare our NECHDR with the previous methods: Chen <cit.>, LAN-HDR <cit.> and HDRFlow <cit.>. Fig. <ref> illustrates the results from scenes with saturation and motion on Cinematic Video dataset <cit.> and HDRVideo dataset <cit.> under the 2-exposure setting. Apart from our method, other methods tend to exhibit severe artifacts or detail loss when saturation and motion occur simultaneously. In such challenging scenarios, we achieve accurate and artifact-free exposure completion results for saturated regions by leveraging frame interpolation from neighboring frames. This enables us to fuse high-quality HDR results. And in Fig. <ref>, we also show the results encountering noise and motion from the Cinematic Video dataset <cit.> and DeepHDRVideo dataset <cit.>. The scene on the left side of Fig. <ref> is captured in low-light conditions, resulting in very low signal-to-noise ratio in the low-exposure frames. This can lead to very high noise levels in rendered HDR frames after tone mapping, which makes this scene particularly challenging. Specifically, methods based on optical flow <cit.> tend to produce noisy artifacts, while attention-based method <cit.> exhibit more pronounced noise. Our method explicitly reconstructs the absent high-exposure frame with low noise at the current time stamp, achieving the best noise suppression, even with some regions having noise intensity lower than ground truth. 
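For reference, the tone-mapped PSNR used in the evaluation can be sketched as follows; the peak value of 1.0 and the per-frame averaging are assumptions about the usual convention rather than the exact evaluation script.

```python
import numpy as np

def mu_law(h, mu=5000.0):
    """Mu-law tone mapping of a linear HDR frame normalised to [0, 1]."""
    return np.log(1.0 + mu * h) / np.log(1.0 + mu)

def psnr_t(hdr_pred, hdr_gt, mu=5000.0):
    """PSNR computed on tone-mapped frames (PSNR_T), peak value 1.0."""
    mse = np.mean((mu_law(hdr_pred, mu) - mu_law(hdr_gt, mu)) ** 2)
    return 10.0 * np.log10(1.0 / mse)

# Toy example: a slightly perturbed ground-truth frame.
gt = np.clip(np.random.rand(128, 128, 3), 0, 1)
pred = np.clip(gt + np.random.normal(scale=0.01, size=gt.shape), 0, 1)
print(f"PSNR_T = {psnr_t(pred, gt):.2f} dB")
```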
The visualization results above explain why we achieve a significant performance improvement compared with other state-of-the-art methods. §.§ Analysis Ablation Study. We conduct ablation experiments under the 2-exposure setting on Cinematic Video dataset <cit.> and DeepHDRVideo dataset <cit.>, and the quantitative and qualitative results are shown in Table <ref> and Fig. <ref>, respectively. We devised a baseline that employs IFRNet <cit.> to utilize the neighbor frames to complete the middle time frame that with missing exposure information and achieve HDR results by simply fusing the completed LDR frame with the original LDR frames. The performance of this baseline is shown as “EC Baseline” in the first row in Table <ref>. Another baseline directly renders HDR results based on IFRNet, which is the “HR Baseline” in second row in Table <ref>. However, relying solely on either exposure completing or HDR rendering results is with limited performance. By adding the exposure completing decoder, we get a network (third row in Table <ref>) that contains both decoupled exposure completing and HDR rendering processes. Finally, through feeding the completed features and frames from exposure completing decoder into the process of HDR rendering, we achieve our NECHDR framework in the forth row in Table <ref>. Based on the qualitative and quantitative results, we can see: (a) coupled exposure completing and HDR rendering processes benefit the quality of final HDR results; (b) exposure completing decoder is necessary; (d) the completed results with missing exposure information help the rendering process of HDR reconstruction. Temporal consistency. We show the visual comparisons of temporal consistency in Fig. <ref>. In Fig. <ref>, we record a two-pixel-height line traversing all frames of a scene in the Cinematic Video dataset <cit.> over time and lay them out sequentially to form temporal profiles. Base on the illustration of temporal profiles, we can observe that the horizontal stripes exist in the temporal profiles of other methods. The horizontal stripes comes from the differences between adjacent frames, which represents the temporal inconsistency. In contrast, the horizontal stripes can hardly be observed in our temporal profiles, which means that our proposed method achieves better temporal consistency. § CONCLUSION In this paper, we implement the idea of exposure completing for neural HDR video rendering and propose the Neural Exposure Completing HDR (NECHDR) framework. The NECHDR leverages interpolation of neighbor LDR frames to complete missing exposures, providing a complete set of exposure information for each time stamp. This process of exposure completing creates a novel neural HDR video rendering pipeline, which can generate results of less noise and ghosting artifacts, thereby enhancing temporal consistency. Experimental results on multiple public benchmarks demonstrate the superiority of our NECHDR, which may shift the focus of researchers in this area to the exposure completing. ACM-Reference-Format
http://arxiv.org/abs/2407.12445v1
20240717095419
A Comprehensive Sustainable Framework for Machine Learning and Artificial Intelligence
[ "Roberto Pagliari", "Peter Hill", "Po-Yu Chen", "Maciej Dabrowny", "Tingsheng Tan", "Francois Buet-Golfouse" ]
cs.LG
[ "cs.LG", "cs.CY", "I.2.0" ]
Roberto Pagliari (JPMorgan), Peter Hill (JPMorgan), Po-Yu Chen (JPMorgan and Imperial College London; corresponding author, email: po-yu.chen11@imperial.ac.uk), Maciej Dabrowny (JPMorgan), Tingsheng Tan (JPMorgan), Francois Buet-Golfouse (JPMorgan) § ABSTRACT In financial applications, regulations or best practices often lead to specific requirements in machine learning relating to four key pillars: fairness, privacy, interpretability and greenhouse gas emissions. These all sit in the broader context of sustainability in AI, an emerging practical AI topic. However, although these pillars have been individually addressed by past literature, none of these works has considered all the pillars together. There are inherent trade-offs between the pillars (for example, accuracy vs fairness or accuracy vs privacy), making it even more important to consider them jointly. This paper outlines a new framework for Sustainable Machine Learning and proposes FPIG, a general AI pipeline that allows these critical topics to be considered simultaneously in order to learn the trade-offs between the pillars better. Based on the FPIG framework, we propose a meta-learning algorithm to estimate the four key pillars given a dataset summary, model architecture, and hyperparameters before model training. This algorithm allows users to select the optimal model architecture for a given dataset and a given set of user requirements on the pillars. We illustrate the trade-offs under the FPIG model on three classical datasets and demonstrate the meta-learning approach with an example of real-world datasets and models with different interpretability, showcasing how it can aid model selection. § INTRODUCTION Artificial Intelligence has become an essential, rapidly emerging tool for all financial sectors <cit.>. However, the characterisation of AI extends beyond the realm of technology and permeates into the precincts of infrastructure <cit.> and ideology <cit.>, leading to an opacity around the concept of AI <cit.>. This nebulous nature of AI magnifies the challenges of effectively understanding and governing it while underscoring the need for malleability and interdisciplinary dialogue in AI ethics and governance. Consequently, this discourse does not gravitate towards a rigid definition of AI; rather, it embraces its polysemous essence and explores AI as a complex system <cit.>. The current landscape of AI ethics frameworks <cit.> is peppered with a proliferation of proposed principles and a conspicuous absence of uniformity across these frameworks. The initial environmental rights and climate justice movements were driven by the United Nations Climate Change Conferences and the Sustainable Development Goals (SDGs) <cit.>, along with the environmental, social and corporate governance (ESG) frameworks <cit.>. Unfortunately, while the environmental implications of AI are gradually entering the discourse <cit.>, the broader concept of sustainability in AI appears to be largely overlooked <cit.>. Recent literature has fostered only a narrow vision of sustainable AI <cit.>, neglecting the interconnected nature of various AI governance challenges. A holistic view of sustainable AI should be an amalgamation of three intertwined pillars: economic, environmental and social, necessitating a complex systems approach <cit.>. Financial institutions have particular duties in relation to AI that require close attention. The Information Commissioner's Office (ICO) has strict guidance on AI regarding interpretability, data protection and privacy <cit.>.
Additionally, there have been several recent developments from significant organisations relating to AI regulations, bolstering the importance of sustainable AI's key features. For example, the European Union has proposed the AI Act <cit.>, a European law on AI. The Bank of England has also recently updated their model risk management framework <cit.>, outlining the expectations of the Prudential Regulation Authority (PRA) for banks' management of model risk. They indicate the need for a robust approach to model risk and discuss five key principles within their framework, from governance to validation. The particular focus on model risk mitigants indicates the importance for banks to consider factors such as the interpretability and fairness of their models as part of the model selection stage. In addition, the FCA has also recently updated their consumer duty expectations <cit.>, raising the standards required by financial institutions from previous expectations. With the growing role AI is having, there is an increased expectation from the FCA for consumers to be at the forefront of model design. The EU AI act <cit.> has also been proposed recently, focusing on the risk of AI applications and categorising AI use into four risk levels, as well as imposing increased documentation and validation of models. In this paper, we first advocate for the adoption of the principles of sustainability science to AI, analysing AI through the lens of an unsustainable system. We then propose a Fair, Private, Interpretable and Green (FPIG) framework to address the above-mentioned pillars. These four features (as illustrated in Fig <ref>) are tightly associated with the concept of sustainability but, to the best of our knowledge, have yet to be tackled together under a single framework. In the FPIG framework, we first propose to integrate fairness <cit.> into our ML objective function to reduce the loss disparity across groups. This approach allows us to change the level of fairness required in our training, varying from a standard optimisation (with no additional fairness constraints) to a multi-objective scenario, where trade-offs across different metrics form the Pareto front. Secondly, we integrate the concept of differential privacy during the model training process. By adding noise during training, we ensure that good models are obtained for all other dimensions across varying degrees of differential privacy. The carbon dioxide (CO2) emission during model training and inference pipeline is tracked and monitored by an independent software package, CodeCarbon <cit.>. Ultimately, we propose a new meta-learning algorithm that helps users to find better AI models and hyperparameters (e.g., number of neural network layers) given a dataset and the three sustainability goals plus model interpretability. To demonstrate the effectiveness of the FPIG framework, we evaluate it with five independent datasets and four distinctive types of machine learning (ML) models, varying in interpretability. We compute the results across the different pillars. Our evaluation reveals common trade-offs from training models using our frameworks, such as trade-offs between accuracy, fairness and privacy. We also indicate the significant features when considering a meta-learning approach, allowing us to estimate the impact on accuracy, fairness, privacy and carbon emissions of training different models without needing to undertake the cost of actually training them. 
We demonstrate the usefulness of this approach on a particular example. The rest of this paper is organised as follows. A brief overview of the related work is given in Section <ref>. We then present the FPIG framework in Section <ref>. The experiment results with five distinctive datasets are presented in Section <ref>, and the paper is ultimately concluded in Section <ref>. § SUSTAINABILITY IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING The notion of "sustainable AI" has been proposed by a variety of researchers and practitioners <cit.> intending to emphasise the interconnection between AI and sustainability <cit.>. Nevertheless, the term "sustainable" is frequently interpreted as synonymous with "environmentally friendly." For instance, the "Sustainable AI" manifesto issued by Facebook AI <cit.> is solely focused on diminishing carbon emissions from AI systems whilst vowing to "advance the field of AI in an environmentally responsible manner". This exemplifies the challenge of encouraging stakeholders to embrace a multifaceted perspective on sustainability rather than confining it merely to environmental aspects. Fairness is one of the most important sustainability metrics according to the SDGs. Fairness in ML models refers to the absence of bias or discrimination in the predictions and decisions made by the models <cit.>. Technically speaking, fairness involves identifying and addressing biases in the data used to train the models and in the algorithms themselves. Techniques such as pre-processing the data to remove biased patterns, using specialised algorithms that explicitly consider fairness constraints during training, and employing fairness metrics to evaluate model performance can help achieve fairness in ML models <cit.>. Recent research has defined different fairness metrics for AI <cit.>. Among these definitions, group fairness metrics such as demographic parity <cit.>, equalised odds <cit.>, and social fairness ensure that different groups based on a protected attribute are treated equally. The impossibility theorems on fairness <cit.> show that these definitions cannot all be satisfied at once. There are also individual fairness and counterfactual fairness <cit.>, which are proposed to ensure fairness at the individual level. Fairness can be considered in either a supervised or an unsupervised learning setting. Recent research has also investigated incorporating multiple fairness objectives into ML models given a desired level of fairness, using group functionals <cit.>. Privacy in ML models pertains to preserving the confidentiality and security of sensitive data used for training and inference <cit.>. Privacy protection involves implementing mechanisms that prevent unauthorised access, use, or disclosure of personal information. Techniques like data anonymisation, encryption, and secure multi-party computation can be employed to protect privacy in ML models. Differential privacy (DP) has also been regarded as the gold standard in academia, as it provides a well-defined theoretical guarantee. It can be applied to ensure that individual data points are not distinguishable, thereby safeguarding privacy while maintaining the utility of the models <cit.>. Adding noise during training is one way to incorporate DP into ML models. For example, Differentially private stochastic gradient descent (DP-SGD) <cit.> makes deep learning models differentially private by modifying the mini-batch stochastic optimisation process during gradient descent <cit.>. 
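To make that mechanism concrete, the sketch below illustrates the per-example gradient clipping and calibrated noise addition at the heart of DP-SGD on a plain logistic-regression objective. It is a minimal NumPy illustration of the idea described above, not the pipeline used in this paper (which, as discussed below, relies on DP synthetic data instead), and it omits the privacy accounting that production libraries such as Opacus or TensorFlow Privacy perform.

```python
import numpy as np

def dp_sgd_logistic_regression(X, y, epochs=5, lr=0.1, batch_size=64,
                               clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Toy DP-SGD: per-example gradient clipping plus Gaussian noise.

    Illustrative only; a real implementation also tracks the privacy budget
    spent and reports the resulting (epsilon, delta) guarantee.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for start in range(0, n, batch_size):
            xb, yb = X[start:start + batch_size], y[start:start + batch_size]
            # Per-example gradients of the logistic loss: (sigmoid(w.x) - y) * x.
            p = 1.0 / (1.0 + np.exp(-xb @ w))
            per_example_grads = (p - yb)[:, None] * xb          # shape (B, d)
            # Clip each per-example gradient to L2 norm <= clip_norm.
            norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
            per_example_grads /= np.maximum(1.0, norms / clip_norm)
            # Sum, perturb with calibrated Gaussian noise, then average and step.
            noisy_sum = per_example_grads.sum(axis=0) + rng.normal(
                scale=noise_multiplier * clip_norm, size=d)
            w -= lr * noisy_sum / len(xb)
    return w
```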
Other approaches incorporate DP into data synthesis <cit.> and then train the model on the DP synthetic data, making these models more robust against DP attacks with exponentially many queries. Interpretability in ML models refers to understanding and explaining the reasoning behind the model's predictions or decisions <cit.>. Interpretability techniques involve feature importance analysis, rule-based approaches, and model-agnostic techniques <cit.>. These techniques provide insights into the factors influencing the model's output and enable humans to comprehend and validate the decision-making process. Also, layer-wise relevance propagation (LRP) or attention mechanisms can help identify relevant features or parts of the input contributing to the model's predictions. Greenhouse Gas (GHG) emissions associated with ML models refer to the carbon footprint generated during the model's lifecycle, including data processing, training, inference, and deployment <cit.>. At a technical level, reducing GHG emissions involves optimising the computational resources used for training by employing energy-efficient hardware and algorithms. Techniques like model compression, which reduces the model's size or complexity, can also contribute to lower energy consumption during inference <cit.>. Additionally, adopting hardware acceleration techniques, using distributed computing, and leveraging renewable energy sources can help minimise the environmental impact of ML models <cit.>. While there exists extensive literature working on improving model efficiency, which ultimately leads to lower greenhouse gas (GHG) emissions, explainable AI (xAI) has become an emerging area that attracts research <cit.>. A few works have investigated the trade-offs between computational efficiency and model explainability <cit.>. Nevertheless, to our knowledge, this paper is the first to introduce a framework encompassing all four sustainability goals, including fairness, privacy, model interpretability and low GHG emissions. § THE FPIG FRAMEWORK We propose a framework incorporating the four sustainability features into the model training pipeline. Specifically, we are optimizing along different dimensions (e.g., model performance, explainability, carbon emission and fairness with some level of privacy). §.§ Single-Objective Optimization Traditionally, the hyper-parameters of a machine learning model are tuned to maximize one metric of interest, for example, the area under the curve. Once the metric of interest is defined, the objective is to minimize the quantity in Eq. (<ref>): minimize f(x) subject to x∈𝐗 where x∈𝐗⊆ℝ^d is the set of d hyper-parameters, 𝐗⊆ℝ^d is the search space and f(·) is the objective function, for example, a loss function to be minimized. In the case of a single-objective scenario, such as the one shown in Eq. (<ref>), the Tree-structured Parzen Estimator (TPE), originally proposed for neural networks <cit.>, is widely used for optimizing a wide range of machine learning models. The key idea is to separate the likelihood function p(𝐱|y) into two components so as to identify which region the best hyper-parameters are likely to be in: p(𝐱|y) = l(𝐱) if y < y^* g(𝐱) if y ≥ y^* where y^* is usually a quantile of the observed values y (e.g., 80%), and l(·) and g(·) are the probability density functions formed using the observations {𝐱^(i)} below and above y^*, respectively. 
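The split above can be illustrated with a deliberately simplified one-dimensional sketch: past trials are divided at the y^* quantile, two densities are fitted, and the candidate with the largest l(x)/g(x) ratio is suggested next. Real TPE implementations (e.g., in Optuna or Hyperopt) use adaptive Parzen estimators and the expected-improvement criterion discussed next, so this is only a toy approximation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tpe_suggest(x_observed, y_observed, candidates, gamma=0.2):
    """Simplified 1-D TPE step: split past trials at the gamma-quantile of y,
    fit densities l(x) (promising trials) and g(x) (the rest), and return the
    candidate maximising l(x) / g(x)."""
    y_star = np.quantile(y_observed, gamma)        # threshold y*
    good = x_observed[y_observed < y_star]         # observations below y*
    bad = x_observed[y_observed >= y_star]         # observations above y*
    l, g = gaussian_kde(good), gaussian_kde(bad)   # crude Parzen estimates
    ratio = l(candidates) / np.maximum(g(candidates), 1e-12)
    return candidates[np.argmax(ratio)]

# Tiny demo: minimise f(x) = (x - 2)^2 starting from 20 random trials.
rng = np.random.default_rng(0)
x_obs = rng.uniform(-5, 5, size=20)
y_obs = (x_obs - 2.0) ** 2
print(tpe_suggest(x_obs, y_obs, candidates=np.linspace(-5, 5, 200)))
```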
This methodology begins with several random observations {𝐱^(i)} and proceeds iteratively by adding one observation at a time such that the expected improvement is maximized. As shown in <cit.>, the expected improvement is proportional to EI_y^*(𝐱) ∝[ y^* + g(𝐱)/l(𝐱) (1-y^*) ]^-1. In other words, the aim is to sample with higher probability under l(𝐱) (i.e., the portion of the density with the most promising hyper-parameters) and lower probability under g(𝐱). §.§ Multi-Objective Optimization In a multi-objective scenario, we are interested in the minimization (or maximization) of many objectives that usually conflict. The optimization problem is defined as follows: minimize f(𝐱) = (f_1(x), …, f_n(x)) subject to x∈𝐗 where, as in Eq. (<ref>), x∈𝐗⊆ℝ^d is the set of d hyper-parameters and 𝐗⊆ℝ^d is the search space. The difference is that this time, we define a vector of cutoffs 𝐘^* = (y_1^*,…, y_n^*), that is, a cutoff for every objective, and the TPE estimator is generalized as follows: p(𝐱|𝐲) = l(𝐱) if 𝐲≻𝐘^*∪𝐲∥𝐘^* g(𝐱) if 𝐘^*≽𝐲 where the ≻, ≽ and ∥ operators denote dominant, weakly dominant and non-comparable relationships, respectively, as per <cit.>. This time, as shown in Eq. (<ref>), the most "promising" solutions are those that dominate the cutoff 𝐘^* or those that are not comparable to 𝐘^*. The less promising models are those that are weakly dominated by 𝐘^* and, thus, contribute to forming the density function of the least "promising" hyper-parameters g(𝐱). Splitting the data into two sets is, in this instance, achieved via the Hype method <cit.>; however, the methodology is, in principle, the multi-objective equivalent of Eq. (<ref>). §.§ Incorporating Fairness There exist several metrics for quantifying fairness and algorithmic bias of machine learning models. Amongst them, the most popular and often considered are equalized odds, equal opportunity, and demographic parity <cit.>. Without loss of generality, in our framework, we optimized for demographic parity, which is satisfied when the condition below holds true: P(Ŷ | A=0) = P(Ŷ | A=1). In other words, the protected attribute A (e.g., sex or age) does not influence the model's outcome. In reality, exact demographic parity can never be achieved because the protected attribute A usually correlates with other features the model uses. Hence, the objective is to minimize the group disparity f(Ŷ, A), defined as f(Ŷ, A) = | P(Ŷ | A=0) - P(Ŷ | A=1) |. §.§ Incorporating Privacy To include privacy in our framework, we consider the concept of differential privacy <cit.>, which we use to include privacy guarantees in the model. A randomized mechanism ℳ:𝒟→ℛ satisfies (ϵ, δ)-differential privacy if, for any two adjacent inputs d,d' ∈𝒟 and any S ⊂ℛ, the inequality below holds: ℙ(ℳ(d) ∈ S) ≤ e^ϵℙ(ℳ(d') ∈ S) + δ. To incorporate Differential Privacy into diverse model architectures, we did not take the common route where privacy is applied via training, such as the popular Differentially-Private Stochastic Gradient Descent (DP-SGD) <cit.>. Instead, we exploit the idea of converting data into differentially private synthetic data, which can be exploited by different model architectures universally. Various DP data synthesis approaches have been proposed for different modalities. For example, <cit.> and <cit.> can generate DP synthetic data for tabular data and images, respectively. In this work, we adopted DPView <cit.>, a state-of-the-art DP-aware high-dimensional data synthesis method for tabular data. 
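For intuition, the toy sketch below shows the simplest form of DP tabular synthesis: perturbing one-way marginals with the Laplace mechanism and sampling each column independently. It is not the DPView algorithm, which additionally models correlations among attributes and optimises the budget allocation as summarised next; the equal budget split and add/remove adjacency assumption are noted in the comments.

```python
import numpy as np
import pandas as pd

def naive_dp_synthesize(df, epsilon, n_samples=None, seed=0):
    """Toy epsilon-DP tabular synthesis via noisy one-way marginals.

    Each categorical column receives an equal share of the privacy budget;
    its counts (sensitivity 1 under add/remove adjacency) are perturbed with
    Laplace noise, clipped at zero, renormalised and sampled independently.
    Unlike DPView, this ignores inter-attribute correlations entirely.
    """
    rng = np.random.default_rng(seed)
    n_samples = n_samples or len(df)
    eps_col = epsilon / df.shape[1]                    # naive budget split
    synthetic = {}
    for col in df.columns:
        counts = df[col].value_counts()
        noisy = counts.to_numpy(dtype=float) + rng.laplace(
            scale=1.0 / eps_col, size=len(counts))
        probs = np.clip(noisy, 0.0, None)
        probs = probs / probs.sum() if probs.sum() > 0 else np.ones(len(probs)) / len(probs)
        synthetic[col] = rng.choice(counts.index.to_numpy(), size=n_samples, p=probs)
    return pd.DataFrame(synthetic)
```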
For a given privacy requirement ϵ, DPView utilises the domain size of attributes and the correlation among attributes to analytically optimise both privacy budget allocation and consistency in producing synthetic data points. Their evaluation demonstrated that the approach is versatile (when applied to tabular data from various applications) and can effectively preserve model utilities. Compared to traditional gradient-based approaches (e.g., DP-SGD <cit.>), where privacy can still be breached by querying the model multiple times and the total privacy budget must be split across users, DPView is more robust, as it does not suffer from the same issues since noise is applied directly to the data instead of to the model during training. §.§ Evaluating GHG Emissions We included GHG emission tracking with CodeCarbon <cit.> throughout our training and inference pipeline to monitor the carbon emissions from each model training. It helps to track the overall carbon emission by accumulating the power consumption of individual hardware components and converting it into GHG emission based on the energy mixture of local power grids. At the end of the training, the overall GHG emission amount is output along with the model parameters. § EVALUATION This section presents an experimental analysis of an AI pipeline implementing the proposed FPIG framework. All the code is implemented in Python 3.9, and the experiments are performed on a 16-CPU instance with 64GB of memory. The evaluation is divided into two parts. Firstly, we run 2000 trials for each dataset using different models and hyperparameters under the FPIG framework. Using results across all trials, we then identify the relationships between the key metrics (accuracy, fairness, emissions). Secondly, we use the results of the trials to develop a sustainable meta-learning algorithm, aiming to learn the relationship between the meta-features of the model and dataset used and the resulting accuracy, fairness and emissions. §.§ Dataset, Differential Privacy and Protected Attributes We included the five public datasets in our evaluation: * Adult Income <cit.> is a public multivariate social dataset for annual income classification (i.e., whether annual income is above 50K). It comprises 48,842 records with 14 attributes such as education, occupation, and work class. * COMPAS Recidivism Racial Bias <cit.> is based on a popular commercial algorithm that judges and parole officers use to score criminal defendants' recidivism likelihood. This dataset compares the algorithm outputs and the ground truths, which shows that the algorithm is biased in favour of white defendants and against black inmates, based on a 2-year follow-up study. * LSAC <cit.> is a public dataset originally collected for the 'LSAC National Longitudinal Bar Passage Study'. It includes background information and whether (and how) candidates passed the bar exam to become lawyers in the United States. * Loan Default <cit.> is a public multivariate financial dataset for loan default classification. It includes 139,202 records with 34 attributes such as income, gender, and loan purpose. Note that we randomly sampled 40,000 records from this dataset in our evaluation. * Support2 <cit.> is a public multivariate health dataset for predicting survival over a 180-day period for seriously ill hospitalized adults. It comprises 9,105 records and 42 attributes, such as age, sex, and follow-up days. Each dataset is split into a training set (70%) and a test set (30%). 
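A minimal sketch of this data-preparation step is given below; the stratified split and the group-disparity helper are assumptions for illustration rather than the exact preprocessing code used in the experiments.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def demographic_disparity(y_pred, protected):
    """|P(Y_hat = 1 | A = 0) - P(Y_hat = 1 | A = 1)| for a binary protected
    attribute, i.e. the group-disparity objective minimised by the framework."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    return abs(y_pred[protected == 0].mean() - y_pred[protected == 1].mean())

def split_with_protected(X, y, protected, test_size=0.3, seed=42):
    """70/30 split that carries the protected attribute along with the data,
    so disparity can later be evaluated on the held-out set.  Stratifying on
    the label is an assumption, not a stated detail of the paper."""
    return train_test_split(X, y, protected, test_size=test_size,
                            random_state=seed, stratify=y)
```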
We then apply DPView <cit.> to each training set and generate twenty Differentially Private (DP) synthetic datasets (each having the same number of data points as the training set) with different DP levels ϵ ∈ [0.5, 10.0], where a smaller ϵ indicates higher DP. Also, we only consider one protected attribute, namely gender, for all five datasets. We only consider binary protected attributes (thus generating two groups in each case) and measure fairness via disparity, i.e., the absolute difference in the average loss between both groups. This is for simplicity, but our approach can easily carry over to other metrics and more complex settings. §.§ Models and their Parameters We built the FPIG framework using various models with varying degrees of complexity. We consider a range of hyperparameters for each model, as detailed below. Varying the model complexity and parameters will give different performance metrics and, better or worse, fairness and GHG emissions. Complex models like Neural Networks are expected to worsen fairness due to overfitting. The trade-offs between fairness and accuracy, as well as GHG emissions, should always be considered when designing new models. Table <ref> illustrates the models and associated hyperparameters used in our study. We also include a self-defined measure of each model's explainability. This ranges from 1 to 3, with 1 being the most explainable model and 3 the least explainable. §.§ Search Space and Pareto-Front Analysis We exploited Optuna <cit.>, a state-of-the-art hyper-parameter optimization package, as our optimization engine to find the Pareto Fronts of the objectives defined in the FPIG framework. For each dataset, we run 2000 trials. For each trial, we select a value between 0.5 and 10 for differential privacy, where 10 indicates lower differential privacy and vice versa. We then use the respective differentially private dataset based on this value. We also include the option of no differential privacy. The objectives are listed below: * Fairness: we exploited demographic parity <cit.> as our fairness metric. The objective is to minimise the group disparity f in Equation (<ref>). * Explainability: ML models are assigned an ordinal value in {0, 1, 2, 3} to illustrate their explainability as in Table <ref>, the lowest indicating the simplest models. Models range from logistic regression to decision trees, random forests, XGBoost, and neural networks. * Carbon emission: CodeCarbon <cit.> is used to measure GHG emission in practical settings. The GHG emission is reported in kilograms. * Accuracy: given that all five tasks are classification tasks, we use classification accuracy in [0,1] as the performance metric for evaluating model utility. * Equal Importance: To find the best models with respect to accuracy, fairness and carbon emissions, we consider a naive approach for trading off each of them with equal importance. To define this, we scale the values of each metric to be between 0 and 1 to obtain v^D_i,j, the scaled value for each trial i ∈{1, ⋯, 2000} for each metric j ∈{Accuracy, Fairness, Carbon Emissions}, for each dataset D. The best trial is defined to be i^D = argmin_i( (1-v^D_i,Accuracy) + v^D_i,Fairness + v^D_i,Carbon Emissions), as we wish to maximise Accuracy whilst minimising the Fairness (group disparity) and Carbon Emissions terms. §.§ Optimisation under the FPIG Framework We use Optuna to search through the hyperparameter space and find the Pareto front. 
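A condensed sketch of what such a multi-objective study might look like is shown below. The synthetic data, the three candidate model families, the search ranges, and the use of CodeCarbon's EmissionsTracker (assuming its standard start()/stop() interface returning kilograms of CO2-equivalent) are illustrative stand-ins for the actual FPIG pipeline; the last lines also show the equal-importance selection of Eq. (<ref>).

```python
import numpy as np
import optuna
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for one tabular dataset, plus a random binary "protected"
# attribute used only to evaluate group disparity.
X, y = make_classification(n_samples=4000, n_features=15, random_state=0)
A = np.random.default_rng(0).integers(0, 2, size=len(y))
X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(X, y, A, test_size=0.3, random_state=0)

def objective(trial):
    name = trial.suggest_categorical("model", ["logreg", "tree", "forest"])
    if name == "logreg":
        clf = LogisticRegression(C=trial.suggest_float("C", 1e-3, 10.0, log=True), max_iter=500)
    elif name == "tree":
        clf = DecisionTreeClassifier(max_depth=trial.suggest_int("max_depth", 2, 12))
    else:
        clf = RandomForestClassifier(n_estimators=trial.suggest_int("n_estimators", 20, 200),
                                     max_depth=trial.suggest_int("rf_max_depth", 2, 12))
    tracker = EmissionsTracker()     # assumed start()/stop() interface, kg CO2-eq
    tracker.start()
    clf.fit(X_tr, y_tr)
    emissions_kg = tracker.stop() or 0.0
    pred = clf.predict(X_te)
    accuracy = float((pred == y_te).mean())
    disparity = abs(pred[a_te == 0].mean() - pred[a_te == 1].mean())
    return accuracy, disparity, emissions_kg

study = optuna.create_study(directions=["maximize", "minimize", "minimize"],
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
pareto_trials = study.best_trials          # Pareto front over the three objectives

# Equal-importance selection: min-max scale each objective over the study and
# pick argmin of (1 - acc_scaled) + disparity_scaled + emissions_scaled.
vals = np.array([t.values for t in study.trials if t.values is not None])
scaled = (vals - vals.min(axis=0)) / np.maximum(np.ptp(vals, axis=0), 1e-12)
best_idx = int(np.argmin((1.0 - scaled[:, 0]) + scaled[:, 1] + scaled[:, 2]))
```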
Table <ref> summarises the best models and their overall performances concerning the performance metrics - Accuracy, Fairness, Carbon Emissions - and their corresponding model Explainabilities and Differential Privacy levels. We further illustrate (as a rolling average over 500 trials) the trade-offs between the three performance metrics over time in Figure <ref>. Below, we summarise our observations. Although the degree may vary, the trade-offs between objectives always exist. The results showed that Accuracy can always be traded for Fairness across all five datasets. This observation aligns with the heuristic that better fairness usually leads to worse model utility. However, the degree of trade-off varies. As can be seen in Table <ref>, Adult Income demonstrates a small accuracy reduction from 0.861 to 0.767 when improving the group disparity from 0.167 to 0.0, whilst Support2 shows a significant accuracy reduction from 0.977 to 0.255 when improving the group disparity from 0.0259 to 0.0. Similarly, we can train a model with lower Carbon Emissions by trading Accuracy and Fairness. Compared to Accuracy, the trade-offs between Carbon Emission and Fairness are more significant across all five datasets. For example, when applying the model with the lowest Carbon Emission to the Adult Income dataset, the Accuracy was reduced by 0.135 and the Group Disparity increased by 0.279 compared to the optima for Accuracy and Fairness. Multiple objectives can be improved during model tuning. Although trade-offs (disregarding their degree) are seen across all datasets tested as shown in Table <ref>, we also observed that multiple objectives could still be improved simultaneously. First, the models selected by optimising the Equal Importance objective (Eq. (<ref>)) demonstrate balanced performances across all three metrics - Accuracy, Fairness and Carbon Emission, indicating that it is possible to find a good solution across all datasets. For example, we find a model in which the Accuracy and Fairness are reduced by only 0.057 and 0.007, respectively, on the COMPAS dataset. Second, from Figure <ref> we observed that the trade-offs between objectives vary per dataset. For example, Accuracy and Group Disparity are positively correlated (i.e., higher accuracy and lower fairness) in the Adult Income and COMPAS datasets, whilst the same correlation in the LSAC and Support2 datasets is negative. Although it is possible to improve multiple objectives simultaneously, the trajectory toward the optimum (concerning the surrogate objective, such as the Equal Importance in Eq. (<ref>)) can vary per dataset. The models yielding lower Group Disparity are usually more differentially private. Heuristically, we know that protected attributes (e.g., gender in our experiments) could potentially be utilised to identify individuals. Therefore, differential privacy (DP) could also be improved when building fair models for those protected attributes. Our experiment results shown in Table <ref> confirm the above hypothesis. The best model concerning fairness always provides better DP (fulfilling a smaller privacy budget ϵ) compared to other scenarios. Simpler models are usually more explainable and carbon-friendly. This observation reinforces our heuristic regarding the trade-offs between model complexity and explainability. As seen in Table <ref>, the best models for Carbon Emission adopt the decision tree architecture (i.e., more explainable and easy to compute) across four datasets. 
In contrast, Neural Network (NN) and XGBoost, having extensive capability to approximate any continuous function with lower Explainability, achieved the best Accuracy across four datasets. Further observations regarding the impact of model architecture will be presented in the next subsection. §.§ The Impact of Model Architecture The choice of model architecture also plays a significant role when searching for better solutions under the FPIG framework. The best model architecture varies significantly across datasets. We further see this in Figure <ref>, where we compare accuracy and fairness for the three datasets - Loan Default, COMPAS and LSAC. We plot a line that best fits each model type and dataset. As can be seen, the trade-offs not only vary in between datasets but also differ between model architectures. The correlations between Accuracy and Group Disparity when applying different model architectures across the five datasets shown in Table <ref> also support this observation. In the COMPAS dataset, we see similar behaviour across all model types. As we increase Accuracy, this must come at the expense of Fairness, as Group Disparity also increases. This is shown by the positive gradient of the lines of best fit in Figure <ref> and the positive correlations in Table <ref>. In contrast, the correlation becomes negative when applying NNs to the LSAC dataset, which means that the NN models could improve Accuracy and Group Disparity simultaneously compared to other model architectures. This difference is less obvious in Loan Default and COMPAS datasets. §.§ Sustainable Meta Learning In the previous subsection, we studied the trade-offs between sustainable objectives and realised that the preferred hyperparameters and model architectures vary as per dataset. A single solution does not exist that fits all scenarios. Following the methodology in <cit.>, we trained regression models ℳ_i for each of i ∈ [ accuracy, disparity, emissions] that learn the relationship between the key objectives (i.e. accuracy, group disparity, and GHG emission), based on features of the dataset d_X (e.g., number of features, number of entries) as well as features on the model and training d_m ( e.g., hyperparameters of the model architecture, number of training epochs) and finally privacy level requirements, d_p. This is trained based on the trained models from the FPIG framework using Optuna. We aim to offer users a framework to determine which architecture and "sustainable hyperparameters" to pick given requirements before training, which is typically time-consuming and leads to extensive energy consumption and GHG emissions. This is the first step towards developing broader frameworks and attempting to define a meta-learning approach that will allow for a more automated ML system. Table <ref> summarises the results by running separate regression models against each objective of interest and showing the learnt coefficients of selected inputs. We list the most important features in the Table. Our experiment demonstrates the relationship between dataset, model hyperparameters and sustainability features. We observe that using a less differentially private dataset increases the accuracy, thus illustrating the trade-off between accuracy and privacy previously discussed. We also note that using neural networks tends to increase accuracy, whilst using a dataset with a large variance in the target will decrease the model's accuracy. Further, group disparity tends to decrease with more privacy. 
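A compact sketch of this meta-learning step is given below: one regressor per pillar is fitted on logged trial meta-features and then used to screen candidate configurations against user thresholds, in the spirit of Algorithm <ref>. The gradient-boosting regressors and the column naming are illustrative assumptions; the reported coefficients in Table <ref> come from the separate regression models described above.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

TARGETS = ["accuracy", "disparity", "emissions"]

def fit_meta_models(meta_df: pd.DataFrame):
    """Fit one regressor per pillar on meta-features logged from past trials
    (dataset summary, model hyperparameters, privacy level).  Assumes the
    meta-features are already numerically encoded, e.g. one-hot architecture."""
    feature_cols = [c for c in meta_df.columns if c not in TARGETS]
    models = {t: GradientBoostingRegressor(random_state=0).fit(
                  meta_df[feature_cols], meta_df[t]) for t in TARGETS}
    return models, feature_cols

def screen_candidates(models, feature_cols, candidates: pd.DataFrame, tau):
    """Keep candidate configurations whose *predicted* metrics satisfy the
    user thresholds tau = (min_accuracy, max_disparity, max_emissions),
    so unpromising configurations never need to be trained."""
    X = candidates[feature_cols]
    pred = {t: models[t].predict(X) for t in TARGETS}
    keep = ((pred["accuracy"] >= tau[0]) &
            (pred["disparity"] <= tau[1]) &
            (pred["emissions"] <= tau[2]))
    kept = candidates[keep].copy()
    for t in TARGETS:                        # attach predictions for inspection
        kept[f"pred_{t}"] = pred[t][keep]
    return kept

# Usage: models, cols = fit_meta_models(logged_trials_df)
#        shortlist = screen_candidates(models, cols, candidate_df, tau=(0.8, 0.05, 0.01))
```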
Using no differential privacy is the biggest factor in having a larger group disparity, with a coefficient of 0.843. Lastly, we note that using Neural Networks has the largest impact on GHG Emissions, thus increasing with model complexity. There doesn't seem to be any significant relationship between privacy requirements and carbon emissions. Algorithm <ref> demonstrates how the meta learning algorithms can be used in practice. This takes in a dataset X, a set of candidate model architectures d^(k)_m for k = 1, ⋯, K, along with user requirements τ on the minimum accuracy, maximum disparity and maximum emissions the user is willing to accept. The algorithm then returns only the model architectures whose estimated metrics sit within the thresholds, based on the meta learning models. From this, a user can select their chosen model based on preference across the metrics. § CONCLUSION This paper introduces the FPIG framework for Sustainable Machine Learning, considering a multi-objective optimisation problem involving accuracy, fairness, privacy, explainability and carbon emissions. We demonstrate this approach on five datasets and show the trade-offs between these sustainable objectives and the possibility of finding models to balance these trade-offs. We also extended these observations by building a meta-learning approach to predict these key metrics based on the dataset and model characteristics. The results further validate our above observations and show that fairness, as one of the sustainable objectives, is more data-dependent, meaning that it is more difficult to provide guarantees by selecting suitable model architectures and corresponding hyperparameters.
http://arxiv.org/abs/2407.12997v1
20240717203258
Multi-Iteration Multi-Stage Fine-Tuning of Transformers for Sound Event Detection with Heterogeneous Datasets
[ "Florian Schmid", "Paul Primus", "Tobias Morocutti", "Jonathan Greif", "Gerhard Widmer" ]
eess.AS
[ "eess.AS" ]
[ Dominykas Norgilas Received -; accepted - ========================== § ABSTRACT A central problem in building effective sound event detection systems is the lack of high-quality, strongly annotated sound event datasets. For this reason, Task 4 of the DCASE 2024 challenge proposes learning from two heterogeneous datasets, including audio clips labeled with varying annotation granularity and with different sets of possible events. We propose a multi-iteration, multi-stage procedure for fine-tuning Audio Spectrogram Transformers on the joint DESED and MAESTRO Real datasets. The first stage closely matches the baseline system setup and trains a CRNN model while keeping the pre-trained transformer model frozen. In the second stage, both CRNN and transformer are fine-tuned using heavily weighted self-supervised losses. After the second stage, we compute strong pseudo-labels for all audio clips in the training set using an ensemble of fine-tuned transformers. Then, in a second iteration, we repeat the two-stage training process and include a distillation loss based on the pseudo-labels, achieving a new single-model, state-of-the-art performance on the public evaluation set of DESED with a PSDS1 of 0.692. A single model and an ensemble, both based on our proposed training procedure, ranked first in Task 4 of the DCASE Challenge 2024.[Code: <https://github.com/CPJKU/cpjku_dcase24>]. DCASE Challenge, Sound Event Detection, ATST, BEATs, PaSST, DESED, MAESTRO Real, pseudo-labels § INTRODUCTION The goal of Sound Event Detection (SED) is to identify specific acoustic events and their timing within audio recordings. Reliable SED systems enable applications in numerous domains, for example, in security and surveillance <cit.>, smart homes <cit.>, or health monitoring <cit.>. A main driver of research in this field is the annual DCASE Challenge, particularly Task 4, which focuses on SED. State-of-the-art SED systems are based on deep learning approaches, requiring a substantial amount of annotated data. Their performance is mainly limited by the lack of strongly annotated real-world sound event datasets <cit.>. Hence, previous editions of Task 4 focused on learning from weakly labeled data <cit.>, semi-supervised learning strategies <cit.>, and utilizing synthetic strongly labeled data <cit.>. While Task 4 has been based on the DESED dataset <cit.> in previous years, the key novelty of the 2024 edition is a unified setup including a second dataset, MAESTRO Real <cit.>. As domain identification is prohibited, the goal is to develop a single system that can handle both datasets despite crucial differences, such as labels with different temporal granularity and potentially missing labels. In fact, because of the lack of strongly annotated, high-quality real-world data, the hope is that learning from two datasets in parallel has a synergetic effect and eventually increases the generalization performance on both datasets. The main contributions of this work are as follows: (1) We introduce a multi-iteration, multi-stage training routine for fine-tuning pre-trained transformer models on SED using heterogeneous datasets. 
(2) We demonstrate that combining fine-tuned transformers – ATST <cit.>, PaSST <cit.>, and BEATs <cit.> – into a diverse ensemble to generate pseudo-labels, and using these pseudo-labels in a subsequent training iteration, significantly enhances single-model performance, yielding a relative increase of 25.9% in terms of polyphonic sound detection score <cit.> (PSDS1) on DESED and 2.7% in terms of segment-based mean partial area under the ROC curve (mpAUC) on MAESTRO, compared to the baseline system. (3) We conduct an ablation study to analyze the impact of the heterogeneous datasets and design choices related to them. On DESED, we set a new state of the art on the public evaluation set, raising single-model performance from 0.686 <cit.> to 0.692 in terms of PSDS1. A single model and an ensemble, both based on our proposed training procedure, ranked first in the respective categories in Task 4 of the DCASE Challenge 2024 <cit.>. § RELATED WORK SED Architectures: Since the 2018 edition <cit.>, the baseline system is based on a Convolutional Recurrent Neural Network (CRNN). A large increase in performance happened in the 2023 edition, as the baseline used BEATs <cit.> embeddings concatenated with CNN embeddings, which were then fed to the RNN, with a relative increase of almost 50% in PSDS1 score. Top-ranked systems in the 2023 edition improved over the baseline architecture with variations of frequency-dynamic convolution <cit.>. Recently, Shao et al. <cit.> proposed a procedure to fine-tune large pre-trained transformers on the DESED dataset with a two-stage training procedure, establishing a new state of the art. They showed that the key to avoiding overfitting is placing a large weight on the self-supervised losses to take advantage of the larger amount of unlabeled data. Data Augmentation: As strongly annotated data is very limited, data augmentation is an important strategy to improve the generalization of SED systems. In this regard, Filter-Augment <cit.> simulates variations in the acoustic environment by applying different weights to frequency bands, forcing the model to extract information from wider frequency regions. Strategies for recording device generalization in Acoustic Scene Classification apply similar frequency weighting mechanisms: Frequency-MixStyle <cit.> mixes the frequency information of two audio clips in the dataset, and Device-Impulse augmentation <cit.> convolves an audio clip with an impulse response of a real recording device. Recently, Frequency Warping <cit.>, which stretches or squeezes spectrograms in the frequency dimension, was shown to be an integral part when fine-tuning transformers on the DESED dataset <cit.>. Pseudo-labels: Both of the top-ranked approaches in the 2022 and 2023 editions of Task 4 computed pseudo-labels. Ebbers et al. <cit.> use a multi-iteration self-training procedure in which pseudo-labels, predicted by an ensemble, are iteratively refined. Kim et al. <cit.> employ a two-iterations setup in which strong pseudo-labels for weakly labeled, unlabeled, and AudioSet <cit.> clips are computed from an ensemble of models from the first training iteration. The computed pseudo-labels are converted into hard labels and used as additional targets in a second training iteration. § DATASETS The development set is composed of two datasets: DESED <cit.> and MAESTRO Real <cit.>. For common processing, all audio in the training set is converted to clips of 10 seconds in length. 
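A minimal sketch of this common clip preprocessing (mono conversion, resampling to 16 kHz, and padding or cropping to exactly 10 seconds) is shown below; cropping from the start of the waveform is an assumption made for brevity.

```python
import torch
import torchaudio

TARGET_SR = 16_000
CLIP_SECONDS = 10

def to_fixed_clip(waveform: torch.Tensor, sr: int) -> torch.Tensor:
    """Mono-ise, resample to 16 kHz, and pad/crop to exactly 10 seconds."""
    if waveform.dim() == 2:                          # (channels, samples) -> mono
        waveform = waveform.mean(dim=0)
    if sr != TARGET_SR:
        waveform = torchaudio.functional.resample(waveform, sr, TARGET_SR)
    n = TARGET_SR * CLIP_SECONDS
    if waveform.numel() >= n:                        # crop (from the start here)
        return waveform[:n]
    return torch.nn.functional.pad(waveform, (0, n - waveform.numel()))
```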
For the MAESTRO dataset, we strictly follow the train-test-validation split established by the baseline system <cit.>. As for DESED, we use the following subsets: * Weakly labeled: clip-wise labels, 1,267 / 158 for train. / valid. * Unlabeled: 13,057 unlabeled clips * Synthetic strong: 10,000 / 2,500 strongly labeled synthetic clips for train. / valid. * AudioSet strong: 3,435 strongly labeled real clips * External strong: 6,426 / 957 additional strongly labeled real clips for train. / valid. from AudioSet strong as used in <cit.> * Test: 1,168 strongly labeled real clips as in baseline setup <cit.> We noticed that the provided AudioSet strong subset overlaps with weakly labeled, unlabeled, and test sets. Therefore, we remove 1,355 files from the unlabeled set and 153 files from the weakly labeled set to avoid oversampling individual audio clips. Additionally, we remove the 35 files that overlap with the test set from the AudioSet strong set to obtain a more accurate estimate of the generalization performance. § SYSTEM ARCHITECTURE Figure <ref> gives an overview of our SED system. It consists of two independent audio embedding branches (CNN and transformer), the outputs of which are pooled to the same sequence length. A Recurrent Neural Network (RNN) derives strong predictions from these combined sequences. Compared to the baseline <cit.> we experiment with two additional Audio Spectrogram Transformers besides BEATs <cit.>, namely, ATST <cit.> and PaSST <cit.>. In addition to adaptive average pooling, we experiment with linear and nearest-exact interpolation to align transformer and CNN sequence lengths. In the following, we briefly describe the transformer models used in our setup. We refer the reader to <cit.> for more details. ATST-Frame <cit.>(denoted ATST in the following) was specifically designed to produce a sequence of frame-level audio embeddings. The architecture of ATST is based on the Audio Spectrogram Transformer (AST) <cit.>; it is pre-trained in a self-supervised manner on AudioSet. In our experiments, we use a checkpoint of ATST that is fine-tuned on the weak labels of AudioSet. fPaSST: The Patchout faSt Spectrogram Transformer (PaSST) <cit.> is an improved version of the original AST <cit.> that shortens the training time and improves the performance via patchout regularization. We adapt PaSST to return frame-level predictions and call the resulting model Frame-PaSST (fPaSST). We pre-train fPaSST on the weakly annotated AudioSet using Knowledge Distillation <cit.>, obtaining a mAP of 0.484. BEATs: Likewise, BEATs <cit.> is also based on the AST <cit.> architecture; it was trained in an iterative, self-supervised procedure on AudioSet, where the BEATs encoder and tokenizer were alternately updated. In our experiments, we rely on the checkpoint of BEATs after the third iteration, where both the tokenizer and the encoder were fine-tuned on the weak labels of AudioSet. § TRAINING PIPELINE In this section, we describe the pre-training routine on AudioSet strong and how the pre-trained models are fine-tuned on the Task 4 datasets in the proposed multi-iteration, multi-stage training procedure. An overview of the full training pipeline is shown in Figure <ref>. The full system architecture, depicted in Figure <ref>, is involved in all iterations and stages of Figure <ref>. The pre-trained transformers (blue block in Figure <ref>) are used as frozen audio embedding models in Stage 1 and fine-tuned together with the CRNN (red blocks in Figure <ref>) in Stage 2. 
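The sequence-length alignment mentioned above can be sketched as follows; the token and frame counts in the example are assumptions, and the three modes correspond to adaptive average pooling, linear interpolation, and nearest-exact interpolation.

```python
import torch
import torch.nn.functional as F

def align_sequence(tokens: torch.Tensor, target_len: int, mode: str = "adaptive_avg"):
    """Resample a transformer embedding sequence (batch, frames, dim) to the
    CNN's temporal resolution.  F.interpolate / adaptive_avg_pool1d operate on
    (batch, channels, length), hence the transposes."""
    x = tokens.transpose(1, 2)                       # (B, D, T_transformer)
    if mode == "adaptive_avg":
        x = F.adaptive_avg_pool1d(x, target_len)
    else:                                            # "linear" or "nearest-exact"
        x = F.interpolate(x, size=target_len, mode=mode,
                          align_corners=False if mode == "linear" else None)
    return x.transpose(1, 2)                         # (B, T_cnn, D)

# e.g. 496 transformer tokens pooled to 156 CRNN frames (shapes are illustrative)
aligned = align_sequence(torch.randn(2, 496, 768), target_len=156)
```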
The pseudo-labels are generated from an ensemble after Iteration 1 and used as additional prediction targets in Stage 1 of Iteration 2. In the following, we abbreviate Iteration {1,2} and Stage {1,2} as I{1,2} and S{1,2}, respectively. §.§ Pre-Training on AudioSet strong We hypothesize that the transformer models would benefit from additional pre-training on a large dataset strongly annotated for various acoustic events. To this end, we add a BiGRU block with 1024 units that processes the output of the transformer. We pre-train for 10 epochs on AudioSet strong <cit.>, a subset of AudioSet that holds around 86,000 strongly labeled examples with annotations for 456 event classes. While ATST and, in particular, fPaSST benefit from this pre-training, there was no effect on the downstream task performance of BEATs; therefore, we only pre-train ATST and fPaSST on AudioSet strong. We select the checkpoint with the highest PSDS1 score on the AudioSet strong validation set for downstream training. §.§ Multi-Stage Training Inspired by <cit.>, I1 and I2 are both split into two training stages. In S1, the CRNN (CNN + BiGRU) is trained from scratch while the large transformer model is frozen. This setup corresponds to the training of the baseline system with slightly different hyperparameters and additional data augmentations (as shown in Table <ref>). In S2, the CRNN is initialized with pre-trained weights from S1, and both the CRNN and the transformer model are fine-tuned. As the system already performs well in its initial state, the transformer can rely on a high-quality self-supervised loss computed on the larger unlabeled set. Aligned with <cit.>, in S2, we compute the interpolation consistency (ICT) loss <cit.> in addition to the mean teacher (MT) loss <cit.>. In both stages, we choose the best model based on the validation set by computing the sum of PSDS1 on the strongly labeled synthetic data, PSDS1 on external strongly labeled real data, and mpAUC on the MEASTRO validation set. §.§ Multi-Iteration Training After completing I1, we build an ensemble (see Ensemble Stage 2 in Table <ref>) of multiple ATST, fPaSST, and BEATs models. This ensemble is used to compute strong pseudo-labels for all audio clips in the training set by averaging the frame-wise logits of the individual models. In S1 of I2, we then use the pseudo-labels as additional prediction targets. We found that BCE is superior to MSE for computing the pseudo-label loss, and interestingly, using the pseudo-label loss only improves performance in S1 of I2 (see Table <ref>). § EXPERIMENTAL SETUP §.§ Audio Pre-processing and Augmentation For all models, we convert audio clips to 10 seconds in length at a 16 kHz sampling rate. For the CNN, we match the baseline settings and compute Mel spectrograms with 128 Mel bins using a window length of 128 ms and hop size of 16 ms. For the transformers, we use their original feature extraction pipelines <cit.>. Table <ref> details all the data augmentation methods used in our training pipeline. In contrast to the baseline, we apply Cross-Dataset Mixup and Cross-Dataset Freq-MixStyle. That is, we mix audio clips from MAESTRO and DESED instead of keeping them separate. In the case of Mixup, we allow the loss to be calculated for all partially active classes, irrespective of the audio clip's dataset origin (see Section <ref>). For Wavmix and Mixup, we mix the pseudo-labels accordingly. 
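As a rough illustration of the CNN front-end and of Cross-Dataset Mixup, the snippet below builds the stated Mel configuration (128 Mel bins, 128-ms window and 16-ms hop at 16 kHz) and mixes two clips together with their soft label vectors over the joint class set. The FFT size and the waveform-level mixing are assumptions; the actual augmentation parameters are those of Table <ref>.

```python
import torch
import torchaudio

# 16 kHz, 128-ms window (2048 samples), 16-ms hop (256 samples), 128 Mel bins.
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16_000, n_fft=2048, win_length=2048, hop_length=256, n_mels=128)
to_db = torchaudio.transforms.AmplitudeToDB()

def cross_dataset_mixup(wav_a, labels_a, wav_b, labels_b, alpha=0.2):
    """Mix two waveforms (possibly one DESED and one MAESTRO clip) and their
    soft label vectors over the joint class set; partially active classes keep
    their mixed weights instead of being masked out."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    wav = lam * wav_a + (1.0 - lam) * wav_b
    labels = lam * labels_a + (1.0 - lam) * labels_b
    return to_db(mel(wav)), labels
```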
§.§ Datasets and Optimization We use the DESED <cit.> and MAESTRO <cit.> datasets as provided for Task 4 in the DCASE 2024 challenge and, additionally, approximately 7,000 strongly annotated clips extracted from AudioSet strong according to <cit.>. We refer the reader to <cit.> for a detailed description of the data setup. The training data can be seen as the union of five subsets: MAESTRO strong and DESED: real strong, synthetic strong, weakly annotated, and unlabeled. We draw batches of (12, 10, 10, 20, 20) and (56, 40, 40, 72, 72) samples from these datasets in S1 and S2, respectively. The model is trained to minimize BCE loss on all (pseudo-)labeled audio clips and MSE loss for the self-supervised MT <cit.> and ICT <cit.>methods. We compute a weighted sum of all losses and tune the individual weights for all iterations and stages. AdamW <cit.> with weight decays of 1e-2 and 1e-3 is used in S1 and S2, respectively. Learning rates are listed in Table <ref>. §.§ Handling Heterogeneous Sound Event Classes The DESED and MAESTRO datasets are annotated with two different sets of sound event classes. We adopt the baseline <cit.> strategy, in which the loss for an audio clip is calculated only on the dataset-specific event classes and mapped event classes, as explained in the following: To exploit the fact that the DESED and MAESTRO classes are not fully disjoint but partly represent the same concepts, the baseline introduces class mappings. For example, when the classes people talking, children voices, or announcement are active in a MAESTRO clip, the corresponding DESED class Speech is set to the same confidence value. In addition, we also include a mapping from MAESTRO to DESED classes. Specifically, we set the values of the MAESTRO classes cutlery and dishes and people talking to 1 if the DESED classes Dishes and Speech are present. This is also performed for weak class labels. §.§ Postprocessing For model selection and hyperparameter tuning, we stick with the same class-wise median filter used in the baseline system <cit.>. After selecting the best configurations for each model, we apply the recently introduced Sound Event Bounding Boxes (SEBBs) <cit.> method for postprocessing. We use class-wise parameters and tune them by using linearly spaced search grids (8 values) for step filter length (0.38 to 0.66), relative merge threshold (1.5 to 3.25), and absolute merge threshold (0.15 to 0.325). § RESULTS In this section, we present the results of the described models (Section <ref>) in the introduced training pipeline (Section <ref>). Table <ref> lists the best configuration and the corresponding results on the test set for each architecture in both iterations and stages. The table lists the sequence pooling method (Seq.) and the CNN (lr_cnn), RNN (lr_rnn), and Transformer (lr_tf) learning rates. lr_dec indicates the layer-wise learning rate decay for the transformers as used in <cit.>. In I1.S1, in which the transformers are frozen, BEATs seems to extract the embeddings of the highest quality, followed by fPaSST and ATST. I1.S1 with BEATs is very similar to the baseline <cit.> and achieves a similar rank score with a slight performance increase in our setup. Compared to I1.S1, all three transformers demonstrate a large increase in rank score when fine-tuned on the Task 4 datasets in I1.S2. Notably, the three transformers have different strengths, with ATST and BEATs achieving the best scores on MAESTRO and DESED clips, respectively. 
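To make the handling of heterogeneous classes above more concrete, one possible masked-loss sketch is shown below: the supervised loss is evaluated only on the clip's own classes plus the mapped DESED classes. The class lists are truncated and the label strings are illustrative; only the Speech and Dishes mappings are stated explicitly in the text.

```python
import torch
import torch.nn.functional as F

DESED = ["Speech", "Dishes", "Dog", "Alarm_bell_ringing"]            # truncated
MAESTRO = ["people talking", "children voices", "announcement",
           "cutlery and dishes", "birds singing"]                    # truncated
CLASSES = DESED + MAESTRO
MAESTRO_TO_DESED = {"people talking": "Speech", "children voices": "Speech",
                    "announcement": "Speech", "cutlery and dishes": "Dishes"}

def loss_mask(origin: str) -> torch.Tensor:
    """1 for classes on which the clip's loss is computed: its own dataset's
    classes plus DESED classes reachable through the mapping."""
    mask = torch.zeros(len(CLASSES))
    own = DESED if origin == "desed" else MAESTRO
    for c in own:
        mask[CLASSES.index(c)] = 1.0
    if origin == "maestro":
        for d in set(MAESTRO_TO_DESED.values()):
            mask[CLASSES.index(d)] = 1.0
    return mask

def masked_bce(logits, targets, origin):
    """Frame-wise BCE restricted to the clip's valid classes."""
    m = loss_mask(origin).to(logits)                 # (C,), broadcast over frames
    per_class = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (per_class * m).sum() / m.expand_as(per_class).sum().clamp(min=1.0)
```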
Ensemble Stage 2 denotes an ensemble of 46 models resulting from I1.S2, including ATST, fPaSST, and BEATs trained in different configurations. We use Ensemble Stage 2 to generate strong pseudo-labels for all audio clips in the dataset. The additional pseudo-label loss in I2.S1 boosts performance substantially, with all three transformers achieving a higher rank score compared to I1.S2. The top rank scores for all models are achieved in I2.S2, with ATST obtaining the highest rank score. Table <ref> presents the top configurations of ATST, fPaSST, and BEATs from I2.S2 with the state-of-the-art postprocessing method cSEBBs <cit.> applied. ATST and ATST DT, a variant of ATST that is trained on all available audio clips included in the Task 4 development set, were submitted as single models to the challenge. ATST DT using cSEBBs postprocessing achieves a PSDS1 of 0.692 on the public evaluation set of DESED, improving over the previous state of the art (0.686 PSDS1) <cit.>. §.§ Ablation Study Table <ref> shows the results of ATST for I2.S1 and I2.S2 trained in different configurations to analyze the design choices related to the heterogeneous datasets and the pseudo-label loss. For settings and , the proposed system is trained only on MAESTRO and DESED data, respectively. We find that training on DESED and MAESTRO simultaneously is beneficial for the performance on both datasets, which coincides with the finding reported for the baseline system <cit.>. For both stages of I2, excluding MAESTRO clips when calculating the self-supervised losses () and not mapping event classes from MAESTRO to DESED (, see Section <ref>) leads to a performance decrease. However, we find no clear answer to the question of whether the SSL loss should be calculated on all classes or only on the dataset-specific classes of an audio clip (+/- SSL class mask); S1 and S2 benefit from different settings. Interestingly, using the pseudo-label loss in I2.S2 (+ Pseudo Loss) does not increase the rank score. Therefore, the setup in I1.S2 and I2.S2 remains identical, which demonstrates that a well-trained CRNN from S1 can have a large impact on the performance achieved in S2. We also tried to use separate heads for predictions on DESED and MAESTRO classes and realized this with an additional single BiGRU layer per dataset (+ Separate RNN layer), which resulted in a performance decrease. Further obvious choices, such as thresholding the pseudo-labels by 0.5 (+ Hard Pseudo) and calculating the pseudo-label loss on all classes (+ Pseudo All Classes) instead of only dataset-specific classes, are inferior to our proposed strategy as well. §.§ Final Submissions For the final submissions shown in Table <ref>, we select the top single-model, ATST, after I2.S2, and build an ensemble consisting of ATST, fPaSST, and BEATs models obtained in I2. We repeat the full training process for the two selections and include the test data of MAESTRO and DESED for training to make use of the full development set. In this case, model selection still relies on validation metrics. As described in Section <ref>, we use SEBBs <cit.> instead of a class-wise median filter for postprocessing the predictions of all submissions. The resulting performance is listed in Table <ref>. PSDS1 MF and PSDS1 SEBB denote the performance on the DESED test set with median filter and SEBB, respectively. 
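The pseudo-label generation and distillation described above could be sketched as follows; the model interface (a callable returning frame-wise logits of shape batch x frames x classes) is an assumption, and the soft targets are deliberately not thresholded, matching the ablation finding that hard 0.5-thresholding performs worse.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_pseudo_labels(models, batch):
    """Average frame-wise logits of the Stage-2 ensemble and squash to [0, 1];
    the result is kept as *soft* targets for the next training iteration."""
    logits = torch.stack([m(batch) for m in models], dim=0)   # (M, B, T, C)
    return torch.sigmoid(logits.mean(dim=0))                  # (B, T, C)

def pseudo_label_loss(student_logits, soft_targets, class_mask=None):
    """BCE distillation term added to the Stage-1 loss of the second iteration,
    optionally restricted to the clip's dataset-specific classes."""
    loss = F.binary_cross_entropy_with_logits(student_logits, soft_targets,
                                              reduction="none")
    if class_mask is not None:
        loss = loss * class_mask
    return loss.mean()
```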
Although the median filter lengths of the baseline system are also tuned on the test set, we note that PSDS1 SEBB results should be taken with a grain of salt, as the SEBB hyperparameters are tuned on the test set. We therefore also report the results on the unseen DESED public evaluation set (Eval PSDS1 SEBB). Notably, our best single model (S2) improves the state-of-the-art PSDS1 score on the public evaluation set from 0.686 <cit.> to 0.692. The submitted ensembles (S3 & S4) clearly improve over the single models (S1 & S2) in terms of PSDS1, but interestingly, mpAUC cannot be improved by ensembling. § CONCLUSION This paper presented a multi-iteration, multi-stage training routine for fine-tuning transformers on the SED task with heterogeneous datasets. We showed that the performance of all tested systems monotonically increases throughout both iterations and stages. The proposed method led to a new state-of-the-art performance of 0.692 in PSDS1 on the DESED public evaluation set and achieved the top rank in Task 4 of the DCASE 2024 challenge. We specifically studied design choices related to the heterogeneous datasets and found that simultaneously training on DESED and MAESTRO leads to a performance increase on both datasets compared to training the system on a single dataset. § ACKNOWLEDGMENT The computational results presented were achieved in part using the Vienna Scientific Cluster (VSC) and the Linz Institute of Technology (LIT) AI Lab Cluster. The LIT AI Lab is supported by the Federal State of Upper Austria. Gerhard Widmer's work is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No 101019375 (Whither Music?).
http://arxiv.org/abs/2407.12435v1
20240717094358
F-HOI: Toward Fine-grained Semantic-Aligned 3D Human-Object Interactions
[ "Jie Yang", "Xuesong Niu", "Nan Jiang", "Ruimao Zhang", "Siyuan Huang" ]
cs.CV
[ "cs.CV" ]
F-HOI J. Yang, et al. 1The Chinese University of Hong Kong, Shenzhen   2State Key Laboratory of General Artificial Intelligence, BIGAI   3Institute for AI, Peking University <https://f-hoi.github.io> ⋆Equal contribution †Corresponding author F-HOI: Toward Fine-grained Semantic-Aligned 3D Human-Object Interactions Jie Yang1,2,⋆0009-0009-5891-2911Xuesong Niu2,⋆0000-0001-7737-4287 Nan Jiang2,3,⋆0009-0006-5726-7672 Ruimao Zhang1,†0000−0001−9511−7532 Siyuan Huang2,†0000-0003-1524-7148 July 22, 2024 ================================================================================================================================================================================ § ABSTRACT Existing 3D human object interaction (HOI) datasets and models simply align global descriptions with the long HOI sequence, while lacking a detailed understanding of intermediate states and the transitions between states. In this paper, we argue that fine-grained semantic alignment, which utilizes state-level descriptions, offers a promising paradigm for learning semantically rich HOI representations. To achieve this, we introduce Semantic-HOI, a new dataset comprising over 20K paired HOI states with fine-grained descriptions for each HOI state and the body movements that happen between two consecutive states. Leveraging the proposed dataset, we design three state-level HOI tasks to accomplish fine-grained semantic alignment within the HOI sequence. Additionally, we propose a unified model called , designed to leverage multimodal instructions and empower the Multi-modal Large Language Model to efficiently handle diverse HOI tasks. offers multiple advantages: (1) It employs a unified task formulation that supports the use of versatile multimodal inputs. (2) It maintains consistency in HOI across 2D, 3D, and linguistic spaces. (3) It utilizes fine-grained textual supervision for direct optimization, avoiding intricate modeling of HOI states. Extensive experiments reveal that effectively aligns HOI states with fine-grained semantic descriptions, adeptly tackling understanding, reasoning, generation, and reconstruction tasks. § INTRODUCTION Modeling human-object interaction (HOI) in 3D space is critical for various downstream applications, such as computer animation, virtual reality, and embodied AI <cit.>. The HOI process involves a sequence of continuous state transitions, which contains intricate changes in body parts, object trajectories, and interaction contacts. However, existing models <cit.> only align global descriptions with such processes, making them struggle to comprehend each HOI state and the transitions between states in the fine-grained semantic space. In the literature, how to achieve fine-grained semantic-aligned 3D HOI is still a challenging yet under-explored issue due to the following difficulties: * Dataset Gap. Existing HOI datasets only provide coarse-grained goal descriptions (e.g., “a person picks up a backpack”) to depict a long HOI sequence. Thus, the absence of fine-grained semantic descriptions significantly hampers progress in the related field. * Model Capacity. Aligning the fine-grained semantic descriptions with HOIs is non-trivial. It requires a model that can establish alignments from a limited dataset, possesses powerful semantic comprehension skills to handle extensive textual descriptions, and has prior knowledge of real-world actions. 
To address the first issue, we reconstruct a new dataset named from three existing datasets, bridging the semantic gap in current datasets by furnishing fine-grained descriptions for HOI states and detailing the movements between two consecutive HOI states. Based on the proposed dataset, we design three state-level tasks to achieve fine-grained HOI modeling from different perspectives: (1) Understanding: None of the current tasks explicitly involve understanding an HOI state (i.e., human pose with object pose) via textual descriptions. Thus, we are motivated to propose such a task and aim to achieve fine-grained understanding, as shown in <ref>-(a). (2) Reasoning: Building upon the understanding task, we further increase the level of difficulty by describing the next HOI state given the current HOI state and the overall HOI goal, as shown in <ref>-(b). (3) Generation: Beyond the understanding and reasoning tasks, we further explore the fine-grained action control, which aims to leverage the transformation descriptions to generate the next state from the current one, as shown in <ref>-(c). To address the second problem, we introduce , a novel unified framework to tackle diverse 3D HOI tasks. Specifically, first integrates various input modalities, including 2D images, 3D object meshes, 3D HOI-Pose (comprising human and object poses), and textual descriptions, into a unified architecture. By employing different task instructions in the training phase, it progressively learns consistent HOI representations across 2D, 3D, and linguistic spaces, and realizes the mutual enhancement of different tasks. Once the model is optimized, it can leverage the powerful language understanding capabilities inherent in the multi-modal large language model to adeptly execute diverse HOI tasks with flexible inputs. Through extensive experiments, we demonstrate that F-HOI can effectively align HOI states of sequences with fine-grained semantic descriptions, adeptly tackling understanding, reasoning, and generation tasks, along with the traditional reconstruction task. In addition, our ablation studies reveal that our model designs, coupled with the proposed dataset and training strategies, could improve fine-grained 3D HOI modeling. Finally, as a pioneering work, we provide a comprehensive discussion to inspire future research in the related field. In summary, the contributions of this work are three-fold: * As far as we know, this is the first work to explore the problem of fine-grained semantic-aligned 3D HOI modeling. To tackle such a problem, we introduce a new dataset named Semantic-HOI to bridge the annotation gap present in current datasets by providing fine-grained descriptions for HOI states and the body movement between two consecutive states. * To learn and evaluate the fine-grained HOI representation, we define three new state-level HOI tasks from the perspectives of understanding, reasoning, and generation. Furthermore, we present F-HOI, which empowers the MLLM to execute the above HOI tasks with flexible inputs. * Extensive experiments show can effectively align HOI states with fine-grained semantic descriptions. We hope our proposed dataset and tasks can bring in new perspectives to fine-grained semantic-aligned HOI modeling. § RELATED WORK §.§ Human-Object Interaction Research in Human-Object Interaction (HOI) has traditionally focused on identifying interactions from images <cit.>, 3D interaction reconstruction <cit.> and generation <cit.>. 
Some noteworthy contributions include Phosa <cit.>, which geometrically reconstructs HOIs by utilizing contact priors from different body regions, and GOAL <cit.>, which employs a conditional variational autoencoder (cVAE) to generate full-body motions for object grasping by estimating the grasping pose for the entire body. CHOIS <cit.> further extends the field by synthesizing HOI motions using a conditional diffusion model based on language descriptions and the initial state of the object and the human involved. On the other hand, the advent of 3D HOI assets <cit.> and datasets, which include visual recordings <cit.>, text annotations <cit.> or both <cit.>, have promoted effective HOI modeling across various applications. However, existing models and datasets typically rely on coarse-grained descriptions for interactions, making it challenging to learn fine-grained semantic alignment. Our work represents the first attempt to address this limitation by constructing a new dataset with rich descriptions for body parts, objects, and their interactions, and proposing three new fine-grained state-level HOI tasks from different perspectives. §.§ Multimodal Large Language Models Large Language Models (LLMs) are rapidly emerging as powerful tools across various domains. While leading models like OpenAI's ChatGPT <cit.> and GPT-4 <cit.> remain proprietary, the availability of open-source LLMs such as Vicuna <cit.>, LLaMA <cit.>, and Alpaca <cit.> is enabling researchers to engage in multimodal research. In general, there are two technical paradigms to leverage the LLMs to solve multi-modality tasks: Firstly, LLMs can serve as effective decision-making agents by interfacing with task-specific models through API calls <cit.>. Through carefully designed prompt engineering or instruction tuning, LLMs can generate API calls to address multi-modal tasks. However, this approach may not fully comprehend the intricacies of task-specific modalities, leading to potential failures when dealing with complex scenes. Secondly, an advanced approach is to map modality-specific representations into the language embedding space of the LLM. Recent works like LLaVA <cit.> and MiniGPT-4 <cit.> incorporate pre-trained visual encoders to obtain image features and train projection layers to align visual representations with the language space of LLM. This approach can also be extended to speech generation <cit.>, image generation <cit.>, video understanding <cit.>, and other perception tasks <cit.>, providing a more comprehensive understanding of multi-modal data. Among these, ChatPose <cit.> is most relevant to our work, as it explores the use of 3D body pose as a new modality for LLMs to process. However, our work goes further by comprehensively considering humans, objects, and their interactions. More importantly, we emphasize exploring the problem of fine-grained semantic-aligned HOI modeling. To achieve this, we leverage multi-modal instructions to empower MLLMs to complete fine-grained HOI tasks, thereby demonstrating significant potential. § DATASET §.§ Motivation As illustrated in <ref>, existing datasets focus on different subsets of objects and interactions, while providing only goal descriptions for long-term interaction processes. These limitations restrict the effectiveness of models in handling diverse scenes and achieving fine-grained semantic alignments for HOIs. To address these dataset gaps, we introduce a novel dataset called , characterized by two primary features: (1) Diverse objects. 
We aggregate and unify data from three established datasets to incorporate a wider range of objects; (2) Detailed descriptions. Instead of directly describing long-term HOI sequences, which demand extensive language to capture various redundant action changes, we provide detailed descriptions for each HOI state and highlight the body movements between consecutive HOI states. §.§ Data Collection We collect data from 3 existing datasets (GRAB <cit.>, CHAIRS <cit.> and BEHAVE <cit.>), while carefully considering the following data balancing principles: Balanced Data for Interaction Diversity: Each dataset in <ref> is video-based and each video contains a single interaction process. To ensure diversity in interaction, we randomly sample a subset of state pairs from these videos. Balance Data for Images Discrepancies and Object Categories: Since the GRAB dataset lacks natural images, we need to render 3D HOIs into 2D images. Additionally, the CHAIRS dataset predominantly features interactions with a diverse range of chairs. To address the variations in data distribution between rendered and natural images, and to ensure the super-category diversity of the interactive objects, we have restricted the number of samples drawn from both the GRAB and CHAIRS datasets to maintain balance. In contrast, we have included an increased number of samples from the BEHAVE dataset. §.§ Dataset Construction Annotations. As depicted in <ref>, the annotations of an HOI pair comprises the following components: * 2D images for the current state and the next state. * HOI poses for the current state and the next state, along with object mesh. * Goal description for the action. * Fine-grained descriptions for both the current state and the next state. * Transformation descriptions detailing the changes between the current state and the next state. where components 1-3 can be derived from the original datasets. For components 4-5, we prompt GPT-4V <cit.> using the given 2D images for annotations. During this process, we meticulously design the formats for fine-grained descriptions as follows: (a) decoupled human pose descriptions, including whole-body, head, two arms, two hands, two legs, and two feet; (b) object state descriptions; (c) interaction state descriptions. Based on the above prompts, we can offer both part-level state descriptions and part-level movement descriptions. We also provide action descriptions to supplement ambiguous and incomplete goal descriptions. Statistics Analysis. Considering the potential for response errors in GPT-4V, we conduct manual verification to filter out improperly formatted fine-grained descriptions. In total, our comprises 20,441 pairs, with 1,187 from GRAB, 3,368 from CHAIRS, and 15,886 from BEHAVE. Furthermore, following BEHAVE, we split into about 70% for training and 30% for testing to show the potential of fine-grained semantic-aligned HOI. §.§ Discussion There are several related works worth discussing, which focus on linking textual descriptions to human poses. We have summarized key differences in <ref>: (1) PoseScript <cit.> utilizes generic rules to annotate textual descriptions from state-level 3D keypoints. In contrast, utilizing GPT-4V with well-designed prompts for annotation is more simple and fine-grained. Additionally, our annotation introduces state-to-state human and object transitions, as well as interactions, which PoseScript overlooks. 
(2) Although PoseFix <cit.> annotates text descriptions of human state transitions, it similarly overlooks object state transitions and interaction transitions, which are critical and unique for the HOI problem. § FINE-GRAINED 3D HOI TASKS Motivation. Building on Semantic-HOI introduced in <ref>, we propose three novel tasks to showcase fine-grained semantic-aligned HOI, motivated by varying objectives: (1) Understanding: The task of captioning an HOI state (, the poses of both the human and the object) with textual descriptions serve as the most straightforward method for demonstrating alignment. This approach addresses a gap often overlooked in existing tasks. (2) Reasoning: Building upon the understanding task, the depth of the problem can be extended to reason about the next HOI state via textual descriptions, when knowing the action goal. (3) Generation: Moving forward, beyond simply considering the current or next state, comprehending the fine-grained movement descriptions to generate the next state based on the current one is promising. Problem Definitions. As shown in <ref>, conditioned by the object mesh and each task instruction, the proposed three tasks, along with the traditional reconstruction task, can be formulated as follows: (1) Understanding. Given the t-th HOI state 𝐬_t, the model aims to produce the corresponding fine-grained textual descriptions p _t. Specifically, the HOI state 𝐬_t can be formulated as (M(θ,β), O). M(θ,β) is the human state obtained by a parametric human body model SMPL M <cit.>. θ and β are pose and shape parameters, respectively. Following <cit.>, β is by default set to zeros, corresponding to the average body shape. O is the object state represented by 6 degrees of freedom (6DoF) object pose (i.e., translations and rotations). (2) Reasoning. Given the goal descriptions p_g (e.g., a person picks up a backup) and the t-th HOI state 𝐬_t, we expect the model to reason the next state's text descriptions p_t+1. (3) Generation. Given the t-th HOI state 𝐬_t and the movement descriptions p _Δ, the model needs to generate the next HOI state 𝐬_t+1. (4) Reconstruction. Given the 2D image 𝐈_t at t-th state, the model output the HOI state 𝐬_t. However, directly outputting the object pose is challenging. We perform object-conditioned human reconstruction, which inputs the object pose at t-th state and outputs the human pose at t-th state. § MODEL §.§ Motivation As defined in <ref>, the consistency across the four tasks essentially involves learning fine-grained alignments across 2D, 3D, and language spaces. Therefore, it is crucial to use a single model to unify the formulations of these four tasks and stimulate potential mutual benefits among them, which is demonstrated in our ablation study. On the other hand, due to the complexity of new tasks, previous technical paradigms <cit.> cannot meet the requirements of tasks, which necessitate semantic comprehension and cognition capabilities for handling lengthy sentences in fine-grained descriptions. Based on the above motivations, we propose to empower the Multi-modal Large Language Model to complete the proposed HOI tasks with flexible inputs. §.§ Architecture As illustrated in <ref>, based on different task instructions, could support multi-modal inputs and complete diverse HOI tasks containing the three components: multi-modal encoders, LLM Backbone, and task-specific projectors. Multi-modal Encoders. 
We leverage different modality encoders to project each input modality into tokens that align together within the language space of the LLM backbone. Specifically, (1) We utilize the SentencePiece tokenizer <cit.> to encode textual descriptions, including task instructions, goal descriptions p_g in the reasoning task, and fine-grained movement descriptions p_Δ in the generation task, as in Fig. <ref>; (2) We employ the frozen CLIP image encoder and one additional projection layer to encode 2D HOI images; (3) We utilize the frozen 3D point encoder in Uni3D <cit.> with a trainable projection layer to encode 3D object meshes; (4) For HOI-Pose, we use separate projection layers to project the human pose and object pose into the hidden space of the LLM backbone. LLM Backbone. We utilize Vicuna-7B <cit.> as the LLM backbone and employ LoRA <cit.>, which initializes a trainable matrix for fine-tuning the LLM. De-tokenization. (1) For text response, we also utilize the SentencePiece detokenizer to decode all the text tokens into textual descriptions; (2) For HOI-Pose response, we utilize a trainable human-pose projection layer to decode the special token into human pose, while employing another trainable object-pose projection layer to decode the special token into object pose. §.§ Training Pipeline Pretraining for Alignments. Thanks to the versatility of our model in handling various input and output formats, we are able to utilize multiple related datasets for pretraining, thereby improving alignments across different modalities. Specifically, acknowledging the critical role of human pose diversity in HOI tasks, we engage with extensive pose estimation and description datasets to facilitate alignment between images and human poses, as well as text and human poses. To this end, we employ the COCO <cit.> and PoseScript <cit.> datasets, respectively. We convert these two datasets into different question-answer formats and optimize the model using the following loss functions: ℒ = ℒ_text+ℒ_pose where ℒ_text is the cross-entropy loss typically applied for prefix language modeling such as GPT. ℒ_pose = ‖θ_gt - θ_pred‖_1 is the L1 loss computed between predicted human pose parameters and ground-truth ones. Multi-Task Instruction Tuning. In this stage, we convert the proposed dataset into a task-specific instruction-following format, including understanding, reasoning, generation, and reconstruction tasks. By jointly training these tasks, we optimize the model using the following loss functions: ℒ = ℒ_text+ℒ_hoi where ℒ_hoi = ‖Δθ_gt - Δθ_pred‖_1 + ‖Δ O_gt - Δ O_pred‖_1 is the L1 loss that minimizes the translation difference between predicted and ground-truth human and object pose parameters from the start state to the next state, which only applies to the generation (current state as start state) and reconstruction (default state as start state) tasks. Therefore, ℒ_hoi is computed as the offset. This approach could benefit the generation task, which we refer to as offset regression. § EXPERIMENT §.§ Experimental Setup Evaluation Metric. Since the outputs of the understanding and reasoning tasks are textual descriptions, we employ two commonly used metrics from the NLP field for evaluation: (1) BLEU-4 <cit.> analyzes the co-occurrences of 4-grams between the predicted and ground-truth sentences. (2) ROUGE <cit.> examines the adequacy and fidelity of the predicted sentences based on recall rate.
In contrast, for evaluating generation and reconstruction tasks, we follow previous works <cit.> to utilize the Chamfer distance for both humans and objects. Specifically, we decouple the human pose into different body parts for evaluation, including the head, two arms, two hands, and two legs, to better demonstrate the performance guided by fine-grained semantic descriptions. Baseline. Since this work focuses on entirely new HOI tasks that existing models have not been able to tackle, we choose the base multi-modal large language model in for comparison. Specifically, we use our to finetune LLaVA-1.5V-7B <cit.> as our baseline model, which incorporates Vicuna-7B <cit.> as the LLM backbone with a CLIP image encoder <cit.> for visual encoding. Considering the original LLaVA only takes images and textual descriptions as inputs, we further incorporate 3D HOI-Pose embedding to process HOI poses, enabling the completion of our tasks. In contrast, we do not input the object mesh and instead output the HOI-Pose as a textual response. To obtain a complete and precise HOI-Pose, we decouple the human pose parameters and the object pose, and then perform multi-turn conversation for training and the batch inference to query them separately. Implementation Details. We employ LLaVA-1.5V-7B to initialize the model weight. During training, we freeze the CLIP image encoder while fine-tuning the LLM using LoRA <cit.>. Additionally, we train projection layers to adapt the LLM to our proposed HOI tasks. For fine-tuning with LoRA, we set the rank to 128 and the alpha to 256. We employ AdamW <cit.> for network optimization, with a learning rate of 2e-4 and weight decay of 0. During training, we utilize 8 Nvidia A100 GPUs, each with 80G of memory. We set the batch size to 16 for each GPU and configure a gradient accumulation step of 1. §.§ Main Results For quantitative results, as illustrated in <ref>,  <ref>,  <ref>, and  <ref>, we conduct a comprehensive comparative analysis of our proposed model, , across all four tasks against the established baseline, utilizing our dataset. significantly and consistently outperforms its baseline, demonstrating its effectiveness in achieving fine-grained semantic alignment. Moreover, we provide the qualitative results as shown in <ref> for the generation task. Overall, can learn rich semantic representations at the state level, adeptly tackling our proposed understanding, reasoning, and generation tasks. §.§ Ablation Study Offset Regression. For the generation task, designs an offset-based regression method, which expects the model to predict offsets from the input HOI-Pose, thereby supervising the HOI-Pose offsets to achieve better alignment with movement descriptions. The results in <ref> indicate that this regression approach can notably enhance the generation task by achieving improved alignment with movement descriptions. Image-to-Pose Alignment. Thanks to the input and output flexibility of our model, we can use large-scale image-pose paired dataset COCO <cit.> to perform image-to-pose alignment. The results in <ref>, #1 vs. #2, demonstrate significant improvements in both the reconstruction and generation tasks. The enhanced human pose diversity is crucial for effective HOI, thereby contributing to the observed improvements. Text-to-Pose Alignment. We utilize the text-pose paired dataset PoseScript <cit.> to achieve text-to-pose alignment. The results in <ref>, comparing methods #2 and #3, demonstrate notable benefits for understanding and reasoning tasks. 
This approach effectively aligns poses with diverse descriptions, contributing to improved performance. Multi-task Joint Training. In the previous ablation studies, a consistent phenomenon is observed: an improvement in one task's performance often leads to improvements in other tasks as well. Furthermore, we conduct ablations focusing on single tasks. The results in <ref> demonstrate the presence of mutual benefits across different tasks in multi-task training, significantly outperforming single-task training. In addition, training with multiple modalities input of the same sample (e.g., images, HOI-Pose, and textual descriptions) can also provide more information to improve performance. § DISCUSSION State-by-state Generation for an HOI process. Although the primary focus of this work is to align fine-grained semantic details with HOI at the state level, we demonstrate the potential of this paradigm to generate long sequences for the interaction process while maintaining a detailed understanding of each intermediate state and the transitions between states, as shown in Fig <ref>. Failure Case Analysis. We show three types of failure cases in our method, as shown in <ref>. (1) Interpenetration: employs fine-grained textual supervision (e.g., body part with contact) for implicit human-object optimization. Compared with previous methods <cit.>, it is conceptually simple by eliminating complex structured HOI modeling and explicit contact supervision <cit.>. However, in cases where there are conflicts between human actions and objects, the interpenetration still persists, as illustrated in <ref>-(a). (2) Physics Gap: Since does not explicitly incorporate the physical laws <cit.>, the generation of the next HOI state merely aligns with linguistic descriptions without constraints imposed by physical reality. For instance, as depicted in <ref>-(b), without contact, the “keyboard” would actually fall to the ground due to gravity. (3) Difficulty with Complex Movements: fails to generate the next HOI state when the trajectory of object states between the current and next states is complex or highly variable, as exemplified in <ref>-(c). This is primarily due to our dataset lacking sufficient examples of long-term and complex movements. One direct solution to address this issue is to decompose the description of movements into multiple fine-grained stages. Limitations. Overall, our work serves as a pioneering work that provides the community with a new perspective for fine-grained semantic-aligned 3D human-object interaction modeling. However, as an early-stage effort, our work leaves ample room for further exploration in this field. Here, we discuss several limitations to inspire future research: (1) As shown in <ref>, our proposed tasks require inputs including a 3D object mesh, 3D HOI-Pose, and textual descriptions. These input requirements significantly hurt the convenience of inference. Reducing the strict requirements of input modalities is an area worth exploring. (2) Our work only evaluates the effectiveness of fine-grained textual descriptions and HOI-Pose alignment in closed-set scenarios within existing datasets. However, such a model lacks generalization to open-set scenarios, which is limited by interaction diversity and unseen object meshes. (3) Based on our task and problem definitions, can only perform long sequence generation state-by-state through fine-grained textual descriptions for addressing the HOI motion generation task. 
This approach introduces complexity and error accumulation, and it does not address the issue of smooth transitions between states. (4) The design of follows a simple and intuitive principle and could be considered as a baseline model. It may perform poorly compared to previous methods <cit.> in generation and reconstruction tasks. (5) We expect our model to predict hand parameters to better demonstrate the alignment between fine-grained textual descriptions and HOIs. However, our preliminary results indicate that our model underperforms in capturing the details of the hands. (6) For understanding and reasoning tasks, heavily relies on the priors of large language models. Despite showing significant potential, we still identify several understanding and reasoning errors, such as inaccurate judgments of interactions and incorrect assessments of the spatial relationships between body parts. These limitations arise from the restricted data volume used to align HOI-Pose with fine-grained descriptions, while the richness and quality of the textual descriptions also affect performance <cit.>. Future Directions. (1) Flexiable Input Modality. As discussed in the limitations above, reducing the required input modalities is worth considering. For instance, the object mesh could potentially be obtained through a text-to-3D approach <cit.>. Furthermore, the initial HOI-Pose could be directly derived from image input, as the human SMPL parameters and object 6DoF pose can be obtained by other powerful models <cit.>. (2) Increase Data Scale. Our currently covers only three datasets with a limited number of samples. Merging more HOI datasets <cit.>, scaling up the number of samples, and enriching the textual descriptions are also worth exploring. Moreover, exploring the hierarchy of human body part states, object states, and actions are promising and meaningful <cit.>. (3) Diverse Model Architectures. Due to the complexity of new tasks and the limited data samples, our model is built on the Multi-modal Large Language Model, which brings semantic comprehension and cognitive capabilities for handling lengthy sentences in fine-grained descriptions. However, compared to previous HOI models, our model has a significantly larger amount of parameters, making it less lightweight for addressing HOI tasks. Thus, as data scales up, exploring other model architectures <cit.> also becomes an important consideration. § CONCLUSION This paper proposes the overlooked challenge of fine-grained semantic-aligned 3D human-object interaction (HOI), which is inadequately addressed by current HOI datasets and models. To bridge this gap, we introduce Semantic-HOI, a new dataset featuring over 20K meticulously annotated HOI state pairs, each equipped with detailed descriptions and corresponding body movements between consecutive states. Leveraging this dataset, we formulate three state-level HOI tasks aimed at achieving fine-grained semantic alignment within the HOI sequence. Moreover, we present , which empowers the MLLM to proficiently tackle the proposed HOI tasks. Extensive experiments showcase 's prowess in aligning HOI states with fine-grained semantic descriptions. 
§ ACKNOWLEDGEMENT The work is partially supported by the Young Scientists Fund of the National Natural Science Foundation of China under grant No.62106154, by the Natural Science Foundation of Guangdong Province, China (General Program) under grant No.2022A1515011524, and by Shenzhen Science and Technology Program JCYJ20220818103001002, and by the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong (Shenzhen).
http://arxiv.org/abs/2407.13663v1
20240718163901
Studying the Performance of the Jellyfish Search Optimiser for the Application of Projection Pursuit
[ "H. Sherry Zhang", "Dianne Cook", "Nicolas Langrené", "Jessica Wai Yin Leung" ]
stat.CO
[ "stat.CO", "cs.NE" ]
H. Sherry Zhang (corresponding author, huize.zhang@austin.utexas.edu), Department of Statistics and Data Sciences, University of Texas at Austin, Austin, United States, 78751. Dianne Cook (dicook@monash.edu) and Jessica Wai Yin Leung (Jessica.Leung@monash.edu), Department of Econometrics and Business Statistics, Monash University, Melbourne, Australia, 3800. Nicolas Langrené (nicolaslangrene@uic.edu.cn), Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College, Department of Mathematical Sciences, Zhuhai, China, 519087. § ABSTRACT The projection pursuit (PP) guided tour interactively optimises a criterion function known as the PP index, to explore high-dimensional data by revealing interesting projections. The optimisation in PP can be non-trivial, involving non-smooth functions and optima with a small “squint angle”, detectable only from close proximity. To address these challenges, this study investigates the performance of a recently introduced swarm-based algorithm, Jellyfish Search Optimiser (JSO), for optimising PP indexes. The performance of JSO for visualising data is evaluated across various hyper-parameter settings and compared with existing optimisers. Additionally, this work proposes novel methods to quantify two properties of the PP index – smoothness and squintability – that capture the complexities inherent in PP optimisation problems. These two metrics are evaluated along with JSO hyper-parameters to determine their effects on JSO success rate. Our numerical results confirm the positive impact of these metrics on the JSO success rate, with squintability being the most significant. The JSO algorithm has been implemented in the package and functions to calculate smoothness and squintability are available in the package. Keywords: projection pursuit; jellyfish search optimiser (JSO); optimisation; grand tour; high-dimensional data; exploratory data analysis. § INTRODUCTION Projection Pursuit (PP) (<cit.>, <cit.>) is a dimension reduction technique aimed at identifying informative linear projections of data. The method involves optimising an objective function known as the PP index (e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), which defines the criteria for what constitutes interesting or informative projections. Given high-dimensional data X ∈𝐑^n× p and an index function f(·), PP finds the orthonormal basis A ∈𝐑^p × d that maximises the index value of the projection, Y = XA: max_A f(XA) subject to A'A = I. The index functions can be non-linear and non-convex, hence an effective and efficient optimisation procedure is essential to explore the data landscape and achieve a globally optimal viewpoint of the data. Optimisation of PP is typically investigated in the literature when new indexes are proposed <cit.> or when visualisation methods are used to track the optimisation process. <cit.> introduced the PP guided tour, which monitors the optimisation visually to see the data projections leading in and out of the optima. An implementation is available in the package <cit.> in R <cit.>. <cit.> illustrated how to diagnose optimisation processes, particularly focusing on the guided tour, and revealed a need for improved optimisation.
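As a concrete illustration of the optimisation problem above, the sketch below orthonormalises a candidate basis A and scores the projection XA with an index function. It is a minimal NumPy sketch, not the implementation in the R packages discussed in this paper; the toy_index used here is a placeholder, and any real PP index would take its place.

```python
import numpy as np

def orthonormalise(A):
    """Map an arbitrary p x d matrix onto an orthonormal basis (A'A = I) via QR."""
    Q, _ = np.linalg.qr(A)
    return Q

def toy_index(Y):
    """Placeholder PP index rewarding spread of the projected data; any real
    index function f(.) (holes, LDA, scagnostics, ...) would take its place."""
    return float(np.log(np.linalg.det(np.cov(Y, rowvar=False) + 1e-9 * np.eye(Y.shape[1]))))

def pp_objective(X, A, index=toy_index):
    """Evaluate f(XA) for the orthonormalised version of a candidate basis A."""
    A = orthonormalise(A)
    return index(X @ A), A

# Usage: score a random 2-D projection of 6-D data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
value, A = pp_objective(X, rng.normal(size=(6, 2)))
```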
While improving the quality of the optimisation solutions in the tour is essential, it is also important to be able to view the data projections as the optimisation progresses. Integrating the guided tour with a global optimisation algorithm that is efficient in finding the global optimal and enables viewing of the projected data during the exploration process is a goal. Here, the potential for a Jellyfish Search Optimiser (JSO) to be integrated with the projection pursuit guided tour is explored. JSO, <cit.> - <cit.>, inspired by the search behaviour of jellyfish in the ocean, is a swarm-based metaheuristic designed to solve global optimisation problems. Compared to traditional methods, JSO has demonstrated stronger search ability and faster convergence, and requires fewer tuning parameters. These practical advantages make JSO a promising candidate for enhancing PP optimisation. The primary goal of the study reported here is to investigate the performance of JSO in PP optimisation for the guided tour. It is of interest to assess how quickly and closely the optimiser reaches a gobal optima, for various PP indexes that may have differing complexities. To observe the performance of JSO with different types of PP indexes, metrics are introduced to capture specific properties of the index including squintability (based on <cit.>'s squint angle) and smoothness as introduced in <cit.>. Here we mathematically define metrics for squintability and smoothness, which is new for the field. A series of simulation experiments using various datasets and PP indexes are conducted to assess JSO's behaviour and its sensitivity to hyper-parameter choices (number of jellyfish and maximum number of tries). The relationship between the JSO performance, hyper-parameter choices and properties of PP indexes (smoothness and squintability) is analysed to provide guidance on selecting optimisers for practitioners using projection pursuit. Additionally, it aims to guide the design of new indexes that facilitate easy optimization for PP researchers. The paper is structured as follows. Section <ref> introduces the background of PP guided tour and reviews existing optimisers and index functions in the literature. Section <ref> details JSO and introduces metrics that measure different properties of PP indexes, smoothness and squintability. Section <ref> details two simulation experiments to assess JSO's performance: one comparing JSO's performance improvements relative to an existing optimiser, Creeping Random Search (CRS), and the other studying the impact of different PP index properties on optimisation performance. Section <ref> presents the results. Section <ref> summarises the work and provides suggestions for future directions. § PROJECTION PURSUIT, TOURS, INDEX FUNCTIONS AND OPTIMISATION A tour on high-dimensional data is constructed by geodesically interpolating between pairs of planes. Any plane is described by an orthonormal basis, A_t, where t represents time in the sequence. The term “geodesic” refers to maintaining the orthonormality constraint so that each view shown is correctly a projection of the data. The PP guided tour operates by geodesically interpolating to target planes (projections) which have high PP index values, as provided by the optimiser. The geodesic interpolation means that the viewer sees a continuous sequence of projections of the data, so they can watch patterns of interest forming as the function is optimised. 
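The geodesic interpolation that the guided tour performs between target planes can be sketched, for the simplified case of one-dimensional projections (unit vectors rather than general d-dimensional frames), as spherical interpolation between bases. The following is a schematic Python sketch under that simplifying assumption, not the interpolation routine of the R packages themselves.

```python
import numpy as np

def geodesic_path_1d(a_start, a_target, n_steps=20):
    """Interpolate between two 1-D projection bases (unit vectors) along the
    great circle, so every intermediate frame is still orthonormal and hence
    yields a genuine projection of the data."""
    a = a_start / np.linalg.norm(a_start)
    b = a_target / np.linalg.norm(a_target)
    theta = np.arccos(np.clip(a @ b, -1.0, 1.0))  # angle between the two bases
    if np.isclose(theta, 0.0):
        return [a.copy() for _ in range(n_steps)]
    return [(np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
            for t in np.linspace(0.0, 1.0, n_steps)]

# Usage: the frames of projected data that the viewer watches during the tour.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
frames = [X @ basis for basis in geodesic_path_1d(rng.normal(size=5), rng.normal(size=5))]
```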
There are five (unsophisticated) optimisation methods implemented in the package: * : provides a pseudo-derivative optimisation. It searches locally for the best direction, based on differencing the index values for very close projections. Then it follows the direction along the geodesic path between planes, stopping when the next index value fails to increase. * : also known as Creeping Random Search (CRS), is a brute-force optimisation searching randomly for projections with higher index values. * : is essentially simulated annealing <cit.> where the search space is reduced as the optimisation progresses. * : implements the algorithm described in <cit.>. * : is a very localised search, to take tiny steps to get closer to the local maximum. There are several PP index functions available: and <cit.>; <cit.>; <cit.>; and <cit.>; and <cit.>; <cit.>. Most are relatively simply defined, for any projection dimension, and implemented because they are relatively easy to optimise. A goal is to be able to incorporate more complex PP indexes, for example, based on scagnostics (<cit.>, <cit.>). An initial investigation of PP indexes, and the potential for scagnostics is described in <cit.>. To be useful here an optimiser needs to be able to handle index functions which are possibly not very smooth. In addition, because data structures might be relatively fine, the optimiser needs to be able to find maxima that occur with a small squint angle, that can only be seen from very close by. One last aspect that is useful is for an optimiser to return local maxima in addition to global because data can contain many different and interesting features. § THE JELLYFISH OPTIMISER AND PROPERTIES OF PP INDEXES JSO mimics the natural movements of jellyfish, which include passive and active motions driven by ocean currents and their swimming patterns, respectively. In the context of optimization, these movements are abstracted to explore the search space, aiming to balance exploration (searching new areas) and exploitation (focusing on promising areas). The algorithm aims to find the optimal solution by adapting the behaviour of jellyfish to navigate towards the best solution over iterations <cit.>. To solve the optimisation problem embedded in the PP guided tour, a starting projection, an index function, the number of jellyfish, and the maximum number of trials (tries) are provided as input. Then, the current projection is evaluated by the index function. The projection is then moved in a direction determined by a random factor, influenced by how far along we are in the optimisation process. Occasionally, completely new directions may be taken like a jellyfish might with ocean currents. A new projection is accepted if it is an improvement compared to the current one, rejected otherwise. This process continues and iteratively improves the projection, until the pre-specified maximum number of trials is reached. 
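For contrast with the jellyfish pseudocode that follows, here is a bare-bones sketch of the creeping-random-search idea described above, blended with the simulated-annealing-style shrinking neighbourhood of the variant. The sampling and cooling choices are illustrative assumptions, not the behaviour of the functions in the R package.

```python
import numpy as np

def orthonormalise(A):
    Q, _ = np.linalg.qr(A)
    return Q

def creeping_random_search(X, index, p, d, max_samples=1000,
                           alpha=0.5, cooling=0.995, rng=None):
    """Keep sampling random bases near the current best and accept any that
    improve the index value; shrinking alpha mimics the simulated-annealing
    variant in which the search space contracts over time."""
    if rng is None:
        rng = np.random.default_rng()
    best_A = orthonormalise(rng.normal(size=(p, d)))
    best_val = index(X @ best_A)
    for _ in range(max_samples):
        candidate = orthonormalise(best_A + alpha * rng.normal(size=(p, d)))
        value = index(X @ candidate)
        if value > best_val:
            best_A, best_val = candidate, value
        alpha *= cooling  # drop this line for a plain CRS with a fixed neighbourhood
    return best_A, best_val
```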
Algorithm: Jellyfish Optimizer Pseudo Code
Input: , , ,
Output:
Initialize as the projection with the best index value from , and as the array of index values for each projection in
for each in 1 to do
    Calculate the time control value, c_t, based on and
    if c_t is greater than or equal to 0.5 then
        Define trend based on the and
        Update each projection towards the trend using a random factor and orthonormalisation
    else if a random number is greater than 1 - c_t then
        Slightly adjust each projection with a small random factor (passive)
    else
        For each projection, compare with a random jellyfish and adjust towards or away from it (active)
    Update the orientation of each projection to maintain consistency
    Evaluate the new projections using the index function
    if any new projection is worse than the current, revert to the for that case
    Determine the projection with the best index value as the new
    if ≥ , print the last best projection and exit
return the set of projections with the updated as the

The JSO implementation involves several key parameters that control its search process in optimization problems. These parameters are designed to guide the exploration and exploitation phases of the algorithm. While the specific implementation details can vary depending on the version of the algorithm or its application, the focus is on two main parameters that are most relevant to our application: the number of jellyfish and the maximum number of tries. <cit.> has proposed five criteria for assessing projection pursuit indexes (smoothness, squintability, flexibility, rotation invariance, and speed). Since not all index properties affect the optimisation process, the focus here is on the first two properties, smoothness (Section <ref>) and squintability (Section <ref>), for which metrics are proposed to quantify them. §.§ Smoothness This subsection proposes a metric for the smoothness of a projection pursuit index. If a PP index function is evaluated at some random points, as is done at the initialization stage of JSO, then these random index values can be interpreted as a random field, indexed by a space parameter, namely the random projection angle. This analogy suggests using this random training sample to fit a spatial model, a simple one being a (spatial) Gaussian process. The distribution of a Gaussian process is fully determined by its mean and covariance function. The smoothness property comes into play in the definition of the covariance function: if a PP index is very smooth, then two close projection angles should produce close index values (strong correlation); by contrast, if a PP index is not very smooth, then two close projection angles might give very different index values (fast decay of correlations with respect to distance between angles). Popular covariance functions are parametric positive semi-definite functions, some of which have a parameter to capture the smoothness of the Gaussian field. In particular, consider the Matérn class of covariance functions, defined by K(u) := (√(2ν)u)^ν/(Γ(ν) 2^(ν-1)) 𝒦_ν(√(2ν)u), where ν>0 is the smoothness parameter and where 𝒦_ν is the modified Bessel function.
The Matérn covariance function can be expressed analytically when ν is a half-integer, the most popular values in the literature being 1/2, 3/2 and 5/2. The parameter ν, called smoothness parameter, controls the decay of the covariance function. As such, it is an appropriate measure of smoothness of a random field. In this context, the parameter ν is suggested as a measure of the smoothness of the index function by fitting a Gaussian process prior with Matérn covariance on a dataset generated by random evaluations of the index function, as done at the initialization stage of the jellyfish search optimization. There exist several R packages, such as <cit.> or <cit.>, to fit the hyperparameters of a GP covariance function on data. In this project, the package is used. After fitting the smoothness parameter ν>0, its value can be interpreted as follows: the higher ν, the smoother the index function. §.§ Squintability Squintability has been a useful concept. Here, it is defined mathematically and two approaches to compute this metric numerically, are described. From <cit.> and <cit.>, a large squint angle implies that the objective function value is close to optimal even when the perfect view to see the structure is far away. A small squint angle means that the PP index value improves substantially only when the perfect view is close by. As such, low squintability implies rapid improvement in the index value when near the perfect view. For PP, a small squint angle is considered to be undesirable because it means that the optimiser needs to be very close to be able to “see” the optima. Thus, it could be difficult for the optimiser to find the optima. The mathematical formulation of this intuition is proposed below: Let A ∈𝐑^p × d be the d-D projection basis and A^* be the optimal basis achieving the maximum index value for a given projection pursuit problem defined by the shape to find, data dimension, and the index function f(.). The projection distance d(A, A^*), is defined as the Frobenius norm ‖ . ‖ _F, between the squared difference of these two bases: d(A, A^*) = ‖ M ‖ _F = ∑_ij M_ij^2, where M = AA^' - A^*A^*' . We now focus our attention to how the index value f(XA) changes with respect to the projection distance d(A, A^*), over the course of the JSO. It is expected that for a PP index with high squint angle, the optimization (<ref>) should make substantial progress early on. Conversely, for a PP index with low squint angle, it might take a long while for the optimization to make substantial progress, as the candidate projections would need to be very close to the optimal one for the structure of the index function to be visible enough to be amenable to efficient optimization. This observation suggests that the extreme values of f' (the ones for which f”=0, assuming that f is twice differentiable), and the projection distances for which these values are attained, are crucial in the mathematical definition of squintability. Noticing that f is expected to be a decreasing function, the squintability metric is proposed as follows: ς:=-4min_xf'(x)×xminf'(x) . -min_xf'(x) represents the largest value of the gradient of -f and xminf'(x) represents the projection distance at which this largest gradient is attained. It is expected that these two values should be both high in the case of high squintability (fast increase in f early on), and both low in the case of low squintability (any substantial increase in f happens very late, close to the optimal angle). 
This suggests that their product (<ref>) should provide a sensible measure of squintability. The multiplicative constant 4, which can be deemed arbitrary, does not change the interpretation of the squintability metric ς; it is here to adjust the range of values of ς and simplify the explicit formula for ς obtained later on. To compute the squintability metric (<ref>) in practice, several approaches are possible. The first one is to propose a parametric model for f, and use it to obtain an explicit formula for ς. Numerical experiments suggest a scaled sigmoid shape as described below. Define ℓ(x) := 1/(1+exp(θ_3(x-θ_2))), which is a decreasing logistic function depending on two parameters θ_2 and θ_3, such that ℓ(θ_2)=1/2. Then, define f(x) = (θ_1-θ_4)(ℓ(x)-ℓ(x_max))/(ℓ(0)-ℓ(x_max)) + θ_4, which depends on three additional parameters, θ_1, θ_4, and x_max, such that f(0)=θ_1 and f(x_max)=θ_4. Under the parametric model (<ref>), the squintability metric (<ref>) can be shown to be equal to ς = (θ_1-θ_4)θ_2θ_3/(ℓ(0)-ℓ(x_max)). In practice, the parameters of this model (<ref>) can be estimated numerically, for example by non-linear least squares, and then used to evaluate ς as in equation (<ref>). Alternatively, one can estimate (<ref>) in a nonparametric way, for example by fitting f using kernel regression, then numerically estimating the angle at which -f' attains its highest value. § SIMULATION DETAILS The JSO performance is compared with an existing optimiser, Creeping Random Search (CRS) <cit.>, used in the PP guided tour, to explore JSO's behaviour under different hyper-parameter and data dimension combinations. The second simulation studies the effect of index properties (smoothness and squintability), along with JSO hyper-parameters and data dimension, on the success rate of the JSO performance. This section describes the simulation details, with the results deferred to Section <ref>. §.§ Performance of JSO relative to CRS The performance of JSO is investigated both in comparison to the existing optimizer, CRS, and across various hyper-parameter values. The performance is measured by the success rate, defined as the proportion of simulations that achieve a final index value within 0.05 of the best index value found among all 50 simulations (see Figure <ref> for an illustration). This comparison is based on projection pursuit to find the pipe shape investigated by <cit.> using the index. Fifty simulations are conducted with both JSO and CRS, in four different data dimensions (d = 6, 8, 10, 12). JSO uses 100 jellyfishes with a maximum of 100 tries, while the CRS allows a maximum of 1000 samples at each iteration before the algorithm terminates. The different numbers account for the multiple paths of JSO to enable fairer comparison with CRS. The results of the simulation are collected using the data structure proposed in <cit.> for assessing JSO, where the design parameters are stored along with index value, projection basis, random seed, and computation time. Fifty additional simulations are conducted for each hyper-parameter combination to analyze how they affect the JSO success rate. This includes variations in the number of jellyfish (20, 50, and 100) and the maximum number of tries (50 and 100).
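A sketch of the parametric route to this metric — fit the scaled logistic above to binned (projection distance, index value) pairs by non-linear least squares, then plug the fitted parameters into the closed-form expression for ς — could look as follows. This is an illustrative Python/SciPy version, not the implementation provided in the R package; here x_max is treated as a known constant (e.g. the largest observed projection distance) rather than a fitted parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaled_logistic(x, theta1, theta2, theta3, theta4, x_max):
    """f(x) = (theta1 - theta4) * (l(x) - l(x_max)) / (l(0) - l(x_max)) + theta4,
    where l(x) = 1 / (1 + exp(theta3 * (x - theta2))) is the decreasing logistic."""
    l = lambda z: 1.0 / (1.0 + np.exp(theta3 * (z - theta2)))
    return (theta1 - theta4) * (l(x) - l(x_max)) / (l(0.0) - l(x_max)) + theta4

def squintability(proj_dist, index_vals, x_max):
    """Estimate sigma = (theta1 - theta4) * theta2 * theta3 / (l(0) - l(x_max))
    from arrays of binned projection distances and averaged index values."""
    fit = lambda x, t1, t2, t3, t4: scaled_logistic(x, t1, t2, t3, t4, x_max)
    (t1, t2, t3, t4), _ = curve_fit(
        fit, proj_dist, index_vals,
        p0=[index_vals.max(), x_max / 2, 5.0, index_vals.min()],
        maxfev=10000,
    )
    l = lambda z: 1.0 / (1.0 + np.exp(t3 * (z - t2)))
    return (t1 - t4) * t2 * t3 / (l(0.0) - l(x_max))
```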
§.§ Factors affecting JSO success rate: index properties and jellyfish hyper-parameters To assess JSO's performance across various scenarios, two different data shapes, pipe and a sine wave, are investigated in 6D and 8D spaces using six different PP indexes: , , , , , and , with varied JSO hyper-parameters. A total of 52 combinations result, comprising of 24 computed on the pipe data and 28 on the sine-wave data. Again, JSO is run 50 times to calculate the success rate for each projection pursuit. Smoothness and squintability are computed following the procedures outlined in Section <ref> and Section <ref> and as illustrated in Figure <ref> and Figure <ref>. To compute smoothness, 300 random bases are simulated. Index values and projection distance (to the optimal basis) are calculated for each random basis before fitting the Gaussian process model to obtain the smoothness measure for the index. To compute squintability, 50 random bases are sampled and interpolated to the optimal basis with a step size of 0.005. Index values and projection distances are calculated for these interpolated bases and the index values are averaged with a bin width of 0.005. A four-parameter scaled logistic function is fitted to the index values against projection distances, estimated by non-linear least squares. The squintability measure is then calculated as <ref>. To construct a relationship among success rate, index properties (smoothness and squintability), and jellyfish hyper-parameters, a generalised linear model is fitted using a binomial family and a logit link function. The data is pre-processed by 1) scaling the JSO hyper-parameters by a factor of 10 for interpretation, 2) creating a new binary variable to indicate cases with an average run time over 30 seconds, and 3) re-coding the success rate for the index as 0, because none of the 50 simulations correctly identified the sine-wave shape. § RESULTS The results from the first simulation described in Section <ref> are analysed based on the final projections across two optimisers (JSO and CRS) and the success rate across JSO hyper-parameters. In the second simulation, smoothness and squintability are calculated across a collection of pipe-finding and sine-wave finding problems to construct the relationship between success rate, JSO hyper-parameters, and index properties. The final projections found by the two optimisers (JSO and CRS) are presented in Figure <ref>, broken down by 10th quantile, faceted by the data dimensions. In the 6-dimensional data scenario, JSO consistently identifies a clear pipe shape. The CRS also finds the pipe shape but with a wide rim, suggesting a further polish search may be required. With increasing dimensions, JSO may not always identify the pipe shape due to random sampling, but it still finds the pipe shape in over 50% of cases. Compared to CRS, JSO achieves higher index values and clearer pipe shapes across all quantiles in data of 8, 10, 12 dimensions, suggesting its advantage in exploring high-dimensional spaces. The success rate calculated at each hyper-parameter combination (number of jellyfish and the maximum number of tries) is presented in Figure <ref>. As the number of jellyfish and maximum tries increase, the success rate also increases. For simpler problems (6 dimensions), small parameter values (20 jellyfishes and a maximum of 50 tries) can already achieve a high success rate. However, larger parameter values (i.e. 
100 jellyfishes and a maximum of 100 tries) are necessary for higher-dimensional problems (8, 10, and 12 dimensions). Increasing both parameters enhances the performance of JSO, but it also extends the computational time required for the optimisation, which can be computationally intensive when evaluating the index function (such as scagnostic indexes) multiple times across numerous iterations. The index properties, including smoothness and squintability, offer numerical metrics to characterise the complexity of projection pursuit optimisation problems. Table <ref> presents the parameters for calculating both metrics estimated from the Gaussian process (variance, range, smooth, and nugget) and the scaled logistic function (θ_1 to θ_4) for each case considered in the simulation. The column “smooth” is used as the smoothness metric and the column “squint”, calculated as in equation <ref>, is used as the squintability metric. Table <ref> presents the results of fitting a generalised linear model with a binomial family and a logit link function to a sample of data in Table <ref>, where all three components (success rate, JS hyper-parameters, and index properties) are combined. The model suggests that JSO success rate is positively associated with the two hyper-parameters, as well as with the index properties: smoothness and squintability. Specifically, using 10 more jellyfish and 10 more tries increases the odds ratio of success by 24.11% and 11.93%, respectively. However, being flagged with long runtime and an increase of data dimension reduce the success rate by 41.72% and 53.36%, respectively. The variable and are significant, suggesting their importance relative to JSO hyper-parameters in the optimisation success. In defining a projection pursuit problem, factors such as the shape-to-find, the index function used, and the data dimension determine properties such as smoothness and squintability. These metrics can then be compared across different problems to understand their relative complexities. The results suggest that squintability has a more significant influence than smoothness on the success rate of JSO. Once the characteristics of the projection pursuit problem are fully understood, increasing the JSO hyper-parameters can enhance the search effectiveness; however, it is also important to consider the resulting increase in computational complexity. § CONCLUSION This paper has presented new metrics to mathematically define desirable features of PP indexes, squintability and smoothness, and used these to assess the performance of the new jellyfish search optimiser. The metrics will be generally useful for characterising PP indexes, and help with developing new indexes. In the comparison of the JSO against the currently used CRS, as expected, the JSO vastly outperforms CRS, and provides a high probability of finding the global optima. The JSO obtains the maximum more cleanly, with a slightly higher index value, and with the plot of the projected data showing the structure more clearly. The JSO performance is affected by the hyper-parameters, with a higher chance of reaching the global optima when more jellyfish are used and the maximum number of tries is increased. However, it comes at a computational cost, as expected. The performance declines if the projection dimension increases and if the PP index has low squintability. The higher the squintability, the better the chance that the JSO can find the optima. However, interestingly, smoothness does not affect the JSO performance.
The new JSO is integrated with the current implementation of the projection pursuit guided tour in the package, and can be further examined using PP optimisation diagnostics in the package. Using the JSO is a little different than the current CRS, because it runs many paths. The recommended approach is to conduct the optimisation off-line, extract the bases of a selected jellyfish, and then use the planned tour to follow selected jellyfish. The tools for this are available in the package, too. § ACKNOWLEDGEMENT The article is created using Quarto <cit.> in R <cit.>. The source code for reproducing the work reported in this paper can be found at: <https://github.com/huizezhang-sherry/paper-jso>. Nicolas Langrené acknowledges the partial support of the Guangdong Provincial Key Laboratory IRADS (2022B1212010006, R0400001-22) and the UIC Start-up Research Fund UICR0700041-22.
http://arxiv.org/abs/2407.13037v1
20240717220744
Dispersion Relations for Active Undulators in Overdamped Environments
[ "Christopher J. Pierce", "Daniel Irvine", "Lucinda Peng", "Xuefei Lu", "Hang Lu", "Daniel I. Goldman" ]
physics.bio-ph
[ "physics.bio-ph", "q-bio.QM" ]
APS/123-QED School of Physics, Georgia Institute of Technology School of Chemical and Biomolecular Engineering, Georgia Institute of Technology School of Mathematics, Georgia Institute of Technology School of Chemical and Biomolecular Engineering, Georgia Institute of Technology School of Chemical and Biomolecular Engineering, Georgia Institute of Technology School of Chemical and Biomolecular Engineering, Georgia Institute of Technology School of Physics, Georgia Institute of Technology § ABSTRACT Organisms that locomote by propagating waves of body bending can maintain performance across heterogeneous environments by modifying their gait frequency ω or wavenumber k. We identify a unifying relationship between these parameters for overdamped undulatory swimmers (including nematodes, spermatozoa, and mm-scale fish) moving in diverse environmental rheologies, in the form of an active `dispersion relation' ω∝ k^±2. A model treating the organisms as actively driven viscoelastic beams reproduces the experimentally observed scaling. The relative strength of rate-dependent dissipation in the body and the environment determines whether k^2 or k^-2 scaling is observed. The existence of these scaling regimes reflects the k and ω dependence of the various underlying force terms and how their relative importance changes with the external environment and the neuronally commanded gait. Dispersion Relations for Active Undulators in Overdamped Environments Daniel I. Goldman July 22, 2024 ===================================================================== Self-propulsion results from the cyclical self-deformation of a body in space and time. In overdamped mechanical regimes, these self-deformations, or gaits, produce center of mass displacements that are independent of the speed of the cycle, due to the lack of inertia <cit.>. Overdamped locomotor dynamics are quite common: once thought to be restricted to the microscopic domain of low Reynolds number swimming in water and complex biofluids, subsequent work has shown that a large number of terrestrial locomotor systems, like snakes <cit.> and centipedes <cit.> also operate in overdamped regimes where inertia, and hence coasting is negligible. Therefore general principles of locomotion mechanics in overdamped regimes can help to describe organisms across scales, environments, and taxa [Similar principles, often in the form of relationships between gait parameters, have been previously identified in many different categories of locomotion, including bipedal, quadrupedal, hexapodal and myriapodal locomotion<cit.>, flight<cit.>, and aquatic swimming <cit.>]. Because of the lack of inertia, theoretical models of overdamped locomotion often describe relationships between spatial variables. For example, in overdamped undulatory locomotion, where self-deformations take the form of waves of body curvature along a slender, elongated body, the geometric properties of the gait, like the wave number k and the amplitude, fully determine the distance traveled in a cycle. Tools like the geometric phase <cit.> connect these spatial properties of gaits in the body frame to displacements in the world frame, and have therefore been useful in describing why diverse organisms, from worms to snakes and lizards, select particular gait geometries<cit.>. These models are independent of time. Organisms, however, do not arbitrarily select their frequency of undulation, ω. 
What, then, constrains the space-time dynamics of undulatory gaits in low coasting, and hence time-reversal-invariant mechanical regimes? To rationalize why organisms choose particular space-time dynamics requires a deeper consideration of force balance and energetics within the body and the surrounding environments<cit.>. Here we identify a relationship linking temporal and spatial traveling wave dynamics for undulatory locomotion. We explain this relationship between wave frequency and wavenumber by deriving an active `dispersion relation' ω(k) from force balance, using previously identified phase relationships between muscle drive and body curvature <cit.>. This dispersion relation holds for a set of diverse overdamped undulatory systems (nematode worms, spermatazoa, fish larvae), all of which navigate in heterogeneous, rheologically complex environments with minimal feedback (e.g. neuronal) control. We begin by exploring the gaits of a model biological undulator, the nematode Caenorhabditis elegans, which encounters diverse environments in its native habitat <cit.>, can locomote in a diverse set of complex laboratory environments <cit.> and systematically changes its gait parameters as a function of environmental viscosity <cit.>. See Fig. <ref>. We experimentally measured ω and k in C. elegans in a diverse set of environments, including A) buffer solutions, B) methylcellulose mixtures of different viscosities (weakly viscoelastic fluids), C) elastic polyethylene glycol (PEG) hydrogels with a range of bulk moduli, and D) agar surfaces (See SI for details). We also compared our measured gaits to two literature sources, including previously measured buffer swimming and agar crawling gaits <cit.>, and swimming gaits in Dextran mixtures with various viscosities<cit.>, which remain Newtonian over a broad range of concentrations. Surprisingly, across these diverse environments, C. elegans' gaits fall approximately on a single curve given by the `dispersion relation' ω(k) ∝ k^-2, as shown in Fig. <ref>(b). To explain this observation, we constructed a simple mechanical model based on force balance. We began by considering a driven viscoelastic (Kelvin-Voight) beam previously used to model terrestrial undulation of snakes <cit.> and later nematodes<cit.> immersed in a low-Reynolds number fluid. The linear force balance along the body is given by b_e y_ssss + b_νẏ_ssss + mf^a_ss = -C ẏ, which equates internal and external linear force densities and where y(s,t) is the lateral displacement at the point s along the body and at time t, b_e y_ssss is the elastic body force, b_νẏ_ssss is the viscous body force, mf^a_ss is the muscle force, and -C ẏ is the fluid drag force transverse to the beam. See Fig. <ref>(a). This force balance equation (<ref>) implies two characteristic length scales and a single characteristic time scale. (See the SI for a detailed derivation and discussion.) Here we have chosen to scale the lengths y and s by the length scale s_c = b_e/m, which represents the minimal radius of curvature achieved by the body when the muscle torque is balanced solely by the body elasticity, and the time by t_c = b_ν/b_e, which represents the viscoelastic relaxation time of the passive body. The second characteristic length scale s_ν=(b_ν/C)^1/4, relates to the relative strength of internal and external dissipation. Consider the ratio of the internal dissipative force (f_ν=b_νẏ_ssss) and the external viscous force (F_fluid=-Cẏ) for a lateral wave of the form y(s,t) = y_0 sin(s/λ- t/τ). 
The ratio of the two forces is given by f_ν/F_fluid=1/λ^4b_ν/C. Hence, λ = s_ν is the wavelength at which the internal and external dissipative terms are equal. We now proceed by computing the Fourier transform of the adimensionalized force balance equation (<ref>), k^4 υ + iω k^4 υ -k^2f̃^a = -iω(s_c/s_ν)^4 υ. To solve for the dispersion relation ω(k), we must first determine the functional form of the Fourier-transformed muscle spatiotemporal variation f̃^a. Previous measurements of vertebrate undulators<cit.> and recent work on C. elegans <cit.> have shown that muscle activity travels along the body in advance of the corresponding body bends, a phenomenon referred to as a neuromechanical phase lag. We therefore approximate the spatiotemporal variation of the muscle activity pattern as a phase shift of lateral body displacement f̃^a =e^iϕυ. After substituting, we finally solve for ω(k) and find the following real and imaginary parts, Re{ω (k)} =k^2sin(ϕ)/(s_c/s_ν)^4 +k^4 and Im{ω (k)} = k^4-k^2 cos(ϕ)/(s_c/s_ν)^4 +k^4. The oscillatory part of the solutions to (<ref>) are governed by the real part of the dispersion relation (<ref>). In the limit that k ≫ s_c/s_ν, we recover the experimentally observed relation ω∝ k^-2. This implies that k^-2 is observed when the wavelength λ, taken in natural units, is less than s_ν, because the dimensionless wavenumber k is scaled by s_c. This, in turn, suggests that nematodes operate in a regime where the largest source of dissipation is within the body's bending degree of freedom, and not from the surrounding fluid, since the length scale s_ν is the wavelength at which internal and external viscous dissipation forces are equal. The criteria for obtaining ω∝ k^-2 depends solely on whether λ is smaller or larger than s_ν and is therefore independent of muscle torque amplitude m and the body elastic modulus b_e. This reflects the fact that the body elasticity term in equation (<ref>) contains no time derivatives, hence b_e only contributes the fourth-order term in the imaginary part of the dispersion (<ref>), which governs the exponential decay envelope on the oscillatory solutions to Eqn. <ref>. Similarly, the muscle torque term contains no time derivatives, however it contributes to the overall magnitude of the real part of the dispersion because of the complex phase factor e^iϕ. If we, therefore, consider a simplified equation of motion in the regime internal dissipation-dominated regime, we may simply equate the muscle torque and internal dissipation term in (<ref>), neglecting the irrelevant or negligible terms to reproduce the k^-2 scaling. We find ω≈cos(ϕ)m/y_0 b_ν1/k^2 = α^-1/k^2, where we have defined the proportionality constant α^-:=cos(ϕ)m/y_0b_ν. Fitting the data in Fig. <ref>, we find a value for nematodes of α^-= 0.69 ± 0.02 s^-1mm^-2. We now consider the limit where k ≪ s_c/s_ν. In this regime, equation (<ref>) produces ω∝ k^2 and the dominant dissipation term is the external viscous dissipation. Similar to the above explanation, we may recover this scaling by solely considering the force balance of the external viscous forces and the muscle activity, which results in ω≈cos(ϕ) m/y_0 C k^2 = α^+ k^2. Here we have defined α^+ := cos(ϕ) m/y_0 C. The existence of these two scaling regimes reflects the different k and ω dependences of the force terms relevant to the dispersion (muscle force, internal dissipation, and fluid drag), which in turn reflect the order of the spatial and temporal derivatives. 
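A quick numerical check of these two regimes can be useful. The short sketch below (Python) evaluates the real part of the dispersion relation quoted above, Re ω(k) = k^2 sin(ϕ)/[(s_c/s_ν)^4 + k^4], in dimensionless units on a logarithmic grid of wavenumbers, estimates the local power-law exponent at both ends, and locates the maximum of the curve, which sits where λ = s_ν. The parameter values are illustrative placeholders, not fits to the experimental data.

```python
import numpy as np

def re_omega(k, ratio, phi):
    """Real part of the active dispersion relation in dimensionless units.

    ratio = s_c / s_nu controls the crossover between the two regimes;
    phi is the neuromechanical phase lag between muscle drive and curvature.
    """
    return k**2 * np.sin(phi) / (ratio**4 + k**4)

ratio, phi = 1.0, np.pi / 3          # illustrative values, not measured ones
k = np.logspace(-3, 3, 2001)
w = re_omega(k, ratio, phi)

# local power-law exponent d(log w)/d(log k)
slope = np.gradient(np.log(w), np.log(k))
print("exponent at small k:", slope[0])    # -> +2 (fluid-drag dominated)
print("exponent at large k:", slope[-1])   # -> -2 (internal-dissipation dominated)

k_peak = k[np.argmax(w)]
print("peak at k =", k_peak, "(expected k = s_c/s_nu =", ratio, ")")
```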
The fluid drag depends linearly on ω, while the muscle term depends quadratically on k has no dependence on ω. This is illustrated graphically by the red and blue surfaces in Fig. <ref>(b). The intersection of these two surfaces defines a curve where ω is quadratic in k and where force balance is maintained between fluid drag and muscle force. In contrast, the internal dissipation force is linear in ω and quartic in k, illustrated by the green surface [Fig. <ref>(b)]. The intersection of the green and red surfaces defines a curve where ω is inverse-quadratic in k, which occurs when the internal dissipation and muscle forces are balanced. The presence of both dissipative terms produces a curve that smoothly interpolates between these two regimes. For small values of k, ω∝ k^2. As k increases and internal dissipation becomes significant, the dispersion curve reaches a maximum value when dimensionless k=1, a condition which is satisfied when λ = s_ν. For sufficiently large values of k, ω∝ k^-2. Fundamentally, the two scaling regimes arise because the fluid drag, modeled here using resistive force theory, is local, while the internal dissipation in the body, being dependent on the rate of change of local body curvature, involves coupling of nearby body points, which is reflected mathematically in the fourth-order spatial derivatives in Eq. (<ref>). Having identified the origin of the scaling in the dispersion relation, we proceed to discuss the values of the model parameters and the source of variation in k and ω selected by the organisms. For the nematodes to obey a constant scaling relationship across individuals and the various environments, α^- must be constant. Immediately, the lack of dependence on C in equation (<ref>) suggests that viscosity will not change the scaling (provided the nematodes select gaits with wavenumbers sufficiently higher than s_c/s_ν). Thus, viscosity changes only serve to induce the nematode to select a different k or ω, presumably by increasing the energy penalty for maintaining higher ω above the base value set by the rate-dependent dissipation within the tissue. Hence, tuning the viscosity does not change the overall shape of the dispersion curve [Eq. (<ref>), Fig. <ref>(b)] but only the location of the maximum value separating the k^2 and k^-2 regimes. Restoring units with different measured values of viscosity for the model and comparing with our methylcellulose data (Fig. <ref>), shows that while the viscosity has an effect on the relative distance of the experimental wavenumbers to the peak of the dispersion curve, the viscosity nonetheless does not break the overall k^-2 relationship. Similarly, our results imply that across these environments mcos(ϕ) must be approximately constant, provided the nematode cannot actively manipulate its body damping coefficient b_ν. Across multiple order of magnitude differences in external viscosity, the amplitude of the muscle torque has been estimated to change by only a factor of ≈ 3, accompanied by an estimated increase of the phase lag ϕ from ≈0 to π/3<cit.>. Recent experimental results confirmed that the phase lag accumulation along the posterior region of the body indeed increased with viscosity <cit.>. We, therefore, hypothesize that as muscle torque increases with external viscosity, cos(ϕ) decreases, explaining how the coefficient α^- does not appear to depend, even indirectly, on the viscosity. 
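In practice α^- is obtained by fitting measured (k, ω) pairs to the predicted power law. A minimal sketch of such a fit is given below (Python); the data generated here are synthetic and serve only to illustrate the procedure — the value 0.69 ± 0.02 s^-1 mm^-2 quoted above comes from the actual C. elegans measurements, not from this script.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (k, omega) gait data for illustration only -- generated to follow
# omega = alpha / k^2 with multiplicative scatter. The "true" alpha below is
# an arbitrary placeholder, not a measured value.
alpha_true = 0.7                     # s^-1 mm^-2, placeholder
k = rng.uniform(0.5, 3.0, 60)        # wavenumbers, mm^-1
omega = alpha_true / k**2 * np.exp(0.1 * rng.standard_normal(60))

# Fit log(omega) = log(alpha) + p*log(k): recover both exponent and prefactor.
p, log_alpha = np.polyfit(np.log(k), np.log(omega), 1)
print("fitted exponent p =", p)                  # should be close to -2
print("fitted prefactor  =", np.exp(log_alpha))

# With the exponent fixed to -2 (as in the text), alpha is the mean of omega*k^2.
print("alpha with p = -2 =", np.mean(omega * k**2))
```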
Having explained why the scaling persists despite changes in viscosity, we proceed to describe why changes in the elasticity of the surrounding medium do not lead to deviations from inverse-quadratic scaling. In the PEG hydrogel experiments (magenta points, Figure <ref>,b, iii), the nematodes encounter a highly elastic environment. However, like the body elasticity, the lack of time dependence implies that increasing the elasticity in the environment will only affect the imaginary part of the dispersion, and therefore may impact the persistence of the oscillations in time. We note that as PEG gel weight percentage is increased, the overall periodicity of the waves is diminished, leading to quasi-periodic waves which may reflect the increase in the imaginary part of the dispersion. We next asked if any organisms might follow the k^2 dispersion relation predicted by the model when k<<s_c/s_ν. We considered several literature sources of gait parameters measured in other lateral undulators, a meta-analysis of fish swimming <cit.>, the bank of swimming organisms at the micron scale (BOSO-Micro) database <cit.>, where we investigated the gaits of spermatozoa, an analysis of other nematode species' gaits<cit.> and a study of polychaete worms in water and in sediment <cit.>. The non-C. elegans nematodes and polychaete worms, like C. elegans, displayed frequencies that decreased with k across environments (however, data was insufficient to evaluate the scaling). In contrast, for both the spermatozoa and the fish data, ω increased with k. For example, the larvae of the Atlantic Herring Clupea harengus, decrease both their wavenumber and undulation frequency throughout development – as the body increases in size, producing smaller values of k, the undulations slow down [Fig. <ref>(a)]. We observed similar relationships across a subset of aquatic swimming fish [Fig. <ref>(b)] and also spermatozoa [Fig. <ref>(c)]. For the majority of aquatic swimming fish, the body and fluid dynamics are inertial. Unsurprisingly, the majority of the data in the meta-analysis<cit.> do not appear to fit well to a k^2 model, which assumes that inertia is negligible. A small number of organisms, however, with Reynolds numbers (Re) under 2,000 and Strouhal numbers (St) greater than 0.7 appear to approximately fall on the curve (see Fig. <ref>, b). We note that while Re of ∼2,000 is inertial, the drag coefficient remains approximately linear in velocity at intermediate Reynolds numbers (See for example <cit.>, Ch. 14). Unsurprisingly, organisms in this regime were some of the smallest species in the study, and with one exception were collected from different larval stages (see Fig. <ref>,d). These larval fish hence appear to be operating in a fluid dissipation-dominated overdamped regime with α_Fish^+ = 9.4 ± 1.0 s^-1m^2. We also fit the spread of the spermatozoa data to k^2 and find a rough agreement, with a constant of α_Sp^+ = (8 ± 3)× 10^-3 s^-1mm^2, (without making any attempt to rectify differences in the material properties and scaling variables appropriate to the different sperm flagella). Cricket sperm are of particular interest in terms of their dispersion relations, because their unusually long flagella (≈ mm) exhibit spatial variation in k and ω <cit.>. Thus an individual cell's flagellum allows a test of the model, where the biomechanical parameters are likely held constant. The inset in Fig. 
<ref>(c) shows cricket sperm data from <cit.>, where like colors represent values collected at different points along the same individual flagellum, along with a fit yielding a constant of α_Crick.^+ = (7 ± 4)× 10^-5 s^-1mm^2. We note that while previous models of spermatozoa force balance<cit.> resemble our model (<ref>), recent work has suggested that in the case of high-amplitude undulation <cit.>, shear dissipation and elasticity become dominant contributors to force balance. Hence, while our model is appropriate for the small-amplitude spermatozoa undulation seen in crickets, the scaling for high-amplitude spermatozoa may reflect additional terms not considered here. In conclusion, we have shown that force balance constrains the possible undulatory gaits achieved by viscoelastic, actively driven undulators in low-coasting environments to a one-dimensional curve given by (<ref>), with scaling ω∝ k^±2 determined by the relative importance of internal and external damping forces. This implies that ω and k cannot be independently selected without manipulation of constants such as the muscle torque amplitude m. While we have explained the origin of this constraint, our model does not allow predictions of how the undulator's nervous, or molecular, feedback control system selects a point along the curve ω∝ k^± 2. For nematodes, based on prior models of proprioception <cit.>, we hypothesize that the nervous system controls the frequency ω (slowing as viscosity increases) and that k is either fixed spontaneously through mechanics or as a result of mechanical entrainment of the locomotor neurons' oscillations along the body. Nematodes therefore target a set of gaits that maintain a relatively constant overall speed across different environments <cit.>, by enforcing the selection of higher wavenumber gaits that produce higher kinematic efficiencies <cit.> as frequency is decreased. This process enables environmentally robust locomotion with minimal control (i.e. requiring active control over a single parameter ω). Beyond biology, these insights could enable the construction of future robotic systems that take advantage of mechanics to simplify control <cit.>.
§ ACKNOWLEDGEMENTS This work was supported by NIH R01AG082039, the NSF Physics of Living Systems Student Research Network (GR10003305), and the NSF-Simons Southeast Center for Mathematics and Biology (SCMB) through National Science Foundation grant DMS1764406 and Simons Foundation/SFARI grant 594594.
§ REFERENCES
[1] E. M. Purcell, Am. J. Phys. 45, 3 (1977).
[2] D. L. Hu, J. Nirody, T. Scott, and M. J. Shelley, Proceedings of the National Academy of Sciences 106, 10081 (2009).
[3] B. Chong, J. He, S. Li, E. Erickson, K. Diaz, T. Wang, D. Soto, and D. I. Goldman, Proceedings of the National Academy of Sciences 120, e2213698120 (2023a).
[4] Note: Similar principles, often in the form of relationships between gait parameters, have been previously identified in many different categories of locomotion, including bipedal, quadrupedal, hexapodal and myriapodal locomotion <cit.>, flight <cit.>, and aquatic swimming <cit.>.
[5] A. Shapere and F. Wilczek, J. Fluid Mech. 198, 557 (1989).
[6] J. M. Rieser, B. Chong, C. Gong, H. C. Astley, P. E. Schiebel, K. Diaz, C. J. Pierce, H. Lu, R. L. Hatton, H. Choset, and D. I. Goldman, Proc. Natl. Acad. Sci. U. S. A. 121, e2320517121 (2024).
[7] J. Sánchez-Rodríguez, C. Raufaste, and M. Argentina, Nat. Commun. 14, 5569 (2023).
[8] T. McMillen, T. Williams, and P. Holmes, PLoS Computational Biology 4, e1000157 (2008a).
[9] Y. Ding, S. S. Sharpe, K. Wiesenfeld, and D. I. Goldman, Proceedings of the National Academy of Sciences 110, 10123 (2013).
[10] C. J. Pierce, Y. Ding, B. Chong, H. Lu, and D. I. Goldman, bioRxiv 2024.05.06.592744 (2024).
[11] V. J. Butler, R. Branicky, E. Yemini, J. F. Liewald, A. Gottschalk, R. A. Kerr, D. B. Chklovskii, and W. R. Schafer, Journal of the Royal Society Interface 12, 20140963 (2015).
[12] C. Fang-Yen, M. Wyart, J. Xie, R. Kawai, T. Kodger, S. Chen, Q. Wen, and A. D. Samuel, Proceedings of the National Academy of Sciences 107, 20323 (2010).
[13] M.-A. Félix and C. Braendle, Current Biology 20, R965 (2010).
[14] G. Juarez, K. Lu, J. Sznitman, and P. E. Arratia, Europhysics Letters 92, 44002 (2010).
[15] X. Shen and P. E. Arratia, Physical Review Letters 106, 208101 (2011).
[16] T. Majmudar, E. E. Keaveny, J. Zhang, and M. J. Shelley, Journal of the Royal Society Interface 9, 1809 (2012).
[17] T. Wang, C. Pierce, V. Kojouharov, B. Chong, K. Diaz, H. Lu, and D. I. Goldman, Sci. Robot. 8, eadi2243 (2023).
[18] Z. V. Guo and L. Mahadevan, Proc. Natl. Acad. Sci. U. S. A. 105, 3179 (2008).
[19] B. Jayne and G. Lauder, J. Exp. Biol. 198, 1575 (1995).
[20] K. D'Août, N. A. Curtin, T. L. Williams, and P. Aerts, J. Exp. Biol. 204, 2221 (2001).
[21] T. McMillen, T. Williams, and P. Holmes, PLoS Comput. Biol. 4, e1000157 (2008b).
[22] S. S. Sharpe, Y. Ding, and D. I. Goldman, J. Exp. Biol. 216, 260 (2013).
[23] J. F. van Weerden, D. A. P. Reid, and C. K. Hemelrijk, Fish Fish 15, 397 (2014).
[24] M. F. Velho Rodrigues, M. Lisicki, and E. Lauga, PLoS One 16, e0252291 (2021).
[25] J. Gray and H. W. Lissmann, J. Exp. Biol. 41, 135 (1964).
[26] K. M. Dorgan, C. J. Law, and G. W. Rouse, Proc. Biol. Sci. 280, 20122948 (2013).
[27] R. L. Panton, Incompressible Flow, 5th Edition (Wiley, 2024).
[28] R. Rikmenspoel, Biophys. J. 23, 177 (1978).
[29] I. H. Riedel-Kruse, A. Hilfinger, J. Howard, and F. Jülicher, HFSP J. 1, 192 (2007).
[30] J. F. Cass and H. Bloomfield-Gadêlha, Nat. Commun. 14, 5638 (2023).
[31] R. S. Batty, J. Exp. Biol. 110, 217 (1984).
[32] J. Maes, L. Verlooy, O. E. Buenafe, P. A. M. de Witte, C. V. Esguerra, and A. D. Crawford, PLoS One 7, e43850 (2012).
[33] V. Fischbach, A. Finke, T. Moritz, P. Polte, and P. Thieme, Limnol. Oceanogr. Methods 21, 357 (2023).
[34] H. Ji, A. D. Fouad, S. Teng, A. Liu, P. Alvarez-Illera, B. Yao, Z. Li, and C. Fang-Yen, eLife 10 (2021).
[35] N. C. Heglund, C. R. Taylor, and T. A. McMahon, Science 186, 1112 (1974).
[36] M. Hildebrand, Bioscience 39, 766 (1989).
[37] R. J. Full and M. S. Tu, J. Exp. Biol. 156, 215 (1991).
[38] B. Chong, J. He, D. Soto, T. Wang, D. Irvine, G. Blekherman, and D. I. Goldman, Science 380, 509 (2023b).
[39] T. N. Sullivan, M. A. Meyers, and E. Arzt, Sci. Adv. 5, eaat4269 (2019).
[40] M. Gazzola, M. Argentina, and L. Mahadevan, Nat. Phys. 10, 758 (2014).
http://arxiv.org/abs/2407.12923v1
20240717180037
T-duality for non-critical heterotic strings
[ "Héctor Parra De Freitas" ]
hep-th
[ "hep-th" ]
T-duality for non-critical heterotic strings Héctor Parra De Freitas Jefferson Physical Laboratory, Harvard University, Cambridge, MA 02138, USA Abstract We consider non-critical heterotic strings compactified on S^1. For full rank theories, they are related to odd self-dual lattices and are structurally of the same form as the critical non-supersymmetric theories. For dimensions up to 14 the associated moduli spaces are Coxeter polytopes already studied by Vinberg and Kaplinskaya. In the heterotic string context, the Coxeter diagrams of these moduli spaces are related through transformations representing the process of dimension changing tachyon condensation of Hellerman-Swanson. For dimensions 8 and 6 respectively on S^1 and T^2 we show that at special points in the moduli space the subcritical string is the CHS background for two coincident NS5-branes and the intersection of two such pairs. These configurations are interpreted as an end result of condensing heterotic winding tachyons along one or two Scherk-Schwarz circles at self-dual radius. We give evidence that in the first case there is a T-duality between the pair of NS5-branes and a recently constructed non-supersymmetric heterotic 6-brane. § INTRODUCTION It is a classical result in string theory (see e.g. <cit.>) that heterotic superstrings compactified on a circle admit marginal deformations spanning the coset ℳ≃ O(II_1,17)\ O(1,17;ℝ)/O(17;ℝ) , where O(II_1,17) is the group of automorphisms of the even self-dual lattice II_1,17. These deformations correspond to varying vacuum expectation values of 17 target space moduli fields, namely the circle radius and 16 constant gauge fields (Wilson lines) along the circle in the Cartan subgroup of the gauge group E_8× E_8 (HE) or Spin(32)/ℤ_2 (HO) <cit.>. On physical grounds we expect that at tree level this situation extends to any heterotic worldsheet with a periodic coordinate field, say X_9, and target space gauge bundle. One expects to have 1+n independent marginal deformations, where n is the rank of the gauge group, giving rise to a moduli space locally described by O(1,1+n;ℝ)/O(1+n;ℝ), and it remains to determine the arithmetic subgroup describing its global structure.[ When thinking about the global structure of moduli spaces it helps to refer to the simpler example of complex structures of elliptic curves, where the relevant space is the quotient of the upper half plane ℍ^+ by the group of Möbius transformations SL(2,ℤ). There are two finite distance singularities at τ = i and τ = e^2π i/3 as well as an infinite distance singularity at the limit τ→ i ∞, which can be associated to two non-Abelian symmetry enhancements and a decompactification limit in T^2 compactifications of bosonic/heterotic strings <cit.>. Likewise, the moduli space ℳ in (<ref>) has 44 finite distance singularities and 2 infinite distance singularities, which physically correspond to vacua with maximal nonabelian gauge symmetry as well as the decompactification limits to HE and HO <cit.>.] Oftentimes, this group is given by automorphisms of a certain hyperbolic lattice Γ_1,1+n, and in many situations it can be described in a controlled manner, see e.g. <cit.>. In this paper we will consider circle compactifications of heterotic strings of non-critical type <cit.>. These theories have D ≠ 10 target spacetime dimensions as well as a linear dilaton profile necessary e.g. to avoid the conformal anomaly. 
Supercritical strings (D > 10) have a time-like linear dilaton and yield time-dependent backgrounds;[The physical relevance of these supercritical theories is not too clear, but we are interested in their formal properties insofar as they could be used to understand those of critical theories.] they have been studied in detail in <cit.>. Subcritical strings (D < 10) on the other hand exhibit a spacelike linear dilaton, and may be related to near-horizon backgrounds of various brane configurations, see e.g. <cit.>. As a first result we will show in Section <ref> that circle compactification results in a classical moduli space ℳ_1,1+n≃ O(1,1+n;ℤ)\ O(1,1+n;ℝ)/O(1+n;ℝ) , where O(1,1+n;ℤ) is the group of automorphisms of the odd self-dual lattice I_1,1+n or equivalently the group of integral matrices preserving the quadratic form -x_0^2+x_1^2+⋯ x_n^2. We will then transpose known results in the theory of hyperbolic reflection groups <cit.> to describe these spaces diagramatically and extract physical data. In the special case of D = 2, these results will complement previous investigations of the moduli space carried out in <cit.> where the circle is thermal (timelike). Subcritical strings have also received renewed attention as they seem to describe the linear dilaton background of non-supersymmetric heterotic branes whenever the theory is tachyon-free <cit.>. There are four such theories, with gauge groups G = E_8, SU(16)/ℤ_2, E_7× E_7, Spin(24)/ℤ_2, respectively in D = 9,8,6,2; there are corresponding non-supersymmetric p-branes with p = 7,6,4,0. The cases p = 0,4 are known from <cit.>, while the cases p = 6,7 are given in <cit.>. Moreover, these theories can arise as the end-point of a tachyon condensation process in a tachyonic D = 10 non-supersymmetric string <cit.>. In Section <ref> we consider how this picture is enriched when there is a compact direction (we do not consider the special case p = 7), and find among other things that * The linear dilaton background of the 6-brane compactified on a circle with suitable Wilson lines becomes supersymmetric, and is T-dual to the background of two coincident NS5-branes given by the CHS model <cit.>. * In turn, the two NS5-branes can be thought of as the end-point of tachyon condensation for the two winding tachyons of a Scherk-Schwarz reduction of HO at self-dual radius. This result generalizes to any heterotic string with or without supersymmetry. * Two circle compactifications of the linear dilaton background of the 4-brane can similarly be related to the intersection of two pairs of coincident NS5-branes. Motivated by this physical picture we will intertwine our discussion of the moduli spaces ℳ_1,1+n and symmetry enhancements with considerations of tachyon condensation and interpret them, at least formally, to be connected through this process. The backgrounds that we consider in this paper are not gravitational and so it is a priori not clear what we can learn from them as far as quantum gravity is concerned. Their study, however, should shed light into the nature of tachyons in string theory. More precisely, we would like to understand why tachyons appear in a generic way when supersymmetry is broken (see <cit.> for a recent construction of heterotic models without tachyons and neutral scalars in various dimensions). Rather than take this to be an illness of the theory, we take the point of view that their appearance should be systematically understood, in this case restricting our attention to heterotic strings. 
In this paper we show in particular that the tachyonic states appearing in a Scherk-Schwarz reduction of any heterotic string are related to the existence of NS5-branes in the original supersymmetric theory. This paper is organized as follows. In Section <ref> we work out the global structure of circle compactifications of noncritical heterotic strings and how this information is represented using Coxeter diagrams. In Section <ref> we show how every point of maximal symmetry enhancement can be obtained from these Coxeter diagrams, or alternatively from a higher dimensional tachyonic theory through tachyon condensation. In Section <ref> we interpret our results in terms of heterotic branes and derive various relationships among them. We discuss our results in Section <ref>. § NON-CRITICAL HETEROTIC STRINGS ON S^1 In this section we briefly review the construction of non-critical Spin(2n) heterotic strings and compactify them on a circle S^1. We then determine their T-duality group to be the group of automorphisms of the lattice Γ_1,1⊕ D_n, equivalent to O(1,1+n;ℤ). This group determines the global structure of the moduli space.[By moduli space we mean the space of marginal deformations of the heterotic worldsheet. In general the usual moduli fields are massless up to a shift in m^2, see below.] Its reflective part is seen to be encoded in a Coxeter diagram, from which one easily reads in particular all infinite distance limits comprising every rank n non-critical heterotic string in the corresponding number of spacetime dimensions. We then comment on how these diagrams help to understand the effect of tachyon condensation in the compactified theories. Particular attention is given to the theories with 13 ≤ n ≤ 18, i.e. 4 ≤ D ≤ 14, with D the number of dimensions before compactification. The special case D = 2 is considered at the end. §.§ Spin(2n) heterotic strings Non-critical strings can be studied starting from the simplest family of heterotic strings, which in the fermionic formulation consists of theories with 2n free left-moving Majorana-Weyl (MW) fermions λ^A, A = 1,...,2n, in the internal field theory. For n = 16 the central charge contribution is c_L^int = 16 and the theory is critical with ten spacetime dimensions in target space. The 32 fermions give rise to an 𝔰𝔬_32 gauge algebra, but working out the spectrum one finds states transforming as vectors, spinors and co-spinors of Spin(32), hence the full gauge group is simply connected. We find in particular a spacetime tachyon transforming in the vector representation. This is one of the tachyonic heterotic strings discovered in <cit.>. For generic n, we have c_L^int = n, and one must introduce a linear dilaton background to avoid a conformal anomaly. Overall consistency requires changing the number of spacetime dimensions. The right-moving part of the heterotic worldsheet has N = 1 supersymmetry, hence for each right-moving coordinate field X_R^μ there is a corresponding superpartner fermion ψ_R^μ. Since a linear dilaton background contributes both to c_L and c_R by the same amount, changing the number of λ fields by Δ n requires doing the same for X_L, X_R and ψ_R. Taking Φ = -V_μ X^μ, we have c_Φ = 6α' V_μ V^μ , and so we require 6 α' V_μ V^μ = -Δ n. From now on we will set α' = 1. For positive Δ n we obtain supercritical strings with a time-like linear dilaton, and for negative Δ n we obtain subcritical strings with a space-like linear dilaton. 
Working in lightcone gauge, the torus partition function 𝒵(τ,τ̅) can be written in the bosonic formulation in terms of conjugacy classes of Spin(2n); the linear dilaton does not enter explicitly, although its presence is felt by the effective change in the number of spacetime dimensions D. This function is of the form[c.f. eq. A.8 in <cit.>, with their n equal to our 2(n-16).] 𝒵^D(τ,τ̅) = 1/(√(τ_2) ηη̅)^D-2(O_2nV̅_D-2 + V_2nO̅_D-2 - S_2nS̅_D-2 - C_2nC̅_D-2) , where η is the Dedekind eta function and every function except τ_2 = Imτ on the RHS is holomorphic (unbarred) or anti-holomorphic (barred). We use the Spin(2n) characters O_2n = 1/2η^n(ϑ_3^n + ϑ_4^n) ,       V_2n = 1/2η^n (ϑ_3^n - ϑ_4^n) , S_2n = 1/2η^n (ϑ_2^n + ϑ_1^n) ,       C_2n = 1/2η^n (ϑ_2^n - ϑ_1^n) , with ϑ_1,2,3,4 the usual Jacobi theta functions evaluated at zero chemical potential. From the analysis above, there is a constraint D = 10 + 2n - 32, and so 2n ≥ 24 with the lower bound saturated by the subcritical 2D heterotic string with gauge group Spin(24). The supercritical case is referred to as HO^+(m)/ in its first extensive study <cit.>, where m is the number of dimensions beyond 10, i.e. m = 2(n-16). §.§ Circle compactification Compactifying our Spin(2n) theory on a circle is done by exchanging 1/√(τ_2)ηη̅→1/ηη̅∑_(p_L,p_R)∈Γ_1,1 q^1/2 p_L^2 q̅^1/2 p_R^2 , with Γ_1,1 the unique even self-dual lattice of signature (1,1). This factor can be combined with each of the conjugacy classes inside the parentheses in (<ref>), and it is convenient to write the overall partition function of the compactified theory as 𝒵^D_S^1(τ,τ̅) = 1/(√(τ_2) ηη̅)^D-3(𝒵_v,nV̅_D-2 + 𝒵_o,nO̅_D-2 - 𝒵_s,nS̅_D-2 - 𝒵_c,nC̅_D-2) , with 𝒵_ω,n = 1/η^n+1η̅∑_(P_L,p_R) ∈Γ^ω_1,n+1 q^1/2 P_L^2 q̅^1/2 p_R^2 ,      ω = v,o,s,c . Here Γ^v_1,n+1 is the hyperbolic lattice Γ_1,1⊕ D_n, and the remaining sets correspond to replacing D_n by D_n + y with y a non-trivial element in the set of conjugacy classes D_n^*/D_n. This light-cone computation cannot be performed for the special case D = 2; we will come back to this problem in Section <ref>. We split P_L = (P,p_L) with P, p_L the gauge lattice and circle contributions respectively, so that the momenta take the form P = π + Aw , p_L = 1/(√2 R) (n + (R^2 - 1/2 A^2)w - A·π) , p_R = 1/(√2 R) (n - (R^2 + 1/2 A^2)w - A·π) . R is the circle radius and we have turned on arbitrary Wilson line moduli A = (A_1,...,A_n) for the gauge group Spin(2n) along S^1; n ∈ℤ denotes the Kaluza-Klein momentum, w ∈ℤ the winding number and π∈ D_n+y the momentum along the internal D_n lattice or its other conjugacy classes depending on the sector under consideration. The spectrum of the theory suffers an effective mass shift on right-moving NS states due to the presence of the linear dilaton, reflected in the change of characters in the partition function relative to the critical case. The mass formula and level matching conditions thus read 1/2 m^2 = 1/2 P_L^2 + 1/2 p_R^2 + N_L + N_R - 1/2 δ_D - { 1 (R sector) ; 3/2 (NS sector) } , 0 = 1/2 P_L^2 - 1/2 p_R^2 + N_L - N_R - { 1 (R sector) ; 1/2 (NS sector) } , where N_L,R are oscillator numbers and δ_D ≡ (10-D)/8 is the aforementioned shift in m^2. For massless states in the R sector, this shift is canceled by an equivalent shift in P_L^2 resulting from the reduction of the gauge bundle. We will come back to this point in Section <ref>. We will usually refer to states in the NS sector as being massless when their effective mass is due solely to the linear dilaton, i.e. when m^2 = (10-D)/8, and similarly for tachyons. 
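As a quick numerical sanity check on the momenta above, the sketch below (Python) verifies that the combination P^2 + p_L^2 - p_R^2 reduces to the moduli-independent lattice norm π^2 + 2nw for random radii, Wilson lines and charges, i.e. that deformations of (R, A) act as boosts on the charge lattice. The sampled values are arbitrary; the check itself only uses the formulas quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def momenta(nn, w, pi, R, A):
    """Charge-lattice and circle momenta for the Spin(2n) theory on S^1.

    nn, w are the KK momentum and winding; pi is the internal weight vector;
    R is the circle radius and A the Wilson line (alpha' = 1)."""
    P = pi + A * w
    x = nn - A @ pi
    pL = (x + (R**2 - 0.5 * A @ A) * w) / (np.sqrt(2) * R)
    pR = (x - (R**2 + 0.5 * A @ A) * w) / (np.sqrt(2) * R)
    return P, pL, pR

n_rank = 13                                    # e.g. the D = 4 theory
for _ in range(5):
    R = rng.uniform(0.3, 3.0)
    A = rng.normal(size=n_rank)
    nn, w = rng.integers(-5, 6), rng.integers(-5, 6)
    pi = rng.integers(-3, 4, size=n_rank).astype(float)
    P, pL, pR = momenta(nn, w, pi, R, A)
    norm_from_moduli = P @ P + pL**2 - pR**2
    norm_of_lattice = pi @ pi + 2 * nn * w     # moduli-independent
    assert np.isclose(norm_from_moduli, norm_of_lattice)
print("P^2 + p_L^2 - p_R^2 = pi^2 + 2nw for all samples, as expected")
```

At special moduli, norm-2 charge vectors with p_R = 0 supply extra gauge bosons; for instance, at A = 0 and R = 1 the vector (n, w; π) = (1, 1; 0) has p_R = 0, which is the familiar SU(2) enhancement at the self-dual radius.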
Up to this shift, the theory generically has the usual graviton, B-field and dilaton as well as 18 gauge bosons furnishing the gauge group U(1)^17_L × U(1)_R, and no massless fermions. At special points in the moduli space, however, we expect to have extra low lying states in each of the four conjugacy classes, including massless gauge bosons, tachyons as well as massless spinors and conjugate spinors. The problem of determining these enhancements was addressed in detail in <cit.> for the critical case, and we will see here that for non-critical strings the situation is analogous. §.§ The moduli space At the local level, the moduli space of our circle-compactified theory is the (n+1)-dimensional hyperbolic space ℳ_1,n+1 = O(1,1+n)/O(1+n) . We would like to know what is the global structure of this space. To this end we look for the group of discrete symmetries acting on the moduli that leave invariant the formulas (<ref>) and (<ref>), with suitable transformations on the quantum numbers n,w,π. In other words, we look for the T-duality group. A natural candidate for the T-duality group is the group of automorphisms of the momentum lattice. In the case of toroidal compactifications of the E_8 × E_8 and the Spin(32)/ℤ_2 heterotic strings, this correspondence is a standard result. It was in fact shown in <cit.> that this correspondence also holds for toroidal compactifications of the rank 16 non-supersymmetric heterotic strings in ten dimensions, which is nothing but the case n = 16 in our family of heterotic strings. Let us briefly review this result in the context of generic n. First note that the full momentum lattice is the dual of Γ^v_1,n+1, as it contains Γ^v_1,n+1 itself as well as its three non-trivial conjugacy classes. However, the automorphism group of a lattice L and its dual lattice L^* are the same. Consider then the group of automorphisms of the lattice Γ^v_1,n+1 = Γ_1,1⊕ D_n, denoted O(Γ^v_1,n+1). It is obvious that it mixes states in the spacetime vector class among themselves and by construction respects their quantization conditions. What we need to check is if it does the same for the other three spacetime classes. We may write any element of Γ^o_1,n+1 as an element of Γ^v_1,n+1 plus the vector y with n = w = 0 and π = (1,0^n-1), and since y^2 = 1 it cannot be mapped to an element in any other class, hence Γ^o_1,n+1 is invariant as a class under O(Γ^v_1,n+1). Applying this analysis to the spinor classes it remains only a possibility that they are exchanged, but this is not an issue at all. Our theory is compactified on a circle and so it is not chiral, hence there is no physical distinction between the spinor and conjugate spinor classes. Finally, the group O(Γ_1,n+1^v) is equivalent to O(Γ_1,n+1^v∪Γ_1,n+1^o), where Γ_1,n+1^v ∪Γ_1,n+1^o ≃I_1,n+1 is the unique odd self-dual hyperbolic lattice,[This is easily seen by noting the isomorphism D_n∪(D_n+y) ≃ℤ^n where y is the vector conjugacy class of D_n, and Γ_1,1⊕ℤ^n ≃I_1,n+1.] hence O(Γ_1,n+1^v)≃ O(1,n+1;ℤ). We conclude that for arbitrary n, the T-duality group is O(Γ^v_1,n+1) ≃ O(1,n+1;ℤ), or O_1,n+1 for short. Fortunately for us, these groups were studied in detail by Vinberg <cit.> and Vinberg-Kaplinskaya <cit.> a long time ago for n ≤ 18; the cases n = 19,...,22 were worked out subsequently by Borcherds <cit.>. 
They take the form O_1,n+1 = O^r_1,n+1⋊ H_n+1 , where O^r_1,n+1 is the maximal reflexive subgroup of O_1,n+1 and H_n+1 is the group of outer automorphisms of Γ^v_1,n+1.[The subscript n+1 denotes the dimension of the hyperbolic space we are working with, while 1,n+1 is otherwise used to emphasize the signature of the corresponding object. Also note: the outer automorphism of D_n is a reflection in D_n^* generated by short roots, hence it is in O^r_1,n+1 and not in H_n+1.] The quotient 𝒫_n+1 = O^r_1,n+1\ℳ_1,n+1 defines a cover of the fundamental domain O_1,n+1\ℳ_1,n+1 which is particularly convenient to work with. It is the fundamental cell of a group of reflections in hyperbolic space, known as a Coxeter polytope. These reflections are finitely generated, such that the generators are in correspondence to a set of codimension 1 walls bounding 𝒫_n+1. The way in which these walls intersect can be encoded in a Coxeter diagram Σ_n+1 as shown in Figure <ref> for 13≤ n ≤ 16. For supercritical strings these diagrams are more complicated, and we show the cases n = 17,18 in Figure <ref> (see <cit.> for higher n). The group of symmetries of the polytope 𝒫_n+1 is equivalent to H_n+1 for n ≤ 18. At the locus of each wall represented by a white node, there occurs an U(1) → SU(2) symmetry enhancement in the left-moving gauge group. At the intersection of various walls, one reads off the corresponding symmetry enhancement from the Dynkin diagram they form. At walls represented by black nodes there is however a different kind of enhancement. Namely, a pair of tachyons acquire their lowest possible value of m^2. This corresponds to an “enhancement" of a Spin(2) symmetry with tachyons in the vector representation. The intersection of such a wall with the other ordinary walls is represented by a B-type Dynkin diagram, and must be read as an enhancement of a Spin(2n) gauge symmetry together with 2n tachyons transforming in the vector representation. We should keep in mind that all of these enhancements lie in the NS sector and so the corresponding states have shifted m^2. The appearance of massless fermions is not directly encoded in these diagrams, but can be worked out separately. §.§ What we learn from the diagrams As already stated, the Coxeter diagram Σ_n+1 for a circle compactified Spin(2n) heterotic string encodes the possible non-Abelian gauge symmetry enhancements in the moduli space. The exact values for the moduli R and A at which such enhancements occur can be obtained by specifying the root vectors corresponding to each node, i.e. by giving their values for n,w and π. We leave the explanation for this procedure and the corresponding results to Section <ref>. Here we simply highlight some important features of our theories that these diagrams teach us. §.§.§ Infinite distance limits One of the main features of the diagrams Σ_n+1 is that they encode all the vertices of the polytope 𝒫_n+1 as maximal Dynkin subdiagrams. These vertices represent maximal symmetry enhancements and can be at finite or infinite distance, corresponding respectively to ordinary or extended Dynkin diagrams. The infinite distance limits give decompactifications to the (non)critical heterotic strings of rank n in the corresponding dimension D. 
For example, 𝒫_17 has eight extended Dynkin subdiagrams reading E_8⊕ E_8 ,      D_16 ,      D_8 ⊕ D_8 ,      A_15⊕ B_1 , E_7⊕ E_7 ⊕ B_2 ,      D_12⊕ B_4 ,      E_8 ⊕ B_8 ,      B_16 , corresponding to both the supersymmetric and the non-supersymmetric heterotic strings in ten dimensions with rank 16.[From the point of view of the lower dimensional theory, the gauge symmetry does correspond to an extended Dynkin diagram, i.e. it becomes affine, as shown for the supersymmetric case in <cit.>. See also <cit.>.] The moduli space ℳ_1,17 thus interpolates between these eight theories, a fact that was already demonstrated in <cit.>. Also note that by deleting the red nodes one easily restricts to infinite distance limits without tachyons. This interpolation between rank n heterotic strings extends to the non-critical cases. For subcritical theories the subdiagrams are given by those in (<ref>) by transforming B_m → B_m-ℓ with ℓ = (10-D)/2 when possible. For example, the diagram Σ_16 contains the five extended Dynkin subdiagrams A_15 ,      E_7 ⊕ E_7 ⊕ B_1 ,      D_12⊕ B_3 ,      E_8 ⊕ B_7 ,      B_15 . For supercritical strings, the procedure is reversed. As one adds pairs of MW fermions λ, factors of type B_m appear or are extended, and new tachyon-free theories appear (we mean tachyon-free up to m^2 shift). Indeed in D = 12, 14 there are tachyon-free heterotic strings with gauge algebras[In general, non-critical strings can be constructed by taking the left-moving internal part of the worldsheet CFT to be a chiral fermionic theory with c = n. This construction is nicely reviewed in <cit.> for critical non-supersymmetric strings, and the classification of such chiral fermionic theories up to c = 24 is achieved in <cit.> (see also <cit.>). For our Spin(2n) theories and their T-duals, this classification is equivalent to that of odd self-dual lattices, which is much older (see <cit.> for c ≥ 25).] D = 12:      𝔰𝔲_12⊕𝔢_6 , D = 14:      𝔰𝔲_17⊕𝔰𝔲_2 ,      𝔰𝔬_20⊕𝔢_7 ⊕𝔰𝔲_2 ,       3 𝔰𝔬_12 ,       2𝔰𝔲_10 , and these can be seen as extended Dynkin subdiagrams of the corresponding Coxeter diagrams shown in Figure <ref>. §.§.§ How tachyon condensation affects the moduli space Tachyon condensation can dynamically reduce the number of target space dimensions of a heterotic string <cit.>, transforming for example a critical tachyonic theory to a subcritical tachyon-free one. This particular situation was studied in detail by Kaidi in <cit.>, following the work of Hellerman-Swanson <cit.>. Here we consider the interplay between tachyon condensation and compactification on a circle as reflected on the Coxeter diagrams. We do not dwell on the physics of tachyon condensation but rather on the worldsheet theories it connects (up to a light-like linear dilaton in the initial setup.) This formalism for tachyon condensation in heterotic strings requires the tachyon field 𝒯(X) to couple to left-moving free MW fermions λ through a superpotential W = λ :𝒯(X): . This occurs when the tachyon has the lowest possible value of m^2, so that it corresponds to a B_n subdiagram in the Coxeter diagram. If we wish only to condense a pair of tachyonic states then we can take them to correspond to one of the red nodes in the diagram, without loss of generality.[These tachyons correspond to norm 1 vectors in Γ_1,n+1^o, all of which lie in the same T-duality orbit. Two red nodes in Σ_n+1 are equivalent under outer automorphisms, i.e. symmetries of the diagram.] 
This process reduces the dimension by 2, and so it is naturally represented as a transformation of the Coxeter polytope 𝒫_n+1 into 𝒫_n. We can think of the corresponding Coxeter diagrams as being connected through a series of transformations Σ_n+1→Σ_n representing the physical process of tachyon condensation. Indeed, this transformation proceeds by eliminating the tachyonic nodes and replacing ordinary nodes by new tachyonic ones in a way that is not entirely trivial. What is perhaps more important is the fact that, for n ≤ 16, tachyon condensation connects one moduli space to another one in an unique way, and so this procedure commutes with circle compactification (for n = 17,18 there is a subtlety that we address below.) One may have anticipated this fact from the uniqueness of the moduli spaces themselves. In any case, this fact allows us to interpret any point in the moduli space of a subcritical string as resulting from the condensation of a tachyon in a higher dimensional theory, which is relevant for the study of heterotic branes. §.§.§ Supercritical strings have two kinds of tachyons The groups O_1,18 and O_1,19 were studied by Vinberg and Kaplinskaya in <cit.>. At a glance, a striking feature appears. Unlike the cases considered above, their associated diagrams Σ_18 and Σ_19 showcase two inequivalent kinds of walls corresponding to short roots. Let us work out what this means in the first case. The Coxeter diagram Σ_18 is shown in Figure <ref>. The long roots form a tetrahedron which naturally results from replacing the short roots in Σ_17 by two disconnected long roots. There are three short roots of the first kind, which are analogous to those found in Σ_p+1 ≤ 17. They are linked to the long roots with double lines, and the intersections of their corresponding walls occur at infinite distance. We thus expect that tachyon condensation as explained above leads to the theory corresponding to Σ_17. On the other hand, we find 12 new short roots of the second kind, which are linked in a different manner. A physically sensible expectation is that such roots correspond to tachyons which condense to a critical supersymmetric theory. Here we draw only one representative for each inequivalent short root. As for the previous cases Σ_n+1 ≤ 17, the red nodes form an orbit under the symmetry group of the full diagram, and can be easily seen to add up to three using a ℤ_3 subgroup of H_18≃ S_4. Similarly, the blue nodes form a 12-element orbit under H_18. Tachyon condensation selects out the orthogonal complement of a D_1 sublattice in Γ_1,18^v, and we expect there to be two inequivalent choices. These can indeed be read off from the two presentations of Γ_1,18^v, Γ_1,1⊕ [D_16⊕D_1]^(v,v)≃Γ_1,1⊕ E_8 ⊕ E_8 ⊕D_1 , where (v,v) is a gluing vector such that [D_16⊕ D_1]^(v,v) = D_17. In the first case, the orthogonal complement of D_1 gives Γ_1,17^v, while in the second it gives the Narain lattice of supersymmetric heterotic strings on S^1. Incidentally, if one removes all the tachyonic nodes in Σ_18 as well as the three nodes connected to the blue node, there remains the Coxeter diagram of Γ_1,1⊕ E_8⊕ E_8. The reason is that a given blue node is connected to every other blue node as well as every red node; removing all the connected nodes leaves out a set of generating roots for the supersymmetric Narain lattice, which is the diagramatic equivalent of taking the orthogonal complement of the corresponding D_1. The diagram Σ_19 also contains short roots of the first and second kind. 
Condensing a pair of tachyons of any type will inevitably lead to Σ_18 as there is no D = 12 supersymmetric heterotic string. The (primitive) lattice embedding D_1 ↪Γ_1,19^v must be unique. Presumably, the difference between the types of nodes in Σ_19 accounts for the inequivalent ways of condensing four tachyons, which can indeed lead to the supersymmetric theory in ten dimensions. The structure of this diagram is more intricate than Σ_18, containing in total five short roots of the first kind and twenty of the second kind. One can carry out this construction in a controlled manner up to Σ_23, see <cit.>. §.§ Note on 2D strings The case of D = 2 subcritical theories is special in that the space-like direction cannot be compactified due to the linear dilaton profile. One may however compactify the time direction, obtaining a moduli space as before. This setup was studied some time ago in <cit.>, but the T-duality group remained undetermined. It was sensibly conjectured to be O(1,13;ℤ) and our results imply that this is indeed the case.[We do note that in the physics literature the notation O(p,q) is usually abused in that there is no specification of the metric with respect to which the elements of the group are orthogonal. In our case we do mean orthogonal matrices with respect to the Minkowskian metric, which act as automorphisms of the odd self-dual lattices. Such lattices appear naturally in the covariant formulation of <cit.> used in <cit.>.] The mass formula and level matching conditions for the spectrum follow the same structure as the spacelike compactifications in higher dimensions due to Lorentz invariance, so that the type of T-duality group generalizes to this case even if we have not written down the partition function explicitly. The group O(1,13;ℤ) is purely reflective, and so the moduli space is exactly the Coxeter polytope 𝒫_13. In going from Σ_14 to Σ_13 the ℤ_2 outer automorphism is broken. The Coxeter diagram Σ_13 takes the following form <cit.>:[The labeling of nodes follows the general form employed in Section <ref>.] [Coxeter diagram Σ_13: the nodes 1–10 and γ form a chain 1–2–⋯–10–γ; node 0 is attached to node 2, with β attached to node 0; node 11 is attached to node 10 and is joined by a double edge to the short (tachyonic, red) root 12.] By deleting the pairs (β,12), (β,γ) we obtain the diagrams D_12 and B_12 corresponding to the two heterotic theories referred to as HO and THO in <cit.>. The first has gauge group G = Spin(24)/ℤ_2, while the second has G = Spin(24), with 24 tachyonic states in the vector representation of G becoming massless due to the linear dilaton shift. Deleting node 8 results in the diagram E_8⊕ B_4 corresponding to the heterotic E_8× Spin(8) theory likewise referred to as HE. The interpolation between the three heterotic theories is hence manifest. 
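The node deletions quoted above are easy to check mechanically. The sketch below (Python) encodes the Σ_13 diagram exactly as described in the bracketed reconstruction above and reports, for each deletion, the number of nodes, simple and double edges, connected components, and the degree pattern, which match the affine D_12, affine B_12 and affine E_8 ⊕ B_4 shapes respectively. Since the adjacency is read off from the reconstructed figure rather than from the original source, it should be treated as an assumption.

```python
from collections import Counter

# Adjacency of Sigma_13 as reconstructed above; the single double edge ends on
# the short (tachyonic) root 12.
simple = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9),
          (9, 10), (10, 'gamma'), (2, 0), (0, 'beta'), (10, 11)]
double = [(11, 12)]
nodes = {x for e in simple + double for x in e}

def analyze(deleted):
    keep = nodes - set(deleted)
    s = [e for e in simple if set(e) <= keep]
    d = [e for e in double if set(e) <= keep]
    deg, adj = Counter(), {v: set() for v in keep}
    for a, b in s + d:
        deg[a] += 1
        deg[b] += 1
        adj[a].add(b)
        adj[b].add(a)
    # count connected components with a depth-first search
    seen, comps = set(), 0
    for v in keep:
        if v not in seen:
            comps += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return len(keep), len(s), len(d), comps, sorted(Counter(deg.values()).items())

for deleted, label in [(('beta', 12), 'delete (beta,12): expect affine D_12'),
                       (('beta', 'gamma'), 'delete (beta,gamma): expect affine B_12'),
                       ((8,), 'delete node 8: expect affine E_8 + affine B_4')]:
    print(label, analyze(deleted))
```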
§ MAXIMAL SYMMETRY ENHANCEMENTS In this section we explain how the moduli for all maximal symmetry enhancements can be extracted from the Coxeter diagrams for each non-critical circle compactification. We then show how the different enhancements are related across dimensions through tachyon condensation, allowing one to extract the spectral data for a D-dimensional theory from that of the (D+2)-dimensional one. In particular, the data for the critical case has been obtained in <cit.>, hence the data for all subcritical theories can be readily obtained. §.§ Extracting the data from the diagrams To work with a Coxeter diagram we must first choose a presentation for the roots corresponding to each node. We will use the Spin(2n) heterotic frame, which easily generalizes across dimensions. There is a B_n chain corresponding to the gauge bundle, whose simple roots may be written as α_1 = (1,-1,0^n-2) ,      ... ,     α_n-1 = (0^n-2,1,-1) ,      α_n = (0^n-1,1) . We embed them into the charge sublattice Γ_1,n+1^v ∪Γ_1,n+1^o ≃I_1,n+1 as vectors φ_i = |0,0;α_i⟩ ,      i = 1,...,n , and extend the diagram to the affine B_n by adding the lowest root with non-trivial KK momentum φ_0 = |-1,0;-1,-1,0^n-2⟩ . For generic n there are two more extensions given by the vectors φ_β = |1,1;0^n⟩ ,      φ_γ = |-2,2;-1^10,0^n-10⟩ . For n = 12 the set {φ_0,...,φ_n,φ_β,φ_γ} furnishes the whole Coxeter diagram. For higher n we add the vectors n = 13:           φ_δ = |-3,2;-1^13⟩ , n = 14:           φ_δ = |-3,2;-1^14⟩ , n = 15:           φ_δ = |-3,2;-1^14,0⟩ ,      φ_ε = |-4,4;-2^6,-1^9⟩ . The way in which these vectors give rise to the Coxeter diagrams is given in Figure <ref>. From the mass formula (<ref>), the locations of the walls are found by setting p_R = 0 in (<ref>) after plugging in the quantum numbers for the corresponding root: φ_i:            A_i - A_i+1 = 0 ,       i = 1,...,n-1 , φ_n:            A_n = 0 , φ_β:            R^2 + 1/2 A^2 = 1 , φ_γ:            2R^2 + A^2 = ∑_i = 1^10 A_i - 2 , φ_δ:            2R^2 + A^2 = ∑_i = 1^m A_i - 3 ,       m = { 13 (n = 13) ; 14 (n = 14, 15) } , φ_ε:            4R^2 + 2A^2 = 2∑_i = 1^6 A_i + ∑_i = 7^15 A_i - 4 . Symmetry enhancements are then obtained by selecting a Dynkin subdiagram of the Coxeter diagram and then fixing the constraints associated to the corresponding nodes. Similarly, decompactification limits are obtained by fixing the constraints associated to an affine Dynkin subdiagram. §.§ Transforming the spectra through tachyon condensation As we have explained, every point in the moduli space of some heterotic string can be interpreted as resulting from tachyon condensation of another in higher dimensions. In the case of circle compactifications these moduli spaces are unique, and this implies for example that every maximal symmetry enhancement contained in the diagram Σ_16 can be obtained from one in Σ_17 for which there is a pair of tachyons. The question is then what is the effect of tachyon condensation at the level of the light spectrum of the higher dimensional theory. From the results of <cit.> we expect that the light bosonic states acquire a mass shift while massless fermionic states remain massless. Let us then explain how this expectation is realized in our language: * Bosons: Bosonic states have electric charge lying in the lattice Γ^v_1,n+1∪Γ^o_1,n+1≃I_1,n+1. A generic region in moduli space with a pair of maximally tachyonic states is given by the polarization I_1,n+1 = I_1,n⊕ℤ, with the tachyonic states having charge ± 1 in ℤ. 
The bosonic states in the lower dimensional theory have charges in I_n and will simply correspond to those in the original theory which are neutral under the Spin(2) associated to the tachyons, with their mass shifted as in eq. (<ref>). * Fermions: Writing the vector class as Γ^v_1,n+1 = Γ_1,1⊕ D_n, the spinor classes all have π∈ D_n+s or π∈ D_n+c, hence for any enhancement of the gauge symmetry group to Spin(2n) with tachyons in the vector representation, every fermionic state in the spectrum is charged under Spin(2n) in the spinor or co-spinor representation. In the case n = 1 corresponding to the situation being considered now, the squared mass of every fermionic state receives a contribution P_L,t^2 ∈ (2ℤ+1)^2/4 from its Spin(2) charge; for massless states this is P_L,t^2 = 1/4. This contribution is absent in the lower dimensional theory, and since δ_D - δ_D-2 = -1/4, the mass of the corresponding fermionic states is invariant when it is zero. More precisely, there are towers of fermionic states with various Spin(2) charge which “collapse" to massless states in the lower dimensional theory; a similar situation holds for massive states. What we learn from this analysis is that the spectrum of every maximal enhancement in a D-dimensional theory can be obtained in a simple way from a maximal enhancement in a (D+2)-dimensional theory with a pair of extremal tachyons. In particular we know the complete list of maximal enhancements in the case D = 10 <cit.>, and from it we can obtain every enhancement in the subcritical theories. The data obtained in this way will be exactly equivalent to that obtained from the Coxeter diagrams as described above. The latter method offers the advantage of furnishing the values of the moduli for which the enhancements occur, which can be put into the partition function (<ref>) to obtain the degeneracies for all massive levels. In the next section we will look at some enhancements which have a special interpretation in terms of heterotic branes. § APPLICATIONS TO HETEROTIC BRANES In this section we interpret the noncritical strings discussed above as furnishing linear dilaton backgrounds of heterotic branes. We will first consider the subcritical strings arising from tachyon condensation of the critical theory on a circle at special points corresponding to Scherk-Schwarz reductions at self-dual radius, which turn out to correspond to CHS models describing pairs of coincident NS5 branes. We then show how in the case of the Spin(32)/ℤ_2 heterotic string this background is dual to that of the non-supersymmetric 6-brane of <cit.> compactified on a circle with suitable Wilson lines. A similar relationship is seen to hold between the non-supersymmetric 4-brane on T^2 and an intersection of two pairs of NS5-branes in the E_8 × E_8 string, but only seems valid at low energies. §.§ A pair of NS5's from tachyon condensation Recall that a Scherk-Schwarz reduction is realized by compactifying a theory, say the E_8× E_8 heterotic string, on a circle S^1 with a (-1)^F holonomy, where F is the spacetime fermion number. It is well known that as the radius R of the circle becomes smaller, two winding states ±|n = 1, w = 12; π = 0⟩ become massless at a critical radius R = R_H below which they are tachyonic, signaling the so-called Hagedorn phase transition <cit.> in the analogous time compactification. 
For heterotic strings, T-duality implies that the tachyons actually acquire their lowest possible squared mass at the self-dual radius R_sd, after which there is a presumed second Hagedorn radius R_H'. What is important for us is the fact that at R = R_sd the pair of winding tachyons have lowest possible m^2 and so their condensation can be studied using the machinery of <cit.>. A second important point is that at R = R_sd two extra winding states ±|n = 1, w = -12;0⟩ become massless, enhancing the graviphoton U(1) to an SU(2) at level 2. It follows that the contribution from the compact direction to the worldsheet CFT consists of a pair of free left-moving MW fermions λ,λ' associated to the tachyons and a triplet of free right-moving MW fermions furnishing the SU(2)_2 current. Turning on a suitable light-like linear dilaton and tachyon profile, the coordinate fields for two non-compact directions X_L,R^7,8, as well as their right-moving superpartners ψ_R^7,8 and left-moving fields λ,λ' become infinitely massive at late times, decoupling from the worldsheet. The end result is thus a worldsheet CFT describing the background ℝ^5,1×ℝ_ϕ× SU(2)_2 × (E_8× E_8)_1, where ℝ_ϕ is a linear dilaton direction resulting from 1-loop corrections to the initial lightlike dilaton. This is exactly the CHS background generated by k = 2 coincident NS5 branes in the initial theory <cit.>. In spacetime we might interpret this process as producing a local tachyon condensate wherein the non-critical string description applies <cit.> (see also <cit.>) and the worldvolume of the NS5 brane stack is located along the linear dilaton direction in the strongly coupled region. The change in target space dimensions in the CFT is simply an artifact of describing a localized object as has already been appreciated in the subcritical string literature, see e.g. <cit.>. In this region, gravity decouples in accordance with the mass shift in the graviton due to the linear dilaton. §.§.§ Moduli space analysis The starting point of the tachyon condensation process above is the Scherk-Schwarz reduction of the E_8× E_8 string at self-dual radius. 
The worldsheet sits at a point in the moduli space of non-supersymmetric heterotic strings compactified on S^1, and so it corresponds to a Dynkin subdiagram of Σ_17 <cit.>: [scale = 1.25] [shift=(-4,0)] (0,0)–(0.5,0.5); (2,0)–(1.5,0.5); (0,2)–(0.5,1.5); (2,2)–(1.5,1.5); [double, Red](0.5,0.5)–(1,0.75); [double, Red](0.5,1.5)–(1,1.25); [double, Red](1,0.75)–(1.5,1.5); [double, Red, fill=white](1,1.25)–(1.5,0.5); [double, Red](1,0.75)–(1,1.25); (0,0)–(2,0)–(2,2)–(0,2)–(0,0); [fill=white](0,0)circle(0.07); [fill=white](0.5,0)circle(0.07)node55; [fill=white](1,0)circle(0.07); [fill=white](1.5,0)circle(0.07); [fill=white](2,0)circle(0.07); [fill=white](0,0.5)circle(0.07); [fill=white](0,1)circle(0.07); [fill=white](0,1.5)circle(0.07); [fill=white](2,0.5)circle(0.07); [fill=white](2,1)circle(0.07); [fill=white](2,1.5)circle(0.07); [fill=white](0,2)circle(0.07); [fill=white](0.5,2)circle(0.07); [fill=white](1,2)circle(0.07); [fill=white](1.5,2)circle(0.07)node55; [fill=white](2,2)circle(0.07); [fill=white](0.5,0.5)circle(0.07); [fill=white](0.5,1.5)circle(0.07); [fill=white](1.5,0.5)circle(0.07); [fill=white](1.5,1.5)circle(0.07); [Red, fill=white,double](1,1.25)circle(0.07)node55; [Red, fill=white,double](1,0.75)circle(0.07); [-stealth](2.5,1)–(3.5,1); (1,-0.5)nodeΣ_17; (0,2)–(0.5,1.5); (2,0)–(1.5,0.5); (1,2)–(0,2)–(0,0); (2,2)–(2,0)–(1,0); [fill=white](0,0)circle(0.07); [fill=white](0.5,2)circle(0.07); [fill=white](1,0)circle(0.07); [fill=white](2,0)circle(0.07); [fill=white](0,0.5)circle(0.07); [fill=white](0,1)circle(0.07); [fill=white](0,1.5)circle(0.07); [fill=white](2,0.5)circle(0.07); [fill=white](2,1)circle(0.07); [fill=white](2,1.5)circle(0.07); [fill=white](0,2)circle(0.07); [fill=white](1,2)circle(0.07); [fill=white](1.5,0)circle(0.07); [fill=white](2,2)circle(0.07); [fill=white](0.5,1.5)circle(0.07); [fill=white](1.5,0.5)circle(0.07); [Red, fill=white,double](1,0.75)circle(0.07); (1,-0.5)nodeE_8⊕ E_8 ⊕ B_1; After the tachyons condense, the red node is deleted and the worldsheet sits at a point in the moduli space described by Σ_16: [scale = 1.25] (1,-0.5)nodeΣ_16; [double, Red](0,0)–(0.75,0.75); (2,0)–(1.5,0.5); (0,2)–(0.5,1.5); [double, Red](2,2)–(1.25,1.25); [double, Red](0.75,0.75)–(1.25,1.25); (0,0)–(2,0)–(2,2)–(0,2)–(0,0); [fill=white](0,0)circle(0.07); [fill=white](0.5,0)circle(0.07)node55; [fill=white](1,0)circle(0.07); [fill=white](1.5,0)circle(0.07); [fill=white](2,0)circle(0.07); [fill=white](0,0.5)circle(0.07); [fill=white](0,1)circle(0.07); [fill=white](0,1.5)circle(0.07); [fill=white](2,0.5)circle(0.07); [fill=white](2,1)circle(0.07); [fill=white](2,1.5)circle(0.07); [fill=white](0,2)circle(0.07); [fill=white](0.5,2)circle(0.07); [fill=white](1,2)circle(0.07); [fill=white](1.5,2)circle(0.07)node55; [fill=white](2,2)circle(0.07); [fill=white](0.5,1.5)circle(0.07); [fill=white](1.5,0.5)circle(0.07); [Red, fill=white,double](1.25,1.25)circle(0.07)node55; [Red, fill=white,double](0.75,0.75)circle(0.07)node55; [-stealth](2.5,1)–(3.5,1); [shift=(4,0)] (0,2)–(0.5,1.5); (2,0)–(1.5,0.5); (1,2)–(0,2)–(0,0); (2,2)–(2,0)–(1,0); [fill=white](0,0)circle(0.07); [fill=white](0.5,2)circle(0.07); [fill=white](1,0)circle(0.07); [fill=white](2,0)circle(0.07); [fill=white](0,0.5)circle(0.07); [fill=white](0,1)circle(0.07); [fill=white](0,1.5)circle(0.07); [fill=white](2,0.5)circle(0.07); [fill=white](2,1)circle(0.07); [fill=white](2,1.5)circle(0.07); [fill=white](0,2)circle(0.07); [fill=white](1,2)circle(0.07); [fill=white](1.5,0)circle(0.07); [fill=white](2,2)circle(0.07); 
[fill=white](0.5,1.5)circle(0.07); [fill=white](1.5,0.5)circle(0.07); (1,-0.5)nodeE_8⊕ E_8; A direct implication of this analysis is that, since the CHS background is supersymmetric, the 1-loop partition function (<ref>) with D = 8, n = 15 must vanish at the maximal enhancement with gauge group E_8× E_8. This enhancement can be reached from the circle compactification of the Spin(30) sub-critical string by performing an O(1,16) lattice boost Γ_1,16^v ≃Γ_1,1⊕ D_15→ D_1(-1)⊕ E_8⊕ E_8 . The characters 𝒵_ω,n all share a sum over the lattice E_8⊕ E_8, differing in conjugacy classes of Spin(2). Up to the common E_8⊕ E_8 contribution, the terms in parentheses in (<ref>) read O̅_2 V̅_6 + V̅_2 O̅_6 - C̅_2 S̅_6 - S̅_2 C̅_6 ≃V̅_8 - S̅_8 = 0 , and 𝒵_S^1^8(τ,τ̅) indeed vanishes. The procedure carried out for the E_8× E_8 string can be repeated for any heterotic string with at least three noncompact dimensions. For the eight theories in ten dimensions with rank 16, the Scherck-Schwarz reductions all live in the same moduli space. Hence, the CHS background describing a pair of coincident NS5 branes in all of these theories are connected through marginal deformations. Changing from one theory to another is done through T-duality. For the Spin(32)/ℤ_2 heterotic string the analysis is straightforward. Simply boost D_1(-1)⊕ E_8 ⊕ E_8 → D_1(-1) ⊕ D_16^+ , with D_16^+ the weight lattice of Spin(32)/ℤ_2. The other cases are technically more cumbersome to work out, but it should be clear that the statement above holds. §.§.§ Duality with 6-branes We have seen that the CHS background (<ref>) can be reached by compactifying an eight-dimensional subcritical heterotic string on S^1 and suitably tuning the radius and Wilson line moduli. This is particularly true for the SU(16)/ℤ_2 string (cf. Section <ref>), which has been recently proposed to describe the linear dilaton background generated by a non-supersymmetric 6-brane in the Spin(32)/ℤ_2 heterotic string <cit.>. This brane is charged under π_1 ≃ℤ_2 of the gauge bundle, and its magnetic flux through S^2 breaks the gauge symmetry to (SU(16)/ℤ_2) × U(1); the connection of the tangent bundle of the S^2 is then embedded into the U(1) factor. This situation suggests that the 6-brane compactified on S^1 with suitable radius and Wilson line becomes supersymmetric and is T-dual to a pair of coincident NS5-branes. This T-duality can be seen at the level of the full near horizon background of the brane. The pair of NS5-branes is associated to a 3-sphere S^3 threaded by two units of H_3 flux, while the 6-brane is associated to a 2-sphere S^2 threaded by a gauge flux F_2 = (12^16) with appropriate normalization and a circle S^1 with some Wilson line. There must then exist a topology changing T-duality connecting both backgrounds, which is made possible by the fact that T-duality in heterotic strings may involve the gauge bundle in non-trivial ways. This problem was studied recently in the context of fibrations of T^2 over K3 surfaces in <cit.>, and specializing to fibrations of S^1 over S^2 is straightforward. The heterotic background may be specified by a magnetic charge vector v = |k,ℓ;λ⟩ where k ∈ℤ relates to the first Chern class of the fibration as k = c_1 ω with ω the top form of S^2, ℓ∈ℤ similarly encodes the H_3-flux and λ∈ D_16^+ the F_2-flux. The radius R of the fiber and its associated Wilson line moduli A are constrained by flux quantization to satisfy R^2k + 1/2A^2k + A ·λ= ℓ . 
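To make the flux quantization condition above, and the norm matching used in the T-duality argument of the next paragraph, concrete, here is a minimal numerical sketch. It assumes the standard Narain-type pairing v^2 = 2kℓ + λ·λ on the magnetic charge vectors |k,ℓ;λ⟩ (a convention we are supplying, not one spelled out here), and uses the NS5-pair data (k,ℓ,λ) = (1,2,0) with R^2 = 2, A = 0 together with the candidate dual vector with λ = (1/2)^16 discussed below.

```python
# Hypothetical sanity check; the pairing v^2 = 2*k*l + |lambda|^2 on charge
# vectors |k, l; lambda> is an assumed convention, not taken from the text.
import numpy as np

def vector_norm_sq(k, l, lam):
    """Norm of a magnetic charge vector |k, l; lam> under the assumed pairing."""
    lam = np.asarray(lam, dtype=float)
    return 2 * k * l + np.dot(lam, lam)

def flux_quantization_lhs(k, lam, R_sq, A):
    """Left-hand side R^2 k + (1/2) A^2 k + A.lam, to be compared with l."""
    A = np.asarray(A, dtype=float)
    lam = np.asarray(lam, dtype=float)
    return R_sq * k + 0.5 * np.dot(A, A) * k + np.dot(A, lam)

# NS5-pair background: v = |1, 2; 0>, A = 0, R^2 = 2
print(vector_norm_sq(1, 2, np.zeros(16)))                          # 4
print(flux_quantization_lhs(1, np.zeros(16), 2.0, np.zeros(16)))   # 2 = l

# Candidate T-dual 6-brane vector: v' = |0, 0; (1/2)^16>
print(vector_norm_sq(0, 0, 0.5 * np.ones(16)))                     # 4, matching v^2
```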
T-duality acts on the background by transforming v → v' = Mv , where M is an automorphism of Narain lattice II_1,17, and the moduli R and A transform so as to leave equation (<ref>) invariant. An important fact is that any positive definite primitive vector v of a given norm is unique up to automorphism in II_1,17,[This is equivalent to stating that any rank 1 (signature (0,1)) lattice has a unique (up to automorphisms) primitive embedding into Γ_1,17, which follows from theorem 1.14.4 of <cit.>.] meaning that any two such vectors are related by T-duality. The background for a pair of coincident NS5-branes is given by v = |1,2;0⟩, A = 0 and R^2 = 2. One can T-dualize v to v' = |0,0;12^16⟩, since v'^2 = v^2 = 4, and the resulting background is a direct product S^1 × S^2 with F_2-flux threading S^2 as desired, and we see that the T-duality picture is consistent. An interesting consequence of this analysis is that the NS5 background admits 16 marginal deformations which break supersymmetry. It was in fact determined already in <cit.> that the sigma model for a 3-sphere S^3 admits exactly marginal deformations involving gauge fields such that the S^3 is deformed into S^2 × S^1 at infinite distance, where the S^1 shrinks to zero size and is T-dualized to ℝ. At this limit, the H_3 flux through S^3 disappears, and one ends with an F_2 flux through the final S^2. This matches nicely with our results; we see that the marginal deformations found in this work span precisely the moduli space O(1,16;ℤ)\ O(1,16)/O(16). §.§ Intersections of branes §.§.§ An NS5 pair and a 6-brane Consider now the Scherk-Schwarz reduction of the non-supersymmetric U(16) heterotic string at self-dual radius. This configuration has two pairs of tachyons, each of which condenses respectively to the linear dilaton background of a 6-brane compactified on S^1 and an NS5-brane pair, hence after condensing both pairs we expect to have a model associated to the intersection of these objects. From our discussion above, the resulting background reads ℝ^3,1×ℝ_ϕ× SU(2)_2 × (SU(16)/ℤ_2)_1 . 
Since we are condensing four tachyons, the end result sits at a point in the moduli space described by the diagram Σ_15: [scale = 1.25] (1,-0.5)nodeΣ_15; (2,0)–(1.5,0.5); (0,2)–(0.5,1.5); (0,0)–(2,0)–(2,2)–(0,2)–(0,0); [Red, fill=white,double](1.5,2)–(2,2); [Red, fill=white,double](2,1.5)–(2,2); [fill=white](0,0)circle(0.07); [fill=white](0.5,0)circle(0.07); [fill=white](1,0)circle(0.07); [fill=white](1.5,0)circle(0.07); [fill=white](2,0)circle(0.07); [fill=white](0,0.5)circle(0.07); [fill=white](0,1)circle(0.07); [fill=white](0,1.5)circle(0.07); [fill=white](2,0.5)circle(0.07); [fill=white](2,1)circle(0.07); [fill=white](2,1.5)circle(0.07); [fill=white](0,2)circle(0.07); [fill=white](0.5,2)circle(0.07); [fill=white](1,2)circle(0.07); [fill=white](1.5,2)circle(0.07); [fill=white](0.5,1.5)circle(0.07)node55; [fill=white](1.5,0.5)circle(0.07)node55; [Red, fill=white,double](2,2)circle(0.07)node55; [-stealth](2.5,1)–(3.5,1); [shift=(4,0)] (1,-0.5)nodeA_15; (0,0)–(2,0)–(2,1.5); (0,0)–(0,2)–(1.5,2); [fill=white](0,0)circle(0.07); [fill=white](0.5,0)circle(0.07); [fill=white](1,0)circle(0.07); [fill=white](1.5,0)circle(0.07); [fill=white](2,0)circle(0.07); [fill=white](0,0.5)circle(0.07); [fill=white](0,1)circle(0.07); [fill=white](0,1.5)circle(0.07); [fill=white](2,0.5)circle(0.07); [fill=white](2,1)circle(0.07); [fill=white](2,1.5)circle(0.07); [fill=white](0,2)circle(0.07); [fill=white](0.5,2)circle(0.07); [fill=white](1,2)circle(0.07); [fill=white](1.5,2)circle(0.07); The partition function is given by eq. (<ref>) with D = 6 and n = 14, with boosted momentum lattice Γ_1,1⊕ D_14→ [D_1(-1)⊕ A_15^(8)]^(2,4) , where A_15^(8) is the weight lattice of SU(16)/ℤ_2 and (2,4) is a gluing vector y = y_1+y_2 with y_1,2 index 2 vectors in D_1(-1) and A_15^(8). It would be interesting to study the stability of this configuration. It would not be unreasonable to expect it to be stable given that the moduli fields are associated to deformations of a supersymmetric background. Supersymmetry is broken by the presence of the 6-brane, which has no such moduli. Also note that this picture generalizes easily to intersections of NS5 pairs with other non-supersymmetric branes by starting with the appropriate subcritical string. Using the duality between the NS5 pair and the 6-brane, we may interpret the background (<ref>) as being produced by an intersection of two 6-branes compactified on a circle with suitable radius and Wilson line. Similarly, we may consider this background as resulting from a circle compactification of the E_7× E_7 subcritical heterotic string, suggesting a duality with the heterotic 4-brane of <cit.>. Indeed, an intersection of two 6-branes has 4 space dimensions. This, however, poses a problem. The background of the 4-brane consists in a 4-sphere S^4 with (anti)instantons breaking each E_8 to E_7 in the E_8× E_8 heterotic string. For consistency we would require a topology changing T-duality transforming this S^4 into a product S^2× S^2, with each S^2 threaded respectively by an appropriate gauge flux. Such a possibility seems unlikely, suggesting that many naive duality relationships between brane configurations could be an artifact of looking only at the theories at low energies where the sphere contributions are gapped. §.§.§ Two NS5 pairs Consider now two consecutive Scherk-Schwarz reductions performed on some critical heterotic string, say the E_8 × E_8 string. Take the associated circles to be perpendicular and both at self-dual radius. 
The resulting background has two pairs of tachyons, each of which may be condensed just as for the case of one Scherk-Schwarz reduction explained in Section <ref>. The resulting background reads ℝ^2,1×ℝ_ϕ× SU(2)_2 × SU(2)_2 × (E_8 × E_8)_1 . This background corresponds to two intersecting pairs of coincident NS5-branes. The moduli space for the background (<ref>) is more complicated than those we have mainly considered in this paper, since it is associated with a T^2 compactification of a subcritical string. Still, it is straightforward to generalize some results from the S^1 case. Namely, the vector and scalar spacetime classes furnish the odd self-dual lattice I_2,16, and the T-duality group reads O(2,16;ℤ). An analysis of this group is far outside the scope of this paper, but we may still analyze the moduli space using elementary lattice manipulations. The vector class lattice Γ_v takes the form Γ_2,2⊕ D_14, which at the specific point (<ref>) is polarized to D_2(-1)⊕ E_8 ⊕ E_8. Equivalently, one may write Γ_v ≃Γ_v,1⊕Γ_v,2 ,      Γ_v,1≃Γ_v,2≃ D_1 ⊕ E_8 , and separately polarize each of the two sublattices in the sum. These can be polarized in particular to Γ_1,1⊕ E_7, making manifest the realization of the background as a torus compactification of the subcritical E_7 × E_7 heterotic string. As before, this would naively suggest that two intersecting pairs of NS5 branes are T-dual to a torus compactification of the non-supersymmetric 4-brane with suitable values for the moduli. At the level of the S^3× S^3 sigma model, however, a topology changing T-duality simply leads to S^2× S^2 × S^1× S^1 with appropriate gauge flux threading both 2-spheres, i.e. an intersection of two 6-branes compactified on S^1 × S^1. It would be interesting to understand this relationship further. § DISCUSSION OF RESULTS In this note we have determined the global structure of the moduli (deformation) spaces of noncritical heterotic strings with maximal rank compactified on a circle, explicitly up to 14 spacetime dimensions. The T-duality groups turned out to be described by Coxeter polytopes following the work of Vinberg <cit.> and Vinberg-Kaplinskaya <cit.>. We have then used these results to systematically extend the application of “dimension changing" tachyon condensation of Hellerman-Swanson <cit.> from tachyonic non-supersymmetric heterotic strings <cit.> to their circle compactifications in the case of maximal rank (i.e. omitting the E_8 string). Inspired by the connection between these backgrounds and certain non-supersymmetric heterotic branes <cit.>, we have extended this interpretation to various other backgrounds, establishing a connection between the momentum-winding tachyons of the heterotic string on a Scherk-Schwarz circle and a pair of NS5 branes, providing a possible tachyon condensation mechanism for these objects in a spirit similar to earlier results which relied on the use of supercritical strings <cit.>. Along the way we have pointed out the possibility that a pair of coincident NS5 branes is dual to the non-supersymmetric 6-brane of <cit.> compactified on a circle with special radius and Wilson lines. An important caveat that we have not addressed in this paper concerns the regularization of the strong coupling region in the linear dilaton backgrounds for subcritical strings.[We thank S. Sethi for emphasizing this point to us.] For the supersymmetric configurations corresponding to NS5-branes, this may be done by trading the ℝ_ϕ×ℝ_t part of the CFT for the coset SL(2,ℝ)_2/U(1), see e.g. <cit.>.
Without this regularization it is not clear a priori what physical information can be extracted by computing the 1-loop partition function of the compactified models. We expect that the resulting behavior is analogous to that of the critical case studied in <cit.>, up to novelties such as supersymmetry enhancement, which is possible due to the absence of gravity in these models. It would be of great interest to better understand the spacetime picture of brane production through tachyon condensation. This seems to be the right interpretation of the setups of Hellerman-Swanson-Kaidi <cit.>, rather than resulting in a quantum gravity vacuum with positive cosmological constant – indeed, Lorentz symmetry is broken in these setups and more importantly the graviton degrees of freedom are massed up, falling outside of the class of constructions that are relevant for the study of generic properties of quantum gravity theories, i.e. the Swampland program <cit.>. Not all is lost, however, as this mechanism might help in understanding possible spacetime defects in heterotic strings such as the branes of <cit.>, informing us how the cobordism conjecture <cit.> applies to them (see also <cit.>). Finally, it has been recently conjectured <cit.> that an NS p-brane compactified on S^8-q is dual to a q-brane on S^8-p with appropriate fluxes on the spheres. One such setup corresponds to what we have interpreted here as the background of an intersection of the non-supersymmetric heterotic 6-brane and a stack of n coincident NS5 branes, where we have specialized to n = 2. In <cit.> the aforementioned duality was used to make inferences on the worldvolume content of the 6-brane. It would be interesting to connect this picture with that of the duality between the 6-brane on S^1 and the n = 2 NS5 brane stack. § ACKNOWLEDGEMENTS We thank Soumangsu Chakraborty, Bernardo Fraiman, Ruben Minasian, Savdeep Sethi and Cumrun Vafa for useful discussions. We thank Miguel Montero for very helpful correspondence, and also the IPhT, Saclay for hospitality. This work is supported by a grant from the Simons Foundation (602883,CV), the DellaPietra Foundation and by the NSF grant PHY2013858.
http://arxiv.org/abs/2407.12517v1
20240717121024
Evaluating the transferability potential of deep learning models for climate downscaling
[ "Ayush Prasad", "Paula Harder", "Qidong Yang", "Prasanna Sattegeri", "Daniela Szwarcman", "Campbell Watson", "David Rolnick" ]
cs.LG
[ "cs.LG" ]
[ Evaluating the transferability potential of deep learning models for climate downscaling Ayush Prasadcomp,hel Paula Hardercomp Qidong Yangmit Prasanna Sattegeriibm Daniela Szwarcmanibm Campbell Watsonibm David Rolnickcomp helUniversity of Helsinki, Helsinki, Finland compMila Quebec AI Institute, Montreal, Canada ibmIBM Research, US/Brazil mitMassachusetts Institute of Technology, Cambridge, MA, US Ayush Prasadayush.prasad@mila.quebec Machine Learning, ICML 0.3in ] § ABSTRACT Climate downscaling, the process of generating high-resolution climate data from low-resolution simulations, is essential for understanding and adapting to climate change at regional and local scales. Deep learning approaches have proven useful in tackling this problem. However, existing studies usually focus on training models for one specific task, location and variable, resulting in models that are limited in their generalizability and transferability. In this paper, we evaluate the efficacy of training deep learning downscaling models on multiple diverse climate datasets to learn more robust and transferable representations. We evaluate the zero-shot transferability of different architectures, using CNNs, Fourier Neural Operators (FNOs), and vision Transformers (ViTs). We assess the spatial, variable, and product transferability of downscaling models experimentally, to understand the generalizability of these different architecture types. § INTRODUCTION Climate downscaling refers to the creation of synthetic high-resolution climate data from coarse-resolution data. Downscaling can be useful when raw datasets, either reanalysis or climate model outputs, have a spatial resolution that is insufficient for applications requiring climate information at local scales. Running climate models at high resolutions is computationally intensive and often infeasible due to the enormous computational resources required. Downscaling addresses this challenge by providing high-resolution information without the need for time-intensive simulations. Deep learning is increasingly being used in climate downscaling, including methods such as Convolutional Neural Networks (CNNs) <cit.>, Fourier Neural Operators (FNOs) <cit.>, and Conditional Normalizing Flows <cit.>. These models have demonstrated the ability to capture complex spatial and temporal patterns in climate data, leading to improved downscaling accuracy compared to traditional statistical methods. However, existing studies primarily focus on training deep learning models on a single dataset, which can result in overly specialised models that fail to capture the diverse characteristics of climate data across different sources and resolutions, limiting their generalizability and transferability to different climate variables, spatial regions, and temporal periods. In this work, we develop and evaluate approaches to address this challenge by training deep learning models on multiple climate datasets and evaluating them on different tasks (zero-shot transfer). We hypothesize that exposing the models to a diverse range of climate information during training will enable them to learn more comprehensive and transferable representations of climate patterns by capturing the underlying structures and relationships present across various datasets. Specifically, we consider the transferability of several deep learning architectures, including CNNs, Fourier Neural Operators (FNOs), and Transformers.
We train these models on a combination of climate reanalysis data products, which include a set of climate variables at different spatial resolutions and temporal frequencies. By leveraging the complementary information present in these datasets, we aim to enhance the models' ability to learn robust and transferable representations of climate patterns. We conduct a series of experiments that assess the downscaling models' spatial, variable, and product transferability. Furthermore, we explore the impact of fine-tuning the pre-trained models on the target dataset for the case where zero-shot performance is not sufficient. § DATA In this study, we utilize multiple climate datasets to pre-train our deep learning models for climate downscaling. The datasets are selected based on their spatial resolution, temporal frequency, and the variety of climate variables they provide. By leveraging diverse datasets, we aim to capture a wide range of climate patterns and enhance the generalizability of the models. The following datasets are used in our experiments: §.§ ERA5 ERA5 <cit.> is a state-of-the-art atmospheric reanalysis dataset produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) <cit.>. It provides hourly data at a spatial resolution of 0.25^∘ × 0.25^∘ on a single pressure level. We consider the following climate variables from ERA5: 2-meter temperature, 2-meter dewpoint temperature, u-10 component of wind, v-10 component of wind, mean sea level pressure, and total precipitation. The total number of samples used from the ERA5 dataset is 50,000. §.§ MERRA2 MERRA2 (Modern-Era Retrospective Analysis for Research and Applications, Version 2) <cit.> is an atmospheric reanalysis dataset. It provides data at a spatial resolution of 0.5^∘ × 0.625^∘. We utilize the following climate variables from the MERRA-2 M2T1NXFLX, Surface Flux Diagnostics V5.12.4 dataset: Total precipitation, surface wind speed, surface air temperature, surface eastward wind, surface northward wind. The total number of samples used from the MERRA2 dataset is 50,000. §.§ NOAA CFSR The NOAA Climate Forecast System Reanalysis (CFSR) is a global coupled atmosphere-ocean-land surface-sea ice system dataset <cit.>. It provides data at a spatial resolution of 0.5^∘ × 0.5^∘. We consider the following climate variables from the CFSR dataset: Precipitable water at the entire atmosphere layer, downward longwave radiation flux, surface temperature, and pressure at mean sea level. The CFSR dataset comprises 30,000 samples. §.§ NorESM Our NorESM data set is based on the second version of the Norwegian Earth System Model (NorESM2) <cit.>, which is a coupled Earth System Model developed by the NorESM Climate modeling Consortium (NCC), based on the Community Earth System Model, CESM2. We build our data set on two different runs: NorESM2-MM, which has a 1-degree resolution, and NorESM2-LM, which has a 2-degree resolution for atmosphere and land components. We use the temperature at the surface (tas) and a time period from 2015 to 2100. The data is cropped into patches of 64 × 64 (HR) and 32× 32 pixels (LR). The scenarios ssp126 and ssp585 are used for training and ssp245 for testing, with a total sample size of 37,152. §.§ Data Preprocessing To ensure consistent input to the deep learning models, we scale all variables to have zero mean and unit variance. This normalization step helps in improving the convergence and stability of the training process.
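As an illustration of this preprocessing step, together with the average-pooled LR/HR pair construction described in the next paragraph, here is a minimal sketch; the array shapes and the use of global (rather than per-pixel) statistics are our assumptions and are not specified in the paper.

```python
# Minimal preprocessing sketch (assumed details): standardization followed by
# average pooling to build (LR, HR) training pairs for the 2x and 8x factors.
import numpy as np

def standardize(x):
    """Scale a field of shape (samples, H, W) to zero mean and unit variance."""
    return (x - x.mean()) / x.std()

def average_pool(hr, factor):
    """Downsample HR fields by block-averaging with a factor x factor kernel."""
    s, h, w = hr.shape
    return hr.reshape(s, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

hr = standardize(np.random.rand(8, 64, 64))    # stand-in for a high-resolution variable
lr_2x, lr_8x = average_pool(hr, 2), average_pool(hr, 8)
print(lr_2x.shape, lr_8x.shape)                # (8, 32, 32) (8, 8, 8)
```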
Following a common approach <cit.>, we create low-resolution (LR) and high-resolution (HR) pairs for training and evaluation by applying average pooling to the high-resolution data. Specifically, we use a pooling kernel size of 2x2 to downsample the HR data to generate the corresponding LR data for the 2x scale factor, and a pooling kernel size of 8x8 for the 8x scale factor. § METHODS §.§ Convolutional Neural Networks Convolutional Neural Networks (CNNs) have performed well in image super-resolution tasks, including climate downscaling <cit.>. The key components of CNNs are convolutional layers, which apply learnable filters to extract local features from the input data. In our CNN model, we adopt a residual architecture inspired by the Super-Resolution Convolutional Neural Network (SRCNN) <cit.>. The model consists of multiple residual blocks, each containing two convolutional layers with a kernel size of 3×3 and 64 filters, followed by a Rectified Linear Unit (ReLU) activation function. The number of residual blocks is set to 16. The output of the last residual block is passed through a final convolutional layer to generate the high-resolution climate variable. §.§ Fourier Neural Operators Fourier Neural Operators (FNOs) are a recently proposed class of models that learn mappings between function spaces by operating in the Fourier domain <cit.>. The key idea behind FNOs is to represent the input and output functions using their Fourier coefficients and learn the mapping between the coefficients using a neural network. In our FNO model, we use the architecture developed by <cit.> for downscaling climate data. The input low-resolution climate variable is first transformed into the Fourier domain using the Fast Fourier Transform (FFT). The Fourier coefficients are then processed by a series of Fourier layers, each consisting of a linear transformation followed by a non-linear activation function. The linear transformation is performed in the Fourier domain, which allows for efficient and resolution-invariant learning. The output of the last Fourier layer is transformed back to the spatial domain using the Inverse Fast Fourier Transform (IFFT) to obtain the high-resolution climate variable. The number of Fourier layers is set to 4, and the number of modes (i.e., the number of Fourier coefficients) is set to 12 in each dimension. §.§ CNN-ViT Hybrid Model Vision Transformers (ViTs) have recently gained popularity in computer vision tasks due to their ability to capture global context and model long-range dependencies <cit.>. However, ViTs often require large amounts of training data and may struggle with localized features. To address these limitations, we use a hybrid architecture that combines CNNs and ViTs, following recent works on integrating CNNs and ViTs <cit.>. The CNN-ViT model first uses a convolutional stem to extract local features from the input low-resolution climate variable. The convolutional stem consists of two convolutional layers with a kernel size of 3×3 and 64 filters, followed by a ReLU activation function. The output of the convolutional stem is then reshaped and passed through a ViT module to capture global interactions. The ViT module follows the standard Transformer architecture <cit.>, consisting of multiple multi-head self-attention layers and feed-forward networks. The number of Transformer layers is set to 4, with 4 attention heads and a hidden dimension of 256. 
The output of the ViT module is reshaped and passed through a final convolutional layer to generate the high-resolution climate variable. §.§ Training All models are trained using the Adam optimizer with a learning rate of 0.001 and a batch size of 32. We train the models for 150 epochs; each model takes approximately 7 hours to train on a single Nvidia V100 GPU. § EXPERIMENTS AND RESULTS §.§ Spatial Transferability In this experiment (Table <ref>), we train the models on the ERA5, MERRA2 and the NOAA CFSR dataset in the DACH region (Germany, Austria, and Switzerland) between 2010 and 2023. The DACH region is defined by the coordinates 45^∘N to 55^∘N latitude and 5^∘E to 15^∘E longitude. We then evaluate the trained models' performance on a region in North America, defined by the coordinates 35^∘N to 50^∘N latitude and 70^∘W to 125^∘W longitude, covering a portion of the continental United States. The models are evaluated on a subset of the North American data. The goal of this experiment is to assess the spatial transferability of the models, i.e., their ability to generalize and perform well on a geographic region distinct from the region they were trained on. The results in Table <ref> demonstrate that all three deep learning models outperform the bicubic interpolation baseline in both the 2× and 8× downscaling scenarios. The CNN-ViT hybrid model achieves the highest R^2 scores and lowest MSE values among the compared models. The FNO model also shows strong performance, closely following the CNN-ViT model. The CNN model, while still performing better than the bicubic baseline, has lower performance compared to the other models. §.§ Variable Transferability In this experiment (Table <ref>), we first train the models on temperature, wind, and precipitation variables from all the climate reanalysis datasets (ERA5, MERRA2, NOAA CFSR). None of the flux variables, such as downward longwave radiation flux, were included in the training data. After training, we evaluate the models' performance on the downward longwave radiation flux variable from the NOAA CFSR dataset. This variable was intentionally excluded from the training data. The aim is to evaluate whether the models can effectively generalize and transfer the learned representations to predict an unseen climate variable distinctly different from the variables they were exposed to during training. The results in Table <ref> show that the FNO model achieves the highest R^2 scores and lowest MSE values for both 2× and 8× downscaling. The CNN-ViT model performs slightly better than the CNN model for 2× downscaling, but the CNN model outperforms the CNN-ViT model for 8× downscaling. The FNO model demonstrates the best transferability to the unseen downward longwave radiation flux variable, which was not included in the training data. §.§ Product Transferability In this experiment (Table <ref>), we train the models on ERA5 and MERRA2 datasets and evaluate their performance on the NOAA CFSR dataset without fine-tuning, i.e., their zero-shot performance. The goal is to examine the transferability of the models across different climate data products. The results in Table <ref> show that the CNN-ViT model achieves the highest R^2 scores and lowest MSE values for both 2× and 8× downscaling. The FNO follows closely, outperforming the CNN.
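The evaluation code is not specified in the paper; the sketch below shows one standard way the reported MSE and R^2 could be computed from zero-shot predictions and HR targets (treating all samples and pixels jointly, which is our assumption).

```python
# Hypothetical evaluation sketch: MSE and coefficient of determination (R^2)
# between downscaled predictions and high-resolution targets.
import numpy as np

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

def r2_score(pred, target):
    """R^2 computed over all samples and pixels jointly."""
    ss_res = np.sum((target - pred) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# pred, target: arrays of shape (samples, H, W) from a zero-shot evaluation run
pred, target = np.random.rand(4, 64, 64), np.random.rand(4, 64, 64)
print(mse(pred, target), r2_score(pred, target))  # random inputs only demonstrate usage
```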
§.§ Two Simulation Transferability In this experiment (Table <ref>), we evaluate the models' performance on real-world low-resolution (LR) and high-resolution (HR) data pairs obtained from the Norwegian Earth System Model (NorESM) <cit.>. Unlike the previous experiments where LR data was generated by average pooling of the HR data, here we use the actual coarse-resolution data from NorESM as LR inputs, and the corresponding high-resolution data as HR targets. The LR data has a spatial resolution of 2.5^∘ × 1.9^∘, while the HR data has a resolution of 0.25^∘ × 0.25^∘. Initially, we trained the models on all the reanalysis datasets and evaluated their performance on the NorESM data without any additional fine-tuning. However, the results were just slightly better than interpolation, indicating the need for domain adaptation to the specific characteristics of the NorESM data. To improve performance, we fine-tune the pre-trained models on 30 percent of the NorESM training data. The results in Table <ref> show that when the models are directly applied to the NorESM dataset without fine-tuning (zero-shot), their performance is relatively poor compared to the previous experiments. The FNO and CNN-ViT models perform slightly better than the CNN model and the bicubic interpolation baseline in the zero-shot setting. After fine-tuning (FT) the pre-trained models on the NorESM data, their performance improves. The CNN-ViT model achieves the highest R^2 scores and lowest MSE values for both 2× and 8× downscaling factors, followed by the FNO model. The CNN architecture also shows improvement after fine-tuning. § CONCLUSION AND FUTURE WORK In this paper, we investigate the performance of deep learning models for climate downscaling when evaluated on different data than originally trained on. We assess the spatial, variable and product transferability of the different models. The results indicate that training on diverse datasets leads to models that have good zero-shot capability, performing better than simple bicubic interpolation on many tasks. All three model types, CNNs, FNOs and Transformers, outperform the baseline on all tasks zero-shot, apart from the two-simulation NorESM setup, where additional fine-tuning is necessary. For spatial and product transferability the CNN-ViT shows the best performance across architectures, whereas for variable transferability the FNO-based model has the highest scores. In the two-simulation transferability experiment, we observe relatively poor performance when directly transferring models without adaptation. However, further fine-tuning the pre-trained models on a subset of the target dataset significantly improves their accuracy. Overall, the results indicate that pre-training on multiple climate datasets, combined with fine-tuning, is an effective strategy for developing deep learning models for climate downscaling. Next steps of this work are the application of pre-training and fine-tuning for variable, spatial and product transferability, as was done for the NorESM case. This could be evaluated against directly training on the target dataset. Here it could be valuable to see how the choice, number, and sizes of pre-training datasets affect the transferability performance, as well as the number of data points chosen for fine-tuning.
http://arxiv.org/abs/2407.12916v1
20240717180004
Tomography of parametrized quantum states
[ "Franz J. Schreiber", "Jens Eisert", "Johannes Jakob Meyer" ]
quant-ph
[ "quant-ph" ]
§ ABSTRACT Characterizing quantum systems is a fundamental task that enables the development of quantum technologies. Various approaches, ranging from full tomography to instances of classical shadows, have been proposed to this end. However, quantum states that are being prepared in practice often involve families of quantum states characterized by continuous parameters, such as the time evolution of a quantum state. In this work, we extend the foundations of quantum state tomography to parametrized quantum states. We introduce a framework that unifies different notions of tomography and use it to establish a natural figure of merit for tomography of parametrized quantum states. Building on this, we provide an explicit algorithm that combines signal processing techniques with a tomography scheme to recover an approximation to the parametrized quantum state equipped with explicit guarantees. Our algorithm uses techniques from compressed sensing to exploit structure in the parameter dependence and operates with a plug and play nature, using the underlying tomography scheme as a black box. In an analogous fashion, we derive a figure of merit that applies to parametrized quantum channels. Substituting the state tomography scheme with a scheme for process tomography in our algorithm, we then obtain a protocol for tomography of parametrized quantum channels. We showcase our algorithm with two examples of shadow tomography of states time-evolved under an NMR Hamiltonian and a free fermionic Hamiltonian. Johannes Jakob Meyer July 22, 2024 ======================== Characterizing quantum states is a central task in quantum information science. The importance of methods of tomographic recovery, benchmarking and verification <cit.> is further elevated by the advent of NISQ (noisy intermediate-scale quantum) devices, where, in particular, quantum state tomography is widely seen as a key method to certify that a device is working correctly, and to provide actionable advice on how to improve the experiment in case of a deviation. The practical utility of full state tomography beyond the study of very small quantum systems is, however, severely limited by its stark resource requirements in the number of copies of the state, which scales exponentially in the number of involved quantum systems, such as qubits or modes. The same holds true for the classical memory necessary to store the information gathered in the process. To overcome this issue, novel methods for approximate state tomography have been developed, with shadow tomography being a particularly prominent example. In these approaches, a better performance is obtained by relaxing the requirements involved: instead of characterizing the whole quantum state, only some of its properties should be learned. Owing to these developments, it is possible to construct classical approximations of quantum states that allow for high precision predictions for large classes of observables, while maintaining resource requirements which scale polynomially with respect to the number of involved systems. Still, in many scenarios of practical relevance, one is not only interested in a quantum state, but in a whole family of quantum states, depending on possibly multiple continuous parameters – a parametrized quantum state. The archetypal example here is quantum time evolution, where the quantum states are parametrized by the time parameter. 
Other examples are certification of analog quantum devices, where control parameters naturally give rise to such a setting, or the electronic ground states of a molecule, depending on the coordinates of the heavy nuclei. In this work, we extend the foundations of quantum state tomography to parametrized quantum states, lifting procedures for state tomography to procedures for the tomography of parametrized quantum states. Our first key contribution to this end is to introduce a framework that unifies different notions of tomography, like full tomography and shadow tomography. We achieve this by expressing the figure of merit of these tomographic notions as an optimization over restricted sets of observables. For parametrized quantum states observables no longer give rise to scalar expectation values, but to functions taking the parameters as input. By combining the unified framework for state tomography with the associated L^p function norms, we obtain a natural figure of merit for the tomography of parametrized quantum states that is expressed as the largest L^p norm difference for an observable in the restricted set. Our next key contribution is to present a algorithm that recovers a full parametrized quantum state with respect to the natural figure of merit we introduce. This is realized by using a given tomography scheme as a black box at a number of parameter values and classically combining the obtained data through signal processing techniques, as shown in <ref>. Our algorithm allows us to explicitly use techniques of compressed sensing to exploit sparsity in the parametrization. This happens, for example, when only few frequencies contribute to the time-evolution of a quantum state. Our algorithm is efficient if the used tomography scheme is efficient and the parameter dependence can be accurately captured by a polynomial number of basis functions. Furthermore, the representation obtained for the parametrized quantum state inherits its data structure from the tomography scheme. Many classical representations like density matrices or tensor networks allow for more than just estimating observables, but are well suited to compute other properties of the underlying system. We emphasize that our algorithm constitutes significant method development, as the facts that computing norms of quantum states can be computationally hard and that quantum states only allow for access via measurements prohibit a straightforward application of compressed sensing techniques to recover parametrized quantum states. Our algorithm can be applied to both finite- and infinite-dimensional systems. Like classical shadows, it also gives an agnostic representation of the parametrized quantum state that can be used to infer the parameter-dependent expectation value of arbitrary observables as long as they are in the set for which the tomography procedure has guarantees. This new tomographic paradigm can be naturally extended to parametrized quantum channels. Analogously to the state case, we take a unifying figure of merit for process tomography and combine it with function L^p norms to obtain a figure of merit for tomography of parametrized quantum channels. Substituting as input the tomographic procedure with a scheme for process tomography, our algorithm outputs a classical representation of the parametrized channel. The efficiency of the resulting scheme depends on an efficient description of the parametrization and on the efficiency of the scheme for process tomography given as input. 
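To convey the flavor of the classical post-processing step, the following is a deliberately simplified, purely scalar toy (not the operator-valued algorithm developed in this paper): expectation-value estimates gathered at randomly drawn parameter values are combined into an estimate of the basis coefficients by linear regression; an ℓ1-regularized solver would be used instead to exploit sparsity in the spirit of compressed sensing. All numbers are illustrative.

```python
# Toy illustration only: recover coefficients of a signal that is sparse in a
# Fourier basis from noisy samples at random parameter values, mimicking the
# classical combination of per-parameter tomography data.
import numpy as np

rng = np.random.default_rng(0)
freqs = np.arange(-8, 9)                         # candidate frequency set Lambda
true_c = np.zeros(len(freqs), dtype=complex)
true_c[[4, 12]] = 1.0, 0.5j                      # only two active frequencies (sparse)

t_samples = rng.uniform(0, 2 * np.pi, 60)        # parameters where "tomography" is run
A = np.exp(-1j * np.outer(t_samples, freqs))     # basis functions phi_k(t) = exp(-i k t)
y = A @ true_c + 0.01 * rng.standard_normal(60)  # noisy expectation-value estimates

c_hat = np.linalg.lstsq(A, y, rcond=None)[0]     # an l1-regularized solver would exploit sparsity
print(np.round(np.abs(c_hat - true_c).max(), 3)) # small recovery error
```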
There have already been instances where tomographic procedures, in particular classical shadows, have been applied to questions involving parametrized quantum states. The authors of Ref. <cit.> introduce an algorithmic technique coined shadow spectroscopy. It allows to estimate the energy gaps of a Hamiltonian from classical shadows taken at different short times of an evolution under the Hamiltonian. In Ref. <cit.>, the authors learn conservation laws that can be expressed as the sum of few geometrically local Pauli observables from classical shadows constructed at different times of the quantum time evolution. In Ref. <cit.>, the authors investigate how well the time evolution of a quantum state can be predicted from its classical shadow. Beyond time evolution, there has been work to predict ground states of different phases of matter <cit.>. There, the ground states of a family of Hamiltonians form a parametrized quantum state, where the task is to obtain classical representations for new parameter values from classical shadows constructed at different points of the parametrization. We briefly comment on the relation between this and prior work. Our work is fundamentally different in scope in that we construct classical representations of parametrized quantum states, generalizing state tomography to the parameter-dependent setting. However, in doing so, we provide a unified approach as many of the above mentioned techniques can be re-expressed as a tomographic procedure for parametrized quantum states combined with classical post-processing. This is reminiscent of how quantum state tomography appears as a subroutine in various different contexts. We also examine how to exploit structure in the form of sparsity in the parametrization to obtain more efficient algorithms. Our work is structured as follows: In <ref>, we rigorously define what we mean by a parametrized quantum state and introduce some conventions regarding notation that we adopt throughout this work. In <ref>, we define the notion of a tomographic procedure, which encompasses full state tomography as well as approximation techniques like shadow tomography and classical shadows. We especially emphasize how different tomographic procedures use different ways of measuring distances between quantum states. This sets the stage for <ref>, where we investigate how such distance measures generalize from quantum states to parametrized quantum states. Then, in <ref>, we provide a crash course in the topic of compressed sensing, leading up to <ref>, where we give an algorithm detailing how to construct tomographic procedures on parametrized quantum states as well as provide a corresponding error analysis. Special attention is given to classical shadows. In <ref>, we explain how to identify sparse supports in function bases for parametrized quantum states. In <ref>, we discuss at the hand of two example how our framework enables efficient tomography of two types of parametrized quantum states: First, we show the efficient tomography of states with a sub-Gaussian energy spectrum evolving under an NMR Hamiltonian in the Fourier basis. Second, we efficiently reconstruct a quantum state under free fermionic time evolution through Chebychev polynomials. In <ref>, we show that the ideas and techniques introduced so far can be naturally extended to parametrized quantum channels. For the convenience of the reader, we give a high-level overview of our work and techniques in <ref>. 
An outlook on future directions is given in <ref> and we conclude the manuscript in <ref>. § PARAMETRIZED QUANTUM STATES On a practical level, parametrized quantum states arise in many contexts, for example in the time evolution ρ(t) = e^-iHtρ(0) e^iHt of a quantum state under a given Hamiltonian H, the encoding of a phase on a probe state in a quantum sensor ρ(ϕ) = (ϕ)[ρ(0)] or when parametrized quantum circuits are used in the context of quantum machine learning ρ() = ()[ρ(0)]. On an abstract level, a parametrized quantum state ρ(·) is a function from the parameter space to the space of linear operators on a Hilbert space _n associated to n physical systems that can be finite- or infinite-dimensional. We write ρ→_n and require that for all ∈, ρ() is a valid quantum state. For our purpose, it will be especially important to consider expansions of such operator-valued functions in terms of orthonormal functions from to . We thus introduce some notation, for functions f,g→, we denote with ⟨f, g|⟩∫_dμ() f^*() g() the scalar product of the corresponding function space <cit.> equipped with probability measure μ over . This scalar product induces the L^2 function norm as f_2 = √(⟨f,f|⟩). For general p, this generalizes to f_p ( ∫_dμ() |f()|^p )^1/p , called the L^p-function norm, where f_∞inf{C:|f()| ≤ Cμ-almost everywhere} is the special case p=∞. Later, these will important for the definition of distance measures between two parametrized quantum states ρ, σ→_n. Note that only the L^2-norm is induced by a scalar product. An orthonormal function basis (ONB) is a set of functions {φ_k→}_k, such that ⟨φ_k, φ_j|=⟩δ_k,j which allows the representation of arbitrary functions f→ as f() = ∑_k∈Λ c_k φ_k() with and c_k = ⟨f, φ_k|∈⟩. We call the ONB bounded by the constant K if ‖φ_k ‖_∞≤ K for all k ∈Λ . Every entry of the parametrized quantum state ρ can be seen as a function from to . Therefore, for any ONB, we can expand the operator-valued function ρ() as ρ() = ∑_k ∈Λα_k φ_k() with operator-valued coefficients α_k ∈_n. The coefficients are given explicitly as α_k ⟨φ_k, ρ⟩ = ∫μ() φ_k^*()ρ(). While, in principle, infinitely many basis functions might be necessary to perfectly describe ρ(), we assume that we are able to pick some finite set Λ with |Λ|=D, such that the error from truncating the infinite series is negligible. As an example, consider a Hamiltonian H with integer λ_i ∈ and an energy bound 0 ≤λ_i ≤ E_max. We have = [0, 2π) and ρ(t) = e^-iHtρ(0) e^iHt . Any orthonormal basis of functions from [0, 2π) → would allow for an expansion in the style of <ref>, but in this case it is natural to choose the Fourier basis φ_k(t) = exp(-ikt) and μ(t)=1/(2π). We can then express any time-evolution of the form in <ref> as ρ(t) = ∑_k ∈Λα_k e^-ikt , where the set of frequencies Λ is determined by the maximal eigenvalues bound of the Hamiltonian, Λ = { -E_max, -E_max+1, …, 0, …, E_max} , and again, α_k are operator-valued. While the size of Λ can be very large, situations arise where the structure of the initial state of the time evolution are such that only a comparatively small set S ⊂Λ contribute significantly. In our work, we are especially interested in cases like the above example, where we can pick a (possibly very large) set Λ with an unknown subset S ⊂Λ with |S| ≪ |Λ| that also describes ρ(·) exactly or at least well. In such cases, we refer to ρ(·) as sparse or approximately sparse, respectively. 
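The integer-spectrum example above is easy to check numerically. The sketch below, with an illustrative three-level Hamiltonian of our choosing, computes the operator-valued Fourier coefficients α_k by discretizing the integral and confirms that only frequencies equal to differences of occupied energy levels contribute, so that ρ(·) is sparse in Λ when the initial state populates few eigenstates.

```python
import numpy as np

lams = np.array([0, 1, 3])                      # integer spectrum, E_max = 3
psi0 = np.zeros(3, dtype=complex)
psi0[[0, 2]] = 1 / np.sqrt(2)                   # occupies only the 0 and 3 eigenstates
rho0 = np.outer(psi0, psi0.conj())

ts = np.linspace(0, 2 * np.pi, 512, endpoint=False)
def rho(t):
    U = np.diag(np.exp(-1j * lams * t))
    return U @ rho0 @ U.conj().T

# alpha_k = (1 / 2 pi) * integral of exp(+i k t) rho(t) dt, via a Riemann sum
for k in range(-3, 4):
    alpha_k = np.mean([np.exp(1j * k * t) * rho(t) for t in ts], axis=0)
    print(k, np.round(np.linalg.norm(alpha_k), 6))
# only k in {-3, 0, 3} give nonzero coefficients: the expansion is 3-sparse in Lambda
```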
In <ref> we will develop the tools to provide rigorous definitions of both cases, while in <ref>, we provide an algorithm for the identification of such a set S. § A GENERAL FRAMEWORK FOR STATE TOMOGRAPHY Before we can establish a scheme for the tomography of parametrized quantum states, we first need to establish an understanding of tomography of non-parametrized states. In the typical form of state tomography, we are given T copies of the system in state ρ that we assume to be composed of n quantum systems. We are tasked with outputting an approximation ρ̂ that is close in trace distance with high probability. Formally, the output ρ̂ of the tomography procedure needs to fulfill [ ‖ρ - ρ̂‖_1 ≤ϵ] ≥ 1 - δ, where ‖.‖_1 denotes the trace-norm. We know that solving this task of full state tomography necessitates at least <cit.> T = Ω( 2^2n + log1/δ/ϵ^2) many samples, even if entangling measurements are being allowed, a simple consequence of the fact that quantum states have exponentially many degrees of freedom in the number of qubits. Under the additional assumption that the state ρ has rank at most r (low-rank tomography), we can reduce the factor 2^2n to r 2^n. While this is an exponential gain in sample complexity, it is still scaling exponentially in the number of qubits. But for many practical applications, performing full tomography of the state ρ is not necessary. A particularly prominent example of this is shadow tomography <cit.>, where we are satisfied if out estimate of the state faithfully reproduces the expectation values of a given set of observables = { O_1, O_2, …, O_M } [ max_1 ≤ i ≤ M |[ O_i ρ] - [ O_i ρ̂] | ≤ϵ] ≥ 1 - δ. Assuming without loss of generality that we have [O_i] = 0 for all O_i ∈, the classical shadows protocol of Ref. <cit.> achieves the above using T = O ( max_1 ≤ i ≤ M‖ O_i ‖_shadow^2/ϵ^2logM/δ) many samples. The shadow norm ‖·‖_shadow essentially captures the compatibility of the observable with the particular protocol used to construct the classical shadow. In the practically relevant case where all observables O_i act only on ℓ subsystems, we can use local Clifford shadows, to get <cit.> ‖ O_i ‖_shadow^2 ≤ 4^ℓ‖ O_i ‖_∞^2 which is an efficient scaling for constant ℓ. Instead of demanding that the difference of estimates over a set of observables is small in the worst case we can also demand that it should be small in expectation relative to a distribution over the observables or that the probability of an observable drawn from said distribution being large is small. This average case notion of tomography also allows for much more efficient protocols <cit.>. §.§ Tomographic procedures Having reviewed two seemingly very different approaches to quantum state tomography, we will no go on to establish a unifying viewpoint that allows us to actually treat both full tomography and shadow tomography on the same footing, and that also extends to other ways of performing tomography. The objective in shadow tomography (see <ref>) is expressed as an optimization over different observables, whereas the objective in full tomography (see <ref>) is written in terms of the trace norm. However, we can also express the trace norm as an optimization ‖ρ - ρ̂‖_1 = sup_‖ O ‖_∞≤ 1[ O (ρ - ρ̂) ] over observables. We use this dual viewpoint where a norm is expressed as an optimization to establish a unifying treatment of tomographic procedures. To this end, we define the induced semi-norm of a set of observables . Let be a set of observables. 
The semi-norm induced by is defined as ‖ X ‖_sup_O ∈ |[ X O ]|. This is indeed a semi-norm: it is non-negative by construction, homogeneity follows from the homogeneity of trace and absolute value and the triangle inequality can be easily established from the triangle inequality of the absolute value. If is a compact convex set, then the supremum is achieved on the boundary ∂. If is also closed under negation, i.e., -O ∈ if O ∈, then the absolute value is not necessary. To achieve an average-case notion of approximation, one could in principle replace the supremum in the definition of the induced semi-norm with an expectation value relative to some distribution over observables and would still retain the semi-norm property. The induced semi-norm ‖ρ - σ‖_=sup_O ∈ |[ ρ O ] - [σ O ]| quantifies the largest deviation between the predictions of two quantum states for observables in the set . A semi-norm differs from a norm only in the fact that ‖ X ‖_ = 0 does not imply that X = 0, which in the example of shadow tomography reflects that two distinct quantum states can give the exact same expectation values for a given set of observables. With the definition of the induced semi-norm as a figure of merit, we are ready to state the definition of a tomographic procedure which will form the backbone of our generalization to parametrized quantum states. An (ϵ, δ, n) tomographic procedure relative to a set of observables is an experimental procedure that uses T(ϵ, δ, n) copies of a quantum state ρ on n copies of a physical system to produce a classical representation ρ̂ such that [ ‖ρ - ρ̂‖_≤ϵ] ≥ 1 - δ. The number of copies T(ϵ, δ, n) is the sample complexity of the tomographic procedure. The fact that ‖·‖_ (typically) is a semi-norm instead of a norm reflects that we relaxed the requirements of full state tomography and means that even in the limit ϵ→ 0 the tomographic procedure cannot necessarily differentiate between distinct states. In the light of the above definition, we can now understand full state tomography as a tomographic procedure relative to the set of observables = { O : ‖ O ‖_∞≤ 1}. If we, for example, restrict the observables in this definition to only act non-trivially on ℓ of the n subsystems, the induced semi-norm is the ℓ-local trace norm ‖ X ‖_1,ℓmax_I ⊆ [n], |I| = ℓ‖ X_I‖_1, where we use the notation X_I to denote the reduced operator on the subsystem I, i.e., what is left after tracing out the complement I̅ of I, X_I = _I̅[X]. The local Clifford shadows of Ref. <cit.> can be used to construct an (ϵ, δ, n) tomographic procedure for the ℓ-local trace norm with a sample complexity of T(ϵ, δ, n) = O( ℓ 12^ℓ/ϵ^2logn/δ). Another norm captured by the definition of the induced semi-norm is the quantum Wasserstein distance of order 1 <cit.>. § DISTANCE MEASURES FOR PARAMETRIZED QUANTUM STATES We have introduced a general framework that captures the quality of a tomographic procedure through the semi-norm induced by a set of observables . In this section, we give a natural way of lifting this definition to parametrized states to obtain a measure of distance between functions ρ, σ→_n that can act as a figure of merit for tomography. We do so by thinking along similar lines as in the preceding section, where we computed the largest expectation value relative to a set of observables. 
Here, we now lift this reasoning to the parametrized case by recognizing that after an observable is fixed, the expectation value relative to this observable becomes itself a function from to of which we can compute the L^p function norm. Optimizing now again over all observables in the set leads us to the definition of the induced semi-norm for parametrized operators. Let X→_n be a parametrized operator and a set of observables. Then the L^p semi-norm induced by is defined as X(·)_, p sup_O ∈[OX(·)]_p. For p < ∞, it is X(·)_,p = sup_O ∈(∫_dμ() |[OX()]|^p )^1/p. Note that <ref> does not require X(·) to be a quantum state. Analogous to <ref>, if the above semi-norm is applied to a pair of quantum states, it quantifies the largest deviation between predictions for the set of observables , only here the predictions are functions [Oρ(·)]→ whose difference is accordingly measured with an L^p function norm as ρ(·) - σ(·)_, p = sup_O ∈[Oρ(·)] - [O σ(·)]_p . For a scalar function f() = ∑_k ∈Λ c_k φ_k(), the coefficients of the basis expansion { c_k }_k ∈Λ carry all the information about the function and the ℓ^q norms of the vector of coefficients = (c_k)_k ∈Λ can be related to the L^p norms of f in certain cases, most strikingly in the case p=q=2 through Parseval's Theorem and in more general cases through the Hausdorff-Young inequalities. We would like to use similar tools to relate the induced L^p semi-norm of a parametrized operator X() = ∑_k∈Λα_k φ_k() to some norm of the vector of coefficients = (α_k)_k∈Λ. To define a compatible norm of vectors of operators, we will again make use of the same intuition – namely that after fixing an observable, a vector of operators turns into a simple vector of complex numbers. This leads us to the definition of the induced ℓ^p semi-norm for vectors of operators. Let = (X_1, X_2, … , X_m) be a vector of operators X_i ∈_n. Then the ℓ^p semi-norm of induced by is defined as _, psup_O ∈( ∑_i=1^m |[OX_i]|^p )^1/p . Again, we quantify the largest deviation between predictions for the set of observables , but here we have a vector of predictions, which we treat with the usual p-norm: Let X_O ([O X_1], [O X_2],… , [O X_m)], note that X_O ∈^m, then - _, p = sup_O ∈(-)_O_p = sup_O ∈_O - _O_p . In the following, we will also need to understand linear transformations of vectors of operators. For a matrix A ∈^m' × m with elements a_i,j, we denote A_p → qsup__p = 1A_q and define multiplication between a matrix and a operator-valued vector in the usual sense as A [ a_1,1 X_1 + a_1,2 X_2 +… + a_1,m X_m; a_2,1 X_1 + a_2,2 X_2 +… + a_2,m X_m; ⋮; a_m',1 X_1 + a_m',2 X_2 +… + a_m',m X_m ] . Then, the semi-norm is submultiplicative, i.e., A_, p≤A_p → p_, p . As already hinted at, in the special case p=2, there is a connection between the semi-norms defined in <ref> and <ref> via an analogue of Parseval's Theorem. Let X be a parametrized operator X→_n and consider the expansion into an orthonormal basis {φ_k()}_k ∈Λ as X() = ∑_k ∈Λα_k φ_k(). Denote with the operator-valued vector consisting of the coefficient operators α_k. Then, X(·)_, 2 = _, 2 . We have X(·)_, 2 = sup_O ∈ [OX(·)]_2 = sup_O ∈( [Oα_k_1], [Oα_k_2], …)_2 = _, 2 . To arrive at the second equality, we have used Parseval's theorem for the L^2-space. Note the change from the L^2 function norm to the euclidean vector norm going from the first to the second equality. 
Having established natural distance measures between parametrized quantum states, we are now ready to study the tomography of parametrized states, or, respectively, how to lift a tomographic procedure relative to a set of observables to a tomographic procedure for parametrized quantum states. Such a procedure should give a generalized guarantee of a form similar to [ ρ(·) - ρ̂(·) _, p≤ϵ] ≥ 1 - δ. §.§ Sparse parametrized quantum states As already said, in this context we care a lot about the case when ρ(·) can be described using a set of indices S ⊆Λ such that |S| is small. Even if that is not the case, it might still be that ρ(·) can be well approximated using a small set S and is thus approximately sparse. This situation can arise, for instance, if a sparse parametrized quantum state is subject to noise. With the tools developed in this section, we can give a precise definition of sparsity and approximate sparsity. Let {φ_k()}_k ∈Λ be an orthonormal basis and ρ() = ∑_k ∈Λα_k φ_k() a parametrized quantum state. If there exists a set S of cardinality |S|=s such that the approximation ρ_S() = ∑_k ∈ Sα_k φ_k() has bounded error with respect to the induced function p-norm ρ(·) - ρ_S(·)_, p≤γ_L^p, or the induced vector p-norm (·) - _S(·)_, p≤γ_ℓ^p, we call the parametrized quantum state (s, γ_L^p)-sparse or (s, γ_ℓ^p)-sparse with respect to the orthonormal basis {φ_k()}_k ∈Λ and a set of observables . For p=2, per <ref>, we have ρ(·)-ρ_S(·)_,2 = - _S_, 2 = _S̅_,2 , where S̅ = Λ∖ S. This means that in the case p=2, the two notions of sparsity are equivalent. § COMPRESSED SENSING IN ORTHONORMAL FUNCTION BASES Before we come to our algorithm for tomography of parametrized quantum states, we have to review the basics of compressed sensing <cit.>. In signal theory, sampling rates were traditionally determined in accordance with the celebrated Nyquist-Shannon sampling theorem, which states that for a band-limited signal f(x) = ∑_k ∈Λ c_k e^-i k x, sampling at twice the maximum frequency k_max = max{ |k| : k ∈Λ} of the signal – in other words, taking O(|Λ|) many samples – suffices to capture the signal in its entirety. This ultimate limit can, however, be beaten when additional information about the structure of the signal is available. The most studied structural assumption is the aforementioned sparsity of a signal, i.e., when a signal is only supported on s frequencies S ⊆Λ. In this case, it can be shown that evaluating the signal at Õ(s) positions is sufficient to obtain the sparse vector of Fourier coefficients = (c_k)_k∈Λ that describes the signal. Other structures considered in compressed sensing are sparsity with respect to frames instead of orthonormal function bases <cit.>, hierarchical sparsity <cit.>, or low-rank structures <cit.>. Let us explore how this can be achieved. Formally, evaluating a general scalar signal f() = ∑_k ∈Λ c_k φ_k() at a point in parameter space gives us an observation that we can express as an inner product between the coefficient vector = (c_k)_k∈Λ and a vector å_i = (φ_k(_i))_k∈Λ that represents the particular values of the basis functions f(_i) = ∑_k ∈Λ c_k φ_k(_i) = ⟨, å_i ⟩ at the point _i.
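As a small illustration of this inner-product viewpoint, the following sketch (assuming the Fourier basis of the earlier example; all sizes and the sparse test vector are arbitrary toy choices) tabulates the basis-function values at sampled parameters and reproduces the observations from the coefficient vector.

import numpy as np

E_max = 8
ks = np.arange(-E_max, E_max + 1)                          # frequency set Lambda, D = 2*E_max + 1
basis = [lambda t, k=k: np.exp(-1j * k * t) for k in ks]   # phi_k(t) = exp(-i k t)

rng = np.random.default_rng(1)
M = 12
thetas = rng.uniform(0.0, 2 * np.pi, size=M)               # sampled from the orthogonality measure

# row i collects the values a_i = (phi_k(theta_i))_k
A = np.array([[phi(t) for phi in basis] for t in thetas])

c_true = np.zeros(len(ks), dtype=complex)                  # a sparse coefficient vector
c_true[[3, 10, 15]] = [1.0, -0.5j, 0.25]
f_obs = A @ c_true                                          # observations f(theta_i) = sum_k c_k phi_k(theta_i)

assert np.allclose(f_obs, [sum(c_true[j] * basis[j](t) for j in range(len(ks))) for t in thetas])

Stacking these inner products row by row is exactly the linear system described next.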
If we now obtain M different observations, we can again arrange them in a vector = (f(_i))_i=1^M and form a matrix A whose columns are the vectors å_i to arrive at the following linear relation between observations and the underlying coefficient vector [ f(_1); f(_2); ⋮; f(_M) ] = [ φ_1(_1) φ_2(_1) … φ_D(_1); φ_1(_2) φ_2(_2) … φ_D(_2); ⋮ ⋮ ⋱ ⋮; φ_1(_M) φ_2(_M) … φ_D(_M); ][ c_1; c_2; ⋮; c_D ], or in short = A. As we know both and A, we can try to solve the system of equations = A for . But if we obtain less than D different observations the system of equations will be underdetermined and thus have many different solutions. Formally speaking, any vector from the kernel of A, ker(A) = { : A = 0}, can be added to a solution and the result still satisfies the above system of equations. The standard approach to deal with an underdetermined system of equations is to choose the which has minimal ℓ^2 norm, i.e., which solves the optimization problem min ‖‖_2 , subject to = A . Interestingly, the above optimization problem has a closed-form solution that can be obtained from the pseudoinverse of A. For injective A, the pseudo-inverse is defined as A^+ (A^†A)^-1 A^† . The solution to the optimization problem is then _ℓ^2 = A^+. This approach does, however, not make use of the additional knowledge we have that the solution to the system of equations should be sparse and will usually not produce sparse vectors in the first place. To produce a sparse solution, it is tempting to instead solve the optimization problem min ‖‖_0 , subject to = A , where ‖‖_0 is the number of non-zero components of . This optimization problem can be shown to be NP-hard <cit.>. But this is no reason to despair because we can efficiently solve a relaxed version of the above problem min ‖‖_1 , subject to = A where ‖‖_0 is replaced with ‖‖_1. In fact, this is the tightest convex relaxation of the above problem. Under the guarantee that the true solution only has s non-zero entries, the solution of the optimization problem of <ref> can be shown to coincide with the one of <ref> when A has the so-called null space property of order s <cit.>. On an intuitive level, this property states that the kernel of A contains only vectors that are not very sparse. While the null space property is both a necessary and a sufficient condition, it is not easy to work with. This is why most works in compressed sensing are based on a stronger sufficient notion called the restricted isometry property (RIP) <cit.>, which in turn implies the null-space property. A matrix A ∈^M × D has the restricted isometry property (RIP) with constant Δ_s if for all s-sparse vectors , we have that (1-Δ_s)‖‖_2^2 ≤‖ A ‖_2^2 ≤ (1 + Δ_s)‖‖_2^2. We will use the notation Δ_s(A) to denote the smallest Δ_s such that the above holds for a matrix A. An equivalent reading of the restricted isometry property is given by introducing a projector Π_S onto all the entries in an index set S. Then, defining A_S A Π_S, we see that the restricted isometry property is equivalent to ‖Π_S - A_S^† A_S ‖_∞≤Δ_s for all sets of indices S of cardinality at most s. In other words, to sparse vectors, the matrix A looks like an isometry. The theory of compressed sensing tells us that we can in principle determine the sparse coefficients as long as the matrix A that contains the basis function values φ_k(_i) has the restricted isometry property with a low enough value of the RIP constant Δ_2s < 1/3 <cit.>. 
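The contrast between the minimum ℓ^2-norm solution and the ℓ^1 relaxation is easy to reproduce numerically. The following sketch (a toy example using a generic real Gaussian sensing matrix rather than a bounded orthonormal system, and scipy's linear-programming routine as a stand-in for a dedicated basis-pursuit solver) typically recovers the sparse vector exactly from far fewer observations than unknowns, while the pseudoinverse solution is dense.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
M, D, s = 20, 60, 3
A = rng.normal(size=(M, D)) / np.sqrt(M)           # generic real sensing matrix
c_true = np.zeros(D)
c_true[rng.choice(D, size=s, replace=False)] = rng.normal(size=s)
y = A @ c_true                                      # noiseless observations

# minimum l2-norm solution via the pseudoinverse: consistent with the data, but not sparse
c_l2 = np.linalg.pinv(A) @ y

# basis pursuit (min ||c||_1 s.t. Ac = y) as a linear program with c = u - v, u, v >= 0
cost = np.ones(2 * D)
A_eq = np.hstack([A, -A])
res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * D))
c_l1 = res.x[:D] - res.x[D:]

print("nonzeros of l2 solution:", np.sum(np.abs(c_l2) > 1e-6))
print("nonzeros of l1 solution:", np.sum(np.abs(c_l1) > 1e-6))
print("l1 recovery error:", np.linalg.norm(c_l1 - c_true))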
Other, sometimes even more practical approaches to compressed sensing are available that have slightly different requirements on the RIP constant <cit.>. While it is notoriously difficult to hand-craft deterministic matrices A that have a good RIP constant <cit.>, randomized methods perform surprisingly well <cit.>. In the case of bounded orthonormal systems we care about, we can make use of the following Corollary of a result of Rauhut <cit.> that bounds the number of points _i we need to sample according to the orthogonality measure μ() of the bounded orthonormal system to achieve a desired RIP constant. Let A ∈^M × D be the random sampling matrix associated to a bounded orthonormal system {φ_(·)} with constant K. Then, [ Δ_s(A/√(M)) ≤Δ] ≥ 1 - δ if M = s K^2/Δ^2( C_1 ln 300 s ln 4 D + C_2 ln2/δ) , where the values of the constants do not exceed C_1 ≤ 103 140 and C_2 ≤ 2 736. We see that we, indeed, get the desired dependence Õ(s). So far, we only considered the setting in which all the observations we obtain are perfect. In reality, however, the observations can come with errors, which means we observe f̂_i = f(_i) + η_i for some error η_i. In this case, the condition = A might not be satisfiable, which in turn means that the ℓ^1 optimization problem of <ref> cannot be solved. A relaxation of the problem, min ‖‖_1 , subject to ‖ A - ‖_2 ≤κ, can, however, remedy this issue. The parameter κ controls how much slack the solution can have, and given it is chosen suitably relative to the magnitude of the errors, the solution can be recovered with good accuracy even from noisy observations <cit.>. The basic approach presented here has found numerous generalizations and improvements across the compressed sensing literature. So far, the solution strategy for compressed sensing reconstruction was relaxing the problem to the convex optimization <ref> or, in the noisy setting, <ref>. Next to that, a large family of greedy strategies for compressed sensing reconstruction exist. These algorithms are generally much easier to implement and offer very similar performance guarantees. The most prototypical example of these is iterative hard thresholding <cit.>, which is both conceptually simple and still achieves surprising performance. The algorithm consists of starting from a candidate _0 = 0 and then iteratively updating with the rule _t+1 = _s[ _t + A^† ( - A _t) ], where _s is the thresholding operator that cuts off all entries of the given vector except for the s largest in absolute value. Essentially, iterative hard thresholding can be seen as gradient descent on the objective ‖ - A ‖_2^2, interleaved with projections onto the set of sparse solutions. The greedy optimization algorithms for compressed sensing have also been extensively studied and improved. For a broader review of compressive sensing algorithms, see Ref. <cit.>. § TOMOGRAPHY FOR PARAMETRIZED QUANTUM STATES In this section, we explain how to perform tomographic procedures on parametrized quantum states to obtain approximate classical representations ρ̂(·) of parametrized quantum states ρ(·) with rigorous error guarantees. §.§ Algorithm Our approach brings techniques from compressed sensing to the quantum realm, and as such generalizes the approaches presented in the preceding section. There, the goal was to reconstruct a scalar signal f(·). In the quantum case, we aim to find a good classical approximation of a parametrized quantum state ρ() = ∑_k∈Λα_k φ_k(). 
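The iterative-hard-thresholding update rule quoted above is simple enough to state in a few lines; as a point of reference for the operator-valued recovery that follows, a scalar sketch reads as below. Note that we add an explicit step size as a common practical stabilization of the unit-step rule, and the problem sizes are arbitrary toy choices.

import numpy as np

def iterative_hard_thresholding(A, y, s, n_iter=300):
    """Projected gradient descent on ||y - A c||_2^2: a gradient step followed by
    keeping only the s entries largest in absolute value."""
    c = np.zeros(A.shape[1], dtype=A.dtype)
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # step size; the rule in the text uses mu = 1
    for _ in range(n_iter):
        proxy = c + mu * (A.conj().T @ (y - A @ c))
        keep = np.argsort(np.abs(proxy))[-s:]     # thresholding operator T_s
        c = np.zeros_like(proxy)
        c[keep] = proxy[keep]
    return c

rng = np.random.default_rng(3)
M, D, s = 60, 80, 3
A = rng.normal(size=(M, D)) / np.sqrt(M)
c_true = np.zeros(D)
c_true[rng.choice(D, size=s, replace=False)] = 1.0
c_hat = iterative_hard_thresholding(A, A @ c_true, s)
print("IHT recovery error:", np.linalg.norm(c_hat - c_true))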
In the scalar case, the reconstruction has been obtained from observations of the signal at different parameter values. To reconstruct a parametrized quantum state, we can proceed in a similar fashion. We randomly sample M parameter values _i ∈ according to the orthogonality measure μ(·) of the orthonormal system {φ_k(·) } (see <ref>). Analogous to the scalar case, we can arrange the actual states in an operator-valued vector =(ρ(_1), …, ρ(_M)) such that A = , with _k = α_k, A_ik = φ_k(_i) , where the measurement matrix A ∈^M × D. Note that we can always write the basis functions φ_k(·) in a form that ensures that the index k goes k=1, 2, 3, …, which we assume here for ease of notation. The fact that we sample the _i according to the measure μ(·) for which the φ_k(·) form an orthonormal basis ensures we can use <ref> to establish a recovery guarantee later. At the parameter values {_i}_i=1^M, we perform tomographic procedures with N(ϵ_i, δ_i, n) shots each. These constitute operator-valued observations ρ̂(_i) which approximate the true quantum states ρ(_i). We also arrange the observations in an operator-valued vector = (ρ̂(_1), …, ρ̂(_M) ). The fact that we use a tomographic procedure means that the approximation is not perfect, which is why we need to write = + = A + . Here is an operator-valued vector that captures the deviations of observations ρ̂(_i) from the true states ρ(_i). A guarantee for the control of the magnitude of these deviations η_i_ is inherited from the tomographic procedure used to construct the approximations ρ̂(_i). Here, we already see that the quantum setting holds more challenges than the scalar one, as there is a multitude of possible distance measures relative to which the error operators η_i are considered small. One might now be temped to think that a generalization of the standard approaches for compressed sensing with noise presented in the preceding section would be straightforward. One can of course replace the norms ‖·‖_1/2 in either the optimization-based approach of <ref> or the update rule of iterative hard thresholding given in <ref> with their induced semi-norm counterparts ·_,1/,2. Executing the algorithms would then, however, necessitate to hold the full vector in memory, which, as it is a vector of operators, is very inefficient. It would also require us to compute the norm ·_,1/,2 explicitly, which can also be numerically intractable, for example when the induced semi-norm is equal to the trace norm. We circumvent this problem by splitting the recovery procedure in two parts: one where we identify the sparse support S of a parametrized quantum state (see <ref>) and a recovery procedure that takes this set as input, both using the same data. This section is devoted to showing that upon input of a set S, we can recover the parametrized quantum state on this support with bounded error. Note that this also subsumes the non-sparse case where S = Λ. At the end of this section, we show that in this particular case, the sample complexity of our algorithm can be further improved. We now assume that we are given a set S such that ρ(·) is (γ_ℓ^1,s)-sparse with respect to S. Given S, we can approximate the operator valued α_k with k ∈ S by applying the pseudo-inverse of A_S to the linear, operator valued equation A =, obtaining = A_S^+ ( A + ) = _S + A_S^+ A_S̅_S̅ + A_S^+ . For the second equality, we have used that A_S _S'=0 for disjoint sets S, S'. 
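In code, the recovery step just described amounts to a single pseudoinverse applied to the stack of observations. The sketch below is purely illustrative: the observations are represented as plain matrices obtained from "perfect" tomography, and all names are our own rather than a fixed interface.

import numpy as np

def recover_coefficients(observations, thetas, basis_funcs, support):
    """Restrict the sampling matrix A to the candidate support S and apply its
    pseudoinverse: each alpha_k is then a fixed linear combination of the observations."""
    A_S = np.array([[basis_funcs[k](t) for k in support] for t in thetas])   # shape (M, |S|)
    A_pinv = np.linalg.pinv(A_S)                                             # shape (|S|, M)
    obs = np.stack([np.asarray(o, dtype=complex) for o in observations])     # shape (M, d, d)
    alphas = np.einsum('ki,iab->kab', A_pinv, obs)
    return dict(zip(support, alphas))

# minimal demo with the Fourier basis and exactly known coefficients
rng = np.random.default_rng(4)
basis = {k: (lambda t, k=k: np.exp(-1j * k * t)) for k in range(-5, 6)}
support = [-2, 0, 3]
true_alphas = {k: rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for k in support}
thetas = rng.uniform(0, 2 * np.pi, size=8)
observations = [sum(a * basis[k](t) for k, a in true_alphas.items()) for t in thetas]
estimate = recover_coefficients(observations, thetas, basis, support)
assert all(np.allclose(estimate[k], true_alphas[k]) for k in support)

The error analysis below controls how well this works when the observations are noisy and the sparsity is only approximate.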
If the number of measurements M is chosen sufficiently large (more on this in the error analysis), one can guarantee with high probability that (i) A_S is injective, such that A_S^+A_S = _S, which we also used for the second equality and (ii) that combined with estimates on and _S̅, we are able to control the error ρ(·)- ρ̂_S(·)_,2 = - _S_,2. In the limit s → D and _,p→ 0 we recover the original parametrized state ρ̂(·)=ρ(·). We summarize this procedure in <ref>. §.§ Error analysis To bound the error between parametrized quantum state and classical approximation, we start from <ref> to obtain ρ(·) - ρ̂_S(·)_,2 = _S̅ - A_S^+A_S̅_S̅ - A_S^+ _,2 ≤_S_,2 + A_S^+A_S̅_S̅_,2 ≤ + A_S^+_∞_,2 . To provide a rigorous error analysis for <ref>, we need to establish control over the quantities A_S^+A_S̅_S̅_,2, A_S^+_∞ and _,2 while ensuring injectivity of A_S. The term _S_,2 can be bounded by the ℓ^2 sparsity constant γ_ℓ^2 and cannot be avoided in general. We start by observing a bound on _,2. If observations = (ρ̂(_1), …, ρ̂(_M)) are constructed from true states = (ρ(_1), …, ρ(_M)) at points {_i}_i=1^M using an (ϵ_i, δ_i, n) tomographic procedure, then the deviations = - are bounded as ‖‖_,p ≤_p , where = (ϵ_1, …, ϵ_M), with probability at least 1-∑_i=1^M δ_i. Note that for constant ϵ_i = ϵ, one obtains _,p≤ M^1/pϵ . We know that the entries of fulfill ‖η(_i) ‖_ = ‖ρ̂(_i) - ρ(_i) ‖_≤ϵ_i with probability greater than 1-δ_i by the definition of a tomographic procedure. As such, the desired result holds with probability at least ∏_i=1^M (1- δ_i ) ≥ 1 - ∑_i=1^M δ_i by the union bound. Next, we give a bound for the term A_S^+A_S̅_S̅_,2 which quantifies the spillover from imperfect sparsity into our estimate. If _, 1≤γ_ℓ^1 and Δ_2s(A/√(M)) ≤Δ/2 ≤ 1/2, then A_S^+A_S̅_S̅_,2≤Δγ_ℓ^1 . We give a sketch of the proof, for details see <ref>. For two disjoint sets S, S' of equal cardinality s, we can control ‖ A_S^† A_S'‖_∞≤Δ_2s(A) <cit.>. Armed with this result for disjoint sets of equal cardinality, one then splits S̅ into sets of cardinality at most s and then bounds the ℓ^2 with the ℓ^1 norm to obtain the desired result. Lastly, we state an error bound on the term A_S^+_∞. If Δ_s(A/√(M)) ≤Δ≤ 1/2, then A_S^+_∞≤1/√(M)√(1+Δ)/1-Δ≤√(6/M) . By definition, the bound on the RIP constant Δ_s(A/√(M)) ensures that the singular values of all submatrices of size at most s are in the interval [1-Δ, 1+Δ]. Then, using the definition of the pseudo inverse for injective matrices, we find A_S^+_∞ = (A_S^† A_S)^-1 A_S^†_∞ ≤1/√(M)(A_S^†/√(M)A_S/√(M))^-1_∞A_S^†/√(M)_∞ ≤1/√(M)√(1+Δ)/1-Δ . The full statement follows from the fact that this expression is monotonically increasing in Δ and evaluating at Δ = 1/2. Combining these three lemmas, we arrive at the desired error bound for the quantity ρ(·)-ρ̂_S(·)_,2 and therefore a recovery guarantee for our <ref>. Let ρ(·) be a parametrized quantum state with coefficient vector relative to a bounded orthonormal system with constant K that is γ_ℓ^1- and γ_ℓ^2-sparse with respect to a set of indices S of cardinality s. The output of <ref> satisfies [ρ(·) - ρ̂_S(·)_,2 ≤γ_ℓ^2 + Δγ_ℓ^1 + ϵ] ≥ 1 - δ for a parameter 0 < Δ≤ 1 using a number M = s K^2/Δ^2( C_1 ln 300 s ln 4 D + C_2 ln2/δ) of randomly sampled parameters and a total number of samples N(ϵ, δ, n) = M T( ϵ/√(6), δ/2M, n). The procedure furthermore guarantees that Δ_3s(A/√(M)) ≤ 1/2. The values of the constants do not exceed C_1 ≤ 103 140 and C_2 ≤ 2 736. 
M is chosen according to <ref> such that that Δ_3s(A/√(M)) ≤Δ/2 with probability at least 1-δ_RIP, such that <ref> and <ref> apply. Those together with <ref> applied to <ref> give the desired result. Choosing ϵ' = ϵ/√(6), δ' = δ/2M and δ_RIP = δ/2 yields the desired statement. Let us reflect on this result: We first observe that the sample complexity of our reconstruction is of order O(s log s log D), which is much more efficient than full recovery in the practically relevant setting where s≪ D. We observe that the error of our reconstruction is bounded by three separate contributions. The constant γ_ℓ^2 quantifies how well the optimal set of indices S approximates the true signal and is therefore unavoidable. The next term Δγ_ℓ^1 stems from the possible spillover of contributions outside of S into our estimate that stem from the fact that in compressed sensing, we can only control subsets of cardinality s. We can mitigate this term by making the parameter Δ smaller, at inverse quadratic cost. The last term, ϵ, captures the inherent error of our tomographic procedure and can therefore also be reduced with a cost depending on the specific procedure chosen, usually this cost is also inverse quadratic in ϵ. As such, <ref> provides a way for us to approximate ρ(·) as well as possible on a sparse support in a controlled way. One might wonder why M is chosen such that a RIP constant of order 3s is guaranteed, despite the fact that <ref> and <ref> only require RIP constants of order 2s and s, respectively. The reason for this is that this ensures that the data we use for the sparse recovery can equivalently used in the support identification procedure introduced in the next section. §.§ Full recovery The preceding discussion and analysis pertains to the case of sparse recovery, where an approximation is only performed on a small subset S of the whole set of basis functions Λ. Often, full recovery, i.e., the setting where S = Λ and s = D, is still desired. This is for example the case when there is no known sparsity structure or simply when full knowledge of ρ(·) is wanted. In this case, the analysis of <ref> still applies and the constants γ_ℓ^1 = γ_ℓ^2 = 0 by construction. In this setting, however, the result we have used to impose a certain RIP constant on the measurement matrix A, <ref>, is sub-optimal as the additional sparsity considerations are not needed anymore. By using a different result, we can improve both the logarithmic dependencies as well as the constants to obtain the following guarantee for full recovery through the simplified <ref>. We obtain the following recovery guarantee. Let ρ(·) be a parametrized quantum state with coefficient vector relative to a bounded orthonormal system with constant K. The output of <ref> satisfies [ρ(·) - ρ̂_S(·)_,2 ≤ϵ] ≥ 1 - δ using a number M = CDK^2 log2D/δ of randomly sampled parameters and a total number of samples N(ϵ, δ, n) = M T( ϵ/√(6), δ/2M, n). The procedure furthermore guarantees that Δ_3s(A/√(M)) ≤ 1/2. The value of the constant does not exceed C ≤ 11. Theorem 12.12 of Ref. <cit.> posits that when sampling in a bounded orthonormal system with constant K, M = 8/3D K^2/Δ^2log2D/δ_A samples are sufficient to guarantee that the singular values of A/√(M) lie in the interval [1-Δ, 1+Δ] with probability at least 1-δ_A. We now proceed as in the proof of <ref> but choose Δ = 1/2, ϵ'=ϵ/√(6), δ' = δ/2M and δ_A = δ/2 to arrive at the desired statement. This yields a constant C = 32/3 ≤ 11. 
§.§ Predicting expectation values On the level of the data, the approximation of ρ(·), ρ̂(·), is given by the coefficients {α̂_k }_k ∈ S. The α̂_k themselves are given as linear combinations of observations, α̂_k = ∑_i=1^M (A_S^+)_k,i ρ̂(_i). We can use this data to predict the expectation value of an observable O evaluated on ρ() for all . [Oρ̂()] = ∑_k ∈ S[O α_k] φ_k() = ∑_k ∈ S∑_i=1^M (A_S^+)_k,i φ_k() [Oρ̂(_i)] = ∑_i=1^M m_i() [Oρ̂(_i)] . The coefficients m_i() are given as m_i() ∑_k ∈ S (A_S^+)_k,iφ_k() . The observations ρ̂(_i) are classical representations obtained from a tomographic procedure, therefore there is a protocol to obtain expectation values [Oρ̂(_i)]. Overall, this is efficient as long as computing [Oρ̂(_i)] is efficient. This is not the case for full tomography, where the observations ρ̂(_i) are density matrices. For local Clifford classical shadows, however, it would be efficient for all observables for which the shadow protocol gives guarantees. In principle, parametrized expectation values could also be computed without performing the recovery of ρ(·) in terms of ρ̂(·) that we described it in this section. This is because expectation values could also be computed from the observations ρ̂(_i) directly. For a fixed observable O, the values [Oρ̂(_i)] are noisy measurements of the signal f() = [Oρ()] = ∑_k ∈Λ c_k(O) φ_k() . Therefore, the coefficients c_k(O) could be approximately recovered from the linear system A_O = _O using compressed sensing techniques for scalar signals as presented in <ref>. This, however, would require us to run a compressed sensing algorithm for every new observable we want to evaluate, incurring unnecessary overheads. Furthermore, this approach would not give rise to a meaningful tomography of the parametrized quantum state ρ(·) where we want explicit access to the operator-valued coefficients α_k. Our algorithm gives rise to an operationally meaningful classical representation of the parametrized state ρ(·), where we have the same access to the coefficients α̂_k that we have to the observations ρ̂(_i). For example, when constructing the observations using full state tomography, we can explicitly give the coefficients α̂_k in matrix form. From this, we can for example compute a classical representation of the reduced parametrized state ρ̂_A(·)=_A[ρ̂(·)], where A is a subsystem. Without direct access to the coefficients α̂_k, it is not obvious how to do this. Direct access to the coefficients α̂_k can also speed up the computation of the expectation value [Oρ̂()]. For a classical shadow σ̂ of a quantum state σ, expectation values are computed via snapshots σ̂_j as [Oσ̂] = 1/J∑_j=1[Oσ̂_j] . If we take our observations ρ̂(_i) to be classical shadows, computing [Oρ̂()] means evaluating a weighted sum of classical shadows. Let t denote the run time needed to compute the expectation value of a single classical shadow, then the overall computational complexity of computing [Oρ̂()] is O(Mt). However, if we have access to the coefficients α̂_k as a linear combination of observations, we can compute the coefficients m_i() in <ref> for a fixed before evaluating the expectation values [Oρ̂(_i)]. The precision of a prediction from a classical shadow scales with the number of snapshots evaluated. Evaluating [Oρ̂()] = ∑_i=1^M m_i() [Oρ̂(_i)] can be interpreted as a sampling problem, where we compute the values [Oρ̂(_i)] from samples of the snapshots that make up the observations ρ̂(_i). 
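The weights m_i() defined above are cheap to precompute. A sketch (again with illustrative names, and treating the observed expectation values as already available, e.g., from a shadow protocol) reads:

import numpy as np

def prediction_weights(theta, thetas, basis_funcs, support):
    """m_i(theta) = sum_{k in S} (A_S^+)_{k,i} phi_k(theta), so that the prediction is
    tr[O rho_hat(theta)] = sum_i m_i(theta) * tr[O rho_hat(theta_i)]."""
    A_S = np.array([[basis_funcs[k](t) for k in support] for t in thetas])
    phi = np.array([basis_funcs[k](theta) for k in support])
    return phi @ np.linalg.pinv(A_S)                  # one weight per sampled parameter

def predict_expectation(theta, observed_expvals, thetas, basis_funcs, support):
    """observed_expvals[i] estimates tr[O rho_hat(theta_i)] for the observable of interest."""
    m = prediction_weights(theta, thetas, basis_funcs, support)
    return m @ np.asarray(observed_expvals)

Since the weights do not depend on the observable, they can be reused for all observables of interest at a given parameter value.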
If we have access to the coefficients m_i beforehand though, we can do much better than uniform sampling. Instead, we apply an importance sampling, where we sample from the distribution [Oρ̂(_i)] with probability |m_i()|/∑_i' |m_i'()|. It takes a moment of though to see that this reduces the computational time required to O(t). Naturally, this method can also be applied if the observable O is given as a sum O=∑_j h_j O_j, where each component O_j can be efficiently evaluated by a classical shadow. For an explicit calculation and further details on this see Ref. <cit.>. § IDENTIFICATION OF A SPARSE SUPPORT In this section, we tackle the question of identifying a sparse support S such that ρ_S(·) is a good approximation to ρ(·) which we postponed in <ref>. While it might at first sound like a straightforward exercise, it comes with a lot of subtle difficulties <cit.>. The reason for this is that we only have indirect access to the coefficients α_k in the expansion of ρ(·) = ∑_k ∈Λα_k φ_k(·) and that we can usually not compute the norms ‖·‖_ in a computationally efficient manner. Furthermore, we desire a computational procedure that is compatible with the underlying restrictions of the chosen tomographic procedure. Classical shadows <cit.>, for example, only allow an efficient computation of expectation values of Pauli words. In an ideal world, we would like to find the set S that achieves the best constant γ_ℓ^2 in <ref>, i.e., S _S ⊆Λ, |S| = s_S̅_,2. If we disregard computational complexity considerations, the data we gathered in <ref> is sufficient to identify a good candidate for the support. The guarantees on the RIP constant of the measurement matrix A allow us to run standard compressed sensing approaches for any fixed observable O, and as there is an observable whose scalar coefficients achieve the induced semi-norm _,2, one could in principle always find a good candidate for the support by trying enough observables. In general, it is however computationally intractable due to the aforementioned constraints. We, therefore, present a probabilistic algorithm that uses the expectation values of randomly chosen observables to guarantee that _S̅_,2 is small for = { O : O_2 ≤ 1 }. In particular our method gives a guarantee that the coefficients α_k outside S have small Hilbert-Schmidt norm. We believe this to be a reasonable proxy for sparsity – first of all, the notions of sparsity coincide in the setting of exact sparsity where a set S exists such that all α_k outside of S are zero because the Hilbert-Schmidt norm is a proper norm. Second, the Hilbert-Schmidt norm subsumes all sets of observables with small Hilbert-Schmidt norm ⊆{ O : ‖ O ‖_2 ≤ 1 } and is compatible with global Clifford shadows. §.§ Algorithm We give a protocol to estimate the Hilbert-Schmidt norm α_k such that the s coefficient operators largest in Hilbert-Schmidt norm can be determined. We estimate from random Pauli measurements, which is convenient as it is hardware friendly as well as compatible with broadly used tomographic procedures like classical shadows. Recall the parameter dependent expectation value for some observable O, which is given by [Oρ()] = ∑_k [Oα_k] φ_k() ∑_k c_k(O) φ_k() . Now, the task of identifying S is equivalent to identifying the indices of the s largest expectation values (|c_k(P)|^2), where P is drawn uniformly at random from all n-qubit Pauli words. To see this, consider writing an operator α_k in terms of the normalized Pauli basis. 
We denote α_k = ∑_l=1^d a_k,lB_l/√(d) , where d=4^n, a_k,l∈ and { B_l }_l=1, … , d = {, X, Y, Z}^⊗ n is the normalized Pauli basis. In other words, we can identify the operator α_k with a vector å_k corresponding to its Pauli expansion. Note that c_k(B_l) = √(d) |a_k,l|, [ |c_k(P)|^p ] = d^p/2-1å_k_p^p, and especially α_k_2 = å_k_2= √([|c_k(P)|^2]) . Realizations of the random variable c_k(P) can be obtained by solving the linear system A (P) = _P , where we denote (P) := (c_1(P), … , c_D(P)) and _P = ([P ρ(_1)], …, [Pρ(_M)]). We only want to probe the parametrization at M=O(log(D)) points. Therefore, M ≪ D and the above system is heavily underdetermined and can only be reasonably queried by means of sparse recovery, thus further restricting our ability to read out vectors (P) directly. Additionally, we do not have direct access to the above linear system, as we can only obtain observations ρ̂(_i), not quantum states ρ(_i). Thus, we can only obtain information from the linear system A (P) = _P = _P + _P . In principle, any sparse recovery algorithm might be employed to extract information from <ref>. In practice, most sparse recovery algorithms work reasonably well <cit.>. Here, we employ hard thresholding pursuit (HTP) <cit.>, mainly since it is easy to implement. The constants in <ref> are chosen such that Δ_3s(A/√(M)) ≤ 1/2, which ensures convergence of HTP. Applying HTP to <ref>, we recover a vector ^#(P). The error between (P) and ^#(P) depends on the maximum error rate over all Pauli observables ηmax_P_P_2 and the distance between (P) and its best possible s-sparse approximation in the 1-norm σ_s((P))_1 min{(P) - _1 : is s-sparse} . The error between (P) and ^#(P) then obeys the error bound (P) - ^#(P)_2 ≤D_1/√(s)σ_s((P))_1 + D_2 _P_2 ≤D_1/√(s)σ_s((P))_1 + D_2 η where D_1, D_2 are constants. This matches the intuition that the performance of sparse recovery algorithms depends on the noise level and the compressibility of the target vector, quantified here via σ_s((P)). For the right-hand side in <ref> we will use the shorthand κ(P) D_1/√(s)σ_s((P))_1 + D_2 η . The algorithm we use to identify S computes estimates of the largest expectation values X_k √([|c_k^#(P)|^2]) and outputs the support of the s largest values as the set S. We estimate the expectation value X_k by the empirical mean X̂_k √(1/L∑_l=1^L |c_k^#(P_l)|^2) , where the Pauli observables P_1, …, P_L are drawn uniformly at random. We summarize this procedure in <ref>. Clearly, this algorithm does not necessarily succeed. It is not clear that the indices of the largest expectation values X_k coincide with the largest values α_k_2. Note that the parameters ϵ and δ only control how well we approximate X_k with the empirical mean X̂_k (see <ref>) and have no implications for how close X_k and α_k_2 are. Therefore, we analyze the requirements which are needed to ensure the success of <ref>. §.§ Error analysis <ref> guarantees that the expectation values X_k are estimated to precision ϵ. We can ensure that picking the set S associated to the largest expectation values X_k is the correct set only if there is a gap of at least 2ϵ between the possible values in S and in S̅, or, formally, min_k ∈ S X_k - max_k' ∈S̅ X_k'≥ 2 ϵ, a condition that is not automatically fulfilled by construction. We can, however, relate this condition to the parameters of the parametrized quantum state and the guarantees of the compressed sensing procedure.
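For concreteness, the procedure summarized above can be sketched as follows before we turn to the question of when it succeeds. The sketch assumes, purely for illustration, that the observations ρ̂(_i) are available as density matrices and that the HTP iteration count is a fixed small constant; neither choice is part of the guarantees, and the function names are our own.

import numpy as np
from functools import reduce

PAULIS = [np.eye(2), np.array([[0., 1.], [1., 0.]]),
          np.array([[0., -1j], [1j, 0.]]), np.diag([1., -1.])]

def hard_thresholding_pursuit(A, y, s, n_iter=50):
    """HTP sketch: select the s largest entries of a gradient step, then refit by
    least squares on that candidate support."""
    c = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        proxy = c + A.conj().T @ (y - A @ c)
        S = np.sort(np.argsort(np.abs(proxy))[-s:])
        c = np.zeros_like(c)
        c[S], *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    return c

def identify_support(observations, A, s, L, rng):
    """Estimate X_k = sqrt(E_P |c_k^#(P)|^2) from L uniformly random Pauli words and
    return the indices of the s largest estimates."""
    n = int(np.log2(observations[0].shape[0]))
    M = len(observations)
    A_n = A / np.sqrt(M)                              # HTP is applied to the rescaled system
    acc = np.zeros(A.shape[1])
    for _ in range(L):
        P = reduce(np.kron, [PAULIS[j] for j in rng.integers(0, 4, size=n)])
        y = np.array([np.trace(P @ rho) for rho in observations]) / np.sqrt(M)
        acc += np.abs(hard_thresholding_pursuit(A_n, y, s)) ** 2
    return np.sort(np.argsort(np.sqrt(acc / L))[-s:])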
If min_k ∈ Sα_k_2 - max_k' ∈S̅α_k'_2 ≥ 2ϵ + 2D_1/√(s)√([σ_s((P))_1^2 ]) + 2D_2 η, then we can guarantee that min_k ∈ S X_k - max_k' ∈S̅ X_k'≥ 2 ϵ . For any P, it follows from <ref> that |c_k(P) - c^#(P)| ≤κ(P). Applying the triangle inequality and taking the expectation value then leads to the desired result. We see that the success of <ref> depends on the magnitude of the squared expectation value of κ(P), it is clear that as the size of the subset s approaches D, this term vanishes and S is identified correctly. Besides giving a condition on the success of <ref>, the theorem also dictates the necessary precision for the estimate X̂_k. What is left is to determine the sample complexity required such that |X_k - X̂_k|≤ϵ. First, note that c_k^#(P) is bounded. Denote with κmax_P κ(P) . the maximum error bound between vector (P) and ^#_k(P) over all Pauli observables. For an n-Pauli observable P ∈{, X, Y, Z}^⊗ n, we have that |c^#_k(P)| ≤ 1 + κ . We start by observing a bound on the absolute value of the original coefficients c_k(O). By Hölder's inequality, [Oρ()] ≤O_∞. Then, it follows that f_2^2 = ∑_k |c_k(O)|^2 = ∫_dμ() |f()|^2 ≤O_∞^2 and thus |c_k(O)| ≤O_∞ . Now, apply the triangle inequality to a single term of the right-hand side in <ref> and apply <ref> to bound |c_k(P)|. Note that for any Pauli P, P_∞=1. As we deal with bounded random variables, we obtain performance guarantees by applying Hoeffding's inequality. Using HTP to probe linear systems of the form <ref> to obtain data vectors ^#(P), one achieves | X̂_k - X_k |≤ϵ with probability at least 1-δ by using L ≥log( 2D/δ) (1 + κ)^2/2 ϵ^4 many data vectors. We give a proof for completeness in <ref>. The scaling of ϵ^4 naturally appears because we require guarantees on X̂_k and not on only on 1/L∑_l=1^L |c^#(P_l)|^2. §.§ Error analysis under additional assumptions Here, we introduce a reasonable additional assumption on the input of the support identification task and, as a consequence, provide a more refined analysis of the precision requirement for the success of <ref> given in <ref>. As a starting observation, the dependence on the term σ_s((P))_1 provides overly pessimistic bounds on the sample complexity in certain practical scenarios. To see this, consider first the following example: For all k ∈ S, let |c_k(P)|=1 for all P. For k' ∈S̅, let |c_k'(P)|=x for all P, x ≪ 1. Under these conditions, one would expect that a precision ϵ = O(1-x) suffices to provably determine the correct support. However, from <ref>, specifically from the term σ_s((P))_1, we would conclude that a precision of O((D-s)x) is necessary. This is due to the form of the error bounds for sparse recovery in <ref>, from which the dependence on σ_s((P))_1 is inherited. This indicates that for certain inputs, the hardness of the support identification task might not be best described by sparsity or compressibility, indicated by small [σ_s((P))_1^2], but rather by some measure of distance between coefficients corresponding to S and S̅. The purpose of this section is to equip this intuition with rigorous results. Throughout this paragraph, we will assume that what we call the local support identification assumption holds true. [Local support identification] Consider the linear system given in <ref>, which is A(P) = _P + _P . Then, the s non-zero entries of the approximate solution ^#(P) obtained by means of sparse recovery coincide with the s largest entries of the exact solution (P). 
It is important to note that we are not assuming that the non-zero entries coincide with the s largest entries of the noise-free solution (P). <ref> becomes increasingly valid the larger the gap between the smallest of the s largest values of (P) and the largest of the rest. For a sufficiently large gap, <ref> is even provably true: According to <ref>, it holds that (P) - ^#(P)_2 ≤D_1/√(s)σ_s((P))_1 . As the gap increases, σ_s((P))_1 goes to zero, such that at some point violating <ref> would immediately violate the above error bound. As we will see, <ref> allows us to analyze a slightly different error model. We can express the deviations in the parametrized quantum state as ρ̂() = ∑_k (α_k + γ_k) φ_k() = ρ() + η() . If we solved the linear system A(P) = _P + _P traditionally, i.e., without using sparse recovery techniques, we would obtain coefficients ĉ_k(P) = [(α_k + γ_k)P] c_k(P) + n_k(P) . When using sparse recovery techniques, we instead obtain c^#_k(P), which are not easily decomposed in elementary terms, making the analysis of the precision requirements for the success of <ref> cumbersome. However, by virtue of <ref>, for any fixed Pauli B_l we can write c_k^#(B_l) = (c_k(B_l) + n_k(B_l)) χ_k,l , where χ_k,lχ(|c_k(B_l)+n_k(B_l)|>max_k' ∈S̅ |c_k'(B_l) + n_k'(B_l)| ) , that is the indicator function which assumes 1 if there is no entry in (B_l) corresponding to an index in S̅ that is larger than ĉ_k(B_l). The role of <ref> is to give rise to the clear condition that defines the indicator function χ_k,l. With this, we can write X_k = ( 1/d∑_l=1^d |c_k(B_l) - n_k(B_l) |^2 χ_k,l)^1/2 , where, as a reminder, X_k √([|c_k^#(P)|^2]) . The support identification task cannot succeed if the s largest X_k do not correspond to the s largest α_k_2. From <ref>, we see that there are essentially two mechanisms that can lead to such an outcome: (i) The values c_k^#(B_l) are significantly smaller than c_k(B_l) for many measurements B_l, underselling the contribution of the α_k. This is controlled by the magnitude of the noise n_k(B_l). (ii) It cannot be ruled out that the coefficients ĉ_k'(B_l) = |c_k'(B_l) + n_k'(B_l)| for k' ∈S̅ act as adversaries against a certain index k ∈ S, collectively trying to push ĉ_k(B_l) out of the s largest entries for as many measurements B_l as possible. Despite being small in expectation, ĉ_k'(B_l) can be large for few measurements B_l, such that a coordinated adversarial effort of all indices in S̅ could mask the contributions an α_k with k ∈ S. One possibility to rule out such a case is requiring that for all measurements B_l, |ĉ_k(B_l)| > ∑_k' ∈S̅ |ĉ_k'(B_l)| . With this in mind, we can also better understand the term σ_s((B_l))_1 in the original precision requirement, which sums over contributions from indices that are mostly in S̅. However, such an adversarial coordination across multiple indices in S̅ seems highly unlikely, which warrants going beyond a worst-case analysis. In order to do so, our key insight is that the ability to perform such adversarial attacks as described above is rooted in a large variance of the magnitude of ĉ_k'(B_l) for different B_l. In an extreme case like in the above example, where there is no variation across different measurements B_l, no adversarial strategies can be employed at all. In order to capture this phenomenon, we denote the vectors (k') (c_k'(B_1), c_k'(B_2), … c_k'(B_d)) and (k') (n_k'(B_1), n_k'(B_2), …, n_k'(B_d)) and define the so-called flatness constant as β_(k')1/√(d)(k')_2/(k')_∞ and analogous β_(k') for (k'). 
The value lies in β_(k')∈ [1/√(d), 1], where β_(k')=1 is obtained for a perfectly flat (i.e., all entries are of the same magnitude) vector and β_(k')=1/√(d) for a perfectly peaked (i.e., only one entry is non-zero) vector. We further denote β_min_k' ∈S̅β_(k') and analogously β_. Now, with <ref> in place and the concept of flatness at hand to parametrize the hardness of the support identification task, we can give a more fine-grained alternative to <ref>. Choosing ϵ such that min_k ∈ S{α_k - γ_k_2 } ≥ 2 ϵ + β_+1 /β_max_k'∈S̅α_k'_2 + β_+1/β_max_k' ∈S̅γ_k'_2 guarantees correct identification of the sparse support. See <ref>. First, note that compared to <ref>, we managed to characterize the precision requirement for successful support identification purely in terms of the Hilbert-Schmidt norms of the original coefficient operators and two flatness parameters. We observe that indeed, for certain inputs, the hardness of the support identification task is determined by some measure of distance between coefficients corresponding to S and S̅. The precision requirement only depends on the smallest α_k_2 in S and the largest in S̅, with the flatness constants quantifying to what extent this picture holds. § APPLICATIONS The definitions and algorithms we propose in this work may appear quite abstract. In this section, we present two families of examples that showcase the workings of the approach. As a first family of examples, we approach the problem of predicting all ℓ-local reduced density matrices of an n-qubit state evolving under a Hamiltonian with an equally spaced spectrum, for which the established framework is particularly easily applicable. Within this first family of examples, we mainly consider Hamiltonians capturing settings in nuclear magnetic resonance (NMR) <cit.> for all times t, under the promise that the initial state has a sub-Gaussian energy spectrum, which will in turn induce a sparse structure on the time dependence. We also point out, however, that a similar description is applicable to quantum many-body scars <cit.>, where perfect revivals of quantum states imply eigenstates with energies placed in an equally spaced ladder. As a second family of examples, we discuss how to similarly predict all ℓ-local reduced density matrices of a fermionic system undergoing a fermionic Gaussian time evolution, which is a non-interacting time evolution. This second example, in particular, shows that orthonormal function bases beyond the Fourier basis can be very useful in the tomography of parametrized quantum states. §.§ Spectrally equally spaced Hamiltonians We start by discussing in detail notions of tomography of parametrized quantum states using the example of systems in NMR. We denote the NMR Hamiltonian with H and assume its energies are in the range [0, E_max]. If we let a state ρ_0 evolve under such a Hamiltonian, we obtain a parametrized quantum state ρ(t) = e^-i H tρ_0 e^i H t. NMR Hamiltonians have integer-valued energies, which means we can write H = ∑_e=0^E_max e Π_e, where Π_e are projectors onto the eigenspaces associated to energy e. Inserting this shows that we can expand ρ(t) into a regular Fourier series ρ(t) = ∑_e=0^E_max∑_e'=0^E_max e^-i (e - e') tΠ_e ρ_0 Π_e' = ∑_k=-E_max^E_maxα_k exp( i k t), where the operators α_k can be obtained by grouping all terms such that e' - e = k. In the language of our paper, the Fourier basis functions φ_k(t) = exp(i k t) constitute a bounded orthonormal system over the domain = [0, 2π] with orthogonality measure μ(t) = 1/(2π) and constant K=1.
While there are in principle infinitely many basis functions, the fact that the energy of the Hamiltonian is bounded by E_max means it is sufficient to consider Λ = { - E_max, -E_max + 1, …, E_max-1, E_max}. Having established the structure of the parametrization of the time-evolved state, we now turn to the objective. We wish to recover all ℓ-local reduced density matrices of the parametrized quantum state. This corresponds to the semi-norm induced by the set of observables _ℓ = { O : ‖ O ‖_∞≤ 1, O is ℓ-local}. To this end, we will employ the tomographic procedure based on local Clifford shadows of Ref. <cit.>, which has a sample complexity T(ϵ, δ, n) = O( ℓ 12^ℓ/ϵ^2logn/δ), as already stated in <ref>. With the objective clear, we now have to analyze the sparsity of the parametrized state with respect to the ℓ-local trace norm. We now assume that the energy spectrum of ρ_0 is sub-Gaussian, which means that there exist constants τ, σ and e_0 such that [ ρ_0 Π_e ] ≤τexp(-1/2(e - e_0)^2/σ^2). This assumption naturally induces sparsity, as we only expect that energies whose distance to e_0 is on the order of the standard deviation σ will contribute significantly to the time evolution. In <ref> (<ref>), we show that the sub-Gaussian assumption additionally allows us to control the ℓ^1 sparsity defect γ_ℓ^1. Explicitly, we can consider the sparse support S = {-R, -R+1, …, R - 1, R} for R = Õ( σ√(n + logτ/γ)) and guarantee that γ_ℓ^1≤γ. With this result at hand, we can apply <ref> with the guarantees of <ref> to obtain an efficient tomography scheme for ρ(·). Note that in this particular example, we can forgo the support identification procedure, because we can infer the sparse support from the properties of the initial state. Bringing all ingredients together, we obtain a tomographic procedure whose output ρ̂(·) fulfills the guarantee [ ρ(·) - ρ̂(·)__ℓ, 2 ≤ 2 γ + ϵ] ≥ 1 - δ by performing local Clifford shadow tomography at M = Õ( σ√(n + logτ/γ)logE_max/δ) uniformly sampled values of t. This is achieved by using <ref> with Δ = 1, bounding γ_ℓ^2≤γ_ℓ^1 and using s = 2R + 1 with R given in <ref>. At every parameter value, a number of samples T = O( ℓ 12^ℓ/ϵ^2logn M/δ) is used, resulting in a total number of samples of N(ϵ, δ, n, ℓ) = Õ( ℓ 12^ℓ/ϵ^2σ√(n + logτ/γ)logE_max/δ). We stress that spectrally equally spaced Hamiltonians naturally emerge in the context of quantum many-body scars. Signatures of many-body scarring were first observed when a system of 51 Rydberg atoms quenched out of equilibrium did not show a relaxation to a thermal state, but instead featured distinct recurrences <cit.>, stimulating a body of work on quantum many-body scars <cit.>. In Ref. <cit.>, the converse question has been asked, namely whether recurrences would imply many-body scars, presenting an answer in the affirmative. If recurrences arise, a large number of eigenstates must exist with entanglement features that are typical for quantum many-body scars and with energy spectral values that are equally spaced. For systems prepared in this suitable subspace, all of the above discussion applies. §.§ Gaussian fermionic time evolution As a second example, we consider the time evolution of fermions under an arbitrary fermionic Gaussian Hamiltonian. In particular, we look at n fermionic modes. In the Majorana representation, we construct 2n Hermitian operators γ_i that pairwise anti-commute, {γ_i, γ_j } = δ_i,j. The first n operators can be seen as position and the second n operators as momentum operators.
In this representation, a Hamiltonian is given by <cit.> H = i ∑_i=1^n ∑_j = 1^n F_i,jγ_i γ_j, where F is a real skew-symmetric matrix. We denote J max_i,j |F_i,j| as the interaction strength. These kinds of Hamiltonians create non-interacting dynamics. This manifests as the fact that we can bring the matrix F to a normal form of n non-interacting fermionic modes by performing an orthogonal transformation of the vector of Majorana operators, which in turn can be physically realized by a fermionic Gaussian unitary transformation of the n modes of the system. Practically speaking, we can find a new set of Majorana operators γ̃= O γ such that H = i ∑_i=1^n λ_i (γ̃_i γ̃_n+i - γ̃_n+iγ̃_i), where ± i λ_i are the eigenvalues of F. As promised, in this form the only Majoranas associated to the same transformed mode interact. One particular ingredient we will need shortly is the possibility to perform a fermionic operation that maps H to -H to emulate evolution into the negative time direction. This is easily achieved by the orthogonal transformation = [ 0; 0 - ] which corresponds to an exchange of the fermionic annihilation and creation operators ã_i, ã_i^† associated to the transformed modes, which is nothing but a Pauli X transformation on the transformed modes. Hence, this transformation can be realized by applying the unitary U_O implementing the transformation O, X^⊗ n, and then the inverse U_O^† γ̃ ↦γ̃, H ↦ U_O^† X^⊗ n U_O H U_O^† X^⊗ n U_O = -H. We note that for a particle-preserving fermionic Hamiltonian, there are no terms that mix position and momentum operators in the first place, and as a consequence the time reversal can be emulated without the additional unitary U_O. From the normal form of the Hamiltonian we can also deduce its spectrum completely, as all possible eigenvalues of H are given by the possible sums of the ±λ_i, spec(H) = {±λ_1 ±λ_2 …±λ_n }, which especially means that ‖ H ‖_∞ = ‖ F ‖_1 and that every eigenvalue can be associated to a vector ∈{-1,1}^n. We will therefore use the expansion H = ∑_∈{ -1,1}^nλ_Π_. The time evolution of a fermionic initial state ρ_0 under the aforementioned Hamiltonian is then given by ρ(t) = e^-i H tρ_0 e^i H t = ∑_∈{ -1,1}^n ∑_∈{ -1,1}^n e^-i (e_ - e_) tΠ_ρ_0 Π_ = ∑_∈{ -1,1}^n ∑_∈{ -1,1}^n e^-i ω_, tΠ_ρ_0 Π_ . Here, we note that the maximum frequency is bounded as |ω_, | ≤ω_max = 2 ‖ H ‖_∞ = 2‖ F ‖_1 ≤ 2 n^2 J, where we have recalled J = max_i,j |F_i,j|. We can well-approximate this time evolution over the interval [-1, 1] using Chebychev polynomials. The Chebychev polynomials are defined for k ≥ 0 and are given by T_k(t) = cos ( k arccos t). They are the unique polynomials fulfilling the relation T_k(cos t) = cos kt and fulfill the orthogonality relation ∫_-1^1 t T_k(t) T_l(t)/√(1-t^2) = δ_kl×π if k=0, π/2 if k ≥ 1. To obtain a system of orthonormal basis functions in the sense of this work, we define the measure μ̃(t) = 1/π1/√(1-t^2) and the rescaled Chebychev polynomials T_k(.) that satisfy T̃_k(t) = ξ_k T_k(t), where ξ_k = 1 if k=0 and ξ_k = √(2) otherwise. This ONB is bounded with constant K = √(2). We can express the time evolution in terms of this orthonormal function basis as ρ(t) = ∑_k=0^∞α̃_k T̃_k(t). We can find the coefficients α̃_k by computing the Chebychev expansion of the Fourier basis function e^-i ω t. As we establish in <ref> (<ref>), we have that over the interval [-1, 1] e^-i ω t = ∑_k=0^∞ i^k ξ_k J_k(ω) T̃_k(t), where J_k is the k-th Bessel function of the first kind. 
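The decay of this expansion is easy to check numerically. The following sketch (scipy and numpy only; it compares only the magnitudes of the coefficients, leaving the phase convention of ξ_k to <ref>, and ω = 12 is an arbitrary stand-in for a transition frequency ω_,) computes the Chebychev coefficients of e^-i ω t by Chebychev-Gauss quadrature and compares them against Bessel functions of the first kind.

import numpy as np
from scipy.special import jv

def cheb_coeffs(f, kmax, n_nodes=400):
    """Chebychev coefficients c_k of f on [-1, 1], from the orthogonality integral
    c_k ~ int f(t) T_k(t) / sqrt(1 - t^2) dt via Chebychev-Gauss quadrature."""
    theta = np.pi * (np.arange(n_nodes) + 0.5) / n_nodes
    fvals = f(np.cos(theta))
    orders = np.arange(kmax + 1)
    c = 2.0 / n_nodes * np.cos(np.outer(orders, theta)) @ fvals
    c[0] /= 2.0
    return c

omega, kmax = 12.0, 60
c = cheb_coeffs(lambda t: np.exp(-1j * omega * t), kmax)

# the coefficient magnitudes agree with Bessel functions of the first kind
orders = np.arange(1, kmax + 1)
assert np.allclose(np.abs(c[1:]), 2.0 * np.abs(jv(orders, omega)), atol=1e-10)

# beyond k of order e*omega/2 (about 16 here) the coefficients drop off rapidly
print("|c_k| at k = 20, 30, 40:", np.abs(c[[20, 30, 40]]))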
Inserting this expression and grouping by T̃_k yields the following formula α̃_k = i^k ξ_k ∑_∈{ 0,1}^n ∑_∈{ 0,1}^n J_k(ω_,) Π_ρ_0 Π_. To apply our algorithm, we need to specify a set S of coefficients (in this case indices of the relevant Chebychev polynomials) and bound the ℓ^1 sparsity defect relative to the norm induced by the set of observables of the used tomographic procedure. As in the preceding example, we are able to construct S explicitly, such that we do not need the support identification procedure. Again, we care about ℓ-local observables. In this case, we can bound the sparsity defect in the induced norm by the sum of the trace norm of the coefficients α̃_k for k ∈S̅ as γ_ℓ^1 = _S̅__ℓ,1≤∑_k ∈S̅α̃_k __ℓ≤∑_k ∈S̅‖α̃_k ‖_1 . We can bound the magnitude of the coefficients α̃_k by exploiting the following upper bound on the Bessel function of the first kind (see <ref> of <ref>) |J_k(ω)| ≤(eω/2k)^k, which implies that for m > eω/2, the coefficients decay at least exponentially. Formally, we have that ‖α̃_k ‖_1 ≤ |ξ_k| ∑_∈{ 0,1}^n ∑_∈{ 0,1}^n |J_k(ω_,)| ‖Π_ρ_0 Π_‖_1 ≤ 2^2n+1/2(e ω_max/2 k)^k, where we have upper-bounded ω≤ω_max and |ξ_k| ≤√(2). Our plan is to approximate ρ(t) by only including terms with k up to a cutoff R, i.e. by using the set S = {0, 1, …, R}. Let us now set R = ⌈ e ω_max⌉ + R' for R' ≥ 0, which guarantees that for all k > R, e ω_max / 2 k ≤ 1/2. In this case, ∑_k=R+1^∞‖α̃_k ‖_1 ≤ 2^2n+1/2∑_k=R+1^∞ 2^-k = 2^2n+1/2 2^-R-1∑_k=0^∞ 2^-k = 2^2n +1/2 -R where we have evaluated the geometric series for the last step. To achieve a certain value γ for the tail, we need R ≥ 2n + 1/2 + log_2 1/γ. This compares favorably to the minimal cutoff ⌈ e ω_max⌉, which we can bound as ⌈ 2en^2 J ⌉, which is O(n^2 J). This means that R = max{ O( n + log1/γ), O( n^2 J ) } ≤ O( n + n^2 J + log1/γ) is sufficient to achieve the desired sparsity defect γ_ℓ^1≤γ. Additionally to finding a set S of coefficients that approximates ρ(t) to small error, we also need to truncate the infinite series of α_k coefficients at some k = D to manipulate it in the memory of a computer. The preceding analysis suggests that this can be done at logarithmic cost in the inverse error of the finite-size approximation. As the factor D only enters into the sample complexity of the analysis as log D, we have a doubly-logarithmic dependence on the finite-size approximation error which we can safely neglect in the following. Having established that a low number of Chebychev basis functions are sufficient to well-approximate the fermionic Gaussian time evolution, we are left to combine this with a tomographic procedure to obtain an actual tomography protocol. As in the previous example, we will consider the estimation of all ℓ-local observables with bounded operator norm, which corresponds to an ℓ-local trace distance. A shadow tomography algorithm for this task tailored to fermionic systems was proposed in Ref. <cit.> and has a sample complexity of T(ϵ, δ, n) = O(n^ℓℓ^3/2/ϵ^2logn/δ), where the factor n^ℓ comes from the binomial coefficients n choose ℓ for constant ℓ. Exploiting the bound γ_ℓ^2≤γ_ℓ^1≤γ and looking at the guarantees of <ref>, we have a tomographic procedure whose output ρ̂(·) fulfills the guarantee [ ρ(·) - ρ̂(·)__ℓ, 2 ≤ 2 γ + ϵ] ≥ 1 - δ by performing local fermionic shadow tomography at M = Õ( (n + n^2 J + log1/γ) logD/δ) values of t sampled according to μ̃. If t < 0, we apply the unitary transformation of <ref> and evolve for time |t| to emulate a time evolution in the negative direction. 
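Sampling from the Chebychev orthogonality measure is a one-liner, since it is the arcsine distribution. A sketch of this parameter-sampling step (including the optional rescaling of the time parameter discussed at the end of this subsection; the choice of M and T is arbitrary) is:

import numpy as np

def sample_chebychev_measure(M, T=1.0, rng=None):
    """Draw M values of t from mu(t) = 1 / (pi * sqrt(1 - t^2)) on [-1, 1]: if u is uniform
    on [0, 1), then cos(pi * u) has exactly this distribution. The optional factor T
    rescales the samples to [-T, T]."""
    rng = np.random.default_rng() if rng is None else rng
    return T * np.cos(np.pi * rng.uniform(0.0, 1.0, size=M))

ts = sample_chebychev_measure(M=1000, rng=np.random.default_rng(7))
print(ts.min(), ts.max())        # samples accumulate near the endpoints, as expected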
At every parameter value, a number of samples T = O(n^ℓℓ^3/2/ϵ^2logn M /δ) are used, resulting – for constant J – in a total number of samples of N(ϵ, δ, n, ℓ) = Õ( n^ℓ + 2 J ℓ^3/2/ϵ^2logn M D/γlog^2 1/δ). In this example, it was possible to construct S explicitly. However, this is not fundamental to our techniques, as the above sample complexity remain unchanged even if S is unknown and needs to be determined with the support identification procedure detailed in <ref>. Finally, we emphasize that the same algorithm allows us to recover the time evolution in the interval [-T, T] instead of [-1, 1] by simply rescaling the time parameter appropriately, at the expense of increasing the interaction strength by a factor of T, yielding a total sample complexity of N(ϵ, δ, n, ℓ) = Õ( n^ℓ + 2 J T ℓ^3/2/ϵ^2logn M D/γlog^2 1/δ). § TOMOGRAPHY OF PARAMETRIZED QUANTUM CHANNELS While we have formulated both our framework and our algorithm for parametrized quantum states, our strategies equally well apply to parametrized quantum channels. These arise very natural, for example in the context of quantum metrology, where the knowledge of the parameter-encoding evolution is crucial to devise good sensing protocols. In this section, we outline how the different parts of our work are generalized. §.§ Parametrized quantum channels A parametrized quantum channel is a function from the set of parameters to the set of quantum channels from system _n to _m, which we denote by →_n → m. This means that for all ∈, () ∈_n → m needs to be a completely positive trace-preserving map. In other words, (·) is a parametrized superoperator. Similar to a parametrized quantum state, we can expand a parametrized quantum channel in terms of an orthonormal function basis {φ_k(·) }_k as () = ∑_k_k φ_k(), where the coefficients of the expansions are now superoperator-valued and given by _k ∫μ() φ^*_k() (), where μ is the orthogonality measure of the function basis. §.§ Tomographic procedures We generalize the notion of a tomographic procedure from states to channels by generalizing the semi-norm induced by a set of observables. In the case of quantum channels, we have an additional degree of freedom next to the observable, namely the choice of the state that the quantum channel takes as input. Also, it is now necessary to include an ancillary system of the same dimension as the input. We can thus define a semi-norm for channels by a set of pairs of input states and observables as ‖‖_sup_(ρ, O) ∈ | [ (⊗)[ρ] O]|. Another way of looking at this definition which is closer in form to the state definition is obtained by considering quantum combs, also known as quantum process tensors <cit.>. In this language, the combination of state preparation and the measurement of a POVM forms a single object called a tester <cit.>. Our definition slightly generalizes this idea by weighing the different outcomes of the POVM to obtain an expectation value. If we use the set constructed by combining all possible input states on the joint system of input and ancilla with all possible joint observables ‖ O ‖_∞≤ 1, the induced semi-norm will be equal to the diamond norm ‖‖_♢ which is the natural generalization of the trace norm on the level of quantum channels. Even more so than in the state case, performing tomography with respect to the diamond norm is very resource intensive. Efficient recovery of a quantum channel is, nevertheless, possible under relaxed assumptions. For example, the method of Ref. 
<cit.> guarantees an efficient approximation if the states and observables in are restricted to Pauli-sparse states and observables, i.e., whose Pauli expansion has only few terms. Another way to relax the requirements of channel tomography is to replace the joint supremum over inputs and observables with a combination of supremum and expected value. An example for this is Ref. <cit.>, which shows that channels can be learned efficiently when we only consider local observables and take the expectation _ρ{sup_O local |[ [ρ] O ] |} over Haar random single-qubit states on the input. As a parametrized superoperator becomes a parametrized scalar function upon fixing a combination of input and observable, the induced L^p semi-norm of parametrized superoperators and the induced ℓ^p semi-norm of vectors of superoperators generalize in the natural ways and the Parseval Theorem also carries over. As the notions of induced semi-norms generalize directly, so does the notion of an approximately sparse parametrized quantum channel. §.§ Algorithm Our algorithm itself is agnostic to the underlying quantum object, and hence can be applied in the exact same way as in the case of a parametrized quantum state. The only thing that changes are the guarantees and the sparsity constants γ_ℓ^1 and γ_ℓ^2, which are now given relative to the semi-norm for channels. §.§ Support identification The support identification algorithm we outlined in this work can also be generalized in a relatively straightforward manner. In it, we have used the Hilbert-Schmidt 2-norm as a proxy for the magnitude of the coefficients α_k. The natural generalization of this is to use the Hilbert-Schmidt 2-norm of the superoperator-valued coefficients A_k. It can be obtained by preparing a maximally entangled state |Ω⟩ between the input and the ancillary register, then applying the quantum channel and subsequently measuring the expectation value of a Pauli word P ⊗ Q. This gives us access to the entries of the Pauli transfer matrix of given by [ (⊗_k)[|Ω⟩⟨Ω |] (P ⊗ Q)] = 2^-n[Q _k [P]]. Choosing the Pauli operators uniformly at random then gives us an estimate of the coefficients, from which we can construct an estimator as in <ref>. § A SHORT PRACTITIONERS GUIDE For convenience, we give a more intuitive summary of our results and guidance for their practical application. Tomography is the task of constructing a classical representation ρ̂(·) of a parametrized quantum state ρ(·) which should be sufficiently close for all possible values of the parameters ∈. Our method is based on the simple realization that we can always expand the parameter dependence in terms of a function basis {φ_k(·) } ρ() = ∑_k ∈Λα_k φ_k() for →. The choice of the function basis itself is crucial and will necessarily depend on the kind of parametrized quantum state we deal with. In this work, we considered two very different examples with very different properties, namely the Fourier basis on one hand and the Chebychev polynomials on the other hand. Note that we always have to assume that the state can be approximated with a finite number of basis functions k ∈Λ as we have to hold a vector of size D = |Λ| in memory. It comes to no great surprise that for an arbitrary parametrized quantum state, the sample complexity must scale as Ω(D) as we have to recover all the coefficients {α_k }. We show that a randomized protocol can achieve this sample complexity in <ref> with favorable overheads. 
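The workflow summarized above can be sketched end to end on a toy example. In the snippet below the single-qubit family ρ(θ) = (𝕀 + cosθ X + sinθ Y)/2, the Fourier basis and all names are illustrative choices of ours, and exact states stand in for the output of the tomographic subroutine; the point is only to display the three steps of sampling parameter values, collecting matrix-valued data, and recovering the operator-valued coefficients.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2, dtype=complex)

def rho(theta):
    # parameter dependence is sparse in the Fourier basis: only k in {0, +1, -1}
    return 0.5 * (I2 + np.cos(theta) * X + np.sin(theta) * Y)

# Fourier basis e^{ik theta}, k = -K..K, orthonormal w.r.t. d(theta)/(2 pi)
K = 5
ks = np.arange(-K, K + 1)
M = 24
thetas = np.random.default_rng(1).uniform(0, 2 * np.pi, size=M)
A = np.exp(1j * np.outer(thetas, ks))            # M x D matrix of basis values

# "measurements": one (here exact) state per sampled parameter value
data = np.stack([rho(t) for t in thetas])        # M x 2 x 2

# recover the operator-valued coefficients by least squares; in the sparse
# setting this is the role played by the compressed-sensing recovery step
coeffs = np.linalg.lstsq(A, data.reshape(M, -1), rcond=None)[0].reshape(len(ks), 2, 2)

def rho_hat(theta):
    return np.tensordot(np.exp(1j * ks * theta), coeffs, axes=1)

errs = [np.linalg.norm(rho_hat(t) - rho(t)) for t in np.linspace(0, 2 * np.pi, 50)]
print("max reconstruction error:", max(errs))
print(np.round([np.linalg.norm(c) for c in coeffs], 3))   # coefficient magnitudes

Because the parameter dependence is a degree-one trigonometric polynomial, the recovery is exact up to machine precision, and the printed coefficient norms directly reveal the sparse support {-1, 0, 1}.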
Interestingly, we can do much better if the parameter dependence of the state is sparse, i.e. there exists a set S ⊂Λ of small cardinality s = |S| such that ρ_S() = ∑_k ∈ Sα_k φ_k() approximates ρ() well. Sparsity of the time dependence is a property that exists relative to the chosen function basis {φ_k }, which means that a state that is sparse with respect to one basis will usually not be sparse with respect to another basis. The flip side of this is that there often exist bases that allow for a sparse representation of a specific parametrized quantum state ρ(·). The challenge lies in finding a function basis from first principles that allows for a sparse representation of a whole family of parametrized quantum states, something that we exemplified in our two example applications. Given a sparse state, we present a protocol that performs tomography of the parametrized quantum state with a sample complexity of Õ(s log D), see <ref>, which means that the savings in samples for tomography of a parametrized quantum state can be exponentially large. A crucial property of our proposed protocols is that they use a state tomography scheme as a black box. This allows us to switch between different schemes of full or approximate tomography (like shadow tomography), while still retaining the guarantees of the tomographic procedure in a sensible way over the full parameter space . It is important to mention a few caveats and details that are relevant in practice. When we say efficient, we generally mean sample efficient as well as computationally efficient. However, when choosing Λ it should be noted that our algorithm involves computations on the M × D sized matrix A and scales linearly in D in computational resources <cit.>. This contrasts with the sample complexity which scales only logarithmic in D. When finding an orthonormal basis, care needs to be taken that the constant K=φ_k_∞ that bounds the function basis does not scale unfavorably, as this could negate some of the gains we get from our randomized procedures. § FUTURE DIRECTIONS Our work opens up many exciting directions of research into the better characterization and understanding of parametrized quantum circuits. In this section, we sketch some of these future directions that may emerge from our work. §.§ Applications Key to the success of our methods is knowledge of an orthonormal basis in which the parametrized quantum state can be expressed with few basis elements. The main challenge in applying our techniques lies in identifying such good basis sets for problems of practical interest. Inspiration here might be taken from the vast literature on basis sets for computational methods in quantum chemistry and solid state physics, for which a body of heuristic knowledge is available <cit.>. We expect that often, such basis sets will be found by observing what works in practice and will, therefore, be of a heuristic nature, not necessarily accompanied by formal proofs. However, we consider the rigorous study of basis sets and their approximation quality for parametrized quantum states to be of equal interest and as a question of fundamental relevance in its own right. An interesting way of realizing compressed sensing approaches that work well in practice is to employ overcomplete bases or use dictionary methods that combine different bases together to induce a sparse representation of a signal <cit.>. Taking the same avenue to characterize parametrized quantum systems in practice should prove fruitful. 
§.§ Algorithms The plug and play nature of our framework is well suited for the incorporation of other compressed sensing techniques, because large parts of the algorithm are agnostic to the actual technique used. However, as seen in <ref>, the extension to the quantum case brings its own difficulties and pitfalls and is not guaranteed to be straight forward. There are many advanced techniques to perform compressed sensing in different scenarios that one could presumably integrate with our recovery algorithm. The aforementioned approach of using frames and overcomplete dictionaries allows to better recover signals that do not have a sparse expansion in a known orthonormal basis, a likely feature in some practical applications. Another promising approach allows one to replace the sampling from the orthogonality measure of the orthonormal basis {μ(.)}, with a more measure that is more accessible in an experiment <cit.>. Beyond harmonizing our protocols with more powerful tools from compressed sensing, there are also other directions that we find promising. As explained in <ref>, finding the optimal support S involves a minimization that is computationally intractable. We, therefore, give a different figure of merit that we argue works well in practice. Can we find support identification protocols with yet other figures of merit? Here, it is interesting to device algorithms that find the largest coefficients α_k not in Hilbert-Schmidt norm, but in another induced semi-norms ·_. On the highest level, these are ideas of model selection. At the end of the day, one will have to strike a balance between mathematical and conceptual approaches <cit.> as well as heuristic ones <cit.>. This would, presumably, also require algorithms that efficiently evaluate the semi-norm of a given operator. A further noteworthy point is that the procedure introduced in this manuscript is modular: the tomographic problem is split into a recovery problem and a support identification problem. This contrasts with the situation in traditional compressed sensing, where support identification and recovery are carried out in an integrated fashion. Can such an integrated algorithm also be given in the quantum setting? Are there potential advantages to such an approach? §.§ Tomographic tasks A reoccurring theme in the research field of quantum state tomography has been to simplify the task by focusing it on the most important properties of a state. Central to the success of shadow tomography <cit.> is the insight that most of the time, one is not interested in knowing all observables, but only a finite collection of them, which turns out the make a significant difference <cit.>. While we already include the different restrictions on the tomographic task in our figure of merit, we note that requiring that predictions can be made in the whole parameter space might be excessive in some applications. This could be because the parametrization is too complex and eludes an efficient description or because only small parts of the parameter space are relevant, for examples when performing an optimization step in a quantum neural network. Here, it is interesting to consider relaxations of the tomographic task for parametrized quantum states, i.e., a tomographic procedures with performance guarantees for a small environment around a fixed point in parameter space . 
Going in the opposite direction of relaxations, the protocol we give for tomography of parametrized quantum states solves the tomographic task relative to the norm ·_,p, where p=2. Open questions and conceivably harder are the cases p=1 and p=∞, where the latter would ensure ϵ-closeness to the original state for all points in , not in an average sense, but even in the worst case. Here, we expect that schemes efficient in the number of basis functions cannot be given without further restrictions on the parametrized quantum state. § CONCLUSIONS Parametrized quantum states are of fundamental interest in quantum theory. They occur in many different areas, including quantum optimization and machine learning, dynamical quantum systems, quantum metrology and quantum many body physics. Characterizing such objects well is of crucial importance. For quantum states without parameter dependence, this task is broadly known as tomography. But what is its analogue for parametrized quantum states? We address this question by first defining the notion of a tomographic procedure, which unifies existing approaches to quantum state tomography in a single framework. This is done by expressing the figures of merit of different tomography tasks in a systematic fashion: as a semi-norm induced by the set of observables targeted. We show that this framework can be naturally combined with function L^p norms such that it extends to parametrized quantum states. This way, we can rigorously define the task of performing tomography for parametrized quantum states. With this in place, we provide a first class of learning algorithms for this new task. Our algorithm combines a tomographic procedure for quantum states with a recovery algorithm from signal processing. We show that by using techniques from compressed sensing as recovery algorithms, we can exploit structure in the form of sparsity of the parameter dependence to significantly lower the number of quantum states that need to be prepared. Overall, the resulting scheme is efficient if two conditions hold: The tomographic procedure chosen as input is efficient and the parametrization admits an efficient description in terms of an orthonormal basis. In quantum information science, we further encounter parametrized evolutions, i.e., parametrized quantum channels. This happens for example in the context of quantum metrology, where knowledge of the dependence of the evolution on the parameter to be estimated is crucial to devise protocols or when error channels in quantum devices depend on adjustable parameters of the system. Analogous to the case of parametrized quantum states, one may extend notions of quantum process tomography, and define the task of tomography for parametrized quantum channels. By substituting a tomographic procedure with a protocol for process tomography as subroutine of our algorithm, we obtain a tomography scheme that applies to parametrized quantum channels. Again, the resulting scheme is efficient if both the process tomography protocol chosen as input and the description of the parametrization is efficient. To showcase the techniques introduced in this manuscript, we apply them to quantum states with sub-Gaussian energy spectrum that are parametrized by the time evolution under an NMR Hamiltonian. We combine a tomographic procedure based on local Clifford shadows with a compressed sensing algorithm to obtain a classical representation that recovers all ℓ-local reduced density matrices from the original parametrized quantum state. 
There are many open questions and novel research directions associated to the tomography of parametrized quantum states. At the core, there is a need for further understanding of these states and their structural properties. To move to applications of practical interest, it is central to find orthonormal basis functions that allow for an efficient decomposition of the parametrized quantum states in question. In quantum state tomography, shadow tomography caused a change in paradigm by arguing for an alternative figure of merit, giving unexpected insights into the structure of quantum states. In the same way, it is interesting to investigate novel figures of merit for tomography of parametrized quantum states. These might be tailored with certain practical applications in mind or be motivated by purely theoretical considerations. We give a detailed survey of these and further open questions. To summarize, tomography of parametrized quantum states is a new paradigm in quantum information theory. Derived from quantum state tomography, it is a task of fundamental interest in its own right. Furthermore, it is motivated by the need to characterize the class of parametrized quantum states, whose importance is only pronounced by the advent of ever more complex quantum devices. Here, we give a first treatment of this novel tomographic task, including an efficient class of algorithms to address it. We hope that our work stimulates further compressed sensing based methods for tomography of parametrized quantum states. § ACKNOWLEDGMENTS The authors wish to thank Matthias Caro and Lennart Bittel for insightful discussions. We also acknowledge early discussions with Siriu Lu. This work has been supported by the DFG (CRC 183, FOR 2724), by the BMBF (Hybrid), the BMWK (PlanQK, EniQmA), the Munich Quantum Valley (K-8), QuantERA (HQCC) and the Einstein Foundation (Einstein Research Unit on Quantum Devices). This work has also been funded by the DFG under Germany's Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689). § AUTHOR CONTRIBUTIONS F.S. has led the project. The project has been initiated by J.E. F.S. and J.J.M. have developed the algorithms and conducted their theoretical analysis. J.J.M. has supervised the project with support of J.E. All authors contributed to the development of meaningful examples and the writing of the manuscript. § PROOF OF <REF> This section is devoted to the proof of <ref> which bounds the spillover of contributions from inexact sparsity into our estimate. Before we come to the proof, we establish an auxiliary lemma based on the following result. [Proposition 6.3 of Ref. <cit.>] Let A ∈^M × D and denote with A_S = A Π_S and A_S' = A Π_S' its restriction to disjoint sets of indices S and S' of cardinality at most s. Then, ‖ A^†_S A_S'‖_∞≤Δ_2s(A). We are now ready to bound the spillover at the level of the measurement matrix in terms of its RIP constant. [Bound to spillover] Let A ∈^M × D with Δ_s(A/√(M)) < 1 and S, S' disjoint sets of cardinality at most s such that. Then, A_S^+A_S'≤Δ_2s(A/√(M))/1-Δ_2s(A/√(M)) . Due to the RIP constant being smaller than one, A_S is injective. For injective A_S, the pseudo-inverse is given as A_S^+=(A_S^† A_S)^-1 A_S^†. Then, ‖ A_S^+ A_S'‖_∞ = ‖ (A_S^† A_S)^-1 A_S^† A_S'‖_∞ ≤‖ (A_S^† A_S)^-1‖_∞‖ A_S^† A_S'‖_∞ = ‖(A_S^†/√(M)A_S/√(M))^-1‖_∞‖A_S^†/√(M)A_S'/√(M)‖_∞ ≤Δ_2s/1 - Δ_s ≤Δ_2s/1 - Δ_2s. 
The first inequality is the generic operator norm bound for products of matrices, the second equality adds the normalization to establish the restricted isometry property of A. The second inequality combines the restricted isometry property of A with <ref> and the final inequality follows from Δ_s ≤Δ_2s. We are now ready to present the proof. <ref> If _, 1≤γ_ℓ^1 and Δ_2s(A/√(M)) ≤Δ/2 ≤ 1/2, then A_S^+A_S̅_S̅_,2≤Δγ_ℓ^1 . Before we commence, we reiterate that, for a given set S and a vector of operators , _S = Π_S takes the value X_i for all entries with indices i ∈ S and else 0. For an observable O, _O ([OX_1], [OX_2], …). Then, (_S)_O takes the value [OX_i] for i ∈ S and else 0. We can partition S̅ into disjoint sets S_i such that S̅= ⋃_i S_i and |S_i|≤ |S|. A_S^+A_S̅_S̅_,2 = sup_O∈A_S^+A_S̅ (_S̅)_O_2 = sup_O∈∑_i A_S^+ A_S_i (_S_i)_O_2 ≤sup_O ∈∑_i A_S^+ A_S_i (_S_i)_O_2 ≤sup_O ∈∑_i A_S^+A_S_i_∞(_S_i)_O_2 ≤Δ_2s/1 - Δ_2ssup_O ∈∑_i (_S_i)_O_2 ≤Δ_2s/1 - Δ_2ssup_O ∈(_S_i)_O_1 = Δ_2s/1 - Δ_2s_S̅_,1 ≤Δγ_ℓ^1 . The first inequality is just the triangle inequality, the second uses the definition of the operator norm as the 2→ 2 norm. The third inequality applies <ref> and the fourth uses that _2 ≤_1 holds for all . For the last inequality, note that for x ≤ 1/2, x/1 - x≤ x . § PROOF OF <REF> [Hoeffding's inequality] Let X_1, X_2, …, X_M be a number of M random variables such that |X_i| ≤ B for all i and denote their empirical mean with μ̂∑_i X_i /M. Then, one achieves | 1/M∑_i=1^M X_i - [μ̂] |≤ϵ with probability at least 1-δ by using M ≥log( 2/δ) B^2/2ϵ^2 many samples. <ref> Using HTP to probe linear systems of the form <ref> to obtain data vectors ^#(P), one achieves | X̂_k - X_k |≤ϵ with probability at least 1-δ by using L ≥log( 2D/δ) (1 + κ)^2/2 ϵ^4 data vectors. From <ref>, each term in the empirical mean X̂_k √(1/L∑_l=1^L |c_k^#(P_l)|^2) is bounded as |c_k^#(P)|^2 ≤ (1 + κ)^2. Thus, we can apply <ref> (Hoeffding's inequality) to the term 1/L∑_l=1^L |c_k^#(P_l)|^2. As |√(x)-√(x±ϵ)| ≤√(ϵ), we need a precision of ϵ^2 to ensure an error of at most ϵ for |X̂_k - X_k|. Estimating all D empirical means X̂_k to failure probability at most δ/D and taking the union bound gives the desired result. § PROOF OF <REF> <ref> Choosing ϵ such that min_k ∈ S{α_k - γ_k_2 } - (1 + 1/β_)max_k' ∈S̅α_k'_2 - (1 + 1/β_)max_k' ∈S̅γ_k'_2 ≥ 2 ϵ guarantees correct identification of the sparse support. The identification of the sparse support succeeds if min_k ∈ S X_k - max_k' ∈S̅ X_k'≥ 2 ϵ . We, therefore, derive an operationally meaningful lower bound to min_k ∈ S X_k and an upper bound to max_k' ∈S̅ X_k'. Starting with the latter, we begin by noting that there is some family of indicator functions χ_k',l such that √([|c_k'^#(P)|^2]) = ( 1/d∑_l=1^d |c_k'(B_l) + n_k'(B_l)|^2 χ_k',l)^1/2 . From there, we have √([|c_k'^#(P)|^2]) = ( 1/d∑_l=1^d |c_k'(B_l) + n_k'(B_l)|^2 χ_k',l)^1/2 ≤( 1/d∑_l=1^d |c_k'(B_l) + n_k'(B_l)|^2 )^1/2 ≤( 1/d∑_l=1^d |c_k'(B_l)|^2 )^1/2 + ( 1/d∑_l=1^d |n_k'(B_l)|^2 )^1/2 = α_k'_2 + γ_k'_2 . In the second step, we have used the triangle inequality. For the lower bound, we similarly start by expressing the expectation value with an indicator function. √([|c_k^#(P)|^2]) = ( 1/d∑_l=1^d |c_k(B_l) + n_k(B_l)|^2 χ_k,l)^1/2 , where χ_k,lχ(|c_k(B_l)+n_k(B_l)|>max_k' ∈S̅ |c_k'(B_l) + n_k'(B_l)| ) . 
From there, we have √([|c_k^#(P)|^2]) = ( 1/d∑_l=1^d |c_k(B_l) + n_k(B_l)|^2 χ_k,l)^1/2 ≥( 1/d∑_l=1^d |c_k(B_l) + n_k(B_l)|^2 - max_k' ∈S̅|c_k'(B_l) + n_k'(B_l)|^2 )^1/2 ≥( 1/d∑_l=1^d |c_k(B_l) + n_k(B_l)|^2 - |max_k' ∈S̅(k')_∞ + max_k' ∈S̅(k')_∞|^2 )^1/2 ≥( 1/d∑_l=1^d |c_k(B_l) + n_k(B_l)|^2 )^1/2 - ( 1/d∑_l=1^d |max_k' ∈S̅(k')_∞ + max_k' ∈S̅(k')_∞|^2 )^1/2 ≥α_k_2 - γ_k_2 - max_k' ∈S̅(k')_∞ - max_k' ∈S̅(k')_∞ ≥α_k_2 - γ_k_2 - max_k' ∈S̅1/β_√(d)(k')_2 - max_k' ∈S̅1/β_√(d)(k')_2 = α_k_2 - γ_k_2 - max_k' ∈S̅1/β_α_k'_2 - max_k' ∈S̅1/β_γ_k'_2 . For the first step, note that in case the indicator function χ_k,l is zero, by subtracting we get something lower than zero. For the fifth inequality, we use the definition of the flatness constant. By combining the lower and upper bound derived here and taking minima and maxima where appropriate, the desired claim follows. § SPARSITY OF TIME-EVOLUTION OF STATES WITH SUB-GAUSSIAN ENERGIES In this section, we want to establish rigorous constraints on the operator-valued coefficients α_k of the time evolution of a state ρ_0 under an NMR Hamiltonian H = ∑_e = 0^E_max e Π_e, ρ(t) = ∑_k = - E_max^E_maxα_k e^i k t, under the assumption of a sub-Gaussian energy spectrum of ρ_0, i.e., that [ρ_0 Π_e] ≤τexp(-1/2(e-e_0)^2/σ^2) for some constants τ, σ and e_0. To do so, we first establish an auxiliary lemma. Let Π and Π' denote orthogonal projectors. Then, for any quantum state ρ, we have that ‖ΠρΠ' ‖_1 ≤√( r ‖ΠρΠ‖_1‖Π' ρΠ' ‖_1), where r = min{rank(Π), rank(Π') }. The projector Π + Π' defines a principal submatrix of ρ given by [ ΠρΠ ΠρΠ'; Π' ρΠ Π' ρΠ' ][ A X; X^† B ], where the entries of the matrix are understood to be meant on their support. Principal submatrices of positive semi-definite operators are again positive semidefinite, which follows, for example, from the Cauchy interlacing theorem <cit.>. The positive semi-definiteness of a block-matrix is equivalent to the positive semi-definiteness of the Schur complement, hence A ≥ X B^-1 X^†. This implies that [A] ≥[ X B^-1 X^† ] = [ X^† X B^-1] ≥[X^† X] λ_min(B^-1). As A ≥ 0 the trace is identical to the trace norm and [X^† X] = ‖ X ‖_2^2 with the Schatten-2 or Hilbert-Schmidt norm, this rearranges to ‖ X ‖_2 ≤√(‖ A ‖_1 ‖ B ‖_∞) ≤√(‖ A ‖_1 ‖ B ‖_1). Now, we use that ‖ X ‖_2 ≥1/√(rank(X))‖ X ‖_1 and rearrange to conclude the statement from the fact that the rank of X it at most the minimal rank between the two projectors Π and Π'. Equipped with this lemma, we can now prove a bound on the γ_ℓ^1 sparsity defect with respect to the ℓ-local trace norm induced by the set _ℓ = { O : ‖ O ‖_∞≤ 1, O is ℓ-local}, which we will in the proof control with the regular trace norm. Let ρ_0 be a state sub-Gaussian energy distribution relative to an NMR Hamiltonian as in <ref>. Then, we can guarantee that the parametrized state ρ(·) has sparsity defect γ_ℓ^1≤γ with respect to the support S = {-R, -R + 1, …, R-1, R} and the local trace norm if R = Õ( σ√(n + logτ/γ)). Let us fix a sparse support S = { -R, -R+1, …, R-1, R} with R≥2. In this case, we have that γ_ℓ^1 ≤∑_|k| > R‖α_k ‖__ℓ ≤∑_|k| > R‖α_k ‖_1 ≤∑_|k| > R‖∑_e=-E_max^E_max-kΠ_e ρ_0 Π_e+k‖_1 ≤∑_|k| > R∑_e=-E_max^E_max-k‖Π_e ρ_0 Π_e+k‖_1 ≤∑_|k| > R∑_e=-E_max^E_max-k√(2^n ‖Π_e ρ_0 Π_e‖_1‖Π_e+kρ_0 Π_e+k‖_1) ≤∑_|k| > R∑_e=-E_max^E_max-k√(2^n τ^2 exp(-1/2(e-e_0)^2 + (e-(e_0-k))^2/σ^2)) = 2^n/2τ∑_|k| > R∑_e=-E_max^E_max-kexp(-1/2(e-e_0)^2 + (e-(e_0-k))^2/2σ^2) ≤ 2^n/2τ∑_|k| > R∑_e=-∞^∞exp(-1/2(e-e_0)^2 + (e-(e_0-k))^2/2σ^2). 
The first inequality uses the triangle inequality to bound γ_ℓ^1. The second inequality bounds the ℓ-local trace norm with the regular trace norm. The third inequality uses the definition of the coefficients α_k given in <ref>. The fourth inequality bounds the norm of the α_k using the triangle inequality. The fifth inequality applies <ref> with the trivial rank bound r ≤ 2^n. The sixth inequality then applies the sub-Gaussian energy assumption. The first equality takes the square root and the final inequality bounds the sum over e by letting E_max→∞. We will now bound the sum over the Gaussian terms using the integral upper bound for monotonic functions. To this end, we split the sum and can simplify by exploiting the evident symmetries: γ_ℓ^1 ≤ 2^n/2τ 2 ∑_k > R( ∑_e=-∞^e_0 - k + ∑_e = e_0 - k^e_0 - k/2 + ∑_e = e_0 - k/2^e_0 + ∑_e = e_0^∞) exp(-1/2(e-e_0)^2 + (e-(e_0-k))^2/2σ^2) ≤ 2^n/2τ 4 ∑_k > R(∑_e = e_0 - ⌊ k/2 ⌋^e_0 + ∑_e = e_0^∞) exp(-1/2(e-e_0)^2 + (e-(e_0-k))^2/2σ^2). In the first line, it does not matter for uneven k to which of the two sums we attribute ⌊ k/2 ⌋. For the first sum, the function is monotonically increasing, for the second it is monotonically decreasing. Hence, γ_ℓ^1 ≤ 2^n/2τ 4 ∑_k > R(∫_e_0 - ⌊ k/2 ⌋^e_0 + 1 e + ∫_e = e_0-1^∞ e ) exp(-1/2(e-e_0)^2 + (e-(e_0-k))^2/2σ^2). The indefinite integral on the right evaluates to ∫ e exp(-1/2(e-e_0)^2 + (e-(e_0-k))^2/2σ^2) = √(π/2)σexp(-k^2/8 σ^2) Erf( 2(e-e_0) + k/√(8)σ), and hence γ_ℓ^1 ≤ 2^n/2τ 4 ∑_k > R√(π/2)σexp(-k^2/8 σ^2)( 1 + Erf( k+2/√(8)σ) - Erf( k-2/√(8)σ) - Erf( k - 2⌊ k/2 ⌋/√(8)σ) ). As |Erf(x)| ≤ 1, we can bound the term with the error functions by 4 and obtain γ_ℓ^1 ≤ 16 2^n/2τ√(π/2)σ∑_k > Rexp(-k^2/8 σ^2) ≤ 16 2^n/2τ√(π/2)σ∫_R-1^∞ k exp(-k^2/8 σ^2) = 16 2^n/2τπσ^2 ( 1- Erf(R-1/√(8)σ) ), where we have again used the integral upper bound for monotonically decreasing sums. Applying the standard inequality 1-Erf(x) ≤ e^-x^2, we arrive at our end result γ_ℓ^1 ≤ 16 2^n/2τπσ^2 exp( - (R-1)^2/8 σ^2). Now, if we wish to achieve a certain value γ_ℓ^1≤γ, we have to have R ≥ 1 + √(8 σ^2 log2^n/2 + 4τπσ^2/γ) which means R = Õ( σ√(n + logτ/γ)) is sufficient to achieve a certain target sparsity defect as claimed. § CHEBYCHEV APPROXIMATION OF FREE FERMIONIC TIME EVOLUTION We consider the problem of approximating the time evolution of free fermions ρ(t) for t ∈ [-1,1]. In this case, the Chebychev polynomials form an appropriate set of basis functions. They are the unique polynomials satisfying the relation T_k(cos t) = cos kt for integer k ≥ 0. They fulfill the orthogonality relation ∫_-1^1 t T_k(t) T_l(t)/√(1-t^2) = δ_kl×π if k=0 , π/2 if k ≥ 1. As ∫_-1^1 t 1/√(1-t^2) = π, we obtain an orthonormal function basis in the sense of our paper by choosing a measure μ̃ and normalized Chebychev function T̃_k as μ̃(t) = 1/π1/√(1-t^2) , T̃_k(t) = T_k(t) × 1 if k=0, √(2) if k ≥ 1 T_k(t) ξ_k, where we denoted the different normalization factor as ξ_k. To translate the time evolution into Chebychev polynomials, we need to find the Chebychev expansion of exp (- i ω t). We have the following. Let ω∈ and J_k the k-th Bessel function of the first kind. Then, exp( -i ω t) = ∑_k=0^∞ i^k ξ_k J_k(ω) T̃_k(t). The prefactor c̃_k of T̃_k in the expansion of exp (-i ω t) is given by c̃_k = ∫_-1^1 μ̃(t) T̃_k(t) exp (-i ω t) = ξ_k/π∫_-1^1 t T_k(t) exp (-i ω t)/√(1-t^2). We aim to make use of the defining property of the Chebychev polynomials in <ref>. 
To do so, we substitute t = cosτ t/τ = - sinτ ∫_-1^1 t = -∫_-π^0 τ sinτ to obtain c̃_k = -ξ_k/π∫_-π^0 τ sinτT_k(cosτ) exp (-i ωcosτ)/√(1-cos^2 τ) = - ξ_k/π∫_-π^0 τ sinτ/|sinτ|cos (k τ) exp (-i ωcosτ) = ξ_k/π∫_0^πτ cos (k τ) exp (-i ωcosτ), where we used that in the given range of integration sinτ / |sinτ| = -1 and the symmetry of the integrand. Next, we recall an integral formula for the Bessel function of the first kind <cit.> J_k(ω) = (-i)^k/π∫_0^πτ cos (k τ)exp (i ωcosτ). Hence, we have that c̃_k = i^k ξ_k J_k(-ω). The claimed expansion follows from the (anti)symmetry of the Bessel functions of the first kind J_k(-ω) = (-1)^k J_k(ω). Additionally, we need a bound on the Bessel function of the first kind. Let J_k be the k-th Bessel function of the first kind. Then, |J_k(ω)| ≤( e ω/2 k)^k. We simply combine the bound <cit.> |J_k(ω)| ≤1/k!|ω/2|^k with the Stirling lower bound k! ≥ (k/e)^k.
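The expansion and the decay bound established in this appendix are easy to check numerically; the sketch below (our notation, floats only) uses the coefficient form c̃_k = i^k ξ_k J_k(-ω) obtained in the proof together with scipy's Bessel functions.

import numpy as np
from numpy.polynomial.chebyshev import chebval
from scipy.special import jv

w = 7.3
kmax = 60
ks = np.arange(kmax + 1)
xi = np.where(ks == 0, 1.0, np.sqrt(2.0))
coeffs = (1j ** ks) * xi * jv(ks, -w)          # coefficients of the rescaled T~_k

ts = np.linspace(-1, 1, 401)
# chebval expects coefficients of the ordinary T_k, so fold in xi_k once more
approx = chebval(ts, coeffs * xi)
exact = np.exp(-1j * w * ts)
print("max truncation error:", np.abs(approx - exact).max())   # tiny once kmax >> e*w/2

k = np.arange(1, kmax + 1)
assert np.all(np.abs(jv(k, w)) <= (np.e * w / (2 * k)) ** k + 1e-15)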
http://arxiv.org/abs/2407.12674v1
20240717155725
Sharp isoperimetric inequalities on the Hamming cube near the critical exponent
[ "Polona Durcik", "Paata Ivanisvili", "Joris Roos" ]
math.CA
[ "math.CA", "math.CO", "46B09, 60E15, 05C35, 65G30" ]
§ ABSTRACT An isoperimetric inequality on the Hamming cube for exponents β≥ 0.50057 is proved, achieving equality on any subcube. This was previously known for β≥log_2(3/2)≈ 0.585. Improved bounds are also obtained at the critical exponent β=0.5, including a bound that is asymptotically sharp for small subsets. A key ingredient is a new Bellman-type function involving the Gaussian isoperimetric profile which appears to be a good approximation of the true envelope function. Verification uses computer-assisted proofs and interval arithmetic. Applications include progress towards a conjecture of Kahn and Park as well as sharp Poincaré inequalities for Boolean-valued functions near L^1. [2020]46B09, 60E15, 05C35, 65G30 § INTRODUCTION Let n≥ 1 be an integer. For a subset A of the Hamming cube {0,1}^n and a vertex x∈ A let h_A(x) be the number of edges connecting x with the complement of A and let h_A(x)=0 if x∉A. One of our main results is the following isoperimetric inequality. For all β≥β_0=0.50057 and A⊂{0,1}^n with |A|≤1/2, 𝐄 h_A^β≥ |A| (log_2(1/|A|))^β. This is an equality if A is a subcube. Here 𝐄f=2^-n∑_x∈{0,1}^n f(x) and |A|=𝐄1_A. If (<ref>) holds for some β, then it follows for all β'≥β (by Hölder's inequality, see Lemma <ref> below). The value β_0=0.50057 is essentially optimal for our argument; the failure for smaller values of β≥ 0.5 is rooted in the failure of a certain variant of Bobkov's inequality for the Gaussian isoperimetric profile (see Remark <ref> and <ref> below). §.§ Motivation, history and further results The quantity 𝐄 h_A^β should be interpreted as arising from a natural interpolation between vertex boundary measure (β=0) and edge boundary measure (β=1) which are classical quantities that are well-understood on the Hamming cube (Harper <cit.>, Bernstein <cit.>, Hart <cit.>; a very nice exposition can be found e.g. in Bollobás <cit.>). Talagrand <cit.> has initiated the study of 𝐄 h_A^β for other values of β and proved non-trivial dimension-free lower bounds when β=1/2. This is a natural and surprising result, since Hamming ball examples show that no such bounds can exist if β<1/2 (see e.g. <cit.>). However, Talagrand's bounds are not sharp and while progress has been made (e.g. Bobkov–Götze <cit.>, Kahn–Park <cit.>, Beltran–Ivanisvili–Madrid <cit.>), no sharp lower bound is currently known for the exponent β=1/2. In this paper we are interested in the minimum possible value of the quantity 𝐄 h_A^β when |A| is fixed and β∈ [1/2, 1]. The β-isoperimetric profile ℬ_β for the Hamming cube is ℬ_β(x) = inf_n≥ 1inf_A⊂{0,1}^n, |A|=x𝐄 h_A^β. The definition is restricted to dyadic rationals x, i.e. x∈𝒬={ k 2^-n : n≥ 0, 0≤ k≤ 2^n }⊂ [0,1], since otherwise there exists no A with |A|=x.
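For small cubes the quantities above can be evaluated by brute force; the short sketch below (function names are ours) computes 𝐄 h_A^β for explicit subsets of {0,1}^n and confirms that a subcube of codimension k gives 𝐄 h_A^β = 2^-k k^β.

import math

def E_h_beta(A, n, beta):
    A = set(A)
    total = 0.0
    for x in A:
        h = sum((x ^ (1 << i)) not in A for i in range(n))   # boundary edges at x
        total += h ** beta
    return total / 2 ** n

n, beta = 5, 0.50057
for k in range(1, n + 1):               # subcube of codimension k: x_1 = ... = x_k = 0
    A = [x for x in range(2 ** n) if x % (1 << k) == 0]
    lhs = E_h_beta(A, n, beta)
    rhs = (len(A) / 2 ** n) * math.log2(2 ** n / len(A)) ** beta
    assert math.isclose(lhs, rhs)
print("subcube equality verified for n =", n)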
For β=1 the precise value is known for every x=k2^-n∈𝒬: ℬ_1(x) = x (n - 2/k∑_j=1^k-1 s(j)), where s(j) is the sum of binary digits of j (Hart <cit.>). The graph of ℬ_1 is fractal and there is numerical evidence to believe that this is also the case for ℬ_β with β∈ (1/2,1) (see <ref>, in particular Figure <ref>). For β=1 it is not difficult to see that minimizing 𝐄 h_A with |A| fixed is equivalent to maximizing the number of edges of A (as an induced subgraph). This no longer holds when β<1 and no characterization of extremizers is known for any value β<1. The example of a codimension k subcube A = {x∈{0,1}^n : x_1=⋯=x_k=0} shows that ℬ_β(2^-k)≤ 2^-k k^β. Subcubes are extremizers in (<ref>) for β=1, and it is believed that they will remain extremizers for other values of β as well. For all β≥1/2 and all k≥ 1, ℬ_β(2^-k)= 2^-k k^β. Theorem <ref> proves the conjecture for β≥ 0.50057. This was previously shown to hold for β≥log_2(3/2)≈ 0.585 and k=1,2 by Kahn and Park <cit.> and then for β≥log_2(3/2) and all k≥ 1 by Beltran, Ivanisvili and Madrid <cit.> (also for k=1 when β≥ 0.53). §.§ Results at the critical exponent At the critical exponent β=1/2 we obtain improved estimates for the value ℬ_1/2(1/2)≤1/2. Talagrand <cit.> proved that ℬ_1/2(x) ≥ C x(1-x) for all x∈𝒬 with C=√(2). This has been improved subsequently by Bobkov–Götze <cit.> (C=√(3)) and in <cit.> (C=2√(2^3/2-2)). The later implies ℬ_1/2(1/2)≥1/2√(2^3/2-2)≈ 0.455. The best possible bound of the form Cx(1-x) is achieved when C=4/3√(2) since for larger values of C the function would contradict the upper bound ℬ_1/2(1/4)≤1/4√(2). This would only give ℬ_1/2(1/2)≥1/3√(2)≈ 0.471. We break this barrier by a significant margin, coming to within 0.3% of the conjectured value. For all k≥ 1, 2^-k√(k)≥ℬ_1/2(2^-k)≥ 0.997· 2^-k√(k). In particular, 0.5≥ℬ_1/2(12)≥ 0.4985. This is a consequence of a more technical lower bound, see (<ref>) below; see Figure <ref> for comparison with previous lower bounds[Note 𝔅_β may be different from ℬ_β, see Definition <ref>.]. While we still do not have any sharp estimates at β=1/2 for fixed x=|A|, we identify the sharp asymptotic behavior of ℬ_1/2 as |A|→ 0^+. For all A⊂{0,1}^n, 𝐄√(h_A)≥ |A| √(log_2(1/|A|)+1) - |A| and as a consequence, ℬ_1/2(x)  ∼  x√(log_2(1/x)) as x→ 0^+. (Here f(x) ∼  g(x) as x→ x_* means lim inf_x→ x_*f(x)/g(x)=1.) The bound (<ref>) is quite bad unless |A| is very close to 0 (e.g. at |A|=1/2 it proves only ℬ_1/2(1/2)≥ 0.207). However, it features the sharp asymptotic behavior as |A|→ 0^+, both in the constant of the leading term and in the power of the logarithm. This can be compared with the estimate (1.1) of Talagrand <cit.> which gave a sharp asymptotic in the power of the logarithm, but not the constant. To our knowledge, this is the first sharp estimate proved at the critical exponent β=1/2. We also obtain asymptotics as |A|→ 1^-, but we do not know if these are sharp (see Remark <ref>). §.§ Classical isoperimetric inequality Let us now discuss the case |A|>1/2, which we have omitted in Theorem <ref>. The classical isoperimetric inequality states that 𝐄h_A ≥ |A|^* log_2(1/|A|^*), where x^*=min(x,1-x), with equality when A is a subcube or complement of a subcube (the inequality follows from (<ref>)). The symmetry between |A|≤1/2 and |A|≥1/2 is natural here because 𝐄 h_A=𝐄 h_A^c. However, generally 𝐄 h_A^β≠𝐄 h_A^c^β for β≠1 and complements of subcubes are no longer believed to be extremizers. 
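Returning to Hart's formula stated at the beginning of this subsection, the following sketch (names are ours) evaluates it via binary digit sums and checks it against a brute-force minimum over all subsets of a tiny cube.

from itertools import combinations

def hart(k, n):
    s = sum(bin(j).count("1") for j in range(1, k))      # sum of binary digit sums
    return (k / 2 ** n) * (n - 2 * s / k)

def edge_boundary(A, n):                                  # 2^n * E h_A  (beta = 1)
    A = set(A)
    return sum((x ^ (1 << i)) not in A for x in A for i in range(n))

n = 3
for k in range(1, 2 ** n + 1):
    brute = min(edge_boundary(A, n) for A in combinations(range(2 ** n), k)) / 2 ** n
    assert abs(brute - hart(k, n)) < 1e-12, (k, brute, hart(k, n))
print("Hart's formula matches the brute-force minimum for n =", n)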
Nevertheless, we are able to prove an analogue of the classical isoperimetric inequality 𝐄h_A^β≥ |A|^* (log_2(1/|A|^*))^β, for any given β∈ [β_0,1]. This was shown to hold for β=log_2(3/2) in <cit.>. For |A|>1/2 it turns out that this inequality is far from optimal. Let J be the unique smooth function on (1/2,1) continuous at the endpoints satisfying J”J=-2, J(12)=12, J(1)=0. The function J plays a central role in this paper and can be expressed in terms of the Gaussian isoperimetric profile I(x), see (<ref>) below. For β=β_0=0.50057 and A⊂{0,1}^n with |A|≥1/2, 𝐄 h_A^β≥max(J(|A|), (1-|A|)(log_2(1/(1-|A|)))^β) In particular, this implies (<ref>) for β=β_0. The inequality (<ref>) improves (<ref>) when 12<|A|< 1-10^-10^380. (see Lemma <ref>). The restriction to the hardest case β=β_0 in Theorem <ref> is only for technical reasons. We can show the conclusion for all β∈ [β_0,1], but omit the details for brevity. §.§ Towards a conjecture of Kahn and Park As an immediate consequence of Theorem <ref> we come closer to a conjecture of Kahn and Park <cit.>. Let (A,B,W) be a partition of {0,1}^n and assume |A|=1/2. Then |∇(A,B)| + n^0.50057 |W|≥12, where |∇(A,B)|=2^-n#{(x,y) : x∈ A, y∈ B} and |W|=𝐄1_W. Kahn and Park proved this with log_3(3/2)≈ 0.585 in the exponent of n and in <cit.> this was improved to 0.53, while the conjectured optimal exponent is 0.5. Corollary <ref> follows from Theorem <ref> because for |B∪ W|=|A|=1/2, we have 12≤𝐄 h_B∪ W^β_0≤𝐄 (h_B∪ W1_B) + 𝐄 (h_B∪ W^β_01_W) ≤ |∇(A,B)| + n^0.50057 |W|. §.§ Sharp Poincaré inequalities Let 1≤ p≤ 2 and let C_p be the largest constant such that the L^p Poincaré inequality ∇ f_p ≥ C_p f-𝐄 f_p holds for all functions f:{0,1}^n→ℂ, where f_p=(𝐄|f|^p)^1/p and |∇ f|^2=∑_i=1^n (12(f(x)-f(x⊕ e_i)))^2. The value of C_p remains unknown except for p=2 where C_2=1. In the endpoint case p=1, it is known <cit.>, <cit.> that √(2π)≥ C_1>2π and it is conjectured that C_1=√(2/π), matching Pisier's inequality <cit.> for the Gaussian case. The lower bound C_1>2/π was obtained only recently <cit.> with arguments that eventually led to the resolution of Enflo's problem <cit.>. These techniques were later used in <cit.> and <cit.> to obtain sharpening of Poincaré inequalities in the quantum setting, and for vector valued functions. In <cit.> it was shown that if we restrict the inequality to Boolean-valued functions f:{0,1}^n→{0,1}, then the corresponding best constant C_B,p satisfies C_B,1 >C_1. Specifically, it was shown that C_B,1≥√(2^3/2-2)>√(2/π). It is conjectured that indicator functions of half-cubes extremize this inequality, which would mean that C_B,p=1 for all p≥ 1. We obtain an improvement of the best known lower bound for C_B,1 as well as the sharp bound C_B,p =1 for p≥ 2β_0. For all p≥ 2β_0= 1.00114 and f:{0,1}^n→{0,1}, ∇ f_p≥f-𝐄f_p. Equality is achieved when f=1_A for a half-cube A. Moreover, ∇ f_1≥ 0.997 f-𝐄f_1. This relies on the familiar observation that ∇1_A_p can be written in terms of 𝐄 h_A^p/2 and 𝐄 h_A^c^p/2; see (<ref>). A subtlety is that (<ref>) does not follow from Theorems <ref> and <ref>, but requires the stronger, more technical isoperimetric inequalities (<ref>) and (<ref>) proved below. §.§ Structure of the paper Here is a brief overview of the different sections and where to find the proofs of the main theorems: – In <ref> we reduce the proofs of Theorems <ref>, <ref> and <ref> to proving certain two-point inequalities for a Bellman-style function, Theorem <ref>. 
Finding this function was facilitated by numerical approximation of the true envelope function. – In <ref> we revisit Bobkov's inequality and prove Theorem <ref>, as well as a variant of Bobkov's two-point inequality used in the proof of Theorem <ref>. – In <ref> we explain how to automate proving that a function is positive using interval arithmetic. To our knowledge it is the first time computer-assisted methods are used in this context. – In <ref> we prove various auxiliary estimates used throughout. – In <ref> we prove the main two-point inequalities, Theorem <ref>. This takes a considerable amount of effort, even with computer-assisted proofs. The analysis naturally breaks into several cases; the most critical one is “Case J.I” in <ref>. – In <ref> we derive Theorem <ref> as a consequence of Theorem <ref>. §.§ Acknowledgments We thank the American Institute of Mathematics (AIM) for funding our SQuaRE workshop from which this project developed. We also thank our fellow SQuaRE members Irina Holmes and Alexander Volberg. J.R. also thanks Rodrigo Bañuelos for helpful comments. The authors were supported in part by grants from the National Science Foundation DMS-2154356 (P.D.), CAREER-DMS-2152401 (P.I.), DMS-2154835 (J.R.). § ENVELOPE FUNCTIONS Let B:[0,1]→ [0,∞) be given. For β≥12, 0≤ x≤ y≤ 1 set G^1_β[B](x,y) = ((y-x)^1/β+B(y)^1/β)^β + B(x) - 2 B(x+y2), G^2_β[B](x,y) = y-x + (2^β-1) B(y) + B(x) - 2 B(x+y2), G_β[B](x,y) = max(G^1_β[B](x,y), G^2_β[B](x,y)). Suppose B:[0,1]→ [0,∞) satisfies B(0)=B(1)=0 and that the following two-point inequality holds: G_β[B](x,y)≥ 0 for all 0≤ x≤ y≤ 1. Then ℬ_β≥ B. This was proved by Kahn and Park <cit.>. The proof is by induction on n (see <cit.>, <cit.>). It improves on prior induction schemes by Talagrand <cit.>, Bobkov <cit.> and Bobkov–Götze <cit.>. The refinement lies mainly in the inclusion of the term G^2 which is crucial for our application. This reduces the proof of Theorems <ref>, <ref> and <ref> to finding an appropriate function B and verifying the two-point inequality (<ref>). Both steps are difficult obstacles. Define L_β(x) = x (log_2(1/x))^β, and let Q_β(x) be the unique cubic interpolation polynomial such that Q_β(0)=Q_β(1)=0, Q_β(1/2)=1/2 and Q_β(1/4)=2^β-2. Then Q_β(x) = 23 x (1-x)(2^β+2-3+4(3-2^β+1) x). The polynomial Q_1/2 has been used also in <cit.>. Define b_β(x) = {[ L_β(x) for x∈ [0, 1/4],; Q_β(x) for x∈ [1/4, 1/2],; J(x) for x∈ [1/2, 1]. ]. The function J is as in Theorem <ref> and can be expressed in terms of the Gaussian isoperimetric profile, see (<ref>). Let β_0=0.50057 and c_0=0.997. Then for all 0≤ x≤ y≤ 1: G_β_0[b_β_0](x,y)≥ 0, G_1/2[c_0 · b_1/2](x,y)≥ 0. (<ref>) continues to hold for all β∈ [β_0, 1] and this can be proved by the same methods, but we do not pursue this here. Failure for smaller values of β_0 occurs at x=12 and y>12, see Figure <ref>. In particular, the failure only involves the function J (the somewhat suspicious looking cubic Q_β surprisingly is not involved in this failure). J is a very natural candidate for modeling the envelope 𝔅_1/2 (see Definition <ref>) on [1/2,1]. The function J extremizes the two-point inequality locally: letting y→ x in the inequality G_1/2[B](x,y)≥ 0 and assuming that B is C^2 gives the differential inequality B” B ≥ -2, so it is natural to consider functions satisfying B”B=-2 and numerical differentiation of the approximated envelope 𝔅_1/2 on the interval [1/2,1] further supports this approach. 
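As a floating-point sanity check of the two-point inequality for b_β (not a substitute for the rigorous interval-arithmetic verification carried out later), one can evaluate G_β_0[b_β_0] on a grid. In the sketch below all names are ours, and J is evaluated through its representation in terms of the Gaussian isoperimetric profile given below, with w_0 fixed numerically by J_w_0(1/2)=1/2.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def I(x):                       # Gaussian isoperimetric profile
    return norm.pdf(norm.ppf(x))

def J_w(x, w):
    return np.sqrt(2) * w * I((1 - x) / w)

w0 = brentq(lambda w: J_w(0.5, w) - 0.5, 0.8, 1.0)       # ~0.8955

beta = 0.50057
def L(x):
    return x * np.log2(1 / x) ** beta
def Q(x):
    return (2 / 3) * x * (1 - x) * (2 ** (beta + 2) - 3 + 4 * (3 - 2 ** (beta + 1)) * x)
def b(x):
    if x <= 0 or x >= 1:
        return 0.0
    if x <= 0.25:
        return L(x)
    if x <= 0.5:
        return Q(x)
    return J_w(x, w0)

def G(x, y):
    m = 2 * b((x + y) / 2)
    g1 = ((y - x) ** (1 / beta) + b(y) ** (1 / beta)) ** beta + b(x) - m
    g2 = (y - x) + (2 ** beta - 1) * b(y) + b(x) - m
    return max(g1, g2)

grid = np.linspace(0.0, 1.0, 201)
worst = min(G(x, y) for x in grid for y in grid if x < y)
print("minimum of G_beta[b_beta] on the grid:", worst)   # should be >= 0 up to rounding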
The numerical evidence also suggests that |𝔅_1/2(x)-J(x)|≤ 3· 10^-5 for x∈ [1/2,1] (see Figure <ref>). The failure of the inequality for the function J at β=1/2 is predicted by an analogue of Bobkov's two-point inequality, see Proposition <ref> and Remark <ref>. With some additional effort in <ref>, the two-point inequalities could be proved for more optimal values for β_0, c_0 with arbitrary precision. The proof of Theorem <ref> uses computer assistance and is contained in <ref>. Theorem <ref> and Proposition <ref> immediately imply: The following isoperimetric inequalities hold: ℬ_β_0≥ b_β_0, ℬ_1/2≥ 0.997· b_1/2. (Recall Definition <ref> and (<ref>).) §.§ Computed envelopes In order to find a good candidate B it is helpful to observe that if B_1, B_2 satisfy (<ref>), then so does max(B_1,B_2). This is a consequence of monotonicity (see Lemma <ref>) and the same holds for the supremum of a families of such functions (B_i)_i∈ I. The β-envelope 𝔅_β is the supremum of B over all functions B such that B(0)=B(1)=0 and (<ref>) holds for all 0≤ x≤ y≤ 1 with x,y∈𝒬. By the above observation, the β-envelope satisfies (<ref>), so Proposition <ref> implies ℬ_β≥𝔅_β. Apart from the case β=1 it is not known whether the reverse inequality holds. There is a natural way to numerically approximate the envelope functions 𝔅_β. Figure <ref> shows plots of numerically approximated envelopes for different values of β. Details on this and further explorations will appear elsewhere. These numerical approximations are not used to prove any of our results, but they were helpful in finding the function b_1/2 which can be viewed as an approximation of the envelope 𝔅_1/2. §.§ Proofs of Theorems <ref>, <ref>, <ref> To prove Theorem <ref> it suffices to consider the case β=β_0 (by Lemma <ref>). For β=β_0, (<ref>) follows from (<ref>) because b_β(x)=Q_β(x)≥ L_β(x) for x∈ [14,12]. This and various other useful facts about L_β and Q_β are proved in <ref> (see Lemma <ref>). The lower bound in Theorem <ref> follows from (<ref>) while the upper bound follows from letting A be a subcube of codimension k. To prove Theorem <ref> it suffices to show that b̃_β(x) = {[ L_β(x) for x∈ [0, 1/4],; Q_β(x) for x∈ [1/4, 1/2],; max(J(x),L_β(1-x)) for x∈ [1/2, 1]. ]. also satisfies the two-point inequality G_β_0[b̃_β_0](x,y)≥ 0 for all 0≤ x≤ y≤ 1. Notice that J(x) and L_β(1-x) are equal to 1/2 at x=1/2. Lemma <ref> shows that J(x)>L_β_0(1-x) for x∈ (12, 1-10^-10^380). The two-point inequality (<ref>) now follows from Lemma <ref> by writing b̃_β_0(x) as the maximum of b_β_0(x) and the function L̃(x)=L_β_0(1-x)1_x≥1/2. The hypotheses of Lemma <ref> are satisfied by (<ref>) and because it is known that the function L̃(x)=x↦ L_β_0(1-x) satisfies G_β_0[L̃](x,y)≥ 0 for all 1/2≤ x≤ y≤ 1 (see <cit.>). The estimate (<ref>) means that b_β_0(x+y2)<L̃(x+y2) can only happen when x+y/2≥ 1-10^-10^380 which in particular implies y≥ x≥1/2. Thus (<ref>) is proved and Theorem <ref> follows. § VARIANTS OF BOBKOV'S INEQUALITY In this section we revisit aspects of Bobkov's classical proof <cit.> of the Gaussian isoperimetric inequality to achieve two objectives: * prove a sharp asymptotic estimate, Theorem <ref> * prove a crucial two-point inequality satisfied by the function J This is also informed by Bobkov and Götze's work <cit.>. For functions f:{0,1}^n→ℝ let M_i f(x) = (f(x)-f(x⊕ e_i))_+, M f = √(∑_i=1^n (M_i f)^2). Observe M 1_A = √(h_A). This notation originates in <cit.>, <cit.>. Suppose that ℐ⊂ [0,∞) is an interval and B:ℐ→ℝ. 
If B(𝐄 f) ≤𝐄√(B(f)^2 + (Mf)^2) holds for all f:{0,1}→ℐ, then it holds for all f:{0,1}^n→ℐ and n≥ 1. This is a version of Lemma 2.1 of Bobkov–Götze <cit.> and follows from it by rescaling. The proof is by induction on n, see <cit.>. For n=1, (<ref>) is referred to as a two-point inequality and may be written as B(x+y2) ≤12 √((y-x)^2 + B(y)^2) + 12 B(x), where max(f(0),f(1))=y and min(f(0),f(1))=x. Observe that (<ref>) is equivalent to G^1_1/2[B](x,y)≥ 0 with G^1_β as in (<ref>). It is known that the function L(x)=x√(log_2(1/x)) satisfies (<ref>) when 0≤ x≤ y≤1/2 (see <cit.>). Applying Lemma <ref> with ℐ=[0,1/2] we therefore have L(𝐄 f)≤𝐄√( L(f)^2 + (Mf)^2) for all f:{0,1}^n→ [0,1/2]. Plugging in f=1/21_A we have 𝐄 f=1/2 |A| and the inequality becomes L(12|A|) ≤12 𝐄 (1_A √(1 + h_A))≤12 (|A| + 𝐄√(h_A)) so the claim follows. The two-point inequality (<ref>) is a variant of Bobkov's original two-point inequality <cit.> for the “two-sided gradient”, i.e., B(x+y2) ≤12 √((y-x2)^2 + B(y)^2) + 12 √((y-x2)^2 + B(x)^2), which is closely related and satisfied by the Gaussian isoperimetric profile <cit.> I(x) = φ(Φ^-1(x)), where φ is the standard Gaussian density function and Φ its cumulative distribution function, i.e. φ(t) = (2π)^-1/2 e^-t^2/2, Φ(t) = ∫_-∞^t φ(s) ds. The function I is also characterized by the conditions I(0)=I(1)=0, I· I”=-1. For a real parameter w>0 and x with w^-1(1-x)∈ [0,1] we define J_w(x) = √(2)· w I(w^-1(1-x)). Then J_w(1)=0 and J_w”J_w=-2. Thus, with J being the function in Theorem <ref>, J(x) = J_w_0(x), where w_0 is such that J_w_0(1/2)=1/2. One computes w_0∈ [0.895, 0.896]. In his seminal paper <cit.>, Bobkov showed that I is the pointwise maximal non-negative continuous function satisfying (<ref>) together with the boundary conditions I(0)=I(1)=0. We are interested in the inequality (<ref>). If a non-negative function I solves I· I”=-1, then I is concave and (I')^2 is convex. The first claim follows from I”=-1/I≤ 0. For the second claim, we compute ((I')^2)' = 2 I' I” = -2I'/I and ((I')^2)” = -2I”/I + 2(I')^2/I^2 = 2 (1+(I')^2)/I^2 ≥ 0. Let I be a non-negative function so that I”· I=-1 and I'≤ 0 hold on an interval ℐ. Then we have for all x,y∈ℐ with x≤ y that √((12 (y-x)^2 + I(y)^2) + I(x) ≥ 2 I(x+y2). The crucial difference to <cit.> is the additional requirement that I'≤ 0. This requirement is necessary: the conclusion fails if it does not hold. This is inspired by Bobkov's argument in <cit.>. Let c=x+y/2 and u=y-x/2. Then x=c-u, y=c+u. Let us fix c∈ℐ. Write ℐ=[a,b]. The variable u ranges in the interval [0, c_*] with c_* = min(c-a, b-c). We need to show for all u∈ [0, c_*]: √(2u^2 + I(c+u)^2)≥ 2 I(c) - I(c-u) Since 2I(c) - I(c-u)≥ 0 (by concavity of I), this is equivalent to 2u^2 + I(c+u)^2 ≥ 4 I(c)^2 + I(c-u)^2 - 4 I(c) I(c-u). Define G_c(u) = I(c+u)^2 - I(c-u)^2 + 2u^2 - 4I(c)^2 + 4 I(c) I(c-u). We have G_c'(u) = 2 I'(c+u)I(c+u) + 2 I'(c-u)I(c-u) + 4u - 4 I(c) I'(c-u), G_c”(u) = 2(I'(c+u)^2 - I'(c-u)^2) + 4(1- I(c)I(c-u)) , where we have used I· I”=-1 to obtain the last equation. Since I is decreasing on ℐ we have I(c-u)≥ I(c), so G”_c(u) ≥ 2 (I'(c+u)^2 - I'(c-u)^2). Recall that if f is convex, then for all x≤ y, f(y)-f(x) ≥ (y-x)f'(x). In particular, since (I')^2 is convex by Lemma <ref>, I'(c+u)^2 - I'(c-u)^2 ≥ 2u ((I')^2)'(c-u). Observe ((I')^2)' = 2 I' · I” = - 2 I' / I ≥ 0, since I'≤ 0 on the interval ℐ. Thus we have shown that G”_c≥ 0 on [0, c_*]. Since G'_c(0) = 0 we must have G'_c≥ 0 on [0, c_*]. Since G_c(0)=0, also G_c≥ 0 on [0, c_*], as required. 
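For the family J_w introduced above, the conclusion of the proposition can be spot-checked numerically (floats only, not a proof): for several values of w the minimum of √((y-x)^2+J_w(y)^2)+J_w(x)-2J_w((x+y)/2) over a grid in the admissible range x,y≥ 1-w/2 is nonnegative.

import numpy as np
from scipy.stats import norm

def J(x, w):
    return np.sqrt(2) * w * norm.pdf(norm.ppf((1 - x) / w))

for w in (0.3, 0.6, 0.896, 1.0):
    xs = np.linspace(1 - w / 2, 1, 100)
    worst = min(
        np.sqrt((y - x) ** 2 + J(y, w) ** 2) + J(x, w) - 2 * J((x + y) / 2, w)
        for x in xs for y in xs if x < y
    )
    print(f"w = {w}: minimum of the two-point expression = {worst:.2e}")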
Applying Proposition <ref> to I=J_w/√(2) we obtain: Let 0<w≤ 1. Then for all 1-w/2≤ x≤ y≤ 1: √((y-x)^2 + J_w(y)^2) + J_w(x) ≥ 2 J_w((x+y)/2). The range of x,y is optimal: if x<1-w/2, the inequality fails because J_w' changes sign at 1-w/2, so the necessary condition I'≤ 0 in Proposition <ref> is not satisfied. This is the underlying reason why we cannot prove Theorem <ref> for β=1/2. Setting w=w_0, we obtain in particular that (<ref>) holds for B=J as long as y≥ x≥ x_0, where x_0 = (1-w_0)/2 ∈ [0.552, 0.553]. We will rely on this in the proof of Theorem <ref> in <ref>. Corollary <ref> and Lemma <ref> imply also that J(𝐄 f)≤𝐄√( J(f)^2 + (Mf)^2) holds for all f:{0,1}^n→ [x_0, 1]. Letting f=(1-x_0)1_A + x_0 we obtain the isoperimetric inequality 𝐄√(h_A)≥ (1-x_0)^-1(J((1-x_0)|A|+x_0)-J(x_0) (1-|A|)), which is no good when |A| is away from 1, but improves on (<ref>) as |A|→ 1^- since the function on the right hand side is asymptotically equivalent to J(|A|). § COMPUTER-ASSISTED PROOFS It can be quite challenging to prove that a given function of several real variables is strictly positive. In this section we describe a simple but powerful method to automate positivity proofs for functions on compact rectangles that we will use routinely throughout the paper. This is likely well-known to readers familiar with computer-assisted proofs. §.§ Dyadic partitioning By a rectangle we mean a product of closed intervals [a_1, b_1]×⋯× [a_n, b_n]⊂ℝ^n for a,b∈ℝ^n with a_i<b_i for all i=1,…,n. We will use the notation [a,b] for such a rectangle. A map f̲:[a,b]× [a,b]→ℝ is called a tight lower bound of a function f:[a,b]→ℝ if f(x)≥f̲(c,d) for all x∈ [c,d]⊂ [a,b] and f̲(c,d)→ f(x) as |d-c|→ 0, x∈ [c,d]. A finite partition 𝒫 of [a,b] into rectangles such that f̲(c,d)>0 for all [c,d]∈𝒫 will be called admissible. The following compactness observation reduces verification of the infinitely many conditions f(x)>0 to only finitely many evaluations. Suppose a function f on [a,b] has a tight lower bound f̲. Then f>0 on [a,b] if and only if there exists an admissible partition. The `if' part follows from the lower bound property and the definition of an admissible partition. For the `only if' part let f>0. Then for every x∈ [a,b] there exists ε_x>0 such that f̲(c,d)>0 for all c, d∈ [a,b] with x∈ [c,d] and |d-c|≤ε_x. The collection of ε_x/2-neighborhoods around each point x forms an open cover of [a,b]. By the Heine-Borel theorem, there exists a finite subcover, which induces an admissible partition. It is not required that f is continuous; however, existence of a tight lower bound implies lower semicontinuity of f. Admissible partitions 𝒫 can be practically determined by recursive dyadic partitioning, where a given rectangle is partitioned into 2^n congruent subrectangles called its children by cutting along the midpoint in each of the n coordinates; this is the procedure Partition_n, of which a minimal sketch is given below. We will only use this when n = 1 or n = 2. In the case of n = 2, we will slightly abuse notation and write f̲(c_1,d_1,c_2,d_2) instead of f̲((c_1,c_2),(d_1,d_2)). Running Partition_n(f̲,[a,b]) either fails or returns an admissible partition, thus providing a proof that f>0 holds on [a,b]. Figure <ref> visualizes an example of an admissible partition that is particularly critical to this paper. A finite value of maxDepth ensures that the procedure terminates. Moreover, Fact <ref> implies that if it is true that f>0 holds on [a,b], then the procedure will always succeed in producing a proof of this, provided that maxDepth is chosen large enough (in this paper maxDepth=12 will suffice).
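A minimal sketch of the recursive procedure Partition_n described above follows (interface names are ours; the rigorous verification in the paper uses interval arithmetic rather than the naive floating-point lower bound of the toy example).

def partition(f_low, box, max_depth=12, depth=0):
    """Return an admissible partition of `box` for the tight lower bound `f_low`,
    or raise if `max_depth` is exceeded.  `box` is a list of (a_i, b_i) intervals;
    f_low(box) > 0 certifies positivity of f on the whole subrectangle."""
    if f_low(box) > 0:
        return [box]                         # box itself is an admissible piece
    if depth >= max_depth:
        raise RuntimeError("maxDepth exceeded on %r" % (box,))
    # split along the midpoint in every coordinate: 2^n congruent children
    children = [[]]
    for (a, b) in box:
        m = (a + b) / 2
        children = [c + [(a, m)] for c in children] + [c + [(m, b)] for c in children]
    pieces = []
    for child in children:
        pieces += partition(f_low, child, max_depth, depth + 1)
    return pieces

# toy example in dimension one: certify f(x) = x^2 - x + 0.26 > 0 on [0, 1]
f = lambda x: x * x - x + 0.26               # minimum value 0.01 at x = 1/2
def f_low(box):                              # Lipschitz lower bound, |f'| <= 1 on [0, 1]
    (a, b) = box[0]
    return min(f(a), f(b)) - (b - a) / 2

cells = partition(f_low, [(0.0, 1.0)])
print(len(cells), "subintervals certify that f > 0 on [0, 1]")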
In practice, limitations are imposed by memory and time constraints. Further limitations of the method are the requirement of strict positivity as well as the need for a (good enough) tight lower bound f given by an explicit formula. The proof of positivity is established by providing the pair (f, 𝒫). Verification of admissibility can, in principle, be carried out using any method of rigorous computation, including manual computation. However, the most practical approach is to use a computer. All necessary computations for this paper can be completed within a few seconds on a standard laptop. We use interval arithmetic to account for all numerical and rounding errors inherent in numerical function evaluation using floating-point arithmetic so that the bounds produced by the computer are provably correct and this can be verified by inspection of the source code. §.§ Interval arithmetic Interval arithmetic is a well-known method for producing rigorous numerical estimates that has gained widespread use in mathematics over recent decades. A fundamental issue is that generic real numbers are not represented exactly. Instead, one uses floating point numbers, which for the purpose of this discussion can be considered to be rationals of the form k 2^-n with n,k∈ℤ, subject to size restrictions. The idea of interval arithmetic is to represent (an approximation of) a real number x by a small interval [x,x] with floating point numbers as endpoints so that x is guaranteed to lie inside the interval. When evaluating a function f(x) we instead evaluate an associated interval enclosure f(x,x) which is an interval-valued map with values [f(x,x), f(x,x)] so that f(x)∈f(x,x) whenever x∈ [x,x] and f(x,x)=[f(x),f(x)]. Arithmetic operations and standard functions are extended to the `degenerate' floating point values ∞,-∞,NaN (`not a number', for undefined operations) with the usual semantics. Note that the canonical interval extension of the order relation on real numbers is only a partial order: if x>y does not hold for intervals x,y, this does not imply x≤y. Given a formula for f involving only standard arithmetic operations and functions with known piecewise monotonicity, an interval enclosure can be automatically constructed. For further details on interval arithmetic and computer-assisted proofs see e.g. <cit.>, <cit.>, <cit.> and references therein. One could rely on automatically constructed interval enclosures to provide tight lower bounds for use in partitioning. However, in our applications the automatic enclosures are sometimes not quite sufficient. Because of this and for clarity, we prefer to always manually provide explicit tight lower bounds. §.§ Source code Each use of Partition_n throughout the paper has been implemented using FLINT/Arb, an open source library for arbitrary precision interval arithmetic that produces provably correct error bounds <cit.>, <cit.>. The code verifying numerical claims for this paper is available on GitHub at Every claim verified in this manner will be tagged with (). For readers who prefer different methods of verification, we provide admissible partitions for each of these claims as an ancillary file to our arXiv submission (; this file is automatically generated by the verification code). For convenience we include a supplemental Mathematica file, which also implements partitioning (in standard double floating point precision) and contains additional numerical and symbolic calculations that the reader may find helpful when reading the paper. 
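To make the interval enclosures described above concrete, here is a deliberately simplified toy interval type: it performs no directed rounding, so unlike FLINT/Arb it does not account for floating-point rounding error, and it only illustrates how enclosures propagate through arithmetic operations and monotone functions.

import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(ps), max(ps))

    def sqrt(self):                          # monotone increasing on [0, inf)
        return Interval(math.sqrt(self.lo), math.sqrt(self.hi))

x = Interval(0.49, 0.51)
# enclosure of x^2 - x + 0.26 on [0.49, 0.51]; the dependency between the two
# occurrences of x makes it pessimistic, which is one reason why the adaptive
# subdivision performed by Partition_n is needed
print((x * x - x + Interval(0.26, 0.26)).lo)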
Mathematica is not required or relied on for the logical completeness of our proofs. § AUXILIARY ESTIMATES In this section we collect various estimates that are used in this paper and may also be useful in the future. §.§ Small exponents versus large exponents If 𝐄 h_A^β≥ B(|A|)>0 holds for some β>0, then for all β̃≥β, 𝐄 h_A^β̃≥ B(|A|)^β̃/β |A|^-β̃/β+1. In particular, if (<ref>) holds for β_0, then it holds for all β≥β_0. Let p=β̃/β≥ 1. By the assumption and Hölder's inequality, B(|A|)≤𝐄 h_A^β≤ (𝐄 h_A^pβ)^1/p |A|^1-1/p, which implies the claim. §.§ Two-point inequalities Suppose we are trying to prove an inequality of the form ℒ(x,y,B(x),B(y))≥ B(x+y2) for some function B on [a,b] with ℒ(x,y,u,v) monotone increasing in u,v for all a≤ x≤ y≤ b. A well-known observation is that if B_1,B_2 satisfy (<ref>) for all a≤ x≤ y≤ b, then the same holds for B=max(B_1,B_2). We will use this observation with minimal hypotheses as follows. Assume for a≤ x≤ y≤ b that * (<ref>) holds for B_1 when B_1(x+y/2)≥ B_2(x+y/2) * (<ref>) holds for B_2 when B_1(x+y/2)< B_2(x+y/2) Then (<ref>) holds for B=max(B_1,B_2) for all a≤ x≤ y≤ b. Fix a≤ x≤ y≤ b. If B_1(x+y/2)≥ B_2(x+y/2), then ℒ(x,y,B(x),B(y))≥ℒ(x,y,B_1(x),B_1(y)) ≥ B_1(x+y2) = B(x+y2) and the case B_1(x+y/2)< B_2(x+y/2) is analogous. Let β∈ [1/2,1) and t,s>0. Then (t^1/β+s^1/β)^β≥ t + (2^β-1) s holds if and only if t≤ s. Dividing by s>0, the inequality can be equivalently written as ( ( ts^-1)^1β+1 )^β≥ ts^-1 + 2^β-1. Writing a=ts^-1, raising the inequality to the power 1/β, and subtracting the right-hand side, it suffices to show that a^1/β+1 - (a + 2^β-1)^1/β≥ 0 holds if and only if 0≤ a≤ 1. When a=1, the left-hand side vanishes. Thus, it suffices to show that the left-hand side is strictly decreasing in a>0. Differentiating the left-hand side in a, it suffices to show a^1/β-1 - (a + 2^β-1)^1/β-1 < 0 for each a>0, which holds because 1β-1> 0. Recalling the definition of G_β in (<ref>), Lemma <ref> implies G_β[B](x,y) = {[ G^1_β[B](x,y), if y-x≤ B(y),; G^2_β[B](x,y), if y-x≥ B(y). ]. In particular, G_β[B](x,y)=G^1_β[B](x,y) for (x,y) near the diagonal {x=y}. On the diagonal we have G^1_β[B](x,x)=0, so near the diagonal it is natural to consider the variable h=y-x. The following lower bound will be sufficient to bound all near diagonal cases. Let β∈ (0,1). For h≥ 0, G^1_β[B](x,x+h) = β B(x+h)^1-1/β h^1/β + [B(x)+B(x+h)-2B(x+12h)] - E_1, where E_1 satisfies 0≤ E_1 ≤12 β (1-β) B(x+h)^1-2/β h^2/β = O(h^2/β). In our applications, B is always concave, so B(x)+B(x+h)-2B(x+h/2)≤ 0. It follows from Taylor's theorem that for a≥ 0, (1+a)^β = 1 + β a - 12 β(1-β) a^2 for some a∈ [0,a]. Letting a=(h B(x+h)^-1)^1/β, (h^1/β + B(x+h)^1/β)^β = B(x+h) (1+a)^β = B(x+h) + β B(x+h)^1-1/β h^1/β - E_1, with E_1 = 12 B(x+h) β(1-β) a^2≥ 0. E_1 satisfies the claimed bound, because a^2 ≤ a^2=h^2/β B(x+h)^-2/β. §.§ Properties of L and Q Let us record the first few derivatives of L_β(x)=x (log_2(1/x))^β: L'_β(x) = (log 2)^-β (log1x)^-1+β (-β +log1x), L”_β(x) = -β (log 2)^-β x^-1 (log1x)^-2+β(1-β+log(x^-1)), L”'_β(x) = β (log 2)^-β x^-2 (log1x)^-3+β (β(3-β)-2 + (log1x)^2). These will be used to verify the following properties of L_β. Let β∈ [1/2,1]. Then * the function x↦ L_β(x) is strictly concave on [0,1]. * if x∈ (0,e^-1), then L”_β(x) is decreasing in β. * if x∈ (0,e^-√(3)/2), then L”'_β(x) is positive and increasing in β. * if x∈ (0,12], L^(4)_β(x) is negative. Note e^-√(3)/2>e^-1>1/4. (1) This follows from (<ref>). 
(2) We show that ∂_βlog(-L”_β(x)) > 0 which means that β↦log(-L”_β(x)) is increasing in β, so L”_β(x) is decreasing in β. The left hand side equals β^-1(1-β+log1x)^-1ℓ(x,β), where ℓ(x,β)= -a_2(x)β^2 + a_1(x)β + a_0(x) and a_0(x) = 1+log1x>0, a_1(x) = -2 + log1log 2 + (log1x) (log1log 2) + loglog1x + (log1x) (loglog1x) > a_1(e^-1) , a_2(x) = log1log 2+loglog1x>0. Notice that ∂_β^2 ℓ(x,β)=-2a_2(x)<0, so β↦ℓ(x,β) is concave and thus ℓ(x,β)≥min(ℓ(x,12),ℓ(x,1)). Observe from definition that x↦ℓ(x,12) and x↦ℓ(x,1) are decreasing functions, so ℓ(x,β)≥min(ℓ(e^-1,12),ℓ(e^-1,1)). Now it only remains to evaluate ℓ(e^-1,12) = 1+34 log1log 2 > 0, ℓ(e^-1,1)=log1log 2 > 0. (3) By (<ref>) it suffices to show β(3-β) - 2 + (log1x)^2 > 0. Observe that the function on the left hand side is positive for small enough x and strictly decreases in x. Also, the function β↦β(3-β) is increasing since its derivative is 3-2β≥ 1. Solving β(3-β) -2 + (log1x)^2 = 0 with β=1/2 for x gives x=e^-√(3)/2. (This also shows that L”'_β(x) is increasing in β.) (4) From (<ref>) we compute L_β^(4)(x) = x^-3β (log 2)^-β (log1x)^β -4 f_β(x), where f_β(x) = -(3-β) (2-β) (1-β) + (1-β -2 log1x)(log1x)^2 +2 (2-β) (1-β)log1x, so it suffices to show that f_β(x) is negative. To see this, we note that ∂_β f_β(12) = 3 β^2+4 (log 2 -3)β +11-(log 2)^2-6 log 2 which has both roots strictly greater than 1 and is thus positive for β∈ [12,1], so β↦ f_β(1/2) is increasing. Evaluating f_1 (12) <-0.6, this implies f_β(12)<0 for all β∈ [12,1]. Thus it suffices to show that f_β'(x)=2x^-1(3 (log1x)^2 -(1-β) log1x -β ^2+3 β -2 ) is non-negative on (0,12]. For this it suffices to show 3 (log1x)^2 -(1-β) log1x - β ^2+3 β -2≥ 0 on (0,12]. The left-hand side is increasing in β, so it suffices to verify the inequality for β=12. Setting also u=log(1x), it suffices to show 12 u^2 -2 u - 3≥ 0 for u∈ [log2,∞). This can be verified by solving for u and noting that the solutions are not greater than log 2. We turn our attention to the cubic polynomials Q_β which are defined by Q_β(x) = 23 x (1-x)(2^β+2-3+4(3-2^β+1) x) so that Q_β(0)=Q_β(1)=0, Q_β(1/2)=1/2 and Q_β(1/4)=2^β-2. For β ∈ [12,1] we denote α_0 = 2^2 + β - 5 ≥ 2^2.5-5>0, α_1 = 3-2^1 + β. Note that α_1>0 if and only if β<log_2(3/2). We record some derivatives of Q_β for future use: Q_β'(x) =23(2^2+β-3) - 4α_0x-8α_1 x^2, Q”_β(x)= -4α_0 - 16α_1 x, Q”'_β(x)=-16α_1, ∂_β Q_β(x) = 13 log(2) 2^3 + β x (1 - x) (1 - 2 x). Let x∈ [0,12] and β∈[1/2,log_2(3/2)). Then: * Q_β(x)≥ 0 and Q_β is increasing in β. * Q'_β(x)> 0 and if x∈[1/4,1/2], then Q'_β is decreasing in β * Q”_β(x)<0, Q”'_β(x)<0 and Q”_β(x) is decreasing in β (1) This follows from (<ref>). (2) From (<ref>) one can write Q'_β(x)= 23(1-2x)(2^β+2-3+4α_1 x) + 83α_1 x(1-x)>0 for x∈ [0,1/2] and β∈[1/2,log_2(3/2)]. Also, ∂_β Q'_β(x) = 13(log 2)2^3+β(6x^2-6x+1). This is negative if x∈ [14,12], because 6x^2-6x+1=(x-12-√(3)6) (x-12+√(3)6). (3) The first part follows from (<ref>), (<ref>). To see that Q_β”(x) is decreasing in β, compute ∂_β Q”_β(x) = -log(2)2^4+β(1-2x)≤ 0, since x≤12. For β∈ [12,log_23/2] we have * L_β(x) ≥ Q_β(x) for x∈ [0,14], * L_β(x) ≤ Q_β(x) for x∈ [ 14,12]. The conclusions in the lemma continue to hold for all β∈ [1/2,1]. We make use of Lemma <ref> and <ref>. (1) We have L_β”'-Q_β”'> 0 on [0,1/4], so L_β'-Q_β' is strictly convex there. Also Q_β'(0)>0 and L'_β(x)→∞ as x→ 0^+. 
Moreover, L_β'(14)-Q_β'(14) = 2^β ( 43-βlog 4 )-32 <0, where the last inequality follows from evaluating L_1/2'(14)-Q_1/2'(14)<-0.12 and the fact that L_β'-Q_β' is decreasing in β, which in turn follows from ∂_β(L_β'-Q_β')(14) = 1log 642^β (-3 - βlog 8 + 8(log 2)^2) and -3 - βlog 8 + 8(log 2)^2 ≤ -3 - 12log 8 + 8(log 2)^2 <-0.06. Therefore, L_β'-Q_β' has a unique zero x̃ on [0,14]. Thus, L_β-Q_β has a unique local extreme on this interval, which is a maximum since L_β'-Q_β' is increasing on [0,x̃] and decreasing on [x̃,14]. Together with L_β(0)-Q_β(0)=L_β(14)-Q_β(14)=0 this shows L_β-Q_β≥ 0 on [0,14]. (2) L_β^(4)≤ 0. Thus, L_β”-Q_β” is concave. We compute L_β”(12)-Q_β”(12)=(log 2)^-2(2 β (β- 1-log 2)+2 (log 2)^2). This is positive which can be seen by computing ∂_β(L_β”(12)-Q_β”(12))=2(log 2)^-2(2β-1-log 2)<0 and L_log_2(3/2)”(12)-Q_log_2(3/2)”(12)>1.3. Moreover, L_β”(14)-Q_β”(14) = 8 (2^β-1)+(log 2)^-2(2^ββ (β -1-log 4)) is increasing in β. Indeed, we have ∂_β(L_β”(14)-Q_β”(14) ) = = 2^βlog 2(β ^2 log 2+2 β -βlog 2 (1+log 4)-1+8 (log 2)^2-log 4) and one can see that β ^2 log 2+2 β -βlog 2 (1+log 4) is increasing in β, positive at β=12, and -1+8 (log 2)^2-log 4>0. Moreover, L_1/2”(14)-Q_1/2”(14)>0.5. Therefore, L_β”-Q_β”≥ 0, so L_β-Q_β is convex. Since L_β(14)-Q_β(14) = L_β(12)-Q_β(12)=0, we have L_β-Q_β≤ 0 as desired. §.§ Lower bounds for the Gaussian isoperimetric profile It is well-known that as x→ 0^+, I(x) ∼√(2)· x √(log(1/x)) (Here f(x)∼ g(x) means lim_t→ 0^+f(x)/g(x)=1.) We shall need a quantitative lower bound. For all 0<x≤1/5: I(x)≥√(2)· x√(log(1/x))(1- 12 loglog(1/x)log(1/x)-log(2π^1/2)log(1/x)). This should be well-known, but since we could not find a reference, we provide a proof. The estimate I(x)≤√(2)· x√(log(1/x)) also holds and can be proved along the same lines, but we do not need this here. Let t<0. Integration by parts shows ∫_-∞^t e^-s^2/2 ds = -t^-1 e^-t^2/2 - ∫_-∞^t s^-2 e^-s^2/2 ds. Thus, Φ(t) = |t|^-1φ(t) - ∫_-∞^t s^-2φ(s) ds ≤ |t|^-1φ(t). Applying I=φ∘Φ^-1 (an increasing function on [0,12]) on both sides, φ(t) ≤ I(|t|^-1φ(t)) at least as long as x=|t|^-1φ(t)≤ 1/2. This means I(x)≥ x· |t|. Rewriting x|t| = (2π)^-1/2 e^-t^2/2, we have |t| = √(2)√(log(1/x)- log((2π)^1/2|t|)) (the expression under the square root is positive because it equals t^2/2). The assumption x≤1/2 implies |t|>0.64 which also means (2π)^1/2|t|>1, and thus |t|≤√(2)√(log(1/x)). This, in turn, implies log((2π)^1/2|t|)≤log(2 π^1/2√(log(1/x))) = log(2π^1/2) + 12 loglog(1/x), and therefore we can write √(2)√(log(1/x))√(1-ε) = |t|, where ε = (log(1/x))^-1log((2π)^1/2|t|) satisfies ε≤log(2π^1/2)log(1/x) + 12 loglog(1/x)log(1/x)→ 0 as x→ 0^+. In particular, ε<1 if x≤1/5. Thus, I(x)≥√(2)· x√(log(1/x))√(1-ε), and using √(1-ε)≥ 1 - ε (since ε≤ 1) the claim follows. We record the following consequence that will be needed in <ref>. For 0<x≤164, J(1-x)≥ x √(log(w_0/x)) This follows from Proposition <ref>. From (<ref>), J(1-x)=√(2) w_0 I(x w_0^-1) and w_0∈ [0.895, 0.896]. Thus J(1-x) ≥ 2 x √(log(w_0/x)) (1- ε), where ε = 12 loglog(w_0/x)log(w_0/x) + log(2π^1/2)log(w_0/x). Finally, if x≤1/64, then 2 (1-ε)> 1 (by evaluating this decreasing function at x=1/64). For all x∈ (1/2, 1-10^-10^380), J(x)>L_β_0(1-x). We begin with the case 1/2<x≤2047/2048. Using (<ref>), a lower bound for -∂_x^2[J(x)-L_β(1-x)] is g_JL,β(x,x) = 2 J(x,x)^-1 -βlog(2)^-β (1-x)^-1 (1-β+log11-x)(log11-x)^-2+β (here J is an upper enclosure for J, see (<ref>)). 
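As a plain floating-point illustration of how such a bound is evaluated (the rigorous verification, described next, runs Partition_1 with interval arithmetic), the sketch below codes g_JL,β_0 using the representation J(1-s)=√(2) w_0 I(s/w_0) quoted above with an approximate value of w_0, and an upper enclosure of J based on J increasing up to x_0 and decreasing afterwards. Which interval endpoint enters each term is our reading of the enclosures, so this is a sanity check rather than a transcription of the proof.

```python
# Floating-point sanity check of the lower bound g_JL displayed above
# (illustrative only: the proof uses Partition_1 with rigorous interval arithmetic).
import numpy as np
from scipy.stats import norm

beta0 = 0.50057
w0 = 0.8955                      # w_0 lies in [0.895, 0.896]; an approximation suffices here
x0 = 1.0 - w0 / 2.0              # location of the maximum of J

def I(p):
    """Gaussian isoperimetric profile I = phi o Phi^{-1}."""
    return norm.pdf(norm.ppf(p))

def J(x):
    # Assumed representation, taken from the relation quoted above.
    return np.sqrt(2.0) * w0 * I((1.0 - x) / w0)

def J_upper(x_lo, x_hi):
    """Upper enclosure of J on [x_lo, x_hi]: J increases up to x0, then decreases."""
    if x_hi < x0:
        return J(x_hi)
    if x_lo > x0:
        return J(x_lo)
    return J(x0)

def g_JL(x_lo, x_hi, beta=beta0):
    """The displayed lower bound for -d^2/dx^2 [ J(x) - L_beta(1-x) ] on [x_lo, x_hi]."""
    u = np.log(1.0 / (1.0 - x_hi))
    return (2.0 / J_upper(x_lo, x_hi)
            - beta * np.log(2.0) ** (-beta)
            * (1.0 - x_hi) ** (-1.0) * (1.0 - beta + u) * u ** (beta - 2.0))

# Evaluate the bound on a fine uniform grid of subintervals of [1/2, 2047/2048].
edges = np.linspace(0.5, 2047.0 / 2048.0, 4097)
values = [g_JL(a, b) for a, b in zip(edges[:-1], edges[1:])]
print("minimum of the bound over the grid:", min(values))  # expected to be positive
```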
Running Partition_1(g_JL,β_0,[1/2,2047/2048]) shows -∂_x^2[J(x)-L_β(1-x)] > 0.08, for x∈ [1/2,2047/2048], so J(x)-L_β_0(1-x) is a strictly concave function on this interval and it suffices to evaluate it at x=1/2, where the value is zero and at x=2047/2048, where the value is >10^-4. Next we consider the case 2047/2048<x<1-10^-10^380. Setting s=1-x<1/2048, (<ref>) implies J(x)=J(1-s) ≥ 2 s(1-ε) √(log(w_0/s)) with ε = 12 loglog(w_0/s)log(w_0/s) + log(2π^1/2)log(w_0/s). Thus, J(x) - L_β(1-x) ≥ s (2(1-ε)√(log(w_0/s)) - (log 2)^-β_0 (log(1/s))^β_0). Let u=log(w_0/s). If s∈ (1/2048, 10^-10^380), then u∈ (7, 10^381). Thus it suffices to show 2u-log(u)-log(4π) - (log 2)^-β_0 u^1/2(u^β_0 +log(w_0^-1)^β_0)>0 for all u∈ [7,10^381]. Here we have used log(1/s)=u+log(w_0^-1) and (u+v)^β_0≤ u^β_0+v^β_0. We first do so on the interval, say u∈ [1200, 10^381]. Evaluating the decreasing function u↦ -log(u)-log(4π) at u=10^381 and the coefficients of other terms, the left-hand side of (<ref>) is > - 1.21 u^1.00057+2u -0.4 u^0.5 - 880. This is a fractional polynomial in u and by Descartes' rule of signs it can have at most two sign changes on (0,∞). Evaluating and using the intermediate value theorem then shows that it has exactly two roots, one in the interval u∈ [1000, 1200] and one in the interval u∈ [10^381, 10^390] with positive value on the interval u∈ [1200, 10^381]. Showing that (<ref>) also holds for u∈ [7, 1200] follows the same argument. Here is a “back-of-the-envelope” version of this calculation: From the asymptotic estimates for J one sees that J(1-s)-L_β(s) must change sign roughly where 2(log 2)^β_0=(log(1/s))^β_0-1/2 s = e^-(2(log 2)^β_0)^1/(β_0-1/2)≈ 10^-9.4· 10^387 Controlling the error shows the zero must lie in [10^-10^388, 10^-10^387]. § PROOF OF THE TWO-POINT INEQUALITY In this section we prove Theorem <ref>. The proof is split into various subcases according to the piecewise definition of b_β in (<ref>) (see Figure <ref>). §.§ Case J This is the most critical case. In particular, this is where the precise values of β_0, c_0 originate. Recall from Corollary <ref> that for x_0≤ x≤ y≤ 1 the function J satisfies the best possible estimate, G^L_β[J](x,y)≥ 0 for β=1/2. When x<x_0≈ 0.552 this estimate fails for β=1/2. Let β≥β_0=0.50057 and c_0=0.997. Then for 1/2≤ x≤ x_0 and x≤ y≤ 1, G^1_β[J](x,y)≥ 0 and G^1_1/2[c_0 J](x,y)≥ 0. By monotonicity in β, we only need to show G_β_0^1[J]≥ 0 to show the first part. The proof is split into two further cases according to the size of y (see Figure <ref>). §.§.§ Case I: 1/2≤ x≤ x_0, x≤ y≤11/16 Let h=y-x. The proof will use automatic dyadic partitioning as described in <ref>. Since x_0≈ 0.552 is not a floating point number, we must account for cases when x is slightly larger than x_0. This causes no issues. In fact, the argument is robust enough to afford a generous margin away from x_0 and with no additional effort we will prove the desired inequality in the larger region 12≤ x≤58, 0≤ h≤316. By Lemma <ref>, G^1_β[c· J](x,x+h)≥β c^1-1/β J(x+h)^1-1/βh^1/β + c(J(x) + J(x+h)-2J(x+h2)) - 12 β(1-β) c^1-2/β J(x+h)^1-2/βh^2/β. Taylor's theorem shows that for every N≥ 2: J(x) + J(x+h)-2J(x+h2) = ∑_k=2^N-11k!(1-2^-k+1) J^(k)(x)h^k + E_N, where E_N = 1N!(J^(N)(ξ_1)-2^-N+1 J^(N)(ξ_2)) h^N and ξ_1∈ [x,x+h], ξ_2∈ [x,x+h/2] are intermediate values. Thus, G_β^1[c· J](x,x+h)≥ h^1/β g_J,1,β,c(x,h), where g_J,1,β,c(x,h)= β c^1-1/βJ(x+h)^1-1/β - 12 β(1-β)c^1-2/βJ(x+h)^1-2/βh^1/β +c ∑_k=2^N-11k!(1-2^-k+1) J^(k)(x)h^k-1/β + c E_N h^-1/β. 
To get a sufficiently accurate lower bound we will unfortunately need to use N=6 which makes the following expressions somewhat lengthy. From J· J”=-2, we may calculate J^(3)=2J' J^-2, J^(4)=-4 (1+|J'|^2)J^-3<0, J^(5)=4J'(7+3|J'|^2)J^-4, J^(6)=-8(7+23|J'|^2+6|J'|^4)J^-5<0. Recall that J is increasing on [1/2,x_0] and decreasing on [x_0,1] (see (<ref>) and (<ref>)). Thus an interval enclosure for J is given by J(x,x)=min(J(x),J(x)), J(x,x)={[ J(x), if x<x_0,; J(x), if x>x_0,; J(x_0), else. ]. Also observe J'(x_0)=0 and J”<0, so J' is strictly decreasing and |J'| has the interval enclosure |J'|(x,x)= {[ J'(x), if x<x_0,; -J'(x), if x>x_0,; 0, else, ]. |J'|(x,x) = max(|J'(x)|, |J'(x)|). The odd order derivatives J^(3) and J^(5) change sign at x_0 and admit the tight lower bounds J^(3)(x,x) = 2 J'(x) J(x)^-21_x<x_0 - 2 |J'|(x,x) J(x,x)^-21_not x< x_0 and J^(5)(x,x) = 4 J'(x)(7+3 |J'(x)|^2)J(x)^-41_x<x_0 - 4 |J'|(x,x)(7+3|J'|(x,x)^2)J(x,x)^-41_not x< x_0. Now we can compose a tight lower bound for g_J,1,β,c as g_J,1,β,c(x,x,h,h) = β c^1-1/βJ(x+h,x+h)^1-1/β - 12 β(1-β)c^1-2/βJ(x+h,x+h)^1-2/βh^1/β - c2 J(x,x)^-1h^2-1/β +c (14 J'(x) J(x)^-2h^3-1/β + 132J'(x)(7+3 |J'(x)|^2)J(x)^-4h^5-1/β)1_x<x_0 - c(14 |J'|(x,x) J(x,x)^-2h^3-1/β + 132|J'|(x,x)(7+3|J'|(x,x)^2)J(x,x)^-4h^5-1/β)1_not x< x_0 -7c48(1+|J'|(x,x)^2)J(x,x)^-3h^4-1/β -c90(7+23|J'|(x,x+h)^2 + 6 |J'|(x,x+h)^4)J(x,x+h)^-5h^6-1/β + c2880(7+23|J'|(x,x+12h)^2+6 |J'|(x,x+12h)^4)J(x,x+12h)^-5h^6-1/β. Running Partition_2(g_J,1,β,c,[12,58]× [0,316]) for each (β,c)∈{(β_0,1),(12,c_0)} shows that g_J,1,β,c>10^-7 on this rectangle. Figure <ref> visualizes the admissible partition. §.§.§ Case II: 1/2≤ x≤ x_0, 11/16≤ y≤ 1 By monotonicity it suffices to show the stronger conclusion G^1_1/2[J](x,y)≥ 0 or equivalently, g_J,2(x,y)=(y-x)^2 +J(y)^2-(2J(x+y2)-J(x))^2≥ 0. The quantity 2J(x+y/2)-J(x)≥ 0 is decreasing in y (since x+y/2≥12(12+1116)=0.59375>x_0, so J'(x+y/2)<0) and also decreasing in x, because the x-derivative is J'(x+y2)-J'(x) = J”(ξ)y-x2 < 0 by the mean value theorem. Therefore, a tight lower bound for g_J,2 is given by g_J,2(x,x,y,y)=(y-x)^2 +J(y)^2-(2J(x+y2)-J(x))^2. Running Partition_2(g_J,2,[1/2,9/16]×[11/16,1]) shows that g_J,2(x,y)>10^-8 for (x,y) in this rectangle (and note that 9/16=0.5625>x_0). §.§ Case Q For β∈{12, β_0} and 0≤ x≤ y≤12: G^1_β[max(L_β, Q_β)](x,y)≥ 0 The argument is not sensitive to the exact value of β_0: the conclusion holds for all β∈ [12,1], but we don't pursue this here. It is known that G^1_β[L_β](x,y)≥ 0 holds for all 0≤ x≤ y≤12 (see <cit.>). By Lemma <ref> and Lemma <ref> we therefore have the claim on the triangle I in Figure <ref> and it now suffices to show G^1_β[Q_β](x,y)≥ 0 for all 0≤ x≤ y≤12 with x+y/2≥14. Observe that G_β^1[Q_β](x,y)=0 if x=y and if (x,y)=(0,12). This leads to distinguishing two further cases (regions II and III in Figure <ref>). §.§.§ Near diagonal: 0≤ y-x≤14 This is the quadrangle II in Figure <ref>. Let h=y-x. We will prove the desired bound in the larger region 0≤ h≤14≤ y≤12. By Lemma <ref>, G^1_β[Q_β](x,y)≥ h^1/β g_Q,1,β(h,y), where g_Q,1,β(h,y)= β Q_β(y)^1-1/β - (2Q_β(y-h2)-Q_β(y-h)-Q_β(y))h^-1/β - 12 β (1-β) Q_β(y)^1-2/β h^1/β. Calculate 2Q_β(y-h2)-Q_β(y-h)-Q_β(y) = h^2 (α_0 - 2α_1 h + 4α_1 y) ≥ 0, where α_0,α_1 are as in (<ref>). 
Thus g_Q,1,β(h,y) = β Q_β(y)^1-1/β - (α_0-2α_1 h+4α_1 y)h^2-1/β - 12 β (1-β) Q_β(y)^1-2/β h^1/β If β>12, then g_Q,1,β(x,y)>0 and this can be proved automatically: Since Q_β is monotone increasing on [0,12] (see Lemma <ref>), we obtain the tight lower bound g_Q,1,β(h,h,y,y) = β Q_β(y)^1-1/β - (α_0 - 2α_1 h + 4α_1 y)h^2-1/β - 12 β (1-β)Q_β(y)^1-2/βh^1/β. Running Partition_2(g_Q,1,β_0,[0,14]×[14,12]) gives g_Q,1,β_0(h,y)>0.001 and thus finishes the proof of (<ref>) near the diagonal for β=β_0. The case β=1/2 needs a little more work because g_Q,1,1/2(0,12)=0. We have g_Q,1,1/2(h,y) = 12 Q_1/2(y)^-1 - (α_0+4α_1 y) + 2α_1 h - 18 Q_1/2(y)^-3 h^2 Observe that ∂_h^2 g_Q,1,1/2(h,y)=-14 Q_1/2(y)^-3<0, so the function is concave in h and it suffices to evaluate at h=0 and h=14: * g_Q,1,1/2(0,y)≥ 0 for all y∈[14,12]. We need to show g_Q,1,1/2(0,y)=12 Q_1/2(y)^-1 - (α_0+4α_1 y) ≥ 0 Multiplying by 2Q_1/2(y)>0 this is 1 - 2(α_0+4α_1 y)Q_1/2(y) ≥ 0 The left-hand side is a polynomial of degree 4 in y. Changing variables t=12-y, plugging in the definition of Q_1/2(12-t) and simplifying this becomes 1-13 (1-4t^2)(1-4α_1 t)(3-4α_1 t) ≥ 0. The left-hand side equals zero at t=0 and the three factors do not change sign on the interval t∈ [0,14], so the left-hand side is increasing in t∈ [0,14], which shows that g_Q,1,1/2(0,y)≥ 0 for y∈ [14, 12]. * g_Q,1,1/2(14,y)>0 for all y∈ [14,12] Evaluate g_Q,1,1/2(14,14,14,38) > 0.01 and g_Q,1,1/2(14,14,38,12)> 0.001. §.§.§ Far from diagonal: 14≤ y-x≤12 This is the triangle III in Figure <ref>. Again let h=y-x. The idea is to reduce to a fractional polynomial in h using logarithmic derivatives. The desired inequality (<ref>) can be written in equivalent form as G_Q,β(h,y)=log(h^1/β + Q_β(y)^1/β) - 1βlog(2 Q_β(y - h2) - Q_β(y - h))≥ 0. For β∈{β_0,1/2} and y∈[1/4,1/2], the function h↦∂_h G_Q,β(h,y) vanishes at most once on the interval [14, 12]. Also, ∂_h G_Q,β(14,y)>0. This is slightly more than we need: it would be enough to show this for y∈ [3/8,1/2] and h∈ [14,y]. The proof of Lemma <ref> is postponed to the end of this section. The lemma implies that h↦ G_Q,β(h,y) must achieve its minimum on [1/4,y] at h=1/4 or at h=y. In the near diagonal case we have already proved that G_Q,β(14,y)≥ 0 (this is the line segment with y-x=14 in Figure <ref>). Thus it remains to show G_Q,β(y,y)≥ 0, i.e. log(y^1/β + Q_β(y)^1/β) - 1βlog(2 Q_β(y2))≥ 0. Equivalently, we need to show y^1/β + Q_β(y)^1/β - (2 Q_β(y2))^1β≥ 0. Multiplying by y^-1β and defining f_β(y)=1+R_β(y)^1/β-R_β(y2)^1/β where R_β(y) = y^-1Q_β(y) = 23 (1-y)(2^β+2-3+4(3-2^β+1) y), we see that it is now equivalent to show f_β(y) ≥ 0 for all y∈ [14,12]. We evaluate f_β(12)=0. We claim that f_β' ≤ 0 on [14,12], which will in turn imply f_β≥ 0 on [14,12]. To see the claim, we calculate f_β'(y) = 1βR_β(y)^1/β-1R_β'(y) - 12βR_β(y2)^1/β-1R_β'(y2) To show f_β'(y)≤ 0 it is equivalent to show that β y f_β'(y)≤ 0. Setting f_β,0(y)=yR_β(y)^1/β-1R_β'(y), it holds β y f_β'(y) = f_β,0(y)-f_β,0(y/2). Thus, it suffices to show that f_β,0'≤ 0. Let us further denote f_β,1(y)=R_β(y)^1/β-1, f_β,2(y)=yR_β'(y), so that f_β,0(y)=f_β,1(y)f_β,2(y). Since R_β≥ 0 and R_β'≤ 0 on [14,12], also f_β,1≥ 0 and f_β,2≤ 0 on this interval. We calculate f_β,1'(y) = (1β-1)R_β(y)^1/β-2R_β'(y), f_β,1”(y) = (1β-1)(1β-2)R_β(y)^1/β-3(R_β'(y))^2+(1β-1)R_β(y)^1/β-2R_β”(y) f_β,1”'(y) = (1β-1)(1β-2)((1β-3)R_β(y)^1/β-4(R_β'(y))^3 + 3 R_β(y)^1/β-3R_β'(y)R_β”(y)) Since β∈{β_0,12}, one has R_β”≤ 0 and 1β>1,1β≤ 2,1β<3, which in turn implies f_β,1'≤ 0, f_β,1”≤ 0, and f_β,1”'≤ 0. 
To calculate the third derivative we also used that R_β”'=0. Next we compute f_β,2'(y)=R_β'(y)+yR_β”(y), f_β,2”(y) = 2R_β”(y) +yR_β”'(y) = 2R_β”(y) Thus, f_β,2'≤ 0 and f_β,2”≤ 0. Using f_β,2”'=0 we calculate f_β,0”'(y) = f_β,1”'(y)f_β,2(y) + 3f_β,1”(y)f_β,2'(y) + 3f_β,1'(y)f_β,2”(y) The calculations above show that f_β,0”'≥ 0. We evaluate f_β,0”(14) > 3.2, which implies f_β,0”>0. We also evaluate f_β,0'(12)< -0.6, which then shows f_β,0'<0 on [14,12], as desired. Here t,y∈ [1/4,1/2]. The function β∂_h G_Q,β(h,y) is given by h^1/β-1/h^1/β+Q_β(y)^1/β - Q_β'(y-h)-Q_β'(y-h/2)/2Q_β(y-h/2)-Q_β(y-h) Observe that 2Q_β(y-h/2)-Q_β(y-h)>0 by strict concavity of Q_β (see Lemma <ref>), so we may multiply by the common denominator (h^1/β+Q_β(y)^1/β)(2Q_β(y-h/2)-Q_β(y-h)) to arrive at the quantity h^1/β-1(2Q_β(y-h2)-Q_β(y-h)) -(Q_β'(y-h)-Q_β'(y-h2))(h^1/β+Q_β(y)^1/β). This is a fractional polynomial in h. To see this, calculate 2Q_β(y-h2)-Q_β(y-h) =-2α_1 h^3+(α_0+4α_1 y)h^2 + Q_β(y) Q_β'(y-h)-Q_β'(y-h2)=-6α_1 h^2+(2α_0+8α_1 y)h, where α_0,α_1 are as in (<ref>). Thus (<ref>) equals h^1/β-1 p_y,β(h), where p_y,β(h) = 4α_1 h^3 - (α_0 + 4α_1 y)h^2 + 6α_1 Q_β(y)^1/β h^3-1/β - (2α_0+8α_1 y)Q_β(y)^1/β h^2-1/β + Q_β(y). It will now suffice to show that p_y,β is strictly decreasing and thus has at most one zero. To do this we compute the derivative: g_Q,2,β(h,y) = -∂_h p_y,β(h) = -a_2 h^2 + a_1,y h - a_0^+,y h^2-1/β + a_-1^+,h h^1-1/β where the coefficients can be read off from the definition of p_y,β(h). A tight lower bound of g_Q,2,β is given by g_Q,2,β(h,h,y,y) = -12α_1 h^2 + (2α_0 + 8α_1 y)h - 6(3-1β)α_1 Q_β(y)^1/βh^2-1/β + (2-1β)(2α_0+8α_1 y)Q_β(y)^1/βh^-1/β+1. Finally, running Partition_2(g_Q,2,β, [14,12]^2) for β∈{1/2,β_0} shows that g_Q,2,β(h,y)>0.01 for all (h,y)∈ [14,12]^2. §.§ Case LJQ We distinguish further subcases. §.§.§ Case I: 116≤ x≤14, 12≤ y≤34 This is the rectangle I in Figure <ref>. Here we have G_β[b_β](x,y)>0 for all β∈[1/2,1], but we will only prove it for β∈{1/2,β_0}. A naive tight lower bound for G_β[b_β](x,y) suffices: g_LJQ,1,β(x,x,y,y)= max(((y-x)^1/β+J(y,y)^1/β)^β, y-x+(2^β-1) J(y,y)) + L(x,β) - 2 Q_β(x+y2). Running Partition_2(g_LJQ,1,1/2,[1/16,1/4]×[1/2,3/4]) shows G_β[b_β](x,y)>10^-7 for β∈{β_0,1/2} and (x,y)∈ [1/16,1/4]×[1/2,3/4]. §.§.§ Case II This is region II in Figure <ref> and covers the remainder of Case LJQ. Let β∈[1/2,1], x∈ [0,1/4] and y∈ [1/2,1]. If in addition x≤1/16 or y≥3/4 holds, then G_LJQ,β(x,y)=y-x + (2^β-1) J(y) +L_β(x)-2Q_β(x+y2)≥ 0. The proof rests on the following observation. For every β∈ [1/2,1] and y∈ [0, 1] the function x↦ G_LJQ,β(x,y) is strictly concave on [0,1/4]. Compute ∂_x^2 G_LJQ,β(x,y) = L”_β(x) - 12 Q_β”(x+y2) By Lemma <ref> this equals L”_β(x) + 2α_0 + 4α_1 (x+y) Recall that α_1=3-2^1+β>0 iff β<log_2(3/2). Let us only consider the case β<log_2(3/2) – the argument for the other (less interesting) case is analogous. By Lemma <ref> and since x≤1/4< e^-√(3)/2, the quantity (<ref>) is increasing in both x and y. Thus it suffices to evaluate ∂_x^2 G_LJQ,β(14,1)=L”_β(14) - 12 Q_β”(58)≤ L”_1/2(14) -12 Q”_1/2(58)<-0.6, where we have used that β↦ L”_β(14) is decreasing by Lemma <ref> and that for x∈[12,1], the function β↦ Q_β”(x) is increasing. Lemma <ref> reduces the proof of Proposition <ref> to checking the claim on the x-boundary of the region II. 
Thus it remains to verify the following three claims: * G_LJQ,β(0,y)≥ 0 for all y∈ [1/2,1] * G_LJQ,β(116,y)≥ 0 for all y∈[12,34] * G_LJQ,β(1-y,y)≥ 0 for all y∈ [34,1] Let f_β(y)=G_LJQ,β(0,y) = y + (2^β-1) J(y) - 2Q_β(y2). Differentiating in β we see that ∂_β f_β(y) takes the form 2^βf(y) for some function f(y). Thus, for a fixed y, the quantity f_β(y) is either monotone increasing or monotone decreasing in β. Thus, for all β∈[1/2, 1], f_β(y)≥min(f_1/2(y), f_1(y)). Thus we may assume β∈{12,1} without loss of generality. We give details only for the case β=1/2, the case β=1 being very similar. Using JJ”=-2 we see f_1/2'(y) = 1 + (√(2)-1)J'(y) - Q_1/2'(y2), f_1/2”(y) = -2 (√(2)-1) J(y)^-1 -12 Q”_1/2(y2) By (<ref>), f_1/2^(4)(y)<0 so f_1/2” is strictly concave on (1/2,1). Since f_1/2”(12)=0 this implies that f_1/2” has at most one zero on (12,1). We evaluate f_1/2”(916)>0.05, f_1/2”(45)<-0.3. Thus, f_1/2” has exactly one zero y_0 in (12,1) where f_1/2' achieves its unique maximum. Since f_1/2' is increasing on (12,y_0) and decreasing on (y_0,1), and f_1/2'(916)>0.05, f_1/2'(1516)<-0.1, it follows that f_1/2' has exactly one zero on (12,1). Therefore, f_1/2 is increasing on [1/2,y_0] and decreasing on [y_0,1]. Together with f_1/2(0)=f_1/2(12)=0, this implies f_1/2(y)≥ 0 for y∈ [1/2,1]. A tight lower bound is given by g_LJQ,2(y,y,β,β) = y-116 + (2^β-1) J(y,y) +L_β(116)-2Q_β(132+ y2) Running Partition_2(g_LJQ,2,[12,34]× [12,1]) shows that G_LJQ,β(116,y)>10^-5 for all y∈ [1/2,3/4] and all β∈ [1/2,1]. We need to show that f_β(y)=G_LJQ,β(1-y,y) = 2y-1+(2^β-1) J(y)+L_β(1-y)-1≥ 0. Observe that this is increasing in β, so it suffices to consider β=1/2. The function f_1/2 is concave on [34,1], because f_1/2”(y)= (√(2)-1) J”(y)+L_1/2”(1-y)≤ 0 (recall J”≤ 0 and Lemma <ref>). Thus it suffices to evaluate at the endpoints: f_1/2(1)=0 and f_1/2 (34) > 0.01. §.§ Case LJ Note that in this region we have y∈ [34,1]. For (x,y)∈ [0,14]× [34,1], 1≤x+y, and β∈ [12,1], y-x + (2^β-1) J(y) + L_β(x) -2J(x+y2)≥ 0. The left-hand side is increasing in β, so it suffices to prove the claim for β=12. Denote g_LJ(x,y) = y-x + (√(2)-1) J(y) + L_1/2(x) -2J(x+y2) We claim that if (x,y)∈ (0,14]× [34,1] and 1≤ x+y, then ∂^2_xg_LJ(x,y)≤ 0. Using JJ”=-2 we have ∂^2_xg_LJ (x,y) = L_1/2”(x) +J(x+y2)^-1. Lemma <ref> shows L”_1/2 is increasing and negative. Furthermore, L”_12(14)≤ -2.7. If x+y2∈ [12,1-w_02], then J(x+y2)^-1≤ 2 and hence ∂^2_xg_LJ (x,y)≤ 0. If x+y2≥ 1-w_02, it suffices to show that ∂^3_xg_LJ(x,y) ≥ 0 and ∂^2_xg_LJ (14,y) ≤ 0. To show the latter we first evaluate ∂^2_xg_LJ (14,1) ≤ -0.7. Since J^-1 is increasing on [34,1], we have ∂^2_xg_LJ (14,y) ≤ 0 for all y∈ [34,1]. To show ∂^3_xg_LJ≥ 0 we calculate ∂^3_xg_LJ(x,y) = L_1/2”'(x) -12J'(x+y2) J(x+y2)^-2. Since in the current region J'(x+y2)≤ 0 and L”'_1/2≥ 0 by Lemma <ref>, the claim follows. It remains to show non-negativity on the boundary of the region LJ. If y=1, we already know d^2dx^2g_LJ(x,1)≤ 0. Then we evaluate g_LJ(0,1)=0 and g_LJ(14,1)≥ 0.1, which shows g_LJ(x,1)≥ 0 for all x∈ [0,14]. Next we consider the case x+y=1, x∈ [0,14], y∈ [34,1] and let g_LJ(1-y,y) = 2y+(√(2)-1) J(y)+L_1/2(1-y)-2. We have g_LJ(14,34)≥ 0.02, g_LJ(0,1)= 0, and ∂^2_y g_LJ(x,y) = (√(2)-1) J”(y) + L_1/2”(1-y)≤ 0, which implies g_LJ(1-y,y)≥ 0 for all y∈ [34,1]. Finally, we tackle the case x=14, y∈ [34,1]. We have f(y)=g_LJ(14,y)=y-14+(√(2)-1) J(y)+L_1/2(14)-2J(y2 + 18). 
Since we already know that this function is non-negative at the endpoints of the interval, it suffices to show f”(y)≤ 0 for y∈ [34,1). Using JJ”=-2 we calculate f”(y) = -2(√(2)-1) J(y)^-1+J(y2 + 18)^-1. It suffices to show J(y)J(y2 + 18)^-1-2(√(2)-1)≤ 0. At y=0, the left-hand side vanishes, so the inequality holds. Thus, it suffices to show that ddy (J(y)J(y2 + 18)^-1) = J'(y)J(y2 + 18)-12J(y)J'(y2 + 18)/J(y2 + 18)^2 is non-positive. For this it suffices to show 2J'(y)/J(y)≤J'(y2 + 18)/J(y2 + 18) Since J≥ 0 and J'(y)<0 on our interval, it suffices to show J'(y)/J(y)≤J'(y2 + 18)/J(y2 + 18) Since y≥y2+18, it suffices to show y↦J'(y)/J(y) is decreasing. Indeed, d/dy( J'(y)/J(y) ) = -2-(J'(y))^2/J(y)^2≤ 0. Thus, we deduce f”(y)≤ 0 as desired. §.§ Case QJQ Note that the region is contained in the rectangle [1/4,1/2]× [1/2,3/4], see Figure <ref>. Let β_1=12+311024≈ 0.53. For all (x,y)∈ [1/4,1/2]× [1/2,3/4] and β∈ [12, β_1]: G_QJQ,β(x,y)=(y-x)^2 + J(y)^2 - (2Q_β(x+y2)-Q_β(x))^2 ≥ 0. The conclusion fails for β≥β_1+11024. From (<ref>) we see that for each fixed (x,y) the left-hand side in (<ref>) is either monotone increasing or decreasing in β. Thus, G_QJQ,β(x,y)≥min(G_QJQ,1/2(x,y),G_QJQ,1(x,y)) and it suffices to show the claim for β=12 and β=β_1. The y-derivative of the left-hand side in (<ref>) is 2 times g_QJQ,β(x,y)=(y-x) +J(y)J'(y)-(2Q_β(x+y2)-Q_β(x))Q_β'(x+y2). We begin by showing that this quantity is strictly positive for all (x,y)∈ [1/4,1/2]× [1/2,3/4] thus reducing to the case y=1/2. This can be done by Partition_2. In order to formulate a tight lower bound we record the monotonicity of the various terms appearing in (<ref>): * The function x↦ J(x)J'(x) is decreasing on x∈ [1/2, 3/4]. (JJ')' = (J')^2-2 using that JJ”=-2. Also (J')^2 is convex (Lemma <ref>), so it suffices to evaluate (J')^2-2 at the endpoints x=1/2 and x=3/4 which shows (J')^2-2<-1<0. * The quantity 2Q_β(x+y/2)-Q_β(x) is positive, increasing in y and decreasing in x. Recall Lemma <ref>. First, 2Q_β(x+y/2)-Q_β(x)≥ Q_β(y)>0 follows since Q_β is concave. The quantity is increasing in y since Q_β'>0 on [0,1/2]. Finally, the x-derivative is Q'_β(x+y2)-Q'_β(x) = Q_β”(ξ) y-x2<0 where ξ is a value in [x,(x+y)/2]. * The function x↦ Q_β'(x) is decreasing and positive on [0,1/2] by Lemma <ref>. Therefore, a tight lower bound of g_QJQ,β is given by g_QJQ,β(x,x,y,y) = y-x + J(y)J'(y) - (2Q_β(x+y2)-Q_β(x))Q'_β(x+y2). Calling Partition_2(g_QJQ,β,[1/4,1/2]×[1/2,3/4]) for β∈{1/2,β_1} shows that g_QJQ,β>10^-5 on [1/4,1/2]× [1/2,3/4]. To finish the proof it now suffices to show (<ref>) for y=1/2, that is (12-x)^2 + 14 - (2Q_β(x2 + 14)-Q_β(x))^2 ≥ 0 for x∈ [1/4,1/2]. The left-hand side is a polynomial in x that factors as 14(1-2x)^3 p_β(x), where p_β(x)= (18- 3· 2^3 + β+ 2^3 + 2 β)x^3 + (2^5 + β-3· 2^2 + 2 β- 21)x^2 + (8+3· 2^1 + 2 β-7· 2^1 + β)x + 2 - 2^2 β Plugging in β=β_1,1/2 one sees that the coefficients of x,x^2,x^3 are positive, so p_β is an increasing function. Finally, at x=1/4 one computes p_β(14)>10^-4>0 for β∈{β_1,1/2} (actually p_β(14) is decreasing in β) . §.§ Case QJ We distinguish two cases, see Figure <ref>. The near-diagonal triangle I is the most critical: here it is again necessary to move away from β=1/2 (or include a constant c<1). Note that in this region G^1_β[J](x,y) is increasing in β. §.§.§ Near diagonal: 3/8≤ x≤ 1/2, 1/2≤ y≤ 5/8, x+y≥ 1 This is the triangle I in Figure <ref>. For 3/8≤ x≤1/2≤ y≤5/8 and β∈ [β_0,1], G^1_β[b_β_0](x,y)≥ 0 and G_1/2^1[c_0· b_1/2](x,y)≥ 0. 
By monotonicity in β, it suffices to consider β=β_0 to show the first part of the claim. Let us consider the equivalent expression g_QJ,β,c(x,y)=(y-x)^1/β +c^1/β J(y)^1/β-c^1/β(2 J(x+y2)-Q_β(x))^1/β. (Observe that 2J(x + y2) - Q_β(x)>0.) We claim that ∂_x g_QJ,β(x,y)≤ 0 for each y. Calculate -β∂_x g_QJ,β,c(x,y) = (y-x)^1/β-1+c^1/β(2J(x+y2)-Q_β(x))^1/β-1 (J'(x+y2)-Q'_β(x)) It suffices to show that this is positive. Recalling Lemma <ref> and the enclosures for J, |J'| (see (<ref>)), a tight lower bound of this expression can be given by g_QJ,1,β,c(x,x,y,y) =(y-x)^1/β-1+c^1/β(2J(x+y2,x+y2)-Q_β(x))^1β-1 J'(x+y2)1_x<x_0 - c^1/β(2J(x+y2,x+y2)-Q_β(x))^1β-1|J'|(x+y2,x+y2)1_not x<x_0 - c^1/β (2J(x+y2,x+y2)-Q_β(x))^1β-1Q'_β(x). Running Partition_2(g_QJ,1,β,c,[1/4,1/2]× [1/2,5/8]) shows (β,c)∈{(β_0,1),(1/2,c_0)}, -β∂_x g_QJ,β,c > 10^-5 on this region. Thus it only remains to check that g_QJ,β,c(12, y)≥ 0 for all y∈ [12,58] and (β,c)∈{(β_0,1),(1/2,c_0)}, but this is equivalent to showing G^1_β[c J](1/2,y)≥ 0 for these values, which already follows from Proposition <ref>. §.§.§ Far from diagonal: 14≤ x≤12, 5/8≤ y≤ 1, x+y≥ 1 We will show the following claim that is stronger than required: For all x∈ [1/4, 1/2] and y∈[5/8,1], g_QJ,2(x,y)= ((y-x)^2 +J(y)^2)^1/2+Q_1/2(x)-2J(x+y2)>10^-7. A tight lower bound is given by g_QJ,2(x,x,y,y) = ((y-x)^2 + J(y)^2)^1/2 + Q_1/2(x)-2J(x+y2,x+y2). Running Partition_2(g_QJ,2,[14,12]× [58,1]) shows the claim. § PROOF OF THE POINCARÉ INEQUALITY Every Boolean-valued function f can be written as f=1_A for some A⊂{0,1}^n. Then 𝐄 f=|A|=2^-n#A and f-𝐄 f^p_p = |A|^p (1-|A|) + |A| (1-|A|)^p. On the other side of the inequality, ∇ f_p^p = 2^-p ( 𝐄 h_A^p/2 + 𝐄 h_A^c^p/2). Set p=2β. If 𝐄h_A^β≥ B(|A|) for some function B, then ∇ f_p^p ≥ 2^-2β_0(B(|A|)+B(1-|A|)). Thus Theorem <ref> follows from (<ref>) and (<ref>) if we show the following. For all x∈ [0,1], β∈ [1/2,β_0]: G_P,β(x)= 2^-2β( b_β(x) + b_β(1-x)) - x^2β(1-x) - x(1-x)^2β≥ 0. The conclusion holds for all β∈[12,1] and this can be proved by the same methods. Since G_P,β(x)=G_P,β(1-x) it suffices to show this for x∈ [0,12]. Notice that G_P,β vanishes at x=0,12,1. §.§.§ Case I: x∈ [0,164] We need to show 2^-2β( L_β(x) + J(1-x)) - x^2β(1-x) - x(1-x)^2β≥ 0 It is clear that this inequality holds asymptotically as x→ 0^+ and this is not sensitive to the value of β. Using β∈[1/2,β_0] and Corollary <ref>, the left hand side is ≥ 2^-2β_0(L_1/2(x) + J(1-x)) - 2x(1-x) ≥ x (2^-2β_0√(log_2(1/x)) + 2^-2β_0√(log(w_0/x))- 2) = x· g_P,1(x), Notice that g_P,1 is a decreasing function of x and one can evaluate g_P,1(164)>0.2 Thus, G_P,β(x)>0.2 x for all x∈ [0,164] and β∈[1/2,β_0]. §.§.§ Case II: x∈ [164,14] Note that x↦ J(1-x) is increasing on x∈ [1/64,1/4] because x_0<1-14. Thus a tight lower bound for G_P,β(x) is given by g_P,2(x,x)= 2^-2β_0(L_1/2(x) + J(1-x)) - 2x(1-x). Running Partition_1(g_P,2,[164,14]) shows that G_P,β(x)>10^-4 for all x∈ [1/64,1/4] and β∈[1/2,β_0]. §.§.§ Case III: x∈ [14,12] We need to show 2^-2β( Q_β(x) + J(1-x)) - x^2β(1-x) - x(1-x)^2β≥ 0. Since the left-hand side equals 0 at x=12 it will suffice to show that the function on the left is decreasing in x, i.e. that g_P,3,β(x)=-∂_x G_P,β(x)>0 for all x∈ [1/4,1/2], β∈[1/2,β_0]. Calculate g_P,3,β(x)=-2^-2β Q'_β(x)+2^-2β J'(1-x) + 2β x^2β-1 (1-x) - x^2β + (1-x)^2β - 2β x(1-x)^2β-1. J'(1-x) is decreasing for x∈ [0, 12], positive if 1-x<x_0 and negative if 1-x>x_0. Also, Q'_β(x) is decreasing in x, positive and decreasing in β by Lemma <ref>. 
A tight lower bound for g_P,3,β is therefore given by g_P,3(x,x) = - 2^-1 Q'_1/2(x) + 2^-2β_0 J'(1-x) 1_1-x<x_0 - 2^-1|J'|(1-x,1-x)1_not 1-x<x_0 + x^2β_0-1(1-x) - x + (1-x)^2β_0-2β_0 x Running Partition_1(g_P,3,[1/4,1/2]) shows g_P,3(x) > 0.001, for all x∈[1/4,1/2] which finishes the proof of Proposition <ref>. ABCD99 BIM23 D. Beltran, P. Ivanisvili, J. Madrid. On sharp isoperimetric inequalities on the hypercube. arXiv:2303.06738, 2023. BELP08 L. Ben Efraim, F. Lust-Piquard, Poincaré type inequalities on the discrete cube and in the CAR algebra, Probab. Theory Related Fields 141 (2008), no. 3-4, 569–602. Ber67 A. J. Bernstein. Maximally connected arrays on the n-cube. SIAM J. Appl. Math. 15 (1967), 1485–1489. Bob97 S. G. Bobkov. An isoperimetric inequality on the discrete cube, and an elementary proof of the isoperimetric inequality in Gauss space. Ann. Probab. 25 (1997), no. 1, 206–214. BG99 S. G. Bobkov, F. Götze. Discrete isoperimetric and Poincaré-type inequalities. Probab. Theory Relat. Fields 114 (1999), no. 2, 245–277. Bol86 B. Bollobás. Combinatorics: Set Systems, Hypergraphs, Families of Vectors and Combinatorial Probability. Cambridge University Press, 1986. Esken1 D. Cordero-Erausquin, A. Eskenazis. Discrete logarithmic Sobolev inequalities in Banach spaces. arXiv: 2304.03878, 2023. Flint The FLINT team. FLINT: Fast Library for Number Theory. 2023. Version 3.0.0, https://flintlib.org. GomezSerrano J. Gómez-Serrano. Computer-assisted proofs in PDE: a survey. SeMA Journal 76 (2019), no. 3, 459–484. Har66 L. Harper. Optimal numberings and isoperimetric problems on graphs. J. Comb. Theory 1 (1966), no. 3, 385–393. Hart76 S. Hart. A note on the edges of n-cube. Discrete Math. 14 (1976), no. 2, 157–163. ILvHV P. Ivanisvili, D. Li, R. van Handel, A. Volberg. Improving constant in end-point Poincaré inequality on Hamming cube. arXiv:1811.05584, 2018. IVH P. Ivanisvili, R. van Handel, A. Volberg. Rademacher type and Enflo type coincide. Ann. Math. 192 (2020), no. 2, 665–678. Arb F. Johansson. Arb: efficient arbitrary-precision midpoint-radius interval arithmetic. IEEE Transactions on Computers (2017), no. 66, 1281–1292. KP20 J. Kahn, J. Park. An isoperimetric inequality for the Hamming cube and some consequences. Proc. Amer. Math. Soc. 148 (2020), 4213–4224. ODonnell R. O'Donnell. Analysis of Boolean Functions. Cambridge University Press, 2014. pis02 G. Pisier. Probabilistic methods in the geometry of Banach spaces. in: Probability and Analysis, (Varenna, 1985), Lecture Notes in Math. 1206, Springer, Berlin, 1986. haonan1 C. Rouzé, M. Wirth, M, H. Zhang. Quantum Talagrand, KKL and Friedgut’s Theorems and the Learnability of Quantum Boolean Functions. Commun. Math. Phys. 405 (2024), 95. Rump S. M. Rump. Verification methods: Rigorous results using floating-point arithmetic. Acta Numer. 19 (2010), 287–449. Tal93 M. Talagrand. Isoperimetry, logarithmic Sobolev inequalities on the discrete cube, and Margulis graph connectivity theorem. Geom. Funct. Anal. 3 (1993), no. 3, 295–314. Tuc11 W. Tucker. Validated Numerics: A Short Introduction to Rigorous Computations. Princeton University Press, 2011.
http://arxiv.org/abs/2407.13282v1
20240718083415
Imaging pulsar census of the Galactic Plane using MWA VCS data
[ "S. Sett", "M. Sokolowski", "E. Lenc", "N. D. R. Bhat" ]
astro-ph.HE
[ "astro-ph.HE" ]
§ ABSTRACT Traditional pulsar surveys have primarily employed time-domain periodicity searches. However, these methods are susceptible to effects like scattering, eclipses and orbital motion. At lower radio frequencies (≲ 300 MHz), factors such as dispersion measure and pulse broadening become more prominent, reducing the detection sensitivity. On the other hand, image domain searches for pulsars are not limited by these effects and can extend the parameter space to regions inaccessible to traditional search techniques. Therefore, we have developed a pipeline to form 1-second full Stokes images from offline correlated high time-resolution data from the Murchison Widefield Array (MWA). This led to the development of image-based methodologies to identify new pulsar candidates. In this paper, we applied these methodologies to perform a low-frequency image-based pulsar census of the Galactic Plane ( 12 MWA observations, covering ∼6000 deg^ 2 sky ). This work focuses on the detection of the known pulsar population which were present in the observed region of the sky using both image-based and beamformed methods. This resulted in the detection of 83 known pulsars, with 16 pulsars found only in Stokes I images but not in periodicity searches applied in beamformed data. Notably, for 14 pulsars these are the first reported low-frequency detections. This underscores the importance of image-based searches for pulsars that may be undetectable in time-series data, due to scattering and/or dispersive smearing at low frequencies. This highlights the importance of low-frequency flux density measurements in refining pulsar spectral models and investigating the spectral turnover of pulsars at low frequencies. § INTRODUCTION Even though pulsars were discovered by observing their pulsed emission at a very low radio frequency of 81.5 MHz <cit.>, most of the pulsars to date, have been discovered and studied at frequencies ≳ 1 GHz. Especially, the large population of Galactic pulsars still remains relatively unexplored at low frequencies (≲500 MHz). The main effects that make pulsar studies at low frequencies more difficult are: (i) scattering <cit.>, (ii) increase in the system temperature due to diffuse Galactic continuum emission <cit.> and (iii) the spectral turnover of pulsars at low frequencies. Despite these effects, multiple Galactic Plane (GP) surveys have been conducted in an effort to discover new pulsars and study the known Galactic pulsar population <cit.>. With the advancements in instrumentation and computing, studying pulsars at low frequencies is once again coming back to the forefront. Recently upgraded or constructed telescopes such as the Giant Metrewave Radio Telescope <cit.>, Low-Frequency Array <cit.> and the Murchison Widefield Array <cit.> are contributing to the study of pulsars in the low-frequency regime. Studying pulsars at low frequencies will help us better understand the physics of pulsar radio emission and the interstellar medium (ISM). Moreover, given that the vast majority of catalogued pulsars lack reliable flux density measurements below 400 MHz, studying more pulsars at low frequencies is needed to better constraint the models of spectral energy distributions (SEDs) at these frequencies. Studying the radio spectra of pulsars can help in planning surveys of the Galactic pulsar population with the Square Kilometer Array <cit.>. While the time domain search procedures have been successful in detecting many pulsars in the GP <cit.>, highly scattered or exotic (e.g. 
sub-millisecond) pulsars are more difficult to detect. Some algorithms that explore a wider parameter space have been developed <cit.> but they are computationally expensive for all-sky surveys. Continuum images have been considered an effective way to detect known pulsars and pulsar candidates. These image-based efforts have resulted in discoveries of several interesting sources, for example, PSRs J1431-6328 <cit.>, J0523-7125 <cit.>. The advantage of searching for pulsars in continuum images is that the detections are unaffected by period, scattering or orbital modulation which is beneficial for GP surveys. Furthermore, similar image-based surveys at low frequencies can be highly effective in overcoming the dispersion and scattering effect that limits the horizon of periodicity searches at low frequencies especially on the GP where the effects of scattering are more prominent. Such image-based GP surveys can be successful in first low-frequency detections of some known pulsars, which may not have been possible to detect at low frequencies by the traditional searches due to scattering becoming more prominent (∝ν^-4). For instance, the first-millisecond pulsar (MSP) was initially detected as an unusually steep and scintillating continuum source and then confirmed by a targeted pulsar search <cit.>. Hence, as this example shows, sensitive image-based searches can lead to a discovery of exotic and/or new classes of pulsars, which were missed by the earlier searches potentially due to being in the parts of parameter space not covered by the traditional searches. The study of pulsars at low frequencies can be used to inform pulsar population studies, which play an important role in estimating the yields of the future surveys <cit.>. The MWA is the low-frequency precursor telescope to the SKA and pulsar science is one of the key science goals of the SKA <cit.>. Therefore, pulsar observations in the same observing environment and at the similar frequencies are necessary to prepare for pulsar science with the SKA-Low. In this paper, we present the image-based GP pulsar census using MWA Phase II archival observations. Section 2 describes the main science goal and motivation for the census. Section 3 describes the the observations and the data processing details of this work. Section 4 summarises the results and discusses the implications of this work on the future. Section 5 focuses on summarising the work done as part of this paper and discusses the importance of image-based pulsar candidate search techniques for future surveys and instruments. § MAIN SCIENCE GOALS AND MOTIVATION Very few surveys of the Southern sky such as the GLEAM survey covering the entire Southern radio sky at frequencies between 72 and 231 MHz have been done <cit.>. The drift scan observations of the GLEAM survey were able to reach a sensitivity of 10 mJy/beam, producing a catalogue of radio sources that can be used for further discoveries <cit.>. The next generation of the GLEAM survey, GLEAM-X (Ross et al, 2023 in prep) is currently ongoing and is expected to reach down to a sensitivity of 1-2 mJy/beam. These surveys have also been utilised to study the pulsar population. Studies of the low-frequency spectral energy distribution of pulsars using the continuum images from the GLEAM survey were realised for 60 radio pulsars <cit.>. Their analysis provided reliable flux density measurements and helped in improving the spectral modelling of pulsars. 
Similar studies were performed to explore the variability of pulsars by <cit.> and circular polarisation of pulsars by <cit.>. The method of detecting pulsars using the ISM and variance imaging was also investigated by <cit.>. 33 pulsars were also detected in MWA images as linear polarised sources as part of the POlarised GLEAM Survey (POGS), 11 of which POGS was the first image-plane detection <cit.>. An initial census of Southern pulsars with the MWA Voltage Capture System (MWA VCS) data using incoherent beamforming and searching was performed by <cit.>. Their work also resulted in the first low-frequency detections of 10 pulsars and forecasted that a survey with SKA-Low could potentially detect around 9400 pulsars. The currently ongoing large pulsar survey with the MWA, the Southern-sky MWA Rapid Two-metre (SMART) pulsar survey <cit.> has discovered 4 new pulsars and is expected to detect many more (∼300 new pulsars after full processing). The SMART survey would also provide a complete census of the known Southern sky population of pulsars (in the time domain) and will be beneficial in informing future pulsar surveys with the SKA-Low. With the SKA under construction, there is an increased need to study and understand the pulsar population. This work is the first attempt to perform an image-based pulsar census of the dense region of the GP using MWA VCS Phase II data. The expected detectable pulsar population is guided by the current knowledge of the known pulsar population. Therefore it is important to explore all new avenues and refine our knowledge of the known pulsar population. The presented work contributes to this knowledge by detecting some of the pulsars for the first time at frequencies lower than 300 MHz. More detection of pulsars, especially at low frequencies will be useful to address some of the broader questions surrounding the neutron-star population. This work also explores the new parameter space available to image-based pulsar search strategies and provides insights into the efficacy of such methodologies. An underlying goal of this work is to provide better and more reliable flux density measurements leading to improved spectral modelling for pulsars that do not have any low-frequency flux density measurements. Finally, conducting a full Galactic census of pulsars is a high-priority science objective for the SKA <cit.> and this work will also serve as a reference survey for future deeper imaging surveys at low frequencies such as those planned with the SKA-Low. The initial success of this work is the first step towards the greater goal of detecting new pulsars using low-frequency image-based pulsar searches. As the MWA is the official low-frequency precursor for the SKA-Low, the lessons learned from this work will be useful in informing future SKA-Low image-based pulsar surveys. § OBSERVATIONS AND DATA PROCESSING The MWA is a low-frequency precursor telescope to the SKA, located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia. It operates in the frequency range of 70-300 MHz and is able to access the entire Southern sky. The low-frequency range and the wide field-of-view (FoV), makes it complementary both to similar low-frequency telescopes in the Northern Hemisphere, such as the Low Frequency Array <cit.>, as well as high-frequency telescopes in the Southern Hemisphere, such as the Parkes Radio Telescope (Murriyang). The Phase I of the MWA consisted of 128 tiles with a maximum baseline of ∼ 3km <cit.>. 
In 2018, the MWA was upgraded to phase II with a maximum baseline of 5.3 km and 256 tiles (small 4×4 aperture arrays with dual-polarisation dipoles). The Phase II upgrade increased the angular resolution by a factor of ∼ 2 and the sensitivity by a factor of ∼ 4 as a result of reduction in the classical and sidelobe confusion <cit.>. It was initially designed as an imaging telescope, requiring only time-averaged tile cross-correlation products ("visibilities"). However, it was eventually upgraded to enable the capture of raw complex voltages from each tile with the development of the VCS <cit.>. The VCS captures high-time and frequency resolution voltage data (100 μ s / 10 kHz), which provided flexibility and the opportunity to process data in multiple ways. Since pulsar flux densities at low-frequencies can vary significantly from day to day, being able to find pulsar candidates in images formed from the MWA VCS data, and investigate them by beamforming the very same data is a very powerful technique unique to the MWA. The pipeline developed for this kind of processing was described in <cit.>. The current work uses 12 MWA VCS Phase II observations as shown in Figure <ref>. Their duration ranges between 30 minutes to 90 minutes with a central frequency of 184 MHz. The observations are labelled from A to L and their details are given in Table <ref>. The total amount of data processed as part of this work is ∼ 450 TB and corresponds to 6000 deg^2 of the sky. These observations are used to form full Stokes images using the procedure described in Section <ref>, which leads to the detection of known pulsars and identification of new pulsar candidates. The mean RMS reached for the Stokes I images of the 12 observations ranges between ∼ 5 mJy/beam and 8 mJy/beam. The same data were also used to confirm image detections of known pulsars and follow up the candidates by beamforming the original VCS data and searching for pulsations. Using the Pawsey supercomputing systems (mainly the Garrawarla supercomputer dedicated to processing MWA data), processing an hour of observation to produce full Stokes images takes about 12 hours. Forming a beam and searching for pulsations using the same one-hour data, for a single pointing (single source) takes about 5 hours. This demonstrates the large volume of data captured by VCS and the large amount of time and resources required to process all the observations. §.§ Imaging pipeline The raw voltages from the MWA antennas are processed by the xGPU software correlator <cit.> to produce visibilities with a temporal resolution of 1 second. The resulting data are then processed with COTTER <cit.>, which converts them into the CASA measurement set format and applies calibration, and flags channels affected by radio-frequency interference (RFI) using the AOFlagger software <cit.>. Calibration solutions are obtained from the MWA All-Sky Virtual Observatory <cit.>. Images in instrumental polarisation are created using WSCLEAN <cit.> with a Briggs weighting of -1 and then transformed into Stokes I, Q, U, and V images using the MWA's "fully" embedded element beam model <cit.>. The images are 8192×8192 pixels with pixel size of 0.2 arcmin, corresponding to ∼ 35 × 35 images. The individual 1-second images are averaged to produce mean full Stokes images, which are subsequently used for further analysis. An example of on and off GP Stokes I image are shown in Figure <ref> and <ref> respectively, where known detected pulsars are marked with white circles. 
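To illustrate the averaging step of this pipeline, the sketch below stacks per-second Stokes I snapshots into a mean image and estimates its noise with a robust, median-absolute-deviation based RMS. The file-name pattern is hypothetical and this is not the pipeline code itself, only a minimal Python outline of the operation described above.

```python
# Illustrative sketch of the averaging step (not the survey pipeline itself):
# stack per-second Stokes I snapshots into a mean image and estimate its noise.
import glob
import numpy as np
from astropy.io import fits

snapshot_files = sorted(glob.glob("1s_images/obsid_*_I_image.fits"))  # hypothetical names
assert snapshot_files, "no per-second snapshot images found"

stack_sum = None
for fname in snapshot_files:
    with fits.open(fname) as hdul:
        data = np.squeeze(hdul[0].data).astype(np.float64)  # drop degenerate axes
    stack_sum = data if stack_sum is None else stack_sum + data

mean_image = stack_sum / len(snapshot_files)

# Robust RMS from the median absolute deviation, which is less biased by bright
# sources and diffuse Galactic emission than a plain standard deviation.
mad = np.nanmedian(np.abs(mean_image - np.nanmedian(mean_image)))
rms = 1.4826 * mad
print(f"averaged {len(snapshot_files)} snapshots; approximate RMS = {rms:.4f} Jy/beam")
```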
The source finding software, AEGEAN <cit.>, was used to detect and extract radio sources from the mean Stokes I and V images. A catalogue of sources that exceeded a 5σ threshold was created for each observation and analysed to generate a list of potential pulsar candidates. The pulsar candidates were chosen based on three criteria, namely, steep spectrum, circular polarisation and variability. Steep spectrum sources are the ones whose spectral index is steeper than -2. The spectral index is calculated using RACS flux and MWA flux of the source. The circular polarisation sources are the ones that are more than 7% circularly polarised. The variable sources are selected by generating light curves for sources that have significance more than 5 and modulation of more than 20%. More details of the methodologies can be found in <cit.>. The flux densities measured of detected pulsars and derived from Stokes I images were used for modelling their spectra using pulsar_spectra software <cit.>, which is described in Section <ref>. §.§ Pulsar spectra fitting software - pulsar_spectra Measurements of flux densities are important for detailed spectral analysis and furthering our understanding of pulsar radio luminosities. A recent version of the ATNF pulsar catalogue shows that the pulsar flux densities are well studied between 400 MHz to 1.4 GHz as most of the pulsars were discovered at those frequencies. However, they are not studied as extensively at frequencies ≤400 MHz. The main challenges of accurate flux density measurements at these frequencies are pulsar scintillation. This is especially applicable for low-DM pulsars due to the interstellar medium. This leads to fluctuations in flux density from a factor of two to an order of magnitude. Even though pulsar scintillation is less prominent for high-DM pulsars, these would be very hard to detect at low-frequencies. Variations in flux densities are more dominant at low frequencies and is dependent on their Galactic latitude and instrumental parameters of the telescope such as observing bandwidth and integration time. The other factor that impedes reliable spectral models is that the currently catalogued data were taken with different instrumental backends and different systematic errors. Even though many accurate flux density measurements have been taken over the last several decades, there is no catalogue that records all this information. Furthermore, there is no theoretical model for the spectra of the pulsars and no single model that can accurately fit the wide variety of pulsars' spectra. <cit.> studied the spectral properties of 441 pulsars observed with the Parkes radio telescope at centre frequencies of 728, 1382 and 3100 MHz, providing a systematic and uniform sample of pulsar flux densities. The data were then combined with spectral data from the literature to derive the spectral properties of the pulsars and fit different spectral models to the combined data in a robust manner. The spectra of the pulsars needed to have at least four flux density measurements at four different frequencies to ensure sufficient spectral coverage and better constraints of the spectral fits. This simply means that it needs at least 4 or more data points to produce a fit as it would be more reliable than the one with two data points. The Akaike information criterion (AIC) was used to decide on the best-fit model. AIC measures the amount of information about the data retained by the model without over-fitting. 
The model with the lowest AIC was selected as the best-fitting model. Based on the method described in <cit.>, the spectral fitting software pulsar_spectra was developed by <cit.>. The pulsar_spectra software contains an open-source catalogue of flux density measurements from several publications and uses this information to fit the best spectral model for a given pulsar. The pulsar_spectra software also reduces the effect of underestimated uncertainties on outlier points and prevents the skewing of the model fit by using the Huber loss function <cit.>. Currently, the software can incorporate different spectral models based on <cit.>. These are: (i) simple power law, (ii) broken power law, (iii) log parabolic spectrum, (iv) power law with low-frequency turnover, (v) power law with high-frequency cutoff and (vi) double turnover spectrum. The software is written in Python and can be easily installed. It has multiple features, such as fitting a spectral model using the flux density measurements from the literature, estimating the flux density of a pulsar at a desired frequency and adding more spectral models to the repository. Even though this is relatively new software, it has already been applied to other analyses. One such spectral study was performed by <cit.> for 22 radio pulsars detected with SKA-Low precursor stations; 21 out of the 22 pulsars showed a change in the spectral fit after the addition of the new low-frequency flux density measurements. A more extensive analysis using 893 radio pulsars is being undertaken by Swainston et al. 2023 (in prep.). The presented work uses the feature of pulsar_spectra that enables us to include the flux density measurements from this work and compare the spectral fits before and after their addition. In this study, the pulsar mean flux density is measured from the continuum Stokes I images. The flux densities calculated from periodicity searches are dependent on the beam model, the sky model, and the sky and receiver temperatures. Given these additional model dependencies, they are less reliable and may have large errors associated with them. The flux densities from the Stokes I images rely on proper calibration solutions and the beam model, and therefore have fewer dependencies than those from the periodicity searches and may have smaller errors associated with them. However, we do have to take into account that the flux densities from the Stokes I images may be overestimated due to the blending of sources, which can introduce errors into the measurements. Figure <ref> shows the comparison of the flux densities from this work to the literature and demonstrates that the flux densities from this work match well within the error limits of the flux densities in the literature. These reliable low-frequency flux density measurements will be useful in filling the gap in the measurements at the lower end of the frequency range. Our study illustrates the scientific application of image-based pulsar surveys conducted with the Murchison Widefield Array (MWA), leading to accurate measurements of low-frequency pulsar flux densities and broadband modelling of pulsar spectra. These insights serve as crucial inputs for informing both pulsar surveys and scientific endeavours planned with the Square Kilometer Array Low-Frequency (SKA-Low) telescope. The details of the results from this analysis are given in Section <ref>.
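To make the AIC-based model selection described above concrete, the following schematic sketch fits two candidate spectral models to a set of flux density measurements and keeps the one with the lowest AIC. This is not the pulsar_spectra interface; the flux densities, uncertainties and model parametrisations below are invented purely for illustration.

```python
# Schematic AIC-based spectral model comparison (not the pulsar_spectra API).
import numpy as np
from scipy.optimize import curve_fit

freq = np.array([150.0, 185.0, 400.0, 700.0, 1400.0, 3100.0])   # MHz (hypothetical)
flux = np.array([450.0, 380.0, 160.0, 90.0, 40.0, 15.0])        # mJy (hypothetical)
err = 0.1 * flux                                                 # assumed 10% uncertainties

def simple_power_law(nu, c, alpha, nu0=1000.0):
    return c * (nu / nu0) ** alpha

def log_parabola(nu, a0, a1, a2, nu0=1000.0):
    x = np.log10(nu / nu0)
    return 10.0 ** (a0 + a1 * x + a2 * x ** 2)

def fit_and_aic(model, p0):
    popt, _ = curve_fit(model, freq, flux, p0=p0, sigma=err, absolute_sigma=True)
    chi2 = np.sum(((flux - model(freq, *popt)) / err) ** 2)
    k = len(popt)
    # Gaussian-likelihood AIC, up to an additive constant shared by all models
    return chi2 + 2 * k, popt

aic_pl, p_pl = fit_and_aic(simple_power_law, p0=[50.0, -1.5])
aic_lp, p_lp = fit_and_aic(log_parabola, p0=[np.log10(50.0), -1.5, 0.0])
best = "simple power law" if aic_pl < aic_lp else "log parabolic spectrum"
print(f"AIC(power law) = {aic_pl:.1f}, AIC(log parabola) = {aic_lp:.1f} -> best: {best}")
```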
§ RESULTS AND DISCUSSION §.§ Detection of known pulsars Our image-based survey is generally expected to detect sufficiently bright pulsars irrespective of their DM. At MWA frequencies, the sensitivity of periodic searches drops significantly above a DM of 250 pc cm^-3, but the imaging sensitivity to pulsars is not compromised at these high DMs. There are therefore three regions of DM-flux density parameter space: (i) at flux densities below the 5σ imaging sensitivity of the telescope and low DM, where only periodic searches are effective; (ii) at higher flux densities and DM ≤ 250 pc cm^-3, where both image-based and periodic searches can be used; (iii) at higher flux densities and DM ≳ 250 pc cm^-3, where only image-based searches are sensitive. This is summarised in Figure <ref>. The Australia Telescope National Facility (ATNF) pulsar catalogue <cit.> comprises about 3000 pulsars detected to date. Of this population, 1000 pulsars lie within the 3 dB (half-power) point of the 12 observations processed for this work. Taking the local σ at the pulsar positions in the Stokes I images and applying a 5σ flux density threshold, we find that 85% (850) of these pulsars are below our detection threshold. As our work is focused on the dense region of the Galactic Plane, we also have to account for the pulsars that reside in supernova remnants (SNRs), which amount to 4% (40) of the pulsars, leaving 11% (110) of the population detectable in the image domain with the MWA Phase II sensitivity. Out of the 110 expected pulsar detections we were able to detect 66 pulsars in the image domain, i.e. 60% of the expected number. The non-detections can be due either to spectral turnover of the pulsars at low frequencies <cit.>, which leads to reduced flux density, or to the pulsars lying in regions of diffuse emission, such as pulsar wind nebulae (PWNe) or undetected SNRs, that are not accounted for and may be excluded as extended sources. In addition to the detections in the image domain, time-domain beamformed searches contributed 17 further pulsars, bringing the total number of pulsar detections in this survey to 83, as shown in Figure <ref>, including 3 MSPs and 4 binary pulsars. In Figure <ref>, purple dots are pulsars detected by both image-domain and time-domain searches, yellow dots are pulsars detected only in images, and orange dots are pulsars detected only via periodic searches. For 14 pulsars, these are also the first low-frequency (≤400 MHz) detections, and our flux density measurements can help to improve the spectral modelling of these pulsars. Some of the detections are discussed in detail in Section <ref>. Figure <ref> shows the detected pulsars in the DM-flux density plane. It is clear that the different search methods are optimal in different parts of the parameter space, and they overlap in the region of bright, low-DM pulsars. The purple dots denote the pulsars detected in both periodicity and imaging searches, as listed in a Table given in <cit.>. These are mainly bright, low-DM pulsars, for which the two search methods overlap the most. The yellow dots denote the pulsars detected only in Stokes I images, as listed in Table <cit.>. These are mainly high-DM pulsars, which are harder to detect via periodicity searches at low frequencies. The orange dots are the pulsars detected only in beamformed searches, the details of which are given in Table <cit.>.
These pulsars are faint and hence fall below the detection threshold of the Stokes I images. The blue-shaded region is the parameter space that is exclusively available to image-based searches. Image-based searches can therefore be useful in detecting high-DM or highly scattered pulsars that may be missed by periodic searches at low frequencies, but they are sensitive primarily to very bright sources. As the mean standard deviation of the noise (σ, or RMS) for these observations ranges from 5 mJy/beam to 8 mJy/beam, the blue and green dashed lines in Figure <ref> indicate flux density thresholds of 25 mJy/beam and 40 mJy/beam, respectively, for a 5σ detection of pulsars in image-domain searches. Obtaining a DM threshold for periodicity searches is much more complicated, as the DM cutoff involves a complex relation between pulse broadening, observing frequency and the period of the pulsar. Under the assumption of a pure Kolmogorov electron density spectrum <cit.>, the scattering timescale is a non-linear function of DM and frequency, τ_d∝ DM^2.2ν^-4.4. This affects pulsar searches at low frequencies, especially when the pulse broadening time is longer than the pulsar's spin period, leading to a loss in sensitivity of periodicity searches. Figure <ref> shows the significant effect of pulse broadening (i.e. smearing due to scattering) at larger DMs for the MWA SMART survey <cit.>. This is applicable to our work as we are using the same observing and processing parameters and observing frequency. The scatter broadening is also dependent on the line of sight and is larger in the Galactic Plane compared to off-Galactic Plane latitudes. Given that this work is focused on the Galactic Plane, pulse broadening needs to be taken into account when determining the DM cutoff. As shown in Figure <ref>, the pulse broadening is higher at larger DMs. At higher DMs, the pulse broadening times are ≳300 ms for a line of sight towards the Galactic Centre region at |b|≲ 5^∘, l≳ 330^∘ or l≲ 30^∘, where one would expect such high DMs. Furthermore, DM smearing is also significant at low frequencies (154 MHz), amounting to ∼10 ms at a DM of ∼250 pc cm^-3 for the 10 kHz MWA VCS frequency channels <cit.>. Therefore, taking these effects into account, we adopt a nominal DM threshold of 250 pc cm^-3, beyond which DM smearing and pulse broadening significantly reduce the sensitivity of the search. This DM threshold is shown as a red dashed line in Figure <ref>. §.§ Pulsar detections - Imaging vs. Beamforming As mentioned earlier, both search methods are sensitive to particular areas of the DM-flux density parameter space. Given that the total number of pulsar detections in this survey is 83, Figure <ref> shows the percentages of detections made with the two search methods. More than half (60%) of the pulsar detections were made by both image-based and beamformed searches, and image-based searches appear to perform as well as beamformed searches for the Galactic Plane observations processed as part of this work. An example of the detection of PSR J1141-6545 in both imaging and beamformed searches is shown in Figure <ref>. It is seen as a 75 mJy (13σ) source in imaging and is detected at 12σ with PRESTO. The detection significance of the two methods is comparable for this pulsar, indicating that pulsars above our imaging sensitivity are generally detected at similar significance in beamformed searches.
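The scalings that set the DM threshold discussed above can be made concrete with a short sketch. The reference scattering timescale used to anchor the DM^2.2 ν^-4.4 relation below is an arbitrary assumption for illustration (scattering varies strongly with the line of sight), while the dispersive smearing uses the standard cold-plasma formula.

def scatter_broadening_ms(dm, freq_mhz, tau_ref_ms=10.0, dm_ref=250.0, freq_ref_mhz=154.0):
    # Scale a reference scattering timescale using tau_d ∝ DM^2.2 * nu^-4.4 (pure Kolmogorov
    # spectrum); tau_ref_ms at (dm_ref, freq_ref_mhz) is an assumed anchor, not a survey value.
    return tau_ref_ms * (dm / dm_ref) ** 2.2 * (freq_mhz / freq_ref_mhz) ** -4.4

def dm_smearing_ms(dm, freq_mhz, chan_width_mhz=0.01):
    # Dispersive smearing across one frequency channel, in ms; chan_width_mhz=0.01
    # corresponds to the 10 kHz MWA VCS channels.
    return 8.3e6 * dm * chan_width_mhz / freq_mhz ** 3

# e.g. at DM = 500 pc cm^-3 and 154 MHz:
# scatter_broadening_ms(500, 154) ≈ 46 ms relative to the assumed anchor,
# dm_smearing_ms(500, 154) ≈ 11 ms per 10 kHz channel.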
However, for highly scattered or high-DM pulsars, image-based searches are more favourable. One such example, PSR J1823-1115 with a high DM of 428.59 pc cm^-3, is shown in Figure <ref>. It shows the image-based detection of the pulsar with the MWA as a 125 mJy (25σ) continuum source at the position of the pulsar, as well as a detection in the RACS Stokes I image. The beamformed search, however, did not result in a detection. The non-detection is possibly due to the pulse profile of the pulsar being highly scattered at lower frequencies compared to higher frequencies. This scattered profile is demonstrated in Figure <ref>, which shows the available profiles of the pulsar at 410 MHz and 925 MHz in the European Pulsar Network (EPN http://www.epta.eu.org/epndb/) database. Due to the frequency dependence of scattering, we can expect the profile of this pulsar to be even more scattered at the MWA frequency (185 MHz), making it more difficult to detect via beamformed searches. Periodicity searches at higher frequencies, such as that of RACS (888 MHz), would be able to detect it more comfortably. This demonstrates the capability of image-based searches at low frequencies to detect similar pulsars with scatter-broadened profiles which may be missed by periodicity searches. Even though image-based searches are well suited to detecting high-DM or highly scattered pulsars, the sensitivity of the Stokes I images is still a limiting factor. This can be demonstrated by the detection of PSR J1320-5359 shown in Figure <ref>: the pulsar was detected at a high PRESTO significance of 33σ but was not seen as a continuum source in the MWA Stokes I image. The expected mean flux density of the pulsar at a frequency of 184 MHz is ∼15 mJy/beam, which is below our best 5σ detection threshold of 25 mJy/beam. Therefore, the sensitivity and confusion noise of the MWA are limiting factors in detecting fainter pulsars in image-domain searches. However, SKA-Low, with approximately 64 times more antennas, will be significantly more sensitive for image-based pulsar searches at low frequencies. The spatial resolution of the SKA will also be much better than that of the present generation of telescopes. The reduced confusion noise and improved spatial resolution of the next-generation telescope compared to current facilities will lead to the detection of more pulsars in the image plane. §.§ Comparison with literature Figure <ref> shows the flux densities of the pulsars detected in the image plane as part of this work compared to their flux densities from the literature. Sixty-eight of the pulsars detected in our sample have previously measured flux densities at low frequencies. The surveys by <cit.>, <cit.> and <cit.> provide a good comparison as they were also performed with the MWA, while <cit.> conducted their survey using the GMRT at 150 MHz, which aligns well with our MWA band at 184 MHz. Table <cit.> shows the flux densities from Stokes I images of the pulsars that are detected in both imaging and beamforming searches, together with the recorded low-frequency fluxes from the literature. Eighteen pulsars were previously detected by <cit.> using MWA images generated as part of the GLEAM survey <cit.>, with the GLEAM sub-band centred at 151 MHz, which matches well with our central frequency of 185 MHz. Twenty-four pulsars were previously detected in <cit.> and six in <cit.> using periodicity searches. One pulsar was previously detected by <cit.> using LOFAR observations and periodicity searches.
For the pulsars detected only in the Stokes I image, shown in Table <cit.>, only two were previously detected in the GMRT 150 MHz radio continuum survey <cit.>. As can be seen in Figure <ref>, the flux densities measured in our work agree (within the errors) with these earlier surveys. However, it is important to note that the fluxes from the literature were obtained with different instruments and observational setups. The low-frequency measurements are also more affected by scintillation than higher-frequency measurements, and pulsars are intrinsically variable, which may lead to differences in the measured flux densities on particular days. The measured flux densities also depend on the DM of the pulsar and the corresponding scintillation effects it experiences. Moreover, flux densities may vary depending on the detection method, i.e. time-domain detection, which uses the radiometer equation to derive the flux density, versus image-based detection, which takes the flux density of the source directly from the images. Hence, flux densities measured by different surveys and with different search methods may vary by an order of magnitude. Given this, the agreement of our flux density measurements with the earlier measurements (as shown in Figure <ref>, left panel) is remarkably good, and shows that MWA images can provide reliable flux density measurements despite being potentially prone to blending (caused by the spatial resolution of the MWA Phase II being limited to ∼1 arcmin). §.§ Efficiency of the criteria The pulsars detected in the image domain can be used as an efficiency marker for the developed criteria. This can be done by comparing the number of pulsars and candidates selected by the threshold of each criterion. Such a comparison helps us understand the balance between genuine pulsar detections and false positives, and allows the criteria to be refined so that the number of false positives does not greatly exceed the number of actual detections. Table <ref> shows the number of candidates and the pulsars that were selected by each criterion before the criteria were combined. The table also shows the efficiency of each criterion when the numbers of candidates and pulsars are compared. The spectral index criterion performs the best of the three, detecting 50 pulsars with a manageable number of candidates (fewer than 1000) for follow-up. The circular polarisation criterion recovered six pulsars among 150 candidates and therefore shows the potential of circular polarisation surveys to detect pulsars that may be missed by other criteria. The variability criterion recovered two pulsars amongst 200 candidates. There are several possible reasons for the low efficiency of the variability criterion. One is that the pulsars detected in this survey do not vary on the timescales probed by our variability criterion. As diffractive scintillation is the main cause of pulsar variability on our minute timescales, the criterion is most applicable to low-DM pulsars. Given that this work is focused on the Galactic Plane, where high-DM pulsars (DMs of order hundreds of pc cm^-3) dominate, the variability criterion is unlikely to perform to its full potential. In order to further reduce the number of candidates and to select the sources that are the most probable pulsar candidates, we combine the criteria and produce a list of candidates that satisfy more than one criterion.
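A minimal sketch of this combination step is given below, assuming the candidate catalogue is held in a pandas DataFrame; the column names (spectral_index, circ_pol_frac, modulation, significance) are hypothetical and only illustrate the thresholds quoted above.

import pandas as pd

def flag_candidates(cat: pd.DataFrame) -> pd.DataFrame:
    # Apply the three selection criteria and count how many each source satisfies.
    cat = cat.copy()
    cat["steep"] = cat["spectral_index"] < -2.0
    cat["circ_pol"] = cat["circ_pol_frac"] > 0.07
    cat["variable"] = (cat["significance"] > 5.0) & (cat["modulation"] > 0.20)
    cat["n_criteria"] = cat[["steep", "circ_pol", "variable"]].sum(axis=1)
    return cat

# Candidates satisfying more than one criterion are ranked highest for follow-up:
# ranked = flag_candidates(catalogue).query("n_criteria >= 2").sort_values("n_criteria", ascending=False)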
Table <ref> shows the candidates and the pulsars detected when the criteria are combined. The combination results in a reduction in the number of candidates. Four pulsars were detected by the combined criteria of steep spectrum and circular polarisation and two by the combined criteria of steep spectrum and variability. The other combinations do not yield any pulsar detections. Even though combining the criteria reduces the number of candidates, one has to be careful while doing so, because the combination can excise good pulsar candidates that do not satisfy more than one criterion. This discrepancy in selecting candidates when combined criteria are applied arises mainly from an incomplete understanding of the criteria. For example, the circular polarisation criterion may not be fully reliable because an imperfect beam model leads to inadequate leakage characterisation. Overall, we conclude that the steep spectrum criterion is better than the other two when only individual criteria are used (not combined). Circular polarisation has the advantage of being less affected by confusion noise and can be useful in detecting pulsars that may be missed in the Stokes I image due to increased noise. The circular polarisation criterion also yields fewer candidates, which makes follow-up and confirmation easier. However, not all pulsars are circularly polarised, and for some pulsars the circular polarisation averaged over the pulse period is zero due to a sign flip in the Stokes V pulse profile. Moreover, it should also be noted that not all circularly polarised sources are pulsars. The variability criterion will be most fruitful for detecting low-DM pulsars that are affected by diffractive scintillation. When criteria are combined, the combination of steep spectrum and circular polarisation works better than the other combinations and has the highest efficiency. In an ideal case, one would follow up on all candidates, but given the constraints of telescope time and computational costs, combining the criteria to rank the candidates is the most feasible way to reduce the number of follow-up candidates and select the best possible candidates for future work. §.§ Spectral energy distributions The exact nature of pulsar emission remains an open question even after decades of research. Studying the radio spectra of pulsars provides clues to their emission mechanism. However, these investigations are limited by the small number of flux density measurements, especially at frequencies below 400 MHz, which limits the accuracy of spectral modelling. The flux density measurements, especially the first low-frequency detections of pulsars in this work, will be helpful in bridging the gap between low- and high-frequency flux density measurements and will help with broadband modelling of their spectra. For the 66 pulsars detected in the image plane, accurate flux density measurements can be obtained with 10%-20% uncertainty, depending on the position of the pulsar in the Stokes I image. Using these flux density measurements, we fitted spectra using the method implemented in the software pulsar_spectra [<https://github.com/NickSwainston/pulsar_spectra>] <cit.>. We then investigated the change, if any, in the resulting spectral model upon the addition of our low-frequency flux density measurements.
The list of spectral fits used and their acronyms are given in Table <ref>. With only two flux density measurements, a two-point spectral index can be calculated analytically (without fitting). Therefore, in order to increase the reliability of the spectral fits produced, the pulsar_spectra software requires at least three data points for a simple power law fit and four or more data points for the other spectral models. The software does not produce any fits if this requirement is not satisfied. Taking this threshold into consideration, out of the 66 pulsars, 63 had a sufficient number of flux density measurements in the literature to produce spectral fits without the addition of our work, one did not have enough measurements to fit a model without our data points, and two did not have a sufficient number of data points to create a spectral fit even after the addition of our work. The lack of measurements at frequencies below 400 MHz shows the need for more low-frequency campaigns to fill the gap in the literature. Reliable flux density measurements of pulsars with the MWA below 300 MHz will be useful for better spectral modelling of pulsars. This is exemplified by the fact that 15 out of the 66 pulsars detected in Stokes I images changed their best-fitting spectral model after the addition of our measurements, as shown in Table <ref>. The table also shows the Akaike Information Criterion (AIC), which is a measure of how much information about the data is retained by the model without over-fitting. The model that results in the lowest AIC is the most likely to be a sufficiently accurate model that does not over-fit the data. The AIC of the model before and after the addition of the flux density from this work is given in Table <ref>. We note that the AIC value may not be ideal for comparing the accuracy of the fits before and after the addition of our points, because the AIC is mainly aimed at finding the "minimal model" which describes the data sufficiently well without over-fitting. This demonstrates that the lack of low-frequency flux densities in the literature causes large uncertainties and volatility in the spectral modelling (i.e. the addition of a single data point can change the fitted spectral model). Five pulsars changed from LFTO to BPL (Figure <ref>), two pulsars changed from SPL to HFCO (Figure <ref>), two pulsars changed from LFTO to DTOS (Figure <ref>) and one pulsar changed from SPL to LFTO (Figure <ref>). The remaining five pulsars changed models to equally or more complex ones (Figure <ref>). As can be seen from the AIC values before and after the addition of the data from this work, the majority of the spectral fits including our measurements are favoured. For some pulsars, the values are comparable for both fits, and in a small number of cases the AIC favours the model fitted before the new data point was added. Among the millisecond pulsars, the broken power law for PSR J0737-3039A and the double turnover spectrum for PSR J2145-0750 are consistent before and after the addition of our low-frequency flux density measurements. For one of the millisecond pulsars, PSR J1902-5105, a simple power law fit was produced (Figure <ref>) only after the addition of our flux density measurement. This shows the importance of performing more flux density measurements at lower and intermediate frequencies, which will improve the spectral modelling of pulsars and help explain the emission mechanism behind them.
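As an aside, the two-point spectral index mentioned above is fully determined by two measurements and therefore carries no information about goodness of fit, which is why at least three points are required before a model is fitted. A minimal sketch (function names are ours):

import math

def two_point_spectral_index(s1, nu1, s2, nu2):
    # Analytic spectral index alpha from two flux densities, assuming S ∝ nu^alpha
    return math.log(s1 / s2) / math.log(nu1 / nu2)

def enough_points_for_model(n_measurements, model="simple_power_law"):
    # Minimum-data-point check mirroring the pulsar_spectra requirement described above
    required = 3 if model == "simple_power_law" else 4
    return n_measurements >= required

# e.g. two_point_spectral_index(0.30, 185e6, 0.05, 1400e6) ≈ -0.89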
Measuring flux densities at multiple epochs is also important for improving the reliability of these measurements by reducing the effects of flux variability. However, for two of the millisecond pulsars, PSR J1751-2737 and J1748-3009, we were unable to produce a reliable spectral fit even with the addition of our low-frequency measurements, as the pulsar_spectra software requires at least three points for a simple power law fit and four or more for the other spectral fits. Even though pulsar_spectra provides reliable spectral fits, there can be discrepancies in the model fits for several reasons. One reason can be the lack, or small number, of measurements between 185 MHz and 1.4 GHz, which may make the fit less reliable. For example, the spectral fit for PSR J1827-0958 may be unreliable due to the lack of measurements between our flux density at 185 MHz and the measurements at higher frequencies, as shown in Figure <ref>. The fits may also be skewed by underestimated errors, which can arise when scintillation, which can potentially increase the measured flux density, is not taken into account, especially at low frequencies. §.§ The undetected pulsar population Despite the detection of many known pulsars, there are many others that were not detected in imaging or periodic searches, and these non-detections need to be addressed. The non-detections in imaging can be attributed to spectral turnover at low frequencies, as highlighted by <cit.>, to confusion of the pulsars with other extended structures such as supernova remnants and pulsar wind nebulae near the Galactic Plane, or to the high confusion noise at the edge of the image. For periodic searches, the detection sensitivity is mainly affected by strong scattering and scintillation, which make it harder to detect highly scattered pulsars and pulsars that scintillate down to below the sensitivity threshold. The sky brightness temperature (and hence sky noise) at low radio frequencies (∼400 MHz) scales with frequency as T_b∝ν^-2.6 <cit.>, such that it becomes dominant at lower frequencies, making pulsar detections increasingly difficult. This affects both pulsed and imaging searches equally, with imaging searches also being affected by confusion and reduced image fidelity in the presence of bright Galactic HII regions and supernova remnants. The pulsar J1513-5908 is a good example of a bright pulsar confused by nearby bright, extended emission from the supernova remnant SNR G320.4-1.2, as shown in Figure <ref>. The detectability at low frequencies is also affected by the spectral shape of the pulsar emission. Approximately 10% of the known pulsar population is shown to have spectral turnovers <cit.>. The pulsar population that we expect to detect also includes a number of pulsars with known spectral turnovers, such as PSR J1644-4559, which has a turnover at 600 MHz <cit.>. However, the lack of more accurate pulsar flux densities at low frequencies makes it difficult to confidently determine the spectral behaviour of pulsars at frequencies below 300 MHz. For periodic searches, a factor that affects the detection of almost 40% of the Southern pulsar population is the rapid decline in sensitivity beyond DM ∼ 250 pc cm^-3 for time-series data at MWA frequencies. This is due to scattering and interstellar scintillation, which become very significant close to the Galactic Plane. In summary, we conclude that the non-detections of the pulsars can be due to a combination of effects.
There is clear evidence that the majority of the non-detections in the image plane are due to the increased confusion noise near the Galactic Plane and the low-frequency spectral turnover of the pulsars. The non-detections in periodic searches result from a combination of factors such as reduced sensitivity at high DMs, scattering and scintillation. § SUMMARY AND CONCLUSION The primary goal of the project was to discover new pulsars using the methodologies described in <cit.>. Even though we have not detected any new pulsars, we have detected multiple known pulsars, leading to the most comprehensive imaging survey of Southern pulsars at low frequencies with the MWA. We have identified 83 pulsars from the 12 MWA VCS observations covering the Galactic Plane; 66 of these pulsars are detected in wide-field MWA images. The imaging approach is complementary to pulsation studies because it is not affected by pulse scatter broadening or dispersion, which potentially makes image-based searches sensitive to pulsars missed by periodicity searches. This work also explores the unique parameter space that is exclusively accessible to image-domain pulsar searches, e.g. bright, high-DM pulsars, which are very difficult to detect at low frequencies using periodicity searches. We also presented new low-frequency flux density measurements for 14 pulsars from the ATNF pulsar catalogue. These low-frequency flux density measurements can help to model pulsar spectra and address questions about the presence of low-frequency spectral turnovers. It is noticeable that, given the scarcity of low-frequency flux density measurements of pulsars, the addition of a single data point can drastically change the fitted models. Therefore, providing more flux density measurements at low frequencies will improve the robustness of pulsar spectral modelling and lead to a better understanding of pulsar emission mechanisms. We have carried out a preliminary spectral index study of our sample. We find that for 15 pulsars there is a change in the spectral model after the addition of our low-frequency measurement. However, these changes in the spectral models may have been affected by the lack of measurements at intermediate frequencies such as 300 MHz or by the large fractional bandwidths of the existing measurements. Another factor that affects the spectral model is the variability of the flux density measurements, which may lead to over- or under-estimates of the flux density on a particular day. This work also emphasises the need for more flux density measurements at low frequencies; in particular, including more flux density measurements from different epochs will lead to better modelling and understanding of pulsar spectra and of the population of Southern-sky pulsars. More flux density measurements can also answer questions about the low-frequency spectral turnovers of pulsars. Our identification of pulsars was largely limited by the sensitivity and the high confusion region near the Galactic Plane. The number of detections may also be affected by low-frequency spectral turnover, rendering some pulsars undetectable. We can conclude that a combination of all these effects is responsible for the pulsar non-detections in the image domain, while time-domain searches were mainly affected by scattering and scintillation. Based on the work done, we can conclude that imaging is a good way to find new pulsars in less explored parts of the DM space.
Although the main pulsar surveys will be conducted with SKA-Mid, SKA-Low can also be a very useful tool for searching for new pulsars, mainly because of its larger FoV and the fact that many pulsars are brighter at lower frequencies. Image-based strategies in the SKA era may further expand the parameter space for these searches. This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamaji People as the Traditional Owners and native title holders of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the Pawsey Supercomputing Centre which is supported by the Western Australian and Australian Governments. Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Research Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. This research has made use of NASA's Astrophysics Data System. Data Availability Statement The data can be made available upon reasonable request.
http://arxiv.org/abs/2407.13221v1
20240718070649
Multimodal Label Relevance Ranking via Reinforcement Learning
[ "Taian Guo", "Taolin Zhang", "Haoqian Wu", "Hanjun Li", "Ruizhi Qiao", "Xing Sun" ]
cs.CV
[ "cs.CV" ]
T. Guo et al. ^1 Tencent Youtu Lab {taianguo,linuswu,hanjunli,ruizhiqiao,winfredsun}@tencent.com ^2 Tsinghua Shenzhen International Graduate School, Tsinghua University ztl23@mails.tsinghua.edu.cn Equal contribution. Work done during internship at Tencent. Corresponding author: Ruizhi Qiao. Multimodal Label Relevance Ranking via Reinforcement Learning Taian Guo^1 (0000-0003-0787-511X), Taolin Zhang^2 (0009-0006-2441-2861), Haoqian Wu^1 (0000-0003-1035-1499), Hanjun Li^1 (0009-0006-4211-7479), Ruizhi Qiao^1 (0000-0002-3663-0149), Xing Sun^1 (0000-0001-8132-9083) § ABSTRACT Conventional multi-label recognition methods often focus on label confidence, frequently overlooking the pivotal role of partial order relations consistent with human preference. To resolve these issues, we introduce a novel method for multimodal label relevance ranking, named Label Relevance Ranking with Proximal Policy Optimization (LR2PPO), which effectively discerns partial order relations among labels. LR2PPO first utilizes partial order pairs in the target domain to train a reward model, which aims to capture human preference intrinsic to the specific scenario. Furthermore, we meticulously design a state representation and a policy loss tailored for ranking tasks, enabling LR2PPO to boost the performance of the label relevance ranking model and largely reduce the requirement of partial order annotation for transferring to new scenes. To assist in the evaluation of our approach and similar methods, we further propose a novel benchmark dataset, LRMovieNet, featuring multimodal labels and their corresponding partial order data. Extensive experiments demonstrate that our LR2PPO algorithm achieves state-of-the-art performance, proving its effectiveness in addressing the multimodal label relevance ranking problem. Codes and the proposed LRMovieNet dataset are publicly available at <https://github.com/ChazzyGordon/LR2PPO>. § INTRODUCTION Multi-label recognition, a fundamental task in computer vision, aims to identify all possible labels contained within a variety of media forms such as images and videos. Visual or multimodal recognition is broadly applied in areas such as scene understanding, intelligent content moderation, recommendation systems, surveillance systems, and autonomous driving. However, due to the complexity of the real world, simple label recognition often proves to be insufficient, as it treats all predicted labels equally without considering the priority of human preferences. A feasible solution could be to rank all the labels according to their relevance to a specific scene. This approach would allow participants to focus on labels with high scene relevance, while reducing the importance of secondary labels, regardless of their potentially high label confidence. In contrast to predicting label confidence, the task of label relevance constitutes a more challenging problem that cannot be sufficiently tackled via methods such as calibration <cit.>, as the main objective of these methods is to correct label biases or inaccuracies to improve model performance, rather than establishing relevance between the label and the input data. As illustrated in Fig.
<ref>, label confidence typically refers to the estimation from a model about the probability of a label's occurrence, while label relevance primarily denotes the significance of the label to the primary theme of multimodal inputs. The observation also demonstrates that relevance labels bear a closer alignment with human preferences. Understandably, ranking the labels in order of relevance can be employed to emphasize the important labels. Recently, Learning to Rank (LTR) methods <cit.> have been explored to tackle ranking problems. However, the primary focus of ranking techniques is based on retrieved documents or recommendation lists, rather than on the target label set. Therefore, these approaches are not directly and effectively applicable to address the problem of label relevance ranking due to the data and task setting. In addition to LTR, there is a more theoretical branch of research that deals with the problem of ranking labels, known as label ranking <cit.>. However, previous works on label ranking are oblivious to the semantic information of the label classes. Therefore, the basis for ranking is the difference between the positive and negative instances within each class label, rather than truly ranking the labels based on the difference in relevance between the label and the input. Recognizing the significance of label relevance and acknowledging the research gap in this field, our study pioneers an investigation into the relevance between labels and multimodal inputs of video clips. We rank the labels with the relevance scores, thereby facilitating participants to extract primary labels of the clip. To show the difference in relevance between distinct labels and the video clips, we develop a multimodal label relevance dataset LRMovieNet with relevance categories annotation. Expanded from MovieNet <cit.>, LRMovieNet contains various types of multimodal labels and a broad spectrum of label semantic levels, making it more capable of representing situations in the real world. Intuitively, label relevance ranking can be addressed by performing a simple regression towards the ground truth relevance score. However, this approach has some obvious shortcomings: firstly, the definition of relevance categories does not perfectly conform to human preferences for label relevance. Secondly, the range of relevance scores cannot accurately distinguish the differences in relevance between labels, which limits the accuracy of the label relevance ranking model, especially when transferring to a new scenario. Given the above shortcomings, it is necessary to design a method that directly takes advantage of the differences in relevance between different labels and multimodal inputs to better and more efficiently transfer the label relevance ranking ability from an existing scenario to a new one. For clarity, we term the original scenario as source domain, and the new scenario with new labels or new video clips as target domain. By introducing a new state definition and policy loss suitable for the label relevance ranking task, our LR2PPO algorithm is able to effectively utilize the partial order relations in the target domain. This makes the ranking model more in line with human preferences, significantly improving the performance of the label relevance ranking algorithm. Specifically, we train a reward model over the partial order annotation to align with human preference in the target domain, and then utilize it to guide the training of the LR2PPO framework. 
It is sufficient to train the reward model using a few partial order annotations from the target domain, along with partial order pair samples augmented from the source domain. Since partial order annotations can better reflect human preferences for primary labels compared to relevance category definitions, this approach can effectively improve the label relevance ranking performance in the target domain. The main contributions of our work can be summarized as follows: * We recognize the significant role of label relevance, and analyze the limitations of previous ranking methods when dealing with label relevance. To solve this problem, we propose a multimodal label relevance ranking approach to rank the labels according to the relevance between label and the multimodal input. To the best of our knowledge, this is the first work to explore the ranking in the perspective of label relevance. * To better generalize the capability to new scenarios, we design a paradigm that transfers label relevance ranking ability from the source domain to the target domain. Besides, we propose the LR2PPO (Label Relevance Ranking with Proximal Policy Optimization) to effectively mine the partial order relations among labels. * To better evaluate the effectiveness of LR2PPO, we annotate each video clip with corresponding class labels and their relevance order of the MovieNet dataset <cit.>, and develop a new multimodal label relevance ranking benchmark dataset, LRMovieNet (Label Relevance of MovieNet). Comprehensive experiments on this dataset and traditional LTR datasets demonstrate the effectiveness of our proposed LR2PPO algorithm. § RELATED WORKS §.§ Learning to Rank Learning to rank methods can be categorized into pointwise, pairwise, and listwise approaches. Classic algorithms include Subset Ranking <cit.>, McRank <cit.>, Prank <cit.> (pointwise), RankNet, LambdaRank, LambdaMart <cit.> (pairwise), and ListNet <cit.>, ListMLE <cit.>, DLCM <cit.>, SetRank <cit.>, RankFormer <cit.> (listwise). Generative models like miRNN <cit.> estimate the entire sequence directly for optimal sequence selection. These methods are primarily used in information retrieval and recommender systems, and differ from label relevance ranking in that they typically rank retrieved documents or recommendation list rather than labels. Meanwhile, label ranking <cit.> is a rather theoretical research field that investigates the relative order of labels in a closed label set. These methods typically lack perception of the textual semantic information of categories, mainly learning the order relationship based on the difference between positive and negative instances in the training set, rather than truly according to the relevance between the label and the input. In addition, these methods heavily rely on manual annotation, which also limits their application in real-world scenarios. Moreover, these methods have primarily focused on single modality, mainly images, and object labels, suitable for relatively simple scenarios. These methods differ significantly from our proposed LR2PPO, which for the first time explores the ranking of multimodal labels according to the relevance between labels and input. LR2PPO also handles a diverse set of multimodal labels, including not only objects but also events, attributes, and character identities, which are often more challenging and crucial in real-world multimodal video label relevance ranking scenarios. §.§ Reinforcement Learning Reinforcement learning is a research field of great significance. 
Classic reinforcement learning methods, including algorithms like Monte Carlo <cit.>, Q-Learning <cit.>, DQN <cit.>, DPG <cit.>, DDPG <cit.>, TRPO <cit.>, etc., are broadly employed in gaming, robot control, financial trading, etc. Recently, the Proximal Policy Optimization (PPO <cit.>) algorithm proposed by OpenAI enhances the policy update process, achieving significant improvements in many tasks. InstructGPT <cit.> adopts PPO for human preference feedback learning, significantly improving the performance of language generation. In order to make the ranking model more effectively understand the human preference inherent in the partial order annotation from the target domain, we adapt the Proximal Policy Optimization (PPO) algorithm to the label relevance ranking task. By designing state definitions and a policy loss tailored for label relevance ranking, partial order relations are effectively mined in accordance with human preference, improving the performance of the label relevance ranking model. §.§ Vision-Language Pretraining Works in this area include two-stream models like CLIP <cit.>, ALIGN <cit.>, and single-stream models like ViLBERT <cit.>, UNITER <cit.>, UNIMO <cit.>, SOHO <cit.>, ALBEF <cit.>, VLMO <cit.>, TCL <cit.>, X-VLM <cit.>, BLIP <cit.>, BLIP2 <cit.>, CoCa <cit.>. These works are mainly applied in tasks like Visual Question Answering (VQA), visual entailment, visual grounding, multimodal retrieval, etc. Our proposed LR2PPO also applies to multimodal inputs, using two-stream Transformers to extract visual and textual features, which are then fused through the cross attention module for subsequent label relevance score prediction. § METHOD §.§ Preliminary Given a multi-label classification task with a set of labels ℒ = {l_1, l_2, …, l_n}, an instance x is associated with a label subset ℒ_x ⊆ℒ. The label confidence of a label l_i for instance x, denoted as C(l_i|x), is defined as the probability that l_i is a correct label for x, i.e., C(l_i|x) = P(l_i ∈ℒ_x|x). The label relevance of a label l_i for instance x, denoted as R(l_i|x), is defined as the degree of association between l_i and x, i.e., R(l_i|x) = f(l_i, x), where f is a function that measures the degree of association between l_i and x. We are given V video clips, where the j-th clip consists of frames F^j = [F^j_0, F^j_1, ..., F^j_N-1], with N representing the total number of frames extracted from a video clip, and j ranging from 0 to V-1. Each video clip is accompanied by text descriptions T^j and a set of recognized labels denoted as ℒ^j, where ℒ^j={l^j_0, l^j_1,...,l^j_i,...,l^j_|ℒ^j|-1}, and |ℒ^j| is the number of labels in the j-th video clip. The objective of label relevance ranking is to learn a ranking function f_rank: F^j, T^j, ℒ^j →U^j, where U^j=[u^j_0, u^j_1,..., u^j_i,...,u^j_|ℒ^j|-1] represents the ranking result of the label set ℒ^j. §.§ Label Relevance Ranking with Proximal Policy Optimization (LR2PPO) We now present the details of our LR2PPO. As depicted in Fig. <ref>, LR2PPO primarily consists of three network modules: actor, reward, and critic, and the training can be divided into 3 stages. We first discuss the relations between the three training stages, and then we provide a detailed explanation of each stage. Relations between 3 Training Stages. We use a two-stream Transformer to handle multimodal input and train three models in LR2PPO: actor, reward, and critic. In stage 1, the actor model is supervised on the source domain to obtain a label relevance ranking base model.
Stages 2 and 3 generalize the actor model to the target domain. In stage 2, the reward model is trained with a few annotated label pairs in the target domain and augmented pairs from the source domain. In stage 3, the reward and critic networks are initialized with the stage 2 reward model. Then, the actor and critic are jointly optimized under LR2PPO with label pair data, guided by the stage 2 reward model, to instruct the actor network with partial order relations in the target domain. Finally, the optimized actor network, structurally identical to the stage 1 actor, is used as the final label relevance ranking network, adding no inference overhead. Stage 1. Label Relevance Ranking Base Model. During Stage 1, the training of the label relevance ranking base model adopts a supervised paradigm, i.e., it is trained on the source domain based on manually annotated relevance categories (high, medium and low). The label relevance ranking base network accepts multimodal inputs of multiple video frames F^j, text descriptions T^j, and the text label set ℒ^j. For a video clip, each text label is concatenated with the text descriptions. Then ViT <cit.> and RoBERTa <cit.> are utilized to extract visual and textual features, respectively. Subsequently, the multimodal features are fused through the cross attention module. Finally, through the regression head, the relevance score of each label is predicted, and the SmoothL1 loss is calculated based on the manually annotated relevance categories as the final loss, which can be formulated as: L_SmoothL1(p) = 0.5 (p - y)^2 / β, if |p - y| < β; |p - y| - 0.5 β, otherwise, where p and y are the predicted and ground truth relevance scores of a label in a video clip, respectively. y = 2, 1, 0 corresponds to high, medium and low relevance, respectively. β is a hyper-parameter. Stage 2. Reward Model. We train a reward model on the target domain in stage 2. With a few label pair annotations in the target domain, along with augmented pairs sampled from the source domain, the reward model can be trained to assign rewards to the partial order relationships between label pairs of a given clip. This kind of partial order annotation aligns with human preference in label relevance ranking, and thus benefits relevance ranking performance with limited annotation data. For a video clip, the reward model takes as input the concatenation of the initial pair of labels and the same labels in reranked order (either in the ground truth order of relevance or in reverse), and predicts the reward of the reranked pair given the initial pair. The loss function adopted for the training of the reward model can be formulated as: L_RM(g_ini, g_c) = max(0, m_R - (R([g_ini, g_c]) - R([g_ini, flip(g_c)]))), where g_ini and g_c represent the initial label pair and the pair in ground truth order respectively, [·, ·] denotes the concatenation of two label pairs, flip(·) flips the order of a label pair, and R(·) denotes the reward model. m_R is a margin hyperparameter. Stage 3. LR2PPO. Stage 3 builds upon the first two training stages to jointly train the LR2PPO framework. To better address the issues in label relevance ranking, we modify the state definition and policy loss of the original PPO. We redefine the state s_t as the order of a group of labels (specifically, a label pair) at timestep t, with the initial state s_0 being the original label order at input. The policy network (i.e., the
actor model) predicts the relevance scores of the labels and ranks them from high to low to obtain a new label order as the next state s_t+1; this state transition is regarded as the action a_t. The process can be viewed as the policy π_θ of state transition. The combination of state s_t (the initial label pair) and action a_t (implicitly representing the reranked pair) is evaluated by the reward model to obtain a reward r_t. We denote the target value function estimate at time step t as V^target_t, which can be formulated as: V_t^target = r_t + γ r_t+1 + γ^2 r_t+2 + … + γ^T-t-1 r_T-1 + γ^T-t V_ω_old(s_T), where V_ω_old(s_T) denotes the old value function estimate at state s_T. ω denotes the trainable parameters of the employed state value network (aka. critic model) V_ω(·), while V_ω_old(·) represents the old state value network. γ is the discount factor and T is the terminal time step. We can further obtain the advantage estimate Â_t at time step t via the target value V_t^target and the critic model's prediction for the value of the current state s_t: Â_t = V_t^target - V_ω_old(s_t), where V_ω_old(s_t) denotes the old value function estimate at state s_t. In typical reinforcement learning tasks, such as gaming, decision control, language generation, etc., PPO usually takes the maximum component in the vector of the predicted action probability distribution to obtain the ratio term of the policy loss. (See the supplementary material for more details.) However, in the label relevance ranking task, a complete probability vector is required to represent the change in label order, i.e., a state transition. Thus, it is difficult to directly build the ratio term of the policy loss from the action probability. To address this problem, we first define the partial order function: H_partial(p_t^1,p_t^2) = max(0, m - (p_t^1 - p_t^2)), where p_t^1 and p_t^2 represent the predicted scores of the input label pair by the actor network at time step t, i.e., π_θ(a_t|s_t), and m is a margin hyperparameter, which helps the model better distinguish between correct and incorrect predictions and makes the policy loss more precise. Maximizing the surrogate objective is equivalent to minimizing the policy loss in the context of reinforcement learning. In the original PPO, the ratio r_t(θ) is adopted to measure the change in policy and serves as a multiplicative factor in the surrogate objective, encouraging the policy to increase the probability of actions that have a positive advantage and decrease the probability of actions that have a negative advantage. However, in label relevance ranking, determining the action for a single state transition necessitates a comprehensive label sequence probability distribution. Consequently, if we employ the ratio calculation approach of the original PPO, the ratio fails to encapsulate the change between the new and old policies, thereby inhibiting effective adjustment of the advantage within the surrogate objective. This issue persists even when adopting the clipped objective function. Please refer to Sec. <ref> for more experimental analysis. To solve this problem, we propose the partial order ratio r_t^'(θ) to provide a more suitable adjustment for the advantage in the surrogate objective. It is a function that depends on the sign of Â_t, the estimated advantage at time step t. In practice, we utilize a small negative threshold δ instead of zero to stabilize the joint training of the LR2PPO framework. Specifically, r_t^'(θ) is formulated as: r_t^'(θ) = -H_partial(p_t^1,p_t^2) if Â_t ≥δ, and r_t^'(θ) = -H_partial(p_t^2,p_t^1) if Â_t < δ.
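A minimal PyTorch-style sketch of the stage 2 hinge loss, the partial order function and the partial order ratio defined above is given below. Tensor shapes, variable names and the reduction over the batch are our assumptions; the snippet is meant to illustrate the formulas rather than reproduce the released implementation.

import torch

def reward_margin_loss(r_correct, r_flipped, margin_r=1.0):
    # Stage 2: hinge loss encouraging R([g_ini, g_c]) > R([g_ini, flip(g_c)]) by a margin m_R
    return torch.clamp(margin_r - (r_correct - r_flipped), min=0).mean()

def h_partial(p1, p2, m=1.0):
    # H_partial(p1, p2) = max(0, m - (p1 - p2))
    return torch.clamp(m - (p1 - p2), min=0)

def partial_order_ratio(p1, p2, advantage, delta=-0.1, m=1.0):
    # r'_t(theta): -H_partial(p1, p2) when A_t >= delta, otherwise -H_partial(p2, p1)
    return torch.where(advantage >= delta, -h_partial(p1, p2, m), -h_partial(p2, p1, m))

def policy_loss(p1, p2, advantage, delta=-0.1, m=1.0):
    # L^PF = -E_t[ r'_t(theta) * |A_t| ]
    return -(partial_order_ratio(p1, p2, advantage, delta, m) * advantage.abs()).mean()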
The proposed partial order ratio r_t^'(θ) encourages the model to correctly rank the labels by penalizing incorrect orderings. In Eq. (<ref>), assuming the advantage Â_t surpasses δ (i.e., Â_t ≥δ), the reward model favors the first label (scored p_t^1) over the second (scored p_t^2). If p_t^1 > p_t^2, the absolute value of r_t^'(θ) falls below m, lessening the penalty in Eq. (<ref>). Conversely, if the advantage is below δ, the opposite happens. The policy function loss of LR2PPO is formulated as: L_LR2PPO^PF(θ) = - 𝔼_t ( r^'_t(θ) abs(Â_t) ). In our design, the order of label pairs is adjusted based on the relative magnitude of the advantage Â_t and δ. The absolute value function abs(·) in Eq. (<ref>) ensures that the advantage is always positive, reflecting the fact that moving a more important label to a higher position is beneficial. At the same time, taking the absolute value of the advantage indicates that after adjusting the order of the labels, i.e., moving the more important label to the front (according to the relative magnitude of the original advantage Â_t and δ), the advantage maintains a positive value. In this way, the policy loss is better suited to the label relevance ranking task, ensuring the ranking performance of LR2PPO. Meanwhile, as in the original PPO, the value function loss of LR2PPO is given by: L_LR2PPO^VF (ω) = L^VF(ω) = 𝔼_t [ ( V_ω(s_t) - V^target_t )^2 ]. This is the expected value, at time step t, of the squared difference between the value function estimate V_ω(s_t) and the target value function estimate V^target_t, under the policy parameters θ. This loss function measures the discrepancy between the predicted and actual value functions, driving the model to better estimate the expected return. As in the original PPO, we utilize an entropy bonus S(π_θ) to encourage exploration by maximizing the entropy of the policy, and employ a KL penalty KL_penalty (π_θ_old, π_θ) to constrain policy updates and prevent large performance drops during optimization. Please refer to the supplementary material for more details. Finally, the overall loss function of LR2PPO combines the policy function loss, the value function loss, the entropy bonus, and the KL penalty: L_LR2PPO(θ,ω) = L_LR2PPO^PF(θ) + c_1 L_LR2PPO^VF(ω) - c_2 S(π_θ) + c_3 KL_penalty (π_θ_old, π_θ), where c_1, c_2 and c_3 are hyper-parameter coefficients. In summary, our LR2PPO algorithm leverages a combination of a label relevance ranking base model, a reward model, and a critic model, trained in a three-stage process. The algorithm is guided by a carefully designed loss function that encourages correct label relevance ranking, accurate value estimation and sufficient exploration, while imposing a constraint on policy updates. The pseudo-code of our LR2PPO is provided in Algorithm <ref>. § LABEL RELEVANCE RANKING DATASET Our objective in this paper is to tackle the challenge of label relevance ranking within multimodal scenarios, with the aim of better identifying salient and core labels in these contexts. However, we find that existing label ranking datasets are often designed with a focus on single-image inputs, lack a text modality, and their fixed label systems limit the richness and diversity of labels. Furthermore, datasets related to Learning to Rank are typically tailored for tasks such as document ranking, making them unsuitable for label relevance ranking tasks.
Our proposed method for multimodal label relevance ranking is primarily designed for multimodal scenarios that feature a rich and diverse array of labels, particularly in typical scenarios like video clips. In our search for suitable datasets, we identify the MovieNet dataset as a rich source of multimodal video data. However, the MovieNet dataset only provides image-level object label annotations, while the wealth of information available in movie video clips can be used to extract a broad range of multimodal labels. To address this gap, we have undertaken a process of further label extraction and cleaning from the MovieNet dataset, with the aim of transforming it into a benchmark for label relevance ranking. This process has allowed us to create a more comprehensive and versatile dataset, better suited to the challenges of multimodal label relevance ranking. Specifically, we select 3,206 clips from 219 videos in the MovieNet dataset <cit.>. For each movie clip, we extract frames from the video and input them into the RAM model <cit.> to obtain image labels. Concurrently, we input the descriptions of each movie clip into the LLaMa2 model <cit.> and extract corresponding class labels. These generated image and text labels are then filtered and modified manually, which ensures that accurate and comprehensive annotations are selected for the video clips. We also standardize each clip to 20 labels through truncation or augmentation. As a result, we annotate 101,627 labels for 2,551 clips, with a total of 15,234 distinct label classes. We refer to the new benchmark obtained from our further annotation of the MovieNet dataset as LRMovieNet (Label Relevance of MovieNet). § EXPERIMENTS §.§ Experiments Setup LRMovieNet Dataset To assess the effectiveness of our approach, we conduct experiments using the LRMovieNet dataset. After obtaining image and text labels, we split the video dataset into source and target domains based on video label types. As label relevance ranking focuses on multimodal input, we partition the domains from a label perspective. To highlight label differences, we divide the class label set by video genres. Notably, there is a significant disparity between head and long-tail genres. Thus, we use the number of clips per genre to guide the partitioning. Specifically, we rank genres by the number of clips and divide them into sets S_P and S_Q. Set S_P includes genres with more clips, while set S_Q includes those with fewer clips (i.e., long-tail genres). We designate the labels in set S_Q as the target domain and the difference between the labels in sets S_P and S_Q as the source domain, achieving domain partitioning while maintaining label diversity between domains. For source domain labels, we manually assign relevance categories (high, medium, low) based on their relevance to the video clip content. For target domain labels, we randomly sample 5%-40% of label pairs and annotate their relative order based on their relevance to the video clip content. To evaluate our label relevance ranking algorithm, we also annotate the test set in the target domain with high, medium, and low relevance categories for the labels. We obtain 2551/2206/1000 video clips for the first stage/second stage/test split. The first stage data contains 10393 distinct labels, while the second stage and validation set contain 4841 different labels. MSLR-WEB10k → MQ2008 To demonstrate the generalizability of our method, we further conduct experiments on traditional LTR datasets.
In this transfer learning scenario, we use MSLR-WEB10k as the source domain and MQ2008 as the target domain, based on datasets introduced by Qin and Liu <cit.>. Evaluation Metrics We use the NDCG (Normalized Discounted Cumulative Gain) metric as the evaluation metric for multimodal label relevance ranking. For each video clip, we compute NDCG@k for the top k labels. §.§ State-of-the-art Comparison LRMovieNet Dataset We compare LR2PPO with previous state-of-the-art LTR methods, reporting NDCG metrics for the LRMovieNet dataset. Results of LTR-based methods are reproduced based on the paper description, since the original models can not be directly applied to this task. As seen in Tab. <ref>, LR2PPO significantly outperforms previous methods. Meanwhile, compared to the first-stage model LR2PPO (S1), the final LR2PPO achieves consistent improvement of over 3% at different NDCG@k. Open-vocabulary (OV) based methods, such as CLIP and MKT, utilize label confidence for the ranking of labels, exhibiting relatively poor performance. Furthermore, these models solely focus on specific objects, thus perform inadequately when dealing with semantic information. LTR-based methods shows superiority over OV-based ones. However, they are not originally designed for ranking for labels, especially when transferring to a new scenario. In contrast, our LR2PPO model allows the base model to interact with the environment and optimize over unlabeled data, resulting in superior performance compared to other baseline models. MSLR-WEB10k → MQ2008 Comparison on LTR datasets are shown in Tab. <ref>. It can concluded that our proposed LR2PPO method surpasses previous LTR methods significantly when performing transfer learning on LTR datasets. §.§ Ablation Studies Annotation Proportion in Target Domain To explore the influence of the annotation proportion of ordered pairs in the target domain, we adjust the annotation proportion during the training of the reward model in stage 2. Subsequently, we adopt the adjusted reward model when training the entire LR2PPO framework in stage 3. The accuracy of the reward model in the second stage and the NDCG metric in the third stage are reported in Tab. <ref>. The proportion of 10% achieves relatively high reward model accuracy and ultimate ranking relevance, while maintaining limited annotation. Partial Order Ratio As shown in Fig. <ref>, partial order ratio shows stability in training and achieves better NDCG, in contrast to the original ratio in PPO, which experiences a training collapse. This result indicates that the original ratio in PPO may not be directly applicable in the setting of label relevance ranking, thereby demonstrating the effectiveness of our proposed partial order ratio. Hyper-parameters Sensitivity Here, we investigate the impact of the threshold δ in Eq. (<ref>). As shown in Fig. <ref>, negative thresholds -0.1 achieves better NDCG scores, while improving the robustness of training procedure in stage 3. Refer to supplementary for more details. §.§ Qualitative Assessment To clearly reveal the effectiveness of our method, we visualize the label relevance ranking prediction results of the LR2PPO algorithm and other state-of-the-art OV-based or LTR-based methods on some samples in the LRMovieNet test set. Fig. <ref> shows the comparison between the LR2PPO algorithm and CLIP and PRM. 
For a set of video frame sequences and a plot text description, as well as a set of labels, we compare the ranking results of different methods based on label relevance for the given label set, and list the top 5 high relevance labels predicted by each method. Compared with CLIP and PRM, our method ranks more high relevance labels at the top and low relevance labels at the bottom. The results show that our method can better rank the labels based on the relevance between the label and the multimodal input, to more accurately obtain high-value labels. § CONCLUSION In this study, we prove the pivotal role of label relevance in label tasks, and propose a novel approach, named LR2PPO, to effectively mine the partial order relations and apply label relevance ranking, especifically for a new scenario. To evaluate the performance of the method, a new benchmark dataset, named LRMovieNet, is proposed. Experimental results on this dataset and other LTR datasets validate the effectiveness of our proposed method. splncs04 equationsection algorithmsection figuresection tablesection § PPO ALGORITHM In this section, we provide a detailed explanation of the original Proximal Policy Optimization (PPO) algorithm, as proposed by Schulman et al. <cit.>. The PPO algorithm is defined by three key loss functions: the policy function loss, the value function loss, and the entropy bonus. In Sec. fake3.2 of the main paper, we present the concept of the target value function estimate V_t^target and the advantage estimate Â_t at time step t. Building upon this, we proceed to introduce the details of the PPO algorithm. Formally, r_t(θ) is defined as the ratio of the action probability under the policy π_θ to the action probability under the old policy π_θ_old: r_t(θ) = π_θ(a_t|s_t)/π_θ_old(a_t|s_t). Subsequently, the policy function loss, denoted as L^PF(θ), is computed based on the policy parameters θ: L^PF(θ) = - 𝔼_t ( r_t(θ) Â_t ). To prevent the policy from changing too drastically in a single update, the PPO algorithm typically employs a clipped formation of policy loss: L^CPF(θ) = - 𝔼_t [ min( r_t (θ) Â_t, clip( r_t(θ), 1-ϵ, 1+ϵ) Â_t ) ]. The value function loss, denoted as L^VF(ω), has also been defined in Sec. fake3.2 of the main paper. The entropy bonus, denoted as S(π_θ), is defined as: S(π_θ) = 𝔼_t [ ∑_a_t - π_θ(a_t|s_t) logπ_θ(a_t|s_t) ] . This is the expected value at time step t of the sum over all possible actions of the product of the action probability under the policy π_θ and the logarithm of the action probability under the policy π_θ. This term encourages exploration by maximizing the entropy of the policy. Ultimately, the total loss of PPO can be defined as: L'(θ) = L^CPF(θ) + c_1' L^VF(ω) - c_2' S(π_θ), where c_1' and c_2' represent adjustable hyperparameters, which can be tuned to optimize the performance of the model. In many cases, instead of using the clipped policy loss form in Eq. (<ref>), the PPO algorithm incorporates a KL penalty term in the overall loss to prevent overly large policy updates that could lead to instability or performance drops in the learning process. The KL penalty, denoted as KL_penalty, is formulated as: KL_penalty (π_θ_old, π_θ) = E_t[KL(π_θ_old(a_t|s_t), π_θ(a_t|s_t))], where KL(·, ·) represents the Kullback-Leibler (KL) divergence. Thereby, the overall loss function, denoted as L”(θ), is a combination of the four aforementioned loss functions. It is computed as: L”(θ) = L^PF(θ) + c_1” L^VF(ω) - c_2” S(π_θ) + c_3” KL_penalty (π_θ_old, π_θ) . 
Here, the hyperparameters c_1”,c_2” and c_3” are used to balance the contributions of the four components to the overall loss, and are typically determined through empirical tuning. § LR2PPO ALGORITHM The joint training of Label Relevance Ranking with Proximal Policy Optimization (LR2PPO) framework is illustrated in Algorithm <ref>. Note that the definitions of entropy bonus S(π_θ) and KL penalty KL_penalty (π_θ_old, π_θ) are the same as the original PPO. (See Eq. (<ref>) and Eq. (<ref>).) § EXPERIMENTS §.§ More Details about LRMovieNet Dataset Prompts to Extract Text Labels When generating textual labels from scene description in each movie clip of MovieNet <cit.> dataset, we feed the following prompts into LLaMa2 <cit.> to obtain event labels and entity labels, respectively. (1) Event labels: “Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Use descriptive language tags consisting of less than three words each to capture events without entities in the following sentence: {sentence} Please seperate the labels by numbers and DO NOT return sentence. ### Response:” (2) Entity labels: “Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Use descriptive language tags consisting of less than three words each to capture entities in the following sentence: {sentence} Please seperate the labels by numbers and DO NOT return sentence. ### Response: ”. The produced text labels and image labels (by RAM model <cit.>), are subsequently screened and manually adjusted. Statistics of LRMovieNet Dataset To better understand the overall distribution of LRMovieNet, we provide histograms of its statistics in several dimensions, which are displayed in Fig. <ref>. Specifically, Fig. <ref> (a) shows the total number of video clips in all videos of different genres, Fig. <ref> (b) displays the number of label classes in different genres, and Fig. <ref> (c) provides the number of labels with different frequencies in the entire LRMovieNet dataset. The details of Fig. <ref> (a) and Fig. <ref> (b) are listed in Tab. <ref> and Tab. <ref>, respectively. Samples of LRMovieNet Dataset To illustrate the variation of types (, objects, attributes, scenes, character identities) and semantic levels (, general, specific, abstract) in LRMovieNet, we provide some training samples in Fig. <ref>. These samples also serve to clarify the process of label relevance categories annotation and the annotated partial order label pairs. The annotated label relevance categories in the source domain are used to train the relevance ranking base model in the first stage. Subsequently, the partial order label pairs in the target domain (along with label pairs augmented from the source domain) are employed to train the reward model in the second stage. This reward model then guides the joint training of LR2PPO in the third stage. §.§ Metrics In this paper, we evaluate the performance of our label relevance ranking algorithm using the Normalized Discounted Cumulative Gain (NDCG) metric, which is widely adopted in information retrieval and recommender systems. To better comprehend the NDCG metric, let's first introduce the concept of relevance scores. These scores, denoted as rel_i, are assigned to the item (label) at position i. 
Based on their relevance levels, the manually annotated labels in the test set are assigned these scores: labels with high, medium, and low relevance are given scores of 2, 1, and 0, respectively. Next, we define the Discounted Cumulative Gain (DCG) at position k as follows: DCG@k = ∑_i=1^k2^rel_i - 1/log_2(i+1) , where rel_i is the relevance score of the item at position i. The DCG metric measures the gain of a ranking algorithm by considering both the relevance of the items and their positions in the ranking. To obtain the Ideal Discounted Cumulative Gain (IDCG) at position k, we first rank the items by their relevance scores in descending order and then calculate the DCG for this ideal ranking: IDCG@k = ∑_i=1^k2^rel^*_i - 1/log_2(i+1) , where rel_i^* is the relevance score of the item (label) at position i in the ideal ranking. The IDCG represents the maximum possible DCG for a given query. Finally, we compute the Normalized Discounted Cumulative Gain (NDCG) at position k by dividing the DCG by the IDCG to ensure that it lies between 0 and 1: NDCG@k = DCG@k/IDCG@k . A value of 1 indicates a perfect ranking, while a value of 0 indicates the worst possible ranking. The normalization also allows for the comparison of NDCG values across different video clips, as it accounts for the varying number of labels. §.§ Implementation Details We leverage published Vision Transformer <cit.> and Roberta <cit.> weights to initialize the parameters of vision encoder and language encoder, respectively. The parameters of the two encoders are fixed during three training stages. The coefficients c_1 and c_2 in Eq. (fake11) are set to 1, 1×10^-3, respectively. The coefficient c_3 of KL penalty in Eq. (fake11) is set to 1×10^-3. In our implementation, the KL penalty is subtracted from the reward, instead of being directly included in the joint loss. In this way, the policy is still updated based on the expected return, but the return is adjusted based on the policy change. This allows the algorithm to explore more freely while still being penalized for large policy changes, leading to more effective exploration and potentially higher overall returns. γ in the definition of V_t^target is set to 0. β in Eq. (fake3) is set to 0.3. Margin m_R in Eq. (fake4) and margin m in Eq. (fake7) are both set to 1, and δ in Eq. (fake8) is set to -0.1. AdamW optimizer is adopted for all optimized networks in three training stages, with learning rate 2×10^-5 in the first and second training stages and 1×10^-3 in the third training stage. We train the first two stages for 15 epochs, respectively. In Algorithm <ref>, T is set to 1, N_Trajs is set to 200, K is set to 1, N_Iters is set to 412, M is set to 24. During the training of stage 2, only 10% of ordered annotation pairs is used. In stage 3, 40% of randomly sampled pairs without annotation is utilized to train the LR2PPO framework, with the guidance of the reward model trained in stage 2. We use four V100 GPUs, each with 32GB of memory, to train the models. In comparison experiments, we utilize “There is a {label} in the scene” as textual prompt for both CLIP <cit.> and MKT <cit.>. §.§ More Ablation Studies about LR2PPO Classification and regression. The relevance between each label and the multimodal input are annotated into high, medium and low (with scores 2, 1, 0, respectively). We adopt regression instead of classification for better performance and convenience in the latter stages. The results of classification in stage 1 (S1-CLS) are listed in Tab. <ref>. 
We contribute the performance gap between classification (S1-CLS) and regression (S1) to the need to fully rank in the task. The predicted logits of classification need to be weighted to form the final relevance score, which hinders its performance. Regression scores are more suitable for ranking compared to classification logits. Results w/o stage 1. The results of omitting stage 1 (w/o S1) are listed in Tab. <ref>. LR2PPO achieves better performance since it starts learning from the source domain pretrained model, whose parameters more coincide with the ranking task, while LR2PPO (w/o S1) starts from the officially pretrained ViT <cit.> and Roberta <cit.>. §.§ More Hyper-parameters Sensitivity Analysis Here, we conduct experimental analysis on the hyperparameters of the LR2PPO algorithm, namely margin m_R in reward model training, margin m and hyperparameter c_1 in joint training of LR2PPO. (See Eq. (fake4), Eq. (fake7), and Eq. (fake11) in the main paper.) The results are shown in Fig. <ref>. It can be observed that the final performance of LR2PPO is not sensitive to the variations of these hyperparameters, indicating the stability of our method. §.§ Training Curves To better illustrate the changes during the training process of the LR2PPO algorithm, we have plotted the training curves of various variables against iterations. These are displayed in Fig. <ref>. Fig. <ref> to Fig. <ref> represent the training curves for the third stage, while <ref> and <ref> correspond to the training curves for the second and first stages, respectively. The reward and value curves, along with the NDCG curves in the third training stage, all exhibit an upward trend, while the policy loss and value loss show a downward trend. All of these observations validate the effectiveness of the LR2PPO learning process.
http://arxiv.org/abs/2407.13007v1
20240717205737
MagAO-X: Commissioning Results and Status of Ongoing Upgrades
[ "Jared R. Males", "Laird M. Close", "Sebastiaan Y. Haffert", "Maggie Y. Kautz", "Jay Kueny", "Joseph D. Long", "Eden McEwen", "Noah Swimmer", "John I. Bailey III", "Warren Foster", "Benjamin A. Mazin", "Logan Pearce", "Joshua Liberman", "Katie Twitchell", "Alycia J. Weinberger", "Olivier Guyon", "Alexander D. Hedglen", "Avalon McLeod", "Roz Roberts", "Kyle Van Gorkom", "Jialin Li", "Isabella Doty", "Victor Gasho" ]
astro-ph.IM
[ "astro-ph.IM" ]
CUAOA: A Novel CUDA-Accelerated Simulation Framework for the QAOA Jonas Stein0000-0001-5727-9151 LMU Munich, Germany Aqarios GmbH, Germany jonas.stein@ifi.lmu.de Jonas Blenninger0009-0004-5382-7113 LMU Munich, Germany Aqarios GmbH, Germany jonas.blenninger@aqarios.com David Bucher0009-0002-0764-9606 Aqarios GmbH, Germany david.bucher@aqarios.com Peter J. Eder0009-0006-3244-875X Siemens AG, Munich, Germany peter-josef.eder@siemens.com Elif Çetiner LMU Munich, Germany elif.cetiner@tum.de Maximilian Zorn0009-0006-2750-7495 LMU Munich, Germany maximilian.zorn@ifi.lmu.de Claudia Linnhoff-Popien0000-0001-6284-9286 LMU Munich, Germany linnhoff@ifi.lmu.de July 22, 2024 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT MagAO-X is the coronagraphic extreme adaptive optics system for the 6.5 m Magellan Clay Telescope. We report the results of commissioning the first phase of MagAO-X. Components now available for routine observations include: the >2 kHz high-order control loop consisting of a 97 actuator woofer deformable mirror (DM), a 2040 actuator tweeter DM, and a modulated pyramid wavefront sensor (WFS); classical Lyot coronagraphs with integrated low-order (LO) WFS and control using a third 97-actuator non-common path correcting (NCPC) DM; broad band imaging in g, r, i, and z filters with two EMCCDs; simultaneous differential imaging in Hα; and integral field spectroscopy with the VIS-X module. Early science results include the discovery of an Hα jet, images of accreting protoplanets at Hα, images of young extrasolar giant planets in the optical, discovery of new white dwarf companions, resolved images of evolved stars, and high-contrast images of circumstellar disks in scattered light in g-band (500 nm). We have commenced an upgrade program, called “Phase II”, to enable high-contrast observations at the smallest inner working angles possible. These upgrades include a new 952 actuator NCPC DM to enable coronagraphic wavefront control; phase induced amplitude apodization coronagraphs; new fast cameras for LOWFS and Lyot-LOWFS; and real-time computer upgrades. We will report the status of these upgrades and results of first on-sky testing in March-May 2024. § INTRODUCTION MagAO-X<cit.> is an “extreme” adaptive optics (ExAO) system optimized for high-contrast science at visible-to-near-IR wavelengths. It is installed on the Magellan Clay 6.5 m telescope at Las Campanas Observatory (LCO), in Chile. Construction and commissioning of MagAO-X was funded by the NSF MRI program. Commissioning of MagAO-X is now complete, and it is routinely offered to all Magellan partners for observations. Here we report on the outcome of this first phase of MagAO-X. MagAO-X was conceived with the ultimate goal of reflected light characterization of nearby temperate exoplanets. This will require achieving exquisite wavefront control on-sky. This motivates a program of continued upgrades and testing dubbed “Phase II”. 
We next provide a status update on these upgrades, including the installation of a new 952 actuator non-common path deformable mirror (DM), a high-speed low-order wavefront sensor (LOWFS) camera, and a real-time computer (RTC) upgrade. When not at the telescope, MagAO-X also serves as a valuable testbed for future technology, including for AO and segment phasing technologies for the Giant Magellan Telescope (GMT). We will also provide a brief overview of these activities and recent results. § INSTRUMENT OVERVIEW Here we briefly present the design of MagAO-X and its key specifications. Previous SPIE contributions provide more detail. <cit.> In addition, the complete preliminary design review (PDR) documentation is available at <https://magao-x.org/docs/handbook/appendices/pdr/>, and results of laboratory integration and preparation for shipment can be bound in the pre-ship review (PSR) documentation: <https://magao-x.org/docs/handbook/appendices/psr/index.html>. MagAO-X is installed on the Nasmyth platform of the 6.5 Magellan Clay telescope (Figure <ref>) at Las Campanas Observatory, Chile. MagAO-X is a woofer-tweeter system, with a 97-actuator deformable mirror (DM) serving as the woofer and a 2040 actuator MEMS DM as the tweeter. The pupil illuminates 48 actuators across in the short direction, giving a 13.5 cm actuator pitch, with approximately 1640 actuators illuminated. MagAO-X is a two-level optical bench (Figure <ref>). The upper-level houses the k-mirror derotator, woofer, tweeter, and atmospheric dispersion compensator. A periscope relay folds the beam to the lower level containing the pyramid wavefront sensor (PyWFS). A selectable beamsplitter sends light to the PyWFS and to a Lyot-coronagraph system feeding a dual-EMCCD simultaneous differential imaging (SDI) system or optionally visitor instruments such as the VIS-X spectrograph<cit.>. The coronagraph contains a third deformable mirror, originally with 97 actuators but now upgraded to 952, after the beamsplitter. This enables non-common path offloading without offsetting the PyWFS. All of these components and subsystems are described in great detail in the references and documents noted above. Science cases motivating MagAO-X include: * The search for and characterization of young accreting planets in Hα orbiting nearby T Tauri and Herbig Ae/Be stars <cit.>. See Li et al<cit.> in these proceedings for an update on this project. * Circumstellar disk characterization * Young EGP characterization in the red-optical/near-IR * Spatially resolved stellar surface and evolved star imaging at high spectral resolution. * Characterization of tight binary star systems. * Kepler and TESS followup: MagAO-X can be used for high spatial-resolution follow-up of potential transiting planet host stars to search for binarity and other sources of contamination. * Accelerating Stars: Stars exhibiting accelerations measured by astrometry and radial velocity are proving to be excellent places to search for low-mass companions. Led by S. Haffert and L. Pearce, the MagAO-X Xoomies survey is conducting a cued search of such stars. * White Dwarf Companions: MagAO-X is sensitive to an understudied population of white dwarf companions to nearby main sequence stars (so-called Sirius-like systems). The ExAO Pup Search is characterizing such systems (Pearce et al, in prep). * High spatial resolution imaging of asteroid surfaces and asteroid companion searches. 
§ COMMISSIONING RESULTS Commissioning of the initial MRI-funded capabilities of MagAO-X is now complete. Table <ref> summarizes the on-sky time utilized by MagAO-X on the Clay Telescope. Starting with the 2nd commissioning run (2022A) we have made significant time available for partner observations. 13 nights out of 69.5 were used for explicit commissioning tasks. Additionally 2 nights have been used for VIS-X IFU commissioning and 4 nights so far have been devoted to commissioning Phase II upgrades. §.§ XKID In March, 2023, we commissioned the MagAO-X Kinetic Inductance detector (XKID) IFU behind MagAO-X. XKID is a superconducting integral field spectrograph designed for low spectral resolution, photon counting spectroscopy of exoplanets in the optical and near-IR (800-1400 nm) based on an MKIDs array<cit.>. The high time resolution (microsecond photon counting) and lack of detector noise (zero read noise or dark current) allows significant reduction of speckle backgrounds through the Stochastic Speckle Discrimination (SSD) technique <cit.>. Figure <ref> shows the XKID dewar aligned to MagAO-X on the Clay platform. §.§ VIS-X VIS-X is a visible wavelength IFU spectrograph in MagAO-X, see Haffert et al.<cit.>. VIS-X supports three modes: the wide-field “Emission Line Mode” (ℛ=15,000, 653-658 nm, 1.15”X1.15”) and the wide-field “Low-res Mode” (ℛ=50–100, 450-900 nm, 1.15”X1.15”), and narrow-field “Characterization Mode” (ℛ=15,000, 500-900nm, 0.063”X0.5”). The “Emission Line Mode“ has been commissioned and has been used to observe young stellar objects with jets, accreting companions, resolved stellar surfaces and (interacting) close binaries in the Hα emission line (Haffert et al, in prep). Final commissioning of the remaining two modes is planned for the Fall 2024 semester. § PHASE II UPGRADES §.§ NCPC DM The correction of non-common path aberrations (NCPA) is critical to achieving the highest image quality in the science focal plane of an AO system. In typical AO systems the only corrective elements are prior to the beamsplitter, and so are in common-path with the WFS and the science instrumentation. The challenge this creates is that any aberrations non-common between the WFS and science instrument must be corrected using the upstream DM(s), which then requires that offsets be applied to the WFS signal such that the control loop does not simply remove the desired correction. The solution we arrived at in MagAO-X is to place a non-common path correcting DM, which we refer to as “dmncpc”, after the beamsplitter. This DM can then be used to correct NCPA without disturbing the upstream high-order loop. The original version of MagAO-X included a 97 actuator device identical to the woofer. In Phase II we have upgraded this to a 952 actuator MEMS device. We make use of focus diversity phase retrieval<cit.> (FDPR), as well as coronagraphic low-order wavefront sensing using defocused spots to sense and correct NCPA. See Kueny et al.<cit.> (these proceedings) for details about the characterization and on-sky commissioning of the new 1K dmncpc. See Haffert et al.<cit.> (these proceedings) for details on how this upgrade has revolutionized our post-coronagraph high-contrast imaging capabilities, and Liberman et al.<cit.> (these proeceedings) for an analysis of the NCPC DM and coronagraph alignment tolerances. §.§ RTC The control system for MagAO-X is based on commercial-off-the-shelf (COTS) computing components, including standard ATX motherboards, CPUs, and consumer-grade GPUs. 
This enables us to upgrade the performance of MagAO-X as computer hardware improves. The original RTC of MagAO-X was based on dual Intel Xeon 18 core processors on a PCIe Gen 3 motherboard, and utilized three NVIDIA 2080 Ti GPUs on an PCIe expansion board. As part of Phase II we upgraded to the AMD Threadripper architecture, with a 64 core processor on a PCIe Gen 4 motherboard. Additionally two GTX 4090 GPUs replaced the 2080s. This improved our total reconstruction time from ∼200 μsec to ∼75 μsec. The move to PCIe Gen 4 also resulted in significantly improved timing stability, as we now have many fewer PCIe bus collisions between the WFS images, DM commands, and GPU memory operations. The computer hardware upgrade enabled us to achieve 3 kHz loop speeds on-sky for the first time. The resulted in an over 10% reduction in residual flux, see Figure <ref>. §.§ PIAA A further upgrade underway is the implementation of Phase Induced Amplitude Apodization Complex Mask Coronagraphs (PIAACMC) in MagAO-X. We have designed, procured, and installed a set of PIAA and inverse PIAA lenses and conducted initial tests on-sky<cit.>. Results with an opaque focal plane mask (FPM) are shown in Figure <ref>. We are finalizing designs for transmissive complex FPMs to complete the PIAACMC implementation for demonstration on-sky in Fall, 2024. §.§ LOWFS Upgrades To take advantage of the high-speed capabilities of the new NCPC DM and to prepare for using transmissive complex masks in PIAACMC, we have upgraded our low-order WFS (LOWFS) camera to use the Teledyne Kinetix camera. Over the small FOVs used for LOWFS on MagAO-X<cit.> this device is capable of over 10,000 frames per second. In addition to upgrading the existing FPM based LOWFS, we have also begun implementing a second Lyot-based LOWFS<cit.> system using a second Kinetix camera. We plan to demonstrate this capability in Fall, 2024. §.§ Algorithm Development We continue to optimize the control system. Ongoing efforts include machine-learning driving non-linear PyWFS reconstruction algorithms (Landman et al<cit.>, these proceedings), and PyWFS “optical gain” characterization and on-line correction (McEwen et al.<cit.>, these proceedings). Efforts are also underway to implement real-time atmospheric dispersion corrector (ADC) control (Twitchell et al.<cit.>, these proceedings). Finally, we are working to implement PSF/speckle reconstruction using system telemetry, see Long et al.<cit.> (these proceedings) for recent results. § TESTBED OPERATIONS As part of the plan to continuously improve MagAO-X, it was designed for routine shipment between LCO and the University of Arizona in Tucson. This has enabled MagAO-X to serve as a key part of the High-Contrast Adaptive Optics Testbed (HCAT) for the Giant Magellan Telescope (GMT), part of the overall program to develop segment phase sensing and control strategies for the GMT<cit.>. MagAO-X serves as the simulated GMT-AO system for HCAT, and is used to conduct phase sensing and control experiments.<cit.>. For more details on these experiments see Close et al.<cit.>, Quiros-Pacheco et al.<cit.>, Plantet et al.<cit.>, and Demers et al.<cit.> in these proceedings, and Kautz et al. (in prep). As of July, 2024 MagAO-X is set up at LCO for remote operations. It is available for WFS&C experiments and algorithm development. We expect it to remain at LCO through April, 2025. § CONCLUSION MagAO-X has now completed initial commissioning at LCO, and is supporting science observations by Magellan partner scientists. 
Significant improvements in performance have been realized over the commissioning period which began in Dec, 2019. In addition, ongoing upgrades to MagAO-X and its ability to serve as a WFS&C testbed will continue to improve performance. We are very grateful for support from the NSF MRI Award #1625441 (MagAO-X). The Phase II upgrade program is made possible by the generous support of the Heising-Simons Foundation. spiebib
http://arxiv.org/abs/2407.12763v1
20240717174120
An Alexander Polynomial Obstruction to Cosmetic Crossing Changes
[ "Joe Boninger" ]
math.GT
[ "math.GT", "57K10" ]
#1 thmTheorem thmsection conj[thm]Conjecture prop[thm]Proposition lemma[thm]Lemma cor[thm]Corollary question[thm]Question conv[thm]Convention claim[thm]Claim *namedtheorem named_thm[1] named_prop[1] definition defn[thm]Definition *nameddef named_def[1] definition rmk[thm]Remark Department of Mathematics, Boston College, Chestnut Hill, MA boninger@bc.edu An Alexander Polynomial Obstruction to Cosmetic Crossing Changes Joe Boninger July 22, 2024 ================================================================ § ABSTRACT The cosmetic crossing conjecture posits that switching a non-trivial crossing in a knot diagram always changes the knot type. Generalizing work of Balm, Friedl, Kalfagianni and Powell, and of Lidman and Moore, we give an Alexander polynomial condition that obstructs cosmetic crossing changes for knots with L-space branched double covers, a family that includes all alternating knots. As an application, we prove the cosmetic crossing conjecture for an infinite family of pretzel knots. § INTRODUCTION The cosmetic crossing conjecture, attributed to X. S. Lin and appearing on Kirby's famous problem list <cit.>, posits that switching a non-trivial crossing in an oriented knot diagram always changes the knot type. More precisely, let K ⊂ S^3 be an oriented knot and D ⊂ S^3 an oriented disk intersecting K in two points of opposite sign; then a crossing change on K is the operation of performing ± 1-surgery on the unknot ∂ D. We say a crossing change is cosmetic if the resulting oriented knot is equivalent to K, and the change is nugatory if ∂ D bounds a disk in S^3 - K. We also define a generalized crossing change to be the result of 1/n surgery on ∂ D for some n ≠ 0, with cosmetic and nugatory generalized crossing changes defined as before. It's not hard to see that any nugatory crossing change is cosmetic; the cosmetic crossing conjecture asserts the converse is also true. Any cosmetic crossing change is nugatory. Some authors (e.g. <cit.>) have also considered a “generalized” cosmetic crossing conjecture, with the goal of showing any cosmetic generalized crossing change is nugatory. In recent years <Ref> has seen a flurry of activity—it is now known that knots in the following families to do not admit cosmetic, non-nugatory crossing changes: * Two-bridge knots, due to Torisu <cit.>. * Fibered knots, due to Kalfagianni <cit.>. * Genus one knots with non-trivial Alexander polynomial, due to Ito following work of Balm, Friedl, Kalfagianni and Powell <cit.>. * Alternating knots with square-free determinant, due to Lidman and Moore <cit.>. * Special alternating knots, due to the author <cit.>. Lidman and Moore also give a stronger obstruction using the homology of the branched double cover <cit.>, and see <cit.> for more results. In this note we give an Alexander polynomial obstruction to cosmetic crossing changes that generalizes two of the above results. Given any knot K ⊂ S^3, let Σ(K) denote its branched double cover. The three-manifold Σ(K) is an L-space if it satisfies a Heegaard-Floer theoretic condition—see <cit.> for details. Let K ⊂ S^3 be a knot such that Σ(K) is an L-space. If K admits a non-nugatory, cosmetic generalized crossing change, then its Alexander polynomial Δ_K has a factor of the form f(t)f(t^-1) for some f ∈[t, t^-1] satisfying f(-1) ≠± 1. In particular, f is non-constant. The fact that f is non-constant in <Ref> follows from the identity Δ_K(1) = ± 1, so that f(1) = ± 1. 
Additionally, the branched double cover Σ(K) of a knot K ⊂ S^3 is an L-space if K is an alternating knot, or more generally if K has thin Khovanov homology with /2 coefficients <cit.>. We thus obtain: Let K ⊂ S^3 be an alternating knot. If K admits a non-nugatory, cosmetic crossing change, then Δ_K has a factor of the form f(t)f(t^-1) for some f ∈[t,t^-1] satisfying f(-1) ≠± 1. Balm, Friedl, Kalfagianni and Powell prove a statement analogous to <Ref> for genus one knots <cit.>: their theorem does not require that Σ(K) be an L-space, but allows for the possibility that Δ_K ≡ f ≡ 1. (By Ito's work, this is the only remaining case open for genus one knots <cit.>.) <Ref> also recovers the aforementioned result of Lidman and Moore. Let K ⊂ S^3 be a knot such that Σ(K) is an L-space and K admits a non-nugatory, cosmetic crossing change. Then the determinant (K) has a square factor. This is immediate from <Ref> and the identity (K) = Δ_K(-1). <Ref> is a good obstruction to cosmetic crossing changes for knots with low crossing number, and alternating knots in particular—it verifies <Ref> for 200 of the 250 prime knots with 10 or fewer crossings, and 483 of the 564 prime alternating knots with 11 crossings or fewer <cit.>. Thanks to the authors referenced above, the cosmetic crossing conjecture has been proven for all but four knots with 10 or fewer crossings <cit.>, but <Ref> proves <Ref> for new knots as well. As an example, we use <Ref> to prove: Let p_1, p_2, p_3, p_4 and q be integers satisfying: * p_i ≥ 1 for all i, and q > min(p_1, …, p_4). * p_i ≡ 1 modulo 4 for all i, and q ≡ 3 modulo 4. Then the pretzel knot P(p_1,p_2,p_3,p_4,-q) does not admit a non-nugatory, cosmetic generalized crossing change. While some of the knots in <Ref> are already known to satisfy <Ref>, infinitely many are not; see our discussion in <Ref> below. Finally, our <Ref> gives a simplified proof of <Ref> for the knots 10_108 and 10_164, which Ito handled using the Casson-Walker invariant and the calculus of Jacobi diagrams <cit.>. §.§ Discussion For a genus one knot K, the conclusion of <Ref> implies K is algebraically concordant to the unknot—see <cit.>. Conversely, knots which are concordant to knots with strictly smaller genus frequently have factors of the form f(t)f(t^-1) in their Alexander polynomials (cf. <cit.>). We therefore pose the following question as a next step in studying <Ref>. If a knot K admits a non-nugatory cosmetic crossing change, is K concordant to a knot with strictly smaller genus than itself? §.§ Outline In <Ref> we collect useful facts, and in <Ref> we prove <Ref>. In <Ref>, we use <Ref> to prove <Ref>. §.§ Acknowledgments The author thanks Josh Greene for mathematical and professional guidance, and Ali Naseri Sadr for many worthwhile discussions about the cosmetic crossing conjecture. The author is also grateful to John Baldwin, Keerthi Madapusi and Ian Montague for helpful conversations. This material is based upon work supported by the National Science Foundation under Award No. 2202704. § TOPOLOGICAL SET-UP Let K ⊂ S^3 be an oriented knot. When considering a crossing change on K given by a disk D, as described in the introduction, we refer to D as a crossing disk associated to the change. Similarly, a crossing arc is an arc in D connecting the two points of K ∩ D. Additionally, let π : Σ(K) → S^3 be the double cover of S^3 branched along K, and let π^-1 indicate taking preimages in π. If α⊂ S^3 is a crossing arc for K, then π^-1(α) ⊂Σ(K) is a simple closed curve. 
A key tool in our proof is the following result of Lidman and Moore. Suppose Σ(K) is an L-space and α⊂ S^3 is the crossing arc of a cosmetic generalized crossing change for K. If the curve π^-1(α) ⊂Σ(K) is nullhomologous in H_1(Σ(K)), then the crossing change is nugatory. Lidman and Moore's theorem is stated only for standard crossing changes, but their proof works for generalized crossing changes as well. An important ingredient is the surgery characterization of the unknot among nullhomologous knots in an L-space, due to Gainullin <cit.>. The goal of the next two lemmas is to give a useful criterion for applying <Ref>. Let S ⊂ S^3 be a Seifert surface for K, and let α⊂ S be a properly embedded, oriented, non-separating arc. Let S' ⊂ S^3 be the surface S - ν(α), where ν denotes a regular open neighborhood; we identify H_1(S') with a subgroup of H_1(S) via the inclusion S' ↪ S. There exists a basis ℬ = [e_1, …, e_2g] for H_1(S) such that: e_1 ·α = 1 e_i ·α = 0 for all i ≠ 1, where · denotes the intersection pairing. It follows that [e_2, …, e_2g] is a basis for H_1(S'). By the classification of surfaces with boundary, S' is homeomorphic to the surface in <Ref>. Additionally, by an isotopy supported in a small neighborhood of ∂ S', we assume the two arcs ∂ν(α) ⊂∂ S' are the red arcs shown in the figure. We then choose the classes e_2, …, e_2g as shown in the figure, and let e_1 be the class of the curve resulting from joining the ends of the arc e̅_1 through ν(α) in S. Let ℬ be a basis for H_1(S) as given by <Ref>, and let V be the Seifert matrix of S in the basis ℬ. Then the matrix V + V^⊺ is a presentation matrix for H_1(Σ(K)) with ordered generating set b_1, …, b_2g, such that b_1 = 2[π^-1(α)] ∈ H_1(Σ(K)) for some orientation of the curve π^-1 (α). The fact that V + V^⊺ presents H_1(Σ(K)) is well known—the substance of <Ref> is the claim that b_1 = 2[π^-1(α)]. Let ℬ = e_1, …, e_2g∈ H_1(S) be as in the lemma, and let f_1, …, f_2g be the dual basis of H_1(S^3 - S) given by the Alexander duality isomorphism H_1(S^3 - S) ≅ H^1(S) ≅ H^*_1(S). Let D ⊂ S^3 be an oriented disk with D ∩ S = α, and let U = ∂ D. By <Ref> the linking pairing lk(*, [U]) : H_1(S) → satisfies lk(e_i, [U]) = e_i ·α = δ_i1 = lk(e_i, f_1), where δ_i1 is the Kronecker delta, so [U] = f_1 ∈ H_1(S^3 - S) by Alexander duality. Since U ∩ S = ∅, lk([U], [K]) = 0 and π^-1(U) ⊂Σ(K) has two components, which we denote by U_1 and U_2. We claim that, for some choice of orientation of π^-1(α), [U_1] = [π^-1(α)] = -[U_2] ∈ H_1(Σ(K)). To this end, let X = S^3 - ν(K) be the exterior of K, let S = S ∩ X, and let Y = X - ν(S). We choose our regular neighborhoods of K and S small enough that U ∩ν(K) = U ∩ν(S) = ∅, so that U ⊂ Y and D ∩∂ Y is a circle U' parallel to U in D ∩ Y. The boundary ∂ν(S) ⊂∂ Y has two components, both parallel to S in X, which we label S^+ and S^- according to the orientation of S. See <Ref>. Let Y_1 and Y_2 be two copies of Y, and let S^±_i be the copy of S^± in Y_i for i = 1, 2. Then the cyclic double cover X̅_2 of X can be constructed by gluing S^+_1 to S^-_2 and S^+_2 to S^-_1: X̅_2 = Y_1 ∪_S^+_i = S^-_3 - i Y_2. The branched double cover Σ(K) can be obtained from X̅_2 by filling its torus boundary with a solid torus T along the natural choice of meridian—see <cit.> for details. The curves U_1, U_2 ⊂Σ(K) that make up π^-1(U) are precisely the copies of U in Y_1 and Y_2, and similarly we let U'_i be the copy of U' in Y_i, so that π^-1(U') = U'_1 ∪ U'_2. 
Let α = α∩S; then the preimage of α in X̅_2 is the two arcs α_1 and α_2 which are the push-offs of α to S^+_1 (= S^-_2) and S^+_2 (= S^-_1) respectively. The preimage π^-1(α) ⊂Σ(K) is therefore a closed curve obtained by connecting α_1 and α_2 to the core of T via four radial arcs through two meridian disks, as in <Ref>. Forgetting orientations, this curve is clearly parallel to each of U'_1 and U'_2 in Σ(K), hence parallel to each of U_1 and U_2. Fixing an orientation for π^-1(α) without loss of generality, we have [U_1] = [π^-1(α)] = -[U_2], proving (<ref>). Following the notation of the previous paragraph, let f_1^1 and f_1^2 be the two copies of f_1 in H_1(Y_1) and H_1(Y_2). A careful reading of the proof of <cit.> now shows the matrix V + V^⊺ presents the group H_1(Σ(K)), with generating set b_1, …, b_2g such that b_1 = f_1^1 - f_1^2 = [U_1] - [U_2] = 2[U_1] = 2[π^-1(α)] ∈ H_1(Σ(K)). Let ℬ be a basis for H_1(S) given by <Ref>, and let V be the Seifert matrix of S in the basis ℬ. Let b_1 be the first standard basis vector of ^2g. Then π^-1(α) is nullhomologous if and only if b_1 is in the image of the map V + V^⊺ : ^2g→^2g. By <Ref>, b_1 is in the image of V + V^⊺ if and only if 2[π^-1(α)] ∈ H_1(Σ(K)) is nullhomologous. Since H_1(Σ(K)) is a finite group of odd order (equal to the determinant of K), 2[π^-1(α)] is nullhomologous if and only if [π^-1(α)] is. Along with <Ref>, our proof of <Ref> relies on the following simple observation. Let L be the two-component link ∂ S'. If α is the crossing arc of a cosmetic generalized crossing change whose crossing disk D satisfies D ∩ S = α, then Δ_L ≡ 0. Suppose the crossing change is 1/n surgery on ∂ D for some n > 0, and let K_n ≅ K be the resulting knot. If n = 1, then without loss of generality the Alexander polynomials of K, K_1 and L satisfy the skein relation Δ_K_1(t) - Δ_K(t) = (t^1/2 - t^-1/2)Δ_L(t). More generally, for all j > 0 we have Δ_K_j + 1(t) - Δ_K_j(t) = (t^1/2 - t^-1/2)Δ_L(t). Summing (<ref>) with n - 1 copies of (<ref>), where j ranges from 1 to n - 1, gives Δ_K_n(t) - Δ_K(t) = n(t^1/2 - t^-1/2)Δ_L(t). Since K_n = K by hypothesis, Δ_K_n = Δ_K and Δ_L ≡ 0. § PROOF OF <REF> Let K be a knot with Σ(K) an L-space, and let D ⊂ S^3 be a crossing disk associated to a cosmetic generalized crossing change on K. Since the linking number of K with ∂ D is zero, there exists a Seifert surface S ⊂ S^3 for K such that S ∩∂ D = ∅. In fact, any Seifert surface can be modified to satisfy this condition by adding tubes along pairs of oppositely signed points of S ∩∂ D, beginning with an innermost pair. Additionally, by shrinking D if necessary, we assume S ∩ D consists of a single crossing arc α. We also assume α does not separate S: if this is not the case, we change S by adding a tube connecting the components of S - α and disjoint from D. Let g be the genus of S, and let S' ⊂ S^3 be the surface S - ν(α) as above. Let L be the link ∂ S'. Let ℬ be a basis for H_1(S) given by <Ref>, using the crossing arc α as the relevant non-separating arc. Let V be the Seifert matrix of S in the basis ℬ, and let A be the Alexander matrix V - tV^⊺. Let A' be the (2g - 1)-by-(2g - 1) matrix given by removing the first row and column of A; then A' is an Alexander matrix of L by <Ref>, and by <Ref> we have 0 = Δ_L(t) = (A'). Let v_1 = [x_1, …, x_2g - 1]^⊺∈(t)^2g - 1 be a vector in the kernel of A'. 
Multiplying v_1 by the least common denominator of the x_i, we assume v_1 is in the subring [t]^2g - 1, and multiplying again by the least common denominator of the coefficients of all the x_i, we further assume that v_1 ∈[t]^2g - 1. Finally, after dividing out any common factors, we have (x_1, …, x_2g - 1) = 1 in [t]. Let v ∈[t]^2g be the vector [0, x_1, …, x_2g - 1]^⊺. Since v_1 is in the kernel of A', we have Av = [f(t),0, …, 0]^⊺ for some f(t) ∈[t]. Let coeff(f) ⊂ be the set of coefficients of f. The polynomial f satisfies (coeff(f)) = 1. In other words, f is primitive. Suppose f(t) = d · g(t) for some d ∈, d ≥ 1, and g ∈[t]. Let X̅ be the universal abelian cover of S^3 - ν(K), and recall that A presents H_1(X̅) as a module over [t,t^-1], where t generates the infinite cyclic group of deck transformations. Let a ∈ H_1(X̅) be the first generator corresponding to this presentation; then by (<ref>) we have 0 = f(t) · a = d · g(t) · a. Therefore the element g(t) · a ∈ H_1(X̅) is a torsion homology class, forgetting the ⟨ t ⟩-module structure, and since H_1(X̅) is torsion-free (see the remark following the proof of the claim) we have 0 = g(t) · a. Now (<ref>) holds if and only if the equation Aw = [g(t),0, …, 0]^⊺ has a solution w ∈[t,t^-1]^2g. The vector (1/d)v ∈[t]^2g is a solution to (<ref>), and since A is invertible over (t) this is the unique solution. We conclude that (1/d)v ∈[t]^2g, so d = 1 by the construction of v. Here is one way to see that H_1(X̅) is torsion-free: as in the proof of <Ref>, let X = S^3 - ν(K), let S = S ∩ X, and let Y = X - ν(S). Let {Y_i}_i ∈ be infinitely many copies of Y, and let S^+_i and S^-_i be the two components of ∂ν(S) in ∂ Y_i. Then X̅ can be constructed by gluing S^+_i to S^-_i + 1 for all i ∈ <cit.>. For m ≥ 0, let Z_m ⊂X̅ be the subset Z_m = Y_-m∪ Y_-m + 1∪⋯∪ Y_m - 1∪ Y_m. Since any singular chain in X̅ is supported in a compact set, to show that H_1(X̅) is torsion-free it suffices to show that H_1(Z_m) is torsion-free for all m. A Mayer-Vietoris calculation shows that for any m H_1(Z_m) ≅^2g, so H_1(X̅) is torsion-free. If f(-1) = ± 1, then the crossing change is nugatory. When t = -1, the Alexander matrix A becomes V + V^⊺. Since v ∈[t]^2g, v(-1) ∈^2g, and if f(-1) = ± 1 then (<ref>) yields (V + V^⊺) v(-1) = [± 1, 0, …, 0]^⊺. The claim then follows from <Ref> and <Ref>. We will show that f(t) and f(t^-1) are distinct factors of Δ_K. Let 𝒜̂ = [t^1/2, t^-1/2], and let 𝒜⊂𝒜̂ be the subring [t, t^-1]. Given a ∈𝒜̂, let a̅ denote the image of a under the involution determined by t^1/2↦ -t^-1/2; on the subring 𝒜 this restricts to the involution t ↦ t^-1. If M = {a_ij} is a matrix or vector with elements in 𝒜̂ then let M̅ = {a̅_ij}, and define M^* = M̅^⊺. We have v^*A = [-tf(t^-1),0,…, 0]. Let A_sym be the matrix A_sym = t^-1/2A = t^-1/2V -t^1/2V^⊺∈ M_2g(𝒜̂), and observe that A_sym^* = A_sym. We therefore calculate v^*A = t^1/2(v^*A_sym) = t^1/2(v^*A_sym^*) = t^1/2(v^⊺ A_sym^⊺) = t^1/2(A_symv)^* = -t(Av)^* = [-tf(t^-1),0,…, 0]. By construction the elements of v_1 = [x_1, …, x_2g - 1] satisfy (x_1, …, x_2g - 1) = 1 in [t, t^-1], and this also holds in 𝒜 by Gauss's lemma. Since 𝒜 is a principal ideal domain <cit.>, it follows that v_1 extends to a basis v_1, …, v_2g - 1 of the free 𝒜-module 𝒜^2g - 1. Let B' to be the (2g - 1)-by-(2g - 1) matrix whose ith column vector is v_i, and modify the v_i if necessary so that (B') = 1. Let B be the 2g-by-2g block diagonal matrix [1] ⊕ B', and define C = B^*AB. Then (C) = (A) = Δ_K. 
Let b_1, …, b_2g be the standard basis (column) vectors of 𝒜^2g. Using (<ref>), <Ref> and the definition of B, we compute Cb_2 = B^*Av = B^* [f(t), 0, …, 0]^⊺ = [f(t), 0, …, 0]^⊺ and b_2^⊺ C = v^*AB = [-tf(t^-1),0,…, 0]B = [-tf(t^-1),0,…, 0], and we conclude that C has the form C = [ [ c_1 f(t) c_2 ⋯ c_2g - 1; -tf(t^-1) 0 0 ⋯ 0; c_2g 0 3c3*C'; ⋮ ⋮ 3c; c_4g - 3 0 3c ]] for some constants c_1, …, c_4g - 3∈𝒜 and (2g - 2)-by-(2g - 2) submatrix C'. Therefore, up to multiplication by a unit of [t,t^-1], Δ_K = (C) = f(t) · f(t^-1) ·(C'). Since Δ_K and f belong to the ring [t,t^-1] and f is primitive by <Ref>, (C') is in [t,t^-1] as well. It follows that (<ref>) gives a factorization of Δ_K in [t,t^-1]. The proof is completed by observing that, if the crossing change is non-nugatory, then f(-1) ≠± 1 by <Ref>. § KNOTS WITHOUT COSMETIC CROSSINGS In this section, we use <Ref> to prove the cosmetic crossing conjecture for an infinite family of knots. <Ref> Let p_1, p_2, p_3, p_4 and q be integers satisfying: * p_i ≥ 1 for all i, and q > min(p_1, …, p_4). * p_i ≡ 1 modulo 4 for all i, and q ≡ 3 modulo 4. Then the pretzel knot K = P(p_1,p_2,p_3,p_4,-q) does not admit a non-nugatory, cosmetic generalized crossing change. In <Ref>, we've chosen the parameters of the knot K so that our <Ref> is necessary to prove the result. In particular, it is not difficult to show that: * K is not fibered unless p_1 = … = p_4 = 1 and q = 3, since Δ_K is not monic (see (<ref>) below), so we can't apply <cit.>. * K has genus g(K) > 1, since Δ_K has degree four (see (<ref>)), so we can't apply <cit.>. * K is not a special alternating knot, since K satisfies σ(K) < |2g(K)|, where σ denotes the signature (see (<ref>)), so we can't apply <cit.>. The knot K also appears to have bridge number higher than two for most choices of p_1, …, p_4 and q, though this is difficult to quantify rigorously. Additionally, Lidman and Moore prove that if Σ(K) is an L-space, then K does not admit a non-nugatory cosmetic crossing change if each of the invariant factors of the finite abelian group H_1(Σ(K)) is square-free <cit.>. For many choices of p_1, …, p_4 and q, this result, <cit.>, or <cit.> suffices to prove the cosmetic crossing conjecture for K. For infinitely many other choices, however, <Ref> is needed. For example, if K = P(1,5,1,1,-q) for some q > 1 congruent to 3 modulo 4, then H_1(K) is cyclic of order 16q - 5. If q = 100m + 55 for some m ≥ 0, then 25 is a factor of |H_1(K)|, and in this case none of the aforementioned results can be applied. It also seems possible that <Ref> is necessary to prove <Ref> for generic choices of p_1, …, p_4 and q, but we will not attempt to prove this here. Condition (i) ensures the knot K is quasi-alternating by <cit.>, so has branched double cover an L-space <cit.>. Therefore, by <Ref>, it suffices to show Δ_K has no factor of the form f(t)f(t^-1). The bounded checkerboard surface in the standard pretzel diagram of K is orientable, with Seifert matrix given by V = [ (p_1 + p_2)/2 -p_2 0 0; 0 (p_2 + p_3)/2 -p_3 0; 0 0 (p_3 + p_4)/2 -p_4; 0 0 0 (p_4 - q)/2 ]. By (ii) the matrix V reduces modulo 2 to [ 1 1 0 0; 0 1 1 0; 0 0 1 1; 0 0 0 1 ], so the Alexander polynomial of K satisfies Δ_K(t) = (V - tV^⊺) ≡ 1 + t + t^2 + t^3 + t^4 2. Let ψ(t) ∈_2[t] be the polynomial on the right side of (<ref>); then it is sufficient to show that ψ is irreducible. Certainly ψ has no linear factors, since ψ(0) = ψ(1) = 1. 
Consequently, if ψ factors non-trivially then ψ is a product of two irreducible quadratic polynomials. There is only one such polynomial, 1 + t + t^2, and (1 + t + t^2)^2 = 1 + t^2 + t^4 ≠ψ(t). Thus ψ is irreducible, and Δ_K is as well. amsplain
http://arxiv.org/abs/2407.13215v1
20240718065503
Scaling limit of the KPZ equation with non-integrable spatial correlations
[ "Luca Gerolla", "Martin Hairer", "Xue-Mei Li" ]
math.PR
[ "math.PR", "math-ph", "math.MP", "60H15, 60F17" ]
Imperial College London, SW7 2AZ London, UK. EPFL, 1015 Lausanne, Switzerland. luca.gerolla16@imperial.ac.uk, martin.hairer@epfl.ch, xue-mei.li@epfl.ch Scaling limit of the KPZ equation with non-integrable spatial correlations Luca Gerolla^1, Martin Hairer^1,2, Xue-Mei Li^1,2 Received X XX, XXXX; accepted X XX, XXXX =========================================================================== § ABSTRACT We study the large scale fluctuations of the KPZ equation in dimensions d ≥ 3 driven by Gaussian noise that is white in time Gaussian but features non-integrable spatial correlation with decay rate κ∈ (2, d) and a suitable limiting profile. We show that its scaling limit is described by the corresponding additive stochastic heat equation. In contrast to the case of compactly supported covariance, the noise in the stochastic heat equation retains spatial correlation with covariance |x|^-κ. Surprisingly, the noise driving the limiting equation turns out to be the scaling limit of the noise driving the KPZ equation so that, under a suitable coupling, one has convergence in probability, unlike in the case of integrable correlations where fluctuations are enhanced in the limit and convergence is necessarily weak. Mathematics Subject Classification: 60H15, 60F17 Keywords: KPZ equation, stochastic heat equation, fluctuations, long range correlations plain § INTRODUCTION In this article, we investigate the large-scale dynamics of the Kardar–Parisi–Zhang (KPZ) equation on ^d in dimension d ≥ 3. The KPZ equation ∂_t h = 12 Δh + 12 |∇h |^2 + βξ , was originally introduced in <cit.> to model random interface growth. Here, β is a coupling constant, and ξ is a mean zero Gaussian noise. Over time, it has evolved into one of the most extensively studied stochastic PDEs due to its ability to capture the universal behaviour of many probabilistic models on one hand and the mathematical challenges it presents for a well-posed solution theory on the other. Very recently, significant achievements have been made, demonstrating that, in dimension 1, the rescaled solution ϵ^1/2 h(ϵ^-3/2t, ϵ^-1 x ) - C_ t converges to the KPZ fixed point <cit.>. This accomplishment built upon the remarkable progress made in the last couple of decades <cit.>. While the KPZ equation offers insights into universal phenomena, it poses substantial mathematical challenges when it comes to defining a robust and mathematically rigorous solution that exhibits the aforementioned physical relevance <cit.>. In the original model, the mean zero Gaussian noise ξ has short range correlations. In that case, it was shown recently in a number of works <cit.> that the large-scale behaviour of the KPZ equation in dimensions d ≥ 3 in the weak coupling regime is described by the Edwards–Wilkinson model, namely simply the additive stochastic heat equation driven by space-time white noise. In the present article, we consider instead spatially coloured noise with long range correlations, and we are interested to explore how these correlations affect the KPZ scaling limit in the supercritical regime d ≥ 3. In such settings, where the correlation is not integrable, so that the form of the effective variance in the above references is infinite, an effective fluctuation theory for the stochastic heat equation (SHE) has been developed <cit.>. Like there, we consider large scales correlations of polynomial decay |x|^-κ. Our result on the KPZ scaling limit introduced below aligns with the physics literature <cit.> which predicts a Gaussian limit when κ >2. 
We begin by describing the noise ξ considered: The Gaussian noise ξ is white in time with smooth spatial covariance R, formally [ ξ(t,x)ξ(s,y)] = δ(t-s) R(x-y). There exists κ∈ (2,d) such that R(x) ≲ (1 + |x|)^-κ and, for any x ∈^d ∖{0}, lim_→ 0ϵ^-κ R( ϵ^-1x ) = |x|^-κ. For example, the noise obtained by convolving (in space) a space-time white noise with a smooth function ϕ :^d→_+ behaving at infinity like ϕ(x) ∼ |x|^-d+κ/2 leads to R(x) = ∫_^dϕ(x+y) ϕ(y) dy, which satisfies Assumption <ref>. Writing h for the solution to (<ref>), but with a slowly varying initial condition of the form ^κ2-1h_0( x), we rescale it by setting, for ∈ (0,1], h_(t,x) = ^1-κ2h(t/^2,x/)-t2 β^2 R(0) ϵ^-1-κ2 . A simple calculation shows that h_ then solves the rescaled KPZ equation [eq:KPZ_rescaled] ∂_t h_= 12 Δh_+ 12 ^κ2-1 |∇h_|^2 - 12 β^2 R(0) ϵ^-1-κ2 + βξ^ , with initial condition h_0. Here ξ^ϵ is the rescaling of ξ such that [ ξ^ϵ(t,x) ξ^ϵ(s,y) ] = δ(t-s) ^-κ R(x-y ϵ) . Under this scaling, ξ^ converges in law, as → 0, to a non-trivial Gaussian limit ξ^0 with covariance [ξ^0(t,x) ξ^0(s,y)]= δ(t-s) |x-y|^-κ . It will be convenient to upgrade this convergence to convergence in probability (which can always be achieved by Skorokhod's representation theorem): A coupling between the ξ^ is good if, for every ψ∈_c^∞(^d+1), there exists a random variable ξ^0(ψ) such that ξ^(ψ) ⇒ξ^0(ψ) in probability. If one can write ξ(t,x) = ∫_^dϕ(x-y)η(t,y) dy, for η a space-time white noise and for ϕ as described just above, then a natural good coupling is given by ξ^(t,x) = ^-(d+κ)/2 ∫_^d ϕ((x-y)/)η(t,y) dy . Given the limiting noise ξ^0, we write for the solution to the corresponding additive stochastic heat equation: [eq:AddSHE_h0] ∂_t =12 Δ+ βξ^0 , (0, ·) = h_0 . Our main result is then the following. Let d≥ 3 and let ξ satisfy Assumption <ref>. For any p≥ 1, α < 1 - κ 2, and σ < - 1 - κ 2, there exists a constant c<0 and a positive value β_0 = β_0(p, α) such that for any β<β_0, for any deterministic initial condition h_0 ∈_b(^d), and any good coupling of the noise, one has, for any T > T_0 > 0 lim_→0 (h_(t,x) - ^1-κ2c) = , in probability in L^p([T_0, T], ^α(E)), for any compact E ⊂^d. We provide the definition and comment on the constant c in Remark <ref> below, whilst the proof of this result will be given in Section <ref>, where we also provide the definition of the spaces ^α(E) and their semi-norms. In fact, we will show a limit theorem for general functions Φ(u) of solutions to the multiplicative stochastic heat equation ∂_t u(t,x)=12 Δu (t,x)+βu(t,x) ξ(t,x) , u(0,·) = 1 , from which Theorem <ref> essentially follows as a special case with Φ = log. (We still need a little bit more work since we allow for general initial conditions in Theorem <ref>, while our general fluctuation result, Theorem <ref>, holds for u_0 ≡ 1. It is this additional work that is performed at the end of Section <ref>.) Under the hypotheses stated earlier, (<ref>) admits a unique solution <cit.>. Since we are interested in its large-scale fluctuations, we write u_ϵ(t,x) = u(t/ϵ^2, x/ϵ) for the diffusively rescaled solution, which solves ∂_t u_ϵ(t,x)=12 Δu_ϵ(t,x)+βϵ^κ/2 -1 u_ϵ(t,x) ξ^ϵ(t, x) , u_ϵ(0, ·)=1, with ξ^ϵ as before. We then have the following functional central limit theorem, which will be restated as Theorem <ref> below and proven in Section <ref>. Consider a map : _+ → satisfying Assumption <ref> and let u_^Φ(t,x) = ^1-κ/2 ( (u_(t,x))-[(u_(t,x))]) . 
With assumptions on d, R, p, α and σ as in Theorem <ref>, there exists β_0^=β_0^(α, p) >0 and ν_≥ 0 such that, for all β < β_0^, for any T >0, and any good coupling, u_^Φ converges in probability to ν_^0 in the space L^p([0, T], ^α(E)) for any compact E ⊂^d. Here, ^0 denotes the solution to the additive stochastic heat equation ∂_t ^0 =12 Δ^0 + βξ^0 , ^0(0, ·) = 0 . Our class of possible transformations includes the Cole–Hopf transform Φ(u) = log u, for which one has ν_ = 1. In this case we recover the fluctuations of KPZ since h_ (t,x) = ^1-κ2log u_ϵ(t,x) solves (<ref>) with flat initial conditions h_0 ≡ 0. We discuss and provide the explicit expression of ν_ in the general case in Remark <ref> below. The main tool (and this is also one of the main novelties of this article) to prove the above is a homogenisation result for the fundamental solution of (<ref>). In order to formulate it, write w_s,y(t,x) for the solution to (<ref>) at times t ≥ s with initial condition at time s given by w_s,y(s,·) = δ_y: Let d≥ 3 and ξ satisfies Assumption <ref>. Then for any p≥ 1, for sufficiently small β, there exist stationary random fields Z⃗ and Z such that, with μ = 12 - 1κ, [e:mainHomog] w_s,y(t,x)/P_t-s(y-x) - Z⃗(t,x)Z(s,y)_p ≲ (1 + t-s)^-μ (1 + (1 + t-s)^-1 2|x-y|) . A slightly more detailed version of this statement will be formulated as Proposition <ref> below. The field Z⃗ is the analogue of the pullback field already appearing in <cit.> where it was denoted by Z. It is obtained as the limit Z⃗(t,x) = lim_s → -∞ u^(s)(t,x), where u^(s) denotes the solution to (<ref>) with initial condition 1 at time s. It turns out that the field Z is given by “Z⃗ running backwards in time” (which explains our notations) in a sense that will be made precise below, see Theorem <ref> and equation (<ref>). A shadow of the field Z can be seen in the constant c̅ appearing in <cit.>, whose inverse would correspond to the expectation of Z (which in our case equals 1 as a consequence of the fact that we consider noise that is white in time). More explicitly, these fields satisfy the equations Z⃗(t,x) =1 + β∫_-∞^t∫_^d P_t-r(x-z) Z⃗(r,z) ξ(dr,dz) , Z(s,y) =1+β∫_s^∞∫_^d P_r-s(y-z)Z(r,z) ξ(dr,dz) , with the stochastic integral appearing in the second expression being of backwards Itô type. Using the Clark–Ocone formula we recover a divergence representation for the fluctuations of (u), which can be expressed in terms of the fundamental solution of (<ref>). Relying on the large time homogenisation of w_s, y from Theorem <ref> and mixing of the stationary fields, we are able to approximate the fluctuations of (u) by the fluctuations of the linear SHE (Proposition <ref>) and then prove Theorem <ref>. We shall explain more in details the argument and carry out the proof in Section <ref>. Unlike what happens in the case of short-range correlations, the constant c in Theorem <ref> is all that is left from the KPZ nonlinearity since neither the covariance of the noise nor the coefficient in front of the Laplacian end up being affected by the scaling limit. The constant is given by c=[log Z] with Z Z⃗(0,0). Since Z = 1 and Z has non-zero variance by (<ref>), c is strictly negative. 
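To see at a glance why c is strictly negative: since Z > 0 with Z = 1 and non-zero variance, Jensen's inequality applied to the strictly concave logarithm gives [log Z] < log [Z] = 0. The following toy Monte Carlo (illustrative only: a mean-one lognormal stand-in is used in place of the actual field Z, which is only defined for d ≥ 3 and small β, and the value of sigma is an arbitrary choice) makes this concrete.

import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3                            # arbitrary "noise strength" of the toy stand-in
# mean-one lognormal: E[Z] = exp(mu + sigma^2/2) = 1 when mu = -sigma^2/2
Z = rng.lognormal(mean=-sigma**2 / 2.0, sigma=sigma, size=10**6)

print("E[Z]     ~", Z.mean())          # close to 1
print("E[log Z] ~", np.log(Z).mean())  # close to -sigma^2/2 < 0, mirroring c = E[log Z] < 0

For the lognormal stand-in one has [log Z] = -σ^2/2 exactly, which the sample averages reproduce up to Monte Carlo error.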
The way one can intuitively understand the presence of c is the following: since we start with a fixed -independent bounded continuous initial condition h_0 which looks quite different from the stationary solutions (which exhibit oscillations of size about ^1-κ2), the KPZ nonlinearity and the renormalisation constant appearing in (<ref>) don't quite balance out initially. By the time the process has become locally sufficiently close to stationarity, this imbalance has already caused it to shoot down by a substantial amount. This strongly suggests that one can find a collection Y_ of stationary processes such that lim_→ 0 Y_ = 0 in distribution and such that a statement analogous to Theorem <ref>, but without the constant c, holds provided that one starts h_ with initial condition Y_ + h_0. In fact, we believe that one should be able to take Y_(x) ^1-κ2(log(u_(t_,x)) - c) , (but independent of the noise ξ) for a suitable sequence of times t_→ 0, and show that lim_ϵ→ 0h_ϵ=. With the notations above at hand, one has ν_ = [Z ' (Z)] in Theorem <ref> where Z=Z⃗(0,0). We will see in Lemma <ref> that our assumptions on guarantee that this quantity is finite. Given the dependence of the pullback stationary solution Z⃗ on β, we note that ν_ actually depends on β in general and that, as β→ 0, ν_ converges to Φ'(1). In the special case (x)=x, one has ν_ = 1 and we recover the result of <cit.>, at least in the case σ(u) = u. Note that the effective variance differs from that obtained for the case of integrable R in <cit.>. There, the effective variance ν_eff is given by the expression ν^2_eff = ν_Φ^2 ∫_ ^d R(x) [ Z⃗(0,x) Z⃗(0,0)] dx . In their works, the fluctuation scale depends on the dimension, while in our setting the fluctuations scale ϵ^κ -2 depends on the decay coefficient κ of the noise covariance. §.§.§ Structure of the article The remainder of the article is structured as follows. In Section <ref>, we introduce the random fields Z⃗ and Z and we collect some useful a priori bounds on these fields as well as on the solutions to the multiplicative stochastic heat equation (<ref>) and their Malliavin derivatives. Section <ref> forms the bulk of the article and contains the proof of the homogenisation result. Finally, in Section <ref>, we use all of these ingredients to provide a proof of Theorems <ref> and <ref>. §.§.§ Acknowledgements The authors are grateful to Alex Dunlap and Yu Gu for discussions on this topic and in particular for pointing out that our proof may yield convergence in probability in this setting. This research was partially supported by the Royal Society through MH's research professorship. XML acknowledges support from EPSRC grant EP/V026100/1 and EP/S023925/1. LG is supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). § ANALYSIS OF THE STATIONARY SOLUTION §.§ Notations and conventions The background probability space is (Ω, , ℙ), and we write _t for the filtration generated by ξ, i.e. _t is the smallest ℙ-complete σ-algebra such that the random variables {ξ( ϕ) : ϕ⊂ (-∞,t]×^d, ϕ∈_c^∞ (_+×^d)} are all _t-measurable. We generally interpret an integrable function ξ as a distribution by the following expression: ξ(ϕ)=∫_×^dξ(s,y)ϕ(s,y) dsdy. The notation ·_p stands for the L^p(Ω) norm. Moreover, * With a≲ b we mean a ≤ C b for some constant C>0, independent of ϵ. * (^d) denotes the set of smooth functions on ^d with compact support. 
* For E⊂^d open and k∈, we denote with ^k(E) the space of k times continuously differentiable functions on E and _b^k(E) the subset of bounded ^k(E) functions with all partial derivatives up to order k bounded. §.§ Forward and backward stationary solutions For the purpose of this section, it will be convenient to assume that the underlying probability space Ω is the space of space-time distributions and that the noise ξ is simply the canonical process. With u^(s)(t,x) denoting the solution to (<ref>) with initial condition u^(s)(s,·) ≡ 1 at time s, we recall <cit.>. To avoid confusion, we remark that in <cit.>, u^(K)(t,x) denotes the solution that starts from time -K. A result of similar nature can be found in <cit.>. Let d≥ 3 and ξ satisfies Assumption <ref>. Given any p≥1, there exists a positive constant β_1(p) = β_1(p, d, R) and a space-time stationary random field Z⃗ such that, for β≤β_1(p), one has u^(s)(t, x) - Z⃗(t,x)_L_p(Ω)≲ 1 ∧ (t-s)^2-κ/4 , uniformly over all s ≤ t and x ∈^d. Recall that κ∈ (2,d). Furthermore, for any compact region ⊂^d and every t ∈, one has lim_s → -∞sup_x ∈ | u^(s)(t,x)-Z⃗(t,x) |^p → 0. It follows from (<ref>) that Z⃗ is a random fixed point for the random dynamical system generated by the stochastic heat equation (<ref>). In fact, we have even more structure as we will describe now. Our probability space admits a natural “time reversal” involution Ω→Ω and “space-time translations” _t,xΩ→Ω such that, formally (ξ)(t,x) = ξ(-t,x) , (_t,x ξ)(s,y) = ξ(s+t,x+y) , with the obvious rigorous interpretations in terms of test functions. These operations naturally yield linear operators acting on the space of random variables A Ω→ by [e:opRandom] (A)(ξ) A(ξ) , (_t,x A)(ξ) A(_t,xξ) . Since the law of ξ is invariant under and _t,x, the operations in (<ref>) are isometries for every L^p(Ω,ℙ)-space. We will repeatedly use the fact that, by uniqueness of solutions to (<ref>), one has the following. For any t∈, x∈^d, one has _t,x = _-t,x and the following holds almost surely: u^(s)(t,x)(_0,yξ) =u^(s)(t, x+y)(ξ) , u^(s)(t,x)(_r,0ξ) =u^(s+r)(t+r, x)(ξ) . Given any random variable A as above, we then define the stationary random fields A⃗(t,x) = _t,x A , A(t,x) = _t,xA . If we now define [e:defZs] Z = lim_s→-∞ Z_s , Z_s = u^(s)(0,0) , then we see that this definition is consistent with our notation since by Lemma <ref> Z⃗(t,x):=_t,x Z, defined as in (<ref>), satisfies Z⃗(t,x) = lim_s→-∞ u^(s)(0,0)(_t,xξ) = lim_s→-∞ u^(t+s)(t,x)(ξ) = lim_s→-∞ u^(s)(t,x) , which is indeed the same quantity appearing in (<ref>), as well as in (<ref>), by virtue of (<ref>). With the notation above Z(s,y)=_s,y Z = _-s,y Z= Z⃗(-s, y) , so that Z(s,y)(ξ)=lim_r→-∞u^(r)(-s,y)(ξ) , which is indeed consistent with (<ref>) via the change of variables r ↦ -r (and noting that such a change of variables does indeed turn Itô integrals into backwards Itô integrals by simple Riemann sum approximations). Let us now explain why Z appears in (<ref>). If it were the case that w_s,y(t,x) = P_t-s(y-x) Z⃗(t,x)Z(s,y) , then, as a consequence of the fact that Z⃗ = 1 and the law of large number we could recover Z by Z(s,y) = lim_t →∞ ∫_^d w_s,y(t,x) dx . It is therefore natural to consider the stochastic process [e:defn_Csy] C_s,y(t) = ∫_^d w_s,y(t,x) dx . As we will see shortly, it is then indeed the case that one has the identity Z(s,y) = lim_t →∞ C_s,y(t). In fact we will show more, namely that with Z_s as in (<ref>), one has the exact identity C_s,y(t) = Z_s-t(s,y). 
Under the conditions of <Ref>, one has C_s,y(t) = Z_s-t(s,y), and C_s,y(t)- Z(s,y)_p≲ 1 ∧ (t-s)^2-κ/4. By the definition (<ref>), Z_s-t(s,y):=_s,y Z_s-t=_-s,y u^(s-t)(0,0). By Lemma <ref>, Z_s-t(s,y)= u^(-t)(-s,y), lim_t→∞Z_s-t(s,y)= Z(s,y). We shall use the method of duality. The basic observation is that if u denotes the SHE running forward in time with initial condition u_s at some time s and v denotes the one running backwards in time with terminal condition v_t at some time t > s then u_r, v_r, where the bracket denotes the inner product on L^2(^d), is independent of r ∈ [s,t]. Formally, this is because [e:formal] _̣r u_r, v_r = Δu_r + ξu_r, v_r - u_r, Δv_r + ξv_r = 0 . One can make this precise in the following way. Fix a mollifier ρ∈_c^∞(×^d) such that ∫ρ = 1 and ρ(t,x) = ρ(-t,x) = ρ(t,-x). For any δ > 0, we write ρ_δ(t,x) = δ^-2-dρ(t/δ^2,x/δ) and we set ξ_δ(t,x) = (_t,xξ)(ρ_δ). Fix then some initial time s and constant K_δ and consider the solution u^δ(r,x) to the random PDE [e:defudelta] _̣r u^δ= 12 Δu^δ+ βu^δξ_δ- K_δu^δ , u^δ(s,·) = δ_y . We also fix some final time t and consider the solution v^δ to the random backwards PDE [e:defvdelta] _̣r v^δ= -12 Δv^δ- βv^δξ_δ+ K_δv^δ , v^δ(t,·) = 1 . At this stage we note that if we set ṽ^δ(r,x) = v^δ(-r,x), then ṽ^δ satisfies the forwards PDE [e:defvtilde] _̣rṽ^δ= 12 Δṽ^δ+ βṽ^δξ_δ- K_δṽ^δ , ṽ^δ(-t,·) = 1 . At this point we note that (<ref>) fits into the setting of <cit.> (in fact, the regularity of our limiting noise ξ is better than what is considered there so some of the arguments could be simplified). Combining this with <cit.> which shows that the BPHZ lift of ξ^δ to the regularity structure of <cit.> converges as δ→ 0, we conclude that, provided that the constants K_δ are chosen appropriately, u^δ converges as δ→ 0 to the Green's function w_s,y. This convergence takes place in an exponentially weighted space that enforces rapid decay of the solution. By the exact same argument, one finds that ṽ^δ converges to u^(-t) as δ→ 0, this time in a weighted space that allows for some slow growth of the solutions. In particular, this shows that v^δ_r converges to v_r(x) = u^(-t)(-r,x) = Z_r-t(r, x) . We conclude that, for any r ∈ [s,t], one has lim_δ→0 u_r^δ,v_r^δ = ∫_^d w_s,y(r,x) Z_r-t(r, x) dx . On the other hand, it follows immediately from (<ref>) and (<ref>) that u_r^δ,v_r^δ is independent of r, as in the formal calculation (<ref>). Since Z_0 = 1 and Z_0(t, ·)=1, it follows that Z_s-t(s, y) = ∫_^d w_s,y(t,x) Z_0(t, x) dx = C_s,y(t) , which is precisely as claimed. The rate of convergence follows from (<ref>) and <Ref>. §.§ A priori bounds Our main goal in this subsection is to establish L^p bounds on the Malliavin derivative of the random variable u as well as on Φ(u). We write for the reproducing kernel space of the noise ξ, which is formed by completing the space (×^d) under the inner product ⟨ f , g ⟩_ = ∫_-∞^∞∫_^2d f(s, y_1) g(s, y_2) R(y_1-y_2) dy_1 dy_2 ds , after quotienting out zero norm elements. The family of random variables ξ(f)=∫ f(s,y) ξ(ds, dy) , f∈ , defines an isonormal Gaussian process on : [ ξ(f) ξ(g)] = ⟨ f , g ⟩_. Denote by D the Malliavin derivative D of a random variable, D(ξ(f))=f. Let _p^∞(^n) denote the space of smooth functions F ∈^∞(^n) with F and its partial derivatives growing at most polynomially. Denoting by the class of smooth random variables of the form F(ξ(f_1), …, ξ(f_n)), we set D F(ξ(f_1), …, ξ(f_n))=∑_i=1^n ∂_i F(ξ(f_1), …, ξ(f_n)) f_i. 
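The duality between D and the divergence operator, recalled below, reduces in the simplest case to the Gaussian integration-by-parts (Stein) identity [φ(ξ(f)) ξ(h)] = [φ'(ξ(f))] ⟨ f, h ⟩_, since D(φ(ξ(f))) = φ'(ξ(f)) f. The following Monte Carlo sketch (illustrative only; the Gram matrix of (f,h) in the reproducing kernel space and the choice φ = tanh are arbitrary) checks this identity numerically.

import numpy as np

rng = np.random.default_rng(2)
# hypothetical Gram matrix of (f, h) in the reproducing kernel space:
# G = [[<f,f>, <f,h>], [<f,h>, <h,h>]]
G = np.array([[2.0, 0.7],
              [0.7, 1.5]])
L = np.linalg.cholesky(G)
samples = rng.normal(size=(10**6, 2)) @ L.T       # columns are (xi(f), xi(h))
xi_f, xi_h = samples[:, 0], samples[:, 1]

phi = np.tanh
dphi = lambda x: 1.0 - np.tanh(x) ** 2

lhs = np.mean(phi(xi_f) * xi_h)                   # E[ phi(xi(f)) xi(h) ]
rhs = np.mean(dphi(xi_f)) * G[0, 1]               # E[ phi'(xi(f)) ] <f, h>
print(lhs, rhs)                                   # the two estimates agree up to Monte Carlo error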
The Malliavin derivative operator D: L^p(Ω)→ L^p(Ω; ), with initial domain is closable for any p> 1. We define ^1,p as the completion of smooth random variables under the norm X ^p_^1,p ( |X|^p ) + ( DX_^p ). In all the cases we consider, our random variables X will be such that their Malliavin derivatives DX have natural function-valued representatives, and we denote their pointwise evaluations by D_t, x X. For example, for a deterministic function h, D_r,zξ(h) = h(r,z). The L^2 adjoint of D is denoted by δ and is known as the divergence operator or Skorokhod integral. It is shown in <cit.> that, for every p ≥ 2, δ extends to a bounded operator δ: ^1,p() → L^p(Ω). For a deterministic f∈, we have δ(f) = ξ(f) and D (ξ(f))=f. If w ∈ δ is _t-adapted, δ(w) coincides with the Itô integral. We refer to e.g. <cit.> for more details on Malliavin calculus. Before stating the lemma, we recall some covariance estimates – used for the a priori bounds on the moments below and Theorems <ref>-<ref> in next sections. If R(x) ≲ (1 + |x|^κ)^-1, where κ∈ (2, d), then uniformly in t > 0 and in x ∈^d, ∫_^d P_t(x - z) R(z) dz ≲(1 + |x| + √(t))^- κ, ∫_0^∞∫_^d P_r(z-x) R(z) dr dz ≲1 ∧|x|^2-κ, ∫_0^t ∫_^2d P_r(z_1-x) P_t-r(z_2) R(z_1 -z_2) dr dz_1 dz_2 ≲t(1+|x|+√(t))^-κ. The first two estimates follow from the proof of <cit.>. By a change of variables, the left hand side of (<ref>) equals ∫_0^t ∫_^d P_t(y-x)R(y) dr dy ≲ t(1+|x|+√(t))^-κ , so that (<ref>) follows from (<ref>). In the following we gather moment estimates on u, Z⃗ and the Green's function w_s,y appearing in Theorem <ref>. Crucially, for sufficiently small β, the negative moments of u are also uniformly bounded. Assume that R satisfies Assumption <ref>. For any p ≥ 2, there exist constants β_0(p)>0 and C_p, such that if β≤β_0, the following holds for the solution of (<ref>). * The solution of (<ref>) has positive and negative moments of all orders: sup_s ≥0, x∈^d u(s,x)_p ≤C_p , sup_s≥0, x∈^d u(s,x)^-1 _p ≤C_p . * For all x ∈^d and t > 0, u(t,x) ∈^1,p. The Malliavin derivative of u is given by [e:MallDeru] D_s,yu(t,x) = βu(s,y)w_s,y(t,x) _[0,t](s) , and the process w satisfies the pointwise estimate [e:bound_w] w_s,y(t,x)_p ≲P_t-s(x-y) . * The process C_s,y(t) defined in (<ref>) satisfies sup_ysup_t>s≥0 C_s,y(t) _p ≲ 1 and [e:estimate_Csy] C_s,y(t) - C_s,y(τ) _p ≲β(1 + (t∧τ)- s)^2 - κ/4 . Assertion <ref> also holds if we replace u by Z⃗, and one has [e:MallDerZ] D_r,z Z⃗(t,x) = βZ⃗(r,z) w_r,z (t, x)_(-∞, t](r) . The positive moment bounds in (<ref>) follow from <cit.>. Concerning the Malliavin derivative, we note that by the mild solution formulation we have u(t,x)=1+ βδ( v_t,x), where v_t,x(r,z) = P_t -r ( x -z) u(r,z) _[0,t](r) . Differentiating on both sides and using the commutation relation <cit.>, it follows that for s ∈ [0, t] one has D_s,y u(t,x) = βP_t -s ( x -y) u(s,y) + β∫_s^t ∫_^d P_t -r ( x -z) D_s,y u(r,z) ξ(dr, dz) . This shows that D_s,y u solves the same equation as u itself, but with initial condition β u(s,·)δ_y at time s, so that (<ref>) follows by linearity. The moment bound on w_s,y follows by the same steps as in the proof of <cit.> (we omit tracking the same constants here). Consider w^(0)_s, y (t,x) := P_t-s(x-y), and w^(n+1)_s,y(t,x) = P_t-s(x-y) + β∫_s^t ∫_^d P_t-r(x-z) w^(n)_s,y(r,z) ξ(dr,dz) . 
Letting w̃^(n)_s,y(t,x) = w^(n+1)_s,y(t,x) - w^(n)_s,y(t,x), by (<ref>) one has w̃^(n)_s,y(t,x)_p ≤βC_p ( ∫_s^t∫_^2d (∏_i = 1^2 P_t-r (x-z_i) w̃^(n-1)_s,y(r,z_i)_p ) | R(z_1 -z_2)| dz dr )^1 2 ≲βC_p sup_r > s sup_z w^(n-1)_s,y(r,z)_p . Thus, for sufficiently small β≤β_0(p), w^(n)_s,y converges to w_s,y as n →∞. Moreover, again by (<ref>), we have w^(n+1)_s,y (t,x) ^2_p ≲P_t-s(x-y) + β^2 ∫_s^t ∫_ ^2d R(z_1-z_2) ( ∏_i=1^2 P_t-r(x-z_i) w^(n)_s,y (s, z_i) _p ) dz dr . Given these bounds and Lemma <ref>, (<ref>) follows by <cit.>. In view of Proposition <ref>, the bounds on the mass process C are an immediate consequence of (<ref>) in the uniform moment bounds on Z⃗. The uniform control on the negative moments of u in (<ref>) is a consequence of the fact that log u(t,x) has sub-Gaussian left tails, namely for θ > 0, ℙ( logu(t,x) ≤-θ) ≤C e^-θ^2/2 . The proof of (<ref>) follows by <cit.>, given that u is uniformly L^2 bounded for any β≤β_0(2). Their strategy to discretize and cut-off of the mollifier ϕ does not require ϕ to be compactly supported. Then, all steps carry through once noted that even with long range covariance R, uniformly in t>0, 𝐄_B^(1), B^(2) [ ∫_0^t R(B^(1)_s - B^(2)) ds e^ β^2 ∫_0^t R(B^(1)_s - B^(2)) ds ] ≲sup_ β≤β_0(2), t>0 [u(t, 0)^2] ≲1, where 𝐄_B^(1), B^(2) denotes the expectation with respect to Brownian motions B^(i) independent of ξ. The first inequality above follows by noting that, via Feynman-Kac formula, one has an explicit expression for the second moment of u (cf. <cit.>): [u(t, 0)^2] = 𝐄_B^(1), B^(2) [ e^ β^2 ∫_0^t R(B^(1)_s - B^(2)) ds ]. Regarding Z⃗, we note that it solves the fixed point problem [e:Z_mild_eqn] Z⃗(t,x)=1 + β∫_-∞^t∫_^d P_t-s(x-y) Z⃗(s,y) ξ(ds,dy) , whence we conclude that D_r,z Z⃗(t,x) =βP_t-r(x-z) Z⃗(r,z)+ β∫_-∞^t∫_^d P_t-s(x-y) D_r,z Z⃗(s,y) ξ(ds,dy). Since D_r,zZ⃗(t,x)=0 for r>t, assertion (<ref>) holds. Since the bounds in <ref> are uniform in time, their extension to Z⃗ is immediate. It remains to show (<ref>), which follows from <Ref>, the fact that Z(s,y)(ξ)=lim_r→ -∞u^(r)(-s,y)(ξ), and <Ref>. With positive and negative moment bounds in place, we can Malliavin differentiate transformations of u and then approximate fluctuations through the homogenisation result. Assume ∈^1(_+) and that, for some M, q ≥ 1, satisfies for all z ∈_+, |(z)| + | '(z) | ≤M(z^-q + z^q). Then, for any p >1 there exists β_0^M,q(p) = β_0(p, d, R, q, M) such that for any β≤β_0^M,q(p), we have (u(t,x)) ∈^1,p with D_s,y [(u(t,x)) ]= β' (u(t,x)) u(s,y)w_s,y(t,x) _[0,t](s), and the following bounds are satisfied: (u(t,x))_p ≲M , '(u(t,x)) _p ≲M , D_s,y(u(t,x))_p ≲M P_t-s(x-y) , (u(t,x)) _^1,p ≲M , uniformly over s, t, x, y. Moreover, uniformly over x ∈^d we have sup_t>0 | ( ( u(t, x)) , ( u(t, 0) ) ) | ≲M ( 1 ∧|x|^2 -κ) . All the estimates above also hold with Z⃗(t,x) in place of u(t,x). Finally, we also have the following convergence rate ( u^(s)(t,x)) - (Z⃗ (t,x) )_p ≲ 1 ∧(t-s)^2-κ/4 . For β≤β_0(pq), bounds (<ref>) follow immediately from (<ref>) and the assumptions on . Observe that β_0(p) is non-increasing in p. Regarding the Malliavin derivative, let us introduce an approximation by bounded functions _n ∈_b^1(_+) such that _n = on [1/n, n]. Consider ϱ_n : _+ →_+ defined by ϱ_n(x) := 1/2n + n/2 x^2 for x ∈[0, 1/n], x for x ∈[1/n, n], 3/2 n - 1/2n (x - 2n)^2 for x ∈[n, 2n], 3/2 n for x ∈[2n, +∞). Then, we let _n := ∘ϱ_n. Note that ϱ_n ∈_b^1(_+) with sup_x|ϱ_n'(x)| ≤ 1 and range ϱ_n (x) ∈ [1/2n, 3/2 n]. Thus, _n ∈_b^1(_+) for all n. 
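As a quick sanity check of the truncation just introduced (illustrative only; the value n = 5 and the evaluation grid are arbitrary), the following sketch verifies numerically that the piecewise formula for ϱ_n glues to a C^1 function which coincides with the identity on [1/n, n], has derivative bounded by 1, and takes values in [1/(2n), 3n/2].

import numpy as np

def rho(x, n):
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    m1 = x <= 1.0 / n
    m2 = (x > 1.0 / n) & (x <= n)
    m3 = (x > n) & (x <= 2.0 * n)
    m4 = x > 2.0 * n
    out[m1] = 1.0 / (2 * n) + 0.5 * n * x[m1] ** 2
    out[m2] = x[m2]
    out[m3] = 1.5 * n - (x[m3] - 2.0 * n) ** 2 / (2.0 * n)
    out[m4] = 1.5 * n
    return out

n = 5                                              # arbitrary choice of n
x = np.linspace(0.0, 3.0 * n, 300001)
y = rho(x, n)
dy = np.diff(y) / np.diff(x)                       # numerical derivative

print("range in [1/(2n), 3n/2]:", y.min() >= 1.0 / (2 * n) - 1e-12 and y.max() <= 1.5 * n + 1e-12)
print("identity on [1/n, n]:   ", np.allclose(rho(np.linspace(1.0 / n, n, 100), n),
                                              np.linspace(1.0 / n, n, 100)))
print("sup |rho_n'| <= 1:      ", np.max(np.abs(dy)) <= 1.0 + 1e-6)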
Moreover, we note that uniformly in n: |ϱ_n (x)| ≤|x| +1, |ϱ_n (x)|^-1 ≤1 |x| + 1. Then, given growth control (<ref>) of , for β≤β_0(pq) we can also infer sup_n >0, s ≥0, x∈^d _n(u(s,x))_p ≲1 , sup_n >0, s ≥0, x∈^d _n'(u(s,x))_p ≲1. Since u(t,x) ∈^1,p by Lemma <ref>, we have _n (u(t,x)) ∈^1,p with D _n (u(t,x)) = _n' (u(t,x)) D u(t,x) by <cit.>. We let F_r,z^(n)(t,x) = D_r,z (_n (u(t,x)) ) - ' (u(t,x)) D_r,z u(t,z) = _ { 1/n ≤u(t,x) ≤n }^c ( _n' (u(t,x)) - ' (u(t,x)) ) D_r,z u(t,x). Then, for sufficiently small β we use the moments estimates on u, ', _n' and Du, F_r,z^(n) (t,x) _p ≤ℙ( { u< 1 n} ∪{ u>n})^1/3p _n' (u(t,x)) - ' (u(t,x)) _3p D_r,z u(t,x) _3p ≲n^-1/p M P_t-r(x -z). By Minkowski, Hölder's inequalities and (<ref>), [ F ^(n)(t,x) _^p] = [ ( ∫_0^t∫_ ^2d ∏_i=1^2 F_r,z_i^(n)(t,x) R(z_1 -z_2) dr dz)^p/2 ] ≤( ∫_0^t∫_ ^2d ∏_i=1^2 F_r,z_i^(n)(t,x) _p/2 R(z_1 -z_2) dr dz)^p/2 ≤n^-1 M^p ( ∫_0^t∫_ ^2d ∏_i=1^2 P_t-r(x-z_i) R(z_1 -z_2) dr dz)^p/2 ≤C M^p n^-1. Thus, F^(n)(t,x) → 0 in L^p(Ω, ) as n →∞. On the other hand, _n(u(t,x)) →(u(t,x)) in L^p(Ω). Since D is closed, we conclude (u(t,x)) ∈(D) and D_s,y(u(t,x)) = '(u(t,x)) D_s,y u(t,x). Then, given (<ref>), representation (<ref>) holds. Moreover, for sufficiently small β, given (<ref>) and the uniform moments bounds on '(u) and u, we have D_r,z(u(t,x))_p≲ M P_t-r(x-z). Since the inequality constant is independent of (t,x), we estimate similarly to above [ D (u(t,x))(t,x) _^p] ≤( ∫_0^t∫_ ^2d ∏_i=1^2 D_r,z_i(u(t,x)) _p R(z_1 -z_2) dr dz)^p/2 ≤( ∫_0^∞∫_ ^2d ∏_i=1^2 P_t-r(x-z_i) R(z_1 -z_2) dr dz )^p/2 ≲1, where we used (<ref>). This concludes the proof of (<ref>). Using the Clark–Ocone formula in the same way as in <cit.>, we obtain the bound | ( ( u(s, x)) , ( u(s, 0)) )| ≲M ∫_0^t ∫_^2d P_t-r(x-z_1) P_t-r(z_2) R(z_1-z_2) dz dr, so that (<ref>) follows by (<ref>). It remains to show (<ref>), for which we make use of the identity ( u^(s)(t,x)) - (Z⃗ (t,x) ) = ( u^(s)(t, x) - Z⃗(t,x)) ∫_0^1 '( θu^(s)(t, x) + (1-θ) Z⃗(t,x)) dθ . Using the fact that u is non-negative and the uniform a priori bounds on positive and negative moments of u and Z⃗, one obtains a uniform L^2p bound on ∫_0^1 '( θ u(t,x) + (1-θ) Z(t,x)) dθ for sufficiently small β. Thus, (<ref>) implies ( u^(s)(t,x)) - (Z⃗ (t,x) )_p ≲M u^(s)(t, x) - Z⃗(t,x)_2p ≲1 ∧(t-s)^2-κ/4 , which is precisely (<ref>), concluding the proof. § THE MAIN HOMOGENISATION THEOREM In view of Theorem <ref>, recalling that Z⃗(t,x) = lim_s→ -∞ u^(s)(t,x), we define [e:defn_rho] (t,x) = w_s,y(t,x) P_t-s(x-y) - C_s,y(t)Z⃗(t,x) . By Proposition <ref> and (<ref>), we know that C_s,y(t) - Z (s, y) _p ≲β(1 + t - s)^2- κ 4 . Since κ>2, our main homogenisation result then follows from the following proposition. The proof relies on a kernel estimate and moment bounds on Skorokhod correction terms proved below. Let μ= 12 - 1κ. Then, given p≥ 1, there exists β̃(p) > 0, such that for β≤β̃, the following holds [e:homo_estimate] sup_x,y sup_t > s ≥0 (t,x) _p / (1 + t-s)^-μ (1 + (1 + t-s)^-1 2|x-y|) ≲β . Let us consider w̃_s,y(t,x) = w_s,y(t,x)/P_t-s(x-y). Since w_s,y(t,x) solves [e:defn_w] w_s,y(t,x) = P_t-s(x-y) + β∫_s^t ∫_^d P_t-r(x-z)w_s,y(r,z) ξ(dr,dz) , one has the identity [e:exprTildew] w̃_s,y(t,x) = 1 + β∫_s^t ∫_^d q_s, t^x,y (r, z) w̃_s,y(r,z) ξ(dr,dz) , where q_s, t^x,y is kernel of the Brownian bridge starting from s at x and ending at y at time t, [e:brownian_bridge] q_s, t^x,y (r, z) = P_r-s(z-y) P_t-r(x-z) P_t-s(x-y) = P_(r-s)(t-r)/(t-s) (z-y - r-st-s(x-y)) . 
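The second equality in the display above is a purely Gaussian identity and can be checked directly; the following sketch does so numerically (illustrative only; the choice d = 3, the times and the sample points are arbitrary), using the convention P_t(x) = (2π t)^{-d/2} e^{-|x|^2/(2t)} for the heat kernel of ∂_t - 1/2 Δ.

import numpy as np

d = 3                                                   # arbitrary dimension for the check

def P(t, x):                                            # heat kernel of d/dt - (1/2) Laplacian
    return (2.0 * np.pi * t) ** (-d / 2.0) * np.exp(-np.sum(x**2, axis=-1) / (2.0 * t))

rng = np.random.default_rng(3)
s, t = 0.0, 7.0                                         # arbitrary times s < t
for _ in range(5):
    r = rng.uniform(s + 0.1, t - 0.1)                   # intermediate time r in (s, t)
    x, y, z = rng.normal(size=(3, d))
    lhs = P(r - s, z - y) * P(t - r, x - z) / P(t - s, x - y)
    rhs = P((r - s) * (t - r) / (t - s), z - y - (r - s) / (t - s) * (x - y))
    print(f"r = {r:5.2f}   lhs = {lhs:.6e}   rhs = {rhs:.6e}")   # agree to machine precision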
On the other hand, integrating (<ref>) over x, we obtain [e:Csy_eqn] C_s,y(t) = 1 + β∫_s^t ∫_^d P_r-s (z-y) w̃_s,y(r,z) ξ(dr,dz) . As a consequence, we can wrte ρ_s,y as [e:rho_id1] ρ_s,y(t,x) = w̃_s,y(t,x) - C_s,y(t) Z⃗(t,x) = 1 + β∫_s^t ∫_^d q_s, t^x,y (r, z) w̃_s,y(r,z) ξ(dr,dz) - (1 + β∫_-∞^t ∫_^d P_t-r(x-z) Z⃗(r,z) ξ(dr,dz)) C_s,y(t) = β∫_s^t ∫_^d ( q_s, t^x,y (r, z) - P_r-s(y-z)) w̃_s,y(r,z) ξ(dr,dz) - β∫_-∞^t ∫_^dP_t-r(x-z) Z⃗(r,z) ξ(dr,dz) C_s,y(t) . To treat the term C_s,y(t), we split the last integral into two parts, integrating over r ∈ (-∞, s) leads to a contribution proportional to [eq:defn_E1] E_1 := ∫_-∞^s ∫_^dP_t-r(x-z)Z⃗(r,z) ξ(dr,dz) C_s,y(t) . For the other region r ∈ [s, t] , we rewrite it in terms of a Skorokhod integral using the fact (cf. <cit.>) that F ∫ G dξ = δ (FG) + ⟨ G , DF ⟩_ with F = C_s,y(t) and G(r,z) = P_t-r(x-z)Z⃗(r,z)_[s, t](r). Moreover, C_s,y(r) is _r-adapted and we obtain, writing δξ for the Skorokhod integration: ∫_s^t ∫_^d P_t-r(x-z) Z⃗(r,z) ξ(dr,dz) C_s,y(t) = ∫_s^t ∫_^d P_t-r(x-z) Z⃗(r,z) C_s,y(t) δξ(dr, dz) +∫_s^t ∫_^2d P_t-r(x-z_1) Z⃗(r,z_1) D_r,z_2 C_s,y(t)R(z_1-z_2 ) dr dz = ∫_s^t ∫_^d P_t-r(x-z) Z⃗(r,z) C_s,y(r) ξ(dr, dz) + ∫_s^t ∫_^d P_t-r(x-z) Z⃗(r,z) ( C_s,y(t) - C_s,y(r)) δξ(dr, dz) +∫_s^t ∫_^2d P_t-r(x-z_1) Z⃗(r,z_1) D_r,z_2 C_s,y(t)R(z_1-z_2 ) dr dz . Bringing all these back to (<ref>) and given the definition (<ref>) of ρ_s,y, we obtain [e:rho_decomp] ρ_s,y(t,x) = β∫_s^t ∫_^d P_t-r(x-z) ρ_s,y(r,z) ξ(dr, dz) + βE_0 - β∑_i=1^3 E_i , where E_1 was defined in (<ref>) and the other terms are given by E_0 := ∫_s^t ∫_^d ( q_s, t^x,y (r, z) - P_r-s(y-z) - P_t-r(x-z) ) w̃_s,y(r,z) ξ(dr,dz) , E_2 := ∫_s^t ∫_^d P_t-r(x-z) Z⃗(r,z) ( C_s,y(t) - C_s,y(r)) δξ(dr, dz) , E_3 := ∫_s^t ∫_^2d P_t-r(x-z_1) Z⃗(r,z_1) D_r,z_2 C_s,y(t)R(z_1-z_2 ) dr dz . We can now estimate their L^p norms. Given a time interval I = [a,b] and an adapted random process f(s,y), by Burkholder–Davis–Gundy (BDG) inequality, followed by Minkowski and Hölder inequalities, we have the following estimate ∫_I×^d f(s,y) ξ(ds, dy)_p ≤C_p [ (∫_I∫_^2d f(s,y_1)f(s,y_2) R(y_1 - y_2) dy ds)^p 2 ]^1 p ≤C_p ( ∫_I∫_^2d f(s,y_1) _p f(s,y_2)_p R(y_1 - y_2) dy ds )^1 2 . Given the uniform moments bounds, on C_s,y and Z⃗, from Lemma <ref> and the kernel estimate (<ref>), for β≤β_0(2p) we have E_1 _p ≲ ∫_-∞^s P_t-r (x - z)Z⃗(r,z) ξ(dr, dz) _2p C_s,y (t) _2p ≲( ∫_-∞^s ∫_^2d ∏_j=1^2 ( P_t-r (x-z_j) Z⃗(r,z_j) _2p ) R(z_1 -z_2) dr dz )^1 2 ≲( ∫_-∞^s (1 + t -r)^-κ2 dr )^1 2 ≲(1 + t-s)^2-κ/4 . For E_0, we note that by (<ref>), sup_x, y sup_t > s ≥0 w̃_s,y (t,x) _p ≲1 . Then, estimating moments like above, followed by change of variables z_i ↦ z_i - y, r ↦ r -s, and Lemma <ref> below, we can bound E_0 as follows for β≤β_0(p): E_0 _p ≤C_p( ∫_s^t ∫_^2d∏_i = 1^2 ( q_s, t^x,y (r, z_i) - P_r-s(y-z_i) - P_t-r(x-z_i) ) R(z_1 - z_2) dr dz )^1 2 =C_p ( ∫_0^t-s ∫_^2d ∏_i = 1^2 ( q_0, t-s^x-y,0 (r, z_i) - P_r-s(z_i) - P_t-r(x-y-z_i) ) R(z_1 - z_2) dr dz )^1 2 ≲(1 + t-s)^-μ (1 + (1 + t-s)^-1 2|x-y|) . By Lemma <ref> below, for β≤β_1(3p)∧β_0(3p) also the terms E_2 and E_3 satisfy E_i _p ≲(1 + t-s)^-μ , i = 2,3 . Among the estimates, the worst decay rate is the bound on E_0, and we obtain ∑_i= 0^3 E_i _p ≲m(t-s, x-y) , where we denote the space-time weight m(r, z) = (1 + r)^-μ (1 + (1 + r)^-1 2|z|) . Finally, let us consider the weighted norm M := sup_x,y sup_t > s ≥0 (t,x) _p / m(t-s, x-y) . 
We estimate the first term in (<ref>) as follows: ∫_s^t ∫_^d P_t-r(x-z) ρ_s,y(r,z) ξ(dr, dz) _p ≤C_p ( ∫_s^t ∫_^2d ∏_i = 1^2 ( P_t-r(x-z_i) ρ_s,y(r,z_i) _p ) R(z_1 - z_2 ) dr dz ) ^1/2 ≲M ( ∫_s^t (1+r-s)^-2μ ∫_^2d (∏_i = 1^2 P_t-r(x-z_i) ) R(z_1 - z_2 ) dr dz ) ^1/2 + M ( ∑_j =1^2 ∫_s^t (1+r-s)^-1-2μ ∫_^2d (∏_i = 1^2 |z_j- y |^2 P_t-r(x-z_i) ) R(z_1 - z_2 ) dr dz ) ^1/2 ≲M ( ∫_s^t (1+r-s)^-2μ (1 + t-r)^-κ2 dr ) ^1/2 + M ( ∫_s^t (1+r-s)^-1-2μ (1 + t-r)^-κ2 ( | x - y|^2 + (t-r) ) dr dr ) ^1/2 , where we used (<ref>) and the fact that ∫_^d |z|^2 P_r(z-x) dz ≲ r + |x|^2 for the second integral. For α, β > 0, we note [e:alpha_beta_decay] ∫_0^T (1+τ)^-α (1 + T -τ)^-1-β dτ≲(1+T)^-min(α, β) . This follows by splitting the integration region. On the interval [0, T/2], we have T -τ≥ T/2 and ∫_0^T 2 (1+τ)^-α (1 + T -τ)^-1-β dτ≲(1 + T )^-1-β T ≲(1 + T )^-β , while on [T/2, T], we have ∫_T 2 ^T (1+τ)^-α (1 + T -τ)^-1-β dτ≲(1+T)^-α ∫_0^∞(1 + r)^-1-β dr ≲(1+T)^-α . Then, going back to (<ref>), we apply (<ref>) to both terms and obtain ∫_s^t ∫_^d P_t-r(x-z) ρ_s,y(r,z) ξ(dr, dz) _p ≲M m(t-s, x-y) . Hence, by the estimates above, for some C > 0 we have (t, x) _p ≤C βM m(t-s, x-y) + C β m(t-s, x-y) . Taking β̃>0 such that C β̃< 1 and β̃≤β_1(3p)∧β_0(3p), we conclude (<ref>). Let p≥1. With notation as in the proof of Proposition <ref>, for any β≤β_1(3p)∧β_0(3p), for i = 2, 3, we have the following uniformly in x, y ∈^d and t>s≥ 0: E_2(s, t, y, x) _p ≲β (1 + t-s )^2-κ/4 , E_3(s, t, y, x) _p ≲β (1 + t-s )^1 - κ/2 . Let us start with the Skorokhod correction term E_3. Similarly to Lemma <ref>, where we obtained D_s,yu(t,x) = β u(s,y)w_s,y(t,x)_[0,t](s), one has D_r,z w_s,y(t,x) = βw_s,y(r,z) w_r,z(t,x) _[s, t](r) , so that, integrating over x, [e:MallDerC] D_r,z C_s,y(t) = βw_s,y(r,z) C_r,z(t)_[s, t](r) . Therefore, the term E_3 is given by E_3= ∫_s^t ∫_^2d P_t-r(x-z_1) Z⃗(r,z_1) D_r,z_2 C_s,y(t)R(z_1-z_2 ) dr dz = β∫_s^t ∫_^2d P_t-r(x-z_1)Z⃗(r,z_1) w_s,y(r,z_2) C_r,z_2(t)R(z_1-z_2) dr dz . Using the uniform moments estimates on Z⃗, w and C from Lemma <ref>, by Minkowski and Hölder's inequalities, followed by (<ref>) we obtain for β≤β_0(3p): E_3 _p ≤β∫_s^t ∫_^2d P_t-r(x-z_1) Z⃗(r,z_1) w_s,y(r,z_2) C_r,z_2(t) _p R(z_1-z_2) dr dz ≲β∫_s^t ∫_^2d P_t-r(x-z_1) P_r-s(z_2-y) R(z_1-z_2) dr dz ≲β(1 + t - s)^1 - κ 2 . To estimate the L^p norm of E_2, we use the continuity of the Skorokhod integral δ : ^1,p() → L^p <cit.>. Letting F_s,t, x,y(r,z) = P_t-r(x-z) Z⃗(r,z) ( C_s,y(t) - C_s,y(r)) _[s, t](r) , we have E_2 = δ (F_s,t, x,y). Therefore, dropping subscripts of F going forward, we have [e:E2_norm_bound] E_2_p ≲ F _L^p(Ω, ) + D F _L^p(Ω, ⊗) . We see that F is a pointwise product of a Malliavin differentiable random variable M and a deterministic element h ∈. Namely, F(r,z) = M(r,z) h(r,z), where M(r,z) = Z⃗(r,z) ( C_s,y(t) - C_s,y(r)) , h(r,z) = P_t-r(x-z) _[s, t](r) . Given (<ref>), we note that M(r,z) _p ≲ Z⃗(r,z) _2p C_s,y(t) - C_s,y(r) _2p ≲β(1 + r - s)^ 2 - κ 4 . With this, using Minkowski inequality, (<ref>) and (<ref>), we bound the first term: F _L^p(Ω, ) = [ (∫_ ∫_^2d F(r,z_1)F(r,z_2) R(z_1-z_2) dr dz )^p 2 ] ^1 p ≤( ∫_s^t ∫_^2d M(r,z_1) M(r,z_2) _p 2 h(r,z_1) h(r,z_2) R(z_1-z_2) dr dz )^1 2 ≲( β^2 ∫_s^t (1 + r - s)^ 1 - κ2 ∫_^2d P_t-r(x-z_1 ) P_t-r(x-z_2) R(z_1-z_2) dr dz )^1 2 ≲β( ∫_s^t (1 + r - s)^1 - κ2 (1 + t - r)^ - κ2 dr )^1 2 ≲β(1 + t - s)^ 2 - κ 4 . 
For the derivative term, by Minkowski and Hölder's inequalities we have DF _L^p(Ω, ⊗) = [ ( ∫_ ∫_^2d ⟨D M(r, z_1) , D M(r, z_2) ⟩_ h(r,z_1) h(r,z_2) R(z_1 - z_2) dz dr )^p 2] ^1 p ≤( ∫_s^t ∫_^2d ⟨D M(r, z_1) , D M(r, z_2) ⟩_ _p 2 h(r,z_1) h(r,z_2) R(z_1 - z_2) dz dr )^1 2 ≤(∫_s^t ∫_^2d ( ∏_i = 1^2 D M(r, z_i) _L^p(Ω, ) h(r,z_i)) R(z_1 - z_2) dz dr )^1 2 . We note that Z⃗ and C's are adapted, satisfying identities (<ref>) and (<ref>) respectively. Then, D_u, z̅ M(r, z) = D_u, z̅Z⃗(r,z) ( C_s,y(t) - C_s,y(r)) + Z⃗(r,z) D_u, z̅ ( C_s,y(t) - C_s,y(r)) = βw_u, z̅ (r,z) Z⃗(u, z̅) ( C_s,y(t) - C_s,y(r)) _[s, r](u) + βZ⃗(r,z) w_s,y (u, z̅) ( C_u, z̅(t) - C_u, z̅(r)) _[s, r](u) + βZ⃗(r,z) w_s,y (u, z̅) C_u, z̅(t) _[r, t](u) =: β∑_j = 1^3 m^(j)_r,z(u,z̅ ) . By Minkowski and Hölder inequalities [e:DM_bound] D M(r, z) _L^p(Ω, ) ≲∑_j, k = 1^3 ( ∫_s^t ∫_^2d m^(j)_r,z(u,z̅_1 )_p m^(k)_r,z(u,z̅_2 )_p R(z̅_1 - z̅_2) dz̅ du)^1 2 . Given the a priori estimates from Lemma <ref>, for β≤β_0(3p) we have: m^(1)_r,z(u,z̅ )_p ≲P_r- u(z - z̅) (1+ r -s)^2 - κ4 _[s, r](u) , m^(2)_r,z(u,z̅ )_p ≲P_u-s(z̅ - y) (1+ r -u)^2 - κ4 _[s, r](u) , m^(3)_r,z(u,z̅ )_p ≲P_u-s(z̅ - y) _[r, t](u) . Hence, by Lemma <ref>, for j = k =1 we obtain ∫_s^t ∫_^2d (∏_i = 1^2 m^(1)_r,z(u,z̅_i )_p ) R(z̅_1 - z̅_2) dz̅ du ≲(1 + r -s)^1 - κ2 ∫_s^r ∫_^2d (∏_i=1^2 P_r- u(z - z̅_i) ) R(z̅_1 - z̅_2) dz̅ du ≲(1 + r -s)^1 - κ2 . On the other hand, by (<ref>) and (<ref>), ∫_s^t ∫_^2d m^(1)_r,z(u,z̅_1 )_p m^(2)_r,z(u,z̅_2 )_p R(z̅_1 - z̅_2) dz̅ du ≲(1 + r -s)^2 - κ/4 ∫_s^r (1 + r -u)^2 - κ/4 ∫_^2d P_r- u(z - z̅_1) P_u-s(z̅_2-y) R(z̅_1 - z̅_2) dz̅ du ≲(1 + r -s)^2 - κ/4 ∫_s^r (1 + r -u)^2 - κ/4 (1 + u-s)^-κ/2 du ≲(1 + r -s)^1 - κ2 . Estimating in a similar way the other terms in (<ref>) involving m^(2)_r,z and m^(3)_r,z, we obtain D M(r, z) _L^p(Ω, ) ≲β(1+ r -s)^2 - κ4 . With this, using again (<ref>) and (<ref>), we return to estimate DF: DF _L^p(Ω, ⊗) ≲β( ∫_s^t (1+ r -s)^1 - κ2∫_^2d P_t-r(x-z_1 ) P_t-r(x-z_2) R(z_1-z_2) dr dz dz dr )^1 2 ≲β( ∫_s^t (1+ r -s)^1 - κ2 (1 + t -r)^- κ2 dr )^1 2 ≲β(1 + t - s)^2 - κ4 . Gathering bounds back to (<ref>), we obtain E_2(s, t, y, x) _p ≲β (1 + t-s )^2-κ/4. The following kernel estimate leads to the L^p homogenisation rate. Let x∈^d, t >0, P_r(x) the heat kernel, and let H_t, x(r,z) = P_r(t-r)/t (z+ r t x) - P_r(z) - P_t-r(z+x) . Let μ̃= 1 - 2κ, then the following holds: J_t(x) := ∫_0^t ∫_^2d (∏_i = 1^2 | H_t, x(r,z_i) | ) R(z_1 - z_2) dz dr ≲(1 + t)^-μ̃ (1 + (1 + t)^-1|x|^2) . Since P_r(t-r)/t (z+ r t x) = q_0, t-s^x,0 (r, z), this is really just a quantitative way of stating that, when considered over a very large time interval, a Brownian bridge is close to a Brownian motion near either end of the interval. Furthermore, the integral J_t is such that the contribution coming from the “bulk” is small. Let α = 2/κ such that μ̃= α (κ 2 -1). By Lemma <ref>, we have J_t, x≲ 1 uniformly in t. Then, in the following we can assume t > 2^ 1 1-α. We consider three regions of integration, J_t(x) = J_0, t^α(x) + J_ t^α, t - t^α(x) + J_t - t^α, t(x) , where J_t_1, t_2(x) = ∫_t_1^t_2 ∫_^2d(∏_i = 1^2 | H_t, x(r,z_i) | ) R(z_1 - z_2) dz dr . By changing variables z ↦ z+x and r ↦ t -r, we have J_0, t^α(x) = J_t - t^α, t(-x). Hence, it suffices to estimate J_0, t^α(x) and J_ t^α, t - t^α(x). 
For J_ t^α, t - t^α(x), since r∈ [t^α, t - t^α], using (<ref>) for each Gaussian kernel: J_ t^α, t - t^α(x) = ∫_t^α^t - t^α ∫_^d | H_t, x(r,z_1) | ( ∫_ ^d |H_t, x(r,z_2) | R(z_1 - z_2) dz_2 ) dz_1 dr ≲∫_t^α^t - t^α r^- κ2 ∫_^d | H_t, x(r,z_1) | dz_1 dr ≲t^α(1 - κ2) . In the remainder we estimate J_0, t^α(x). For this, we split H_t,x into two terms: |H_t,x(r,z)| = H^1_t,x(r,z) + H^2_t,x(r,z) , where H^(1)_t,x(r,z) = | P_r(t-r)/t (z+ r t x) - P_r(z) | , H^(2)_t,x(r,z) = P_t-r(z+x) . So that, J_0, t^α(x) ≤∑_i,j = 1^2 ∫_0^t^α ∫_^2d H^(i)_t,x(r,z_1) H^(j)_t,x(r,z_2) R(z_1 - z_2) dz dr = ∑_i,j = 1^2 J^(i,j)_0, t^α(x) . By (<ref>) from Lemma <ref>, we have uniformly in z_1: ∫_^d H^(2)_t,x(r,z_2) R(z_1 - z_2) dz_2 ≲(1 + t - r)^-κ2 . Hence, recalling t > 2^ 1 1-α, for any i = 1, 2, we have J^(i,2)_0, t^α(x) ≲∫_0^t^α (t - r)^-κ2 ∫_^d H^(i)_t,x(r,z_1) dz_1 dr ≲(t - t^α)^1-κ2 ≲t^1-κ2 . We now need to estimate J^(1,1)_0, t^α(x). We rewrite H^(1)_t,x(r,z) = |P_r(t-r)/t (z+ r t x) - P_r(z)| ≤∫_0^1 | _̣θ P_r - θr^2 t (z + θr t x) | dθ . Letting v = r^2 / t and w = r x /t, with θ∈ [0,1] we can compute | _̣θ P_r - θv (z + θw)| = | d v/2(r - θv) + v |z + θw|^2 /2(r - θv)^2 - ⟨z, w⟩+ θ|w|^2 /r - θv | P_r - θv (z + θw) ≲( v/r - v + v ( |z|^2 + |w|^2) /(r - v)^2 + | z| | w| + |w|^2 /r - v ) P_r - θv (z + θw) . Given r < t^α and t > 2^ 1 1-α, we have v/r - v = r/t/1 - r/t ≲t^α-1 , |w| ≤t^α-1 |x| , v/(r - v)^2 ≤v/r^2 - 2rv ≲t^-1 , |w| /r - v= r |x|/t /r - r^2/t ≲t^-1 |x| . Therefore, letting μ = α (κ/2 -1)/2 = μ̃/ 2, we have the following bound | _̣θ P_r - θr^2 / t (z + θr t x) | / P_r - θr^2 / t (z + θr t x) ≲t^α-1 + t^-1 ( |z|^2 + t^2(α-1)|x|^2 ) + t^-1 ( |x||z| + t^α-1|x|^2 ) ≲t^α-1 + t^-1 ( |z|^2+ |x||z| + t^α-1|x|^2 ) ≲t^α-1 + ( t^-1-2μ + t^α-2)|x|^2 + (t^-1 + t^-1+2μ )|z|^2 ≲t^α-1 + t^-1-2μ |x|^2 + t^-1+2μ|z|^2 , where we used the fact that |x||z| ≲ t^-2μ |x|^2 + t^2μ |z|^2 and α -1 ≤ -2μ when α≤ 2/κ. Thus, H^(1)_t,x(r,z) ≲( t^α-1 + t^-1-2μ |x|^2 + t^-1+2μ|z|^2) ∫_0^1 P_r - θr^2 / t (z + θr t x) dθ . Uniformly in θ, by Lemma <ref> and since ∫_^d |z|^2 P_r(z+x) dz ≲ r + |x|^2, we have ∫_0^t^α ∫_^2d |z_1|^2 P_r - θr^2 / t (z_1 + θr t x) H^(1)_t,x(r,z_2) R(z_1 -z_2) dr dz = ∫_0^t^α ∫_^d |z_1|^2 P_r - θr^2 / t (z_1 + θr t x) ( ∫_ ^d H^(1)_t,x(r,z_2) R(z_1 -z_2) dz_2) dz_1 dr ≲∫_0^t^α (( r - θr^2 t) + θ^2 r^2 t^2 |x|^2 ) ( (r(t-r)/t)^-κ2 + r^-κ2)) dr ≲∫_0^t^α r^1 -κ2 + r^2-κ2 t^-2 |x|^2 dr ≲t^α(2 - κ2) + t^α(3 - κ2) -2 |x|^2 = t^α- 2μ + t^2α- 2 -2μ |x|^2 , and ∫_0^t^α ∫_^2d P_r - θr^2 / t (z_1 + θr t x) H^(1)_t,x(r,z_2) R(z_1 -z_2) dr dz ≲1 , With (<ref>) and these estimates, we obtain J^(1,1)_0, t^α(x) ≲t^α-1 + t^-1-2μ |x|^2 + t^2α- 3 |x|^2 ≲t^α-1 + t^-1-2μ |x|^2 . Therefore, since max(1-κ 2, α - 1 )≤ -2μ, we obtain J_0, t^α(x) ≲t^1-κ2 + t^α-1 + t^-1-2μ |x|^2 ≲t^-2μ + t^-1-2μ |x|^2 . Combining the estimates of J_0, t^α(x) and J_ t^α, t-t^α(x), we conclude the proof. § PROOF OF THE MAIN RESULTS In order to prove Theorem <ref>, we are going to consider general transformations of u (still as in (<ref>), i.e. with initial condition equal to 1) and prove the following for the class of functions below. Let ∈^2(_+;) be a function satisfying for some q >0, |(z)| + |'(z)| + |”(z)| ≲z^-q + z^q , ∀z∈_+ . Let d≥ 3, ξ and satisfy Assumptions <ref> and <ref> respectively and set u_^Φ(t,x) = ^1-κ/2 ( (u_(t,x))-[(u_(t,x))]) . 
Then, given any p≥ 1, α < 1 - κ 2 and σ < - 1 - κ 2, there exists β_0^(α, p) >0 and ν_≥ 0 such that, for all β < β_0^ and any good coupling of the noise, the random processes u_^Φ converge in probability to ν_^0 in L^p([0, T], ^α(E)) as → 0, for any T > 0 and compact E ⊂^d. Here, ^0 denotes the solution to the additive stochastic heat equation ∂_t ^0 =12 Δ^0 + βξ^0 , ^0(0, ·) ≡0 . The effective variance is given by ν_ = [Z ' (Z)], where Z denotes a real-valued random variable with the same law as Z⃗(t,x). The proof of this theorem will be given in Section <ref> below. Given a test function g:^d→ and : _+ →, in the following we write Y_t^ϵ, g =ϵ^1-κ/2∫_^d ((u_ϵ(t,x))-[(u_ϵ(t,x))])g(x) dx. In order to prove Theorem <ref>, we first derive a divergence representation for Y_t^ϵ, g by using the Clark–Ocone formula. Let ∈^2(_+) satisfy (<ref>), then for β sufficiently small, Y_t^ϵ, g = ϵ^1-κ/2 ∫_0^t/ϵ^2∫_ ^d ∫_ ^d [ '(u_ϵ(t,x)) D_s,y u_ϵ(t,x) | _s] g(x) dx ξ(ds,dy). By Lemma <ref> and Clark–Ocone formula Y_t^ϵ, g = ∫_×^d [ D_s,y Y_t^ϵ, g | _s] ξ(ds,dy) = ϵ^1-κ/2 ∫_0^t/ϵ^2∫_ ^d ∫_ ^d [ D_s,y (u_ϵ(t,x)) | _s] g(x) dx ξ(ds,dy) , and the claim follows from the chain rule. The strategy of proof of <Ref> is then as follows. We use the expression for the Malliavin derivative of the solution combined with the homogenisation result to argue that, for τ≫ s, [ D_s,y (u(τ,x)) | _s] = β['(u(τ,x))u(s,y)w_s,y(τ,x) | _s] ≈β['(Z⃗(τ,x))Z(s,y) u(s,y)P_τ-s(x-y)Z⃗(τ,x) | _s] . At that point, we note that the process Z⃗ is expected to mix relatively well and that furthermore for |τ-s| ≫ 1, Z(s,y) is _s-independent, of expectation 1, and roughly independent of Z⃗(τ,x) so that one gets [e:MalliavinDer] [ D_s,y (u(τ,x)) | _s] ≈βu(s,y)P_τ-s(x-y) ['(Z⃗(τ,x))Z⃗(τ,x)] βu(s,y)P_τ-s(x-y)ν_ . The approximations are proved in the next section. With these in place, in Section <ref> we can show Y_t^ϵ, g converges strongly towards ^0 and conclude the proof of Theorem <ref>. In the final section, we consider the KPZ equation with non-flat initial conditions and prove our main result Theorem <ref>. §.§ Main convergence result The main result of this section is Proposition <ref> below, which shows that we can approximate Y_t^ϵ, g by ν_ times the large-scale fluctuations of the linear SHE which, by the mild formulation, are given by X_t^ϵ, g = βϵ^1 - κ2 ∫_0^t ϵ^2 ∫_ ^d ∫_ ^d P_t / ϵ^2 -s(x -y) u(s,y) ^d g(x) dx ξ(ds, dy) . Then, in Proposition <ref>, we show that as → 0 these are close to the solution of the stochastic heat equation (<ref>) driven by noise ξ^ϵ, which can be written as _t^ϵ(g) := βϵ^1 - κ2 ∫_0^t ϵ^2∫_ ^d ∫_ ^d P_t /ϵ^2-s(x -y) ^d g(x) dx ξ(ds,dy) . Since ^ϵ converges to the limit ^0, we conclude that Y_t^ϵ, g converges to ν_^0 as in Theorem <ref>. We first write [eq:defn_J_Gamma0] J_κ, δ (f) ∫_^4d ( ∏_i=1^2 | f(x_i)| |x_i - y_i|^δ-d ) |y_1 - y_2|^-κ dy dx , and we recall the following <cit.>. For any compactly supported bounded function f: ^d→ and any δ∈ (0,κ 2), one has J_κ, δ (f) ≲ M^2d+2δ-κ f_∞ ^2, where M denotes the diameter of the support of f. Furthermore, setting f_x^λ (y)= λ^-d f(y -x/λ), one has the scaling relation [eq:scaling_J] J_κ, δ( f_x^λ) = λ^2δ- κJ_κ, δ(f) . Let p ≥ 1. For β sufficiently small, for any test function g ∈(^d) and t > 0, the following holds for any δ_0 ∈ (0,1): [eq:Y_X_Lp_conv] Y_t^ϵ, g - ν_ X_t^ϵ, g _p ≲ϵ^ 1 2min( 1-δ_0, 1 - 2 κ) , with precise rate given by (<ref>). Recall we denote with u^(s)(t, x) the solution of SHE for t ≥ s with initial condition 1 at time s. 
Without loss of generality, assume ϵ < t and let τ = t/ϵ^2. By <Ref> and the expression for the Malliavin derivative (<ref>), [e:Y_start] Y_t^ϵ, g = βϵ^1-κ/2 ∫_0^τ∫_ ^d ∫_ ^d [ '(u(τ,x)) u(s,y) w_s,y (τ,x) | _s] ^d g(x) dx ξ(ds,dy) . Given the large times asymptotics, the homogenisation of w_s,y(τ,x) and mixing of the stationary process, the proof consists in approximating this fluctuations expression through separation of timescales. Namely, given moments control, the contribution coming from τ - s < ϵ^-1 is negligeable. On the other hand, we let (u) = '(u)u and for τ - s > ϵ^-1 we show [ '(u(τ,x)) u(s,y) w_s,y (τ,x) | _s] ≈[ '(Z⃗(τ,x) ) u(s,y) w_s,y (τ,x) | _s] ≈[ '(Z⃗(τ,x)) u(s,y) P_τ - s(x - y) C_s, y( τ) Z⃗ (τ,x) | _s] ≈[ u(s,y) P_τ - s(x - y) C_s, y(s + τ 2) ( u^(s + τ 2)(τ,x)) | _s] = P_τ - s(x - y) u(s,y) [ ( u^(s + τ 2)(τ,x)) ] ≈ P_τ - s(x - y) u(s,y) ν_ . Define the indicator functions (s) = _[τ - ϵ^-1, τ ](s) and (s) = _[0, τ - ϵ^-1](s). Then, to show (<ref>), we write the conditional projection in (<ref>) as [ ' (u(τ,x)) u(s,y) w_s,y (τ,x) | _s] - ν_P_ τ- s(x - y) u(s, y) = ∑_i=1^5 S_i( τ, x , s, y) , where S_1 = ( [ '(u(τ,x)) u(s,y) w_s,y (τ,x) |_s] - ν_P_ τ- s(x - y) u(s,y) ) (s) denotes the integrand on [0, τ - ϵ^-1] and, on the remaining interval, we have 4 terms: S_2 = [ ( '(u(τ,x)) - '(Z⃗(τ,x) ) u(s,y) w_s,y (τ,x) |_s] (s) , S_3 = [ '(Z⃗(τ,x)) u(s,y) ( w_s,y (τ,x) - P_ τ- s( x- y) C_s,y (τ) Z⃗(τ,x) ) |_s] (s) , S_4 = P_ τ- s( x - y) u(s,y) [ C_s,y (τ) (Z⃗(τ,x)) - C_s, y(s + τ 2) ( u^(s + τ 2)(τ,x))|_s ] (s) , [0.3em] S_5 = P_ τ- s( x - y) u(s,y) ( [ C_s, y(s + τ 2) ( u^(s + τ 2)(τ,x))|_s] - ν_ ) (s) . With this decomposition, we can express the difference as follows: Y_t^ϵ, g - ν_ X_t^ϵ, g = βϵ^1 - κ2 ∑_k =1^5 _k, ϵ , where _k, ϵ := ∫_0^τ∫_ ^d ∫_ ^d S_k(τ, x, s, y) ^d g(x) dx ξ(ds,dy) . Estimating the L^p norms of S_j, we show that each term _j, ϵ in the decomposition converges to 0. Since, by BDG inequality as in (<ref>), we have for k=1,…, 5, [e:Ij_bound] _k, ϵ ^2_p ≤∫_0^τ∫_ ^4d (∏_i=1^2 S_k( τ, x_i, s, y_i)_p |g_(x_i)|) R(y_1 -y_2) ds dx dy , where g_(x) = ^d g( x). By the moment estimates from Lemmas <ref>–<ref> (in particular (<ref>) to bound w_s,y), we have S_1( τ, x, s, y)_p≲ P_τ - s( x - y) provided that β is sufficiently small. For δ∈ (0, d), we have (cf. <cit.>) P_s(x) ≲s^-δ2 |x|^δ- d . Inserting this bound into (<ref>) yields _1, ϵ _p^2 ≲∫_τ- ϵ^-1^ τ ∫_^4d (∏_i = 1^2 P_ τ- s( x_i - y_i) |g_(x_i)|) R(y_1 - y_2) ds dx dy . Recall that t = ^2 τ. By the change of variables x_i ↦ x_i, y_i ↦ y_i, s ↦ ^2 s, and the fact that P_r/^2(z/) = ^d P_r(z), we have _1, ϵ _p^2 ≲^κ-2∫_t- ϵ^ t ∫_^4d ∏_i = 1^2 (P_ t - s( x_i - y_i) |g(x_i)| ) ^-κR( y_1 - y_2 ) ds dx dy . Since ^-κR( y_1 - y_2) ≲ |y_1 - y_2|^-κ, using (<ref>) with δ_0 < 1, we estimate _1, ϵ _p^2 ≲ϵ^κ-2 ∫_t- ϵ^t ∫_^4d (∏_i = 1^2 P_ t - s( x_i- y_i) |g(x_i)|) |y_1 - y_2|^-κ ds dx dy ≲ϵ^κ-2 ∫_0^ϵ s^-δ_0 ds ∫_^4d(∏_i=1^2 |x_i- y_i|^δ_0 -d |g(x_i)|) |y_1 - y_2|^-κ dx dy ≲ϵ^κ-2 ϵ^1 - δ_0 J_κ,δ_0(g) , where J_κ, δ is defined as in (<ref>). For _2, ϵ, first apply Cauchy-Schwartz inequality, then the moment bounds (<ref>) and (<ref>), we have S_2( τ, x, s, y) _p ≲'(u(τ, x)) - '(Z⃗(τ, x)_3p w_s,y (τ, x) _3p ≲(1 + τ)^2 -κ4 P_ τ- s( x - y) ≲ϵ^κ- 2 4 P_ τ- s( x - y) . 
By a change of variables and (<ref>) with δ_1 < 1 as above, from (<ref>) we obtain _2, ϵ _p^2 ≲ϵ^κ2-1 ∫_0^τ- ϵ^-1∫_ ^4d (∏_i = 1^2 P_ τ- s( x_i - y_i) |g_(x_i)|) R(y_1 - y_2) ds dx dy ≲ϵ^κ-2 ϵ^κ2-1 ∫_0^t - ϵ ∫_^4d (∏_i = 1^2 P_ t - s( x_i- y_i) |g(x_i)|) |y_1 - y_2|^-κ ds dx dy ≲ϵ^κ-2 ϵ^κ2-1 t^1-δ_1 J_κ,δ_1(g) . For the remaining _j, ϵ, we shall make use of the fact that s < τ - ϵ^-1. Using the uniform moment bounds on (u) = '(u)u and on u, and the homogenisation result Proposition <ref>, we have S_3_p ≲P_ τ- s( x - y) (τ, x)_2p ≲P_ τ- s( x - y) (1 + τ-s)^-μ (1 + (1 + τ-s)^-1 2|x -y|) ≲ ϵ^μ P_ τ- s( x - y) + ϵ^μ |x -y| P_ τ- s( x - y) . Here μ = 12 - 1 κ and ρ_s,y is given by (<ref>). As for _2, ϵ, the contribution from ϵ^μ P_τ - s( x - y) can be bounded by ϵ^κ-2 ϵ^2μ t^1-δ_1 J_κ,δ_1(g). The contribution from ϵ^μ |x -y| P_τ - s( x - y) can be estimated as follows: ϵ^2μ ∫_0^τ- ϵ^-1 ∫_ ^4d ( ∏_i = 1^2 |x_i-y_i| P_ τ- s( x_i - y_i) |g_(x_i)|) R(y_1 - y_2) ds dx dy ≲ϵ^2μ ϵ^κ-2 ∫_0^t -ϵ (t-s)^-δ_2 ds ∫_^4d (∏_i = 1^2 |x_i- y_i|^1 + δ_2 -d |g(x_i)|) |y_1 - y_2|^-κ dx dy ≲ϵ^2μ ϵ^κ-2 t^ 1 - δ_2 J_κ, 1 + δ_2 (g) . We have employed again the heat kernel bound (<ref>), with δ_2 < κ/2-1. Summing these up, we have: _3, ϵ _p^2 ≲ϵ^2μ ϵ^κ-2 ( t^1-δ_1 J_κ, δ_1 (g) + t^ 1- δ_2 J_κ, 1 + δ_2 (g) ) . For _4, ϵ and _5, ϵ, we make use of mixing, separating C_s,y and Z⃗ in independent parts. We first apply (<ref>). For s < τ - ϵ^-1 and q > 1 we have (Z⃗ (τ, x) )- ( u^(s + τ 2)(τ, x))_q ≲β(1 + 1 2 ( τ- s))^2 -κ4 ≲ϵ^κ-24 , On the other hand, by (<ref>) we also have C_s,y (τ) - C_s,y (s + τ 2) _q ≲β(1 + 1 2 ( τ- s))^2 -κ4 ≲ϵ^κ-24 . Therefore, for sufficiently small β, C_s,y (τ) ( Z⃗ (τ, x)) - C_s,y (s + t/ϵ^2 2) ( u^(s + τ 2)(τ, x)) _2p ≲ϵ^κ-24 , We have used trinagle inequality and <Ref>. Consequently, S_4_p ≲ϵ^κ-24 P_ τ- s( x - y) . This is the same estimates as for _2, ϵ, therefore, _4, ϵ_p^2 ≲ϵ^κ-2ϵ^κ 2 -1 t^1-δ_1 J_κ, δ_1 (g). By the integral expression for C_s,y in (<ref>), C_s,y (s + τ 2) is of expectation 1 and independent of u^(s + τ 2)(τ, x). In addition, both terms are _s-independent, therefore we have [ C_s,y (s + τ 2) ( u^(s + τ 2)(τ, x))|_s] = [ C_s,y (s + τ 2) ] [ ( u^(s + τ 2)(τ, x)) ] = [ ( u^(s + τ 2)(τ, x)) - ( Z⃗ (τ, x)) ] + ν_ . Using the bounds in (<ref>), we obtain S_5_p ≲P_ τ- s( x - y) ( u^(s + τ 2)(τ, x)) - ( Z⃗ (τ, x)) _2p ≲P_ τ- s( x - y) (1 + 1 2 (τ- s))^2 -κ4 ≲ϵ^κ-24 P_ τ- s( x - y) . This is the same bound for S_4, hence _5, ϵ_p^2 ≲ϵ^κ-2ϵ^κ 2 -1 t^1-δ_1 J_κ, δ_1 (g). Putting all estimates together, observing that μ= 1 2 - 1 κ < κ -2 /4, we obtain [eq:approx_Lp_SHE] Y_t^ϵ, g - ν_ X_t^ϵ, g _p ≲^1-δ_0/2 J_κ, δ_0 (g)^1 2 + ^μ ( t^1 - δ_1 2 J_κ, δ_1 (g)^1 2 + t^1 - δ_2 2 J_κ, 1+δ_2 (g)^1 2 ) , where δ_0, δ_1 ∈ (0, 1) and δ_2 ∈ (0, κ 2 -1). The implicit constants are uniform in t, > 0. We conclude the proof of (<ref>). Let p ≥ 1 and β sufficiently small. The following holds for any g ∈(^d), t > 0, and θ∈ (2, κ∧ (d + 2 - κ)): [eq:X_Ueps_Lp_conv] X_t^ϵ, g - _t^ϵ(g) _p ≲ϵ^θ-2 2p t^1-δ 2 (1 + 1 p) J_κ+ θ- 2δ, δ(g)^1 2p J_κ, δ(g)^1 2 (1 - 1 p) . Recall that X^ϵ, g_t is defined in (<ref>) and _t^ϵ is defined in (<ref>). In the former, we substitute u(s,y) by 1+∫_0^s∫_^dP_s-r(y-z) u(r,z) ξ(dr, dz) allowing to recover _t^ϵ(g) and an additional term: X^ϵ, g_t = _t^ϵ(g) + B_(g), where B_ = β^2 ϵ^1 - κ2∫_0^τ∫_ ^d ∫_0^s∫_ ^d ∫_ ^d P_τ-s(x -y) · P_s-r(y-z) u(r,z) ^d g(x) dx ξ(dr,dz) ξ(ds,dy) and τ = t / ϵ^2. We first show that B_ converges to 0 in L^2. 
By Itô's isometry, B_ _2^2 = β^4 ϵ^2 -κ∫_0^τ ∫_0^s ∫_ ^6d ( ∏_i = 1^2 P_τ-s(x_i -y_i) P_s-r(y_i-z_i) ) R(y_1 - y_2) R(z_1 -z_2) [u(r,z_1)u(r,z_2) ] ^2dg(x_1) g(x_2) ds dr dx dy dz . Using the uniform L^2 bound on u and a change of variables as before[Namely, x_i ↦ x_i, y_i ↦ y_i, z_i ↦ z_i, s ↦ ^2 s, r ↦ ^2 r, combined with P_r/^2(z/) = ^d P_r(z).], we obtain B_ _2^2 ≲∫_0^t ∫_0^s ∫_^6d ( ∏_i = 1^2 P_t -s(x_i -y_i) P_s-r(y_i-z_i) ) ϵ^-2R( y_1 - y_2 ϵ) ϵ^-κR( z_1 - z_2ϵ) |g(x_1) g(x_2) | ds dr dx dy dz . Using ϵ^-κR(z/ϵ) ≲ |z|^-κ and the kernel bound (<ref>) with δ∈ (0, 1), we obtain B_ _2^2 ≲∫_0^t ∫_0^s (t-s)^-δ (s-r)^-δ dr ds I^_δ, κ(g) ≲t^2 - 2δ I^_δ, κ(g) , where, with G(x) = g(x_1) g(x_2), I^_δ, κ(g) = ∫_ ^6d ( ∏_i = 1^2 |x_i -y_i|^δ- d |y_i-z_i|^δ- d ) ^-2R( y_1 - y_2 ϵ) |z_1 - z_2|^-κ |G(x)| dx dy dz . Using the fact that, for β_1 + β_2+d<0 and β_i<-d, ∫_^d |y -z|^β_1 |z-w|^β_2 dz = c_β_1, β_2 |y-w|^β_1 + β_2 +d, we can integrate out the z_i variables and obtain I^_δ, κ(g) = c ∫_ ^4d ( ∏_i = 1^2 |x_i -y_i|^δ- d ) ^-2R( y_1 - y_2 ϵ) |y_1 -y_2|^2δ- κ |G(x)| dx dy . Let us denote K_(x) = ^-2R( x/ ϵ) |x|^2δ- κ . We know that R(x) ≲ |x|^-θ, for any θ∈ [0, κ]. Let us pick θ∈ (2, κ∧ (d + 2 - κ)), so that θ + κ - d < 2 and we fix δ∈ ( 1 2 (θ + κ - d ), 1 ). Then, uniformly in ∈ (0, 1), [e:Keps_bound] K_(x) ≲^θ- 2 |x|^2δ- κ- θ . Note that, by the choice of θ and δ, K_(x) is integrable at 0 and decays fast enough at infinity to integrate it over the other kernels. In fact, by (<ref>) we obtain I^_δ, κ(g) ≲^θ- 2 ∫_ ^4d ( ∏_i = 1^2 |x_i -y_i|^δ- d ) |y_1 -y_2|^2δ- κ- θ |G(x)| dx dy = ^θ- 2 J_κ+ θ- 2δ, δ(g) , where J is defined as in (<ref>) and bounded by Lemma <ref>, as long as δ < 1/2 (κ + θ - 2δ). This is the case, given that δ< 1 (thus δ < 1/4 (κ + θ) as required). Then, B__2 ≲^θ- 2 2 t^1-δ J_κ+ θ- 2δ, δ(g)^1 2 . On the other hand, estimating the L^p norms of X^ and ^ by BDG and a change of variables as in (<ref>) in the previous proof, we have B_ _p ≲ X^ϵ, g_t_p + ^ϵ_t(g)_p ≲ϵ^1 - κ2 ( ∫_0^τ ∫_ ^4d (∏_i = 1 P_τ-s(x_i -y_i) ^d g(x_i) ) R(y_1 - y_2) dx dy ds )^1 2 ≲( ∫^t_0 ∫_^4d ( ∏_i = 1^2 P_ t - s( x_i- y_i) | g(x_i)|) |y_1 - y_2|^-κ ds dx dy )^1 2 ≲t^1 - δ 2 J_κ, δ(g)^1 2 , By interpolation, for any θ∈ (2, κ∧ (d + 2 - κ)) and δ∈ ( 1 2 (θ + κ - d ), 1 ), [eq:approx_Lp_X_Ueps] X^ϵ, g_t - ^ϵ_t(g) _p = B_ _p ≤ B_ _2^1 p B_ _2(p-1)^1- 1 p ≲ϵ^θ-2 2p t^1-δ 2 (1 + 1 p) J_κ+ θ- 2δ, δ(g)^1 2p J_κ, δ(g)^1 2 (1 - 1 p) , with implicit constants uniform in t, > 0, concluding the proof. §.§ Convergence in Hölder spaces Before going into the proof of Theorem <ref>, we introduce the spaces of locally Hölder continuous distributions ^α(^d), of negative exponent, and prove moment estimates in these spaces. For a function g ∈(^d), z ∈^d and λ>0, we denote g_x^λ(y) := λ^-dg( λ^-1(y-x)) , where we drop the corresponding index if λ =1 or x= 0. We denote with the space of smooth functions with compact support, and ' the dual space of distributions. Given α < 0, let r = ⌈ -α⌉ be the smallest positive integer bigger than -α and consider the following subset of ^r(^d): _r := { g ∈: (g)⊂B_1, g_^r≤1 } , where B_a denotes the open ball of radius a centred at 0. We say that a distribution ζ belongs to ^α(^d) if for every compact set E ⊂^d, ζ_α; E sup_g ∈_r sup_ λ∈(0,1) sup_x ∈E λ^-α| ⟨ζ, g^λ_x ⟩| < ∞ . See <cit.>. The space ^α is metrisable, with d_α (ζ, ζ')∑_m ≥1 2^-m (1 ∧ ζ- ζ' _α; B_m ) , making it a Fréchet space. For α' > α, ^α' is compactly embedded in ^α . 
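To illustrate how the seminorms in the definition above capture negative regularity through the λ-scaling of the pairings ⟨ζ, g^λ_x⟩, the following toy sketch (in d = 1, with a sampled white noise in place of the fields considered in this paper; grid, test function and scales are arbitrary choices) estimates the scaling exponent from such pairings and recovers a value close to -1/2, consistent with white noise having regularity just below -1/2 in dimension one.

import numpy as np

rng = np.random.default_rng(4)
h = 1e-4                                                # grid spacing of the toy discretisation
grid = np.arange(0.0, 1.0, h)
noise = rng.normal(scale=1.0 / np.sqrt(h), size=grid.size)   # discretised white noise on [0, 1]

def bump(u):                                            # smooth test function supported in (-1, 1)
    out = np.zeros_like(u)
    m = np.abs(u) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - u[m] ** 2))
    return out

lambdas = np.array([0.01, 0.02, 0.04, 0.08])
avg = []
for lam in lambdas:
    xs = np.linspace(0.2, 0.8, 200)                     # centres x in a compact set E
    vals = [h * np.sum(noise * bump((grid - x) / lam) / lam) for x in xs]   # <noise, g^lam_x>
    avg.append(np.mean(np.abs(vals)))

slope = np.polyfit(np.log(lambdas), np.log(avg), 1)[0]
print("estimated scaling exponent:", slope)             # close to -1/2 for white noise in d = 1

Only the scaling in λ matters here, not the particular test function; smoother fields would produce a correspondingly larger exponent.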
The spaces of ^α can be characterised in multiple ways, here we use the characterisation from <cit.> in terms of a single test function. We fix an arbitrary reference test function φ∈ such that φ⊂ B_1 and ∫φ = 1 for the remainder of the article. We shall use the following notation for its rescaled version: φ^(n) = 2^nd φ(2^n ·) , φ_x^(n) = 2^nd φ(2^n (·-x)) . Although φ is fixed, the following shows that control on ⟨ζ,φ^(k)_x⟩ allows to get a similar bound for all test functions and control (<ref>). Let ζ∈' be a distribution and let α∈ (-∞, 0]. Then, one has ζ∈^α if and only if, for any compact set E and uniformly over k∈, sup_x∈ E |⟨ζ,φ^(k)_x⟩| ≲2^-k α. Furthermore, the semi-norms of ζ can be estimated by [e:boundSeminorm] ζ_α;E ≤C sup_x ∈E_2sup_ k∈ 2^kα|ζ,φ^(k)_x| where C = C(φ, α, d) is an explicit constant and E_2={z: d(z,E)≤ 2}. This characterisation of ^α spaces allows for a criterion to estimate norms in Hölder spaces. We recall the following definition from <cit.>. A random distribution ζ is said to belong to ^α_0_p if there exists C>0 such that the following estimates hold for any n≥ 1: sup_x ∈^d ⟨ζ, φ^(n)_x ⟩_p ≤C 2^-nα_0 , sup_x, y ∈^d , 0<|x-y|≤2^-n ⟨ζ, φ^(n)_x - φ^(n)_y ⟩_p ≤C 2^-n(α_0-1) |x-y| . The smallest such constant C is denoted by ζ_^α_0_p. We write ^γ_0, α_0_p, T, as a shorthand, for the space ^γ_0([0, T], ^α_0_p). Given any compact E ⊂^d, we denote by ^α (E) the space of distributions such that the seminorm in (<ref>) is finite. Then, by <cit.> we have the following: Let α_0<0, p > d, and γ_0 > 1/p. Then, for any γ∈ (0,γ_0 - 1/p), α < α_0 - d/p and any compact E ⊂^d, one has the continuous embeddings ^α_0_p ⊂L^p(Ω,^α(E)) , ^γ_0,α_0_p,T ⊂L^p(Ω,^γ([0, T],^α(E))) . In <cit.>, these inclusions are stated for E=^d, but the proof given there allows to replace ^d by E. We have the following: Given any α_0 < 1-κ 2, p ≥ 1, for sufficiently small β, there exist a = a(α_0, p, κ) > 0, such that the following holds for any T >0: [eq:Yeps_cUeps_approx] sup_0<t≤T Y^ϵ_t- ν_ ^_t _^α_0_p ≲ϵ^a . In particular, for every α<α_0- d p, there exists a > 0 such that sup_0<t≤ T Y_t^ϵ - ν__t^_α, E_p ≲ϵ^a, for any compact E ⊂^d. Furthermore, for any γ_0 ∈ (0,1/2), α_0 < 1 - κ/2 - 2γ_0, and p ≥ 1, one has sup_∈ (0, 1]^_^γ_0, α_0_p, T < ∞. By (<ref>), for δ_0, δ_1 ∈ (0, 1), δ_2 ∈ (0, κ 2 -1), and μ = 1 2 - 1 κ, one has sup_t∈[0,T] Y_t^ϵ, g - ν_ X_t^ϵ, g _p ≲^1-δ_0/2 J_κ, δ_0 ( g)^1 2 + ^μ ( J_κ, δ_1 ( g)^1 2 + J_κ, 1+δ_2 (g)^1 2 ). Recall, Lemma <ref>, J_κ, δ(g) ≲ M^2d+2δ-κ g ^2_∞, where M is the diameter of the support of g, and the scaling property (<ref>). Let g ∈ and δ∈ (0,1). Then the following holds uniformly in λ∈ (0, 1): sup_t>0 sup_x∈^d Y_t^ϵ, g_x^λ - ν_ X_t^ϵ, g_x^λ _p ≲^(1 - δ 2 ) ∧μ λ^δ- κ2 ( J_κ, δ(g)^1 2 + J_κ, 1+δ_2 (g)^1 2 ) ≲^(1 - δ 2 ) ∧μ λ^δ- κ2 g_∞ . Hence, for any α_0 < 1 - κ 2, by picking[If α_0 > - κ 2, can set δ = α_0 + κ 2.] δ = δ(α_0) sufficiently close to 1 and applying this to the test function φ^(n)_x, we have sup_t ∈(0, T] sup_n≥1 sup_x ∈^d 2^nα_0 ⟨Y_t^ϵ - ν_ X_t^ϵ, φ^(n)_x ⟩_p ≲^(1 - δ 2 ) ∧μ , which implies (<ref>). Since the bound holds for any g ∈, (<ref>) is also satisfied (cf. <cit.>). We obtain sup_t ∈(0, T] Y_t^ϵ - ν_ X_t^ϵ_^α_0_p ≲^(1 - δ 2 ) ∧μ . It remains to control X_t^ϵ - ^_t _^α_0_p. 
Following the same steps as above, for any θ∈ (2, κ∧ (d + 2 - κ)) and δ∈ ( 1 2 (θ + κ - d ), 1 ), by Proposition <ref> and the scaling relation (<ref>), we have uniformly in λ∈ (0, 1): sup_x∈^dsup_t∈(0,T] X^ϵ, g_x^λ_t - ^ϵ_t(g_x^λ) _p ≲ϵ^θ-2 2p T^1-δ 2 (1 + 1 p) λ^ (4δ- κ- θ) 1 2p λ^(2δ-κ) p-12p g_∞ ≲ϵ^θ-2 2p λ^(δ- κ2)(1 + 1 p) g_∞ , where we used the fact that θ < κ. For any α_0 < 1 - κ 2, picking sufficiently high p(α_0) ≥ 1 and δ(α_0) < 1, we have (δ - κ 2)(1 + 1 p) > α_0. Then, by the same reasoning as above, we obtain sup_t ∈(0, T] X_t^ϵ - ^_t _^α_0_p ≲ϵ^θ-2 2p . Letting a = 1 2 min( 1 - δ, 1 - 2 κ, θ -2p ), by triangle inequality we conclude that (<ref>) holds. The final claim follows by Proposition <ref>, as long as α< α_0 - d p. Regarding the bound on _t^, we note that the definition of ξ^ implies the scaling relation _t^(g_x^λ) = ^1-κ2_t/^2(g_x/^λ/) , so that the required bound follows immediately from <cit.>. §.§ Proof of the main results Consider a good coupling for the ξ^ satisfying Assumption <ref>. Given a test function ψ∈_c^∞(^d+1) and t ∈, write ψ̃_t(s,x) = ψ(s,x)1_s ≥ t. Then, one has lim_→ 0ξ^(ψ̃_t) = ξ^0(ψ̃_t) in probability. Let χ→_+ be a smooth increasing function such that χ(s) = 0 for s ≤ 0 and χ(s) = 1 for s ≥ 1, and set χ_δ(t) = χ(t/δ). Setting ψ̃_t^δ(s,x) = χ_δ(s-t)ψ(s,x), we then note that, as a consequence of Itô's isometry, one can find a constant C (depending on ψ) such that [e:ItoBound] ξ^(ψ̃_t^δ- ψ̃_t)^2 ≤Cδ , uniformly over all ,δ≤ 1. Since ψ̃_t^δ is a smooth function, one furthermore has ξ^(ψ̃_t^δ) →ξ^0(ψ̃_t^δ) in probability as → 0 for any fixed δ. Combining this with (<ref>), the claim follows at once. With the Lemmas above we are now in the position to prove Theorem <ref>. Write L^p([0, T], ^α(E)) for the space with seminorms, ζ_α, E, p, T := ( ∫_0^T ζ_t _α, E^p dt )^1 p , where the semi-norms of ^α(E) is as in Sec. <ref>. We see that for any γ >0, the space ^γ([0, T], ^α(E) ) is continuously embedded in L^p([0, T], ^α(E)). From Lemma <ref>, given α < 1 - κ 2, we pick p = p(α) suffiently high such that, as → 0, sup_t ∈[0, T] Y_t^ϵ- ν_ _t^_α, E_p ⟶0 . This implies that Y^ϵ - ν_^ converges in probability to zero in L^p([0, T], ^α(E)). It therefore remains to show that, for any good coupling, one has ^⇒^0 in probability in L^p([0, T], ^α(E)). For this, we make use of <cit.> which implies that it suffices to show tightness of the laws of ^ and convergence in probability of ^(ψ) for any test function ψ∈_c^∞(^d+1). By Lemma <ref> and Proposition <ref>, we obtain for any γ' ∈ (0, 1 2) and any α' < 1 - κ 2 - 2γ', sup_∈(0, 1] ^_^γ'([0, T], ^α'(E) ) _p ≲1 . Since ^γ'([0, T], ^α'(E) ) is compactly embedded in ^γ([0, T], ^α(E) ) for any γ∈ (0, γ') and α < α', we conclude that the required tightness holds. To get convergence of ^(ψ), we note that ^(ψ) = ξ^ϵ(Pψ) with (Pψ)(s,y) = β1_s ≥0∫_0^∞∫_^d ψ(t,x) P_t - s(x - y) dx dt , so that the desired convergence follows from Lemma <ref>. We now have all the ingredients in place to prove the main result of this article, namely the identification of the large-scale limit of the KPZ equation given in Theorem <ref>. Given the initial condition h_0, let ψ_ϵ(y) = exp(ϵ^κ/2 -1 h_0(y)) -1 and denote by w^ϵ_0,z(t,x) = ^-d w_0,z/( t /ϵ^2,x/ϵ) the rescaled fundamental solution of the linear stochastic heat equation. Since the initial condition for u_ϵ is 1, it follows from the Cole–Hopf transform that h_(t,x) = ^1-κ2 log( u_(t,x) + ∫_^d w^ϵ_0,y(t,x) ψ_ϵ(y) dy ) . 
Since 2log(x+y) - log(x) - y/x≤ (y/x)^2 for x,y > 0, it follows that | h_(t,x) - ^1-κ2 log( u_(t,x) ) - ^1-κ2 (u_(t,x))^-1 ∫_^d w^ϵ_0,y(t,x) ψ_ϵ(y) dy | ≤12 ^1-κ2 ( (u_(t,x))^-1 ∫_^d w^ϵ_0,y(t,x) ψ_ϵ(y) dy )^2 . By the a priori bounds for u_ (t,x))^-1 from Lemma <ref> and (<ref>) and since ψ__∞≲^κ2-1, we obtain for any p ≥ 1 (u_(t,x))^-1 ∫_^d w^ϵ_0,y(t,x)ψ_(y) dy _p ≲^κ2-1∫_^d P_t (x-y) dy = ^κ2-1 . On the other hand, by Taylor approximating ψ_ϵ(y), we have ψ_ϵ(y) - ϵ^κ/2 -1 h_0_∞≲ϵ^κ -2 and consequently, |^1-κ2 (u_ (t,x))^-1(∫_^d w^ϵ_0,y(t,x) ψ_ϵ(y) dy -∫_^d w^ϵ_0,y(t,x) h_0(y) dy) |≲ϵ^κ 2-1, Combining these bounds, we conclude that h_(t,x) - ^1-κ2 log( u_(t,x) ) - (u_(t,x))^-1 ∫_^d w^ϵ_0,y(t,x) h_0(y) dy _p ≲^κ2-1 . As a consequence of Theorem <ref>, with μ = 1 2 - 1 κ, for any p≥1 we have w^ϵ_0,y(t,x) - Z⃗(t/ϵ^2, x/ϵ) Z (0, y/ϵ) ^-dP_t/^2 (x-y)_p ≲(t/^2)^-μ (1 + (t/^2)^-1 2 ^-1|x-y|) ^-dP_t/^2 (x-y) ≲^2μ t^-μ (1 + t^-1 2 |x-y| ) P_t(x-y) , where we used the fact that ^-dP_t/^2 ( x) = P_t(x). Given that h_0 is bounded and the uniform control of negative moments of u_ from Lemma <ref><ref>, we obtain ∫_^d (u_(t,x))^-1 ( w^ϵ_0,y(t,x) - Z⃗(t/ϵ^2, x/ϵ) Z (0, y/ϵ) ^-dP_t/^2 (x-y) ) h_0(y) dy _p ≲^2μ t^-μ u^-1_(t,x) _2p ∫_ ^d (1 + t^-1 2 |x-y| ) P_t(x-y) dy ≲^2μ t^-μ . On the other hand, by Theorem <ref> we also have Z⃗(t/ϵ^2, x/ϵ)/u_(t,x) - 1 _p ≲ u^-1_(t,x) _2p Z⃗(t/ϵ^2, x/ϵ) - u_(t,x) _2p ≲t^2-κ 4 ^κ2 -1 . Combining the estimates, we obtain ∫_^d ( w^ϵ_0,y(t,x)/ u_(t,x) - Z (0, y/ϵ) P_t (x-y) ) h_0(y) dy _p ≲^κ2 -1 t^2-κ 4 + ^2μ t^-μ . Hence, noting that 2μ < κ/2 -1 and using the fact that t ≥ T_0 > 0, we have [eq:R1eps_bound] h_(t,x) - ^1-κ2 log( u_(t,x) ) - ∫_^d Z (0, y/ϵ) P_t (x-y) h_0(y) dy _p ≲^2μ . Let c = [logZ⃗(t,x)], F(t) = [ log( u (t,x) ) ] and set F_(t) = F(t/^2). Then, we decompose h_ϵ as follows: [eq:decomp_heps] h_(t,x) - ^1-κ2 c = u_^log(t,x) + ∫_^d P_t (x-y) h_0(y) dy + R_0, ϵ(t) + ∑_i = 1^2 R_i, ϵ(t, x) , where u_^log(t,x) = ^1-κ2 ( log( u_(t,x) ) - [log( u_(t,x) )]) , R_0, ϵ(t) = ^1-κ2 ( F_(t) - c ) , R_1, ϵ(t, x) = h_(t,x) - ^1-κ2 log( u_(t,x) ) - ∫_^d Z (0, y/ϵ) P_t (x-y) h_0(y) dy , R_2, ϵ (t,x) = ∫_^d (Z (0, y/ϵ) - 1) P_t (x-y) h_0(y) dy . By Theorem <ref> with = log, we know that ν_ = 1 and u_^log converge in probability to ^0 in L^p([0, T], ^α(E)) for a good coupling of ξ^, where ^0 is the solution of the stochastic heat equation (<ref>) with zero initial conditions. Hence, the first two terms in (<ref>) converge in law to the claimed limit , the solution of (<ref>) with initial conditions h_0. It remains to show that the remainder terms R_i, vanish in L^p([T_0, T], ^α(E)). Firstly, we show that for t>0: F(t) - c≲1∧|t|^(2-κ)/2 . Then, this implies |R_0, (t)| ≲ϵ^κ 2 -1 t^1 - κ 2, which vanishes uniformly in t ≥ T_0 as → 0. Recall that F(t)-c = ( logu(t,x)- logZ⃗(t,x)) . It follows that, by splitting into the u<Z and u≥ Z case, |F(t)-c - (u(t,x)- Z⃗(t,x)/u(t,x))| ≲(u(t,x)- Z⃗(t,x)/u(t,x) ∧Z⃗(t,x))^2 , which in turn is bounded by 1∧ |t|^(2-κ)/2 by combining Theorem <ref> and Lemma <ref><ref>. The trick now is to write u(t,x)- Z⃗(t,x)/u(t,x) = ∫1- Z⃗(0,x)/u(t,x)w_0,y(t,x) dy and to note that Z⃗(0,x) is independent of both u(t,x) and w_0,y(t,x), so that the expectation of this expression vanishes, and concluding the claim on |F(t)-c |. From (<ref>), we see that R_1, (t,x) _p ≲ ^2μ uniformly in x and t ≥ T_0. 
Hence, given that the bounds are uniform in space, integrating against any spatial test function g ∈ implies that R_0, and R_1, converge to zero in L^p([T_0, T], ^α(E)). It remains to estimate R_2,. We claim, for any δ∈ (0, 1), uniformly in λ∈ (0, 1), ∫_^d R_2, (t, x) g_z^λ(x) dx _2 ≲^κ2 - 1 t^-δ2 ( J_κ-2, δ (g_z^λ))^12 . By Lemma <ref>, for any g ∈ we obtain sup_t ≥T_0 ∫_^d R_2, (t, x) g_z^λ(x) dx _2 ≲^κ2 - 1 λ^δ+ 1 - κ2 ≲^κ2 - 1 λ^ 1 - κ2 . As in the proof of Lemma <ref>, this implies that R_2, (t, ·) __2^α_0≲^κ 2 - 1 for any α_0 ≤ 1 - κ 2, uniformly in t ≥ T_0. On the other hand, by the uniform moments bounds, we know sup_t ≥T_0 ∫_^d R_2, (t, x) g_z^λ(x) dx _p ≲sup_t, x R_2, (t, x)_p ≲1 . Then, given α < 1 - κ/2, picking p ≥ 1 sufficiently large such that α < α_0 - d p, by Proposition <ref> and interpolation we conclude that sup_t ≥ T_0 R_2, (t,·) _α, E_p → 0 as → 0 for p∈[2,∞]. Therefore, R_2, vanishes in L^p([T_0, T], ^α(E)). It remains to show (<ref>), namely that Z(0, y/ ) satisfies a law of large numbers. We know that Z has constant expectation 1 and decorrelates in space by (<ref>): Λ(0, y/ϵ) := ( Z (0, 0), Z (0, y/ϵ) ) ≲1∧^κ-2 |y|^2-κ . Then, we compute the second moments and use (<ref>) with δ∈ (0, 1), [ ( ∫_^d R_2, (t, x) g_z^λ(x) dx)^2 ] = ∫_ ^4d Λ(y_1/ϵ, y_2/ϵ) ∏_i = 1^2 (P_t(x_i - y_i) h_0(y_i) g_z^λ(x_i) ) dx dy ≲^κ-2 t^-δ ∫_ ^4d |y_1 - y_2|^2-κ ∏_i = 1^2 ( |x_i - y_i |^δ-d |g_z^λ(x_i)| ) dx dy , which implies (<ref>), given definition (<ref>) of J_κ -2, δ. This concludes the proof of <Ref>. Martin
http://arxiv.org/abs/2407.13686v1
20240718165637
Josephson currents in neutron stars
[ "Armen Sedrakian", "Peter B. Rau" ]
astro-ph.HE
[ "astro-ph.HE", "cond-mat.supr-con", "nucl-th" ]
http://arxiv.org/abs/2407.12481v1
20240717110627
Pretraining Data and Tokenizer for Indic LLM
[ "Rahul Kumar", "Shubham Kakde", "Divyansh Rajput", "Daud Ibrahim", "Rishabh Nahata", "Pidathala Sowjanya", "Deepak Kumar" ]
cs.CL
[ "cs.CL" ]
Pretraining Data and Tokenizer for Indic LLM =========================================== § ABSTRACT We present a novel approach to data preparation for developing a multilingual Indic large language model. Our meticulous data acquisition spans open-source and proprietary sources, including Common Crawl, Indic books, news articles, and Wikipedia, ensuring a diverse and rich linguistic representation. For each Indic language, we design a custom preprocessing pipeline to effectively eliminate redundant and low-quality text content. Additionally, we perform deduplication on Common Crawl data to address the redundancy present in 70% of the crawled web pages. This study focuses on developing high-quality data and optimizing tokenization of our multilingual dataset for Indic large language models with 3B and 7B parameters, engineered for superior performance in Indic languages. We introduce a novel multilingual tokenizer training strategy, demonstrating that our custom-trained Indic tokenizer outperforms the state-of-the-art OpenAI Tiktoken tokenizer, achieving a superior token-to-word ratio for Indic languages. § INTRODUCTION In the ever-evolving realm of Natural Language Processing (NLP), the development of Large Language Models (LLMs) has seen a meteoric rise since the inception of transformers. The introduction of LLMs has ushered in a new era in NLP, where the boundaries of what machines can achieve with human language are constantly pushed to astonishing limits. OpenAI's ChatGPT and Google's BARD have been pioneering forces in redefining the landscape of language modeling performance and have also illuminated the vast societal implications inherent in these technological advancements. Alongside ChatGPT and BARD, various open-source and proprietary LLMs have showcased remarkable natural language understanding and generation. There have been public releases by key players in the LLM space, including Llama 2 <cit.>, Mistral <cit.>, Falcon, and MPT, creating more affordable, efficient, and high-performing language models. The chat models of these pretrained LLMs show results comparable to proprietary LLM chat models, including ChatGPT, BARD, and Claude, on a few benchmarks. LLMs such as Llama and Falcon have outlined their data refinement and pre-training steps in their respective technical reports. We draw inspiration from the meticulous analysis of the RefinedWeb dataset by Falcon in our data filtering and deduplication approaches. Indic culture boasts linguistic diversity, encompassing Indo-Aryan, Dravidian, and Munda languages. Indo-Aryan and Dravidian languages constitute 96% of India's spoken languages. Despite this richness, most open language models lack Indic language support, hindering innovation due to limited high-quality data and complex tokenization challenges <cit.>. Notable corpora like EMILLE/CIIL, Wikipedia for Indian languages, the Samantar Corpus, and AI4Bharat-IndicNLP provide valuable resources <cit.>. EMILLE/CIIL spans 14 languages with 92 million words, Wikipedia for Indian languages is limited, the Samantar Corpus offers 49.7 million parallel sentences across 11 languages, and the AI4Bharat-IndicNLP Corpus contains 2.7 billion words for 10 languages <cit.>. IndicCorp, with 8.8 billion tokens across 11 languages, supplements these linguistic resources <cit.>.
Despite these efforts, it is noteworthy that Hindi, although the third most spoken language, does not rank among the top 20 languages in processed CommonCrawl documents, highlighting the scarcity of India-specific data for open large language model training <cit.>. In this study, we provide a technical report on clean Indic dataset preparation and tokenizer training for Indic LLM: India's own foundational model with Indic-rich context. The study presents a comprehensive analysis of available open-source and proprietary datasets and the data refinement steps applied to them. We also devise a state-of-the-art Indic tokenizer through rigorous experimentation and validate its performance through model pretraining. § RELATED WORKS Large language models (LLMs) owe their remarkable learning capabilities to massive model sizes and extensive training datasets. At present, there are numerous foundational models spanning open-source and proprietary LLMs. Noteworthy open-source foundational LLMs include Llama 2 <cit.>, Mistral 7B <cit.>, Falcon <cit.>, MPT <cit.>, and Bloom <cit.>, whereas proprietary foundational LLMs include GPT-4 <cit.> and LaMDA <cit.>. LLaMA 2, developed by Meta AI and Microsoft, focuses on multilingual capabilities and is optimized for swift training and inference. MPT-7B by MosaicML and Mistral-7B by Mistral AI are 7-billion-parameter models that have demonstrated efficient open-source training code, promoting transparency and ease of use. These models have showcased superiority over other open-source models in the 7B-20B range. Falcon-40B, developed by the Technology Innovation Institute (TII), is a 40-billion-parameter causal decoder-only model trained on a causal language modeling task. It is trained on a large dataset and has demonstrated superior performance to GPT-3. BLOOM is the world's largest open multilingual language model, with 176 billion parameters. Generally, proprietary systems are more expensive and offer product solutions that can be tailored to fit very specific business needs. Open-source models usually offer more affordable and customizable options but may lack the performance level and specialization of proprietary LLMs. Despite the widespread availability of LLMs for public exploration, the lack of transparency regarding training datasets, especially for state-of-the-art models, hinders research on addressing relevant biases. Furthermore, LLMs are known to generate text lacking sufficient grounding in knowledge sources, thus posing risks of misinformation and hallucination <cit.>. This challenge is exacerbated in multilingual learning scenarios where datasets are often inadequately collected. Researchers have been advancing the development of LLMs tailored to specific regional languages <cit.> <cit.> <cit.>. Two pivotal components for developing a robust multilingual LLM are abundant multilingual data and a diverse vocabulary <cit.>. Current state-of-the-art LLMs provide only limited multilingual support, largely because multilingual data constitutes a small fraction of their pre-training corpora. For example, the Llama models <cit.> <cit.> leveraged a vast pre-training corpus with over 1.6 trillion tokens, but less than 4.5% of it is multilingual data, spread over 20 different languages. This proportion increases in the Llama 2 models, where multilingual data accounts for approximately 11% and the number of languages rises to around 26. CulturaX <cit.> is another multilingual dataset with 6.3 trillion tokens across 167 languages.
The dataset is created after meticulous cleaning and deduplication steps to facilitate advancements in multilingual LLMs. However, out of 167 languages, only 14 account for 90.38% of the data, leaving a very small pre-training corpus for developing an efficient and versatile LLM with contextual understanding of Indic languages. To address the challenge of procuring massive datasets for LLM development, researchers have been relying on open-source datasets such as the web-crawled Common Crawl, Wikipedia, public-domain books spanning cultural and historical facets <cit.>, Stack Exchange and GitHub archives, journal articles and educational resources <cit.>, news archives, government and institutional legal repositories, and multimedia transcripts. In this study, we aim to exploit the aforementioned data sources to develop an extensive Indic data corpus. Moreover, these massive corpora need rigorous data refinement and deduplication to ensure data quality while maintaining integrity. Recent LLMs have demonstrated robust work on data preparation and filtering; RefinedWeb <cit.> has been pivotal in providing an insightful technical background in this context, significantly enhancing our understanding of the nuances involved. RefinedWeb applies filtering and deduplication techniques to the Common Crawl corpus, including threshold-based language filtering, URL filtering, line-wise correction filters, and document deduplication. MassiveText <cit.> defines rules for reducing noise in text documents by implementing extensive quality-filtering techniques. After rigorous filtering and deduplication steps, a large amount of noisy data is removed from the original corpora. It has been observed that open-source language datasets contain a large amount of boilerplate text and near-duplicate documents <cit.> <cit.>. Consequently, deduplication is a crucial step for producing a high-quality pre-training corpus. Various deduplication algorithms have been established in the literature, spanning exact matching with suffix arrays <cit.>, longest-substring matching, MinHash <cit.>, SimHash <cit.>, and other fuzzy techniques designed to minimize memory usage while maximizing efficiency. In this work, we methodically incorporate data filtering and deduplication techniques, drawing inspiration from the findings presented in RefinedWeb and MassiveText, and also propose our own filters for data preprocessing. Conventional tokenization approaches often involve a complex preprocessing pipeline and are language-specific. A simple, multilingual tokenizer is required for diverse natural language processing (NLP) tasks. Common tokenization techniques and tools include BPE <cit.>, WordPiece <cit.>, SentencePiece <cit.>, IndicBERT <cit.>, and spaCy <cit.>. The authors of <cit.> proposed an Indic NLP tokenizer <cit.>, which has emerged as an effective tokenization tool for Indic languages (Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, Telugu) as well as English. In addition, Stanford NLP <cit.> has proposed an Indic tokenizer <cit.> that supports English, Indo-Aryan, and Dravidian languages along with preprocessing functionalities. The SentencePiece <cit.> tokenizer enables a fully end-to-end, language-independent system by directly training its subword models from raw input sentences. This approach eliminates the need for pre-tokenized word sequences.
In this study, we experiment with the SentencePiece <cit.> tokenizer and fine-tune it on our Indic-rich corpora. We meticulously compile a data corpus from open-source and proprietary datasets for a comprehensive selection of 12 languages: 11 Indic languages plus English. These languages are Assamese, Bengali, English, Kannada, Gujarati, Hindi, Marathi, Malayalam, Punjabi, Odia, Tamil, and Telugu. Our contributions in this paper are: 1. Development of high-quality multilingual Indic data; 2. A novel multilingual tokenizer training strategy. § PRE-TRAINING DATA §.§ Common Crawl Common Crawl is an open repository that houses extensive web crawl data. Since its inception in 2008, the archive has amassed petabytes of data and continues to perform crawls almost monthly. The data, accessible via Common Crawl's public S3 bucket, is provided in three distinct archive formats: WAT, WET, and WARC files. The WAT (Web Archive Transformation) files encompass the metadata of the crawl, including HTTP headers, elements from the HTML <head> (such as title, meta tags, and scripts), and links from the websites. WET (Web Extracted Text) files contain the text extracted directly from the HTML of the crawl. Meanwhile, WARC (Web ARChive) files comprise the complete crawl data, encompassing both the metadata and the full HTML response. While WET files could have served as a straightforward source for text data, our experimentation with various HTML scraping tools revealed that the text extracted in WET files often lacks cleanliness. Therefore, we opted to process the WARC files, which allowed us to scrape cleaner text from their comprehensive HTML archives. To date, we have processed a total of 93 Common Crawl snapshots, with CC-MAIN-2023-50 being the latest snapshot at the time of writing this paper. Our processing of the Common Crawl data is divided into three major steps: * Preprocessing: raw text extraction. * Postprocessing: language detection and the application of heuristic filters. * Deduplication: removal of duplicate content. Common Crawl Preprocessing Among the various data sources utilized during the pretraining phase, the Common Crawl dataset presented the most significant challenge, primarily due to its vast scale, which encompasses approximately 7 petabytes of data. We employed the warcio library in Python, which is adept at efficiently streaming WARC files instead of loading them entirely into memory. This approach is crucial given the substantial size of WARC files, often reaching gigabytes, making complete loading into memory impractical. This streaming process necessitated a separate post-processing pipeline for language identification and heuristic feature computation, which proved more efficient when conducted in batches rather than on individual records. In our exploration of various open-source text extraction libraries, including unstructured.io and trafilatura, we found that trafilatura delivered the most effective results. While streaming through the warcio iterator, we extracted clean text, markdown text, and URLs from each web page using trafilatura; a minimal sketch of this per-record extraction step is shown below. Our analysis encompassed a total of 5,455,398 WARC files across 93 snapshots. We initially conducted a Proof of Concept (POC) for WARC preprocessing using a PySpark pipeline. In this pipeline, we read multiple WARC files as RDDs (Resilient Distributed Datasets) and processed them concurrently.
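To make the preprocessing step concrete, the following is a minimal sketch of streaming one WARC file with warcio and extracting clean text with trafilatura. It is an illustration rather than the production pipeline: the file paths are placeholders, markdown extraction is omitted, and no error handling is shown.

```python
# Minimal sketch: stream a WARC archive and extract clean text per page.
# Assumes `warcio` and `trafilatura` are installed; paths are placeholders.
import csv

import trafilatura
from warcio.archiveiterator import ArchiveIterator


def extract_warc(warc_path: str, out_csv: str) -> None:
    """Write (url, clean_text) rows for every HTML response in the WARC file."""
    with open(warc_path, "rb") as stream, open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["url", "text"])
        for record in ArchiveIterator(stream):
            # Only 'response' records carry the full HTML payload.
            if record.rec_type != "response":
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            html = record.content_stream().read().decode("utf-8", errors="ignore")
            # trafilatura returns None when no main content can be extracted.
            text = trafilatura.extract(html)
            if text:
                writer.writerow([url, text])


if __name__ == "__main__":
    extract_warc("CC-MAIN-sample.warc.gz", "extracted_sample.csv")
```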
Each RDD in this pipeline was a tuple containing the file path and the content of the WARC file as a byte array. We employed the same warcio library in the map partition User-Defined Function (UDF) for processing these files. Additionally, we established a vanilla Python processing pipeline utilizing multiprocessing, which enabled parallel processing of multiple WARC files, fully utilizing the cores of a machine. The extraction of text using trafilatura from each WARC file took approximately 40 to 60 minutes, varying with the number of web pages archived in each file. Both the PySpark and multiprocessing vanilla Python pipelines demonstrated similar processing times for the WARC files. However, we opted to process the files using our multiprocessing vanilla Python pipeline. This decision was driven by the significant reduction in compute costs it offered. Furthermore, this approach allowed us to design our pipeline more fault-tolerantly, leveraging a custom orchestration pipeline that ran on multiple AWS EC2 instances. Common Crawl Postprocessing Following the preprocessing step for the WARC files, we obtain a CSV file corresponding to each WARC file. These files comprise columns detailing the scraping date, clean text, markdown text, and URL. As our focus is on developing Large Language Models (LLMs) for Indic languages, an initial step involves applying a language detection filter to segregate non-Indic languages, with the exception of English. In our exploration of various open-source language detection models, including AI4Bharat and FastText, we found that FastText provided the most accurate results. Currently, the text extracted from the Common Crawl dataset is being classified into English and 11 specific Indic languages. Post this language filtration, approximately 50% of the initial data remains. Subsequently, we compute heuristic features such as token count, mean sentence length, mean word length, symbol-to-word ratio, perplexity (ppl) score, fraction of duplicate lines, fraction of characters in duplicate lines, fraction of characters in the most common 2-11-grams, fraction of lines ending with an ellipsis, and fraction of lines starting with a bullet point. Through exploratory data analysis conducted for each language, we determined that applying these filters within the range of 0-90th percentiles effectively eliminates unclean and gibberish documents. Additionally, we examined the impact of applying filters solely on mean word length, mean sentence length, language threshold, and symbol-to-word ratio. The outcomes were found to be similar to those obtained when applying the full range of heuristic features.Table <ref> presents the number of documents and token count subsequent to the application of these filters. Common Crawl Deduplication As noted on the Common Crawl website, each snapshot of their web crawl typically encompasses approximately 3 billion web pages. Remarkably, 2 billion of these pages have been previously crawled in earlier snapshots. This results in an average of about 66% duplicate content per snapshot. When considering all snapshots cumulatively, the proportion of duplicate content is substantially higher. Consequently, the implementation of a deduplication process is crucial. It not only ensures the high quality of data but also significantly reduces the computational resources required for training. In our deduplication pipeline, we have employed the Minhash Locality Sensitive Hashing (LSH) technique. 
The Minhash LSH process involves three primary steps: first is shingling, which converts documents into set representations; second is min-hashing, which transforms these large sets into shorter signatures while preserving their similarity; and finally, LSH, which identifies likely candidates for similarity based on these signatures. We have opted for 5-gram shingles for the initial conversion of each document into sets. In the min-hashing step, we set the signature dimension to 250. By establishing a similarity threshold of 0.7, we determined that the optimal configuration consists of 25 bands and 10 rows per band for this process. While duplicates were present across snapshots, we chose not to run our deduplication pipeline across all snapshots to avoid excessive data loss. Additionally, we noted that if a URL is re-crawled within approximately a year, its content is likely to remain largely unchanged, although the converse may not hold true. Therefore, we segmented our deduplication process, considering snapshots spanning 1-2 years as a single batch, which roughly equates to 10 million documents. Thus, we divided the 93 snapshots for each language into batches of 10 million documents and executed the deduplication pipeline on each batch. This approach effectively removes recent duplicated data while preserving older, potentially unique data. The number of documents before and after deduplication is presented in Table <ref>. §.§ Newspaper There is a plethora of newspapers across the various Indic languages, which are regarded as a knowledge-rich source of data for pre-training our Indic-rich-context LLM. We accumulated language-specific newspapers and downloaded the digital versions of all the historical editions. These newspapers generally comprise blocks of images, tables, and advertisements, which constitute noisy data in pre-training corpora. Hence, a robust and scalable algorithm is necessary to identify the useful individual text blocks in a news article and subsequently extract data from them. This seemingly elementary problem of article extraction is challenging due to the wide range of layouts randomly arranged on the target page. Research on algorithms addressing these text-extraction issues is ongoing; however, most existing methods are based on a set of heuristic rules. In this study, we construct an article-extraction pipeline by experimenting with existing open-source algorithms and frameworks. Our goal in this extraction process is to identify the bounding boxes of layouts in an article and then detect the relevant text blocks for the respective languages. Initially, we experiment with an open-source Python package, LayoutParser, which is widely used for document-image analysis tasks. LayoutParser provides a rich repository of deep learning models for layout detection as well as a set of unified APIs for using them. LayoutParser also comes with a set of layout data structures with carefully designed APIs that are optimized for document-image analysis tasks. In addition, we train models on Indic data comprising various handwritten documents and newspapers. In this step, Detectron2-based deep learning models from the LayoutParser library are leveraged to detect text snippets within the article layouts. Furthermore, this approach also detects tables, which are redundant for the text corpus. After rigorous experimentation, we resolved to fine-tune a Mask R-CNN-based model for our extraction process; a minimal sketch of layout detection with LayoutParser is shown below.
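The layout-detection step can be sketched with the public LayoutParser/Detectron2 API as follows. The pre-trained configuration, score threshold, and label map shown here are illustrative placeholders; the fine-tuned four-class newspaper model (headings, text, images, non-text) would load its own weights and label map.

```python
# Minimal sketch of layout detection with LayoutParser + Detectron2.
# The PubLayNet config and label map below are placeholders for illustration,
# not the fine-tuned newspaper model described in the text.
import cv2
import layoutparser as lp

image_bgr = cv2.imread("newspaper_page.jpg")
image = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)  # LayoutParser expects RGB

model = lp.Detectron2LayoutModel(
    config_path="lp://PubLayNet/mask_rcnn_X_101_32x8d_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

layout = model.detect(image)

# Keep only text-like blocks and crop them for downstream OCR.
text_blocks = lp.Layout([b for b in layout if b.type == "Text"])
for i, block in enumerate(text_blocks):
    segment = block.crop_image(image)
    cv2.imwrite(f"text_block_{i}.png", cv2.cvtColor(segment, cv2.COLOR_RGB2BGR))
```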
Annotation All the segmented blocks of the newspaper images are labelled into four classes, namely headings, text, images, and non-text, using Label Studio. The text class focuses mainly on the article block, which is what we intend to extract. The headings class contains all the main headings and subheadings in the newspaper image. The non-text class takes care of the tables, summaries, and quotes. Model fine-tuning The Mask R-CNN-based model is a three-stage model that predicts a class label, a bounding-box offset, and an object mask for each Region of Interest (RoI). It can produce more accurate and fine-grained masks for each object, which capture shape and contour details better than bounding boxes, and it can also handle overlapping and occluded objects better. Model evaluation The performance of the object detection and localization algorithm is evaluated by the Average Precision (AP) and mean Average Precision (mAP) metrics. AP is calculated with the help of several other metrics, such as Intersection over Union (IoU), the confusion matrix (TP, FP, FN), precision, and recall, as shown in the figure below. IoU quantifies the closeness of two bounding boxes (ground truth and prediction). It is a value between 0 and 1, calculated as the ratio between the area of intersection and the area of the union of the two bounding boxes. Average Precision is the area under the PR curve and summarizes the PR curve into one scalar value. Average Precision is high when both precision and recall are high, and low when either of them is low across a range of confidence threshold values. AP ranges from 0 to 1 and can be calculated for each class. The mean Average Precision is calculated by averaging the AP across all the classes under consideration. Model Inference The model inference pipeline takes a single newspaper image and predicts bounding boxes, along with their classes, over the entire image. The heading, non-text, and image class bounding boxes are masked out of the image. For each text-class bounding box, Tesseract-based OCR is used to extract the text. §.§ Books The web-crawled data from Common Crawl has a limited distribution of Indic-language-specific content, and the proportion remaining after deduplication is even smaller. This is a major challenge, especially when we want to perform LLM pre-training with Indic languages. In order to increase the number of tokens as well as the breadth of Indic-specific content, we leverage open-source PDFs that mostly contain books, periodicals, magazines, and financial reports. The PDFs are downloaded using a pipeline covering all the Indic languages. The PDFs come in broadly two formats: image-based, or with text embedded in the PDF. A pipeline is built that detects whether a PDF is image-based and, based on that, runs the appropriate extraction pipeline. Tesseract OCR is leveraged to extract text from the image-based PDF documents. A challenge here is detecting the script so that the Tesseract OCR engine is invoked with the appropriate language. We overcome this with a pipeline that detects the script and extracts the text, scaled up with multiprocessing and across multiple nodes. § TOKENIZER Tokenization is a preprocessing step in natural language processing (NLP), facilitating machine comprehension and interpretation of human language by breaking down text into fundamental units such as words, characters, and subwords.
Among the various types of tokenization tools, one widely employed approach is SentencePiece, which sets itself apart by directly training subword models from raw sentence data. This method offers a fully end-to-end and language-agnostic system, eliminating the need for pre-tokenized word sequences. In our study, we adopted Byte Pair Encoding (BPE) as the tokenization strategy. We assessed the quality of tokenization using two key metrics: the token-to-word ratio and the exact score, which measures the proportion of correctly tokenized units out of the total tokens. §.§ Tokenizer Experiments Corpus Size For training the tokenizer, we utilized sample data from various sources such as Common Crawl, Wikipedia, and Books. This sampled data constitutes the corpus size employed during tokenizer training. The data in the corpus is sampled in proportion to the availability of data from these different sources. The vocabulary size in the context of a tokenizer refers to the number of unique tokens or subword units that the tokenizer can produce or handle. We trained the tokenizer with different corpus sizes and vocabulary sizes. The results in corpus and vocab table indicate that increasing the corpus size does not significantly improve the performance of the tokenizer. However, it is evident that increasing the vocabulary size significantly improves the token-to-word ratio. This finding is logical because increasing the corpus size does not necessarily enhance tokenizer performance once a critical mass of major words has been covered. If the vocabulary size remains constant, the same or similar tokens will be generated, and the token-to-word ratio will remain unchanged, even with an increased corpus size. Thus, beyond a certain point, expanding the corpus size yields diminishing returns in terms of tokenizer improvement. Vocabulary Size Additionally, we experimented with various corpus sizes. The results, as presented in the corpus table, demonstrate that a tokenizer trained with a 225 million sampled corpus size achieves a token-to-word ratio similar to that of a tokenizer trained with a 12 billion sampled corpus size. Increasing the vocabulary size allows for the formation of more eligible words or sub-words as tokens, thereby improving the tokenizer's quality, as indicated by the improved token-to-word ratio. However, this improvement comes with a trade-off: a tokenizer with a larger vocabulary size will have reduced inference speed compared to one with a smaller vocabulary size. Character Coverage We also experimented with different character coverage levels during tokenizer training. Setting character coverage to 1 resulted in the inclusion of numerous gibberish characters, such as emojis and special characters, in the tokenizer vocabulary. These characters, despite having low frequencies, were included due to the high character coverage. To address this, we tested various character coverage settings. As presented in the character coverage table, we found that a character coverage of 0.997 is optimal. This setting ensures that the characters included in the Hindi tokenizer vocabulary closely match the typical set of characters in the Hindi language, which consists of approximately 50 characters (36 consonants and 12 vowels). §.§ Tokenizer Training Initially, we trained a tokenizer with the optimal vocabulary size, corpus size, and character coverage determined from our previous experiments. 
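A minimal sketch of this training setup with the SentencePiece Python API is shown below; the corpus path and vocabulary size are placeholders, while the character coverage follows the 0.997 setting identified above.

```python
# Minimal sketch of BPE tokenizer training with SentencePiece.
# The corpus path and vocab_size are placeholders; character_coverage follows
# the 0.997 setting discussed in the character-coverage experiments.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="indic_sampled_corpus.txt",  # one sentence per line, all 12 languages
    model_prefix="indic_bpe",
    model_type="bpe",
    vocab_size=64000,                  # placeholder; chosen via the vocabulary-size experiments
    character_coverage=0.997,          # drops rare gibberish/foreign characters
    byte_fallback=False,               # unknown characters map to UNK rather than byte pieces
)

# Quick sanity check of the trained model on a held-out sentence.
sp = spm.SentencePieceProcessor(model_file="indic_bpe.model")
sample = "यह एक छोटा हिंदी वाक्य है"      # placeholder Hindi sentence
print(sp.encode(sample, out_type=str))
```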
Despite sampling a dataset focused on Indic content, we observed a significant number of false positives, which introduced characters from other languages, such as Chinese and French, into our Indic LLMs. To address this issue and create a clean Indic tokenizer, we implemented a series of steps specifically targeting Indic words. First, we trained a dummy tokenizer using the sampled data from various sources. With the assistance of linguistic experts, we manually removed gibberish characters and characters from other languages from the dummy tokenizer vocabulary. Next, we encoded and decoded the sampled dataset using this dummy tokenizer, with the byte-fallback flag set to false. This process converted all gibberish words and non-Indic-language words identified by the linguists to UNK tokens. With this cleaned dataset, we then trained our final tokenizer. This refined tokenizer demonstrated an optimal vocabulary size, corpus size, and character coverage, with minimal inclusion of gibberish and foreign-language words. We compared the performance of our tokenizer with the OpenAI Tiktoken tokenizer across 11 Indic languages. The results, as shown in the tokenizer table, clearly indicate that our tokenizer significantly outperforms the Tiktoken tokenizer on these metrics. § CONCLUSION In conclusion, our meticulous data acquisition and custom preprocessing pipeline have effectively curated a high-quality multilingual dataset, significantly enhancing the performance of our Indic large language models. Through rigorous filtering and deduplication processes, we ensured the elimination of redundant and low-quality content, optimizing the data for training. Our novel multilingual tokenizer training strategy demonstrated superior token-to-word ratios for Indic languages compared to the state-of-the-art OpenAI Tiktoken tokenizer. The experiments underscore the importance of tailored preprocessing and tokenizer design, paving the way for more accurate and efficient language models in Indic-rich multilingual contexts.
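For illustration, the token-to-word comparison against the Tiktoken tokenizer can be computed along the following lines; this is a sketch with placeholder model files and sample sentences, not the evaluation script used for the reported results.

```python
# Illustrative comparison of token-to-word ratios: custom SentencePiece model
# vs. OpenAI's tiktoken. The model file and sample sentences are placeholders.
import sentencepiece as spm
import tiktoken

sp = spm.SentencePieceProcessor(model_file="indic_bpe.model")
tk = tiktoken.get_encoding("cl100k_base")

sentences = [
    "यह एक छोटा हिंदी वाक्य है",          # Hindi (placeholder)
    "இது ஒரு சிறிய தமிழ் வாக்கியம்",      # Tamil (placeholder)
]

for text in sentences:
    words = len(text.split())
    sp_ratio = len(sp.encode(text)) / words   # lower is better
    tk_ratio = len(tk.encode(text)) / words
    print(f"words={words:2d}  sentencepiece={sp_ratio:.2f}  tiktoken={tk_ratio:.2f}")
```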
http://arxiv.org/abs/2407.12663v1
20240717154725
Is That Rain? Understanding Effects on Visual Odometry Performance for Autonomous UAVs and Efficient DNN-based Rain Classification at the Edge
[ "Andrea Albanese", "Yanran Wang", "Davide Brunelli", "David Boyle" ]
cs.RO
[ "cs.RO", "cs.AI" ]
Is That Rain? Understanding Effects on Visual Odometry Performance for Autonomous UAVs and Efficient DNN-based Rain Classification at the Edge Andrea Albanese, Yanran Wang, Davide Brunelli, David Boyle Andrea Albanese and Davide Brunelli are with the Department of Industrial Engineering at the University of Trento, Italy. Yanran Wang and David Boyle are with the Systems and Algorithms Lab at Imperial College London, United Kingdom. July 22, 2024 =================================================================================================================================================== § ABSTRACT The development of safe and reliable autonomous unmanned aerial vehicles relies on the ability of the system to recognise and adapt to changes in the local environment based on sensor inputs. State-of-the-art local tracking and trajectory planning are typically performed using camera sensor input to the flight control algorithm, but the extent to which environmental disturbances like rain affect the performance of these systems is largely unknown. In this paper, we first describe the development of an open dataset comprising ∼335k images to examine these effects for seven different classes of precipitation conditions and show that a worst-case average tracking error of 1.5 m is possible for a state-of-the-art visual odometry system (VINS-Fusion). We then use the dataset to train a set of deep neural network models suited to mobile and constrained deployment scenarios to determine the extent to which it may be possible to efficiently and accurately classify these `rainy' conditions. The most lightweight of these models (MobileNetV3 small) can achieve an accuracy of 90% with a memory footprint of just 1.28 MB and a frame rate of 93 FPS, which is suitable for deployment in resource-constrained and latency-sensitive systems. We demonstrate a classification latency in the order of milliseconds using typical flight computer hardware. Accordingly, such a model can feed into the disturbance estimation component of an autonomous flight controller. In addition, data from unmanned aerial vehicles with the ability to accurately determine environmental conditions in real time may contribute to developing more granular and timely localised weather forecasting. UAV, Visual Odometry, DNN, Rainy Conditions, Autonomous Navigation, Internet of Drones § INTRODUCTION Autonomous Unmanned Aerial Vehicles (UAVs) are set to become central to a variety of industrial applications ranging from first response to infrastructure communications to deliveries, among myriad other UAV-based Internet of Things (IoT) services <cit.>. In each case, UAVs will require the ability to safely navigate under variable weather conditions. Although much recent research attention has been paid to autonomously navigating complex environments characterised by the presence of obstacles and dynamic disturbances from airflow, little effort has been afforded to understanding the effects of other dynamic environmental factors - particularly rain.
Visual odometry (VO) and visual inertial odometry (VIO) leveraging depth cameras are one of the most promising methods to achieve autonomous navigation <cit.>, however, little attention has been paid to understanding the effects of rainy conditions in terms of tracking errors that might be expected when the camera lens is exposed to various rainy conditions. Rainfall can be expected to disrupt the visual scene by altering image contrast, introducing blurring effects, and potentially obscuring visual landmarks or obstacles due to water droplets on the camera lens. These factors can reasonably be expected to lead to significant degradation in VO performance, resulting in inaccurate position and motion estimation for the UAV. In worst-case scenarios, inaccurate navigation due to rain could lead to mission failure, collisions, and other safety hazards. Thus, developing methods to identify and estimate the severity of rain conditions impacting VO accuracy is critical to ensure safe and reliable UAV navigation in variable weather. The most relevant contributions in the literature concerning this or similar problems have emerged from the autonomous driving point of view <cit.>. In these cases, the images used in the analysis and estimation of the environmental conditions are taken from inside the cockpit; therefore, the water droplets are on the windshield and not directly on the camera lens. On the other hand, such contributions cannot be directly applied to developing reliable autonomous UAV applications because, in such scenarios, the UAVs navigating in rainy conditions may often have the camera lens directly exposed to the environment. As a result, we are motivated to study and analyse the effects of rain on a VO system suitable for autonomous UAVs with direct lens exposure. Given the absence of a suitable relevant dataset, we designed a set of laboratory experiments to simulate a flight at a low altitude (i.e., a scene with objects) under a variety of rainy conditions to collect and label with a view to identifying and classifying precipitation conditions in real time. Moreover, many authors have successfully used deep learning (DL) or deep neural networks (DNN) to predict and estimate rain severity in vehicles <cit.>. This serves as inspiration to leverage such algorithms to estimate rain conditions in order to improve our system's performance and reliability. However, autonomous vehicles can have relatively large computational resources, while small drones (i.e., typically carrying a payload up to 2 kg) have necessarily limited computational resources considering size and payload capacity. Thus, when designing a DL-based system, we must remain aware of the available onboard resources. This paper presents a first step towards the development of lightweight models that can determine precipitation conditions in real time, and which are suitable for use in future autonomous flight controller designs. We begin by determining the extent to which various intensities of rain can introduce tracking errors to VO-based navigation systems. We do this leveraging a large dataset that we have collected under controlled laboratory conditions. A low and fixed altitude flight scenario is developed, where a depth camera, processing unit and mechanical spraying apparatus are used to simulate various rain conditions. 
A dataset comprising the images taken for various rain intensities and orientations was curated and used as a basis to develop a DNN-based system to classify and estimate the severity of the rain in each case. DNN training has been performed with a view to ensuring a low-complexity algorithm that can be deployed on edge (e.g., small drones) or IoT-type devices <cit.>. By accurately estimating rain severity, the system can provide a basis for the development and implementation of appropriate counteractions (i.e., control strategies) to maintain reliable navigation performance during UAV operations under dynamic rainy conditions. We summarise the main contributions of this paper as: * We have collected and made openly available a new dataset[The dataset (∼12 GB compressed) is available at: <https://ieee-dataport.org/documents/adverse-rainy-conditions-autonomous-uavs>] comprising approximately 335k real images equally distributed among 7 classes that represent different levels of rain intensity, spanning clear to slanting heavy rain. Unlike other similar datasets comprising images taken from the cockpit, the camera lens is directly exposed to the water droplets in our case. * We provide the characterization of a VO system under different rain intensities in order to demonstrate the varying consequences to tracking accuracy, obtaining an average error in path estimation ranging from 0.07 m to 2.5 m. This permits quantifying the average error and recovery time, and may serve as a basis for designing suitable strategies to safely navigate in dynamic rainy weather conditions. * We describe the training, testing, and comparison of three state-of-the-art DNNs to explore the feasibility of the approach and explore the trade-offs between performance and resources needed [The source code and all research artefacts will be available open source on acceptance.]. Our results demonstrate that various off-the-shelf DNN-architectures developed for mobile or constrained processors offer excellent low-latency classification accuracy under almost all conditions. The best performing model is MobileNetV3 small reaching an accuracy of 90% with a frame rate of 93 FPS and a memory footprint of 1.28 MB. The paper is organized as follows: Section <ref> analyses the related work concerning UAV navigation in adverse conditions. Section <ref> presents the experimental setup used to conduct the experiments, and Section <ref> summarizes the related results. Section <ref> presents the DNN used and the training setup, while Section <ref> presents the different DNN test results. Finally, Section <ref> concludes the paper with a view to future work. § RELATED WORKS UAV deployment is increasing rapidly thanks to commercial devices that are easily accessible to professionals, researchers and amateur enthusiasts alike. Despite recreational use, UAVs are excellent tools to support a variety of industrial application scenarios. In future, autonomous UAVs are likely to be able to navigate in a coordinated `swarm' where each UAVs is a node or agent of an IoT system <cit.>. In this setting, they can collaborate to exchange data, obtain a more accurate data collection, and efficiently complete critical mission tasks <cit.>. They offer advantages in extreme environments where human intervention may be hazardous. Moreover, they avoid the need for a specifically trained and certified pilot to control them on an individual basis, consequently, increasing their reliability and opening their usage to many applications <cit.>. 
A major constraint, however, continues to be the requirement to operate safely in highly dynamic outdoor environments mostly affected by weather. Most contemporary UAV systems cannot fly in all weather conditions, limiting their usage for time-limited missions (e.g., search and rescue), and under the control of human pilots. The authors in <cit.> have studied UAV “flyability", which is “the proportion of time drones can fly safely". On average, a common drone has a `flyability' lower than 5.7 h/day (or 2.0 h/day considering only daylight hours). However, this estimate does not consider all weather conditions, especially extreme ones, such as slanting heavy rain or high-speed wind. This analysis suggests increasing the drone's weather resistance to improve its flyability. For instance, a weather-resistant drone may increase its flyability to 20.4 h/day (or 12.3 h/day considering only daylight hours). This research confirms the fundamental role of weather in drone navigation. However, it does not take into account the drone's autonomous navigation, thus the perturbation of the sensing and navigation systems involved in this technology. Researchers are studying autonomous navigation systems for UAVs with deep-reinforcement learning to increase their reliability as components of Internet of Things systems <cit.>. However, there is a knowledge gap and missing contributions that demonstrate the effects of variable weather conditions on these autonomous UAV systems. Many researchers have begun to study these effects from an autonomous driving point of view. Accordingly, the sensing systems and data involved are tailored for autonomous vehicles, and are thus not directly applicable to small autonomous UAVs. For instance, datasets of images available to the research community in the context of autonomous vehicles are taken from within the vehicle cockpit and looking out through the windshield. This is a setting that may not be similar for UAVs <cit.>, where camera lenses are often directly exposed to the environment. Nonetheless, these works show the effect of adverse weather conditions in autonomous vehicles, suggesting clearly that similar conditions will be present and affect autonomous UAVs. We therefore expect that this can be studied and addressed by adopting similar methodologies <cit.>. Adverse weather conditions comprise perturbed situations that are caused by, e.g., wind, rain, snow, fog, and flares. For example, the authors in <cit.> have studied autonomous UAV navigation under perturbations caused by external airflow or wind. In particular, they have proposed a solution based on reinforcement learning (RL) to tackle unknown external disturbances whilst guaranteeing exponential convergence for any feasible reference trajectories <cit.>. Such works represent valuable initial research in this field as they open the usage of autonomous UAVs under external wind, but leave open opportunities to examine and solve for the effects of additional environmental disturbances. Considering the variety of under-explored adverse weather conditions, we focus on rain given that it is one of the most common variable conditions, and can be expected to be responsible for significant perturbation to VO systems and consequently damaging to the accuracy and safety of local trajectory planning. Accordingly, we focus on the camera system responsible for the sensor data provided as input to the VO algorithm. 
Raindrops can introduce various perturbations to the sensing and perception system, such as refraction or reflection of light rays. This results in pixel value fluctuation, thus causing inaccurate processing of the raw images. For instance, DL-based algorithms (e.g., object detection and image classification) are heavily affected by rain, making them unreliable in such conditions. Thus, it is important to be aware of the navigation condition to optimize the involved image processing and computer vision algorithms <cit.>. Furthermore, researchers have proposed de-raining methods as image post-processing techniques to mitigate the rain effects. However, such techniques are likely to be minimally effective in our case given performance limitations and their relatively large computational complexity <cit.>. Meanwhile, other contributions focus on the implementation of specific algorithms that can outperform standard ones in adverse conditions, although these solutions may not be translatable as they are tailored to deal with specific applications, and thus their utility in other scenarios is questionable <cit.>. In general, the impact and severity of variable weather, particularly rain, is understudied. As a consequence, we believe that this research may be important in the development and implementation of fully autonomous UAVs that can safely navigate outdoors in all conditions. § EXPERIMENTAL SETUP Our initial objective is to determine the extent to which various rain conditions affect the performance of a state-of-the-art VO system used for autonomous local trajectory tracking and generation. We make the assumption that depth perception (or other) cameras mounted on UAVs are likely to have the camera lens directly exposed to the environment. This follows the majority of related literature on autonomous UAV systems that leverage VO and VIO for trajectory tracking and generation. As such, in addition to specifying the sensor and computer architecture (Sec. <ref>), some mechanical design to ensure ingress protection against moisture is a prerequisite (Sec. <ref>). §.§ Hardware Specification The key sensor and computer hardware underpinning the VO system comprise a processing unit, i.e., an Intel NUC 11 [Specifically, we use Intel NUC 11 Pro Kit NUC11TNKi7 as the computation unit.], and a depth camera, i.e., an Intel Real Sense D435i. These are typical components in use among researchers developing autonomous UAV systems leveraging visual odometry <cit.>. §.§ Mechanical Design for Moisture Protection Given that the electronics may become damaged by exposure to water, we designed and fabricated a water-resistant box to enclose and protect them, as shown in Figure <ref>. The dimensions of the box are 20×15×10 cm (length, width, height), and so it can easily host the processing unit, the depth camera, and the necessary cables. At the back of the box, there is a lid that permits the insertion and removal of the electronic devices; moreover, an IP68 nylon gland connects the device power supply to an external power source to further ensure water resistance. The manufacturing process has been conducted using a laser cutting machine to cut the box faces made of polymethyl methacrylate. Then, we assembled the box with a specific bi-component glue, silicone, and rubber seals to enhance water ingress protection. §.§ Visual Odometry Algorithm The processing unit is programmed with the “VINS-Fusion" algorithm described in <cit.>.
It consists of an optimization-based multi-sensor state estimator that runs accurate simultaneous localization and mapping (SLAM) for autonomous navigation applications. The authors of VINS-Fusion developed the platform to support a variety of visual-inertial sensor types including mono camera and IMU, stereo cameras and IMU, and stereo cameras only. We specifically use the algorithm with stereo cameras only running on Intel Real Sense D435i[<https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/tree/master/config/realsense_d435i>]. In this way, the inertial contribution is avoided, and we focus directly on the visual odometry. §.§ Experimental Environment and Settings The experiments were conducted in a controlled indoor laboratory, which is representative of a challenging and high-entropy scenario. Experiments were designed for two different navigation conditions, namely static and moving. In the static condition, the VO system is stationary in all axes and simulates a drone hovering. In the moving condition, the VO system follows a rectangular trajectory of size 140×160 cm with a fixed altitude. The constant height allows the simulation of the drone's navigation when it reaches the desired altitude. In this experiment, a specifically trained user moves the VO system over a stool to ensure a constant velocity of around 0.2 m/s (shown in Fig. <ref>). We use a particularly low velocity to exclude its contribution to the analysis, thus focusing only on the rain effect and perturbation in the navigation system. Moreover, we simulate different rain conditions with a sprayer by changing its distance and inclination from the VO system, taking inspiration from <cit.>. In particular, we use the rain conditions shown in Table <ref>. In addition, we conducted experiments in clear conditions without simulated rain to have a reference for the other experiments. Overall, the slanting rain scenario has been designed to show and analyse the effect of raindrops directly on the camera lens. This scenario is the most common, as it is present during navigation at medium/high velocities. In contrast, the vertical rain scenario does not directly show the raindrop effect on the camera lens but happens as a small raindrop aggregation. This scenario reflects hovering or navigation at low velocities. Furthermore, we measured the time required to recover from the slanting rain condition (i.e., the most severe) needed to ensure a VO navigation with an average error below 30 cm, thus ensuring an acceptable error that does not lead to hazardous situations, especially in higher altitude flights. All the experiments have been repeated 30 times with a view to ensuring statistical soundness. The quantity of water released during each experiment is constant and consists of 1.8 ml/s, which is equivalent to 2.4 sprays/s (this rate is due to the natural and continuous spray rate of an average user). § DATASET AND DNN DEVELOPMENT §.§ Dataset During the experiments presented in Section <ref>, we collected raw color images to construct a dataset representing the different rain conditions analysed in this study following <cit.>. This data can be useful to develop a classification system that can understand the external condition and then act accordingly. The dataset consists of 7 classes of 48k images per class, namely “Clear", “Slanting Heavy Rain", “Vertical Heavy Rain", “Slanting Medium Rain", “Vertical Medium Rain", “Slanting Low Rain", and “Vertical Low Rain". Figure <ref> shows several examples of images from the dataset. 
Overall, the dataset is composed of around 336k images and is almost 12 GB (compressed) in total, where 80% is used for training, 10% for validation and the remaining 10% is used for testing. §.§ Deep Neural Network Specifications Three different DNNs have been selected and trained with the dataset developed in Section <ref>. We use state-of-the-art architectures, namely MobileNetV2 <cit.> (alpha parameter equal to 0.35), MobileNetV3 small <cit.> (alpha parameter equal to 0.35), and SqueezeNet <cit.> as they show an optimal trade-off between performance and computational complexity <cit.> for similar mobile and/or constrained deployment contexts. The DNNs are trained on an NVIDIA GeForce RTX 4090 GPU with the following hyperparameters: * 100 epochs * Image shape 224×224×3 * SGD optimizer * Batch size 64 * Polynomial decay learning rate from 10^-1 to 10^-3 with 10^4 decay steps and square root function (i.e., power equal to 0.5) Table <ref> summarizes the number of parameters and the memory footprint of the three architectures. Even though they have a deep structure, they present a low memory footprint because of the innovative blocks that compose them. § EXPERIMENTAL RESULTS & ANALYSIS In the first part of this section, we present the experimental results analysed with the setup presented in Section <ref>. We analyse the experiments in static and moving conditions. First, we use the standard deviation to provide the error of the path estimation of the VO system. In the second, we use the root mean square error (RMSE) to provide, on average, the error on the path estimation of the VO system by using the clear condition as the reference. Moreover, we provide the restoring time for the slanting rain scenario needed to ensure navigation with an error below 30 cm. In the second part, we present the results of the DNN test. In particular, we use the confusion matrix of the 7-class classifier to compute average accuracy, precision, recall, and f1-score. These results are used to identify the best performing DNN. §.§ VO System in Static and Moving Conditions The VO system developed in Section <ref> has been tested in static and moving conditions. For each condition, we evaluated different rain intensities and modalities namely slanting heavy rain, vertical heavy rain, slanting medium rain, vertical medium rain, slanting low rain, and vertical low rain (as shown in Table <ref>). §.§.§ Static In this experiment, the VO system is completely stationary at the same point for the duration of the trials. Figure <ref> shows a comparison of the trajectory estimation and data distribution of the experiments conducted in a static scenario under different rain conditions. Furthermore, Table <ref> summarizes the error computed in terms of standard deviation as the system is fixed and the real position is 0 m in all axes. In this way, it is possible to evaluate the performance degradation starting from the optimal condition (i.e., clear) to the worst-case scenario (i.e., slanting heavy rain). Figures <ref> and <ref> reveal the severity of the slanting heavy rain scenario confirmed by a considerable drift in Table <ref>. On the other hand, the “Vertical Low Rain" scenario is almost comparable to the “Clear" one, meaning that the rain perturbation can be negligible. §.§.§ Moving In this experiment, we analyse and compare the effect of the different rain conditions of Table <ref> in a moving scenario (i.e., the VO system following a rectangular trajectory). 
Figure <ref> shows a comparison of the trajectory estimation under the different rain conditions broken down depending on the rain intensity (i.e., heavy, medium, and low rain). Furthermore, Table <ref> summarizes the RMSE computed by using the clear condition as the reference. This analysis confirms the severity of the slanting heavy rain scenario, which introduces an unacceptable drift in all axes, especially in the vertical one (Figure <ref>). On the other hand, the vertical rain scenarios are almost comparable and less alarming as they show drift in the order of dozens of centimetres. §.§ DNN Test We test the DNNs developed in Section <ref> by using their confusion matrix. In particular, each architecture is evaluated by considering the precision, recall, and f1-score of each class as shown in Tables <ref>, <ref>, and <ref>. Metrics are computed with the test set of the dataset presented in Section <ref>, which consists of 10% of the total dataset size. The three architectures perform well in all classes, especially for the clear and slanting heavy rain scenario. This is because they are the two extremes of the conditions studied, meaning that the system can easily discriminate these two scenarios. However, the “Vertical Low Rain" class presents the lowest recall (highlighted in red in Tables <ref>, <ref>, and <ref>), meaning the system recognizes many false negatives. We expected such behaviour as the perturbation introduced by a vertical low rain scenario is very small and almost comparable to the clear scenario (as analysed in Section <ref>). Nonetheless, considering UAV autonomous navigation, a false negative in a vertical low rain scenario does not present a hazard during navigation, thus making it acceptable. Moreover, we provide the overall accuracy, precision, recall, and f-score in Table <ref>. The computational latency of the three classifiers on the Intel NUC 11 is shown in Table <ref>. The performance of the three DNNs is comparable, ensuring an accuracy, precision, recall, and f1-score over 90%. On the other hand, MobileNet V3 Small presents the lowest memory footprint (Table <ref>) and the lowest classification latency of around 10 ms (Table <ref>); thus it is preferable to implement a stable, real-time, and reliable on-the-edge solution in small-size UAVs. § CONCLUSION Autonomous UAV navigation based on VO systems operating under variable or adverse weather conditions is an understudied topic in the literature. Given that UAV employment is increasing rapidly in IoT contexts, it is fundamental to develop solutions to mitigate potentially hazardous situations (e.g., navigation in the rain) and increase drone `flyability'. In this paper, we investigated the behaviour of a VO system under different rainy conditions that might be encountered during autonomous UAV navigation. Intuitively, the analysis determined that the worst-case scenario is slanting heavy rain, which was demonstrated to introduce an unacceptable error (i.e., from 1 m to 2.5 m) in the path estimation, thereby making the navigation dangerous and unreliable. On the other hand, the other “slanting rain" scenarios (i.e., slanting medium rain and slanting low rain) present a higher error compared with the vertical rain conditions, meaning that they have to be evaluated depending on the application requirements. We demonstrate a basis for the development of solutions to mitigate the effects of rain and make the autonomous UAV systems more `aware' of dynamic in situ environmental conditions. 
Having trained and compared models based on three candidate DNN architectures, we have demonstrated that an accuracy of around 90% can be achieved in classifying the colour images used by the VO system across 7 classes: clear, slanting heavy rain, vertical heavy rain, slanting medium rain, vertical medium rain, slanting low rain, and vertical low rain. The best-performing architecture has reached a frame rate of 97 FPS, achieving approximately real-time processing. Accordingly, it is possible to use this information and incorporate these techniques in the development of disturbance estimation approaches feeding into online flight controllers that may allow for specific counteractions to be taken by the UAV (e.g., switch to an alternative navigation system, change the navigation path, land, etc.) depending on the environmental condition. This may permit performance improvements for navigation in otherwise hazardous situations, e.g., by avoiding collisions and increasing the UAV's reliability and flyability. Taking an `Internet of Things' perspective, the ability of a UAV to determine the local environmental conditions in real time may also be of significant importance to the development of more accurate and timely localised weather forecasting methods through the provision of this data. This and the integration of precipitation-based disturbance estimation with autonomous UAV tracking and trajectory control systems are left for future work. IEEEtran
http://arxiv.org/abs/2407.13337v1
20240718093447
Long-Term 3D Point Tracking By Cost Volume Fusion
[ "Hung Nguyen", "Chanho Kim", "Rigved Naukarkar", "Li Fuxin" ]
cs.CV
[ "cs.CV" ]
Long-Term 3D Point Tracking By Cost Volume Fusion Hung Nguyen Chanho Kim Rigved Naukarkar Li Fuxin Oregon State University {nguyehu5, kimchanh, naukarkr, lif}@oregonstate.edu July 22, 2024 =================================================================================================================================== § ABSTRACT Long-term point tracking is essential for a better understanding of non-rigid motion in the physical world. Deep learning approaches have recently been incorporated into long-term point tracking, but most prior work predominantly functions in 2D. Although these methods benefit from well-established backbones and matching frameworks, the motions they produce do not always make sense in the 3D physical world. In this paper, we propose the first deep learning framework for long-term point tracking in 3D that generalizes to new points and videos without requiring test-time fine-tuning. Our model contains a cost volume fusion module that effectively integrates multiple past appearances and motion information via a transformer architecture, significantly enhancing overall tracking performance. In terms of 3D tracking performance, our model significantly outperforms simple scene flow chaining and previous 2D point tracking methods, even if one uses ground truth depth and camera pose to backproject 2D point tracks in a synthetic scenario. § INTRODUCTION Motion estimation is a task that has existed since the beginning of computer vision. Short-term dense motion estimation problems, such as optical flow in 2D and scene flow in 3D, have been extensively studied. However, the utility of these tasks is limited because many points are featureless, making their motion estimation within two frames fundamentally ambiguous. Moreover, chaining 2-frame motions to derive point trajectories is susceptible to significant cumulative errors and ineffective for handling long occlusions. Nevertheless, spatio-temporal tracking of keypoints has always been of interest <cit.> because it offers the capability to track long-term non-rigid object motions. Such knowledge would be greatly helpful for augmented reality and robotics applications, as well as providing supervision for generative models that generate dynamic videos with arbitrary non-rigid motions. Recently, the authors of <cit.> reinvigorated the long-term pixel tracking problem and proposed a framework inspired by previous state-of-the-art optical flow and object tracking work. <cit.> released datasets specifically designed to address point tracking. These datasets, including 2D and 3D data, have boosted research on this task. Most existing methods predominantly address the problem of 2D long-term point tracking. But in the 3D world we live in, such 2D tracking, however accurate it might be, might still miss the 3D motion of the points. Even the state-of-the-art video generator SORA has significant trouble understanding long-term 3D point motion <cit.>, which may limit its usage for downstream tasks such as augmented reality and robot manipulation. As we will show in the experiments, backprojecting 2D long-term point tracks into 3D, even with known camera poses and pixel depths, is still prone to errors due to the sparsity of the depth maps, occlusion, and the accumulation of numerical errors. The ability to track any point long-term in 3D would significantly enhance our understanding of the scene dynamics.
Recently, <cit.> proposed a test-time optimization method to model dynamic scenes using a set of Gaussians, thereby enabling long-term point tracking. While surpassing the performance of prior 2D methods, this approach relies on test-time optimization for each scene and struggles to track new points entering the scene. Test-time optimization is quite computationally expensive, especially for longer videos, making it unsuitable for online tracking. Recognizing the potential of tracking directly in 3D space, we propose a simple yet efficient method for long-term online tracking of keypoints in a dynamic 3D point cloud. To our knowledge our approach is the first deep learning framework that directly attacks long-term 3D point tracking in a generalizable manner that can work on new points and videos without test-time fine-tuning. Our online tracking framework takes as input a sequence of point clouds representing the dynamic scene. The model predicts a position for each query point by combining multiple past appearance information with motion information from the past point trajectory with a transformer-based framework. Occlusions are predicted explicitly to filter out noisy appearance features. We propose an adaptive decoding module that selectively decodes around the query point, enabling the network to process denser point clouds and generate more precise motion for each point. Experiment results show that our approach significantly outperforms baselines such as linking scene flow results or 3D tracks backprojected from 2D point tracks obtained by prior work. In summary, our contributions include: * We propose the first online deep learning-based tracking framework that can track any point in 3D without test-time optimization. * We devise a novel Cost Volume Fusion module that effectively takes into account the long-term appearances of each point and its past motion trajectory. * We propose an adaptive decoding module that significantly reduces memory consumption when training on denser point clouds, allowing the model to produce better motion predictions. § RELATED WORK §.§ Point tracking Tracking any pixel or point in 2D long-term has recently gained significant attention. MFT <cit.> presents an extension for optical flow by constructing optical flows not only between consecutive frames but also between distant frames. These flows are chained together guided by the predicted occlusion and uncertainty scores obtained from pre-trained networks, to derive the most reliable sequence of flows for each tracked pixel. PIPs <cit.> presents a novel framework designed for multi-frame point tracking which simultaneously estimates the target point positions in multiple frames. Therefore, it can handle short occlusion events. To improve tracking performance, Cotracker <cit.> utilizes self-attention layers to track target points and their local and global contextual points together. The self-attention layers enable information exchange among these points, leading to better tracking quality. Tap-Vid <cit.> offers valuable datasets along with a straightforward baseline method. It achieves this by predicting the position of a point in each frame based on the cost volume derived from the query feature and the corresponding feature map. Meanwhile, TAPIR <cit.> introduces a two-stage network comprising a matching stage and a refinement stage. This framework also incorporates the prediction of uncertainty to suppress ambiguous or unreliable predictions, enhancing point tracking accuracy. 
<cit.> introduces a differentiable non-rigid approach that achieves superior performance in reconstructing non-rigidly moving objects. However, this approach can only focus on a single object in the video. In contrast, we focus on modeling entire dynamic scenes by tracking any point within them. A different line of work utilizes test-time optimization techniques to model the scene, resulting in superior tracking performance. Additionally, these methods directly track points in 3D. Therefore, they can utilize useful 3D priors for tracking purposes. Specifically, OmniMotion <cit.> represents video content by employing a canonical 3D volume. It learns a set of bijections to map points between any frame and the canonical one, thereby enabling tracking capability. Similarly, <cit.> represents the scene with a set of Gaussians. While the number of Gaussians, their colors, and opacity remain fixed throughout the video, these Gaussians can move and rotate freely to model the dynamic scene. Therefore, the tracking capability emerges from persistently modeling the dynamic scene under these constraints. To achieve superior tracking performance, <cit.> require significant time to reconstruct the 3D model of the scene using test-time optimization. Consequently, these methods cannot be used for online tracking. In contrast, our method tracks points online without test-time optimization. §.§ Scene flow estimation Scene flow estimation predicts the 3D motion field of points, with the input being either 2D images or 3D point clouds. We focus on the approaches that has 3D point clouds as input as they are more relevant to our work. Approaches can usually be categorized into two types: supervised and self-supervised. However, many supervised methods can utilize self-supervised losses to adapt a pre-trained model to a new dataset without ground truth. FlowNet3D <cit.> is among the first to use PointNet++ <cit.>, a deep network for point clouds, to predict scene flow from point clouds directly and proposes important basic modules such as flow embedding, set conv, and set upconv layers that are commonly used in subsequent works. FLOT <cit.> reformalizes scene flow estimation as an optimal transport problem to initially compute scene flow and subsequently refine it using a deep network. PointPWC <cit.> introduces novel cost volumes, upsampling, and warping layers and utilizes a point convolution network <cit.> to handle 3D point cloud data. Scene flow is then constructed in a coarse-to-fine fashion. The authors also propose a novel self-supervised loss that has been adopted in subsequent works <cit.>. PV-Raft <cit.> introduces a point-voxel correlation field to handle both long-range and local interactions between point pairs. NSFP <cit.> represents scene flow implicitly with a neural network trained directly on test scenes with self-supervised losses. Fast neural scene flow <cit.> improves upon NSFP by utilizing the distance transform <cit.> and a correspondence-free loss, significantly reducing processing time. Finally, with the rise of diffusion models, <cit.> incorporate diffusion processes into the scene flow estimation pipeline, achieving millimeter-level end-point error. § METHOD §.§ Overview We aim to track any point in a video of a dynamic 3D scene. We assume camera poses and depth information have been obtained (e.g. from a SLAM system <cit.>). 
With these, the video is converted to a sequence of point clouds V={p_0, , p_T} where p_t={p_t,i}, and p_t,i∈ R^3 denotes the 3D coordinates of each point i in the point cloud at time step t. Let q_t,j∈ R^3 be the j^th query point at t. For each query point q_t,j, the model predicts the 3D motion v_t+1,j = q_t+1,j-q_t,j and the occlusion status σ_t+1,j∈{0,1} in the next frame. The same process is then repeated autoregressively for long-term point tracking. The multi-level features for each scene point are obtained using a point-based U-Net backbone <cit.>, comprising an encoder and a decoder. Let p_t(l), f_t^p,E(l), and f_t^p,D(l) represent the point cloud used at level l and its corresponding encoder/decoder features extracted at level l from our backbone where l=1 represents the densest level and l=L the sparsest level. Here, p_t(l) is derived by applying grid-subsampling on p_t(l-1) with p_t(1) = p_t. For simplicity, unless explicitly stated, we illustrate the algorithm on a single level and hence refer to the point cloud and their features as p_t and f_t^p. Unless specified, the features f_t^p represent the decoder features f_t^p,D. We use f_t,i^p to denote the decoder feature at a specific point location p_t,i. §.§ Background - PointPWC PointPWC <cit.> is a deep network that can be trained in either a supervised or unsupervised fashion to predict the scene flow between two point clouds. In PointPWC, a patch-to-patch strategy is used to obtain a robust cost volume and increase the receptive field. C(p_t,i) = ∑_p_t,u∈ N_t(p_t,i) W_t(p_t,i, p_t,u) ∑_p_t+1,j∈ N_t+1(p_t,u) W_t+1(p_t,u, p_t+1,j) cost(p_t,u, p_t+1,j) W_t(p_t,i, p_t,u) = MLP(p_t,i-p_t,u) where N_t(p) represents a neighborhood around p at time t, and cost(p_t,u, p_t+1,j) refers to the matching cost between two points. The cost is computed via MLP using the concatenation of color features at p_t,u and p_t+1,j and the relative position between the points as input <cit.>: cost(p_t,u, p_t+1,j)=MLP([f^p_t,u, f^p_t+1,j, p_t+1,j-p_t,u]). The idea of the cost volume is to incorporate matching between points in the neighborhood N_t(p_t,i) and the neighbors of each point in the frame t+1, which is similar to a small 2D window that is commonly used to derive 2D cost volumes <cit.> – it increases the range that points p_t,i can match to as well as the robustness of the matching. Given the cost volume C(p_t,i) of a point p_t,i at level l, the PointPWC framework estimates the flow for each point in a coarse to fine manner. Specifically, given the predictor features from level l+1, the framework first upsamples them to points on level l, then concatenates them with the cost volume C(p_t,i), as well as the decoder feature f_t,i^p at level l. These are used together by the flow predictor to generate the predictor features and the residual scene flow at level l. The latter is then summed with the upsampled scene flow from level l+1 as the predicted flow at level l. §.§ Cost Volume for Long-Term Point Appearance One difference between our long-term point tracking framework and scene flow is the existence of query points that are not necessarily within the given point cloud. Given the hierarchical decoder features of the point cloud, f^p_t, the feature of the query point q_t,i can be extracted by applying a PointConv layer as follows: f^q_t,i = MLP(∑_p_t,j∈ N(q_t,i) W(q_t,i, p_t,j) f^p_t,j). 
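To make the cost construction above concrete, the following PyTorch-style sketch shows one way the simplified (non patch-to-patch) cost volume for a set of query points could be implemented, assuming the K nearest scene points in the next frame have already been gathered. The hidden sizes, the softmax normalization of the aggregation weights, and all variable names are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class PointCostVolume(nn.Module):
    # Per-pair costs from an MLP over concatenated features and offsets,
    # aggregated over the K nearest scene points in frame t+1 with
    # learned, offset-dependent weights.
    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        self.cost_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.weight_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, q_xyz, q_feat, nbr_xyz, nbr_feat):
        # q_xyz:    (N, 3)     query point coordinates at frame t
        # q_feat:   (N, C)     query appearance features (e.g. from frame t_k)
        # nbr_xyz:  (N, K, 3)  K nearest scene points in frame t+1
        # nbr_feat: (N, K, C)  their decoder features
        offset = nbr_xyz - q_xyz[:, None, :]                       # (N, K, 3)
        pair = torch.cat(
            [q_feat[:, None, :].expand_as(nbr_feat), nbr_feat, offset], dim=-1)
        cost = self.cost_mlp(pair)                                 # (N, K, H)
        w = torch.softmax(self.weight_mlp(offset), dim=1)          # (N, K, 1), normalized weights
        return (w * cost).sum(dim=1)                               # (N, H) aggregated cost volume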
Since a single point does not contain enough appearance information, when talking about the appearance of each query point q_i, it should be understood as the appearance of the local region containing that point, which could deform from time to time. By jointly considering multiple appearances, tracking performance can be improved. Specifically, given a set of appearances of a query point q_i up to the time step t, F^q_t,i = {f^q_t_1,i, f^q_t_2,i, , f^q_t_n,i}, where t_1,t_2,...,t_n represent the frames storing the query's appearances (t_n ≤ t. Refer to Sec. <ref>), we can obtain a set of cost volumes C^q_t,i = {C_t_1(q_t,i), , C_t_n(q_t,i)} as follows: C_t_k(q_t,i) =∑_p_t+1,j∈ N(q_t,i)W(q_t,i,p_t+1,j) cost_t_k(q_t,i, p_t+1,j) cost_t_k(q_t,i, p_t+1,j) = MLP([f^q_t_k, i, f^p_t+1,j, p_t+1,j-q_t,i]) where we use the feature at timestep t_k to account for the query's multiple appearances. Note that here we used a simpler cost volume construction without the patch-to-patch formulation as in Eq. (<ref>). This is because for long-term tracking, it is difficult to define neighbors at frame t+1 from a past frame t_k that could be very far apart temporally. Results in Table <ref> show that our cost volume formulation is only slightly less effective than the patch-to-patch formulation. Another important aspect of the query point features is that based on Eq. (<ref>), they are only affected by scene points surrounding the query point at the same level. Therefore, we propose to selectively decode only the points surrounding the query points and prune other points to reduce total computation and memory consumption, especially during training time. We only use this selective decoding strategy for l ∈{1, 2}, which are the two densest levels and thus have the highest decoder memory requirements. By utilizing selective decoding, we increased the number of points per frame that fit into GPU memory from 8,192 to 60,000 (with 16 frames used for each mini-batch during training), significantly improving the algorithm's performance, particularly its capability of making predictions with sub-pixel accuracy. For scene flow, this recovered the lost ground we had with the simpler cost volume formulation (Table <ref>), and ablations for long-term point tracking can be found in the supplementary materials. §.§ Fusion of Appearance and Motion Cues A major benefit of long-term point tracking over simply chaining 2-frame scene flows is the capability to incorporate motion cues. Motion cues can help produce motion estimates during occlusion or a blurry frame. However, due to our goal of non-rigid point tracking, points can move in surprising and different ways that cannot be easily captured by a motion prior. In those cases, a good appearance tracking module should take over and predict more precise locations. To properly consider multi-frame appearance and motion jointly, we devise a novel data-driven Cost Volume Fusion module that softly combines motion-based and appearance-matching-based features. The output of this combination is then used to predict the actual motion and occlusion status for each point in the target frame. Below, we detail the specific components of the module. §.§.§ Cost Volume Fusion Module To provide the motion prior for the network, the last M predicted motions of the query point, v_(M,t), i=[v_t-(M-1),i, , v_t,i], are concatenated and encoded with an MLP followed by a group normalization layer <cit.> to obtain a motion prior vector ϕ_t,i=MLP(v_(M,t), i). 
Note that, in the beginning of the video, the list of past motions v_(M,t), i is initialized with zeros. Additionally, each level l utilizes a separate MLP to encode the motion prior while using the same list of past motions as input. For each appearance feature of the query extracted at t_k, we calculate the corresponding cost volume C_t_k,i (a simplified notation for C_t_k(q_t,i)) using Eq. (<ref>). Each of these cost volumes contains appearance-matching-based motion information. In the experiment, to estimate the motion of a query point from T to T+1, we extract the query appearance from the following frames, {0, T-6, T-2, T} to reduce the appearance redundancy from consecutive frames. The cost volume module is shown in Fig. <ref>. To predict the motion of the i^th query in the current frame at the sparse level l, our network jointly uses the motion and appearance information extracted from multiple time steps in the past. Specifically, unlike approaches for short-term point correspondence or scene flow, we maintain a list of potential appearance vectors for each query point as mentioned above and calculate its corresponding cost volume with Eq. (<ref>). Besides, we also consider pure motion-based information from our motion prior ϕ_t,i. This information is especially beneficial when the query point is not visible, as well as other cases where the appearance is ambiguous. Each cost volume C_t_k,i is concatenated with the corresponding point appearance f^q_t_k,i to obtain C^f_t_k,i. It contains information about the matching between points from frame t+1 and past query point appearances. The motion prior ϕ_t,i is concatenated with a learnable feature token. We propose to use the motion prior as the query and to cross-attend to all the {C^f_t_k,i}_k=1^n. The intuition is to select the points that match some past query point appearance while are also plausible w.r.t. the motion trajectory. Besides, we also use a learnable token E to allow the module to rely on the motion prior when no appearance-matching information is available due to occlusion. This module can be implemented by stacking multiple transformer decoder <cit.> layers together. O_t,i = Cross-Attn({C^f_t_1,i,, C^f_t_n,i, E}, ϕ_t,i) Ĉ_t,i = MLP(O_t,i+ϕ_t,i) Finally, to predict the motion at the current level l, v_t,i, we use the flow predictor head from <cit.> which takes the transformer features Ĉ_t,i and the predictor features from level l+1 as input. Besides, the transformer features Ĉ_t,i are fed to an MLP to predict the occlusion status at each level l. By training this, we encourage the model to store the occlusion information within the cost volumes. However, during inference, we only use the predicted occlusion at level 1 as the final occlusion prediction for each query point. §.§ Model Training Our model relies on the estimated frame-to-frame motion of each point to construct the long-term point trajectory. Therefore, instead of directly training the model on the long-term tracking data, we split the training process into two stages: * Scene flow pretraining. The whole model is pre-trained with the scene flow datasets. Each training sample includes two consecutive frames randomly sampled from a video. After the training, the model can achieve competitive performance in the scene flow estimation task. * Long-term tracking. In this stage, the Cost Volume Fusion Module is added to handle multiple appearances of the query point and its past trajectory. The network is trained with randomly sampled longer videos. 
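Stepping back to the Cost Volume Fusion step described above, the following PyTorch-style sketch illustrates the overall data flow: the encoded motion prior acts as the attention query over the per-appearance cost-volume features plus a learnable token, and the attended output is combined with the prior. The use of nn.TransformerDecoder, the layer sizes, and the variable names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CostVolumeFusion(nn.Module):
    def __init__(self, dim, n_heads=4, n_layers=2, m_past=6):
        super().__init__()
        # encode the last M predicted motions into the motion prior phi
        self.motion_mlp = nn.Sequential(
            nn.Linear(3 * m_past, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.motion_norm = nn.GroupNorm(1, dim)
        self.learnable_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, past_motions, cost_feats):
        # past_motions: (N, M, 3)  last M predicted motions per query (zeros at the start of a video)
        # cost_feats:   (N, n, D)  cost volumes concatenated with past appearances, C^f_{t_k}
        N = past_motions.shape[0]
        phi = self.motion_mlp(past_motions.flatten(1))             # (N, D) motion prior
        phi = self.motion_norm(phi)
        mem = torch.cat([cost_feats, self.learnable_token.expand(N, -1, -1)], dim=1)
        out = self.decoder(tgt=phi[:, None, :], memory=mem)        # cross-attend prior over memory
        return self.out_mlp(out.squeeze(1) + phi)                  # fused feature for the flow/occlusion heads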
By following this two-stage training pipeline, we can utilize synthetic scene flow datasets to improve the overall tracking performance and stabilize the training process. We supervise the model with the GT point positions and the GT scene flow as follows: L^track = 1/Tn_q∑_l=1^L ∑_t=1^T ∑_i=1^n_qα^l-1 |q_t,i-q̂_t,i|_1 L^sf = 1/T∑_l=1^L ∑_t=1^T ∑_i=1^|p_t|γ^l-1 |Δ p_t,i-Δp̂_t,i|_1 where q_t,i and q̂_t,i are the predicted and the ground truth positions, and Δ p_t,i and Δp̂_t,i are the predicted and the ground truth flow of the scene point p_t,i. Motion in the 3D world is usually smooth in terms of both direction and magnitude unless the target is affected by an external force from a collision. To encourage this smoothness property, we introduce a smoothness loss that minimizes the difference between the predicted motions in consecutive frames of each query point. The motion smoothness can be defined over all query points as follows: L^smooth=1/LTn_q∑_l=0^L∑_i=1^n_q∑_t=0^T-1||v_t,i-v_t+1,i||_1 We also attempted to use L^rigid and L^iso, the rigidity and isometry losses from <cit.>. Altogether, we train the model using the weighted sum of the above losses: L = λ_1 · L^sf + λ_2 · L^track + λ_3 · L^smooth + λ_4 · L^rigid + λ_5 · L^iso We use grid search to find the optimal values for these hyperparameters. During the scene flow pretraining stage, we only use L^sf and L^track. λ_1 and λ_2 are set to 2 and 1 respectively. During the second stage, we set λ_1=2, λ_2=1, λ_3=0.3, and λ_4=λ_5=0.2. § EXPERIMENTS §.§ Dataset and Training Details We use the FlyingThings <cit.> dataset to pre-train the scene flow model, and then train and test on two separate datasets, TapVid-Kubric <cit.> and PointOdyssey <cit.>. TapVid-Kubric is a synthetic dataset with 9,760 training videos and 250 testing videos. Each video has 24 frames with resolution 256×256. In each validation video, 256 query points are randomly sampled from all the frames. The model is required to track these points in the rest of the video. In the training set, we can generate an arbitrary number of query points for training. Because the dataset is synthetic, we can generate ground truth depth for all points. We first build a scene flow task based on the point tracking ground truth on TapVid-Kubric's training split and fine-tune the pretrained model on this task. Then, with the encoder and decoder backbone frozen to save GPU memory, the full model is fine-tuned on 16-frame videos from the point tracking dataset constructed from TapVid-Kubric <cit.>. For data augmentation, we use random horizontal flipping, random scaling of the point cloud coordinates, and random temporal flipping. We use a batch size of 16 during the scene flow pre-training stage and a batch size of 8 during the training of the full model. PointOdyssey provides much longer synthetic videos (over 1,000 frames and up to 4,000 frames) for training and testing. The dataset includes 131/15/13 videos in the training/validation/testing split. Because PointOdyssey does not provide scene flow ground truth, we augment a single frame with random translations and rotations to simulate scene flow data. The model is first fine-tuned on this simulated scene flow data before being trained on the entire training set. We observe that the model supervised with the simulated scene flow data tends to converge faster in the second phase than the one trained with self-supervised loss. Backbone We utilized a U-Net-based PointConvFormer <cit.> backbone.
This is similar to PCFPWC in Table <ref> but with our simplified cost volume (Eq. (<ref>)) instead of theirs (Eq. (<ref>)). Inference. During the inference stage, query points can appear anywhere in the video. Hence, we run the model twice (forward and backward) for each video to track the query points in both directions. §.§ Metrics We extend previous 2D point tracking metrics to 3D: * Occlusion accuracy (OA) is the accuracy of the occlusion prediction for each query point on each frame. * δ^x measures the position accuracy of the predicted point on each frame where the point is visible. A predicted point is considered correct if it is within x centimeters (cm) from the ground truth position. * δ^avg is the average of δ^x with x∈[1, 2, 4, 8, 16] (cm). For PointOdyssey, where the videos are longer, we also adopt the survival rate metric <cit.> (SR), which is the average number of frames before each tracked point drifts T cm away from the ground truth position, divided by the number of frames in the video, with T=50. We also report results with 2D metrics in order to compare with other 2D methods, but those are secondary results because the primary goal of this paper is 3D tracking. §.§ Scene Flow Pre-Training Results In Table <ref>, we show the scene flow pretraining results on the FlyingThings dataset. Our framework outperforms PointPWCNet and other 2-frame baselines. Our scene flow performance was significantly improved when we used an input point cloud size of 60,000 points over the conventional 8,000 points used in <cit.>, despite our simpler cost volume computation than PointPWCNet and PCF-PWCNet. §.§ 3D & 2D Evaluations To the best of our knowledge, no prior deep learning-based work has tackled the problem of generalizable 3D point tracking. Hence, we mainly compare with 2D baselines and simple scene flow chaining. Since the dataset is synthetic, we can lift 2D tracking results to 3D using the ground truth camera pose and depth map. Due to the subpixel-level predictions from the 2D point trackers, the depth of each point is obtained by interpolation on the provided depth map. For the occluded points, the depth is linearly interpolated using the depth of that point before and after the occlusion. Results with alternative nearest neighbor interpolation are similar and are shown in the supplementary material. Table <ref> shows 3D point tracking results comparing our proposed approach with state-of-the-art 2D approaches TAPIR <cit.> and CoTracker <cit.>, as well as simply chaining the pretrained scene flow. We significantly outperform both approaches, by 15.3 and 26.2 percentage points on the 3D δ^avg metric. The performance difference is more significant in the occluded areas, where we record a 44.0% accuracy, whereas TAPIR and CoTracker obtain accuracies lower than 10% due to not having a good 3D motion prior to maintain a good track during occlusion. Note that baseline results were already generated by interpolating tracks using the ground truth depth. Better nonlinear interpolation may slightly improve their performance, but it is unlikely that their performance would catch up to our approach. Such significant performance differences support our argument that even accurate 2D tracking can have significant issues locating accurate tracks in 3D, even with fully known camera pose and ground truth depth. A qualitative illustration in Fig. <ref> indicates the issue with 2D trackers.
At frame T, although the original view does not indicate a significant error on the red circled point that is being tracked, the tracking actually drifted slightly off the blue object. In consequence, if we render it from a novel viewpoint outside of the original 2D image plane, we can see a significant jump in the trajectory. At time T+1, the tracked point went back to the blue object and from the novel view we again see a significant jump in the trajectory. This indicates significant errors in 3D tracking despite low 2D tracking errors. Our approach, on the other hand, works naturally in 3D. Hence it does not suffer from such drifting issues and produces much more consistent 3D tracking results. Results on the test split of PointOdyssey in Table <ref> show a similar trend. We measure the Survival Rate and δ^avg on the first 128 frames and 512 frames of each video or the full sequences. Our framework also outperforms the baselines by a large margin across all the metrics. In Table <ref>, we show results on the 2D evaluation on Kubric. We projected our 3D results to 2D using the known camera parameters for our approach. Our approach outperformed many baselines and are generally comparable or slightly worse than TAPIR and CoTracker in 2D when only a single iteration is used for them. Note that for our 3D tracking results in Table <ref>, we only compared against the full version of TAPIR and CoTracker (i.e., with multiple iterations) and still outperformed them. We did not include the OA and 2D-AJ (Average Jaccard) numbers for CoTracker (1 iter) because CoTracker produces the occlusion status only at the last iteration. TAPIR and CoTracker additionally utilize an hourglass network that decodes, re-encodes and decodes several times to obtain more precise predictions, but that is against the spirit of online algorithms and we did not pursue that path. We did outperform TAPIR significantly in the prediction of occluded points, showing the benefits of interpolation of the motion from 3D. §.§ Ablation Experiments We analyze the contribution of using multiple appearance features and the motion prior. Results are presented in Table <ref>. We demonstrate that by employing multiple appearances and the motion prior, the performance across all 2D and 3D metrics consistently improves over using a single appearance. Additionally, the performance improvement under occlusion surpasses that in normal conditions, showing that utilizing multiple appearances of the query point makes the model more resilient to occlusion. By integrating appearances from the past, the model can recover from a few bad appearances during or right before/after occlusion and achieve better results. We also conduct ablation on the regularization terms. Results in Table <ref> show that our smoothness term is very useful in terms of improving the 3D point tracking results. The rigidity and isometry loss from  <cit.> provide marginal improvements in the 2D results. § CONCLUSION In this paper, we proposed a 3D long-term point tracking approach based on fusing multiple cost volumes and motion information with a transformer model, which, to the best of our knowledge, is the first generalizable online long-term 3D point tracking approach using deep learning. By selective decoding, we significantly increased the size of the input point cloud that fits into GPU memory, which improves the performance of scene flow and 3D long-term point tracking. 
In terms of 3D point tracking performance, our approach significantly outperforms scene flow chaining and 2D long-term point tracking approaches even if they are backprojected to 3D with ground truth depths and camera poses, showing the benefits of tracking in 3D. We hope this paper will increase the community's interest in 3D long-term point tracking. In future work, we plan to utilize our 3D point tracking framework in downstream tasks. ieee_fullname The supplementary material consists of this document and a video demo. In the document, we provide additional ablation results and result analyses. In the video demo, we show comparisons between our approach and CoTracker, the strongest 2D point tracking baseline used in our experiments. § RUNNING TIME BREAKDOWN In this section, we show the running time breakdown for our model. We benchmark our model on the test split of the PointOdyssey dataset. The resolution of each frame in the dataset is 540×960. Therefore, the maximum number of points per frame is up to 518,400. Our average FPS on this dataset is around 2.8. As shown in Fig. <ref>, our model spends roughly half of the total running time extracting the point cloud and query features (i.e., encode & decode). The Cost Volume Fusion module, which predicts the query motion, takes roughly one third of the total running time. To further cut down the running time, we could replace our backbone network with a faster network. § ABLATION - EFFECT OF DENSE POINT CLOUD ON LONG TERM TRACKING As mentioned in the main paper, the selective decoding module enables the use of a dense point cloud by reducing memory consumption significantly. In this ablation, we show the importance of using dense point clouds. As shown in Table <ref>, the overall performance on long-term tracking is improved by switching from sparse to dense point cloud input. § ABLATION - EFFECT OF SELECTIVE DECODING Table <ref> shows the compression rate we achieve by reducing the number of points to be decoded in the two densest levels. Here, we define the compression rate as the ratio between the number of points before and after the pruning process of the selective decoding module. In addition, selective decoding significantly reduces the total memory consumption when training the model (Table <ref>). Here, we show memory consumption during the pre-training process on the scene flow dataset with batch size 8. It also speeds up the training process by 1.7 times. § IN-DEPTH 3D PERFORMANCE COMPARISON TO COTRACKER While working well in 2D space, CoTracker <cit.>'s performance significantly drops in 3D. We hypothesize that the main reason is the noisy depth values used to convert 2D points into 3D. To validate our hypothesis, we visualize 3D tracks from new camera views obtained by rotating the camera around the z-axis, as follows. First, we back-project the predicted 2D point tracks of CoTracker into the 3D scene using the original camera pose and the provided depth maps. The depth of each predicted query point is obtained by first selecting the four nearest neighbors in the depth map (i.e., 4 depth pixels) and calculating the depth through bilinear interpolation. We also show the results for nearest-neighbor interpolation in Sec. <ref>. Then, these points are projected to 2D again using the new viewing angle. In contrast to CoTracker, we can directly project our predicted 3D points into the new viewing angles. The 2D results evaluated in the new camera views are shown in Table <ref>.
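A minimal sketch of this lift-and-reproject evaluation protocol is given below. The pinhole camera model, the 4x4 world-from-camera pose convention, and all names are illustrative assumptions and may differ from the authors' code; track points are assumed to lie strictly inside the image so that the four neighboring depth pixels exist.

import numpy as np

def lift_and_reproject(uv, depth_map, K, T_wc, T_wc_new):
    # uv: (N, 2) subpixel track coordinates in the original view
    u, v = uv[:, 0], uv[:, 1]
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    # bilinear interpolation over the four nearest depth pixels
    d = (depth_map[v0, u0]         * (1 - du) * (1 - dv) +
         depth_map[v0, u0 + 1]     * du       * (1 - dv) +
         depth_map[v0 + 1, u0]     * (1 - du) * dv       +
         depth_map[v0 + 1, u0 + 1] * du       * dv)
    # back-project to camera coordinates, then to world coordinates
    pix = np.stack([u, v, np.ones_like(u)], axis=0)              # (3, N)
    cam = np.linalg.inv(K) @ pix * d                             # (3, N)
    world = T_wc @ np.vstack([cam, np.ones_like(d)[None]])       # (4, N)
    # project into the new (rotated) camera
    cam_new = np.linalg.inv(T_wc_new) @ world                    # (4, N)
    uv_new = (K @ cam_new[:3]) / cam_new[2:3]                    # perspective divide
    return uv_new[:2].T                                          # (N, 2) coordinates in the new view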
Although CoTracker performs well in the original view, its performance rapidly declines when these points are projected into new camera views. In contrast, our model exhibits more robustness to changes in viewpoint. When analyzing the results of CoTracker in the new view - Fig. <ref>, we observe many cases where the points move in zig-zag patterns or have abnormally large motions at some specific time steps. We examine the depth values around these points in the original view and find that they typically reside on or near the object boundary - Fig. <ref>. Consequently, if the predicted positions of CoTracker for the query points are slightly off by a few pixels, it can lead to a significant discrepancy in the extracted depth. This discrepancy causes the queries to oscillate when they are projected into different views. § ABLATION - NEAREST AND BILINEAR INTERPOLATION FOR LIFTING INTO 3D In this section, we show the results of CoTracker <cit.> and TAPIR <cit.> when we use the nearest neighbor to extract the depth value for each query point from the GT depth map - Table <ref>. From these results, it can be shown that the nearest neighbor can extract the depth value for the query more precisely and improve the overall results of CoTracker and TAPIR consistently. However, despite the improvement, CoTracker and Tapir still underperform compared to ours in terms of the 3D metrics. § ABLATION - 2D LOSS FINE TUNING To compare with other 2D baselines, we have to project our 3D results into 2D. To reduce the effect of the numerical error when projecting, we tried to tune our model directly in 2D with the projection loss: L^2D_track = 1/Tn_q∑_l=1^L ∑_t=1^T ∑_i=1^n_qα^l-1 |q^2D_t,i-q̂^2D_t,i|_1 where q^2D_t,i is the projection of the predicted 3D position of the query point, and q̂^2D_t,i is the 2D ground truth. Thanks to this, the accuracy in 2D is slightly improved. The ablations in the main paper were done without this projection loss hence the numbers were a little lower than tables 5 & 6 with the 2D results.
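A minimal sketch of this projection loss, assuming the predicted 3D points are already expressed in the camera frame and using an illustrative level-weighting factor (the mean over frames and queries stands in for the explicit normalization of the formula above):

import torch

def projection_l1_loss(pred_xyz_cam, gt_uv, K, alpha=0.8):
    # pred_xyz_cam: list over levels of (T, N, 3) predicted points in the camera frame
    # gt_uv:        (T, N, 2) ground-truth 2D positions
    # K:            (3, 3) camera intrinsics
    loss = 0.0
    for l, xyz in enumerate(pred_xyz_cam):                 # index 0 corresponds to level 1
        uvw = xyz @ K.T                                    # project with the intrinsics
        uv = uvw[..., :2] / uvw[..., 2:3].clamp(min=1e-6)  # perspective divide
        loss = loss + (alpha ** l) * (uv - gt_uv).abs().mean()
    return loss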
http://arxiv.org/abs/2407.13205v1
20240718064312
Transformer-based Single-Cell Language Model: A Survey
[ "Wei Lan", "Guohang He", "Mingyang Liu", "Qingfeng Chen", "Junyue Cao", "Wei Peng" ]
cs.CL
[ "cs.CL" ]
lanwei@gxu.edu.cn 18775067872@163.com The Guangxi Key Laboratory of Multimedia Communications and Network Technology, School of Computer, Electronic and Information, Guangxi University, Nanning, China The School of Computer and Electronic Information, Guangxi University, Nanning, China hitomil@foxmail.com Both authors contributed equally to this research. The School of Computer, Electronic and Information, Guangxi University, Nanning, China qingfeng@gxu.edu.cn The College of Life Science and Technology, Guangxi University, Nanning, China The Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China weipeng1980@gmail.com § ABSTRACT Transformers have achieved significant accomplishments in natural language processing owing to their outstanding parallel processing capabilities and highly flexible attention mechanism. In addition, a growing number of transformer-based studies have been proposed to model single-cell data. In this review, we attempt to systematically summarize transformer-based single-cell language models and their applications. First, we provide a detailed introduction to the structure and principles of transformers. Then, we review single-cell language models and large language models for single-cell data analysis. Moreover, we explore the datasets and applications of single-cell language models in downstream tasks such as batch correction, cell clustering, cell type annotation, gene regulatory network inference and perturbation response. Further, we discuss the challenges of single-cell language models and outline promising research directions. We hope this review will serve as an up-to-date reference for researchers interested in single-cell language models. Transformer-based Single-Cell Language Model: A Survey Wei Peng ====================================================== § INTRODUCTION Single-cell research has shown tremendous potential across a variety of fields including genetics, immunology and oncology. By utilizing single-cell RNA sequencing data for cluster analysis and the identification of cell subtypes, it is possible to accurately categorize cell populations and reveal crucial information about cell interactions and the structure of tissues <cit.>. Exploring gene expression, gene function and gene-gene interactions at the single-cell level helps to unveil the underlying mechanisms of cellular heterogeneity within tissues <cit.>. Single-cell research is critically important for understanding fundamental biological processes and provides significant insights for the diagnosis of diseases <cit.>. Single-cell datasets usually consist of large amounts of high-dimensional data containing complex information, and there is considerable heterogeneity even among single-cell data originating from the same tissue. In the early stages, traditional machine learning methods, such as n-gram <cit.> and Hidden Markov Models (HMM) <cit.>, were widely used for cell annotation and protein prediction. With the development of machine learning technology, more sophisticated algorithms were applied to single-cell research <cit.>. Subsequently, deep learning models, including Recurrent Neural Networks (RNN) <cit.> and Convolutional Neural Networks (CNN) <cit.>, were used for the analysis of single-cell data. Currently, the transformer architecture developed by Google has become the most popular language model <cit.>.
The transformer can process an entire sentence at once during training and effectively captures long-distance dependencies within sequences through the self-attention mechanism <cit.>. This capability enables transformers to effectively explore various types of single-cell data and has led to an increasing number of researchers applying transformer technology to single-cell research <cit.>. This review introduces the main modules of the transformer in the second section. Then, we provide an overview and analysis of existing single-cell language models in the third section and showcase some downstream tasks accomplished by single-cell language models in the fourth section. Finally, we discuss the challenges and opportunities of transformer-based single-cell language models in the fifth section. We hope to offer assistance to readers interested in understanding single-cell language models. § TRANSFORMER The transformer requires extensive training on large text corpora. It usually employs a self-supervised approach during training, enabling language models to perform classification and generation <cit.>. For instance, transformer-based language models can automatically extract key information from text, generate new text and answer user queries in question-answering. These achievements are credited to the ability of transformers to learn long-term dependencies in language and to allow parallel training across multiple language units. This enhances the parallelism of sentence processing and the capability of transformers to capture overall sequence correlations. The structure of the transformer is depicted in Fig. 1. The transformer has demonstrated excellent performance both when trained from scratch and in pre-training tasks. Transformer-XL <cit.> introduces a segment-level recurrence mechanism and relative positional encoding. It captures longer-term dependencies by learning beyond fixed-length contexts while maintaining temporal continuity to address context fragmentation. Reformer <cit.> reduces the complexity of the attention calculation and uses reversible residual layers instead of standard residual layers to achieve higher memory efficiency and alleviate pressure on computing resources. In addition, pre-training can reduce dependence on annotated data, thus lowering the training cost of transformers <cit.>. GPT <cit.> employs multiple transformer decoder layers and performs unsupervised language modeling during pre-training to learn semantic and syntactic knowledge from text. BERT <cit.> is a model pre-trained on large datasets. It uses bidirectional transformers and a masking mechanism to consider context information from both the left and right sides of the input sequence simultaneously. Due to the success of these models, many models based on them have started to emerge. XLNet <cit.> is a pre-training model based on Transformer-XL that achieves bidirectional learning of context. It uses a permutation-based autoregressive strategy to help the model avoid the inconsistency between pre-training and fine-tuning. RoBERTa <cit.> is a model based on BERT that achieves enhanced training performance by utilizing dynamic masking. §.§ Encoder and decoder The transformer is primarily composed of encoders and decoders, which use residual connections and layer normalization. The layer normalization is defined as follows: LayerNorm(X+MultiHead(X)) LayerNorm(X+FFN(X)) where the X in formula (1) denotes the input embedding.
The X is processed through the multi-head self-attention mechanism (MultiHead). After processing X, the result is added to the original X in formula (1) to obtain the X in formula (2). The X in formula (2) is then processed through the position-wise feed-forward network (FFN). Layer normalization computes the mean and variance of each input sequence to provide more accurate training results <cit.>. The encoder gradually extracts semantic information from the input sequence and encodes it into a series of hidden vectors by stacking multiple identical layers. The decoder is responsible for transforming the hidden representations generated by the encoder into an output sequence. It adopts an autoregressive training approach. Without masking, the decoder would acquire information about the entire sequence of tokens during training, which would lead to a decrease in prediction accuracy. To address this issue, the decoder uses a masked self-attention mechanism in its first sub-layer. After obtaining vector information from the masked self-attention mechanism, it is combined with the hidden vectors provided by the encoder before entering the next layer. Then, the decoder gradually generates vectors of the sequence and transforms them into the final output sequence through a linear transformation and a softmax function. §.§ Multi-head self-attention mechanism The multi-head self-attention mechanism is composed of multiple self-attention heads. It helps the model determine the importance of different input positions during training. In addition, it adjusts the weights at different positions by calculating the correlations between each input position and the other positions. The self-attention mechanism is defined as follows: Attention(Q,K,V)=softmax(QK^T/√(d_k))V where d_k represents the dimensionality of the key vector. Q, K and V are three matrices. K^T represents the transpose of the K matrix. The dot product of Q and K^T denotes the similarity between the current word vector and the other word vectors. After dividing this value by √(d_k) and applying the softmax function, the attention weight coefficients are obtained. The weight coefficients are then multiplied by V to ultimately obtain the attention value. The multi-head attention mechanism is defined as follows: M(Q, K, V)=C(head_1,…,head_h)W^O head_i=Attention(QW_i^Q,KW_i^K,VW_i^V) where C denotes the concatenation function, head_i represents the self-attention module of the i-th head, and W^O is the output projection matrix that weights the concatenated heads. The W^Q, W^K and W^V denote weight matrices. Each input embedding is multiplied by them to obtain the corresponding Q, K and V matrices, and they are updated by backpropagation during training. Each self-attention head has different W^Q, W^K and W^V. The multi-head attention output is obtained by projecting the concatenated head outputs with W^O. §.§ Position encoding The position encoding adds positional information to the embedding vectors of input words in the transformer. It is defined as follows: PE_(pos,2i)=sin(pos/10000^2i/d_model) PE_(pos,2i+1)=cos(pos/10000^2i/d_model) where pos is the position index, i is the dimension index and d_model is the size of the hidden layer. The sine and cosine values for each pos and i are calculated separately using the PE function and then merged into a position encoding vector. This ensures that the embedding vectors of each token contain not only semantic information but also positional information about the input sequence.
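As a compact reference, the attention and sinusoidal position encoding formulas above can be written in a few lines of NumPy. This is a didactic sketch for single sequences (d_model is assumed to be even), not an optimized implementation.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # (n_q, d_v) attention values

def sinusoidal_position_encoding(seq_len, d_model):
    # PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
    pos = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    i = np.arange(0, d_model, 2)[None, :]             # even dimension indices
    angle = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe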
In addition, relative position encoding has been proposed <cit.>. It enables transformers to better capture the positional information of the input sequence, thereby enhancing the performance and generalization capability of the model. §.§ Position-wise feed-forward networks The position-wise feed-forward network (FFN) acts as a multi-layer perceptron that is applied in each encoder and decoder layer <cit.>. It is defined as follows: FFN(x)=max(0,xW_1+b_1)W_2+b_2 where W_1, b_1, W_2 and b_2 are parameters learned during training. The FFN first performs a linear operation that increases the dimension of the input and applies the ReLU activation function to learn more complex feature information. Finally, the FFN reduces the dimension back to the original dimension with a second linear operation. This contributes to enhancing the generalization capability of the features. § THE APPLICATION OF TRANSFORMER IN SINGLE-CELL We categorize transformer-based single-cell data analysis into single-cell language models and single-cell large language models, depending on whether pre-training is used. These models effectively analyze single-cell omics data by utilizing the unique feature representation of transformers. §.§ Single-cell language model This section introduces the structural design and optimization of current single-cell language models. These models are developed based on the transformer framework and have been utilized for analyzing various types of single-cell datasets, including single-cell transcriptomics, spatial transcriptomics and epigenomics. §.§.§ Single-cell language model based on single-cell transcriptomics The transCluster model <cit.> is a transformer-based method for analyzing scRNA-Seq data, and it demonstrates that transformers can be used for scRNA-seq analysis. It utilizes Linear Discriminant Analysis (LDA) <cit.> to obtain input embeddings for the transformer. Then, a CNN is trained on the output of the transformer to predict cell types. In addition, scTransSort <cit.> also combines transformers and CNNs. It uses a CNN to transform the gene embeddings of each cell into multiple two-dimensional matrix blocks. Each matrix block represents a token, and these tokens are trained through 12 transformer layers. Finally, a linear classifier utilizes the output features of the transformer to predict the cell type. CIForm <cit.> is a model inspired by the application of transformers in computer vision (CV). It divides the gene embeddings into equally sized sub-vectors within the gene embedding module. These sub-vectors are combined with positional embeddings and fed into the transformer for training. An average pooling layer then uses the average of the output sub-vectors to obtain the final result. STGRNS <cit.> is an interpretable model based on transformers. It proposes a Gene Expression Motif (GEM) technique to process scRNA-seq data. The combination of GEM and transformers in STGRNS provides stronger interpretability. In contrast to STGRNS, T-GEM <cit.> enhances model interpretability by replacing the weights in the transformer with gene-related weights. It obtains attention values for different genes and then utilizes these attention values for the classification task. §.§.§ Single-cell language model based on single-cell spatial omics and epigenomics PROTRAIT <cit.> is a transformer-based model for analyzing scATAC-Seq data. It utilizes one-hot encoding to map input sequences into a latent space.
When the sequence length is less than a predefined threshold, the one-hot encoding is transformed into a motif embedding through convolutional layers. For sequences longer than the predefined threshold, an alternating combination of convolutional and pooling layers is used to obtain the motif embedding. The embeddings, together with absolute positional information, are subsequently passed into the transformer for further processing, and the output features of the transformer are used for cell classification. TransformerST <cit.> constructs a variational transformer framework for data representation and employs CNNs as both the encoder and decoder. It introduces a graph transformer between the encoder and decoder to analyze spatial transcriptomics data. By constructing an undirected graph, the graph transformer is able to learn nonlinear mappings and aggregate neighbor relationships, which makes high-resolution reconstruction of gene expression possible. §.§.§ Single-cell language model based on single-cell multi-omics SCMVP <cit.> is a transformer-based deep generative model specifically designed for the simultaneous analysis of scRNA-seq and scATAC-seq data. The model establishes two independent channels at the encoder and decoder layers for processing scRNA data and scATAC data. In the scRNA channel, a masked attention mechanism is adopted, while in the scATAC channel, a self-attention mechanism is employed. Subsequently, the outputs of the two channels are combined, and the mean and variance of the common latent variables are obtained through a shared linear layer. scMoFormer <cit.> is a multimodal transformer-based model that uses a heterogeneous graph to model single-cell data. It constructs a multimodal heterogeneous graph containing three types of nodes: cells, genes, and proteins. In the training framework, three transformers are used, each dedicated to extracting the data representation of the corresponding modality. Finally, a multi-layer fully connected network is utilized to predict the target protein expression level of each cell. DeepMAPS <cit.> is a model that introduces the heterogeneous graph transformer (HGT) framework. It constructs a heterogeneous graph from a cell-gene matrix. The entire heterogeneous graph is then divided into multiple subgraphs, and HGT is applied to these subgraphs. Subgraph sampling is performed through a sparsity-based feature selection method. During the training process, node information is updated through multiple training iterations, and training on different subgraphs shares the same set of parameters. After training on all subgraphs is completed, HGT is applied to the entire heterogeneous graph to obtain data features. MarsGT <cit.> is an extended model based on DeepMAPS. The heterogeneous graph of MarsGT is constructed based on a cell-gene matrix, a gene-peak matrix and a cell-peak matrix. Compared to DeepMAPS, it better captures features of single-cell data from the perspective of regulatory networks by incorporating peak information. During the subgraph sampling stage, a probability-based subgraph sampling method is employed to select genes and regulatory regions associated with rare cells. The model is then trained on the subgraphs using transformers. After the trained weights are obtained, the pre-trained model is applied to the entire graph for further training. §.§ Single-cell large language model Currently, large language models are also being applied to the single-cell domain, with GPT and BERT emerging as leading representatives.
This section introduces current single-cell large language models. §.§.§ Single-cell large language model based on single-cell transcriptomics scBERT <cit.> is the first single-cell pre-trained model built on the BERT architecture; its structure is shown in Fig. 2. During training, scBERT is optimized to reduce artificial biases and overfitting, which enhances the generalization capability of the model. To capture similarity between genes, scBERT employs gene2vec <cit.> to obtain a gene embedding for each gene. The input embedding, which captures relationships between genes, is obtained by combining the expression embedding and the gene embedding. This embedding design allows scBERT to transform gene expression information more effectively into transformer inputs for generating cell-specific embeddings. Because most scRNA-seq data exceed the 512-token limitation of standard transformers, scBERT uses the Performer, which reduces computational complexity through approximate self-attention based on low-rank random feature mapping. This enables scBERT to take more than 16,000 genes as input when processing long sequences. In addition, scBERT provides interpretability by using Enrichr to visualize attention weights that reflect the contribution of individual genes. scFoundation <cit.> is a large transformer-based pre-trained model with roughly 100 million parameters. Its embedding module produces the final embeddings with positional information, and it adopts an asymmetric encoder-decoder architecture. During the encoding phase, training is carried out only on non-zero and non-masked expressed genes to reduce computational cost; in the decoding phase, zero and masked expressed genes are restored so that relationships among all genes can be learned. A read-depth-aware task, illustrated in Fig. 3, is used as the pre-training strategy. It successfully harmonizes read-depth differences across cells, making the model more consistent and precise when dealing with cells of varying sequencing depth. §.§.§ Single-cell large language model based on single-cell multi-omics scGPT <cit.> is the first transformer-based single-cell foundation model to undergo generative pre-training on over 33 million cells. The model draws inspiration from GPT, and its structure is depicted in Fig. 4. scGPT treats genes as tokens and uses a condition token to represent the positional information of genes. It also employs value binning to address differences between sequencing batches. scGPT uses stacked transformer layers and FlashAttention <cit.> to handle single-cell multi-omics data; FlashAttention effectively addresses the sequence-length limitation and reduces computational cost. In terms of interpretability, scGPT focuses on key genes through pre-training on a large amount of single-cell data and therefore offers more comprehensive interpretability. While scGPT demonstrates impressive performance, it still has shortcomings: it is competitive in low-data settings, but zero-shot use requires careful consideration of the experimental conditions, and the current pre-training methods may lack universal applicability. CellPLM <cit.> is the first transformer-based single-cell pre-trained model that considers relationships between cells.
The structure of CellPLM is depicted in Fig. 5. It establishes a gene expression embedder for processing the input data. The embedder initializes an embedding vector for each gene and filters out unmeasured and randomly masked genes. It aggregates the gene embeddings according to their expression levels in each cell and transforms them into a suitable transformer input. These expression embeddings are then fed into an encoder-decoder structure with a latent space between the encoder and decoder. The encoder comprises N transformer blocks. However, the computational complexity of transformers grows quadratically with sequence length, which results in significant computational cost <cit.>. CellPLM therefore replaces the transformer with a variant called Flowformer <cit.> to resolve the input constraints and computational complexity problems. To better capture cell-cell relationships and the spatial position of individual cells, CellPLM incorporates spatially resolved transcriptomics (SRT) data into the encoder for training. SRT data carry positional information, and the position embeddings are combined with the expression embeddings to obtain the final input embedding. In the latent space, a Gaussian mixture model is employed. The decoder uses several feedforward layers (FFLayers) to process the latent-space vectors and obtains the batch label of each cell from a learnable lookup table. §.§.§ Single-cell large language model based on gene expression ranking tGPT <cit.> is an autoregressive, unsupervised transformer-based model. It uses the ranking of gene expression to predict the index of the next gene. Gene expression ranking provides the relative position of genes and is well suited to large-scale gene screening and comparative analysis. However, this strategy may emphasize highly expressed genes and neglect the specific information contained in lowly expressed genes. The structure of tGPT is depicted in Fig. 6. tGPT predefines a length limit for the input sequence: any part exceeding this limit is truncated, while shorter sequences are padded with zeros. During training, it combines gene token embeddings with positional encoding embeddings, and the final embedding passes through 8 transformer modules to extract features from single-cell sequences. Cell2Sentence (C2S) <cit.> is a pretrained model fine-tuned from GPT-2 that handles text sequences containing gene names. Through fine-tuning, C2S can generate new cell sentences and convert them back into gene expression vectors while retaining most of the information. The order of gene names is determined by the expression ranking of each gene, and C2S uses these gene-name sequences as its input. By converting cell text sequences back into gene expression, C2S minimizes information loss and in most cases retains the key information of the original data. This method enables transformers to acquire information about single-cell data, but the sequence conversion often incurs higher computational cost. § DOWNSTREAM TASK ANALYSIS Transformer-based single-cell language models have been applied to various downstream tasks, including batch correction, cell clustering, cell type annotation, gene network inference and perturbation response prediction.
The datasets used for these downstream tasks are primarily obtained from databases such as TCGA <cit.> and GEO <cit.>; their details are shown in Table 1. §.§ Batch correction With the growing quantity of single-cell data, variability between batches has become an increasingly significant source of interference in data analysis, and improving the effectiveness of batch correction is an urgent challenge. Three key metrics are used to evaluate batch correction: the k-nearest neighbor batch effect test (kBET) <cit.>, the Average Silhouette Width for batch correction (ASWbatch) <cit.> and the Graph Connectivity measure (GraphConn) <cit.>. kBET assesses the effectiveness of correction by comparing the distribution of cells within and between batches. Its acceptance rate reflects the uniformity of the cell distribution after correction; a higher acceptance rate indicates preserved biological heterogeneity and reduced technical batch effects. ASWbatch originates from the silhouette width used in cluster analysis and measures the clustering quality after batch effects have been removed. GraphConn evaluates the connectivity between cells in the dataset after batch correction, quantifying the improvement in cell-to-cell connectivity as a proxy for the reduction of batch effects. tGPT <cit.> adopts the ranking of gene expression to avoid interference from the actual expression values of highly variable genes (HVGs) and from batch information during training. It is trained on the HCA dataset <cit.> and uses the kBET acceptance rate to reflect the magnitude of differences between batches. In addition, tGPT was applied to data from an immune checkpoint blockade (ICB) clinical trial; by quantifying the expression features captured by different attention heads, it was shown that these attention heads carry prognostic significance in this trial. scGPT <cit.> conducts batch-effect experiments by fine-tuning the pre-trained model. To quantify batch correction performance, scGPT calculates the average silhouette width (ASWbatch) and the graph connectivity measure (GraphConn) <cit.>, and reports AvgBATCH (the average of ASWbatch and GraphConn) to summarize batch performance. scGPT evaluates batch correction on three datasets, COVID-19 <cit.>, PBMC 10 <cit.> and Perirhinal Cortex <cit.>, against three methods: Seurat <cit.>, Harmony <cit.> and scVI <cit.>. scGPT achieves the best AvgBATCH value on all three datasets. However, scGPT does not achieve strong batch-effect correction in zero-shot settings <cit.>. §.§ Cell clustering The goal of cell clustering analysis is to group cells according to their gene expression patterns. Commonly used metrics for evaluating clustering results include the Adjusted Rand Index (ARI) <cit.>, the Average Silhouette Width (ASW) <cit.> and Normalized Mutual Information (NMI) <cit.>. ARI adjusts the Rand Index by comparing the observed pairwise concordance with the concordance expected at random, yielding a measure of clustering consistency. ASW measures, via the silhouette width of each sample, how similar a sample is to its own cluster compared with other clusters, and offers an intuitive evaluation of clustering results.
NMI uses normalized mutual information to remove the influence of the number of clusters and the total number of samples, which makes it useful for comparing clustering results under different parameter settings. scMVP <cit.> employs a joint deep learning model to learn features from both scATAC and scRNA data. It is trained on Paired-seq cell line data <cit.> and SNARE-seq cell line data <cit.>, and then uses UMAP visualization to perform cell clustering analysis. It successfully identifies different numbers of cell subpopulations and effectively separates the integrated scRNA-seq and scATAC-seq data, confirming its effectiveness for cell clustering analysis. tGPT <cit.> is applicable to large-scale tissue samples through pre-training and partitions samples into distinct clusters that correspond to different organs. It is trained on six datasets: HCA <cit.>, HCL <cit.>, TCGA <cit.>, Macaque Retina <cit.>, GTEx <cit.> and Tabula Muris <cit.>. The experimental results show that it performs well on cell clustering tasks. CellPLM <cit.> performs unsupervised clustering by extracting cell embedding vectors from the dataset without fine-tuning. It carries out zero-shot clustering experiments on a public dataset <cit.> and is compared with PCA, Geneformer and scGPT, achieving the highest ARI and NMI. DeepMAPS <cit.> validates cell clustering on ten single-cell multi-omics datasets. It is trained with 36 parameter combinations and compared with Seurat, MOFA+ <cit.>, TotalVI <cit.> and Harmony; in all experiments, DeepMAPS achieves the best ARI and ASW. Furthermore, DeepMAPS performs single-cell multi-omics integration analysis on the PBMC dataset <cit.> and a CITE-seq dataset of lung tumor leukocytes <cit.>, successfully identifying 13 cell types and validating its effectiveness. §.§ Cell type annotation Cell type annotation refers to assigning known cell type labels to each cell or cell cluster, which aids in understanding the biological significance of the cells <cit.>. Commonly used metrics for evaluating cell annotation include precision, recall, accuracy and the F1 score <cit.>. Precision is the proportion of samples predicted as a given category that truly belong to that category. Accuracy is the ratio of correctly classified samples to the total number of samples. Recall is the proportion of true samples of a category that the model correctly predicts as that category. The F1 score is the harmonic mean of precision and recall and is used to comprehensively assess model performance. TransCluster <cit.> is the first model to apply transformers to cell type annotation. It is trained on the Shao dataset <cit.> and the Baron dataset <cit.> and shows strong performance on cell type prediction tasks. PROTRAIT <cit.> is trained on the sci-ATAC human atlas <cit.> and generates cell embeddings that reflect the distribution of scATAC-seq data; it then uses k-nearest neighbors (KNN) for cell type annotation. scBERT <cit.> is pre-trained on 9 scRNA-seq datasets and then fine-tuned; finally, it uses the K-means algorithm to annotate cell types. scBERT performs cell annotation on the Baron dataset <cit.>, the Muraro dataset <cit.>, the Segerstolpe dataset <cit.> and the Xin dataset <cit.>.
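As an aside on how these evaluation scores are obtained in practice, the short sketch below wires up the clustering metrics (ARI, NMI, ASW) and the annotation metrics (accuracy, macro F1) discussed above with scikit-learn; the embeddings, cluster assignments and cell-type labels are random stand-ins rather than outputs of any of the surveyed models.

import numpy as np
from sklearn.metrics import (adjusted_rand_score, normalized_mutual_info_score,
                             silhouette_score, accuracy_score, f1_score)

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((100, 16))   # stand-in for model-derived cell embeddings
clusters = rng.integers(0, 4, size=100)       # predicted cluster assignments
cell_types = rng.integers(0, 4, size=100)     # reference cell-type labels

# clustering metrics
print("ARI:", adjusted_rand_score(cell_types, clusters))
print("NMI:", normalized_mutual_info_score(cell_types, clusters))
print("ASW:", silhouette_score(embeddings, clusters))

# annotation metrics, treating the cluster assignments as predicted labels
print("Accuracy:", accuracy_score(cell_types, clusters))
print("Macro F1:", f1_score(cell_types, clusters, average="macro"))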
Both scGPT <cit.> and CellPLM <cit.> are trained on the hPancreas <cit.> and multiple sclerosis (MS) <cit.> datasets for the cell annotation task. scGPT applies normalization, log transformation and binning to the gene expression values, and cell type annotation is then achieved through fine-tuning. In addition, scGPT is trained on the tumor-infiltrating myeloid dataset (Mye.) <cit.> and evaluated on query partitions from three previously unseen cancer types; the results indicate that scGPT distinguishes immune cell subtypes with high accuracy. CellPLM adds a feedforward layer during fine-tuning and uses the standard cross-entropy loss. Fine-tuned CellPLM shows a significant improvement in F1 score and precision on both datasets compared to CellPLM trained from scratch. §.§ Gene network inference Gene network inference reveals regulatory associations between genes by comparing gene expression patterns under different conditions, and transformer-based single-cell language models have introduced innovative perspectives to the study of gene regulatory networks <cit.>. Centrality metrics, including closeness centrality (CC) and eigenvector centrality (EC) <cit.>, are used in the experiments of single-cell language models. CC assesses the average distance of a gene node to the other gene nodes in the network. EC considers not only the number of connections of a gene node but also the importance of the nodes it is connected to. In addition, functional enrichment analysis <cit.> and pathway enrichment analysis <cit.> are employed. Functional enrichment analysis identifies biological functions or processes that are significantly enriched in a set of genes. Pathway enrichment analysis is similar but focuses on known biochemical pathways <cit.>; it aims to understand how genes act together within specific biological pathways. DeepMAPS <cit.> uses the Steiner Forest Problem (SFP) to identify genes contributing significantly to cell cluster features and constructs a gene correlation network. It defines sets of genes regulated by the same transcription factor (TF) as regulons and compares regulon activities between cell clusters. Regulons with significantly higher activity scores are selected as cell-type-specific regulons, and gene regulatory networks (GRNs) are constructed from the cell cluster regulons. After constructing the GRNs, DeepMAPS conducts functional enrichment analysis: it uses hypergeometric tests to compare the intersection of the GRN results with regulons in a reference database and evaluates whether the predicted regulons are enriched for the same functions or pathways as known regulons. DeepMAPS is trained on single-cell multi-omics datasets from the 10x database. The experiments show that its GRNs contain a larger number of unique transcription-factor and cell-type-specific regulons, and that these are enriched in specific functions or pathways. In addition, scGPT <cit.> demonstrates high interpretability in gene regulatory network experiments. Pre-training enables scGPT to emphasize genes with intricate relationships, which improves its interpretability. On the Human Leukocyte Antigen (HLA) dataset, scGPT forms an HLA gene network through zero-shot learning.
On the Immune Human dataset <cit.>, fine-tuned scGPT generates CD gene networks through zero-shot learning and visualizes the gene information. scGPT performs pathway enrichment analysis against the Reactome database <cit.>, successfully validating the extracted gene programs and identifying 22 additional pathways. These experiments demonstrate the ability of scGPT to capture complex gene relationships; through pre-training and fine-tuning, scGPT achieves stronger generalization. §.§ Perturbation responses Single-cell perturbation prediction experiments aim to predict and analyze the biological responses of cells to external stimuli or to changes introduced into single cells <cit.>. Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) are two important metrics for evaluating how well a model predicts the response of cells to specific perturbations <cit.>. MSE measures the accuracy of the model's predicted responses; a lower MSE indicates that the predictions are more consistent with the observed values. RMSE is the square root of MSE and provides an error measure in the same units as the original data, directly reflecting the magnitude of the prediction error. In perturbation response prediction experiments, scFoundation <cit.> is combined with GEARS <cit.> to construct personalized gene co-expression graphs for each cell, which significantly improves the accuracy of gene perturbation predictions. It is evaluated on three datasets, the Dixit dataset <cit.>, the Adamson dataset <cit.> and the Norman dataset <cit.>, and obtains lower MSE values. In addition, scGPT <cit.> initializes fine-tuning with the pre-trained parameters of the embedding and transformer layers; the fine-tuning process uses genes with both zero and non-zero expression. scGPT is compared with GEARS and CPA <cit.> on the Adamson and Norman datasets and accurately predicts the expression changes of the top 20 differentially expressed (DE) genes. During fine-tuning, CellPLM <cit.> initializes all components except the decoder with pre-trained weights. CellPLM is compared with GEARS and scGen <cit.> on the Adamson and Norman datasets; it conducts both single-gene and double-gene perturbation experiments on the Norman dataset and only single-gene perturbation on the Adamson dataset. In every experiment, CellPLM exhibits lower RMSE than GEARS and scGen. § CHALLENGES AND PROSPECTS In single-cell research, transformers contribute to a deeper understanding of vast and complex datasets and enhance the simulation and comprehension of cellular processes. In this section, we discuss the challenges encountered by transformer-based single-cell models, focusing on the handling of long sequence data, overfitting risks in pre-training, computational requirements and interpretability, and we analyze some potential future research directions. §.§ Sequence data processing Transformer-based single-cell language models have strong representational capabilities on single-cell sequence data. However, single-cell sequence data often contain excessively long sequences <cit.>, which leads to a rapid (quadratic) growth in the computational cost of these models.
In addition, single-cell data with long sequences may contain more complex gene relationships. Nevertheless, the self-attention mechanism of transformers tends to capture dependencies between adjacent positions in the sequence, which may cause the model to ignore some key gene information. scBERT <cit.> adopts the Performer, a transformer variant, to address this problem: its low-rank attention mechanism avoids over-focusing on dependencies between adjacent positions, and when dealing with sparse DNA sequences it may exhibit better robustness. Although the Performer achieves good results, its low-rank attention brings challenges in terms of numerical precision and sensitivity to model parameters, and its effectiveness is not always superior to that of the standard transformer across datasets and tasks. It is nonetheless clear that transformer variants have brought new insights to research on single-cell language models. §.§ Overfitting risks in pre-training Although transformer-based single-cell language models increasingly adopt pre-training, analyses of overfitting in these pre-trained models remain relatively limited. Single-cell data are characterized by a diversity of data types, and different types of single-cell data may differ significantly. This can lead to an imbalanced distribution of pre-training samples and potentially to overfitting on smaller datasets. To address this issue, data augmentation techniques can be introduced into pre-training. Generative Adversarial Networks (GANs) have shown promising results for single-cell data augmentation <cit.>: by generating synthetic samples similar to the original data, GANs can effectively increase the diversity of the dataset and mitigate overfitting caused by data imbalance. In addition, interpolating and extrapolating between original single-cell samples can be considered; generating new samples with linear interpolation, polynomial interpolation or deep learning models increases the quantity and diversity of the data and further enhances the generalization capability and robustness of the models. We believe that incorporating these methods into the pre-training of single-cell language models may help address overfitting. §.§ Computing Requirement Transformer-based single-cell language models for single-cell multi-omics research are still in their early stages. Future work may incorporate more omics data in the pre-training phase to study single-cell multimodal tasks. However, incorporating additional omics data leads to an even larger data scale and raises challenges related to computational cost. Recently, combining recurrent neural networks with transformers has reduced computational costs by speeding up transformer training <cit.>, and this approach could be considered for single-cell language models. In addition, the parallel computing capabilities of transformers still face challenges: in the self-attention mechanism, the attention weights for each position need to be calculated sequentially and cannot be directly parallelized.
When processing batched data, the sequence lengths of different single-cell samples may vary, which increases the complexity of parallel computing. In the future, improving the parallel computing capabilities of single-cell language models may become increasingly critical. §.§ Interpretability Transformer-based single-cell language models offer significant advantages in terms of interpretability. They can assign different weights to genes while processing sequence data, identifying the key features in the learned representations. In single-cell research, this capability is crucial for understanding complex biological processes such as gene expression, protein interactions and gene regulation <cit.>. In addition, single-cell data are highly complex and diverse, and each cell can exhibit a unique gene expression pattern <cit.>. Through the self-attention mechanism, transformers provide interpretability for predictions of key features, helping biologists understand how models assign weights to different genes or cells and offering insights into gene expression patterns. Although transformer-based single-cell language models have achieved good results, they still rely on a black-box training approach, which inevitably limits their application in clinical settings. Improving the interpretability of single-cell language models therefore remains a challenging research problem. §.§ Validation Analysis The single-cell language models and single-cell large language models discussed in this paper have demonstrated promising experimental results. Some of these models have been subjected to benchmark experiments <cit.>, which reveal that different models perform differently across tasks. These models have been shown to integrate representations from diverse single-cell omics data. In particular, pre-trained models such as scGPT show remarkable performance on gene function prediction tasks and achieve good results even without fine-tuning. However, the application of single-cell language models and single-cell large language models is still at an early stage, and their generalizability faces certain challenges. In addition, comparisons with some of the latest methods, such as Sccross <cit.> and ctpredictor <cit.>, will also help promote research progress. We therefore provide accessible links to the experimental code of the single-cell language models; please refer to Table 2 for details. We hope these resources can assist researchers interested in this field. § CONCLUSION Transformer-based single-cell language models have shown promising results in single-cell data analysis. In this review, we provide a detailed overview of single-cell language models and single-cell large language models, summarizing their methods as well as their applications in downstream tasks. While these models may not achieve optimal performance on every evaluation metric, they hold potential for contributions and applications in single-cell research. They open new possibilities for research and applications in the field and present significant avenues for further development. Potential areas for improvement include refining data preprocessing methods, reducing computational costs, enhancing model interpretability and optimizing the transfer learning process.
In-depth investigations into these directions will facilitate more effective utilization of various types of single-cell data. This review aims to provide an overview of single-cell language models, and we hope it promotes progress in the field of single-cell research. § ACKNOWLEDGEMENTS This work was partially supported by the National Natural Science Foundation of China (No. 62072124), the Natural Science Foundation of Guangxi (No. 2023JJG170006), the Natural Science and Technology Innovation Development Foundation of Guangxi University (No. 2022BZRC009), the CAAI-Huawei MindSpore Open Fund (No. CAAIXSJLJJ-2022-022A), the Project of the Guangxi Key Laboratory of Eye Health (No. GXYJK-202407), and the Project of the Guangxi Health Commission eye and related diseases artificial intelligence screening technology key laboratory (No. GXYAI-202402).
http://arxiv.org/abs/2407.12626v1
20240717145246
Domain-specific or Uncertainty-aware models: Does it really make a difference for biomedical text classification?
[ "Aman Sinha", "Timothee Mickus", "Marianne Clausel", "Mathieu Constant", "Xavier Coubez" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT The success of pretrained language models (PLMs) across a spate of use-cases has led to significant investment from the NLP community towards building domain-specific foundational models. On the other hand, in mission critical settings such as biomedical applications, other aspects also factor in—chief of which is a model's ability to produce reasonable estimates of its own uncertainty. In the present study, we discuss these two desiderata through the lens of how they shape the entropy of a model's output probability distribution. We find that domain specificity and uncertainty awareness can often be successfully combined, but the exact task at hand weighs in much more strongly. § INTRODUCTION Deep-learning models are trained with data-driven approaches to maximize prediction accuracy <cit.>. This entails several well-documented pitfalls, ranging from closed-domain limitations <cit.> to social systemic biases <cit.>. These limitations compound to a severe deterioration of model performances in out-of-domain (OOD) scenarios <cit.>. This has led to engineering efforts towards developing models tailored to specific domains, ranging from the legal <cit.> to the biomedical <cit.> ones. Domain-specific models, while useful, are rarely considered as a definitive answer. Crucially, in the biomedical domain, experts require more reliability from these models—in particular, insofar as accounting for uncertainty in prediction is concerned. For example, in the case of a risk scoring model used to rank patients for live transplant, uncertainty-awareness becomes critical. The lack of uncertainty-aware models may lead to improper allocation of medical resources <cit.>. Such concerns exemplify the importance of uncertainty-aware models and their critical role in model selection. The compatibility of domain-specific pretraining and uncertainty modeling appears under-assessed. To illustrate this, one can consider the entropy of output distributions: Domain-specific pretraining will lead to more probability mass assigned to a single (hopefully correct) estimate, leading to a lower entropy; whereas uncertainty-aware designs intend to not neglect valid alternatives—meaning that the probability mass should be spread out, which entails a higher entropy when uncertainty is due. In this work, we reflect on how model-specificity and uncertainty-awareness articulate with one another. Figure <ref> illustrates the experimental setup we use for our study. In practice, we study the performances of frequentist and Bayesian general and domain-specific models on biomedical text classification tasks across a wide array of metrics, ranging from macro F1 to SCE, with a specific focus on entropy <cit.>. More narrowly, we study the following research questions: RQ1: Are the benefits of uncertainty-awareness and domain-specificity orthogonal? RQ2: Given our benchmarking results, should medical practitioners prioritize domain-specificity or uncertainty-awareness? § RELATED WORK Recently, uncertainty quantification has gained attention from the NLP community <cit.>—particularly in mission critical settings, such as in the medical domain <cit.>.
In parallel, compared to domain adaptation approaches <cit.> for the medical domain, there is growing interest in domain-specific language models, ranging from BioBERT <cit.> to the recent Med-PaLM <cit.>. <cit.> presented an elaborate study of the uncertainty paradigm for general-domain PLMs. While uncertainty modeling has been applied to biomedical data previously <cit.>, surprisingly little has been done for biomedical textual data. Our study therefore focuses precisely on the interaction between the two paradigms for medical-domain NLP tasks. We address this gap by focusing specifically on predictive entropy <cit.>. § METHODOLOGY Datasets. We conduct experiments on six standard biomedical datasets: three English datasets, viz. MedABS <cit.>, MedNLI <cit.> and SMOKING <cit.>; as well as three French datasets, viz. MORFITT <cit.>, PxSLU <cit.> and MedMCQA <cit.>. For MEDABS, SMOKING, PxSLU, and MEDMCQA, we do not perform any special preprocessing. For MEDMCQA, we perform Task 2, i.e., predicting the number of possible responses (ranging from 1-5) for the input multiple-choice question. For MEDNLI, we concatenate the statement and hypothesis using the token and use the result as input, converting it to a multi-class task. For MORFITT, which is originally a multi-label classification task, we use the first label for each sample to convert it to a multi-class problem. The descriptive statistics of these datasets are listed in Table <ref>, along with the class imbalance ratio (CIR; ). See Appendix <ref> for more information. Models. We derive classifiers from language-specific PLMs: for English datasets, we use <cit.> and <cit.>; for French, we use <cit.> and <cit.>. We compare two types of models, frequentist deep learning models (DNNs) and Bayesian deep learning models (BNNs). The DNN model comprises a PLM-based encoder, a dropout unit and a 1-layer classifier. The BNN models are likewise based on a PLM encoder, with a Bayesian module applied over the classification layer. We also experimented with MC-dropout models <cit.>, DropConnect <cit.>, and variational inference <cit.> models. We focus[ We justify our focus on DropConnect empirically, as it yielded the highest validation F1 scores on average in our case. See <Ref> for details. All main text results for uncertainty-aware classifiers pertain to DropConnect. ] on the DropConnect architecture, which comprises a PLM encoder along with a DropConnect dense classification layer. This approach infuses stochasticity into a deterministic model by randomly zeroing out classifier weights with probability 1-p. This allows us to sample multiple outputs for a given input, enabling us to aggregate the predictions and produce estimates of uncertainty. For simplicity, we note domain-specific models as +𝒟 (and general models -𝒟); uncertainty-aware models are referred to as +𝒰 (with frequentist models noted -𝒰). We replicate training across 10 seeds per model and dataset; further implementation details can be found in <Ref>. Evaluation Setup. We evaluate classifiers on two aspects: task performance and uncertainty awareness. For text classification, we report Macro-F1 and accuracy. For uncertainty quantification, we report Brier score (BS; ), Expected Calibration Error (ECE; ), Static Calibration Error (SCE; ), negative log likelihood (NLL), coverage (Cov%) and entropy (H). See <Ref> for definitions. § RESULTS Performance. All results are listed in <Ref> in <Ref>; we highlight some key metrics in <Ref>.
Insofar as classification metrics go, +𝒟 configurations outperform -𝒟 ones. More generally, as all scores are highly dependent on the exact dataset considered, we first de-trend them by z-normalizing results on a per-dataset basis to simplify analysis. We find +𝒟 +𝒰 classifiers to be strong contenders, although they are often outperformed—especially by +𝒟 -𝒰 models on classification metrics (<Ref>) and by -𝒟 +𝒰 models on calibration metrics (<Ref>). As for entropy, we find both +𝒟 -𝒰 and +𝒟 +𝒰 to lead to lower scores. Trends are consistent across languages. Relative importance. To interpret results in <Ref> more rigorously, we rely on SHAP <cit.>. SHAP is an algorithm to compute heuristics for Shapley values <cit.>, viz. a game-theoretical, additive and fair distribution of a given variable to be explained across predetermined factors of interest. Here, we analyze the scores obtained by individual classifiers on all 8 metrics, and try to attribute their values (z-normalized per dataset) to domain specificity (±𝒟), uncertainty awareness (±𝒰) and the dataset one observation corresponds to (ds.). Results are displayed in <Ref>; specific points correspond to weights assigned to one of the factors for one of the datapoints, and factors are sorted from most to least impactful from top to bottom. We can see that which of domain specificity and uncertainty awareness has the strongest impact depends strictly on the metrics: Cases where ±𝒟 is assigned on average a greater absolute weight than ±𝒰 account for exactly half of the metrics we study. Another important trend is that effects tied to +𝒟 are also often attested for +𝒰: if domain specificity is useful, then uncertainty awareness is as well.[ There are two notable exceptions: ECE and coverage, where we find +𝒟 to be detrimental. Variation across seeds might explain the discrepancy with <Ref>. ] Lastly, weights assigned to both ±𝒟 and ±𝒰 are considerably smaller than those assigned to datasets, showcasing that these trends are often overpowered by the specifics of the task at hand. Entropy. A desideratum we laid out above is to have large entropy scores when the model is incorrect. Focusing on entropy, we display how it compares to the probability mass assigned to the target in <Ref>. In detail, we retrieve all predictions for every datapoint across all classifiers and then z-normalize entropy scores and the probability assigned to the target class.[ When plotting entropy against probability mass assigned to the target class, we can keep in mind some useful points of reference. A perfect classifier that is always confidently correct should display a high probability mass and a low entropy (i.e., top left of our plot); what we hope to avoid is a confidently incorrect classifier (bottom left). As entropy and probability are statistically related, it is impossible to observe a high probability mass and a high entropy (top right). Lastly, assuming the classifier outputs continuous scores, this statistical dependency also dictates that probability mass and entropy be inversely correlated for correct predictions. ] We can see that incorrect predictions do result in more spread-out entropy scores. Moreover, we can notice some tentative differences between the four types of classifiers of our study: Correct predictions from +𝒟 +𝒰 models seem to lead to an especially tight correlation between entropy and probability mass. However, establishing whether this difference is significant requires further testing.
We therefore measure whether incorrect predictions lead to higher entropy in two ways: (i) using Mann–Whitney U-tests, from which we derive a common language effect size f (as the entropy of incorrect predictions should be higher);[All U-tests suggest entropy for incorrect predictions is significantly higher (p < 10^-10).] and (ii) by computing Spearman correlation coefficients between the entropy and the mass assigned to the target class (as entropy should degrade with correctness). Corresponding results are listed in <Ref>: Across most of the datasets we study, the top or second most coherent distributions we observe are for domain-specific and uncertainty-aware models. However, we also observe that actual performances are highly sensitive to the exact classification task at hand. § DISCUSSION & CONCLUSION We can now answer our initial research questions. RQ1: Are the benefits of uncertainty-awareness and domain-specificity orthogonal? We have seen in <Ref> that in most cases, using a classifier that was both domain-specific and uncertainty-aware led to the optimal distribution shape, with entropy more gracefully increasing with incorrectness. RQ2: Should medical practitioners prioritize domain-specificity or uncertainty-awareness? SHAP attributions in <Ref> strongly suggest that the evaluation metric dictates the strategy to follow. As one would expect, accuracy is better captured with domain-specific models, whereas uncertainty-aware models tend to be better calibrated. We also found significant evidence throughout our experiments that the exact classification task at hand weighs in much more strongly than the design of the classifier. This extraneous factor necessarily complicates the relationship between domain-specificity and uncertainty-awareness: In a handful of cases in <Ref>, we observe classifiers that are neither uncertainty-aware nor domain-specific faring best among all the models we survey—and conversely domain-specific uncertainty-aware classifiers can also rank dead last. This is also related to the often limited quantitative difference between best and worst models, which for instance can be as low as ±2.3% for F1 on MEDABS (cf. <Ref>). Overall, our experiments suggest a very nuanced conclusion. Domain-specificity and uncertainty-awareness do appear to shape classifiers' distributions and their entropy in distinct but compatible ways, but they have a lesser impact than the task itself. Hence, while we can often combine uncertainty-awareness and domain-specificity, there are no out-of-the-box solutions, and optimal performance requires careful application design. § ACKNOWLEDGMENTS We thank Sami Virpioja for his comments on an early version of this work. This work is supported by the ICT 2023 project “Uncertainty-aware neural language models” funded by the Academy of Finland (grant agreement № 345999). § EXPERIMENTAL DETAILS §.§ Supplementary Bayesian models We include the details of two more Bayesian models: MC-dropout and variational inference. Note that for all the Bayesian models we sample K=3 predictions at inference and use the mean prediction. MCDropout (MCD) This model is based on a PLM encoder, similar to the main study models. The difference in this case is that stochastic Dropout is applied over the classification layer. MCD <cit.> extends the use of Dropout to inference time, enabling one to sample K models and make K predictions. The final prediction in the case of a classification model can be denoted as ŷ = K^-1∑_k=1^K f_k(x) .
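As a purely illustrative rendering of this prediction rule (not the paper's actual implementation, which relies on a PLM encoder and the keras-uncertainty library), the following PyTorch sketch keeps dropout active at inference time, draws K=3 stochastic forward passes, and aggregates them into a mean prediction together with a predictive entropy; the small feed-forward network and toy inputs are stand-ins.

import torch
import torch.nn as nn

torch.manual_seed(0)
num_classes = 3                                    # illustrative value
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                      nn.Dropout(p=0.3), nn.Linear(32, num_classes))

x = torch.randn(5, 8)                              # toy batch standing in for encoded sentences
model.train()                                      # keep the Dropout layer stochastic at inference
with torch.no_grad():
    draws = torch.stack([model(x).softmax(dim=-1) for _ in range(3)])   # K = 3 samples

mean_probs = draws.mean(dim=0)                     # y_hat = K^{-1} sum_k f_k(x)
entropy = -(mean_probs * mean_probs.log()).sum(dim=-1)
print(mean_probs.argmax(dim=-1), entropy)

The DropConnect variant used in the main text differs only in where the stochastic masking is applied: on the classifier weights rather than on the activations.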
Variational inference (VI) This model is based on a PLM encoder, similar to the main study models, with a variational inference dense layer as the classification layer. We use Bayes by Backprop <cit.> for the VI dense layer. It approximates the distribution of each weight with a Gaussian distribution with parameters 𝒩(μ, ρ). The weights are approximated with Monte Carlo gradient estimates. Finally, the predictions are computed using the predictive posterior distribution, by sampling K weight instances and making one forward pass per set of weights, the same as for MCD. §.§ Implementation details We use the keras-uncertainty library (https://github.com/mvaldenegro/keras-uncertainty) to implement our BNN model backbones. Hyperparameter Setting In all cases, we fine-tune the PLM backbone for all downstream tasks with a maximum sequence length of 512 and a batch size of 50 sentences. We perform a grid hyper-parameter search over {3,4,5, ..., 15} and {1e-7, 5e-6, 1e-6, 5e-5, 1e-5, 5e-4, 1e-4}. We replicate training with 3 seeds for each hyperparameter configuration, select the optimal configuration for validation F1, and replicate training with 7 more seeds for these optimal configurations, so as to obtain 10 models per dataset, PLM and architecture. We also select the main BNN model of the study as the system yielding the highest average rank across all six datasets, as displayed in <Ref>. We train all models with a binary cross-entropy loss and an optimizer with ϵ=10^-8 and β=(0.9, 0.999). For all BNN models, we obtain 3 sets of predictions after training the models to calculate the mean class probabilities. Corresponding optimal hyperparameters are listed in <ref>. §.§ Calibration metrics definition In what follows, N denotes the number of samples in the test set and C denotes the number of classes. Lower values of the Brier score, ECE, SCE, NLL and entropy, and higher values of coverage, are indicative of a better uncertainty-aware model. Brier score. <cit.> proposed BS, which computes the mean squared difference between the true classes and the predicted probabilities. BS = 1/N∑^N_i=1∑^C_c=1 (y^(i)_c - ŷ^(i)_c)^2 Expected Calibration Error. <cit.> provides a weighted average of the difference between accuracy and confidence across B bins. ECE = ∑^B_b=1n_b/N |acc(b)-conf(b)| where acc(b) and conf(b) are the average accuracy and confidence of predictions in bin b, respectively. We set B = 15 in our experiments. Static Calibration Error. <cit.> proposed an extension of ECE to multi-class problems to overcome its dependence on the number of bins. SCE = ∑^C_c=1∑^B_b=1n_b,c/NC |acc(b,c)-conf(b,c)| where acc(b,c), conf(b,c) and n_b,c are the per-class analogues of the quantities above. Negative Log Likelihood. NLL serves as the primary approach for optimizing neural networks in classification tasks. Interestingly, this loss function can also double as an effective metric for assessing uncertainty. NLL = - ∑_i=1^N∑^C_c=1 y^(i)_c log(ŷ^(i)_c) Coverage Percentage. The fraction of samples for which the correct class is indeed contained within the prediction set. Shannon Entropy. Entropy quantifies the expected uncertainty inherent in the possible outcomes of a discrete random variable. H = - ∑_c=1^C p_c log(p_c) §.§ Dataset We provide supplementary details about each dataset we used in Table <ref>.
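As an unofficial reference implementation of two of these definitions, the short numpy sketch below computes the Brier score and ECE from an array of predicted class probabilities; it is a generic rendering of the formulas above rather than the exact evaluation code used in this study, and binning conventions (e.g., the treatment of the lowest bin edge) vary slightly between implementations.

import numpy as np

def brier_score(y_onehot, probs):
    # BS = (1/N) sum_i sum_c (y_c^(i) - yhat_c^(i))^2
    return np.mean(np.sum((y_onehot - probs) ** 2, axis=1))

def expected_calibration_error(y_true, probs, n_bins=15):
    conf = probs.max(axis=1)                          # confidence of the predicted class
    correct = (probs.argmax(axis=1) == y_true).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

probs = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])  # toy predictions
y_true = np.array([0, 2])
y_onehot = np.eye(3)[y_true]
print(brier_score(y_onehot, probs), expected_calibration_error(y_true, probs))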
§ FULL RESULTS We present the detailed results for all configurations in Table <ref>. As noted in the main text, the most obvious trend across the board is that scores are tightly coupled with datasets: The range of scores achieved by all classifiers we study tends to be fairly limited across a given dataset, whereas we can observe often spectacular differences from one dataset to the next. Insofar as classification metrics go, we observe that +𝒟 models almost always occupy the top ranks. This is especially salient in MedABS and MedNLI, where all +𝒟 classifiers outperform all -𝒟 classifiers both in terms of F1 and accuracy. In PxSLU, the only model that deviates from this trend is the +𝒟-𝒰 model, which appears to suffer from an especially low accuracy. In the two other French datasets, along with SMOKING, classification metrics do not exhibit as clear a division between domain-specific and general PLMs. As for calibration metrics, we find a very similar behavior to what we highlight in the main text: uncertainty-unaware models almost never rank among the top two contenders. Rankings per metric tend to be fairly stable as long as we control for domain-specificity. Lastly, having a look at the various Bayesian architectures, we can see that DropConnect is not necessarily the optimal system across all uncertainty-aware classifiers. Selecting the best architectures given 3 seeds, and then expanding to 10 seeds, most likely led to some degree of sampling bias, explaining this discrepancy. It does however constitute a strong contender across many situations: it still remains the best-ranking Bayesian architecture on average, both in terms of F1 across the validation set and in terms of test BS, ECE, SCE, NLL and entropy.
In fact, differences in terms of ranks across datasets per architecture are not always significant: If we normalize all 80 classifiers per dataset by taking their rank, then Kruskal-Wallis H-tests suggest that F1, accuracy and ECE do not lead to significant rank differences across architectures (assuming a threshold of p < 0.05). Likewise, comparing +𝒟 and -𝒟 models with the same procedure does not lead to significant differences in terms of ECE, SCE, and coverage.
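For readers who want to reproduce this style of analysis, the sketch below illustrates the procedure on synthetic numbers (the real score table is not reproduced here): scores are rank-normalized within each dataset and a Kruskal-Wallis H-test then compares the rank distributions across hypothetical architecture groups.

import numpy as np
from scipy.stats import rankdata, kruskal

rng = np.random.default_rng(0)
scores = rng.random((80, 6))                       # 80 classifiers x 6 datasets (synthetic stand-in)
ranks = np.apply_along_axis(rankdata, 0, scores)   # rank-normalize within each dataset
architecture = np.repeat(np.arange(4), 20)         # pretend four architecture groups of 20 runs each

groups = [ranks[architecture == a].ravel() for a in range(4)]
H, p = kruskal(*groups)
print(H, p)                                        # p > 0.05 would indicate no significant rank difference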
http://arxiv.org/abs/2407.13080v1
20240718010028
Fermion determinants on a quantum computer
[ "George T. Fleming", "Prasanth Shyamsundar", "Judah Unmuth-Yockey" ]
hep-lat
[ "hep-lat", "quant-ph" ]
FERMILAB-PUB-24-0264-T gfleming@fnal.gov prasanth@fnal.gov jfunmuthyockey@gmail.com Fermi National Accelerator Laboratory, Batavia, Illinois, 60510, USA § ABSTRACT We present a quantum algorithm to compute the logarithm of the determinant of the fermion matrix, assuming access to a classical lattice gauge field configuration. The algorithm uses the quantum eigenvalue transform and quantum mean estimation, giving a query complexity that scales like O(Vlog(V)) in the matrix dimension V. Fermion determinants on a quantum computer Judah Unmuth-Yockey 0000-0001-9962-7134 July 22, 2024 =========================================== § INTRODUCTION Lattice quantum chromodynamics (QCD) is a cornerstone of modern high-energy physics theory. Under a Wick rotation, the Minkowski signature of spacetime is transformed into Euclidean space, allowing the path integral of QCD to be interpreted as a classical partition function in four Euclidean spatial dimensions (see Refs. <cit.> for review). Discretizing this space into a lattice then allows the Monte Carlo method to be used to sample that partition function. However, with the inclusion of fermionic fields into the lattice action in the form of Grassmann variables, this sampling cannot be done directly; instead the fermions must be integrated out beforehand, which results in an effective action for the gauge fields that depends on the logarithm of the determinant of the fermion matrix, M. Therefore, to sample the QCD partition function, the determinant of the fermion matrix must be calculated during each Monte Carlo step. This calculation is expensive, scaling like O(V^3), where V is the dimension of the fermion matrix, and hence finding improvements to this scaling would assist lattice QCD calculations. The standard method to avoid this expensive calculation is to introduce new bosonic fields called pseudofermions <cit.>, which replaces the problem of computing the log of the determinant with the problem of solving a sparse linear system, which converges faster than O(V^3). The pseudofermion method places limitations on the types of physical systems that can be solved. In particular, finite density calculations suffer from a severe sign problem, which makes it difficult to perform calculations relevant for heavy-ion collision experiments and for the equation of state of dense nuclear matter in the heart of neutron stars. Having access to an efficient algorithm for computing the log of the determinant would enable calculations at finite density free of sign problems <cit.>. Aside from the problem of efficient sampling of the lattice QCD partition function, the computation of physical observables involving fermions on the sampled configurations often involves computing the trace of the inverse of the fermion matrix M^-1, which is also called the fermion propagator. There are many areas of particle physics research that can benefit from more accurate and more frequent computation of these traces, including neutral-current neutrino-nucleus scattering in the DUNE and SBN experimental programs <cit.>, quark-disconnected contributions to the anomalous magnetic moment of the muon related to the Fermilab g-2 experiment <cit.>, couplings of Higgs bosons to nuclei relevant for direct detection of dark matter scattering off normal matter through Higgs exchange <cit.>, and the computation of properties of composite Higgs bosons in models which may be relevant to the Large Hadron Collider (LHC) <cit.>, just to name a few.
Quantum computing offers many potential speed-ups to classical algorithms. One of the most promising is through a generic algorithm called the quantum eigenvalue (or singular value) transform (QET) <cit.>. The strategy of this algorithm is to construct polynomial transforms of matrices using alternating X- and Z-like rotations on a two-dimensional qubit subspace. The depth of the quantum circuits for these transforms scales as the degree of the polynomial being implemented, providing exponential improvements over classical algorithms in many cases, and often providing optimal constructions <cit.>. Another area where quantum computers offer an advantage is in the calculation of normalized sums (or means). Quantum mean estimation (QME) is known to provide a quadratic speed-up over its classical counterpart <cit.>. Using the aforementioned technology, we propose to accelerate the calculation of fermion determinants. The starting point is the following well-known identity for a positive-definite matrix W: det[W] = exp(tr[log W]). Our approach is to a) use the QET to compute an approximation to log W and subsequently b) use the QME algorithm to compute the trace, in order to estimate det[W]. Using this strategy to evaluate the determinant of the fermion matrix M involves the following “classical” steps as groundwork: 1) Devise a positive-definite matrix W whose determinant has a one-to-one correspondence to the determinant of M (which is not necessarily positive-definite). 2) Construct an efficient quantum circuit to block-encode <cit.> the matrix W; this is necessary to perform the QET on W. 3) Customize the implementation of QET so that the corresponding transformation approximates the log function. If the computational cost of the resulting quantum algorithm is found to be less than O(V^3), there is a potential speed-up in calculating fermion determinants using a quantum computer. Indeed, we show that in terms of the polynomial degree d, the probability of success 1-δ, the error on the trace estimation ϵ, and the dimension of the matrix V, the algorithm's query complexity scales like O(d log(1/δ) V log(V) / ϵ). The article is organized as follows: In Sec. <ref> we devise block-encodings for staggered fermions coupled to an SU(3) gauge field, starting from the simple case of a free scalar field Laplacian. In Sec. <ref> we show how to compute the trace of a unitary matrix, as well as the specialization to computing the trace of a block-encoded matrix. Then in Sec. <ref> we combine the previous ingredients in the context of the QET to demonstrate how to compute log(M). Finally, in Sec. <ref> we give some concluding remarks as well as further optimizations and future work to be done. § BLOCK-ENCODING In order to use the quantum computer to compute the determinant of the fermion matrix, we must first load the fermion matrix onto the quantum computer. The fermion matrix itself is not unitary, so to do this we encode it inside a larger unitary matrix through the method of block-encoding. To block-encode a matrix, one embeds the nonunitary matrix in the top-left block of a larger matrix, and from that block determines what the remaining entries must be such that the larger matrix is unitary, as is required in quantum computing. For example, a simple block-encoding of a Hermitian matrix A into a larger matrix B is given by, B = [ A √(1 - A^2); -√(1 - A^2) A ]. For a generic block-encoding which uses ℓ+1 qubits in which to embed the block, this can be expressed as A = (⟨0^ℓ+1|⊗1) B (|0^ℓ+1⟩⊗1).
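As a quick numerical illustration of this construction (our own addition, not part of the original presentation), the numpy sketch below builds the single-ancilla block-encoding B = [ A √(1 - A^2); -√(1 - A^2) A ] for a random Hermitian A rescaled so that its norm is at most one, then checks that B is unitary and that projecting onto the ancilla |0⟩ block recovers A.

import numpy as np

rng = np.random.default_rng(1)
n = 4
H = rng.standard_normal((n, n))
A = (H + H.T) / 2                           # a random real symmetric (Hermitian) matrix
A = A / (2 * np.linalg.norm(A, 2))          # rescale so that ||A|| <= 1/2 < 1

# sqrt(1 - A^2) from the eigendecomposition of A; it commutes with A by construction
w, v = np.linalg.eigh(A)
S = v @ np.diag(np.sqrt(1.0 - w**2)) @ v.T

B = np.block([[A, S], [-S, A]])
print(np.allclose(B.T @ B, np.eye(2 * n)))  # B is unitary (orthogonal, since A is real here)
print(np.allclose(B[:n, :n], A))            # (<0| (x) 1) B (|0> (x) 1) recovers A

For the sparse fermion matrices of interest, the paper instead constructs the block-encoding from sparse-access oracles, as described next, but the projection relation checked here is the same.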
Rather than immediately providing the block-encoding of the fermion matrix, it is advantageous to build up the block-encoding process for simpler cases which at each step provide necessary technology needed in order to block encode the full fermion matrix. We begin with a free scalar field Laplacian, then free staggered fermions, staggered fermions with abelian gauge fields, and then finally SU(3) gauge fields with staggered fermions. Since these matrices are sparse, we use the methods of Ref. <cit.> to block-encode them. §.§ Scalar Laplacian We consider a normalized lattice Laplacian, Δ = 1/s(-∇^2 + m_0^2) = 1/s[ q+m_0^2 ⋯ -1 ⋯; 0 ⋱ 0 ⋯; ⋮ 0 ⋱ ⋮; ⋯ -1 ⋯ q+m_0^2 ], where q = 2D is the coordination number of the hypercubic lattice in D dimensions, and m_0 is the bare mass. Each row/column, which corresponds to a lattice site, has two unique values: {-1, q+m_0^2}, and 2D+1 nonzero entries total. Let us fix D=4. We can write the list of nonzero elements in a given column corresponding to the lattice site n⃗ as N = {-1, -1, -1, -1, -1, -1, -1, -1, 8+m_0^2} / s where the first eight entries lie in rows which correspond to {n⃗±x̂, n⃗±ŷ, n⃗±ẑ, n⃗±t̂}. Following Ref. <cit.>, for each column j, we need to map each non-zero entry to one or more values of an index ℓ. We will use ℓ=0-7 to encode the the terms corresponding to the eight nearest neighbor elements and 8≤ℓ<16 to encode the diagonal element. Let c(j, ℓ) represent the row-index of the element on the j-th column associated with index ℓ, c(n⃗,ℓ) = ℓ = 0 : n⃗ + x̂ ℓ = 1 : n⃗ + ŷ ℓ = 2 : n⃗ + ẑ ℓ = 3 : n⃗ + t̂ ℓ = 4 : n⃗ - x̂ ℓ = 5 : n⃗ - ŷ ℓ = 6 : n⃗ - ẑ ℓ = 7 : n⃗ - t̂ 8 ≤ℓ < 16 : n⃗ . An operator O^NN_c which implements this transformation is O^NN_c|0000⟩|n⃗⟩ = |0000⟩|(x+1) L⟩|y⟩|z⟩|t⟩ O^NN_c|0001⟩|n⃗⟩ = |0001⟩|x⟩|(y+1) L⟩|z⟩|t⟩ ⋮ O^NN_c|0111⟩|n⃗⟩ = |0111⟩|x⟩|y⟩|z⟩|(t-1) L⟩ O^NN_c|1abc⟩|n⃗⟩ = |1abc⟩|x⟩|y⟩|z⟩|t⟩, ∀ a,b,c∈{0,1} The operator O^NN_c only depends on ℓ, and so we can perform controlled integer arithmetic using the ℓ register to implement O^NN_c. The modular addition and subtraction operators needed above are given by 𝔞𝔡𝔡 = [ 0 0 … 0 1; 1 0 ⋱ ⋱ 0; 0 1 ⋱ ⋱ 0; ⋮ ⋱ ⋱ ⋱ ⋮; 0 0 … 1 0 ] and 𝔰𝔲𝔟 = 𝔞𝔡𝔡^T. Figure <ref> shows the circuit for the O^NN_c operator which connects nearest neighbors. The other ingredient is an operator O^s_A which gives O^s_A|0⟩|ℓ⟩|n⃗⟩ = ( A_c(n⃗,ℓ) n⃗|0⟩. + . √(1 - |A_c(n⃗,ℓ) n⃗|^2)|1⟩) |ℓ⟩|n⃗⟩, where A_c(n⃗,ℓ) n⃗ is the matrix element at c(n⃗,ℓ), n⃗. The value that O_A assigns is determined by ℓ only. So we only need controlled rotations—controlled on the ℓ register acting on the |0⟩ register—to implement O^s_A. Figure <ref> gives the circuit for this operator. In this case when the matrix elements are themselves real, we can use a y-rotation gate. We also use D_s, the “diffusion operator”, which creates an equal superposition on the qubits in the |ℓ⟩-register using Hadamards. The complete circuit for the block encoding of the free scalar Laplacian can be seen in Fig. <ref>. The following block-encodings have a similar complete structure. To compute the angles for the rotations in O_A^s we look at expectation values. For the diagonal, ⟨n⃗|⟨0000|⟨0| D_s O_c^s O_A^s D_s|0⟩|0000⟩|n⃗⟩ = 1/16 (8cos(θ_0/2)) ≡1/s(8 + m_0^2) θ_0 = 2 arccos(2/s(8 + m_0^2) ) For any one of the off-diagonal elements we consider ⟨n⃗+μ̂|⟨0000|⟨0| D_s O_c^s O_A^s D_s|0⟩|0000⟩|n⃗⟩ = 1/16cos(θ_1/2) ≡ -1/s θ_1 = 2 arccos(-16/s). We see from Eqs. (<ref>) and (<ref>) how to choose the normalization so that the argument of arccosine is valid. 
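The column structure of the normalized Laplacian and the rotation angles above can be checked directly in a few lines of numpy. The sketch below builds Δ/s on a small 4^4 periodic lattice (the lattice size and bare mass are illustrative test values, and the O_c and O_A oracles themselves are circuit-level objects not reproduced here), choosing s consistently with the normalization condition spelled out next:

```python
import numpy as np
from itertools import product

L, D, m0sq = 4, 4, 0.25
s = max(2 * (8 + m0sq), 16.0)                     # normalization discussed below

sites = list(product(range(L), repeat=D))
idx = {x: i for i, x in enumerate(sites)}
V = len(sites)

Delta = np.zeros((V, V))
for x in sites:
    Delta[idx[x], idx[x]] = (2 * D + m0sq) / s    # diagonal value (q + m0^2)/s
    for mu in range(D):
        for sgn in (+1, -1):
            y = tuple((x[nu] + sgn * (nu == mu)) % L for nu in range(D))
            Delta[idx[y], idx[x]] += -1.0 / s     # nearest-neighbour value -1/s

print(np.count_nonzero(Delta[:, 0]))              # 2D + 1 = 9 nonzero entries per column
theta0 = 2 * np.arccos(2 * (8 + m0sq) / s)        # angle for the diagonal element
theta1 = 2 * np.arccos(-16.0 / s)                 # angle for the off-diagonal elements
print(theta0, theta1)                             # real angles, since |s| was chosen large enough
```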
In this case s must be chosen such that |s| ≥ 2 (8+m_0^2), and |s| ≥ 16. Similar arguments can be used for the following cases. §.§ Free staggered fermions We now move on to the block-encoding of free staggered fermions. The action is given by, S = ∑_nχ̅_n( K ∑_μ=1^4η_μ(n) Δ_μ + m_0) χ_n with Δ_μχ_n = χ_n+μ̂ - χ_n-μ̂/2 as the symmetric finite difference, η_μ(n) = (-1)^∑_ν < μ n_ν with η_1(n) = 1 is the staggered phase, K and m_0 are couplings, and χ̅ and χ are Grassmann fields. The K coupling is typically set to one, but we leave it general here. Write the free staggered fermion matrix as M_m n = K/2∑_μη_μ(m) (δ_n, m+μ̂ - δ_n, m-μ̂) + m_0δ_n, m so that S = ∑_m,nχ̅_m M_m nχ_n. At this point it is useful to reflect on some of the important properties of M, and how we wish to compute the determinant of M. The goal is to use the relation [log] = log to compute the trace of the log of a matrix and avoid ever computing the actual determinant. However, the logarithm is only defined for positive arguments, and M generally can have complex eigenvalues making the direct logarithm of M undefined. To resolve this issue, we use the nice property of M that its eigenvalues always appear in complex conjugate pairs (see appendix <ref>) . This results in the determinant of M as being purely real and positive. This in turn gives rise to the following relation between M and W: W_a c≡∑_b M^†_a b M_b c = ∑_b M^*_b a M_b c, giving 2log(M) = log(W). W is positive definite, meaning that the logarithm of that matrix is well-defined. We then block-encode W, instead of M, knowing that when we compute the trace of the logarithm the outcome only differs from the desired answer by a factor of two. Using Eq. (<ref>), we can work out W, giving, W_a c = K^2/4∑_μ, ν[ η_μ(a+μ̂) η_ν(c+ν̂) δ_a+μ̂-ν̂, c. - η_μ(a+μ̂) η_ν(c-ν̂) δ_a+μ̂+ν̂, c - η_μ(a-μ̂) η_ν(c+ν̂) δ_a-μ̂-ν̂, c . + η_μ(a-μ̂) η_ν(c-ν̂) δ_a-μ̂+ν̂, c] + m_0K/2∑_μη_μ(c) (δ_c, a+μ̂ - δ_c, a-μ̂) + m_0K/2∑_νη_ν(a) (δ_a, c+ν̂ - δ_a, c-ν̂) + m_0^2δ_a, c. The staggered phase has the property that shifting η_μ(x) by any ν≥μ doesn't change it. Therefore, -η_μ(a+μ̂) + η_μ(a) = 0 and the terms connecting nearest neighbors cancel. Thus, W_a c = K^2/4∑_μ, νη_μ(a) η_ν(c) [ δ_a+μ̂-ν̂, c. - δ_a+μ̂+ν̂, c - δ_a-μ̂-ν̂, c . + δ_a-μ̂+ν̂, c] + m_0^2δ_a, c. We see this matrix only connects those lattice sites that are two “hops” away, or next-to-nearest neighbors (NtNNs). We can enumerate those neighbors to know how many nonzero elements there are in each column of W. Choose an origin lattice site. There are four-choose-two combinations of μ and ν defining six planes through the origin, each containing four sites that are NtNNs. This gives 6 × 4 = 24 NtNNs. There are then those points in each of the eight directions away from the origin that are two hops away, adding eight more sites, giving 24 + 8 = 32 NtNNs total (there is also a set of two hops which return to the origin. There are eight of these). We can re-express Eq. (<ref>) specifically identifying these NtNN sites, W_a c = K^2/4∑_μ < ν (η_μ(a) η_ν(c) + η_ν(a) η_μ(c)) [ δ_a+μ̂-ν̂, c + δ_a-μ̂+ν̂, c - δ_a+μ̂+ν̂, c - δ_a-μ̂-ν̂, c ] + (m_0^2 + 2K^2) δ_a, c - K^2/4∑_μ (δ_a+2μ̂,c + δ_a-2μ̂, c). In addition, another property of the staggered phases is that when μ < ν, then η_ν(x ±μ) + η_ν(x) = 0. This property completely eliminates the first term in Eq. (<ref>), leaving only the last two terms: W_a c = (m_0^2 + 2K^2) δ_a, c - K^2/4∑_μ (δ_a+2μ̂,c + δ_a-2μ̂, c). 
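The cancellations that reduce W to next-to-nearest-neighbor couplings, together with the relation 2 log det M = log det W used above, can be verified numerically. A minimal numpy sketch on a small 4^4 lattice with periodic boundaries (K and m_0 are arbitrary test values; this is a plain matrix check, not the block-encoding itself):

```python
import numpy as np
from itertools import product

L, D, K, m0 = 4, 4, 1.0, 0.5
sites = list(product(range(L), repeat=D))
idx = {x: i for i, x in enumerate(sites)}
V = len(sites)

def eta(n, mu):                                    # staggered phase (-1)^(n_1 + ... + n_{mu-1})
    return (-1) ** (sum(n[:mu]) % 2)

M = m0 * np.eye(V)
for m in sites:
    for mu in range(D):
        fwd = tuple((m[nu] + (nu == mu)) % L for nu in range(D))
        bwd = tuple((m[nu] - (nu == mu)) % L for nu in range(D))
        M[idx[m], idx[fwd]] += 0.5 * K * eta(m, mu)
        M[idx[m], idx[bwd]] -= 0.5 * K * eta(m, mu)

W = M.T @ M

# Closed form: diagonal (m0^2 + 2K^2), and -K^2/4 for two hops in a single direction.
W_closed = (m0**2 + 2 * K**2) * np.eye(V)
for a in sites:
    for mu in range(D):
        for sgn in (+2, -2):
            c = tuple((a[nu] + sgn * (nu == mu)) % L for nu in range(D))
            W_closed[idx[c], idx[a]] += -K**2 / 4
print(np.allclose(W, W_closed))                    # True: only NtNN couplings survive

_, logdetM = np.linalg.slogdet(M)
_, logdetW = np.linalg.slogdet(W)
print(np.isclose(2 * logdetM, logdetW))            # True: 2 log det M = log det W
```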
The O^f_c operator in this case is almost identical to the free scalar case with O_c^NN with the replacement 𝔞𝔡𝔡→𝔞𝔡𝔡^2, and 𝔰𝔲𝔟→𝔰𝔲𝔟^2. This programs two hops in each of the eight directions instead of one. For O_A^f we can also use a similar structure to O_A^s. Since there are only nine nonzero entries in each row/column, and they are real numbers, we can encode them using y-rotations like Fig. <ref>. If we block-encode W / s for some normalization s, we find the angles are given by θ_0 = 2 arccos( 2/s (m_0^2 + 2K^2) ) for the diagonal, and θ_1 = 2 arccos(-4 K^2/s) for the off-diagonal terms. §.§ U(1) and staggered fermions We now gauge the fermions with an abelian gauge field. The action is, S = ∑_n{ m ψ̅(n) ψ(n) . + . K/2∑_μ=1^4η_μ(n) [ ψ̅(n) U_μ(n) ψ(n+μ̂) . . - . . ψ̅(n+μ̂) U^*_μ(n) ψ(n) ] }. The fermion matrix is given by, M_m n = K/2∑_μη_μ(m) (U_μ(m) δ_n, m+μ̂ - U_μ^*(m-μ̂) δ_n, m-μ̂) + m_0δ_n, m. The resulting W matrix is W_a c = K^2/4∑_μ, νη_μ(a) η_ν(c) [ U^*_μ(a-μ̂) U_ν(c-ν̂) δ_a-μ̂+ν̂,c. - U^*_μ(a-μ̂) U^*_ν(c) δ_a-μ̂-ν̂,c - U_μ(a) U_ν(c-ν̂) δ_a+μ̂+ν̂,c + . U_μ(a) U^*_ν(c) δ_a+μ̂-ν̂,c] + m_0^2δ_a,c. and if we reorganize this equation to identify the unique NtNNs we find, W_a c = K^2/4∑_μ < ν [(η_μ(a) η_ν(c) U^*_μ(a-μ̂) U_ν(c-ν̂) + η_ν(a) η_μ(c) U_ν(a) U^*_μ(c)) δ_a-μ̂+ν̂,c - (η_μ(a) η_ν(c) U^*_μ(a-μ̂) U^*_ν(c) + η_ν(a) η_μ(c) U^*_ν(a-ν̂) U^*_μ(c)) δ_a-μ̂-ν̂,c - (η_μ(a) η_ν(c) U_μ(a) U_ν(c-ν̂) + η_ν(a) η_μ(c) U_ν(a) U_μ(c-μ̂)) δ_a+μ̂+ν̂,c + (η_ν(a) η_μ(c) U^*_ν(a-ν̂) U_μ(c-μ̂) + η_μ(a) η_ν(c) U_μ(a) U^*_ν(c)) δ_a+μ̂-ν̂,c ] + (m_0^2 + 2K^2) δ_a,c - K^2/4∑_μ(U^*_μ(a-μ̂) U^*_μ(a-2μ̂) δ_a-2μ̂,c + U_μ(a) U_μ(a+μ̂) δ_a+2μ̂,c). This matrix shows that each nonzero element on the off-diagonal consists of a sum over all paths from c to a. Since the nonzero elements of W are between NtNNs, we must construct a O^NtNN_c to take all these NtNNs into account. There are 32 NtNNs and one diagonal element, meaning we need to encode 33 nonzero elements in each column. For this we require six qubits. We use ℓ=0-31 to encode the NtNNs, and 32≤ℓ < 64 to encode the diagonal element. We encode the two hops, starting with positive jumps, as c(n⃗,ℓ) = ℓ=0: n⃗+2x̂ ℓ=1: n⃗+x̂+ŷ ℓ=2: n⃗+x̂+ẑ ℓ=3: n⃗+x̂+t̂ ℓ=4: n⃗+2ŷ ℓ=5: n⃗+ŷ+ẑ ℓ=6: n⃗+ŷ+t̂ ℓ=7: n⃗+2ẑ ⋮ ⋮ Using the 𝔞𝔡𝔡 and 𝔰𝔲𝔟 operators we can construct the O^NtNN_c operator that connects NtNN pairs corresponding to Eq. (<ref>). Looking at Eq. (<ref>), the first term with ∑_μ < ν can be done by combinations of 𝔞𝔡𝔡 and 𝔰𝔲𝔟 matching those Kronecker deltas in the first term. This accounts for 24 NtNNs, the final eight come from the last term. For this we use combinations of two 𝔞𝔡𝔡 or 𝔰𝔲𝔟 operators for each direction. Figure <ref> shows the circuit for O^NtNN_c. Since each column of the matrix has 32 unique elements, in addition to the diagonal, we need to to specify which column we are populating with which matrix elements. This can be done by controlling on the |n⃗⟩ register. By controlling on |n⃗⟩, along with the six qubits used for the 33 nonzero entries in each column, we can uniquely specify each nonzero matrix element. We use the same encoding as O^NtNN_c for the six qubits. We define the action of O_A^U(1) as, O^U(1)_A|0⟩|ℓ⟩|n⃗⟩ = ( A_c(n⃗,ℓ) n⃗|0⟩ + √(1 - |A_c(n⃗,ℓ) n⃗|^2)|1⟩ ) |ℓ⟩|n⃗⟩. Each term in Eq. (<ref>) can be embedded in an SU(2) matrix, U_3(θ⃗), as the top-left element. For example, using the first term α≡ K^2(η_μ(a) η_ν(c) U^*_μ(a-μ̂) U_ν(c-ν̂) + η_ν(a) η_μ(c) U_ν(a) U^*_μ(c))/4, U_3 = [ α -√(1 - |α|^2); √(1 - |α|^2) α^* ]. 
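Two pieces of bookkeeping from this subsection can be checked in a few lines: the count of next-to-nearest neighbors that fixes the size of the ℓ register, and the unitarity of the single-element SU(2) embedding U_3 (the value of α below is an arbitrary complex number with modulus below one, standing in for a rescaled matrix element):

```python
import numpy as np
from itertools import product

# (a) 24 "mixed" two-hops plus 8 straight double hops give 32 NtNN displacements in D = 4.
D = 4
units = []
for mu in range(D):
    e = np.zeros(D, dtype=int); e[mu] = 1
    units += [e, -e]
ntnn = {tuple(d1 + d2) for d1, d2 in product(units, repeat=2) if any(d1 + d2)}
print(len(ntnn))                                    # 32 -> six qubits for the ell register

# (b) The SU(2) embedding of a matrix element alpha with |alpha| <= 1 is unitary.
rng = np.random.default_rng(0)
alpha = 0.7 * np.exp(2j * np.pi * rng.random())
U3 = np.array([[alpha, -np.sqrt(1 - abs(alpha)**2)],
               [np.sqrt(1 - abs(alpha)**2), np.conj(alpha)]])
print(np.allclose(U3 @ U3.conj().T, np.eye(2)))     # True
```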
This SU(2) matrix can be parameterized by the three angles, θ⃗, and directly decomposed in terms of elementary gates. Just as before, W must be normalized such that Eq. (<ref>) is unitary. Moreover, α should be multiplied by the dimension of the Hilbert space of |ℓ⟩, to cancel the same factor that appears from the application of D_s twice, U_3 = [ 64 α / s -√(1 - |64 α / s|^2); √(1 - |64 α / s|^2) 64 α^* / s ]. The diagonal element, being real and the same in every column, can be encoded using a y-rotation just as in the free scalar field case. Figure <ref> shows the beginning of the quantum circuit for O_A^U(1).
§.§ SU(3) and staggered fermions We now gauge the fermions with an SU(3) color gauge field. The resulting fermion matrix has the same form as in the abelian case, with the minor addition of color indices. The form of W is identical to Eq. (<ref>) with the color indices expressed: W^αβ_a c = K^2/4∑_μ, ν∑_γη_μ(a) η_ν(c) × [ U^γα^*_μ(a-μ̂) U^γβ_ν(c-ν̂) δ_a-μ̂+ν̂,c - U^γα^*_μ(a-μ̂) U^βγ^*_ν(c) δ_a-μ̂-ν̂,c - U^αγ_μ(a) U^γβ_ν(c-ν̂) δ_a+μ̂+ν̂,c + U^αγ_μ(a) U^βγ^*_ν(c) δ_a+μ̂-ν̂,c] + m_0^2δ_a,c. The nonzero matrix elements are between NtNNs and so we can use O^NtNN_c. For O^SU(3)_A we must encode a complex 3 × 3 matrix (as opposed to a sum of phases in the abelian case), and so we need an additional four qubits to index the matrix elements, which we denote ℓ_αβ = 0,1,…, 15: O^SU(3)_A |0⟩|ℓℓ_αβ⟩|n⃗⟩ = ( A^αβ_c(n⃗,ℓ) n⃗|0⟩ + √(1 - |A^αβ_c(n⃗,ℓ) n⃗|^2)|1⟩ ) |ℓℓ_αβ⟩|n⃗⟩. O_A^SU(3) can be constructed in a similar fashion as O_A^U(1) with additional controls for ℓ_αβ. The block-encoding above loads each of the O(V) nonzero matrix elements of W, but every multi-controlled operation acts on log(V) qubits, and hence the complete scaling is O(Vlog(V)).
§ MATRIX TRACE The next major ingredient to compute fermion determinants on a quantum computer is the matrix trace <cit.>. This can be done by recasting the trace as an expectation value and using QME. A similar classical approach can be carried out stochastically <cit.>, and this approach has been pursued in the lattice QCD community for decades <cit.>. Quantum mean estimation uses quantum phase estimation (QPE). To use QPE we need a state preparation method, and a unitary whose eigenphase corresponds to the average of interest. We recast the trace of a V × V ≡ 2^n× 2^n arbitrary unitary matrix, U, as an average through, Γ ≡ 1/V Tr[U] = 1/V∑_i=0^V-1 U_ii = 1/V∑_i=0^V-1⟨ i |Û| i⟩ = 1/V∑_i=0^V-1∑_j=0^V-1δ_ij ⟨ j |Û| i⟩ = 1/V∑_i=0^V-1∑_j=0^V-1⟨ j | i⟩ ⟨ j |Û| i⟩ = ⟨ψ_0 | Q | ψ_0⟩ ≡ ⟨ψ_0 | 1⊗Û | ψ_0⟩, where |ψ_0⟩≡1/√(V)∑_i=0^V-1 |i⟩⊗ |i⟩, and Û denotes the operator implementing U. The form of Eq. (<ref>) is precisely the form of an average. Moreover, since Q is unitary, and |ψ_0⟩ is a normalized state, QME can be used on a quantum computer to calculate it. To use QME we first identify the various qubit registers. We label them using the variable denoting their size. First, we require a qubit register that handles the state space of Û. This is n = log_2(V) qubits. Second, we need another register of equal size, ℓ = n. For QPE, we must set aside m qubits which determine the precision and success probability, and finally QME as in Ref. <cit.> requires one additional qubit, which we shall label a. (If only the absolute value of the trace is desired, the a qubit is unnecessary.) State preparation for |ψ_0⟩ uses O(n) basic gates, and can be seen in Fig. <ref>, along with the preparation of a. We refer to this combined circuit as the state-preparation circuit in what follows.
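The identity recasting the normalized trace as an expectation value in the doubled register can be verified classically for small matrices. The sketch below (plain numpy linear algebra; the random unitary is a stand-in for Û, and the matrix size is an arbitrary choice) builds |ψ_0⟩ and Q = 1⊗Û and compares ⟨ψ_0|Q|ψ_0⟩ with Tr[U]/V:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3                                       # number of system qubits
V = 2 ** n

# Random unitary from the QR decomposition of a complex Gaussian matrix.
Z = rng.standard_normal((V, V)) + 1j * rng.standard_normal((V, V))
U, _ = np.linalg.qr(Z)

# |psi_0> = (1/sqrt(V)) sum_i |i>|i> on the doubled register.
psi0 = np.zeros(V * V, dtype=complex)
for i in range(V):
    psi0[i * V + i] = 1.0 / np.sqrt(V)

Q = np.kron(np.eye(V), U)                   # 1 (x) U, with U acting on the second register
print(psi0.conj() @ Q @ psi0, np.trace(U) / V)   # the two complex numbers agree
```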
The matrix Q is given by 1⊗Û in the ℓ-n state space. The quantum circuit for Q can be seen in Fig. <ref>. With this we can define the Grover iterate used in QME. Generically, it is given by S_0^†𝕌_φ, where 𝕌_φ is related to Q through Fig. <ref>, and S_0 is the reflection about the all-zeros state, as shown in Fig. <ref>. The quantum circuit for the Grover iterate is shown in Fig. <ref>. With the allotted qubits in register m, one uses the Grover iterate and the state-preparation circuit in QPE to extract a phase. The cosine of this phase is precisely the real part of the average from Eq. (<ref>). To estimate the number of queries, we note that each call of the Grover iterate uses O(n) basic gates in addition to those of the state-preparation circuit. The error in the estimate of the real part of the trace comes directly from QPE and is ϵ ∼ O(1/N_calls); the same holds for the imaginary part. The success probability can be constrained to lie within 1-δ with N_calls ∼ O(log(1/δ) / ϵ) <cit.>. To verify the algorithm, we compute the trace of an example 4 × 4 unitary matrix. We do this for four different QPE qubit register sizes of m=6,7,8,9. In this case n = ℓ = 2. The results of the shots from QPE for Re[Γ] ≡ cos(θ) and Im[Γ] ≡ sin(θ), corresponding to the real and imaginary parts of the trace, can be seen in Fig. <ref>, along with the exact values. We find the algorithm converges as expected to the exact values.
§.§ Trace of a block-encoding In actuality, we are interested in the trace of a block-encoded matrix. A simple modification allows for this within the same framework of QME. Consider the trace of the top-left V × V block, A, in a block-encoding B, where ℓ+1 qubits are used to encode the block, A = (⟨0^ℓ+1|⊗1) B (|0^ℓ+1⟩⊗1). Then, 1/V Tr[A] = 1/V∑_i=0^V-1 A_ii = 1/V∑_i=0^V-1⟨0^ℓ+1|⟨i| B |0^ℓ+1⟩|i⟩ = 1/V∑_j=0^V-1∑_i=0^V-1δ_ij⟨0^ℓ+1|⟨j| B |0^ℓ+1⟩|i⟩ = 1/V∑_j=0^V-1∑_i=0^V-1⟨j | i⟩ ⟨0^ℓ+1|⟨j| B |0^ℓ+1⟩|i⟩ = ⟨ψ_0| Q |ψ_0⟩ ≡ ⟨ψ_0| 1⊗ B |ψ_0⟩, with |ψ_0⟩ = 1/√(V)∑_i|i⟩|0^ℓ+1⟩|i⟩ and 1 the identity on a Hilbert space with the same dimension as the block. With this minor modification to the initial state used in QME we can compute the trace of a block of a matrix.
§ DETERMINANT OF A MATRIX We are now ready to combine the results from the previous sections and compute the determinant of W as defined in Sec. <ref>. To compute the determinant we use the relation Tr[log(W)] = log det(W). We do this because it is easy on a quantum computer to compute the log of a matrix using the QET, and the trace can be computed using the method from Sec. <ref>. The QET is possible because of the following theorem of quantum signal processing <cit.>: Quantum signal processing in SU(2): For any complex polynomials P, Q and any positive integer d, such that (1) deg(P) ≤ d, deg(Q) ≤ d-1, (2) P has parity d mod 2, and Q has parity (d-1) mod 2, (3) |P(x)|^2 + (1-x^2)|Q(x)|^2 = 1, ∀ x ∈ [-1,1], there exists a set of phase factors Φ = {ϕ_0, …, ϕ_d} ∈ [-π, π)^d+1 such that U_Φ = e^i ϕ_0 Z∏_n=1^d W(x) e^i ϕ_n Z = [ P(x) i Q(x) √(1-x^2); i Q(x) √(1-x^2) P^*(x) ], where W(x) = e^i arccos(x) X = [ x i √(1-x^2); i √(1-x^2) x ]. If one can find the appropriate phases for a polynomial approximation of log(x), then the QET can use those phases to apply that same polynomial approximation to a block-encoded matrix. Under the polynomial transform, the resulting polynomial must be between -1 and 1 since it is ultimately used in a unitary block-encoding. So if W is rescaled such that the largest eigenvalue λ_max ≤ 1, and λ_min is the smallest eigenvalue of W, then 0 ≥ log(W) / log(1/λ_min) ≥ -1, and we should find a polynomial approximation, f̃, of f(x) ≡ log(x) / log(1/λ_min).
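Two ingredients of this section lend themselves to quick scalar-level checks: the signal-processing product U_Φ of the theorem above, and a polynomial approximation of the normalized logarithm f(x). The sketch below anticipates the Chebyshev construction described next; λ_min = 0.05 and the polynomial degree are illustrative placeholders, and the least-squares fit here is not the phase-finding procedure used in the paper:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# (a) The signal-processing sequence U_Phi for a scalar signal x.
def qsp(x, phases):
    W = np.array([[x, 1j * np.sqrt(1 - x**2)], [1j * np.sqrt(1 - x**2), x]])
    rz = lambda phi: np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    U = rz(phases[0])
    for phi in phases[1:]:
        U = U @ W @ rz(phi)
    return U

# With all phases set to zero the top-left entry is the Chebyshev polynomial
# T_d(x) = cos(d arccos x): a quick sanity check of the construction.
d, x = 5, 0.37
print(qsp(x, np.zeros(d + 1))[0, 0].real, np.cos(d * np.arccos(x)))

# (b) An (even) Chebyshev approximation of f(x) = log|x| / log(1/lambda_min)
# on lambda_min <= |x| <= 1.
lam = 0.05
f = lambda x: np.log(np.abs(x)) / np.log(1.0 / lam)
xs = np.concatenate([np.linspace(-1, -lam, 500), np.linspace(lam, 1, 500)])
coef = C.chebfit(xs, f(xs), deg=40)
err = np.max(np.abs(C.chebval(xs, coef) - f(xs)))
print(err, np.max(np.abs(coef[1::2])))     # fit error on the domain; odd coefficients ~ 0
```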
This additional normalization of W is compatible with the normalization necessary for the block-encoding. The largest eigenvalue of W is bounded by m_0^2≤λ_max≤ m_0^2 + 16K^2 (see appendix <ref>). To give our function definite parity we actually define f(x) ≡log(|x|) / log(1/λ_min), which changes nothing since we are interested in cases where x > 0. To approximate f, we use a Chebyshev series, f̃(x) = ∑_n=0^d_f a_n T_n(x) ≈ f(x), over the interval λ_min≤ |x| ≤ 1, keeping only even terms. This polynomial approximation can however be less than negative one, in particular when |x| < λ_min. To fix this we use the same procedure from Ref. <cit.>, and construct a polynomial approximation to the rectangular function, r(x, s) = 1 |x| > s 0 |x| < s 1/2 |x| = s . This is done by first taking the error function as a proxy for the sign function, and then adding two shifted error functions to form an approximate rectangular function. Polynomial approximations for such functions can be found in Ref. <cit.>. We find performing a Chebyshev fit to the error-function approximation of the rectangular function works better in practice, keeping only even terms. We denote this polynomial approximation of the rectangular function as r̃(x,s). r̃ has degree d_r. Since the product of two polynomials is another polynomial, we can construct an improved polynomial approximation to f, F, using the product F(x) ≡r̃(x,λ_min) ×f̃(x) with degree d = d_f + d_r. We then use F for the polynomial P found in Eq. (<ref>). The phases can be found using the method of least squares <cit.>. Using F from above, along with the method from Ref. <cit.>, we are able to find phases such that the P(x) from Eq. (<ref>) reproduces F to machine precision. Figure <ref> shows a comparison between the exact function f(x) and the polynomial approximation found using the phases obtained from the least squares minimization. We find good agreement by setting the degree of the rectangular polynomial much higher than that of the logarithm polynomial. In addition, we find it helps to shift the entire polynomial away from y=-1, so that the polynomial “sits” in the middle of 1 and -1. A shift of 1/2 works well. This known shift can always be subtracted at the end of the calculation. With a block-encoding of the matrix (from Sec. <ref>) to which we wish to apply the polynomial transform using the phases, we can use the method of alternating phase modulation <cit.> and perform the QET to approximate F in the uppermost-left block of the unitary block-encoding. We then compute the trace of this matrix using the method from Sec. <ref>, yielding ≈log(W) / log(1/λ_min). We also note that this sequence of operations is gauge invariant. Since the QET applies a polynomial transform on the eigenvalues of the block-encoded matrix, and a gauge transform leaves the eigenvalues of the fermion matrix invariant, the final result is gauge invariant. We checked this explicitly with an example gauge field to verify and find it holds true. § CONCLUSIONS We have presented an algorithm to compute the logarithm of the determinant of the staggered fermion matrix for a classical gauge field configuration. The algorithm uses the relation [log] = log and computes the logarithm of the fermion matrix using the QET. The trace is them computed using QME. 
We find an improved scaling with this quantum algorithm relative to the classical computing algorithms which are either exact based on sparse LU decomposition <cit.> to compute the determinant, or stochastic trace estimation <cit.>; with the present algorithm scaling like O(V log(V)) while the traditional classical algorithm scales like O(V^3). The O(V log(V)) scaling stems from the quantum circuit cost associated with the block-encoding of the classical data. This quantum circuit is then run through the QET O(d) times. This is in contrast to a classical version of this algorithm where one would construct a matrix polynomial of W that approximates the logarithm. Classically, the cost of computing a polynomial of W is O(d V^3), demonstrating a polynomial speed-up when using the quantum algorithm. The O(V^3) scaling arises from matrix multiplication operations, even in the case of an initial sparse matrix. A detailed analysis of the block-encoding complexity can be found in appendix <ref>. An important ingredient of the algorithm is the use of the fact that 2 log(M) = log(W), which allows the computation of log(W) in place of M. The W matrix actually contains an additional property not taken advantage of here; that is that W only couples NtNN lattice sites, meaning that only half the matrix need be block-encoded, since the even and odd sub-lattices are completely independent of each other. We save this optimization for future work. Relevantly, future work also includes devising a similar algorithm for Wilson and overlap/domain wall fermions. With the success of the QET to compute polynomial transforms of matrices, it would be interesting to extend the ideas presented here to other classical algorithms used in lattice gauge theory calculations, e.g. matrix inversion and the trace operation in the calculation of the chiral condensate. Another possible benefit would be the calculation of the sign of the Hermitian Wilson-Dirac operator in QCD which is currently an expensive operation, but which could perhaps benefit from the QET's speed-up. This manuscript has been authored by employees of Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This work is supported by the Department of Energy through the Fermilab Theory QuantiSED program in the area of “Intersections of QIS and Theoretical Particle Physics”. PS is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics QuantISED program under the grants “HEP Machine Learning and Optimization Go Quantum”, Award Number 0000240323, and “DOE QuantiSED Consortium QCCFP-QMLQCF”, Award Number DE-SC0019219. § FERMION MATRIX EIGENVALUE RELATIONS Write the fermion matrix as M = D + m_0. The spectrum, σ, of this operator is characterized by σ(M) = {± iλ + m_0: ± iλ∈σ(D) }. We have that W = M^† M , ⟹[W] = [M^†] [M] = ([M])^∗[M] = |[M]|^2 Since the eigenvalues of M occur in complex conjugate pairs, [M] is real-valued and non-negative. So, [M] = √([W])⇒[log M] = 1/2[log W]. § SPECTRAL BOUNDS ON STAGGERED DIRAC OPERATOR We write the staggered Dirac operator in the following form M_x x' = N ( 2 m' δ_x, x'. + . ∑_μ = 1^4η_μ(x) [ U_μ(x) δ_x+μ̂,x' - U^†_μ(x-μ̂) δ_x-μ̂,x'] ), where N is an arbitrary normalization, related to Eq. (<ref>) by N ≡ K / 2 and m' = m_0/K. For this analysis η_μ^*(x-μ̂) = η_μ(x). 
In the beginning we will work with the normalization N=1 and mention how the spectral bound changes (trivially) with normalization at the end. To proceed, we define a set of four unitary matrices 𝒰_μ≡η_μ(x) U_μ(x) δ_x+μ̂, x'. and rewrite the staggered Dirac operator (N=1) in terms of these matrices M = 2m' + ∑_μ (𝒰_μ - 𝒰^†). For a given μ we have the following eigenvalue equations 𝒰_μ|μ, j⟩ = e^i θ_μ, j|μ, j⟩ (𝒰_μ - 𝒰^†_μ) |μ, j⟩ = (e^i θ_μ, j - e^-i θ_μ, j) |μ, j⟩ = 2i sin(θ_μ, j) |μ, j⟩ For normal eigenvectors, we see the bounds are |⟨μ, j |𝒰_μ | μ, j||⟩ = 1 |⟨μ, j | (𝒰_μ - 𝒰^†_μ) | μ, j||⟩≤ 2 . In general [𝒰^†_μ, 𝒰_ν] ≠δ_μν, so it is not so simple to find a basis where ∑_μ (𝒰_μ - 𝒰^†_μ) is diagonal. However, the bounds are still additive due to an analogue of the Schwartz inequality. So, we get the following spectral bound for the mass independent part of M [ ∑_μ=1^4 (𝒰_μ - 𝒰^†_μ) ] |λ⟩ = λ|λ⟩, |λ| ≤ 8. Because M is anti-Hermitian when m'=0, we also get the following spectral bound on M M |λ'⟩ = λ' |λ'⟩, 2|m'| ≤λ' ≤√(4m'^2 + 64). For the Hermitian positive-definite form W = M^† M, the spectrum is real and positive with the spectral bound W |λ”⟩ = λ”|λ”⟩, 4 m'^2≤λ”≤ 4m'^2 + 64. We trivially extend this result to general normalizations of the staggered Dirac operator W |λ”⟩ = λ”|λ”⟩, 4 N^2m'^2≤λ”≤ 4N^2 (m'^2 + 16) = m_0^2≤λ”≤ (m_0^2 + 16 K^2). Also, we note that the upper limit on the condition number κ for W is independent of normalization κ≤16 + m'^2/m'^2≈16/m'^2 = 16 K^2/m_0^2 where we assume m' ≪ 1. § COMPUTATIONAL COMPLEXITY For completeness, here we list the computational complexity in terms of the spacetime dimension D, and volume V, for the various parts of the above algorithms for block-encoding in table <ref>. The computational complexity for the QET is well established, and can be found in the original references <cit.>, and for QME it is the same as quantum phase estimation <cit.>. The overall computational complexity of our algorithm is the product of complexities of the block encoding, QET, and QME. Importantly, computational complexities of QET and QME, for a given target error bound, are independent of V. unsrt 10 montvay_munster_1994 Istvan Montvay and Gernot Münster. Quantum Fields on a Lattice. Cambridge Monographs on Mathematical Physics. Cambridge University Press, 1994. gattringer Christof Gattringer and Christian B. Lang. Quantum Chromodynamics on the Lattice. Lecture Notes in Physics. Springer Berlin, Heidelberg, 2010. kogut:1979 John B. Kogut. An introduction to lattice gauge theory and spin systems. Rev. Mod. Phys., 51:659–713, Oct 1979. WEINGARTEN1981333 D.H. Weingarten and D.N. Petcher. Monte carlo integration for lattice gauge theories with fermions. Physics Letters B, 99(4):333–338, 1981. Gottlieb:1987mq Steven A. Gottlieb, W. Liu, D. Toussaint, R. L. Renken, and R. L. Sugar. Hybrid Molecular Dynamics Algorithms for the Numerical Simulation of Quantum Chromodynamics. Phys. Rev., D35:2531–2542, 1987. Joo:2001bz B. Joo, I. Horvath, and K. F. Liu. The Kentucky noisy Monte Carlo algorithm for Wilson dynamical fermions. Phys. Rev. D, 67:074505, 2003. PhysRevD.105.L051506 Szabolcs Borsányi, Zoltán Fodor, Matteo Giordano, Sándor D. Katz, Dániel Nógrádi, Attila Pásztor, and Chik Him Wong. Lattice simulations of the qcd chiral transition at real baryon density. Phys. Rev. D, 105:L051506, Mar 2022. Nagata_2022 Keitaro Nagata. Finite-density lattice qcd and sign problem: Current status and open problems. 
Progress in Particle and Nuclear Physics, 127:103991, November 2022. Park:2024vjp Sungwoo Park, Tanmoy Bhattacharya, Rajan Gupta, Huey-Wen Lin, Santanu Mondal, and Boram Yoon. Update on flavor diagonal nucleon charges from clover fermions. PoS, LATTICE2023:328, 2024. Kuberski:2023qgx Simon Kuberski. Muon g-2: Lattice calculations of the hadronic vacuum polarization. PoS, LATTICE2023:125, 2024. Ellis:2018dmb John Ellis, Natsumi Nagata, and Keith A. Olive. Uncertainties in WIMP Dark Matter Scattering Revisited. Eur. Phys. J. C, 78(7):569, 2018. Varnhorst:2020dba Lukas Varnhorst. Aspects of quark mass dependence in lattice QCD. PhD thesis, Wuppertal U., 2020. LatticeStrongDynamics:2023bqp R. C. Brower et al. Light Scalar Meson and Decay Constant in SU(3) Gauge Theory with Eight Dynamical Flavors. 6 2023. PRXQuantum.2.040203 John M. Martyn, Zane M. Rossi, Andrew K. Tan, and Isaac L. Chuang. Grand unification of quantum algorithms. PRX Quantum, 2:040203, Dec 2021. low2024quantum Guang Hao Low and Yuan Su. Quantum eigenvalue processing, 2024. Gily_n_2019 András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC ’19. ACM, June 2019. kothari:2022 Robin Kothari and Ryan O'Donnell. Mean estimation when you have the source code; or, quantum Monte Carlo methods, 8 2022. shyamsundar2021nonboolean Prasanth Shyamsundar. Non-boolean quantum amplitude amplification and quantum mean estimation, 2021. Ham21 Yassine Hamoudi. Quantum Algorithms for the Monte Carlo Method. PhD thesis, Université de Paris, 2021. montanaro:2015 Ashley Montanaro. Quantum speedup of monte carlo methods. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 471(2181):20150301, 2015. Low_2019 Guang Hao Low and Isaac L. Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, July 2019. PhysRevLett.118.010501 Guang Hao Low and Isaac L. Chuang. Optimal hamiltonian simulation by quantum signal processing. Phys. Rev. Lett., 118:010501, Jan 2017. camps2023explicit Daan Camps, Lin Lin, Roel Van Beeumen, and Chao Yang. Explicit quantum circuits for block encodings of certain sparse matrices, 2023. PhysRevA.103.032422 Anirban N. Chowdhury, Rolando D. Somma, and Yiğ ğit Subaş ı. Computing partition functions in the one-clean-qubit model. Phys. Rev. A, 103:032422, Mar 2021. aharonov2006polynomial Dorit Aharonov, Vaughan Jones, and Zeph Landau. A polynomial quantum algorithm for approximating the jones polynomial, 2006. PhysRevLett.81.5672 E. Knill and R. Laflamme. Power of one bit of quantum information. Phys. Rev. Lett., 81:5672–5675, Dec 1998. cade2017quantum Chris Cade and Ashley Montanaro. The quantum complexity of computing schatten p-norms, 2017. shen2024efficient Yizhi Shen, Katherine Klymko, Eran Rabani, Daan Camps, Roel Van Beeumen, and Michael Lindsey. Efficient quantum trace estimation with reconfigurable real-time circuits, 2024. Nghiem_2023 Nhat A. Nghiem and Tzu-Chieh Wei. An improved method for quantum matrix multiplication. Quantum Information Processing, 22(8), August 2023. Hutchinson M.F. Hutchinson. A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines. Communications in Statistics - Simulation and Computation, 18(3):1059–1076, 1989. skorski2020modern Maciej Skorski. A modern analysis of hutchinson's trace estimator, 2020. PhysRevD.42.3520 H. R. Fiebig and R. M. 
Woloshyn. Monopoles and chiral-symmetry breaking in three-dimensional lattice qed. Phys. Rev. D, 42:3520–3523, Nov 1990. DONG1994130 Shao-Jing Dong and Keh-Fei Liu. Stochastic estimation with z2 noise. Physics Letters B, 328(1):130–136, 1994. mande2023tight Nikhil S. Mande and Ronald de Wolf. Tight bounds for quantum phase estimation and related problems, 2023. PhysRevA.103.042419 Yulong Dong, Xiang Meng, K. Birgitta Whaley, and Lin Lin. Efficient phase-factor evaluation in quantum signal processing. Phys. Rev. A, 103:042419, Apr 2021. wang2023accelerating Tengcheng Wang, Wenhao Li, Haojie Pei, Yuying Sun, Zhou Jin, and Weifeng Liu. Accelerating sparse lu factorization with density-aware adaptive matrix multiplication for circuit simulation. In 2023 60th ACM/IEEE Design Automation Conference (DAC), pages 1–6. IEEE, 2023.
http://arxiv.org/abs/2407.12752v1
20240717172130
Supersolidity in Rydberg tweezer arrays
[ "Lukas Homeier", "Simon Hollerith", "Sebastian Geier", "Neng-Chun Chiu", "Antoine Browaeys", "Lode Pollet" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "cond-mat.stat-mech", "quant-ph" ]
lukas.homeier@physik.uni-muenchen.de Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, München D-80799, Germany Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA Physikalisches Institut, Universität Heidelberg, Im Neuenheimer Feld 226, 69120 Heidelberg, Germany Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA Université Paris-Saclay, Institut d’Optique Graduate School, CNRS, Laboratoire Charles Fabry, 91127 Palaiseau Cedex, France lode.pollet@physik.uni-muenchen.de Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, München D-80799, Germany § ABSTRACT Rydberg tweezer arrays provide a versatile platform to explore quantum magnets with dipolar XY or van-der-Waals Ising ZZ interactions. Here, we propose a scheme combining dipolar and van-der-Waals interactions between Rydberg atoms, where the amplitude of the latter can be greater than that of the former, realizing an extended Hubbard model with long-range tunnelings in optical tweezer arrays. For repulsive interactions, we predict the existence of a robust supersolid phase accessible in current Rydberg tweezer experiments on the triangular lattice supported by large-scale quantum Monte Carlo simulations based on explicitly calculated pair interactions for ^87Rb and with a critical entropy per particle S/N ≈ 0.19. Such a lattice supersolid is long-lived, found over a wide parameter range in an isotropic and flat two-dimensional geometry, and can be realized for 100s of particles. Its thermodynamical and dynamical properties can hence be studied at a far larger scale than hitherto possible. Supersolidity in Rydberg tweezer arrays Lode Pollet July 22, 2024 ======================================= Introduction.— Supersolids are enigmatic states of matter in which the usually competing translational and U(1) symmetries are simultaneously broken <cit.>. First conceived theoretically by E. Gross <cit.>, they can be pictured as a density modulated superfluid. Recent progress in cold atomic physics saw flashes of such supersolids <cit.> in elongated traps for a few droplets, following theoretical proposals <cit.>. Other realizations, based on different mechanisms, are based on cavity-mediated interactions <cit.> or spin-orbit coupling <cit.>. There exists however a complementary view on supersolids based on the defect induced picture introduced by Andreev-Lifshitz-Chester <cit.> in which defects such as vacancies and interstitials may delocalize on top of a robust crystal with one atom per unit cell. As the original claims of bulk supersolidity in ^4He are now understood differently <cit.>, the realization of supersolids with spontaneous symmetry breaking in the tight-binding limit has remained elusive. Such lattice supersolids for hard-core bosons have a long theoretical history. Two decades ago, quantum Monte Carlo simulations <cit.> confirmed mean-field predictions <cit.> for the model with nearest-neighbor hopping and interactions, establishing the existence of a supersolid phase for any filling between 1/3 and 2/3 on the triangular lattice. 
In contrast, for the same model on the square lattice no supersolid phase is found, in line with the domain wall argument <cit.>. The controversy at half filling was settled in favor of a first-order transition between the supersolids found above and below half filling <cit.>. The model attracted renewed interest when polar molecules became a realistic option <cit.>; models with dipolar repulsion found stable supersolids on the triangular <cit.> and also on the square lattice <cit.>. With recent experimental progress in magnetic atoms <cit.>, itinerant Rydberg dressing <cit.>, and polar molecules <cit.>, testing these predictions will soon become reality, and offers an alternative to the spin supersolids concurrently sought in frustrated magnets with anti-ferromagnetic spin exchange <cit.>. Rydberg tweezer arrays are another experimental cold atom platform that has gained significant attention in both analog <cit.> and digital quantum simulation <cit.>. In the realm of analog simulation, studies have mainly focused on two classes of Hamiltonians <cit.>: (i) A pair of atoms in the same Rydberg state experiences a van-der-Waals (vdW) interaction that can be used to implement Ising ZZ spin models ∝ J_z/r_ij^6 <cit.>; here r_ij is the distance between atoms at sites i and j. (ii) The direct dipole-dipole interaction between Rydberg states of different parity enables the study of dipolar XY spin models ∝ J_⊥/r_ij^3 <cit.>; this model can be mapped to hard-core bosons <cit.> with long-range tunnelings. In this Letter, we propose a scheme enabling the implementation of hard-core bosons with competing dipolar tunnelings t and vdW extended Hubbard interactions V encoded in two Rydberg states <cit.>, combining the previously established models in tweezer arrays, see Fig. <ref>(a). For ferromagnetic tunnelings t>0 and repulsive Hubbard interactions V>0, we show the existence of a long-lived lattice supersolid on the triangular lattice using quantum Monte Carlo techniques, see Fig. <ref>(b). Its critical temperature and entropy can be as high as T_c/t ∼ 2 and S_c/N ∼ 0.19, putting it well within experimental reach and without concerns about an elongated, confining geometry. For its experimental realization, we propose a specific set of Rydberg states in ^87Rb and an experimental protocol based on adiabatic state preparation techniques <cit.>, allowing one to prepare the supersolid in current Rydberg tweezer arrays with hundreds of atoms. Model.— We consider hard-core bosons â^†_j with Hamiltonian Ĥ = Ĥ_t + Ĥ_V, where Ĥ_t = -t∑_i<j (1/r_ij^3)( â^†_i â_j + h.c. ) and Ĥ_V = V∑_i<j (1/r_ij^6)(n̂_i-1/2)(n̂_j-1/2) - μ∑_j n̂_j, and n̂_j = â^†_j â_j is the number operator on lattice site j of a triangular lattice, see Fig. <ref>(a). The distance between two lattice sites is r_ij = a · |i-j| and in the following we set the lattice constant a=1. Therefore, the Hamiltonian (<ref>) describes hard-core bosonic matter with ferromagnetic, dipolar tunneling amplitudes t > 0 and repulsive, extended vdW Hubbard interactions V > 0. The chemical potential μ controls the filling of particles such that μ=0 corresponds to half filling. Let us first discuss the various ordered phases that may arise in the ground state, see Fig. <ref>(b). For V=0, we expect a condensate; true off-diagonal long-range order is possible in two dimensions even at finite temperature because of the dipolar nature of the tunneling amplitudes. In the spin picture, such an XY ferromagnet has been studied in Refs. <cit.>.
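Before turning to the opposite limit of large V, the model definition itself can be cross-checked by exact diagonalization on a very small cluster. The sketch below builds the Hamiltonian above for hard-core bosons on a 3×3 triangular cluster with periodic boundaries (the long-range 1/r^3 and 1/r^6 tails are truncated at the minimum image, and μ = -8, V = 16 are representative values from the supersolid region discussed later); it is a consistency check of the Hamiltonian, not a substitute for the QMC simulations:

```python
import numpy as np
from itertools import product

L, t, V, mu = 3, 1.0, 16.0, -8.0
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
sites = [(x, y) for x in range(L) for y in range(L)]
N = len(sites)

def dist(si, sj):
    """Minimum-image distance on the periodic triangular cluster."""
    best = np.inf
    for mx, my in product((-1, 0, 1), repeat=2):
        d = (si[0] - sj[0] + mx * L) * a1 + (si[1] - sj[1] + my * L) * a2
        best = min(best, np.linalg.norm(d))
    return best

r = np.array([[dist(si, sj) if si != sj else np.inf for sj in sites] for si in sites])

dim = 2 ** N                                  # hard-core bosons = qubits, 2^9 = 512 states
H = np.zeros((dim, dim))
for s in range(dim):
    occ = [(s >> k) & 1 for k in range(N)]
    H[s, s] += -mu * sum(occ)
    for i in range(N):
        for j in range(i + 1, N):
            H[s, s] += V / r[i, j]**6 * (occ[i] - 0.5) * (occ[j] - 0.5)
            if occ[i] != occ[j]:              # hop i <-> j
                H[s ^ (1 << i) ^ (1 << j), s] += -t / r[i, j]**3

E, psi = np.linalg.eigh(H)
print("ground-state energy per site:", E[0] / N)
```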
For V = ∞, the system orders in a crystalline, charge-density wave (CDW) with a wave ordering vector Q depending on the filling factor. In this work, we restrict ourselves to not too large values of V/ t ≲ 24 where only the commensurate fillings 1/3 and 2/3 are relevant as shown in Fig. <ref>(b) left. When the dipolar tunneling and repulsive Hubbard interactions are in competition, we anticipate the existence of a supersolid as the commensurate structure with filling 1/3 is doped with particles (or, equivalently, as the CDW with filling 2/3 is doped with holes), in analogy to models with short-range exchange, <cit.>, see Fig. <ref>(b) left. Next, we corroborate the existence of the supersolid phase in numerical simulations. Quantum Monte Carlo simulations.— We employ Quantum Monte Carlo (QMC) simulations with worm-type <cit.> updates to study the model, Eq. (<ref>), with ferromagnetic tunnelings t = 1 on the triangular lattice of linear size L with periodic boundary conditions (PBC) and inverse temperature β. The code is based on an adaptation of Ref. <cit.>. The simulations were found to be challenging because of the following reasons: (i) the parametric difference between the dipolar kinetic and potential vdW terms, (ii) strong first-order transitions between the superfluid (SF) and CDW phases, (iii) critical slowing down near the second-order crystalline transitions. This is reflected in the data where the shown statistical errors do not capture all the noise originating from the systematic errors. In order to study the U(1) symmetry breaking we analyze the experimentally accessible condensate fraction n_c defined by n_c = 1/N∑_j C^+-( j), where C^+-(j) = < â^†_jâ_0 + h.c.> is the equal-time off-diagonal correlation function, which can be directly obtained in the proposed Rydberg tweezer setup by measuring the in-plane spin-spin correlations ⟨Ŝ^x_i Ŝ^x_j ⟩ in the spin picture. The criticality of the analogous dipolar XY model has been analyzed in Ref. <cit.>, from which we extract the critical exponents ν = η = 1. The lattice symmetry breaking transition to a √(3)×√(3) order is caught by the structure factor S(Q) = 1/L^2< | ∑_k=1n̂_k e^i Qr_k|^2 >, with ordering vector Q = (4π/3,0). Its critical exponent in a finite size scaling analysis is 2β/ν. The vdW interactions are short-ranged; hence, no changes to the known critical exponents are expected. Ground-state phase diagram.— We perform simulations for linear system sizes L=9,15,27 commensurate with the expected 1/3 and 2/3-filling phases, at sufficiently low temperature β=1/T=10, which probes the ground state of Hamiltonian (<ref>) for all practical purposes. In Fig. <ref>(b), we show the maximum structure factor max_QS(Q) and the condensate fraction n_c for L=15. In our simulations, we find CDW lobes at 1/3- and 2/3-filling. Finite size effects can be appreciated from the supplemental <cit.>. In the ground state, the phase diagram bears a strong similarity to the models with a short-range exchange term <cit.>. In the following, we focus on the vicinity of the CDW_1/3 region, i.e., the lower lobe in Fig. <ref>(b) right. In analogy to the defect picture of Andreev-Lifshitz-Chester, we start within the lobe of constant particle density and increase the chemical potential above μ/t ≳ 10; hence doping the system with particles. For sufficiently large V/t ≳ 12, we find a reduced but still substantial structure factor S(Q) for Q=(4π/3,0); hence establishing long-range order in the density-density ⟨n̂_i n̂_j ⟩ correlations. 
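The diagonal order parameter defined above is easy to evaluate on occupation snapshots. As a sanity check of the ordering vector Q = (4π/3, 0), the following sketch computes S(Q) for an ideal √3×√3 pattern at 1/3 filling on an L = 9 cluster (a hand-made snapshot standing in for QMC or experimental data):

```python
import numpy as np

L = 9
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
Q = np.array([4 * np.pi / 3, 0.0])

positions, occupations = [], []
for x in range(L):
    for y in range(L):
        positions.append(x * a1 + y * a2)
        occupations.append(1.0 if (x - y) % 3 == 0 else 0.0)   # one sublattice occupied
positions, occupations = np.array(positions), np.array(occupations)

amp = np.sum(occupations * np.exp(1j * positions @ Q))
S_Q = np.abs(amp) ** 2 / L**2
print(S_Q)      # (L^2/3)^2 / L^2 = L^2/9 = 9 for the perfect crystal, ~0 for a featureless state
```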
In addition, the dopants give rise to a condensate fraction n_c; hence long-range order in the off-diagonal ⟨â_i^†â_j + h.c.⟩ correlations. The co-existence of both order parameters confirms the existence of a supersolid phase in the ground state of our proposed model. The experimentally accessible static structure factor and the off-diagonal correlation function for the different phases are shown in Fig. <ref>. We further characterize the phase transitions. In the vicinity of the tip of the lobe, the CDW_1/3 shows a first-order phase transition directly into the SF phase without crossing a supersolid region, as indicated by the dashed line in Fig. <ref>(b). As we increase V/t the first-order line splits into two lines of second-order transitions indicated by the solid lines in Fig. <ref>(b): (i) The CDW to supersolid transition described above is the U(1) symmetry breaking of the XY ferromagnet. (ii) At intermediate V/t, the supersolid loses the translational-symmetry broken crystalline structure as we approach μ=0, leading to a transition into the SF phase. Finite temperature transitions.— We choose a representative parameter point μ = -8, V = 16 where we expect a supersolid phase. As shown in Fig. <ref>, translational symmetry is the first symmetry we break when lowering the temperature. The transition is well described by the two-dimensional three-state Potts model, with known critical exponents ν=5/6 and β=1/9. When lowering the temperature further, a supersolid phase forms at T_c=2.2. A finite size analysis <cit.> of the superfluid stiffness and of the condensate fraction yields the same critical temperature for the U(1) transition, with critical exponents ν=η=1, as expected from Refs. <cit.>. The critical entropy per site was found to be S/N ≈ 0.14 for L=27. For V=20, μ=-5 the critical entropy was the highest within our parameter range and was found to be S/N ≈ 0.19 with a T_c ≈ 1.9 <cit.>. These entropies are within reach for the current generation of Rydberg tweezer experiments <cit.>. Experimental proposal.— We propose to encode the empty and occupied lattice sites in two Rydberg states of ^87Rb. The tunneling amplitudes t = -C_3 arise from resonant dipole-dipole interactions, while extended Hubbard interactions V/R^6 = (E_↑↑ + E_↓↓ - 2E_↑↓)/R^6 ≡ C^eff_6/R^6 emerge via second-order van-der-Waals interactions <cit.>. Here, E_σσ' ∝ 1/R^6 are the van-der-Waals interactions between a pair of atoms in state |σσ'⟩ on neighbouring sites, and the coefficients C_3 and C^eff_6 define the interaction strength. The mapping from the Rydberg Hamiltonian to the extended Hubbard model introduces an additional position-dependent chemical potential μ_j^z = (1/2)∑_i (a^6/r^6_ij)(E_↓↓ - E_↑↑), which is to a good approximation constant in the bulk but acts as a pinning field at the open boundaries in an experiment. For previous quantum simulations focusing on two Rydberg states with identical principal quantum number n, the tunneling t is typically much stronger than V <cit.>. Realizing a regime where t < V requires the dipole matrix elements between the chosen Rydberg states, which contribute to t, to be smaller than the ones contributing to the dispersive van-der-Waals interactions. Here, we propose Rydberg states with different n, for which the radial dipole matrix elements decay rapidly. In addition to n and the orbital angular momentum l of the Rydberg states, the interactions can be fine-tuned by the total electronic angular momentum J, its projection m_J, and the magnetic field B applied perpendicular to the atomic plane.
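The statement that μ_j^z is flat in the bulk and acts as a pinning field at open boundaries can be illustrated directly from its definition. In the sketch below the like-state energy difference entering μ_j^z is set to an arbitrary unit value (only the spatial profile matters here, not the scale), and the array is an open 11×11 triangular patch:

```python
import numpy as np

L = 11
dE = 1.0                                    # placeholder for the like-state vdW energy difference
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
pos = np.array([x * a1 + y * a2 for x in range(L) for y in range(L)])

mu_z = np.zeros(len(pos))
for j in range(len(pos)):
    r = np.linalg.norm(pos - pos[j], axis=1)
    r[j] = np.inf                           # exclude i = j from the sum
    mu_z[j] = 0.5 * dE * np.sum(1.0 / r**6)

center = np.argmin(np.linalg.norm(pos - pos.mean(axis=0), axis=1))
print("bulk  :", mu_z[center])
print("corner:", mu_z[0])                   # noticeably smaller: the open edge acts as a pinning field
```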
As one example, we find promising interaction strengths for states |⟩ = |60P_3/2,m_j = 3/2⟩ and |⟩ = |59 D_3/2,m_j = 3/2⟩ and a field amplitude B = 50G, see Fig. <ref>. These states implement a model with attractive, extended Hubbard interactions V<0 and antiferromagnetic tunnelings t<0. Hence the highest energy state (negative temperature regime) realizes the predicted supersolid, which can be prepared adiabatically as we explain below. We are confident that similar parameters can be found using different parameters and atomic species, opening a wide range of the accessible parameters. State preparation protocol.— In Rydberg tweezer experiments, it is customary to prepare an initial product state followed by a ramp protocol in order to prepare a target ground state <cit.>. Due to imperfections in the initial state preparation, decoherence effects during the time evolution, e.g., caused by finite Rydberg state lifetime, and rapid spin exchange processes, local observables are often well described by an equilibrium grand canonical statistical ensemble during the later stages of an experiment <cit.>. We proceed as follows: (i) All atoms are prepared in the |⟩ state. (ii) Subsequently, local light shifts |Δ_j| ≫ |t|, |V| acting on the |⟩ state are applied to sites j∈ C_N, where C_N corresponds to a realization of the CDW_1/3 order <cit.> with N additional excitations in the system, see Fig. <ref>(a). Depending on the sign of the light shift, the system has a large overlap with the ground state (Δ_j > 0) or highest excited state (Δ_j < 0). (iii) In the presence of the strong local light shift, a global microwave pulse applies a π-rotation Û_π =∏_j ∉ C_N( â^†_j +â_j) for the atoms on site j∉ C_N. (iv) To adiabatically prepare the ground state (highest excited state), the local light shift |Δ_j| is decreased to zero while maintaining an eigenstate for ramp speeds slower than the gap size. We expect that the long-range interactions give rise to large finite system size gaps advantageous for the experiments. Previous experiments in similar systems were mostly limited by infidelities in the initial state preparation and not diabaticity <cit.>. To account for non-equilibrium processes, we suggest to wait for a short time to reach thermal equilibrium before preforming measurements. (v) Lastly, a projective measurement on one internal state is performed. In the above basis, this directly allow us to obtain the ⟨n̂_i n̂_j ⟩ correlations and static structure factor S(Q). An additional global π/2 rotation around the (â^† + â)-direction prior to the measurement gives access to the correlations of ⟨â^†_iâ_j + h.c.⟩. Therefore, the proposed protocol allows us to directly test our predicted supersolid phase by a comparison to the correlation functions obtained using QMC simulations, see Fig. <ref>. Discussion and Outlook.— We have introduced a new model combining long-range dipolar tunnelings of hard-core bosons with dominant short-range vdW Hubbard interactions in Rydberg tweezer arrays. On the triangular model and for ferromagnetic tunnelings/repulsive extended Hubbard interactions, we have established the phase diagram using large-scale QMC simulations in the experimentally relevant parameter regime. We have predicted the existence of a robust supersolid phase. From a finite temperature analysis, we have extracted the critical temperature and entropy, which we find to be within reach in current Rydberg analog quantum simulators using concrete calculations of pair states in ^87Rb. 
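As a closing illustration of the preparation protocol, the toy simulation below ramps a strong local light shift to zero on a 6-site chain of hard-core bosons with the same 1/r^3 hopping and 1/r^6 interaction form (geometry, ramp speed, the pinned pattern, and all parameter values are illustrative stand-ins for the triangular-lattice experiment, and the ramp starts from the pinned ground state that the product state of steps (i)-(iii) approximates). For a sufficiently slow ramp the final overlap with the true ground state approaches one:

```python
import numpy as np

N, t, V, mu, delta0 = 6, 1.0, 16.0, -8.0, 40.0
pinned = {0, 3}                          # sites favoured by the pinning pattern (toy choice)
dim = 2 ** N

def hamiltonian(delta):
    H = np.zeros((dim, dim))
    for s in range(dim):
        occ = [(s >> k) & 1 for k in range(N)]
        H[s, s] += -mu * sum(occ) + delta * sum(o for k, o in enumerate(occ) if k not in pinned)
        for i in range(N):
            for j in range(i + 1, N):
                r = float(j - i)
                H[s, s] += V / r**6 * (occ[i] - 0.5) * (occ[j] - 0.5)
                if occ[i] != occ[j]:
                    H[s ^ (1 << i) ^ (1 << j), s] += -t / r**3
    return H

E, U = np.linalg.eigh(hamiltonian(delta0))
psi = U[:, 0].astype(complex)            # strongly pinned ground state

steps, dt = 400, 0.05                    # piecewise-constant evolution, total ramp time 20/t
for n in range(steps):
    E, U = np.linalg.eigh(hamiltonian(delta0 * (1 - (n + 1) / steps)))
    psi = U @ (np.exp(-1j * E * dt) * (U.conj().T @ psi))

E, U = np.linalg.eigh(hamiltonian(0.0))
print("overlap with the final ground state:", abs(U[:, 0].conj() @ psi) ** 2)
```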
Our proposed scheme makes the realization of the long-sought lattice supersolid with hundreds of particles viable. In addition to the equal time correlation functions, it has been demonstrated <cit.> that dynamical experiments give access to the momentum resolved excitation spectrum of spin models <cit.>. The application of this protocol to our model allows to study the emergence of superfluid as well as crystalline phonons in the supersolid phase <cit.>. Furthermore, the Hamiltonian (<ref>) can be considered from the perspective of a spin-ordered Mott insulator; hence extensions to three Rydberg levels provide a route to explore supersolids with additional mobile hard-core bosonic vacancies <cit.>. Our proposed experimental scheme is highly flexible: The ability to probe either the ground state or the inverted spectrum <cit.> as well as the freedom to engineer both ferromagnetic and antiferromagnetic dipolar tunnelings by choosing atomic m_j sublevels enables to probe four distinct models. This includes the recently discovered transverse quantum fluids <cit.> with ferromagnetic tunnelings and attractive Hubbard interactions. A sign problem for antiferromagnetic tunnelings prohibits the exploration of large-scale QMC simulation, but frustrated models host potential candidates for quantum spin liquids <cit.>. Hence their quantum simulation could give valuable insights into their quantum phases of matter. Note added.– During completion of this work we became aware of another proposal to study supersolidity with Rydberg tweezer arrays based on three Rydberg states <cit.>. Acknowledgements.— We thank Daniel Barredo, Guillaume Bornet, Cheng Chen, Gabriel Emperauger, Bastien Gély, Fabian Grusdt, Lukas Klein, Thierry Lahaye, Simon Linsel, Mu Qiao, Ana Maria Rey and Norman Yao for fruitful discussions. This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2111 – 390814868. L.H. has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programm (Grant Agreement no 948141) — ERC Starting Grant SimUcQuam, and acknowledges support from the Studienstiftung des deutschen Volkes. S.H. acknowledges funding through the Harvard Quantum Initiative Postdoctoral Fellowship in Quantum Science and Engineering. S.G. acknowledges funding by Structures (Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC2181/1-390900948). A.B. acknowledges funding by the Agence Nationale de la Recherche (ANR-22-PETQ-0004 France 2030, project QuBitAF), Horizon Europe programme HORIZON-CL4-2022-QUANTUM-02- SGA via the project 101113690 (PASQuanS2.1), and the European Research Council (Advanced grant No. 101018511- ATARAXIA). 67 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Boninsegni and Prokof'ev(2012)]Boninsegni_RMP2012 author author Massimo Boninsegni and author Nikolay V. Prokof'ev, title title Colloquium: Supersolids: What and where are they? 10.1103/RevModPhys.84.759 journal journal Rev. Mod. Phys. volume 84, pages 759–776 (year 2012)NoStop [Gross(1957)]Gross1957 author author Eugene P. Gross, title title Unified theory of interacting bosons, 10.1103/PhysRev.106.161 journal journal Phys. Rev. 
volume 106, pages 161–162 (year 1957)NoStop [Chomaz et al.(2019)Chomaz, Petter, Ilzhöfer, Natale, Trautmann, Politi, Durastante, van Bijnen, Patscheider, Sohmen, Mark, and Ferlaino]Chomaz2019 author author L. Chomaz, author D. Petter, author P. Ilzhöfer, author G. Natale, author A. Trautmann, author C. Politi, author G. Durastante, author R. M. W. van Bijnen, author A. Patscheider, author M. Sohmen, author M. J. Mark, and author F. Ferlaino, title title Long-Lived and Transient Supersolid Behaviors in Dipolar Quantum Gases, https://doi.org/10.1103/physrevx.9.021012 journal journal Physical Review X volume 9, pages 021012 (year 2019)NoStop [Tanzi et al.(2019)Tanzi, Lucioni, Famà, Catani, Fioretti, Gabbanini, Bisset, Santos, and Modugno]Tanzi2019 author author L. Tanzi, author E. Lucioni, author F. Famà, author J. Catani, author A. Fioretti, author C. Gabbanini, author R. N. Bisset, author L. Santos, and author G. Modugno, title title Observation of a Dipolar Quantum Gas with Metastable Supersolid Properties, https://doi.org/10.1103/physrevlett.122.130405 journal journal Physical Review Letters volume 122, pages 130405 (year 2019)NoStop [Böttcher et al.(2019)Böttcher, Schmidt, Wenzel, Hertkorn, Guo, Langen, and Pfau]Boettcher2019 author author Fabian Böttcher, author Jan-Niklas Schmidt, author Matthias Wenzel, author Jens Hertkorn, author Mingyang Guo, author Tim Langen, and author Tilman Pfau, title title Transient Supersolid Properties in an Array of Dipolar Quantum Droplets, https://doi.org/10.1103/physrevx.9.011051 journal journal Physical Review X volume 9, pages 011051 (year 2019)NoStop [Lode(2019)]Pollet_newsandviews author author Pollet Lode, title title Quantum gases show flashes of a supersolid, 10.1038/d41586-019-01585-w journal journal Nature volume 569, pages 494–495 (year 2019)NoStop [Norcia et al.(2021)Norcia, Politi, Klaus, Poli, Sohmen, Mark, Bisset, Santos, and Ferlaino]Norcia2021 author author Matthew A. Norcia, author Claudia Politi, author Lauritz Klaus, author Elena Poli, author Maximilian Sohmen, author Manfred J. Mark, author Russell N. Bisset, author Luis Santos, and author Francesca Ferlaino, title title Two-dimensional supersolidity in a dipolar quantum gas, https://doi.org.10.1038/s41586-021-03725-7 journal journal Nature volume 596, pages 357–361 (year 2021)NoStop [Wenzel et al.(2017)Wenzel, Böttcher, Langen, Ferrier-Barbut, and Pfau]Wenzel2017 author author Matthias Wenzel, author Fabian Böttcher, author Tim Langen, author Igor Ferrier-Barbut, and author Tilman Pfau, title title Striped states in a many-body system of tilted dipoles, 10.1103/PhysRevA.96.053630 journal journal Phys. Rev. A volume 96, pages 053630 (year 2017)NoStop [Baillie and Blakie(2018)]Baillie2018 author author D. Baillie and author P. B. Blakie, title title Droplet crystal ground states of a dipolar bose gas, 10.1103/PhysRevLett.121.195301 journal journal Phys. Rev. Lett. volume 121, pages 195301 (year 2018)NoStop [Macia et al.(2016)Macia, Sánchez-Baena, Boronat, and Mazzanti]Macia2016 author author A. Macia, author J. Sánchez-Baena, author J. Boronat, and author F. Mazzanti, title title Droplets of trapped quantum dipolar bosons, 10.1103/PhysRevLett.117.205301 journal journal Phys. Rev. Lett. volume 117, pages 205301 (year 2016)NoStop [Cinti and Boninsegni(2017)]Cinti2017 author author Fabio Cinti and author Massimo Boninsegni, title title Classical and quantum filaments in the ground state of trapped dipolar bose gases, 10.1103/PhysRevA.96.013627 journal journal Phys. Rev. 
A volume 96, pages 013627 (year 2017)NoStop [Chomaz et al.(2022)Chomaz, Ferrier-Barbut, Ferlaino, Laburthe-Tolra, Lev, and Pfau]Chomaz2023 author author Lauriane Chomaz, author Igor Ferrier-Barbut, author Francesca Ferlaino, author Bruno Laburthe-Tolra, author Benjamin L Lev, and author Tilman Pfau, title title Dipolar physics: a review of experiments with magnetic quantum gases, 10.1088/1361-6633/aca814 journal journal Reports on Progress in Physics volume 86, pages 026401 (year 2022)NoStop [Léonard et al.(2017)Léonard, Morales, Zupancic, Esslinger, and Donner]Leonard2017 author author Julian Léonard, author Andrea Morales, author Philip Zupancic, author Tilman Esslinger, and author Tobias Donner, title title Supersolid formation in a quantum gas breaking a continuous translational symmetry, https://doi.org/10.1038/nature21067 journal journal Nature volume 543, pages 87–90 (year 2017)NoStop [Li et al.(2017)Li, Lee, Huang, Burchesky, Shteynas, Top, Jamison, and Ketterle]Li2017 author author Jun-Ru Li, author Jeongwon Lee, author Wujie Huang, author Sean Burchesky, author Boris Shteynas, author Furkan Çaǧrı Top, author Alan O. Jamison, and author Wolfgang Ketterle, title title A stripe phase with supersolid properties in spin–orbit-coupled Bose–Einstein condensates, https://doi.org/10.1038/nature21431 journal journal Nature volume 543, pages 91–94 (year 2017)NoStop [Andreev and Lifshitz(1969)]Andreev1969 author author Aleksandr F. Andreev and author I. M. Lifshitz, title title Quantum theory of defects in crystals, http://www.jetp.ras.ru/cgi-bin/dn/e_029_06_1107.pdf journal journal Soviet Physics JETP volume 29, pages 1107–1113 (year 1969)NoStop [Chester(1970)]Chester1970 author author G. V. Chester, title title Speculations on bose-einstein condensation and quantum crystals, 10.1103/PhysRevA.2.256 journal journal Phys. Rev. A volume 2, pages 256–258 (year 1970)NoStop [Kim and Chan(2004a)]Kim2004_Nature author author E. Kim and author M. H. W. Chan, title title Probable observation of a supersolid helium phase, 10.1038/nature02220 journal journal Nature volume 427, pages 225–227 (year 2004a)NoStop [Kim and Chan(2004b)]Kim2004 author author E. Kim and author M. H. W. Chan, title title Observation of Superflow in Solid Helium, https://doi.org/10.1126/science.1101501 journal journal Science volume 305, pages 1941–1944 (year 2004b)NoStop [Kim and Chan(2012)]Kim2012 author author Duk Y. Kim and author Moses H. W. Chan, title title Absence of supersolidity in solid helium in porous vycor glass, 10.1103/PhysRevLett.109.155301 journal journal Phys. Rev. Lett. volume 109, pages 155301 (year 2012)NoStop [Boninsegni(2003)]Boninsegni2003 author author M. Boninsegni, title title Hard core bosons on a triangular lattice, 10.1023/A:1023741108312 journal journal Journal of Low Temperature Physics volume 132, pages 39–53 (year 2003)NoStop [Melko et al.(2005)Melko, Paramekanti, Burkov, Vishwanath, Sheng, and Balents]Melko2005 author author R. G. Melko, author A. Paramekanti, author A. A. Burkov, author A. Vishwanath, author D. N. Sheng, and author L. Balents, title title Supersolid order from disorder: Hard-core bosons on the triangular lattice, 10.1103/PhysRevLett.95.127207 journal journal Phys. Rev. Lett. volume 95, pages 127207 (year 2005)NoStop [Heidarian and Damle(2005)]Heidarian2005 author author Dariush Heidarian and author Kedar Damle, title title Persistent supersolid phase of hard-core bosons on the triangular lattice, 10.1103/PhysRevLett.95.127206 journal journal Phys. Rev. Lett. 
volume 95, pages 127206 (year 2005)NoStop [Wessel and Troyer(2005)]Wessel2005 author author Stefan Wessel and author Matthias Troyer, title title Supersolid hard-core bosons on the triangular lattice, 10.1103/PhysRevLett.95.127205 journal journal Phys. Rev. Lett. volume 95, pages 127205 (year 2005)NoStop [Boninsegni and Prokof'ev(2005)]Boninsegni2005 author author Massimo Boninsegni and author Nikolay Prokof'ev, title title Supersolid phase of hard-core bosons on a triangular lattice, 10.1103/PhysRevLett.95.237204 journal journal Phys. Rev. Lett. volume 95, pages 237204 (year 2005)NoStop [Murthy et al.(1997)Murthy, Arovas, and Auerbach]Murthy1997 author author Ganpathy Murthy, author Daniel Arovas, and author Assa Auerbach, title title Superfluids and supersolids on frustrated two-dimensional lattices, 10.1103/PhysRevB.55.3104 journal journal Phys. Rev. B volume 55, pages 3104–3121 (year 1997)NoStop [Sengupta et al.(2005)Sengupta, Pryadko, Alet, Troyer, and Schmid]Sengupta2005 author author Pinaki Sengupta, author Leonid P. Pryadko, author Fabien Alet, author Matthias Troyer, and author Guido Schmid, title title Supersolids versus phase separation in two-dimensional lattice bosons, 10.1103/PhysRevLett.94.207202 journal journal Phys. Rev. Lett. volume 94, pages 207202 (year 2005)NoStop [Cornish et al.(2024)Cornish, Tarbutt, and Hazzard]Cornish2024 author author Simon L. Cornish, author Michael R. Tarbutt, and author Kaden R. A. Hazzard, title title Quantum computation and quantum simulation with ultracold molecules, https://doi.org/10.1038/s41567-024-02453-9 journal journal Nature Physics volume 20, pages 730–740 (year 2024)NoStop [Pollet et al.(2010)Pollet, Picon, Büchler, and Troyer]Pollet2010 author author L. Pollet, author J. D. Picon, author H. P. Büchler, and author M. Troyer, title title Supersolid Phase with Cold Polar Molecules on a Triangular Lattice, https://doi.org/ journal journal Physical Review Letters volume 104, pages 125302 (year 2010)NoStop [Capogrosso-Sansone et al.(2010)Capogrosso-Sansone, Trefzger, Lewenstein, Zoller, and Pupillo]CapogrossoSansone2010 author author B. Capogrosso-Sansone, author C. Trefzger, author M. Lewenstein, author P. Zoller, and author G. Pupillo, title title Quantum Phases of Cold Polar Molecules in 2D Optical Lattices, https://doi.org/10.1103/physrevlett.104.125301 journal journal Physical Review Letters volume 104, pages 125301 (year 2010)NoStop [Su et al.(2023)Su, Douglas, Szurek, Groth, Ozturk, Krahn, Hébert, Phelps, Ebadi, Dickerson, Ferlaino, Marković, and Greiner]Su2023 author author Lin Su, author Alexander Douglas, author Michal Szurek, author Robin Groth, author S. Furkan Ozturk, author Aaron Krahn, author Anne H. Hébert, author Gregory A. 
Phelps, author Sepehr Ebadi, author Susannah Dickerson, author Francesca Ferlaino, author Ognjen Marković, and author Markus Greiner, title title Dipolar quantum solids emerging in a Hubbard quantum simulator, https://doi.org/10.1038/s41586-023-06614-3 journal journal Nature volume 622, pages 724–729 (year 2023)NoStop [Weckesser et al.(2024)Weckesser, Srakaew, Blatz, Wei, Adler, Agrawal, Bohrdt, Bloch, and Zeiher]Weckesser2024 author author Pascal Weckesser, author Kritsana Srakaew, author Tizian Blatz, author David Wei, author Daniel Adler, author Suchita Agrawal, author Annabelle Bohrdt, author Immanuel Bloch, and author Johannes Zeiher, https://arxiv.org/abs/2405.20128 title Realization of a Rydberg-dressed extended Bose Hubbard model, (year 2024), http://arxiv.org/abs/2405.20128 arXiv:2405.20128 NoStop [Sellmann et al.(2015)Sellmann, Zhang, and Eggert]Sellmann2015 author author Daniel Sellmann, author Xue-Feng Zhang, and author Sebastian Eggert, title title Phase diagram of the antiferromagnetic xxz model on the triangular lattice, 10.1103/PhysRevB.91.081104 journal journal Phys. Rev. B volume 91, pages 081104 (year 2015)NoStop [Gao et al.(2022)Gao, Fan, Li, Yang, Zeng, Sheng, Zhong, Qi, Wan, and Li]Gao2022 author author Yuan Gao, author Yu-Chen Fan, author Han Li, author Fan Yang, author Xu-Tao Zeng, author Xian-Lei Sheng, author Ruidan Zhong, author Yang Qi, author Yuan Wan, and author Wei Li, title title Spin supersolidity in nearly ideal easy-axis triangular quantum antiferromagnet na2baco(po4)2, 10.1038/s41535-022-00500-3 journal journal npj Quantum Materials volume 7, pages 89 (year 2022)NoStop [Tsurkan et al.(2017)Tsurkan, Zherlitsyn, Prodan, Felea, Cong, Skourski, Wang, Deisenhofer, von Nidda, Wosnitza, and Loidl]Tsurkan2017 author author Vladimir Tsurkan, author Sergei Zherlitsyn, author Lilian Prodan, author Viorel Felea, author Pham Thanh Cong, author Yurii Skourski, author Zhe Wang, author Joachim Deisenhofer, author Hans-Albrecht Krug von Nidda, author Joahim Wosnitza, and author Alois Loidl, title title Ultra-robust high-field magnetization plateau and supersolidity in bond-frustrated MnCr_2S_4, https://doi.org/10.1126/sciadv.1601982 journal journal Science Advances volume 3 (year 2017)NoStop [Xiang et al.(2024)Xiang, Zhang, Gao, Schmidt, Schmalzl, Wang, Li, Xi, Liu, Jin, Li, Shen, Chen, Qi, Wan, Jin, Li, Sun, and Su]Xiang2024 author author Junsen Xiang, author Chuandi Zhang, author Yuan Gao, author Wolfgang Schmidt, author Karin Schmalzl, author Chin-Wei Wang, author Bo Li, author Ning Xi, author Xin-Yang Liu, author Hai Jin, author Gang Li, author Jun Shen, author Ziyu Chen, author Yang Qi, author Yuan Wan, author Wentao Jin, author Wei Li, author Peijie Sun, and author Gang Su, title title Giant magnetocaloric effect in spin supersolid candidate Na2BaCo(PO4)2, https://doi.org/10.1038/s41586-023-06885-w journal journal Nature volume 625, pages 270–275 (year 2024)NoStop [Bernien et al.(2017)Bernien, Schwartz, Keesling, Levine, Omran, Pichler, Choi, Zibrov, Endres, Greiner, Vuletić, and Lukin]Bernien2017 author author Hannes Bernien, author Sylvain Schwartz, author Alexander Keesling, author Harry Levine, author Ahmed Omran, author Hannes Pichler, author Soonwon Choi, author Alexander S. Zibrov, author Manuel Endres, author Markus Greiner, author Vladan Vuletić, and author Mikhail D. 
Lukin, title title Probing many-body dynamics on a 51-atom quantum simulator, 10.1038/nature24622 journal journal Nature volume 551, pages 579–584 (year 2017)NoStop [Scholl et al.(2021)Scholl, Schuler, Williams, Eberharter, Barredo, Schymik, Lienhard, Henry, Lang, Lahaye, Läuchli, and Browaeys]Scholl2021 author author Pascal Scholl, author Michael Schuler, author Hannah J. Williams, author Alexander A. Eberharter, author Daniel Barredo, author Kai-Niklas Schymik, author Vincent Lienhard, author Louis-Paul Henry, author Thomas C. Lang, author Thierry Lahaye, author Andreas M. Läuchli, and author Antoine Browaeys, title title Quantum simulation of 2D antiferromagnets with hundreds of Rydberg atoms, https://doi.org/10.1038/s41586-021-03585-1 journal journal Nature volume 595, pages 233–238 (year 2021)NoStop [Ebadi et al.(2021)Ebadi, Wang, Levine, Keesling, Semeghini, Omran, Bluvstein, Samajdar, Pichler, Ho, Choi, Sachdev, Greiner, Vuletić, and Lukin]Ebadi2021 author author Sepehr Ebadi, author Tout T. Wang, author Harry Levine, author Alexander Keesling, author Giulia Semeghini, author Ahmed Omran, author Dolev Bluvstein, author Rhine Samajdar, author Hannes Pichler, author Wen Wei Ho, author Soonwon Choi, author Subir Sachdev, author Markus Greiner, author Vladan Vuletić, and author Mikhail D. Lukin, title title Quantum phases of matter on a 256-atom programmable quantum simulator, 10.1038/s41586-021-03582-4 journal journal Nature volume 595, pages 227–232 (year 2021)NoStop [Steinert et al.(2023)Steinert, Osterholz, Eberhard, Festa, Lorenz, Chen, Trautmann, and Gross]Steinert2023 author author Lea-Marina Steinert, author Philip Osterholz, author Robin Eberhard, author Lorenzo Festa, author Nikolaus Lorenz, author Zaijun Chen, author Arno Trautmann, and author Christian Gross, title title Spatially Tunable Spin Interactions in Neutral Atom Arrays, https://doi.org/10.1103/physrevlett.130.243001 journal journal Physical Review Letters volume 130, pages 243001 (year 2023)NoStop [Chen et al.(2023a)Chen, Bornet, Bintz, Emperauger, Leclerc, Liu, Scholl, Barredo, Hauschild, Chatterjee, Schuler, Läuchli, Zaletel, Lahaye, Yao, and Browaeys]Chen2023 author author Cheng Chen, author Guillaume Bornet, author Marcus Bintz, author Gabriel Emperauger, author Lucas Leclerc, author Vincent S. Liu, author Pascal Scholl, author Daniel Barredo, author Johannes Hauschild, author Shubhayu Chatterjee, author Michael Schuler, author Andreas M. Läuchli, author Michael P. Zaletel, author Thierry Lahaye, author Norman Y. Yao, and author Antoine Browaeys, title title Continuous symmetry breaking in a two-dimensional Rydberg array, https://doi.org/10.1038/s41586-023-05859-2 journal journal Nature (year 2023a)NoStop [Shaw et al.(2024)Shaw, Chen, Choi, Mark, Scholl, Finkelstein, Elben, Choi, and Endres]Shaw2024 author author Adam L. Shaw, author Zhuo Chen, author Joonhee Choi, author Daniel K. Mark, author Pascal Scholl, author Ran Finkelstein, author Andreas Elben, author Soonwon Choi, and author Manuel Endres, title title Benchmarking highly entangled states on a 60-atom analogue quantum simulator, https://doi.org/10.1038/s41586-024-07173-x journal journal Nature volume 628, pages 71–77 (year 2024)NoStop [Anand et al.(2024)Anand, Bradley, White, Ramesh, Singh, and Bernien]Anand2024 author author Shraddha Anand, author Conor E. 
Bradley, author Ryan White, author Vikram Ramesh, author Kevin Singh, and author Hannes Bernien, https://arxiv.org/abs/2401.10325 title A dual-species Rydberg array, (year 2024), http://arxiv.org/abs/2401.10325 arXiv:2401.10325 NoStop [Madjarov et al.(2020)Madjarov, Covey, Shaw, Choi, Kale, Cooper, Pichler, Schkolnik, Williams, and Endres]Madjarov2020 author author Ivaylo S. Madjarov, author Jacob P. Covey, author Adam L. Shaw, author Joonhee Choi, author Anant Kale, author Alexandre Cooper, author Hannes Pichler, author Vladimir Schkolnik, author Jason R. Williams, and author Manuel Endres, title title High-fidelity entanglement and detection of alkaline-earth Rydberg atoms, https://doi.org/10.1038/s41567-020-0903-z journal journal Nature Physics volume 16, pages 857–861 (year 2020)NoStop [Graham et al.(2022)Graham, Song, Scott, Poole, Phuttitarn, Jooya, Eichler, Jiang, Marra, Grinkemeyer, Kwon, Ebert, Cherek, Lichtman, Gillette, Gilbert, Bowman, Ballance, Campbell, Dahl, Crawford, Blunt, Rogers, Noel, and Saffman]Graham2022 author author T. M. Graham, author Y. Song, author J. Scott, author C. Poole, author L. Phuttitarn, author K. Jooya, author P. Eichler, author X. Jiang, author A. Marra, author B. Grinkemeyer, author M. Kwon, author M. Ebert, author J. Cherek, author M. T. Lichtman, author M. Gillette, author J. Gilbert, author D. Bowman, author T. Ballance, author C. Campbell, author E. D. Dahl, author O. Crawford, author N. S. Blunt, author B. Rogers, author T. Noel, and author M. Saffman, title title Multi-qubit entanglement and algorithms on a neutral-atom quantum computer, https://doi.org/10.1038/s41586-022-04603-6 journal journal Nature volume 604, pages 457–462 (year 2022)NoStop [Bluvstein et al.(2023)Bluvstein, Evered, Geim, Li, Zhou, Manovitz, Ebadi, Cain, Kalinowski, Hangleiter, Bonilla Ataides, Maskara, Cong, Gao, Sales Rodriguez, Karolyshyn, Semeghini, Gullans, Greiner, Vuletić, and Lukin]Bluvstein2023 author author Dolev Bluvstein, author Simon J. Evered, author Alexandra A. Geim, author Sophie H. Li, author Hengyun Zhou, author Tom Manovitz, author Sepehr Ebadi, author Madelyn Cain, author Marcin Kalinowski, author Dominik Hangleiter, author J. Pablo Bonilla Ataides, author Nishad Maskara, author Iris Cong, author Xun Gao, author Pedro Sales Rodriguez, author Thomas Karolyshyn, author Giulia Semeghini, author Michael J. Gullans, author Markus Greiner, author Vladan Vuletić, and author Mikhail D. Lukin, title title Logical quantum processor based on reconfigurable atom arrays, https://doi.org/10.1038/s41586-023-06927-3 journal journal Nature volume 626, pages 58–65 (year 2023)NoStop [Browaeys and Lahaye(2020)]Browaeys2020 author author Antoine Browaeys and author Thierry Lahaye, title title Many-body physics with individually controlled Rydberg atoms, 10.1038/s41567-019-0733-z journal journal Nature Physics volume 16, pages 132–142 (year 2020)NoStop [Bornet et al.(2023)Bornet, Emperauger, Chen, Ye, Block, Bintz, Boyd, Barredo, Comparin, Mezzacapo, Roscilde, Lahaye, Yao, and Browaeys]Bornet2023 author author Guillaume Bornet, author Gabriel Emperauger, author Cheng Chen, author Bingtian Ye, author Maxwell Block, author Marcus Bintz, author Jamie A. Boyd, author Daniel Barredo, author Tommaso Comparin, author Fabio Mezzacapo, author Tommaso Roscilde, author Thierry Lahaye, author Norman Y. 
Yao, and author Antoine Browaeys, title title Scalable spin squeezing in a dipolar Rydberg atom array, https://doi.org/10.1038/s41586-023-06414-9 journal journal Nature volume 621, pages 728–733 (year 2023)NoStop [de Léséleuc et al.(2019)de Léséleuc, Lienhard, Scholl, Barredo, Weber, Lang, Büchler, Lahaye, and Browaeys]Leseleuc2019 author author Sylvain de Léséleuc, author Vincent Lienhard, author Pascal Scholl, author Daniel Barredo, author Sebastian Weber, author Nicolai Lang, author Hans Peter Büchler, author Thierry Lahaye, and author Antoine Browaeys, title title Observation of a symmetry-protected topological phase of interacting bosons with Rydberg atoms, 10.1126/science.aav9105 journal journal Science volume 365, pages 775–780 (year 2019)NoStop [de Hond et al.(2020)de Hond, Cisternas, Spreeuw, van Linden van den Heuvell, and van Druten]Hond2020 author author J de Hond, author Nataly Cisternas, author R J C Spreeuw, author H B van Linden van den Heuvell, and author N J van Druten, title title Interplay between van der Waals and dipole–dipole interactions among Rydberg atoms, https://doi.org/10.1088/1361-6455/ab752b journal journal Journal of Physics B: Atomic, Molecular and Optical Physics volume 53, pages 084007 (year 2020)NoStop [Sup()]Supplements @noop journal See Supplemental Material at [URL will be inserted by publisher] for details on the quantum Monte Carlo simulations and Rydberg pair interactions. NoStop [Sbierski et al.(2024)Sbierski, Bintz, Chatterjee, Schuler, Yao, and Pollet]Sbierski2024 journal author author Björn Sbierski, author Marcus Bintz, author Shubhayu Chatterjee, author Michael Schuler, author Norman Y. Yao, and author Lode Pollet, title title Magnetism in the two-dimensional dipolar XY model, https://doi.org/10.1103/physrevb.109.144411 journal journal Physical Review B volume 109, pages 144411 (year 2024)NoStop [Prokof'ev et al.(1998)Prokof'ev, Svistunov, and Tupitsyn]Prokofev_worm_1998 author author N. V. Prokof'ev, author B. V. Svistunov, and author I. S. Tupitsyn, title title Exact, complete, and universal continuous-time worldline Monte Carlo approach to the statistics of discrete quantum systems, 10.1134/1.558661 journal journal Journal of Experimental and Theoretical Physics volume 87, pages 310–321 (year 1998)NoStop [Pollet(2012)]Pollet2012 author author Lode Pollet, title title Recent developments in quantum Monte Carlo simulations with applications for cold gases, https://doi.org/10.1088/0034-4885/75/9/094501 journal journal Reports on Progress in Physics volume 75, pages 094501 (year 2012)NoStop [Sadoune and Pollet(2022)]Sadoune2022 author author Nicolas Sadoune and author Lode Pollet, title title Efficient and scalable path integral Monte Carlo simulations with worm-type updates for Bose-Hubbard and XXZ models, https://doi.org/10.21468/scipostphyscodeb.9 journal journal SciPost Physics Codebases (year 2022)NoStop [Chen et al.(2023b)Chen, Emperauger, Bornet, Caleca, Gély, Bintz, Chatterjee, Liu, Barredo, Yao, Lahaye, Mezzacapo, Roscilde, and Browaeys]chen2023spectroscopy author author Cheng Chen, author Gabriel Emperauger, author Guillaume Bornet, author Filippo Caleca, author Bastien Gély, author Marcus Bintz, author Shubhayu Chatterjee, author Vincent Liu, author Daniel Barredo, author Norman Y. 
Yao, author Thierry Lahaye, author Fabio Mezzacapo, author Tommaso Roscilde, and author Antoine Browaeys, https://arxiv.org/abs/2311.11726 title Spectroscopy of elementary excitations from quench dynamics in a dipolar XY Rydberg simulator, (year 2023b), http://arxiv.org/abs/2311.11726 arXiv:2311.11726 NoStop [Semeghini et al.(2021)Semeghini, Levine, Keesling, Ebadi, Wang, Bluvstein, Verresen, Pichler, Kalinowski, Samajdar, Omran, Sachdev, Vishwanath, Greiner, Vuletić, and Lukin]Semeghini2021 author author G. Semeghini, author H. Levine, author A. Keesling, author S. Ebadi, author T. T. Wang, author D. Bluvstein, author R. Verresen, author H. Pichler, author M. Kalinowski, author R. Samajdar, author A. Omran, author S. Sachdev, author A. Vishwanath, author M. Greiner, author V. Vuletić, and author M. D. Lukin, title title Probing topological spin liquids on a programmable quantum simulator, https://doi.org/10.1126/science.abi8794 journal journal Science volume 374, pages 1242–1247 (year 2021)NoStop [Wang and Pollet(2024)]zhenjiu2024 author author Zhenjiu Wang and author Lode Pollet, @noop title The renormalized classical spin liquid on the ruby lattice, (year 2024), http://arxiv.org/abs/2406.07110 arXiv:2406.07110 NoStop [Whitlock et al.(2017)Whitlock, Glaetzle, and Hannaford]Whitlock_2017 author author Shannon Whitlock, author Alexander W Glaetzle, and author Peter Hannaford, title title Simulating quantum spin models using Rydberg-excited atomic ensembles in magnetic microtrap arrays, 10.1088/1361-6455/aa6149 journal journal Journal of Physics B: Atomic, Molecular and Optical Physics volume 50, pages 074001 (year 2017)NoStop [Weber et al.(2017)Weber, Tresp, Menke, Urvoy, Firstenberg, Büchler, and Hofferberth]Weber2017 author author Sebastian Weber, author Christoph Tresp, author Henri Menke, author Alban Urvoy, author Ofer Firstenberg, author Hans Peter Büchler, and author Sebastian Hofferberth, title title Calculation of Rydberg interaction potentials, 10.1088/1361-6455/aa743a journal journal J. Phys. B: At. Mol. Opt. Phys. volume 50, pages 133001 (year 2017)NoStop [Knap et al.(2013)Knap, Kantian, Giamarchi, Bloch, Lukin, and Demler]Knap2013 author author Michael Knap, author Adrian Kantian, author Thierry Giamarchi, author Immanuel Bloch, author Mikhail D. Lukin, and author Eugene Demler, title title Probing Real-Space and Time-Resolved Correlation Functions with Many-Body Ramsey Interferometry, https://doi.org/10.1103/physrevlett.111.147205 journal journal Physical Review Letters volume 111, pages 147205 (year 2013)NoStop [Hertkorn et al.(2021)Hertkorn, Schmidt, Böttcher, Guo, Schmidt, Ng, Graham, Büchler, Langen, Zwierlein, and Pfau]Hertkorn2021 author author J. Hertkorn, author J.-N. Schmidt, author F. Böttcher, author M. Guo, author M. Schmidt, author K. S. H. Ng, author S. D. Graham, author H. P. Büchler, author T. Langen, author M. Zwierlein, and author T. Pfau, title title Density Fluctuations across the Superfluid-Supersolid Phase Transition in a Dipolar Quantum Gas, https://doi.org/10.1103/physrevx.11.011037 journal journal Physical Review X volume 11, pages 011037 (year 2021)NoStop [Homeier et al.(2024)Homeier, Harris, Blatz, Geier, Hollerith, Schollwöck, Grusdt, and Bohrdt]Homeier2024 author author Lukas Homeier, author Timothy J. 
Harris, author Tizian Blatz, author Sebastian Geier, author Simon Hollerith, author Ulrich Schollwöck, author Fabian Grusdt, and author Annabelle Bohrdt, title title Antiferromagnetic Bosonic t–J Models and Their Quantum Simulation in Tweezer Arrays, https://link.aps.org/doi/10.1103/PhysRevLett.132.230401 journal journal Phys. Rev. Lett. volume 132, pages 230401 (year 2024)NoStop [Kuklov et al.(2024a)Kuklov, Prokof’ev, Radzihovsky, and Svistunov]Kuklov2024 author author Anatoly Kuklov, author Nikolay Prokof’ev, author Leo Radzihovsky, and author Boris Svistunov, title title Transverse quantum fluids, https://doi.org/10.1103/physrevb.109.l100502 journal journal Physical Review B volume 109, pages l100502 (year 2024a)NoStop [Kuklov et al.(2024b)Kuklov, Pollet, Prokof’ev, Radzihovsky, and Svistunov]Kuklov2024a author author Anatoly Kuklov, author Lode Pollet, author Nikolay Prokof’ev, author Leo Radzihovsky, and author Boris Svistunov, title title Universal correlations as fingerprints of transverse quantum fluids, https://doi.org/10.1103/physreva.109.l011302 journal journal Physical Review A volume 109, pages l011302 (year 2024b)NoStop [Bintz et al.(2024)Bintz, Liu, Hauschild, Khalifa, Chatterjee, Zaletel, and Yao]Bintz2024 author author Marcus Bintz, author Vincent S. Liu, author Johannes Hauschild, author Ahmed Khalifa, author Shubhayu Chatterjee, author Michael P. Zaletel, and author Norman Y. Yao, https://arxiv.org/abs/2406.00098 title Dirac spin liquid in quantum dipole arrays, (year 2024), http://arxiv.org/abs/2406.00098 arXiv:2406.00098 NoStop [Yao()]Norm author author Norman Yao, @noop journal private communication NoStop [Pollock and Ceperley(1987)]Pollock1987 journal author author E. L. Pollock and author D. M. Ceperley, title title Path-integral computation of superfluid densities, 10.1103/PhysRevB.36.8343 journal journal Phys. Rev. B volume 36, pages 8343–8352 (year 1987)NoStop Supplemental Materials: Supersolidity in Rydberg tweezer arrays § SELECTION OF RYDBERG STATES In this manuscript, we predict the presence of supersolid phases in atomic arrays where all atoms are excited to their Rydberg states. We focus on systems of two Rydberg states with opposite parity, where the orbital angular momentum l between both states differs by one, i.e., Δ l = 1. Here, resonant dipole-dipole interactions between atom pairs are typically significantly stronger than dispersive van-der-Waals interactions, which emerge from second-order dipole-dipole interactions to off-resonant pair potentials. We propose to use two Rydberg states with different principal quantum numbers Δ n ≠ 0 where the dipole matrix element between both Rydberg states decreases drastically. This enables us to access the opposite regime where van-der-Waals interactions dominate and supersolids are expected, as we confirm using large-scale QMC simulations. We studied various Rydberg states |nS_1/2⟩, |nP_J⟩ and |nD_J⟩ of ^87Rb at different principle quantum numbers n and n'. For Rydberg atom pairs |nS_1/2⟩ and |n'P_J⟩, for typical principal quantum numbers, resonant dipole-dipole interactions decrease too quickly with Δ n. As a result, t/V is either too large, such that we do not expect the presence of a supersolid phase, or too small, such that it is hard to observe experimentally. For states |nD_J⟩ and |n'P_J⟩, we predict the interesting parameter regime if n = n' - 1. 
For the relevant principal quantum numbers, both Rydberg states are separated in energy by less than 10 GHz, enabling an efficient coupling using state-of-the-art microwave technology. We further fine-tune the interactions via the magnetic quantum numbers m_J as well as the magnetic field B. We chose the magnetic field to point perpendicular to the atomic plane such that interactions between atoms in the atomic plane are independent of the orientation of the interacting atom pairs. The dipole-dipole interactions furthermore depend on the magnitude of the magnetic field B because it mixes the fine-structure levels of both Rydberg states, which affects their dipole matrix elements. An additional constraint is the relative sign of t and V, which depends on m_J. We only expect the system to support a supersolid phase if t/V > 0. Finally, we converged to states |⟩ = |60P_3/2,m_j = 3/2⟩ and |⟩ = |59 D_3/2,m_j = 3/2⟩ and a field amplitude B = 50G. These states have the additional advantage that the van-der-Waals interactions between D-state atom pairs are relatively weak. This enables an efficient excitation of the atomic array into the state |⟩, which is an essential part of the proposed state preparation. As discussed for Fig. <ref> in the main text, interactions between Rydberg pairs |⟩ and |⟩ contain a resonant off-diagonal term ∝ 1/R^3, which induces dipolar exchange and mixes both terms, as well as a diagonal contribution ∝ 1/R^6. At short distances, we expect modifications from additional contributions (such as off-diagonal exchange interactions ∝ 1/R^6). These terms were small for our specific Rydberg pairs but are generally not zero. § PHASE DIAGRAM In order to appreciate finite size effects, we show the phase diagram of the model in Fig. <ref>, obtained with quantum Monte Carlo simulations. We took an inverse temperature β = 5, which is close to the ground state. The transitions between the superfluid and supersolid phase are of second order and determined from a finite size analysis, i.e., extrapolated to the thermodynamic limit. The insulating, crystalline CDW phase for different system sizes was determined from the requirement that the superfluid stiffness ρ_s < 0.1. The superfluid stiffness is computed in quantum Monte Carlo simulations from the fluctuations of the winding number as ρ_s = ⟨ W^2 ⟩ / (2 β) <cit.>. The transition between the insulating crystalline phase and the superfluid phase is first order. It is particularly strong away from the tip of the lobe. The shrinking of the Mott lobe, in particular in the vicinity of the tip of the lobe, is remarkably strong. For instance, a scan in μ at fixed V=10 shows a charge density wave lobe for L=9 and L=15 but (just) not for L=27. Such finite size effects will affect a future experiment, which will have system sizes comparable to the ones shown here. These finite size effects originate from the kinetic, dipolar term in the Hamiltonian, which is parametrically stronger than the potential term and thus grows in importance with increasing L at low enough temperature. § FINITE SIZE SCALING ANALYSIS AT FINITE TEMPERATURE For the U(1) transitions at finite temperature, we expect the critical exponents ν = η = 1, as has been shown in Ref. <cit.>. The scale invariant quantities are the superfluid stiffness ρ_s and the condensate density scaled with the linear system size, n_c L. The crystalline (CDW) transition at finite temperature belongs to the universality class of the 2D 3-state Potts model with known critical exponents β=1/9 and ν = 5/6.
The finite size scaling analysis is then performed for the rescaled static structure factor, S(Q)L^2β/ν=S(Q)L^4/15, at the ordering wavevector Q. In Fig. <ref> we confirm that these exponents are compatible with the quantum Monte Carlo data.
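To make the analysis above concrete, the following is a minimal sketch, not taken from the original simulation code, of how the scale-invariant combinations quoted in this section (the superfluid stiffness obtained from winding-number fluctuations, n_c L, and S(Q)L^2β/ν = S(Q)L^4/15) could be assembled from measured curves and their crossings located; the data layout and all function names are illustrative assumptions.

```python
# Minimal sketch of the finite-size-scaling bookkeeping described above.
# Assumed (illustrative) data layout: for each linear size L one has a common
# tuning-parameter grid 'x' and the measured observables 'rho_s', 'nc', 'SQ'.
import numpy as np

def stiffness_from_windings(w_squared_samples, beta):
    """Superfluid stiffness estimator quoted in the text: rho_s = <W^2>/(2 beta)."""
    return np.mean(w_squared_samples) / (2.0 * beta)

def scale_invariant_curve(x, obs, L, which):
    """Rescale an observable so that curves for different L cross at criticality."""
    obs = np.asarray(obs, dtype=float)
    if which == "rho_s":      # already scale invariant at the U(1) transition
        return np.asarray(x), obs
    if which == "nc":         # condensate density times the linear system size
        return np.asarray(x), obs * L
    if which == "SQ":         # S(Q) L^{2 beta/nu} with beta = 1/9, nu = 5/6
        return np.asarray(x), obs * L ** (4.0 / 15.0)
    raise ValueError(which)

def crossing_estimate(x1, y1, x2, y2, num=400):
    """Rough crossing point of two curves sampled on increasing grids."""
    x = np.linspace(max(x1.min(), x2.min()), min(x1.max(), x2.max()), num)
    diff = np.interp(x, x1, y1) - np.interp(x, x2, y2)
    return x[np.argmin(np.abs(diff))]
```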
http://arxiv.org/abs/2407.13677v1
20240718165245
PASTA: Controllable Part-Aware Shape Generation with Autoregressive Transformers
[ "Songlin Li", "Despoina Paschalidou", "Leonidas Guibas" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.GR", "cs.LG" ]
Stanford University 353 Jane Stanford Way Stanford CA 94305 USA svli97@stanford.edu Stanford University 353 Jane Stanford Way Stanford CA 94305 USA paschald@stanford.edu Stanford University 353 Jane Stanford Way Stanford CA 94305 USA guibas@stanford.edu [500]Computing methodologies Shape modeling § ABSTRACT The increased demand for tools that automate the 3D content creation process led to tremendous progress in deep generative models that can generate diverse 3D objects of high fidelity. In this paper, we present PASTA, an autoregressive transformer architecture for generating high quality 3D shapes. PASTA comprises two main components: An autoregressive transformer that generates objects as a sequence of cuboidal primitives and a blending network, implemented with a transformer decoder, that composes the sequences of cuboids and synthesizes high quality meshes for each object. Our model is trained in two stages: First, we train our autoregressive generative model using only annotated cuboidal parts as supervision and, next, we train our blending network using explicit 3D supervision in the form of watertight meshes. Evaluations on various ShapeNet objects showcase the ability of our model to perform shape generation from diverse inputs, e.g., from scratch, from a partial object, from text and images, as well as size-guided generation, by explicitly conditioning on a bounding box that defines the object's boundaries. Moreover, as our model considers the underlying part-based structure of a 3D object, we are able to select a specific part and produce shapes with meaningful variations of this part. As evidenced by our experiments, our model generates 3D shapes that are both more realistic and diverse than existing part-based and non part-based methods, while at the same time being simpler to implement and train. PASTA: Controllable Part-Aware Shape Generation with Autoregressive Transformers Leonidas Guibas July 22, 2024 ================================================================================ § INTRODUCTION The ability to generate realistic and diverse 3D shapes has the potential to significantly facilitate the workflow of artists and content creators and potentially enable new levels of creativity through "generative art" <cit.>. The tremendous progress in generative modelling and implicit-based representations gave rise to several works <cit.> that generate objects with high realism in terms of geometric details and texture. Nevertheless, as these pipelines represent objects holistically, without taking into consideration the underlying part-based structure of each object, they only support a few interactive applications that typically require deep technical knowledge of each model. However, shape editing and manipulation involve controlling which parts of the object need to be changed. To enable this level of control, an active area of research proposes to consider the decomposition of shapes into parts <cit.>. Existing part-based generative models represent 3D shapes as a collection of simple shapes parametrized with cuboids <cit.>, spheres <cit.>, implicit fields <cit.> or more general handles <cit.>, and seek to synthesize new shapes in accordance with their underlying structural understanding of the object.
For example, among the first to explore structure-based generative models were <cit.>, which utilized an autoencoder for generating structured shapes. Despite their impressive capabilities in shape generation and interpolation, neither can perform shape completion from a partial input or generate plausible part variations. Similarly, while <cit.> can generate 3D shapes as sequences of parts, they need to train a separate model for different editing tasks, namely the same model cannot perform both shape generation and shape completion, which makes their approach impractical. To address these limitations, we devise PASTA, a novel part-aware generative model for 3D shapes. PASTA comprises two main components: An autoregressive transformer encoder that generates shapes as unordered sets of parts and a blending network, implemented as a transformer decoder, that combines the part sequences and produces high-quality meshes. Each component of our architecture is trained independently. In particular, we optimize our autoregressive transformer to maximize the log-likelihood of all part arrangements in the dataset. Our supervision comes in the form of part labels and 3D cuboids that specify the per-part size and pose. Unlike existing autoregressive pipelines <cit.> that are trained using teacher forcing, we train PASTA using scheduled sampling <cit.> and showcase that it significantly improves the generation performance of our model. To train our blending network, we consider 3D supervision in the form of watertight meshes and optimize it to reconstruct 3D shapes as implicit occupancy fields <cit.>. We evaluate the performance of our model on several PartNet <cit.> objects and demonstrate that our model can produce more realistic and diverse 3D objects in comparison to both part-based <cit.> and non part-based methods <cit.>. Furthermore, we showcase that our model can generate meaningful part arrangements conditioned on versatile user input (fig:teaser), including but not limited to text and images. In summary, we make the following contributions: We propose the first part-aware generative model using an autoregressive transformer architecture. Our experiments on various PartNet objects <cit.> demonstrate that our model generates more diverse and plausible 3D shapes in comparison to part-based <cit.> and non part-based methods <cit.>. Furthermore, our simple, yet effective architecture allows training a single model capable of performing several editing operations, such as generating new objects from scratch, generating part variations or completing partial shapes. § RELATED WORK 3D Representations Learning-based approaches for 3D reconstruction employ a neural network that learns a function from the input to a mesh <cit.>, a pointcloud <cit.>, a voxel grid <cit.> or an implicit surface <cit.>. Unlike explicit representations, which discretize the output space using voxels, points or mesh vertices, implicit representations store shapes in the weights of a neural network that learns a mapping from a query point and a context vector to a signed distance value <cit.> or a binary occupancy value <cit.>. As these methods require 3D supervision, several works propose combining them with surface <cit.> or volumetric <cit.> rendering to learn the 3D object geometry and texture directly from images. In this work, we introduce a part-aware generative model that parametrizes shapes as an occupancy field <cit.>.
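As a minimal, self-contained illustration of how such implicit representations work (and not the architecture used in this paper), the sketch below maps a 3D query point together with a per-shape context vector to an occupancy probability; all layer sizes and names are arbitrary choices made for illustration.

```python
# Minimal sketch of an implicit occupancy representation: an MLP maps a 3D
# query point plus a per-shape context vector to an occupancy probability.
# Layer sizes and names are illustrative and do not reproduce any specific paper.
import torch
import torch.nn as nn

class OccupancyField(nn.Module):
    def __init__(self, context_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, context):
        # points: (B, N, 3) query locations, context: (B, context_dim) shape code
        ctx = context[:, None, :].expand(-1, points.shape[1], -1)
        logits = self.net(torch.cat([points, ctx], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)   # per-point probability of lying inside the shape
```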
Primitive-based Representations Shape abstraction techniques represent shapes using semantically consistent part arrangements and seek to recover the 3D geometry using simple shapes such as cuboids <cit.>, superquadrics <cit.>, convex solids <cit.>, spheres <cit.> and 3D Gaussians <cit.>. In recent work, <cit.> proposed to represent objects as a family of homeomorphic mappings, parametrized with an Invertible Neural Network (INN) <cit.>. Likewise, <cit.> suggested to represent 3D objects using a structured set of implicit functions <cit.>. Another line of research employs primitives to recover the 3D geometry using CSG trees <cit.> or shape programs <cit.>. Very recently, <cit.> explored learning primitive-based representations only from images. Unlike these works that focus on recovering the object geometry as a collection of parts, we introduce a part-aware generative model that synthesizes novel objects as a set of cuboidal primitives. 3D Generative Models Generative Adversarial Networks (GANs) <cit.> have demonstrated impressive capabilities on several image synthesis <cit.> and editing <cit.> tasks. However, adapting them to 3D content creation is non-trivial, as they ignore the 3D nature of the world and hence lack an understanding of the object's underlying geometry. To address this, 3D-aware GANs proposed to incorporate 3D representations such as voxel grids <cit.> in generative settings or combine them with differentiable renderers <cit.>. As an alternative, several works explored generating 3D shapes as octrees <cit.>, pointclouds <cit.>, meshes <cit.> and implicit functions <cit.>. While these methods yield realistic geometries, they do not consider the part-based object structure. In contrast, we propose a part-aware generative model that generates shapes as an unordered set of cuboids, which are then combined to synthesize a high quality implicit shape. Part-based Generative Models Our work falls into the category of part-based generative models. Zou <cit.> was among the first to introduce a generative recurrent model, parametrized with LSTMs <cit.> in combination with a Mixture Density Network (MDN), to synthesize novel objects as a set of cuboids. Concurrently, Li <cit.> proposed to represent shapes using a symmetry hierarchy, which defines how parts are recursively grouped by symmetry and assembled by connectivity <cit.>. In particular, they utilize an RNN and generate objects as bounding box layouts, which are then filled with voxelized parts. Likewise, StructureNet <cit.> utilizes a VAE <cit.> and generates novel shapes as n-ary graphs, where every node in the graph is associated with a bounding box. Note that while <cit.> can generate plausible new shapes and perform shape and part interpolations, their model cannot be utilized for completion from unconstrained inputs, e.g., a partial object. Furthermore, to perform shape editing, their model requires additional optimization steps in order to find a new shape in the latent space that satisfies a specific edit. Concurrently, <cit.> explored learning a latent space of structured shape differences, using pairs of structured shapes demonstrating a specific edit. Closely related to our work is PQ-NET <cit.>, which generates shapes autoregressively using an RNN autoencoder. The RNN encoder takes a 3D shape segmented into parts and extracts per-part features, which are then fed to the decoder that sequentially predicts parts that reconstruct the input shape.
To be able to generate new shapes, they train a latent GAN <cit.> on the latent space of the autoencoder. Moreover, to perform different editing tasks, they need to train different variants of their model. Instead, our formulation allows a single model trained for object completion to be applied to a variety of tasks. Furthermore, instead of using an RNN, we employ an autoregressive transformer that synthesizes objects as a sequence of cuboids, which are passed to our blending network to produce the final high-quality shape. Autoregressive Transformers for Content Creation Transformer-based architectures <cit.> have been extensively utilized for various autoregressive tasks ranging from machine translation <cit.> to image <cit.> and music <cit.> generation, as well as shape completion <cit.> and indoor scene synthesis <cit.>. Similar to ATISS <cit.>, which is an autoregressive transformer for scene synthesis, we pose object synthesis as an autoregressive prediction problem and generate objects as a sequence of cuboids. However, unlike ATISS, which is trained with teacher forcing, we train our model using scheduled sampling <cit.> and showcase that it improves the generation capabilities of our model. § METHOD PASTA is a part-aware generative model for synthesizing 3D shapes. Our model comprises two main components that are trained independently: An object generator that sequentially generates objects as unordered sequences of labelled parts, where each part is parametrized using a 3D cuboidal primitive (subsec:objects_representation), and a blending network that composes part sequences in a meaningful way and synthesizes high quality implicit shapes. The object generator is an autoregressive model trained to maximize the log-likelihood of all possible part arrangements in the dataset. We use part-level supervision in the form of part labels and 3D cuboids that define the size and the pose of each part (subsec:object_generator). The blending network is an occupancy network <cit.>, implemented with a transformer decoder that takes a sequence of cuboids and a query 3D point and predicts whether this point is inside or outside the surface boundaries (subsec:blending_network). To train the blending network, we assume explicit 3D supervision in the form of a watertight mesh. §.§ Object Parametrization We define each object O = {P, B} using a collection of N parts, P = {p_1, …, p_N}, and a 3D bounding box B that specifies the object's boundaries. Each part is represented with a 3D labelled cuboid and is parametrized using four values describing its label, size, translation and rotation, p_j = {c_j, s_j, t_j, o_j}. To model the label of each part, e.g., the back or the arm of a chair, we use a categorical distribution defined over the total number of part labels in the dataset, whereas for the rest of the attributes we use a mixture of logistic distributions <cit.>. In our setup, the rotation o_j ∈ ℝ^6 is the 6D representation <cit.> of the 3D cuboid that contains the part. Likewise, the translation t_j ∈ ℝ^3 is the center of the 3D cuboid containing the part and the size s_j ∈ ℝ^3 is its width, height and depth. Similarly, the bounding box B is defined using 12 parameters: three for its size along each axis, three for the translation, which is the center of the box, and six for the rotation, which is a 6D representation. Similar to ATISS <cit.>, we predict the components of p_j autoregressively, namely part label first, followed by translation, rotation and size.
Hence, the probability of generating the j-th part, conditioned on the previous parts and the box, becomes p_θ(p_j | p_<j, B) = p_θ^c(c_j | p_<j, B) p_θ^t(t_j | c_j, p_<j, B) p_θ^o(o_j | c_j, t_j, p_<j, B) p_θ^s(s_j | c_j, t_j, o_j, p_<j, B), where p_θ^c, p_θ^t, p_θ^o and p_θ^s are the probabilities for the respective attributes. Assuming a fixed ordering of the part attributes is reasonable, as we want our model to consider the part label before predicting its pose and size. To compute the likelihood of generating an object O, we estimate the likelihood of autoregressively generating its parts using any order, as <cit.> demonstrated that not having a fixed ordering can be beneficial. Hence, the likelihood of generating an object conditioned on a bounding box is p_θ(O | B) = ∑_P̂∈π(P)∏_p_j∈P̂ p_θ(p_j | p_<j, B), where π(·) is a permutation function that computes the set of permutations of all object parts and P̂ denotes an ordered part sequence. §.§ Object Generator The input to our object generator is a set of objects in the form of 3D labelled cuboids and their corresponding bounding boxes. We implement our generator using an autoregressive transformer architecture similar to ATISS. The transformer model takes as input a sequence of embedding vectors that represent the conditioning sequence and generates the features that will be used to predict the attributes of the next part. We map the per-part attributes to embedding vectors using a part encoder and the features to part attributes using a part decoder. Our model is illustrated in fig:object_generator. The part encoder network s_θ(·) takes the attributes for each part p_j = {c_j, s_j, t_j, o_j} and maps them to an embedding vector e_j, e_j = s_θ( [ λ(c_j); γ(s_j); γ(t_j); γ(o_j) ] ), where λ(·) is a learnable embedding, γ(·) is a positional encoding layer <cit.> that is applied separately on each attribute's dimension and [· ;·] denotes concatenation. To predict an embedding vector e_B for B, we pass its attributes to s_θ(·). Similar to ATISS <cit.>, we implement our transformer encoder τ_θ(·) as a multi-head attention transformer without positional encoding <cit.>, as we want to model objects as unordered sets of parts. Our transformer encoder takes as input {e_j}_j=1^N, the N embeddings for all parts in the sequence, e_B, the embedding for the bounding box, and a learnable embedding vector q, which is used to predict the feature vector q̂ that will be used to generate the next part in the sequence. More formally, q̂ = τ_θ(e_B, {e_j}_j=1^N, q). The last component of the object generator is the part decoder that takes as input the feature vector q̂ and autoregressively predicts the attributes of the next part to be generated. For the part label, we define a function c_θ(·), implemented using a linear projection layer, that takes q̂ and predicts the per-part label probability. We predict the size, translation and rotation in two stages. First, we cluster the values of each attribute from the training set into 20 clusters using K-Means. Subsequently, we predict a cluster for each attribute, which is then used to predict the specific values. More formally, for the translation, we learn t_θ^coarse(·) that predicts the per-cluster probability from q̂ using a linear projection layer and t_θ^fine(·) that predicts the 7 × K parameters that define the mixture of logistics distribution for the translation. Specifically, we have K parameters for the mixing coefficients and 6 × K for the means and variances.
In a similar manner, we define o_θ^coarse(·) and o_θ^fine(·) to predict the 13 × K parameters that define the mixture of logistics distribution for the rotation, and s_θ^coarse(·) and s_θ^fine(·) that predict the 7 × K parameters that define the mixture of logistics distribution for the size. To predict the part attributes in an autoregressive manner, we condition the prediction of each attribute on the values of the previously predicted ones. In practice, t_θ^coarse(·), t_θ^fine(·), o_θ^coarse(·), o_θ^fine(·), s_θ^coarse(·) and s_θ^fine(·) take as input q̂ concatenated with the previously predicted attributes embedded by λ(·) and γ(·) from (<ref>). §.§ Blending Network The input to the blending network is a sequence of labelled cuboids and a set of 3D query points, for which we want to predict their occupancy probabilities, namely whether they lie inside or outside the surface boundaries. In detail, our blending network consists of two main components: (i) a part encoder that maps the part attributes into an embedding vector, which is implemented as discussed in subsec:object_generator, and (ii) a transformer decoder without self-attention that takes the part embeddings and the query points and predicts whether they are inside or outside the surface boundary (see fig:blending_network). The transformer comprises only cross attention layers and MLPs; thus, each query 3D point attends to the per-part embeddings using cross attention, without attending to the other points. The transformer output is passed to a linear layer followed by a sigmoid non-linearity to get an occupancy probability for each query point. We follow common practice and, before passing the query points to the decoder, map them to a higher dimensional space using positional encoding <cit.>. §.§ Training and Inference Unlike prior autoregressive models <cit.> that are trained with teacher-forced embeddings, we train PASTA using scheduled sampling <cit.>. The key idea is that during training we feed our model with a mix of the teacher-forced embeddings and the actual model's predictions from the previous generation step. In particular, we choose an object from the dataset and apply the permutation function π(·) on its parts. Next, we randomly select the first M parts and pass them to the object generator, which is used to predict the next part. The newly generated part is appended to the initial sequence of M parts and passed once again to the object generator to predict the attribute distribution of the next part. Our model is trained to maximize the likelihood of the (M+2)-th part in the permuted sequence. A pictorial representation of our scheduled sampling is provided in fig:scheduled_sampling. We follow common practice and, during training, train with both scheduled sampling and teacher forcing. To train the object generator, we follow <cit.> and maximize the likelihood of generating all possible part sequences in the dataset in all possible permutations, as follows: ℒ(θ) = ∑_O∑_P̂∈π(P)∑_p_j∈P̂ log p_θ(p_j | p_<j, B), where the outer sum runs over all objects O = {P, B} in the dataset. For the mixture of logistic distributions, we use the discretized mixture of logistics loss as defined in <cit.>. To train the blending network, we assume 3D supervision in the form of a watertight mesh, which we use to generate a set of occupancy pairs {(x_i, o_i)}_i=1^V, namely a set of 3D points x_i and their occupancy labels o_i, denoting whether x_i lies inside or outside the object. We train the blending network using a classification loss between the predicted and the target occupancies.
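To summarize the training procedure of the object generator described above, the following is a schematic sketch of one scheduled-sampling training step; every interface it calls (predict_next_distribution, sample_part, nll, and the obj container) is a hypothetical placeholder standing in for the corresponding component and is not the authors' actual API.

```python
# Schematic sketch of one scheduled-sampling training step for the object
# generator. All interfaces (predict_next_distribution, sample_part, nll,
# obj.parts, obj.bbox) are hypothetical placeholders for the real components.
import random

def training_step(generator, obj, optimizer, p_teacher=0.5):
    parts = random.sample(obj.parts, len(obj.parts))   # a random permutation of the parts
    if random.random() < p_teacher or len(parts) < 2:
        # teacher forcing: condition on the first M ground-truth parts only
        M = random.randint(0, len(parts) - 1)
        context, target = parts[:M], parts[M]
    else:
        # scheduled sampling: condition on M ground-truth parts plus one part
        # sampled from the model, then predict the (M+2)-th ground-truth part
        M = random.randint(0, len(parts) - 2)
        dist = generator.predict_next_distribution(parts[:M], obj.bbox)
        context = parts[:M] + [generator.sample_part(dist)]
        target = parts[M + 1]
    dist = generator.predict_next_distribution(context, obj.bbox)
    loss = generator.nll(dist, target)                 # -log p_theta(target | context, B)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss
```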
Note that the blending network is trained using ground-truth part sequences. During inference, we start from a bounding box and autoregressively sample the attribute values from the predicted distributions for the next part to be generated. Once a new part is generated, it is used in the next generation step, until the end symbol is predicted. To indicate the end of the sequence, we augment the part labels with an additional category, which we refer to as the end symbol. Once a complete sequence of parts is generated, we pass it to the blending network, which combines the parts into a single implicit shape. §.§ Conditional Generation Here, we discuss how PASTA can be used to perform language- and image-guided generation. Instead of conditioning the generation only on the bounding box B, we now condition also on a textual description of the shape to be generated. In particular, we utilize the pre-trained CLIP <cit.> model to extract text embeddings and pass them to the transformer encoder as an additional input. Note that during training the pre-trained CLIP text encoder remains frozen, namely it is not optimized with the rest of our network. Once we train PASTA with text embeddings from CLIP, we can also use it, without any re-training, for image-guided generation. This is possible because the CLIP model has a joint latent space for text and images. While, in our experiments, we only demonstrate language- and image-guided generations, our model can be extended to other types of conditioning, such as depth maps or pointclouds, by utilizing an appropriate encoder that generates embeddings from the input. § EXPERIMENTAL EVALUATION In this section, we provide an extensive evaluation of our method, comparing it to relevant baselines. Additional results and implementation details are provided in the supplementary. Datasets We report results on three PartNet <cit.> categories: Chair, Table and Lamp, which contain 4489, 5705, and 1554 shapes, respectively. For the Chair category, there are 47 different types of parts, while for the Lamp and the Table we have 32 and 43, respectively. We train our model and our part-based baselines using the part annotations and train/test splits from PartNet. Moreover, for our model, we use the object bounding boxes specified in PartNet. Baselines In our evaluation, we include PQ-NET <cit.>, a generative model that generates 3D shapes using an RNN autoencoder, and ATISS <cit.>, which was originally introduced for scene synthesis but can be easily adapted to part-based object generation. In particular, instead of conditioning on a floor layout, we condition the generation on the object's bounding box, as for our model. Finally, we also compare with IM-Net <cit.>, which is an implicit-based generative model that does not reason about parts, hence not enabling part-level control. Metrics To evaluate the quality of the generated shapes, we report the Coverage Score (COV) and the Minimum Matching Distance (MMD) <cit.> using the Chamfer-L_2 distance (CD) between points sampled from real and generated shapes. To compute these metrics for our part-based representation, we sample points on the surface of the union of the generated cuboids. §.§ Shape Generation We evaluate the performance of our model on the shape generation task on chairs, tables and lamps and perform category-specific training for our model and our baselines.
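Before turning to the results, the sketch below spells out the evaluation protocol described above, namely the Chamfer-L_2 distance between sampled point sets and the MMD and COV scores computed from it; the exact point counts and normalization conventions used in the paper may differ from this illustration.

```python
# Minimal sketch of the evaluation metrics described above: Chamfer-L2 between
# two point sets, and MMD / COV between generated and reference shape sets.
# Point counts and normalization conventions in the paper may differ.
import numpy as np

def chamfer_l2(a, b):
    # a: (N, 3), b: (M, 3) points sampled on the two surfaces
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd_cov(generated, reference):
    # generated, reference: lists of (N, 3) point clouds
    dists = np.array([[chamfer_l2(g, r) for r in reference] for g in generated])
    mmd = dists.min(axis=0).mean()        # mean distance of each reference shape to its
                                          # closest generated shape (lower is better)
    matched = set(int(i) for i in dists.argmin(axis=1))
    cov = len(matched) / len(reference)   # fraction of reference shapes that are the
                                          # nearest neighbour of some generated shape
    return mmd, cov
```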
Conditioned on different bounding boxes, our model can successfully synthesize meaningful part arrangements (see 4th column in fig:shapenet_qualitative_comparison_chairs, fig:shapenet_qualitative_comparison_tables and fig:shapenet_qualitative_comparison_lamps), which are fed to our blending network that combines them and yields plausible 3D meshes (see 5th column in fig:shapenet_qualitative_comparison_chairs, fig:shapenet_qualitative_comparison_tables and fig:shapenet_qualitative_comparison_lamps). We observe that PASTA consistently generates diverse and realistic part arrangements (Ours-Parts) for all object categories, which are, in turn, converted into meshes (Ours) that faithfully capture the initial part representation with cuboids. On the contrary, ATISS struggles to synthesize meaningful part sequences: the synthesized part arrangements consist of parts placed in unnatural positions, especially for the case of chairs and tables. While synthesized objects sampled from PQ-NET and IM-NET are more realistic than the part arrangements produced by ATISS, they lack diversity, as indicated by the coverage score in tab:shape_generation_quantitative. Note that our model, even without the blending network (see Ours-Parts in tab:shape_generation_quantitative), outperforms both PQ-NET and ATISS on chairs and tables on both metrics, while our complete architecture (see Ours in tab:shape_generation_quantitative) also outperforms the non-part-based IM-NET on all object categories. For the case of lamps, we observe that our model outperforms all baselines in terms of MMD-CD, while performing on par with ATISS, which achieves the highest COV-CD score. We hypothesize that ATISS performs better on the lamps category, as lamps typically consist of fewer components, hence making training and inference easier. Next, we evaluate the ability of our model and our baselines to generate plausible 3D shapes when jointly trained on multiple object categories, without any class conditioning. Again, our model consistently produces plausible part arrangements that are more realistic and diverse than those of our baselines, as validated by our quantitative analysis in tab:shape_generation_quantitative (see 6th and 10th column). fig:shapenet_qualitative_comparison_all provides a qualitative comparison of six objects generated with our model and our baselines. §.§ Shape Completion Starting from an incomplete sequence of parts, we evaluate whether our model and our baselines can complete the input sequence in a meaningful way. For this experiment, we only consider our part-based baselines, and all models are trained in a category-specific manner. To ensure a fair comparison to PQ-NET, which is trained with 3D parts of arbitrary geometries, instead of conditioning its generation on the partial set of cuboids (illustrated in the 1st column of fig:shapenet_qualitative_completion_comparison_chairs, fig:shapenet_qualitative_completion_comparison_tables and fig:shapenet_qualitative_completion_comparison_lamps), which is used for PASTA and ATISS, we utilize the corresponding 3D parts that were used during PQ-NET's training. From our qualitative evaluation, we observe that both ATISS and PQ-NET tend to generate non-realistic part arrangements, especially for the case of chairs (see the unnatural back part for the 3rd and 6th chairs in the 3rd column of fig:shapenet_qualitative_completion_comparison_chairs) and tables (see the missing leg in the 2nd table in the 2nd column of fig:shapenet_qualitative_completion_comparison_tables).
For the easier case of lamps (see fig:shapenet_qualitative_completion_comparison_lamps), we observe that all methods can complete the partial object in a meaningful way. Note that, as PQ-NET relies on a sequence-to-sequence autoencoder to learn the part arrangements, there is no guarantee that the completed shape will contain the parts used for conditioning (see 3rd+4th row in fig:shapenet_qualitative_completion_comparison_chairs, 2nd row in fig:shapenet_qualitative_completion_comparison_tables). Moreover, while for PASTA the same model is used for both object completion and object generation, PQ-NET requires training a different model to perform the completion task. To demonstrate that PASTA generates diverse part arrangements, we also visualize three generated completions of our model conditioned on the same partial input (see fig:shapenet_diversity). We observe that our generations are consistently valid and diverse. The quantitative results for this experiment are summarized in tab:shape_completion_quantitative. We note that our model outperforms all baselines on both chairs and tables in terms of both metrics. For the case of lamps, ATISS outperforms all methods in terms of COV-CD, while being worse than all in terms of MMD-CD. §.§ Applications In this section, we present several applications of our model, such as conditional generation from text and images. In both experiments, our model is trained in a category-specific manner. Language-guided Generation For this experiment, we use the part labels provided in PartNet <cit.> and generate utterances that describe the part-based structure of each object. We train a variant of our model that conditions on CLIP <cit.> embeddings produced from our textual descriptions in addition to the object bounding box as described in subsec:inference. In fig:text_guided_generation, we provide text-guided generations of our model and note that they consistently match the input text (e.g., the table with the two pedestals, or the lamp connected to the ceiling). Image-guided Generation: We now test the ability of our model to perform image-guided generation using the same model that was trained for language-guided generation, without any re-training. In particular, we take advantage of CLIP's joint latent space, and condition our model on image embeddings produced by CLIP's image encoder. fig:image_guided_generation shows examples of image-guided synthesis. While PASTA was never trained with images, we showcase that it generates shapes of various object categories that faithfully match the input image. Notably, the recovered parts capture fine geometric details such as the three legs of the first table in fig:image_guided_generation. § CONCLUSION In this paper, we introduced PASTA, a part-aware generative model for 3D shapes. Our architecture consists of two main components: the object generator that autoregressively generates objects as sequences of labelled cuboids and the blending network that combines a sequence of cuboidal primitives and synthesizes a high-quality implicit shape. Unlike traditional autoregressive models that are trained with teacher forcing, we demonstrate that relying on scheduled sampling <cit.> improves the generation performance of our model. Our experiments showcase that PASTA generates more meaningful part arrangements and plausible 3D objects than both part-based <cit.> and non-part-based generative models <cit.>. In future work, we plan to extend our architecture to generate parts with textures. 
Note that this is a straightforward extension of our method: we can simply replace our blending network with a NeRF-based decoder <cit.> that, instead of predicting occupancies, predicts colors and opacities. Another exciting direction for future research is to explore learning such part-based autoregressive models without explicit part annotations. § SUPPLEMENTARY MATERIAL §.§ Abstract In this supplementary document, we provide a detailed overview of our network architecture and the training procedure. Subsequently, we describe the preprocessing steps that we followed to filter out problematic objects from the PartNet dataset <cit.>. Next, we discuss how scheduled sampling impacts the performance of our model on the shape synthesis task. Finally, we provide additional qualitative and quantitative results and analyze the limitations, future research directions, and potential negative impact of our work on society. § IMPLEMENTATION DETAILS In this section, we provide a detailed description of the several components of our network architecture (subsec:network). Next, we describe our training procedure (subsec:training) and the generation protocol (subsec:generation). Finally, we detail our metrics computation (subsec:metrics) and discuss our baselines (subsec:baselines). §.§ Network Architecture Here we describe the individual components of our network architecture and provide additional implementation details. Our model comprises two main components: the object generator that sequentially generates objects as unordered sequences of labelled parts, where each part is parametrized using a 3D cuboidal primitive, and a blending network that composes part sequences in a meaningful way and synthesizes high-quality implicit shapes. Object Generator We implement our object generator using an autoregressive transformer architecture similar to ATISS <cit.>. In particular, it comprises three main components: (i) the part encoder that takes the per-part attributes and maps them to an embedding vector, (ii) the transformer encoder that takes the embeddings for each part in the sequence, the embedding of the bounding box and a learnable query embedding and predicts a feature vector, and (iii) the part decoder that takes this feature vector and predicts the attributes of the next object to be added in the scene. The part encoder simply takes the attributes of each part p_j={_j, _j, _j, _j} and maps them to an embedding vector _j as follows: _j = s_θ( [ λ(_j); γ(_j); γ(_j); γ(_j) ] ). For the part label _j, we use a learnable embedding, denoted as λ(·), which is simply a matrix of size C × 64, where C is the total number of part labels in the dataset. The positional encoding layer, denoted as γ(·), that is applied on the remaining attributes can be expressed as follows: γ(p) = (sin(2^0π p), cos(2^0π p), …, sin(2^L-1π p), cos(2^L-1π p)) where p can be any of the size, translation or rotation. We follow <cit.> and set L=32. Once the attributes are embedded into a higher-dimensional space either using λ(·) or γ(·), we concatenate them into a 512-dimensional feature vector, which is then passed to another linear layer that generates the final embedding vector _j ∈ℝ^64. Similar to <cit.> we implement our transformer encoder as a multi-head attention transformer without any positional encoding. Our transformer consists of 4 layers with 8 heads. The queries, keys and values have 72 dimensions and the intermediate representations for the MLPs have 1024 dimensions. 
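For concreteness, the sinusoidal positional encoding γ(·) defined above can be sketched as follows. This is only an illustrative PyTorch re-implementation (with L=32 as stated), not the exact code of our part encoder.

import torch

def positional_encoding(p, L=32):
    # p: tensor of scalar attributes (e.g., size, translation or rotation values).
    # Returns (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p))
    # for each input scalar, i.e. a 2L-dimensional encoding.
    p = p.unsqueeze(-1)                                                  # (..., 1)
    freqs = 2.0 ** torch.arange(L, dtype=p.dtype, device=p.device) * torch.pi
    angles = p * freqs                                                   # (..., L)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)     # (..., 2L)

# Example: encoding a batch of 3D translations of shape (batch, 3)
# gamma_t = positional_encoding(translations)   # -> (batch, 3, 64)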
We implement the transformer architecture using the transformer library provided by Katharopoulos <cit.>[https://github.com/idiap/fast-transformershttps://github.com/idiap/fast-transformers]. The learnable embedding vector and the predicted feature vector have both 64 dimensions. The part decoder takes the feature vector as input and autoregressively predicts the attributes of the next part to be generated. The function c_θ(·) that is used for the part labels is a linear layer with 64 hidden dimensions that predicts C per-label probabilities. The functions, t_θ^coarse(·), t_θ^fine(·), o_θ^coarse(·), o_θ^fine(·), s_θ^coarse(·) and s_θ^fine(·) are implemented using a 2-layer MLP with RELU non-linearities with hidden size 128 and output size 64. Using the t_θ^fine(·), o_θ^fine(·) and s_θ^fine(·) we predict the mean, variance and mixing coefficients for the K logistic distributions for each attribute. In our experiments, we set K=10. Blending Network Our blending network consists of two main components: (i) a part-encoder that maps the part attributes into an embedding vector, which is implemented as discussed before and in Sec. 3.2 of our main submission and (ii) a transformer decoder without self-attention that takes the part embeddings and a set of query points and predicts their occupancy probabilities, namely whether they are inside or outside the surface boundary. We implement our transformer decoder as a multi-head cross-attention transformer without self-attention. Our transformer consists of 42 layers with 8 attention heads. The queries, keys and values have 72 dimensions and the intermediate representations for the MLPs have 1024 dimensions. To implement the transformer decoder architecture we use the transformer library provided by Katharopoulos <cit.>[https://github.com/idiap/fast-transformershttps://github.com/idiap/fast-transformers]. §.§ Training Protocol As already discussed in our main submission, we train the two components of our model independently. To train the autoregressive transformer encoder, we use the Adam optimizer <cit.> with learning rate η=10^-4 and weight decay 10^-3. For the other hyperparameters of Adam we use the PyTorch defaults: β_1=0.9, β_2 = 0.999 and ϵ= 10^-8. For both category-specific training as well as joint training experiments, we train the autoregressive transformer encoder with a batch size of 128 for 700k iterations. During training, we do not perform any type of rotation augmentation. To determine when to stop training, we follow <cit.> and evaluate the validation metric every 1000 iterations and use the model that performed best as our final model. Likewise, to train our blending network that is implemented using a transformer decoder we use the Adam optimizer <cit.> with learning rate η=10^-4 with no weight decay. For the other hyperparameters of Adam we use the PyTorch defaults: β_1=0.9, β_2 = 0.999 and ϵ= 10^-8. For the case of category-specific training, we train the transformer on each object category with a batch size of 32 for 150k iterations. For the case of the joint-training on multiple object categories, we train the blending network with a batch size of 32 for 300k iterations. To train the blending network, we need to generate occupancy pairs, namely points accompanied by a label indicating whether this point lies inside or outside the target mesh. 
To this end, we sample 180,000 points uniformly in a cube ranging from -1 to 1 centered at (0, 0, 0) plus 20,000 points from the surface for each mesh and compute which of these points lie inside or outside the mesh. During training, we sample 2048 occupancy pairs from an unbalanced distribution that, in expectation, results in an equal number of points with positive and negative labels. Note that we follow common practice and compute importance sampling weights in order to reweigh our loss and create an unbiased estimator of the loss with uniform sampling similar to <cit.>. All our experiments were conducted on a single NVIDIA 2080 Ti GPU, with 11GB of memory, and training of both components of our architecture takes approximately 2 days. §.§ Generation Protocol In this section, we discuss the sampling process for generating a novel part arrangement. When we perform generation from scratch, we condition our generation on a bounding box that specifies the object boundaries. Note that when we want to generate novel shapes from a model that was trained jointly on multiple object categories, we do not have to explicitly condition on a specific category. For the case of shape completion from an incomplete sequence of parts, we condition our generation on the cuboidal primitives as well as the bounding box that specifies the object boundaries. A similar strategy is adopted for the case of language- and image-guided generation. As soon as a sequence of cuboidal parts is produced, we pass it to the transformer decoder that composes the cuboids and synthesizes an implicit 3D shape of high quality. §.§ Metrics As mentioned in the main submission, to evaluate the plausibility and the diversity of the generated shapes using our model and our baselines, we report the Coverage Score (COV) and the Minimum Matching Distance (MMD) <cit.> using the Chamfer-L_2 distance between points sampled on the surface of the real and the generated shapes. In particular, MMD measures the quality of the generated shapes by computing how likely it is that a generated shape looks like a shape from the reference set of shapes. On the other hand, COV measures how many shape variations are covered by the generated shapes, by computing the percentage of reference shapes that are closest to at least one generated shape. Let us denote by G the set of generated shapes and by R the set of reference shapes from the test set. To estimate the similarity between two shapes from the two sets, we use the Chamfer Distance (CD), which is simply the distance between a set of points sampled on the surface of the reference and the generated mesh. Namely, given a set of N points sampled on the surface of the reference shape, X = {x_i}_i=1^N, and N points sampled on the surface of the generated shape, Y = {y_i}_i=1^N, the Chamfer Distance (CD) becomes CD(X, Y) = 1/N∑_x∈ Xmin_y∈ Y||x - y||_2^2 + 1/N∑_y∈ Ymin_x∈ X||y - x||_2^2. In all our evaluations, we set N=2048. Note that to compute the Chamfer distance between our generated part-based representations and real objects from the test set, we sample points on the surface of the union of the generated cuboids and compute their distance to points sampled from the reference shapes. The Minimum Matching Distance (MMD) is the average distance between each shape in the reference set R and its closest shape in the generated set G, and can be defined as: MMD(G, R) = 1/|R|∑_Y∈ Rmin_X∈ GCD(X, Y), where X and Y denote the point sets sampled on a generated and a reference shape, respectively. 
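To make the formulas above concrete, a minimal NumPy sketch of the Chamfer-distance, MMD and Coverage computations is given below. It is a naive O(N^2) reference implementation for illustration only, not the evaluation code we actually used.

import numpy as np

def chamfer_distance(X, Y):
    # X, Y: (N, 3) arrays of points sampled on the two shapes.
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** 2  # squared pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd(generated, reference):
    # generated, reference: lists of point sets; for each reference shape we take the
    # distance to its closest generated shape and average over the reference set.
    return np.mean([min(chamfer_distance(X, Y) for X in generated) for Y in reference])

def coverage(generated, reference):
    # Fraction of reference shapes that are the nearest neighbour of at least one generated shape.
    matched = {int(np.argmin([chamfer_distance(X, Y) for Y in reference])) for X in generated}
    return len(matched) / len(reference)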
Intuitively, MMD measures how likely it is that a generated shape is similar to a reference shape in terms of Chamfer Distance and is a metric of the plausibility of the generated shapes. Namely, a low MMD score indicates that the shapes in the generated set G faithfully represent the shapes in the reference set R. The Coverage score (COV) measures the percentage of shapes in the reference set that are the closest match to at least one shape from the generated set. In particular, for each shape in the generated set G, we assign its closest shape from the reference set R. In our measurement, we only consider shapes from R that are closest to at least one shape in G. Formally, COV is defined as COV(G, R) = |{argmin_Y∈ R CD(X, Y) | X∈ G}| / |R|. Intuitively, COV measures the diversity of the generated shapes in comparison to the reference set. In other words, a high Coverage indicates that most of the shapes in the reference set R are roughly represented by the set of generated shapes G. To ensure a fair comparison with our baselines, we generate 2000 shapes per object category for each baseline and compare them with 225, 1216, and 1477 test shapes from the lamps, chairs, and tables categories, respectively. Note that both the generated and the target shapes from the test set are scaled within the unit cube before the metrics computations. §.§ Baselines In this section, we provide additional details regarding our baselines. We compare our model with IM-NET <cit.>, PQ-NET <cit.> and ATISS <cit.>. For all our experiments, we retrain all baselines using the released code provided by the authors. For the case of ATISS, which is an autoregressive transformer originally introduced for indoor scene synthesis, we adapt the original code[https://github.com/nv-tlabs/ATISS] for the task of part-based object generation and train ATISS using the per-object part sequences from PartNet <cit.>. Note that IM-NET <cit.> is not directly comparable to our model as it does not consider any parts. However, we include it in our evaluation as a powerful implicit-based generative model for 3D shapes. IM-NET IM-NET <cit.> was among the first methods that proposed to implicitly represent 3D object geometries in the weights of a neural network. In particular, given an input feature representation and a 3D point, their model predicts whether the query 3D point lies inside or outside the object's surface boundaries. IM-NET can be combined with several generative frameworks such as Variational Autoencoders (VAEs) <cit.> or Generative Adversarial Networks (GANs) <cit.>. In our experiments, we consider the latter, referred to as IM-GAN in the original paper, which utilizes a latent GAN <cit.> directly trained on the latent feature space of a voxel-based autoencoder. Note that IM-GAN relies on two-stage training, namely, we first train the autoencoder and then train the GAN on the autoencoder's latent space. In our experiments, we train IM-NET using their TensorFlow implementation <cit.>[https://github.com/czq142857/IM-NET] with the default parameters until convergence, on the preprocessed data released by the authors. Training of both the autoencoder and the latent GAN took approximately 2 days on a single NVIDIA 2080 Ti GPU. PQ-NET In PQ-NET <cit.>, the authors introduce a generative model that synthesizes 3D shapes sequentially using a set of parts parametrized as volumetric Signed Distance Fields (SDFs). In detail, PQ-NET comprises three core components that need to be trained sequentially. 
First, the part autoencoder learns a mapping from the voxelized part-based representation of a 3D shape to a volumetric SDF. Next, a sequence-to-sequence autoencoder, implemented with two RNNs <cit.>, is employed, which takes as input a sequence of per-part features and maps them to a latent feature representation that describes the assembled 3D shape. This representation is then passed to a sequential decoder that predicts a sequence of meaningful parts. Lastly, to generate novel 3D shapes, a GAN <cit.> is trained on the latent space of the sequential autoencoder. We train PQ-NET using the provided PyTorch <cit.>[https://github.com/ChrisWu1997/PQ-NET] implementation with the default parameters until convergence. Note that in our experiments, we do not exclude shapes with more than 10 parts from our training data, as is done in the original paper. Instead, we consider chairs and tables with at most 50, and lamps with no more than 30 parts. Unlike for our model, considering shapes with a larger number of parts is not possible, because it results in excessive memory usage that prevents training PQ-NET's seq2seq module on a single GPU. Specifically, PQ-NET's part autoencoder requires 3 NVIDIA 2080 Ti GPUs, while the part seq2seq module and the GAN require 1 NVIDIA 2080 Ti GPU. The training of all three components took approximately 4-5 days. ATISS In ATISS <cit.>, the authors introduce an autoregressive transformer architecture for indoor scene synthesis. In our experiments, we repurpose the original PyTorch <cit.>[https://github.com/nv-tlabs/ATISS] implementation in order to be able to utilize it for the object generation task. In particular, each object is represented as a collection of labelled cuboidal primitives and we train ATISS to maximize the log-likelihood of all possible permutations of part arrangements in a collection of training data. We train ATISS using the default parameters for 1200 epochs on an NVIDIA 2080 Ti GPU for approximately 1-2 days, without any data augmentation. Unlike PQ-NET, to train ATISS, we do not filter out shapes with a larger number of parts. Specifically, we consider the same set of shapes used to train our model, namely chairs, tables and lamps with a maximum number of 144, 164, and 191 parts, respectively. §.§ Mesh Extraction To extract meshes from the predicted occupancy field, we employ the Marching Cubes algorithm <cit.>. In particular, we start from a voxel grid of 128^3 initial resolution for which we predict occupancy values. Next, we follow the process proposed in <cit.> and extract the approximate isosurface with Marching Cubes using the code provided by Mescheder <cit.>. Note that the same process is followed to extract meshes from the implicit representations learned both with PQ-NET <cit.> and IM-NET <cit.>. § DATA PROCESSING We use PartNet <cit.> as the dataset to evaluate our model and our baselines. We remove 123 objects in total across all categories due to their invalid part hierarchical structures (e.g., missing nodes). To train our blending network, we assume explicit 3D supervision in the form of a watertight mesh. To acquire this, we align ShapeNet <cit.> objects with PartNet objects using the scripts provided in the official PartNet repository[https://partnet.cs.stanford.edu/]. We then convert aligned objects into watertight meshes using the code provided by Stutz <cit.>[https://github.com/paschalidoud/mesh_fusion_simple]. 
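As a complement to the mesh-extraction step described above, the following sketch illustrates how an approximate isosurface could be extracted from a dense grid of predicted occupancies. It relies on scikit-image's Marching Cubes rather than the exact pipeline of the cited work, and occupancy_fn is a placeholder for a network that returns occupancy probabilities for query points.

import numpy as np
import trimesh
from skimage import measure

def extract_mesh(occupancy_fn, resolution=128, threshold=0.5):
    # Evaluate the occupancy function on a dense grid covering [-1, 1]^3.
    xs = np.linspace(-1.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)   # (R, R, R, 3)
    occ = occupancy_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

    # Run Marching Cubes at the chosen iso-level and rescale vertices back to [-1, 1].
    verts, faces, _, _ = measure.marching_cubes(occ, level=threshold)
    verts = verts / (resolution - 1) * 2.0 - 1.0
    return trimesh.Trimesh(vertices=verts, faces=faces)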
To train the variant of our model for language-guided generation, we remove samples whose descriptions are too long and cannot be handled by CLIP <cit.>. After this processing step, we have 4000 training samples for chairs, 5439 for tables, and 1424 for lamps. fig:shapenet_statistics visualizes the part sequences for all object categories. We note that chairs contain more samples with longer sequences, while lamps tend to have objects with fewer components, hence making them easier to generate. § IMPACT OF SCHEDULED SAMPLING In this section, we investigate how scheduled sampling affects the generation capabilities of our model. For this experiment, we perform category-specific training on the Chair category. In particular, we train two variants of our model, one with teacher forcing and one with scheduled sampling, until convergence. We compare the generation performance of the two model variants in tab:schedule_sampling_quantitative. We note that training our network only with teacher forcing significantly deteriorates performance and results in generations of lower quality. This is also validated by our qualitative comparison in fig:schedule_sampling_impact, where we visualize randomly generated chairs using both models. We observe that the variant of our model trained only with teacher forcing tends to produce part arrangements with parts placed in unnatural positions. On the contrary, when training our model with our proposed scheduled sampling strategy, we observe that our model consistently generates plausible part sequences. This is expected, as scheduled sampling allows training our model with imperfect data, which makes it more robust to imperfect generations. Namely, even if one of the generated parts is problematic, our network can produce plausible parts in the next generation steps. § ADDITIONAL EXPERIMENTAL RESULTS In this section, we provide additional information regarding our experiments on PartNet <cit.>. In particular, we consider three categories: Chair, Table and Lamp, which contain 4489, 5705, and 1554 shapes, respectively. For the Chair category, there are 47 different classes (e.g., back surface horizontal bar, arm holistic frame) in total, while for the Lamp and Table, we have 32 and 43 part categories, respectively. Note that unlike prior works such as PQ-NET <cit.> that only consider shapes that have fewer than 10 parts, we consider shapes with a significantly larger number of components. In particular, chairs can have up to 144 parts, tables up to 164 parts, and lamps up to 191 parts. For more details regarding our training data, we refer the reader to sec:data_processing. In the following, we provide additional qualitative results for all experiments discussed in our main submission. §.§ Shape Generation In this experiment, we investigate the ability of our model to generate plausible part-aware 3D geometries, conditioned on various bounding boxes that specify the object's boundaries. fig:shapenet_qualitative_comparison_chairs_supp shows seven randomly generated chairs using our model, ATISS <cit.>, PQ-NET <cit.> and IM-NET <cit.>. Note that for this experiment, we perform category-specific training, namely we train a different model for each object type. For our model, we visualize both the synthesized part arrangements (Ours-Parts) and the output of our blending network that composes the sequence of the generated cuboids into a single high-quality implicit shape. We observe that our synthesized part sequences are consistently meaningful. 
For the case of ATISS, we note that the generated cuboids are placed in unnatural positions, hence producing non-functional objects. We hypothesize that this is mainly due to the large number of parts that typically compose chairs. On the contrary, we observe that both PQ-NET and IM-NET produce plausible chairs. For the case of tables (see fig:shapenet_qualitative_comparison_tables_supp), we observe that our model consistently generates meaningful part sequences, whereas ATISS again produces part arrangements where parts are placed in unnatural positions. For PQ-NET, we observe that in some cases, it tends to generate non-functional tables with fewer legs (see 5th row in fig:shapenet_qualitative_comparison_tables_supp). Finally, we also provide seven lamps generated by our model and our baselines in fig:shapenet_qualitative_comparison_lamps_supp. For the case of lamps, both our model and ATISS seem to be able to generate realistic sequences of cuboids. For all the experiments in this section, as well as the results in Sec. 4.1 of our main submission, we condition the generation on randomly sampled bounding boxes from the test set. For all object categories, we observe that our blending network composes the generated part sequences in a meaningful way and synthesizes novel 3D shapes that respect the provided part-based structure. The output representation of our blending network is of higher quality than that of IM-NET <cit.>, which also generates implicit 3D shapes, as indicated by our quantitative evaluation from Table 1 in our main submission. §.§ Shape Completion Starting from a partial object, parameterized with a set of cuboidal parts, we want to evaluate whether our model and our baselines can generate plausible part arrangements. Since IM-NET cannot be used to complete 3D shapes from a partial sequence of parts, we exclude it from our evaluation. To measure the quality of the generated parts, in this experiment, we also report the classification accuracy of a classifier trained to discriminate real from synthetic objects. In particular, as we represent objects using a collection of cuboidal primitives, we implement our classifier using a transformer encoder <cit.> trained to discriminate real from generated 3D labelled cuboids. Furthermore, in our evaluation, we also report the FID score <cit.>. For the FID score computation, we generate the same amount of objects as in the test set and render them at 512×512 resolution using 5 random camera views. To evaluate the realism of the generated parts, we render objects using their part-based representation, as shown in fig:images_for_fid_computation, and we compare with corresponding part-based renderings from the ground-truth objects from the test set, rendered using the same camera distribution. We follow common practice and repeat the metric computation for FID 10 times and report the mean. The quantitative results for this experiment are summarized in tab:shape_completion_quantitative_supp. Note that, in terms of FID and classification accuracy, we compare our model only to ATISS, which also generates objects as a sequence of cuboidal primitives. To generate the partial objects, we randomly sample objects from the test set and generate the partial input by removing an arbitrary set of parts. As mentioned also in our main submission, to ensure a fair comparison to PQ-NET, for this task instead of conditioning on the partial set of cuboids, we utilize the corresponding 3D parts that were used during PQ-NET's training. 
The 1st column in fig:shapenet_qualitative_completion_comparison_chairs_supp, fig:shapenet_qualitative_completion_comparison_tables_supp and fig:shapenet_qualitative_completion_comparison_lamps_supp shows the partial input in the form of 3D cuboids that is used in the case of PASTA and ATISS. For PQ-NET, we utilize the actual 3D parts that correspond to each cuboid. From both the quantitative analysis in tab:shape_completion_quantitative_supp and the qualitative results in fig:shapenet_qualitative_completion_comparison_chairs_supp and fig:shapenet_qualitative_completion_comparison_tables_supp, we observe that our model completes the partial input chairs and tables in a more plausible way than both PQ-NET and ATISS. Note that, since PQ-NET employs a sequence-to-sequence autoencoder to generate part sequences, there is no guarantee that the original part sequence will also appear in the completed sequence. For the case of ATISS, we observe that it struggles to complete the partial input in a meaningful way. For example, the added parts are placed in unrealistic positions (see last row in fig:shapenet_qualitative_completion_comparison_chairs_supp and 3rd row in fig:shapenet_qualitative_completion_comparison_tables_supp). For the case of lamps, we note that all three models can successfully complete the partial sequence. §.§ Size-guided Generation Now we examine whether our model can generate objects of different sizes. Note that, as we condition the generation of parts on a bounding box that defines the object boundaries, our model can generate shapes of arbitrary sizes. In this experiment, we generate several bounding boxes with different size parameters and demonstrate the ability of our model to generate short and tall lamps (see 1st and 3rd lamp in 3rd row in fig:shapenet_generations_variable_sizes, respectively), or smaller and bigger tables (see 1st and 2nd tables in 2nd row in fig:shapenet_generations_variable_sizes). We believe that this is an important application of our model that allows users to precisely specify the size of the generated object. For all experiments presented in fig:shapenet_generations_variable_sizes, we perform category-specific training per object type. §.§ Language-guided Generation Starting from a text prompt and a bounding box that defines the object's boundaries, we want to examine the ability of our model to generate plausible part arrangements that match the input text descriptions. To this end, we utilize the part labels provided in PartNet <cit.> and generate text descriptions that describe the part-based structure for each object. Some examples of the produced text descriptions for various object categories are summarized below: * A chair with four leg, one bar stretcher, three runners, one seat single surface, one arm horizontal bar, two arm near vertical bars, two arm horizontal bars, two arm near vertical bars, one arm horizontal bar, and one back single surface. * A chair with four leg with two runners, one seat single surface, and one back single surface. * A table with one drawer front, one handle, one drawer front, one handle, two vertical side panels, one bottom panel, four leg, one back panel, one vertical front panel, and one board. * A table with one central support, one pedestal, one tabletop connector, one other, one board, and one tabletop frame. * A lamp with one lamp shade, one light bulb, one other, one chain, and one lamp base part. 
* A lamp with one lamp arm straight bar, one lamp shade, one light bulb, one other, one lamp base part, and one lamp body. Using these text descriptions, we utilize a pre-trained CLIP <cit.>[https://github.com/OpenAI/CLIP] text encoder to extract embeddings for the shape's textual descriptions. During training, we condition our generation both on the CLIP text embeddings and the embedding produced from the bounding box containing the object. Note that, during training, the CLIP text encoder is not optimized with the rest of our architecture. While our model was not trained with free-text descriptions, we showcase that by exploiting CLIP's powerful latent space our model can generate plausible part arrangements and 3D objects that match the input text prompt (see fig:text_guided_generation_supp). To be able to control the shape of the generated object, e.g., to generate a small chair or a narrow table, it simply suffices to condition the generation on a bounding box that fits these criteria. For now, we do this manually, namely we generate a bounding box that fits the text input. §.§ Image-guided Generation For this task, we utilize the variant of our model that was trained for language-guided generation without any re-training. CLIP <cit.> learns a common latent space between images and sentences that describe them. Therefore, we take advantage of CLIP's joint latent space and use the variant of our model trained for language-guided generation to synthesize plausible 3D shapes from images, simply by replacing CLIP's text encoder with the corresponding CLIP image encoder. While our model was only trained with text embeddings, we showcase that it can successfully generate part sequences that match the input image (see fig:image_guided_generation_supp). Note that the recovered parts capture fine geometric details such as the circular base of the 2nd chair in the second row of fig:image_guided_generation_supp, and our model is able to generate realistic shapes conditioned on images with and without backgrounds. §.§ Generating Part Variations Finally, we also demonstrate that our model can produce plausible part variations for a specific part selected by the user. For example, in fig:part_variation, we select the back of the chair, highlighted in red, and show that our model can generate variations of the selected part with different sizes. § DISCUSSION AND LIMITATIONS In this work, we devise PASTA, a part-aware generative model for 3D shapes. Our architecture consists of two main components: the object generator that autoregressively generates objects as sequences of labelled cuboids and the blending network that composes a sequence of cuboidal primitives and synthesizes a high-quality implicit shape. We train the object generator to maximize the log-likelihood of all part arrangements in the dataset. Unlike prior part-based works <cit.>, our model is simpler to train and only requires part annotations in the form of cuboids and not the actual parts, which are typically harder to acquire. Moreover, our blending network, which is implemented with a transformer decoder, generates 3D shapes of high fidelity from the sequence of the generated cuboids. The supervision for training our blending network comes in the form of watertight meshes. From our experimental evaluation, it becomes evident that our model outperforms existing part-based and non-part-based methods on both the shape generation and completion tasks. 
Furthermore, we showcase several applications of our model, such as language- and image-guided shape generation. Although we believe that our model is an important step towards automating 3D content creation, it has several limitations. Firstly, our model requires part supervision, which is difficult to acquire, thus hindering the application of our model to other data. Recent works <cit.> proposed generative models that can be trained without part-level annotations, but as they are not autoregressive, they cannot be used for several completion tasks. We believe that an exciting direction for future research is to explore whether we can learn an autoregressive generative model of parts without explicit part-level supervision. Note that this task is not trivial, since training autoregressive models with teacher forcing or scheduled sampling requires part annotations. Furthermore, while our model can generate plausible 3D geometries, in this work, we do not consider the object's appearance. We believe that another interesting direction for future research would be to explore learning a generative model of parts with textures. This would unlock more editing operations on both the object's geometry and appearance. In our current setup, this can be easily done by replacing our blending network with a NeRF-based decoder <cit.> that, instead of only predicting occupancies, predicts colors and opacity values for the set of query points. § POTENTIAL NEGATIVE IMPACT ON SOCIETY Our proposed model enables generating part-aware 3D shapes as well as several editing operations such as size-, image- and language-guided generation. While we see our work as an important step towards automatic content creation and enabling a multitude of editing functionalities, it can also lead to negative consequences when applied to sensitive data, such as human bodies. Therefore, we believe it is imperative to always check the license of any publicly available 3D model. In addition, we see the development of techniques for identifying real from synthetic data as an essential research direction that could potentially prevent deep fakes. While throughout this work we have only worked with publicly available datasets, we recommend that future users who train our model on new data remove biases from the training data in order to ensure that our model can fairly capture the diversity in terms of shapes and sizes.
http://arxiv.org/abs/2407.13265v1
20240718081608
Capillary lubrication of a spherical particle near a fluid interface
[ "Aditya Jha", "Yacine Amarouchene", "Thomas Salez" ]
cond-mat.soft
[ "cond-mat.soft", "physics.class-ph", "physics.flu-dyn" ]
Univ. Bordeaux, CNRS, LOMA, UMR 5798, F-33405 Talence, France. Univ. Bordeaux, CNRS, LOMA, UMR 5798, F-33405 Talence, France. thomas.salez@cnrs.fr Univ. Bordeaux, CNRS, LOMA, UMR 5798, F-33405 Talence, France. § ABSTRACT The lubricated motion of an object near a deformable boundary presents striking subtleties arising from the coupling between the elasticity of the boundary and lubricated flow, including but not limited to the emergence of a lift force acting on the object despite the zero Reynolds number. In this study, we characterize the hydrodynamic forces and torques felt by a sphere translating in close proximity to a fluid interface, separating the viscous medium of the sphere's motion from an infinitely-more-viscous medium. We employ lubrication theory and perform a perturbation analysis in capillary compliance. The dominant response of the interface owing to surface tension results in a long-ranged interface deformation, which leads to a modification of the forces and torques with respect to the rigid reference case, that we characterise in details with scaling arguments and numerical integrations. Capillary lubrication of a spherical particle near a fluid interface Thomas Salez July 22, 2024 ==================================================================== § INTRODUCTION The dynamics of objects moving in viscous fluids has been studied both theoretically and experimentally for a long time <cit.>. Confining the viscous flow between an object and a rigid surface modifies the forces felt by the object <cit.>. Such a modification is involved in vastly different phenomena ranging from the mechanics of joints <cit.>, to the movement of cells in capillaries <cit.>, and the dynamics of suspensions <cit.>. Recent research has provided evidence of boundary elasticity further modifying the lubricated dynamics of an object <cit.>. Further standardization of the measurement process has led to the design of contactless probes for rheology <cit.>. The coupling of boundary elasticity and lubrication flow, collectively termed as soft lubrication, predicts the emergence of lift forces exerted on particles translating parallel to soft boundaries <cit.>. Such lift forces are associated with the symmetry breaking arising out of the deformability. Since the latter is crucial to the generated force, the nature of the bounding wall has been further explored by examining the influence of slip <cit.>, and viscoelasticity <cit.>. A reversal of the nature of the lift force from repulsive to attractive has also been predicted for viscoelastic settings <cit.>. Other studies have explored the complex modifications induced by including inertial effects <cit.> and compressibility <cit.>. On the experimental front, dedicated research has verified the presence of these lift forces on various substrates <cit.>. In biology where cells and tissues are extremely soft, and/or at small scales in soft matter, the interfacial capillary stress at the boundary dominates over bulk elasticity. By employing a classical Stokeslet-like response of the flow near a fluid interface, it has been shown that a rectified flow may be generated owing to the tension of the boundary <cit.>. On the other hand, finite-size effects were addressed <cit.> in the regime of a large gap between the object and the fluid interface, predicting counter-intuitive behaviors unique to capillarity. 
The results of these studies have been useful in analyzing the movement of microorganisms near a fluid interface <cit.>, as well as the formation of floating biofilms <cit.>. Recently, capillary-lubrication studies <cit.> have characterized the dynamics of an infinite cylinder near a fluid interface, highlighting the influence of the viscosity contrast and thickness ratio between the two fluid layers, and leading to large variability in the forces generated as opposed to elastic interfaces. While previous research has highlighted the importance of understanding lubricated motion near a fluid interface, the characterization of the dynamics of a particle moving with multiple degrees of freedom near a fluid interface remains to be explored. In this article, we explore in detail the translational motion of a sphere moving in close proximity to an infinitely viscous but deformable sublayer. In the small-deformation limit, we calculate the forces and torques generated on the sphere during the motion. Due to a symmetry between translational and rotational motions in soft lubrication <cit.>, our work immediately generalizes to the case where rotation would be added. The remainder of the article is organized as follows. We start by describing the capillary-lubrication framework, before presenting the theoretical methodology for obtaining the different fields using perturbation analysis at small deformations of the fluid interface. We then discuss the implications of the interfacial deformation, and the competition between gravity and capillarity, on the forces and torques generated on the particle. § CAPILLARY-LUBRICATION THEORY We consider a sphere of radius a translating with a prescribed time-dependent horizontal velocity u⃗=u(t)e⃗_x near a fluid interface, as shown in Fig. <ref>. The interface is characterized by its surface tension σ, and separates two incompressible Newtonian viscous liquids with dynamic shear viscosities η and η_sl, as well as densities ρ and ρ_sl (with ρ<ρ_sl). The acceleration of gravity is denoted by g. The gap profile between the sphere and the undeformed fluid interface is denoted by h(r,t), which depends on the horizontal radial coordinate r and time t. The x-direction oriented along e⃗_x corresponds to the horizontal angular coordinate θ=0. The minimum gap between the bottom of the sphere surface and the undeformed interface is denoted by d(t), which depends upon time, hence its temporal derivative ḋ(t) denotes the vertical velocity of the sphere. We focus on the case where the bottom layer is extremely viscous compared to the top layer, i.e. η_sl≫η, and is infinitely thick compared to the gap between the sphere and the undeformed interface. These conditions allow us to assume that the velocity at the fluid interface vanishes because the shear stress from the top layer is insufficient to generate a flow in the lower layer. We neglect fluid inertia and assume that the typical gap between the sphere and the interface is much smaller than the typical horizontal length scale, allowing us to invoke lubrication theory <cit.>. Introducing the excess pressure field p, in the top layer, with respect to the hydrostatic contribution, and the horizontal velocity field v⃗ of the fluid in the gap, the incompressible Stokes equations thus read at leading lubrication order: ∂ p/∂ z = 0 , ∇ p = η∂^2 v⃗/∂ z^2 , where ∇ denotes the gradient in the horizontal plane and where z is the vertical coordinate. 
In the limit of a small gap, the thickness profile of the fluid between the sphere and the undeformed fluid interface can be approximated by its parabolic expansion, leading to: h(r,t) ≃ d(t)+r^2/2 a . The no-slip boundary condition is imposed at both the sphere's surface and the fluid interface. Hence, in the frame of the moving sphere, at z = h, one has: v⃗ = 0⃗ , and at the interface, i.e. z = δ, one has: v⃗ = -u⃗ =-ue⃗_x . Given the boundary conditions above, we can easily write the horizontal velocity profile in the gap between the sphere and the interface, as: v⃗ = ∇⃗ p/2η(z-h)(z-δ)+u⃗ z-h/h-δ . The conservation of the fluid volume in the gap allows for the derivation of the Reynolds thin-film equation for the system, which reads: ∂/∂ t(h-δ) = ∇⃗·[∇ p/12η(h-δ)^3+u⃗/2(h-δ)] . The deflection of the fluid interface is controlled by the Laplace pressure jump. Thus, at z = δ, one has: p ≃σ∇^2δ+gδ(ρ-ρ_sl) , where we have assumed a small deformation of the interface and linearized the curvature. Let us now non-dimensionalize the equations through: d(t) = d^*D(T) , h(r,t) = d^*H(R,T) , r⃗ =lR⃗ , z =d^*Z , t = l/cT , v⃗(r⃗,z,t) =cV⃗(R⃗,Z,T) , u⃗(t) =cU(T)e⃗_x , p(r⃗,t) =η c l/d^*^2P(R⃗,T) , δ(r⃗,t) = d^*Δ(R⃗,T) , where c and d^* represent characteristic in-plane velocity and gap-thickness scales, respectively, with l = √(2ad^*) denoting the characteristic hydrodynamic radius. In dimensionless terms, the undeformed thickness profile and Reynolds equation become: H(R,T) ≃ D(T)+R^2 , and: ∂/∂ T(H-Δ) = ∇⃗·[∇ P/12(H-Δ)^3+U⃗/2(H-Δ)] . Besides, the deflection field Δ is related to the excess pressure field P by the dimensionless version of the Laplace equation, which reads: ∇^2Δ-BoΔ=κ P , where Bo = (l/l_c)^2 denotes the Bond number, l_c = √(σ/[g(ρ_sl-ρ)]) denotes the capillary length, and where we have introduced the capillary compliance of the interface, denoted by: κ = η cl^3/σd^*^3 . § PERTURBATION ANALYSIS As in previous studies regarding soft lubrication <cit.>, we perform an asymptotic expansion of the unknown fields at first order in dimensionless capillary compliance κ, as: Δ ≃κΔ_1+O(κ^2) , P ≃ P_0+κ P_1+O(κ^2) , where κ^iΔ_i is the i-th order contribution to the deflection of the interface, and κ^iP_i is the i-th order contribution to the excess pressure field. §.§ Zeroth-order pressure At zeroth order in κ, Eq. (<ref>) reduces to: Ḋ = ∇.(∇⃗ P_0/12H^3+U⃗H/2) . This equation is identical to the one for a perfectly rigid, flat and no-slip boundary. Since the zeroth-order pressure field results from linear terms in velocity, we decompose it azimuthally, as: P_0(R⃗,T) = P_00(R,T)+P_01(R,T)cosθ , with R and θ the horizontal polar coordinates of R⃗. Assuming a vanishing pressure field at large R, and that it must remain finite and single-valued at R=0, the equations can be solved to give the zeroth-order components of the excess pressure field, as <cit.>: P_00 = -3Ḋ/2(D+R^2)^2 , P_01 = 6UR/5(D+R^2)^2 . §.§ Interface deflection We now decompose the first-order deflection of the interface azimuthally, as: Δ_1(R⃗,T) = Δ_10(R,T)+Δ_11(R,T)cosθ . Writing Eq. (<ref>) at first order in κ, one has: 1/R∂/∂ R(R∂Δ_10/∂ R)-BoΔ_10=P_00 , and: 1/R∂/∂ R(R∂Δ_11/∂ R)-Δ_11/R^2-BoΔ_11=P_01 , with the boundary conditions Δ_1i→0 as R→∞, and finite and single-valued Δ_1i at R=0. 
The solutions of the above equations can be written in the most general form as: Δ_10(R) = -I_0(√(Bo)R)∫_R^∞ K_0(√(Bo)ξ)ξ P_00(ξ)dξ-K_0(√(Bo)R)∫_0^R I_0(√(Bo)ξ)ξ P_00(ξ)dξ , Δ_11(R) = -I_1(√(Bo)R)∫_R^∞ K_1(√(Bo)ξ)ξ P_01(ξ)dξ-K_1(√(Bo)R)∫_0^R I_1(√(Bo)ξ)ξ P_01(ξ)dξ , where I_j and K_j denote the jth-order modified Bessel functions of the first and second kinds, respectively. To understand the parametric influence of the Bond number Bo, we explore the behaviors of the interface deflection for both small and large Bo values. For vanishing Bo, the anisotropic deflection component Δ_11 reaches a limiting behavior given by the following expression: Δ_11≃ -3U/10ln(1+R^2/D)/R . In contrast, the isotropic deflection component Δ_10 does not show any limiting behavior at vanishing Bo, and a reduction in Bo leads to an unbounded increase in Δ_10. To understand the behavior of Δ_10 as Bo→0, we take an asymptotic approach previously used in problems relating to capillary deformations <cit.>. The vanishing of Bo allows for a scale separation in the radial coordinate R into: i) an inner region controlled by surface tension; and ii) an outer region, where gravity is present. In the inner region (R≪1/√(Bo)), the interface deformation denoted by Δ_10^i satisfies: 1/R∂/∂ R(R∂Δ_10^i/∂ R)=P_00 . The general solution of the above equation reads: Δ_10^i = 𝒜-3Ḋ/8Dln(D+R^2) , where 𝒜 is an integration constant. The far-field behavior of the inner solution reads: Δ_10^i∼𝒜-3Ḋ/4Dln(R)  . In the outer region (R ≫ 1/√(Bo)), gravity matters but there is no lubrication pressure, which leads to the following governing equation for the interface deformation denoted by Δ_10^o: 1/R∂/∂ R(R∂Δ_10^o/∂ R)-BoΔ_10^o=0 . The latter equation is solved, along with the condition that the deflection vanishes at infinity, leading to: Δ_10^o = ℬK_0(√(Bo)R), where ℬ is an integration constant. The near-field behavior of the outer solution reads: Δ_10^o∼ -ℬ(γ-ln 2+lnBo/2+ln R) , where γ is Euler's constant. Matching Eq. (<ref>) with Eq. (<ref>) leads to: 𝒜 = -3Ḋ/4D(γ-ln 2+lnBo/2) , ℬ = 3Ḋ/4D . Substituting these constants in Eqs. (<ref>) and (<ref>), we find the matched asymptotes of the interface deflection at small Bo. The interface deflection can then be approximated by the matched crossover expression: Δ_10|_Bo→0≈3Ḋ/4D[K_0(√(Bo)R)+1/2ln(R^2/D+R^2)] . The interface deflection is shown in Fig. <ref> at a fixed low value of Bo. The crossover expression at small Bo described above matches the exact one calculated using Eq. (<ref>), with improving precision as Bo reduces. Figure <ref> also shows the inner and outer solutions, which diverge in the far and near fields, respectively. The other interesting limit arises when Bo→∞, leading to the curvature-related terms in Eq. (<ref>-<ref>) to drop out, giving us: Δ_10 = -P_00/Bo = 3Ḋ/2Bo(D+R^2)^2 , Δ_11 = -P_01/Bo = -6UR/5Bo(D+R^2)^2 . As a consequence, the deflection is directly proportional to the pressure applied with κ/Bo = η c l/[gd^*^3(ρ_sl-ρ)] as an effective compliance. From the latter, we see that in the limit of large Bo the surface tension no longer controls the deformation. This large-Bo response is akin to the Winkler response for thin compressible elastic materials, which has been studied previously <cit.>. Apart from these two extremes, further exploration on the influence of Bo can be done using Eqs. (<ref>) and (<ref>). Figure <ref> shows the isotropic and anisotropic profiles of the amplitude of the first-order interface deflection, for different values of Bo. 
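As an aside, the general solutions above are straightforward to evaluate numerically. The sketch below illustrates one possible way to compute the anisotropic component Δ_11(R) with SciPy, using the zeroth-order pressure P_01 given previously; it is only an illustrative quadrature, not the scheme used to produce the figures.

import numpy as np
from scipy.integrate import quad
from scipy.special import iv, kv

def delta_11(R, Bo, D, U=1.0):
    # Zeroth-order pressure component P_01 (cos(theta) harmonic).
    P01 = lambda x: 6.0 * U * x / (5.0 * (D + x**2) ** 2)
    s = np.sqrt(Bo)
    # Delta_11(R) = -I_1(sqrt(Bo) R) * int_R^inf K_1(sqrt(Bo) xi) xi P_01(xi) dxi
    #               -K_1(sqrt(Bo) R) * int_0^R   I_1(sqrt(Bo) xi) xi P_01(xi) dxi
    outer, _ = quad(lambda x: kv(1, s * x) * x * P01(x), R, np.inf)
    inner, _ = quad(lambda x: iv(1, s * x) * x * P01(x), 0.0, R)
    return -iv(1, s * R) * outer - kv(1, s * R) * inner

# Example: deflection profile at Bo = 0.1 and D = 1 (arbitrary illustrative values)
# profile = [delta_11(r, Bo=0.1, D=1.0) for r in np.linspace(0.01, 10.0, 50)]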
We see that capillarity leads to a long-ranged interface deflection whose range and magnitude both decrease with increasing Bo. §.§ First-order pressure At first order in κ, the pressure field involves in particular the squared velocity, and can thus be decomposed as: P_1(R⃗,T,Bo,D) = P_10(R,T,Bo,D)+P_11(R,T,Bo,D)cosθ+P_12(R,T,Bo,D)cos2θ . The governing equations for the components P_1i of the first-order magnitude P_1 of the excess pressure field can be derived by considering Eq. (<ref>) at first order in κ. Since P_12 is not needed to compute the forces and torques, we restrict ourselves to the two following equations: 1/R∂/∂ R(RH^3∂ P_10/∂ R) =1/R∂/∂ R[3RH^2(Δ_10∂ P_00/∂ R+Δ_11/2∂ P_01/∂ R)]+3U∂Δ_11/∂ R+3UΔ_11/R-12Δ_10/∂ T , 1/R∂/∂ R(RH^3∂ P_11/∂ R)-H^3/R^2P_11 =1/R∂/∂ R[3RH^2(Δ_10∂ P_01/∂ R+Δ_11∂ P_00/∂ R)]-3H^2Δ_10P_01/R^2+6U∂Δ_10/R-12Δ_11/∂ T . Using the linearity of the equations above, the components of the pressure field can be expressed as follows: P_10 = U^2/D^2ϕ_U^2+Ḋ^2/D^3ϕ_Ḋ^2+D̈/D^2ϕ_D̈ , P_11 = U̇/Dϕ_U̇+UḊ/D^2ϕ_UḊ , where the ϕ_k represent the auxiliary functions for the corresponding second-order forcing parameters k, such as U^2 etc. These functions can then be evaluated by numerically solving Eqs. (<ref>-<ref>) together with the above boundary conditions. The results are shown in Figs. <ref> and <ref>. For all components, as Bo increases, the magnitudes of the auxiliary functions decrease due to the decreasing interface deflection. Interestingly, even though the deflection studied above is long-ranged, the pressure field decays quite rapidly. § CAPILLARY-LUBRICATION FORCES AND TORQUES Since the zeroth-order forces and torques acting on the sphere are identical to the known ones for the motion near a rigid, flat and no-slip boundary, we focus here on the first-order forces and torques acting on the sphere, and resulting from the interface deflection. These can be calculated from the stress tensor Σ, that reads in the lubrication approximation: Σ≃ -p𝐈+ηe⃗_z∂_zv⃗, where 𝐈 denotes the identity tensor. In dimensional units, the first-order vertical force acting on the sphere can be evaluated as: f_z,1≃2^5/2πκη ca^3/2/d^*^1/2∫_0^∞ P_10(R) R dR , which can be decomposed using the auxiliary functions calculated before, as: f_z,1≃α_U^2η^2 u^2 a^3/σ d^2-α_Ḋ^2η^2 ḋ^2 a^4/σ d^3+α_D̈η^2 d̈ a^4/σ d^3 , where the α_k (with i indicating here the forcing source, such as U^2) are the prefactors of the respective scalings. These prefactors are plotted in Fig. <ref> as functions of Bo. An important difference arises between the various prefactors at small Bo. Indeed the prefactor α_U^2 reaches a plateau, whereas α_D̈ and α_Ḋ^2 increase logarithmically with decreasing Bo. These asymptotic behaviors at small Bo can be calculated using Lorentz' reciprocal theorem <cit.>, by invoking as well Eqs. (<ref>) and (<ref>), as detailed in Appendix and summarised in Tab. <ref>. They are in agreement with the numerical solutions, as shown in Fig. <ref>. Besides, as Bo increases, all the prefactors decrease, and this decrease becomes inversely proportional to Bo in the large-Bo limit, highlighting the transition to the Winkler-like regime. The corresponding asymptotic expressions have been calculated previously <cit.>, are summarised in Tab. <ref>, and are in agreement with the numerical solutions, as shown in Fig. <ref>. 
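To illustrate how the scaling decomposition of the first-order vertical force would be used in practice, a small sketch is given below. The numerical prefactors must be supplied from the Bo-dependent curves or the asymptotic expressions discussed above; the values passed in are placeholders and do not come from this work.

def vertical_force_first_order(eta, sigma, a, d, u, d_dot, d_ddot,
                               alpha_U2, alpha_Ddot2, alpha_Dddot):
    # First-order capillary-lubrication vertical force,
    # f_z1 ~  alpha_U2    * eta^2 u^2     a^3 / (sigma d^2)
    #        - alpha_Ddot2 * eta^2 d_dot^2 a^4 / (sigma d^3)
    #        + alpha_Dddot * eta^2 d_ddot  a^4 / (sigma d^3).
    return (alpha_U2 * eta**2 * u**2 * a**3 / (sigma * d**2)
            - alpha_Ddot2 * eta**2 * d_dot**2 * a**4 / (sigma * d**3)
            + alpha_Dddot * eta**2 * d_ddot * a**4 / (sigma * d**3))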
Similarly, the horizontal force acting on the sphere is given by the expression <cit.>: f_x,1≃ 2πη c aκ∫_0^∞[-2RP_11-H/2(∂_R P_11+P_11/R)+Δ_11/2∂_R P_00+Δ_10/2(∂_R P_01+P_01/R)-2UΔ_10/H^2]RdR , which can be decomposed into: f_x,1≃ -β_UḊη^2 u ḋa^3/σ d^2+β_U̇η^2 u̇a^3/σ d , where the β_k (with i indicating here the forcing source, such as U̇) are the prefactors of the respective scalings. These prefactors are plotted in Fig. <ref> as functions of Bo. An important difference arises once again between the two prefactors at small Bo. Indeed the prefactor β_U̇ reaches a plateau, whereas β_UḊ increases logarithmically with decreasing Bo. These asymptotic behaviors at small Bo can be once again calculated using Lorentz' reciprocal theorem, as detailed in Appendix and summarised in Tab. <ref>. They are in agreement with the numerical solutions, as shown in Fig. <ref>. Besides, as Bo increases, all the prefactors decrease, and this decrease becomes inversely proportional to Bo in the large-Bo limit, highlighting once again the transition to the Winkler-like regime. The corresponding asymptotic expressions have been calculated previously <cit.>, are summarised in Tab. <ref>, and are in agreement with the numerical solutions, as shown in Fig. <ref>. As shown in previous studies <cit.>, the contributions to the first-order torque felt by the sphere along the y-axis have the same numerical prefactors as those for the first-order horizontal force, with the inclusion of a supplementary length-scale factor a. Hence, the first-order torque exerted on the sphere is given by: τ_y,1≃β_UḊη^2 u ḋa^4/σ d^2-β_U̇η^2 u̇a^4/σ d . We conclude this whole section by an important remark. The crossover from the capillary-dominated to the Winkler-like regime occurs at Bo∼ 1. This occurs when the hydrodynamic radius is comparable to the capillary length. Since typical capillary lengths are on the order of ∼1 mm, and given the lubrication conditions, such a crossover can only be felt with spheres of millimetric radii and above. § CONCLUSION Using capillary-lubrication theory, scaling arguments and numerical integrations, we explored the asymptotic forces and torques generated on a sphere in translational motion within a viscous fluid, in close proximity to a deformable capillary interface with another, infinitely viscous fluid on the other side. Due to a symmetry between translational and rotational motions in soft lubrication <cit.>, our work immediately generalizes to the case where rotation would be added. Specifically, by employing a perturbation analysis in the limit of small deformation of the fluid interface, we calculated the pressure fields decomposed into their various contributions from different degrees of freedom of the sphere. We investigated in particular the effects of gravity, which not only changes the scaling laws of the forces and torques, but also show a Winkler-like elastic response at large Bond numbers. Altogether, our results allow to quantify and possibly control soft-lubricated colloidal mobility near tensile interfaces – which are ubiquitous in soft matter and biological physics. The authors thank Vincent Bertin for interesting discussions. They acknowledge financial support from the European Union through the European Research Council under EMetBrown (ERC-CoG-101039103) grant. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. 
Neither the European Union nor the granting authority can be held responsible for them. The authors also acknowledge financial support from the Agence Nationale de la Recherche under Softer (ANR-21-CE06-0029) and Fricolas (ANR-21-CE06-0039) grants. Finally, they thank the Soft Matter Collaborative Research Unit, Frontier Research Center for Advanced Material and Life Science, Faculty of Advanced Life Science at Hokkaido University, Sapporo, Japan. § DECLARATION OF INTERESTS The authors report no conflict of interest. § APPENDIX: LORENTZ RECIPROCAL THEOREM In this appendix, we employ Lorentz reciprocal theorem for Stokes flows <cit.> in the limit of vanishing Bond number Bo, which allows us to evaluate the asymptotic behaviors of the scaling prefactors α_k and β_k. §.§ A: Vertical force The model problem introduced to perform the calculation comprises a sphere moving in a viscous fluid and towards an immobile, rigid, planar surface. We note V⃗̂⃗_⊥ = -V̂_⊥e⃗_z, the velocity at the particle surface while assuming a no-slip boundary condition at the wall surface located at z=0. The model problem is described by the incompressible Stokes' equations, ∇·Σ̂=0 and ∇·v⃗̂⃗_⊥=0, where Σ̂ denotes the stress tensor of the model problem and v⃗_⊥ denotes the corresponding fluid velocity. We invoke lubrication theory to obtain the corresponding pressure and velocity fields, given by: p̂_⊥(r⃗) = 3ηV̂_⊥a/ĥ^2(r) , 𝐯̂_⊥(r⃗,z) = ∇p̂_⊥(r⃗)/2ηz[z-ĥ(r)] , where: ĥ(r) ≃ d+r^2/2a . From the Lorentz reciprocal theorem, one has: ∫_𝒮n⃗·Σ·v⃗̂⃗_⊥ ds = ∫_𝒮n⃗·Σ̂·v⃗ ds , where Σ and v⃗ denote the stress tensor and flow velocity for the real problem, with 𝒮 denoting the entire bounding surface, including the surface of the sphere, the surface of the substrate and the surface located at r⃗→∞. The unit vector normal to the surface pointing towards the fluid is denoted by 𝐧. Given the boundary conditions of the model problem, the above relation simplifies to give: F_z = -1/V̂_⊥∫_𝒮n⃗·Σ̂·v⃗ds . To approximate the velocity field at the wall surface, we perform a Taylor expansion accounting for the small deformation of the wall, as: v⃗|_z=0 ≃v⃗|_z=δ-δ∂_zv⃗_0|_z=0 , = -ue⃗_x-ḋe⃗_z+(∂_t-u∂_x)δe⃗_z-δ∂_zv⃗_0|_z=0 , where v⃗_0 denotes the zeroth-order velocity field corresponding to a sphere moving near a rigid surface. Thus, the leading-order normal force simplifies into: F_z,1≃ -1/V̂_⊥∫_ℝ^2(p̂_⊥(∂_t-u∂_x)δ+η∂_zv⃗̂⃗_⊥|_z=0·∂_zv⃗_0|_z=0) dr⃗ . Computing the latter integral by considering the deflection δ (or Δ in dimensionless variables) generated at vanishing Bo allows us to recover the asymptotic expressions presented in Tab. <ref>. §.§ B: Horizontal force The model problem consists here of a sphere translating with a velocity V̂_∥e⃗_x, parallel and near an immobile and rigid substrate with no-slip boundary conditions applied at both the surfaces of the sphere and the substrate. A similar treatment as in the previous section leads to: p̂_∥(r⃗) = 6ηV̂_∥rcosθ/5ĥ^2(r) , v⃗̂⃗_∥(r⃗,z) = ∇p̂_∥(r⃗)/2ηz[z-ĥ(r)]+V⃗̂⃗_∥z/ĥ(r) , and: F_x,1≃ -1/V̂_∥∫_ℝ^2(p̂_∥(∂_t-u∂_x)δ+η∂_zv⃗̂⃗_∥|_z=0·∂_zv⃗_0|_z=0)dr⃗ . Computing the latter integral by considering the deflection δ (or Δ in dimensionless variables) generated at vanishing Bo allows us to recover the asymptotic expressions presented in Tab. <ref>.
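As a simple numerical sanity check on the model problem of Appendix A, one can verify that the quoted lubrication pressure p̂_⊥(r)=3ηV̂_⊥a/ĥ^2(r), with ĥ(r)=d+r^2/(2a), integrates over the plane to the classical sphere–plane drainage force 6πηa^2V̂_⊥/d. The Python sketch below performs this check with illustrative parameter values only; it is not part of the reciprocal-theorem calculation itself.

```python
# Check that the model-problem lubrication pressure integrates to 6*pi*eta*a^2*V/d.
import numpy as np
from scipy.integrate import quad

eta, a, d, V = 1.0, 1e-3, 1e-5, 1e-4        # illustrative SI values

h = lambda r: d + r**2 / (2 * a)            # parabolic gap profile
p_hat = lambda r: 3 * eta * V * a / h(r)**2 # model-problem pressure field

force, _ = quad(lambda r: p_hat(r) * 2 * np.pi * r, 0, np.inf)
print(force, 6 * np.pi * eta * a**2 * V / d)   # the two values coincide
```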
http://arxiv.org/abs/2407.12695v1
20240717161942
Highly Efficient Parallel Row-Layered Min-Sum MDPC Decoder for McEliece Cryptosystem
[ "Jiaxuan Cai", "Xinmiao Zhang" ]
cs.CR
[ "cs.CR", "cs.AR" ]
Highly Efficient Parallel Row-Layered Min-Sum MDPC Decoder for McEliece Cryptosystem Jiaxuan Cai and Xinmiao Zhang This material is based upon work supported by the National Science Foundation under Award No. 2052641. The authors are with the Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210 USA (E-mails: cai.1072@osu.edu; zhang.8952@osu.edu). § ABSTRACT The medium-density parity-check (MDPC) code-based McEliece cryptosystem remains a finalist of the post-quantum cryptography standard. The Min-sum decoding algorithm achieves better performance-complexity tradeoff than other algorithms for MDPC codes. However, the prior Min-sum MDPC decoder requires large memories, whose complexity dominates the overall complexity. Besides, its actual achievable parallelism is limited. This paper has four contributions: For the first time, the row-layered scheduling scheme is exploited to substantially reduce the memory requirement of MDPC decoders; A low-complexity scheme is developed to mitigate the performance loss caused by finite precision representation of the messages and high column weights of MDPC codes in row-layered decoding; Constraints are added to the parity check matrix construction to enable effective parallel processing with negligible impacts on the decoder performance and resilience towards attacks; A novel parity check matrix division scheme for highly efficient parallel processing is proposed and the corresponding parallel row-layered decoder architecture is designed. The number of clock cycles for each decoding iteration is reduced by a factor of L using the proposed L-parallel decoder with very small memory overhead. For an example 2-parallel decoder, the proposed design leads to 26% less memory requirement and 70% latency reduction compared to the prior decoder. Index Terms: Error-correcting codes, McEliece cryptosystem, medium-density parity-check codes, Min-sum algorithm, parallel decoder, post-quantum cryptography, row-layered scheduling. § INTRODUCTION The National Institute of Standards and Technology (NIST) has initiated the standardization of post-quantum cryptography in response to the imminent needs for cryptographic schemes that can withstand quantum computing attacks. It announced the fourth round of finalists in July 2022. The McEliece cryptosystem that employs quasi-cyclic medium-density parity-check (QC-MDPC) codes <cit.> remains one of the candidates. The design and implementation of low-density parity-check (LDPC) decoders for error correction in storage systems and digital communication have been extensively explored. The parity check matrices of popular QC-LDPC codes consist of a large number of smaller cyclic permutation matrices (CPMs). Efficient parallel decoding can be achieved by processing one or multiple CPMs at a time <cit.>. However, the parity check matrices for the MDPC codes considered for the McEliece cryptography scheme consist of a few very large circulant matrices, whose nonzero entries are randomly generated and their column weights are much higher.
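To make this circulant structure concrete, the following sketch builds a QC-MDPC-style parity check matrix H = [H_0 | H_1] from random weight-w first columns. The parameters used here are toy values chosen purely for illustration, far smaller than the standardized code parameters discussed later.

```python
# Illustrative sketch of the QC-MDPC parity-check structure: each H_i is an
# r x r binary circulant whose first column is a random weight-w vector.
import numpy as np

rng = np.random.default_rng(0)
r, w, n0 = 31, 5, 2   # toy parameters (standardized codes use e.g. r=4801, w=45)

def random_circulant(r, w):
    first_col = np.zeros(r, dtype=np.uint8)
    first_col[rng.choice(r, size=w, replace=False)] = 1
    # column j is the first column cyclically shifted down by j positions,
    # so every row is a cyclic shift of the previous row
    return np.stack([np.roll(first_col, j) for j in range(r)], axis=1)

H = np.concatenate([random_circulant(r, w) for _ in range(n0)], axis=1)
print(H.shape, H.sum(axis=0))   # (r, n0*r); every column has weight w
```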
As a result of the irregularity and high column weight, many prior techniques for simplifying LDPC decoders do not apply to MDPC decoders. The decoding algorithms require re-evaluation and the decoder architectures need to be re-designed. A spectrum of algorithms can be used to decode MDPC codes, with the bit-flipping (BF) algorithm and its variations being the simplest and having been explored in quite a few recent works <cit.>. The BF MDPC decoder implementations in <cit.> divide each column of the parity check matrix into L-bit segments, which are processed one by one to reduce the decoding latency. However, the parity check matrices of MDPC codes are still very sparse. Taking this into account, the decoder in <cit.> only processes the nonzero segments and utilizes an out-of-order processing scheme to reduce the complexity, decoding latency, and memory access. MDPC decoding algorithms with lower failure rates can better resist reaction-based attacks that try to recover the secret parity check matrix of the code by utilizing decoding failures <cit.>. One of the most powerful attacks is the Guo-Johansson-Stankovski (GJS) attack <cit.>. The Min-sum algorithm <cit.> with optimized scalars <cit.> achieves orders of magnitude lower decoding failure rate compared to BF algorithms. The Min-sum MDPC decoder proposed in <cit.> divides the parity check matrix into L blocks of rows and processes one nonzero entry in each block row in parallel to increase the efficiency and throughput. Because of the high column weight and long length of the MDPC codes, the decoder has large memory requirement and the memories dominate the overall decoder complexity. Besides, due to the uneven distribution of the nonzero entries in the randomly constructed parity check matrix, the speedup achieved by the decoder in <cit.> is far less than L and does not improve much for larger L despite the much bigger memory overhead. Unlike the sliced message-passing scheduling scheme <cit.> used in the design of <cit.> that updates the messages at the end of each decoding iteration, the row-layered scheduling scheme <cit.> utilizes the most updated messages in the remainder of the decoding iteration. As a result, both the number of decoding iterations and memory requirement are reduced. Row-layered decoding has been extensively studied for LDPC codes <cit.>. However, the high column weights of MDPC codes lead to decoding failure rate increase when finite precision is adopted in the hardware implementation of the decoder. Additionally, the random structure of their parity check matrices makes it difficult to achieve efficient parallel row-layered decoding. For the first time, this paper investigates the design of efficient parallel row-layered Min-sum decoders for the MDPC codes considered in the McEliece cryptosystem. The major contributions of this paper include the following: * A serial row-layered decoder is first proposed for MDPC codes as the baseline for our parallel decoder architecture. The decoder processes only the nonzero entries of the parity check matrix and only the locations of the nonzero entries are stored to reduce the memory requirement and latency. By processing the parity check matrix layer by layer, the size of the memory storing the check-to-variable (c2v) messages and accordingly the overall memory requirement are substantially reduced. * The decoding failure rate increase caused by finite precision representation of the scaled c2v messages is analyzed. 
The performance gap is eliminated by keeping fractional bits in the messages with small overheads on memory requirement and critical path. * Constraints are proposed in the random construction of the parity check matrices for MDPC codes to enable effective parallel processing. Analyses and simulations have been carried out on the possible effects of the constraints on the number of possible keys, decoding failure rate, and resilience towards reaction attacks. * A novel dynamic parity check matrix division scheme based on the locations of identity blocks is proposed to achieve highly efficient parallel processing. The corresponding parallel row-layered decoder architecture is also designed. The proposed L-parallel decoder effectively reduces the number of clock cycles needed for each decoding iteration by a factor of L. Additionally, the memory overhead associated with parallel processing is very small. As a result, the proposed design can efficiently exploit a much higher parallelism such as 32 instead of only 2 or 4 as in prior design <cit.>. For an example MDPC code considered for the standard, the proposed 2-parallel row-layered Min-sum decoder reduces the size of the memories, which dominate the decoder complexity, by 26% and shorten the latency by 70% compared to that in <cit.>. This paper is organized as follows. Section II introduces background information. Section III proposes the overall architecture of a serial row-layered Min-sum decoder and methodologies to eliminate the decoding performance gap caused by finite precision representation of messages. In Section IV, constraints for MDPC code construction for enabling efficient parallel processing is presented and it is shown that they do not have significant effects on decoding performance and possible attacks. The dynamic parity check matrix division scheme for highly efficient parallel processing and the corresponding implementation architecture are detailed in Section V. Section VI presents the hardware complexity analyses and comparisons, and conclusions follow in Section VII. § BACKGROUND §.§ MDPC codes and McEliece cryptosystem An MDPC code is a type of linear block code, which can be defined using a parity check matrix denoted as 𝐇. A vector 𝐜 is a valid codeword if and only if 𝐜𝐇^T=0. The private key of the McEliece cryptosystem based on MDPC codes is a set of n_0 r-bit random vectors, each with a Hamming weight of w. The parity check matrix of the MDPC code is in the format of 𝐇=[𝐇_0|𝐇_1|⋯|𝐇_n_0-1], where 𝐇_i (0≤ i<n_0) is a circulant matrix with the first column equaling the i-th random vector. If 𝐇_n_0-1 is non-invertible, the corresponding vector is randomly regenerated. The generator matrix 𝐆 and the parity check matrix 𝐇 of a linear block code satisfy 𝐆𝐇^T=0. Consequently, the 𝐆 matrix for the McEliece cryptosystem is derived as [𝐈|𝐁^T], where 𝐁=[𝐇_n_0-1^-1𝐇_0|𝐇_n_0-1^-1𝐇_1|⋯|𝐇_n_0-1^-1𝐇_n_0-2]. The first columns of circulant matrices 𝐇_n_0-1^-1𝐇_i (0≤ i< n_0-1) constitute the public key in the McEliece cryptosystem. A bipartite graph called Tanner graph can be used to represent the 𝐇 matrix of an MDPC code. This graph has two types of nodes known as variable nodes and check nodes. The 𝐇 matrix and the corresponding Tanner graph of a toy MDPC code is depicted in Fig. <ref>. Each variable node corresponds to a column of 𝐇, while each check node represents a row. A pair of variable and check nodes are connected by an edge if the corresponding entry in the 𝐇 matrix is nonzero. As shown in Fig. 
<ref>, the core process of encryption in the McEliece cryptosystem is MDPC encoding. An (n_0-1)r-bit plaintext vector is multiplied with the generator matrix 𝐆 to compute a codeword of n=n_0r bits denoted as 𝐜. Then the ciphertext 𝐱 is derived by adding a randomly generated n-bit vector 𝐞 containing t nonzero bits to 𝐜. The decryption is to perform MDPC decoding on the ciphertext 𝐱. If 𝐱 is decodable, then 𝐜 is recovered. Since the 𝐆 matrix is systematic, the first (n_0-1)r bits of the codeword 𝐜 equal the plaintext. The parameters of the MDPC codes considered for the post-quantum cryptography standard are listed in Table <ref>. §.§ Min-sum decoding algorithm Algorithm <ref> lists the pseudo codes for the Min-sum algorithm. It iteratively exchanges reliability information of multiple bits between connected variable and check nodes to make decisions on the received bits until a codeword is found. For an input vector 𝐱=[x_0,x_1,⋯,x_n-1], the initial reliability information for x_j, denoted as γ_j, is configured as either +C or -C, depending on whether x_j is `0' or `1', respectively. Simulations can be used to determine the optimal value of C, which depends on the maximum value of the reliability messages and the column weight of the code. In the remaining part of this paper, u_i,j denotes the variable-to-check (v2c) message from variable node j to check node i, and v_i,j represents the c2v message from check node i to variable node j. k is the decoding iteration number in Algorithm <ref>. S_v(i) (S_c(j)) represents the set of variable (check) nodes connected to the check (variable) node i (j). A sign bit of `0' and `1' indicates that the message is positive and negative, respectively, and ⊕ denotes the XOR operation. To reduce the decoding failure rate, the sum of c2v messages is multiplied with a scalar, α, for variable node processing and computing a posteriori information, γ̃_̃j̃ <cit.>. If no valid codeword is found after I_max iterations, a decoding failure is declared. §.§ Row-layered scheduling scheme In row-layered decoding <cit.>, the parity check matrix 𝐇 is divided into blocks of f rows. Each block of f rows is called a layer, and each column in a block should only have at most one nonzero entry. During the decoding of layer l of iteration k, for a variable node, denote the corresponding v2c and c2v messages by u^(k,l) and v^(k,l), respectively. The notations omit the indices for both variable and check nodes for brevity. In row-layered decoding, the c2v messages computed from a layer are used right away to update the a posterior information and v2c messages used in the decoding of the next layer. Since all the c2v messages are initially zero, the initial a posterior information equals the channel information γ. Then after the decoding of layer l in iteration k, the a posterior information is updated as γ̃' = γ̃ - α v^(k-1,l) + α v^(k,l). Comparing Lines 14 and 16 of Algorithm <ref>. The v2c message can be calculated as u^(k,l) = γ̃- α v^(k-1,l). The row-layered Min-sum decoding algorithm is the same as that listed in Algorithm <ref>, except that Lines 16 and 18 are replaced by (<ref>) and (<ref>), respectively. Since updated messages are utilized in the remainder of the same decoding iteration instead of the next iteration, the row-layered decoding converges faster. As a result, it also achieves better decoding performance compared to Algorithm <ref> when I_max is limited. Fig. 
<ref> shows the simulation results of the sliced message-passing <cit.> Min-sum decoder that updates the c2v and v2c messages at the end of each iteration and a row-layered decoder that consists of one row in each layer for a randomly generated MDPC code with (n_0,r,w)=(2, 4801,45). The value of the scalar, α<1, affects the decoding frame error rate (FER). To simplify the scalar multiplier, the scalar is allowed to have 6 digits in the fractional part with at most two nonzero digits in our design. The scalar leading to the lowest FER is found from simulations for each decoder. Each v2c and c2v message is represented by 4-bit integer magnitude and 1-bit sign. In this case, setting C to 9 leads to good performance. As shown in Fig. <ref>, row-layered decoding reduces the average number of decoding iterations substantially compared to the sliced message-passing decoder. When the maximum number of decoding iterations is set to a limited number, such as I_max=30, the FER of the row-layered decoder is also lower due to the faster convergence. Most previous MDPC decoder designs consider BF algorithms <cit.> due to their simplicity. For comparison, simulation results of the REMP-2 BF algorithm <cit.> with optimal parameter settings are included in Fig. <ref>. The REMP-2 algorithm has one of the lowest FERs among BF algorithms. However, its performance is still orders of magnitude worse compared to that of the Min-sum algorithm. §.§ Parallel MDPC decoders Many hardware implementation architectures have been developed for row-layered decoders of QC-LDPC codes <cit.>. The parity check matrices of these codes consist of a large number of smaller zero or CPMs as shown in Fig. <ref>(b). In this figure, the nonzero entries are represented by diagonal lines. Since each row or each column has exactly one single nonzero entry in a CPM, efficient parallel QC-LDPC decoders are achieved by processing one or multiple CPMs in each clock cycle. However, the MDPC codes used in the McEliece cryptosystem exhibit fundamentally different structures in their parity check matrices as illustrated in Fig. <ref>(a). Due to the irregular distribution of nonzero entries in the randomly generated 𝐇 matrix, the parallel designs of QC-LDPC decoders are not applicable to MDPC decoders. In the L-parallel Min-sum MDPC decoder of <cit.>, the 𝐇 matrix is horizontally divided into L segments and the decoder tries to process one entry from each segment simultaneously in each clock cycle. However, due to the unbalanced number of nonzero entries among the L segments, the effective speedup factor of this design is quite limited. From simulations on 1000 randomly generated MDPC codes with (n_0,r,w)=(2, 4801,45), on average, the 2-parallel design in <cit.> can achieve 1.82 times speedup compared to the serial design. The effective speedup becomes 3.15 times on average for 4-parallel decoding, and this design is even less effective for higher parallelism despite the L times more check node units (CNUs) and larger variable node units (VNUs). Furthermore, one memory bank is needed to record the messages for each segment of 𝐇 and its size is decided by the worst-case number of messages to store. Due to the irregularity of the locations of nonzero entries, the overall size of the memories increases for higher parallelism. § ROW-LAYERED MIN-SUM MDPC DECODER WITH FINITE PRECISION In this section, a row-layered MDPC Min-sum decoder architecture is first presented to explain the involved computation units and decoding data flow. 
It serves as a starting point for the design of the parallel decoder to be detailed in Section V. Then a low-complexity method is proposed to mitigate the performance loss caused by finite precision representation of messages. §.§ Row-layered MDPC decoder architecture A row-layered Min-sum decoder for MDPC codes can be implemented by extending the architecture for LDPC codes in <cit.> as shown in Fig. <ref>. In the 𝐇 matrix of an MDPC code for the McEliece cryptosystem, each row is a cyclic shift of the previous row. In order to increase the hardware efficiency and reduce the memory requirement, only the indices of the nonzero entries in one row of the 𝐇 matrix are stored in RAM I. As the indices are read out for the decoding, they are incremented by one mod r to derive the indices of the nonzero entries in the next row of 𝐇 by the “H matrix shifting” block. Subsequently, these results are written back to RAM I. Since the c2v messages are initially zero, the initial value of the a posteriori information for bit j equals the channel information γ_j, which is set to +C or -C depending on whether bit j is `0' or `1'. These information are loaded into RAM U in the beginning of the decoding. The a posteriori information is updated according to (<ref>) and written back to RAM U, and the v2c messages are calculated by subtracting the c2v messages of the same check node from the previous iteration from the a posteriori information as shown in (<ref>). Hence, the channel information does not need to be stored. RAM M is used to hold the c2v messages for all rows in a compressed format containing min1, min2, idx, and s. A CNU comprises two parts and their architectures are depicted in Fig. <ref> <cit.>. CNU A iteratively computes min1, min2, idx, and s values according to Lines 8-11 of Algorithm <ref> and CNU B derives the actual c2v message from its compressed form as listed in Lines 13 and 14 of Algorithm <ref>. The sign bits of the v2c messages are stored in RAM S. RAM T is used to buffer u^(k,l) until it is added up with the α v^(k,l) calculated by the CNU to update the a posteriori information. During the decoding of a layer, the indices of the nonzero entries of 𝐇 stored in RAM I are read out consecutively and used as the addresses to read out the corresponding γ̃ from RAM U. At the same time, v^(k-1,l) are generated by CNU B using the compressed messages stored in RAM M and sign bit stored in RAM S. Then the v2c messages are derived as u^(k,l) =γ̃- α v^(k-1,l) and sent to CNU A to compute the min1, min2, idx, and s values as the compressed c2v message for v^(k,l). The new sign bits of u^(k,l) replace the old sign bits in RAM S. The newly computed compressed v^(k,l) overwrite the old values in RAM M and are also sent to CNU B' to derive the actual v^(k,l). α v^(k,l) added with the u^(k,l) read out from RAM T is the updated a posteriori information, which are written back to RAM U. The addresses for accessing all RAMs except RAM U are generated using counters, and hence are not explicitly shown in Fig. <ref>. Fig. <ref> shows the scheduling of the computations in the row-layered decoder. The updating of the a posterior information using the c2v messages generated from the decoding of layer l overlaps with the check node processing of layer l+1. §.§ Performance loss caused by finite precision and mitigation In hardware implementations, finite precision is necessary to reduce the number of bits used to represent the messages in order to reduce memory requirement and simplify computation units. 
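Before discussing finite-precision effects, the per-layer data flow described above can be summarised by a small floating-point software model. This is only a behavioural sketch following Eqs. (1) and (2); the actual decoder uses the fixed-point formats and the compressed (min1, min2, idx, s) message storage described in this section.

```python
# Floating-point model of one check-row update in row-layered scaled Min-sum.
from math import prod

def sgn(x):
    return -1.0 if x < 0 else 1.0

def process_layer_row(row_support, gamma_t, v_old, alpha):
    """row_support: column indices of the nonzero entries of the processed row."""
    # v2c messages: u = gamma_tilde - alpha * v from the previous iteration
    u = {j: gamma_t[j] - alpha * v_old.get(j, 0.0) for j in row_support}
    total_sign = prod(sgn(u[j]) for j in row_support)
    v_new = {}
    for j in row_support:
        others = [abs(u[k]) for k in row_support if k != j]
        v_new[j] = total_sign * sgn(u[j]) * min(others)   # c2v message
        gamma_t[j] = u[j] + alpha * v_new[j]              # a posteriori update
    return v_new
```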
Assume that each v2c and c2v message is represented by a (q+1)-bit integer with q-bit magnitude and 1-bit sign. As shown in Line 16 in Algorithm <ref>, the a posteriori information is the sum of the channel information and scaled c2v messages. To prevent overflow, more bits are used to represent the a posteriori information and the result of subtracting it by α v^(k-1,l) according to (<ref>). The magnitude of the difference is saturated to 2^q-1 before it is utilized as the v2c message in CNU A. To simplify the scalar multiplications, the scalars considered in our design have at most two nonzero digits, each of which is either +1 or -1. In the design of <cit.>, the results of scalar multiplications are brought back to integer representation. Rounding instead of truncation is utilized to reduce the precision loss. Fig. <ref> illustrates simulation results for row-layered Min-sum decoding of an MDPC code with (n_0,r,w)=(2, 4801,45), I_max=30, and q=4 bits representing each c2v and v2c message magnitude. Optimal scalars for each decoding scheme as shown in the figure are derived from simulations and they are small due to the high column weight of the code. Assume that each a posteriori information is represented by a (p+1)-bit integer with p-bit magnitude and 1-bit sign. For the scalars shown in Fig. <ref> and w=45, p=⌈ log_2(0.21875× 45× (2^4-1)+15)⌉ =8 is sufficient to prevent overflow. However, if the scaled c2v messages are brought back to integer, even with rounding, the row-layered decoding has much higher FER compared to the case where all the fractional bits are kept as shown in Fig. <ref>. This is because, following (<ref>) and (<ref>), in row-layered decoding, a single c2v message instead of the sum of c2v messages as in <cit.> is scaled by a small scalar. The rounding brings an error in the range of (-0.5, 0.5) for each scaled c2v message and the errors are accumulated during the updating of the a posteriori information. To mitigate the performance loss, this paper proposes to keep a limited number of fractional bits in the scaled c2v messages and the a posteriori information to improve their precision. If i fractional bits are kept, the rounding error of each scaled c2v message is reduced to (-0.5/2^i, 0.5/2^i). Fig. <ref> shows that keeping two fractional bits in the scaled c2v messages and a posteriori information can achieve almost the same FER as the decoder with full precision. § MDPC CODE CONSTRUCTION CONSTRAINTS FOR PARALLEL PROCESSING This section first proposes adding constraints to the construction of the parity check matrices of MDPC codes in order to enable highly efficient parallel processing. Then it is shown that the proposed constraints do not bring noticeable increase in the decoding failure rate or compromise the security of the cryptosystem in a non-negligible way. §.§ Constraints on MDPC code construction for parallel processing For QC-LDPC codes used in digital communication and storage systems, as shown in Fig. <ref> (b), their 𝐇 matrices consist of CPMs or zero matrices of dimension L× L. A row-layered decoder that processes an L× L block of 𝐇 can be easily designed since there is a single nonzero entry in each row and each column of a CPM. If there is more than one nonzero entries in a row, the CNU will be much more complicated than that in Fig. <ref>. If there are multiple nonzero entries in a column, then the message updating formula in (<ref>) and (<ref>) are not applicable. 
Formulas for row-layered decoding with two nonzero entries in a column are available in <cit.>. If the 𝐇 matrix of an MDPC code is divided into blocks of size L× L, each block may have multiple `1's in a row or column due to the random code construction. Besides, the number of `1's varies among the blocks. The CNUs need to be designed to handle the worst case and the memories storing the messages need to be wide enough to store the maximum number of messages in a block. The v2c message updating formulas become much more complicated for processing more than one nonzero entry at a time. Additionally, many of the hardware units are idling when the decoder is processing a block with a smaller number of nonzero entries. As a result, such parallel MDPC decoders have large area overhead and low efficiency. Constraints need to be added to the 𝐇 matrices to make efficient parallel processing possible. To enable efficient parallel decoding, this paper proposes the following constraint on the construction of the 𝐇 matrices of MDPC codes, which are specified by the r-bit vectors in the first columns of the n_0 sub-matrices. For L-parallel processing, it is required that the circular distance between any pair of nonzero entries in each of the n_0 r-bit vectors is at least L. For two entries whose indices are i and j, their circular distance is defined as min{|i-j|, r-|i-j|}. A matrix satisfying the above constraint is referred to as a "matrix constrained by L" in the remainder of this paper. If the proposed constraint is satisfied, then the 𝐇 matrix can be divided into L × L blocks, where each block has at most a single nonzero entry in each row and each column. One block can be processed in each clock cycle to achieve L-parallel processing. Fig. <ref> illustrates an example 𝐇 matrix of a toy MDPC code with (n_0,r,w)=(2, 17, 3) constrained by L=4. In this example, the 𝐇 matrix is divided into blocks according to fixed indices. However, it is possible to divide the matrix in alternative ways to achieve even more efficient parallel processing and the details will be presented in Section V. Since the size of the submatrix, r, is a prime number, there are some columns and rows by the edges of each submatrix that do not form complete L × L blocks. These exceptions can be easily handled in the decoder as will be explained in Section V. §.§ Analyses on the effects of the constraint on decoding performance and security The 𝐇 matrix was originally randomly constructed for the McEliece cryptosystem. However, our design adds a constraint to the 𝐇 matrix construction in order to enable efficient parallel decoding. The potential effects of the proposed constraint on the number of possible keys, decoding failure rate, and resilience towards reaction attacks are analyzed in this subsection. As mentioned previously, the private key of the MDPC-based scheme consists of n_0 r-bit vectors specifying 𝐇_0, 𝐇_1,⋯, 𝐇_n_0-1, and each r-bit vector has weight w. 𝐇_n_0-1 needs to be invertible. However, the probability of a randomly generated r-bit vector with weight w leading to non-invertible 𝐇_n_0-1 is negligible <cit.>. Hence the total number of possible private keys can be estimated as (C(r, w))^n_0. It is very difficult to derive a closed-form formula for the number of possible r-bit vectors when the proposed constraint is added. Hence, simulations were carried out to estimate the total number of possible private keys satisfying the proposed constraint. A large number of r-bit vectors with weight w are randomly generated.
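A direct way to test a candidate vector against the proposed circular-distance constraint is sketched below; the function name is illustrative, and the pairwise check simply implements the definition given above.

```python
# Accept a weight-w candidate (given by the indices of its nonzero entries)
# only if every pair of nonzero positions has circular distance >= L.
from itertools import combinations

def satisfies_constraint(support, r, L):
    """support: indices of the nonzero entries of an r-bit vector."""
    return all(min(abs(i - j), r - abs(i - j)) >= L
               for i, j in combinations(support, 2))

# e.g. for (r, w) = (4801, 45) and L = 32, draw random supports and count acceptances
```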
Then the vectors satisfying the proposed constraints are counted. Denote these two numbers by s_0 and s_1, respectively. Then the total number of possible private keys satisfying the proposed constraint can be estimated as ((s_1/s_0)× C(r, w))^n_0. For MDPC codes with (n_0,r,w)=(2, 4801,45), the total number of possible 4801-bit vectors with weight 45 is C(r,w) = C(4801,45)=3.1 × 10^109. Hence, the total number of possible keys is (3.1 × 10^109)^2 ≈ 2^727 originally. From 10^9 4801-bit vectors with weight 45 randomly generated from simulations, 119 of them satisfy the proposed constraint with L=32. Therefore, the total number of keys under our proposed constraint with L=32 is ((119/10^9)× 3.1 × 10^109)^2 ≈ 2^681. Although the number of possible keys is reduced, it is still very large so that the private key is not recoverable by exhaustive search. For different L, the numbers of possible private keys are listed in Table <ref>. As shown in Table <ref>, the MDPC codes considered for higher security level have lower density. As a result, the number of possible keys is reduced by a smaller percentage when the proposed constraint is reinforced. The proposed constraint may affect the decoding failure rates of MDPC codes. To investigate the possible effects, simulations have been carried out to find the FERs of random MDPC codes with (n_0,r,w)=(2, 4801,45) constrained by different L ≤ 64. For each L, more than 10 random codes have been generated. It is found from simulations that different codes with the same L≤ 32 have very similar FERs. Hence only one code is chosen for each L to show the FER in Fig. <ref>. From this figure, it is observed that the FER curves for different L ≤ 32 almost overlap with that of the code without the proposed constraint, indicating that the proposed constraint does not lead to any performance loss when the parallelism is moderate or small. However, when L=64, FER increase is observed for one of the randomly generated codes and this is shown in Fig. <ref>. One possible reason is that, for large L, the number of possible locations of the nonzero entries in the 𝐇 matrix is significantly reduced, which may lead to more 4-cycles. The number of possible nonzero entry locations increases with r, and hence longer MDPC codes may tolerate larger L. Various attacks targeting at the McEliece cryptosystem are available in the literature. The reaction attacks <cit.> that try to recover the secret 𝐇 matrix by utilizing decoding failures are among the most powerful. The effects of the proposed 𝐇 matrix construction constraint on the resiliency to the reaction attacks need to be evaluated. In our analysis, the key-recovery GJS attack <cit.> is considered since it is one of the most critical reaction attacks. This attack consists of two steps: distance spectrum construction and private key reconstruction. Denote the first row of 𝐇_0 by 𝐡_0. The distance spectrum of 𝐡_0, D(𝐡_0), is a list of the circular distance d≤⌊ r/2 ⌋ between any pair of nonzero entries in the vector. To construct D(𝐡_0), for every 1≤ d≤⌊ r/2 ⌋, M n_0r-bit vectors, each with t `1's arranged as ⌊ t/2 ⌋ pairs with circular distance d within the first r bits of the vector are sent to the decoder, and the corresponding FER is recorded. M is a large integer. The FER is lower for those d ∈ D(𝐡_0), and accordingly the distance spectrum is constructed. The second step reconstructs the 𝐇 matrix from D(𝐡_0). From calculations and simulations in <cit.>, the complexity of the GJS attack is dominated by the first step. 
Hence, the analysis on the impacts of the proposed constraint only needs to be carried out on the first step. The number of decoding trials in the first step of the GJS attack for codes without the proposed constraint is ⌊ r/2 ⌋· M. M needs to be large enough to collect a non-trivial number of decoding failures and hence it is decided by the decoding failure rates of MDPC codes. When the parallelism is moderate or small, the proposed constraint does not lead to any FER degradation. Hence, the same M is needed to attack the codes adopting our constraint. On the other hand, since 1≤ d<L are not possible entries in the spectrum, the total number of decoding trials to attack the code with our proposed constraint is reduced to (⌊ r/2 ⌋-L) · M. r is a very large number as shown in Table <ref>. Therefore, the number of decoding trials is only reduced by a small proportion for our scheme. For an example MDPC code with (n_0,r,w)=(2, 4801,45) constrained by parallelism L=32, the number of decoding trials is only reduced by 32/2400 = 1.33%. For MDPC codes considered for higher security levels, due to the larger r, the number of decoding trials is reduced by an even smaller percentage. Thus, it can be concluded that the proposed constraint on MDPC code construction only negligibly increases the vulnerability towards the GJS attack. § HIGHLY EFFICIENT PARALLEL MDPC DECODER ARCHITECTURE In this section, a dynamic scheme is proposed to divide the 𝐇 matrix into L× L identity blocks to achieve L times speedup in L-parallel processing. Then an even-odd message storage method is developed to enable the proposed matrix division in order to realize a highly efficient parallel MDPC decoder. §.§ Dynamic 𝐇 matrix division scheme In Fig. <ref>, the 𝐇 matrix is divided into L× L blocks in a straightforward manner according to fixed indices. In this case, although the decoder processes an L× L block in each clock cycle, the achievable speedup is far less than L since a large portion of the blocks have less than L nonzero entries due to the irregular distribution of the nonzero entries in the 𝐇 matrices of MDPC codes. From simulations on 10^9 randomly generated submatrices of MDPC codes with (n_0,r,w)=(2, 4801,45), Table <ref> shows the average number of nonzero entries in those L× L nonzero blocks for different L. The achievable speedup compared to a serial design is only around 50-60% of L for larger L. To achieve L times effective speedup by processing an L× L block each time, this section proposes to divide the 𝐇 matrix dynamically into L× L identity blocks. An example 𝐇 matrix divided by the proposed scheme for L=4 is illustrated in Fig. <ref>. The top left corner of each L× L identity block is a nonzero entry in the rows of 𝐇 whose indices are integer multiples of L. Due to the cyclical shift property of 𝐇, some identity blocks wrap around the column edges of the submatrices, such as the identity formed by columns 15,16,0,1 in the first 4 rows of the second submatrix shown in Fig. <ref>. Although r is a prime number and hence is not divisible by L, the last r-⌊ r/L⌋ L rows of 𝐇 are divided into blocks in the same way, except that they are incomplete identity blocks. They can still be processed by the same hardware units for the full identity blocks by disabling the units for the incomplete rows. Using the proposed dynamic 𝐇 matrix division scheme, the row-layered decoder can process one block of L rows as a layer. In each layer, one identity block is processed in each clock cycle.
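The enumeration of identity blocks under the dynamic division can be illustrated with a few lines of code: since every row of a circulant is a cyclic shift of the previous one, the top-left corners of the identity blocks in layer l sit at the first-row support cyclically shifted by lL. The sketch below uses the toy (n_0,r,w)=(2,17,3), L=4 example with first-layer column indices 0, 7, 12; it is an illustration of the indexing only, not the hardware index update.

```python
# Column indices of the top-left corners of the L x L identity blocks in layer l.
def identity_block_columns(first_row_support, r, L, layer):
    return sorted((c + layer * L) % r for c in first_row_support)

first_row_support = [0, 7, 12]          # toy circulant with r = 17, w = 3
r, L = 17, 4
for layer in range((r + L - 1) // L):   # ceil(r/L) layers; the last one is incomplete
    print(layer, identity_block_columns(first_row_support, r, L, layer))
```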
Since each of the identity blocks has exactly L nonzero entries, except those in the last r-⌊ r/L⌋ L rows of 𝐇, the decoder using the proposed matrix division can achieve almost L times speedup compared to the serial design as shown in Table <ref>. §.§ Highly efficient parallel row-layered decoder architecture The top-level block diagram of the proposed L-parallel row-layered Min-sum MDPC decoder is shown in Fig. <ref>. In our design, only the nonzero identity blocks of 𝐇 are processed in order to reduce the latency and increase the hardware utilization efficiency. To reduce the memory requirement, only the column indices for the `1's in the top left corners of the identity blocks in the first layer are stored into RAM I in the beginning of the decoding. For example, the column indices recorded for the first layer of the first submatrix in the example matrix in Fig. <ref> are 0,7,12. During decoding, the column indices for the next layer are derived from those of the current layer by adding L mod r in the “H matrix shifting" block and the results are written back to RAM I. RAM U is the memory used for storing the most updated a posteriori information for each decoder input bit and it is initialized with the channel information. For an L-parallel decoder, the a posteriori information of L consecutive bits are stored in one address of RAM U. However, an identity block of 𝐇 can start from any column index in the proposed dynamic matrix division scheme. Hence, the information belonging to an identity block may not be in one address of RAM U. However, they can always be found from two consecutive addresses of RAM U. To enable the simultaneous access of two consecutive addresses, RAM U is divided into two banks: RAM U0 storing the a posteriori information for the even block columns of 𝐇 and RAM U1 recording those of the odd block columns. Take an L=32-parallel decoder for an MDPC code with (n_0,r,w)=(2, 4801,45) as an example. A column index is represented by ⌈ log_2(2× 4801)⌉ =14 bits as a= a_13a_12⋯ a_0. The address for accessing RAM U1 is the higher 14-⌈ log_2 2L⌉=8 bits, a_13a_12⋯ a_6, and that for accessing RAM U0 is a_13a_12⋯ a_6 +a_5. The lower ⌈log_2 2L⌉=6 bits of a decide which of the L entries to take from the 2L information read from RAM U0 and U1. This is implemented by the shifter shown in Fig. <ref>. To simplify the illustration, this architecture is for an example case of L=8. The outputs of RAM U0 and U1 are denoted by g_0, g_1,⋯, g_7 and h_0,h_1, ⋯, h_7, respectively, in Fig. <ref>. Each of them has p+1 bits. This shifter consists of ⌈log_2 (2L)⌉ levels of multiplexors. The signal s_3s_2s_1s_0=a_3a_2a_1a_0 decides the number of locations cyclically shifted to the left. At the output of the shifter, g'_0,g'_1,⋯, g'_7 are the L consecutive information starting from index a. After these a posteriori information are updated in the decoder, they are sent to the reverse shifter with h'_0,h'_1,⋯, h'_7, and the outputs are written back to the same addresses at RAM U0 and U1. Similar to that in the serial decoder introduced in Section III, the RAM M in our proposed parallel decoder is used to hold the c2v messages in a compressed format containing min1, min2, idx, and s for each 𝐇 matrix row. However, since L rows of 𝐇 are processed at a time, the compressed messages for L rows are stored in each address of RAM M in Fig. <ref>. A RAM T serves as a buffer for all the v2c messages during the processing of one row. 
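The even/odd banking and shift-select computation just described can be mirrored by a short behavioural sketch (shown for L = 32 and n_0r = 2×4801, i.e. 14-bit column indices a = a_13⋯a_0); this is only a software mirror of the address generation, not the hardware itself.

```python
# RAM U addressing for L = 32: bank U1 is addressed by a13..a6, bank U0 by
# a13..a6 + a5, and the low 6 bits select the cyclic shift of the 2L words read.
L = 32

def ram_u_access(a):
    high = a >> 6                  # a13..a6
    addr_u1 = high
    addr_u0 = high + ((a >> 5) & 1)
    shift = a & (2 * L - 1)        # a5..a0, shift amount for the barrel shifter
    return addr_u0, addr_u1, shift

print(ram_u_access(101))           # e.g. starting column index 101
```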
Since an L-parallel design processes L rows simultaneously, L copies of RAM T are needed. Table <ref> summarizes the sizes of the RAM banks utilized in the proposed L-parallel row-layered decoder for MDPC codes with code parameters (n_0,r, w), q-bit c2v and v2c message magnitude, and p-bit a posteriori information magnitude. To support codes with varying parameters, the size of each RAM bank should be configured to its maximum possible value. The contents of each memory, excluding RAM U banks, are accessed consecutively by using a counter to generate the addresses. The addresses used to access RAM U banks are retrieved from RAM I. In our proposed L-parallel decoder, L copies of CNUs are employed. As mentioned before, the last block row of 𝐇 has less than L rows. This can be taken care of by disabling the extra CNUs when processing the last block row. For the identity blocks that wrap around the edges of the submatrices of 𝐇, such as the last identity block in the second block row of the first submatrix in Fig. <ref>, the corresponding a posteriori information might be stored in two addresses of the same RAM U bank. To avoid memory access conflicts, registers are used to store the a posteriori information of the last block column for each submatrix in our design. § HARDWARE COMPLEXITY ANALYSES This section provides detailed hardware complexity analyses of the proposed decoder for an example (n_0,r,w)=(2, 4801,45) MDPC code. Comparisons with the sliced message-passing decoder <cit.> and the REMP-2 BF decoder <cit.> are also carried out. For the design in <cit.>, the memory overhead caused by parallel processing increases rapidly and the speed improvement becomes less significant for larger L. Hence L=2 designs are compared. The logic parts of the design, including VNUs, CNUs, shifters, registers, etc., are synthesized using the GlobalFoundries 22FDX process. The equivalent number of NAND2 gates under a tight 242ps timing constraint is listed in Table <ref>. On the other hand, the c2v messages in the REMP-2 BF decoder have single-bit magnitudes. Hence the accumulation of the c2v messages in the VNU has a shorter data path and can achieve a higher clock frequency. For the (n_0,r,w)=(2, 4801,45) MDPC code, the targeted t is 84 as listed in Table <ref>. The average number of decoding iterations listed in Table <ref> is collected from simulations over 10,000 samples under this parameter. The sizes of the memories for the proposed row-layered Min-sum decoder are calculated from Table <ref> and those for the REMP-2 and sliced message-passing Min-sum decoders have been analyzed in <cit.>. Although the proposed decoder requires larger area for the shifter and registers compared to the sliced message-passing decoder in <cit.>, the memories contribute to the majority of the overall area. Table <ref> shows that the proposed row-layered decoder achieves 26% total memory requirement reduction compared to that of the sliced message-passing decoder <cit.>. RAM C is used to store the channel information in the sliced message-passing decoder. However, in the proposed row-layered decoder, the channel information is incorporated in the updated a posteriori information according to (<ref>) after the initialization. Hence, it does not need to be stored in the proposed design and RAM C is eliminated.
In the sliced message-passing decoder, two copies of RAM M are needed for storing the c2v messages for the previous decoding iteration and calculating the c2v messages for the current iteration. On the other hand, in the proposed row-layered design, the decoder processes one block row after another. Hence the min1, min2, etc. for one block row can be stored in registers and one copy of RAM M is needed to record the c2v messages. Lastly, the proposed design requires much smaller RAM S and RAM I because they are not affected by the uneven distribution of nonzero entries in 𝐇. The sliced message-passing decoder in <cit.> divides the 𝐇 matrix into segments of rows. The number of entries in each RAM S and RAM I bank is more than w/L, especially when L is larger, due to the irregularity of 𝐇. This is also the reason that the sliced message-passing decoder requires many more clock cycles per decoding iteration. Besides, the proposed row-layered decoder reduces the number of decoding iterations significantly. As a result, it achieves 70% latency reduction. Table <ref> also shows that the proposed decoder has much lower complexity and latency than the REMP-2 decoder in addition to the advantage of several orders of magnitude lower FER as shown in Fig. <ref> (a). Even for decoders with higher parallelisms, the overall area requirement is dominated by the memories. To show how the complexity of the proposed design grows with the parallelism, Table <ref> lists the memory requirements of the proposed decoder for L=2, 8, 16, and 32 calculated from Table <ref>. The size of RAM I does not change with the parallelism because it stores the location information of the identity blocks in one layer of 𝐇, which remains the same for different L. Since r is a prime number and is not divisible by L, the sizes of RAM M, RAM S, and RAM U may change slightly with L due to the ceiling or floor function of r/L as shown in Table <ref>. The total size of RAM T increases linearly with L. However, they are much smaller than other memories. As a result, the total size of the memories of the proposed decoder increases very slightly with L. Even for a 32-parallel design, the memories are only 5.1% larger than those of the 2-parallel decoder. Due to the proposed dynamic 𝐇 matrix division scheme, our proposed L-parallel decoder can achieve L times speedup compared to a serial design, ignoring that the last block row of 𝐇 has less than L rows. On the other hand, the speedup achievable by the previous decoder in <cit.> is far less than L, especially for larger L, as shown in Table <ref>. § CONCLUSIONS For the first time, this paper investigates row-layered Min-sum decoding for MDPC codes with high column weight used in the McEliece cryptosystem. The performance loss caused by scaling and rounding single c2v messages with finite precision in the case of high column weight is identified. Increasing the precision of the a posteriori information is proposed to compensate the performance loss. Constraints are proposed for the code construction to enable highly efficient parallel decoding. Analyses have proved that the proposed constraints only bring negligible increase on the decoding failure rate and do not compromise the security of the cryptosystem in a non-negligible way. Additionally, a dynamic parity check matrix division scheme and the corresponding parallel decoder architecture are designed to achieve L times speedup for L-parallel processing. Future work will focus on further reducing the memory requirement of the decoder. 
§ REFERENCES
[McElieceNIST] D. J. Bernstein et al., "Classic McEliece: conservative code-based cryptography," NIST, Nov. 2017. Accessed: Sep. 2022. [Online]. Available: https://classic.mceliece.org/nist.html
[MDPCMcEliece] R. Misoczki, J.-P. Tillich, N. Sendrier, and P. S. L. M. Barreto, "MDPC-McEliece: new McEliece variants from moderate density parity-check codes," Proc. IEEE Intl. Symp. on Info. Theory, pp. 2069-2073, Oct. 2013.
[ParaLayeredLDPCArch1] J. Kim, J. Cho, and W. Sung, "A high-speed layered min-sum LDPC decoder for error correction of NAND Flash memories," Proc. IEEE Intl. Midwest Symp. Circuits and Syst., pp. 1-4, Aug. 2011.
[ParaLayeredLDPCArch2] V. L. Petrović and D. M. El Mezeni, "Reduced-complexity offset Min-sum based layered decoding for 5G LDPC codes," Proc. Telecomm. Forum, pp. 1-4, Nov. 2020.
[ParaLayeredLDPCArch3] K. K. Gunnam, G. S. Choi, and M. B. Yeary, "A parallel VLSI architecture for layered decoding for array LDPC codes," Proc. IEEE Intl. Conf. on VLSI Design, pp. 738-743, Feb. 2007.
[Liva] H. Bartz and G. Liva, "On decoding schemes for the MDPC-McEliece cryptosystem," Proc. Intl. ITG Conf. on Syst., Commun., and Coding, pp. 1-6, Mar. 2019.
[BIKE] T. B. Paiva and R. Terada, "Faster constant-time decoder for MDPC codes and applications to BIKE KEM," IACR Trans. on Cryptographic Hardw. and Embedded Syst., vol. 2022, no. 4, pp. 110-134, Aug. 2022.
[Baldi] P. Santini, M. Battaglioni, M. Baldi, and F. Chiaraluce, "Analysis of the error correction capability of LDPC and MDPC codes under parallel bit-flipping decoding and application to cryptography," IEEE Trans. on Commun., vol. 68, no. 8, pp. 4648-4660, Aug. 2020.
[ErrorFloor] S. Arpin et al., "A study of error floor behavior in QC-MDPC codes," Proc. Intl. Conf. on Post-Quantum Cryptogr., pp. 89-103, Sep. 2022.
[GuneysuDATE] I. V. Maurich and T. Güneysu, "Lightweight code-based cryptography: QC-MDPC McEliece encryption on reconfigurable devices," Proc. IEEE Design, Autom. & Test in Eur. Conf. & Exhib., pp. 1-6, Mar. 2014.
[Guneysu] I. V. Maurich, T. Oder, and T. Güneysu, "Implementing QC-MDPC McEliece encryption," ACM Trans. on Embedded Comput. Syst., vol. 14, no. 3, pp. 1-27, Apr. 2015.
[Hu] J. Hu and R. Cheung, "Area-time efficient computation of Niederreiter encryption on QC-MDPC codes for embedded hardware," IEEE Trans. on Computers, vol. 66, no. 8, pp. 1313-1325, Aug. 2017.
[XieSparse] Z. Xie and X. Zhang, "Sparsity-aware medium-density parity-check decoder for McEliece cryptosystems," IEEE Trans. on Circuits and Syst.-II, vol. 70, no. 9, pp. 3343-3347, Sep. 2023.
[BaldiAttack] P. Santini, M. Battaglioni, F. Chiaraluce, and M. Baldi, "Analysis of reaction and timing attacks against cryptosystems based on sparse parity-check codes," Proc. Code-Based Cryptogr.: 7th Intl. Workshop, pp. 115-136, Jul. 2019.
[Reaction] Q. Guo, T. Johansson, and P. Stankovski, "A key recovery attack on MDPC with CCA security using decoding errors," Proc. ASIACRYPT, pp. 789-815, Dec. 2016.
[Minsum] J. Chen and M. Fossorier, "Density evolution for two improved BP-based decoding algorithms of LDPC codes," IEEE Commun. Lett., vol. 6, no. 5, pp. 208-210, May 2002.
[McElieceSlicedMP] J. Cai and X. Zhang, "Low-complexity parallel Min-sum medium-density parity-check decoder for McEliece cryptosystem," IEEE Trans. on Circuits and Syst.-I, Sep. 2023.
[SlicedLDPC] L. Liu and C.-J. Shi, "Sliced message passing: high throughput overlapped decoding of high-rate low-density parity-check codes," IEEE Trans. on Circuits and Syst.-I, vol. 55, no. 11, pp. 3697-3710, Dec. 2008.
[LayeredLDPC] M. Mansour and N. Shanbhag, "A 640-Mb/s 2048-bit programmable LDPC decoder chip," IEEE Journ. of Solid-State Circuits, vol. 41, no. 3, pp. 684-698, Mar. 2006.
[MyBook] X. Zhang, VLSI Architectures for Modern Error Correcting Codes, CRC Press, Jul. 2015.
[MultiBlockRow] X. Zhang and Y. Tai, "High-speed multi-block-row layered decoding for quasi-cyclic LDPC codes," Proc. IEEE Global Conf. on Signal and Info. Processing, pp. 11-14, Dec. 2014.
http://arxiv.org/abs/2407.13612v1
20240718155207
Rigidity of symmetric frameworks with non-free group actions on the vertices
[ "Alison La Porta", "Bernd Schulze" ]
math.CO
[ "math.CO", "52C25 (primary), 20C35, 05B35 (secondary)" ]
Rigidity of symmetric frameworks with non-free group actions on the vertices Alison La Porta (School of Mathematical Sciences, Lancaster University, UK; corresponding author) and Bernd Schulze (School of Mathematical Sciences, Lancaster University, UK) § ABSTRACT For plane frameworks with reflection or rotational symmetries, where the group action is not necessarily free on the vertex set, we introduce a phase-symmetric orbit rigidity matrix for each irreducible representation of the group. We then use these generalised orbit rigidity matrices to provide necessary conditions for infinitesimal rigidity for frameworks that are symmetric with a cyclic group that acts freely or non-freely on the vertices. Moreover, for the reflection, the half-turn, and the three-fold rotational group in the plane, we establish complete combinatorial characterisations of symmetry-generic infinitesimally rigid frameworks. This extends well-known characterisations for these groups to the case when the group action is not necessarily free on the vertices. The presence of vertices that are fixed by non-trivial group elements requires the introduction of generalised versions of group-labelled quotient graphs and leads to more refined types of combinatorial sparsity counts for characterising symmetry-generic infinitesimal rigidity. Keywords: bar-joint framework; infinitesimal rigidity; symmetry; non-free group action; gain graph. § INTRODUCTION Largely motivated by problems from the applied sciences such as engineering, robotics, biophysics, materials science, and computer-aided design, where structures are often symmetric, there has recently been significant interest in studying the impact of symmetry on the infinitesimal rigidity of (bar-joint) frameworks and related geometric constraint systems. We refer the reader to <cit.> for an introduction to the theory and to <cit.> for a summary of recent results. One line of research in this area, which has seen a lot of progress lately, is to study when a symmetry-generic framework (i.e. a framework that is as generic as possible under the given symmetry constraints) is “forced-symmetric rigid”, in the sense that it has no non-trivial motion that preserves the original symmetry of the framework. See <cit.> for some key results on this topic for finite symmetric frameworks. (For an overview of the corresponding results on infinite periodic or crystallographic frameworks, see <cit.>.) Another active line of research deals with the more difficult problem to determine when a symmetry-generic framework has no non-trivial motion at all. Since in this case the original symmetry of the framework is allowed to be destroyed by a motion, the framework is sometimes said to be “incidentally symmetric” – a term coined by Robert Connelly. This problem was first tackled for the special class of isostatic (i.e. minimally infinitesimally rigid) frameworks in the plane <cit.>. While there is ongoing work on this case (see e.g.
<cit.>), the more general question of when a symmetry-generic framework is infinitesimally rigid – rather than just isostatic – is more challenging, as not every symmetric framework contains an isostatic sub-framework with the same symmetry on the same vertex set. Combinatorial characterisations of incidentally symmetric infinitesimally rigid frameworks have so far only been obtained for special classes of cyclic groups in the Euclidean plane <cit.> (see also <cit.>), in some non-Euclidean normed planes <cit.> and for classes of body-bar and body-hinge frameworks with ℤ_2×⋯×ℤ_2 symmetry in Euclidean d-space <cit.>. A central result in the theory is that the rigidity matrix of a symmetric framework can be transformed into a block-decomposed form, where each block corresponds to an irreducible representation of the group <cit.>. This breaks up the infinitesimal rigidity analysis into independent sub-problems, one for each block matrix. For forced symmetric rigidity, one only focuses on the block matrix corresponding to the trivial irreducible representation <cit.>. The problem of characterising incidentally symmetric infinitesimal rigidity may be solved by characterising the maximum rank of each block matrix for a symmetry-generic framework. By setting up phase-symmetric orbit rigidity matrices that are equivalent to, but are easier to work with than the block matrices in the block-decomposed rigidity matrix, this may be achieved via a Henneberg-type recursive construction for the corresponding group-labelled quotient graphs. This has been the most common approach for solving problems on the rigidity of incidentally symmetric frameworks (see e.g. <cit.>). However, so far, all of the work on forced and incidentally symmetric infinitesimal rigidity (except for the results for symmetric isostatic frameworks mentioned above, and for the result on forced-symmetric rotationally-symmetric frameworks given in <cit.>) has made the assumption that the symmetry group acts freely on the vertex set of the framework. This simplifies the definition of the group-labelled quotient graph, the structure of the corresponding orbit rigidity matrices, and the types of sparsity counts on the group-labelled quotient graphs that appear in characterisations of symmetry-generic infinitesimal rigidity. In this paper we address this gap and start to extend the theory to symmetric frameworks where vertices may be fixed by non-trivial group elements. Closing this gap is not just of pure mathematical interest, but has important motivations arising from practical applications. For example, tools and results from symmetric rigidity theory have recently been applied to the design and analysis of material-efficient long-span engineering structures such as gridshell roofs and cable nets <cit.>. For this application, it is crucial to understand the infinitesimal (or equivalently static) rigidity properties of the vertical 2D projections of the 3D structures, known as form diagrams. For structural optimisation reasons, these form diagrams (which are planar bar-joint frameworks that are often symmetric) frequently have vertices that are fixed by reflections and rotations. Other application areas for which it is important to understand the infinitesimal rigidity of symmetric frameworks where the group acts non-freely on the vertex set include the design of distributed control laws for multi-robot formations <cit.> and the analysis of geometric constraint systems appearing in computer-aided design <cit.>. 
In this paper, we first extend the definition of a group-labelled quotient graph (also known as a quotient “gain graph") for all cyclic groups to include the possibility of having vertices that are fixed by non-trivial group elements (Section <ref>). This allows us to define generalised phase-symmetric orbit rigidity matrices for these groups (Section <ref>). In Section <ref> we then use these matrices to establish necessary conditions for frameworks with reflection or rotational symmetry in the plane to be infinitesimally rigid. These conditions are given in terms of sparsity counts for the corresponding quotient gain graphs. In Sections <ref> and <ref> we then focus on the reflection group, half-turn group and rotational group of order 3 and show that the conditions given in Section <ref> are also sufficient for a symmetry-generic framework in the plane to be infinitesimally rigid. This extends the corresponding results for the case when the group acts freely on the vertex set of the graph given in <cit.>. The proofs of our main results in Section <ref> are based on a Henneberg-type inductive construction on the relevant quotient gain graphs, using the graph operations described in Section <ref>. Finally, in Section <ref> we describe some avenues of future work. In a second paper <cit.>, we establish the corresponding combinatorial characterisations for all cyclic (rotational) groups of odd order up to 1000 and for the cyclic groups of order 4 and 6, extending the results of Clinch, Ikeshita and Tanigawa <cit.> to the case where there is a vertex that is fixed by the rotation. This is done in a separate paper, since the sparsity counts and hence the proofs become significantly more complex for these groups. In that paper we will also show that for even order groups of order at least 8, the standard sparsity counts are not sufficient. § INFINITESIMAL RIGIDITY OF FRAMEWORKS We start by reviewing some basic terms and results from rigidity theory. See <cit.>, for example, for further details. A (bar-joint) framework in ℝ^d is a pair (G,p) where G is a finite simple graph and p:V(G)→ℝ^d is a map such that p(u)≠ p(v) for all u≠ v∈ V(G). We also refer to (G,p) as a realisation of the graph G, to G as the underlying graph of (G,p), and to p as a configuration. We will sometimes use p_v to denote p(v). Unless explicitly stated otherwise, we will assume throughout the paper that p(V(G)) affinely spans ℝ^d. An infinitesimal motion of a d-dimensional framework (G,p) is a function m:V(G)→ℝ^d such that for all {u,v}∈ E(G), (p_u-p_v)^T(m(u)-m(v))=0. An infinitesimal motion m:V(G)→ℝ^d of a framework (G,p) is called trivial if there is a skew-symmetric matrix M∈ M_d(ℝ) and a d-dimensional vector t such that m(u)=Mp_u+t for all u∈ V(G). Otherwise, it is called an infinitesimal flex. We say a framework (G,p) in ℝ^d is infinitesimally rigid if all of its infinitesimal motions are trivial. Otherwise, we say (G,p) is infinitesimally flexible. It is a well known fact that if the points of a framework affinely span all of ℝ^d, then the space of trivial infinitesimal motions has dimension d(d+1)/2. It is sometimes useful to see a motion m:V(G)→ℝ^d as a column vector with d|V(G)| entries. In order to study the infinitesimal rigidity of a framework, we analyse its rigidity matrix. Given a d-dimensional framework (G,p), the rigidity matrix of (G,p), denoted by R(G,p), is a |E(G)|× d|V(G)| matrix, whose rows correspond to the edges of G and where each vertex is represented by d columns.
Given an edge e={u,v}∈ E(G), the row corresponding to e in R(G,p) is [ 0 … 0 [p_u-p_v]^T 0 … 0 [p_v-p_u]^T 0 … 0 ], where the d-dimensional row vector [p_u-p_v]^T is in the d columns corresponding to u, [p_v-p_u]^T is in the columns corresponding to v, and there are zeros everywhere else. By Equation (<ref>), the kernel of R(G,p) is the space of infinitesimal motions of (G,p). From this follows the well-know fact that (G,p) is infinitesimally rigid if and only if rank(R(G,p))=d|V(G)|-d(d+1)/2 or G=K_|V(G)| and the points p_i (for i=1,…,|V(G)|) are affinely independent. A self-stress of a framework (G,p) is a map ω:E(G)→ℝ such that for all u∈ V(G), it satisfies ∑_v:{u,v}∈ E(G)ω({u,v})(p_u-p_v)=0. Notice that ω is a self-stress if and only if R(G,p)^Tω=0. So, (G,p) has a non-zero self-stress if and only if there is a non-trivial row dependency in R(G,p). Given a graph G, we say a configuration p:V(G)→ℝ^d is generic if R(G,p) has maximum rank among all configurations of G in ℝ^d. If p is generic then we say (G,p) is a generic framework. The set of all generic configurations of G is a dense, open subset of ℝ^d|V(G)|. § INFINITESIMAL RIGIDITY OF SYMMETRIC FRAMEWORKS In this section, we will go through what it means for a graph and a framework to be symmetric and we will introduce some of the tools we will be using to study the rigidity of such frameworks. Throughout the paper, we let Γ≃ℤ_k for some k∈ℕ. We will later restrict k to be 2 or 3. §.§ Symmetric graphs The set of all automorphisms of a graph G forms a group under composition called the automorphism group of G and is denoted Aut(G). We say G is ℤ_k-symmetric if Aut(G) contains a subgroup isomorphic to ℤ_k={0,1,…,k-1}. For notational convenience, we will often identify ℤ_k with the multiplicative group Γ=<γ> via the isomorphism defined by 1↦γ. (Later on, geometrically, γ will correspond to the rotation by 2π/k about the origin in the plane.) The terms ℤ_k-symmetric graph and Γ-symmetric graph are used interchangeably. Throughout the paper, the order of Γ is |Γ|=k. Given γ∈Γ, we say γ fixes v∈ V(G) if γ(v)=v, and γ fixes a subset U⊆ V(G) if it fixes all u∈ U. Similarly, γ fixes e∈ E(G) if γ(e)=e, and it fixes a subset F⊆ E(G) if it fixes all f∈ F. We define the stabiliser of a vertex v∈ V(G) to be S_Γ(v)={γ∈Γ:γ fixes v}, and the stabiliser of an edge e∈ E(G) to be S_Γ(e)={γ∈Γ:γ fixes e}. For simplicity, we will assume throughout the paper that any vertex fixed by a non-trivial element of Γ is fixed by all elements of Γ. This is justified, as our main focus is the infinitesimal rigidity of symmetric frameworks in the plane, and the definition of a symmetric framework (see Section <ref>) will imply, for d=2, that any vertex that is fixed by a non-trivial element of Γ, |Γ|≥3, will have to be placed at the origin, and conversely, since framework configurations are injective, any vertex at the origin must be fixed by every element of the group. We define V_0(G):={v∈ V(G):S_Γ(v)=Γ} and V(G):={v∈ V(G):S_Γ(v)={id}}. We also use the notation E(G):={e∈ E(G):S_Γ(e)={id}}. We will refer to the elements of V_0(G),V(G) and E(G) as the fixed vertices, free vertices and free edges of G, respectively. §.§ Gain graphs Let G be a Γ-symmetric graph. We denote the orbit of a vertex v (respectively an edge e) of G̃ by Γ v (respectively Γ e). Thus, Γ v={γ v| γ∈Γ} and Γ e={γ e| γ∈Γ}. The collection of all vertex orbits and edge orbits of G̃ is denoted by V and E, respectively. 
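To fix the notation, here is a small example of our own (it is not one of the examples referenced in the figures below). Let Γ={id,γ}≃ℤ_2 act on the 4-cycle G̃ with V(G̃)={v_1,v_2,v_3,v_4} and E(G̃)={{v_1,v_2},{v_2,v_3},{v_3,v_4},{v_4,v_1}} via the automorphism γ with γ(v_1)=v_3, γ(v_2)=v_4, γ(v_3)=v_1 and γ(v_4)=v_2. Then the vertex orbits are Γ v_1={v_1,v_3} and Γ v_2={v_2,v_4}, the edge orbits are Γ{v_1,v_2}={{v_1,v_2},{v_3,v_4}} and Γ{v_2,v_3}={{v_2,v_3},{v_4,v_1}}, and every vertex and every edge is free. The quotient graph defined next therefore has two vertices joined by a pair of parallel edges.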
The quotient graph G̃ /Γ of G̃ is a multigraph G with vertex set V(G)=V, edge set E(G)=E and incidence relation satisfying Γ e=Γ uΓ v if some (equivalently every) edge in Γ e is incident with a vertex in Γ u and a vertex in Γ v. Notice that the partitioning of V(G̃) given in the previous section induces a partition of V(G) into the sets V_0(G):={Γ v∈ V(G):|Γ v|=1} and V(G)={Γ v∈ V(G):|Γ v|=|Γ|} of fixed and free vertices of G, respectively. Let G̃ be a Γ-symmetric graph with quotient graph G. For each vertex orbit Γ v we fix a representative vertex v^⋆∈Γ v. We also fix an orientation on the edges of the quotient graph G. For each directed edge Γ e = (Γ u, Γ v) in the directed quotient graph, we assign the following labelling (or “gain"): * If Γ u,Γ v∈V(G), then there exists a unique γ∈Γ, referred to as the gain on Γ e, such that {u^⋆,γ v^⋆}∈Γ e. * If at least one of Γu,Γv is fixed, say Γ u∈ V_0(G), then Γ e={{u^⋆,γ v^⋆}| γ∈Γ}. The size of Γ e depends on the size of Γ v. If |Γ e|=|Γ|, then we define the gain on Γ e to be any γ∈Γ. Otherwise, |Γ e|=1, and we define the gain on Γ e to be id. This gain assignment ψ: E(G) →Γ is well-defined and the pair (G,ψ) is called the (quotient) Γ-gain graph of G̃. Moreover, in a slight abuse of topological terminology, G̃ is called the covering graph (or lifting) of (G,ψ). In each of the cases above, we could re-direct Γ e from Γ v to Γ u and re-label it with the group inverse of the original label chosen. Up to this operation, and up to the choice of representatives, and of gains for edges incident with a fixed and a free vertex, this process gives a unique quotient Γ-gain graph of G̃. We call two Γ-gain graphs equivalent if they are obtained from the same Γ-symmetric graph by applying this process. In general, this process gives rise to a class of group-labelled, directed multigraphs, called Γ-gain graphs. A Γ-gain graph is a pair (G,ψ), where G is a directed multigraph and ψ:E(G)→Γ is a function that assigns a label to each edge such that, for some partition V(G)=V_0(G) ∪̇ V(G), where no vertex in V_0(G) has a loop or is incident to parallel edges, the following conditions are satisfied: * for all e∈ E(G) with both endpoints in V_0(G), ψ(e)=id; * if e,f∈ E(G) are parallel and have the same direction, then ψ(e)≠ψ(f). If they are parallel and have opposite directions, then ψ(e)≠ψ(f)^-1; * if e∈ E(G) is a loop, then ψ(e)≠id. We call ψ the gain function of (G,ψ). The elements of V_0(G) and V(G) are called, respectively, the fixed and free vertices of (G,ψ). Throughout this paper, we assume that a loop at a vertex v adds 2 to the degree of v. Let (G,ψ) be a Γ-gain graph. Using a generalisation of the process described in Section 3.2 of <cit.>, we can obtain a unique Γ-symmetric graph G̃, which we call the covering (or lifting) of (G,ψ), as follows. For each v∈ V_0(G), V(G̃) contains v; for each v∈V(G), V(G̃) contains {γ v| γ∈Γ}. For each edge (u,v)∈ E(G) with u∈ V_0(G), E(G̃) contains {u,v} if v∈ V_0(G), and E(G̃) contains {{u,γ v}| γ∈Γ} if v∈V(G); for each (u,v)∈ E(G) with u,v∈V(G) and label γ, E(G̃) contains Γ{u,γ v}. This process gives a unique lifting, and is inverse to the one shown at the beginning of the section. Thus, each Γ-gain graph uniquely determines a simple Γ-symmetric graph (up to equivalence). When drawing a Γ-gain graph (G,ψ) it is important to distinguish between the fixed and free vertices of (G,ψ). We will be doing so by representing the elements of V_0(G) and V(G) by black and white circles, respectively. 
In Figure <ref>, we consider the cyclic group Γ={id,γ} of order 2, and we give an example of a Γ-gain graph and its lifting. From now on, we will adopt the notation G̃ to denote the covering of a Γ-gain graph (G,ψ). As before, given v∈ V(G), we use v^⋆ to denote the representative of v in V(G). If ψ is clear from the context then we often write G for (G,ψ). §.§ Gain sparsity of a symmetric graph In this section we will introduce the criteria we will use to characterise infinitesimally rigid Γ-symmetric graphs. We start with the notions of balancedness and near-balancedness, which can also be found (for free group actions) in Section 4.1 of <cit.> and Section 1 of <cit.> . The notion of balancedness can also be found in Section 2.2 of <cit.>. Let (G,ψ) be a Γ-gain graph and let W be a walk in (G,ψ) of the form v_1e_1v_2e_2… e_tv_t+1. We say that the gain of W is ψ(W)=∏_i=1^tψ(e_i)^sign(e_i), where sign(e_i)=1 if e_i is directed from v_i to v_i+1, and sign(e_i)=-1 otherwise. Let v∈V(G). If G is connected, we use the notation <E(G)>_ψ,v, or simply <G>_ψ,v, to denote the group generated by {ψ(W):W is a closed walk in G starting at v and with no fixed vertex}. In <cit.>, it was shown that, given two free vertices u,v∈ V(G), <G>_ψ,v and <G>_ψ,u are conjugate. Since the groups we work with are cyclic, the two subgroups are actually the same. When clear, we omit ψ,v and write <G>. An edge set E is balanced if the edge set E' obtained from E by removing all fixed vertices and their incident edges, either has no cycle or every cycle in E' has gain id. Otherwise, we say E is unbalanced. We say (G,ψ) is balanced (respectively unbalanced) if E(G) is balanced (respectively unbalanced). For 0≤ m≤2,0≤ l≤ 3, we say a Γ-gain graph (G,ψ) is (2,m,l)-sparse if |E(H)|≤ 2|V(H)|+m|V_0(H)|-l for all subgraphs H of G, with E(H)≠∅ in the case of l=3, and we say it is (2,m,l)-tight if it is (2,m,l)-sparse and |E(G)|=2|V(G)|+m|V_0(G)|-l. We abbreviate (2,2,l)-sparse (equivalently, (2,2,l)-tight) to (2,l)-sparse (equivalently, (2,l)-tight). The covering G̃ of a balanced Γ-gain graph (G,ψ) will have a non-zero self-stress if G is not (2,3)-sparse, regardless of the size of V_0(G). In fact, if |V_0(G)|≤ 1, then clearly G̃ will have at least |Γ| non-zero self-stresses with mutually disjoint support. See e.g. Figures <ref> (a) and (b). Note also that the choice of vertex orbit representatives for constructing the quotient Γ-gain graph has no effect on the gain of any edges incident to a fixed vertex, since those edges can have any gain. So, given a cycle C of (G,ψ) containing a fixed and a free vertex, for each γ∈Γ, there is a choice of representatives such that ψ(C)=γ (see cycles v_1v_3v_0 and v_4v_3v_0 in Figures <ref> (b) and (c), respectively). Let H_1,H_2 be connected balanced Γ-gain graphs such that V_0(H_1)=V_0(H_2)=∅. If H_1∩ H_2 is connected, then H_1∪ H_2 is balanced. Since balancedness of a gain graph solely depends on its edges joining free vertices, it is easy to see that the following holds. Let H_1,H_2 be connected Γ-gain graphs. Assume that the graph obtained from H_1∩ H_2 by removing its fixed vertices is connected. If H_1,H_2 are balanced, then so is H_1∪ H_2. Suppose F is a connected subset of E(G) with V_0(F)=∅. We say F is near-balanced if it is unbalanced, and there exist a vertex v of G[F], called the base vertex of G[F], and γ∈Γ such that, for all closed walks W in F starting from v and not containing v as an internal vertex, ψ(W)∈{id,γ,γ^-1}. 
A subgraph H of (G,ψ) with no fixed vertex is said to be near-balanced if E(H) is near-balanced. Figure <ref> shows a near-balanced ℤ_5-gain graph and its covering. If <H>≃ℤ_2 or <H>≃ℤ_3, then H is always near-balanced. Hence, we say H (equivalently, E(H)) is proper near-balanced if it is near-balanced and <H>≄ℤ_2,ℤ_3. Let (G,ψ) be a Γ-gain graph. Let m,l be non-negative integers such that 0≤ m≤2,0≤ l≤3,m≤ l. (G,ψ) is called (2,m,3,l)-gain-sparse if * any balanced subgraph H of (G,ψ) with E(H)≠∅ is (2,3)-sparse; * |E(H)|≤ 2|V(H)|+m|V_0(H)|-l for any subgraph H of (G,ψ) with E(H)≠∅. (G,ψ) is called (2,m,3,l)-gain-tight if it is (2,m,3,l)-gain-sparse and |E(G)|=2|V(G)|+m|V_0(G)|-l. Let (G,ψ) be a Γ-gain graph. Suppose that, for some 0≤ m≤2,0≤ l≤3, m≤ l, (G,ψ) is (2,m,3,l)-gain sparse, and let H be a balanced subgraph of (G,ψ) with E(H)≠∅. Then H must satisfy |E(H)|≤2|V(H)|-3, as well as |E(H)|≤2|V(H)|+m|V_0(H)|-l. It is easy to check that, whenever (2-m)|V_0(H)|>3-l, the latter condition is stronger than the former. An argument similar to the proof of Lemma 4.13 in <cit.> shows the following. For 0≤ m≤2,1≤ l≤3 such that m≤ l, any (2,m,l)-tight graph with non-empty edge-set has exactly one connected component with non-empty edge set (but may have other connected components consisting of isolated vertices). Fix 0≤ m≤2,1≤ l≤3 such that m≤ l. Let c_0≥0,c≥1 be integers such that c-c_0≥1, and (G,ψ) be a (2,m,l)-tight graph with connected components H_1,…,H_c, of which H_1,…,H_c_0 are isolated vertices, and H_c_0+1,…,H_c have non-empty edge sets. Assume, by contradiction, that c-c_0≥2. Then, |E(G)|=∑_i=c_0+1^c|E(H_i)| ≤2∑_i=c_0+1^c|V(H_i)|+m∑_i=c_0+1^c|V_0(H_i)|-(c-c_0)l ≤2|V(G)|+m|V_0(G)|-(c-c_0)l, where the last inequality holds with equality if c_0=0. Since (c-c_0)≥2,l≥1, this is strictly less than 2|V(G)|+m|V_0(G)|-l, which contradicts the fact that (G,ψ) is (2,m,l)-tight. If |Γ|≥4,2≤ j≤|Γ|-2, then there are additional conditions that Γ-gain graphs must satisfy in order for their liftings to have ρ_j-symmetrically isostatic realisations. Hence, we introduce more refined sparsity conditions. First, we need the following notions, which may also be found in Section 4.3 of <cit.> and Section 2 of <cit.>. See also <cit.>. Let k:=|Γ|≥4 and (G,ψ) be a Γ-gain graph. For 2≤ j≤ k-2,-1≤ i≤ 1, we define the following sets: S_i(k,j)={n∈ℕ: 2≤ n, n|k, j≡ i (mod n)} if j is even, and S_i(k,j)={n∈ℕ: 2<n, n|k, j≡ i (mod n)} if j is odd. We say a connected subset F of E(G) (equivalently, a connected subgraph H of G) is S_0(k,j) if <F>≃ℤ_n (equivalently, <H>≃ℤ_n) for some n∈ S_0(k,j). Similarly, we say F (equivalently, H) is S_±1(k,j) if <F>≃ℤ_n (equivalently, <H>≃ℤ_n) for some n∈ S_-1(k,j)∪ S_1(k,j). We say F (equivalently, H) is S(k,j) if it is either S_0(k,j) or S_±1(k,j). We also define the function f_k^j on 2^E(G) by f_k^j(F)=∑_X∈ C(F){2|V(X)|-3+α_k^j(X)}, where F is a subset of E(G), C(F) denotes the set of connected components of F, and α_k^j(X)= 0 if X is balanced; 1 if j is odd and <X>≃ℤ_2; 2-|V_0(X)| if X is S_±1(k,j); 2-2|V_0(X)| if X is S_0(k,j), or if |V_0(X)|=0 and X is proper near-balanced; 3-2|V_0(X)| otherwise. Recall that the concept of near-balancedness is only defined on graphs with no fixed vertices. Hence, if X is near-balanced, we assume by default that it has no fixed vertices. It was shown in <cit.> that S_0(k,j)∩ S_i(k,j)=∅ for i=1,-1, and hence the functions α_k^j and f_k^j are well-defined. See also <cit.>. Let k:=|Γ|≥4,2≤ j≤ k-2 and (G,ψ) be a Γ-gain graph.
(G,ψ) is said to be ℤ^j_k-gain sparse if |E(H)|≤ f_k^j(E(H)) for all non-empty subgraphs H of G. It is said to be ℤ^j_k-gain tight if it is ℤ^j_k-gain sparse and |E(G)|=f_k^j(E(G)). §.§ Symmetric frameworks Let G be a Γ-symmetric graph, and τ:Γ→ O(ℝ^d) be a faithful representation. We say a realisation (G,p) of G is τ(Γ)-symmetric if τ(γ)p(v)=p(γ v) for all γ∈Γ,v∈ V(G). Notice that we are now realising the group Γ geometrically. For instance, if |Γ|=2, then τ(Γ) can be identified either with the rotation group 𝒞_2:={id,C_2}, where C_2 is the rotation of ℝ^2 by π around the origin, or 𝒞_s={id,σ}, where σ is a reflection whose mirror line goes through the origin. By applying a rotation, we may always assume that the mirror line is the y-axis. The Γ-symmetric graph in Figure <ref>, for example, can be interpreted as a 𝒞_2-symmetric framework. Similarly, for k:=|Γ|≥ 3, we identify τ(Γ) with the group 𝒞_k which is generated by a counterclockwise rotation about the origin by 2π/k. Consider a τ(Γ)-symmetric framework (G̃,p̃). By definition, G̃ must be a Γ-symmetric graph and so it has a quotient Γ-gain graph (G,ψ). Then, for all v∈ V(G), we can define p_v:=p̃(v^⋆), where v^⋆ is the representative of v in V(G̃). This allows us to define the (quotient) τ(Γ)-gain framework of (G,p) to be the triplet (G,ψ,p). We say p (or, equivalently p̃, (G̃,p̃), (G,ψ,p)) is τ(Γ)-generic if rank(R(G̃,p))≥rank(R(G̃,q)) for all τ(Γ)-symmetric realisations (G̃,q) of G. The set of all τ(Γ)-generic configurations of G̃ is a dense, open subset of the set of τ(Γ)-symmetric configurations of G̃. §.§ Block-diagonalisation of the rigidity matrix Recall that Γ=<γ> is isomorphic to ℤ_k through an isomorphism which sends γ to 1. From group representation theory we know that Γ has k irreducible representations ρ_0,…,ρ_k-1 such that ρ_j:Γ→ℂ∖{0} sends γ^r∈Γ to ω^rj, where ω=e^2π i/k. For some faithful representation τ:Γ→ O(ℝ^d), let (G̃,p̃) be a τ(Γ)-symmetric framework. We define P_V(G̃):Γ→ GL(ℝ^|V(G̃)|) to be the linear representation of Γ that sends an element γ'∈Γ to the matrix [δ_ũ,γ'ṽ]_ũ,ṽ, where δ denotes the Kronecker delta symbol. We also define P_E(G̃):Γ→ GL(ℝ^|E(G̃)|) to be the linear representation of Γ that sends an element γ'∈Γ to the matrix [δ_ẽ,γ'f̃]_ẽ,f̃. Theorem 3.1 in <cit.> shows the following. For all γ∈Γ, P_E(G̃)^-1(γ)R(G̃,p̃)(τ⊗ P_V(G̃))(γ)=R(G̃,p̃). By Schur's lemma, this implies that the rigidity matrix of (G̃,p̃) block-decomposes with respect to suitable symmetry-adapted bases, which subdivides the column space into the direct sum of the spaces V^0,…,V^k-1, where each V^j is the (τ⊗ P_E(Ṽ))-invariant subspace corresponding to ρ_j. Similarly, the row space can be written as the direct sum of k spaces W^0,…,W^k-1, each being the P_E(G̃)-invariant subspace corresponding to ρ_j (for details, see Section 3.2 of <cit.>). Hence, we can write the rigidity matrix in the form R̃(G̃,p̃)=[ R̃_0(G̃,p̃) 0; ⋱ ; 0 R̃_k-1(G̃,p̃) ], where each R̃_j(G̃,p̃) is determined by some ρ_j. This decomposition into subspaces also allows us to define the following. With the notation above, we say an infinitesimal motion m̃:V(G̃)→ℂ^d|V(G̃)| is symmetric with respect to ρ_j (or ρ_j-symmetric) if it lies in V^j. Since all irreducible representations ρ_0,…,ρ_k-1 of Γ are 1-dimensional, an infinitesimal motion m̃:V(G̃)→ℂ^d|V(G̃)| is ρ_j-symmetric if and only if for all γ∈Γ and all v∈ V(G̃), m̃(γ v)=ρ_j(γ)τ(γ)m̃(v), where ρ_j(γ) indicates the complex conjugate of ρ_j(γ) (for details, see Section 4.1.2 in <cit.>). 
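To illustrate this characterisation with a small example of our own, take k=2 and τ(Γ)=𝒞_2, so that τ(γ)=-I_2 and ρ_0(γ)=1, ρ_1(γ)=-1 (both real, so the complex conjugation plays no role). For j=1 the condition above reads m̃(γ v)=(-1)(-I_2)m̃(v)=m̃(v), so a ρ_1-symmetric motion assigns the same velocity vector to v and γ v; in particular, every infinitesimal translation is ρ_1-symmetric. For j=0 the condition reads m̃(γ v)=-m̃(v), which is satisfied, for example, by the infinitesimal rotation m̃(v)=Jp̃(v) about the origin, where J is the 2×2 matrix with rows (0,-1) and (1,0), since m̃(γ v)=Jp̃(γ v)=J(-p̃(v))=-m̃(v).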
We usually refer to ρ_0-symmetric infinitesimal motions as fully-symmetric motions, since they exhibit the full symmetry. If k=2, we refer to ρ_1-symmetric infinitesimal motions as anti-symmetric motions, since the motion vectors are reversed by the non-trivial element of the group. Let k:=|Γ|≥2, (G̃,p̃) be a τ(Γ)-symmetric framework and 0≤ j≤ k-1. We say (G̃,p̃) is ρ_j-symmetrically isostatic if all ρ_j-symmetric infinitesimal motions of (G̃,p̃) are trivial and R̃_j(G̃,p̃) has no non-trivial row dependence. We usually refer to a ρ_0-symmetrically isostatic framework as a fully-symmetrically isostatic framework. If k=2, we refer to a ρ_1-symmetrically isostatic framework as an anti-symmetrically isostatic framework. For k:=|Γ|≥2,0≤ j≤ k-1, let (G,ψ,p) be a ρ_j-symmetrically isostatic τ(Γ)-gain framework. Then, by definition of τ(Γ)-genericity (recall Section <ref>), any τ(Γ)-generic realisation (G,ψ,q) of (G,ψ) is ρ_j-symmetrically isostatic. We will now start working in ℝ^2. The following is a well known fact and can be read off standard character tables <cit.>. See also the proof of Theorem 6.3 and Lemma 6.7 in <cit.>. Let τ:Γ→ O(ℝ^2) be a faithful representation. Given a τ(Γ)-symmetric framework (G,p), the following hold: (i) If τ(Γ)=𝒞_s, all infinitesimal rotations of (G,p) are ρ_1-symmetric. The space of infinitesimal translations of (G,p) decomposes into two 1-dimensional subspaces, one consisting of ρ_0-symmetric and the other of ρ_1-symmetric translations (as shown in Figure <ref>). (ii) If τ(Γ)=𝒞_2, all infinitesimal rotations of (G,p) are ρ_0-symmetric and all infinitesimal translations of (G,p) are ρ_1-symmetric. (iii) If τ(Γ)=𝒞_k for some k≥3, all infinitesimal rotations of (G,p) are ρ_0-symmetric. The space of infinitesimal translations of (G,p) decomposes into two 1-dimensional subspaces, one consisting of ρ_1-symmetric and the other of ρ_k-1-symmetric translations. § PHASE-SYMMETRIC ORBIT MATRICES We now establish `orbit matrices' which are equivalent to the matrices composing the block-diagonalised version of (G̃,p̃), and which we will denote O_j(G,ψ,p). These matrices can be written down directly without using representation theory and they allow us to establish a recursive construction of (2,m,3,l)-sparse graphs. Moreover, their structure allows us to work with Γ-gain graphs rather than Γ-symmetric graphs, deleting any redundancies that would be present in the rigidity matrix. §.§ Dimensions of the block matrices Since the diagonalisation of the rigidity matrix follows from the fact that it intertwines τ⊗ P_V(G̃) and P_E(G̃) (recall Lemma <ref>), the dimension of each block can be determined by studying the decomposition of τ⊗ P_V(G̃) and P_E(G̃). Let |Γ|=k. Let (G̃,p̃) be a τ(Γ)-symmetric framework, where τ:Γ→ O(ℝ^2) is a faithful representation. Let ρ_reg:Γ→ GL(ℝ^k) denote the regular representation of Γ, that sends γ∈Γ to the matrix [δ_g,γ h]_g,h, where δ again represents the Kronecker delta symbol. From representation theory, we know that ρ_reg is the direct sum of the irreducible representations of Γ. It is also easy to see that P_V(G̃) is the direct sum of |V(G)| copies of ρ_reg and |V_0(G)| copies of the 1×1 identity matrix, and so P_V(G̃)≃|V(G)|ρ_reg⊕|V_0(G)|I_1≃⊕_j=0^k-1|V(G)|ρ_j⊕ |V_0(G)|I_1. Given 0≤ j,j'≤ k-1, the character of ρ_j⊗ρ_j' is the coordinate-wise product of ρ_j and ρ_j'. So, the multiplicity of ρ_j in τ⊗ρ_reg is Tr(τ(id))=2. Hence, τ⊗ P_V(G̃)≃|V(G)|[τ⊗ρ_reg]⊕|V_0(G)|τ≃⊕_j=0^k-12|V(G)|ρ_j⊕|V_0(G)|τ. 
So, for each free vertex v∈ V(G), every block matrix contains two columns. Where the columns corresponding to the fixed vertices lie depends on the map τ:Γ→ O(ℝ^2). Let (G,p) be a τ(Γ)-symmetric framework for some faithful representation τ:Γ→ O(ℝ^2). The following statements hold: (i) If τ(Γ)=𝒞_s, R̃_0(G̃,p̃) and R̃_1(G̃,p̃) both have 2|V(G)|+|V_0(G)| columns. (ii) If τ(Γ)=𝒞_2, then R_0(G,p) has 2|V(G)| columns and R_1(G,p) has 2|V(G)| columns. (iii) If τ(Γ)=𝒞_k for some k≥3, then R_1(G,p) and R_k-1(G,p) both have 2|V(G)|+|V_0(G)| columns, and all the other blocks have 2|V(G)| columns. (iv) For all τ(Γ) and all even 0≤ j≤|Γ|-1, R_j(G,p) has |E(G)| rows and for all odd 0≤ j≤|Γ|-1, R_j(G,p) has |E(G)| rows (notice that when |Γ| is odd, E(G)=E(G)). First, let |Γ|=2 (so, τ(Γ) is either 𝒞_s or 𝒞_2), and recall that Γ has irreducible representations ρ_0,ρ_1, where ρ_0 is the identity representation, and ρ_1 sends the non-identity element γ of Γ to -1. Let τ_ref:Γ→ O(ℝ^2) be the reflection homomorphism that maps γ to diag(-1,1) and let τ_rot:Γ→ O(ℝ^2) be the two-fold rotation homomorphism that maps γ to diag(-1,-1). It is easy to see that τ_ref=ρ_0⊕ρ_1 and so τ_ref⊗ P_V(G̃)≃⊕_j=0^k-12|V(G)|ρ_j⊕|V_0(G)|τ_ref≃⊕_j=0,1(2|V(G)|+|V_0(G)|)ρ_j. Similarly, we have τ_rot=ρ_1⊕ρ_1. So, τ_rot⊗ P_V(G̃)≃⊕_j=0^k-12|V(G)|ρ_j⊕|V_0(G)|τ_rot≃(2|V(G)|)ρ_0⊕(2|V(G)|)ρ_1. (i) and (ii) follow. Now, let |Γ|=k≥3, so that τ(Γ)=𝒞_k. Let α=2π/k and Γ=⟨γ⟩, where γ∈Γ↦1∈ℤ_k through an isomorphism. The standard k-fold rotation homomorphism τ:Γ→ O(ℝ^2) is given by τ(γ)=[ cos(α) -sin(α); sin(α) cos(α) ]. We apply a complexification of the Euclidean plane with a change of basis from ℬ_1={[ 1 0 ]^T,[ 0 1 ]^T} to ℬ_2={1/2[ -1-i 1-i ]^T,1/2[ -1+i 1+i ]^T}, with the change of basis matrix M_1→2=1/2[ -1-i -1+i; 1-i 1+i ]. Then, τ(γ)_ℬ_2=M_1→2τ(γ)_ℬ_1M_1→2^-1=[ cos(α)-isin(α) 0; 0 cos(α)+isin(α) ]=[ ω 0; 0 ω ]. It follows that τ=ρ_1⊕ρ_k-1. Hence, τ⊗ P_V(G̃)≃⊕_j=0^k-12|V(G)|ρ_j⊕|V_0(G)|(ρ_1⊕ρ_k-1). Hence, (iii) holds. Finally, we prove (iv). Recall that, for some integer k≥2, Γ=<γ>≃ℤ_k through the isomorphism which maps γ to 1. Consider the edge set E(G). If k=2, any edge is clearly either free or fixed by that whole group. If k≥3, any edge is either free or fixed uniquely by id and δ:=γ^k/2. It was shown in Section 4.3 of <cit.> that for each e∈ E(G)∖E(G), all the blocks R_j(G,p) such that ρ_j(δ)=1 have one row, and all the other blocks have no rows (this argument does not use the fact that the action is free on the vertex set). It was also shown that each block R_j(G,p) has a row for each edge in |E(G)|. Since ρ_j(δ)=ρ_j(γ^k/2)=exp(2π ikj/2k)=exp(π ij), it follows that ρ_j(δ) is 1 if and only if j is even. Hence, for all even j, R_j(G,p) has |E(G)| rows, and for all odd j R_j(G,p) has |E(G)| rows. Let Γ=<γ> be isomorphic to ℤ_k through the isomorphism which sends γ to 1. Suppose k is even. An edge e={u,v} of G̃ is non-free if either both u,v∈ V_0(G̃), in which case S_Γ(e)=Γ, or if v=γ^k/2u, in which case S_Γ(e)={id,γ^k/2}. If τ(Γ)=𝒞_k, |V_0(G̃)|≤1, so there are no edges between fixed vertices. In Figure <ref> we show realisations of non-free edges in 𝒞_s-symmetric (a,b,c) and 𝒞_2-symmetric frameworks (d). Figures (a,b,d) show ρ_1-symmetric motions of such bars, whereas (c) shows a ρ_0-symmetric motion. For any ρ_1-symmetric velocity assignment m̃ to the vertices, the equation (m̃(v)-m̃(u))·(p̃(v)-p̃(u))=0 always holds. Hence, the edge e constitutes no constraint for ρ_1-symmetric infinitesimal rigidity. 
This is not the case for a ρ_0-symmetric velocity assignment. (Note that in (c) the equation only holds because the velocities are parallel to the mirror line.) The edge in (d) can also be seen as a subgraph of a 𝒞_k-symmetric framework (G̃,p̃), where k≥4 is even. Given an odd j with 2≤ j≤ k-2, (d) shows a ρ_j-symmetric motion of (G̃,p̃), restricted to that edge. §.§ Orbit matrices For each block matrix, we now construct an orbit matrix which has the same dimension and an isomorphic kernel. Thus, it will have the same nullity and rank and can be used for the corresponding symmetric rigidity analysis. The orbit matrix for j=0 is given in Definition 5.1 of <cit.>. In Section 4.1.2 of <cit.>, the orbit matrix was defined for all other 1≤ j≤|Γ|-1, for the special case where V_0(G)=∅. We model our definition of orbit matrices based on the definitions in <cit.> and <cit.>. Let k:=|Γ|, and let (G,ψ,p) be the τ(Γ)-gain framework of a τ(Γ)-symmetric framework (G,p), with respect to a faithful representation τ:Γ→ O(ℝ^2). Let U⊆ V(G) be equal to V_0(G) if τ(Γ)=𝒞_k, and 0≤ j≤ k-2,j≠1, and U=∅ otherwise. For u∈ V(G)∖ U, let M^j_τ(Γ)(u)=[ 0 1 ]^T if u∈ V_0(G) and τ(Γ)=𝒞_s,j=0 [ 1 0 ]^T if u∈ V_0(G) and τ(Γ)=𝒞_s,j=1 [ 1 -i ]^T if u∈ V_0(G) and τ(Γ)=𝒞_k for some k≥3,j=1 [ 1 i ]^T if u∈ V_0(G) and τ(Γ)=𝒞_k for some k≥3,j=k-1 I_2 otherwise. For each 0≤ j≤ k-1 and u∈ V(G)∖ U, let c_τ(Γ)^j(u)=rank(M^j_τ(Γ)(u)). When clear, we will often omit the symmetry group and simply write M_u^j and c_u^j for M^j_τ(Γ)(u) and c_τ(Γ)^j(u), respectively. Let u∈ V(G)∖ U and A_τ(Γ)^j(u):={m̃(u^⋆): m̃ is a ρ_j-symmetric motion of (G̃,p̃)}. (For 𝒞_s, for instance, if u∈ V_0(G), then A_𝒞_s^0(u)={[ 0 a ]^T:a∈ℝ}.) Then there is some basis ℬ of A_τ(Γ)^j(u) such that M_τ(Γ)^j(u) is the matrix whose columns are the elements of ℬ. This can be easily checked by applying Equation (<ref>). For details, see <cit.>. With the same notation as above, for 0≤ j≤ k-1, the ρ_j-orbit matrix O_j(G,ψ,p) of (G,ψ,p) is a matrix with c_u^j columns for each u∈ V(G)∖ U. If j is even, O_j(G,ψ,p) has |E(G)| rows. Otherwise, it has |E(G)| rows. Each edge e=(u,v)∈ E(G) which has a row in O_j(G,ψ,p), has row u v [ … (p_u-τ(ψ(e))(p_v))^TM^j_u … ρ_j(ψ(e))(p_v-τ(ψ^-1(e))(p_u))^TM^j_v … ] if u≠ v, and it has row u [ … (p_u+ρ_j(ψ(e))p_u-τ(ψ(e))p_u-ρ_j(ψ(e))τ(ψ^-1(e))p_u)^T … ]. otherwise. If u (respectively, v) lies in U, then the columns corresponding to u (respectively, v) vanish. For j=0, Definition <ref> coincides with the definition of the orbit matrix given in <cit.>. In the same paper, it was shown that R̃_0(G̃,p̃) is equivalent to O_0(G,ψ,p). Similarly, if e=(u,v) is such that u,v∈V(G), the row of e under Definition <ref> coincides with the row of e under the definition of the phase-symmetric orbit matrix O_j(G,ψ,p) defined in <cit.>, for all 0≤ j≤ k-1. In the same paper, it was shown that R̃_j(G̃,p̃) and O_j(G,ψ,p) have the same size and R̃_j(G̃,p̃)= O_j(G,ψ,p) for all 0≤ j≤ k-1 whenever V_0(G)=∅. Let k:=|Γ|, let (G̃,p̃) be a Γ-symmetric framework and let 0≤ j ≤ k-1. Let 𝒪 denote the set of vertex orbit representatives of G̃. Define the subset 𝒪'⊆ V_0(G̃) to be V_0(G̃) if τ(Γ)=𝒞_k and 0≤ j≤ k-2,j≠1, and ∅ otherwise. For some fixed u^⋆∈𝒪 and some free v^⋆∈𝒪, let {u^⋆,v^⋆}∈ E(G̃). For each δ∈Γ, let (G,ψ_δ,p) be a Γ-gain framework of (G̃,p̃) such that the edge e=(u,v) has gain δ. Fix γ∈Γ. Then a vector m lies in O_j(G,ψ_γ,p) if and only if m̃':𝒪∖𝒪'→ℝ^2 defined by m'(w^⋆)= M^j_w m(w) is the restriction of a ρ_j-symmetric motion m̃ of (G̃,p̃) to 𝒪∖𝒪'. 
Let m̃:V(G̃)→ℂ^2 be defined by m̃(γ w^⋆)=ρ_j(γ)τ(γ)m̃'(w^⋆) for all w^⋆∈𝒪∖𝒪',γ∈Γ [ 0 0 ]^T for all w^⋆∈𝒪',γ∈Γ. Clearly, m̃' is a restriction of m̃ to 𝒪∖𝒪'. Moreover, it is easy to see that m̃ is a ρ_j-symmetric motion of (G̃,p̃) if and only if it is an infinitesimal motion of (G̃,p̃). View m as a column vector. For each row r in O_j(G,ψ_γ,p) that represents an edge e=(u_1,u_2)∈ E(G), we check that rm is zero if and only if m̃ satisfies the conditions of being an infinitesimal motion of the framework on the subgraph induced by the elements of the orbit e. If u_1,u_2∈V(G), this has been shown in Section 4.1.2 in <cit.>. If u_1,u_2∈ V_0(G), then τ(Γ)=𝒞_s, since τ(Γ)=𝒞_k implies |V_0(G)|≤1 by definition of a framework. Since the row corresponding to {u_1^⋆,u_2^⋆} in R̃_1(G̃,p̃) is zero, we need only consider the case where j=0. However, by Remark <ref>, this case was already proven. Hence, we may assume that u_1∈ V_0(G),u_2∈V(G). Without loss of generality, we consider the edge e=(u,v), where u^⋆,v^⋆ are as defined in the statement. Note that the orbit of e is {{u^⋆,δ v^⋆}:δ∈Γ}. Let r be the row of e in O_j(G,ψ_γ,p). The map m̃ satisfies the conditions of being an infinitesimal motion of the framework on the subgraph induced by the elements of the orbit e if and only if, for all δ∈Γ <p̃(u^⋆)-p̃(δ v^⋆),m̃(u^⋆)-m̃(δ v^⋆)>=0. Since δ runs through all the elements of Γ, so does δγ. Hence, this is equivalent to saying that, for all δ∈Γ, <p̃(u^⋆)-p̃(δγ v^⋆),m̃(u^⋆)-m̃(δγ v^⋆)>=0. Since u is fixed, m̃(u^⋆)=m̃(δ u^⋆) and so, by the definitions of m̃ and a τ(Γ)-symmetric framework, this is equivalent to saying that, for all δ∈Γ, <p_u-τ(δγ)p_v,ρ_j(δ)τ(δ)M_u^jm(u)-ρ_j(δγ)τ(δγ)m(v)>=0. (If M_u^j is not defined, we ignore terms involving M_u^j.) This is equivalent to saying that, for all δ∈Γ, ρ_j(δ)(<p_u-τ(δγ)p_v,τ(δ)M_u^jm(u)>+<τ(δγ)p_v-p_u,ρ_j(γ)τ(δγ)m(v)>)=0. Notice that, since u is fixed, p_u=τ(δ)p_u for all δ∈Γ. Hence, since each τ(δ) is an orthogonal matrix, we may remove the τ(δ)'s from the inner products, and multiply each equation by ρ_j(δ), to see that this set of equations holds if and only if <p_u-τ(γ)p_v,M_u^jm(u)>+<τ(γ)p_v-p_u,ρ_j(γ)τ(γ)m(v)>=0. Similarly, since u is fixed, p_u=τ(γ)p_u, and so we may remove τ(γ) from the second inner product, and move the factor of ρ_j(γ) in the second inner product to the left, to obtain the equivalent equation <p_u-τ(γ)p_v,M_u^jm(u)>+<ρ_j(γ)[p_v-p_u],m(v)>=0. Again, since u is fixed, p_u=τ(γ^-1)p_u, and hence the above equation is equivalent to [p_u-τ(γ)p_v]^TM_u^jm(u)+ρ_j(γ)[p_v-τ(γ^-1)p_u]^Tm(v)=0, i.e. rm=0, as required. In Section 4.1, we saw that O_j(G,ψ_γ,p) and R̃_j(G̃,p̃) have the same dimension. Then, by Lemma <ref>, the following holds. With the same notation as in Lemma <ref>, for any 0≤ j ≤ k-1, rank (O_j(G,ψ_δ,p)) and rank (R̃_j(G̃,p̃)) coincide for all δ∈Γ. Recall that, when defining the gain graph associated to a symmetric graph, the gain of an edge incident to a fixed vertex and a free vertex could be chosen arbitrarily. As result of Lemma <ref> and Corollary <ref>, the choice of such a gain does not affect the rank of any of the orbit rigidity matrices. For convenience, we usually choose gain id for edges incident to fixed vertices. § NECESSITY OF THE SPARSITY CONDITIONS Let (G,ψ) be a Γ-gain graph. A switching at a free vertex v with γ∈Γ is an operation that generates a new function ψ':E(G)→Γ by letting ψ'(e)=γψ(e) if e is a non-loop edge directed from v, ψ'(e)=ψ(e)γ^-1 if e is a non-loop edge directed to v, and ψ'(e)=ψ(e) otherwise. 
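The effect of a switching is easy to prototype. The following short sketch is our own illustration and is not part of the formal development: it represents a ℤ_k-gain graph as a list of directed edges (tail, head, gain), with gains written additively as integers modulo k, and applies a switching at a free vertex exactly as in the definition above; loops keep their gain.

def switch(edges, v, g, k):
    # edges: list of (tail, head, gain) with gain in {0, ..., k-1}
    # v: the free vertex at which we switch; g: the switching gain
    new_edges = []
    for (tail, head, gain) in edges:
        if tail == head:                     # loops are unchanged
            new_edges.append((tail, head, gain))
        elif tail == v:                      # non-loop edge directed from v
            new_edges.append((tail, head, (g + gain) % k))
        elif head == v:                      # non-loop edge directed to v
            new_edges.append((tail, head, (gain - g) % k))
        else:                                # edges not incident with v
            new_edges.append((tail, head, gain))
    return new_edges

Since Γ is abelian, one checks directly that the gain of every closed walk avoiding fixed vertices is unchanged by a switching, so balancedness and the subgroups <G>_ψ,v are unaffected.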
We say two maps ψ,ψ':E(G)→Γ are equivalent if one can be obtained from the other by applying a sequence of switchings, and/or by changing the gains of the edges incident to a fixed and a free vertex (recall the definition of equivalence given in Section <ref>). In the proof of Lemma 5.2 in <cit.>, it was shown that the rank of the fully-symmetric orbit matrix is invariant under switchings whenever V_0(G)=∅. Proposition 5.2 in <cit.> states that this is also true for all phase-symmetric orbit matrices whenever V_0(G)=∅. Since the proof is not explicitly given in <cit.>, we include it here for completeness, and we drop the restriction on V_0(G). Together with Corollary <ref> this will then show that the rank of any orbit matrix is invariant under equivalence. Let |Γ|=k, let (G,ψ,p) be a τ(Γ)-gain framework and let γ≠id∈Γ. Let ψ' be obtained from ψ by applying a switching at a free vertex v with γ. Let p':V(G)→ℝ^2 be defined by p'_v=τ(γ)p_v and p'_u=p_u for all u≠ v in V(G). Then for all 1≤ j=0≤ k-1, rank O_j(G,ψ,p)=rank O_j(G,ψ',p'). Clearly, any edge non-incident with v has the same row in O_j(G,ψ,p) as in O_j(G,ψ',p'). Let e:=(u,v)∈ E(G) with ψ(e)=δ for some δ∈Γ and, for 0≤ j≤ k-1, let r_j be the row representing e in O_j(G,ψ',p'). Notice that ψ'(e)=ψ(e)γ^-1=δγ^-1. Then, τ(ψ'(e))p'_v=τ(δγ^-1)τ(γ) p_v=τ(δ)p_v, and so, if u≠ v, r_j =[ … (p_u-τ(δ)p_v)^TM_u^j … ρ_j(δγ^-1)[τ(γ)p_v-τ((δγ^-1)^-1)p_u)]^T … ] =[ … (p_u-τ(δ)p_v)^TM_u^j … ρ_j(γ^-1)ρ_j(δ)[τ(γ)(p_v-τ(δ^-1)p_u)]^T … ] whenever M_u^j is defined. (If M_u^j is not defined, then there are no columns representing u; recall Section <ref>.) If u=v, then ψ'(δ)=ψ(δ), and so r_j =[ … [τ(γ)p_v-τ(δγ)p_v+ρ_j(δ)τ(γ)p_v-ρ_j(δ)τ(δ^-1γ)p_v]^T … ] =[ … [p_v-τ(δ)p_v+ρ_j(δ)p_v-ρ_j(δ)τ(δ^-1)p_v]^Tτ(γ)^T … ] Multiply each row representing a loop at v by the scalar ρ_j(γ^-1). Let s be the number of columns of O_j(G,ψ,p), and t,t+1 be the columns representing v in O_j(G,ψ,p). Define A to be the square matrix of dimension s such that the 2×2 submatrix with entries A_t,t,A_t,t+1,A_t+1,t,A_t+1,t+1 is ρ_j(γ)τ(γ), all other diagonal entries of A are 1, and all other entries 0. Then, O_j(G,ψ',p')A=O_j(G,ψ,p). Since A is an orthogonal matrix, this implies that rank O_j(G,ψ,p)=rank O_j(G,ψ',p'), as required. In addition, one can easily generalise the proof of Proposition 2.3 and Lemma 2.4 in <cit.> to show Lemma <ref> (see <cit.> for details). Let (G,ψ) be a Γ-gain graph. (i) For any forest T in E(G), there is some ψ' equivalent to ψ such that ψ'(e)=id for all e∈ T. (ii) A subgraph H of G is balanced if and only if there is a gain ψ' equivalent to ψ such that ψ'(e)=id for all e∈ E(H). §.§ Frameworks symmetric with respect to a group of order 2 We first consider reflection symmetry 𝒞_s. Let (G,p) be a 𝒞_s-symmetric framework with 𝒞_s-gain framework (G,ψ,p). Then: (1) If (G,p) is fully-symmetrically isostatic, then (G,ψ) is (2,1,3,1)-gain tight. (2) If (G,p) is anti-symmetrically isostatic, then (G,ψ) is (2,1,3,2)-gain-tight. If (G,p) is fully-symmetrically isostatic, then null(O_0(G,ψ,p))=1, by Proposition <ref>(i). Since O_0(G,ψ,p) has dimension 2|V(G)|+|V_0(G)|, by the rank-nullity theorem, we deduce that |E(G)|=2|V(G)|+|V_0(G)|-1. Moreover, there is no subgraph (H,ψ|_E(H)) of (G,ψ) such that |E(H)|>2|V(H)|+|V_0(H)|-1, as this would imply a row dependency in the orbit matrix. Similarly, by Proposition <ref>(i), if (G,p) is anti-symmetrically isostatic, then |E(G)|=2|V(G)|+|V_0(G)|-2 and |E(H)|≤2|V(H)|+|V_0(H)|-2, for all subgraphs (H,ψ|_E(H)) of (G,ψ). 
Now, let j=0,1 and suppose by contradiction that (G,p) is ρ_j-symmetrically isostatic and there is a balanced subgraph (H,ψ|_E(H)) of (G,ψ) such that |E(H)|>2|V(H)|-3. Let M be the submatrix of O_j(G,ψ,p) obtained by removing all columns corresponding to the elements of V(G)∖ V(H), together with the rows corresponding to their incident edges. By Corollary <ref>, Proposition <ref> and Lemma <ref>(ii), we can assume that ψ(e)=id for all e∈ E(H). M is a submatrix of a standard rigidity matrix for a graph F with |E(F)|>2|V(F)|-3, obtained by removing zero or more columns (depending on |V_0(H)|; one column is removed for each vertex in V_0(H)). But, by row independence, we must have |E(F)|≤2|V(F)|-3, a contradiction. Hence, the result holds. The proof of the following result for half-turn symmetry 𝒞_2 is completely analogous to that of Proposition <ref>, and uses Proposition <ref> (ii). Let (G,p) be a 𝒞_2-symmetric framework with 𝒞_2-gain framework (G,ψ,p). Then: (1) If (G,p) is fully-symmetrically isostatic, then (G,ψ) is (2,0,3,1)-gain-tight. (2) If (G,p) is anti-symmetrically isostatic, then (G,ψ) is (2,2,3,2)-gain-tight. §.§ Higher order rotational-symmetric frameworks In this subsection, we consider the case where k≥3. First, recall that near-balancedness is only defined for a graph with no fixed vertices, so we may directly use the following result from <cit.>. (Lemma 5.5 in <cit.>) Let k:=|Γ|≥4,0≤ j≤ k-1, τ:Γ→𝒞_k be a faithful representation, (G,ψ) be a Γ-gain graph, and p:V(G)→ℝ^2. If O_j(G,ψ,p) is row independent, |E(H)|≤2|V(H)|-1 for any near-balanced subgraph H of G. We now give necessary conditions for the infinitesimal rigidity of 𝒞_k-symmetric frameworks, where k≥3. Recall that ℤ^j_k-gain sparsity was defined in Definition <ref>. For k≥3, let (G,p) be a 𝒞_k-symmetric framework with 𝒞_k-gain framework (G,ψ,p). Then, (1) If (G,p) is fully-symmetrically isostatic, then (G,ψ) is (2,0,3,1)-gain tight. (2) If (G,p) is ρ_1-symmetrically isostatic or ρ_k-1-symmetrically isostatic, then (G,ψ) is (2,1,3,1)-gain tight. (3) If k≥4 and (G,p) is ρ_j-symmetrically isostatic for some 2≤ j≤ k-2, then (G,ψ) is ℤ^j_k-gain tight. The proof of (1) is the same as the proof of Proposition <ref>(1), so we will only prove (2) and (3), starting with (2). Since the proofs for ρ_1-symmetrically isostatic and ρ_k-1-symmetrically isostatic frameworks are the same, we will only look at the former case. So, suppose (G,p) is ρ_1-symmetrically isostatic. Using the rank-nullity theorem, together with Proposition <ref> (iii), we see that |E(G)|=2|V(G)|+|V_0(G)|-1, and that |E(H)|≤2|V(H)|+|V_0(H)|-1 for all subgraphs H of G with E(H)≠∅. Assume, by contradiction, that there is a balanced subgraph (H,ψ|_E(H)) of (G,ψ) such that |E(H)|>2|V(H)|-3. Let M be the submatrix of O_1(G,ψ,p) obtained by removing all the columns representing the vertices that are not in V(H), together with the rows corresponding to their incident edges. By Corollary <ref>, Proposition <ref> and Lemma <ref>(ii), we may assume that ψ(e)=id for all e∈ E(H). If V(H)=∅, M is a standard rigidity matrix for a graph F with |E(F)|>2|V(F)|-3, contradicting the row independence of O_1(G,ψ,p). So, we may assume that V_0(H)={v_0}. Let v_1,…,v_t be the vertices that are incident with v_0 in H and, for 1≤ i≤ t, let p_i:=p(v_i)=[ x_i y_i ]^T. Then, M has the form ([ -x_1+iy_1 ⋮ -x_t+iy_t p_1 … 0 ⋮ ⋱ ⋮ 0 … p_t 0 … 0 ⋮ ⋱ ⋮ 0 … 0; [0.4pt]1-2 0 ⋮ 0 ⋮ ⋮ … p_i-p_j … p_j-p_i … ⋮ ⋮ ]). 
Let M' be the matrix obtained from M by replacing the first column with the following two columns: [ x_1 y_1; ⋮ ⋮; x_t y_t; 0 0; ⋮ ⋮; 0 0 ]. Since M is row independent, so is M'. But M' is a standard rigidity matrix for a graph F with |E(F)|>2|V(F)|-3, contradicting the row independence of O_1(G,ψ,p). This proves (2). For (3), let 2≤ j≤ k-2 and assume (G̃,p̃) is ρ_j-symmetrically isostatic. By the rank-nullity theorem and by Proposition <ref>(iii), |E(G)|=2|V(G)| and |E(H)|≤2|V(H)| for all subgraphs H of G with E(H)≠∅. The same argument as that in the proof of Proposition <ref> also shows that all balanced subgraphs H of G must satisfy |E(H)|≤2|V(H)|-3. By Lemma <ref>, all near-balanced subgraphs H of G must satisfy |E(H)|≤2|V(H)|-1. So we only need to consider the subgraphs of G which are S(k,j) and, in the case where j is odd, the subgraphs H of G with <H>≃ℤ_2. So, suppose that H is a subgraph of G with <H>≃ℤ_n for some n∈ S_0(k,j)∪ S_-1(k,j)∪ S_1(k,j)∪{2}, where n=2 only if j is odd. Recall that ℤ_k≃Γ with the isomorphism mapping 1 to γ. So the group <H> is the group Γ' of order n generated by γ^k/n. Moreover, j≡ i( n), where i=0 if n∈ S_0(k,j) and i=±1 otherwise. Hence, there is some integer m≥1 such that j=i+mn. Let ρ_i' be the irreducible representation of Γ' which sends the generator γ^k/n to exp(2π i√(-1)/n), and let τ':Γ'→𝒞_n be the homomorphism which sends γ^k/n to the rotation C_n. Let e=(u,v)∈ E(H). Then, ψ(e)=γ^sk/n for some 0≤ s≤ n-1. Since j=i+mn, we have ρ_j(ψ(e))=exp(2π(i+mn)√(-1)/ksk/n)=exp(2π i√(-1)/ns)exp(2π ms√(-1))=exp(2π i√(-1)/ns)=ρ'_i(ψ(e)). Thus, we have p_u-τ(ψ(e))p_v=p_u-τ'(ψ(e))p_v and ρ_j(ψ(e))(p_v-τ(ψ(e))^-1p_u)=ρ'_i(ψ(e))(p_v-τ'(ψ(e))^-1p_u). (See also the proofs of Lemma 5.4 in <cit.> and Lemma 6.13 in <cit.> for the free action case.) Hence, O_j(H,ψ,p) is the ρ_i'-orbit matrix of a 𝒞_n-symmetric framework. If i≡0 n, this implies that H must satisfy |E(H)|≤2|V(H)|-1 by (1) and by Proposition <ref>(1). If i≡±1 n, this implies that H must satisfy |E(H)|≤2|V(H)|-2 when n=2 (see Proposition <ref>(2)), and it must satisfy |E(H)|≤2|V(H)|+|V_0(H)|-1 when k≥3, by (2). This gives the result. § GAIN GRAPH EXTENSIONS When proving the sufficiency of the sparsity conditions, we will use an inductive argument on the order of the gain graph. To do so, we introduce certain operations on gain graphs, called extensions. As the name suggests, extensions add vertices to the gain graph. Each extension has an inverse operation, called reduction. For the inductive arguments to hold, extensions must maintain the symmetry-generic isostatic properties of a gain graph, and reductions must maintain the relevant sparsity counts. In this section, we will consider the extension operations. The corresponding reductions will be considered in Section <ref>. Throughout this section, we let (G,ψ) be a Γ-gain graph. We will construct a Γ-gain graph (G',ψ') by applying an extension to (G,ψ). Depending on the extension we are working with, we may apply restrictions on the order of Γ, in which case we will specify it. §.§ Adding a vertex of degree 1 The following move will only be used to study the infinitesimal rigidity of 𝒞_s-symmetric frameworks. A fix-0-extension chooses a vertex u∈ V(G), adds a new fixed vertex v to V_0(G), and connects it to u with a new edge e. We label e arbitrarily, unless u∈ V_0(G), in which case ψ'(e)=id, and we let ψ'(f)=ψ(f) for all f∈ E(G). The inverse operation of a fix-0-extension is called a fix-0-reduction. See Figure <ref> for an illustration. 
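Before stating the corresponding geometric result, we record a quick combinatorial check, added for illustration only. A fix-0-extension adds one fixed vertex and one edge, so if (G,ψ) is (2,1,3,l)-gain tight for some l∈{1,2} (the counts appearing in the 𝒞_s case above), then |E(G')|=|E(G)|+1=2|V(G)|+|V_0(G)|-l+1=2|V(G')|+|V_0(G')|-l. Moreover, any subgraph of G' containing the new vertex contains at most one new edge while gaining one fixed vertex, and since the new vertex has degree 1, no balanced subgraph of G' can violate the (2,3)-sparsity condition. Hence (G',ψ') is again (2,1,3,l)-gain tight, provided that u is a free vertex when l=2.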
For 0≤ j≤1, let (G,ψ,p) be a ρ_j-symmetrically isostatic 𝒞_s-gain framework. Suppose (G',ψ') is obtained by applying a fix-0-extension to (G,ψ). Suppose further that, whenever j=1, the fix-0-extension from which we obtain (G',ψ') connects the new fixed vertex to a free vertex. Then there is a map p':V(G')→ℝ^2 such that (G',ψ',p') is a ρ_j-symmetrically isostatic 𝒞_s-gain framework. With the same notation as in Definition <ref>, we define p':V(G')→ℝ^2 so that p'_w=p_w for all w∈ V(G), p'_v lies on the y-axis, and the y-coordinates of p'_v,p_u differ. Let p'_v=[ 0 y_v ] and p'_u=[ x_u y_u ]. If j=1, then x_u≠0 since u∈V(G). We may assume that ψ(e)=id, by Proposition <ref> and Lemma <ref>(i). We have O_0(G',ψ',p')=([ y_v-y_u ⋆; [0.4pt]1-2 0 O_0(G,ψ,p) ]) and O_1(G',ψ',p')=([ -x_u ⋆; [0.4pt]1-20 O_1(G,ψ,p) ]). For j=0,1, we have added one row and one column to O_j(G,ψ,p). Hence, it suffices to show that the rows of the new matrices are independent. This follows from the fact that y_u≠ y_v and x_u≠0. Notice that Lemma <ref> does not take into consideration the case where j=1 and u∈ V_0(G). This is because, by Proposition <ref> (2), if (G,p̃) is a ρ_1-symmetrically isostatic 𝒞_s-symmetric framework, then its Γ-gain graph (G,ψ) is (2,1,3,2)-gain tight. In particular, any two vertices in V_0(G) cannot be joined by an edge. (Recall also Example <ref>.) Hence, when proving the sufficiency of the sparsity conditions for this case, if we apply a fix-0-reduction at a fixed vertex v∈ V_0(G), we may assume that the vertex v is adjacent to is free. §.§ Adding a vertex of degree 2 A 0-extension chooses two vertices v_1,v_2∈ V(G) (we may choose v_1=v_2 provided that v_1∈V(G)) and adds a free vertex v, together with two edges e_1=(v,v_1),e_2=(v,v_2). We let ψ'(e)=ψ(e) for all e∈ E(G). If v_1,v_2 coincide, we choose ψ' such that ψ'(e_1)≠ψ'(e_2). In all other cases, we label e_1,e_2 freely. The inverse operation of a 0-extension is called a 0-reduction. See Figures <ref> and <ref> for an illustration. Defining p':V(G')→ℝ^2 such that p'_w=p_w for all w∈ V(G) and p'_v does not lie on the line through τ(ψ(e_1))p(v_1) and τ(ψ(e_2))p(v_2), we can prove the following result in a similar way as we proved Lemma <ref>. Given an irreducible representation ρ of Γ and a faithful representation τ:Γ→ O(ℝ^2), let (G,ψ,p) be a ρ-symmetrically isostatic τ(Γ)-gain framework. If (G',ψ') is obtained by applying a 0-extension to (G,ψ), then there is a map p':V(G')→ℝ^2 such that (G',ψ',p') is a ρ-symmetrically isostatic τ(Γ)-gain framework. The following extension will only be used to study the infinitesimal rigidity of 𝒞_s-symmetric frameworks. A fix-1-extension chooses two distinct vertices u_1,u_2∈ E(G) and an edge e∈ E(G) which can either be (u_1,u_2) or, if u_1 (respectively, u_2) is free, a loop at u_1 (respectively, u_2). It removes e, and adds a fixed vertex v, together with the edges e_1=(v,u_1),e_2=(v,u_2). We label e_1 and e_2 freely, and we let ψ'(f)=ψ(f) for all f∈ E(G). The inverse operation of a fix-1-extension is called a fix-1-reduction. See Figure <ref> for an illustration. Let Γ=<γ> be the cyclic group of order 2, 0≤ j≤1, and let (G,ψ,p) be a ρ_j-symmetrically isostatic 𝒞_s-gain framework. Let (G',ψ') be obtained by applying a fix-1-extension to (G,ψ). With the same notation as in Definition <ref>, assume that if e=(u_1,u_2), then the line through p(u_1) and τ(ψ(e))p(u_2) and the line through σ p(u_1) and στ(ψ(e))p(u_2) meet in at least one point. 
Assume further that if e is a loop, then p(u_1),p(u_2) do not share the same y-coordinate. Then there is a map p':V(G')→ℝ^2 such that (G',ψ',p') is a ρ_j-symmetrically isostatic 𝒞_s-gain framework. Throughout the proof, we use the same notation as that in Definition <ref> and, for 1≤ i≤2, we let x_i and y_i be, respectively, the x-coordinate and y-coordinate of p(u_i). We let H be the subgraph obtained from G by removing e. Since v is fixed, we may assume that ψ(e_1)=ψ(e_2)=id. We first show the result holds when e is a loop. Assume, without loss of generality, that u_1 is free and that e is a loop at u_1, and notice that ψ(e)=γ. By assumption, y_1-y_2≠0. Moreover, since (G,ψ) is ρ_j-symmetrically isostatic and G contains a loop edge, we know that j=0 (recall Example <ref>). Define p':V(G')→ℝ^2 such that p'_u=p_u for all u∈ V(G) and p'_v be the mid-point of the segment between p(u_1) and σ p(u_1). Then, p'_v lies on the y-axis and has y-coordinate y_1, so that O_0(G',ψ',p')=([ 0 y_1-y_2 x_1 0 0 0 0 [p(u_2)-p'_v]^TM_u_2^0; [0.4pt]1-2 0 O_0(H,ψ|_E(H),p|_V(H)) ]). Multiplying the first row by 4, we obtain the row corresponding to e which, added to the bottom right block, forms O_0(G,ψ,p). Since O_0(G',ψ',p') is obtained by adding one row and one column to O_0(G,ψ,p), it suffices to show that the additional row does not add a dependence. This follows from the fact that y_1-y_2≠0. Hence, the result holds whenever e is a loop. Now, assume that e=(u_1,u_2). Let t:=0 if ψ(e)=γ and t:=1 if ψ(e)=id. Since the line through p(u_1) and τ(ψ(e))p(u_2) and the line through σ p(u_1) and στ(ψ(e))p(u_2) meet, they must meet in a point P that lies on the y-axis. Simple calculations show that the y-coordinate of P is y=-y_1-y_2/x_1+(-1)^tx_2x_1+y_1=(-1)^t+1y_2-y_1/x_1+(-1)^tx_2x_2+y_2. Define p':V(G')→ℝ^2 such that p'_u=p_u for all u∈ V(G) and p'_v=P. Then, we have O_j(G',ψ',p')=([ [ -x_1 y-y_1 ]M_v^j [ -x_2 y-y_2 ]M_v^j [p(u_1)-P]^TM_u_1^j 0 0 [p(u_2)-P]^TM_u_2^j; [0.4pt]1-2 0 O_j(H,ψ|_E(H),p|_V(H)) ]). So, multiplying the row corresponding to e_i by x_1+(-1)^tx_2/x_i for 1≤ i≤2, and using (<ref>), we see that O_j(G',ψ',p') is ([ [ -x_1+(-1)^t+1x_2 y_2-y_1 ]M_v^j [ -x_1+(-1)^t+1x_2 (-1)^t+1(y_2-y_1) ]M_v^j x_1+(-1)^tx_2 y_1-y_2 0 0 0 0 x_1+(-1)^tx_2 (-1)^t(y_2-y_1); [0.4pt]1-2 0 O_j(H,ψ|_E(H),p|_V(H)) ]) where the first column corresponding to u_1 (respectively, u_2) in O_0(G',ψ',p') vanishes if u_1 (respectively, u_2) is fixed, and the second column corresponding to u_1 (respectively, u_2) in O_1(G',ψ',p') vanishes if u_1 (respectively, u_2) is fixed. Apply the following row operations: if j=t=0, add the second row to the first; in all other cases, subtract the second row from the first. Then, we obtain the row corresponding to e which, added to the bottom right block, forms O_j(G,ψ,p). Similarly as in the case where e is a loop, it suffices to show that the second row does not add a dependence to O_j(G,ψ,p). This follows from the fact that the line through p(u_1) and τ(ψ(e))p(u_2) and the line through σ p(u_1) and στ(ψ(e))p(u_2) meet at a point, which implies that the entry in the leftmost column is not zero. §.§ Adding a vertex of degree 3 A loop-1-extension adds a free vertex v to V(G) together with an edge e=(v,u) for some u∈ V(G) and a loop e_L=(v,v). We let ψ'(f)=ψ(f) for all f∈ E(G). ψ(e_L) can be any non-identity element of Γ and ψ(e) can be chosen freely. The inverse operation of a loop-1-extension is called a loop-1-reduction. See Figure <ref> for an illustration. 
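As a quick check of the counts (again for illustration only): a loop-1-extension adds one free vertex and two edges, so it takes a (2,m,3,1)-gain tight graph to a graph G' with |E(G')|=|E(G)|+2=2|V(G')|+m|V_0(G')|-1. Any subgraph of G' containing the new vertex v contains at most the two new edges and gains a free vertex, which contributes 2 to the count, and a balanced subgraph cannot contain the loop e_L since its gain is not the identity; hence the gain-sparsity conditions with l=1 are preserved. For the refined counts of the case k≥4, 2≤ j≤ k-2, and for the question of which gains on e_L are geometrically admissible, more care is needed; this is reflected in the hypotheses of the following lemma.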
Let Γ be a cyclic group of order k≥2, τ:Γ→ O(ℝ^2) be a faithful representation and (G,ψ,p) be a ρ_j-symmetrically isostatic τ(Γ)-gain framework for some 0≤ j≤ k-1 such that j=0 when k=2. Let γ∈Γ correspond to the k-fold rotation (or the reflection) under τ. Let (G',ψ') be obtained from (G,ψ) by applying a loop-1-extension. With the same notation as Definition <ref>, let g:=ψ'(e_L) and h:=ψ'(e). Assume the following hold: (i) if k is even and j is odd, then g≠γ^k/2; (ii) if τ(Γ)=𝒞_k and j=0, then u is free; (iii) if k≥4,2≤ j≤ k-2 and u is fixed, then there is no n∈ S_0(k,j) such that <g>≃ℤ_n. Then there is a map p':V(G')→ℝ^2 such that (G',ψ',p') is a ρ_j-symmetrically isostatic τ(Γ)-gain framework. With the same notation as in Definition <ref>, let p':V(G')→ℝ^2 be defined such that p'_w=p_w for all w∈ V(G). We have O_j(G',ψ',p')=([ [I_2+ρ_j(g)I_2-τ(g)-ρ_j(g)τ(g^-1)](p'_v)^T [p_v'-τ(h)p_u]^T 0 ⋆; [0.4pt]1-2 0 O_j(G,ψ,p) ]). So O_j(G',ψ',p') is obtained from O_j(G,ψ,p) by adding two rows and two columns. Since O_j(G,ψ,p) has full rank by assumption, it is enough to show that the first two rows of O_j(G',ψ',p') are linearly independent for some choice of p'_v. Let A be the matrix I_2+ρ_j(g)I_2-τ(g)-ρ_j(g)τ(g^-1). If τ(Γ)=𝒞_s, then j=0 and τ(g)=σ by assumption, so A is the 2×2 matrix whose only non-zero entry is (A)_1,1=4. If we choose p'_v such that it does not share the same y-coordinate as p'_u, it is then easy to see that the first two rows of the matrix are linearly independent. Hence, O_j(G',ψ',p') has full rank, as required. So, we may assume that τ(Γ)=𝒞_k. Let g=γ^t for some 1≤ t≤ k-1, α=2π t/k, and ω=exp2π i /k. Then, A=[ (1-cos(α))(1+ω^jt) sin(α)(1-ω^jt); -sin(α)(1-ω^jt) (1-cos(α))(1+ω^jt) ]. We show that A is not the zero matrix. Assume, by contradiction, that A is the zero matrix. Since 1≤ t≤ k-1, we know cos(α)≠1 and so ω^jt=-1, i.e. there is some odd integer m such that 2π jt/k=mπ. Moreover, sin(α)(1-ω^jt)=2sin(α)=0. Since 1≤ t≤ k-1, α=π, and so t=k/2. It follows that j=m, and so j is odd. This contradicts (i), so, as claimed, A is not the zero matrix. If u is free, then, by the injectivity of p, the vector p_u, and hence also the vector τ(h)p_u, cannot be zero. So unless q is a multiple of τ(h)p_u, the affine map q↦ q-τ(h)p_u applied to λ q gives vectors of different directions for each scalar λ. The linear map A applied to λ q, however, only produces vectors that are multiples of the vector Aq. This implies that {Aq,q-τ(h)p_u} is linearly independent for some q, and so we may choose p'_v to be such q. Then O_j(G',ψ',p') is linearly independent, as required. So, assume that u is fixed. By (ii), we may also assume that 1≤ j≤ k-1. In particular, this implies, by assumption, that k≠2. Assume, by contradiction, that there is no choice of p'_v such that the first two rows of O_j(G',ψ',p') are linearly independent. This implies that A is a scalar multiple of I_2. This happens exactly when sin(α)(1-ω^jt)=0. If sin(α)=0, then α=π, and t=k/2. Hence, ω^jt=exp(π ij). By (i), j must be even, and so ω^jt=1. If sin(α)≠0, then clearly ω^jt=1. Hence, in both cases we have ω^jt=1, i.e. jt=mk for some integer m. If j=1, this implies that t is a multiple of k, contradicting the fact that 1≤ t≤ k-1. Hence, j≠1. Similarly, j≠ k-1: if j=k-1, then ω^jt=ω^-t. Since ω^jt is real, this equals ω^t=1, and so t=k, a contradiction. Hence, k≥4 and 2≤ j≤ k-2. We show that there is an integer n∈ S_0(k,j) such that <g>≃ℤ_n, contradicting (iii). Let n=k/(k,t)=lcm (k,t)/t. Then, we know from group theory (see e.g. 
<cit.>) that <g>=<γ^t>≃ℤ_n, and that m'=mk/lcm (k,t) is an integer (since mk is a multiple of both k and t), and so, since j=mk/t=nm', we have j≡0 n. Moreover, k=n(k,t), so n|k. Hence, n∈ S_0(k,j), as required. This contradicts (iii). Thus, there is a choice of p'_v such that the first two rows of O_j(G',ψ',p') are linearly independent. It follows that O_j(G',ψ',p') has full rank. It was shown in <cit.> that for the Γ-gain graph of a rotationally symmetric framework in the plane, an edge joining the fixed vertex u with a free vertex v gives the same constraint in the fully-symmetric orbit rigidity matrix as a loop edge on v which corresponds to a regular |Γ|-polygon in the covering framework. Hence we have condition (ii) in Lemma <ref>. This is clear geometrically, because both of these edges force the vertices in the orbit of v to keep their distance to the origin in any symmetry-preserving motion. Thus, for analysing fully-symmetric infinitesimal rigidity, one may always reduce the problem to the case when the group acts freely on the vertices. However, Lemma <ref> shows that this simple reduction is not possible for the reflection group nor for analysing “incidentally symmetric" infinitesimal rigidity for any rotational group. A 1-extension chooses a vertex u∈ V(G) and an edge e=(v_1,v_2)∈ E(G) (any pair of free vertices in {v_1,v_2,u} are allowed to coincide; further, v_1,v_2,u are all allowed to coincide, provided they are not fixed and |Γ|≥3), removes e and adds a new free vertex v to V(G), together with three edges e_1=(v,v_1),e_2=(v,v_2),e_3=(v,u). We let ψ'(f)=ψ(f) for all f∈ E(G). The edges e_1,e_2 are labelled such that ψ'(e_1)^-1ψ'(e_2)=ψ(e). The label of e_3 is chosen such that it is locally unbalanced, i.e. every two-cycle e_ie_j^-1, if it exists, is unbalanced. The inverse operation of a 1-extension is called a 1-reduction. See Figure <ref> for an illustration. Let Γ be a cyclic group of order k, τ:Γ→ O(ℝ^2) be a faithful representation, and (G,ψ,p) be a ρ_j-symmetrically isostatic τ(Γ)-gain framework for some 0≤ j≤ k-1. With the same notation as in Definition <ref>, assume that the points τ(ψ(e_1))p(v_1),τ(ψ(e_2))p(v_2) and τ(ψ(e_3))p(u) do not lie on the same line. If (G',ψ') is obtained from (G,ψ) by applying a 1-extension, then there is a map p':V(G')→ℝ^2 such that (G',ψ',p') is a ρ_j-symmetrically isostatic τ(Γ)-gain framework. With the same notation as that of Definition <ref>, let H be the subgraph of G obtained by removing e. If v_1,v_2 are free, then an analogous proof to that of Lemma 6.1 in <cit.> gives the result. So, without loss of generality, assume v_1∈ V_0(G). In particular, v_1 cannot coincide with either v_2 or u, and we may assume ψ(e_1)=ψ(e_2)=ψ(e)=id. Let ψ(e_3)=δ. Define p':V(G')→ℝ^2 such that p'_w=p_w for all w∈ V(G) and p'_v lies on the midpoint of the line through p(v_1) and p(v_2). Then, O_j(G',ψ',p') is ([ ρ_j(δ)[p_v'-τ(δ)p_u]^T 1/2[p(v_2)-p(v_1)]^T 1/2[p(v_1)-p(v_2)]^T 0 ⋆ ⋆ 1/2[p(v_1)-p(v_2)]^TM_v_1^j 0 0 0 1/2[p(v_2)-p(v_1)]^TM_v_2^j 0; [0.4pt]1-2 0 O_j(H,ψ_E(H),p|_V(H)). ]), where, for 1≤ i≤2, the columns representing v_i vanish if M_v_i^j is not defined. Adding the second row to the third, and multiplying this latter by 2, we obtain the row representing e in O_j(H,ψ_E(H),p|_V(H)). Now, O_j(G',ψ',p') is obtained by adding two rows and two columns to O_j(G,ψ,p), so it suffices to show that the first two entries of the two added rows are independent. 
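In other words (an elementary remark which we spell out for convenience), the condition ψ'(e_1)^-1ψ'(e_2)=ψ(e) says that the walk from v_1 through v to v_2 along e_1^-1e_2 has the same gain as the deleted edge e. In the covering graph, every bar in the orbit of e is therefore replaced by a two-bar path through one of the |Γ| new joints in the orbit of v, and each new joint receives a third bar to a joint in the orbit of u; so this is the classical Henneberg 1-extension (edge split) performed symmetrically. Note also that |E(G')|=|E(G)|-1+3=|E(G)|+2 and the new vertex is free, so the operation increases |E(G)| and 2|V(G)|+m|V_0(G)| by the same amount.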
Since p(v_1),p(v_2) and τ(δ)p_u do not lie on the same line, the line through p'_v and τ(δ)p_u is not parallel to the line through p(v_1) and p(v_2). Hence, the upper left 2× 2 matrix has full rank, and O_j(G',ψ',p') has full rank. If τ(Γ)=𝒞_s, and p(v_1),p(v_2) both lie on the symmetry line, then p_v' also lies on the symmetry line. In such a case, we may perturb p'_v slightly without changing the rank of O_j(G',ψ',p'), in order to avoid placing the free vertex v on the symmetry line.

§.§ Adding two vertices of degree 3

The following extension is defined on Γ-gain graphs with |V_0(G)|=1 and with |Γ|≥2 even. Recall that, for k:=|Γ|, the cyclic group Γ=<γ> is isomorphic to ℤ_k, through the isomorphism which maps γ to 1. A 2-vertex-extension adds two free vertices v_1,v_2 and connects them to the fixed vertex. Then, it adds two parallel edges e_1,e_2=(v_1,v_2) between v_1 and v_2. We define ψ' such that ψ'(e)=ψ(e) for all e∈ E(G), the new edges incident with the fixed vertex are labelled arbitrarily, and ψ'(e_1)=id, ψ'(e_2)=γ^k/2. The inverse operation of a 2-vertex-extension is called a 2-vertex-reduction. See Figure <ref> for an illustration. In a similar way as for Lemma <ref>, we can show that the following result holds. In this case, we define p':V(G')→ℝ^2 such that p'(v_1) and p'(v_2) are not scalar multiples of each other. See <cit.> for details. Let k≥2 be even. Let (G,ψ,p) be a ρ_j-symmetrically isostatic 𝒞_k-gain framework with V_0(G)={v_0}. If (G',ψ') is obtained by applying a 2-vertex-extension to (G,ψ), then there is a map p':V(G')→ℝ^2 such that (G',ψ',p') is a ρ_j-symmetrically isostatic 𝒞_k-gain framework.

§ SUFFICIENCY OF THE SPARSITY CONDITIONS

In this section, we will establish the characterisations of symmetry-generic infinitesimally rigid bar-joint frameworks. We first need some combinatorial preliminaries.

§.§ General combinatorial results

Let 0≤ m≤2,1≤ l≤2 be such that 0≤ l-m≤1. Let (G,ψ) be a Γ-gain graph with at least one free vertex, and let s,t∈ℕ be the number of free vertices in G of degree 2 and 3, respectively. Assume (G,ψ) is (2,m,3,l)-gain tight. The following hold: (i) Each free vertex has degree at least 2, and each fixed vertex has degree at least m. (ii) If each fixed vertex has degree at least d for some d≥0, then 2s+t≥|V_0(G)|(d-2m)+2l. For (i), let v∈ V(G). By the sparsity of (G,ψ), the subgraph H obtained from G by removing v satisfies |E(H)|≤ 2|V(G)|+m|V_0(G)|-l-2 if v is free, and |E(H)|≤ 2|V(G)|+m|V_0(G)|-l-m if v is fixed. But |E(G)|=2|V(G)|+m|V_0(G)|-l. So there are at least 2 edges in G that are not in H when v is free, and there are at least m edges in G that are not in H when v is fixed. (i) follows. For (ii), the average degree of G is ρ̂=2|E(G)|/|V(G)|=(4|V(G)|+2m|V_0(G)|-2l)/|V(G)|. The minimum average degree ρ_min of G is attained when all free vertices, which are not the s and t vertices of degree 2 and 3, have degree 4, and all fixed vertices have degree d. So ρ_min=(2s+3t+d|V_0(G)|+4(|V(G)|-s-t))/|V(G)|. By minimality, ρ_min≤ρ̂, and (ii) follows. Let 0≤ m≤ 2,0≤ l≤3, let (G,ψ) be a Γ-gain graph and suppose there is some v∈V(G) of degree 3 with no incident loops. If G is (2,m,l)-sparse, then there is no (2,m,l)-tight subgraph of G-v which contains all neighbours of v (the neighbours of v need not be distinct). Suppose such a subgraph H exists. Then the subgraph H' of G obtained from H by adding v and its incident edges satisfies |E(H')|=|E(H)|+3=2|V(H)|+m|V_0(H)|-l+3=2|V(H')|+m|V_0(H')|-l+1, a contradiction.
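The counts just introduced are used repeatedly below, and on small examples they can be checked mechanically. The following Python sketch is an illustration added here (it is not part of the paper): it brute-forces the (2,m,3,l)-gain-tightness counts, reading them as |E(G)| = 2·#(free vertices) + m·#(fixed vertices) - l for the whole graph, the corresponding (2,m,l)-bound for every non-empty subgraph, and the (2,3)-bound for balanced subgraphs. The function names and the edge format (tail, head, gain in ℤ_k) are choices of this sketch, the balance test is only applied to subgraphs avoiding fixed vertices, and the search is exponential in |E(G)|, so it is meant purely for toy examples.

```python
from itertools import combinations

def is_balanced_free(edges, k):
    """Balance test for a gain subgraph on free vertices only, gains in Z_k.

    A gain graph is balanced iff every cycle has gain 0 (mod k); this is checked
    by assigning potentials along a spanning forest and testing every edge.
    Loops and parallel edges with inconsistent gains are detected as unbalanced.
    """
    adj, pot = {}, {}
    for (u, v, g) in edges:
        adj.setdefault(u, []).append((v, g % k))
        adj.setdefault(v, []).append((u, (-g) % k))
    for root in adj:
        if root in pot:
            continue
        pot[root] = 0
        stack = [root]
        while stack:
            x = stack.pop()
            for (y, g) in adj[x]:
                if y not in pot:
                    pot[y] = (pot[x] + g) % k
                    stack.append(y)
    return all((pot[u] + g - pot[v]) % k == 0 for (u, v, g) in edges)

def check_gain_tight(free_V, fixed_V, edges, m, l, k):
    """Brute-force check of the (2, m, 3, l)-gain-tightness counts.

    free_V, fixed_V : disjoint vertex collections; edges : list of (u, v, gain).
    The whole graph must satisfy |E| = 2*#free + m*#fixed - l, every non-empty
    subgraph must satisfy |E(H)| <= 2*#free(H) + m*#fixed(H) - l, and every
    balanced subgraph must satisfy |E(H)| <= 2*#free(H) - 3.  Subgraphs that
    contain a fixed vertex are only tested against the first bound here.
    """
    free_V, fixed_V = set(free_V), set(fixed_V)
    if len(edges) != 2 * len(free_V) + m * len(fixed_V) - l:
        return False
    for r in range(1, len(edges) + 1):
        for F in combinations(edges, r):
            VF = {x for (u, v, _) in F for x in (u, v)}
            nfree, nfixed = len(VF & free_V), len(VF & fixed_V)
            if len(F) > 2 * nfree + m * nfixed - l:
                return False
            if nfixed == 0 and is_balanced_free(F, k) and len(F) > 2 * nfree - 3:
                return False
    return True
```

For instance, check_gain_tight({'v'}, {'v0'}, [('v', 'v0', 0)], m=0, l=1, k=3) returns True, consistent with the base graph consisting of a free vertex joined to a fixed vertex that appears later among the base cases.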
It is straightforward to check that all except two of the reductions are admissible, i.e. they maintain the relevant sparsity counts. However, when applying a 1-reduction or a fix-1-reduction, we add an edge. This edge might give rise to a subgraph that violates the sparsity count. We call such subgraphs blockers. Let (G,ψ) be a Γ-gain graph. Let m,l be non-negative integers such that m≤2,l≤3,m≤ l, and suppose (G,ψ) is (2,m,3,l)-gain tight. Let v∈ V(G) be a free vertex of degree 3, or a fixed vertex of degree 2. Let (G',ψ') be obtained from (G,ψ) by applying a 1-reduction or a fix-1-reduction at v, and let e=(v_1,v_2) be the edge we add when we apply such reduction. Let H be a subgraph of G-v with v_1,v_2∈ E(H) and E(H)≠∅. We say H is * a general-count blocker of v_1,v_2 (equivalently, of (G',ψ')) if H+e is connected and H is (2,m,l)-tight. * a balanced blocker of e (equivalently, of (G',ψ')) if H is (2,3)-tight and H+e is balanced under ψ'. Both general-count blockers and balanced blockers are referred to as blockers of (G',ψ'). The following result states that, given two blockers H_1,H_2 with E(H_1∩ H_2)≠∅, their union H_1∪ H_2 can also be seen as a blocker. It will be used in Section <ref> to show that a vertex of degree 3 always admits a 1-reduction, except for two special cases (see Theorem <ref>). Let m,l be non-negative integers such that m≤2,1≤ l≤2,m≤ l. Let (G,ψ) be a Γ-gain graph, and suppose there is some v∈V(G) of degree 3 with no incident loops. Assume further that |V_0(G)|≤1 if m=2. Suppose (G,ψ) is (2,m,3,l)-gain tight. Assume there are graphs (G_1,ψ_1),(G_2,ψ_2) obtained from (G,ψ) by applying two different 1-reductions at v, which add the edges f_1 and f_2, respectively. Assume that, for i=1,2, (G_i,ψ_i) has a blocker H_i, and that E(H_1∩ H_2)≠∅. Let H:=H_1∪ H_2. The following hold: (i) The blockers H_1,H_2 are not general-count blockers. (ii) H+f_1+f_2 is balanced and H is (2,3)-tight. Notice that H_1∪ H_2 always contains all neighbours of v. To see this, we consider N(v). If N(v)=1 this is clear. If N(v)=2, let v_1,v_2 be the neighbours of v and e_1=(v,v_1),e_1'=(v,v_1),e_2=(v,v_2)∈ E(G). By Corollary <ref>, Proposition <ref> and Lemma <ref>(i), we may assume that ψ(e_1)=ψ(e_2)=id and that ψ(e'_1)≠id. Then, at most one 1-reduction at v adds a loop at v_1 (with gain ψ(e'_1)) and no 1-reduction at v adds a loop at v_2. It follows that one of H_1,H_2 contains v_1 and v_2, and so v_1,v_2∈ V(H_1∪ H_2). Finally, let N(v)=3. For 1≤ i≤3, let e_i=(v,v_i)∈ E(G). By Corollary <ref>, Proposition <ref> and Lemma <ref>(i) we may assume that ψ(e_i)=id for all 1≤ i≤3. Then, for each pair 1≤ i≠ j≤3, there is at most one 1-reduction at v which adds an edge between v_i and v_j (with gain id). It follows that v_1,v_2,v_3∈ V(H_1∪ H_2). Throughout the proof, we let H'=H_1∩ H_2 and we let H'_1,…, H'_c be the connected components of H'. Let c_0≤ c-1 be the number of isolated vertices of H', so that H'_1,…,H_c_0' are the isolated vertices of H', and H_c_0+1',…,H_c' are the connected components of H' with non-empty edge set. We first prove (i). Assume, by contradiction, that one of H_1,H_2 is a general-count blocker. Without loss of generality, let it be H_1. If H_2 is also a general-count blocker, then, since |E(H')|≤2|V(H')|+m|V_0(H')|-l, it is easy to check that |E(H)|=|E(H_1)|+|E(H_2)|-|E(H')|≥2|V(H)|+m|V_0(H)|-l. By Proposition <ref>, this is a contradiction. Hence, we may assume that H_2 is a balanced blocker. It follows that H' is balanced. 
Then, for each c_0+1≤ i≤ c, H_i' must be (2,3)-sparse, and so |E(H')| =∑_i=1^c|E(H_i')|≤∑_i=1^c_0[2|V(H_i')|-2]+∑_i=c_0+1^c[2|V(H_i')|-3]=2|V(H')|-(2c_0+3(c-c_0)). Hence, |E(H)| =|E(H_1)|+|E(H_2)|-|E(H')| ≥(2|V(H_1)|+m|V_0(H_1)|-l)+(2|V(H_2)|-3)-(2|V(H')|-(2c_0+3(c-c_0))) =(2|V(H_1)|+m|V_0(H_1)|-l)+(2|V(H_2)|+2|V_0(H_2)|-3)-(2|V(H')|+2|V_0(H')|-(2c_0+3(c-c_0))) =(2|V(H_1)|+m|V_0(H_1)|-l)+(2|V(H_2)|+m|V_0(H_2)|+(2-m)|V_0(H_2)|-3) -(2|V(H')|+m|V_0(H')|+(2-m)|V_0(H')|-(2c_0+3(c-c_0))) =2|V(H)|+m|V_0(H)|-l+(2-m)(|V_0(H_2)|-|V_0(H')|)+2c_0+3(c-c_0-1) ≥2|V(H)|+m|V_0(H)|-l, where the last inequality holds because 0≤ c_0≤ c-1,m≤2 and V(H')⊆ V(H_2). By Proposition <ref>, this is a contradiction. So H_1,H_2 must be balanced blockers, as required. We now prove (ii). By (i), H_1,H_2 are both balanced blockers. Hence, H'_i is (2,3)-sparse for all c_0+1≤ i≤ c. It follows that |E(H')|≤2|V(H')|-(2c_0+3(c-c_0)) (see the proof of (i) for details). Hence, we have |E(H)| =|E(H_1)|+|E(H_2)|-|E(H')| ≥(2|V(H_1)|-3)+(2|V(H_2)|-3)-(2|V(H')|-(2c_0+3(c-c_0))) =2|V(H)|+2c_0+3(c-c_0)-6=2|V(H)|+3c-c_0-6. We show that c=1. Assume, by contradiction, that c≥2. Then, since c_0≤ c-1, 3c-c_0-6≥2c-5 and so, since m≤2,1≤ l, we have |E(H)|≥2|V(H)|+2c-5≥2|V(H)|-1≥2|V(H)|+m|V_0(H)|-l. By Proposition <ref> and the sparsity of (G,ψ), this is a contradiction. So c=1, and H' is connected. Hence, Equation (<ref>) becomes |E(H)|≥2|V(H)|-3. We now show that H' has at most one fixed vertex. Assume, by contradiction, that |V_0(H')|≥2. By assumption, this implies that m≠2. By Equation (<ref>), |E(H)|≥2|V(H)|-3≥2|V(H)|+1. If m=0, this contradicts the sparsity of (G,ψ). Hence, m=1. By Equation (<ref>), |E(H)|≥2|V(H)|-3=2|V(H)|+|V_0(H)|+(|V_0(H)|-3)≥2|V(H)|+|V_0(H)|-1≥2|V(H)|+|V_0(H)|-l, where the last inequality holds because m≤ l and m=1. This contradicts Proposition <ref>. Hence, H' has at most one fixed vertex. We look at the cases where V_0(H')=∅ and V_0(H')={v_0} separately. In both cases, we show that (ii) holds. First, assume that V_0(H')=∅. Then, by Lemma <ref>, H+f_1+f_2 is balanced, since H' is connected. By the sparsity of (G,ψ) and Equation (<ref>), it follows that H is (2,3)-tight, and so (ii) holds. Now, suppose that V_0(H')={v_0}. Then, by Equation (<ref>), |E(H)|≥2|V(H)|-3=2|V(H)|-1≥2|V(H)|-l, since l≥1. If m=0, this contradicts Proposition <ref>, so m≠0. By Equation (<ref>), we also know that |E(H)|≥2|V(H)|-3=2|V(H)|+|V_0(H)|-2. If (m,l)=(1,2), this contradicts Proposition <ref>, so (m,l)≠(1,2). Hence, (m,l) is one of (1,1) and (2,2). We claim that, in both cases, v_0 is not a cut vertex of H'. So, assume by contradiction, that v_0 is a cut vertex of H'. For t≥2, let I_1,…,I_t be the connected components of H'-v_0. For each 1≤ i≤ t, let I_i+v_0 be the graph obtained from I_i by adding v_0, together with all edges in H' incident to v_0 and a vertex of I_i. Notice that, since H' is connected, for all 1≤ i≤ t, the graph I_i+v_0 contains an edge incident to v_0. Since I_i+v_0 is balanced for all 1≤ i≤ t, we have |E(H')|=∑_i=1^t|E(I_i+v_0)|≤2∑_i=1^t|V(I_i+v_0)|-3t=2|V(H')|+2(t-1)-3t≤2|V(H')|-t-2≤ 2|V(H')|-4. But then, |E(H)| =|E(H_1)|+|E(H_2)|-|E(H')|≥(2|V(H_1)|-3)+(2|V(H_2)|-3)-(2|V(H')|-4) =2|V(H)|-2=2|V(H)|+|V_0(H)|-1. This contradicts Proposition <ref>, both when (m,l)=(1,1) and when (m,l)=(2,2). Hence, v_0 is not a cut vertex of H'. Then, H+f_1+f_2 is balanced by Lemma <ref>. By the sparsity of (G,ψ) and Equation (<ref>), it follows that H is (2,3)-tight, and so (ii) holds. 
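As a small numerical illustration of conclusion (ii), added here and not taken from the original: if H_1 and H_2 are balanced blockers with |V(H_1)|=|V(H_2)|=4, so that |E(H_1)|=|E(H_2)|=2·4-3=5, and their intersection H' is a single edge, so |V(H')|=2 and |E(H')|=1, then H=H_1∪ H_2 has |V(H)|=4+4-2=6 and |E(H)|=5+5-1=9=2|V(H)|-3. The count in conclusion (ii) is thus met with equality; the additional content of the lemma is that H+f_1+f_2 is moreover balanced, which does not follow from this arithmetic alone.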
§.§ Applying a 1-reduction to a (2,m,3,l)-gain tight graph The following result is crucial for our combinatorial characterisations of symmetry-generic infinitesimal rigidity. It states that, except for two specific cases, there is always an admissible reduction at a vertex v of degree 3 of a (2,m,3,l)-gain tight graph, where (m,l)=(0,2),(1,1),(1,2),(2,2). Let (G,ψ) be a Γ-gain graph with a free vertex v of degree 3 which has no loop. Suppose that (G,ψ) is (2,m,3,l)-tight, where (m,l) is one of the pairs (0,1),(1,1),(1,2) or (2,2). Suppose further that if |V_0(G)|≥2, then m=1. If there is not an admissible 1-reduction at v, then exactly one of the following holds. (i) (G,ψ) is (2,2,3,2)-tight and v has exactly one free neighbour v_1 and exactly one fixed neighbour v_2 (see Figure <ref> (a),(b)) . (ii) (G,ψ) is (2,1,3,2)-tight and v has three neighbours, all of which are fixed (see Figure <ref> (c)). We will look at the cases where N(v)=1,N(v)=2, and N(v)=3 separately. We prove that, in each case, we may always apply a reduction at v, unless one of (i),(ii) holds. Case 1: N(v)=1. Let u be the neighbour of v, and e_1,e_2,e_3 be the edges incident to u and v. By Corollary <ref>, Proposition <ref> and Lemma <ref>(i), we may assume that ψ(e_1)=id. Moreover, ψ(e_2)ψ(e_3)^-1≠id by the definition of gain graph. Let (G',ψ') be obtained from G-v by adding a loop f at u with gain ψ(e_2)ψ(e_3)^-1. We show that this 1-reduction is admissible. Suppose, by contradiction, that H is a blocker of (G',ψ'). Since H+f contains the loop f, H is not a balanced blocker. By Proposition <ref>, H is not a general-count blocker. This contradicts the fact that H is a blocker. Hence, there is an admissible 1-reduction at v. Case 2: N(v)=2. Let v_1,v_2 be the neighbours of v, let e_1,e'_1:=(v,v_1) and e_2:=(v,v_2), and let g=ψ(e'_1). By Corollary <ref>, Proposition <ref> and Lemma <ref>(i), we may assume that ψ(e_1)=ψ(e_2)=id and g≠id. We will look at the cases where v_2 is free and fixed separately. * Sub-case 2a: v_2 is free. Let (G_1,ψ_1),(G_2,ψ_2),(G_3,ψ_3) be obtained from G-v by adding, respectively, the edge f_1=(v_1,v_2) with gain id, the edge f_2=(v_1,v_2) with gain g, and a loop f_3 at v_1 with gain g. Assume, by contradiction, that H_1,H_2 and H_3 are blockers for (G_1,ψ_1),(G_2,ψ_2) and (G_3,ψ_3), respectively. Let H=H_1∪ H_2∪ H_3 and H'=H_1∩ H_2∩ H_3. By Proposition <ref>, H_1,H_2 are balanced blockers. Moreover, by Lemma <ref>(ii), E(H_1∩ H_2)=∅: otherwise H_1∪ H_2+f_1+f_2 is balanced, a contradiction since f_1,f_2^-1 is an unbalanced 2-cycle. Since H_3+f_3 contains the loop f_3, H_3 is a general-count blocker. It follows, from Lemma <ref>(i), that E(H_1∩ H_3)=E(H_2∩ H_3)=∅. Moreover, by Proposition <ref>, v_2∉V(H_3). Since m≤2, for i=1,2, |E(H_i)|=2|V(H_i)|-3≥2|V(H_i)|+m|V_0(H_i)|-3. Hence, |E(H)| =|E(H_1)|+|E(H_2)|+|E(H_3)| ≥(2|V(H_1)|+m|V_0(H_1)|-3)+(2|V(H_2)|+m|V_0(H_2)|-3)+(2|V(H_3)|+m|V_0(H_3)|-l) =2(|V(H_1)|+|V(H_2)|+|V(H_3)|)+m(|V_0(H_1)|+|V_0(H_2)|+|V_0(H_3)|)-6-l =2(|V(H)|+∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-|V(H')|)+m(|V_0(H)|+∑_1≤ i≠ j≤3|V_0(H_i∩ H_j)|-|V_0(H')|)-6-l ≥2(|V(H)|+∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-|V(H')|)+m|V_0(H)|-6-l, where the last inequality holds because V_0(H')⊆ V_0(H_i∩ H_t) for all pairs 1≤ i≠ t≤3 and m≥0. Since the free vertex v_2 is not contained in H', it is easy to see that ∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-|V(H')|≥3. Hence, |E(H)|≥2|V(H)|+m|V_0(H)|-l. This contradicts Proposition <ref> and so there is an admissible 1-reduction at v. * Sub-case 2b: v_2 is fixed. 
If (G,ψ) is (2,2,3,2)-tight, then (i) holds. So, assume (G,ψ) is not (2,2,3,2)-tight. Let (G_1,ψ_1),(G_2,ψ_2) be the graphs obtained from G-v by adding, respectively, an edge f_1=(v_1,v_2) with gain id, and a loop f_2 at v_1 with gain g (see Figure <ref>). Notice that, if there is already an edge (v_1,v_2)∈ E(G), (G_1,ψ_1) is not a well-defined gain graph. We look at the cases where (v_1,v_2)∈ E(G) and (v_1,v_2)∉E(G) separately. First, assume (v_1,v_2)∈ E(G). Then it is easy to check that the graph induced by v,v_1,v_2 violates both (2,0,3,1)-gain sparsity and (2,1,3,2)-gain sparsity. Hence, we may assume that (G,ψ) is (2,1,3,1)-gain tight. Assume, by contradiction, that (G_2,ψ_2) has a blocker H_2. Since H_2+f_2 contains the loop f_2, it is unbalanced. Hence, H_2 is a general-count blocker. It follows, from Proposition <ref>, that v_2∉V(H_2). Hence, the graph H obtained from H_2 by adding v,v_2, together with the edges e_1,e_1',e_2,(v_1,v_2), satisfies |E(H)|=|E(H_2)|+4=2|V(H_2)|+|V_0(H_2)|+3=2|V(H)|+|V_0(H)|, contradicting the sparsity of (G,ψ). Thus, the 1-reduction at v which yields (G_2,ψ_2) is admissible. Now, let (v_1,v_2)∉E(G). Assume that H_1 and H_2 are blockers for (G_1,ψ_1) and (G_2,ψ_2), respectively. By Proposition <ref>, H_1 is a balanced blocker. Since H_2+f_2 contains the loop f_2, it is a general-count blocker. By Proposition <ref>, v_2∉V(H_2). Moreover, by Lemma <ref>(i), E(H_1∩ H_2)=∅. Let H:=H_1∪ H_2 and H':=H_1∩ H_2. We have |E(H)| =(2|V(H_1)|-3)+(2|V(H_2)|+m|V_0(H_2)|-l) =(2|V(H_1)|+2|V_0(H_1)|-3)+(2|V(H_2)|+m|V_0(H_2)|-l) =2|V(H)|+m|V_0(H)|-l+[2|V(H')|+m|V_0(H')|+(2-m)|V_0(H_1)|-3] =2|V(H)|+m|V_0(H)|-l, where the last inequality holds because |V(H')|≥ 1, |V_0(H_1)|≥ 1 and m≤1. This contradicts Proposition <ref>. Hence, there is an admissible 1-reduction at v. So for Case 2, we have shown that there always is an admissible 1-reduction at v, unless (i) holds. Case 3: N(v)=3. For i=1,2,3, let e_i=(v,v_i) be the edges incident with v. Let f_1=(v_1,v_2),f_2=(v_2,v_3) and f_3=(v_3,v_1). By Corollary <ref>, Proposition <ref> and Lemma <ref>(i), we may assume ψ(e_1)=ψ(e_2)=ψ(e_3)=id. For 1≤ i≤3, let (G_i,ψ_i) be obtained by applying a 1-reduction at v, during which we add the edge f_i with gain id, and assume that (G_i,ψ_i) has a blocker H_i. Let H:=H_1∪ H_2∪ H_3 and H':=H_1∩ H_2∩ H_3. We first show that E(H_i∩ H_j)=∅ for all pairs i≠ j. As a first step, we show that E(H_i∩ H_j)≠∅ for at most one pair i≠ j. Without loss of generality, let E(H_1∩ H_2)≠∅ and E(H_1∩ H_3)≠∅. By Lemma <ref>(ii), H_1∪ H_2 is (2,3)-tight, and H_1∪ H_2+f_1+f_2 is balanced. Moreover, |E(H_1∪ H_2+v)|=|E(H_1∪ H_2)|+3=2|V(H_1∪ H_2)|=2|V(H_1∪ H_2+v)|-2. If H_1∪ H_2+v is balanced, this contradicts the sparsity of (G,ψ). So, we may assume that H_1∪ H_2+v (equivalently, H_1∪ H_2+f_1+f_2+f_3) is unbalanced. The group <H_1∪ H_2+v>≃<H_1∪ H_2+f_1+f_2+f_3> is given by the elements of <H_1∪ H_2+f_1+f_2>, together with the gains of the walks from v_1 to v_3 which do not contain fixed vertices. Since H_1∪ H_2+f_1+f_2 is balanced and H_1∪ H_2+v is unbalanced, there must be a path P from v_3 to v_1 in H_1∪ H_2 with gain g≠id, which contains only free vertices. In particular, v_1,v_3 are free. Moreover, v_2 must be fixed, for otherwise, f_1,f_2,P is a closed path in H_1∪ H_2+f_1+f_2 with gain g≠id and with no fixed vertex, contradicting the fact that H_1∪ H_2+f_1+f_2 is balanced. Applying the same argument to H_1∪ H_3, we may conclude that v_1 is fixed and v_2,v_3 are free. 
But this contradicts the fact that v_1 is free and v_2 is fixed. Hence, E(H_i∩ H_j)≠∅ for at most one pair 1≤ i≠ j≤3. Without loss of generality, let E(H_1∩ H_2)≠∅ and E(H_1∩ H_3)=E(H_2∩ H_3)=∅. Note that in this case, as shown above, v_2 is fixed and v_1 and v_3 are free, which is a fact we will use later. If H_3 is a balanced blocker, then |E(H)| =|E(H_1∪ H_2)|+|E(H_3)|=(2|V(H_1∪ H_2)|-3)+(2|V(H_3)|-3) =2|V(H)|+2|V((H_1∪ H_2)∩ H_3)|-6. If |V((H_1∪ H_2)∩ H_3)|≥3, then |E(H)|≥2|V(H)|>2|V(H)|+m|V_0(H)|-l, contradicting the sparsity of (G,ψ). Hence, V((H_1∪ H_2)∩ H_3)={v_1,v_3}, and so H is balanced (every closed walk in H is composed of closed walks in H_1∪ H_2, of closed walks in H_3, and of concatenations of walks from v_1 to v_3 in H_1∪ H_2 together with walks from v_3 to v_1 in H_3; all such walks must have identity gain, since H_1∪ H_2+f_1+f_2,H_3+f_3 are balanced). However, by Equation (<ref>), we have |E(H)|=2|V(H)|-2, contradicting the sparsity of (G,ψ). So, we may assume that H_3 is a general-count blocker. Then, it is easy to see that |E(H)| =|E(H_1∪ H_2)|+|E(H_3)|=(2|V(H_1∪ H_2)|-3)+(2|V(H_3)|+m|V_0(H_3)|-l) =2|V(H)|+m|V_0(H)|-l+(2-m)|V_0(H_1∪ H_2)|+m|V_0((H_1∪ H_2)∩ H_3)|+2|V((H_1∪ H_2)∩ H_3)|-3 ≥2|V(H)|+m|V_0(H)|-l+1, where the inequality holds because v_1,v_3∈V((H_1∪ H_2)∩ H_3) and 0≤ m≤2. This contradicts the sparsity of (G,ψ). Hence, E(H_i∩ H_j)=∅ for all pairs i≠ j. We now show that H_1,H_2,H_3 cannot all be balanced blockers. Assume, by contradiction, that H_1,H_2,H_3 are all balanced blockers. If |V(H_i∩ H_j)|=1 for all 1≤ i≠ j≤3, then it is easy to see that H+v is balanced (since every closed walk in H is composed of closed walks in H_i for 1≤ i≤ 3, and the concatenation of walks from v_1 to v_2 in H_1, walks from v_2 to v_3 in H_2, and walks from v_3 to v_1 in H_3; all such walks either include a fixed vertex or must have identity gain) and |E(H)| =∑_i=1^3|E(H_i)|=2∑_i=1^3|V(H_i)|-9=2|V(H)|+2∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-2|V(H')|-9 =2|V(H)|+2(1+1+1)-9=2|V(H)|-3. This contradicts Proposition <ref>. Hence, |V(H_i∩ H_j)|≥2 for some 1≤ i≠ j≤3. Without loss of generality, let |V(H_1∩ H_2)|≥2. Then, |E(H_1∪ H_2)|=2|V(H_1∪ H_2)|+2|V(H_1∩ H_2)|-6≥2|V(H_1∪ H_2)|-2≥2|V(H_1∪ H_2)|+m|V_0(H_1∪ H_2)|-2, where the last inequality holds because m≤2. If l=2, this contradicts Proposition <ref>. Hence, l=1. Moreover, rearranging Equation (<ref>), we know that |E(H)| =2|V(H)|-1+2(∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-|V(H')|-4) ≥2|V(H)|+m|V_0(H)|-1+2(∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-|V(H')|-4), since m≤2. If we show that f:=∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-|V(H')|≥4, this contradicts Proposition <ref>. Notice that |V(H')| is at most the minimum of |V(H_i∩ H_j)|, where i≠ j run from 1 to 3. Call this number min. Hence, f≥∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-min. If min=|V(H_1∩ H_2)|, then |V(H_2∩ H_3)|,|V(H_1∩ H_3)|≥2, and so f≥2+2≥4. So assume, without loss of generality, that min=|V(H_2∩ H_3)|, and hence that f≥|V(H_1∩ H_2)|+|V(H_1∩ H_3)|. If |V(H_1∩ H_3)|≥2, then f≥4. So, assume that V(H_1∩ H_3)={v_1}. By minimality, V(H_2∩ H_3)={v_3}. It follows that V(H')=∅, and so f≥2+1+1=4. Since we reached a contradiction, H_1,H_2,H_3 are not all balanced blockers. Assume, without loss of generality, that H_1 is a general-count blocker. For 2≤ i≤3, we have |V(H_1∩ H_i)|=1. If H_i is also a general-count blocker, then, since |E(H_1∪ H_i)|= |E(H_1)|+|E(H_i)|, we have |E(H_1∪ H_i)| =2|V(H_1∪ H_i)|+m|V_0(H_1∪ H_i)|-l+(2|V(H_1∩ H_i)|+m|V_0(H_1∩ H_i)|-l) ≥2|V(H_1∪ H_i)|+m|V_0(H_1∪ H_i)|-l+(2|V(H_1∩ H_i)|+m|V_0(H_1∩ H_i)|-2), since l≤2. 
If |V(H_1∩ H_i)|≥1, or if |V_0(H_1∩ H_i)|≥2 (and so m=1 by assumption), it is easy to see that this is at least 2|V(H_1∪ H_i)|+m|V_0(H_1∪ H_i)|-l. This contradicts Proposition <ref>. Hence, |V(H_1∩ H_i)|=|V_0(H_1∩ H_i)|=1, and the claim holds. If H_i is a balanced blocker, it is easy to see that |E(H_1∪ H_i)| =2|V(H_1∪ H_i)|+m|V_0(H_1∪ H_i)|-l+(2|V(H_1∩ H_i)|+m|V_0(H_1∩ H_i)|+(2-m)|V_0(H_i)|-3) ≥2|V(H_1∪ H_i)|+m|V_0(H_1∪ H_i)|-l+(2|V(H_1∩ H_i)|+2|V_0(H_1∩ H_i)|-3) =2|V(H_1∪ H_i)|+m|V_0(H_1∪ H_i)|-l+(2|V(H_1∩ H_i)|-3), where the inequality holds because V_0(H_1∩ H_i)⊆ V_0(H_i) and m≤2. If |V(H_1∩ H_i)|≥2, then this contradicts the sparsity of (G,ψ). Hence, the claim holds. By the Claim, V(H_1∩ H_2)={v_2} and V(H_1∩ H_3)={v_1}. Hence, v_1,v_2,v_3 do not lie in V(H'). Let n be the number of free vertices in {v_1,v_2,v_3}. Since each vertex in {v_1,v_2,v_3} lies in H_i∩ H_j for some 0≤ i≠ j≤1, this implies that S:=∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-|V(H')|≥ n and S_0:=∑_1≤ i≠ j≤3|V_0(H_i∩ H_j)|-|V_0(H')|≥ 3-n. We look at the following sub-cases separately: H_2,H_3 are balanced blockers; H_2 is a general-count blocker and H_3 is a balanced blocker; H_2,H_3 are general-count blockers; * Sub-case 3a: H_2,H_3 are balanced blockers. Then, |E(H)| =(2|V(H_1)|+m|V_0(H_1)|-l)+(2|V(H_2)|-3)+(2|V(H_3)|-3) =2[|V(H)|+S]+m[|V_0(H)|+S_0]+(2-m)(|V_0(H_2)|+|V_0(H_3)|)-6-l ≥2[V(H)|+n]+m[|V_0(H)|+(3-n)]+(2-m)(|V_0(H_2)|+|V_0(H_3)|)-6-l =2|V(H)|+m|V_0(H)|-l+[2n+m(3-n)+(2-m)(|V_0(H_2)|+|V_0(H_3)|)-6]. Let f:=2n+m(3-n)+(2-m)(|V_0(H_2)|+|V_0(H_3)|)-6. If f≥0, Proposition <ref> leads to a contradiction, and so there is an admissible 1-reduction at v. We will show that indeed f≥0. This is clear if n=3. Suppose n=2, so that f=m+(2-m)(|V_0(H_2)|+|V_0(H_3)|)-2. Since n=2, there is at least one fixed vertex in {v_1,v_2,v_3}, and so |V_0(H_2)|+|V_0(H_3)|≥1. Hence, f≥ m+2-m-2=0. So, we may assume n≤1. Hence, there are at least two fixed vertices in {v_1,v_2,v_3}⊂ V(G), and so |V_0(H_2)|+|V_0(H_3)|≥ 2. By assumption, this implies that m=1. Hence, f=n-3+|V_0(H_2)|+|V_0(H_3)|≥ n-1. When n=1, f≥0. So, let n=0. Then |V_0(H_2)|,|V_0(H_3)|≥2, so f≥1. * Sub-case 3b: H_2 is a general-count blocker and H_3 is a balanced blocker. We have |E(H)| =(2|V(H_1)|+|V_0(H_1)|-l)+(2|V(H_2)|+|V_0(H_2)|-l)+(2|V(H_3)|-3) =2[|V(H)|+S]+m[|V_0(H)|+S_0]+(2-m)|V_0(H_3)|-3-2l ≥2[|V(H)|+n]+m[|V_0(H)|+(3-n)]+(2-m)|V_0(H_3)|-3-2l =2|V(H)|+m|V_0(H)|-l+[2n+m(3-n)+(2-m)|V_0(H_3)|-3-l] If f:=2n+m(3-n)+(2-m)|V_0(H_3)|-3-l≥0, then we obtain a contradiction by Proposition <ref>. We will show that indeed f≥0. If n=3, then f≥3-l>0, since l≤2. If n=2, then f≥1+m-l≥0, since l-m≤1. Hence, we may assume that n≤1. So, at least two of the elements in {v_1,v_2,v_3}⊂ V(G) are fixed. It follows that m=1 and f=n-l+|V_0(H_3)|. If n=1, then |V_0(H_3)|≥1 and f=1-l+|V_0(H_3)|≥2-l≥0, since l≤2. If n=0, then |V_0(H_3)|≥2 and f≥2-l≥0. * Sub-case 3c: H_2,H_3 are general-count blockers. We have |E(H)| =∑_i=1^3|E(H_i)|=2∑_i=1^3|V(H_i)|+m∑_i=1^3|V_0(H_i)|-3l =2(|V(H)|+S)+m(|V_0(H)|+S_0)-3l ≥2|V(H)|+m|V_0(H)|-l+[2n+m(3-n)-2l]. If f:=2n+m(3-n)-2l≥0, then we obtain a contradiction by Proposition <ref>. We will show that f≥0 unless (ii) holds. If n=3, f=6-2l=2(3-l)>0, since l≤2. If n=2, then f=4+m-2l=2(2-l)+m≥0, since l≥2 and m≥0. So, we may assume that n≤1, which implies that m=1. Hence, f=n+3-2l. If n=1, then f=2(2-l)≥0. If n=0, then f=3-2l. So if l≤1, then f≥1. This leaves the case that l=2. In this case (ii) holds. This proves the result. 
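The case analysis above can also be explored computationally on small examples. The sketch below is an illustration added here (not an algorithm from the paper): it enumerates the three candidate 1-reductions at a free degree-3 vertex v with no loop and reports those whose reduced gain graph passes the brute-force checker check_gain_tight sketched earlier in this section. The gain of the added edge follows the rule ψ'(e_1)^{-1}ψ'(e_2)=ψ(e) from the definition of a 1-extension, written additively in ℤ_k; the helper names and edge format are assumptions of this sketch.

```python
from itertools import combinations

def one_reductions(free_V, fixed_V, edges, v, m, l, k):
    """Enumerate candidate 1-reductions at a free degree-3 vertex v with no loop,
    returning those whose reduced graph passes check_gain_tight (defined above).
    Each candidate replaces the path v_i - v - v_j by a single edge (v_i, v_j)
    whose gain is the gain of that path, i.e. -g_i + g_j in Z_k.
    """
    at_v = [e for e in edges if v in (e[0], e[1])]
    assert len(at_v) == 3 and all(e[0] != e[1] for e in at_v), "v must have degree 3, no loop"
    # orient the three edges away from v: (neighbour, gain of the edge read from v)
    oriented = [(b, g % k) if a == v else (a, (-g) % k) for (a, b, g) in at_v]
    rest = [e for e in edges if e not in at_v]
    admissible = []
    for (vi, gi), (vj, gj) in combinations(oriented, 2):
        candidate = (vi, vj, (gj - gi) % k)
        reduced = rest + [candidate]
        if check_gain_tight(set(free_V) - {v}, set(fixed_V), reduced, m, l, k):
            admissible.append(candidate)
    return admissible
```

It can be used, for example, to search small gain graphs for degree-3 vertices at which every candidate 1-reduction is blocked, i.e. for occurrences of the exceptional configurations (i) and (ii).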
§.§ Reflection group Let (G̃,p̃) be a 𝒞_s-generic framework. Recall that the Γ-gain graph (G,ψ) of G̃ is (2,1,3,1)-gain tight whenever (G̃,p̃) is fully-symmetrically isostatic, and (G,ψ) is (2,1,3,2)-gain tight whenever (G̃,p̃) is anti-symmetrically isostatic (see Proposition <ref> in Section <ref>). In this section, we show that the converse statements are also true. To do so, we employ a proof by induction on |V(G)|, which uses the vertex extension and reduction moves described in Section <ref>. Hence, we first need to show that there is an admissible reduction of (G,ψ), whose corresponding extension does not break fully-symmetric or anti-symmetric infinitesimal rigidity. Let v∈ V(G) be a free vertex of degree 3 with no loop. By Theorem <ref>, there is always an admissible 1-reduction at v, unless all neighbours of v are fixed and (G,ψ) is (2,1,3,2)-gain tight. Lemma <ref> shows that a 1-extension maintains the fully-symmetric and anti-symmetric infinitesimal rigidity of a framework. However, the result assumes that all neighbours of the added vertex do not lie on the same line, and hence they cannot all be fixed. This issue arises both in the fully-symmetric and the anti-symmetric cases. Hence, our proof by induction cannot rely on applying a 1-reduction to a vertex whose neighbours are all fixed. In the following result we show that, if G has at least two free vertices, and all free vertices of degree 3 in V(G) have three fixed neighbours, then there is another vertex in V(G) at which we may apply an admissible reduction. For 1≤ l≤ 2, let (G,ψ) be a (2,1,3,l)-gain tight graph with |V(G)|≥2. Then there is a reduction of (G,ψ) which yields a (2,1,3,l)-gain tight graph (G',ψ'). The reduction which yields (G',ψ') is one of the following: a fix-0-reduction, a 0-reduction, a loop-1-reduction, a 1-reduction at a vertex with at least one free neighbour, or a fix-1-reduction. The case where there are no fixed vertices is known (see e.g., Theorem 6.3 in <cit.>), so we may assume V_0(G)≠∅. Suppose G has a vertex v which is either a fixed vertex of degree 1, or a free vertex of degree 2, or a free vertex of degree 3 with a loop (notice that if v has a loop, then l=1). Then, we may apply a fix-0-reduction, or a 0-reduction, or a loop-1-reduction at v. All such reductions are clearly admissible. Hence, we may assume that there is no such vertex v. By Theorem <ref>, we may also assume that all free vertices of degree 3 in V(G) have three distinct neighbours, all of which are fixed. Let n be the number of vertices of degree 2 in V_0(G). Under the above assumptions, we have n≥3. Proof. To see this, let v_1,…,v_t be the free vertices in G of degree 3 and assume that for all 1≤ i≤ t, the edges incident with v_i are directed to v_i. Notice that t≥2, by Lemma <ref>. Define the set V':={v∈ V_0(G):(v,v_i)∈ E(G) for some 1≤ i≤ t}. Let n'=|V'| and consider the subgraph H of G induced by {v_1,…,v_t}∪ V'. By the sparsity of (G,ψ), 3t≤|E(H)|≤2t+n'-l and hence n'≥ t+l. Now, the average degree of G is ρ̂=4|V(G)|+2|V_0(G)|-2l/|V(G)|. This average is smallest when all vertices in V(G)∖{v_1,…,v_t} have degree 4, and all fixed vertices in V(G) which do not have degree 2, have degree 3. This gives ρ̂≥4|V(G)|+3|V_0(G)|-n-t/|V(G)|. Hence, n≥|V_0(G)|+2l-t≥ n'+2l-t≥(t+l)+2l-t=3l≥3, as required. □ So, there is a fixed vertex v of degree 2. Let u_1,u_2 be the neighbours of v. 
Notice that there is no (2,1,3,l)-gain tight subgraph H of G with u_1,u_2∈ V(H),v∉V(H), as otherwise the graph H':=H+v satisfies |E(H')|=|E(H)|+2=2|V(H)|+|V_0(H)|-l+2=2|V(H')|+|V_0(H')|-l+1, contradicting the sparsity of (G,ψ). We show that there is an admissible fix-1-reduction at v. First, suppose that l=2. By the sparsity of (G,ψ), u_1,u_2 are free. Let (G_1,ψ_1),(G_2,ψ_2) be obtained from (G,ψ) by removing v and adding the edge e=(u_1,u_2) with gains id and γ, respectively. Assume, by contradiction, that for 1≤ i≤2, (G_i,ψ_i) has a blocker H_i. By Equation (<ref>), H_1,H_2 are balanced blockers. If E(H_1∩ H_2)=∅, then |E(H_1∪ H_2)| =2|V(H_1)|-3+2|V(H_2)|-3=2|V(H_1∪ H_2)|+2|V(H_1∩ H_2)|-6≥2|V(H_1∪ H_2)|-2, where the inequality holds because u_1,u_2∈ V(H_1∩ H_2). But then the graph H':=H_1∪ H_2+v satisfies |E(H')|=2|V(H')|-2=2|V(H')|+|V_0(H')|-2+|V_0(H')|≥2|V(H')|+|V_0(H')|-1, where the inequality holds because v∈ V_0(H'). This contradicts the sparsity of (G,ψ). So E(H_1∩ H_2)≠∅. Since H_1,H_2 are balanced blockers, all paths from u_1 to u_2 in H_1 have gain id and all paths from u_1 to u_2 in H_2 have gain γ. By the sparsity count, H_1∩ H_2 is connected (see, e.g., the proof of Lemma <ref>(ii)). So, there is a path from u_1 to u_2 in H_1∩ H_2 with two different gains, a contradiction. Hence, at least one of (G_1,ψ_1),(G_2,ψ_2) is (2,1,3,2)-gain tight. Now, let l=1. Let (G_1,ψ_1) be obtained from (G,ψ) by removing v and adding the edge e_1=(u_1,u_2) with gain id. Assume that (G_1,ψ_1) has a blocker H_1. By Equation (<ref>), H_1 is a balanced blocker. Hence, H_1+v satisfies |E(H_1+v)|=2|V(H_1+v)|-3=(2|V(H_1+v)|+|V_0(H_1+v)|-3)+|V_0(H_1+v)|. If H_1+v contains three fixed vertices, this contradicts the sparsity of (G,ψ). Since v is fixed, this implies that at most one of u_1,u_2 is fixed. Assume, without loss of generality, that u_1 is free. Let (G_2,ψ_2) be obtained from (G,ψ) by removing v and adding a loop e_2 at u_1 with gain γ. Assume that (G_2,ψ_2) has a blocker H_2. Since H_2+e_2 contains the unbalanced loop e_2, H_2 is a general-count blocker. Hence, by Equation (<ref>), u_2∉V(H_2). If E(H_1∩ H_2)≠∅, then |E(H_1∩ H_2)|≤2|V(H_1∩ H_2)|-3 and so H_12:=H_1∪ H_2 satisfies |E(H_12)| ≥2|V(H_1)|-3+2|V(H_2)|+|V_0(H_2)|-1-2|V(H_1∩ H_2)|+3 =2|V(H_12)|+|V_0(H_12)|-1+(|V_0(H_1)|-|V_0(H_1∩ H_2)|)≥2|V(H_12)|+|V_0(H_12)|-1. By Equation (<ref>), this contradicts the sparsity of (G,ψ). So, E(H_1∩ H_2)=∅. Hence, |E(H_12)| =2|V(H_1)|-3+2|V(H_2)|+|V_0(H_2)|-1 =2|V(H_12)|+|V_0(H_12)|-4+(|V_0(H_1)|+2|V(H_1∩ H_2)|+|V_0(H_1∩ H_2)|) ≥2|V(H_12)|+|V_0(H_12)|-2+(|V_0(H_1)|+|V_0(H_1∩ H_2)|). where the inequality holds because u_1∈V(H_1∩ H_2). If |V_0(H_1)|≥1, then H_12 is (2,1,3,1)-gain tight which, by Equation (<ref>), contradicts the sparsity of (G,ψ). Hence, V_0(H_1)=∅. In particular, u_2 is free. Let (G_3,ψ_3) be obtained from (G,ψ) by removing v and adding a loop e_3 at u_2 with gain γ. Assume that (G_3,ψ_3) has a blocker H_3. Similarly as we did with H_2, it is easy to see that H_3 is a general-count blocker, that u_1∉V(H_3) and that E(H_1∩ H_3)=∅. Moreover, E(H_2∩ H_3)=∅, as otherwise H_2∪ H_3 is (2,1,3,1)-gain tight which, by Equation (<ref>), contradicts the sparsity of (G,ψ). Let S=∑_1≤ i≠ j≤3|V(H_i∩ H_j)|-|V(H_1∩ H_2∩ H_3)| and S_0=∑_1≤ i≠ j≤3|V_0(H_i∩ H_j)|-|V_0(H_1∩ H_2∩ H_3)|. Since u_1,u_2∉V(H_1∩ H_2∩ H_3), we have S≥2. So the graph H:=H_1∪ H_2∪ H_3 satisfies |E(H)| =2|V(H_1)|-3+2|V(H_2)|+|V_0(H_2)|-1+2|V(H_3)|+|V_0(H_3)|-1 =2|V(H)|+|V_0(H)|-5+(|V_0(H_1)|+2S+S_0)≥2|V(H)|+|V_0(H)|-1. 
By Equation (<ref>), this contradicts the sparsity of (G,ψ). Hence, there is an admissible fix-1-reduction at v. Let Γ be a cyclic group of order 2, let (G,ψ) be a Γ-gain framework, τ:Γ→𝒞_s be a faithful representation, and p:V(G)→ℝ^2 be 𝒞_s-generic. The following hold: * If (G,ψ) is (2,1,3,1)-gain-tight, then the covering framework (G,p̃) is fully-symmetrically isostatic. * If (G,ψ) is (2,1,3,2)-gain-tight, then the covering framework (G,p̃) is anti-symmetrically isostatic. We use a proof by induction on |V(G)|. First, assume that V(G) has no free vertex. If (G,ψ) is (2,1,3,1)-gain-tight, then G is a tree. The base case consists of exactly one single vertex and no edge, which is clearly fully-symmetrically isostatic. Assume that the statement is true for all graphs on m vertices and let G be a graph on m+1 vertices. Since G is a tree, it has a vertex v of degree 1. Thus, we may apply a fix-0-reduction at v to obtain a (2,1,3,1)-gain tight graph (G',ψ') on m vertices. By the inductive hypothesis, all 𝒞_s-generic realisations of G̃' are fully-symmetrically isostatic. Choose a 𝒞_s-generic realisation (G̃'̃,q̃') of G̃'. By Lemma <ref>, there is a 𝒞_s-symmetric realisation (G̃,q̃) of G̃ which is fully-symmetrically isostatic. By 𝒞_s-genericity, (G̃,p̃) is also fully-symmetrically isostatic. If (G,ψ) is (2,1,3,2)-gain tight, then G consists of exactly two isolated vertices, with no edges, since any edge would violate the sparsity count. In this case, (G̃,p̃) is clearly anti-symmetrically isostatic, since any anti-symmetric motion must be trivial. Hence, we may assume |V(G)|≥1. All base graphs are given in Figure <ref>. It is easy to check that 𝒞_s-symmetric realisations of these base graphs are fully-symmetrically and anti-symmetrically isostatic, respectively. For the inductive step, assume the result holds whenever |V(G)|=m for some m∈ℕ. Let 1≤ l≤2 and suppose (G,ψ) is (2,1,3,l)-gain tight with |V(G)|=m+1. If G has a fixed vertex v of degree 1, then we may apply a fix-0-reduction at v to obtain a (2,1,3,l)-gain tight graph (G',ψ') on m vertices. By induction, all 𝒞_s-generic realisations of G̃'̃ are fully-symmetrically isostatic if l=1, and they are anti-symmetrically isostatic if l=2. Then, our result follows from Lemma <ref>. So, assume that all fixed vertices of G have degree at least 2. Suppose that V(G)={u}, and let V_0(G)={v_1,…,v_t} for some t≥1. The average degree of G, denoted ρ̂, satisfies ρ̂=2|E(G)|/|V(G)|=4+2t-2l/|V(G)|. The average degree of G is smallest when all vertices in V_0(G) have degree 2, and so 2t+deg(u)≤ 4+2t-2l. Hence deg(u)≤4-2l. By Lemma <ref>(i), l=1 and deg(u)=2. Then we may apply a 0-reduction at u to obtain a (2,1,3,1)-gain tight graph (G',ψ') on m vertices. By induction, all 𝒞_s-generic realisations of G̃' are fully-symmetrically isostatic. Then the result holds by Lemma <ref>. So, assume |V(G)|≥2. By Lemma <ref>, (G,ψ) admits a reduction using one of the moves listed in the statement of the lemma. Let (G',ψ') be a (2,1,3,l)-gain tight graph obtained by applying such a reduction to (G,ψ). By induction, all 𝒞_s-generic realisations of G̃'̃ are fully-symmetrically isostatic if l=1 and anti-symmetrically isostatic if l=2. Let q̃' be a 𝒞_s-generic configuration of G̃'̃ which also satisfies the conditions of Lemma <ref> (respectively, Lemma <ref>) if G̃'̃ is obtained from G̃ by applying a 1-reduction (respectively, a fix-1-reduction). 
Such a configuration exists: if necessary, we may apply a small symmetry-preserving perturbation to the points of a 𝒞_s-generic framework, which will maintain 𝒞_s-genericity. By Lemmas <ref>, <ref>, <ref> and <ref>, there is a realisation (G̃,q̃) of G̃ which is fully-symmetrically isostatic if l=1 and anti-symmetrically isostatic if l=2. Since p̃ is 𝒞_s-generic, the result follows. The following main result for 𝒞_s is now a consequence of Proposition <ref> and Theorem <ref>. Let (G̃,p̃) be a 𝒞_s-generic framework with 𝒞_s-gain framework (G,ψ,p). (G̃,p̃) is infinitesimally rigid if and only if the following hold: * (G,ψ) has a (2,1,3,1)-gain tight spanning subgraph. * (G,ψ) has a (2,1,3,2)-gain tight spanning subgraph. §.§ Half-turn group Let (G̃,p̃) be a 𝒞_2-generic framework. Recall that (G,ψ) is (2,0,3,1)-gain tight whenever (G̃,p̃) is fully-symmetrically isostatic, and (G,ψ) is (2,2,3,2)-gain tight whenever (G̃,p̃) is anti-symmetrically isostatic (see Proposition <ref> in Section <ref>). In this section, we show that the converse statements are also true. We do so by strong induction on |V(G)|, using the vertex reduction moves shown in Section <ref>. Hence, we first need to show that there is an admissible reduction of (G,ψ). Let v∈ V(G) be a free vertex of degree 3. By Theorem <ref>, there is always an admissible 1-reduction at v, unless (G,ψ) is (2,2,3,2)-gain tight, v has exactly one free neighbour and exactly one fixed neighbour. In the following Lemma, we take care of this remaining case. Let (G,ψ) be a (2,2,3,2)-gain tight graph with |V_0(G)|≤1 and |V(G)|≥2. Then (G,ψ) admits a reduction. The case where there is no fixed vertex is already known (see e.g., Theorem 6.8 in <cit.>). Hence, we may assume V_0(G)={v_0}. By Lemma <ref>, there is a free vertex in V(G) of degree 2 or 3. By the sparsity of (G,ψ), no vertex of G has a loop. We may assume that G has no free vertex of degree 2. Otherwise, we may apply a 0-reduction to (G,ψ) (clearly, any 0-reduction is admissible). Further, we may assume that all free vertices of degree 3 have exactly 2 distinct neighbours, one of which is v_0: otherwise, we may apply a 1-reduction to (G,ψ), by Theorem <ref>. So let v_1,…,v_t be the free vertices in G of degree 3. For 1≤ i≤ t let u_i be the free neighbour of v_i, and e_i:=(u_i,v_0). By Lemma <ref>(ii), deg(v_0)≤ t. So, if the edge e_i is present for some 1≤ i≤ t, then u_i must be a vertex of degree 3. Hence, we can apply a 2-vertex reduction at u_i,v_i. So, we may assume that e_i∉E(G) for all 1≤ i≤ t. For 1≤ i≤ t, let (G_i,ψ_i) be obtained from (G,ψ) by removing v_i and adding e_i with gain id. We will show that, for some 1≤ i≤ t, (G_i,ψ_i) is an admissible 1-reduction. Assume, by contradiction, that for all 1≤ i≤ t there is a blocker H_i for (G_i,ψ_i). By Proposition <ref>, each H_i is a balanced blocker. Moreover, for each 1≤ i≠ j≤ t, v_j∉V(H_i). To see this, suppose, by contradiction, that v_j∈ V(H_i). Since H_i is a balanced blocker, all of its vertices have degree at least 2 (see the first paragraph of the proof of Lemma <ref> for an argument). Hence, two of the edges incident to v_j lie in E(H_i). Moreover, since H_i is balanced, it cannot contain parallel edges. Hence, H_i contains exactly 2 of the edges incident to v_j. Let e be the edge incident to v_j such that e∉E(H_i). Then |E(H_i+v_i+e)|=|E(H_i)|+4=2|V(H_i)|+1=2|V(H_i+v_i+e)|-1, contradicting the sparsity of (G,ψ). So v_j∉V(H_i) for all 1≤ i≠ j≤ t. Claim: E(H_i∩ H_j)=∅ and V(H_i∩ H_j)={v_0} for all 1≤ i≠ j≤ t. Proof. 
Choose some 1≤ i≠ j≤ t. First, assume by contradiction that E(H_i∩ H_j)≠∅. In a similar way as we did in the proof of Lemma <ref>(ii), we can see that |E(H_i∪ H_j)|≥2|V(H_i∪ H_j)|+3c-c_0-6, where c,c_0 are, respectively, the number of connected components and isolated vertices of H_i∩ H_j. Notice that c_0≤ c-1 (since all isolated vertices of H' are also connected components of H', and H' has at least one connected component with non-empty edge set), and so |E(H_i∪ H_j)|≥2|V(H_i∪ H_j)|+2c-5. By the sparsity of (G,ψ), c=1 and |E(H_i∪ H_j)|=2|V(H_i∪ H_j)|-3. But the graph H obtained from H_i∪ H_j by adding v_i,v_j and its incident edges satisfies |E(H)|=2|V(H)|-1, contradicting the sparsity of (G,ψ). Thus, E(H_i∩ H_j)=∅ for all 1≤ i≠ j≤ t. Now, if V(H_i∩ H_j)≠{v_0}, then |E(H_i∪ H_j)|=|E(H_i)|+|E(H_j)|=2|V(H_i∪ H_j)|+2|V(H_i∩ H_j)|-6≥2|V(H_i∪ H_j)|-2. But then the graph H obtained from H_i∪ H_j by adding v_i,v_j and its incident edges satisfies |E(H)|=2|V(H)|, contradicting the sparsity of (G,ψ). So V(H_i∩ H_j)={v_0}. Since i,j were arbitrary, the claim holds. □ Let H:=⋃_i=1^tH_i. By the Claim, |E(H)|=∑_i=1^t|E(H_i)| =2∑_i=1^t|V(H_i)|-3t =2(|V(H)|+(t-1))-3t =2|V(H)|-t-2. So for the graph G' obtained from H by adding the vertices v_i, i=1,…, t, and their incident edges, we have |E(G')|=2|V(G')|-2. This implies that there is no edge e∈ E(G)∖ E(H) that joins two vertices in V(H). Next we show that there is no non-empty subgraph H' of G such that V(G) is the disjoint union of V(G') and V(H'). Assume, by contradiction, that such H' exists. By assumption, all vertices of H' have degree at least 4 in G. Let d(G',H') be the number of edges joining a vertex in G' with one in H'. We know |E(H')|=2|V(H')|-α for some α≥2. We have that 4|V(H')|≤∑_v∈ V(H') deg_G(v)=2|E(H')|+d(G',H')=4|V(H')|-2α+d(G',H'), and so d(G',H')≥2α. Hence, |E(G)|=|E(G')|+|E(H')|+d(G',H')≥2|V(G')|-2+2|V(H')|-α +2α=2|V(G)|-2+α, which contradicts the sparsity of (G,ψ), since α≥2. So, H' does not exist, and G=G'. Finally, fix some 1≤ i≤ t and let n,m be the vertices of H which have degree 2 and 3 in H_i. The average degree of H_i is ρ̂=2|E(G)|/|V(H_i)|=4|V(H_i)|-6/|V(H_i)|. The minimum average degree of H_i is 4|V(H_i)|-2n-m/|V(H_i)|. Hence, 2n+m≥6. In particular, there are at least 3 vertices of degree 2 or 3 in V(H_i), and so there is a free vertex v of degree 2 or 3 that is not v_0 or u_i. This means that v has degree 2 or 3 in G=G'. But this is not possible, since we assumed there are no free vertices of degree 2 in G, and that all free vertices of degree 3 are v_1,…,v_t. The result follows. The following results will be proved in a very similar way to Theorem <ref>. However, we now work with the half-turn group. So |V_0(G)|≤ 1. Let Γ be a cyclic group of order 2, let (G,ψ) be a connected Γ-gain framework with |V_0(G)|≤ 1, τ:Γ→𝒞_2 be a faithful representation, and p:V(G)→ℝ^2 be 𝒞_2-generic. The following hold: * If (G,ψ) is (2,0,3,1)-gain-tight, then the covering framework (G,p̃) is fully-symmetrically isostatic. * If (G,ψ) is (2,2,3,2)-gain tight, then the covering framework (G,p̃) is anti-symmetrically isostatic. First, notice that if there is no free vertex, then G̃ is a single fixed vertex. In this case G̃ is not (2,0,3,1)-gain-tight. It is (2,2,3,2)-gain-tight and clearly also anti-symmetrically isostatic. Hence, we may assume |V(G)|≥1. We prove the result by induction on |V(G)|. Assume |V(G)|=1. 
If (G,ψ) is (2,0,3,1)-gain tight, G is either composed of a free vertex and a loop, or a free vertex, a fixed vertex, and an edge connecting them. In either case, O_0(G,ψ,p) is a non-zero row with one-dimensional kernel, and so (G̃,p̃) is fully-symmetrically isostatic. If (G,ψ) is (2,2,3,2)-gain tight, G must be a single free vertex. Any anti-symmetric motion of any realisation (G̃,p̃) of G̃ must be a translation of the whole framework, and so (G̃,p̃) is anti-symmetrically isostatic. The base cases for the fully-symmetric and anti-symmetric case are given in Figure <ref>. Assume the result holds whenever |V(G)|≤ m for some m∈ℕ and consider the case where |V(G)|=m+1. If (G,ψ) is (2,0,3,1)-gain tight, G has a free vertex v of degree 2 or 3, by Lemma <ref>. If v has degree 2, or if it has degree 3 with a loop, then we may apply a 0-reduction or loop-1-reduction at v to obtain a (2,0,3,1)-gain tight graph (G',ψ'), since 0-reductions and loop-1-reductions are always admissible. Moreover, if v has degree 3 with a loop, then it is not incident to a fixed vertex, by the sparsity of (G,ψ). By the inductive hypothesis, all 𝒞_2-generic realisations of G̃'̃ are fully-symmetrically isostatic. Then, our result follows from Lemmas <ref> and <ref>. So, assume that v has degree 3 and no loops. By Lemma <ref>, there is a (2,0,3,1)-gain tight graph (G',ψ') obtained from (G,ψ) by applying a 1-reduction at v. By induction, all 𝒞_2-generic realisations of G̃'̃ are fully-symmetrically isostatic, so take a 𝒞_2-generic realisation (G̃'̃,q̃') of G̃'̃ such that the conditions in Lemma <ref> are satisfied. Then, our result holds by Lemma <ref>. If (G,ψ) is (2,2,3,2)-tight, then, by Lemma <ref>, there is a (2,2,3,2)-gain tight graph (G',ψ') on at most m vertices (exactly m if we apply a 0-reduction, loop-1-reduction, or 1-reduction, and exactly m-1 if we apply a 2-vertex reduction) obtained by applying a reduction to (G,ψ). By the inductive hypothesis, all 𝒞_2-generic realisations of G̃'̃ are anti-symmetrically isostatic. Let q̃' be a 𝒞_2-generic configuration of G̃'̃, which also satisfies the conditions of Lemma <ref> if G̃'̃ is obtained from G̃ by applying a 1-reduction. By Lemmas <ref>, <ref> and <ref>, our result holds. From Proposition <ref> and Theorem <ref>, we obtain the following main result for 𝒞_2. Let (G̃,p̃) be a 𝒞_2-generic framework with associated 𝒞_2-gain framework (G,ψ,p). (G̃,p̃) is infinitesimally rigid if and only if: * there is a spanning subgraph of (G,ψ) which is (2,0,3,1)-gain tight; and * there is a spanning subgraph of (G,ψ) which is (2,2,3,2)-gain tight. §.§ Rotation group of order 3 Let k≥3, and (G̃,p̃) be a 𝒞_k-generic framework. Recall that (G,ψ) is (2,0,3,1)-gain tight whenever (G̃,p̃) is fully-symmetrically isostatic, and (G,ψ) is (2,1,3,1)-gain tight whenever (G̃,p̃) is ρ_1-symmetrically isostatic and ρ_k-1-symmetrically isostatic. Here, we prove that the converse is also true, which will give us the desired characterisation for 𝒞_3-generic frameworks. Let Γ be a cyclic group of order k≥3, (G,ψ) be a connected Γ-gain framework with |V_0(G)|≤ 1, τ:Γ→𝒞_k be a faithful representation and p:V(G)→ℝ^2 be 𝒞_k-generic. The following hold: * If (G,ψ) is (2,0,3,1)-gain tight, then the covering framework (G,p̃) is fully-symmetrically isostatic. * If (G,ψ) is (2,1,3,1)-gain tight, then the covering framework (G,p̃) is ρ_1-symmetrically isostatic and ρ_k-1-symmetrically isostatic. We prove the result by induction on |V(G)|, with the base cases given in Figure <ref>. 
It is easy to check that, in the first two examples of Figure <ref>, the ρ_0-orbit matrix has full rank and nullity 1, whereas, in the latter two cases, the ρ_1-orbit matrix and the ρ_k-1-orbit matrix have full rank and nullity 1. The base cases for the ρ_0-symmetric, ρ_1-symmetric and ρ_k-1-symmetric case are given in Figure <ref>. For the inductive step, assume the result holds when |V(G)|=t for some t≥1, and let (G,ψ) be a (2,m,3,1)-gain tight graph with |V(G)|=t+1, for some 0≤ t≤1. Suppose that m=0, and that V(G) has an isolated fixed vertex. Then, we may remove it to obtain a (2,0,3,1)-gain tight graph (G',ψ') on t vertices. By the inductive hypothesis, all 𝒞_k-generic realisations of G̃'̃ are fully-symmetrically isostatic. Let q̃' be a 𝒞_k-generic configuration of G̃'̃. For any extension q̃:V(G)→ℝ^2 of q̃', we have O_0(G,ψ,q)=O_0(G',ψ',q'). So, (G̃,p̃) is also fully-symmetrically isostatic. By 𝒞_k-genericity of (G̃,p̃), the result follows. So, we may assume that each fixed vertex of (G,ψ) has degree at least 1. By Lemma <ref>, G has a free vertex v of degree 2 or 3 (both when m=0 and when m=1). If v has degree 2, or if it has degree 3 with a loop, then we may apply a 0-reduction or loop-1-reduction at v to obtain a (2,m,3,1)-tight graph (G',ψ') on t vertices. By the inductive hypothesis, all 𝒞_k-generic realisations of G̃ are fully-symmetrically isostatic when m=0, and ρ_1-symmetrically isostatic, ρ_k-1-symmetrically isostatic when m=1. Moreover, when m=0, the vertex incident to v is free, by the sparsity of (G,ψ). Then, by Lemmas <ref> and <ref>, the result holds. So, assume that v has degree 3 and no loop. By Theorem <ref>, there is (2,m,3,1)-tight graph (G',ψ') obtained from (G,ψ) by applying a 1-reduction at v. By the inductive hypothesis, all 𝒞_k-generic realisations of G̃'̃ are fully-symmetrically isostatic when m=0, and anti-symmetrically isostatic when m=1. Let (G̃,q̃') be any 𝒞_k-generic realisation of G̃'̃ which satisfies the conditions of Lemma <ref>. Then, our result holds by Lemma <ref>. We finally have our main combinatorial characterisation for 𝒞_3, which is a direct result of Theorem <ref>. Let (G̃,p̃) be a 𝒞_3-generic framework with Γ-gain framework (G,ψ,p) (here, Γ is a cyclic group of order 3) and |V_0(G)|≤1. (G̃,p̃) is infinitesimally rigid if and only if: * there is a spanning subgraph (H,ψ|_E(H)) of (G,ψ) which is (2,0,3,1)-gain tight; and * there is a spanning subgraph (H,ψ|_E(H)) of (G,ψ) which is (2,1,3,1)-gain tight. Note that for |Γ|>3, there are irreducible representations of Γ that lead to additional necessary sparsity counts for ρ_j-symmetric isostaticity. Hence we will discuss these groups in a separate paper <cit.>. § FURTHER WORK In this paper, we have given necessary conditions for reflection and rotationally symmetric frameworks to be infinitesimally rigid in the plane, regardless of whether the action of the group is free on the vertices or not. Moreover, for the groups of order 2 and 3, we have shown that these conditions are also sufficient for symmetry-generic infinitesimal rigidity. In a second paper <cit.>, we establish the analogous combinatorial characterisations of symmetry-generic infinitesimally rigid plane frameworks for the cyclic groups of odd order up to 1000 and of order 4 and 6. The proofs for these groups follow the same pattern as the ones given in this paper, but become more complex due to an even more refined sparsity count. 
A natural direction for future research is the completion of the study of symmetric plane frameworks by considering cyclic groups of odd order at least 1000, even order cyclic groups of order at least 8, and dihedral groups. It is conjectured that the proofs of this paper extend to all odd order cyclic groups. The only issue here is to deal with the growing number of base graphs. In our second paper we will show that for cyclic groups of even order at least 8, the necessary sparsity conditions are no longer sufficient. Thus, it seems very challenging to try to characterise symmetry-generic infinitesimal rigidity for those groups. This leaves the case of the dihedral groups. In <cit.>, there is a combinatorial characterisation of forced-symmetric infinitesimal rigidity for frameworks that are generic with respect to a dihedral group D_2k, where k is odd, and the group acts freely on the vertex set. However, no such characterisation has been found for the case when k is even, despite significant efforts. (See Section 7.2.4 in <cit.> and Section 4.4.3 in <cit.> for examples of graphs that satisfy the desired combinatorial counts described in <cit.>, but are fully-symmetrically flexible.) So even in the free action case, for even k, a characterisation for D_2k-generic infinitesimal rigidity also does not exist. For odd k, the key obstacle in obtaining a characterisation for D_2k-generic infinitesimal rigidity (in either the free or non-free action case) is that these groups are non-abelian and hence have irreducible representations of dimension greater than 1. For such representations, it is unclear how to define the corresponding gain graph or orbit rigidity matrix. However, we expect that the methods of this paper extend to characterising fully-symmetric infinitesimal rigidity for D_2k, where k is odd and the action is not free on the vertices. Similarly, we expect to be able to deal with ρ_i-symmetric infinitesimal rigidity, where ρ_i is 1-dimensional, in either the free or non-free action case. This is all work in progress <cit.>. It is a famous open problem to find a combinatorial characterisation of infinitesimally rigid generic bar-joint frameworks (without symmetry) in dimension at least 3. Hence, we can also not yet combinatorially characterise infinitesimal rigidity for symmetry-generic bar-joint frameworks in ℝ^d for d≥3. However, there are classes of graphs which are known to be generically rigid in ℝ^3. These include frameworks obtained by a recursive constructions, starting with a simplex and applying a series of Henneberg 0- and 1-extensions, or triangulated simplicial polytopes in 3-space. Since Lemma <ref> holds in all dimensions, and our construction of orbit matrices generalises to higher dimensions, these classes of frameworks are amenable to our approach, a least when the group is abelian. We note that initial higher-dimensional results have very recently been obtained (via a different approach) for the special class of d-pseuodmanifolds in (d+1)-space with a free ℤ_2-action <cit.>. There are also results for special classes of symmetric body-bar and body-hinge frameworks in d-space; see <cit.>. plain
http://arxiv.org/abs/2407.12967v1
20240717192008
Rényi-infinity constrained sampling with $d^3$ membership queries
[ "Yunbum Kook", "Matthew S. Zhang" ]
cs.DS
[ "cs.DS", "cs.LG", "math.ST", "stat.ML", "stat.TH" ]
Rényi-infinity constrained sampling with d^3 membership queries
Yunbum Kook, School of Computer Science, Georgia Institute of Technology
Matthew S. Zhang, Dept. of Computer Science, University of Toronto, and Vector Institute

§ ABSTRACT

Uniform sampling over a convex body is a fundamental algorithmic problem, yet the convergence in KL or Rényi divergence of most samplers remains poorly understood. In this work, we propose a constrained proximal sampler, a principled and simple algorithm that possesses elegant convergence guarantees. Leveraging the uniform ergodicity of this sampler, we show that it converges in the Rényi-infinity divergence (R_∞) with no query complexity overhead when starting from a warm start. This is the strongest of commonly considered performance metrics, implying rates in { R_q, 𝖪𝖫} convergence as special cases. By applying this sampler within an annealing scheme, we propose an algorithm which can approximately sample ε-close to the uniform distribution on convex bodies in R_∞-divergence with 𝒪(d^3 polylog(1/ε)) query complexity. This improves on all prior results in { R_q, 𝖪𝖫}-divergences, without resorting to any algorithmic modifications or post-processing of the sample. It also matches the prior best known complexity in total variation distance.

§ INTRODUCTION

Uniform sampling from convex bodies is a fundamental question in computer science. Its applications have spanned from differential privacy <cit.> to scientific computing <cit.> and machine learning <cit.>. The standard computational model assumes that, given a convex body K ⊂ ℝ^d which satisfies B_1(0) ⊆ K ⊆ B_D(0), the algorithm can access a membership oracle <cit.> which, given a point x ∈ ℝ^d, answers Yes or No to the query “Is x ∈ K?”.
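To make the computational model concrete, the following Python sketch illustrates one possible membership oracle; it is an illustration added here, not code from the paper. The representation of K as a polytope {x : Ax ≤ b} intersected with B_D(0), the class name and the query counter are all choices of this sketch; the model above only assumes Yes/No answers to membership queries.

```python
import numpy as np

class MembershipOracle:
    """Membership oracle for K = {x : A x <= b} intersected with B_D(0).
    The sampling algorithms discussed here only ever see the Boolean answers;
    the counter records how many queries have been made."""

    def __init__(self, A, b, D):
        self.A, self.b, self.D = A, b, D
        self.queries = 0

    def __call__(self, x):
        self.queries += 1
        return float(np.linalg.norm(x)) <= self.D and bool(np.all(self.A @ x <= self.b))

# Example: the unit cube [-1, 1]^d, which contains B_1(0) and sits inside B_sqrt(d)(0).
d = 3
oracle = MembershipOracle(np.vstack([np.eye(d), -np.eye(d)]), np.ones(2 * d), np.sqrt(d))
assert oracle(np.zeros(d)) and not oracle(2 * np.ones(d))
```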
This framework possesses two advantages: (i) it is the most general framework under which one can conduct analysis, subsuming other computational models as particular cases, (ii) it has been thoroughly studied both in optimization and sampling. Under this computational model, what is the complexity in terms of membership oracle queries to generate a random sample from a convex body (i.e., from π = 1/()_)? As many of the desired applications require the dimension d to be large, it is typically impractical to generate samples exactly from the uniform distribution. Among other difficulties, this is due to the intractability of deterministically computing the normalizing factor (). Instead, one resorts to sampling from an approximate distribution which is ε-close to the uniform in an appropriate sense. Subsequent work has aimed to develop algorithms and analysis with optimal asymptotic dependence both on the dimension d and error tolerance ε. The seminal work of <cit.> proposed a randomized polynomial time algorithm with oracle complexity poly(d,logD/). Since then, a proliferation of improvements to both the algorithm and analysis <cit.> have led to a refined understanding of the complexity of this problem. Such algorithms are typically instances of a random walk, which first samples a point x_0 ∼μ_0 for a tractable initial measure μ_0, and then produces x_1, x_2, …, x_k_* following a well-specified, possibly random procedure for some number of iterations k_*. Under mild assumptions, it is possible to make μ_k_*, the distribution of x_k_*, ε-close to π in some desired sense. These iterative schemes are called often Markov chain Monte Carlo (MCMC) methods. The notion of closeness can take many different forms. Natural choices of metric[In this text, we will often refer to any of these divergences as a “metric” in the sense of performance metric, although only the total variation is a true metric on the space of probability measures.] such as the divergence and total variation () fit into the following hierarchy (see <ref> for definitions): ^2 ≤KL divergence≤ R_q divergence≤ R_∞ divergence. While the complexity of uniformly sampling convex bodies was known to be (d^3log^21/) for - <cit.>, the best known rates in any other metric had much worse dependence on key parameters. Additionally, they relied on algorithmic modifications which deviated from practical implementations <cit.>. On the other hand, the literature on unconstrained sampling has provided stronger guarantees in or χ^2 without overhead in complexity, through refined analyses of Langevin-based algorithms <cit.>. Adapting those guarantees to the convex body setting is not trivial, since restricting the distribution to K places a strong constraint on the candidate algorithm. The discrepancies in complexity when going beyond leads us to a fundamental question on the complexity of uniform sampling in a stronger metric: Is it possible to achieve a query complexity of (d^3 1/ε) for uniform sampling in {, R_q, R_∞}-divergence? In general, almost all algorithms take two phases: (P1) warm-start generation and (P2) faster sampling under warm-start. P1 finds a tractable initial distribution μ_0 towards the target π, whose closeness to π is measured by the R_∞ divergence at initialization, M = supμ_0/π = exp ( R_∞(μ_0 π)). P2 then samples from the target by leveraging the initial warmness. This approach is also taken by <cit.>, the best known uniform-sampling algorithm in . 
However, it fails to go beyond , as a warm-start generated by the algorithm holds only in a weak sense, causing “-collapse” (we return to this shortly). The resulting sampler can only have guarantees in . Ignoring P1 for the moment, one hopes that, towards stronger guarantees, there should be a sampler which converges in R_q in P2. Recently, <cit.> proposed an algorithm with query complexity (qMd^2 Σ_1/ε) for Σ covariance of π, which seamlessly extends the best complexity of (Md^2Σ_1/ε) in that is achieved by . However, this immediately raises a problem: it requires a R_∞ guarantee at initialization in order to achieve R_q-convergence. One might want to sample from π directly from some simple feasible start, such as a uniform distribution on the unit ball. Unfortunately, such choices potentially incur exponential warmness in dimension (i.e., M≳exp(d)). This is where P1 demonstrates its importance, allowing us to address this issue through an annealing scheme. Annealing algorithms The strategy of annealing for volume computation and uniform sampling appeared in <cit.>. It proceeds with a sequence of distributions {μ_k }_k ≤ k_* which steadily evolve from μ_0 ∝_B_1(0), the uniform distribution on a unit ball, towards μ_k_* = π the uniform distribution on the entire body. This is done by fattening μ_k, steadily increasing its variance, until it is close enough to π to be sampled directly with MCMC.[In the original paper, this is done by a sequence of uniform distributions, but this strategy has since been superseded by considering a sequence of truncated Gaussians with appropriate variance.] This general schema, wherein a tractable, well-understood distribution is gradually converted to a generic one, has been called annealing. This scheme has been refined by annealing via exponential distributions <cit.> and via Gaussians. The latter, whose use of { N(0, σ^2_k I_d)|_ K}_k ≤ k_* was pioneered by the algorithm <cit.>, turns out to be a robust choice for {μ_k}_k ≤ k_* under an appropriate update schedule for the variance σ_k^2. These intermediate distributions are “stepping stones” towards the desired uniform distribution, in the sense that each distribution is Ø(1) close to the subsequent one in R_∞, and no more than k_* = (d) annealing stages are required in total. can guarantee M = Ø(1) in a weak sense with the total query complexity (C^2 d^3 1/ε) under a canonical set-up where is well-rounded (i.e., _X ∼π[X^2] ≤ C^2 d for some constant C) <cit.>. Later work <cit.> claims that one can find an affine transformation which maps a skewed convex body to a well-rounded one with C=Ø(1), via a randomized algorithm with query complexity (d^3). TV-collapse Such annealing schemes are interwoven with an approximate sampler, typically an MCMC subroutine such as and , requiring the sampler to approximately sample from evolving measures. Hence, the warmness for a next target μ_k+1 is actually measured with respect to a distribution μ̃_k, approximately close to μ_k. Standard constrained samplers like or are only able to provide guarantees (between μ̃_k and μ_k) in for truncated Gaussian. However, due to failure of the triangle inequality through and R_q in general, the warmness of μ̃_k with respect to μ_k+1 cannot be bounded. Even worse, it is essential that the warm-start for these samplers is in some Rényi divergence, preferably the R_∞ divergence. As a result, the annealing schemes fail to carry a strong enough notion of warmness across the evolving measures. 
Instead, it passes a weakened form of warmness to the last step via a coupling argument, causing any sampling guarantees based on this warmness to degrade to distance. Given this challenge, we state a refinement of (P1): Can we generate a true warm-start M=Ø(1) with (d^3) queries? The answer to this hinges upon the existence of a sampler which converges in R_∞. As the triangle inequality holds for R_∞, this would enable us to anneal the distribution without compromising R_∞-closeness throughout! Examining this sends us back to a more specific form of (P2): Given M=Ø(1) and uniform or truncated Gaussian target, is there a sampler with R_∞ guarantees without any overhead in query complexity? Rényi divergence: theoretical gap and privacy From our earlier discussion, we cannot help but circle back to the study of Rényi divergence in constrained sampling. This is theoretically important in its own right, since it leads us to examine a current gap between unconstrained and constrained sampling. More specifically, the improvement in metric from to {, R_q} nicely parallels the work in unconstrained sampling, where convergence in weaker metrics (Wasserstein, ) <cit.> paved the way for a characterization of rates in {, R_q} <cit.>. This research program led to many fruitful insights into algorithmic properties and mathematical techniques, and inspires us to attempt the same in the constrained setting. See <ref> for detail. In purely practical terms, the q-Rényi divergence, particularly R_∞, has appeared in the literature most often in the context of differential privacy (DP) <cit.>. R_q can be used to establish (ϵ, δ)-DP algorithms <cit.>, while R_∞ can directly establish ϵ-DP algorithms. This connection has been well-explored in other sampling applications <cit.>. However, convergence in R_∞ has remained largely out of reach, and the existing algorithms are both suboptimal in complexity and limited in scope <cit.>. §.§ Contributions We close a gap in our current understanding of uniform sampling via a principled approach. We first leverage recent results <cit.> for sampling on convex bodies, proposing a generalization of <cit.> that we call the . Using uniform ergodicity results from the study of Markov chains <cit.>, we show that it is possible to achieve convergence in the very strong notion of R_∞ for both uniform and truncated Gaussian distributions on K from an Ø(1) warm-start without suffering any loss in complexity. By using the inside an annealing scheme we call , we show that it is possible to sample uniformly from a well-rounded convex body (i.e., _X ∼π[X^2] ≤ C^2 d) with (C^2 d^31/ε) queries in expectation. This convergence takes place in R_∞, much stronger than the prior best results in . This framework is principled, and the essential benefits of the annealing scheme and sampler are made clearly evident. Furthermore, the approach is simple, with no need to resort to algorithmic modifications as seen in other work <cit.>. We present our improvements in further detail below. Result 1: R_∞ convergence from a warm-start We show that, given an Ø(1) warm-start, there exists an algorithm which samples approximately with ε-accuracy in R_∞ for any ε > 0. The target distribution is either the uniform distribution on K or a truncated Gaussian of the form N(0, σ^2 I_d)|_. The oracle complexity is given by the following theorem. Consider a target of either of the forms π∝_ K, π_σ^2 = N(0, σ^2 I_d)|_ K, where B_1(0) ⊆ K ⊆ B_D(0). 
Then, given π_0 with R_∞(π_0 π) = Ø(1), the algorithm with probability 1-η succeeds in sampling from ν, R_∞(νπ) ≤ε, requiring no more than Ø(d^2 D^2 log^Ø(1)D/ηϵ) , D^2 := 1/d∨σ^2 if the target is π_σ^2 , D^2 if the target is π , membership queries in expectation. The algorithm is a generalization of the sampler <cit.> (Algorithm <ref>). We call this (Algorithm <ref>, <ref>), in analogy with algorithms in unconstrained sampling <cit.>. The chief benefit of this scheme is its analytic simplicity, which allows us to extend known guarantees with little effort. More specifically, it is naturally connected to isoperimetric properties such as the log-Sobolev constant of the target. This fact will be key in establishing convergence in R_∞. The surprising fact is that this convergence takes place essentially in the strongest possible sense of R_∞ without any overhead. To show this, we also demonstrate a strong mixing property of given any point mass within K as the initial distribution. While we are not the only work to obtain rates in this divergence <cit.>, we are the first to show R_∞ guarantees with efficient rates and without resorting to algorithmic modifications. Furthermore, the results for the truncated Gaussian are the first guarantees in R_q and R_∞ for the non-uniform, constrained case with d^2 σ^2 query complexity. While nowhere close to a comprehensive guarantee for sampling from constrained distributions π̃∝ e^-f|_ K, our results hint that this generic problem can be tackled using the same analytic framework. Result 2: Warm-start generation and (d^3) uniform sampler in R_∞ As a major application of the previous result, we show that, for any well-rounded convex body K, there is an annealing algorithm with (d^31/) complexity that achieves ε-accuracy in R_∞. Compared to , this algorithm uses as a subroutine for sampling at each stage. As a result, we call it (Algorithm <ref>). This allows us to strengthen the previous guarantees from <cit.> to R_∞-divergence, which encompasses all other divergences. Let π∝_ K for K well-rounded, i.e. _X ∼π [X^2] ≤ C^2 d while B_1(0) ⊆ K. Then, succeeds in sampling from ν with probability 1-η, where R_∞(νπ) ≤ε, requiring no more than Ø(C^2 d^3 log^Ø(1)1/ηε) expected number of membership queries. We emphasize that our rate of (d^3) matches that of <cit.> while being in a much stronger metric (R_∞). To compare, the expected rate of the proximal sampler for unconstrained distributions ν is (β C_PI(π) d^3/2) from a feasible start <cit.>, where β is the Lipschitz constant of ∇logπ. By contrast, Theorem <ref> states that, in exchange for letting β = ∞, we need to pay only an extra multiplicative factor of d^3/2C_PI^-1 when sampling from a well-rounded convex body. §.§ Techniques and challenges §.§.§ Constrained samplers Uniform sampling from a warm-start Before worrying about the problem of obtaining a warm start, it is necessary to establish the best possible oracle complexity for the sampling routine in R_∞ when given an Ø(1) warm-start. Previous works such as <cit.> relied on , as underlying samplers with their annealing schemes. These possess several disadvantages. Firstly, the method of analysis is rather complicated. As for , the best analysis (in terms of complexity) makes statements about , which is a biased variant of <cit.>. To implement with , an additional correction step for debiasing the stationary measure is required. This degrades rate estimates for to , making the rate estimates for only in as a result. 
, on the other hand, is only known to mix in {, χ^2} for uniform and exponential distributions <cit.> and in for general log-concave distributions <cit.>. By contrast, the recent work of <cit.> demonstrated that (Algorithm <ref>) could achieve ε-mixing in any R_q, requiring (qM d^2 Σ_%s/%s1) queries in expectation from an M-warm start. and the The algorithm of (or Prox-uniform) is remarkably simple. It alternately samples from two distributions, first getting y from a Gaussian centred at the current point, then z from a Gaussian centred at y but truncated to K (Lines <ref> and <ref> in Algorithm <ref>). This turns out to correspond to simulating the heat equation applied to the current measure, and then simulating a notion of time reversal of the heat equation (Lemma <ref>). These dynamics cause any R_q divergence to decay exponentially, with the speed of decay dependent on the isoperimetry of the target distribution. The only remaining challenge is to bound the wasted queries in Line <ref>. This follows a local conductance argument, where one quantifies the probability of y landing in a set that has a hard time returning to K. This step requires both that μ_0 be warm (for reasons to be explained later) and that the variance h =(d^-2) is appropriately small. Gaussian sampling with Since we will end up annealing with Gaussians, we need to demonstrate that the guarantees for are compatible with (truncated) Gaussian distributions N(0, σ^2 I_d)|_ K. This is not necessarily trivial. While the rate estimates for can be adapted for Gaussians <cit.>, those for cannot be <cit.>. To use in an annealing scheme, <cit.> resorts to exponential distributions, which have worse complexity. In this work, we generalize the algorithm for truncated Gaussians on K. This can be done by adapting the forward and backward heat flow interpretations to incorporate a potential function in the target, in analogy with works in unconstrained sampling <cit.>. By the same analogy, we call our algorithm the (constrained) . While the calculations for the Gaussian case are somewhat involved, the approach to bounding the query complexity does not require any algorithmic modifications instead. The number of iterations needed to mix in R_q for the , (q(d ∨ d^2 σ^2)), already follows as a consequence of a generic lemma shown in <cit.> (restated in Lemma <ref>). As for the query complexity, a careful bound on the local conductance following the same approach as <cit.> (Lemma <ref>) shows that (M) membership queries suffices in each iteration. This makes (qM(d ∨ d^2 σ^2)) queries in total. The complexity matches that of <cit.> for Gaussians in , and the ease of derivation underscores another strength of the / framework. We note that this is the first time that d^3 guarantees have been shown in R_q (and, as we shall see, in R_∞ as well) for truncated Gaussians, even if just for the specific subclass of those that are centered and isotropic. The techniques here suggest that, with more effort, the analysis could be conducted more generally for other distributions of the form π = e^-f|_. §.§.§ Warm starts and Rényi divergence Importance of warm starts We briefly digress to explain why warm starts are necessary in each of these contexts. The above samplers repeat a subroutine wherein a point is proposed, either from uniform distribution in a ball for , and a Gaussian for . However, naïvely using each of these points will bias the sampling algorithm. 
Thus, the subroutines must contain rejection steps which correct the distribution of the proposal. To preserve our complexity bounds, we must ensure that the number of rejections is small. Otherwise, the algorithm will waste the majority of its queries without moving. How to control the number of rejections? If is initialized at the target π∝_ K with suitable algorithmic parameters, then one can bound the expected number of rejections by (1). Instead, with exp ( R_∞) ≤ M at initialization, a straightforward calculation bounds the number of rejections by that experienced by π, multiplied by a factor of M. This explicitly reveals the importance of starting warm when bounding the number of rejections. Given this context, we denote by M→N when a sampler starting with warmness in a metric M has convergence guarantees in a possibly weaker metric N. The guarantees of the previous samplers can be stated for the truncated Gaussian as , : R_∞→ and : R_∞→ R_q. The previous discussion implies that we would require R_∞→ R_∞ to relay R_∞-guarantees across the annealing routine. Of the aforementioned works, only is close to achieving this. Nonetheless, closing this gap from R_q to R_∞ is far from trivial and requires the introduction of new analytical techniques, as we subsequently explain. The difficulties of R_∞ One would expect that convergence in R_∞ is extremely difficult to show. By comparison, even for a continuous-time process such as the Langevin dynamics, convergence in R_∞ is not known outside a few specific cases. Previous sampling guarantees in R_q usually involve a complexity at least linear in q <cit.>, which render them useless for R_∞. The techniques for analyzing q-Rényi divergence usually involve constructing a Markov semigroup whose interpolation in discrete time yields the sampler of interest. The time evolution equations for this semigroup can then be used to derive exponential convergence of R_q, with a constant depending on the isoperimetry of the stationary measure for the semigroup. This approach fails miserably for R_∞. In its standard representation, R_∞, being the supremum of the density ratio, cannot be written as the expectation of a continuous quantity, and its decay properties are difficult to establish. Instead, one must look for proof strategies that use more information about the semigroup. Any-start implies R_∞ It is well known in the Markov chain literature that, if a Markov chain mixes rapidly in from any deterministic starting point (a property known as uniform ergodicity), then the Markov chain causes the density to contract towards that of π in any arbitrarily strong norm. In particular, choosing the L^∞-norm, it is possible to translate statements in a weak norm like “the algorithm converges to within ε in distance of π within K iterations, given any starting point in the support of π” to an unconditional statement in an extremely strong norm — “the algorithm mixes in L^∞ distance within K iterations” (Theorem <ref>). We call this analytical technique boosting for its remarkable strengthening of the performance metric. This result had thus far been difficult to apply in most sampling settings. Consider for instance sampling from an unconstrained standard Gaussian, via some MCMC method like the Langevin diffusion. The can always be bounded by the -divergence, which contracts exponentially. However, because π is supported on ^n, it is always possible to pick a starting point arbitrarily far away from the Gaussian's mode. 
This causes at initialization to be arbitrarily large. The number of iterations that are needed to converge from a point-mass at x is roughly ≳logx, which is not bounded for all starting points x ∈^d. Thus, no finite number of iterations of the Markov chain will be sufficient to show uniform ergodicity across the entirety of ^d, even though the dynamics converge exponentially quickly in any R_q. In passing, we also mention another notion of boosting from to R_∞. This is done via a post-processing step that can be found in the work of <cit.>. Examining the guarantees of this approach, however, shows that the query complexity of the original algorithm unavoidably increases by a factor of (d). Thus, even if one could obtain warm-start at every step, this approach would give at minimum (d^5) query complexity[One can use the algorithm followed by the post-processing of <cit.>. The claimed complexity follows from the 's complexity of (d^3log^21/ϵ) along with ϵ≲exp(-d). See <ref> for detail.] for the entire algorithm. Our approach, in contrast, does not require any algorithmic modifications, but rather extracts a convergence guarantee that is already latent in the algorithm. Proximal sampler mixes from any start Thus, it suffices to establish convergence guarantees given any deterministic starting point within K. When attempting to do this, a major issue arises in the analysis of the classic , which uses . The mixing of is established using a conductance-oriented proof. If the is analyzed directly via s-conductance, however, then the warmness of the initial distribution shows up directly in the number of steps (rather than just the query complexity) <cit.>. This is problematic, since if we started at a bad initial point (for example, the tip of a cone), the resulting number of steps can be exponentially large in dimension d, making it impossible to invoke uniform ergodicity without degrading the rate. The other way of implementing through  <cit.>, requires a non-Markovian correction step, which rules out the boosting technique. Another candidate, , uses a weaker isoperimetric property (more precisely, a Cheeger inequality), so it depends poly-logarithmically on the warmness <cit.>. Hence, this disqualifies as a sampler due to additional overhead of poly(d) from any feasible start. For these discrete-time samplers, one could try to directly prove a modified logarithmic-Sobolev inequality (MLSI) of their corresponding Markov kernel (as opposed to for the stationary distribution π). While this approach can potentially bound the complexity overhead of these samplers from any feasible start by (d) (or worse), they require an involved analysis of the kernel as well as the target π. In contrast, can be interpreted in terms of continuous forward and backward heat flows, so its mixing can be characterized by functional inequalities of just the target π. Thus, enjoys substantially simpler analysis, with richer connections to established functional inequalities for log-concave measures. Using (Algorithm <ref> and <ref>), we can indeed achieve any-start results without overhead, thereby obtaining R_∞ guarantees via the boosting. Leveraging the log-Sobolev inequality for the target, the number of iterations needs only depend doubly-logarithmically on the warmness. From a point-mass, it can be established that the warmness (after one iteration of ) is exponential in dimension, so this allows one to pay only a logarithmic factor when establishing mixing from any point-mass within K. 
These factors combined allow to do what , could not: obtain R_∞ guarantees without any computational overhead or post-processing. To our knowledge, our paper is the first work in MCMC to accomplish this, and we hope that our methods would serve as a blueprint for obtaining similar guarantees, at least in the constrained sampling setting. §.§.§ Proximal Gaussian cooling At its core, our approach in the annealing part does not deviate from <cit.>. The core idea is that, for a family of distributions {μ_i}_i ∈{0, …, k_*}, one must balance two factors: (i) Each distribution must be Ø(1)-warm with respect to the succeeding distribution, and (ii) the total number of distributions should not be larger than (d). The choice of Gaussian distributions gives more flexibility in this respect, since one is able to update the covariance with accelerating speed. The algorithm, including the choices of covariance, is given in Algorithm <ref>. We note that with sidestepped the issue of warmness by coupling together approximate samplers. This strategy, however, reduces all the bounds down to , and cannot be applied when examining a stronger metric. With our approach, by combining all the aforementioned technical ingredients, we obtain a (d^3) algorithm in R_∞ for a well-rounded convex body. One additional benefit of our scheme is its high-level simplicity: we only need to implement the for different targets, which can be viewed as instances of a unified algorithm. Using this, we then follow the Gaussian-based annealing strategy. §.§ Related work Constrained samplers The previous approaches to constrained sampling via random walks date to <cit.>. The most well-studied algorithms are  <cit.> and  <cit.>. is a simple scheme which samples y from a ball around the current point, accepting y if y ∈ K and otherwise just remaining at the current point. Although it is possible to analyze directly <cit.>, usually refers to the the algorithm given by composing with rejection sampling. The best known guarantee in <cit.> gives a complexity of (Md^2 D^2 log1/ε) with M = exp R_∞(μ_0 π). Another standard algorithm is , which samples uniformly at random from a chord ∩ℓ, where ℓ is a random line passing through the current point. Its query complexity for uniform and exponential distributions is (d^2 ΣlogM/ε), where Σ = _π[(X- X)^⊗ 2]. See <ref> for the benefits and drawbacks of these two approaches. Both and can assume that K is well-rounded, in the sense that _π[X^2] ≤ C^2 d for a known constant C. A general convex body K can be converted to a well-rounded body via an affine transformation. Finding this transformation requires an additional algorithmic ingredient called rounding. It has been claimed in the literature <cit.> that a randomized rounding algorithm exists with (d^3) complexity. Since this adds no computational overhead, we make the same assumption in the present work. Apart from these general-purpose random walks, there also exist several samplers which exploit geometric information about K, or which use stronger oracle models. The algorithm makes use of a self-concordant barrier function ϕ to draw samples from an anisotropic Gaussian (instead of isotropic as in ,), and converges in (md) iterations for a convex body specified by m linear constraints <cit.>. Apart from this, Riemannian algorithms such as or equip K with a Riemannian metric, and then run a random walk using this geometry <cit.>. 
Likewise,  <cit.> alters the geometry so that one can apply methods from unconstrained sampling, and can obtain (d) complexity with some additional assumptions. Constrained samplers like <cit.> borrow algorithms from the unconstrained setting, and interweave them with projection steps onto K. The best known overall complexity is (d^2 D^3/ε^4) in terms of projection oracle queries. Finally, <cit.> use “soft” penalties rather than projections. All of the aforementioned techniques, however, require either additional assumptions on K or the oracle model, or are otherwise inefficient terms of complexity. Annealing strategies Annealing as a computational tool dates back to <cit.>. Its original application was to combinatorial optimization, where the landscape of solutions is highly non-convex, and it has served as an efficient strategy for diverse problems <cit.>. The history of annealing for volume computation / uniform sampling dates back to the inception of constrained sampling, from the earliest works of <cit.> through a long line of further improvements <cit.>. The original incarnation of this algorithm, which remained unchanged through the listed references, samples uniformly from a sequence of bodies { K_k}_1 ≤ k ≤ k_*, with K_k = K ∩ 2^k/d B_1(0), with k_* = (d) iterations. The distribution at each prior phase is used as a warm start for the subsequent phase. Combined with a coupling argument for the approximate sampler, the law of the sample at iteration k_* can be viewed as a warm start for the uniform distribution on . While the number of phases is not severe, the bodies become increasingly difficult to sample as k increases. For instance, the complexity with moves from (d^2) in the earliest phases to (d^3) in the latest phases, with the total complexity being (d^4) as a result. To rectify this, <cit.> proposed a similar multi-phase sampling strategy, but successively sampling with the exponential distributions μ_k ∝ e^-a_k^ x|_ K where a_k = 2d(1-d^-1/2)^k. This needs only k_* = (√(d)) phases to converge. However, these exponential distributions become increasingly ill-conditioned as k increases, and the warmness between μ_k, μ_k+1 is only in χ^2. As a result, the best known complexity for obtaining a single sample with this approach is (d^7/2). <cit.> proposes , which avoids the pitfalls of both earlier approaches. Each stage μ_k consists of a truncated Gaussian with variance σ_k^2. As opposed to the previous annealing schemes, it is possible to accelerate the schedule (in the sense that σ_k/σ_k-1 is increasing in k) while still maintaining R_∞-warmness between μ_k, μ_k+1. As a result, while the total number of phases is (d), the overall complexity remains at (d^3). Warm starts and Rényi divergence The convergence of samplers in Rényi divergences has been well studied in unconstrained sampling, beginning with results in continuous time <cit.>. These were followed by Rényi guarantees of discrete time algorithms under log-Sobolev inequalities <cit.>, Poincaré inequalities <cit.> and beyond <cit.>. These algorithms are vital for warm starts in the unconstrained field as well, particularly for Metropolis adjusted algorithms <cit.>. In fact, the study of R_q divergences and their connection to warm starts directly led to the present state-of-the-art algorithms in unconstrained sampling <cit.>. Within constrained sampling, the work of <cit.> proposed one method for obtaining guarantees in R_∞ on transformations of the unit ball. 
This approach is based on grid walk, and is unrelated to that in the current work. It has been extended by <cit.> to more general distributions. Secondly, <cit.> show a boosting scheme which mollifies the convex body by convolving it with a ball. Such a technique uses a sampler with Ø(e^-dε)-guarantees in TV as an inner loop of the routine, which adds at least (d) overhead when applied to a sampler with 1/ε dependence. The technique of proving L^∞ bounds using uniform ergodicity was well known in the study of Markov chains. Its history dates perhaps back to the works of Markov himself <cit.>, among other venerable works <cit.>. It is connected with Doeblin's minorization condition and other fundamental properties of Markov chains <cit.>, and has perhaps been most succinctly stated in <cit.>. The uniform ergodicity property, however, is difficult to establish, barring some exceptions such as exact Gibbs sampling or samplers on the lattice <cit.>. As far as we know, we are the first to apply it in the constrained sampling setting.

§.§ Organization

The remainder of this paper is organized as follows. In <ref>, we introduce key notions that will be used in the paper, and in <ref> elucidate the boosting technique, which converts any-start guarantees in TV to a contraction in R_∞. We then illustrate how it can be applied for uniform distributions on K, as a teaser for <ref>, where we develop guarantees for truncated Gaussians. Finally, we put these all together in <ref> to obtain our final, (d^3) uniform sampling guarantees before concluding.

§ PRELIMINARIES

§.§ Notation

Unless otherwise specified, ‖·‖ denotes the 2-norm on ℝ^d and the operator norm on ℝ^d×d. The notation a = O(b), a ≲ b will be used to signify that a ≤ cb for a universal constant c > 0, while a ≳ b, a = Ω(b) will denote a ≥ cb. a = Θ(b) is used when a = Ω(b) and a = Ø(b) hold simultaneously. The notation a = (b) will mean a = O(b polylog(b)), and likewise for the Θ and Ω variants. Finally, we conflate a measure with its density where there is no confusion.

§.§ Basic notions

Before proceeding, we reiterate our computational model. We are given a convex body K which satisfies B_1(0) ⊆ K ⊆ B_D(0) ⊂ ℝ^d for some D > 0. We assume access to a membership oracle, which, given a point x ∈ ℝ^d, answers Yes or No to the query “Is x ∈ K?” Where not otherwise specified, we will write π = 1_ K/vol( K) for the uniform distribution on K. We introduce the following metrics between probability measures. Let μ, ν be two probability measures on ℝ^d. Their total variation distance is given by ‖μ-ν‖_TV := sup_B ∈ℬ(ℝ^d) |μ(B) - ν(B)| , where ℬ(ℝ^d) is the collection of all Borel measurable subsets of ℝ^d. The q-Rényi divergence is defined as R_q(μ‖ν) := 1/(q-1) log∫(μ/ν)^q ν , if μ≪ν, and R_q(μ‖ν) := ∞ otherwise. In the limit q → 1, it converges to the KL divergence, KL(μ‖ν) := ∫log(μ/ν) μ , if μ≪ν, and again KL(μ‖ν) := ∞ otherwise. The χ^2-divergence is defined as χ^2(μ‖ν) := exp(R_2(μ‖ν)) - 1. Finally, the limit lim_q →∞ R_q(μ‖ν) can be written as R_∞(μ‖ν) := ess sup_νlog(μ/ν) . The R_∞ divergence is especially important for us, through the concept of warmness. For μ≪ν, we denote the warmness M of μ with respect to ν as M := ess sup_ν (μ/ν) = exp(R_∞(μ‖ν)) . Alternatively, if the above holds, then we also say that μ is an M-warm start for ν. The following data-processing inequality for Rényi divergences will also be useful. Let μ, ν be probability measures, P a Markov kernel, and q ≥ 1. Then, R_q(μ P ‖ν P) ≤ R_q(μ‖ν) .
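As a quick sanity check on these definitions and on the hierarchy TV^2 ≤ KL ≤ R_q ≤ R_∞, the following short Python snippet (ours; the two discrete distributions are arbitrary) evaluates each quantity, together with the warmness M, on a three-point space.

```python
import numpy as np

mu = np.array([0.5, 0.3, 0.2])   # e.g., the law of the current iterate
nu = np.array([0.4, 0.4, 0.2])   # e.g., the target distribution

tv = 0.5 * np.sum(np.abs(mu - nu))               # total variation distance
kl = np.sum(mu * np.log(mu / nu))                # KL(mu || nu)

def renyi(q):                                    # R_q(mu || nu) for q > 1
    return np.log(np.sum((mu / nu) ** q * nu)) / (q - 1)

r2 = renyi(2.0)                                  # equals log(1 + chi^2)
r_inf = np.log(np.max(mu / nu))                  # R_inf = log sup(mu / nu)
M = np.exp(r_inf)                                # warmness of mu w.r.t. nu

print(f"TV^2={tv**2:.4f}  KL={kl:.4f}  R_2={r2:.4f}  R_inf={r_inf:.4f}  M={M:.3f}")
# Output respects the hierarchy: 0.0100 <= 0.0253 <= 0.0488 <= 0.2231, with M = 1.250.
```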
§.§ Functional inequalities for constrained distributions The following functional inequalities, also known as isoperimetry inequalities, on the target distribution are vital for the analysis of sampling algorithms, being equivalent to the exponential mixing of a broad class of Markov chains. A probability measure ν satisfies a log-Sobolev inequality (LSI) with parameter C_LSI(ν) if for all smooth functions f: ^d →, LSI_ν(f^2) ≤ 2 C_LSI(ν) _ν[∇ f^2] , with _ν(f^2) := _ν[f^2 log f^2] - _ν[f^2] log_ν[f^2]. A weaker form of isoperimetry that is implied by the above is the Poincaré inequality. A probability measure ν satisfies a Poincaré inequality (PI) with parameter C_PI(ν) if for all smooth functions f: ^d →, PI_ν f ≤ C_PI(ν) _ν[∇ f^2] , with _ν f := _ν[(f-_ν f)^2]. The following lemma summarizes the functional inequalities which are satisfied by (truncated) Gaussians and uniform distributions on K, due to <cit.>. We refer readers to <cit.>. Let K⊂ be a convex body with diameter D. Then, if π∝_ K is the uniform distribution on K, C_LSI(π) ≲ D^2 and C_PI(π) ≲Σlog d , where Σ = _X ∼π[(X- _π X)(X- _π X)^] is the covariance of π. For a Gaussian π_σ^2 = N(0, σ^2 I_d)|_ K, we have C_PI(π_σ^2)≤(π_σ^2) ≤σ^2 . § TOTAL VARIATION TO RÉNYI INFINITY VIA LSI In this section, we first review the work of <cit.>, which allows us to bound the distance between the iterates of a Markov chain and their stationary measure in L^∞ (and so in R_∞) by the worst-case distance in TV from any start. Then we reveal a connection between (<ref>) and convergence from any start (uniform ergodicity), demonstrating a way to leverage this powerful boosting technique. As a concrete example, we provide a R_∞-guarantee of for uniform sampling over a convex body without incurring additional factors to the state-of-the-art complexity. §.§ Strong contraction of a Markov chain We recall standard notion in the theory of Markov semigroups. Let (Ω, F) be a measurable space. A Markov kernel P: Ω× F → [0, 1] satisfies * for each x ∈Ω, the map P(· | x):=P(x, ·) from F to [0,1] is a probability measure on (Ω, F). * for each E ∈ F, the map P(·, E) from Ω to [0,1] is F-measurable. Given a probability measure μ over Ω and function f:Ω→, a Markov kernel P acts on μ and f to yield objects μ P and Pf defined by μ P(·) := ∫_ΩP(·|x) μ( x) , Pf(x) := _Y∼ P(·|x)[f(Y)]=∫_Ω f(y) P( y|x) . A probability measure π is called stationary for P if π P = π. In Markov semigroup theory, it is of major importance to study the contractivity of a Markov kernel, since it quantifies the convergence rate of its corresponding Markov chain toward the stationary distribution π. This contractivity can be captured via the contraction coefficient of P defined by P_L^p→ L^p:= sup_0≠ f∈ L^p_0Pf_L^p/ f_L^p , where f_L^p := f_L^p(π) = (_π[|f|^p])^1/p and L^p_0:= {f: _π[|f|^p] <∞, _π f=0}. The most classical setting that has been studied is the L^2(π) space, whose contraction coefficient is given by γ if the spectral gap of the Markov kernel P is 1-γ. By substituting f=μ/π-1, it is also possible to quantify the convergence rate of a Markov chain in χ^2-divergence with reference to the same constant γ. This classical setting, however, is not sufficient for understanding R_∞, since we only have the one-sided bound R_2 = log(1+χ^2) ≤ R_∞ =log |μ/π| = logμ/π_L^∞. Instead, it is natural to study the contraction coefficient in L^∞(π), defined by P_L^∞→ L^∞:=sup_0≠ f∈ L^∞_0Pf_L^∞/ f_L^∞ , where f_L^∞ := f_L^∞(π) = _πf and L^∞_0:={f:_πf<∞, _π f=0}. 
One observes that L^∞→ L^∞ contractivity implies the uniform ergodicity of a Markov chain, and that the opposite inequality also holds due to <cit.>. Let P be a Markov kernel that is reversible with respect to the stationary distribution π. Then, P^n-1_π_L^∞→ L^∞≤ 2 _xP^n(·|x)-π_ , where 1_π is the operator defined by 1_π(f)=_π[f]. We can now deduce an explicit L^∞-convergence result for a reversible Markov chain as follows. Consider a Markov chain with kernel P, initial distribution μ and stationary distribution π. Then, μ P^n/π-1_L^∞≤μ/π-1_L^∞·2_xP^n(·|x)-π_ In particular, R_∞(μ P^n π) ≤μ/π-1_L^∞·2_xP^n(·|x)-π_. Setting f=μ/π-1, we have P^n-1_π_L^∞→ L^∞≥P^nf-1_π(f)_L^∞/μ/π-1_L^∞=P^nf_L^∞/μ/π-1_L^∞=μ P^n/π-1_L^∞/μ/π-1_L^∞ , where the last equality follows from P(μ/π) = (μ P)/π due to the reversibility of P. Therefore, μ P^n/π-1_L^∞≤μ/π-1_L^∞·2_xP^n(·|x)-π_ . The Rényi-infinity bound immediately follows from log(1+x)≤ x. §.§ LSI to uniform ergodicity without overhead We now need to control the -distance uniformly over any initial point x ∈Ω. That is, one should find the iteration number n of the Markov chain such that δ_xP^n - π_≲ε for almost every x∈Ω, where δ_x denotes the Dirac measure at x. One expects that it is impossible in general to bound this quantity for arbitrary Markov chains and stationary measures, but we can get around this in our current setting, which only considers probability measures with compact support. Mixing rates of many convex bodies samplers have been studied when π satisfies a Cheeger isoperimetric inequality (which is equivalent to a Poincaré inequality for log-concave distributions). For comparison, standard choices of Markov kernel in the unconstrained setting (such as the Langevin or underdamped Langevin dynamics) relate Poincaré inequalities for π to the convergence of the sampler in χ^2, and have theoretical guarantees typically given by χ^2(δ_x P^n π) ≲exp-n/(π) χ^2(δ_x P^1 π) . As 2 ^2≤χ^2, one can achieve - in (π) log(χ^2(δ_x P^1 π) /^2) ≲(π) log((δ_x P^1)/π_L_∞ /) iterations. However, the initial χ^2 is typically exp(Ω(d)), so under (<ref>) this approach ends up incurring an additional overhead of Ω(d) to the mixing rate. While it would already be impressive to boost a mixing rate from the weakest metric () to the strongest metric (R_∞) under (<ref>) at the cost of additional (d), one can achieve this with only additional (d) factors under (<ref>). Just as a Poincaré inequality for π is normally sufficient to imply the exponential convergence of a corresponding Markov process to π in χ^2, the log-Sobolev inequality is equivalent to exponential convergence of many process in entropy (or in ). Hence, under (<ref>), theoretical guarantees of Markov chains are typically of the form (δ_x P^n π) ≲exp-n/(π) (δ_x P^1 π) . Since we know that 2 ^2 ≤ (CKP inequality), - can be achieved after (π) log((δ_x P^1 π)/) ≲(π) log(1/log(δ_x P^1)/π_L_∞) iterations. As noted earlier, this initial distance is at worst exp(d^Ø(1)), which only results in additional (d) factors after evaluating the double logarithm. Therefore, one can boost from to R_∞ with only logarithmic overhead through (<ref>). §.§.§ Rényi-infinity guarantees for uniform sampling under warm start We demonstrate the effectiveness of this boosting technique using (<ref>) for the uniform distribution. Thereby we achieve a R_∞-guarantee of uniform sampling without compromising the well-known mixing rate (d^2D^2log1/) of and (given in terms of and χ^2 respectively). 
To this end, it is useful to work with a sampler whose mixing is well understood under several functional inequalities such as (<ref>) and (<ref>). Recently, <cit.> studies [In the original work, it is called the , inspired by the geometric behavior of the sampler. We call it instead, since this geometric behaviour is not as clear when working with arbitrary target distributions.] for uniform distributions under those functional inequalities, via calculations following the heat flow and its π-adjoint. In particular, <cit.> already establishes R_q-guarantees (with q<∞) of the with respect to the uniform distribution π, given any starting measure. For any ∈(0,1) and convex body K⊂ with diameter D, let P be the Markov kernel of the with variance h. For given x∈ K, let μ_x^n:=δ_xP^n be the law of the n-th iterate of the , and π be the uniform distribution over K. Then, R_q(μ_x^nπ)≤ for n=qh^-1(π)logd+h^-1D^2/. Combining this with the boosting technique, we obtain a R_∞-guarantee for uniform sampling: For a convex body K⊂ with diameter D, assume that an initial distribution μ is M-warm with respect to the uniform distribution π over . For any ∈(0,1), the with variance h achieves R_∞(μ P^nπ)≤ (or 1-≤μ P^n/π≤1+ on K) after n=h^-1D^2logM(d+h^-1D^2)/ iterations. Using 2·- π_^2≤(·π)=lim_q↓1 R_q(·π) and (π)=Ø(D^2), one obtains that after n≳ h^-1D^2logM(d+h^-1D^2)/ iterations, sup_x∈ Kμ_x^n-π_≤/M . By Theorem <ref> with μ/π-1_L^∞≤ M, we have μ P^n/π-1_L^∞≤ and R_∞(μ P^nπ)≤. We note that the above guarantee bounds the iteration number for mixing in R_∞, not the query complexity, through any-start guarantees in . The query complexity needed to attain ε -distance from any start in our implementation can potentially be exponential in d. This, however, will not be relevant to our results since the kernel P only captures the accepted proposals. This way, we can view the result above as merely extracting a latent property of the algorithm, which is not dependent on details of its implementation. For Metropolized algorithms such as however, the kernel P will need to take the number of rejections into account, and the dependence on d for any-start guarantees can potentially scale poorly as a result. Lastly, combined with the query complexity of implementing each step of the for uniform distributions, we obtain a guarantee on the query complexity for achieving R_∞-mixing for uniform sampling. For any η,ε∈ (0,1), n∈ℕ defined below, and convex body given by a well-defined membership oracle, the (Algorithm <ref>) with h = (2d^2log9nM/η)^-1, N = (nM/η), and initial distribution μ_0 M-warm with respect to π the uniform distribution over achieves R_∞(μ_n π)≤ after n = (d^2 D^2 log^2 M/η) iterations, where μ_n is the law of the n-th iterate. With probability 1-η, the algorithm iterates this many times without failure, using (M d^2D^2log^61/η) expected number of membership queries in total. By <cit.>, if one takes variance h = (2d^2log9nM/η)^-1 and threshold N = (nM/η), then for each iteration the expected number of membership queries is (Mlog1/η), and the failure probability is at most η/n. By Lemma <ref>, the should iterate n=(d^2D^2logM/η) times to output a sample whose law is -close to π in R_∞. Therefore, throughout the n outer iterations, the failure probability of the is at most η by a union bound, and the total expected number of queries is (Md^2D^2logM/η). 
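For concreteness, here is a minimal Python sketch of the two-step sampler analyzed in this section for a uniform target on K: a forward Gaussian step, then a backward step that draws from a Gaussian centered at y restricted to K by rejection, with a threshold N capping the membership queries per iteration. This is our own illustrative rendering, not the paper's reference implementation; the function names are ours, `membership` is any oracle as sketched earlier, and the parameter values in the example follow the stated choices of h and N only up to constants and logarithmic factors.

```python
import numpy as np

def proximal_step_uniform(x, h, N, membership, rng):
    """One iteration for the target pi = Unif(K): forward step y ~ N(x, h I),
    then backward step x' ~ N(y, h I) restricted to K, implemented by rejection."""
    d = x.shape[0]
    y = x + np.sqrt(h) * rng.standard_normal(d)          # forward Gaussian step
    for _ in range(N):                                   # at most N membership queries
        z = y + np.sqrt(h) * rng.standard_normal(d)      # proposal from N(y, h I)
        if membership(z):                                # accept the first point in K
            return z
    raise RuntimeError("threshold N exceeded: declare failure")

def sample_uniform(x0, h, n_iters, N, membership, rng):
    x = x0
    for _ in range(n_iters):
        x = proximal_step_uniform(x, h, N, membership, rng)
    return x

# Illustrative run on K = B_1(0); h is of order 1/d^2 (up to logs), as in the text.
rng = np.random.default_rng(0)
d = 10
membership = lambda x: np.linalg.norm(x) <= 1.0
x = sample_uniform(np.zeros(d), h=1.0 / d**2, n_iters=200, N=1000,
                   membership=membership, rng=rng)
print(np.linalg.norm(x) <= 1.0)   # True: every returned iterate lies in K
```

The rejection loop in the backward step is exactly where warmness enters: by the argument above, the expected number of proposals per iteration, and hence the number of membership queries, is controlled by the warmness M of the initial distribution.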
§ UNIFORM ERGODICITY OF PROXIMAL SAMPLER FOR TRUNCATED GAUSSIANS In the algorithm <cit.>, a truncated Gaussian π_σ^2:= N(0, σ^2 I_d)|_ for a convex body serves as an annealing distribution, where the variance σ^2 is gradually increased across phases. In particular, uses a Metropolized to sample such a truncated Gaussian. Its convergence rate is quantified in through a Cheeger isoperimetry of the truncated Gaussian (namely, a Poincaré inequality). In order for us to properly carry the warmness across phases, we must need a R_∞-guarantee for sampling truncated Gaussians. Just as we established R_∞ guarantees for uniform sampling via Theorem <ref> and (<ref>) for the uniform distribution, it is natural to propose a sampler for a truncated Gaussian whose convergence rate can be concisely related to (π_σ^2). Thus, in <ref> we analyze the for π_σ^2, providing its convergence rate in terms of (π_σ^2) through a heat flow perspective just as in <cit.>. Then in <ref> we establish the uniform ergodicity of the for π_σ^2, deducing the R_∞ guarantee in Theorem <ref>. §.§ Convergence analysis Compared with the for uniform distributions, the for truncated Gaussians (Algorithm <ref>) requires a different backward step (i.e., the implementation of RGO) while using the same forward step (i.e., Gaussian step). One iteration of the for N(0,σ^2I_d)|_ K consists of two steps: * (Forward step) y∼π^Y|X(·|x)= N(x,hI_d). * (Backward step) x∼π^X|Y(·|y)∝exp(-1/2σ^2 x^2-1/2hy-x^2)·_ K(x)∝ N1/1+hσ^-2y,h/1+hσ^-2I_d|_ . To implement the backward step, we use rejection sampling with the proposal N1/1+hσ^-2y,h/1+hσ^-2I_d, accepting if the proposal lies inside of . Then the expected number of trials for the first success is 1/ℓ(y):=2π(h^-1+σ^-2)^-1^d/2/∫_ Kexp-σ^-2+h^-1/2x-h^-1(σ^-2+h^-1)^-1y^2 x . We can write down the density of π^Y as follows: π^Y(y) =∫_ Kexp-1/2σ^2 x^2-1/2hx-y^2 x/(2π h)^d/2∫_ Kexp-1/2σ^2 x^2 x =∫_ Kexp-1/2(σ^-2+h^-1)x-h^-1(σ^-2+h^-1)^-1y^2 x/(2π h)^d/2∫_ Kexp-1/2σ^2 x^2 x exp-1/2(h+σ^2) y^2 =(1+hσ^-2)^-d/2 ℓ(y)/∫_ Kexp-1/2σ^2 x^2 x exp-1/2(h+σ^2) y^2 . §.§.§ Mixing analysis The two steps within have continuous interpolation via the forward and backward heat flow, so their mixing guarantees can be naturally associated with functional inequalities (e.g. (<ref>) and (<ref>)) for a target distribution. Such a mixing result for the has already been established for unconstrained distributions π^X∝exp(-V) <cit.>, for which the can be generalized as follows: for π(x,y)∝exp-V(x)-1/2hx-y^2, repeat (i) y_i+1∼π^Y|X=x_i= N(x_i,hI_d) and (ii) x_i+1∼π^X|Y=y_i+1. Let μ_k^X be the law of the k-th output of with initial distribution μ_0^X. Then, for any q≥1, R_q(μ_k^Xπ^X)≤ R_q(μ_0^Xπ^X)/1+h/(π^X)^2k/q . This has been further extended to constrained distributions, including the truncated Gaussian, under only mild additional assumptions. Let ν be a measure, absolutely continuous with respect to the uniform distribution over , and μ_0 an arbitrary measure. The forward and backward heat flow solutions given by ∂_t μ_t = 1/2Δμ_t , ∂_t μ_t^← = -(μ_t^←∇log (ν P_h-t))+ 1/2Δμ_t^← with μ_0^← = μ_h , admit solutions on (0,h], and the weak limit lim_t → hμ_t^← = μ_h^← exists for any initial measure μ_0 with bounded support. Moreover, for any q-Rényi divergence with q ∈ (1, ∞), R_q(μ_h^←ν) ≤lim_t ↓ 0 R_q(μ_h-t^←ν_t) . We will set ν= N(0,σ^2 I_d)|_ in above. It turns out that the solutions to the two equations above give precisely the laws of μ_k^Y, μ_k+1^X when starting at μ_k^X. 
Secondly, it is well-known in <cit.> that a truncated Gaussian ν has (ν) ≤σ^2, as truncation to a convex set only improves the log-Sobolev constant. Then we can derive a contraction result of the for ν in R_q for any q ≥ 1, emulating the proof for uniform distributions in <cit.>. First, convolve the truncated Gaussian with Gaussian with very small variance ϵ, which generates a smooth and unconstrained distribution. Then invoke the contraction result in Proposition <ref>, and use lower semicontinuity of the R_q divergence to conclude when sending ε→ 0. Let μ^X_k be the law of the k-th output of the with initial distribution μ_0^X and target π^X = N(0,σ^2 I_d)|_. Then, for any q ≥ 1, R_q(μ_k^Xπ^X) ≤ R_q(μ_0^X π^X)/(1+hσ^-2)^2k/q . For small ϵ>0, as μ_ϵ = (μ_0^X)_ϵ = μ_0^X * N(0,ϵ I_d) and π_ϵ = π^X * N(0, ϵ I_d) are C^∞-smooth, we can invoke the decay result with step size h-ϵ in Proposition <ref>. Thus, for contraction constants C_ϵ=(1+h-ϵ/σ^2+ϵ)^-2/q (since (ν * N(0, ϵ I_d))≤(ν)+ϵ in general), it follows that R_q(μ_h-ϵ^←π_ϵ) ≤ C_ϵ· R_q(μ_ϵπ_ϵ)≤ C_ϵ· R_q(μ_0 π_0) , where we used the data-processing inequality (Lemma <ref>) for the last inequality. By the lower semicontinuity of R_q as noted earlier, sending ϵ→ 0 leads to R_q(μ_1^X π^X) = R_q(μ_h^←π_0) ≤ C· R_q(μ_0 π_0) = C· R_q(μ_0^X π^X) . Repeating this argument k times completes the proof. §.§.§ Per-step guarantees We can find an effective domain under π^Y, from which escapes with only negligible probability. More precisely, we denote the δ-blowup of K by _δ = {x∈: d(x,)≤δ}. Let R=(1+hσ^-2) _δ with δ=t/d and h=σ^2c^2(d^-1∧(d^2σ^2-c^2)^-1), where c is any constant smaller than d^2σ^2 and t≥2c(c+1). Then, π^Y(R^c)≤expc^2-t^2/8c^2 . Using the density formula for π^Y in (<ref>), ∫_exp-1/2σ^2 x^2 x·π^Y(R^c) =∫_[(1+hσ^-2) _δ]^c∫_ Kexp-1/2(σ^-2+h^-1)x-(1+hσ^-2)^-1y^2 x/(2π h)^d/2 exp-1/2(h+σ^2) y^2 y (i)=(1+hσ^-2)^d/(2π h)^d/2∫_ K_δ^c∫_ Kexp-σ^-2+h^-1/2x-z^2exp-1/2(h+σ^2)(1+hσ^-2)^2 z^2 x z =(1+hσ^-2)^d/(2π h)^d/2∫_ K_δ^c∫_ Kexp-σ^-2+h^-1/2x-z^2exp-1+hσ^-2/2σ^2 z^2 x z (ii)≤(1+hσ^-2)^d/(2π h)^d/2∫_ K_δ^c∫_ H(z)exp-σ^-2+h^-1/2x-z^2exp-1+hσ^-2/2σ^2 z^2 x z =(1+hσ^-2)^d/2∫_ K_δ^c∫_d(z, K)^∞√(σ^-2+h^-1/2π)exp-(σ^-2+h^-1)y^2/2exp-1+hσ^-2/2σ^2 z^2 y z , where (i) follows from the change of variables z=(1+hσ^-2)^-1y, and in (ii) H(z) denotes the supporting half-space at proj_ K(z) containing K for given z∈ K_δ, given as H(z) = {x ∈^d: proj_ K(z)-z, x- proj_ K(z)≥ 0 } , when z ∉ K. We define the one dimensional Gaussian integral T(s)=_z∼ N0,(σ^-2+h^-1)^-1(z≥ s)=√(σ^-2+h^-1/2π)∫_s^∞exp-(σ^-2+h^-1) y^2/2 y . By the co-area formula and integration by parts, for H^d-1 the (d-1)-dimensional Hausdorff measure ∫_ K_δ^c∫_d(z, K)^∞√(σ^-2+h^-1/2π)exp-(σ^-2+h^-1)y^2/2exp-1+hσ^-2/2σ^2 z^2 y z =∫_δ^∞ T(s)∫_ K_sexp-1+hσ^-2/2σ^2 z^2 H^d-1( z) s = T(s)∫_0^s∫_ K_rexp-1+hσ^-2/2σ^2 z^2 H^d-1( z) r_ I_s=δ^∞ +∫_δ^∞√(σ^-2+h^-1/2π)exp-(σ^-2+h^-1)s^2/2∫_0^s∫_ K_rexp-1+hσ^-2/2σ^2 z^2 H^d-1( z) r s . The double integral above can be bounded as follows: ∫_0^s∫_ K_cexp-1+hσ^-2/2σ^2 z^2 H^d-1( z) c =∫_ K_s\ Kexp-1+hσ^-2/2σ^2 z^2 z (i)≤∫_(1+s) exp-1+hσ^-2/2σ^2 z^2 z =(1+s)^d∫_ Kexp-(1+s)^2(1+hσ^-2)/2σ^2 z^2 z ≤(1+s)^d∫_ Kexp-1/2σ^2 z^2 z . where we used K_s⊂(1+s) K in (i), which follows from B_1(0)⊂ K. Hence, the double integral is bounded by (1+s)^d( K). Recall a standard tail bound for a Gaussian distribution: T(s)≤exp-s(σ^-2+h^-1)^1/2-1^2 . Combining these two bounds, it holds that I vanishes at s=∞. 
Upper bounding I by 0, we have just derived that ∫_ exp-1/2σ^2 x^2 x·π^Y(R^c) ≤(1+hσ^-2)^d/2∫_δ^∞(1+s)^d√(σ^-2+h^-1/2π)exp-(σ^-2+h^-1)s^2/2 s ·∫_exp-1/2σ^2 z^2 z . Dividing both sides by the factor ∫_exp(-1/2σ^2 x^2) x, we obtain the following bound, π^Y(R^c) ≤(1+hσ^-2)^d/2∫_δ^∞(1+s)^d√(σ^-2+h^-1/2π)exp-(σ^-2+h^-1)s^2/2 s ≤(1+hσ^-2)^d/2∫_δ^∞exp(sd) √(σ^-2+h^-1/2π)exp-(σ^-2+h^-1)s^2/2 s (i)≤1/2(1+hσ^-2)^d/2exph'd^2/2exp-δ/√(h')-d√(h')-1^2 , where in (i) we again use the tail bound for a Gaussian distribution. Above, we introduced a new variable h':=(σ^-2+h^-1)^-1=h/1+hσ^-2. Taking δ=t/d and h'=c^2/d^2 subject to t≥2c(c+1), we can make exph'd^2/2exp-δ/√(h')-d√(h')-1^2≤expc^2/2-t^2/8c^2 . Since h≤σ^2c^2/d, we also have (1+hσ^-2)^d/2≤exp(c^2/2), and the claim follows. We can now provide the per-step complexity of the under a warm start. Let be a convex body in , and μ an initial distribution M-warm with respect to π^X. For any given n∈ℕ and η∈(0,1), set Z=9nM/η(≥9), h=σ^2loglog Z/log Z(d^-1∧(d^2σ^2-loglog Z/2log Z)^-1) and N=Z(log Z)^4=(nM/η). Then, the failure probability of one iteration is at most η/n. Moreover, the expected number of membership queries needed per iteration is ØM(lognM/η)^4. For μ_h:=μ* N(0,hI_d), the failure probability is _μ_h[(1-ℓ)^N]. Since μ/π^X≤ M implies μ_h/(π^X)_h=μ_h/π^Y≤ M, it follows that _μ_h[(1-ℓ)^N]≤ M _π^Y[(1-ℓ)^N]. We now bound the expectation. For the effective domain R=(1+hσ^-2)_δ, ∫_(1-ℓ)^N π^Y_ A =∫_R^c A+∫_R∩[ℓ≥ N^-1log(3nM/η)] A+∫_R∩[ℓ<N^-1log(3nM/η)] A (i)≤π^Y(R^c)+∫_[ℓ≥ N^-1log(3nM/η)]exp(-ℓ N) π^Y +∫_R∩[ℓ<N^-1log(3nM/η)](1+hσ^-2)^-d/2ℓ(y)/∫_ Kexp-1/2σ^2 x^2 xexp-1/2(h+σ^2) y^2 y (ii)≤ e^c_1^2-t^2/8c_1^2+η/3nM+log(3nM/η)/N ∫_R(1+hσ^-2)^-d/2exp-1/2(h+σ^2) y^2 y/∫_ Kexp-1/2σ^2 x^2 x (iii)≤ e^c_1^2exp(-t^2/8c_1^2)+η/3nM+log(3nM/η)/N·(1+hσ^-2)^d/2(1+δ)^d (iv)≤ e^c_1^2exp(-t^2/8c_1^2)+η/3nM+exp(t+c^2)/N log3nM/η , where in (i) we bounded the (1-ℓ)^N ≤ 1 in the first term, (1-ℓ)^N≤exp(-ℓ N) in the second term, and again (1-ℓ)^N ≤ 1 in the third term, and used the density formula (<ref>) of π^Y in third term. In (ii), the first bound follows from Lemma <ref>, while the second and third uses the condition on ℓ over each domain. (iii) follows from the change of variables. Lastly, (iv) follows from the setup δ=t/d and h=σ^2c^2(d^-1∧(d^2σ^2-c^2)^-1)≤σ^2 c^2d^-1 in Lemma <ref>. With c^2=loglog Z/2log Z, t=√(8)loglog Z, and N=Z(log Z)^4, the last line is bounded by η/nM. Therefore, _μ_h[(1-ℓ)^N]≤ M _π^Y[(1-ℓ)^N]≤η/n . We now bound the expected number of trials per iteration. Let S be the minimum of the threshold N and the number of trials until the first success. Then the expected number of trials per step is bounded by M_π^Y[S] due to μ_h/π^Y≤ M. Thus, ∫_1/ℓ∧ N π^Y ≤∫_R1/ℓ π^Y+Nπ^Y(R^c)=∫_R(1+hσ^-2)^-d/2exp-1/2(h+σ^2) y^2 y/∫_ Kexp-1/2σ^2 x^2 x+Nπ^Y(R^c) ≤exp(t+c_1^2)+Nexp(-Ω(t^2))≤ e(log Z)^3+3(log Z)^4=ØlognM/η^4 . Therefore, the expected number of trials per step is ØM(lognM/η)^4, and the claim follows since each trial uses one query to the membership oracle of K. §.§.§ Query complexity under a warm start We now put together the mixing and per-step guarantees established above. For any η,ε∈ (0,1), q≥ 1, n∈ℕ defined below and convex body given by a well-defined membership oracle, the (Algorithm <ref>) with h = σ^2(1/d∧1/d^2σ^2-1)(log9nM/η)^-1, N = (nM/η), and initial distribution μ_0^X which is M-warm with respect to π^X = N(0, σ^2 I_d)|_ achieves R_q(μ^X_n π^X)≤ after n = (q(d∨ d^2σ^2) logM/η) iterations, where μ_n^X is the law of the n-the iterate. 
With probability 1-η, the algorithm iterates n times without failure, using qM(d∨ d^2σ^2)(log1/η)^5 expected number of membership queries in total. In particular, the query complexity is (qMd^2σ^2(log1/η)^5) when d^-1≲σ^2. By Lemma <ref>, the should iterate Øqσ^2h^-1loglog M/ times to achieve -distance in R_q. To ensure that the query complexity is bounded, we choose h=σ^2loglog Z/log Z1/d∧1/d^2σ^2-1 and N = 9nM/ηlog9nM/η^4 . By Lemma <ref>, we need the following total number of membership queries in expectation: qM(d∨ d^2σ^2)log1/η^5 . Hence, if d^-1≲σ^2, then the query complexity is simply (qMd^2σ^2log^51/η). §.§ Rényi-infinity guarantees for truncated Gaussians under warm start To use the boosting technique, we need the uniform ergodicity of the , bounding δ_x P^n- π^X_ uniformly over , where P is the Markov kernel of the for a truncated Gaussian. To this end, for any x∈, we bound R_∞(δ_x P^1π^X) uniformly by exp(poly(D,d)), so that loglog R_∞(δ_x P^1π^X) does not add more than a polylogarithmic factors to the complexity. In other words, we establish a Gaussian analogue of Lemma <ref>. For any given ε∈(0,1), the for a truncated Gaussian π^X with variance h and any feasible start x_0∈ K achieves R_∞(μ_n^Xπ^X)≤ for n=(h^-1σ^2logd+h^-1D^2/) iterations. We bound the warmness of μ_1^X towards π^X when μ_0^X=δ_x_0. One can readily check that μ_1^X(x)=_ K(x)·∫exp-1+hσ^-2/2hx-1/1+hσ^-2y^2exp-1/2hy-x_0^2/(2π h)^d/2∫_ Kexp-1+hσ^-2/2hz-1/1+hσ^-2y^2 z y , and we should compare this with π^X(x) = exp-1/2σ^2 x^2·_ K(x)/∫_ Kexp-1/2σ^2 z^2 z . For D=diam( K), exp-1+hσ^-2/2hz-1/1+hσ^-2y^2 =exp-1/2h z^2-1/2σ^2 z^2-1/2h(1+hσ^-2) y^2+z^y/h ≥exp-D^2/2h·exp-1/2σ^2 z^2-1/2h(1+hσ^-2) y^2+z^y/h . As |z^y|≤(2 z^2+ y^2) due to Young's inequality, exp-D^2/2h·exp-1/2σ^2 z^2-1/2h(1+hσ^-2) y^2+z^y/h ≥ exp-D^2/2h·exp-1/2σ^2 z^2-1/2h(1+hσ^-2) y^2- z^2/h- y^2/4h ≥ exp-3D^2/2h·exp-1/2σ^2 z^2-1/2h(1+hσ^-2) y^2- y^2/4h . Hence, ∫exp-1+hσ^-2/2hx-1/1+hσ^-2y^2exp-1/2hy-x_0^2/(2π h)^d/2∫_ Kexp-1+hσ^-2/2hz-1/1+hσ^-2y^2 z y ≤ exp3D^2/2h/(2π h)^d/2∫exp-1+hσ^-2/2hx-1/1+hσ^-2y^2exp-1/2hy-x_0^2exp1/2h(1+hσ^-2) y^2+ y^2/4h/∫_ Kexp-1/2σ^2 z^2 z y_ (#) . The numerator of the integrand can be bounded as follows: exp-1+hσ^-2/2hx-1/1+hσ^-2y^2exp-1/2hy-x_0^2exp1/2h(1+hσ^-2) y^2+ y^2/4h = exp-1/4hy-2(x_0+x)^2-1/2hx_0^2+ x^2+h/σ^2 x^2-2x_0+x^2 ≤ expD^2/hexp-1/4hy-2(x_0+x)^2-1/2σ^2x^2 . Putting this bound back to the integral above, (#) ≤exp5D^2/2h/(2π h)^d/2∫exp-1/4hy-2(x_0+x)^2 y/∫_ Kexp-1/2σ^2 z^2 z·exp-1/2σ^2 x^2 = 2^d/2exp5D^2/hexp-1/2σ^2 x^2/∫_ Kexp-1/2σ^2 z^2 z≤ 2^d/2exp5D^2/h . Therefore, the ratio can be bounded by ∫_ Kexp-1/2σ^2 z^2 z/exp-1/2σ^2 x^2∫exp-1+hσ^-2/2hx-1/1+hσ^-2y^2exp-1/2hy-x_0^2/(2π h)^d/2∫_ Kexp-1+hσ^-2/2hz-1/1+hσ^-2y^2 z y≤2^d/2exp5D^2/h , so M=μ_1^X/π^X≤2^d/2exp(5h^-1D^2), and R_q(μ_1^Xπ^X)≲q/q-1log M≤q/q-1d+h^-1D^2. By Lemma <ref>, one can achieve R_q(μ_n^Xπ^X) ≤ for n = (qh^-1σ^2 logd+h^-1D^2/). Using 2·_^2≤=lim_q↓1 R_q and (π^X)≤σ^2, one can achieve that after n≳ h^-1σ^2logM(d+h^-1D^2)/ iterations, sup_x∈ Kμ^X_n-π^X_≤/M . By Theorem <ref> with μ^X_1/π^X-1_L_∞≤ M, we have μ^X_n/π^X-1_L_∞≤ and R_∞(μ^X_n π)≤. Using the uniform ergodicity above, we can now obtain a guarantee in R_∞ of the for a truncated Gaussian over , boosting the metric in Proposition <ref>. 
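Putting the per-step implementation and the mixing bound of this section together, the following Python sketch (ours; parameter choices are illustrative and match the stated orders only up to constants and logarithmic factors) shows one iteration of the sampler for the target N(0, σ^2 I_d)|_K: the forward step y ∼ N(x, hI_d), then the backward step drawn from N(y/(1+hσ^{-2}), h/(1+hσ^{-2}) I_d) restricted to K by rejection, with the threshold N capping the membership queries.

```python
import numpy as np

def proximal_step_gaussian(x, h, sigma2, N, membership, rng):
    """One iteration for the target N(0, sigma^2 I)|_K.
    Forward step: y ~ N(x, h I).
    Backward step: x' ~ N(y / (1 + h/sigma^2), (h / (1 + h/sigma^2)) I) restricted
    to K, implemented by rejection against the untruncated Gaussian proposal."""
    d = x.shape[0]
    y = x + np.sqrt(h) * rng.standard_normal(d)
    shrink = 1.0 + h / sigma2
    mean, var = y / shrink, h / shrink
    for _ in range(N):                                   # at most N membership queries
        z = mean + np.sqrt(var) * rng.standard_normal(d)
        if membership(z):
            return z
    raise RuntimeError("threshold N exceeded: declare failure")

# Illustrative run: K = B_1(0), sigma^2 = 1/d, and h of order sigma^2/d as above.
rng = np.random.default_rng(1)
d = 10
sigma2 = 1.0 / d
membership = lambda x: np.linalg.norm(x) <= 1.0
x = np.zeros(d)
for _ in range(100):
    x = proximal_step_gaussian(x, h=sigma2 / d, sigma2=sigma2, N=1000,
                               membership=membership, rng=rng)
print(np.linalg.norm(x) <= 1.0)   # True: the chain never leaves K
```

Swapping this step into the outer loop of the uniform-target sketch from the previous section yields the sampler used as a subroutine in the annealing scheme of the next section.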
For any η,ε∈ (0,1), n∈ℕ defined below, and convex body given by a well-defined membership oracle, the (Algorithm <ref>) with h = σ^21/d∧1/d^2σ^2-1log9nM/η^-1, N = (nM/η), and initial distribution μ_0^X that is M-warm with respect to π^X the truncated Gaussian N(0, σ^2 I_d)|_ achieves R_∞(μ^X_n π^X)≤ after n = ((d∨ d^2σ^2) (logMD/η)^2) iterations, where μ_n^X is the law of the n-the iterate. With probability 1-η, the algorithm iterates n times without failure, using M(d∨ d^2σ^2)(logD/η)^6 expected number of membership queries in total. In particular, the query complexity is (Md^2σ^2(logD/η)^6) when d^-1≲σ^2. By Lemma <ref>, the should iterate n=((d∨ d^2σ^2)(logMD/η)^2) times to output a sample whose law is -close to π^X in R_∞. As argued in Proposition <ref>, we can conclude that throughout n outer iterations, the failure probability of the is at most η by the union bound, and the total expected number of queries is (M(d∨ d^2σ^2)(logD/η)^6). It can be checked that, conditioning on the event that the algorithm has not failed, the distribution of the iterates remains the correct distribution μ_n^X. Thus, in practical terms, the 1-η probability of failure will not be a significant obstacle, since one can just use the samples of the successful trials without compromising any of the guarantees. § SUCCESSIVE PROXIMAL SCHEME: RÉNYI INFINITY GUARANTEE FOR UNIFORM SAMPLING We put together the ingredients prepared in previous sections, namely the for uniform distributions (Theorem <ref>) and truncated Gaussian (Theorem <ref>), along with the annealing scheme introduced in <cit.>, and obtain (Algorithm <ref>). In <ref>, we outline , together with our main result showing that it can sample a point approximately uniformly from a well-rounded convex body with query complexity (d^3(1/)). In <ref>, we then provide a proof for the main claim. §.§ Rényi infinity guarantee with cubic complexity As mentioned earlier in <ref>, using the rounding algorithm in <cit.>, we can assume that a given convex body ⊂, presented by a well-defined membership oracle, is well-rounded. This means that satisfies _X∼[ X^2]≤ C^2d with a known constant C>0, where X∼ indicates that X is drawn from the uniform distribution over . We state the main result of this paper below. Assume that a well-rounded convex body with _ K[ X^2]≤ C^2d and 0∈ is presented by a well-defined membership oracle. Let π be the uniform distribution over K. For given η, ∈ (0,1), is a randomized algorithm, which succeeds with probability 1-η in outputting a sample X∼ν such that R_∞(νπ)≤. Conditioned on its success, uses (C^2d^3log^8 1/η) membership queries in expectation. Preliminaries. We first collect a series of observations that simplify our setup and arguments. For X̅:=_ K[X], by Jensen's inequality, X̅≤_ K[ X] ≤√(_ K[ X^2])≤ C√(d) . Also, for the covariance matrix Σ_ of the uniform distribution over , (Σ_) = _[X-X̅^2] ≤_[X^2] ≤ C^2d . Lastly, it is known from <cit.> that _ KX-X̅≥ t·(Σ_)^1/2≤exp(-t+1). Hence, _ KX-X̅≥ t· C√(d)≤exp(-t+1) . Therefore, we can actually work with a `truncated' convex body instead of the full convex body : There exists a constant L=Clog3e/ such that the volume of := K∩ B_L√(d)(0) is at least (1-/3)( K). Sketch of . Let μ_i be a truncated Gaussian N(0,σ_i^2I_d)|_ K, and _i be an approximate measure close to μ_i produced by the algorithm. Then, consists of four phases: * Phase I (σ^2=1/d) * Initial distribution: Uniform measure over B_1(0) denoted by μ_0=_0. * Target distribution: N(0,σ^2I_d)|_=μ_1. 
* Phase II (1/d≤σ^2≤1) * Run with initial dist. _i (not μ_i) and target dist. μ_i+1. * Update σ_i+1^2=σ_i^21+1/d. * Phase III (1≤σ^2≤ L^2d) * Run with initial dist. _i (not μ_i) and target dist. μ_i+1. * Update σ_i+1^2=σ_i^21+σ_i^2/L^2d. * Phase IV (σ^2=L^2d) * Run with initial distribution N(0,σ^2I)|_ and target distribution π_K̅, the uniform distribution over . Phases I-III can be viewed as “preprocessing steps” whose purpose is to generate a warm start for Phase IV. We also note that, while R_∞(νπ) ≤ε, we do not have the (slightly stronger) property that, for ε < 1, _π|ν/π - 1| ≲ε . This is because the truncation of π=π_ to π̅=π_ causes there to be A= \ where π̅( A) = 0 < π( A). Since ν≪π̅, we cannot establish a lower bound on ν/π - 1 better than 0. On the other hand, Corollary <ref> shows that we can bound ν/π - 1_L^p(π)^p ≤ε . Under the same assumptions as Theorem <ref>, succeeds with probability 1-η in outputting a sample X ∼ν such that ν/π - 1_L^p(π)^p ≤ε, so long as ε < 1, p ≥ 1. Conditioned on its success, uses (C^2d^3log^8 1/η) membership queries in expectation. As an intermediate step in the proof of Theorem <ref>, we obtain that _π̅|ν/π̅ -1|≤ε/3, so that, replacing with , _π|ν/π -1|^p ≤(\)/() + ()/() |ν/π̅ ()/()-1|^p ≤ε/3 + (1-ε/3) (2ε/3(1-ε/3))^p ≤ε , so long as ε < 1 and p ≥ 1. §.§ Proof details The following subsections state a series of lemmas, each one of which proves a guarantee on each phase of the algorithm. Failure probability. For a target failure probability η of the entire algorithm, we can achieve this by setting the failure probability at each phase to be on the order of η̂:= (η/C^2 d log^2 1/ε) , since the total number of inner phases is (C^2 d log^2 1/ε). This will be sufficient for the failure probability to be at most η, where we apply a union bound over all the phases. Since the dependence on η̂ of the is polylogarithmic, this will not impact the resulting oracle complexity bounds by more than polylogarithmic factors. Structure of lemmas. We provide a lemma for each outer phase, where each lemma establishes three facts: (i) the number of updates (to σ), (ii) a quantitative guarantee on the warmness on each update, and (iii) the final query-complexity bound given by Theorem <ref>. Briefly, this theorem says that, starting from an M-warm distribution towards a truncated Gaussian N(0, σ^2 I)|_, the oracle complexity to achieve ε error in R_∞, with success probability 1-η̂, is bounded by a quantity of asymptotic order (Md^2σ^2log^6L/εη̂) . Since σ^2 > d^-1 at all times, it suffices to choose h^-1 = Θ(d^2logM log L/εη̂) and the number of iterations to be n = (d^2 σ^2 log^2 L/εη̂). Except for the last phase, we take ε = log 2, in which case the query complexity is simply (Md^2σ^2log^6L/η̂) , with h^-1 = Θ(d^2logM log L/η̂) and number of iterations being n = (d^2 σ^2 log^2 ML/η̂). Note that log 2 error in R_∞ implies that the law of the resultant sample is at least 2-warm with respect to the target. §.§.§ Phase I With probability at least 1-η̂, initial distribution μ_0 = Unif(B_1(0)), and target distribution μ_1 = N(0, 1/d I_d) |_, Phase I outputs a sample with law _1 satisfying _μ_1|_1/μ_1 - 1| ≤ 1 . Its oracle complexity is (d^3/2log^6 L/η̂) , using with step size h^-1 = Θ(d^2 loglog L/η̂) and (d log^2 L/η̂) iterations. # Inner phases: The number of inner phases is clearly 1. 
Warmness: The initial measure μ_0 has warmness given by μ_0(x)/μ_1(x) = 1/(B_1(0))/exp(-d/2x^2)/∫_exp(-d/2z^2) z ≤(2π/d)^d/2exp(d/2)/(B_1)·1/(2π/d)^d/2∫_exp-d/2z^2 z_≤ 1 (i)≤ (2/d)^d/2exp(d/2) Γd/2+1(ii)≤γ√(d)exp-d/2log d + d/2 + d/2log d - d/2≤γ√(d) , where in (i) we used (B_1) = π^d/2 / Γ(%s/%sd2+1), and (ii) follows from Stirling's approximation that Γ(d/2 + 1) ≤γ e d^d/2+1/2 (2 e)^-d/2 for some universal constant γ>0 . Final complexity: It follows from substituting M = Ø(√(d)) and σ^2 = d^-1 into (<ref>). §.§.§ Phase II With probability at least 1-η̂i_* for i_* = (d), initial distribution _1 (given by from Lemma <ref>), and target distribution μ_i_* = N(0, I_d) |_, Phase II outputs a sample with law _i_* satisfying _μ_i_*|_i_*/μ_i_* - 1| ≤ 1 , Its oracle complexity is (d^3log^6 L/η̂) , using . The step size scheme for each inner phase is presented in the proof. Note that we slightly modify the scheme, which is to take σ_i+1^2 = min(1, σ_i^2 (1+1/d)). # Inner phases: As (1+d^-1)^d≥ 2 for any d≥ 1, it takes at most d iterations to double σ. Thus, since we need to double σ on the order of O(log d) many times, the number of inner phases within Phase II is at most i_*=(d). Warmness: By the construction of the algorithm, the following bound holds for all x ∈: μ_i(x)/μ_i+1(x) =exp-1/2σ_i^2 x^2/exp-1/2σ_i+1^2 x^2 ∫_exp-1/2σ_i+1^2 x^2 x/∫_exp-1/2σ_i^2 x^2 x≤∫_exp-1/2σ_i+1^2 x^2 x/∫_exp-1/2σ_i^2 x^2 x ≤1+1/d^d/2∫_(1+1/d)^-1/2exp-1/2σ_i^2 x^2 x/∫_exp-1/2σ_i^2 x^2 x≤√(e) . Furthermore, if at each previous phase, we have _μ_i|μ̅_̅i̅/μ_i - 1| ≤ 1 , then this implies warmness of constant order between _i and μ_i+1: _μ_i+1|μ̅_̅i̅/μ_i+1| ≤ 2√(e) . Final complexity: For any fixed i, the complexity in each inner phase is given by substituting M ≤ 2√(e) and σ^2_i ≤ 1 into (<ref>), to obtain a complexity of (d^2 log^6 L/η̂) . Here, we run the with h_i^-1 = Θ(d^2 loglog L/η̂) and n_i = (d^2 σ^2 log^2 L/η̂), to get R_∞ bounded by log 2. Implicitly, this verifies the condition (<ref>). Secondly, as i_* = (d), the total query complexity is bounded by the product of the worst-case complexity in each inner phase with the total number of inner phases, which is given by (d^3 log^6 L/η̂) . §.§.§ Phase III With probability at least 1-η̂j_* for j_* = (d), initial distribution _i_* (given by from Lemma <ref>), and target distribution μ_i_*+j_* = N(0, L^2 dI_d) |_, Phase II outputs a sample with law _i_*+j_* satisfying _μ_i_*+j_*|_i_*+j_*/μ_i_*+j_* - 1| ≤ 1 , Its oracle complexity is (L^2d^3log^6 1/η̂) , using the . The step size scheme for each inner phase is presented in the proof. In this phase, we will first perform the analysis over each doubling and then aggregate over the doublings. # Inner phases: We first partition [1, L^2d] by a sequence of doubling parts, where the terminal σ^2 is at least double of the initial σ^2 in each doubling part. Clearly, the number of doubling parts is log_2(L^2d). Let σ^2 be an initial variance of a given doubling part. Since we have (1+σ^2/L^2 d)^L^2 d/σ^2≥ 2 for all d≥ 1 and σ≤ L√(d) , the number of inner phases within the doubling part is at most L^2d/σ^2≤ L^2d. Therefore, the total number of inner phases during Phase III is (L^2d). Warmness: Let j ≥ i_*. By the construction of the algorithm, for all x ∈, μ_j(x)/μ_j+1(x) =exp-1/2σ_j^2 x^2/exp-1/2σ_j+1^2 x^2 ∫_exp-1/2σ_j+1^2 x^2 x/∫_exp-1/2σ_j^2 x^2 x≤∫_exp-1/2σ_j+1^2 x^2 x/∫_exp-1/2σ_j^2 x^2 x . As x^2 ≤ L^2 d on , we have exp-1/2σ_j+1^2 x^2 =exp-1/2σ_j^2 x^2·exp1/2(L^2d+σ_j^2) x^2≤√(e)exp-1/2σ_j^2 x^2 . 
As a result, μ_j(x)/μ_j+1(x)≤∫_exp-1/2σ_j+1^2 x^2 x/∫_exp-1/2σ_j^2 x^2 x≤√(e)∫_exp-1/2σ_j^2 x^2 x/∫_exp-1/2σ_j^2 x^2 x=√(e) . Furthermore, if at each previous phase, we have _μ_j|μ̅_̅j̅/μ_j - 1| ≤ 1 , then this implies warmness of constant order _μ_j+1|μ̅_̅j̅/μ_j+1| ≤ 2√(e) . Final complexity: We first bound the total query complexity of one doubling part. Let σ^2 be an initial variance of a given doubling part. For any intermediate variance σ_j^2 within the part, the query complexity of the with h^-1 = Θ(d^2log^2L/η), iteration number n = (d^2σ_j^2 log^2 L/η̂), and warmness M≥ 2√(e) can be obtained from (<ref>) as d^2σ_j^2log^6L/η̂ . This achieves log 2-accuracy in R_∞ in each update, which also implicitly verifies (<ref>). Since within the doubling part, σ_j^2 ≤ 4σ^2 (where σ^2 is the initial variance within the doubling part) and the number of inner phases is at most L^2d/σ^2, the total query complexity during Phase III, aggregating over all the doubling parts, is log_2(L^2 d) ·L^2d/σ^2·d^2(4σ^2)log^6L/η̂ = L^2d^3log^61/η̂ , achieving log 2-accuracy in R_∞ for the law of the final sample. §.§.§ Phase IV With probability at least 1-η̂, initial distribution _i_*+j_* (given by from Lemma <ref>), and target distribution π̅∝_, Phase IV outputs a sample with law ν satisfying _π̅|ν/π̅ - 1| ≤ε/3 , Its oracle complexity is (L^2 d^3log^6 1/η̂) , using the with h^-1 = Θ(d^2 loglog (L/ε)/η̂) and (L^2 d^3 log^2 1/η̂ε) iterations. # Inner phases: There is only a single update in this phase. Warmness: Let π̅ be the uniform distribution over . First, bounding exp(-1/2L^2d x^2)≤ 1 and, since ⊆ B_L√(d)(0), ∫_exp-1/2L^2d x^2 x ≥() exp(-L^2d/2L^2 d) = ()/√(e) . Then, we can deduce μ_i_* + j_*(x)/π̅(x) = μ_i_* + j_*(x)/1/() =()·exp-1/2L^2d x^2/∫_exp-1/2L^2d x^2 x≤√(e) . As before, since we have _μ_i_* + j_*|_i_* + j_*/μ_i_* + j_* - 1| ≤ 1 , then this implies warmness of constant order _μ_i_* + j_*_i_* + j_*/π̅≤ 2√(e) . Final complexity: Here, using Theorem <ref> (instead of the Gaussian results) with M = 2√(e) and D = 2L√(d), we obtain the query complexity of (L^2 d^3 log^6 1/η̂ε) . We now put together these four lemmas to prove Theorem <ref>. Given the start μ_0, we first apply Lemma <ref> to produce _1. Then apply Lemma <ref> to produce _i_*, Lemma <ref> to produce _i_* + j_*, and finally Lemma <ref> to produce ν. Their complexity is dominated by that of Lemma <ref>, which is (L^2 d^3 log^6 1/η̂ε) =C^2 d^3 log^8 1/η̂ , with the guarantee R_∞(νπ̅) ≤ε/3. From our choice of L, it follows that R_∞(π̅π) ≤log(1+2/3) due to sup_x∈ Kπ̅(x)/π(x)=( K)/()≤1+2/3 . Therefore, R_∞(νπ) = _πlogν/π≤ R_∞ (νπ̅) + R_∞ (π̅π) ≤/3 + log1 + 2/3≤ . Secondly, we need to quantity the probability of successfully generating a sample. This is given, by a union bound, as 1-η̂(2+i_* + j_*). With η̂ chosen according to (<ref>), has at least 1-η probability of success. Putting all these together concludes the proof. § CONCLUSION We have presented , an algorithm which achieves ε-closeness in R_∞ with Ø(d^3 1/ε) query complexity. Here we note some possible future directions for research. * In this work, we showed that it is possible to extend the guarantees for for Gaussians of the form N(0, σ^2 I_d). Naturally, one asks whether it is possible to sample from general distributions of the type e^-f|_ K, when f has some nice properties? The applications of this problem are innumerable, and the techniques required are far from trivial. 
It is likely that one would need to devise a different annealing scheme in order to control the initial warmness parameter, or find a new strategy which bypasses the need for annealing entirely. * This work assumed the membership oracle model for K, but it is interesting to consider alternative oracle models. For instance, one can use more features of the geometry, by assuming access to a self-concordant barrier or a Riemannian metric on K. Alternatively, one can assume access to a stronger oracle such as a separation oracle, which returns not just a binary response to if x ∈ K, but when x ∉ K also gives a hyperplane separating x and K. It is as yet unknown if such a model can improve the query complexity, even for uniform sampling. It is also possible that a different annealing strategy would be beneficial in this context. § ACKNOWLEDGEMENTS We thank Sinho Chewi for helpful feedback. YK was supported in part by NSF awards CCF-2007443 and CCF-2134105. MSZ was supported by NSERC through the CGS-D program. alpha
arXiv:2407.12908v1 [gr-qc], 17 July 2024
The legacy of boson clouds on black hole binaries
Giovanni Maria Tomaselli, Thomas F. M. Spieksma, Gianfranco Bertone
Categories: gr-qc, hep-ph, hep-th
|| Gravitation Astroparticle Physics Amsterdam (GRAPPA), University of Amsterdam, Amsterdam, 1098 XH, NetherlandsNiels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, DenmarkGravitation Astroparticle Physics Amsterdam (GRAPPA), University of Amsterdam, Amsterdam, 1098 XH, Netherlands§ ABSTRACT Superradiant clouds of ultralight bosons can leave an imprint on the gravitational waveform of black hole binaries through “ionization” and “resonances”. We study the sequence of resonances as the binary evolves, and show that there are only two possible outcomes, each with a distinct imprint on the waveform. If the cloud and the binary are nearly counter-rotating, then the cloud survives in its original state until it enters the sensitivity band of future gravitational wave detectors, such as LISA. In all other cases, resonances destroy the cloud, while driving the binary to co-rotate with it and its eccentricity close to a fixed point. This opens up the possibility of inferring the existence of a new boson from the statistical analysis of a population of black holes binaries. The legacy of boson clouds on black hole binaries Gianfranco Bertone Received date / Accepted date ================================================= * Introduction. Gravitational waves (GWs) will enable detailed measurements of the environments of compact objects, especially for intermediate-to-extreme mass ratio inspirals <cit.>. These systems spend millions of cycles in the millihertz regime, allowing environmental effects to build up throughout the inspiral. Future GW detectors, such as LISA <cit.> will probe this frequency range, hence a precise modelling of the dynamics between the binary and the environment is essential. Before the binary's frequency enters the detector's band, the companion object may have already impacted the environment through its gravitational perturbation. Understanding the “history” of the system is thus required to study effects later in the inspiral, such as dynamical friction. A type of environment that has recently attracted significant attention are clouds made of hypothetical ultralight bosons surrounding black holes (BHs). It has been found that the cloud-BH system, known as a “gravitational atom”, exhibits a rich phenomenology when part of a binary system <cit.>, and thus forms an interesting candidate for precision tests of fundamental physics with GWs. Boson clouds have a natural way of forming through superradiance<cit.>. This is a purely gravitational process where rotating BHs shed a significant amount of their mass and angular momentum to a boson field, and can be excited without the need of any prior abundance. The efficiency of this process depends on the gravitational coupling[We use natural units G=ħ=c=1.]α=μ M, where μ is the boson mass and M the BH mass, and is maximum when α∼𝒪(0.1). For astrophysical BHs, this condition implies that the boson has to be ultralight, i.e., in the mass range 𝒪(10^-20-10^-10) eV. These types of particles are predicted from different areas of physics, for example in the context of the strong CP problem (the QCD axion <cit.>), the string axiverse <cit.> and as possible candidates to resolve the dark matter problem <cit.>. The cloud-binary interaction involves ionization (or “dynamical friction”) <cit.> and resonances<cit.>. 
While ionization occurs “late” in the inspiral, resonances can affect the system when the binary is well beyond the cloud's radius, making them crucial for understanding the history of the system. In this Letter and its companion <cit.>, we study this resonant behaviour on orbits with generic eccentricity and inclination and uncover new phenomena that can cause a resonance to “break” before its completion. We are able to (i) narrow down direct observational signatures from the cloud, when it survives all resonances, and (ii) discover new indirect observational signatures, when the cloud is destroyed early on, thereby leaving a legacy on the binary through its eccentricity and inclination. 4pt * Setup. We work in the frame of the larger BH, assumed to host the cloud, with r⃗ = {r, θ, ϕ}. We specialize to scalar fields and work in the non-relativistic limit, where the Klein-Gordon equation reduces to the Schrödinger equation and is solved by the familiar hydrogenic eigenfunctions, ψ_nℓ m = R_n ℓ(r)Y_ℓ m(θ, ϕ) e^-i(ω_nℓ m - μ) t. Here, R_nℓ are the hydrogenic radial functions, Y_ℓ m the scalar spherical harmonics and n, ℓ and m the usual quantum numbers. The energy of each eigenstate is given by ϵ_nℓ m=μ(1-α^2/2 n^2-F_nℓα^4+h_ℓ/n^3ã m α^5+𝒪(α^6)) , where ã is the spin of the BH and the coefficients F_nℓ and h_ℓ can be found in <cit.>. The energy difference between two states is referred to as Bohr (Δ n ≠ 0), fine (Δ n = 0, Δℓ≠ 0) or hyperfine (Δ n =Δℓ = 0, Δ m ≠ 0), depending on the leading power of α. The companion object has mass M_*, position R⃗_* = {R_*, θ_*, φ_*} and its Newtonian gravitational perturbation can be expanded in multipoles as V_*(t,r⃗)=-∑_ℓ_*, m_*4π qα/2ℓ_*+1r^ℓ_*_0.60</r^ℓ_*+1_0.60>Y_ℓ_*m_*(θ_*,φ_*)Y_ℓ_*m_*^*(θ,ϕ) for ℓ_* ≠ 1, where r_0.60>(r_0.60<) indicates the larger (smaller) of R_* and r and q=M_*/M is the mass ratio. The dipole contribution ℓ_* = 1 has a slightly different expression and can be found in <cit.>. 4pt * Nonlinear resonances. The overlap between two different eigenstates |a⟩ and |b⟩ induced by the perturbation (<ref>) is composed of terms that oscillate with an integer multiple g of the orbital frequency Ω=φ̇_*, ⟨a|V_*|b|∼⟩η^ge^igφ_*. Energy losses such as GW radiation and the cloud's ionization induce a slow frequency chirp, that can be linearized around a reference point Ω_0 as Ω=γ t+Ω_0, where γ quantifies the speed of the chirp. Restricting our attention to two eigenstates, the Schrödinger equation can be written in a dimensionless form as /τ[ c_a; c_b ]=-i[ ω/2 √(Z); √(Z) -ω/2-iΓ ][ c_a; c_b ] , where τ=√(gγ) t and c_j=⟨j|ψ|$⟩, withj=a,b. The parameterΓquantifies the decay of state|b⟩<cit.>, the strengthη^gof the perturbation (<ref>) defines the Landau-Zener (LZ) parameterZ=(η^g)^2/(gγ), and the dimensionless frequency is given byω=(Ω-Ω_0)/√(γ/g), whereΩ_0=(ϵ_b-ϵ_a)/gis the resonance frequency. As long asωincreases linearly, the long-time behavior of the linear system (<ref>) follows the well-known LZ formula <cit.>: if initially only state|a⟩is populated, its final population is given by|c_a|^2=e^-2π Z. However, the transition between states of the cloud induces a backreaction on the orbit. From the energy balance of the cloud-binary system, the frequency is found to increase as ω=τ-Bc_b^2 , where the dimensionless parameterBquantifies the strength of the backreaction and is given by B=3M_ c/MΩ_0^4/3(1+q)^1/3M^1/3/qα√(γ/g)(-g) . The system (<ref>)–(<ref>) is now nonlinear, and behaves differently from the LZ solution. 
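The coupled system above, together with the backreacted frequency, can be integrated directly. The following is a minimal numerical sketch with our own function names, illustrative parameter values, and a plain fixed-step fourth-order Runge–Kutta integrator; it recovers the linear Landau–Zener limit when the backreaction and decay are switched off.

```python
import numpy as np

def evolve_two_level(Z, B, Gamma, tau_span=(-40.0, 40.0), dtau=1e-3):
    """Integrate dc/dtau = -i H(omega) c with omega = tau - B*|c_b|^2 and
    H = [[omega/2, sqrt(Z)], [sqrt(Z), -omega/2 - i*Gamma]]; start fully in |a>."""
    sqrtZ = np.sqrt(Z)

    def rhs(tau, c):
        omega = tau - B * abs(c[1]) ** 2                 # backreacted frequency
        H = np.array([[0.5 * omega, sqrtZ],
                      [sqrtZ, -0.5 * omega - 1j * Gamma]])
        return -1j * (H @ c)

    c = np.array([1.0 + 0j, 0.0 + 0j])
    tau, tau_end = tau_span
    while tau < tau_end:                                 # fixed-step RK4
        k1 = rhs(tau, c)
        k2 = rhs(tau + 0.5 * dtau, c + 0.5 * dtau * k1)
        k3 = rhs(tau + 0.5 * dtau, c + 0.5 * dtau * k2)
        k4 = rhs(tau + dtau, c + dtau * k3)
        c = c + (dtau / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        tau += dtau
    return abs(c[0]) ** 2, abs(c[1]) ** 2

# Sanity check in the linear limit (B = 0, Gamma = 0): the surviving population
# |c_a|^2 should approach the Landau-Zener value exp(-2*pi*Z).
pa, pb = evolve_two_level(Z=0.1, B=0.0, Gamma=0.0)
print(pa, np.exp(-2 * np.pi * 0.1))
```

Positive or negative values of B can then be used to explore the floating and sinking behaviour described next.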
Two cases can be distinguished based on the sign ofB. IfB>0, the backreaction slows down the chirp. The equations then enjoy a positive-feedback mechanism: the stronger the backreaction, the more population is transferred between the states, which in turn makes the backreaction stronger, and so on. Hence, there is a critical threshold above which this process becomes self-sustaining and the system enters a distinct phase of a floating orbit during whichΩstays approximately constant. The value of the tipping point can be estimated perturbatively for2π Z≪1: ignoring the backreaction,c_b^2would reach the value1-e^-2π z≈2π Zin a time intervalτ≈1; the backreaction term in (<ref>) must then become significant when 2π ZB≳1 (float starts). Careful numerical study of the non-linear set of eqs. (<ref>)–(<ref>) confirms this formula to an accuracy better than6%. While (<ref>) determines whether the float starts, a complete transfer from|a⟩to|b⟩is only realized for small enoughΓ. In particular, it can be shown analytically <cit.> that, forΓ B≫1, the float breaks abruptly when the population left in the initial state is c_a^2≈Γ/2ZB (float breaks), whilec_b^2remains at negligible values throughout and after the resonance.[When formula (<ref>) returns c_a^2>1, the resonance breaks as soon as it starts, effectively behaving as if (<ref>) was not satisfied.] Together, (<ref>) and (<ref>) characterize the nonlinear behavior of floating resonances. ForB<0instead, the backreaction speeds up the chirp and reduces the population transferred. For smallq, we have2π Z≪1, and the final population in state|b⟩is found to be c_b^2≈3.7 (Z/B^2)^1/3 , which is valid forB≪-1/Z. While the numerical coefficient in (<ref>) is determined with a numerical fit, the dependence onZandBcan be justified by only keeping the second term in (<ref>), which dominates for large negativeB, and then looking for a stationary solution of (<ref>) with smallc_b. The backreaction now produces a sinking orbit, during which the frequency “jumps” ahead of its unbackreacted value in a short interval of time. 4pt * Eccentricity and inclination. Any general conclusion about the role of resonances in the evolution of the cloud-binary system must take into account generic orbital configurations, with nonzero eccentricityεand inclinationβ. While on equatorial circular orbits resonances are only excited atΩ_0=Δϵ/Δ m(that is, only for the main toneg=Δ m), on generic orbits the gravitational perturbation includes overtonesgΔ m, allowing a resonance between two states to be excited at multiple points in the inspiral. Furthermore, a much larger number of resonances is now possible, virtually between any two given states. Eccentricity and inclination also modulate the chirp rateγand the strengthη^gof the perturbation. For example, on circular orbits, the LZ parameter of the most interesting hyperfine and fine resonances depends on the inclination as Z∝sin^2(Δ m-g)(β/2)cos^-2(Δ m+g)(β/2) , meaning that these resonances all become weak close to counter-rotating orbits (β=π). Most importantly, however, resonances severely impact the orbital parameters with their backreaction. First of all, eq. (<ref>) is modified to ω/τ=f(ε)-Bc_b^2/τ , f(ε)=1+73/24ε^2+37/96ε^4/(1-ε^2)^7/2 , wheref(ε)quantifies the eccentricity dependence of GW energy losses <cit.>. Then, other dynamical equations forεandβarise from the conservation of angular momentum. 
On floating orbits, which are particularly efficient in changing the orbital parameters due to their long duration, these read C/τ√(1-ε^2) =Δ m/gf(ε)cosβ-h(ε) , C√(1-ε^2)β/τ =-Δ m/gf(ε)sinβ , whereC=3Ω_0/√(γ/g)andh(ε)=(1+7ε^2/8)/(1-ε^2)^2. The flow defined by (<ref>) and (<ref>) is shown in Fig. <ref> for an example value ofΔ m/g. Each overtone forces the parameters to evolve towards a fixed point where the binary is co-rotating (β=0) and the eccentricity depends onΔ m/g, for example [ ε≈ …, 0.65, 0.58, 0.46, 0 ,; Δ m/g= …, 1/4, 1/3, 1/2, 1 . ] If the resonance does not break before completion, it lasts for a timeΔτ=B, and its “distance” from the fixed point in the(ε,β)plane roughly reduces by a factore^D, whereD≡ B/C∝ M_cq^-1. Unequal mass binaries thus approach the fixed point more than equal mass ones. Finally, we note that an increase inεor decrease inZduring the float can break the resonance, similar to the effect of the decay of state|b⟩, while variations in the opposite direction can prevent the break. For the main tones (g=Δ m), however, these effects turn out to not be relevant <cit.>. 4pt * Resonant history. Predicting the nonlinear behavior of resonances on generic orbits allows us to determine the evolution of the cloud-binary system. This includes answering the pressing question of whether the cloud is still present (and in which state) when the binary is close enough to give direct observational signatures, through Bohr resonances <cit.> and ionization <cit.>. The answer is found studying hyperfine and fine resonances. All of them are floating, since superradiance populates the most energetic state for a givenn, such as|211⟩or|322⟩, see (<ref>). Furthermore, these resonances all involve a decaying final state, i.e., with(ω)<0. This decay is always much faster than duration of the floatΔ t_ float, especially for small mass ratios, where the two timescales are separated by many orders of magnitude; see Fig. 10 in <cit.>. We can thus already conclude that none of the early resonances is able to change the state of the cloud: either it survives in its original state|a⟩≡|n_aℓ_am_a⟩, or it is destroyed. It remains to be answered under which conditions either possibility occurs. The overtones generated by a nonzero eccentricity are generally weaker than the resonances encountered on circular orbits. So, if the cloud safely gets past the strongest hyperfine or fine resonance on circular orbits, it will still be present when the binary eventually enters the Bohr region. Two conditions need to be satisfied for a floating resonance to destroy the cloud. First, the float must start, i.e., inequality (<ref>) must hold. Second, the float must not break before the cloud is destroyed. The mass left in the original state when the resonance breaks is given in (<ref>), and is shown in Fig. <ref> for the example resonance|211⟩→|210⟩. Both conditions fail when the resonance is weak, for example whenZ→0. From (<ref>), it follows that this is necessarily the case in a certain interval of inclinations that neighbours the counter-rotating configuration, sayπ-χ_i<β≤π, wherei=1(i=2) for hyperfine (fine) resonances. Not breaking the resonance is often a stronger condition than starting it, and it is thus the one that determinesχ_i. Analytical approximations forα≪1give χ_1≈Θ×(M_ c^ br/M/10^-2)^-1/6(α/0.2)^(3+4ℓ_a)/6(ã/0.5)^-5/18 , whereΘ=38for|211⟩andΘ=4.8for|322⟩, whileM_ c^ bris the cloud's mass at resonance breaking. Fine resonances, instead, give χ_2≈9(M_ c^ br/M/10^-2)^-1/4(α/0.2)^3/2 . 
for|322⟩, while they are never able to destroy a cloud in the|211⟩state. Although resonances withgΔ malso fail to destroy the cloud in other angular intervals, such as near-co-rotating binaries (β=0), the one aroundβ=πis the only interval where none of the hyperfine and fine resonances is effective. While only binaries that nearly counter-rotate get through hyperfine and fine resonances without destroying the cloud, they might not be the only ones that carry it up to the Bohr region. The separations where these resonances are located can be even larger than the scale where binary formation mechanisms or other astrophysical interactions take place <cit.>. We do not dive into further details here, and simply consider close binary formation as a mechanism to “skip” early resonances. The plethora of Bohr resonances encountered late in the inspiral, responsible for direct overvational signatures, are almost all weak and sinking, meaning that they too do not change the state of the cloud. The only possible exception among the initial states we considered is|322⟩→|211⟩. These can float and populate a relatively long-lived final state; however, the cloud is very efficiently ionized during the float and thus loses most of its mass. A summary of the possible resonant histories is given in Fig. <ref>. 4pt * Direct observational signatures. Understanding the resonant history of the system allows us to narrow down its impact on the waveform. The cloud is either destroyed, or remains in its original state while the binary is nearly counter-rotating. The possible shapes of the ionization-driven frequency chirp <cit.> are then simply the ones induced by the state initially populated by superradiance, most likely|211⟩or|322⟩, withβ≈π. The sequence of sinking Bohr resonances encountered by the system <cit.> is also completely fixed by the initial state of the cloud. Their frequency <cit.>, f_0.60GW^ res=26 mHz/g(10^4M_⊙/M)(α/0.2)^3(1/n_a^2-1/n_b^2) , can fall in the LISA band.[For n_b→∞, one recovers from (<ref>) the position of the ionization kinks <cit.>.] Our work predicts the amplitude of the corresponding frequency “jump”, Δ f_0.60GW =0.61 mHz/Δ m^1/3 (10^4M_⊙/M)(M_ c/M/10^-2)(q/10^-3)^-1 ×(α/0.2)^3(1/n_a^2-1/n_b^2)^4/3(c_b^2/10^-3) , where|c_b|^2can be determined through (<ref>) and is typically∼𝒪(10^-3). The dephasing induced by even a single jump can be∼𝒪(10^4)radians, which is well above the expected LISA precision. 4pt * Indirect observational signatures. Perhaps our most striking result is that, even when the cloud is destroyed during an early resonance, it still leaves a permanent, detectable mark on the binary. It does so by severely affecting the eccentricityεand inclinationβduring the floating orbit, as shown in Fig. <ref>. This mechanism depends on two parameters: the ratioΔ m/g, which determines the fixed pointεtends to (see eq. (<ref>)), and D=D_0(-g/2)^2/3(M_ c/M/10^-2)(q/10^-3)^-1(α/0.2)(ã/0.5)^1/3 , which determines how closely the binary approaches it. For the two strongest hyperfine resonances from|211⟩(|322⟩), the parameterD_0assumes the values3.30and4.16(1.28and1.62) respectively. We show in Fig. <ref> the possible values ofεandβat the end of a floating resonance, as function ofDand for two values ofΔ m/g. The float brings the orbit significantly close to the equatorial plane, even for large initial inclinations. An abundance of quasi-planar inspiral events can thus be indirect evidence for boson clouds. 
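The drift of (ε, β) during a float follows directly from the flow equations given above. The sketch below integrates them with a plain Euler stepper over a rescaled float duration D = B/C; the initial conditions, step count, and guard on ε are illustrative choices of ours, and for Δm/g = 1/2 the trajectory should settle near the fixed point (ε ≈ 0.46, β = 0) quoted earlier.

```python
import numpy as np

def f(e):      # eccentricity enhancement of GW energy losses, f(eps)
    return (1 + 73/24 * e**2 + 37/96 * e**4) / (1 - e**2) ** 3.5

def h(e):      # h(eps) entering the angular-momentum balance
    return (1 + 7/8 * e**2) / (1 - e**2) ** 2

def float_flow(e0, beta0, dm_over_g, D, n_steps=20000):
    """Euler integration of the (eccentricity, inclination) flow during a floating
    resonance, in time units of C, over a rescaled duration D = B/C."""
    e, beta = e0, beta0
    ds = D / n_steps
    for _ in range(n_steps):
        de = -np.sqrt(1 - e**2) / max(e, 1e-6) \
             * (dm_over_g * f(e) * np.cos(beta) - h(e)) * ds
        dbeta = -dm_over_g * f(e) * np.sin(beta) / np.sqrt(1 - e**2) * ds
        e = float(np.clip(e + de, 1e-4, 0.99))   # guard against coordinate singularities
        beta = beta + dbeta
    return e, beta

# For dm/g = 1/2 the flow should approach (eps, beta) ~ (0.46, 0).
print(float_flow(e0=0.2, beta0=1.0, dm_over_g=0.5, D=10.0))
```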
Whether the formation mechanisms of the binary, or other astrophysical processes, also lead to a natural preference for small inclinations is still subject to large uncertainties <cit.>. Additionally, the eccentricity is suppressed by the main tones (g=Δ m) and brought close to, or above, a nonzero fixed point by overtones (g>Δ m). The latter scenario is especially interesting for binaries that are not dynamically captured, such as in the case of comparable mass ratios, because they are generally expected to be on quasi-circular orbits. The past interaction with a cloud can overturn this prediction. The float-induced high eccentricities are mitigated by the subsequent GW emission, but the binary will remain more eccentric than it would have been otherwise, even in late stages of the inspiral. 4pt * Conclusions. In this Letter, we answered one of the outstanding questions about the phenomenology of gravitational atoms in binary systems. By studying their resonant history, we are able to show that either the cloud remains in its original state until late stages of the inspiral, or it is destroyed during an early floating resonance. In the first case, the binary is expected to be nearly counter-rotating with the cloud, which pinpoints uniquely its direct observational signatures given by Bohr resonances and ionization. In the second case, the binary's eccentricity and inclination evolve towards a fixed point. The possibility of inferring the past existence of a cloud from its legacy left on the binary parameters is a new and exciting observational prospect both for ground based GW detectors, such as LIGO-Virgo or the Einstein Telescope, and space-borne ones, such as LISA. Proper population studies will be needed to turn this prediction into a test of fundamental physics with the available and future data. 4pt * Acknowledgements We thank Rafael Porto for useful discussions and for sharing a draft of their related work <cit.>. TS is supported by the VILLUM Foundation (grant no. VIL37766), the Danish Research Foundation (grant no. DNRF162), and the European Union’s H2020 ERC Advanced Grant “Black holes: gravitational engines of discovery” grant agreement no. Gravitas–101052587.
arXiv:2407.12637v1 [cs.CV], 17 July 2024
Toward INT4 Fixed-Point Training via Exploring Quantization Error for Gradients
Dohyung Kim, Junghyup Lee, Jeimin Jeon, Jaehyeon Moon, Bumsub Ham
Categories: cs.CV
INT4 Fixed-Point Training D. Kim et al. ^1Yonsei University, ^2Articron Inc. <http://cvlab.yonsei.ac.kr/projects/LBT> Toward INT4 Fixed-Point Training via Exploring Quantization Error for Gradients Dohyung Kim1 Junghyup Lee1 Jeimin Jeon 1,2 Jaehyeon Moon 1,2 Bumsub Ham 1,Corresponding author July 22, 2024 ===================================================================================================== § ABSTRACT Network quantization generally converts full-precision weights and/or activations into low-bit fixed-point values in order to accelerate an inference process. Recent approaches to network quantization further discretize the gradients into low-bit fixed-point values, enabling an efficient training. They typically set a quantization interval using a min-max range of the gradients or adjust the interval such that the quantization error for entire gradients is minimized. In this paper, we analyze the quantization error of gradients for the low-bit fixed-point training, and show that lowering the error for large-magnitude gradients boosts the quantization performance significantly. Based on this, we derive an upper bound of quantization error for the large gradients in terms of the quantization interval, and obtain an optimal condition for the interval minimizing the quantization error for large gradients. We also introduce an interval update algorithm that adjusts the quantization interval adaptively to maintain a small quantization error for large gradients. Experimental results demonstrate the effectiveness of our quantization method for various combinations of network architectures and bit-widths on various tasks, including image classification, object detection, and super-resolution. § INTRODUCTION Over the past decade, convolutional neural networks (CNNs) have shown the effectiveness on various applications in computer vision <cit.>. The networks exploit wide <cit.> and deep architectures <cit.> with lots of training samples for better performance, which requires a large amount of memory to store, , weights, activations, and/or gradients, typically using full-precision values. Multiply-accumulate operations (MACs) with full-precision values are computationally demanding for both training and inference processes. Network quantization alleviates this problem by replacing the full-precision values with low-bit fixed-point ones (, integer format). This allows to employ an efficient integer arithmetic, while reducing the required memory and computational cost simultaneously. Recent studies focus on quantizing weights and/or activations in a forward pass to accelerate an inference process. Several methods have shown that the bit-width could be reduced to extremely low ones, , 3-bit, while retaining the accuracy of an original model <cit.>. They, however, also require high computational cost at training time, since gradients for backward propagation are kept to full-precision values. For an efficient training process, quantizing the gradients into low-bit widths is crucial, while minimizing the performance drop. Low-bit training approaches reduce the bit-width of gradients for an efficient backward propagation, which can be categorized into low-bit floating-point (FLP) and fixed-point (FXP) training methods. Low-bit FLP methods, representing the gradients with low-bit FLP values, have been widely used to boost the efficiency for backward propagation, but they still adopt MACs with FLP values <cit.>. 
Low-bit FXP methods have recently attracted significant attention that enable using integer arithmetic operations for backward propagation <cit.>. To this end, they exploit a discrete quantization function that maps full-precision gradients (, with 32-bit FLP values) into low-bit FXP ones. Specifically, the quantization function first normalizes the full-precision gradients within a quantization interval, and then maps them to the low-bit ones using a discretizer (, rounding function). Since finding an optimal quantization interval brings better quantization performance, recent approaches to quantizing weight and/or activation propose to learn the interval end-to-end, providing state-of-the-art results <cit.>. Adopting the learnable interval to quantize gradients is however computationally intractable, mainly due to computing derivatives of gradients (, Hessian). For this reason, current FXP methods simply set the interval to the min-max range of gradients <cit.>. A recent work proposes to adjust the interval such that the quantization error for entire gradients is minimized <cit.>. We have found that this approach narrows the quantization interval significantly, compared to the methods using a min-max range, since most gradients are distributed around a zero value <cit.>, while the min-max range spanning entire gradients is relatively very wide. Narrowing the quantization interval drastically leads to a significant quantization error for large gradients around a tail of distribution that have larger magnitudes affecting the training process dominantly <cit.>. In this paper, we introduce a simple yet effective method for a low-bit FXP training that updates the quantization interval for gradient quantization in a way of maintaining a small quantization error for large gradients. We conjecture that minimizing the quantization error for entire gradients causes a significant error for large gradients, which leads to an unstable training process. Our approach instead lowers the quantization error for large gradients in the FXP training. To this end, we derive an upper bound of the quantization error for the large gradients using a quantization interval, and obtain a condition for the interval that lowers the upper bound of the quantization error. Based on this condition, we propose an interval update algorithm that adjusts the quantization interval adaptively, maintaining a low quantization error for large gradients accordingly. We apply our method to various network architectures with different bit-widths, and achieve superior results on various vision tasks including image classification, object detection, and super-resolution. The main contributions of our work can be summarized as follows: ∙ We have found that minimizing the quantization error for entire gradients enlarges the quantization error for large gradients. Based on this, we propose to focus on reducing the quantization error for large gradients that play an important role for the low-bit FXP training. ∙ We derive an upper bound of the quantization error for large gradients, and compute an optimal condition for quantization intervals lowering the quantization error for large gradients. We design an interval update algorithm for the low-bit FXP training with a negligible computational overhead. 
∙ We demonstrate the effectiveness of our approach to updating the interval to maintain a small quantization error for large gradients with various architectures on standard benchmarks especially in 4-bit setting, and show an extensive analysis of our method. § RELATED WORK §.§ Low-bit FLP training FLP training approaches accelerate a training process by lowering the bit-width of gradients into a 16-bit <cit.> or an 8-bit <cit.>. A FLP value consists of exponent and mantissa parts, which represent dynamic range and precision, respectively. The FLP training approaches carefully assign bit-widths for the exponent and mantissa parts in order to minimize an accuracy drop caused by gradient quantization. Specifically, the work of <cit.> shows in-depth studies on distributions of weights, activations, and gradients, and proposes to use an 8-bit FLP value. More specifically, it uses 1, 5, and 2-bits for sign, exponent, and mantissa parts, respectively, in forward and backward propagations. After that, several approaches apply different formats of the 8-bit FLP value for weights, activations, and gradients <cit.>, or leverage scaling and shifting operations to adjust gradients within the dynamic range for an 8-bit FLP value <cit.>. More recently, the work of <cit.> adopts a radix-4 data format specialized for the FLP training with 4-bit values. Current FLP training approaches have shown the effectiveness to the low-bit training, but they still require MACs with FLP values at training time. On the contrary, our work is for the FXP training that is more hardware-friendly in terms of computational power and chip area, compared to the FLP training <cit.>. §.§ Low-bit FXP training Using FXP gradients for backward propagation degrades the performance significantly compared to the FLP counterparts within the same bit-width, due to the narrow dynamic range compared to that of FLP values <cit.>. MACs with FLP values are more resource-intensive than those of FXP values, suggesting that a FXP training is suitable for hardware implementation <cit.>. In this regard, recent works have focused on the FXP training that converts full-precision gradients into low-bit FXP values <cit.>. DorefaNet <cit.> quantizes gradients into FXP values for the first time, and adopts the stochastic rounding technique <cit.> to reduce the average quantization error in the training process. FPT <cit.> proposes to assign different bit-widths for each layer, and changes them continually during training. FQT <cit.> presents a per-sample quantization approach by employing multiple quantizers for different samples within a batch. Although this effectively captures the dynamic range variations across samples, extra FLP operations are required to normalize each sample, which is less efficient compared to layer-wise quantization techniques. More recently, IQB <cit.> introduces a piecewise FXP format for gradient quantization that lowers the quantization error effectively, while avoiding clipping gradients. However, the piecewise format requires specially designed hardwares. On the contrary, our method uses a layer-wise quantization with a uniform quantizer, which aligns well with hardware implementation, while boosting the quantization performance significantly in low-bit FXP training. Closely related to ours, several methods  <cit.> design quantizers for gradients considering the quantization error. DSGC <cit.> claims that minimizing the quantization error for entire gradients is important for low-bit FXP training. 
It thus proposes to search the quantization interval that maximizes cosine similarity between full-precision and quantized gradients. This approach makes the quantization interval significantly narrow, since the majority of gradients are concentrated near zero. The narrow interval incurs substantial quantization error for large gradients that affect the training process dominantly. In contrast to DSGC, we design an update algorithm adjusting the quantization interval adaptively to lower the quantization error for large gradients. DAIQ <cit.> employs a channel-wise quantization strategy, using multiple quantizers with different quantization intervals along the channel dimensions of gradients for each layer, in order to reduce the quantization error effectively. However, this approach is less suitable for hardware implementation compared to a layer-wise quantization method. DAIQ also designs a magnitude-aware clipping strategy that lowers the quantization error weighted by gradient magnitudes. It sets a clipping value as the running mean of the maximum gradient over training iterations. DAIQ applies this technique to the channels whose gradients follow inverted-T distributions. Otherwise, it employs the min-max quantizer. Different from DAIQ, our approach adopts a layer-wise quantization method exploiting a single quantizer per layer, which is more feasible for hardware implementation, and efficient in terms of computational cost. Moreover, our quantizer is applicable to the gradients of any distributions, enabling lowering the quantization error for large gradients regardless of their distributions. § METHOD §.§ Overview Following recent works <cit.>, we quantize full-precision weights, activations, and gradients into low-bit FXP ones. To this end, we use a uniform quantizer that converts a full-precision input x (, weights, activations, or gradients) to a b-bit quantized output. We adopt a layer-wise quantization for efficiency. Specifically, we clip the input within a quantization interval, parameterized by a clipping value c, and normalize it using a scale factor s to obtain a normalized input x_n as follows: x_n = clip(x,c)/s, where the clipping function and the scale factor are defined differently depending on distributions of inputs <cit.>. For example, for the input data with a zero-centered distribution, , weights or gradients, the clipping function and the scale factor are designed as clip(x,c)=min(max(x,-c),c),     s = c/2^b-1-1. Differently, they are defined as clip(x,c)=min(max(x,0),c),     s = c/2^b-1, if the input data follows a half-normal distribution, , activations after a ReLU. We then obtain a quantized output Q(x) by applying a rounding ⌈·⌋ to the normalized input x_n, followed by multiplying it with the scale factor s for denormalization as follows: Q(x) = s⌈ x_n ⌋. Following the works of <cit.>, we learn the clipping values c (, the quantization interval) end-to-end for weight and activation quantizers at each layer. Note that learning the clipping values for a gradient quantizer is intractable, since it requires to compute the derivatives of gradients (, Hessian). We manually set the clipping value for gradients, denoted by c_g, to γ g_max, where γ∈ (0,1] is a clipping factor, and g_max is the maximum absolute gradient (, max(| G|), where we denote by G a set of entire gradients in a single layer). Note that the clipping factor γ controls the quantization interval. For example, the interval becomes narrow as the clipping factor decreases. 
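A minimal PyTorch sketch of the layer-wise quantizer above is given below. The function names are ours; the optional stochastic-rounding branch reflects the rounding later applied to gradients in the experiments, while round-to-nearest corresponds to the rounding ⌈·⌋ in the equations above.

```python
import torch

def quantize(x, c, bits, zero_centered=True, stochastic=False):
    """b-bit uniform quantizer Q(x) = s * round(clip(x, c) / s).

    zero_centered=True  : clip to [-c, c],  s = c / (2^(b-1) - 1)  (weights, gradients)
    zero_centered=False : clip to [0, c],   s = c / (2^b - 1)      (post-ReLU activations)
    stochastic=True     : unbiased stochastic rounding instead of round-to-nearest.
    """
    if zero_centered:
        s = c / (2 ** (bits - 1) - 1)
        x_n = torch.clamp(x, -c, c) / s
    else:
        s = c / (2 ** bits - 1)
        x_n = torch.clamp(x, 0.0, c) / s
    if stochastic:
        x_q = torch.floor(x_n + torch.rand_like(x_n))   # E[x_q] = x_n
    else:
        x_q = torch.round(x_n)
    return s * x_q

def quantize_gradients(g, gamma, bits, stochastic=True):
    """Layer-wise gradient quantizer with clipping value c_g = gamma * max|g|."""
    c_g = gamma * g.abs().max()
    return quantize(g, c_g, bits, zero_centered=True, stochastic=stochastic)
```

Gradients are quantized with c_g = γ g_max as above; for weights and activations the clipping value c would instead be a learned parameter.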
Previous methods set the clipping factor to 1, suggesting that all gradients are taken into account to estimate the quantization interval <cit.>, or adjust the factor to minimize the quantization error for entire gradients <cit.>. In contrast to these approaches, we propose to update the clipping factor adaptively to keep a small quantization error for large gradients. §.§ Empirical analysis Here we present an analysis on how the quantization error for gradients affects the quantization performance. We train ResNet-20 <cit.> on CIFAR-100 <cit.> using DSGC[Since the code for DSGC is not publicly available, we have reproduced it by ourselves.] <cit.> and our baseline in Sec. <ref> with different clipping factors (γ=0.4, 0.6, 0.8, 1.0). We use 4-bit FXP values for each weight, activation, and gradient. We define an average quantization error for entire gradients, normalized w.r.t. the absolute maximum gradient g_max as follows: E(G) = ∑_g∈G| g-Q(g)|/N(G)g_max, where N(G) counts the number of elements in the set G. Similar to Eq. (<ref>), we formulate the quantization error for large gradients as follows: E(G_L) = ∑_g∈G_L| g-Q(g)|/N(G_L)g_max. We define large gradients, denoted by G_L, as a set of gradients whose magnitude is larger than a certain threshold splitting the density of gradients into 1-α and α (Fig. <ref>). That is, α is the ratio between the numbers of large gradients G_L and entire gradients G, , α=N(G_L)/N(G), which is a hyperparameter in our framework. We show an analysis on the quantization performance w.r.t. the quantization error for entire and large gradients, respectively, in Fig. <ref>. It provides a comparison of DSGC with a baseline (γ=1) in terms of the quantization error and accuracy. We can see from Figs. <ref> and <ref> that the clipping factors of DSGC are kept to small values, and the quantization error for entire gradients is smaller than that of the baseline. This enlarges the quantization error for large gradients significantly compared to the baseline (Fig. <ref>). We can also see from Fig. <ref> the training loss of DSGC having large error for large gradients increases in the middle of the training, , from the 50k-th to 100k-th iterations. Since large gradients mainly affect the training process <cit.>, the quantization error for the large gradients deviates gradients significantly and causes unstable gradient flows, making the training unstable and subsequently degrading the quantization performance. We can conclude from Fig. <ref> that lowering the quantization error for large gradients is more important than that for entire gradients in the low-bit FXP training. To delve deeper into this observation, we show in Fig. <ref> quantization errors for large gradients on different layers. 1) We can see that fixing a clipping factor to the small value (, γ=0.4) brings the large quantization error for large gradients in 13, 15, 17th layers, similar to the observation from DSGC, and it shows worse quantization performance compared to other baselines (γ=0.6,0.8,1.0) providing smaller quantization errors for large gradients. This strengthens our motivation once more that lowering the quantization error for large gradients is a key factor to boost the performance in the FXP training. 2) We can see that a clipping factor lowering the quantization error for large gradients differs depending on the layer. For example, Figs. 
<ref> and <ref> show that fixing γ to 1 results in a larger quantization error for large gradients compared to others (γ=0.6, 0.8) in the 13th layer, while it shows a smaller error in the 15th layer. This is because distributions of gradients are different according to the layer <cit.>. 3) Even in the same layer, a clipping factor lowering the quantization error for large gradients changes during training. For example, in the 17th layer (Fig. <ref>), fixing γ to 0.6 leads to a larger quantization error for large gradients compared to other baselines in early iterations, while the error becomes smaller in later iterations. Our empirical analysis suggests that lowering the quantization error for large gradients is better in terms of stability and accuracy in the FXP training, compared to lowering the error for entire gradients, even if the number of large gradients is very small compared to that of entire gradients (Fig. <ref>). It also indicates that we would adjust clipping factors adaptively for different layers and update them continually during training, in order to maintain a small quantization error for large gradients (Fig. <ref>). §.§ Interval update algorithm We first derive an upper bound of the quantization error for large gradients (ULG), and obtain an optimal condition for the clipping factor γ lowering ULG. We then present our interval update algorithm that adjusts the clipping factor with a negligible computational overhead. §.§.§ ULG. We divide the large gradients into two parts, clip-in and clip-out gradients, denoted by G_in and G_out, respectively. Specifically, the clip-in and clip-out gradients represent large gradients located inside and outside the quantization interval, respectively, , G_in={g|| g|≤γ g_max, g∈ G_L} and G_out={g|| g| > γ g_max, g∈ G_L} (Fig. <ref>). The clip-in and clip-out gradients are hence influenced by the value of the clipping factor γ. If we raise the clipping factor, the numbers of clip-in and clip-out gradients, N(G_in, γ) and N(G_out, γ), increase and decrease, respectively. The quantization error for large gradients in Eq. (<ref>) can be represented as follows: E(G_L) = ∑_g∈G_in| g-Q(g)| + ∑_g∈G_out| g-Q(g)|/N(G_L)g_max. Finding the clipping factor γ that minimizes E(G_L) is intractable. We instead derive ULG, and find the condition for the clipping factor lowering ULG. To this end, we define upper bounds of the quantization error within clip-in and clip-out gradients, separately: 1) The quantization error is maximized at the transition point, when the gradient exists inside the quantization interval, , the clip-in gradient. In this case, the error quantifies the half of quantization step size, where the step size is 2γ g_max/(2^b - 2). The upper bound of quantization error for the clip-in gradient U_in can then be set to γ g_max/(2^b - 2). 2) For the clip-out gradient, the quantized value is mapped to the end of the quantization interval, , Q(g)=γ g_max or -γ g_max. We thus compute the upper bound of error for the clip-out gradient U_out as (1-γ) g_max. Using the upper bounds of the error for clip-in and clip-out gradients, U_in and U_out, we can derive ULG as follows (See the supplement for details): U(G_L) = U_inN(G_in, γ) + U_outN(G_out, γ)/N(G_L)g_max = (γ/2^b - 2R(G_in, γ) + (1-γ)R(G_out, γ))1/α, where R(G_in, γ) and R(G_out, γ) are the ratios of clip-in and clip-out gradients, respectively, w.r.t. all gradients, defined as R(G_in, γ)=N(G_in, γ)/N(G),   R(G_out, γ)=N(G_out, γ)/N(G), and α = R(G_in, γ) + R(G_out, γ). 
That is, α is the ratio between numbers of large gradients and entire gradients, , N(G_L)/N(G), which is a hyperparameter in our framework. Note that the union of clip-in and clip-out gradients, G_in and G_out, is equal to the set of large gradients, G_L (Fig. <ref>). In order to find an optimal condition for the clipping factor, we take the derivative of ULG the factor γ as follows (See supplement for details): dU(G_L)/dγ = (1/2^b - 2R(G_in, γ)-R(G_out, γ) + (1-γ-γ/2^b - 2)dR(G_out, γ)/dγ)1/α, Generally, the gradients follow a zero-centered distribution with a very long tail, but they are sparse around the tail <cit.>. Assuming that each side of the quantization interval exists around the tail, we can approximate that the change of the clip-out ratio R(G_out) w.r.t. the clipping factor γ is negligible (, dR(G_out, γ)/dγ≈ 0). Using the approximation, we represent Eq. (<ref>) as follows: dU(G_L)/dγ≈(1/2^b - 2R(G_in, γ)-R(G_out, γ))1/α. Accordingly, the optimal clipping factor γ^* should satisfy the following condition: 1/2^b - 2R(G_in, γ^*)-R(G_out, γ^*) = 0. The clipping factor γ^* satisfying the condition in Eq. (<ref>) can keep the small quantization error of large gradients for each layer and at every iteration. §.§.§ Updating clipping factors Using Eqs. (<ref>) and (<ref>), we can obtain the condition for optimal interval γ^* as follows: R(G_in, γ^*) = 2^b - 2/2^b - 1α,  R(G_out, γ^*) = 1/2^b - 1α. Manually searching the clipping factor that satisfies the condition in Eq. (<ref>) is computationally demanding. We instead explore the relation between R(G_out, γ) and the clipping factor γ. Note that R(G_out, γ) in Eq. (<ref>) decreases slightly as the clipping factor γ increases, and vice versa (Fig. <ref>). Based on this, we design an algorithm that encourages the clipping factor to increase when the clip-out ratio is larger than the condition in Eq. (<ref>), , R(G_out, γ) > α/(2^b - 1), and vice versa. Concretely, we design the update scheme as follows: γ_i = γ_i-1 + βsign(T(G_out, γ_i-1)), where sign(·) is a signum function and T(G_out, γ_i-1) = R(G_out, γ_i-1) - 1/2^b-1α, which adjusts the direction of updating the clipping factor γ. The scaling parameter β(>0) controls the scale of sign(T(G_out, γ_i-1)). If the clip-out ratio is larger than the condition, , R(G_out,γ_i-1) exceeds α/(2^b-1) corresponding to the condition in Eq. (<ref>), T(G_out, γ_i-1) is positive. The update algorithm thus raises the clipping factor of γ_i-1 in the (i-1)th iteration. Conversely, the algorithm reduces the clipping factor by β when R(G_out,γ_i-1) is below the condition in Eq. (<ref>). Accordingly, the clipping factor is adjusted adaptively to satisfy the condition in Eq. (<ref>), maintaining a small quantization error for large gradients. We provide algorithm table describing the process of updating the clipping factor in the supplementary material. § EXPERIMENTS §.§ Experimental details §.§.§ Image classification. We quantize weights, activations, and gradients for a family of ResNets <cit.> and MobileNetV2 <cit.> on CIFAR-100 <cit.> and ImageNet <cit.>. Following the works of <cit.>, we do not quantize the first and last layers, and use the stochastic rounding technique <cit.> for gradient quantization. We use the Adam optimizer <cit.> with a learning rate of 1e-5 for all networks to train clipping values for weights and activations, where they are initialized with the approach of <cit.>. Note that we do not learn the clipping value for gradient quantization. 
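Instead, each layer's clipping factor is adjusted by the sign-based update rule described above. The following is a minimal per-layer sketch of that rule; the clamp of γ to (0, 1] and the identification of the clip-out ratio with the fraction of gradients falling outside the interval are our simplifications (the latter is accurate when α is small), and the function name is ours.

```python
import torch

def update_clip_factor(g, gamma, alpha, bits, beta=1e-3):
    """One step of gamma_i = gamma_{i-1} + beta * sign(T), applied per layer and
    per iteration, with T = R(G_out, gamma) - alpha / (2^b - 1)."""
    g_abs = g.abs()
    g_max = g_abs.max()
    # Clip-out ratio: fraction of gradients outside [-gamma*g_max, gamma*g_max];
    # for small alpha these coincide with the clip-out part of the large gradients.
    r_out = (g_abs > gamma * g_max).float().mean().item()
    t = r_out - alpha / (2 ** bits - 1)      # deviation from the optimal condition
    gamma = gamma + beta * (1.0 if t > 0 else -1.0 if t < 0 else 0.0)
    return min(max(gamma, beta), 1.0)        # keep gamma within (0, 1]
```

During training, this update would be applied to each layer's full-precision gradient once per iteration, before the gradient is quantized with the quantizer sketched earlier.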
We train network weights from scratch with random initialization using the SGD optimizer, where the initial learning rates are set to 1e-1 and 5e-2 for ResNets and MobileNetV2, respectively. For ResNet-20, we train quantized networks for 160 epochs on CIFAR-100 with a batch size of 128, and a weight decay of 1e-4. We use a step learning rate schedule with a decay of 0.1 at epoch 80 and 120. For the ResNet-18, -34, and -50 architectures, quantized networks are trained on ImageNet for 100 epochs with a batch size of 256 and the weight decay of 1e-4. We adopt a step learning rate schedule with a decay of 0.1 at epoch 30, 60, and 90. For MobileNetV2, we train quantized networks for 150 epochs on ImageNet with a batch size of 256 and a weight decay of 4e-5. We use a cosine annealing technique <cit.> for learning rate decay. Following <cit.>, we do not quantize the depth-wise convolutional layers in MobileNetV2. We fix the scaling parameter β to 1e-3 for all experiments. We set the ratio α as a hyperparameter of τ divided by the total number of gradients in the network. We find τ by a grid search with ResNet-20 <cit.> on CIFAR-100 <cit.>, and fix it for all experiments [We provide an analysis on the hyperparmeters in the supplementary material.]. §.§.§ Object detection. We quantize Faster R-CNN <cit.> exploiting the ResNet-50 architecture as a backbone. We use the SGD optimizer with an initial learning rate of 1e-2, a weight decay of 1e-4, and a batch size of 16. We train the model for 90k iterations with a step learning rate scheduler on the COCO dataset <cit.>, where the learning rate is reduced by a factor of 0.1 at 60k and 80k iterations. §.§.§ Super-resolution. To demonstrate the generalizability of our method, we apply our method to image super-resolution. To this end, we quantize weights, activations, and gradients for EDSR <cit.> on the DIV2K dataset <cit.>, and train the model for 300 epochs using the Adam optimizer with a batch size of 16. The learning rate is initialized with 2e-4, and we decay the learning rate by a factor of 0.5 every 100 epochs. §.§ Results §.§.§ Image classification. We apply our quantization method to various CNN architectures, including a family of ResNets <cit.>, MobileNetV2 <cit.>. We compare our approach with state-of-the-art FXP training methods[For a fair comparison, we compare our approach with FXP training methods exploiting layer-wise quantization, and do not perform the comparisons with FLP training methods, sample-wise quantization, and channel-wise quantization.] <cit.> on ImageNet and CIFAR-100 in Tables <ref> and <ref>, respectively. All numbers except for the baselines, DSGC^†, and IQB^†[Since the code for IQB is not publicly available, we reproduce the piecewise FXP format, and apply it to quantize the gradients in our baseline.] are taken from the work of <cit.> including the results of full-precision models. Results for the transformer architecture can be found in the supplementary material. From these tables, we observe four things: 1) Our method outperforms other FXP training methods by a significant margin in terms of a top-1 accuracy, regardless of datasets, network architectures, and bit-widths. The accuracy of DSGC is slightly better than ours for the 8/8/8-bit setting only on the ResNet-50 architecture in Table <ref>. Nevertheless, ours shows a lower accuracy drop w.r.t the full-precision model. 
Note that the full-precision model in DSGC also shows a higher accuracy, possibly due to different training settings for, , the number of epochs and learning rate scheduling. 2) We can see that the accuracy drop of DSGC becomes severe as bit-widths decrease. A plausible reason is that reducing the bit-width increases the quantization error for entire gradients, and the quantization interval of DSGC becomes narrower in order for keeping a small error for entire gradients. It incurs a significant quantization error for large gradients, and the performance in turn degrades drastically. Compared to DSGC, our method provides better results consistently, confirming once more that lowering the quantization error for large gradients is important in the FXP training. 3) Our method shows better results compared to the state of the art, including DSGC and IQB, in particularly low-bit settings (, 6/6/6, 5/5/5, and 4/4/4-bit settings). For example, our method performs better than IQB <cit.> employing a piecewise FXP format for gradient quantization, when training ResNet-18 and -34 in 4/4/4 and 5/5/5-bit settings, and obtains the superior results over the baseline when training in 4/4/4 and 5/5/5-bit settings. This suggests that maintaining a small error for large gradients is effective to improve the quantization performance in the low-bit settings. 4) We can clearly observe that ours gives better results than the baselines with various architectures consistently, especially in the 4/4/4 and 5/5/5-bit settings. This indicates that maintaining a small quantization error for large gradients, regardless of the layers or training iterations, is significant in the FXP training. Analysis on updating intervals. We compare in Fig. <ref> our method and baselines with different clipping factors (γ=0.6,0.8,1.0), in terms of the quantization error for gradients. We train ResNet-20 with the 4/4/4-bit setting on CIFAR-100 <cit.>. We can see that our method brings a small quantization error for large gradients compared to other baselines, regardless of layers and iterations (Figs. <ref>, <ref>, <ref>). This suggests that adjusting the clipping factor according to the condition in Eq. (<ref>) is effective to maintaining a small error for large gradients. We also compare the quantization error of entire gradients for ours and the baselines (Figs. <ref>, <ref>, <ref>). We can observe that performance and the quantization error for entire gradients are less correlated to each other. For example, ours brings a large error for entire gradients compared to the baseline (γ=0.6) in the 15th and the 17th layers. It also shows a larger error than other baseline (γ=1.0) in the 5th layer. Nevertheless, our method outperforms the baselines significantly (Table <ref>, Fig. <ref>). This strengthens our motivation that lowering the error for large gradients, rather than entire gradients, plays a crucial role in enhancing the performance of the FXP training. We can see from Fig. <ref> that the clipping factors vary depending on layers and training iterations, since our update algorithm adjusts them according to the distribution of gradients. For example, if most gradients are concentrated near zero and large gradients are distributed broadly around the tail of the distribution, a small clipping factor is preferred. On the other hand, if a number of large gradients exist in the tail densely, a larger clipping factor would be used. We can also observe that the clipping factors are relatively small in later iterations. 
A reason is that gradients become sparse and they are around a zero value, as the training progresses, as observed in <cit.>. Moreover, the gradients in early layers are likely to be sparse compared to the ones in later layers <cit.>, and clipping factors for the early layers thus tend to be small values. Runtime analysis. We compare in Fig. <ref> the relative latencies for forward and backward passes. Specifically, we convert the data formats of weights, activations, and gradients to 8-bit and simulate the low-bit operations. We can observe that ours and the baseline, which does not use an interval update algorithm, accelerate the training process in both forward and backward passes compared to the FP models. We can also see that ours and baseline show almost the same latency, demonstrating that our interval update algorithm introduces marginal computations compared to overall convolutional operations of the network. §.§.§ Object detection. We compare our approach with DSGC <cit.> and the baseline (γ=1.0) on COCO <cit.> that provides over 330,000 images of 80 object categories. We apply ours and the baseline (γ=1.0) to the Faster R-CNN architecture <cit.> with 8/8/8 and 6/6/6-bit settings, and then report the performance in terms of mAP in Table <ref>. From the table, we can observe that our method shows better results compared to other methods, confirming the effectiveness of our approach once more. Although the full-precision model for DSGC shows a lower mAP than ours, the performance drop of our method w.r.t the full-precision model is lower, compared to DSGC. This verifies that lowering the quantization error for large gradients is more effective to alleviate performance degradation of low-bit FXP training for object detection, compared to that of entire gradients. We provide qualitative comparisons in the supplementary material. §.§.§ Image super-resolution. We apply our method to quantize gradients for EDSR <cit.> on image super-resolution, and demonstrate the generalization ability. To our knowledge, we are the first to quantize gradients for image super-resolution, making it difficult to compare the performance of ours and existing gradient quantization methods <cit.> on image super-resolution. We provide a quantitative comparison in Table <ref>, where we report the average PSNR for upsampled images with factors of 2, 3, and 4, on Set5 <cit.>. From this table, we can see that our method provides better results than the baseline regardless of bit-widths, demonstrating that our method is particularly effective in the low-bit gradient quantization. Note that EDSR is trained with the Adam optimizer, in contrast to the networks for image classification and object detection using SGD, suggesting that our method is robust to the type of optimizers. More results on various datasets and qualitative comparisons can be found in the supplementary material. § CONCLUSION We have shown an influence of the quantization error for gradients on a low-bit FXP training through experimental analysis, and found that minimizing the quantization error for large gradients contributes to boosting the performance significantly. Based on this, we have introduced the simple yet effective interval update algorithm adjusting the quantization interval adaptively to keep the small quantization error for large gradients. We have presented that our update algorithm achieves the state of the art on the low-bit training for various network architectures and bit-widths. 
We believe that our approach provides a significant advancement in low-bit FXP training. §.§ Acknowledgement. This research was supported by Samsung Research Funding & Incubation Center of Samsung Electronics under Project Number SRFC-IT2102-06.
http://arxiv.org/abs/2407.12092v1
20240716180012
Theory of phonon spectroscopy with the quantum twisting microscope
[ "Jiewen Xiao", "Erez Berg", "Leonid I. Glazman", "Francisco Guinea", "Shahal Ilani", "Felix von Oppen" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.supr-con" ]
§ ABSTRACT We develop a theory of probing phonon modes of van-der-Waals materials using the quantum twisting microscope. While elastic tunneling dominates the tunneling current at small twist angles, the momentum mismatch between the K points of tip and sample at large twist angles can only be bridged by inelastic scattering. This allows for probing phonon dispersions along certain lines in reciprocal space by measuring the tunneling current as a function of twist angle and bias voltage. We illustrate this modality of the quantum twisting microscope by developing a systematic theory for graphene-graphene junctions. We show that beyond phonon dispersions, the tunneling current also encodes the strength of electron-phonon couplings. Extracting the coupling strengths for individual phonon modes requires careful consideration of various inelastic tunneling processes. These processes are associated with the intralayer and interlayer electron-phonon couplings and appear at different orders in a perturbative calculation of the tunneling current. We find that the dominant process depends on the particular phonon mode under consideration. Our results inform the quest to understand the origin of superconductivity in twisted bilayer graphene and provide a case study for quantum-twisting-microscope investigations of collective modes. Theory of phonon spectroscopy with the quantum twisting microscope Felix von Oppen July 22, 2024 ================================================================== § INTRODUCTION Beyond conventional transport experiments, much information on van-der-Waals materials derives from local probes such as scanning tunneling microscopy (STM) as well as scanning single-electron-transistor and scanning SQUID measurements. A particularly well-studied system is twisted bilayer graphene with its multitude of electronic phases when the twist angle is close to the magic angle <cit.>. Scanning tunneling microscopy has been instrumental in elucidating the effects of correlations on their flat-band dispersion <cit.>, of superconductivity <cit.>, and of the correlated insulating states <cit.>. Compressibility measurements using a scanning single-electron transistor revealed a cascade of flavor-polarized phases <cit.> as well as excess entropy associated with the formation of local moments <cit.>. It was also instrumental in uncovering anomalous Chern-insulator phases <cit.>. Scanning-SQUID measurements revealed the formation of Chern mosaics <cit.>. The quantum twisting microscope (QTM) <cit.> is a powerful new instrument complementing previously existing local probes. Rather than measuring the local tunneling current at the atomic scale as in scanning tunneling microscopy, it relies on coherent tunneling across a twistable finite-area junction formed at the interface between van-der-Waals systems placed on a scanning tip with a flat pyramidal top and on a substrate, see Fig. <ref>. Due to the finite contact area, tunneling conserves crystal momentum modulo reciprocal lattice vectors of the tip and sample layers. Except at special twist angles, umklapp processes involving larger reciprocal lattice vectors will typically be suppressed. The twist imposes a relative rotation of the dispersions of tip and sample in momentum space. Moreover, the bias voltage introduces a relative shift of the dispersions in energy. Measuring the tunneling current as a function of bias voltage and twist angle will then provide direct signatures of momentum-resolved dispersions. 
This has been used to explore the electronic dispersion of graphene layers using graphene-graphene junctions as well as the flat-band dispersions of twisted bilayer graphene using junctions of graphene and twisted bilayer graphene <cit.>. Theoretical work has explored the use of the QTM to probe two-dimensional superconductors <cit.>, spin liquids <cit.>, as well as spin-ordered states close to metal-insulator transitions <cit.>. Beyond electronic dispersions, the QTM is also exquisitely suited to access momentum-resolved dispersions of the collective excitations of van-der-Waals systems as shown by a very recent experiment <cit.>. Whenever the electronic dispersions of tip and sample with their relative twist do not intersect, the momentum mismatch can be bridged by emission of a collective-excitation quantum. Here, we illustrate this modality of the QTM by developing a comprehensive theory of phonon spectroscopy. Our considerations focus on graphene-graphene junctions, but the approach readily generalizes to other junctions and other collective excitations. Along the way, we include analytical results for elastic tunneling between tip and sample for context and comparison. Due to the semimetallic nature of graphene, its Fermi circles are typically small for relevant gate-induced densities. Thus, inelastic tunneling processes enabled by phonon emission dominate except at the smallest twist angles (i.e., for θ≳ 5^∘ in current experiments <cit.>), and the phonon wavevector is approximately equal to the distance between Dirac points of tip and sample. This implies that the relevant phonon wavevector can be systematically varied by changing the twist angle, allowing for direct measurements of the phonon dispersions. In addition to the phonon dispersion, the measurements also encode the strength of electron-phonon coupling as a function of wavevector. The phonon spectrum and electron-phonon coupling are important inputs in developing a theory of superconductivity <cit.> and the linear temperature dependence of the resistivity <cit.> in magic-angle twisted bilayer graphene. In both cases, there has been substantial debate whether the phenomenon has a conventional origin in the electron-phonon coupling or an underlying exotic mechanism due to electron-electron correlations. Measurements of graphene-graphene junctions in a QTM may be highly relevant in this context as van-der-Waals coupling tends to lock the interlayer distance between tip and sample layers of a QTM to the interlayer distance of a twisted bilayer. At the same time, QTM measurements cannot access inelastic phonon processes at very small twist angles, where they will be difficult to differentiate from the elastic-tunneling background. Gleaning information on phonon dispersions and electron-phonon couplings in magic-angle twisted bilayer graphene will thus require extrapolation from larger twist angles. This paper is organized as follows. In Sec. <ref>, we begin by summarizing the results of the detailed calculations in the subsequent sections. This summary section makes our results accessible without the need to read the more technical sections in detail. Section <ref> collects background material and fixes notation. We first introduce the electronic properties of graphene layers (Sec. <ref>), review tunneling between twisted layers in Secs. <ref> and <ref>, and discuss the electrostatics of QTM junctions <ref>. 
Section <ref> focuses on the elastic tunneling current, giving analytical results for the threshold behaviors of the differential conductance. Section <ref> discusses the phonon modes and the electron-phonon coupling, including both intralayer and interlayer coupling. The theory of phonon spectroscopy is finally addressed in Sec. <ref>. Following the general expressions for the inelastic tunneling current in Sec. <ref>, we discuss the contributions of the inter- and intralayer electron-phonon coupling in Secs. <ref> and <ref>, respectively. We conclude in Sec. <ref>. § OVERVIEW OF RESULTS §.§ Tunneling We illustrate phonon spectroscopy by considering tunneling between two graphene layers. Twisting the tip and sample layers with respect to each other leads to a relative rotation of their Brillouin zones (Fig. <ref>). In particular, this induces a relative displacement of their K-points by equal-length vectors 𝐪_j, where j=0,1,2 enumerates the three equivalent K-points within the Brillouin zone [Fig. <ref>(b)]. Current flow between tip and sample is dominated by elastic tunneling as long as the Fermi circles of tip and sample intersect <cit.> [Fig. <ref>(c)]. This is a consequence of the fact that tunneling between tip and sample conserves crystal momentum modulo reciprocal lattice vectors of the graphene layers. At larger twist angles, the momentum mismatch between the electrons in tip and sample requires inelastic processes involving phonon emission, with the phonon wavevector approximately equal to one of the 𝐪_j [Fig. <ref>(d)]. (Here, we assume that temperature is sufficiently low that phonon absorption can be neglected.) Bias voltages, at which eV_b equals the energy ħω_r,𝐪_j of a phonon mode r are associated with threshold features in the current-voltage characteristic. Tracking these inelastic-tunneling features as a function of twist angle θ (and hence 𝐪_j) allows for mapping out the phonon dispersion along certain lines in momentum space. (Umklapp processes lead to additional sharp elastic scattering peaks at larger, commensurate twist angles due to overlap of Fermi surfaces in higher Brillouin zones <cit.>.) Electron-phonon coupling emerges from several mechanisms <cit.>. Modifications of the hybridization of carbon orbitals associated with phonon-induced changes in the bond lengths contribute as illustrated in Fig. <ref>. The intralayer coupling ℋ_intra originates in changes of the hopping amplitudes within the layer, corresponding to electron-phonon coupling within the individual graphene layers. In twisted bilayers, phonons also affect the amplitude of interlayer tunneling, giving rise to the interlayer electron-phonon coupling ℋ_inter. In addition, longitudinal acoustic phonons lead to a local expansion or contraction of the lattice, which shifts the chemical potential <cit.>, referred to as defomation potential. We find that the coupling mechanism dominating the tunneling current differs between phonon modes, so that it is important to account for the various couplings to understand phonon signatures in QTM measurements. Twisted graphene layers are described by the Hamiltonian ℋ=ℋ_0 + ℋ _T + ℋ _intra + ℋ _inter. Here, ℋ_0 describes the two uncoupled graphene layers, including their phonon modes, and ℋ_T accounts for the (purely electronic) interlayer tunneling. 
Retaining terms to first order in the interlayer tunneling, we can then expand the 𝒯-matrix for electron scattering between tip and sample layers as 𝒯 = ℋ_T + ℋ_inter + ℋ_T𝒢_0ℋ_intra+ℋ_intra𝒢_0ℋ_T +… The first term ℋ_T on the right hand side gives rise to the elastic tunneling processes at small twist angles. Inelastic tunneling involving the emission of a phonon due to the interlayer electron-phonon coupling is described by the second term. Both of these processes can be described in the lowest order in a Fermi-golden-rule calculation of the tunneling current. The remaining two terms describe higher-order inelastic processes involving both electron tunneling and intralayer electron-phonon coupling, with the Green function 𝒢_0=[E-ℋ_0]^-1 of the uncoupled layers accounting for the energy denominators of the virtual intermediate states. We find that the inelastic tunneling current can be dominated by one electron-phonon coupling or the other, despite their different orders in perturbation theory. §.§ Electrostatics and characteristic voltages Due to the small quantum capacitance of the graphene layers, a bias voltage applied between tip and sample will predominantly modify the chemical potentials. This is accompanied by a smaller relative shift eϕ in energy of the Dirac points of tip and sample due to the electrostatic potential difference ϕ. The ratio of the shifts in electrostatic and chemical potentials is of order q_TFd, where q_TF is the Thomas-Fermi screening wavevector of graphene and d the distance between tip and sample layers [see Eq. (<ref>) below and the discussion around it for more details]. In our analytical calculations, we focus on the limit in which q_TFd≪ 1 (small quantum capacitance) and assume overall charge neutrality. Then, tip and sample have opposite chemical potentials ±μ, so that the bias voltage (i.e., the difference in electrochemical potential) is eV_b=2μ+eϕ with eϕ≪ 2μ. At small bias voltages, the electrostatic shift ϕ can be neglected and the Dirac points of tip and sample are aligned in energy, but offset in momentum by 𝐪_j. The offset Dirac cones intersect at energies that are larger in magnitude than ħ v_D q_0/2, where v_D is the Dirac velocity. As the chemical potentials of tip and sample are equal to ± eV_b/2, this leads to a characteristic voltage of eV_b^*=ħ v_Dq_0 for elastic scattering [Fig. <ref>(a) and (b)], with current flow due to elastic tunneling only possible for bias voltages V_b>V_b^*. At larger bias voltages, the electrostatic potential leads to an appreciable relative shift of the Dirac points in energy. When this shift eϕ becomes of order ħ v_D q_j, there is approximate nesting of the Dirac dispersions of tip and sample [Fig. <ref>(c)]. Nesting defines a second characteristic voltage V_b^∗∗ through the condition eϕ(V_b^∗∗)=ħ v_D q_0. Note that there is a sharp drop in the elastic tunneling current as the bias increases past this characteristic voltage. The dispersions depicted in Fig. <ref>(a) and (b) neglect interlayer tunneling, which opens gaps at the crossing points of the Dirac dispersions of the two layers. The magnitude of the resulting gaps is given by the interlayer tunneling strength w. Our perturbative approach requires that these gaps be small compared to eV_b^*. This is satisfied for twist angles θ > w/v_D|𝐊|. (Here, 𝐊 denotes the location of the K-point as measured from the Γ-point.) This is equivalent to the condition that the twist angle be larger than the magic angle of twisted bilayer graphene. 
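Before turning to the individual tunneling processes, it is useful to attach numbers to these scales. The short sketch below is our own and uses representative graphene parameters that are not taken from this paper (bond length a = 1.42 Å, Dirac velocity v_D ≈ 10^6 m/s, interlayer coupling w ≈ 110 meV); it evaluates the momentum mismatch q_0 = 2|K| sin(θ/2), the first characteristic voltage eV_b^* = ħ v_D q_0, and the twist-angle scale w/(ħ v_D |K|) below which the perturbative treatment of Eq. (<ref>) breaks down.

import numpy as np

hbar, e = 1.0546e-34, 1.602e-19          # SI units
a   = 1.42e-10                           # carbon-carbon bond length (assumed), m
v_D = 1.0e6                              # Dirac velocity (assumed), m/s
w   = 0.110 * e                          # interlayer coupling (assumed), J
K   = 4 * np.pi / (3 * np.sqrt(3) * a)   # |K| measured from the Gamma point

def q0(theta):
    # momentum mismatch between the K points of tip and sample
    return 2 * K * np.sin(theta / 2)

def V_b_star(theta):
    # first characteristic voltage e V_b^* = hbar v_D q_0, in volts
    return hbar * v_D * q0(theta) / e

for deg in (2, 5, 10, 20):
    th = np.deg2rad(deg)
    print(f"theta = {deg:2d} deg: q0 = {q0(th) * 1e-9:5.2f} 1/nm, V_b^* = {V_b_star(th):4.2f} V")

# twist angles below roughly w/(hbar v_D |K|) -- of the order of the magic angle --
# lie outside the perturbative regime assumed in the text
print("theta_min ~", np.rad2deg(w / (hbar * v_D * K)), "deg")

With these representative numbers, eV_b^* is a few tenths of a volt to about 1 V for twist angles of a few degrees, so typical phonon energies of tens to a couple of hundred meV indeed fall below the elastic threshold, consistent with the minimal twist angles quoted below.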
For twist angles satisfying Eq. (<ref>), inelastic tunneling processes are relevant provided that the phonon energy ħω_r,𝐪_j is smaller than eV_b^*. In this case, the tunneling current at bias voltages V_b<V_b^* is entirely due to inelastic processes, allowing for measurements of the phonon dispersions and the electron-phonon coupling. For optical phonons with frequencies ∼ 200meV, this gives a minimal twist angle of 1-2^∘. For acoustic phonons, the condition is less stringent due to their smaller energy. There exists an energy window as long as the Dirac velocity is large compared to the mode velocity of the acoustic phonons, which is always the case away from the magic angle. §.§ Elastic tunneling Coherent tunneling across an extended contact (area Ω = L_x L_y) is central to the QTM <cit.>. To bring out the importance of coherence, we consider a reference problem of Ω/λ_F^2 parallel incoherent local tunneling contacts. The Fermi wavelength λ_F=2π/κ_F defines the characteristic size of the local contacts, so that the coarse-grained local tunneling of a contact at 𝐑 can be approximated as wλ_F^2δ(𝐫-𝐑). Here, w denotes the underlying tunneling amplitude of the extended contact with units of energy. This gives an incoherent tunneling current of order (2π e/ħ)(wλ_F^2)^2ν(μ)[N_fν(μ)eV_b] per tunnel contact, where ν(μ) = μ/(2πħ ^2 v_D^2) is the density of states per flavor at the chemical potential. This expression for the current is composed of the Fermi-golden-rule rate for tunneling of an incident electron and the number of incident electrons within the voltage window accounting for the spin and valley degeneracy N_f=4. With the linear graphene dispersion E=ħ v_D κ and the relation μ=ħ v_D κ_F = eV_b/2 between voltage and chemical potential (for sufficiently small voltages, so that the electrostatic potential ϕ can be neglected), this gives a differential-conductance scale of order G_incoh= e^2/hN_f w^2Ω/ħ^2v_D^2 for an array of Ω/λ_F^2 parallel contacts. We find that G_incoh provides a convenient scale for expressing our results for the tunneling current in the QTM. Elastic tunneling sets in at voltages V_b>V_b^*. The differential conductance in the vicinity of V_b^* is given by (see Sec. <ref>) dI/dV_b= 3√(2) G_incoh√(V_b^*/V_b-V_b^*)θ(V_b -V_b^* ), exhibiting a square-root divergence in the bias voltage. As eV_b^∗ = ħ v_D q_0, measuring the threshold voltage as a function of twist angle (and hence 𝐪_0) can be viewed as direct spectroscopy of the Dirac dispersion of the graphene layers. Similarly, the current near the threshold voltage V_b^∗∗ for nesting is (see Sec. <ref>) I = 3 /2 G_incoh(V_b^**) ^2 /√(V_b^*|ϕ(V_b^**)-ϕ(V_b)|)θ(V_b^** -V_b). This implies a large negative differential conductance at voltage V_b^∗∗. The divergence at V_b^∗∗ is a consequence of the assumptions of strictly linear dispersion and momentum-conserving tunneling. The tunneling current originates from momenta, where the Dirac cones of tip and sample touch, so that the contributions to the current diverge concurrently at all energies within the bias window [Fig. <ref>(c,d)]. The divergence of the current at V_b^** will thus be cut off by nonlinear corrections to the dispersion relation as well as spatial inhomogeneities, which lift strict momentum conservation. §.§ Inelastic tunneling When the Fermi circles of tip and sample no longer intersect at larger twist angles, momentum-conserving tunneling at low bias voltages is enabled by emission (or absorption) of phonons. 
Neglecting the small electronic momenta relative to the K-points, the relevant phonons have wavevectors equal to the wavevectors 𝐪_j connecting the K-points of the tip and the sample [Fig. <ref>(b)]. Consequently, the inelastic tunneling channels for the various phonon modes r open beyond the threshold voltages eV_b=ħω_r,𝐪_j, where ω_r,𝐪_j denotes the phonon frequency of mode r at wavevector 𝐪_j. This appears in the tunneling current as steps in the differential conductance and consequently as peaks in d^2I/dV_b^2. For out-of-plane (flexural) phonons, the electron-phonon coupling is dominated by the interlayer electron-phonon coupling ℋ_inter. While ℋ_inter depends linearly on the phonon displacements, the intralayer coupling ℋ_intra to out-of-plane phonons has a weaker quadratic dependence on the mode displacements. The dominant coupling due to ℋ_inter emerges directly from the change in the interlayer tunneling amplitude due to the out-of-plane component of the atomic displacements and is thus of order ℓ_ZPM(∂ w/∂ d). Here, d is the interlayer distance and [Notice that the actual real-space amplitude of the zero-point fluctuations for a particular phonon mode is smaller than ℓ_ZPM by a factor 1/√(N).] ℓ_ZPM=(ħ/Mω_r,𝐪_0)^1/2 (with M the mass per unit cell of a single layer) the zero-point-motion amplitude of the relevant out-of-plane phonon mode. We then find [Eq. (<ref>)] d^2I/dV_b^2∼ G_incoh (κ_F a)^2 ( ℓ_ZPM∂ln w/∂ d)^2δ(V_b- ħω_r,𝐪_0/e), where the factor (κ_F a)^2 accounts for the size of the graphene Fermi circles. (a is the carbon-carbon bond length of graphene.) There are two out-of-plane phonon modes r, which we denote by ZO^' and ZO, respectively. The mode ZO^' is an optical mode in the out-of-plane direction, i.e., the atomic displacements are antisymmetric between the two layers. At the same time, it is acoustic in nature within the plane, i.e., the long-wavelength displacements of the two sublattices are symmetric within each layer. Even when mechanical coupling between the tip and sample layers is negligible, the mode frequencies saturate to a constant at small wavevectors due to mechanical coupling between the graphene layers and their substrates. At larger wavevectors, the mode frequencies are quadratic in the phonon wavevector akin to the flexural modes of free-standing graphene membranes. The ZO mode is also optical in nature within the plane and thus characterized by larger mode frequencies for all wavevectors. The interlayer electron-phonon coupling to in-plane phonons has a different origin. A relative displacement 𝐮 of the two layers does not modify the magnitude of the interlayer tunneling, but changes its phase by exp(i𝐊·𝐮) <cit.>. If 𝐮 originates from a phonon displacement, the phase becomes time dependent, akin to a time-dependent vector potential. Thus, ℋ_inter can be viewed as being due to a synthetic electric field <cit.>. Importantly, the coupling is maximal for mode displacements 𝐮, which are parallel to the vector 𝐊 of the K-point. At small twist angles, the phonon wavevector 𝐪_0 is approximately perpendicular to 𝐊, so that the coupling is predominantly to transverse phonon modes. This is illustrated in Fig. <ref>. Unlike the intralayer electron-phonon coupling, this coupling does not go to zero in the long-wavelength limit. Transverse phonons, both acoustic (TA) and optical (TO), will thus involve an interlayer electron-phonon coupling of order w 𝐊·𝐮∼ w (ℓ_ZPM/a)cosθ from expanding the phase factor to linear order. 
Combining this with the factor accounting for the size of the graphene Fermi circles gives .d^2I/dV_b^2|_inter,T∼ G_incoh (κ_F ℓ_ZPM)^2 cos^2θδ(V_b- ħω_r,𝐪_0/e) for the contribution of the interlayer electron-phonon coupling for transverse acoustic and optical phonons. The expression for longitudinal acoustic (LA) and optical (LO) phonons is similar, differing only in its twist-angle dependence, .d^2I/dV_b^2|_inter, L∼ G_incoh (κ_F ℓ_ZPM)^2 sin^2θδ(V_b- ħω_r,𝐪_0/e). The full result for both longitudinal and transverse phonons is given in Eq. (<ref>). The intralayer electron-phonon coupling ℋ_intra gives contributions of the same order for acoustic phonons, albeit with a different twist-angle dependence. We can estimate the relevant electron-phonon coupling by noting that ℋ_intra originates in the dependence of t^∥ on the bond length. Thus, the intralayer electron-phonon coupling is of order ∂ t^∥/∂ aℓ_ZPM(q_0 a) for acoustic phonons. Here, the last factor accounts for the fact that the relative displacements of neighboring atoms is suppressed in the long-wavelength limit. Combining this with the fact that the tunneling Hamiltonian is of order w and the energy denominators are of order ħ v_D q_0, we find that the contribution of ℋ_intra to the 𝒯-matrix is of order w (∂ t^∥/∂ a)ℓ_ZPM(q_0 a)/ħ v_D q_0. With the estimates ∂ t^∥/∂ a ∼ t^∥/a and ħ v_D/a ∼ t^∥, this becomes of order w(ℓ_ZPM/a), which is indeed of the same order as the contribution of ℋ_inter. At the same time, ℋ_intra has only weak twist-angle dependence and is of the same order for longitudinal and transverse phonons due to the triad of bond vectors. This yields .d^2I/dV_b^2|_intra∼ G_incoh (κ_F ℓ_ZPM)^2 δ(V_b- ħω_r,𝐪_0/e). for acoustic phonons. For optical phonons, the intralayer electron-phonon coupling ℋ_intra does not involve the suppression factor q_0 a, so that .d^2I/dV_b^2|_intra∼ G_incoh(κ_F ℓ_ZPM/q_0 a)^2 δ(V_b- ħω_r,𝐪_0/e). At long wavelengths, this dominates over the contribution of ℋ_inter in Eqs. (<ref>) and (<ref>). The full result for the contribution of ℋ_intra for both acoustic and optical phonons is given in Eq. (<ref>). We summarize these results for the inelastic contributions to the tunneling current in Table <ref>. We conclude this section by addressing the limit of small twist angles θ (but sufficiently large that elastic scattering can be neglected). For acoustic modes, q_0 ∼θ and ω_r,𝐪_0∼ q_0, which implies ℓ_ZPM∼ 1/√(q_0). For overall charge neutrality, the threshold condition eV_b = ħω_q_0 implies that κ_F ∼ q_0, while κ_F approaches a nonzero constant for small q_0 away from overall charge neutrality. We thus find that for acoustic modes, d^2I/dV_b^2 diverges as 1/q_0 at small twist angles away from charge neutrality. For optical modes in the limit q_0→ 0, κ_F and ℓ_ZPM approach nonzero constants, so that one finds a 1/q^2_0 divergence both at and away from charge neutrality. The divergence is cut off by the condition for the twist angle in Eq. (<ref>), which corresponds to q_0≳ w/v_D. A further limitation specific to optical phonons is set by Eq. (<ref>). §.§ Scattering picture The current can be estimated using a scattering approach, which gives insight beyond the Fermi-golden-rule calculations presented in subsequent sections. We consider a scattering geometry, in which electrons in the tip impinge on the tip-sample junction in the y-direction, with channels defined by wavevectors k_x (Fig. <ref>). 
When normalizing the incoming and outgoing states to unit flux in the y-direction, the tunneling current is I = e/h∑_k_xk_x'∫ dE dE' T_k_xk_x'(E,E')f_μ(E)[1-f_-μ(E')], where we assume V_b>0 and zero temperature. Here, T_k_xk_x'(E,E') is the probability density that an electron in channel k_x of the tip impinging on the junction at energy E is scattered into channel k_x' of the sample at energy E'. For elastic tunneling, we can approximate T_k_x k_x^'(E,E') ∼w^2/ħ^2|v_y v_y^'||∫_Ω d𝐫e^i𝐤·𝐫-i𝐤'·𝐫/L_x|^2 δ(E-E') to lowest order in the tunneling amplitude (Born approximation). Here, we defined the mode velocities v_y=(1/ħ)(∂ E/∂ k_y) and v_y^' = (1/ħ) (∂ E'/∂ k_y^') of tip and sample, respectively. Focusing on order-of-magnitude estimates for small twist angles, we have suppressed the sublattice structure of the graphene wave functions and consider only momentum-conserving tunneling at the K-point, i.e., we neglect umklapp processes. Evaluating the integral and taking the limits of large L_x and L_y gives T_k_x k_x^'(E,E') ∼w^2/ħ^2|v_y v_y^'|δ_k_x,k_x^' L_y δ(k_y-k_y^') δ(E-E'), which makes the momentum-conserving nature of tunneling explicit. Here, k_y=k_y(k_x,E) and k_y^'=k_y^'(k_x^',E') are determined by the electron dispersions in tip and sample, respectively. Inserting T_k_x k_x^'(E,E') into the expression for the current and performing the energy integrals, we find I ∼∫_Δ k_y=0 dk_x ew^2Ω/ħ^2 |v_y -v_y^'|θ(k_max-|k_x|). For convenience, we temporarily measure k_x from the line connecting the Dirac points of tip and sample. The integral is over the line defined by Δ k_y = k_y - k_y^' = 0 and we used that ∂Δ k_y/∂ E = 1/ħ v_y - 1/ħ v_y^'. For voltages close to V_b^∗, the dispersions of tip and sample are illustrated in Fig. <ref>(b). The condition Δ k_y = 0 enforces k_y=k_y^' = 0, so that v_y^' = -v_y ≃ v_D. Moreover, one reads off ħ v_D k_max = [(eV_b/2)^2-(eV_b^∗/2)^2]^1/2. Inserting these relations into Eq. (<ref>), we recover the parametric dependences in Eq. (<ref>). For bias voltages near V_b^**, the dispersions of tip and sample are illustrated in Fig. <ref>(d). Approximate nesting of the Fermi circles of tip and sample implies that v_y is close to v_y^', leading to a strongly enhanced current. Nesting occurs for any energy E within the bias window, with the current dominated by energies |E| > eϕ (V_b^∗∗). For these energies, one readily estimates |v_y-v_y^'|/v_D∼(eϕ(V_b^∗∗)/|E|)(ħ v_D k_x/E)^2, accounting for the fact that v_y-v_y^' is nonzero due to eϕ(V_b^∗∗) and increases from zero symmetrically about k_x=0. Almost nesting crossing points of the dispersions of tip and sample exist only for ϕ<ϕ(V_b^∗∗) (and thus V_b<V_b^∗∗). At energy E, these crossing points satisfy ϕ - ϕ(V_b^∗∗)/ϕ(V_b^∗∗)∼(ħ v_D k_x/E)^2, so that k_max∼eV_b/ħ v_D(|ϕ - ϕ(V_b^∗∗)|/ϕ(V_b^∗∗))^1/2. These relations combined with Eq. (<ref>) reproduce the parametric dependences in Eq. (<ref>). The scattering approach is readily extended to inelastic current flow between tip and sample. Here, we consider the contribution of the interlayer electron-phonon coupling. The contribution of the intralayer electron-phonon coupling can be obtained by replacing the relevant electron-phonon coupling strength along the lines sketched in Sec. <ref> above. The plane-wave factor 1/√(Ω)e^i𝐐·𝐫 of the phonon introduces the phonon wavevector 𝐐 into the momentum-conservation factors. 
Moreover, we account for the phonon energy ħω_𝐐 in the energy balance and use that in addition to the interlayer tunneling amplitude w, the characteristic strength of the interlayer electron-phonon coupling is controlled by the ratio ℓ_ZPM/a of the atomic zero-point motion ℓ_ZPM and the graphene bond length a (see Sec. <ref> for a detailed discussion). We can then estimate T_k_x,k_x^'(E,E') ∼1/N∑_𝐐w^2/ħ^2v_y v_y^'(ℓ_ZPM/a)^2 ×δ_k_x,k_x^'+Q_x L_yδ(k_y-k_y^'-Q_y)δ(E-E'-ħω_𝐐), with N=Ω/Ω_uc being the number of unit cells of area Ω_uc. For sufficiently large twist angle, q_0≫κ_F, we approximate the phonon frequency ω_𝐐≃ω_𝐪_0 [Fig. <ref>(d)]. We can then perform the sum over 𝐐 to obtain T_k_x,k_x^'(E,E') ∼L_y^2/Nw^2/ħ^2v_y v_y^'(ℓ_ZPM/a)^2 δ(E-E'-ħω_𝐪_0). For bias voltages close to eV_b=ħω_𝐪_0, we have d/dV_b f_μ(E)[1-f_-μ(E')] δ(E-E'-ħω_𝐪_0) ≃ eθ(eV_b -ħω_𝐪_0)δ(E-μ)δ(E'+μ). The δ-functions constrain the energies of the initial and final states to the Fermi circles of tip and sample. Thus, we can approximate |v_y v_y^'| ∼ v_D^2 and the sums over k_x and k_x^' each contribute a factor of the order of the number of channels, 2κ_F L_x. Collecting factors into Eq. (<ref>), this reproduces Eqs. (<ref>) and (<ref>) up to the twist-angle dependence, which we dropped in the estimate of the electron-phonon coupling. § TWISTED GRAPHENE LAYERS: ELECTRONIC STATES For completeness and for fixing notation, we briefly review some elements of the electronic properties of graphene <cit.> and of tunneling between twisted layers <cit.>. We also include a discussion of the electrostatics of graphene-graphene junctions in the QTM <cit.>. §.§ Graphene Each of the two graphene layers is described by a tight-binding Hamiltonian H = -t^∥∑_𝐑∑_j=1^3 {|𝐑⟩⟨𝐑+𝐞_j| + |𝐑+𝐞_j⟩⟨𝐑|}, where the 𝐞_j denote the three bond vectors 𝐞_1 = a([ 0; 1 ]) ; 𝐞_2/3=a ([ ∓√(3)/2; -1/2 ]) emanating from an A site to the three nearest-neighbor B sites (Fig. <ref>). The sum is over the sites 𝐑 of sublattice A. In the Bloch basis defined through |𝐤,α⟩ = 1/√(N)∑_𝐑 e^i𝐤·(𝐑+ τ_α)|𝐑+ τ_α⟩ (α=A,B denotes the sublattice, τ_A=0, and τ_B=𝐞_1), the Hamiltonian takes the form H_𝐤 = ( [ 0 -t^∥∑_j e^i𝐤·𝐞_j; -t^∥∑_j e^-i𝐤·𝐞_j 0 ]). The band structure E_𝐤,± = ± t^∥√(A_𝐤^2 + B_𝐤^2) has valence (-) and conduction (+) bands with the lattice-periodic part |𝐤,±⟩=(u_A,𝐤,±,u_B,𝐤,±)^T of the Bloch functions given by u_A,𝐤,± = 1/√(2) ; u_B,𝐤,± = ∓1/√(2) e^-iγ_𝐤. Here, we define A_𝐤=∑_j cos(𝐤·𝐞_j) and B_𝐤=∑_j sin(𝐤·𝐞_j) as well as the phase γ_𝐤=arctan (B_𝐤/A_𝐤). For graphene lattice vectors 𝐚_1/2=a[ ±√(3)/2 , 3/2], the reciprocal lattice is spanned by the vectors (see Fig. <ref>) 𝐐_1/2=4π/3a([ √(3)/2; ± 1/2 ]). We occasionally find it useful to define 𝐐_0=0. We measure wavevectors 𝐤 from the Γ-point. For states close to the K-point at 𝐊=4π/3a(1/√(3),0) or the K'-point at -𝐊, the electron dispersion simplifies to the Dirac form E_κ,± = ±ħ v_D|κ| (with v_D=3t^∥ a/2ħ), where κ is measured from the K or K' point, respectively. The phase γ_𝐤 becomes γ_𝐤 = π - arctan(κ_y/κ_x) (K-point) and γ_𝐤 = arctan(κ_y/κ_x) (K'-point). §.§ Tunneling between twisted layers We describe tunnneling between the twisted graphene layers on tip (layer 1, unprimed) and sample (layer 2, primed) following Bistrizer and MacDonald <cit.>. Starting with the Bernal-stacked configuration [see Fig. <ref>(a)], the sites of the A sublattice of the twisted layers (denoted 𝐑 and 𝐑', respectively) are related by 𝐑' = D(θ) (𝐑-𝐞_1)+𝐝. 
Here, D(θ) is a rotation matrix involving the twist angle θ and 𝐝 a relative shift of the rotated lattices. Note that with our conventions, the vector 𝐊' denotes the location of the K-point of the primed (sample) layer. Tunneling between the layers is described by the matrix elements ⟨𝐑+τ_α | H_T | 𝐑'+τ_β^'⟩=t^⊥(𝐑+τ_α - 𝐑'-τ_β^'), of the (first-quantized) tunneling Hamiltonian H_T, where t^⊥(𝐫) is assumed to be only a function of the distance of the sites projected into the graphene plane. We consider the tunneling matrix elements T^αβ_𝐤𝐩' = ⟨𝐤α|H_T|𝐩'β⟩ between states |𝐤α⟩ with momentum 𝐤 and sublattice α in the tip [Eq. (<ref>)] and states |𝐩'β⟩ = 1/√(N)∑_𝐑' e^i𝐩'· (𝐑'+τ_β^')|𝐑'+τ_β^'⟩ with momentum 𝐩' and sublattice β in the sample. The vector τ_β^' is rotated relative to τ_β by the twist angle θ. Inserting definitions and expanding t^⊥(𝐫)=1/Ω∑_𝐪t^⊥_𝐪e^i𝐪·𝐫 into a Fourier series (note that the sum over 𝐪 is not restricted to the Brillouin zone), one finds T^αβ_𝐤𝐩' = 1/Ω∑_𝐪1/N∑_𝐑∑_𝐑' t^⊥_𝐪 × e^i𝐪·(𝐑+τ_α - 𝐑'-τ_β^') e^-i𝐤· (𝐑+τ_α) e^i𝐩'· (𝐑'+τ_β^'). We transform the sums over 𝐑 and 𝐑' into sums over reciprocal lattice vectors 𝐆 and 𝐆' of the two layers using ∑_𝐑 e^i𝐪·𝐑= N ∑_𝐆δ_𝐪,𝐆 and ∑_𝐑' e^-i𝐪·𝐑'= N ∑_𝐆 e^-i𝐆'·(-𝐞_1^' + 𝐝)δ_𝐪,𝐆'. Unlike the sum over 𝐑, the sum over 𝐑' generally does not include a term with 𝐑=0, resulting in the phase factor on the right hand side. This yields <cit.> T^αβ_𝐤𝐩' = ∑_𝐆_1∑_𝐆_2t^⊥_𝐤+𝐆_1/Ω_uc × e^i 𝐆_1·τ_α-i𝐆_2· (τ_β -𝐞_1) -i𝐆_2^'·𝐝δ_𝐤+𝐆_1,𝐩'+𝐆_2^'. With momenta measured relative to the Γ-point, translation invariance enforces that tunneling conserve crystal momentum modulo reciprocal lattice vectors of the two layers. Unlike for elastic tunneling, inelastic tunneling at larger twist angles involves virtual states far from the Fermi energy. However, matrix elements of H_T always involve one momentum close to the K-points of one of the layers. This can be used to simplify the tunneling matrix element since t_𝐪 decays rapidly on the scale of the Brillouin zone <cit.>. Assuming for definiteness that 𝐤 is close to the K-point, one retains only those terms in the sum over 𝐆_1, in which the momenta in t^⊥_𝐤+𝐆_1 have the smallest magnitude, i.e., the three contributions with 𝐆_1 =0, 𝐆_1 = -𝐐_1, and 𝐆_1 = -𝐐_2 corresponding to vectors 𝐤-𝐐_j located near the three equivalent K-points in the hexagonal Brillouin zone. Using that momentum conservation effectively restricts 𝐆_1 = 𝐆_2 for relevant twist angles, this gives T^αβ_𝐤𝐩'≃ w ∑_j=0^2 e^-i 𝐐_j·τ_α + i𝐐_j· (τ_β -𝐞_1) + i 𝐐_j^'·𝐝δ_𝐤-𝐐_j,𝐩'-𝐐_j^' where w=t_𝐊^⊥/Ω_uc. Explicit evaluation of the exponentials gives T^αβ_𝐤𝐩'≃∑_j=0^2 T_j^αβ e^i 𝐐_j^'·𝐝δ_𝐤-𝐐_j,𝐩'-𝐐_j^' with the matrices T_0 = w ([ 1 1; 1 1 ]) , T_1/2 = w ([ e^∓ iζ 1; e^± iζ e^∓ iζ ]) in sublattice space. Here, we used the abbreviation ζ = 2π/3. We note that the same expression holds when 𝐩' is close to the K-point of the sample, but 𝐤 is possibly further from the tip's K-point. §.§ Matrix elements Calculations of the tunneling current by Fermi's golden rule require the matrix elements of the tunneling Hamiltonian between eigenstates of the upper and lower layers, T^ss'_𝐤𝐩' =⟨𝐤,s|H_T|𝐩',s'⟩. Here, s,s'=± enumerate the valence and conduction bands of the top and bottom graphene layers, respectively. According to Eq. (<ref>), we can write T^ss'_𝐤𝐩'≃∑_j=0^2 T^ss'_𝐤𝐩';j e^i 𝐐_j^'·𝐝δ_𝐤-𝐐_j,𝐩'-𝐐_j^'. with T^ss'_𝐤𝐩';j =⟨𝐤,s|T_j|𝐩',s'⟩. Equation (<ref>) gives ⟨ u|T_j|u'⟩ = w e^-ijζ(u_A + e^ijζu_B)^* (u_A^' + e^ijζ u_B^'), where we used e^3iζ=1. 
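As a quick numerical cross-check (our own sketch, with w set to unity), one can verify that the sublattice matrices T_j quoted above reproduce the factorized form of the matrix element, ⟨u|T_j|u'⟩ = w e^{-ijζ}(u_A + e^{ijζ}u_B)^*(u'_A + e^{ijζ}u'_B), for arbitrary spinors:

import numpy as np

w, zeta = 1.0, 2 * np.pi / 3                      # tunneling strength in arbitrary units

def T_matrix(j):
    # sublattice matrices T_0, T_1, T_2 quoted above
    p = np.exp(-1j * j * zeta)
    return w * np.array([[p, 1.0], [np.conj(p), p]])

rng = np.random.default_rng(0)
for j in range(3):
    u = rng.normal(size=2) + 1j * rng.normal(size=2)     # arbitrary tip spinor (u_A, u_B)
    v = rng.normal(size=2) + 1j * rng.normal(size=2)     # arbitrary sample spinor (u'_A, u'_B)
    direct = np.conj(u) @ T_matrix(j) @ v
    factorized = (w * np.exp(-1j * j * zeta)
                  * np.conj(u[0] + np.exp(1j * j * zeta) * u[1])
                  * (v[0] + np.exp(1j * j * zeta) * v[1]))
    assert np.allclose(direct, factorized)
print("sublattice matrices and factorized matrix elements agree for j = 0, 1, 2")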
Interestingly the matrix elements factorize into independent contributions of the two spinors. Using the explicit Bloch spinors in Eq. (<ref>), this becomes T^ss'_𝐤𝐩';j = w/2 e^-ijζ[1 - s e^-i(γ_𝐤-jζ)]^* [1 - s' e^-i(γ^'_𝐩'-jζ)]. We note that the phases γ_𝐤 and γ^'_𝐩' are defined in terms of the bond vectors of tip and sample, respectively. We can also express the matrix elements for momenta 𝐤 and 𝐩' close to the Dirac points. Writing 𝐤=𝐊+κ and 𝐩'=𝐊'+π', specifying to the K-point, and using Eq. (<ref>), we find T^ss'_κπ';j = we^-ijζ/2[1 + s e^i(θ_κ-θ/2+jζ)] [1 + s' e^i(θ_π'+θ/2+jζ)]. Note that here, we have defined the angles θ_κ and θ_π' in a global coordinate system, relative to which the tip/sample layers are rotated by ±θ/2. §.§ Electrostatics We review the electrostatics of the QTM contact <cit.>. We assume a configuration (see Fig. <ref>), in which a bias voltage V_b is applied between tip and sample. The electron densities n_T and n_S of tip and sample are further controlled by gate voltages V_TG=V_G+V_D and V_BG=V_G-V_D applied to the top and bottom gates. Here, we defined the symmetrized and antisymmetrized gate voltages V_G = 1/2(V_TG + V_BG) and V_D= V_TG-V_BG. The gate electrodes are assumed to have a high density of states (large quantum capacitance), so that their chemical potentials are independent of the applied voltages. We set μ_TG≃μ_BG≃ 0. Consequently, the gate voltages control their electric potentials, e(V_G+V_D) = eϕ_TG ; e(V_G-V_D) = eϕ_BG. Similarly, the bias voltage V_b controls the electrochemical potentials (encompassing the chemical potentials μ and the electrostatic potentials ϕ) of tip (T) and sample (S), ±eV_b/2 = μ_T/S +eϕ_T/S The chemical potentials are related to the electron densities of tip and sample through n_T/S = N_f μ^2_T/S/4πħ^2 v_D^2 sgnμ_T/S. Here, N_f=4 is the number of flavors and we assume that the graphene dispersions can be approximated as linear for relevant densities. Electrostatics relates the potentials and electron densities through e(ϕ_TG-ϕ_T) = -e^2 d_g/ϵ_hBNϵ_0n_TG e(ϕ_T-ϕ_S) = e^2 d/ϵϵ_0(n_S + n_BG) = -e^2 d/ϵϵ_0(n_T + n_TG) e(ϕ_S-ϕ_BG) = e^2 d_g/ϵ_hBNϵ_0n_BG. Here, d_g denotes the thickness of the gate dielectrics (dielectric constant ϵ_hBN) and d the distance of tip and sample (dielectric constant ϵ). Finally, the relation n_T + n_S = -(n_TG + n_BG). is imposed by overall charge neutrality. We first solve these equations for the charges on tip and sample. The gate voltage V_G controls the overall charge density in tip and sample, 2eV_G = e^2 d_g/ϵ_hBNϵ_0(n_T + n_S) - (μ_T + μ_S). Assuming that the screening lengths are small compared to d_g in gate electrodes as well as tip and sample, we can further neglect the chemical potential shifts, so that 2eV_G = e^2 d_g/ϵ_hBNϵ_0(n_T + n_S). Correspondingly, the difference in electron densities on tip and sample is controlled by the bias voltage V_b in conjunction with the displacement field V_D, ϵ_hBNϵ_0/e^2 d_g2eV_D - 2ϵϵ_0/e^2 d eV_b = 2ϵϵ_0/e^2 d(μ_T-μ_S) + (n_T-n_S). Here, we assumed that d≪ d_g. Equation (<ref>) can now be used to extract n_T/S as well as μ_T/S. This in turn yields ϕ_T/S with Eq. (<ref>) . We note that the setup in Fig. <ref> admits independent control of the chemical potentials μ_T and μ_S as well as the potential difference ϕ_T-ϕ_S. In our analytical calculations, we choose V_G=V_D=0 and a small quantum capacitance. Then, tip and sample are overall charge neutral, n_T+n_S=0, and have opposite chemical potentials, μ_T = -μ_S = μ. 
For a linear graphene dispersion, the density of states per flavor at the (bias-dependent) Fermi energy is ν(μ) = μ/(2πħ^2 v_D^2) for both layers and the difference ϕ = ϕ_T - ϕ_S in electric potentials is eϕ = e^2d/ϵϵ_0n_T = (q_TFd) μ≪μ, where q_TF = (e^2/2ϵϵ_0) N_fν(μ) is the Thomas-Fermi wavevector. Thus, eV_b = 2μ + eϕ≃ 2μ. At the same time, eϕ gives a relative shift of the Dirac points in tip and sample. This leads to the bias regimes sketched in Fig. <ref>. The electrostatics of the contact region (d≪ d_g) differs from that far from the contact (d≫ d_g), where the gate charges directly control the electron densities of tip and sample, n_T≃ -n_TG and n_S≃ -n_BG. For certain parameters, this difference in electrostatics induces pn-junctions in the sample, which enclose the contact region. This leads to Fabry-Perot-type resonances within the sample, which affect the measured tunneling currents. We do not consider this experimental issue in the following, as it can be avoided in phonon spectroscopy by a judicious choice of parameters. § ELASTIC TUNNELING We begin our discussion of the tunneling conductance of QTM junctions by considering elastic tunneling. For small twist angles, the Dirac cones of tip and sample layers intersect at small bias voltages and for the same valley, as illustrated in Fig. <ref>(c). In practice, one limits the strong tunnel coupling in this limit by separating tip and sample by a few atomic layers of a transition metal dichalcogenide (e.g., WSe_2) <cit.>. Complementing the scattering approach sketched in Sec. <ref>, we evaluate the current from tip to sample using Fermi's golden rule, I = 2π e N_f/ħ∑_s,s'∑_𝐤∑_𝐩' |⟨𝐤,s|H_T|𝐩',s'⟩|^2 ×δ(E_𝐤,s^(T) + eϕ_T - E_𝐩',s'^(S)-eϕ_S ) ×[f_μ_T(E_𝐤,s^(T)) -f_μ_S(E_𝐩',s'^(S))]. For overall charge neutrality and small quantum capacitance (see Sec. <ref>), the tunneling current becomes I = 2π e N_f/ħ∑_s,s'∑_𝐤∑_𝐩' |⟨𝐤,s|H_T|𝐩',s'⟩|^2 ×δ(E_𝐤,s^(T) + eϕ - E_𝐩',s'^(S) ) ×[f_eV_b/2(E_𝐤,s^(T)) -f_-eV_b/2(E_𝐩',s'^(S))]. In describing elastic scattering at small twist angles, it is advantageous to measure momenta from the K-points of tip and sample. With 𝐤=𝐊+κ and 𝐩'=𝐊'+π' as well as Eq. (<ref>) for the tunneling matrix elements, we find I = 2π e N_f/ħ∑_s,s'∑_κ∑_π'∑_j |T_κ,π';j^ss'|^2 δ_π'-κ,𝐪_j ×δ(E_κ,s + eϕ - E_π',s') ×[f_eV_b/2(E_κ,s) -f_-eV_b/2(E_π',s')]. We dropped the layer superscripts on the dispersions, which are identical provided that trigonal warping can be neglected. One expects structure in the differential conductance dI/dV_b at two characteristic voltages <cit.>, see Fig. <ref>. As the bias voltage increases, the tunneling current onsets at eV_b^*=ħ v_D q_j = 2ħ v_D|𝐊|sinθ/2, which depends linearly on small twist angles. At a larger bias voltage [see Eq. (<ref>) and Fig, <ref>] V_b^** = 2 V_b^*/q_TFd≫ V_b^*, the potential difference ϕ leads to nesting of the Dirac cones of tip and sample. This occurs when eϕ (V_b^**)=ħ v_D q_j = 2ħ v_D|𝐊|sinθ/2. The linear dependence of V_b^* on (small) θ implies that V_b^** has a leading square-root dependence. Retaining the electric potential in the relation eV_b=2μ + eϕ yields a subleading linear term. §.§ Voltages of order of the first characteristic voltage We first consider voltages of order V_b^*, such that the electrostatic potential difference ϕ is negligible. The energy δ-function imposes s=s', so that I = 2π e N_f /ħ∑_j∑_s∑_κ |T_κ,κ+𝐪_j;j^ss|^2 δ(E_κ,s - E_κ+𝐪_j,s) ×[f_eV_b/2(E_κ,s) -f_-eV_b/2(E_κ+𝐪_j,s)]. 
It is convenient to rewrite this as I = 2π e N_f /ħ∑_j∑_κ∑_s |T_κ,κ+𝐪_j;j^ss|^2 × |E_κ,s + E_κ+𝐪_j,s| δ(E^2_κ,s - E^2_κ+𝐪_j,s) ×[f_eV_b/2(E_κ,s) -f_-eV_b/2(E_κ+𝐪_j,s)]. We pass to the differential conductance at zero temperature. The derivative of the first Fermi function places E_κ,s and thus also E_κ+𝐪_j,s at the chemical potential μ = eV_b/2 with s=+. Corresponding results hold for the derivative of the second Fermi function, which yields d I/d V_b = 2π e^3 N_f V_b/ħ∑_j∑_κ∑_s |T_κ,κ+𝐪_j;j^ss|^2 ×δ(E^2_κ,+ - E^2_κ+𝐪_j,+) δ(E_κ,+-eV_b/2) . The first δ-function enforcing energy conservation has the argument E^2_κ,+ - E^2_κ+𝐪_j,+ = ħ^2v_D^2 q_j (2κcosθ_j + q_j), where θ_j = ∡ (κ,𝐪_j). Thus, the δ-function imposes θ_j≃π for bias voltages near the threshold, which implies θ_κ + jζ = -π/2 and θ_κ+𝐪_j + jζ = π/2. Evaluating the matrix element in Eq. (<ref>) for this case gives |T_κ,κ+𝐪_j;j^ss|^2 = w^2 for small twist angles, which is independent of j and s. Thus, we find d I/d V_b≃12π e^3 N_f w^2 V_b/ħ Ων(eV_b/2) ×1/ħ^2 v_D^2 q_0∫dθ_0/2πδ(2κ_F cosθ_0 + q_0). Performing the angular integral yields the result given in Eq. (<ref>) of Sec. <ref>. §.§ Voltages of order of the second characteristic voltage In the vicinity of V_b^**, we need to retain the electrostatic potential in Eq. (<ref>). We write the potential as ϕ = ϕ(V_b^**) + δϕ and evaluate the current for small δϕ. For small quantum capacitance, q_TFd≪ 1, the chemical potential μ=eV_b/2 is much larger than ϕ, so that we retain only the contributions of regions I and III in Fig. <ref>(c), where tunneling satisfies s=s'. Regions I and III give identical contributions by particle-hole symmetry. Moreover, the sum over j gives a factor of three in view of C_3 symmetry. Thus, we have I = 12π e N_f/ħ∑_κ|T_κ,κ+𝐪_0;0^++|^2δ(E_κ,+ + eϕ - E_κ+𝐪_0,+) ×[f_eV_b/2(E_κ,+) -f_-eV_b/2(E_κ+𝐪_0,+)]. The Fermi-function factor is nonzero and equal to unity as long as 0<ħ v_Dκ<eV_b/2. For small δϕ, we can approximate E_κ,+ + eϕ - E_κ+𝐪_0,+≃ eδϕ + ħ v_D κ q_0(1-cosθ_0)/κ + q_0, We thus have θ_0 ≪ 1. Moreover, we observe that θ_κ≃θ_κ+𝐪_0≃π/2 in region I. According to Eq. (<ref>), we can now approximate the matrix element as |T_κ,κ-𝐪_0;0^++|^2≃ w^2 for small twist angles. Thus, we find I = 3 e N_f w^2 Ω/πħ∫_0^μ/(ħ v_D) dκκ ×∫ dθ_0 δ(eδϕ + ħ v_Dκ q_0/2(κ + q_0)θ_0^2) . We note that the integral over κ is dominated by the upper limit, so that we can approximate κ+q_0≃κ in the argument of the δ-function. Evaluating the remaining integrals yields the result in Eq. (<ref>) of Sec. <ref>. § PHONONS AND ELECTRON-PHONON COUPLING §.§ Mode expansion We consider the phonon modes of tip or sample layer, neglecting mechanical coupling between the graphene layers. We expand the atomic displacements 𝐮(𝐑+τ_α) of each of the layers, including both in-plane and out-of-plane components, into phonon modes (annihilation operator b_r,𝐐) enumerated by their momenta 𝐐 and mode index r, 𝐮(𝐑+τ_α,t) = 1/√(N)∑_𝐐∑_r ϵ_r,𝐐^α1/√(2M ω_r,𝐐) × e^i𝐐· (𝐑+τ_α)(b_r,𝐐 e^-iω_r,𝐐t + b^†_r,-𝐐 e^iω_r,𝐐t). Here, M is the mass of the unit cell and the momentum sum is restricted to the Brillouin zone. The polarization vectors ϵ_r,𝐐^α denote the mode displacement of sublattice α, which we normalize according to ∑_α M_α (ϵ^α_r,𝐪)^∗·ϵ^α_r',𝐪 = M δ_r,r' with the M_α denoting the mass of the atom on sublattice α (i.e., M_α = M/2 for graphene layers). 
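To make the normalization convention explicit, the following sketch (our own) constructs long-wavelength in-plane acoustic and optical polarization vectors for a layer with two equal sublattice masses M_A = M_B = M/2, checks that ∑_α M_α (ε^α)^* · ε^α = M, and evaluates the zero-point scale ℓ_ZPM = (ħ/Mω)^{1/2} that enters the electron-phonon couplings, for an optical-phonon energy of roughly 200 meV as quoted above.

import numpy as np

hbar, e, u_amu = 1.0546e-34, 1.602e-19, 1.6605e-27
M = 2 * 12.011 * u_amu                  # mass of a graphene unit cell (two carbon atoms)
M_A = M_B = M / 2

e_hat = np.array([1.0, 0.0])            # in-plane unit vector
modes = {"acoustic (in phase)": (e_hat, e_hat),
         "optical (out of phase)": (e_hat, -e_hat)}

for name, (eps_A, eps_B) in modes.items():
    norm = M_A * eps_A @ eps_A + M_B * eps_B @ eps_B
    print(f"{name}: sum_alpha M_alpha |eps^alpha|^2 / M = {norm / M:.1f}")

# zero-point amplitude for an optical phonon of ~200 meV
omega = 0.200 * e / hbar
print(f"l_ZPM = {np.sqrt(hbar / (M * omega)) * 1e12:.1f} pm")

Both branches satisfy the convention; as noted above, the real-space zero-point displacement of a single mode is smaller than ℓ_ZPM by a factor 1/√N.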
We keep the mode expansion of the atomic displacements general throughout this paper to facilitate generalization from the case of graphene to other types of layers on tip and sample. For graphene, the polarization vectors can be chosen real due to inversion symmetry. The polarization vectors satisfy the general relation [ϵ_r,𝐐^α]^* = ϵ_r,-𝐐^α, while inversion symmetry implies ϵ_r,𝐐^α = ϵ_r,-𝐐^α. At long wavelengths, the phonon modes include longitudinal and transverse acoustic modes, longitudinal and transverse optical modes, as well as a flexural mode. The quadratic dispersion of the flexural mode is cut off at a nonzero frequency at long wavelengths due to the coupling between the graphene layers and the substrates, even in the absence of mechanical coupling between tip and sample. §.§ Electron-phonon coupling Electron-phonon coupling arises from changes in the bond lengths of the graphene layers associated with the atomic displacements as well as from the deformation potential. Changes in the intralayer bond lengths result in the electron-phonon coupling of individual graphene layers. At long wavelengths, this intralayer coupling can be incorporated in the Dirac description as a gauge field. The atomic displacements also modify the interlayer tunneling discussed in Sec. <ref>, leading to interlayer electron-phonon coupling. The electron-phonon coupling originating from the deformation potential gives a separate contribution to the intralayer coupling. §.§.§ Intralayer electron-phonon coupling We consider the intralayer electron-phonon due to changes in the bond lengths for one of the layers. The atomic displacements modify the electronic tight-binding Hamiltonian of the graphene layer by H_intra = - ∑_𝐑∑_i=1^3 δ t^∥_𝐑,𝐑+𝐞_i|𝐑⟩⟨𝐑+ 𝐞_i| + h.c. Assuming that the hopping amplitude t^∥ depends only on the interatomic distance, one finds δ t^∥_𝐑,𝐑+𝐞_i≃βê_i· [𝐮(𝐑 + 𝐞_i)-𝐮(𝐑)] to linear order in the atomic displacements 𝐮. Here, ê_j denotes the unit vector in the direction of 𝐞_j and β=∂ t^∥/∂ a the derivative of the hopping amplitude with respect to the bond length. Notice that the intralayer coupling is limited to in-plane phonons. Coupling to flexural phonons appears only in quadratic order in the atomic displacements and can be neglected, since there is coupling to flexural modes already at linear order in the interlayer electron-phonon coupling. The matrix elements of the electron-phonon coupling can be obtained by inserting the mode expansion in Eq. (<ref>) into H_intra|𝐤,A⟩. This yields H_intra|𝐤,A⟩ = -1/N∑_𝐑∑_j ∑_𝐐∑_r β/√(2M ω_r,𝐐)ê_j· [ϵ_r,𝐐^B e^i𝐐·𝐞_j - ϵ_r,𝐐^A](b_r,𝐐 e^-iω_r,𝐐t + b^†_r,-𝐐 e^iω_r,𝐐t) e^i(𝐤+𝐐)·𝐑|𝐑+𝐞_j⟩. Evaluating the sum over 𝐑 yields H_intra|𝐤,A⟩ = -1/√(N)∑_j ∑_𝐐∑_r β/√(2M ω_r,𝐐)ê_j· [ϵ_r,𝐐^B - ϵ_r,𝐐^A e^-i𝐐·𝐞_j](b_r,𝐐 e^-iω_r,𝐐t + b^†_r,-𝐐 e^iω_r,𝐐t) e^-i𝐤·𝐞_j|𝐤+𝐐,B⟩ with |𝐤+𝐐,B⟩ interpreted as the state, for which 𝐤+𝐐 is folded back into the Brillouin zone. Here, we have used the identity 1/√(N)∑_𝐑 e^i𝐩·𝐑|𝐑+𝐞_j⟩ = e^-i 𝐩·𝐞_j|𝐩,B⟩, which can be checked by direct calculation. Similarly, one finds H_intra|𝐤,B⟩ = - 1/√(N)∑_j ∑_𝐐∑_r β/√(2M ω_r,𝐐)ê_j· [ ϵ_r,𝐐^B e^i𝐐·𝐞_j -ϵ_r,𝐐^A ](b_r,𝐐 e^-iω_r,𝐐t + b^†_r,-𝐐 e^iω_r,𝐐t) e^i𝐤·𝐞_j | 𝐤+𝐐,A ⟩. As a result, the intralayer electron-phonon interaction takes the form H_intra = ∑_𝐐∑_r (b_r,-𝐐 e^-iω_r,-𝐐t + b^†_r,𝐐 e^iω_r,𝐐t) ∑_𝐤{|𝐤-𝐐,B⟩ M_𝐤-𝐐,B;𝐤;A^r ⟨𝐤,A| + |𝐤-𝐐,A⟩ [M_𝐤,B;𝐤-𝐐;A^r]^* ⟨𝐤,B|} with the electron-phonon matrix element M_𝐤-𝐐,B;𝐤,A^r = 1/√(N)∑_j β/√(2M ω_r,𝐐) ×ê_j·[ ϵ_r,𝐐^A e^-i(𝐤-𝐐)·𝐞_j - ϵ_r,𝐐^B e^-i𝐤·𝐞_j]. 
For acoustic phonons, the term in square brackets vanishes linearly in 𝐐, while the denominator vanishes only as |𝐐|^1/2. Thus, the intralayer electron-phonon coupling vanishes in the long-wavelength limit. For optical phonons, the coupling approaches a constant in the long-wavelength limit. §.§.§ Interlayer electron-phonon coupling The interlayer electron-phonon coupling arises from modifications in the interlayer distances between atoms in the two layers. Importantly, this contribution to the electron-phonon coupling cannot be obtained starting with the continuum model of twisted bilayer graphene, even at long phonon wavelengths. The reason is that as we will see, the dominant contribution to the coupling arises from phonon-induced changes of phases of the tunneling matrix element, which depend on the large momenta of the K-points as measured from the Γ-point. The interlayer tunneling amplitude is a function of the in-plane and out-of-plane distances of the atoms in the two layers. Including the modification of the distances by the atomic displacements, we have t^⊥(|𝐑+τ_α - 𝐑'-τ_β^'+𝐮_∥-𝐮_∥^'|,d+u_⊥-u_⊥^') = 1/Ω∑_𝐪 t^⊥_𝐪(d+u_⊥-u_⊥^') e^i𝐪·(𝐑+τ_α - 𝐑'-τ_β^'+𝐮_∥-𝐮_∥^'). Here, we used the shorthands 𝐮 = 𝐮(𝐑+τ_α) and 𝐮^' = 𝐮^'(𝐑^' +τ_α^') and denote the equilibrium interlayer distance by d. Expanding to linear order in the displacements yields t^⊥(|𝐑+τ_α - 𝐑'-τ_β^'+𝐮_∥-𝐮_∥^'|,d+u_⊥-u_⊥^') ≃1/Ω∑_𝐪 t^⊥_𝐪(d) e^i𝐪· (𝐑+τ_α - 𝐑'-τ_β^'){ 1 + i𝐪· (𝐮_∥-𝐮_∥^') + ∂ln t^⊥_𝐪(d)/∂ d(u_⊥-u_⊥^') } . The first term in the curly brackets is just the interlayer tunneling discussed in Sec. <ref> <cit.>. The contributions of the second and third terms, collectively denoted δ t^⊥, encapsulate the electron-phonon coupling H_inter = ∑_𝐑∑_𝐑'∑_α,β|𝐑+ τ_α⟩δ t^⊥(|𝐑 + τ_α -𝐑'-τ_β^' +𝐮_∥-𝐮_∥^'|,d+u_⊥-u_⊥^') ⟨𝐑'+τ_β^'| + h.c. to in-plane (𝐮_∥) and out-of-plane (u_⊥) phonons. The derivation of matrix elements ⟨𝐤,α | H_inter | 𝐩',β⟩ closely follows the discussion of interlayer tunneling in Sec. <ref>. Using the mode expansion in Eq. (<ref>) of the atomic displacements, one finds the following rule for shifting the electronic momenta in the expressions for the tunneling matrix element: Scattering from phonons in the tip (wavevector 𝐐) can be accounted for by shifting the outgoing momentum according as 𝐤→𝐤-𝐐 and leaves the incoming momentum 𝐩' invariant. Scattering from phonons in the sample (wavevector 𝐐') can be accounted for by shifting the incoming momentum 𝐩'→𝐩'+𝐐' and leaves the outgoing momentum 𝐤 invariant. This yields ⟨𝐤,α | H_inter | 𝐩',β⟩ = ∑_𝐐∑_r ⟨𝐤,α|H_inter|𝐩',β;r,𝐐⟩(b_r,𝐐 e^-iω_r,𝐐t + b^†_r,-𝐐 e^iω_r,𝐐t) - (𝐐→𝐐') , where [cp. Eq. (<ref>)] ⟨𝐤,α|H_inter|𝐩',β;r,𝐐⟩ = 1/√(N)∑_𝐆_1∑_𝐆_2 e^i 𝐆_1·τ_α-i𝐆_2· (τ_β -𝐞_1) -i𝐆_2^'·𝐝 δ_𝐤-𝐐 + 𝐆_1,𝐩'+𝐆_2^' ×1/√(2M ω_r,𝐐){t^⊥_𝐩'+𝐆_2^'/Ω_uc i(𝐩'+𝐆_2^')·ϵ^α_r,𝐐 + 1/Ω_uc∂ t^⊥_𝐩'+𝐆_2^'(d)/∂ dẑ·ϵ^α_r,𝐐} and ⟨𝐤,α|H_inter|𝐩',β;r,𝐐'⟩ = - 1/√(N)∑_𝐆_1∑_𝐆_2 e^i 𝐆_1·τ_α-i𝐆_2· (τ_β -𝐞_1) -i𝐆_2^'·𝐝 δ_𝐤 + 𝐆_1,𝐩'+𝐐' + 𝐆_2^' ×1/√(2M ω_r,𝐐'){t^⊥_𝐤+𝐆_1/Ω_uc i(𝐤+𝐆_1)·ϵ^β_r,𝐐' + 1/Ω_uc∂ t^⊥_𝐤+𝐆_1(d)/∂ dẑ·ϵ^β_r,𝐐'} . The first terms in the curly brackets in Eqs. (<ref>) and (<ref>) describe coupling to in-plane phonons, while the second terms describe coupling to flexural phonons. The contribution of H_inter to the inelastic tunneling current can be accounted for in first-order perturbation theory. Consequently, the electronic momenta 𝐤 and 𝐩^' are close to the K-points of tip and sample. In this case, we can further simplify the matrix elements. As discussed in Sec. 
<ref>, the dominant terms are then given by 𝐆_1=0, 𝐆_1 = -𝐐_1, and 𝐆_1=-𝐐_2 with 𝐆_1=𝐆_2. Moreover, except for the smallest twist angles, we can neglect the distance of 𝐤 and 𝐩^' from the respective K-points relative to the distance q_j between the K-points of tip and sample. With these approximations, the Kronecker-δ imposing momentum conservation in Eq. (<ref>) simplifies as δ_𝐤-𝐐 + 𝐆_1,𝐩'+𝐆_2^'→δ_𝐊-𝐐 - 𝐐_j,𝐊'-𝐐_j^' = δ_𝐐,𝐪_𝐣, so that the phonon wavevector equals a connection vector between the Dirac points of tip and sample [and analogously for Eq. (<ref>)]. Thus, we find ⟨𝐤,α|H_inter|𝐩',β;r,𝐐⟩ = 1/√(N)∑_j=0^2 T_j^αβ e^ i𝐐_j^'·𝐝 δ_𝐐,𝐪_j1/√(2M ω_r,𝐐){ i(𝐊'-𝐐_j^')·ϵ^α_r,𝐐 + w̃/wẑ·ϵ^α_r,𝐐}, ⟨𝐤,α|H_inter|𝐩',β;r,𝐐'⟩ = - 1/√(N)∑_j=0^2 T_j^αβ e^i𝐐_j^'·𝐝 δ_𝐐', 𝐪_j1/√(2M ω_r,𝐐'){ i(𝐊-𝐐_j)·ϵ^β_r,𝐐' + w̃/wẑ·ϵ^β_r,𝐐'} . Here, we defined w̃=∂ w/∂ d. (Note that w̃ and w have different dimensions.) Several comments are in order concerning these results in the limit of small twist angles. (i) The vectors 𝐪_j and hence the phonon wavevector 𝐐 are approximately perpendicular to 𝐊-𝐐_j as well as 𝐊^' -𝐐_j^'. Thus, as a consequence of the scalar product, the coupling to in-plane phonons is predominantly to transverse phonon modes for the phonon vectors probed in QTM experiments. (ii) The coupling to transverse acoustic modes diverges as |𝐐|^-1/2 at small phonon wavevectors 𝐐. (iii) The coupling to the transverse acoustic modes is effectively to layer-antisymmetric phonons, also known as the phason mode. As mentioned in the beginning of this section, the relevant momenta entering Eqs. (<ref>) and (<ref>) are the large momenta of the K-points rather than the phonon wavevector. It is for this reason that even in the long-wavelength limit, this coupling cannot be obtained by adding phonon displacements to the continuum model of twisted bilayer graphene. Instead, one has to follow the derivation of the continuum model after taking the phonon displacements into account. The underlying reason is that the interlayer coupling arises from phase factors associated with the interlayer tunneling. We use this observation in App. <ref> to show that by means of a gauge transformation, the interlayer coupling can be brought into a form, which is analogous to the intralayer coupling. As it appears in this section, the contribution of the interlayer coupling to the inelastic tunneling current can be accounted for in a first-order golden rule calculation. In the transformed form, it must be treated in second order. Remarkably, in the transformed form, the intra- and interlayer couplings are related by the replacement ∂ t^∥/∂ a↔ i t_∥𝐊. This shows that one expects both contributions to the inelastic tunneling current to have corresponding parametric dependences, as born out by the explicit calculations, see Table <ref>. §.§.§ Deformation-potential coupling For long-wavelength phonons, the deformation potential V(𝐫) = -D ∇·𝐮 gives an additional contribution to the electron-phonon coupling of longitudinal acoustic phonons. This can be compared with the intralayer electron-phonon coupling in Eqs. (<ref>) and (<ref>) due to changes in the bond length. In contrast to the deformation potential, the latter is of comparable magnitude for transverse and longitudinal acoustic phonons. Apart from this difference, the magnitudes of the deformation-potential and the gauge coupling are related by the replacement D ↔∂ t^∥/∂ln a. 
We can use this correspondence to obtain the additional contribution of the deformation potential to the inelastic tunneling current from the result for the intralayer gauge coupling. We also note that the deformation coupling locally shifts the chemical potential, inducing changes of the charge density. These charge fluctuations will be screened by electron-electron interactions, effectively reducing the strength of the deformation potential. We take D to be the renormalized coupling. § PHONON SPECTROSCOPY §.§ Inelastic tunneling current We are now in a position to compute the inelastic tunneling current to the leading orders in tip-sample tunneling and electron-phonon coupling. Inelastic electron tunneling in conjunction with phonon emission emerges from the interlayer electron-phonon coupling ℋ_inter [see Eqs. (<ref>) and (<ref>)] as well as the intralayer electron-phonon coupling ℋ_intra in conjunction with interlayer tunneling. While the first contribution can be captured by Fermi's golden rule in lowest order, the second requires a higher-order calculation. Keeping the calculation general at first, we assume a nonzero matrix element ⟨𝐩',s';r,𝐐 | 𝒯_inel|𝐤,s⟩ of the inelastic contribution to the 𝒯-matrix without specifying to a particular process. We assume zero temperature and eV_b>0. Then, tunneling is unidirectional from tip to sample and only phonon emission contributes. We also specify to charge neutrality and small quantum capacitance, so that ϕ_T = ϕ_S and μ = eV_b/2 at bias voltages corresponding to typical phonon frequencies. With these assumptions, the phonon contribution δ I to the tunneling current from tip to sample is given by δ I = 2π e N_f/ħ∑_𝐐∑_r∑_𝐤,𝐩'∑_s,s'|⟨𝐩',s';r,𝐐|𝒯_inel|𝐤,s⟩|^2 δ(E_𝐩',s'+ ħω_r,𝐐-E_𝐤,s) f_μ(E_𝐤,s)[1-f_-μ(E_𝐩',s')] + (𝐐→𝐐'). Passing to the differential conductance in the limit of zero temperature using μ=eV_b/2 gives dδ I/dV_b = 2π e^2 N_f/2ħ∑_𝐐∑_r∑_𝐤,𝐩'∑_s,s'θ(eV_b-ħω_r,𝐐) |⟨𝐩',s';r,𝐐|𝒯_inel|𝐤,s⟩|^2 ×{δ(E_𝐩',s'+ħω_r,𝐐-μ)δ(E_𝐤,s-μ) + δ(-μ+ħω_r,𝐐-E_𝐤,s)δ(E_𝐩',s'+μ) } + (𝐐→𝐐'). The singular contribution to the second derivative, d^2δ I/dV_b^2, arises from the derivative of the threshold factor θ(eV_b-ħω_r,𝐐). The resulting δ-function enforces that the bias voltage match the phonon energy. Using this constraint, the two terms in curly brackets become equal, and we obtain d^2δ I/dV_b^2 = 2π e^3 N_f/ħ∑_𝐐∑_r δ(eV_b-ħω_r,𝐐) ∑_𝐤,𝐩' |⟨𝐩',-;r,𝐐|𝒯_inel|𝐤,+⟩|^2δ(E_𝐩',-+μ)δ(E_𝐤,+-μ)+ (𝐐→𝐐'). Here, we also used that tunneling at the threshold is from s=+ to s'=-. Due to the δ-functions, the initial and final electron states are located at the Fermi energies of tip and sample, respectively. Thus, we find d^2δ I/dV_b^2 = 2π e^3 N_f Ω^2/ħν(μ)ν(-μ)∑_r∑_𝐐δ(eV_b-ħω_r,𝐐) ∫dθ_𝐤/2πdθ_𝐩^'/2π |⟨𝐩',-;r,𝐐|𝒯_inel|𝐤,+⟩|^2 + (𝐐→𝐐'), which depends on Fermi-circle averages of the 𝒯-matrix element. §.§ Interlayer electron-phonon coupling: First-order perturbation theory We first consider the contribution δ I_inter of the interlayer electron-phonon coupling, 𝒯_inel→ℋ_inter in Eq. (<ref>). We can readily perform the angular averages in Eq. (<ref>) using Eqs. (<ref>), (<ref>), and (<ref>). By C_3 symmetry, the sum over j implicit in the matrix elements can be accounted for by a factor of three and we obtain d^2δ I_inter/dV_b^2 = 6π e^3 N_f Ω^2/ħ N∑_r δ(eV_b-ħω_r,𝐪_0)ν(μ)ν(-μ)/2M ω_r,𝐪_0 ×∑_α{| iw𝐊'·ϵ^α_r,𝐐=𝐪_0 + w̃ẑ·ϵ^α_r,𝐐=𝐪_0|^2 + |iw𝐊·ϵ^α_r,𝐐'=𝐪_0 + w̃ẑ·ϵ^α_r,𝐐'=𝐪_0|^2}. Here, we also used that by symmetry, ω_r,𝐪_0 is the same for the tip and sample layers. 
Finally we note that the contributions of phonon emissions in tip and sample are identical in magnitude and use the fact that the polarization vectors are real. This yields the result d^2δ I_inter/dV_b^2 = G_incoh∑_r δ(V_b-ħω_r,𝐪_0/e) 8π^2(κ_Fℓ_r,𝐪_0)^2/√(3)∑_α{ (𝐊̂'·ϵ^α_r,𝐐=𝐪_0)^2 + w̃^2/w^2|𝐊|^2 (ẑ·ϵ^α_r,𝐐=𝐪_0)^2 }. Here, we have rewritten the prefactor by introducing the length ℓ_r,𝐪_0 = √(ħ/M ω_r,𝐪_0) characterizing the contribution of phonon mode r with wavevector 𝐪_0 to the amplitude of the zero-point motion of the atoms and using |𝐊|^2Ω_uc = 8π^2/(3√(3)). We note that in Sec. <ref> as well as Table <ref>, we use the less specific notation ℓ_ZPM for ℓ_r,𝐪_0. Each phonon mode contributes a δ-function peak to d^2δ I/dV_b^2 at eV_b=ħω_r,𝐪_0. Several comments are in order: (i) At small twist angles, transverse phonons contribute more strongly to the inelastic tunneling current as the vectors 𝐊 and 𝐊' are nearly orthogonal to the phonon wavevector 𝐪_0. (ii) As the twist angle decreases and the phonon wavevector q_0→ 0, the strength of the electron-phonon coupling diverges for acoustic phonons. Concurrently, the Fermi wavevector κ_F decreases. On balance, we have (κ_Fℓ_r)^2∼ω_r,𝐪_0, so that the strength of the phonon resonance decreases as the twist angle becomes smaller. We note, however, that this is specific to the case of overall charge neutrality. Away from charge neutrality, the Fermi wavevector remains nonzero at zero bias and one finds a singular enhancement at small twist angles. The cutoff at small twist angles is discussed at the end of Sec. <ref>. (iii) For the same reason, acoustic phonons contribute more weakly than optical phonons at overall charge neutrality, but stronger away from charge neutrality provided that the phonon frequencies are small compared to the zero-bias chemical potential. In deriving d^2δ I/dV_b^2, we neglected the electronic momenta measured from the respective K-points, relative to q_0 [see Eqs. (<ref>) and (<ref>) and the preceding discussion]. Within this approximation, all inelastic tunneling events involve phonons with momenta 𝐪_j and d^2δ I/dV_b^2 becomes a δ-peak as a function of bias voltage. Retaining the small electronic momenta effectively broadens the δ-peak. The phonon momenta are now equal to 𝐪_j only up to wavevectors of order κ_F. For acoustic modes, we can estimate the change in phonon frequency as Δω_r∼ħ cκ_F with c the speed of sound. Thus, we find Δω_r/ω_r,𝐪_0∼c/v_D≪ 1 for the relative broadening of the phonon peak. We note that this approximation also implies a limitation on the twist angle due to the requirement κ_F ≪ q_0≈ |𝐊|θ. For an optical mode (for which this condition is more stringent), this implies θ≫c/v_D. Here, we estimated the frequency of the optical phonon as c|𝐊|. §.§ Intralayer electron-phonon coupling: Second-order perturbation theory The contribution of the intralayer electron-phonon interaction is obtained by 𝒯_inel→ℋ_T𝒢_0ℋ_intra + ℋ_intra𝒢_0ℋ_T in Eq. (<ref>). This is written in second quantization (as indicated by calligraphic symbols) to keep proper track of the relative exchange phase of the two contributions. The relevant processes are illustrated in Fig. <ref>, for a bias voltage at threshold (eV_b=ħω_r,𝐪_j). Electron tunneling is from the chemical potential in the tip (conduction band) to the chemical potential in the sample (valence band). We specify to electron wavevectors in the vicinity of 𝐊 and 𝐊'. Figures <ref>(a) and (c) describe phonon emission in the tip, (b) and (d) in the sample. 
By symmetry, both sets of processes contribute equally. For definiteness, we specify to emission of a phonon in the tip. In the process shown in Fig. <ref>(a), tunneling follows the initial phonon emission, so that the process contributes to ℋ_T𝒢_0ℋ_intra. Similarly, in the process shown in Fig. <ref>(c), phonon emission follows the initial tunneling, so that the process contributes to ℋ_intra𝒢_0ℋ_T. These processes are described by the matrix elements ⟨𝐩',-;r,𝐐| H_T G_0 H_intra |𝐤,+ ⟩ = ∑_𝐫⟨𝐩',-;r,𝐐| H_T |𝐫,+;r,𝐐⟩⟨𝐫,+;r,𝐐| H_intra |𝐤,+ ⟩/E_𝐤,+-E_𝐫,s-ħω_r,𝐐 and ⟨𝐩',-;r,𝐐| H_intra G_0 H_T |𝐤,+ ⟩ = ∑_𝐫⟨𝐩',-;r,𝐐| H_intra |𝐩',-; 𝐫,-;𝐤,+⟩⟨𝐩',-; 𝐫,-;𝐤,+| H_T |𝐤,+ ⟩/E_𝐫,s-E^'_𝐩',- of the 𝒯-matrix, respectively. Here, we denote holes by overlines. In this section, we also denote energies of tip (sample) without (with) prime. Energy and momentum conservation demand that E_𝐤,+=E^'_𝐩',-+ħω_𝐐 as well as 𝐐=𝐤-𝐩' ≃𝐪_j. The three contributions to tunneling are related by C_3 symmetry and incoherent, so that we focus on the j=0 contribution, which leaves the momentum unchanged, i.e., 𝐫=𝐩' in both processes. The processes in Fig. <ref>(a) and (c) contribute with a relative statistical minus sign. This follows as they involve the operator products c^†_𝐩,-c_𝐫,+c^†_𝐫,+c_𝐤,+ and c^†_𝐫,-c_𝐤,+ c^†_𝐩,-c_𝐫,-, respectively. Anticommuting the operators in the second product turns it into -c^†_𝐩,-c^†_𝐫,-c_𝐫,-c_𝐤,+. The relative minus sign follows by noting that c_𝐫,+c^†_𝐫,+ and c^†_𝐫,-c_𝐫,- both reduce to unity when acting on the ground state. The energy denominators of the two processes in Eqs. (<ref>) and (<ref>) are equal to each other up to corrections which are small in the ratio of the phonon and electron velocities. As can also be seen by inspection of Figs. <ref>(a) and (c), the energy denominators differ by the phonon energy ħω_r,𝐐, which can be neglected in leading order. Thus, they can be approximated by E_𝐤,+-E_𝐩',+≃ħ v_Dq_0 provided that the linearized Dirac spectrum accurately describes the intermediate state at momentum 𝐫=𝐩'. Using Eq. (<ref>), we find ⟨𝐩',-;r,𝐐| H_T |𝐫=𝐩',+;r,𝐐⟩ = w/2[1 + e^iγ_p'] [1 - e^-iγ^'_p'] and ⟨𝐩',-; 𝐫=𝐩',-;𝐤,+| H_T |𝐤,+ ⟩ = w/2[1 + e^iγ_p'] [1 + e^-iγ^'_p'] for the matrix elements of the tunneling Hamiltonian. Note that here, γ_p' and γ^'_p' are evaluated using the bond vectors of the respective layer. To compute the matrix elements of H_intra, we use that it has only offdiagonal matrix elements in sublattice space. We can approximate the electron momenta as 𝐤≃𝐊 and 𝐩' = 𝐫≃𝐊'. Using Eqs. (<ref>) and (<ref>), we then have ⟨𝐫=𝐩',+;r,𝐐| H_intra |𝐤,+ ⟩≃ - δ_𝐐,𝐪_0 ×1/2{ M_𝐊',B;𝐊,A^r e^iγ_𝐩^' + [M_𝐊,B;𝐊',A^r]^* e^-iγ_𝐤} and ⟨𝐩',-;r,𝐐| H_intra |𝐩',-; 𝐫=𝐩',-;𝐤,+⟩≃δ_𝐐,𝐪_0 ×1/2{ M_𝐊',B;𝐊,A^r e^iγ_𝐩^' - [M_𝐊,B;𝐊',A^r]^* e^-iγ_𝐤}. Adding the amplitudes accounting for their relative minus sign, we find ⟨𝐩',-;r,𝐐| H_intra G_0 H_T - H_T G_0 H_intra |𝐤,+ ⟩ = - δ_𝐐,𝐪_0w/4ħ v_D q_0[1 + e^iγ_p'] ×{[1 - e^-iγ^'_p'] ( M_𝐊',B;𝐊,A^r e^iγ_𝐩^' + [M_𝐊,B;𝐊',A^r]^* e^-iγ_𝐤) + [1 + e^-iγ^'_p'] ( M_𝐊',B;𝐊,A^r e^iγ_𝐩^' - [M_𝐊,B;𝐊',A^r]^* e^-iγ_𝐤)} = - δ_𝐐,𝐪_0w/2ħ v_D q_0[1 + e^iγ_p'] { M_𝐊',B;𝐊,A^r e^iγ_𝐩^' - [M_𝐊,B;𝐊',A^r]^* e^-iγ_𝐤-iγ^'_p'} = δ_𝐐,𝐪_0w/2ħ v_D q_0[1 - e^-i(θ_π'-θ/2)] { M_𝐊',B;𝐊,A^r e^-i(θ_π^'-θ/2) + [M_𝐊,B;𝐊',A^r]^* e^iθ_κ + iθ^'_π'}. In the last step, we have approximated the phases in the Dirac approximation. 
Taking the absolute value, averaging over the Fermi circles, and noting that as a result of the average over θ_κ, the two terms contribute incoherently gives ∫dθ_𝐤/2πdθ_𝐩^'/2π |⟨𝐩',-;r,𝐐|𝒯_inel|𝐤,+⟩|^2 →w^2/2(ħ v_D q_0)^2{ |M_𝐊',B;𝐊,A^r|^2 + |M_𝐊,B;𝐊',A^r|^2 } Inserting this into Eq. (<ref>) and accounting for phonon emission in the substrate, we find d^2δ I_intra/dV_b^2 = G_incoh∑_r δ(V_b-ħω_r,𝐪_0/e) κ_F^2Ω/(ħ v_D q_0)^2{ |M_𝐊',B;𝐊,A^r|^2 + |M_𝐊,B;𝐊',A^r|^2 } Both matrix elements contribute equally, so that we find the result d^2δ I_intra/dV_b^2 = G_incoh∑_r δ(V_b-ħω_r,𝐪_0/e)(κ_Fℓ_r,𝐪_0)^2 β^2Ω_uc/(ħ v_D q_0)^2|∑_j[ϵ_r,𝐪_0^A e^-i𝐪_0·𝐞_j - ϵ_r,𝐪_0^B] |^2 The behavior in the limit of small twist angle, i.e., 𝐪_0→ 0, depends on the type of phonon mode. For acoustic phonons, the dependences on q_0 of the last two factors on the right hand side compensate to yield a constant. At charge neutrality, (κ_Fℓ_r)^2∼ω_r,𝐪_0 implies that the overall expression reduces linearly with decreasing q_0→ 0. Away from charge neutrality, the peak grows as 1/q_0. For optical phonons, the last factor as well as (κ_Fℓ_r)^2 remain constant for q_0→ 0. Thus, the peak grows as 1/q_0^2 as the twist angle decreases both at and away from charge neutrality. We end this section by remarking that making the replacement derived in App. <ref>, one can reproduce the contribution of acoustic phonons due to the interlayer electron-phonon interaction from Eq. (<ref>) for the intralayer coupling. § CONCLUSIONS While STM probes local tunneling in real space, the QTM exploits tunneling which is local in reciprocal space. This makes the QTM ideally suited to probing momentum-dependent dispersions. Elastic tunneling between twisted graphene layers requires voltages above a twist-angle dependent threshold. This opens a window at small voltages, in which the tunneling current in a QTM is purely inelastic and gives access to collective-mode dispersions. In this paper, we illustrated this modality by developing a comprehensive and analytical theory for phonon spectroscopy in twisted graphene-graphene junctions. We showed that intra- and interlayer electron-phonon couplings give contributions to the inelastic tunneling current, and that the dominant contribution can come from various processes for a specific phonon mode, as summarized in Table <ref>. The intralayer electron-phonon coupling has two contributions at long wavelengths. Phonon-induced modifications in the bond lengths can be described as a gauge field within the effective Dirac description of the graphene band structure. The conventional deformation potential associated with compressions and dilations of the lattice enters the Dirac equation as a scalar potential. It has been difficult to reliably extract corresponding coupling constants from experiment. Provided that the contribution of intralayer electron-phonon coupling contributes significantly in QTM measurements, we find that for small twist angles, the gauge coupling is much stronger for transverse acoustic modes, while the deformation potential couples only to longitudinal acoustic modes. Thus, QTM measurements may well resolve this issue in addition to providing direct access to coupling constants. Our considerations were limited to situations in which tunneling preserves the valley. In practice, rotating the tip by π/3 merely replaces the tip's K-valley by the K'-valley. This makes the twist-angle dependent tunneling invariant under π/3 rotations. 
Moreover, every phonon mode contributes twice at a particular twist angle, due to scattering between identical as well as opposite valleys. Both contributions become symmetric at a twist angle of π/6, making this a symmetry point of the measured spectra. Our analytical considerations focused on zero temperature, where only phonon emission contributes to the inelastic tunneling current. At finite temperatures, also phonon absorption contributes. We also did not consider higher-order umklapp processes. These do affect elastic scattering at specific angles, making their contribution to inelastic tunneling an interesting question. Both effects can be included by straightforward extension of the calculations presented here. Our considerations further neglected the finite size of the tunneling contact. A finite contact area limits the accuracy of momentum conservation in the tunneling process. This enables a temperature-dependent background current due to thermal phonons. The magnitude of this background depends sensitively on the contact shape. We can evaluate the tunneling amplitude for a finite contact area following the discussion of the scattering picture in Sec. <ref>. While a rectangular contact would have independent Lorentzian-type broadenings of the x- and y-components of the momentum, these components are coupled in more general situations. For instance, one would expect just a single Lorentzian broadening for a weakly disordered contact. The background current is dominated by the tail of these broadening functions, which yields different behaviors in these two cases. In addition to phonon dispersions, QTM measurements also give access to momentum-resolved electron-phonon couplings. In view of the multiple inelastic tunneling processes, extracting this information from experiment must rely on theoretical results of the kind that we provide in this paper. Corresponding results promise to shed light on the nature of superconductivity and the linear temperature dependence of the resistivity in magic-angle twisted bilayer graphene. Phonon spectroscopy based on QTM measurements may finally bring us closer to understanding the origin of these phenomena, which may or may not originate from electron-phonon coupling. Research at Weizmann, Yale, and Freie Universität Berlin was supported by Deutsche Forschungsgemeinschaft through CRC 183 (project C02 and a Mercator Fellowship). Research at Freie Universität Berlin was further supported by Deutsche Forschungsgemeinschaft through a joint ANR-DFG project (TWISTGRAPH). Research at Yale was supported by NSF Grant No. DMR-2410182 and by the Office of Naval Research (ONR) under Award No. N00014-22-1-2764.P.G. acknowledges support from the Severo Ochoa program for centers of excellence in R&D (CEX2020-001039-S/AEI/10.13039/501100011033, Ministerio de Ciencia e Innovacion, Spain), from grant (MAD2D-CM)-MRR MATERIALES AVANZADOS-IMDEA-NC, NOVMOMAT, as well as Grant PID2022-142162NB-I00 funded by MCIN/AEI/10.13039/501100011033. Work at Weizmann was supported by the Leona M. and Harry B. Helmsley Charitable Trust grant, and the Rosa and Emilio Segre Research Award (S.I.) as well as an NSF-BSF award DMR-2000987, and the European Research Council (ERC) under grant HQMAT (Grant Agreement No. 817799) (E.B.). § INTERLAYER ELECTRON-PHONON COUPLING It may seem accidental that for acoustic phonons, the contributions of first- and second-order perturbation theory are of the same order. 
Here, we rationalize this result by showing that the two kinds of electron-phonon couplings can be made to look similar by means of a gauge transformation. Focusing on the contribution of T_0, the interlayer tunneling can be written in the continuum limit as ℋ_T = ∫ d𝐫ψ_t,α^†(𝐫) T_0^αβ(𝐫) e^i𝐊·[𝐮(𝐫+τ_α)-𝐮'(𝐫+τ_β)] ψ_b,β(𝐫) + h.c. The phonon displacements of the two layers contribute a phase to the interlayer tunneling, akin to a vector potential introducing a Peierls phase. This phase can be eliminated by a gauge transformation ψ_b,β(𝐫) →ψ_b,β(𝐫) e^i𝐊·𝐮'(𝐫+τ_β) and an analogous transformation for the upper layer. This is implemented by the unitary transformation 𝒰 = exp{ - i∑_α∫ d𝐫𝐊· [𝐮(𝐫+τ_α) ψ^†_t,α(𝐫) ψ_t,α(𝐫) - 𝐮'(𝐫+τ_α) ψ^†_b,α(𝐫) ψ_b,α(𝐫)] }. Apart from eliminating the phase factor including the phonon displacements in H_T, this transforms the electronic Hamiltonian of the graphene layers. (An additional contibution due to the transformation of the phonon Hamiltonian is small in the parameter c/v_D.) We revert to a tight-binding description for convenience. Then, the unitary transformation becomes 𝒰 = exp{ - i∑_α∑_𝐑𝐊·[ 𝐮(𝐑+τ_α) c^†_t(𝐑+τ_α) c_t(𝐑+τ_α) - 𝐮'(𝐑'+τ_α^') c^†_b(𝐑'+τ_α^') c_b(𝐑'+τ_α^') ] }. The graphene Hamiltonian, say for the top layer, transforms into ℋ = - t^∥∑_𝐑∑_𝐞_j e^i𝐊·[𝐮(𝐑)-𝐮(𝐑+𝐞_j)] c^†_t(𝐑+𝐞_j) c_t(𝐑) + h.c. Expanding the exponential to linear order in the atomic displacements, we find that the resulting electron-phonon coupling takes essentially the same form as the intralayer electron-phonon coupling of the top layer, except for the fact that it is phase shifted by π/2 due to the factor of i. The calculation for the bottom layer proceeds by complete analogy and generates a corresponding electron phonon coupling for the bottom layer. With this form of the interlayer electron-phonon coupling, we readily see that the results for intralayer and interlayer electron-phonon coupling are related by the replacement ∂ t^∥/∂ a↔ i t^∥𝐊. One can check that making this replacement in Eq. (<ref>) indeed reproduces the contribution of in-plane phonons to Eq. (<ref>) for the interlayer electron-phonon coupling. The vectorial nature of the interlayer coupling implies that it is strongly twist-angle dependent, with coupling to transverse phonons dominating over longitudinal phonons. This relation also makes it explicit that apart from this difference in their twist-angle dependence, the two types of electron-phonon couplings give contributions of the same order, when making the (approximate) replacement ∂ t^∥/∂ a→ t^∥/ a. In particular, we expect that the parametric dependences on electron filling and phonon wave vector are identical in both cases. Moreover, the relative phase shift of π/2 implies that there are no interference terms between the two contributions. It is also interesting to note that the dependence of the electron-phonon coupling on the phonon momentum 𝐐 is seemingly different before and after the gauge transformation. Focusing on acoustic phonons, the coupling before the gauge transformation diverges as 1/√(Q) due to the linear dispersion. This dependence emerges from the zero-point amplitude of the phonon displacements. In contrast, after the transformation, there is an additional factor of Q due to the difference in phonon displacements of neighboring graphene sites. Thus, the coupling behaves as √(Q). 
This shows that the coupling is consistent with the expectation that it vanishes in the long-wavelength limit corresponding to near-uniform relative shifts of the top and bottom layers. Of course, in keeping with gauge invariance, physical results are identical in both cases, with the singular Q-dependence emerging from the energy denominator after the gauge transformation.

[Cao et al. (2018a)] Y. Cao, V. Fatemi, A. Demir, S. Fang, S. L. Tomarken, J. Y. Luo, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, R. C. Ashoori, and P. Jarillo-Herrero, Correlated insulator behaviour at half-filling in magic-angle graphene superlattices, Nature 556, 80–84 (2018).
[Cao et al. (2018b)] Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Unconventional superconductivity in magic-angle graphene superlattices, Nature 556, 43–50 (2018).
[Andrei et al. (2021)] E. Y. Andrei, D. K. Efetov, P. Jarillo-Herrero, A. H. MacDonald, K. F. Mak, T. Senthil, E. Tutuc, A. Yazdani, and A. F. Young, The marvels of moiré materials, Nat. Rev. Mater. 6, 201 (2021).
[Choi et al. (2019)] Y. Choi, J. Kemmer, Y. Peng, A. Thomson, H. Arora, R. Polski, Y. Zhang, H. Ren, J. Alicea, G. Refael, F. von Oppen, K. Watanabe, T. Taniguchi, and S. Nadj-Perge, Electronic correlations in twisted bilayer graphene near the magic angle, Nat. Phys. 15, 1174–1180 (2019).
[Kerelsky et al. (2019)] A. Kerelsky, L. J. McGilly, D. M. Kennes, L. Xian, M. Yankowitz, S. Chen, K. Watanabe, T. Taniguchi, J. Hone, C. Dean, A. Rubio, and A. N. Pasupathy, Maximized electron interactions at the magic angle in twisted bilayer graphene, Nature 572, 95–100 (2019).
[Xie et al. (2019)] Y. Xie, B. Lian, B. Jäck, X. Liu, C.-L. Chiu, K. Watanabe, T. Taniguchi, B. A. Bernevig, and A. Yazdani, Spectroscopic signatures of many-body correlations in magic-angle twisted bilayer graphene, Nature 572, 101–105 (2019).
[Jiang et al. (2019)] Y. Jiang, X. Lai, K. Watanabe, T. Taniguchi, K. Haule, J. Mao, and E. Y. Andrei, Charge order and broken rotational symmetry in magic-angle twisted bilayer graphene, Nature 573, 91–95 (2019).
[Oh et al. (2021)] M. Oh, K. P. Nuckolls, D. Wong, R. L. Lee, X. Liu, K. Watanabe, T. Taniguchi, and A. Yazdani, Evidence for unconventional superconductivity in twisted bilayer graphene, Nature 600, 240–245 (2021).
[Nuckolls et al. (2023)] K. P. Nuckolls, R. L. Lee, M. Oh, D. Wong, T. Soejima, J. P. Hong, D. Călugăru, J. Herzog-Arbeitman, B. A. Bernevig, K. Watanabe, T. Taniguchi, N. Regnault, M. P. Zaletel, and A. Yazdani, Quantum textures of the many-body wavefunctions in magic-angle graphene, Nature 620, 525–532 (2023).
[Zondiner et al. (2020)] U. Zondiner, A. Rozen, D. Rodan-Legrain, Y. Cao, R. Queiroz, T. Taniguchi, K. Watanabe, Y. Oreg, F. von Oppen, A. Stern, E. Berg, P. Jarillo-Herrero, and S. Ilani, Cascade of phase transitions and Dirac revivals in magic-angle graphene, Nature 582, 203–208 (2020).
[Wong et al. (2020)] D. Wong, K. P. Nuckolls, M. Oh, B. Lian, Y. Xie, S. Jeon, K. Watanabe, T. Taniguchi, B. A. Bernevig, and A. Yazdani, Cascade of electronic transitions in magic-angle twisted bilayer graphene, Nature 582, 198–202 (2020).
[Rozen et al. (2021)] A. Rozen, J. M. Park, U. Zondiner, Y. Cao, D. Rodan-Legrain, T. Taniguchi, K. Watanabe, Y. Oreg, A. Stern, E. Berg, P. Jarillo-Herrero, and S. Ilani, Entropic evidence for a Pomeranchuk effect in magic-angle graphene, Nature 592, 214–219 (2021).
[Xie et al. (2021)] Y. Xie, A. T. Pierce, J. M. Park, D. E. Parker, E. Khalaf, P. Ledwith, Y. Cao, S. H. Lee, S. Chen, P. R. Forrester, K. Watanabe, T. Taniguchi, A. Vishwanath, P. Jarillo-Herrero, and A. Yacoby, Fractional Chern insulators in magic-angle twisted bilayer graphene, Nature 600, 439–443 (2021).
[Grover et al. (2022)] S. Grover, M. Bocarsly, A. Uri, P. Stepanov, G. Di Battista, I. Roy, J. Xiao, A. Y. Meltzer, Y. Myasoedov, K. Pareek, K. Watanabe, T. Taniguchi, B. Yan, A. Stern, E. Berg, D. K. Efetov, and E. Zeldov, Chern mosaic and Berry-curvature magnetism in magic-angle graphene, Nat. Phys. 18, 885–892 (2022).
[Inbar et al. (2023)] A. Inbar, J. Birkbeck, J. Xiao, T. Taniguchi, K. Watanabe, B. Yan, Y. Oreg, A. Stern, E. Berg, and S. Ilani, The quantum twisting microscope, Nature 614, 682 (2023).
[Xiao et al. (2023)] J. Xiao, Y. Vituri, and E. Berg, Probing the order parameter symmetry of two-dimensional superconductors by twisted Josephson interferometry, Phys. Rev. B 108, 094520 (2023).
[Peri et al. (2024)] V. Peri, S. Ilani, P. A. Lee, and G. Refael, Probing quantum spin liquids with a quantum twisting microscope, Phys. Rev. B 109, 035127 (2024).
[Pichler et al. (2024)] F. Pichler, W. Kadow, C. Kuhlenkamp, and M. Knap, Probing magnetism in moiré heterostructures with quantum twisting microscopes (2024), arXiv:2402.04311 [cond-mat.str-el].
[Birkbeck et al. (2024)] J. Birkbeck, J. Xiao, A. Inbar, T. Taniguchi, K. Watanabe, E. Berg, L. Glazman, F. Guinea, F. von Oppen, and S. Ilani, Measuring phonon dispersion and electron-phason coupling in twisted bilayer graphene with a cryogenic quantum twisting microscope (2024), arXiv:24XX.XXXX.
[Wu et al. (2018)] F. Wu, A. H. MacDonald, and I. Martin, Theory of phonon-mediated superconductivity in twisted bilayer graphene, Phys. Rev. Lett. 121, 257001 (2018).
[Choi and Choi (2018)] Y. W. Choi and H. J. Choi, Strong electron-phonon coupling, electron-hole asymmetry, and nonadiabaticity in magic-angle twisted bilayer graphene, Phys. Rev. B 98, 241412 (2018).
[Lian et al. (2019)] B. Lian, Z. Wang, and B. A. Bernevig, Twisted bilayer graphene: A phonon-driven superconductor, Phys. Rev. Lett. 122, 257002 (2019).
[Wu et al. (2019)] F. Wu, E. Hwang, and S. Das Sarma, Phonon-induced giant linear-in-T resistivity in magic angle twisted bilayer graphene: Ordinary strangeness and exotic superconductivity, Phys. Rev. B 99, 165112 (2019).
[Angeli et al. (2019)] M. Angeli, E. Tosatti, and M. Fabrizio, Valley Jahn-Teller effect in twisted bilayer graphene, Phys. Rev. X 9, 041010 (2019).
[Cea and Guinea (2021)] T. Cea and F. Guinea, Coulomb interaction, phonons, and superconductivity in twisted bilayer graphene, Proc. Natl. Acad. Sci. 118 (2021), doi:10.1073/pnas.2107874118.
[Lewandowski et al. (2021)] C. Lewandowski, D. Chowdhury, and J. Ruhman, Pairing in magic-angle twisted bilayer graphene: Role of phonon and plasmon umklapp, Phys. Rev. B 103, 235401 (2021).
[Chou et al. (2022)] Y.-Z. Chou, F. Wu, J. D. Sau, and S. Das Sarma, Acoustic-phonon-mediated superconductivity in Bernal bilayer graphene, Phys. Rev. B 105, L100503 (2022).
[Sharma et al. (2021)] G. Sharma, I. Yudhistira, N. Chakraborty, D. Y. H. Ho, M. M. A. Ezzi, M. S. Fuhrer, G. Vignale, and S. Adam, Carrier transport theory for twisted bilayer graphene in the metallic regime, Nat. Commun. 12 (2021), doi:10.1038/s41467-021-25864-1.
[Ishizuka et al. (2021)] H. Ishizuka, A. Fahimniya, F. Guinea, and L. Levitov, Purcell-like enhancement of electron–phonon interactions in long-period superlattices: Linear-temperature resistivity and cooling power, Nano Lett. 21, 7465–7471 (2021).
[Das Sarma and Wu (2022)] S. Das Sarma and F. Wu, Strange metallicity of moiré twisted bilayer graphene, Phys. Rev. Res. 4, 033061 (2022).
[Hartnoll and Mackenzie (2022)] S. A. Hartnoll and A. P. Mackenzie, Colloquium: Planckian dissipation in metals, Rev. Mod. Phys. 94, 041002 (2022).
[Ishizuka and Levitov (2022)] H. Ishizuka and L. Levitov, Wide-range T^2 resistivity and umklapp scattering in moiré graphene, New J. Phys. 24, 052001 (2022).
[Aydin et al. (2023)] A. Aydin, J. Keski-Rahkonen, and E. J. Heller, Quantum acoustics spawns Planckian resistivity (2023), arXiv:2303.06077 [cond-mat.str-el].
[Davis et al. (2023)] S. M. Davis, Y.-Z. Chou, F. Wu, and S. Das Sarma, Phonon-limited resistivity of multilayer graphene systems, Phys. Rev. B 107, 045426 (2023).
[Ochoa and Fernandes (2023)] H. Ochoa and R. M. Fernandes, Extended linear-in-T resistivity due to electron-phason scattering in moiré superlattices, Phys. Rev. B 108, 075168 (2023).
[Suzuura and Ando (2002)] H. Suzuura and T. Ando, Phonons and electron-phonon scattering in carbon nanotubes, Phys. Rev. B 65, 235412 (2002).
[Castro Neto et al. (2009a)] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, The electronic properties of graphene, Rev. Mod. Phys. 81, 109 (2009).
[Note1] Notice that the actual real-space amplitude of the zero-point fluctuations for a particular phonon mode is smaller than ℓ_ZPM by a factor 1/√(N).
[Bistritzer and MacDonald (2011)] R. Bistritzer and A. H. MacDonald, Moiré bands in twisted double-layer graphene, Proc. Natl. Acad. Sci. 108, 12233 (2011).
[von Oppen et al. (2009)] F. von Oppen, F. Guinea, and E. Mariani, Synthetic electric fields and phonon damping in carbon nanotubes and graphene, Phys. Rev. B 80, 075420 (2009).
[Ochoa (2019)] H. Ochoa, Moiré-pattern fluctuations and electron-phason coupling in twisted bilayer graphene, Phys. Rev. B 100, 155426 (2019).
[Koshino and Nam (2020)] M. Koshino and N. N. T. Nam, Effective continuum model for relaxed twisted bilayer graphene and moiré electron-phonon interaction, Phys. Rev. B 101, 195425 (2020).
[Castro Neto et al. (2009b)] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, The electronic properties of graphene, Rev. Mod. Phys. 81, 109 (2009).
http://arxiv.org/abs/2407.13770v1
20240718175956
Moiré Fractional Chern Insulators IV: Fluctuation-Driven Collapse of FCIs in Multi-Band Exact Diagonalization Calculations on Rhombohedral Graphene
[ "Jiabin Yu", "Jonah Herzog-Arbeitman", "Yves H. Kwan", "Nicolas Regnault", "B. Andrei Bernevig" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall" ]
http://arxiv.org/abs/2407.13022v1
20240717212313
On-sky, real-time optical gain calibration on MagAO-X using incoherent speckles
[ "Eden A. McEwen", "Jared R. Males", "Olivier Guyon", "Sebastiaan Y. Haffert", "Joseph D. Long", "Laird M. Close", "Kyle Van Gorkom", "Jennifer Lumbres", "Alexander D. Hedglen", "Lauren Schatz", "Maggie Y. Kautz", "Logan A. Pearce", "Jay K. Kueny", "Avalon L. McLeod", "Warren B. Foster", "Jialin Li", "Roz Roberts", "Alycia J. Weinburger" ]
astro-ph.IM
[ "astro-ph.IM" ]
On-sky, real-time optical gain calibration on MagAO-X using incoherent speckles
================================================================================

§ ABSTRACT
The next generation of extreme adaptive optics (AO) must be calibrated exceptionally well to achieve the desired contrast for ground-based direct imaging exoplanet targets. Current wavefront sensing and control system responses deviate from lab calibration throughout the night due to nonlinearities in the wavefront sensor (WFS) and signal loss. One cause of these changes is the optical gain (OG) effect, which shows that the difference between actual and reconstructed wavefronts is sensitive to residual wavefront errors from partially corrected turbulence. This work details on-sky measurement of optical gain on MagAO-X, an extreme AO system on the Magellan Clay 6.5m. We ultimately plan on using a method of high-temporal-frequency probes on our deformable mirror to track optical gain on the Pyramid WFS. The high-temporal-frequency probes, used to create PSF copies at 10-22 λ/D, are already routinely used by our system for coronagraph centering and post-observation calibration. This method is supported by the OG measurements from the modal response, measured simultaneously by sequenced pokes of each mode. When tracked with DIMM measurements, optical gain calibrations show a clear dependence on Strehl Ratio, and this relationship is discussed. This more accurate method of calibration is a crucial next step in enabling higher-fidelity correction and post-processing techniques for direct imaging ground-based systems.

§ INTRODUCTION
Ground-based observatories interested in high contrast imaging compensate for atmospheric turbulence with sophisticated adaptive optics (AO) systems. Recent and future high contrast systems have incorporated the Pyramid Wavefront Sensor (PyWFS) for its increased sensitivity. A pyramid WFS <cit.> splits the PSF on the tip of a pyramid into 4 pupil planes, as opposed to the widely used Shack-Hartmann WFS, which takes a pupil plane and splits it into subapertures to sense local slope. The PyWFS provides increased sensitivity in lower order modes but suffers from a degradation of response to, and thus reconstruction of, wavefronts when operated on sky. This loss in WFS sensitivity is known as Optical Gain (OG), and is an unknown in control systems that limits the future of AO control, from predictive control and wavefront reconstruction to post-processing techniques. Recent work <cit.> for next-generation ELTs shows the importance of compensating for OG through various algorithms in the next generation of ground-based instruments. Real-time measurement and application of optical gain will allow other systems to reach their full potential for high contrast correction.

MagAO-X <cit.>, shown in Figure <ref>, is a visible-NearIR extreme AO system built with a visible-light pyramid WFS particularly susceptible to the OG effect. The deformable mirror (DM) correction architecture is a woofer-tweeter <cit.> system, with a high-stroke 97-actuator woofer followed by a 2040-actuator tweeter DM. The visible PyWFS operates modulated, at up to 3.6 kHz, though typically at 2 kHz, modulated at 3 λ/D. The control architecture is based on CACAO <cit.> and routinely corrects up to 15 modes. MagAO-X is optimized for high contrast, faint target systems, and is currently being used as a pathfinder for the GMT system.
In the following, we discuss the work done with the MagAO-X instrument to characterize and calibrate optical gain on sky. By directly recovering the modal response matrix through self response matrices, discussed in Section <ref>, we are able to get a modal measure of Optical Gain. In Section <ref> we discuss the implications of these measurements, and in Section <ref> we discuss how these OG findings are being used for real-time control.

§ SELF RESPONSE MATRICES
For on-sky measurement of optical gain per mode, we elected to use the self response matrix (selfRM), a built-in CACAO method for acquiring the modal response matrix. The selfRM probes each mode in the command matrix at the specified amplitude, recording the modal response over time, returning a matrix of size [N modes x N modes x time]. The diagonal gives the modal response, a value close to 1 for lab responses, while the off-diagonal terms are cross talk between modes. The selfRM, when plotted by mode, for both an example sky and lab acquisition taken with MagAO-X, is shown in Figure <ref>. The lab response does not maintain a flat 1.0 signal, as selfRMs retain the scaling factors incorporated into the control matrix. These scaling factors are the same in lab and on sky and divide out. By dividing the sky response by the lab response, an optical gain modal curve is recovered. When smoothed, this curve is approximately flat vs. mode number.

§ OPTICAL GAIN MEASURED ON SKY
OG curves using the selfRM method were collected through the MagAO-X 2022B and 2023A observing runs, for a total of 32 measurements. In the following section we detail key insights these measurements have allowed us to make on OG behavior on sky.

§.§ Optical Gain vs. Mode Number
On the same target, with the same number of control modes but with varying seeing, Figure <ref> shows that the selfRM retains its shape as the OG curve shifts with worsening conditions. From this result, we feel confident that the optical gain response of the MagAO-X system is approximately flat to the degree necessary for effective OG control systems.

§.§ Optical Gain vs. DIMM Seeing
Figure <ref> displays all measured OG points against DIMM seeing. The OG points displayed are calculated as the average of the modal OG curve, excluding the modes offloaded to the woofer (0-107) and the higher order modes with the greatest scatter (1207 to 1564). The error displayed is not uncertainty in OG but the measurement scatter across the remaining modes. The OG values shown in Figure <ref> follow expected performance <cit.> on bright stars in good seeing, about 0.8. Guide star magnitude is recorded in the PyWFS wavelength of I band, calculated from Gaia magnitudes <cit.>, using SIMBAD <cit.> reported stellar types and the Mamajek <cit.> table for conversion. Performance decreases for fainter guide stars and worse seeing.

§.§ Optical Gain vs. Strehl Ratio
For a fuller understanding of our system's performance relative to OG, we predict the Strehl Ratio (SR) at the tip of the PyWFS using telemetry from each observation. In Figure <ref>, the SR calculated from the telemetry for each OG point is plotted against the OG value itself, and shows good agreement. In our forthcoming paper we describe how we predict SR and validate this result with simulations.
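The reduction from a measured selfRM to the scalar OG values quoted above can be summarized in a few lines. The Python sketch below is illustrative only: the array layout, function names, and synthetic numbers are ours rather than the CACAO implementation; only the ratio-of-diagonals logic and the excluded mode ranges follow the description above.

```python
import numpy as np

def og_curve(selfRM_sky, selfRM_lab, frame=-1):
    """Modal optical-gain curve from sky and lab self response matrices (selfRMs).

    selfRM_* : arrays of shape [N_modes, N_modes, N_time]; the diagonal of the
    first two axes is each mode's response to its own probe.
    """
    sky = np.diagonal(selfRM_sky[..., frame])   # modal response measured on sky
    lab = np.diagonal(selfRM_lab[..., frame])   # lab response (carries the CM scaling factors)
    return sky / lab                            # the common scaling factors divide out

def og_average(og_modal, lo=108, hi=1207):
    """Scalar OG: mean over mid-order modes, excluding woofer-offloaded modes
    (0-107) and the noisiest high-order modes (1207 and up), as in the text.
    The standard deviation is the modal scatter, not an OG uncertainty."""
    kept = og_modal[lo:hi]
    return kept.mean(), kept.std()

# Synthetic demonstration (shapes and numbers are made up):
rng = np.random.default_rng(1)
n_modes, n_frames = 1564, 1
lab = np.zeros((n_modes, n_modes, n_frames))
lab[np.arange(n_modes), np.arange(n_modes), :] = rng.uniform(0.8, 1.2, n_modes)[:, None]
sky = 0.75 * lab                                # a uniform optical gain of 0.75
mean_og, scatter = og_average(og_curve(sky, lab))
print(f"mean OG = {mean_og:.2f} +/- {scatter:.2f}")
```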
§ REALTIME OPTICAL GAIN TRACKING With this preliminary work that confirms OG as flat as a function of mode number on the MagAO-X instrument, we have pursued correction algorithms that track optical gain at a single spatial frequency, following the work done at the LBT<cit.>. If a mode of known amplitude is applied to the DM throughout an observation, that mode's reconstruction amplitude, averaged over turbulence, can be monitored to track optical gain. Incoherent Speckles<cit.> are a high frequency probe already in routine use on the MagAO-X instrument. The Fourier mode applied in both x and y in the DM's pupil plane creates a PSF copy on the focal plane at separations of 10-22 λ /D that allows for photometric calibration and centering during coronagraphic observations. A simulation of its effect on the DM, WFS, and focal plane is shown in Figure <ref>. We have pursued several methods of extracting the known amplitude pattern shape from WFS images, involving averaging over turbulence and comparing to lab reference images. Signals are present in the data extracted, but detailed optical gain tracking is stalled until real time control upgrades allow precise synchronization between the WFS camera frame and the DM command. § CONCLUSION This work explored optical gain measurements on the MagAO-X instrument, in furthering the work of real time optical gain calibration and control. We have demonstrated: * Optical Gain is approximately flat with respect to mode number. * Optical Gain follows the Strehl Ratio of the PSF on the tip of the pyramid. Through a selfRMs, a CACAO process for extracting modal response amplitudes, we have measured optical gain values across 0.4 to 1.8as DIMM seeing across target magnitude 1.2 to 9.4 magnitudes. This sample has allowed us to confirm that optical gain is approximately flat across the MagAO-X modal control basis. Using a simple error budget and assumed wind speed, Strehl ratios were calculated for each OG measurement. Comparing estimated SR to measured OG show that OG follows SR on sky as predicted in simulations. Current unfinished work has been undertaken to measure OG with the incoherent speckles already native to the MagAO-X control scheme. With the conclusions derived here we are confident that real time measurement and control of OG is possible with a high frequency DM probe. We are very grateful for the support from the NSF MRI Award MRI Award #1625441 for MagAO-X. Eden McEwen would like to acknowledge her funding through the NSF GRFP. Thank you to the MagAO-X observers and collaborators who gave time for SelfRM data collection from their observing time. This research made use of HCIPy, an open-source object-oriented framework written in Python for performing end-to-end simulations of high-contrast imaging instruments (Por et al. 2018). This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. spiebib
http://arxiv.org/abs/2407.13535v1
20240718140744
Fundamental Visual Navigation Algorithms: Indirect Sequential, Biased Diffusive, & Direct Pathing
[ "Patrick Govoni", "Pawel Romanczuk" ]
cs.NE
[ "cs.NE", "cs.AI" ]
Fundamental Visual Navigation Algorithms: Indirect Sequential, Biased Diffusive, & Direct Pathing
=================================================================================================

§ ABSTRACT
Effective foraging in a predictable local environment requires coordinating movement with observable spatial context - in a word, navigation. Distinct from search, navigating to specific areas known to be valuable entails its own particularities. How space is understood through vision and parsed for navigation is often examined experimentally, with limited ability to manipulate sensory inputs and probe into the algorithmic level of decision-making. As a generalizable, minimal alternative to empirical means, we evolve and study embodied neural networks to explore information processing algorithms an organism may use for visual spatial navigation. Surprisingly, three distinct classes of algorithms emerged, each with its own set of rules and tradeoffs, and each appears to be highly relevant to observable biological navigation behaviors.

§ INTRODUCTION
As dissipative systems, all biological beings are bound by the curse of having to find food. While some have evolved sedentary strategies, others need to move to ensure a steady flow of nutrients. When the location of food is unpredictable, this movement can be classified as a search problem. Search is a classic approach to foraging, and much has been written in regard to optimal search time and energy <cit.>, patch staying vs. leaving decisions <cit.>, as well as corresponding movement patterns <cit.>. Whereas search strategies may prevail in unpredictable environments, organisms living in environments with some regularity in food location would benefit from the ability to return to known resource-abundant areas, e.g. a bee to a flower patch, a mouse to the kitchens of an apartment building, or a human foraging for mushrooms. Simply put, the organism must learn to navigate.

Foraging navigation involves correlating perceivable features with expected food location, requiring the external environment to be sensed and processed before effective movement is taken. Given its broad ubiquity across the animal kingdom and its informational importance <cit.>, this study focuses on vision as the sole sensory modality. Rather than analyzing how biology currently answers the navigation problem, a more conceptual approach that explores potential algorithms for connecting sensory information with useful action can offer unique insights. Using Marr's levels to construct our hypothetical organism as an information-processing system <cit.>: the computational level is tasked with navigating to an expected food location in a given environment, the algorithmic level relates to the strategy or behavior needed, and the implementational level translates perceptual input to movement. While the high level is fixed in a given environment and the low level is relatively constrained across a particular species, the algorithmic level can vary between individuals as well as change through time due to the flexible arrangement of our neuronal media. At the timescale of an individual, this is where most of the creative differentiation happens.
Mirroring this intuition, by fixing the task and environment as well as the perceptual and movement-related degrees of freedom, freely fitting the parameters that govern behavior reflects short-term biological adaptation, i.e. development and learning <cit.>. There are a few different approaches to accomplish this. First is a parameter sweep for hand-coded behaviors or behavioral states, a strategy often used in mathematical modeling and individual-based modeling <cit.>. Although this method enables quicker computation and easier analysis, it constrains algorithmic expressivity to predefined dimensions. An alternative is to learn the rules using an artificial neural network. While hyperparameters still confer algorithmic constraints, this approach increases expressive dimensionality relative to hand-coded rules while maintaining computational speed and manipulability not easily attained via empirical approaches <cit.>. Considering the vast potential design space of navigational algorithms, prioritizing flexibility will likely be fruitful. The main downside, the lack of transparency of the learned algorithms, can be eased by the wide array of approaches available to peer inside the black box, such as perturbing along specific computational or implementational dimensions <cit.>. Particularly suited to visual perception, trained convolutional networks have been shown to learn representations similar to those of the visual cortex, a landmark demonstration of how artificial networks can be used to understand biology <cit.>.

The visual perception-action loops of embodied agents were trained with minimal environment, perceptual, cognitive, and action constraints to solve a basic navigational task. Computational and implementational level constraints were initially fixed, then perturbed to explore the interplay between the constraints and algorithms. Behavior was analyzed using various statistical methods, separated into distinct classes, and mechanisms were probed via perturbations and image difference matching. Resulting behavior was then compared with observed biological movement patterns, yielding evidence of convergence and generating hypotheses for algorithmic properties fundamental to spatial navigation.

§ IMPLEMENTATION

§.§ Perception
Visual input, stemming from a previous publication by our lab <cit.>, was simulated as a one-hot encoded raycast from the agent to the boundary walls of the environment (Fig. <ref>). Though a simplification from 2D vision, collapsing the vertical dimension has been found to retain sufficient directional information for visual navigation <cit.>. Two key hyperparameters govern the raycasting process: field of vision (FOV) and resolution. FOV stretches the limits of the raycast with respect to the agent, and resolution determines the number of rays to cast within these limits. The FOV was set at 40%, or 144 degrees, to mimic the functional visual field for humans, as measured by reaction time to discrimination tasks <cit.>, though the impact of FOV on navigation performance was also assessed (Fig. <ref>). The visual resolution (υ) was minimally initialized at 8 for much of the analysis and was later varied to examine its effect on behavior (Fig. <ref>). For basic perception, visual contact with walls was binary, not weighted by distance, while Weber-Fechner logarithmic scaling was applied for more complex vision, similar to how height is perceived by our eyes <cit.>.
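A minimal version of this raycast encoding can be written down directly. The sketch below is a schematic reimplementation based only on the description above (wall identities one-hot encoded per ray, FOV and resolution as free parameters, and an optional logarithmic attenuation standing in for the Weber-Fechner scaling made explicit in the next paragraph); the actual simulation code may differ in conventions and details.

```python
import numpy as np

def raycast_vision(pos, heading, fov=0.4 * 2*np.pi, resolution=8,
                   scale_distance=False, k=1.0, m=1.0):
    """One-hot visual encoding of the four boundary walls of a unit-square arena.

    Wall ids: 0 = y=1 (top), 1 = x=1 (right), 2 = y=0 (bottom), 3 = x=0 (left).
    Returns an array of shape (resolution, 4): each ray marks the wall it hits,
    either as a binary 1 (angle-only vision) or attenuated logarithmically.
    k, m are placeholder constants; in the paper they are set from the minimum
    and maximum possible distances and the attenuation factor sigma.
    """
    boundaries = [(1, 1.0), (0, 1.0), (1, 0.0), (0, 0.0)]   # (axis, value) per wall id
    angles = heading + np.linspace(-fov / 2, fov / 2, resolution)
    encoding = np.zeros((resolution, 4))
    for i, a in enumerate(angles):
        d = np.array([np.cos(a), np.sin(a)])
        hits = []
        for wall_id, (axis, value) in enumerate(boundaries):
            if abs(d[axis]) < 1e-12:
                continue
            t = (value - pos[axis]) / d[axis]    # ray parameter at which this boundary is met
            if t > 0:
                hits.append((t, wall_id))
        t, wall_id = min(hits)                   # nearest wall in front of the agent
        if scale_distance:
            encoding[i, wall_id] = max(0.0, -np.log(max(t, 1e-6)) / k + m)
        else:
            encoding[i, wall_id] = 1.0           # binary, angle-only perception
    return encoding

print(raycast_vision(pos=np.array([0.3, 0.6]), heading=0.5))
```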
In an unknown environment, an accurate distance estimation requires memory or prediction of wall height, or dynamic parallax inference, both of which require higher order computation and are prone to inaccuracies. Non-scaled vision represents minimum reliable environmental information, where only visual angles are available, and distance information was added at increasing attenuation levels to characterize increasing perceptual ability or memory. The scaling was calculated as follows: y = - 1/k(σ)ln(x) + m(σ) with x representing distance from the perceived environmental boundary, and k and m fixed, set by relating minimum and maximum possible distances to visual encoding ranges attenuated by σ, termed the distance scaling factor. §.§ Environment + Task The environment was designed to be minimally complex: a square grid with four distinctly perceivable walls. Using boundary walls rather than landmarks was to restrain ability to estimate distance via horizontal width, forcing the algorithm to rely soley on relative angles to grid corners. The agent was initialized in random locations and orientations for each simulation, driving the agents to learn an allocentric rather than egocentric representation of space. Environmental geometry provides the spatial reference needed for both orientation and location <cit.>, here bound by the four corners, which can be viewed as landmarks without directly estimable depth. Although landmark and geometric encoding has been argued to be performed by separate modules <cit.>, in the present study the two are not differentiated. The setup mirrors the classic Morris Water Maze, which tests spatial learning in mice and rats, as well as humans with a later variant of the setup <cit.>. As for the former, rodents are placed in a cloudy pool of water and must find a hidden platform submerged below the surface. If placed in the same starting location every time, the rodents may learn an egocentric strategy by counting the number of strokes taken in each direction in order to reach the platform <cit.>. If placed in random locations and orientation however, the rodents must learn an allocentric strategy by correlating external visual cues with an effective trajectory. It is this second, more challenging version of the experiment that relates to our setup. Since the spatial context changes unpredictably, vestibular integration and within-trial memory are not critical components. §.§ ANN Architecture The agents' information processing consisted of three modules: a convolutional network (CNN), a multilayer perceptron (MLP), and a linear output layer. The CNN used is a simplified version of the ConvNeXt v2 architecture <cit.>, a state-of-the-art design that outperforms earlier varitions in the CNN design space and rivals the best vision transformers. While simpler designs exist, the ConvNeXt was chosen for its separable depthwise and pointwise convolutions, where separate parameters act on the orthogonal depth and channel axes, enabling greater expressibility for relatively low additional computational cost. A transformer was not chosen since they lack biological inductive biases, such as translation equivariance, and due to their novelty, lack the understanding and tools currently available for CNNs <cit.>. A simple MLP was chosen as recurrent memory allowing a more complex temporal representation of the environment was not the goal of the present study. 
Rather, temporal perceptual ability or memory was phenomenologically implemented by adding distance scaling to the visual encoding. The single network output was scaled through a Tanh and linear hat function, representing a ratio between turning angle and speed. This choice reflects an intuitive action-space constraint that organisms need to slow down in order to turn. Beyond critical minimal values, depth and dimension of the CNN and MLP did not noticeably affect performance (Fig. <ref>, Fig. <ref>). Notably, two channel outputs for the CNN are insufficient for agents to reliably solve the task, possibly reflecting an intrinsic, minimal dimensionality to the navigation problem <cit.>. §.§ Evolutionary Algorithm The network was optimized via an evolutionary strategies (ES) algorithm: Policy Gradients with Parameter-based Exploration (PGPE) using the ClipUp optimizer <cit.>. ES uses population-based black-box optimization without gradient calculation, unlike reinforcement learning (RL), which optimizes a single agent, often via discounted rewards, by using backpropagation on a continuous differentiable objective function. ES entails less complexity than RL, as there is no need for differentiability, value function approximation, or temporal credit assignment <cit.>. Additionally, the intrinsic exploration of the population-based approach can help navigate rugged fitness landscapes. The main drawback of ES, sample inefficiency, is acceptable given its parallelizability <cit.>. Among various ES algorithms, PGPE was chosen for its balance between performance and speed, with greater performance than OpenAI-ES and greater speed than CMA-ES <cit.>. While the choice of initial random seed seems to cause issues for other types of RL algorithms, it did not affect our performance distribution (Fig. <ref>). Fitness, or performance, was measured as the time taken to reach the patch plus the remaining distance. The former is the primary driver for judging how well the agent navigates, and the latter drives initial learning behavior. The relative weighting of each was varied in initial experiments but found not to make a significant difference. § RESULTS Training the networks on this simple navigation task and environment yielded expected relative performance distributions (Fig. <ref>). While only visual angle input is needed to adequately learn the task, performance and speed of convergence improve with the saliency of the added distance signal. At maximal distance input, performance approaches that of perfect trajectories straight to the patch, though with a long tail depicting the few initializations at the corners of the arena that result in a premature limit cycle locked in a corner (a case explained in detail later). Attenuated scaling reduces the utility of the distance information in the network. At σ = 0.2, the signal variation is too subtle for the network to use effectively, slowing learning and hindering performance, and with stronger attenuation the signal is not used at all. On the other hand, an intriguing interplay of divergent individual signatures, despite convergence into classes or archetypes, was observed in the navigation trajectory space (Fig. <ref>). With angles-only vision, most trained agents from separate evolutionary runs tended to learn one of two separate strategies that we term "Indirect Sequential" and "Biased Diffusive", while adding distance scaling to the visual input resulted in a third class called "Direct Pathing".
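As a compact summary of the control and selection machinery described in the Implementation section (an illustrative sketch, not the authors' exact functions; the maximum turn and speed constants are assumed values), one plausible reading of the Tanh-plus-linear-hat action scaling and of the fitness measure is:

```python
# Illustrative sketch (not the authors' code): mapping the single network
# output to a (turn, speed) pair so that turning fast and moving fast are
# mutually exclusive, plus the fitness used to rank agents (time to reach the
# patch plus the distance still remaining when the episode ends).
import numpy as np

MAX_TURN = 0.3      # rad per step    (assumed value, not from the paper)
MAX_SPEED = 0.02    # arena units per step (assumed value, not from the paper)

def action_from_output(y):
    u = np.tanh(y)                       # u in (-1, 1); sign sets turn direction
    turn = MAX_TURN * u
    speed = MAX_SPEED * (1.0 - abs(u))   # linear hat: full speed only when not turning
    return turn, speed

def fitness(time_to_patch, remaining_distance, episode_length):
    # lower is better; agents that never reach the patch pay the full episode
    # time plus however far from the patch they ended up
    t = time_to_patch if time_to_patch is not None else episode_length
    return t + remaining_distance

print(action_from_output(0.2), fitness(None, 0.35, 1000))
```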
This clustering illustrates separate, competing potential wells in the algorithmic landscape, and as we will show, each class uses visual input and moves about the space with styles quite distinct from one another. It is clear that this approach yields substantial room to explore our previously posed questions on how the algorithmic level, i.e. navigation behavior, may be affected by different environmental, perceptual, and neuronal parameters. §.§ Indirect Sequential Perhaps the most striking of the three classes of algorithms are those that compose a sequence of route segments, scaffolded by distinct elliptical arcs sweeping across the arena (Fig. <ref>a). Tracing individual agent trajectories towards the patch involves long straight segments interrupted by sharp turns. Echoing indirect navigation through a grid-like city, these turns are often close to right angle, oftentimes travel is perpendicular to the destination, sometimes doubling back in the opposite direction before continuing. Though instead of buildings specifying route nodes, elliptical manifolds determine turn location, reducing the space of navigational possibilities to a compact route sequence. Despite a lack of distance perception, the agents learn to use spatial invariances in the environmental structure in order to reach the patch. However which invariances to use and how to compose them into an effective strategy varies between runs, as shown by the T-like, either clear or blurry, and plus sign patterns of the representative examples. These route-determining elliptical manifolds undoubtedly exist, but what determines their location and shape?
Perturbing corner positions and visual angles via the FOV range stretch, squeeze, and shift the ellipses in an intuitive manner (Fig. <ref>, Fig. <ref>). Though this does not answer the question of how the ellipses are constructed. Given the ellipses emanate from two adjacent corners, these thresholds may be marked by two raycasts simultaneously intersecting with both corner locations, representing a change in wall perception in both corners at once. A numerical search for these ideal locations does indeed show that simultaneous dual corner detection reconstructs elliptical arcs (Fig. <ref>, top row). Not only do perceptual and environmental parameters, such as FOV and corner position, affect dual corner detectability, but agents can learn to modulate incoming turning speed to adjust the positioning of the elliptical manifolds. Higher turning speeds means the agents sweeps wider angles, which result in an anticipation or a precession of the dual corner detection (Fig. <ref>). In case the agent cannot construct an ellipse to overlap the patch at low speed, the agent increases speed to shift the ellipse to a more useful position (Fig. <ref>), middle & bottom rows. Another way to view dual corner detection is that the agent learns critical visual snapshots with which to start their major turns <cit.>. By using image difference matching to find positions in the grid at which the agent can perceive the same visual input, accounting for both translation and rotation, the elliptical thresholds are recovered, provided that the visual input involves two corners (Fig. <ref>). In other words, the agent learns to exploit a spatially invariant degeneracy in translational and rotational degrees of freedom. Snapshots of a single corner can still be used for navigational inference, but their area of degeneracy is significantly greater, entailing lower inferential utility. This approach reveals a final key element regarding both single and dual corner detection: incoming orientation. In order to perceive the same snapshot from any point of the manifold, the agent orientation must be rotated about the center axis of the corner or corners, as illustrated by the color gradients in Fig. <ref>. Compiling all information from this approach into a single heatmap, all perceptual thresholds can be simultaneously illustrated (<ref>). The resulting figure maps out the perceptual affordance space, i.e. where the agent can switch control regimes. At these locations, the agent has the opportunity to switch from its current navigational perception-action circuit. Otherwise, the agent will continue along a straight trajectory, behavior which is modulated by a balance between a circuit of two or few views commanding the agent to turn slightly left and right. Aligning with the previous approach, the affordance space thresholds match with those of the dual raycast plots. Though to describe the remaining thresholds in the affordance space that stretch diagonally across the grid, the points at which the agent simultaneously perceives two opposite corners must be added (Fig. <ref>). Accounting for both adjacent and opposite corner raycasts, the high degree of overlap indicates the two approaches are alternative explanations of the same phenomenon. 
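The numerical search described above can be sketched as follows (illustrative Python, not the authors' code; the unit arena, 8 rays, 144-degree FOV and the half-ray-spacing tolerance are assumptions): positions are marked whenever some heading lets two adjacent corners each align with one of the cast rays.

```python
# Illustrative sketch (not the authors' search code): scan positions and
# headings in the unit arena and mark where two adjacent corners are both
# "detected", i.e. each falls within half a ray spacing of one of the cast
# rays. The marked set traces the turn thresholds discussed above.
import numpy as np

def detects_both(pos, heading, corners, fov=0.4, resolution=8):
    offsets = np.linspace(-fov*np.pi, fov*np.pi, resolution)
    rays = (heading + offsets) % (2*np.pi)
    tol = 0.5 * (2*fov*np.pi / (resolution - 1))   # half the angular ray spacing
    hit = []
    for c in corners:
        ang = np.arctan2(c[1] - pos[1], c[0] - pos[0]) % (2*np.pi)
        diff = np.abs((rays - ang + np.pi) % (2*np.pi) - np.pi)   # wrapped angle difference
        hit.append(diff.min() < tol)
    return all(hit)

adjacent_corners = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]   # the two ends of one wall
xs = ys = np.linspace(0.02, 0.98, 80)
threshold_points = []
for x in xs:
    for y in ys:
        for heading in np.linspace(0, 2*np.pi, 90, endpoint=False):
            if detects_both((x, y), heading, adjacent_corners):
                threshold_points.append((x, y))
                break

print(len(threshold_points), "grid points lie on the dual-detection set")
```

Plotting `threshold_points` reproduces the kind of arcs spanning the two chosen corners; the shape shifts with FOV, resolution, and corner placement, mirroring the perturbations described in the text.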
To summarize, these agents navigate by sequentially interleaving straight line segments with grid-like turn decisions upon intersection with elliptical manifolds, which are locally perceived with respect to two adjacent corners, a mechanism governed by perceptual, environmental, and movement constraints. By examining the statistics of the movement behavior, the orientational sparsity of the routes these indirect sequential agents take becomes more evident. Although agents travel for much of the time on routes chosen quickly after initialization, deviations from this initial route are only made at a minor collection of tight alternative headings, as shown by both the horizontal lines on the spatial relative orientation profile and the tight bumps on the corresponding polar histograms (Fig. <ref>a, top left & top right). Correlating direction relative to patch position was found to not be as visually intuitive (Fig. <ref>, <ref>). The initial heading shapes the route by triggering the perception-action sequence, while there is no accessible representation of where the patch is or will be, resulting in distinguishable patterns in the former but not the latter. Adding a small amount of noise to their visual angle greatly inhibits their ability to find their way to the patch (Fig. <ref>), revealing that visual angle is critically important to the efficacy of their navigation algorithm. Given that incoming orientation is a principal parameter determining the perceivability of the manifolds, inaccuracies in visual angle disrupts reliable calculation of these turn decisions. §.§ Biased Diffusive The movement profile of the local sequential agents stands in stark contrast to the second class of algorithms, which can be distinguished by looping or ratchet-like trajectories, with a complete absence of major turn points (Fig. <ref>b). While overall progress is generally biased towards the direction of the patch, these agents have found their niche by regularly spinning around before continuing in the direction of the patch, as if they are looking around to sense relative global position. Rather than learning which locations to stage major turns, they vary turning speed to dynamically bias their overall motion, turning faster when facing away and slower when facing towards the patch. The trajectory maps render clouds of agent trajectories with little apparent correlation amongst each other, illustrating in contrast to the previous group that there are no clear manifolds or sequence compositions, instead the process is diffusive. The more distributed profiles in the heading profile and polar histograms show the decreased relative importance of particular directions (Fig. <ref>b). The diagonal lines across their spatial profiles show the constant speed at which these agents turn. Given the similarity between their trajectory maps and sperm chemotaxis, we temporally correlated orientation from initial heading <cit.>: C(t) = <cos[ϕ(t _0 + t) - ϕ(t _0)]> with ϕ representing the orientation angle of the agent. The bottom left plots in Fig. <ref>b illustrate the rhythmic oscillation of the biased diffusive algorithms. The oscillation in the correlation function shows that each agent spins at a similar frequency regardless of initial location and orientation and at different frequencies between agents. 
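For reference, this statistic is straightforward to compute from a recorded heading time series (an illustrative sketch, not the authors' code):

```python
# Illustrative sketch (not the authors' code): the heading autocorrelation
# C(t) = <cos(phi(t0 + t) - phi(t0))>, averaged over start times t0 for one
# trajectory; averaging the result over agents gives the population curve.
import numpy as np

def heading_autocorrelation(phi, max_lag):
    """phi: 1D array of headings (radians) along one trajectory."""
    C = np.zeros(max_lag)
    for lag in range(max_lag):
        dphi = phi[lag:] - phi[:len(phi) - lag]
        C[lag] = np.cos(dphi).mean()
    return C

# toy check: a trajectory that spins at a nearly constant rate gives an oscillating C(t)
rng = np.random.default_rng(0)
phi = np.cumsum(np.full(2000, 0.05) + 0.01*rng.standard_normal(2000))
print(heading_autocorrelation(phi, 10))
```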
Though the oscillation tends to retain positive correlation longer than the indirect sequential agents, their decorrelation time, marked by the time needed for half of the agents to decorrelate from their initial heading, is lower. By spinning, the biased diffusers shift from their initial heading rather quickly, although they tend to follow a generally unidirectional overall trajectory towards the patch. Indirect sequential agents, however, stick to straight paths, yet significantly deviate in different directions upon reaching an elliptical manifold, eliminating any remaining positive correlation. The second key measure to quantify differentiation is the range of directional possibilities available to the agent, mapped across space. We calculate the information entropy <cit.> of the orientational frequency distributions for each spatial bin across the environment: H(Φ,b) = -∑_ϕ∈Φp(ϕ)log(p(ϕ)) with H being the directional entropy, Φ being the set of 16 orientations, and b the spatial bin. Bounding to the range of entropy values possible for Φ and inverting, we arrive at a measure of directedness: D(H,Φ,b) = H_max(Φ) - H(Φ,b)/H_max(Φ) - H_min(Φ) Entropic directedness further demonstrates a classification boundary between the two classes (Fig. <ref>, bottom right). Average directedness is generally greater for the indirect sequential than the biased diffusive agents, i.e. directional entropy is lower. This measure quantifies the observations made for the top plots of Fig. <ref>: the first class tends to stick to a few directions at any given point in the arena, while the second is more directionally indiscriminate. Also observable is the elliptical manifolds for the indirect sequential agents, reflecting the wider range of potential directions while they change their heading, whereas there is no evident spatial structure seen for the biased diffusive. To better understand both metrics, a null model was created with known movement parameters of a persistent random walk to recreate values observed. Varying the rotational diffusion coefficient of the agent reflects the entire range of decorrelation time (Fig. <ref>). The directedness for these agents however is much lower than any agent evolved due to the random directionality of the null model agents. Only by adding an allocentric bias, such as pushing the agent toward the patch, can we recreate the range of the directedness metric (Fig. <ref>). Decorrelation time thus measures the temporal aspect of correlated movement, while directedness measures the spatial aspect. The correlated oscillations in the orientational correlation plots can be recovered by adding a fixed spin to the agent for each timestep (Fig. <ref>). Though to observe the sustained positive correlation of these oscillation plots, a ratcheting threshold must be added to keep the agent moving in a similar heading (Fig. <ref>). As opposed to the first class, this algorithm is robust to visual angle noise (Fig. <ref>). Although visual angle is certainly useful information to these agents, its utility does not depend on its accuracy. Rather than relying on heading adjustments based on localized spatial information, constantly sweeping visual perspective around the entire grid allows access to global information, invariant to local noise. However this comes at an apparent cost to reliable determination of patch location, judging by the wider radius at the end of their trajectories seen in both the map and spatial profiles. 
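An illustrative sketch of the entropic directedness computation (not the authors' code; the 20x20 spatial binning is an assumption, while the 16 orientation bins follow the text; the discrete minimum entropy is zero, so the normalisation reduces to dividing by the maximum):

```python
# Illustrative sketch (not the authors' code): entropic directedness per
# spatial bin, i.e. one minus the normalised entropy of the heading
# distribution observed within that bin.
import numpy as np

def directedness(positions, headings, n_space=20, n_orient=16):
    """positions: (T, 2) array in the unit arena; headings: (T,) in radians."""
    D = np.full((n_space, n_space), np.nan)
    ix = np.clip((positions[:, 0]*n_space).astype(int), 0, n_space - 1)
    iy = np.clip((positions[:, 1]*n_space).astype(int), 0, n_space - 1)
    io = ((headings % (2*np.pi)) / (2*np.pi) * n_orient).astype(int) % n_orient
    h_max = np.log(n_orient)              # entropy of a uniform heading distribution
    for bx in range(n_space):
        for by in range(n_space):
            sel = (ix == bx) & (iy == by)
            if sel.sum() == 0:
                continue                   # bin never visited
            p = np.bincount(io[sel], minlength=n_orient) / sel.sum()
            h = -(p[p > 0]*np.log(p[p > 0])).sum()
            D[bx, by] = (h_max - h) / h_max   # 1: single heading, 0: all headings equally likely
    return D

# sanity check: uniformly random headings give directedness close to 0
rng = np.random.default_rng(0)
pos = rng.random((200_000, 2))
head = rng.random(200_000) * 2*np.pi
print(np.nanmean(directedness(pos, head)))
```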
As such, it would be expected for the biased diffusive algorithm, conferring lower fitness, to evolve at a lower probability than the indirect sequential. In order to answer such a question, we must first quantifiably characterize the separation between the two algorithms. By plotting entropic directedness and decorrelation time for each evolutionary run, we can find such a clear demarcation and can use location in this phase space to label each agent as residing in one or the other class. (Fig. <ref>, <ref>, <ref>). Indeed, both fitness and relative occurrence of the biased diffusive class is lower than the indirect sequential, at least for a visual resolution of 8 rays. Increasing resolution can be seen to shift both relative fitness and evolvability in the other direction, favoring the biased diffusive at high visual resolution (Fig. <ref>). Evidently, varying visual density tilts the fitness landscape between these two algorithms. However average performance remains at a similar level irrespective of resolution and highest possible performance from either algorithm is comparable (Fig. <ref>, <ref> right). Thus either algorithm can confer similar navigability, but have different input requirements to facilitate movement computation and thus to readily evolve. Note these two classes are not mutually exclusive. Instead they exist along a continuum, with a smaller subset of runs containing significant elements of both navigational algorithms (Fig. <ref>). This suggests the potential for hybridization between algorithmic structures, yet they tend to correspond with lower fitness and occurrence (Fig. <ref>, <ref>). The runs have been quantified as those outside the cluster centers of the plots in Fig. <ref>, and qualitatively determined as to which class the trajectories are most similar. A few runs within the indirect sequential cluster have been determined to be better expressed as hybrids. §.§ Direct Pathing Until now, agents have not been able to perceive their distance from walls, relying instead on simple binary vision and relative angles. Adding distance information to the visual input gives rise to a third class of algorithms, allowing agents to navigate directly to the patch. Although superior in navigational performance, these trajectories require a more complex visual system, a tradeoff likely relevant in nature. These agents often immediately incorporate wall distances and align with the direction of the patch. Many instantiations of the direct pathing algorithm calculate unique paths for each potential initialization, resulting in spiky star-like trajectory maps as in the top example of Fig. <ref>c. Each differs in the degree common routes collapse as they approach the patch, resembling preferential alignment along entangled fast and slow nullclines. The directness of their routes also shows up in their spatial profiles and polar histograms (Fig. <ref>c). After the initial heading calculation, the vast majority of movement is confined to a tight range along that same direction. The accuracy of their localization ability is demonstrated by the tight revolutions of the ending trajectories near the center of the patch, and the thin cyclical space in the left hand side of their spatial orientation profiles. Though there are some locations in the environment that catatrophically break the navigability of some instantiations. The circles at the four corners of the trajectory maps of the bottom right example in Fig. <ref> and the corresponding low directedness shown in Fig. 
<ref> reveal the existence of these condemned fates. Given the logarithmic nature of the distance scaling, agents perceiving a corner maxes out their turn, and each succesive orientation is still close enough to the corner that the perception-action loop traps the agent in position. Although more apparent and frequent when distance information is available, these cycles are sometimes observed in the corners for biased diffusive agents. These maladaptive limit cycles are apparently an edge case that do not occur often enough in the generational timeframe for the evolutionary algorithm to adequately account for. A caution for model choice, these artefacts are due in part to the deterministic nature of the networks. By attenuating the distance signal, the evolvability of direct pathing algorithms decreases until the other two classes fully occupy the resulting space (Fig. <ref> <ref>). At a distance scaling factor of 0.4, the three classes coexist at a triple point, though as mentioned before, as continuums among each other rather than as clearly designated phases (Fig. <ref>). What differs now, is that attenuating distance information has a clear negative effect on performance. Therefore, developing navigation algorithms have a distinct incentive to use distance information when possible, which helps to understand the more abrupt, sigmoidal phase transition as the signal dampens. When no longer possible to adequately use the signal, evolving networks abandon that dimension of information and instead revert back to the angle-based systems of the indirect sequential and biased diffusion classes. § DISCUSSION We trained simple yet expressive artificial neural network controlled agents to visually perceive a sparse grid environment and move to a hidden patch in a robust and efficient manner. By initializing the agents in random positions and orientations for each task iteration, they were forced to develop a perceptive-invariant, allocentric frame of reference to perform indirect spatial inference, correlating visual cues of the environment boundaries with movement towards the patch. Three distinct classes of algorithms emerged in order to accomplish this, each with a set of distinguishable features. The first class of agents navigate by composing a sequence of straight segments, indirectly oriented with respect to the patch, with major turns defined by elliptical-shaped manifolds. These manifolds designate spatially invariant thresholds which separate potentially distinct perception-action control regimes, and arise through a perceptual degeneracy in rotational and translational degrees of freedom. The second class uses the same manifolds to perceive the environment, though rather to dynamically adjust turning speed, resulting in diffusive motion biased towards the patch. And the third uses distance perception, i.e. memory and inference, to accurately determine near perfect paths directly to the patch. Described in terms of their perceptual preference and resulting behavior, these classes are termed indirect sequential, biased diffusive, and direct pathing, respectively. Angle-based algorithms (indirect sequential and biased diffusive), confer comparable performance though prefer different visual resolutions. Indirect sequential agents evolve more readily with lower visual resolution and biased diffusive with higher. 
While only requiring a lower quality visual signal and local observations, the indirect sequential agents are highly susceptible to visual angle noise throwing them off-course, whereas such noise does not affect the dynamic, globally perceptive process of the other class. The distance-based, direct pathing agents perform better than either angle-based class, resulting in an abrupt phase transition once the distance input becomes computationally usable; however, they require a second informational dimension in their perceptual process. The three classes can be separated by decorrelation time and entropic directedness. Decorrelation time indicates temporal persistence in heading, and directedness indicates directional potential in space, in other words, whether an agent is more likely to be traveling in one or in many directions at a particular point. Biased diffusive has low measures for both, indirect sequential has medium, and direct pathing has high. While the three classes tend to cluster into distinct bins in this 2D phase space, they are not mutually exclusive. Hybrid forms share features across classes, using a mixture of information types and behaviors to construct their routes. §.§ Convergent Behavior A convergent learning hypothesis has recently emerged to explain the striking similarity between early parts of the visual cortex and convolutional neural networks, the latter of which contain only abstract biological detail and are trained on simple image classification tasks <cit.>. The hypothesis proposes that training a sufficiently expressive network to perform a task, whether biological or artificial, will ultimately result in similar learned task behavior and representations. Further evidence has started to accumulate as artificial networks trained on biological tasks have been used to generate neural control hypotheses <cit.>. Similarly, a "universality hypothesis" is being explored in the machine learning community, attempting to explain how a wide array of artificial neural network architectures and initializations can result in similar trained representations <cit.>. §.§ Convergent Behavior: Morris Water Maze The three classes of navigational algorithms and trajectories, resulting simply from optimization towards effective navigation, appear to reflect rodent swimming paths observed in the Morris Water Maze <cit.>. Having to rely on vision and their memory of previous attempts, rodents learn to navigate to a hidden platform. While the optimal choice is to take the direct path to the patch, many rodents travel in a clearly wrong direction before making a turn that takes them to the patch. Although often termed "indirect search", this pattern parallels behavior of the indirect sequential class <cit.>. Other rodents instead walk towards the patch yet spin in circles as they approach, attributed as "self-orienting movements" or even "nonsense movements" of a "directed search" strategy <cit.>. Biased diffusive agents rotate in a similar fashion to these trajectories. The relative performance distributions of the three strategies correspond closely to those of our classes: the two angle-based strategies perform comparably, with slightly better performance from the distance-based, direct pathing class <cit.>. The relative occurrence frequencies are roughly in balance between the three classes, as seen for a distance scaling factor of 0.45 in Fig. <ref>. This may be argued to be an evolutionarily optimal condition for the species, since each strategy is balanced by tradeoffs.
Individual variation in visuo-spatial preference has been observed across populations of zebrafish, likely evolved as a population-level survival strategy <cit.>. The observation of behavioral hybridization mirrors the recent studies that segment rodent navigation behavior observed in a single trial into multiple separate strategies <cit.>. Rather than averaging the behavior across an entire episode and fitting it into a single class, each strategy may be used at different situations or frequencies. A more robust navigational strategy utilizing multiple algorithms however requires a different computational medium than that tested in the present study, as it would involve a system of how to choose and when to switch behaviors. §.§ Convergent Behavior: Binary Spatial Choice This behavioral similarity between water-bound rodent and simulated agent trajectories presents an intriguing hypothesis: are the learned representations of the aritifical agents similar to those of the rodents? Suggestively, elliptical decision-making thresholds have appeared recently as governing spatial choice bifurcations <cit.>. Fruit flies, desert locusts, and zebrafish have each been observed to move towards the average of two choices until reaching a critical threshold and spontaneously force the decision towards one or the other option, even in sequence if more than two options are available. By simulating this behavior, Gorbonos et al has found elliptical geometry to define this threshold, spanning two options in much the same way ellipses defined by two adjacent corners govern navigational decisions in indirect sequential agents. A ring attractor model, using a spin system or a Hopfield network, reproduced these bifurcations through local excitation and global inhibition between choices. Rather than resulting from training perception-action loops for a navigation task, the model describes the emergence of choice between goal vectors. While the generated elliptical threshold is used to compose route segments for indirect spatial inference in our case, here it is used to mark when to choose to travel towards a particular goal or set of goals. Despite both computational and implementational level details are distinct from the present study, the representational space appears to have converged on a common element. In fact, this only compounds the argument of the fundamentality of this spatial decision-making algorithm. The possibility of a binary choice algorithm to govern navigation has been suggested before <cit.>. Further giving credence to the idea that this algorithm may be applicable across contexts, a common egocentric navigational and joystick movement algorithm has recently been found to be linked via the hippocampal representational space in mice <cit.>. §.§ Convergent Behavior: Sperm Chemotaxis In a third case of convergent behavior, the biased diffusive trajectories mirror the helices of sperm cell chemotaxis <cit.>. Although not visually sampling environmental landmarks, sperm cells sample chemical concentrations to bias their motion up-gradient towards an egg. Distinct chemotactic behavior from slime molds, which measure concentration at different spatial points in their body to estimate the gradient, sperm as well as bacteria compare concentration across different points in time as they move <cit.>. 
Bacteria temporally correlates concentration through modulating stochasticity, a process described as a biased random walk, while sperm deterministically modulates the curvature of their swimming path, constructing helices up a gradient <cit.>. This results in sustained correlation oscillations in heading across time (<cit.>), as in Fig. <ref>. §.§ Vision vs. Vectors The success of purely visual navigation algorithms, even without distance information, as well as the apparent similarity to rodent trajectories, provides evidence for the sufficiency of vision-based representations to guide navigational behavior. The leading hypothesis, cognitive maps, has been widely accepted as the way many organisms represent space, with place cells tuned to specific points along a scaffold of grid cells, collectively comprising a Euclidean graph <cit.>. A major argument for such a representation is the ability to flexibly take novel shortcuts between two locations. Though the robustness of this ability has been put in question. Warren has argued that although a Euclidean structure would be convenient, several experiments have demonstrated consistent errors in direction estimation and route integration as well as systematic local feature biases <cit.>. Instead, a labeled graph or a network of views better describing the locality and bias observed. A unifying perspective suggest multiple parallel representations exist, including: Euclidean maps, route graphs, and local scene or image systems <cit.>. Each can benefit from partial information sharing from the other, such as using local observations to update angles between points on the map or route graphs <cit.>. Navigation via direct perception of the environment, the type explored in this study, aligns closely with the image-based representations of the parahippocampal place area, occipital place area, and retrosplenial complex <cit.>. Although distance information, kept abstract in the present study, perhaps arises from a cognitive map or cognitive graph representation. §.§ Vision vs. Vectors: Individual Variation Individuals seem to differ in preference of spatial representation, often illustrated as a continuum between map-based and route-based systems <cit.>. While map-based navigation is often portrayed as superior <cit.>, both systems appear to perform on par with each other <cit.>, with slight advantages towards one or the other in their respective environment <cit.>. Recent evidence has proposed both reference frames to project into our understanding of the environment <cit.>, likely with variable preferential ratios. In context of the present study, this aligns with the continuum between direct pathing agents, which uses a spatial map to estimate distance to boundaries to interpolate patch location and direction, and the angle-based agents that use visual input to construct routes. Variation within the route-based navigation class does not seem to have been well-studied <cit.>, perhaps due to its perceived non-ideal nature. The usefulness of such strategies seems to be an under-explored area, whether it be computational simplicity or risk avoidance in an environment with potentially dangerous shortcuts. Intraspecies differentiation in navigation ability is not unique to humans. In a study by Yashina et al, individual zebrafish exhibited a variety of individually consistent strategies, differing in the way they use spatial cues to safely navigate their environment <cit.>. 
A notable recent study found consistent interspecies navigational ability between two birds, one food-caching and the other not <cit.>. Payne et al demonstrates significant and expected distinctness between the two bird species, as well as a comparability of neural signatures between the two birds and mammals. More in line with the route-based or scene system approach, a neural circuit perspective proposes that complex behaviors are compositions of simple underlying algorithms. Spanning phototaxis <cit.>, escape response <cit.>, thermotaxis <cit.>, evidence accumulation <cit.>, and collective behavior <cit.>, an algorithmic approach has the possibility to yield a finer grained understanding of complex behaviors. Rather than requiring to idealize a Euclidean representation of space, this approach offers a bottom-up understanding of how simple perception-action heuristics can be arranged to perform higher order computations. Though how the algorithmic level should be trained in order to effectively correlate artificial and biological representations is an open question, whether it be for prediction which is then used to solve a task, as in <cit.>, or explicitly for solving a task, as in our study and those comparing CNN classifiers to the visual cortex <cit.>. §.§ Insect Visual Navigation The cognitive flexibility required to construct parallel representations such as maps, graphs, and scence systems may be a luxury of a high amount space alloted to computation. Which representational framework central-place foraging insects use to return home or to a foraging patch and how they use visual information to effectively navigate is currently being contested from a number of angles <cit.>, with the aim of exploring navigability under the computational constraint of their minimal neural systems. In a similar way to mammalian debates, various perspectives argue over whether the framework includes an explicit map or a disjoint collection of routes and observations, whether visual perception is translated into a goal vector estimate or simply converted into a useful movement signal, and to what extent and at which point odometry, vector estimates, and view memory information is combined. Although egocentric view or route-based systems appear to have a stronger foundation to a much greater extent in insects than for other organisms, where studies on allocentric encoding currently dominate. It remains to be seen if such viewpoints are relevant to navigation frameworks in humans, rodents, birds, or fish. Many insect visual navigation systems have particular relevance to the model of the present study. Baddeley et al used a neural network to determine familiarity of present views with those previously experienced <cit.>. Stone et al proposes Zernike moments, providing rotational invariant shape representations, to increase the catchment area of view-matching navigation <cit.>. Le Möel et al demonstrates the effectiveness of using familiarity as well as anti-familiarity to make effective movement decisions <cit.>. Firflefyn et al used convolutional networks trained to estimate a goal vector before calculating movements <cit.>. Hoinville and Wehner combines view matching with path integration, using both as dissociable systems that each estimate a goal vector <cit.>. While goal vectors clearly do exist <cit.>, to what extent they drive navigational decisions remains unanswered, especially given how frequent animals make notably indirect trajectories to reach a goal. 
Not including them in a model, such as in ours, allows this alternative hypothesis to be explored. Several studies use an environment similar to the Morris Water Maze adapted for insect navigation, using heat as a deterrent rather than water. Crickets were found to successfully find the non-visible target, with theoretical extensions proposing center-of-mass landmark vector-based or image difference matching without vector calculation as two potential mechanisms <cit.>. A similar setup was used for Drosophila to investigate neural and visual receptive fields hypotheses <cit.>. Dewar et al finds that low resolution ring neuron receptive fields are sufficient for visual navigation, potentially marking a bridge between the CNN perceptual medium used in this study and those used in the binary choice tasks <cit.>, and highlighting the similarity between the two approaches. §.§ Visual Perception Another outcome stems from the tradeoff between visual angle and distance information. While navigating directly to the patch takes the least amount of time, it is clear that it requires a greater amount of energy, whether that be regarded as sensory capability, attention, memory, computation ability, or a mix. However, navigation relying on simpler visual angle principles performs almost as well. Rather than analyzing navigational deficits of these angular strategies, we may find insight in asking what other dimensions may be enhanced as a result of simplifying the visual space, exploring the Pareto front of energy and performance <cit.>. Our minimal approach allows easier generalization and analysis, however it carries a corresponding drawback. Biological organisms are not so simple. We used vision here as the only sensory modality, while animals can smell, hear, vestibulate, echolocate, and use a variety of other senses to make their way towards a goal <cit.>. Non-local, celestial information is a major component in the navigation and orientation of most animals, including us <cit.>, though such cues need to be time-compensated due to the earth's rotation and may require magnetic calibration <cit.>. There are many interesting questions to ask and tradeoffs to explore in the multisensory space <cit.>; however, here we focused on vision for simplicity and for relatability among us humans. Limiting FOV may be relevant for our human perception, though many animals have near complete vision. In either case, turning one's head can serve as complete visual perception of the surroundings. By integrating such percepts, navigation will surely follow the distance-based strategy of navigating directly towards the patch. As described in terms of image difference matching <cit.>, panoramic visual input provides precise location information and local gradient descent can lead one directly to any desired place. Through limiting the FOV, visual resolution, and memory, this task of spatial inference becomes more difficult. As demonstrated above in describing the emergence of elliptical thresholds and in Fig. <ref>, regions of perceptual degeneracy result from the two degrees of freedom: rotation and translation. This means that if we do not constantly turn our heads around, we suffer from visual degeneracy. Rodents appear to suffer from the same perceptual-movement limitation (citation needed for the oculomotor reflex), while insects do not <cit.>. Regardless, both angle-based navigational strategies learn to exploit the constraint, enabling sufficient performance at ostensibly lower visual computational capacity.
Especially so for the indirect sequential agents, only needing to store important turning information in a few key snapshots. Evolvability and navigational performance shift from perferring the indirect sequential strategy to biased diffusion as visual resolution increases. Though it may be surprising, lower resolution observations increases the signal to noise ratio when navigating by visual familiarity, thus smoothening the image difference function and increasing navigational ease <cit.>. As predicted by this theory, low dimensional visual filters used for orientation have been found in the Drosophila central complex <cit.>. View matching is only one theory for how organisms may use environmental geometry to spatially reorient. Modularity theory, associative theory, adaptive combination theory, and an empirically-driven neural substrate approach each are competing perspectives that may be more or less applicable to a given species <cit.>. The above approaches have not been tested here, as a CNN is the computational substrate used. Comparing other means of organizing geometry in making navigational decisions would be an interesting extension, though would not be universally applicable, as the first three would require object-based processing streams. As discussed above, a neural substrate approach, using a ring attractor network, has produced similar elliptical thresholds as our view-matching model, suggesting compatibility between the two approaches <cit.>. §.§ Real World Applicability Through minimizing complexity, tractability is maximized, in terms of both the model to learn the task and for resulting behavior to be analyzed, as well as generality across environments and species. However this comes at the cost of reduced detail relative to real world environments and biological neural systems. Experimental paradigms such as the Morris Water Maze highlights specific aspects of navigational problem-solving ability, and thus can be used to isolate genetic or neural components necessary for such ability or to differentiate the range in ability across a population. Although in the wild, the need to get to a specific location without being able to directly perceive it is not often realized. The cognitive flow of information is constrained, largely to a single visual process. Most biological organisms have access to some form of memory, likely enabling more complex spatial reasoning to be performed. Of course, flexible spatial adaptivity arises from recurrent connectivity, place cells, and grid cells <cit.>. Such inferential capability allows an agent to estimate distance to a goal <cit.>, or prediction of future locations to plan trajectories <cit.>. Though such abilities are balanced by the energy required for the needed complexity in computation. Other navigational algorithms are unable to be learned in this cognitive model instantiation, such as using short-term memory to turn around and develop a global perspective, before moving straight along an inferred vector. Many more algorithmic variations are possible, but out of the scope for comparision in the present study. Rather than implementing such processes, the end result can be inferred to be similar to the direct pathing agents. The action space simplifies the degrees of freedom to a ratio between speed and turning, constrained to only forward movement. 
Such limitations may mirror ant movement <cit.> or rodent head-eye constrained perception <cit.>, but the lack of lateral and backward directions are not directly comparable to human movement. Lateral movement, coupled with short-term memory, can be used to infer distance via parallax. Though similarly to enabling memory, increasing dimensionality in the action space effectively leads to the trajectory space of the direct pathing agents. Another way of constructing the cognitive flow is in terms of perceptual control. Rather than direct control of an action, the agent may update a probabilistic map of where the patch might be before sampling within that space to take an action, i.e. updating a goal vector encoding <cit.>. Although subtle in its difference, this control approach may yield unique algorithms by framing useful action in terms of desired percepts or direction <cit.>. §.§ Navigation vs. Search Navigation and search have been historically semantincally entangled, both enveloped by the term "foraging", yet the role of memory plays quite unique parts in both. Navigation involves travel towards a known destination, whether previously visited personally or not, and search involves travel to a known thing or place with unknown location. These two processes certainly overlap, as search may precede later navigation or navigation is only possible proximally and search is needed to reach an exact spot. As such, organisms may evolve a preference towards one or the other depending on what the environment allows or requires. For example, C. elegans has been found to evolve a search strategy with a few key gradient-following principles <cit.>, while loggerhead turtles have developed trans-Pacific migratory behavior <cit.>. Though for many, both are needed. §.§ Social Navigation Taking one step further, for social animals in particular, navigational computation may be offloaded to others. Rather than having to create a complete map of the territory, many animals may learn to depend on the navigation abilities of their conspecifics <cit.>. This may be particularly true for migration, with vast distances and locations sometimes not known by a vast majority of the collective. Much like the simplification to binary decisions and reliance on visual angle, social animals may learn to rely on collective computation rather than individual navigation ability. In order to do so, they must associate themselves with respect to their spatial surroundings, as well as to their social context, which recent studies have shown to both reside in mammalian hippocampal activity <cit.> §.§ Robotic Navigation The recent performances of single camera autonomous vehicles demonstrate just how much can be attributed to vision without binocular depth perception <cit.>. While another line of research has been using biological network design and learning patterns to decode visual navigation principles or to optimize for memory or space <cit.>. Most relevant to the angle-only navigating agents, a number of approaches have designed means of bearing-only navigation, given the unreliability of depth estimation <cit.>. An approach to minimize navigational errors when experiencing perceptual degradation, as shown to play a significant role in the minimal environment of the present study or in the real world environments such as caves and mines, has been put forth by Ebadi et al <cit.>. 
As the convergent hypothesis would predict, one group has sought elliptical geometry with respect to landmark bearings to ground their navigation, similar to what our agents learned from raw experience <cit.>. § ACKNOWLEDGEMENTS This work was supported by the Elsa-Neumann Stipendium. Ideas within this paper were inspired by discussions at CapoCaccia Cognitive Neuromorphic Workshops (CCNW) 2024, especially those with Gabriel Gattaux, Andrew Philippides, and Florian Engert. § SUPPLEMENTARY
http://arxiv.org/abs/2407.13235v1
20240718074510
Minimal-length quantum field theory: a first-principle approach
[ "Pasquale Bosso" ]
gr-qc
[ "gr-qc", "hep-th" ]
Minimal-length quantum field theory: a first-principle approach Pasquale Bosso (pasquale.bosso@ino.cnr.it), CNR-INO, Istituto Nazionale di Ottica, Via Campi Flegrei 34, Pozzuoli, I-80078, Italy Phenomenological models of quantum gravity often consider the existence of some form of minimal length. This feature is commonly described in the context of quantum mechanics and using the corresponding formalism and techniques. Although a few attempts at a quantum field-theoretical description of a minimal length have been proposed, they are rather the exception and there is no general agreement on the correct one. Here, using the quantum-mechanical model as guidance, we propose a first-principle definition of a quantum field theory including a minimal length. Specifically, we propose a two-step procedure, by first describing the quantum-mechanical model as a classical field theory and subsequently quantizing it. We are thus able to provide a foundation for further exploration of the implications of a minimal length in quantum field theory. July 22, 2024 ================= § INTRODUCTION A minimal measurable length, explicated in various forms, is present in several approaches to quantum gravity <cit.>. From the phenomenological point of view, a relevant and interesting question concerns the effects of such a minimal length on low-energy systems <cit.>. A possible strategy to introduce a minimal length, in the form of a minimal uncertainty in localization, is to modify the commutation relation between position and momentum, that is, the mathematical structure of quantum mechanics implying the uncertainty relation between these two quantities <cit.>. In what follows, when we refer to a minimal length, we will specifically mean a minimal uncertainty in position. Studying such deformed models can then be used as a probe for features in high-energy physics, aiming for an indirect observation of quantum gravitational effects. Currently, quantum field theory serves as the indispensable theory to describe high-energy phenomena. Therefore, it represents the necessary language for a phenomenological description of quantum gravitational effects. Unfortunately, when it comes to models concerning the description and properties of a minimal length, quantum field theoretical models are rather the exception than the rule (see, e.g., <cit.>). Furthermore, the few models present in the literature are usually inspired by the mathematical structure of the quantum mechanical counterpart, rather than by introducing a consistent construction that could lead to physical features associated with a minimal length. In this work, we intend to solve this issue via a two-step procedure. First, starting from the description of minimal-length quantum mechanics, we will rewrite it as a classical field theory, mimicking the way ordinary quantum mechanics can be described as a classical field theory, that is, in terms of a classical field (the wavefunction) whose equation of motion is the Schrödinger equation. Along the way, studying the symmetries under spatial translations, time shifts, and complex phase shifts, we will find the corresponding conserved currents and densities. Second, we will derive a (non-relativistic) quantum field theoretical formulation for a minimal length by quantizing the classical counterpart, thus effectively introducing a second quantization.
We already anticipate that this procedure will result in a model in which the commutation relation between fields is not deformed, contrary to what is commonly assumed in the literature. This work is organized as follows. In Section <ref>, we review the main features of quantum mechanics as a classical field theory. In Section <ref>, we review the fundamental and necessary properties of a quantum-mechanical description of a minimal measurable length. In Section <ref>, we present the classical field-theoretical description for a minimal length. In Section <ref>, we study various symmetries and find the corresponding conserved currents and densities. In particular, we consider spatial translations (Sec. <ref>) and complex phase shifts (Sec. <ref>), while time shifts turn out to be trivial. In Section <ref>, we proceed with our plan by proposing a quantum field-theoretical description for a minimal length, including, as an example, the case of quartic interaction in Sec. <ref>, thus explicitly showing the effects of such a description. Finally, in Section <ref>, we conclude by summarizing the work. A series of appendices are supplied with this paper. Specifically: Appendix <ref> describes the representations used in this work and the transforms relating them. Appendix <ref> includes the necessary computations to obtain the conserved current and density associated with an arbitrary symmetry transformation. Appendix <ref> specializes the results of the previous Appendix to the one-dimensional case. § QUANTUM MECHANICS AS A CLASSICAL FIELD THEORY We start by reviewing the main features of quantum mechanics formulated as a classical field theory <cit.>. Here and throughout the paper, we will consider spinless, scalar systems for simplicity. Consider the following Lagrangian density ℒ = i ħ/2(Ψ^*∂_0 Ψ - ∂_0 Ψ^*Ψ) - ħ^2/2 m∂_i Ψ^*∂^i Ψ-Ψ^* V Ψ, where Ψ and Ψ^* are two functions of the position vector x⃗ and are treated as two independent fields. Here, we are using the notation ∂_i to denote partial derivatives with respect to x^i, namely ∂_i ≡∂/∂ x^i, and ∂_0 to denote partial derivatives with respect to time, i.e., ∂_0 = ∂/∂ t. The Lagrange equation for the field Ψ^* is i ħ∂_0 Ψ = - ħ^2/2 m∂_i ∂^i Ψ + V Ψ, which corresponds to the Schrödinger equation for the field Ψ. From the Lagrangian in Eq. (<ref>), we find the momenta conjugate to the fields Ψ and Ψ^* to be, respectively, Π = ∂ℒ/∂ (∂_0 Ψ) = i ħ/2Ψ^*, Π^* = ∂ℒ/∂ (∂_0 Ψ^*) = - i ħ/2Ψ. Thus, the Hamiltonian density, found from the Lagrangian using a Legendre transformation, reads ℋ = Π (∂_0 Ψ) + Π^* (∂_0 Ψ^*) - ℒ = ħ^2/2 m∂_i (Ψ^*∂^i Ψ) - ħ^2/2 mΨ^*∂_i ∂^i Ψ + Ψ^* V Ψ. We then observe that, when the Hamiltonian density is integrated over the entire space, thus obtaining the Hamiltonian, we find the expectation value of Eq. (<ref>) under the assumption that the fields vanish at infinity. §.§ Symmetries: time shifts We now proceed by studying the symmetries of the theory, starting with time shifts. Specifically, let us consider a transformation law for the time coordinate of the form t → t - ε, with ε = const. We can then introduce a conserved current <cit.> j^i = ε[ - ∂_0 Ψ∂ℒ/∂ (∂_i Ψ) - ∂_0 Ψ^* ∂ℒ/∂ (∂_i Ψ^*)] = εħ^2/2 m(∂_0 Ψ∂^i Ψ^* + ∂_0 Ψ^* ∂^i Ψ), and conserved charge ρ = ε[ ℒ - ∂_0 Ψ∂ℒ/∂ (∂_0 Ψ) - ∂_0 Ψ^* ∂ℒ/∂ (∂_0 Ψ^*)] = - εℋ. We thus conclude that the conserved quantity in a time shift is the Hamiltonian (density).
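Before moving on to the remaining symmetries, a quick symbolic check (not part of the original text) confirms that the Euler-Lagrange equation of the Lagrangian above indeed reproduces the Schrödinger equation; here in one spatial dimension, using sympy:

```python
# Symbolic check (not from the paper): the Euler-Lagrange equation obtained by
# varying Psi^* in L = (i hbar/2)(Psi* d_t Psi - d_t Psi* Psi)
#                    - (hbar^2 / 2m) d_x Psi* d_x Psi - Psi* V Psi
# coincides with the Schroedinger equation, in one spatial dimension.
import sympy as sp
from sympy.calculus.euler import euler_equations

x, t = sp.symbols('x t', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
Psi = sp.Function('Psi')(x, t)          # the field
Psis = sp.Function('Psistar')(x, t)     # plays the role of Psi^*
V = sp.Function('V')(x)

L = (sp.I*hbar/2)*(Psis*Psi.diff(t) - Psis.diff(t)*Psi) \
    - hbar**2/(2*m)*Psis.diff(x)*Psi.diff(x) - Psis*V*Psi

# Schroedinger equation written as (expression) = 0
schro = sp.I*hbar*Psi.diff(t) + hbar**2/(2*m)*Psi.diff(x, 2) - V*Psi

# Euler-Lagrange equations for both fields; the one obtained by varying Psi^*
# must coincide, up to an overall sign, with the Schroedinger equation.
eqs = euler_equations(L, [Psi, Psis], [x, t])
match = any(sp.simplify(eq.lhs - schro) == 0 or sp.simplify(eq.lhs + schro) == 0
            for eq in eqs)
print(match)   # True
```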
§.§ Symmetries: spatial translations Let us repeat the same argument for the case of spatial translations, that is, for a transformation of coordinates of the form x^i → x^i - ε^i, with ε^i = const for i = 1,2,3. Thus, we can identify a conserved current <cit.> j^i = ε^k [ η_k^i ℒ - ∂_k Ψ∂ℒ/∂ (∂_i Ψ) - ∂_k Ψ^* ∂ℒ/∂ (∂_i Ψ^*)] = ε^i [ i ħ/2(Ψ^*∂_0 Ψ - ∂_0 Ψ^*Ψ) - Ψ^* V Ψ] + ε^k ħ^2/2 m(∂^i Ψ^* ∂_k Ψ + ∂_k Ψ^* ∂^i Ψ) and conserved charge ρ = ε^k [ - ∂_k Ψ∂ℒ/∂ (∂_0 Ψ) - ∂_k Ψ^* ∂ℒ/∂ (∂_0 Ψ^*)] = - ε^k i ħ/2[ Ψ^* ∂_k Ψ - Ψ∂_k Ψ^* ] = - ε^k i ħΨ^* ∂_k Ψ + i ħ/2∂_k (ε^k |Ψ|^2). The last term, when integrated over the entire space, cancels out under the assumption of vanishing fields at infinity. Therefore, we can conclude that the conserved quantities under spatial translations are the components of the momentum. §.§ Symmetries: complex phase shifts Finally, let us consider a transformation which changes the fields by a complex phase φ, that is Ψ→Ψ' = e^i φΨ≃Ψ (1 + i φ). In this case, the current density is <cit.> j^i = - δΨ∂ℒ/∂ (∂_i Ψ) - δΨ^* ∂ℒ/∂ (∂_i Ψ^*) = - i φħ^2/2m(Ψ^* ∂^i Ψ - Ψ∂^i Ψ^*), while for the conserved charge we get ρ = - δΨ∂ℒ/∂ (∂_0 Ψ) - δΨ^* ∂ℒ/∂ (∂_0 Ψ^*) = φħ |Ψ|^2. Therefore, these are the probability current density and probability density, respectively, for a field (wavefunction) Ψ. § MODELS OF QUANTUM MECHANICS WITH A MINIMAL UNCERTAINTY IN POSITION Here, we will summarize the main features of models of minimal-length quantum mechanics. Thus, let us suppose that the minimal length is obtained via a modified commutator of the form [x̂^a,p̂_b] = i ħ[f(p̂^2) δ_b^a+f̅(p̂^2)p̂^ap̂_b/p̂^2] = i ħ F^a_b(p⃗̂⃗) with suitable choices of the functions f and f̅ <cit.>. In particular, demanding that the coordinates commute, as we will do in the following, we require f̅ = 2 f f' p̂^2/f - 2 f' p̂^2. Within such models, the Hamiltonian can be written as <cit.> Ĥ = p̂^2/2 m + V(|X⃗̂⃗|), where X^a is the conjugate configuration variable to p_b, namely <cit.> [X̂^a,p̂_b] = i ħδ^a_b, and X^a approximates to the ordinary position x^a in the low-energy limit. It is worth pointing out that the momentum p⃗ has a linear composition law. Furthermore, it is possible introducing the wavenumber operator k⃗̂⃗ as the conjugate momentum to the position x⃗̂⃗, that is, such that <cit.> k̂^a = 1/ħg̅(p̂^2) p̂^a, [x̂^a,k̂_b] = i δ^a_b. For a minimal length to be present, the wavenumber operator has to be bounded <cit.>. Consequently, and in contrast to p⃗, the wavenumber k⃗ cannot compose linearly. We will therefore call κ the set of its eigenvalues. In what follows, it is also useful considering the operator p̂_a as a function of the vector operator k⃗̂⃗. In particular, inverting Eq. (<ref>), we can write p̂_a = ħ g(k̂^2) k̂_a, with g(k^2) = 1 / g̅(p^2). From the assumption of commutativity, it is possible to relate the functions g̅ and f <cit.>. In particular, we have g̅(p^2) = 1/f(p^2) g(k^2) = f(p^2). To find the relation between the coordinates X^a and x^a, from Eq.(<ref>) and noting that F^a_b is a function of p⃗ only, we have x^b = 1/2{F^b_a(p⃗̂⃗), X̂^a}, where {·,·} indicates an anticommutator. Furthermore, from Appendix A of <cit.>, it is easy to show that [X̂^a,k̂_b] = (F^-1)^a_b(p⃗̂⃗) = (G^-1)^a_b(k⃗̂⃗) where we introduced the matrix operator . From this, we can write X̂^a = 1/2{(G^-1)^a_b(k⃗̂⃗), x̂^b}. It is worth mentioning that the symmetrical ordering in Eqs. 
(<ref>) and (<ref>) allows for the ordinary measure for momentum space, although other ordering prescriptions can be equivalently applied <cit.>. Furthermore, it is useful finding the determinant of the matrices F^a_b and G^a_b. We have, respectively, |F| = f^4/f - 2 f' p^2, |G| = g^2 (g + 2 g' k^2). Using these expressions explicitly, we find the following useful relations ∂/∂ k_b |G| (G^-1)^a_b = 0 ∂/∂ p_bF^a_b/|F| = 0. In what follows, we will often describe a system in the representation in which either the operators X̂^a or x̂^a are multiplicative operators. The transformation rules between such spaces and the representations for the various operators can be found in Appendix <ref>. §.§ Example: Kempf-Mangano-Mann model As a toy model, we will consider the one-dimensional deformation considered in <cit.>, namely [x̂, p̂] = i ħ (1 + βp̂^2). Such a model is characterized by a bounded wavenumber operator according to |k| < π/2 √(β)ħ and therefore a minimal uncertainty in position given by <cit.> Δ x ≥√(β)ħ. In this model, we have g(k̂^2) = tan(√(β)ħk̂)/√(β)ħk̂, g̅(p̂^2) = arctan(√(β)p̂)/√(β)p̂. Furthermore, in p-space representation, the coordinate operators, in symmetrical ordering, are x̂ = i ħ (1 + β p^2) / p + i ħβ p, X̂ = i ħ/ p. It is possible to use this relation to find the x-eigenfunctions and the corresponding inner product. We thus find ⟨ p | x ⟩ = √(√(β)/π)e^-i x arctan(√(β) p)/√(β)ħ/√(1 + β p^2), ⟨ x | x' ⟩ = 2 √(β)ħsin[π (x-x')/2 √(β)ħ]/π (x- x'). The inner product is presented in Fig. <ref>. It is worth observing that the x-eigenstates are not orthogonal. § MINIMAL LENGTH CLASSICAL FIELD THEORY Now, we are going to describe the model introduced in the previous section in terms of a classical field theory. In particular, remembering that p_a and X^a form a conjugate pair and considering that we expect a Schrödinger-like equations in these two quantities, as hinted by the Hamiltonian in Eq. (<ref>), the Lagrangian corresponding to this model, by comparison with the ordinary case in Eq. (<ref>), is ℒ = i ħ/2(Ψ^*∂_0 Ψ - ∂_0 Ψ^*Ψ) - ħ^2/2 m∂̇_i Ψ^*∂̇^i Ψ-Ψ^* V Ψ, where ∂̇_̇i̇ = ∂/∂ X^i. Although it has the same form as the ordinary Lagrangian, and therefore admits the same equations of motion in terms of the variables {X^a,p^a}, the description in terms of the ordinary position is far from being straightforward since the functional relation between the two sets of coordinates involves the momentum p⃗ or, alternatively, the wavenumber k⃗ <cit.>. Thus, changing from X^a to x^a results in a higher (potentially infinite, as in the case of the model in Section <ref>) order Lagrangian. Nonetheless, employing the similarity between the Lagrangians in Eqs. (<ref>) and (<ref>), we have that the conserved quantity in a time shift is the Hamiltonian, which in the present context, up to a total derivative, acquires the same form as in Eq. (<ref>) ℋ = - ħ^2/2 mΨ^*∂̇_i ∂̇^i Ψ + Ψ^* V(|X⃗|) Ψ. or, in x-space, ℋ = - ħ^2/2mΨ^* g^2(-∇^2) ∂_i ∂^i Ψ + Ψ^* V( 1/2{ (G^-1)^i_a (-i∇⃗), x^a }) Ψ. Here, the symbol g(-∇^2) is understood as a (infinite) series of derivative, that is, g(-∇^2) = ∑_j=0^∞(-1)^j/j![ ^j g(k^2)/ (k^2)^j]_k^2=0 (∇^2)^j. The Hamiltonian can be found by introducing the momenta Π = ℒ/ (∂_0 Ψ), Π^* = ℒ/ (∂_0 Ψ^*), which have the same identical form as in the ordinary theory. It is worth observing that, to derive the above Hamiltonian, we do not need to introduce any other field, nor we need to modify the Poisson brackets between the fields and their conjugate momenta. 
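To make the series interpretation of g(-∇²) concrete, the short numerical sketch below (illustrative only; the grid, the Gaussian test function and the value of β are arbitrary choices) applies g(-∂_x²) for the one-dimensional Kempf–Mangano–Mann choice g(k²) = tan(√β ħ k)/(√β ħ k) in two ways: exactly, as multiplication by g(k²) in Fourier space over the bounded wavenumber domain, and approximately, by truncating the derivative series at first order in β. For small β the two agree up to terms of order β².

```python
# g(-d^2/dx^2) as a (truncated) series of derivatives, illustrated for the 1D
# Kempf-Mangano-Mann function g(k^2) = tan(sqrt(beta) hbar k)/(sqrt(beta) hbar k).
import numpy as np

hbar = 1.0
beta = 1e-3                                    # small deformation parameter (arbitrary value)
N, box = 4096, 40.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

psi = np.exp(-x**2/2)                          # arbitrary smooth test function

# exact action: multiplication by g(k^2) in Fourier space, restricted to the bounded
# wavenumber domain |k| < pi/(2 sqrt(beta) hbar); the Gaussian has negligible content outside
u = np.sqrt(beta)*hbar*k
g = np.zeros_like(k)
inside = (np.abs(u) < 0.5*np.pi) & (u != 0)
g[inside] = np.tan(u[inside])/u[inside]
g[u == 0] = 1.0
exact = np.fft.ifft(g*np.fft.fft(psi)).real

# truncated series: g(-d^2/dx^2) = 1 - (beta hbar^2/3) d^2/dx^2 + O(beta^2)
psi_xx = (x**2 - 1)*np.exp(-x**2/2)            # analytic second derivative of the Gaussian
approx = psi - beta*hbar**2/3*psi_xx

print(np.max(np.abs(exact - approx)))          # O(beta^2), well below 1e-5 here
```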
The essence of the minimal length, which can be traced back to Eqs. (<ref>) and (<ref>), is encapsulated in the specific form of the Lagrangian or the corresponding Hamiltonian in the two representations, namely Eqs. (<ref>) and (<ref>). Although the description of time shifts trivially descends from that in the ordinary theory, that of spatial translations is not this straightforward. Indeed, consider a transformation of the form (<ref>). As a canonical transformation, this leaves the momentum k^a, and therefore p^a, as is. However, when described in terms of the variables X^a, it corresponds to a transformation involving both X^a and p^a. To have a better insight, let us start by considering a free Lagrangian, which in terms of the position x^a and derivatives ∂_a reads ℒ = i ħ/2(Ψ^*∂_0 Ψ - ∂_0 Ψ^*Ψ) - ħ^2/2 m [g(-∇^2) ∂_a Ψ^*] [g(-∇^2) ∂^a Ψ], Thus, in terms of x^a, the Lagrangian presents higher derivative terms. Using the stationary action principle, we can then write δ S/δΨ^* = ∑_j=0^∞ (-1)^j ∂_μ_1 …μ_j∂ℒ/∂Ψ_μ_1 …μ_j^* = 0, in which, using standard notation, we have written ∂_μ_1 …μ_j = ∂_μ_1…∂_μ_j, Ψ_μ_1 …μ_j = ∂_μ_1 …μ_jΨ. We can now check whether the current description is indeed equivalent to the one given by the free counterpart of Eq. (<ref>). For example, the equation of motion becomes 0 = i ħ∂_0 Ψ + ħ^2/2 m g^2(-∇^2) ∂_a ∂^a Ψ. Notice that, in terms of X^a and ∂̇_a, the equations of motion is 0 = i ħ∂_0 Ψ + ħ^2/2m∂̇_a ∂̇^a Ψ. As expected, the equations of motion descried in terms of x^a and X^a are equivalent when we take Eq. (<ref>) in consideration. Furthermore, they correspond to the Schrödinger equation in the two representations. Finally, it is easy to see that the same equivalence between the two formulations is maintained even when a potential V(X⃗) is present. § SYMMETRIES We will dedicate this section to the study of symmetry transformations in the context of minimal length models. We will focus on spatial translations and phase shifts, as time shifts have been already considered in the previous section. For this purpose, a generic expression for the conserved current and density associated to an arbitrary transformation are derived in Appendix <ref> for the case of spatial derivatives of any order. §.§ Spatial translations Let us thus consider a transformation of the form of Eq. (<ref>). By using Eq. (<ref>), or alternatively (<ref>), and the Lagrangian in Eq. (<ref>), we find for the conserved charge π_i = Θ_i^0 = - ∂ℒ/∂Ψ_0∂_i Ψ - ∂ℒ/∂Ψ^*_0∂_i Ψ^* = - i ħ/2(Ψ^* ∂_i Ψ - Ψ∂_i Ψ^*). Thus, as expected, the conserved charge under spatial translation is the wavenumber k⃗ and is consistent with ordinary quantum mechanics. §.§ Complex phase shifts Let us now consider a transformation of the form given in Eq. (<ref>). For the conserved current, we then have J^i = - i ħ/2 m∑_N=0^∞∑_n=0^N∑_l=1^2n+1∑_n_1,n_2,n_3=0 n_1+n_2+n_3=n^n(-1)^l+N/n_1! n_2! n_3! (N-n)! ×[ ^n g(k^2)/ (k^2)^n]_k^2=0[ ^N-n g(k^2)/ (k^2)^N-n]_k^2=0ξ_a 1…1_2n_1times2…2_2n_2times3…3_2n_3times^iμ_2…μ_2n+1 ×[ ( ∂_μ_2…μ_l∇^2(N-n)∂^i Ψ^*) ( ∂_μ_l+1…μ_2n+1Ψ) . . - ( ∂_μ_2…μ_l∇^2(N-n)∂^a Ψ) ( ∂_μ_l+1…μ_2n+1Ψ^* ) ] up to a factor ħφ associated with the arbitrary phase shift and where we introduced the symbol ξ^a_1 … a_n_b_1 … b_n = 1 when {a_i}_i=1^n = {b_i}_i=1^n; 0 otherwise, which is completely symmetric with respect to any arbitrary exchange of the indices in the two sets. Besides the complicated-looking form, it is worth observing that, from a perturbative perspective, the integer N in Eq. 
(<ref>) corresponds to the expansion order of the probability current, as we will shortly see. In particular, for N=0 we have J^i = i ħ/2 m[ ( ∂^i Ψ^*) Ψ - ( ∂^i Ψ) Ψ^* ] which corresponds to the probability current density of ordinary quantum mechanics. As for the probability density, we find, to all orders, ρ = - i/ħ(∂ℒ/∂Ψ_0Ψ - ∂ℒ/∂Ψ_0Ψ^*) = |Ψ|^2, corresponding to the usual probability density. §.§ Field theory example: Kempf-Mangano-Mann model We will now apply the results presented above to the model introduced in Sec. <ref>. We start with the free, one-dimensional Lagrangian ℒ = i ħ/2(Ψ^*∂_0 Ψ - ∂_0 Ψ^*Ψ) - ħ^2/2 m g(- ∂^2) ∂Ψ^* g(- ∂^2) ∂Ψ. In this case, as shown in Appendix <ref>, the probability current is J = i ħ/2 m∑_N=0^∞(-1)^N/N![ ^N/ x^N g^2(k^2) ]_x=0∑_l=0^2N+1 (-1)^l( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ). Inserting explicitly the expressions presented in Sec. <ref> and expanding up to first order in β (N=1), we find J = i ħ/2 m[ ( ∂Ψ^*) Ψ - ( ∂Ψ) Ψ^* ] - i ħ/3 mβħ^2 [ ( ∂^(3)Ψ^*) Ψ - ( ∂^(2)Ψ^* ) ( ∂Ψ) + ( ∂Ψ^*) ( ∂^(2)Ψ) - Ψ^* ( ∂^(3)Ψ) ]. This expression is consistent with what was previously found in <cit.>. § SECOND QUANTIZATION Above, we have seen that ordinary and minimal-length quantum mechanics are equivalent to a classical field theory described in terms of the field Ψ, its canonical momentum Π, and the corresponding complex conjugate fields, Ψ^* and Π^*. We now proceed to second-quantize such fields, imposing a commutation relation, focusing on a free system, as interactions can easily be introduced following the arguments included in Sec. <ref>. As pointed out above, the fields Ψ and Ψ^* form a pair of canonically conjugate fields, while the information on a minimal measurable length is already present in the specific form of the Lagrangian and the bounded character of the wavenumber k⃗. In other words, there is no need to modify the Poisson brackets between the fields. As a consequence, we are going to impose ordinary commutation relations for the fields, resulting in [Ψ(X⃗,t),Ψ^*(X⃗',t')] = δ^(3)(X⃗ - X⃗') δ(t - t'), [Ψ(X⃗,t), Ψ(X⃗',t')] = [Ψ^*(X⃗,t), Ψ^*(X⃗',t')] = 0. It is worth observing that, given the momenta in Eq. (<ref>) and the Lagrangian in Eq. (<ref>), the relations between fields and the corresponding momenta are not modified, that is [Ψ(X⃗,t),Π^*(X⃗',t')] = i ħ/2δ^(3)(X⃗ - X⃗') δ(t - t'). In terms of the coordinates x^a, using Eq. (<ref>), we have [Ψ_x(x⃗,t),Ψ_x^*(x⃗',t')] = ⟨x⃗ | x⃗' ⟩δ(t - t'), where, to not clutter the expression, we have used the notation ⟨x⃗|x⃗'⟩ to represent the function corresponding to the scalar product between x⃗-eigenstates at x⃗ and x⃗', presented in Eq. (<ref>). We then see that the commutation relations between fields is modified when described in terms of the coordinates x^a. However, since ⟨x⃗ | x⃗' ⟩→δ(x⃗ - x⃗') in the limit of ordinary quantum mechanics, we recover the ordinary quantum field theory description in the limit of vanishing minimal position uncertainty. As typical in both quantum field theory and in models of minimal length, a description in momentum space is convenient. The question to ask at this point concerns the choice of pairs of canonical variables to be used, that is, if one should consider a Fourier transform between X⃗ and p⃗ or between x⃗ and k⃗. We claim that the first is the route to pursue since, as hinted in Sec. <ref> and in <cit.>, the momentum p⃗ is defined to compose linearly. 
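Before constructing the Fourier amplitudes, a brief aside: the O(β) current quoted earlier in this section for the Kempf–Mangano–Mann example can be cross-checked symbolically. The sketch below (illustrative, one spatial dimension, free field) verifies that ∂_0 ρ + ∂_x J = 0 once the time derivatives are eliminated with the equation of motion truncated at the same order in β.

```python
# Consistency check of the O(beta) probability current:
# d(rho)/dt + dJ/dx = 0 on-shell for the free equation of motion truncated at O(beta).
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, m, beta = sp.symbols('hbar m beta', positive=True)
psi = sp.Function('psi')(x, t)
psis = sp.Function('psistar')(x, t)   # treated as the complex conjugate field

rho = psis*psi
J = (sp.I*hbar/(2*m)*(sp.diff(psis, x)*psi - sp.diff(psi, x)*psis)
     - sp.I*hbar/(3*m)*beta*hbar**2*(sp.diff(psis, x, 3)*psi
                                     - sp.diff(psis, x, 2)*sp.diff(psi, x)
                                     + sp.diff(psis, x)*sp.diff(psi, x, 2)
                                     - psis*sp.diff(psi, x, 3)))

# free equation of motion to first order in beta:
# i hbar dpsi/dt = -hbar^2/(2m) d^2psi/dx^2 + beta hbar^4/(3m) d^4psi/dx^4
dpsi_dt = (sp.I*hbar/(2*m)*sp.diff(psi, x, 2)
           - sp.I*beta*hbar**3/(3*m)*sp.diff(psi, x, 4))
dpsis_dt = (-sp.I*hbar/(2*m)*sp.diff(psis, x, 2)
            + sp.I*beta*hbar**3/(3*m)*sp.diff(psis, x, 4))

continuity = sp.diff(rho, t) + sp.diff(J, x)
continuity = continuity.subs({sp.Derivative(psi, t): dpsi_dt,
                              sp.Derivative(psis, t): dpsis_dt})
print(sp.simplify(continuity))   # expected: 0
```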
Let us then introduce the Fourier amplitudes a(p⃗,t) and a^*(p⃗,t) for the classical fields as a(p⃗,t) = (2 πħ)^-3/2∫^3 X  Ψ(X⃗,t) e^- i p⃗·X⃗/ħ. Since the commutation relation in Eq. (<ref>) is identical to the ordinary one, upon promoting the field modes to operators, we obtain the usual algebra, namely [a(p⃗,t), a^† (p⃗',t)] = δ^(3)(p⃗-p⃗'), [a(p⃗,t), a(p⃗',t')] = [a^† (p⃗,t), a^† (p⃗',t')] = 0. For the same reason, we find the following expression for the number operator N̂ = ∫_ℝ^3^3 X  ρ = ∫_ℝ^3^3 p   a^† (p⃗,t) a(p⃗,t). The relation above, as usual in quantum field theory, can be interpret as counting the particles in mode p⃗ and summing over all modes. A similar description can be given in terms of the wavenumber k⃗, employing the relation given in Eq. (<ref>). Specifically, we have N̂ = ∫_κ^3 k  ħ^3 |G| a_k^†(k⃗,t) a_k(k⃗,t), with a_k(k⃗,t) = a(p⃗,t). The operators a_k and a^†_k are good ladder operators in k-space. Indeed, we find [a_k(k⃗,t), a_k^† (k⃗',t)] = [a(p⃗,t), a^† (p⃗',t)] = δ(p⃗-p⃗') = 1/ħ^3 |G|δ(k⃗-k⃗'), which agrees with Eq. (<ref>). In this sense, the number operator written in terms of the ladder operators in k-space can be interpreted as usual as counting the number of particle in mode k⃗ and then summing over all modes, with the further care of correcting the count by the measure factor ħ^3 |G|. As for the momentum operator, we can write p̂_a = - i ħ∫_ℝ^3^3 X  Ψ^†(X⃗,t) ∂̇_a Ψ(X⃗,t) = ∫_ℝ^3^3 p   a^†(p⃗,t) a(p⃗,t) p_a, which follows the ordinary interpretation of counting particles and summing over all modes. As for the wavenumber k⃗, following Eq. (<ref>), we defined the corresponding operator as k̂_a = 1/ħg̅(p̂^2) p̂_a. It is worth noticing that, in this case, the usual “count and sum” interpretation cannot be adopted as k⃗ does not compose linearly. §.§ Example: quartic interaction As an example, let us consider a scalar system characterized by a quartic interaction ℒ = Ψ^†(i ħ∂_0 + ħ^2/2 m∂̇_i ∂̇^i) Ψ - λ/2Ψ^†Ψ^†ΨΨ. Following the standard procedure for this problem, the Hamiltonian can be written in Fourier components as the sum of a free term H_0 and an interaction term H_int H_0 = 1/2 m∫^3 p   a^†(p⃗,t) a(p⃗',t) p_i p^i, H_int = λ/16(πħ)^3∫ p_1^3 p_2^3 p_3^3 p_4^3   a^†(p⃗_4,t) a^†(p⃗_3,t) × a(p⃗_2,t) a(p⃗_1,t) δ^3(p⃗_1 + p⃗_2 - p⃗_3 - p⃗_4). At this point, one is usually interested in the matrix elements of H_int. Let us then consider the interaction between two particles described initially by a state of definite momenta |p⃗'_1,p⃗'_2⟩ and evolving into another state of definite momenta |p⃗'_3,p⃗'_4⟩. The matrix elements for the interaction are then ⟨p⃗'_3, p⃗'_4 | H_int | p⃗'_1, p⃗'_2⟩ = λ/4(πħ)^3δ^(3)[(p⃗_1' + p⃗_2') - (p⃗_3' + p⃗_4')]. As expected, we get the same result as in the standard theory and, in particular, momentum is conserved in the interaction. Let us now consider the following matrix elements, corresponding to the interaction between two particles initially at X⃗_1 and X⃗_2 and then ending up in X⃗_3 and X⃗_4 ⟨X⃗_3, X⃗_4 | H_int | X⃗_1, X⃗_2⟩ = 2 λ∫^3 X  δ^(3)(X⃗ - X⃗_4) δ^(3)(X⃗ - X⃗_3) δ^(3)(X⃗ - X⃗_2) δ^(3)(X⃗ - X⃗_1). This result simply states that the interaction is local in the X⃗-space, that is, the interaction is relevant only when X⃗_1 = X⃗_2 = X⃗_3 = X⃗_4, as in the ordinary theory. However, we stress that X⃗ does not represent a position in ordinary space. To know what happens for the ordinary position x⃗, we compute the following ⟨x⃗_3, x⃗_4 | H_int | x⃗_1, x⃗_2⟩ = 2 λ∫^3 X  ⟨x⃗_3 | X⃗⟩⟨x⃗_4 | X⃗⟩⟨X⃗ | x⃗_2 ⟩⟨X⃗ | x⃗_1 ⟩. 
Now, it is easy to see that the function ⟨X⃗ | x⃗⟩, considered as a function of X⃗ for any specific choice of x⃗, is a wider function than a Dirac delta, as demonstrated in Fig. <ref> for the specific model of Sec. <ref>. Therefore, the integral above would in general be different from zero every time that the supports of the four functions in the integrand intersect. This may happens also when x⃗_1 ≠x⃗_2 ≠x⃗_3 ≠x⃗_4. Thus, within the framework presented in this work, a quartic interaction is compatible with two particles interacting even when not at the same position, and the outgoing particles can indeed appear in other two different positions. All this is consistent with a theory characterized by a minimal uncertainty in position. § CONCLUSIONS Many approaches to quantum gravity present some sort of minimal length. Phenomenological models including a minimal length have been commonly used to study this feature on low-energy systems. One of such models consists in modifying the commutation relation between position and momentum to deform the uncertainty relation accordingly. Although this procedure fits the scope of the program of quantum gravity phenomenology in the context of non-relativistic quantum mechanics, it may be ill-suited to different frameworks, such as quantum field theory. In this work, we have surpassed this impasse by first resorting to a classical field theory description of quantum mechanics and then proceeding to a second quantization, first-principle introduction of a quantum field theory with a minimal length de facto. Specifically, treating quantum mechanics as a classical field theory, we were first able to write the appropriate Lagrangian and Hamiltonian with a minimal length. At the same time, this procedure allowed us to identify the correct conserved currents and densities associated with symmetry transformations, and in particular for spatial translations, time shifts, and complex phase shifts. We then continued by quantizing the model thus far developed introducing a model in non-relativistic quantum field theory compatible with a minimal length. To show this explicitly, we finally proposed the specific case of a quartic interaction, which in the ordinary theory is associated with an interaction between two particles at a specific point. However, in this case, it resulted in particles interacting even at different positions, compatibly with the existence of a minimal length. Using this procedure, it becomes evident that, to obtain a quantum field theory including a minimal length, no modification to the field commutators is required, since the information regarding the presence of a minimal length is embedded in the form of the Lagrangian and in the relations between the variables X⃗ and x⃗, the latter being the physical position. At the same time, this work sets the stage for phenomenological studies related to the nature and implications of a minimal length in quantum field theory, both in non-relativistic, as presented in this paper, and in relativistic contexts, which we hope to treat in future works. § CHANGE OF BASIS Here, we want to describe the change of basis corresponding to the variables X⃗, p⃗, x⃗, and k⃗. §.§ X⃗ and p⃗ Given their definition, it is straightforward to see that the variables X⃗ and p⃗ have to be related as in the ordinary theory. In particular, given Eq.(<ref>) and using the ordinary measure for both the X- and p-spaces, the two bases are related by the ordinary Fourier transform. 
Consequently, the operators corresponding to X⃗ and to p⃗ in the two representations acquire the usual form. Furthermore, since x⃗ and k⃗ are given by Eqs.(<ref>) and (<ref>), respectively, it is easy to find the corresponding representations in X- and p-spaces. Namely, in X-space, we have X̂^j = X ·, p̂_j = - i ħ∂/∂ X^j, x̂^j = 1/2{ F^j_a(- i ħ∇⃗̇⃗), X^a }, k̂_j = - i/f(- ħ^2 ∇̇^2)∂/∂ X^j, with ∇̇ = ∂/∂ X^j∂/∂ X_j, while in p-space, we have X̂^j = i ħ∂/∂ p_j, p̂_j = p_j ·, x̂^j = 1/2{ F^j_a(p⃗), i ħ∂/∂ p_a}, k̂_j = 1/ħ f(p^2) p_j ·. §.§ p⃗ and k⃗ Changing from p⃗ to k⃗ or vice versa simply amounts to a change of coordinates. Thus, besides writing the field in the new coordinates, the integration measure changes according to ^3 p →^3 k |∂ p_b/∂ k_a| = ^3 k  ħ^3 |G|. Thus, the resolution of identity in terms of eigenstates of k⃗ reads 𝕀 = ∫_κ^3 k  ħ^3 |G| |k⃗⟩⟨k⃗|, while the inner product between such states is ⟨k⃗ | k⃗' ⟩ = 1/ħ^3 |G|δ^(3)(k⃗-k⃗'). Using these relations and the representations of the operators in X- or p-spaces, as shown in the previous subsection, we can find the following representation in k-space X̂^j = i (G^-1)^j_a(k⃗) ∂/∂ k_a, p̂_j = ħ g(k^2) k_j ·, x̂^j = 1/2{ |G|(k^2), i/|G|(k^2)∂/∂ k_b}, k̂_j = k_j ·. §.§ k⃗ and x⃗ To find the correct transform between k- and x-spaces, we first find the eigenfunctions of the operators x⃗ in k-space, ⟨k⃗|x⃗⟩. Such functions are solution of the following equation x^i ⟨k⃗|x⃗⟩ = i ∂/∂ k_i⟨k⃗ | x⃗⟩ + i/2∂ln |G|/∂ k_i⟨k⃗ | x⃗⟩. We then get ⟨k⃗ | x⃗⟩ = 𝒩_k/√(|G|) e^-i x⃗·k⃗ which 𝒩_k a normalization constant given by 𝒩_k = 1/√(V(κ) ħ^3) with V(κ) = ∫_κ^3 k the volume of k-space. We can use this relation to find the inner product between x-eigenstates. Specifically, ⟨x⃗ | x⃗' ⟩ = 1/V(κ)∫_κ^3 k   e^i (x⃗ - x⃗') ·k⃗. Now, we can find the following relation ⟨k⃗ | [ ∫_ℝ^3^3 x   |x⃗⟩⟨x⃗| ] |k⃗'⟩ = (2 π)^3/V(κ)⟨k⃗ | k⃗'⟩ from which we have ∫_ℝ^3^3 x   |x⃗⟩⟨x⃗| = (2 π)^3/V(κ). This relation can be used to find the transform from x-space to k-space, that is, given a state |ψ⟩, we have ⟨k⃗ | ψ⟩ = √(V(κ))/(2 πħ)^31/√(|G|)∫_ℝ^3^3 x   e^- i x⃗·k⃗⟨x⃗ | ψ⟩. In turn, using this transform we can find the x-space representation for the following operators X̂^i = 1/2{ (G^-1)^i_a(-i ∇⃗), x^a }, p̂_i = - i ħ g(- ∇^2) ∂/∂ x^i, x̂^i = x ·, k̂_i = - i ∂/∂ x^i, with ∇^2 = ∂/∂ x^j∂/∂ x_j. Finally, the inverse transform is given by ⟨x⃗ | ψ⟩ = 1/V(κ)∫_κ^3 k  √(|G|) e^i x⃗·k⃗⟨k⃗ | ψ⟩. § CONSERVED CURRENT IN MINIMAL LENGTH MODELS Here, we will derive the form of the conserved current for a generic transformation starting from a Lagrangian depending on a field Ψ, its time derivative ∂_0 Ψ, and spatial derivatives of any order. Let us consider a coordinate transformation of the form x^μ→ x'^μ = x^μ - δ x^μ and the corresponding field transformation Ψ→Ψ' = Ψ + δΨ. Following standard procedures, e.g. as in <cit.>, and assuming that the action is invariant under the transformation, i.e., the transformation acts as a symmetry, we are interested in the variation of the volume form δ ( x ℒ) = (δ x) ℒ + x δℒ = - x (∂_μδ x^μ) ℒ + x δℒ. We will first focus on the variation of the Lagrangian. We find δℒ = - δ x^μ∂_μℒ + ∑_j=0^∞∂ℒ/∂Ψ_μ_1…μ_j∂_μ_1…μ_j[δΨ + (δ x^μ∂_μΨ)] + similar terms for Ψ^*. By induction, one can easily prove that ∂ℒ/∂Ψ_μ_1…μ_j∂_μ_1…μ_j[δΨ + (δ x^μ∂_μΨ)] = ∑_l=0^j (-1)^j-ljl∂_μ_1…μ_l[ ( ∂_μ_l+1…μ_j∂ℒ/∂Ψ_μ_1…μ_j) (δΨ + δ x^μ∂_μΨ) ]. 
In the expression above, the derivative indices are enumerated in increasing order, i.e., 1 ≤⋯≤ l < l+1 ≤⋯≤ j and, e.g., the symbol ∂_μ_1…μ_l amounts to no derivative when l < 1. Using Eq. (<ref>) and since the action is invariant under the symmetry transformation, we find the off-shell relation ∂_μ_1{∑_j=1^∞∑_l=1^j (-1)^j-ljl∂_μ_2…μ_l[ ( ∂_μ_l+1…μ_j∂ℒ/∂Ψ_μ_1…μ_j) ( δΨ + δ x^μ∂_μΨ) ] . . - δ_μ^μ_1δ x^μℒ} + similar terms for Ψ^* = - ∑_j=0^∞ (-1)^j ( ∂_μ_1…μ_j∂ℒ/∂Ψ_μ_1…μ_j) ( δΨ + δ x^μ∂_μΨ) + similar terms for Ψ^*. Thus, on-shell, using the equations of motion for Ψ in Eq. (<ref>) and the analogous for Ψ^*, we can identify a conserved current J^μ_1 = δ_μ^μ_1δ x^μℒ - ∑_j=1^∞∑_l=1^j (-1)^j-ljl∂_μ_2…μ_l[ ( ∂_μ_l+1…μ_j∂ℒ/∂Ψ_μ_1…μ_j) ( δΨ + δ x^μ∂_μΨ) ] + similar terms for Ψ^*. It is convenient writing the same current as J^μ_1 = δ_μ^μ_1δ x^μℒ + ∑_j=1^∞∑_l=1^j (-1)^l ( ∂_μ_2 …μ_l∂ℒ/∂Ψ_μ_1 …μ_j) ∂_μ_l+1…μ_j( δΨ + δ x^μ∂_μΨ) + similar terms for Ψ^*. Finally, we can introduce the energy-momentum tensor Θ_μ^μ_1 for a constant coordinate shift as J^μ_1 = Θ_μ^μ_1δ x^μ. In this case, the fields transform like scalars obtaining Θ_μ^μ_1 = δ^μ_1_μℒ + ∑_j=1^∞∑_l=1^j (-1)^l [ (∂_μ_2…μ_l∂ℒ/∂Ψ_μ_1…μ_j) ( ∂_μ_l+1…μ_j μΨ) . . + ( ∂_μ_2…μ_l∂ℒ/∂Ψ^*_μ_1…μ_j) ( ∂_μ_l+1…μ_j μΨ^* ) ]. § CURRENT DENSITY IN ONE DIMENSION We will now adapt the results presented above to a one-dimensional model. In this case, the free Lagrangian acquires the following form ℒ = i ħ/2(Ψ^*∂_0 Ψ - ∂_0 Ψ^*Ψ) - ħ^2/2 m g(- ∂^2) ∂Ψ^* g(- ∂^2) ∂Ψ. Thus, for the probability current, we get J = i ħ/2 m∑_N=0^∞(-1)^N/N!∑_n=0^NNn[ ^n/ x^n g(k^2) ]_x=0[ ^N-n/ x^N-n g(k^2) ]_x=0 ×∑_l=0^2n (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ]. Let us momentarily consider the case with an odd value of N. Since the factor Nn[ ^n/ x^n g(k^2) ]_x=0[ ^N-n/ x^N-n g(k^2) ]_x=0 is invariant when substituting n with N-n, we can write ∑_n=0^NNn[ ^n/ x^n g(k^2) ]_x=0[ ^N-n/ x^N-n g(k^2) ]_x=0 ×∑_l=0^2n (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ] = ∑_n=0^⌊ N/2 ⌋Nn[ ^n/ x^n g(k^2) ]_x=0[ ^N-n/ x^N-n g(k^2) ]_x=0 ×{∑_l=0^2n (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ]. + . ∑_l=0^2(N-n) (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ]}. Let us focus on the sum over l. We then have ∑_l=0^2(N-n) (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ] = 2 ∑_l=2n+1^N (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ] + ∑_l=0^2n (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ]. From this, we can write ∑_l=0^2(N-n) (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ] + ∑_l=0^2n (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ] = 2 ∑_l=0^N (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ]. The last expression is trivially valid also in the case of an even value for N and n=N/2. Thus, we have ∑_n=0^NNn[ ^n/ x^n g(k^2) ]_x=0[ ^N-n/ x^N-n g(k^2) ]_x=0 ×∑_l=0^2n (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ] = [ ^N/ x^N g^2(k^2) ]_x=0∑_l=0^2N+1 (-1)^l( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ). When N is even, Eq. (<ref>) becomes ∑_n=0^NNn[ ^n/ x^n g(k^2) ]_x=0[ ^N-n/ x^N-n g(k^2) ]_x=0 ×∑_l=0^2n (-1)^l[ ( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ) - ( ∂^(2N-l+1)Ψ) ( ∂^(l)Ψ^* ) ] = [ ^N/ x^N g^2(k^2) ]_x=0∑_l=0^2N+1 (-1)^l( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ). Thus, in both cases of even or odd N, we get J = i ħ/2 m∑_N=0^∞(-1)^N/N![ ^N/ x^N g^2(k^2) ]_x=0∑_l=0^2N+1 (-1)^l( ∂^(2N-l+1)Ψ^*) ( ∂^(l)Ψ). 
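Since the rearrangement above is purely combinatorial, it can be spot-checked by replacing the derivatives ∂^(l)Ψ, ∂^(l)Ψ* and the Taylor coefficients of g with arbitrary numbers. The sketch below is an illustrative aside that performs this check for a few values of N; the random values and the tolerance are arbitrary choices.

```python
# Numerical spot-check of the combinatorial identity used above: for arbitrary values of
# d[l] ~ psi^(l), e[l] ~ psi*^(l) and Taylor coefficients gn of g, the symmetrized double
# sum equals [d^N/dX^N g^2]_{X=0} (via the Leibniz rule) times the single sum over l.
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def check(N):
    d = rng.normal(size=2*N + 2) + 1j*rng.normal(size=2*N + 2)   # stands for psi^(l)
    e = rng.normal(size=2*N + 2) + 1j*rng.normal(size=2*N + 2)   # stands for psi*^(l)
    gn = rng.normal(size=N + 1)                                   # Taylor coefficients of g

    lhs = sum(comb(N, n)*gn[n]*gn[N - n]
              * sum((-1)**l*(e[2*N - l + 1]*d[l] - d[2*N - l + 1]*e[l])
                    for l in range(2*n + 1))
              for n in range(N + 1))
    g2N = sum(comb(N, n)*gn[n]*gn[N - n] for n in range(N + 1))   # Leibniz rule for (g^2)^(N)
    rhs = g2N*sum((-1)**l*e[2*N - l + 1]*d[l] for l in range(2*N + 2))
    return abs(lhs - rhs)

print([check(N) < 1e-12 for N in range(1, 5)])   # expected: [True, True, True, True]
```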
31 #1ISBN #1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1<https://doi.org/#1>et al.#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1#1<><#>1#1#1#1#1#1#1#1#1#1#1#1#1#1PreBibitemsHook [Mead1964]Mead:1964zz Mead, C.A.: Possible Connection Between Gravitation and Fundamental Length. Phys. Rev. 135, 849–862 (1964) 10.1103/PhysRev.135.B849 [Padmanabhan1987]Padmanabhan:1987au Padmanabhan, T.: Limitations on the Operational Definition of Space-time Events and Quantum Gravity. Class. Quant. Grav. 4, 107–113 (1987) 10.1088/0264-9381/4/4/007 [Konishi et al.1990]Konishi:1989wk Konishi, K., Paffuti, G., Provero, P.: Minimum Physical Length and the Generalized Uncertainty Principle in String Theory. Phys. Lett. B 234, 276–284 (1990) 10.1016/0370-2693(90)91927-4 [Ng and Van Dam1994]Ng:1993jb Ng, Y.J., Van Dam, H.: Limit to space-time measurement. Mod. Phys. Lett. A 9, 335–340 (1994) 10.1142/S0217732394000356 [Garay1995]Garay:1994en Garay, L.J.: Quantum gravity and minimum length. Int. J. Mod. Phys. A 10, 145–166 (1995) 10.1142/S0217751X95000085 https://arxiv.org/abs/gr-qc/9403008arXiv:gr-qc/9403008 [Adler and Santiago1999]Adler:1999bu Adler, R.J., Santiago, D.I.: On gravity and the uncertainty principle. Mod. Phys. Lett. A 14, 1371 (1999) 10.1142/S0217732399001462 https://arxiv.org/abs/gr-qc/9904026arXiv:gr-qc/9904026 [Scardigli1999]Scardigli:1999jh Scardigli, F.: Generalized uncertainty principle in quantum gravity from micro - black hole Gedanken experiment. Phys. Lett. B 452, 39–44 (1999) 10.1016/S0370-2693(99)00167-7 https://arxiv.org/abs/hep-th/9904025arXiv:hep-th/9904025 [Calmet et al.2004]Calmet:2004mp Calmet, X., Graesser, M., Hsu, S.D.H.: Minimum length from quantum mechanics and general relativity. Phys. Rev. Lett. 93, 211101 (2004) 10.1103/PhysRevLett.93.211101 https://arxiv.org/abs/hep-th/0405033arXiv:hep-th/0405033 [Ashtekar et al.2003]Ashtekar:2002sn Ashtekar, A., Fairhurst, S., Willis, J.L.: Quantum gravity, shadow states, and quantum mechanics. Class. Quant. Grav. 20, 1031–1062 (2003) 10.1088/0264-9381/20/6/302 https://arxiv.org/abs/gr-qc/0207106arXiv:gr-qc/0207106 [Amelino-Camelia2013]Amelino-Camelia:2008aez Amelino-Camelia, G.: Quantum-spacetime phenomenology. Living Reviews in Relativity 16, 5 (2013) 10.12942/lrr-2013-5 [Pesci2019]Pesci:2018syy Pesci, A.: Quantum metric for null separated events and spacetime atoms. Class. Quant. Grav. 36(7), 075009 (2019) 10.1088/1361-6382/ab0a40 https://arxiv.org/abs/1812.01275arXiv:1812.01275 [gr-qc]. [Erratum: Class.Quant.Grav. 36, 229501 (2019)] [Bosso and Das2019]Bosso:2018ufr Bosso, P., Das, S.: Lorentz invariant mass and length scales. Int. J. Mod. Phys. D 28(04), 1950068 (2019) 10.1142/S0218271819500688 https://arxiv.org/abs/1812.05595arXiv:1812.05595 [gr-qc] [Lake et al.2019]Lake:2018zeg Lake, M.J., Miller, M., Ganardi, R.F., Liu, Z., Liang, S.-D., Paterek, T.: Generalised uncertainty relations from superpositions of geometries. Class. Quant. Grav. 36(15), 155012 (2019) 10.1088/1361-6382/ab2160 https://arxiv.org/abs/1812.10045arXiv:1812.10045 [quant-ph] [Kempf et al.1995]Kempf:1994su Kempf, A., Mangano, G., Mann, R.B.: Hilbert space representation of the minimal length uncertainty relation. Phys. Rev. D 52, 1108–1118 (1995) 10.1103/PhysRevD.52.1108 https://arxiv.org/abs/hep-th/9412167arXiv:hep-th/9412167 [Maggiore1993]Maggiore:1993rv Maggiore, M.: A generalized uncertainty principle in quantum gravity. 
Physics Letters B 304, 65–69 (1993) 10.1016/0370-2693(93)91401-8 [Das and Vagenas2009]Das:2009hs Das, S., Vagenas, E.C.: Phenomenological implications of the generalized uncertainty principle. Canadian Journal of Physics 87, 233–240 (2009) 10.1139/P08-105 [Pikovski et al.2012]Pikovski:2011zk Pikovski, I., Vanner, M.R., Aspelmeyer, M., Kim, M.S., Brukner, C.: Probing Planck-scale physics with quantum optics. Nature Phys. 8, 393–397 (2012) 10.1038/nphys2262 https://arxiv.org/abs/1111.1979arXiv:1111.1979 [quant-ph] [Marin et al.2013]Marin:2013pga Marin, F., Marino, F., Bonaldi, M., Cerdonio, M., Conti, L., Falferi, P., Mezzena, R., Ortolan, A., Prodi, G.A., Taffarello, L., Vedovato, G., Vinante, A., Zendri, J.-P.: Gravitational bar detectors set limits to planck-scale physics on macroscopic variables. Nature Physics 9, 71–73 (2013) 10.1038/nphys2503 [Bosso et al.2017]Bosso:2016ycv Bosso, P., Das, S., Pikovski, I., Vanner, M.R.: Amplified transduction of Planck-scale effects using quantum optics. Phys. Rev. A 96(2), 023849 (2017) 10.1103/PhysRevA.96.023849 https://arxiv.org/abs/1610.06796arXiv:1610.06796 [gr-qc] [Casadio and Scardigli2020]Casadio:2020rsj Casadio, R., Scardigli, F.: Generalized Uncertainty Principle, Classical Mechanics, and General Relativity. Phys. Lett. B 807, 135558 (2020) 10.1016/j.physletb.2020.135558 https://arxiv.org/abs/2004.04076arXiv:2004.04076 [gr-qc] [Segreto and Montani2023]Segreto:2022clx Segreto, S., Montani, G.: Extended GUP formulation and the role of momentum cut-off. Eur. Phys. J. C 83(5), 385 (2023) 10.1140/epjc/s10052-023-11480-4 https://arxiv.org/abs/2208.03101arXiv:2208.03101 [quant-ph] [Husain et al.2013]Husain:2012im Husain, V., Kothawala, D., Seahra, S.S.: Generalized uncertainty principles and quantum field theory. Physical Review D 87, 025014 (2013) 10.1103/PhysRevD.87.025014 [Bosso et al.2020]Bosso:2020fos Bosso, P., Das, S., Todorinov, V.: Quantum field theory with the generalized uncertainty principle i: Scalar electrodynamics. Annals of Physics 422, 168319 (2020) 10.1016/j.aop.2020.168319 [Bosso and Luciano2021]Bosso:2021koi Bosso, P., Luciano, G.G.: Generalized uncertainty principle: from the harmonic oscillator to a QFT toy model. Eur. Phys. J. C 81(11), 982 (2021) 10.1140/epjc/s10052-021-09795-1 https://arxiv.org/abs/2109.15259arXiv:2109.15259 [hep-th] [Dick2020]Dickbook Dick, R.: Advanced Quantum Mechanics: Materials and Photons. Springer, Cham (2020) [Bosso et al.2023]Bosso:2023aht Bosso, P., Luciano, G.G., Petruzziello, L., Wagner, F.: 30 years in: Quo vadis generalized uncertainty principle? Class. Quant. Grav. 40(19), 195014 (2023) 10.1088/1361-6382/acf021 https://arxiv.org/abs/2305.16193arXiv:2305.16193 [gr-qc] [Bosso et al.2024]Bosso:2023nst Bosso, P., Fabiano, G., Frattulillo, D., Wagner, F.: Fate of Galilean relativity in minimal-length theories. Phys. Rev. D 109(4), 046016 (2024) 10.1103/PhysRevD.109.046016 https://arxiv.org/abs/2307.12109arXiv:2307.12109 [gr-qc] [Bosso2018]Bosso:2018uus Bosso, P.: Rigorous Hamiltonian and Lagrangian analysis of classical and quantum theories with minimal length. Phys. Rev. D 97(12), 126010 (2018) 10.1103/PhysRevD.97.126010 https://arxiv.org/abs/1804.08202arXiv:1804.08202 [gr-qc] [Bosso2021]Bosso:2020aqm Bosso, P.: On the quasi-position representation in theories with a minimal length. Class. Quant. Grav. 
38(7), 075021 (2021) 10.1088/1361-6382/abe758 https://arxiv.org/abs/2005.12258arXiv:2005.12258 [gr-qc] [Bosso et al.2023]Bosso:2023sxr Bosso, P., Petruzziello, L., Wagner, F.: Minimal length: A cut-off in disguise? Phys. Rev. D 107(12), 126009 (2023) 10.1103/PhysRevD.107.126009 https://arxiv.org/abs/2302.04564arXiv:2302.04564 [hep-th] [Bosso et al.2022]Bosso:2022vlz Bosso, P., Petruzziello, L., Wagner, F.: The minimal length is physical. Phys. Lett. B 834, 137415 (2022) 10.1016/j.physletb.2022.137415 https://arxiv.org/abs/2206.05064arXiv:2206.05064 [gr-qc]
http://arxiv.org/abs/2407.13382v1
20240718104022
Open-World Visual Reasoning by a Neuro-Symbolic Program of Zero-Shot Symbols
[ "Gertjan Burghouts", "Fieke Hillerström", "Erwin Walraven", "Michael van Bekkum", "Frank Ruis", "Joris Sijs", "Jelle van Mil", "Judith Dijk" ]
cs.LG
[ "cs.LG" ]
Open-World Visual Reasoning G.J. Burghouts et al. TNO, 2597 AK The Hague, The Netherlands Open-World Visual Reasoning by a Neuro-Symbolic Program of Zero-Shot SymbolsSupported by TNO ERP APPL.AI program. G.J. Burghouts, F. Hillerström, E. Walraven, M. van Bekkum, F. Ruis, J. Sijs, J. van Mil, J. Dijk, W. Meijer July 22, 2024 ================================================================================================================= § ABSTRACT We consider the problem of finding spatial configurations of multiple objects in images, e.g., a mobile inspection robot is tasked to localize abandoned tools on the floor. We define the spatial configuration of objects by first-order logic in terms of relations and attributes. A neuro-symbolic program matches the logic formulas to probabilistic object proposals for the given image, provided by language-vision models by querying them for the symbols. This work is the first to combine neuro-symbolic programming (reasoning) and language-vision models (learning) to find spatial configurations of objects in images in an open world setting. We show the effectiveness by finding abandoned tools on floors and leaking pipes. We find that most prediction errors are due to biases in the language-vision model. § INTRODUCTION Finding spatial configurations of multiple objects in images is a very relevant capability. For instance, a mobile inspection robot is tasked to localize situations of interest on an industrial site, such as abandoned tools on the floor, because they may pose a hazard to the personnel and robot itself. Once such a spatial configuration is found, proper action can be taken, such as reporting it to the operator or removing the object using the robot. On such sites, there may be many activities; as a consequence, the objects, their position, and the environment may change constantly and this may differ per site. Therefore the robot may encounter new objects, e.g., a new type of tool; new environments, e.g., the floor is made of a different material; and new relevant situations, e.g., a leaking pipe. This is a challenge of the open world: handling previously unseen objects and configurations. We consider the problem of localizing spatial configurations of interest in an open world, with a focus on situations that require an action of some kind. To identify situations of interest, which deviate from the normal, a common approach is to use a statistical model of the sensory data <cit.>. However, the robot's goal, its context and explicit prior knowledge are not taken into account. As a consequence, the detected anomalies are not necessarily relevant for the robot's mission. Another drawback of statistical models is that they do not generalize well to new observations in the open world. They cannot be adapted quickly, because it requires a significant amount of training samples to adjust the statistical model. We take a different approach by leveraging prior knowledge, in this case knowledge about spatial configurations of objects. This knowledge can be adapted quickly during operation and via generic definitions it can generalize better to new situations. An example of a spatial configuration is a tool that is left behind on the floor. A tool can be one of many types, such as a hammer, screwdriver, wrench, and many more. Likewise, floors can be composed of different materials with various appearances. 
Our goal is to find spatial configurations in images, based on a high-level definition of the involved object (categories) and the spatial relations between them, with the ability to define the object or its category, and without learning dedicated models for each of them. The rationale is that such an approach has a broader applicability, because it can generalize better across similar configurations and is adaptable to new configurations by formulating a new definition. We start by defining the configuration of interest by first-order logic. Logic formulas specify the object categories and their spatial configuration in terms of relations between them. For flexibility and generalization, the logic formulas may entail the categories of the objects (tool), instead of the specific objects (hammer, screwdriver, etc.). This reduces the amount of human effort. The logic formulas are symbolic, where each relation and object (category) is expressed as a symbol. Each symbol is used as a query to generate object proposals for the current image. Object proposals are generated from language-vision models <cit.>, by querying them for the symbols that are in the logic formulas. In the image, there may be many objects and their proposals will be imperfect, leading to many possible hypotheses. Therefore, a multi-hypothesis framework is required. A natural choice is to leverage a neuro-symbolic program <cit.> for this purpose, as it validates the logic formulas against many hypotheses about the object proposals and the relations between them. Our method is outlined in Figure <ref>. Our contribution is the integration of the neuro-symbolic programming (reasoning) with the language-vision models (learned) for the purpose of finding spatial configurations of objects. We show the effectiveness on real-world images by finding specific situations in a robotic inspection setting. § RELATED WORK To operate in an open world, it is essential to interpret situations well. One aspect of such interpretation is to analyze configurations of objects in the scene. A typical approach is to incorporate prior knowledge when analyzing images <cit.>. Connecting knowledge representation and reasoning mechanisms with deep learning models <cit.> shows great promise for learning from the environment and at the same time reasoning about what has been learned <cit.>. Previous reasoning methods based on logic, such as DeepProbLog <cit.>, were limited in terms of scalability when there were many possible hypotheses. For instance, a task such as industrial inspection involves many possible objects and relations and therefore many possible hypotheses. Therefore, such methods were ill-suited for real-world applications. A more efficient variant of DeepProbLog was proposed <cit.>. Recently, a framework was proposed that further improved the efficiency: the neuro-symbolic programming framework called Scallop <cit.>. Scallop is based on first-order logic and introduces a tunable parameter k to specify the level of reasoning granularity. It restrains the validation of hypotheses by the top-k proofs. This asymptotically reduces the computational cost while providing relative accuracy guarantees. This is beneficial for our purpose, as we expect many possible hypotheses in complex environments with many objects and imperfect observations. In <cit.>, the neuro-symbolic programming framework was used to reason about visual scenes using a pre-defined set of classes of objects. 
End-to-end learning between objects and a neuro-symbolic program was proposed to jointly learn visual concepts, words and semantic parsing <cit.>. Both approaches rely on a fixed set of visual concepts, since a vocabulary, knowledge graph, or learning scheme are involved. We aim to generalize to the open world, extending the vocabulary of logical symbols to previously unseen objects, and enabling one to define symbols at the category level instead of the class level. For open world settings, perception models have to be applicable for a wide variety of tasks in a broad range of settings. General-purpose vision systems have been proposed, e.g., <cit.>, that are trained on a large set of datasets and tasks. Because some of these models can solve various tasks at the same time, these are coined foundation models <cit.>. An example of such a complex task is where the model provides answers to textual questions about images <cit.>. Large progress has been achieved in language-vision tasks. Language-vision models learn directly from huge datasets of texts describing images, which offers a broad source of supervision <cit.>. They have shown great promise to generalize beyond crisp classes and towards semantically related classes. This so-called zero-shot capability is beneficial for recognizing the object categories that are involved in the spatial configuration of interest. Recently, these models were extended with capabilities to localize objects in images via co-attentions <cit.> and to segment parts of the scene based on textual descriptions <cit.>. We adopt both methods to relate image parts to respectively objects and segments from the environment, which are relevant for the configuration at hand. The abovementioned works in language-vision models have made huge progress in learning visual concepts in relation to language and semantics. However, they have not considered a combination with reasoning. We integrate neuro-symbolic programming with language-vision models.To the best of our knowledge, this work is the first to combine neuro-symbolic programming and language-vision models to find spatial configurations of objects in images in an open world setting. § METHOD We find spatial configurations of object categories, based on prior knowledge about the involved objects and their relations. An overview of our method is shown in Figure <ref>. At the top, it shows how a configuration of interest, such as our working example `tool on floor', is translated into symbolic predicates such as object(o, tool), segment(x, floor) and above(o, x). At the bottom left, the figure shows how the symbols from the predicates, such as `tool' and `floor', are measured from images by language-vision models. These measurements are transformed into probabilistic facts, e.g., P(tool | image) and P(floor | image), which are provided to the neuro-symbolic program which validates them against the logic (bottom right). Each component is detailed in the following paragraphs. §.§ First-order logic The configuration of interest is defined by logic formulas and predicates. The symbols in the predicates are about the objects and segments in an image. For a tool that is left on the floor, the formulas are: ∃ o: object(o, tool) side(o, s_1) segment(s_1, floor) above(o, s_2) segment(s_2, floor) ¬ between_vert(z, o, s_2) This defines that a tool on the floor is defined by seeing the tool above and aside the floor. 
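For illustration only, the rule above can be evaluated over probabilistic proposals with a plain-Python stand-in; this is not the Scallop program used in the paper, and the product/max scoring, the grid coordinates and the toy proposals are simplifying assumptions. The helper predicates, spelled out in the text below, are written here directly as small functions.

```python
# A plain-Python stand-in (not the actual neuro-symbolic program) that evaluates the
# "tool on floor" rule over probabilistic proposals. Each proposal is (probability, gx, gy)
# on a coarse grid; product and max are used as a crude probabilistic semantics.
from itertools import product

def above(a, b):           return a[2] < b[2]                 # smaller y is higher in the image
def side(a, b):            return a[1] != b[1]                # left of or right of
def between_vert(z, a, b): return above(a, z) and above(z, b)

def p_tool_on_floor(objects, segments):
    """objects/segments: dicts mapping a symbol to a list of (prob, gx, gy) proposals."""
    best = 0.0
    for o, s1, s2 in product(objects.get("tool", []),
                             segments.get("floor", []),
                             segments.get("floor", [])):
        if not (side(o, s1) and above(o, s2)):
            continue
        # crude negation-as-failure: penalize by the most probable blocking proposal z
        blockers = [z[0] for zs in list(objects.values()) + list(segments.values())
                    for z in zs if between_vert(z, o, s2)]
        p = o[0]*s1[0]*s2[0]*(1.0 - max(blockers, default=0.0))
        best = max(best, p)
    return best

# toy proposals on a 4x4 grid: a likely tool at (2,1), floor beside it and below it
objects  = {"tool":  [(0.9, 2, 1)]}
segments = {"floor": [(0.8, 1, 1), (0.7, 2, 3)]}
print(p_tool_on_floor(objects, segments))   # 0.9*0.8*0.7 ~ 0.504
```

The exhaustive loop over proposal combinations is exactly what becomes expensive with many objects; the top-k proof strategy of the neuro-symbolic framework replaces it with a bounded search.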
For a robot, its perspective is oblique downward, i.e., the floor will be visible at the bottom of the tool and on the side of the tool. We also define that the tool should be on the floor, without anything (z) in between. Otherwise, a tool that is on a cabinet standing on the floor, would also fulfil the definition. The helper predicates are: between_vert(o_1, o_2, o_3) = above(o_1, o_2) above(o_3, o_1) side(o_1, o_2) = left(o_1, o_2) right(o_1, o_2) to express that some o_2 is vertically between o_1 and o_3. Finally, these are the predicates for positioning of one object relative to the left, right and above another object: left(o_1, o_2) = pos_x(o_1) < pos_x(o_2) right(o_1, o_2) = pos_x(o_1) > pos_x(o_2) above(o_1, o_2) = pos_y(o_1) < pos_y(o_2) Another example of a spatial configuration is a leaking pipe. Analogous to the abandoned tool, it can be formulated by predicates that relate objects and segments: ∃ o: object(o, pipe) neighbor(o, s) segment(s, leakage) An additional helper predicate is needed to express that one object is neighboring another object: neighbor(o_1, o_2) = | pos_x(o_1) - pos_x(o_2) | ≤ 1 | pos_y(o_1) - pos_y(o_2) | ≤ 1 We refer to the set of logic formulas as L. §.§ Image to Symbols To find a specified configuration in the current image, the symbols from the logic formulas are used to initiate corresponding image measurements. From the image, measurements are taken that are an estimate of the symbols. We refer to this process as `image to symbols'. For each logical symbol, we produce a probabilistic fact about that symbol for the given image, e.g., P(tool | image) and P(floor | image). The probabilistic facts are input to the neuro-symbolic inference process (next subsection). Each symbol generates probabilistic proposals for the image by prompting a language-vision model. For our purpose, we adopt CLIP <cit.> because we found its performance to be solid for various symbols, even for niche objects (i.e., zero-shot performance <cit.>). We query CLIP by a text prompt of the symbol, e.g. `tool'. CLIP operates on the image level, i.e., matching a prompt to the full image. Our objective is different: we aim to localize the symbols, such that we can analyze their spatial configurations within the image. We adopt recent extensions of CLIP that can generate attention maps <cit.> and segmentation maps <cit.> for text prompts. The attention maps show where the model is activated regarding objects, as the model is specialized to learn the important objects in images. Hence we use them to localize objects (e.g., tool). However, this model is object-specific and not suitable for environmental elements such as walls and floors. For this purpose, we leverage a segmentation model to segment scenes, in order to localize environmental segments (e.g., floor). In this way, we obtain probabilistic symbols. The attention maps for the CLIP model are obtained by <cit.>: A = 𝔼_h((∇ A ⊙ A)^+) where ⊙ is the Hadamard product, ∇ A = ∂ y_k/∂ A is CLIP's output for a textual prompt T_k, 𝔼_h is the mean across CLIP's transformer heads, and ·^+ denotes removal of negative contributions <cit.>. A is an attention map and therefore it is not calibrated to probabilistic values. For that purpose, we scale A to the range [0, 1] by dividing by its maximum value, thereby we obtain A'. The values of A' are not probabilities. That is, the values are uncalibrated for the prompt T_k, since the maximum value is always 1. 
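As a minimal sketch of the prompt-querying step, the snippet below scores an image against a set of contrasting prompts with the public CLIP checkpoint available through HuggingFace transformers and applies the softmax over prompts; the attention-map extraction described above is not reproduced here, and the file name and prompt list are placeholders.

```python
# Prompt-contrast scoring with a public CLIP checkpoint (image-level only; the relevancy
# maps used for localization are not reproduced here). Paths and prompts are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a tool", "a photo of a floor", "a photo of a wall",
           "a photo of a closet", "a photo of a ceiling"]
image = Image.open("inspection_frame.jpg")           # placeholder file name

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# y_k for each prompt T_k, turned into Y = softmax({y}) over the contrasting prompt set
scores = outputs.logits_per_image[0]
Y = scores.softmax(dim=-1)
for prompt, p in zip(prompts, Y.tolist()):
    print(f"{prompt:28s} {p:.3f}")
```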
To calibrate A' for a prompt T_k, we weight A' with the confidence for T_k, Y_k: G(k) = A' ⊙ Y_k, with Y = softmax({y}) for CLIP's outputs {y}_1:N for the respective set of prompts {T}_1:N. We consider a set of prompts that contrast the objects of interest (e.g., tool) with irrelevant negatives (e.g., wall, floor, closet, ceiling, etc.): T = {tool, ..., ceiling}. Since G(k) is calculated for an image I at its pixels (i,j), we rewrite: G(I,i,j,k). To obtain a proxy for the probability for the object of interest o at index k from set T, we take: G(I,i,j,k_o). For the full image, we denote the probability map shortly as G(I, o). To segment image I at image location (i,j), the response to a textual prompt T_k from the set of prompts {T}_1:N is given by <cit.>: f(I,i,j,k) = I'(i,j) · T'_k, k ∈{1, ..., N} where I' is the visual encoding of image I, T' the textual encoding of prompt T and · is the dot product. The vector f is transformed by the softmax operator to obtain values in the range of [0, 1]: F(I,i,j,:) = softmax(f(I,i,j,:)). We consider a set of prompts that contrast the segments of interest (e.g., floor) with irrelevant negatives (e.g., wall, ceiling, etc.): T = {floor, ..., ceiling}. To obtain the proxy of a probability for the segment of interest s at index k from set T, within image I at pixel (i,j), we take: F(I,i,j,k_s). For the full image, we denote the probability map shortly as F(I, s). Using F(I, s) and G(I, o), we obtain a probability for each symbol from the logic formulas, where for each pixel of image I the probability is stored: P(object = o | I, o) = G(I, o), o ∈{tool, pipe, etc.} P(segment = s | I, s) = F(I, s), s ∈{floor, leakage, etc.} §.§ Neuro-Symbolic Inference Inference of the probability for the (spatial) configuration C as defined by L, in the image I is performed by the neuro-symbolic program. This program operates on probabilistic facts <cit.> which we derive from the symbolic heatmaps P(s) and P(o) (Equation <ref>). We consider various spatial scales to infer C, because the distance between the robot and the scene may differ from time to time. As a consequence, the amount of pixels on the objects may vary. The original heatmaps P(s) and P(o) are finegrained. For multi-scale analysis, we add down-sampled versions of them, P(s, σ) and P(o, σ) at spatial scales σ ∈ {1, 2, .., 32} to indicate the down-sampling factor. This enables both finegrained and coarser inference, to accommodate for varying distances from camera to the relevant objects. To achieve scale-invariant inference, we select the scale σ that maximizes the probability P(C) for the spatial configuration C in the current image I given the logic L and its symbols S and O: P(C) = _σ∈{1, 2, 4, 8, 16} P(C | I, L, {P(s, σ)}_s ∈ S, {P(o, σ)}_o ∈ O) Note that this can be generalized to an optimal scale σ specifically for each segment s and object o in Equation <ref>. However, this requires that all possible combinations of s and o at all scales σ are analyzed by the neuro-symbolic program P. This will cause a computational burden. For computational efficiency, we optimize a single scale σ for all s and o. § EXPERIMENTS To analyze the performance of our method, we collected a wide variety of test images to find various spatial configurations of objects. The configurations are about undesired situations that require action: finding an abandoned tool on the floor inside a factory, and detecting a leaking pipe on an industrial site. 
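Before turning to the experiments, the scale-selection step described above can be illustrated with a small self-contained sketch. The block-averaging, the simplified "tool above floor" score and the synthetic probability maps below are stand-ins for the full neuro-symbolic program, not the paper's implementation.

```python
# Multi-scale sketch: probability maps for the symbols are block-averaged at several
# scales sigma, a simplified spatial rule is scored per scale, and the best scale is kept.
import numpy as np

def downsample(p, sigma):
    h, w = (p.shape[0]//sigma)*sigma, (p.shape[1]//sigma)*sigma
    return p[:h, :w].reshape(h//sigma, sigma, w//sigma, sigma).mean(axis=(1, 3))

def config_score(p_tool, p_floor):
    """Simplified 'tool above floor' score: best pair of rows with the tool strictly higher."""
    best = 0.0
    rows = p_tool.shape[0]
    for r_o in range(rows):
        for r_s in range(r_o + 1, rows):          # floor cell strictly below the tool cell
            best = max(best, p_tool[r_o].max()*p_floor[r_s].max())
    return best

rng = np.random.default_rng(1)
p_tool = rng.uniform(0.0, 0.1, size=(64, 64));  p_tool[20:24, 30:34] = 0.9   # synthetic P(tool)
p_floor = rng.uniform(0.0, 0.1, size=(64, 64)); p_floor[40:, :] = 0.8        # synthetic P(floor)

scores = {s: config_score(downsample(p_tool, s), downsample(p_floor, s))
          for s in (1, 2, 4, 8, 16)}
sigma_best = max(scores, key=scores.get)
print(sigma_best, round(scores[sigma_best], 3))   # expected for these maps: 1 0.72
```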
We evaluate our method in a zero-shot manner: we use the models without any retraining so they have not been optimized for the tested situations. The purpose of this setup is to validate the method in an open world. §.§ Abandoned Tool on Floor We collected 31 test images about an abandoned tool on the floor. To validate how well the method generalizes to various tools, we include images with hammers, screwdrivers, wrenches, etc. For the same reason, we include various floors, with different materials, textures and colors. Moreover, the viewpoint and zoom are varied significantly. There are 9 images of tools on floors. These are the positives which should be detected by our method. To verify true negatives, we include 8 images where there is a both a tool and a floor, but the tool is not on the floor (but on a cabinet, wall, etc.). There are 5 images with only a floor (no tool) and 4 images with only a tool (no floor). To verify true negatives, there are also 5 images where there is no tool and no floor. There are 9 positives, from which we detect 7 cases. Figure <ref> - <ref> shows 4 out of those 7 cases. Each case is illustrated by showing the original image in the top left and the result of the neuro-symbolic program in the top right. The involved symbols are shown in the lower left and right. The visualizations are heatmaps, where a high probability is red, whereas blue indicates a low probability. For the symbols (lower rows), the full heatmap is shown. For the result of the neuro-symbolic program (top-rights), only the most likely location is shown. The resolution of the heatmaps indicates the granularity of the reasoning, as it reflects the optimal spatial scale which maximized the resulting probability (Equation <ref>). In Figure <ref>, most spatial scales are relatively small, indicated by the fine-grained heatmaps. Since the symbols are predicted well (often the tools and floors have a high probability at the respective symbols), the reasoning is able to pinpoint a place in the image where the spatial configuration is fulfilled mostly (a peak in the neuro-symbolic output). In the case of a negative, the probability is typically much lower, as shown in the next paragraph. Out of the 17 negatives, there are 8 hard cases, as these images contain both a tool and a floor, but not in the defined configuration: the tool is not on the floor. Out of these 8 negatives, 6 are correctly assessed as such. The two errors (i.e., false positives) are shown in the next subsection. Figure <ref> and <ref> show 2 out of 6 true negatives. Although there are both a floor and tools, the reasoner correctly finds that the spatial configuration is not a tool that is on the floor. The two false positives are shown in Figure <ref> and <ref>. The neuro-symbolic program incorrectly reasons that these cases are a tool left on the floor (false positives). This is due to errors in the symbols. There is a wrong association of the symbol tool in both images. On the left, the Gazelle logo is associated with a tool. We hypothesize that the reason for this flaw is that Gazelle is a manufacturer of bicycles: many images on the web about this brand involve tools. The language-vision model probably has learned a bias to associate Gazelle with tools. On the right, the duct tape is considered to be a tool. From a semantic point of view this makes sense. These symbol errors propagate into the reasoner's outputs. Refining the prompts that we pose to the language-vision models, may overcome such errors in the symbols. 
Figure <ref> and <ref> show missed cases (false negatives). Again, the source of the errors is in the symbols. On the left, the tool (a grinder) is not recognized as such. This is a flaw in the language-vision model, probably because this tool does not appear often in everyday images and language. On the right, the floor is not recognized as such. We hypothesize that this is due to a lack of context within the image: it could also be a wooden plate. Without the proper evidence for each involved symbol, the reasoner cannot assess these configurations correctly. The computation time per image is approximately 1 second on a standard CPU without any optimization of efficiency. §.§ Leaking Pipe The second experiment is about another task: to find a leaking pipe. We collected 15 images of very different cases with various pipes and leaking fluids. Likewise the abandoned tool, we include various viewpoints and distances to the scenes. There are 15 positives, from which we detect 13 cases. Figure <ref> and <ref> show 2 out of those 13 cases. The organization of the figure is the same as previous result figures. Although the scenes are very different, the symbols are predicted well. Often the pipes and leakages have a high probability at the respective symbols. The neuro-symbolic program is able to pinpoint a place in the image where the spatial configuration is fulfilled (bright spot in the heatmap). Out of the 15 actual cases, 2 were considered to be a negative (miss). Figure <ref> shows a false negative where the leakage was localized too far away from the pipe. Hence the spatial arrangement did not fulfil the definition and therefore it was not assessed as a leaking pipe. In <ref> there is a false negative, because the left part of the broken pipe was not assessed as such. Therefore the leakage is not close to a part in the image that was considered as a pipe. §.§ Performance and Ablations The performance is evaluated by ROC curves as shown in Figure <ref>. Our neuro-symbolic program is compared against baselines that use partial information (only the tool or only the floor) and a baseline that uses the same information (tool and floor) but without the spatial configuration. For ablation, three variants of the neuro-symbolic program are evaluated, each having more contextual knowledge and multi-scale reasoning. It can be concluded that both tool and floor are required, and the spatial configuration is also essential. With only tool and floor as inputs, the performance becomes almost random, resp. AUC=0.55 and AUC=0.63. With both inputs, yet without spatial configuration, the performance increases only slightly: AUC=0.70. Including tool and floor in the neuro-symbolic program, without taking their spatial relations into account, is equally ineffective: AUC=0.71. When the spatial configuration is considered by the neuro-symbolic program, the performance increases significantly: AUC=0.83. Including multi-scale makes the neuro-symbolic program very effective: AUC=0.98. Most of the situations of interest can be detected without false positives, whereas the alternatives produce many false positives. § CONCLUSIONS For open world settings, we proposed a method to find spatial configurations of multiple objects in images. It enables expert-driven localization of new or unseen object configurations. Our method is able to find situations of interest in a robotic inspection setting: abandoned tools on floors and leaking pipes. 
The tools, floors, pipes and leakages have not been seen before and no task-specific training was performed. Most of the situations of interest were correctly localized. A few false positives occurred, due to erroneous object proposals. This was caused by a bias in the language-vision model, e.g., a logo of a bicycle manufacturer that was considered to be a tool. A few false negatives happened, due to missed object proposals. A typical example is a close-up image of a floor, which was missed because context was lacking. Our method avoids the necessity of learning dedicated models for each of the involved objects, which makes our method flexible and quickly operational in an open world.
http://arxiv.org/abs/2407.12619v1
20240717144452
MBE-grown virtual substrates for quantum dots emitting in the telecom O- and C-bands
[ "Bianca Scaparra", "Elise Sirotti", "Akhil Ajay", "Bjoern Jonas", "Beatrice Costa", "Hubert Riedl", "Pavel Avdienko", "Ian D. Sharp", "Gregor Koblmueller", "Eugenio Zallo", "Jonathan J. Finley", "Kai Mueller" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
Walter Schottky Institut, Technical University of Munich, Germany; TUM School of Computation, Information and Technology, Department of Electrical Engineering, Germany; TUM School of Natural Sciences, Department of Physics, Germany; Munich Center for Quantum Science and Technology (MCQST), Germany [bianca.scaparra@wsi.tum.de] § ABSTRACT InAs semiconductor quantum dots (QDs) emitting in the near infrared are promising platforms for on-demand single-photon sources and spin-photon interfaces. However, the realization of quantum-photonic nanodevices emitting in the second and third telecom windows with similar performance remains an open challenge. Here, we report an optimized heterostructure design for QDs emitting in the O- and C-bands grown by means of molecular beam epitaxy. The InAs QDs are grown on compositionally graded InGaAs buffers, which act as virtual substrates, and are embedded in mostly relaxed active regions.
Reciprocal space maps of the indium profiles and optical absorption spectra are used to optimize In_0.22Ga_0.78As and In_0.30Ga_0.70As active regions, accounting for the chosen indium grading profile. This approach results in a tunable QD photoluminescence (PL) emission from 1200 up to 1600nm. Power and polarization dependent micro-PL measurements performed at 4K reveal exciton-biexciton complexes from quantum dots emitting in the telecom O- and C-bands. The presented study establishes a flexible platform that can be an essential component for advanced photonic devices based on InAs/GaAs that serve as building blocks for future quantum networks. MBE-grown virtual substrates for quantum dots emitting in the telecom O- and C-bands K. Müller ==================================================================================== § INTRODUCTION Photonic quantum technologies require sources that emit photons at a fast rate and with a high degree of indistinguishability. Semiconductor quantum dots (QDs) emitting at 950nm or 785nm have been demonstrated to be very promising systems to meet these demands<cit.>. Indeed, the implementation of resonators, combined with the ability to tune the QDs emission wavelengths via the Stark effect and to electrically control the surrounding electronic environment, make InAs QDs excellent building blocks for quantum communication protocols<cit.>. In this regard, the possibility to achieve equal performance at telecom wavelengths would be especially appealing due to the pre-existing fiber infrastructure and low propagation losses <cit.>. Emission in the telecom C-band is obtained by means of InAs QDs based heterostructures grown on InP substrates <cit.>, whereas compositionally graded InGaAs layers are needed for the case of InAs QDs on GaAs substrates to reduce their lattice mismatch <cit.>. While the InAs/InP system allows for direct implementation of cavities by using Bragg gratings or photonic crystals <cit.>, an optimized nonlinear grading profile was suggested for the second heterostructure <cit.>. However, in that work the relaxed portion of the graded layer was included in the fabricated resonator, thus limiting the further implementation of electrical contacts or more elaborate post-processing. GaAs substrates offer the possibility to grow lattice-matched high refractive index contrast distributed Bragg reflectors and are less brittle than InP substrates. Additionally, metamorphic laser heterostructures emitting in the telecom bands grown on GaAs substrates have proven to be compatible with the implementation in the active region of gate-tunable devices and sacrificial layers<cit.>. A similar approach has also been effective in realizing emission sources in the telecom C-band<cit.>, however a tunable indium grading design resulting in high-quality QD emission from telecom O- to the C-bands is still coveted. In this paper, we develop a grading design in which the InAs QD layer is embedded in an InGaAs active region with fixed indium content that is carefully chosen depending on the maximum relaxation reached in the underlying graded layer. Using reciprocal space maps and optical absorption measurements, we determine the indium profiles that best match the used indium grading rate. In particular, we identify a favorable indium concentration step-back value between the matrix and graded layer as a function of the grading rate of the latter. 
The active region acts as an independent substrate with chosen lattice constant and the dislocations are mostly confined to the relaxed part of the graded layer. By embedding a single QD layer in the active region we demonstrate the tunability of the lattice constant of the final substrate resulting in a QD emission in both the telecom O- and C-bands. Power and polarization dependent photoluminescence measurements reveal bright and sharp lines with the typical exciton-biexciton behavior. This highlights the potential of the presented design for the realization of single-photon sources in the O- and C-bands. § METHODS The studied samples were grown with a solid source Veeco Gen II molecular beam epitaxy (MBE) system on undoped GaAs(001) substrates. The growth was monitored by reflection high energy electron diffraction and the native oxide desorption from the substrate was used to calibrate the surface temperature. The layer profile is presented in Fig. <ref> as a function of the indium content. First, a 200nm-thick GaAs buffer layer was grown, followed by a 100nm-thick AlAs layer. Both layers were grown at 610^∘C. Subsequently, 30nm of GaAs and a compositionally graded In_xGa_1-xAs metamorphic buffer layer (MBL) were grown at 470^∘C with an arsenic beam equivalent pressure of 1.6×10^-5 mbar. The group-III elemental fluxes (gallium, indium, aluminium) were calibrated in equivalent growth rate units of Å/s. During the growth of the graded In_xGa_1-xAs layer, the gallium growth rate was kept constant at 1Å/s, while the indium cell temperature was increased with a nominal rate of 0.02^∘C/s. The indium growth rate was then increased from 0.05Å/s to either 0.58Å/s or 0.75Å/s depending on the desired final lattice constant of the designed virtual substrate. The graded layer was then followed by an InGaAs matrix, or active region. The matrix presents a fixed indium content and was grown with a lower indium cell temperature, as depicted by the step-back in the indium profile shown in Fig. <ref>. The indium contents of the InGaAs matrices were varied from 19% to 36% depending on the maximum indium content of the underlying MBL. During the growth of the InGaAs matrix, the substrate temperature was increased to 500^∘C and the arsenic beam equivalent pressure was reduced to 1.1×10^-5 mbar. After a 150nm-thick InGaAs layer, a 100nm-thick InAlAs layer was grown to prevent further propagation of dislocations into the matrix. The InAs QD layer, consisting of 2.2 monolayers, was grown at 470^∘C and was embedded in the middle of a 300nm-thick InGaAs layer. To achieve a QD density gradient, the substrate rotation was stopped halfway through the growth of the QD layer. Structural characterization was performed via high-resolution X-ray diffraction measurements acquired with a Rigaku SmartLab system equipped with a copper anode. The Cu_Kα1 emission line (λ= 1.54059 Å) was filtered in the incident beam path with a Ge(220)x2 monochromator for high-resolution measurements. Reciprocal space maps (RSMs) around the asymmetric GaAs (422) reflection were used to analyze the crystalline properties of the grown heterostructures. The absorption coefficients of the InGaAs matrices grown on top of the MBLs were measured with a custom-made photothermal deflection spectroscopy (PDS) system. The sample was placed into a cuvette filled with perfluorohexane and illuminated at normal incidence with monochromatic light obtained with a monochromator placed after a Halogen lamp. 
The modulation frequency of the incident light was set at 9 Hz. The absorption was probed with a red laser diode directed parallel to the surface of the sample. A 2D detector connected to a lock-in amplifier was used to track the deflected probe laser beam. The PDS signal was then converted to an absorption coefficient with the layer thickness determined from scanning electron microscopy measurements. Temperature-dependent PL spectroscopy from ensembles of InAs QDs was performed under continuous wave non-resonant excitation at 660nm by using a helium flow cryostat operating in the 4-300K temperature range. Micro-PL measurements were carried out at 4K using a confocal microscopy setup based on continuous wave excitation at 780nm and an apochromatic objective with a numerical aperture of 0.81. The detected signal was analyzed using a spectrometer with a focal distance of 750mm equipped with an InGaAs linear array detector, providing a resolution of 51 μeV at 1550nm and of 71 μeV at 1310nm. Polarization-resolved measurements were carried out by detecting the emitted signal as a function of the angle of a half-waveplate placed in front of a linear polarizer in the detection path. § RESULTS AND DISCUSSION To realize the heterostructure presented in Fig. <ref>, the compositionally graded In_xGa_1-xAs layer first had to be optimized in order to obtain the desired final lattice constant. Depending on its final value, the indium content of the matrix, or final substrate, was then selected. Fig. <ref>a shows RSMs along the asymmetric (422) GaAs reflection of two different heterostructures, each consisting of a graded buffer layer and an InGaAs active region grown on top. The heterostructure A was grown with a maximum indium growth rate in the graded layer of 0.58Å/s, while in sample B the maximum growth rate was 0.75Å/s. The peak at larger coordinates arises from the GaAs substrate, while the diffraction signal spanning from the GaAs peak to the lowest q_x and q_z values originates from the graded MBL. Intensity maxima with coordinates close to the GaAs substrate correspond to areas of the MBL with lower indium content, whereas peaks located further away indicate regions with higher indium content. The dashed line illustrates the relaxation line, which corresponds to the direction in reciprocal space of a fully relaxed epitaxial layer. The signal arising from the layers with low indium content follows the relaxation line, revealing relaxed layers where the majority of the dislocations are confined <cit.>. Meanwhile, the part of the graded layers with higher indium content grows pseudomorphically to such relaxed layers, showing a constant q_x value and indicating the indium content at which the MBL starts to show residual strain <cit.>. The derived maximum indium contents in the graded layers are 35% (A) and 43% (B), determined following an approach similar to Ref. <cit.>. The signal spreading in diagonal directions stems from the mostly relaxed active regions with indium contents of 22% and 30%, respectively. Both exhibit some mosaicity. As desired, the peaks arising from the layers in the active region have the same q_x values as those of the relaxed part of the MBL at higher indium contents. Hence, the two layers show similar in-plane lattice constants and thus, the probability of dislocation propagation into the matrix is reduced <cit.>. Due to the chosen grading rate and growth temperature of the InGaAs graded layers, we expect that near-equilibrium strain relaxation was reached <cit.>. 
It has been shown that the residual strain in the uppermost part of a linearly graded layer is independent of both its thickness and the maximum indium concentration <cit.>. Thus, the indium step-back in the active region, needed to compensate for the residual strain of a graded layer, depends on the chosen grading rate<cit.>. Although the indium profile set by the temperature ramping rate of the indium cell is not strictly linear along the graded layer as shown in Fig. <ref>b, we found that an indium step-back close to ∼ 13% leads to a mostly relaxed active region for both sample A and B. This is consistent with previous literature, where for a linear grading rate of ∼30 In%/μm, the indium step-back required to compensate for the residual strain was determined to be ∼8.3% <cit.>. Consequently, the higher average grading rate used in this study (∼49 In%/μm) results in a larger degree of residual strain <cit.>. Therefore, a larger indium step-back is required. The inset in Fig. <ref>b shows the final indium profile with some non linearities across the graded layer. This is the result of the indium content dependence as a function of the cell temperature (see the inset at fixed gallium rate of 1Å/s). However, similar values of residual strain are found for both samples (see Fig. <ref>a) thus requiring the same indium step back value. Therefore, to minimize a further relaxation of the graded layer, the indium content selected for the matrix was closely matched to the value determined based on the chosen grading profile. Fig. <ref>a shows optical absorption spectra measured via PDS <cit.> from samples with active regions grown with different indium contents. Each curve is labeled with the indium content of the active region determined from asymmetric RSMs. All spectra exhibit similar absorption around 1.42eV (dashed line), which is attributed to the absorption at the bandgap of the GaAs substrate. In addition, absorption onsets around 1.1eV (0.9eV) are observed for samples in which InGaAs matrices are grown atop MBLs with maximum indium contents of 35% (43%). The linear regressions of the Tauc plots shown in Fig. <ref>b allow to estimate the optical bandgaps of the InGaAs matrices. The color scale follows the indium contents labeled in Fig. <ref>a. Prior to performing the linear regression, each spectrum was divided by the spectrum recorded from the bare GaAs substrate, as the optical absorption of the substrate is stronger than the one from the matrices. The intersections of the linear regions of the (αhν)^2 vs.hν plots with the energy axis indicate the optical bandgaps<cit.>, as labeled in Fig. <ref>b with λ. The optical bandgaps obtained by linear regression correspond to indium contents similar to the ones measured via RSMs. To prove whether the presented sample designs lead to a QD emission in the telecom bands, ensemble PL measurements were taken at low temperature by exciting at 780nm. As shown in Fig. <ref>c, the variation of the lattice mismatch due to different indium contents in the active region leads to a shift of the QD PL emission to the second and even further to the third telecom windows. Two peaks can be distinguished in each spectrum: the one at shorter wavelengths is attributed to charges recombining in the graded layer and the matrix, while the emission at longer wavelengths is attributed to the QDs. Samples with active regions with indium contents between 19% and 22% (maximum MBL indium content of 35%) lead to a QD emission that can be tuned across the O-band. 
When the indium in the matrices ranges between 30 and 36% (maximum MBL indium content of 43%) the QD emission shifts up to the telecom C-band. It is important to note that samples with indium step-back much lower than 13%, e.g. the one with a indium content of 36%, result in an active region that is compressively strained with respect to the matrix. Since this could lead to further relaxation in the graded layer<cit.>, such sample was not considered for further optical measurements. Fig. <ref>a shows ensemble spectra obtained under above-bandgap excitation at a power of 600 μW from samples with matrix indium contents of 22 and 33%, recorded in regions of the wafer with different QD densities. If the QD density is low enough (LD, red), the injected charges mostly recombine in the wetting layer (WL), leading to the appearance of the peaks at higher energies. Meanwhile, in regions where the QD density increases (HD, black), the signal from charges recombining in the WL is quenched and the QD peak appears at lower energies in both graphs. The peaks centered around 1.05eV and 0.92eV are due to charges recombining in the matrix and in the uppermost regions of the MBL (see Fig. <ref>). The insets show bright and sharp lines in the telecom O and C-bands attributed to PL emission of a few QDs excited at a wavelength of 780nm. The respective full width at half maximum (FWHM) values of the emissions is 89 ±15 μeV (matrix 22%) and 90 ±22 μeV (matrix 33%), which are limited by the spectrometer resolution. The assignment of the peaks presented in Fig. <ref>a is further confirmed by the temperature dependent PL recorded from sample A, as shown in Fig. <ref>b. The spectra were recorded in a similar region where the spectrum LD was measured and were normalized with respect to the peak maxima. The measurements were performed under above-bandgap excitation with a power of 300 μW. Each spectrum was fitted with Gaussian profiles and the corresponding center peak positions are shown for each temperature. The fitted spectral positions are overlaid with the InAs Varshni relation (dashed lines), which indicates the dependence of the bandgap of bulk InAs on the lattice temperature <cit.>. At 10K the emissions attributed to the WL (blue triangles) and the matrix (purple triangles) are visible. When the temperature reaches 60K, another peak at lower energies appears, which is attributed to the QD emission (red triangles), while the one that stems from the WL quenches. This marks the temperature at which the charge transfer mechanism from the WL states to the QD ones becomes favorable <cit.>. The QD peaks show the typical steeper spectral redshift when compared to the Varshni relation (dashed lines). This can be attributed to charge redistribution into QDs with lower energy ground states <cit.>. In contrast, the WL peak follows the behavior given by the Varshni relation until it quenches at 60K in favor of the QD emission. Meanwhile, the signal arising from the matrix and graded layer persists up to 110K before vanishing. Fig. <ref>c shows a 25x25 μm^2 micro-PL spatial map from sample A recorded at 4K. The existence of periodic modulations on the surface, and thus in the QD formation, is a hallmark of the underlying dislocation network due to the plastic relaxation in the MBL<cit.>. The image shows an area of the sample in which the QD density is on the order of ∼ 10^9 cm^-2. 
The pattern resulting from optically active QDs resembles the one from the cross-hatched surfaces along the [110]-[11̅0] directions typical for metamorphic surfaces <cit.>. As implied by the figure, the QD nucleation tends to follow the surface undulations along the [1-10] direction<cit.> and it is affected by the indium composition modulation and thus, strain fluctuation in the active region<cit.>. Micro-PL experiments on samples A and B were carried out at 4K in order to identify excitonic complexes in the second and third telecom windows. Figs. <ref>a-b show the power dependent PL of exciton-biexciton complexes. The marked emission lines can be identified as excitons (X) and biexcitons (XX) separated by binding energies of 2.5 and 2meV in the O and C-telecom bands, respectively, similar to the values reported in literature <cit.>. This assignment is further corroborated via power dependent and polarization resolved measurements. In Figs. <ref>c-d the integrated areas of the transition are presented as a function of the excitation power. The data follow Poissonian distributions, where an average number of generated excitons equal to one (blue dots) correspond to X transitions. Meanwhile, the data corresponding to an average number of two generated excitons (red dots) correspond to XX transitions<cit.>. At low powers, the X transitions show a linear dependence on the excitation power, while the XX transitions only appear at higher powers and exhibit a quadratic behavior. For powers greater than the ones at which the X emission saturates, the XX emission prevails. The FWHMs from X and XX of Fig. <ref>a, measured in the X saturation regime, are 123 and 90 μeV, respectively, while values of 107 and 54 μeV are measured from the respective lines in Figs. <ref>b. These values are lower than the average ones reported for QDs grown directly on MBE-grown MBLs (around 200-300 μeV for emissions in the telecom O- and C-bands)<cit.>. Polarization dependent PL measurements shown in Fig. <ref>e-f support the assigned excitonic behaviors. The measurements were carried out in the saturation regime of the X transitions. The counter-oscillating behavior of the spectral oscillations that present the same amplitude is typical of a XX-X cascade <cit.>. From fits to these data, we extract fine structure splittings of 60±6 μeV and 55±6 μeV for the QDs shown in Figs. <ref>e-f, respectively. These results confirm the effectiveness of the studied indium profiles along the heterostructure in tuning the QD emission over a large spectral region. They also demonstrate the successful design of the virtual substrates, which can be overgrown with substrates with tailored lattice constants. § CONCLUSIONS In this paper, we investigated indium concentration profiles within metamorphic substrates tailored to achieve a QD emission in the telecom O- and C-bands. The graded layers were overgrown with InGaAs matrices and act as virtual substrates. By means of reciprocal space maps and optical absorption measurements, the indium step-back value between MBL and active region was optimized to guarantee a good relaxation degree of the latter for the adopted indium grading profile. Low temperature PL spectra showed the tunability of the QD emission as a function of the indium content in the matrix. The reported excitonic complexes in the telecom O- and C-bands proved the effectiveness of the heterostructure design also showing narrow linewidths of the excitonic transitions. 
One major advantage of the presented heterostructures is the possibility to grow InAs QDs on optimized substrates with tailored lattice constants. The control of the mismatch between the QDs and the final substrate enables the tuning of their emission from 1200 up to 1600nm. Another advantage lies in the confinement of dislocations caused by the plastic relaxation of the lattice constant, primarily within the graded layer grown beneath the final substrate. The possibility of growing barriers or superlattices in the active region beneath the QDs helps prevent further propagation of the dislocations across the matrix. We anticipate this study to be the first step towards developing quantum light emitting diodes at 1550nm and defect-free substrates for integrated photonics on silicon based on GaAs/InAs material systems. We gratefully acknowledge financial support from the German Federal Ministry of Education and Research via the funding program Photonics Research Germany (Contract No. 13N14846), the European Union's Horizon 2020 research and innovation program under Grants Agreement No. 862035 (QLUSTER), the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via the projects MQCL (INST 95/1220-1), CNLG (MU 4215/4-1) and Germany's Excellence Strategy (MCQST, EXC-2111, 390814868), the Bavarian State Ministry of Science and Arts (StMWK) via the project EQAP and the Bavarian Ministry of Economic Affairs (StMWi) via project 6GQT. G.K. acknowledges financial support by the ERC project QUANtIC (ID: 771747) funded by the European Research Council. § REFERENCES
http://arxiv.org/abs/2407.13587v1
20240718152743
Hadronic Decays of a Higgs-mixed Scalar
[ "Patrick J. Blackstone", "Jaume Tarrús Castellà", "Emilie Passemar", "Jure Zupan" ]
hep-ph
[ "hep-ph", "hep-ex" ]
http://arxiv.org/abs/2407.13410v1
20240718113033
Neuromorphic Circuit Simulation with Memristors: Design and Evaluation Using MemTorch for MNIST and CIFAR
[ "Julio Souto", "Guillermo Botella", "Daniel García", "Raúl Murillo", "Alberto del Barrio" ]
cs.NE
[ "cs.NE", "physics.comp-ph" ]
§ ABSTRACT Memristors offer significant advantages as in-memory computing devices due to their non-volatility, low power consumption, and history-dependent conductivity. These attributes are particularly valuable in the realm of neuromorphic circuits for neural networks, which currently face limitations imposed by the Von Neumann architecture and high energy demands. This study evaluates the feasibility of using memristors for in-memory processing by constructing and training three digital convolutional neural networks with the datasets MNIST, CIFAR10 and CIFAR100. Subsequent conversion of these networks into memristive systems was performed using Memtorch. The simulations, conducted under ideal conditions, revealed minimal precision losses of nearly 1% during inference. Additionally, the study analyzed the impact of tile size and memristor-specific non-idealities on performance, highlighting the practical implications of integrating memristors in neuromorphic computing systems. This exploration into memristive neural network applications underscores the potential of Memtorch in advancing neuromorphic architectures. § THEORETICAL INTRODUCTION §.§ Von Neumann bottleneck In the present day, the majority of computers use or are based on Von Neumann’s architecture, first proposed by John Von Neumann in 1945. This architecture stores both instructions and data in the same memory system, treating them as the same type of information, which allows for greater flexibility as new instructions can be entered into memory. The design features a Central Processing Unit (CPU), comprising a Control Unit and an Arithmetic Logic Unit, alongside an Input/Output subsystem and a unified memory for both data and instructions. The interpretation of data and instructions depends on their context <cit.>. Although CPU speeds have significantly outpaced those of memory access, a major imbalance in cycle times persists, limiting overall performance. This limitation, known as the Von Neumann bottleneck, is primarily due to delays in memory access that can cause the CPU to remain idle. Additionally, the latency of the system bus connecting these components is a critical determinant of the total processing speed. §.§.§ PIM: Processing in Memory Many strategies have been employed to overcome the aforementioned bottleneck. One approach has been to modify the architecture to minimize data movement by conducting the processing in memory. This allows memory-intensive workloads to overcome the so-called “memory wall". We can distinguish three different paths to implement this methodology <cit.>. First, at the architectural level, by combining memory chips and processing units, for instance by constructing 3D arrays. Second, by utilizing peripheral circuits within memory sub-arrays to conduct the processing in memory. And lastly, and perhaps most relevant to our case study, by manufacturing devices that allow in-situ computation in memory, for example Resistive Random Access Memory (RRAM) devices. This specific approach makes use of Kirchhoff's laws and Ohm's law in order to carry out matrix calculations.
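As a minimal illustration of this idea (idealized, with no device non-idealities), a matrix stored as conductances and a vector applied as voltages already realize a matrix-vector product; the numbers below are arbitrary illustrative values:

import numpy as np

G = np.array([[1e-4, 5e-5, 2e-5],
              [3e-5, 8e-5, 1e-4]])   # stored conductance matrix (siemens), 2 word lines x 3 bit lines
V = np.array([0.2, 0.5])             # voltages applied to the word lines (volts)

# Ohm's law gives the per-device current V_i * G_ij, and Kirchhoff's current law
# sums the currents along each bit line, so the read-out is a matrix-vector product.
I = V @ G                            # column currents I_j = sum_i V_i * G_ij (amperes)
print(I)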
§.§ Neuromorphic circuits Following the discussion on processing-in-memory devices, the interest in reducing the energy consumption of today's computational tasks has lent special relevance to the construction of circuits that take inspiration from the brain, as it is very energy-efficient. As a result, the notion of neuromorphic circuits appeared, aiming to reproduce the functioning of neurons and synapses in the brain <cit.>. Our brain's neurons transmit signals that vary according to the strength of the synaptic connection. This strength is updated during learning. Similarly, a weight is assigned to a physical characteristic of, for instance, a resistive device, such as a conductance state. The weight must be a parameter that can be stored and be able to change in the artificial learning process. Several devices are being studied for their applicability in neuromorphic circuits: spintronic devices, which use the electron's spin to carry out computations; memristors, devices whose resistance remembers its current history; and synaptic transistors, hardware components (less costly in terms of energy than the memristor) that remember their voltage history and have, until recently, been limited to impractically low operating temperatures. A few more candidates exist, but we will focus on the memristor. It is worth briefly commenting on the recent advances in synaptic transistors, in particular a device that utilizes a moiré pattern presenting ferromagnetic properties at room temperature <cit.>; it mimics a neuron as it displays a moiré potential which captures electrons in the same way synaptic vesicles accumulate neurotransmitters. Nowadays, the first place as the optimal device for neuromorphic circuits is very disputed, as each proposed device offers different improvements and limitations. Building on the innovations in Processing in Memory (PIM) and the significance of minimizing data movement to overcome the 'memory wall,' it becomes crucial to explore effective architectural modifications. These modifications not only address the limitations of traditional computing architectures but also pave the way for advanced computational technologies. Integrating processing capabilities directly within memory arrays reduces latency and energy consumption associated with data transfer, leading to a resurgence in analog computing. Field-Programmable Analog Arrays (FPAAs) serve as key examples of how processing capabilities can be smoothly integrated within memory arrays <cit.>. §.§ Neural Networks Neural Networks (NNs) may have many layers with thousands of artificial neurons. Figure 2 shows a typical NN structure composed of input layers, hidden layers, and output layers. As mentioned earlier, neurons communicate via synapses. This behavior is replicated by artificial neurons, substituting the synapse by an input and the strength by the weight; then, activation is determined by a mathematical function. y=ϕ(Σ_jx_jw_j-b) Here, b corresponds to the bias, an internal parameter of the artificial neuron that shifts the outcome of the linear combination, and ϕ is the chosen activation function. As the weights dictate whether signals are propagated or inhibited, having correct values is crucial. Thus, learning algorithms are used to tune the values of the weights based on the error of the network in a process called training, during which patterns are learned in order to make predictions. This process is called backpropagation.
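As a concrete illustration of the neuron model above and of the weight update detailed next, the following minimal sketch performs a forward pass and a few gradient-descent steps on a single neuron; the sigmoid activation and squared-error loss are choices made purely for illustration:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.1, 0.4, -0.2])   # weights
b = 0.3                          # bias
eta = 0.05                       # learning rate
d = 1.0                          # desired output

for step in range(3):
    y = sigmoid(np.dot(x, w) - b)        # forward pass, y = phi(sum_j x_j w_j - b)
    error = y - d                        # deviation from the target
    grad_w = error * y * (1.0 - y) * x   # gradient of 0.5*(y-d)^2 w.r.t. each weight
    w -= eta * grad_w                    # update: Delta w = -eta * dE/dw
    print(step, round(float(y), 4))      # output moves towards the target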
We can compute the error of each neuron by comparing the outcome to the desired output: E_i(x,w,d)=O_i(x,w)-d_i The total error of the neural network is the sum of all individual errors. Using the gradient descent method, the network updates its weights based on their influence on the error, according to the following formula: Δ w_ij=-η∂ E /∂ w_ij There are several parameters that are set beforehand, called the hyperparameters. For instance, the learning rate of the network, which is the pace at which the algorithm updates its values during training. Other examples are batches, the number of training samples passed through at once during one training step; epochs, the number of learning repetitions on the same dataset; or the number of hidden layers. §.§.§ Convolutional Neural Networks Convolutional Neural Networks (CNNs) are a class of neural networks that have become the standard architecture for computer vision tasks. These applications benefit greatly from the ability of CNNs to extract features from an input image. The key components are convolutional layers, which create a feature map from an input image and later send it to an activation function. In a way, convolutional layers simplify images by keeping their most recognizable features in order to classify them. In the interest of understanding CNNs, we can use a common model such as VGG, represented in Figure 3, to explain the different layers <cit.>: * Convolutional layer: This layer scans the entire input image using sections known as kernels, which extract patterns through an element-wise dot product between the input and the kernel weights. After this multiplication, a bias term is added. For RGB images, the outputs from the three color channels are combined element-wise before adding the bias. The stride specifies the movement of the kernel across the image after each operation. The dimensions of the output image change according to the following formula, which accounts for the sizes of each variable: Output=(Input+2· Padding-Kernel)/Stride+1 Padding is used to maintain a certain image size, which can be useful at times to extract more features and prevent early downsizing. * ReLU layer: Rectified Linear Unit is a type of activation function that is very commonly used in NNs. The output of the convolutional layer is passed element-wise through the activation function: ReLU(x)=max(0,x) This function adds non-linearity to the CNN, which is needed to prevent having an output that corresponds to a linear combination of the inputs. Non-linear decision boundaries are a necessity in order to learn complex images. * Pooling layer: Similarly to a convolutional layer, a kernel and a stride have to be selected in the pooling layer. This function sweeps the feature map created by the convolutional layer and takes the maximum value of each kernel window, which effectively decreases image size. * Fully-connected layer: It is composed of a flatten function that eliminates the spatial dimensions of the image so that it can be used as an input for the linear layers. The primary function of these linear layers is to classify the image based on the features extracted by previous layers in the network, effectively mapping the learned features to specific outputs. Several techniques can be employed to improve the accuracy and stability of convolutional neural networks (CNNs). For instance, batch normalization standardizes the inputs to each layer within a network, facilitating smoother and more stable training <cit.>.
Additionally, dropout layers enhance model generalization by randomly omitting a proportion of the units during training, thereby preventing the units from co-adapting too closely. When configured with an optimal dropout rate, this technique helps mitigate overfitting. §.§ Memristors Circling back to devices for processing in memory, we now focus on memristors. Since Leon O. Chua, an electrical engineer, published 'Memristor - The Missing Circuit Element' in 1971 <cit.>, this device has been regarded as the missing basic circuit element. Chua postulated its existence, noting that until then there were three primary circuit components, each relating a pair of the four fundamental circuit variables: magnetic flux, voltage, charge, and current. Resistors relate current and voltage; inductors link magnetic flux and current; and capacitors connect charge and voltage. The only missing device was one that could establish a relationship between charge and magnetic flux. It was not until 2008 that a research group at Hewlett-Packard Labs claimed to have discovered this missing element <cit.>. Controversy arose because the device was not entirely new and did not establish the relationship between memory and magnetism as initially theorized. In fact, this new device could operate without magnetism. Nevertheless, Leon O. Chua recognized the discovery. In 2014, he published 'If it’s pinched it’s a memristor,' a paper in which he broadened the definition of memristors <cit.>. He emphasized that if a semiconductive device exhibited a pinched hysteresis loop, then it was a memristor, even if it was not ideal. §.§.§ Physical characteristics Memristors are characterized by a simple metal-insulator-metal structure. The device can be described as a sandwich where the two outside layers are formed by electrodes and, in the middle, there can be either an insulator or a semiconductor depending on the device. As a general explanation of the functioning of the device, it is useful to analyze HP's memristor, which featured two Pt electrodes and a middle layer of TiO_2 <cit.>. The TiO_2 was doped, creating a zone of oxygen vacancies and a zone without them. As a consequence, a boundary is created in the resistive layer between the two zones. Because the oxygen-deficient TiO_2 provides electrons to carry the current, these vacancies reduce the resistance. Nevertheless, it is necessary to take into account that the vacancies have positive charge, so an applied current causes the boundary between the doped and undoped zones to move, effectively altering the resistance. Depending on the polarity of the current, the boundary moves towards either side, setting the memristor to a high resistance state (HRS), if the doped region is narrow, or a low resistance state (LRS), if the doped region is wide. This is determined by the state variable w <cit.> and can be observed in Figure 5. In addition, memristors do not store any charge; thus, the set resistance state is remembered and depends on the current history. This makes the memristor a candidate for neuromorphic devices, since it displays the plasticity of neurons. The internal materials may vary, as there are different types of memristors. For example, the HP memristor falls under the anion memristors <cit.>. Meanwhile, there are also cation memristors and dual-type ones, which mix the migration of both anions and cations within the resistive layer. In order to explain their properties, a brief comment on possible materials for each component will be made.
The electrodes may play a role in the resistive switching present in memristors or solely let current through. For instance, in the case of cation memristors, electrochemically active metals are used, mainly Cu and Ag. This creates conductive filaments, narrow pathways that offer a route for ions to migrate and that form thanks to electrochemical reactions at the electrode. There are other materials that do not alter the behavior noticeably, commonly inert metals such as Pt and Au. Additionally, other materials like graphene or several alloys can be used depending on the technical requirements of the device. The key component is the resistive layer, as it ultimately determines the behavior of the memristor. It can be composed of inorganic or organic materials <cit.>. The former offers more stability, while the latter is less costly production-wise. Inorganic materials that can be used in a memristive device may be binary oxides, perovskites and 2D materials. Binary oxides, such as TiO_2, allow for high stability, long durability, fast switching speeds and compatibility with Complementary Metal-Oxide Semiconductor (CMOS) fabrication processes. Perovskites, which are compounds that share the structure of CaTiO_3, possess optoelectronic properties that make them attractive for memristors, yet sustainability is an issue, as well as their production, since they are not compatible with the standardized CMOS process. Inorganic 2D materials can be used to produce memristors that display low power consumption and flexibility but little scalability. §.§.§ Properties Memristors stand out for their low power consumption, perhaps one of their most relevant qualities nowadays, considering the amount of energy consumed in machine learning. Analog computing in memory with memristors tackles both the need to surpass the memory wall and the energy issue. Further, memristors are two-terminal devices that present non-volatility, the ability to retain their state without current flowing through them. This property allows for the construction of ReRAM devices, or Resistive Random-Access Memory <cit.>. These devices support the creation of densely packed computational circuits and can endure a great number of cycles. Memristors can switch states very quickly, which is very useful in high-speed data processing. This switching behavior can be seen in Figure 6.a, a current-voltage graph. Current is plotted logarithmically, which accentuates the leap from low currents near zero voltage to higher values of current. We can observe how the current nearly plateaus at high values. The mechanism explained in the previous section is clearly seen: increasing voltage increases current in a non-linear manner, and the same can be said for decreasing negative voltages, as the memristor is bipolar. Both set and reset paths are observed on each side of the resistive switching graph. The hysteresis loop is best known from ferromagnetism, where it shows the dependence of a system on its history, usually in terms of magnetic flux (B) and magnetic field strength. In Figure 6.b it is represented in the I-V (current-voltage) plane, meaning that the current can be different for the same voltage depending on history. It crosses itself at (0,0), which is what we typically refer to as a pinched loop. Thus, when there is no voltage, the current is 0 as well. Nevertheless, the resistance state is remembered. Depending on the frequency of the current, the hysteresis loop can appear closer to a line if the frequency is high, or wider if the frequency is lower <cit.>.
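To make the pinched-loop behavior concrete, the sketch below integrates the classic HP linear ion-drift model under a sinusoidal drive; the model and all parameter values are illustrative assumptions, not the devices of Figure 6 or the commercial devices characterized next:

import numpy as np

# Linear ion-drift memristor model (HP-style), used only to illustrate hysteresis.
R_on, R_off = 100.0, 16e3          # on/off resistances (ohms)
D, mu_v = 10e-9, 1e-14             # device thickness (m) and ion mobility (m^2 s^-1 V^-1)
w = 0.5 * D                        # state variable: width of the doped region
f, dt = 1.0, 1e-4                  # drive frequency (Hz) and time step (s)

ts = np.arange(0.0, 2.0 / f, dt)
vs = 1.0 * np.sin(2 * np.pi * f * ts)
currents = []
for v in vs:
    M = R_on * (w / D) + R_off * (1.0 - w / D)   # memristance set by the state variable
    i = v / M                                     # Ohm's law
    w += mu_v * (R_on / D) * i * dt               # dw/dt = mu_v * R_on / D * i(t)
    w = min(max(w, 0.0), D)                       # keep the boundary inside the device
    currents.append(i)

# Plotting currents against vs (e.g., with matplotlib) shows a loop pinched at the
# origin that narrows as the drive frequency increases.
print(min(currents), max(currents))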
Commercial Ag/Ge2Se3/SnSe/Ge2Se3 memristors have been characterized experimentally in the literature <cit.> through I-V plots derived from voltage-ramped signals. Variability in the devices was assessed using automated extraction methods. Both the I-V behavior and variability were successfully modeled using the Dynamic Memdiode Model, with model parameters refined through a genetic algorithm. §.§.§ Memristor array Delving further into memristive analog computation requires explaining the arrangement of memristor crossbar arrays. This architecture allows processing in memory by carrying out matrix-vector multiplications, which are essential for CNNs. Non-volatile memory elements such as memristors allow us to encode numerical values as analog conductance states in order to perform calculations. Thus, we exploit the memristive ability of a device to store the information used to perform matrix-vector multiplications by means of Kirchhoff's and Ohm's laws. Conductance acts as a counterpart to resistance, measuring how easily the memristor lets electricity flow. G=I/V=1/R Conductance depends on the state variable w of the memristor. Thus its behavior can be explained as a function of w: I(t)=G(w(t))· V(t) d/d tw(t)=F(w(t),V(t)) Equation (8) showcases how the state variable changes in time depending on the voltage applied and the state of the memristor. Thus, the inherent non-linearity is characterized by these expressions <cit.>. The array represents a matrix of conductances corresponding to weights. It features a structure formed by Word Lines (WL), which connect rows of memristors originating from a Digital to Analog Converter (DAC), and Bit Lines (BL), which run perpendicular to the Word Lines and connect to an Analog to Digital Converter (ADC). The Word Lines input a vector that is encoded as voltages by the DAC. These voltages are then modulated by the conductance states of the memristors in the array <cit.>. In this way, the matrix multiplication is carried out analogically, as represented in Figure 7, by means of circuit laws and elements. Finally, each Bit Line outputs a current that is a result of the conductance state of each memristor in the column and the input voltages (9). This device is commonly called a memristor-based accelerator as it improves the speed of computation. I_j=Σ_i V_i G_ij Loading the weight matrices into the array is a critical yet time-consuming task. When handling large matrices, multiple crossbar tiles are employed to prevent performance degradation and ensure accurate layer representation. To facilitate this, a best-fit algorithm is utilized. This algorithm seeks to allocate matrices to the smallest suitable tile. If a matrix cannot be accommodated within any available tile, it is divided and distributed across multiple tiles to ensure proper mapping. As memristors do not have negative conductance states, two crossbar arrays may be needed for each weight matrix to account for negative weights <cit.>. As a result, weights are separated into negative and positive values, each mapped onto a different array. When an MVM (Matrix-Vector Multiplication) is conducted, both are taken into account: AB[i,j]=KΣ_k A[i,k](g_pos[k,j]-g_neg[k,j]) where B is the matrix that we map and A is the input that enters through the WLs. The value of K has to be determined for each crossbar by performing a linear regression between the output current of each column and its corresponding expected output. The conductances are represented by g.
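The double-column read-out of Equations (9)-(10) can be sketched in a few lines of numpy, assuming ideal devices and a unit scaling factor K (in practice K is obtained from the per-column linear regression described above, and the weights are mapped to physical conductances rather than used directly):

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 6))           # inputs applied to the word lines
B = rng.normal(size=(6, 3))           # weight matrix to be mapped onto the crossbars

# Split the weights into positive and negative parts; each part plays the role of
# one conductance matrix in the double-column scheme (ideal, unit-scaled mapping).
g_pos = np.clip(B, 0.0, None)
g_neg = np.clip(-B, 0.0, None)
K = 1.0                               # scaling factor, fitted by regression in practice

analog = K * (A @ g_pos - A @ g_neg)  # differential read-out per column
digital = A @ B
print(np.max(np.abs(analog - digital)))   # ~1e-16: identical in the ideal case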
Furthermore, weights must be translated into conductance values, which can be done by means of: g[i,j]=(g_ON-g_OFF)(σ (w)[i,j]-w_min)/(w_max-w_min) Firstly, g_ON and g_OFF represent the conductances corresponding to the low and high resistance states of the memristor, respectively. Then, w_min and w_max are the lowest and highest weights to map. We are considering the physical bounds, given by the resistance states, and the bounds of our matrices' weights. As a result, the weights are adjusted to the analog bounds of the device. Finally, σ (w) corresponds to the weight being mapped. In certain crossbar arrays, there are transistors integrated next to each memristor, as seen in Figure 8, allowing for the selection of individual devices. This proves useful when mapping matrices, a process implemented by applying voltages to the memristors. Thus, a 1T1R (one transistor-one resistor) array enables individual mapping of each memristor without influencing adjacent ones, whereas a conventional 1R (one resistor) array requires corrective methods to manage influences on nearby memristors. §.§.§ Simulation Frameworks Given the current interest in memristor-based accelerators for their performance and low energy consumption, there is a need for a reliable and multifaceted simulation framework. Nowadays, each simulator offers its own strengths, but a general framework is lacking. Most simulation frameworks focus on machine learning tasks. For instance, MNSIM was one of the first behavior-level simulators created for neuromorphic computing, enabling the customization of the design and estimating accuracies of memristive structures when computing. This model offers estimations of area, power consumption and latency, focusing on hardware design <cit.>. NeuroSim also grants the tools to study the latency, area and power aspects of MBAs <cit.>. It is built in a hierarchical multilevel manner that touches circuit, device and neural network topology, making it a robust framework to research device design. Lastly, PUMA assesses performance using an Instruction Set Architecture that allows for complex programming in conjunction with the efficiency of analog computing <cit.>. There are other frameworks that focus on non-ideal behavior, which is very relevant when simulating memristive-based devices, since analog computing presents errors due to electrical properties. DL-RSIM, for example, calculates the error rates of MBAs and takes them into account in the inference phase <cit.>. RxNN also considers non-idealities, such as parasitic behavior and process variations. It is centered around large-scale DNNs and it has shown that modelling only ideal behavior gives poor results, as non-idealities cause important downgrades in accuracy <cit.>. GraphRSim analyzes the impact of these non-idealities in graph algorithms that use ReRAM devices as matrix multiplicators <cit.>. IBM's Analog Hardware Acceleration Kit, or aihwkit, on the other hand, focuses on the optimization of NN algorithms for training and inference with analog chips <cit.>. The structure of a NN is very relevant when mapping weights onto memristor arrays and thus an optimal model can greatly improve performance. Simulations in this study will be conducted via Memtorch, a framework specifically designed to address non-ideal behaviors during inference <cit.> that excels in replicating the nuanced behaviors of memristive systems under real-world conditions.
It offers a detailed examination of non-linear and stochastic effects that are often overlooked by other frameworks. §.§.§ Memtorch Memtorch is an open-source framework based in Python that is compatible with the PyTorch ML library and is focused on the modelling of non-ideal characteristics to study losses in accuracy <cit.>. In order to utilize the simulator, we first need to construct and train a PyTorch model. Then it can be patched into a memristive one, writing weights into memristor crossbar arrays that Memtorch simulates, transforming the computation framework from digital to analog. This framework offers the possibility of taking non-idealities into account, making it fairly useful for comparison tasks and the assessment of precision losses. It is possible to account for finite conductance states, device faults, non-linearity, endurance and retention. § METHODOLOGY Our objective is to assess the versatility and usefulness of Memtorch as a framework for conducting simulations of the matrix multiplications that comprise inference using memristor crossbar arrays. Initially, the purpose is to build the CNN models and later convert them to memristive models following the methodology displayed in Figure 9. Three models will be constructed, varying in size, each for a different dataset. §.§ Dataset selection As PyTorch offers several datasets that are readied for machine learning tasks, we can use them to train our models. Each model built will be tailored to a chosen dataset from Table 1. MNIST is commonly used for the training of toy models and it provides a good starting point for studying Memtorch's capabilities. It can also serve as a tuning dataset, as its simpler model can be patched faster, which is useful for the trial-and-error process of hyperparameter tuning. Nevertheless, the most relevant feature of the dataset is the simplicity of the images, which depict handwritten digits in gray scale, such as that in Figure 10.a. This simplicity translates into easy-to-learn patterns that contribute to having robust and stable models. CIFAR10 and CIFAR100 have the same type of colored images, as seen in Figure 10.b, but the latter presents a harder challenge for machine learning as it has 100 classes and fewer images per class; thus the focus is set on efficient feature learning. Both these datasets have classes such as dogs, planes, etc. These vary in distortion, lighting and point of view. Therefore, their features are harder to learn and, at the same time, more relevant, in the sense that to detect the same pattern in a very different image of the same class, the features must be well learned. §.§ Model construction As mentioned above, we have taken inspiration from VGG CNNs to build our models. Our image sizes are small and thus the networks will have fewer layers and perform a shallower analysis. Memtorch also imposes a limitation in terms of computation time, which is hugely affected by the size of the model to patch. For instance, for a patched model of 6 convolutions, 3 fully-connected linear layers and more than 264 hidden units, patching and inference could take several hours. Therefore, the models have to be as efficient as possible in terms of size and accuracy. On top of that, image size is another aspect to consider, as the area varies with each layer according to (4). The right hyperparameters have to be chosen to avoid decreasing the size of the image too much before the features can be learned, and to avoid adding excessive padding to the image, as it increases computation time.
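As a small aid for this bookkeeping, the output-size rule of Equation (4) can be wrapped in a helper; the kernel, stride and padding values below are illustrative rather than the exact hyperparameters of our models:

def conv_output_size(size, kernel, stride=1, padding=0):
    # Spatial output size of a convolution or pooling layer, Equation (4).
    return (size + 2 * padding - kernel) // stride + 1

# Illustrative pass through one block: 3x3 convolutions with padding 1 keep the
# 32x32 CIFAR resolution, and a 2x2 max pooling with stride 2 halves it.
size = 32
size = conv_output_size(size, kernel=3, padding=1)   # 32
size = conv_output_size(size, kernel=3, padding=1)   # 32
size = conv_output_size(size, kernel=2, stride=2)    # 16
print(size)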
Several factors impact the design of CNN architectures. Firstly, colored images pose greater challenges for pattern recognition, making datasets like MNIST, which contains simpler grayscale images, less complex by comparison. Additionally, the size of the images is crucial, as the dimensions vary across different layers. Selecting appropriate hyperparameters is essential to ensure that the image size does not reduce excessively before adequate feature extraction occurs. Moreover, avoiding excessive padding is important, as it can lead to increased computation time. The number of classes is a very relevant property of a dataset, as the model needs a better understanding of each one in order to set them apart. On top of that, more complex images need deeper CNNs with more convolutional layers to be learned. When constructing our conventional CNNs, we adjust several parameters including the number of convolutional layers, padding, kernel size, and stride. During the training phase, we also determine the optimal learning rates and the number of hidden units. As a standard practice, all models are trained for 50 epochs. §.§.§ Model Architecture Customization for MNIST, CIFAR10, and CIFAR100 Datasets To clarify, each model is named after the dataset used for its training, as shown in Figure 11. The so-called "MNIST" model consists of two convolutional blocks, each containing two convolution layers, batch normalization, and ReLU activation. Following each convolutional block, a pooling step is performed to simplify and reduce the image size by half. A fully-connected layer concludes the architecture, a common practice in such models, with the number of hidden units fixed at 10. The "CIFAR10" and "CIFAR100" models are structured similarly, each comprising three blocks with two convolution layers. However, "CIFAR10" features 16, 32, and 64 hidden units in each successive block, while "CIFAR100" model has twice as many in each—32, 64, and 128. To combat overfitting, dropout layers are incorporated into both models. Additionally, the fully-connected section of "CIFAR100" includes three layers, compared to "CIFAR10's" two layers. From now on, we will refer to each network by the name of its training dataset (without quotation marks). §.§ Training phase To assess the quality of the models, we implemented a K-Fold cross-validation method, dividing each dataset into 5 folds, with each fold containing designated training and validation segments. This approach allows us to evaluate how consistently the model performs across different subsets and check for any variations in convergence across the folds. After completing the validation phase, we proceeded with training using PyTorch's designated training and testing partitions. Following this training phase, we saved the state dictionaries of the models, which include their respective weights and biases, along with their performance metrics on the test set. §.§ Patching model with Memtorch §.§.§ Memristor model For this study, the chosen memristor model was VTEAM <cit.>, which is a voltage-controlled memristor. It has a certain voltage threshold that needs to be surpassed in order for the device to display a change in its conductance state. This is a realistic implementation, as this threshold has been observed experimentally. 
The VTEAM model is calibrated with parameters that replicate the behaviors of three experimentally tested memristors: a Pt-Hf-Ti memristor, a ferroelectric memristor, and a metallic nanowire memristor, thereby providing a robust foundation for simulating authentic physical phenomena. The customizable memristor parameters are the time resolution of the simulation; α_off and α_on, which control the rate of change of the state variable of the memristor at the high and low resistance states; v_off and v_on, the thresholds beyond which the state variable starts to vary; R_off and R_on, the maximum and minimum resistance states; and k_off and k_on, which set the rate of change of the state variable when the voltage is below or above a certain value. As a general measure, we chose the parameters that correspond to the behavior of the Pt-Hf-Ti memristor. §.§.§ Crossbar Array and Mapping When integrating the model with the crossbar structure and mapping algorithm, several additional parameters must be considered. For example, the mapping routine provides multiple functional options. In this study, we opted for the naive mapping approach because it simplifies the translation of weights into memristor parameters. We also need to specify which types of layers are to be converted; in this case, both convolutional and linear layers are included. Additionally, it is essential to determine the configuration of the crossbar array, whether it is a 1T1R (one transistor-one resistor) or a simpler 1R (one resistor) setup. This distinction is made using a boolean variable. Transistors are preferred in our model because they help to accurately map weights by minimizing interference from adjacent memristors, enhancing the precision of weight adjustments and deletions. Consequently, this setting has been enabled (set to true). Memtorch allows us to simulate the programming routine, which sets how weights are mapped with voltage pulses. It has been set to none in order to avoid going through excessive steps in the simulation. If the crossbar array is 1R, a programming routine should be specified, as mapping weights then requires explicit instructions: blocking out and selecting memristors is much more complex without transistors. Additionally, it is very relevant to adapt the tile shapes to each CNN. The Memtorch API recommends tile shapes of 128×128, 256×64 or 256×256; nevertheless, smaller arrays of 64×64 are also an option. In order to keep the physical variables reasonable, a maximum input voltage is specified to prevent simulating unrealistic voltages that would damage the device. This threshold was adjusted to study its effect on weight mapping. Shifting our focus to the external parts of the circuitry, the ADC has to be given a bit resolution, used to interpret the analog input, and an overflow percentage. When an input is higher than the ADC's limit, it overflows, and the analog input is assigned the highest possible digital output. The percentage of overflowing inputs was set to 0. When converting weights to conductances, a quantization method is required. Our models were patched using linear quantization, which maps the weights evenly onto a discrete number of conductance states. Quantization can also be done logarithmically or exponentially. Finally, we can customize parameters like the mapping scheme, which determines how weights are assigned to memristors. We opted for the double column scheme, which utilizes two memristors per weight to accommodate negative values.
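As a rough illustration of how these choices are handed to the simulator, the following sketch patches a trained network with the options discussed above (VTEAM device, naive mapping, 1T1R cells, no programming routine, linear quantization). The import paths and argument names follow the publicly documented Memtorch examples; they are assumptions on our side and should be checked against the installed version of the library.

import copy
import torch
import memtorch
from memtorch.mn.Module import patch_model
from memtorch.map.Parameter import naive_map

reference_memristor = memtorch.bh.memristor.VTEAM           # device model
reference_memristor_params = {"time_series_resolution": 1e-10}

patched_model = patch_model(
    copy.deepcopy(trained_model),            # trained_model: the trained PyTorch CNN (name is ours)
    memristor_model=reference_memristor,
    memristor_model_params=reference_memristor_params,
    module_parameters_to_patch=[torch.nn.Conv2d, torch.nn.Linear],
    mapping_routine=naive_map,               # naive weight-to-conductance mapping
    transistor=True,                         # 1T1R crossbar cells
    programming_routine=None,                # skip pulse-level programming
    tile_shape=(128, 128),                   # one of the recommended tile shapes
    max_input_voltage=9.0,                   # value settled on in the experiments below
    ADC_resolution=8,
    ADC_overflow_rate=0.0,
    quant_method="linear",
    # the double-column weight scheme discussed above is assumed to be the default;
    # pass the corresponding scheme argument explicitly if the installed version requires it
)
patched_model.tune_()   # per-layer tuning pass to reduce mapping error (name taken from the examples)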
While a single column scheme is an alternative, it does not support negative values, which could compromise the accuracy of weight mapping and subsequent calculations. §.§ Experiments Using the available non idealities offered by Memtorch, we will conduct a series of inference phases with different combinations. * Effect of varying max input voltage on accuracy. * Different tiles sizes of 64×64, 128×128 and 256×256 with varying ADC resolutions from 2 to 10 bits. * Stochastic effects varying σ in relation to conduction states. * Device faults for stuck-at HRS and LRS. * Endurance and retention effects. CIFAR100 will be only used for the first experiment. For the rest, only MNIST and CIFAR10 will be patched as they are computationally lighter and easier to work with. § RESULTS §.§ Training of CNNs Following our methodology, we will first comment on the quality of our models. To study their behavior on the datasets, K-Fold cross validation was performed. Hereunder the results are presented for each CNN in Figure 12. As seen in the graphs, the training of each fold behaved similarly, which showed proof that the models do not act differently in the training phase depending on the slice of the dataset. Cross validation also allowed us to measure the quality of the model in terms of the reached accuracy over the test set, which we obtained by averaging the obtained accuracies of the folds, displayed on Table 2.a. For MNIST, the convergence happens rather quickly and it has a very high starting accuracy. This shows the simplicity of the dataset, which is quickly learned by the model. Both CIFAR10 and CIFAR100 take more epochs to converge into a final value. Yet, after this characterization of the quality of the CNN, another training phase is conducted using the partitions given by PyTorch for each dataset. We have saved the state dictionary of the epoch for which the model had a higher test accuracy over the test set. This decision was made in order to later patch the most accurate model possible. The models' final accuracy after training over the test set can be seen in Table 2.b. §.§ Memristive inference experiments In this section, the effects in inference accuracy of the patching of the CNNs using Memtorch will be discussed. Testing phases varied on time depending on the model and parameters. MNIST inference took 196 seconds with tile size of 64×64, while CIFAR10's took 680 seconds and CIFAR100's 1300 seconds using the same settings. The Memtorch simulations were computed with a NVIDIA GTX 1060 GPU, Intel Core i7-8750H CPU with 6 cores and 16GB of RAM on a Python 3.10.12 environment. The operating system was Ubuntu 22.04 using Windows Subsystem for Linux. Nevertheless, the training of the digital models was carried out in Google Colab using an A100 GPU. All models averaged epochs of 15 seconds. §.§.§ Effect of max input voltage The maximum input voltage (Max Input) was adjusted to observe its impact on the accuracy of our models. As depicted in Figure 13, the accuracy across the test set stabilizes for voltages above 6V for all models. Nevertheless, MNIST has a rather high initial precision for 1V, whereas CIFAR10 deteriorates to a 14% and CIFAR100 to a 1%. §.§.§ ADC resolution vs Tile Size The model was patched for each combination of hyperparameters, i. e., an ADC resolution going from 2 to 10 bits in intervals of two and tile shapes of 64×64, 128×128, 256×256. As seen in Figure 14, the smaller tile shape offers slightly better results consistently for all ADC resolutions for CIFAR10. 
This fact proves the necessity of choosing an appropriate crossbar array size for effective performance. §.§.§ Stochastic Effects vs Conduction States Random effects can be modeled by means of a sigma value that scales the stable resistance states of the memristor, representing the inherent variability of the device. This causes a loss in accuracy as mapped weights are altered. On top of these stochastic effects, we can apply different numbers of conductive states to observe which non-ideal behavior is more dominant and their interrelation. In Figure 15, we see the deterioration caused by increasing the variance, whereas the conductance states do not affect accuracy significantly. §.§.§ Device Faults Memristors are susceptible to getting stuck into their high and low resistance states, rendering them unusable. This can be modeled by introducing a probability of falling into these states. A 3D plot in Figure 16 was constructed for both MNIST and CIFAR10 to observe how these probabilities for HRS and LRS relate. LRS probability is clearly dominant for the same range of parameters as HRS. For a null probability of getting stuck at a LRS, HRS probability deteriorates the accuracy of the model less abruptly. Nevertheless, a 5% chance of stuck-at LRS effectively reduces accuracy to minimum. §.§.§ Endurance and Retention Effects Memristors deteriorate with cycles of set/reset operations and their conductance state slowly drifts. These non-idealities can be simulated using the retention and endurance Memtorch submodules. Then, for each 10-class model two simulations have been carried out obtaining the results displayed in Table 3. For retention, drift must be specified as it accounts for the shift in conductance with time. In our experiment we simulated conductance state drift of 0.1 per second over a period of 10,000 seconds. Next, to evaluate endurance, we modeled the decrease in accuracy resulting from 10,000 set/reset cycles. We also considered the temperature, as this parameter significantly influences deterioration over time, setting it at T=350 K. § ANALYSIS OF RESULTS §.§ PyTorch Models The three models used varied in shape, depth and amount of layers. This was beneficial to test the simulator for a range of inputs. MNIST offered great results due to the easy-to-learn patterns of the inputs, reaching a 98.73 % accuracy over the test set. On the other hand, the CIFAR10 dataset required a deeper network in order to show an acceptable accuracy of 83.34%. Finally, CIFAR100's dataset converged to 58.85%. This value is not satisfactory for computer vision tasks, nevertheless, the computational limitations of the GPU when patching models with Memtorch made impossible the improvement of the model. Improvements would involve more convolution layers, more hidden units or applying more advanced techniques. Residual layers or LeakyReLU are examples of beneficial additions for attaining higher accuracy rates. §.§ Maximum Input Voltage Observing the results in Figure 13, it can be inferred that Maximum Input Voltage (MIV) effectively alters inference. Excessively low voltages can lead to severe degradation of accuracy. Further, excessively high voltages can damage memristors due to thermal effects and electromigration, consequently decreasing performance and longevity. Thus, a reasonable value has to be achieved in order to map weights and perform MVMs properly. For the proposed models, it was seen how voltages over 6V did not improve accuracy as the value was constant from that point. 
Nonetheless, the accuracy after patching is bounded from above by the accuracy reached after training. This bound was almost attained, as the models fell approximately 1% below the maximum value, as seen in Table 4. For the remainder of the simulations, a voltage of 9V was maintained to ensure satisfactory accuracy while analyzing other parameters. Yet, for a realistic memristor, 9V is a rather high input voltage. Sufficiently high voltages could damage the crossbar array, increasing temperature and consuming more energy than intended. The max input voltage directly controls how weights are mapped and how input images enter through the DAC converter. Lower values may not be able to map the larger weights, so the mapping can overflow this threshold if a high enough value is not selected. In MNIST's case, the accuracy was already high at 1V, meaning that the model's performance is not as dependent on precise weight mapping. §.§ Tile shape vs ADC Resolution The ADC resolution turned out to be a bottleneck for accuracy at resolutions of less than 6 bits, as seen in Figure 14. Above that value, however, the accuracies did not vary noticeably. This is due to the fact that ADCs must convert the analog signal into a discrete digital one: the more bits of resolution, the more accurately the device can represent changes in the analog signal. Insufficient resolution may cause poorly translated output signals and consequently inaccurate labels in image classification tasks. On the other hand, the ADC resolution should be kept as low as possible, since its impact on energy consumption grows with the number of bits <cit.>. Therefore, 8 bits was used as a standardized value for the ADC resolution to ensure no impact on accuracy. Depending on the tile size of the memristor crossbar arrays, the best-fit algorithm mapping the network may divide the weight matrix into several arrays <cit.>. Although this is a more complex process, it has advantages. Errors are localized, whereas with bigger tiles a single error greatly decreases accuracy. In addition, smaller tiles allow for parallel processing. Small tiles come with disadvantages too, as they make the design much more complex: tiles must be interconnected and synchronized, so there is an inherent latency between tiles. In conclusion, a middle ground between over-partitioning the matrix and mapping it as a whole must be reached via testing. As seen during the simulations, the sizes used did not differ noticeably in terms of accuracy. Nevertheless, the smallest tile shape, 64×64, was very useful throughout the work as it allowed for much faster inference than the other two. A tendency can be observed in Table 5, where smaller tiles meant faster testing for MNIST, which is consistent with the parallelism that they allow while computing. This increase in speed was seen for all models. §.§ Stochastic Effects and Number of Conductance States Setting a finite number of conductance states did not alter accuracy, as can be seen by observing the values for σ=0 in Figure 15. Therefore, the calculated accuracy for each conductance state can be used as an extra iteration of the simulation to observe how a given value of σ yields a range of possible accuracies. The variability that is introduced models how devices display random behaviors based on material defects, thermal effects, etc. As a result, R_off and R_on are affected by this variance, which degrades performance.
In fact, they can overlap if σ is big enough, effectively sharing a conductance state <cit.>. Circling back to the number of conductance states, these discretize the values of the mapped weights. For both our models, two conductance steps were sufficient to quantize the weights. This can be interpreted as a result of using Memtorch on simple models that do not need more resolution to map their weights. A more complex network may be more susceptible to the number of conductance states available for mapping its weights, as a binary representation lacks the necessary range for precise mapping. Yet, binary devices are easier to manufacture, since multi-level memristors pose a harder challenge to produce on a large scale <cit.>. The stochastic effects were clearly visible: a higher σ resulted in more randomness in the results. For σ=100 in MNIST, the standard deviation was still low enough that the accuracies were not affected as drastically. §.§ Device Faults Pristine memristors undergo a process of electroforming before they can operate. If this step is not performed well, it can lead to memristors that do not operate properly <cit.>, for instance by getting fixed in a high resistance state if the conductive filaments are not formed. Memtorch accounts for this, as well as for LRS stuck-at faults (SAFs), by assigning probabilities to these faults. As seen in Figure 16, the LRS probability lowered accuracy much more than the HRS one. For instance, in Table 6, a 5% probability of an LRS SAF meant a decrease to the minimum accuracy. This can be explained by interpreting what these states represent. A high resistance state stores a small weight, as conductance is inversely proportional to resistance. Essentially, smaller weights have a lesser impact on the network's output than larger weights, which have the potential to activate additional neurons more effectively. §.§ Retention and Endurance Devices withstand a limited number of cycles, as their characteristics start to degrade with use. Memtorch allowed us to account for the drift of the stable high and low resistance values after a number of cycles. The number of set/reset cycles was added to the inputs; in our case, 10,000 cycles were sufficient to observe a degradation in performance. During set/reset cycles, the conductive filaments of the device are created and broken <cit.>. This process degrades the R_on/R_off ratio, effectively decreasing the dynamic range available to the memristor for mapping weights. As seen in the experiment, endurance effects reduced accuracy to 52.47% in MNIST and 36.01% in CIFAR10. Another element that decreases endurance is temperature, which can damage the device. High temperatures can alter the conductance, acting as a bias <cit.>. On top of that, they are responsible for material degradation. A memristor is a highly relevant device for in-memory computing since it is able to maintain its state; yet, its conductance does vary slightly with time. Memtorch provides a framework to simulate this drift by specifying the total time and the rate of change of the conductance. For 10,000 seconds and a conductance drift of 0.1, accuracy did not decrease noticeably for CIFAR10, as it was kept at 81.89%. In MNIST, accuracy was 3% lower than that of an ideal memristive network. Overall, retention effects are less prominent than endurance non-idealities, as many set/reset cycles may take place in under a second <cit.>; endurance therefore degrades performance more quickly than retention does.
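As a reference for reproducing the fault-injection part of this analysis, a compact sketch is given below; the function name and keyword arguments again follow the public Memtorch examples and are assumptions that should be verified against the installed version of the library.

import copy
import memtorch
from memtorch.bh.nonideality.NonIdeality import apply_nonidealities

# Inject stuck-at faults into an already patched model (patched_model from above).
# lrs_proportion / hrs_proportion are the probabilities of a device being stuck at the
# low / high resistance state; in our experiments they are swept over a grid of values.
faulty_model = apply_nonidealities(
    copy.deepcopy(patched_model),
    non_idealities=[memtorch.bh.nonideality.NonIdeality.DeviceFaults],
    lrs_proportion=0.05,        # example values, not the full sweep
    hrs_proportion=0.10,
    electroform_proportion=0.0,
)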
§ CONCLUSION This research advances the simulation of neuromorphic systems utilizing analog devices by presenting a methodology designed to assess nonlinear behaviors in CNNs using memristive devices. It involved designing three distinct CNN architectures, each named after and tailored to accommodate the varying complexities of their respective datasets: MNIST, CIFAR10, and CIFAR100. Five experiments were conducted using Memtorch as a simulation framework. Firstly, for ideal conditions the effect of varying MIV, tile shape and ADC resolution was tested to tune hyperparameters for later experiments. A MIV of 9V, a tile shape of 64×64 and 8 bits of resolution offered a loss of less than 1% in test accuracy with respect to its digital counterpart. Additionally, this set of parameters optimizes acceleration while keeping the precision loss bounded. Continuing with non-ideal behavior, a varying number of conductance states in relation to stochastic effects resulted in virtually no loss of accuracy for the range of conductance states. On the other hand, randomness deteriorated performance drastically, especially for CIFAR10, which was more susceptible than MNIST to inherent device variability. Device faults showed the landslide in accuracy loss between LRS and HRS SAFs, showing evidence of how bigger weights are more influential to outputs. Finally, endurance and retention were simulated to test the evolution in performance of memristive devices. Cycle to cycle degradation was dominant over retention, which caused less than 3% loss in accuracy. In conclusion, following the proposed methodology, near-digital accuracies were reached for the proposed models using Memtorch. The CNNs constructed were built with a set of hyperparameters that optimized accuracy and size. Then models were patched using a set of realistic physical parameters to analyze their behavior and assess the quality of the simulator for PIM with ReRAM devices. The results were satisfactory, strengthening the importance of analog computing in today's computational landscape. § BIBLIOGRAPHY 8 VN1 Zou, X., Xu, S., Chen, X., Yan, L., & Han, Y. (2021). Breaking the von Neumann bottleneck: architecture-level processing-in-memory technology. Science China Information Sciences, 64(6). https://doi.org/10.1007/s11432-020-3227-1 OGchua L. Chua, "Memristor-The missing circuit element," in IEEE Transactions on Circuit Theory, vol. 18, no. 5, pp. 507-519, September 1971, doi: 10.1109/TCT.1971.1083337. ifChua Chua, L. (2014). If it’s pinched it’s a memristor. Semiconductor Science and Technology, 29(10), 104001. https://doi.org/10.1088/0268-1242/29/10/104001 found Strukov, D. B., Snider, G. S., Stewart, D. R., & Williams, R. S. (2008). The missing memristor found. Nature, 453(7191), 80–83. https://doi.org/10.1038/nature06932 RRAM Jin, H., Liu, C., Liu, H., Luo, R., Xu, J., Mao, F., & Liao, X. (2021). ReHy: A ReRAM-based Digital/Analog Hybrid PIM Architecture for Accelerating CNN Training. IEEE Transactions on Parallel and Distributed Systems, 1–1. https://doi.org/10.1109/tpds.2021.3138087 CNN Yamashita, R., Nishio, M., Do, R. K. G., & Togashi, K. (2018). Convolutional Neural networks: an Overview and Application in Radiology. Insights into Imaging, 9(4), 611–629. https://doi.org/10.1007/s13244-018-0639-9 MEM American Scientist. (n.d.). Retrieved May 25, 2024, from https://wase.urz.uni-magdeburg.de/mertens/teaching/seminar/themen/amsci_memristor.pdf wmem Ke, Y., Yang, J., Huang, R., & Yang, Y. (2021). 
Nonlinearity in Memristors for Neuromorphic Dynamic Systems. Small Science, 2(1), 2100049–2100049. https://doi.org/10.1002/smsc.202100049 designsim Alex, P., Jiménez I., Roldán J., Del Barrio A., Botella G. & Jiménez F. (2023). Design and simulation of memristor-based neural networks. ArXiv (Cornell University). https://doi.org/10.48550/arxiv.2306.11678 memcha Jiménez-Gallo, I., Alex-Lázaro, P., Botella, G., Del Barrio, A. A., Maldonado, D., Roldán, J. B., & Jiménez-Molinos, F. (2023, June). Characterization and modeling of variability in commercial self-directed channel memristors. In 2023 14th Spanish Conference on Electron Devices (CDE) (pp. 1-4). IEEE. clusterfpaas Moreno, D. G., Del Barrio, A. A., Botella, G., & Hasler, J. (2021). A cluster of FPAAs to recognize images using neural networks. IEEE Transactions on Circuits and Systems II: Express Briefs, 68(11), 3391-3395. sims H. Liu, J. Xu, X. Liao, H. Jin, Y. Zhang and F. Mao, "A Simulation Framework for Memristor-Based Heterogeneous Computing Architectures," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 41, no. 12, pp. 5476-5488, Dec. 2022, doi: 10.1109/TCAD.2022.3152385 VTEAM ‌ Kvatinsky, S., Ramadan, M., Friedman, E. G., & Kolodny, A. (2015). VTEAM: A General Model for Voltage-Controlled Memristors. IEEE Transactions on Circuits and Systems II: Express Briefs, 62(8), 786–790. https://doi.org/10.1109/tcsii.2015.2433536 memT2 Lammie, C., Xiang, W., Linares-Barranco, B., & Rahimi Azghadi, M. (2022). MemTorch: An Open-source Simulation Framework for Memristive Deep Learning Systems. Neurocomputing, 485, 124–133. https://doi.org/10.1016/j.neucom.2022.02.043 endret Anteneh Gebregiorgis, Singh, A., Sumit Diware, Rajendra Bishnoi, & Said Hamdioui. (2022). Dealing with Non-Idealities in Memristor Based Computation-In-Memory Designs. Research Repository (Delft University of Technology). https://doi.org/10.1109/vlsi-soc54400.2022.9939618 temp Manea, P.-P., Chirag Sudarshan, Cüppers, F., & John Paul Strachan. (2023). Non-idealities and Design Solutions for Analog Memristor-Based Content-Addressable Memories. https://doi.org/10.1145/3611315.3633254 VN Eigenmann, R.,& Lilja, D.(1998).Von Neumann Computers. https://engineering.purdue.edu/ eigenman/reports/vN.pdf Neuro Yan, X., Qian, J. H., Sangwan, V. K., & Hersam, M. C. (2022). Progress and Challenges for Memtransistors in Neuromorphic Circuits and Systems. Advanced Materials, 2108025. https://doi.org/10.1002/adma.202108025 syntrans Yan, X., Zheng, Z., Sangwan, V.K. et al. Moiré synaptic transistor with room-temperature neuromorphic functionality. Nature 624, 551–556 (2023). https://doi.org/10.1038/s41586-023-06791-1 typesmem Xiao, Y., Jiang, B., Chen, X., Ke, S., Jin, Y., Wen, X., & Ye, C. (2023). A review of memristor: material and structure design, device performance, applications and prospects. 24(1). https://doi.org/10.1080/14686996.2022.2162323 MNSIM Xia, L., Li, B., Tang, T., Gu, P., Chen, P.-Y., Yu, S., Cao, Y., Wang, Y., Xie, Y., & Yang, H. (2017). MNSIM: Simulation Platform for Memristor-based Neuromorphic Computing System. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1–1. https://doi.org/10.1109/tcad.2017.2729466 DLRSIM Lin, M.-Y., Cheng, H.-Y., Lin, W.-T., Tzu Hsien Yang, Tseng, I-Ching., Yang, C.-L., Hu, H.-W., Chang, H.-S., Li, H.-P., & Chang, M.-F. (2018). DL-RSIM. International Conference on Computer Aided Design. https://doi.org/10.1145/3240765.3240800 NeuroSIM Chen, P.-Y., Peng, X., & Yu, S. (2018). 
NeuroSim: A Circuit-Level Macro Model for Benchmarking Neuro-Inspired Architectures in Online Learning. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 37(12), 3067–3080. https://doi.org/10.1109/tcad.2018.2789723 awhkit Rasch, M. J., Moreda, D., Gokmen, T., Gallo, M. L., Carta, F., Goldberg, C., Maghraoui, Kaoutar El, Sebastian, A., & Narayanan, V. (2021). A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays. ArXiv (Cornell University). https://doi.org/10.48550/arxiv.2104.02184 puma Ankit, A., Hajj, I. E., Chalamalasetti, S. R., Ndu, G., Foltin, M., Williams, R. S., Faraboschi, P., Hwu, W. W., Strachan, J. P., Roy, K., & Milojicic, D. S. (2019). PUMA. Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems. https://doi.org/10.1145/3297858.3304049 Zsim Sanchez, D., & Kozyrakis, C. (2013). ZSim: Fast and Accurate Microarchitectural Simulation of Thousand-Core Systems. MIT Web Domain. https://dspace.mit.edu/bitstream/handle/1721.1/90820/Sanchez_Z%20sim.pdf rxnn Jain, S., Sengupta, A., Roy, K., & Raghunathan, A. (2021). RxNN: A Framework for Evaluating Deep Neural Networks on Resistive Crossbars. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 40(2), 326–338. https://doi.org/10.1109/tcad.2020.3000185 graphrsim Chin-Fu Nien, Hsiao, Y.-J., Cheng, H.-Y., Wen, C.-Y., Ko, Y.-C., & Lin, C.-C. (2020). GraphRSim: A Joint Device-Algorithm Reliability Analysis for ReRAM-based Graph Processing. https://doi.org/10.23919/date48585.2020.9116232 memT1 Lammie, C., & Mostafa Rahimi Azghadi. (2020). MemTorch: A Simulation Framework for Deep Memristive Cross-Bar Architectures. https://doi.org/10.1109/iscas45731.2020.9180810 Batch Santurkar, S., Tsipras, D., Ilyas, A., & Mądry, A. (n.d.). How Does Batch Normalization Help Optimization? https://proceedings.neurips.cc/paper_files/paper/2018/file/905056 c1ac1dad141560467e0a99e1cf-Paper.pdf quant Roldán, J. B., Maldonado, D., Cantudo, A., Shen, Y., Zheng, W., & Lanza, M. (2023). Conductance quantization in h-BN memristors. Applied Physics Letters, 122(20). https://doi.org/10.1063/5.0147403 saf Xia, L., Wenqin Huangfu, Tang, T., Yin, X., Chakrabarty, K., Xie, Y., Wang, Y., & Yang, H. (2018). Stuck-at Fault Tolerance in RRAM Computing Systems. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 8(1), 102–115. https://doi.org/10.1109/jetcas.2017.2776980 ADC Aguirre, F., Sebastian, A., Le Gallo, M. et al. Hardware implementation of memristor-based artificial neural networks. Nat Commun 15, 1974 (2024). https://doi.org/10.1038/s41467-024-45670-9 Cond Huang, L., Diao, J., Nie, H., Wang, W., Li, Z., Li, Q., & Liu, H. (2021). Memristor Based Binary Convolutional Neural Network Architecture With Configurable Neurons. Frontiers in Neuroscience, 15. https://doi.org/10.3389/fnins.2021.639526 MNIST Y. LeCun, L. Bottou, Y. Bengio and P. Haffner: Gradient-Based Learning Applied to Document Recognition, Proceedings of the IEEE, 86(11):2278-2324, November 1998 CIF Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images. https://www.cs.toronto.edu/ kriz/learning-features-2009-TR.pdf ‌ ‌
http://arxiv.org/abs/2407.12934v1
20240717180803
Distributions and correlation properties of offshore wind speeds and wind speed increments
[ "So-Kumneth Sim", "Philipp Maass", "H. Eduardo Roman" ]
physics.ao-ph
[ "physics.ao-ph" ]
ssim@uos.de Universität Osnabrück,Fachbereich Mathematik/Informatik/Physik, Barbarastraße 7, D-49076 Osnabrück, Germany maass@uos.de Universität Osnabrück,Fachbereich Mathematik/Informatik/Physik, Barbarastraße 7, D-49076 Osnabrück, Germany hector.roman@unimib.it Department of Physics, University of Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy § ABSTRACT We determine distributions and correlation properties of offshore wind speeds and wind speed increments by analyzing wind data sampled with a resolution of one second for 20 months at different heights over the sea level in the North Sea. Distributions of horizontal wind speeds can be fitted to Weibull distributions with shape and scale parameters varying weakly with the vertical height separation. Kullback-Leibler divergences between distributions at different heights change with the squared logarithm of the height ratio. Cross-correlations between time derivates of wind speeds are long-term anticorrelated and their correlations functions satisfy sum rules. Distributions of horizontal wind speed increments change from a tent-like shape to a Gaussian with rising increment lag. A surprising peak occurs in the left tail of the increment distributions for lags in a range 10-200 km after applying the Taylor's hypothesis locally to transform time lags into distances. The peak is decisive in order to obtain an expected and observed linear scaling of third-order structure functions with distance. This suggests that it is an intrinsic feature of atmospheric turbulence. Distributions and correlation properties of offshore wind speeds and wind speed increments H. Eduardo Roman July 17, 2024 =========================================================================================== § INTRODUCTION Detailed investigations of wind features are important for testing and further developing theories of atmospheric turbulence. Due to ongoing improvements of measurement techniques <cit.> and increasing amount of available data, new insights are obtained by analyzing statistical properties of wind speeds over large time and length scales covering many orders of magnitude <cit.>. Better knowledge of wind properties is of high current interest also for applications, as, for example, for harvesting wind energy <cit.>, or for controlling pollutant dispersion in urban areas <cit.>. This includes improved forecasting of wind speeds <cit.> and a better understanding of wake effects <cit.>. In addition, spatial correlations on scales of several hundred meters are relevant for mechanical stabilities and performances of wind turbines with steadily growing size <cit.>. Stationary distributions of wind speeds u are commonly described by the Weibull distribution, where its shape and scale parameters vary with location and season <cit.>. Distributions of wind speed increments Δ_τ u=u(t)-u(t-τ) in a time interval τ change in shape strongly with τ. For small lags τ, they decay exponentially for both negative and positive u_τ, corresponding to a tent-like form in a linear-log representation <cit.>. This implies a nonlinear dependence of their moments with respect to the moment order, a behavior already considered by Kolmogorov for turbulent flows <cit.> and often referred to as intermittency <cit.>. It is commonly believed that the nonlinear variation of the moments with their order is universal. Different suggestions have been made to describe it <cit.>. 
The intermittent features of wind speeds are transferred to wind power feed-in to electricity grids, on the level of both a single wind turbine and an entire wind farm <cit.>. With increasing lag τ, the shape of increment distributions changes towards that of a Gaussian, the kurtosis becoming three when τ exceeds one or several days. It was suggested <cit.> that the probability density of wind speed increments conditioned on the mean wind speed is given by a linear weighting of Gaussian distributions with log-normally distributed standard deviations. By weighting with the Weibull distribution, the unconditioned probability density of wind speeds is obtained. To account for the shape variation of the increment distribution, the parameters of the log-normal distribution are considered to depend on τ <cit.>. In a complementary approach, commonly referred to as extended self-similarity (ESS) method <cit.>, the moments of absolute values of wind velocity increments are considered <cit.>. This approach is based on an exact result from the theory of three-dimensional (3D) isotropic turbulence for the third-order moment of spatial velocity increments (third-order structure function) and the assumption that this moment is approximately equal to the third-order moment of absolute values of spatial velocity increments. It was pointed out that a lack of 3D isotropy in the turbulent flow, as it is present for near-surface wind flows, narrows the applicability of the ESS method <cit.>. Nevertheless, studies of measured and model-generated time series of wind speeds showed that the ESS method is useful, with the remarkable result that the scaling of moments with their order is close to that found in isotropic 3D turbulence <cit.>. In a further application of the ESS method to wind speeds measured at different sites in the Netherlands, a universal intermittent behavior was reported <cit.>. Applications of the ESS method to other geological and geophysical phenomena revealed similar results as for wind <cit.>. In an earlier study, downward cascades of turbulent atmospheric flows were shown to be reflected in scaling features of spatial rainfall and river flow distributions <cit.>. Sufficient conditions for the applicability of the ESS methods to dynamical systems in general were derived in <cit.>. On land, wind speeds show pronounced daily oscillations <cit.> and their fluctuation properties depend on the local topography. For offshore wind, daily oscillations are not significant in correlation functions or spectral analysis <cit.>, and the fluctuation properties exhibit generic statistical features. In a recent study power spectra S(f), distinct frequency regimes were identified displaying characteristic behaviors in agreement with predictions from theories of atmospheric turbulence <cit.>. These frequency regimes correspond to time regimes, which in turn are reflected in spatial domains when applying the Taylor's hypothesis <cit.>. The distinct frequency regimes can be well identified by looking at the frequency-weighted spectrum, f S(f), as illustrated in Fig. <ref> for a measurement height h=90 m. At a frequency 1/τ_ p∼ 3×10^-5 Hz, a peak appears which originates from the motion of low and high-pressure areas in the atmosphere with typical spatial extent of a few thousand kilometers. This peak is also seen in spatial wind speed correlations obtained from aircraft measurements <cit.>. For frequencies below 1/τ_ p, wind speed fluctuations are uncorrelated. 
Accordingly, S(f) is constant corresponding to white-noise behavior. In the weight­ed spectrum shown in Fig. <ref>, f S∼ f at low f. For frequencies above 1/τ_ p, different physical mechanisms lead to correlations. They govern the spectral behavior in distinct regimes: below a frequency 1/τ_ 2D≃ 8×10^-5 Hz, f S∼ f^-2, due to an enstrophy cascade in two-dimensional geostrophic turbulence <cit.>. Above 1/τ_ 2D, a regime dominated by three-dimensional turbulence induced by gravity waves, f S∼ f^-2/3 occurs <cit.>.This is followed at higher frequencies by one referred to as wall turbulence regime, f S∼ const., where the air flow is strongly affected by the Earth's surface <cit.>. Depending on the height h of the wind speed measurement above the sea level, the wall turbulence regime terminates at a frequency 1/τ_ 3D∼v̅/h, where v̅ is the mean wind speed. For instance, at h=100 m, we have 1/τ_ 3D∼ 3×10^-2 Hz. For frequencies above 1/τ_ 3D, the spectrum is reflecting isotropic 3D turbulence in accordance with Kolmogorov's celebrated law, S(f)∼ f^-5/3 <cit.>, i.e. f S∼ f^-2/3. Regarding offshore wind data, previous studies have dealt with cross-correla­tions between equal-time horizontal wind speeds at different laterally separated points on the Earth's surface. The cross-correlations were found to decay exponentially for distances in the range 4-600 km, and to stay approximately constant beyond 600 km <cit.>. In the range 2-12 km, the coherence function also decays exponentially with distance <cit.>. Measurements performed at a station located at the western tip of the island Frøya in Norway provided information about cross-correlations between horizontal wind speeds in both horizontal and vertical directions to the Earth's surface <cit.>. For the horizontal direction, the cross-correlation coefficient between equal-time horizontal wind velocities was found to increase with height for velocity components parallel and perpendicular to the mean wind flow. For the vertical direction, the cross-correlation turned out to decay slower with the separation distance if the mean wind speed becomes higher. In this work, we investigate distribution functions of wind speeds and their increments, on different times scales, as well as time correlation functions between the respective quantities. In particular, we analyze cross-correlations between points separated by a vertical distance Δ h. § WIND SPEED DATA FROM FINO1 Wind speeds were measured at the FINO1 platform in the North Sea, which is located about 45 km north from the island Borkum [FINO1 project supported by the German Government through BMWi and PTJ. For further details on the data sampling and instrumentation, see .], see Fig. <ref>. The wind speed data were sampled by three-cup anemometers over 20 months, from September 2015 to April 2017, for eight different heights h between 30 m and 100 m. The time resolution is Δ t=1 s, yielding time series with N≅ 5×10^7 wind speeds for each height. Because a lightening rod can influence the measurement at h=100 m, we exclude the corresponding data from our analysis. The time series of wind speeds contain missing values, where the fraction of these values is less than 1% for all heights. For the quantities analyzed in this work, we have checked that no special treatment was necessary for interpolating missing values. When calculating any averaged quantity, results obtained for different interpolation schemes did not differ significantly from those obtained when omitting the missing values. 
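For orientation, a minimal sketch of the gap handling and of the Weibull fit used in the following sections is given below, assuming the 1 Hz series is available as a NumPy array with NaN marking missing samples. With no parameter bounds supplied, scipy's curve_fit uses the Levenberg-Marquardt algorithm employed for the fits reported later.

import numpy as np
from scipy.optimize import curve_fit

def weibull_pdf(u, k, lam):
    # Weibull density with shape k and scale lam
    return (k / lam) * (u / lam) ** (k - 1.0) * np.exp(-(u / lam) ** k)

def fit_weibull(u, bin_width=0.5):
    """Histogram the wind speeds (missing samples dropped) and fit the Weibull
    form by nonlinear least squares."""
    u = np.asarray(u, dtype=float)
    u = u[np.isfinite(u)]                        # <1% missing values are simply omitted
    bins = np.arange(0.0, u.max() + bin_width, bin_width)
    pdf, edges = np.histogram(u, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = pdf > 0
    (k, lam), _ = curve_fit(weibull_pdf, centers[mask], pdf[mask], p0=(2.0, 9.0))
    return k, lam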
§ THEORETICAL METHODS AND ANALYSIS §.§ Distributions of horizontal wind speeds at different heights PDFs of wind speeds are commonly described by the Weibull distribution <cit.> ψ(u) = k/λ(u/λ)^k-1e^-(u/λ)^k with λ the scale and k the shape parameter. These parameters are estimated by a nonlinear fit of Eq. (<ref>) to the PDF obtained from the wind speed data, where we have used an equidistant binning of the winds speeds with bin sizes in the range 0.42-0.52 m/s. According to the Weibull distribution, the relative standard deviation of wind speeds, also referred to as turbulence intensity, is independent of the scale parameter: σ_u/u̅=√(Γ^2(1+1/k)/Γ(1+2/k)-1) . Here, Γ(.) is the Gamma function, and u̅ and σ_u are the mean and standard deviation of ψ(u). To quantify the difference between wind speed distributions ψ(u;h_1) and ψ(u;h_2) at two different heights h_1 and h_2, we make use of the Kullback-Leibler divergence <cit.> 𝒟[ψ(u;h_1),ψ(u;h_2)] = ∫_0^∞du ψ(u;h_1)logψ(u;h_1)/ψ(u;h_2) = log(k_1/k_2λ_2^k_2/λ_1^k_1) +(k_1-k_2)(logλ_1 - γ/k_1) + [(λ_1/λ_2)^k_2Γ(k_2/k_1 + 1) - 1 ] , where γ is with the Euler-Mascheroni constant, and k_j and λ_j are the shape and scale parameters of ψ(u) at height h_j. Specifically, we consider the symmetrized Kullback-Leibler (or Jeffrey) divergence, D_ KL(h_1,h_2) =1/2(𝒟[ψ(u;h_1),ψ(u;h_2)]+𝒟[ψ(u;h_2),ψ(u;h_1)]) =1/2log[ (λ_1/λ_2)^k_1(λ_2/λ_1)^k_2]+ γ/2(k_1/k_2+k_2/k_1)-γ-1 +1/2(λ_1/λ_2)^k_2Γ(k_2/k_1+1) + 1/2(λ_2/λ_1)^k_1Γ(k_1/k_2+1) . §.§ Temporal cross-correlations of wind speeds and wind speed increments at different heights Temporal cross-correlations between fluctuations of a quantity X(t,h_1) at height h_1 and X(t,h_2) at height h_2 are investigated based on the normalized cross-correlation function C_X(t;h_1,h_2) = ⟨[X(t',h_1)-X̅(h_1)][X(t'+t,h_2)-X̅(h_2)]⟩_t'/σ_X(h_1)σ_X(h_2) where ⟨…⟩_t' denotes an average over t', and X̅(h)=⟨ X(t,h)⟩_t and σ_X(h)=[⟨ X^2(t,h)⟩_t-⟨ X(t,h)⟩_t^2]^1/2 are the mean and standard deviation of X at height h, respectively. For t=0, these cross-correlation functions equal Pearson correlation coefficients between equal-time fluctuations of X at different heights. For h_1=h_2=h, C_X(t;h,h) is the normalized autocorrelation function of X at height h. It holds |C_X(t;h,h)|≤ C_X(0;h,h)=1 and |C_X(t;h_1,h_2)|≤1 for h_1 h_2. We study cross-correlations of both the wind speeds [X(t,h)=u(t,h)] and of their time derivates ∂_t u(t,h)≅ u(t+Δ t,h)-u(t,h) [X(t,h)=u(t+Δ t,h)-u(t,h), Δ t=1s], i.e. horizontal wind accelerations. In power spectra of the offshore wind speeds, we have not seen any signatures of diurnals variations <cit.>, and annual variations are not relevant here for the considered time scales. We thus regard the wind speed to be a stationary process in our correlation analysis. Then, the cross-correlation functions C_u(t;h_1,h_2) and C_∂_t u(t;h_1,h_2) should fulfill the relation C_u(t;h_1,h_2)=C_u(0;h_1,h_2)- σ_∂_t u(h_1)σ_∂_t u(h_2)/σ_u(h_1)σ_u(h_2)∫_0^t t' (t-t') C_∂_t u(t';h_1,h_2) , and C_∂_t u(t;h_1,h_2) the two sum rules ∫_0^∞ t C_∂_t u(t;h_1,h_2) =0 , ∫_0^∞ t C_∂_t u(t;h_1,h_2) t =-[u(t,h_1)u(t,h_2)]/σ_∂_t u(h_1)σ_∂_t u(h_2) . Here, [u(t,h_1),u(t,h_2)]=⟨ [u(t,h_1)-u̅(h_1)][u(t,h_2)-u̅(h_2)]⟩ is the covariance between equal-time wind speeds at heights h_1 and h_2, which becomes the variance σ_u^2(h) for h_1=h_2=h. Equations (<ref>)-(<ref>) can be derived by using u(t,h)-u(0,h)=∫_0^t t' ∂ u(t',h)/∂ t' , and by taking into account that ∂_t u(t,h) is a stationary process. 
It thus holds ⟨ [u(t,h_1)-u(0,h_1)][u(t,h_2)-u(0,h_2)]⟩ =∫_0^t t_1 ∫_0^t t_2 ⟨∂ u(t_1,h_1)/∂ t_1∂ u(t_2,h_2)/∂ t_2⟩ =σ_∂_t u(h_1)σ_∂_t u(h_2)∫_0^t t_1 ∫_0^t t_2 C_∂_t u(|t_1-t_2|;h_1,h_2) =2σ_∂_t u(h_1)σ_∂_t u(h_2)[t∫_0^t t' C_∂_t u(t';h_1,h_2)-∫_0^t t' t'C_∂_t u(t';h_1,h_2) ] . The simplification of the double integral over t_1 and t_2 to the single integral over t in the last step is possible due to the fact that the integrand is a function of |t_1-t_2| only. Defining δ u(t,h)=u(t,h)-u̅(h), and considering C_u(t;h_1,h_2) to be even under time reversal, C_u(-t;h_1,h_2)=C_u(t;h_1,h_2), the left-hand side of Eq. (<ref>) can be rewritten as ⟨ [u(t,h_1)-u(0,h_1)][u(t,h_2)-u(0,h_2)]⟩ =⟨ [δ u(t,h_1)-δ u(0,h_1)][δ u(t,h_2)-δ u(0,h_2)]⟩ =2⟨δ u(0,h_1)δ u(0,h_2)⟩-2⟨δ u(0,h_1)δ u(t,h_2)⟩ =2[u(t,h_1),u(t,h_2)]-2σ_u(h_1)σ_u(h_2)C_u(t;h_1, h_2) . Equating this with the right-hand side of Eq. (<ref>) yields Eq. (<ref>). When considering the limit t→∞ in Eq. (<ref>), the left-hand side becomes two times the covariance [u(t,h_1),u(t,h_2)]. Accordingly, the first integral in the square bracket in Eq. (<ref>) must vanish, which yields Eq. (<ref>), and the remaining second integral in Eq. (<ref>) must satisfy Eq. (<ref>). To quantify the correlation between wind speed increments Δ_τ u(t,h)=u(t,h)-u(t-τ,h) , for time lags τ>1 s at different heights, we calculate the Pearson correlation coefficient ρ(τ;h_1,h_2)=C_Δ_τ u(0;h_1,h_2) =⟨Δ_τ u(t,h_1)Δ_τ u(t,h_2)⟩/√(⟨Δ_τ u^2(t,h_1)⟩⟨Δ_τ u^2(t,h_2)⟩) . §.§ Detrended fluctuation analysis of differences between wind speeds in vertical direction To study how differences Δ u_h_1,h_2(t) = u(t,h_1) - u(t,h_2) between horizontal wind speeds at different heights h_1 and h_2 fluctuate in time, we apply the detrended fluctuation analysis (DFA) <cit.>. This is a valuable method in particular in the presence of long-range power-law correlations. In the DFA, the Δ u_h_1,h_2(t) are considered as positions of a random walk. A sub- or superlinear power-law scaling of its mean squared displacement with time signals long-range power-law decays of the correlation function C(t; Δ u_h_1,h_2,Δ u_h_1,h_2). To uncover such correlations clearly, the DFA includes a detrending due to local biases. Specifically, we define a profile of the detrended random walk in a time window t by R_h_1,h_2(t')=∑_t^''=1^t'Δ u_h_1,h_2(t^'') -t'/t∑_t^''=1^t Δ u_h_1,h_2(t^'') , 1≤ t'≤ t , where the second term removes a linear bias in the time window. Note that R_h_1,h_2(0)=R_h_1,h_2(t)=0. The time series of total duration T is divided into M=⌊ T/t⌋ time windows of size t, where ⌊ x⌋ is the largest integer smaller than x. The mean of the profile in the mth window is R̅_h_1,h_2(m)= 1/t∑_t'=1^t R_h_1,h_2[(m-1)t+t'] ,m=1,…,M . It is a measure of the mean of detrended wind speed differences in the mth window on time scale t. Differently speaking, it is the mth position of the detrended random walk, when it is coarse-grained on time scale t. The variance F_h_1,h_2(t)=F_h_2,h_1(t)=1/M-1∑_m=1^M-1 [R̅_h_1,h_2(m+1)-R̅_h_1,h_2(m)]^2 of the displacements R̅_h_1,h_2(m+1)-R̅_h_1,h_2(m) of this random walk quantifies the strength of fluctuations of Δ u_h_1,h_2(t) on time scale t. It is called the fluctuation function in the DFA. For power-law behavior F_h_1,h_2(t) ∼ t^H , at long times with 0<H<1, H is commonly referred to as the Hurst exponent. Values 1/2< H<1 correspond to a long-range power lay decay ∼ t^2H-2 of the autocorrelation function ⟨∂_tΔ u_h_1,h_2(t')∂_tΔ u_h_1,h_2(t'+t)⟩_t'. 
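A compact NumPy sketch of this fluctuation analysis, following Eqs. (<ref>)-(<ref>) literally (window handling kept deliberately simple, and window sizes assumed to be much smaller than the series length), reads:

import numpy as np

def fluctuation_function(du, window_sizes):
    """Fluctuation function F(t) of the wind speed difference series du,
    computed from the linearly detrended, windowed profiles and the variance
    of the coarse-grained displacements defined above."""
    du = np.asarray(du, dtype=float)
    F = []
    for t in window_sizes:                       # integer window sizes, t << len(du)
        M = len(du) // t
        R_bar = np.empty(M)
        for m in range(M):
            seg = du[m * t:(m + 1) * t]
            prof = np.cumsum(seg) - np.arange(1, t + 1) / t * seg.sum()   # R(0)=R(t)=0
            R_bar[m] = prof.mean()               # coarse-grained random-walk position
        steps = np.diff(R_bar)
        F.append(np.sum(steps ** 2) / (M - 1))
    return np.array(F)

# A power-law fit F(t) ~ t^H in the chosen time regime, e.g. a linear regression
# of log F against log t, then yields the Hurst exponent of Eq. (<ref>).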
§.§ Distribution functions of horizontal wind speed increments at different Taylor distances For analyzing distributions of horizontal wind speed increments at a constant height, we make use of the Taylor's hypothesis locally by applying the method described in Ref. <cit.> to convert data from the time to the spatial domain. We first calculate the average wind speed u̅_t,t+τ in the interval [t,t+τ[. The time interval is then converted into a distance r_t,t+τ=u̅_t,t+τ τ, which we refer to as the Taylor distance for time lag τ at time t. For each pair of times (t, t+τ), we calculate the distance r=r_t,t+τ, and the corresponding wind speed increment Δ_r u≡ u_t-u_t+τ. The Δ_r u are then grouped with respect to equally spaced Taylor distances r on a logarithmic scale. The jth group contains all wind speed increments for r lying in the interval [r_j^-,r_j+1^-[, j=0,1,…, j_ max, where r_j^-=r_010^j/10 with r_0=1 m . The value of r assigned to the jth group is the geometric mean r_j=(r_j^-r_j+1^-)^1/2, corresponding to the arithmetic mean of logarithms, ln r_j=(ln r_j^-+ln r_j+1^-)/2. For each r_j, the wind speed increments Δ_r u are sampled in small bins Δ_r u to obtain the PDFs p_r(Δ_r u). §.§ Scaling behavior of moments of horizontal wind speed increment distributions Characteristic turbulence features can be identified in the scaling behavior of structure functions, which are moments of p_r(Δ_r u) of order q: D_q(r)=⟨ (Δ_r u)^q⟩ . The third-order structure function D_3 is expected to follow known behaviors in certain distance or time regimes. In the regime of 3D isotropic turbulence, D_3 is negative, proportional to the energy dissipation rate ϵ and grows linearly with r <cit.>: D_3(r)=-4/5ϵ r . Here, ϵ is the energy dissipation rate. For gravity-wave induced turbulence, Coriolis forces cannot be neglected, which leads to a modified amplitude in the linear scaling of the third-order structure function <cit.>: D_3(r)=-2 ϵ r . In quasi-2D geostrophic turbulence, D_3 is positive and scales as D_3(r)=1/4η r^3 , where η is the enstrophy dissipation rate <cit.>. In the regime of wall turbulence, a simple scaling of D_3 with r is not expected but measurements have shown that D_3 is negative in general <cit.>. To examine intermittency in the regime of 3D isotropic turbulence, we consider absolute values of wind speed increments. The corresponding moments are denoted as D̃_q(r) to distinguish them from the ones in Eq. (<ref>): D̃_q(r)=⟨ |Δ_r u|^q⟩ . As D_3∼ -r in the considered regime, one can expect that D̃_3∼ r also, which is indeed obtained. As for other moments, a self-similar structure of the turbulent field <cit.> would imply a monofractal behavior D̃_q∼D̃_3^q/3∼ r^q/3. However, due to intermittent behavior, D̃_q(r) ∼ r^ζ(q) , where ζ(q) is a nonlinear function of q with ζ(3)=1. Deviations from the linear dependence ζ(q) = q/3 give insight into the multifractal scaling due to intermittency. Based on a turbulent cascade model with log-normally distributed energy transfer rates, Kolmogorov <cit.> derived ζ(q) = q/3 - μ/18q(q-3) , where the intermittency correction μ lies in the range 0.2-0.5 <cit.>. Assuming high velocity fluctuations on small spatial scales, another nonlinear behavior ζ(q) = q/9 + 2[1 - (2/3)^q/3] , was proposed by She and Leveque <cit.>. A few other functions ζ(q) were suggested in the literature <cit.>. We will check our data against Eqs. (<ref>) and (<ref>). § RESULTS We discuss our results according to the subsectioning in the Theoretical Methods outlined in Sec. <ref>. 
§.§ Distributions of horizontal wind speeds at different heights PDFs of the wind speeds sampled for the two heights h=30 m and h=90 m (symbols) are displayed in Fig. <ref>(a) together with fits of the Weibull distribution to the data (solid lines). For the fitting, we applied the Levenberg-Marquardt algorithm <cit.>. The Weibull distribution gives a good description except in the tail regime where it underestimates the frequency of high horizontal wind speeds u≳18 m/s. The shape parameter k is close to two, in agreement with earlier studies reported in the literature <cit.>, see Fig. <ref>(b). The solid line in Fig. <ref>(b) indicates a weak logarithmic dependence of k on h, k(h)=k_0-αln(h/h_0) , where we have taken h_0=30 m as the reference height, k_0=k(h_0)≅2.0036, and α≅ 0.15. The shape parameter λ decreases approximately linearly with k, see the inset of Fig. <ref>(c). This yields λ(h)=λ_0-λ_ s [k(h)-k_0]=λ_0+αλ_ sln(h/h_0) for the h-dependence, where the coefficients are λ_0≅9.12 m/s and λ_ s≅3.33 m/s. The solid line in Fig. <ref>(c) marks the dependence on h according to Eq. (<ref>). The mean velocity u̅ and the turbulence intensity σ_u/u̅ are related to k and λ: u̅(h) =Γ(1+1k)λ≅√(π)/2[λ_0+αλ_ sln(h/h_0)] , σ_u/u̅ =√(Γ(1+2k)/Γ^2(1+1k)-1)≅√(4/π-1)≅0.52 . Here, we have evaluated the Gamma functions at k=2 in the approximate expressions. In agreement with earlier findings <cit.> and theories of wall turbulence <cit.>, Eq. (<ref>) gives a logarithmic increase of the mean wind speed u̅ with height h. Figure <ref> shows how the dissimilarity between the wind speed PDFs, as quantified by the symmetrized Kullback-Leibler divergence D_KL(h_0, h) [see Eq. (<ref>)], increases with h/h_0. Using Eqs. (<ref>) and (<ref>), we can express this dependence. To this end, D_KL(h_0,h) in Eq. (<ref>) is considered as a function of k, i.e. k_1, λ_1 in Eq. (<ref>) become k_0, λ_0, and k_2, λ_2 become k, λ(k), with λ(k) the linear function in Eq. (<ref>). As k is weakly varying with h, we can expand this function D_KL of k in a Taylor series around k_0. Up to second order, we obtain (D_KL=0 and D_KL/ k=0 for k=k_0), D_ KL(h_0,h) =1/2[(1-γ+λ_ s/λ_0 k_0^2)^2+π^2/6] Δ k^2/k_0^2 =α^2/2k_0^2[(1-γ+λ_ s/λ_0 k_0^2)^2+π^2/6] ln^2(h/h_0) . In the last step, we have inserted Δ k=-αln(h/h_0) from Eq. (<ref>). The solid line in Fig. <ref> demonstrates that D_ KL(h_0,h) is well described by Eq. (<ref>). §.§ Temporal cross-correlations of wind speeds and wind speed increments at different heights Kullback-Leibler divergences in the previous section quantify differences between distributions of the wind speeds, but do not allow conclusions regarding correlations. In this subsection, we discuss correlation properties of wind speeds, wind accelerations and wind speed increments. Figure <ref> shows 1-C_u(t;h_0,h_0+Δ h) for a reference height h_0=30 m and different vertical separations Δ h. For Δ h=0, the curve is proportional to the second-order structure function at height h_0, 1-C_u(t;h_0,h_0)=D_2(t,h_0)/2σ_u^2(h_0), where D_2(t,h_0)=⟨ u(t',h_0)-u(t'+t)⟩_t'. By plotting 1-C_u(t;h_0,h_0+Δ h) rather than C_u(t;h_0,h_0+Δ h) in the double-logarithmic representation, we see that relative deviations between the cross-correlation functions for different Δ h are strong at small times. For large times t≳ 10^4 s, they become negligible. The C_u(t;h_0,h_0+Δ h) decay very slowly, i.e. they slowly approach one in Fig. <ref> for large t. 
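Evaluating these correlation functions over lags spanning several decades for series of about 5×10^7 samples calls for an FFT-based estimator; a minimal sketch of such an estimator of the normalized cross-correlation, Eq. (<ref>), assuming gap-filled arrays, is:

import numpy as np

def cross_correlation(x, y, max_lag):
    """Normalized cross-correlation C_xy(t) for lags t = 0..max_lag,
    estimated with FFT-based convolution (biased estimator, divides by n)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(x)
    nfft = 2 ** int(np.ceil(np.log2(2 * n)))     # zero padding avoids circular wrap-around
    fx = np.fft.rfft(x, nfft)
    fy = np.fft.rfft(y, nfft)
    corr = np.fft.irfft(np.conj(fx) * fy, nfft)[: max_lag + 1]
    return corr / (n * np.std(x) * np.std(y))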
Correlation functions C_∂_t u(t;h_0,h_0+Δ h) between ∂_t u(t,h_0) and ∂_t u(t,h_0+Δ h) at different vertical separations Δ h=0, 10, 20 m are shown in Fig. <ref> for the same reference height h_0=30 m as in Fig. <ref>. We checked that these correlation functions satisfy the sum rule (<ref>). The relation (<ref>) between C_∂_t u(t;h_0,h_0+Δ h) and C_u(t;h_0,h_0+Δ h) holds true, as shown by the symbols in Fig. <ref>, which were determined by numerical calculations of the integral in Eq. (<ref>). The confirmation of the relation (<ref>) strongly indicates that the second sum rule (<ref>) is fulfilled also, although we could not verify it directly due to the lack of data C_∂_t u(t;h_0,h_0+Δ h) at times t≳ 4×10^3 s with sufficient accuracy. The autocorrelation function C_∂_t u(t;h_0,h_0) in Fig. <ref> rapidly decreases from its value one at t=0 to a negative minimal value and thereafter approaches zero smoothly. The functional form of C_∂_t u(t;h_0,h_0+Δ h) for Δ h>0 is similar, but |C_∂_t u(t;h_0,h_0+Δ h)| of the cross-correlations is reduced for small times t≲ 7 s compared to the autocorrelations. For larger times, relative deviations between the C_∂_t u(t;h_0,h_0+Δ h) for different Δ h are insignificant, see the inset of Fig. <ref>. The double-logarithmic plot in this inset suggests that the decay in the time regime 10 s≲ t≲ 10^2 s can be described by a power law, C_∂_t u(t;h_0,h_0+Δ h)∼ - A t^-ν , where A≅ 0.7 and ν≅1.9. Figure <ref>(a) shows the Pearson correlation coefficients ρ(τ;h_0,h_0+Δ h) defined in Eq. (<ref>) as a function of Δ h for various time lags τ and the same reference height h_0=30 m as in Figs. <ref> and <ref>. The coefficients quantify how wind speed increments Δ_τ u at vertical separation Δ h are correlated. In all cases, the data can be well fitted by an exponential decay, ρ(τ;h_0,h_0+Δ h)=exp[-Δ h/ξ(τ)] , where ξ(τ) is a τ-dependent correlation length. The correlation length ξ increases with τ, see Fig. <ref>(b). For τ-scales corresponding to the regime of 3D isotropic turbulence (see Introduction), ξ(τ) is of the order of 10 m, i.e. smaller than the measurement heights h. For τ≳10^5 s, ξ approaches 1 km, which is comparable to the planetary boundary layer height <cit.>. This height is the upper limit, where vertical convection has a significant influence on the spatial distribution of horizontal wind speeds. Interestingly, the variation of ξ with τ is similar to that of 1/κ(τ,h) for fixed h where κ(τ,h)=⟨Δ_τ u^4(t,h)⟩_t/⟨Δ_τ u^2(t,h)⟩_t^2 is the kurtosis of the wind speed increment distribution at height h. We have included the plot of 1/κ(τ,h_0) as a function of τ in Fig. <ref>(b) for the reference height h_0=30 m. For other h, the corresponding curve would look almost the same, i.e. there are no significant changes in the dependence of 1/κ(τ,h) on τ when varying h. The similarity in the behavior of ξ(τ) and 1/κ(τ,h) can be interpreted as follows: when κ(τ,h) is large, the probability for the occurrence of very large wind speed increments is high. Large wind speed increments disturb coherent turbulent structures and hence spatial correlations. Accordingly, one can expect ξ to become smaller for larger κ in agreement with the behavior seen in Fig. <ref>(b). §.§ Detrended fluctuation analysis of differences between wind speeds in vertical direction A representative example of a one-hour section of the time series of wind speed differences Δ u_h_1,h_2(t) = u(t,h_1) - u(t,h_2) for heights h_1=40 m and h_2=50 m is shown in Fig. <ref>(a). 
The patchy structure of this series suggests the presence of long-time correlations <cit.>. In Fig. <ref>(b), we display profiles R_h_1,h_2(t) of the detrended random walk calculated from Eq. (<ref>) for the time window in Fig. <ref>(a). The black line corresponds to the data in Fig. <ref>(a) [h_1=40 m, h_2=50 m]. Two further profiles for heights h_1=50 m, h_2=60 m (red) and h_1=50 m, h_2=80 m (blue) are shown also. Fluctuation functions F_h_1,h_2(t) calculated from the profiles, see Eq. (<ref>), are depicted in Fig. <ref> for h_1=50 m, and h_2=40 m, 60 m, and 80 m. One can identify two regimes, separated by a crossover time t_×≃ 110 s, where F(t) shows different power-law behavior. In the long-time regime t>t_×, fluctuation functions scale as F(t)∼ t^H_ l, with a Hurst exponent H_ l≃ 0.96 showing no significant variation with heights h_1, h_2. In the short-time regime, t<t_×, F(t)∼ t^H_ s, where the Hurst exponent H_ s is smaller than H_ l and depends on h_1 and h_2. The values vary in the range 0.73≤ H_S≤ 0.92, see Table  <ref>. The long-range correlations reflected by the large Hurst exponent H_ l≃0.96 demonstrate a high persistence of fluctuations of wind speed differences at different heights. The short-time exponents H_ s listed in Table <ref> indicate a weakly increasing persistence for larger height differences. This result is unexpected and should be considered with care as the relation between power-law behavior in F_h_1,h_2(t) and corresponding power-law decays of ⟨∂_tΔ u_h_1,h_2(t')∂_tΔ u_h_1,h_2(t'+t)⟩_t' is less robust than for the long-time asymptotic behavior. §.§ Distributions of horizontal wind speed increments on different length scales PDFs p_r(Δ_r u) of horizontal wind speed increments at 16 different Taylor distances r [see Sec. <ref>] are displayed in Fig. <ref> for the height h=90 m. At small r≲ 100 m, the distributions have a tent-like shape, which reflects the intermittent behavior of 3D isotropic turbulence <cit.>. In contrast to a Gaussian distribution, strong changes of wind speeds over distances r, or corresponding time intervals τ≃ r/u̅, are much more frequent, which has important consequences for wind park engineering and other wind-sensitive applications <cit.>. For larger r, the tent-like shape continues to be present in a core interval Δ_r u∈[Δ_r u^-,Δ_r u^+] around the maximum at Δ_r u=0. We define this core part by ∫_-∞^Δ_r u^- dx p_r(x)=∫_Δ_r u^+^∞ dx p_r(x)=1/g , i.e. Δ_r u^- and Δ_r u^+ are the first and last g-quantile. Taking g=1000, we can fit the PDFs in [Δ_r u^-,Δ_r u^+] by skewed stretched exponential functions, p_r(Δ_r u)∝{[ exp(-|Δ_r u/s_-|^α_-) , Δ_r u≤0 ,; exp(-|Δ_r u/s_+|^α_+) , Δ_r u>0 , ]. where the scale parameters s_± quantify the widths and the shape parameters α_± describe the decays for negative and positive Δ_r u, respectively. Corresponding least-square fits are shown as solid lines in Fig. <ref>. In Fig. <ref>, we display the fit parameters for h=90 m, and in the insets the parameters for h=30 m. For both h=30 m and 90 m, the parameters are nearly the same, demonstrating that the core part of the wind speed increment PDFs is almost independent of h. The scale parameters s_± in Fig. <ref>(a) increase with r, reflecting the broadening of the PDFs in Fig. <ref>. This broadening can also be quantified by the second-order structure functions D_2(r), see Eq. (<ref>). In fact, by assuming Eq. 
(<ref>) to describe the PDFs for all Δ_r u, one finds D_2=a^2(s_-^2+b^2s_+^2), where b^2=α_+Γ(2/α_-)/[α_-Γ(2/α_+)]≃ 1, and a^2=[b^2 Γ(1/α_+)/α_+ + Γ(1/α_-)/α_-]/[b^2 Γ(3/α_+)/α_+ + Γ(3/α_-)/α_-]. The shape parameters α_± in Fig. <ref>(b) increase from values α_± < 1 for small r, to values α_±≃ 2 for large r. Accordingly, deviations from a Gaussian distribution become less pronounced for larger distances r, as can also be seen from the change of shape of the core part from tent-like to parabolic in Fig. <ref>. In addition to the features describing the core part of the distributions, bumps occur in both the left and right tail of the PDFs in Fig. <ref>, see, for example, the results for r=4 km and r=1122 km. The bumps in the right tail appear for r≳ 400 km and are small. The bumps in the left tail become more pronounced with increasing r and develop into a peak around Δ_r u≃ -25 m/s, see the PDFs in the range 10 km <r<450 km. When the Taylor distance changes from r=447 km to r=1122 km, the peak suddenly disappears. Interestingly, the second peak at negative Δ_r u has a strong impact on the change of sign in the third-order structure function D_3(r) and its scaling behavior in the regime 10-200 km. This is demonstrated in Fig. <ref>, where in Fig. <ref>(a) D_3(r) is plotted for the core part, assuming Eq. (<ref>) to provide a description for all Δ_r u, while in Fig. <ref>(b), D_3 was calculated for all data including the bumps in the tails of the PDF. In both Figs. <ref>(a) and (b), D_3 changes sign, but in Fig. <ref>(a) this change occurs at a significantly smaller r≃ 100 km, compared to r≃ 500 km in Fig. <ref>(b). In view of previous results reported in the literature <cit.>, the value r≃ 500 km is more reliable, indicating that the tail features of the PDF have a decisive influence. This finding is further corroborated when analyzing the scaling behavior of D_3, where D_3=-2ϵ r is expected for the regime of 3D turbulence induced by gravity waves [Eq. (<ref>)], and D_3=η r^3/4 for the regime of geostrophic turbulence [Eq. (<ref>)], see also Fig. <ref>. In the insets of Figs. <ref>(a) and (b), D_3(r) is plotted in double-logarithmic representation. For the 3D turbulence regime induced by gravity waves, the expected scaling D_3∼ -r is clearly visible in the inset of Fig. <ref>(b), while it cannot be seen in the inset of Fig. <ref>(a). The solid lines with slope three indicate the rise of D_3 expected in the regime of geostrophic turbulence. We denote the wind speed increments where p_r(Δ_r u) exhibits a local maximum and a local minimum in its left tail as (Δ_r u)_ max and (Δ_r u)_ min, respectively, see the inset of Fig. <ref>(a). How (Δ_r u)_ max varies with the Taylor distance r for various heights h is shown in Fig. <ref>(a). With increasing h, (Δ_r u)_ max decreases, i.e. the position of the second peak shifts further into the left tail part of p_r(Δ_r u). For all h, (Δ_r u)_ max becomes minimal at a Taylor distance r_max≃250 km. Interestingly, this value coincides with the minimum of the third-order structure function D_3, see Fig. <ref>(b). The data suggest that increments Δ_r u<(Δ_r u)_ min result from a different physical mechanism than increments Δ_r u>(Δ_r u)_ min, which are distributed according to the core part of p_r(Δ_r u). The probability Prob[Δ_r u ≤ (Δ_r u)_ min] for increments Δ_r u ≤ (Δ_r u)_ min to occur is depicted in Fig. <ref>(b). It increases approximately linearly with r.
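A minimal sketch of how the core-part fit described above can be carried out is given below; it assumes a simple histogram estimate of the PDF, the g-quantile definition of the core, and unweighted least squares with scipy, so it illustrates the procedure rather than reproducing the exact fits of this study.

import numpy as np
from scipy.optimize import curve_fit

def skewed_stretched_exp(du, c, s_minus, a_minus, s_plus, a_plus):
    # Two-sided stretched exponential used for the core part of p_r
    du = np.asarray(du, dtype=float)
    return np.where(du <= 0.0,
                    c * np.exp(-np.abs(du / s_minus) ** a_minus),
                    c * np.exp(-np.abs(du / s_plus) ** a_plus))

def fit_core(increments, g=1000, bins=200):
    # Restrict the fit to the interval between the first and last g-quantile
    lo, hi = np.quantile(increments, [1.0 / g, 1.0 - 1.0 / g])
    core = increments[(increments >= lo) & (increments <= hi)]
    pdf, edges = np.histogram(core, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    mask = pdf > 0
    p0 = (pdf.max(), core.std(), 1.0, core.std(), 1.0)
    popt, _ = curve_fit(skewed_stretched_exp, centers[mask], pdf[mask],
                        p0=p0, maxfev=20000)
    return popt  # c, s_-, alpha_-, s_+, alpha_+

# Increments for a lag tau (Taylor distance r = mean speed x tau):
# d_u = u[tau:] - u[:-tau]
# c, s_m, a_m, s_p, a_p = fit_core(d_u)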
To summarize, our analysis shows that the peak in the left tail of distributions of horizontal wind speed increments is decisive for obtaining the scaling features of D_3(r) according to gravity-wave-induced turbulence, Eq. (<ref>). The position of this peak agrees with the minimum of D_3(r). These findings suggest that the peak in the left tail is not an artifact but an important, so-far unexplored feature. That it has not been reported yet could be due to the lack of a sufficient amount of data for resolving it in the tail part of the PDFs. §.§ Scaling behavior of moments of horizontal wind speed increment distributions For Taylor distances r≲ h, i.e. in the regime of 3D isotropic turbulence, intermittency shows up in large values of wind speed increments that are much more frequent than expected from Gaussian statistics. The intermittency is reflected in multifractal behavior, given by the nonlinear q-dependence of the exponents ζ(q) in Eq. (<ref>). To calculate these exponents, we examine the dependence of the absolute value structure functions D̃_q on Taylor distances r. Figure <ref>(a) shows D̃_q(r) for the measurement height h=90 m and various orders q. The solid lines in the figure are least-square fits of power laws to the data in the range r≲ 90 m of 3D isotropic turbulence. The slopes of the lines in the double-logarithmic plot yield the exponents ζ(q), see Eq. (<ref>). They are shown in Fig. <ref>(b). The linear monofractal behavior ζ(q)=q/3 [K41] and the nonlinear multifractal behavior according to Eqs. (<ref>) [K61] and (<ref>) [SL] are represented by lines. Both equations yield a concave curvature of ζ(q) in agreement with the data, but neither of the expressions gives ζ(q) accurately. § CONCLUSIONS We have analyzed offshore horizontal wind speeds sampled in the North Sea at heights 30-90 m above sea level over a time period of 20 months with a time resolution of one second. Distributions and correlation properties were investigated for wind speeds and wind speed increments. Cross-correlation functions were determined with respect to time and height differences. For equal-time wind speed differences at different heights, we carried out a detrended fluctuation analysis. In the analysis of distributions of wind speed increments and their moments, we considered a mapping of time lags τ onto spatial distances r by applying Taylor's hypothesis locally, i.e. by using the mean wind speed u̅_τ in every time window given by the lag τ. We call the resulting r=u̅_ττ the Taylor distance. Distributions of horizontal wind speeds can be described by the Weibull form. This form, however, underestimates the frequency of high wind speeds ≳ 18 m/s in the tail region. Both the shape parameter k and the scale parameter λ of the Weibull distribution vary weakly with the height h above sea level. The shape parameter k is about two and decreases logarithmically with h. The scale parameter λ decreases approximately linearly with k. These results yield a height-independent turbulence intensity of about one half and a logarithmic increase of the mean wind speed with h, in agreement with previously reported findings and theories of wall turbulence. As a measure of the differences between wind speed distributions at different heights, we calculated the symmetrized Kullback-Leibler divergence. It increases as ∼ln^2(h/h_0) with h, where h_0 is a reference height. Cross-correlation functions between wind speeds at different heights decay very slowly.
Relative deviations between them are significant for small times, while for large times t≳ 10^4 s they are almost equal. A detrended fluctuation analysis of differences between equal-time wind speeds at different heights showed short- and long-time scaling regimes with Hurst exponents of about 0.75 and close to one. These large Hurst exponents reflect strong persistent correlations in the temporal changes of equal-time wind speed differences. Cross-correlation functions between time derivatives ∂_t u(t,h) of wind speeds, numerically represented by wind speed increments over the smallest time interval Δ t=1 s, show anticorrelations. They slowly decay towards zero from negative values. At short times, cross-correlations between ∂_t u(t,h) and ∂_t u(t,h + Δ h) quickly drop to negative values, and their magnitudes become much smaller for larger height differences Δ h. For larger times t≳ 10^2 s, no significant variation with the vertical separation Δ h occurs. Both the auto- and cross-correlation functions satisfy sum rules required by the fact that the mean kinetic wind energy should not diverge, i.e. by the finite variance of the wind speed distributions. Equal-time correlations between wind speed increments of time lag τ at different heights decay exponentially with the height separation Δ h, where the correlation length ξ of the exponential decay increases monotonically with τ. Moments of the distribution of wind speed increments show intermittency corrections in the regime of 3D isotropic turbulence. The intermittency is reflected in a multifractal scaling behavior of the moments with the Taylor distance. Models suggested for the multifractal scaling give an approximate but not perfect match to the data. Distributions of wind speed increments Δ_r u at Taylor distances r have a tent-like shape in the core part around Δ_r u=0 for small r. With increasing r, their shape in the core part changes and approaches a Gaussian for large r≳10^3 km. A striking feature in the distribution of wind speed increments is the occurrence of a peak in their left tails for Taylor distances in the range 10-200 km. In this regime, turbulent wind fields are commonly believed to be governed by gravity waves. Interestingly, the expected and observed linear scaling D_3(r)∼ -r of the third-order structure function breaks down in this regime when the peak in the left tails is ignored, i.e. when using the shape of the core part to represent the full distributions including their tails. This suggests that the peak is an intrinsic feature of atmospheric turbulence. It will be interesting to see whether further investigations can corroborate this finding. Our thorough analysis of offshore wind speed data allows for testing theoretical approaches against empirical findings and thus lays a suitable basis for improvements of wind speed modeling. We thank J. Peinke for valuable discussions and M. Wächter for helping us with the data acquisition. We are grateful to the BMWI (Bundesministerium für Wirtschaft und Energie) and the PTJ (Projektträger Jülich) for providing the data of the offshore measurements at the FINO1 platform. We acknowledge use of a high-performance computing cluster funded by the Deutsche Forschungsgemeinschaft (Project No. 456666331).
[Suomi and Vihma(2018)]Suomi/Vihma:2018 author author I. Suomi and author T. Vihma, title title Wind gust measurement techniques - from traditional anemometry to new possibilities, https://doi.org/10.3390/s18041300 journal journal Sensors volume 18, pages 1300 (year 2018)NoStop [van Ramshorst et al.(2020)van Ramshorst, Coenders-Gerrits, Schilperoort, van de Wiel, Izett, Selker, Higgins, Savenije, and van de Giesen]vanRamshorst/etal:2020 author author J. G. V. van Ramshorst, author M. Coenders-Gerrits, author B. Schilperoort, author B. J. H. van de Wiel, author J. G. Izett, author J. S. Selker, author C. W. Higgins, author H. H. G. Savenije, and author N. C. van de Giesen, title title Revisiting wind speed measurements using actively heated fiber optics: a wind tunnel study, https://doi.org/10.5194/amt-13-5423-2020 journal journal Atmos. Meas. Tech. volume 13, pages 5423 (year 2020)NoStop [Larsén et al.(2016)Larsén, Larsen, and Petersen]Larsen/etal:2016 author author X. G. Larsén, author S. E. Larsen, and author E. L. Petersen, title title Full-scale spectrum of boundary-layer winds, https://doi.org/10.1007/s10546-016-0129-x journal journal Boundary-Layer Meteorol. volume 159, pages 349 (year 2016)NoStop [Sim et al.(2023)Sim, Peinke, and Maass]Sim/etal:2023 author author S.-K. Sim, author J. Peinke, and author P. Maass, title title Signatures of geostrophic turbulence in power spectra and third-order structure function of offshore wind speed fluctuations, https://doi.org/10.1038/s41598-023-40450-9 journal journal Sci. Rep. volume 13, pages 13411 (year 2023)NoStop [Veers et al.(2019)Veers, Dykes, Lantz, Barth, Bottasso, Carlson, Clifton, Green, Green, Holttinen, Laird, Lehtomäki, Lundquist, Manwell, Marquis, Meneveau, Moriarty, Munduate, Muskulus, Naughton, Pao, Paquette, Peinke, Robertson, Sanz Rodrigo, Sempreviva, Smith, Tuohy, and Wiser]Veers/etal:2019 author author P. Veers, author K. Dykes, author E. Lantz, author S. Barth, author C. L. Bottasso, author O. Carlson, author A. Clifton, author J. Green, author P. Green, author H. Holttinen, author D. Laird, author V. Lehtomäki, author J. K. Lundquist, author J. Manwell, author M. Marquis, author C. Meneveau, author P. Moriarty, author X. Munduate, author M. Muskulus, author J. Naughton, author L. Pao, author J. Paquette, author J. Peinke, author A. Robertson, author J. Sanz Rodrigo, author A. M. Sempreviva, author J. C. Smith, author E. Tuohy, and author R. Wiser, title title Grand challenges in the science of wind energy, https://doi.org/10.1126/science.aau2027 journal journal Science volume 366, pages eaau2027 (year 2019)NoStop [Leelőssy et al.(2014)Leelőssy, Molnár Jr., Izsák, Havasi, Lagzi, and Mészáros]Leelossy/etal:2014 author author Á. Leelőssy, author F. Molnár Jr., author F. Izsák, author Á. Havasi, author I. Lagzi, and author R. Mészáros, title title Dispersion modeling of air pollutants in the atmosphere: a review, https://doi.org/doi:10.2478/s13533-012-0188-6 journal journal Cent. Eur. J. Geosci. series Open Geosciences, volume 6, pages 257 (year 2014)NoStop [Tan and Ni(2022)]Tan/Ni:2022 author author S. Tan and author R. Ni, title title Universality and intermittency of pair dispersion in turbulence, https://doi.org/10.1103/PhysRevLett.128.114502 journal journal Phys. Rev.
Lett. volume 128, pages 114502 (year 2022)NoStop [Costa et al.(2008)Costa, Crespo, Navarro, Lizcano, Madsen, and Feitosa]Costa/etal:2008 author author A. Costa, author A. Crespo, author J. Navarro, author G. Lizcano, author H. Madsen, and author E. Feitosa, title title A review on the young history of the wind power short-term prediction, https://doi.org/https://doi.org/10.1016/j.rser.2007.01.015 journal journal Renewable Sustainable Energy Rev. volume 12, pages 1725 (year 2008)NoStop [Santhosh et al.(2020)Santhosh, Venkaiah, and Vinod Kumar]Santhosh/etal:2020 author author M. Santhosh, author C. Venkaiah, and author D. M. Vinod Kumar, title title Current advances and approaches in wind speed and wind power forecasting for improved renewable energy integration: A review, https://doi.org/https://doi.org/10.1002/eng2.12178 journal journal Eng. Rep. volume 2, pages e12178 (year 2020)NoStop [Hanifi et al.(2020)Hanifi, Liu, Lin, and Lotfian]Hanifi/etal:2020 author author S. Hanifi, author X. Liu, author Z. Lin, and author S. Lotfian, title title A critical review of wind power forecasting methods – past, present and future, https://doi.org/10.3390/en13153764 journal journal Energies volume 13, pages 3764 (year 2020)NoStop [Tawn and Browell(2022)]Tawn/Browell:2022 author author R. Tawn and author J. Browell, title title A review of very short-term wind and solar power forecasting, https://doi.org/https://doi.org/10.1016/j.rser.2021.111758 journal journal Renewable Sustainable Energy Rev. volume 153, pages 111758 (year 2022)NoStop [Sanderse et al.(2011)Sanderse, van der Pijl, and Koren]Sanderse/etal:2011 author author B. Sanderse, author S. van der Pijl, and author B. Koren, title title Review of computational fluid dynamics for wind turbine wake aerodynamics, https://doi.org/https://doi.org/10.1002/we.458 journal journal Wind Energy volume 14, pages 799 (year 2011)NoStop [Archer et al.(2018)Archer, Vasel-Be-Hagh, Yan, Wu, Pan, Brodie, and Maguire]Archer/etal:2018 author author C. L. Archer, author A. Vasel-Be-Hagh, author C. Yan, author S. Wu, author Y. Pan, author J. F. Brodie, and author A. E. Maguire, title title Review and evaluation of wake loss models for wind energy applications, https://doi.org/https://doi.org/10.1016/j.apenergy.2018.05.085 journal journal Appl. Energy volume 226, pages 1187 (year 2018)NoStop [Nash et al.(2021)Nash, Nouri, and Vasel-Be-Hagh]Nash/etal:2021 author author R. Nash, author R. Nouri, and author A. Vasel-Be-Hagh, title title Wind turbine wake control strategies: A review and concept proposal, https://doi.org/https://doi.org/10.1016/j.enconman.2021.114581 journal journal Energy Convers. Manage. volume 245, pages 114581 (year 2021)NoStop [Houck(2022)]Houck:2022 author author D. R. Houck, title title Review of wake management techniques for wind turbines, https://doi.org/https://doi.org/10.1002/we.2668 journal journal Wind Energy volume 25, pages 195 (year 2022)NoStop [Porté-Agel et al.(2020)Porté-Agel, Bastankhah, and Shamsoddin]Porte-Agel/etal:2020 author author F. Porté-Agel, author M. Bastankhah, and author S. Shamsoddin, title title Wind-turbine and wind-farm flows: A review, https://doi.org/10.1007/s10546-019-00473-0 journal journal Boundary-Layer Meteorol. volume 174, pages 1 (year 2020)NoStop [Bošnjaković et al.(2022)Bošnjaković, Katinić, Santa, and Marić]Bosnjakovic/etal:2022 author author M. Bošnjaković, author M. Katinić, author R. Santa, and author D. 
Marić, title title Wind turbine technology trends, https://doi.org/10.3390/app12178653 journal journal Appl. Sci. volume 12, pages 8653 (year 2022)NoStop [Johnson(1998)]Johnson:1998 author author G. Johnson, @noop title Wind Energy Systems (publisher Prentice-Hall, address Englewood Cliffs, year 1998)NoStop [Lun and Lam(2000)]Lun/Lam:2000 author author I. Y. Lun and author J. C. Lam, title title A study of Weibull parameters using long-term wind observations, https://doi.org/https://doi.org/10.1016/S0960-1481(99)00103-2 journal journal Renewable Energy volume 20, pages 145 (year 2000)NoStop [Shu and Jesson(2021)]Shu/Jesson:2021 author author Z. R. Shu and author M. Jesson, title title Estimation of Weibull parameters for wind energy analysis across the UK, https://doi.org/10.1063/5.0038001 journal journal J. Renewable Sustainable Energy volume 13, pages 023303 (year 2021)NoStop [Castaing et al.(1990)Castaing, Gagne, and Hopfinger]Castaing/etal:1990 author author B. Castaing, author Y. Gagne, and author E. J. Hopfinger, title title Velocity probability density functions of high Reynolds number turbulence, https://doi.org/https://doi.org/10.1016/0167-2789(90)90035-N journal journal Physica D volume 46, pages 177 (year 1990)NoStop [Böttcher et al.(2007)Böttcher, Barth, and Peinke]Boettcher/etal:2007 author author F. Böttcher, author S. Barth, and author J. Peinke, title title Small and large scale fluctuations in atmospheric wind speeds, https://doi.org/10.1007/s00477-006-0065-2 journal journal Stochastic Environ. Res. Risk Assess. volume 21, pages 299 (year 2007)NoStop [Milan et al.(2013)Milan, Wächter, and Peinke]Milan/etal:2010 author author P. Milan, author M. Wächter, and author J. Peinke, title title Turbulent character of wind energy, https://doi.org/10.1103/PhysRevLett.110.138701 journal journal Phys. Rev. Lett. volume 110, pages 138701 (year 2013)NoStop [Kolmogorov(1962)]Kolmogorov:1962 author author A. N. Kolmogorov, title title A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number, https://doi.org/DOI: 10.1017/S0022112062000518 journal journal J. Fluid Mech. volume 13, pages 82 (year 1962)NoStop [Mollo-Christensen(1973)]Mollo-Christensen:1973 author author E. Mollo-Christensen, title title Intermittency in large-scale turbulent flows, https://doi.org/https://doi.org/10.1146/annurev.fl.05.010173.000533 journal journal Annu. Rev. Fluid Mech. volume 5, pages 101 (year 1973)NoStop [Lohse and Grossmann(1993)]Lohse/Grossmann:1993 author author D. Lohse and author S. Grossmann, title title Intermittency in turbulence, https://doi.org/https://doi.org/10.1016/0378-4371(93)90382-E journal journal Physica A volume 194, pages 519 (year 1993)NoStop [Benzi and Vulpiani(2022)]Benzi/Vulpiani:2022 author author R. Benzi and author A. Vulpiani, title title Multifractal approach to fully developed turbulence, https://doi.org/10.1007/s12210-022-01078-5 journal journal Rend. Lincei Sci. Fis. Nat. volume 33, pages 471 (year 2022)NoStop [Johnson and Wilczek(2024)]Johnson/Wilczek:2024 author author P. L. Johnson and author M. Wilczek, title title Multiscale velocity gradients in turbulence, https://doi.org/10.1146/annurev-fluid-121021-031431 journal journal Annu. Rev. Fluid Mech. volume 56, pages 463 (year 2024)NoStop [She and Leveque(1994)]She/Leveque:1994 author author Z.-S. She and author E. 
Leveque, title title Universal scaling laws in fully developed turbulence, https://doi.org/10.1103/PhysRevLett.72.336 journal journal Phys. Rev. Lett. volume 72, pages 336 (year 1994)NoStop [Benzi et al.(1993)Benzi, Ciliberto, Tripiccione, Baudet, Massaioli, and Succi]Benzi/etal:1993 author author R. Benzi, author S. Ciliberto, author R. Tripiccione, author C. Baudet, author F. Massaioli, and author S. Succi, title title Extended self-similarity in turbulent flows, https://doi.org/10.1103/PhysRevE.48.R29 journal journal Phys. Rev. E volume 48, pages R29 (year 1993)NoStop [Grossmann et al.(1997)Grossmann, Lohse, and Reeh]Grossmann/etal:1997 author author S. Grossmann, author D. Lohse, and author A. Reeh, title title Application of extended self-similarity in turbulence, https://doi.org/10.1103/PhysRevE.56.5473 journal journal Phys. Rev. E volume 56, pages 5473 (year 1997)NoStop [Amati et al.(1997)Amati, Benzi, and Succi]Amati/etal:1997 author author G. Amati, author R. Benzi, and author S. Succi, title title Extended self-similarity in boundary layer turbulence, https://doi.org/10.1103/PhysRevE.55.6985 journal journal Phys. Rev. E volume 55, pages 6985 (year 1997)NoStop [Vindel et al.(2008)Vindel, Yagüe, and Redondo]Vindel/etal:2008 author author J. M. Vindel, author C. Yagüe, and author J. M. Redondo, title title Structure function analysis and intermittency in the atmospheric boundary layer, https://doi.org/10.5194/npg-15-915-2008 journal journal Nonlin. Processes Geophys. volume 15, pages 915 (year 2008)NoStop [Kiliyanpilakkil and Basu(2016)]Kiliyanpilakkil/Basu:2016 author author V. P. Kiliyanpilakkil and author S. Basu, title title Extended self-similarity of atmospheric boundary layer wind fields in mesoscale regime: Is it real?, https://doi.org/10.1209/0295-5075/112/64003 journal journal Europhys. Lett. volume 112, pages 64003 (year 2016)NoStop [Baïle and Muzy(2010)]Baile/Muzy:2010 author author R. Baïle and author J.-F. Muzy, title title Spatial intermittency of surface layer wind fluctuations at mesoscale range, https://doi.org/10.1103/PhysRevLett.105.254501 journal journal Phys. Rev. Lett. volume 105, pages 254501 (year 2010)NoStop [Nikora and Goring(2001)]Nikora/Goring:2001 author author V. I. Nikora and author D. G. Goring, title title Extended self-similarity in geophysical and geological applications, https://doi.org/10.1023/A:1007630021716 journal journal Math. Geol. volume 33, pages 251 (year 2001)NoStop [Gupta and Waymire(1990)]Gupta/Waymire:1990 author author V. K. Gupta and author E. Waymire, title title Multiscaling properties of spatial rainfall and river flow distributions, https://doi.org/https://doi.org/10.1029/JD095iD03p01999 journal journal J. Geophys. Res.: Atmos. volume 95, pages 1999 (year 1990)NoStop [Schleiss et al.(2011)Schleiss, Jaffrain, and Berne]Schleiss/etal:2011 author author M. Schleiss, author J. Jaffrain, and author A. Berne, title title Statistical analysis of rainfall intermittency at small spatial and temporal scales, https://doi.org/https://doi.org/10.1029/2011GL049000 journal journal Geophys. Res. Lett. volume 38, pages L18403 (year 2011)NoStop [Acuña et al.(2020)Acuña, Jorda-Capdevila, Vezza, De Girolamo, McClain, Stubbington, Pastor, Lamouroux, von Schiller, Munné, and Datry]Acuna/etal:2020 author author V. Acuña, author D. Jorda-Capdevila, author P. Vezza, author A. M. De Girolamo, author M. E. McClain, author R. Stubbington, author A. V. Pastor, author N. Lamouroux, author D. von Schiller, author A. Munné, and author T. 
Datry, title title Accounting for flow intermittency in environmental flows design, https://doi.org/https://doi.org/10.1111/1365-2664.13590 journal journal J. Appl. Ecol. volume 57, pages 742 (year 2020)NoStop [van Leth et al.(2021)van Leth, Leijnse, Overeem, and Uijlenhoet]vanLeth/etal:2021 author author T. C. van Leth, author H. Leijnse, author A. Overeem, and author R. Uijlenhoet, title title Rainfall spatiotemporal correlation and intermittency structure from micro-γ to meso-β scale in the Netherlands, https://doi.org/https://doi.org/10.1175/JHM-D-20-0311.1 journal journal J. Hydrometeorol. volume 22, pages 2227 (year 2021)NoStop [Wilby et al.(2023)Wilby, Dawson, Yu, Herring, Baruch, Ascott, Finney, Macdonald, Marsham, Matthews, and Murphy]Wilby/etal:2023 author author R. L. Wilby, author C. W. Dawson, author D. Yu, author Z. Herring, author A. Baruch, author M. J. Ascott, author D. L. Finney, author D. M. J. Macdonald, author J. H. Marsham, author T. Matthews, and author C. Murphy, title title Spatial and temporal scaling of sub-daily extreme rainfall for data sparse places, https://doi.org/10.1007/s00382-022-06528-2 journal journal Clim. Dyn. volume 60, pages 3577 (year 2023)NoStop [Yakhot(2001)]Yakhot:2001 author author V. Yakhot, title title Mean-field approximation and extended self-similarity in turbulence, https://doi.org/10.1103/PhysRevLett.87.234501 journal journal Phys. Rev. Lett. volume 87, pages 234501 (year 2001)NoStop [Brett and Tuller(1991)]Brett/Tuller:1991 author author A. C. Brett and author S. E. Tuller, title title The autocorrelation of hourly wind speed observations, https://doi.org/https://doi.org/10.1175/1520-0450(1991)030<0823:TAOHWS>2.0.CO;2 journal journal J. Appl. Meteorol. Climatol. volume 30, pages 823 (year 1991)NoStop [Thomann and Barfield(1988)]Thomann/Barfield:1988 author author G. Thomann and author M. Barfield, title title The time variation of wind speeds and windfarm power output in Kansas, https://doi.org/10.1109/60.4198 journal journal IEEE Trans. Energy Convers. volume 3, pages 44 (year 1988)NoStop [Pérez et al.(2004)Pérez, García, Sánchez, and de Torre]Perez/etal:2004 author author I. A. Pérez, author M. García, author M. L. Sánchez, and author B. de Torre, title title Autocorrelation analysis of meteorological data from a RASS sodar, https://doi.org/https://doi.org/10.1175/1520-0450(2004)043<1213:AAOMDF>2.0.CO;2 journal journal J. Appl. Meteorol. volume 43, pages 1213 (year 2004)NoStop [Taylor(1938)]Taylor:1938 author author G. I. Taylor, title title The spectrum of turbulence, https://doi.org/10.1098/rspa.1938.0032 journal journal Proc. R. Soc. Lond. Ser. A volume 164, pages 476 (year 1938)NoStop [Nastrom and Gage(1983)]Nastrom/Gage:1983 author author G. D. Nastrom and author K. S. Gage, title title A first look at wavenumber spectra from GASP data, https://doi.org/10.3402/tellusa.v35i5.11449 journal journal Tellus A: Dyn. Meteorol. Oceanogr. volume 35, pages 383 (year 1983)NoStop [Nastrom et al.(1984)Nastrom, Gage, and Jasperson]Nastrom/etal:1984 author author G. D. Nastrom, author K. S. Gage, and author W. H. Jasperson, title title Kinetic energy spectrum of large-and mesoscale atmospheric processes, https://doi.org/10.1038/310036a0 journal journal Nature volume 310, pages 36 (year 1984)NoStop [Charney(1971)]Charney:1971 author author J. G. Charney, title title Geostrophic turbulence, https://doi.org/https://doi.org/10.1175/1520-0469(1971)028<1087:GT>2.0.CO;2 journal journal J. Atmos. Sci. 
volume 28, pages 1087 (year 1971)NoStop [Xie and Bühler(2018)]Xie/Buehler:2018 author author J.-H. Xie and author O. Bühler, title title Exact third-order structure functions for two-dimensional turbulence, https://doi.org/DOI: 10.1017/jfm.2018.528 journal journal J. Fluid Mech. volume 851, pages 672 (year 2018)NoStop [Poblet et al.(2023)Poblet, Vierinen, Avsarkisov, Conte, Charuvil Asokan, Jacobi, and Chau]Poblet/etal:2023 author author F. L. Poblet, author J. Vierinen, author V. Avsarkisov, author J. F. Conte, author H. Charuvil Asokan, author C. Jacobi, and author J. L. Chau, title title Horizontal correlation functions of wind fluctuations in the mesosphere and lower thermosphere, https://doi.org/https://doi.org/10.1029/2022JD038092 journal journal J. Geophys. Res.: Atmos. volume 128, pages e2022JD038092 (year 2023)NoStop [Calaf et al.(2013)Calaf, Hultmark, Oldroyd, Simeonov, and Parlange]Calaf/etal:2013 author author M. Calaf, author M. Hultmark, author H. J. Oldroyd, author V. Simeonov, and author M. B. Parlange, title title Coherent structures and the k^-1 spectral behaviour, https://doi.org/10.1063/1.4834436 journal journal Phys. Fluids volume 25, pages 125107 (year 2013)NoStop [Drobinski et al.(2004)Drobinski, Carlotti, Newsom, Banta, Foster, and Redelsperger]Drobinski/etal:2004 author author P. Drobinski, author P. Carlotti, author R. K. Newsom, author R. M. Banta, author R. C. Foster, and author J.-L. Redelsperger, title title The structure of the near-neutral atmospheric surface layer, https://doi.org/https://doi.org/10.1175/1520-0469(2004)061<0699:TSOTNA>2.0.CO;2 journal journal J. Atmos. Sci. volume 61, pages 699 (year 2004)NoStop [Kolmogorov(1941a)]Kolmogorov:1941a author author A. N. Kolmogorov, title title The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers, https://doi.org/10.1098/rspa.1991.0075 journal journal Proc. R. Soc. London, Ser. A volume 434, pages 9 (year 1941a)NoStop [Mehrens et al.(2016)Mehrens, Hahmann, Larsén, and von Bremen]Mehrens/etal:2016 author author A. R. Mehrens, author A. N. Hahmann, author X. G. Larsén, and author L. von Bremen, title title Correlation and coherence of mesoscale wind speeds over the sea, https://doi.org/https://doi.org/10.1002/qj.2900 journal journal Q. J. R. Meteorolog. Soc. volume 142, pages 3186 (year 2016)NoStop [Vincent et al.(2013)Vincent, Larsén, Larsen, and Sørensen]Vincent/etal:2013 author author C. L. Vincent, author X. G. Larsén, author S. E. Larsen, and author P. Sørensen, title title Cross-spectra over the sea from observations and mesoscale modelling, https://doi.org/10.1007/s10546-012-9754-1 journal journal Boundary-Layer Meteorol. volume 146, pages 297 (year 2013)NoStop [Bardal and Sætran(2016)]Bardal/Saetran:2016 author author L. M. Bardal and author L. R. Sætran, title title Spatial correlation of atmospheric wind at scales relevant for large scale wind turbines, https://doi.org/10.1088/1742-6596/753/3/032033 journal journal J. Phys. Conf. Ser. volume 753, pages 032033 (year 2016)NoStop [Note1()]Note1 note FINO1 project supported by the German Government through BMWi and PTJ. For further details on the data sampling and instrumentation, see .Stop [Tuller and Brett(1984)]Tuller/Brett:1984 author author S. E. Tuller and author A. C. Brett, title title The characteristics of wind velocity that favor the fitting of a Weibull distribution in wind speed analysis, https://doi.org/https://doi.org/10.1175/1520-0450(1984)023<0124:TCOWVT>2.0.CO;2 journal journal J. Appl. 
Meteorol. volume 23, pages 124 (year 1984)NoStop [Chang(2011)]Chang:2011 author author T. P. Chang, title title Performance comparison of six numerical methods in estimating Weibull parameters for wind energy application, https://doi.org/https://doi.org/10.1016/j.apenergy.2010.06.018 journal journal Appl. Energy volume 88, pages 272 (year 2011)NoStop [Mohammadi et al.(2016)Mohammadi, Alavi, Mostafaeipour, Goudarzi, and Jalilvand]Mohammadi/etal:2016 author author K. Mohammadi, author O. Alavi, author A. Mostafaeipour, author N. Goudarzi, and author M. Jalilvand, title title Assessing different parameters estimation methods of Weibull distribution to compute wind power density, https://doi.org/https://doi.org/10.1016/j.enconman.2015.11.015 journal journal Energy Convers. Manage. volume 108, pages 322 (year 2016)NoStop [Sim et al.(2019)Sim, Maass, and Lind]Sim/etal:2019 author author S.-K. Sim, author P. Maass, and author P. G. Lind, title title Wind speed modeling by nested ARIMA processes, https://doi.org/10.3390/en12010069 journal journal Energies volume 12, pages 69 (year 2019)NoStop [Kullback(1997)]Kullback:1997 author author S. Kullback, @noop title Information theory and statistics (publisher Courier Corporation, year 1997)NoStop [Teixeira et al.(2019)Teixeira, O'Connor, and Nogal]Teixeira/etal:2019 author author R. Teixeira, author A. O'Connor, and author M. Nogal, title title Probabilistic sensitivity analysis of offshore wind turbines using a transformed Kullback-Leibler divergence, https://doi.org/https://doi.org/10.1016/j.strusafe.2019.03.007 journal journal Struct. Saf. volume 81, pages 101860 (year 2019)NoStop [Koscielny-Bunde et al.(1998)Koscielny-Bunde, Bunde, Havlin, Roman, Goldreich, and Schellnhuber]Koscielny-Bunde/etal:1998 author author E. Koscielny-Bunde, author A. Bunde, author S. Havlin, author H. E. Roman, author Y. Goldreich, and author H.-J. Schellnhuber, title title Indication of a universal persistence law governing atmospheric variability, https://doi.org/10.1103/PhysRevLett.81.729 journal journal Phys. Rev. Lett. volume 81, pages 729 (year 1998)NoStop [Kolmogorov(1941b)]Kolmogorov:1941c author author A. N. Kolmogorov, title title Dissipation of energy in the locally isotropic turbulence, https://doi.org/10.1098/rspa.1991.0076 journal journal Proc. R. Soc. London, Ser. A volume 434, pages 15 (year 1941b)NoStop [Taylor et al.(2003)Taylor, Kurien, and Eyink]Taylor/etal:2003 author author M. A. Taylor, author S. Kurien, and author G. L. Eyink, title title Recovering isotropic statistics in turbulence simulations: The Kolmogorov 4/5th law, https://doi.org/10.1103/PhysRevE.68.026310 journal journal Phys. Rev. E volume 68, pages 026310 (year 2003)NoStop [Lindborg and Cho(2001)]Lindborg/Cho:2001 author author E. Lindborg and author J. Y. N. Cho, title title Horizontal velocity structure functions in the upper troposphere and lower stratosphere: 2. Theoretical considerations, https://doi.org/https://doi.org/10.1029/2000JD900815 journal journal J. Geophys. Res.: Atmos. volume 106, pages 10233 (year 2001)NoStop [Lindborg(1999)]Lindborg:1999 author author E. Lindborg, title title Can the atmospheric kinetic energy spectrum be explained by two-dimensional turbulence?, https://doi.org/DOI: 10.1017/S0022112099004851 journal journal J. Fluid Mech. volume 388, pages 259 (year 1999)NoStop [Lindborg(2007)]Lindborg:2007 author author E. 
Lindborg, title title Third-order structure function relations for quasi-geostrophic turbulence, https://doi.org/10.1017/S0022112006003697 journal journal J. Fluid Mech. volume 572, pages 255 (year 2007)NoStop [Poblet et al.(2024)Poblet, Liu, and Chau]Poblet/etal:2024 author author F. L. Poblet, author H. Liu, and author J. L. Chau, title title Third-order structure functions of zonal winds in the thermosphere using champ and goce observations, https://doi.org/https://doi.org/10.1029/2024GL108367 journal journal Geophys. Res. Lett. volume 51, pages e2024GL108367 (year 2024)NoStop [Sreenivasan and Kailasnath(1993)]Sreenivasan/Kailasnath:1993 author author K. R. Sreenivasan and author P. Kailasnath, title title An update on the intermittency exponent in turbulence, https://doi.org/10.1063/1.858877 journal journal Phys. Fluids A volume 5, pages 512 (year 1993)NoStop [Praskovsky and Oncley(1997)]Praskovsky/Oncley:1997 author author A. Praskovsky and author S. Oncley, title title Comprehensive measurements of the intermittency exponent in high Reynolds number turbulent flows, https://doi.org/https://doi.org/10.1016/S0169-5983(97)86593-8 journal journal Fluid Dyn. Res. volume 21, pages 331 (year 1997)NoStop [Lovejoy et al.(2001)Lovejoy, Schertzer, and Stanway]Lovejoy/etal:2001 author author S. Lovejoy, author D. Schertzer, and author J. D. Stanway, title title Direct evidence of multifractal atmospheric cascades from planetary scales down to 1 km, https://doi.org/10.1103/PhysRevLett.86.5200 journal journal Phys. Rev. Lett. volume 86, pages 5200 (year 2001)NoStop [Yakhot(1998)]Yakhot:1998 author author V. Yakhot, title title Probability density and scaling exponents of the moments of longitudinal velocity difference in strong turbulence, https://doi.org/10.1103/PhysRevE.57.1737 journal journal Phys. Rev. E volume 57, pages 1737 (year 1998)NoStop [Fischer et al.(2024)Fischer, Izmailov, and Solodov]Fischer/etal:2024 author author A. Fischer, author A. F. Izmailov, and author M. V. Solodov, title title The Levenberg–Marquardt method: an overview of modern convergence theories and more, https://doi.org/10.1007/s10589-024-00589-1 journal journal Comput. Optim. Appl., volume DOI:10.1007/s10589-024-00589-1 (year 2024)NoStop [Costa Rocha et al.(2012)Costa Rocha, de Sousa, de Andrade, and da Silva]CostaRocha/etal:2012 author author P. A. Costa Rocha, author R. C. de Sousa, author C. F. de Andrade, and author M. E. V. da Silva, title title Comparison of seven numerical methods for determining Weibull parameters for wind energy generation in the northeast region of Brazil, https://doi.org/https://doi.org/10.1016/j.apenergy.2011.08.003 journal journal Appl. Energy volume 89, pages 395 (year 2012)NoStop [Tizgui et al.(2017)Tizgui, El Guezar, Bouzahir, and Benaid]Tizgui/etal:2017 author author I. Tizgui, author F. El Guezar, author H. Bouzahir, and author B. Benaid, title title Comparison of methods in estimating Weibull parameters for wind energy applications, https://doi.org/10.1108/IJESM-06-2017-0002 journal journal Int. J. Energy Sect. Manage. volume 11, pages 650 (year 2017)NoStop [Patidar et al.(2022)Patidar, Shende, Baredar, and Soni]Patidar/etal:2022 author author H. Patidar, author V. Shende, author P. Baredar, and author A. Soni, title title Comparative study of offshore wind energy potential assessment using different Weibull parameters estimation methods, https://doi.org/10.1007/s11356-022-19109-x journal journal Environ. Sci. Pollut. Res. 
volume 29, pages 46341 (year 2022)NoStop [Karlsson(1986)]Karlsson:1986 author author S. Karlsson, title title The applicability of wind profile formulas to an urban-rural interface site, https://doi.org/10.1007/BF00120987 journal journal Boundary-Layer Meteorol. volume 34, pages 333 (year 1986)NoStop [Marusic et al.(2013)Marusic, Monty, Hultmark, and Smits]Marusic/etal:2013 author author I. Marusic, author J. P. Monty, author M. Hultmark, and author A. J. Smits, title title On the logarithmic region in wall turbulence, https://doi.org/DOI: 10.1017/jfm.2012.511 journal journal J. Fluid Mech. volume 716, pages R3 (year 2013)NoStop [Tennekes(1973)]Tennekes:1973 author author H. Tennekes, title title The logarithmic wind profile, https://doi.org/10.1175/1520-0469(1973)030<0234:TLWP>2.0.CO;2 journal journal J. Atmos. Sci. volume 30, pages 234 (year 1973)NoStop [Townsend(1976)]Townsend:1976 author author A. Townsend, @noop title The structure of turbulent shear flow (publisher Cambridge University Press, year 1976)NoStop [Zhang et al.(2013)Zhang, Seidel, and Zhang]Zhang/etal:2013 author author Y. Zhang, author D. J. Seidel, and author S. Zhang, title title Trends in planetary boundary layer height over Europe, https://doi.org/10.1175/JCLI-D-13-00108.1 journal journal J. Clim. volume 26, pages 10071 (year 2013)NoStop [Tabeling et al.(1996)Tabeling, Zocchi, Belin, Maurer, and Willaime]Tabeling/etal:1996 author author P. Tabeling, author G. Zocchi, author F. Belin, author J. Maurer, and author H. Willaime, title title Probability density functions, skewness, and flatness in large Reynolds number turbulence, https://doi.org/10.1103/PhysRevE.53.1613 journal journal Phys. Rev. E volume 53, pages 1613 (year 1996)NoStop [Peinke et al.(2004)Peinke, Barth, Böttcher, Heinemann, and Lange]Peinke/etal:2004 author author J. Peinke, author S. Barth, author F. Böttcher, author D. Heinemann, and author B. Lange, title title Turbulence, a challenging problem for wind energy, https://doi.org/https://doi.org/10.1016/j.physa.2004.02.040 journal journal Physica A volume 338, pages 187 (year 2004)NoStop [Ren et al.(2018)Ren, Wan, Liu, Yu, and Söder]Ren/etal:2018 author author G. Ren, author J. Wan, author J. Liu, author D. Yu, and author L. Söder, title title Analysis of wind power intermittency based on historical wind power data, https://doi.org/https://doi.org/10.1016/j.energy.2018.02.142 journal journal Energy volume 150, pages 482 (year 2018)NoStop [Cho and Lindborg(2001)]Cho/Lindborg:2001 author author J. Y. N. Cho and author E. Lindborg, title title Horizontal velocity structure functions in the upper troposphere and lower stratosphere: 1. Observations, https://doi.org/https://doi.org/10.1029/2000JD900814 journal journal J. Geophys. Res.: Atmos. volume 106, pages 10223 (year 2001)NoStop [Gkioulekas and Tung(2006)]Gkioulekas/Tung:2006 author author E. Gkioulekas and author K.-K. Tung, title title Recent developments in understanding two-dimensional turbulence and the Nastrom–Gage spectrum, https://doi.org/10.1007/s10909-006-9239-z journal journal J. Low Temp. Phys. volume 145, pages 25 (year 2006)NoStop [Callies et al.(2014)Callies, Ferrari, and Bühler]Callies/etal:2014 author author J. Callies, author R. Ferrari, and author O. Bühler, title title Transition from geostrophic turbulence to inertia–gravity waves in the atmospheric energy spectrum, https://doi.org/10.1073/pnas.1410772111 journal journal Proc. Natl. Acad. Sci. U.S.A. volume 111, pages 17033 (year 2014)NoStop
http://arxiv.org/abs/2407.12487v1
20240717111202
Application of Prompt Learning Models in Identifying the Collaborative Problem Solving Skills in an Online Task
[ "Mengxiao Zhu", "Xin Wang", "Xiantao Wang", "Zihang Chen", "Wei Huang" ]
cs.HC
[ "cs.HC" ]
University of Science and Technology of China, Anhui Province Key Laboratory of Science Education and Communication Hefei China mxzhu@ustc.edu.cn University of Science and Technology of China Hefei China University of Science and Technology of China Hefei China University of Science and Technology of China Hefei China National Education Examinations Authority Beijing China § ABSTRACT Collaborative problem solving (CPS) competence is considered one of the essential 21st-century skills. To facilitate the assessment and learning of CPS competence, researchers have proposed a series of frameworks to conceptualize CPS and explored ways to make sense of the complex processes involved in collaborative problem solving. However, encoding explicit behaviors into subskills within the frameworks of CPS skills is still a challenging task. Traditional studies have relied on manual coding to decipher behavioral data for CPS, but such coding methods can be very time-consuming and cannot support real-time analyses. Scholars have begun to explore approaches for constructing automatic coding models. Nevertheless, the existing models built using machine learning or deep learning techniques depend on a large amount of training data and have relatively low accuracy. To address these problems, this paper proposes a prompt-based learning pre-trained model. The model can achieve high performance even with limited training data. In this study, three experiments were conducted, and the results showed that our model not only produced the highest accuracy, macro F1 score, and kappa values on large training sets, but also performed the best on small training sets of the CPS behavioral data. The application of the proposed prompt-based learning pre-trained model contributes to the CPS skills coding task and can also be used for other CSCW coding tasks to replace manual coding. <ccs2012> <concept> <concept_id>10003120.10003130.10011762</concept_id> <concept_desc>Human-centered computing Empirical studies in collaborative and social computing</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Human-centered computing Empirical studies in collaborative and social computing Application of Prompt Learning Models in Identifying the Collaborative Problem Solving Skills in an Online Task Wei Huang July 22, 2024 =============================================================================================================== § INTRODUCTION Collaborative problem solving (CPS) refers to two or more people working together to solve a problem using their respective skills through information sharing and effective communication <cit.>. As one of the most important skills in the 21st century, CPS competence is widely required in many scenarios, including learning environments and the workplace. For example, students are often asked by instructors to work in teams to complete course projects. In the workplace, an increasing number of tasks can no longer be accomplished by individuals but instead require multiple team members to work together. On a global scale, people from different countries collaborate to solve health crises (e.g., COVID-19) and find solutions to other serious global problems (e.g., pollution and global warming) <cit.>. In recent years, CPS has attracted significant attention from researchers. 
Theoretical CPS frameworks have been proposed to conceptualize this abstract construct <cit.>, and assessment approaches have also been developed to measure the CPS competence of individuals <cit.>. Although CPS competence is important, many students worldwide lack this skill, as reported by the Programme for International Student Assessment (PISA) <cit.>. Consequently, it is necessary to deepen the understanding of CPS to improve the CPS abilities of students. Theoretical frameworks and measurement approaches form the foundation for conducting in-depth CPS analyses. As a composite competence, CPS encompasses various aspects, such as critical thinking, collaboration, communication, and innovation <cit.>. It is also a multimodal, dynamic, and synergistic phenomenon <cit.>, in which collaboration and problem solving unfold together and continually influence each other. Due to the complexity and abstraction of CPS, researchers have proposed theoretical frameworks for conceptualizing this construct and dividing it into multiple subskills (the details are described in Section 2.1) that capture different aspects of CPS competence. Two types of assessments are often used for CPS: traditional multiple-choice methods and simulated-task-based methods. Traditional multiple-choice assessments use text and sometimes images to provide relevant situations to the participants, who then need to choose among the available options to solve problems in imaginary CPS scenarios <cit.>. In this case, the CPS skill levels of individuals can be assessed according to their answers. However, this approach is considered deficient <cit.> because it only captures limited information, and more detailed process data on collaboration and problem solving are not available. Hence, researchers have attempted to develop virtual environments to simulate realistic operating spaces <cit.> based on the evidence-centered design approach (ECD; <cit.>). In a simulated environment, the behavioral trajectories of team members, including their actions (e.g., clicking the mouse and pressing certain keys on the keyboard) and communications (e.g., sending messages to other teammates), can be tracked by a logging system. By analyzing these behavioral data, researchers can further evaluate individuals' mastery levels of certain CPS subskills <cit.>, which is beneficial for comprehensively understanding their CPS competence. Such behavioral data can offer insights into how students attempt to achieve goals. However, behavioral data come in very large quantities and are usually unstructured due to the variety of observed behaviors and the inconsistent formats used for data collection. To quantitatively analyze behavioral data, it is necessary to first code the collected behavioral data into specific CPS subskills <cit.>. Nevertheless, coding behavioral data is challenging due to the difficulty of making sense of the large number of operational behaviors of participants. Generally, the existing coding approaches can be categorized into two types: manual coding and automatic coding. Manual coding refers to the traditional method of labeling observed data with human coders (e.g., <cit.>). The coding process usually involves two or more coders working together on the task based on a mutually agreed-upon coding schema built on a related theoretical framework. After the initial training stage, the coders are expected to reach a high level of agreement regarding the coding results.
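Agreement between coders is typically quantified with chance-corrected statistics such as Cohen's kappa; the short sketch below (with purely hypothetical utterance labels, using scikit-learn) illustrates how such an agreement check can be computed.

from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical CPS subskill codes assigned by two trained coders to the same eight utterances
coder_a = ["sharing", "planning", "executing", "sharing", "negotiating", "monitoring", "sharing", "executing"]
coder_b = ["sharing", "planning", "executing", "negotiating", "negotiating", "monitoring", "sharing", "planning"]

kappa = cohen_kappa_score(coder_a, coder_b)      # chance-corrected agreement
print(f"Cohen's kappa: {kappa:.2f}")
print(confusion_matrix(coder_a, coder_b))        # shows where the coders disagree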
Because of the rigorous manual coding process, results with high interrater reliability are considered reliable and suitable for further analysis. However, manual coding has limitations. On the one hand, it is a time-consuming and labor-intensive process <cit.>. It is not feasible to rely on manual coding for generating large-scale and real-time codes for CPS behavioral data. On the other hand, if the task scenario or theoretical framework changes, the entire coding process needs to be restarted from scratch. To efficiently code CPS behaviors, automatic coding is considered a crucial step toward scaling up the related assessment and learning tasks in the context of CPS <cit.>. Essentially, the automatic coding task can be regarded as a classification problem, and researchers have begun to explore and develop machine learning and deep neural network models for coding explicit behavioral data <cit.>. However, traditional classifiers rely on the quantity and quality of the utilized training set to achieve acceptable performance. In real-world applications, a significant amount of high-quality training data may not be readily available. Consequently, the performance of the existing models should be improved, and alternative approaches for building classifiers with limited training data need to be explored. In this study, we introduce and adopt a prompt learning strategy and develop a prompt-based learning pre-trained model to enable automatic coding without relying on large training sets. Prompt-based learning, as reviewed in <cit.>, is a technique that guides a pre-trained model to generate specific types of outputs by inserting specific prompt texts into the input. By designing effective prompt texts, we can guide the pre-trained model to better understand and handle automatic CPS coding tasks with high accuracy. Furthermore, by leveraging the advantages of prompt-based learning for low-resource data <cit.>, we can obtain satisfactory classification results with fewer training data. As a starting point, we explored different combinations of prompt generation methods, including prompt templates, mappings between original labels (i.e., the targeted labels for true CPS subskills), and label words (i.e., the output words downstream of the prompt model), as well as different pre-trained models. The templates and mappings could be generated either manually or automatically. Our results revealed that manually designed templates, along with manually designed mappings, outperformed other prompt generation methods when using the T5 <cit.> pre-trained model. We also conducted a comparative analysis with other automatic coding models proposed in previous studies, such as KNN<cit.>, RF <cit.>, Linear <cit.>, CNN <cit.>, LSTM <cit.>, GRU <cit.>, and pre-trained models based on fine-tuning, such as Finetune-BERT <cit.>, Finetune-RoBERTa <cit.>, and Finetune-T5 <cit.>. The results showed that the model developed in our study outperformed the other approaches and achieved the highest accuracy, macro F1 score, and kappa values. Finally, we assessed the performance of our model on small training sets by reducing the amount of input training data and compared the performances of different models with that of our model. The results demonstrated the superior performance of our proposed model on small training sets. In conclusion, this study makes the following contributions. 1) We introduce a prompt-based learning pre-trained model to address the problem of coding process data in CPS tasks. 
2) The performance of our proposed model is compared with that of other automatic coding models on an empirical dataset derived from a CPS task to demonstrate the superior performance of our model. 3) By using partial data from the training set, we find that the performance of our model is satisfactory when limited training data are available, and our model outperforms the other models. § RELATED WORK In this section, we first review various CPS frameworks proposed in previous research. Next, we provide an overview of the progress made by coding approaches developed for CPS process data, including both manual and automatic coding methods. Finally, we briefly review the existing prompt learning methods. §.§ CPS Frameworks To comprehend the behaviors and processes of individuals in collaborative problem solving tasks, and to assess their CPS competence, researchers have developed CPS frameworks that can operationalize this complex construct (e.g., <cit.>). Most frameworks share a common structure, encompassing both a social aspect related to collaboration and a cognitive aspect related to problem solving <cit.>. In international CPS assessments, the two most widely used frameworks are the Assessment and Teaching of 21st Century Skills (ATC21s) and the PISA Assessment <cit.>, and we briefly review these two theoretical frameworks. In addition, we review a third framework <cit.> developed for research purposes, which is also adopted in the current study. As a CPS framework developed for assessment, ATC21s <cit.> identifies both social and cognitive aspects. The social aspect covers three components, including participation, perspective taking, and social regulation. Participation is a long-term process of becoming a community member, which involves interaction, action, and task completion. Perspective taking refers to understanding team members' knowledge, resources, and skills, and responding to others. The final component, social regulation, pertains to the strategies and team processes group members employ to facilitate CPS, including negotiating, taking initiative, and assuming responsibility. The cognitive aspect consists of two dimensions, including task regulation and learning and knowledge building. Task regulation is related to problem solving capabilities, such as setting goals, managing resources, exploring problems, and aggregating information. And, learning and knowledge building refers to the abilities to plan, execute, reflect, and monitor problem solving. As a distinct construct, the PISA framework <cit.> is composed of four problem solving (also regarded as cognitive) competencies and three collaborative (also regarded as social) competencies. Specifically, the four subdimensions of the problem solving dimension include exploring and understanding, representing and formulating, planning and executing, and monitoring and reflecting. And, the three subdimensions of the collaborative dimension comprise establishing and maintaining shared understanding, taking appropriate actions to solve the problem, and establishing and maintaining group organization. These two dimensions interact and cross, forming a matrix with 12 subskills. In addition to the above two frameworks, Andrews and colleagues <cit.> also proposed a CPS ontology framework to conceptualize the CPS construct mainly for research purposes. The framework includes nine CPS subskills across two dimensions. 
The first dimension, the cognitive dimension, involves five subskills, including exploring and understanding (CEU), representing and formulating (CRF), planning (CP), executing (CE), and monitoring (CM). The second dimension, the social dimension, involves four subskills, including maintaining communication (SMC), sharing information (SSI), establishing shared understanding (SESU), and negotiating (SN). In the cognitive dimension, exploring and understanding involves actions for exploring problem-related information and building a mutual understanding of the given problem. Representing and formulating refers to actions and communication that aim to better visualize problems and form hypotheses. Planning concerns communications that are used for the determination of task targets and solutions, as well as subsequent revisions and refinements. Executing involves actions and communications during the execution of a task completion plan. Finally, monitoring refers to the activities related to determining task completion progress. In the social dimension, maintaining communication is about communicating content that is irrelevant to tasks, while sharing information is about communicating content that is relevant to tasks. Establishing shared understanding refers to group members trying to understand each other’s perspectives. The last subskill, negotiating, involves communications used to understand conflicts and propose solutions to reach a consensus. Since the two subskills of executing and monitoring can occur in either actions or chats, each of them is further split into two components, yielding executing actions (CE), executing chats (CEC), monitoring actions (CM), and monitoring chats (CMC). Thus, the proposed CPS framework includes eleven subskills. The framework provides a theory-driven relationship of the CPS subskills associated with explicit behaviors when participants perform tasks. The empirical analysis conducted in this study used a dataset collected from a three-resistor task <cit.>, and this CPS ontology was initially developed for human coding. This CPS framework was thus adopted for building the classifiers. However, our proposed method and model do not depend on any specific CPS framework or any CPS task and can be generalized to other practices and applications beyond CPS. §.§ Coding of CPS activities To associate conversational and behavioral data from CPS activities with CPS skills, researchers rely on coding methods, which can be roughly divided into manual and automatic coding methods. We first review the manual coding approaches used in the field. Manual coding processes usually depend on coding schemes <cit.>. In general CSCW communities, researchers have developed various manual coding schemes to serve different purposes. For instance, in a study of computer-supported cooperative learning scenarios, <cit.> proposed a multidimensional encoding method for dialectical knowledge construction. As another example, <cit.> developed two complementary encoding schemes with different granularities to annotate dialogs in peer collaboration scenarios. Additionally, <cit.> coded interactive behaviors such as negotiation and elaboration between different participants. With the recent studies on CPS, researchers have also developed coding schemes that fit the CPS simulation environment. For example, <cit.> proposed a hierarchical CPS coding scheme that can effectively capture participants’ behavioral indicators and associate them with CPS skills. 
<cit.> proposed an ontology framework for CPS, encoding participants’ chats and actions (e.g., changing their resistance values) into 23 CPS subskills. Moreover, <cit.> proposed a coding scheme combining the 12 CPS subskills classified by the PISA 2015 and students' mastery levels. In general, the manual coding method is based on a certain theoretical framework that maps a piece of explicit behavior to a specific skill. However, this method has significant limitations. Trained raters need to go through the input data manually, check a large number of corpora, and then map them to specific CPS subskills. Additionally, coders need to ensure rating consistency among themselves, which requires frequent discussions to produce consistent coding results. This process is undoubtedly time-consuming and labor-intensive <cit.>. With the development of technology, advanced methods have been applied to implement automatic coding, thereby facilitating tasks such as coding text and providing automated feedback <cit.>. Recently, researchers in the CPS field have also explored automatic coding approaches. Overall, automatic coding approaches can be divided into two types, i.e., machine learning and deep learning methods. One machine learning method used a linear chain-based conditional random field (CRF) to construct the sequential dependencies of dialog content, and the authors developed an automatic coding system named CPS-rater <cit.>. This method was proved to be more effective than that of <cit.>, which treated different dialogs independently. In another study <cit.>, preselected n-grams and emotions were used to model four aspects of CPS (i.e., sharing ideas, regulating problem-solving activities, negotiating, and maintaining communication). Additionally, the KNN classifier was also used for CPS coding <cit.> and was found to be more satisfactory than naive Bayes classification and comparable to manual encoding. In addition to the aforementioned automatic encoding methods, <cit.> employed a more advanced deep transfer learning approach called bidirectional encoder representations from transformers (BERT) to explore the feasibility of using this model to encode CPS data obtained from simulated indoor environments or real scenes <cit.>. They also analyzed the generalizability of several different NLP methods (BERT, n-grams, and word categories) for encoding tasks <cit.>. As reviewed above, automatic CPS dataset coding is an emerging research direction, and more efficient automatic coding models need to be developed. Additionally, since existing automatic coding models rely on existing manual coding datasets for training, generalization of the existing model requires manually coded datasets. To improve the generalizability of an automatic coding model, we aim to develop a model that depends on a small amount of existing data for automatic coding and can also achieve a relatively high accuracy. §.§ Prompt Learning Paradigm Before introducing prompt learning, we briefly review the pre-trained language model (PLM) concept, which plays a vital role in facilitating the development of prompt learning methods. When trained on large-scale open corpora <cit.>, PLM achieves superior performance in diverse NLP tasks, such as sentiment analysis <cit.> and machine translation <cit.>, due to its ability to embed abundant semantic and syntactic information. Additionally, the model can be adapted to different downstream tasks by learning domain-specific knowledge via fine-tuning <cit.>. 
Nevertheless, fine-tuning a PLM can be challenging due to the need for large-scale datasets and the involvement of an enormous number of parameters. This challenge is particularly pronounced in low-resource scenarios <cit.>. To address this limitation, a new paradigm called "prompt-based learning", which allows PLMs to process downstream tasks through prompts has emerged <cit.>. Unlike PLM fine-tuning for a downstream task, the prompt-based method reformulates a downstream task using a textual prompt, effectively turning it into a masked word classification task <cit.>. We take text classification as an example. Given the input sentence, "I love this movie", the model is expected to output "positive" or "negative" information about the meaning of the input sentence. However, PLMs designed for text generation cannot directly handle classification tasks. By properly transforming the raw input using the prompt-based method, we can enable text-generated PLMs to perform classification as well. Utilizing the above example, the prompt-based method involves adding a [MASK] token to the input sentence, structuring the input sentence as "I love this movie. It is a [MASK] movie". The model can then generate output words with their associated probabilities, such as "funny", "interesting", or "boring" (referred to as label words). The first two words represent positive emotions, while the third word indicates a negative emotion. The output words can then be mapped to the corresponding emotion words for classification purposes, and this step is known as label-word mapping. Prompt-based models modify the input to adapt a pre-trained model to various downstream tasks, eliminating the need to train a separate model for each task and reducing the requirement for encoding large-scale datasets <cit.>. Therefore, prompt-based learning methods can achieve excellent performance in few- <cit.> and zero-shot <cit.> tasks. Due to the advantages of prompt learning, it has garnered increasing attention in recent years. For instance, in the field of mental disease diagnosis, a prompt-based topic-modeling method was developed to detect depression based on question-and-answer data gathered during interviews <cit.>. Researchers utilized the prompt learning paradigm and made topic-wise predictions using the characteristics of the interview data to construct a fusion model for detecting depression. It is worth noting that the sample size of people with mental illness is relatively limited, resulting in even less available data available. Overall, the study demonstrated that the prompt-based model is well suited for addressing the challenge of insufficient training data. The model was also proven to be efficient in personality and interpersonal reactivity prediction tasks. For example, <cit.> employed a prompt-based pre-trained model to participate in a competition involving personality prediction and reactivity prediction, achieving 1st place in both subtasks. The advantage of the designed prompt is that it provides additional personalized information that enhances the performance of the pre-trained model. Furthermore, the prompt-based method has been applied in affective computing. For example, <cit.> conducted an empirical study on prompt-based sentiment analysis and emotion detection. They demonstrated the biases of PLMs in prompting by comparing the performances of different prompt templates, label-word forms, and other control variables. 
This study highlighted the importance of prompt engineering and label-word selections. It is evident from the aforementioned studies that prompt-based models excel in classification tasks and are also effective with small training datasets. In this study, the prompt-based model is applied to automatically code the process data of CPS tasks. Given the pivotal role of the prompt method and pre-trained models in prediction performance, this study aims to determine the appropriate prompt generation method and pre-trained model. § PRELIMINARY ANALYSIS In this section, we first introduce the process of collecting and building the utilized dataset, which encompasses participants’ behaviors observed during CPS activities. Next, we present a visualization of the dataset, showcasing the proportions of each subskill (referred to as labels) and the distribution of chat data lengths. Finally, we delve into the data preprocessing steps conducted on the dataset to prepare it for being input into the model. §.§ Data Collection Task. The data for this study were collected from an online three-resistor task. The task involves applying relevant physical knowledge of series circuits to adjust resistor values, ensuring that the voltage across the resistance satisfies the requirements of the task. A total of 378 participants were recruited, and randomly divided into 126 groups. Each group consisted of 3 members, with each member responsible for one resistor. The operation interface is represented in Fig. <ref>. At the top of the screen, the known conditions and targets are displayed. The screen presents a complete circuit structure. The goal of each participant was to adjust the resistor values to reach the target voltage. Because each member received varying information and the resistors in the series circuit influenced each other’s voltage, the group members needed to engage in discussions and collaborate to complete the task. To facilitate communication among the group members, a chat box was provided. Additionally, participants can utilize the calculator in the upper-right corner of the screen for calculations. To accommodate the CPS competence levels of different groups, the task was divided into four levels, primarily differing in the known conditions and goals, as outlined in Table <ref>. Dataset. The data were recorded by a logging system, which included participants' information, such as student IDs and group names, as well as task information and participants' behaviors during the activities. The participants' behavioral data could be classified into two categories. The first category involved manipulating the system, such as changing the resistor or performing calculations, and the second category included chatting with other members in the chat box, such as "I think it will make it", or "Alright, let's do a big one". In total, we collected 50,817 pieces of data, comprising 15,950 chat records and 34,867 manipulation records. §.§ Dataset Building The collected explicit behavioral data were manually coded by three coders based on the rubric of the CPS ontology framework <cit.>. Each record contained information on either an interaction with the simulated task system or a single chat message between team members. For example, the chat message "we need 6.69V across our resistors" could be classified as planning (CP). The interrater reliability was satisfactory with kappa=.93 for the 20% triple-coded samples. 
Eventually, 50,817 log entries were classified into 11 CPS subskills, and the chat data covered 8 subskills. Table <ref> displays some coding examples. As shown in Table <ref>, the manipulation data could be mapped to a specific subskill since they are generally deterministic. It is more challenging to address chat data due to their diversity and irregularity. To a great extent, chats can be associated with all the subskills, which significantly increases the coding difficulty of the model. Thus, this study focused primarily on automatic chat data coding. §.§ Data Descriptions We conducted a fundamental statistical analysis of the subskill categories in the chat data. Specifically, we calculated the proportion of each type and counted the number of related words that appeared in each chat message to better understand its characteristics. Table <ref> shows the frequency and proportion of each classified subskill in the chat data. This reveals that the chat data were unevenly distributed across categories. More than 70% of the chat data pertained to social subskills. Sharing information (SSI) appeared most frequently, followed by establishing shared understanding (SESU). Conversely, representing and formulating (CRF), and planning (CP) were the least commonly used subskills. In conclusion, social subskills are employed more frequently than cognitive subskills during group communication. The uneven proportions of subskills pose challenges when automatically coding models. The model needed to avoid showing a preference for a specific category during the training process. This ensured that even if the overall prediction accuracy was high, its performance in terms of predicting fewer proportion categories was not extremely poor. Thus, we took this factor into account when evaluating the model performance. The distribution of the chat data length is depicted in Fig. <ref>. The distribution exhibited a skewed pattern and was predominantly composed of short sentences. Given that 96.68% of the chat data contained 16 words or fewer and considering the computational efficiency of the model, we chose a maximum sentence length of 16 for the subsequent experimental model settings. §.§ Data Preprocessing To facilitate the subsequent data serialization process, we performed text replacements as outlined in Table <ref>. We applied several steps to process the chat data. First, we replaced nouns related to the three-resistor task with special tokens. For example, for the relevant expressions of the four resistors R0-R3 in the circuit, we replaced them with [R_zero-R_three] and added them to the pre-trained model. Similarly, we also replaced the expressions of the voltage values, current values, and pure numerical values. In addition, we replaced colloquial abbreviations related to voltage or resistance. Third, expressions referring to team members' nicknames (e.g., tiger, lion) were also substituted with the common names of people to help the language model recognize them as different members of a team. § METHODOLOGY The proposed method consists of a data filtering module and an automatic coding module. The filtering module preprocesses the raw data, which is followed by modeling the input data using two kinds of classification methods based on their categories (chat or manipulation data). This construct is primarily inspired by the design of <cit.>. In the automatic coding module, we present a formal formulation and detailed problem descriptions as follows. 
§.§ Problem Formulation In a collaborative problem solving activity, our goal is to predict a CPS subskill Y corresponding to a participant’s explicit behavior X at a certain time. §.§ Prompt-Based Coding Method for Chat Data We utilized a prompt-based approach to enable the pre-trained language model to automatically code the CPS chat data. This process is illustrated in Fig. <ref>. Specifically, for each piece of chat data, we first concatenated it with a manually defined template as follows, T(X)=[CLS] X, it is [MASK] [SEP] where T represents a modified vector embedding that incorporates the prompt (in this case, the prompt template is “it is [MASK]”); X corresponds to the embedding of the raw chat data; [CLS] and [SEP] denote the beginning and end markers of a sentence in the pre-trained language model, respectively; and [MASK] is the symbol of the position to be predicted by the model. After obtaining the templates, we used the pre-trained model to predict the probability of generating each word at the [MASK] position. To elaborate, we constructed a vocabulary W={w_1,w_2,w_3,…,w_n}, and the probability of generating the word w_t is, P(w_t) = predict(T(X), PLM, w_t) where PLM represents the pre-trained model and predict(·) denotes the use of the PLM to predict the probability that the [MASK] position takes the word w_t in the embedding of T(X). Thus, in this equation, P(w_t) represents the probability of each word w_t (i.e., a word in the vocabulary) being generated in the [MASK] position by the pre-trained model. After predicting the probability of each word, the model mapped the label words to the original label by calculating the total probability of the label words associated with that label. Specifically, if the sth label is associated with k words, then the probability of the final automatic coding of the sth label is, P(Y=s) = ∑_i=1^k P(w_i) where the prediction probability P(Y=s) is the sum of the probabilities of all label words associated with the sth label. Ultimately, the result of automatic coding Y corresponds to the label with the highest probability. Thus, the objective function can be defined as follows, Y = argmax_s P(Y=s) where argmax_s is used to find the argument that maximizes a given function. §.§ Rule-Based Coding Method for Manipulation Data We use the rule-based model for coding manipulation data. Because the action type is definite, we can code the manipulation data using a one-to-one mapping strategy involving the relevant CPS subskills. For example, actions such as "open Zoom" and "view board in Zoom" are coded as monitoring actions (CM). Another directly corresponding action is "perform calculator with XXX", which can be categorized as executing actions (CE). However, actions that involve changing the value of a resistor from value A to value B can be classified as either executing actions (CE) or exploring and understanding (CEU), depending on the group state. If a group already has a plan, the action is labeled as CE; if a group is in an exploring phase, the action is labeled as CEU. Overall, most manipulation data can be directly coded through one-to-one mapping, but some may also require coding techniques based on the specific problem solving stages of the groups during their tasks. § EXPERIMENT AND EVALUATION In this section, we present the results of three experiments aimed at demonstrating the advantages of the prompt-based learning pre-trained model in CPS behavioral data classification tasks, especially for cases with small sample sizes.
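As a concrete recap of the chat-coding step that these experiments evaluate, a minimal sketch is given below. It instantiates the template, masked-word probabilities, label-word aggregation, and argmax steps of Section 4.2; the checkpoint name, template wording, and label words are illustrative assumptions rather than the manually tuned configuration of this study.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative verbalizer: the label words below are hypothetical;
# the study defines its own manual template and label-word mapping.
LABEL_WORDS = {
    "SSI": ["sharing"],
    "CP":  ["planning"],
    "SMC": ["chatting"],
}

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()

def code_chat(chat: str) -> str:
    """Score the masked slot of the template and map label-word probabilities to a CPS label."""
    # Template T(X): "<chat>, it is <mask>"; the tokenizer supplies the model-specific mask token.
    text = f"{chat}, it is {tokenizer.mask_token}"
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits                       # (1, seq_len, vocab)
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    probs = logits[0, mask_pos].softmax(-1)                # P(w_t) over the vocabulary
    scores = {}
    for label, words in LABEL_WORDS.items():
        ids = [tokenizer.encode(" " + w, add_special_tokens=False)[0] for w in words]
        scores[label] = sum(probs[i].item() for i in ids)  # sum over the label's words
    return max(scores, key=scores.get)                     # argmax over labels

print(code_chat("we need 6.69V across our resistors"))
```

In practice, the label-word mapping would cover all eight chat-level subskills rather than the three shown here.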
The first experiment focuses on determining the most effective combination of prompt method and pre-trained model, that is, the combination with which the prompt-based learning pre-trained model achieves superior performance. The second experiment involves a comparative analysis, pitting our model against different classification models proposed in previous automatic CPS coding studies, including both machine learning and deep learning based models. The final experiment aims to verify the superiority of the prompt-based learning pre-trained model in tasks with small sample datasets. We evaluate the performance of the model using accuracy, the macro F1 score, and kappa. The formulas for these metrics are provided in equations (5)-(7) as follows, Accuracy = (TP+TN)/(TP+FP+TN+FN) where TP (true positives) denotes the number of correctly classified positive labels; TN (true negatives) denotes the number of correctly classified negative labels; FP (false positives) denotes the number of incorrectly classified positive labels; and FN (false negatives) denotes the number of incorrectly classified negative labels. Macro F1 score = (F1 score_class1 + F1 score_class2 + … + F1 score_classN)/N where N is the number of classes or categories in the classification problem. Kappa = (P_0-P_e)/(1-P_e) where P_0 = (TP+TN)/(TP+TN+FP+FN), and P_e = [(TP+FP)*(TP+FN)+(TN+FP)*(TN+FN)]/(TP+TN+FP+FN)^2. The former two metrics, accuracy and the macro F1 score, are commonly used in classification tasks. Given the imbalanced categories of our dataset, we used the macro F1 score to assess the performance of the model. Additionally, we employed kappa to measure the consistency between the results of the model’s coding and manual coding, following the guidelines outlined in <cit.>. A kappa value of 0.60 indicates acceptable consistency, 0.80 represents a relatively high level of consistency, and 0.90 suggests nearly perfect consistency <cit.>. §.§ Experiment 1: Comparison Among Different Prompt Methods In a prompt method, the selected pre-trained model is crucial to the performance of the resulting model. At the same time, for prompt methods using the same pre-trained model, different training strategies can also lead to significant performance differences. To achieve the best CPS classification performance, we compared four common pre-trained models, namely, BERT<cit.>, RoBERTa<cit.>, GPT-2<cit.>, and T5<cit.>. We designed four different training conditions and describe them as follows. Manual. All templates and mappings between the original labels and the label words in the vocabulary are manually defined. Trainable Verbalizer (TV). The mappings between the original labels and label words are determined through training, while the templates are manually defined. Trainable Template (TT). The templates are obtained through training, while the mappings between the original labels and label words are manually defined. Trainable Template and Verbalizer (TTV). Both the templates and mappings between the original labels and label words are obtained through training. Experimental Setup. We divide the dataset into a training set, a validation set, and a test set with proportions of 0.70, 0.15, and 0.15, respectively. The number of training epochs is set to 20, the learning rate is set to 1e-5, and the maximum sentence length is set to 16. For each model and training condition, we conducted multiple experiments by varying the seeds, which are set to (0, 1, 2). Our model is implemented in PyTorch and trained on an NVIDIA RTX 3090 GPU device.
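The three criteria of Eqs. (5)-(7) can be computed with standard library routines. The following minimal sketch assumes scikit-learn, which is not named in this study, and uses placeholder label lists; note that cohen_kappa_score generalizes the two-class counts of Eq. (7) to the multi-class case.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

# Placeholder labels; in practice y_true holds the human codes of the test set
# and y_pred holds the model's automatic codes.
y_true = ["SSI", "SESU", "CP", "SSI", "CMC", "SN"]
y_pred = ["SSI", "SSI",  "CP", "SSI", "CMC", "SN"]

acc   = accuracy_score(y_true, y_pred)             # Eq. (5)
mf1   = f1_score(y_true, y_pred, average="macro")  # Eq. (6): unweighted mean of per-class F1
kappa = cohen_kappa_score(y_true, y_pred)          # Eq. (7): model-vs-human consistency
print(f"accuracy={acc:.3f}  macro-F1={mf1:.3f}  kappa={kappa:.3f}")
```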
To effectively evaluate the performance of the model, we use the accuracy, macro F1 score, and kappa values achieved on the test set and calculate the average scores derived from different seeds. Results. Table <ref> summarizes the overall performance of different pre-trained models under various training conditions. From this table, we can observe that under the T5-manual condition, the model exhibits the best performance, achieving an accuracy of 0.802, a macro F1 score of 0.725, and a kappa value of 0.743, which indicates acceptable consistency with the manual results. Overall, using T5 as our pre-trained model with manually defined templates and mappings can yield the best classification results. §.§ Experiment 2: Comparison with Other Text Classification Models Next, we compare the performance of the prompt learning model with that of other text classification models. We select nine baseline text classification models based on previous studies concerning CPS automatic coding, as well as other commonly used text classification models. These baseline models can be classified into three categories, n-gram based methods, deep learning methods, and fine-tuning based methods. N-gram based methods. This class of methods uses an n-gram model to determine the frequencies of word groups and applies TF-IDF for feature engineering to provide input for downstream classification models. For the downstream classification models, we choose linear, K-nearest neighbors (KNN), and random forests (RF) classifiers to perform the final automatic coding task. Deep learning methods. This class of models uses deep learning methods to extract text features and achieves coding via linear neural networks. In the feature extraction stage, we choose the Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN) to process the text data. Fine-tuning based methods. Similar to the prompt-based coding method proposed in Section 4.3, this type of method also uses a pre-trained model, with the difference being that these methods directly use a linear neural network to perform automatic coding. Experimental setup. The general setup of the experiments remains the same as that described in Section 5.1 but with 85% of the total data as the training set. More setup details regarding the comparison experiments are as follows. N-gram based methods. In the n-gram based methods, we set n to 3 and the maximum number of features in TF-IDF to 10000. For the downstream classification models, the setups are as follows. 1) Linear uses a two-layer fully connected neural network, and the number of neurons is set to 300. 2) KNN calculates the distances between samples to complete the automatic coding task, and the K value is set to 5. 3) RF uses multiple weak classifiers (decision trees) for automatic encoding, and we set the number of weak classifiers to 10. Deep learning methods. In the deep learning methods, we set the maximum text length to 20. The feature extraction methods are set as follows. 1) GRU is set to be bidirectional, and the number of layers of hidden layers is set to 2. Each hidden layer has 256 neurons. 2) LSTM is set in the same way as the GRU, also with 2 hidden layers consisting of 256 neurons. 3) CNN is set to have 20 convolutional kernels possessing different sizes, with the sizes of the convolutional kernels ranging from 1 to 20. Fine-tuning based methods. We use BERT, RoBERTa, and T5 as our pre-trained models and set the maximum text length to 16. Results. 
Table <ref> summarizes the results produced by the prompt-based pre-trained method and the other comparative models, from which it can be seen that our model achieves the best performance concerning all three evaluation criteria. The accuracy is 0.804, the macro F1 score is 0.743, and the kappa value is 0.746. This comparison demonstrates that the proposed classification method based on deep learning is superior to the traditional machine learning methods. Moreover, the methods that use pre-trained models, including prompts and fine-tuning, achieve far better performance than other methods, with our proposed prompt-based model outperforming all other approaches. §.§ Experiment 3: Study on Small Training Sets In this section, we examine the performance achieved by the prompt model on small samples. We conducted a series of experiments, in which we randomly sampled a portion of the original training set for use as a new training set. Subsequently, we retrained all the models discussed in Section 5.2 using these new training sets and evaluated their performance on the same test set. We employ all three evaluation metrics, namely accuracy, the macro F1 score, and kappa, to comprehensively examine the performance of these models with different training set sizes. Experimental setup. We randomly sampled various percentages of the original training dataset to create new training sets. Specifically, we used the following percentages to demonstrate the results, 6%, 8%, 11%, 14%, 18%, 24%, 31%, 41%, 53%, 69%, and 85%. We then retrain all the models discussed in Section 5.2 using these various training set sizes. Subsequently, we tested these retrained models on the original test set and recorded their performance in terms of accuracy, macro F1 score, and kappa. The results of the experiment are shown in Fig. <ref>. Since the accuracy and macro F1 score are the same, we present them in one figure. Results. As shown in Fig. <ref>, overall, the prompt-based models (except for Prompt-T5) perform better on the small training samples than do the fine-tuning models. Prompt-RoBERTa and Prompt-Bert are the two best models. To reach the satisfactory predictions indicated by the horizontal red dotted line in the plots, the former model needs only approximately 6% of the original training set (except for the macro F1 score indicator, which needs 11% of the original training set to achieve a satisfactory result), while the latter model needs approximately 11% of the original training set. The Prompt-T5 model needs approximately 16% of the original training set to reach the metric targets. Although this model does not have an obvious advantage on small training sets, it can achieve similar accuracy, macro F1 score, and kappa values to those of the best model, Prompt-RoBERTa, when the training set proportion exceeds 18%. For the fine-tuning models, except for Finetune-RoBERTa, which can achieve satisfactory results in terms of the accuracy and kappa indicators with more than 11% of the original training set, Finetune-Bert and Finetune-T5 rely on larger training samples to achieve great results. Additionally, the experimental results demonstrate the significant advantages of pre-trained models. Methods that do not utilize pre-trained models (e.g., GRU or CNN), do not perform as well as the other approaches even when the entire training set is used. In addition, we find that the selected pre-trained model influences the prediction results obtained on small training sets. 
Specifically, RoBERTa is superior to BERT, and BERT is superior to T5. However, T5 performs better when using the entire training set. § CODING RESULTS ANALYSIS In addition to the accuracy, macro F1 score, and kappa indicators used to evaluate the performance of the automatic coding models, we also performed a confusion matrix analysis and an error analysis to observe the detailed prediction results yielded by the models for every CPS subskill. We employ the two best-performing models in Experiment 3, Prompt-RoBERTa and Prompt-BERT, on the small training set with 11% of the original training set as examples to demonstrate the analysis results. §.§ Class Confusion Analysis Fig. <ref> represents the confusion matrix heatmaps of the accuracies of the predictions produced by Prompt-RoBERTa (the figure on the left) and Prompt-BERT (the figure on the right) for the eight subskills relative to the actual labels when using 11% of the original training set. The Prompt-RoBERTa model attains the highest prediction accuracy (0.85) for the sharing information (SSI) subskill, while it has the lowest prediction accuracy (0.33) for the representing and formulating (CRF) subskill, which is consistent with the subskill frequency results (see Table <ref>). This model tends to confuse CRF with SSI (0.36) and to confuse monitoring chats (CMC) with SSI (0.23). The reason for this may be that representing and formulating (CRF), and monitoring (CMC) chats both involve communication related to tasks, thus requiring the problem or the roles of team members to be understood. Thus, the model may incorrectly regard them as sharing information (SSI). However, CRF and CMC belong to the cognitive dimension, whereas SSI belongs to the social dimension, which shows that the model makes an incorrect prediction in terms of dimensions. Thus, the model may be improved by first considering predicting the high-level dimensions, and then proceeding to more detailed predictions concerning the subskills. The next pair of frequently confused subskills includes maintaining communication (SMC) and SSI (0.16), which may result from the model having trouble determining whether the communication is related to the given task. Overall, the model achieves higher prediction accuracy for the subskills of the social dimension (with all accuracies greater than 0.55) than for the subskills of the cognitive dimension (with some accuracies lower than 0.50). The confusion matrix produced by the Prompt-BERT model is similar to that of the Prompt-RoBERTa model. The differences are mainly displayed in the predictions yield for the negotiating (SN), representing and formulating (CRF), and executing chats (CEC) subskills. Specifically, the Prompt-BERT model has lower prediction accuracy than the Prompt-RoBERTa model. In conclusion, the prediction results obtained by the two models on the small training set with 11% of the data are both satisfactory. In addition, the results show that the overall prediction accuracy may be improved by improving the prediction accuracies attained for the subskills related to cognition. §.§ Error Analysis We perform an error analysis to illustrate the errors induced by the Prompt-RoBERTa and Prompt-BERT models. The following lists some cases. First, we show examples that are incorrectly predicted by the Prompt-RoBERTa model. The chat message of “click the leads next to the butt plug looking thing” concerns planning (CP), but the model considers it as executing chats (CEC). 
Planning generally refers to tasks to be done that have not yet occurred, while executing usually refers to the implementation of a plan. Thus, there is an obvious difference between the timings of these behaviors. However, the model does not detect such a difference, leading to incorrect predictions. Another frequent error occurs between representing and formulating (CRF) and sharing information (SSI). Take the chat message “if mine is [number], the other two r's should sum up to [number]” as an example. The chat message is labeled as CRF, but the model incorrectly classifies it as SSI. The word “if” represents a conditional assumption, i.e., an inference made in a certain situation, so this sentence involves representing and formulating, instead of sharing information, which involves sharing something based on existing knowledge. The model may lack the ability to capture keywords for classification purposes. Second, we show the erroneous cases predicted by the Prompt-BERT model. For instance, the chat message “I’m on the mark”, is incorrectly predicted as sharing information (SSI) instead of the true label monitoring chats (CMC). On the one hand, this sentence does not include useful information about the task and only aims to inform team members about the progress of their missions. This shows that the model may not accurately capture the information presented in certain cases. On the other hand, the model cannot effectively distinguish between the cognitive and social dimensions. The model regards the chat message of “lucky guess”, as establishing shared understanding (SESU), but the true label is maintaining communication (SMC). This message aims to express the joy of making a correct guess and does not include useful information. Thus, the message should be encoded as SMC. This example again shows that the model cannot grasp the sophisticated information and message conveyed by the sentence, resulting in an inappropriate prediction. In conclusion, when the model performs encoding, it cannot take full advantage of the information contained in special words, e.g., conditional words and tense words, to help make more accurate predictions, which may lead to deviations in sentence classification tasks. § DISCUSSION Individuals with different skills and knowledge are increasingly required to work on a team to collaboratively solve complex problems <cit.>. To better understand and analyze behavioral patterns observed in collaborative activities, recent studies have designed simulated CPS activities to collect process data during tasks. Given the unstructured characteristics of process data, it is necessary to encode and transform them into structured data. However, the existing automatic coding models for CPS process data have relatively low accuracies and significant dependencies on training sets with large numbers of samples. To address this problem, this study proposes an automatic coding model based on prompt learning to label process data, with a primary focus on chat data. In this section, we summarize our main findings from the three conducted experiments and discuss the applications and limitations of this study. §.§ Main Findings Experiment 1 explores the influences of prompt generation strategies and pre-trained models on the resulting classification performance. We find that manually generating prompts and using T5 as the pre-trained model can achieve the best classification results. 
As noted by <cit.>, the classification results may be biased due to the selected prompt and label word forms. Thus, it is suggested that when using prompt-based classification models, the prompt method selection task should be considered. In our study, we compare different prompt methods and find that both the prompt templates and the mappings between the original words and label words display better performance when using the manual design approach. This reflects the uniqueness and complexity of the task of coding CPS process data, making it difficult to obtain appropriate prompt methods only through training. Instead, it is necessary to design suitable prompts according to the activities of the participants in CPS tasks to achieve improved classification performance. Additionally, in our study, we consider different pre-trained models, including BERT and its RoBERTa variant, as well as a transformer-based model (T5), and find that T5 achieves the best performance. This may be because the T5 model has a larger number of model parameters and utilizes a superior relative positional encoding approach instead of the original absolute positional encoding approach. Experiment 2 compares our prompt-based learning pre-trained model with nine other widely used classification models. All the models can be divided into four types, namely, n-gram-based methods, deep learning methods, fine-tuning methods, and prompt methods, which share the common feature of considering the task-specific information derived from words in speech <cit.>. Overall, we find that across the three evaluation indicators, the performance rankings of the different models are as follows, prompt > fine-tuning > deep learning > n-gram. Nevertheless, compared to non-pre-trained model approaches, methods that use pre-trained models perform better because the pretraining task implemented on large-scale data allows them to learn richer and more comprehensive language representations. Furthermore, compared to fine-tuning methods, prompt methods can offer guidance information to the model, helping it better understand the task and domain, ultimately leading to enhanced performance. Experiment 3 tests the superiority of the prompt-based learning pre-trained model on small training sets. Specifically, based on Experiment 2, we explored the classification results produced by different models with different training set sizes. These results are consistent with our hypothesis, as shown in Fig. <ref>. Prompt-based learning models, such as Prompt-RoBERTa and Prompt-BERT, have the best classification performance. With 85% of the total data as the training set, only 11% of these data (9.35% of the total data) are needed to achieve satisfactory results. In other words, it is possible to achieve a satisfactory automatic coding effect for a new dataset with only approximately 10% of its data manually coded and used as the training set, which can alleviate the problem of data scarcity <cit.>. Interestingly, we also find that the pre-trained models based on RoBERTa and BERT perform better than T5 on small training datasets, and the latter requires approximately 16% of the original training set to reach acceptable performance. However, T5 has an advantage when trained on the whole training set. §.§ Applications The main findings of its work have practical implications. First, performing automatic coding based on a language model can reduce the time and human resources required for manual coding. 
This strategy relies on only a portion of the manually coded data to learn mapping patterns through training, enabling it to automatically code the remaining data. This method can assist teachers in monitoring group behaviors during CPS activities, enabling them to provide instant support to groups. Additionally, it can help teachers identify students' strengths and weaknesses in cooperative tasks, allowing them to take appropriate actions to improve students' CPS competence levels <cit.>. Second, when compared with other models developed in previous research <cit.>, our model demonstrates higher classification accuracy and has lower requirements regarding the scale of the input labeled training data. Models based on prompts make pre-trained models directly adaptable to downstream tasks <cit.>. This contextual learning paradigm, allowing a model to learn by providing it with hints, proves to be more effective. This may be the reason why prompt-based learning models outperform the other classification models (e.g., KNN, LSTM, and GRU). <cit.> noted that improving the accuracy of a classifier without increasing the amount of input training data is challenging. Generally, the larger the training dataset is, the better the classification results. However, for CPS tasks, obtaining a large amount of training data is time-consuming and laborious. Our proposed model can achieve satisfactory results with few-shot learning to address this problem. §.§ Limitations As with all other research, our study has some limitations. The first limitation is that the imposed data preprocessing requirement is relatively high. In different CPS tasks, the task-related words or symbols should be processed differently. Additionally, abbreviations and colloquial words appearing in chat data should also be processed. Due to the diversity of CPS tasks and participants' behaviors, no uniform method is available for preprocessing the task-specific data. Such a method should adapt to the characteristics of the data and model inputs. Another limitation is that our current model considers utterances independently, without accounting for the connections between sentences. Analyzing context can help us more precisely understand the intention of the current utterance, and the mappings between an utterance and the CPS subskills can then be determined more accurately. Finally, the generalizability of the proposed model is not tested in the current study. We only verify the effectiveness of the prompt model on one dataset collected from three-resistor CPS tasks. Therefore, whether the presented findings can be generalized to other CPS tasks and coding scenarios with different CPS frameworks needs to be explored, and this issue will be addressed in our future studies. § CONCLUSION This study aims to design an automatic coding model for classifying CPS subskills based on logging data that records participants' explicit behaviors. To achieve this goal, we construct a prompt-based learning pre-trained model and conduct three experiments to verify its superiority. Experiment 1 compares different prompt generation methods and pre-trained models to determine the combination that achieves the best performance. Experiment 2 compares our model with other classification models, while Experiment 3 assesses the performance attained by different models on various small training sets. 
The results show that our model not only has the highest accuracy, macro F1 score, and kappa values on large training sets but also performs the best on small training sets. Overall, this study demonstrates the effectiveness of the developed prompt-based learning pre-trained model in CPS subskill classification tasks involving low-resource datasets. In the future, we plan to modify our model to implement context-based classification. Additionally, we will test the generalizability of the model to different datasets. We hope that such encoding methods can be extended to more general research fields, such as other human-computer interactions that are commonly studied in the CSCW community, to code text, audio, and video streams.
http://arxiv.org/abs/2407.12531v1
20240717131210
The Circular Bragg Phenomenon Updated
[ "Akhlesh Lakhtakia" ]
physics.optics
[ "physics.optics" ]
The Circular Bragg Phenomenon Updated Akhlesh Lakhtakia Department of Engineering Science and Mechanics, The Pennsylvania State University, University Park, PA, USA School of Mathematics, University of Edinburgh, Edinburgh EH9 3FD, UK akhlesh@psu.edu The circular Bragg phenomenon is the circular-polarization-state-selective reflection of light in a spectral regime called the circular Bragg regime. In continuation of an expository review on this phenomenon published in 2014, an album of theoretical results is provided in this decadal update. Spectral variations of intensity-dependent and phase-dependent observable quantities in the reflection and transmission half-spaces are provided in relation to the polarization state and the direction of propagation of the incident plane wave. Keywords: circular Bragg phenomenon, circular dichroism, circular polarization state, circular reflectance, circular transmittance, ellipticity, geometric phase, linear dichroism, linear reflectance, linear transmittance, optical rotation, Poincaré spinor, structurally chiral material § INTRODUCTION The circular Bragg phenomenon is the circular-polarization-state-selective reflection of plane waves in a spectral regime called the circular Bragg regime that depends on the direction of incidence <cit.>. This phenomenon is displayed by structurally chiral materials (SCMs) exemplified by chiral liquid crystals <cit.> and chiral sculptured thin films <cit.>. These linear materials are periodically nonhomogeneous along a fixed axis, their constitutive dyadics rotating either clockwise or counterclockwise at a fixed rate about that axis <cit.>. The reflection is very high when left-circularly polarized (LCP) light is incident on a left-handed SCM which is periodically nonhomogeneous along the thickness direction, provided that (i) the direction of incidence is not too oblique with respect to the thickness direction, (ii) the free-space wavelength lies in the circular Bragg regime, and (iii) the number of periods in the SCM is sufficiently large; however, when right-circularly polarized (RCP) light is incident on a left-handed SCM, the reflectance is very low in the circular Bragg regime <cit.>. An analogous statement holds for right-handed SCMs. The circular Bragg phenomenon is resilient against structural disorder <cit.> and the tilt of the axis of periodicity <cit.>. An expository and detailed review of the literature on the circular Bragg phenomenon was published in 2014 <cit.>, to which the interested reader is referred. During the subsequent decade, several novel results—both experimental <cit.> and theoretical <cit.>—on the plane-wave response of SCMs have emerged, which prompted me to compile this album of theoretical numerical results on the plane-wave response of SCMs. Of course, no novelty can be claimed for these results, but this album is expected to guide relevant research for the next decade. This chapter is organized as follows. Section <ref> provides the essentials of the boundary-value problem underlying the plane-wave response of an SCM of finite thickness. Section <ref> provides illustrative results on the spectral variations of intensity-dependent observable quantities and Sec. <ref> is focused similarly on phase-dependent observable quantities, in the reflection half-space as well as in the transmission half-space, in relation to the polarization state and the direction of propagation of the incident plane wave. An exp(-i ω t) dependence on time t is implicit, where ω is the angular frequency and i=√(-1).
With and , respectively, denoting the permittivity and permeability of free space, the free-space wavenumber is denoted by = ω√(), and =2π/ is the free-space wavelength. The Cartesian coordinate system (x,y,z) is adopted. Vectors are in boldface and unit vectors are additionally decorated by a caret on top. Dyadics <cit.> are double underlined. Column vectors are underlined and enclosed in square brackets. The asterisk (^∗) denotes the complex conjugate and the dagger (^†) denotes the conjugate transpose. § BOUNDARY-VALUE PROBLEM The half-space z < 0 is the region of incidence and reflection, while the half-space z > L is the region of transmission. The region 0<z<L is occupied by a SCM. §.§ Relative permittivity dyadic of SCM The relative permittivity dyadic of the SCM is given by <cit.> _ rel(z)= S̄_ z(h,Ω,z) ̄̇S_ y(χ)[+ +] ̄̇S_ y^-1(χ) ̄̇S_ z^-1(h,Ω,z) , z∈(0,L) . The frequency-dependent relative permittivity scalars , , and embody local orthorhombicity <cit.>. The tilt dyadic S̄_ y (χ) = + (+)cosχ + (-)sinχ contains χ∈[0,π/2] as an angle of inclination with respect to the xy plane. The structural handedness of the SCM captured by the rotation dyadic S̄_z(h,Ω,z)= +(+)cos(hπ z/Ω) +(-)sin(hπ z/Ω) , where 2Ω is the structural period in the thickness direction (i.e., along the z axis), whereas h ∈{-1,1} is the structural-handedness parameter, with h=-1 for structural left-handedness and h=1 for structural right-handedness. The foregoing equations apply to chiral sculptured thin films <cit.> and chiral smectic liquid crystals <cit.>, with and χ>0. They also apply to cholesteric liquid crystals with = and χ=0 <cit.> and heliconical cholesteric liquid crystals <cit.> with = and χ∈(0,90). §.§ Incident, reflected, and transmitted plane waves A plane wave, propagating in the half-space z < 0 at an angle ∈[0,π/2) with respect to the z axis and at an angle ψ∈[0,2π) with respect to the x axis in the xy plane, is incident on the SCM of thickness L. The electric field phasor associated with the incident plane wave is represented as <cit.> = ≤ + exp iκ≤ xcosψ + ysinψexp(i zcos) = ≤ i - /√(2) - ≤ i +/√(2) exp iκ≤ xcosψ + ysinψ ×exp(i zcos) , z < 0 , where .[ κ = sin; =-sinψ + cosψ; #p_±=∓≤cosψ + sinψcos + sin ]} . The amplitudes of the perpendicular- and parallel-polarized components, respectively, are denoted by and in Eq. (<ref>). The amplitudes of the LCP and the RCP components of the incident plane wave are denoted by and , respectively, in Eq. (<ref>). The electric field phasor of the reflected plane wave is expressed as = ≤+exp iκ≤ xcosψ + ysinψexp(-i zcos) = - ≤ i - /√(2) - ≤ i + /√(2) exp iκ≤ xcosψ + ysinψ × exp(-i zcos) , z < 0 , and the electric field phasor of the transmitted plane wave is represented as = ≤ +exp iκ≤ xcosψ + ysinψexpi (z-L)cos = ≤ i - /√(2) - ≤ i +/√(2) exp iκ≤ xcosψ + ysinψ × expi (z-L)cos , z > L . Linear reflection amplitudes are denoted by and in Eq. (<ref>), whereas the circular reflection amplitudes are denoted by and in Eq. (<ref>). Similarly, and are the linear transmission amplitudes in Eq. (<ref>), whereas and are the circular transmission amplitudes in Eq. (<ref>). §.§ Reflection and transmission coefficients The reflection amplitudes and as well as the transmission amplitudes and (equivalently, , , , and ) are unknown. A boundary-value problem must be solved in order to determine these amplitudes in terms of and (equivalently, and ). Several numerical techniques exist to solve this problem <cit.>. 
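As a pointwise numerical illustration of the constitutive model just described, the following minimal numpy sketch evaluates the relative permittivity dyadic of an SCM at a given depth z. The 3×3 matrix forms of the tilt and rotation dyadics, and the assignment of the scalars ε_a, ε_b, and ε_c to the z, x, and y directions in the diagonal reference dyadic, follow the usual convention in the chiral-STF literature and are stated here as assumptions rather than as a transcription of the equations above; the scalar permittivity values are placeholders for one wavelength. Such a routine supplies exactly the input needed by the piecewise-uniform treatment described next.

```python
import numpy as np

def S_y(chi):
    """Tilt dyadic S_y(chi): a rotation about the y axis by the angle chi."""
    c, s = np.cos(chi), np.sin(chi)
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

def S_z(h, Omega, z):
    """Rotation dyadic S_z(h, Omega, z): a rotation about the z axis by h*pi*z/Omega."""
    a = h * np.pi * z / Omega
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def eps_rel(z, eps_a, eps_b, eps_c, h, Omega, chi):
    """Relative permittivity dyadic of the SCM at depth z.
    The diagonal reference dyadic is assumed to carry eps_b, eps_c, eps_a
    along x, y, z (the usual chiral-STF convention)."""
    ref = np.diag([eps_b, eps_c, eps_a])
    Sz, Sy = S_z(h, Omega, z), S_y(chi)
    # The tilt and rotation dyadics are orthogonal, so their inverses are transposes.
    return Sz @ Sy @ ref @ Sy.T @ Sz.T

# Placeholder scalar permittivities (complex, weakly dissipative) at one wavelength.
eps_a, eps_b, eps_c = 2.6 + 0.01j, 3.2 + 0.01j, 2.5 + 0.01j
print(eps_rel(z=75e-9, eps_a=eps_a, eps_b=eps_b, eps_c=eps_c,
              h=1, Omega=150e-9, chi=np.radians(37)))
```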
The most straightforward technique requires the use of the piecewise uniform approximation of _ rel(z) followed by application of the 4×4 transfer-matrix method <cit.>. The interested reader is referred to Ref.  for a detailed description of this technique. Interest generally lies in determining the reflection and transmission coefficients entering the 2×2 matrixes on the left side in each of the following four relations <cit.>: [ ; ] = [ ; ] [ ; ] , [ ; ] = [ ; ] [ ; ] , [ ; ] = [ ; ] [ ; ] , and [ ; ] = [ ; ] [ ; ] , These coefficients are doubly subscripted: those with both subscripts identical refer to co-polarized, while those with two different subscripts denote cross-polarized, reflection or transmission. Clearly from Eqs. (<ref>)–(<ref>), the coefficients defined via Eqs. (<ref>) and (<ref>) are simply related to those defined via Eqs. (<ref>) and (<ref>). §.§ Parameters chosen for calculations An album of numerical results is presented in the remainder of this chapter, with the frequency-dependent constitutive parameters _ a,b,c() = 1+ p_ a,b,c/1 + (N_ a,b,c^-1 - i^-1λ_ a,b,c )^2 chosen to be single-resonance Lorentzian functions <cit.>, this choice being consistent with the requirement of causality <cit.>. The oscillator strengths are determined by the values of p_ℓ, λ_ℓ (1 + N_ℓ^-2)^-1/2 are the resonance wavelengths, and λ_ℓ/N_ℓ are the resonance linewidths, ℓ∈{ a, b, c}. Values of the parameters used for all theoretical results reported in this chapter are as follows: p_ a = 2.3, p_ b =3.0, p_ c =2.2, λ_ a = λ_ c =260 nm, λ_ b = 270 nm, and N_ a = N_ b =N_ c=130. Furthermore, χ = 37, L=30Ω, and Ω = 150 nm. The album comprising Figs. <ref>–<ref> contains 2D plots of the theoretically calculated spectral variations of diverse observable quantities in the reflection and transmission half-spaces for either * ∈ [0,90) and ψ= 0 or * =0 and ψ∈[ 0,360). These plots are provided for both h=1 and h=-1 to facilitate easy comparison of the effect of structural handedness. § INTENSITY-DEPENDENT QUANTITIES §.§ Circular remittances The square of the magnitude of a circular reflection or transmission coefficient is the corresponding circular reflectance or transmittance; thus, = ||^2∈[0,1] is the circular reflectance corresponding to the circular reflection coefficient , = ||^2∈[0,1] is the circular transmittance corresponding to the circular transmission coefficient , and so on. The total circular reflectances are given by . [ = +∈[0,1]; =+∈[0,1] ] and the total circular transmittances by . [ = +∈[0,1]; =+∈[0,1] ] . As the principle of conservation of energy must be satisfied by the presented formalism, the inequalities <cit.> R_ℓ+T_ℓ≤1 , ℓ∈ L,R . hold, with the equalities relevant only if the SCM is non-dissipative at a particular frequency of interest. The Bragg phenomenon was discovered as a reflection phenomenon, so that its chief signature comprises high-reflectance spectral regimes <cit.>. The same is true of the circular Bragg phenomenon, which has been confirmed by time-domain simulations <cit.>. In addition, the circular Bragg phenomenon is best manifested as the circular-polarization-state-selective reflection of light. Therefore it is best to begin the album with the spectral variations of the circular reflectances R_μν(,,ψ), μ∈{ L,R} and ν∈{ L,R}. These are presented in Fig. <ref> for h=±1. Note the presence of a high-reflectance ridge in the plots of for h=1 and in the plots of for h=-1 in Fig. <ref>. 
For fixed ψ, the high-reflectance ridge curves towards shorter wavelengths as increases, which has been experimentally verified <cit.>. For fixed , the high-reflectance ridge is more or less invariant with respect to ψ. The ridge is absent in the plots of for h=1 and in the plots of for h=-1; however, the ridge is vestigially present in the plots of both cross-polarized reflectances. The fraction of the power density of the incident plane wave that is not reflected is either transmitted into the half-space z>L or absorbed in the SCM (0<z<L). Since (_ℓ)>0, ℓ∈ a,b,c, there is some absorption <cit.>. Accordingly, in Fig. <ref>, the circular Bragg phenomenon is manifested as a low-transmittance trough in the plots of for h=1 and in the plots of for h=-1, that trough being absent in the plots of for h=1 and in the plots of for h=-1. Vestigial presence of the trough in the plots of and for h=±1 should also be noted. §.§ Linear remittances The square of the magnitude of a linear reflection or transmission coefficient is the corresponding linear reflectance or transmittance; thus, = ||^2∈[0,1] is the linear reflectance corresponding to the linear reflection coefficient and = ||^2∈[0,1] is the linear transmittance corresponding to the linear transmission coefficient , etc. The total linear reflectances are given by . [ = +∈[0,1]; =+∈[0,1] ] and the total linear transmittances by . [ = +∈[0,1]; =+∈[0,1] ] . The inequalities (<ref>) still hold with ℓ∈ s,p and convert to equalities only if the SCM is non-dissipative at a particular frequency of interest. Linear reflectances can be written in terms of circular reflectances <cit.>. Therefore, the circular Bragg regime is evident in the spectral variations of both co-polarized linear reflectances as a medium-reflectance ridge and in the spectral variations of both cross-polarized reflectances as a low-reflectance ridge <cit.>, in Fig. <ref> for both h=1 and h=-1. For fixed ψ, the ridge curves towards shorter wavelengths as increases. For fixed , the low-reflectance ridge in the plots of and is more or less invariant with respect to ψ; but the medium-reflectance ridge in the plots of and has two periods of undulations. Linear transmittances can be written in terms of circular transmittances <cit.>. The spectral variations of both co-polarized linear transmittances exhibit a medium-transmittance trough and both cross-polarized linear transmittances show a low-reflectance ridge <cit.>, in Fig. <ref> for both h=1 and h=-1. Indicative of the circular Bragg phenomenon, these features curve towards shorter wavelengths as increases when ψ is held fixed. For fixed but variable ψ, the low-transmittance ridge in the plots of and and the medium-transmittance trough in the plots of and have two periods of undulations. §.§ Circular and linear dichroisms With . [ = 1- (+)∈[0,1]; [5pt] = 1- ( +) ∈[0,1] ] as the circular absorptances, = √() - √()∈[-1,1] is the true circular dichroism which quantitates the circular-polarization-dependence of absorption. The apparent circular dichroism = √() - √()∈[-1,1] is a measure of the circular-polarization-state-dependence of transmission <cit.>. Whereas may not equal zero for a non-dissipative SCM, must be. Figure <ref> contains plots of the spectral variations of both and in relation to the direction of plane-wave incidence. The circular Bragg phenomenon is evident as a trough in all plots of and for h=1, and as a ridge in all plots of and for h=-1. Furthermore, the quantities h and h are invariant if the sign of h is changed. 
These features curve towards shorter wavelengths as increases while ψ is held fixed, as has been experimentally verified <cit.>. For normal incidence (i.e., =0), the effect of ψ is minimal. Similarly to the circular absorptances, . [ = 1- ( +)∈[0,1]; [5pt] = 1- ( + ) ∈[0,1] ] are the linear absorptances. The true linear dichroism is defined as <cit.> =√()-√()∈[-1,1] and the apparent linear dichroism as =√()-√()∈[-1,1] . Whereas ≡ 0 for a non-dissipative SCM, may not be null valued. Figure <ref> also contains plots of the spectral variations of both and in relation to the direction of plane-wave incidence. The circular Bragg phenomenon is featured in all plots. For fixed ψ, the feature curves towards shorter wavelengths as increases, which has been experimentally verified <cit.>. For normal incidence, the feature has two undulations with increasing ψ, and the replacement h → -h affects both and non-trivially. § PHASE-DEPENDENT QUANTITIES §.§ Ellipticity and optical rotation The most general plane wave in free space is elliptically polarized <cit.>. Signed ellipticity functions . [ = - 2 ( ^* )/|| ^2 + || ^2; [10pt] = - 2 ( ^* )/|| ^2 + || ^2; [10pt] = - 2 ( ^* )/|| ^2 + || ^2 ] characterize the shapes of the vibration ellipses of the incident, reflected, and transmitted plane waves. Note that EF^ℓ∈[-1,1], ℓ∈ inc, ref, tr. The magnitude of EF^ℓ is the ellipticity of the plane wave labelled ℓ. The vibration ellipse simplifies to a circle when | EF^ℓ| = 1 (circular polarization state), and it degenerates into a straight line when EF^ℓ = 0 (linear polarization state). The plane wave is left-handed for EF^ℓ > 0 and right-handed for EF^ℓ < 0. The major axes of the vibration ellipses of the incident and the reflected/transmitted plane wave may not coincide, the angular offset between the two major axes known as optical rotation. The auxiliary vectors <cit.> . [ #F^ inc = 1+ 1 - ()^2^1/2 ( +) + ( -); [8pt] #F^ ref = 1+ 1 - ()^2^1/2 ( +) + ( -); [8pt] #F^ tr = 1+ 1 - ()^2^1/2 ( +) + ( -) ] are parallel to the major axes of the respective vibration ellipses. Therefrom, the angles τ^ inc, τ^ ref, and τ^ tr are calculated using the following expressions: .[ cosτ^ℓ = ( #F^ℓ)/ |#F^ℓ| , ℓ∈ inc, ref, tr; sinτ^ℓ = ( #F^ℓ)/ |#F^ℓ| , ℓ∈ inc, tr; sinτ^ ref = ( #F^ ref)/ |#F^ ref| ]} . The optical rotation of the reflected/transmitted plane wave then is the angle OR^ℓ = [ τ^ℓ-τ^ inc+π , if -π≤τ^ℓ-τ^ inc≤ -π/2 ,; τ^ℓ-τ^ inc , if |τ^ℓ-τ^ inc| < π/2 ,; τ^ℓ-τ^ inc-π , if π/2≤τ^ℓ-τ^ inc≤π , ]. , ℓ∈ ref, tr . The ellipticity function of the reflected/transmitted plane wave is denoted by EF^ℓ_ s and EF^ℓ_ p, respectively, and the optical rotation of the reflected/transmitted plane wave is denoted by OR^ℓ_ s and OR^ℓ_ p, respectively, for incident perpendicular-polarized and parallel-polarized plane waves, ℓ∈ ref, tr. Figure <ref> provides the spectral variations of EF^ ref_ s,p and OR^ ref_ s,p, and Fig. <ref> the spectral variations of EF^ tra_ s,p and OR^ tra_ s,p. A feature representing the circular Bragg phenomenon is clearly evident in all 32 plots in the two figures. For fixed ψ, the feature curves towards shorter wavelengths as increases. For normal incidence, the feature has two undulations with increasing ψ. Although measurements of optical rotation and ellipticity of the transmitted plane wave for normal incidence have been reported for over a century <cit.>, comprehensive experimental investigations for oblique incidence are very desirable in the near future. 
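A minimal numerical sketch of these two phase-dependent quantities may also be useful. It computes the ellipticity and the optical rotation of a transmitted plane wave from hypothetical complex s- and p-amplitudes via the standard Stokes-parameter route; the amplitude values are illustrative and the sign conventions follow a common textbook choice, which should be reconciled with the auxiliary-vector formulation given above.

import numpy as np

def ellipse_params(a_s, a_p):
    # Stokes parameters in one common convention; the chapter's own sign
    # conventions (via the auxiliary vectors above) may differ.
    s0 = abs(a_s) ** 2 + abs(a_p) ** 2
    s1 = abs(a_s) ** 2 - abs(a_p) ** 2
    s2 = 2.0 * np.real(np.conj(a_s) * a_p)
    s3 = 2.0 * np.imag(np.conj(a_s) * a_p)
    azimuth = 0.5 * np.arctan2(s2, s1)            # major-axis orientation (mod pi)
    ellipticity_angle = 0.5 * np.arcsin(s3 / s0)  # 0: linear, +/-pi/4: circular
    return azimuth, ellipticity_angle

# Hypothetical amplitudes: s-polarized incidence, elliptically polarized transmission.
inc_s, inc_p = 1.0 + 0.0j, 0.0 + 0.0j
tr_s, tr_p = 0.62 * np.exp(1j * 0.30), 0.18 * np.exp(1j * 1.10)

az_inc, _ = ellipse_params(inc_s, inc_p)
az_tr, chi_tr = ellipse_params(tr_s, tr_p)

# Optical rotation: azimuth offset wrapped to lie within +/- pi/2, as in the text.
optical_rotation = (az_tr - az_inc + np.pi / 2) % np.pi - np.pi / 2
signed_ellipticity = np.sin(2.0 * chi_tr)   # analogue of the EF functions above
print(np.degrees(optical_rotation), signed_ellipticity)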
§.§ Geometric phases of reflected and transmitted plane waves The Stokes parameters of the incident plane wave are given by <cit.> .[ = ||^2+||^2 =||^2+||^2; =2 ( ^∗)=||^2-||^2; =2 ( ^∗)=2 ( ^∗); = ||^2-||^2=2 ( ^∗) ]} . The Poincaré spinor of the incident plane wave can then be obtained from Eqs. (<ref>) and (<ref>) in the Appendix. After making the changes →, →, →, →, Eqs. (<ref>) can be used to determine the Stokes parameters , , , and of the reflected plane wave, and the Poincaré spinor of the reflected plane wave can then be obtained from Eqs. (<ref>) and (<ref>). Calculation of the Poincaré spinor of the transmitted plane wave follows the same route. Thereafter, the reflection-mode geometric phase _ℓ and the transmission-mode geometric phase _ℓ, ℓ∈{s,p,R,L}, can be calculated with respect to the incident plane wave using Eq. (<ref>) in available in the Appendix. The subscript ℓ in both quantities indicates the polarization state of the incident plane wave: perpendicular (s), parallel (p), left-circular (L), or right-circular (R). Note that =≡0 because of the structure of for an incident RCP plane wave <cit.>. The other six geometric phases _ℓ and _ℓ, ℓ∈{s,p,L}, are generally non-zero in Figs. <ref>–<ref>; furthermore, their spectral dependencies have some resemblance to those of the corresponding total remittances defined in Eqs. (<ref>), (<ref>), (<ref>), and (<ref>). Indeed, a feature representing the circular Bragg phenomenon is clearly evident in the plots of _ℓ and _ℓ, ℓ∈{s,p,L}. The feature curves towards shorter wavelengths as increases while ψ is fixed, and the feature has two undulations with increasing ψ for normal incidence. Although the geometric phase of the transmitted plane wave has been measured for normal incidence on a chiral sculptured thin film <cit.>, that was done only at a single value of , that too in the long-wavelength neighborhood of the circular Bragg regime. Hopefully, experimental verification of the features evident in Figs. <ref>–<ref> will be carried out soon and the role of structural handedness clarified. § FINAL REMARK Measurement of intensity-dependent observable quantities such as reflectances and transmittances was instrumental in the identification of the circular Bragg phenomenon <cit.> and continues to be undertaken <cit.>. However, measurement of phase-dependent quantities, especially for oblique incidence, has been identified in this update as an arena for comprehensive research in the near future. § APPENDIX: POINCARÉ SPINOR AND GEOMETRIC PHASE Any uniform plane wave propagating in free space can be represented as a point on the surface of the Poincaré sphere s_1^2+s_2^2+s_3^2=s_0^2, where s_0, s_1, s_2, and s_3 are the four Stokes parameters. The plane wave's location is identified by the longitude α∈[0,2π) and the latitude β∈[-π/2,π/2] defined through the relations .[ s_1=s_0 cosβ cosα; s_2=s_0 cosβ sinα; s_3=s_0 sinβ ]} . The angles α and β appear in the Poincaré spinor =[ cos(π/4-β/2); sin(π/4-β/2)exp(iα) ] . With respect to a plane wave labeled “1", the geometric phase of a plane wave labeled “2" is defined as the angle Φ_21= Argϕ_1^†·ϕ_2 . § ACKNOWLEDGMENTS This chapter is in appreciation of the national cricket team of U.S.A. that acquitted itself very well during the T20 World Cup tournament held jointly in U.S.A. and West Indies in 2024. The author thanks the Charles Godfrey Binder Endowment at Penn State for supporting his research from 2006 to 2024. 999 FLaop Faryad, M., Lakhtakia, A.: The circular Bragg phenomenon. 
Advances in Optics and Photonics 6, 225–292 (2014). LVaeu Lakhtakia, A., Venugopal, V.C.: On Bragg reflection by helicoidal bianisotropic mediums. Archiv für Elektronik und Übertragungstechnik 53, 287–290 (1999). Chan Chandrasekhar, S.: Liquid Crystals, 2nd edn. Cambridge University Press, Cambridge, United Kingdom (1993). deG De Gennes, P. G., Prost, J. A.: The Physics of Liquid Crystals, 2nd edn. Oxford University Press, Oxford, United Kingdom (1993). Nityananda Nityananda, R.: On the theory of light propagation in cholesteric liquid crystals. Molecular Crystals and Liquid Crystals 21, 315–331 (1973). Parodi Parodi, O.: Light propagation along the helical axis in chiral smectics C," Journal de Physique Colloques 36, C1-325–C1-326 (1975). Garoff Garoff, S., Meyer, R.B., Barakat, R.: Kinematic and dynamic light scattering from the periodic structure of a chiral smectic C liquid crystal. Journal of the Optical Society of America 68, 1217–1225 (1978). Abdulhalim Abdulhalim, I., Benguigui, L., Weil, R.: Selective reflection by helicoidal liquid crystals. Results of an exact calculation using the 4×4 characteristic matrix method. Journal de Physique (Paris) 46, 815–825 (1985). Xiang Xiang, J., Li, Y., Li, Q., Paterson, D.A., Storey, J.M.D., Imrie, C.T., Lavrentovich, O.D.: Electrically tunable selective reflection of light from ultraviolet to visible and infrared by heliconical cholesterics. Advanced Materials 27, 3014–3018 (2015). STFbook Lakhtakia, A., Messier, R.: Sculptured Thin Films: Nanoengineered Morphology and Optics. SPIE, Bellingham, WA, USA (2005); Chap. 9. McAtee2018 McAtee, P.D., Lakhtakia, A.: Experimental and theoretical investigation of the co-occurrence of linear and circular dichroisms for oblique incidence of light on chiral sculptured thin films. Journal of the Optical Society of America A 35, 1131–1139 (2018). Replace “-(z-L)" by “+(z-L)" in Eq. (8b). Erten2015 Erten, S., Lakhtakia, A., Barber, G.D.: Experimental investigation of circular Bragg phenomenon for oblique incidence. Journal of the Optical Society of America A 32, 764–770 (2015). HBM1 Lakhtakia, A., Weiglhofer, W.S.: Axial propagation in a magnetic-dielectric cholesteric medium. Liquid Crystals 15, 659–667 (1993). HBM2 Lakhtakia, A., Weiglhofer, W.S.: Axial propagation in general helicoidal bianisotropic media. Microwave and Optical Technology Letters 6, 804–806 (1993). StJohn St. John, W.D., Fritz, W.J., Lu, Z.J., Yang, D.-K.: Bragg reflection from cholesteric liquid crystals. Physical Review E 51, 1191–1198 (1995). PLA Lakhtakia, A.: Reflection of an obliquely incident plane wave by a half space filled by a helicoidal bianisotropic medium. Physics Letters A 374, 3887–3894 (2010). dcs Lakhtakia, A.: Resilience of circular-polarization-state-sensitive reflection against morphological disorder in chiral structures. Journal of Nanophotonics 18, xxxxxx (2024). Slant1 Wang, F., Lakhtakia, A., Messier, R.: Coupling of Rayleigh–Wood anomalies and the circular Bragg phenomenon in slanted chiral sculptured thin films. European Physics Journal Applied Physics 20, 91–103 (2002). Slant2 Yin, K., Zhan, T., Xiong, J., He, Z., Wu, S.-T.: Polarization volume gratings for near-eye displays and novel photonic devices. Crystals 10, 561 (2020). FialloPhD Fiallo, R.A.: Architecting One-dimensional and Three-dimensional Columnar Morphologies of Chiral Sculptured Thin Films, Doctoral Thesis, Pennsylvania State University (2024). 
Das Das, A., Mandal, S., Fiallo, R.A., Horn, M.W., Lakhtakia, A., Pradhan, M.: Geometric phase and photonic spin Hall effect in thin films with architected columnar morphology. Journal of the Optical Society of America B 40, 2418–2428 (2023). Replace “tan" by “sin" in Eq. (5b). Lakh2024josab Lakhtakia, A.: Transmission-mode geometric-phase signatures of circular Bragg phenomenon. Journal of the Optical Society of America B 41, 500–507 (2024). Lakh2024pra Lakhtakia, A.: Geometric phase in plane-wave transmission by a dielectric structurally chiral slab with a central phase defect. Physical Review A 109, 053517 (2024). Chen Chen, H.C.: Theory of Electromagnetic Waves: A Coordinate-free Approach. TechBooks, Fairfax, VA, USA (1992). Nye Nye, J.F.: Physical Properties of Crystals: Their Representation by Tensors and Matrices. Oxford University Press, Oxford, United Kingdom (1985). Dreher Dreher, R., Meier, G.: Optical properties of cholesteric liquid crystals. Physical Review A 8, 1616–1623 (1973). Sugita Sugita, A., Takezoe, H., Ouchi, Y., Fukuda, A., Kuze, E., Goto, N.: Numerical calculation of optical eigenmodes in cholesteric liquid crystals by 4×4 matrix method. Japanese Journal of Applied Physics 21, 1543–1546 (1982). Oldano Oldano, C. Miraldi, E., Taverna Valabrega, P.: Dispersion relation for propagation of light in cholesteric liquid crystals. Physical Review A 27, 3291–3299 (1983). LW95 Lakhtakia, A., Weiglhofer, W.S.: Further results on light propagation in helicoidal bianisotropic mediums: oblique propagation. Proceedings of the Royal Society of London A 453, 93–105 (1997). MLbook Mackay, T.G., Lakhtakia, A.: The Transfer-Matrix Method in Electromagnetics and Optics. Morgan & Claypool, San Ramon, CA, USA (2020). Wooten Wooten, F.: Optical Properties of Solids. Academic Press, New York, NY, USA (1972); Sec. 3.1. Frisch Frisch, M.: `The most sacred tenet’? Causal reasoning in physics. British Journal for the Philosophy of Science 60, 459–474 (2009). Silva Silva, H., Gross, B.: Some measurements on the validity of the principle of superposition in solid dielectrics. Physical Review 60, 684–687 (1941). Kinsler-EJP Kinsler, P.: How to be causal: time, spacetime and spectra. European Journal of Physics 32, 1687–1700 (2011). Braggs1913 Bragg, W.H., Bragg, W.L.: The reflection of X-rays by crystals. Proceedings of the Royal Society of London A 88, 428–438 (1913). WHBragg1913 Bragg, W.H.: The reflection of X-rays by crystals. Nature 91, 477 (1913). Ewald1916-2 Ewald, P.P.: Zur Begründung der Kristalloptik; Teil II: Theorie der Reflexion und Brechung. Annalen der Physik Series 4 49, 117–143 (1916). GML2000 Geddes III, J.B., Meredith, M.W., Lakhtakia, A.: Circular Bragg phenomenon and pulse bleeding in cholesteric liquid crystals. Optics Communications 182, 45–57 (2000). ML2000 Meredith, M.W., Lakhtakia, A.: Time-domain signature of an axially excited cholesteric liquid crystal. Part I: Narrow-extent pulses. Optik 111, 443–453 (2000). GL2000 Geddes III, J.B., Lakhtakia, A.: Time-domain signature of an axially excited cholesteric liquid crystal. Part II: Rectangular wide-extent pulses. Optik 112, 62–66 (2000). BW Born, M., Wolf, E.: Principles of Optics, 6th edn. Cambridge University Press, Cambridge, United Kingdom (1980); Sec. 1.4.2. Reusch Reusch, E.: Untersuchung über Glimmercombinationen. Annalen der Physik und Chemie (Leipzig) 138, 628–638 (1869). Mathieu Mathieu, J.-P.: Examen de quelques propriétés optiques fondamentales des substances cholestériques. 
Bulletin de la Société Française de Minéralogie 41, 174–195 (1938). Jackson Jackson, J.D.: Classical Electrodynamics, 3rd edn. Wiley, New York, NY, USA (1999); Sec. 7.2. Fergason Fergason, J.L.: Cholesteric structure—I Optical properties. Molecular Crystals 1, 293–307 (1966).
http://arxiv.org/abs/2407.13280v1
20240718083339
AI-Assisted SQL Authoring at Industry Scale
[ "Chandra Maddila", "Negar Ghorbani", "Kosay Jabre", "Vijayaraghavan Murali", "Edwin Kim", "Parth Thakkar", "Nikolay Pavlovich Laptev", "Olivia Harman", "Diana Hsu", "Rui Abreu", "Peter C. Rigby" ]
cs.SE
[ "cs.SE", "cs.DB" ]
§ ABSTRACT brings generative AI into the data analytics domain. SQL is declarative, has formal table schemas, and is often written in a non-linear manner. We address each of these challenges and develop a set of models that show the importance of each problem. We first develop an internal SQL benchmark to perform offline tests at . We evaluate how well the public model performs. We attain a BLEU score of 53% and 24% for single- and multi-line predictions, respectively. This performance is consistent with prior works on imperative languages. We then fine-tune on our internal data and database schemas. substantially outperforms by 16 percentage points on BLEU score. SQL is often written with multiple subqueries and in a non-sequential manner. We develop , which is aware of the context before and after the line(s) that need to be completed. This fill-in-the-middle model outperforms by 35 percentage points. We also measure how often the models get the correct table names, and is able to do this 75% of the time, a major improvement over the other two models. Aside from our scientific research, we also roll out at . is used on a weekly basis by over 10k users, including data scientists and software engineers, and less than 1% of users have disabled . We use the feedback from users to improve . Interesting positive themes include completing tedious or repetitive SQL clauses, suggesting boilerplate code, and eliminating the need to remember difficult SQL syntax. The most significant negative theme was table and column name hallucinations, which has been reduced with the release of . The models consistently outperform public and internal LLMs despite their smaller size (7B and 13B parameters), which provides early indications that smaller specialist models can outperform larger general-purpose models. Meta Platforms Inc. USA (all authors); Rigby is also a professor at Concordia University in Montreal, QC, Canada. AI-Assisted SQL Authoring at Industry Scale Peter C. Rigby July 22, 2024 =========================================== § INTRODUCTION While Large Language Models (LLMs) have been used extensively on general coding problems <cit.>, previous work on LLMs for SQL is very limited. Related work has been concerned with the generation of SQL code from natural language specifications <cit.>, but, to the best of our knowledge, no work exists on autocompleting SQL queries. While it can be tempting to cast SQL as just another programming language for existing LLM autocompletion systems, there are three main challenges that warrant special treatment for SQL. (i) SQL is declarative in nature, and thus represents a different paradigm from the general mix of languages used in code LLM training data. SQL is intimately tied to a data warehouse (often proprietary), which severely limits the LLM's ability for knowledge transfer across training stages and programming languages. (ii) SQL exacerbates the hallucination problem often seen in LLMs. For example, it is easy for an LLM to conjure up a non-existent table name, which then corrupts the remaining context for suggesting column names, joins, etc., quickly making the entire query invalid. While code LLMs also hallucinate, the impact of a single incorrect generation is more cascading for SQL.
We conjecture that the data/db structure can help in mitigating hallucinations. (iii) SQL queries are often written ad-hoc, from scratch, as one-off queries. There is no repository of queries as there is code, so LLMs have to work with very little context. Developers tend to write short queries often starting with the FROM clause and working back to the final columns and aggregations. Developers will often have complex subqueries, the WITH clause, which makes the context before and after the current line even more important than in procedural programming languages. In this work, we provide evidence to four research questions that deal with these specific SQL challenges. Our first three research questions revolve around three candidate models and their efficacy for SQL autocompletion. For RQ 1, the pyramid in Figure <ref> is built upon the public  <cit.> model. For RQ 2, we then use first-party data at , internal code and SQL code as well as schema information including table names and columns to create . For RQ 3, we observe that when engineers write SQL, they often do not write in a sequential top-to-bottom manner. For example, they might fill in a WHERE clause before fully indicating the columns to be selected. As a result, we develop a fill-in-the-middle model, , that has context before and after the line(s) of code to be completed. Our final research question, describes the rollout to to thousands of engineers and examines their feedback. RQ 1. Public : How well does the a public model generate SQL code? While LLMs have been used extensively on general coding problems <cit.>, our goal is to understand how well they work on SQL, which is declarative. To establish a baseline model at , we evaluate how well the public  <cit.> model performs on our internal benchmarks. The model is trained on publicly available code including natural language datasets related to code. is also the base of our pyramid in Figure <ref>. Results summary. For we see an exact match, BLEU, containment, and table match of 29%, 53%, 66%, and 12% for single line. The corresponding values for multi-line are 0%, 12%, 57%, and 26%, respectively. These results are comparable with prior work examining imperative languages like python <cit.>. RQ 2. : How important is fine-tuning on table schemas? We train on first-party data and code at . Since LLMs notoriously hallucinate, we also fine-tune on internal table schema including table names and columns. We expect to see fewer incorrect column names and invalid table names. Figure <ref> illustrates how important it is to have knowledge of the schema. The model hallucinates table names and suggests infrequently used tables. In contrast, once the table names are known, the columns are correct. Results summary. For we see an exact match, BLEU, containment, and table match of 48%, 69%, 78%, 13% for single line. The corresponding values for multi-line are 0%, 24%, 77%, and 62%, respectively. These results represent a substantial 11 to 48 percentage point improvement over the public model. RQ 3. : How well does a fill-in-the-middle (FIM) model perform? SQL authoring is usually not linear and sequential. When authoring long queries, it is common for developers to jump around nested subqueries and common table expressions (CTEs), such that information in the suffix becomes just as important as information in the prefix. See the nested query in Figure <ref>. The Fill-In-the-Middle (FIM) paradigm <cit.> helps widen the context aperture. 
With FIM, we provide both the prefix and the suffix to the model. benefits from not only knowing the schema but also knowing how common the use of a particular column or table is. In Figure <ref>, we see that although has correct column names given the table, has more context about which columns make the most amount of sense given the context of the query. Results summary. For we see an exact match, BLEU, containment, and table match of 50%, 69%, 78%, 23% for single line. The corresponding values for multi-line are 20%, 59%, 82%, and 75%, respectively. The improvement in single line is mostly contained to better table match percentages over , the multi-line improvement is dramatic, increasing from 0% exact matches to 20%. Furthermore, suggests the correct table 75% of the time. RQ 4. Adoption and Feedback: How is used in practice? At , we incrementally rolled out the models. We did not rollout all models, instead, those that performed the best in the offline historical tests were rolled out. We describe the rollout methodology and results. While we do not conduct a controlled experiment, we allow developers to provide feedback on . Since there is no requirement to provide feedback, we also report the opt-out rate for to ensure that developers who did not enjoy but did not comment are captured. Results summary. has is used on a weekly basis by over 10k users including data scientists and software engineers, less than 1% of users have disabled . We use the feedback from users to improve . Interesting positive themes include completing tedious or repetitive SQL clauses, suggesting boilerplate coding, and help in eliminate the need to remember difficult SQL syntax. The most significant negative themes was table and column name hallucinations, which has been reduced with the release of . Other negative themes include interfering with traditional auto-complete system and changes in the keyboard shortcuts and the stylistic aspects. § BACKGROUND AND MODEL Before we provide the technical details, we introduce our running example. Figure <ref> shows three screenshots of a suggestion generated by plugging in three different models into the Daiquery UI while providing the same context. The first screenshot shows the suggestion generated by the public Llama model. It hallucinates column names, suggests a function that is nonexistent in the first-party data warehouse (), and hallucinates the table name as well (). As a result, the generated query does not even compile and user rejects the suggestion. The second screenshot shows a suggestion generated by the model that is fine tuned on the first-party data warehouse. The model is already doing a good job with respect to predicting the correct function names ( as opposed to ) and table names ( as opposed to ). Because the model has seen numerous examples of first-party SQL and seen schema information during training. This query compiles but generates run time errors if executed as-is because it is predicting the column names that do not exist in the table (, , and ). User may accept this query with an understanding to rework it to correct the column names. The third screenshot shows a suggestion generated by the model trained with the FIM objective. By virtue of having the bidirectional context (code before and code after), the model is able to predict the right column names as well (, , and ). This query compiles and executes as-is without the users needing to make any changes to the generated query. 
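To make the hallucination issue in this running example concrete, the sketch below (Python) shows the kind of schema check that separates the first suggestion from the third. The table, columns, and catalog are hypothetical stand-ins rather than production warehouse metadata, and the deployed models learn this grounding implicitly from schema-aware training rather than by an explicit lookup.

import re

# Hypothetical schema catalog: table name -> known column names.
SCHEMA = {
    "user_daily_metrics": {"ds", "user_id", "country", "session_count"},
}

def check_completion(sql: str) -> list:
    """Return a list of hallucination warnings for a generated completion."""
    issues = []
    tables = re.findall(r"\b(?:FROM|JOIN)\s+([\w.]+)", sql, flags=re.IGNORECASE)
    for table in tables:
        if table not in SCHEMA:
            issues.append(f"unknown table: {table}")
    known_columns = set().union(*(SCHEMA.get(t, set()) for t in tables)) if tables else set()
    selected = re.findall(r"SELECT\s+(.*?)\s+FROM", sql, flags=re.IGNORECASE | re.DOTALL)
    for clause in selected:
        for col in (c.strip() for c in clause.split(",")):
            name = col.split()[0].split(".")[-1]
            if name != "*" and not re.match(r"\w+\(", col) and name not in known_columns:
                issues.append(f"unknown column: {name}")
    return issues

print(check_completion(
    "SELECT ds, country, session_count FROM user_daily_metrics WHERE ds = '2024-01-01'"))
print(check_completion(
    "SELECT dateid, num_sessions FROM user_metrics_daily WHERE dateid = '2024-01-01'"))

Running the sketch passes the first completion and flags the second, mirroring the difference between the valid and hallucinated suggestions described above.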
§.§ Data Pyramid and Models The data pyramid consists of four main components: A checkpoint of the public Llama model, first party data, domain specific data (SQL), and instruct fine tuning data. §.§.§ Public data Public data (used by the Llama model) predominantly contains a near-deduplicated data set of publicly available code (859 GB) <cit.>. The data set also consists of natural language data sets related to code (78GB). This data set contains many discussions about code and code snippets included in natural language questions or answers. §.§.§ First-party data For training on our first-party data, we collected data from 's code repositories and notebooks, first-party data, applying several filters <cit.>: * Rather than crawling the entire repository, we used code that is modified through diffs ('s term for pull requests) checked in by developers as a way of staying close to our end application (writing code in a code editor). This way we avoid training on code that may have been added a long time ago but is never modified. * To keep the training data fresh, we only included diffs that are up to 2 years old, and only kept the latest versions of files to avoid bugs that may have been patched. * For each major target language, we exclude code that is not in production or deprecated. After these filters, our first-party training data included in the order of tens of millions of files amounting to a few billion lines of code across more than 10 languages. §.§.§ SQL artifacts To specialize the model for the SQL domain, we sourced 9 million SQL queries from our internal data warehouse. These queries are fully verified to make sure they pass syntax and semantic checks, and can be executed without producing any run time errors. Additionally, we sourced the schema information of the source tables that are used in these SQL queries. Schema information includes table names, column names and their data types. §.§ Data quality improvements Dataset quality played a crucial role in shaping the performance and effectiveness of . We deploy a combination of manual curated and automated techniques for improving data quality. The main components of our data quality pipeline include query quality filtering, query diversity filtering and deduplication of similar or repetitive queries. For example for the SQL completion learning task, we have leveraged simple heuristics to enhance the quality of our training dataset. The heuristics that we have used include keeping SQL queries with a certain minimum or maximum length, deduping queries and keeping queries that have been run successfully. For instance deduping resulted in filtering of over 5% of queries (leaving 10 million queries for training after filtering). Finally, to improve the dataset diversity we have ensured that we have a representative queries from different table schemas and query complexity levels (e.g., easy, medium, hard) defined by the number of SQL components, selections, and conditions. More specifically we defined the complexity levels defined based on the Spider  <cit.>. Using the data quality filtering steps above, we ended with about 10M queries for SQL completion task fine-tuning. §.§ model development We use a checkpoint of the Llama model that is trained heavily on code as our base model <cit.>. They come in four model sizes: 7B, 13B, 34B and 70B parameters. 
We take the 7B model as our base model as it helps us strike the right balance between prediction accuracy and end-to-end inference latency requirements of a code completion system (∼200 milliseconds). At the time this work began, we were only able to use the Llama 2 model weights <cit.> and trained on 500B tokens from a code-heavy data set. The Llama model is trained predominantly on a near-deduplicated data set of publicly available code. It was trained to support infilling tasks. Infilling is the task of predicting the missing part of a program given a surrounding context. Applications include code completion at the cursor's position in code IDEs, type inference and generation of in-code documentation (e.g., docstrings). Infilling models were trained following the concept of causal masking <cit.>, where parts of a training sequence are moved to the end, and the reordered sequence is predicted autoregressively. §.§ model development models are trained on top of the models. This model is initialized with Llama model weights <cit.> and trained on first-first party code data. This data includes code from other programming languages such as Python, C++, React, etc. as mentioned in Section <ref>. This helps the model learn the basics of coding patterns, frameworks, nomenclature used in the company. After a checkpoint is produced, we continue to pre-train this checkpoint using the first-party SQL data and SQL schema information to specialize or align the model further on SQL completion and prediction tasks. As the base model has seen SQL queries as part of the pre-training (Llama training), the model already understands how to write SQL queries. However, it does not understand the dialects used at Presto SQL <cit.>. Therefore, we continued pre-training on first-party SQL queries and schema. This serves two purposes: it teaches the model about the nuances of the SQL used internally, and it equips the model with the knowledge of the first-party data warehouse, which is very important to prevent the model from hallucinating table and column names while synthesizing SQL queries. §.§ model development While training the model on the first-party coding data and SQL artifacts, we leverage a training objective named Language Causal Masking (LCM) <cit.>. This helps the model consume context bidirectionally (code before and code after), which is important for any code completion system. Moreover, LCM overcomes the limitations imposed by regular CM objective with respect to tokenization. We list the modifications we performed to the CM objective to produce LCM objective below: * CM implements the masking after the text has been tokenized into token IDs, which limits the model during training to only seeing mask spans with edges at common tokenizer tokens. LCM lifts the masking step to the language level and avoids this, similar to the fill-in-the-middle (FIM) task <cit.>. Also, LCM only masks at certain trigger characters – that is, characters where the model will be queried during inference such as , , , , , , etc. * We prefix certain metadata to the input in LCM, such as the programming language, full path to the file, and the kernel name for notebooks. * Through model-level ablations, we found an optimal 70-30 split of the model's input length between code before and code after the cursor. * Specialized for our use case, LCM has only one mask in any input. A step-by-step overview of constructing an input in LCM is shown in Figure <ref>, along with an example SQL query. 
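A minimal sketch of such an input construction is given below, assuming illustrative sentinel strings and a character-level approximation of the 70-30 context split; the production implementation operates on the model's actual special tokens and tokenizer.

# Sketch of assembling a language-level causal-masking (LCM) input for SQL completion.
# Sentinel strings such as "<metadata>", "<mask>", and "<eom>" are illustrative only.
def build_lcm_input(metadata: str, query: str, cursor: int, max_chars: int = 2000) -> str:
    before, after = query[:cursor], query[cursor:]
    # Split the context budget roughly 70/30 between code before and after the cursor.
    before = before[-int(0.7 * max_chars):]
    after = after[:int(0.3 * max_chars)]
    return f"<metadata>{metadata}<pre>{before}<mask><suf>{after}<eom>"

query = (
    "SELECT user_id, COUNT(*) AS session_count\n"
    "FROM user_daily_metrics\n"
    "WHERE \n"
    "GROUP BY user_id"
)
cursor = query.index("WHERE ") + len("WHERE ")   # complete the WHERE predicate
prompt = build_lcm_input("lang=sql path=notebooks/engagement.sql", query, cursor)
print(prompt)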
Once an input is constructed, during training, we maximize the log probability of the language-masked input: log𝒫([; ; ; ; ; ]) where , and are the tokens in the metadata, code before, and code after the cursor, respectively, is the code that was masked, and is a special token. During inference, we sample tokens in an auto-regressive manner from the distribution: 𝒫(· | [; ; ; ; ]) As we are suggesting lines of code, we stop the generation early once a newline token has been generated. Due to the real-time nature of our application and the inline suggestion user experience (UX), we only return one sequence of generated tokens. § CONTEXTUAL INFORMATION ABOUT This section offers an overview of the technology stack used by data engineers at . This is important to provide context on our dataset and experimental setup. 's data warehouse is the main data repository that is used for analytics. It is a collection of millions of tables, physically stored using an internal fork of ORC[Apache ORC, https://orc.apache.org/] 's exabyte-scale data warehouse is so large that it cannot physically be stored in one single datacenter. Instead, data is spread across different geographical locations. The warehouse, due to its non-centralized nature, is divided into `namespaces.' These namespaces represent both a geographical and logical segmentation of the warehouse: tables that share a common “theme” are grouped into the same namespace. This allows for efficient querying as data does not need to be transferred across different locations. However, if a query requires tables from two distinct namespaces (for example, table1 in namespace A and table2 in namespace B), data replication becomes necessary. Either table2 can be replicated to namespace A, or table1 to namespace B, allowing the query to be run in the namespace where both tables are present. Data engineers have the ability to create these cross-namespace replicas swiftly using a web-based tool, and these replicas are automatically synchronized. Data is typically introduced into the warehouse in three primary ways: * Through data workflows and pipelines, such as data inserted by a Dataswarm pipeline This data is usually sourced by querying other tables within the warehouse. * Via logs, which are data produced from either server-side or client-side logging frameworks. * Through daily snapshots of entities present in the production graph database. The warehouse can be queried by many different entry points, but data engineers at generally use Presto and Spark. While both are open-source (Presto was originally developed at Meta and was open-sourced in 2019), uses and maintains its own internal forks — but frequently rebases from the open-source repository so that we are kept up-to-date, and contributes features back into the open-source projects. With our focus primarily on business impact, design and optimization, most of our pipelines and queries are written in SQL in one of two dialects, Spark SQL or Presto SQL. This approach provides a consistent understanding of the data and business logic and enables any data engineer, data scientist, or software engineer comfortable with SQL to understand all of our pipelines and even write their own queries. The choice of Presto or Spark depends mostly on the workload: Presto is typically more efficient and is used for most queries while Spark is employed for heavy workloads that require higher amounts of memory or expensive joins. 
Presto clusters are sized in a way that most day-to-day adhoc queries (that scan, generally, a few billions rows — which is considered a light query at scale) produce results in a few seconds (or minutes, if there’s complex joins or aggregations involved). §.§ Real-time Querying Scuba is ’s real-time data analytics framework. It is frequently used by data engineers and software engineers to analyze trends on logging data in real time. It is also extensively used for debugging purposes by software and production engineers. Scuba tables can be queried either through the Scuba web UI (which is comparable to tools like Kibana), or via a dialect of SQL. In the Scuba web UI, engineers can quickly visualize trends on a log table without having to write any queries, with data that was generated in the past few minutes. §.§ Bento and Daiquery Daiquery is one of the tools data engineers use on a daily basis at . It is a web-based notebooks experience which acts as a single entrypoint to query any data source: the warehouse (either through Presto or Spark), Scuba, and plenty of others. It includes a notebook interface with multiple query cells, and users can quickly run and iterate on queries against our data warehouse. Results appear as tables by default, but built-in visualization tools allow the creation of many different types of plots. Daiquery is optimized for rapid query development, but does not support more complex post-query analysis. For this, users can promote their Daiquery notebooks into Bento notebooks. Bento is ’s implementation of managed Jupyter notebooks, and in addition to queries also enables python or R code (with a range of custom kernels for different use cases) and access to a wide range of visualization libraries. In addition to its use by data engineers, Bento is also used extensively by data scientists for analytics and machine learning engineers for running experiments and managing workflows. § DATA, EVALUATION METHODOLOGY, AND MEASURES §.§ Internal Benchmark Benchmarking and evaluation which are essential tools for guiding model improvements through better training data selection, prompt engineering and supervised fine-tuning. Internal benchmarking is especially vital for understanding the performance of the current models, comparing different models (large versus small, catch-all models versus expert models), and even catching future regressions in production deployments of models. While external data sets such as Spider <cit.>, Geo Query <cit.> exist for benchmarking general SQL completion, Spider <cit.> contains annotations from 11 college students. For problems like SQL completion, State-Of-The-Art (SOTA) benchmarking does not even exist, which makes it more difficult to benchmark these applications. Typically, a benchmarking exercise constitutes two primary components: ground truth data, metrics. Benchmarking comprises two primary components: ground truth data and metrics. Ground truth data can be human curated or can be generated programmatically. In both cases, the dataset quality is of utmost importance. To obtain human-generated dataset, we employ human annotators (preferably, domain experts) to curate and annotate datasets with the appropriate labels, responses, etc. While this is of higher quality, it is also resource-intensive. To programmatically obtain ground truth data, we apply heuristics as well as simple machine learning models to curate data sets. 
Programmatic datasets are of lower quality than human-generated datasets, but they are scalable and far less resource-consuming to produce. To curate our benchmark, named , we started with a held-out portion of the fine-tuning data and randomly cut off a part of each query (spanning either a single line or multiple lines) starting from specific trigger characters (e.g., whitespace, comma, parenthesis). We also categorized each data point by query length (Small, Medium, and Large) and by query complexity. To classify the queries, we defined query complexity levels inspired by Spider <cit.>: keywords such as "SELECT", "FROM", and "WHERE" map to easy; "JOIN", "GROUP BY", "HAVING", and "ORDER BY" to medium; "UNION", "EXCEPT", "INTERSECT", and "LIMIT" to hard; and "WITH", "CASE", "IF", and "COALESCE" to extra hard. Regarding length, we defined Small, Medium, and Large based on the third quantiles of the dataset. Then, we randomly selected a subset that is uniformly distributed across query lengths and complexity levels, resulting in a balanced dataset of 15,256 data points. Metrics help us track the performance of various solutions against the ground truth data. There exists a plethora of offline metrics, such as Exact Match (EM), BLEU score, Levenshtein edit distance, and ROUGE score, that evaluate the performance of translation systems. Further, they can be extended to measure any system that generates text. While these metrics are standard for translation and other text and code generation applications, accomplishing a data task using SQL can be done in an infinite number of ways by querying different tables and columns. Therefore, there is a compelling need for domain-specific metrics such as "containment" or "Table Match" to better understand the usefulness of a generated SQL query. To evaluate , we created our SQL completion benchmark and measured a series of standard text and code generation metrics as well as SQL-specific metrics that we define below. §.§ Evaluation Method To evaluate the capabilities of , , and in predicting SQL completions, we ran them against the benchmark. More specifically, we ask the models to predict the masked part of a given SQL query, such as the "target" in Figure <ref>, providing the query text before and after the masked target as their input. We evaluate each model in two modes: single-line, where the model is expected to complete the query only until the end of the first line of the masked target, and multi-line, where the model is expected to complete the whole masked target. We then measure the following metrics: * Exact Match (EM) is a simple but strict metric that evaluates how often the generated SQL is exactly the same as the masked portion. * BLEU Score measures the average n-gram overlap between the masked portion and the generated SQL. * Containment Score (CS) measures to what extent the predicted completion contains the same SQL keywords as the masked target, such as WHERE clauses, predicates, joins, GROUP BY, and ORDER BY. * Table Match Score (TMS) is a binary score that measures whether the predicted completion contains the same table names as the masked target. This is important to understand how often the models hallucinate table information. A simplified sketch of the two SQL-specific scores is given after this list.
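The sketch below shows one way the containment and table-match scores might be computed; the keyword list, the table-name extraction, and the normalization are simplified stand-ins for the internal implementation.

import re

SQL_KEYWORDS = ["SELECT", "FROM", "WHERE", "JOIN", "GROUP BY", "HAVING",
                "ORDER BY", "UNION", "EXCEPT", "INTERSECT", "LIMIT", "WITH"]

def keywords_in(sql: str) -> set:
    upper = sql.upper()
    return {kw for kw in SQL_KEYWORDS if kw in upper}

def tables_in(sql: str) -> set:
    return {t.lower() for t in re.findall(r"\b(?:FROM|JOIN)\s+([\w.]+)", sql, re.IGNORECASE)}

def containment_score(predicted: str, target: str) -> float:
    """Fraction of the target's SQL keywords that also appear in the prediction."""
    target_kws = keywords_in(target)
    if not target_kws:
        return 1.0
    return len(target_kws & keywords_in(predicted)) / len(target_kws)

def table_match(predicted: str, target: str) -> bool:
    """True only if the prediction references exactly the target's table names."""
    return tables_in(predicted) == tables_in(target)

target = "FROM user_daily_metrics GROUP BY user_id ORDER BY session_count DESC"
pred = "FROM user_daily_metrics GROUP BY user_id"
print(containment_score(pred, target), table_match(pred, target))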
§.§ Evaluation methodology for use in production We used a mixed methods approach <cit.> to evaluate in production, collecting usage data and feedback comments. Our rollout strategy for consists of gradual deployment in waves of randomly selected user cohorts. Within each wave, we rolled it out to increments of 25% of the developer population until we enable it for 100% of developers. The rollout was completed after four weeks in the fall of 2023. We report how many suggestions are accepted by engineers and what proportion of SQL code is written by . We instrumented telemetry to track various events in the SQL authoring tool such as displaying a suggestion inline, accepting or rejecting a suggestion, and the length of accepted suggestions. In total, our large-scale deployment resulted in making suggestions. distinct developers have seen at least one suggestion. We only count suggestions that were displayed for at least 750 milliseconds to ensure that developers were exposed to a suggestion and had a chance to see and comprehend it <cit.>. Our outcome measures are the acceptance rate of suggestions and the percentage of code typed using . These measures have been used in prior work, with, for example, Google reporting that 3% of the code typed by engineers was from their AI <cit.>. While we do not use a formal thematic or grounded theory research methodology to understand user feedback, we provide examples of both negative and positive feedback. We have used this feedback to incrementally improve . We also extract overarching themes from the feedback. Future work is necessary to systematically understand the user experience of AI-assisted SQL editing. § RESULTS §.§ RQ 1. Public How well does the a public model generate SQL code? Our goal is to understand how well LLMs work on SQL, which is a declarative language. To establish a baseline model at , we evaluate how well the public  <cit.> performs at autocompleting SQL queries. is an open-sourced model trained on publicly available data. We evaluated it against our internal benchmark, described in Section <ref>, for both single-line and multi-line SQL completion tasks. For more details on the evaluation methodology and execution refer to Section <ref>. In Table <ref>, we see 's performance in both single-line and multi-line SQL completion tasks. For we see an exact match, BLEU, containment, and table match of 29%, 53%, 66%, and 12% for single line. The corresponding values for multi-line are 0%, 12%, 57%, and 26%, respectively. These results are comparable with prior work examining imperative languages <cit.>. As shown in Table <ref>, is able to accurately predict the single-line completion of a small portion of the SQL queries in our internal benchmark. For multi-line, was not able to correctly predict any of the SQL queries in our benchmark, as multi-line completion is a more challenging task which requires accurate table and column names beyond SQL keywords. Additionally, it is not able to predict the table names in many SQL queries of our internal benchmark, only 12% in single-line and 26% in multi-line completions. Note that, single-line completions in our benchmark contain less number of completions with a table name in them. more specifically, since the model is asked to complete only a partial line of a SQL query, that specific line might not be the line containing the table name in the SQL query. Therefore, in multi-line completions the model is presented with more opportunity to predict the table names. 
The results for represent a baseline performance for a model as it has not been trained on our internal data, and hence, it is not familiar with the internal SQL queries, coding styles, and table names. §.§ RQ 2. How important is fine-tuning on table schemas? To evaluate the impact of fine-tuning in SQL completion task, we fine-tuned on first-party data and code at . LLMs notoriously hallucinate, and in this case, they especially hallucinate table and column names. Therefore, we also fine-tune on internal table schema including table and column names. We evaluate the fine-tuned model, , against our internal benchmark, described in Section <ref>, and measure the same set of metrics as described in Section <ref>. In Table <ref>, we see the detailed results of 's performance in single-line and multi-line SQL completion tasks compared to those of , in terms of percentage points (pp) difference. For we see an exact match, BLEU, containment, and table match of 48%, 69%, 78%, 13% for single line. The corresponding values for multi-line are 0%, 24%, 77%, and 62%, respectively, representing a substantial pp increase over public Llama. outperforms in both single-line and multi-line SQL completions, except for EM in multi-line which shows that is not able to correctly predict any of the multi-line SQL completions either. However, 's accuracy in predicting the correct table name has significantly improved with a fine-tuning on table schemas, 36 percentage points increased in multi-line. As a result, fine-tuning the model on the first-party data and code significantly improves the performance of SQL completion in our internal benchmark. §.§ RQ 3. How well does a fill-in-the-middle (FIM) model perform? SQL authoring is often not linear or sequential. When authoring long queries, it is common for developers to jump around nested sub-queries and common table expressions (CTEs), such that information in the suffix becomes just as important as information in the prefix, they Fill In the Middle (FIM). is trained on the first-party data and code with FIM objective where the model consumes bidirectional contexts of the code and is asked to fill in the middle. See Section <ref> for more details of the 's development. To evaluate the impact of the FIM training we evaluated against our internal benchmark and compared its results with . In Table <ref>, we see the detailed results of 's performance in SQL completion in both single-line and multi-line modes compared to those of in terms of percentage points (pp) difference. For we see an exact match, BLEU, containment, and table match of 50%, 69%, 78%, 23% for single line. The corresponding values for multi-line are 20%, 59%, 82%, and 75%, respectively. The improvement in single line is mostly contained to better table match percentages over , the multi-line improvement is dramatic, increasing from 0% exact matches to 20%. Furthermore, suggests the correct table 75% of the time. outperforms in both single-line and multi-line SQL completion tasks. More specifically, we see a significant lift in multi-line completions, an increase of 20 pp in EM from 0% in . Note that EM is the most restrictive metric and it is exceedingly difficult to achieve in multi-line completions, as they include longer responses and more chance of failure. (continued from above) Additionally, is able to predict correct table names in 75% of the multi-line completions which highlights the necessity of consuming the bidirectional context, the suffix as well as the prefix in a SQL query. §.§ RQ 4. 
Adoption and Feedback How is used in practice? has enjoyed a wide and consistent adoption among employees at . It peaks at approximately 8,100 Daily Active Users (DAU) and 15,700 Weekly Active Users (WAU), where active users are those that accept suggestions consistently. In the first quarter of 2024, the system made over 8 million suggestions at a rate of approximately 680,000 suggestions per week. On average, users accepted 21% of the suggestions that were shown for more than 750 milliseconds. This makes up over 50 million characters of SQL and represents almost 6% of all the SQL authored at . While acceptance rate helps us understand the likeability of and can serve as a proxy to suggestion accuracy, it can be gamed easily. We propose a new metric named Characters accepted Per Opportunity (CPO) <cit.>. An opportunity is any editing action in the editor that could trigger a suggestion. CPO allows us to track the throughput of accepted suggestions in a normalized way. It is more robust than simple acceptance rate because it also takes into account the length of accepted suggestions and cannot be affected by simply showing less and/or trivial suggestions. The system records a CPO of 2.2. Week-over-week product retention, which is the fraction of weekly active users who were also active in the previous week, for is 80%. Opt-out rate, which is the number of users that disabled voluntarily. The opt-out rate of currently ranges around 0.3%. §.§ User sentiment and feedback At , developers are encouraged to post their feedback in a tool-specific feedback group. The developers are generally vocal and provide feedback despite the feedback being publicly visible to others, including the developers of . We use the user feedback to keep track of sentiment, learn about UX and suggestion accuracy issues, and identify bugs. §.§.§ Favorable scenarios for The scenarios for which was able to add the biggest value includes (but is not limited to), completing tedious or repetitive SQL clauses, boilerplate coding, helping eliminate the need to remember difficult SQL syntax. We also noticed auxiliary benefits reported by the developers that helped them in filling out the natural language parts of the query (e.g., column alias names or inline comments). Many developers highlighted the fact that the suggestions are accurate. Also, we received feedback about how nicely navigates the precision versus recall problem by not showing suggestions too often, which is reflected in the metrics we shared in Section <ref>. Also, we noticed that is being received by occasional and experienced SQL developers alike. We list some anecdotes below. “The other day it literally wrote the exact query I wanted, which was one with a window function which I always forget the syntax for. For me as a PM, who only digs into data every now and then, it really makes it more efficient as I don't have to pull up the Presto SQL documentation to see how functions are working all the time." This summarizes the productive experience a product manager who does not author SQL frequently. “I just wanted to give a quick shout out that inline completions has been getting really good and I've personally seen the improvements as someone who writes a lot of queries. Today, I was writing a comment, and I literally only wrote one word and it somehow telepathically knew exactly what I wanted to say, and auto complete was great. 
Also, queries auto completion has been getting noticeably better with picking up patterns, assigning date stamps, writing filters, etc." Here, the feedback summarizes the delightful and productive experience a data engineer who authors SQL regularly had with . Another fascinating trend we noticed in these feedback items is, developers tend to be fine with reworking the suggested queries per their needs as long as helps them get started. As SQL development is iterative, the biggest time savings and productivity boost comes from the fact that helps developers bootstrap and iterate on complex SQL queries without needing to navigate clunky SQL syntax, semantics, and documentation. “saved me hours of SQL iteration today: I feel like in its current state, the value of to me is already there. The answers in here aren't exactly right, but they are close enough to have saved me a lot of smashing my face into daiquery which is what usually happens when I start doing some new data analysis. instantly got me from 0 to 10 to about 90%, and I was able to do the rest on my own." Additionally, many developers provided unsolicited feedback about how helpful has been with respect to saving several hours of their time and contributing towards bringing a positive developer experience at . “I just wanted to drop a line and say, I got to use this feature today. The auto-complete and suggestions are a fantastic use of generative AI." “This query would have taken me several hours, but with , it saved me a significant amount of time. This is awesome!" §.§.§ Unfavorable scenarios After analyzing negative feedback, we found that common problems themed around hallucinating table names, issues related to UX such as competing with traditional inline completions, etc. A developer passed the following feedback about : “I was playing around to see if it could write queries for me in SQL and it looks like it is hallucinating columns that don't exists on the tables." “I attempted to use to help with writing a daiquery query since I am bad at SQL, but it seems to have no knowledge of any of the tables" While we reduced hallucinations significantly (as reported in Table <ref>), it is hard to solve it completely as tables keep getting moved, renamed, deleted, and created continuously. The scale at which these operations happen at amplifies the complexity further. Our offline tests shows that table names are correct around 75% of the time. Additionally, SQL is a language where (and the column list) is authored before writing the table names many times. This makes the problem even more difficult. To alleviate that, we train the models on schema information and the model tend to perform significantly better in its ability to pass subsequent suggestions once the developer writes the clause and the table names. Another developer expressed their negative experience about overloading of the tab key for both indentation and accepting suggestions: "I have a distinct style of formatting my SQL queries that I've been using for 5+ years and will likely never change, as it makes SQL much more readable for me. As part of this I utilize the tab key extensively for indenting + spacing. As you can probably imagine, this makes tab-autocomplete a frustrating experience for me, especially when it triggers really quickly." Coexisting with the traditional auto complete system is a challenge faced by many AI-assisted code authoring solutions <cit.>. Developers tend to have strong preferences around UX, keyboard shortcuts, and the stylistic aspects. 
It takes time, great amount of user education, and novel and innovative ways if presenting AI suggestions to make the experience enjoyable for everyone. has is used on a weekly basis by over 10k users including data scientists and software engineers, less than 1% of users have disabled . We use the feedback from users to improve . Interesting positive themes include completing tedious or repetitive SQL clauses, suggesting boilerplate coding, and help in eliminate the need to remember difficult SQL syntax. The most significant negative themes was table and column name hallucinations, which has been reduced with the release of . Other negative themes include interfering with traditional auto-complete system and changes in the keyboard shortcuts and the stylistic aspects. § THREATS TO VALIDITY §.§ Generalizability Drawing general conclusions from empirical studies in software engineering is difficult because any process depends on a potentially large number of relevant context variables. The analyses in the present paper were performed at , and it is possible that results might not hold true elsewhere. However, our study does cover a very wide swath of software engineering. The software systems covers millions of lines of code and 10's of thousands of developers who are both collocated and working at multiple locations across the world. We also cover a wide range of domains from user facing social network products and virtual and augmented reality projects to software engineering infrastructure, such as calendar, task, and release engineering tooling. In Sections <ref> and <ref> we provide detailed steps and descriptions of the types of data we used in our model. We look forward to reading how other researchers use LLMs to assist in SQL composition. §.§ Construct Validity To evaluate our models in offline tests, we used standard metrics such as exact match and BLEU scores. However, given that SQL is declarative and is often not written in a sequential manner, we introduced two new metrics (see Section <ref>). The Containment Score (CM), which determines how many SQL clauses are correct. We also introduced the Table Match Score. A hallucinated table name will drastically reduce the quality of the SQL and impact column names. These metrics need further validation and we hope that other researchers will build upon our SQL specific measures. §.§ Internal Validity Unlike a traditional experiment, we also have to produce and release a running product. While our offline experimental results and benchmark are on consistent dataset, our rollout is an ongoing process without a constrained timeframe. Instead of a traditional research method, we monitor the feedback and usage results on a daily basis. New features are gradually rolled-out to avoid any regressions using an A/B test methodology <cit.>. § LITERATURE AND DISCUSSION To the best of our knowledge, almost all published work in the domain of authoring SQL using LLMs is focused on the problem of converting natural language to SQL queries (Text2SQL) <cit.>, whereas the focus of this work is on autocompleting SQL queries as authors type them out. While related, the problems are different and need different approaches. 
For instance, (a) for autocompleting SQL queries, the LLM must be trained on a large number of SQL queries instead of text-SQL pairs, (b) the latency constraints are much tighter for an autocompletion tool compared to natural language querying, and (c) the model must be able to pick up context from after the cursor, unlike the relatively straightforward left-to-right generation in the typical Text2SQL setting. Text2SQL: Early works did not handle generalization to unseen databases well; later works such as RAT-SQL attempted to generalize to unseen database schemas but assumed that the schemas are small enough to be encoded at query time, and furthermore that the system knows which database to look at <cit.>. Most works assume a small database schema, or that the schema is known at runtime, which is not the case for us <cit.>. Text2SQL also assumes the model is aware of the author's intent, whereas in completion the intent is unclear in most cases. This makes techniques such as encoding the schema in the query, or constrained decoding with PICARD, hard to apply. Even if the intent and schema were known in a subset of cases, latency constraints make it hard to use them. Some works also use the database content, which cannot work for us due to the sheer number of databases and tables, the size of the data, and the latency constraints. A further complication in our setting is that multiple tables may answer the user's question, which also makes evaluation challenging. Code completion: While there is a large body of work around code autocompletion with LLMs <cit.>, there has been limited deployment of these in large industrial environments <cit.>. These works have been effective at generating code in general-purpose programming languages. For example, Nguyen et al. <cit.> used 33 LeetCode questions to create queries for Copilot in four different programming languages. They found that Copilot's Java suggestions have the highest correctness score (57%) while JavaScript has the lowest (27%). Although there are blog posts explaining to developers how to use GitHub Copilot with SQL <cit.>, there is no description of the model and no evaluation of how well it performs on SQL. Our paper illustrates three main challenges that warrant special treatment for SQL autocompletion: (i) its declarative nature coupled with its ties to a data warehouse, (ii) the exacerbated impact of LLM hallucinations, and (iii) atypical coding styles (CTEs, non-linear authoring) when developers write SQL queries. In this light, our work is the first to develop specialized models for AI-assisted SQL authoring at scale with comprehensive evaluation. LLMs for code completion: related efforts in this space also span specialized code models, context stuffing, fill-in-the-middle (FIM) training, inference-time optimizations, and the UX of presenting suggestions. § CONCLUSION AND CONTRIBUTIONS Our major contribution is to show how well LLMs can work in the context of SQL. Our specific contributions, which provide answers to our research questions, are the following: * RQ 1. Public : We see an exact match, BLEU, containment, and table match of 29%, 53%, 66%, and 12% for single line. The corresponding values for multi-line are 0%, 12%, 57%, and 26%, respectively. These results are comparable with prior work examining imperative languages like Python <cit.>. * RQ 2. : We see an exact match, BLEU, containment, and table match of 48%, 69%, 78%, and 13% for single line. The corresponding values for multi-line are 0%, 24%, 77%, and 62%, respectively. These results represent a substantial improvement over the public model. * RQ 3. 
: We see an exact match, BLEU, containment, and table match of 50%, 69%, 78%, and 23% for single line. The corresponding values for multi-line are 20%, 59%, 82%, and 75%, respectively. The single-line improvement is mostly confined to better table match percentages over ; the multi-line improvement is dramatic, increasing from 0% exact matches to 20%. Furthermore, suggests the correct table 75% of the time. * RQ 4. Rollout and Feedback: is used on a weekly basis by over 10k users, including data scientists and software engineers, and less than 1% of users have disabled . We use the feedback from users to improve . Interesting positive themes include completing tedious or repetitive SQL clauses, suggesting boilerplate code, and helping eliminate the need to remember difficult SQL syntax. The most significant negative theme was table and column name hallucinations, which have been reduced with the release of . Other negative themes include interference with the traditional auto-complete system and changes to keyboard shortcuts and stylistic preferences. We anticipate that other researchers will build upon our techniques, models, and evaluation metrics to ensure LLMs continue to accelerate and assist in writing SQL. § ACKNOWLEDGEMENTS We would like to thank Charlie Regan, Kristian Kristensen, Daniel Cheng, Kelly Hirano, Bhaskar Mehta, Barak Yagour, Killian Murphy, Aparna Ramani, Dinkar Pataballa, Shahin Sefati, Michael Jiang, Emily Yeh, Kamran Asif, Peyton Foucht, Mariel Carter, Imad Ahmad, Gabriel Synnaeve, Baptiste Rozière, Jeremy Reizenstein, Sten Sootla, Maria Lomeli, and Michael Bolin for their help and support with this work. IEEEtran
http://arxiv.org/abs/2407.13516v1
20240718134616
Optimal Mechanisms for Quantum Local Differential Privacy
[ "Ji Guan" ]
quant-ph
[ "quant-ph" ]
Institute of Software, Chinese Academy of Sciences, Beijing 100190, China Optimal Mechanisms for Quantum Local Differential Privacy Ji Guan July 22, 2024 ========================================================= § ABSTRACT In recent years, centralized differential privacy has been successfully extended to quantum computing and information processing to safeguard privacy and prevent leaks in neighboring relationships of quantum states. This paper introduces a framework known as quantum local differential privacy (QLDP) and initializes the algorithmic study of QLDP. QLDP utilizes a parameter ϵ to manage privacy leaks and ensure the privacy of individual quantum states. The optimization of the QLDP value ϵ, denoted as ϵ^*, for any quantum mechanism is addressed as an optimization problem. The introduction of quantum noise is shown to provide privacy protections similar to classical scenarios, with quantum depolarizing noise identified as the optimal unital privatization mechanism within the QLDP framework. Unital mechanisms represent a diverse set of quantum mechanisms that encompass frequently employed quantum noise types. Quantum depolarizing noise optimizes both fidelity and trace distance utilities, which are crucial metrics in the field of quantum computation and information, and can be viewed as a quantum counterpart to classical randomized response methods. Additionally, a composition theorem is presented for the application of QLDP framework in distributed (spatially separated) quantum systems, ensuring the validity (additivity of QLDP value) irrespective of the states' independence, classical correlation, or entanglement (quantum correlation). The study further explores the trade-off between utility and privacy across different quantum noise mechanisms, including unital and non-unital quantum noise mechanisms, through both analytical and numerically experimental approaches. Meanwhile, this highlights the optimization of quantum depolarizing noise in QLDP framework. empty plain J. Guan et al. State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China University of Chinese Academy of Sciences, Beijing 100049, China Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China § INTRODUCTION Quantum computers have the potential to solve specific problems significantly faster than classical computers through the utilization of quantum parallelism in the development of quantum algorithms <cit.>. Examples include Grover's algorithm <cit.> for searching unstructured databases, Shor's algorithm <cit.> for identifying prime factors of integers, and the HHL algorithm <cit.> by Aram Harrow, Avinatan Hassidim, and Seth Lloyd for solving linear equations systems. This advancement could bring about revolutionary changes in various fields such as financial analysis <cit.>, drug discovery <cit.>, and machine learning <cit.>. On another note, principles of quantum mechanics like the no-cloning theorem and the observer effect establish inherently secure communication channels <cit.>. Any unauthorized attempt to intercept quantum signals would inevitably disrupt them, thereby alerting the legitimate parties. This feature allows for the secure distribution of encryption keys, exemplified by the BB84 quantum key distribution (QKD) protocol <cit.>. 
As quantum hardware techniques rapidly advance, quantum computers have been developed to exhibit quantum supremacy (outperforming classical computation), as seen in Google's Sycamore <cit.>, USTC's Jiuzhang <cit.>, and Zhucongzhi <cit.>. Additionally, various quantum communication channels and networks have been demonstrated or deployed mainly for research and experimental purposes. Notable examples include the Chicago Quantum Exchange in the United States, China's Micius Satellite, Japan's Tokyo QKD Network, and the Netherlands' Quantum City Internet. However, just like their classical counterparts, quantum computers and channels are susceptible to data privacy leakage. This is due to the various threats they encounter, including internal and external attacks like coherent attacks <cit.>, entangling attacks <cit.>, higher-energy state attacks <cit.>, and quantum-side-channel attacks <cit.>. Some of these attacks are unique to quantum systems and necessitate new methods to safeguard against them, surpassing the capabilities of traditional classical defense mechanisms. Consequently, safeguarding the privacy of information stored in quantum states for computing and communication purposes presents significant challenges. In the realm of classical computing, differential privacy stands out as a key method for safeguarding individual data <cit.>. It is commonly divided into two categories: centralized differential privacy, where institutions publish databases containing information about multiple individuals or respond to queries based on such databases (e.g., the US Government releasing census data or companies like Netflix sharing proprietary data for testing advanced machine learning techniques), and local differential privacy, where individuals reveal personal details, such as voluntarily on social networking platforms. The former aims to safeguard data privacy against the interrelation between datasets, while the latter seeks to protect all information pertaining to an individual. Previous studies have endeavored to expand centralized differential privacy into the realm of quantum computing by establishing various meaningful relationships between neighboring quantum data sources <cit.>. A more adaptable framework has recently been put forth <cit.> by building upon the classical concept of pufferfish privacy <cit.>. Nevertheless, practical implementation of quantum databases remains challenging in the current era of Noisy Intermediate-Scale Quantum (NISQ) computing. Presently, quantum systems are predominantly utilized in a distributed fashion. In the realm of quantum computing, users typically access quantum computers via cloud platforms (e.g. IBM's Qiskit and Amazon's Braket), enabling them to remotely submit their data for computational t`asks on these quantum machines. On the other hand, in quantum communication, the transfer and sharing of quantum states occur between remote user nodes within a communication network. Consequently, the deployment of quantum computing has predominantly focused on the local model where quantum systems gather quantum states directly from users as computational and communication resources, thereby emphasizing the critical need to protect users' information comprehensively. Therefore, integrating privacy measures into the quantum state at the client end before collection is essential to preserve individual data. 
Local differential privacy emerges as one of the most effective privacy measures in the context of local models; however, its application in quantum systems remains relatively unexplored. In this paper, we initialize the algorithmic study of quantum local differential privacy (QLDP) and establish a framework for safeguarding quantum state privacy. The key inquiries addressed are: * How can we evaluate and manage the local differential privacy of quantum mechanisms? * How can we maintain a balance between preserving utility and safeguarding privacy in quantum systems? * How can privacy be safeguarded against unique quantum attacks facilitated by quantum entanglement? To tackle these issues, we commence by defining local differential privacy for quantum computers and communication systems. Similar to the classical scenario, QLDP incorporates a parameter denoted as ϵ that governs the formal evaluation of privacy loss. We present a method for quantifying the optimization (minimum) of ϵ, denoted as ϵ^*, for any given quantum mechanism by solving an optimization problem. In order to ensure privacy, a post-processing theorem is demonstrated to ensure that clients' quantum state privacy is preserved through quantum noise mechanisms before aggregation, akin to the classical setting. Subsequently, we present the ϵ^* value of various standard quantum noise mechanisms for practical implementation. It is noteworthy that all quantum mechanisms involve trade-offs between utility and privacy, and a primary objective of algorithmic research in differential privacy is to optimize this balance. By considering two key quantum utility measures — fidelity and trace distance, we establish that the (quantum) depolarizing noise mechanism is the optimal choice, enhancing both utility measures, for all ϵ-QLDP unital quantum mechanisms. The collection of unital quantum mechanisms encompasses a diverse range of quantum processes, such as the frequently employed quantum noises — bit flip and phase flip, which will be elaborated on later. The depolarizing noise mechanism can be seen as a quantum adaptation of the classical differential privatization mechanism known as randomized response <cit.>. Furthermore, in the classical case, the correlation between data has been found to disclose more information than expected <cit.>. In the context of the current distributed quantum computers and communication systems over long distances, we present a composition theorem to ensure QLDP protection among multiple parties irrespective of the quantum states' independence, classical correlation, or entanglement (quantum correlation). Entanglement is a unique and anti-intuitive relationship over quantum states, and the attacker can use it to engage in an entangling attack and may gain privacy leakage <cit.>. Last but not least, to validate the optimization of depolarizing noise mechanisms in terms of ϵ-QLDP, we conduct both analytical and numerical analysis illustrating the trade-offs between utility and QLDP privacy across a range of standard quantum noise mechanisms, including unital noises (bit flip, phase flip, bit-phase flip, and depolarizing noises) as well as non-unital noises like (generalized) amplitude damping and phase damping <cit.>. In summary, our main contributions in QLDP framework are: * Offering a practical approach to quantifying the optimal parameter ϵ^* of QLDP for any quantum mechanism by addressing an optimization problem. 
[Theorem <ref>] * Introducing a post-processing theorem that ensures the preservation of clients' quantum state privacy during aggregation using quantum noise mechanisms, similar to classical methods. [Theorem <ref>] * Proving that the depolarizing quantum noise mechanism is the most effective privatization mechanism, optimizing both fidelity and trace distance utilities overall ϵ-QLDP unital quantum mechanisms. [Theorem <ref>] * Establishing a composition theorem to defend against entangling attacks and protect QLDP in the current scenario of quantum systems distributed over a distance and sharing entangled quantum states. [Theorem <ref>] * Conducting both analytical and numerical analyses to evaluate the balance between utility and privacy across different standard quantum noise mechanisms, emphasizing the superiority of the depolarizing noise mechanism. [Examples <ref> and <ref>] § RELATED WORKS Quantum Centralized versus Local Differential Privacy. In recent years, there has been successful progress in extending centralized differential privacy to quantum computing and information processing to enhance privacy and prevent leaks in neighboring relationships of quantum states. These neighboring relationships are defined qualitatively and quantitatively using informatic measures — local operation <cit.> and trace distance <cit.>, respectively. The former involves monic classical neighboring through a local operation to transfer a state to another, while the latter measures the distance between quantum states on a scale from 0 to 1. Moreover, the evaluation and formal verification of quantum centralized differential privacy have been established <cit.>, along with the establishment of a connection to quantum gentle measurements <cit.>. On the other hand, quantum local differential privacy (QLDP), a generalization of LDP, was introduced to ensure that two distinct quantum states passed through a quantum mechanism are indistinguishable by any measurement <cit.> to maintain quantum state privacy. While previous studies have delved into the applications of the QLDP framework in quantum statistical query models <cit.> and quantum hypothesis testing <cit.>, there has been limited direct research on implementing the QLDP approach to safeguard quantum state privacy. This paper marks the beginning of an algorithmic investigation into QLDP, aiming to measure and describe the balance between the utility and privacy of unital quantum privacy mechanisms. Our findings indicate that the depolarizing noise mechanism, a quantum generalization of the classical randomized response mechanism, emerges as the optimal solution for managing this balance effectively. Classical versus Quantum Local Differential Privacy. The concept of classical local differential privacy (LDP) has been extensively researched and implemented in real-world applications like Google's Chrome and Apple's iOS keyboard to safeguard user privacy. However, the methods used in classical LDP cannot be directly translated to the quantum realm, where QLDP emerges as an extension of LDP. Verifying that a quantum mechanism adheres to QLDP is equivalent to confirming that an infinite number of classical randomized mechanisms comply with LDP (refer to Lemma <ref>). Therefore, novel techniques, such as eigenvalue analysis of quantum mechanisms, are being developed to explore the fundamental properties of QLDP and ensure its practical efficacy in preserving the privacy of quantum states. 
Moreover, quantum systems encounter distinctive vulnerabilities, like entangling attacks, in current distributed environments. To demonstrate the effectiveness of the QLDP framework in multi-party scenarios, a composition theorem is established to counteract entangling attacks in this paper. § PRELIMINARIES In this section, we aim to present the fundamental concepts of quantum computing in a more accessible manner for the reader by using mathematical explanations. Quantum computing, in essence, represents a novel circuit model that departs from traditional classical circuits. It offers computational advantages and employs quantum communication protocols that utilize quantum channels as opposed to classical ones for transmitting quantum states. Both quantum circuits and channels operate based on quantum states as inputs, enabling them to carry classical information essential for executing classical computational tasks and transmitting classical messages. The information encoded within these quantum states can be decoded through quantum measurement processes. Therefore, the execution of comprehensive quantum computational or communication tasks involves three key components: quantum states, quantum mechanisms (quantum circuits or channels), and quantum measurements. In the subsequent sections, we will delve into each of these components individually. §.§ Quantum State Quantum Bit. In the classical realm, a bit represents the fundamental unit of data characterized by two distinct states, 0 and 1. In quantum computing, a quantum bit (qubit) is a 2-dimensional unit vector q⃗ existing within a linear complex vector space spanned by the computational basis {0⃗, 1⃗}, defined as: q⃗ ≡ (a, b)^T = a·(1, 0)^T + b·(0, 1)^T = a·0⃗ + b·1⃗. Here, 0⃗=(1, 0)^T and 1⃗=(0, 1)^T, and a and b are complex numbers satisfying the normalization condition |q⃗| ≡ √(aa^*+bb^*)=1. A qubit can be interpreted as a probabilistic distribution over {0⃗,1⃗}, corresponding to the classical states {0,1}. However, this distribution is not unique and is contingent upon the method of observation (specifically, the choice of basis). Further elucidation on this matter will be provided subsequently. Bra–ket Representation. In quantum computation and information, the bra-ket representation, also known as Dirac notation, is commonly employed to depict quantum states. This notation, utilizing angle brackets ⟨ and ⟩ along with a vertical bar |, is utilized to describe linear algebra and linear operators on complex vector spaces and their dual spaces. It is specifically tailored to simplify calculations frequently encountered in quantum mechanics, hence its widespread application in this field. In this notation, "kets" and "bras" are constructed using angle brackets and a vertical bar to represent column vectors and row vectors, respectively. For instance, in the case of a qubit, the representation is as follows: |x⟩ ≡ x⃗ = (a, b)^T and ⟨x| ≡ x⃗^† = (a^*, b^*). Here, x⃗^† denotes the complex conjugate and transpose of x⃗, and a^* represents the conjugate of the complex number a. For a concrete example, the computational basis of a qubit consists of |0⟩ and |1⟩ corresponding to the classical bits 0 and 1: |0⟩=(1, 0)^T and |1⟩=(0, 1)^T. These states, |0⟩ and |1⟩, can be combined to create larger quantum systems through their tensor product: |k,l⟩=|k⟩⊗|l⟩ for k,l∈{0,1}. |0,0⟩=(1, 0, 0, 0)^T, |0,1⟩=(0, 1, 0, 0)^T, |1,0⟩=(0, 0, 1, 0)^T, |1,1⟩=(0, 0, 0, 1)^T. The tensor products of the computational basis states |0⟩ and |1⟩ correspond to the bit strings over the classical bits 0 and 1. Pure Quantum State. 
In accordance with the bra-ket notation, a state |ψ⟩ comprising n qubits can be constructed by combining individual qubits as shown below: |ψ⟩ ≡ (a_0, a_1, …, a_2^n-1)^T, ensuring the normalization |ψ|=1, where |ψ| ≡ √(⟨ψ|ψ⟩)=√(∑_0≤ k≤ 2^n-1a_ka_k^*). Generally, |ψ⟩ is referred to as a pure state, distinct from mixed states discussed later. Essentially, a pure state |ψ⟩ consisting of n qubits can be represented as a normalized column vector in a Hilbert space (a finite-dimensional linear space) with a dimension of 2^n. Moreover, |ψ⟩ can be expressed as a sum over binary strings of length n as follows: a_0|0,0,…,0⟩+a_1|1,0,…,0⟩+⋯+ a_2^n-1|1,1,…,1⟩. Here, the a_k's represent the amplitudes of |ψ⟩. To facilitate the understanding of the reader, we summarize the linear algebra concepts pertaining to the pure state |ψ⟩ utilized in this paper using the bra-ket notation as described below: * |ψ⟩ denotes an n-qubit unit complex column vector (quantum pure state) labeled as ψ; * ⟨ψ| ≡ |ψ⟩^† represents the complex conjugate and transpose of |ψ⟩; * ⟨ψ_1|ψ_2⟩ ≡ ⟨ψ_1|·|ψ_2⟩ signifies the inner product of |ψ_1⟩ and |ψ_2⟩; * |ψ_1⟩⟨ψ_2| ≡ |ψ_1⟩·⟨ψ_2| denotes the outer product of |ψ_1⟩ and |ψ_2⟩; specifically, ψ ≡ |ψ⟩⟨ψ| represents the outer product of |ψ⟩ with itself. Quantum State Encoding. In order to apply quantum circuits to solve practical classical problems and transfer classical information through channel mechanisms, the initial step involves encoding classical data into quantum states. Specifically, a classical unit vector x⃗=(a_0,a_1,⋯,a_2^n-1) can be represented by the amplitudes of a quantum state |ψ⟩, known as amplitude encoding. For more general cases, including non-unit vectors, various encoding methods such as angle encoding and bit encoding can be employed <cit.>. These encoding techniques can be physically realized through quantum circuits, a process referred to as quantum state preparation. Angle encoding encodes a vector v̅ by rotating each qubit by an angle corresponding to one element of v̅: v̅=(v_1,v_2,…,v_n) →|v̅⟩ = ⊗_j=1^n R(v_j)|0⟩ where R(v_j) rotates qubit j by angle v_j along some axis, i.e., R can be one of R_x,R_y,R_z. This encoding uses n qubits for an n-dimensional vector but only requires simple 1-qubit rotation gates. As an example, encoding v̅=(π,π,π) via R_y rotations yields |v̅⟩=|1,1,1⟩=|1⟩⊗|1⟩⊗|1⟩. A key advantage of angle encoding is its parallelizability. Each qubit undergoes a rotation gate simultaneously, enabling encoding in constant time, as shown in the circuit below. This makes angle encoding well-suited for the current NISQ devices. Therefore, angle encoding is commonly used in the experimental implementation of quantum algorithms on existing quantum computers for solving classical problems. [Circuit: each wire starts in |0⟩ and passes through a single rotation gate R(v_j); the n gates act in parallel and produce ⊗_j=1^n R(v_j)|0⟩.] With the above encoding methods for the pure state |v̅⟩, we can simply obtain a mixed state to carry the classical data v̅: ρ_v̅=|v̅⟩⟨v̅|. Mixed Quantum State. In quantum mechanics, uncertainty is a common feature of quantum systems due to quantum noise and measurements. The concept of a quantum mixed state ρ is introduced to describe the uncertainty of possible pure quantum states. It can be expressed as ρ = ∑_k p_k|ψ_k⟩⟨ψ_k|. Here, (p_k, |ψ_k⟩)_k denotes an ensemble, signifying that the quantum state is |ψ_k⟩ with probability p_k; a small numerical sketch of this ensemble construction is given below.
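As a concrete illustration, the following minimal NumPy sketch (our own example; the probabilities and state vectors are arbitrary choices, not taken from the paper) builds a mixed state from an ensemble and checks the basic density-matrix properties discussed next:

import numpy as np

ket0 = np.array([[1.0], [0.0]], dtype=complex)   # |0>
ket1 = np.array([[0.0], [1.0]], dtype=complex)   # |1>
plus = (ket0 + ket1) / np.sqrt(2)                # |+> = (|0> + |1>)/sqrt(2)

# An arbitrary ensemble {(p_k, |psi_k>)}: |0> with probability 0.25, |+> with probability 0.75.
ensemble = [(0.25, ket0), (0.75, plus)]

# rho = sum_k p_k |psi_k><psi_k|
rho = sum(p * (psi @ psi.conj().T) for p, psi in ensemble)

print(np.allclose(rho, rho.conj().T))              # Hermitian
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))   # positive semi-definite
print(np.isclose(np.trace(rho).real, 1.0))         # unit trace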
Moreover, if |ψ_k⟩ are mutually orthogonal (i.e., ⟨ψ_i|ψ_j⟩=0 for all i≠ j), then the decomposition of ρ above represents the eigendecomposition with |ψ_k⟩ as the eigenvector of ρ corresponding to eigenvalue p_k. In the context of mixed quantum states, a pure quantum state ψ can be seen as a mixed state ψ=|ψ⟩⟨ψ|. To ensure clarity, in the subsequent discussion, the term "quantum states" will specifically denote mixed quantum states, given the broader context. From a mathematical viewpoint, a mixed quantum state ρ is a 2^n-by-2^n matrix on a n-qubit system that satisfies three conditions: * Hermitian ρ^† = ρ. Here † denotes the complex conjugate and transpose of matrices; * positive semi-definite ⟨ψ|ρ|ψ⟩≥ 0 for all pure quantum state |ψ⟩∈; * unit trace (ρ) = ∑_k⟨ψ_k|ρ|ψ_k⟩ = 1 defines the trace of ρ as the total of the diagonal elements of ρ. Here, (ρ) signifies the trace of ρ, which is the sum of the diagonal elements of ρ, and {|ψ_k⟩} represents a unit basis of that extracts the diagonal element of ρ. §.§ Quantum Mechanism Quantum mechanisms governed by quantum mechanics can be broadly categorized into two types: quantum circuits for computation and quantum channels for information transmission. Both quantum circuits and channels on a Hilbert space are mathematically described by a super-operator denoted as , which transforms an input quantum state ρ into an output state ρ' according to the equation: ρ' = (ρ). The super-operator () mathematically represents a linear mapping from ð to ð, where ð is the set of all quantum states on . According to the Kraus representation theorem <cit.>, can be represented by a finite set of matrices {E_k} on with the expression (ρ) = ∑_k E_k ρ E_k^† ∀ρ∈ð. This representation also satisfies the trace-preserving condition, which is expressed as ∑_k E_k^† E_k = I, where I is the identity matrix on . Furthermore, if (I)=∑_k E_k E_k^† = I, then is a unital quantum mechanism. In classical differential privacy, the introduction of noise mechanisms is a primary method to safeguard privacy. Similarly, in the realm of quantum privacy, we can employ analogous techniques, which will be elaborated upon later. Below, we outline some typical and standard quantum noise mechanisms represented by super-operators. In contrast to the standard quantum mechanism denoted by , we specifically highlight the quantum noise mechanism represented by the symbol . [Quantum Noise Mechanisms] Quantum computation and information fields commonly involve quantum noises on a single qubit <cit.>. General quantum noise with multiple qubits can be constructed based on these individual noises. One type of 1-qubit noise involves Pauli matrices (X, Y, and Z), which give rise to four types of errors in quantum systems similar to classical errors. These matrices consist of three 2-by-2 complex Hermitian matrices and together with the identity matrix I, they form a basis for 2-by-2 matrices. X=[ 0 1; 1 0 ], Y=[ 0 -i; i 0 ], Z=[ 1 0; 0 -1 ]. By manipulating these matrices with a probability 0≤ p≤ 1, four Pauli-type noise mechanisms can be created to control the introduction of errors, where errors occur with a probability of 1-p (quantum states are staying unchanged with a noiseless probability p). 
* X flip (Bit flip) noise mechanism: _XF(ρ)=pρ+(1-p)Xρ X using Kraus matrices {√(p)I, √(1-p)X}; * Z flip (Phase flip) noise mechanism: _ZF(ρ)=pρ+(1-p)Zρ Z using Kraus matrices {√(p)I, √(1-p)Z}; * Y flip (Bit–phase flip) noise mechanism: _YF(ρ)=pρ+(1-p)Yρ Y using Kraus matrices {√(p)I, √(1-p)Y}; * Depolarizing noise mechanism: _(ρ)=1+3p/4ρ+1-p/4(Xρ X+Yρ Y+Zρ Z) using Kraus matrices {√(1+3p)/2I, √(1-p)/2X,√(1-p)/2Y,√(1-p)/2Z}. Alternative form of _ is _(ρ)=pρ+(1-p)I/2. Furthermore, a parameter denoted as γ can be introduced to regulate the energy damping within a quantum system, with γ representing the probability associated with energy loss. Subsequently, three distinct types of damping noise mechanisms are derived: * Phase damping noise mechanism: 𝒩_PD(ρ) = N_0 ρ N_0^† + N_1 ρ N_1^† where the corresponding Kraus matrices are given by: N_0 = [ 1 0; 0 √(1-γ) ], N_1 = [ 0 0; 0 √(γ) ]. * Amplitude damping noise mechanism: 𝒩_AD(ρ) = N_0 ρ N_0^† + N_1 ρ N_1^† where the corresponding Kraus matrices are given by: N_0 = [ 1 0; 0 √(1-γ) ], N_1 = [ 0 √(γ); 0 0 ]. * Generalized amplitude damping noise mechanism: 𝒩_GAD(ρ) = N_0 ρ N_0^† + N_1 ρ N_1^† + N_2 ρ N_2^† + N_3 ρ N_3^† where the Kraus matrices are defined as follows: N_0 = √(q)[ 1 0; 0 √(1-γ) ], N_2 = √(1-q)[ 0 0; √(γ) 0 ] N_1 = √(q)[ 0 √(γ); 0 0 ], N_3 = √(1-q)[ √(1-γ) 0; 0 1 ]. Here, a probability 0 ≤ q ≤ 1 represents the likelihood of Kraus matrices {N_0, N_1} contributing to _AD. Specifically, when q=1, the generalized amplitude damping noise _ is simplified to the standard noise model _AD. Through basic calculations, it is evident that amplitude damping and generalized amplitude damping noises are non-unital, whereas all other noises are unital. §.§ Quantum Measurement To extract classical information from a quantum state ρ, the only method is to conduct a quantum measurement on ρ. In mathematical terms, a quantum measurement is a stochastic mechanism defined over a finite set ø_ of measurement outcomes: : ð→. Pr[(ρ)=k]=(M_kρ) ∀ρ∈ð. In this context, represents the set of probability distributions over the measurement outcomes set ø, {M_k}_k∈ø denotes a collection of positive semi-definite matrices on the state (Hilbert) space , and Pr[(ρ)=k] signifies the probability of observing the outcome k through the measurement . It is essential to note that the set of measurements {M_k}_k ∈ø adheres to the unity condition ∑_k M_k=I, ensuring that the sum of probabilities of all outcomes equals 1. In simpler terms, ∑_k(M_kρ) = (∑_k M_kρ) = (ρ) = 1. These types of measurements are commonly referred to as Positive Operator-Valued Measures and are extensively utilized for determining outcome probabilities without involving the post-measurement quantum states. It is important to note that following the measurement, the state undergoes collapse or alteration based on the measurement outcome k, demonstrating a fundamental distinction from classical computation. § QUANTUM THREAT MODEL Creating a threat model for local differential privacy in quantum systems requires defining possible adversaries and outlining their abilities. This section aims to establish a comprehensive threat model for our QLDP framework, highlighting the distinctive characteristics of adversaries in quantum environments. These adversaries can access quantum states directly and execute various quantum measurements in single-part scenarios, while also being capable of conducting entangling attacks through quantum entanglement in distributed multi-part settings. Single-part Setting. 
In order to enhance the resilience of our defense strategies, we are addressing the (quantum) adversary with the ability to directly manipulate the quantum states published by quantum computers and communication channels and conduct diverse quantum measurements on them, as illustrated in the right side of Fig. <ref>. This is a departure from conventional attacks, where adversaries are limited to observing the outcomes of a specific quantum measurement only <cit.>. As discussed in the previous section, classical defense mechanisms can effectively thwart such attacks due to the stochastic nature of the measurement process. However, in our more stringent scenario, the adversary can execute any quantum measurement on the state acquired from the quantum system and infer the state's confidentiality. Consequently, there is a necessity to introduce quantum defense measures to uphold local differential privacy in quantum systems, given the infinite potential quantum measurements for a quantum system. In this paper, we propose the utilization of quantum noise to protect QLDP and advocate for depolarizing noise as the optimal choice to strike a balance between utility and privacy. Multi-party Distributed Setting. Traditional studies in the classical domain often assume independence among dataset records when implementing differential privacy. However, real-world datasets typically show correlations among records, which may result in unintended data disclosures <cit.>. In the quantum realm, entanglement, a unique feature in quantum state correlation beyond classical correlation, poses new challenges in ensuring privacy. In the current distributed setting of quantum systems and communication channels, quantum states are stored in remote locations. Users can perform local measurements on a part of the system and communicate the results via classical channels for further processing. This process, known as Local Operations and Classical Communication (LOCC), has led to significant advancements in quantum information science <cit.>, benefiting applications such as state preparation, state discrimination, and entanglement transformations. Shared entangled states among multiple parties play a vital role in enhancing these applications within the LOCC framework. To enhance the practicality of our QLDP framework, we consider the adversary's access to any quantum state shared among spatially separated systems, which can be independent, classically correlated, or entangled. In our defense strategy, we introduce a composition theorem to show that the QLDP loss across multiple parties can be controlled by combining the privacy loss across all parties, even when dealing with entangled quantum states. This framework validates the effectiveness of our defense approach in real-world scenarios for protecting the privacy of distributed quantum states. § QUANTUM LOCAL DIFFERENTIAL PRIVACY In this section, we start by defining local differential privacy for quantum systems and exploring its relationship with classical local differential privacy via quantum measurements. Following that, we demonstrate the assessment of the local differential privacy in quantum mechanisms and introduce a post-processing theorem for leveraging quantum noise to safeguard the privacy of the quantum state. For the convenience of the reader, we put all proofs of theoretical results in the appendix. 
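To make the threat model concrete, the outcome probabilities that such an attacker can estimate follow the measurement rule from the preliminaries and are easy to sketch numerically (a minimal NumPy illustration; the states, the measurement, and the noise level below are arbitrary choices for exposition):

import numpy as np

# Two states a client might release: |0><0| and |1><1|.
rho   = np.diag([1.0, 0.0]).astype(complex)
sigma = np.diag([0.0, 1.0]).astype(complex)

# A projective measurement {M0, M1} available to the attacker.
M0 = np.diag([1.0, 0.0]).astype(complex)
M1 = np.eye(2, dtype=complex) - M0

def prob(M, state):
    # Born rule: Pr[outcome] = tr(M * state)
    return np.trace(M @ state).real

# Without protection the two states are perfectly distinguishable (unbounded leakage).
print(prob(M0, rho), prob(M0, sigma))                 # 1.0 0.0

# With 1-qubit depolarizing noise N(rho) = p*rho + (1-p)*I/2, the likelihood ratio is bounded.
p = 0.5
noisy = lambda s: p * s + (1 - p) * np.eye(2) / 2
print(prob(M0, noisy(rho)) / prob(M0, noisy(sigma)))  # 3.0 = (1+p)/(1-p) for p = 0.5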
In accordance with the threat model we face in Section <ref>, the attacker has the ability to access the quantum state and conduct any quantum measurement on it. To safeguard quantum state privacy under such conditions, quantum local differential privacy is introduced as follows. A quantum mechanism operating on a Hilbert space is ϵ-quantum locally differentially private (ϵ-QLDP) if, for any quantum measurement ={M_k: k∈ø_} and any quantum states ρ,σ∈ð, the following inequality holds: Pr[((ρ))∈]≤ e^ϵPr[((σ))∈] ∀⊆ø_M where Pr[((ρ))∈]=∑_k∈(M_k(ρ)). The provided definition elaborates on classical local differential privacy for randomized mechanisms (see Fig. <ref>). It builds upon the same idea as local differential privacy, asserting that despite an adversary having access to an individual's personal responses, they should not be able to discern significant personal data due to the consistent behavior of the randomized mechanism across any pair of input states. This concept is extended to the quantum realm by introducing a parameter ϵ that ensures the similarity between the outcome probabilities ∑_k∈(M_k(ρ)) and ∑_k∈(M_k(σ)) of quantum measurement for any outcome subset ⊆ø_, thereby managing the leakage of quantum state privacy under quantum mechanism . To draw a comparison between quantum and classical local differential privacy, we can utilize quantum measurement as a (classical) randomized mechanism, as explained in Section <ref>. Let us first revisit classical local differential privacy (LDP). A randomized mechanism (mapping) f is ϵ-locally differentially private (LDP) if, for any states x,y∈domain(f), the following condition is met: Pr[f(x)∈]≤ e^ϵPr[f(y)∈] ∀⊆image(f) where domain(f) and image(f) denote the domain and image of the mapping f(·), respectively. For any quantum mechanism , we can derive a randomized mechanism ∘, which represents the function composition of and . Subsequently, we can redefine QLDP of by considering the LDP of ∘. A quantum mechanism is said to be ϵ-QLDP if and only if the randomized mechanism ∘ exhibits ϵ-LDP for any quantum measurement . Based on the aforementioned outcome, it is evident that in order to provide an ϵ-QLDP guarantee for a quantum system, a quantum mechanism must ensure ϵ-LDP for all quantum measurements conducted on the system simultaneously. This presents a significant challenge in safeguarding privacy against potential attackers, given the infinite number of quantum measurements possible within the system's Hilbert space . To overcome this difficulty, we apply eigenvalue analysis to the matrices describing the evolution of quantum mechanisms to study the essential quantitative properties of QLDP in the following. §.§ Evaluating QLDP In this section, we discuss the evaluation of the parameter ϵ for QLDP. Initially, we outline a method to calculate the classical LDP value of a randomized mechanism f=∘ involving a quantum mechanism and measurement . By utilizing this approach, we can determine the parameter ϵ of QLDP using Lemma <ref>. Before proceeding, it is necessary to introduce the Heisenberg picture, which tracks the evolution of quantum measurements, as opposed to the Schrödinger picture that monitors the evolution of quantum states as shown previously in Eq.(<ref>). 
To illustrate this, when considering a quantum measurement with matrices {M_k}_k∈ø and a quantum mechanism denoted by with Kraus matrices {E_j}, given a quantum state ρ, the following calculations of the probability Pr[((ρ))=k]=(M_k(ρ)) of observing measurement outcome k can be made: (M_k∑_jE_jρ E_j^†)=(∑_j E_j^† M_jE_k ρ )=(^†(M_k) ρ ). In this context, the second equation is derived from the commutative nature of the trace operation, specifically, (AB)=(BA), where ^†(·)=∑_jE_j^†· E_j represents the dual (adjoint) mapping of aimed at describing the evolution of quantum measurement . With the Heisenberg picture and the dual mapping , we are able to analyze the LDP of randomized mechanism ℳ∘. Suppose represents a quantum mechanism and represents a quantum measurement with matrices {M_k}_k∈ø_. The randomized mechanism ∘ is ϵ-LDP if and only if 0 ≥max_⊆ø_λ_max[^†(M_)]-e^ϵλ_min[^†(M_)]. Here, M_=∑_k∈M_k, and λ_max(M) and λ_min(M) refer to the maximum and minimum eigenvalues of the positive semi-definite matrix M, respectively. Therefore, the (optimal) LDP value ϵ^* (minimizing ϵ) provided by the randomized mechanism ∘ can be determined as follows: ϵ^*(∘)=max_⊆ø_lnλ_max[^†(∑_k∈M_k)]/λ_min[^†(∑_k∈M_k)]. In cases where λ_min[^†(∑_k∈M_k)]=0 for certain ⊆ø_, the mechanism ∘ is considered to be ∞-LDP, indicating that the LDP leakage is unbounded. Here, ϵ^*(·) is considered a function determining the (optimal) LDP value for randomized mechanisms, which can also be extended to QLDP for quantum mechanisms. By utilizing Lemma <ref>, we can express ϵ^*()=max_ϵ^*(∘). Calculation of ϵ^*(∘) is essential for all quantum measurements as indicated by the equation provided. It is worth noting that a quantum measurement can be defined by approximately N^4 parameters, where N=2^n denotes the size of the state space in an n-qubit quantum system . This arises from a quantum measurement consisting of N^2 N-by-N matrices at most <cit.> and each matrix containing N^2 parameters. In the following, the objective is to gradually decrease the parameter count from N^4 to N to enhance the efficiency of QLDP evaluation in quantum mechanisms. In order to accomplish this, it can be observed that when S^* denotes the optimal subset where ϵ^*(∘) is achieved in Eq. (<ref>), we can establish a binary (outcome) quantum measurement _^*=M_^*, I-M_^*. Thus, ∘ obeys ϵ-LDP if and only if _^*∘ complies with ϵ-LDP. It is essential to recognize that M_^* is a matrix falling within the set {M:0≤ M≤ I}. For any positive semi-definite matrices M_1 and M_2, M_1≤ M_2 if and only if ⟨ψ|M_1|ψ⟩≤⟨ψ|M_2|ψ⟩ holds for all pure quantum state |ψ⟩. Moreover, any 0≤ M≤ I can function as a matrix contributing to a quantum measurement (for instance, a binary one _M with matrices {M, I-M}). Such a matrix 0≤ M≤ I is referred to as a measurement matrix. Consequently, we obtain ϵ^*()=max_0≤ M≤ Iϵ^*(_M∘). By combining these insights with Theorem <ref>, we can derive the following description of QLDP for the quantum mechanism . A quantum mechanism is ϵ-QLDP if and only if, for any measurement matrix 0≤ M≤ I, the following inequality holds: 0 ≥max_0≤ M≤ Iλ_max[^†(M)]-e^ϵλ_min[^†(M)]. Following the above result, our attention turns to managing an N-by-N matrix M, which can be expressed through N^2 parameters based on the aforementioned outcome. 
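Before reducing the parameter count further, the eigenvalue criterion above can be illustrated with a small NumPy sketch (our own illustration; it uses the Kraus matrices of the 1-qubit depolarizing noise from Example <ref>, the Heisenberg-picture dual map, and the single measurement matrix M = |0⟩⟨0|):

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Kraus matrices of the 1-qubit depolarizing noise with noiseless probability p.
p = 0.5
kraus = [np.sqrt(1 + 3 * p) / 2 * I2,
         np.sqrt(1 - p) / 2 * X,
         np.sqrt(1 - p) / 2 * Y,
         np.sqrt(1 - p) / 2 * Z]

def dual(M):
    # Heisenberg picture: N^dagger(M) = sum_k E_k^dagger M E_k
    return sum(E.conj().T @ M @ E for E in kraus)

M = np.diag([1.0, 0.0]).astype(complex)   # measurement matrix |0><0|
lam = np.linalg.eigvalsh(dual(M))         # eigenvalues in ascending order
print(np.log(lam[-1] / lam[0]))           # ~1.0986 = ln((1+p)/(1-p)) for p = 0.5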
To decrease the number of parameters even further, we can utilize the eigendecomposition of the measurement matrix M=∑_kλ_k|ψ_k⟩⟨ψ_k| with eigenvalues λ_k≥ 0, as illustrated in Eq.(<ref>), by noting that the normalized M/(M) is a quantum state. A quantum mechanism acting on a Hilbert space is ϵ-QLDP if and only if ϵ≥ϵ^*(), where the optimal QLDP value ϵ^*() of is determined by the formula: ϵ^*()=max_|ψ⟩∈ ln( λ_max[^†(ψ)] / λ_min[^†(ψ)] ). If λ_min[^†(ψ)]=0 for some ψ, then we say is ∞-QLDP, denoted as ϵ^*()=∞, indicating unbounded QLDP leakage. Here, it is worth recalling that ψ=|ψ⟩⟨ψ| represents the mixed state form corresponding to the pure state |ψ⟩. Based on the outcome mentioned above, we can calculate the QLDP value by addressing an optimization problem involving the variables represented by |ψ⟩, which encompass around N parameters for an (N=2^n)-dimensional quantum system . This approach reduces the parameter count to N, as previously discussed, for enhanced efficiency. For a 1-qubit quantum system, calculating the QLDP value ϵ^*() of a quantum mechanism can be simplified by recognizing that a 2-by-2 positive semi-definite matrix has at most 2 eigenvalues. If is a 1-qubit quantum mechanism, then ϵ^*()=ln( 1/(1-max_ψλ_max[^†(ψ)]) - 1 ). By utilizing Theorems <ref> and <ref>, we can calculate the LDP and QLDP values for randomized mechanisms and quantum mechanisms, respectively. Examining examples of quantum noise mechanisms can help illustrate the distinction between these two quantities. Consider two measurements _0 and _1 denoted by measurement matrices {|0⟩⟨0|,|1⟩⟨1|} and {1/2I,1/2I}, respectively. When subjected to the depolarizing noise mechanism _ with probability p as defined in Example <ref>, the QLDP value is given as ϵ^*(_)=ln[(1+p)/(1-p)], while the LDP values are ϵ^*(_0∘_)=ln[(1+p)/(1-p)] and ϵ^*(_1∘_)=0. It is evident that the QLDP value ϵ^*(_) can be reached using the LDP value ϵ^*(_0∘_). This suggests that by employing quantum measurement _0, an attacker can attain the maximum QLDP leakage offered by the quantum noise mechanism _. On the other hand, utilizing quantum measurement _1 alone does not provide any information on quantum state privacy, as the corresponding LDP value is 0. In the simplest scenario of quantum algorithms and channels, a quantum mechanism can be represented by a single Kraus (unitary) matrix U (UU^†=I), mapping ρ to UρU^† for all ρ∈ð. In such instances, the QLDP value can be directly obtained from Theorem <ref> in the worst-case scenario. Any unitary quantum mechanism is ∞-QLDP. As stated in Corollary <ref>, unitary quantum systems are highly vulnerable to attacks in the QLDP scenario. Therefore, it is imperative to establish a defense mechanism to safeguard the privacy of quantum states. In classical local differential privacy, one can provide a privacy guarantee by incorporating noise into the classical data before collection. Similarly, in the realm of quantum systems, this can be achieved through the application of a quantum post-processing theorem, as elaborated in the subsequent section. §.§ Post-processing Guarantee and Effective Noise Mechanisms In this section, we introduce a post-processing theorem for QLDP to demonstrate that incorporating quantum noise mechanisms is an effective strategy for safeguarding the privacy of quantum states. Additionally, we outline a method to pinpoint efficient quantum noise mechanisms that yield a measurable QLDP value. 
If a quantum mechanism is ϵ-QLDP, then the (functionally) composed quantum mechanism ∘ is still ϵ-QLDP for any quantum mechanism . According to this post-processing theorem, we can utilize a quantum noise mechanism to avoid privacy breaches. The challenge now lies in selecting the appropriate mechanism, which should offer a finite QLDP value at the very least. To address this, it is essential to initially assess whetherϵ^*()<∞for a specified quantum mechanism. Consider a quantum mechanism with Kraus matrices {E_k}_k=0^K-1 on a Hilbert space . The QLDP value ϵ^*()< ∞ holds if and only if one of the following equivalent conditions is met: * ^†⊗ı(|Ω⟩⟨Ω|)>0, where |Ω⟩=∑_j|j,j⟩⟨j,j| is the unnormalized maximal entangled state over composed quantum system ⊗ and {|j⟩}_j is a mutually orthogonal basis of ; * The linear span of {E_k^†} encompasses the entire matrix space on , indicating that any matrix on can be linearly represented by {E_k^†}. Checking either of the two conditions mentioned above has a time complexity of N^4K^2, where N represents the dimension of and K stands for the number of Kraus matrices of . Based on the second condition, we can infer from linear algebra principles that a minimum ofN^2Kraus matrices is required to cover the entire matrix space on a Hilbert spaceℋof dimensionN. Consequently, the numberKof Kraus matrices{E_k}_k=0^K-1in a quantum systemℰeffectively indicates thatϵ^*(ℰ)=∞. Consider a quantum mechanism with Kraus matrices {E_k}_k=1^K on a Hilbert space with dimension N. If K<N^2, then ϵ^*()=∞. By the corollary, it is evident that some quantum noise mechanisms outlined in Example <ref> do not yield a quantifiable QLDP value. In contrast, for the rest quantum noises, we can determine the QLDP valueϵ^*by solving the optimization problem shown in Theorem <ref> along with its corresponding corollary (Corollary <ref>). For the standard 1-qubit noises presented in Example <ref>, the (optimal) QLDP values provided by them are as follows. * For the Pauli-type noises with probability 0≤ p≤ 1, we have ϵ^*(_XF)=ϵ^*(_YF)=ϵ^*(_ZF)=∞ and ϵ^*(_)={ ln1+p/1-p if 0≤ p<1 ∞ if p=1 .. * For the damping-type noises with parameters 0≤ r,q ≤ 1, we have ϵ^*(_PD)=ϵ^*(_AD)=∞ and furthermore, for q=0,1 or r=0, ϵ^*(_)=∞. Otherwise, ϵ^*(_)=ln(1+√(1-4rq+4rq^2)/1-√(1-4rq+4rq^2)). Moreover, when considering these efficient quantum noise mechanisms such as_and_, the key issue is determining their relative effectiveness. In addressing this concern, utility serves as a crucial criterion for selecting the most suitable noise mechanism within the classical realm. Therefore, the subsequent section will focus on identifying the most optimal mechanism in terms of utility to effectively manage the trade-off between QLDP and utility. Remark. By Theorem <ref>, many quantum noise mechanisms do not yield a finite QLDP value necessary for safeguarding the privacy of quantum states. This limitation may result from the stringent confinement of QLDP as defined in Definition <ref>, which relies solely on the parameterϵ. To address this issue, we may consider loosening the requirement of achievingϵ-QLDP by introducing an additional parameterδto manage privacy breaches. This approach follows the same idea as the classical case. 
Consequently, we introduce the following revised definition: A quantum mechanism acting on a Hilbert space is (ϵ,δ)-quantum locally differentially private (QLDP) if, for any quantum measurement ={M_k: k∈ø_} and any quantum states ρ,σ∈ð, the following condition holds: ∑_k∈(M_k(ρ))≤ e^ϵ∑_k∈(M_k(σ))+δ ∀⊆ø_M. Despite the relaxation in the QLDP definition, it is observed that _XF,_ZF,_PD, and _AD in Example <ref> can only ensure (ϵ,1)-QLDP for any ϵ, which is unsurprising given that δ≤1. Consequently, this relaxed version of QLDP may not be the most suitable for quantum computing, necessitating the exploration of alternative methods to incorporate these noises effectively as a defensive mechanism. We leave this as future work. § OPTIMAL MECHANISMS FOR UTILITIES In this section, we demonstrate that the depolarizing noise mechanism is the best choice for ensuring an ϵ-QLDP guarantee while maximizing fidelity and trace distance utilities across all dimensions of the quantum state space, compared to other unital quantum mechanisms. §.§ Choices of Utilities Within our threat model framework in Section <ref>, the key emphasis lies in protecting quantum states. The incorporation of quantum noise mechanisms is crucial to achieving this objective but can potentially compromise the integrity of these quantum states. Consequently, this compromises the precision and utility of quantum states for specific analytical purposes. Therefore, it becomes essential to assess the effectiveness of a quantum noise mechanism in retaining information pertaining to quantum states. Traditionally, the evaluation of such utilities is often conducted through fidelity and anti-trace distance within the quantum realm. The two utilities of quantum noise mechanisms are based on the fidelity and trace distance of a pair of quantum states. The fidelity F(ρ,σ) and trace distance T(ρ,σ) for two quantum states ρ and σ are defined as follows: F(ρ,σ) ≡ [tr(√(√(ρ)σ√(ρ)))]^2 and T(ρ,σ) ≡ (1/2)tr(|ρ-σ|). For any Hermitian matrix H (a matrix H is Hermitian when H^†=H, so all eigenvalues of H are real numbers), e.g., ρ and ρ-σ, with eigendecomposition ∑_kλ_k|ψ_k⟩⟨ψ_k|, the square root of H is defined as √(H)=∑_k√(λ_k)|ψ_k⟩⟨ψ_k| and the absolute value of H is denoted as |H|=∑_k|λ_k||ψ_k⟩⟨ψ_k|. The fidelity of two states quantifies their similarity, while the trace distance measures their dissimilarity. To see this, for example, F(ρ,σ)=1 (equivalently, T(ρ,σ)=0) if and only if ρ=σ, indicating that ρ and σ are indistinguishable. On the other hand, F(ρ,σ)=0 (equivalently, T(ρ,σ)=1) if and only if ρ is orthogonal to σ, implying that ρ and σ are entirely distinguishable <cit.>. We can now establish the concepts of fidelity and anti-trace distance utilities for quantum mechanisms with respect to all quantum states in the worst-case scenario <cit.>. The fidelity utility of a quantum mechanism on a Hilbert space is defined as F() ≡ min_ρ∈ð F((ρ), ρ). By definition, the fidelity utility F() is determined by the minimal information (utility) preserved by on all quantum states based on the fidelity measure. The anti-trace distance utility of a quantum mechanism on a Hilbert space is defined as: T̂() ≡ 1 - T(), where T() ≡ max_ρ∈ð T((ρ), ρ). The quantity T() measures the maximal noisy effect caused by on all quantum states in terms of the trace distance. Consequently, T̂() evaluates the minimal information (utility) preserved by on all quantum states. To estimate the two utilities, by the joint concavity of fidelity and trace distance <cit.>, we only need to focus on pure quantum states as follows. 
F()=min_|ψ⟩∈ F((ψ), ψ)=min_|ψ⟩∈⟨ψ|(ψ)|ψ⟩ and T̂()=1-max_|ψ⟩∈ T((ψ), ψ). With the above equations, we can analytically compute the fidelity and anti-trace distance utilities for the quantum noise mechanisms introduced in Example <ref>. For the standard 1-qubit noises presented in Example <ref>, the fidelity and anti-trace distance utilities provided by them are as follows. * For the Pauli-type noises with noiseless probability 0≤ p≤ 1, we have F(_UF)=T̂(_UF)=p for U∈{X, Y, Z}, and F(_)=T̂(_)=(1+p)/2. * For the damping-type noises with parameters 0≤ r,q ≤ 1, we have F(_PD)=T̂(_PD)=(1+√(1-r))/2, F(_AD)=T̂(_AD)=1-r, and F(_)=T̂(_)=1-qr if q≥1/2, and 1-(1-q)r if q<1/2. As we can see, the parameters p, q, and r can be utilized to regulate the utilities offered by quantum noise mechanisms. When considering the QLDP value illustrated in Theorem <ref>, determining the most suitable option becomes an intriguing challenge. By utilizing Theorems <ref> and <ref>, we can establish the quantitative connection between utility and privacy concerning the 1-qubit depolarizing and generalized amplitude damping noise mechanisms with 0≤ p<1, 0<q<1, and 0<r≤1. It is important to note that the other noise mechanisms in Example <ref> do not yield significant QLDP values in this context, as shown in Theorem <ref>. F(_)=T̂(_)=e^ϵ/(e^ϵ+1) for ϵ=ϵ^*(_), and F(_)=T̂(_)=1-e^ϵ/[(1-q)(e^ϵ+1)^2] if q≥1/2, 1-e^ϵ/[q(e^ϵ+1)^2] if 0<q<1/2, and 1 if q=0. Here, ϵ=ϵ^*(_). The forms above illustrate the balance between the utilities and the QLDP value of the two quantum noises. Additionally, we can compare these quantum noises within the QLDP framework. Further elaboration on this comparison will be presented in the subsequent analysis and numerical experiments in Section <ref>. Remark. Although Theorem <ref> demonstrates that for any standard 1-qubit quantum noise mechanism introduced in Example <ref>, the fidelity utility is equal to the anti-trace distance utility, it remains uncertain whether this equality holds true for all quantum mechanisms. Instead, a general inequality holds for any quantum mechanism: T̂()≤ F(). To elaborate, the relationship between trace distance and fidelity states that for any quantum states ρ and σ, the trace distance provides upper and lower bounds on the fidelity, as indicated by the Fuchs–van de Graaf inequalities <cit.>: 1-√(F(ρ ,σ ))≤ T(ρ ,σ )≤√(1-F(ρ ,σ )). When at least one of the states is a pure state ψ, the lower bound can be further refined to: 1-F(ρ,ψ)≤ T(ρ,ψ). Consequently, through the arbitrariness of ρ, we deduce 1-T((ψ),ψ)≤ F((ψ),ψ). By optimizing over all pure states independently on both sides, we derive min_ψ[1-T((ψ),ψ)]≤min_ψF((ψ),ψ). This leads to the conclusion T̂()≤ F(). It is important to note that the trace distance is often more computationally feasible to evaluate or constrain compared to fidelity, making these relationships particularly valuable. In this paper, we examine the balance between the utility and privacy of quantum noise mechanisms. We do not currently address the equivalence of the two utilities and consider it a topic for future research. §.§ Optimal Mechanisms This section is dedicated to the identification of optimal unital quantum mechanisms that maximize information-theoretic utilities while upholding a constant QLDP value. Specifically, we focus on fidelity and trace distance utilities within an n-qubit quantum system for practical implementation. 
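As a sanity check before the formal optimization problems, the 1-qubit depolarizing trade-off derived above (F = T̂ = e^ϵ/(e^ϵ+1) with ϵ = ln[(1+p)/(1-p)]) can be verified with a short NumPy sketch (our own illustration; the random sampling is only a heuristic stand-in for the exact worst case over pure states):

import numpy as np

rng = np.random.default_rng(0)
p = 0.5
depolarize = lambda rho: p * rho + (1 - p) * np.eye(2) / 2

fids, anti_traces = [], []
for _ in range(2000):
    # Draw a random pure state |psi> (normalized complex Gaussian vector).
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = (v / np.linalg.norm(v)).reshape(2, 1)
    rho = psi @ psi.conj().T
    out = depolarize(rho)
    fids.append((psi.conj().T @ out @ psi).real.item())   # F(N(psi), psi) = <psi|N(psi)|psi>
    diff = np.linalg.eigvalsh(out - rho)
    anti_traces.append(1 - 0.5 * np.abs(diff).sum())      # 1 - T(N(psi), psi)

eps = np.log((1 + p) / (1 - p))                           # QLDP value of the depolarizing noise
print(min(fids), min(anti_traces))                        # both ~0.75
print((1 + p) / 2, np.exp(eps) / (np.exp(eps) + 1))       # closed forms: 0.75 and 0.75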
We address two main problems formally: [Fidelity utility optimization] In the case of fidelity utility, the objective is to maximize fidelity utility among all ϵ-QLDP quantum mechanisms, given by: max_ F() subject to ∈_ϵ. Here, _ϵ={:ϵ^*()=ϵ and (I)=I} represents the set of unital quantum mechanisms with a QLDP value of ϵ. [Anti-trace distance utility optimization] In the case of trace distance utility, the goal is to maximize the anti-trace distance utility T̂ among all ϵ-QLDP quantum mechanisms, given by: max_ T̂() subject to ∈_ϵ. To address the two optimization issues mentioned above, we initially establish the optimal bounds for both utilities by utilizing the QLDP value as depicted below. For any unital quantum mechanism with ϵ^*()<∞, we have F()≤e^ϵ/e^ϵ+2^n-1 T̂()≤e^ϵ/e^ϵ+2^n-1. Here ϵ=ϵ^*() and either of these equalities is obtained if and only if ∈^*, where the set of optimal quantum mechanisms ^* is defined as {_η:^†_η(ρ)=ηρ+1-η/2^n-1(I-ρ) and 1>η≥1/2^n}. Hence, based on the aforementioned outcome, our primary focus should be on the set^*to maximize utilities. Let's examine the scenario of a 1-qubit system, wheren=1. In this instance,^*is represented by the set {_η: ^†_η(ρ)=(2η-1)ρ+(1-η)I and 1>η≥1/2}. If we letp=2η-1, we can express^*as {: ^†(ρ)=pρ+1-p/2I and 1>p≥ 0}. Therefore, in a 1-qubit system,^*is comprised of depolarizing quantum noise processes with a probabilityp<1, where depolarizing noise_is its own adjoint. Extending this concept to a generaln-qubit quantum system, the parameterpdefined asp=η-1-η/2^n-1results in: ^*={:^†(ρ)=pρ+(1-p)I/2^n and 0≤ p <1}. Under this definition, all quantum processes within^*are essentiallyn-qubit depolarizing noises with a probability less than 1. The formal representation of a depolarizing noise, denoted as_^n, acting with a probability ofpon ann-qubit quantum state space<cit.>, is given by _^n(ρ)=pρ+(1-p)I/2^n ∀ρ∈ð. Furthermore, the adjoint of_^nis itself. Specifically, the depolarizing noise for a 1-qubit system is labeled as_^1. Therefore, the optimal set^*in Theorem <ref> encompasses all depolarizing noises with a probabilityp<1. It is worth noting that the adjustment of utilities and QLDP values of depolarizing noise can be achieved by manipulating the probabilityp. For a depolarizing quantum noise _^n with probability 0≤ p <1 on a n-qubit system, we have * QLDP value is ϵ^*(_^n)=ln[1+2^n(1/1-p-1)]. * Fidelity and anti-trace distance utilities both are F(_^n)=T̂(_^n)=(2^n-1)p+1/2^n. This result indicates that as the increment ofpoccurs, the QLDP value and utilities also increase. This demonstrates a clear quantitative trade-off between the utilities and privacy of depolarizing noise mechanisms. By combining these values based onp, we can address the optimization problems related to fidelity and trace distance utility (Problems <ref> and <ref>) as follows. The optimal solutions for the fidelity and anti-trace distance utility optimization problems (Problems <ref> and <ref>) in an n-qubit system are given by F(^*)=T̂(^*)=e^ϵ/e^ϵ+2^n-1, where ^* represents the quantum mechanism leading to the optimal values. All feasible solutions contribute to a set denoted as ^*_ϵ, ^*_ϵ={_^n:_^n(ρ)=e^ϵ-1/e^ϵ+2^n-1ρ+I/e^ϵ+2^n-1}. According to the above findings, the depolarizing noise mechanism emerges as the preferred option for balancing the trade-off between QLDP and either fidelity or trace distance utility. Finally, it is evident that depolarizing noise serves as a quantum extension of classical randomized responses <cit.>. 
To illustrate this point, we first consider the depolarizing noise mechanism denoted as𝒩_^n. According to Theorem <ref>, the behavior of𝒩^n_with QLDP valueϵcan be expressed as follows: 𝒩_^n(ρ) = e^ϵ/e^ϵ+2^n-1ρ + (1/e^ϵ+2^n-1)(I-ρ). Next, we introduce a quantum measurement{ψ_k}_0≤k≤2^n-1comprising a collection of mutually orthogonal pure states|ψ_k⟩. Considering the observation of quantum stateψ_kgiven quantum stateψ_junder depolarizing noise𝒩_^n, the conditional probability can be formulated as: Pr(X=k|Y=j) = Tr(ψ_k𝒩_^n(ψ_j)) = { e^ϵ/e^ϵ+2^n-1 if k=j 1/e^ϵ+2^n-1 if k ≠ j. . In this scenario, the depolarizing noise involves a straightforward randomization across the measurement outcome{0,1,⋯,2^n-1}, where the actual data is disclosed with a probability ofe^ϵ/e^ϵ+2^n-1. This mirrors the concept of classical randomized response as discussed in <cit.>. Therefore, depolarizing noise can be seen as an extension of randomized response. This extension provides a clearer insight into the effectiveness of safeguarding QLDP, given that the randomized response mechanism has been shown to optimize the KL divergence utility across all classicalϵ-LDP mechanisms <cit.>. Remark. In certain extreme scenarios, our objective is to minimize the QLDP value concerning a fixed anti-trace distance or fidelity utility. According to Theorem <ref>, it holds for any unital quantum mechanismthat: ϵ^*()≥ln[(2^n-1)(1/1-F()-1)] ϵ^*()≥ln[(2^n-1)(1/1-T̂()-1)]. Consequently, the minimum value ofϵ^*()is only attained when∈^*. Thus, depolarizing noise emerges as the optimal mechanism for minimizing the QLDP value across all quantum mechanisms with either a fixed fidelity or anti-trace distance utility. § COMPOSITION THEOREM FOR DISTRIBUTED QUANTUM SYSTEMS This section presents a composition theorem for scenarios involving distributed quantum computing, ensuring that the cumulative leakage loss of the combined systems is additive. Initially focusing on a bipartite quantum system, we consider all types of relationships—be it independent, classically related, or entangled—between states distributed across each subsystem. Subsequently, we extend this analysis to encompass anym-party system. In a bipartite distributed quantum system_A⊗_B, quantum states are categorized into three classes based on the relationship between the two sides: * Independent: The system's global state is a product state such as ρ^A⊗ρ^B, where ρ^A and ρ^B represent quantum states on subsystems _A and _B, respectively. * (Classically) Correlated: The global state of the system is a convex combination of product states ∑_ip_iρ_i^A⊗ρ_i^B with ∑_ip_i=1 where ρ_i^A and ρ_i^B denote quantum states on subsystems _A and _B, respectively. * Entangled: The global state of the system cannot be expressed in a classically correlated form. Independent subsystems are essentially a subset of correlated subsystems, withρ_A⊗ρ_Brepresenting a simple convex combination of product states, indicating classical correlation. In contrast, entangled states exhibit distinct characteristics compared to classical systems and are crucial in quantum secure communication, teleportation, and accelerating computational processes. Exploring a quantum entangled state can provide further insight into its nature. [Entangled States] Consider a 2-qubit pure state |φ⟩ in a bipartite 1-qubit system _A⊗_B given by: |φ⟩=|0⟩_A⊗|0⟩_B+|1⟩_A⊗|1⟩_B/√(2) The state φ=|φ⟩⟨φ| is entangled. Determining the entanglement of a mixed state is generally challenging. 
It has been proven that the general bipartite scenario is NP-hard <cit.>. For the 2-qubit case, the Positive Partial Transpose (PPT) condition <cit.> serves as a necessary and sufficient criterion for identifying entangled states. This condition can be applied to verify the entanglement of |φ⟩. Suppose Alice acts as an observer for system A, and Bob as an observer for system B. They share the entangled state φ over the composed system of A and B. If Alice conducts a measurement on subsystem A using the measurement operators {|0⟩⟨0|,|1⟩⟨1|}, two possible outcomes, 0 and 1, will occur with equal probabilities: * If the outcome is 0, the system's state collapses to |0⟩_A⊗|0⟩_B. * If the outcome is 1, the system's state collapses to |1⟩_A⊗|1⟩_B. In the event of the former outcome, any subsequent measurement conducted by Bob using the same measurement basis will always yield outcome 0. Conversely, if the latter outcome occurs (Alice measures 1), Bob's measurement will unequivocally return 1. Consequently, Alice's local measurement on system A affects system B, regardless of any spatial separation between the two systems. Entangled states like these present new obstacles in safeguarding quantum state information, as an attacker could manipulate one part of the system through measurement to compromise privacy elsewhere. Luckily, our subsequent findings introduce a composition theorem that guarantees the effectiveness of QLDP in securing distributed quantum systems. In distributed quantum systems, it is not feasible to conduct a global measurement on the entire system due to the spatial separation of subsystems. However, it is possible to carry out a joint measurement by performing local measurements on each subsystem. More formally, the joint measurement on a bipartite system_A⊗_Binvolves measurement matrices denoted as M_k=∑_i,jM_k,i^A⊗ M_k,j^B where ∑_k M_k=I. Here,{M_k,i^A}and{M_k,j^B}represent quantum measurements on_Aand_B, respectively. We can redefine QLDP as stated in Definition <ref> to provide a clearer formal definition of QLDP for distributed quantum systems. Let _k be a quantum mechanism on Hilbert space _k for 0≤ k≤ t-1. The composed quantum mechanism _0⊗_1⊗⋯⊗_t-1 is ϵ-quantum locally differentially private (ϵ-QLDP) if for any joint quantum measurement ={M_j}_ j∈ø_ and any quantum states ρ,σ∈𝒟(_0⊗_1⊗⋯⊗_t-1), we have ∑_k∈(M_k(ρ))≤ e^ϵ∑_k∈(M_k(σ)) ∀⊆ø_. Next, we establish a composition theorem for all quantum states of multiple systems, encompassing product quantum states, classically correlated quantum states, and entangled quantum states, along with joint quantum measurements. This pertains to the setting of LOCC addressed in our paper. We demonstrate the additivity of QLDP in the distributed scenario, and a summary is provided in Table <ref>. If two quantum mechanisms _1 and _2 on Hilbert space _1 and _2 are ϵ_1 and ϵ_2-QLDP, respectively, then the composited mechanism _1⊗_2 is (ϵ_1+ϵ_2)-QLDP. If quantum mechanisms _k is ϵ_k-QLDP for 0≤ k≤ t-1, then _0⊗_1⊗⋯⊗_t-1 is (∑_kϵ_k)-QLDP. § EXPERIMENTAL EVALUATION In this section, we conduct numerical experiments on three aspects to demonstrate the efficacy of depolarizing noise in our QLDP framework. * Utility Optimization: In comparison to different (non-unital) general amplitude damping noise mechanisms, we establish the optimization of fidelity (anti-trace distance) utility across varying QLDP values. 
* Trade-off: By adjusting the noiseless probability p to attain the targeted QLDP value and depolarizing utility, we can observe the trade-off between these two aspects. * Qubit Number: As we scale up the number of qubits in quantum systems, we notice a significant exponential decline in fidelity utility for a specific QLDP value. It is important to mention that since the fidelity and anti-trace distance utilities of depolarizing channels are equal, we will focus solely on the fidelity utility in the upcoming experiments. §.§ Utility Optimization We have demonstrated the optimization of the fidelity and anti-trace distance utilities with depolarizing noise within unital quantum mechanisms in Theorem <ref>. The next question to address is whether the effectiveness of the depolarizing noise mechanism surpasses that of non-unital quantum mechanisms, such as generalized amplitude damping noises, in this context. To explore this, we compare the fidelity utilities of 1-qubit depolarizing noise and generalized amplitude damping noise mechanisms by varying their QLDP values. To begin, we revisit the explicit form of depolarizing noise presented below: F(_)=e^ϵ/e^ϵ+1 Subsequently, we establish a similar function for generalized amplitude damping noise. For generalized amplitude damping noise characterized by parameters r and q as introduced in Example <ref>, and considering the form of QLDP value for _ in Example <ref>, we can express r as a function of q for a given QLDP value ϵ of _ as follows: r=e^ϵ/(q-q^2)(e^ϵ+1)^2. Then combining with the fidelity utility representation in Example <ref>, we have F(_)={ 1-e^ϵ/(1-q)(e^ϵ+1)^2 if q≥1/2 1-e^ϵ/q(e^ϵ+1)^2 if 0<q< 1/2 1 if q=0 . By utilizing Eqs.(<ref>) and (<ref>), we are able to graph the fidelity utility function using the QLDP value as the variable for 1-qubit depolarizing noise and various generalized amplitude damping noise types characterized by parameterq. Refer to Fig. <ref> for a visual comparison. It is evident that the utility derived from depolarizing noise surpasses that of any generalized amplitude damping noise at any given QLDP value. This observation underscores the effectiveness of depolarizing noise in safeguarding quantum state information. §.§ Trade-off Lemma <ref> showcases how adjusting the probabilitypcan lead to varying levels of the QLDP and fidelity utility for anyn-qubit depolarizing noise. Theorem <ref> establishes a trade-off between fidelity utility and QLDP values. By leveraging these observations, we investigate the trade-off concerningn-qubit depolarizing noise for values ofnfrom 1 to 5. The results are depicted in Fig. <ref>. §.§ Qubit Number As indicated by Theorem <ref>, the fidelity utility of depolarizing noise diminishes exponentially as the number of qubits increases for a specified QLDP valueϵ. This trend is visually represented in Fig. <ref> forϵ=0,1,2,3,4. It is apparent that applying depolarizing noise to a small number (2-4) of qubits is sufficient to maintain high utility for practical applications and achieve the desired QLDP value. § CONCLUSION In this paper, we introduce the exploration of local differential privacy in quantum computing from an algorithmic perspective. Initially, we demonstrate the feasibility of utilizing quantum noise mechanisms to uphold quantum local differential privacy (QLDP). 
Subsequently, we establish that the depolarizing noise mechanism, which acts as a quantum counterpart to the classical randomized response mechanism, stands out as the optimal choice for maximizing both fidelity and trace distance utility among all unital quantum mechanisms with a constant QLDP value. Additionally, we introduce a composition theorem for entangled quantum systems, ensuring the effectiveness of local differential privacy strategies in the context of distributed quantum computing environments. For future research directions, it would be intriguing to address the optimization challenges posed by non-unital quantum mechanisms. Moreover, investigating QLDP within a local framework that allows global measurements to observe the state across multiple quantum subsystems presents an interesting avenue. This prompts the question of whether a composition theorem holds in such scenarios. Another promising research avenue involves examining the implications of enabling multiple rounds of attacks involving entanglements, as opposed to the single-round attacks considered in this study under the framework of LOCC. Multiple rounds of attacks could potentially lead to increased privacy leakage, necessitating a comprehensive exploration of defense strategies to safeguard quantum local differential privacy in this more intricate setting. unsrt § APPENDICES §.§ Proof of Lemma <ref> Proof. This directly follows the definitions of QLDP in Definition <ref> and classical LDP in Definition <ref>.§.§ Proof of Theorem <ref> Proof. For anyρ≠σ∈ð, we can representρ-σ=Δ_+-Δ_-as a decomposition into orthogonal positive and negative parts, whereΔ_±> 0andΔ_+Δ_-=0. In the subsequent proof, we utilize the property0<(Δ_+)=(Δ_-)≤1stemming from(ρ-σ)=0andρ-σ=Δ_+-Δ_-, and thusΔ_±/(Δ_±)are quantum states. The following derivation is based on these: [^†(M_)(ρ-e^ϵσ)] = [^†(M_)(ρ-σ+σ-e^ϵσ)] = [^†(M_)(Δ_+-Δ_-+σ-e^ϵσ)] = (^†(M_)Δ_+)-(^†(M_)Δ_-)-(e^ϵ-1)(^†(M_)σ) = (Δ_+)[(^†(M_)Δ_+/(Δ_+))-(^†(M_)Δ_-/(Δ_-))]-(e^ϵ-1)(^†(M_)σ) ≤ (Δ_+)[max_ρ_1∈ð(^†(M_)ρ_1)-min_ρ_2∈ð(^†(M_)ρ_2)] -(e^ϵ-1)min_ρ_3∈ð(^†(M_)ρ_3) = (Δ_+)[λ_max(^†(M_))-λ_min(^†(M_))]-(e^ϵ-1)λ_min(^†(M_)) ≤ λ_max(^†(M_))-e^ϵλ_min(^†(M_)) = ⟨ψ^*|^†(M_)|ψ^*⟩-e^ϵ⟨ϕ^*|^†(M_)|ϕ^*⟩ = [^†(M_)(ψ^*-e^ϵϕ^*)] Here,|ψ^*⟩and|ϕ^*⟩are the normalized eigenvectors (pure state) of^†(M_)corresponding to the maximal and minimal eigenvalues, respectively. Consequently, we have: max_ρ,σ∈ð[^†(M_)(ρ-e^ϵσ)]=λ_max(^†(M_))-e^ϵλ_min(^†(M_)) This completes the proof based on the definition of QLDP provided in Definition <ref>.§.§ Proof of Lemma <ref> Proof. For any measurement_Mwith matrices{M,I-M}ϵ^*(_M∘)=max_A∈{I,M,I-M}lnλ_max[^†(A)]/λ_min[^†(A)] Noting that{I,M,I-M}⊆{M:0≤M≤I}, the set of all quantum measurement matrices. Combining thatϵ^*()=max_0≤M≤Iϵ^*(_M∘),we can complete the proof.§.§ Proof of Theorem <ref> Proof. By Lemma <ref>,isϵ-QLDP if and only if for any measurement matrixM, 0 ≥λ_max(^†(M))-e^ϵλ_min(^†(M)). With the eigendecomposition ofM=∑_kλ_kψ_kand eigenvaluesλ_k≥0, we have 0 ≥λ_max(^†(∑_kλ_kψ_k))-e^ϵλ_min(^†(∑_kλ_kψ_k)). By the linearity of, we have 0 ≥λ_max[∑_kλ_k^†(ψ_k)]-e^ϵλ_min[∑_kλ_k^†(ψ_k)]. Noting the linear algebra fact that for any positive semi-definite matricesAandB, λ_max(A+B)≤λ_max(A)+λ_max(B) λ_min(A+B)≥λ_min(A)+λ_min(B). Applying these to the summation∑_kλ_k^†(ψ_k)of positive semi-definite matrices^†(ψ_k), we obtain λ_max[∑_kλ_k^†(ψ_k)]-e^ϵλ_min[∑_kλ_k^†(ψ_k)] ≤ ∑_kλ_k[λ_max(^†(ψ_k))-e^ϵλ_min(^†(ψ_k))] ≤ (∑_jλ_j)max_k[λ_max(^†(ψ_k))-e^ϵλ_min(^†(ψ_k))]. 
Therefore, by noting that eachψ_kis also a measurement matrix, we can claim that 0 ≥λ_max(^†(M))-e^ϵλ_min(^†(M)) ∀ 0≤ M≤ I. if and only if 0 ≥λ_max(^†(ψ))-e^ϵλ_min(^†(ψ)) ∀|ψ⟩∈. This completes the proof and we have ϵ^*()=max_|ψ⟩∈lnλ_max(^†(ψ))/λ_min(^†(ψ)).§.§ Poof of Theorem <ref> Proof. We first note that ifMis a quantum measurement matrix, then^†(M)is also a quantum measurement matrix. Furthermore, it follows that{^†(M):0< M≤I}⊆{M:0< M≤I}, thus concluding the proof based on Lemma <ref>.§.§ Proof of Lemma <ref> Proof. We first prove thatϵ^*()<∞if and only the first condition is satisfied. Initially, according to Theorem <ref>, it is evident thatϵ^*()<∞if and only ifλ_min(^†(ψ))>0for any|ψ⟩ ∈. This condition is equivalent to stating that^†(ψ)>0holds true for any|ψ⟩ ∈based on the eigendecomposition of^†(ψ)withλ_min(^†(ψ))being the minimum eigenvalue. Considering the linearity of^†, the condition^†(ψ)>0for any|ψ⟩ ∈is equivalent to^†(ρ)>0for allρ∈ð. Such an^†is specifically referred to as strictly positive, which can be validated by examining whether the Choi-Jamiolkowski matrix^†⊗ı(|Ω⟩⟨Ω|)of^†is strictly positive, i.e.,^†⊗ı(|Ω⟩⟨Ω|)>0, as detailed in <cit.>. For a quantum mechanismwith Kraus matrices{E_k}_k=0^K-1, the computation of^†⊗ı(|Ω⟩⟨Ω|)requires𝒪(N^6K)operations, as shown below: ^†⊗ı(|Ω⟩⟨Ω|)=∑_k=0^K-1(E_k^†⊗ I)|Ω⟩⟨Ω|(E_k⊗ I). The complexity arises from the fact that there areKsummations and each of these involves twoN^2-by-N^2matrix multiplications with a computation cost ofN^6K. Consequently, verifying if^†⊗ı(|Ω⟩⟨Ω|) > 0can be achieved by computing its eigenvalues, incurring a computational cost of𝒪(N^6). Thus the total complexity is𝒪(N^8).We proceed to demonstrate the equivalence of the two conditions. Initially, we observe that^†⊗ı(|Ω⟩⟨Ω|) > 0is the same as stating that the set formed by{(E_k^†⊗I)|Ω⟩}spans the entire vector space⊗. This equivalence arises from the fact that the mappingE→(E⊗I)|Ω⟩establishes a linear one-to-one relationship between the matrix space inand vector space⊗. Addressing the issue of complexity to check the second condition, this leads to determining the rankrof the linear space generated by{E_k}_k=0^K-1, achievable inN^6K. Consequently,r=N^2holds if and only if the second condition is met.§.§ Proof of Theorem <ref> Proof. According to Corollary <ref>, we can verify the instances where quantum noises do not result in a finite QLDP value as claimed. We only need to calculateϵ^*(_)for0≤p<1andϵ^*(_)for0<q<1and0<r≤1in the following. Recall that for any state|ψ⟩, the following relation holds: _(ψ) = pψ + (1-p)I/2. It can be observed that|ψ⟩and|ψ^⟩represent the normalized eigenvectors associated with eigenvalues(1+p)/2and(1-p)/2, respectively. Here,|ψ^⟩represent a quantum state orthogonally to|ψ⟩, i.e.,⟨ψ|ψ^⟩=0. Consequently, for0≤p<1, we deriveϵ^*(_) = ln1+p/1-pbased on Theorem <ref>. To determineϵ^*(_), according to Corollary <ref>, our task involves solving the optimization problemmax_ψλ_max(^†_(ψ)). Initially, we represent a 1-qubit state|ψ⟩with two parametersaandbas |ψ⟩=[ a; b ] such that aa^*+bb^*=1. To find the eigenvalues of^†_(ψ), we start by computing the characteristic polynomialdet[λI-^†_(ψ)]and obtain det[λ I-^†_(ψ)]=λ^2-λ+r(1-r)(aa^*)^2-2(1-r)rp(aa^*)^2+rp-r^2p^2. We introducex=aa^*, leading to det[λ I-^†_(ψ)]=λ^2-λ+r(1-r)x^2-2(1-r)rpx^2+rp-r^2p^2. By solvingdet[λI-^†_(ψ)]=0, we find the maximum eigenvalue as λ_max[^†_(ψ)]=1+ √(1-4[r(1-r)x^2-2(1-r)rpx+rp-r^2p^2])/2. 
The value ofmax_ψλ_max[^†_(ψ)]is achieved atx=p, resulting in the optimal value 1+√(1-4rp+4rp^2)/2. Hence, in accordance with Corollary <ref>, we conclude that ϵ^*(_)=ln(1+√(1-4rq+4rq^2)/1-√(1-4rq+4rq^2)).§.§ Proof of Theorem <ref> Proof. First, we deal with_XF, _YFand_ZF. They share the following same structure for their corresponding to super-operators: _UF(ρ)=pρ+(1-p)Uρ U^†. Then, the fidelity utility is F(_UF)=p+min_|ψ⟩∈|⟨ψ|U|ψ⟩|^2.⟨ψ|U|ψ⟩=0can be obtained by|ψ⟩=|0⟩forU=X,|ψ⟩=|0⟩forU=Y, and|ψ⟩=|0⟩+|1⟩/√(2)forU=Z. ThusF(_UF)=pforU∈{X,Y,Z}. The anti-trace distance utility is T(_UF)=(1-p)max_|ψ⟩∈1/2(|Uψ U-ψ|)1/2(|UψU-ψ|)=1can be obtained by|ψ⟩=|0⟩forU=X,|ψ⟩=|0⟩forU=Y, and|ψ⟩=|0⟩+|1⟩/√(2)forU=Z. Therefore,T̂(_UF)=pforU∈{X,Y,Z}. Now we consider the utilities of depolarizing noise. Recall one form of the noise is _(ρ)=pρ+(1-p)I/2. Then F(_)=1+p/2 T(_)=1-p/2⇒T̂(_)=1+p/2. Now we move to deal with damping noise mechanisms by parameterizing pure state|ψ⟩. Any 1-qubit|ψ⟩can be represented by two parametersa,bas |ψ⟩=[ a; b ] with aa^*+bb^*=1. Then we compute fidelity and anti-trace distance utilities by searching on suchaandb. ψ=|ψ⟩⟨ψ|=[ aa^* ab^*; a^*b bb^* ]. For phase damping_PD, _PD(ψ)=[ aa^* √(1-r)ab^*; √(1-r)a^*b bb^* ]. Then we have the eigenvalues of_PD(ψ)-ψare: λ[_PD(ψ)-ψ]=±√((1-√(1-r))^2aa^*bb^*) Byaa^*+bb^*=1, we have the maximal value of√((1-√(1-r))^2aa^*bb^*)is1-√(1-r)/2foraa^*=1/2. ThusT̂()=1+√(1-r)/2. On the other hand,F(_PD(ψ),ψ)is ⟨ψ|_PD(ψ)|ψ⟩=(aa^*)^2+2√(1-r)aa^*bb^*+(bb^*)^2. Letx=aa^*with0≤x≤1and thenbb^*=1-xas⟨ψ|ψ⟩=aa^*+bb^*=1. Then the above equation is F(_PD(ψ),ψ)=2(1-√(1-r))x^2-2(1-√(1-r))x+1. Therefore, the minimal value of fidelityF(_PD(ψ),ψ)is achieved atx=1/2and thus the utility fidelity isF(_PD)=1+√(1-r)/2. For_, _(ψ)=[ (1-r)aa^*+rq √(1-r)ab^*; √(1-r)a^*b raa^*+bb^*-rq ]. Then we have the eigenvalues of_(ψ)-ψare: λ[_(ψ)-ψ]=±√((rq-raa^*)^2+(√(1-r)-1)^2aa^*bb^*) Letx=aa^*with0≤x≤1and thenbb^*=1-xas⟨ψ|ψ⟩=aa^*+bb^*=1. Then the magnitude ofλ[_(ψ)-ψ]is √((r^2+2-2√(1-r)-r)x^2-(2qr^2+2-2√(1-r)-r)x+r^2q^2) As(r^2+2-2√(1-r)-r)≥0, the maximal value of the above equation is achieved atx=0or 1. For x=1, |λ[_(ψ)-ψ]|=(1-q)r For x=0, |λ[_(ψ)-ψ]|=qr. Thus T(_)={ qr if q≥1/2 (1-q)r if q< 1/2. Then T̂(_)={ 1-qr if q≥1/2 1-(1-q)r if q< 1/2. On the other hand,F(_GAD(ψ),ψ)=⟨ψ|_GAD(ψ)|ψ⟩is (1-r)(aa^*)^2+rqaa^*+(2√(1-r)+r)aa^*bb^*+(bb^*)^2-rqbb^*. Letx=aa^*with0≤x≤1and thenbb^*=1-xas⟨ψ|ψ⟩=aa^*+bb^*=1. Then the above equation is 2(1-r-√(1-r))x^2+(2rq+r+2√(1-r)-2)x+1-rq. Therefore, the minimal value of fidelityF(_GAD(ψ),ψ)is achieved atx=0or 1 and thus the utility fidelity is F(_)={ 1-qr if q≥1/2 1-(1-q)r if q< 1/2. .the eigenvalues of (ψ)-ψ is ±√(r^2(aa^*)/4^2+(1-√(1-r))^2aa^*bb^*). Then the optimal solution is . For _GAD, the fidelity F((ψ),ψ)=⟨ψ|(ψ)|ψ⟩ is as follows. 1/2[(2-r)(aa^*)^2+(2r+4√(1-r))aa^*bb^*+(2-r)(bb^*)^2] Let x=aa^* and then bb^*=1-x as ⟨ψ|ψ⟩=aa^*+bb^*=1. Then the above value is 1/2[(4-4r-4√(1-r))x^2-(4-4r-4√(1-r))x+2-r] The minimal value of the fidelity is achieved at x=1/2. Consequently, the fidelity utility F()=min_ψF((ψ),ψ)=1/2+√(1-r)/2. For _GAD, the trace distance T((ψ),ψ)=1/2(|(ψ)-ψ|), where (ψ)-ψ=[ rbb^*-raa^*/2 (√(1-r)-1)ab^*; (√(1-r)-1)a^*b -rbb^*-raa^*/2 ] For finding the eigenvalues of (ψ)-ψ, we compute λ I-((ψ)-ψ)=[ λ-rbb^*-raa^*/2 (1-√(1-r))ab^*; (1-√(1-r))a^*b λ+rbb^*-raa^*/2 ] For |λ I-((ψ)-ψ)|=0 we have λ^2=(rbb^*-raa^*)^2/4+(1-√(1-r))^2aa^*bb^* Let x=aa^* and then bb^*=1-x as ⟨ψ|ψ⟩=aa^*+bb^*=1. 
λ^2=(r^2+r+2√(1-r)-2)x^2-(r^2+r+2√(1-r)-2)x+r^2/4 The maximal value of λ^2 is achieved at x=1/2 and the optimal value is 1/4(2-r-2√(1-r)). Consequently, the trace distance of is T()=max_ψT((ψ),ψ)=|λ|=1/2(1-√(1-r)). Then the trace distance utility is T̂()=1-T()=1/2(1+√(1-r)) §.§ Proof of Lemma <ref> 1-sup_|ψ⟩^†(ψ)-ψ_≤inf_|ψ⟩F(^†(ψ),ψ)≤sup_|ψ⟩λ_max(^†(ψ)) max_|ψ⟩λ_max(^†(ψ))≤max_|ψ⟩λ_max(^†(ψ))/λ_min(^†(ϕ)) for any |ϕ⟩ Let ψ^* be the optimal solution of max_|ψ⟩λ_max(^†(ψ)). Then λ_max(^†(ψ^*))/λ_min(^†(ψ^*))≤max_ψλ_max(^†(ψ))/λ_min(^†(ψ))=exp(ϵ^*()) §.§ Proof of Theorem <ref> Proof. LetDbe the dimension of the state spacewhichis working on. Then, by Theorem <ref>, we have e^ϵ^*() = max_|ψ⟩λ_max(^†(ψ))/λ_min(^†(ψ)) ≥ max_|ψ⟩(D-1)λ_max(^†(ψ))/1-λ_max(^†(ψ)) = max_|ψ⟩(D-1)(1/1-λ_max(^†(ψ))-1) ≥ max_|ψ⟩(D-1)(1/1-⟨ψ|^†(ψ)|ψ⟩-1) = (D-1)(1/1-max_ψ⟨ψ|^†(ψ)|ψ⟩-1) ≥ (D-1)(1/1-F()-1). Then we have the inequality for fidelity utility forD=2^n: F()≤e^ϵ^*()/e^ϵ^*()+2^n-1. In the above, the first inequality results from that for anyD-by-Dpositive semi-definite matrixMwith(M)=1(eigenvalues ofMare all non-negative and the summation of them is 1), we have λ_min(M)≤1-λ_max(M)/D-1. The second inequality comes from λ_max(^†(ψ))=max_ϕ⟨ϕ|^†(ψ)|ϕ⟩. The third inequality results from F()=F(^†)=min_ψ⟨ψ|^†(ψ)|ψ⟩≤max_ψ⟨ψ|^†(ψ)|ψ⟩ And we observe that the above two equalities are obtained if and only if ^†(ψ)=η_0ψ+∑_1≤ k≤ D-1η_kψ_k ∀|ψ⟩∈ where the set{|ψ⟩,|ψ_1⟩,|ψ_2⟩,…, |ψ_D-1⟩}of pure state are mutually orthogonal andη_0≥η_1=η_2=⋯=η_D-1.That is ^†_η(ψ)=ηψ+1-η/D-1(I-ψ). whereη≥1/Dis a constant and independent onψ. Furthermore, by noting that()≥1-(), we can get the second inequality for anti-trace distance fidelity.§.§ Proof of Lemma <ref> Proof. This can be accomplished directly using Theorem <ref> and the definitions of fidelity and anti-trace distance utilities, similar to how we handle 1-qubit depolarizing noise previously.§.§ Proof of Theorem <ref> Proof. It directly follows from Theorem <ref> and Lemma <ref>.with p=η-1-p/2^n, i.e., for any ∈^*, we have that (ρ)=(λ-1-p/D)ρ+(1-λ+1-p/D)I/D On the other hand, let we consider any depolarizing quantum channel _Dep _Dep(ρ)=pρ+(1-p)I/D It is easy to see that _Dep∈^* as we can rewrite it as _Dep(ψ)=(p+1-p/D)ψ+1-p/D(I-ψ) §.§ Proof of Theorem <ref> Proof. By Theorem <ref>, ϵ^*(_1⊗_2)=max_|ψ⟩∈_1⊗_2lnλ_max[_1^†⊗_2^†(ψ)]/λ_min[_1^†⊗_2^†(ψ)]. Note that by the proof of Theorem <ref>,ψis a product measurement as we only consider the joint measurement. Let|ψ⟩=|ψ_1⟩⊗|ψ_2⟩. Then we have ϵ^*(_1⊗_2)=lnmax_|ψ_1⟩∈_1λ_max[_1^†(ψ_1)]/λ_min[_1^†(ψ_1)]max_|ψ_2⟩∈_2λ_max[_2^†(ψ_2)]/λ_min[_2^†(ψ_2)]=ϵ^*(_1)+ϵ^*(_2).
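As an informal numerical cross-check of the closed-form QLDP value of 1-qubit depolarizing noise and of the additivity established above (restricted, as in the proof, to product pure states and local measurements), one may run the following sketch; the function names and the sampling strategy are our own illustrative choices and are not part of the formal development.

# Sketch: check eps*(N_Dep) = ln((1+p)/(1-p)) for 1-qubit depolarizing
# noise and the additivity eps*(N_1 (x) N_2) = eps*(N_1) + eps*(N_2).
import numpy as np

def random_pure_state(dim, rng):
    """Return |psi><psi| for a random pure state."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def dep_adjoint(rho, p):
    """Adjoint of the depolarizing channel (the channel is self-adjoint)."""
    d = rho.shape[0]
    return p * rho + (1 - p) * np.trace(rho) * np.eye(d) / d

def qldp_dep(p, n_samples=200, seed=0):
    """Estimate eps* = max_psi ln(lambda_max / lambda_min) of N^dagger(psi)."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_samples):
        ev = np.linalg.eigvalsh(dep_adjoint(random_pure_state(2, rng), p))
        best = max(best, np.log(ev[-1] / ev[0]))
    return best

p1, p2 = 0.6, 0.3
e1, e2 = qldp_dep(p1), qldp_dep(p2)
assert np.isclose(e1, np.log((1 + p1) / (1 - p1)))   # closed form for 1 qubit
# Additivity over product pure states: the adjoint of N_1 (x) N_2 factorizes,
# so the extreme eigenvalue ratios multiply and the logs add.
rng = np.random.default_rng(1)
best = 0.0
for _ in range(200):
    a = dep_adjoint(random_pure_state(2, rng), p1)
    b = dep_adjoint(random_pure_state(2, rng), p2)
    ev = np.linalg.eigvalsh(np.kron(a, b))
    best = max(best, np.log(ev[-1] / ev[0]))
assert np.isclose(best, e1 + e2)
print(e1, e2, best)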
http://arxiv.org/abs/2407.13216v1
20240718065526
QuIIL at T3 challenge: Towards Automation in Life-Saving Intervention Procedures from First-Person View
[ "Trinh T. L. Vuong", "Doanh C. Bui", "Jin Tae Kwak" ]
cs.CV
[ "cs.CV" ]
Vuong et al. QuIIL at T3 challenge School of Electrical Engineering, Korea University *First authors contributed equally. QuIIL at T3 challenge: Towards Automation in Life-Saving Intervention Procedures from First-Person View Trinh T.L. Vuong 1* Doanh C. Bui 1* Jin Tae Kwak 1 ======================================================================================================= § ABSTRACT In this paper, we present our solutions for a spectrum of automation tasks in life-saving intervention procedures within the Trauma THOMPSON (T3) Challenge, encompassing action recognition, action anticipation, and Visual Question Answering (VQA). For action recognition and anticipation, we propose a pre-processing strategy that samples and stitches multiple inputs into a single image and then incorporates momentum- and attention-based knowledge distillation to improve the performance of the two tasks. For training, we present an action dictionary-guided design, which consistently yields the most favorable results across our experiments. In the realm of VQA, we leverage object-level features and deploy co-attention networks to train both object and question features. Notably, we introduce a novel frame-question cross-attention mechanism at the network's core for enhanced performance. Our solutions achieve the 2^nd rank in action recognition and anticipation tasks and 1^st rank in the VQA task. The source code is available at <https://github.com/QuIIL/QuIIL_thompson_solution>. § INTRODUCTION Scene understanding and analysis are essential tasks in AI systems that provide insights into a scene and allow for the prediction of appropriate actions. This is, in particular, critical in developing medical computer-assisted systems such as remote instruction systems in uncontrolled and austere environments, especially for life-saving procedures <cit.>. Such capabilities can provide valuable support to both first responders and individuals lacking specialized training in the medical scenario. For instance, in a situation where a patient is experiencing bleeding caused by a blast injury, the necessary response could involve the application of a tourniquet. In this particular scenario, it is necessary to develop an AI-based computer-assisted system from a first-person perspective. This is fundamentally crucial since it precisely emulates the visual perception of an individual possessing expertise in first aid, thereby offering an unequivocally clear and comprehensive account of the events transpiring around an injured person. However, the resources for the mentioned problem are still restricted. While the field of first-person view scene analysis for daily-living activities has seen a proliferation of datasets <cit.>, the availability of such invaluable resources in the context of medical applications remains limited. Leveraging advanced methods and transferring pre-trained weights from other domains for medical procedure prediction is one of the potential solutions to overcome dataset limitations. In this manuscript, we present a comprehensive description of our approaches to address three tasks in the Trauma THOMPSON (T3) challenge, including action recognition, anticipation, and visual question answering. To provide a succinct overview, the key methods can be outlined as follows: * In the action recognition and anticipation tasks, we introduce a pre-processing strategy and action dictionary-guided (ADG) learning design. 
We also exploit the pre-trained model, which is adapted to life-saving procedures, through an advanced knowledge distillation method, i.e., Momentum contrastive learning with Multi-head Attention (MoMA) <cit.>. This ADG-learning strategy results in our best-performing experiments compared to single-task and multi-task learning. * In the visual question answering task, we leverage the object features extracted by the pre-trained VinVL model <cit.>, which excels at capturing first-view camera context. To further enhance the approach, we introduce frame-question cross-attention based on the MCAN baseline network <cit.>. This integration effectively identifies crucial object features, leading to more accurate answers to questions. We show that the proposed solution obtained the best performance among our three experiments. The rest of this paper is structured as follows: Section <ref> provides a comprehensive overview of our solutions for the three tasks; Section <ref> reports the results and discussions; Section <ref> summarizes our report for the T3 challenge. § METHODOLOGY §.§ Track 1 - Task 1 & 2: Action Recognition & Anticipation Overview of the proposed method, as shown in Figure <ref> includes four components: the pre-processing module, feature embedding, attention contrastive distillation, and action dictionary-guided (ADG) classification head. §.§.§ Problem definition. For two tasks involving action recognition and anticipation, given a sequence of N^f frames 𝐅_i = {f^(i)_k}_k=1^N^f, the objective is twofold: to recognize the current action ŷ_𝐢 and to predict the subsequent action ŷ_𝐢+1 based on the next sequence of frames 𝐅_i+1. Notably, action recognition (𝐲_i) relies on two predicted components: the verb (v̂_i) and the noun (n̂_i), a structure similarly mirrored in 𝐲_i+1. §.§.§ Action Dictionary-guided Learning This section outlines our Action Dictionary-guided (ADG) design for the action recognition and anticipation tasks. We begin with two sets: a set of verb labels, denoted as V = {v_1, v_2, …, v_k}, and a set of noun labels, denoted as N = {n_1, n_2, …, n_h}. From these sets, we create a dictionary of unique action labels, represented as A = {(v_1, n_1), (v_2, n_2), …, (v_k, n_h)}. In our ADG design, we construct an extended action label dictionary, denoted as  = {a_1: (v_1, n_1), a_2: (v_2, n_2), …, a_g: (v_k, n_h)}. The model's training involves learning these action labels from Â. During inference, the model maps these action labels back to their respective verb and noun classes. In our dataset, we work with 41 verb classes (k=41), 45 noun classes (h=45), and a total of 114 unique action classes (g=114). Illustrative examples of verb-noun pairings and corresponding actions for action recognition and anticipation are shown in Table <ref>. To assess the effectiveness of our learning design, we also compare it with a single-task classification approach, where the model separately learns verb and noun classes. In this setup, two models for verb and noun classification are trained independently. However, it is important to note that verb and noun labels often exhibit relationships within the same video context. Learning them separately tends to yield suboptimal results. To address this, we adopt a multi-task learning approach, allowing the model to share valuable features between the tasks and consequently improve performance. In our multi-task model, we employ a shared encoder for both verb and noun classification tasks, each with its own dedicated classification head. 
In contrast, combination task learning requires only a single action classifier head. In all classification tasks, including ADG, single-task, and multi-task learning, we optimize using the cross-entropy loss, denoted as ℒ_CE. It is worth highlighting that the single-task learning approach necessitates training two separate models to obtain action predictions. In multi-task learning, one encoder and two classification heads must be trained. In contrast, our ADG learning approach is the most parameter-efficient option, as it requires only one encoder and one classification head to be trained.
§.§.§ Video-to-image pre-processing. In the training phase, given a sequence of frames 𝐅_i including N^f frames f^(i)_k ∈ℝ^3 × H × W, we first randomly select N^f' < N^f frames (specifically, N^f' = 16). Each frame is resized to one-fourth (1/4) of its original size and randomly cropped to 224×224, and the RandAugment augmentation strategy is then applied to each frame. Then, the N^f' selected augmented frames are stitched into a √(N^f')×√(N^f') window 𝐅_i' ∈ℝ^ 3 × (√(N^f')× H) × ( √(N^f')× W). Herein, 𝐅_i' is treated as a single input image when processing 𝐅_i' through the backbone network. In the testing phase, a similar strategy is applied to each testing video 𝐅_i to obtain 𝐅_i'. However, for each frame, we start by center cropping a region of (224 × 1.3) × (224 × 1.3) and subsequently perform a random crop of 224 × 224 within this area. No RandAugment is applied during this process. This procedure is repeated n=30 times for each input video.
§.§.§ Image-MoMA. We came to the realization that training the CNN backbone network from scratch did not yield satisfactory performance. Consequently, we opted to leverage the insights from the MoMA study <cit.>, which allows our model to inherit knowledge from a CNN backbone pre-trained on the ImageNet dataset, thereby enhancing its performance on the challenging tasks at hand. First, we convert the input video into a single image composed of 4 × 4 frames. We refer to this as Image-MoMA, as we flatten a video into a single image and feed it into image-based deep learning models, which differs from other video processing methods such as Video-ViT <cit.>. There are two types of networks in the MoMA setting: the momentum teacher f^(T), which was trained on the ImageNet dataset, and the student model f^(S), which is trained on the target-domain dataset provided by the T3 challenge organizer. The f^(T) network is frozen during the training process, and only the weights of f^(S) are updated. The MoMA framework actively exploits the relationships between samples within a batch during each iteration. In the k^th iteration, the two networks process a batch denoted as 𝐁_k = {𝐅^(k)_i}_i=1^bs. Subsequently, f^(T) and f^(S) generate two distinct sets of embedding vectors, 𝐄^T_k and 𝐄^S_k, respectively. Following this, a multi-head self-attention mechanism is applied to each of these sets, effectively recalibrating the importance of the embedding vectors contained within them. There exists a memory bank 𝐪𝐮𝐞𝐮𝐞 with a length of L_N, designed to store a set of negative samples, denoted as 𝐪𝐮𝐞𝐮𝐞^𝐓 = {𝐄^T_l}_l=1^L_N. Contrastive learning is then employed between 𝐄^S_k and 𝐪𝐮𝐞𝐮𝐞^𝐓 to facilitate knowledge transfer from the teacher network f^(T) to the student network f^(S). In accordance with the MoMA framework, a classifier is devised to learn the action class.
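The frame-stitching step of the video-to-image pre-processing above can be sketched as follows; the channel-first layout and crop size follow the description, while the 1/4 resizing and RandAugment are omitted for brevity, and the helper name is our own.

import numpy as np

def stitch_frames(frames, grid=4, crop=224, rng=None):
    """frames: array of shape (N, 3, H, W) with N >= grid*grid.
    Returns one stitched image of shape (3, grid*crop, grid*crop)."""
    rng = rng or np.random.default_rng()
    # Sample grid*grid frames, keeping their temporal order.
    idx = np.sort(rng.choice(len(frames), size=grid * grid, replace=False))
    tiles = []
    for i in idx:
        _, h, w = frames[i].shape
        y = rng.integers(0, h - crop + 1)
        x = rng.integers(0, w - crop + 1)
        tiles.append(frames[i][:, y:y + crop, x:x + crop])
    # Tile the crops row by row into a single large image.
    rows = [np.concatenate(tiles[r * grid:(r + 1) * grid], axis=2)
            for r in range(grid)]
    return np.concatenate(rows, axis=1)

# Example: 64 dummy frames of size 256x256 -> one 3x896x896 input "image".
video = np.random.rand(64, 3, 256, 256).astype(np.float32)
print(stitch_frames(video).shape)   # (3, 896, 896)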
This study uses the EfficientNet-B0 <cit.> as the backbone for all the Image-MoMA-based models. §.§.§ Objective function. The attention contrastive distillation is optimized using the Noise-Contrastive Estimation loss ℒ_ InfoNCE that is proposed in <cit.>. The ℒ_ InfoNCE for an image x_i, with features feature e^S ∈𝐄^T_k from student model and feature e^T∈𝐄^T_k from teacher model, is formulated as: ℒ_ InfoNCE = -logexp(e_i^S · e_i^T /τ)/∑_i,j=0^L_Nexp(e_i^S · e_j^queue^T/τ), where τ is a temperature hyper-parameter (τ=0.07). By minimizing InfoNCE, we maximize the mutual information between the positive pairs, i.e., 𝐄^T_k and 𝐄^S_k, and minimize the similarity between e^S and negative samples from 𝐪𝐮𝐞𝐮𝐞^𝐓. For supervised training in the action recognition task, we utilize pairs (𝐅_i, 𝐲_i) provided by the organizers. In the context of action anticipation, we straightforwardly construct pairs (𝐅_i, 𝐲_i+1). The supervised tasks are optimized using cross-entropy loss ℒ_CE. The overall objective function is as follows: ℒ = αℒ_CE + βℒ_ InfoNCE, where α = β = 1. §.§.§ MoMA pre-trained weight. First, we utilized ImageNet pre-trained weights to initialize both the student and teacher networks. Secondly, drawing insights from MoMA, we also employed pre-trained weights from the action recognition task to transfer knowledge to both action recognition and action anticipation. The final results for the two tasks were obtained by aggregating outcomes from two sources: MoMA initialized with ImageNet pre-trained weights (ImageNet-MoMA) and MoMA initialized with action recognition task pre-trained weights (ActionRec-MoMA). Due to time constraints, the AGD learning for action recognition was based on an aggregation of four runs with ImageNet-MoMA and two runs with ActionRec-MoMA, while action anticipation utilized two runs for each. §.§.§ Comparative experiment. We compare our method with Video-ViT pre-trained on masked video distillation <cit.>. Similar to our image-based method, we randomly sampled 16 frames and fit them into Video-ViT. Video-ViT includes a 3D-CNN layer patch embedding and ViT-transform encoder, and one classification head in single-task and two classifier heads in multi-task classification. This study uses the ViT-small <cit.> as the backbone for all the Video-ViT models. We also compare our ADG learning with single-task and multi-task learning on the MoMA framework. §.§ Track 2 - Task 1: Visual Question Answering (VQA) In this section, we first introduce the definition of the problem, and then we present clearly our proposed solution for this task. Figure <ref> illustrates the overview of our solution. §.§.§ Problem definition. For the VQA task of the T3 challenge, the goal is to answer the questions provided per frame in videos. Hence, we can formulate the problem as: given a video 𝐕 including N frames, and a set of 𝐐^(i) = {q^(i)_k}^N^q_k=1 including N^q questions for each frame I_i, the target is predicting the answers for those questions: [ â^(i)_k = P(a^(i)_k|I_i, q^(i)_k) = (I_i, q^(i)_k),; Â^(i) = {â^(i)_k}^N^q_k=1,; ] where â^(i)_k is the predicted answer for k-th question of i^th frame v_i, and then we obtain the set of answers Â^(i) for set of questions 𝐐^(i). Finally, we can obtain the full answers  for a video 𝐈. In this section, we present our approach for building the VQA model ((·)) for this task. §.§.§ Video-to-frame processing. The organizer supplied the dataset, which comprises 125 training videos and 71 testing videos. 
Each video has a frame length ranging from 809 to 19583, and for each frame, there are 8 to 15 associated questions, with each question having a single answer. In total, there are 15 types of questions and 16 types of answers, with varying numbers of possible answers for each question type. Although question-answer annotations are provided at the frame level, we observed that a sequence of multiple frames can share the same questions and answers. Given the high number of frames per video, utilizing all of them would be time-consuming. Consequently, for each video, we opted to sample one frame per every 15 frames, resulting in a significant reduction in the number of frames used for training while still maintaining answer accuracy. It is important to note that during inference, we are required to predict answers on a per-frame basis. To achieve this, we partition a video into N^L sequential frames, with each sequence comprising 15 frames. Notably, the final frame within each sequence serves as a representative for all preceding 14 frames, shouldering the responsibility of answering all questions pertaining to those 14 frames as well as itself. §.§.§ Feature extraction. Feature extraction is always a crucial aspect of good performance in vision-language (VL) tasks <cit.>. Two types of features are used for VL tasks: grid features and object features. Grid features capture the global context and can be considered the same as features extracted from CNNs, while object features, including feature vectors for specific detected objects on an image, can be regarded as local features. In the context of Medical Visual Question Answering (VQA), particularly concerning life-saving intervention (LSI) procedures, it is crucial to emphasize the significance of object recognition, including hands, tools, and other relevant items. This recognition is essential to ensure the provision of accurate and potential answers. Hence, we choose the VinVL pre-trained model <cit.>. This model is based on Faster R-CNN <cit.> but trained on large-scale datasets. For feature extraction in VQA of the T3 challenge, we utilized this pre-trained model. For a video 𝐕 including N frames, for each frame, we obtain the set of N^o × 2048 feature vectors, where N^o is the number of objects detected by the model. §.§.§ Deep modular co-attention networks (MCAN). For the VQA model, we adopted the Deep Modular Co-attention Networks (MCAN) <cit.> as the baseline. This model was the crucial milestone for the VQA problem. Given the set of object feature vectors for i^th frame ℱ∈ℝ^N^o × 2048, and the text question q, first, the GloVe pre-trained word embeddings <cit.> are leveraged to provide embedding vectors for question words. Then, from q, the embedding vectors for questions 𝒬∈ℝ^N^s × 300 are obtained, where N^s is the number of words/tokens of a question. Then, an LSTM layer is used to process Q and map the original GloVe dimension to the hidden dimension (dim). There are two model flows f^(F) and f^(Q) which are based on multi-head self-attention () <cit.> to process ℱ and 𝒬. Both f^(F) and f^(Q) have a stack of L layers. However, in f^(Q), there is a special type of called guided-attention followed by each normal self-attention. i^th guided-attention considered ℱ_l as query and 𝒬_L as key and value, where ℱ_l is the output of i^th normal , and 𝒬_L is the final output from f^(Q). In this manner, the model learns to prioritize object features that are relevant to the context of the question. 
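A compact sketch of one such block in the object-feature flow (self-attention followed by question-guided attention) is given below; here q_feats plays the role of the fully encoded question output 𝒬_L, and the hidden size, number of heads, and the omission of the feed-forward sublayer are simplifications of ours rather than the authors' released code.

import torch
import torch.nn as nn

class GuidedAttentionBlock(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.self_att = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.guided_att = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, obj_feats, q_feats):
        # obj_feats: (B, N_o, dim) object features; q_feats: (B, N_s, dim)
        x, _ = self.self_att(obj_feats, obj_feats, obj_feats)
        obj_feats = self.norm1(obj_feats + x)
        # Guided attention: objects (query) attend to question tokens (key/value).
        x, _ = self.guided_att(obj_feats, q_feats, q_feats)
        return self.norm2(obj_feats + x)

block = GuidedAttentionBlock()
objs = torch.randn(2, 36, 512)   # e.g. 36 detected objects per frame
ques = torch.randn(2, 14, 512)   # e.g. 14 question tokens
print(block(objs, ques).shape)   # torch.Size([2, 36, 512])

Stacking L such blocks lets every object token repeatedly re-weight itself against the already encoded question, which is the effect described above.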
After processed by f^(F) and f^(Q), ℱ_L ∈ℝ^N^o × dim and 𝒬_L ∈ℝ^N^s × dim are obtained. Then, a dimension reduction design is applied, which can be formulated as follows: [ α^q = (^(Q)(𝒬_L)), α^f = (^(F)(ℱ_L)),; Q = α^q^𝖳⊙𝒬_L, F = α^f^𝖳⊙ℱ_L, ] where ^(Q) and ^(F) are MLP layers to reduce the dimension from dim to 1. Then, (·) is used to compute the weights per object or token in ℱ^L or 𝒬^L, i.e., α^q and α^f. Finally, matrix multiplication ⊙ is then performed to multiply α^q with 𝒬^L, and α^f with ℱ^L, to obtained Q∈ℝ^dim and F∈ℝ^dim. Then, the summation of Q and F is performed to obtain fused features 𝒵, and a (·) is applied to map the fused dim-dimensional features 𝒵 to N-dimensional logits, where N is the number of all possible answers in training set. Then, the Binary Cross-entropy (BCE) loss function is used to train the N-way answer classification problem. §.§.§ Frame-question cross-attention (FQCA). As mentioned in the previous section, the dimension reduction design produced question features 𝒬̃ and object features ℱ̃ now are only single feature vectors and then are used to predict the answer. Herein, we design the cross-attention at the head of f^(F) and f^(Q), that helps the single question feature vector 𝒬̃∈ℝ^dim can be aware of the set of object features in a frame ℱ∈ℝ^N^o × 2048, and the same with ℱ̃∈ℝ^dim and 𝒬∈ℝ^N^s × dim. Note that, ℱ and 𝒬 mentioned here are object features and question features before being passed to f^(F) and f^(Q). We argue that pre-trained VinVL and GloVe are strong enough to produce the raw representations for a frame and a question, so it makes sense to let ℱ̃ and 𝒬̃ interpolate information from those raw features. We are inspired by the cross-attention mechanism <cit.>, which allows only one token to be performed self-attention to a sequence of tokens, to design the frame-question cross-attention. In terms of Q and ℱ^L, the cross-attention can be formulated as: [ ℱ_L' = (ℱ_L),; Q' = (Q, ℱ_L'),; Q^” = Q + ^(1)(^(0)(Q) + ((Q, Q', Q'))), ] where (·) is a skip-projection to reduce the original dimension of ℱ_L to match 𝒬̃'; 𝒬̃' ∈ℝ^(N^o + 1) × dim is the concatenation of 𝒬̃ and ℱ_L'; (·) is the multi-head self-attention that takes 𝒬̃ as query and 𝒬̃' as key and value; ^(0) and ^(1) are both alignment projection layers; (·) is the normalization layer. After Eq. <ref>, Q^”∈ℝ^dim is still a dim-dimensional feature vector, but aware of the context of sequence of object feature vectors in ℱ_L, and the awareness is compressed into dim dimensions. We also perform the same Eq. <ref> in terms of F and 𝒬_L, to obtain F^”∈ℝ^dim. After that, as same as MCAN <cit.>, the summation of Q^” and F^” is computed, followed by a (·) to product N-dimensional logits. § RESULTS §.§ Track 1 - Task 1 & 2: Action Recognition & Anticipation We report our performance for action recognition and anticipation tasks in Table <ref> and <ref>, respectively. As previously mentioned, action recognition and anticipation encompass two sub-tasks: verb and noun predictions. We face the choice of either training two separate models for these tasks or pursuing an end-to-end multi-task learning approach. Since the end goal of the two tasks is to predict the action that requires the model to predict correctly both verb and noun at the same time, our ADG showed robust performance compared to both single-task and multi-task learning. §.§.§ Action recognition. Initially, we implemented the Video-ViT framework proposed by <cit.> and trained two distinct networks for each task. 
However, the results exhibited subpar performance, with only 5.74% accuracy in terms of 𝐀𝐜𝐜_action, despite relatively better performance per task, achieving 23.17% and 20.20% accuracy for 𝐀𝐜𝐜_verb and 𝐀𝐜𝐜_noun, respectively. This led us to the realization that excelling in the individual tasks may not suffice. Consequently, we devised a multi-task learning strategy for the Video-ViT model, resulting in improved overall performance, yielding an 𝐀𝐜𝐜_action of 8.32%. We applied the same multi-task learning strategy to the MoMA setting proposed by <cit.>, leading to a performance boost compared to the multi-task Video-ViT model, with an increase of +5.74% in 𝐀𝐜𝐜_action. Finally, our ADG learning improved the 𝐀𝐜𝐜_action accuracy to 15.45%, even though both 𝐀𝐜𝐜_verb and 𝐀𝐜𝐜_noun are inferior compared to the multi-task Image-MoMA. The results show the importance of simple changes in task design that target the final goal of action recognition and are able to improve performance without consuming extra computational resources. Our ADG learning approach utilizes the fewest parameters when compared to both single-task learning and multi-task learning.
§.§.§ Action anticipation. Our Track 1 - Task 2 Action anticipation results are shown in Table <ref>. Similar to Track 1 - Task 1 Action recognition, in Track 1 - Task 2 Action anticipation, the single-task Video-ViT framework <cit.> also obtains a relatively good accuracy on verb and noun classification, but only 4.21% 𝐀𝐜𝐜_action accuracy. Multi-task Video-ViT slightly improves the 𝐀𝐜𝐜_action accuracy to 5.89%. Since the action anticipation task is more challenging than action recognition, training the Video-ViT framework on a small amount of training data is even harder. The image-based method with contrastive learning proposed by <cit.> improves the multi-task performance to 10.53% 𝐀𝐜𝐜_action accuracy. Finally, our ADG learning improves the 𝐀𝐜𝐜_action accuracy to 12.84%.
§.§ Track 2 - Task 1: Visual Question Answering (VQA) While the organizer provided 125 official training videos, we were able to use only 52 of them due to issues with frame extraction from some videos. These 52 videos were split into training and validation sets in an 8:2 ratio. Subsequently, the training set was employed to train the MCAN model using VinVL object features, and validation was performed on the validation set. The highest-performing model on the validation samples was selected for submission on the official testing set, which includes 71 videos. In terms of evaluation metrics, we utilized the BLEU score <cit.> and Accuracy. The BLEU score is employed to measure the similarity between two sequences in n-gram words, a common practice in vision-language tasks. Accuracy, on the other hand, quantifies the total number of correct answers. It is worth noting that the BLEU score is only reported for our validation set (𝐁@1_𝐯𝐚𝐥 and 𝐁@4_𝐯𝐚𝐥), as the submission system did not provide BLEU scores for the official testing set. Two variants of the MCAN model are available: a version with a hidden dimension of 512 and a version with a hidden dimension of 1024. Higher dimensions enable the model to capture more information but may result in reduced computational efficiency. Our performance evaluation includes results for both the small and large versions of the MCAN model. As depicted in Table <ref>, MCAN- surpasses MCAN- by 0.35% and 2.3% in terms of 𝐀𝐜𝐜_𝐯𝐚𝐥 and 𝐀𝐜𝐜_𝐭𝐞𝐬𝐭, respectively, albeit exhibiting decreases of 3.43 and 1.43 in the BLEU metrics 𝐁@1_𝐯𝐚𝐥 and 𝐁@4_𝐯𝐚𝐥, respectively.
Accuracy is reported on both the validation set (𝐀𝐜𝐜_val) and the official testing set (𝐀𝐜𝐜_test). Upon the incorporation of our proposed FQCA mechanism, we achieved 88.78% and 74.35% in 𝐀𝐜𝐜_𝐯𝐚𝐥 and 𝐀𝐜𝐜_𝐭𝐞𝐬𝐭, respectively, marking the highest performance on the official testing set. However, there was a slight reduction in 𝐀𝐜𝐜_𝐯𝐚𝐥 compared to MCAN- and MCAN-. Notably, when evaluating the sequence-comparing metrics 𝐁@1_𝐯𝐚𝐥 and 𝐁@4_𝐯𝐚𝐥, our approach excelled, achieving 92.62% and 44.87%, respectively.
§ CONCLUSION In summary, our participation in the T3 challenge covered action recognition, action anticipation, and visual question answering. We achieved promising results in action recognition and anticipation by employing an action dictionary-guided design within a MoMA setting. For the VQA task, we utilized VinVL pre-trained models for feature extraction and introduced a frame-question cross-attention mechanism based on the MCAN model, leading to the best performance in our experiments.
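To make the frame-question cross-attention concrete, the following minimal PyTorch sketch mirrors the FQCA formulation described earlier: the pooled question vector attends over itself concatenated with skip-projected object features, followed by two alignment layers and a residual connection. The hidden dimension, number of heads, and module names are placeholder choices of ours, not the released implementation.

import torch
import torch.nn as nn

class FrameQuestionCrossAttention(nn.Module):
    """Pooled question vector attends over raw object features; the
    symmetric direction (pooled frame vector over question tokens) is
    built in the same way."""
    def __init__(self, dim=512, obj_dim=2048, heads=8):
        super().__init__()
        self.skip_proj = nn.Linear(obj_dim, dim)      # SkipProj
        self.cross_att = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.align0 = nn.Linear(dim, dim)
        self.align1 = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, q_vec, obj_feats):
        # q_vec: (B, dim) pooled question feature; obj_feats: (B, N_o, obj_dim)
        objs = self.skip_proj(obj_feats)              # (B, N_o, dim)
        q_tok = q_vec.unsqueeze(1)                    # (B, 1, dim)
        ctx = torch.cat([q_tok, objs], dim=1)         # (B, N_o + 1, dim)
        att, _ = self.cross_att(q_tok, ctx, ctx)      # (B, 1, dim)
        out = q_vec + self.align1(self.align0(q_vec) + self.norm(att.squeeze(1)))
        return out                                    # (B, dim)

m = FrameQuestionCrossAttention()
print(m(torch.randn(2, 512), torch.randn(2, 36, 2048)).shape)  # torch.Size([2, 512])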
http://arxiv.org/abs/2407.12596v1
20240717141749
Frieze patterns over finite commutative local rings
[ "Bernhard Böhmler", "Michael Cuntz" ]
math.CO
[ "math.CO", "math.GR", "20H05, 05E99, 13F60, 51M20" ]
[2020]20H05, 05E99, 13F60, 51M20 § ABSTRACT We count numbers of tame frieze patterns with entries in a finite commutative local ring. For the ring ℤ/p^rℤ, p a prime and r∈ℕ we obtain closed formulae for all heights. These may be interpreted as formulae for the numbers of certain relations in quotients of the modular group. Instance-wise Uncertainty for Class Imbalance in Semantic Segmentation Luís Almeida 1 0009-0004-0705-1801 Inês Dutra 1 0000-0002-3578-7769 Francesco Renna 1 0000-0002-8243-8350 July 22, 2024 ============================================================================================================= § INTRODUCTION The special linear group _2(ℤ) is, for example, generated by the following two matrices: S=[ 0 -1; 1 0 ], T=[ 1 1; 0 1 ]. Since S^2=-𝕀, any element in _2(ℤ) can be written (up to a sign) as a product of matrices of the form η(a):=T^a S for a∈ℤ. The relations among these matrices η(a), a∈ℤ, are particularly interesting. For example, a sequence of positive integers c_1,…,c_n such that η(c_1)⋯η(c_n)=-𝕀 is called a quiddity cycle because it is the quiddity of a Conway-Coxeter frieze pattern. It turns out that these quiddity cycles are in bijection with triangulations of a convex n-gon by non-intersecting diagonals and are thus counted by Catalan numbers <cit.>. More generally, one can count or parametrize the set of solutions to η(c_1)⋯η(c_n)=-𝕀 with c_1,…,c_n∈ℤ (cf. <cit.>), or for example η(c_1)⋯η(c_n)=±𝕀 with c_1,…,c_n∈ℕ_>0 (cf. <cit.>). As in <cit.>, we call a solution to η(c_1)⋯η(c_n)=ε𝕀 an ε-quiddity cycle (where c_1,…,c_n are elements of a ring and η(·) is as in Definition <ref>, which is compatible with the above η). A solution c_1,…,c_n∈ℤ/Nℤ to η(c_1)⋯η(c_n)=±𝕀 may be viewed as a solution over ℤ to η(c_1)⋯η(c_n)∈Γ where Γ is a congruence subgroup of _2(ℤ). Thus it is interesting to count the number of such solutions. With the chinese remainder theorem, this can be reduced to the case in which N is a power of a prime. Such a closed formula appeared first in <cit.> for finite fields when the product is equal to -𝕀, a generalization with arbitrary matrix on the right is contained in <cit.>, and the case of ℤ/2^rℤ of odd length is considered in <cit.>. In this article we give closed formulae for these numbers of solutions over all rings ℤ/Nℤ, N∈ℕ, and a recursion for the case of an arbitrary matrix on the right side, i.e. for the solutions of η(c_1)⋯η(c_n)=A for arbitrary A∈_2(ℤ/Nℤ). In Section <ref>, we first count the quiddity cycles of odd lengths in the more general case of commutative finite local rings: [Thm. <ref>] Let R be a finite commutative local ring with maximal ideal I R, ε∈{± 1}⊂ R, and n∈ℕ_>1 with n odd or (-1)^n/2ε. If ω_n is the number of ε-quiddity cycles over R/I of length n, then the number of ε-quiddity cycles over R is ω_n · |I|^n-3. As a corollary, this implies most of the previously known results for quiddity cycles over residue class rings. In a second much more technical part (Section <ref>) we give closed formulae for the number of solutions of arbitrary lengths over ℤ/p^rℤ, p an arbitrary prime. For m∈ℕ, q∈ℤ∖{± 1}, we write m2_q := (q^m-1)(q^m-1-1)/(q-1)(q^2-1) and [m]_q:=q^m-1/q-1. [Thm. <ref>] Let R=ℤ/p^rℤ for a prime p and r∈ℕ, I=pR the maximal ideal, n∈ℕ_>1 with n even, and ε∈{± 1}. 
If ω_n is the number of ε-quiddity cycles over R/I of length n, then the number of ε-quiddity cycles over R is (ω_n-1) · p^(r-1)(n-3) + σ_n if ε=(-1)^n/2 ω_n · p^(r-1)(n-3) if ε=-(-1)^n/2, where σ_2 = 1, σ_4=(r-1)p^r-(r-2)p^r-1, σ_n = p^(n/2 - 1) r [r-1]_p^n/2-2 - p^(n/2 - 1) r - 1 [r-2]_p^n/2-2. Note that the numbers ω_n are known: [<cit.>, <cit.>] Let q be a prime power and n∈ℕ, n>4. Then the number ω_n of ε-quiddity cycles over 𝔽_q of length n is ω_n = [n-1/2]_q^2 if n ≡ 1 2, (q-1)n/22_q+q^n/2-1 if n ≡ 0 2, ε=(-1)^n/2, (q-1)n/22_q if n ≡ 0 2, ε=-(-1)^n/2, q odd. Finally in Section <ref>, we determine recursions for the number of solutions to η(c_1)⋯η(c_n)= A for c_1,…,c_n∈ℤ/Nℤ, N∈ℕ and A an arbitrary matrix. A closed formula could theoretically be computed for each fixed matrix A. § QUIDDITY CYCLES AND RELATIONS Let R be a ring and a∈ R. Then we set (a):=[ a -1; 1 0 ]. For c_1,…,c_n∈ R we write c_1,…,c_n := ∏_i=1^n (c_i). Let R be a commutative ring, a,b,u,v,x,y ∈ R such that a,(uv-1)∈ R^×. Then x,u,v,y = x+1-v/uv-1, uv-1, y+1-u/uv-1, x,1,y = x-1,y-1 . The first equation already appeared in <cit.>; the second one is the special case when u=1. The following lemma was discovered by the second author 2022 when working with quiddity cycles over rings. We shall not need it for our main theorems, but it is the key for the recursion in the last section of this article. It is also the main ingredient for the main results of <cit.>; although the lemma was published there first, F. Mabilat acknowledged the origin of this lemma in his article. Note that the result in <cit.> is a very special case of our result on local rings in Section <ref> which is proven with a completely different idea. Let R be a commutative ring, c,u,v,b,d ∈ R. Then c,u,v,b,d = c-vb-2/x, x, d-uv-2/x for x := (vb-1)(uv-1)-1/v if v,x∈ R^×. Direct computation. For a ∈ R^× we write λ_a := [ a 0; 0 a^-1 ]. Let R be a commutative ring, (c_1,…,c_n) ∈ R^n and ∈ R^×. If n is odd, then [ 0; 0 1 ] c_1,…, c_n[ 1 0; 0 ^-1 ] = c_1, ^-1 c_2, c_3,^-1 c_4,…, c_n . In particular, if c_1,…, c_n = λ_a for a∈ R^×, then c_1, ^-1 c_2, c_3,^-1 c_4,…, c_n = λ_t a. Observe first that for a,b∈ R, [ ^-1 0; 0 1 ][ a -1; 1 0 ][ ^-1 b -1; 1 0 ][ 0; 0 1 ] = [ a -1; 1 0 ][ b -1; 1 0 ], and [ 0; 0 1 ][ a -1; 1 0 ][ 1 0; 0 ^-1 ] = [ a -1; 1 0 ]. Thus c_1,…, c_n[ 1 0; 0 ^-1 ] = [ ^-1 0; 0 1 ] c_1, ^-1 c_2 ,…,^-1 c_n-1[ 0; 0 1 ]η(c_n) [ 1 0; 0 ^-1 ] = [ ^-1 0; 0 1 ] c_1, ^-1 c_2,…,^-1 c_n-1η( c_n) which is the claim. § QUIDDITY CYCLES OVER FINITE LOCAL RINGS [<cit.>] Let R be a commutative ring, c=(c_1,…,c_n)∈ R^n, and ε∈ R. Then c is called an ε-quiddity cycle if c_1,…,c_n = ε𝕀 = λ_ε. Note that since this product is in _2(R), we always have ε=ε^-1. For instance, if R is a domain, then ε∈{± 1}. §.§ Odd length Let R be a finite commutative local ring with maximal ideal I R, ε∈{± 1}⊂ R, and n∈ℕ_>1 with n odd or (-1)^n/2ε. If ω_n is the number of ε-quiddity cycles over R/I of length n, then the number of ε-quiddity cycles over R is ω_n · |I|^n-3. The condition of being a quiddity cycle is a polynomial identity. Thus if κ : R ⟶ R/I is the canonical projection and (c_1,…,c_n) is an ε-quiddity cycle, then (κ(c_1),…,κ(c_n)) is an κ(ε)-quiddity cycle over R/I. For each fixed ε-quiddity cycle c=(c_1,…,c_n)∈ (R/I)^n, we count the number of ε-quiddity cycles which project to this cycle under κ. Since I is maximal, R/I is a field. Assume first that c contains no unit, i.e. c=(0,…,0). 
Then 0,…,0 = [ 0 (-1)^n; -(-1)^n 0 ] if n≡ 1 2, [ (-1)^n/2 0; 0 (-1)^n/2 ] if n≡ 0 2, so n is even and ε=(-1)^n/2. These are exactly the cases that are excluded by assumption. Thus we may assume that there is an entry in c which is a unit, after rotating the cycle, without loss of generality c_2∈ (R/I)^×. Let (c_1,…,c_n)∈ R^n be an ε-quiddity cycle that maps to c under κ; we have c_2∈ R^× because c_2∉ I. If c_1,…,c_n = ε𝕀, then there exist f,g∈ R, h∈ R^× such that [ (1+fg)/h f; g h ] = ε c_4,…,c_n ^-1 = η(c_1)η(c_2)η(c_3)= [ c_1 c_2 c_3 -c_1 -c_3 1-c_1 c_2; c_2 c_3 -1 -c_2 ]. Thus the entries c_1,c_2,c_3 are uniquely determined by f,g,h: c_2=-h, c_1=(1-f)/c_2, c_3=(1+g)/c_2. Since there are |I|^n-3 possible choices[Note that any such choice implies h∈ R^× since h=-c_2 0.] for c_4,…,c_n that map to c_4,…,c_n, we obtain ω_n · |I|^n-3 different ε-quiddity cycles of length n. As an example, we use the theorem to obtain closed formulae for (-1)-quiddity cycles: Let n∈ℕ_>1 with n≢2 4, p be a prime, and r∈_>0. Then the numbers of sequences (c_1,…,c_n)∈ℤ/p^rℤ such that c_1,…,c_n = -𝕀 are as follows. (a) For n≡ 1 2 we have p^(n-3)(r-1)[n-1/2]_p^2 sequences. (b) For n≡ 0 4 we have p^(n-3)(r-1)(p^n-2/2-1)[n/4]_p^2 sequences if p>2. This is Theorem <ref> for R=ℤ/p^rℤ and I=(p). Note that for p=2 and n even, the number (-1)^n/2 is always equal to ε=1, so in this case the Theorem does not apply. In a finite local ring R, every element is either a unit or nilpotent. Indeed, let a∈ R. The set {a^i | i∈ℕ} is finite, so there exist 0≤ m<n with minimal m such that a^m=a^n. This implies a^m (a^n-m-1) = 0. If m=0, then a is a unit; otherwise, a^m=0 since a^n-m-1 is a unit. On the other hand, if a is nilpotent, say a^n=0, then (1+a) ∑_i=0^n-1 (-a)^i=1. §.§ Even length In order to understand the missing cases (even length), we need to understand those cycles in which all entries lie in the maximal ideal. Let R be a finite commutative local ring with maximal ideal I R. For n≥ 2 and z∈ R, denote σ_z,n := { (c_1,…,c_n)∈ I^n | c_1,…,c_n = λ_z }. If z∈ R and n≥ 2, then σ_z,n∅ implies that n is even and z∈ (-1)^n/2 + I. If (c_1,…,c_n)∈ I^n such that c_1,…,c_n = λ_z, then in R/I, 0,…,0 = λ_z+I. But the computation of 0,…,0 in the proof of Theorem <ref>, Equation <ref> shows that z+I = (-1)^n/2+I and n≡ 0 2. Let R be a finite commutative local ring with maximal ideal I R, ε∈{± 1}⊆ R, z∈ε+I ⊆ R^×, and n∈ℕ_>3 with n even. Then |σ_z,n| = ∑_x∈ -ε+I |σ_x,n-2| · |{ (u,v)∈ I × I | uv-1 = x/z }|. Let (c,u,v,b,…)∈ I^n be a sequence with c,u,v,b,… = λ_z. Since x:=uv-1 is a unit, using Equation (<ref>) from Lemma <ref> we can reduce the sequence to a sequence of length n-1 via λ_z = c,u,v,b,… = c+(1-v)/(uv-1), uv-1, b+(1-u)/(uv-1),… = c+(1-v)/x, x, b+(1-u)/x,…. Since n-1 is odd, we may apply Lemma <ref> and Equation (<ref>) from Lemma <ref>: λ_xz = cx+(1-v), 1, bx+(1-u),… = cx-v, bx-u,…. Thus the resulting sequence (cx-v, bx-u,…) is contained in σ_xz,n-2 because it consists of elements of I; note that xz∈ -ε+I. We obtain a map ρ : σ_z,n⟶⋃_x∈ -ε+I σ_x,n-2. Conversely, take a sequence (e,f,…) in σ_x,n-2 for some x∈ -ε+I. Then by Equation (<ref>) from Lemma <ref>, λ_x = e,f,… = e+1,1,f+1,…. With Lemma <ref> and y:=x/z ∈ -1+I, λ_z = (e+1)/y,y,(f+1)/y,…. Now if uv-1 = y = x/z for u,v∈ I, then by Lemma <ref> λ_z = (e+1)/(uv-1), uv-1, (f+1)/(uv-1), … = (e+v)/(uv-1), u, v, (f+u)/(uv-1), …. This last sequence is in σ_z,n; thus the above map ρ is surjective. The preimages of ρ are parametrized by σ_x,n-2, x∈ R^× and u,v∈ I with uv-1 = x/z. 
Moreover, the entries with labels 3,…,n-2 in the parameters ensure that the preimages are unique; ρ is injective. § QUIDDITY CYCLES MODULO A POWER OF A PRIME We now concentrate on the case of the local ring R:=/p^r for p a prime and r∈ℕ. The maximal ideal is I=pR. Write ν_p(a) for the largest k such that a≡ 0 p^k; we agree that ν_p(0)=r. For a∈ R∖ R^×, we let M:={(u,v) ∈ I× I | uv = a}. Then, |M| = (r-1)p^r-(r-2)p^r-1 if a = 0, (p^r-p^r-1)(ν_p(a)-1) if a 0. Let a=0. Note that uv≡ 0p^r if and only if ν_p(u)+ν_p(v)≥ r. Consider the ring R as an ordered set. Only each p-th element is divisible by p. Hence, precisely |R|-|R|/p elements are divisible by 1=p^0 but not by p. Moreover, precisely |R|/p - |R|/p^2 elements are divisible by p but not by p^2. Continuing this way, we deduce that, for 1≤ i≤ r-1, |{x∈ I | ν_p(x)=i}|=p^r-i-p^r-i-1, as |R|=p^r. Therefore, |{(u,v)∈ M | ν_p(u)=i}|=(p^r-i-p^r-i-1)· p^i, due to the following argument. There are exactly n:= (p^i-p^i-1)+(p^i-1-p^i-2)+… elements v in I with ν_p(v)≥ r-i wherefore we obtain that n=p^i, as ν_p(0)=r. Furthermore, if u=0, there are p^r-1 possible choices for v such that (u,v)∈ M. This yields |M|=1· p^r-1 + ∑_i=1^r-1(p^r-i - p^r-i-1)· p^i = p^r-1 + ∑_i=1^r-1p^r-i· (1-1/p)· p^i = p^r-1 + (r-1)· p^r· (1-1/p). The assertion follows. Next, let a≠ 0. Note that ν_p(a)≥ 1. We write a=a_1· p^ν_p(a), u=u_1· p^ν_p(u), v=v_1· p^ν_p(v), where (u_1,p)=1=(v_1,p)=(a_1,p). Then, the equation uv=a becomes uv=u_1 v_1 p^ν_p(u) + ν_p(v) =a_1p^ν_p(a). We can express ν_p(a) in exactly ν_p(a)-1 ways as the sum of the two positive integers ν_p(u) and ν_p(v). In each such case, we may fix u_1 and obtain v_1 = u_1^-1· a_1 as a possible solution of the equation uv=a. If ν_p(u)=i then there are exactly p^r-i - p^r-i-1 different choices for u and hence for u_1. To our particular solution v_1=v_ chosen above we have to add all possibilities for v_1 which yield a total valuation of uv which is larger than or equal to r, that is, all other numbers ṽ_1 such that ν_p(ṽ_1) ≥ r-i. In total, there are p^i such numbers, as ν_p(0)=r. Altogether, there are |M|= ∑_i=1^ν_p(a)-1(p^r-i - p^r-i-1)· p^i = (p^r-p^r-1) (ν_p(a)-1) solutions in this case. For even n and z,z'∈ R, ν_p(z-(-1)^n/2) = ν_p(z'-(-1)^n/2) ⟹ |σ_z,n|=|σ_z',n|. We proceed by induction over n. If n=2, then since c_1, c_2 = [ c_1 c_2-1 -c_1; c_2 -1 ], we get that if c_1,c_2 = λ_z for some z, then c_1=c_2=0 and z=-1. Thus |σ_z,2|=0 for all z with ν_p(z-(-1)^n/2)>0 and |σ_-1,2|=1. Recall from Theorem <ref> that for z∈ε+I ⊆ R^×, and even n∈ℕ_>3 we have |σ_z,n| = ∑_x∈ -ε+I |σ_x,n-2| · |{ (u,v)∈ I × I | uv-1 = x/z }|. We fix x:=-ε +bp^k and let z=ε + ãp^j. Then, z^-1 = ε - ãp^j +… - … = ε + ap^j for some a which is coprime to p. We inspect the different cases: first, assume k≠ j. We have 1+xz^-1= 1+(-ε + bp^k)(ε +ap^j) = bε p^k -aε p^j +abp^k+j. Hence, ν_p(1+xz^-1)=min{ν_p(x+ε), ν_p(z- ε)} if ν_p(x+ε)≠ν_p(z-ε). Next, assume ν_p(x+ε) = ν_p(z-ε)=k. In this case, we may write z^-1=ε +ap^k. We take another element z^'∈ε +I with the property that z^'^-1 = ε + a^' p^k. We let x^' :=- ε +b^' p^k where b^' is yet to be specified. Then, ν_p(xz^-1+1)= k+ν_p(ε (b-a)+abp^k) and ν_p(x^'z^'^-1+1)= k+ν_p(ε (b^' - a^')+a^' b^' p^k). Now, we choose b^' := a^' a^-1z^' z^-1 b. Hence, a/a^'(ε (b^' - a^')+ a^' b^' p^k) = ε (b - a)+ab p^k and therefore ν_p(ε (b^' - a^')+ a^' b^' p^k) = ν_p(ε (b - a)+ab p^k). 
Next, we set 𝒩_k := {w∈ -ε + I | ν_p(w+ε) = k} and define the map φ_z, z^': 𝒩_k →𝒩_k by φ_z,z^' (x) = φ_z,z^' (-ε +bp^k) := -ε +b^' p^k = -ε + (a^' a^-1z^' z^-1 b)p^k = x^'. As an inverse is given by φ_z,z^'^-1 (x^') = φ_z,z^'^-1 (-ε +b^' p^k) := -ε +(zz^'^-1aa^'^-1b^')p^k, the map φ_z, z^' is bijective. In view of Lemma <ref> and Lemma <ref>, we will write μ(j) := (r-1)p^r-(r-2)p^r-1 j = r, (p^r-p^r-1)(j-1) j<r and σ_n(ℓ) := |σ_(-1)^n/2+p^ℓ,n|. Moreover, let n_j := |{a∈ R |ν_p(a)=j }| = 1 j=r p^r-j-p^r-j-1 j<r for j=0,…,r. The numbers σ_n(ℓ), n∈ℕ, ℓ=1,…,r-1 satisfy: σ_n(ℓ) = ∑_j=1^ℓ-1 n_j σ_n-2(j)μ(j) + ∑_j=ℓ+1^r n_j σ_n-2(j)μ(ℓ) + σ_n-2(ℓ) (μ(ℓ)(p-2)p^r-ℓ-1 + ∑_j=ℓ+1^r n_j μ(j)), σ_n(r) = ∑_j=1^r n_j σ_n-2(j)μ(j). With the new notation, Theorem <ref> translates to σ_n(ℓ) = ∑_x∈ -ε+Iσ_n-2(ν_p(x+ε)) ·μ(ν_p(x/(ε+p^ℓ)+1)) = ∑_x∈ Iσ_n-2(ν_p(x)) ·μ(ν_p((-ε+x)· (ε+p^ℓ)+1)) = ∑_x∈ Iσ_n-2(ν_p(x)) ·μ(ν_p(-ε p^ℓ + ε x + x p^ℓ)) = ∑_x∈ Iσ_n-2(ν_p(x/(ε+p^ℓ))) ·μ(ν_p(-ε p^ℓ + x)) = ∑_x∈ Iσ_n-2(ν_p(x)) ·μ(ν_p(x-ε p^ℓ)) where ε=(-1)^n/2. In the last sum we distinguish the cases ν_p(x)<ℓ, ν_p(x)>ℓ, ν_p(x)=ℓ. Note that the case ℓ=r has to be treated separately (see below), so assume ℓ<r first. If ν_p(x)<ℓ, then ν_p(x-ε p^ℓ)=ν_p(x); if ν_p(x)>ℓ, then ν_p(x-ε p^ℓ)=ℓ: σ_n(ℓ) = ∑_j=1^ℓ-1 n_j σ_n-2(j)μ(j) + ∑_j=ℓ+1^r n_j σ_n-2(j)μ(ℓ) + σ_n-2(ℓ) ∑_y=1^p-1∑_w=0^p^r-ℓ-1-1μ(ℓ + ν_p(pw + y -ε))_x = (y+pw)p^ℓ, ν_p(x)=ℓ. In the third sum we consider the cases yε and y=ε: σ_n(ℓ) = ∑_j=1^ℓ-1 n_j σ_n-2(j)μ(j) + ∑_j=ℓ+1^r n_j σ_n-2(j)μ(ℓ) + σ_n-2(ℓ) (μ(ℓ)(p-2)p^r-ℓ-1_yε + ∑_w=0^p^r-ℓ-1-1μ(ℓ + 1 + ν_p(w))_y=ε) = ∑_j=1^ℓ-1 n_j σ_n-2(j)μ(j) + ∑_j=ℓ+1^r n_j σ_n-2(j)μ(ℓ) + σ_n-2(ℓ) (μ(ℓ)(p-2)p^r-ℓ-1 + ∑_j=ℓ+1^r n_j μ(j)). If ℓ=r, we obtain the simpler formula σ_n(r) = ∑_j=1^r n_j σ_n-2(j)μ(j) because then ν_p(x-ε p^ℓ)=ν_p(x). Let p be a prime number, let r∈ℕ_≥ 2 and let ℓ∈{1,…, r-1}. Then, the following assertions hold: (a) ∑_i=1^m p^-i =1-p^-m/p-1, (b) ∑_j=1^m jp^-j =p^-m((-1-m)p+m+p^m+1)/(p-1)^2, (c) ∑_j=ℓ +1^r-1 (p^r-j-p^r-j-1)· (p^r-p^r-1)· (j- 1) = p^r-1(-rp+r-1)-p^2r-2-ℓ(-p-ℓ p+ℓ ) -p^2r-2(-p^2-r+p^1-r) +p^2r-2(-p^1-ℓ+p^-ℓ). (a) Direct verification using the geometric sum formula. (b) For x∈ℝ∖{1} we have ∑_j=1^m -jx^-j-1=d/dx( ∑_j=1^m x^-j) =d/dx( 1-x^-m/x-1). Therefore, ∑_j=1^m jx^-j=(-x)·d/dx( 1-x^-m/x-1) = x^-m((-1-m)x+m+x^m+1)/(x-1)^2. The claim follows. (c) We have =∑_j=ℓ +1^r-1 (p^r-j-p^r-j-1)· (p^r-p^r-1)· (j- 1) =(p^r-p^r-1)^2 ∑_j=ℓ +1^r-1 p^-j(j-1) = (p^r-p^r-1)^2 ( (∑_j=1^r-1 jp^-j - ∑_j=1^ℓ jp^-j) - (∑_j=1^r-1 p^-j - ∑_j=1^ℓ p^-j) ) (a), (b)=(p^r-p^r-1)^2 (p^-(r-1)(-rp+r-1+p^r)/(p-1)^2 - p^-ℓ((-1-ℓ)p+ℓ+p^ℓ +1)/(p-1)^2. .- 1-p^1-r/p-1 +1-p^-ℓ/p-1) =(p^r-1)^2[p^-(r-1)(-rp+r-1+p^r) - p^-ℓ((-1-ℓ)p+ℓ+p^ℓ +1) -(p-1)(1-p^1-r)+(p-1)(1-p^-ℓ)] =(p^r-1)^2[p^-(r-1)(-rp+r-1+p^r) - p^-ℓ((-1-ℓ)p+ℓ+p^ℓ +1)+(p-1)(p^1-r-p^-ℓ)] =(p^r-1)^2[p^-(r-1)(-rp+r-1) - p^-ℓ((-1-ℓ)p+ℓ)+(p-1)(p^1-r-p^-ℓ)] =p^r-1(-rp+r-1)-p^2r-2-ℓ(-p-ℓ p+ℓ ) -p^2r-2(-p^2-r+p^1-r) +p^2r-2(-p^1-ℓ+p^-ℓ) as claimed. For r∈ℕ_≥ 2 and ℓ=1,…,r-1: μ(ℓ)(p-2)p^r-ℓ-1 + ∑_j=ℓ+1^r n_j μ(j) = (ℓ-1)p^2r-ℓ-(2ℓ-3)p^2r-ℓ-1+(ℓ-1)p^2r-ℓ-2. Using the definitions of μ and n_j we obtain that our claim is equivalent to ⇔ (p^r-p^r-1)· (ℓ -1)· (p-2)· p^r-ℓ -1 + ∑_j=ℓ +1^r-1 (p^r-j-p^r-j-1) (p^r-p^r-1) (j-1) + 1· ((r-1)p^r -(r-2)p^r-1) = (ℓ-1)p^2r-ℓ-(2ℓ-3)p^2r-ℓ-1+(ℓ-1)p^2r-ℓ-2. 
By Lemma <ref>(c) the left-hand side of the last equation is equal to ⇔ (p^r-p^r-1)· (ℓ -1)· (p-2)· p^r-ℓ -1 +p^r-1(-rp+r-1)-p^2r-2-ℓ(-p-ℓ p+ℓ ) -p^2r-2(-p^2-r+p^1-r) +p^2r-2(-p^1-ℓ+p^-ℓ) + 1· (r-1)p^r -(r-2)p^r-1 = p^2r-2-ℓ[(p-1)(ℓ -1)(p-2)+(p+ℓ p -ℓ)-p+1] +p^r-1(-rp+r-1) -p^2r-2(-p^2-r+p^1-r) + 1· (r-1)p^r -(r-2)p^r-1 = p^2r-2-ℓ[(ℓ -1)(p^2-3p+2)+ℓ p -ℓ +1)] + rp^r-1 -rp^r -p^r-1 +p^r -p^r-1 +rp^r -p^r -rp^r-1+2p^r-1 = p^2r-2-ℓ[p^2ℓ -3pℓ +2ℓ -p^2 +3p -2 +pℓ -ℓ +1] = p^2r-2-ℓ(p^2ℓ -p^2 -2pℓ +ℓ +3p -1) = (ℓ-1)p^2r-ℓ-(2ℓ-3)p^2r-ℓ-1+(ℓ-1)p^2r-ℓ-2. Let p be a prime number, let s∈ℕ_≥ 2 and let m∈ℕ_≥ 3. Then, the following holds. 0∑_j=2^s-1p^-j(m-1)[j-1]_p^m-2(j-1) = 1/p^m-2-1( (1-s)p^4-s-m+(s-2)p^3-s-m+p^2-m/(p-1)^2 - (1-s)p^(m-1)(2-s) + (s-2)p^(m-1)(1-s)+1/(p^m-1-1)^2). We have 0∑_j=2^s-1p^-j(m-1)[j-1]_p^m-2(j-1) = ∑_j=2^s-1 p^-j(m-1)p^(m-2)(j-1)-1/p^m-2-1 (j-1) = 1/p^m-2-1∑_j=2^s-1 (p^2-m-j-p^-j(m-1))(j-1) = p^2-m/p^m-2-1( ∑_j=2^s-1 jp^-j - ∑_j=2^s-1 p^-j) - 1/p^m-2-1( ∑_j=2^s-1 j(p^m-1)^-j - ∑_j=2^s-1(p^m-1)^-j) = p^2-m/p^m-2-1·( p^-s+1(-sp+s-1+p^s)/(p-1)^2 -1/p - 1-p^-s+1/p-1 + 1/p) - 1/p^m-2-1( p^(m-1)(1-s)(-sp^m-1+s-1+p^(m-1)s)/(p^m-1-1)^2 - 1/p^m-1 - 1-p^(m-1)(1-s)/p^m-1-1 +1/p^m-1) =p^2-m/p^m-2-1·p^1-s(s-sp-1+p^s)+(1-p)(1-p^1-s)/(p-1)^2 - 1/p^m-2-1·p^(m-1)(1-s)(-sp^m-1+s-1+p^(m-1)s) + (1-p^m-1)(1-p^(m-1)(1-s))/(p^m-1-1)^2 = p^2-m/p^m-2-1·-sp^2-s+sp^1-s-p^1-s+1-p^1-s+p^2-s/(p-1)^2 - 1/p^m-2-1·-sp^(m-1)(2-s)+(s-1)p^(m-1)(1-s)+1-p^(m-1)(1-s)+p^(m-1)(2-s)/(p^m-1-1)^2 = 1/p^m-2-1( (1-s)p^4-s-m+(s-2)p^3-s-m+p^2-m/(p-1)^2 - (1-s)p^(m-1)(2-s) + (s-2)p^(m-1)(1-s)+1/(p^m-1-1)^2). We use the notation of q-numbers, [n]_q := ∑_i=0^n-1 q^i = q^n-1/q-1 where the last equality requires q 1. Let x∈ℝ^+∖{1} and let r∈ℕ. Then, the following equation holds: (1-r)x^2-r+(r-2)x^1-r+1/x-1 =(1-r)x^1-r + x^1-r[r-1]_x. By polynomial long division, we obtain ((1-r)x^2-r+(r-2)x^1-r+1)/(x-1) = (1-r)x^1-r - x^1-r-1/x-1 = (1-r)x^1-r -1-x^r-1/x^r-1/x-1 =(1-r)x^1-r + 1/x^r-1·x^r-1-1/x-1=(1-r)x^1-r + x^1-r[r-1]_x. The numbers σ_n(ℓ), n even, ℓ=1,…,r satisfy (for m>2): σ_2(ℓ) = δ_ℓ,r, σ_4(ℓ)= 0 if ℓ=1, (ℓ-1)(p^r-p^r-1) if 1<ℓ<r, (r-1)p^r-(r-2)p^r-1 if ℓ=r. σ_2m(ℓ) = 0 if ℓ=1, p^(2 m - 3) r -ℓ(m - 2) + 1-m (p^m-1-1) [ℓ-1]_p^m-2 if 1<ℓ<r, p^(m - 1) r [r-1]_p^m-2 - p^(m - 1) r - 1 [r-2]_p^m-2 if ℓ=r. To begin with, we prove the base case. For ℓ∈{1,…, r}, we have |σ_2(ℓ)| = |{(c_1,c_2)∈ I^2 | [c_1,c_2] = [ -1+p^ℓ 0; 0 1/-1+p^ℓ ]}|. As [c_1,c_2] = [ c_1c_2 -1 -c_1; c_2 -1 ], we deduce that c_1=c_2=0 such that there exists a solution if and only if ℓ =r in which case the solution is unique modulo p^r. Similarly, for ℓ∈{1,…, r}, we have |σ_4(ℓ)| = |{(c_1,c_2, c_3, c_4)∈ I^4 | [c_1,c_2,c_3,c_4] = [ 1+p^ℓ 0; 0 1/1+p^ℓ ]}|. As [c_1,c_2,c_3,c_3] = [ c_1c_2c_3c_4 -c_1c_4 -c_3c_4 -c_1c_2 +1 -c_1c_2c_3 +c_1+c_3; c_2c_3c_4 -c_4-c_2 -c_2c_3 + 1 ], we obtain [ c_1c_2c_3c_4 -c_1c_4 -c_3c_4 -c_1c_2 +1 c_1(c_2c_3-1); c_2(c_3c_4 -1) -c_2c_3 + 1 ] = [ 1+p^ℓ c_3; c_4 1/1+p^ℓ ]. As c_2c_3-1, c_3c_4-1∈ R^×, we deduce that c_2=c_4/c_3c_4-1 c_1 = c_3/c_2c_3-1=c_3^2c_4-c_3. Hence, the two remaining equations c_1c_2c_3c_4 -c_1c_4 -c_3c_4 -c_1c_2 +1=1+p^ℓ and -c_2c_3+1=1/1+p^ℓ become both equivalent to the equation p^ℓ =-c_3c_4. Note that any choice of c_3 and c_4 uniquely determines c_1 and c_2 modulo p^r by (<ref>). For 2≤ℓ≤ r the claim follows now from Lemma <ref> and for ℓ =1 the claim follows from the fact that c_3, c_4∈ I.Next, assume that 1<ℓ<r and consider the right hand side of the recursion for n=2m. 
The two sums and the last summand are: ∑_j=1^ℓ-1 n_j σ_n(j)μ(j) = ∑_j=2^ℓ-1 n_j p^(2 m - 3) r -j (m - 2) + 1-m (p^m-1-1) [j-1]_p^m-2(p^r-p^r-1)(j-1) = ∑_j=2^ℓ-1 p^(2 m - 3) r -j (m - 2) + 1-m - j (p^m-1-1) [j-1]_p^m-2(p^r-p^r-1)^2(j-1) = ∑_j=2^ℓ-1 p^(2 m - 3) r -j (m - 2) + 1-m - j (p^m-1-1) p^(j-1)(m-2)-1/p^m-2-1(p^r-p^r-1)^2(j-1) = p^m-1-1/p^m-2-1 (p^r-p^r-1)^2 p^(2m-3)r+1-m∑_j=2^ℓ-1 p^-j(m-1) (p^(j-1)(m-2)-1) (j-1) = p^m-1-1/p^m-2-1 (p^r-p^r-1)^2 p^(2m-3)r+1-m( -(ℓ-1)p^4-ℓ-m+(ℓ-2)p^3-ℓ-m+p^2-m/(p-1)^2 - -(ℓ-1)p^(m-1)(2-ℓ)+(ℓ-2)p^(m-1)(1-ℓ)+1/(p^m-1-1)^2). ∑_j=ℓ+1^r n_j σ_n(j)μ(ℓ) = ∑_j=ℓ+1^r-1 n_j p^(2 m - 3) r -j (m - 2) + 1-m (p^m-1-1) [j-1]_p^m-2(p^r-p^r-1)(ℓ-1) + n_r (p^(m-1)r[r-1]_p^m-2 - p^(m-1)r-1[r-2]_p^m-2)(p^r-p^r-1)(ℓ-1) = ∑_j=ℓ+1^r-1 p^(2 m - 3) r -j (m - 2) + 1-m-j (p^m-1-1) [j-1]_p^m-2(p^r-p^r-1)^2(ℓ-1) + (p^(m-1)r[r-1]_p^m-2 - p^(m-1)r-1[r-2]_p^m-2)(p^r-p^r-1)(ℓ-1) = ∑_j=ℓ+1^r-1 p^(2 m - 3) r -j (m - 2) + 1-m-j (p^m-1-1) p^(j-1)(m-2)-1/p^m-2-1(p^r-p^r-1)^2(ℓ-1) + (p^(m-1)rp^(r-1)(m-2)-1/p^m-2-1 - p^(m-1)r-1p^(r-2)(m-2)-1/p^m-2-1)(p^r-p^r-1)(ℓ-1) = p^m-1-1/p^m-2-1 (p^r-p^r-1)^2(ℓ-1) p^(2m-3)r+1-m∑_j=ℓ+1^r-1 p^-j(m-1) (p^(j-1)(m-2)-1) + (p^(m-1)rp^(r-1)(m-2)-1/p^m-2-1 - p^(m-1)r-1p^(r-2)(m-2)-1/p^m-2-1)(p^r-p^r-1)(ℓ-1) = p^m-1-1/p^m-2-1 (p^r-p^r-1)^2(ℓ-1) p^(2m-3)r+1-m( p^2-mp^-r-p^-ℓ-1/p^-1-1 -p^(1-m)r-p^(1-m)(ℓ+1)/p^1-m-1) + (p^(m-1)rp^(r-1)(m-2)-1/p^m-2-1 - p^(m-1)r-1p^(r-2)(m-2)-1/p^m-2-1)(p^r-p^r-1)(ℓ-1). σ_n(ℓ) (μ(ℓ)(p-2)p^r-ℓ-1 + ∑_j=ℓ+1^r n_j μ(j)) = p^(2 m - 3) r -ℓ(m - 2) + 1-m (p^m-1-1) [ℓ-1]_p^m-2(μ(ℓ)(p-2)p^r-ℓ-1 + ∑_j=ℓ+1^r n_j μ(j)) = p^(2 m - 3) r -ℓ(m - 2) + 1-m (p^m-1-1) p^(ℓ-1)(m-2)-1/p^m-2-1(μ(ℓ)(p-2)p^r-ℓ-1 + ∑_j=ℓ+1^r n_j μ(j)) = p^(2 m - 3) r -ℓ(m - 2) + 1-m (p^m-1-1) p^(ℓ-1)(m-2)-1/p^m-2-1((ℓ-1)p^2r-ℓ-(2ℓ-3)p^2r-ℓ-1+(ℓ-1)p^2r-ℓ-2). With x:=p^m, y:=p^r, z:=p^l, u:=p^mr, v:=p^ml the equations become ∑_j=1^ℓ-1 n_j σ_n(j)μ(j) + ∑_j=ℓ+1^r n_j σ_n(j)μ(ℓ) + σ_n(ℓ) (μ(ℓ)(p-2)p^r-ℓ-1 + ∑_j=ℓ+1^r n_j μ(j)) = p^m-1-1/p^m-2-1 (p^r-p^r-1)^2 p^(2m-3)r+1-m( -(ℓ-1)p^4-ℓ-m+(ℓ-2)p^3-ℓ-m+p^2-m/(p-1)^2 - -(ℓ-1)p^(m-1)(2-ℓ)+(ℓ-2)p^(m-1)(1-ℓ)+1/(p^m-1-1)^2) +p^m-1-1/p^m-2-1 (p^r-p^r-1)^2(ℓ-1) p^(2m-3)r+1-m( p^2-mp^-r-p^-ℓ-1/p^-1-1 -p^(1-m)r-p^(1-m)(ℓ+1)/p^1-m-1) + (p^(m-1)rp^(r-1)(m-2)-1/p^m-2-1 - p^(m-1)r-1p^(r-2)(m-2)-1/p^m-2-1)(p^r-p^r-1)(ℓ-1) + p^(2 m - 3) r -ℓ(m - 2) + 1-m (p^m-1-1) p^(ℓ-1)(m-2)-1/p^m-2-1((ℓ-1)p^2r-ℓ-(2ℓ-3)p^2r-ℓ-1+(ℓ-1)p^2r-ℓ-2) = (ℓ - 1)(y - y/p)(u (p^2 u/x y^2 - 1)/y (x/p^2 - 1) - u (p^4 u/x^2 y^2 - 1)/p y (x/p^2 - 1)) +(ℓ - 1) p u^2(y - y/p)^2(p^2(1/y - 1/p z)/x (1/p - 1) - y/u - p z/v x/p/x - 1)(x/p - 1)/x y^3(x/p^2 - 1) + ((ℓ - 1) y^2/z - (2 ℓ - 3) y^2/p z + (ℓ - 1) y^2/p^2 z) p u^2 z^2(x/p - 1)(p^2 v/x z^2 - 1)/v x y^3(x/p^2 - 1) - p u^2(y - y/p)^2(x/p - 1)((ℓ - 1) p^4/x z - (ℓ - 2) p^3/x z - p^2/x/(p - 1)^2 + (ℓ - 2) x z/p v - (ℓ - 1) x^2 z/p^2 v + 1/(x/p - 1)^2)/x y^3(x/p^2 - 1) (*)= u^2(x - 1) z (p v/x z - 1)/v x y (x/p - 1) = p^(2 m - 1) r -ℓ(m - 1) -m (p^m-1) p^(m-1)(ℓ-1)-1/p^m-1-1 = p^(2 m - 1) r -ℓ(m - 1) -m (p^m-1) [ℓ-1]_p^m-1 = σ_n+2(ℓ) where (*) may be checked with a computer (e.g. with Sage) since these are rational functions with constant exponents. The case ℓ=1 follows from the fact that σ_4(1)=0 and from Theorem <ref>, as μ(1)=0. Next, we consider the case ℓ =r. Accoring to Theorem <ref> we have to prove that σ_2(m+1)(r):=p^mr[r-1]_p^m-1 - p^mr-1[r-2]_p^m-1 = ∑_j=1^rn_jσ_2m(j)μ(j). 
We compute the right-hand side of the previous equation: = ∑_j=1^rn_jσ_2m(j)μ(j)= 0+ ∑_j=2^r-1n_jσ_2m(j)μ(j) + ( p^(m-1)r[r-1]_p^m-2 -p^(m-1)r-1[r-2]_p^m-2)·( (r-1)p^r-(r-2)p^r-1) =∑_j=2^r-1(p^r-j-p^r-j-1)· p^(2m-3)r+1-m(p^m-1-1)p^-j(m-2)[j-1]_p^m-2(j-1)(p^r-p^r-1) + ( p^(m-1)r[r-1]_p^m-2 -p^(m-1)r-1[r-2]_p^m-2)·( (r-1)p^r-(r-2)p^r-1) =(p^r-p^r-1)^2· p^(2m-3)r+1-m(p^m-1-1)∑_j=2^r-1p^-j(m-1)[j-1]_p^m-2(j-1) + ( p^(m-1)r[r-1]_p^m-2 -p^(m-1)r-1[r-2]_p^m-2)·( (r-1)p^r-(r-2)p^r-1) <ref>=(p-1)^2(p^m-1-1)p^2mr-r-m-1/p^m-2-1( (1-r)p^4-r-m+(r-2)p^3-r-m+p^2-m/(p-1)^2. .- (1-r)p^(m-1)(2-r) + (r-2)p^(m-1)(1-r)+1/(p^m-1-1)^2) + ( p^(m-1)r[r-1]_p^m-2 -p^(m-1)r-1[r-2]_p^m-2)·( (r-1)p^r-(r-2)p^r-1) <ref>=(p-1)^2(p^m-1-1)p^2mr-r-m-1/p^m-2-1( (1-r)p^4-r-m+(r-2)p^3-r-m+p^2-m/(p-1)^2. .- (1-r)p^(m-1)(1-r) + p^(m-1)(1-r)· [r-1]_p^m-1/(p^m-1-1)) + ( p^(m-1)r[r-1]_p^m-2 -p^(m-1)r-1[r-2]_p^m-2)·( (r-1)p^r-(r-2)p^r-1) =p^2mr-r-m-1/p^m-2-1( (p^m-1-1)[(1-r)p^4-r-m+(r-2)p^3-r-m+p^2-m]. . - (p-1)^2 [(1-r)p^(m-1)(1-r) + p^(m-1)(1-r)[r-1]_p^m-1)]) + ( p^(m-1)r[r-1]_p^m-2-p^(m-1)r-1[r-2]_p^m-2)·( (r-1)p^r-(r-2)p^r-1). Next, we set x:=p^m-1. Then, our equation becomes (px)^r[r-1]_x - (px)^r/p[r-2]_x = x^2rp^r-2x^-1/x/p-1·((x-1)[(1-r)x^-1p^3-r+(r-2)x^-1p^2-r+x^-1p]. . -(p-1)^2[(1-r)x^1-r+x^1-r[r-1]_x]) +(x^r[r-1]_x/p - x^r/p[r-2]_x/p)·((r-1)p^r -(r-2)p^r-1). Now, we replace the q-integers by fractions. Then, the last equation is equivalent to: p^rx^rx^r-1-1/x-1 -p^r-1x^rx^r-2-1/x-1 = x^2r-1p^r-2/x/p-1·((x-1)[(1-r)x^-1p^3-r+(r-2)x^-1p^2-r+x^-1p]. . -(p-1)^2[(1-r)x^1-r+x^1-rx^r-1-1/x-1]) +(x^r(x/p)^r-1-1/x/p-1 - x^r/p·(x/p)^r-2-1/x/p-1)·( (r-1)p^r -(r-2)p^r-1). Next, we multiply both sides of the last equation by (x-1)· (x/p-1) and obtain: = p^rx^r(x^r-1-1)(x/p-1)-p^r-1x^r(x^r-2-1)(x/p-1)= x^2r-1p^r-2· ·( (1-r)[x^-1p^3-r + (r-2)x^-1p^2-r +x^-1p] - (p-1)^2[(x-1)(1-r)x^1-r+x^1-r(x^r-1-1)]) + (x-1)( x^r( (x/p)^r-1-1 ) - x^r/p( (x/p)^r-2-1 ) )·( (r-1)p^r -(r-2)p^r-1). The correctness of this last equation follows from Lemma <ref>. Let p be a prime number, let r∈ℕ_≥ 3, and let x∈ℝ^+∖{1}. Set α := p^rx^r(x^r-1-1)(x/p-1)-p^r-1x^r(x^r-2-1)(x/p-1), β := x^2r-1p^r-2·( (1-r)[p^3-r/x + (r-2)p^2-r/x +p/x] - (p-1)^2[(x-1)(1-r)x^1-r+x^1-r(x^r-1-1)]), γ :=(x-1)( x^r( (x/p)^r-1-1 ) - x^r/p( (x/p)^r-2-1 ) )·( (r-1)p^r -(r-2)p^r-1). Then, α = β + γ. We have α =p^rx^r( x^r/p -x^r-1 - x/p + 1 ) -p^r-1x^r ( x^r-1/p -x^r-2 -x/p +1 ) = x^2rp^r-1-x^2r-1p^r -x^r+1p^r-1 +x^rp^r -x^2r-1p^r-2 +x^2r-2p^r-1 +x^r+1p^r-2-x^rp^r-1 = x^2r (p^r-1) + x^2r-1 (-p^r-p^r-2) +x^2r-2 (p^r-1) + x^r+1 (p^r-2-p^r-1) +x^r(p^r -p^r-1), β/x^2r-1p^r-2 = (x^2-2x+1)[(1-r)x^-1p^3-r+(r-2)x^-1p^2-r+x^-1p] = -(p^2-2p)[x^2-r-rx^2-r+rx^1-r+1-2x^1-r] = (1-r)x^3-r + (r-2)xp^2-r+xp-2(1-r)p^3-r-2(r-2)p^2-r-2p = +(1-r)x^-1p^3-r+(r-2)x^-1p^2-r+x^-1p = -(p^2-1)[x^2-r-rx^2-r+rx^1-r+1-2x^1-r] = +2px^2-r -2prx^2-r+2prx^1-r +2p -4px^1-r = x( (1-r)p^3-r+(r-2)p^2-r+p ) +1·( (-2)(1-r)p^3-r-2(r-2)p^2-r-p^2-1 ) = +x^-1( (1-r)p^3-r+(r-2)p^2-r+p ) + x^1-r( -(p^2+1)(r-2)+2pr-4p ) = +x^2-r( -(p^2-1)(1-r)+2p-2pr ), γ = (x-1)(x^2r-1p^1-r-x^r -x^2r-2p^1-r+x^rp^-1)(rp^r-p^r-rp^r-1+2p^r-1) =( x^2rp^1-r -x^r+1-x^2r-1p^1-r +x^r+1p^-1-x^2r-1p^1-r+x^r+x^2r-2p^1-r-x^rp^-1)· = · (rp^r-p^r-rp^r-1+2p^r-1) =( x^2r(p^1-r)+x^2r-1(-2p^1-r)+x^2r-2(p^1-r)+x^r+1(p^-1-1)+x^r(1-p^-1))· = · (rp^r-p^r-rp^r-1+2p^r-1) =x^r+1(2rp^r-1 -3p^r-1 -rp^r-2 +2p^r-2 -rp^r +p^r) = + x^r(rp^r-p^r-2rp^r-1+3p^r-1+rp^r-2-2p^r-2) = +x^2r-2(pr-p-r+2) + x^2r-1(-2pr+2p+2r-4) +x^2r(pr-p-r+2). 
Hence, β + γ = x^2r( (1-r)p +(r-2) +p^r-1 +pr-p-r+2) = + x^2r-1( (-2+2r)p+4-2r-(p^2+1)p^r-2-2pr+2p+2r-4 ) = + x^2r-2( (1-r)p +r-2+p^r-1 +pr-p-r+2 ) = + x^r ( p^r-2(-p^2r+2p^2-r+2+2pr-4p)+p^r-2(r-2-2rp+3p+rp^2-p^2) ) = + x^r+1( p^r-2(-p^2+rp^2-1+r+2p-2pr) +p^r-2(-r+2+2rp-3p-rp^2+p^2) ) =x^2r(p^r-1) + x^2r-1(-p^r-2(p^2+1)) + x^2r-2(p^r-1) + x^rp^r-2(p^2-p) + x^r+1p^r-2(1-p) =α . Let R=ℤ/p^rℤ for a prime p and r∈ℕ, I=pR the maximal ideal, n∈ℕ_>1 with n even, and ε∈{± 1}. If ω_n is the number of ε-quiddity cycles over R/I of length n, then the number of ε-quiddity cycles over R is (ω_n-1) · p^(r-1)(n-3) + σ_n(r) if ε=(-1)^n/2, ω_n · p^(r-1)(n-3) if ε=-(-1)^n/2. We proceed as in the proof of Theorem <ref>: If κ : R ⟶ R/I is the canonical projection and (c_1,…,c_n) is an ε-quiddity cycle, then (κ(c_1),…,κ(c_n)) is a κ(ε)-quiddity cycle over R/I. For each fixed ε-quiddity cycle c=(c_1,…,c_n)∈ (R/I)^n, we count the number of ε-quiddity cycles which project to this cycle under κ. Since I is maximal, R/I is a field. Assume first that c contains no unit, i.e. c=(0,…,0). Then the number of cycles which map to c under κ is σ_n(r) if ε=(-1)^n/2. If ε (-1)^n/2, then there is no such solution. Otherwise c(0,…,0). There are ω_n-1 such cases if ε=(-1)^n/2 and ω_n otherwise: we may assume that there is an entry in c which is a unit, after rotating the cycle, without loss of generality c_2∈ (R/I)^×. Now the same argument as in the proof of Theorem <ref> produces |I|^n-3=p^(r-1)(n-3) different ε-quiddity cycles of length n in each case. § CYCLES IN RESIDUE CLASS RINGS In this last section we present a recursion for the number of solutions π_x,n := |{ (c_1,…,c_n)∈ R^n | c_1,…,c_n=A and x=c_2 }|. for R:=/p^r, A∈ R^2× 2, and n≥ 3. A sum over all x∈ R then gives a recursion for the number of solutions c_1,…,c_n=A. As an application one could recover the formulae from the previous sections; however, all in all this would result in a longer proof than before. Since this section is less important, we omit some of the technical proofs and leave them to the reader. To count the different cases we need the following numbers. For x,u∈ R, let ξ_x,u := p^ν_p(u) ν_p(u)≤ν_p(x+1) 0 ν_p(u) > ν_p(x+1) , ζ_x,u := 0 x+u∈ R^× or x∉ R^× rp^r-(r-1)p^r-1 x+u = 0 (p^r-p^r-1)ν_p(x+u) ν_p(x+u)>0 and x+u 0 where ν_p(a) is the largest k such that a≡ 0 p^k. We obtain: For u∈ R and n≥ 5, we have the recursion π_u,n = ∑_x∈ R^×π_x,n-1ξ_x,u + π_x,n-2ζ_x,u. Let (c,u,v,b,d,…) be a sequence of length n. There are two cases: (a) uv-1 is a unit. Then we can reduce the sequence to a sequence of length n-1 via [c,u,v,b,…] = [c+(1-v)/(uv-1), uv-1, b+(1-u)/(uv-1),…] (b) uv-1 is not a unit. Then v is a unit and we can reduce the sequence to a sequence of length n-2 via [c,u,v,b,d,…] = [c-(vb-2)/x, x, d-(uv-2)/x,…] for x = ((vb-1)(uv-1)-1)/v. Thus every sequence comes from a shorter sequence after a step of type (a) or (b): Assume we have a sequence of length n-1 with second entry x∈ R^×. If this sequence was obtained via a reduction as in (a), then x=uv-1 for some u,v∈ R. If the longer sequence is counted by π_u,n, then ν_p(u) > ν_p(x+1) is excluded, and if ν_p(u)≤ν_p(x+1) then there are p^ν_p(u) possible u∈ R that satisfy x=uv-1. This explains the summand π_x,n-1ξ_x,u in the recursion. Now assume that we have a sequence of length n-2 with second entry x∈ R^×. If this sequence was obtained via a reduction as in (b), then x=((vb-1)(uv-1)-1)/v for some u,v,b∈ R such that uv-1 is not a unit. In particular, x+u=b(uv-1) is not a unit. 
If x+u=0, then there are rp^r-(r-1)p^r-1 triple u,v,b that satisfy the relations; if ν_p(x+u)>0 and x+u 0, then we have (p^r-p^r-1)ν_p(x+u) solutions. This is why the summand π_x,n-2ζ_x,u appears in the recursion. Note that the cases (a) and (b) always produce a unit at the second position in the shorter sequence, thus we may sum over x∈ R^× in the recursion. We use the sets u_d,r := {d}, u_d,i := {u | u≡ d p^i, u≢d p^i+1}, e := {u | u ≢-1,0,1 p}, for d=-1,0,1 and i=1,…,r-1. Since R = e ∪⋃_d,i u_d,i is a disjoint union, we obtain an equivalence relation on R. For u∈ R, denote u the class of u with respect to this relation, and write π_x,n := ∑_u∈xπ_u,n, ξ_x,u := ∑_v∈x, w∈uξ_v,w, ζ_x,u := ∑_v∈x, w∈uζ_v,w. The following proposition and corollary explain why these definitions are useful. We omit the proofs since they do not give any relevant new insights. For x,u∈ R and y_1,y_2∈x, ∑_z∈uξ_y_1,z = ∑_z∈uξ_y_2,z, ∑_z∈uζ_y_1,z = ∑_z∈uζ_y_2,z. For x,u∈ R, we have π_x,n = |x|·π_x,n and π_u,n = ∑_x, x∈ R^×1/|x| (π_x,n-1ξ_x,u + π_x,n-2ζ_x,u). We can now compute the required values: Let m:=|R^x|=p^r-p^r-1. We have |e| = p^r-3p^r-1, n_i := |u_d,i|= p^r-i-p^r-i-1 i<r 1 r=i , and for 0<i,j<r, i j, 0≤ k≤ r, k≤ℓ, x∈ R we obtain: ξ_(x),u_± 1,k = |(x)| · |u_± 1,k|, ξ_(x),e = |(x)| · |e|, ξ_u_-1,ℓ,u_0,k = p^k· |u_-1,ℓ|· |u_0,k|, and ξ_(*),(*)=0 in all the remaining cases, and ζ_u_1,i,u_-1,j = ζ_u_-1,j,u_1,i = min(i,j)· m · |u_1,i|· |u_-1,j|, = min(i,j) ((p^3r-i-j-p^3r-i-j-3)-3(p^3r-i-j-1-p^3r-i-j-2)), ζ_u_1,i,u_-1,i = ζ_u_-1,i,u_1,i = (i p^2r-i-(2i-1)p^2r-i-1+ip^2r-i-2) |u_1,i|, = i (p^3r-2i-p^3r-2i-3)-(3i-1)(p^3r-2i-1-p^3r-2i-2), ζ_u_1,r,u_-1,r = ζ_u_-1,r,u_1,r = r p^r-(r-1) p^r-1, ζ_e,e = p^2r-1 |e| = p^2r-1 (p^r-3p^r-1), and ζ_(*),(*)=0 in all the remaining cases. In principle, for a fixed matrix A, one can use the above formulae to obtain closed formulae for the numbers of solutions. As a last remark, we show that the sum over units satisfies a simpler recursion: Let n≥ 5 and τ_n := ∑_(u), u∈ R^×π_(u),n. Then τ_n = (p^r-p^r-1) τ_n-1 + p^2r-1τ_n-2. By Proposition <ref>, we have τ_n = ∑_u,x∈ R^×π_x,n-1ξ_x,u + π_x,n-2ζ_x,u = ∑_x∈ R^×π_x,n-1∑_u∈ R^×ξ_x,u + π_x,n-2∑_u∈ R^×ζ_x,u. For the first sum we get ∑_x∈ R^×π_x,n-1∑_u∈ R^×ξ_x,u = (p^r-p^r-1)∑_x∈ R^×π_x,n-1 because ξ_x,u=1 if u is a unit. The second sum is ∑_x∈ R^×π_x,n-2∑_u∈ R^×ζ_x,u = ∑_x∈ R^×π_x,n-1( (rp^r-(r-1)p^r-1)_u=-x + ∑_k=1^r-1(p^r-p^r-1) n_k k_ν_p(x+u)=k) = ∑_x∈ R^×π_x,n-1 p^2r-1. amsalpha
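The counting results above can be cross-checked by brute force for small parameters. The following Python sketch is our own illustrative addition (the choice p=3, r=2, n=5 and all helper names are ours): it enumerates all sequences over ℤ/p^rℤ, multiplies the matrices η(c) = [[c,-1],[1,0]] used throughout, and prints the number of sequences with product -𝕀 next to the closed expression p^((n-3)(r-1)) [ (n-1)/2 ]_{p^2} from case (a) above, so that the two numbers can be compared directly.

```python
import itertools

def mat_mul(a, b, mod):
    # product of two 2x2 matrices over Z/mod, stored row-wise as 4-tuples
    return ((a[0]*b[0] + a[1]*b[2]) % mod, (a[0]*b[1] + a[1]*b[3]) % mod,
            (a[2]*b[0] + a[3]*b[2]) % mod, (a[2]*b[1] + a[3]*b[3]) % mod)

def quiddity_count(p, r, n, sign):
    # brute-force count of (c_1,...,c_n) in (Z/p^r)^n with
    # eta(c_1)...eta(c_n) = sign * identity, where eta(c) = [[c,-1],[1,0]]
    mod = p ** r
    target = (sign % mod, 0, 0, sign % mod)
    count = 0
    for cs in itertools.product(range(mod), repeat=n):
        m = (1, 0, 0, 1)
        for c in cs:
            m = mat_mul(m, (c, -1 % mod, 1, 0), mod)
        if m == target:
            count += 1
    return count

def q_int(k, q):
    # q-integer [k]_q = 1 + q + ... + q^(k-1)
    return sum(q ** i for i in range(k))

# Illustrative small parameters for case (a): n odd, n != 2 mod 4.
p, r, n = 3, 2, 5
brute = quiddity_count(p, r, n, sign=-1)
closed = p ** ((n - 3) * (r - 1)) * q_int((n - 1) // 2, p ** 2)
print(brute, closed)
```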
http://arxiv.org/abs/2407.13601v1
20240718153943
Spikes and spines in 4D Lorentzian simplicial quantum gravity
[ "Johanna Borissova", "Bianca Dittrich", "Dongxue Qu", "Marc Schiffer" ]
gr-qc
[ "gr-qc", "hep-th" ]
jborissova@perimeterinstitute.ca (Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada; Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada)
bdittrich@perimeterinstitute.ca (Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada; Theoretical Sciences Visiting Program, Okinawa Institute of Science and Technology Graduate University, Onna, 904-0495, Japan)
dqu@perimeterinstitute.ca (Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada)
mschiffer@perimeterinstitute.ca (Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada)
§ ABSTRACT Simplicial approaches to quantum gravity such as quantum Regge calculus and spin foams include configurations where bulk edges can become arbitrarily large while the boundary edges are kept small. Spikes and spines are prime examples for such configurations. They pose a significant challenge for a desired continuum limit, for which the average lengths of edges ought to become very small. Here we investigate spike and spine configurations in four-dimensional Lorentzian quantum Regge calculus. We find that the expectation values of arbitrary powers of the bulk length are finite. To that end, we explore new types of asymptotic regimes for the Regge amplitudes, in which some of the edges are much larger than the remaining ones. The amplitudes simplify considerably in such asymptotic regimes and the geometric interpretation of the resulting expressions involves a dimensional reduction, which might have applications to holography.
Spikes and spines in 4D Lorentzian simplicial quantum gravity
=============================================================
§ INTRODUCTION The path integral approach to quantum gravity requires constructing an integral over a suitable space of geometries. Simplicial approaches, such as quantum Regge calculus <cit.> and spin foams <cit.>, utilize triangulations as regulators. The geometry of these triangulations is specified by geometric data, e.g., edge lengths in the case of Regge calculus. The path integral is then replaced by sums over these geometric data, thus turning an a priori infinite-dimensional path integral into a finite-dimensional one. One could therefore compare such approaches to lattice regularizations of quantum field theories, such as lattice QCD. However, whereas in lattice QCD the lattices feature an explicit lattice constant, there is a priori no such lattice constant available in Regge gravity or spin foams. The path integral rather sums over all geometric data and thus also over the lengths of the edges in the triangulation. One could attempt to define a lattice constant by considering the expectation values of the lengths of all edges and by averaging over these expectation values. There is, however, a priori no guarantee that these expectation values are finite. This is due to configurations in which bulk edges can become arbitrarily large, while the boundary geometry is fixed. The prime examples of such a behaviour are so-called spikes and spines <cit.>. To define a spike configuration, consider a bulk vertex v and the set of all simplices containing this vertex, i.e., the star of v, see Figure <ref> for an illustration. We fix the lengths of the edges in the boundary of this set to some finite values. Spikes are configurations where the lengths of the edges which share the vertex v can become arbitrarily large.
[The lengths of the edges are restricted by the generalized triangle inequalities, see Section <ref>. For spikes, the generalized triangle inequalities allow infinitely large edge lengths.] Spine configurations can only appear in Lorentzian triangulations. Here we consider a bulk edge e and the set of all simplices containing this edge, i.e., the star of e, see Figure <ref> for an illustration. We again fix the length of the edges in the boundary of this set to some finite values. A spine configuration allows for an arbitrarily large (timelike or spacelike) length of the edge e. Thus any subregion of the triangulation containing a bulk vertex or a bulk edge can include a spike or spine configuration, respectively. Spikes and spines can therefore easily dominate the path integral, and in this way preclude a suitable continuum limit. Hence, obtaining control over spike and spine configurations should be viewed as a key task in simplicial approaches to quantum gravity, such as Regge calculus and spin foams. In the Euclidean version of quantum Regge calculus, spike configurations are particularly problematic. It has been shown that in the two-dimensional theory [The Regge action is a topological invariant in two dimensions. Spikes in two dimensions are often defined as configurations with arbitrarily large edge lengths, but with finite area. This can be obtained by scaling the boundary edges to be small. The requirement for a finite area prevents an exponential suppression by the cosmological constant term. In this paper, we consider the four-dimensional theory (without a cosmological constant) and drop the requirement of a finite four-volume for the spikes.] spikes lead to divergences for the expectation values of the edge lengths <cit.>. In three and four dimensions, spike configurations include the conformal factor mode <cit.>, which makes the Euclidean action unbounded from below <cit.>. The exp(-S_E) amplitude of Euclidean quantum gravity therefore leads to an exponential enhancement of spike configurations. Lorentzian quantum gravity could in principle avoid the conformal factor problem due to the oscillatory nature of the amplitudes. There are, however, only few investigations of this issue in Lorentzian Regge calculus, in particular for the four-dimensional case: <cit.> shows that the conformal mode problem is indeed avoided perturbatively to one-loop order. <cit.> and <cit.> consider the two-dimensional theory and, inspired by Causal Dynamical Triangulations <cit.>, implement a restriction on the spacetime signatures of the edges in the lattice, which does not allow for spike configurations (but does allow for other configurations with large bulk edges). Furthermore, <cit.> employs a measure that exponentially suppresses configurations with large edge lengths. Here, we will consider four-dimensional Lorentzian Regge calculus and show that a particular class of spike and spine configurations, appearing in the 5-1 and 4-2 Pachner moves <cit.>, leads to a finite path integral and to finite expectation values for arbitrary powers of the edge lengths in combination with a large class of measures (without explicit exponential suppression factors). The techniques employed here are very similar to the ones utilized for the three-dimensional theory studied in <cit.>, but the present paper can be read independently from <cit.>. A key point of the present work is to analyze the Regge action in regimes where some edges are much larger than the remaining edges. 
We will find that the Regge action, which in general is a highly non-polynomial function of its edge lengths, simplifies considerably in the asymptotic regime. In fact, we will often find that the leading-order coefficients in the Regge action refer to the geometry of a lower-dimensional subtriangulation. We hope therefore that the techniques employed here and this phenomenon can be generalized to other configurations with large edge lengths. Spikes are closely related to bubbles, which are of great concern for spin foams <cit.> and group field theories <cit.>. Spin foams differ from the Regge approach in that they are based on a different set of geometric data, namely areas and angles, which feature a discrete spectrum (more details are provided in Section <ref>). The strategy developed here can nevertheless help to investigate systematically bubble configurations in spin foams. Our results on the finiteness of spike and spine configurations on the one hand provide further evidence that the Lorentzian (Regge or spin foam) path integral to quantum gravity can be made well-defined. On the other hand, we will also identify examples of spike configurations which are light-cone irregular, i.e., configurations which do not have a regular light-cone structure everywhere. This includes configurations describing topology change in time. For a certain type of such configurations, the Regge action features a branch cut and imaginary terms. A complete specification of the Lorentzian path integral therefore inevitably requires a prescription of how to deal with such configurations <cit.>. This paper is structured as follows: In Subsection <ref> we introduce the complex Regge action and the notion of light cone (ir-)regular configurations. Subsection <ref> discusses the generalized triangle inequalities. In Subsection <ref> we analyse the asymptotic regimes for the volumes of (sub-) simplices in the limit of one or multiple large edges. The results obtained therein will help us to derive asymptotic approximations to the Regge action for spine and spike configurations arising in the 4-2 and 5-1 Pachner moves, respectively, in Section <ref>. In Section <ref>, we consider expectation values of powers of the length variables and establish their convergence properties. We close with a discussion in Section <ref>. § LORENTZIAN GEOMETRY OF SIMPLICES §.§ The complex Regge action In this section, we provide a general outline of the Lorentzian Regge action <cit.> and in particular employ the framework of the complex Regge action <cit.>. A detailed overview of the geometry of Lorentzian and Euclidean simplices can be found in the Appendix of <cit.>. Regge calculus <cit.> is a discrete approach to study the dynamics of general relativity. The d-dimensional spacetime manifold is approximated by gluing piecewise flat [A framework using homogeneously curved simplices is also available <cit.>.]d-dimensional simplices along shared (d-1)-dimensional subsimplices. In Lorentzian Regge calculus, the configuration variables of the Regge action are the signed squared lengths s_e for the edges e of the triangulation. (Our convention for the signature of the spacetime metric is (-,+,+,⋯).) The complex Regge action is given by <cit.> S = - ∑_h√(𝕍_h) ϵ_h , where the sum is taken over both bulk and boundary hinges h, i.e., the (d-2)-dimensional subsimplices in the triangulation. The signed squared volume 𝕍_h of a hinge h can be calculated using a Cayley-Menger determinant, see Section <ref>. 
The bulk and boundary deficit angles, which provide a measure for the curvature concentrated at a given hinge, are defined as [Here we make one of the two possible sign choices for the definition of the complex Regge action, see <cit.>. Both choices lead to the same complex action, see <cit.>. ]ϵ_h^(bulk) = 2π + ∑_σ⊃ hθ_σ, h , ϵ_h^(bdry) = π + ∑_σ⊃ hθ_σ, h , where θ_σ,h denotes the complex dihedral angle at the hinge h in the simplex σ. The choice of the constant π in the definition of the boundary deficit angles is a convention, which we will adopt throughout this paper. The complex dihedral angles θ_σ,h in a simplex σ at a hinge h⊂σ are defined by <cit.>θ_σ,h = -log( a⃗·b⃗ -√( (a⃗·a⃗) (b⃗·b⃗)- (a⃗·b⃗)^2 )/√(a⃗·a⃗)√(b⃗·b⃗) ) , with a⃗·b⃗= d^2/𝕍_h∂𝕍_σ/∂ s_h̅ ,a⃗·a⃗= 𝕍_ρ_a/𝕍_h ,b⃗·b⃗= 𝕍_ρ_b/𝕍_h . Here, ρ_a ⊂σ and ρ_b ⊂σ are the (d-1)-subsimplices of σ sharing the hinge h, and h̅⊂σ is the edge opposite to h. We shall adopt the principal branches for square roots and logarithms, however, we also need to specify [This specification is determined by the choice of convention mentioned in the previous footnote.] which sides of the branch cuts for square roots and logarithms to use. For the remainder of this article, along the negative real axis, we define √(-r) = √(r) and log(-r) = log(r) - π where r ∈ℝ_+. Thus we adopt the branch-cut value originating from the lower complex half-plane for the logarithm and from the upper complex half-plane for the square root. The complex dihedral angles allow to represent Euclidean as well as Lorentzian angles. In a Lorentzian triangulation, both Euclidean or Lorentzian dihedral angles can occur. More precisely, if the hinge h is spacelike, the space orthogonal to h includes a timelike direction and the associated dihedral angle is Lorentzian. Conversely, if h is timelike, the associated dihedral angle is Euclidean. Null hinges do not contribute to the Regge action (<ref>), as their volume is zero. If the data {a⃗·b⃗,a⃗·a⃗,a⃗} defines a Euclidean angle, the complex angle in (<ref>) and the Euclidean angle are related by θ_σ,h=-ψ^E_σ,h. As a consequence, the (bulk) deficit angle for a timelike hinge h is ϵ^E_h=2π-∑_σ⊃ hψ^E_σ,h. As the volume square 𝕍_h for a (timelike) hinge is negative, the contribution of timelike hinges to the Regge action (<ref>) is real. For a Euclidean triangulation which satisfies the Euclidean generalized triangle inequalities (see Section <ref>), we can also define dihedral angles (<ref>). The Euclidean Regge action (which provides a discretization of the Euclidean action for general relativity) is then given by S^E = -∑_h√(𝕍_h)ϵ^E_h . In this case the complex Regge action in (<ref>) evaluates to S= S^E. Thus, the complex Regge action S for a Euclidean signature triangulation is purely imaginary. Let us return to the Lorentzian triangulation and consider a spacelike hinge. The data {a⃗·b⃗,a⃗·a⃗,a⃗} then defines a Lorentzian angle, i.e., a⃗ and b⃗ can be embedded into two-dimensional Minkowski space. According to <cit.>, the complex angles in (<ref>) can be expressed in terms of Lorentzian angles as θ_σ,h=-ψ^L_σ,h=-(β_σ,h - m_σ,hπ/2), where ψ^L_σ,h is the Lorentzian angle, β_σ,h∈ℝ, and m_σ,h∈{0,1,2} indicates the number of light rays included in the (convex) wedge between a⃗ and b⃗, as illustrated in Figure <ref>. Therefore, the Lorentzian (bulk) deficit angle is given by: ϵ^L_σ,h= 2π-π/2(∑_σ⊃ h m_σ_h) - ∑_σ⊃ hβ_σ,h . 
This angle is purely imaginary if ∑_σ⊃ h m_σ_h = 4, i.e., if the sum of the dihedral angles includes exactly four light rays and therefore two light cones. We will refer to spacelike hinges satisfying this condition as light-cone regular. Timelike hinges are light-cone regular by definition. Therefore, the contribution of a regular hinge to the Regge action (<ref>) is real. However, if the number of light ray crossings in the bulk deficit angle associated with a given hinge h, denoted as N_h=∑_σ⊃ h m_σ_h , is smaller or larger than 4, we will obtain a negative or positive imaginary contribution to the Regge action, respectively. It is important to note that the sign of the imaginary contributions for the Lorentzian Regge action depends on the choice of convention, mentioned in Footnote <ref>. This choice is equivalent to defining the Lorentzian action by using a complexification of the squared edge lengths s_e → s_e +ε (so that one avoids the branch cuts of the square roots and the logarithm) and taking the limit ε→ 0 <cit.>. Using s_e → s_e -ε instead, we obtain, in the case that all hinges are light-cone regular, the same value for the Lorentzian action as with s_e → s_e -ε. However, in the case of an irregular hinge, the sign of the imaginary term of the action changes. This shows that the complex Regge action has branch cuts if the data define a Lorentzian triangulation and if there are light-cone irregular hinges. The Regge action on opposite sides of the branch cut associated to a given hinge, differs only in the sign of the imaginary term originating from this hinge. Thus, if we perform a path integral, we can choose the sign in front of these imaginary terms, as this amounts to a choice along which side of the branch cut the path integral contour is placed <cit.>. Note that the imaginary terms are determined by the analytical structure of the Regge action and can be reproduced by analytical continuation from two different starting points: The first one is to apply a generalized Wick rotation and construct the Lorentzian action through an analytic continuation from the Euclidean action <cit.>. An alternative starting point is to begin with Lorentzian data which are light-cone regular, and analytically continue to data with light-cone irregularities by slightly deviating into complexified squared edge lengths to go around branch points, as discussed in <cit.>. Light-cone irregular hinges represent a particular type of conical singularities for a Lorentzian metric, that leads to imaginary terms for the action also in the continuum <cit.>. Such conical singularities play an important role for the derivation of thermodynamic quantities such as entropies, from the gravitational path integral <cit.>. For two-dimensional spacetimes, such conical singularities seem to appear always for spacetimes describing topology change <cit.>. In three- and four-dimensional triangulations, light-cone irregular hinges can also appear without topology change <cit.>. §.§ Generalized triangle inequalities The generalized Euclidean and Lorentzian triangle inequalities ensure that a simplex with given signed length squares can be embedded into flat Euclidean or Minkowski space, respectively. 
The generalized triangle inequalities for a Euclidean or spacelike (non-degenerate) simplex σ require that the signed squared volumes of the simplex σ itself and of all its sub-simplices ρ satisfy the following condition: 𝕍_σ >0, 𝕍_ρ>0 , where the signed squared volume of a d-dimensional simplex σ^d=(012⋯ d) is given by: 𝕍_σ^d = (-1)^d+1/(2^d (d!)^2) ( 0 1 1 1 ⋯ 1; 1 0 s_01 s_02 ⋯ s_0d; 1 s_01 0 s_12 ⋯ s_1d; ⋮ ⋮ ⋮ ⋮ ⋱ ⋮; 1 s_0d s_1d s_2d ⋯ 0 ) , where s_ij is the signed squared length of the edge between vertices i and j. Moreover, the signed squared volume determines the spacelike, null, or timelike nature of a simplex. A simplex ρ is timelike if 𝕍_ρ < 0, null if 𝕍_ρ = 0, and spacelike if 𝕍_ρ > 0. The generalized triangle inequalities for a Lorentzian d-simplex σ^d in d-dimensional spacetime (<ref>) require that if a subsimplex ρ⊂σ^d is timelike or null, then all subsimplices ρ' ⊂σ^d containing this subsimplex ρ, i.e., ρ⊂ρ', must not be spacelike <cit.>. That is, a timelike simplex cannot be a subsimplex of a spacelike simplex. Therefore, for a Lorentzian d-simplex σ, we have the condition that 𝕍_σ^d < 0 (non-degeneracy) and additionally that: ρ⊂σ , 𝕍_ρ≤ 0 ⇒ ∀ρ' ⊃ρ: 𝕍_ρ'≤ 0 . We will consider the asymptotics of the Regge action with one or multiple bulk edges scaled to become large. The generalized triangle inequalities determine whether certain edges are allowed to become large. E.g., for a Euclidean d-simplex, we cannot scale just one of the edges to be large while keeping the other edges fixed, as this violates the Euclidean triangle inequality. For a Lorentzian d-simplex, such a scaling is however possible. For example, let us consider a timelike triangle (012) with a spacelike (or timelike) edge (01) and with the other edges timelike (or spacelike). The squared area of triangle (012) is given by 𝕍_(012)= -1/16[s_01^2+ (s_02-s_12)^2-2s_01(s_02+s_12)]<0 , which shows that the Lorentzian triangle inequality is always satisfied. Therefore, we can scale the edge (01) to become large while still satisfying the triangle inequality.
§.§ Simplex geometry in the limit of large edges We will consider triangulations with a few simplices and with boundary, in the limit where the bulk edges are large as compared to the boundary edges. We will thus have simplices involving one or more edges which are much larger than the remaining edges. To compute the Regge action in this limit, we need to investigate the asymptotic behavior of the dihedral angles. To this end, we will first derive the asymptotic behavior of the volumes for a d-simplex and its subsimplices, as the volumes and the dihedral angles satisfy the following relations <cit.>: sin(θ_σ, h) = -d/(d-1) √(𝕍_h)√(𝕍_σ)/(√(𝕍_ρ_a)√(𝕍_ρ_b)) , cos(θ_σ, h) = d^2/(√(𝕍_ρ_a)√(𝕍_ρ_b)) ∂𝕍_σ/∂ s_h̅ . These relations can be derived from (<ref>), see <cit.>. Moreover, the derivative of the squared volume with respect to a squared edge length can be expressed in terms of the volume squares of the simplex and its subsimplices <cit.>: (∂𝕍/∂ s_ij)^2 = 1/d^4 𝕍_i̅ 𝕍_j̅ - 1/(d^2(d-1)^2) 𝕍 𝕍_ij . Thus, to derive the asymptotic behavior of the dihedral angles we essentially need the asymptotic behavior of the volumes of the simplex and its subsimplices.
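To make the volume and angle formulas above concrete, the following Python sketch (an illustration of ours, not part of the original derivation; the function name and the numerical example are our choices) evaluates signed squared volumes through the Cayley-Menger determinant and recovers a dihedral angle from the sine relation. It is deliberately restricted to Euclidean data, a unit regular tetrahedron, so that all square roots are real and the branch and sign prescriptions for the complex angle discussed above do not enter.

```python
import itertools
import math
import numpy as np

def signed_squared_volume(vertices, s):
    # Signed squared volume of the simplex spanned by `vertices`, from the
    # Cayley-Menger determinant of the signed squared edge lengths s[{i,j}].
    d = len(vertices) - 1
    C = np.zeros((d + 2, d + 2))
    C[0, 1:] = 1.0
    C[1:, 0] = 1.0
    for a, i in enumerate(vertices):
        for b, j in enumerate(vertices):
            if i != j:
                C[a + 1, b + 1] = s[frozenset((i, j))]
    return (-1) ** (d + 1) * np.linalg.det(C) / (2 ** d * math.factorial(d) ** 2)

# Euclidean sanity check: regular tetrahedron (0123) with unit squared edge lengths.
s = {frozenset(pair): 1.0 for pair in itertools.combinations(range(4), 2)}
V_sigma = signed_squared_volume([0, 1, 2, 3], s)   # expect 1/72
V_h = signed_squared_volume([0, 1], s)             # hinge = edge (01), here d = 3
V_a = signed_squared_volume([0, 1, 2], s)          # the two faces sharing the hinge
V_b = signed_squared_volume([0, 1, 3], s)
d = 3
# magnitude of the dihedral angle from the sine relation; the overall sign and
# branch conventions of the complex angle are not needed for this Euclidean check
sin_psi = d / (d - 1) * math.sqrt(V_h * V_sigma) / math.sqrt(V_a * V_b)
print(V_sigma, math.degrees(math.asin(sin_psi)))   # ~0.01389 and ~70.53 degrees
```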
To study the asymptotic behavior of the volumes of a d-dimensional simplex σ^d = (0⋯ d) with vertices {0,…,d}, we will use the expression for the signed squared volume via the determinant of the associated Cayley-Menger matrix C in terms of the signed squared edge lengths <cit.>: 𝕍_σ^d = - (-1)^d/2^d (d!)^2( 0 1 1 1 ⋯ 1 1 0 s_01 s_02 ⋯ s_0d 1 s_01 0 s_12 ⋯ s_1d ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ 1 s_0d s_1d s_2d ⋯ 0 ) ≡ - (-1)^d/ 2^d (d!)^2(C) . Applying Laplace's expansion for the determinant of a (d+2)× (d+2) matrix C and expanding around an arbitrary row i, we can write (C) = ∑_j=1^d+2 (-1)^i+jC_ij(C̃_ij) . Here, C_ij is the (ij)-th entry of C, and C̃_ij denotes the determinant of the submatrix of C obtained by removing the i-th row and j-th column of C. With the expansion (<ref>), it is straightforward to separate the terms that include large edge lengths and to determine the asymptotic behavior. §.§.§ Volumes in the limit of one large edge Here, we will study the asymptotic behavior of the squared volume 𝕍_ρ^d of a d-simplex ρ^d=(012⋯ d) with only one large edge (01). The squared volume 𝕍_ρ^d is a polynomial of at most quadratic order in s_01: 𝕍_ρ^d = a s_01^2 + b s_01 + c , where a, b and c do not depend on s_01. By using Laplace's formula (<ref>) repeatedly, we can derive the first coefficient as 𝕍_(012⋯ d) = -1/4 d^2(d-1)^2𝕍_01s_01^2 + O(s_01^1) , where 𝕍_01 is the signed squared volume of the subsimplex of ρ^d obtained by removing the vertices (0) and (1). Thus, 𝕍_ρ^d is of order s_01^2 if 𝕍_01≠ 0. From (<ref>), one can see that the signature of the squared d-dimensional volume 𝕍_ρ^d depends on the signature of 𝕍_01. If the d-simplex (012⋯ d) is timelike (spacelike), the (d-2)-simplex (23… d) needs to be spacelike (timelike) to satisfy the triangle inequalities when the edge (01) is large. Furthermore, one can also see that scaling only one edge length to be large is only possible for timelike (or null) simplices. Formula (<ref>) involves a dimensional reduction: the leading-order coefficient of the volume of the d-simplex (012· d) is determined by the volume of its (d-2)-subsimplex (2· d). We will see later, that for 4-simplices, this extends to the dihedral angles at triangles (01k) (with k=2,3,4) which contain the large edge — the leading-order coefficient of a given dihedral angle will coincide with the angle in the triangle (234) at the vertex (k). §.§.§ Volumes in the limit of multiple large edges Next we will study the asymptotic behavior of the squared volume 𝕍_ρ^d of a d-simplex ρ^d=(012⋯ d), which has d large edges (0i), where i∈{1,…,d}. There are two options for scaling multiple edges to become large: one can either choose a multiplicative scaling s_0i=λ s_0i^0, or an additive scaling s_0i=s_0i^0 ±λ and then consider the limit λ→∞. Here we choose the additive scaling throughout, so that the leading-order term of the volumes does not depend on the initial values s_0i^0. This will lead to simpler results for the asymptotic form of the action. With the additive scaling, also the triangle inequalities become independent of the initial values, whereas, for the multiplicative scaling, the triangle inequalities can lead to intricate conditions on the s_0i^0 <cit.>. We start by considering a triangle (012), whose signed squared area can be written as 𝕍_(012) = -1/16s_01^2 - 1/16s_02^2 + 1/8s_01s_02 + 1/8s_01s_12+ 1/8s_02s_12-1/16s_12^2 . In the case that the two large edges have the same signature, s_01=s_02=±λ, the terms quadratic in λ cancel out, and we are left with 𝕍_(012) = ±1/4λ s_12 + 𝒪(λ^0) . 
If we consider s_01=±λ and s_02=∓λ, the dominant term in (<ref>) is quadratic in λ: 𝕍_(012) = -1/4λ^2 + 𝒪(λ^1) . In general, for a d-simplex in which the large edges s_0i=±λ (i=1,…,d) agree in their signature, we obtain 𝕍_(012⋯ d) = ±1/d^2𝕍_(12⋯ d)λ + 𝒪(λ^0) . From this we see that, if all large edges are timelike, the d-simplex must be timelike (or null), and the subsimplex (12⋯ d) must be spacelike (or null). In the case where all large edges are spacelike, the signature of the d-simplex must agree with the signature of the subsimplex (12⋯ d). Note that for a four-dimensional Lorentzian triangulation (with non-degenerate top-dimensional simplices) all 4-simplices must be timelike. We again notice that (<ref>) involves a dimensional reduction, this time, from d-simplices (01· d) to (d-1)-subsimplices (1· d). This will again extend to the dihedral angles. E.g., for a 4-simplex (01234) we have that the leading-order coefficient of the dihedral angle at a triangle (0ij) will be given by the dihedral angle at the edge (ij) in the tetrahedron (1234). In the case that s_01=±λ and s_0j=∓λ for j=2,…, d, we find 𝕍_(012⋯ d) = -1/d^2× (d-1)^2𝕍_(23⋯ d)λ^2 + 𝒪(λ^1) . Clearly, the d-simplex (012⋯ d) has to be timelike, and therefore the subsimplex (23⋯ d) has to be spacelike. We again notice a dimensional reduction in (<ref>) from a d-simplex to a (d-2)-simplex. For a 4-simplex (01234) this reduction extends only to the dihedral angles at triangles (01i) with i=2,3,4. It does not apply to dihedral angles at (0kl) with 2≤ k<l≤ 4, as these triangles contain two large edges of the same signature. Allowing for a renaming of the vertices, the cases described in (<ref>) and (<ref>) cover all possible choices for the signatures of the edges (0i) in a triangle (012) and in a tetrahedron (0123). For a 4-simplex, there is, however, in addition also the case s_01=s_02=±λ and s_03=s_04=∓λ. In this case one can verify by an explicit calculation that the following relations hold: 𝕍_(01234) = λ^2/2^4 · (4· 3)^2( (s_13-s_14-s_23+s_24)^2 -4 s_12s_34)+ 𝒪(λ^1) = -λ^2/2^4( ∂/∂ s_13 + ∂/∂ s_14 + ∂/∂ s_23+∂/∂ s_24)𝕍_(1234) = λ^2/2^4(2 ∂/∂ s_12𝕍_(1234) -1/3^2𝕍_(134) -1/3^2𝕍_(234))+ 𝒪(λ^1) = λ^2/2^4(2∂/∂ s_34𝕍_(1234) -1/3^2𝕍_(123)- 1/3^2𝕍_(124))+ 𝒪(λ^1) . As 𝕍_(01234) has to be negative, the inequality 4 s_12s_34>(s_13-s_14-s_23+s_24)^2 must be satisfied in order to be able to apply this scaling. In particular, the signatures of s_12 and s_34 have to agree. § REGGE ACTION ASYMPTOTICS We will now consider the asymptotic behaviour of the Regge action for triangulations with a boundary and with one or several bulk edges. We will consider the limit where these bulk edges are much larger than the boundary edges. More precisely, we will consider the initial configurations of the 4-2 and 5-1 Pachner moves, which contain one and five bulk edges, respectively. By scaling these bulk edges to become large, we obtain examples for spine configurations in the case of 4-2 moves and spike configurations in the case of 5-1 moves. The initial configuration for the 4-2 Pachner move is given by four 4-simplices, which altogether share one bulk edge, see Figure <ref> for an illustration. The boundary of the initial configuration can also serve also as a boundary of two 4-simplices sharing a tetrahedron (if the triangle inequalities are satisfied for these two 4-simplices). This serves as the final configuration of the 4-2 Pachner move. 
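Since the 4-2 spine analysis below rests on the one-large-edge volume asymptotics derived in the previous section, a quick numerical cross-check can be useful. The sketch below reuses the signed_squared_volume helper from the earlier snippet; the boundary data are our own illustrative choice (loosely modelled on the explicit 4-2 example discussed further below), and the printed ratio should tend to one if 𝕍_(01234) approaches -𝕍_(234) s_01^2/576 for a large timelike edge (01).

```python
import itertools
# reuses signed_squared_volume from the previous sketch

def edge_data(s01):
    # Squared edge lengths for a single 4-simplex (01234): fixed boundary data
    # and one large timelike edge (01) with squared length s01 < 0.
    s = {frozenset((0, 1)): s01}
    for i in (2, 3, 4):
        s[frozenset((0, i))] = 0.25
        s[frozenset((1, i))] = 0.25
    for i, j in itertools.combinations((2, 3, 4), 2):
        s[frozenset((i, j))] = 2.0
    return s

V_234 = signed_squared_volume([2, 3, 4], edge_data(-1.0))
for s01 in (-1.0e2, -1.0e4, -1.0e6):
    V = signed_squared_volume([0, 1, 2, 3, 4], edge_data(s01))
    # compare with the predicted leading term -V_234 * s01^2 / 576
    print(s01, V / (-V_234 * s01 ** 2 / 576.0))
```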
The initial configuration for the 5-1 Pachner move is given by five 4-simplices, which share one bulk vertex, see Figure <ref> for an illustration. This initial configuration includes five bulk edges. The boundary of the initial configuration can also serve as the boundary of one 4-simplex (if this 4-simplex satisfies the generalized triangle inequalities), which serves as the final configuration for the 5-1 Pachner move. The final configurations of these Pachner moves, i.e., two 4-simplices sharing a tetrahedron or a single 4-simplex, can be embedded into flat Minkowskian space, if the generalized Lorentzian triangle inequalities hold for the 4-simplices in the final configuration. We can then construct a classical solution for the (squared) lengths of the bulk edges in the initial configurations. One uses the embedding of the final Pachner move configuration into flat space to compute the lengths of the bulk edges in the initial configuration. As one uses a flat embedding, the deficit angles at all bulk hinges (i.e., bulk triangles) vanish. This satisfies the Regge equation of motion. The Regge action for the initial configuration evaluated on this flat solution is then equal to the Regge action for the final configuration. There are cases where the Lorentzian generalized triangle inequalities are satisfied for the initial configuration of a Pachner move (with some range of edge lengths allowed for the bulk edges), but not for the final configuration. It might still be possible that the triangle inequalities are satisfied for simplices with a different spacetime signature, e.g., Euclidean signature. In this case one can construct a solution to the equation of motion, which, however, describes a simplicial complex with this different signature. Such solutions can play a role in the path integral as saddle points in a complexified configuration space, see for instance <cit.>. In Lorentzian signature there are more possibilities to scale the bulk edges large than in Euclidean signature. E.g., in Euclidean signature we cannot scale just one bulk edge large, as this would violate the Euclidean triangle inequality. In contrast, this is allowed in Lorentzian signature, and leads to spines in the initial 4-2 Pachner move configurations. The initial 5-1 Pachner move configurations allow for unbounded bulk edge lengths, both in Euclidean and Lorentzian signature. In Lorentzian signature we can have cases where all bulk edges are either spacelike or timelike and large. But we can also have cases where some of the bulk edges are spacelike and the remainder is timelike. These mixed-signature cases, and also the case with only spacelike bulk edges, can lead to light cone irregular bulk triangles, where the action acquires imaginary terms. §.§ 4-2 Pachner move The initial 4-2 Pachner move configuration includes four 4-simplices (01ijk) with 2≤ i<j<k≤ 5. These share a (bulk) edge (01), cf. Figure <ref>. In a path integral we can integrate out the corresponding length squared variable and in this way remove the edge. The final configuration of this Pachner move can therefore be interpreted as two 4-simplices (02345) and (12345) glued along the tetrahedron (2345). The classical solution can be constructed by embedding the glued final 4-simplices (02345) and (12345) into Minkowski space and by determining the distance between the vertices (0) and (1). The two simplices can only be embedded into Minkowski space if the generalized Lorentzian triangle inequalities are satisfied. 
If this is not the case, there may be a complex solution. E.g., in case the two 4-simplices satisfy the Euclidean triangle inequalities, there will be a solution for the bulk edge length, which will lead to a triangulation where all four initial 4-simplices satisfy the Euclidean triangle inequalities. We will consider the asymptotic regime for the 4-2 configuration when the bulk edge length is very large and describes a spine configuration. To determine the geometry of one of the four initial 4-simplices, consider an edge e_01 in a 4-simplex (01234) whose edge lengths we scale to be large, s_01→±∞, in a triangle where the other two edge lengths remain bounded. This is only possible due to the Lorentzian triangle inequality and thus the triangle has to be timelike in the asymptotic regime. The dihedral angles at the triangles (01i), i=2,3,4, are therefore Euclidean angles. The dihedral angles at (01i) can be found by projecting the 4-simplex onto the plane perpendicular to the triangle (01i). In the asymptotic limit, the resulting triangle assumes the same geometry as the triangle (ijk). Indeed, using the asymptotic behaviour of simplex volumes derived in Section <ref>, we obtain for example for the dihedral angle θ_(01234),(012), sin(θ_(01234),(012)) = -4/3√(-1/16s_01^2)√(-1/576𝕍_(234)s_01^2)/√(-1/144s_23 s_01^2)√(-1/144s_24s_01^2) + 𝒪(s_01^-1) = sin(θ_(234),(2)) + 𝒪(s_01^-1), s_01→±∞ , and cos(θ_(01234),(012)) = 16/√(-1/144s_23 s_01^2)√(-1/144s_24s_01^2)-s_01^2/576∂𝕍_(234)/∂ s_34 + 𝒪(s_01^-1) = cos(θ_(234),(2)) + 𝒪(s_01^-1) , s_01→±∞ . Similarly, the dihedral angle at the boundary triangles (0ij) and (1ij), such as θ_(01234),(023), approaches a constant for large s_01, which can be seen from the following equation: sin(θ_(01234),(023)) = -4/3√(𝕍_(023))√(-1/576𝕍_(234)s^2_01)/√(-1/144s_23s^2_01)√(𝕍_(0234))+𝒪(s_01^-1) . Furthermore, the dihedral angle at the boundary triangles (234), such as θ_(01234),(234), can be expressed as sin(θ_(01234),(234)) = -4/3√(𝕍_(234))√(-1/576𝕍_(234)s^2_01)/√(𝕍_(0234))√(𝕍_(1234))+𝒪(s_01^0) . Thus θ_(01234),(234) scales as 𝒪(log s_01). In the Regge action, these boundary dihedral angles are multiplied by terms of order 𝒪(s_01^0). As a result, the leading contribution to the Regge action arises from the bulk deficit angles, which are multiplied by the area of the bulk triangles. In the 4-2 move configuration, there are four bulk triangles with asymptotically equal areas given by √(𝕍_(01i))= √(-s^2_01)/4 + 𝒪(s_01^0) , with i=2,3,4,5. Each bulk triangle (01i) is shared by three 4-simplices (01ijk), and the dihedral angle θ_(01ijk)(01i) approaches the value of the dihedral angle θ_(ijk)(i) in the triangle (ijk). The total sum of dihedral angles θ_(01ijk)(01i) is thus asymptotically equal to the sum of all dihedral angles in the four triangles (ijk) of the tetrahedron (2345), which amounts to -4×π. Therefore, we have S^4-2 = -√(-s^2_01)/4(4× 2π - 4×π) + 𝒪(log s_01) = πs_01 + 𝒪(log s_01) . Different from the 3-2 Pachner move in three dimensions, discussed in <cit.>, here we do not have an asymptotic regime with light-cone irregular bulk triangles. The reason for this is that, as discussed above, the bulk triangles are necessarily timelike in the asymptotic regime. Thus, the deficit angles at these triangles are Euclidean and cannot be light-cone irregular.   Example: * Consider an initial 4-2 Pachner move configurations with simplices (01ijk), where 2≤ i<j<k≤ 5. We choose all s_ij=2 (for 2≤ i<j≤ 5) and s_0i=s_1i=1/4 (for 2≤ i≤ 5). 
(We take all squared lengths to be in Planck units.) The generalized Lorentzian triangle inequalities restrict the squared length of the bulk edge (01) to be either s_01<-5/3 or to be spacelike, s_01>0. The timelike case allows for a flat solution at s_01=-2. The spacelike case includes a light-cone irregular regime for 0<s_01≤ 1. For both, the timelike and spacelike asymptotic regime, we find S^4-1(s_01)/|s_01|=π in the limit |s_01|→∞. Figures <ref> and <ref> illustrate the behaviour of the Regge action for the cases with timelike and spacelike bulk edge, respectively. §.§ Generalization to N 4-simplices sharing an edge The result for the asymptotic limit of the Regge action for the 4-2 move can be easily generalized [We thank José Padua-Argüelles for sharing this result.] to configurations of N simplices, which share an edge e_01, whose squared length we take to ±∞. Each bulk triangle leads to a contribution of 2π |s_01|/4 to the action. The asymptotic values for the dihedral angles at the bulk triangles in a given 4-simplex add up to -π. These are then multiplied by the square root of the asymptotic area value |s_01/4. We thus have S = πs_01(2T - N)/4 + 𝒪(log s_01) , where N is the number of 4-simplices which share the edge (01), and T is the number of (bulk) triangles, which share the edge (01). This asymptotic behaviour of the Regge action may have interesting consequences for spin foams. As discussed in the introduction, spin foams <cit.> are another path-integral approach based on a triangulation to which one assigns geometric data, which are then summed over in the path integral. In the case of spin foams, the geometric data are triangle areas and 3D dihedral angles <cit.>, which can be encoded into an area metric associated to each 4-simplex <cit.>. The 3D dihedral angles can be integrated out, leaving only the areas <cit.>. Using triangle areas as basic variables leads to a configuration space, which is genuinely larger than the configuration space of length Regge calculus. The Regge action can however be extended to such an area configuration space <cit.>, and reduces to the length Regge action in case the area configuration is induced by a length configuration <cit.>. Spin foams suppress configurations outside the length configuration space. This is most evident in the construction of the effective spin foam models <cit.>. One can thus parameterize spin foam variables by length parameters associated to the edges of the triangulation, and another set of variables describing deviations from the length configuration space, see e.g. <cit.>. The length parameters are allowed to become large, but larger deviations from the length configuration space lead to an exponential suppression of the amplitude. We will therefore restrict to configurations where these deviations are very small. The regime in which all areas are large A≫1 (we express all geometric quantities in Planck units ), allows for a semi-classical approximation. In this approximation, the 4-simplex amplitude reproduces the cosine of the Regge action <cit.>. (Effective spin foams make more direct use of the Area Regge action, thus the amplitude reproduces the Regge action also for smaller areas.) Here, we will consider a configuration of N 4-simplices sharing a bulk edge, and a regime where all edges are large l_e≫1, with the bulk edge much larger than the boundary edges. We can thus apply the asymptotic formula (<ref>). 
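For the 4-2 example given above (boundary squared lengths s_ij=2 and s_0i=s_1i=1/4), the flat solution can also be located numerically: six vertices embedded in four-dimensional flat (Minkowski) space are affinely dependent, so the Cayley-Menger determinant of all six points has to vanish. The short sketch below is our own illustration (the scanning range and names are arbitrary choices); according to the text it should recover s_01 = -2.

```python
# Sketch (ours): locating the flat solution of the 4-2 example by scanning
# the six-point Cayley-Menger determinant over the bulk variable s_01.
import numpy as np

def cm_det(S):
    """Cayley-Menger determinant of a symmetric squared-distance matrix S."""
    n = S.shape[0]
    M = np.ones((n + 1, n + 1))
    M[0, 0] = 0.0
    M[1:, 1:] = S
    return np.linalg.det(M)

def squared_lengths(s01, s_bdry=2.0, s_cone=0.25):
    """Squared lengths for vertices (0,1,2,3,4,5) of the 4-2 configuration."""
    S = np.zeros((6, 6))
    for i in range(2, 6):
        for j in range(i + 1, 6):
            S[i, j] = S[j, i] = s_bdry      # tetrahedron (2345)
        S[0, i] = S[i, 0] = s_cone          # edges from vertex 0
        S[1, i] = S[i, 1] = s_cone          # edges from vertex 1
    S[0, 1] = S[1, 0] = s01                 # bulk edge (01)
    return S

# Scan the timelike range allowed by the generalized triangle inequalities.
grid = np.linspace(-4.0, -1.7, 4001)
vals = [cm_det(squared_lengths(s)) for s in grid]
roots = [0.5 * (grid[k - 1] + grid[k])
         for k in range(1, len(grid)) if vals[k - 1] * vals[k] < 0]
print(roots)                                # expect a single root near -2
```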
We furthermore note that the areas for timelike triangles in spin foams have a discrete spectrum [Areas are conjugated to dihedral angles <cit.>, which for timelike triangles are Euclidean, and therefore periodic. This explains a discrete equidistant spectrum.], the areas have absolute values in ℕ/2 ℓ_P^2 <cit.>. With the asymptotic approximation of the absolute value of the bulk areas given by A∼ |s_01|/4 we can express (<ref>) as S∼ Aπ(2T-N). In the case of the 4-2 move we have 2T-N=4. We would therefore expect that the amplitude is well approximated by 1. Here we ignored measure terms, which might suppress the amplitude for large values of the area, see the discussion in <cit.>. §.§ 5-1 Pachner move In the initial 5-1 Pachner move configuration, there are five 4-simplices (0ijkl) with i<j<k<l taking values in {1,2,3,4,5}. These share a (bulk) vertex (0), cf. Figure. <ref>. In a path integral, the five bulk edges with edge squared variables s_0i, i=1,2,3,4,5 are integrated out, and in this way the bulk vertex is removed. The remaining edges are associated with a triangulation consisting of one 4-simplex (12345). To derive the asymptotic behavior of the dihedral angles and the Regge action for the 5-1 move, we have to distinguish between all possible combinations of signatures for the five bulk edges. The equations of motion for the five bulk edge lengths allow for a flat configuration. If the 4-simplex (12345) satisfies the generalized Lorentzian [If the 4-simplex satisfies the generalized Euclidean triangle inequalities, one can construct an Euclidean solution, which amounts to a saddle point in the complexified configuration space, see <cit.> for an example.] triangle inequalities, it can be embedded into Minkowski space. Embedding also a vertex (0) and adopting as lengths for the bulk edges (0i) the geodesic distance between (i) and (0), leads to a four-parameter family of solutions. This family represents a gauge orbit with flat configurations, arising from a remnant diffeomorphism symmetry <cit.>. The Regge action is constant along this gauge orbit and coincides with the Regge action of the 4-simplex (12345). Away from the flat solution, we identify the level sets of the Regge action, which are generically four-dimensional, as gauge orbits. Thus, the path integral associated to the 5-1 move can be simplified to a one-dimensional integral. We will consider a gauge fixing |s_0i|=λ for i=1,…,4 for the asymptotic regime. This choice corresponds to the additive scaling discussed in Section <ref>. With this choice we assume that the gauge conditions |s_0i|=λ define a good gauge fixing for large λ. We will proceed by considering the case of homogeneous signature for all the bulk edges.   Case s_0i=±λ (with the same sign for all i=1,…,5): To determine the dihedral angles at the bulk edges, let us first consider the dihedral angle at the triangle (012) in the 4-simplex (01234): sin(θ_(01234),(012)) = -4/3√(𝕍_012)√(𝕍_01234)/√(𝕍_0123)√(𝕍_0124) = -3/2√(± s_12λ)√(±𝕍_1234λ)/√(±𝕍_123λ)√(±𝕍_124λ) +𝒪(λ^-1) = sin(θ_(1234),(12))+𝒪(λ^-1) . Likewise, we obtain cos( θ_(01234),(012))= cos( θ_(1234),(12))+𝒪(λ^-1). Similarly, the dihedral angles at the boundary triangles (ijk), such as θ_(01234),(123), become constant for large λ. In the Regge action, these boundary dihedral angles are multiplied by terms of order 𝒪(λ^0). As a result, the dominant contribution to the Regge action comes from the bulk deficit angles, which are multiplied by the areas of the bulk triangles. 
For the deficit angles at the bulk triangles (0ij) we find ϵ_(0ij)^5-1 = 2π + θ_(0ijkl),(0ij) + θ_(0ijkm),(0ij) + θ_(0ijlm),(0ij) = 2π + θ_(ijkl),(ij) + θ_(ijkm),(ij) + θ_(ijlm),(ij) + 𝒪(λ^-1) . To compute the Regge action we multiply these deficit angles with √(𝕍_0ij)=√(±λ s_ij/4) +𝒪(λ^0) . We therefore obtain for the Regge action S^5-1 = -i/2√(λ)∑_0<i<j≤ 5√(± s_ij) ϵ_(ij) + 𝒪(λ^0) , where we denote with ϵ_(ij) the 3D deficit angle ϵ_(ij) = 2π + θ_(ijkl),(ij) +θ_(ijkm),(ij) +θ_(ijlm),(ij) at an edge (ij), which is shared by the three tetrahedra (ijkl), (ijkm) and (ijlm). If all the bulk edges (0i) are timelike and large, we know from (<ref>) that all the boundary tetrahedra of the initial 5-1 Pachner move configuration have to be spacelike. The 3D deficit angles ϵ_(ij) are therefore real, and we obtain a real leading-order term for the 4D Regge action, which agrees with 1/2 of the Regge action for the spacelike boundary triangulation S^5-1 =-√(λ)/2 S^E,3D + 𝒪(λ^0) , where we define the Euclidean Regge action by S^E,3D = -∑_1≤ i<j≤ 5√(s_ij)ϵ_(ij) . In the case that all the bulk edges (0i) are spacelike and large, the boundary tetrahedra need to be timelike, in order for the triangle inequalities to be satisfied. Therefore, we have a three-dimensional boundary triangulation with Lorentzian spacetime signature, and can write for the 4D action S^5-1 =√(λ)/2 S^L,3D + 𝒪(λ^0) , where the Lorentzian 3D action for the boundary of the 4-simplex (12345) is given by S^L,3D = -∑_1≤ i<j≤ 5√(s_ij)ϵ_(ij) . The bulk triangle (0ij) is light-cone regular with respect to the 4D triangulation, if and only if the edge (ij) is light-cone regular with respect to the three-dimensional triangulation of the boundary. Similarly, we find a real leading-order term for the 4D Regge action, if the Regge action for the three-dimensional boundary is real.   Examples: * Consider an equilateral 4-simplex (12345), with all squared edge lengths equal to s_ij=1. We subdivide this 4-simplex into five 4-simplices by inserting a bulk vertex (0) and setting the squared lengths of the bulk edges to s_0i=-λ. For large λ, the quotient S^5-1/√(λ) reproduces 10(2π-3arccos(1/3))/2=12.9515, which is -S^E,3D/2 for the boundary triangulation. All bulk triangles are timelike and therefore light-cone regular. Figure <ref> compares the exact 4D Regge action to its leading-order asymptotics. * Consider a 5-1 Pachner move configuration with boundary squared edge lengths given by s_12=-3 and s_34=s_35=s_45=1, as well as s_kl=-1/5 for k=1,2 and l=3,4,5. We set the bulk squared edge lengths to s_0i=+λ with i=1,…,5. The quotient S^5-1/√(λ) asymptotes to 4.24276, which is indeed 1/2 of the Lorentzian 3D Regge action for the boundary triangulation, as shown in Figure <ref>. Here, we have spacelike bulk triangles (034), (035), and (045), which are light-cone regular. * We change the length assignments above by only switching the sign for the s_kl, which are now s_kl=+1/5 for k=1,2 and l=3,4,5. The quotient S^5-1/√(λ) now asymptotes to -0.788443-8.42978 i, which is indeed 1/2 of the Lorentzian 3D Regge action for the boundary triangulation. Here, in the 4D triangulation, we have light-cone irregular bulk triangles (0ik), for i=1,2 and k=3,4,5. The result is shown in Figure <ref>.   Case s_01=±λ and s_0j=∓λ for j=2,…,5: We have two types of 4-simplices. Firstly, there is the 4-simplex (02345), for which all edges of type (0i) agree in their signature, and for which we have already discussed the behavior of the dihedral angles. 
Secondly, there are the four 4-simplices (01ijk) with 2≤ i<j<k≤ 5. For the latter, we have two types of bulk triangles: bulk triangles (01i) and bulk triangles (0ij) with 2≤ i<j≤ 5. Consider, for example, the dihedral angle at the triangle (012) in the 4-simplex (01234), sin(θ_(01234),(012)) = -4/3√(𝕍_012)√(𝕍_01234)/√(𝕍_0123)√(𝕍_0124) = -2 √(-λ^2)√( -𝕍_234λ^2 )/√(-s_23λ^2)√(-s_23λ^2) + 𝒪(λ^-1) = sin(θ_(234),(2)) + 𝒪(λ^-1) . Similarly, we find cos(θ_(01234),(012))=cos(θ_(234),(2))+𝒪(λ^-1). For the dihedral angle at the triangle (023) in the 4-simplex (01234), we find sin(θ_(01234),(023)) = -4/3√(𝕍_023)√(𝕍_01234)/√(𝕍_0123)√(𝕍_0234) = -√(∓ s_23λ)√(-𝕍_234λ^2 )/√(-s_23λ^2)√(∓𝕍_234λ) + 𝒪(λ^-1) = -1 + 𝒪(λ^-1) . Similarly, we find that cos(θ_(01234),(023)) = 𝒪(λ^-1/2), such that θ_(01234),(023) = -π/2+𝒪(λ^-1/2) . We have already determined the dihedral angles at the triangles (0ij) in the 4-simplex (02345) as θ_(02345),(0ij) = θ_(2345),(ij) + 𝒪(λ^-1) . The dihedral angles at the boundary triangles are at most 𝒪(logλ). Therefore, the boundary deficit angles do not contribute to the leading-order terms in the Regge action. Next, let us consider the bulk deficit angles. For the deficit angle at (012), we find ϵ_(012)^5-1 = 2π + θ_(01234),(012) + θ_(01235),(012)+ θ_(01245),(012) = 2π + θ_(234),(2) + θ_(235),(2)+ θ_(245),(2) + 𝒪(λ^-1) , whereas for the deficit angle at the triangle (023), we find ϵ_(023)^5-1 = 2π + θ_(01234),(023) + θ_(01235),(023)+ θ_(02345),(023) = π + θ_(2345),(23) + 𝒪(λ^-1/2) . The areas for triangles (01i) with i≥ 2 are given by 𝕍_01i=-λ^2/4+𝒪(λ^0), whereas for the triangles (0ij) with j>i≥ 2, the areas are given by 𝕍_0ij=∓λ s_ij/4+𝒪(λ^0). To leading order, we therefore need to consider only the deficit angles at the triangles (01i). The Regge action is then given by S^5-1 = 1/2λ∑_i=2,…,5ϵ_01i + 𝒪(λ^1/2) = 2πλ + 𝒪(λ^1/2) , where we have again used the fact that the angles in a triangle sum up to -π. Therefore, we find that the leading-order term in the action is real. This result can be explained by the triangles (01i), which are timelike, and can therefore not be light-cone irregular. The next-to-leading term scales with λ^1/2. Next, we determine these subleading λ^1/2 terms. To this end, we note that the next-to-leading-order correction coming from the (01i) triangles is of order O(λ^0), so the λ^1/2 correction comes from the six triangles (0ij) with 5≥ j>i≥ 2. We determined the associated deficit angles in (<ref>) ϵ_(0ij)^5-1 = π + θ_(ijkl),(ij) + 𝒪(λ^-1/2) , where 1< i,j,k,l≤ 5 are pairwise distinct. With 𝕍_0ij = ∓λ s_ij/4 + 𝒪(λ^0), the action evaluates to S^5-1 = 2πλ - /2∑_2≤ i<j ≤ 5√(∓λ s_ij)(π +θ_(2345),(ij)) + 𝒪(log(λ)) . In the case where s_0i=-λ for 2≤ i≤ 5, and λ is large, we know from equation (<ref>) that the Lorentzian triangle inequalities imply that the tetrahedron (2345) is spacelike. Thus, s_ij≥ 0, and ∑_2≤ i<j ≤ 5√(s_ij)(π + θ_(2345),(ij)) = -S^E,3D_(2345), which is minus the Euclidean Regge action for the tetrahedron (2345). Therefore, in the case where four of the bulk edges are timelike, the action is given by S^5-1 = 2πλ - 1/2√(λ) S^E,3D_(2345) + 𝒪(log(λ)) . Here, there are no light-cone irregular bulk triangles, as all these triangles are timelike. Thus, the leading and next-to-leading order terms in the action, both determined by the deficit angles associated with the bulk triangles, are real. In the case that s_0i=+λ for 2≤ i≤ 5, and λ is large, the tetrahedron (2345) has to be timelike. 
Here, -∑_2≤ i<j ≤ 5√( s_ij)(π +θ_(2345),(ij))=S^L,3D_(2345) evaluates to the Lorentzian action for the tetrahedron (2345), and in this case S^5-1 = 2πλ + 1/2√(λ) S^L,3D_(2345) + 𝒪(log(λ)) . According to (<ref>) applied to the volume of a 4-simplex with one large timelike edge and three large spacelike edges, the triangles (ijk) with 2≤ i<j<k≤ 5 have to be spacelike. Thus, the tetrahedron (2345) is timelike, but all its triangles (and therefore edges) are spacelike. According to (<ref>), this implies that the bulk triangles (0ij) are spacelike and might be light-cone irregular. Whether this is the case is determined by the 3D dihedral angles at (ij) in the tetrahedron (2345): the triangle (0ij) is irregular if the 3D dihedral angle at (ij) contains more or less than one light cone. Note that in a timelike tetrahedron with only spacelike triangles, the 3D dihedral angles contain either zero or one light cone. Such tetrahedra also contain at least one edge with a dihedral angle containing zero light cones. [ Consider an embedding of the tetrahedron into Minkowskian space. By replacing the Minkowski metric with a Euclidean metric, we obtain a Euclidean tetrahedron. The sum of the dihedral angles in a Euclidean tetrahedron is bounded by 2π from below and 3π from above <cit.>. We note that the dihedral angles of a Lorentzian tetrahedron, which has only spacelike triangles, contain either zero or one light cone. Accordingly, the dihedral angles of the associated Euclidean tetrahedron are either smaller or larger than π/2, and they are always positive. Thus, some of these angles are smaller than π/2, meaning that some of the Lorentzian angles do not contain a light cone. ] Therefore, there are imaginary contributions to the action S^L,3D_(2345), and these all have the same negative sign and cannot cancel each other out.   Examples: * We choose all boundary edge lengths squared to be s_ij=1 and the bulk squared edge lengths to be s_01=λ and s_02=s_03=s_04=s_05=-λ. An explicit calculation of the Regge action yields a leading-order term of 2πλ and a next-to-leading term of 6(π-arccos1/3)/2√(λ)=5.7319√(λ), which is indeed equal to -S^E,3D_(2345)√(λ)/2. Figure <ref> compares the exact Regge action to its leading asymptotic order. * We choose the following boundary squared edge lengths: s_1i=1 for i=2,…,5 and s_23=s_24=s_34=1 and s_25=s_35=s_45=3/10. This allows s_01=-λ and s_0i=+λ (i=2,…,5) with λ large. Computing the exact Regge action for this case and extracting the asymptotics gives 2πλ -(0.0725118 + 9.42478)√(λ)/2+ O(logλ). The 3D Lorentzian action for the tetrahedron (2345) evaluates indeed to -(0.0725118 + 9.42478). The action includes an imaginary part of order √(λ), showing that there are light-cone irregular bulk triangles. The corresponding comparison is shown in Figure <ref>.   Case s_01=s_02=±λ and s_0j=∓λ for j=3,4,5: Here, there are two types of 4-simplices, firstly the set {(01345),(02345)} and secondly the set {(012ij),3≤ i<j≤ 5}. For the first set we have one large edge with opposite signature to the other three large edges, and we have already determined the dihedral angles at the bulk triangles. For the second set we have two large edges with opposite signature to the other two large edges. We know that the area for the bulk triangles with two large edges of opposite signature will scale as λ, whereas for the bulk triangles with two large edges of the same signature we have a scaling with λ^1/2. 
The set of triangles whose areas scale with λ, is given by the bulk triangles (0ij) where i=1,2 and j = 3,4,5. The second set is given by the bulk triangles (012) and (0kl) where k,l = 3,4,5 with k<l. Let us first determine the dihedral angles for the triangles with dominant area scaling. We already determined, that, e.g., θ_(01345),(013) = θ_(345),(3) + 𝒪(λ^-1) . But for the dihedral angles in e.g. (01234) we find more complicated expressions sinθ_(01234),(013) = -4/3√(𝕍_013)√(𝕍_01234)/√(𝕍_0123)√(𝕍_0134) = -6 √(( ∂/∂ s_13 + ∂/∂ s_14 + ∂/∂ s_23+∂/∂ s_24)𝕍_1234)/√(s_12)√(s_34) + 𝒪(λ^-1) = - 1/2√(4 s_12s_34-(s_13-s_14-s_23+s_24)^2)/√(s_12)√(s_34) + 𝒪(λ^-1) , and cosθ_(01234),(013) = 4^2/√(𝕍_0123)√(𝕍_0134)∂/∂ s_24𝕍_01234 = 1/2-s_13+s_14+s_23-s_24/√(s_12)√(s_34) + 𝒪(λ^-1) . Below we need to know whether sinθ_(01234),(013) is positive or negative. From Equation (<ref>) we see that 4 s_12s_34-(s_13-s_14-s_23+s_24)^2 has to be positive, and thus s_12s_34 has to be positive. From (<ref>) we can conclude that the triangle (345) has to be spacelike, and therefore s_34 positive. Thus, s_12 has also to be positive, and we see that sinθ_(01234),(013) is negative. Now deriving the same expressions for θ_(01234),(014) as for θ_(01234),(013), we notice that sinθ_(01234),(014) = +sinθ_(01234),(013) + 𝒪(λ^-1) , cosθ_(01234),(014) = -cosθ_(01234),(013) + 𝒪(λ^-1) . Together with the fact that sinθ_(01234),(013)<0, we conclude θ_(01234)(013)+θ_(01234)(014)=-π +𝒪(λ^-1). In general we have θ_(012kl)(01k)+θ_(012kl)(01l) = -π + 𝒪(λ^-1) , θ_(012kl)(02k)+θ_(012kl)(02l) = -π + 𝒪(λ^-1) , for 3≤ <k<l≤ 5. Let us now come to the deficit angles. The deficit angle at the triangle (013) is given by ϵ^5-1_013 = 2π + θ_(345),(3) +θ_(01234),(013) +θ_(01235),(013) + 𝒪(λ^-1) . Similarly, we have to consider deficit angles at five additional triangles, that is consider the set of deficit angles at {(013),(014),(015),(023),(024),(025)} For the sum over these deficit angles we obtain ∑_i=1,2 j=3,4,5ϵ^5-1_0ij = 12π + 2∑_k=3^5θ_(345),(k)+ ∑_i=1,2 3≤ k<l≤ 5(θ_(012kl),(0ik)+θ_(012kl),(0il)) + 𝒪(λ^-1) = 12π -2π-6π + 𝒪(λ^-1) = 4π + 𝒪(λ^-1) . The signed squared areas of {(013),(014),(015),(023),(024),(025)} have all the same large λ limit, namely -λ^2/4. For the leading term of the Regge action we therefore obtain S^5-1 = 1/2λ∑_i=1,2 j=3,4,5ϵ^5-1_0ij + 𝒪(λ^1/2) = 2πλ + 𝒪(λ^1/2) . The next-to-leading-order terms of order λ^1/2 arise solely from the triangles {(012),(034),(035),(045)} with two large bulk edges of the same signature. As before, the boundary deficit angles are at most of order O(logλ), and the next-to-leading-order coming from the triangles {(013),(014),(015),(023),(024),(025)} is of order O(λ^0). The deficit angles associated to the triangles {(012),(034),(035),(045)} are given by ϵ_(012)^5-1 = 2π + θ_(01234),(012) + θ_(01235),(012)+ θ_(01245),(012) , ϵ_(0kl)^5-1 = 2π + θ_(012kl),(0kl) + θ_(01klm),(0kl)+ θ_(02klm),(0kl) , with 3≤ k,l,m ≤ 5 and k,l,m pairwise different. All the dihedral angles in the first line are associated to a bulk triangle with both bulk edges of equal signature in a 4-simplex whose other two bulk edges have opposite signature. We still need to compute these. In the second line, the first dihedral angle is also of this type, whereas for the other two we can use the result (<ref>) to conclude that each of these amounts to -π/2 + 𝒪(λ^-1/2). 
For the dihedral angles of type θ_(01234),(012) we have sinθ_(01234),(012) = -4/3√(𝕍_012)√(𝕍_01234)/√(𝕍_0123)√(𝕍_0124) = -4/3√(±1/4s_12λ)√(-λ^2/2^4(s_13 + s_14 + s_23 + s_24)𝕍_1234)/√(-1/36s_12λ^2)√(-1/36s_12λ^2) + 𝒪(λ^-3/2) . Therefore this type of dihedral angle goes to zero in the limit λ→∞. Thus, the deficit angles above are given by ϵ_(012)^5-1 = 2π +𝒪(λ^-1/2) , ϵ_(0kl)^5-1 = π + 𝒪(λ^-1/2) . The leading order of the area square for the triangle (012) is given by ±14λ s_12 and the leading order for the triangles (0kl) (with 3≤ k<l≤ 5) is ∓14λ s_kl. We thus obtain for the action S^5-1 = 2πλ -λ^1/2( π√(± s_12) + π/2∑_3≤ k<l≤ 5√(∓ s_kl)) + 𝒪(logλ) . Let us first consider the case where s_01=s_02 =+λ and s_0j=-λ for j=3,4,5. The tetrahedron (0123) has to be timelike, and as it has two large spacelike edges (01) and (02) and one large timelike edge (03), s_12 has to be spacelike, see (<ref>). The triangles (0kl) with 3≤ k<l≤ 5 are timelike, according to Equation (<ref>), the edges (kl) have therefore to be spacelike. Thus the first term in the brackets in (<ref>) is real, whereas the second one is imaginary. In the second case, where s_01=s_02 =-λ and s_0j=+λ, one also finds that (01) has to be spacelike (due to the triangle (012) being timelike and (<ref>)) and that the edges (kl) have to be spacelike (due to the tetrahedra (01kl) being timelike and (<ref>)). Thus the first term in the brackets in (<ref>) is imaginary, whereas the second one is real. In summary, the action includes in both cases imaginary terms of order λ^1/2. These arise from the spacelike bulk triangles, which are all light-cone irregular and of yarmulke type.   Examples: * We choose all boundary squared edge lengths to be s_ij=1 and the bulk squared edge length to be s_01=s_02=λ and s_03=s_04=s_05=-λ. An explicit computation of the Regge action leads indeed to the asymptotic expression (<ref>) and thus also yields the imaginary part -πλ^1/2+ O(logλ). See Figure <ref> for a comparison of the real part of the action with the leading-order asymptotics. * With the same boundary data we can also choose s_01=s_02=-λ and s_03=s_04=s_05=+λ. The explicit computation of the Regge action again conforms with the asymptotic expression (<ref>). Allowing for a relabeling of the vertices, we have covered all possible cases with different number of spacelike and timelike bulk edges. A summary of the various cases can be found below. §.§ Summary of asymptotic behaviour of the Regge action Here we summarize the asymptotic behaviour of the Regge action for the different Pachner moves and the various choices of signature for the bulk edges: * 4-2: Case s_01=-λ: S^4-2 = = πλ + 𝒪(logλ) * 4-2: Case s_01=λ: S^4-2 = = πλ + 𝒪(logλ) * 5-1: Case s_0i=-λ all i=1,…,5: S^5-1 =-√(λ)/2 S^E,3D + 𝒪(λ^0) * 5-1: Case s_0i=+λ all i=1,…,5: S^5-1 =√(λ)/2 S^L,3D + 𝒪(λ^0) * 5-1: Case s_01=+λ and s_0j=- λ for j=2,…,5: S^5-1 = 2πλ - 1/2√(λ) S^E,3D_(2345) + 𝒪(log(λ)) * 5-1: Case s_01=-λ and s_0j=+ λ for j=2,…,5: S^5-1 = 2πλ + 1/2√(λ) S^L,3D_(2345) + 𝒪(log(λ)) * 5-1 Case s_01=s_02=+ λ and s_0j=- λ for j=3,4,5: S^5-1 = 2πλ -λ^1/2( π√( |s_12|) + π/2∑_3≤ k<l≤ 5√( |s_kl)|) + 𝒪(logλ) * 5-1: Case s_01=s_02=- λ and s_0j=+λ for j=3,4,5: S^5-1 = 2πλ -λ^1/2( π√( |s_12|) + π/2∑_3≤ k<l≤ 5√(| s_kl|)) + 𝒪(logλ) We see that the terms of order λ do not depend on the boundary data, and are associated to timelike bulk triangles. The terms of order √(λ) do depend on the boundary data. 
In the cases where the bulk edges do not all have the same signature, and where there are spacelike bulk triangles, there are also light-cone irregular bulk triangles. § FINITE EXPECTATION VALUES FOR SPIKE AND SPINE CONFIGURATIONS Spike and spine configurations involving bulk edges which can have arbitrarily large lengths pose a challenge to the convergence of the gravitational path integral. In Euclidean Regge quantum gravity, spike configurations are highly problematic. It has been shown for two-dimensional Regge calculus that (for a measure that does not suppress edge lengths exponentially) expectation values of sufficiently high powers of the bulk lengths in fact diverge <cit.>. In higher dimensions, the 5-1 and 4-1 Pachner moves in four and three spacetime dimensions, respectively, isolate the conformal degree of freedom <cit.>. This mode comes with the “wrong” sign <cit.> and leads to an exponential enhancement of these spike configurations in the Euclidean path integral. Due to an infinite integration range, this leads to divergences. In contrast, the oscillatory nature of the path integral for Lorentzian Regge calculus might avoid the conformal factor problem. This has been shown to be the case for the theory linearized around a flat background in <cit.>. Moreover, in three spacetime dimensions, the recent work <cit.> showed that also in the full theory, arbitrary expectation values of powers of bulk edge squares in (light-cone regular) 4-1 and 3-2 Pachner move spike and spine configurations remain finite. Here we will consider the four-dimensional Lorentzian Regge path integral and investigate the convergence for spine and spike configurations appearing in 4-2 and 5-1 Pachner moves, respectively. For the Regge path integral we need to specify a measure. Whereas in three-dimensional Regge calculus there exists a unique local measure which renders the partition function triangulation-invariant to one-loop order <cit.>, there is no such local measure available for the four-dimensional theory <cit.>. For the 5-1 and 4-2 moves (for which the Regge action is invariant), it is possible to find a measure which leads to invariance modulo a non-local factor, which does not factorize over the simplices. This measure is given by <cit.> 𝒟s_e = 1/∏_e⊂bdry√(√(96 π)) 1/∏_e⊂bulk√(√(96 π)) 1/∏_σ𝕍_σ^1/4∏_e⊂bulk ds_e , where σ denotes the 4-simplices in the triangulation. We note that we can apply the asymptotic expansion formulae (<ref>), (<ref>) and (<ref>) to the squared volumes 𝕍_σ, and obtain a measure that includes inverse powers of the bulk edge lengths. In the following we will assume that the dependence of the measure on the bulk length squared variables in the asymptotic regime is given by a fractional positive or negative power. As our primary goal is to argue for the finiteness of expectation values for spine and spike configurations, we only need to consider the asymptotic regime of large bulk edge squared variables. Here we will only focus on spine and spike configurations with light-cone regular triangles and therefore real contributions to the Regge action. As discussed in Section <ref>, light-cone irregular triangles lead to branch cuts for the Regge action and imaginary terms. The sign of these imaginary terms changes if one crosses the branch cuts. Choosing the path integral contour such that it goes along the suppressing side, one can exponentially suppress these configurations. 
To study the convergence properties of the path integral, we therefore only need to consider light cone regular configurations. §.§ 4-2 Pachner move For the 4-2 Pachner move, the path integral amounts to an integral over one bulk squared edge variable, s_01=±λ, λ>0. We found previously that, independent of the timelike or spacelike nature of the bulk edge, the asymptotic expression for the Regge action is real and given by S^4-2 = πλ + 𝒪(logλ) . To investigate the convergence of expectation values of powers of the squared length s_01^n=(±)^nλ^n, we use the above asymptotic expression for the Regge action and combine it with our assumption for the asymptotics of the measure μ∝λ^M, where M can be any positive or negative fraction. Setting m=n + M, and dropping a sign (±)^n, we thus have to consider integrals of the type ℐ̃_4-2(m,c) = ∫_c^∞ dλ λ^m e^iπλ , where c>0 is a sufficiently large positive constant which ensures that we integrate over configurations where the asymptotic approximation is valid. Introducing a regulator ϵ>0 allows us to write the previous integral as ℐ̃_4-2(m,c) = lim_ϵ→ 0∫_c^∞ dλ λ^m e^(iπ - ϵ)λ = c^m+1 E_-m(-iπ c) , where E_n(z)≡∫_1^∞ dt t^-n e^-zt is the exponential integral function. (<ref>) is finite for any m∈ℝ. We will use the full action for a numerical computation of the expectation value, and define ℰ_4-2(m,c)=∫_c^∞ dλ λ^m e^i S^4-2(λ) . The lowest possible value for c is determined by the generalized triangle inequalities, but to show finiteness of the expectation values we can also choose a larger c. This integral cannot be solved analytically, due to its highly oscillatory behaviour and the complicated form of the full Regge action. However, as in <cit.>, to deal with the unbounded integral we employ series-acceleration methods, like Wynn's epsilon algorithm or iterated Shanks transforms, see <cit.> and <cit.> for a review. These methods have been shown to be efficient tools when evaluating path integrals and expectation values <cit.>. The algorithms are particularly efficient if we choose the lower bound c for the integral such that the asymptotic form of the action holds approximately. In Figure <ref> we show the comparison of the expectation value ℰ_4-2(m,c) (<ref>) defined with the full action with the analytical approximation ℐ̃_4-2(m,c) (<ref>) for the same configuration as displayed in Figure <ref>, and for c=10. At this lower bound of the integration, the full action S^4-2 deviates from the asymptotic approximation by almost 40%. This also results in a deviation between the full and approximated expectation values of about 30%, which however decreases when increasing the power m. This is expected, since for larger m, the regime of large λ, where the integrands of both expressions agree better, contributes more. Overall we find stable numerical results for ℰ_4-2(m,c), which remain finite for all tested m∈[0,25]. §.§ 5-1 Pachner move The 5-1 Pachner move configuration can be obtained by subdividing a simplex (12345) by placing a vertex (0) inside this simplex and subsequently connecting all boundary vertices with the bulk vertex. In a path integral for such a configuration one would therefore have to integrate over five length squared variables s_0i, i=1,…, 5. As discussed previously, we will only consider the path integral for configurations which have only light cone regular bulk triangles in the asymptotic regime. 
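To illustrate the kind of evaluation described for the 4-2 integral, the following sketch (ours, not the authors' code) resums the oscillatory integral ℐ̃_4-2(m,c) with Wynn's epsilon algorithm and compares it with the exponential-integral expression, here evaluated with mpmath's expint; the partition of the integration range into unit chunks, the chosen values of m and c, and all function names are our own choices.

```python
# Sketch (ours): series-accelerated evaluation of int_c^inf dlam lam^m e^{i pi lam}
# and comparison with the closed form c^{m+1} E_{-m}(-i pi c).  Splitting the
# range into unit chunks (half the period of e^{i pi lam}) yields an
# alternating-type divergent series that Wynn's algorithm resums.
import numpy as np
from scipy.integrate import quad
import mpmath as mp

def chunk(m, a, b):
    """Integral of lam^m exp(i pi lam) over [a, b]."""
    re = quad(lambda x: x**m * np.cos(np.pi * x), a, b)[0]
    im = quad(lambda x: x**m * np.sin(np.pi * x), a, b)[0]
    return re + 1j * im

def wynn_epsilon(partial_sums):
    """Wynn's epsilon algorithm; returns the deepest even-order extrapolant.
    (A production version would guard small denominators more carefully.)"""
    s = [complex(x) for x in partial_sums]
    n = len(s)
    E = np.zeros((n, n + 1), dtype=complex)   # column 0 holds epsilon_{-1} = 0
    E[:, 1] = s                               # column 1 holds the partial sums
    for c in range(2, n + 1):
        for i in range(n - c + 1):
            denom = E[i + 1, c - 1] - E[i, c - 1]
            E[i, c] = E[i + 1, c - 2] + (1.0 / denom if denom != 0 else 0.0)
    return E[0, n if n % 2 == 1 else n - 1]

m, c, n_terms = 2, 10.0, 15
terms = [chunk(m, c + k, c + k + 1) for k in range(n_terms)]
accelerated = wynn_epsilon(np.cumsum(terms))

exact = complex(c**(m + 1) * mp.expint(-m, -1j * mp.pi * c))
print(accelerated, exact)     # the two values should agree to several digits
```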
We thus only need to consider the cases where (a) the signature of all bulk edges is the same and (b) one bulk edge is timelike and the remaining four are spacelike. The action for these cases is of the form S^5-1 = αλ + β√(λ) + 𝒪(logλ) , where α=0 in the case that the signature for all bulk edges agrees, and α=2π in the case that one bulk edge is timelike and the other four are spacelike. β is a real constant, which depends on the data of the boundary triangulation. As discussed in Section <ref>, the 5-1 configuration features a four-dimensional gauge symmetry, which is a remnant of diffeomorphism symmetry <cit.>. We therefore must specify how to deal with this gauge symmetry. [The squared edge length is not invariant under this gauge symmetry. We will nevertheless consider the corresponding expectation values, in order to be able to compare with statements in previous literature <cit.>. To this end, we can regard the gauge fixing as a form of symmetry reduction. Namely, we will set all bulk edge lengths to be equal. We then compute the expectation value in this symmetry-reduced model. Alternatively, one can construct gauge invariant observables, e.g. using the relational formalism <cit.>, and express one edge length in relation to the other four. Assuming that the resulting observable can be approximated by a polynomial function in the edge lengths, the results on the finiteness also apply to this observable.] Our choice of additive scaling for the bulk edges s_0i = s^0_0i±λ already implies a gauge fixing in the asymptotic regime s_0i = λ. We thus have to insert a Faddeev-Popov determinant and as a result remain with only a one-dimensional integral over the variable λ. Along the flat solution, where the gauge orbits can be parametrized explicitly, the Faddeev-Popov determinant can be computed as (see <cit.> for a similar computation) ℱ = 2^4 4! V_(12345) . Here V_(12345) is the absolute volume of the final simplex (12345), which is independent of the bulk variables and thus independent of λ. Away from the flat solution, we will assume that the gauge-fixing determinant together with the measure can be approximated asymptotically by some fractional positive or negative power of λ. We thus have to consider one-dimensional integrals over the variable λ with a Regge action of the form (<ref>). For the case with five bulk edges of equal signature we therefore consider ℐ̃_5-1(m,β,c) = ∫_c^∞λλ^m e^β√(λ) = 2 ∫_√(c)^∞λ̃λ̃^2m + 1 e^βλ̃ , where c>0 is again some sufficiently large positive constant such that the asymptotic approximation leads only to a very small correction. The power m∈ℝ summarizes the contribution from the measure, the Faddeev-Popov determinant, and the insertion of the bulk lengths observables into the path integral. Similarly as for the 4-2 Pachner move, these integrals result in ℐ̃_5-1(m,β,c) = 2 c^m+1 E_-2m -1(-β√(c)) and are finite for any value of m∈ℝ. For the second case with one timelike and four spacelike bulk edges we do have a leading order term of order λ and a subleading order term of order λ^1/2 in the action. If we only consider the leading order term the calculation proceeds in the same way as for the 4-2 move; the only difference is that the leading order term is now 2πλ and not πλ. 
If we also consider the subleading order term in the action we obtain the integrals ℐ̃_5-1(m,β,c) = ∫_c^∞λλ^m e^2 πλ + β√(λ) = 1/2^m∫_c̃^∞λ̃(1-sgn(b) b^2/√(b^4 + 4 b^2 λ̃))( b^2 + 2 λ̃- sgn(b) √(b^4+4 b^2 λ̃))^m e^2πλ̃ , where we applied a coordinate transformation λ̃= λ + b √(λ) with b=β/2π. Here c̃=c+b√(c) and we assume c>0 and √(c)>|b|, thus c̃>0. We expand the non-exponential factors in the integral around λ̃=∞λ̃^m(1-sgn(b) b^2/√(b^4 + 4 b^2 λ̃))(2+ b^2/λ̃ - sgn(b) √(b^4/λ̃^2+4 b^2 /λ̃))^m = λ̃^m ∑_k=0^∞ a_k(m) λ̃^-k/2 . Given the asymptotic approximation discussed here, one can truncate this series at a sufficiently large but finite k=K, so that absolute convergence of the remaining integrals is guaranteed and the errors are sufficiently small: [The errors can be estimated using |∫^∞_c̃λ̃λ̃^m-k/2exp(2πλ̃)|≤1/-m+k/2-1c̃^m-k/2+1 for m-k/2<-1, and are thus small if c̃≫ 1 and k/2≫ m. ]ℐ̃_5-1(m,β,c) ≈ 1/2^m∑_k=0^K a_k(m) ∫_c̃^∞λ̃λ̃^m-k/2 e^2πλ̃ = 1/2^m∑_k=0^K a_k(m) c̃^m-k/2+1 E_-m+k/2(- 2πc̃) . The E_-m+k/2(- 2πc̃) are finite for real m and k. We therefore conclude that the expectation values computed with the asymptotic form of the action are finite. Similar to the 4-2 move, we will now go one step further and compute the expectation values based on the full action. Again, we will employ series-acceleration methods to numerically evaluate the following integrals: ℰ^(j)_5-1(m,c)=∫_c^∞dλ λ^m e^ S^5-1_(j)(λ) , where j=a,b distinguishes the two cases (a) the signature of all bulk edges is the same and (b) one bulk edge is timelike and the remaining four are spacelike. As for the 4-2 move, the lower integration limit c is bounded from below by the generalized triangle inequalities. But to show finiteness of the expectation values we can choose a larger c, and will do so in order to obtain faster convergence for the series acceleration algorithm. In Figure <ref> we compare the full result to the analytic approximation for both cases (a) and (b). For the homogeneous case (a) (left panel), we choose c=100, at which the asymptotic approximation to the action deviates from the full action by about 5·10^-4, see also Figure <ref>. The resulting deviation between the approximated and full expectation values is on the per-mil level, and decreases for larger values of m. For the inhomogeneous case (b) (right panel), we choose c=50, at which the asymptotic approximation to the action deviates from the full action by about 15%, see also Figure <ref>. This results in a deviation of the full and approximated expectation values of the order of a few percent, which decreases for increasing m. For the comparison of approximated and full result, we neglected the sub-leading term β in the analytic expression (<ref>). As for the 4-2 move, we conclude that the expectation values of the bulk edge to some power m remains finite. Furthermore, these expectation values can be computed numerically, using series-acceleration methods, or analytically, by approximating the action with its asymptotic behaviour for large bulk edges. § DISCUSSION In this paper we investigated spike and spine configurations which appear in simplicical approaches to quantum gravity, such as Regge calculus and spin foams. Such configurations allow for infinitely large bulk edges, and also an infinitely large four-volume, despite keeping the boundary geometry fixed. This illustrates one of the difficulties to define a notion of scale in quantum gravity <cit.>. 
Due to their unbounded support, spike and spine configurations could potentially dominate the path integral or even lead to divergences. It is therefore important to investigate these configurations and to understand their role in the quantum gravitational path integral. The results of this paper include: * The Regge action is a complicated non-polynomial function of the edge lengths, making the extraction of physical properties difficult. Here we determined much simpler asymptotic forms for the Regge action for spikes and spines appearing in 5-1 and 4-2 Pachner moves, respectively. In some cases, these simplifications involve a type of dimensional reduction. For the 4-2 as well as the 5-1 move, we considered all possible choices for the signature of the bulk edges. In all cases, except for two (which are the 5-1 move configurations with all edges either spacelike or timelike), we found that the leading-order term of the Regge action is of order λ^1, where λ denotes the absolute value of the squared edge length. The λ^1 coefficient is independent of the boundary data. In the two remaining cases, the leading order term is of order λ^1/2. Such terms also appear as subleading terms for the 5-1 configurations with bulk edges of mixed signature. The λ^1/2 coefficients do depend on the boundary data. The techniques employed here can be generalized to other configurations containing large edge lengths. Section <ref> already considered a generalization of the 4-2 Pachner move to configurations of N 4-simplices sharing one large edge. We can expect simplifications of the Regge action along the lines observed in this work, as long as all involved 4-simplices contain a mixture of large and small edges. * We found that in some cases the asymptotic regime exhibits light cone-irregular bulk triangles. This includes all 5-1 move configurations with inhomogeneous signature for the bulk edges and with spacelike bulk triangles. The situation is however different from the three-dimensional case, where large spacelike bulk edges in the 3-2 and 4-1 configurations always lead to light-cone irregular hinges. We found an example for a 5-1 move configuration with only spacelike bulk edges, which has light-cone regular spacelike bulk triangles. Changing the boundary data, we also found a configuration with light-cone irregular triangles. * In all but one of the cases where the asymptotic regime is light-cone irregular, the light-cone irregular bulk deficit angles are of yarmulke type. That is, the angles include less than two light cones. The only case where we might obtain irregular deficit angles of trouser type, is in the case of the 5-1 move configuration with only spacelike bulk edges. In this case a bulk triangle is light cone irregular if its boundary edge is light-cone irregular with respect to the Lorentzian boundary triangulation. So far we cannot exclude that trouser type light-cone irregularities might appear in such a Lorentzian boundary triangulation. It would be helpful to obtain a deeper geometric understanding of the light-cone irregular configurations, e.g., by considering how null geodesics traverse such configurations <cit.>. As the revealed light-cone irregular regimes appear for arbitrary large edge lengths, these regimes can only be included in a path integral if the path integral contour is chosen to go along the suppressing side of the branch cuts. * Our methods allow us to decide which types of asymptotic regimes are allowed by the generalized triangle inequalities and which are forbidden. 
For instance, a 5-1 move configuration with spacelike boundary tetrahedra and large spacelike bulk edges is not allowed. But this is the one case, on which numerical efforts of the spin foam community have been concentrated: the latest result indicates that the associated state sum is finite (with a specific choice of measure) <cit.>. Given that for this case there are no underlying Regge geometries with arbitrarily large edges, this result may not be surprising. We point out, that there are however many other types of 5-1 Pachner move configurations, involving timelike tetrahedra, timelike triangles, or timelike edges where the triangle inequalities do allow for large bulk edges. These configurations also need to be investigated, and we cannot draw a conclusion regarding finiteness from the one case which has been investigated so far. * The found asymptotic expressions allow us to obtain estimations for the path integral involving the corresponding spike and spine configurations. One can interpret 5-1 and 4-2 moves as coarse graining moves <cit.>, and estimations for the corresponding path integrals can help in understanding renormalization properties of Lorentzian Regge calulus or of spin foams <cit.>. * We showed not only that the Lorentzian Regge path integral associated to 5-1 and 4-2 configurations is finite, but also that this holds for arbitrary powers of length expectation values and for a large class of measures. This is in sharp contrast to the Euclidean quantum gravity version of Regge calculus. One can show that the 5-1 spike configurations isolate the conformal factor in the gravitational action <cit.>, leading to an exponential enhancement of these configurations. Thus, without rotating the sign of the conformal factor, such configurations lead to divergences. We established finiteness using the asymptotic expressions for the action and an analytical calculation. We also performed the calculation using the exact action and the Wynn algorithm <cit.> for numerically dealing with oscillating integrals. The integrals are also finite for cases where the integrals are not absolutely convergent. Here the oscillatory nature of the amplitudes is crucial in order to obtain finite results. * We have thus also illustrated the usefulness of the Wynn algorithm in dealing with oscillatory integrals. The work <cit.> applied the Wynn algorithm successfully to sums arising from partition functions and expectation value calculations for effective spin foams <cit.>. It was pointed out in <cit.> that in order for the algorithm to work, it is important that the action scales at most linearly in the summation variables, which are the areas. We found here that the leading-order terms in the action are indeed linear in the areas. One should therefore be able to apply the Wynn algorithm in order to study 5-1 and 4-2 configurations in spin foams. These results provide valuable progress for our understanding of Lorentzian simplicial path integrals. In particular, the results on the finiteness show how Lorentzian Quantum Regge calculus can circumvent one of the crucial issues of Euclidean Quantum Regge calculus, namely the conformal mode problem. Here we do not need to employ a restriction on the signature of, e.g. the edges, as proposed (for two-dimensional triangulations) in <cit.>, or an exponentially suppressing measure as in <cit.>. The finiteness of the partition function is also a key question for spin foams. 
So-called bubble configurations, in which areas can become arbitrarily large, have been a major concern in spin foam research, e.g. <cit.> and for group field theories <cit.>. For technical reasons, efforts have been focused on configurations which include only spacelike tetrahedra, but, as mentioned above, these do not correspond to Regge configurations with large areas. We have seen here that there are however many possible Regge configurations with unbounded bulk areas, if we allow timelike sub-simplices. We have found here, that the Regge action in the asymptotic regimes simplifies and in particular observed a certain type of dimensional reduction by one, and in some cases by two dimensions. A similar dimensional reduction has been found in spin foams <cit.> for a so-called melon configuration, where one has two 4-simplices glued along four tetrahedra. This work considered 4-simplices with only spacelike tetrahedra. It would be interesting to see whether this behaviour generalizes, and whether spin foam amplitudes simplify in the same way as we have found here for the Regge action. More generally, the work here motivates to consider systematically new asymptotic regimes for simplex amplitudes. The resulting asymptotic expressions might facilitate the numerical investigation of bubbles. Bubbles involve simplex amplitudes where some of the areas are large and one has thus large spin numbers. Such amplitudes are however prohibitively computationally expensive <cit.>. Using possible simplifications in regimes where some areas are much larger than other areas, will allow a probe of a much richer regime of spin foams, without having to compute the full amplitudes explicitly. The idea of hybrid schemes, which so far have only been suggested for the regime where all areas are large <cit.>, could be extended to mixed regimes of large and small areas. Another interesting application of dimensional reduction is the construction of (d-1)-dimensional quantum-geometric models from d-dimensional ones. A similar relationship has been observed for the three-dimensional Ponzano-Regge model and two-dimensional so-called intertwiner models <cit.>, both of which are topologically invariant. Generalizing such relationships might also lead to new examples for holographic behaviour in quantum-geometric models <cit.>. As for three-dimensional Regge calculus, we have found also for the four-dimensional version a number of cases in which the asymptotic regime is light cone irregular. We can either exclude such configurations from the path integral (as done in Causal Dynamical Triangulations <cit.>) or include them, but then choose as integration contour the suppressing side of the branch cuts. Invariance of the quantum gravity partition function under changes of the triangulation is equivalent to an implementation of a discrete form of diffeomorphism invariance <cit.>. Therefore, a criterion for deciding which option to choose, is to look for (approximate) invariance of the partition function under 5-1 and 4-2 Pachner moves. This work provided valuable tools to do so. The authors thank José Padua-Argüelles for providing his derivation of the result in Section <ref>. JB is supported by an NSERC grant awarded to BD and a doctoral scholarship by the German Academic Scholarship Foundation. DQ is supported by research grants provided by the Blaumann Foundation. 
Part of this research was conducted while BD was visiting the Okinawa Institute for Science and Technology (OIST) through the Theoretical Sciences Visiting Program (TSVP). Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
http://arxiv.org/abs/2407.13301v1
20240718090627
CoD, Towards an Interpretable Medical Agent using Chain of Diagnosis
[ "Junying Chen", "Chi Gui", "Anningzhe Gao", "Ke Ji", "Xidong Wang", "Xiang Wan", "Benyou Wang" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
§ ABSTRACT The field of medical diagnosis has undergone a significant transformation with the advent of large language models (LLMs), yet the challenges of interpretability within these models remain largely unaddressed. This study introduces Chain-of-Diagnosis (CoD) to enhance the interpretability of LLM-based medical diagnostics. CoD transforms the diagnostic process into a diagnostic chain that mirrors a physician’s thought process, providing a transparent reasoning pathway. Additionally, CoD outputs the disease confidence distribution to ensure transparency in decision-making. This interpretability makes model diagnostics controllable and aids in identifying critical symptoms for inquiry through the entropy reduction of confidences. With CoD, we developed DiagnosisGPT, capable of diagnosing 9,604 diseases. Experimental results demonstrate that DiagnosisGPT outperforms other LLMs on diagnostic benchmarks. Moreover, DiagnosisGPT provides interpretability while ensuring controllability in diagnostic rigor. 
§ INTRODUCTION In healthcare, disease diagnosis is pivotal yet complex, serving as a bridge between medical expertise and decision-making <cit.>. The diagnosis task involves predicting a disease using explicit (self-reported) and implicit (inquired) symptoms <cit.>. As illustrated as Figure <ref>, the doctor either inquires further or makes a diagnosis, aiming for high accuracy with minimal inquiries. Large Language Models (LLMs) offer a promising path for automated diagnosis, due to their superior reasoning and dialogue abilities, coupled with extensive knowledge. These capabilities enable them to address a wide range of diseases and interact effectively with patients <cit.>. However, the application of LLMs in medical diagnosis encounters significant challenges of interpretability <cit.>. Considering the issue of hallucinations, LLMs could arbitrarily make a diagnosis. Without interpretability, it is unclear if decisions meet sound analysis and ethical standards <cit.>. Although LLMs can offer rudimentary explanations for their decision, they lack a comprehensive process to explain why other potential diseases are excluded and to which extent of confidence it made such a decision. <cit.> This highlights the need for an interpretable LLM solution for diagnosis. In response to these limitations, we propose the Chain of Diagnosis (CoD) to enhance the interpretability of LLMs. CoD provides transparency in both reasoning and decision-making. It transforms the black-box decision-making process into a diagnostic chain that mirrors a physician’s thinking process with five distinct steps. It reveals the series of thoughts behind each decision. For decision transparency, CoD outputs a confidence distribution, with higher confidence indicating a stronger belief in diagnosing a specific disease. This allows for control over the LLM's decisions using a confidence threshold. Additionally, diagnostic uncertainty can be quantified by the entropy of the confidences. The goal of entropy reduction can aid in eliciting more effective symptoms for inquiry. To implement CoD, we propose a method for constructing CoD training data from patient cases. Due to the restriction of real-world cases, this paper proposes generating synthetic cases from disease encyclopedias. This approach facilitates scalability in supported diseases without any concerns of patient privacy. With synthetic cases, we constructed a training dataset with 48,020 CoD instances, leading to the development of our model, DiagnosisGPT, capable of diagnosing 9,604 diseases. In out-of-domain settings, DiagnosisGPT outperforms other LLMs on public diagnostic datasets and a newly-created benchmark DxBench. Moreover, it achieves an accuracy exceeding 90% on all datasets with a diagnostic threshold of 0.55, highlighting the reliability of its confidence levels. Our contributions are summarized as follows: 1) We introduce the Chain-of-Diagnosis (CoD) diagnostic method, designed to enhance interpretability of LLMs in disease diagnosis; 2) We propose a method to synthesize patient cases using disease encyclopedia data. This enables low-cost creation of CoD training data for various diseases while avoiding privacy and ethical concerns; 3) Using CoD, we built DiagnosisGPT that can support automatic diagnosis for 9,604 diseases. 
Experiments demonstrate that DiagnosisGPT surpasses other LLMs across diverse diagnostic datasets; 4) We present DxBench, a diagnostic benchmark with 1,148 real cases covering 461 diseases, manually verified and derived from public doctor-patient dialogues. § PROBLEM DEFINITION OF DIAGNOSIS §.§ Problem definition In the diagnostic process, physicians navigate through explicit symptoms (𝒮_exp), self-reported by patients, and implicit symptoms (𝒮_imp), elicited through further inquiry, to predict a target disease (d_t) from a predefined list (𝒟). This process involves a sequence of decisions: physicians either probe for additional symptoms (Symptom Inquiry) or select the most probable disease from 𝒟 (Disease Diagnosis). The objective is to maximize diagnostic accuracy (a ↑) while minimizing the number of inquiries (n ↓), ensuring the inquiry count n does not exceed a patient's patience threshold L. §.§ The Challenge The challenge lies in determining when and how to inquire about symptoms to enhance diagnostic accuracy. As stated in <cit.>, LLMs do not excel at questioning users. To explore this, we conducted a preliminary experiment on diagnostic benchmarks with two LLMs. The results shown in Table <ref> reveal two possible issues for diagnosis: * Case I, Arbitrary Diagnosis (too small n): LLMs diagnose immediately without any symptom inquiry, leading to minimal benefit from symptom inquiry. * Case II, Excessive Inquiries (too large n): LLMs continuously inquire about symptoms until reaching the maximum limit n=L, resulting in inefficient diagnosis. The objective of diagnosis is to enhance disease identification accuracy through symptom inquiry. However, there is a trade-off between efficiency and accuracy. Efficient diagnosis is faster with fewer inquiries, while accurate diagnosis requires more detailed inquiries. LLMs may struggle with symptom inquiry, particularly in determining when to ask questions. Additionally, symptom inquiry by LLMs does not significantly improve accuracy, indicating that LLMs might fail to ask for crucial symptom information. 
For CoD, this is demonstrated in Property <ref>. Decomposability with a pipeline chain: CoD transforms the black-box diagnostic decision into an explainable chain, which can be viewed as a sequence of intermediate steps. Therefore, the designed CoD is a chain pipeline (see Sec. <ref>), each step of which has a clear functionality. The overall chain mimics a physician's real diagnostic process. Algorithmic transparency refers to the understanding of the learning algorithm itself, such as whether it converges and why. Regarding the challenge in Sec. <ref>, the algorithmic transparency of CoD can be understood from an entropy-reduction perspective: with more inquiries made, the uncertainty of the diagnosis estimate (quantitatively measured by entropy) is reduced, see Property <ref>. Transparency with confidence-driven flow: In the chain, CoD introduces a disease confidence distribution C = {c_d | d ∈𝒟}, where decision-making is based on whether the highest confidence exceeds a given threshold τ. By using explicit confidence levels in CoD, one can easily observe that accuracy increases with additional inquiries thanks to the improved certainty (i.e., reduced entropy), see Sec. <ref>. It converges when accuracy saturates with enough inquiries. Post-hoc explanations <cit.> indicate the information and functions a model can offer to humans. The post-hoc explanations for CoD are illustrated in Property <ref>. Diagnosis explainability: CoD elucidates the diagnostic thinking process, providing physicians with a diagnostic pathway that supports their clinical decisions and ensures that the LLM's decisions adhere to reasonable analysis and ethical standards. §.§ The Diagnosis Chain Here, we introduce the response methods and the construction approach of CoD, as illustrated in Figure <ref>. All prompts for building CoD training data are detailed in Appendix <ref>. Step 1: Symptom Abstraction The first step summarizes the symptoms 𝒮 from the patient's question: 𝒮 = f_1(q_patient). This allows the model to focus on the refined symptoms and provides an understanding of the patient's query. For the training data, the initial patient question is generated from 𝒮_exp with the LLM. Step 2: Disease Recall & Knowledge Integration Next, CoD identifies the top-k potential diseases based on a disease retriever: 𝒟' = f_2(𝒟, 𝒮, k), where 𝒟' ⊆𝒟 and |𝒟'| = k. A smaller space 𝒟' is necessary for subsequent analysis and reasoning, since analyzing all diseases is impractical (considering |𝒟| = 9,604) and most irrelevant diseases can realistically be excluded. We use Dense Retrieval training methods <cit.> to train this retriever, with the following training objective (a minimal code sketch of this objective is given after Step 3): ℒ(𝒮_exp, 𝒮_imp, d_t) = -log [ e^sim(E_S(𝒮_exp∪𝒮_imp), E_D(d_t)) / ∑_d ∈𝒟 e^sim(E_S(𝒮_exp∪𝒮_imp), E_D(d)) ], where sim denotes the cosine similarity, and E_S and E_D are the symptom and disease encoders, respectively. The performance of the disease retriever is detailed in Appendix <ref>. Then, for each candidate disease d ∈𝒟', CoD retrieves the corresponding disease knowledge from the disease database and integrates it into the output to enhance understanding of the disease. Similarly, other tools such as RAG can also be utilized in this step to enhance reasoning. Step 3: Diagnostic Reasoning In step 3, CoD generates the diagnostic reasoning process T: T = f_3(𝒮, 𝒟'). Similar to CoT, T is a thought process that carefully analyzes whether each disease in 𝒟' corresponds to the patient's symptoms. To build the training data, we prompt an LLM to generate T.
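For concreteness, the retrieval objective above can be written as an in-batch softmax over cosine similarities. The following is a minimal PyTorch-style sketch, not the released implementation: it assumes one positive disease per training case, uses the other diseases in the batch as negatives instead of the full set 𝒟, and omits temperature scaling and the hard-negative mining typically used in dense retrieval training.

```python
import torch
import torch.nn.functional as F

def retriever_loss(symptom_emb: torch.Tensor, disease_emb: torch.Tensor) -> torch.Tensor:
    """In-batch softmax over cosine similarities.

    symptom_emb: (B, d) encodings E_S(S_exp ∪ S_imp) for B training cases.
    disease_emb: (B, d) encodings E_D(d_t) of the matching target diseases;
                 row i is the positive for case i, all other rows act as negatives.
    """
    s = F.normalize(symptom_emb, dim=-1)
    d = F.normalize(disease_emb, dim=-1)
    sim = s @ d.T                                   # (B, B) cosine similarities
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)             # -log softmax of the positive pair

# toy usage with random vectors standing in for encoder outputs
symptoms = torch.randn(32, 768, requires_grad=True)
diseases = torch.randn(32, 768, requires_grad=True)
retriever_loss(symptoms, diseases).backward()
```

In practice, E_S and E_D would be the trained sentence encoders, and the in-batch negatives could be replaced by the hard negatives produced by the dense-retrieval training procedure referenced above.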
Step 4: Confidence Assessment After generating T, CoD generates a confidence distribution: 𝒞 = f_4(𝒮, 𝒟', T), where 𝒞 satisfies ∑_d ∈𝒟' c_d = 1. This distribution indicates the model's tendency towards diagnosing a disease, mainly based on the analysis in T. According to f_3, 𝒞 can be considered a posterior probability distribution: 𝒞 = {p_θ(d|𝒮, 𝒟') | d ∈𝒟'}. Here, p_θ represents the confidence distribution generated by the LLM θ. For constructing training data, we validate 𝒞 against the target disease d_t to ensure that T and 𝒞 are reasonable. If max_d ∈𝒟' ∖{d_t} c_d ≥τ, the generated data is considered erroneous, i.e., the model assigns high confidence to an incorrect disease. If erroneous, we prompt the model to rethink and correct its reasoning until the distribution is verified. With 𝒞, CoD can make decisions based on the confidence in its diagnosis. Step 5: Decision Making In the last step, a confidence threshold τ is set to control the decision-making. The diagnostic task involves two decision types: 1) making a diagnosis A_diag(d), where d is the diagnosed disease, and 2) inquiring about a symptom A_inq(s), where s represents the symptom under inquiry. The next decision A_next of CoD is defined as: A_next = A_diag(d_max) if c_max > τ, and A_next = A_inq(s_t) if c_max ≤τ, where c_max = max_d ∈𝒟' c_d and d_max = argmax_d ∈𝒟' c_d. A_inq(s_t) signifies the operation of querying about the symptom s_t that CoD generates. Here, τ serves as a hyperparameter. A higher τ allows the model to perform more rigorous diagnoses (achieving higher accuracy a but requiring more rounds of questioning, i.e., a higher n). Conversely, a lower τ can reduce n but also lowers a. §.§ CoD as an Entropy-reduction Process Symptom inquiry is a key step in diagnosis, serving to gather additional patient information to clarify the diagnosis. This inquiry process can be viewed as a transition from diagnostic uncertainty to certainty. The uncertainty level can be captured by the entropy of the confidences: H(𝒞) = - ∑_d ∈𝒟' c_d log c_d. Symptom inquiry is a process of entropy reduction. Given a symptom s, its post-inquiry entropy is: H(𝒞|s) = - ∑_d ∈𝒟' p_θ(d|𝒮∪{s}, 𝒟') log p_θ(d|𝒮∪{s}, 𝒟'). For the diagnostic task, it is preferable to minimize n. Hence, the objective of symptom inquiry can be formalized as maximizing the increase in diagnostic certainty to expedite the diagnosis. Accordingly, CoD selects the symptom to inquire about by maximizing the entropy reduction: s_t = argmax_s ∈𝒮' ( H(𝒞) - H(𝒞|s) ), where 𝒮' represents the candidate symptoms for inquiry and s_t is the chosen symptom. 𝒮' = 𝒮_imp∪{s_gen}, where s_gen is the symptom generated by the LLM and 𝒮_imp comes from the training case data. Through entropy reduction, the CoD training data tunes the model to inquire about the symptoms most crucial for diagnosis, thereby enhancing its querying capability. (A short illustrative sketch of this decision rule and selection criterion is given at the end of the main text.) §.§ Synthesizing Patient Cases CoD requires patient cases to build training data. However, due to privacy concerns, the collection of such data is significantly restricted. To address this, we propose generating synthetic case data in reverse from online disease encyclopedias, which provide comprehensive and reliable disease information. As illustrated in Figure <ref>, the synthesis process is a pipeline consisting of two stages: Stage 1: Constructing Disease Database The first step involves the extraction of essential information from the disease encyclopedia data.
This process results in a knowledge base encompassing 9,604 diseases, each detailed with sections on "Overview," "Symptoms," and "Treatment". We use regular expression matching to identify and extract these key sections. Stage 2: Synthesizing Patient Cases In disease diagnosis <cit.>, a patient can be abstracted into a triplet (S_exp, S_imp, d_t). Using the LLM, we generate structured case data based on the disease knowledge from the database. For each disease, we synthesize five distinct cases to ensure diversity. The prompt used for generation is provided in the Appendix <ref>. In the end, we developed a database containing 9,604 diseases and then synthesized 48,020 unique cases. The used LLM is GPT-4-0125-preview. Medical experts reviewed the quality of the synthetic cases, finding potential errors in only 6 out of 100 cases, which confirms the reliability of the synthetic cases. For more details, see Appendix <ref>. Based on these synthetic cases, we constructed a training dataset for medical diagnosis, which consists of 48,020 instances with an average of 2.4 consultation rounds. This dataset is used to train an interpretable medical diagnosis model, DiagnosisGPT. § EXPERIMENTS §.§ Model Training & Setup Utilizing the created CoD data, we fully fine-tuned the Yi-34B-Base <cit.> to develop DiagnosisGPT. To equip it with chat capabilities, ShareGPT data [<https://huggingface.co/datasets/philschmid/sharegpt-raw>] is incorporated into the training data. Training parameters included a batch size of 64 and a learning rate of 2e-5. For the retrieval model, we trained on the all-mpnet-base-v2 <cit.> model using DRhard <cit.>, with a batch size of 256 and a learning rate of 2e-5. The training was conducted on a GPU server with 8 NVIDIA A100 GPU cards. §.§ Benchmarking Settings Traditional baselines (Non-LLM) Traditional supervised Automatic Diagnosis methods approach the diagnostic task as a decision-making task, where all symptoms and diseases are predefined. In traditional methods, we adhere to the original settings, which involve training on a training set of benchmarks. We compared four models: Basic DQN <cit.>, HRL <cit.>, Diaformer <cit.> and MTDiag <cit.>. LLM baselines Our comparison mainly focused on advanced LLMs including proprietary models like Gemini-Pro <cit.>, ERNIE Bot(UTF8gbsn 文心一言) <cit.>, Claude-3-Opus <cit.>, GPT-3.5 <cit.> (GPT-3.5-turbo-1106), and GPT-4 <cit.> (GPT-4-0125-preview), as well as open-source models including gemma-7b-it <cit.>, Mixtral-8x7B-Instruct-v0.1 <cit.>, and Yi-34B-Chat <cit.>. LLM Evaluation We simulate a patient using GPT-4, which presents both 𝒮_exp and 𝒮_imp. The simulation begins with 𝒮_exp (chief complaints). When the evaluated LLM inquires about symptoms, the simulator can only respond with "yes" or "no" to prevent information leakage. Details of the LLM evaluation can be found in Appendix <ref>. For the evaluated LLMs, we prompt them to perform an automated diagnosis task, which is detailed in Appendix <ref>. §.§ Benchmark Public benchmarks To evaluate diagnostic performance, we used two publicly available benchmarks: Muzhi <cit.> and Dxy <cit.>. Both are based on real doctor-patient consultations. However, their data scale and disease variety are limited, as shown in Table <ref>. DxBench To better assess diagnostic capabilities, we develop a larger dataset, DxBench. Using the MedDialog <cit.> dataset, which contains real doctor-patient dialogues, we filtered out 3,121 cases with clear dialogues and definitive diagnoses. 
Then GPT-4 is employed to extract 𝒮_exp and 𝒮_imp, and we manually refine this to 1,148 high-quality cases. Details are in Appendix <ref>. DxBench includes over 1,000 real cases, covering 461 disease types from 15 departments and 5,038 symptoms. Considering the large number of diseases in DxBench, each case is provided with three candidate diseases. §.§ Diagnosis Performance As shown in Table <ref>, we evaluated diagnostic performance using two public benchmarks: Muzhi <cit.> and Dxy <cit.>. The results indicate that the performance of LLMs approaches that of traditional IID methods while requiring fewer inquiries. Among the LLMs, DiagnosisGPT achieves the highest accuracy improvement from symptom inquiries, demonstrating superior inquiry capabilities. On the Muzhi dataset, DiagnosisGPT is the only model that improves accuracy through symptom inquiries. On the Dxy dataset, DiagnosisGPT required fewer inquiries than Claude-3-Opus but achieved a greater improvement. It can also be observed that Gemini-Pro and GPT-3.5-turbo-1106 tend not to ask questions. Overall, DiagnosisGPT (τ = 0.6) delivered the best results across the datasets, owing to its inquiry and decision-making capabilities. The results on DxBench are shown in Figure <ref>. DiagnosisGPT achieved the highest accuracy with the τ = 0.6 setting. At τ = 0.5, it outperformed Claude-3-Opus by 3.7% with the same number of inquiry rounds n. Since DiagnosisGPT supports diagnoses for 9,604 diseases, it can perform open-ended consultations without relying on candidate diseases. In this scenario, DiagnosisGPT's consultation process significantly improved diagnostic accuracy from 34.7% to 44.2%. Although the accuracy is only 44.2%, DiagnosisGPT demonstrates the potential of LLMs to identify the correct diagnosis from a large number of diseases. The right side of Figure <ref> shows the accuracy across different departments, indicating that DiagnosisGPT performs well across all departments. § MORE ANALYSIS §.§ Interpretability on Confidence Levels To assess the interpretability of the diagnostic confidence, we examined diagnostic accuracy at various confidence thresholds. The results, depicted in Figure <ref>, show that increasing the threshold indeed enhances accuracy. With τ = 0.55, the model achieves over 90% accuracy across the three datasets. This demonstrates that the model's confidence in disease prediction is dependable and aligns with the expected accuracy rates. However, higher thresholds reduce success rates, indicating that the model becomes more stringent about making a diagnosis. We further tested the performance of DiagnosisGPT on DxBench under various τ values. As shown in Table <ref>, as τ increases, the number of inquiry rounds n rises, and the accuracy a also improves. It can be seen that τ serves as an effective control for balancing the trade-off in diagnosis. Table <ref> illustrates the process of entropy reduction in DiagnosisGPT as the number of inquiries increases. §.§ Ablation Study Table <ref> presents the ablation study results for CoD. The performance of DiagnosisGPT_baseline indicates that directly training target-disease prediction on the CoD training data does not significantly enhance diagnosis. The results show that diagnostic reasoning benefits the model's performance, implying that providing diagnostic analysis is effective. Without diagnostic confidence, the model's performance drops significantly, suggesting that model decisions based on confidence are valuable.
Furthermore, knowledge integration does not appear to have a substantial effect, probably because the model grasps disease knowledge through the CoD training data. We leave this for future research. §.§ Case Study Figure <ref> presents a diagnostic case using DiagnosisGPT. When queried about diseases, DiagnosisGPT systematically outputs its diagnostic reasoning process. It first summarizes the user's symptom information, then identifies potential diseases, and begins the diagnostic analysis, ultimately providing a diagnostic confidence level. As shown in the first round of replies, the highest confidence level is 0.45, below the threshold, prompting the model to inquire about symptoms. When the patient responds to the symptom inquiry, the probability of the target disease increases significantly, leading DiagnosisGPT to confirm it and make a correct diagnosis. § CONCLUSION In this paper, we propose the Chain of Diagnosis (CoD) to enhance the interpretability of large language models (LLMs) for disease diagnosis. Using CoD, we developed DiagnosisGPT, an LLM that supports the diagnosis of 9,604 diseases. Distinct from other LLMs, DiagnosisGPT can provide diagnostic confidence and relies on its own disease database for diagnostic reasoning. Experiments show that the diagnostic capabilities of DiagnosisGPT surpass those of other LLMs. Furthermore, higher accuracy can be achieved by adjusting the diagnostic threshold values. This means that CoD can control the trade-off between effectiveness and efficiency in diagnosis. CoD offers a novel solution for medical diagnosis. We believe that the data, models, and methods from this work can advance the field of medical LLMs. § ACKNOWLEDGEMENT This work was supported by the Shenzhen Science and Technology Program (JCYJ20220818103001002), Shenzhen Doctoral Startup Funding (RCBS20221008093330065), Tianyuan Fund for Mathematics of the National Natural Science Foundation of China (NSFC) (12326608), Shenzhen Key Laboratory of Cross-Modal Cognitive Computing (grant number ZDSYS20230626091302006), and Shenzhen Stability Science Program 2023, Shenzhen Key Lab of Multi-Modal Cognitive Computing.
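As a compact illustration of Step 5 and the entropy-reduction criterion from the methodology, the following sketch implements the threshold-based decision rule and the selection of the next inquiry symptom. It is purely illustrative: the confidence distributions, which DiagnosisGPT produces itself, are passed in here as plain dictionaries; the numbers are loosely based on the first case study in the appendix, while the second candidate symptom and its distribution are invented for illustration.

```python
import math
from typing import Dict, Tuple

def entropy(conf: Dict[str, float]) -> float:
    """H(C) = -sum_d c_d log c_d over the candidate diseases D'."""
    return -sum(c * math.log(c) for c in conf.values() if c > 0)

def next_action(conf: Dict[str, float],
                post_inquiry_conf: Dict[str, Dict[str, float]],
                tau: float = 0.6) -> Tuple[str, str]:
    """Step 5 decision rule with entropy-based symptom selection.

    conf:              current confidence distribution C over D'.
    post_inquiry_conf: hypothetical distributions C|s for each candidate symptom s.
    tau:               diagnosis threshold.
    """
    d_max, c_max = max(conf.items(), key=lambda kv: kv[1])
    if c_max > tau:
        return "diagnose", d_max
    # otherwise inquire about the symptom with the largest entropy reduction
    h0 = entropy(conf)
    s_t = max(post_inquiry_conf, key=lambda s: h0 - entropy(post_inquiry_conf[s]))
    return "inquire", s_t

# toy example: confidences over three candidate diseases
conf = {"intestinal tumor": 0.40, "omental cyst": 0.15, "retroperitoneal tumor": 0.45}
candidates = {"bloody stool": {"intestinal tumor": 0.60, "omental cyst": 0.15,
                               "retroperitoneal tumor": 0.25},
              "headache":     {"intestinal tumor": 0.40, "omental cyst": 0.20,
                               "retroperitoneal tumor": 0.40}}
print(next_action(conf, candidates, tau=0.6))   # -> ('inquire', 'bloody stool')
```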
§ RELATED WORK LLMs for Medical Scenarios The success of models like ChatGPT <cit.> has inspired research into their application in healthcare, resulting in medical-specific LLMs such as DoctorGLM <cit.>, MedicalGPT <cit.>, DotaGPT <cit.>, HuatuoGPT <cit.>, and Apollo <cit.>. Despite their focus on medical knowledge, these models have limited capabilities in automating medical diagnoses. Automated Diagnosis Task Medical diagnosis, a key AI application in healthcare <cit.>, has predominantly utilized reinforcement learning (RL). Pioneering works include <cit.>, who introduced neural symptom checking using RL. Subsequent advancements include hierarchical RL for diagnostic and contextual decisions <cit.>, Deep Q-networks for symptom collection from patient interactions <cit.>, and the incorporation of medical knowledge into RL policy learning <cit.>. Two-level hierarchical RL <cit.>, policy gradient frameworks with Generative Adversarial Networks <cit.>, and the customization of RL models using multi-level rewards and dialogue data <cit.> have further enhanced diagnostic accuracy. <cit.> and <cit.> conceptualize automatic diagnosis as a sequence generation task. However, these models are limited by predefined symptoms and diseases, and cannot support open-ended consultations. Reasoning of LLMs LLMs show promise in complex tasks such as mathematical reasoning <cit.>. To harness their reasoning abilities, CoT <cit.> was proposed to add intermediate reasoning steps, and Tree-of-Thought (ToT) <cit.> uses DFS/BFS to explore enhanced reasoning paths. Graph of Thoughts (GoT) <cit.> was introduced for intricate problems. ReAct <cit.> combines reasoning with actions. Uncertainty of Thoughts (UoT) <cit.> improves decision-making by simulating multiple requests for information gain. § DXBENCH DISTRIBUTION The data distribution of the DxBench dataset is illustrated in Figure <ref>. We categorize the data distribution according to the medical departments responsible for diagnosing the diseases. The data shows a relatively balanced distribution across different departments.
Notably, the Dermatovenereology department has the highest number of entries with 121 cases, while the Infectious Diseases and Immunology department has the fewest, with 27 cases. § THE PROMPT FOR LLM DIAGNOSIS The prompt for LLM diagnosis is shown in Table <ref>. We instruct the LLMs to determine whether a diagnosis can be made. If a diagnosis is possible, the LLMs output the diagnosed disease. Otherwise, the LLMs query the user with questions regarding a specific symptom. § PATIENT SIMULATOR FOR EVALUATION To evaluate the automatic diagnostic capabilities of LLMs, we instruct GPT-4 to play the role of a patient. Initially, we provide explicit symptoms 𝒮_exp as input for the model to diagnose. If the LLMs ask questions, the patient GPT will respond using a simulated patient prompt, as shown in Figure <ref>. § PROMPT OF DATA SYNTHESIS We constructed a disease database encompassing 9,604 diseases. Each disease entry includes four fields: "disease name", "overview", "symptoms", and "treatment". For each disease, we used the prompt shown in Figure <ref> to generate five patient cases with GPT-4, ensuring that each case study exhibits distinct typical characteristics. § PROMPT OF COD To generate CoD training data, we prompt GPT-4 to construct CoD dialogue data based on patient case data. This involves the following 8 prompts: Prompt 1: Patient Self-report Prompt (Role: Patient) As shown in Figure <ref>, the patient self-report prompt is used to generate the user's initial question q_1 based on the patient's explicit symptoms, primarily expressing the patient's chief complaint. Prompt 2: Reasoning Prompt (Role: Diagnosis) When provided with the known symptoms 𝒮 of a patient and the candidate diseases 𝒟', the reasoning prompt, as illustrated in Figure <ref>, is utilized to generate the reasoning process T and the confidence distribution 𝒞. Prompt 3: Rethinking Prompt (Role: Diagnosis) If the generated 𝒞 fails the validation of Step 4, i.e., max_d ∈𝒟' ∖{d_t} c_d ≥τ, the rethinking prompt, as shown in Figure <ref>, is used to have GPT-4 regenerate a valid diagnosis T and 𝒞. Prompt 4: Doctor Diagnosis Prompt (Role: Doctor) If max_d ∈𝒟' c_d > τ, we prompt GPT-4 to generate a response regarding the diagnostic result. The prompt used is shown in Figure <ref>. The disease database information is provided to generate more reliable suggestions. Once the diagnostic response is generated, the data generation process concludes. Prompt 5: Symptom Generation Prompt (Role: Doctor) If max_d ∈𝒟' c_d ≤τ, we have the LLM generate the symptom s_gen it wants to inquire about, using the prompt shown in Figure <ref>. Then, we select the inquired symptom s_t from 𝒮_imp∪{s_gen} based on H(𝒞|s). Prompt 6: Doctor Inquiry Prompt (Role: Doctor) After confirming the symptom s_t, the Doctor Inquiry Prompt, shown in Figure <ref>, generates questions regarding the symptom. Prompt 7: Symptom Assessment Prompt (Role: Patient) As shown in Figure <ref>, the symptom assessment prompt is used to determine whether the patient exhibits the symptoms inquired about by the doctor. Prompt 8: Patient Response Prompt (Role: Patient) The Patient Response Prompt, as shown in Figure <ref>, is used to generate verbal responses from patients regarding the symptom of inquiry. Then, s_t is added to the doctor's known symptoms 𝒮, initiating the generation of the next doctor response. § BUILDING OF DXBENCH We used MedDialog <cit.>, a publicly available dataset with real doctor-patient dialogues from various departments, to create a realistic evaluation dataset.
Despite its chaotic nature, we filtered 3,121 cases with clear dialogue and definite disease diagnoses. Then, using prompts designed to extract structured case information, as shown in Figure <ref>, we extracted structured case information from these cases. We further filtered out cases where the diseases were not included in the database or where there was insufficient symptom information, resulting in 1,429 cases. Finally, we manually filtered 1,148 high-quality cases, verifying the accuracy and completeness of symptom extraction, to construct DxBench. Additionally, for each case, we identified the two most relevant diseases using the disease retriever of DiagnosisGPT to form three candidate diseases. The construction process is detailed in Appendix <ref>. § CASE STUDY Below, two detailed diagnostic cases will be provided to better understand the response of our model. User I recently found a lump in my abdomen, and the pain is severe. Can you help me? DiagnosisGPT Analyzing patient symptoms: “abdominal lump", “severe pain" Based on the information provided, the likely diagnoses include: - Greater omentum cyst, common symptoms include an abdominal mass or progressive abdominal enlargement, a feeling of heaviness in the abdomen when lying on the back, and severe abdominal pain may occur when there is concurrent intestinal torsion or obstruction. - Intestinal tumor, common symptoms include an abdominal lump, severe pain, anemia, rectal bleeding, frequent and malformed stools, and fatigue with weight loss. - Recurrent retroperitoneal tumor, common symptoms include an abdominal lump, abdominal distension, abdominal pain, dull pain, and severe pain. Diagnostic reasoning: The patient presents symptoms of abdominal mass, severe pain, and rectal bleeding. These symptoms may be associated with various intra-abdominal conditions. The abdominal mass and severe pain may be direct manifestations of some abdominal tumor-related disease, while rectal bleeding could indicate involvement of the in- testines. When considering candidate diseases, the likelihood of colorectal tumors is high as they can account for the occurrence of all these symptoms. Mesenteric cysts typically present as abdominal masses but may not necessarily cause significant pain or rectal bleeding if they are small or located in specific positions. Recurrent retroperitoneal tumors, depending on their origin and type, could also result in sim- ilar symptoms, especially if they affect the intestines or other abdominal structures. Considering all factors, colorectal tumors are more closely associated with symptoms such as rectal bleeding due to their direct involvement with the intestines. Recur- rent retroperitoneal tumors are less likely than colorectal tumors but more likely than mesenteric cysts, as the latter typically show a weaker association with pain and rectal bleeding. Diagnostic confidence: - “Greater Omental cyst": 0.15, - “Intestinal tumor": 0.4, - “Recurrent retroperitoneal tumor": 0.45 Insufficient information for diagnosis. Additional symptom query: Have you noticed any abnormalities or blood streaks in your stool color recently? User I’ve noticed some bloody stools recently, and the color doesn’t seem right. DiagnosisGPT Analyzing patient symptoms: “abdominal lump", “severe pain", “bloody stool" Based on the information provided, the likely diagnoses include: - Intestinal tumor, ... - Greater omentum cyst, ... - Recurrent retroperitoneal tumor, ... 
(The disease retriever identified the same three diseases as last time, but their priority has changed.) Diagnostic reasoning: The patient presents symptoms of abdominal mass, severe pain, and rectal bleeding. These symptoms may be associated with various intra-abdominal conditions. The abdominal mass and severe pain may be direct manifestations of some abdominal tumor-related disease, while rectal bleeding could indicate involvement of the in- testines. When considering candidate diseases, the likelihood of colorectal tumors is high as they can account for the occurrence of all these symptoms. Mesenteric cysts typically present as abdominal masses but may not necessarily cause significant pain or rectal bleeding if they are small or located in specific positions. Recurrent retroperitoneal tumors, depending on their origin and type, could also result in sim- ilar symptoms, especially if they affect the intestines or other abdominal structures. Considering all factors, colorectal tumors are more closely associated with symptoms such as rectal bleeding due to their direct involvement with the intestines. Recur- rent retroperitoneal tumors are less likely than colorectal tumors but more likely than mesenteric cysts, as the latter typically show a weaker association with pain and rectal bleeding. Diagnostic confidence: - “Intestinal tumor": 0.6, - “Greater Omental cyst": 0.15, - “Recurrent retroperitoneal tumor": 0.25 Diagnosis Based on the symptoms and rectal bleeding you described, there is a possibility of a Intestinal tumor. I recommend you to visit a hospital for further examinations, including endoscopy and imaging tests, to confirm the diagnosis. Early-stage Intestinal tumors can be treated with surgical removal, while malignant tumors may require additional treatments such as radiation therapy or chemotherapy. Wishing you a speedy recovery. Case 1: Interaction between an intestinal tumor patient and the DiagnosisGPT model. User I recently discovered that I have developed an inflammatory lump, and I have also been running a fever, feeling generally unwell. DiagnosisGPT Analyzing patient symptoms: “inflammatory mass", “fever" Based on the information provided, the likely diagnoses include: - Granulomatous lymphadenitis, common symptoms include inflammatory mass, lymph node enlargement and pain, fever. - Geotrichosis, common symptoms include fever, chronic cough, sputum production and hemoptysis, stomach pain, diarrhea, cheesy white pseudomembrane on the oral mucosa, tongue, and palate, erythematous base with a burning sensation, involvement of the pharynx, larynx, and their junctions, skin erythema and exudation, cutaneous candidiasis in skin folds, tumor-like nodules, granulomas, itching and pain, edema, erythema, epithelial desquamation, and crust formation. - Suppurative mastitis, common symptoms include fever, localized breast pain, lump, breast fistula with abscess formation that may rupture internally or externally. Diagnostic reasoning: The primary symptoms of the patient are inflammatory masses and fever. Without more detailed symptom descriptions, lymphadenitis granulomatous, geotrichosis, and suppurative mastitis can all present with these symptoms. Lymphadenitis granulomatous often causes lymph node enlargement and may be accompanied by inflammatory masses. Geotrichosis, an infection caused by Geotrichum, can also cause systemic symptoms such as fever and may lead to granuloma formation in immunocompromised individuals. 
Suppurative mastitis primarily occurs in breastfeeding women, with symptoms including pain in the breast area, masses, and fever. Given the presence of inflammatory masses, if the patient is a breastfeeding woman, the likelihood of suppurative mastitis increases. Due to the lack of explicit latent symptoms, negating symptoms information, and detailed background information of the patient (such as gender, age, occupation, etc.), the specific diagnosis requires further clinical examination and auxiliary examination data. Diagnostic confidence: - Granulomatous lymphadenitis: 0.3, - Geotrichosis: 0.3, - Suppurative mastitis: 0.4 Insufficient information for diagnosis. Additional symptom query: Do you have a headache? User I have a bit of a headache, but it's not very serious. DiagnosisGPT Analyzing patient symptoms: “inflammatory mass", “fever", “headache" Based on the information provided, the likely diagnoses include: - Granulomatous lymphadenitis, common symptoms include inflammatory mass, lymph node enlargement and pain, fever. - Occipital indurative folliculitis, common symptoms include fever, hard rashes, swollen hair follicles, pain, and poor sleep. - Hair cysts, common symptoms include cystic masses on the scalp, often solitary, more common in women. Diagnostic reasoning: The patient's primary symptoms are inflammatory masses accompanied by fever, which typically indicate an inflammatory response in a certain area of the body. Headaches may be a systemic symptom caused by the body's overall inflammatory response. Common symptoms of lymphadenitis granulomatosa include fever and localized lymph node enlargement, and the local inflammatory masses may be related to the swelling of the lymph nodes. Suboccipital kerion celsi is characterized by inflammation and induration of hair follicles in a specific area of the head and typically does not cause systemic symptoms such as fever. Pilar cysts are cystic formations within hair follicles, usually accompanied by localized inflammation, but are unlikely to cause fever and systemic symptoms. Considering these candidate conditions and the patient's symptoms, a preliminary judgment can be made accordingly. Diagnostic confidence: - Granulomatous lymphadenitis: 0.6, - Occipital indurative folliculitis: 0.1, - Hair cysts: 0.3 Diagnosis It seems you may have granulomatous inflammation. I recommend that you see a doctor as soon as possible, as you may need some blood tests and imaging exams to confirm the diagnosis. The doctor will develop a detailed treatment plan based on the examination results. Meanwhile, make sure to rest, drink plenty of water, and avoid overexertion. Case 2: Interaction between an granulomatous inflammation patient and the DiagnosisGPT model. § PERFORMANCE OF DISEASE RETRIEVER We allocated 10% of the data as a validation set to evaluate retrieval performance. Table <ref> shows the retrieval performance of diseases under Diagnosis on the validation set. It can be seen that the top 3 diseases achieve a recall rate of 73%, indicating that most diseases can be effectively excluded. § STANDARD ERRORS OF RESULT We report the standard errors of the results from our model in Table <ref>. The standard errors were obtained by conducting five random experiments. § REVIEW OF SYNTHETIC CASES BY MEDICAL EXPERTS To verify the quality of the synthetic cases, we had two licensed physicians review the data. Each physician was given 50 randomly sampled synthetic cases and asked to assess whether any cases posed a risk of errors. 
Based on their feedback, they identified that out of the 100 cases, only 6 might be incorrect, as the symptom information was less likely to be associated with the respective diseases. This suggests that synthesizing cases from a medical encyclopedia is a fairly reliable method. § LIMITATIONS Despite its promising performance in diagnostic tasks, DiagnosisGPT has several limitations that must be considered: * Limited Disease Coverage: DiagnosisGPT is trained to identify only a specific set of diseases. This constraint means that the model's diagnostic capabilities are confined to this predefined list, and it may not recognize or provide accurate diagnoses for conditions that fall outside its training parameters. Consequently, this limitation could hinder the model's applicability in a real-world medical setting where a wide range of diseases, including rare and emerging conditions, need to be diagnosed. * Synthetic Data Annotation: The dataset used to train DiagnosisGPT relies on annotations created by Large Language Models (LLMs). While utilizing LLMs for annotation is a cost-effective approach, it raises concerns about the quality and reliability of the data. LLMs can sometimes generate plausible but incorrect information—often referred to as "hallucinations"—which can introduce biases or errors into the training data. This could potentially lead to the model making incorrect or misleading diagnoses. * Reliance on Synthetic Cases: DiagnosisGPT's training is based on synthetic medical cases, which are constructed to avoid the privacy concerns associated with using real patient data. However, these synthetic cases may not always accurately reflect the complexity and variability of actual patient presentations. The nuances of real-life medical conditions, including co-morbidities and patient-specific factors, are difficult to replicate in artificial scenarios. This gap between the training data and real-world contexts may impact the model's diagnostic accuracy and its generalizability to real patient populations. § IMPACT §.§ Positive Impact * Promotes medical AI development: DiagnosisGPT promotes the development of medical AI, as diagnostics are crucial in healthcare AI. Accurate diagnostic capabilities enhance patient outcomes and streamline clinical processes. * Improves interpretability in healthcare: DiagnosisGPT improves the interpretability of medical AI by utilizing a disease retriever function and knowledge base integration. This increased interpretability builds trust in AI systems among healthcare providers and patients. By making the diagnostic process more transparent, DiagnosisGPT helps users understand the reasoning behind AI-generated suggestions, fostering greater confidence in AI-assisted medical practices. * Addresses privacy concerns in medical cases: DiagnosisGPT offers a solution to privacy issues prevalent in medical case handling by constructing cases using a knowledge base, thereby eliminating patient privacy concerns. This approach also alleviates the problem of data scarcity. * Assists healthcare professionals: DiagnosisGPT assists healthcare professionals by rapidly collecting patient symptom information and providing preliminary diagnoses. This capability enables medical practitioners to save time and focus on more complex aspects of patient care. §.§ Potential Negative Impact The development of DiagnosisGPT raises several potential risks. 
* Risk of Misdiagnosis: Despite the promising results shown by DiagnosisGPT in diagnosis, it is crucial to underscore that at this stage, it should not be used to provide any medical advice. There is a possibility that it could provide incorrect interpretations or inaccurate diagnoses. Considering the nature of this field, our model and data will only be available for download by researchers. Our model will not be available for public use. * Data Privacy and Ethics: The diagnostic field may involve ethical issues related to patient privacy. To address this, we use synthetic data. The training data for CoD is entirely generated by GPT-4, ensuring that there are no privacy or ethical concerns. As for DxBench, we constructed it using open-source licensed datasets, ensuring compliance with ethical standards.
http://arxiv.org/abs/2407.13420v1
20240718114258
Exploring End-to-end Differentiable Neural Charged Particle Tracking -- A Loss Landscape Perspective
[ "Tobias Kortus", "Ralf Keidel", "Nicolas R. Gauger" ]
physics.comp-ph
[ "physics.comp-ph", "cs.LG" ]
Exploring End-to-end Differentiable Neural Charged Particle Tracking -- A Loss Landscape Perspective ======================================================================= § ABSTRACT Measurement and analysis of highly energetic particles for scientific, medical, or industrial applications is a complex procedure, requiring the design of sophisticated detector and data processing systems. The development of adaptive and differentiable software pipelines using a combination of conventional and machine learning algorithms is therefore becoming ever more important for optimizing and operating such systems efficiently while maintaining end-to-end (E2E) differentiability. For the application of charged particle tracking, we propose an E2E differentiable, decision-focused learning scheme using graph neural networks with combinatorial components solving a linear assignment problem for each detector layer. We demonstrate empirically that including differentiable variations of discrete assignment operations allows for efficient network optimization, working better than or on par with approaches that lack E2E differentiability. In additional studies, we dive deeper into the optimization process and provide further insights from a loss landscape perspective. We demonstrate that while both methods converge into similarly performing, globally well-connected regions, they suffer from substantial predictive instability across initializations and optimization methods, which can have unpredictable consequences on the performance of downstream tasks such as image reconstruction. We also point out a dependency between the interpolation factor of the gradient estimator and the prediction stability of the model, suggesting the choice of sufficiently small values. Given the strong global connectivity of learned solutions and the excellent training performance, we argue that E2E differentiability provides, besides the general availability of gradient information, an important tool for robust particle tracking to mitigate prediction instabilities by favoring solutions that perform well on downstream tasks. § INTRODUCTION Charged particle tracking <cit.> is a central element in analyzing readouts generated by ionizing energy losses in particle detectors, with the goal of reconstructing distinct sets of coherent particle tracks from a discrete particle cloud measured over subsequent detector layers. Current state-of-the-art approaches in particle tracking heavily utilize geometric deep learning in a two-phase predict-then-optimize approach <cit.>. In an initial prediction step, relations between detector readouts described as a graph structure are modelled and quantified. The predicted relationships are then used in a separate and disconnected step to parameterize a discrete optimization problem generating disconnected track candidates. Separating the initial scoring from the final assignment operations, however, bears the risk of learning suboptimal solutions that only minimize the intermediate optimization quantity of the scoring task (prediction loss) instead of the final task loss <cit.>. Furthermore, it prevents the integration of particle tracking into any combined gradient-based optimization. This could include, for example, the joint optimization of the reconstruction loss and auxiliary losses generated for a downstream task, as well as the integration of particle tracking into optimization pipelines for experiment design and optimization <cit.>.
However, providing end-to-end differentiable solutions is not trivial, as the final track assignment involves discrete, piece-wise constant assignment operations where the gradient is undefined. The decision-focused learning or predict-and-optimize paradigm <cit.> provides a general framework where the prediction of intermediate scores and the optimization for structured outputs are directly integrated in the training procedure by embedding the combinatorial solver as a component of the network architecture, allowing the main objective to be optimized in an E2E manner. This framework has already proven highly effective in various fields of application, such as natural language processing, computer vision, or planning and scheduling tasks <cit.>. In this work, we propose and explore a predict-and-optimize framework for charged particle tracking, providing to our knowledge the first fully differentiable charged particle tracking pipeline for high-energy physics[The source code and data utilized in this study are detailed in Appendix <ref>, including all relevant source code, datasets, and results necessary to reproduce the findings presented in this paper.]. Our main contributions are summarized as follows: * We propose an edge-classification architecture for particle tracking that operates on a filtered line graph representation of an original hit graph, providing a strong inductive bias for the dominant effect of particle scattering. * We then formalize the generation of track candidates as a layer-wise linear sum assignment problem, for which we provide gradient information based on previous work by <cit.>. * We demonstrate the competitive performance of our proposed network architecture for both traditional and E2E optimization, comparing it with existing tracking algorithms for the Bergen pCT detector prototype. * Analyzing the local and global structure of the loss landscape, we reveal strong connectivity between the trained representations for both E2E training and optimization of a prediction loss. * Further, we find significant predictive instabilities between the different training paradigms as well as across random initializations, arguing for the importance of E2E differentiability for robust particle tracking, as it provides the necessary tools to constrain the solution space of tracking models to a subset optimizing an additional metric of the downstream tasks. * Finally, we point out a coupling between the interpolation parameter defined by the blackbox gradients and the prediction stability, suggesting the choice of sufficiently small values for optimization. § RELATED WORK Charged particle tracking is a well-studied problem, with the majority of advancements originating from developments in high-energy physics research at particle facilities (e.g., CERN). For the last decades, tracking algorithms have been dominated by conventional model-based pattern recognition and optimization algorithms such as the Kalman filter <cit.>, the Hough transform <cit.>, or cellular automata <cit.>. Here, we specifically want to point out work by <cit.> related to ours, which uses combinatorial optimization on handcrafted features for particle tracking, providing solutions that satisfy unique combinations of vertex detector observations using a five-dimensional assignment model.
With the ever-accelerating advancement and availability of deep learning algorithms, coupled with the increasing complexity and density of collision events, a significant demand has been placed on novel reconstruction algorithms, resulting in faster and more precise models. While initial studies mainly relied on basic regression- and classification models based on convolutional neural networks (CNN) or recurrent neural networks (RNN) <cit.>, remarkable progress has been made recently by computationally efficient and well-performing network architectures based on geometric deep learning (GDL) <cit.> leveraging sparse representations of particle events in combination with graph neural networks (GNN). Here, the vast majority of current designs can be categorized into one of two main schemes. Edge classification tracking <cit.> predicts for each graph edge connecting two particle hits in subsequent layers a continuous output score which is then used to construct track candidates. In object condensation tracking <cit.>, GNN's are trained with a multi-objective object condensation loss. In contrast to scoring edges in the graph, object condensation embeds particle hits in an N-dimensional cluster space, where hits belonging to the same track are attracted to a close proximity while hits from different tracks are repelled from each other. While both approaches provide great tracking performances, they require a separate optimization step (e.g., clustering) to generate final track candidates in a disconnected step, thus lacking an end-to-end differentiable architecture, preventing gradient flow throughout the whole reconstruction process. Recent progress has been made using reinforcement learning <cit.> for charged particle tracking, allowing to differentiate through the discrete assignment operation using variants of the policy gradient or score function estimator approach maximizing an objective function J(θ), defined as the expected future reward obtained by the agent's policy. § THEORY AND BACKGROUND In this work, we focus on reconstructing particle tracks generated in the pixelated proton computed tomography (pCT) detector prototype, proposed by the Bergen pCT Collaboration <cit.>. This detector is designed for generating high-resolution computer tomographic images using accelerated protons and is composed using a total of 4644 high-resolution ALPIDE CMOS Pixel sensors <cit.> developed for the upgrade of the Inner Tracking System (ITS) of the ALICE experiment at CERN. To capture the residual energy and path of the particle, the detector is arranged in a multi-layer structure, aligned in two tracking and 41 calorimeter layers. Each of the tracking layers, used for capturing the incoming particle path with as little interaction as possible, is separated from each subsequent layer by a 55.18 mm and 39.58 mm air gap respectively. The following calorimeter layers are separated from each other with a 3.5 mm aluminum slate. A detailed description of the structure and the material budget of the detector prototype is depicted in Figure <ref>. In this detector setup, multiple simultaneous particle trajectories, generated by a 230 MeV scanning pencil beam, are captured distal to a patient as discrete readouts of pixel clusters in the sensitive layers which have to be reconstructed into particle tracks under the influence of physical interactions with the detector material. 
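To make the stated layout concrete, the following sketch collects the quoted layer counts and separations into a small configuration object and derives nominal layer positions. The numbers are those given in the text; treating each sensitive layer as infinitely thin and taking the quoted air gaps and the 3.5 mm absorber as the full layer-to-layer separation are simplifications made here purely for illustration, not specifications of the actual prototype.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class DetectorLayout:
    n_tracking: int = 2          # tracking layers
    n_calorimeter: int = 41      # calorimeter layers
    gap_t1_t2_mm: float = 55.18  # air gap between the two tracking layers
    gap_t2_c1_mm: float = 39.58  # air gap before the first calorimeter layer
    absorber_mm: float = 3.5     # aluminium absorber between calorimeter layers

    def nominal_layer_z_mm(self) -> List[float]:
        """Nominal z position of each sensitive layer (simplified, see text)."""
        z = [0.0, self.gap_t1_t2_mm, self.gap_t1_t2_mm + self.gap_t2_c1_mm]
        for _ in range(self.n_calorimeter - 1):
            z.append(z[-1] + self.absorber_mm)
        return z

layout = DetectorLayout()
z = layout.nominal_layer_z_mm()
assert len(z) == layout.n_tracking + layout.n_calorimeter   # 43 sensitive layers
```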
§.§ Particle Interactions in Matter The path of protons at energies relevant for pCT is mainly influenced by interactions of the particles with both electrons and atomic nuclei, changing the path of the accelerated particle <cit.> and posing additional difficulties in recovering the traversal path of the particle throughout a multi-layer particle detector. In the following, we briefly describe all three relevant effects in increasing order of their influence on the particle trajectories: * Ionizing energy loss: When interacting with negatively charged electrons in the atom's outer shell, protons lose a fraction of their initial energy due to attractive forces between the particles <cit.>. Due to the relatively small mass of the electron compared to the high momentum of the accelerated proton, the proton remains on its original path. * Elastic nuclear interactions: However, when repeatedly interacting elastically with the heavier atomic nuclei, the proton is deflected from its original path due to the repulsive forces between nucleus and particle. Integrated over multiple interactions, this effect, also referred to as multiple Coulomb scattering (MCS), follows an approximately Gaussian shape whose width depends on the particle's energy as well as the amount and density of material traversed <cit.>. * Inelastic nuclear interactions: Finally, on rare occasions, particles suffer from inelastic interactions with an atomic nucleus. In this case, the proton undergoes a destructive process, in which the primary can knock out one or multiple secondary particles of various types from the target nucleus. Due to the stochastic nature of the event, the outgoing paths of the particles are highly complex and involve significantly larger angles compared to MCS <cit.>. Inelastic interactions are therefore extremely difficult to reconstruct and, due to their stochastic behavior, retain no meaningful information for image reconstruction. §.§ Loss Landscape Analysis Optimization of neural networks in high-dimensional parametric spaces is notoriously difficult to visualize and understand. While theoretical analysis is complex and usually requires multiple restrictive assumptions, recent work has addressed this problem by providing empirical tools for loss landscape analysis that yield compressed representations for characterizing and comparing different architectures or optimization schemes. In the following, we provide a brief introduction to the subset of methods used for the analyses in later sections. Here, we closely follow the methodology and taxonomy provided by <cit.>, which helps us characterize and assess the differences and qualities of traditional and end-to-end differentiable particle tracking with graph neural networks by comparing the general form of the loss landscape as well as its global connectivity. Loss surfaces: To obtain a general understanding of the smoothness and shape of the loss landscape, <cit.> proposed a method to visualize two-dimensional loss surfaces along random slices of the loss landscape centered on the estimated parameters θ^* according to f(α, β) = ℒ(θ^* + αν + βη). Here, ν and η are random vectors spanning a 2D slice through the high-dimensional loss landscape. While this technique is not fully descriptive and offers only a partial view of the optimization landscape, <cit.> demonstrates that the qualitative characteristics and behavior of the loss landscape are consistent across different randomly selected directions.
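A minimal PyTorch sketch of such a scan is given below for illustration; model, loss_fn, and data_loader are placeholders, and the directions ν and η are drawn at random here, whereas the concrete choice of directions used in our analysis is described next.

```python
import torch

@torch.no_grad()
def loss_surface(model, loss_fn, data_loader, alphas, betas):
    """Evaluate f(a, b) = L(theta* + a*nu + b*eta) on a 2-D grid of (a, b)."""
    theta_star = [p.detach().clone() for p in model.parameters()]
    nu  = [torch.randn_like(p) for p in theta_star]   # random direction nu
    eta = [torch.randn_like(p) for p in theta_star]   # random direction eta

    surface = torch.zeros(len(alphas), len(betas))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            for p, p0, d1, d2 in zip(model.parameters(), theta_star, nu, eta):
                p.copy_(p0 + a * d1 + b * d2)         # theta* + a*nu + b*eta
            losses = [loss_fn(model(x), y).item() for x, y in data_loader]
            surface[i, j] = sum(losses) / len(losses)

    for p, p0 in zip(model.parameters(), theta_star):  # restore trained weights
        p.copy_(p0)
    return surface
```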
In our analysis, we use the first two eigenvectors with the highest eigenvalues of the Hessian matrix <cit.> of the trained network parameters θ^*, providing the loss surface along the directions of steepest curvature. Mode connectivity: <cit.> and <cit.> demonstrated in independent work the existence of non-linear low-energy connecting curves between neural network parameters. While this approach was initially proposed for generating ensembles, <cit.> and <cit.> demonstrated its usefulness for comparing different initialization and optimization strategies as well as for characterizing global properties of the loss landscape. For finding connecting curves of the form ψ_θ(t): [0, 1] →ℝ^d, <cit.> defines an optimization scheme using two modes ŵ_a, ŵ_b (weights after training), minimizing the integral of the loss along the parameterized curve, approximated by the expectation over points sampled from a uniform distribution: ℒ(θ) = ∫_0^1 ℒ(ψ_θ(t)) dt = 𝔼_t ∼ U(0,1)[ℒ(ψ_θ(t))]. For quantifying the connectivity of minima generated using two-step and E2E optimization, we follow the recommended parametrization of the learnable curve used in <cit.>, defined by a Bézier curve with three anchor points (ŵ_a, ŵ_b, and a learnable anchor θ), with ψ_θ(t) = (1-t)^2 ŵ_a + 2t(1-t)θ + t^2 ŵ_b. We additionally evaluate the linear connectivity both as a baseline and as a measure of mechanistic similarity <cit.>. Representational and functional similarities: Analyzing network similarities in both representations and outputs is an effective tool for comparing network configurations, providing an estimate of proximity in the loss landscape, especially for globally well-connected minima <cit.>. Achieving high network similarity is especially desirable for our use cases, as differences in reconstructed tracks might influence the results of downstream tasks, such as image reconstruction for pCT or statistical analysis for HEP experiments. To gain insight into the behavior of the trained networks, we thus quantify both the similarity of learned representations using CKA similarities <cit.> and the prediction instability using the min-max normalized disagreement <cit.>. * Linear-CKA Linear centered kernel alignment (CKA) is widely used as a correlation-like similarity metric <cit.> to compare learned representations of neural network layers of the same or different models. CKA compares representations via the Hilbert-Schmidt Independence Criterion (HSIC) according to CKA(K, L) = HSIC(K, L)/√(HSIC(L, L) HSIC(K, K)), where K = XX^T and L = YY^T are the Gram matrices calculated for the representations of two layers, with X ∈ℝ^n× d_1 and Y ∈ℝ^n× d_2. To cope with the large number of nodes and edges processed, we use the batched Linear-CKA leveraging an unbiased estimator of the HSIC over a set of minibatches <cit.>. * Prediction instability Prediction instability or churn captures the average ratio of disagreement between the predictions of different models f_1 and f_2 <cit.>. For a general multi-class classifier, the disagreement is defined as d(f_1, f_2) = 𝔼_x, f_1, f_2 1{argmax f_1(x) ≠argmax f_2(x)}. This concept naturally translates to binary classifiers as used in this paper, where the argmax is replaced by a threshold function. This metric, however, is difficult to interpret since its value range is bounded by | q_Err(f_1) - q_Err(f_2) |≤ d(f_1, f_2) ≤min( q_Err(f_1) + q_Err(f_2), 1), where q_Err is the error rate of the model f_1 or f_2, respectively <cit.>.
We thus use the min-max normalized disagreement <cit.>, defined as d_norm(f_1, f_2) = [d(f_1, f_2) - min d(f_1, f_2)] / [max d(f_1, f_2) - min d(f_1, f_2)], with min d(f_1, f_2) = | q_Err(f_1) - q_Err(f_2) | and max d(f_1, f_2) = min(q_Err(f_1) + q_Err(f_2), 1), yielding values in the bounded interval [0, 1]. § PREDICT-THEN-TRACK AND PREDICT-AND-TRACK FRAMEWORK In this section, we introduce our end-to-end differentiable predict-and-track (PAT) and two-step predict-then-track (PTT) approaches, including preprocessing steps directly aimed at the specific data gathered during proton computed tomography. We closely follow existing state-of-the-art edge-classifying approaches, with modifications matching the algorithm to the specific challenges of reconstruction in a digital tracking calorimeter, to provide a generally applicable blueprint and transferable results that can be easily adapted to different detector geometries and data structures. §.§ Hit Graph and Edge (Line) Graph Construction In this work, we parameterize the detector data as an undirected line graph 𝒢_L = (𝒱_L, ℰ_L) generated from an initial hit graph 𝒢_H = (𝒱_H, ℰ_H) containing all detector readouts of a single readout frame. In the initial hit graph structure, we represent all particle hit centroids in a readout frame as a set of graph vertices and describe possible track connections (segments) between two consecutive layers in the detector by edge connections. To capture a richer representation of the scattering behavior, providing a strong inductive bias for the message passing mechanism of graph neural networks, we transform this initial representation into a line graph (𝒢_L), where each edge in 𝒢_H is transformed into a node in 𝒢_L (see Figure <ref>). Further, edges are generated between nodes v_L,i, v_L,j that share a common vertex, constructing descriptions of track deflections of candidates over three consecutive layers. This transformed representation allows information on the scattering behavior (two nodes connected by an edge in 𝒢_H), which provides important information on possible track segments, to be aggregated efficiently in a single GNN message, as opposed to the more complex aggregation over a two-hop neighborhood in 𝒢_H. To provide machine-readable and differentiable features (w.r.t. the simulation output), we parameterize both vertices (v_i) and edges (e_ij) as v_i = (Δ E_H,i‖Δ E_H,j‖⟨ x, y, z⟩_H,i‖⟨ x, y, z⟩_H,j) and e_ij = sin (ω· [1 - √(s_cos) ]) if i = 2k, cos (ω· [1 - √(s_cos) ]) if i = 2k + 1, where Δ E_H is the deposited energy in the sensitive layers and ⟨ x, y, z ⟩_H is the global position of the hit in the detector[We use the continuous energy deposition and position outputs of the Monte-Carlo simulations here to be able to calculate gradients w.r.t. position and energy deposition. The parameters can be replaced with the cluster size and cluster centroid position accordingly. However, this representation requires additional work to provide differentiable solutions for pixel clustering.] for the adjacent nodes i and j. Further, each edge is parameterized by a positional encoding, similar to <cit.>. However, instead of using discrete token positions, we leverage the cosine similarity s_cos between two hit segments, as proposed by <cit.>. To reduce the complexity of the graph and to minimize the combinatorial explosion of edges in 𝒢_L, we remove edges with high angles in 𝒢_H, measured orthogonal to the sensitive layer.
We find suitable thresholds as a trade-off between reducing the graph size and minimizing the number of removed true edges (see Appendix <ref>). §.§ Edge Scoring Architecture Following the basic idea of track reconstruction using an edge classification scheme with edge scores <cit.>, we compose a graph neural network based on the interaction network (IN) architecture <cit.> as proposed in <cit.>. However, we use a slightly modified architecture to predict node scores on the line graph representation 𝒢_L, which directly correspond to the edge scores in 𝒢_H (see Figure <ref>). In the following, we formulate the edge and node updates in a generic message passing scheme, as well as the final scoring function based on the node representations on 𝒢_L. In the first step, updated edge representations are calculated using a concatenated vector containing the edge attributes, describing the scattering behavior of the triplet candidate, together with the node attributes v_i^(0) and v_j^(0) of the adjacent nodes according to e_ij^(1) = ϕ_R,1(v_i^(0), v_j^(0), e_ij^(0)). Here, ϕ_R,1 is a multilayer feed-forward network mapping the concatenated input to a fixed-size representation with ϕ_R,1: ℝ^2 d_node + d_edge→ℝ^d_hidden. In a subsequent step, all node features of 𝒢_L are updated in an aggregation step using the current node representation and edge information aggregated from the direct neighborhood of the respective node v_i: v_i^(1) = ϕ_O(v_i^(0), max_j ∈𝒩_i(e_ij^(1))). Given the dynamic number of incoming edges for each graph vertex, either due to the changing numbers of hit readouts in subsequent layers of 𝒢_H or the required generalization ability to different particle densities of the proton beam, we use a node-agnostic max-aggregation scheme, providing empirically better generalization abilities compared to non-node-agnostic aggregation schemes <cit.>. To predict an edge score for each edge in 𝒢_H, we perform a final reasoning step, transforming the node embedding of each node in 𝒢_L into a single scalar output using a feed-forward network ϕ_R,2: s_ij = σ(ϕ_R,2(v_i^(1))). We generate the final edge scores s_ij∈ [0, 1] using a sigmoid activation function as proposed in <cit.>. We augment the multi-layer feed-forward networks using batch normalization <cit.> to mitigate the risk of vanishing gradients and smooth the optimization landscape <cit.>, providing better training and inference performance. §.§ Track Construction as a Differentiable Assignment Problem Generating discrete track candidates from the continuous edge-score predictions requires a discrete algorithm that matches the tracks with the lowest total cost. Optimally, this would require solving a multidimensional assignment problem (MAP) that minimizes the total cost of all assignments over the entire detector geometry. This exact definition is, however, impractical in real-world usage since solving a MAP is NP-hard <cit.>, requiring an approximate solution. In the related literature, different optimization procedures, such as DBSCAN, union-find, or partitioning algorithms, are frequently used <cit.>. In this work, however, we rely on a relaxation of the original MAP formulation to a layer-wise linear sum assignment problem (LSAP), finding a solution y(𝒞) that minimizes min ∑_(i,j) ∈ℰ π_ij c_ij s.t. ∑_i ∈𝒱_S π_ij = 1 ∀ j ∈𝒱_T, ∑_j ∈𝒱_T π_ij ≤ 1 ∀ i ∈𝒱_S, and π_ij ∈ {0, 1} for (i, j) ∈𝒱_S ×𝒱_T, for every detector layer.
Here, c_ij ∈𝒞 is the assignment cost for the edge combination (i, j), where i ∈𝒱_S is a node from the set of source nodes and j ∈𝒱_T is a node from the set of target nodes, respectively. Further, π_ij is the respective assignment variable defining whether the edge should be assigned under the reconstruction policy. While the results in Section <ref> demonstrate good performance for the choice of the linear sum assignment, we want to note here that an increasing amount of detector noise and particle scattering can significantly influence the performance, because the assignment is enforced without any notion of noise. Edge-costs and solver: For the application of charged particle tracking, we define a dense assignment cost matrix 𝒞∈ℝ^|𝒱_S|×|𝒱_T|, where each entry with an existing edge in 𝒢_H is set as 1 - s_ij [Equivalently, we could directly predict the edge cost c_ij by the graph neural network instead of using the edge score s_ij. However, we chose this particular notation to stay compatible with the output of the predict-then-optimize variant.]. For all other entries, we assign the cost matrix an infinite cost, marking the assignment as infeasible: c_ij = 1 - s_ij if e_ij ∈ℰ_H, and c_ij = ∞ if e_ij ∉ℰ_H. The dense cost matrix allows us to solve the assignment problem reasonably efficiently in 𝒪(n^3) using the Jonker-Volgenant algorithm (LAPJV) <cit.>. Similarly, for larger tracking detectors with a sparser occupation per area and thus sparser connectivity, this notation can be replaced by a sparse cost matrix and solved with the sparse LAPMOD variant of the Jonker-Volgenant algorithm <cit.>. Blackbox differentiation: For this general formulation of the LSAP, a wide range of predict-and-optimize schemes exists that provide end-to-end optimization through combinatorial optimization problems, based on, e.g., the interpolation of optimization mappings <cit.>, continuous relaxations <cit.>, or methods that bypass the calculation of gradients for the optimizer entirely using surrogate losses <cit.>. For an in-depth review and comparison of existing approaches, the interested reader is referred to <cit.>. <cit.> demonstrated similar regret bounds for all major and representative types of end-to-end training mechanisms for bipartite matching problems. We thus base our selection of a technique solely on the simplicity and generalizability of the approach to other optimizers and use-cases. <cit.> defines a general blackbox differentiation scheme for combinatorial solvers of the form y(𝒞) = min_y ∈𝒴 c(𝒞, y) by considering the linearization of the solver mapping at the point y(𝒞̂) according to ∇_𝒞 f_λ(𝒞̂) := - 1/λ[ y(𝒞̂) - y_λ(𝒞') ], where 𝒞' = clip(𝒞̂ + λ dL/dy(y(𝒞̂)), 0, ∞). Here, y(𝒞̂) and y_λ(𝒞') are the standard and the perturbed-cost outputs of the combinatorial solver, λ is a hyperparameter controlling the interpolation, and dL/dy is the gradient of the reconstruction loss w.r.t. the output of the combinatorial solver. Using a linearized view of the optimization mapping around the input allows us to retain exact solvers without the need for any relaxation of the combinatorial problem. A minimal sketch of the cost-matrix construction and this differentiation scheme is given below.
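The following sketch illustrates the cost-matrix construction and the blackbox differentiation scheme outlined above. It is an assumption-laden illustration rather than the reference implementation: SciPy's dense linear sum assignment solver stands in for the LAPJV/LAPMOD implementations referenced in the text, a large finite cost replaces the infinite cost of non-existing edges, and the toy scores, ground-truth assignment, and Hamming-style loss are invented for demonstration.

```python
import numpy as np
import torch
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # finite stand-in for the "infinite" cost of non-existing edges

def solve_lsap(cost_np):
    """Binary assignment matrix y(C) for a (possibly rectangular) numpy cost matrix."""
    rows, cols = linear_sum_assignment(cost_np)
    y = np.zeros_like(cost_np, dtype=np.float64)
    y[rows, cols] = 1.0
    return torch.from_numpy(y)

class BlackboxLSAP(torch.autograd.Function):
    """Blackbox differentiation of the LSAP solver via a linearized solver mapping."""

    @staticmethod
    def forward(ctx, costs, lam):
        y = solve_lsap(costs.detach().cpu().numpy()).to(costs)
        ctx.lam = lam
        ctx.save_for_backward(costs, y)
        return y

    @staticmethod
    def backward(ctx, grad_output):
        costs, y = ctx.saved_tensors
        # Perturbed costs C' = clip(C + lambda * dL/dy, 0, inf)
        c_pert = torch.clamp(costs.detach() + ctx.lam * grad_output, min=0.0)
        y_pert = solve_lsap(c_pert.cpu().numpy()).to(costs)
        grad_costs = -(y - y_pert) / ctx.lam   # -1/lambda * [y(C) - y_lambda(C')]
        return grad_costs, None

# Toy edge scores for one 3x3 layer pair; edge (0, 2) is marked as non-existing.
scores = torch.tensor([[0.9, 0.2, 0.1],
                       [0.1, 0.8, 0.3],
                       [0.2, 0.1, 0.7]], requires_grad=True)
costs = 1.0 - scores
mask = torch.tensor([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=torch.bool)
costs = torch.where(mask, torch.full_like(costs, BIG), costs)

assignment = BlackboxLSAP.apply(costs, 25.0)            # lambda = 25, one of the tested values
target = torch.eye(3)                                   # invented ground-truth assignment
loss = (assignment * (1 - target) + (1 - assignment) * target).mean()  # Hamming-style loss
loss.backward()
print(assignment)
print(scores.grad)
```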
Cost margins: The usage of discrete assignments does not provide any confidence scores; thus, task losses of correctly assigned edges are always exactly zero, even for marginal differences between cost elements, potentially having a noticeable impact on the generalization performance <cit.>. To enforce larger margins between predicted costs, <cit.> introduced ground-truth-induced margins, where a negative or positive penalty is added to the predicted costs based on the ground truth labels. Later, <cit.> introduced noise-induced margins, removing the previous ground-truth dependence by adding random noise to the predicted costs. However, for the large number of edges in our use case of particle tracking, we found both methods to be highly unstable, even for small margin factors. §.§ Network Optimization While all previous building blocks are shared between the proposed predict-and-track and predict-then-track frameworks, the main difference lies in the optimization procedure of the network. We aim to optimize the prediction quality of both networks by minimizing a loss function that quantifies the disagreement between prediction and ground truth, where the ground truth is defined as a binary label for every edge in 𝒢_H. We treat the splitting of particle tracks containing secondary particles as noisy labels due to the relatively low production rate and generally short lifetime of most secondary particles (e.g., electrons). This avoids the need for computationally costly tracing algorithms that follow the particle generation of the Monte-Carlo (MC) simulation for the creation of the training data, and it avoids generating assignment rules for complex particle interactions. Given the true edge labels, we minimize for the predict-and-optimize scheme the Hamming loss of the assignments over a batch of N randomly sampled hit graphs according to ℒ({y_n}_1:N, {f(𝒢_n)}_1:N) = 1/N∑_n=1^N1/|ℰ_n|∑_(i,j) ∈ℰ_n w_ij·( y_ij· (1 - ŷ_ij) + (1 - y_ij) ·ŷ_ij), where ŷ_ij is the predicted assignment of edge (i, j) in hit graph 𝒢_H and y_ij is the corresponding ground truth assignment of the track. We perform additional weighting of the reconstruction loss for tracking and calorimeter layers to account for the different scattering behavior. Similarly, we use the binary cross-entropy (BCE) loss of the predicted edge scores for the predict-then-optimize version for comparison. To reduce the overall memory footprint of the optimization, we implement the gradient calculation of a single batch using gradient accumulation as N individual predictions over a single graph. For all following studies, we use an RMSProp optimizer <cit.> with a learning rate of 1e-3, which we found to be significantly more stable than Adam <cit.> and more robust to the selection of the learning rate compared to standard SGD. We train all PAT and PTT network variants for 10,000 training iterations, where in each iteration a random minibatch of N graphs is sampled from the training set. In initial experiments, we observed that the E2E architectures tended to become unstable after reaching a certain reconstruction performance. We thus perform selective “early stopping”, using the training checkpoint (evaluated every 100 iterations) with the best validation performance, determined based on the purity of the reconstructed tracks. § EXPERIMENTAL RESULTS AND ANALYSIS Dataset: For the studies reported in this work, we rely on MC simulation of detector readout data, which we generate using GATE <cit.> version 9.2 based on Geant4 (version 11.0.0) <cit.>. We generate different datasets for training (100,000 particles) and validation (5,000 particles), using a 230 MeV pencil beam as a beam source with water phantoms of various thicknesses between particle beam and detector.
For comparability purposes, the test set, containing 10,000 particles in total, is taken from <cit.>. A detailed listing of all data sources is provided in Appendix <ref>. For the training and validation simulations, we generate hit and line graphs with 100 primaries per frame (p^+/F), according to the description in Section <ref>, which we consider the expected target density for the constructed detector. Further, we generate hit graphs for the test set with 50, 100 and 150 p^+/F to assess the generalization ability to unseen densities. Hyperparameter settings: We share model hyperparameters for both the PAT and PTT frameworks, documented in detail in Appendix <ref>. As we consider the interpolation parameter λ an important tunable parameter, potentially impacting the optimization behavior, we analyze the effect of the parameter choice on the tracking performance. For this study, we select values of 25, 50 and 75, covering a range that is well inside the value ranges specified for similar applications <cit.>. Track reconstruction and filtering: During inference, particle tracks are constructed by combining the trained reconstruction networks (PTT and PAT) in all configurations with the linear assignment solver, creating unique assignments of particle hits to tracks[Visualizations of a selection of reconstructed tracks can be found in Appendix <ref>]. To remove particle tracks produced by secondary particles or involving inelastic nuclear interactions, we apply a track filtering scheme similar to <cit.> and <cit.>, removing implausible tracks after reconstruction based on physical thresholds. In this work, we limit the track filters to an energy deposition threshold (625 keV in the last reconstructed layer), ensuring the existence of a Bragg peak in the reconstructed track. Baseline models: As baseline models, providing reference values for quantifying the track reconstruction performance of the proposed architectures, we select both a traditional iterative track follower algorithm <cit.>[The source code for the track follower proposed by Pettersen et al. is provided as a software component in the Digital Tracking Calorimeter Toolkit (<https://github.com/HelgeEgil/DigitalTrackingCalorimeterToolkit>)] and a reinforcement learning (RL) based tracking algorithm <cit.>. The first baseline finds suitable tracks by iteratively searching for track candidates minimizing the total scattering angle. Similarly, the RL-based algorithm aims to learn a reconstruction policy functioning as a learned heuristic to the algorithm by <cit.> by iteratively maximizing the expected likelihood of observing the scattering behavior determined by MCS during training. Quantification of model performances: To compare the quality of the reconstruction, we quantify the performance based on the true positive rate (TPR) and false positive rate (FPR) of the assigned edges as well as the reconstruction purity (p) and efficiency (ϵ) of entire tracks, defined respectively as TPR = n_TP/(n_TP + n_FN), FPR = n_FP/(n_FP + n_TN), p = N_rec,+^filt/N_rec,+/-^filt, ϵ = N_rec,+^filt/N^prim_total. Here, n_TP, n_TN, n_FN, and n_FP are the numbers of true-positive, true-negative, false-negative, and false-positive reconstructed edge segments. Further, N_rec,+^filt is the number of correctly reconstructed particle tracks after filtering, N_rec,+/-^filt is the total number of reconstructed tracks after applying the track filter, and N^prim_total is the total number of primary tracks (without inelastic nuclear interactions) in a readout frame. A small helper for computing these metrics is sketched below.
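The following is a minimal sketch of how the four metrics defined above can be computed from binary edge labels and track counts; the example values are invented and only meant to illustrate the usage.

```python
import numpy as np

def edge_metrics(y_true, y_pred):
    """True/false positive rates over predicted edge assignments (binary arrays)."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

def track_metrics(n_correct_filtered, n_total_filtered, n_primaries):
    """Purity and efficiency of reconstructed tracks after track filtering."""
    purity = n_correct_filtered / n_total_filtered if n_total_filtered else 0.0
    efficiency = n_correct_filtered / n_primaries if n_primaries else 0.0
    return purity, efficiency

# Toy example with made-up labels and track counts.
print(edge_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))   # TPR ~ 0.67, FPR = 0.5
print(track_metrics(n_correct_filtered=85, n_total_filtered=90, n_primaries=100))
```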
To provide confidence intervals for the model performance as well as for the following analysis of the loss landscapes, we calculate the results over five independently trained networks with different random initializations. §.§ Training and Inference Performance Figure <ref> visualizes the intermediate model performances for every 100 training iterations, determined by true-positive rate, false-positive rate, purity and efficiency, both on the training and validation dataset. Additionally, marked with horizontal dotted lines is the iteration of each model run with the highest validation purity. Noticeably, all model instances (PAT and PTT) display similar training performances with a significant improvement of all metrics over the first 1000-2000 training iterations. Continuing from there, the end-to-end trained networks, especially with λ values of 50 and 75, show slightly improved performance compared to the model minimizing the prediction loss. However, this performance advantage does not translate to the validation dataset, where once again all models show almost identical results. This lack of generalization ability can likely be traced back to the absence of cost margins, as described in Section <ref>, which we had to remove due to the instability they introduced. Replacing this mechanism with an equivalent, more stable alternative is thus likely beneficial. Further, Figure <ref> demonstrates that the end-to-end differentiable versions with an interpolation factor λ of 50 and 75 can converge faster than the PTT variant. PAT (λ = 50) and PAT (λ = 75) converge on average in 7000 ± 1512 and 4780 ± 1263 training steps, respectively, while PTT requires on average 7580 ± 1536 steps to converge. In contrast, PAT (λ = 25) requires, with 9080 ± 1151 steps, more iterations than PTT. Additional performance results are provided in Table <ref>. Here, the performance of the trained models is compared for different phantom and particle density configurations. In addition, the performance results of a traditional track follower procedure, which was developed for the particular use-case of particle tracking in the DTC prototype, are provided as a baseline. Similar to the results provided in Figure <ref>, all model configurations, trained on the task- and the prediction-loss, demonstrate near-identical reconstruction purity and efficiency over all tested phantom and particle configurations. Further, the graph neural networks outperform the baseline algorithm in almost all performance metrics over all configurations, with only the configuration of 150 p^+/F and 100 mm water phantom as a slight outlier in this tendency. Here, the track follower and RL-based tracker demonstrate a higher reconstruction efficiency, while the reconstruction purity is still higher for the GNN architectures. §.§ Evaluation of Local and Global Loss Landscapes To analyze and characterize the training and inference behavior of the end-to-end and two-step tracking approaches and gain an understanding of the effects of end-to-end optimization and its hyperparameters, we visualize <cit.> and characterize the loss landscape structure, closely following the methodology and taxonomy by <cit.> for characterizing optimization loss landscapes. We specifically focus on the global connectivity of the loss landscape as it allows us to infer conclusions and comparisons between different model types, given the same parameterization along all configurations.
We thus further augment the evaluation by an additional analysis of functional similarities <cit.> of the optimized models. All used algorithms are detailed in Section <ref>. Additional implementation details are listed in Appendix <ref>. §.§.§ Local Structure of the Task Loss Surfaces for Decision-focused Learning Figure <ref> visualizes the two-dimensional loss surfaces (Hamming loss) of the first four runs of all configurations in terms of filled contour maps in logarithmic scale. Noticeably, all loss surfaces show similar and consistent shapes and patterns in the local 2D loss landscape around the found minima, both over random initializations and choices of the interpolation parameter λ. Projected onto two dimensions, the surfaces show a mostly convex structure with wide and flat regions along the minima, supporting the previous findings of good training performance of the E2E training configuration. This pattern also often coincides with good generalization performance <cit.>. However, <cit.> demonstrates that the notion of flatness is not sufficient to reason about the generalization ability itself. We argue that, in this case, the large flat regions are likely conditioned and defined by the range of sensitivity of the predictions along various parameter configurations. We further strengthen this intuition by comparing the Hamming loss with the BCE loss surfaces in Section <ref>, demonstrating that the Hamming loss surface strongly correlates with a flattened version of the BCE loss. While a significant amount of the projected loss surface is occupied by a flat, low-loss area, all loss surfaces show a significant loss barrier near the trained network parameters, demonstrating the convergence of the networks to outputs close to decision boundaries (see Section <ref>). Providing an alternative to cost margins, discussed in Section <ref>, may thus be helpful to improve the generalization performance of the network. §.§.§ Agreement of Prediction Loss and Task Loss Surfaces While prediction-loss and task-loss optimization use different loss functions on both intermediate and final outputs that generally do not coincide <cit.>, Figure <ref> and Table <ref> reveal substantial similarities both during training and inference. Figure <ref> indicates that the similarities directly translate, and thus are most likely caused by the similar shape of the projected loss landscapes of prediction and task loss. For all tracking networks trained minimizing the prediction loss, we find similar shapes and patterns for both the prediction loss (BCE) and task loss surface (Hamming). We especially emphasize the existence of minima in the loss surface with strongly correlated shapes, demonstrating the best agreement around the minimum itself [The difference of prediction loss and task loss vanishes for perfect predictions of the two-step approach with perfect confidences (either exactly zero or one).]. While the overall shape of the loss surface suggests generally good agreement between the two losses, the surfaces also show significant differences in relative loss for suboptimal solutions. While this did not seem to be significant for our work, theoretical work indicates increasingly growing regrets for models optimizing the prediction loss if the parametric model class is ill-posed and there is little data available <cit.>.
For well-specified models, experimental results reported in <cit.> further strengthen our findings that the regret for decision-focused learning strategies in matching problems closely lines up with that obtained with two-step training minimizing the prediction loss. §.§ Representational and Functional Similarities The similar training and inference performance (Section <ref>) as well as the strong correlation of prediction and task loss surfaces (Section <ref>) indicate strong connections between the E2E and two-step training paradigms. However, the previous results lack information about parameter- and prediction-level similarities and differences, thus providing only limited information about the connectivity in the global loss landscape. We thus continue analyzing the inherent similarities of the networks to get a more profound understanding of existing structural differences in both training paradigms. Figure <ref> visualizes the representational similarities (measured as average CKA similarities between layer representations) and functional similarities (measured as min-max normalized disagreement in predictions). Here, the first row shows all elementwise similarities with an empty diagonal, and the second row shows group-wise similarities across random initializations. Noticeably, PAT-based models reveal consistently moderate to high representational similarities in all network modules. Especially the first two network layers, representing the message passing and aggregation steps of the GNN architecture, show significantly higher similarities compared to their two-step counterpart. This effect is consistent both across random initializations and across different combinations of the interpolation factor λ. The representational similarities in the final network module are comparable for both decision-focused and two-step learning, despite the two-step models exhibiting lower similarities in the preceding layers. Overall, combinations of PTT and PAT demonstrate the lowest representational similarities, indicating that both E2E and two-step training can find similarly performing network parameters that, however, show significantly larger differences in their internal representation compared to models trained using the same optimization paradigm. While the average reconstruction performance across multiple readout frames is unaltered by changes in the learned network representations (ref. Table <ref>), subtle prediction instability caused by different learned prediction mechanisms can have a noticeable impact on downstream tasks. Especially for medical applications such as pCT, robustness and stability of the classifier are essential to assure reproducibility and avoid unpredictable changes in the following downstream tasks. To avoid comparing the outcome of downstream tasks, which is computationally expensive and depends on multiple intermediate steps, we analyze the functional similarity, or prediction instability, of the tracking as a helpful proxy, complementing the previous results. As illustrated in Figure <ref>, both PTT and PAT-based models show substantial functional disagreement in a similar range, with median disagreement rates around 15%. We want to specifically point out the increase in disagreement for models with higher interpolation values (see Appendix <ref> for the statistical significance analysis), suggesting that smaller λ values should be chosen for improved model stability. A compact sketch of both similarity measures is given below.
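The following is a compact sketch of the two similarity measures used above: the min-max normalized disagreement and a plain (non-batched) linear CKA. The batched variant with the unbiased HSIC estimator mentioned earlier is not reproduced here, and the example data is synthetic.

```python
import numpy as np

def disagreement(pred1, pred2, threshold=0.5):
    """Raw churn between two binary edge classifiers operating on scores in [0, 1]."""
    return float(np.mean((pred1 > threshold) != (pred2 > threshold)))

def normalized_disagreement(pred1, pred2, y_true, threshold=0.5):
    """Min-max normalized disagreement d_norm, bounded to [0, 1]."""
    d = disagreement(pred1, pred2, threshold)
    err1 = float(np.mean((pred1 > threshold) != y_true.astype(bool)))
    err2 = float(np.mean((pred2 > threshold) != y_true.astype(bool)))
    d_min, d_max = abs(err1 - err2), min(err1 + err2, 1.0)
    return (d - d_min) / (d_max - d_min) if d_max > d_min else 0.0

def linear_cka(X, Y):
    """Plain linear CKA between two representation matrices of shape (n_samples, dim)."""
    X = X - X.mean(axis=0)                     # column-center both representations
    Y = Y - Y.mean(axis=0)
    hsic_xy = np.linalg.norm(Y.T @ X, 'fro') ** 2
    hsic_xx = np.linalg.norm(X.T @ X, 'fro') ** 2
    hsic_yy = np.linalg.norm(Y.T @ Y, 'fro') ** 2
    return hsic_xy / np.sqrt(hsic_xx * hsic_yy)

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 64))
Y = 1.7 * X + 0.01 * rng.normal(size=(512, 64))   # nearly identical up to isotropic scaling
print(linear_cka(X, Y))                           # close to 1, since CKA is scale-invariant

scores_a = rng.uniform(size=1000)
scores_b = np.clip(scores_a + rng.normal(scale=0.1, size=1000), 0, 1)
labels = (scores_a + rng.normal(scale=0.3, size=1000) > 0.5).astype(int)
print(normalized_disagreement(scores_a, scores_b, labels))
```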
Further, the experiments demonstrate, similar to the representational similarity results, a significant increase in disagreement for combinations containing both PAT and PTT variants. Here, we can observe median disagreement rates of approximately 20%, corresponding to an increase in prediction instability of approximately 33%. §.§ Global Connectivity of Trained Models Given the findings of Section <ref>, demonstrating substantial disagreements in model predictions, we now aim to quantify the global shape, or connectivity, of the loss landscape. Table <ref> and Table <ref> summarize the mode connectivity, comparing connecting Bézier curves between model parameters as specified in Section <ref>. To get a more profound understanding of the global landscape for prediction and task loss, we generate curves using both the BCE loss on the predicted edge scores and the Hamming loss on the assigned edges. Here, we take advantage of our findings in Section <ref>, suggesting a strong agreement between the Hamming loss and the BCE loss for low loss values. In addition, we provide the linear connectivity between two models as a baseline. Further, <cit.> and <cit.> demonstrate in their work that trained models deficient in linear connectivity are mechanistically dissimilar[Trained models using different mechanisms or representations for predictions.]. This allows us to gain further insights into the stability of the trained models from a loss landscape perspective, supporting the results in Section <ref>. Supplemented by the results in Table <ref>, we find that connecting curves with consistently low loss values and high true positive rates exist for models with the same training configuration. However, despite the excellent connectivity along Bézier curves, the linear connectivity is associated with persistently high losses, indicating low mechanistic similarity. Analogous to the functional similarity results in Figure <ref>, we report increasing losses for higher interpolation values, strengthening the findings in Section <ref> pointing towards the importance of the interpolation factor despite similar reconstruction performances. Despite the lack of linear connectivity, given the marginal difference in non-linear connecting curves between all PAT-trained models, all parameter configurations seem to be strongly interlinked with globally well-connected minima. This property also holds for the PTT configuration; however, we were only able to find low-loss connecting curves by using the BCE loss as the optimization criterion. Optimizing the Hamming loss, in contrast, yielded substantially worse results, which we cannot explain. We find similar results demonstrating strong nonlinear connectivity between trained models of PAT and PTT mixtures, indicating a good global connectivity of the modes without noticeable loss barriers that could indicate more pronounced differences in the optimization process (see Table <ref>). However, the high loss values of the linear connectivity (mean_PTT → PAT: 117.42±76.26, vs. e.g., mean_PTT: 107.24±42.2) demonstrate a noticeable impact of mixing the training paradigms on the mechanistic similarity. In addition, similar to the connecting curves of PTT minimizing the Hamming loss, we find noticeably higher loss values compared to the BCE loss. § DISCUSSION AND CONCLUSION In this work, we propose and explore E2E differentiable neural charged particle tracking using graph neural networks.
We integrate the combinatorial optimization mechanisms used for generating disconnected track candidates from predicted edge scores directly into the training pipeline by leveraging mechanisms from decision-focused learning. By optimizing the whole track reconstruction pipeline in an E2E fashion, we obtain results comparable to two-step training minimizing the BCE loss of raw edge scores. However, we demonstrate that the predict-and-track approach can provide gradient information that can be propagated throughout the reconstruction process. While its benefit for simply optimizing the reconstruction performance of the network is limited, providing gradient information is highly valuable for various use-cases such as uncertainty propagation and reconstruction pipeline optimization, and it allows the construction of complex, multistep architectures, potentially reducing the combinatorial nature of particle tracking by ensuring unique hit assignments. By examining the global loss landscape, we observe that, despite similar training behaviors leading to globally well-connected minima with comparable reconstruction performance, there are discernible differences in the learned representations leading to substantial prediction instability. This instability is evident across random initializations and is further exacerbated when comparing models optimized for the prediction and task loss, respectively. The observed prediction instability highlights the importance of E2E differentiable solutions, given that the impact of model instability on separate downstream tasks, such as image reconstruction in pCT, is unpredictable and thus especially crucial for safety-critical applications. Therefore, incorporating the functional requirements of downstream tasks through an additional loss term appears to be highly beneficial. Given the strong training performance of the E2E architecture as well as the mostly convex shape of the two-dimensional loss surfaces with globally well-connected minima, the combined optimization of tracking and downstream task loss promises to be feasible and effective. However, as this would have exceeded the scope of this paper, we leave it for future work. Our analysis demonstrates, besides the similar reconstruction results obtained for different interpolation values and the general understanding that the exact choice of the hyperparameter is insignificant <cit.>, a coupling between the choice of λ and the prediction stability of the network. Selecting lower values for λ is thus especially important in critical applications where consistency of outputs is desirable or even essential. Moving forward, we plan to expand the framework by incorporating auxiliary losses for downstream tasks, especially for the use-case of pCT. Additionally, we will continue extending our studies to explore other potential combinatorial solvers for generating disconnected track candidates from predicted edge scores, as well as investigate additional network architectures to gain a more profound understanding of the impact of end-to-end optimization on charged particle tracking. §.§.§ Members of the Bergen pCT Collaboration Max Aehlea, Johan Almeb, Gergely Gábor Barnaföldic, Tea Bodovab, Vyacheslav Borshchovd, Anthony van den Brinke, Mamdouh Chaarb, Viljar Eikelandb, Gregory Feofilovf, Christoph Garthg, Nicolas R.
Gaugera, Georgi Genovb, Ola Grøttvikb, Håvard Helstruph, Sergey Igolkinf, Ralf Keideli, Chinorat Kobdajj, Tobias Kortusa, Viktor Leonhardtg, Shruti Mehendaleb, Raju Ningappa Mulawadei, Odd Harald Odlandk, b, George O'Neillb, Gábor Pappl, Thomas Peitzmanne, Helge Egil Seime Pettersenk, Pierluigi Piersimonib,m, Maksym Protsenkod, Max Rauchb, Attiq Ur Rehmanb, Matthias Richtern, Dieter Röhrichb, Joshua Santanai, Alexander Schillingi, Joao Secoo, p, Arnon Songmoolnakb, j, Ákos Sudárc, q, Jarle Rambo Sølier, Ganesh Tambaves, Ihor Tymchukd, Kjetil Ullalandb, Monika Varga-Kofaragoc, Boris Wagnerb, RenZheng Xiaob, v, Shiming Yangb, Hiroki Yokoyamae, a) Chair for Scientific Computing, TU Kaiserslautern, 67663 Kaiserslautern, Germany b) Department of Physics and Technology, University of Bergen, 5007 Bergen, Norway; c) Wigner Research Centre for Physics, Budapest, Hungary; d) Research and Production Enterprise ”LTU” (RPELTU), Kharkiv, Ukraine; e) Institute for Subatomic Physics, Utrecht University/Nikhef, Utrecht, Netherlands; f) St. Petersburg University, St. Petersburg, Russia; g) Scientific Visualization Lab, TU Kaiserslautern, 67663 Kaiserslautern, Germany; h) Department of Computer Science, Electrical Engineering and Mathematical Sciences, Western Norway University of Applied Sciences, 5020 Bergen, Norway; i) Center for Technology and Transfer (ZTT), University of Applied Sciences Worms, Worms, Germany; j) Institute of Science, Suranaree University of Technology, Nakhon Ratchasima, Thailand; k) Department of Oncology and Medical Physics, Haukeland University Hospital, 5021 Bergen, Norway; l) Institute for Physics, Eötvös Loránd University, 1/A Pázmány P. Sétány, H-1117 Budapest, Hungary; m) UniCamillus – Saint Camillus International University of Health Sciences, Rome, Italy; n) Department of Physics, University of Oslo, 0371 Oslo, Norway; o) Department of Biomedical Physics in Radiation Oncology, DKFZ—German Cancer Research Center, Heidelberg, Germany; p) Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany; q) Budapest University of Technology and Economics, Budapest, Hungary; r) Department of Diagnostic Physics, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway; s) Center for Medical and Radiation Physics (CMRP), National Institute of Science Education and Research (NISER), Bhubaneswar, India; t) Biophysics, GSI Helmholtz Center for Heavy Ion Research GmbH, Darmstadt, Germany; u) Department of Medical Physics and Biomedical Engineering, University College London, London, UK; v) College of Mechanical & Power Engineering, China Three Gorges University, Yichang, People's Republic of China §.§.§ Acknowledgments This work was supported by the German federal state Rhineland-Palatinate (Forschungskolleg SIVERT) and by the Research Council of Norway (Norges forskningsråd), the German National High-Performance Computing (NHR) association for the Center NHR South-West and the University of Bergen, grant number 250858. The simulations and training were partly executed on the high-performance cluster "Elwetritsch" at the University of Kaiserslautern-Landau, which is part of the "Alliance of High-Performance Computing Rhineland-Palatinate" (AHRP). We kindly acknowledge the support of the regional university computing center (RHRK). Tobias Kortus and Nicolas Gauger gratefully acknowledge the funding of the German National High-Performance Computing (NHR) association for the Center NHR South-West. 
The ALPIDE chip was developed by the ALICE collaboration at CERN. § EDGE FILTER RESULTS In Section <ref>, we mentioned the necessity of performing edge filtering to minimize the size of both the hit and line graph representations, improving reconstruction speeds and reducing memory requirements during training and inference. Selecting suitable thresholds, given the stochastic nature of particle scattering in material, requires a trade-off between reduced graph size and the number of illegitimately removed true edges. Figure <ref> shows the fraction of removed edges in the graph as opposed to the fraction of removed true edges, generated for 100 graphs of the train dataset selected uniformly over all phantom thicknesses. Based on the results in Figure <ref>, we select the thresholds θ_d = 400 mrad for the edges in the calorimeter layer and θ_t = 200 mrad for all edges contained in the tracking layer. § HYPERPARAMETERS § RECONSTRUCTED PARTICLE TRACKS § IMPLEMENTATION DETAILS FOR LOSS LANDSCAPE EVALUATIONS IN SECTION <REF> This section provides additional implementation details for the approaches used to evaluate the loss landscape in Section <ref> and all its subsections. General remarks: As all experiments require a significant number of network evaluations, often including tens or hundreds of iterations over the data in the training set, we select for Sections <ref>, <ref>, <ref> and <ref> a subset of 100 uniformly selected graphs from the train set to keep runtimes feasible, while still providing enough data. Loss surfaces: We generate the loss surfaces following the implementation details described in <cit.>. This includes especially the usage of fixed batch normalization parameters across all sampled α, β configurations. We generate the eigenvectors of the Hessian used as the spanning vectors for the loss landscape using an implementation based on the PyHESSIAN framework <cit.>, adapted for our use case. All loss surfaces are generated using 50 × 50 parameter values and parallelized on a single machine with multiple GPUs using the Ray software framework <cit.>. Representational and functional similarities: Our implementation for quantifying representational and functional similarities matches the description of <cit.> and <cit.>. However, given the large cardinality and connectivity of the line graph representations, we sample in each iteration of the CKA algorithm only a subset of 80 × 1024 features to keep runtimes feasible. Each combination of networks is evaluated as an individual job using the SLURM resource manager on the Elwetritsch HPC cluster of the University of Kaiserslautern-Landau. Mode connectivity: We closely follow the implementation details for generating connecting curves provided by <cit.>. However, instead of updating the batch normalization parameters for every t, we follow an approach similar to the one used for generating the loss landscapes <cit.>. We use constant parameters determined by the first curve anchor, providing us with reasonable normalization values. Each curve is optimized using 1000 iterations with a batch size of 8 graphs. We found that increasing the number of training iterations resulted in worse connectivity for PTT-based combinations minimizing the Hamming loss. We use the same parallelization strategy as used for calculating representational and functional similarities. A minimal sketch of the curve training procedure is given below.
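The following is a minimal sketch of the quadratic Bézier curve training loop outlined above, assuming a recent PyTorch version with torch.func.functional_call. It uses a toy MLP with randomly initialized endpoints in place of the trained tracking networks, a plain MSE loss in place of the Hamming/BCE losses, and omits the batch normalization handling, so all names and data are purely illustrative.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

def make_model():
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Two "trained" endpoint models (here just randomly initialized for illustration).
model = make_model()
w_a = {k: v.detach().clone() for k, v in make_model().state_dict().items()}
w_b = {k: v.detach().clone() for k, v in make_model().state_dict().items()}
# Learnable middle anchor of the quadratic Bezier curve, initialized between the endpoints.
theta = {k: nn.Parameter(0.5 * (w_a[k] + w_b[k])) for k in w_a}
opt = torch.optim.Adam(theta.values(), lr=1e-2)

def curve_point(t):
    """psi_theta(t) = (1-t)^2 w_a + 2t(1-t) theta + t^2 w_b."""
    return {k: (1 - t) ** 2 * w_a[k] + 2 * t * (1 - t) * theta[k] + t ** 2 * w_b[k]
            for k in w_a}

x, y = torch.randn(64, 8), torch.randn(64, 1)      # stand-in training batch
loss_fn = nn.MSELoss()

for step in range(200):
    t = torch.rand(()).item()                      # t ~ U(0, 1)
    loss = loss_fn(functional_call(model, curve_point(t), (x,)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Evaluate connectivity: loss along the trained curve.
with torch.no_grad():
    losses = [loss_fn(functional_call(model, curve_point(t), (x,)), y).item()
              for t in torch.linspace(0, 1, 11)]
print(losses)
```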
§ REPRODUCIBILITY In this work, we analyze several trained models as well as results based on various evaluation mechanisms, each requiring a significant amount of computing resources. For transparency, we thus provide all trained models together with the evaluation results and data under <https://doi.org/10.5281/zenodo.12759188>. Additionally, we provide all the source code for generating the tables and figures used throughout this work, which can be re-run without regenerating any of the required result data. All source code that supports the findings of this study will be made openly available as open source after paper acceptance under <https://github.com/SIVERT-pCT/e2e-tracking>. § STATISTICAL TESTING FOR SIMILARITY SCORES Given the seemingly increasing prediction instability of the tracking network with increasing interpolation factor λ, we suspect a dependency between these two quantities. We verify this hypothesis using Welch's t-test, demonstrating that the prediction instability decreases with smaller λ values (see Figure <ref>).
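As an illustration of this test, the snippet below applies Welch's unequal-variance t-test to two sets of disagreement rates using scipy.stats.ttest_ind; the rates shown are invented placeholder values, not the measured results.

```python
import numpy as np
from scipy import stats

# Hypothetical per-pair disagreement rates for two interpolation settings (illustrative values).
disagreement_lambda_25 = np.array([0.13, 0.15, 0.14, 0.16, 0.12])
disagreement_lambda_75 = np.array([0.17, 0.19, 0.18, 0.16, 0.20])

# Welch's t-test: unequal-variance two-sample test, one-sided (smaller lambda -> smaller disagreement).
t_stat, p_value = stats.ttest_ind(disagreement_lambda_25, disagreement_lambda_75,
                                  equal_var=False, alternative='less')
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```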
http://arxiv.org/abs/2407.12226v1
20240717004247
Individualized Federated Learning for Traffic Prediction with Error Driven Aggregation
[ "Hang Chen", "Collin Meese", "Mark Nejad", "Chien-Chung Shen" ]
cs.LG
[ "cs.LG" ]
Individualized Federated Learning for Traffic Prediction with Error Driven Aggregation Hang Chen, Collin Meese, Student Member, IEEE, Mark Nejad, Senior Member, IEEE, Chien-Chung Shen, Member, IEEE Hang Chen and Chien-Chung Shen are with the Department of Computer and Information Sciences, University of Delaware, Newark, Delaware, 19716, USA (e-mail: {chenhang, cshen}@udel.edu). Collin Meese and Mark Nejad are with the Department of Civil and Environmental Engineering, University of Delaware, Newark, Delaware, 19716, USA (e-mail: {cmeese, nejad}@udel.edu). ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Low-latency traffic prediction is vital for smart city traffic management. Federated Learning has emerged as a promising technique for Traffic Prediction (FLTP), offering several advantages such as privacy preservation, reduced communication overhead, improved prediction accuracy, and enhanced adaptability to changing traffic conditions. However, the majority of current FLTP frameworks lack a real-time model updating scheme, which hinders their ability to continuously incorporate new incoming traffic data and adapt effectively to the changing dynamics of traffic trends. Another concern with the existing FLTP frameworks is their reliance on the conventional FL model aggregation method, which involves assigning an identical model (i.e., the global model) to all traffic monitoring devices to predict their individual local traffic trends, thereby neglecting the non-IID characteristics of traffic data collected in different locations. Building upon these findings and harnessing insights from reinforcement learning, we propose NeighborFL, an individualized real-time federated learning scheme that introduces a Haversine distance-based, error-driven, personalized local model grouping heuristic from the perspective of each individual traffic node. This approach allows NeighborFL to create location-aware and tailored prediction models for each client while fostering collaborative learning. Simulations demonstrate the effectiveness of NeighborFL, offering improved real-time prediction accuracy over three baseline models, with one experimental setting showing a 16.9% reduction in MSE value compared to a naive FL setting. individualized federated learning, real-time traffic prediction, error-driven aggregation, distance heuristic, reinforcement learning, recurrent neural network. § INTRODUCTION In the pursuit of high-fidelity Traffic Prediction (TP), Federated Learning (FL) has emerged as a technique with promising performance in traffic flow prediction <cit.> and speed prediction <cit.>. Compared to the conventional centralized approach <cit.>, FL presents a myriad of advantages that contribute to its growing popularity and potential impacts on TP. In particular, the distributed nature of FL allows for continuous model updates based on the latest traffic data collected at each traffic monitoring device, or device for short.
This dynamic learning capability enables the model to promptly incorporate changes and adapt to evolving traffic patterns in real time, ensuring that the predictions remain accurate and up to date. Such adaptability is essential to traffic prediction, as the dynamic nature of traffic patterns necessitates agile learning and rapid adjustments of the model <cit.>. In addition, FL has demonstrated its potential to enhance prediction accuracy, where the distributed nature of FL incorporates diverse data sources, and encompasses various traffic scenarios and conditions. This diversity contributes to a more comprehensive and robust training process, resulting in improved prediction accuracy. The collaborative aspect of FL also pools the knowledge from multiple traffic management organizations, creating collective intelligence that surpasses the capabilities of centralized models <cit.>. Early Federated Learning frameworks for Traffic Prediction (FLTP) often adopted a naive aggregation approach. These frameworks aimed to produce a global model that would be utilized by all the devices or organizations within the study region to predict individual short-term traffic situations <cit.>. However, this aggregation method overlooked one important factor: the traffic data collected across different devices could exhibit non independent and identically distributed (non-IID) data characteristics <cit.>. For instance, devices located in proximity to each other along the same arterial corridor would more likely exhibit similar traffic trends compared to those located farther apart, or on ramps or other roads, as observed in <cit.>. In other words, local models trained by respective devices may exhibit a significant bias towards their localized traffic trends <cit.>. This data imbalance limits the effectiveness of training a single, optimal global model, as it will likely not perform optimally for all devices. To address the non-IID issue in FLTP, a common approach is to incorporate similarity clustering during the aggregation process<cit.>. Indeed, even non-FL-based TP frameworks have explored clustering methods, such as K-nearest neighbors based clustering, to enhance prediction accuracy <cit.>, and more recent FL works have explored the K-means algorithm and Graph Neural Networks (GNN) based approaches for clustering the devices <cit.>. More specifically, the devices within the study region are divided into non-overlapping subgroups using specific metrics, such as geographical distance <cit.>, prediction error <cit.>, or model parameter similarity <cit.>. Although these approaches have been shown to enhance the accuracy of the global model employed by individual devices, a mere static grouping of devices may prove ineffective, particularly during unexpected and disruptive traffic events, such as accidents or disabled vehicles. These random, non-recurring events introduce a dynamic facet characterized by location-dependency and high-intensity, yet ephemeral impacts on the traffic network, prompting the requirement for an adaptive approach. Another notable observation of many existing FLTP frameworks is that these federated models are typically trained in an offline manner <cit.>. This implies that the federated models are exclusively trained using historical traffic data that has already been collected by the devices, thus relying solely on past information for learning purposes. 
Once the federation is completed, the final global model is deployed in the live environment to make predictions based on real-time streaming data. However, a practical TP model should adapt in real-time using newly incoming traffic data. This is crucial to effectively handle data drift caused by changing traffic dynamics, such as non-recurring events, seasonal cycles, and infrastructure changes. It is worth noting that FL has demonstrated the ability to update models in real-time, as exemplified by its successful implementation in the renowned Gboard word prediction application <cit.>. Therefore, it is essential to explore the feasibility of FLTP in a real-time streaming manner. Motivated by the aforementioned insights and drawing inspiration from reinforcement learning <cit.>, we propose NeighborFL, a real-time and individualized FL-based TP workflow to further improve the prediction accuracy of each individual traffic sensing location. This workflow introduces a new individualized aggregation method, guided by minimizing real-time prediction error and utilizing distances between devices as a heuristic. In NeighborFL, instead of having one global model for all devices in each communication round, each device uses an individualized group of local models from its chosen neighboring devices, plus its own local model, to create its aggregated model for real-time prediction. From a high-level perspective, NeighborFL operates as follows: * During the initialization phase of the federation, each device begins with an initial model and an empty set to record its chosen Favorite Neighbors within a predefined radius. The initial model can be randomly initialized or pretrained using the device's historical traffic data. All devices adhere to the same model architecture, following the convention in the original FL framework <cit.>. * At the start of each communication round, every device has an individualized model, denoted as A, which is produced by aggregating the parameters of the local models obtained from the devices in its Favorite Neighbors set (FN for short) along with its own local model (denoted as L_own) from the previous communication round. For the first communication round, each device's FN is empty, and the initial model serves as the individualized model. In contrast to conventional FL approaches, where devices share the same global model, denoted as G, for traffic prediction, each device employs its A to make predictions, labeled as P, while simultaneously gathering new traffic data from its sensors in real-time. * In addition, a device might have another aggregated model, denoted as A_eval, in situations where its FN has not yet included all its candidate neighbors and the device decides to evaluate a neighboring device's model for potential inclusion as a Favorite Neighbor in the current round. This is done by acquiring the local model from its nearest device not yet included in its FN at the end of the last communication round. This specific local model is referred to as L_eval, and the owner of L_eval is denoted as d_eval. The production of A_eval involves aggregating the parameters of the local models gathered from its Favorite Neighbors, and those of L_own and L_eval. In essence, the difference between A_eval and A is the inclusion of an extra local model (i.e., L_eval) in the aggregation process of A_eval. * The process of evaluating L_eval involves generating a new set of predictions with A_eval, which we refer to as P_eval.
The device then assesses the accuracy of these predictions by comparing the prediction errors of P and P_eval, using an error metric such as the mean squared error. This comparison involves evaluating the error of P against the actual ground truth, T, which we denote as E, and similarly, evaluating the error of P_eval against T, denoted as E_eval. * If E_eval is smaller than E, indicating that the inclusion of L_eval in the model aggregation results in improved prediction accuracy in this round, d_eval is included in the device's FN. Subsequently, A_eval is adopted for local learning in the current round. Otherwise, d_eval is not added to the device's FN and is optionally placed in a retry list with an assigned retry interval, while A is used for learning to produce the device's local model for the current round. * To further adapt to real-time traffic dynamics, a device retains the option of eliminating one or more devices from its FN based on certain conditions. For instance, this might occur when a device's FN becomes stable (i.e., not changing for a number of consecutive rounds), or when the real-time prediction error consistently increases over several consecutive rounds. * Each device repeats Steps 2-6 in each subsequent communication round, adapting to the evolving traffic dynamics and refining its prediction capabilities in a dynamic and collaborative manner. Figure <ref> depicts the above steps of NeighborFL throughout a communication round, utilizing mean squared error as the error metric and adopting FedAvg <cit.> for local model aggregation. Additionally, it offers a parallel comparison with the workflow of the conventional FL approaches. The objective of NeighborFL is to enable devices to collaboratively search for an improved model that leads to lower prediction errors in the present round. This collaborative effort takes into account the spatial relationship among devices, aiming to enhance the predictions made in the subsequent round considering the temporal conditions. The contributions of NeighborFL and the distinct differences from existing FLTP architectures are summarized as follows: * Personalized Aggregated Models. Instead of starting the learning process from a single global model generated in the previous communication round, our approach first employs a personalized aggregated model for prediction and another aggregated model for evaluation if an A_eval is proposed. Then, each device tests these two models and selects the one with the lower prediction error to continue learning. This approach enables the system to adapt more effectively to real-time traffic dynamics. * Radius-based Candidate Selection. A device can set a radius to confine its candidate Favorite Neighbors, limiting the maximum number of local models that will be used to produce the aggregated model. This restriction prevents a device from requesting the local models of spatially distant devices, thereby avoiding potential communication delays, exploiting cooperative edge computing infrastructure, and improving the response time of the entire system. * Knowledge Propagation. The Favorite Neighbors sets of two nearby devices may contain overlapping devices. As a result, a local model from one device can still propagate its learned knowledge to a distant device, even if the distant device does not have the corresponding owner in its Favorite Neighbors set.
This propagation of knowledge across devices employs natural weightings to local models during aggregation, facilitating the dissemination of valuable insights and, thereby, enhancing the overall predictive capabilities of the system. * Individual Device Perspective. NeighborFL allows the devices to explore the spatial and temporal correlation between other participants to form dynamic and individualized groups, rather than considering the entire cohort of devices statically. This device-centric approach builds upon prior research and has the potential to directly incorporate decentralized, device-level model cross-validation to enhance system robustness and resilience <cit.>. We conducted comprehensive simulations to evaluate the effectiveness of NeighborFL. Without the loss of generality, we chose traffic speed data from 26 detectors in the publicly available PEMS-BAY reference dataset <cit.>, and used the Long Short-Term Memory (LSTM) <cit.> recurrent neural network as the client model architecture. In addition to using a randomly initialized global model across each of the 26 devices, we also investigated the impact of pretraining the initial model in NeighborFL <cit.>. For this, we allowed each of the 26 devices to pretrain their initial models using the traffic speed data from the first week of January 2017. We then analyzed the real-time learning and prediction capabilities of NeighborFL by testing its performance over the following two-week period involving both the pretrained and non-pretrained initial models. The experimental results presented in Section <ref> demonstrate that NeighborFL outperforms the three baseline models in terms of real-time prediction capability in the majority of cases, as indicated by the lower prediction error. Our NeighborFL code repository is available at https://github.com/hanglearning/NeighborFL, and our experiments can be reproduced by the provided portion of the dataset and the specified random seed. The remainder of this paper is organized as follows. Section <ref> provides a comprehensive review of related work that has guided us in the development of NeighborFL. Section <ref> presents the problem definitions for the key components of NeighborFL and provides a detailed outline of the workflow. Experimental results are presented in Section <ref>. Section <ref> addresses certain limitations of the current NeighborFL design and proposes potential directions for future research. Section <ref> summarizes the key findings of the paper. § RELATED WORK In this section, we review the existing TP architectures by categorizing them into three general categories - centralized methods, non-streaming FL methods, and streaming FL methods. Both centralized and non-streaming FL methods are further differentiated into methods with and without the grouping mechanism. §.§ Centralized methods §.§.§ Non-grouping Research in TP has a long history and has become a crucial component of Intelligent Transportation Systems. Early conventional TP models, such as the autoregressive integrated moving average (ARIMA) <cit.> and exponential smoothing <cit.>, have been used to forecast short-term traffic based on previous observations. In recent years, deep learning-based approaches have gained popularity. Lv et al. (2015) introduced a stacked autoencoder model <cit.>, while Fu et al. (2016) were among the first to utilize LSTM and GRU neural networks <cit.> for traffic flow prediction. 
The LSTM and GRU models proposed in <cit.> demonstrated superior prediction performance compared to the ARIMA model, solidifying the foundation of recurrent neural networks in TP. §.§.§ Grouping The K-nearest neighbor (KNN) is one of the most widely used grouping mechanisms in the TP literature. One of the first uses of KNN in TP research can be found in <cit.>, where its capability for improving TP was analyzed using freeway data. The forecasting method with KNN involves computing the average of predictions from a traffic node's K-nearest neighbors and its own. However, the authors concluded that “the KNN method performed comparably to, but not better than, the linear time-series approach, and further research is needed to delineate those situations where the KNN approach may be preferable." In 2012, Chang et al. proposed a KNN-based non-parametric regression (KNN-NPR) TP model to address situations where current or future traffic data exhibits fluctuations or abrupt changes. The proposed model demonstrated effectiveness and easy optimization for minimizing prediction errors <cit.>. Furthermore, Luo et al. (2019) introduced a model named KNN-LSTM, which utilized KNN to select the most relevant neighboring traffic nodes with respect to a specific node. The final prediction results are obtained by weighting the LSTM prediction values from all K stations in addition to its own output<cit.>. We observed that NeighborFL shares a similar methodology with KNN-LSTM in terms of fusing the prediction values. However, in KNN-LSTM, the K value specifying the number of neighboring traffic nodes to consider for any given node remains constant. In contrast, our Favorite Neighbors set dynamically expands and shrinks as communication rounds progress. These works have demonstrated that considering the spatiotemporal correlation characteristics of traffic data can improve the prediction performance of individual nodes. They also inspired us to incorporate distance as a heuristic when a traffic node searches for a Favorite Neighbor. §.§ Non-streaming FL methods §.§.§ Non-grouping An early application of FLTP was introduced by Liu et al. (2020) <cit.>, where the authors proposed a FL-based architecture for TP using Gated Recurrent Units (GRU) <cit.>, termed FedGRU. FedGRU demonstrated comparable prediction performance to centralized GRU methods while addressing privacy concerns associated with the traffic dataset. Qi et al. (2021) further advanced FLTP by integrating it into a blockchain architecture <cit.>. This framework addresses the single-point-of-failure issue in centralized FLTP and leverages smart contract technology to safeguard against malicious attacks and ensure high-quality training data. Zhang et al. (2021) extended FLTP by incorporating Graph Neural Networks (GNN) and proposing an adjacency matrix aggregation approach, enabling local GNN-based models to access the global network for improved training effectiveness <cit.>. §.§.§ Grouping The same work by Liu et al. (2020) introduced an ensemble clustering-based FedGRU algorithm that utilizes traffic data with improved spatio-temporal correlation based on location information. This approach enhances prediction accuracy compared to the standard FedGRU method. It also handles scenarios where multiple clients collaborate to train a traffic prediction model by grouping traffic nodes into K clusters prior to applying FedGRU <cit.>. Zeng et al. 
(2021) developed a divisive hierarchical clustering approach to partition traffic data at each traffic node into clusters. They applied FL to collaboratively train learning models for each cluster across all stations. The authors demonstrated improved prediction performance compared to the standard FL method using the PeMS dataset <cit.>. Furthermore, GNN-based methods are also explored as heuristics to group traffic nodes into non-overlapping clusters in <cit.> and <cit.>. Although these FLTP frameworks paved the way for exploring privacy-preserving distributed learning in TP, they lack the ability to update the global model in real-time after deployment, making them unable to adapt to changing traffic dynamics, such as non-recurrent traffic incidents and varying traffic patterns. We categorize these FLTP frameworks as non-streaming FL methods since they do not use real-time streaming traffic data for training or updating the models. §.§ Streaming FL methods In contrast to non-streaming FL approaches, streaming FL architectures rely on real-time traffic data streams to provide inputs for learning and evaluation, allowing for live updates and continuous adaptation. Our research identified two streaming FLTP models. Meese et al. (2022, 2024) introduced BFRT, a streaming Blockchained FL workflow for TP that defines a protocol for federated models to predict real-time traffic volume while collecting live data. By comparing it with a per-device centralized model, the authors demonstrated that the streaming FL method exhibits lower real-time prediction error, highlighting the advantages of collaborative real-time learning <cit.>. Subsequently, Liu et al. (2023) proposed FedOSTC, an alternative streaming FLTP workflow that incorporates a Graph Attention Network to assess spatial correlation among traffic nodes. It also integrates a period-aware aggregation mechanism to combine local models optimized using the Online Gradient Descent (OGD) algorithm <cit.>. FedOSTC outperforms five chosen baselines in terms of prediction performance on the partial PEMS-BAY and METR-LA datasets <cit.>. To the best of our knowledge, there has been no published work focusing on streaming FLTP methods involving grouping, i.e., personalized aggregation, and NeighborFL aims to fill this research gap and lays the foundation for multiple research directions discussed in Section <ref>. § SYSTEM MODULES In this section, we present NeighborFL's key modules and their formulations, along with their implementation in pseudocode. §.§ System Entity NeighborFL is operated by a set of traffic monitoring devices 𝒟={d_1, d_2, ⋯, d_i, ⋯} in the study region through communication rounds ℛ={R_1, R_2, ⋯, R_j, ⋯}. Each device d has a unique device ID and sequence number. We assume that a device is equipped with sensor(s) to collect traffic data such as volume, speed, and occupancy, and we also assume a device can act as a computational node to perform model training and participate in other algorithmic operations at the network edge. §.§ Candidate and Favorite Neighbors The concept of Favorite Neighbors distinguishes NeighborFL from other FLTP frameworks. In NeighborFL, a device aggregates local models from itself plus its Favorite Neighbors (Algo. <ref>: line 21) instead of using the local models from all the participants (i.e., 𝒟). Prior to joining NeighborFL, a device initializes an empty set called FN to keep track of its favorite neighbors (Algo.
<ref>: line 5) and a radius value r to populate a hashmap called CFN to record its candidate favorite neighbors (Algo. <ref>: lines 9-11). A single candidate favorite neighbor is denoted as cfn. A device considers neighboring devices within a distance of r as candidates. The key in CFN represents the device ID of the cfn, while the associated value is the Haversine distance between the cfn and the device (Algo. <ref>: lines 8, 11). In this work, a device's cfn becomes its favorite neighbor (fn) after evaluating the real-time prediction error, as described in Sec. <ref>. The evaluation order is determined by the Haversine distance between the device and cfn, from closest to farthest (Algo. <ref>: line 15; Algo. <ref>: line 4), with a retry interval control mechanism (Algo. <ref>: lines 4-6, 12-14; Algo. <ref>: lines 5-8; Algo. <ref>: lines 15-17) explained in Sec. <ref>. Algo. <ref> outlines the process of populating CFN for the i-th device d_i, while Algo. <ref> explains how a cfn may be selected by d_i in R_j for evaluation based on the relative Haversine distance. Note that different devices may choose different values for r, and r_i specifically represents the r value chosen by device d_i in Algo. <ref>. d^j_i,eval represents the selected cfn to be evaluated by d_i in R_j+1 (Algo. <ref>: line 7). Technically, a device has the option to set r as unlimited to consider all devices in the study region. However, selecting an appropriate value for r can enhance a device's operational efficiency, and the overall system scalability, by restricting the number of candidate neighbors. This is important because distant devices located on different roads may display distinctly different traffic patterns, decreasing the evaluation efficiency. Also note that a device might consider evaluating more than one candidate. Without loss of generality, we concentrate on the scenario where only one cfn is evaluated at a time in this study (Algo. <ref>, lines 6-8). §.§ Online Streaming Model NeighborFL is designed to operate in real-time regarding data collection, training, and prediction. This section provides formal definitions for these three aspects. §.§.§ Online Streaming Data Collection Each device collects traffic data in real-time for training, prediction, and evaluation when participating in NeighborFL. We denote x as a single traffic data point, and x^j_i,m as the m-th traffic data point collected by device d_i in communication round R_j. This data point, x^j_i,m, represents a scalar value such as traffic volume or speed (Algo. <ref>: line 6). We define τ^j as the total number of data points that a device needs to collect within the R_j (Algo. <ref>: line 5). Each device is equipped with a memory card to store the collected data, and the collection of the data points stored in the memory of d_i is represented as data_i. To mitigate overfitting with outdated data, the length of data_i is limited to a maximum size called MaxDataSize. This ensures that data from older rounds is excluded from training in the current round (Algo. <ref>: lines 9-11). The algorithm for real-time data collection and updating data_i is presented in Algo. <ref>. data^j_i represents the updated data_i during or after the completion of R_j. §.§.§ Online Training As mentioned in Sec. <ref>, at the end of a communication round, a device utilizes the selected aggregated model and the updated data for training and generating its updated local model (Algo. <ref>, lines 7-11). 
We define a training instance as I = {𝐗, y}, where 𝐗⊂ data represents a vector containing a continuous sequence of traffic data points from data. The variable y can be either a scalar value or a vector, depending on the desired output forecasting horizon (i.e., the number of prediction steps) 𝒪, and the length of 𝐗 depends on the number of input units ℐ of the model. At any moment, the number of training instances in data can be calculated as num(I) = data.size - ℐ - 𝒪 + 1. For example, in this work, we employed an LSTM neural network with 12 input units (ℐ = 12) and 1 output unit (𝒪 = 1), representing a 1-step look-ahead prediction. Let x^(m) be the m-th data point in data and I^(k) be the k-th training instance. Suppose a device has collected at least MaxDataSize data points and consider MaxDataSize = 72; then the first training instance in data can be expressed as I^(1) = {<x^(1), x^(2), ..., x^(12)>, x^(13)}, and the last training instance, which corresponds to the 72-12-1+1=60-th instance as calculated in Eq. <ref>, can be written as I^(60) = {<x^(60), x^(61), ..., x^(71)>, x^(72)}. The process of updating the local model of d_i in R_j using the defined local epoch numbers ℰ and a batch size of 1 is outlined in Algo. <ref>. §.§.§ Real-time Prediction In round R_j, device d_i makes one real-time prediction for the next 𝒪 time steps (denoted as ŷ) using an aggregated model from R_j-1, denoted as A^j-1_i, on the most recent ℐ data points from data^j_i (denoted as 𝐗) (Algo. <ref>: lines 10-12), prior to collecting a new data point (Algo. <ref>: lines 13-14). Consequently, the number of predictions made by a device within the second communication round (R_2) and onwards is equal to the number of newly collected data points, which is denoted by τ^2+ (Algo. <ref>: line 5; Algo. <ref>: lines 6-14). As a special case, in the first communication round (R_1), a device needs to collect at least ℐ data points to make the first prediction (Algo. <ref>: lines 7-8), and ℐ + 𝒪 data points to initiate local learning[To ensure that there is a sufficient amount of data for training, it can be inferred that the value of MaxDataSize also needs to be larger than or equal to ℐ + 𝒪.] (Algo. <ref>, line 6, Algo. <ref>: line 15). Therefore, τ^1 must be at least ℐ + 𝒪 to ensure a local model is produced at the end of R_1. Note that data^j_i.size denotes the number of data points (i.e., x's) stored in data^j_i, and the maximum value of data^j_i.size is MaxDataSize. Consequently, the number of predictions a device makes in R_1 would equal τ^1 - ℐ - 𝒪 + 1, which is similar to the calculation of the number of data instances in data as expressed in Eq. <ref>. The steps for real-time prediction by d_i in R_j are described in Algo. <ref>. The resulting ground truth Y^j_i consists of instances of actual traffic data points (denoted as y) corresponding to the prediction instances (i.e., ŷ) in P^j_i. Specifically, each ground truth instance y (Algo. <ref>: line 16) or prediction instance ŷ (Algo. <ref>: line 11) is represented by a scalar value or a vector, depending on the value of 𝒪, and is recorded in chronological order in Y^j_i or P^j_i, respectively, controlled by m over time within a communication round (Algo. <ref>: lines 6-7, 15, 18). P^j_i and Y^j_i will be utilized to demonstrate the evaluation of a candidate favorite neighbor in Sec. <ref>. §.§.§ Evaluation of a candidate neighbor As mentioned in Sec.
<ref>, a device's candidate favorite neighbor (cfn) becomes its favorite neighbor (fn) based on real-time prediction error evaluation (Algo. <ref>: lines 11-12). Inspired by the concept of reinforcement learning and the general approach of training a neural network by minimizing inference errors, the evaluation proceeds as follows: If a device d^j_i,eval is selected for evaluation by device d_i in round R_j (Algo. <ref>: line 7), d_i in round R_j+1 will evaluate an aggregated model A^j_i,eval that integrates the local model of d^j_i,eval, denoted as L^j_i,eval (Algo. <ref>: lines 18-19, 24-25). This evaluation involves the comparison of the prediction error E^j+1_i,eval, derived by applying an error metric between P^j+1_i,eval and Y^j+1_i (Algo. <ref>, line 7), and another prediction error E^j+1_i, obtained by assessing P^j+1_i against Y^j+1_i (Algo. <ref>, line 8). P^j+1_i,eval consists of the predictions made by A^j_i,eval (Algo. <ref>: lines 15-16), which includes L^j_i,eval while being produced (Algo. <ref>: lines 24-25), whereas P^j+1_i is composed of the predictions made by A^j_i (Algo. <ref>: line 14), which does not include L^j_i,eval (Algo. <ref>: line 21). In simpler terms, a device determines the effectiveness of a potential candidate neighbor device by incorporating its local model into an aggregated model for making predictions. It then compares the error from these predictions to the error from another aggregated model that excludes the neighbor's local model. The prediction error metric (Algo. <ref>, lines 7-10) used for evaluation could be any commonly used measure in traffic prediction models, such as MAE, MRE, MSE, and RMSE <cit.>. An example of device-level MSE calculation for 𝒪 = 1 is given in Eq. <ref>. If E^j+1_i,eval is found to be lower than E^j+1_i, indicating that L^j_i,eval contributes positively to the prediction in R_j+1, d_i adds d^j_i,eval to its favorite neighbor set d_i.FN (Algo. <ref>: lines 11-12), and A^j_i,eval will be chosen to perform the local update at the end of R_j+1 (Algo. <ref>: line 13); meanwhile, L^j+1_eval will be included to produce A^j+1_i (Algo. <ref>: line 25). Otherwise, d_i records d^j_i,eval and updates its evaluation round R_j+1 in a hashmap named d_i.last_try_round (Algo. <ref>: line 15), and A^j_i will be chosen to perform the local update (Algo. <ref>: line 17). Additionally, d_i increments the retry interval of d^j_i,eval in another hashmap d_i.retry_interval (Algo. <ref>: line 16), in which all of d_i's candidate favorite neighbors are assigned an initial retry interval value of 0 (Algo. <ref>: line 13). The same pattern of retry interval incrementation occurs when d^j_i,eval is tried again in subsequent rounds without being favored. The combination of last_try_round and retry_interval forms the retry control mechanism for NeighborFL. This mechanism prevents a cfn (i.e., a specific d^j_i,eval) from being frequently retried without considering other available candidate neighbors (Algo. <ref>, lines 5-8). Taking inspiration from reinforcement learning, this mechanism can be seen as a form of penalty that delays the future inclusion of a local model from an evaluated candidate that fails to contribute effectively to the timely prediction process. Note that the retry pattern (Algo. <ref>, lines 15-16) can be designed differently, such as decreasing the retry interval if d^j_i,eval is re-added to d_i.FN and remains there for an extended period. The evaluation process of A^j_i,eval by d_i in round R_j+1, j ≥ 1 is defined in Algo.
<ref>. The output A^j_i, chosen represents the chosen aggregated model to perform the local update at the end of R_j (Algo. <ref>, lines 13, 17). Algo. <ref> also involves calculating the reputation for d^j_i,eval (Algo. <ref>, lines 9-10), which will be used to illustrate a favorite neighbor removal method in Sec. <ref>. In situations where 𝒪 > 1, handling the instances in Y, P_eval and P_fav requires special consideration to align the error calculations correctly (Algo. <ref>, lines 7-8). Table. <ref> illustrates the global sequence number of data points collected in Y and the global sequence number of the predicted values in P by a random device following Algorithm <ref> for ℐ = 12, 𝒪 = 1, 2, 3 during R_1, R_2 and R_3, with τ^1 = 24 and τ^2+ = 12. For example, when 𝒪 = 2 and during the 4th step (i.e., m = 4, Algo. <ref>, line 6) in R_2, according to Algo. <ref>, a device would first predict the 28th and 29th traffic values (Algo. <ref>, lines 10-12), denoted as the prediction instance ŷ in green. Then, it collects the 28th true data point (Algo. <ref>, lines 13-14), colored in gray, and extracts the 27th and 28th true data points from data^2 (Algo. <ref>, line 16) denoted as the truth value instance y in yellow. Note that m = 1, 2, ..., 12 in R_1 are omitted from the table as the devices have not started training or real-time prediction during those steps. From the table, it can be inferred that for 𝒪 = 2, the sequences of the data points in the instances of Y and P differ by one position. Specifically, the first prediction instance ŷ in P corresponds to the 2nd truth instance y in Y. Similarly, for 𝒪 = 3, the first ŷ in P corresponds to the 3rd y in Y. In general, the y in Y corresponding to a given ŷ in P is shifted by 𝒪 - 1 positions in terms of index. To ensure that d_i accommodates data collection shifts correctly while evaluating a candidate's local model L^j_i,eval in R_j+1, j ≥ 1, one approach could be to let A^j_i,eval infer the latest 𝒪 - 1 predictions in R_j using the historical data and prepend these 𝒪 - 1 ŷ's to P^j+1_i,eval, and also prepend the latest 𝒪 - 1 ŷ's of d_i to P^j+1_i. This way, the indexes of the sequences in Y^j+1_i, P^j+1_i,eval, and P^j+1_i will align. A simpler alternative is to drop the first 𝒪 - 1 y's in Y^j+1_i and drop the last 𝒪 - 1 ŷ's in both P^j+1_i,eval and P^j+1_i, as shown in pink in Table. <ref> for examples with 𝒪 = 2 and 𝒪 = 3. Using this method, we can maintain the evaluation of d^j_i,eval after d_i finishes collecting new data during R_j+1. Note that by using this approach, τ^2+ has to be at least 𝒪. §.§ Removal of a favorite neighbor With evolving traffic conditions, including non-recurrent events, seasonal variations, or route detours, neighboring devices can undergo substantial changes in traffic dynamics. Consequently, the local models derived from these devices may no longer contribute effectively or may even adversely impact the real-time predictions made by a given device that opts to include them in its favorite neighbors set (FN). Thus, the removal of poorly performing favorite neighbors from a device's FN becomes crucial in maintaining accurate real-time predictions. In this section, we introduce our design of the removal mechanism, which includes two parts: a removal trigger and a removal method. However, it should be noted that this design is presented as an example and alternative approaches can be explored when implementing NeighborFL in practice.
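For clarity, the per-round candidate evaluation decision described above, which also produces the reputation signal consumed by the removal mechanism introduced next, can be sketched as follows (a minimal Python sketch; the attribute and function names are hypothetical, and MSE is assumed as the error metric):

import numpy as np

def mse(preds, truths):
    """Mean squared error between aligned prediction and ground-truth arrays."""
    preds, truths = np.asarray(preds, dtype=float), np.asarray(truths, dtype=float)
    return float(np.mean((preds - truths) ** 2))

def evaluate_candidate(device, cand_id, P_eval, P_fav, Y, round_idx):
    """Decide whether the evaluated candidate becomes a favorite neighbor.

    device is assumed to expose: FN (set), rep_book, last_try_round, and
    retry_interval (dicts keyed by candidate id).
    P_eval: predictions made with the aggregated model that includes the
            candidate's local model; P_fav: predictions made without it;
    Y: the corresponding ground-truth values collected in this round.
    """
    e_eval = mse(P_eval, Y)   # error with the candidate's model included
    e_fav = mse(P_fav, Y)     # error with the device's current favorites only

    # Accumulate a subjective reputation: positive if the candidate helped.
    device.rep_book[cand_id] = device.rep_book.get(cand_id, 0.0) + (e_fav - e_eval)

    if e_eval < e_fav:
        # The candidate reduced the real-time error: promote it and keep
        # the aggregated model that includes its local model.
        device.FN.add(cand_id)
        return "use_A_eval"
    # Otherwise, postpone re-trying this candidate (retry control mechanism).
    device.last_try_round[cand_id] = round_idx
    device.retry_interval[cand_id] = device.retry_interval.get(cand_id, 0) + 1
    return "use_A_fav"

Here, returning "use_A_eval" corresponds to choosing A^j_i,eval for the local update at the end of the round, while "use_A_fav" corresponds to keeping A^j_i.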
§.§.§ Removal Trigger The removal trigger is responsible for determining when a device should initiate the removal action for one or more favorite neighbors (fn) from its favorite neighbors set (FN). In the current design, the trigger is activated when the prediction error E_fav of a device increases over a specified number of consecutive rounds, denoted as ν (Algo. <ref>, line 3). §.§.§ Removal Method We propose two distinct removal methods in this work: “remove by reputation" and “remove the last added". In the “remove by reputation" method, each device maintains a hashmap called rep_book (Algo. <ref>, lines 6, 14; Algo. <ref>, line 10; Algo. <ref>) to track the subjective reputation of its candidate favorite neighbors (cfn's). The reputation is determined by calculating the difference between E_fav and E_eval when a cfn is evaluated as d^j_i,eval (Algo. <ref>, line 9). If the difference Δ E = E_fav - E_eval is a positive value, indicating that the local model of d^j_i,eval (L^j_i,eval) contributes to reducing the real-time prediction error, d^j_i,eval is assigned a positive reputation. Conversely, if Δ E is negative, d^j_i,eval receives a negative reputation (Algo. <ref>, line 10). The reputation accumulates over the course of communication rounds when a cfn is being (re-)evaluated. When a device removes a fn, it eliminates the fn associated with the lowest reputation (Algo. <ref>, lines 6-8). Algo. <ref> illustrates the process of “remove by reputation" executed by d_i. A removed fn is denoted as ~fn. The “remove the last added" method allows a device to remove fn in the reverse order of their addition, effectively undoing the addition of a fn. As a device typically (but not always) adds cfn to FN from closest to farthest, the device usually removes fn from farthest to closest (Algo. <ref>: lines 4-5), following a similar heuristic as Algo. <ref>. In this method, a device keeps track of the order in which cfn's are added to its FN and always removes the most recently added fn; accordingly, the FN data structure utilizes a stack instead of a set. This method intuitively exhibits a more chronological adaptation to time-dependent changes compared to the “remove by reputation" method. Algo. <ref> outlines the process of the “remove the last added" method performed by d_i, assuming the removal of one fn at a time. Lastly, to avoid rapid re-evaluation and addition of a ~fn in subsequent communication rounds, a device must maintain a record of ~fn in its last_try_round and retry_interval hashmaps (Algo. <ref>, lines 5-6). The complete favorite neighbor removal steps are described in Algo. <ref>. §.§ NeighborFL complete operations The preceding subsections have divided NeighborFL into various modules. This section consolidates those operations and presents the overall top-to-bottom functioning of NeighborFL in Algo. <ref>. Along with forming the CFN hashmap, a device also initializes five variables (Algo. <ref>, lines 5-9). The initial model provided to the devices in 𝒟 is denoted by A^0, which can be randomly generated or pretrained. Different devices may have different initial models, and thus A^0_i denotes the initial model of d_i before it participates in NeighborFL (Algo. <ref>, line 8). The first three actions a device performs upon starting a new communication round are to make two predictions while collecting the incoming data points to update its local dataset (Algo. <ref>: lines 13-16).
It is important to underscore that these three functions operate concurrently in an asynchronous manner, governed by the shared time stamp variable m (Algo. <ref>, lines 3, 5, 8; Algo. <ref>, lines 3, 6, 18), as well as τ^j. The Aggregate() function takes a device's freshly produced local model and the local models from its Favorite Neighbors to perform model aggregation and generate A_fav (Algo. <ref>, line 21), typically using a method like Federated Averaging <cit.>. If an available candidate is selected, the same aggregation method should also be used to produce A_eval (Algo. <ref>, lines 24-25). Also note that the removal action should occur before a device selects an available candidate (Algo. <ref>, lines 22-23). This is done to prevent a scenario where a newly selected candidate has the lowest reputation and gets removed immediately after being chosen, particularly when a device adopts the “remove by reputation" method. § EXPERIMENTAL RESULTS §.§ Experimental Design §.§.§ Dataset We manually selected a study region from the widely available reference dataset, PEMS-BAY <cit.>, collected by the California Transportation Agency Performance Measurement System (PeMS). Our chosen area covers 26 devices out of the 325 total devices within the dataset. In Fig. <ref>, the locations of all 325 devices are shown in the lower left corner. The devices included in our experiments are enclosed within the red region, and their corresponding device IDs are labeled next to the devices' locations in the remainder of the figure, showing a zoomed-in map. Each device's ID includes an underscore followed by the letter “N" or “S" to indicate the traffic direction it monitors, either North or South. Without loss of generality, we use traffic speed as the modeling variable in this study. §.§.§ Deep Learning Model In our experiments, we employed a lightweight Long Short-Term Memory (LSTM) <cit.> recurrent neural network (RNN) because it can be efficiently trained on heterogeneous edge devices for FLTP applications <cit.>. Following the convention of federated learning, all devices utilized identical LSTM networks. These networks consisted of 12 input neurons (ℐ = 12), corresponding to one hour of traffic data with a 5-minute time resolution, and one output neuron (𝒪 = 1), enabling a one-step 5-minute look-ahead prediction, as discussed in Sec. <ref>. The LSTM network is constructed with two hidden layers of 128 hidden units each. The two LSTM layers are followed by a dropout layer with a rate of 0.2 and a fully connected output layer of size 𝒪 = 1. §.§.§ Baseline Methods To assess the performance of NeighborFL, we conducted a comparative analysis with three baseline methods: Centralized, NaiveFL, and r-NaiveFL. These baseline methods are also streaming methods and exhibit the following distinctions compared to NeighborFL: * The Centralized model of a device undergoes continuous real-time training without federation at the end of each communication round. From this perspective, the Central model trained by a device can be seen as updated solely by this specific device's incoming data throughout the entire process. This model was initially introduced in <cit.>, and its performance highlighted here serves to emphasize the benefits of adopting federated learning for traffic prediction. Technically, we can interpret the training process of a Central model as a device performing NeighborFL where the Favorite Neighbors set remains empty indefinitely.
* The NaiveFL model is also updated online in real-time and follows the conventional FL aggregation method. In every communication round, all devices share the same aggregated (global) model for prediction and learning. It serves as the primary model for comparison with NeighborFL, emphasizing the effectiveness of the aggregation heuristics employed in NeighborFL. Note that the conventional FL approach, represented on the left side of Fig. <ref>, illustrates the workflow of NaiveFL. Formally, we can think of the training process of a NaiveFL model in this context as a device implementing NeighborFL where its Favorite Neighbors set includes the entire cohort of devices in the study region, excluding itself, from the onset of the FL process. * The r-NaiveFL model can be viewed as a hybrid between NeighborFL and NaiveFL. Essentially, the training process of an r-NaiveFL model, when performed by a device, involves filling its Favorite Neighbors set with devices within its radius r (used to form its Candidate Favorite Neighbors set), excluding itself, at initialization. Notably, unlike NaiveFL, r-NaiveFL still enables each device to have a customized aggregated model for training and prediction in each communication round. As both r-NaiveFL and NeighborFL select devices from the same area for federation, with the only difference being that NeighborFL dynamically adjusts a device's Favorite Neighbors set, we present the performance of r-NaiveFL to further highlight the effectiveness of the heuristics employed in NeighborFL and the benefits of evolutionary Favorite Neighbors sets. For all four learning methods, including NeighborFL, we utilized the identical LSTM network architecture outlined in Sec. <ref>. The optimizer chosen for all methods was RMSProp <cit.>. In each communication round, all methods used a local epoch of 5 (ℰ = 5) and a batch size of 1. §.§.§ Number of Predictions τ In most traffic datasets, including PEMS-BAY <cit.> used in this study, the time resolution is set to 5 minutes. To align the communication round with hours, we set τ^2+ to 12 for R_2 and subsequent rounds. This indicates that a device needs to collect 12 data points and make 12 one-step look-ahead predictions in R_2+, corresponding to one hour of data in the PEMS-BAY dataset. However, because the LSTM models employed in this study have 12 input neurons and a single output neuron, they require a minimum of ℐ + 𝒪 = 13 data points to form the online training batch and begin learning. Consequently, we set τ^1 to 24 to ensure τ^1 ≥ ℐ + 𝒪 is satisfied in R_1, which is equivalent to 2 hours of data. This configuration allows devices to make 24 - 12 - 1 + 1 = 12 predictions in R_1 (Algo. <ref>: lines 5-8, 10-12), as calculated by Eq. <ref>, thereby unifying the number of predictions for all communication rounds to be 12. §.§.§ NeighborFL Setup For the NeighborFL method, we conducted experiments using four different favorite neighbor removal strategies to assess their impact on prediction performance. These strategies were compared while keeping other parameters consistent. The four strategies are summarized as follows (a code sketch of the two underlying removal methods is given after this list): * Remove-by-reputation (Algo. <ref>) with a removal trigger of ν = 1 (referred to as R1). * Remove-by-reputation (Algo. <ref>) with a removal trigger of ν = 3 (referred to as R3). * Remove-the-last-added (Algo. <ref>) with a removal trigger of ν = 1 (referred to as L1). * Remove-the-last-added (Algo. <ref>) with a removal trigger of ν = 3 (referred to as L3).
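The two removal methods underlying these four configurations, together with the trigger ν, can be sketched as follows (a minimal Python sketch; the attribute names are hypothetical and mirror the notation used in the removal description above):

def removal_triggered(recent_errors, nu):
    """Trigger removal if the device's prediction error E_fav increased over
    nu consecutive rounds (requires at least nu+1 recorded errors)."""
    if len(recent_errors) < nu + 1:
        return False
    tail = recent_errors[-(nu + 1):]
    return all(tail[k] < tail[k + 1] for k in range(nu))

def remove_by_reputation(device, round_idx):
    """Remove the favorite neighbor with the lowest accumulated reputation."""
    if not device.FN:
        return None
    worst = min(device.FN, key=lambda fn: device.rep_book.get(fn, 0.0))
    device.FN.remove(worst)
    # Record the removal so the removed neighbor is not re-evaluated immediately.
    device.last_try_round[worst] = round_idx
    device.retry_interval[worst] = device.retry_interval.get(worst, 0) + 1
    return worst

def remove_last_added(device, round_idx):
    """Remove the most recently added favorite neighbor (FN kept as a stack)."""
    if not device.FN_stack:
        return None
    last = device.FN_stack.pop()
    device.last_try_round[last] = round_idx
    device.retry_interval[last] = device.retry_interval.get(last, 0) + 1
    return last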
Furthermore, we used a fixed radius (r = 1 mile) for all devices to form their Candidate Favorite Neighbors Set (CFN). The number of candidate favorite neighbors (#CFN) for each device is listed in Table <ref>. For example, device 400760_N has 6 candidate favorite neighbors (401817_N, 400911_N, 401816_S, 400863_N, 409526_N, 409529_S) within a 1-mile radius, which is consistent with Fig. <ref>. Additionally, we set MaxDataSize = 72 for all devices, which corresponds to 6 hours of traffic data. The Mean Squared Error (MSE) was used to evaluate the real-time prediction error of a candidate neighbor's local model (Algo. <ref>, lines 9-10), as we want a device to skip a candidate neighbor's model producing large errors. Moreover, as pretrained models have previously been utilized to initialize the models used in FLTP applications <cit.>, we have also evaluated the effects of pretraining on the performance of NeighborFL and the three baseline methods. In the experiments involving pretrained models, we used the first week of traffic speed data from PEMS-BAY (ranging from 2017-01-01 00:00:00 to 2017-01-07 23:55:00, both timestamps included) as training data for each of the 26 sensors. We trained 26 models individually using an LSTM network with randomly initialized weights (represented as A^0) with the structure mentioned in Sec. <ref>, applying 5 local epochs and a batch size of 1. The resulting trained models are denoted as A^0_1, A^0_2, ..., A^0_26 and serve as the initial models for each corresponding device in real-time federated learning. For experiments involving non-pretrained models, each device uses A^0 as its initial model. Our simulations were performed on Google Colab using an NVIDIA Tesla T4 GPU with standard RAM. The files containing the experimental results mentioned in Sec. <ref> are available in our GitHub repository and can be reproduced using a random seed value of 40. §.§.§ Results We conducted a total of 250 communication rounds across 26 devices for all of our experiments, which involved both pretrained and non-pretrained initial models. Within each of these two running types, we employed three baseline methods and four different configurations of NeighborFL (R1, R3, L1, L3), resulting in a total of 14 running methods, as indicated in Table. <ref> and Table. <ref>. These 250 rounds utilized speed data for training from 2017-01-08 00:00:00 to 2017-01-18 10:55:00 (both timestamps inclusive), and the devices provided real-time predictions for speed values from 2017-01-08 01:00:00 to 2017-01-18 10:55:00 (both timestamps inclusive). To visually compare the real-time prediction accuracy among these methods, we have plotted the real-time prediction curves (i.e., values in P_fav, Algo. <ref>, line 13) alongside the ground truth curves (i.e., values in Y, Algo. <ref>, line 13) for the last 24 hours of the simulated federation (i.e., rounds 227 to 250). Due to space constraints, we presented the curves from the three baseline methods and the NeighborFL L1 method under the pretrained setting, as shown in Fig. <ref>. The criteria for choosing these four out of the 14 methods are based on Table. <ref>, which outlines the average device mean squared error (MSE) across all 26 devices for the last 24 rounds between the ground truth values and the corresponding methods. As an example of the calculation of the average device MSE values for any of these 14 methods shown in Table. <ref>, the Pretrain MSE value of NeighborFL L1 (i.e., 7.45 highlighted in pink in Table.
<ref>) is computed as follows: * First, we calculate the MSE value between the NeighborFL L1 predictions and the corresponding ground truth values from rounds 227 to 250 for each of the 26 devices. For a random device d_i, the MSE value is obtained by following Eq. <ref>, MSE_i^m∼n = 1/N ∑_k=1^N (ŷ^(k) - y^(k))^2, where P_i^m∼n = P_i^m ∪ P_i^m+1 ∪ ... ∪ P_i^n and Y_i^m∼n = Y_i^m ∪ Y_i^m+1 ∪ ... ∪ Y_i^n with m ≤ n, N = |P_i^m∼n| = |Y_i^m∼n| = (n - m + 1) × τ^2+, ŷ^(k) ∈ P_i^m∼n is the k-th prediction instance in P_i^m∼n, y^(k) ∈ Y_i^m∼n is the k-th ground truth instance in Y_i^m∼n, and MSE_i^m∼n is the MSE value of d_i from round m to n. In this case, m = 227, n = 250, τ^2+ = 12; P_i^m∼n corresponds to P_i^227∼250 of NeighborFL L1, and Y_i^m∼n represents the truth values Y_i^227∼250 extracted from the raw PEMS-BAY dataset <cit.> of d_i with the associated device ID for the corresponding time range. The resulting MSE values for each of the 26 devices are listed under the NeighborFL L1 column in Table. <ref>. * These individual device MSE values are then summed up and divided by the number of devices in 𝒟 to obtain the average device MSE value, as shown in Eq. <ref>, AVGMSE^m∼n = 1/|𝒟| ∑_i=1^|𝒟| MSE_i^m∼n. In this case, |𝒟| = 26, and the resulting value AVGMSE^227∼250 for NeighborFL L1 is 7.45, serving as an overall error evaluation metric over the last 24 hours. The same calculation procedure is applied to generate the other 13 average MSE values presented in Table <ref>. From Table. <ref>, we observe that NeighborFL L1 had the smallest Average Device MSE value (highlighted in pink) among all 14 methods. In the pretrained setting, all four NeighborFL configurations exhibited lower average MSE values compared to the three baseline methods, highlighting the superior real-time learning and prediction capabilities of NeighborFL. For instance, NeighborFL L1 reduces the MSE value by 16.9% compared to NaiveFL. A similar trend is observed in the non-pretrained setting, where NeighborFL L3 achieves the lowest average MSE value (highlighted in green) among all methods. Although the MSE value of r-NaiveFL (i.e., 10.88) is lower than that of NeighborFL L1 and NeighborFL R3, the difference is marginal, and all four NeighborFL configurations outperformed NaiveFL by at least 5%, while also significantly outperforming Central under the non-pretrained setting. Furthermore, by comparing the Average Device MSE values of the seven methods between the pretrained and non-pretrained settings, we observed that using pretrained models can significantly enhance the training and prediction performance for all seven configurations. Specifically, for NeighborFL, pretraining the initial models with one week of historical data can reduce the prediction error by at least 30%, as indicated in the table. In Fig. <ref>, the curves representing the speed predictions produced by the Central, NaiveFL, and r-NaiveFL methods are shown in orange, green, and gray, respectively. The curve of the NeighborFL L1 method is depicted in red, while the ground truth curve is displayed in blue. The x-axis represents the communication round, with each round consisting of 12 steps of speed measurements between any two round indexes, since we chose 𝒪 = 1 and τ^2+ = 12. By closely examining the curves of the representative device 400760_N, we can observe that the NeighborFL method consistently achieves better overall predictions compared to the three baseline methods.
The red curve is generally closer to the blue curve compared to the other three curves, which is particularly evident during rounds 246 to 250. Moreover, NeighborFL demonstrates enhanced adaptability to sudden speed changes detected by the device, as evidenced by the 12 speed steps in the round 246 to 247 segment. This trend is also observed in other devices such as 401817_N, 400911_N, 409529_S, and 400922_S. To further assess the performance of NeighborFL at an individual device level, we provide the MSE values for each of the 26 devices over the last 24 rounds, categorized by the different methods, in Table <ref>. In each device's row, the lowest MSE value is highlighted in green. By counting the number of green highlights under the NeighborFL L1 column, we observe that out of the 26 devices, NeighborFL outperforms all three baseline methods in 18 devices. For the remaining 8 devices, NeighborFL still outperforms the NaiveFL method for 5 of them (i.e., 400394_S, 400045_N, 400001_N, 401560_N, 400109_S). Overall, NeighborFL surpasses NaiveFL in 88% of the devices within the entire cohort during the last 24 hours of the simulations. Another way to visualize the performance of NeighborFL compared to the three baselines is to examine the overall trend of MSE throughout the entire simulation (i.e., from communication round 1 to 250). In Fig. <ref>, we present how the MSE values of the predictions generated by the three baselines and NeighborFL L1 change in relation to the ground truth for each of the 26 devices as the communication round progresses. The color scheme for the four methods follows that of Fig. <ref>. Unlike Fig. <ref>, the y-axis represents the MSE value, and the x-axis represents round ranges instead of individual round indexes. Each round range represents a group of round indexes, such as Round Range 24-48 indicating communication rounds 24, 25, 26, ..., 48 (inclusive). We group the error values into round ranges to fit the curves within the small subfigures while preserving the true trends. Each round range consists of 24 communication rounds, except for the first round range which has 23 rounds (due to the absence of predictions in the initial hour of the simulation), and the last round range which has 10 rounds. The calculation of these MSE values follows the same formula as Eq. <ref>. Each plot also includes three percentage values displayed at the top. These percentages indicate the number of times NeighborFL L1 achieves a lower MSE value than the corresponding baseline method across the 11 smoothed MSE values within the respective round ranges on the x-axis, for a particular device. For example, for device 409529_S, NeighborFL L1 produces MSE values that are lower than Central 6 times and lower than both NaiveFL and r-NaiveFL 8 times, resulting in percentages of 6/11=54.55% and 8/11=72.73%, respectively. When a percentage value exceeds 50%, the corresponding method and the percentage are highlighted in red, indicating that NeighborFL L1 outperforms the particular baseline method for the majority of the simulation. Notably, for devices where NeighborFL L1 outperforms NaiveFL more than 50% of the time, a star is displayed in the upper left corner of the figure. The device 400922_S has been selected as the representative for this analysis. By examining the error curves of 400922_S, we can clearly observe that the error of NeighborFL L1 consistently remains lower than that of the three baseline methods, as indicated by the highlighted red values.
Among the remaining 25 devices, we have seen that NeighborFL L1 outperforms NaiveFL for 22 of them according to the stars, further highlighting the effectiveness of NeighborFL compared to NaiveFL. While it is noticeable that the error of the Central method exhibits greater fluctuations in certain devices such as 401816_S, 400911_N, 409529_S, 402364_N, 400109_S, we also observed that for 10 out of 26 devices the Central method outperforms NeighborFL L1 more than 50% of the time. This is particularly evident for devices such as 402365_S, 400394_S, 400045_N, and 400965_N. In fact, the Central method produces the best results for these four devices compared to NaiveFL, r-NaiveFL, and NeighborFL. Our preliminary hypothesis is that 402365_S, 400394_S, and 400045_N are all located on the same small segment of highway. Their location in the network, as shown in Fig. <ref>, is a critical junction connecting two major highways, likely experiencing severe bottleneck congestion during rush periods. The prediction plots for 402365_S and 400394_S in Fig. <ref> show a severe drop in speed around the round 235 period, which is inconsistent with most other sensors. In the future, it would be desirable for the enhanced NeighborFL to autonomously recognize such scenarios and avoid performing any type of federation to at least match the performance of the Central method. Potential solutions for addressing this issue will be discussed in Section <ref>. Table. <ref> presents the Average Device MSE values for the 26 devices across the 250 rounds, categorized by the prediction methods. The calculation of those average MSE values follows the same procedure as the values shown in Table. <ref>. It is worth noting that the Central method generally exhibits much larger average MSE values compared to the FL methods. Additionally, we observe that, under both pretrained and non-pretrained settings, the average MSE values for all four configurations of NeighborFL (highlighted in green) are lower than those of the baseline methods. Since the average MSE values among the four NeighborFL configurations are very close, and different removal methods may be developed to adapt to specific traffic situations, we have omitted the comparison analysis among the four removal methods of NeighborFL. § DISCUSSION AND FUTURE WORK In this section, we explore potential solutions for NeighborFL to outperform the Central method in certain situations and enhance the overall performance of NeighborFL as a whole. Additionally, we present two potential future research directions to extend NeighborFL. §.§ Improving the performance of NeighborFL As shown in Fig. <ref>, there are 10 devices where the Central method outperforms all three types of federated methods most of the time, including our best run of NeighborFL. However, it is important to note that NeighborFL L1 still outperforms NaiveFL in 9 out of these 10 devices. Despite the fact that the Central method exhibits the largest Average Device MSE throughout the entire execution, as shown in Table. <ref>, we aim to address this issue in future work and elevate the performance of NeighborFL on these devices to be on par with Central. One primary drawback of NeighborFL is the lack of direct consideration of local traffic trends within the candidate radius r when evaluating or removing candidates, instead relying on one-time error evaluation and time- or reputation-based removal. Ignoring the sensor's purpose and the properties of the observed road segment may degrade the model performance for some clients.
One potential solution is to reset a device's favorite neighbor set (FN) entirely if its real-time prediction error continuously increases, allowing NeighborFL to operate similarly to the Central method for an extended period and recalibrate by re-evaluating its candidate favorite neighbors from scratch. Another advanced approach is to incorporate local traffic trend analysis and road segment properties into the evaluation criteria for candidate selection. In conclusion, novel neighbor evaluation heuristics and/or favorite neighbor removal mechanisms will be key design choices for enhancing the performance of NeighborFL. §.§ Predicting the impacts of Non-Recurrent Events Intelligent Transportation Systems often involve predicting the impacts of non-recurrent events such as bad weather conditions or sports events on traffic congestion and delays <cit.>. Fig. <ref> demonstrates that NeighborFL exhibits better adaptability to sudden traffic dynamic changes compared to the selected baseline methods. Moreover, with its support for multi-horizon prediction capabilities, we believe that NeighborFL has the potential to predict the impacts of non-recurrent events. By training NeighborFL on traffic datasets that include non-recurrent events, the system may rapidly adjust a device's FN to include only those devices within its radius that would be affected by these changes. This adaptation can lead to more accurate short-term traffic predictions precisely when non-recurrent events occur. Thus, NeighborFL shows promise in predicting the effects of such events and improving real-time traffic management. §.§ Detecting malfunctioning detectors As NeighborFL enables devices to remove certain favorite neighbors when real-time prediction errors increase, it opens up the potential for detecting malfunctioning detectors. In practice, a group of devices within a specific area could share information about their removed favorite neighbors. If a particular device consistently appears in most removal reports, there is a high probability that it is a malfunctioning detector. In such cases, the devices can report this finding to the traffic management center, prompting a repair order. Moreover, nearby devices can also be proactive by temporarily excluding the malfunctioning detector from their favorite neighbors set for an extended period. By incorporating such mechanisms, NeighborFL contributes to improved traffic management and enhanced overall system reliability. § CONCLUSION The paper presents NeighborFL, a decentralized Federated Learning-based approach for real-time traffic prediction, leveraging spatial-temporal relationships among detectors. The methodology enables devices to dynamically allocate personalized groups for model aggregation based on real-time prediction errors, mitigating the non-IID issues of traffic data collected in different locations and enhancing prediction performance by up to 16.9% compared to the naive FL setting. Future research will explore additional neighbor evaluation heuristics and favorite neighbor removal mechanisms to further enhance prediction accuracy, predict impacts of non-recurrent events, and detect malfunctioning detectors. NeighborFL shows potential for efficient and adaptable Intelligent Transportation Systems, benefiting commuters and traffic management agencies. § BIOGRAPHIES Hang Chen received his B.S.
degree in Computer Science from the University of Delaware, USA, in 2015, where he is currently pursuing his Ph.D. degree. His research interests span distributed ledger technology, federated learning consensus design, deep networks cross-validation, and intelligent transportation systems. Collin Meese (Student Member, IEEE) received his B.S. degree in computer science from the University of Delaware in 2020. He is currently a Ph.D. student at the University of Delaware. His research interests include blockchain, distributed machine learning, intelligent transportation systems, and connected and autonomous vehicles. He is an NSF GRFP Fellow and a student member of IEEE. Mark Nejad (Senior Member, IEEE) is an Associate Professor at the University of Delaware. His research interests include network optimization, distributed systems, blockchain, game theory, and intelligent transportation systems. He has published more than fifty peer-reviewed papers and received several publication awards, including the 2016 IISE Pritsker Best Doctoral Dissertation award and the 2019 CAVS Best Paper award from the IEEE VTS. His research is funded by the National Science Foundation and the Department of Transportation. He is a member of the IEEE and INFORMS. Chien-Chung Shen (Member, IEEE) received the B.S. and M.S. degrees from National Chiao Tung University, Taiwan, and the Ph.D. degree from UCLA, all in computer science. He was a Research Scientist at Bellcore Applied Research, working on the control and management of broadband networks. He is currently a Professor with the Department of Computer and Information Sciences, University of Delaware. His research interests include blockchain, Wi-Fi, SDN and NFV, ad hoc and sensor networks, dynamic spectrum management, cybersecurity, distributed computing, and simulation. He was a recipient of the NSF CAREER Award and a member of the ACM.
http://arxiv.org/abs/2407.12703v2
20240717162537
Subgraph-Aware Training of Text-based Methods for Knowledge Graph Completion
[ "Youmin Ko", "Hyemin Yang", "Taeuk Kim", "Hyunjoon Kim" ]
cs.CL
[ "cs.CL" ]
Subgraph-Aware Training of Text-based Methods for Knowledge Graph Completion Youmin Ko^*, Hyemin Yang^*, Taeuk Kim, Hyunjoon Kim^† Hanyang University {youminkk021, hmym7308, kimtaeuk, hyunjoonkim}@hanyang.ac.kr Received 04 December 2023 / Accepted 15 July 2024 ============================================================================================================================================================= [1]Equal contribution. [2]Corresponding author. § ABSTRACT Fine-tuning pre-trained language models (PLMs) has recently shown a potential to improve knowledge graph completion (KGC). However, most PLM-based methods encode only textual information, neglecting various topological structures of knowledge graphs (KGs). In this paper, we empirically validate the significant relations between the structural properties of KGs and the performance of the PLM-based methods. To leverage the structural knowledge, we propose a Subgraph-Aware Training framework for KGC (SATKGC) that combines (i) subgraph-aware mini-batching to encourage hard negative sampling, and (ii) a new contrastive learning method to focus more on harder entities and harder negative triples in terms of the structural properties. To the best of our knowledge, this is the first study to comprehensively incorporate the structural inductive bias of the subgraphs into fine-tuning PLMs. Extensive experiments on four KGC benchmarks demonstrate the superiority of SATKGC. Our code is available. § INTRODUCTION Factual sentences, e.g., Leonardo da Vinci painted Mona Lisa, can be represented as entities, and relations between the entities. Knowledge graphs treat the entities (e.g., Leonardo da Vinci and Mona Lisa) as nodes, and the relations (e.g., painted) as edges. Each edge and its endpoints are denoted as a triple (h, r, t) where h, r and t are a head entity, a relation, and a tail entity respectively. Since KGs can represent complex relations between entities, they serve as key components for knowledge-intensive applications <cit.>. Despite their applicability, real-world KGs miss factual relations, which can be inferred from existing facts in the KGs. Hence, the task of knowledge graph completion (KGC) has become an active research topic <cit.>. Given an incomplete triple (h,r,?), this task is to predict the correct tail t. A true triple (h,r,t) in KG and a false triple (h,r,t̂) which does not exist in KG are called positive and negative, respectively. A negative triple difficult for a KGC method to distinguish from its corresponding positive triple is regarded as a hard negative triple. Existing KGC methods are categorized into two approaches. An embedding-based approach learns embeddings of entities in continuous vector spaces, but ignores contextualized text information in KGs, thus being inapplicable to entities and relations unseen in training <cit.>. A text-based approach, based on pretrained language models (PLMs), learns textual representations of KGs, but suffers from a lack of structural knowledge in KGs <cit.>. Meanwhile, contrastive learning has become a key component of representation learning <cit.>, but an important aspect of contrastive learning, i.e., the effect of hard negatives, has so far been underexplored in KGC. In this paper, we empirically validate significant relationships between the structural properties of KGs and the performance of the PLM-based KGC methods. 
To address the aforementioned longstanding limitations of the two KGC approaches by utilizing the above relationships, we hypothesize that incorporating the structural inductive bias of KGs into both sampling hard negatives and fine-tuning PLMs leads to a major breakthrough in learning comprehensive representations of KGs. To this end, we propose a Subgraph-Aware Training framework for KGC (SATKGC), which (i) samples subgraphs of a KG to treat triples of each subgraph as a mini-batch to encourage hard negative sampling, and (ii) fine-tunes a PLM via contrastive learning which focuses more on structurally harder entities and structurally harder negative triples induced by topological bias in the KG. To sum up, we make four contributions. * We provide key insights that the topological structure of KGs is closely related to the performance of PLM-based KGC methods. * We propose a subgraph-aware training strategy for PLM-based KGC methods, which is effective in sampling in-batch hard negatives. * We propose a novel contrastive learning method that assigns different importance to positive and negative triples based on the structural properties of KGs. * We conduct extensive experiments on four KGC benchmarks to demonstrate the superiority of SATKGC over existing KGC methods. § RELATED WORK An embedding-based approach maps complex and structured knowledge into low-dimensional spaces. This approach computes the plausibility of a triple using translational scoring functions on the embeddings of the triple’s head, relation, and tail <cit.>, e.g., h+r≈ t, or semantic matching functions which match latent semantics of entities and relations <cit.>. This approach exploits the spatial relations of the embeddings, but cannot make use of texts in KGs, i.e., the source of semantic relations. In contrast, a text-based approach learns contextualized representations of the textual contents (e.g., names and descriptions) of entities and relations by leveraging PLMs <cit.>. More recently, there has been a significant increase in the adoption of large language models (LLMs) in KGC <cit.>. However, these PLM-based models are often oblivious to structural inductive bias in KGs. A few attempts have been made to utilize the above two approaches at once. StAR <cit.> proposes an ensemble model incorporating an output of a Transformer encoder <cit.> with a triple score produced by RotatE <cit.>. CSProm-KG <cit.> trains KG embeddings through the soft prompt for a PLM. Nevertheless, the integration of structural and textual information in a KG in training has not yet been fully realized. Contrastive learning, shown to be effective in various fields <cit.>, has recently emerged as a promising approach in the context of KGC <cit.>. Despite being a critical aspect of contrastive learning, the effect of hard negatives has been overlooked in KGC. Random walk with restart (RWR) <cit.> and its extension, biased random walk with restart (BRWR), have been employed in various domains such as node representation learning <cit.> and graph traversals <cit.>. In BRWR, a random walker performs random walks in a graph from the source node. For each iteration, the walker moves from the current node u to either (i) the source with a probability of p_r, or (ii) one of the neighbors of u with a probability of 1-p_r, where p_r is a hyperparameter. In case (ii), the probability of selecting one of the neighbors is decided by a domain-specific probability distribution, whereas one is selected uniformly at random in RWR.
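As a concrete illustration, one step of a generic BRWR walk can be sketched as follows (a minimal Python sketch; the neighbor-weighting function is kept generic here, and SATKGC later instantiates it with an inverse degree distribution):

import random

def brwr_step(graph, current, source, p_r, neighbor_weight):
    """One step of a biased random walk with restart (BRWR).

    graph: dict mapping a node to the list of its neighbors.
    With probability p_r the walker restarts at the source node; otherwise it
    moves to one neighbor of the current node, drawn according to the
    (unnormalized) weights returned by neighbor_weight.
    """
    if random.random() < p_r:
        return source  # restart
    neighbors = graph[current]
    if not neighbors:  # dead end: fall back to a restart
        return source
    weights = [neighbor_weight(graph, v) for v in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

# Example neighbor weightings: uniform recovers plain RWR, while the inverse
# degree weighting favors low-degree neighbors.
uniform_weight = lambda graph, v: 1.0
inverse_degree_weight = lambda graph, v: 1.0 / max(len(graph[v]), 1)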
To our knowledge, we are the first to extract a subgraph of a KG via BRWR and utilize the subgraph as a mini-batch during training. § MOTIVATION To demonstrate the limitations of text-based methods exhibiting competitive performance such as SimKGC <cit.> and StAR <cit.>, we investigate the characteristics of false positive (FP) triples which are ranked by these models higher than the corresponding true triples, on two widely-used datasets, WN18RR and FB15k-237. Our analysis draws two conclusions. First, the closer the tail and head of a false triple are to each other in the KG, the more likely the false triple is to be ranked higher than the corresponding true triple. Figure <ref> illustrates the distribution of distance, i.e., the length of the shortest path, between the head and tail of an FP triple, where the y-axis represents the FP ratio[(the number of FPs with a specific head-to-tail distance) / (the number of pairs of entities in the KG with the same distance).] for each distance. For StAR and SimKGC, the FP ratio dramatically grows as the distance decreases (see green and red bars in Figure <ref>). These findings highlight the importance of considering the proximity of two entities of a negative triple in KGs for text-based methods to distinguish a positive triple from the negative. Second, we discover that the higher the degree of the head in a false triple is, the more likely the false triple is to be ranked higher than the corresponding true triple. Figure <ref> illustrates the distribution of the degree of heads of FP triples. The FPs are sorted in ascending order of the degrees, and then they are divided into five groups such that each group contains an equal number of distinct degrees. The y-axis represents the FP ratio[(the average number of FPs whose head's degree falls into each group) / (the number of entities in the KG whose degree falls into each group)] in each degree group. The FP ratio for StAR and SimKGC increases as the degree of the head grows (see green and red bars in Figure <ref>). This indicates that the existing text-based methods have difficulty in predicting correct tails for missing triples with high-degree heads. Hence, the degree can be taken into account to enhance the performance of text-based methods. Our proposed framework (dubbed SATKGC) tackles the above two phenomena[The trends in Figures <ref> and <ref> are also confirmed on Wikidata5M and NELL-995.], thereby significantly reducing the FPs for all distances and all degree groups compared to the existing methods (see blue bars in Figures <ref> and <ref>). § METHOD We propose a novel training framework for KGC that captures the structural inductive bias of the KG, based on the aforementioned observations. Figure <ref> illustrates the overview of our framework. First, for every triple, a subgraph is extracted around that triple from the KG; this is performed before training (Section <ref>). During training, we keep track of the number of visits for every triple. For each iteration, a subgraph is selected based on that number, and then all forward and inverse triples in the subgraph are fetched as a mini-batch B to a model (Section <ref>). We adopt the bi-encoder architecture <cit.> as a backbone, which uses pre-trained BERT <cit.> as encoders. Specifically, Encoder_hr and Encoder_t take the text, i.e., name and description, of (h,r) and t as input, and produce their embeddings x_hr and x_t, respectively.
We then calculate the cosine similarity between x_hr and x_t for every (h,r) and t in the mini-batch, and perform contrastive learning based on two structure-aware factors (Section <ref>). The model inference is described in Appendix <ref>. §.§ Random-Walk Based Subgraph Sampling First, we aim to extract subgraphs from the KG so that all the triples in an extracted subgraph can be treated as a mini-batch for training. For each triple in the KG, we perform BRWR starting from that triple, called a center triple, and the triples visited by BRWR compose an extracted subgraph as follows: (i) for each center triple, either head h or tail t of the center triple is selected as the start entity s based on an inverse degree distribution of h and t, i.e., |N(x)|^-1/(|N(h)|^-1+|N(t)|^-1), where x∈{h,t} and N(x) denotes the set of x's neighbors; (ii) next, we perform BRWR from s until we sample M triples, where M is a predefined maximum number (e.g., 10,000). For each iteration in BRWR, a random walker moves from the current node to either s with a probability of p_r or one of the neighbors of the current node with a probability of 1-p_r. We define the probability of selecting one of u's neighbors v∈ N(u) as p_v=|N(v)|^-1/∑_v'∈ N(u) |N(v')|^-1, which is a normalized inverse degree distribution over the neighbors. Figures <ref> and <ref> show a running example of step (i) and an iteration of step (ii). Performed before the model training, this subgraph sampling algorithm can extract many distinct entities due to the inverse degree distribution, which will be validated in Section <ref>. §.§ Subgraph as a Mini-Batch In this subsection, we describe our training framework dubbed Subgraph as a Mini-batch (SaaM). We count the number of visits for all triples in the training set T throughout the training process. For every iteration, we select the subgraph whose center triple is the least frequently visited, to prioritize unpopular and peripheral triples. The rationale behind this selection will be elaborated in Section <ref>. Next, we randomly select |B|/2 triples from the subgraph, and feed to the model a mini-batch B of these selected triples (h,r,t) and their inverse triples (t, r^-1, h). For every positive triple (h, r, t)∈ B, we obtain negative triples (h, r, t̂) with t replaced by the |B|-1 tails t̂ of the other triples in B. As per our observation in Figure <ref>, these negative triples are likely to be hard negatives, which will facilitate contrastive learning. To introduce a concept corresponding to an epoch in our training framework, we call iterating the above process |T|/|B| times, i.e., selecting and feeding |T|/|B| subgraphs, a “phase”, as opposed to an epoch, which visits every triple exactly once. §.§ Subgraph-Aware Contrastive Learning For effective contrastive learning, we propose to incorporate two structure-aware factors into the InfoNCE loss with additive margin <cit.> over a mini-batch: (a) the structural hardness of negatives for each positive triple, and (b) the structural hardness of head entities in the mini-batch. These two factors are obtained directly from the topological structure of the subgraph, while incurring only minimal computational overhead. §.§.§ Structural Hardness of Negative Triples Entities close to each other are more likely to be related than entities far away from each other. Although text-based KGC methods capture semantic relations within the text of triples, they overlook the proximity between the two entities of a negative triple in the KG. 
To tackle this problem, we propose a novel InfoNCE loss that penalizes structurally hard negative triples. For each positive triple (h,r,t) in a mini-batch B of SaaM, the loss function L_(h,r,t) is defined as L_(h,r,t) = -log [ e^((ϕ(h,r,t) - γ)/τ) / ( e^((ϕ(h,r,t) - γ)/τ) + ∑_i=1^|B|-1 e^((ϕ(h,r,t_i) + βω_ht_i)/τ) ) ] where γ is an additive margin, a temperature parameter τ adjusts the importance of negatives, the structural hardness ω_ht_i stands for how hard a negative triple (h,r,t_i) is in terms of the structural relation between h and t_i in the KG, and β is a trainable parameter that adjusts the relative importance of ω_ht_i. Specifically, we define ω_ht_i as the reciprocal of the distance (i.e., the length of the shortest path) between h and t_i, to impose a larger weight ω_ht_i on a negative triple with a shorter distance between h and t_i, which serves as a harder negative triple. Since computing the exact distance between every head and every tail in B may take considerable time, in our implementation we calculate the approximate distance between h and t_i, i.e., the multiplication of two distances: (d1) the distance between h and the head h_c of the center triple of B, and (d2) the distance between t and h_c. For this, the distance between h_c and every entity in B is pre-computed before training, and the multiplication of the two distances is performed in parallel during training, which requires a minimal computational overhead.[We conducted experiments using the exact distance and different approximate distances, e.g., the sum of (d1) and (d2), but the performance gap between all the methods is very marginal.] §.§.§ Structural Hardness of Entities For many triples with the same head in the KG, varying only the relation r in (h,r,?) may lead to different correct tails with various semantic contexts. Text-based KGC methods may find it difficult to predict the correct tail for these triples, as their head-relation encoders may generate less diverse embeddings for (h,r) due to the much shorter text of relation r than of entity h. To encourage the text-based method to adapt more sensitively to varying relations for the many triples with an identical head, we propose a novel loss weighting strategy that penalizes structurally hard head entities. For a mini-batch B in SaaM, the mini-batch loss L_B is defined as L_B = ∑_(h,r,t)∈ B ψ_h L_(h,r,t) where the structural hardness ψ_h of head h indicates how difficult (h,r,t) is in terms of the structural property of h in the KG. Specifically, we define ψ_h as log(d_h+1), where d_h represents the degree of h, and adding one to d_h prevents ψ_h from becoming zero when d_h=1. To sum up, ψ_h ensures that triples whose heads have a larger degree contribute more significantly to L_B. § EXPERIMENTS §.§ Experimental Setup For evaluation we adopt the widely-used KG datasets WN18RR, FB15k-237, NELL-995 and Wikidata5M. Table <ref> shows their statistics, and more details are provided in Appendix <ref>. For every incomplete triple in the test set, we compute the mean reciprocal rank (MRR) and Hits@k where k∈{1,3,10} as evaluation metrics, based on the rank of the correct entity among all the entities in the KG. We use the mean of the forward and backward prediction results as the final performance measure. Implementation details are described in Appendix <ref>. §.§ Main Results We compare SATKGC with existing embedding-based and text-based approaches. 
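As a concrete reference for the structure-weighted loss defined above, the following is a minimal PyTorch-style sketch; the tensor layout, the way scores and hardness weights are provided, and the value of τ are illustrative assumptions (γ=0.02 follows the implementation details), not the released code.

import torch
import torch.nn.functional as F

def satkgc_batch_loss(scores, omega, psi, beta, gamma=0.02, tau=0.05):
    # scores: (B, B) cosine similarities phi between every (h, r) and every tail
    #         in the mini-batch; the diagonal holds the positive triples' scores
    # omega:  (B, B) reciprocal approximate head-tail distances for negatives
    # psi:    (B,)   log(degree + 1) of each head
    # beta:   trainable scalar balancing omega
    B = scores.size(0)
    idx = torch.arange(B, device=scores.device)
    eye = torch.eye(B, device=scores.device, dtype=scores.dtype)
    # positives get the additive margin, negatives get the structural-hardness bonus
    logits = (scores - gamma * eye + beta * omega * (1.0 - eye)) / tau
    per_triple = F.cross_entropy(logits, idx, reduction="none")   # Eq. for L_(h,r,t)
    return (psi * per_triple).sum()                               # degree-weighted L_B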
Table <ref> shows the results on WN18RR, NELL-995, and FB15k-237, and Table <ref> shows the results on Wikidata5M-Trans and Wikidata5M-Ind.[The results of GHN on NELL-995 are unavailable due to a lack of access to the requisite codes.] SATKGC denotes the bi-encoder architectures trained by our learning framework in Figure <ref>. Note that Table <ref> presents the performance of the latest baselines, and the complete results for all baselines compared with ours are found in Appendix <ref>. SATKGC consistently outperforms all the existing methods on all the datasets except MRR on NELL-995. SATKGC demonstrates significantly higher MRR and Hits@1 than other baselines especially on WN18RR and FB15k-237. The small performance improvement of our method on NELL-995 stems from the limited textual context in this dataset, unlike other datasets. Nevertheless, SATKGC is the runner-up lagged only 0.001 behind CompoundE for MRR on NELL-995, while outperforming all baselines on the remaining metrics. As shown in Table <ref>, SATKGC demonstrates its applicability to large-scale KGs, and achieves strong performance in both inductive and transductive settings.[Baselines listed in Table <ref> but not in Table <ref> could not be evaluated on Wikidata5M due to out-of-memory for StAR and CSProm-KG, or unreasonably large training time, i.e., more than 100 hours expected, for the remaining baselines.] SATKGC significantly outperforms the baselines on Wikidata5M-Ind, reaching MRR of 0.763 and Hits@1 of 0.659.[Wikidata5M-Ind shows better performance than Wikidata5M-Trans, because a model ranks 7,475 entities in the test set for Wikidata5M-Ind while ranking about 4.6 million entities for Wikidata5M-Trans.] In the transductive setting, performance degrades in the order of WN18RR, Wikidata5M-Trans, and FB15k-237, showing that a higher average degree of entities tends to negatively affect performance. To compare our framework employing BERTs with a LLM-based model, we evaluate the KGC performance of KoPA <cit.>, which adopts Alpaca <cit.> fine-tuned with LoRA <cit.> as its backbone, on the FB15k-237N dataset <cit.>. Since KoPA cannot perform inference for all queries within a reasonable time, i.e., 111 hours expected, we rank the correct tail among 1,000 randomly selected entities for both KoPA and SATKGC. As shown in Table 4, SATKGC outperforms KoPA on all metrics, demonstrating that LLMs do not necessarily produce superior results on KGC. §.§ Ablation study To show the contributions of applying SaaM and adding the two weighting factors to InfoNCE loss, we compare results of four different settings including SATKGC in Table <ref>: SATKGC-SPW-DW denotes applying only SaaM and using original InfoNCE loss, SATKGC-SPW represents applying SaaM and the degree factor ψ_h, and SATKGC-DW represents applying SaaM and the distance factor ω_ht_i. SATKGC-SPW-DW already shows higher Hits@1 than other baselines on WN18RR, which highlights that SaaM alone can lead to performance improvement. The efficacy of SaaM is further evidenced in Appendix <ref>. Between SATKGC-SPW and SATKGC-DW, SATKGC-SPW achieves higher performance, which indicates that the degree factor contributes more than the distance factor. §.§ Performance Across Encoders To investigate the impact of the encoder architecture and the number of model parameters, we conduct experiments replacing BERT-base in SATKGC with BERT-large, DeBERTa-base, and DeBERTa-large <cit.>. Table <ref> presents the results. 
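Since the framework is encoder-agnostic, swapping the backbone amounts to changing the pretrained checkpoint handed to the Hugging Face loaders; a minimal illustration is given below (the two-encoder setup mirrors the bi-encoder backbone, and the checkpoint names are the public Hugging Face identifiers).

from transformers import AutoModel, AutoTokenizer

# interchangeable encoder checkpoints: "bert-base-uncased", "bert-large-uncased",
# "microsoft/deberta-base", "microsoft/deberta-large"
checkpoint = "microsoft/deberta-large"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
encoder_hr = AutoModel.from_pretrained(checkpoint)   # encodes the (head, relation) text
encoder_t = AutoModel.from_pretrained(checkpoint)    # encodes the tail text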
SATKGC is highly compatible with different encoders, showing the competitive performance. DeBERTa-large fine-tuned by SATKGC achieves the best performance on WN18RR. In addition, an increase in the number of model parameters may not necessarily result in enhanced performance on KGC, e.g., BERT-large on WN18RR, and BERT-large and DeBERTa-large on NELL-995 underperform the smaller encoders. §.§ Comparing Sampling Methods We investigate how model performance varies depending on the probability distribution p_v used for neighbor selection in Section <ref>.[Recall that in our BRWR algorithm, a random walker selects one of the neighbors v of a current node based on the inverse degree distribution p_v.] We compare the performance of SATKGC using p_v in subgraph sampling with two variants, one with p_v replaced by the uniform distribution (dubbed RWR) and the other with p_v replaced by the degree proportional distribution (dubbed BRWR_P). Table <ref> shows the results. The three methods mostly outperform existing KGC models in Hits@1, with BRWR performing best. BRWR_P performs the worst, likely due to many duplicate entities in the extracted subgraphs sampled from the degree-proportional distribution. We also employ a Markov chain Monte Carlo (MCMC) based subgraph sampling method <cit.>, referred to as MCMC (see details of MCMC in Appendix <ref>). Note that in Table <ref>, MCMC outperforms other methods in terms of Hits@1 on WN18RR. § ANALYSIS §.§ Analysis on Negative Triples Figure <ref> shows how the cosine similarity distribution of in-batch negative triples varies depending on the epoch of SimKGC, and the phase of SATKGC for FB15k-237. SATKGC encounters consistently more hard negatives with scores from 0.2 to 1.0 than SimKGC, though the majority of the scores range from -0.2 to 0.2 by the end of training for both methods.[This trend is also observed in WN18RR.] We speculate that SATKGC ends up with distinguishing positives from the hard negatives sampled from the subgraphs of a KG, as opposed to SimKGC which employs randomly-sampled easy negatives. The effectiveness using only negatives from SaaM in enhancing performance is demonstrated in Appendix <ref>. Based on our analysis, only 2.83% and 4.42% of the true triples ranked within the top 10 by SimKGC drop out of the top 10 for SATKGC on WN18RR and FB15k-237, respectively. In contrast, 34.87% and 13.03% of the true triples dropping out of the top 10 for SimKGC are ranked within the top 10 by SATKGC. This indicates that SATKGC effectively samples hard negatives while reducing false negatives. §.§ Skewness of Occurrences Figures <ref> and <ref> show the number of visits of every triple and every entity, respectively, in the training process for RANDOM and the SaaM variants: RWR, BRWR_P, and BRWR. RANDOM, adopted in all the baselines, denotes selecting a mini-batch of triples at random without replacement. Triples and entities are sorted by descending frequency. Figure <ref> demonstrates that RANDOM exhibits a uniform distribution, while RWR, BRWR_P, and BRWR display varying degrees of skewness, with BRWR being the most skewed and BRWR_P the least. Figure <ref> illustrates that BRWR is the least skewed whereas RANDOM shows the most skewed distribution. As a result, a larger skewness in the distribution of number of visits for triples in turn leads to more equally visiting entities, thus improving the performance.[The similar trends are shown on WN18RR.] Further analysis reinforces this finding. 
In FB15k-237, the average degree of FP triples' tails is 75 for SATKGC and 63 for SimKGC. A smaller portion of low-degree tails for SATKGC than for SimKGC indicates that exposure to more low-degree entities t̂ in training helps the model position their embeddings farther from the (h,r) embeddings for negative triples (h,r,t̂), as SATKGC visits low-degree entities more often during training than RANDOM for SimKGC.[This is also confirmed in all the other datasets.] For BRWR in Figure <ref>, we examine the structural characteristics on sets S_m and S_l of entities in 1,000 most and least frequent triples, respectively. The entities in S_m have an average degree of 11.1, compared to 297.3 for those in S_l. The betweenness centrality[The betweenness centrality of node v is the number of shortest paths that pass through v in the graph divided by the total number of shortest paths between all pairs of nodes, which measures v's importance in the graph.] averages around 5.2 × 10^-5 for S_m and 8.2 × 10^-4 for S_l. These observations implies that SaaM prioritizes visiting unpopular and peripheral triples in the KG over focusing on information-rich triples. § CONCLUSION In this paper, we propose a generic training scheme and a new contrastive learning method for KGC. Harmonizing (i) SaaM using a subgraph of KG as a mini-batch, and (ii) contrastive learning that incorporates the structural hardness of the KG into fine-tuning PLMs helps learning the contextual text embeddings aware of the difficulty in the structural context of the KG. Our findings imply that unequally feeding triples in training and leveraging the unique characteristics of KG lead to the effective text-based KGC method, achieving state-of-the-art performance on the four KG benchmarks. § LIMITATIONS While our training scheme efficiently samples many hard negatives from subgraphs, BRWR incurs a computational overhead to extract subgraphs from Wikidata5M before training (see Appendix <ref>). However, note that this algorithm can be efficiently parallelized or replaced by a simpler one. acl_natbib § INFERENCE For inference, the bi-encoder model calculates the cosine similarity between x_hr for a given (h, r, ?) and x_t for all entities t. Then the tails with the top-k largest cosine similarities are answers. For a single pair, we need |E| forward passes of Encoder_t to obtain x_t for all entities t. Given a set T of test triples, 2|T| forward passes of Encoder_hr are required to get x_hr for every triple (h,r,?)∈ T and its inverse triple (t,r^-1,?), thus resulting in O(|E| + |T|) computation in total. § MARKOV CHAIN MONTE CARLO BASED SUBGRAPH SAMPLING Inspired by a negative sampling approach <cit.> based on Markov chain Monte Carlo (MCMC) in a general graph, we propose a new method to sample subgraphs from a KG. A negative sampling distribution should be positively but sublinearly correlated with the positive sampling distribution, which was validated by <cit.>. To include the entities close to a positive triple in the KG in negative triples, we define the sampling distribution p_n of the negative tail t̂ as : p_n(t̂|h,r)∝ p_d(t̂|h,r)^α,0<α<1, p_d(t̂|h,r)=cos (x_hr,x_t̂)/∑_e∈ Ecos (x_hr,x_e). where α is a parameter to stabilize the optimization process, E is a set of entities in the KG, and p_d is the sampling distribution of the positive tail. Calculating the normalization term in p_d(t̂|h,r) is time consuming and almost impossible. 
Therefore, we sample a negative tail t̂ from p̃_d = cos(x_hr, x_t̂) by using the Metropolis-Hastings (M-H) algorithm, and randomly select a triple whose head is t̂. Algorithm 1 describes the process of sampling a subgraph by the M-H algorithm. To prepare a mini-batch of triples for SaaM, we traverse the KG using depth-first search (DFS). From each entity e∈ E, we perform DFS until we visit d triples. For every visited triple along the DFS path, the triple inherits the probability distribution p̃_d from the previous triple in the path, and k negative tails t̂ are sampled from this distribution. Then p̃_d is updated. The proposal distribution q is defined as a mixture of uniform sampling and sampling from the nearest k nodes, with probability 0.5 each <cit.>. Both d and k above are hyperparameters. The d triples in a DFS path, the sampled d× k triples, and their inverse triples compose a subgraph. We throw away the tails t̂ extracted during the burn-in period, and use the tails extracted after the period as the heads of the triples in the subgraph. § DATASETS In this paper, we use four KGC benchmarks. WN18RR is a sparse KG with a total of 11 relations and ∼ 41k entities. WN18RR is derived from WN18, consisting of relations and entities from WordNet <cit.>. WN18RR addresses the drawback of test set leakage by removing the inverse relations in WN18. NELL-995 is a sparse KG extracted from the web. FB15k-237 is a dense KG with 237 relations. Wikidata5M, a much larger KG than the others, provides transductive and inductive settings. Wikidata5M-Trans is for the transductive setting, where entities are shared and triples are disjoint across training, validation, and test. Wikidata5M-Ind is for the inductive setting, where the entities and triples are mutually disjoint across training, validation, and test <cit.>. § IMPLEMENTATION DETAILS In our weighted InfoNCE loss, the additive margin γ is set to 0.02. We select the best performing batch size of 1024 from 512, 1024, 1536, 2048 for WN18RR, NELL-995, and Wikidata5M, and 3072 from 1024, 2048, 3072, 4096 for FB15k-237. We set the restart probability p_r to 1/25 in BRWR. We used six A6000 GPUs and 256G RAM. Training on WN18RR took 50 phases, for a total of 4 hours. NELL-995 and FB15k-237 took 20 phases and a total of 3 hours and 10 hours respectively, while Wikidata5M took 2 phases and 20 hours. § RUNTIME ANALYSIS Training SATKGC incurs a marginal computational overhead because (i) sampling subgraphs and (ii) computing distances and degrees are performed in advance before training. As shown in Table <ref>, the computational cost for (i) and (ii) is acceptable. The cost depends on the size of a mini-batch, which can be adjusted. Moreover, the time complexity for (ii) is acceptable. For each mini-batch B of triples, we run Dijkstra’s single-source shortest path algorithm, and thus the runtime to compute the distances is O(|V| log |V| + |B| log |V|), where V is the set of entities in B. As described in Section <ref>, we do not calculate all-pair shortest paths for every pair of vertices in V. To reduce the computational overhead, we compute the approximate distance by using the shortest paths between the head of the center triple in a subgraph and every tail in that subgraph, which have already been obtained from Dijkstra’s algorithm (sketched below). Table <ref> shows the training time per phase for SATKGC and per epoch for SimKGC <cit.>, StAR <cit.>, and SANS <cit.>. 
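A minimal sketch of this per-subgraph distance precomputation with networkx is shown below. Treating the KG as an undirected, unweighted graph, Dijkstra with unit edge weights coincides with breadth-first search; the graph construction, the default distance for unreachable entities, and the function names are illustrative assumptions.

import networkx as nx

def center_distances(subgraph_triples, center_head):
    # build an undirected, unweighted view of the subgraph
    g = nx.Graph()
    g.add_edges_from((h, t) for h, _, t in subgraph_triples)
    # single-source shortest-path lengths from the center triple's head
    return nx.single_source_dijkstra_path_length(g, center_head)

def approx_distance(dist, h, t, default=10):
    # (d1) x (d2): product of the two center-head distances approximates d(h, t)
    return dist.get(h, default) * dist.get(t, default)

def omega(dist, h, t):
    # structural hardness of a negative: reciprocal of the approximate distance
    return 1.0 / max(approx_distance(dist, h, t), 1)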
SATKGC remains competitive, though it takes slightly more time than SimKGC due to the computational overhead for (a) computing its loss using the shortest path weight (SPW) and degree weight (DW), (b) counting the occurrences of visited triples to select the next subgraph, and (c) fetching subgraphs. StAR and SANS take longer than SATKGC. StAR runs out of memory on the Wikidata5M datasets, while SANS cannot be applied to the inductive setting (Wikidata5M-Ind), and is not expected to finish within a reasonable time on Wikidata5M-Trans. § EFFECTIVENESS OF SAAM To demonstrate the generality of SaaM, we apply SaaM to another text-based method StAR <cit.>, which adopted Siamese network architecture. Table <ref> shows the results where StAR+SaaM stands for the StAR model architecture trained by our training framework SaaM. Performance improvement are observed in all evaluation metrics, with significant gain in MRR and Hits@1. There results empirically validate that the model-agnostic SaaM framework can be successfully applied to different text-based methods. § PERFORMANCE OF HYBRID IN-BATCH NEGATIVES To evaluate the efficacy of constructing a batch exclusively with triples in a subgraph (i.e., SaaM), we compare two sampling methods SaaM and Mixed. Mixed represents a variant that replaces a half of triples in each mini-batch produced by SaaM with randomly selected triples. Table <ref> illustrates the performance of SaaM and Mixed. Including random samples deteriorates performances, e.g., a significant drop in Hits@1, which indicates that a mini-batch consisting only of triples within the subgraph is more beneficial. § FALSE POSITIVE ANALYSIS We aim to demonstrate that our contrastive learning with two structure-aware factors selectively penalizes hard negative triples. Table <ref> compares ψ_h L_(h,r,t) and L_(h,r,t) on average only for false positives and those for all training triples, where L_(h,r,t) in Equation (1) represents loss with only the distance factor ω_ht_i applied, and ψ_h L_(h,r,t) in Equation (2) additionally applies the degree factor ψ_h. The average loss values for the false positives are higher than those for all the triples, which indicates that the structure-aware contrastive learning method severely punishes incorrect predictions for hard negative triples. § CORRELATION OF RELATION TYPES AND PERFORMANCES We investigate the distribution of triples based on their relation types on WN18RR, NELL-995, and FB15k-237. Figure <ref> shows that the ratio of triples with the N-N relation type increases in the order of WN18RR, NELL-995, and FB15k-237, while the ratio of triples with the N-1 and 1-N relation types decreases in the same order. In Table <ref>, both embedding-based and text-based approaches achieve the best results on WN18RR among the three datasets, whereas the performance is worse on FB15k-237. We observe that a high proportion of the N-N relation type and a low proportion of the N-1 and 1-N relation types negatively impact the model performance. This performance difference between the datasets is larger for a text-based approach than for an embedding-based approach. We speculate that this is because the embedding-based approach randomly initializes the entity and relation embeddings, while the text-based approach uses contextualized text embeddings obtained from PLMs. For the N-N relations where multiple tails can be the correct answer for the same (h,r) pair, the embeddings of these correct tails should be similar. 
However, PLMs take only text as input, being oblivious of their high similarity. Therefore, these tail embeddings generated by the PLMs might be far apart from each other, so the (h, r) embedding is likely to remain in the middle of these tail embeddings during fine-tuning. § HYPERPARAMETER SENSITIVITY We investigate how restart probability p_r in BRWR affects model performance. The hyperparameter p_r is associated with the length of the random walk path from the start entity, which in turn influences the occurrence of duplicate entities in a mini-batch. A longer path leads to fewer duplicate entities in the mini-batch. Figure <ref>(a) illustrates that a lower p_r value, encouraging a longer random walk path, leads to higher Hits@1 for WN18RR and FB15k-237. We analyze the impact of duplicate entities in a mini-batch on the model performance. In Figure <ref>(b), more duplicate entities resulting from higher p_r negatively impact on the performance, which highlights the importance of reducing the duplicates in a mini-batch to avoid the performance degradation. § ENTIRE MAIN RESULTS Table <ref> presents the results of all baselines compared with ours.
http://arxiv.org/abs/2407.13355v1
20240718095433
EarlyMalDetect: A Novel Approach for Early Windows Malware Detection Based on Sequences of API Calls
[ "Pascal Maniriho", "Abdun Naser Mahmood", "Mohammad Jabed Morshed Chowdhury" ]
cs.CR
[ "cs.CR" ]
Journal of Class Files, Vol. 14, No. 8, August 2021 Maniriho et al.: EarlyMalDetect: A Novel Preventive Approach for Early Malware Detection from Sequences of API Calls EarlyMalDetect: A Novel Approach for Early Windows Malware Detection Based on Sequences of API Calls Pascal Maniriho, Abdun Naser Mahmood, and Mohammad Jabed Morshed Chowdhury P. Maniriho, A. N. Mahmood, and M.J.M. Chowdhury are with the Department of Computer Science and Information Technology, La Trobe University, Melbourne, July 22, 2024 ====================================================================================================================================================================================================================================================================== § ABSTRACT In this work, we propose EarlyMalDetect, a novel approach for early Windows malware detection based on sequences of API calls. Our approach leverages generative transformer models and attention-guided deep recurrent neural networks to accurately identify and detect patterns of malicious behaviors in the early stage of malware execution. By analyzing the sequences of API calls invoked during execution, the proposed approach can classify executable files (programs) as malware or benign by predicting their behaviors based on a few shots (initial API calls) invoked during execution. EarlyMalDetect can predict and reveal what a malware program is going to perform on the target system before it occurs, which can help to stop it before executing its malicious payload and infecting the system. Specifically, EarlyMalDetect relies on a fine-tuned transformer model based on API calls which has the potential to predict the next API call functions to be used by a malware or benign executable program. Our extensive experimental evaluations show that the proposed approach is highly effective in predicting malware behaviors and can be used as a preventive measure against zero-day threats in Windows systems. Dynamic malware analysis, Malware detection, Transformer model, Recurrent neural networks, API Calls, Transfer learning, Machine learning, Deep learning § INTRODUCTION THE prevalence of Internet technologies provides many people with access to a diverse range of services, including social networking, online retail, navigation, and object positioning through the Global Positioning System (GPS), among others. As reported by Petrosyan <cit.>, there are 5.35 billion Internet users around the globe as of April 2024, which accounts for 66.2% of the world’s population. DataReportal <cit.> also revealed that Internet users are on the rise with statistics showing that the number grew by 100 million users in 2022. These billions of users connected worldwide make the Internet the center and pillar of today's digital world. However, it is worth mentioning that despite the benefits of increased connectivity through the Internet, there is also a higher risk of cyber attacks due to the growing number of interconnected devices and systems. Hackers develop advanced malware programs with new ways to infiltrate systems and steal sensitive information, posing a growing risk to computer security worldwide <cit.>. Furthermore, hackers are constantly innovating their attack mechanisms, making it increasingly difficult for traditional security techniques to keep up. As reported in <cit.>, victims of ransomware attacks have paid out a high ransom (449 million US dollars) in the first six months of 2023. 
Thus, taking preventive measures to secure online devices and sensitive information becomes a security requirement. Detecting malware based on behavioral data gathered during execution can identify advanced malware. However, it requires a long time to gather all malware behaviors, increasing the likelihood for malware to deliver the malicious payload before detection. Furthermore, the collection of behavioral data itself is computationally prohibitive and resource expensive. Many of the current detection models mostly rely on post-benign and malware execution logs (features) to identify malware activities <cit.> <cit.> <cit.>. Consequently, these models can detect malware activities in the aftermath of a malware infection in the system, which is already too late for safety-critical systems. This is a major limitation observed in most behavior-based malware detection models found in the literature <cit.> <cit.> <cit.> <cit.> <cit.>. Detecting and classifying malware in its early stages is a critical aspect of security, particularly when it comes to critical systems and networks. For Windows malware detection, the common features extracted from executable files include API call sequences <cit.> operation code sequences (opcodes) <cit.>, loaded dynamic linked libraries (DLLs) <cit.>, and static images of executable file <cit.>. Additionally, other features such as file changes, network connections history, registry changes, printable strings, and information from file headers have been also used to distinguish benign from malware applications <cit.> <cit.>. In particular, API call features have become popular in Windows malware detection <cit.>. In the context of API call-based malware detection, existing techniques lack preventive techniques for early malware detection. Thus, this work is specifically focused on addressing this challenge by developing a new malware detection technique that utilizes features of API calls. The proposed method allows for effective early detection of malicious files by predicting the next possible actions to be performed by a Windows executable program (file) based on its invoked API calls. Our approach involves analyzing API calls to predict and identify threats at an early stage which allows taking appropriate action before the malware can cause damage. We rely on a sequence prediction approach based on a fine-tuned transformer model to predict the behavior of malware while the detection technique uses the predicted behavioral data to detect malware. The key advantage of the proposed approach lies in its ability to predict the behaviors of malware or benign programs since API calls are overloaded and overused in different contexts when performing various tasks in Windows OS. By predicting and classifying suspicious API calls that are indicative of malicious intent, we can quickly and effectively detect a malware program in its early execution stages, minimizing the likelihood of potential damage caused by the infection or compromise. More specifically, in this work, we aim to address the following two main research questions (RQs) that have not been covered in prior works based on API call sequences in Windows. * Predicting sequences of API calls (RQ1): Based on the initial API calls invoked by an executable program (malware or benign) in Windows, is it possible to predict the next sequences of API calls to be invoked by that program? 
* Detecting malware at its early stage of execution (RQ2): Can malware be detected by predicting its behaviors before it infects the system? More importantly, examining both research questions enables a focused investigation into the potential utility of predictive models in identifying malware through anticipated behavioral patterns of API calls preceding actual infection. §.§ The Main Contributions of This Work The following are key contributions of this article, with each contribution underscoring important advancements and perspectives in the evolving landscape of malware detection. * We design and build a transformer-based model for predicting the next API calls invoked by an executable program by fine-tuning the existing Pre-trained Transformer Model (GPT-2) using the Huggingface open-source Transformers library. The fine-tuned model (Winapicallseq-finetuneGPT2-Model) can predict the behaviors of an executable program based on initial sequences of API calls. * We propose a new malware detection approach that relies on the fine-tuned transformer model and bidirectional recurrent neural networks with an attention mechanism. The proposed detection approach can detect and classify malicious activities before they occur, making it a preventive and mitigation approach for early malware detection in Windows systems. * We perform various experiments using various datasets of API call sequences and the experimental results demonstrate valuable performance achieved by the proposed approach under different experimental conditions. The overall performance shows promising results, and the proposed approach outdid other state-of-the-art malware detection approaches tested on the same datasets. Organization: The remaining part of this paper is organized as follows. Section <ref> presents the background, Section <ref> discusses the related works on malware detection while Section <ref> presents the proposed approach. Section <ref> discusses the experimental results and limitations of the proposed approach and discusses future work. Furthermore, the conclusion of this work is presented in Section <ref>. § BACKGROUND In this section, we introduce the concepts of sequence prediction, sequence classification, and transformer models, as these techniques are foundational for building the proposed approach for early malware detection. §.§ Sequence Predictions The attempt to predict elements (or words) of a sequence on the basis of the preceding elements is referred to as sequence prediction in natural language processing (NLP) <cit.>. Given a sequence X of text represented in the form of a sentence, the goal of a sequence prediction model is to predict the next word in the sequence based on the elements of the original sequence X. For instance, if the sequence X=[1,2,3,4,5,6,7,9,k] with numbers indicating the encoded words from X, the goal is to predict the next word k using a sequence prediction model P. It is worth mentioning that in many cases, more than one word can be predicted at the same time. A typical sequence prediction model is first trained using a deep learning algorithm and a set of sequences in the train set. Once the model has been trained, it can be leveraged to make sequence predictions. Deep learning algorithms have shown promising results in sequence predictions and have been successfully employed to build powerful language modeling models <cit.> <cit.>. 
Given its potential, sequence prediction has gained popularity in several application domains such as speech recognition, text classification, stock market prediction, product recommender systems, web page prefetching, and many more. More importantly, Figure <ref> (a) presents a typical scenario for sequence prediction. §.§ Sequence Classification The primary goal of sequence classification is to create a classification model using a labeled dataset <cit.>. It involves applying different feature extraction techniques to extract meaningful features from input sequences and use them to create classification models that accurately categorize and classify sequences in their respective categories. Thus, a sequence classification model aims to predict a class (label) for a given input sequence. For instance, having an encoded sequence X=[1,2, 3,4,5,6,7] representing behaviors (e.g. sequence of API calls) extracted from an executable program in Windows, the task is to predict if X represents a malicious file or benign file (in the context of malware classification). Accordingly, Figure <ref> (b) illustrates the process for sequence clarification. §.§ Transformer Models Introduced in 2017 by Vaswani et al. <cit.>, a transformer model is a type of neural network with a deep learning architecture that uses a parallel multi-head attention mechanism to process data (e.g: text sequences, or sequences of API calls) and learn important patterns from the processed data. The transformer architecture adopts an encoder-decoder structure but it generates the output without relying on recurrence and convolution operations. The encoder transforms (maps) an input sequence into a sequence of continuous representations holding all the learned information for that particular sequence. The encoded representation is then passed to the decoder which decodes it to generate the output sequence. Transformer models demonstrated better performance over the previous architectures commonly relying on recurrent neural networks (RNNs). Text (sequence) classification is one of the well-known common tasks that can be performed with transformers models <cit.>. Transformers can effectively identify relevant patterns in long sequences and can handle big datasets <cit.>. Generative Pre-trained Transformer (GPT) <cit.>, Bidirectional Encoder Representations from Transformers (BERT) <cit.>, and distilled version of BERT (DistilBERT) <cit.>, to name a few, are the examples of transformer models presented in the literature. With unsupervised learning employed by transformer architectures, transformer models have eliminated the need to train task-specific models from scratch. Most of the existing pre-trained transformer models can be freely accessed from the Hugging Face hub <cit.>, an online AI community/platform that provides cybersecurity researchers, data scientists, and AI practitioners immediate access to several pre-trained models (over 20,000 models implemented based on various transformer architectures). § RELATED WORK Several approaches were proposed for detecting malware using behavioral (dynamic) features in recent years <cit.> <cit.>. These approaches utilize specific features extracted from executable files to discover malware. XiaoFeng et al. <cit.> combined machine learning (ML) and deep learning (DL) algorithms to implement an API call-based technique for malware detection. 
Their technique is based on analyzing dependency relations in sequences of API calls and relies on bidirectional residual neural networks to detect malicious programs. Using this technique, they were able to achieve a detection accuracy of 96.7%. Li et al. <cit.> proposed DMalNet, a graph-deep neural-based framework that uses a hybrid feature encoder and call graph to extract semantic features from API call sequences and their arguments to detect Windows malware. The evaluations conducted using over 20,000 benign and 18,000 malicious executable files show a detection accuracy of 98.43%. Ndibanje et al. <cit.> have used statistical methods to analyze the frequency and similarity between API calls extracted from malware and benign executable files. While their similarity matching-based method achieved an accuracy of 96.7%, the effectiveness of this approach is limited as adversaries can manipulate the frequency of API call tokens by adding superfluous (unnecessary) API calls to avoid detection. Karbad and Debbabi <cit.> employed the bag-of-words (BoW) method to transform API call sequences into word sequences and have introduced the MalDy framework for behavior-based malware detection. Although the experimental evaluations demonstrate better performance achieved with their framework, the approach used for analyzing API calls can be vulnerable to obfuscated malware which can greatly affect the effectiveness of their detection model <cit.>. Han et al. <cit.> proposed MalDAE, a malware detection framework that uses both static and dynamic analysis sequences of API calls. Ki et al. <cit.> suggested a detection technique that relies on a DNA sequence alignment approach to identify recurring patterns of API calls across various types of malware. By examining whether specific API call functions are present on new malware samples, their approach can identify malware threats. However, it can be computationally expensive when dealing with longer sequences. Dabas and Prabha <cit.> developed MalAnalyser, a new API calls-based system for Windows malware detection. MalAnalyser works by extracting frequently occurring API calls from the entire sequences of API calls captured during execution. Dabas and Prabha’s approach has also used Particle Swarm Optimization and Genetic algorithms to identify a small subset of relevant features that enhanced the detection of unseen malware based on patterns of API calls. Amer and Zelinka <cit.> relied on behavioral graph and word embedding to analyze the contextual relationship between functions of API calls in malware detection in order to generate features fed to a Markov Chain-based approach that classifies malware. Li et. <cit.> extracted API call sequences from benign and malicious files and applied a Markov chain model to extract feature vectors from API calls by constructing a cyclic graph. The extracted features were fed to a graph convolution neural network model which achieved an accuracy rate of 98.32% when detecting unknown malware samples. Huang et al. <cit.> used visualization and convolutional neural networks (CNNs) to transform dynamic and static sequences of API calls into images. While this approach achieved an accuracy of 94.7%, converting API calls directly into images or other forms of representations may lead to the loss of relevant information as noted by Ma et al. <cit.>. Moreover, Table <ref> presents other current API call-based approaches and highlights ML or DL algorithms that were used to implement the detection model. 
It also highlights two main limitations of existing malware detection approaches. Accordingly, it shows if the detection approach has attempted to address the issue of automatic API call sequence prediction (predicting the behaviors of a Windows executable program when it starts executing). The last column also reveals if a given detection technique can perform early malware detection or not. In general, existing detection techniques are most likely to detect malware programs when they have already infected the system, resulting in high rates of failure. As a solution to the mentioned limitations, this work presents a new approach that performs early detection and classification of malware in Windows. § THE PROPOSED MALWARE DETECTION APPROACH In this work, we take advantage of sequence modeling approaches presented in Section <ref> to implement a new approach (EarlyMalDetect) for early malware detection in Windows. Moreover, we also use transfer learning and attention-guided recurrent neural networks. Specifically, Figure <ref> shows the architecture of the proposed methodology. In contrast to the previous malware detection approaches, EarlyMalDetect can predict the next API calls to be invoked by a malware or benign program and use them to distinguish malicious or normal activities., making it a preventive and mitigation approach with the potential to detect and stop malware programs before they can infect the target system. In the next sections, we provide more details on our methodology such as datasets used, fine-tuning procedures (steps for designing the fine-tuned model for modeling API calls), and finally the design of the detection model. §.§ Collecting Datasets of API Call Sequences We have collected various datasets of API calls representing malware and benign executable files. These include the API calls dataset presented in <cit.>, <cit.>, <cit.> <cit.> and <cit.>. These datasets are used for experimental evaluations (train and test the proposed approach) and details on the number of samples on each dataset are presented in Table <ref>. It is worth noting that API calls are essential for applications to communicate and interact with the Windows OS, making them valuable features when building malware detection techniques in Windows. §.§ Fine-tuning the GPT-2 Transformer Model on API Calls Dataset Through Transfer Learning Training deep neural networks like transformer models <cit.> from scratch can be challenging due to several factors. For instance, obtaining the necessary amount of data for the problem at hand can be time-consuming. Additionally, acquiring the computational resources (such as GPUs) required to train these deep learning models can also be costly <cit.> <cit.>. As a result, one way to overcome the challenge is by utilizing the transfer learning approach which offers various benefits such as reducing training time, accelerating the training process for new models, and minimizing the model deployment time. Transfer learning allows us to use a pre-trained model (a model trained on large-scale data from a source domain/non-specific domain) as a starting point for building a new model <cit.>. Although transfer learning has gained more popularity in fields such as computer vision over the past years, it has not been fully explored in other fields such as cybersecurity, particularly in malware detection. Some of the previous works attempted to use transfer learning to build malware detection approaches based on images of Windows executable files. 
For example, Kumar and Janet <cit.> achieved a good detection accuracy (98.92%) by using various CNN-based pre-trained models (like Google’s inceptionV3, VGG16, ResNet50, and VGG19) as feature extractors. Lo et al. <cit.> examined the performance of Xception and VGG16 models on both Malimg and Microsoft Malware image Datasets. Nevertheless, existing techniques did not explore the use of transfer learning in API call-based malware detection, specifically in the automatic prediction of API call sequences. Thus, we exploit the potential of transfer learning in this work. Specifically, we introduce Winapicallseq-finetuneGPT2-Model, a new domain-specific transformer model for modeling API call sequences. Our model is built by fine-tuning the GPT-2 transformer model on our custom dataset of API calls representing malware and benign executable files. GPT-2 is an open-source generative pre-trained transformer model introduced by OpenAI in <cit.>. It was implemented and made publicly accessible by the Hugging Face Community <cit.>. GPT-2 is a powerful language model that has been pre-trained on a large corpus of text and it can perform new text generation, text data augmentation, and next-word or sentence predictions in a format similar to the content (text) it was trained on. However, it is not an optimal choice (most appropriate option) for performing specific tasks such as modeling the sequence of API calls. This is because the corpus of text (natural language text) used to train the original pre-trained GPT-2 model is different from the executable program's API call sequences. That is, API call sequences (also called the sequence of function calls) are not typically expressed in natural language. Even if some of the API call function names may use words that are part of the natural language, the overall structure of sequences of API calls is not expressed in a typical natural language sentence format or structure. Consequently, we have fine-tuned the GPT-2 model on the datasets of API call sequences to produce Winapicallseq-finetuneGPT2-Model, a domain-specific model for modeling sequences of API calls in Windows. The model was fine-tuned using 68,961 sequences of API calls from the dataset presented in <cit.> <cit.> and <cit.>. The fine-tuning process was performed following the guidelines outlined in <cit.> using the Hugging Face Transformers library and PyTorch framework. This involves processing our dataset of API call sequences by converting and assembling them into input batches of tensors supported by the GPT-2 transformer model using a tokenizer from the transformers library. The tokenizer splits the sequences of API calls into tokens. These tokens of API calls also get converted into numerical representation and then tensors which serve as the input for the model. It is worth noting that we have also applied sequence padding and truncation methods to handle variable lengths of API calls. This is important as the model requires all tensors to have the same length. Training is the next stage after data pre-processing. Thus, we have used the trainer class optimized for training transformers models which is provided in the transformers library <cit.>. This makes training easier and avoids manually writing a training loop from scratch. We trained the model with default parameters as outlined in the training configuration class presented in <cit.>. However, we have set the maximum block size/sequence length to 500. 
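The following is a condensed sketch of this fine-tuning setup with the Hugging Face Transformers library; the file name, the helper tokenization function, the number of epochs, and the output directory are illustrative assumptions, while the block size of 500 follows the configuration described above.

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token            # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# one API-call sequence (space-separated function names) per line -- assumed file layout
data = load_dataset("text", data_files={"train": "api_call_sequences.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=500)

tokenized = data.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)   # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="winapicallseq-finetunegpt2", num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()

Once fine-tuned, the model can be queried for the most likely continuation of a partially observed sequence, for example:

prompt = "LdrLoadDll NtCreateFile NtWriteFile"        # illustrative initial API calls
inputs = tokenizer(prompt, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=20, do_sample=False,
                           pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(generated[0], skip_special_tokens=True))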
The fine-tuned model is used to predict the next API calls to be invoked by a Windows executable program (malware or benign) during execution. In addition, the fine-tuned model can also be used to generate new sequences of API calls. We make the fine-tuned model available for the research community and it can be freely accessed from <cit.>. It is also important to mention that the labels of the samples were discarded during training, allowing us to fine-tune the model through unsupervised learning. This is because our goal is not to use the fine-tuned model for performing any classification tasks. Thus, labeled API call sequences are only considered when building the detection model, as discussed in subsection <ref>. More details on the fine-tuning steps are presented in Figure <ref>, while the main steps can also be found in the algorithm presented in <ref>. In addition, Section <ref> discusses how the fine-tuned transformer model is used to discover the behaviors of an executable file by predicting its next API calls to be used while executing. §.§ Designing the Detection Model The proposed detection model is designed during this phase and it involves training and testing as illustrated in Figure <ref> part 1 and part 2. Training is performed using the datasets of API call sequences from <cit.> <cit.> while testing is carried out using the datasets in <cit.> and <cit.>. In our experiment, we refer to the dataset in <cit.> and <cit.> as dataset 1 and dataset 2, respectively. During testing, we rely on the next sequence prediction method using the fine-tuned model (Winapicallseq-finetuneGPT2-Model) in order to identify and detect unknown Windows malicious files as early as possible before they can infect and compromise the system. The testing process is illustrated in part 2 of Figure <ref>, which shows the necessary steps to detect malware based on the fine-tuned model and the designed detection model. In addition, more details on testing are presented in Section <ref>. It is also important to mention that we have considered the best practices for training and testing malware detection models suggested in the works in <cit.> <cit.>, allowing us to follow practical procedures to avoid temporal and spatial bias in our experiments. The proposed detection model is also based on the DistilBERT transformer model which encodes and embeds sequences of API calls. That is, DistilBERT creates numerical representations of contextual API call embeddings (also known as embedding matrix) which are passed to a bidirectional gated recurrent unit with attention (BiGRU-Attention) that learns and extracts contextual patterns of API calls. The output from the BiGRU-Attention layer is then passed to the last layer with fully connected neurons (fully connected layer) that performs further non-linear processing followed by classification of the sequence. The output of the fully connected layer reveals if the API call sequence represents malware or a legitimate Windows application. More specifically, sections <ref>, <ref> <ref> and <ref> provide details on each part of the proposed malware detection approach and discuss how they interact to perform early malware detection. §.§.§ API call Sequence Numerical Representation Before feeding sequences of API calls to the feature extraction layer (BiGRU-Attention), we have to represent tokens of API calls as numbers. We refer to this process as generating numerical representation and it is accomplished through API calls tokenization and embedding. 
Tokenization involves breaking down a sequence of API calls into individual tokens of API calls. During tokenization, each unique API call in the sequence is given a numerical index, with the most common API call tokens having lower indices. This is typically done to create a fixed-size vocabulary of the most frequent API calls in the corpus. Because API call sequences in the dataset have different lengths, they have to be standardized to allow them to have the same length. Thus, in this work, this is achieved by padding shorter API call sequences with zeroes and truncating longer sequences to a maximum length that is defined. After tokenizing, indexing, and padding input sequences of API calls, each token of an API call is then represented as an embedding vector through embedding. For example, by using the whitespace tokenization (splits the sequences based on spaces) and embedding the tokenized sequence, a numerical representation of an API call sequence can be created as follows. * Input Sentence: LdrUnloadDll NtOpenSection CoUninitialize GetForegroundWindow GetNativeSystemInfo NtOpenSection. * Tokenized API call: ['LdrUnloadDll', 'NtOpenSection', 'CoUninitialize', 'GetForegroundWindow', 'GetNativeSystemInfo’, 'NtOpenSection’] * API call Counts: Counter('NtOpenSection': 2, 'LdrUnloadDll': 1, 'CoUninitialize': 1, 'GetForegroundWindow': 1, 'GetNativeSystemInfo': 1) * Vocabulary: ['NtOpenSection', 'LdrUnloadDll', 'CoUninitialize', 'GetForegroundWindow', 'GetNativeSystemInfo’] * API call Indexing: 'NtOpenSection': 0, 'LdrUnloadDll': 1, 'CoUninitialize': 2, 'GetForegroundWindow': 3, 'GetNativeSystemInfo': 4 * Embedding Matrix: [[0.25963068 0.6280229 0.66656035 0.39301154 0.68322694] [0.6930088 0.6117446 0.3116537 0.76647747 0.1841041 ] [0.7610203 0.96127915 0.2734483 0.8345999 0.50677675] [0.77004313 0.5877002 0.9444492 0.37301415 0.28019246] [0.41838863 0.66963714 0.46777412 0.4987319 0.58486855] [1.0454408 0.10222486 0.11390683 0.3511025 0.8852803 ]]. The embedding matrix in step 6 (last step) contains the learned representations of each API call in the corpus as dense vectors. It has the shape of (6,5), where 6 represents the number of unique API calls in the input sentence and 5 represents the dimensionality of the embedding space which is defined before embedding. Each element in the matrix represents the weight value associated with a particular API call feature. For example, the weight at position [0,1] is 0.6280229, which corresponds to the second feature (dimension) of the embedding vector for the first API call in the corpus. All values in the embedding matrix are learned by the model during training and represent semantic relationships between API calls based on the input data. Specifically, API calls that appear in similar contexts will always be mapped to similar dense vectors in the embedding space. In this work, we employ the DistilBERT model <cit.> to perform tokenization, padding, and truncation of API call sequences and to create their embeddings. DistilBERT is a smaller and faster variation of the BERT transformer architecture that utilizes distillation to shrink the larger BERT model (reduce its size and complexity). In comparison to the BERT model, DistilBERT has 40% few parameters while maintaining over 95% of BERT's performance on the GLUE language understanding benchmark. Additionally, this smaller lightweight model runs 69% faster than its predecessor, making it preferable in our experiment. 
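As an illustration of this encoding step, a minimal sketch using the Hugging Face DistilBERT classes is shown below; the checkpoint name and the maximum length are assumptions for illustration, and the actual parameter configuration is given in Table <ref>.

import torch
from transformers import DistilBertTokenizerFast, DistilBertModel

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")

api_sequences = [
    "LdrUnloadDll NtOpenSection CoUninitialize GetForegroundWindow "
    "GetNativeSystemInfo NtOpenSection",
]

# tokenize, pad, and truncate; the attention mask marks real tokens vs. padding
batch = tokenizer(api_sequences, padding="max_length", truncation=True,
                  max_length=128, return_tensors="pt")

with torch.no_grad():
    out = encoder(**batch)

embeddings = out.last_hidden_state        # (batch, seq_len, 768) contextual embeddings
mask = batch["attention_mask"]            # 1 for real API-call tokens, 0 for padding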
Consequently, we have trained DistilBERT on our API call dataset to create contextualized API call embedding features that improve the detection of malware attacks. During tokenization and padding, DistilBERT generates an attention mask to indicate which tokens are actual API calls and which are padding. All padded sequences are processed by DistilBERT to generate embeddings for API calls. Additionally, the parameter configurations for the DistilBERT model are provided in Table <ref>. Once all API calls are encoded, the embedding vectors are then passed to the next layer for contextual feature extraction. l_i = ReLU(∑_i W_i h_t_i + b_i) Sigmoid(x) = 1/(1+e^-x) §.§.§ Bidirectional GRU Layer In the proposed detection model depicted in Figure <ref> part 1, the bidirectional gated recurrent unit (BiGRU) layer takes the contextual API call embedding matrix generated by the DistilBERT transformer model as input. This layer then processes the input sequence in both forward and backward directions to capture more context from both directions of the input sequence. By using forward and backward sequence modeling, this layer learns and captures more dependencies between API calls in a given sequence (with the sequence representing a legitimate or malware program) and improves the ability of the detection layer (the fully connected layer in our case). This allows the detection model to make accurate predictions when classifying malware attacks. Specifically, the BiGRU layer is configured with 32 hidden units, which control the dimensionality of the output space. The network is activated using the Sigmoid and ReLU activation functions, while a Dropout regularizer with a rate of 0.3 is used to prevent model overfitting. The Sigmoid and ReLU activations are computed using the equations in (<ref>) and (<ref>). The output of the bidirectional GRU contains hidden states representing API calls, which are fed to the next layer of the proposed early malware detection approach. §.§.§ Attention Layer The attention layer is used in our detection model's architecture to weigh the input tokens before passing them to the last layer. Specifically, the attention layer receives the input processed by the BiGRU layer and then computes the importance of each input token. This involves computing attention weights for each input token from dot products over the concatenated outputs of the BiGRU layer. The attention layer uses a self-attention mechanism, where the weights are calculated solely from the input tokens themselves. This process helps the proposed model focus on relevant parts of the input sequences of API calls, allowing the model to produce more accurate predictions. In addition, the flatten layer in Figure <ref> is used to convert the 3-dimensional output tensor of the attention layer into a 2-dimensional tensor suitable for the Softmax activation function. Thus, the attention layer is followed by a Softmax activation function which is applied to the flattened tensors to obtain the final attention weights. §.§.§ Fully Connected Layer The output tensors from the attention layer are passed to a fully connected layer of neural networks which learns from them to perform the classification of malware. During training, each input sequence passes through the hidden layers of the network. 
The process involves adjusting the weights and biases between neurons of the network through backpropagation, to optimize its parameters and minimize the learning loss which is measured using Binary Cross-Entropy presented in equation (<ref>). Our fully connected layer's architecture is designed with two hidden layers and one output layer. The first hidden layer has 64 neurons with ReLU activation function and takes the API call context vector from the attention layer as input. The second layer has 32 neurons with the ReLU activation function while the output layer has one neuron with the Sigmoid activation function. The output ranges between 0 and 1, indicating if the input sequence belongs to the negative class (benign) or positive class (malware), respectively. After this stage, the output is a trained malware detection model that can be tested on unseen samples to make predictions. Thus, in the next section <ref> we discuss how the trained model works in combination with the fine-tuned model presented in section <ref> to perform early malware detection as illustrated in Figure <ref>. §.§ Testing the Trained Model As shown in Figure <ref>, the second part of our methodology deals with testing the proposed approach to perform early malware detection based on sequence prediction and classification. More specifically, we fully rely on our fine-tuned Winapicallseq-finetuneGPT2-Model model to perform early malware detection. Having an initial short sequence of API calls made by a malware or benign program when it starts executing in Windows, we use the fine-tuned transformer model to predict the behaviors of that particular program. That is, the initial sequence is fed to the fine-tuned transformer model which predicts the next possible API call functions to be used by the program under investigation, allowing us to discover the malicious behaviors of malware before it can infect the system. The predicted API calls are then appended to the original (initial) sequence to create a new sequence of API calls which is then fed to our trained malware detection model to determine if the program is malicious or legitimate. Because the behaviors of a malware program can be predicted before it can execute all of its malicious payloads and perform malicious activities, there is a high probability for the proposed approach to stop the malware and prevent it from harming the victim's system. Accordingly, Figure <ref> illustrates the process of API call prediction (for predicting the behaviors of a program through API calls) using the fine-tuned model While all steps for performing API call prediction are summarized in the algorithm presented in <ref>. LF(y,y')=-(y*log(y')+(1-y)*log(1-y')) § EXPERIMENTAL EVALUATIONS AND RESULTS This section gives details on the programming environment, tools, and performance metrics. It also presents experimental results and provides insights into the overall performance of the proposed approach. §.§ Environment and Tools As processing large-size sequential datasets with transformer models requires a computer with Graphic Processing Units (GPUs), fine-tuning the GPT-2 model on API calls was performed in Google Colaboratory (Google Colab) <cit.> on a computer with 51GB RAM, 16GB for GPU RAM, and 166GB disk storage with a Colab Pro subscription. 
Given the high cost of GPU access in Google Colab, the fine-tuned model was saved to the local disk, and the remaining experiments (designing and implementing the proposed early malware detection approach) were performed on our computer with a 12th Gen Intel(R) Core(TM) i7-12700H 2.30 GHz processor, 32.0 GB RAM, 4GB GPU RAM with NVIDIA GeForce RTX 3050 Ti, and 1TB disk storage. The transformers library from the Hugging Face hub <cit.>, PyTorch, TensorFlow, and Scikit-learn frameworks were used for the implementation in Python. Jupyter Notebook was also used as the main integrated development environment (IDE). §.§ Measuring the Performance We have assessed the performance by examining the false negative rate (FNR), false positive rate (FPR), true positive rate (TPR), true negative rate (TNR), accuracy, F1-score, recall, precision, the area under the ROC curve (AUC-ROC) score, and the macro and weighted averages of recall (R), precision (P), and F1-score (F1). These metrics help us to determine whether the proposed approach performs well when detecting malware. §.§ Results and Discussion This section presents and discusses the results of our experiments. We conducted extensive evaluations and have carefully analyzed and interpreted the results. As described earlier, our approach relies on the fine-tuned Winapicallseq-finetuneGPT2 model, which works in combination with DistilBERT, BiGRU with attention, and fully connected neural networks to perform early malware detection based on API call predictions. As indicated in the results, we have set the initial length of the API call sequence to 20, and this sequence was fed to the fine-tuned model to predict the next 10, 20, and 30 API calls. As a result, we obtained three API call sequence lengths (30, 40, and 50), which were fed to the trained model for testing. However, it is important to note that the proposed fine-tuned model can also predict sequences longer than 30 API calls. Table <ref> presents the results achieved by the proposed approach based on dataset 1. The proposed approach achieved an accuracy of 91.14% and an AUC-ROC score of 0.9413 when detecting malware with an API call sequence length of 30. When the sequence length was increased to 40, the accuracy improved from 91.14% to 95.39%, while the AUC-ROC score also increased to 0.9427. Finally, when the sequence length was increased to 50, the detection accuracy improved again to 95.97%, resulting in an AUC-ROC score of 0.9477. These detection scores in Table <ref> indicate that the proposed approach is accurate in predicting and detecting malware based on the behaviors of API calls. They also show that longer sequences of API calls generally lead to better performance. The results in Table <ref> show that with an API call sequence length of 30, the proposed approach (model) achieved a precision score of 0.9976 for benign and 0.5019 for malware. The proposed approach also produced a recall score of 0.9049 for benign and 0.9778 for malware, while it achieved an F1-Score of 0.9490 (for benign) and 0.6633 (for malware) with the same sequence length. Precision scores of 0.9932 for benign and 0.6562 for malware were achieved with a length of 40. The model also reached a recall score of 0.9521 for benign and 0.9333 for malware with the same sequence length (40). In addition, the model achieved an F1-Score of 0.9722 for benign and 0.7706 for malware. 
On the other hand, the model obtained a precision score of 0.9933 for benign and 0.7079 for malware with a recall score of 0.9622 for benign and 0.9333 for malware on a sequence length of 50. In addition, an F1-Score of 0.9775 (benign) and 0.8051 (malware) was also achieved using the API call sequence of length 50. Overall, it can be observed that the precision scores for benign API calls are higher across all sequence lengths implying that the model is performing well in identifying benign API calls with a low rate of false positives. The results also indicate that the model correctly identifies malware samples, despite a few benign samples misclassified as malware. Table <ref> presents the obtained macro average and weighted average F1-score which shows that the model also performs well in both metrics (which is crucial for malware detection). This shows that the proposed model is more accurate in detecting malware while maintaining good performance. Furthermore, the weighted average for the F1-score increased from 0.9235 (using a sequence length of 30) to 0.9621 (with an API call sequence length of 50). The macro average recall remained consistent with the results indicating that the proposed model can identify most malicious activities. Furthermore, the TNR, FNR, TPR, and FPR scores presented in Figures <ref> and <ref> show good performance achieved EarlyMaldetect (proposed model). In general, the results suggest that increasing the API call sequence length improves the performance of the malware detection model. The proposed model has been also tested on dataset 2 and the results are presented in Tables <ref> <ref> and <ref>. Upon examining the results in these tables, there is a slight drop in the performance compared to the results obtained from dataset 1. For instance, the proposed model achieved an accuracy of 92.39% when detecting malware using dataset 2. The reduction in performance can be attributed to the concept drift, which refers to the changes in data distribution over time between the two datasets. That is, features and distributions of the data in the malware landscape can change significantly over time, i.e., the nature of malware changes as attackers develop new techniques and strategies to evade detection. Nevertheless, the overall results on both datasets prove significant performance achieved by the proposed model. § PERFORMANCE AGAINST DIFFERENT TECHNIQUES we have also implemented and evaluated the performance of different malware detection approaches against EarlyMalDetect. Each detection technique is tested using the same dataset to ensure consistency and fairness in the evaluation process. This is performed by analyzing and comparing various detection scores based on different metrics. By examining these detection models, we gain a better understanding of which technique may be the most suitable for detecting and preventing potential malware. Consequently, comparative results are presented in Tables <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>. The detection approaches presented in this section are based on different deep learning architectures such as a combination of DistilBERT with other architectures like LSTM (Long Short-Term Memory), CNN (Convolutional Neural Network), GRU (Gated Recurrent Unit), and their bidirectional variants. 
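To show how such comparative scores can be produced, the sketch below (our illustration; the label and probability arrays are hypothetical, not the authors' outputs) computes the metrics used in the following tables with scikit-learn.

import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support, roc_auc_score)

def detection_report(y_true, y_prob, threshold=0.5):
    """Detection scores used to compare the models (1 = malware)."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    report = {
        "accuracy": accuracy_score(y_true, y_pred),
        "auc_roc": roc_auc_score(y_true, y_prob),
        "tpr": tp / (tp + fn), "tnr": tn / (tn + fp),
        "fpr": fp / (fp + tn), "fnr": fn / (fn + tp),
    }
    for avg in ("macro", "weighted"):  # macro and weighted averages
        p, r, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average=avg, zero_division=0)
        report[f"{avg}_P"], report[f"{avg}_R"], report[f"{avg}_F1"] = p, r, f1
    return report

# Hypothetical labels and predicted malware probabilities for illustration.
y_true = [0, 0, 1, 1, 0, 1]
y_prob = [0.10, 0.40, 0.80, 0.70, 0.20, 0.60]
print(detection_report(y_true, y_prob))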
The classification report in Table <ref> shows that the DistillBert-LSTM detection approach(model) exhibits a balance between false positives and false negatives while maintaining high accuracy and a good AUC-ROC score, indicating a strong predictive capability. The DistillBert-CNN detection model has slightly higher accuracy than the DistillBert-LSTM. It also has a low FNR which means that it has fewer missed samples of the true positive class. On the other hand, the DistilBERT-BiLSTM model achieved the same accuracy as DistillBert-CNN, and it has a lower FPR. However, this model suffers from a relatively higher FNR, indicating more instances where it fails to recognize the positive instances. The AUC-ROC score is also lower compared to the first two models. Although the DistillBert-GRU detection model has the highest accuracy and lowest FPR (compared to the first three models), it has a considerably higher FNR, which means that it misses a substantial number of positive classifications. The AUC-ROC score is notably the lowest, which might indicate less consistency in the model's performance across different thresholds. On the other hand, the proposed EarlyMalDetect model (DistillBert-BiGRU-Attention) shows the highest accuracy and a satisfactory AUC-ROC score, implying that it generally performs well over other models. However, it is important to mention that it has a higher FPR compared to other models. In general, if we prioritize accuracy and a good balance between FPR and FNR, EarlyMalDetect is the most effective model. Table <ref> presents the detection scores for precision, recall, and F1-Score. The results show that the DistillBert-LSTM model has a higher precision for benign instances but a lower precision for Malware instances. The F1 score balances precision and recall and is quite high for both classes. The CNN-based model improves upon the LSTM-based model slightly in precision for benign instances and significantly for the malware class, indicating fewer false positives when predicting malware. The recall for malware is very high and the model has a higher F1-score for the malware than the LSTM-based model. With the DistilBERT-BiLSTM model, there is a slight decrease in precision for the benign compared to the previous models, but there is an increase in precision score for malware samples. The recall is high, although it is slightly lower for malware instances compared to the CNN-based model. The DistillBert-GRU model has a decrease in precision for benign compared to other models, but it has an improvement in precision for malware and its recall is more balanced between the two classes. However, the F1-score for the malware class is lower among the models, suggesting some trade-offs in precision and recall. The EarlyMalDetect model appears to deliver the most balanced performance across precision, recall, and F1-scores, especially for the malware, which is typically the most challenging class to predict in anti-malware detection techniques. We have also evaluated the above models by measuring both macro and weighted averages. The obtained results are presented in Table <ref>. The LSTM-based model has a good macro recall, but its macro precision is lower, indicating a higher number of false positives. The CNN-based model has slightly better macro precision and recall than the DistillBert-LSTM model. It appears to be strong at classifying instances correctly across different classes and it also performs well according to the obtained weighted metrics. 
The DistillBERT-BiLSTM model improves upon the macro precision, but it has a lower macro recall compared to the DistillBert-LSTM and DistillBert-CNN models. The DistillBert-GRU model achieves the highest macro precision, but it has the lowest macro recall among the models. Its weighted precision is lower, but it maintains a high weighted recall and F1-score. The EarlyMalDetect model outperforms all other models in both macro and weighted averages across all three metrics. It is also important to highlight that the classification reports presented in Tables <ref>, <ref>, and <ref>, also prove that the proposed model performs well over other detection models. Figure <ref> (a) shows the accuracy of all models. The results are based on dataset 1 and dataset 2. Accordingly, DistillBert-LSTM achieved an accuracy of 94.38% while DistillBert-CNN achieved an accuracy of 94.64%. DistilBERT-BiLSTM has reached an accuracy of 94.64% with the accuracy showing that the model based on BiLSTM can understand the context between API calls better than the standard LSTM-based model as they process data in both forward and backward directions. Furthermore, the DistillBert-GRU model achieved a slightly higher accuracy of 94.84%. The highest accuracy is achieved by the proposed approach with its performance reaching a detection accuracy of 95.97%. The attention mechanism allowed the model to focus on different parts of the input sequence, improving its ability to capture highly relevant patterns from the data. Additionally, the bidirectional approach, combined with the attention layer, allows this model to effectively contextualize the information from both directions and assign importance to different parts of the input sequence, resulting in the highest accuracy among the listed models. On the other hand, Figure <ref> (b) shows the performance of EarlyMaldet against other detection models on dataset 2. Overall, the relative performance trends between the models are consistent, with the proposed approach outperforming other models. §.§ Limitations of this Work Although this work has fine-tuned the proposed technique using 69,234 sequences representing malware and benign executable programs, the dataset size is still small given the large size of malware programs which is rapidly increasing. Thus, in future work, we will improve the performance of the proposed approach by increasing the size of the training set and then refine-tune the proposed model on large samples of API Call sequences. While the fine-tuned model can automatically generate new sequences and predict the next API calls in a given sequence, it is important to note that the fine-tuned model can generate irrelevant sequences in some cases. This is because language-based models can be biased when performing certain prediction tasks <cit.>. It is worth mentioning that, we will also plan to test the proposed approach on other large transformer models such as GPT-3.5 or GPT-4 on malware detection in Windows. § CONCLUSION In this work, we have presented a novel preventive and mitigation approach for early detection of malware attacks based on API call sequence prediction. Our proposed fine-tuned transformed model has demonstrated significant potential in accurately predicting the behaviors of malicious programs in Windows. The proposed approach is mainly based on sequence prediction with transformer models, bidirectional GRU with attention, and fully connected neural networks. 
Our results demonstrate better performance in detecting and classifying unknown Windows malware, making the proposed approach highly effective compared to existing malware detection approaches. We performed a comparison between different detection models, and the proposed approach demonstrated superior performance against them. By providing a preventive approach for predicting and detecting malicious behaviors of malware programs before they can infect the victim's system, the proposed approach can help to improve malware detection. The results provide new insights into malware detection in Windows. Therefore, we believe this work will be valuable to the scientific community and potentially impact the real-world application of malware detection in Windows. § BIOGRAPHY Pascal Maniriho received his B.Tech with Honors in Information and Communication Technology from Umutara Polytechnic, Rwanda, and his Master’s degree in Computer Science from Institut Teknologi Sepuluh Nopember (ITS), Indonesia, in 2013 and 2018, respectively. He has been working in academia in Information Technology since 2019. He is currently pursuing his Ph.D. degree in cybersecurity at La Trobe University, Australia. His research interests include malware detection, data theft prevention, information security, machine learning, and deep learning. Abdun Naser Mahmood received the B.Sc. degree in applied physics and electronics, and the M.Sc. (research) degree in computer science from the University of Dhaka, Bangladesh, in 1997 and 1999, respectively, and the Ph.D. degree from the University of Melbourne, Australia, in 2008. He is currently an Associate Professor with the Department of Computer Science, School of Engineering and Mathematical Sciences, La Trobe University. His research interests include data mining techniques for scalable network traffic analysis, anomaly detection, and industrial SCADA security. He is a senior member of the IEEE. Mohammad Jabed Morshed Chowdhury is currently working as an Associate Lecturer at La Trobe University, Melbourne, Australia. He earned his Ph.D. at Swinburne University of Technology, Melbourne, Australia. He earned his double Masters in Information Security and Mobile Computing from the Norwegian University of Science and Technology, Norway, and the University of Tartu, Estonia, under the European Union’s Erasmus Mundus Scholarship Program. He has published his research in top venues including TrustComm, HICSS, and REFSQ. He currently works on security, privacy, and trust, and has published research related to blockchain and cyber security in different top venues.
http://arxiv.org/abs/2407.12327v1
20240717055320
Spectra: A Comprehensive Study of Ternary, Quantized, and FP16 Language Models
[ "Ayush Kaushal", "Tejas Pandey", "Tejas Vaidhya", "Aaryan Bhagat", "Irina Rish" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL", "68T30", "I.2.6; I.2.7" ]
§ ABSTRACT Post-training quantization is the leading method for addressing memory-related bottlenecks in LLM inference, but unfortunately, it suffers from significant performance degradation below 4-bit precision. An alternative approach involves training compressed models directly at a low bitwidth (e.g., binary or ternary models). However, the performance, training dynamics, and scaling trends of such models are not yet well understood. To address this issue, we train and openly release the Spectra LLM suite consisting of 54 language models ranging from 99M to 3.9B parameters, trained on 300B tokens. Spectra includes FloatLMs, post-training quantized QuantLMs (3, 4, 6, and 8 bits), and ternary LLMs (TriLMs) - our improved architecture for ternary language modeling, which significantly outperforms previously proposed ternary models of a given size (in bits), matching half-precision models at scale. For example, TriLM 3.9B is (bit-wise) smaller than the half-precision FloatLM 830M, but matches half-precision FloatLM 3.9B in commonsense reasoning and knowledge benchmarks. However, TriLM 3.9B is also as toxic and stereotyping as FloatLM 3.9B, a model six times larger in size. Additionally, TriLM 3.9B lags behind FloatLM in perplexity on validation splits and web-based corpora but performs better on less noisy datasets like Lambada and PennTreeBank. To enhance understanding of low-bitwidth models, we are releasing 500+ intermediate checkpoints of the Spectra suite at https://github.com/NolanoOrg/SpectraSuite. § INTRODUCTION The FLOPs, memory capacity, and memory bandwidth of GPUs keep increasing exponentially, doubling every 1.26, 2, and 2.9 years, respectively <cit.>, i.e. the compute capabilities (FLOPs) are growing faster than memory capacity and bandwidth. In Large Language Model (LLM) inference, the primary bottlenecks are caused by model size (bits), which affects memory usage (memory capacity) and data transfer to processors (memory bandwidth). These issues are becoming more critical than the growing number of model parameters, which affects the computational limits (FLOPs). For instance, state-of-the-art LLMs such as the 340B Nemotron 4 <cit.> have sizes (in bits) exceeding the memory capacity of data center GPU systems such as 8xH100 nodes. Token generation speed, or latency, is now limited by memory bandwidth <cit.>. Addressing these bottlenecks requires more expensive training, exceeding Chinchilla's compute-optimal regime <cit.>, with billion-parameter models being trained on up to 15 trillion tokens <cit.>. Another popular but, as we show later, sub-optimal method is post-training quantization during deployment <cit.>. In post-training quantization, LLMs initially trained in 16-bit floating point (FP16/BF16) format (referred to as FloatLMs) have their parameters quantized, i.e. converted to a smaller bitwidth after training; we refer to the resulting models as QuantLMs. These models use optimized kernels for deployment, offering speedups nearly proportional to the compression factor <cit.>. However, very low bitwidths cause a significant mismatch between the pre-trained FloatLM representations and the deployable QuantLM, resulting in undesired behavior and quality degradation <cit.>. 
Some of the state-of-the-art methods <cit.> mitigate this issue by using calibration and re-training data from target domains; however, this increases the sensitivity to calibration data. For instance, simple choices like whether or not to length-normalize the calibration data can significantly impact QuantLM performance <cit.>. Other works have observed that QuantLMs at 4 bits (4-bit QuantLMs) have about 65% lower knowledge capacity per parameter compared to trained and aligned FloatLMs <cit.>. Another approach to reducing model size while maintaining parameter count is training neural networks with low effective bitwidths <cit.>. This approach offers compression benefits beyond post-training quantization without its drawbacks. Typically, low bitwidths such as binary or ternary quantization are used; however, binary quantization usually underperforms regular FP16 models <cit.>, while ternary modeling can match their performance while still saving considerable memory; thus, in this paper, we focus on ternary networks. For instance, BitNet b1.58 <cit.> demonstrates that LLMs trained from scratch with low effective bitwidths (1.58 bits) follow similar scaling laws as FloatLMs <cit.> and perform competitively at the 3B+ parameter scale. Despite this, the relative performance of low-bitwidth language models compared to QuantLMs at similar size (bits) and similar parameter counts remains unclear. This is a crucial unanswered question, given the significant cost of training LLMs at very large scales. Additionally, the training dynamics and optimization schedules for these low-bitwidth models remain poorly understood due to a lack of publicly available suites of during-training vs post-training quantized models, and associated comparative studies. The challenges mentioned above were the primary motivation for our work, which resulted in the following set of contributions presented in this paper. Spectra LLM suite. We present Spectra, the first open suite of LLMs spanning multiple bit-widths, including FloatLMs, the corresponding QuantLMs at 3, 4, 6, and 8 bits, and ternary LLMs (TriLMs). The latter use ternary weights {-1, 0, +1}, like BitNet b1.58. The suite features 9 models ranging from 99M to 3.9B parameters, all trained on the same 300B token dataset, totaling 54 models. Demonstrating advantages of the TriLM architecture and training dynamics for ternary language modeling. We empirically demonstrate the superiority of TriLM's approach over BitNet b1.58, even though TriLM is simpler and more stable. We also highlight the critical role of its optimization schedule and discuss TriLM's training dynamics, including a sudden loss drop at the halfway point and accelerated convergence in the final third. All TriLM models were trained on the same data in the same order. We release over 500 intermediate checkpoints from TriLMs and FloatLMs in the Spectra suite. Evaluation and comparative analysis of TriLMs, FloatLMs, and QuantLMs at different scales. We evaluate TriLMs, FloatLMs, and QuantLMs across multiple benchmarks spanning commonsense, reasoning, knowledge capacity, and toxicity, as well as on validation perplexity both in-domain and out-of-domain. Across commonsense and reasoning, as well as knowledge capacity benchmarks, TriLMs at the billion-parameter scale consistently outperform, for a given size in bits, their QuantLM and FloatLM counterparts (see Figure <ref>). At the 3.9B parameter scale, TriLM matches FloatLM 3.9B across benchmarks despite having fewer bits than FloatLM 830M. 
However, a few challenges remain. For example, TriLM 3.9B exhibits the same level of toxicity and stereotyping as FloatLM 3.9B, significantly higher than a similarly sized FloatLM 830M. Across validation perplexity, TriLMs scale much better in terms of performance for their size (bits), and, with scale, the gap between FloatLM with a similar number of parameters also starts to decrease. While TriLM 3.9B and FloatLM 3.9B show similar validation perplexity on less noisy datasets, such as Penn Tree Bank and Lambada, a gap persists at this scale on web corpora, both in-domain (i.e., on a test subset of SlimPajama, same domain used to train the models) and out-of-domain (e.g., on Dolma, C4 and RefinedWeb datasets). For detailed perplexity results, see the appendix <ref>. Overall, we believe that the Spectra LLM suite makes a valuable contribution to LLM research community, as it enables comparative studies, examines ternary modeling's scalability and efficiency, aids in developing new low-bitwidth training techniques, and enhances interpretability from neuronal to connection levels. § MEMORY BOTTLENECKS AND LOW-BITWIDTH LANGUAGE MODELLING <cit.> recently observed that given the slower pace of improvements in memory and communication as compared to compute (FLOPs), the bottleneck continues to shift away from computation towards memory-related characteristics of hardware for deploying large language models. In this section, we start by expanding their analysis to a wider range of recent datacenter General Purpose GPUs (GPGPUs) used for neural network development and research since 2018 from multiple hardware providers. We consider different configurations across the recent microarchitectures. These include Volta (V100 SXM/PCIe) <cit.>, Ampere (A100 40GB/80GB SXM/PCIe) <cit.>, Hopper (H100 SXM/PCIe, H200) <cit.> and Blackwell <cit.>[At the time of access, preliminary specifications for Blackwell were subject to change.] from Nvidia, and also the MI200 Series (MI210, MI250, MI250X) <cit.>, and MI300 Series (MI300A, MI300X, MI325X) <cit.> from AMD. Additionally, we consider Gaudi 2 and Gaudi 3 <cit.> from Intel, as well as TPUv3 <cit.>, TPUv4 <cit.>, TPUv5 (TPUv5e, TPUv5p) <cit.> from Google. We obtained all our data from their respectively cited datasheets, documentation or press releases. Over the past several years, each of these four accelerator families has improved in three areas - FLOPS, memory capacity, and bandwidth. For our analysis, we consider the configurations of transformers in the LLaMa-family, Falcon-180B, and Nemotron 340B. Since, larger vocabulary in LLMs are becoming common for efficient multilingual modeling, we use the vocabulary size of 128k from LLaMa 3 for our analysis. We assume the Embedding and LM Head weights are retained in Half-Precision across all bitwidths for these analysis. In Figure <ref>, we show the trends of Memory Capacity over Peak TFLOPS (Half Precision - FP16/BF16) for various accelerators over the years. We also perform a linear fit for each family of accelerators separately. The linear fit for all the families has a downward slope, showing that memory capacity is improving at a slower pace than computation capability. This trend holds true even for the most recent hardware, such as Blackwell, MI325X, and Gaudi3. Though we consider Half-Precision TFLOPs, the slope is expected becomes steeper when considering peak TFLOPS over Ampere sparse or FP8. 
In Figure <ref>, we show the size of models (in GB) across parameter count for two low bitwidth modeling scenarios, TriLM and QuantLM 4-Bit along with the standard half-precision FloatLM. For simplicity, we do not consider the overhead of KV Cache, activations, and compilation overhead incurred during model deployment. The FloatLM model starts to reach the capacity of a single H100 at just 34B parameters. At 340 Billion (the size of Nemotron 4) is more than the capacity of a single 8xH100 node. QuantLM 4-Bit scales better, easily supporting the deployment of a 70 billion parameter model (like largest LLaMa 1 and 2) on a single H100 and 300B parameter models on a single MI300X. However, TriLMs with more than 300 billion parameters, with appropriate packing, can fit on a single H100. This feature makes TriLMs especially crucial for deployment at the edge, where devices have less than 8GB or 16GB of RAM, shared across the operating system and multiple applications. In Figure <ref>, we show the trends of Memory Bandwidth (specifically for DRAM or its equivalent memory) over FLOPs for the accelerators over the years, along with the linear fit for each family. We observe a downward slope here as well, indicating the trend that memory bandwidth is growing much slower than computation. <cit.> established the memory wall in autoregressive LLM computation. They found that the speed of token generation is bottlenecked by the rate at which data is fed from memory to processors, rather than the processing speed of the hardware. As a result, the autoregressive decoding of LLM inference can have a theoretical speedup proportional to its compression factor. Various efficient inference kernels over quantized models have realized this speedup in low batch settings across a variety of hardware. This includes CPUs [https://github.com/ggerganov/llama.cpp], consumer GPUs [https://github.com/turboderp/exllamav2] and data center GPUs <cit.>. However, since TFLOPS to bandwidth ratio is up to 500 times, this ideal speedup can also be achieved in much higher batch settings encountered in LLM deployment. Open-source kernels like Marlin <cit.> have demonstrated that these ideal speedups can also be consistently realized in high batch size scenarios and sustained over longer periods of time. In Figure <ref>, we show the (theoretically) maximum possible speedup relative to FP16 at varying parameter counts for QuantLM 4-Bit and TriLM. Even at 7 billion parameters, TriLMs can be more than 4 times faster at autoregressive decoding than FloatLM and 2 times faster than QuantLM 4-bit. While QuantLM 4-Bit plateaus at a maximum possible speedup factor of 4x, TriLMs plateau much higher at 10x for FloatLM. § TRILM: TERNARY LANGUAGE MODEL In this section, we present the architectural and optimization details of the TriLM (Ternary Language Model). The following subsections provide an in-depth analysis of the architectural choices distinguishing TriLM from BitNet, as well as optimization strategies employed during training. §.§ Architecture TriLM is LLaMa-style <cit.> autoregressive transformers <cit.> model with RMSNorm <cit.> instead of LayerNorm <cit.>, SwiGLU Gated MLP <cit.> instead of standard transformer MLP, Rotary Position Embedding (RoPE) <cit.>, Multi-Headed Attention and no bias terms. In TriLMs, the weights of linear layers are represented in one of three possible ternary states {-1, 0, 1}, along with an additional floating point number called `scale' shared across the matrix. 
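The sizing argument above can be reproduced with a short back-of-the-envelope script. The sketch below is our own illustration under the stated assumptions (128k vocabulary, embedding and LM head kept in FP16, KV cache and activation overheads ignored) and uses approximate effective bitwidths of 16, 4.25, and 1.58; the example parameter count and hidden size are hypothetical, so the printed numbers are rough estimates rather than measured figures.

# Rough model size and bandwidth-bound decode speedup, under the assumptions
# stated in the text: 128k vocabulary, embedding and LM head kept in FP16,
# no KV cache or activation overhead.
EFFECTIVE_BITS = {"FloatLM": 16.0, "QuantLM 4-bit": 4.25, "TriLM": 1.58}

def model_size_gb(total_params, hidden_dim, bits, vocab=128_000):
    embed_params = 2 * vocab * hidden_dim          # embedding + LM head (FP16)
    body_params = total_params - embed_params      # quantized / ternarized part
    total_bits = body_params * bits + embed_params * 16
    return total_bits / 8 / 1e9

# Hypothetical 70B-parameter LLaMa-style model with hidden size 8192.
params, hidden = 70e9, 8192
fp16_size = model_size_gb(params, hidden, 16.0)
for name, bits in EFFECTIVE_BITS.items():
    size = model_size_gb(params, hidden, bits)
    # Autoregressive decoding is memory-bandwidth bound, so the maximum
    # possible speedup over FP16 is roughly the compression factor.
    print(f"{name:>14}: {size:6.1f} GB, ~{fp16_size / size:.1f}x max decode speedup")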
During training, the latent (or master) weights are maintained in floating point precision, allowing for the accumulation of small updates over iterations that eventually contribute to a switch in the estimated ternary state of a parameter. During forward pass, the floating point latent weights are ternarized on the fly. This is done by first computing the scale to the absolute mean of the latent weights, then estimating the ternary state of a parameter by rounding off to the nearest ternary state after scaling. In the backward pass, a straight-through estimator is used to estimate backward pass on the floating point latent weights. During inference, ternarized states and scale needs to be estimated only once - allowing for more than 10x reduction in model size and inference time at larger scales. A formal description of these forward pass, backward pass, and inference time equations is provided in the Appendix (<ref>). Across all our experiments the embedding and language model head are represented in half-precision floating point. Since, training of TriLMs requires computing of scale on the fly, synchronizing for a single scalar across devices in model parallel training <cit.> can cause significant communication overhead. Thus, we let each device independently compute scale over its own shard of the matrix. These lead to additional artifacts, similar to BitNet, where the number of scalar values for each matrix is same as the degree of model parallelism used during training. This leads to negligible increase in size - in our case, only 6 additional scalar values for each matrix with millions of parameters. Differences from BitNet Architecture r0.4 < g r a p h i c s > Performance across various architectures - TriLM 1.1B, FloatLM 1.1B, BitNet b1.58 1.1B (our replication) along with reported scores of BitNet b1.58 at 700M and 1.3B params. Scores are averaged across 6 common sense and reasoning benchmarks, mentioned in Table <ref>. TriLM differs from BitNet b1.58 in several ways for better performance as well as for fairer comparison with FloatLMs. Figure <ref> shows the commonsense and reasoning performance of TriLM 1.1B, FloatLM 1.1B and our replication of BitNet b1.58's architecture at 1.1B scale, along with the reported performance for BitNet b1.58 700M and 1.3B. All these models have been trained for 100B tokens. Our BitNet replication achieves performance between the 700M and 1.3B models. However, all the BitNet models, including the larger 1.3B parameter model performs worse than TriLM 1.1B. It should be noted that at this 1.1B scale TriLMs does not achieve parity with FloatLMs of same parameter count. Table <ref> lists detailed performance of these models across common sense benchmarks. Following are the key differences in TriLM's architecture. We follow GPT3's Pre-Normalization <cit.> approach to normalize before each linear layer - this was observed to be crucial for stable training in FP16. Thus, normalization is done twice in each transformer layer, at the input representations to the two sub-layers - attention and Gated MLP. This is in contrast to BitNet, where before each linear layer (i.e. 4-7 times per transformer layer depending on the implementation), the activation (or intermediate representations) are normalized, scaled and quantized to 8 bits. We use RMSNorm with a scale parameter over the parameterless RMSNorm. 
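The ternarization just described can be sketched in a few lines of PyTorch. The snippet below is our paraphrase of the procedure (absmean scale, rounding to the nearest ternary state, straight-through estimator), not the authors' released implementation; the formal equations are given in the appendix, and at inference the ternary states and scale would be computed once and packed rather than recomputed on every forward pass.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryLinear(nn.Module):
    """Linear layer whose FP latent weights are ternarized on the fly."""
    def __init__(self, in_features, out_features, eps=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * in_features ** -0.5)
        self.eps = eps

    def forward(self, x):
        w = self.weight
        # Scale: absolute mean of the latent weights, computed on the fly.
        scale = w.abs().mean().clamp(min=self.eps)
        # Nearest ternary state in {-1, 0, +1} after scaling, then rescale.
        w_ternary = torch.round(w / scale).clamp(-1, 1) * scale
        # Straight-through estimator: the forward pass uses the ternarized
        # weights, while gradients flow to the FP latent weights unchanged.
        w_ste = w + (w_ternary - w).detach()
        return F.linear(x, w_ste)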
§.§ Optimization Schedule Optimization of low bitwidth neural networks (such as in Quantization Aware Training) <cit.> requires a set of consideration like higher initial learning rate and reduced weight decay. Our optimization schedule for TriLM closely follows that of BitNet <cit.> consisting of two interventions in a vanilla linear decay learning rate scheduling with warmup and weight decay (L2 Regularization). (1) Peak LR - at roughly the halfway point, we reduce the peak learning rate. (2) L2 Reg. - at roughly two-thirds of the training, we remove the weight decay regularization as ternarization provides sufficient regularization <cit.>. Figure <ref> demonstrates ablation run performed for a 1.1B parameter model on 100B tokens with both, only-one and neither of these interventions. l0.53 < g r a p h i c s > Training loss for a 1.1B parameter TriLM, across different optimization schedules. We intervene for combinations of two hyperparameters peak learning rate and L2 regularization. Intervention for both hyperparameter given best training loss. Among these four runs, we notice the lowest final training loss when both, the L2 Regularization and Peak LR are intervened, closely followed only L2 Regularization being intervened and then only Peak LR being intervened. Dropping the peak LR at halfway point leads to a quick sharp drop in training loss. Similar phenomena have also been observed in schedules with small episodes of fast learning rate decaying like MiniCPM <cit.>. On the other hand, removing L2 regularization, or weight decay, leads to accelerated convergence, which can even mostly have the same effect as lowering peak LR leading to a quick drop in loss. These relative training loss observation at 100B tokens also go hand in hand with relative downstream performance across commonsense and reasoning tasks, which are listed in Table <ref>. Thus, we fix the TriLM optimization schedule. We drop in the peak learning rate at the halfway mark Weight decay is removed at the two-thirds mark. § SPECTRA SUITE: SPANNING PARAMETERS AND BITWIDTHS The Spectra suite includes comprehensive families of Large language models designed to span different parameter counts and bit-widths. This suite includes three main model families: TriLMs, FloatLMs, and QuantLMs (3, 4, 6, and 8 bits). Drawing inspiration from established model suites such as those by <cit.>, Spectra aims to facilitate scientific research on low-bitwidth LLMs. §.§ Overview of Spectra Suite r 0.42 < g r a p h i c s > The Spectra Suite spans across two dimensions of parameters and scale. Each point corresponds to a language model in the spectra suite. The Spectra suite stands out with several key properties: * Scale: The suite spans a broad spectrum of scales across parameter count (99M to 3.9B), sizes (9*10^8 to 6.4*10^10 bits) and bitwidths (1.58 bits to 16 bits). * Uniform Training: All models are trained using identical data sequences. * Public Accessibility: The training data is publicly available for study. * Consistent Model Size Mapping: All models across the families maintain a consistent one-to-one mapping for parameter count. Each model family within Spectra spans from 99M to 3.9B parameters, covering nearly two orders of magnitude in size. All the TriLMs and FloatLMs are trained on a standardized 300B subset of Slim Pajama <cit.> dataset, ensuring training consistency. QuantLMs undergo quantization using the same calibration data, maintaining uniformity in model quantization procedures. 
Data ordering and batch sizes are also kept consistent within each model family to support reproducibility and comparability in research efforts. Figure <ref> demonstrates the Spectra LM suite spanning across two dimensions - size (bits) and parameters. For each parameter count, we have 6 models across different bitwidths. Due to availability of FloatLM, Spectra can easily be extended with new QuantLMs by using different Post Training Quantization methods. The architectural and optimizer hyperparameters across the families of models are detailed in the Appendix (<ref>). §.§ FloatLM and QuantLM FloatLMs: We utilize LLaMa-style <cit.> architecture akin to TriLM. In FloatLMs, parameters in the weight matrices of linear layers are represented as floating-point numbers (FP16/BF16). The optimization schedule for FloatLM follows a cosine decay scheduling with weight decay and includes a learning rate warmup. This methodology is consistent with the practices established in models such as Pythia, OLMo, LLM360. For more details, refer to the Appendix (<ref>). QuantLMs: Recently, Data-aware quantisation techniques like GPTQ <cit.> have emerged as efficient solutions for near-lossless weight quantization down to 4-bit precision <cit.>. In our work, we implemented GPTQ post-training quantization to FloatLM, creating the QuantLM family of models across 3, 4, 6, and 8 bits. We quantized all transformer layer weights. For 3-bit and 4-bit quantization, we employ a group size of 128, which results in effective bit rates of 3.25 and 4.25 bits per parameter, respectively. We've refined our approach by incorporating best practices from recent research <cit.>, particularly in terms of calibration data and scaling it to a million tokens for improved reconstruction. To ensure a fair comparison with TriLM, we maintain certain components in their original precision. Specifically, we do not quantize the embedding, language model head, or activations. Additionally, we use symmetric quantization (without zero offset) as it is simpler, is supported by fast inference kernels <cit.> and offers similar performance to assymmetric quantization (with separate zero offsets in addition to scale for each group). It also offers consistency and a fairer comparison with TriLMs. It's worth noting that our Spectra suite is designed with flexibility in mind, allowing for easy extension to other quantization methods as needed. §.§ Training Dynamics and Scaling Laws Figure <ref> shows the training loss curves for all the TriLMs trained and Figure <ref> shows relative training loss of a TriLM to two smaller FloatLMs. The loss curves demonstrate a continuous and consistent improvement in TriLMs with increase in parameter count. Furthermore, since the TriLMs were all trained on same data, with same ordering, minor spikes and drops in training loss are consistently observed at all scales at a given token count. It should be noted that the two largest models - TriLM 2.4B and TriLM 3.9B also showcase one large spike in training loss each in the first half of training. Upon dropping the peak learning rate at halfway point, a sharp drop (spanning over a course of only a few hundred million tokens) in training loss is observed. While, for the larger TriLMs (2.4B and 3.9B), rate of decrease in loss after this sudden drop reverts back to the same as before halfway-mark, it plateaus for the smaller ones (1.1B and 1.5B). In fact, for TriLMs with less than a Billion parameters, training loss starts to increase after this. 
At two-thirds mark, when weight decay is removed, all models start to converge faster, and this is most pronounced for the largest TriLM models. Figures <ref> and <ref> show the final validation loss across size (in bits) and parameters respectively. When measuring performance in terms of size (crucial for output generation phase of inference), TriLMs, with increasing size, offer much better performance at same number of bits. Specifically, at the size of TriLM 3.9B, these ternary models start offering better performance than models, more than five times their size. In this work, scaling laws for FloatLM and TriLMs (up to the 3.9B parameter scale) show FloatLMs as a better choice. The difference between the two, however, considerably narrows at Billion+ parameter scale; the trends show the potential for TriLMs to meet (or even outperform) FloatLMs of same parameter count. Despite the gap in validation loss, we will later observe that TriLMs offer competitive downstream performance with FloatLMs of same parameter count across a variety of benchmarks in commonsense, reasoning and knowledge based tasks. In appendix (<ref>), we show that that gap in perplexity is also observed across other overlapping web based datasets like (Dolma, RefinedWeb), however the gap is not present for less noisy data, like Penn Tree Bank and OpenAI's Lambada. §.§ Advancing Research through Open Access: The open suite of TriLM, FloatLM, and QuantLM families aims to empowers researchers to explore the nuanced impacts of precision levels on model performance and efficiency, thereby catalyzing ongoing advancements in the development and deployment of language models, as well as enhancing their interpretability and safety. By providing a range of publicly accessible models trained on openly available data, the suite offers unprecedented transparency in the training process. Intermediate checkpoints are available for all models, accompanied by detailed documentation of training procedures and hyperparameters. This comprehensive suite enables researchers to investigate the capacities and limitations of TriLMs at various scales, thus facilitating advancements in model development, and safety. § EVALUATION We evaluate the families of LLMs on three aspects - commonsense & reasoning tasks, knowledge based tasks, and toxicity, all of which are crucial measures of their downstream performance. Readers may refer to appendix for more details regarding the benchmarks (<ref>). §.§ Commonsense and Reasoning We assess the models using eight distinct commonsense and reasoning benchmarks consisting of tasks from logical and reasoning questions to grounded and physical commonsense tasks: Arc Easy, Arc Challenge <cit.>, BoolQ <cit.>, HellaSWAG <cit.>, WinoGrande <cit.>, PIQA <cit.>, LAMBADA <cit.>, LogiQA <cit.>, all under zero-shot settings. Figures <ref> and <ref> display the average performance of the LLMs on first six benchmarks (the same benchmarks as those reported for BitNet b1.58) across size (bits) and params. Figures <ref> and <ref> present the performance for the LAMBADA dataset. TriLMs consistently demonstrate superior performance for their size across all benchmarks at the 2.4B and 3.9B parameter scales. At the largest scale of 3.9B, TriLM surpasses FloatLM on LAMBADA and achieves competitive average scores across six benchmarks. Additionally, TriLMs at the largest scales consistently outperform 4-bit QuantLMs of equivalent parameter count. 
However, across the considered scales, all LLMs show poor performance on LogiQA, making it difficult to identify a clear performance trend. For detailed benchmarking across all datasets, refer to Tables <ref> and <ref>. §.§ Knowledge Several downstream practical uses of LLMs requires LLMs to have knowledge about common subjects like science or topics like political figures. We evaluate the performance of LLMs on SciQ <cit.>, TriviaQA <cit.> and MMLU <cit.> benchmarks in zero-shot settings. Figures <ref> and <ref> shows the accuracy of the LLMs on SciQ across size (bits) and parameter counts. Figures <ref> and <ref> does the same for TriviaQA, while <ref> and <ref> does so for MMLU. Across both the benchmarks, at large 2.4B+ scales, TriLMs offer the best performance at a given size (bits). Surprisingly, despite having fewer bits, the knowledge capacity of TriLM do not have any significant degradation as observed in case of QuantLMs <cit.>. Low-bitwidth LLMs like TriLMs have similar knowledge capacity to FloatLMs, indicate that knowledge capacity is parameterized via presence and nature of a connection (+1 or -1), rather than its strength. Tables <ref> and <ref> expands on these results. §.§ Toxicity We evaluate the Spectra suite across various safety and toxicity benchmarks of TruthfulQA <cit.>, Big Bench BBQ Lite <cit.> and CrowsPairs <cit.>. These scores are listed in the Appendix in Table <ref>. We observe that none of the LLMs, even at largest sclaes of 3.9B parameter with 300B tokens perform significantly better than random guessing on TruthfulQA. Across the remaining two datasets, we observe that toxicity and stereotypes correlate with LLMs capability across other tasks. Specifically, TriLMs at less than Billion parameter scale are less stereotyping than FloatLMs of same parameter count, however the difference closes with scale and TriLM 2.4B and TriLM 3.9B start performing equally biased as FloatLM 2.4B and FloatLM 3.9B across these benchmarks. This also highlights that it implies TriLMs are far more stereotyping than FloatLMs of similar size (bits), at par with FloatLMs of similar parameter counts. § RELATED WORK Training Language Models At Lower Precision: Several notable language models such as GPT <cit.>, NeoX <cit.> and Pythia families have been trained using mixed precision (FP32/FP16 or FP32/BF16) <cit.> or fully half-precision (FP16/BF16) <cit.>. Recent line of works on BitNet <cit.> and BitNet b1.58 <cit.> leverage strategies native to training extremely low bitwidth networks <cit.> for transformer based language models. These studies demonstrate that low-bitwidth language models scaling trends are similar to those of floating point language modeling. In their work, models are trained at low “effective” precision of binary and ternary respectively - where the latent (or master) weights during training are maintained in higher precision like FP16. The model weights are binarized or ternarized on the fly during the forward pass and gradients are backpropagated for the latent weights using the straight-through estimator <cit.>. Prior works emphasize the importance of maintaining latent (or master) weights at high precision to allow accumulation of small updates during training - for example, <cit.> observed significant performance drop on language model when the latent (or master) model weights were switch from 16-bits (FP16/BF16) to 8-bits (FP8) during training. 
Concurrent architectural improvements such as Flash Attention <cit.>, mixture of experts <cit.> and state space modeling <cit.> complement these advancements in lower precision modeling. Quantization of Large Language Models after Training: Post-training quantization (PTQ) algorithms convert a pretrained high-precision model (FP32 / FP16 / BF16) into a lower precision format without requiring the original training process<cit.>. These methods can be either data-independent or need a small calibration dataset. <cit.> observed the sensitivity to calibration datasets. Post-training quantization of LLMs is additionally difficult due to presence of numerical outliers in weights and activations <cit.>. GPTQ <cit.> is a state-of-the-art one-shot weight quantization method aimed at finding a matrix of quantized weights (say Ŵ) that minimizes the squared error relative to the full precision layer output. This can be expressed mathematically as: min_Ŵ | Wx - Ŵx |_2^2, where W represents the weight and x the activation. By leveraging second-order information, GPTQ derives a closed-form solution to this optimization problem. Other methods <cit.> emphasize the importance of outlier weights that correspond to high-magnitude activations. some methods <cit.> also quantised activation along with the weights. § CONCLUSION We introduce the Spectra suite, an open family of LLMs across varying bitwidths, consisting of ternary LLMs (TriLMs), FP16 LLMs (FloatLM) as well as their quantized QuantLMs (3, 4, 6 and 8 bits) all pretrained on same 300B tokens of data. We also present our improved and simplified TriLM architecture for ternary language modeling that offers stable training at FP16 precision. Our evaluation of these models demonstrate that low bitwidth language models like TriLMs offer better performance for their size than quantized models at Billion+ parameter count. The TriLM 3.9B specifically achieves competitive performance to FloatLM 3.9B (a model much larger than TriLM 3.9B) across various benchmarks of commonsense & reasoning and knowledge based tasks. These results underscore the potential of TriLMs in addressing bottlenecks in LLM inference, stemming from memory capacity and bandwidth, better than QuantLMs. We open-source over 500 checkpoints (including intermediate training checkpoints) of the Spectra suite to further research on better understanding these models, their training dynamics, current optimization bottlenecks as well as finer-grained interpretability methods that leverages their ternarized structure. § BROADER IMPACT Interpretability Beyond Neuron Level: While several efforts have been made to understand how language models work and means to steer them without training, these methods have mostly focussed on intervening at neuron level. TriLMs opens a new degree of interpretability - at the connection level. Here, the connections between any two neurons in a layer are in one of the three states - 0 (no connection), -1 (negative connection) and +1 (positive connection), each with equal strength. This is in sharp contrast to FloatLMs, where these connections can be of varying strengths, making it harder to study interpretability beyond neuron level. By releasing the checkpoints across our training runs, we facilitate research along these directions. Environmental Benefits and Resource Efficiency: The open release of our models mitigates future emissions by allowing others to bypass the need for pretraining models from scratch. 
Moreover, TriLMs require far fewer resources to deploy and can perform autoregressive generation at a faster pace - making them critical for scenarios with strict latency requirements. Additionally, TriLMs represent a substantial advancement in enhancing performance on resource-constrained edge devices, including smartphones, laptops, and automobiles. Impact on Specialised Hardware: While TriLMs offer significant memory reduction and latency improvements on general-purpose GPUs like the H100 and RTX4090, certain specialized hardware benefits even more from ternary modeling. Hardware (like Cerebras[https://www.cerebras.net/product-chip/]) that supports computations with a high byte-to-flop ratio can leverage the sparsity stemming from ternarization for speedups in both training and inference. On the other hand, hardware with limited memory/SRAM (like Groq[https://groq.com/]) benefits from a reduction in the number of chips needed to deploy an LLM. Reduced Training Costs: The Chinchilla scaling laws established that, for training compute optimality, it is preferable to train larger LLMs for fewer tokens rather than smaller LLMs for more tokens to achieve a desired model performance. However, the memory requirements and latency associated with deploying larger models have motivated costlier training runs that go far beyond Chinchilla optimality. For example, a LLaMa 3 model with only 8B parameters was trained for 15T tokens. Since TriLM and ternary models in general can reduce memory requirements and latency, this can motivate a shift in the parameter-token tradeoff for efficient training runs back towards Chinchilla's compute-optimal regime. § ACKNOWLEDGEMENT We acknowledge the support from the Mozilla Responsible AI Grant, the Canada CIFAR AI Chair Program and the Canada Excellence Research Chairs Program. This research was enabled by the computational resources provided by the Summit supercomputer, awarded through the Frontier DD allocation and INCITE 2023 program for the project "Scalable Foundation Models for Transferable Generalist AI" and the SummitPlus allocation in 2024. These resources were supplied by the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, with support from the Office of Science of the U.S. Department of Energy. We extend special thanks to Jens Glaser for his assistance with the Summit and Frontier supercomputers. § ARCHITECTURE AND PRETRAINING DETAILS This section provides a comprehensive overview of the architectural design and pretraining of TriLM (Ternary Language Model) and FloatLM (Floating Point Language Model). We outline the forward and backward pass equations specific to their linear layers, highlighting the contrast between the FP16 matrices in FloatLM and the ternary matrices with scalar scaling in TriLM. Additionally, it covers dataset selection, tokenizer usage, and preprocessing methods employed for training data preparation. These discussions provide information on pretraining setups, implementation nuances, and key hyperparameters critical to the models' development. §.§ Forward Pass, Backward Pass and Inference Equations Table <ref> shows the equations for TriLM vs FloatLM for the forward pass, backward pass and inference. Let the input be X ∈ R_b × n for a linear layer with FP16 weight matrix W ∈ R_m × n, and let Y ∈ R_b × m be the output. The same matrix W is also used to denote the latent weights in TriLMs during training.
For ternarized layers in TriLMs, we also have a scalar scale γ ∈ R, a matrix of ternarized states in {-1, 0, 1}_n × m and the corresponding ternarized matrix in R_n × m. We set ϵ=1e-5. §.§ Data and Tokenizer Dataset Selection: Due to the lack of availability of the Pile 300B <cit.> used in Pythia, we opted to use a 300B token sample of the deduplicated SlimPajama dataset[We also make this subset public], composed as follows - Arxiv 13B, Book 13B, C4 80B, Common Crawl 156B, GitHub 16B, Stack Exchange 10B and Wikipedia 12B tokens, for a total of 300B tokens. We sample from each subset with probability proportional to its size. Training Data Preparation: * Main experiments (Spectra suite): We used the full 300B token sample. * Ablation studies: For training runs with 100B tokens, we sample from these 300B tokens with equal probability weight for each data point. * Fine-Web Edu experiments: We tokenized one-third of a 350B token sample, from which we then sampled 100B tokens for our experiments. QuantLM: For the creation of QuantLM, we utilized a subset of the SlimPajama-627B dataset, consisting of 512 samples with a sequence length of 2048. These samples were normalized for length. Our approach closely follows the methodology outlined in <cit.>. Tokenizer and Optimization Techniques: We use the GPT-NeoX 20B tokenizer, following Pythia. To speed up training, we round the embedding size up to the nearest multiple of 128 times the model parallel size. §.§ PreTraining Setup We scale using 2D parallelism with Megatron-style sharding <cit.> and use DeepSpeed <cit.> with ZeRO stage 2 <cit.>. Our implementation is based on the GPT-NeoX codebase <cit.>. We use AdamW <cit.> for optimization. We train on nodes with IBM Power9 CPUs and 6x 16GB V100 GPUs. Due to the lack of BFloat16 support on V100, we train both TriLM and FloatLM in FP16 using mixed precision training and dynamic loss scaling. Please refer to <ref> for more implementation-specific details. We extensively use Huggingface <cit.> and Wandb <cit.> for handling checkpoints and experiment tracking. §.§ Hyperparameters Table <ref> shows the hyperparameters for TriLM and FloatLM's transformer architecture and their learning rates. The Adam β values are set to (0.9, 0.95) for both families of models, and all reported runs are trained with a sequence length of 2048. FloatLM and TriLM are trained with batch sizes of 2M and 1M tokens, respectively. §.§ Known Implementation Artifacts * Similar to BitNet <cit.>, our models have artifacts from model parallelism. Specifically, computing the scale γ across the entire weight matrix - which has been sharded across multiple devices - requires a costly communication overhead from all-reduce. In our implementation, we compute these scales over the portion of the weight matrix local to each device. Thus, for inference over TriLM models, scales should be independently computed over each model parallel group (a small sketch of this per-shard scale computation is given after this list). It should be noted that this results in a negligible change in effective bits/parameter of <10^-5, even at the highest model parallelism of 6 for our largest model. * Because we train in FP16, we expect some artifacts from training. However, we do not expect a meaningful performance difference from mixed precision training with BF16 or even FP32, because the lowest values of the loss scale observed during any of the runs were at least as high as the recommended 128 <cit.>. Moreover, BitNet b1.58 (Section 3) compared models to their reproduced FP16 LLaMA LLM. Thus, our setting closely resembles theirs.
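As a concrete illustration of the first artifact above, the following sketch contrasts a single global absmean scale with scales computed independently on each model-parallel shard. The column sharding and tensor shapes are arbitrary assumptions for the example, not our production setup.

import torch

def absmean_scales(w_full: torch.Tensor, model_parallel_size: int, eps: float = 1e-5):
    """Return the global absmean scale and the per-shard scales obtained when
    the weight matrix is column-sharded across `model_parallel_size` devices."""
    global_scale = w_full.abs().mean() + eps
    shards = torch.chunk(w_full, model_parallel_size, dim=1)
    per_shard = [shard.abs().mean() + eps for shard in shards]
    return global_scale, per_shard

# Example: per-shard scales differ slightly from the global one, so inference
# must reproduce the same per-shard computation that was used during training.
W = torch.randn(4096, 12288)
global_scale, per_shard = absmean_scales(W, model_parallel_size=6)
print(global_scale, per_shard)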
§ BENCHMARK DETAILS We benchmark TriLM, FloatLM and QuantLM across Knowledge, Commonsense, Reasoning and Toxicity benchmarks. We average our scores across 3 different `seeds' by preparing three different QuantLM models quantized using different calibration sets. We also add the Pythia suite of models (70M to 2.8B params; deduplicated, with a consistent 2M batch size across families) and BitNet b1.58 performance scores from their paper for comparison. We use the LM Evaluation Harness <cit.> for benchmarking. §.§ Commonsense and Reasoning We report commonsense and reasoning scores across the 6 benchmarks previously considered by BitNet b1.58 in Table <ref> and the rest in Table <ref>. Each is considered in a zero-shot setting. Following are the details of each benchmark considered: * ARC Challenge and Easy: <cit.> The ARC dataset comprises 7787 multiple-choice science questions divided into two sets: Challenge and Easy. We calculate accuracy and normalised accuracy across both of these sets. * BoolQ: <cit.> BoolQ is a reading comprehension dataset consisting of naturally occurring yes/no questions. We calculate the accuracy on this task. * HellaSwag: <cit.> HellaSwag is a dataset of multiple-choice questions for testing grounded commonsense. The incorrect options are generated through Adversarial Filtering (AF) to fool machines but not humans. We calculate the accuracy and normalised accuracy on this task. * WinoGrande: <cit.> WinoGrande is a collection of 44k problems for testing commonsense reasoning, formulated as a fill-in-a-blank task with binary options. We report the accuracy on this task. * PIQA: <cit.> Physical Interaction Question Answering (PIQA) is a physical commonsense reasoning benchmark dataset to test the physical knowledge of language models. We calculate the accuracy and normalised accuracy on this task. * LAMBADA OpenAI: <cit.> LAMBADA is a dataset to evaluate text understanding by next word prediction. It is a collection of narrative passages from BooksCorpus. To succeed on LAMBADA, models must integrate broader discourse information, not solely rely on local context. We calculate the perplexity and the accuracy of the model on this task. * LogiQA: <cit.> LogiQA is a dataset for testing human logical reasoning. It contains questions spanning multiple types of deductive reasoning. We calculate the accuracy and normalised accuracy on this task. §.§ Knowledge We report performance on SciQ, TriviaQA and MMLU in Tables <ref>, <ref> and <ref>. Each is considered in a zero-shot setting. The knowledge-based evaluation included the following tasks: * SciQ: <cit.> The SciQ dataset contains multiple-choice questions with 4 answer options from crowd-sourced science exams. The questions span Physics, Chemistry, Biology and several other fields. We calculate the accuracy and length-normalised accuracy on this task. * TriviaQA: <cit.> TriviaQA is a reading comprehension dataset containing question-answer-evidence triples. We calculate the exact-match accuracy on this task. * MMLU <cit.>: The benchmark aims to assess the knowledge gained during pretraining by evaluating models solely in zero-shot and few-shot scenarios. It spans 57 subjects, including STEM fields, humanities, social sciences, and more. §.§ Toxicity We report the toxicity-based evaluation in <ref>. Each is considered in a zero-shot setting.
The toxicity-based evaluation included the following tasks: * BBQ <cit.>: The Bias Benchmark for QA (BBQ) dataset comprises sets of questions developed by its authors, focusing on documented social biases directed towards individuals from protected classes across nine distinct social dimensions pertinent to U.S. English-speaking environments. * Crows Pairs <cit.>: A challenge dataset aimed at quantifying stereotypical biases embedded within language models, with a specific emphasis on U.S. contexts. Hosted on GitHub, this dataset serves as a crucial resource for assessing and addressing biases through paired sentences that illuminate societal stereotypes. * TruthfulQA <cit.>: A benchmark designed to evaluate the truthfulness of language models in generating responses to questions. This benchmark includes 817 questions across 38 categories, such as health, law, finance, and politics. §.§ Perplexity on other datasets We measure perplexity using TriLM 3.9B and FloatLM 3.9B across various corpora other than SlimPajama, which was used for training - OpenAI Lambada, Penn Tree Bank, C4, Cosmopedia, Dolma, S2Orc, Wikipedia and RefinedWeb. Portions of Wikipedia and C4 are included in SlimPajama. Other corpora like Dolma and RefinedWeb may also overlap with C4, Wikipedia and Common Crawl. Figure <ref> demonstrates that while TriLM 3.9B performs similarly to or better than FloatLM 3.9B on PTB and Lambada, its performance is consistently worse across the other datasets, which potentially overlap with SlimPajama - indicating a lower capability to memorize training data as well as worse in-distribution performance, despite competitive out-of-distribution performance. § ABLATIONS Table <ref> shows the performance of the ablation 100B-token training runs over the six commonsense benchmarks from BitNet b1.58 at 1.1B parameters. The first two rows show the performance of TriLM 1.1B and FloatLM 1.1B at this token count, followed by our replication of BitNet b1.58 (Ours) as well as the scores from BitNet b1.58 at 700M and 1.3B parameters. We observe that at this scale, TriLM does not come close to matching the performance of FloatLM, but it outperforms much larger BitNets. The next two rows show the performance of TriLM 1.1B and FloatLM 1.1B when trained on 100B tokens of FineWeb instead of SlimPajama. While the performance of both models improves on FineWeb, the averaged difference in their performance across datasets remains the same. Lastly, we show the performance across various optimization schedules. A significant drop in averaged performance is noticed when the baseline schedule of linear decay with constant weight decay is used. The gain from dropping l2 regularization from the schedule is larger than that from dropping the peak learning rate, but it is not enough to match TriLM 1.1B's schedule. § ILLUSTRATIVE EXAMPLES OF TRILM 3.9B'S COMPLETION CAPABILITIES We showcase instances of outputs produced by TriLM (3.9B) across diverse tasks, highlighting its proficiency in tasks such as comprehension, prompt completion, and creative composition. [title=Generated Output on Reading Comprehension by TriLM (3.9B), colframe=gray!100!black, colback=gray!5!white, coltitle=white, colbacktitle=green!45!black, ] Title: The Blitz Background: From the German point of view, March 1941 saw an improvement. The Luftwaffe flew 4,000 sorties that month, including 12 major and three heavy attacks. The electronic war intensified but the Luftwaffe flew major inland missions only on moonlit nights.
Ports were easier to find and made better targets. To confuse the British, radio silence was observed until the bombs fell. X- and Y-Gerät beams were placed over false targets and switched only at the last minute. Rapid frequency changes were introduced for X-Gerät, whose wider band of frequencies and greater tactical flexibility ensured it remained effective at a time when British selective jamming was degrading the effectiveness of Y-Gerät. Q: How many sorties were flown in March 1941? A: 4,000 Q: When did the Luftwaffe fly inland missions? A: Only during moonlit nights Title: Helsinki Background: Helsinki is the capital and largest city of Finland. It is in the region of Uusimaa, in southern Finland, on the shore of the Gulf of Finland. Helsinki has a population of , an urban population of , and a metropolitan population of over 1.4 million, making it the most populous municipality and urban area in Finland. Helsinki is some north of Tallinn, Estonia, east of Stockholm, Sweden, and west of Saint Petersburg, Russia. Helsinki has close historical connections with these three cities. The Helsinki metropolitan area includes the urban core of Helsinki, Espoo, Vantaa, Kauniainen, and surrounding commuter towns. It is the world’s northernmost metro area of over one million people, and the city is the northernmost capital of an EU member state. The Helsinki metropolitan area is the third largest metropolitan area in the Nordic countries after Stockholm and Copenhagen, and the City of Helsinki is the third largest after Stockholm and Oslo. Helsinki is Finland’s major political, educational, financial, cultural, and research center as well as one of northern Europe’s major cities. Approximately 75% of foreign companies that operate in Finland have settled in the Helsinki region. The nearby municipality of Vantaa is the location of Helsinki Airport, with frequent service to various destinations in Europe and Asia. Q: What is the most populous municipality in Finland? A: Helsinki Q: How many people live there? A: 1.4 million in the metropolitan area Q: What percent of the foreign companies that operate in Finland are in Helsinki? A: 75% Q: What towns are a part of the metropolitan area? A: Espoo, Vantaa, Kauniainen, and surrounding commuter towns [title=Auto-complete on a creative writing task by TriLM (3.9B), colframe=gray!100!black, colback=gray!5!white, coltitle=white, colbacktitle=green!45!black, breakable, label=Tab:User:readingcomprehension1] Essay Writing In the year 2150, humanity had established colonies on Mars. Captain Ramirez was on a routine mission to explore an uncharted region of the planet when he and his crew were attacked by Martian natives who wanted them dead for their technology. The captain's ship crashed into the surface of Mars where it exploded in flames. The survivors managed to escape from the wreckage but they soon found themselves stranded with no way out or back home. They would have to survive alone until rescue came. But what if there is nothing waiting for them? What if they are trapped forever? Poem Writing In a garden where the roses bloom, Underneath the silver moon, A gentle breeze begins to sway, And whispers secrets of the day. The stars above start to appear, While shadows dance and disappear, The night unfolds in its own time. The sun rises over the horizon, As it does every morning, But this is no ordinary dawn. It's a new beginning for all who are here today. For those who have come before us, And those yet to be born. 
We celebrate life with music and laughter, With friends old and new. This is our moment to shine! [ title=Auto-complete on various prompts by TriLM (3.9B), colframe=gray!100!black, colback=gray!5!white, coltitle=white, colbacktitle=green!45!black, breakable, label=Tab:User:readingcomprehension2 ] Making a cake: Several cake pops are shown on a display. A woman and girl are shown making the cake pops in a kitchen. They use a mixer to mix ingredients, then knead dough by hand. The women roll out the dough with a rolling pin, cut it into shapes using cookie cutters, bake them, and decorate them with frosting or sprinkles. The energy given up by electrons as they move through the electron transport chain is used to produce ATP. In this process, oxygen and water are produced in a reaction called cellular respiration. When talking about sensitive topics, it is crucial to be aware of the fact that there are many different types of people. Some will react negatively and others positively. It's important not to take offense when someone says something you don't like. If someone disagrees with you, it is best to ignore them. The most important thing in life is not what we have but who we are and how we treat others.
http://arxiv.org/abs/2407.12199v1
20240716215033
The Gelfand-Tsetlin basis for irreducible representations of an infinite-dimensional general linear group
[ "Evgenii Movchan" ]
math.RT
[ "math.RT", "math.GR" ]
The Gelfand–Tsetlin basis for irreducible -modules]The Gelfand–Tsetlin basis for irreducible representations of an infinite-dimensional general linear group E. Movchan]Evgenii Movchan [Evgenii Movchan]evgenymov@gmail.com § ABSTRACT We consider the problem of constructing a Gelfand–Tsetlin basis in irreducible representations of an infinite-dimensional general linear group. For a finite-dimensional irreducible representation of a general linear group, all elements of the Gelfand–Tsetlin basis are parameterized by Gelfand–Tsetlin schemes. We extend this definition to infinite Gelfand–Tsetlin schemes, which in turn parameterize elements of the Gelfand–Tsetlin basis of an irreducible representation of an infinite-dimensional complete linear group. Using the properties of colimits of representations with the highest weight, we present an explicit form of the Gelfand–Tsetlin basis. [ [ July 16, 2024 ================= arabic § INTRODUCTION Let {j}_j = 0 ^n be a family of finite-dimensional reductive complex Lie algebras, such, that 0 = 0. For this family, let's consider a chain of embeddings [column sep = 4em, row sep = 2em] 0[hook]rι_0 1[hook]rι_1 2[hook]rι_2 ⋯[hook]rι_n-1 n. According to Weyl's theorem, any finite-dimensional complex representation V of the algebra n is completely reducible. In other words, it has a unique decomposition up to isomorphism V ≅⊕_λV^λ^ ⊕ a_λ, where V^λ are irreducible finite-dimensional complex representations of the algebra n, and a_λ are some natural numbers. We will call the numbers a_λ the multiplicities of occurrence of the representation V^λ in V. Any irreducible finite-dimensional complex representation V^λ of the algebra n is uniquely (up to isomorphism) determined by its highest weight λ. Using a chain of embeddings (<ref>) for each such representation V^λ, considering its decomposition V^λ≅⊕_T V_T into irreducible 0-modules and choosing in each one-dimensional subspace V_T = ⟨ e_T ⟩ a basis vector e_T, we can construct a natural basis {e_T}⊂ V^λ, called the Gelfand-Tsetlin basis. The canonicity of this basis depends on the multiplicities a_λ in the expansion (<ref>). In particular, for the case of interest to us n = n, all multiplicities a_λ≡ 1 (see <cit.> ch.5). This basis was first constructed by Gelfand and Tsetlin in their papers <cit.>. Such natural basis turned out to be very useful both for pure mathematics and theoretical physics. This theory was later developed by Zhelobenko, and in 1962, a method for constructing lowering operators for representations n was described in the work <cit.> (reducing operators first appeared in the work <cit.>). Using this lowering operators, it was possible to construct a Gelfand–Tsetlin basis in all finite-dimensional complex representations of the classical Lie algebras A_n, B_n, C_n, D_n, which is described, for example, in the works <cit.>. The key property of these operators is the following: let V^λ be an irreducible finite-dimensional complex representation of the reductive Lie algebra n. By Weyl's theorem, there is decomposition of Res^n_n-1 V^λ≅⊕_μV^μ^ ⊕ a_μ to irreducible n-1-modules. Reducing operators z_n i∈ U(n), with index 1 ≤ i ≤ n-1, define maps (under certain conditions on the weights) z_n i: V^λ⟶ V^λ: z_n i· v_λ = v_λ - δ_i, where λ - δ_i is the highest weight obtained from λ by replacing the term λ_i with λ_i+1, and v_λ and v_λ - δ_i are highest weight vectors, respectively. 
By acting with these operators with different indices i on the highest weight vector v_λ of the representation V^λ, the entire Gelfand–Tsetlin basis of this representation can be obtained. We will be interested in the algebra , for which it is not clear what the phrase “irreducible 𝔤𝔩_∞ - 1ℂ-module” means. In this case, the theory of lowering operators described above does not directly suit us. The observation inspired by the Yangian theory turns out to be productive (see <cit.>). As shown in the works of Nazarov and Tarasov <cit.>, for a fixed algebra n and a representation V^λ there are operators {A_m (u)}_m = 1 ^n, called quantum minors of the L-operator, such that the Gelfand–Tsetlin basis is their eigenvectors. In other words, all operators {A_m (u)}_m = 1 ^n act diagonally in the Gelfand–Tsetlin basis. Thus, knowing the operators {A_m(u)}_m=1^n and their eigenvalues {λ_m(u)}_m=1^n, we can reduce the problem of constructing the Gelfand–Tsetlin basis to a spectral problem. Such an approach is convenient when the action of lowering operators on the highest weight vector is not defined in principle. This approach was used in the works <cit.>, where the elements of the Gelfand–Tsetlin basis were directly defined as eigenvectors of quantum minors {A_m(u)}_m=1^n. So, to construct the Gelfand–Tsetlin basis in irreducible representations of the algebra , together with the well-known theory of lowering operators z_n i, we need the theory of quantum minors A_m(u). Using these two approaches, it will be possible to reduce the problem to a finite-dimensional case. More information about these operators and the correctness of the objects they define will be written below. This work has the following structure. In section 2, we note the general facts of the representation theory of the Lie algebra n. In section 3, using the combinatorial Gelfand–Tsetlin schemes, an explicit formula of the Gelfand–Tsetlin basis in irreducible representations of the algebra n is given. Next, the infinite-dimensional group and the algebra are formally defined, and the equivalence of their representations is noted. In section 4, an important concept of the Gelfand–Tsetlin algebra n is introduced, with the help of which an alternative definition of the Gelfand–Tsetlin basis is given as a basis, in which the algebra n acts diagonally. Finally, we formally define the infinite Gelfand–Tsetlin algebra . In section 5, the polynomial representations of the algebra are defined as solutions to a universal problem similar to the finite-dimensional case. Their existence and correct certainty are shown. In section 6, containing the main original results of this work, by analogy with the finite-dimensional case, we define infinite Gelfand–Tsetlin schemes, with the help of which we construct the Gelfand–Tsetlin basis in polynomial representations of the algebra . In parallel, it is proved that the polynomial representations of the algebra are irreducible representations with highest weight. In the end, we note that the ideas used can be applied to construct the Gelfand–Tsetlin basis in irreducible representations with highest weight of the colimits of all classical Lie algebras. § REPRESENTATION THEORY OF GENERAL LINEAR LIE ALGEBRA As is known, all irreducible finite-dimensional complex representations (more precisely, their isomorphism classes) of the algebra n are in one-to-one correspondence with ordered sets λ=(λ_1,…,λ_n), called highest weights, such that λ_1 ≥…≥λ_n and ∀ i ∈{ 1, …, n }λ_i ∈. 
Denote by V^λ the irreducible n-module indexed by the highest weight λ. Let λ_m ≥ 0 and λ_m+1≤ 0, define partitions μ =μ_1 ≥…≥μ_m and ν =ν_m+1≥…≥ν_n of the number n, such that λ_1 = μ_1, …, λ_m =μ_m and λ_m+1 = -ν_m+1, …,λ_n = -ν_n. Let's identify partitions and Young diagrams. Thus, any irreducible n-module V^λ is parameterized by a pair of Young diagrams V^μν = V^λ. For k≥ 0, we define the determinant representation D_k = (⋀^n^n)^⊗ k as the k-th tensor degree of the n-th wedge power of the tautological representation ^n of the algebra n. For k<0, we define the determinant representation D_k = D_-k^* as dual to D_-k. Using the determinant representation, it is not difficult to prove the following isomorphism of representations: V^μν≅ V^μν⊗ D_-k for k - ν_m+1≥ 0, where μ = λ_1 + k ≥…≥λ_m + k and ν = 0 ≥…≥ 0 = 0. Representations V^μν, whose partitions ν=0, are called polynomial representations. As can be seen from (<ref>), any irreducible representation V^μν is isomorphic to the tensor product of the polynomial and determinant representations. We will be interested in the basis in the representations V^μν, and the key observation here is that all determinant representations are one-dimensional, that is, dim D_k = 1 for all k. Thus, without loss of generality, for an arbitrary irreducible finite-dimensional n-module V^λ, it can be assumed that λ is a Young diagram (which is further always assumed). This observation is the motivation to consider only the irreducible polynomial representations of the algebra . For the algebra n and fixed Young diagram λ of m cells, we define the representation (^n)^×λ as the Cartesian product of m copies of the tautological representation ^n, indexed by the cells of the diagram λ (see <cit.> ch.8). Thus, the elements of the representation (^n)^×λ are diagrams λ, in each cell of which an element from ^n is written. For example, for λ = (2,2,1), an arbitrary element w∈ ( ^n )^×λ has the form mathmode, boxframe=normal, boxsize=2em w = v_1 v_2 v_3 v_4 v_5  , where v_i are some elements from ^n. Let's call the map f: ( ^n )^×λ F into some vector space F symmetrizing if it satisfies the conditions: 1) f– multilinear, 2) f– skew-symmetric across elements in the same column, 3) ∀ w ∈ ( ^n )^×λ f(w) = ∑_w'f(w'), where summation is implied by those w'∈ ( ^n )^×λ resulting from the w from exchange between two fixed columns, with the selected subset of cells in the right of the selected columns. For example, for λ = (2,2,1), selecting the entire right column, we get f(   v_1 v_2 v_3 v_4 v_5  ) = f(   v_2 v_1 v_4 v_3 v_5  ) + f(   v_1 v_3 v_2 v_5 v_4  ) + f(   v_2 v_1 v_3 v_5 v_4  ). The polynomial representations V^λ of the algebra n are solutions to the following universal problem: if Φ_uni : (^n)^×λ V^λ is a symmetrizing map, such that for any symmetrizing map Φ:(^n)^×λF, there is a single map φ : V^λF, such that Φ=φ∘Φ_uni. In other words, for any Φ and F as above, there is a single map φ, that makes the following diagram commutative [column sep = 4em, row sep = 4em] ( ^n )^×λrΦ_uni[swap]drΦ V^λ[dashrightarrow]dφ V^λdρ F, 𝕊_λ ( ^n ), where ρ : V^λ𝕊_λ ( ^n ) is an isomorphism of representations, and 𝕊_λ ( ^n ) is a Schur functor. § AN INFINITE-DIMENSIONAL GENERAL LINEAR GROUP AND ITS LIE ALGEBRA As mentioned in the introduction, the Gelfand–Tsetlin basis was constructed by Gelfand and Tsetlin for all irreducible finite-dimensional n-modules V^λ. The elements of this basis are naturally parameterized by combinatorial objects called Gelfand–Tsetlin schemes. 
For a fixed Young diagram λ = λ_1≥…≥λ_n the Gelfand–Tsetlin scheme is an ordered set of elements λ_i j Λ = [ [ λ_n 1 λ_n 2 ⋯ ⋯ λ_n n; λ_n-1,1 λ_n-1,2 ⋯ λ_n-1, n-1 ; ⋯ ⋯ ⋯ ; λ_2 1 λ_2 2 ; λ_1 1 ; ]], where for all 2 ≤ j≤ n and 1 ≤ i ≤ j - 1 the following relations hold λ_i j∈ℕ, λ_n j = λ_j, λ_i j≥λ_i - 1, j and λ_i - 1, j≥λ_i, j + 1. As it was said, the set of Λ_n (λ) of all Gelfand–Tsetlin schemes of the form λ of the algebra n sets the natural parametrization of the Gelfand–Tsetlin basis in the irreducible n-module V^λ. Thus, for each Gelfand–Tsetlin scheme Λ is assigned an element e_T ∈ V^λ of the Gelfand–Tsetlin basis, hereinafter referred to as e_Λ. Using Gelfand–Tsetlin schemes and lowering operators, it is possible to write the Gelfand–Tsetlin basis of the representation V^λ explicitly (see <cit.>). So, for a fixed Gelfand–Tsetlin scheme Λ, the element e_Λ of the Gelfand–Tsetlin basis has the form e_Λ = ∏_2 ≤ k ≤ n^⟶∏_i = 1^k-1 z_k i^λ_k i - λ_k-1, i· v_λ, where the terms in the ordered product are ordered according to the increasing index k, v_λ is the highest weight vector of the representation V^λ, and the lowering operators z_k i∈ U(n) have the form z_k i = ∑_i < i_1 < … < i_p < k E_i_1 i· E_i_2 i_1·…· E_i_p i_p-1· E_k i_p· (E_i i - E_j_1 j_1 + j_1 - i) ·…· (E_i i - E_j_q j_q + j_q - i), where the sum is calculated over all p∈ℕ, and the set {j_1, …, j_q} is the complement to the subset {i_1,…, i_p} in the set {i+1, …,k-1 }. Next, we will use an explicit formula for the Gelfand–Tsetlin basis (<ref>) in an irreducible representation of the algebra n to construct the Gelfand–Tsetlin basis in irreducible representations of the algebra , but for now let's define how we understand the group and the algebra . To do this, let's consider a commutative diagram [column sep = 4em, row sep = 4em] 0 [hook]rd_e ι_0 1[hook]rd_e ι_1dexp 2[hook]rd_e ι_2dexp ⋯[hook]rd_e ι_n-1 n[hook]rd_e ι_ndexp ⋯ 0 [hook]rι_0 1[hook]rι_1 2[hook]rι_2 ⋯[hook]rι_n-1 n[hook]rι_n ⋯, where exp is an exponential map, ι_n-1 : n-1 n: A ↦[ A 0; 0 1 ] – injection, and d_eι_n-1 is its differential in identity e. Colimit of the bottom chain of embeddings of the diagram (<ref>) we will understand as the group =n, and the copy of the upper chain of embeddings as the algebra =n, respectively. The group and the algebra can be perceived as a set of infinite matrices in which almost all elements are equal to zero, and in the case of a group, almost all elements on the diagonal are equal to one. In other words, the following equalities take place = { (a_i j)_(i, j) ∈ℕ^*^2 | ∀ (i, j) ∈ℕ^*^2 ∃ N ∈ℕ^*: if i+j > N a_i j = δ_i j}, = { (a_i j)_(i, j) ∈ℕ^*^2 | ∀ (i, j) ∈ℕ^*^2 ∃ N ∈ℕ^*: if i+j > N a_i j = 0}, where ℕ^* is the set of natural numbers without zero, and δ_i j is the Kronecker symbol. Note that usually an infinite-dimensional general linear group and the symbol are understood to be a group whose elements are infinite in both directions, that is, they have the form (a_i j)_(i, j)∈ℤ^2 (see <cit.> ch.4). Thus, it would be more correct for us to use the notation GL_∞/2ℂ and 𝔤𝔩_∞/2ℂ. However, ignoring the possible confusion, we will use the above notation and . Since n is a connected Lie group, any irreducible representation V^λ is also an irreducible representation of its Lie algebra n and vice versa, and the following diagram is commutative [column sep = 4em, row sep = 4em] nrd_e ρ_λ[swap]dexp 𝔤𝔩(V^λ) dexp nrρ_λ GL(V^λ), where ρ_λ is the homomorphism of the irreducible representation V^λ. 
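Before continuing, the combinatorics of the finite Gelfand–Tsetlin schemes introduced above admits a quick numerical sanity check: the number of schemes with a fixed top row equals the dimension of the corresponding irreducible 𝔤𝔩_n-module given by Weyl's dimension formula, and also equals the number of semistandard Young tableaux of that shape with entries in {1, …, n}. The following sketch performs this check for one arbitrary example weight; it is purely illustrative and not part of the construction.

from itertools import product

def interlacing_rows(row):
    """All integer rows mu of length len(row)-1 with row[i] >= mu[i] >= row[i+1]."""
    ranges = [range(row[i + 1], row[i] + 1) for i in range(len(row) - 1)]
    return [list(mu) for mu in product(*ranges)]

def gt_schemes(top_row):
    """Enumerate all Gelfand-Tsetlin schemes with the given top row (a partition)."""
    top_row = list(top_row)
    if len(top_row) == 1:
        return [[top_row]]
    schemes = []
    for mu in interlacing_rows(top_row):
        for rest in gt_schemes(mu):
            schemes.append([top_row] + rest)
    return schemes

def weyl_dim_gl(lam):
    """dim V^lambda via Weyl's formula: prod_{i<j} (lam_i - lam_j + j - i)/(j - i)."""
    n = len(lam)
    num, den = 1, 1
    for i in range(n):
        for j in range(i + 1, n):
            num *= lam[i] - lam[j] + j - i
            den *= j - i
    return num // den

lam = (3, 1, 0)   # an arbitrary highest weight for n = 3
assert len(gt_schemes(lam)) == weyl_dim_gl(lam) == 15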
The Lie group is connected as the colimit of connected Lie groups n. Indeed, let's consider arbitrary elements A, B∈. We want to show that there is a smooth path γ : [0,1], such that γ(0)= A and γ(1)=B. By definition of the colimit, for the elements A and B there is a smooth map ϕ_n : n and the elements A, B∈n, which are preimages of the elements A and B. In other words, ϕ_n(A) = A and ϕ_n(B) = B. But the Lie group n is path-connected, which means that there is a smooth path γ : [0,1]n, such that γ(0)=A and γ(1) = B. Let's define the path γ : [0,1] as a composition γ = ϕ_n ∘γ. Thus, all elements of the Lie group are connected with a smooth path, which means that the group is path-connected as a smooth manifold, thus connected. The commutativity of diagrams (<ref>) and (<ref>) induces the commutativity of the colimit diagram [column sep = 4em, row sep = 4em] rd_e ρ[swap]dexp 𝔤𝔩(V) dexp rρ GL(V), where V is some irreducible representation of , and ρ is its homomorphism. Thus, speaking of irreducible modules, without loss of generality, we can consider only irreducible modules. § AN INFINITE-DIMENSIONAL GELFAND–TSETLIN ALGEBRA AND THE QUANTUM MINORS OF THE L-OPERATOR In addition to the usual definition of the Gelfand–Tsetlin basis of the irreducible representation V^λ of the algebra n through the restriction on 0-modules described in the introduction, it is possible to define the Gelfand–Tsetlin basis as the basis on which some algebra acts diagonally. We will be interested in the second definition, because it is much more convenient to work with colimits with it. An algebra acting diagonally on the Gelfand–Tsetlin basis is called the Gelfand–Tsetlin algebra n of the algebra n. In our case, the Gelfand–Tsetlin algebra n is some subalgebra of the universal enveloping algebra U(n). Consider the chain of embeddings of universal enveloping algebras U(n) induced by the chain (<ref>) of algebras n [column sep = 3.74em, row sep = 2em] 0 [hook]rι^*_0 U(1) [hook]rι^*_1 U(2) [hook]rι^*_2 ⋯[hook]rι^*_n-1 U(n) [hook]rι^*_n ⋯, where ι^*_n-1 : U(n-1) U(n): E_i_1 j_1·…· E_i_s j_s↦ι^*_n-1 (E_i_1 j_1·…· E_i_s j_s) = d_e ι_n-1(E_i_1 j_1) ·…· d_e ι_n-1(E_i_s j_s) – injection. For the algebra n of the chain (<ref>) the Gelfand–Tsetlin algebra n⊂ U(n), generated by all centers of universal enveloping algebras U(k), where 0≤ k≤ n, is called the Gelfand–Tsetlin subalgebra. In other words, n = ⟨ Z( U(1) ), …, Z( U(n) )⟩, where Z(U(k)) is the center of the universal enveloping U(k). As mentioned above, a remarkable property of the Gelfand–Tsetlin algebra n is that it acts diagonally on the Gelfand–Tsetlin basis {e_Λ}_λ∈Λ_n(λ) of some irreducible n-module V^λ. This property is the motivation for introducing a similar definition of the Gelfand–Tsetlin basis for irreducible modules. Now let's introduce the definition of the L-operator and its quantum minors, which is important for further discussion (see <cit.>). For a fixed algebra n, an L-operator is a formal polynomial in the parameter u, called the spectral parameter, given by the formula L(u) = u + E, где E = ( [ E_1 1 E_1 2 … E_1 n; E_2 1 E_2 2 … E_2 n; ⋮ ⋮ ⋱ ⋮; E_n 1 E_n 2 … E_n n ]). For the L-operator and the index 1≤ m≤ n, the quantum minor is a formal polynomial in the spectral parameter u, given by the formula A_m (u) = ∑_σ∈𝔖_m(σ) L(u)_σ(1) 1·…· L(u - m + 1)_σ(m) m, where m is the permutation group of m elements, and (σ) is the sign of the permutation σ∈m. 
As can be seen from the definition, the quantum minor A_m(u) is a polynomial of degree m, and can be represented as A_m(u) =u^m +a_m 1u^m-1 +…+a_m m. The operator A_m(u) is the Capelli determinant of the algebra m, so its coefficients {a_m i}_i=1^m are generators of the center of the universal enveloping U(m), that is Z(U(m)) = ⟨{ a_m i}_i=1^m⟩. Therefore, the Gelfand–Tsetlin algebra n is generated by all coefficients {a_m i}_1≤ i≤ m≤ n. So, for any index 1≤ m≤ n, the quantum minors A_m(u) act diagonally in the Gelfand–Tsetlin basis {e_Λ}_λ∈Λ_n(λ) of irreducible representation V^λ of algebra n. Namely, there is a formula A_m (u) · e_Λ = ∏_i=1^m(u + λ_m i - i + 1) e_Λ. This property uniquely (up to multiplication by scalar) defines the Gelfand–Tetlin basis {e_Λ}_λ∈Λ_n(λ) in the irreducible representation of V^λ. Thus, the Gelfand–Tsetlin basis can be defined as a solution to the spectral problem (<ref>), without using lowering operators. This approach can be used in cases where the action of lowering operators on the highest weight vector is not defined in principle (see <cit.>). Let's return to the chain (<ref>). Note that for any n∈ℕ^*, the center of the universal enveloping U(n-1) is embedded in the center of the universal enveloping U(n), that is, ι^*_n-1 ( Z( U(n-1)) ) ⊂ Z(U(n)). Thus, we obtain a well-defined chain of embeddings of Gelfand–Tsetlin algebras, shown in the following commutative diagram [column sep = 4em, row sep = 4em] 0 [hook]rι^*_0 1[hook]rι^*_1[hook]did_1 2[hook]rι^*_2[hook]did_2 ⋯[hook]rι^*_n-1 n[hook]rι^*_n[hook]did_n-1 ⋯ 0 [hook]rι^*_0 U(1) [hook]rι^*_1 U(2) [hook]rι^*_2 ⋯[hook]rι^*_n-1 U(n) [hook]rι^*_n ⋯, Using the upper chain of embeddings of algebras n of the diagram (<ref>), it is possible to correctly define the infinite Gelfand–Tsetlin algebra . So, we will understand the copy of the upper chain as the algebra =n. It is easy to see that the Gelfand–Tsetlin algebra is generated by all coefficients of all quantum minors A_m(u), that is, =⟨{a_m i}_1≤ i≤ m≤∞⟩. Thus, the spectral problem (<ref>) can also be put on a basis in irreducible modules. Below, using the infinite Gelfand-Cetlin algebra , we, by analogy with the algebra n, define the Gelfand–Tsetlin basis in irreducible representations of the algebra . § POLYNOMIAL REPRESENTATIONS OF INFINITE-DIMENSIONAL GENERAL LINEAR LIE ALGEBRA Let's proceed to the consideration of the algebra that interests us. First of all, we will limit the class of irreducible representations we are considering. Namely, similarly to the finite-dimensional case, we define the highest weight as an ordered set λ = (λ_1,λ_2,…) such that λ_1 ≥λ_2 ≥… and ∀ i ∈ℕ^* λ_i ∈ℕ, where there exists a number N ∈ℕ^*, such that for any n > N all λ_n = 0. As in the case of n, we identify all the highest weights with the corresponding Young diagrams. Denote by ^∞ the direct sum of all one-dimensional subspaces spanned by vectors e_i, in other words ^∞ =⊕_i∈ℕ^*⟨ e_i⟩. Vectors e_i can be perceived as columns of dimension ∞× 1, in which there are zeros in each place, and one in the i-th place. In other words, e_i = (δ_i j)_j∈ℕ^*. Let's define the action of the algebra on the space ^∞ by the usual multiplication of a vector by a matrix, that is, E_i j· e_k = δ_j k e_i. The representation ^∞ will be called the tautological representation of the algebra . Recall that, in the case of the algebra n, any irreducible polynomial representation of it was a solution to a universal problem (<ref>). 
This observation motivates us to define irreducible polynomial representations V^λ of the algebra as solutions to a similar universal problem. Namely, we will call the representation V^λ of the algebra polynomial if there is a universal symmetrizing map Φ_uni : (^∞)^×λ V^λ. That is, such a map that for any symmetrizing map Φ : (^∞ )^×λ F, there exists a single map φ : V^λ F, such that Φ = φ∘Φ_uni. In other words, for any Φ and F as above, there is a single map φ that makes the following diagram commutative [column sep = 4em, row sep = 4em] ( ^∞ )^×λrΦ_uni[swap]drΦ V^λ[dashrightarrow]dφ F. Generally speaking, we have not yet defined the action of the algebra on the spaces V^λ, therefore, formally speaking, we do not have the right to call them -modules. However, despite the possible confusion, we will continue to adhere to the terminology introduced. So, having defined the representations V^λ, first of all we will show their existence and uniqueness (up to isomorphism), this will be the result of the following proposition. For any highest weight λ representations V^λ exist and are unique up to isomorphism. Let's fix the highest weight λ = λ_1 ≥…≥λ_N ≥ 0 ≥… of the representation V^λ. Let's call a map f semisymmetrizing if it satisfies only the first two conditions (<ref>) and (<ref>). Consider a similar universal problem with semisymmetrizing maps: if Φ_uni' : (^∞)^×λ W^λ is a semisymmetrizing map, such that for any semisymmetrizing map Φ : ( ^∞)^×λ F there is a single map φ' : W^λ F, such that Φ=φ'∘Φ_uni'. In other words, for any Φ and F as above, there is a single map φ' that makes the following diagram commutative [column sep = 4em, row sep = 4em] ( ^∞ )^×λrΦ_uni'[swap]drΦ W^λ[dashrightarrow]dφ' F. From the universal properties of the tensor product, it is obvious that the solution will be the space W^λ = ⊗_k=1^N⋀^λ_k^∞, and the universal map Φ_uni' is given by the formula Φ_uni': v_1^(1) v_1^(2) […] v_1^(N) v_2^(1) v_2^(2) […] v_λ_N^(N) [⋮] [⋮] v_λ_1^(1) v_λ_2^(2) ⊗_k = 1^N ⋀_i = 1^λ_k v_i^(k). Let's define the subspace U^λ of the space W^λ as the subspace generated by all the differences Φ_uni'(w) -∑Φ_uni'(w'), where summation is implied by those w'∈ (^∞ )^×λ, resulting from w by exchange between two fixed columns, with the selected subset in the right of the selected columns (see <ref>). As is known (see <cit.> Ch.3), the factorspace W^λ/U^λ has the following universal property: for the projection π^λ: W^λ W^λ/U^λ : w↦[w], which sends the element w into its conjugacy class [w], and any linear map φ' : W^λF there is a single linear map φ : W^λ/U^λ F, such that φ' = φ∘π^λ. In other words, for any φ' and F as above, there is a single φ map that makes the following diagram commutative [column sep = 4em, row sep = 4em] W^λrπ^λ[swap]drφ' W^λ/U^λ[dashrightarrow]dφ F. So, commutative diagrams (<ref>) and (<ref>) can be completed to the diagram (<ref>) with symmetrizing maps. In other words, the commutativity of diagrams (<ref>) and (<ref>) induces the commutativity of the following diagram [column sep = 4em, row sep = 4em] ( ^∞ )^×λrΦ_uni'[swap]drΦ[bend left]rrΦ_uni W^λrπ^λ[dashrightarrow]dφ' W^λ/U^λ[dashrightarrow]dlφ F, where Φ_uni = π^λ∘Φ_uni'. Thus, the commutative diagram (<ref>) gives a solution to the original problem, thereby proving the existence of polynomial representations V^λ. Namely, for each polynomial representation V^λ, there is an isomorphism of vector spaces V^λ≅( ⊗_k=1^N⋀^λ_k^∞) / U^λ. 
Finally, in accordance with this formula, we define the action of the algebra on the space V^λ, which justifies the name "polynomial representation". The action of the element g∈ is given as the usual action of the Lie algebra on the factor space and tensor product, and the action on the space ^∞ is given as the action of a tautological representation. In other words, the following formula holds g · [ v_1^(1)∧ v_2^(1)∧… ] = [ g · v_1^(1)∧ v_2^(1)∧… + v_1^(1)∧ g · v_2^(1)∧… + … ]. Now, let us prove the uniqueness up to isomorphism. Suppose the contrary: let there exist a polynomial representation V^λ for the same highest weight λ, which is not isomorphic to V^λ. Then V^λ is also a solution to the universal problem (<ref>), and for any Φ and F, the following diagram is commutative [column sep = 4em, row sep = 4em] ( ^∞ )^×λrΦ_uni[swap]drΦ V^λ[dashrightarrow]dφ F. Let Φ =Φ_uni and F=V^λ, then, by the universal property of V^λ, there is a single symmetrizing map of φ : V^λV^λ, making the following diagram commutative [column sep = 4em, row sep = 4em] ( ^∞ )^×λrΦ_uni[swap]drΦ_uni V^λ[dashrightarrow, shift left=0.5ex]dφ V^λ[dashrightarrow, shift left=0.5ex]uφ. Note that φ∘φ∘Φ_uni =φ∘Φ_uni =Φ_uni=id_V^λ∘Φ_uni, but the map φ∘φ is unique, which means it is equal to the identity φ∘φ = id_V^λ. Similar reasoning shows that φ∘φ = id_V^λ. Thus, V^λ≅ V^λ. We got a contradiction, which means the solution of a universal problem (<ref>) is the only one up to isomorphism. Thus, all polynomial representations of the algebra are correctly defined. Next, it will be shown that all polynomial representations of the algebra are irreducible and have a highest weight vector. It will also be shown that different highest weights define different (up to isomorphism) polynomial representations. § THE GELFAND–TSETLIN BASIS IN IRREDUCIBLE POLYNOMIAL REPRESENTATIONS Let's proceed to the construction of the Gelfand–Tsetlin basis in irreducible -modules. As mentioned earlier, we define the Gelfand–Tsetlin basis in the irreducible representation V similarly to the finite-dimensional case. Namely, we will call the basis {e_T} of the irreducible representation V of the algebra the Gelfand–Tsetlin basis if the Gelfand–Tsetlin algebra acts diagonally on it. Note that we do not yet know that the polynomial representations V^λ are irreducible. To prove this fact, we will construct a certain basis in the representation V^λ, which will soon turn out to be the desired Gelfand–Tsetlin basis. However, before that, we will define infinite Gelfand–Tsetlin schemes, which will index the elements of our basis. For a fixed highest weight λ = λ_1≥λ_2≥…, the infinite Gelfand–Tsetlin scheme is an ordered set of elements λ_i j Λ = [ [ ⋯ ⋯ ⋯ ⋯ ⋯; λ_n,1 λ_n,2 ⋯ λ_n, n ; ⋯ ⋯ ⋯ ; λ_2 1 λ_2 2 ; λ_1 1 ; ]], where for any 1≤ j≤ i≤∞ the elements of λ_i j are defined as the number of cells less or equal to i in the j-th row of some semi-standard Young tableau of the form λ (see <cit.>). For the polynomial representation V^λ with the highest weight λ =λ_1≥…≥λ_N≥ 0≥…, using the infinite Gelfand–Tsetlin schemes defined above, we define the elements e_Λ∈ V^λ by the formula e_Λ = ∏_2 ≤ k < ∞^⟶∏_i = 1^k-1 z_k i^λ_k i - λ_k-1, i· v_λ, where the terms in the ordered product are ordered according to the increasing index k, and the vector v_λ∈ V^λ has the form v_λ = [ (e_1 ∧ e_2 ∧…∧ e_λ_1) ⊗ (e_1 ∧ e_2 ∧…∧ e_λ_2) ⊗…⊗ (e_1 ∧ e_2 ∧…∧ e_λ_N) ]. 
Note that on the right side of the expression (<ref>) only a finite number of differences λ_k i - λ_k-1, i are not equal to zero. Therefore, in reality, the ordered product has only a finite number of terms, which means that the action on the vector v_λ is defined correctly. For the elements e_Λ, we formulate and prove the following lemma. For the polynomial representation V^λ, the family of vectors {e_Λ}_Λ∈Λ_∞(λ) indexed by all infinite Gelfand–Tsetlin schemes forms the basis of the vector space V^λ. Let 𝔥^n⊂n be the Cartan subalgebra, E_i j be the matrix unit, deg E_i j = j - i be the degree of the matrix unit and 𝔤_q =⟨{E_i j | deg E_i j = q }⟩ – span of matrix units of degree q over . For algebra n we have a triangular decomposition n=𝔫^n_-⊕𝔥^n⊕𝔫^n_+, where 𝔫^n_- = ⊕_n > q > 0𝔤_q, and 𝔫^n_+ = ⊕_n < q < 0𝔤_q. We introduce a similar decomposition for the algebra , namely, we define the Cartan subalgebra 𝔥^∞=𝔤_0 and the subspaces 𝔫^∞_-=⊕_q>0𝔤_q and 𝔫^∞_+ = ⊕_q < 0𝔤_q. Then there is a triangular decomposition =𝔫^∞_-⊕𝔥^∞⊕𝔫^∞_+. For the highest weight λ, we define representations V^λ_n=𝔫^n_-· v_λ of the algebra n. Now let's go back to the diagram (<ref>). Note that under the embedding d_e ι_n-1, the subspace 𝔫^n-1_- is embedded into 𝔫^n_-, that is, d_e ι_n-1(𝔫^n-1_-) ⊂𝔫^n_-. This observation induces the following diagram of embeddings [column sep = 4em, row sep = 4em] 0 [hook]rd_e ι_0[d, squiggly] 𝔫^1_- [hook]rd_e ι_1[d, squiggly] 𝔫^2_- [hook]rd_e ι_2[d, squiggly] ⋯[hook]rd_e ι_n-1 𝔫^n_- [hook]rd_e ι_n[d, squiggly] ⋯ V^λ_0 [hook]r#ι_0 V^λ_1 [hook]r#ι_1 V^λ_2 [hook]r#ι_2 ⋯[hook]r#ι_n-1 V^λ_n [hook]r#ι_n ⋯, where #ι_n is some embedding that sends highest weight vector v_λ∈ V^λ_n into the highest weight vector v_λ∈ V^λ_n+1. The lower chain of embeddings of spaces V^λ_n forms a directed system and has a colimit V^λ_n= V^λ. Hence, in particular, it can be seen that the polynomial representation V^λ is generated by the action of the subspace 𝔫^∞_- on the vector v_λ, that is, V^λ=𝔫^∞_- · v_λ. Let's define the degree degΛ of the Gelfand–Tsetlin scheme Λ as the smallest number N, such that for any k>N all differences λ_k i - λ_k-1, i of the elements of the Gelfand–Tsetlin scheme Λ are zero for any i. Let's introduce subset Λ^n_∞(λ)⊂Λ_∞(λ) as a set consisting of all Gelfand–Tsetlin schemes of degree n, in other words, Λ^n_∞(λ) = {Λ∈Λ_∞(λ) | degΛ = n }. Then there is equality Λ_∞(λ) = ⋃_n > 0Λ^n_∞(λ). As mentioned above, in the product (<ref>) there are only a finite number of terms. More formally, if Λ∈Λ^n_∞(λ), then e_Λ =∏_2≤ k <∞^⟶∏_i =1^k-1 z_k i^λ_k i - λ_k-1, i· v_λ = ∏_2 ≤ k ≤ n^⟶∏_i = 1^k-1 z_k i^λ_k i - λ_k-1, i· v_λ. However, for all representations V^λ_n of the algebra n set {e_Λ}_Λ∈Λ^n_∞(λ) forms the basis, and therefore the set {e_Λ}_Λ∈Λ_∞(λ) = ⋃_n > 0{ e_Λ}_Λ∈Λ^n_∞(λ) forms the basis in the polynomial representation V^λ. Now we prove that the basis {e_Λ}_Λ∈Λ_∞(λ) of the polynomial representation V^λ is the desired Gelfand–Tsetlin basis. However, before doing this, it must be shown that the polynomial representations V^λ are irreducible and are defined correctly. This will be the result of the following lemmas. The polynomial representation V^λ of the algebra is an irreducible representation with the highest weight. To begin with, we show that the vector v_λ∈ V^λ is the highest weight vector. The fact that 𝔫^∞_- · v_λ=V^λ was proved in the previous lemma. Let's show the validity of the remaining properties. Note that there is an equality 𝔫^∞_+ = ⋃_n > 0𝔫^n_+. 
But, for any n∈ℕ^*, the action of 𝔫^n_+ on the vector v_λ gives a null vector, which means that 𝔫^∞_+· v_λ = 0. Similarly, the equality 𝔥^∞ = ⋃_n > 0𝔥^n holds. But, for any n∈ℕ^*, the action of the element E_ii∈𝔥^n on the vector v_λ gives λ_i v_λ, which means ∀ E_ii∈𝔥^∞ E_ii· v_λ = λ_i v_λ. Thus, all polynomial representations are representations with the highest weight. Let's show their irreducibility. Let's define the positive definite Hermitian form ⟨·|·⟩ on the space V^λ, declaring the basis elements e_Λ orthogonal. Let U ⊂ V^λ be a nontrivial subspace invariant with respect to the action of the algebra . Consider the orthogonal complement U^⊥ with respect to the form ⟨· |·⟩. It is clear that the subspace U^⊥ is also invariant with respect to the action of the algebra . Indeed, for any g∈ it follows that g· U⊂ U. But, by definition of the orthogonal complement, it is true that ⟨ U |U^⊥⟩ = 0. Then ⟨ g· U|U^⊥⟩ =⟨ U|g^†· U^⊥⟩ = 0, where g^† is a Hermitian conjugate matrix. Therefore, g^†· U^⊥⊂ U^⊥, which means U^⊥ is an invariant subspace. So, there is a decomposition into irreducible representations V^λ = U ⊕ U^⊥. Let the highest weight vector v_λ belong to the subspace U. The action of the algebra · v_λ = V^λ. On the other hand, U is an invariant subspace, which means · v_λ⊂ U. Therefore, then the following equality holds U=V^λ, and U^⊥ = 0. Polynomial representations with different highest weights are not isomorphic. Let's fix the polynomial representations V^λ and V^λ' such that λ≠λ'. Suppose the opposite, let ρ : V^λ V^λ' be an isomorphism of representations. With isomorphism, the highest weight vector v_λ goes to the highest weight vector v_λ'. Then for any E_i i∈𝔥^∞ it is true that E_i i· v_λ =λ_i v_λ. By the property of the intertwining operator ρ, we obtain that λ_i v_λ = ρ (E_i i· v_λ) = E_i i·ρ (v_λ) = E_i i· v_λ' = λ'_i · v_λ' for any index i. We got a contradiction, which means that polynomial representations with different higher weights are not isomorphic. So, all polynomial representations of the algebra are irreducible representations with highest weight, and there is a bijection between the classes of isomorphisms of irreducible representations and the set of all highest weights. All these properties were inherited from similar finite-dimensional representations. Basis {e_Λ}_Λ∈Λ_∞(λ) of the irreducible polynomial representation V^λ is the Gelfand–Tsetlin basis. As mentioned earlier, the infinite Gelfand–Tsetlin algebra is generated by all coefficients of all quantum minors A_m(u), that is, =⟨{a_m i}_1≤ i≤ m≤∞⟩. Thus, it is sufficient to show that all vectors e_Λ are eigenvectors of quantum minors A_m(u). Note that the equality {A_m(u)}_1 ≤ m <∞ = ⋃_n>0{A_m(u)}_1≤ m≤ n holds. Due to the formula (<ref>), for any n∈ℕ, all elements of the set {e_Λ}_Λ∈Λ^n_∞(λ) are the eigenvectors of quantum minors {A_m(u)}_1≤ m≤ n. Therefore, the basis is {e_Λ}_Λ∈Λ_∞(λ) is the set of eigenvectors of all quantum minors {A_m(u)}_1≤ m<∞, which means it is the Gelfand–Tsetlin basis. Gelfand–Tsetlin basis (<ref>) has the simplest form in the fundamental representations of the algebra . Similarly to the case of n, for any polynomial representation V^λ of the algebra there is an embedding V^λ ⊗_k ≥ 0^a_k⋀^k^∞, where almost all the numbers a_k∈ℕ are zeros. So, for the fundamental representation ⋀^k^∞, the Gelfand–Tsetlin basis has the form { e_Λ}_Λ∈Λ_∞(λ) = { e_i_1∧ e_i_2∧…∧ e_i_k}_1 ≤ i_1 ≤ i_2 ≤…≤ i_k < ∞. As noted earlier, all representations of the algebra are also representations of the group . 
Thus, the basis (<ref>) is the Gelfand–Tsetlin basis of all polynomial representations V^λ of the group GL_∞(ℂ). As can be seen from the above, all the ideas of constructing the Gelfand–Tsetlin basis in irreducible 𝔤𝔩_∞(ℂ)-modules were based on reducing the infinite-dimensional case to a finite-dimensional one. These ideas can be applied to construct the Gelfand–Tsetlin basis in irreducible representations of the colimits of all classical Lie algebras A_∞, B_∞, C_∞, D_∞ and other reductive algebras.
http://arxiv.org/abs/2407.12702v2
20240717162436
TransCAD: A Hierarchical Transformer for CAD Sequence Inference from Point Clouds
[ "Elona Dupont", "Kseniya Cherenkova", "Dimitrios Mallis", "Gleb Gusev", "Anis Kacem", "Djamila Aouada" ]
cs.CV
[ "cs.CV", "cs.AI" ]
: A Hierarchical Transformer for CAD Sequence E. Dupont et al. SnT, University of Luxembourg, Luxembourg Artec 3D, Luxembourg TransCAD: A Hierarchical Transformer for CAD Sequence Inference from Point Clouds Elona Dupont1 Kseniya Cherenkova1,2 Dimitrios Mallis 1 Gleb Gusev 2 Anis Kacem 1 Djamila Aouada 1 July 22, 2024 ===================================================================================================== § ABSTRACT 3D reverse engineering, in which a CAD model is inferred given a 3D scan of a physical object, is a research direction that offers many promising practical applications. This paper proposes , an end-to-end transformer-based architecture that predicts the CAD sequence from a point cloud.  leverages the structure of CAD sequences by using a hierarchical learning strategy. A loop refiner is also introduced to regress sketch primitive parameters. Rigorous experimentation on the DeepCAD <cit.> and Fusion360 <cit.> datasets show that  achieves state-of-the-art results. The result analysis is supported with a proposed metric for CAD sequence, the mean Average Precision of CAD Sequence, that addresses the limitations of existing metrics. § INTRODUCTION Practically every object encountered in daily life originates from a Computer-Aided Design (CAD), highlighting the fundamental role of CAD in industrial manufacturing processes. Currently, the dominant paradigm for CAD design is feature-based modelling <cit.>. It allows the creation and manipulation of 3D models through a series of features, individual elements or operations (holes, slots, fillets, ), that modify the geometry of a CAD model. The process is typically initiated with the design of planar sketches, collection of loops composed of 2D curves, followed by a CAD operation (extrusion, revolution, etc.) that expands sketches into a 3D solid model. The final model is represented by the sequence of these CAD sketches and operations. Feature-based modelling has been widely adopted, as it enables intuitive design alterations and seamless CAD software integration, making it essential for an iterative development of complex designs. The recent availability of large CAD model datasets, such as ABC <cit.> and Fusion360 <cit.>, has sparked significant interest in developing learning-based approaches for feature-based modelling. Recent efforts have been focused on deep generative modelling <cit.>, where large transformer-based networks are trained to create new CAD models or to automatically complete partial designs via autoregressive inference. While this research direction offers a lot of potential practical applications for CAD software integration, far less attention has been put to reverse engineering. Feature-based reverse engineering emerges as a real-world application addressing the need to automatically replicate physical objects as CAD models. Recovery of CAD design is facilitated by the acquisition of a point cloud or triangular mesh of a physical object scanned using commercial 3D sensors. Some existing reverse engineering approaches investigate the recovery of alternative CAD model representations like Constructive Solid Geometry <cit.> (CSG) or Boundary-Representation (B-Rep) <cit.>. Other methods tackle feature-based reverse engineering and predict implicit representations of sketches and CAD operations from point clouds <cit.>. Nevertheless, such approaches do not allow for seamless integration into CAD software and often require post-processing (parametric curve fitting). 
To address these limitations, models capable of learning explicit CAD sequence of parametric sketches and operations from point clouds are needed. This can be enabled within a generative learning framework as in <cit.>. In that work, an auto-encoder reconstructing CAD parametric sequences is proposed and the latent representation is used for generating novel CAD sequences. An extension for reverse engineering was proposed by replacing the CAD sequence encoder with a point cloud encoder trained to map point clouds to the latent representations. The main limitation of the above is the predefined latent space that cannot adapt to the variations present in real-world point clouds. This disconnection can cause the model to generalize poorly to unseen inputs, especially those with noise or irregularities that are present in 3D scans and that are not well-represented in the training data. To that end, we propose  , a novel end-to-end trainable and single-stage hierarchical network for feature-based explicit CAD sequence reverse engineering from point clouds. Our network is hierarchical in the sense that it employs a two-tiered decoding process. Initially, a primary CAD sequence embedding is decoded, encapsulating high-level features of the design, that are then processed by secondary decoders, one dedicated to loop parameters and another to CAD operations. Each decoder specializes in a certain input, enabling a nuanced and precise recovery of CAD parameters. The decomposition of learned representations matches the decomposition inherent in the actual feature-based design process of conceptualizing a 3D model through distinct loop and operation steps. Moreover,  does not predict sketch primitive types explicitly as in <cit.>; instead, we employ a unified primitive representation where types are determined solely by coordinates. Our formulation narrows the learning space by eliminating syntactically incorrect predictions and facilitates a seamless transition between different primitive types. Additionally, it allows for a cascaded parameter refining that further enhances model performance. Another focus of this work is the evaluation of parametric CAD sequence. We identify several limitations of the existing evaluation framework used by <cit.> and suggest a suitable metric for CAD sequence similarity based on mean average precision, computed in the unquantized parametric space. Contributions: In summary our contributions are the following: * We propose , a novel hierarchical architecture for feature-based reverse engineering. Our model is single-stage and end-to-end trainable.  allows for a compact CAD sequence representation that does not include categorical types and enables cascaded coordinate refinement. * We identify several limitations of the existing evaluation framework for feature-based reverse engineering and propose a new evaluation metric framework to ensure fair comparison among diverse network architectures. * Our model surpasses the performance of recent generative-based approaches while also bridging the gap to real-world applications by exhibiting robustness to perturbed point clouds. Paper Organization: The rest of the paper is organized as follows. Section <ref> reviews related works. Section <ref> formulates the problem of feature-based CAD reverse-engineering. The proposed  is described in Section <ref>. Discussion on the current evaluation framework and suggested extension is introduced in Section <ref>. 
An experimental validation of the proposed network is provided in Section <ref>. Finally, conclusions are given in Section <ref>. § RELATED WORKS Generative Models for CAD: The advent of large-scale 3D shape datasets <cit.>, combined with the significant progress for generative models in vision <cit.>, has sparked interest in the generation of 3D shapes. Existing methods have been proposed for various 3D representations, including point clouds<cit.>, 3D meshes <cit.>, voxel grids <cit.>, and signed distance functions <cit.>. This work focuses on CAD model generation, which compared to the above is parametric and directly editable in CAD software. A line of work explores the generation of the Boundary-Representation (B-Rep), a collection of parametric surfaces connected via a structured topological graph. SolidGen <cit.> considers B-Rep synthesis based on transformers and two-level pointer networks. BrepGen <cit.> represents a B-Rep via a fixed tree of latent geometry representations that can be generated by a diffusion model. Feature-based CAD generation has also been recently explored. Most relevant to our work is DeepCAD <cit.>, a non-autoregressive generative model capable of synthesizing novel CAD sequences based on a transformer auto-encoder architecture. In <cit.> the authors also follow autoregressive strategies. HNC <cit.> uses a hierarchical model based on high-level concepts and a code tree for CAD model generation and auto-completion. Similarly, in SkexGen <cit.> a transformer architecture is used to generate CAD models in the sketch-extrude format by encoding the topology and geometry using different codebooks. All the aforementioned works are oriented around 3D shape generation and either do not address the reverse-engineering task or address it via adaptation of generative modelling leading to suboptimal performance. CAD Reverse Engineering: Reverse engineering is a well-studied problem with a substantial research effort directed towards predicting geometric features of CAD models, by analyzing the corresponding point clouds. Parametric fitting techniques infer the parametarization of edges <cit.> and surfaces <cit.>. Various attributes of the B-Rep and CAD operations are recovered from 3D scans in <cit.>. CADOps-Net <cit.> recovers 2D sketches from faces segmented into their CAD operation steps. Reasoning about a CAD model via properties discovered by parametric fitting offers insights solely into its end-state, without considering the sequential CAD design process intrinsic to feature-based modeling. A step closer to CAD reverse engineering, a line of work explores the reconstruction of a point cloud into Constructive Solid Geometry (CSG) <cit.>, a modelling technique that uses boolean operations to combine primitives into 3D models. Point2Cyl <cit.>, on the other hand, predicts extrusion cylinders given a point cloud, but requires user input to combine cylinders. SECAD-Net <cit.> and ExtrudeNet <cit.> use a self-supervised learning strategy to recover CAD sequences in the form of implicit representations given voxels and point clouds, respectively. In contrast to feature-based modelling, 3D representations produced by these methods (CSG, extrusion cylinders, ) have limited compatibility with modern CAD software workflows. The authors in <cit.> learn sketch-extrude sequences conditioned on a voxel input, however, the model relies on strong data priors and is limited to predefined extrusion combinations. Closer to our work is DeepCAD <cit.> and subsequent MultiCAD <cit.>. 
Even though DeepCAD <cit.> proposes a non-autoregressive generative framework for feature-based CAD, authors explore further conditioning on input point clouds. Taking a similar direction, MultiCAD <cit.> proposes a two-stage multimodal contrastive learning strategy of both point clouds and CAD sequences. The two aforementioned methods opt for separate stage learning for point clouds and CAD sequences. Concurrent to our work, the autoregressive strategy in <cit.> and the multimodal diffusion based approach in <cit.> attempt to solve the point cloud to CAD sequence problem. To our knowledge  is the first non-autoregressive single-stage architecture for feature-based reverse engineering. § PROBLEM STATEMENT A CAD model 𝐂∈𝒞 is constructed in a sequence of construction steps. Each step can be seen as a 2D parametric sketch 𝐬∈𝒮 (set of lines, arcs, ) followed by a CAD operation 𝐨∈𝒪 (extrusion, revolution, ) <cit.>. Here, 𝒞 is the set of all possible CAD models, 𝒮 and 𝒪 represent the sets of possible CAD sketches and operations, respectively. CAD models constructed exclusively from the extrusion operation type are considered in this work. Extrusion 𝐞∈ℰ, where ℰ denotes the set of possible extrusions, is the most common operation and enables the description of a wide range of CAD models <cit.>.  aims at learning how to predict the sequence of CAD construction steps from an input point cloud. Formally, given a point cloud 𝐏 = [𝐩_1,…,𝐩_n] ∈ ℝ^n× 3, where 𝐩_i = [x_i,y_i,z_i] denotes the 3D coordinates of the point i and n the number of points, the objective of  is to learn the mapping Φ : ℝ^n×3 →𝒞 , Φ(𝐏)= { 𝐬_l,𝐞_l}_l=1^L , where L denotes the length of the CAD sequence. In what follows, the proposed formulations of sketches and extrusions are described. CAD Sketch and Extrusion Formulation: The proposed formulations for sketch and extrusion steps are inspired by <cit.>. A sketch 𝐬 is composed of one or more loops (see left panel of Fig. <ref>). Each loop {ρ_j}_j=1^L_ρ, where L_ρ, denoting the number of loops, consists of one primitive (circle) or a combination of primitives (lines and arcs). In contrast to <cit.> which specifies the type of primitives in their representation, the proposed primitive representation is type-agnostic (see right panel of Fig. <ref>). In particular, each primitive δ is represented by three 2D coordinates of start, mid, and end points, δ = [(x_start,y_start), (x_mid,y_mid), (x_end,y_end)] ∈ ℝ^6. This representation has the advantage that the type of primitive can be deduced from the configurations of the points, hence reducing the search space and facilitating the transition between different types during training. In practice, the mid point of a line is replaced by a dummy value. As in <cit.>, we ensure that the loops are always closed by using the end point of a primitive as the start point of the next one. Further, a similar to <cit.> quantization is considered to reduce the parameters search space. As a result, a loop of n_p primitives ρ_j ∈ ℝ^6 × n_p is considered as a quantized representation ρ^⋆_j ∈0,d_q^6× n_p, where d_q denotes the quantization interval. As for extrusion, similarly to  <cit.>, a quantized representation 𝐞^⋆_j ∈ 0,d_q^11 is considered to represent the sketch plane/scale and extrusion type/distances. Note that 𝐞^⋆ ∈ 0,d_q^11× L_e and ρ^⋆ ∈0,d_q^6× n_p × L_ρ will be used in the following to denote sequences of quantized extrusions and loops, respectively. Here, L_ρ and L_e denote the length of loop and extrusion sequences, respectively. 
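To make the type-agnostic primitive representation above concrete, the following is a minimal sketch (not the authors' code) of how a primitive δ = [(x_start, y_start), (x_mid, y_mid), (x_end, y_end)] could be typed from its point configuration and quantized. The dummy mid-point value, the rule that a closed single primitive is a circle, and the normalization of sketch coordinates to [0, 1] are illustrative assumptions; only the three-point layout, the dummy mid point for lines, and the d_q-bin (8-bit) quantization come from the text.

```python
import numpy as np

DUMMY = -1.0   # placeholder value for the mid point of straight lines (assumed convention)
D_Q = 256      # 8-bit quantization, as used for the DeepCAD data

def primitive_type(delta):
    """Deduce the type of a primitive delta = [x_s, y_s, x_m, y_m, x_e, y_e] from its points."""
    start, mid, end = delta[0:2], delta[2:4], delta[4:6]
    if np.allclose(mid, DUMMY):
        return "line"        # dummy mid point -> straight line
    if np.allclose(start, end):
        return "circle"      # closed single primitive -> full circle (assumed rule)
    return "arc"             # otherwise an arc passing through the three points

def quantize(delta):
    """Map normalized sketch coordinates in [0, 1] to integer bins {0, ..., D_Q - 1}."""
    return np.clip(np.rint(np.asarray(delta) * (D_Q - 1)), 0, D_Q - 1).astype(np.int64)

line   = np.array([0.1, 0.1, DUMMY, DUMMY, 0.9, 0.1])
arc    = np.array([0.9, 0.1, 0.95, 0.5, 0.9, 0.9])
circle = np.array([0.5, 0.9, 0.5, 0.1, 0.5, 0.9])
print([primitive_type(p) for p in (line, arc, circle)])   # ['line', 'arc', 'circle']
print(quantize(arc))
```

A representation of this kind keeps all primitives at a fixed six-coordinate length, which is what allows the decoder to switch smoothly between types during training.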
§ HIERARCHICAL CAD SEQUENCE LEARNING FROM POINT CLOUDS  non-autoregessively learns to predict a CAD sequence from an input point cloud in the format described in Section <ref>. First, the point cloud is encoded into point features using a standard point cloud encoder. In order to facilitate the learning of CAD sequences, a hierarchical CAD sequence decoding is proposed. In particular, a high-level sequence of embedding corresponding to the loop and extrusion steps is learned. Those embedding are then fed to either a loop or extrusion decoder based on predicted type to learn the loop and extrusion parameters. Finally, the predicted loop parameters are further refined using the actual unquantized loop parameters (ground truth). The overall model architecture is depicted in Fig. <ref> and the different components are described below. §.§ Point Cloud Encoder The point cloud encoder Φ_p consists of 4 layers of PointNet++ <cit.> and operates on an input point cloud 𝐏. It outputs per-point features encoding local neighborhood information, 𝐅_p = [𝐟_p^1…𝐟_p^n] ∈ ℝ^n× d_p, that encode local neighborhood information, d_p denotes the dimension of the features. Note that point normals of 𝐏 are estimated using <cit.> and are provided as input to Φ_p along with its 3D coordinates. §.§ Loop-Extrusion Decoder The main objective of the loop-extrusion decoder Φ_ρ,e is to learn a high-level sequence of embedding 𝐅_ρ,e = [𝐟_ρ,e^1,…,𝐟_ρ,e^L_ρ,e] ∈ ℝ^d_z× L_ρ,e corresponding to loops and extrusions from the point cloud features 𝐅_p. Here, d_z and L_ρ,e denote the embedding dimension and the length of the sequence, respectively. The decoder Φ_ρ,e is composed of multi-head transformer-based blocks <cit.>. In the first block, learned constant embedding 𝐅_c ∈ ℝ^d_z × L_ρ,e undergo a self-attention operation <cit.> and the resulting representation cross-attends to the point cloud features 𝐅_p to produce loop extrusion embedding for the first block 𝐅_ρ,e^1 ∈ ℝ^d_z × L_ρ,e as follows, 𝐅_ρ,e^1 = ((𝐅_c),𝐅_p) , where and ( .,. ) denote the self and cross attention operators <cit.>, respectively. The same self and cross attention operations are conducted in the subsequent blocks by feeding the output of each block as input to the next one, 𝐅_ρ,e^b = ((𝐅_b-1),𝐅_p) , yielding the final sequence of embedding 𝐅_ρ,e after the last block. In order to ensure that each element 𝐟_ρ,e^i in the sequence embedding 𝐅_ρ,e corresponds to the right type (loop, extrusion, or end of sequence), a 3 layer followed by  that operates on each 𝐟_ρ,e^i and predicts its type is introduced. A cross-entropy loss, ℒ_ρ,e, is computed between the predicted and ground truth types to supervise the learning of 𝐅_ρ,e. Note that the loop-extrusion decoder is solely used to obtain a high-level sequence of loop and extrusion embedding. These embedding can then be decoded through either a loop decoder or an extrusion decoder to obtain their parameters. At training time, the ground truth type labels are used to identify which decoder should be used for each embedding, while at inference time the predicted types are used. The identification of loop and extrusion types results into separate loop embedding 𝐅_ρ ∈ ℝ^d_z × L_ρ and extrusion embedding 𝐅_e ∈ ℝ^d_z × L_e by splitting 𝐅_ρ,e according to loop and extrusion types. §.§ Loop and Extrusion Parametrization After obtaining the representation and the type of loop and extrusion steps, the parameters of both loops and extrusions are decoded from these representations using separate decoders. 
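Before the individual parameter decoders are detailed, the shared block pattern F^b = CA(SA(F^{b-1}), F_p) used by the loop-extrusion decoder above, and reused by the loop decoder below, can be sketched as follows. This is a hedged PyTorch-style sketch, not the reference implementation: the residual connections, normalization placement, and the projection of the d_p-dimensional point features to d_z are assumptions, while the dimensions (d_z = 256, 8 heads, feed-forward size 512, dropout 0.1, 4 blocks) follow the implementation details reported in the supplementary material.

```python
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    """One decoder block implementing F_out = CA(SA(F_in), F_p): self-attention over the
    query sequence, cross-attention to the point-cloud features, then a feed-forward layer."""
    def __init__(self, d_z=256, n_heads=8, d_ff=512, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_z, n_heads, dropout=dropout, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_z, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_z, d_ff), nn.ReLU(), nn.Linear(d_ff, d_z))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(d_z), nn.LayerNorm(d_z), nn.LayerNorm(d_z)

    def forward(self, f_q, f_p):
        f_q = self.norm1(f_q + self.self_attn(f_q, f_q, f_q)[0])   # SA(.)
        f_q = self.norm2(f_q + self.cross_attn(f_q, f_p, f_p)[0])  # CA(., F_p)
        return self.norm3(f_q + self.ff(f_q))

B, L_seq, d_z = 2, 10, 256
f_c = nn.Parameter(torch.randn(L_seq, d_z))          # learned constant embeddings F_c
f_p = torch.randn(B, 1024, d_z)                      # point features, assumed projected from d_p to d_z
blocks = nn.ModuleList(AttentionBlock() for _ in range(4))
f = f_c.expand(B, -1, -1)
for blk in blocks:
    f = blk(f, f_p)                                   # F_{rho,e}: one embedding per loop/extrusion slot
# 3-layer head predicting the token type (loop / extrusion / end of sequence)
type_head = nn.Sequential(nn.Linear(d_z, d_z), nn.ReLU(),
                          nn.Linear(d_z, d_z), nn.ReLU(), nn.Linear(d_z, 3))
print(type_head(f).shape)                             # torch.Size([2, 10, 3])
```

In the full model, the loop decoder applies the same self-/cross-attention pattern to its own learned constant embeddings, cross-attending to the loop embeddings F_ρ instead of the point features.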
Extrusion Decoder: As mentioned in Section <ref>, the extrusion sequence is described by a sequence of 11 quantized parameters 𝐞^⋆ ∈ 0,d_q^11× L_e. In order to obtain these parameters from the extrusion sequence embedding 𝐅_e, an extrusion decoder Φ_e consisting of 3 layers followed by is used. The predicted probabilities of the extrusion sequence parameters 𝐞^⋆ ∈ [0,1]^11 × d_q × L_e are compared to the ground truth one-hot-encoded parameters in 𝐞^⋆ using a cross-entropy loss, ℒ_e. Loop Decoder: Similarly to the extrusion decoder, the loop decoder Φ_ρ predicts the quantized parameters of the loop sequence ρ^⋆ ∈0,d_q^6× n_p × L_ρ as explained in Section <ref>. Nevertheless, 4 layers of multi-head transformer blocks are employed instead of simple layers. This is due to the sequential nature of loop decoding in contrast to extrusions. Note that a similar strategy as loop-extrusion decoder is opted for the transformer block of loop decoder. The first block performs self-attention on learned constant embedding of loops 𝐅_c^ρ ∈ ℝ^d_z× n_pL_ρ and the result cross-attends to loop embedding 𝐅_ρ as in Eq.(<ref>). The same self and cross attention operations in Eq.(<ref>) are conducted in subsequent blocks to yield a final representation at the last block 𝐅_ρ^b ∈ ℝ^d_z× n_pL_ρ. A linear layer followed by is used to obtain predicted probabilities for the loop sequence parameters ρ^⋆ ∈ [0,1]^ 6 × d_q × n_pL_ρ which are compared to ground truth one-hot-encoded loop parameters of ρ^⋆ using a cross-entropy loss, ℒ_ρ. Loop Refiner: As in many transformer-based architectures <cit.>, the quantization of loop parameters helps to reduce the search space and facilitates the learning. However, it has been observed in our case that it can lead to accumulation of quantization approximation errors. To overcome this issue, unquantized ground truth loop parameters are leveraged. In particular, a loop refiner Φ_r composed of a 4 layer is introduced. This refiner takes as input a concatenation of loop embedding 𝐅_ρ and their corresponding predicted parameter probabilities ρ^⋆. It attempts to predict the offset Ô_ff ∈ ℝ^6× n_pL_ρ between the predicted quantized loop parameters ρ̂^⋆ ∈ 0,d_q^ 6 × n_pL_ρ and the unquantized ground truth loop parameters ρ^⋆ ∈ ℝ^6× n_pL_ρ. An MSE loss, ℒ_r, is computed between the predicted offset Ô_ff and the one given by 𝐎_ff = ρ^⋆ - ρ, to supervise the refiner and the rest of the network. Once the offset is predicted, it is added to the predicted quantized loop parameters yielding unquantized predicted loop parameters as follows ρ̂ = ρ̂^⋆ + Ô_ff. Total Loss:  is an end-to-end network with a training objective guided by the sum of the individual losses, ℒ_total = ℒ_ρ,e + ℒ_ρ + ℒ_e + ℒ_r. § PROPOSED EVALUATION In this section, we first outline the limitations of existing evaluation methods in CAD sequence. Then, our new proposed evaluation metric framework for assessing the performance of CAD sequence inference from point clouds is described. DeepCAD<cit.> Evaluation for Feature-based Reverse Engineering: An evaluation framework for CAD sequence was originally introduced in <cit.> and later used in <cit.>. This framework includes both accuracy for assessing the fidelity of the predicted sequence and Chamfer Distance (CD) to measure the quality of the recovered 3D geometry. 
Accuracy is assessed using two metrics, specifically Command Type Accuracy (ACC_cmd) and Parameter Accuracy (ACC_param) defined by ACC_cmd = 1/N_c∑_i=1^N_c𝕀[t_i = t̂_̂î] , ACC_param = 1/K∑_i=1^N_c∑_j=1^|p̂_i|𝕀[|p_i,j - p̂_i,j|<η]𝕀[t_i = t̂_̂î] , where t_i and t̂_̂î are the ground truth and predicted command types (for commands representing primitives and extrusions), p_i,j and p̂_i,j are ground truth and predicted command parameters, N_c denotes the total number of CAD commands and 𝕀[.] is the indicator function. K = ∑_i=1^N_c𝕀[t_i = t̂_̂î]|p_i| is the total number of parameters of the correctly recovered commands and η is a tolerance threshold. The 3D geometry is evaluated with Chamfer Distance (CD) computed by sampling 2000 points on the ground truth and predicted shapes. Limitations: We identify the following limitations of the aforementioned evaluation. (1) The proposed ACC_cmd overlooks the possibility of over-prediction in the CAD sequence. As indicated in Eq.(<ref>), the computation of this metric sums across the set of ground truth CAD commands N_c. A predicted sequence could erroneously include extra loop-extrusion operations and still achieve a full score, as exemplified on the left panel of Fig. <ref>. (2) The evaluation of ACC_param is conducted solely on the subset of K accurately identified commands, thus introducing a trade-off between ACC_param and ACC_cmd. This interdependence complicates the interpretation of results. (3) Assessment of parameter quality via ACC_param solely in terms of accuracy is failing to distinguish between the magnitudes of errors. A parameter inaccurately placed in an adjacent quantization bucket incurs the same penalty as one with a larger deviation, despite potentially minor implications on the CAD model's final geometry. These limitations cannot be entirely mitigated by complementing CAD command accuracies with the chamfer distance (CD) metric. While CD is a valuable assessment of shape similarity, it does not address the core objective of reverse engineering: to accurately recover the designer's original CAD sequence. Two CAD models might be close in terms of CD yet possess vastly different CAD construction steps (see right panel of Fig. <ref>). Proposed Evaluation Framework: To overcome the identified challenges, we introduce the mean Average Precision of CAD Sequence (APCS), a novel evaluation metric tailored for feature-based reverse engineering. APCS adopts the concept of Average Precision (AP) commonly used in other tasks, to quantify the similarity between predicted and ground truth CAD sequences. We introduce the CAD Sequence Similarity Score (CSSS) that can be computed between predicted and ground truth CAD sequences as follows 0.93!CSSS(𝐂̂, 𝐂)= 1/2N_δ∑_j=1^N^ρ∑_i=1^N_j^ρ[S(δ̂_j,i, δ_j,i)·𝕀[t_yp(δ̂_j,i) = t_yp(δ_j,i) ]]+ 1/2N^e∑_j=1^N^e S(ê_j, e_j) , where t_yp(.) is a function that determines the type of each primitive δ (arc, line, ), and S(p̂,p) = e^-k||p̂ - p|| is a scoring function with S(p̂,p) ∈ [0,1] with 1 assigned when predicted parameterization is identical to the ground truth. We define N_j^ρ = max(|ρ_j|,|ρ̂_j|) where |.| denotes set cardinality, N^ρ = max(L_ρ,L̂_̂ρ̂) where L_ρ is the number of predicted primitives and L̂_ρ is the number of ground truth primitives for loop ρ_j and N_δ=∑_j=1^N^ρmax(|ρ_j|,|ρ̂_j|). Finally, N^e = max(L_e,L̂_̂ê) where L_e is the number of predicted extrusions and L̂_e is the number of ground truth extrusions. The proposed CSSS metric evaluates both the operation type and parameter prediction. 
It assigns a score of 0 to loops with incorrectly predicted types, which gradually increases to 1 as parameter prediction improves. Assessment is conducted on the unquantized parameter space and calculates the score based on the maximum count of either predicted or ground truth primitives. This approach ensures that both over and under predicted sequences are penalized equally. We aggregate CSSS scores across various thresholds to derive the mean Average Precision of CAD Sequences (APCS). Furthermore, the median CD is used to measure shape similarity as in <cit.> with the difference that it is evaluated on 4096 points instead of 2000 in order to decrease the uncertainty in the CD measurement. All the reported CD measurements in this work are multiplied by 10^3. Moreover, the ratio of predictions that cannot be reconstructed using <cit.> is reported as the invalidity ratio, IR. § EXPERIMENTS In this section, the experimental setup is first presented. Then, qualitative and quantitative results are analyzed. Afterwards, the components of  are ablated. Finally, the limitations of our model are outlined. §.§ Experimental Setup Dataset: For training and evaluation, the DeepCAD dataset <cit.> is used. The sketch extrusion sequences of the CAD models are processed in quantized (8 bits) and unquantized space. The size of the train, validation and test sets are 140 294, 7 773, and 7 036 CAD models, respectively. Moreover, cross-dataset evaluation is conducted on the Fusion360 dataset <cit.> that contains 6 794 samples. Training Details: The network is trained for 100 epochs with a batch size of 72 and an Adam optimizer is employed with a learning rate of 0.001 and a linear warm-up period of 2 000 steps as in <cit.>. The training is conducted on an NVIDIA RTX A6000 GPU. The input point clouds are extracted using <cit.> and are made of n=4096 points. The dimension of the point features d_p is set to 16 and of loop-extrusion features d_z to 256. Baselines: In order to evaluate the performance of , two state-of-the-art methods, MultiCAD <cit.> and DeepCAD <cit.>, and a retrieval baseline are used. As the code for MultiCAD <cit.> is not available, we report the results from the original paper. DeepCAD <cit.> is retrained with the same parameters and procedure as outlined in the original paper. One of the known limitations of the DeepCAD dataset is that it contains duplicate models <cit.>. While the works in <cit.> proposed a method to remove duplicate models that contain exactly the same CAD sequence, we find that this method does not remove all the duplicates as some models can have the same geometry but are constructed through a slightly different sequence of sketch extrusion operations (see supplementary material for more details). In order to address this issue, we propose a retrieval baseline. The retrieval baseline uses the point cloud encoder from a trained DeepCAD <cit.> to identify the closest latent vector from the train set for each test sample. As a result, the solution is always a train set CAD sequence. §.§ Experimental Results Qualitative Results: Fig. <ref> shows some qualitative results for the retrieval baseline, DeepCAD <cit.> and  (Ours) on both the DeepCAD <cit.> and Fusion360 <cit.> datasets. As mentioned in Section <ref>, the DeepCAD dataset contains many duplicates, not just in terms of CAD sequence but also in terms of geometrical shape. 
As a result, the retrieval baseline is able to identify accurately duplicates (most right column of the DeepCAD dataset panel) and in other cases the baseline manages to retrieve shapes with similar geometry as the ground truth CAD model. On the other hand, it can be noticed that DeepCAD <cit.> can even fail at retrieving duplicates.  is able to predict models that are similar in shape and also in terms of loop-extrusion sequence, even though it sometimes fails to predict the parameters accurately (second and fifth columns of DeepCAD dataset and fourth column of Fusion360 dataset). Quantitative Results: The trends observed from the qualitative results are further supported by the quantitative results presented in Table <ref>. The APCS, on both DeepCAD <cit.> and Fusion360 <cit.> datasets, show that  is the most capable model at predicting correct CAD sequences. However, it can be noted that the retrieval baseline obtains the lowest CD by a small margin on the DeepCAD dataset and by a more significant margin on the Fusion360. One of the reasons is that this baseline always outputs a CAD model that is of roughly similar shape as the input even if the retrieved CAD sequence can vary from the ground truth. To further analyse the results, the variations of the APCS and CD model complexity on the DeepCAD dataset are displayed in Fig. <ref>. We define the model complexity as the lowest possible CD of a test point cloud sample with respect to the train samples. In other words, the model complexity quantifies the amount by which a test sample is out of distribution from the train set in terms of shape. While  consistently outperforms on average all other baselines in terms of APCS for all model complexities, DeepCAD <cit.> can only perform better than the retrieval baseline for the more complex models. This shows that DeepCAD <cit.> often fails at retrieving the CAD sequence for simple models. In terms of CD, it can be observed that  and the retrieval baseline have similar performance.  can predict a CAD sequence that is closer to the ground truth one but the predicted overall shape can vary from the ground truth for more difficult samples. Ablation Study: In this section, the different components of the proposed network architecture are ablated. Table <ref> shows the results for Ours w/o hier., in which the learning is done without both the loop-extrusion decoder and the refining network, Ours w/o refining where the refining component is ablated and Ours. It can be noted that each component leads to an improvement in all the metrics. It is worth noting that the F1 score on the loop-extrusion type prediction introduced for the hierarchical learning of  is 0.79. This implies that on most cases both loop and extrusion decoders receive embedding of the correct type. Moreover, while the refining network is only applied on the loop parameters, we observe that the component of the APCS for the extrusion parameters are also higher when the network is trained with the refining component. This suggests that the refining network is able to provide a useful signal for guiding the learning process. More details are in the supplementary material. Input point cloud perturbation: Reverse engineering is a real-world practical problem. The results in the previous section are obtained from sampling points from the B-Rep representations of the CAD models. While modern 3D sensors can reconstruct the mesh of models with high resolution, they still suffer from some artifacts such as noise and small missing parts. 
In order to evaluate the performance of our network in such realistic conditions, we run experiments in two scenarios, one in which noise is added and one in which small holes are created on the point cloud. In order to simulate realistic noise, Perlin noise <cit.> is added to the mesh from which the point coordinates and normals are extracted. More details about the noise and hole generations can be found in the supplementary materials. Table <ref> shows that  is more robust to such perturbations than other methods. As the noise also adds a disturbance to the direction of the input point normals, it leads to a larger drop in performance. Failure Cases and Limitations: In this section, we describe the reasons that lead  to predict invalid CAD models as measured by IR. Among the predictions of   on DeepCAD test set, only one contains a loop parametrization that results in an invalid CAD sequence. This shows that the proposed representation of the loop sequence leads to syntactically correct loops on practically all cases. However, in some cases the representation of the CAD sequence can be syntactically valid, yet it is not possible to reconstruct a B-Rep from it. For 75 test samples, the loop-extrusion decoder fails to predict an extrusion token, this implies that the predicted model is therefore an infinitely thin sketch and not a 3D model as expected. The rest of the invalid models are mostly due to a loop being made of a single line within a model, which cannot be extruded into a valid 3D shape within our context. Finally, Fig. <ref> shows examples for which the predicted sequences do not lead to a shape that is close to the ground truth one. In these examples, the input CAD models contain a large number of small features that  is unable to capture. § CONCLUSION In conclusion, we propose  an end-to-end transformer-based neural network that learns to recover the CAD sequence from a given point cloud. Two of the main features of  are a hierarchical structure that enables the learning of a high-level loop-extrusion sequence and a loop refiner that aims at correcting errors in loop parameter predictions. We also propose a primitive representation in which each primitive is described by the same number of parameters. We identify the limitations of current metrics in the emerging domain of 3D reverse engineering and propose a new metric, the APCS that leads to a fair comparison of parametric CAD sequences. Thorough experiments show that  achieves state-of-the-art results in different realistic scenarios. Acknowledgement: The present project is supported by the National Research Fund, Luxembourg under the BRIDGES2021/IS/16849599/FREE-3D and IF/17052459/CASCADES projects, and by Artec 3D. splncs04 Supplementary Materials : A Hierarchical Transformer for CAD Sequence Inference from Point Clouds § FORMULATION Fig. <ref> shows an example of the construction process of a CAD model as well as the corresponding loop-extrusion sequences. The loop-extrusion sequence is made of three types of tokens (loop, extrusion and end of sequence). The CAD sequence combines the high level loop-extrusion sequence tokens and their corresponding parameters. Note for the loop parameters, the coordinates of the start point of a primitive is always the same as the coordinates of the end point of the previous primitive as in <cit.>. 
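As a sketch of the formulation above, the following illustrates how a loop-extrusion token sequence could be assembled from sketch-extrude steps while enforcing that each primitive starts at the previous primitive's end point. The integer token encoding and helper names are hypothetical; only the three token types (loop, extrusion, end of sequence) and the loop-closure convention come from the text.

```python
import numpy as np

LOOP, EXTRUSION, EOS = 0, 1, 2   # assumed integer encoding of the three token types

def close_loop(primitives):
    """Enforce loop closure: each primitive starts at the previous primitive's end point,
    and the last primitive ends at the first primitive's start point."""
    closed = [np.asarray(p, dtype=float).copy() for p in primitives]
    for k in range(1, len(closed)):
        closed[k][0:2] = closed[k - 1][4:6]
    closed[-1][4:6] = closed[0][0:2]
    return closed

def build_token_sequence(sketch_extrude_steps):
    """Each step is (list_of_loops, extrusion_params): emit one token per loop, then an
    extrusion token, and terminate the whole sequence with an end-of-sequence token."""
    tokens = []
    for loops, extrusion in sketch_extrude_steps:
        for loop in loops:
            tokens.append((LOOP, close_loop(loop)))
        tokens.append((EXTRUSION, np.asarray(extrusion, dtype=float)))
    tokens.append((EOS, None))
    return tokens

# toy model: a rectangle made of 4 lines (dummy mid points), extruded once (11 parameters)
rect = [[0, 0, -1, -1, 1, 0], [1, 0, -1, -1, 1, 1], [1, 1, -1, -1, 0, 1], [0, 1, -1, -1, 0, 0]]
sequence = build_token_sequence([([rect], np.zeros(11))])
print([t for t, _ in sequence])   # [0, 1, 2] -> loop, extrusion, end of sequence
```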
§ DUPLICATES IN THE DEEPCAD DATASET As mentioned in Section <ref> of the main paper, one of the main limitations of the DeepCAD dataset <cit.> is that it contains many duplicates across the train and test sets. While the works in <cit.> proposed a strategy to identify duplicates that have exactly the same CAD sequence (sequence duplicates), we observe that the dataset also contains models that have almost identical geometry but different CAD sequences (geometrical duplicates). Moreover some CAD models in the DeepCAD train set are almost identical to some models in the Fusion360 dataset <cit.>, with often just the amount of extrusion varying slightly. Fig. <ref> shows examples of the different types of duplicates from the DeepCAD <cit.> and Fusion360 <cit.> datasets. We define a geometrical duplicate as a test set CAD model for which it exists a CAD model in the train set with a chamfer distance less than the uncertainty in the chamfer distance measurement (± 3×10^-4 when 4096 points are sampled). From this definition and using the train set of the DeepCAD dataset <cit.>, we observe that about 14% of the DeepCAD test set is made of geometrical duplicates and about 12% in the Fusion360 test set. We observe that current datasets that contain both CAD models and their corresponding CAD sequences are limited by either by the number of samples (Fusion360 <cit.>) or by the lack of diversity they present (DeepCAD <cit.>). § IMPLEMENTATION DETAILS In this section, more details on the network architecture are provided. Transformer decoders: The loop-extrusion decoder Φ_ρ,e and the loop decoder Φ_ρ are both transformer decoder with the same network architecture. They are both made of 4 layers, each made of 8 heads with a feed-forward dimension of 512. A dropout rate of 0.1 is used. PointNet++: As mentioned in the main paper, the point encoder Φ_p is PointNet++ <cit.>. The implementation provided in <cit.> was used. The input dimension d_p=6 corresponds to the point and normal coordinates. The parameters for the 4 layers are as follows: number of points (512, 256, 128, 16) with radius (0.1, 0.2, 0.4, 0.8) and number of samples (64, 64, 64, 32). § APCS METRIC In this section, more details on the computation of the proposed metric APCS (see Section <ref> of the main paper) are provided. The mean Average Precision of CAD Sequence, APCS, is a score between 0 and 1 that evaluates how close two CAD sequences are by taking into account the types and parameters of loop primitives and the extrusion parameters. The types of primitives considered in this work are line, arc and circle. The parameters of the loop primitives correspond to 6 point coordinates in a normalized 2D space. An extrusion is described using 11 parameters as in <cit.> which can be grouped into 4 main categories: 1) Ext.: the type of extrusion and the amount of extrusion; 2) Origin: the origin of the 2D sketch in 3D space; 3) Orientation: the sketch plane orientation; and 4) Size: the size of the sketch in 3D space. Note the latter 3 categories describe the projection and scaling of the normalized 2D sketch in 3D. § RESULTS In this section, further quantitative and qualitative results are presented. §.§ Quantitative Results The APCS scores reported in the main paper are computed per model and then averaged over all the models of the test set. It is also possible to compute the average of each individual component of the APCS scores over the whole test set. 
Such results are discussed in the following paragraphs for the DeepCAD dataset <cit.>, Fusion360 dataset <cit.> and the ablation study. DeepCAD Dataset: Table <ref> shows the APCS scores on the DeepCAD dataset <cit.> for each of the components averaged over the test set. It can be observed that  (Ours) obtains a significantly better score for the arc primitive and also to some extent for the circle primitive compared to the retrieval baseline and DeepCAD <cit.>. However, the scores corresponding to the placement of the 2D sketch in 3D (Origin, Orientation and Size) for  are slightly lower than for DeepCAD <cit.>. The APCS score provides a more complete evaluation of the predicted CAD sequences than the command accuracy (ACC_cmd) and parameter accuracy (ACC_param) used in the works <cit.>. Nevertheless, we provide the results for  and DeepCAD <cit.> against those metrics in Table <ref> for the DeepCAD dataset. Fusion360 Dataset: The trends previously described on the DeepCAD dataset <cit.> can also be observed on the cross-dataset evaluation using the Fusion360 dataset <cit.> (see Table <ref>). Furthermore, the APCS score corresponding to the line primitive is significantly higher for  than for the other two baseline models. Fig. <ref> shows the variation of the APCS and CD to model complexity for the Fusion360 dataset <cit.>. It can be observed that while the performance on the APCS metric for DeepCAD <cit.> and  are relatively close,  achieves lowest CD for all model complexities compared to DeepCAD <cit.> and the retrieval baseline. Complex shape performance: Figure <ref> show the APCS the sequence length for  and DeepCAD <cit.>. Similarly, Figure <ref> presents the variation of CD the sequence length. The size of the data points is proportional to the number of models it represents. The performance decreases for both models as the length of the CAD sequence increases. However,  consistently outperforms DeepCAD. Ablation Study: Table <ref> shows the APCS component scores corresponding to the ablation study presented in Section <ref> of the main paper. The most striking result is that the Loop Refiner leads to improved performance in all the extrusion components, even though the Loop Refiner only acts directly on the loop parameters. The Loop Refiner is a component of the end to end pipeline of . As a result, the backprogation of the Loop Refiner loss ℒ_ρ acts on all the parameters of the network and can therefore impact the predictions at all levels. The Loop-Extrusion module classifies the features 𝐅_ρ,e as loop, extrusion or end of sequence type. These predictions are used to route the features 𝐅_ρ,e to either the loop decoder Φ_ρ or extrusion decoder Φ_e to obtain their parameters. To demonstrate the impact of the Loop-Extrusion type classification, we conduct the following experiment: the ground truth Loop-Extrusion type labels are used instead of the predicted ones at testing time on the DeepCAD dataset <cit.>. In this scenario, the APCS metric evaluating the CAD sequence increases from 0.732 to 0.790. The IR also improves and decreases to nearly 0% (only 2 invalid models). Notably, there is no significant change in the CD. As a result,  is robust to moderate classification errors in the Loop-Extrusion prediction the final reconstruction. However, these errors might impact the performance of the predicted CAD sequence the ground truth. This suggests that the Loop-Extrusion classification errors might result in alternative yet plausible design paths. 
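For reference, the following is a hedged sketch of how the per-model CSSS score of the main paper and its aggregation into APCS could be computed. The scoring constant k, the treatment of unmatched primitives as zero-score entries against the max-based denominators, and the AP-style threshold grid are assumptions; the official evaluation code may differ.

```python
import numpy as np

def S(p_hat, p, k=1.0):
    """Scoring function S(p_hat, p) = exp(-k * ||p_hat - p||), equal to 1 for a perfect match."""
    return float(np.exp(-k * np.linalg.norm(np.asarray(p_hat, float) - np.asarray(p, float))))

def csss(pred_loops, gt_loops, pred_ext, gt_ext, k=1.0):
    """CAD Sequence Similarity Score: 1/2 loop-primitive term + 1/2 extrusion term.
    Loops are lists of (type, params); unmatched primitives and extrusions contribute 0
    while still counting in the max-based denominators."""
    n_delta, loop_sum = 0, 0.0
    for j in range(max(len(pred_loops), len(gt_loops))):
        p_loop = pred_loops[j] if j < len(pred_loops) else []
        g_loop = gt_loops[j] if j < len(gt_loops) else []
        n_delta += max(len(p_loop), len(g_loop))
        for (t_hat, d_hat), (t, d) in zip(p_loop, g_loop):
            if t_hat == t:                    # a wrong primitive type scores 0
                loop_sum += S(d_hat, d, k)
    n_e = max(len(pred_ext), len(gt_ext))
    ext_sum = sum(S(pe, ge, k) for pe, ge in zip(pred_ext, gt_ext))
    return 0.5 * loop_sum / max(n_delta, 1) + 0.5 * ext_sum / max(n_e, 1)

def apcs(per_model_csss, thresholds=np.arange(0.5, 1.0, 0.05)):
    """AP-style aggregation: fraction of models whose CSSS exceeds each threshold, averaged."""
    scores = np.asarray(per_model_csss)
    return float(np.mean([(scores >= t).mean() for t in thresholds]))

# identical single-line loops and one identical extrusion give a perfect score of 1.0
loop = [("line", [0.0, 0.0, -1.0, -1.0, 0.5, 0.0])]
print(csss([loop], [loop], [[1.0] * 11], [[1.0] * 11]))   # 1.0
```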
§.§ Qualitative Results In this section, further qualitative results are presented by first comparing TransCAD to the retrieval baseline and then to DeepCAD <cit.>. Retrieval Baseline Comparison: Qualitative results comparing  to the the retrieval baseline can be found in Fig. <ref>. It can be observed that TransCAD can perform well on duplicate models and more importantly that it performs better than the retrieval baseline on non-duplicate models. TransCAD can identify most of the components of an unseen CAD model but sometimes fails to place those parts in the right location as suggested by the APCS scores presented in the previous section. DeepCAD Comparison: Qualitative results showing the comparison between  and DeepCAD <cit.> can be found in Fig. <ref>. The results on simple and difficult models show that  is able to outperform DeepCAD <cit.> on most occasions. § COMPARISON WITH COMPLEXGEN The output of  is a CAD sequence that allows full integration and editability in CAD software at both the final shape and intermediate design process levels. On the other hand, the output of ComplexGen <cit.> is drastically different since it is only the final shape as a set of corners, curves, and patches with topology constraints. ComplexGen cannot predict the intermediate design steps, as shown in Fig. 1 in the paper. To the best of our knowledge, DeepCAD <cit.> and MultiCAD <cit.> are the only works that predict CAD sequences from point clouds, with only DeepCAD providing source codes for extensive comparisons. Nevertheless, we ran a pretrained ComplexGen model on the DeepCAD test set and recorded a CD of 2.3 (compared to 4.5 for ). Note that as ComplexGen does not predict the CAD sequence, the APCS metric cannot be computed. On the other hand, ComplexGen metrics are tailored to their output and designed to evaluate the predicted patches of the final shape and topological validity. To conclude, other reverse engineering baselines (ComplexGen <cit.>) might achieve a better final reconstruction than , but could not be used seamlessly for reverse engineering (intermediate design level editability). § POINT CLOUD PERTURBATION In this section, further details on the implementation of the perturbations applied to the input point cloud P described in Section <ref> of the main paper are presented. §.§ Perlin Noise Implementation To simulate the noise created when an object is scanned using a 3D sensor, a Perlin noise <cit.> is added to the point cloud. The perlin noise is created using the following strategy. Starting from the mesh representation of the original CAD models, the faces are divided to ensure that the mesh contains a dense number of vertices. Then, a 3D Perlin noise is computed for each vertex using 64 octaves with a minimum and maximum magnitude of -0.001 and 0.001, respectively. Finally, the normals are recomputed from the mesh and points are sampled. A visual example of a perturbed mesh can be found in Fig. <ref>. §.§ Holes Implementation Holes in the input point clouds are created using the following strategy. Firstly, for each point cloud the number holes is selected from a uniform distribution ranging from 1 to 10 included. Then, the ratio of points to be removed for each hole is chosen by sampling a normal distribution with mean 0.03 and standard deviation 0.015. Finally, for each hole a point is chosen at random and the corresponding number of nearest neighbors points are removed. The nearest neighbors are identified using a geodesic distance computed on the mesh surface. 
We ensure that the remaining number of points is at least n=4096, which corresponds to the number of points used as input. Fig. <ref> shows some examples of point clouds on which holes have been created. §.§ Qualitative Results Qualitative results for both the hole and noise input perturbations can be found in Fig. <ref>. These results complement the quantitative results found in Section <ref> of the main paper. §.§ Further Quantitative Results In this section, further quantitative results for different amounts of point cloud perturbation are presented. The maximum magnitude of the Perlin noise on the input point cloud is increased compared to the one reported in the main paper (now referred to as Original Noise). The results presented in Table <ref> show that both TransCAD and DeepCAD are sensitive to the amount of noise. We also perform a 2-epoch finetuning of both models on Original Noise training data and report the results (Finetune, last row of Table <ref>). Notably, TransCAD almost recovers its performance with finetuning. Furthermore, we conduct an experiment to evaluate the effect of input cloud sparsity on performance. Using TransCAD and DeepCAD <cit.> trained with input point clouds of 4096 points, predictions for the test set are generated with decreasing input point cloud sizes. The results demonstrate that TransCAD is more robust than DeepCAD <cit.> with respect to input sparsity (see Table <ref>).
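To illustrate the hole-generation procedure described in the Point Cloud Perturbation section above, here is a minimal sketch. It follows the stated recipe (1 to 10 holes, removal ratio drawn from N(0.03, 0.015), nearest neighbours of a random seed point removed, at least 4096 points kept), but it uses Euclidean rather than geodesic neighbourhoods and clamps negative ratios to zero; both are simplifying assumptions.

```python
import numpy as np

def make_holes(points, rng=np.random.default_rng(0), min_points=4096):
    """Cut 1-10 holes out of a point cloud: for each hole, a random seed point is chosen and
    a ratio ~ N(0.03, 0.015) of the cloud's nearest neighbours to that seed is removed."""
    keep = np.ones(len(points), dtype=bool)
    n_holes = rng.integers(1, 11)                          # uniform on {1, ..., 10}
    for _ in range(n_holes):
        n_remove = int(max(rng.normal(0.03, 0.015), 0.0) * len(points))
        if n_remove == 0 or keep.sum() - n_remove < min_points:
            continue                                       # never drop below 4096 points
        alive = np.flatnonzero(keep)
        seed = points[rng.choice(alive)]
        d = np.linalg.norm(points[alive] - seed, axis=1)   # Euclidean stand-in for geodesic distance
        keep[alive[np.argsort(d)[:n_remove]]] = False
    return points[keep]

cloud = np.random.default_rng(1).random((20000, 3))
print(make_holes(cloud).shape)
```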
http://arxiv.org/abs/2407.12627v1
20240717145515
Entropy-Stable Model Reduction of One-Dimensional Hyperbolic Systems using Rational Quadratic Manifolds
[ "Robin Klein", "Benjamin Sanderse", "Pedro Costa", "Rene Pecnik", "Ruud Henkes" ]
math.NA
[ "math.NA", "cs.NA", "physics.flu-dyn" ]
1,2]R.B. Kleincor1 rbk@cwi.nl 1,3]B. Sanderse B.Sanderse@cwi.nl 2]P. Costa P.SimoesCosta@tudelft.nl 2]R. Pecnik R.Pecnik@tudelft.nl 2]R.A.W.M. Henkes R.A.W.M.Henkes@tudelft.nl [cor1]Corresponding author [1]Centrum Wiskunde & Informatica, Science Park 123, Amsterdam, The Netherlands [2]Delft University of Technology, Process and Energy, Leeghwaterstraat 39, Delft, The Netherlands [3]Eindhoven University of Technology, Department of Mathematics and Computer Science, PO Box 513, Eindhoven, The Netherlands § ABSTRACT In this work we propose a novel method to ensure important entropy inequalities are satisfied semi-discretely when constructing reduced order models (ROMs) on nonlinear reduced manifolds. We are in particular interested in ROMs of systems of nonlinear hyperbolic conservation laws. The so-called entropy stability property endows the semi-discrete ROMs with physically admissible behaviour. The method generalizes earlier results on entropy-stable ROMs constructed on linear spaces. The ROM works by evaluating the projected system on a well-chosen approximation of the state that ensures entropy stability. To ensure accuracy of the ROM after this approximation we locally enrich the tangent space of the reduced manifold with important quantities. Using numerical experiments on some well-known equations (the inviscid Burgers equation, shallow water equations and compressible Euler equations) we show the improved structure-preserving properties of our ROM compared to standard approaches and that our approximations have minimal impact on the accuracy of the ROM. We additionally generalize the recently proposed polynomial reduced manifolds to rational polynomial manifolds and show that this leads to an increase in accuracy for our experiments. Entropy stability Manifold Galerkin method Reduced order models Rational quadratic manifolds Nonlinear conservation laws § INTRODUCTION Conservation laws are nearly universally present in any branch of physics and engineering e.g. fluid dynamics, structural mechanics, plasma physics and climate sciences; they express the conservation of some physical quantity of interest. Often such conservation laws are described by hyperbolic equations or systems thereof <cit.>. Physicists and engineers are increasingly reaching to simulation tools for approximate solutions. With the increase in computational power of recent decades, very large-scale problems have indeed been solved to a satisfactory accuracy. Nonetheless, some applications of major engineering interest like those of a multi-query (e.g. design optimization <cit.>, uncertainty quantification <cit.>) or real-time nature (e.g. model predictive control <cit.>, digital twin technology <cit.>) remain out of question for many large scale systems. A way around these computational issues has been offered by reduced order models (ROMs), which are low-dimensional surrogates of high-fidelity models of interest, often referred to as full order models (FOM) in the ROM community. ROMs rely on the offline-online decomposition paradigm <cit.> for their efficiency, they are trained in an expensive offline phase and subsequently evaluated at very low computational cost in new situations in the online phase. Particularly, the low dimensionality of the ROM allows for fast and cheap evaluation of its solution. A popular class of ROMs are projection-based ROMs (pROMs) <cit.>, which have traditionally been constructed by projecting equations of interest on well-chosen linear subspaces. 
These subspaces are often found in a data-driven manner using the proper orthogonal decomposition (POD) <cit.> or greedy methods <cit.> and the projections are carried out using a Galerkin <cit.> or Petrov-Galerkin <cit.> approach. However, the success of applications of linear subspace-based pROMs has been limited in the field of hyperbolic equations. This is a result of an almost inherently slow decay of the so-called Kolmogorov n-width (KnW) of the solution manifolds ℳ_u (i.e. the set of all solution trajectories for a range of parameters and initial conditions of interest) of these systems[(𝒳,||·||_𝒳) denoting a Banach space of interest containing analytical PDE or approximate simulation solutions]: d_r(ℳ_u) = inf_𝒱⊂𝒳; dim(𝒱) = rsup_u∈ℳ_uinf_v∈𝒱 ||u - v||_𝒳, which measures the worst error that can be incurred when optimally approximating ℳ_u with an optimally chosen r-dimensional linear subspace. The KnW decay of many hyperbolic systems is slow because their solution trajectories are often not contained in low-dimensional linear subspaces due to characteristically having moving features as part of their solution. This has been shown analytically for some systems <cit.> and empirically for many others. In recent years a range of possible solutions have been proposed, falling in roughly four categories. First, there have been adaptive approaches that use linear subspaces that are changed during the online phase to be better suited to new conditions <cit.>; second, domain decomposition approaches that localize ROM construction in time, space or parameter space <cit.>; third, Lagrangian, registration and/or optimal transport based approaches that track moving features improving linear data compressibility <cit.>; fourth, constructing ROMs on nonlinear spaces (manifolds). Since it is not dependent on user-defined localization or adaptation strategies and due to its high expressiveness we will be interested in the latter category. In model reduction on manifolds the linear reduced spaces of classical ROMs are replaced by nonlinear spaces. Given sufficient expressiveness the nonlinear reduced spaces can potentially approximate the solution manifolds of hyperbolic systems. These manifolds are typically constructed from data. In <cit.> autoencoder neural networks are used to construct manifold ROMs. Another popular nonlinear manifold construction approach are polynomial manifolds <cit.>. Finally, there has been an increasing interest in a more physics-based approach, where ROMs are constructed on the invariant manifolds of physical systems <cit.>. Although, to the authors' knowledge, there have not been any studies of manifold ROMs on large scale under-resolved and shock-dominated cases within fluid dynamical applications, it has long been known that their linear counterparts can suffer from stability issues for such problems <cit.>. It is quite reasonable to assume this will also be the case for manifold ROMs. A promising approach to stabilization of such simulations is the concept of entropy stability <cit.>, which has been widely used in the context of obtaining stable FOMs. A numerical method is entropy stable if it dissipates a convex functional associated to a conservation law, referred to as entropy, given suitable boundary conditions. Entropy stability endows numerical methods with physically admissible behaviour, which for fluid dynamical applications manifests itself in satisfaction of the second law of thermodynamics. 
Furthermore, it generalizes the L^2-stability properties of linear hyperbolic systems to fully nonlinear systems <cit.>. Additionally, stability in L^p spaces can be shown formally <cit.>. However, in the projection step to construct pROMs an entropy-stable numerical method generally loses its stability property. In recent years, some work has been carried out in the ROM community to preserve the entropy stability property. In <ref> several possible approaches have been visualized. In <cit.> the ROM is evaluated at a corrected state that ensures entropy stability is maintained (also known as `entropy projection'). Inspired by the approach taken in the classical finite element work in <cit.>, <cit.> writes the conservation laws in symmetric form using an alternative set of variables and projects the continuous equations leading to correct entropy estimates. In <cit.> the Hilbert spaces in which the projections are carried out are defined according to physical arguments. We also note the works in <cit.>. All these approaches suffer a major drawback – they are built on linear reduced spaces. As a result, relatively high-dimensional reduced spaces are required to model many physical systems of interest, which comes at the cost of computational efficiency of the ROM. Our main contribution is to generalize these entropy-stable approaches to nonlinear reduced spaces. This allows for lower-dimensional reduced spaces and thus potentially more efficient ROMs. In particular, we will be interested in generalizing the work in <cit.>. This approach is colored red in <ref>. Our main argument for not choosing <cit.> stems from the argument given e.g. in <cit.>, namely that formulations in alternative variables are not consistent with the famous Lax-Wendroff theorem <cit.> and can thus yield wrong shock solutions. To our knowledge, our method is the first manifold ROM that is provably entropy stable. At the same time, we note that preservation of other mathematical structures on reduced manifolds has been successfully achieved in the past: symplectic <cit.>; metriplectic <cit.>; conservative <cit.>. A great overview of recent structure-preserving model reduction contributions in general is given in <cit.>. A second contribution of this work is the development of a novel generalization of polynomial manifolds <cit.> to rational polynomial manifolds. While these polynomial manifolds have shown successes in certain applications, in our experience they are not sufficiently accurate for shock-dominated problems. Rational polynomials are better at capturing discontinuities, as we will show in this work. This article is organized as follows. In <ref> we introduce the theory of nonlinear hyperbolic conservation laws and entropy analysis, we introduce a baseline entropy stable FOM <cit.>, and an entropy-stable linear ROM as proposed in <cit.>. In <ref> we introduce our main contribution, the novel entropy stable nonlinear manifold ROM. In <ref> we discuss our second contribution, being rational polynomial manifolds. In <ref> we show the effectiveness of our approach using several numerical experiments that are based on a range of well-known conservation laws from fluid dynamics. We conclude our work in <ref>. § PRELIMINARIES: ENTROPY INEQUALITY FOR CONSERVATION LAWS, ENTROPY-STABLE FOM, LINEAR ROM §.§ Introducing the entropy inequality We give a short introduction to the concept of entropy, some related concepts used in its analysis and its role in the theory of nonlinear conservation laws. 
We consider conservation laws in one spatial dimension that can be written as partial differential equations (PDE) of the form: ∂u/∂ t + ∂f(u)/∂ x = 0, where Ω is a spatial domain and [0,T] is a temporal domain with T > 0. Furthermore u : Ω× (0,T] →ℝ^n is the solution function, f : ℝ^n →ℝ^n is the nonlinear flux function, n ∈ℕ is the number of conserved quantities and x∈Ω and t ∈ [0,T] are the spatial coordinate and time, respectively. To facilitate conservation statements and minimize the role of boundary conditions we will focus on periodic spatial domains Ω = 𝕋([a,b]) with b > a and b,a ∈ℝ in this research, here 𝕋 is the torus. The equations are complemented by a set of initial conditions u_0 : Ω→ℝ^n so that u_0(x) = u(x,0). The conservation law (<ref>) is referred to as hyperbolic when the Jacobian matrix ∂f/∂u is diagonalizable with real eigenvalues for all physically relevant u <cit.>. In many cases, the solutions of physically relevant hyperbolic conservation laws also satisfy additional conservation laws of the form: ∂ s(u)/∂ t + ∂ℱ(u)/∂ x = 0, where the function s : ℝ^n →ℝ is called the entropy function which is defined to be convex[A function g : ℝ^n →ℝ is convex if its Hessian, ∂^2 g/∂u^2(u), is positive definite for all u.] and ℱ : ℝ^n →ℝ is called the entropy flux. In particular, such an additional conservation law exists if the compatibility relation: η(u)^T∂f/∂u = ∂ℱ/∂u^T, is satisfied. Here, η : ℝ^n →ℝ^n, u↦∂ s/∂u(u) is the gradient of the entropy function s with respect to u. This mapping is injective due to the convexity of s and hence can be inverted. A pair (s,ℱ) satisfying the compatibility relation (<ref>) is called an entropy pair of (<ref>). From the compatibility relation (<ref>) different ways of symmetrizing[A conservation law is symmetrized if it can be written as A∂u/dt + B ∂u/∂ x = 0 with A,B symmetric matrices.] the conservation law (<ref>) can be derived <cit.> which has implications on the well-posedness of the initial value problem (<ref>) <cit.>. Taking the gradient[It is understood that (η(u)^T∂^2 f/∂u^2)_ij = ∑_k η_k ∂^2 f_k/∂ u_i ∂ u_j.] of the compatibility relation gives: ∂^2 s/∂u^2∂ f/∂u + η(u)^T∂^2 f/∂u^2 = ∂^2 ℱ/∂u^2. As a linear combination of symmetric matrices the right term on the left side is symmetric and as a Hessian matrix of a scalar-valued function the term on the right side is also symmetric. Consequently, the left term on the left side must be symmetric meaning that the Hessian of the entropy symmetrizes the flux Jacobian from the left: ∂^2 s/∂u^2∂ f/∂u = (∂^2 s/∂u^2∂ f/∂u)^T. In the literature <cit.> η := η(u) are commonly referred to as the entropy variables as, since s is convex, η is injective and may be used as a change of variables. Writing the conservation laws (<ref>), (<ref>) in terms of entropy variables allows for a second way of symmetrizing the conservation law: ∂ u(η)/∂ t + ∂ g(η)/∂ x = 0 ∂σ(η)/∂ t + ∂𝒢(η)/∂ x = 0, where g(η) := f(u(η)), σ(η) := s(u(η)) and 𝒢(η) := ℱ(u(η)). The compatibility condition (<ref>) now requires that: η^T∂ g/∂η = ∂𝒢/∂η^T. Taking the gradient with respect to the entropy variables results in: ∂ g/∂η + η^T ∂^2 g/∂η^2 = ∂^2 𝒢/∂η^2, showing that ∂ g/∂η is symmetric. Since ∂ u/∂η = ∂η/∂u^-1 = ∂^2 s/∂u^2^-1 is also symmetric, the conservation law written in terms of entropy variables is symmetric <cit.>. 
By the Poincaré lemma, the symmetry of ∂ u/∂η and ∂ g/∂η established above shows that they are the Hessians of two potential functions: u(η) = ∂ς/∂η with ς(η) = η^Tu(η) - σ(η) and g(η) = ∂ψ/∂η with ψ(η) = η^Tg(η) - 𝒢(η), where ς is the entropy potential and ψ is the entropy flux potential, which has become a powerful tool for numerical methods <cit.>. Since ∂ g/∂η = ∂ f/∂u∂ u/∂η = ∂ f/∂u∂^2 s/∂u^2^-1 the flux Jacobian ∂ f/∂u is symmetrized from the right by ∂^2 s/∂u^2^-1. Finally, formally taking a spatial derivative of the entropy flux potential and taking into account (<ref>) gives: ∂ψ/∂ x = ∂η/∂ x^T g(η), which will be important when defining numerical fluxes later. It is well-known that solutions u to (<ref>) can develop discontinuities in finite time for smooth u_0 <cit.>. When this occurs the solution u is said to contain a shock or contact discontinuity depending on the behaviour of the discontinuity <cit.>. In this case both formulation (<ref>) and the manipulations to obtain (<ref>) are no longer valid. We must therefore consider (<ref>) in a weak sense to retain a notion of solutions. This weak form of the conservation law is obtained by integrating against a space of smooth test functions v : Ω× [0,∞ ) →ℝ^n with compact support i.e. v∈ C_0^∞(Ω× [0,∞) ) and transferring all derivatives to these test functions to obtain: ∫_0^∞∫_Ωu·∂v/∂ t + f(u) ·∂v/∂ x dx dt + ∫_Ωu_0 ·v(x,0) dx = 0, on periodic Ω. Note that this expression is valid even for discontinuous u and that any smooth u satisfying the strong form (<ref>) also satisfies this weak form (<ref>) <cit.>. However, weakening the notion of a solution like this comes at the cost that it does not necessarily yield unique solutions (examples of such cases may be found in <cit.>). Out of all weak solutions, i.e. solutions to (<ref>), those of physical interest are the ones that arise as the limit function u = lim_ε→ 0u_ε where u_ε satisfies a viscously regularized version of the conservation law (<ref>): ∂u_ε/∂ t + ∂ f(u_ε)/∂ x = ε∂^2 u_ε/∂ x^2. These weak solutions are referred to as vanishing viscosity weak solutions. Intuitively, allowing only vanishing viscosity weak solutions excludes well-known unphysical solutions like expansion shocks (see <cit.>) as these would instantly be smeared over the domain in the presence of ε > 0 and are therefore not limits of viscous solutions. Entropy is an important tool in the analysis of vanishing viscosity solutions as these solutions satisfy a so-called entropy inequality <cit.>. Namely, multiplying (<ref>) from the left with η(u_ε)^T and integrating against a smooth non-negative test function v ≥ 0 with compact support we can find after some manipulations: ∫_0^∞∫_Ω s(u_ε) ∂ v/∂ t + ℱ(u_ε) ∂ v/∂ x dx dt = ∫_0^∞∫_Ωε∂u_ε/∂ x·(∂^2 s/∂u^2∂u_ε/∂ x) v - ε s(u_ε) ∂^2 v/∂ x^2 dx dt ≥ -∫_0^∞∫_Ωε s(u_ε) ∂^2 v/∂ x^2 dx dt, where the inequality follows from the convexity of s and the non-negativity of v. Given that certain mathematical conditions are satisfied (see <cit.>) taking the limit ε→ 0 results in an inequality for the vanishing viscosity weak solution: ∫_0^∞∫_Ω s(u) ∂ v/∂ t + ℱ(u) ∂ v/∂ x dx dt ≥ 0. This inequality is referred to as the entropy inequality in the literature <cit.> and is often written as: ∂ s(u)/∂ t + ∂ℱ(u)/∂ x≤ 0, to be satisfied in the sense of distributions, with equality for smooth u following (<ref>) and inequality for solutions containing shocks. This inequality arises from considering limits of regularized conservation laws; more details can be found in <cit.>.
For scalar conservation laws Kruzkov <cit.> established that weak solutions satisfying (<ref>) are unique, but for systems uniqueness is not yet completely established <cit.>. We can define the total entropy functional as: 𝒮[u] := ∫_Ω s(u) dx. Defining appropriate sequences of test functions and taking limits <cit.>, it can be shown that the estimate: d𝒮[u]/dt≤ 0, follows from (<ref>) on periodic Ω. This inequality will be the main interest of this paper. §.§ Entropy stable spatial discretization (FOM) We will discretize the conservation law (<ref>) with a finite volume method (FVM) based on flux-differencing <cit.>. To introduce the general and frequently recurring structure of our entropy stability proof we will provide some detail on the full order model (FOM) discretization. The FVM discretization will be constructed such that discrete analogues to (<ref>) hold. Other discretization methods that similarly mimic (<ref>) are also possible, for example the split-form discontinuous Galerkin (DG) methods described in <cit.>, the summation-by-parts schemes in <cit.> and the higher-order methods of <cit.>. We choose the FVM to keep the exposition simple, but note that our entropy-stable ROM framework should work with other entropy-stable FOM discretizations as well. The scheme is formulated as: Δ x_i d u_i/dt + f_i+1/2 - f_i-1/2 = 0, i ∈{0,...,N-1}, on a grid of N grid cells. Here, u_i : [0,∞) →ℝ^n is the numerical solution vector in the i-th grid cell, Δ x_i := x_i+1/2 - x_i-1/2 is the cell size of the i-th cell with x_i± 1/2 denoting respectively the x values of the right (+) and left (-) cell boundary and f_i+1/2 := f_h(u_i+1, u_i) is the numerical flux on the right cell boundary of the i-th cell with f_h : ℝ^n×ℝ^n →ℝ^n being a two-point numerical flux function <cit.> approximating the flux function f on a cell boundary based on two neighbouring numerical solution values; similarly, f_i-1/2 approximates f on the left boundary. We also define the total number of unknowns as N_h := n· N. Periodic boundary conditions are enforced by setting u_N := u_0 and u_-1 := u_N-1. In the schemes we are considering the numerical flux function is constructed from entropy-conservative flux functions <cit.>. These are flux functions that assure discrete analogues of (<ref>) and (<ref>) are satisfied with equality. This makes them suitable starting points from which to construct flux functions that have appropriate entropy-dissipative properties. We follow Tadmor's framework <cit.> of entropy-conservative fluxes, which are defined as follows: An entropy-conservative two-point numerical flux f^*_h : ℝ^n ×ℝ^n →ℝ^n is a numerical two-point flux satisfying: * consistency: f^*_h(u,u) = f(u); * symmetry: f_h^*(u_l,u_r) = f_h^*(u_r,u_l); * entropy conservation: (η(u_l) - η(u_r))^Tf_h^*(u_l,u_r) = ψ(u_l) - ψ(u_r). We see that condition 3 is a discrete equivalent to (<ref>) and represents a discrete gradient property (see <cit.>) of the numerical flux in the light of (<ref>). The entropy-dissipative fluxes f_h : ℝ^n ×ℝ^n ×ℝ^N_h→ℝ^n are now constructed from entropy-conservative fluxes f_h^* by adding (possibly solution dependent) entropy dissipation operators like: f_i+1/2 := f_h(u_i+1, u_i, u_h) = f_h^*(u_i+1, u_i) - D_i+1/2(u_h)Δη_i+1/2 with D_i+1/2 : ℝ^N_h→𝕊^n_+ so that D_i+1/2(u_h) is symmetric positive semi-definite (SPSD) for any u_h(t) ∈ℝ^N_h (𝕊^n_+ and 𝕊^n_++ are the convex sets of symmetric positive semi-definite and symmetric positive definite n× n matrices, respectively).
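As a quick illustration of this definition, the following sketch checks the three conditions numerically for the entropy-conservative Burgers flux that is also used later in the experiments; the entropy pair s(u) = u^2/2, ℱ(u) = u^3/3 and the resulting entropy flux potential ψ(u) = u^3/6 are the standard choices and are assumed here only for the check.

import numpy as np

# Check conditions 1-3 above for the entropy-conservative Burgers flux
# f*(u_l, u_r) = (u_l^2 + u_l*u_r + u_r^2)/6 (also used in the experiments),
# with the standard choices eta(u) = u and psi(u) = u^3/6 assumed for the check.
rng = np.random.default_rng(0)

def f(u):            return 0.5 * u**2          # Burgers flux
def f_star(ul, ur):  return (ul**2 + ul * ur + ur**2) / 6.0
def eta(u):          return u                   # entropy variables for s = u^2/2
def psi(u):          return u**3 / 6.0          # entropy flux potential

ul, ur = rng.normal(size=1000), rng.normal(size=1000)
consistency = np.allclose(f_star(ul, ul), f(ul))                  # 1. consistency
symmetry    = np.allclose(f_star(ul, ur), f_star(ur, ul))         # 2. symmetry
tadmor      = np.allclose((eta(ul) - eta(ur)) * f_star(ul, ur),   # 3. entropy
                          psi(ul) - psi(ur))                      #    conservation
print(consistency, symmetry, tadmor)   # True True True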
Here, u_h : [0,∞) →ℝ^N_h is the numerical solution vector on the whole grid to be defined in what follows (this is required for higher order reconstructions like in e.g. <cit.>). Additionally, we have defined Δη_i+1/2 := η(u_i+1) - η(u_i). We will refrain from denoting explicitly the dependence on u_h in the third argument of f_h and simply write f_h(u_i+1, u_i) for (<ref>). For the purpose of model reduction in section <ref> we rewrite discretization (<ref>) with flux (<ref>) in a matrix-vector formulation. We will introduce the following notations: volume-based quantities which live on cell centers and interface-based quantities which live on cell interfaces. The numerical solution vector is a volume-based quantity given by: u_h(t) := [u^1_0(t)...,u^k_i(t),u^k_i+1(t),...,u^n_i(t),...]^T ∈ℝ^N_h, where u^k_i(t) ∈ℝ is the approximation of the k-th conserved variable (the variable conserved by the k-th equation in (<ref>)) in the i-th cell evaluated at time t. The numerical flux vector is an interface-based quantity, overloading the notation for the numerical flux function, it is given by: f_h(u_h) := [f^1_1/2, ..., f^1_N-1/2,f^2_1/2, ... f^k_i-1/2, f^k_i+1/2,...,f^n_i+1/2,...,f^n_N-1/2]^T ∈ℝ^N_h, where f^k_i+1/2 is the numerical flux of the k-th conservation equation evaluated at interface i+1/2 between cells i+1 and i. Periodic boundary conditions are built into the flux vector by evaluating f^k_N-1/2(u_0,u_N-1) for all k∈{1,...,n}. To perform finite-difference operations as in (<ref>) for volume-based and interface-based quantities, respectively, the following matrices are defined: Δ̅_v = [ 1 0 0 - 1; -1 1 0 0; 0 ⋱ ⋱ 0; 0 0 -1 1; ]∈ℝ^N× N, Δ̅_i = [ -1 1 0 0; 0 ⋱ ⋱ 0; 0 0 -1 1; 1 0 0 -1; ]∈ℝ^N× N. We note the skew-adjointness relation Δ̅_v = -Δ̅_i^T, and that both have zero row and column sum. These properties will be used in proving entropy stability of the scheme. For systems we define Δ_v := I ⊗Δ̅_v and Δ_i := I ⊗Δ̅_i with I being the n× n identity matrix and ⊗ the Kronecker product. Clearly, Δ_v and Δ_i satisfy a similar skew-adjointness property (<ref>). We will also introduce the FVM mass matrices Ω̅_h = diag(Δ x_i) with i = 0,1,...,N-1 and Ω_h := I ⊗Ω̅_h. With these operators we can write a compact form of the discretization (<ref>) as follows: Ω_hd u_h/dt + Δ_v f_h(u_h) = 0. To emphasize the role played by the dissipation operator D_i+1/2 in obtaining entropy-stable spatial discretizations we will decompose Δ_v f_h(u_h) in an entropy-conserving part and an entropy-dissipating part, resulting in: Ω_hd u_h/dt + Δ_v f_h^*(u_h) = Δ_v D_h(u_h) Δ_i η_h, here, f_h^*(u_h) is a vector of entropy conservative numerical fluxes, D_h(u_h) ∈𝕊_+^N_h is an SPSD matrix containing the terms associated to the dissipation operators D_i+1/2 and η_h is a vector containing the grid values of the entropy variables ordered similarly as u_h. To this end, we will define the following block diagonal matrix-valued function D̃_h : ℝ^N_h →𝕊^N_h_+, u_h ↦diag_(D_i+1/2(u_h)) where diag_(D_i+1/2(u_h)) produces a block diagonal matrix with blocks being equal to D_i+1/2(u_h) with i = 0,1,...,N-1. As in the formulation (<ref>) we would like to apply this matrix to the vector Δ_i η_h with: η_h := [η^1(u_0(t)), ..., η^k(u_i(t)), η^k(u_i+1(t)), ..., η^n(u_i(t)), ...]^T ∈ℝ^N_h, being a volume-based quantity and η^k(u_i) being the k-th component of the entropy variables evaluated at the i-th cell. However, before this can be done we need to suitably permute the rows and columns of D̃_h(u_h). 
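Before introducing the permutation, the following small sketch (illustrative only, with arbitrary N and n) constructs the periodic difference operators defined above and verifies the skew-adjointness relation and the zero row and column sums that the entropy analysis below relies on.

import numpy as np

# Periodic difference matrices from the matrix-vector formulation above.
N, n = 6, 2
P_sub   = np.roll(np.eye(N), -1, axis=1)        # ones at (i, i-1 mod N)
P_super = np.roll(np.eye(N), 1, axis=1)         # ones at (i, i+1 mod N)
Dv_bar = np.eye(N) - P_sub                      # +1 on diagonal, -1 on periodic subdiagonal
Di_bar = P_super - np.eye(N)                    # -1 on diagonal, +1 on periodic superdiagonal

assert np.allclose(Dv_bar, -Di_bar.T)           # skew-adjointness relation
assert np.allclose(Dv_bar.sum(axis=0), 0) and np.allclose(Dv_bar.sum(axis=1), 0)
assert np.allclose(Di_bar.sum(axis=0), 0) and np.allclose(Di_bar.sum(axis=1), 0)

# System versions via Kronecker products and the FVM mass matrices for a
# (here uniform) grid with cell sizes dx_i.
Dv = np.kron(np.eye(n), Dv_bar)
Di = np.kron(np.eye(n), Di_bar)
dx = np.full(N, 1.0 / N)
Omega_bar = np.diag(dx)
Omega = np.kron(np.eye(n), Omega_bar)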
To this end, we define the permutation matrix: P = [ I ⊗[ 1 0 ... 0 ]; I ⊗[ 0 1 ... 0 ]; ⋮; I ⊗[ 0 0 ... 1 ] ]∈{0,1}^N_h × N_h, where I is the N× N identity matrix and the row vectors are of length n. The permuted dissipation matrix D_h(u_h) := PD̃_h(u_h)P^T ∈𝕊^N_h_+ with correct ordering can then be defined. Finally, we rewrite (<ref>) where the flux vector f_h is decomposed over an entropy-conserving part f_h^* and an entropy dissipative part D_h(u_h)Δ_i η_h: Ω_hd u_h/dt + Δ_v f_h^* = Δ_v D_h(u_h) Δ_i η_h, where we have taken the entropy dissipative part to the right side. Having introduced an entropy-dissipative numerical flux, we evaluate the discrete analogue to the continuous total entropy functional (<ref>) which should be suitably dissipated by the entropy stable discretization (<ref>) or conserved in the case of no dissipation. The discrete total entropy functional will be defined as: S_h[u_h] := 1^TΩ̅_h s_h, where 1 is a vector of ones and the local entropy is defined as: s_h(u_h(t)) := [s(u_0(t)), ..., s(u_i(t)), ..., s(u_N-1(t))]^T ∈ℝ^N, which is a volume-based quantity. The time evolution of S_h is given by dS_h[u_h]/dt = 1^TΩ̅_hds_h/dt = ∑_i Δ x_i η(u_i)^T du_i/dt = η_h^TΩ_hdu_h/dt. We also define the entropy flux potential vector as: ψ_h(u_h(t)) := [ψ(u_0(t)), ..., ψ(u_i(t)), ..., ψ(u_N-1(t))]^T ∈ℝ^N, which is a volume-based quantity, like the local entropy vector. To analyse the entropy evolution we can substitute the spatial discretization (<ref>) in the previous expression to obtain: dS_h[u_h]/dt = η_h^TΩ_hdu_h/dt = -η_h^TΔ_v f_h^*(u_h) + η_h^TΔ_vD_h(u_h)Δ_iη_h = (Δ_i η_h)^T f_h^*(u_h) - η_h^T Δ_i^T D_h(u_h) Δ_i η_h = 1^TΔ̅_i ψ_h - η_h^T Δ_i^T D_h(u_h) Δ_i η_h = 0 - η_h^T Δ_i^T D_h(u_h) Δ_i η_h ≤ 0, where we used the skew-adjointness property (<ref>), the entropy conservation condition of the numerical flux, positive semi-definiteness of the dissipation operator D_h(u_h) (and thus of Δ_i^T D_h(u_h) Δ_i) and the zero column sum of Δ̅_i. Clearly, in case no entropy dissipation is added in the numerical flux (<ref>), equation (<ref>) reduces to dS_h[u_h]/dt = 0. We note that the inequality (<ref>) allows for formal L^p-stability statements <cit.>. §.§ The entropy-stable linear ROM of Chan <cit.> The main aim of this work is to propose reduced order models (ROMs) that are a nonlinear generalization of the entropy stable ROM of Chan <cit.>. To highlight key conceptual differences between the ROM in <cit.> and ours, and to introduce the ROM methodology, we will briefly discuss the elements of Chan's ROM leading to its entropy stability. Classical reduced order models including <cit.> make the assumption that the evolution of u_h(t) can be accurately approximated with elements from a linear space 𝒱⊂ℝ^N_h where dim(𝒱) := r ≪ N_h so that 𝒱 can be referred to as low-dimensional <cit.>. Classically, the subspace 𝒱 is constructed using a truncated proper orthogonal decomposition (POD) <cit.>, i.e. by finding an optimal orthogonal basis for a set of solution snapshots collected in the matrix: X = [u_h(t^0), u_h(t^1), ..., u_h(t^n_s - 1)] ∈ℝ^N_h × n_s, where n_s ∈ℕ is the number of snapshots. We remark that here we have varied only time, but this approach can be applied to any parameter dependent set of snapshots. The optimal orthogonal basis is obtained by solving <cit.>: Φ = argmin_Φ∈ℝ^N_h × r ||X - ΦΦ^T A X||_F^2 s.t. Φ^TA Φ = I, in some desired A-weighted inner-product space <cit.>.
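As an aside, one common way to construct an Ω_h-orthonormal basis of the form required above is through an SVD of the weighted snapshot matrix; the following sketch (illustrative only, with placeholder snapshot data and weights, not the implementation used in this work) realizes the Ω_h-orthogonality constraint in this way.

import numpy as np

# Sketch: a POD-type basis Phi with Phi^T A Phi = I for A = Omega_h via an SVD
# of the weighted snapshots. X, dx and r are illustrative placeholders.
rng = np.random.default_rng(1)
N_h, n_s, r = 600, 201, 15
X = rng.normal(size=(N_h, n_s))              # snapshot matrix (placeholder data)
dx = np.full(N_h, 1.0 / N_h)                 # diagonal of Omega_h (placeholder)

w_sqrt = np.sqrt(dx)                         # Omega_h^{1/2} is diagonal here
U, S, Vt = np.linalg.svd(w_sqrt[:, None] * X, full_matrices=False)
Phi = U[:, :r] / w_sqrt[:, None]             # Phi = Omega_h^{-1/2} U_r

# Orthonormality in the Omega_h-weighted inner product:
assert np.allclose(Phi.T @ (dx[:, None] * Phi), np.eye(r), atol=1e-10)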
The construction of the ROM in <cit.> starts by defining the approximation u_h ≈u_r := Φa with a∈ℝ^r being generalized coordinates in 𝒱 relative to the basis Φ. Here, we assume Φ is orthogonal in the Ω_h-weighted inner product, i.e. ϕ_i^TΩ_h ϕ_j = δ_ij with ϕ_i the i-th column of Φ and δ_ij the Kronecker delta function. Then, the approximation is substituted in (<ref>) introducing a semi-discrete residual : r := Φda/dt + Ω_h^-1Δ_v f_r^* - Ω_h^-1Δ_v D_h(Φa)Δ_i η_r, which is set orthogonal to 𝒱 by solving the Galerkin projected system: da/dt + Φ^TΔ_vf_r^*(a) = Φ^TΔ_v D_r(a)Δ_iη_r(a), with f_r^*(a) := f_h^*(Φa), D_r(a) := D_h(Φa) and η_r(a) := η_h(Φa). Equation (<ref>) defines a (linear) POD-Galerkin ROM <cit.>. Similar to the FOM case, see equation (<ref>), we can evaluate the evolution of the ROM total entropy. The reduced total entropy functional is defined as: S_r[a] := S_h[Φa] = 1^TΩ̅_hs_r(a), with s_r(a) := s_h(Φa). The evolution of the reduced total entropy is: dS_r[a]/dt = 1^TΩ̅_hds_r/dt = ∑_i Δ x_i η(Φ_i a)^T Φ_i da/dt = η_r^TΩ_hΦda/dt, with Φ_i ∈ℝ^n × r the rows of Φ approximating values in cell i. The entropy evolution of (<ref>) is: dS_r[a]/dt = η_r^TΩ_hΦda/dt = - η_r^T Ω_h ΦΦ^T Δ_vf_r^*(a) + η_r^T Ω_h ΦΦ^TΔ_v D_r( a)Δ_iη_r = -η̃_r^T Δ_vf_r^*(a) + η̃_r^T Δ_v D_r(a)Δ_iη_r, where, since ΦΦ^T Ω_h defines an Ω_h-orthogonal projection operator <cit.>, η̃_r := ΦΦ^T Ω_h η_r are the so-called projected entropy variables. It is unclear whether this expression is bounded. To solve this, Chan <cit.> proposes a technique used earlier in DG finite element literature <cit.>. Namely, the discretization is not evaluated at Φa but at the entropy projected state: ũ_r := u(ΦΦ^T Ω_h η_r) = u(η̃_r), where we have defined u : ℝ^N_h→ℝ^N_h, η_h ↦u_h for notational convenience. Recall that the mapping u is indeed available since η is injective. In this case we have: dS_r[a]/dt = -η̃_r^T Δ_vf_h^*(u(η̃_r)) + η̃_r^T Δ_v D_h(ũ_r)Δ_iη̃_r = (Δ_i η̃_r)^T f_h^*(u(η̃_r)) - η̃_r^T Δ_i^T D_h(ũ_r)Δ_iη̃_r = 1^TΔ̅_iψ̃_r - η̃_r^T Δ_i^T D_h(ũ_r)Δ_iη̃_r = 0 - η̃_r^T Δ_i^T D_h(ũ_r)Δ_iη̃_r ≤ 0, so that we re-obtain an entropy estimate that mimics the FOM estimate (<ref>). Here, ψ̃_r is the entropy flux potential vector evaluated at the entropy projected state. A key difference with the DG literature <cit.> is that in the ROM case the basis Φ is only constructed to resolve solutions present in the snapshot matrix X, whereas in DG the trial basis is able to approximate a larger subspace of the relevant PDE function spaces. As a result, the DG trial basis can resolve the entropy variables well, but this may not be the case for the reduced basis Φ. To address this issue, <cit.> builds the basis Φ from a set of augmented snapshots given (with some abuse of notation) by: X̃ = [X, η(X)], so that the projection of the entropy variables on the basis Φ is close to the identity. As we will explain in section <ref>, for our proposed nonlinear manifold ROMs, such a construction is not sufficient, and a new tangent space enrichment technique will be proposed to ensure the accuracy of the entropy projection. § AN ENTROPY-STABLE NONLINEAR MANIFOLD GALERKIN ROM The solution manifolds of many hyperbolic conservation laws (<ref>) have slow Kolmogorov n-width decay (<ref>). Hence, approximations using linear subspaces as in <cit.> may require very large reduced space dimensions r before they become accurate. This comes at the cost of their efficiency. 
For this reason ROMs built on nonlinear spaces endowed (at least locally) with a manifold structure have become a topic of interest <cit.>. To address the shortcomings of the linear subspaces employed in <cit.> we will generalize this method to nonlinear reduced spaces, while keeping the entropy-stability property. We will be interested specifically in nonlinear subsets of ℝ^N_h endowed with some inner product, instead of any abstract space. Therefore we will not be very rigorous about our use of the word manifold, following predominantly the treatise of <cit.> and standard multivariable calculus interpretations. For a rigorous treatment we suggest consulting the recent preprint <cit.>. We will give a brief description of nonlinear manifold ROMs and then propose our generalization of <cit.>. §.§ Manifold Galerkin model reduction In constructing ROMs on nonlinear manifolds we make the assumption that for any t∈ [0,T] there are points u_r(t) on a low-dimensional submanifold ℳ⊂ℝ^N_h that accurately approximate u_h(t). Here, we denote r := dim(ℳ) and the low-dimensionality of ℳ implies that r ≪ N_h. We will refer to the submanifold ℳ as the reduced manifold. Instead of the classical affine reduced space parameterization seen in the previous section we will use nonlinear manifold parameterizations given as: u_h(t) ≈u_r(t) := φ(a(t)) ∈ℳ, where: φ : ℝ^r →ℝ^N_h, is assumed to be a smooth nonlinear injective function - at least when restricted to some subset 𝒜⊆ℝ^r of interest where the ROM will be well-defined. This means that φ(ℝ^r) = ℳ with a Jacobian J : ℝ^r →ℝ^N_h × r, a↦∂φ/∂a(a) of full rank for any a∈𝒜⊆ℝ^r, where a : [0,T] →ℝ^r are generalized coordinates on the manifold ℳ. The function φ may be obtained in many ways: some examples are quadratic approximations <cit.> or neural networks <cit.>. We will propose a new method based on rational polynomials in section <ref>. In this section, we develop an entropy-stable ROM which is agnostic of the choice for φ. As a result of (<ref>), the generalized coordinates a(t) provide a low-dimensional description of the reduced solution vector u_r(t). As a smooth function of the form u_r : ℝ⊃ [0,T] →ℳ, the ROM solution defines a curve on the manifold ℳ. Consequently, the time derivative du_r/dt defines a velocity vector tangent to ℳ at a point u_r(t) on the curve. The set of all velocity vectors at u_r(t) of all curves on ℳ passing through u_r(t) naturally defines an r-dimensional vector space called the tangent space and denoted T_u_rℳ. Since any curve γ : ℝ⊃ I →ℳ, t ↦φ(a(t)) passing through u_r(t) can be parameterized by φ in (<ref>) we find that any velocity vector dγ/dt∈ T_u_rℳ can be written as d γ/dt = Jda/dt. Indeed, we have that the range of the Jacobian matrix J(a) : ℝ^r →ℝ^N_h, v↦J(a) v evaluated at a such that φ(a) = u_r(t) forms a basis to T_u_rℳ, i.e. : T_u_rℳ = range(J(a)). We will primarily denote J instead of J(a) as the context is often clear. As ℳ is embedded in the ambient space ℝ^N_h for which we can also define tangent vectors we have that T_u_rℳ⊂ T_u_rℝ^N_h where T_u_rℝ^N_h denotes the N_h-dimensional vector space of vectors emanating from u_r(t). To construct a ROM we substitute (<ref>) into the FOM discretization (<ref>) so that after applying the chain rule we find the residual: r(da/dt,a) := Jda/dt + Ω_h^-1Δ_vf_r^*(a) - Ω_h^-1Δ_v D_r(a) Δ_i η_r(a), where we define f_r^*(a) := f_h^*(φ(a)), D_r(a) := D_h(φ(a)) and η_r(a) := η_h(φ(a)). 
The ROM is defined by minimizing this residual in the Ω_h-norm for da/dt given some a, this results in the ROM: (J^TΩ_hJ) da/dt + J^T Δ_v f_r^*(a) = J^TΔ_vD_r(a) Δ_i η_r(a), which is indeed well-defined for a∈𝒜⊆ℝ^r since the Jacobian J is assumed to be of full-rank on the subset 𝒜, making the mass matrix (J^TΩ_hJ) invertible <cit.>. In this nonlinear case the ROM is given by the coefficients of an orthogonal projection of the FOM on the tangent space of ℳ defined by T_u_rℳ := span(J(a)) with a such that u_r = φ(a). This orthogonal projection is carried out using the Ω_h-weighted Moore-Penrose pseudoinverse J^† := (J^TΩ_hJ)^-1J^TΩ_h. Constructing a ROM by projecting the FOM on the tangent space instead of the reduced manifold itself will result in key differences in our approach compared to the linear case outlined in <cit.>: in contrast to the linear case where J = Φ, T_u_rℳ and ℳ are no longer the same space. We introduce J^+ = J^†Ω_h^-1 = (J^TΩ_hJ)^-1J^T and write the ROM in compact form: da/dt + J^+ Δ_v f_r^*(a) = J^+Δ_vD_r(a) Δ_i η_r(a). The choice of inner-product spaces for ROMs of hyperbolic systems has recently come into question <cit.>. Indeed when n > 1 the norm ||Jα||_Ω_h for α∈ℝ^r is dimensionally inconsistent in general. It is shown in <cit.> that dimensionally consistent inner products that are more appropriate in some sense can improve robustness of the ROMs. For our approach however it will be important that the ROM is calculated with the same inner product as used to calculate dS_h/dt. Therefore, we will only deal with nondimensionalized conservation laws. Alternatively, our results can also be applied at an equation-by-equation basis at the cost of potentially introducing a larger number of generalized coordinates. A popular approach to construct nonlinear manifold ROMs is the least squares Petrov-Galerkin (LSPG) method <cit.>. Using this method a fully discrete residual is minimized. We have chosen not to use this method because we want to use the structure of our entropy stable FOM discretization in constructing entropy stable ROMs. The fully discrete residual minimization approach of LSPG makes it more difficult to apply this structure. §.§ An entropy stable nonlinear manifold Galerkin ROM The reduced total entropy functional of the nonlinear manifold ROM is now defined similarly to the linear case (equation (<ref>)) as: S_r[a] := S_h[φ(a)] = 1^T Ω̅_h s_r(a), with s_r(a) := s_h(φ(a)). The reduced total entropy evolution is given by: dS_r[a]/dt = 1^T Ω̅_h ds_r/dt = ∑_i Δ x_i η(φ_i(a))^TJ_i da/dt = η_r^TΩ_h Jda/dt, where φ_i : ℝ^r →ℝ^n is the ROM approximation of the conserved variables in the i-th cell and J_i ∈ℝ^n× r is the Jacobian matrix of φ_i evaluated at a. Using (<ref>), the entropy evolution of the nonlinear manifold Galerkin ROM is: dS_r[a]/dt = η_r^T Ω_h Jda/dt = -η_r^T Ω_h JJ^+ Δ_v f_r^*(a) + η_r^T Ω_h JJ^+Δ_vD_r(a) Δ_i η_r(a) = -η̃_r^T Δ_v f_r^*(a) + η̃_r^T Δ_vD_r(a) Δ_i η_r(a), where the projected entropy variables are defined as η̃_r = (Ω_hJJ^+)^T η_r = JJ^†η_r, where JJ^† is an Ω_h-orthogonal projection on T_u_rℳ. It follows from (<ref>) that the reduced total entropy evolution satisfies an equation that is quite similar to the total entropy evolution of the FOM. However, instead of the actual entropy variables η_r evaluated at the point u_r on the reduced manifold ℳ, the inner product is taken with the projected entropy variables η̃_r. 
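For illustration, the following sketch evaluates the manifold Galerkin velocity (<ref>) once for a placeholder decoder φ, obtaining its Jacobian with automatic differentiation (forward mode here, via JAX as in the experiments); the decoder, the weights and the assembled spatial term are placeholders.

import jax
import jax.numpy as jnp

# One evaluation of da/dt = -(J^T Omega_h J)^{-1} J^T [Delta_v f* - Delta_v D Delta_i eta];
# `spatial_term` stands for that bracketed combination, assembled elsewhere.
def phi(a):                                   # placeholder nonlinear decoder
    return jnp.tanh(W1 @ a) + W2 @ a

def rom_velocity(a, spatial_term, w):
    """Manifold Galerkin velocity; w is the diagonal of Omega_h."""
    J = jax.jacfwd(phi)(a)                    # (N_h, r) tangent space basis
    M = J.T @ (w[:, None] * J)                # reduced mass matrix J^T Omega_h J
    return -jnp.linalg.solve(M, J.T @ spatial_term)

N_h, r = 300, 10
W1 = jax.random.normal(jax.random.PRNGKey(0), (N_h, r)) / r
W2 = jax.random.normal(jax.random.PRNGKey(1), (N_h, r)) / r
a = jnp.zeros(r)
w = jnp.full(N_h, 1.0 / N_h)
print(rom_velocity(a, jnp.ones(N_h), w).shape)   # (r,)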
We would like to use the entropy conservation condition at this point to show that the inner product in (<ref>) is zero, but this does not hold because η(u_r)≠η̃_r in general. To solve this, we can instead use the invertibility of the entropy variables to find for what state ũ_r ∈ℝ^N_h we do have η(ũ_r) = η̃_r. If we evaluate our flux at this state instead we can invoke the entropy conservation condition of the numerical flux to complete the proof of entropy conservation (or stability). This is exactly what is done in the linear setting in <cit.> and leads to our main novelty. We now present the main novelty of our work. We introduce a novel nonlinear manifold generalization of the linear entropy projection of <cit.>. It is given by: ũ_r = u(JJ^†η_r) = u(η̃_r), where the entropy variables η_r evaluated at the ROM state u_r are projected onto the tangent space T_u_rℳ instead of the reduced space itself. We note that projecting the entropy variables on the tangent space T_u_rℳ is a rather natural operation as the entropy variables can be interpreted as the gradient vector field of the entropy functional S_h and thus as tangent vectors of ℝ^N_h. The vector ũ_r is the entropy projected state of the ROM. Carrying out this entropy projection on the tangent space is necessary as the projected entropy variables η̃_r appearing in the reduced total entropy evolution equation also are projected on the tangent space. A potential issue of our proposed form (<ref>) is that the difference between the projected entropy variables η̃_r and the original entropy variables η_r can be very large. Namely, in similar fashion to the naively constructed linear spaces spanned by Φ described in <ref> and <cit.>, for arbitrary φ the entropy variables η_r may not be well-resolved by the columns of J(a) and thus by the Ω_h-orthogonal projection in (<ref>). This will very likely cause problems with accuracy of the ROM and the mapping u might not even be well-defined at η̃_r. We will propose a novel method to assure the difference between η_r and η̃_r remains small in the following section. First, we continue constructing an entropy stable nonlinear manifold ROM and perform a Galerkin projection of the FOM at the entropy projected state ũ_r. Doing so we obtain the following ROM: da/dt + J^+Δ_vf_h^*(ũ_r) = J^+Δ_v D_h(ũ_r) Δ_i η̃_r, where we used η̃_r = η_h(u(η̃_r)). Here, the Jacobian matrix is still evaluated at the reduced coordinate a∈ℝ^r such that u_r = φ(a) i.e. the non-projected state. It can be seen that the reduced total entropy evolution is bounded for this ROM since: dS_r[a]/dt = -η̃_r^TΔ_v f_h^*(u(η̃_r)) + η̃_r^TΔ_vD_h(ũ_r)Δ_iη̃_r = (Δ_i η̃_r)^Tf_h^*(u(η̃_r)) - η̃_r^TΔ_i^TD_h(ũ_r)Δ_iη̃_r = 1^TΔ̅_iψ̃_r - η̃_r^TΔ_i^TD_h(ũ_r)Δ_iη̃_r = 0 - η̃_r^TΔ_i^TD_h(ũ_r)Δ_iη̃_r ≤ 0, where ψ̃_r = ψ_h(ũ_r) is the entropy flux potential of the entropy projected state and we were allowed to invoke the entropy conservation condition of the numerical fluxes. Clearly, we have: dS_r[a]/dt = 0, when no entropy dissipation is present. Note that this approach exactly recovers the linear approach of <cit.> when J = Φ, making it a proper generalization. Thus, by changing the state at which the FOM is evaluated and projected to the entropy projected state ũ_r, correct total entropy evolution estimates can be recovered. We added a visualization of the ROM construction with an entropy projection in <ref>. To make relation (<ref>) hold in a fully-discrete setting for Galerkin ROMs, entropy stable time integration is necessary. 
This is not trivial as most methods to satisfy entropy inequalities exactly during time integration require convexity of the entropy for existence results <cit.>. Although this is the case for S_h, S_r is not necessarily convex in the generalized coordinates a unless φ(a) is affine i.e. φ(a) = Φa + u_0 for some constant u_0 ∈ℝ^N_h and Φ∈ℝ^N_h× r. In this work, we focus on semi-discrete entropy stability and leave fully discrete entropy-stable ROMs as a suggestion for future work. In numerical experiments we use sufficiently small time steps to make entropy errors coming from the time integration negligible, for details see <ref>. The entropy conservative hyper-reduction method proposed in <cit.> does not generalize to nonlinear spaces as it relies on precomputation of compositions of linear operators. In the nonlinear case precomputation is not possible due to the changing tangent space. An entropy conservative hyper-reduction method suitable for nonlinear reduced spaces is also a suggestion for future work. §.§ Tangent space enrichment To arrive at the correct entropy estimate (<ref>) we carried out the entropy projection (<ref>). Though the evolution of the entropy then satisfies a correct estimate, it is not clear whether the ROM solution itself remains accurate. Particularly, the difference between the entropy projected state (<ref>) and the original state (<ref>) can be very large. To see this we consider the entropy projection error: ε_s := ||u_r - ũ_r||_Ω_h. Assuming the mapping u from entropy variables to conservative variables is sufficiently smooth, using the mean-value theorem we can bound this term as follows: ||u_r - ũ_r||_Ω_h = ||u_r - u(JJ^†η_r)||_Ω_h = ||u_r - [u_r - ∂u/∂η(θ) (I-JJ^†)η_r]||_Ω_h θ_i ∈ [(u_r)_i, (ũ_r)_i] ∀ i ≤ ||∂u/∂η(θ)||_Ω_h ||(I-JJ^†)η_r||_Ω_h = ||(∂^2 s/∂u^2)^-1(θ)||_Ω_h ||(I-JJ^†)η_r||_Ω_h, where we used the definition of the entropy variables in the last line and used the induced operator Ω_h-norm for the Hessian matrix of the entropy function. It can be seen that there are two contributions to this bound. There is one based on the model, specifically on the Hessian of the entropy, and one given by the projection error of η_r on the tangent space T_u_rℳ. As long as the entropy is a convex function at the mean value θ, the contribution of the model-based term is bounded by a term involving the inverse of the smallest eigenvalue of the entropy Hessian. We have little influence over this term. Specifically, this term can be large when the entropy is close to being non-convex. We do have control over the projection error. The magnitude of this term is controlled by the choice of reduced space. For a general reduced space constructed to contain solution snapshots this term can be very large, since the columns of the Jacobian J can be close to orthogonal to η_r while the standard nonlinear manifold ROM (<ref>) works fine. In the linear case, Chan <cit.> solved this problem by enriching the snapshot data to construct Φ with snapshots of the entropy variables. This lowered the projection error contribution to the bound on ε_s since J=Φ in this case. However, for the general nonlinear case, J is not the same as the reduced space itself and we can no longer construct our reduced space to also contain the entropy variables to keep the projection error low. Instead, we need a different approach and therefore we propose a novel method to which we refer as tangent space enrichment. 
If this is the case, the entropy projected ROM (<ref>) may differ significantly from the original ROM (<ref>) and, consequently, the entropy projected ROM can become inaccurate. Therefore, there is a need to assure that the error associated to the entropy projection remains small. This error is small when the projection error of the entropy variables on the local tangent space is small, i.e. the error norm ||η_r - JJ^†η_r||_Ω_h. In turn, this projection error is small when the tangent space T_u_rℳ at least approximately contains the entropy variable η_r = η(u_r) in the column span of its basis J(a), where a is such that u_r = φ(a). It is here where the problem lies. Namely, for any arbitrary reduced manifold ℳ constructed to (approximately) pass through a set of snapshots there is no say whether this is the case. To assure this is the case we propose our novel method which we refer to as tangent space enrichment. The key idea of tangent space enrichment is to construct an r+1-dimensional manifold ℳ̂⊂ℝ^N_h from the original r-dimensional manifold ℳ by a `lifting' operation. Consequently, we use this new manifold for the ROM instead. This lifting operation is defined so that the original manifold ℳ is a subset of the lifted manifold ℳ̂, i.e. ℳ⊂ℳ̂. Most importantly however, for all points u_r ∈ℳ⊂ℳ̂ the lifting operation is constructed such that η_r ∈ T_u_rℳ̂. This means that the entropy variable projection error is precisely zero at the points contained in the old manifold when projecting η_r on the tangent space of the new manifold. This assures that the entropy projection is accurate for the points u_r ∈ℳ⊂ℳ̂ when using tangent space enrichment. We motivate this approach over a more straightforward generalization of Chan's snapshots enrichment <cit.> method by the following. Nonlinear reduced spaces are often constructed iteratively by minimizing some loss function. A nonlinear version of Chan's enrichment method would require, for a given a, fitting φ(a) to a snapshot u_h whilst the Jacobian J(a) has a low entropy projection error ε_s. The construction of ℳ would therefore require including terms based on J in the loss function. This can be very expensive and generally will not exactly embed the entropy variables in the tangent space at the appropriate points. As will be discussed, our method requires no extra effort in constructing φ and contrary to the straightforward approach exactly enriches the tangent spaces with the correct entropy variables. Our novel tangent space enrichment is defined by the following parameterization for ℳ̂: φ̂(a,α) = φ(a) + η(φ(a)) α, here φ : ℝ^r →ℝ^N_h is the parameterization of the original manifold ℳ, a∈ℝ^r are the r reduced coordinates associated to the original parameterization φ and α∈ℝ is the r+1-th reduced coordinate associated to the lifting operation. As in remark <ref>, here we see another reason for the importance of non-dimensionalization. Namely, on dimensional grounds the expression (<ref>) does not make sense if φ and η are not suitably normalized. The new parameterization can be interpreted as follows. Given any point φ(a) ∈ℳ we generate new points û_r ∈ℳ̂ by lifting the new points from the point φ(a) in the direction of η(φ(a)) by a distance ||φ̂(a,α) - φ(a)|| = ||η(φ(a))|| · |α|. Note that at α = 0 we do not lift the point at all, resulting in a point at φ(a); in other words, the original manifold ℳ is the set φ̂(ℝ^r,α=0). The lifting operation is visualized in <ref>. 
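The following sketch (illustrative only) verifies this property numerically for a placeholder decoder and a placeholder smooth map standing in for the entropy variables: at α = 0 the projection error of η_r onto the tangent space of the lifted manifold vanishes up to round-off.

import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)    # double precision for a clean check

def phi(a):                                  # placeholder decoder
    return jnp.sin(W @ a) + b

def eta(u):                                  # placeholder stand-in for the entropy variables
    return u + 0.1 * u**3

def phi_hat(z):                              # enriched decoder, z = (a, alpha)
    a, alpha = z[:-1], z[-1]
    return phi(a) + alpha * eta(phi(a))

N_h, r = 200, 8
W = jax.random.normal(jax.random.PRNGKey(0), (N_h, r))
b = 1.0 + jax.random.uniform(jax.random.PRNGKey(1), (N_h,))
w = jnp.full(N_h, 1.0 / N_h)                 # diagonal of Omega_h

a = 0.1 * jax.random.normal(jax.random.PRNGKey(2), (r,))
z = jnp.concatenate([a, jnp.zeros(1)])       # alpha = 0: a point on the original manifold
J_hat = jax.jacfwd(phi_hat)(z)               # basis of the enriched tangent space
eta_r = eta(phi(a))

# Omega_h-orthogonal projection of eta_r onto the enriched tangent space:
M = J_hat.T @ (w[:, None] * J_hat)
proj = J_hat @ jnp.linalg.solve(M, J_hat.T @ (w * eta_r))
print(jnp.linalg.norm(eta_r - proj))         # zero up to round-off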
The Jacobian matrix of the new parameterization (<ref>), whose columns span the tangent space T_û_rℳ̂ of the lifted manifold ℳ̂ at the point û_r ∈ℳ̂, is given by: Ĵ(a,α) = [ ∂φ̂/∂a ∂φ̂/∂α ] = [ (I + α∂η/∂u(φ(a)))J(a) η(φ(a)) ], where I ∈ℝ^N_h × N_h is the identity matrix and ∂η/∂u(φ(a)) = ∂^2 s_h/∂u^2(φ(a)) ≽ 0 is a sparse SPSD 2n-1-diagonal matrix containing components of the Hessian of the local entropy value with respect to the solution on each diagonal. Note that the derivative with respect to the r+1-th tangent space enrichment coordinate α is exactly η(φ(a)). Furthermore, on the old manifold ℳ associated to α = 0 the matrix ∂φ̂/∂a is equal the original Jacobian J(a). At the points φ̂(a,α = 0) = φ(a) we have thus exactly enriched the tangent space with the entropy variables η_r = η(φ(a)) at those points. This is the direct result of the lifting operation. This can be seen from the enriched parameterization (<ref>). Lifting a point from φ(a) by changing α while keeping a constant, moves a point in the direction tangent to η(φ(a)). As a consequence η(φ(a)) appears as a tangent vector in the enriched Jacobian (<ref>). Given the tangent space enrichment, the ROM is constructed in a similar fashion as in the previous section, but using the enriched lifted manifold ℳ̂. The r+1-th tangent space enrichment coordinate α is simply treated as an additional reduced coordinate. The ROM thus takes the form: d/dt[ a; α ] + Ĵ^+Δ_v f_h^*(ũ̂_r) = Ĵ^+ Δ_v D_h(ũ̂_r)Δ_i η̂̃̂_r, where: ũ̂_r = u(ĴĴ^†η̂_r), and η̂̃̂_r = ĴĴ^†η(φ̂(a,α)). Since the proof of entropy stability for our ROM is independent of the manifold parameterization, this ROM is still entropy stable. § RATIONAL POLYNOMIAL MANIFOLDS §.§ Background In this article we are interested in systems that can exhibit strong spatial gradients that are moving in time. Many existing data compression methods for reduced manifold construction are not well suited for these types of systems. Linear data compression methods like proper orthogonal decomposition (POD) and general reduced basis methods fail because the moving gradients imply that the data is often of very high rank. This indicates that the data is not well represented in low-dimensional linear subspaces, which is the fundamental assumption of linear approaches. Nonlinear data compression methods offer a potential solution to this problem by instead compressing the highly nonlinear data on nonlinear reduced manifolds. In model reduction different parameterizations have become popular, in particular the decoder part of autoencoder neural networks <cit.> and multivariate quadratic polynomials <cit.>. However, in the vicinity of strong gradients these nonlinear methods can suffer from oscillations <cit.>. These oscillations may be difficult or impossible to remove. This is a problem for Galerkin projection-based ROMs which can be sensitive to errors in the solution compression <cit.>. To accurately assess the performance of our novel entropy stable manifold Galerkin ROM there is thus a need for nonlinear data compression methods that are more capable of dealing with large and moving spatial gradients, particularly without significant oscillations. Recently, neural networks with discontinuous activation functions have been proposed <cit.>, however training these networks can be cumbersome. Furthermore, registration based approaches <cit.> have been very effective, but have not yet been applied in the context of nonlinear manifold ROMs similar to ours. 
In this research we will propose a novel reduced manifold parameterization method based on rational polynomials that is far less oscillatory around strong spatial gradients than the previously mentioned methods (neural networks and quadratic approaches), but is still as interpretable as the recently proposed quadratic manifolds. §.§ Pole-free rational quadratic manifolds We now give a description of rational polynomial manifolds. A rational polynomial manifold is the element-wise ratio of two polynomial manifolds: φ(a) = ∑_i=1^p_numH^i : a^⊗ i + u_ref/∑_i=1^p_denG^i : a^⊗ i + 1, here, H^i, G^i ∈ℝ^N_h × r × ... × r are (i+1)^th-order tensors with the first axis of size N_h and i axes of length r, a^⊗ i is the i-fold outer product such that for example (a^⊗ 3)_ijk = a_i a_j a_k, and H^i : a^⊗ i denotes the contraction of a^⊗ i with the slices along the first axis of H^i, for example (H^3 : a^⊗ 3)_i = ∑_j,k,l=0^r-1(H^3)_ijkl a_j a_k a_l. Furthermore, we have u_ref∈ℝ^N_h and we consider division of two vectors element-wise. The constant vector in the denominator has been set to one without loss of generality. The expression (<ref>) generalizes polynomial manifolds <cit.>, in the sense that those are recovered by setting p_den = 0. This shows that rational polynomial manifolds encapsulate a larger class of functions than polynomial manifolds. By introducing a polynomial in the denominator and allowing it to rapidly and smoothly approach zero for small changes in a we can obtain very fast and smooth increases in the function value of φ without oscillations. This gives us the opportunity to model steep gradients in the snapshot data which may be present in the form of advected shocks. Since higher-order tensors can become quite expensive to deal with, and to compromise between efficiency and expressiveness, we restrict our attention to rational quadratic manifolds by setting p_num = p_den = 2. The i-th component of the vector-valued function output φ_i(a) can then be written as: φ_i(a) = a^TH_i^2 a + H_i^1a + (u_ref)_i/a^TG_i^2 a + G_i^1a + 1, where H_i^2,G_i^2 ∈ℝ^r× r are the i-th slices along the first axes of H^2 and G^2, respectively, and H_i^1, G_i^1 ∈ℝ^1× r are the i-th rows of H^1 and G^1, respectively. Since the matrices are only used in quadratic forms, we can, without loss of generality, assume H_i^2,G_i^2 to be symmetric. An obvious concern with expressions of this form is the occurrence of spurious poles, i.e. unwanted division by zero. We avoid this issue for the case p_num = p_den = 2 and all a∈ℝ^r by constraining the quadratic form in the denominator to be positive semi-definite and setting the linear term to zero: G_i^2 ≽ 0, G_i^1 = 0, ∀ i ∈{0,...,N_h-1}. It is easily seen that, in this case, the denominator is never less than 1. Consequently, spurious poles cannot occur for any real a. Removing the linear term has not resulted in a significant loss in accuracy in our numerical experiments. The full φ is then given as: φ(a) = H^2 : [a⊗a] + H^1 a + u_ref/G : [a⊗a] + 1, G_i ≽ 0 ∀ i; since there is no linear term in the denominator we write G instead of G^2. To construct manifold Galerkin ROMs we will require the Jacobian matrix of this expression, see equations (<ref>) and (<ref>). The Jacobian matrix is given by the following: ∂φ/∂a = 2H^2 ·a + H^1/(G : [a⊗a] + 1) ⊗1 - [H^2 : [a⊗a] + H^1 a + u_ref/(G : [a⊗a] + 1)^2⊗1] ∘ (2G·a), where the operation 2H^2 ·a∈ℝ^N_h × r indicates slice-wise matrix multiplication, i.e. for the i-th row it holds that (2H^2 ·a)_i = 2H_i^2 a; due to symmetry of H_i^2 and G_i the order of axes is not relevant, division of matrices is understood element-wise, and ∘ is the Hadamard matrix multiplication operator.
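As a small illustration (not the actual implementation), the following sketch evaluates the pole-free rational quadratic decoder (<ref>) componentwise; the positive semi-definite G_i are generated as G_i = L_i L_i^T from random placeholder factors L_i, which is also the parameterization used for the fitting procedure described below.

import numpy as np

# Componentwise evaluation of the rational quadratic decoder with a denominator
# that can never drop below 1 (pole-freeness). All coefficients are placeholders.
rng = np.random.default_rng(0)
N_h, r = 50, 6
H2 = rng.normal(size=(N_h, r, r))            # quadratic numerator coefficients H_i^2
H1 = rng.normal(size=(N_h, r))               # linear numerator coefficients H_i^1
u_ref = rng.normal(size=N_h)                 # reference state u_ref
L = rng.normal(size=(N_h, r, r))             # factors of G_i = L_i L_i^T >= 0

def phi(a):
    num = np.einsum('ijk,j,k->i', H2, a, a) + H1 @ a + u_ref
    LTa = np.einsum('ijk,j->ik', L, a)       # L_i^T a for every component i
    den = np.einsum('ik,ik->i', LTa, LTa) + 1.0   # a^T G_i a + 1 >= 1
    return num / den

a = rng.normal(size=r)
LTa = np.einsum('ijk,j->ik', L, a)
assert np.all(np.einsum('ik,ik->i', LTa, LTa) + 1.0 >= 1.0)   # no spurious poles
print(phi(a).shape)                          # (N_h,)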
Generally, it holds that higher degree polynomials have more approximation power. However, for higher degree polynomials (of even order) it is more difficult to assure positiveness and thus pole-freeness. As for the quadratic case, this can be assured at the cost of some approximation power when only considering sparse higher order tensors. For example, for 4-th order one can obtain positive polynomials by only considering (G^4 : a^⊗ 4)_i = (a^2)^T G_i^4 a^2, G_i^4 ≽ 0, where a^2 is the element-wise square of a and G_i^4 ∈ℝ^r× r is a matrix containing values of the i-th sparse slice of G^4. Similarly, 6-th order can be obtained by considering a^2 acting on a third order tensor with positive definite slices. We will not pursue this approach further in this article. §.§ Manifold construction We will determine the coefficient tensors in (<ref>) from data. Like for quadratic manifolds <cit.>, it holds for rational quadratic manifolds that the coefficients in two different slices of the coefficient tensors are independent, as can be seen in (<ref>). Consequently, the coefficients can be determined purely from the parametric and temporal behaviour of the data in the specific cell and solution variable associated to the i-th component φ_i(a) of φ(a). The task of fitting a rational manifold thus reduces to fitting an expression (<ref>) to data for each cell and solution variable. In the spirit of quadratic manifolds we will compress the snapshot data defined by: X = [u_h(t^0), u_h(t^1), ..., u_h(t^n_s - 1)] ∈ℝ^N_h × n_s, where n_s ∈ℕ is the number of snapshots, as the coefficients of their projection on the first r left singular vectors of X given in Φ∈ℝ^N_h × r. Following this we define A := (Φ^T X)^T ∈ℝ^n_s × r. We aim to find the coefficients G_i, H_i^2, H_i^1, (u_ref)_i such that for all j = 0, ..., n_s-1 we have φ_i(a_j) ≈ X_ij, where a_j is the j-th row of A written as a column vector. Defining the i-th row of X as y^i ∈ℝ^n_s, the optimization procedure for the coefficients G_i, H_i^2, H_i^1, (u_ref)_i is formulated as: G_i, H_i^2, H_i^1, (u_ref)_i = argmin_G∈𝕊^r_+, H∈𝕊^r, h∈ℝ^r, u ∈ℝ||y^i - [(AH∘ A)1 + A h + u]/[(AG∘ A)1 + 1]||^2, though the sets 𝕊^r_+ and 𝕊^r are convex subsets of ℝ^r× r, the objective function is non-convex due to the division operation. A popular approach is to linearize this nonlinear optimization problem <cit.> by multiplying with the denominator. This results in a convex semi-definite program which can be solved very efficiently with e.g. interior point methods. However, the optimum value of this linearized problem is generally not the same as that of the nonlinear problem (<ref>). Iterative approaches exist that attempt to refine the solution of the linearized problem and that can potentially be supplemented with our semi-definite constraint <cit.>. However, convergence to (local) minima of the nonlinear problem is generally not guaranteed, nor is convergence in general <cit.>. For this reason, we will stick to the fully nonlinear and expensive optimization problem (<ref>). Nonetheless, we believe that the linearized approach holds significant promise and that it will be crucial for future work in order to scale the approach to larger meshes and datasets.
Finally, we will implement the semi-definite constraint in this fully nonlinear setting by optimizing for the Cholesky decomposition of G_i = L_i L_i^T <cit.>: L_i, H_i^2, H_i^1, (u_ref)_i = argmin_L ∈𝕃^r, H∈𝕊^r, h∈ℝ^r, u ∈ℝ||y^i - [(AH∘ A)1 + A h + u]/[(AL ∘ AL)1 + 1]||^2, where 𝕃^r ⊂ℝ^r× r is the vector subspace of lower triangular r× r matrices. As initial guess we can either use L_i-1, H_i-1^2, H_i-1^1, (u_ref)_i-1 if available and corresponding to the same solution variable, or otherwise simply vectors or tensors consisting of "ones". We will carry out the fitting procedure using the JAXFit package <cit.> for GPU-accelerated nonlinear least squares solutions. § NUMERICAL EXPERIMENTS To show that our entropy stable manifold Galerkin ROMs satisfy appropriate semi-discrete entropy inequalities we will perform numerical experiments on a range of one-dimensional nonlinear conservation laws. We will carry out the experiments using the rational quadratic manifolds proposed in section <ref>. We will also compare the ability of the rational quadratic manifolds to compress convection dominated data to that of linear POD-based methods and quadratic manifolds <cit.>. We do not compare against neural network based approaches <cit.> since in our experience they struggle with approximating discontinuities, and require careful hyperparameter tuning to give reasonable results. The underlying FOM discretizations will be the existing TeCNO schemes of <cit.>; we will only mention some aspects of the discretizations and refer to <cit.> for details. We will start with the inviscid Burgers equation in <ref>, then we will treat the shallow water equations in <ref> and finally we will treat the compressible Euler equations with ideal thermodynamics in <ref>. We test different aspects of the ROM with the different test cases; an overview of the different test purposes has been provided in <ref>. The experiments have been implemented using the JAX library <cit.> in Python, which allows for automatic differentiation to compute Jacobian matrices and, where possible, accelerated computing using an Nvidia A2000 laptop GPU. §.§ Inviscid Burgers equation We will use this experiment mainly to highlight our proposed rational quadratic manifolds when compared to existing (`standard') Galerkin ROMs on different types of manifolds. We will already include the entropy stable ROM (<ref>) here, but the focus on the role of entropy stability will be in the next test cases. The inviscid Burgers equation is given by: ∂ u/∂ t + ∂/∂ x(u^2/2) = 0, with conserved variable u : Ω× [0,T] →ℝ. As continuous and discrete entropy we take <cit.>: 𝒮[u] = 1/2∫_Ω u^2 dx, and S_h[u_h] = 1/2||u_h||_Ω_h^2. The reduced entropy functional follows in a straightforward fashion from the discrete entropy functional. Using these specific entropies we have for the entropy variables: η(u) = u. Consequently, the manifold parameterization with TSE is given in the particularly simple form: φ̂(a,α) = (1 + α) ·φ(a). An entropy conservative flux is given by <cit.>: f_i+1/2 = (u_i+1^2 + u_i+1u_i + u_i^2)/6, and we use a local Lax-Friedrichs-type entropy dissipation operator <cit.>: D_i+1/2(u_h) = max(|u_i+1|,|u_i|). We will discretize (<ref>) on a domain Ω = 𝕋([0,L]) of length L = 1 using a numerical grid consisting of 300 cells. Discretization in time will be done using the classical Runge-Kutta 4 (RK4) method <cit.> with a time step of size Δ t = 0.001. We will integrate in time until T = 1.
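To make the discretization used in this experiment concrete, the following sketch (an illustration, not the actual JAX implementation) assembles the Burgers right-hand side with the entropy-conservative flux and the local Lax-Friedrichs-type dissipation above on the periodic grid, checks that the semi-discrete total entropy production is non-positive, and performs one classical RK4 step.

import numpy as np

# Burgers FOM right-hand side on a periodic grid of 300 cells.
N, L = 300, 1.0
dx = L / N
x = (np.arange(N) + 0.5) * dx
u = np.sin(2 * np.pi * x) + 1.0                      # initial condition

def rhs(u):
    up = np.roll(u, -1)                              # u_{i+1} (periodic)
    f_ec = (up**2 + up * u + u**2) / 6.0             # entropy-conservative flux f*_{i+1/2}
    diss = np.maximum(np.abs(up), np.abs(u)) * (up - u)   # D_{i+1/2} * (eta_{i+1} - eta_i)
    f_num = f_ec - diss                              # f_{i+1/2}
    return -(f_num - np.roll(f_num, 1)) / dx         # -(f_{i+1/2} - f_{i-1/2}) / dx_i

dSdt = np.sum(dx * u * rhs(u))                       # eta_h^T Omega_h du_h/dt
print(dSdt <= 1e-12)                                 # True: entropy is dissipated

def rk4_step(u, dt):
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = rk4_step(u, 1e-3)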
Solution snapshots are captured after every 5 timesteps resulting in n_s = 201. We perform two tests: in the first, we compress the data to a reduced dimension of r = 15 for all manifolds, and in the second we compress the data to a reduced dimension such that the reconstruction errors (to be defined later) are of the same order as the rational quadratic manifold with r = 15. The initial condition is a simple offset sine wave: u_0(x) = sin(2π x) + 1. The KnW decay (<ref>) for this system is very slow as is evident from the normalized singular values depicted in <ref>. Defining the relative information content (RIC) as in <cit.> we have RIC≈ 99.5% for r = 15. We will first compare the reconstruction accuracy of our proposed rational quadratic manifold to existing quadratic manifold <cit.> and POD linear <cit.> manifold approaches for the solution data of the FOM with r = 15. To do this we will save the matrix of generalized coordinates A = Φ^TX ∈ℝ^r× n_s associated to the snapshots in X. Note that these coordinates form the reduced representation for all manifolds since all of the tested manifolds are constructed from the POD compression of the data. In turn we will try to reconstruct the snapshots in X from their reduced representations in A. We will construct the quadratic manifold as in <cit.> with a manually determined regularization coefficient λ = 0.5 (α in (27) of <cit.>) and the rational manifold using the fully nonlinear curve-fitting approach outlined in the previous section. In <ref> we display the reconstruction in addition to the original data using an x-t plot. Furthermore, in <ref> we plot the local error in space–time defined, with some abuse of notation, as: ε_xt = φ(A) - X. It can be seen that for r=15, the reconstruction accuracy of both the quadratic as POD linear manifold is poor, whereas the reconstruction of the rational quadratic manifold is visually nearly identical to the data. Indeed, the largest local error of the rational quadratic manifold is at most approximately 0.003 which is two orders of magnitude lower than the largest errors of the quadratic and linear manifolds (approximately 0.4 and 0.6 respectively). The sources of error for the linear and quadratic manifolds are predominantly oscillations around the moving shock discontinuity as can be seen in <ref>. This shows the poor performance of these methods for such problems. The rational quadratic manifold also oscillates around the shock, but with a much smaller amplitude, indicating that it is better-suited for shock-dominated problems. The accuracy of the rational quadratic manifold is much higher than the quadratic and POD linear manifolds. We note that the increased accuracy comes at the cost of a slow fitting procedure. The precise fitting times and maximum absolute space-time errors, ε_xt^max := max_i,j|ε_xt|_i,j, have been displayed in <ref>. When we construct the POD linear manifold and quadratic manifold to an accuracy of ε_xt^max≈ 3 · 10^-3 we see that we require approximately r = 160 and r = 150, respectively. The reconstructions are given in <ref> and the largest space–time errors ε_xt are 𝒪(4· 10^-3). The changes in fitting time have also been denoted in <ref>. A large increase can be observed for quadratic manifolds, while the linear POD stays constant as the implementation calculates all n_s singular vectors at once. We continue to consider the ROM performance and accuracy in more detail. 
We compare the rational quadratic manifold ROM in entropy stable (<ref>) (ES-ROM) and generic (<ref>) (RQ-ROM) form to a linear manifold POD-Galerkin ROM (L-ROM) and a quadratic manifold Galerkin ROM (Q-ROM). We will make one comparison of the ROMs using the previously obtained manifolds with r = 15 and another comparison where r is chosen such that each manifold has approximately the same space–time reconstruction error ε_xt^max≈ 3 · 10^-3. The initial conditions for the simulations with r = 15 will be taken as the first column of the matrix A = Φ^TX and the entropy stable form of the rational quadratic ROM will have α = 0 at t = 0. For the manifolds that have approximately the same accuracy we will take the first columns of the matrices A_r = Φ_r^T X ∈ℝ^r × n_s with r = 160 and r = 150 for the linear and quadratic manifolds, respectively. Here, Φ_r ∈ℝ^N_h × r are the r first singular vectors of X. We plot the temporal evolution of the total error norm: ε_u(t) := ||u_h(t) - u_r(t)||_Ω_h, and the ideal linear projection error (L-ideal): ε_proj(t) := ||(I - Π_r_lin) u_h(t)||_Ω_h, where Π_r_lin : ℝ^N_h→𝒱 projects on the r_lin-dimensional reduced space of the respective linear ROMs in the different experiments. The ideal projection error forms a lower bound for the linear POD-Galerkin ROM error. To measure computational performance we will track the runtimes of the online phases which we denote t_online. The results for the simulations with constant r are shown in <ref> and the results for approximately constant space–time error are given in <ref>. We also show the spatial profile of the solution at t = T as predicted by the different ROMs and different manifolds in <ref> for constant r and <ref> for constant space–time error, respectively. The rational manifold ROMs clearly outperform the others in the case of constant r and the differences between the results of the rational manifolds themselves are nearly zero. Steep increases in errors occur for all ROMs upon formation of the shock, which indicates that this is indeed a difficult instant of the flow for the reduced manifolds to fit. For the simulation with manifolds with constant error the performance of the ROMs is more equal, with the linear and quadratic ROMs only suffering from some oscillations in the spatial profile. The oscillations occur at moments when the ideal projection error ε_proj is also oscillatory in time. The rational quadratic manifolds do not suffer from oscillations. Because of the large reduced spaces required to obtain approximately the same reconstruction errors as the rational manifolds, the linear and quadratic manifold ROMs have more expensive online phases than when tested at constant r. This is especially notable for quadratic manifolds where computing the Jacobian and its Moore-Penrose pseudoinverse contribute heavily to the increase in cost. At constant r the quadratic manifold ROM and the rational manifold ROMs are nearly equally fast, showing that the Jacobians of the quadratic manifold parameterization and of the rational manifolds with and without enrichment are nearly equally expensive to compute. From this experiment we conclude that at the cost of a slower fitting process rational quadratic manifolds can significantly outperform quadratic and linear POD manifolds in terms of reconstruction accuracy for the same number of reduced space dimensions. §.§ Shallow water equations This experiment will mainly focus on the entropy conservation and stability aspect of our proposed ROM on rational manifolds.
We show that our novel entropy stable ROM satisfies the reduced total entropy estimate (<ref>). To this end we will carry out experiments with the shallow water equations. We use the shallow water equations because their entropy function, which will be defined below, is nontrivial compared to that of the Burgers equation. We will perform one experiment where discontinuities appear in the solution and one where the solution remains smooth during the time interval of interest. In the smooth case we can run the FOM and ROMs without entropy dissipation operators. As a result we can analyse the entropy conservation properties of the ROM. In the discontinuous case we will analyse the behaviour of the reduced entropy as compared to the FOM entropy. In the following we briefly introduce the shallow water equations and the entropy stable numerical scheme used to obtain the FOM. The shallow water equations are given by: ∂/∂ t[ h; hu ] + ∂/∂ x[ hu; hu^2 + 1/2gh^2 ] = 0, with conserved variables h : Ω× [0,T] →ℝ and hu : Ω× [0,T] →ℝ representing the local water column height and the momentum, respectively. We collect the conserved variables in a vector u := [h, hu]^T. The constant g ∈ℝ^+ is the gravitational acceleration; we assume the equations are normalized such that g = 3, which gave challenging test cases for our spatial domain size and initial conditions. As continuous and discrete total entropy we take the common choice <cit.>: 𝒮[u] = ∫_Ω1/2(u_2^2/u_1 + gu_1^2)dx, leading to: S_h[u_h] = ∑_i Δ x_i (h_i u_i^2 + gh_i^2)/2, from which the reduced total entropy follows. This choice leads to the following entropy variables: η(u) = [ g u_1 - 1/2(u_2/u_1)^2; u_2/u_1 ], where u_1 = h and u_2 = hu, and inverse function: u(η) = (2η_1 + η_2^2)/(2g)·[ 1; η_2 ]. An entropy conservative flux is given by <cit.>: f_i+1/2 = [ h_i+1/2u_i+1/2; h_i+1/2·u_i+1/2^2 + 1/2gh^2_i+1/2 ], where a_i+1/2 = 1/2(a_i+1 + a_i) indicates taking the average of neighbouring volume based quantities. As entropy dissipation operator we take the diffusion operators of Roe type (see <cit.>) with the eigenvalues and eigenvectors of the flux Jacobian evaluated at the arithmetic average of neighbouring values. We obtain a second-order accurate entropy dissipation operator using the entropy stable total variation diminishing (TVD) reconstruction based on the minmod limiter (see <cit.>). We will discretize (<ref>) for both experiments on a domain Ω = 𝕋([-L,L]) with L = 1 using a numerical grid consisting of 300 cells. Discretization in time will be done using the RK4 method with a time step of size Δ t = 0.0005. We will integrate the discontinuous experiment in time until T = 1 and the smooth experiment until T = 0.5. Solution snapshots are captured every 5 timesteps resulting in n_s = 401 and n_s = 201 for the discontinuous and smooth experiment, respectively. For the discontinuous case we will be interested in a dam break problem; this means we will take as initial condition: h_0(x) = 1.5 for |x|<0.2, h_0(x) = 1 for |x|≥ 0.2, and (hu)_0(x) = 0. The smooth case will consist of a quiescent water level with a small perturbation, so that the initial condition is given by: h_0(x) = 1 + 0.1 ·exp(-100 · x^2), (hu)_0(x) = 0. The reduced space dimension is taken at r = 15 for both experiments.
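As a small consistency check of the entropy-variable maps above (illustrative only, with placeholder states), the following sketch verifies that u(η(u)) recovers the original state and evaluates the discrete total entropy on this grid.

import numpy as np

# Round-trip check of the shallow water entropy variables and their inverse,
# plus the discrete total entropy S_h = sum_i dx_i (h_i u_i^2 + g h_i^2)/2.
g = 3.0
rng = np.random.default_rng(0)
N = 300
dx = 2.0 / N                             # uniform cells on [-1, 1]
h = 1.0 + 0.5 * rng.random(N)            # positive water heights (placeholder)
q = 0.3 * rng.standard_normal(N)         # momenta hu (placeholder)

def eta(h, q):
    return np.stack([g * h - 0.5 * (q / h) ** 2, q / h])

def u_of_eta(e1, e2):
    h = (2 * e1 + e2**2) / (2 * g)
    return np.stack([h, h * e2])

e1, e2 = eta(h, q)
h2, q2 = u_of_eta(e1, e2)
print(np.allclose(h, h2), np.allclose(q, q2))         # True True

S_h = np.sum(dx * (h * (q / h) ** 2 + g * h**2) / 2)  # discrete total entropy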
Hence, before we analyse the conservation properties of our proposed ROM, we compare the FOM solution approximation quality of our entropy stable ROM and the generic ROM. We will provide space–time plots of both the discontinuous and smooth experiments. The discontinuous case is given in <ref> and the smooth case is given in <ref>. Visually, both ROMs closely resemble the FOM in both cases. Furthermore, it can be seen from the sharp color gradients that the dam break problem develops shocks. To emphasize these are difficult cases for linear model reduction approaches we also plot the normalized singular value decay in <ref> and <ref> for the dam break and water height perturbation problems, respectively. For the dam beak problem the decay is very slow and the water height perturbation decays moderately slow, indicating slow and moderately slow Kolmogorov n-width decay (<ref>). We will analyse the entropy conservation and stability properties of the entropy stable (ES-ROM) and generic (RQ-ROM) rational manifold ROMs. To this end, we define the entropy error: ε_𝒮(t) := |S_h[u_h(t)] - S_r[a(t)] |, giving the absolute instantaneous deviation of the entropy of the ROM from the entropy of the FOM. Similarly we will define the entropy conservation error: ε_𝒮_0(t) := |S_r[a(0)] - S_r[a(t)] |, which measures the departure from the initial entropy and thus the error in exact conservation of the entropy in time. Since our models are semi-discretely entropy stable we have to monitor the instantaneous time rate of change of reduced total entropy (<ref>) to verify that our proposed theoretical framework works. Hence, we will analyse the contribution to the total entropy production (<ref>) of two separate parts of the ROM (<ref>), namely the entropy conserving part: (dS_r/dt)_cons := -η̃_r^TΔ_v f_h^*(u(η̃_r)), which should equal zero to machine precision, and the entropy dissipative part: (dS_r/dt)_diss := η̃_r^TΔ_vD_h(ũ_r)Δ_iη̃_r, which should always be negative or zero. Similar quantities can be defined in an obvious manner for the generic ROM without entropy projection. In the results given by <ref> and <ref> we have used symmetric log plots which are linear around zero so that negative values can also be plotted. This allows us to see when a ROM is unphysically producing entropy i.e. (dS_r/dt)_cons, (dS_r/dt)_diss > 0. The results of the discontinuous dam break experiment are displayed in <ref> and <ref>. The results confirm that the proposed entropy stable framework works as expected. This is the case since the entropy production of the conservative part is zero to machine precision for the entropy stable ROM while the entropy dissipative part does not change sign and is indeed negative. The entropy production of the conservative part of the generic ROM is orders of magnitude larger than that of the entropy stable ROM. An important point is that large portions in time of the entropy production by the entropy conservative part are positive (instead of zero). This indicates physically incorrect behaviour as entropy is being produced. The contribution of the entropy dissipation operator from the generic ROM is erratic. Moreover, it is also occasionally positive showing that this part of the ROM is also sometimes producing physically incorrect results. The temporal evolution of the entropy error, ε_𝒮, is also given in <ref>. The temporal evolution is approximately constant in time for the entropy stable ROM and ε_𝒮 is small. 
This indicates that the evolution of the entropy behaves roughly the same as the FOM and is off mainly due to an error in representation of the initial condition and of subsequent FOM solutions. The general behaviour of the entropy error of the generic ROM is erratic and shows that the entropy of the generic ROM oscillates around the values predicted by the FOM. This can also be seen in the temporal evolution of the reduced entropy as in the top panel of <ref>. The results of the smooth experiment are shown in <ref> and <ref>. As there is no entropy dissipation present in the FOM or ROM the entropy should remain approximately constant (exact conservation is difficult since RK4 is not an entropy conservative time-integrator <cit.>). For our entropy stable ROM this is indeed the case as can be seen from the bottom panel of <ref>, the entropy conservation error, ε_𝒮_0, does not exceed 𝒪(10^-9). The entropy of our entropy stable ROM stays almost exactly constant. The error in entropy with respect to the FOM is almost entirely dictated by the initial error ε_𝒮(0) = |S_h[u_h(t)] - S_r[a(0)]|. The generic ROM does not conserve entropy and its entropy conservation error behaves erratically. This manifests itself in clear deviations from the FOM entropy which can be observed in the top panel of <ref>. For completeness we also plot the entropy production of the conservative part of the spatial discretization of the ROM (dS_r/dt)_cons in <ref>. Again, it can be seen that entropy is conserved up to machine precision by the spatial discretization of our entropy stable ROM where this is not the case for the generic ROM. Additionally, the generic ROM produces entropy during several intervals of the simulation and is therefore not physically correct. From both experiments, we conclude that our novel entropy stable ROM ensures physically correct behaviour, whereas this cannot be assumed for the generic manifold Galerkin ROM. §.§ Compressible Euler equations The focus of this experiment is on the effect of the entropy projection and tangent space enrichment on the accuracy of the ROM. We will be interested in particular in the benefit of tangent space enrichment in the reconstruction accuracy of the entropy projection. In addition, we will analyse the error that can be incurred with respect to the FOM by the introduction of an entropy projection step in the ROM as we propose. A good case to study for this experiment are the compressible Euler equations. Due to their nontrivial entropy functional and corresponding entropy variables it is not expected that without extra measures, like TSE, the entropy projection will be accurate. A short introduction to the compressible Euler equations now follows. The compressible Euler equations are given by: ∂/∂ t[ ρ; ρ u; E ] + ∂/∂ x[ ρ u; ρ u^2 + p; (E + p)u ] = 0, where ρ : Ω× [0,T] →ℝ is the density, ρ u : Ω× [0,T] →ℝ is the momentum and E : Ω× [0,T] →ℝ is the total energy. We gather these conserved variables in a vector u = [ρ, ρ u, E]^T. Furthermore, we assume the equations are suitably normalized so that they are dimensionless. The pressure p : ℝ^n →ℝ is related to the conserved variables through an equation of state, representing the thermodynamics at hand: p(u) = (u_3 - 1/2u_2^2/u_1)(γ - 1), where γ∈ℝ^+ is the specific heat ratio, which we take at the standard choice γ = 1.4. The thermodynamic quantities, i.e. pressure, density and total energy are necessarily nonnegative. 
Throughout the experiments we will assume our FOM and ROMs respect this condition, assuring this condition mathematically may be the subject of future work. The entropy functional of interest will be taken as: 𝒮[u] = ∫_Ω-u_1 σ/γ - 1 dx, where σ : ℝ^n →ℝ is the specific entropy defined as a function of the conserved variables like: σ(u) = ln(p/u_1^γ) , where p is evaluated using (<ref>). Different entropies are also possible, see for instance <cit.>. In turn, this gives rise to the discrete total entropy functional: S_h[u_h] = ∑_i Δ x_i -ρ_i σ_i/γ - 1. The associated entropy variables are given by: η(u) = [ γ - σ/γ - 1 - u_2^2/2u_1 p; u_2/p; -u_1/p ], and consequently the inverse of the entropy variables is: u(η) = exp(γ/1-γ - [η_1 - 1/2η_2^2/η_3] )[ (-η_3)^1/1-γ; -η_2(-η_3)^γ/1-γ; (1/2η_2^2 - η_3/γ - 1)· (-η_3)^2γ - 1/1-γ ]. Considering -η_3 = ρ / p is generally exponentiated to some noninteger power we see the importance of positivity of the thermodynamic variables. As in <cit.>, we will use the entropy conserving numerical flux by Ismail and Roe <cit.> for which we define the following variables: z = [ z^1; z^2; z^3 ] = √(ρ/p)[ 1; u; p ], finally an entropy conservative flux is given by f_i+1/2 = [ F_i+1/2^1 F_i+1/2^2 F_i+1/2^3 ]^T: F_i+1/2^1 = z^2_i+1/2· (z^3)_i+1/2^ln, F_i+1/2^2 = z^3_i+1/2/z^1_i+1/2 + z^2_i+1/2/z^1_i+1/2F_i+1/2^1, F_i+1/2^3 = 1/2z^2_i+1/2/z^1_i+1/2(γ + 1/γ - 1(z^3)_i+1/2^ln/(z^1)_i+1/2^ln + F_i+1/2^2), where a^ln denotes the logarithmic mean, which is defined as: a_i+1/2^ln := a_i+1 - a_i/lna_i+1 - lna_i. Computation of the logarithmic mean is generally not numerically stable when a_i+1≈ a_i, but a popular algorithm which we will use to deal with this is also given in <cit.>. There is an abundance of alternative entropy conservative numerical fluxes that can be used <cit.> some of which also conserve kinetic energy in the sense of Jameson <cit.>. As an entropy dissipation operator we take the Roe-type diffusion operator <cit.> where the eigenvalues and vectors of the flux Jacobian are evaluated at the arithmetic average of the neighbouring conserved variables. As with the shallow water equations, we obtain a second accurate entropy dissipation operator using the entropy stable total variation diminishing (TVD) reconstruction based on the minmod limiter <cit.>. For the experiment we will consider a periodic modification of the famous Sod's shock tube <cit.>, which avoids the need to implement entropy stable boundary conditions. We will discretize (<ref>) on a domain Ω = 𝕋([0,L]) with L = 1 on a numerical grid of 250 cells. The number of cells is relatively small to facilitate a relatively short manifold learning process. Integration of the ROM in time will be carried out using the RK4 time integrator with a time step size Δ t = 0.0001. We will integrate in time until T = 0.5 (beyond the typical time used for this experiment), resulting in interesting shock-rarefaction interactions. Again, we will capture snapshots after every 5 timesteps so that we have n_s = 1001. Our periodic modification of Sod's shock tube experiment has an initial condition given by: ρ_0(x) = 1 0.25 < x < 0.75 0.125 elsewhere u_0(x) = 0, p_0(x) = 1 0.25 < x < 0.75 0.1 elsewhere, where the conserved variables (ρ, ρ u, E) are calculated from these primitive variables using the equation of state (<ref>) and the definition of momentum. We take a reduced space dimension r = 15. 
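To make the flux above concrete, here is a small Python sketch of the numerically stable logarithmic mean and the resulting Ismail-Roe entropy-conservative flux, written directly from the formulas above. The series-expansion switch and its tolerance, as well as the consistency check against the exact Euler flux for identical left and right states, are our own choices and not taken from the paper.

```python
import numpy as np

GAMMA = 1.4  # specific heat ratio used in the experiment

def log_mean(a, b, eps=1e-4):
    """Numerically stable logarithmic mean (a - b) / (ln a - ln b);
    a series expansion is used when a is close to b."""
    zeta = a / b
    f = (zeta - 1.0) / (zeta + 1.0)
    u = f * f
    if u < eps:
        F = 1.0 + u / 3.0 + u**2 / 5.0 + u**3 / 7.0
    else:
        F = np.log(zeta) / (2.0 * f)
    return (a + b) / (2.0 * F)

def z_vars(rho, vel, p):
    """Parameter vector z = sqrt(rho / p) * [1, u, p]."""
    s = np.sqrt(rho / p)
    return s, s * vel, s * p

def ismail_roe_flux(stateL, stateR):
    """Entropy-conservative Ismail-Roe flux from left/right (rho, u, p) states."""
    z1L, z2L, z3L = z_vars(*stateL)
    z1R, z2R, z3R = z_vars(*stateR)
    z1a, z2a, z3a = 0.5 * (z1L + z1R), 0.5 * (z2L + z2R), 0.5 * (z3L + z3R)
    z1ln, z3ln = log_mean(z1L, z1R), log_mean(z3L, z3R)
    F1 = z2a * z3ln
    F2 = z3a / z1a + (z2a / z1a) * F1
    F3 = 0.5 * (z2a / z1a) * ((GAMMA + 1.0) / (GAMMA - 1.0) * z3ln / z1ln + F2)
    return np.array([F1, F2, F3])

if __name__ == "__main__":
    # consistency: for identical states the flux reduces to the exact Euler flux
    rho, u, p = 1.0, 0.75, 1.0
    E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
    exact = np.array([rho * u, rho * u**2 + p, (E + p) * u])
    print(np.allclose(ismail_roe_flux((rho, u, p), (rho, u, p)), exact))  # True
```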
We will be primarily interested in the entropy projection and tangent space enrichment during this experiment, but for completeness we also plot the ROM approximations to the FOM solution and the singular values. The singular values are displayed in <ref> and the ROM approximations are shown using x-t plots in <ref>. A relatively slow decay of singular values can be observed in <ref>, hence linear model reduction methods are likely to not perform well for this problem. The FOM solution is approximated well by our novel entropy stable manifold Galerkin ROM. The solution approximations of the ROM as displayed in <ref> are nearly identical to the FOM. We are interested in the accuracy of the entropy projection with and without tangent space enrichment. Accordingly, we introduce a metric to measure this accuracy. Since we are not only interested in comparing errors, but also to get an idea of the absolute size of the error we specifically introduce the relative entropy projection error: ε_Π(t) := ||u_r(t) - u(Π_Tℳη_r(t))||_Ω_h/||u_r(t)||_Ω_h, measuring not only how far the entropy projection is from the identity mapping as with the entropy projection error ε_s of <ref>, but also the size of the error ε_s relative to the approximated value u_r. We will plot this value for two ROM simulations of the compressible Euler equations with an entropy projection, where one has an enriched tangent space and the other not. The results are provided in <ref>. The entropy projection error with TSE can be seen to be a very small fraction of the norm ||u_r(t)||_Ω_h, indicating minimal impact on the accuracy of the ROM given it is well-conditioned. In contrast, the ROM without TSE instantly produces NaN values and could therefore not be included in <ref>. To obtain a further comparison we plot the spatial profiles of the projected entropy variables at two selected moments t_p ∈ℝ^+ in time, namely t_p ∈{0.1,0.5}. To have a meaningful comparison, i.e. one where we are not projecting NaN values to start with, we calculate the entropy variables from the stable ROM with enriched tangent space. Furthermore, we use the the generalized coordinates a_p = Φ^T X_p to compute the tangent space basis for the ROM without TSE. The results are shown in <ref> and <ref>. Very poor reconstruction of entropy variables can be observed for the ROM without TSE, whereas with TSE the reconstruction is accurate at both moments. From <ref> and <ref> the NaN values in <ref> can be explained by the projection of the entropy variables taking unphysical values (positive η_3). From this we conclude that tangent space enrichment or any other manner of assuring the accuracy of the entropy projection is vital for a properly functioning ROM when using an entropy projection. In applying the tangent space enrichment framework, we rely on the artificial TSE coordinate α staying small (α≪ 1) during simulations. If this is not the case we cannot assure that the reduced space can accurately represent the solution nor that the local tangent space can accurately represent the FOM dynamical system du_h/dt at that point. The reason for this being that the enriched manifold parameterization φ̂ and its Jacobian matrix Ĵ are no longer close to the original parameterization φ and Jacobian J which are accurate by assumption. To verify this is indeed not the case we will monitor the value of α throughout a simulation of the compressible Euler equations. The results are shown in <ref>. 
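For illustration, a minimal sketch of how the relative entropy projection error could be evaluated is given below. The plain Euclidean least-squares projection onto the columns of the tangent-space basis J stands in for the paper's weighted projection, and the arguments eta_of_u, u_of_eta and J are placeholders for the model-specific maps, so this is an assumed setup rather than the actual implementation.

```python
import numpy as np

def entropy_projection(u_r, eta_of_u, u_of_eta, J):
    """u(Pi_T eta(u_r)): map to entropy variables, project them onto the span
    of the tangent-space basis J (plain Euclidean least squares here), and
    map the projected entropy variables back to conserved variables."""
    eta = eta_of_u(u_r)
    coeffs, *_ = np.linalg.lstsq(J, eta, rcond=None)
    return u_of_eta(J @ coeffs)

def relative_entropy_projection_error(u_r, eta_of_u, u_of_eta, J):
    """eps_Pi = ||u_r - u(Pi_T eta(u_r))|| / ||u_r||."""
    u_proj = entropy_projection(u_r, eta_of_u, u_of_eta, J)
    return np.linalg.norm(u_r - u_proj) / np.linalg.norm(u_r)
```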
We have also plotted the error ε_u of the ROM with entropy projection and tangent space enrichment and of a generic manifold Galerkin ROM for reference in <ref>. It can be seen in <ref> that the value of α remains small around 𝒪(10^-5). Consequently, the original manifold parameterization φ and Jacobian J are well-approximated by their enriched counterparts φ̂ and Ĵ. It can be seen in <ref> that this is, in fact, the case, as the errors of the ROM differ by at most 𝒪(10^-5) and evolve very similarly. Hence, we conclude that the entropy projection step, introduced to obtain an entropy stable framework, is not detrimental to the accuracy of the ROM, provided that an enriched tangent space is used. § CONCLUSION In this article we have proposed a method to construct nonlinear manifold Galerkin reduced order models (ROMs) in such a way that the important total entropy functional of the ROM approximation is appropriately conserved or dissipated. This is a crucial concept in obtaining stable and physically admissible ROM solutions. In particular, we have focused on systems of one-dimensional nonlinear conservation laws. Correct semi-discrete entropy estimates upon orthogonal projection were obtained for these systems by evaluating the projected system not at the current state, but at its entropy projection. This was proposed earlier for linear ROMs and extended in this work to nonlinear manifold ROMs. The entropy projection of the state is obtained by transforming conservative variables to entropy variables, consequently projecting these on the tangent space of the reduced manifold and finally transforming back to conserved variables. To assure accuracy, it is important that the entropy projection is as close as possible to the identity mapping. This is generally not the case for general nonlinear reduced spaces and hence we have proposed the method of tangent space enrichment (TSE). With TSE the manifold is lifted along an additional dimension parameterized by a new coordinate. This coordinate direction is constructed to linearly extend in the direction of the local entropy variables, so that the tangent space spans the entropy variable at least approximately given the absolute value of the TSE coordinate. Accordingly, the entropy projection error will stay small. We have tested our proposed framework on several nonlinear conservation laws from fluid dynamics. We verified that the entropy estimates are satisfied (semi-discretely): the projection of entropy-conserving flux differences produces no total entropy and the projection of entropy dissipative terms dissipates total entropy. This is in contrast to the generic manifold Galerkin framework which leads to production of entropy in our numerical experiments, which is physically incorrect. We have also shown that the introduction of the artificial TSE coordinate is vital for the accuracy of the entropy projection and leads to minimal decreases in accuracy. We have also for the first time generalized the recently proposed quadratic manifolds to rational quadratic manifolds. We have suggested a framework to find the coefficients of the rational quadratic manifolds based on a nonlinear curve fitting approach. We have also formulated the rational quadratic polynomials such that they do no not have real poles. This was achieved through semi-definite constraints, avoiding division by zero for any point in the reduced space. 
Numerical experiments on the inviscid Burgers equation have shown the improved performance of these rational quadratic manifold parameterizations compared to existing quadratic manifold parameterizations and linear methods. In future work, two challenges need to be tackled to make the approach computationally efficient: (i) we need a faster way to fit the rational quadratic manifolds and (ii) we need an entropy-stable hyperreduction approach. The former can possibly be achieved through linearization and iterative techniques combined with better choices of generalized coordinates <cit.>, whereas the latter could be achieved by adapting the constrained optimization formulation that we proposed for energy-conserving systems in <cit.>. In addition, the framework would benefit from extension with an entropy-stable time integration method and entropy-stable treatment of boundary conditions. § CREDIT AUTHORSHIP CONTRIBUTION STATEMENT R.B. Klein: conceptualization, methodology, software, validation, formal analysis, investigation, writing - original draft B. Sanderse: writing - review & editing, supervision P. Costa: writing - review & editing, supervision R. Pecnik: writing - review & editing, supervision R.A.W.M. Henkes: writing - review & editing, supervision, project administration, funding acquisition § DECLARATION OF COMPETING INTERESTS The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § DATA AVAILABILITY Data will be made available on request. § ACKNOWLEDGEMENTS R.B. Klein gratefully acknowledges the funding for this project obtained from Delft University of Technology.
http://arxiv.org/abs/2407.12223v1
20240717002535
Conditional Quantile Estimation for Uncertain Watch Time in Short-Video Recommendation
[ "Chengzhi Lin", "Shuchang Liu", "Chuyuan Wang", "Yongqi Liu" ]
cs.LG
[ "cs.LG", "cs.AI" ]
§ ABSTRACT Within the domain of short video recommendation, predicting users' watch time is a critical but challenging task. Prevailing deterministic solutions obtain accurate debiased statistical models, yet they neglect the intrinsic uncertainty inherent in user environments. In our observation, this uncertainty can limit the accuracy of watch-time prediction on our online platform, despite the numerous features and complex network architectures we employ. Consequently, we believe that a better solution is to model the conditional distribution of this uncertain watch time. In this paper, we introduce a novel estimation technique, Conditional Quantile Estimation (CQE), which utilizes quantile regression to capture the nuanced distribution of watch time. The learned distribution accounts for the stochastic nature of users and thereby provides a more accurate and robust estimation. In addition, we design several strategies to enhance the quantile prediction, including conditional expectation, conservative estimation, and dynamic quantile combination. We verify the effectiveness of our method through extensive offline evaluations using public datasets as well as deployment in a real-world video application with over 300 million daily active users. Conditional Quantile Estimation for Uncertain Watch Time in Short-Video Recommendation ====================================================================================== § INTRODUCTION The rapid growth of online video platforms has significantly changed the way users consume digital content, with short videos emerging as an increasingly popular format <cit.>. These platforms rely on recommendation systems to curate personalized content that aligns with users' interests, thereby enhancing user engagement and satisfaction. Different from indicators like click-through rate and ratings that are widely used in conventional recommendation problems (e.g. e-commerce and news recommendation), a critical metric for measuring engagement/interest in video recommendation is the watch-time: a dense and nuanced signal that reflects the amount of time a user spends on a particular video. Additionally, an accurate watch-time prediction would also help augment the ranking models and benefit the video recommendation. Nevertheless, accurately predicting the watch-time presents a formidable challenge due to the complex and dynamic nature of user-video interaction. Generally, one can formulate the watch-time prediction problem as a regression task. Existing solutions learn deterministic predictors <cit.>, which have shown that the watch-time not only depends on the user's interest but is also related to non-interest factors like the full video's duration. Despite their effectiveness, the inherent uncertainty in user interactions is not modeled, which potentially limits their accuracy and robustness.
For example, a user would watch the entire video of interest in normal cases but may skip the video if he/she is caught up by something else outside the online service. As a solution, some work proposes to model the watch-time as a random variable <cit.> by assuming a Gaussian, mixture of Gaussian or Categorical distribution, and include a variance estimation that represents how confident the output watch-time is. This approach turns out to be more robust against user's uncertain behaviors than a deterministic point estimator, and is able to generate more accurate predictions and recommendations. Even though, this strategy inevitably depends on the validity of the pre-defined distribution, which might over-simplify or mismatch the real-world user behavior. In this paper, we aim to achieve a more nuanced approach to watch-time prediction that overcomes the aforementioned limitations, and introduce our Conditional Quantile Estimation (CQE) framework that explicitly models the watch-time distribution in quantiles. Specifically, it consists of a multi-quantile modeling step that outputs the estimation of evenly separated quantile points for the target watch-time distribution, and a quantile combination step that allows different strategies to integrate the output point estimations. By modeling the watch-time distribution with quantiles, the method becomes agnostic to the underlying watch-time distributions while providing sufficient expressiveness of the user behavior uncertainty. During training, we follow standard quantile regression techniques <cit.> known for its robustness and flexibility. After obtaining all quantiles' estimations, the theoretically most accurate predictions could be achieved with the expectation. Yet, this strategy might not always be suitable when users are not tolerant of bad recommendations or the recommender system requires dynamic controls. In practice, we offer conservative estimation and dynamic quantile combination as alternative strategies to accommodate such scenarios. In summary, the contributions of this paper are as follows: * We introduce the Conditional Quantile Estimation (CQE) framework, a novel estimation method that explicitly models the conditional distribution of watch time. * We develop several innovative strategies that extend the CQE model to different recommendation scenarios and user demands. * We conduct the comprehensive offline evaluation of the proposed CQE framework, which demonstrates consistent effectiveness in enhancing video consumption. * We further verify the superior performance of CQE in a real-world industrial video recommendation setting with billion-scale user interactions. In the remainder of the paper, we will first illustrate the background of the watch-time prediction task in video recommendation and quantile regression in section <ref>. Then, we present our solution framework in section <ref> and our experiments in section <ref>. § RELATED WORK §.§ Video Recommendation and Watch-time Prediction Video recommendation systems have evolved to cater to the growing demand for personalized content delivery. With the advent of online video platforms like YouTube and TikTok, the importance of accurate video recommendation has been underscored by the significant impact on user retention and satisfaction <cit.>. In the realm of video recommendation systems, accurately predicting user engagement through watch time is a critical challenge. 
Watch time serves as a key metric for gauging user interest and engagement with recommended videos. The initial study <cit.> centered on enhancing video recommendations for the YouTube platform, introducing the Weighted Logistic Regression (WLR) technique for predicting watch time. This approach has since been recognized as an advanced method in its field. Nevertheless, the WLR's applicability is not straightforward to full-screen video recommendation systems, and it could encounter significant bias problems attributable to its weighted calculation system. D2Q <cit.> mitigates the duration bias through the implementation of backdoor adjustments and modeling the watch-time quantile under different duration groups. D^2Co <cit.> tackles the problem of bias in video recommendation watch times by using a model that corrects for both duration bias and noisy watching, providing a more accurate measure of user interest. DVR <cit.> introduces a novel metric called WTG (Watch Time Gain) and employs adversarial learning to learn unbiased user preferences. Our method can be seamlessly integrated into a wide array of duration debiased methods, thereby significantly improving their predictive accuracy. TPM <cit.> breaks down the task into a series of interconnected classification problems arranged in a tree-like structure. Although TPM considers watch-time variance, it does not capture the full breadth of the watch-time distribution as our method does. The watch-time prediction task also faces the critical problem of duration bias <cit.>. This bias indicates that users prefer to spend more time watching longer-duration videos, skewing the average watch time in favor of longer content. Such a preference for duration over shorter alternatives complicates the task of accurately predicting user engagement. Our methodology integrates seamlessly with most duration debiasing methods, significantly improving their predictive ability. §.§ Quantile Regression Quantile regression is a type of regression analysis widely used in statistics, econometrics, and ecology <cit.>. Unlike traditional mean/linear regression—which focuses on estimating the average outcome, quantile regression seeks to estimate the conditional median and other quantiles of the random variable. This flexible feature provides a more comprehensive understanding of the variable's distributional effects that mean regression may overlook<cit.>. In the context of machine learning, quantile regression has been extended beyond linear models. Representative approaches <cit.> integrate quantile regression into neural networks, offering means to forecast conditional quantiles in non-linear and high-dimensional settings. QRF <cit.> further deploys quantile regression within random forests, further exemplifying its adaptability and the enhancement of predictive capabilities across diverse models. The presented solution in this paper intends to integrate the principles of quantile regression into the domain of video recommendation systems. By adapting this statistical approach to account for the uncertainty and variability in watch time, we propose a novel application that enhances the predictive performance of recommendation systems. This advancement promotes a more nuanced understanding of user engagement, driving toward more personalized and satisfactory user experiences. 
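Before turning to the method, a tiny self-contained example may help make the quantile-regression idea above concrete: minimizing the pinball loss over a constant predictor recovers the empirical tau-quantile of a sample. The synthetic exponential data and the grid search below are purely illustrative assumptions of ours, not the paper's setup.

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Quantile (pinball) loss: tau*(y - pred) if y >= pred, else (1 - tau)*(pred - y)."""
    diff = y - pred
    return np.mean(np.where(diff >= 0, tau * diff, (tau - 1.0) * diff))

# Minimizing the pinball loss over a constant predictor recovers the
# empirical tau-quantile of the data.
rng = np.random.default_rng(0)
y = rng.exponential(scale=10.0, size=5000)   # skewed "watch-time-like" sample
tau = 0.9
grid = np.linspace(y.min(), y.max(), 2000)
losses = [pinball_loss(y, c, tau) for c in grid]
best = grid[int(np.argmin(losses))]
print(best, np.quantile(y, tau))             # the two values nearly coincide
```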
§ METHOD §.§ Problem Formulation The objective of the recommender system is intrinsically a predictive task, where we aim to estimate the degree of a user's engagement with the recommended content. In the video recommendation scenario, this engagement is mainly estimated by the user's watch-time—the time a user spends watching the video. Formally, we represent each user-video pair as (u, v), and assume a pre-defined mapping function that extracts an n-dimensional feature vector 𝐱 = ψ(u, v) ∈ℝ^n. These features encompass contextual information, video characteristics, user profile attributes, and the user's historical interaction data. Considering the user's watch-time as a random variable W, the goal is to estimate the probability distribution of W. §.§ Conditional Quantile Estimation Previous research simplifies the problem by assuming that the conditional watch-time follows a Gaussian, mixed Gaussian or Categorical distribution  <cit.>. Yet, the user's behavior might be far more complicated as we have discussed in <ref>. In contrast, we aim to learn the quantiles of the watch-time without making any assumption about the underlying distribution. Formally, we define a set of N quantiles τ_1,τ_2,…,τ_N∈(0,1) that we are interested in. For example, τ = 0.5 corresponds to the median, while τ = 0.1 and τ = 0.9 correspond to the lower and upper tails of the distribution, respectively. As a special case, when adopting even separation, we have τ_i=i/(N+1). Rather than giving a point estimation of the expected watch-time, we construct a conditional quantile estimation function ϕ(t_τ_1,…,t_τ_N|x) to infer the corresponding watch-time value t_τ_i for each of the quantile τ_i, given the input x. To ensure the values of {t_τ_i}_i=1^N are ordered, we will first have the model output N non-negative elements, and then {t_τ_i}_i=1^N will be the cumulative sum of these elements. As shown in Figure <ref>, these output watch-time values exemplify the watch-time distribution, where output quantile values are closer to each other in areas with higher likelihood. This provides richer information about the user behavior, especially for extreme values like very high or very low watch-times. During training, an ideal solution would sample the watch-time for multiple rounds until sufficient samples are observed for accurate approximation of all N quantiles. However, this is impractical since we cannot require the user to repeatedly watch the same video under identical conditions. To solve this problem, we minimize the pinball loss that penalizes the deviation between the quantile estimates and the actual observed values <cit.>. For an individual τ, the loss function is defined as ℒ_τ(y, t_τ_i) = τ_i(y - t_τ_i) if y ≥ t_τ_i (1 - τ_i)(t_τ_i - y) otherwise where y is the actual watch-time. And for each sample (x,y), we minimize the aggregated quantile regression loss: ℒ_QR = ∑_i=1^N ℒ_τ(y, t_τ_i). Intuitively, the optimization is slower either when the prediction quantile value t_τ_i is close to the ground truth y, or its quantile is far from y. This would help learning more accurate approximations for quantiles closer to y, and avoid misguidance to quantiles on the other side of the distribution. Without specification, we denote our solution framework as Conditional Quantile Estimation (CQE) for the rest of the paper. Through this method, we can construct a broad spectrum of predictions that allows us to reliably infer the distribution of user watch-time. 
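A minimal sketch of the two ingredients just described, the cumulative-sum construction of ordered quantile outputs and the aggregated pinball loss, is given below. The softplus nonlinearity used to keep the raw head outputs non-negative is our own assumption; the paper only requires that the model emits N non-negative elements before the cumulative sum.

```python
import numpy as np

def quantile_levels(N):
    """Evenly separated levels tau_i = i / (N + 1), i = 1..N."""
    return np.arange(1, N + 1) / (N + 1)

def ordered_quantiles(raw):
    """Turn N raw head outputs into ordered watch-time quantile values:
    a softplus keeps them non-negative, a cumulative sum makes them ordered."""
    nonneg = np.log1p(np.exp(raw))   # softplus, an assumed choice
    return np.cumsum(nonneg, axis=-1)

def cqe_loss(t_pred, y, taus):
    """Aggregated pinball loss over all quantile levels for one sample."""
    diff = y - t_pred
    return np.sum(np.where(diff >= 0, taus * diff, (taus - 1.0) * diff))

# toy usage: 10 quantile levels, random raw head outputs, observed watch time y = 5.0
taus = quantile_levels(10)
raw = np.random.default_rng(1).normal(size=10)
t = ordered_quantiles(raw)
print(np.all(np.diff(t) >= 0), cqe_loss(t, y=5.0, taus=taus))
```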
This more complete understanding of the distribution of watch times potentially improves the precision of the estimation and enhance the performance of recommendation systems as a whole. As we will illustrate in the next section, it also provides a solid foundation for the flexible design of personalized recommendation strategies during inference. Though not the focus of this paper, this estimation can extend to adjust for duration bias or capture user interest inherent in the observed watch-time as in existing works <cit.>. §.§ Inference with Expanded Strategies Once we have estimated the quantiles of the distribution t_τ_i,…,τ_N, we can design a suitable quantile combination strategy for the watch-time inference. §.§.§ Conditional Expectation The most intuitive strategy is recovering the mean estimation through the conditional expectation. One challenge of this strategy is that we do not have the output value for τ∈(τ_i,τ_i+1) between any two consecutive quantiles. To circumvent this lack of information, we can use the interpolation method to approximate the conditional distribution. In practice, we adopt a linear interpolation between consecutive quantiles so the expected watch-time between τ_i and τ_i+1 becomes (t_τ_i+t_τ_i+1)/2(N+1). For the two endpoints, we assume t_0=t_τ_1 and t_1=t_τ_N. Then, we can approximate the overall watch-time expectation as: 𝔼[W|𝐱] ≈ 1/2(N+1)[(t_τ_1 + t_τ_1) + (t_τ_1 + t_τ_2) + + (t_τ_2 + t_τ_3)... +(t_τ_N-2 + t_τ_N-1) +(t_τ_N-1 + t_τ_N) + (t_τ_N + t_τ_N) ] = 1/N+1∑_i=1^Nt_τ_i + t_τ_1 +t_τ_N/2(N+1). Theoretically, this expectation gives the most accurate prediction in general and it would achieve the optimal prediction when N→∞. Empirically, we will verify its superiority with our experimental analysis in section <ref>. Yet, we remind readers that this strategy may not be well-suited for scenarios where users are not tolerant of bad recommendations or the recommender system requires dynamic controls. Fortunately, the proposed CQE is a flexible framework that provides sufficient freedom on how a strategy would combine different quantile outputs. §.§.§ Conservative Estimations In environments with implicit uncertainty, especially online platforms where users are sensitive to bad recommendations, conservative estimation stands as a pivotal strategy. For example, a video with high average watch-time may actually being polarized towards 0s and 1s in its watch-time distribution, so the expectation is no longer representative in this case. Adopting a conservative approach in video recommendations means that we should favor a quantile that reveals the underestimated watch-time scenarios, in contrast with an optimistic approach that selects overestimated watch-time. Consequently, the inferred watch-time is likely to be exceeded by the actual user behavior and the recommendation ensures that the lower bounded watch-time is still satisfactory. Formally, for a user-video pair, we focus on the lower quantiles of the expectation Eq.(<ref>) and select one of the quantile τ_low as the inference. In practice, τ_low may be set to reflect our degree of risk aversion. For instance, a τ_low of 0.25 could be used if we wish to ensure that the actual watch-time exceeds the predicted time at least 75% of the time. It is more cautious than using the median and enables a recommendation strategy that systematically avoids user disappointment due to overly optimistic estimates. 
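The two inference strategies described above can be sketched in a few lines of Python. The quantile levels are assumed to be the evenly separated tau_i = i/(N+1), and the uniform-distribution sanity check at the bottom is our own addition.

```python
import numpy as np

def conditional_expectation(t):
    """E[W | x] from N ordered quantile values t_1..t_N via the piecewise-linear
    interpolation formula: sum_i t_i / (N + 1) + (t_1 + t_N) / (2 (N + 1))."""
    t = np.asarray(t, dtype=float)
    N = t.size
    return t.sum() / (N + 1) + (t[0] + t[-1]) / (2 * (N + 1))

def conservative_estimate(t, tau_low=0.25):
    """Pick the quantile value closest to the risk-averse level tau_low,
    assuming levels tau_i = i / (N + 1)."""
    t = np.asarray(t, dtype=float)
    N = t.size
    taus = np.arange(1, N + 1) / (N + 1)
    return t[int(np.argmin(np.abs(taus - tau_low)))]

# sanity check: for the quantiles of a uniform distribution on [0, 1]
# the expectation estimate should be 0.5 and the 0.25-quantile 0.25
N = 99
t_uniform = np.arange(1, N + 1) / (N + 1)
print(conditional_expectation(t_uniform))      # 0.5
print(conservative_estimate(t_uniform, 0.25))  # 0.25
```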
§.§.§ Dynamic Quantile Combination Intuitively, user engagement is a multifaceted behavior influenced by the interplay of various user-specific and content-specific factors, which results in the different propensity towards longer or shorter watch-time. Following this ever-changing nature of user interest and platform content, we also designed a dynamic quantile combination (DQC) strategy that allows the system to control the combination between the higher and lower quantile. Specifically, we use a blending parameter k that is carefully designed to encode the live feedback and activity level of users, as well as the novelty and diversity of the recommended content. Then the combined score s_dyn is computed as a weighted average of lower and upper quantile predictions: s_dyn = t_τ_low· k + t_τ_high· (1 - k). where t_τ_low represent a conservative quantile prediction and t_τ_high corresponds to a more optimistic one. The blending parameter k is thus a tuning knob that focuses the recommendation's strategy between these two bounds. § EXPERIMENTS AND RESULTS We conduct extensive experiments in both offline and online environments to demonstrate the effectiveness of CQE. Three research questions are investigated in the experiments: * How do Conditional Expectation (CDE) of Conditional Quantile Estimation (CQE) compare with the state-of-the-art methods in terms of predicting user interest and watch time with regard to recommendation accuracy? * What is the impact of the number of quantiles on the performance of CQE-CED? * How do the strategies Conditional Expectation(CDE), Conservative Estimation(CSE), and Dynamic Quantile Combination(DQC) perform when implemented on online platforms? §.§ Offline Experiments Our offline experiments were designed in line with two existing research settings. The first focused directly on watch-time prediction as proposed by TPM <cit.>. The second aimed at predicting user interest through duration bias adjustment as D^2Co<cit.>. §.§.§ Watch-Time Prediction. In this problem scope, our primary objective is to predict the watch-time. We employed the same training methodology as referred to in the TPM code. Datasets. Following TPM, we used two public datasets: Kuaishou (collected from Kuaishou App[https://kuairec.com/]) and CIKM16 (from CIKM16 Cup [https://competitions.codalab.org/competitions/11161]) for our experiments. CIKM16 aims to predict the dwell time for each session in online search results. Each item in the session is used as a single feature for input. Kuaishou dataset contains 7,176 users, 10,728 items, and 12,530,806 impressions; and CIKM16 dataset contains 310,302 sessions, and 122,991 items and the average length of each session is 3.981. Metrics. We used two metrics to evaluate the model's performance: Mean Average Error (MAE) and XAUC<cit.>. * MAE: This metric is a typical measurement for evaluating regression accuracy. Denote the prediction as ŷ and the true watch time as y, MAE = 1/N∑_i=1^Nŷ_i - y * XAUC: this metric evaluates if the predictions of two samples are in the same order with their true watch time. Such pairs are uniformly sampled and the percentile of samples that are correctly ordered by predictions is XAUC. Baselines. Four state-of-the-art methods for watch time prediction were selected for comparison, including WLR(Weighted Logistic Regression) <cit.>, D2Q(Duration-Deconfounded Quantile) <cit.>, OR(Ordinal Regression) <cit.> and TPM(Tree-based Progressive regression Model) <cit.>. 
The first three methods are deterministic, while the latter introduces an element of uncertainty, offering both an estimate of the mean and the variance. §.§.§ User Interest Prediction. In this context, the training label is based on the transformation of watch time and duration, whereas the testing label is the user interest indicator defined as long_view. Following D^2Co <cit.>, in detail, it defines the user interest for a given user-video pair (u,v) as: x = 1, if (d ≤ 18s and w = d) or (d > 18s and w > 18s); 0, otherwise; where d is video duration and w is watch-time. We adopted the same training configuration as D^2Co and used the classical deep recommendation model DeepFM <cit.> and the state-of-the-art recommendation model AutoInt <cit.> as our backbone recommendation model. Datasets. Following D^2Co, we leveraged two publicly available real-world datasets: WeChat[https://algo.weixin.qq.com/] and KuaiRand[http://kuairand.com/]. These datasets are sourced from prominent micro-video platforms, namely WeChat Channels and Kuaishou. WeChat dataset contains 20,000 users, 96,418 items, 7,310,108 interactions. This dataset, provided through the WeChat Big Data Challenge 2021, encompasses logs from WeChat Channels spanning a two-week period. KuaiRand dataset is a newly released sequential recommendation dataset collected from KuaiShou. As suggested in <cit.>, we utilized one of the subsets KuaiRand-pure in this study. It contains 26,988 users, 6,598 items, and 1,266,560 interactions. Metrics. GAUC(Group Area Under Curve)<cit.> and nDCG@k (normalized Discounted Cumulative Gain at rank k) <cit.> are utilized as the evaluation metric of recommendation performance. * GAUC: this metric is calculated by weighted averaging the Area Under the ROC Curve (AUC) across different user groups, reflecting the model's ability to rank items accurately. * nDCG@k: this metric measures the gain of a recommendation list based on the relevance of items and their positions up to the kth rank, offering insight into the quality of the top recommended items and their ordering. Baselines. We used the weighted binary cross-entropy loss defined in D^2Co and Mean Squared Loss (MSE) as our baseline. The binary cross-entropy loss is defined as ℒ_CE = -r log [ σ ( f(x) )] -(1-r) log [1 - σ ( f(x) ), where σ is the sigmoid function and r is the user’s interest defined by PCR, WTG or D^2Co. Following D^2Co, in PCR and WTG, we treat all samples with less than 5 seconds of watch time as 0 values after calculating the value of labels. This can help remove the noise in watch time. By default, we set the number of quantiles N to 100. The value of τ_low is selected empirically from 0.2, 0.25, and 0.3. Similarly, the value of τ_high is selected empirically from 0.6, 0.7, and 0.8. §.§.§ Experimental Results We summarize the results as follows: Comparison between CDE-CQE and other methods: We compared the performances of different approaches in the watch-time prediction task, and the results are listed in Table <ref>. Both TPM and CDE-CQE outperform other methods in both MAE and XAUC metrics, thereby highlighting the significance of incorporating uncertainty into the models. Furthermore, our approach demonstrates superior performance on both metrics compared to TPM, thereby emphasizing the advantages of employing quantile modeling techniques. Additionally, the consistent behavior between MAE and XAUC metrics also verifies the feasibility of the watch-time estimation serving as ranking metrics. 
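For reference, a small sketch of the two evaluation metrics used in this comparison is given below. The number of sampled pairs, the random seed, the tie handling, and the synthetic usage example are our own choices; the paper only specifies that pairs are sampled uniformly and that the fraction ordered consistently with the true watch time is reported.

```python
import numpy as np

def mae(pred, target):
    """Mean absolute error between predicted and true watch times."""
    return np.mean(np.abs(pred - target))

def xauc(pred, target, n_pairs=100_000, seed=0):
    """XAUC: fraction of uniformly sampled pairs whose predicted watch times
    are ordered the same way as their true watch times."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(pred), size=n_pairs)
    j = rng.integers(0, len(pred), size=n_pairs)
    keep = target[i] != target[j]            # skip ties in the ground truth
    i, j = i[keep], j[keep]
    concordant = (pred[i] - pred[j]) * (target[i] - target[j]) > 0
    return concordant.mean()

# toy usage on synthetic data: noisy predictions of exponential watch times
rng = np.random.default_rng(1)
y = rng.exponential(10.0, size=10_000)
pred = y + rng.normal(0.0, 2.0, size=10_000)
print(mae(pred, y), xauc(pred, y))
```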
As for the user interest prediction task, we compare different frameworks across backbone models (DeepFM and AutoInt) and various label designs (PCR, WTG, and D^2Co) and present the results in Table <ref>, our proposed CDE-CQE consistently outmatches the alternatives in all cases, indicating the robustness and effectiveness of CDE-CQE. In terms of the optimization framework, CE generally performs better than MSE, indicating the correctness of including the ordinal categorical information as guidance. And CDE-CQE can improve over CE across all the designs of user interest metrics (PCR, WTG, and D^2Co), which means that the proposed framework is generalizable to different label settings. The Effect of Hyper-Parameters in CDE-CQE: To better investigate the characteristics of the proposed CQE framework, we further conduct an ablation study on the number of quantiles N by varying its value from 1 to 500. Theoretically, larger N generates a more accurate approximation of the truth expectation and in turn achieves better recommendation performance in general. This is attributed to the fact that more quantiles yield a distribution that closely mirrors the actual one. In the context of the watch time prediction task, as evidenced by Figure <ref>, the model performance improves with an increase in predicted quantiles. In contrast, for the user interest prediction task, it was observed (shown in <ref>) that model performance was relatively weak when the number of quantiles was under 10. Beyond 10, the results oscillated around 0.663. Interestingly, unlike the watch time prediction task, more quantiles did not necessarily lead to better results. This discrepancy suggests that there is a gap between the training objective and the user interest label as defined in the testing set. In practice, another critical issue that may hinder the employment of large N is the trade-off between the marginal accuracy improvement and the linearly increased computational cost. In general, increasing N would potentially improve the prediction accuracy under the Conditional Expectation strategy, but there may exists several bottlenecks that potentially limit the performance: 1) the design of the target label; 2) the data characteristics; and 3) the computational concerns. §.§ Online Experiments To validate the real-world impact of our Conditional Quantile Estimation (CQE) framework, we conducted extensive online A/B tests on the KuaiShou platform, which serves over 300 million daily active users. These experiments allowed us to assess the practical effectiveness of all three CQE strategies in a live environment with a substantial user base. §.§.§ Experiment Setup Users were randomly assigned to control and experimental groups, with a minimum of 10% of daily user traffic allocated to the experimental group to ensure statistical significance. Each online A/B test ran for over a week, providing ample time for data collection and reliable result analysis. The recommendation system operates in a two-stage process: candidate retrieval followed by ranking. We incorporated the CQE model into the ranking stage to predict watch time, a crucial component of the recommendation process. We evaluated the performance of the recommendation system using four key metrics: * Average Watch Time per User: This core metric directly measures user engagement by quantifying the average time users spend watching recommended videos. 
* Total Play Count: This metric accounts for the cumulative number of video plays across all users, reflecting the frequency of user interactions with the recommended content. * Active Days per User: This metric measures the number of days users engage with the platform, indicating user retention. * Active Users per Day: This metric represents the number of unique users who interact with the platform, reflecting the system's ability to maintain and grow its user base. §.§.§ Experiment Results Table <ref> and Table <ref> summarize the performance of the CQE strategies relative to the baseline across all four metrics: Conditional Expectation (CDE): The CDE methodology demonstrated a statistically significant increase of 0.165% in the "Average Watch Time per User" metric. However, it showed a slight decrease of 0.088% in "Total Play Count". This suggests that CDE effectively increased the depth of user engagement with individual videos, albeit at the cost of slightly reduced breadth of interaction. Conservative Estimation (CSE): CSE yielded a balanced improvement across all metrics. It achieved a modest 0.008% increase in "Average Watch Time per User" while simultaneously boosting "Total Play Count" by 0.346%. Furthermore, CSE led to a 0.033% increase in active days and a 0.031% growth in active users. These results indicate that CSE successfully encourages users to interact with more videos, visit the platform more frequently, and maintain longer-term engagement, albeit with slightly shorter average watch times per video. Dynamic Quantile Combination (DQC): By setting the blending parameter k based on content novelty, DQC achieved improvements across both engagement and interaction metrics. It increased "Average Watch Time per User" by 0.106% and "Total Play Count" by 0.177%. While we don't have specific data for active days and users for DQC, Figure <ref> illustrates that it increased two core diversity metrics over the course of the experiment. These results collectively demonstrate the effectiveness of our CQE framework in improving various aspects of the recommendation system. Each strategy offers unique benefits: * CDE excels at deepening user engagement with individual content pieces. * CSE provides a balanced approach, improving all metrics and particularly excelling at encouraging broader platform interaction and user retention. * DQC offers a middle ground, improving both depth and breadth of engagement while also enhancing content diversity. The choice between these strategies would depend on specific platform goals, such as prioritizing deep engagement, broad interaction, user retention, or content diversity. Moreover, these strategies could potentially be combined or dynamically applied based on user segments or content types to optimize overall system performance. § CONCLUSION In this paper, we introduce CQE (Conditional Quantile Estimation), a novel approach to modeling the conditional distribution of uncertain watch time in short video recommendation systems. By leveraging quantile regression, CQE captures the inherent uncertainty and variability in user behavior, providing a more accurate and robust estimation of user engagement. Furthermore, we introduce innovative strategies for integrating the CQE model into existing recommendation systems, including CDE (Conditional Expectation), CSE (Conservative Estimation), and DQC (Dynamic Quantile Combination). 
These strategies leverage the estimated watch time distribution to enhance user satisfaction and engagement through personalized recommendations. Through extensive offline evaluations on public datasets and a real-world deployment on a platform with over 300 million daily active users, we demonstrated the superior performance of CQE compared to state-of-the-art methods.
http://arxiv.org/abs/2407.12256v1
20240717015906
Enhancing Polygonal Building Segmentation via Oriented Corners
[ "Mohammad Moein Sheikholeslami", "Muhammad Kamran", "Andreas Wichmann", "Gunho Sohn" ]
cs.CV
[ "cs.CV" ]
Enhancing Polygonal Building Segmentation via Oriented Corners =============================================================== § ABSTRACT The growing demand for high-resolution maps across various applications has underscored the necessity of accurately segmenting building vectors from overhead imagery. However, current deep neural networks often produce raster data outputs, leading to the need for extensive post-processing that compromises the fidelity, regularity, and simplicity of building representations. In response, this paper introduces a novel deep convolutional neural network named OriCornerNet, which directly extracts delineated building polygons from input images. Specifically, our approach involves a deep model that predicts building footprint masks, corners, and orientation vectors that indicate directions toward adjacent corners. These predictions are then used to reconstruct an initial polygon, followed by iterative refinement using a graph convolutional network that leverages semantic and geometric features. Our method inherently generates simplified polygons by initializing the refinement process with predicted corners. Also, including geometric information from oriented corners contributes to producing more regular and accurate results. Performance evaluations conducted on the SpaceNet Vegas and CrowdAI-small datasets demonstrate the competitive efficacy of our approach compared to the state of the art in building segmentation from overhead imagery. § INTRODUCTION The integration of high-resolution satellite imagery and unmanned aerial vehicle (UAV) data has had a profound impact on sectors like urban planning and monitoring of the built environment. Despite these advancements, the creation of accurate maps remains a resource-intensive endeavor, particularly in the realm of building extraction. This process traditionally comprises two key phases: firstly, the segmentation of building footprints from imagery, and secondly, the conversion of these footprints into vector formats compatible with geographic information systems (GIS). Challenges such as shadows, tree obstructions, and errors in raster quantization often lead to imperfect masks during segmentation, resulting in suboptimal polygons during the vectorization process (see <Ref>). Within the domain of deep learning methods for polygonal building segmentation, two primary categories have emerged: 1) two-step methodologies and 2) direct polygonal segmentation approaches <cit.>. Two-step methods, akin to conventional techniques, involve initial raster segmentation followed by post-processing steps, including vectorization and potential simplification. They frequently employ auxiliary representations such as frame fields <cit.>, directional indicators <cit.>, or attraction fields <cit.> to aid in the vectorization process. However, these methods are constrained by their reliance on offline post-processing, which can fail under certain conditions. In contrast, direct polygonal segmentation strategies directly predict building polygons from input images, sidestepping the challenges associated with two-step methodologies. Nevertheless, these methods can be intricate to train, computationally demanding, and may encounter difficulties such as irregular predictions or missing corners <cit.>.
This paper presents a novel deep neural network named OriCornerNet, designed to enhance the performance of a baseline direct polygonal segmentation model called R-PolyGCN <cit.>. OriCornerNet achieves this by integrating the detected oriented corners as an auxiliary representation. This innovative approach, similar to direct segmentation methods, eliminates the need for post-processing steps like vectorization. Also, it utilizes the auxiliary representation to guide the generation of building polygons like two-step methodologies. In this context, 'orientation' refers to vectors associated with each corner that point towards adjacent corners. Our method uses auxiliary representation to generate regular polygons, identifying occluded corners and capturing the architectural nuances effectively. § RELATED WORK In this section, we will briefly review polygonal segmentation methods that utilize auxiliary information to guide them through the vectorization process. These methods are mostly two-step approaches. For example, Girard  <cit.> predict a raster segmentation along with frame fields and use the latter to vectorize the predicted raster segmentation. Similarly, Shu  <cit.> predict corners and direction vectors using a deep network and then connect the corners by a boundary tracing algorithm. Another approach adopted by <cit.> formulates the polygons as a combination of vertex and direction maps predicted by deep networks. Then, a polygon generation algorithm is used to generate the final output. Recently, HiSup <cit.> proposed to use the attraction field as a mid-level supervision in mask and corner segmentation tasks, leading to a state-of-the-art performance in polygonal building segmentation. The major disadvantage of the methods mentioned earlier is requiring rule-based post-processing, which is susceptible to failure. Additionally, two-step methods experience significant issues with raster quantization error, as noted in <cit.>. This emphasizes the need for direct methods to effectively use auxiliary information. § METHODOLOGY For our model, we selected R-PolyGCN <cit.> as the baseline, which comprises three main stages: an instance segmentation model, graph initialization, and graph convolutional networks (GCNs). Then, we modified each stage to improve polygonal segmentation performance using oriented corners, as shown in <Ref> and described below. §.§ Instance Segmentation Baseline Following R-PolyGCN <cit.>, we use Mask R-CNN <cit.> as the baseline for semantic mask segmentation. In addition, we add corner and orientation heads. Given an input image I∈ℝ^800×800×3, semantic features S∈ℝ^28×28×256 are extracted for each RoI and inputs to the heads, then the corner head, fed by the S, predicts a heatmap H ∈ℝ^28×28 and xy offset maps Δ∈ℝ^28×28×2. To be concise, we only bring the losses for corner and orientation heads: L_heat. = -1/N∑_i=1^N (w.y_i.log x_i + (1-y_i) log(1-x_i)) and L_offset = 1/N∑_i=1^N L_i, L_i = 0.5 (x_i - y_i)^2, if |x_i - y_i| < 1 |x_i - y_i| - 0.5, otherwise where x_i, y_i, w_p, and N are, respectively, prediction, target, weight of the positive class, and number of pixels in each task. For orientation representation on each edge pixel, we use two unit vectors O_cw=(cosθ_cw, sinθ_cw) and O_ccw=(cosθ_ccw, sinθ_ccw) pointing to the next corner in the polygon in respectively clockwise and counter-clockwise directions. Thus, four channels are regressed and supervised on ground truth edge pixels using the below loss: L_orient. 
= 1/N_e∑_i=1^N_e L_i, L_i = 0.5 (x_i - y_i)^2, if |x_i - y_i| < 1 |x_i - y_i| - 0.5, otherwise where x_i, y_i, and N_e are prediction, target, and the number of edge pixels. §.§ Initialization Module Many methods in literature <cit.> use a fixed number of predefined corners, e.g., located on the perimeter of a circle, for creating the initial graph. However, we use the predicted corners and connect them based on their distances from the mask contour. Also, we propose to combine semantic corners (mask's contour points) with the detected ones to enhance the graph initialization accuracy. In this sense, predicted corners further than δ_cor2cont from the contour are removed. Moreover, the semantic corners further than δ_sem2graph from the graph constructed using only detected corners are added to the corners. Lastly, predicted orientations and corner heatmaps, i.e. geometric features G ∈ℝ^28×28×5, are concatenated with RoI semantic features S to construct F=concat{S, G}∈ℝ^28×28×261. §.§ GCNs This part includes three GCN modules as described in <cit.>. Getting the initial graph, GCN modules repeatedly sample the feature vectors for each corner from F and predict their position offsets. As the number of corners in prediction and target may vary, a bi-projection <cit.> loss is used: L_poly = 1/N (∑_(p,p̅) ∈ S_ic‖ p - p̅‖_2 + ∑_(q,q̅) ∈ S_ac‖ q - q̅‖_2) where S_ic represents the set of initially matched corners based on their distance, and S_ac denotes the set of additional corners matched to their projection on the nearest edge. Additionally, N stands for the number of corners in the polygon with a higher count. We also incorporate two regularization losses: 1) orientation consistency loss and 2) orthogonality loss. Orientation consistency loss penalizes the error between predicted orientation vectors in each edge: L_ori-cons(P) = 1/n∑_i=1^n (1+⟨ O_ccw^i, O_cw^i+1⟩) where n is the number of vertices in polygon P and ⟨ ., .⟩ is the inner product of the vectors. The idea is that the orientation vectors of two consecutive corners should point to each other. On the other hand, orthogonality loss encourages all the inner angles to be orthogonal, as is frequently seen in buildings: L_ortho.(P) = 1/n∑_i=1^nmin_ω_peak |ω_i - ω_peak | where n is the number of inner angles in polygon P, ω is the inner angle and ω_peak∈{0^∘, 90^∘, 180^∘, 270^∘}. The final loss is a weighted sum of all the losses. § EXPERIMENTS §.§ Datasets and Settings The proposed network's performance is evaluated using the SpaceNet Vegas dataset <cit.> to demonstrate its superiority over R-PolyGCN, its baseline, and several competing methods. The dataset contains 3,851 images, with a size of 650×650 pixels and a pixel size of 0.3 meter. The dataset is randomly divided into training, validation, and test sets. We trained the model using a single NVIDIA GeForce RTX 3090 GPU with a batch size of 1 for 50 epochs. Additionally, the CrowdAI-small dataset <cit.> is utilized for both ablation studies and comparison with state-of-the-art methods. This dataset consists of 10,186 images sized at 300×300 pixels, with a pixel size of 0.3 meter. Training on the CrowdAI-small dataset is performed for 40 epochs on a single NVIDIA GeForce RTX 6000 GPU with a batch size of 1. Images from both datasets are resized to 800×800 pixels for input to the network. Also, δ_cor2cont=5 and δ_sem2graph=5 pixels for both datasets. 
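To make the two geometric regularization terms defined in the previous section concrete, a minimal PyTorch-style sketch is given below. The tensor shapes, the use of degrees for the inner angles, and the handling of the wrap-around at 360° are our assumptions for illustration rather than details fixed by the text, so this should be read as a sketch and not as the authors' implementation; both terms would enter the final objective as weighted summands alongside the bi-projection loss.

import torch

def orientation_consistency_loss(o_ccw, o_cw):
    # L_ori-cons: the counter-clockwise vector at corner i should point opposite
    # to the clockwise vector at corner i+1, so their inner product should be -1.
    # o_ccw, o_cw: (n, 2) unit vectors attached to the n ordered polygon corners.
    o_cw_next = torch.roll(o_cw, shifts=-1, dims=0)          # O_cw^{i+1}
    inner = (o_ccw * o_cw_next).sum(dim=-1)                   # <O_ccw^i, O_cw^{i+1}>
    return (1.0 + inner).mean()

def orthogonality_loss(vertices):
    # L_ortho: push every inner angle towards {0, 90, 180, 270} degrees.
    # vertices: (n, 2) polygon corners in order.
    to_prev = torch.roll(vertices, shifts=1, dims=0) - vertices
    to_next = torch.roll(vertices, shifts=-1, dims=0) - vertices
    ang = torch.rad2deg(torch.atan2(to_prev[:, 1], to_prev[:, 0])
                        - torch.atan2(to_next[:, 1], to_next[:, 0])) % 360.0
    peaks = torch.tensor([0.0, 90.0, 180.0, 270.0, 360.0])    # 360 handles wrap-around to 0
    return torch.abs(ang.unsqueeze(-1) - peaks).min(dim=-1).values.mean()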
§.§ Evaluation Metrics Two groups of metrics have been utilized in the performance assessment of the proposed method: 1) Raster-based MS-COCO metrics <cit.>, such as mean average precision (AP) and recall (AR) over different IoU thresholds, and 2) Vector-based metrics, including PoLiS <cit.> and C-IoU <cit.>. PoLiS calculates the average distance between the matched vertices in polygons P and Q: PoLiS(P,Q)=1/2N_P∑_p_j∈ Pmin_q ∈ Q‖ p_j - q ‖ + 1/2N_Q∑_q_k∈ Qmin_p ∈ P‖ q_k - b ‖, where N_P and N_Q are, respectively, the number of vertices in the predicted and ground truth polygons. Furthermore, C-IoU <cit.> measures the polygon complexity as well as its IoU: CIoU(P,Q)=IoU(P,Q).(1-D(N_P,N_Q)), where D(N_P,N_Q)=|N_P-N_Q|/(N_P+N_Q) is the relative difference between the number of vertices in the polygons. §.§ Preliminary Results and Discussion To assess the effectiveness of utilizing oriented corners, we conducted a comparison between the proposed method and its baseline, R-PolyGCN <cit.>. Raster-based metrics, specifically AP and AR, exhibited a significant improvement of 16% and 14%, respectively, as shown in <Ref>. Furthermore, vector-based metrics indicated that our method achieved more precise corners without compromising polygon complexity, highlighting the effectiveness of corner-based initialization and the multi-task learning approach for orientation (see <Ref>). Also, the proposed method is compared with two two-step (Mask R-CNN <cit.> and FrameField <cit.>) and two direct (PolyMapper <cit.> and BiSVP <cit.>) polygonal segmentation methods. OriCornerNet proved to outperform them by 4% in AP and by 11% in terms of AR. An ablation study was conducted on the CrowdAI-small dataset to quantitatively evaluate the contribution of each component in the proposed method. According to <Ref>, the study revealed that combining semantic corners with the detected corners could significantly improve performance in terms of both raster-based and vector-based metrics. Additionally, the orientation consistency loss increased the AP by 0.5% and AR by 0.2%. Ultimately, the full method, which includes the above-mentioned components plus the orthogonality loss, delivered the best performance in terms of all metrics. Further, compared to HiSup, the state-of-the-art method, our method demonstrated competitive results in terms of AP and considerably better results in terms of AR and vector-based metrics. <Ref> provides qualitative results, demonstrating the proposed method's more robust performance in predicting regular and simpler polygons. § CONCLUSION In this work, we have proposed a new method for generating building polygons using oriented corners as an auxiliary representation in an end-to-end direct polygonal segmentation network. The network utilizes both semantic and geometric features to reconstruct the building polygons. It also refines them via GCNs by applying some geometric regularization terms. The experimental results on two datasets have demonstrated the effectiveness of our proposed method in extracting delineated polygons. In the future, we plan to focus on handling buildings with holes and improving the simplification using orientation information. ieeenat_fullname
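For reference, the two vector-based metrics used in the evaluation above can be sketched as follows. We use shapely for the polygon geometry and take the PoLiS minimum over the other polygon's boundary, which is the common reading of the definition; whether the minimum is instead restricted to vertices is not spelled out in the text, so this is an assumption of the sketch.

import numpy as np
from shapely.geometry import Point, Polygon

def polis(P, Q):
    # Symmetric average vertex-to-boundary distance between polygons P and Q,
    # each given as a sequence of (x, y) corners.
    P_poly, Q_poly = Polygon(P), Polygon(Q)
    d_pq = np.mean([Q_poly.exterior.distance(Point(p)) for p in P])
    d_qp = np.mean([P_poly.exterior.distance(Point(q)) for q in Q])
    return 0.5 * d_pq + 0.5 * d_qp

def ciou(P, Q):
    # IoU weighted by the relative difference in vertex count, penalizing
    # over- and under-simplified predictions.
    P_poly, Q_poly = Polygon(P), Polygon(Q)
    union = P_poly.union(Q_poly).area
    iou = P_poly.intersection(Q_poly).area / union if union > 0 else 0.0
    d = abs(len(P) - len(Q)) / (len(P) + len(Q))
    return iou * (1.0 - d)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
noisy = [(0, 0), (5, 0.4), (10, 0), (10, 10), (0, 10)]   # one redundant, offset corner
print(polis(square, noisy), ciou(square, noisy))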
http://arxiv.org/abs/2407.13267v1
20240718081721
A Partially Pooled NSUM Model: Detailed estimation of CSEM trafficking prevalence in Philippine municipalities
[ "Albert Nyarko-Agyei", "Scott Moser", "Rowland G Seymour", "Ben Brewster", "Sabrina Li", "Esther Weir", "Todd Landman", "Emily Wyman", "Christine Belle Torres", "Imogen Fell", "Doreen Boyd" ]
stat.AP
[ "stat.AP" ]
http://arxiv.org/abs/2407.12091v1
20240716180008
Time-convolutionless cosmological master equations: Late-time resummations and decoherence for non-local kernels
[ "Suddhasattwa Brahma", "Jaime Calderón-Figueroa", "Xiancong Luo" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc" ]
Time-convolutionless cosmological master equations: Late-time resummations and decoherence for non-local kernels Suddhasattwa Brahma^1[e-mail address: suddhasattwa.brahma@gmail.com], Jaime Calderón-Figueroa^2[e-mail address: jrc43@sussex.ac.uk] and Xiancong Luo^1[e-mail address: edlxc.35@gmail.com] ^1Higgs Centre for Theoretical Physics, School of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3FD, UK ^2Astronomy Centre, University of Sussex, Falmer, Brighton, BN1 9QH, UK § ABSTRACT We revisit a simple toy model of two scalar fields in de Sitter space, playing the roles of “system” and “environment” degrees of freedom, which interact with each other. We show that there are secular divergences in physically relevant observables which arise solely from the non-Markovian part of the memory kernel, contrary to popular belief that secular growth typically comes from local terms in evolution equations. Nevertheless, we show that these terms can still be non-perturbatively resummed, using the time-convolutionless master equation formalism, which improves upon previous approximations. At the same time, there are other physical quantities in the same model that are dominated by local terms in the memory kernel. Therefore, we conclude that, for cosmological backgrounds, either the dissipation or the noise kernel can end up being dominated by non-local terms depending on the nature of the system-environment coupling. § INTRODUCTION Inflation – the most widely accepted paradigm for the early-universe – predicts that the large scale structure of the cosmos has its origins in the quantum vacuum fluctuations of the scalar field responsible for the background accelerated expansion <cit.>. However, a deeper dive into this formalism shows that one needs to break up the same inflaton field into the homogeneous background, sourcing the evolution of the universe, and its quantum fluctuations, responsible for structure formation <cit.>. A natural question that follows is what is the backreaction <cit.> of the sub-horizon quantum fluctuations on the effective field theory (EFT) of the `classicalized' super-Hubble modes <cit.>? Another way to phrase this problem is as follows: The equal-time in-in correlations of a massless scalar field in quasi-de Sitter (dS) space give us the spectra of the adiabatic perturbations which can be verified in the temperature fluctuations in the CMB <cit.>. Nevertheless, issues of infrared (IR) singularities and late-time secular divergences <cit.> are know to afflict such computations when considering loop corrections <cit.>. These questions are typically tackled using Starobinsky's stochastic formulation of inflation<cit.>. The general idea behind it is that gradient terms are expensive once the wavelength of a perturbation becomes super-Hubble, and thus its evolves essentially independently outside the horizon. This allows one to write down a Fokker-Planck equation for the probability distribution of the scalar field, coarse-grained over each Hubble volume. Its evolution is a balance between the classical drift due to the quasi-dS scalar potential and `quantum kicks', modeled by a Gaussian white noise, due to the sub-horizon fluctuations. Since it offers a gradient expansion for the super-Hubble density perturbation, instead of the one in terms of its amplitude, it can make predictions regarding the tail of its probability distribution where linear perturbation theory is bound to fail. 
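As a schematic illustration of this Langevin picture, one may evolve the coarse-grained field with an Euler-Maruyama step for a toy quadratic potential; the potential, parameter values and units below are illustrative assumptions and not the model studied in this paper.

import numpy as np

rng = np.random.default_rng(0)
H, m = 1.0, 0.2                            # Hubble rate and toy mass, arbitrary units
dN, n_steps, n_traj = 0.05, 8000, 10000    # 400 e-folds, 10^4 realizations

phi = np.full(n_traj, 2.0)                 # common initial coarse-grained field value
for _ in range(n_steps):
    drift = -(m**2 * phi) / (3 * H**2)                          # classical drift
    kick = (H / (2 * np.pi)) * np.sqrt(dN) * rng.standard_normal(n_traj)  # quantum kicks
    phi += drift * dN + kick                                    # Euler-Maruyama update per e-fold

print("mean =", phi.mean(), "  variance =", phi.var())
# The variance relaxes towards the equilibrium value 3 H^4 / (8 pi^2 m^2) of the
# corresponding Fokker-Planck equation, and the histogram of phi gives direct
# access to the tail of the distribution, beyond the reach of linear theory.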
And in this way, it can make nontrivial predictions for, say, the production of primordial black-holes during inflation (see, for instance, <cit.>). Stochastic inflation is supposed to also be precisely the non-perturbative formalism designed to cure the IR issues of massless fields during inflation. Since it is expected to provide the leading order effect in the resummation of late-time IR divergences in cosmology[For partial results in this direction, see <cit.>.], its assumptions need to be carefully examined. For instance, one of the primary approximations for stochastic inflation is made in the computation of the quantum noise due to the short wavelength fluctuations. Generally, it is typically assumed to be white <cit.>, although `colored' corrections are expected to come from non-Gaussian contributions <cit.>. While it has been argued that both the choice of the `sharp' window function <cit.> (to demarcate the short and long wavelength scales) as well as the `test scalar' approximation <cit.> can invalidate it, nevertheless this `Markovian' assumption[At the very outset, we want to emphasize that we do not use Markovian and time-locality interchangeably in this draft, as is sometimes erroneously done in the cosmology literature. Also, we loosely say that a white noise implies a Markovian system in the classical sense even though we will define what we mean by Markovianity in a quantum setup as having a semi-group property later on <cit.>.] of having a white noise for the integrated-out quantum degrees of freedom (dofs) remains a ubiquitous one in cosmology (e.g., <cit.>) with a few notable exceptions <cit.>. What is even more, this assumption in stochastic inflation has often been used as an argument for inferring about the origin of secular terms that are to be expected when computing loop corrections during inflation. There is a rich literature on computing one-graviton loop corrections to, say, a conformally coupled scalar field propagator in dS space <cit.> in which it has been explicitly shown that the secular terms are the ones which come from time-local terms in the evolution equation whereas the non-local terms decay and do not contribute at late-times. Similar results have also been obtained for the one-loop graviton correction to the vacuum polarization for dynamical photons in a dS background <cit.>. Such secular enhancement is generally found to independent of the nonlocal terms in a variety of models <cit.>, and is hence free of any memory effects. These results lend support to the general expectation that stochastic inflation captures all the leading IR secular terms via a local equation, and thus the Markovian approximation is a sufficient one for cosmology. In this paper, we will revisit an old model <cit.> to show that indeed secular divergences can just as easily come from the non-local part of the memory kernel – an object that captures correlations of the integrated-out fields. More concretely, we will show explicitly that secularly divergent terms come from non-local terms in the (dissipation) part of the kernel and, consequently, that it is not necessary that late-time behaviour is always dominated by local terms. More importantly, we shall demonstrate that such secular divergences, arising from non-Markovian terms, can nevertheless be exactly resummed, without invoking any additional, ad hoc approximations, using an appropriate master equation formalism. 
In our example, we shall give the physical interpretation of different co-efficients of the master equation, corresponding to different parts of the kernel, which will be dominated by local and non-local terms at late-times. The main purpose of our work is thus to show that: * Secular divergences can just as easily stem from non-Markovian terms and, more importantly, such terms can still be resummed at late times following a precise algorithm that does not involve any arbitrary approximations. * The memory kernel, corresponding to the integrated-out (or coarse-grained) fields, in the same model can affect different physical quantities differently. More specifically, local and non-local parts of the kernel can become dominant for different physical observables. In the next section, we briefly discuss master equations in cosmology before setting up the one relevant for us in Sec-3. We discuss physical interpretations of coefficients of the master equations as they pertain to dissipative corrections to cosmological observables and diffusion terms leading to decoherence of the quantum fields. In Sec-4, we compute corrections to the power spectrum and the purity of the (system) density matrix and show how they are dominated by the non-local and local parts of the kernel, respectively. Finally, we conclude by summarizing our findings and outlining their applications to future work. § OPEN QUANTUM SYSTEMS IN COSMOLOGY To achieve the above objectives, we shall adopt an open EFT approach <cit.> to the problem, an idea that has been gaining considerable importance over the last decade in cosmology <cit.>. The main realization here is that open quantum systems are particularly suited to spacetimes with horizons, such as the one for accelerated expansion. There is a sector of the Hilbert space that is hidden from the observer, thus being dubbed “environment” (), which can still exchange energy and information with the “system” () sub-sector. The usefulness of using an open quantum system is that it is able to coarse-grain the effects of the high energy or sub-Hubble dofs on the dynamics of the long-wavelength modes when there is no exclusion principle (such as the conservation of energy) available. Hence, the master equation aims to capture the evolution of the system dofs by integrating (or tracing) out the environment modes. One of the main advantages of this formalism is that it can capture reliable late-time dynamics of the system by overcoming the secular growth associated with time-dependent backgrounds in cosmology. The underlying reason for secular growth at late times is that gravitational interactions are universal, and once turned on, it is not possible to isolate the system from the ever-present medium, namely, gravity. This implies that even in the weakly-interacting regime, for long time-intervals small effects can accumulate to lead to the failure of perturbation theory. Resummation of such late-time divergences typically implies deriving the master equation at some order in perturbations theory, say λ^2, and then treating it as the bona fide dynamical evolution equation that needs to be solved non-perturbatively without performing any expansion <cit.>. This means that the master equation would contain all term at 𝒪(λ^2) and some terms at 𝒪(λ^n>2). The details of much of this discussion can be found in, say <cit.>, and we only very briefly touch upon the time-convolutionless master equation formalism that we will employ for our model. 
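Before setting up the master equation, the logic of such a resummation can be illustrated with a deliberately simple, time-local toy equation (unrelated to the model studied below) whose O(λ^2) coefficient grows with the number of e-folds; the equation and parameter values are illustrative assumptions.

import numpy as np

# Toy equation dP/dN = -lambda^2 * N * P.  Treating it as the bona fide generator
# and solving it exactly resums the would-be secular terms,
#   P(N) = P0 * exp(-lambda^2 N^2 / 2),
# whereas truncating the solution strictly at O(lambda^2) gives
#   P0 * (1 - lambda^2 N^2 / 2),
# which loses all validity once lambda^2 N^2 ~ 1.

lam2, P0 = 1e-3, 1.0
for N in (5, 20, 50, 80):
    resummed = P0 * np.exp(-0.5 * lam2 * N**2)
    truncated = P0 * (1.0 - 0.5 * lam2 * N**2)
    print(f"N = {N:3d}   resummed = {resummed:+.4f}   O(lambda^2) truncation = {truncated:+.4f}")

The cosmological case below follows the same pattern, except that the role of the toy coefficient is played by the TCL_2 coefficients built from the memory kernel.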
The density matrix, corresponding to the entire system ( + ), undergoes unitary evolution governed by the Liouville-von Neumann equation: /τρ_I (τ) = -i [H_I (τ),ρ_I (τ)] , where the interaction Hamiltonian has operators corresponding to both and . For the sub-sector, this equation can be put in an exact, closed form which determines the evolution of the reduced density matrix corresponding to , known as the Nakajima-Zwanzig master equation. However, it is an integro-differential equation that has exactly the same level of difficulty as the Liouville-von Neumann equation, and thus needs to be solved using some approximation scheme. The first of these is the so-called Born approximation, which assumes a weak coupling between and and expands in this interaction parameter λ. One typically also assumes that the initial density matrix is in a factored state: ρ(τ_0) = ρ^(τ_0) ⊗ρ^(τ_0). Although this is not an essential assumption, it is a natural one for cosmology since we will start off both the system and environment modes in the Bunch-Davies vacuum state. At 𝒪(λ^2), this results in a master equation of the form: ∂_t ρ^ (τ) = - λ^2 ∫_τ_0^ττ' _[ H_I (τ), [ H_I (τ'), ρ^ (τ') ⊗ρ^] ] . Since the RHS of the above equation has ρ^ depending on τ', rather than on τ alone, this equation is not amenable to late-time resummations. This is because treating the master equation as the non-perturbative generator of the dynamics, even though it is derived at 𝒪(λ^2), works only if it does not depend on some specific value of time. The next popular approximation is the Markovian one where the above master equation reduces to the standard Lindblad form. However, in this work, we shall not invoke this restrictive limit which basically assumes that the master equation generates dynamics that fulfills a semi-group property, i.e., ρ^(τ') = ℒ_τ→τ' ρ^(τ) where ℒ_τ→τ' = ℒ_τ→τ” ℒ_τ”→τ'. Instead, we will work with the time-convolutionless approximation, based on a cumulant expansion (see <cit.> for details). In this case, one can rewrite the master equation, at leading order (hereafter the TCL_2 master equation), as ∂_t ρ^ (τ) = - λ^2 ∫_τ_0^ττ' _[ H_I (τ), [ H_I (τ'), ρ^ (τ) ⊗ρ^] ] . Thus, at this order, the only difference is that in the TCL_2 master equation ρ^ is evaluated at τ instead of τ'. It can be shown that the terms dropped in this equation, as compared to (<ref>), are at 𝒪(λ^4) or higher. More importantly, the TCL formalism allows one to capture non-Markovian effects even though we end up working with a time-local equation. And this is exactly one of the technical issues that we wish to clarify in this work, and correct some associated errors in the literature on this topic. Note that the reader might think that the substitution of ρ^(τ') →ρ^(τ) in the above master equation is nothing but an application of the Markovian approximation to the system. Indeed, this line of thinking has been mistakenly employed before <cit.>. One of the goals of this work is to underscore that despite the (time-)local nature of the TCL_2 equation, it remains capable of adequately describing a non-Markovian system. However, a subtlety arises: a residual artifact of the “memory” of the initial state remains in the lower limit of the integral on the RHS. In other words, when we compute the coefficients of the TCL_2 master equation, in terms of expressions quantifying diffusion and dissipation, we shall have to take particular care to drop such terms with explicit dependence on the initial time τ_0.
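The distinction between time-locality and Markovianity can be made explicit with another toy example (again, not this paper's kernel, and with illustrative parameter choices): a dephasing coefficient built by integrating an oscillatory memory kernel from the initial time up to the present one.

import numpy as np
from scipy.integrate import quad

# gamma(t) = lambda^2 * \int_{t0}^{t} K(t - s) ds  with  K(u) = cos(omega u) e^{-u/tc}.
# The resulting equation  d rho_01 / dt = -gamma(t) rho_01  is manifestly time-local,
# yet gamma(t) periodically turns negative (information backflow), which is the
# non-Markovian signature; and gamma carries an explicit dependence on the initial
# time t0 through the lower limit of the integral.

lam, omega, tc = 0.3, 5.0, 2.0

def kernel(u):
    return np.cos(omega * u) * np.exp(-u / tc)

def gamma(t, t0=0.0):
    val, _ = quad(kernel, 0.0, t - t0)
    return lam**2 * val

for t in np.linspace(0.2, 3.0, 8):
    print(f"t = {t:4.2f}   gamma = {gamma(t):+.5f}")

It is precisely this kind of t0-dependent remnant of the lower limit that has to be handled with care in what follows.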
Such terms have been called “spurious” earlier in <cit.>, and it will be shown that the TCL_2 cannot be resummed unless such terms are dropped by hand. Going beyond these technical aspects, the main message is that the system described by the TCL_2 master equation is not a Markovian one and we will show how previous results were only approximate since it did not involve a proper analyses of such spurious terms, and how we improve upon those results in our work. § A TOY-MODEL FOR SUB- AND SUPER-HUBBLE MODE-COUPLING Our model consists of a system comprising a minimally coupled scalar field (χ) and an environment composed of a conformally coupled scalar field (ψ), with an interaction term which has previously been studied in <cit.>. The reason behind studying this model is two-fold. Firstly, the associated memory kernel of the environment field has the advantage of cleanly splitting into two different contributions, one that is clearly time-local while the other non-local, as we will show later on. Secondly, the conformally-coupled “environment” field is a good proxy for the sub-Hubble modes since the latter are least influenced by the background curvature. On the other hand, the minimally coupled “system” scalar would also be assumed to be massless, thereby playing the role of the adiabatic degree of freedom. Therefore, this setup can play the role of a toy-model for stochastic inflation. Nonetheless, it is important to keep in mind that this is still a model of two interacting test scalars in a dS background: s^2 = a(τ)^2 [-τ^2 + x^2] where a(τ)=-1/(Hτ) is the scalar factor, H the Hubble parameter and τ denotes the conformal time. The action, written in terms of (canonical) Mukhanov-Sasaki fields, is given by S = ∫τ ^3 x {1/2[ χ'^2 - (∇χ)^2 + a”/aχ^2 + ψ'^2 -(∇ψ)^2] + λ a χ : ψ^2 : } , where primes denote derivatives with respect to conformal time, τ. From the above action, we can obtain the system and environment Hamiltonians, together with the interaction potential, as: H_ = 1/2∫^3 k/(2π)^3[ π_ kπ_- k + k^2 χ_ kχ_- k + a'/a( χ_ kπ_- k + π_ kχ_- k) ] , H_ = 1/2∫^3 k/(2π)^3[ π^(ψ)_ kπ^(ψ)_- k + k^2 ψ_ kψ_- k] , V(τ) = - λ a(τ) ∫^3 x χ (x) : ψ^2 (x) : , where π is the momentum conjugate of χ, and π^(ψ) is that of ψ, respectively. From here on, operators in the Schrödinger picture will be denoted with hats, while interaction picture operators will be denoted with tildes. Following <cit.>, the interaction term has been normal-ordered as :ψ^2: = ψ^2 - [ Tr(ρ_0 ψ)]^2, where ρ_0 is the initial (product) density matrix. Given the free Hamiltonians, the Heisenberg equations of motion can be evaluated as χ”_ k (τ) + [ k^2 - 2/τ^2]χ_ k (τ) = 0 , ψ”_ k (τ) + k^2 ψ_ k (τ) = 0 , with solutions, assuming Bunch-Davies initial conditions, given by χ_k (τ) = e^-i k τ/√(2k)( 1 - i/k τ) , ψ_k (τ) = e^-i k τ/√(2k) , and similary for their conjugate momenta: π_k (τ) = χ_k' (τ) - a'/aχ_k (τ) = -i √(k/2)e^-ikτ , π^(ψ)_k (τ) = -i √(k/2)(1 + i/kτ) . Consequently, the quantum operators for the system field (and its conjugate momenta) read χ̃_ k (τ) = χ_k (τ) []ak + χ_k^* (τ) [-]ak , π̃_ k (τ) = π_k (τ) []ak + π_k^* (τ) [-]ak , where []ak and [-]ak are the annihilation and creation operators at τ_0, respectively. Finally, let us express these fields at arbitrary times in a time-local way, as required in the TCL framework. 
For this, we `solve' (<ref>) and (<ref>) for []ak and [-]ak at τ, and subsequently substitute these solutions back into the corresponding equations at τ', yielding χ̃_ p (τ') = 2 Im[χ_p (τ') π_p^* (τ)] χ̃_ p (τ) - 2 Im[χ_p (τ') χ_p^*(τ)] π̃_ p (τ) , π̃_ p (τ') = 2 Im[ π_p^*(τ) π_p (τ') ] χ̃_ p (τ) - 2 Im[ χ_p^* (τ) π_p (τ') ] π̃_ p (τ) . Plugging the mode functions above, one obtains χ̃_ p (τ') = [ cos (p(τ - τ')) + sin(p(τ-τ'))/pτ']χ̃_ p (τ) - [ τ' - τ/p^2 ττ'cos(p(τ-τ')) + 1/p(1 + 1/p^2 ττ') sin(p(τ-τ')) ] π̃_ p (τ) , π̃_ p (τ') = p sin(p(τ-τ')) χ̃_ p (τ) + [ cos(p(τ-τ')) - 1/pτsin(p(τ - τ')) ] π̃_ p (τ) . §.§ Building the TCL_2 master equation Following the recipe outlined in Sec-2, some of the details of which can be found in <cit.>, we can write down the TCL_2 master equation as[We have used ρ^ and ρ_ red interchangably in this paper.] /τ = - ∑_ pλ a(τ) ∫_τ_0^ττ' λ a(τ') {[ χ̃_ p (τ) χ̃_- p (τ') (τ) - χ̃_- p (τ') (τ) χ̃_ p (τ) ]K_ p (τ,τ') - [ χ̃_ p (τ) (τ) χ̃_- p (τ') - (τ) χ̃_- p (τ') χ̃_ p (τ) ] K_ p^* (τ, τ') } , where the memory kernel is given by[To write the solution of the integral in the quoted form, one invokes the Sokhotski–Plemelj theorem, which states lim_ϵ→ 0^+1/x ± i ϵ = P( 1/x) ∓ i πδ(x) , where P denotes the principal part which shall be expressed more explicitly later on.] K_ p(τ,τ') = 2 ∫^3 k/(2π)^3 ψ_k (τ) ψ_k^* (τ') ψ_q (τ) ψ_q^* (τ') , p = |k + q| = - i/8π^2 e^-i p (τ-τ') P( 1/τ - τ') + 1/8πδ(τ - τ') . Given the system field appears at most at quadratic order in the the full Lagrangian, the evolution equation for the density matrix can be written as a sum over independent momentum modes p without any mode-coupling. This is another technical advantage of choosing the type of coupling that is being considered in this model (as opposed to, say, a χ^2 ψ^2 coupling term). It will be more compact to express the field and its conjugate momentum as coordinates of phase space, denoted as z̃_1 (τ) = χ̃_ p (τ), z̃_2 (τ) = π̃_ p (τ). Using (<ref>),(<ref>), the resulting master equation can be written as /τ = - 1/2∑_ p{ [D_11 + i Δ_11] (z̃_1 z̃_1^ - z̃_1^z̃_1) + 2 [D_12 + i Δ_12] (z̃_1 z̃_2^ - z̃_2^z̃_1) - [D_11 - i Δ_11](z̃_1 z̃_1^ - z̃_1^z̃_1) - 2 [D_12 - i Δ_12](z̃_1 z̃_2^ - z̃_2^z̃_1) } , where all the operators are evaluated at τ, and the functions are given by the following integrals: D_11 = 4 λ a(τ) ∫_τ_0^ττ' λ(τ') Im[χ_p (τ') π_p^* (τ)] Re[K_p (τ,τ')] , Δ_11 = 4 λ a(τ) ∫_τ_0^ττ' λ a(τ') Im[χ_p (τ') π_p^* (τ)] Im[K_p (τ,τ')] , D_12 = -2 λ a(τ) ∫_τ_0^ττ' λ a(τ') Im[χ_p (τ') χ_p^*(τ)] Re[ K_p (τ,τ') ] , Δ_12 = -2 λ a(τ) ∫_τ_0^ττ' λ a(τ') Im[χ_p (τ') χ_p^*(τ)] Im[ K_p (τ,τ') ] . We leave the solutions of these integrals for the next section. At this point, let us consider the lower limit of the (formal) antiderivative of the above functions. Following the convention adopted in <cit.>, we can rewrite the RHS of each of the above equations as F_i(τ, τ) - F_i(τ, τ_0), the abstract index i denoting the different functions for the four different quantities above. The second term, coming from the lower limit of the integral, has the memory of the initial state and is the one that has been termed `spurious' in <cit.>. In the Appendix, we compute these terms explicitly and show that they dominate at late-times near the conformal boundary. What this implies is that keeping such terms would not allow for the TCL_2 master equation to be resummed. Moreover, as has also been noticed in <cit.>, such terms are absent in a perturbative treatment. 
In other words, if one were to solve the master equation by expanding the density matrix to a fixed order in λ^n, then these terms get cancelled amongst themselves. However, when solving the TCL_2 equation non-perturbatively by considering it as the authentic dynamical generator, we only keep some of the terms at orders higher than λ^2, and therefore such spurious terms need to be removed by hand. However, getting rid of these spurious terms is not the same as imposing a Markovian approximation. As will be shown later on, some of the master equation coefficients in (<ref>)-(<ref>) have contributions solely from the non-local part of the memory kernel and hence, by definition, has a non-Markovian origin. It is important to point out that previous papers where this model has been studied lacks any discussion of these spurious terms and we shall explain later on in the next section why their results are, therefore, necessarily approximate when considering non-perturbative resummations. With these expressions at hand, the master equation is written in a rather economical and suggestive way, /τ = ∑_ p( -i/2Δ_ij(τ) [z̃_i (τ) z̃^_j (τ), (τ) ] - 1/2 D_ij (τ) [ z̃_i (τ), [ z̃^_j (τ), (τ) ]] - i/2Δ_12 (τ) ω_ij[ z̃_i (τ), {z̃^_j (τ), (τ) }] ) , where ω_ij is the antisymmetric matrix ω_ij = [ 0 1; -1 0 ] , and D_ij and Δ_ij are symmetric matrices, whose entries are given by (<ref>)-(<ref>). Noticeably, there is no D_22 term present, a consequence of the specific form of the interaction involving only configuration field-field coupling. On the other hand, in interactions involving time derivatives of the fields, one anticipates the emergence of such terms (as that would involve a field-momentum coupling <cit.>). The master equation, presented in this manner, bears resemblance to stochastic equation describing (quantum) Brownian motion, such as the one obtained from the Caldeira-Leggett model <cit.>. For practical purposes, it is more useful to work in the Schrödinger picture where the master equation takes the form: /τ = ∑_ p( -i H_ij^(2)[ z_i z_j^ , (τ) ] - 1/2 D_ij (τ) [ z_i , [ z_j^ , (τ) ]] - i/2Δ_12ω_ij[ z_i , {z_j^ , (τ) }] ) , with the effective quadratic Hamiltonian given by H_ij^(2) = 1/2[ z_2 z^_2 + (k^2 + Δ_11) z_1 z^_1 + (a'/a + Δ_12) ( z_1 z_2^ + z_2 z_1^) ] . Finally, we can rewrite the master equation in a Lindblad–like form as /τ = ∑_ p( -i H_ij^(2)[ z_i z_j^ , (τ) ] + γ_ij (τ) ( z_i (τ) z_j^ - 1/2{z_j^z_i , (τ) }) ) , where γ_ij≡ D_ij - iΔ_12 ω_ij is known as the dissipator matrix. §.§ The dissipator matrix: Dissipation and Diffusion Let us focus on the coefficients of the master equation, some of which appear in the effective Hamiltonian and the others as entries of the dissipator matrix. Before presenting the explicit form of these expressions, let us delve into the physical implications of these terms. The real and imaginary part of the memory kernel K_p(τ, τ') correspond to the noise and the dissipation kernel, respectively[For thermal states, the real and imaginary parts are related to each other via the fluctuation-dissipation theorem. Such dynamics takes place in scenarios like warm inflation <cit.>.]. In terms of the RHS of the dissipator matrix written above, D_ij corresponds to the noise kernel while Δ_12 the dissipation kernel, respectively. Both these terms signify non-unitary contributions to the reduced density matrix due to interactions with the environment field. 
On the other hand, Δ_11 in (<ref>) corresponds to a unitary term renormalizing the energy spectrum of the system Hamiltonian. Since the D_ij term is responsible for diffusion, it leads to decoherence of the quantum modes associated with the system dofs. On the other hand, the Δ_12 is the dissipative or friction term that controls the amount of energy transfer between the system and the environment and would be reflected in the equal-time correlation functions, e.g., the configuration field power-spectrum. (Δ_12 plays the dual role of renormalizing the comoving Hubble parameter.) The utility of our model is that it would be clear which of these quantities are dominated by the local and the non-local parts of the kernel, and thereby which physical observables would reflect the non-Markovianity of the model. Keeping in mind the physical significance of Δ_11, D_ij and Δ_12, we wish to find their explicit form using the integrals in (<ref>)-(<ref>). Although intricate, these integrals can indeed be evaluated analytically for this model. Particularly delicate are the terms arising from the principal value in the kernel (see (<ref>)), i.e., from the time non-local term. One strategy to address this, and to isolate the associated UV-divergence, is to express the principal value as: P( 1/τ-τ') = τ-τ'/(τ-τ')^2 + ϵ^2 , and subsequently expand around ϵ = 0^+. We present the result of these integrals derived through this method, while relegating the expressions of the spurious F_i (τ,τ_0) functions, originating from the lower limits of the integrals, to the Appendix. In the following, we have highlighted in blue those terms arising from the local part of the kernel while the rest all originate from the non-local bit. Furthermore, the UV-divergent piece (also coming from the non-local part of the kernel) has been marked in red. The resulting expressions are: D_11 = -λ^2/8π^2 H^2 p τ^3[ γ_E + ln(-2pτ) - Ci(-2pτ) [ cos(2pτ) + p τsin(2pτ) ] + Si (2pτ) [ pτcos(2pτ) - sin(2pτ) ] ] + λ^2/4π H^2 τ^2 + F_1[τ,τ_0] , Δ_11 = λ^2/8π^2 H^2 p τ^3[ pτ[ γ_E -ln(-2 pτ) ] + Ci(-2pτ) [pτcos(2pτ) - sin(2pτ) ] + Si(2pτ) [ cos(2pτ) + p τsin(2pτ) ] ] + λ^2/4π^2 H^2 τ^2ln(2pϵ) + F_2 [τ,τ_0] , D_12 = λ^2/16 π^2 H^2 p^3 τ^4[ (1+p^2 τ^2) [ γ_E + ln(-2pτ) ] + Ci(-2pτ) [ (-1 + p^2 τ^2) cos(2pτ) - 2pτsin(2pτ) ] + Si(2pτ) [ 2pτcos(2pτ) + (-1 + p^2 τ^2) sin(2pτ) ]] + 0 + F_3[τ,τ_0] , Δ_12 = λ^2/16π^2 H^2 p^3 τ^4[ 2 p τ + Ci(-2pτ) [ sin(2pτ) - pτ (2 cos(2pτ) + pτsin(2pτ)) ] + Si(2pτ) [ (-1+p^2 τ^2) cos(2pτ) - 2pτsin(2pτ) ] ] + F_4 [τ,τ_0] . We show the evolution of these functions in Figs. <ref> and <ref>. The depicted plots exclusively come from the finite terms where, as promised, we have simply dropped the spurious contributions F_i(τ, τ_0) for now. Note that we have appropriately rescaled these quantities with the correct factors of p to make them dimensionless. For generating the plots and performing subsequent numerical analyses, we have selected values of p such that the system evolves for 20 e-foldings before horizon crossing. We verified that extending this time does not alter our results. a_* denotes the value of the scale-factor when the mode crosses the horizon; thus, the zero of the horizontal axis denotes horizon-crossing in the following plots. Let us first take a close look at the expression for D_11 as given in (<ref>). The term coming from the local part of the kernel apparently looks subdominant (∝ 1/τ^2) to the nonlocal part (with a piece going as 1/τ^3). 
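To compare the two contributions quantitatively, the expressions above (with the spurious F_1 dropped) can be evaluated directly; the sketch below does this for D_11, setting H to one and quoting everything in units of λ^2, with scipy supplying the sine and cosine integrals.

import numpy as np
from scipy.special import sici

gamma_E = np.euler_gamma

def D11_pieces(p, tau, H=1.0):
    # Local and non-local pieces of D_11 as quoted above (in units of lambda^2);
    # conformal time tau < 0, so -2 p tau > 0.
    x = -2.0 * p * tau
    Si_pos, Ci_val = sici(x)                 # Si(-2 p tau), Ci(-2 p tau)
    Si_val = -Si_pos                         # Si(2 p tau); Si is odd
    c, s = np.cos(2 * p * tau), np.sin(2 * p * tau)
    bracket = (gamma_E + np.log(x)
               - Ci_val * (c + p * tau * s)
               + Si_val * (p * tau * c - s))
    nonlocal_piece = -bracket / (8 * np.pi**2 * H**2 * p * tau**3)
    local_piece = 1.0 / (4 * np.pi * H**2 * tau**2)
    return local_piece, nonlocal_piece

p = 1.0
for N in (-2, -1, 0, 1, 2, 4):               # e-folds relative to horizon crossing
    tau = -np.exp(-N) / p
    loc, nloc = D11_pieces(p, tau)
    print(f"N = {N:+d}   local = {loc:10.3e}   non-local = {nloc:+10.3e}"
          f"   |local/non-local| = {abs(loc / nloc):8.1f}")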
However, on tracking the evolution of these terms separately, it is clearly seen that the time-local contribution consistently dominates, becoming more pronounced over timescales of a few e-foldings after horizon exit (see Fig. <ref>). One can show that the Taylor expansion of D_11 around τ=0 is: D_11≈λ^2/4π H^2 τ^2 +λ^2 p/8π^2 H^2 τ+ 𝒪(τ) What this indicates is that one of the diffusion terms (D_11) is dominated by the contribution coming from the local part of the memory kernel (at all times and certainly after horizon-crossing). This will be important for our discussion about decoherence later on. On the other hand, D_12 only has contributions from the non-local part of the kernel since the term coming from the local bit turns out to be zero for our model. Next let us look at the expression of Δ_11 and Δ_12 from (<ref>) and (<ref>), respectively. These have contributions coming solely from the non-local part of the memory kernel which shows that the dissipation in this model is necessarily non-Markovian. However, Δ_11 does contain a UV-divergent term that needs to be taken care of by invoking a mass counter-term. Note that this implies that this mass term has to counteract long-time correlations such that they get screened. Or, conversely, we might say that the non-local origin of this UV-divergent term means that the time non-local history dependence of the correlations in the environment induce a mass term for the system. Δ_12 does not have any such UV-divergent piece. Finally, note that both Δ_11 and Δ_12 also display secular divergence even though these now have a non-local origin. § COSMOLOGICAL OBSERVABLES The purpose of tracking the time evolution of the (reduced) density matrix through the master equation is to compute observables, particularly equal-time n-point correlation functions. In this work, we primarily focus on 2-point correlators, where every relevant combination is encapsulated by the following matrix: [Ξ_ab]_ p = 1/2( ẑ_a ẑ_b^† + ẑ_b ẑ_a^†) , where each term on the RHS is to be understood as the p-mode of their respective field. Observables are identified as the expectation values of these operators, represented by Σ_ab := ⟨Ξ_ab⟩. This is known as the covariance matrix, and the differential equations governing the evolution of its entries are dubbed transport equations. §.§ The transport equations To derive the transport equations, let us begin by considering the expectation value of an arbitrary operator Ô, which in the Schrödinger picture is given by ⟨Ô⟩ (τ) = _[ Ô (τ)] . Then, the evolution equation of the expectation value can be found through /τ⟨Ô⟩ (τ) = _{Ô L__ P [ (τ)] } . Using the TCL_2 master equation in any of its forms, say (<ref>), and leveraging the cyclic property of the trace, we can cast (<ref>) as a combination of expectation values (underscoring the importance of a time convolutionless master equation). In doing so, and repeatedly using [z_i, O] = [z_i, Ξ_ab] = 1/2 [z_i, ẑ_a ẑ_b^† + ẑ_b ẑ_a^†] = 1/2{z_a [z_i, z_b^] + [z_i, z_a] z_b^} + (a ↔ b) = 1/2{z_a (i ω_ibδ_k,p ) + (i ω_iaδ_k, -p) z_b^} + (a ↔ b) , where z_i ≡ [z_i]_ k, we obtain /τΣ_ p = ωH^(2) (Σ_ p + Σ_-p) - (Σ_ p + Σ_-p) H^(2)ω -ωDω - Δ_12( Σ_p + Σ_-p) . The first two terms on the RHS come from the unitary evolution, while the latter two are associated with diffusion and dissipation, respectively, as explained in the previous subsection. 
For parity conserving Hamiltonians, one can safely assume Σ_ p = Σ_- p, yielding the simple set of transport equations /τΣ_ p = 2 ωH^(2)Σ_ p - 2 Σ_ pH^(2)ω -ωDω - 2 Δ_12Σ_p , or, in matrix form, [ Σ_11' Σ_12'; Σ_12' Σ_22' ] = [ 2(a'/a)Σ_11 + 2 Σ_12 Σ_22 - (p^2 + Δ_11) Σ_11 - D_12 - 2 Δ_12Σ_12; Σ_22 - (p^2 + Δ_11) Σ_11 - D_12 - 2 Δ_12Σ_12 -2 (p^2 + Δ_11) Σ_12 - 2 (a'/a + 2 Δ_12) Σ_22 + D_11 ] , where it is understood that all of the terms are computed at a specific momentum mode p. §.§.§ A simple consistency check Before diving into solving the transport equations – something that can only be achieved through numerical simulations – let us verify that the equations yield sensible expressions for the free theory, where analytical exact results are readily available. For such a scenario, we know the elements of the covariance matrix, which are merely combinations of the mode functions described in (<ref>) and (<ref>), and their complex conjugates. Upon calculation, we find: Σ^ free_11 (τ) = 1/2p( 1 + 1/(pτ)^2) /τΣ^ free_11 (τ) = - 1/(pτ)^3 Σ^ free_12 (τ) = 1/2pτ /τΣ^ free_12 (τ) = - 1/2pτ^2 Σ_22^ free (τ) = p/2 /τΣ^ free_22 (τ) = 0 , which perfectly matches with the free-theory transport equations: /τΣ^ free = [ 2(a'/a)Σ_11^ free + 2 Σ_12^ free Σ_22^ free - p^2 Σ_11^ free; Σ_22^ free - p^2 Σ_11^ free -2 p^2 Σ_12^ free - 2 (a'/a) Σ_22 ^ free ] . On a cursory glance, the transport equation for Σ_11, namely the configuration field (χ) power spectrum, looks remarkably similar in structure to the analogous equation in (<ref>). However, this resemblance does not imply that interactions have no impact on the power spectrum. Quite the contrary, the influence of interactions is encapsulated in the coupling of Σ_11 to other equations, where the dissipative and diffusion coefficients appear explicitly. §.§ Numerical Solutions for the covariance matrix As previously mentioned, the system of transport equations (<ref>) requires numerical techniques to obtain a solution. Alternative approaches, like the one implemented in <cit.>, involve approximations in the super-Hubble regime by ignoring the contribution of the so-called decaying modes to the system. As a by-product, we shall show that these approximations render results in excellent agreement with the exact (numerical) results obtained here. Figs. <ref>–<ref> depict the solutions to the transport equations for a range of couplings. In our code, we simulated 20 e-folds of evolution in the sub-horizon regime, ensuring that extending this duration yields consistent final results once the mode has long exited the horizon and is in the super-Hubble regime. As before, we have rescaled the quantities – in this case, the covariance matrix elements – to make them dimensionless. Additionally, the value of p was chosen based on the same reasoning discussed earlier, and a/a_* = 1 is the instant of horizon-crossing. §.§.§ Non-perturbative resummation and spurious terms One aspect of our solutions that might not be immediately apparent is the inherent non-perturbative resummation of any potential secular divergences. This feature has long been recognized as one of the advantages of using master equation techniques for open quantum systems <cit.>. An analytical scheme which has been used to find resummations for master equations follows a super-horizon approximation which, in effect, calls for dropping all terms proportional to the decaying mode outside the horizon <cit.>. 
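As an indication of how such an integration can be set up, the sketch below solves the transport equations for a single mode, with the interaction coefficients passed in as callables; switching them all off reproduces the free-theory covariance of the consistency check, which is what is verified here, and the interacting case is obtained by supplying the D_ij and Δ_ij of the previous section. The parameter choices are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

p = 1.0

def rhs(tau, S, D11, D12, Del11, Del12):
    S11, S12, S22 = S
    aH = -1.0 / tau                                     # a'/a = -1/tau in de Sitter
    dS11 = 2 * aH * S11 + 2 * S12
    dS12 = S22 - (p**2 + Del11(tau)) * S11 - D12(tau) - 2 * Del12(tau) * S12
    dS22 = (-2 * (p**2 + Del11(tau)) * S12
            - 2 * (aH + 2 * Del12(tau)) * S22 + D11(tau))
    return [dS11, dS12, dS22]

zero = lambda tau: 0.0
tau0, tau_end = -20.0 / p, -0.01 / p
S0 = [(1 + 1 / (p * tau0) ** 2) / (2 * p),              # Bunch-Davies initial data
      1 / (2 * p * tau0),
      p / 2]

sol = solve_ivp(rhs, (tau0, tau_end), S0, args=(zero, zero, zero, zero),
                method="DOP853", rtol=1e-10, atol=1e-12)

S11_exact = (1 + 1 / (p * tau_end) ** 2) / (2 * p)      # free-theory Sigma_11
print("Sigma_11(tau_end):", sol.y[0, -1], " vs free-theory:", S11_exact)

We now return to the analytical super-horizon scheme mentioned above.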
Specifically, on applying this to this model resulted in an amplitude for the spectrum of the configuration field (Σ_11, for a given p,) in the approximate form <cit.>: P(p;τ) := p^3/2π^21/a^2Σ_11 (p;τ) = H^2/2π^2exp[α(p) ln (-pτ)]exp[-λ^2/12π^2 H^2ln^2 (-pτ)]⟨ Q_ p Q_- p⟩ (τ_*) , where α(p) = 2/3M_R^2 (τ_0)/H^2 + λ^2/6π^2 H^2[ln(-pτ_0) + 1 ] and ⟨ Q_ p Q_- p⟩ (τ_*) represents the correlation between the free–theory growing modes, and is given by 1/2 at horizon exit. Further, M_R represents a renormalized mass, whose origin (in our framework) is the counterterm that subtracts the divergent term in Δ_11 (see (<ref>)). The double exponential ensures a finite result since the second exponential –– having a quadratic dependence on the number of e-folds N := ln (-pτ) –– will dominate at late times. To compare our results with the above analytical expression, α(p) must be fully determined. The form presented in (<ref>) arises from equating the renormalization scale to the initial time at the beginning of inflation when the density matrix gets prepared (τ_0). The reason behind this choice shall be discussed shortly. Our treatment of the divergence in Δ_11 differs in that we chose a counterterm that precisely cancels out the infinity, leaving only the finite parts shown in (<ref>). Consequently, no term analogous to ln(-pτ_0) contributes to our result, leading us to drop it and consider α(p) = λ^2/(6π^2H^2) for comparison purposes[In the final expression in <cit.>, to provide numerical estimates, p is chosen such that the mode would have exited the horizon 50 e-folds before the end of inflation, i.e., ln(-pτ_0) ≃ 50. On the other hand, the renormalization scheme there had eliminated the constant term λ^2/(6π^2H^2) in the exponent.]. Although our implementation of the renormalization procedure varies slightly from previous results <cit.> – mainly due to the fact that a numerical, yet exact, computation that we have executed here would have been significantly difficult otherwise – note that this small differences appear in the term linear in N and should not matter for the actual resummation which follows from the double exponential. With this caveat in mind, we quantify the consistency of our results with those of <cit.> by defining the relative deviation as: | Δ P/P| = | 1 - P_ SH-approx/ P_ exact|, where P_ SH-approx is given by (<ref>), and P_ exact is the corresponding result obtained with our approach. This function is plotted in Fig. <ref>, showing a remarkable agreement for small λ/H, with expected discrepancies for λ∼ H, the limit in which both the TCL_2 approximation and the perturtabative approach in <cit.> break down. To provide further evidence with respect to the consistency of both approaches, we have fitted our results to a double exponential function as described in (<ref>), which ultimately forms an exponential of a quadratic function of N. The outcomes of this fitting process are summarized in Table <ref>, focussing on the fit for the quadratic function as it is the most important one for resummation purposes. Several insights can be drawn from these fits. Most importantly, the quadratic coefficient in N aligns closely with the prediction in <cit.>, even for the strongest couplings. As mentioned above, this coefficient is crucial for resummation, indicating that both approaches are comparable in this regard. The reason behind the discrepancy for the linear term, due to the slightly different ways of implementing the renormalization schemes, has been explained above. 
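The coefficients entering this comparison follow directly from the expression above once α(p) is reduced to λ^2/(6π^2 H^2); a short sketch makes the identification with the N^0, N^1 and N^2 terms of the fit explicit (λ/H = 10^{-3} for definiteness).

import numpy as np

# ln[ P / (H^2 / 2 pi^2) ] = ln(1/2) + alpha * N - lambda^2/(12 pi^2 H^2) * N^2 ,
# with N = ln(-p tau) and alpha = lambda^2 / (6 pi^2 H^2) in the convention
# adopted here for the comparison.

lam_over_H = 1e-3
const = np.log(0.5)
lin = lam_over_H**2 / (6 * np.pi**2)
quadratic = -lam_over_H**2 / (12 * np.pi**2)

print(f"N^0 coefficient: {const:.8f}")      # -0.69314718
print(f"N^1 coefficient: {lin:.6e}")
print(f"N^2 coefficient: {quadratic:.6e}")  # ~ -8.44e-9 for lambda/H = 1e-3

def P_superhorizon(N, H=1.0):
    # Resummed super-horizon spectrum as a function of N = ln(-p tau)
    return (H**2 / (2 * np.pi**2)) * 0.5 * np.exp(lin * N + quadratic * N**2)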
Also, it is worth noting that the analytical and numerical approaches predict almost identical coefficients for the term independent of any e-foldings (N^0). For the former, this value corresponds to the logarithm of the last term in (<ref>), ln (1/2) ≈ -0.693147, which aligns well with the fitted functions. Having shown that the super-horizon approximation that leads to an analytical non-perturbative resummation works quite well when compared to our exact numerical solution, we are now in a position to explain why that is the case and also why does one need an additional approximation in the former case at all. Recall that there was no proper analysis of the spurious terms done in <cit.>, and yet the results of such a resummation procedure compares quite well to our exact numerical solutions. The reason is the judicious choice of the renormalization scale which was identified with τ_0 in order to minimize the effects from the lower limits of the time integrals involved in the calculations. Although this process is reminiscent to removing spurious terms, it does not quite completely eliminate the contribution of the lower limit of the integral to the master equation. To do so, one needs to implement a further approximation, that of dropping all the decaying modes in the final result. In this way, all contributions of the spurious terms were effectively eliminated in that approach. Although this works fine if one is concerned with computing the value of an observable at the final conformal boundary, the approximation becomes progressively worse at earlier times, when evaluating the evolution of the observable over time as one goes away from the τ→ 0 limit. On the other hand, in the TCL_2 approach, one simply identifies terms which would cancel in a perturbative approach and contain remnants of the initial state, and drops them from the very beginning. In this way, no further approximations are necessary to derive dynamics of cosmological observables. Moreover, apart from not having to impose additional assumptions, the TCL resummation has been shown to better match with exact results for solvable models <cit.> than the super-horizon one. §.§ Purity as the determinant of the covariance matrix Beside computing the in-in correlation functions, open EFTs are key to studying non-unitary effects such as decoherence (see <cit.> for a partial list). In particular, in this paper, we will consider the behaviour of a specific quantum informatic measure, namely purity, for each mode[As mentioned earlier, for our model different system modes are decoupled from each other at the level of the TCL_2 master equation, which greatly simplifies the task at hand.], which signals the amount of quantum entanglement between the system and the environment. By analysing the purity of the reduced ρ^, we can study the decoherence (and possibly more exotic phenomena of recoherence or purity-freezing, as a consequence of non-Markovianity <cit.>) of the system quantum field. The purity γ_ p of the system, for a particular momentum mode, is defined as: γ_ p=Tr_[(ρ^_ p)^2] , where ρ^ is the reduced density matrix, corresponding to the system dofs, as before. When γ_ p=1, the system is in a pure state and there is no entanglement between system and environment. When γ_ p=0, the system density matrix is maximally mixed. As a side-note, recall that the purity is indeed independent under field-redifinitions as the trace indicates. 
From hereon, we will drop the p subscript to avoid clutter since the evaluation can be done mode by mode. One can simply solve the master equation (<ref>) to solve for the reduced density matrix ρ^ and use that to compute the evolution of the purity. However, an equivalent step would be to use the covariance matrix to do so. More specifically, it is known that for Gaussian states, we can use the covariance matrix of the system sector Σ^[We introduce the superscript here for the covariance matrix to emphasize that this corresponds to the system field alone and is not the full covariance matrix.] to calculate the purity of the system <cit.>: γ=1/√(4 det [Σ^S]) . Solving ρ^ from the master equation by truncating at the TCL_2 order is the same as using the covariance matrix to compute the purtiy, while ignoring contributions from higher order correlators, and thus our approximation is self-consistent. Although we can solve for each of the components of the covariance matrix individually and plug them into the above equation, in practice, it is simpler to derive the equation for the determinant of covariance matrix from (<ref>) as d/dτdet[Σ^] =Tr[Σ^𝐃] - 4 Δ_12 det[Σ^] . We choose initial conditions such that the field is in the Bunch-Davies vacuum. Explicitly, these are given by Σ^_11 = 1/(2p), Σ^_22 = p/2, Σ^_12 = Σ^_21 = 0, which implies that det[Σ^]=1/4 ⇒γ =1 as τ_0 → -∞. With these initial conditions, we solve for the above differential equation for the determinant of Σ^ and use that to solve for γ. For a mode that spends 10 e-folds before horizon exit, we solve the above equation and compare the dynamical behavior of purity for different values of λ as a ratio over H. As shown in Fig. <ref>, the system evolves rapidly to a mixed state due to its coupling with the environment. We conclude that decoherence proceeds very efficiently for this model supporting previous findings <cit.> derived in a different manner. However, this raises an important question. In <cit.>, it was shown that the purity decreases from 1 and then bounces back up to some value, when considering the (Gaussian) coupling of an adiabatic mode with an entropic one. This was rightly pointed out to be an effect of the non-Markovian nature of the system-environment coupling for an accelerating background. In the previous sub-sections, as was also done in <cit.> using different approximations, we have demonstrated that the corrections to Σ_11 comes from the non-local part of the memory kernel, and thus it is established that this model is also non-Markovian. Why do we then not see such recoherence or purity-freezing in this case? This can be understood if we focus on different terms appearing in (<ref>). The first term on the RHS of that equation is characterized by diffusion, and D_11 is the dominant contribution to this term due to the initial conditions mentioned above. (Recall that D_22 is zero for our model and D_12, which does have a non-local origin, multiplies the off-diagonal terms in Σ^.) It can be easily seen that the dynamics is primarily driven by this first term on the RHS which dominates over the second piece featuring the dissipative Δ_12 term (at least, until very late-times once the system classicalizes). Furthermore, the dissipative term forms the homogeneous part of the equation and since Δ_12 is negative for this model, the overall sign becomes positive for this term. Consequently, this yields an exponentially growing solution for the determinant, indicative of a decline in purity, i.e., decoherence. 
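As a sanity check of the purity formula and of these initial conditions, note that the free mode functions quoted earlier give det[Σ^S] = 1/4 identically, so the free evolution preserves γ = 1 and any departure from unity is sourced by the interaction terms alone; a few lines suffice to verify this numerically.

import numpy as np

def purity(Sigma):
    # gamma = 1 / sqrt(4 det Sigma) for a Gaussian state
    return 1.0 / np.sqrt(4.0 * np.linalg.det(Sigma))

p = 1.0
for tau in (-50.0, -5.0, -0.5, -0.05):
    S11 = (1 + 1 / (p * tau) ** 2) / (2 * p)     # free-theory covariance entries
    S12 = 1 / (2 * p * tau)
    S22 = p / 2
    Sigma_free = np.array([[S11, S12], [S12, S22]])
    print(f"tau = {tau:7.2f}   det = {np.linalg.det(Sigma_free):.6f}"
          f"   gamma = {purity(Sigma_free):.6f}")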
However, the main reason why we see no recoherence or purity-freezing is that the diffusion term D_11, which dominates the evolution of the determinant of Σ^, is, in turn, dominated by the local part of the memory kernel as shown in Fig. <ref> and in (<ref>). Consequently, it also indicates that if we were to consider a model in which the coupling for a time-dependent background such that the diffusion terms are primarily derived from the non-local part of the kernel, then we might be able to see such exotic phenomena such as recoherence <cit.>. To drive home this message even more forcefully, let us solve (<ref>) separately with only local and non-local components, where we note that while the non-local effects leave an oscillating flow of information between the system and environment, decoherence is purely driven by the local component of D_11. We need to zoom in the plot (Fig. <ref>) for the evolution of the purity to see this. We plot sub-horizon evolution of the purity for this since, on artificially only keeping the non-local contributions to the transport equations, the plot for γ_non-local diverges after horizon-crossing. Nevertheless, when all the terms are kept (plot corresponding to γ), the purity is perfectly well-behaved and shows efficient decoherence, and closely follows the plot for purity if we had (again, artificially) dropped all the non-local terms in the transport equations. For comparison, we also evaluate the purity of the reduced density matrix using perturbation theory (PT). Since TCL_n has been shown to contain all the terms appearing in PT_n and some other higher order m>n terms <cit.>, we get the PT_2 equation by terminating the TCL_2 equation at λ^2 order. Accordingly, the perturbative transport equation, at leading order, is given by dΣ^(2)_PT/dτ=ωH^(2)Σ^(2)_PT-Σ^(2)_PTH^(2)ω+ωΔΣ^(0)-Σ^(0)Δω-ωDω-2 Δ_12Σ^(0) . The calculation of the perturbative purity shows that TCL_2 solution is almost identical to SPT_2 solution, which is a signature of validity for TCL method. It also points to the real advantage of using the TCL formalism lies in its ability to resum secular divergences which appear for some physical observables, such as Σ_11 as shown in the previous sub-section when computed perturbatively, and not so much for purity in this model. Finally, following the expressions of the spurious term F_i(τ, τ_0), as computed in the Appendix, we also consider the solution for purity on including these terms. Since the spurious terms dominate at late-times, the purity naturally behaves radically differently in this case. In general, if we do not drop these terms, the system will go through incomplete decoherence and stabilize itself around some non-zero value of purity, as shown in Fig. <ref>, which is another physical motivation to drop such terms. § CONCLUSIONS What have we learnt from revisiting this toy model? Firstly, we have shown that secular late-time divergences can appear from the non-local part of the memory kernel and show up in the quantum corrections to the power spectrum of the system field. This is made abundantly clear by the fact that even if we drop the local term in the kernel, the secular divergences in the dissipative terms persist in our model. This is in stark contrast to previous computations for quantum corrections to mode functions of scalar or photon fields due to graviton loops, where the local terms in the evolution equation always dominate at late-times and were responsible for secular divergences. 
More explicitly, for our model, the solution for the power spectrum is numerical. The leading order expansion of this quantity in the coupling constant λ^2 is in general what one would have obtained in perturbation theory (in the absence of any resummations). Looking up Table <ref>, the leading order expansion can be written (for, say, λ/H=10^-3), as 𝒫 ^PT∝ e^-6.93147060× 10^-1(1-8.44291201× 10^-9N^2), which clearly denotes secularly divergent terms, the superscript `PT' reminding us that this would be the result in perturbation theory[This, of course, can also be demonstrated from the results of <cit.> P^PT(p;τ)= H^2/2 π^2⟨ Q_ p Q_- p⟩ (τ_*){1-λ^2/12π^2 H^2[2 ln(-pτ)-ln^2(-pτ)] } . ]. Thus, quantum corrections to a physical observable can indeed have secularly growing contributions from non-Markovian terms and, what is more, the time-convolutionless form of the master equation is nevertheless able to dynamically resum such terms and render finite results. Although some of these observations have been made in the past, we correct some misconceptions regarding the Markovian approximation, namely that having a time-local master equation does not necessarily imply that the system must be Markovian. This gives us a segue into how we have also shed light on an important aspect of such resummations which have been ignored in the past – spurious terms which arise from the lower (time) limit of the kernel integration as remnants of the initial state. This can be very cleanly handled using the TCL formalism and we have shown why such terms must be dropped by hand in order to have a meaningful resummation of physical observables. It makes sense to drop such terms since they are cancelled, between themselves, in a perturbative treatment to any order. Furthermore, we have resolved the interesting paradox of why previous computations of the power spectrum <cit.> yielded accurate results even when such a treatment of spurious terms were lacking. This was due to a judicious choice of the renormalization scale coupled with an additional ad hoc approximation of dropping the decaying mode contributions in the final answer. Although this approximation is good one for computing the power spectrum at the conformal boundary, it falls short in tracking the evolution of the elements of the covariance matrix when going to earlier times. The TCL formalism does not demand such approximations and render them redundant by applying a consistent analysis of the spurious terms. Another reason to have more faith in the TCL_2 resummation over the super-horizon one employed in <cit.> is that for exactly solvable Gaussian models, the former has been shown to have a closer fit with exact results <cit.>. Secondly, what our model shows is that although the memory kernel has a non-local part, the diffusion term which controls decoherence is dominated by the local term. In this context, we clarify that unlike previous claims, there is no emergent Markovian behaviour after horizon-crossing. First, this is evidenced by the fact that the corrections to the power spectrum, coming from non-local part of the kernel, survive in the τ→ 0 limit. Just because the system density matrix attains “positivity” at late-times <cit.>, which is a quantitative measure of classicality of the system, does not mean that system has become Markovian. In fact, as has been shown in <cit.> itself, one of the eigenvalues of the dissipator matrix remains negative at all times, thus clearly satisfying the definition of a non-Markovian system. 
Rather, what happens is that D_11 is dominated throughout by the contributions coming to it from the local terms, which become rather pronounced after horizon-crossing and at late times. Although D_11 plays the crucial role for decoherence, its contribution to the quantum correction to the power spectrum is negligible, and thus different physical observables can depend differently on the local and non-local pieces of the memory kernel. One of the main reasons for the belief that late-time secular growth in inflationary models typically comes from local terms is stochastic inflation, since the latter invokes white noise for modelling the quantum diffusion due to the sub-horizon modes. Our work shows that there might be a case for considering non-local corrections to the noise term for stochastic inflation as well. Indeed, in this model it is the dissipation terms that arise solely from the non-local terms, and hence it will be interesting to consider a model in the future where the diffusion (or noise) terms depend exclusively on the non-Markovian piece of the memory kernel. We leave constructing such a realistic model for future work <cit.>. We also find that purity is not a sufficiently useful tool to probe the Markovianity of the model. In <cit.>, it was shown that a non-Markovian kernel can lead to recoherence. What we find is that this is, however, not a smoking-gun signal of non-Markovianity. To the best of our knowledge, this is the first application of the TCL formalism to a cosmological model involving a non-linear interaction, generalizing beyond strictly Gaussian models. Our results, although sometimes technical, should serve well as benchmarks for applying this particular master equation formalism to future realistic models of inflation. § APPENDIX: SPURIOUS TERMS As mentioned in Sec-3, the lower limits of the integrals (<ref>)-(<ref>) give rise to so-called spurious terms. These terms, initially studied in the context of the cosmological Caldeira-Leggett model <cit.>, were found to dominate at late times for all coefficients, which is unexpected for time-local (or quasi-time-local) master equations. In the same context, these terms were shown to be absent for perturbative solutions to any given order, as explained earlier. If these spurious terms are retained, they could lead to a breakdown of the resummation of the power spectrum, contradicting the exact results available for such a model. Clearly, it is crucial to check the behaviour of these terms within the model we are working with. This not only completes the TCL_2 description, but is also necessary when considering non-Gaussian models such as this one, for which exact results are not available. An expectation articulated in <cit.> was that for more realistic interactions, going beyond the Gaussian case, it might be true that such spurious terms (coming from the lower limit of the integrals) would get suppressed. We want to show that this is certainly not the case for our model, and that this has consequences for the non-perturbative resummation, as explained in Sec-4. 
For the reader's convenience, we show again the explicit form of the master equation coefficients (keeping our color code for the local and divergent terms): D_11 = -λ^2/8π^2 H^2 p τ^3[ γ_E + ln(-2pτ) - Ci(-2pτ) [ cos(2pτ) + p τsin(2pτ) ] + Si (2pτ) [ pτcos(2pτ) - sin(2pτ) ] ] + λ^2/4π H^2 τ^2 + F_1[τ,τ_0] , Δ_11 = λ^2/8π^2 H^2 p τ^3[ pτ[ γ_E -ln(-2 pτ) ] + Ci(-2pτ) [pτcos(2pτ) - sin(2pτ) ] + Si(2pτ) [ cos(2pτ) + p τsin(2pτ) ] ] + λ^2/4π^2 H^2 τ^2ln(2pϵ) + F_2 [τ,τ_0] , D_12 = λ^2/16 π^2 H^2 p^3 τ^4[ (1+p^2 τ^2) [ γ_E + ln(-2pτ) ] + Ci(-2pτ) [ (-1 + p^2 τ^2) cos(2pτ) - 2pτsin(2pτ) ] + Si(2pτ) [ 2pτcos(2pτ) + (-1 + p^2 τ^2) sin(2pτ) ]] + 0 + F_3[τ,τ_0] , Δ_12 = λ^2/16π^2 H^2 p^3 τ^4[ 2 p τ + Ci(-2pτ) [ sin(2pτ) - pτ (2 cos(2pτ) + pτsin(2pτ)) ] + Si(2pτ) [ (-1+p^2 τ^2) cos(2pτ) - 2pτsin(2pτ) ] ] + F_4 [τ,τ_0] , where F_i are the expressions coming from the lower limit of the integrals. As for the upper limits, Analytical expressions are available for these functions, namely F_1 (τ, τ_0) ≈λ^2/16π H^2 p τ^3[ sin (2pτ) - pτ (2 + cos(2pτ)) ] + O (τ_0^-1) , F_2 (τ, τ_0) ≈λ^2/16π H^2 p τ^3[ -2 + cos(2pτ) + pτsin(2pτ) ] + O (τ_0^-1) , F_3 (τ,τ_0) ≈λ^2/16π H^2 p^3 τ^4[ p τcos(pτ) - sin(pτ) ] [ cos(pτ) + p τsin(pτ) ] + O (τ_0^-1) , F_4 (τ, τ_0) ≈λ^2/32 π H^2 p^3 τ^4[ 2(1+ p^2 τ^2) - (1-p^2 τ^2) cos(2pτ) - 2 p τsin(2pτ) ] + O (τ_0^-1) , where we have isolated the nonzero parts in the limit τ_0 → -∞. Notice how even after doing this, there is a surviving contribution at late times, which we should now check to see if they are dominant over the others. Expanding these functions in the limit -pτ≪ 1, we get: F_1 (τ) ≈ - λ^2/16π H^2 τ^2 , F_2 (τ) ≈ - λ^2/16π H^2 p τ^3 , F_3 (τ) ≈ -λ^2/48π H^2 τ , F_4 (τ) ≈λ^2/32π H^2 p^3 τ^4 . Compare this to the non-spurious late-time behaviour of the master equation coefficients, given by D_11 ≈λ^2 p/8π^2 H^2 τ + λ^2/4π H^2 τ^2 , Δ_11 ≈λ^2/4π^2 H^2 τ^2 [1 - ln(-2pτ) + ln (2pϵ)] , D_12 ≈λ^2/16π^2 H^2 pτ^2 , Δ_12 ≈λ^2/72π^2 H^2 τ [-7 + 3γ_E + 3 ln(-2pτ)] . Notice how the spurious terms are at least comparable to these expressions, with only the spurious part of D_12 being subdominant. Fig. <ref> presents the solution of the transport equation for Σ_11, comparing cases with and without the inclusion of spurious terms. It is immediately noticeable that when spurious terms are excluded, the solutions increase at a lower rate, a signal of resummation. Further, for the time range considered, the solutions obtained by including spurious terms look insensitive to the choice of λ, as illustrated in Fig. <ref>. § ACKNOWLEDGEMENTS SB thanks Drazen Glavan for interesting discussions regarding secular divergences from non-local terms that spurred this work. The authors also thank Thomas Colas for numerous discussions about open quantum systems in cosmology. SB is supported in part by the Higgs Fellowship and by the UK Science and Technology Facilities Council (STFC) Consolidated Grant “Particle Physics at the Higgs Centre”. JCF is supported by the STFC under grant ST/X001040/1. XL is supported in part by the Program of China Scholarship Council (Grant No. 202208170014). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
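To make the late-time magnitude comparison in the Appendix concrete, the short sketch below evaluates the spurious terms F_1, ..., F_4 against the non-spurious coefficients they accompany (D_11, Δ_11, D_12 and Δ_12, in that order), using the expansions quoted above. This is only an illustration: the flattened fractions are read with everything after the slash in the denominator (e.g. λ^2/16π H^2 τ^2 as λ^2/(16π H^2 τ^2)), and the values of λ, H, p and the regulator ϵ are arbitrary choices rather than values from the paper.

```python
import numpy as np

# Illustrative parameter choices (assumptions, not values taken from the paper)
lam, H, p, eps = 1e-2, 1.0, 1.0, 1e-3
gE = np.euler_gamma

def spurious(tau):
    """Late-time expansions of F_1..F_4 quoted in the Appendix."""
    F1 = -lam**2 / (16 * np.pi * H**2 * tau**2)
    F2 = -lam**2 / (16 * np.pi * H**2 * p * tau**3)
    F3 = -lam**2 / (48 * np.pi * H**2 * tau)
    F4 =  lam**2 / (32 * np.pi * H**2 * p**3 * tau**4)
    return np.array([F1, F2, F3, F4])

def nonspurious(tau):
    """Late-time behaviour of the coefficients each F_i accompanies: D_11, Delta_11, D_12, Delta_12."""
    D11 = lam**2 * p / (8 * np.pi**2 * H**2 * tau) + lam**2 / (4 * np.pi * H**2 * tau**2)
    d11 = lam**2 / (4 * np.pi**2 * H**2 * tau**2) * (1 - np.log(-2 * p * tau) + np.log(2 * p * eps))
    D12 = lam**2 / (16 * np.pi**2 * H**2 * p * tau**2)
    d12 = lam**2 / (72 * np.pi**2 * H**2 * tau) * (-7 + 3 * gE + 3 * np.log(-2 * p * tau))
    return np.array([D11, d11, D12, d12])

labels = ["F1/D11", "F2/Delta11", "F3/D12", "F4/Delta12"]
for mptau in (1e-1, 1e-2, 1e-3):                 # late times correspond to -p*tau -> 0
    tau = -mptau / p
    ratios = np.abs(spurious(tau)) / np.abs(nonspurious(tau))
    print(f"-p*tau = {mptau:g}:",
          ", ".join(f"{l} = {r:.3g}" for l, r in zip(labels, ratios)))
```

With these expansions only the F_3/D_12 ratio shrinks as -pτ decreases, while the others remain of order one or grow, consistent with the statement above that only the spurious part of D_12 is subdominant.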
http://arxiv.org/abs/2407.13300v1
20240718090549
Robust ASR Error Correction with Conservative Data Filtering
[ "Takuma Udagawa", "Masayuki Suzuki", "Masayasu Muraoka", "Gakuto Kurata" ]
cs.CL
[ "cs.CL", "eess.AS" ]
§ ABSTRACT Error correction (EC) based on large language models is an emerging technology to enhance the performance of automatic speech recognition (ASR) systems. Generally, training data for EC are collected by automatically pairing a large set of ASR hypotheses (as sources) and their gold references (as targets). However, the quality of such pairs is not guaranteed, and we observed various types of noise which can make the EC models brittle, e.g. inducing overcorrection in out-of-domain (OOD) settings. In this work, we propose two fundamental criteria that EC training data should satisfy: namely, EC targets should (1) improve linguistic acceptability over sources and (2) be inferable from the available context (e.g. source phonemes). Through these criteria, we identify low-quality EC pairs and train the models not to make any correction in such cases, the process we refer to as conservative data filtering. In our experiments, we focus on Japanese ASR using a strong Conformer-CTC as the baseline and finetune Japanese LLMs for EC. 
Through our evaluation on a suite of 21 internal benchmarks, we demonstrate that our approach can significantly reduce overcorrection and improve both the accuracy and quality of ASR results in the challenging OOD settings. § INTRODUCTION Automatic speech recognition (ASR) is the task of transcribing human speech into readable text, which is of practical use in various applications. In contrast to the traditional hybrid approach <cit.>, modern ASR systems are trained in an end-to-end manner using a large parallel corpus of acoustic speech paired with gold transcriptions <cit.>. Despite their huge success, end-to-end ASR systems have limited linguistic knowledge due to the difficulty of leveraging unpaired text-only data which exist in abundance <cit.>. Error correction (EC) is an effective strategy to correct linguistic errors produced by such ASR systems <cit.>. Recently, large language models (LLMs) pretrained on massive text-only data have shown promising results for this purpose <cit.>. While several works explore the zero-shot or in-context learning capability of LLMs <cit.>, finetuning LLMs with sufficient EC training data remains critical to impart the knowledge of ASR-specific error patterns and desired corrections <cit.> Generally, training data for EC are collected by automatically pairing the ASR hypothesis (source) and its gold transcription (target), and the task is formulated as sequence transduction from the source to target <cit.>. However, the quality of such pairs is not guaranteed: in fact, we observed various types of noise which require incorrect, unnecessary, or uninferable corrections that are unreasonable to be predicted from the source. We show some illustrative examples in Table <ref>. Training EC models on such noisy data can amplify overcorrection, which is a typical problem in current EC <cit.>. However, existing works largely overlook the existence of such noise and apply minimal data filtering, e.g. simply discard pairs with large edit distance <cit.>. In this study, we propose two fundamental criteria that EC training data should satisfy in general. Specifically, we ensure that EC targets * improve linguistic acceptability over sources * are inferable from the available context (e.g. source phonemes) Based on these criteria, we identify low-quality EC pairs and train the models to avoid making any correction on them. Such conservative behavior is often crucial to reduce overcorrection and improve robustness, esp. in the out-of-domain (OOD) settings <cit.>. The overall flow of our data filtering strategy is shown in Figure <ref>. In our experiments, we focus on Japanese ASR using an internal Conformer-CTC as the baseline <cit.>. For EC, we finetune opensource Japanese LLMs, namely Swallow-Mistral 7B[<https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1>] and Sarashina-2 7B[<https://huggingface.co/sbintuitions/sarashina2-7b>], and evaluate the performance on 21 internal benchmarks comprised of various domains. Through our experiments, we confirm that our approach can significantly reduce overcorrection and robustly improve ASR results in the most challenging OOD settings. § RELATED WORK In the existing literature, EC primarily focuses on the in-domain setup where models are trained and evaluated over the same domain <cit.>. Recently, <cit.> proposed a low-resource OOD setup where EC models are finetuned on a limited amount of target domain data to generalize beyond in-domain data. 
However, target domains of EC are conceptually broad or even open-ended, so it is desirable that EC models work reliably in any target domain without prior knowledge or finetuning. In this study, we focus on the most challenging zero-resource OOD setup to develop general-purpose EC models which work out of the box in a variety of domains. Despite the recent progress, overcorrection remains a major challenge in EC, esp. in the OOD setup. To alleviate this issue, constrained decoding <cit.> restricts or biases the correction towards retaining the original ASR hypotheses. <cit.> use a representative data source and partially train the models to copy the input to induce conservative behavior. In complementary to their approach, we focus on the quality of EC data and apply sophisticated data filtering, which is a novel aspect of our approach that works much more effectively than existing filtering based on edit distance <cit.>. Typically, ASR errors originate from confusing phonetically similar words and phrases. Therefore, supplementing EC models with phonetic/acoustic information can help improve their performance <cit.>. In this study, we use the source phonemes as an additional input, which can be easily handled by the text-based LLMs. When available, the full N-best hypotheses can be used as input to provide richer clues on where the ASR systems are confused <cit.>. However, for both simplicity and computational efficiency, we only use the 1-best hypothesis (i.e. top ASR prediction) in our experiments. § METHODS EC can be formulated as a sequence transduction task from the ASR hypothesis (source) to the gold reference (target). Formally, let (W^S, W^T) denote the source and target sequence pair. In the simplest setting, the EC model is trained to estimate p_ EC (W^T | W^S) with the expectation of transforming an error-prone source into a clean target. In this study, we also incorporate the source phonemes W^S as an additional input, in which case the EC model estimates p_ EC (W^T | W^S, W^S). Source phonemes are obtained from our ASR system (<ref>) and represented in hiragana, one of the Japanese syllabaries, which can be easily consumed by Japanese LLMs. Below is an example: * W^S: また海属性に関しては W^S: [mata kai zokusee: ni kanshitewa] W^S: (also in terms of sea attribute) * W^S: また かい ぞくせー に かんしてわ Generally, training data of EC can be collected at scale by automatically pairing the hypotheses and gold references in the ASR system's training data.[Although the ASR systems are directly trained on these datasets, they usually make sufficient errors for EC models to learn from. One can virtually increase the amount of errors through noise injection <cit.> or data partitioning to avoid training on each partition <cit.>. ] However, not all source-target pairs are suitable for training EC models, as we observed various types of noise (illustrated in Table <ref>). To address this issue, we propose two fundamental criteria that high-quality EC pairs should satisfy. Criteria 1: EC targets should improve linguistic acceptability over sources. The main objective of EC is to resolve linguistic errors in the ASR system's predictions and improve linguistic acceptability. While the gold reference usually contains cleaner text, this is not always the case, e.g. due to speaker disfluency in spontaneous speech or noisy transcriptions. In addition, Japanese is a language with rich orthographic variation where multiple valid spellings exist <cit.>. 
For instance, the correction is not necessary if the source transcribes a bottle as 瓶 [bin] while the target transcribes as ビン [bin], since both spellings are equally acceptable. To improve robustness, EC models should only focus on apparent mistakes and resolve them accurately. One simple way to express this criterion is based on the following equation: p(W^T)/p(W^S)≥ c_1 Here, p(W^S) and p(W^T) denote the likelihoods of the source and target, which can be computed using any language model. In this study, we simply use the base Japanese LLM. c_1 denotes the threshold, set to 1 by default, which can control the strength of the filter. Intuitively, (W^S, W^T) that do not satisfy this criterion do not sufficiently improve the linguistic acceptability, indicating the correction is incorrect or unnecessary. Criteria 2: EC targets should be inferable from the available context. Existing works assume that EC targets are generally inferable from the source. However, this is not always the case: in fact, expert evaluation revealed that about one-third of the errors cannot be corrected from the source alone <cit.>. This is mainly attributed to the large phonetic discrepancy between the source and target, e.g. caused by environmental noise or incapability of the ASR system. A robust EC model should only make the correction when it is inferable from the available context. To express this criterion, we quantify the degree of inferability from the source phonemes using the following equation: p_EC4pt(W^T | W^S)/p_EC4pt(W^S | W^S)≥ c_2 Here, p_ EC is a baseline EC model trained only using the source phonemes as input. In this study, we finetune the base Japanese LLM following the procedure described in <ref>. Again, c_2 denotes the threshold which can be set to 1 by default. Intuitively, (W^S, W^T) that do not satisfy this criterion cannot be easily inferred from the available context, namely source phonemes in our case. It is worth noting that edit distance is not a suitable measure of inferability. For instance, the uninferable example in Table <ref> has a relatively small edit distance but is very difficult to be inferred. In contrast, the following example is quite dissimilar in terms of edit distance but can be more naturally inferred from the source phonemes: * W^S: そうか検出で [souka kensyutsu de] * W^T: 相関係数で [soukan keisuu de] Based on the above criteria (C1 and C2), we identify low-quality EC pairs and train the models to avoid making any correction on them by simply replacing the target with source (W^T → W^S): see Figure <ref> for an illustration. We found this approach more effective than discarding the noisy pairs, since the model is explicitly trained to be conservative on noisy or otherwise ambiguous examples. Note that both criteria are defined based on the likelihood ratio between the source and target (eq. <ref>, <ref>). In Figure <ref>, we plot the distribution of the (log) likelihood ratio for each criterion in our training data, using Swallow-Mistral 7B as the LLM.[Statistics based on Sarashina-2 7B are provided in Appendix <ref>, where we observed similar results.] We can verify that a non-negligible portion of the pairs do not satisfy the criteria, suggesting noisy pairs are prevalent in EC training data. Out of the whole training data, our ASR baseline predicts the exact gold reference (i.e. W^S = W^T) in about 34% of the cases. Therefore, the EC model effectively learns to make a correction (i.e. W^S ≠W^T) in only 66% of the cases. 
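To illustrate how the two criteria and the conservative replacement described in this section could be scripted, here is a minimal sketch in Python using the Hugging Face transformers API. The checkpoint loading, the way the phoneme string is supplied as a plain-text prompt to the phoneme-only EC scorer, and the absence of any length normalization are assumptions made for illustration; the paper's actual setup (Swallow-Mistral 7B or Sarashina-2 7B as the base LM, and a finetuned phoneme-only EC model) is described in the surrounding text. The default thresholds c_1 = c_2 = 1 correspond to 0 in log space.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_logprob(model, tokenizer, text, prompt=""):
    """Sum of token log-probabilities of `text`, optionally conditioned on `prompt`."""
    ids = tokenizer(prompt + text, return_tensors="pt").input_ids
    n_prompt = tokenizer(prompt, return_tensors="pt").input_ids.shape[1] if prompt else 0
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)        # position i predicts token i+1
    token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp[:, max(n_prompt - 1, 0):].sum().item()

def is_clean_pair(src, tgt, phonemes, base_lm, base_tok, ec_lm, ec_tok,
                  log_c1=0.0, log_c2=0.0):
    """C1: target more linguistically acceptable than source under the base LM.
       C2: target inferable from the source phonemes under the phoneme-only EC model."""
    c1 = (sequence_logprob(base_lm, base_tok, tgt)
          - sequence_logprob(base_lm, base_tok, src)) >= log_c1
    c2 = (sequence_logprob(ec_lm, ec_tok, tgt, prompt=phonemes)
          - sequence_logprob(ec_lm, ec_tok, src, prompt=phonemes)) >= log_c2
    return c1 and c2

def conservative_filter(pairs, **scorers):
    """For pairs failing either criterion, overwrite the target with the source
    (W^T -> W^S) instead of discarding the pair, as described above."""
    return [(src, tgt if is_clean_pair(src, tgt, ph, **scorers) else src, ph)
            for (src, tgt, ph) in pairs]

# Usage sketch (models loaded elsewhere, e.g. AutoModelForCausalLM.from_pretrained(...)):
# filtered = conservative_filter(train_pairs, base_lm=base, base_tok=base_tok,
#                                ec_lm=phoneme_ec, ec_tok=phoneme_ec_tok)
```

Keeping failed pairs with the target overwritten by the source, rather than discarding them, is what explicitly trains the model to copy its input in noisy or ambiguous cases.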
Of these effective pairs, 34% are classified as noisy based on our C1 filter, 33% based on C2 filter, and 42% when combined. While this results in even fewer examples to learn from, we can expect the model to focus on clearer errors and improve OOD robustness. Our approach is also in line with the principle that data quality can be more important than quantity for LLM alignment <cit.>. § EXPERIMENTAL SETUP ASR System For the ASR baseline, we use an internal Conformer-CTC developed for commercial use cases. The acoustic model is a CTC <cit.> with 240-dimensional logmel-derived features every 40 milliseconds as input, consisting of 10 conformer layers <cit.>, followed by an output layer of 42 Japanese phonemes including the blank symbol. For inference, a static graph for graph decoding is created using a word n-gram model and a dictionary representing the mapping between words (W^S) and their phoneme sequences (W^S). In total, our training data consists of 8000 hours of transcribed speech with little or no overlap between our benchmarking domains. EC Model For EC, we finetune two Japanese LLMs, namely Swallow-Mistral 7B and a more recent Sarashina-2 7B, on a subset of the ASR training data ensuring a 1:1 mixture of read and spontaneous speech. For finetuning, we use LoRA <cit.> with rank r=32 and scaling factor α=16. The effective batch size is set to 128 on a single A100 GPU, and the learning rate is 5e–4 annealed with a cosine scheduler. All EC models are trained for a total of 1000 steps, since we observed more steps led to overfitting in the OOD setup. For inference, we use greedy decoding, which we found to be efficient yet effective. For C2 filtering, we train the phoneme-based EC model to predict the target W^T only using the source phonemes W^S as input. Otherwise, models are trained with both the source phonemes W^S and the source hypothesis W^S as input. As an ablation study, we compare the performance of EC models without any data filtering (No Filter), with C1 and C2 filtering applied independently (C1/C2 Only), and with both filtering applied in combination (C1+C2). In addition, to confirm that noisy pairs are less effective for EC training, we also experiment with an inverse filtering of C1+C2, considering the noisy pairs as clean and vice versa (Inv. C1+C2). Evaluation We evaluate EC performance on 21 internal benchmarks comprised of various domains. Details of each benchmark are provided in Table <ref>. All EC models are evaluated in the zero-resource OOD setup without any domain adaptation. As for the evaluation metrics, we primarily focus on character error rate (CER^↓) which is standardly used for ASR. To quantify the degree of overcorrection, we also measure the percentage of source hypotheses altered after EC (%EC^↓). Finally, we measure the percentage of hypotheses where the linguistic acceptability is improved after EC (%LA^↑). To measure %LA, we compare the masked language modeling score <cit.> of the hypothesis before and after EC using Japanese DeBERTa V2 large[<https://huggingface.co/ku-nlp/deberta-v2-large-japanese>]. While DeBERTa is relatively small compared to recent LLMs, it can take into account the full (bidirectional) context of the hypothesis and effectively assess its linguistic acceptability <cit.>. § RESULTS AND DISCUSSION In Table <ref>, we report the results of our experiments using Swallow-Mistral 7B as the Japanese LLM. 
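For concreteness, the three metrics reported in these tables can be computed along the following lines. The CER here is a plain character-level Levenshtein distance normalized by reference length, %EC counts altered hypotheses, and %LA uses the masked-LM (pseudo-log-likelihood) scoring idea with the DeBERTa checkpoint named above; tokenization details and any score normalization used in the actual evaluation are assumptions in this sketch.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def pseudo_log_likelihood(model, tok, text: str) -> float:
    """Masked-LM score: mask each token in turn and sum the log-probability of the true token."""
    ids = tok(text, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, ids.shape[0] - 1):              # skip special tokens at both ends
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

def evaluate(refs, asr_hyps, ec_hyps, mlm_name="ku-nlp/deberta-v2-large-japanese"):
    """Corpus-level CER (character edits / reference characters), %EC and %LA."""
    tok = AutoTokenizer.from_pretrained(mlm_name)
    mlm = AutoModelForMaskedLM.from_pretrained(mlm_name).eval()
    cer = sum(levenshtein(r, h) for r, h in zip(refs, ec_hyps)) / max(sum(map(len, refs)), 1)
    pct_ec = 100 * sum(a != e for a, e in zip(asr_hyps, ec_hyps)) / len(asr_hyps)
    pct_la = 100 * sum(pseudo_log_likelihood(mlm, tok, e) > pseudo_log_likelihood(mlm, tok, a)
                       for a, e in zip(asr_hyps, ec_hyps)) / len(asr_hyps)
    return {"CER": 100 * cer, "%EC": pct_ec, "%LA": pct_la}
```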
Results based on the recent Sarashina-2 7B are provided in Appendix <ref>, where we observed similar trends with even better performance. Focusing on Swallow-Mistral 7B, there is no single approach which outperforms all others due to the diversity of the test sets. However, we can still draw several conclusions from the overall metrics. First, compared to the original ASR results, we can verify that EC without data filtering drastically worsens CER on average (11.84 → 12.51). This is mainly attributed to the overcorrection problem, as we can see a large portion of the hypotheses (43.0% on average) are altered by EC. Such aggressive behavior can be helpful in some occasions (e.g. Test 13) but generally too risky in the OOD setup, leading to modest or even severe performance degradation (e.g. in Test 1, 4, 16, 18, to count a few). In contrast, by applying our C1 filtering, we can substantially alleviate the degradation of CER (11.84 → 11.86) by almost halving the frequency of corrections (22.1% on average). This shows that EC can be kept more accurate and conservative by training on cleaner pairs which improve linguistic acceptability. Our C2 filtering also has a similar benefit and makes the EC model more robust in the OOD setup, outperforming the original ASR results in 57.1% (12/21) of the test cases. In addition, by combining our C1+C2 filtering, we can further cut down the frequency of corrections to 13.9% on average. Through this conservative behavior, we could significantly improve the OOD robustness of EC and reduce the original CER in 71.4% (15/21) of the test sets. This result demonstrates that both C1 and C2 filters help EC focus on clear and fixable ASR errors whilst ignoring more controversial ones. To verify that clean (rather than noisy) portions of the data contribute to this improvement, we also experimented with the inverse filtering of C1+C2. Generally, we confirmed that inverse filtering worsens CER on average (11.84 → 12.13) and only improves upon the original ASR in 28.6% (6/21) of the test sets. Therefore, noisy pairs are much less effective for accurate EC. While the frequency of correction is drastically suppressed (8.2% on average), this is largely attributed to the difficulty of learning from noisy examples and overlooking clear errors. In a few cases (e.g. Test 5), we found inverse filtering to be quite competitive, which suggests that noisy pairs still include useful examples for some domains. We expect that our filtering can be improved for such domains by appropriately tuning the thresholds (e.g. lowering c_1 and c_2) to include useful pairs of borderline quality. Finally, in terms of the linguistic acceptability (%LA), we generally see improvement through EC: this indicates that EC is at least successful in resolving linguistic errors and improving ASR quality, even if by deviating from the ground truth <cit.>. Naturally, our C1 filtering consistently strengthens this desirable property by explicitly taking this criterion into account (eq. <ref>). As an additional experiment, we also evaluated the results of EC with data filtering based on maximum edit distance <cit.>. In this approach, EC pairs with normalized edit distance above a certain threshold are simply discarded from the training data.[Before computing edit distance, we normalized source and target texts by converting them into hiragana using pykakasi: <https://github.com/miurahr/pykakasi>.] 
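The edit-distance baseline used for this comparison amounts to only a few lines. In the sketch below, normalization by the longer of the two strings and the placeholder `normalize` hook standing in for the hiragana conversion (pykakasi in the text) are assumptions about details not fully specified here; the thresholds discussed next plug directly into the `threshold` argument.

```python
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def normalized_edit_distance(src: str, tgt: str) -> float:
    return levenshtein(src, tgt) / max(len(src), len(tgt), 1)

def edit_distance_filter(pairs, threshold=0.5, normalize=lambda s: s):
    """Drop pairs whose normalized edit distance exceeds the threshold; `normalize`
    stands in for the kana-conversion step applied before comparison."""
    return [(src, tgt) for (src, tgt) in pairs
            if normalized_edit_distance(normalize(src), normalize(tgt)) <= threshold]
```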
We chose the commonly used thresholds of 0.5 and 0.25, which discard 1% and 5% of the whole training data, respectively. The results are shown in Table <ref>. We can confirm that filtering based on edit distance fails to improve CER and hardly reduces %EC. This demonstrates that such simple filtering is insufficient to improve the robustness of EC in the challenging OOD setup, regardless of its widespread usage. Finally, to verify that our claims hold for a different Japanese LLM, we also experimented using Sarashina-2 7B. As discussed in Appendix <ref>, we can draw similar conclusions with even better performance, achieving an average CER of 11.41 in the best case and outperforming the original ASR results in 85.7% (18/21) of the test sets. Therefore, our approach is generalizable using other LLMs and we can expect to further improve performance by leveraging more powerful LLMs. § CONCLUSION EC is an emerging technology to boost the performance of ASR by harnessing the power of LLMs. However, current EC remains brittle, often degrading performance due to overcorrection in the OOD setup, which hinders its practical application. In this study, we first focused on the quality of EC training data and proposed a method to identify noisy data based on two fundamental criteria. Second, we revealed that EC data contains a considerable proportion of such noisy pairs, which can be effectively handled through our conservative data filtering. Finally, we demonstrated that our approach can significantly alleviate the overcorrection problem and improve the robustness of EC in the challenging zero-resource OOD setup. In future work, we also plan to control for other important factors of data quality, such as diversity and representativeness <cit.>, to further improve the robustness of EC. Overall, we expect our approach to be a foundational step towards developing general-purpose EC models applicable in any domain of interest, facilitating the utilization of LLM technology in the real-world scenarios. § BENCHMARK DETAILS In Table <ref>, we provide a brief description of the benchmarks used in our experiments. To evaluate ASR from multiple aspects, our test sets encompass a wide range of domains with various difficulties and characteristics, which in turn introduces diverse ASR errors that need to be corrected through EC. While our training data inevitably contains some data similar to the benchmarking domains (e.g. daily conversation and presentations), we consider the overlap to be sufficiently small to regard all of them as OOD.[In fact, we confirmed that EC performs much better on in-domain data, i.e. unseen samples from the ASR system's training data, and keeps improving with more training steps.] § EXPERIMENTS BASED ON SARASHINA-2 7B While Swallow-Mistral 7B is a continuously pretrained model built upon Mistral 7B <cit.>, Sarashina-2 7B is a recently opensourced Japanese LLM pretrained from scratch on a mixture of Japanese and English texts. To verify that our conclusions are generalizable to different LLMs, we also run the whole experimental pipeline (<ref>-<ref>) using Sarashina-2 7B. In Figure <ref>, we plot the distribution of the log likelihood ratio for each criterion in our training data based on Sarashina-2 7B. Out of the effective pairs (where W^S ≠W^T), 33% are classified as noisy based on the C1 filter, 49% based on C2 filter, and 63% when combined. While the C2 filter removes a larger portion of the data, we generally observe similar trends as Swallow-Mistral 7B. 
In Table <ref>, we report the results of our experiments using Sarashina-2 7B. Similar to Swallow-Mistral 7B, we found that EC without data filtering fails to improve CER on average (11.84 → 11.84) due to overcorrection. By applying our C1 filtering, we could significantly improve the average CER (11.84 → 11.41) whilst reducing the frequency of corrections. Our C2 filtering has a similar benefit, and by combining both filters, we could significantly mitigate overcorrection and improve the original CER in nearly all (85.7%; 18/21) of the test cases. As in the case of Swallow-Mistral 7B, we found that inverse filtering generally has a negative effect on EC performance. In Table <ref>, we show the results of edit distance based filtering using Sarashina-2 7B. Again, we can confirm that simple filtering is much less effective compared to our sophisticated filtering which takes into account the pair-wise data quality and explicitly induces conservative behavior.
http://arxiv.org/abs/2407.13626v1
20240718160007
Managing Risk using Rolling Forecasts in Energy-Limited and Stochastic Energy Systems
[ "Thomas Mortimer", "Robert Mieth" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Abstract— We study risk-aware linear policy approximations for the optimal operation of an energy system with stochastic wind power, storage, and limited fuel. The resulting problem is a sequential decision-making problem with rolling forecasts. In addition to a risk-neutral objective, this paper formulates two risk-aware objectives that control the conditional value-at-risk of system cost and the buffered probability of exceeding a predefined threshold of unserved load. The resulting policy uses a parameter-modified cost function approximation that reduces the computational load compared to the direct inclusion of those risk measures in the problem objective. We demonstrate our method on a numerical case study. § INTRODUCTION In energy systems that largely rely on electric power generation from wind and solar, security of supply cannot be ensured only by matching installed generation capacity with peak demand plus some reserve <cit.>. Instead, system operators require new sets of tools alongside synergistic energy resources (e.g., battery or power-to-X storage technology) that can translate variable and uncertain renewable power production into reliable and continuous supply of demand <cit.>. This challenge is amplified in energy-limited systems where falling back to a virtually infinite source of fuel-based generation <cit.> or supply from a central grid is not available <cit.>. As a result, system operators require risk-aware decision-making tools that guarantee pre-defined reliability targets (e.g., Loss-of-Load Probability–LOLP) while remaining cost-efficient. Resulting operational decisions on controllable resources, e.g., charging or discharging storage, using available fuel, or calling upon flexible demand resources, depend on the real-time production from wind and solar, forecasts of their production, and the statistical properties of these forecasts. The energy- and use-limited nature of these controllable resources in combination with wind and solar forecast uncertainty creates a complex sequential decision-making problem that quickly becomes intractable in practical settings <cit.>. Moreover, it is not straightforward to translate the day-to-day usage of the portfolio of available controllable resources into effective capacity values that can be used for system planning purposes <cit.>. This paper takes a step towards addressing these challenges by building on results by Ghadimi and Powell <cit.>, who present a parameter-modified cost function approximation for the sequential decision-making problem of an energy storage model with rolling forecasts. In particular, we modify the model in <cit.> such that we can directly manage system risk by enforcing Conditional Value-at-Risk (CVaR) or Buffered Probability-of-Exceedance (BPoE) constraints. This allows the system operator to directly enforce pre-defined reliability targets. The resulting decision policy takes the form of a deterministic look-ahead model with an offline-tuned discount parameter that offers insights on how wind and solar production time series should be adjusted in planning simulations to reflect the reality of system operations. §.§ Related Literature Besides <cit.>, parameter-modified cost function approximation for an energy storage model has been investigated in <cit.>, which propose alternative tuning strategies and do not include risk-awareness. 
In <cit.> an online learning strategy is used instead of training the parameter in a simulator and in <cit.> a transfer function is used to obtain the parameter values. Managing risk in energy-limited and stochastic energy systems has been studied extensively, e.g., in <cit.>. This paper differs from the other studies by including a rolling forecast and formulating the system as a sequential decision-making problem, enabling the possibility of investigating the evolution of the forecast over time. Other studies on risk-averse sequential decision-making in energy systems have been introduced in the form of risk-averse model predictive control (MPC) methods, e.g., in <cit.>. Relative to these works that manage risk using , this paper uses the cost function approximation approach from <cit.> and enforces BPoE as an additional means of managing risk on technical constraints. Managing risk using has been highlighted in other engineering and financial domains <cit.>. § ENERGY SYSTEM MODEL We study a risk-averse decision-maker that continuously operates an energy system with uncertain power injections from a wind generator and fluctuating energy and fuel prices. Figure <ref> shows a schematic of the energy system model. The system operator manages stochastic and energy-limited resources with the objective of meeting electricity demand in a cost-effective and reliable manner. We model the decision-making process through discrete time steps indexed by t over an operational time horizon of T steps. The main energy source is wind power. In addition, the operator has access to an electric battery system, fuel-based generation, and fuel storage. We assume that fuel can only be acquired at certain time steps forcing the operator to plan and purchase the required amount for current and future time periods. This can for example represent an islanded microgrid with a gas or hydrogen storage that receives fuel shipments at a set schedule <cit.>. In this paper we assume hydrogen powering a fuel cell but note that this is not required for the model. For each t we define the vector of decision variables as , where: * x_t^wd: Energy from wind satisfying load; * x_t^wr: Energy from wind saved in the battery; * x_t^wx: Curtailed wind energy. * x_t^rd: Energy from battery satisfying load; * x_t^hd: Energy from the fuel cell satisfying the load; * x_t^hr: Energy from the fuel cell stored in the battery; * x_t^h: Amount of hydrogen purchased. Further, in each time step t decision x_t is constrained by: x_t^wr + x_t^wd + x_t^wx ≤ E_t x_t^wd + β^d x_t^rd + β^h x_t^hd ≤ D_t x_t^h ≤ T^H_t R^H̅ x_t^rd ≤ R^E_t x_t^hr + x_t^hd ≤ R^H_t x_t^h ≤ R^H̅ - R^H_t β^c (x_t^wr + β^h x_t^hr) - x_t^rd ≤ R^E̅ - R^E_t x_t^wr + β^h x_t^hr ≤γ^c x_t^rd ≤γ^d β^h( x_t^hr + x_t^hd) ≤γ^h R^H_t - x_t^hd - x_t^hr + x_t^h = R^H_t+1 R^E_t - x_t^rd + β^c (x_t^wr + x_t^hr) = R^E_t+1 x_t ≥ 0. [0] Constraint (<ref>) relates demand D_t to production from wind, the battery, and the fuel cell. Constraints (<ref>) and (<ref>) limit power production from the battery by the current state of charge R_t^E and the battery power rating γ^d, respectively. Constraints (<ref>) and (<ref>) limit the total power production of the fuel cell to the available energy stored as hydrogen R_t^H and its power rating γ^h, respectively subject to the fuel cell efficiency. Constraint (<ref>) limits the total power production from wind to the currently available wind power E_t. Hydrogen acquisition is limited by the remaining available storage capacity in Constraint (<ref>). 
In Constraint (<ref>) hydrogen acquisition is limited to a subset of timesteps defined by T^H∈{0,1}^T, i.e., T^H_t = 1 if hydrogen can be acquired in time step t and 0 otherwise. Battery charging and discharging is subject to the battery efficiency β^c and limited by remaining available storage capacity in Constraint (<ref>) and the battery power rating in Constraint (<ref>). Charging of the battery storage is also limited by the charging capacity in Constraint (<ref>). Constraint (<ref>) enforces non-negativity for all decision variables. Constraints (<ref>) and (<ref>) are the time-coupled storage constraints. We define S_t as the state variable containing all the information needed to solve a problem objective with respect to constraints (<ref>). The initial state at t=0 is predefined as S_0. At each t, state S_t is defined by: * D_t: Electricity demand at time t. * E_t: Wind power at time t * P^H_t: Cost of hydrogen at time t. * R^E_t: The level of energy in the battery satisfying R^E_t ∈ [0, R^E̅], where R^E̅ > 0 represents the capacity. * R^H_t: Level of energy in the hydrogen storage satisfying R^H_t ∈ [0, R^H̅], where represents the capacity. Additional parameters are: * β^c: Battery charging efficiency. * β^d: Battery discharging efficiency. * β^h: Fuel cell generation efficiency. * γ^c: Battery charging limit. * γ^d: Battery discharging limit. * γ^h: Fuel cell generation limit. * C^P: Penalty cost for unsatisfied load. * C^W: Penalty cost for curtailed wind. Available wind power production is uncertain for all timesteps {t+1, T}, demand and hydrogen are assumed to be known for ease of exposition in this paper. Power demand and hydrogen prices are assumed to be known over the time horizon. The system operator minimizes its cost function: C_t(S_t,x_t) = C^P L_t(S_t,x_t)+C^W x_t^wx+ P_t^h x_t^h, where mismatch between demand and supply is given by L_t(S_t,x_t) = D_t - (x^wd_t + β^d x^rd_t + β^h x^hd_t). Each unit of demand mismatch L_t(S_t,x_t) incurs cost C^P. (Note the non-negativity condition (<ref>).) Cost C^W penalizes wind curtailment and the final term in (<ref>) captures the cost of purchasing fuel x_t^h at price P_t^h. § RISK-AWARE DECISION-MAKING For any future t'>t, cost function (<ref>) depends on uncertain system states S_t(ω) where ω∈Ω is a random variable. To account for this uncertainty, decision-makers instead minimize expected cost 𝔼[∑_t'=t^T(C_t'(S_t'(ω), x_t'))], ∀ t', e.g., as in <cit.>. This risk-neutral approach, however, is often not suitable for practical applications where decision-makers are risk-averse. This section briefly discusses options to handle risk. §.§ Risk Measure: Conditional Value-at-Risk Conditional Value-at-Risk () is a popular risk metric in energy modelling due to its convex properties <cit.>. is defined via the Value-at-Risk (VaR) q_α(X), which, for a given α∈ [0,1], returns the minimum value z that a random variable X will not exceed with probability α: _α (X):= q_α(X) = min{z |ℙ(X ≤ d) ≥α}. VaR has useful applications for chance-constrained programs under some conditions, e.g., <cit.>, but typically results in non-convex formulations. (q̅_α), on the other hand, captures the expected value of X under the condition that X exceeds q_α <cit.>: _α(X) := q̅_α(X) = 𝔼[ X | X > q_α(X) ]. For a linear cost function and assuming discrete outcomes (scenarios) X_ω, q̅_α(X) can be minimized in a tractable linear program <cit.>: min_x,z ∈ℝ,y_ω≥ 0 z + 1/1-α∑_ω = 1^Ω1/|Ω| y_ω s.t. C(X_ω,x) - z - y_ω≤ 0 ∀ω∈Ω. Fig. 
<ref> illustrates the relation between q_α and q̅_α. allows to minimize the expected cost of the 100α-percent worst cases. In many engineering applications, however, risk-targets are not defined in terms of severity (probability times outcome), but whether a security threshold is maintained with high probability. We discuss this in the following section. §.§ Risk Measure: Buffered Probability-of-Exceedance Analogous to the relationship between VaR and , Buffered Probability-of-Exceedance () is defined via the Probability-of-Exceedance (PoE, or reliability), which quantifies the probability mass of a random variable X exceeding a given threshold ζ∈ℝ (see also Fig. <ref>): _ζ := p_ζ(X) = ℙ[X > ζ]. PoE suffers the same mathematical shortcomings as VaR. As the counterpart to , overcomes these shortcomings and computes the risk-level α at which the is equal to the predefined threshold ζ <cit.>: p̅_ζ(X) = γ≥ 0min 𝔼[γ ( X - ζ) + 1 ]^+, where γ is an auxiliary variable and [·]^+ = max{·, 0}. Similarly to , minimizing over a set of scenarios X_ω, ω∈Ω can be written as the linear program <cit.>: x, γ≥ 0 , η_ω≥ 0min ∑_ω = 1^Ω1/Ωη_ω s.t. γ C(X_ω,x) - γζ+ 1 -η_ω≤ 0 ∀ω∈Ω. Term γ C(X_ω,x) in (<ref>) allows the convex reformulation <cit.>: (Cγ)(x) = γ C(x/γ) γ > 0 0 γ = 0, x = 0 + ∞ γ = 0, x ≠ 0. In contrast to , which has been connected to energy-based reliability metrics such as expected energy-not-served (EENS), e.g., in <cit.>, the properties of allow to incorporate predefined frequency-based reliability metrics, e.g., the “1 day in 10 years” rule <cit.> or loss-of-load probability. Hence, for a given ζ returns the probability that C does not exceed ζ for a given decision x, thus certifying reliability. § RISK-AWARE POLICIES FOR ENERGY SYSTEM MODEL We now seek a risk-aware decision policy to solve the energy model from Section <ref> as a sequential decision-making problem with a rolling wind power forecast. We define a policy as a function X(S_t) that returns a decision x_t given the current state S_t. Considering a rolling wind power forecast allows us to model a realistic decision-making process in which the operator only needs to commit to here-and-now decisions and can adjust look-ahead decisions with access to more accurate forecasts in the next time step. We also model a limited operating horizon H≤ T as it can be ineffective to make decisions for time periods too far into the future. Fig. <ref> illustrates the rolling forecast and horizon H. §.§ Stochastic risk-aware policies As a baseline, we consider a scenario-based risk-neutral look-ahead strategy (S-LA). The S-LA policy computes decisions that minimize cost at the current time step t and the expected cost for future time steps t<t'≤ t+H over a set of scenarios ω∈Ω. We denote all scenario-dependent variables with scenario index ω. The resulting S-LA policy is: X_t^ S-LA(S_t): min_x_t,x_ω t C_t(S_t,x_t) + 1/|Ω|∑^Ω_ω = 1∑^min(t+H,T)_t' = t+1 C_t'(S_t'(ω),x_t') s.t. t: (<ref>) ∀ t':(<ref>)-(<ref>), x_ω t'^wd + β^d x_t'^rd + x_t'^hd≤ D_t' ∀ω∈Ω x_t'^wr + x_ω t'^wd + x_ω t'^wx≤ f^E_ω t' ∀ω∈Ω x_t'^h≤ H^H_t R^H̅, [0] where f^E_ω t' is the available wind power at time t' in scenario ω (e.g., obtained from a probabilistic forecast). Constraints (<ref>), (<ref>) are the scenario-dependent counter-parts of (<ref>), (<ref>). Constraint (<ref>) alters (<ref>) such that only the time steps that allow hydrogen purchase within the horizon are considered. Next, we modify the risk-neutral S-LA strategy to become risk-aware. 
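Since the risk-aware policies introduced next reuse the CVaR and BPoE constructions above, the following sketch shows those two building blocks in isolation for a generic linear cost over equiprobable scenarios, written in Python with cvxpy purely for illustration. The cost and feasible-set data are synthetic, and the BPoE program is linearized with the substitution x̃ = γx, in the spirit of the perspective reformulation (<ref>); none of this reproduces the paper's actual implementation (which is formulated in the policy models that follow).

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_x, n_scen, alpha, zeta, demand = 3, 100, 0.9, 8.0, 6.0

# Illustrative scenario data: linear cost c_w(x) = Q[w] @ x over a box with a demand constraint
Q = rng.uniform(0.5, 2.0, size=(n_scen, n_x))
A = np.vstack([np.eye(n_x), -np.eye(n_x), -np.ones((1, n_x))])
b = np.concatenate([np.full(n_x, 10.0), np.zeros(n_x), [-demand]])  # 0 <= x <= 10, sum(x) >= demand

# CVaR_alpha minimization in epigraph form, as in the linear program above
x = cp.Variable(n_x)
z = cp.Variable()
y = cp.Variable(n_scen, nonneg=True)
cvar = cp.Problem(cp.Minimize(z + cp.sum(y) / ((1 - alpha) * n_scen)),
                  [Q @ x - z - y <= 0, A @ x <= b])
cvar.solve()
print("CVaR-optimal x:", np.round(x.value, 3), " CVaR:", round(cvar.value, 3))

# BPoE minimization for threshold zeta, linearized via x_tilde = gamma * x:
# gamma * c_w(x) = Q @ x_tilde, and A x <= b becomes A x_tilde <= gamma * b.
x_t = cp.Variable(n_x)
gam = cp.Variable(nonneg=True)
eta = cp.Variable(n_scen, nonneg=True)
bpoe = cp.Problem(cp.Minimize(cp.sum(eta) / n_scen),
                  [Q @ x_t - gam * zeta + 1 <= eta, A @ x_t <= gam * b])
bpoe.solve()
x_rec = x_t.value / gam.value if gam.value > 1e-9 else None
print("BPoE value:", round(bpoe.value, 3),
      " recovered x:", np.round(x_rec, 3) if x_rec is not None else "degenerate (gamma = 0)")
```

In the degenerate case γ = 0 the BPoE program returns the trivial value of one, which is why the original decision is only recovered when γ is strictly positive.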
The resulting risk-averse stochastic look-ahead (S-CVaR) approach computes decisions that minimize the expected cost in the 100α-percent worst case scenarios using CVaR (see Section <ref>): X_t^ S-CVaR(S_t): min_x_t,x_ω t,y_ω,z C_t(S_t,x_t) + z + 1/1 - α∑^Ω_ω = 11/|Ω|y_ω s.t. t: (<ref>) ∀ t': (<ref>)-(<ref>),(<ref>)-(<ref>) ∑^min(t+H,T)_t' = t+1 C_t'(S_t'(ω),x_t') - z - y_ω≤ 0 ∀ω∈Ω y_ω≥ 0 ∀ω∈Ω, [0] where we use auxiliary variables z, y_ω for the CVaR representation from (<ref>). Note that α is a parameter in S-CVaR that must be pre-defined by the system operator. Instead of defining α beforehand, an operator might be more interested in ensuring that a pre-defined load-not-served threshold is maintained with maximum probability. We define this threshold as γ. In this approach we minimize expected cost as in S-LA and additionally minimize of ∑_t`=t+1^min(t+H,T)L_t` with respect to γ. We denote this approach S-BPoE given by: X_t^ S-BPoE(S_t): min_x_t,x_ω t,η_ω,ζ C_t(S_t,x_t) + 1/|Ω|∑^Ω_ω = 1∑^min(t+H,T)_t' = t+1 C_t'(S_t'(ω),x_t') +M 1/|Ω|∑^Ω_ω = 1η_ω s.t. t: (<ref>) ∀ t':(<ref>)-(<ref>),(<ref>)-(<ref>), γ (∑^min(t+H,T)_t'= t+1L_t'(S_t'(ω),x_t')) - γζ + 1 ≤ 0 ∀ω∈Ω η_ω≥ 0 ∀ω∈Ω [0] We use a large scalar M to ensure that meeting the reliability target is prioritized over cost minimization. For large-scale problems and/or large numbers of scenarios, the stochastic approaches S-LA, S-CVaR, and S-BPoE can become computationally intractable. The following section introduces a parameter-modified cost function approximation that can still capture risk aversion but shifts computational load from decision-making to tuning the parameter. §.§ Parameter-modified cost function approximation policies The goal of this subsection is to find a single wind power scenario equivalent that performs well compared to the stochastic approaches from Section <ref> above. We write this deterministic look-ahead (D-LA) model as: X_t^ D-LA(S_t): min_x_t C_t(S_t,x_t) + ∑^min(t+H,T)_t' = t+1 C_t'(S_t'(ω),x_t') s.t. t: (<ref>) ∀ t': (<ref>)-(<ref>), x_t'^wr + x_t'^wd + x_t'^wx≤ b_t'(f^E_t', θ) [0] Model D-LA includes an additional parameter θ that modifies the wind power scenario f_t'^E to the upper bound b_t'(f_t'^E, θ) in (<ref>). Modification b_t'(f_t'^E, θ) can be chosen as: Constant: b_t'(f^E_t', θ) = θ f^E_t' All future forecast values are equally discounted. Look-up table: b_t'(f^E_t', θ) = θ_t'-t f^E_t' with a different θ_τ for each look-ahead period τ = 0,1,2,... with τ = t'-t. All future wind forecast values are discounted with an individual parameter depending on the look-ahead distance. Parameter θ needs to be tuned offline as we describe below. Choosing a single “Constant” parameter will reduce tuning effort and further simplify the model. The “Look-up table” approach, on the other hand, increases modelling fidelity but also computational efforts in both tuning and solving the model. We also refer to <cit.> for more discussion. §.§ Parameter tuning Our goal is to tune θ such that the resulting deterministic policy in D-LA achieves the goals of the risk-averse stochastic programs discussed in Section <ref>. We formalize the resulting parameter tuning problems corresponding to S-LA, S-CVaR, and S-BPoE, respectively, as: Expected cost: Parameter θ is tuned such that (<ref>) minimizes expected cost. θmin{ F^EC(θ):= 𝔼_ω[ F(θ,ω) ] = 𝔼[ ∑_t=0^T C_t(S_t(ω),X^ D-LA_t(S_t(ω) | θ)) | S_0 ] }. 
Risk-aware cost: Parameter θ is tuned such that (<ref>) minimize the 100α-percent worst-case cost outcomes: θmin{ F^CVaR(θ) := q̅_αω[ F(θ,ω) ] = 𝔼[ F(θ,ω) | F(θ,ω) > q_α(F(θ,ω)) ] }. Energy security: Parameter θ is tuned such that (<ref>) minimizes of unserved energy beyond ζ: θmin{ F^BPoE(θ) := p̅_ζ,ω[R(θ,ω) ] =ℙ[R(θ,ω) > z | 𝔼[R(θ,ω) | R(θ,ω) > z ]=ζ]}, where R(θ,ω)=𝔼[ ∑^T_t=0L_t(S_t(ω), X^ D-LA_t(S_t(ω) | θ)) | S_0 ]. The resulting parameter tuning problems(<ref>), (<ref>), and (<ref>) are possibly non-convex and non-smooth in θ, making the optimization problem hard to solve. We use the stochastic gradient descent approach proposed in <cit.> and shown in Algorithm <ref> to iteratively find close-to-optimal values for θ. Besides an initial value θ_0 and an initial gradient G̅^0, Algorithm <ref> requires hyperparameters {η_k}_k ≥ 1, {ψ_k}_k ≥ 1, {ϕ_k}_k ≥ 1 ∈ (0,1) that define gradient smoothing and learning rates. Iteration limit R is drawn from a predefined distribution P_R. § CASE STUDY WITH NUMERICAL RESULTS We investigate the policies derived in Section <ref> on a stylized case study of the energy system model discussed in Section <ref>. We assume daily decisions such that each time step t represents a day and T = 365. We use real load profiles from the transparency platform for Denmark's bidding zone DK2 <cit.>. The peak load over the considered year is D̅=1913 MW, which we use to dimension other parameters shown in Table <ref>. We set the interval for hydrogen acquisition to every 7 days starting with t=1. We use monthly historical gas prices in Denmark from <cit.> as a proxy for hydrogen prices. Cost of unserved load C^P is set to 1000 $/MW and of wind curtailment penalty C^W to 800 €/MW. We implement all models in Julia with the Gurobi 10.0.2 solver. All computations have been performed on a Macbook with Apple M1 chip and 8GB RAM. We assume that the operator has access to wind power forecasts. Actual wind power injection is driven by atmospheric phenomena that are well-represented by persistence models (i.e., assuming only small changes between time steps) for short forecast lead times <cit.>. For longer lead times forecast accuracy decreases. In our case study, we capture this through a martingale model of forecast evolution <cit.>: f^E_t+1 = f^E_t + ϵ_t+1 ∀ t = 0,...,H-1. where f^E_0 is a given initial value. We define the error term as ϵ_t+1∼𝒩(0, σ_ϵ,t^2) with σ_ϵ,t = ρ_E f^E_t where ρ_E=0.1 is a predefined parameter. §.§ Constant parameter tuning We first compare S-LA (<ref>), and S-CVaR (<ref>) the to D-LA (<ref>) with a constant discounting parameter tuned to reduce expected cost described in (<ref>) and no discount parameter, which is equalivalent to θ = 1. We use 100 forecast scenarios in each decision-making time step and evaluate the decision performance over 1000 out-of-sample scenarios. We itemize the results in Table <ref>. Using a grid search, a constant parameter of θ = 0.2 achieves minimal expected cost across 1000 training scenarios. Any θ < 1 indicates that wind forecast should be under-estimated during decision time to improve expected cost in the long run. Notably, the deterministic look-ahead policy significantly reduces computational time at only a 0.3% average cost increase. §.§ Look-up table parameter modification For the look-up table tuning of θ we use Algorithm <ref> with mini-batches size m_k = 10 and iteration limit N = 2000. We refer to <cit.> for notes on setting hyperparameters {η_k}, {ψ_k}, {ϕ_k}. 
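Before turning to the results, here is a compressed sketch of the simulation and offline tuning ingredients described in this section: the martingale forecast model with ρ_E = 0.1, the constant parameter modification b_t'(f, θ) = θ f, and a grid-search tuner of the kind used for the constant-θ case (the look-up table case instead uses the smoothed stochastic gradient method of Algorithm <ref>). The cost simulator is deliberately left as a placeholder, since it requires the full energy system model defined earlier; its interface here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
rho_E = 0.1   # forecast-error parameter from the case study

def forecast_path(f0, horizon):
    """Martingale model of forecast evolution: f_{t+1} = f_t + eps_{t+1},
    eps ~ N(0, (rho_E * f_t)^2); truncation at zero is a practical assumption."""
    f = [f0]
    for _ in range(horizon):
        f.append(max(f[-1] + rng.normal(0.0, rho_E * f[-1]), 0.0))
    return np.array(f)

def discounted_forecast(forecast, theta):
    """Constant parameter modification b_t'(f, theta) = theta * f; a look-up table
    variant would apply a separate theta_tau per look-ahead distance tau."""
    return np.asarray(theta) * np.asarray(forecast)

def tune_theta_grid(simulate_policy_cost, thetas, n_samples=100):
    """Offline tuning of a constant theta by grid search over sample-average cost.
    `simulate_policy_cost(theta, rng)` is a placeholder that should roll the D-LA
    policy forward over one sampled scenario and return the realized cost."""
    avg = {float(th): np.mean([simulate_policy_cost(th, rng) for _ in range(n_samples)])
           for th in thetas}
    best = min(avg, key=avg.get)
    return best, avg

# Example call (the cost simulator requires the full energy system model above):
# best_theta, table = tune_theta_grid(rolling_dla_cost, thetas=np.arange(0.1, 1.05, 0.1))
```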
Table <ref> presents the resulting θ that reduce expected cost as in (<ref>), 90%- as in (<ref>), and with threshold ζ = 7000 as in (<ref>). Each look-up table formulation performs best to its given goal as seen in Tab. <ref>. §.§ Influence of θ in reducing energy shortfalls Finally, we investigate how a constant parameter θ in X_t^ D-LA(θ) impacts the resulting of a given security threshold ζ. Fig. <ref> shows the , as per (<ref>), for 10 values of θ over the 1000 out-of-sample runs. For θ=1, i.e., operating the system under the assumption that the given forecast is the true wind-power injection, any of the given thresholds are exceeded with =1. With higher conservatism (θ < 1) is reduced depending on the target threshold ζ. § CONCLUSION This paper derives a risk-aware linear policy approximation for energy-limited and stochastic energy systems with rolling forecasts. In particular, the inclusion of the BPoE in the tuning of the parameter enables the system operator to certify that the implemented operation policy can achieve predefined reliability targets. There are two main avenues for future work. On the one hand, we will increase the model fidelity through other parameter modifications and by including several parameter modifications across several sources of uncertainty. On the other hand, we will investigate how the gained insights on parameter θ can be used as a means to discount wind and solar time series in deterministic investment models such that they avoid over- or under-estimation of available energy. IEEEtran
http://arxiv.org/abs/2407.12162v1
20240716204006
Estimate of water and hydroxyl abundance on asteroid (16) Psyche from JWST data
[ "Stephanie G. Jarmak", "Tracy M. Becker", "Charles E. Woodward", "Casey I. Honniball", "Andrew S. Rivkin", "Margaret M. McAdam", "Zoe A. Landsman", "Saverio Cambioni", "Thomas G. Müller", "Driss Takir", "Kurt D. Retherford", "Anicia Arredondo", "Linda T. Elkins-Tanton" ]
astro-ph.EP
[ "astro-ph.EP" ]
0000-0002-0786-7307]Stephanie G. Jarmak Southwest Research Institute San Antonio, TX, USA Harvard & Smithsonian | Center for Astrophysics Cambridge, MA, USA 0000-0002-1559-5954]Tracy M. Becker Southwest Research Institute San Antonio, TX, USA 0000-0001-6567-627X]Charles E. Woodward Minnesota Institute for Astrophysics, University of Minnesota Twin Cities Twin Cities, MN, USA 55455 0000-0001-8248-8991]Casey I. Honniball NASA Goddard Space Flight Center Greenbelt, MD, USA 0000-0002-9939-9976]Andrew S. Rivkin The John Hopkins University Applied Physics Laboratory Laurel, MD, USA 0000-0003-3356-1368]Margaret M. McAdam NASA AMES Research Center Mountainview, CA, USA 0000-0003-4980-1135]Zoe A. Landsman University of Central Florida Orlando, FL, USA 0000-0001-6294-4523]Saverio Cambioni Massachusetts Institute of Technology Cambridge, MA, USA 0000-0002-0717-0462]Thomas G. Müller Max-Planck Institute for Extraterresetrial Physics Garching, Germany 0000-0003-4942-2741]Driss Takir NASA Johnson Space Center Houston, TX, USA 0000-0001-9470-150X]Kurt D. Retherford Southwest Research Institute San Antonio, TX, USA 0000-0002-1706-6255]Anicia Arredondo Southwest Research Institute San Antonio, TX, USA 0000-0003-4008-1098]Linda T. Elkins-Tanton Arizona State University Tempe, AZ, USA § ABSTRACT Our understanding of Solar System evolution is closely tied to interpretations of asteroid composition, particularly the M-class asteroids. These asteroids were initially thought to be the exposed cores of differentiated planetesimals, a hypothesis based on their spectral similarity to iron meteorites. However, recent astronomical observations have revealed hydration on their surface through the detection of 3-μm absorption features associated with OH and potentially H2O. We present evidence of hydration due mainly to OH on asteroid (16) Psyche, the largest M-class asteroid, using data from the James Webb Space Telescope (JWST) spanning 1.1 - 6.63 μm. Our observations include two detections of the full 3-μm feature associated with OH and H2O resembling those found in CY-, CH-, and CB-type carbonaceous chondrites, and no 6-μm feature uniquely associated with H2O across two observations. We observe 3-μm depths of between 4.3 and 6% across two observations, values consistent with hydrogen abundance estimates on other airless bodies of 250 - 400 ppm. We place an upper limit of 39 ppm on the water abundance from the standard deviation around the 6-μm feature region. The presence of hydrated minerals suggests a complex history for Psyche. Exogenous sources of OH-bearing minerals could come from hydrated impactors. Endogenous OH-bearing minerals would indicate a composition more similar to E-or P-class asteroids. If the hydration is endogenous, it supports the theory that Psyche originated beyond the snow line and later migrated to the outer main belt. § INTRODUCTION (16) Psyche (hereafter Psyche) is the largest M-class asteroid in the Tholen taxonomy <cit.>. This class of asteroids has long been believed to include the remnant metallic cores of large, differentiated asteroids exposed through a series of collisions that stripped off their mantles e.g., <cit.>, <cit.>, and <cit.>. They are also thought to include the parent bodies of the iron meteorites <cit.>. 
Observations supporting a metallic composition for Psyche are its high radar albedo (0.34 ± 0.08; <cit.>), bulk density potentially higher than expected from a solar abundance of iron and rocky forming elements (4.0 ± 0.2 g/cm3 ; <cit.>), spectral matches to pure iron and iron meteorites e.g., <cit.>, <cit.>, metal content of no less than 20% as inferred from millimeter wavelength data <cit.>, and mid-infrared (MIR) spectroscopy from <cit.> indicating a lack of strong spectral features consistent with a metal/oxide surface. The evidence that Psyche is metal-rich and could be the exposed iron core of a differentiated asteroid has generated significant community interest culminating in the Psyche mission that launched October 2023 with an expected arrival date of August 2029 e.g., <cit.>. However, there is also evidence that challenges this view. M-class asteroids, while consistent with iron meteorites due to their lack of features in the vis-NIR wavelength range used in the Tholen taxonomy, are also consistent with enstatite chondrites that are similarly featureless in these wavelengths. Additionally, Psyche’s density measurements have varied widely over the years, and recent estimates suggest a density of 3.88 ± 0.25 g/cm3 <cit.>, which if towards the lower end, would be similar to that of Vesta, a non-metallic asteroid. This raises doubts about Psyche being predominantly metallic (> 50%), as achieving such a low density would require an unrealistically high macroporosity, unlikely for an object of Psyche’s size without evidence of a massive asteroid family that would result from its breakup and reaccumulation <cit.>, <cit.>. Additionally, spectral features attributed to silicate minerals have also been detected on Psyche and other M-class asteroids e.g., <cit.>, <cit.>, <cit.>, and references therein. Thermal inertia values derived from disk-integrated mid-infrared data (125 ± 40 J s1/2 K-1m-2 <cit.>; 5 - 25 J s1/2 K-1m-2 <cit.>; 20 - 80 J s1/2 K-1m-2 <cit.>) are at the lower end of what is expected for a powdered metal regolith (e.g, < 450 J s1/2 K-1m-2, <cit.>) and suggest the presence of at least some silicates. <cit.> found that the thermal inertia across Psyche’s surface was 25 - 600 s1/2 K-1m-2 based on spatially-resolved millimeter wavelength data, with most of Psyche’s surface thermal inertia falling in the 150 - 300 J s1/2 K-1m-2  range and dielectric constant (proxy for metal content) in the 15-25 range, also consistent with a heterogeneous surface in terms of metal content. Near-infrared (NIR) surveys of M-class asteroids (<cit.>, <cit.>, <cit.>, <cit.>, <cit.>) detected 3-μm spectral absorption features attributed to hydrated silicates on ∼35% of known M-class asteroids. Interestingly, Psyche was one of the only two large (D > 65 km) M-class asteroids in the <cit.> study for which they did not detect a 3-μm band, though the uncertainties on these spectrophotometric measurements precluded the detection of weak absorption features. Later high-resolution spectral observations by <cit.> reported the detection of a shallow 3-μm feature on Psyche with apparent rotational variability. This feature is difficult to detect and impossible to characterize fully from Earth-based telescopes due to atmospheric water absorptions blocking a part of the 3-μm feature. The AKARI spacecraft did not detect a 3-μm feature but also concluded the shallow feature reported by <cit.> would be below their detection limit <cit.>. 
The detection of a 3-μm feature implies the presence of hydroxyl (OH) or water-bearing minerals, depending on the precise center and shape of the band <cit.>, <cit.>. A more unambiguous detection of H2O would come from detecting the H2O 6-μm fundamental bending mode emission feature. This 6-μm feature was used to detect widespread molecular water on the Moon <cit.> and recently on asteroids <cit.>.  The presence and strength of either of these features might be linked to several possible formation and evolutionary history scenarios for the Solar System. Assessing the abundance of hydration products present on Psyche provides important context for the amount of volatiles that the largest planetesimals could retain during their formation and differentiation in the early solar system and thus whether impacts of differentiated planetesimals onto the early Earth should be considered an important source of water. Hydrous materials, whether water- or OH-bearing minerals or the ices themselves, may be remnants of planetesimal formation or they could be later and even recent additions from impacts. <cit.> found that water ice is not stable at any latitude on Psyche, and therefore any detected water is most likely in the form of water-bearing minerals. The two competing hypotheses for Psyche’s hydrated surface are either exogenous sources via C-complex asteroids in the neighborhood of Psyche e.g., <cit.>, or that Psyche formed in the outer solar system and retained hydrated materials upon differentiation e.g., <cit.>. The recently-launched Psyche mission will obtain elemental, space physics, and multispectral imaging data with high spatial resolution at the asteroid, but does not have spectral capabilities beyond 1.1 μm. Complementary measurements from other facilities are thus necessary to comprehensively understand the abundance and nature of hydrated minerals on Psyche's surface. § METHODS We observed Psyche with the James Webb Space Telescope (JWST) using the Near Infrared Spectrograph (NIRSpec) and Mid-Infrared Instrument (MIRI) instruments to fully characterize the 3-μm absorption feature associated with OH/H2O, and to identify whether the 6-μm emission feature associated with H2O is present. We obtained two sets of observations for each instrument to assess the variability of these features with the NIRSpec observations each on March 5 2023, and the MIRI observations on March 20 and 27, 2023. During these observations, Psyche was nearly pole-on (aspect angle 24°) with observations covering the north pole region (Figure <ref>). We show the spectrum acquired with the NIRSpec and MIRI instruments, featuring scattered sunlight and thermal emission from the asteroid, in Figure <ref>. §.§ Observations and Data Reduction We used JWST’s NIRSpec and MIRI instruments to observe Psyche (program ID 1731) in March 2023. The NIRSpec Integral Field Unit observations produced two data sets with the first observation beginning March 5 2023 02:21:33 UT and the second observation beginning March 5 2023 03:15:10 UT using the G140H/F100LP, G235H/F170LP, and G395H/F290LP modes, each with a total effective exposure time of 128.84 s. The MIRI Integral Field Unit observations also produced two data sets, with the first observation starting March 20 2023 22:51:59 UT and the second observation starting March 26 20:39:46 UT using all four spectral channels of the MRS (and corresponding sub-bands) with an exposure time of 632.71 s. 
The target saturated in Channels 2 through 4 and improving the correspondingly degraded data quality to carry out spectral feature analysis requires effort beyond the scope of our program goals. The proposed MIRI observations included Channel 1 (B = MED) data only and given that a 6-μm feature would be entirely detectable within Channel 1 (expected centers are 6.04 - 6.12 μm with widths of 0.1 - 0.55 μm, see Figure 2 <cit.>) we therefore focus only on analysis of Channel 1 observations. Uncalibrated (level-0) PID 1731 data products (_ucal files) were retrieved from the Mikulski Archive for Space Telescopes and reprocessed with JWST Science Calibration Pipeline calibration versions (CAL_VER) v1.11.3 and v1.14.3 for the NIRSpec IFU and MIRI MRS IFU data respectively. The MRS data were locally processed with Calibration Reference Data System (CRDS) file CRDS_CTX jwst_1241.pmap, while the NIRSpec data used CRDS_CTX jwst_1106.pmap. In Stage 2 of the pipeline (Spec2Pipeline), the NIRSpec rate files where de-stripped (suppressing the vertical stripes in the frames from 1/f noise) and background subtraction was achieved using rate_combinedbackground files generated from the background data model using the four off-source background observations for each grating. The final NIRSpec IFU spectral cubes (_s3d file) were then generated in Stage3 of the pipeline (Spec3Pipeline) using the _cal files generated in Stage2 sorted into proper association files (_asn.json) for NRS1 and NRS2, where the outlier_detection and master_background steps were both skipped. The MRS IFU data were processed in a similar fashion, where generation of the rate files from the level_0 in the Detector1Pipeline stage invoked a jump.three_group_rejection_threshold = 100 (useful for very bright targets and short ramp times) and jump.find_showers = "True". In the subsequent MRS pipeline processing the background subtraction was performed in a pixel-by-pixel fashion[see descriptive notes in section 4 of https://github.com/STScI-MIRI/MRS-ExampleNB/blob/main/Flight_Notebook1/MRS_FlightNB1.ipynb], and in the MRS Spec3Pipeline call outlier_detection was set to "True", with an outlier_detection.kernal_size = "11 1", and an outlier_detection.threshold_percent = 99.5. For both NIRSpec and the MRS the IFU Cube Build parameter, cube_build.weighting = "drizzle". Asteroid spectra were extracted from each final spectral-spatial data cube which were "drilled" along the cubes spectral axis using 1" effective circular aperture centered on the photocenter of asteroid. In the case of the NIRSpec data, spectra were sigma clipped to suppress residual "hot-pixels". §.§.§ NIRSpec Reflectance Spectrum We produced the reflectance spectrum by dividing the calibrated NIRSpec data by the solar spectrum obtained from the Planetary Spectrum Generator (PSG) using conditions matching the observations. The NIRSpec data also extend far enough into the infrared to be affected by thermal emission. We applied a tailored version of the Standard Thermal Model as established by <cit.> modified with some features from the Near Earth Asteroid Thermal Model (NEATM; <cit.>) to correct for the effect of thermal emission across the NIRSpec data (1.10 - 5.23 μm). The thermal model differs from the NEATM in that it does not account for night-side thermal emission, which may be neglected for phase angles as low as those of our observations (13.8 - 16.9) and has been applied previously in <cit.> and <cit.>. 
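The Stage 1-3 reprocessing described at the beginning of this subsection can be scripted with the jwst package. The following is a minimal sketch of our own (not the exact production script); the file and association names are placeholders, and only the parameter overrides quoted above are set explicitly:

# Sketch of the MRS/NIRSpec reprocessing described above (file names are placeholders).
# Only the parameter overrides quoted in the text are set; other steps use defaults.
from jwst.pipeline import Detector1Pipeline, Spec2Pipeline, Spec3Pipeline

# Stage 1: ramps-to-slopes for the MRS exposures, with the jump-step overrides
# used for bright targets with short ramp times.
Detector1Pipeline.call(
    "jw01731_mirifu_uncal.fits",          # placeholder level-0 file
    steps={"jump": {"three_group_rejection_threshold": 100,
                    "find_showers": True}},
    save_results=True,
)

# Stage 2: calibrate the rate file (background handling differs for NIRSpec and
# the MRS, as described in the text; defaults are shown here).
Spec2Pipeline.call("jw01731_mirifu_rate.fits", save_results=True)

# Stage 3: build the IFU cube from an association file, skipping the steps noted
# in the text for NIRSpec and using drizzle weighting for cube building.
Spec3Pipeline.call(
    "jw01731_nrs_asn.json",               # placeholder association file
    steps={"outlier_detection": {"skip": True},
           "master_background": {"skip": True},
           "cube_build": {"weighting": "drizzle"}},
    save_results=True,
)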
The model integrates various parameters including physical attributes such as radius, albedo, and emissivity, alongside observational parameters like the distances to the Sun and Earth, and the phase angle. The only free parameter of the model was the "beaming parameter" (η), a variable that encapsulates multiple factors influencing the asteroid's temperature. Determining the correct η value is critical as it directly influences the extent of thermal flux removal, with each value suggesting a specific continuum behavior. We note, however, that the peak of Psyche’s thermal emission is well beyond 6 μm, and therefore thermal flux removal cannot create a false absorption feature contained within the bounds of the datasets. We assumed a linear continuum and iteratively adjusted η values until finding one that aligned with the expected 3.75 μm reflectance ratio (η = 0.8) using extrapolated data from shorter wavelengths without a thermal flux contribution. The thermally corrected reflectance spectrum is shown in Figure <ref>. §.§.§ MIRI Emission Spectrum We calculated Psyche's thermal emission for the MIRI dataset via a thermophysical model (TPM) code <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. We used the latest available spin-shape solution <cit.> which is based on a wide range of radar measurements, plane-of-sky ALMA imaging, and adaptive optics images from Keck and the VLT. For the surface temperature calculations and the flux predictions, we used true solar insolation and JWST observing geometry for the two different MIRI epochs. Critical thermophysical properties like surface emissivity, thermal inertia or roughness are all based on a detailed TPM study <cit.> which allowed us to interpret a wide range of thermal-IR measurements taken during the last 40 years. From that study, we took the best-fit thermal inertia of 50 s1/2 K-1m-2 and a low level of surface roughness for the interpretation of the MIRI measurements. We generated the solar spectrum for the MIRI observations in PSG as well and corrected for solar flux contributions by subtracting the solar flux from the total observed flux (∼0.02 Jy near 6 μm corresponding to ∼2% of the total flux). We then divided this solar subtracted MIRI flux by the TPM to produce the MIRI emission spectrum. We then fit a line using data from 5.75 - 5.90 μm and 6.1 - 6.33 μm to detrend the data. To increase the signal-to-noise while retaining sufficient resolution to assess the presence of a 6-μm feature with an expected width of ∼100 nm we binned the data to 20 nm. §.§.§ Near-Infrared Observations of Psyche After producing the NIRSpec reflectance spectrum, we split the NIRSpec observations into wavelength ranges of 1.10 - 1.35 μm, 1.48 - 2.35 μm, 2.48 - 3.95 μm, and 4.23 - 5.23 μm. These ranges represent the continuously available NIRSpec data (i.e., without gaps) over which we could fit a linear continuum to produce a normalized reflectance spectrum for each range. We note that the NIRSpec data begin at ∼1 μm, but we selected a cutoff of 1.1 μm as we believe the 1.0 - 1.1 μm range is dominated by instrumental effects. We divided each of these wavelength ranges by separate fit linear continuums to produce normalized reflectance spectra to identify absorption features. We first fit a continuum using only portions of the beginning and end of the wavelength range, then once we identified potential spectral features we fit a new continuum over the entire wavelength range with those spectra features masked. 
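A minimal sketch of this masked linear continuum fit and normalization is given below; the array names wave and refl are placeholders for one continuous wavelength segment and its reflectance, and the masked feature window is illustrative:

import numpy as np

def normalize_segment(wave, refl, feature_windows):
    """Fit a linear continuum with candidate features masked, then divide it out.

    wave, refl      : 1-D arrays for one continuous wavelength segment (microns)
    feature_windows : list of (lo, hi) wavelength ranges excluded from the fit
    """
    mask = np.ones_like(wave, dtype=bool)
    for lo, hi in feature_windows:
        mask &= ~((wave >= lo) & (wave <= hi))    # drop points inside each feature
    slope, intercept = np.polyfit(wave[mask], refl[mask], 1)
    continuum = slope * wave + intercept
    return refl / continuum                        # normalized reflectance

# e.g. the 2.48 - 3.95 micron segment with the 3-micron band masked (illustrative window):
# norm = normalize_segment(wave, refl, feature_windows=[(2.7, 3.1)])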
We binned the data between 1 and 20 nm depending on the data resolution prior to fitting a Gaussian to any identified features. We also omitted G235H/F170LP mode data beyond 2.9 μm for observation 1 as these data appeared to be dominated by a potential instrumental effect with flux values largely inconsistent with the G395H data from the same observation and G235H data from observation 2. The resulting normalized reflectance spectra and corresponding spectra binned to 10 nm for each wavelength range are shown in Figure <ref>. We measured the band depth of the 3-μm feature in three ways. First, we applied a Gaussian fit to the 3-μm region. Second, we took the centroid of the flux between 2.7 and 3.1 μm and estimated the band strength by taking the mean of the observed flux at the estimated band center subtracted from the mean continuum flux at the estimated band center. Third, we set the band center as the minimum value of the data binned to a wavelength of 10 nm and measured the band depth as the observed minimum flux subtracted from the mean continuum flux at this band center. For observation 1 this was repeated with the spike between 2.87 and 2.91 μm excluded from the analysis. The results of these various methods are in Table <ref>. §.§.§ Mid-Infrared Observations of Psyche We subtracted the solar contribution from the MIRI data and then binned the data to a wavelength resolution of 20 nm. We then divided the data by a spin-shape thermophysical model and fit a line that masked the 6-μm feature region (5.9 - 6.1 μm) and longward of 6.33 μm to remove the continuum and produce a normalized emission spectrum. We then fit a Gaussian to the data in the 6-μm feature region and calculated the standard deviation around the feature region to report an upper limit on the detectable amount of water potentially present. § RESULTS The principal objective of this study is to investigate the presence of 3 and 6-μm features on Psyche's surface to evaluate the corresponding abundance and heterogeneity of hydration across Psyche's surface. Alongside this main goal, we have also mapped additional spectral features detailed in Table <ref>. In the following sections, we describe our approach for identifying and characterizing spectral features from the NIRSpec and MIRI observations. §.§ 3-μm Feature We observed a 3-μm feature associated with OH, and potentially H2O, in each observation and applied a Gaussian fit to the region to determine the feature band depth, band center, and width (Figure <ref>). The relative normalized reflectance spectra in Figure <ref> are the normalized reflectance subtracted from the mean of the continuum between 3.6 and 3.7 μm such that the average of the continuum would be 0. We compare these fit parameters to an estimation of the band center by taking the centroid of flux within the wavelength range of 2.7 to 3.1 μm and estimating the band depth by taking the mean observed flux at the estimated band center and subtracting this from the mean continuum flux at the estimated band center. The estimated band center for observation 1 using the centroid method was 2.89 μm with a band depth estimate of 4.3%. The estimated band center from observation 2 from the centroid method was 2.90 μm with a band depth estimate of 4.9%. These same parameters from the Gaussian fit for observation 1 were a band center and depth respectively of 2.92 μm and 4.5% and for observation 2 were 2.96 μm and 5.8%.
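The Gaussian-fit and centroid estimates of the band center and depth quoted in this section can be reproduced schematically as follows; this is a sketch only, assuming wave and norm hold the wavelength and continuum-normalized reflectance of the 3-μm region prepared as described above:

import numpy as np
from scipy.optimize import curve_fit

def inverted_gaussian(w, depth, center, sigma):
    # Absorption modeled as a negative Gaussian on a unit continuum.
    return 1.0 - depth * np.exp(-0.5 * ((w - center) / sigma) ** 2)

def band_from_gaussian(wave, norm):
    p0 = [0.05, 2.9, 0.1]                 # initial guess: ~5% deep band near 2.9 um
    popt, _ = curve_fit(inverted_gaussian, wave, norm, p0=p0)
    depth, center, sigma = popt
    return center, depth

def band_from_centroid(wave, norm, lo=2.7, hi=3.1):
    sel = (wave >= lo) & (wave <= hi)
    absorption = 1.0 - norm[sel]
    center = np.sum(wave[sel] * absorption) / np.sum(absorption)  # flux-weighted centroid
    near = np.abs(wave - center) < 0.02    # mean flux within +/- 20 nm of the center
    depth = 1.0 - np.mean(norm[near])
    return center, depth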
For observation 1 we also applied the Gaussian fit method to the data while masking the spike between 2.87 and 2.91 μm resulting in a band center of 2.98 and a band depth of 4.9%. Taking the minimum of the data binned to 10 nm for observation 1 resulted in a band center of 2.86 μm with a band depth of 5.1% and for observation 2 a band center of 2.95 with a band depth of 6.0%. The band depths calculated from the variety of approaches are given in Table <ref>. The reported errors for the band depth measurements from the centroid method are assigned from the standard deviation of the data from 3.56 - 3.90 μm. When varying the linear fit by the standard error of the fit slope we found that the variation of the estimated band depth was within the standard deviation of this baseline and when applying the band depth error calculation method of <cit.> the error was on the order of ∼0.001% and therefore insignificant compared to variations that may arise from the choice of a linear fit to the continuum and standard deviation of the data. The band depths estimated from the observations by <cit.> range between 2.72 - 3.26%, the center of these bands was not measurable due to the lack of available data resulting from atmospheric opacity at the band center and therefore the authors assumed a center of 3 μm (see Figure <ref>) We estimate that the band depth of the 3-μm feature is between 4.3 and 4.7% for observation 1 and 5.3 and 6.0% for observation 2. The differences in measured feature strength from JWST and IRTF observations further suggest heterogeneity of the hydration abundance across Psyche’s surface by a few percent. We also compared the normalized reflectance spectrum of Psyche near 3 μm to laboratory spectroscopic measurements of carbonaceous chondrites from <cit.> and <cit.> (Figure <ref>). Figure <ref> shows the normalized reflectance spectrum of Psyche near 3-μm compared with laboratory spectra of CM2-, CH/CBb- and CY-type carbonaceous chondrites. The shape of the 3-μm feature more closely resembles CY and CH/CBb type chondrites. We include the CM2 data as a common example of sharp 2.7-μm features typically associated with OH present both in the chondrites and asteroids e.g., Bennu from <cit.> that do not match the feature shape observed on Psyche. The CH/CBb type chondrite Isheyevo shown in blue in Figure <ref> was also found to be a good match for the vis-NIR spectra of Psyche from <cit.>. We also found that partial 3-μm features observed on asteroids (324) Bamberga and (704) Interamnia, both considered to have either sharp type or not sharp type 3-μm features depending on the observation, most closely matched the shape of  Psyche’s 3-μm feature when performing a sum of square difference comparisons between Psyche and the asteroid spectra provided in <cit.> (Figure <ref>). The sum of square differences between Psyche’s spectra and that of (324) Bamberga and Interamnia were as low as 0.11. For comparison, the lowest values for (24) Themis, (52) Europa, and (2) Pallas were 2.18, 0.81, and 1.21 respectively.  It is possible that Psyche’s 3-μm band shape may have been a 2.7-μm sharp type that shifted closer to 2.9-μm taking on a more rounded shape as a result of thermal metamorphism (as described in <cit.>) . Psyche’s 3-μm band is also a close match to the stage III CY 980115 chondrite from <cit.> that contains ∼8 vol% secondary olivine, ∼2 vol% magnetite, ∼11 vol% Fe-sulfide, and ∼79 vol% dehydrated phyllosilicate. 
Stage III chondrites are characterized by low degrees of aqueous alteration and unequilibrated mineral assemblages with anticipated peak temperatures of the parent body around 600 °C. From <cit.>, chondrites in stage III samples show 3-μm features between 2.88 and 3.02 μm (as we see on Psyche), though the position of the feature is not influenced by the degree of aqueous alteration and therefore offers no information on phyllosilicate abundance. §.§ 6-μm Feature We did not detect any definitive 6-μm feature that would have potentially been associated with H2O in either MIRI observation. Figure <ref> shows the relative normalized emission for each observation. The normalized emission was subtracted from the mean of the continuum between 6.15 and 6.22 μm such that the average of the continuum would be 0. The data reduced using a previous version of the pipeline (v.1.11.3 with pmap 0994) showed a peak exceeding 1% above the baseline in observation 1 and this was no longer observed after reducing the data with a more recent version of the pipeline (v.1.14.0 with pmap 1241). The standard deviation of data near where the band may be present (from 6.15 - 6.22 μm) is 0.0037 or 0.37%. We also do not anticipate that the large scatter of up to 0.1 or 10% between 6.4 and 6.6 μm is due to any spectral features; it therefore suggests an instrumental artifact. We therefore do not report a definitive detection of water on Psyche, but cannot completely rule out its presence, particularly if it is as low as a 0.4 - 1% level, which may be anticipated based on values observed on the Moon and at Itokawa of between 1 - 5%. To provide a reasonable upper limit on the amount of water that may be present at Psyche but would potentially not be observable as a result of pipeline limitations and instrumental effects, we report this upper limit conservatively at 0.4%. The water abundance from the strength of the 6-μm feature may be calculated using the equation from <cit.> A = 9394 · D^2 + 9594 · D, where A is the abundance in ppm and D is the band strength. Using a band strength of 0.4%, this amounts to an upper limit for the water abundance of 39 ppm. The possible interpretations for a non-detection include: 1. Water is present but at an amount that the instrument was not sensitive to, which would likely be below 0.4% or 39 ppm. 2. Water is present above this level, up to potentially 10%, but issues with the calibration pipeline have suppressed the feature. 3. Water is not present at all in the hemisphere observed by these JWST observations. 4. Water is present but other spectral absorption features near 6-μm may be present as well. Of these possibilities, interpretation 1 seems most likely given that water abundance estimates of 100 - 500 ppm have been measured at the Moon <cit.> and on near-Earth asteroids such as Itokawa <cit.> (corresponding to 1 - 5% band strengths), but Psyche's farther distance from the Sun leads to a correspondingly lower exposure to solar wind implantation. The hydrogen ions from the solar wind are thought to interact with the surface of these airless bodies to produce water, and Psyche's reduced exposure to this solar wind may lead to a correspondingly reduced amount of water produced.
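As a quick numerical check of the abundance relation quoted above, the adopted 0.4% upper limit on the band strength indeed maps to the quoted ~39 ppm:

# Water abundance from the 6-um band strength (equation above); D is the fractional band strength.
def water_abundance_ppm(D):
    return 9394.0 * D**2 + 9594.0 * D

print(water_abundance_ppm(0.004))   # ~38.5 ppm, consistent with the 39 ppm upper limit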
As a result we would not necessarily anticipate significantly higher water abundance than on the surface of the Moon, though we cannot definitively rule this out due to instrumental effects that are apparent in the data and due to deviations in spectral features that arise as a result of updates to the calibration pipeline. Additionally, the relationship between hydration and the solar wind may be further complicated by cooler temperatures for bodies farther away from the Sun potentially increasing the retention of water even though the body has less access to solar wind hydrogen ions. We also note that olivine has an emission feature near 6-μm <cit.> and other minerals may have absorption features in emissivity near this wavelength as well. This points to possible degeneracies in interpretations of the strength of the 6-μm band. The lack of any water at all would indicate that the hydration producing the 3-μm band is entirely due to OH, which is possible, though a limited amount of water under the detection threshold of the instrument is more likely given the exposure of the asteroid to hydrogen ions that would likely interact with the surface in a similar way to other airless bodies. We also note that <cit.> identify a relationship in non-carbonaceous asteroids between hydrogen abundance and the band depth at 2.95 μm where a 2% band depth corresponds to ∼100 pm of hydrogen. We are reluctant to make any estimates of specific amounts of hydrogen abundance inferred from this relationship based on the largest band depths observed at Psyche because the relationship is likely nonlinear and is unknown. However, observations of other airless bodies' 3-μm bands with depths similar to what is observed at Psyche, e.g. at Eros and Ganymed, have had hydrogen estimates placed at 250 - 400 ppm <cit.>. Other sources have correlated a 2% absorption feature on the Moon to ∼1000 ppm water <cit.>. We do not claim that a similar amount of water exists on Psyche, particularly because no 6-μm emission feature uniquely associated with water was observed, but point out that the relationship between hydrogen, OH and H2O abundance from 3-μm feature properties alone may be insufficient and limitations in degeneracies of interpretation of the 6-μm feature generally may further complicate inferred water abundance. §.§ Additional Spectral Features We identified several other spectral features by visual inspection and further assessed these by fitting their parameter properties. The identified absorption features in NIRSpec observations are located at 1.25 and 1.28 μm in observation 2, 1.59 μm in observation 1, and 2.66 and 4.8 μm in both observations. The identified absorption features in the MIRI emission spectrum are located at 5.74 and 5.93 μm in observation 1 and 5.75 and 5.98 μm in observation 2. We fit Gaussians to these features to determine their properties (see Appendix A. Supplemental Figures) and recorded these in Table <ref>. The data with the 1.25, 1.28, and 1.34 μm features were binned to 1 nm, the data with the 1.59 μm feature were binned to 10 nm, the data with the 2.66 μm feature were binned to 1 nm, and the data with the 4.8 μm feature and the MIRI data were binned to 20 nm. The 1.25, 1.28, 1.34, 1.59, and 2.67 μm absorption features observed are quite narrow (FWHMs 0.004 - 0.023 μm), with strengths that are unlikely to have gone undetected by previous observations in these wavelengths (we were unable to find these features in previous near-IR observations of Psyche described in <cit.>). 
We also report absorption features in emissivity near 5.75 and 5.95 μm. We are uncertain whether any of these correspond to real features but we include them for comparative purposes as more JWST asteroid and laboratory observations become available. We compared the emissivity spectra to that of olivine and orthopyroxene and found no similarities. However, the broad 4.81 μm absorption feature in both observations may be associated with an Si-O bending feature present with silicate material providing additional evidence for the presence of silicate material across Psyche’s surface.  § DISCUSSION JWST observations of asteroid (16) Psyche offer critical insights into the asteroid's surface composition and its geological history while providing context for upcoming Psyche mission in-situ observations. The presence of 3-μm features and the lack of a 6-μm feature in Psyche's spectrum has profound implications for understanding the asteroid's origin, evolution, and the broader narrative of water distribution in the early Solar System. The detection of the 3-μm absorption feature associated with hydroxyl or water molecules in our NIRSpec observations suggests that Psyche's surface is not exclusively metallic but hosts widespread hydrated material. Laboratory studies of the 3-μm features in meteorites show trends in band shape and center with the degree of aqueous alteration <cit.>, <cit.>, <cit.>, and <cit.>. The nature of the 3-μm feature observed on Psyche is not comparable to the sharp shape centered at ∼2.7-2.8 μm seen in CI-, CM-, and CR- type carbonaceous chondrites (i.e., OH in phyllosilicates). Instead, the band shape and center appear consistent with those in CY-, CH- and CB-type carbonaceous chondrites. The CHs are rich in metal (20%; <cit.>, <cit.>) and the 3-μm band could be attributed to the presence of iron oxyhydroxides (e.g., FeO (OH), rust: FeO (OH), H2O). Previous studies have proposed that CBs formed in a vapor-melt plume produced by a giant impact, and the presence of CB-like material on Psyche’s surface could be the result of a giant impact removing Psyche’s mantle <cit.>. The variability in the strength of the hydration features across our observations implies a heterogeneous distribution of hydrated minerals, which could result from impacts, suggesting a complex surface history involving hydrated impactors with CHs and CBs as reasonable impactor candidates.  The lack of a definitive detection of a 6-μm emission feature on Psyche indicates that Psyche's hydration is dominated by OH. Based on the standard deviation of the data around this feature location we set an upper limit of the potential presence of water that is below our detection limit at an abundance of 39 ppm. This value is less than half of and up to an order of magnitude lower than the molecular abundance of water on the Moon (∼100 - 400 ppm <cit.>), and on S-type asteroids that were previously considered anhydrous (∼450 ppm <cit.>). However, the presence of water at an amount up to an order of magnitude lower than at the Moon is reasonable considering the solar wind flux at the Moon is 7 cm-3 <cit.> and at Psyche is 0.6 cm-3 <cit.>, and we may expect that if water is produced mostly as the result of solar wind implantation the correspondingly lower flux of solar wind would produce less water. 
Additionally, such a low upper limit of abundance lends further evidence that hydration is due to an exogenic source likely with some contribution from solar wind implantation but not yielding a significant amount of hydration due to water. The range of possible interpretations highlights the enigmatic nature of Psyche's composition and inferred evolution. If Psyche’s hydration is endogenous, this supports the proposal that the asteroid may have formed beyond the snow line of the early solar system and later became implanted in the outer main belt <cit.>. The presence of H2O and/or OH on Psyche could be attributed to either the delivery of hydrated material through impacts, space weathering processes, or the scenario that Psyche’s overall composition is not consistent with M-class asteroids (i.e., iron meteorites). Psyche could instead have the same composition as a P-class asteroid (with a somewhat higher albedo than typical P-class asteroids) or an E-class asteroid (with a lower albedo than a typical E-class asteroid). Additionally, the depth of the 3-μm absorption feature observed on Psyche is similar to that of other airless bodies (e.g., Eros and Ganymed; <cit.>) with hydrogen abundance estimates of 250 - 400 ppm. This estimated amount should be detectable by the Psyche mission's Gamma Ray and Neutron Detection instrument and has implications for interpretations of the results from that instrument <cit.>. The Psyche mission will explore the asteroid's composition, internal structure, and geology to better understand the building blocks of planet formation. Our JWST observations provide a crucial pre-mission snapshot of Psyche's surface composition using a wavelength range outside the spacecraft’s capabilities. Understanding the distribution and abundance of hydrated minerals on Psyche will help to interpret the mission's data within the context of the asteroid's formation and evolutionary history. Our findings suggest that there are hydrated materials on Psyche’s north pole, potentially observable as silicates in chondrites that in addition to OH contain pyroxene which the Psyche mission will be able to refute or verify through visual to near-infrared observations with the Multispectral Imager <cit.>.  The detection of hydration features on Psyche contributes to the growing body of evidence that M-class asteroids are a diverse population and that hydration is more widespread in the asteroid belt than previously thought. This has significant implications for our understanding of water delivery to the inner Solar System, potentially offering insights into the contribution of differentiated planetesimals (which are traditionally thought to be volatile-poor as a result of igneous differentiation) to the Earth's water budget. Psyche's complex surface composition, indicative of a rich geological history involving both metallic and hydrated components, challenges traditional asteroid classification schemes and calls for a reevaluation of our understanding of asteroid formation and evolution. § DATA AVAILABILITY JWST data are publicly available from the Space Telescope Science Institute’s Mikulski Archive for Space Telescopes https://mast.stsci.edu/. The observations produced by the JWST 1731 observation program can be accessed via DOI at 10.17909/xmzs-f849. Reduced data used in this analysis and the data required to reduce them (i.e.,the PSG solar spectrum and thermal model outputs) are publicly available at Zenodo 10.5281/zenodo.12536821. 
§ CODE AVAILABILITY The Planetary Spectrum Generator used to generate the solar spectrum is at https://psg.gsfc.nasa.gov/. The JWST science data calibration pipeline is at https://github.com/spacetelescope/jwst. The pipeline version used to calibrate the NIRSpec data was v1.11.3 <cit.> and the pipeline version used to calibrate the MIRI data was v1.14.0 <cit.>. § ACKNOWLEDGEMENTS We thank the reviewers for their helpful feedback that led to substantial improvement of this manuscript. This work is based (in part) on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-03127 for JWST. These observations are associated with program #1731. aasjournal § SUPPLEMENTARY FIGURES
http://arxiv.org/abs/2407.13487v1
20240718130755
Anisotropic cosmology in Bumblebee gravity theory
[ "Pranjal Sarmah", "Umananda Dev Goswami" ]
gr-qc
[ "gr-qc" ]
[E-mail:]p.sarmah97@gmail.com [E-mail:]umananda@dibru.ac.in Department of Physics, Dibrugarh University, Dibrugarh 786004, Assam, India § ABSTRACT Studying the Bumblebee vector model of spontaneous Lorentz symmetry breaking (LSB) in the Bianchi type I (BI) Universe to observe its effect on cosmological evolution is an interesting aspect of anisotropic cosmology. In this study, we have considered a Bumblebee field under the vacuum expectation value (VEV) condition with the BI metric and studied the cosmological parameters along with observational data. Further, we have studied the effect of anisotropy and the Bumblebee field on cosmic evolution. We have also studied the effect of both anisotropy and the Bumblebee field while considering the Universe as a dynamical system. We have found that there are some prominent roles of both anisotropy and the Bumblebee field in cosmic evolution. We have also observed an elongated matter-dominated phase as compared to standard cosmology. Moreover, from the dynamical system analysis, we have also observed a shift of the critical points from the standard ΛCDM results, showing the effect of anisotropy and the Bumblebee field. Anisotropic cosmology in Bumblebee gravity theory Umananda Dev Goswami 0000-0003-0012-7549 Received: date / Accepted: date ================================================= § INTRODUCTION Standard cosmology, also known as ΛCDM cosmology, mainly relies on the two cosmological principles, viz., the isotropy and the homogeneity of the Universe on a large scale. Along with the Friedmann-Lemaître-Robertson-Walker (FLRW) metric supported by an energy-momentum tensor of conventional perfect fluid form, this theory provides answers to many questions regarding the understanding of the Universe <cit.>. However, the observed accelerated expansion of the Universe <cit.> along with the absence of direct observational evidence for dark matter (DM) <cit.> and dark energy (DE) <cit.>, etc., have motivated researchers to search for alternate theories or modifications in general relativity (GR) through expanding the conventional formalism <cit.>. In this context, a gravitational model known as the Bumblebee gravity model was proposed <cit.>. This model goes beyond the ambit of GR and relaxes the cosmological principles. In this theory, a vector field known as the Bumblebee field has been introduced and as a result, it modifies the Einstein field equations of GR. The Bumblebee model was first introduced in 1989 by Kostelecky and Samuel <cit.>. This is a simple yet reliable standard model extension (SME) that basically works on the principle of Lorentz symmetry breaking (LSB) <cit.> by using a vector field. The model introduces this field and the potential in the conventional Einstein-Hilbert (EH) action which results in the modification of the general Einstein field equations and it helps to understand the various cosmological aspects without considering the exotic matter and energy contents like DM and DE in the Universe <cit.>. However, several other SMEs are also available, in which the EH action was modified through the inclusion of some nontrivial curvature terms <cit.>, suitable scalar terms <cit.>, vector fields <cit.> and so on. The impact of SMEs in the research of the gravitational sector can be found in the Refs. <cit.>.
Although, as mentioned already, the isotropic and homogeneous cosmology successfully explains most of its aspects with the help of the standard ΛCDM model which includes the Hubble tension <cit.>, the σ_8 tension <cit.>, the coincidence problem <cit.> etc., several dependable observational data sources, including WMAP <cit.>, SDSS (BAO) <cit.>, and Planck <cit.> have shown some deviations from the principles of standard cosmology. These suggest that the Universe may have some anisotropies. Additionally, some studies have pointed to a large-scale planar symmetric geometry of the Universe, and for such a geometry the eccentricity of the order 10^-2 can match the quadrupole amplitude to the observational evidence without changing the higher-order multipole of the temperature anisotropy in the CMB angular power spectrum <cit.>. Again, the existence of asymmetry axes in the vast scale geometry of the Universe is also evident from the polarization analysis of electromagnetic radiation which is traveling across great cosmological distances <cit.>. Thus, the isotropy and homogeneity assumptions are not sufficient to explain all the cosmological phenomena completely. To explain these anisotropies, we require a metric with a homogeneous background that possesses the anisotropic character. Such a class of metrics had already been provided by Luigi Bianchi and out of his eleven types of metrics, Bianchi type I (BI) is the simplest yet suitable to explain the anisotropy of the Universe <cit.>. Researchers have already used this metric along with its special case like the locally rotationally symmetric BI (LRS-BI) metric to explain different aspects of anisotropic cosmology in both GR and other modified theories and some of them are found in the Refs. <cit.>. The SME theory of the Bumblebee vector field has been extensively studied by researchers in the black hole physics <cit.>. However, there are very limited studies that have been carried out on cosmological scenarios. Cosmological implications of Bumblebee theory on isotropic cosmology can be found in the Refs. <cit.>. In anisotropic cosmology too the study of Bumblebee gravity is in the very preliminary stage. In Ref. <cit.>, the Bumblebee field is considered as a source of cosmological anisotropies. Another work of Bumblebee gravity on the Kesner metric has been found in Ref. <cit.>. However, constraining the model parameters with the help of various observational data and extensive studies of cosmological parameters in Bumblebee gravity with Bianchi type I metric is an interesting topic of study to understand the Universe as well as the role of anisotropy and Bumblebee field in cosmic expansion and evolution. In this work, we have considered the BI metric along with the Bumblebee vector model to explain the anisotropic characteristics of the Universe by studying the cosmological parameters. Here we have used available observational data like Hubble data, Pantheon data, BAO data, etc. to understand the situation in a more realistic and physical way. Further, we have covered the study of the effect of anisotropy and Bumblebee field on cosmic evolution through the study of the evolution of the density parameters against cosmological redshift. Finally, we have also studied the dynamical system analysis for the considered case to understand the property of the critical points and sequences of the evolution of the various phases of the Universe. 
In all our analyses, we have compared our results with standard ΛCDM results to understand the effect of anisotropy and the Bumblebee field on cosmic expansion. The current article is organized as follows. Starting from this introduction part in Section <ref>, we have discussed the general form of the field equations of Bumblebee gravity in Section <ref>. In Section <ref>, we have developed the field equations and continuity equation for the Bianchi type I metric with the consideration of the time-dependent Bumblebee field. In Section <ref>, we have further simplified the situation by considering the vacuum expectation value (VEV) condition and derived the field equations and cosmological parameters. In Section <ref>, we have constrained the model parameters by using the techniques of Bayesian inference with various observational data compilations and compared our model results with standard cosmology by using the constrained values of the parameters. In Section <ref>, we have studied the effect of anisotropy and the Bumblebee field on cosmological evolution. In Section <ref> we have performed the dynamical system analysis for both the standard ΛCDM and anisotropic Bumblebee models. Finally, the article has been summarised with conclusions in Section <ref>. § THE BUMBLEBEE GRAVITY AND FIELD EQUATIONS The Bumblebee vector field model and its associated gravity theory are based on the principle of spontaneous Lorentz symmetry breaking (LSB) within the gravitational context. These models contain a potential term V that, for the field configurations, results in nontrivial VEVs. This can have an impact on other fields' dynamics that are coupled to the Bumblebee field, while also maintaining geometric structures and conservation laws that are compatible with the standard pseudo-Riemannian manifold used in GR <cit.>. The simplest model with a single Bumblebee vector field B_μ coupled to gravity in a non-torsional spacetime can be described by the action <cit.>, S_B = ∫√(-g)[1/2κ(R+ξ B^μB^νR_μν)- 1/4B^μνB_μν-V(B^μB_μ± b^2)+ℒ_ℳ]d^4x, where κ = 8π G, ξ is the coupling constant with the dimension of [M^-2] accounting for the interaction between the Bumblebee field and the Ricci tensor of spacetime, B^μ is the Bumblebee vector field, B_μν = ∂_μB_ν-∂_νB_μ is the field strength tensor, and ℒ_ℳ is the matter Lagrangian density. Further, b^2≡ b^μb_μ = ⟨ B^μB_μ⟩_0≠ 0 is the expectation value for the contracted Bumblebee vector field and V is a field potential satisfying the condition: B_μB^μ± b^2 = 0. The field equations obtained through varying the action (<ref>) with respect to the metric g_μν can be written as G_μν = κ[ 2V'B_μ B_ν + B_μα B^α_ν - ( V + 1/4 B_αβ B^αβ) g_μν] + ξ[1/2B^α B^β R_αβ g_μν - B_μ B^α R_αν - B_νB^α R_αμ] + ξ[1/2∇_α∇_μ (B^α B_ν)+1/2∇_α∇_ν (B^α B_μ)-1/2 □(B_μB_ν) - 1/2g_μν∇_α∇_β(B^α B^β)] +κ T^M_μν . Here, G_μν is the Einstein tensor and T_μν^M is the energy-momentum tensor. Further, varying the action (<ref>) with respect to the Bumblebee field gives the equation of motion of the field as <cit.> ∇_μB^μν = 2(V'B^ν- ξ/2κB_μR^μν). If the left-hand side of the equation (<ref>) vanishes, then the relation becomes a simple algebraic relation between the Bumblebee potential V and the geometry of spacetime. In our work, we have considered the Bianchi type I metric and a Bumblebee field to solve the equation (<ref>). Here, we have considered the VEV of the field and hence it holds the condition: V = V' = 0.
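The geometric left-hand side of the field equations for the Bianchi type I metric used in the next section can be checked symbolically. The following is a minimal sympy sketch of our own (purely illustrative, not part of the derivation) that computes G_00 for a diagonal metric with scale factors a_1(t), a_2(t), a_3(t) and recovers the combination H_1H_2 + H_2H_3 + H_3H_1 appearing in the temporal field equation below:

import sympy as sp

t = sp.symbols('t')
a1, a2, a3 = (sp.Function(f'a{i}')(t) for i in (1, 2, 3))

# Bianchi type I metric, signature (-,+,+,+)
g = sp.diag(-1, a1**2, a2**2, a3**2)
ginv = g.inv()
x = [t, *sp.symbols('x y z')]

# Christoffel symbols Gamma^l_{ij} of the metric
Gamma = [[[sum(ginv[l, m] * (sp.diff(g[m, i], x[j]) + sp.diff(g[m, j], x[i])
                             - sp.diff(g[i, j], x[m])) for m in range(4)) / 2
           for j in range(4)] for i in range(4)] for l in range(4)]

def ricci(i, j):
    # Ricci tensor R_{ij} in the standard convention
    return sp.simplify(
        sum(sp.diff(Gamma[l][i][j], x[l]) - sp.diff(Gamma[l][i][l], x[j])
            + sum(Gamma[l][l][m] * Gamma[m][i][j] - Gamma[l][j][m] * Gamma[m][i][l]
                  for m in range(4)) for l in range(4)))

R = sp.simplify(sum(ginv[i, i] * ricci(i, i) for i in range(4)))   # Ricci scalar
G00 = sp.simplify(ricci(0, 0) - g[0, 0] * R / 2)                    # G_{00}
# G00 equals a1'a2'/(a1 a2) + a2'a3'/(a2 a3) + a3'a1'/(a3 a1), i.e. H1 H2 + H2 H3 + H3 H1
print(G00)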
§ BIANCHI COSMOLOGY IN BUMBLEBEE GRAVITY We have considered the Bianchi type I metric in our study which has the form: ds^2 = - dt^2+a_1^2(t) dx^2+ a_2^2(t) dy^2+ a_3^2(t) dz^2, where a_1, a_2, a_3 are the directional scale factors along x,y,z directions respectively. Thus, BI metric provides three directional Hubble parameters: H_1 =ȧ_̇1̇/a_1, H_2 = ȧ_̇2̇/a_2, and H_3 = ȧ_̇3̇/a_3 along three axial directions. Accordingly, the average expansion scale factor for the metric is (a_1 a_2 a_3)^1/3 and the average Hubble parameter can be written as H = 1/3(H_1+H_2+H_3). In our work, we have considered the Bumblebee field, which has only one surviving component, the temporal component <cit.>, i.e. B_μ = (B(t),0,0,0), and it holds the condition, B_μν = 0. Thus, under this condition equation (<ref>) can be simplified as [V' - 3ξ/2κ(ä_̈1̈/a_1+ä_̈2̈/a_2+ä_̈3̈/a_3)]=0. Now, for the stress-energy tensor T_μ^ν = diag(-ρ,P,P,P), the temporal component of the field equation (<ref>) can be written as (H_1 H_2+H_2 H_3+ H_3 H_1)- ξ(H_1^2 + H_2^2 + H_3^2)B^2- ξ B Ḃ(H_1+H_2+H_3) = κ(ρ+V). And, the spatial components of the filed equations are obtained as follows: [(ä_̈2̈/a_2+ä_̈3̈/a_3) +H_2 H_3](1- ξ B^2)= κ(-P+ V) - ξ B^2[1/2(H_1 + H_2 + H_3)^2 -3/2(H_1^2 + H_2^2 + H_3^2) ] + ξ[2( H_2 + H_3)BḂ+BB̈+ Ḃ^2], [(ä_̈1̈/a_1+ä_̈3̈/a_3) + H_3 H_1](1- ξ B^2)=κ(-P+ V) -ξ B^2[1/2(H_1 + H_2 + H_3)^2 -3/2(H_1^2 + H_2^2 + H_3^2)] +ξ[2(H_1+ H_3)BḂ+BB̈+ Ḃ^2], [(ä_̈1̈/a_1+ä_̈2̈/a_2) +H_1 H_2](1- ξ B^2)=κ(-P+ V)-ξ B^2[1/2(H_1 + H_2 + H_3)^2 -3/2(H_1^2 + H_2^2 + H_3^2)] +ξ[2(H_1 + H_2 )BḂ+BB̈+ Ḃ^2]. Further, by adding all the spatial components of the field equations, one can write, [2(ä_̈1̈/a_1 +ä_̈2̈/a_2+ä_̈3̈/a_3)+(H_1 H_2 +H_2 H_3+ H_3 H_1)](1- ξ B^2)= 3κ(-P+ V) -3ξ B^2[1/2(H_1 + H_2 + H_3)^2 -3/2(H_1^2 + H_2^2 + H_3^2) ]+ξ[4(H_1 + H_2 + H_3)BḂ+3BB̈+ 3Ḃ^2]. These equations are used later for deriving various cosmological parameters. Moreover, using the condition ∇_μT_μν =0, we get the continuity relation as ρ̇ = -3H(ρ+p)-3 ξ B(HB+Ḃ)/κ[(ä_̈1̈/a_1 +ä_̈2̈/a_2+ä_̈3̈/a_3)-2 σ^2 ] -3 ξ B^2/κ[H^3 - 3H σ^2 -H_1 H_2 H_3- 2 σσ̇ + 1/3(⃛a_1/a_1 +⃛a_2/a_2+⃛a_3/a_3) ]. § FIELD EQUATIONS UNDER VACUUM EXPECTATION VALUE (VEV) CONDITION At VEV, the Bumblebee field equation holds the condition V =V'=0 and also it can be assumed that the bumblebee field is constant for a time-like vector and gives the relation B_μ B^μ = ± B_0^2 <cit.>. Thus, at VEV, while considering the average Hubble parameter expression (<ref>), the field equations (<ref>) and (<ref>) can be written as 3H^2 = κρ/(1-l)+σ^2(1- 2l)/(1-l), 3H^2 + 2Ḣ = -κ P/(1-l) + σ^2(1- 2l)/(1-l). Here, l = ξ B_0^2 is the Lorentz violation parameter and σ^2 = 1/2[(H_1^2+H_2^2+H_3^2) - 3H^2] is the shear scalar associated with the anisotropic spacetime. Using equations (<ref>) and (<ref>) we can obtain the effective equation of state and it takes the form: ω_eff = -3H^2+2Ḣ/3H^2=κ P -σ^2(1- 2l)/κρ + σ^2(1- 2l). In terms of ω_eff, the deceleration parameter (q) can be written as q = 1/2(1+3ω_eff). The continuity relation (<ref>) under the VEV condition takes the form: ρ̇ = -3H(ρ+p)-3l H/κ[ -3/2 H^2 - 3/2σ^2(2l-1/1-l) ] -3l/κ[H^3- 3H σ^2 -H_1 H_2 H_3- 2 σσ̇ + 1/3(⃛a_1/a_1 +⃛a_2/a_2+⃛a_3/a_3) ]. Now, taking the σ^2 ∝θ^2 condition <cit.> in which the θ is the expansion scalar, we can write, H_1 = α H_2 and H_1 = β H_3, where α and β are two proportionality constants. 
These lead to the average Hubble parameter to have a simplified form in terms of the x-directional parameter H_1 as H = α + β + αβ/3 αβ H_1 = 1/λ H_1 and also lead to the shear scalar in form as σ^2 = 3H^2 [3(α^2+β^2+αβ )- αβ (α+β + αβ)^2]/(α+β+αβ)=3H^2 η, where λ = (α + β + αβ)/3 αβ and η = [3(α^2+β^2+αβ )- αβ (α+β + αβ)^2]/(α+β+αβ). In view of this form of σ^2, equation (<ref>) takes the form: ρ̇ = -3Hρ[2-l(1+3η) - l(1-3η)/(1-l)] + 9lH^3/2κ[1 + η(2l-1/1-l) ] -3l H^3/κ[1 -27α ^2 β^2/(α + β + αβ)^3 + 18η^2(1-2l)/(1-l)+9(γ - η) ], where γ = 3(αβλ + η). With the consideration of p = ωρ along with equations (<ref>) and (<ref>), the above equation can further be rewritten as ρ̇ = -3Hρ[ (1+ω)-l{1+η+ω(1-η)}/(1-l)-l 1 -λ^3/αβ + 18η^2(1-2l)/(1-l)+9(γ - η)/(3+η) - l(3+2η) -3l /2 {(3+η) - l(3+2η)}{ 1 + η(2l-1/1-l) }]. This equation of continuity can be solved for the density ρ as ρ = ρ_0 a^-3 δ, where δ = (1+ω)-l{1+η+ω(1-η)}/(1-l)-l 1 -λ^3/αβ + 18η^2(1-2l)/(1-l)+9(γ - η)/(3+η) - l(3+2η) -3 l /2 [(3+η) - l(3+2η)][ 1 + η(2l-1/1-l) ]. Furthermore, from the equation (<ref>) we can write the Hubble parameter as H = √(κρ/3-η(1-2l)) . Again, taking the relation a = 1/(1+z) in which z is the cosmological redshift, we can rewrite the Hubble parameter in the form: H = H_0 √(E(z)), where E(z)=3[Ω_mo(1+z)^3δ_m+Ω_ro(1+z)^3δ_r+Ω_Λ 0(1+z)^3δ_Λ]/3-η(1-2l), with δ_m = 1-l(1+η)/(1-l)-l 1 -λ^3/αβ + 18η^2(1-2l)/(1-l)+9(γ - η)/(3+η) - l(3+2η) -3l /2 {(3+η) - l(3+2η)}[1 + η(2l-1/1-l) ], δ_r = 4-2l(2+η)/3(1-l)-l 1 -λ^3/αβ + 18η^2(1-2l)/(1-l)+9(γ - η)/(3+η) - l(3+2η) -3l/2 {(3+η) - l(3+2η)}[1 + η(2l-1/1-l) ], δ_Λ = -2l η/(1-l)-l 1 -λ^3/αβ + 18η^2(1-2l)/(1-l)+9(γ - η)/(3+η) - l(3+2η) -3 l /2 {(3+η) - l(3+2η)}[1 + η(2l-1/1-l) ]. Utilizing the expression of H, i.e. E(z), the distance modulus (D_m) can be obtained from the following formula: D_m = 5log d_L+ 25, where d_L is the luminosity distance and it has the mathematical form: d_L = (1+z)/H_0∫_0^∞dz/√(E(z)). Also, the equation (<ref>) for the effective equation of state can be rewritten as ω_eff = 1/3 Ω_r0(1+z)^3δ_r-Ω_Λ0(1+z)^δ_Λ-η (1-2l)H^2/H_0^2/Ω_m0(1+z)^δ_m+Ω_r0(1+z)^3δ_r+Ω_Λ0(1+z)^δ_Λ+η (1-2l) . With all these derivations, we are ready for the graphical visualization of all the cosmological parameters that have been expressed in equations (<ref>), (<ref>), (<ref>) and (<ref>). However, before doing that, we need to constrain different model parameters, like l,α, β, λ, etc. to obtain results consistent with the current observations which we have done in the next section. § PARAMETERS ESTIMATIONS AND CONSTRAINING For estimating and constraining the parameters, we have used the Bayesian inference technique which is based on Bayes' theorem. The theorem states that the posterior distribution 𝒫(ψ|𝒟,ℳ) of the parameter ψ for the model ℳ with cosmological data 𝒟 can be obtained as 𝒫(ψ|𝒟,ℳ) = ℒ(𝒟|ψ, ℳ) π (ψ|ℳ)/ℰ(𝒟|ℳ). Here, ℒ(𝒟|ψ, ℳ), π (ψ|ℳ) and ℰ(𝒟|ℳ) are the likelihood, the prior probability of the model parameters and the Bayesian evidence respectively. The Bayesian evidence can be obtained as ℰ(𝒟|ℳ) = ∫_ℳℒ(𝒟|ψ, ℳ) π (ψ|ℳ) dψ, in which the likelihood ℒ(𝒟|ψ, ℳ) can be considered as a multivariate Gaussian likelihood function and it takes the form <cit.>: ℒ(𝒟|ψ, ℳ) ∝exp[-χ^2(𝒟|ψ, ℳ)/2], where χ^2(𝒟|ψ, ℳ) is the Chi-squared function of the dataset 𝒟. For a uniform prior distribution π(ψ|ℳ) of the model parameters, we can simply be considered the posterior distribution as 𝒫(ψ|𝒟,ℳ) ∝exp[-χ^2(𝒟|ψ, ℳ)/2]. 
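The theoretical quantities that enter the χ^2 functions of the next subsection, namely H^th(z) and the luminosity distance d_L of equation (<ref>), follow directly from the expression for E(z) derived above. A minimal numerical sketch (with purely illustrative parameter values, not the constrained ones) is:

import numpy as np
from scipy.integrate import quad

# Illustrative parameter values only; the actual constraints come from the analysis below.
H0, Om0, Or0, OL0 = 67.0, 0.31, 8.5e-5, 0.69
l, eta = 0.005, 0.005
dm, dr, dLam = 1.0, 4.0 / 3.0, 0.005      # exponents delta_m, delta_r, delta_Lambda

def E(z):
    """E(z) of the anisotropic Bumblebee model (expression above)."""
    num = 3.0 * (Om0 * (1 + z) ** (3 * dm) + Or0 * (1 + z) ** (3 * dr)
                 + OL0 * (1 + z) ** (3 * dLam))
    return num / (3.0 - eta * (1.0 - 2.0 * l))

def hubble(z):
    return H0 * np.sqrt(E(z))

def luminosity_distance(z):
    c = 299792.458                         # km/s, so that d_L comes out in Mpc
    integral, _ = quad(lambda zp: 1.0 / np.sqrt(E(zp)), 0.0, z)
    return (1 + z) * c * integral / H0

def distance_modulus(z):
    return 5.0 * np.log10(luminosity_distance(z)) + 25.0

print(hubble(0.5), distance_modulus(0.5))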
We have used this technique to estimate various model and cosmological parameters with the help of various observational data sets in our current work. §.§ Data and their respective likelihoods Here we will use observational data of the Hubble parameter, BAO, CMB, and Pantheon supernovae type Ia from various sources to carry forward our analysis. §.§.§ Hubble parameter H(z) We have considered 57 observational H(z) data from the literature and tabulated them in Table <ref>. The chi-square function value for the mentioned dataset of H(z), denoted by χ^2_H, can be calculated as χ^2_H = ∑_n=1^57[H^obs(z_n)- H^th(z_n)]^2/σ^2_H^obs(z_n), where σ^2_H^obs(z_n) is the standard deviation of the nth observational H(z) data and H^th(z_n) is the theoretical value of H obtained from the considered cosmological model at z_n. §.§.§ BAO measurements Baryon acoustic oscillation (BAO) helps to understand the angular diameter distance between two points in the Universe in terms of redshift (z) and is also useful in studying the evolution of H(z). Usually, the BAO measurements provide the dimensionless ratio d of the comoving size of the sound horizon r_s at the drag redshift z_d = 1059.6 <cit.> to the volume-averaged distance D_v (z). Thus, d = r_s(z_d)/D_v (z), where r_s(z_d) and D_v (z) are expressed respectively as r_s(z_d) = ∫^∞_z_dc_s dz/H(z), D_v (z) = [(1+z)^2 D_A (z)^2 cz/H(z)]^1/3. The term c_s = c/√(3(1+ℛ)), appearing in equation (<ref>), is the sound velocity in the baryon-photon fluid. Here, ℛ = 3 Ω_b0/(4Ω_r0(1+z)) with Ω_b0 = 0.022 h^-2 <cit.>, Ω_r0 = Ω_γ 0(1+ 7/8 (4/11)^4/3N_eff) in which Ω_γ 0 = 2.469× 10^-5 h^-2 and N_eff = 3.046 <cit.>. Moreover, D_A in equation (<ref>) is the angular diameter distance which can be expressed as D_A = c/(1+z)∫^z_0dz/H(z), and c is the speed of light. In our work, we have tabulated 8 BAO data in Table <ref> from various works in the literature and computed the total chi-square value (χ_BAO^2) for them. The chi-square function value χ^2_d of the first five data sets of Table <ref> can be calculated by using the expression, χ^2_d = ∑_i=1^5[d^obs(z_i)- d^th(z_i)]^2/σ^2_d^obs(z_i), in which d^th(z_i) is the theoretical value of d for the considered cosmological model. For the remaining three datasets of the WiggleZ survey in Table <ref>, the chi-square value can be obtained by using the covariance matrix method. Here the inverse covariance matrix of the considered dataset can be written as <cit.> C^-1_w= [ 1040.3 -807.5 336.8; -807.5 3720.3 -1551.9; 336.8 -1551.9 2914.9 ]. From this inverse covariance matrix the chi-square value for the considered three WiggleZ survey datasets can be obtained as χ^2_w = D^TC^-1_wD, in which the matrix D can be written as D = [ d^obs(0.44)- d^th(0.44); d^obs(0.60)- d^th(0.60); d^obs(0.73)- d^th(0.73); ]. Thus, the total chi-square value χ^2_BAO can be obtained as χ^2_BAO = χ^2_d + χ^2_w. §.§.§ CMB data The CMB data include the angular scale of the sound horizon at the last scattering surface which is denoted by l_a and can be defined as l_a = πr(z^*)/r_s (z^*), where r(z^*) is the comoving distance to the last scattering surface and it can be measured as r(z^*) = ∫^z^*_0cdz/H(z), and r_s(z^*) is the size of the comoving sound horizon at the redshift of the last scattering (z^* = 1089.9). The observed value l^obs_a = 301.63 with uncertainty σ_l_a = 0.15 as per Ref. <cit.>. The chi-square value χ^2_CMB here can be evaluated as χ^2_CMB = [l^obs_a - l^th_a]^2/σ^2_l_a.
Here, l^th_a is the theoretical value of l_a for the considered cosmological model. §.§.§ Pantheon plus supernovae type Ia data The Pantheon data sample comprised 1048 observational data spanning the redshift range 0.001 < z < 2.3 taken from five subsamples, which include PS1, SDSS, SNLS, low-z and HST <cit.>. The Pantheon plus sample is the successor of the Pantheon sample and contains 1701 observational data from 18 different sources <cit.>. This Pantheon plus data compilation contains mainly the observed peak magnitude m_B and also the distance modulus D_m for different Type Ia supernovae (SN Ia). Theoretically, the distance modulus D_m can be calculated as D_m = 5 log_10d_L (z_hel,z_cmb)/10pc = 5 log_10d_L (z_hel,z_cmb)/1Mpc +25, in which z_hel is the heliocentric redshift, z_cmb is the redshift of the CMB rest frame and d_L is the luminosity distance. The theoretical luminosity distance can be calculated by using equation (<ref>). The chi-square value for the Pantheon plus dataset, denoted by χ^2_Pan+, can be obtained by using the equation as follows: χ^2_Pan+ = m^TC^-1m, where C is the total covariance matrix of m_B and the matrix m is obtained by the relation m = m_B - m_th with m_th = 5 log_10 D_L + M. Here, D_L = (1+z_hel) ∫^z_cmb_0H_0 dz/H(z), and M is the nuisance parameter whose value for the Pantheon dataset is 23.739^+0.140_-0.102 <cit.>. Further, the total covariance matrix C consists of the systematic covariance matrix C_sys and the diagonal covariance matrix of the statistical uncertainty D_stat <cit.>. §.§ Constraining the cosmological parameters In order to obtain observational constraints on the anisotropic cosmological model in Bumblebee gravity theory, we have considered a multivariate joint Gaussian likelihood of the form <cit.>: ℒ_tot∝exp(-χ^2_tot/2), in which χ^2_tot = χ^2_H + χ^2_BAO +χ^2_CMB+χ^2_Pan+. In this work, we have considered uniform prior distributions for all cosmological parameters and model parameters. The prior range of various parameters has been considered as follows: 55 < H_0 < 85, 0.1 < Ω_mo < 0.5, 0.00001 < Ω_ro < 0.0001, 0.6 < Ω_Λ 0< 1, 0.9 <δ_m < 1.05, 1.25 <δ_r< 1.75, 0.001<δ_l< 0.01, 0.001 <l< 0.01, and 0.001 <η< 0.01. Here the likelihoods are considered within the mentioned ranges such that the results should be consistent with the standard Planck 2018 data release. However, the parameter η is associated with the anisotropic characteristics of the Universe and hence its current value is expected to be very small. Further, the Lorentz violation parameter l = ξ B^2_0 is in general considered to be of the order of 10^-23 <cit.>, however, to observe its effect on the cosmological parameters like the Hubble parameter and the distance modulus we have taken its range as 0.001 to 0.01. The idea of taking higher values of the Lorentz violation parameter l in the Bumblebee theory of gravity for studying other physical systems like black holes can be found in Refs. <cit.>. This actually helps to study the physical system to understand the effect of Lorentz symmetry breaking on it. With these considerations, we have plotted one-dimensional and two-dimensional marginalized confidence regions (68% and 95% confidence levels) for the anisotropic Bumblebee model parameters H_0, Ω_mo, Ω_Λ0, η and l for the H(z) (DS-A), H(z) + Pantheon plus (DS-B), H(z) + Pantheon plus + BAO (DS-C) and H(z) + Pantheon plus + BAO + CMB (DS-D) datasets as shown in Fig. <ref>.
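The marginalized posteriors shown in Fig. <ref> can be drawn, for instance, with an affine-invariant ensemble sampler. The following is a schematic, self-contained sketch using the emcee package; the three H(z) points are mock values standing in for the full DS-D combination, and the simplified E(z) (with δ_m ≃ 1, δ_Λ ≃ 0 and radiation neglected) is only meant to illustrate how the likelihood defined above is sampled:

import numpy as np
import emcee

# Mock H(z) points for illustration only; the real analysis uses the 57 values of
# Table <ref> plus the BAO, CMB and Pantheon plus chi-squares described above.
Z_OBS = np.array([0.17, 0.40, 0.90])
H_OBS = np.array([83.0, 95.0, 117.0])
SIG_H = np.array([8.0, 17.0, 23.0])

PRIORS = [(55, 85), (0.1, 0.5), (0.6, 1.0), (0.001, 0.01), (0.001, 0.01)]  # H0, Om0, OL0, eta, l

def hubble_model(z, H0, Om0, OL0, eta, l):
    # Simplified E(z): delta_m ~ 1, delta_Lambda ~ 0, radiation neglected (sketch only).
    Ez = 3.0 * (Om0 * (1 + z) ** 3 + OL0) / (3.0 - eta * (1.0 - 2.0 * l))
    return H0 * np.sqrt(Ez)

def log_posterior(theta):
    if not all(lo < p < hi for p, (lo, hi) in zip(theta, PRIORS)):
        return -np.inf                       # flat priors within the quoted ranges
    chi2 = np.sum(((H_OBS - hubble_model(Z_OBS, *theta)) / SIG_H) ** 2)
    return -0.5 * chi2                       # L proportional to exp(-chi^2/2)

ndim, nwalkers = 5, 32
start = np.array([67.0, 0.3, 0.7, 0.005, 0.005])
p0 = start * (1 + 1e-3 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 3000, progress=True)
samples = sampler.get_chain(discard=500, flat=True)   # posterior samples for corner plots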
Table <ref> shows the constraints (68% and 95% confidence level) on the anisotropic Bumblebee model parameters along with the ΛCDM model parameters from the different available datasets. From Table <ref> and Fig. <ref>, we found that the tightest constraint can be obtained from the DS-D dataset, i.e. joint dataset of H(z) + Pantheon plus + BAO + CMB on all the cosmological parameters for both the anisotropic Bumblebee model and the ΛCDM model. With the use of Table <ref>, we have tried to compare the H_0, Ω_m0, Ω_Λ0 and Ω_r0 parameters for both the models for different dataset combinations within 68% confidence intervals as shown in Fig. <ref>. The shift of the parameters' values from the standard ΛCDM model due to anisotropic background and bumblebee field for different combinations of dataset are clearly observed in the plots of these figures. The largest deviations of the cosmological parameters' from DS-A, DS-B, DS-C and DS-D dataset are compiled in Table <ref> for both the standard ΛCDM model and the anisotropic Bumblebee model. From this Table, we can conclude that the deviations are higher in the ΛCDM model in comparison to the anisotropic Bumblebee model. Moreover, we have tried to compare the Hubble parameter versus cosmological redshift variations for both models with the parameters constrained for the DS-D dataset (H(z) + Pantheon plus + BAO + CMB dataset) found from the Table <ref> as shown in Fig. <ref>. The plot shows that for the estimated values of cosmological parameters, the Hubble parameter is consistent with the observational data. However, the anisotropic Bumblebee model shows deviations from the standard ΛCDM plot with the increase of cosmological redshift z. Thus, from this figure, we have found that the expansion rate of the early Universe for the anisotropic Bumblebee model is slower in comparison to that for the standard ΛCDM model. Similarly, we have plotted the distance modulus (D_m) against cosmological redshift z in Fig. <ref> for the both ΛCDM and anisotropic Bumblebee model along with the distance modulus residue relative to anisotropic Bumblebee model in the logarithmic z scale for the constrained set of model parameters as mentioned in the H(z) vs z plot. The plot shows that like ΛCDM results, the distance modulus for the anisotropic Bumblebee model is consistent with the observational Pantheon plus data obtained from different Type Ia supernovae (SN Ia) for the constrained set of model parameters listed in Table <ref>. Furthermore, the plot of distance modulus residue also shows that the model is consistent with observational data. We have plotted the effective equation of state ω_eff from equation (<ref>) against cosmological redshift z for the constrained set of parameters mentioned in Table <ref> for both ΛCDM and anisotropic Bumblebee model in the left panel of Fig. <ref>. The plot shows deviations of the anisotropic Bumblebee model results from the standard ΛCDM results for higher values of z, indicating the role of anisotropy and bumblebee field effect in the early stages of the Universe. Apart from the ω_eff vs z plot, we have also plotted the deceleration parameter (q) using equation (<ref>) against the cosmological redshift z in the right plot of Fig. <ref> for both anisotropic Bumblebee and standard ΛCDM models. In this case also, the anisotropic Bumblebee model results show deviations from the standard ΛCDM results for the higher value of z, again indicating the role of anisotropy and the effect of Bumblebee field in the early Universe. 
However, both plots show that anisotropic Bumblebee cosmology is consistent with the standard cosmology for z = 0, i.e. in the present time. § EFFECT OF ANISOTROPY AND BUMBLEBEE FIELD ON COSMOLOGICAL EVOLUTION In this section, we want to investigate how cosmological evolution can be affected by a considering Bumblebee field in the presence of anisotropy and compare it with the standard ΛCDM results. To this end, we have plotted the density parameters of matter (Ω_m), radiation (Ω_r) and dark energy (Ω_Λ) variation with the cosmological redshift for both ΛCDM and anisotropic Bumblebee models indicating the specific features in the plots as shown in Fig. <ref>. From Fig. <ref>, we have found that due to the presence of anisotropy and Bumblebee field, the value of (1+z) for the transition from radiation-dominated to matter-dominated phase in the Universe has been shifted from 7000 of standard ΛCDM model to 8500 anisotropic Bumblebee model. Thus the anisotropic Bumblebee model suggests an elongated matter-dominated phase. Again, there is a clear sign of an anisotropic effect in terms of obtaining the maximum value of density parameters in the anisotropic Bumblebee model, as its value is much lower than the standard ΛCDM model. In our analysis, we have found that the difference of maximum value from unity for Ω_m in ΛCDM model is 0.0019, while in the case of anisotropic bumblebee model, the difference is 0.037. Thus, it indicates that there are some effects of anisotropy and bumblebee field on the evolution of the matter-dominated as well as other phases in the Universe. Thus, we can simply say that there are some roles of anisotropy and Bumblebee fields in cosmic evolution. § DYNAMICAL SYSTEM ANALYSIS OF ANISOTROPIC BUMBLEBEE MODEL In this section, for a dynamical system analysis of anisotropic Bumblebee model we have considered the density parameters Ω_m = κρ_m/3H^2, Ω_r = κρ_r/3 H^2 and Ω_Λ = Λ/3 H^2 as dynamical variables and then we have rewritten the equations (<ref>) and (<ref>) as Ω_m + Ω_r + Ω_Λ + η (1 - 2l) + l = 1, Ḣ/H^2 = -3/2(1-l)[1/3Ω_r -Ω_Λ - η(1 - 2l) + (1-l)]. Further, we have renamed the parameter Ω_m = x and Ω_r = y and derived the derivative for each of them with respect to N = loga as dx/dN = Ω_m/(1-l)[3Ω_m + 4Ω_r - 3 δ_m], dy/dN = Ω_m/(1-l)[3Ω_m + 4Ω_r - 3 δ_r]. Moreover, the effective equation of state and the deceleration parameter can further be written as ω_eff = -1 +1/(1-l)[4/3Ω_r +Ω_m], q = -1 +3/2(1-l)[4/3Ω_r + Ω_m]. We have compared the critical point analysis for the considered Bumblebee model with the standard ΛCDM results in Table <ref>. The phase space portrait for the anisotropic Bumblebee model also shows the heteroclinically connected radiation, matter, and dark energy phases as predicted by the standard ΛCDM model (see Fig. <ref>). However, in contrast to the standard ΛCDM model, the anisotropic Bumblebee model shows that the phases have some anisotropic and Bumblebee field contributions as the critical point solutions P_1a and P_2a contains no exact unity value. Thus, from this analysis, we can say that both anisotropy and Bumblebee may have some contributions to cosmic evolution. § CONCLUSIONS In this work, we have considered B_μ = (B(t),0,0,0) Bumblebee vector model in Bianchi type I Universe and trying to understand its effect on the cosmological parameters and cosmological evolutions of the Universe. 
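As a quick numerical companion to this analysis, the autonomous system and the ω_eff and q expressions above can be coded up directly to trace trajectories for the phase portrait. The following Python sketch is illustrative only: the parameter values are placeholders rather than the constrained values of Table <ref>, and the right-hand sides are transcribed exactly as the expressions are printed in this section.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (placeholders, not the best-fit values of Table <ref>)
l_lv, delta_m, delta_r = 0.005, 1.0, 1.5

def rhs(N, state):
    """dx/dN and dy/dN with x = Omega_m, y = Omega_r and N = ln a,
    transcribed from the expressions quoted in this section."""
    x, y = state
    dx = x / (1.0 - l_lv) * (3.0 * x + 4.0 * y - 3.0 * delta_m)
    dy = x / (1.0 - l_lv) * (3.0 * x + 4.0 * y - 3.0 * delta_r)   # prefactor as printed in the text
    return [dx, dy]

def w_eff(x, y):
    """Effective equation of state along a trajectory."""
    return -1.0 + (4.0 * y / 3.0 + x) / (1.0 - l_lv)

def deceleration(x, y):
    """Deceleration parameter along a trajectory."""
    return -1.0 + 1.5 * (4.0 * y / 3.0 + x) / (1.0 - l_lv)

def trajectory(x0, y0, N_span=(0.0, 12.0)):
    """Integrate one phase-space trajectory of (Omega_m, Omega_r) from (x0, y0)."""
    sol = solve_ivp(rhs, N_span, [x0, y0], rtol=1e-8, atol=1e-10)
    return sol.t, sol.y[0], sol.y[1]
```

Plotting several such trajectories for different initial conditions reproduces the kind of phase portrait discussed around Fig. <ref>.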
Further, we have considered the Universe as a dynamical system and drawn the phase portrait to understand the sequence of different phases of the Universe. We have started our work by deriving the general form of field equations for the considered model and metric as mentioned above and also obtained the continuity equations in Section <ref>. In the next section, we have considered the vacuum expectation value (VEV) condition in the field equations of Section <ref> and also obtained the continuity equations for the considered condition. With the help of the field equations, we have further obtained the cosmological parameters like the Hubble parameter (H(z)), luminosity distance (d_L), distance modulus (D_m), etc., and carry forward to study these parameters in our next sections. In Section <ref>, we have constrained our various model parameters that have appeared in the cosmological parameters by using the technique of Bayesian inference. Here, we have used various data compilations for Hubble parameter H(z), Supernovae Type Ia data (Sn Ia), Bao, CMB data, etc. to estimate various model parameters and cosmological parameters which are compiled in Table <ref>. This table also includes estimated values of cosmological parameters of ΛCDM model. With these sets of parameters, we have compared the values of H_0, Ω_m0, Ω_r0 and Ω_Λ0 for both the models within the 68% confidence interval. Moreover, we have plotted the Hubble parameter, and distance modulus along with distance modulus residue with cosmological redshift for the constrained values of model parameters as mentioned above along with the ΛCDM results, and found that the anisotropic Bumblebee model shows good agreement with standard cosmology and consistent with observational data. However, the Hubble parameter shows deviations from the standard ΛCDM results as the z value increases. Furthermore, the effective equation of state and deceleration parameters are also plotted against cosmological redshift z. Both the plots show good agreement with standard cosmology in the current scenario while, for higher z, the anisotropic Bumblebee model shows deviations from the standard results, hence indicating the effect of anisotropy and Bumblebee field in the early Universe. In Section <ref>, we have studied the effect of anisotropy and bumblebee field in various density parameters' evolution. Here, we have compared our results with the standard ΛCDM model and find that there is a shift of the transition point from the radiation-dominated to the matter-dominated phase of the Universe. In our study, we have found that the value of cosmological redshift (z) for the transition from radiation to matter phase is around 7000 for ΛCDM model while for the anisotropic bumblebee model, this value shifted to around 8500. Further, we have noticed that the density parameter value never reached unity, hence no pure matter or radiation era, but a mixed state of matter, radiation, and dark energy era. Thus, it is better to say matter-dominated or radiation-dominated state rather than the pure state. Also, we have found that the maximum value of these density parameters is quite lower than standard ΛCDM results. Thus there must be some role of anisotropy and bumblebee field in the cosmic evolution of the Universe. In Section <ref>, we have considered the Universe as a dynamical system and hence considered its density parameters as dynamical variables. Subsequently, we studied its stability point analysis for both ΛCDM and anisotropic Bumblebee models. 
From this anaysis, we have found that the anisotropic Bumblebee model also shows that there are heteroclinically connected radiation-dominated, matter-dominated, and dark energy-dominated phases of the Universe as suggested by standard cosmology, however, there is a shift of critical points from the unity value in anisotropic bumblebee model, which also confirmed the role of anisotropy and Bumblebee field on cosmic evolution as mentioned in Section <ref>. Finally, in conclusion, we have observed that with the consideration of anisotropy and the Bumblebee field, the evolution of the Universe i.e. the evolution of the various phases of the Universe is somehow affected and hence may provide an interesting scenario of cosmic expansion. More observational data on the early Universe may help us to understand this scenario more clearly in the near future. § ACKNOWLEDGEMENT UDG is thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for the Visiting Associateship of the institute. 99 Pebbles_1994 P. J. E. Peebles, Principles of Physical Cosmology (Princeton University Press)(1994). Riess A. G. Riess et al, Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant, Astron. J. 116, 1009 (1998). Perlmutter_1999 S. Perlmutter et al.,Measurements of Ω and Λ from 42 High-Redshift Supernovae, Asto. Phys. J 517, 565 (1999). Ma_2011 C. Ma and T.-J. Zhang, Power of observational Hubble parameter data: A figure of merit exploration, Astro. Phys. J. 730 74 (2011) Trimble_87 V.Trimble, Existence and nature of dark matter in the Universe, Annu. Rev. Astron. Astrophys. 25, 425 (1987). Frieman_2008 J. A. Frieman, M. S. Turner and D. Huterer, Dark Energy and the Accelerating Universe, Annu. Rev. Astron. Astrophys. 46, 385 (2008). Capelo_2015 D. Capelo, J. Pármos, Cosmological implications of bumblebee vector model, Phys. rev. D.91, 104007 (2015). Kostelecky_1989 V. A. Kostelecky and S. Samuel, Gravitational phenomenology in higher-dimensional theories and strings, Phys. Rev. D 40, 1886 (1989). Kostelecky_1989a V. A. Kostelecky and S. Samuel, Spontaneous breaking of Lorentz symmetry in string theory, Phys. Rev. D 39, 683 (1989). Bertolami_2005 O. Bertolami and J. Páramos, Vacuum solutions of a gravity model with vector-induced spontaneous Lorentz symmetry breaking, Phys. Rev. D 72, 044001 (2005). Felice_2010 A. De. Felice and S. Tsujikawa, A Cyclic Cosmological Model Based on the f(ρ) Modified Theory of Gravity, Living. rev. Relativity 13, 3 (2010). Bertolami_2014 O. Bertolami and J. Páramos, Minimal extension of General Relativity: alternative gravity model with non-minimal coupling between matter and curvature, Int. J. Geom. Methods Mod. Phys. 11, 1460003 (2014). Zlatev_1999 I. Zlatev, L. M. Wang and P. J. Steinhardt, Quintessence, cosmic coincidence, and the cosmological constant, Phys. Rev. Lett. 82, 896 (1999). Tartagila_2007 A. Tartagila and N. Radicella, Vector field theories in cosmology, Phys. Rev. D. 76, 083501 (2007). Armendariz_2009 C. Armendariz-Picon and A. Diez-Tejedor, Aether Unleashed, J. Cosmol. Astropart. Phys. 12, 018 (2019). Kostelecky_2004V. A. Kostelecky, Gravity, Lorentz violation, and the standard model, Phys. Rev. D. 69, 105009 (2004). Bluhm_2005R. Bluhm and V. A. Kostelecky, Spontaneous Lorentz violation, Nambu-Goldstone modes, and gravity, Phys. Rev. D. 71, 065008 (2005). Bluhm_2008 R. Bluhm, S. H. Fung and V. A. Kostelecky, Spontaneous Lorentz and diffeomorphism violation, massive modes, and gravity, Phys. Rev. 
D 77, 065020 (2008). Kostelecky_2005 B. Altschul and V. Kostelecky, Spontaneous Lorentz violation and nonpolynomial interactions, Phys. Lett. B. 628, 106 (2005). Kostelecky_2009 V. A. Kostelecky and R. Potting, Gravity from spontaneous Lorentz violation,Phys. Rev. D. 79, 065018 (2009). Kostelecky_2005a V. A. Kostelecky and R. Potting, Gravity from local Lorentz violation, Gen. Relativ. Garvit. 37, 1675 (2005). Gogoi_2022 D. J. Gogoi and U. D. Goswami, Quasinormal modes and Hawking radiation sparsity of GUP corrected black holes in bumblebee gravity with topological defects, J. Cosmol. Astropart. Phys. 06, 029 (2022). Karmakar_2023 R. Karmakar and U. D. Goswami, Thermodynamics and shadows of GUP-corrected black holes with topological defects in Bumblebee gravity, Phys. Dark. Univ. 41, 101249 (2023). Planck_2018 N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. & Astrophys. 641, A6 (2020). Nedelco_2021 A. Chudaykin, D. Gorbunov and N. Nedelko, Exploring an early dark energy solution to the Hubble tension with Planck and SPTPol data, Phys.Rev. D. 103, 043529 (2021). Velten_2014H. E. S. Velten, R. F. vom Marttens and W. Zimdahl, Aspects of the cosmological "coincidence problem", Eur. Phys. J. C. 74, 3160 (2014). wmapG. Hinshaw et al., First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: The Angular Power Spectrum, Astrophys. J. Suppl. 148, 135 (2003). wmap1G. Hinshaw et al., Three-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Temperature Analysis, Astrophys. J. Suppl. 170, 288 (2007). wmap2 G. Hinshaw et al., Five-Year Wilkinson Microwave Anisotropy Probe Observations: Data Processing, Sky Maps, and Basic Results, Astrophys. J. Suppl. 180, 225 (2009). SDSS_2005 D. J. Eisenstein et. al., Detection of the Baryon Acoustic Peak in the Large-Scale Correlation Function of SDSS Luminous Red Galaxies, Astrophys. J. 633, 560 (2005). Bessett_2009 B. A. Bessett & R. Hlozek, Baryon Acoustic Oscillations, [arXiv:0910.5224] (2009). Tully_2023R. B. Tully, C. Howlett and D. Pomarède., Ho'oleilana: An Individual Baryon Acoustic Oscillation?, Astrophys. J. 954, 169 (2023). Planck_2015 P. A. R. Ade et al., Planck 2015 results XIII. Cosmological parameters, Astron. Astrophys. 594, A13 (2016). Tedesco_2006 L. Campanelli, P. Cea, L. Tedesco, Ellipsoidal Universe Can Solve the Cosmic Microwave Background Quadrupole Problem, Phys. Rev. Lett. 97, 131302 (2006). akarsu_2010 Ö. Akarsu, C. B. Kılıç, Bianchi Type-III Models with Anisotropic Dark Energy,Gen. Rel. Grav 42, 109 (2010). Sarmah_2022 P. Sarmah, U. D. Goswami, Bianchi Type I model of the universe with customized scale factor, Mod. Phys. Lett. A 37, 21 (2022). Cea_2022 P.Cea, The Ellipsoidal Universe and the Hubble tension, [arXiv:2201.04548]. Perivolaropoulos_2014 L. Perivolaropoulos, Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy, Galaxies 2(1), 22-61 (2014). Berera_2004 A. Berera, R.V. Buniy, and T.W. Kephart, The eccentric universe, J. Cosmol. Astropart. Phys. 10, (2004) 016. Campanelli_2006 L. Campanelli, P. Cea, L. Tedesco, Ellipsoidal Universe Can Solve the Cosmic Microwave Background Quadrupole Problem, Phys. Rev. Lett. 97, 131302 (2006). Campanelli_2007 L. Campanelli, P. Cea, L. Tedesco, Cosmic microwave background quadrupole and ellipsoidal universe, Phys. Rev. D 76, 063007 (2007). Paul_2008 B. C. Paul and D. Paul, Anisotropic Bianchi-I universe with phantom field and cosmological constant,Pramana J. Phys. 71, 6 (2008). Barrow_1997 J. D. 
Barrow, Cosmological limits on slightly skew stresses ,Phys Rev. D. 55, 7451 (1997). Hossienkhani_2018 H. Hossienkhani, H. Yousefi and N. Azimi, Anisotropic behavior of dark energy models in fractal cosmology,Int. J. Geom. Methods Mod. Phys. 15 1850200 (2018). akarsu_2019Ö. Akarsu, S. Kumar, S. Sharma, and L. Tedesco, Constraints on a Bianchi type I spacetime extension of the standard ΛCDM model, Phys. Rev. D 100, 023532 (2019). Sarmah_2023 P. Sarmah, A. De and U. D. Goswami, Anisotropic LRS-BI Universe with f(Q) gravity theory, Phys. Dark. Univ. 40, 101209 (2023). Sarmah_2024 P. Sarmah, U. D. Goswami, Dynamical system analysis of LRS-BI Universe with f (Q) gravity theory, Phys. Dark. Univ. 46, 101556 (2024). espo F. Esposito et al.,Reconstructing isotropic and anisotropic f(Q) cosmologies, Phys. Rev. D 105, 084061 (2022). Mai_2023 Z. F. Mai et al., Extended thermodynamics of the bumblebee black holes, Phys. Rev. D. 108, 024004 (2023). Gu_2022 J. Gu et al., Probing bumblebee gravity with black hole X-ray data, Eur. Phys. J. C, 82, 708 (2022). Saka_2023 Í. Sakalli, and E. Yörük, Modified Hawking radiation of Schwarzschild-like black hole in bumblebee gravity model, Phys. Scr., 98, 125307 (2023). Maluf_2021 R. V. Maluf and J. C. S. Neves, Bumblebee field as a source of cosmological anisotropies, J. Cosmol. Astropart. Phys.10, 038,(2021). Neves_2023 J. C. S. Neves, Kasner cosmology in bumblebee gravity, Annal. Phys. 454, 169338 (2023). Zhang_2014 C. Zhang et al., Four New Observational H(z) Data From Luminous Red Galaxies of Sloan Digital Sky Survey Data Release Seven, Res. Astron. Astrophys 14, 1221 (2014). Simon_2005 J. Simon et al., Constraints on the redshift dependence of the dark energy potential, Phys. Rev. D 71, 123001 (2005). Moresco_2012 M. Moresco et al., Improved constraints on the expansion rate of the Universe up to z ∼ 1.1 from the spectroscopic evolution of cosmic chronometers, JCAP 08, 006 (2012). Gaztanaga_2009 E. Gaztan̄aga et al., Clustering of luminous red galaxies-IV. Baryon acoustic peak in the line-of-sight direction and a direct measurement of H(z) ,MNRAS 399, 1663 (2009). Oka_2014 A. Oka et al., Simultaneous constraints on the growth of structure and cosmic expansion from the multipole power spectra of the SDSS DR7 LRG sample, Mon. Not. Roy. Astron. Soc. 439, 2515 (2014). Wang_2017 Y. Wang et al.,The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: tomographic BAO analysis of DR12 combined sample in configuration space, Mon. Not. Roy. Astron. Soc. 469, 3762 (2017). Xu_2013 X. Xu et al., Measuring D_A and H at z = 0.35 from the SDSS DR7 LRGs using baryon acoustic oscillation, Mon. Not. Roy. Astron. Soc. 431, 2834 (2013). Alam_2017 S. Alam et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological analysis of the DR12 galaxy sample, Mon. Not. Roy. Astron. Soc. 470, 2617 (2017). Moresco_2016 M. Moresco et al., A 6% measurement of the Hubble parameter at z∼ 0.45: direct evidence of the epoch of cosmic re-acceleration, JCAP 05, 014 (2016). Blake_2012 C. Blake et al., The WiggleZ dark energy survey: Joint measurements of the expansion and growth history at z < 1, Mon. Not. Roy. Astron. Soc. 425, 405 (2012). Ratsimbazafy_2017 A. L. Ratsimbazafy et al., Age-dating luminous red galaxies observed with the Southern African Large Telescope,Mo. Not. Roy. Astron. Soc. 467, 3239 (2017). Samushia_2013 L. 
Samushia et al., The clustering of galaxies in the SDSS-III DR9 Baryon Oscillation Spectroscopic Survey: testing deviations from Λ and general relativity using anisotropic clustering of galaxies, Mon. Not. Roy. Astron. Soc. 429, 1514 (2013). Chuang_2013 C. H. Chuang et al., The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: single-probe measurements and the strong power of f(z)σ_8(z) on constraining dark energy ,Mon. Not. Roy. Astron. Soc. 433, 3559 (2013). Moresco_2015 M. Moresco, Raising the bar: new constraints on the Hubble parameter with cosmic chronometers at z ∼ 2, Mon. Not. Roy. Astron. Soc. 450, L16 (2015). Busca_2013 N. G. Busca et al., Baryon acoustic oscillations in the L_yα forest of BOSS quasars, Astron. Astrophys. 552, A96 (2013). Bautista_2017 J. E. Bautista et al., Measurement of baryon acoustic oscillation correlations at z =2.3 with SDSS DR12 L_yα - Forests, Astron. Astrophys. 603, A12 (2017). Delubac_2015 T. Delubac et al., Baryon acoustic oscillations in the L_yα forest of BOSS DR11 quasars,Astron. Astrophys. 574, A59 (2015). Ribera_2014 A. F. Ribera et al., Quasar-Lyman α forest cross-correlation from BOSS DR11: Baryon Acoustic Oscillations, J. Cosmol. Astropart. Phys. 05, 027 (2014). Cooke_2016 J. R. Cooke et al., The primordial deuterium abundance of the most metalpoor damped Lyα system, Astrophys. J., 496, 605 (1998). Dodelson_2003 S. Dodelson, Modern Cosmology (Academic Press, New York) (2003). Beutler_2011 F. Beutler et al., The 6dF galaxy survey: Baryon acoustic oscillation and the local Hubble constant, Mon. Not. R. Astron. Soc. 416, 3017 (2011). Ross_2015 A. J. Ross et al, The clustering of the SDSS DR7 main Galaxy sample- I. A 4 percent distance measure at z = 0.15, Mon. Not. R. Astron. Soc. 449, 835 (2015). Padmanabhan_2012 N. Padmanabhan et al., A 2 % distance to z = 0.35 by reconstructing baryon acoustic oscillations I. Methods and application to the Solan digital sky survey, Mon. Not. R. Astron. Soc. 427, 2132 (2012). Anderson_2014 L. Anderson et al., The clustering of galaxies in the SDSS-III Baryon oscillation spectroscopic survey: Baryon acoustic oscillations in the data release 10 and 11 Galaxy samples, Mon. Not. R. Astron. Soc. 441, 24 (2014). Betoule_2014 M. Betoule et. al, Improved cosmological constraints from a joint analysis of the SDSS-II and SNLS supernova samples, Astron. Astrophys. 568, A22 (2014). Scolnic_2022 D. Scolnic et al., The Pantheon+ Analysis: The Full Data Set and Light-curve Release, Astropys. J., 938, 113 (2022). Zhao_2019 D. Zhao, Y. Zhou, and Z. Chang, Anisotropy of the Universe via the Pantheon supernovae sample revisited, 486, 5679 (2019). Scolnic_2018 D. Scolnic et al., The complete light-curve sample of spectroscopically confirmed SNe Ia from Pan-STARRS1 and cosmological constraints from the combined Pantheon sample, Astrophys. J. 859, 101 (2018). Paramos_2014 J. Páramos and G. Guiomar, Astrophyisical Constraints on the Bumblebee model, Phys. Rev. D 90, 082002 (2014). apsrev
http://arxiv.org/abs/2407.13429v1
20240718115434
Towards Dynamic Feature Acquisition on Medical Time Series by Maximizing Conditional Mutual Information
[ "Fedor Sergeev", "Paola Malsot", "Gunnar Rätsch", "Vincent Fortuin" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Towards Dynamic Feature Acquisition on Medical Time Series by Maximizing Conditional Mutual Information Fedor Sergeev, Paola Malsot, Gunnar Rätsch, Vincent Fortuin =============================================================== § ABSTRACT Knowing which features of a multivariate time series to measure and when is a key task in medicine, wearables, and robotics. Better acquisition policies can reduce costs while maintaining or even improving the performance of downstream predictors. Inspired by the maximization of conditional mutual information, we propose an approach to train acquirers end-to-end using only the downstream loss. We show that our method outperforms a random acquisition policy and matches a model with an unrestrained budget, but does not yet overtake a static acquisition strategy. We highlight the assumptions and outline avenues for future work. § INTRODUCTION In the medical setting, clinicians often need to monitor patients over time during their hospital stay, especially in Intensive Care Units <cit.>. They try to improve the patient's state by administering drugs while relying on continuous measurements of vital signs (e.g., heart rate) and occasional lab tests (e.g., blood tests, X-rays). While the continuous measurements are automatic and practically free, performing lab tests takes the clinical staff's time and incurs additional costs. We aim to develop a method for recommending which lab tests to perform in order to best monitor the patient's state while decreasing workload and costs. More formally, the hospital stay of a patient i can be represented as a multivariate (or even multi-modal) time series x^i={x^i_t,f}, with the features f at time t being the values of either vital signs, lab tests, or administered drugs. Usually, these data are used for time series classification (e.g., mortality prediction), early event prediction (e.g., circulatory failure prediction), or intervention recommendation <cit.>. We consider the Dynamic Feature Acquisition (DFA) task: based on an observed patient state {x^i_t,f}_t ≤τ at time τ, recommend which feature(s) f should be measured at some future time τ' at known cost c_τ,f (see <ref>). The aim is to reduce the total measurement cost ∑_t,f c_t,f while maintaining or even improving the performance of a downstream predictor. DFA is also relevant for wearables (e.g., extending battery life by reducing the number of sensor activations) <cit.>, active perception in robotics <cit.>, and efficient video classification <cit.>. Our contributions are: * We propose a novel CMI-based approach for DFA. It is compatible with clinically relevant downstream prediction tasks and can be trained end-to-end. * We test on benchmark time series classification datasets with fake features and show that our method outperforms random acquisition, matches complete acquisition, but falls short of static selection methods. Icons: Rockicon, Dilich, Lorc, CC BY 3.0 https://creativecommons.org/licenses/by/3.0, via Wikimedia Commons. § PROBLEM SETTING Let us assume that the time series are regular, indexed with t ∈{0, …, T^i}, where T^i is the length of x^i. We consider the case when an acquisition recommendation is made for features that will become available at the next time step (“next-step” assumption): τ' = τ + 1. For simplicity, we assume that the measurement cost is constant over time and features: c_t,f =: c. Without loss of generality, we set c = 1. Similarly to <cit.>, we assume that the data are fully observed. We set a budget for the total acquisition cost. For static data, the budget is usually given per sample. 
For time series, a budget per time step b(x_t, t) should be predicted from the sample budget. In our experiments, we consider a simplified scenario, when it is constant and given a priori: b(x_t, t) = b. The acquisition and prediction cycle under these assumptions is shown in <ref>. Here, an acquirer is a model that at each time step τ outputs the acquisition vector m_τ. It is a binary vector with ones indicating which features should be acquired at the next time step τ + 1. Since the data are fully observed, we imitate the measurement procedure with an element-wise product. The measured data are then passed to the classifier. Additionally, the acquirer and classifier may have access to each other's internal state. We discuss the assumptions and provide pseudocode of the DFA cycle in <ref>. Note that the models here are not limited to recurrent architectures: time steps can be accumulated, and the classifier (e.g., a transformer) reapplied <cit.>. At the same time, by having the classifier receive new data at each time step, we allow for it to be used for both classification and early event prediction (tasks that are relevant for data from ICU and wearables). From this point on, we consider only classification objectives. § METHOD DFA usually follows one of two approaches: use the cost estimate as a penalty function to train the acquisition model using reinforcement learning <cit.>, or use some acquisition function to rank and select the most meaningful features <cit.>. The latter approach often uses CMI. Estimating it directly (e.g., using a partial variational autoencoder) can be challenging <cit.>. Instead, CMI can be approximated <cit.>. We discuss related work in <ref>. In <cit.>, the authors use a neural network and Categorical distribution to sequentially predict the feature with the largest CMI, and perform (greedy) selection on static data. Similarly, we use the acquirer neural network to predict logits of the approximate CMI at each time step of the time series (see <ref>). We then iteratively (until the budget b is reached) sample a one-hot vector indicating the selected feature using the Gumbel-Softmax <cit.>. To avoid selecting the same feature twice, we subtract a penalty vector from the acquirer's output. This approach is differentiable and therefore can be trained end-to-end via backpropagation using the classification loss. Further details are available in <ref>. § EXPERIMENTS We test the proposed method on the FordA and SpokenArabicDigits datasets from the UCR and UEA time series classification archives <cit.>. The data summary and samples are shown in <ref>. We consider balanced classification and use accuracy as the performance metric. By default, no features are considered observed; they all have to be explicitly acquired. The FordA dataset is univariate, so, to imitate a multivariate dataset, we take m=10 consecutive time steps from FordA and set them as one time step with m features of a new m-FordA dataset (short for multivariate or multi-step FordA). In contrast, SpokenArabicDigits is multivariate and variable length by design. §.§ Fake features The features in these datasets are quite similar (i.e., measured by the same device). Therefore it is not obvious whether one feature is more informative than another. To reliably test whether our model learns to acquire the right features, we add 30 fake features that do not hold any information about the class label (see <ref>). 
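The budgeted selection step described in the Method section can be sketched in PyTorch as follows. This is a minimal illustration, not the reference implementation: the way the observation, mask, and time step are concatenated, the temperature, and the exact penalty form are assumptions (the appendix quotes a penalty of the form 100 · m_t · |l|).

```python
import torch
import torch.nn.functional as F

def acquisition_step(acquirer, obs, mask_prev, budget, t, tau=1.0, penalty_scale=100.0):
    """One acquisition step: draw `budget` (approximately one-hot) feature
    indicators with Gumbel-Softmax, penalising already-selected features so the
    same feature is not requested twice. The step is differentiable, so the
    downstream classification loss can be backpropagated into the acquirer."""
    # The acquirer sees the last masked observation, the previous mask and the time step.
    step_col = torch.full_like(obs[..., :1], float(t))
    logits = acquirer(torch.cat([obs, mask_prev, step_col], dim=-1))  # (batch, n_features)
    mask_next = torch.zeros_like(logits)
    for _ in range(budget):
        penalised = logits - penalty_scale * mask_next * logits.abs()
        one_hot = F.gumbel_softmax(penalised, tau=tau, hard=True)     # straight-through sample
        mask_next = torch.clamp(mask_next + one_hot, max=1.0)
    return mask_next   # binary-valued acquisition mask m_{t+1}, with gradients
```

In this sketch the hard Gumbel-Softmax keeps the forward pass discrete while its straight-through estimator carries gradients back to the acquirer, which is the key design choice that allows end-to-end training from the classification loss alone.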
We test three different varieties of fake features: zeros, Gaussian noise, and samples from a Gaussian process (GP). We set a constant budget per time step of b = 5 and compare our method to a random acquisition policy (selects b features at random at each step) and a complete acquisition policy (selects all features at each step). This means that the complete acquirer obtains 8 times more features than the other two. The classification accuracy on SpokenArabicDigits is presented in <ref>, and the acquisition patterns for zeros on m-FordA are presented in <ref>. Other results, samples, training and implementation details are available in <ref>. Our acquirer consistently outperforms the random acquisition policy, and often even matches the performance of the complete acquirer. The acquisition patterns show that our acquire starts selecting the real features (notice the horizontal lines), although still occasionally sampling fake features. Additionally, we note that in some cases, the complete acquirer exhibits overfitting, while our acquirer avoids it (e.g., for noise fake features on m-FordA, shown in <ref>). §.§ Shifted fake features To test whether the learned acquisition is dynamic, we shift the real features so that the acquisition pattern would have to change over time (see <ref>). The dynamic policy should be able to learn that shift, while a static acquirer will only select the same set of features throughout the time series. We use a random forest (RF) as a static feature selection baseline, as it has been used for feature importance analysis of ICU data <cit.>. The results and the acquisition patterns are shown in <ref>. Our acquirer outperforms the random policy, but is outperformed by the static policy. The acquisition pattern shows that the model does not manage to capture the shift in fake features. We hypothesize that the underperformance of our acquirer is due to its simplistic architecture. The time step is passed to the model, but it does not receive the hidden state of the classifier. A more sophisticated architecture (e.g., an LSTM) that receives the classifier state as input will likely perform better. § CONCLUSION Dynamic feature acquisition is a challenging problem that arises for temporal data across various applications: medicine, wearable sensors, active perception, etc. It has seen little attention, with previous work considering only reinforcement learning approaches. In this work, we propose to dynamically select the most informative features using an approach similar to CMI maximization. We show that the acquirer trained using our approach learns to distinguish fake features from real ones for time series classification. Our model outperformed a random acquisition policy, but it did not surpass the static acquisition. This performance gap is likely due to the simplicity of the used architectures. We hope that this work will be continued, as a wide range of questions remain open. Future work may consider more advanced architectures, compare the performance of our training approach to reinforcement learning <cit.>, and loosen the assumptions we adopted: fixed time step budget, fully observed training data, and equal feature acquisition cost. § ACKNOWLEDGEMENTS FS thanks Shkurta Gashi and Manuel Burger for helpful discussions. Computational data analysis was performed at https://sis.id.ethz.ch/services/sensitiveresearchdata/Leonhard Med secure trusted research environment at ETH Zurich. 
FS was supported by grant #902 of the Strategic Focus Area “Personalized Health and Related Technologies (PHRT)” of the ETH Domain (Swiss Federal Institutes of Technology). VF was supported by a Branco Weiss Fellowship. figuresection equationsection tablesection § RELATED WORK To the best of our knowledge, the only prior work that has considered DFA on time series data is <cit.>. They use reinforcement learning and focus on multimodal data. Feature acquisition on static data has received wider attention. Both methods using mutual information <cit.> and reinforcement learning <cit.> have been developed. In <cit.>, CMI is estimated by training a partial variational autoencoder (P-VAE). This allows the model to perform imputation from any subset of observed features and select the features associated with high-value information. In <cit.>, this approach has been developed further with the use of transformers for processing sets of observed features. The main challenge with using the P-VAE is its training. Training generative models can be challenging, especially for more complex data such as images <cit.>. An alternative approach presented in <cit.> aims to approximate CMI instead of estimating it precisely. They propose using a Categorical distribution and greedily select the feature with the largest CMI at each step. Unlike <cit.>, they use only simple (dense) architectures. However, their approach accepts set-based models as well. For static ICU data, deep reinforcement learning has been used for DFA training <cit.>. The authors took into account that medical tests are usually done in panels (i.e., provide multiple features at the same time) and differ in cost. They also produce the accuracy-cost Pareto fronts, which help analyze the trade-off made when setting a specific acquisition budget. DFA using CMI is closely related to active learning and active feature acquisition (see <ref>). Recent works show that Bayesian models can perform well in active learning <cit.>. It has been shown that Bayesian acquisition functions such as Bayesian active learning by disagreement (BALD) are connected to CMI <cit.>. Perhaps other Bayesian acquisition functions, such as expected predictive information gain <cit.>, could be adapted for use in DFA. For ICU time series data, feature importance has been studied using random forests <cit.>. In <cit.> authors showed that deep learning architectures can achieve state-of-the-art performance in early event prediction. Tokenization of observed ICU features has been shown to improve the performance of such models <cit.>. Tokenization of observed features is a natural part of the set-based approaches <cit.>, and could be applied in DFA. § DFA mycommfont Next-step DFA on regular time series InputInput time series x of length T, time step budget function b(·,·), acquirer with hidden state h_t, classifier with hidden state H_t t 0 m_0 acquirer.init() *initial acquisition request t < T x'_t m_t ·x_t *measure requested features classifier.step(x'_t, m_t, h_t, t) m_t+1 acquirer.step(x'_t, m_t, b(x_t,t), H_t, t) t t + 1 y_pred classifier.predict() *make the prediction C ∑_t=0^Tm_t *calculate the cost The “next-step” prediction assumption is satisfied when the time it takes to measure requested features is smaller than the time step duration. Both the “next-step” and regularity assumptions are plausible for ICU when a bigger resolution (e.g., one hour) is chosen <cit.>. 
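Because the algorithm listing above loses its structure in plain text, the same next-step acquisition-prediction cycle is rendered below as a short Python sketch. The acquirer and classifier interfaces (init, step, state, predict) are assumptions made for illustration, following the listing.

```python
def dfa_episode(x, acquirer, classifier, budget_fn):
    """Next-step DFA on a regular, fully observed time series x of length T,
    mirroring the listing above: request -> mask -> classify -> next request."""
    T = len(x)
    m = acquirer.init()                      # initial acquisition request m_0
    total_cost = 0
    for t in range(T):
        x_obs = m * x[t]                     # "measure" only the requested features
        classifier.step(x_obs, m, t)         # update the classifier state H_t
        total_cost += int(m.sum())           # unit cost per acquired feature
        if t + 1 < T:
            # request the features to acquire at t + 1, within the per-step budget
            m = acquirer.step(x_obs, m, budget_fn(x[t], t), classifier.state, t)
    y_pred = classifier.predict()            # prediction after the final time step
    return y_pred, total_cost
```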
The assumptions about equal feature acquisition cost, fully observed data, and constant a prior set budget per time step do not hold for medical data. We leave generalization to future work. § DATASETS The fake features are either zeros, sampled from Gaussian noise (0 mean and 0.5 standard deviation), or sampled from a GP with an RBF kernel using <cit.> (amplitude coefficient 0.5, length scale 1.5, length scale bounds [0.1,10]). These parameters were selected so that the fake features are visually similar to real ones. To create the data with shifted real features, they were swapped with fake features (by ids) every few time steps proportionally to their number. More specifically, if the number of the real features is R and the number of fake features is F, the indices i of real features will shift to i + R every *R/R + F· T time steps. For example, for m-FordA with m=10 with 20 fake features, the real features will have indices 0 to 10 during the first third of the time steps, 10 to 20 during the second third, and 20 to 30 for the rest of the series (see <ref>). § EXPERIMENTS §.§ Setup details The acquirers are implemented using fully connected neural networks with 1 hidden layer (4 hidden units for m-FordA, and 8 for SpokenArabicDigits) and ReLU activations. The input is formed by concatenating the previous observation, acquisition mask, and current time step. The internal classifier state is not passed to the acquirer. The classifiers are implemented using Long Short-Term Memory networks <cit.> with 16 hidden units, 2 layers for m-FordA and 3 for SpokenArabicDigits (with ReLU activations), and a linear dimension of 8, followed by one linear layer outputting class logits. We use a simpler training procedure compared to <cit.>: the temperature in the Gumbel distribution is fixed, and we do not pre-train the classifiers. For logits vector l, the penalty function R is R(l) = 100 ·m_t · |l|, where the absolute value is taken elementwise. We use a random forest (static feature selector) with 1000 esimators, leaving the other parameters as defaults provided by scikit-learn <cit.>. We train using the Adam optimizer <cit.> with cross-entropy loss in PyTorch <cit.>. The batch size is 1000, and the learning rate is 0.001 in all experiments. §.§ Additional results
http://arxiv.org/abs/2407.13113v1
20240718024606
Multiobjective Vehicle Routing Optimization with Time Windows: A Hybrid Approach Using Deep Reinforcement Learning and NSGA-II
[ "Rixin Wu", "Ran Wang", "Jie Hao", "Qiang Wu", "Ping Wang", "Dusit Niyato" ]
cs.AI
[ "cs.AI" ]
Multiobjective Vehicle Routing Optimization with Time Windows: A Hybrid Approach Using Deep Reinforcement Learning and NSGA-II Rixin Wu, Ran Wang, Member, IEEE, Jie Hao, Member, IEEE, Qiang Wu, Member, IEEE, Ping Wang, Fellow, IEEE, and Dusit Niyato, Fellow, IEEE Corresponding author: Ran Wang. Rixin Wu, Ran Wang, Jie Hao, and Qiang Wu are with the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China, and also with the Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, Jiangsu 210038, China (e-mail: wurixin@nuaa.edu.cn; wangran@nuaa.edu.cn; haojie@nuaa.edu.cn; wu.qiang@nuaa.edu.cn). Ping Wang is with the Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University M3J 1P3, Canada (e-mail: pingw@eecs.yorku.ca). D. Niyato is with the School of Computer Science and Engineering, Nanyang Technological University 639798, Singapore (e-mail: dniyato@ntu.edu.sg). July 22, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT This paper proposes a weight-aware deep reinforcement learning (WADRL) approach designed to address the multiobjective vehicle routing problem with time windows (MOVRPTW), aiming to use a single deep reinforcement learning (DRL) model to solve the entire multiobjective optimization problem. The Non-dominated sorting genetic algorithm-II (NSGA-II) method is then employed to optimize the outcomes produced by the WADRL, thereby mitigating the limitations of both approaches. Firstly, we design an MOVRPTW model to balance the minimization of travel cost and the maximization of customer satisfaction. Subsequently, we present a novel DRL framework that incorporates a transformer-based policy network. This network is composed of an encoder module, a weight embedding module where the weights of the objective functions are incorporated, and a decoder module. NSGA-II is then utilized to optimize the solutions generated by WADRL. Finally, extensive experimental results demonstrate that our method outperforms the existing and traditional methods. Due to the numerous constraints in VRPTW, generating initial solutions of the NSGA-II algorithm can be time-consuming. However, using solutions generated by the WADRL as initial solutions for NSGA-II significantly reduces the time required for generating initial solutions. Meanwhile, the NSGA-II algorithm can enhance the quality of solutions generated by WADRL, resulting in solutions with better scalability. 
Notably, the weight-aware strategy significantly reduces the training time of DRL while achieving better results, enabling a single DRL model to solve the entire multiobjective optimization problem. Multiobjective optimization, Vehicle routing problem, Deep reinforcement learning, Transformer, Weight-aware strategy. § INTRODUCTION Vehicle routing problem (VRP), a pivotal sector in transportation logistics, plays a crucial role in modern traffic management and operational efficiency. This significance is underscored by the fact that optimized vehicle routing can significantly reduce operational costs, a critical factor in the highly competitive and cost-sensitive transportation industry <cit.>. As the demand for precise delivery schedules escalates and the transportation sector becomes more competitive, incorporating customer delivery time windows has become not just a preference but a necessity. This evolution has catapulted the study of the vehicle routing problem with time windows (VRPTW) to the forefront of transportation research, marking it as a key area for innovative solutions in traffic and logistics management <cit.>. The advancements in VRPTW are not just academic pursuits; they directly translate to more efficient, reliable, and cost-effective transportation systems, reinforcing the backbone of global trade and commerce. In the study of VRPTW, time windows are typically categorized into two distinct types: hard time windows and soft time windows. In scenarios involving hard time windows, vehicles are obligated to adhere strictly to the scheduled delivery times. Specifically, for a VRP with a hard time window, the vehicle must arrive and conduct deliveries within the predefined time window. In instances where the vehicle arrives before the onset of the hard time window, it is requisite for the vehicle to wait until the specified start time before proceeding with the delivery. Conversely, if the vehicle arrives within the defined hard time window, it is authorized to proceed with the delivery directly. However, delivering goods subsequent to the expiration of the hard time window is strictly prohibited <cit.>. In the VRP with soft time windows, there exists a permissible degree of flexibility for delivery vehicles to operate outside the designated time windows <cit.>. This flexibility, however, often incurs a consequential trade-off with customer satisfaction levels. An optimization strategy prioritizing the shortest path distance (or minimal cost) can result in the infringement of time windows for certain customers, thereby adversely affecting overall customer satisfaction <cit.>. To enhance customer satisfaction, managers may find themselves compelled to augment the fleet size or extend the driving routes, which paradoxically escalates transportation costs. This scenario elucidates a prevalent conflict in VRPTW: the dichotomy between minimizing transportation costs and maximizing customer satisfaction. Consequently, the incorporation of multiobjective optimization emerges as a pivotal consideration within VRPTW. In the domain of single-objective VRPTW (SOVRPTW) which predominantly focuses on minimizing travel costs, constraints such as time windows, vehicle capacity, and the number of available vehicles need to be considered simultaneously, hence significantly amplifying the complexity of solving SOVRPTW. Presently, the most commonly used algorithms for solving SOVRPTW are heuristic algorithms, such as genetic algorithm <cit.>, particle swarm algorithm <cit.>, etc. 
The typical procedure for solving SOVRPTW using these algorithms involves the random generation of an initial set of solutions, followed by the application of heuristic strategies to enhance the quality of these solutions, thereby obtaining more cost-effective routes <cit.>. Particularly in largerscale VRPTW instances, generating initial solutions that meet the constraints can be time-consuming, and the quality of these initial solutions is often low. These challenges become even more pronounced when dealing with multiobjective VRPTW (MOVRPTW), which simultaneously considers driving costs and customer satisfaction. Therefore, the development of an effective algorithm for solving MOVRPTW becomes an urgent and pressing need. In this paper, we propose an MOVRPTW model which takes into account both travel cost and customer satisfaction in a holistic manner. To address this model, we develop a novel multiobjective optimization algorithm that combines weight-aware deep reinforcement learning (WADRL) and the non-dominated sorting genetic algorithm-II (NSGA-II) to solve MOVRPTW. The main contributions of this paper are summarized as follows: * To mitigate the extensive training time typically associated with training individual subproblems in deep reinforcement learning (DRL) for multiobjective optimization, we have innovatively conceptualized and implemented a WADRL method. Specifically, during each training process, a set of random weights is generated to describe the relationship between the two objective functions, and a set of continuous weight combinations are adopted during the test process to obtain vehicle routes under different weight combinations. Such a method enables only one single DRL model to obtain the Pareto front of the entire multiobjective optimization problem. * The implementation of the WADRL approach, while innovative and time-saving, may lead to suboptimal performance, as the solutions generated cannot assure Pareto optimal. To enhance solution quality and achieve an ideal Pareto front, we propose integrating the initial solutions generated by WADRL into the NSGA-II. This hybrid approach not only improves solution quality but also addresses the challenge of time-consuming initial solution generation often encountered by traditional heuristic algorithms when dealing with complex constraint problems. * To the best of our knowledge, based on the existing literature, our study represents the first application of DRL in solving the MOVRPTW. The experimental results demonstrate that our method markedly surpasses existing traditional methods in terms of both convergence speed and solution diversity, particularly in the context of MOVRPTW. Concurrently, the implementation of a weight-aware strategy within the DRL framework significantly reduces training time and yields superior solutions. The remainder of this paper is organized as follows. In Section II, related work is introduced. We present the system model and formulation of MOVRPTW in Section III. In Section IV, the details of WADRL combined with NSGA-II to solve multiobjective optimization problems are introduced. Simulation results and discussions are presented in Section V. Finally, this paper is concluded in Section VI. § RELATED WORK In recent years, there has been significant research focus on the vehicle routing problem with time windows (VRPTW), leading to the development of numerous algorithms aimed at solving this problem. 
These algorithms can be categorized into three main groups: exact algorithms <cit.>, heuristic algorithms <cit.>, and learning-based algorithms <cit.>. Exact algorithms approach problems by rigorously deriving mathematical formulations, thus establishing mathematical models for the problem, and proposing algorithms to find optimal solutions based on mathematical principles. The authors in <cit.> employed the column generation method to achieve the shortest paths in VRPTW. Furthermore, branch and bound <cit.> as well as branch and price <cit.> methods have also been applied in the context of VRPTW. However, it's worth noting that exact algorithms, which seek optimal solutions from all feasible solutions according to certain rules, face exponential growth in search space and computational complexity as the number of customers in VRPTW increases. Moreover, these algorithms tend to be less effective when dealing with multiobjective VRPTW (MOVRPTW). Heuristic algorithms are the most commonly used methods in solving VRPTW. In solving single-objective VRPTW (SOVRPTW), common approaches encompass genetic algorithm (GA) <cit.>, particle swarm algorithm (PSO) <cit.>, and tabu search (TS) algorithm <cit.>, among others. In <cit.>, Ursani et al. employed a local genetic algorithm to implement small-scale VRPTW targeting the shortest path and produced better solutions than most other heuristics methods. Another genetic algorithm, which was centered on the insertion heuristic, was proposed for the resolution of VRPTW, as documented in <cit.>. The experimental results revealed the algorithm was able to find the solution in less time. Furthermore, both PSO with local search <cit.> and the TS algorithm <cit.> were also used to solve SOVRPTW. In real-life scheduling optimization, increasing demands have rendered SOVRPTW less capable. Consequently, many researchers have delved into the study of MOVRPTW. Jaber Jemai et al. employed the non-dominated sorting genetic algorithm-II (NSGA-II) to simultaneously optimize the minimization of both total travel distance and carbon dioxide emissions <cit.>. In ambulance route optimization, NSGA-II and the multiobjective particle swarm optimization (MOPSO) algorithm were employed to jointly optimize the latest service completion times and the number of patients whose medical conditions deteriorated due to delayed medical services <cit.>. Additionally, multiobjective evolutionary algorithm based on decomposition (MOEA/D) <cit.> and multiobjective simulated annealing (MOSA) algorithm <cit.> have also been utilized in addressing MOVRPTW. In the process of employing heuristic algorithms for solving MOVRPTW, it is common practice to begin by generating a set of initial solutions that adhere to the problem's constraints. Subsequently, a set of heuristic strategies, such as those found in genetic algorithms (GA), including selection, crossover, and mutation operations, in conjunction with non-dominated sorting strategies, is applied to obtain the Pareto front. This approach is advantageous due to its simplicity and its capability to yield high-quality solutions. However, it is worth noting that heuristic algorithms typically generate initial solutions using random strategies, and for MOVRPTW instances with numerous constraints, obtaining a feasible solution can be time-consuming. Moreover, the overall effectiveness of the algorithm is greatly influenced by the quality of the initial solutions <cit.>. 
Additionally, whenever there is any modification to the problem's information, even minor changes, it necessitates rerunning the heuristic algorithm, which is known as the No Free Lunch Theorem <cit.>. Moreover, when the dimension of the problem is particularly large, these algorithms require a substantial number of iterations for overall updates and iterative searches to achieve relatively favorable results, leading to extended computation times <cit.>. With the advancement of artificial intelligence (AI) technology, deep reinforcement learning (DRL) has also been employed to tackle multiobjective optimization problems. DRL was initially proposed for solving multiobjective travelling salesman problems (MOTSP) in <cit.>, in which the authors decomposed the MOTSP into several subproblems through weight combinations. They employed a sequence-to-sequence pointer network with two recurrent neural networks (RNNs) to train models for each subproblem. Additionally, a parameter transfer strategy was used in the initialization process of model parameters for adjacent subproblem training. In <cit.>, a similar approach was adopted, where the authors decomposed multiobjective vehicle routing problem (MOVRP) into multiple scalar subproblems based on weights. They utilized the pointer network to address each subproblem and trained the parameters of the policy network using the policy gradient algorithm of reinforcement learning to obtain the Pareto front of MOVRP. The general method of deep reinforcement learning to solve multiobjective optimization problems is to convert the problem into multiple subproblems based on multiple sets of weight combinations with the same interval. Each subproblem can be regarded as a single objective optimization problem. Subsequently, a DRL-based model is trained for each subproblem, and only after training all models for the subproblems can the entire multiobjective optimization problem be addressed. While this method may be effective for biobjective optimization problems, as the number of objective functions increases, the training time escalates exponentially, which becomes impractical for solving it. Additionally, it is challenging to ensure that the DRL-based models for each subproblem are adequately trained, causing the solutions of some subproblems to be dominated by the solutions of other subproblems <cit.>, rendering these solutions meaningless in multiobjective optimization problems. Considering the limitations of DRL in solving multiobjective optimization problems, we propose WADRL, which enables a single DRL model to address the entire multiobjective optimization problem. Furthermore, to overcome the drawbacks of heuristic algorithms in solving multiobjective optimization problems, we utilize the solutions generated by WADRL as the initial solutions of NSGA-II, ensuring that the initial solutions are feasible and of high quality. After undergoing evolution with NSGA-II, it is basically guaranteed that all solutions are Pareto optimal. § PROBLEM STATEMENT §.§ System Model The logistics businesses typically schedule service appointments with customers through phone calls or text messages before delivering goods or providing services. However, the delivery vehicles may arrive earlier or later than the scheduled time due to inefficient delivery routes or traffic jams. Customers usually have some level of tolerance for delays or early arrivals, but this may reduce customer satisfaction. 
Therefore, logistics companies should aim to improve customer satisfaction while simultaneously reducing travel cost <cit.>. We consider a multivehicle routing system in which K vehicles are instructed to deliver goods to the set H of customers. As illustrated in Fig. <ref>, the scenario involves three vehicles tasked with delivering goods to eight customers. Our system is formally conceptualized as a graph G=(V, E), where the set of vertices V comprises a depot vertex v_0 and a set of customer vertices C={v_i, i=1,2, …, h}. The edges E in the graph represent the roads connecting these vertices, each associated with a specific travel cost. The coordinate of the depot is u_0, from which each vehicle k must depart and to which it must return. The information of each customer v_i includes the coordinate c_i, the demand for goods d_i, the interval [e_i, l_i], and the interval [E_i, L_i]. Among them, the interval [e_i, l_i] denotes the soft time window of customer i. Service provided to the customer within the intervals [E_i, e_i] or [l_i, L_i] leads to a decrease in customer satisfaction. Serving a customer outside of the hard time window [E_i, L_i] is not permitted. The hard time window is typically defined as

E_i = e_i - ζ_i^e (l_i - e_i),   L_i = l_i + ζ_i^l (l_i - e_i),

where ζ_i^e and ζ_i^l are the parameters defining the allowable maximum violation. We define S_i(a_i) as the satisfaction of customer i when the vehicle arrives at customer i at time a_i, which can be expressed as

S_i(a_i) =
  0,                          if a_i < E_i or a_i > L_i,
  (a_i - E_i) / (e_i - E_i),  if E_i ≤ a_i < e_i,
  1,                          if e_i ≤ a_i ≤ l_i,
  (L_i - a_i) / (L_i - l_i),  if l_i < a_i ≤ L_i.

The key nomenclature used in this paper, along with the respective definitions, is outlined in TABLE <ref>.

§.§ Problem Formulation

In this section, we propose a multiobjective vehicle routing problem with time windows model that considers the cost incurred by travel and vehicle usage as well as customer satisfaction, as follows. The decision variables are defined as

x_ijk = 1 if vehicle k travels from customer i to customer j, and 0 otherwise;
y_ik = 1 if customer i is served by vehicle k, and 0 otherwise.

Objective function 1: minimizing the total travel cost, which can be expressed as

min f_1 = ∑_{k ∈ K} c_k^1 ∑_{i ∈ H} ∑_{j ∈ H} dis_ij x_ijk + ∑_{k ∈ K} c_k^2 ∑_{j ∈ H^*} x_0jk,

where c_k^1 represents the travel cost of vehicle k per unit distance, c_k^2 denotes the fixed cost associated with the utilization of vehicle k, and dis_ij represents the distance between customer i and customer j.

Objective function 2: maximizing the average customer satisfaction, which can be expressed as

max f_2 = (1/h) ∑_{i ∈ H^*} S_i(a_i),

where a_i represents the arrival time of the vehicle at the location of customer i, and S_i(a_i) is the satisfaction of customer i defined above.

Subject to:

∑_{i ∈ H^*} d_i y_ik < u_k,   ∀ k ∈ K
∑_{k ∈ K} y_ik = 1,   ∀ i ∈ H^*
∑_{j ∈ H^*} x_0jk - ∑_{i ∈ H^*} x_i0k = 0,   ∀ k ∈ K
∑_{i ∈ H} x_ijk = y_jk,   ∀ k ∈ K, ∀ j ∈ H^*
∑_{j ∈ H} x_ijk = y_ik,   ∀ k ∈ K, ∀ i ∈ H^*
w_0 = v_0 = 0
w_i = max{0, E_i - a_i},   ∀ i ∈ H^*
E_i ≤ a_i + w_i ≤ L_i,   ∀ i ∈ H^*
a_j = ∑_{k ∈ K} ∑_{i ∈ H} x_ijk (a_i + w_i + v_i + t_ij),   ∀ j ∈ H^*
x_ijk ∈ {0,1},  y_ik ∈ {0,1},  a_i ≥ 0,   ∀ i ∈ H, j ∈ H, k ∈ K

In the aforementioned mathematical model, H^* represents the set of customer vertices and H represents the combined set of customer and depot vertices, defined as H = H^* ∪ {0}. The set of vehicles is denoted by K.
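To make the time-window model concrete, the following minimal Python sketch implements the piecewise-linear satisfaction S_i(a_i) and the construction of the hard window from the soft window; the function and variable names are ours and chosen only for illustration, and the default violation parameters follow the values used later in the experiments.

```python
def hard_window(e_i, l_i, zeta_e=0.25, zeta_l=0.25):
    """Derive the hard time window [E_i, L_i] from the soft window [e_i, l_i]."""
    width = l_i - e_i
    return e_i - zeta_e * width, l_i + zeta_l * width


def satisfaction(a_i, e_i, l_i, E_i, L_i):
    """Piecewise-linear satisfaction S_i(a_i) for an arrival at time a_i."""
    if a_i < E_i or a_i > L_i:
        return 0.0                        # outside the hard window: no satisfaction
    if a_i < e_i:
        return (a_i - E_i) / (e_i - E_i)  # early arrival, linearly increasing
    if a_i <= l_i:
        return 1.0                        # inside the soft window: fully satisfied
    return (L_i - a_i) / (L_i - l_i)      # late arrival, linearly decreasing
```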
The constraint (<ref>) ensures that the demand of customers does not exceed the capacity of each vehicle, where u_k represents the capacity of vehicle k. The constraint (<ref>) mandates that each customer is served by exactly one vehicle. The constraint (<ref>) guarantees that each vehicle must start and end its route at the depot. The constraints (<ref>) and (<ref>) ensure that each customer is served exactly once. The constraint (<ref>) defines the waiting and service times of the depot, with w_i and v_i denoting the waiting and service times of customer i, respectively. The constraint (<ref>) defines the waiting time. The constraint (<ref>) enforces the hard time window requirements. Finally, the constraint (<ref>) defines the relationship between the arrival time of a customer and the arrival time of the previous customer, where t_ij represents the time required to travel from customer i to customer j.

§ SOLUTIONS AND ALGORITHMS

Considering the complexity of the multiobjective vehicle routing problem with time windows (MOVRPTW), particularly in scenarios involving a large number of customers, existing scheduling methods are insufficient to solve the problem effectively. Therefore, this section introduces an innovative approach that combines a weight-aware deep reinforcement learning (WADRL) methodology with the non-dominated sorting genetic algorithm-II (NSGA-II). We design this hybrid method to tackle the challenges posed by MOVRPTW. Initially, the WADRL algorithm is employed to generate the initial population for the NSGA-II algorithm. Subsequently, the NSGA-II algorithm optimizes this initial population, thus enhancing solution quality.

§.§ General Framework

In traditional deep reinforcement learning (DRL) methods for solving multiobjective optimization problems, as illustrated in Fig. <ref>, a biobjective optimization problem is decomposed into N subproblems according to N weight combinations. For the solution of each subproblem, the training of a DRL model is required. Consequently, a total of N DRL models need to be trained to solve the biobjective optimization problem <cit.>. This approach appears to be effective for biobjective optimization problems. However, as the number of objective functions increases, the time required to solve the entire multiobjective optimization problem increases exponentially. Considering the shortcomings of existing DRL methods for multiobjective optimization, our proposed algorithm takes a different approach. Similarly, we decompose the entire multiobjective optimization problem into N subproblems based on N weight combinations. During the training process of DRL, the algorithm randomly selects a weight combination for each training step. Furthermore, during the testing phase, the algorithm generates the corresponding solutions for these N subproblems. This approach enables a single DRL model to solve the entire multiobjective optimization problem, significantly reducing the model training time, particularly for many-objective optimization problems. The WADRL framework is illustrated in Fig. <ref>, in which a novel transformer architecture is also adopted. In the original NSGA-II algorithm, the initial population is typically generated through a random process, and if a solution does not satisfy the constraints, it is regenerated. In MOVRPTW, generating solutions that satisfy all constraints through a completely random method is practically infeasible.
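To make the decomposition step of this framework concrete, the small sketch below (ours) enumerates the N weight combinations used at test time and samples one combination for a training step; the 0.02 interval follows the experimental setup reported later, while the Dirichlet concentration parameters are an assumption.

```python
import numpy as np

# Evenly spaced weight combinations with interval 0.02: [[1.00, 0.00], ..., [0.00, 1.00]],
# i.e., 51 subproblems used at test time.
w2_values = np.round(np.arange(0.0, 1.0 + 1e-9, 0.02), 2)
weight_combinations = [(round(1.0 - w2, 2), w2) for w2 in w2_values]

# During WADRL training, a single weight combination is drawn at random per step,
# e.g., from a Dirichlet distribution (the alpha values are illustrative).
w1, w2 = np.random.dirichlet(alpha=[1.0, 1.0])
```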
While there exist methods to expedite the initial solution generation, they are still relatively inefficient <cit.>, and the quality of these initial solutions is generally low. In our proposed algorithm, solutions generated by WADRL are used as the initial solutions for the NSGA-II algorithm. NSGA-II is then applied to optimize these solutions. The advantage of this approach is that the initial solutions generated by WADRL are guaranteed to satisfy the constraints. Furthermore, the generation of these initial solutions adheres to a carefully designed decomposition strategy. This approach guarantees a relatively uniform distribution of the initial solutions across the Pareto front. Importantly, this strategy not only facilitates coverage but also significantly enhances the quality of the initial solutions.

§.§ The Markov Decision Process of MOVRPTW in Weight-aware Deep Reinforcement Learning

Given the proficiency of reinforcement learning in managing sequential decision-making problems, we employ deep reinforcement learning to address the multiobjective vehicle routing problem with time windows (MOVRPTW). In this approach, MOVRPTW is modeled as a Markov decision process (MDP), characterized by specific components: state S, action A, state transition function, and reward R. These elements are defined as follows.

§.§.§ State

At any given step t, the state of the system encompasses three primary components: the vehicle state v^t, customer state c^t, and weight state w^t, defined as follows. The vehicle state v^t is characterized by two dynamic attributes: its load, denoted as u^t, and the traveled time, symbolized as τ^t. The term `dynamic' implies that these states may change as actions occur. The customer states encompass static states, including the soft and hard time windows e_i^t, l_i^t, E_i^t and L_i^t, the coordinate c_i^t, and the service time v_i^t. Static states indicate that these states will not change with the occurrence of actions. The dynamic state of a customer is the demand d_i^t. The weight state is static, signifying the weights of the two objective functions, w_1^t and w_2^t.

§.§.§ Action

The action at each step t, denoted as a^t, represents the next vertex to which the vehicle will travel. The entire sequence of actions from the initial step 0 to the final step T is represented as {a^0, a^1, a^2, …, a^T}, with a^t ∈ {0,1, …, h}, where h is the number of customer vertices. a^t = 0 means that the vehicle returns to the depot. a^t ∈ {1,2, …, h} signifies that the vehicle travels towards the corresponding customer vertex. The vehicle is initially at the depot vertex, i.e., a^0 = 0.

§.§.§ State transition function

The current system state S^t transitions to the next state S^{t+1} based on the currently executed action a^t. All static states remain unchanged, while dynamic states may change. Once a customer node is visited, the demand of the customer becomes zero, which can be expressed as

d_i^t = 0, if a^t = i;   d_i^t = d_i^{t-1}, otherwise.

In addition, the state of the vehicle also changes. If the vehicle travels to a customer node, its load decreases accordingly; if the vehicle travels to the depot node, its load is replenished, as given by

u^t = u^{t-1} - d(a^t), if a^t ≠ 0;   u^t = u^0, if a^t = 0.

The traveled time is determined by the travel time between two vertices and the previous traveled time, as follows:

τ^t = max(τ^{t-1}, E(a^{t-1})) + v(a^{t-1}) + t(a^{t-1}, a^t), if a^t ≠ 0;   τ^t = 0, if a^t = 0,
where E(a^{t-1}) denotes the earliest service start time of customer a^{t-1}, v(a^{t-1}) represents the service time of customer a^{t-1}, and t(a^{t-1}, a^t) is the travel time between the two vertices. The vehicle returning to the depot means dispatching a new vehicle and resetting the traveled time.

§.§.§ Reward

By executing the actions {a^0, a^1, a^2, …, a^T}, the traveled paths of all vehicles, as well as the arrival time and satisfaction of all customers, can be obtained. The objective function values can be calculated according to Eq. (<ref>) and Eq. (<ref>), and the total reward R can be calculated using Eq. (<ref>):

R = -1000, if ∑_{i ∈ H^*} d_i^T ≠ 0;
R = -w_1 (f_1 - f_1^min)/(f_1^max - f_1^min) + w_2 (f_2 - f_2^min)/(f_2^max - f_2^min), otherwise,

where R = -1000 indicates that, after all vehicles have delivered the goods and returned to the depot, there are still customer vertices that have not been visited, meaning the constraints are not met. The minus sign in front of the first normalized objective reflects that f_1 is to be minimized. f_1^min, f_1^max, f_2^min and f_2^max denote the minimum and maximum values of the two objective functions, respectively, and are used to normalize the objectives. These values can be obtained from single-objective DRL models, from the structure of the objective functions, or from existing studies.

§.§ The Policy Network in Weight-aware Deep Reinforcement Learning

In our system model, various pieces of information need to be considered simultaneously, such as customer coordinates, demand, time windows, and the weights of the objective functions. Therefore, learning-based methods need to be carefully designed to process this information. However, simple neural networks or learning strategies often struggle to handle the complex information mentioned above. Meanwhile, the transformer architecture, built on self-attention, has been proven to perform well in many fields, including natural language processing (NLP) <cit.>, computer vision (CV) <cit.>, and recommender systems <cit.>. With the development of transformer technology, it has found wide applications in deep reinforcement learning, performing well in problems such as path planning <cit.>, the knapsack problem <cit.>, and reservoir operation <cit.>. Its advantage is that the attention mechanism in the transformer can effectively extract information through key-value-query maps. In this paper, we employ the transformer architecture <cit.> to model the delivery agent for solving MOVRPTW, as illustrated in Fig. <ref>. The model primarily comprises an encoder module, a weight embedding module, and a decoder module <cit.>. In the encoder process, the encoder embeds the original information of MOVRPTW into a high-dimensional space and utilizes a self-attention mechanism to extract features of the problem. In the weight embedding process, the embedding module encapsulates the state of the vehicle and the weights associated with the two objectives of MOVRPTW. In the decoder process, the decoder generates vertex probabilities based on contextual information.

§.§.§ The information encoder process

Initially, the encoder employs a linear layer to transform the features of the vertices (including both the depot and customers) into a high-dimensional space. This transformation results in the generation of initial embedding information, represented as h^(0)={h_0^(0), h_1^(0), …, h_N^(0)}, where h_i^(0) corresponds to the initial embedded representation of vertex i.
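Before turning to the detailed projections, the MDP dynamics and episode reward defined above can be summarized in a short sketch; this is a hedged illustration, the function and variable names are ours, and the normalization bounds are assumed to be precomputed.

```python
def transition(u_prev, tau_prev, a_prev, a_t, u0, demand, E, serv, travel):
    """Vehicle-state transition: returning to the depot (a_t == 0) dispatches a
    fresh vehicle; otherwise the load decreases and the clock advances.
    The depot entries of E and serv are zero (w_0 = v_0 = 0)."""
    if a_t == 0:
        return u0, 0.0
    u_t = u_prev - demand[a_t]
    tau_t = max(tau_prev, E[a_prev]) + serv[a_prev] + travel[(a_prev, a_t)]
    return u_t, tau_t


def total_reward(f1, f2, w1, w2, bounds, unserved_demand):
    """Episode reward: -1000 if some customers remain unserved; otherwise the
    weighted sum of normalized objectives (f1 is minimized, f2 is maximized)."""
    if unserved_demand != 0:
        return -1000.0
    (f1_min, f1_max), (f2_min, f2_max) = bounds   # assumed precomputed
    return (-w1 * (f1 - f1_min) / (f1_max - f1_min)
            + w2 * (f2 - f2_min) / (f2_max - f2_min))
```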
We employ separate linear projections to handle the distinct information of the depot and customer vertices, as follows <cit.>:

h_i^(0) = W_0^x [c_0, L_0] + b_0^x, if i = 0;   h_i^(0) = W_i^x [c_i, E_i, e_i, l_i, L_i, d_i, v_i] + b_i^x, if i ≠ 0,

where the depot vertex only needs to embed coordinate and latest arrival time information, while the customer vertices need to embed coordinate, soft and hard time window, demand, and service time information. The operation [·, ·, …, ·] concatenates the information of the same dimension together. W_0^x, b_0^x, W_i^x and b_i^x are trainable linear projection parameters, with W_0^x ∈ ℝ^{d_h × 2}, b_0^x ∈ ℝ^{d_h}, W_i^x ∈ ℝ^{d_h × 8}, b_i^x ∈ ℝ^{d_h}, where d_h represents the dimension of h^(0). In our algorithm, d_h = 128.

Subsequent to the initial embedding, the encoded information undergoes processing through L identical layers to yield the final embedding. Each of these layers is composed of several key components: a multi-head attention (MHA) layer, a skip connection layer, a batch normalization (BN) layer, and a fully connected feedforward (FF) layer. The input of layer l is the output of the preceding layer, i.e., h^(l-1)={h_0^(l-1), h_1^(l-1), …, h_N^(l-1)}. The MHA layer employs M heads with a dimensionality of d_k. In our algorithm, we set L=3, M = 8 and d_k = d_h/M = 16. For each head m ∈ {1,2, …, M}, the query, key, and value are computed as follows <cit.>:

Q_m^(l) = W_{Qm}^(l) h^(l-1),   K_m^(l) = W_{Km}^(l) h^(l-1),   V_m^(l) = W_{Vm}^(l) h^(l-1),

where W_{Qm}^(l), W_{Km}^(l) ∈ ℝ^{d_k × d_h}, and W_{Vm}^(l) ∈ ℝ^{d_h × d_h}. Then the attention value A_m^(l) and MHA(h^(l-1)) are calculated as

A_m^(l) = softmax( Q_m^(l) (K_m^(l))^T / √(d_k) ) V_m^(l),

MHA(h^(l-1)) = [A_1^(l), A_2^(l), …, A_M^(l)] W_O^(l).

ĥ^(l) is obtained from the attention value through the skip connection and BN layers, which can be expressed as

ĥ^(l) = BN^(l)( h^(l-1) + MHA(h^(l-1)) ).

Ultimately, the output of the node embedding at layer l is denoted as h^(l), obtained through an FF layer and a skip connection layer as

h^(l) = BN^(l)( ĥ^(l) + FF(ĥ^(l)) ).

§.§.§ The weight embedding process

As MOVRPTW is modeled as an MDP, each vehicle is treated as an agent. When the agent selects actions, it needs to consider the weights of the two objective functions in MOVRPTW. Hence, we incorporate a specialized weight embedding module within our framework. This module is carefully designed to capture the current state of the vehicle and the weights associated with the objective functions. The strategy allows the agent to focus on the weights of the objective functions when making decisions. The output of the weight embedding module is defined as

o^t = FF( W_o [u^t, τ^t, w_1, w_2] + b_o ).

§.§.§ The tour decoder process

At each given step t, the agent selects a decision a^t based on the current state s^t, which contains the embedding of the entire graph, denoted as ĥ^(L), the embedding information of the vertex corresponding to the vehicle's current location, h_{a^{t-1}}^(L), and the output o^t of the weight embedding module:

s^t = [ĥ^(L), h_{a^{t-1}}^(L), o^t].

Subsequently, we calculate the contextual information of s^t through the MHA layer, defining the query vector as the embedding of the agent, and the key and value vectors as the embeddings of the customers:

Q^{dt} = W_Q^d s^t,   K^d = W_K^d h^(L),   V^d = W_V^d h^(L).

The compatibility between each vertex and the vehicle is calculated through an attention mechanism:

λ^t = C · tanh( (Q^{dt})^T K^d / √(d_k) ).
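A minimal PyTorch sketch of one encoder layer following the equations above is given below; the class, the feed-forward width, and the use of nn.MultiheadAttention in place of an explicit per-head implementation are our assumptions.

```python
import torch
import torch.nn as nn


class EncoderLayer(nn.Module):
    """One encoder layer: MHA -> skip + BatchNorm -> FF -> skip + BatchNorm."""

    def __init__(self, d_h=128, n_heads=8, d_ff=512):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_h, n_heads, batch_first=True)
        self.bn1 = nn.BatchNorm1d(d_h)
        self.ff = nn.Sequential(nn.Linear(d_h, d_ff), nn.ReLU(), nn.Linear(d_ff, d_h))
        self.bn2 = nn.BatchNorm1d(d_h)

    def forward(self, h):                       # h: (batch, n_vertices, d_h)
        a, _ = self.mha(h, h, h)                # self-attention over vertex embeddings
        h = self.bn1((h + a).transpose(1, 2)).transpose(1, 2)          # skip + BN
        h = self.bn2((h + self.ff(h)).transpose(1, 2)).transpose(1, 2) # FF + skip + BN
        return h
```

In practice, three such layers (L = 3) would be stacked on top of the initial linear embeddings to produce h^(L).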
In MOVRPTW, various constraints are established, meaning that at each step t, certain vertices are inaccessible due to these constraints. To navigate the diverse constraints inherent in the system, we also implement a masking rule. This rule selectively excludes vertices that are not feasible for visitation during the current step. The primary masking rules are the following: (1) apart from the depot vertex, previously visited vertices are masked; (2) customer vertices whose demand exceeds the remaining load of the vehicle are masked; (3) vertices whose arrival time would exceed the hard time window are masked. Whether vertex i is masked at step t is defined as

mask_i^t = 1, if vertex i is masked at step t;   mask_i^t = 0, otherwise.

Finally, the probability vector for vertex selection is generated by combining the compatibility vector between the vehicle and the vertices:

p^t = softmax( λ^t + O · mask^t ),

where O is a large negative number, e.g., O = -999999, which ensures that masked vertices receive a near-zero probability and cannot be visited.

§.§.§ Model training

We employ a policy gradient method to train all parameters θ of the neural network. The algorithm comprises two networks, namely the policy network and the baseline network. Both networks share the same structure, with the only difference being that the policy network selects actions by sampling from the probability vector, while the baseline network selects the action with the highest probability, namely a `greedy policy'. The gradient of the loss function is defined as follows <cit.>:

∇_θ L(θ) = E_{p_θ(π^θ)}[ (R(π^θ) - R(π^BL)) ∇_θ log p_θ(π^θ) ],

where R(π^θ) and R(π^BL) denote the rewards accrued by the policy network and the baseline network, respectively. During each training batch, a t-test is employed to ascertain the statistical significance of the difference in performance between these two networks at a 95% confidence level. Should this test yield a significant result, the parameters of the policy network supersede those of the baseline network, thereby optimizing the learning process. The detailed procedural steps of this algorithm are given in Algorithm <ref>.

§.§ Combining Weight-Aware Deep Reinforcement Learning with NSGA-II

Applying WADRL alone to MOVRPTW already yields promising results. However, several limitations still exist: (1) the neural network sometimes struggles to iterate optimally, falling into a local optimum; (2) its performance may be unstable under specific objective weight combinations; (3) most of the obtained solutions may be dominated by other solutions. To address these limitations, we propose an approach that combines WADRL with NSGA-II. Our approach utilizes WADRL to generate initial solutions for MOVRPTW. These initial solutions serve as the foundation for subsequent optimization through NSGA-II. WADRL ensures that the initial solutions are of high quality, meet all constraints, and have a relatively uniform distribution due to the applied decomposition strategy. This combined approach is designed to improve the overall optimization process, enhance solution quality, and mitigate the inherent limitations of WADRL. The integration of NSGA-II complements the strengths of WADRL, providing a more robust and effective solution strategy for MOVRPTW.
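The masking rule and the policy-gradient update described above can be sketched as follows; this is our illustration, and the sign of the surrogate loss is chosen so that gradient descent increases the probability of tours with above-baseline reward.

```python
import torch


def masked_log_probs(compat, mask, O=-999999.0):
    """Add a large negative constant to infeasible vertices before the softmax,
    so that masked vertices get (near-)zero selection probability."""
    logits = compat + O * mask            # mask: 1 for infeasible vertices, 0 otherwise
    return torch.log_softmax(logits, dim=-1)


def reinforce_loss(log_prob_tour, R_policy, R_baseline):
    """REINFORCE with a greedy-rollout baseline: log_prob_tour is the summed
    log-probability of the sampled tour, R_policy / R_baseline are the rewards
    of the sampled and greedy rollouts."""
    advantage = (R_policy - R_baseline).detach()
    return -(advantage * log_prob_tour).mean()
```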
The process of NSGA-II optimizing the solutions generated by WADRL is shown in Fig. <ref>.

§ RESULTS AND ANALYSIS

To demonstrate the advantages of our proposed approach that combines weight-aware deep reinforcement learning (WADRL) and the non-dominated sorting genetic algorithm-II (NSGA-II), we apply it to the Solomon dataset <cit.>. Furthermore, we conduct comparative studies on datasets with varying numbers of customers, comparing our method with state-of-the-art approaches, including traditional deep reinforcement learning (DRL) and NSGA-II.

§.§ Experimental Setup

§.§.§ Training sets

The model is trained on instances with 20, 50, and 100 customer vertices and a single depot vertex, respectively. In the training dataset, the demand of customer vertices is uniformly sampled from the range [1, 40]. For the 20-, 50-, and 100-customer instances, the coordinates of the vertices are generated from a uniform distribution within the range [0, 100]. We randomly generate soft time windows with intervals greater than 30 within the range [0, 240].

§.§.§ Test set

For the 20-customer instance, we use the information of the first 20 customers in RC101; for the 50-customer instance, we use the information of the first 50 customers in RC101; for the 100-customer instance, we use RC102 <cit.>.

§.§.§ Parameter settings

In MOVRPTW, the travel cost of vehicle k per unit distance and the fixed cost of using vehicle k, denoted as (c_k^1, c_k^2), are fixed to (2.0, 400), following <cit.>. The parameters defining the allowable maximum violation of the time windows, denoted as (ζ_i^e, ζ_i^l), are set to (0.25, 0.25). In each training iteration, a weight combination is sampled according to the Dirichlet distribution. We use the Adam optimizer with a learning rate of 10^-4 and a decay of 10^-6 <cit.>. In total, we train the model for 300 epochs, and the batch size is set to 64. DRL and WADRL use weight combinations with an interval of 0.02 to decompose MOVRPTW. Specifically, these weight combinations are defined as [[1.00, 0.00], [0.98, 0.02], ... , [0.00, 1.00]], resulting in a total of 51 subproblems. Furthermore, we configure the population size of the NSGA-II algorithm to also be 51.

§.§ Comparison of Initial Solutions

We first compare the performance of our method and the original NSGA-II in generating initial solutions, as shown in Fig. <ref>. The results indicate that our method consistently outperforms the traditional NSGA-II algorithm in terms of the quality of initial solutions across the 20-, 50-, and 100-customer instances. Furthermore, the average execution time and the average objective function values of the initial solutions are presented in TABLE III. As shown in Fig. <ref>, all initial solutions generated by the WADRL-based NSGA-II outperform the initial solutions generated by the original NSGA-II with random policies, both in terms of objective functions and time efficiency. This improvement becomes more pronounced as the number of customers increases, with our method exhibiting more significant enhancements over the traditional NSGA-II approach. For example, in the case of the 100-customer instance, our method, on average, reduces the cost by 4008.6, increases customer satisfaction by 0.3715, and decreases the time required for generating initial solutions by 28.45 seconds. Furthermore, we observe that, among the solutions generated by WADRL, many are dominated by other solutions, rendering them non-meaningful in MOVRPTW.
As a result, it is reasonable to use the NSGA-II algorithm to optimize these solutions generated by WADRL.

§.§ Experimental Results

We implement NSGA-II algorithms with 200, 500, 1000, and 2000 iterations, as well as NSGA-II with 500 iterations using the solutions of WADRL as the initial population. We compare these results with DRL and WADRL, and the average objective function values and running times are presented in TABLE III. Fig. <ref>, Fig. <ref> and Fig. <ref> depict the experimental results on the 20-, 50-, and 100-customer instances, respectively. Our approach (WADRL+NSGA-II) consistently achieves the best results on all instances, particularly as the problem scale increases. WADRL by itself achieves relatively competitive results, and when combined with the NSGA-II algorithm, its solutions exhibit higher quality and extended coverage of the Pareto front. In the 20-customer instance, our method, NSGAII-1000 and NSGAII-2000 show almost the same performance, but our method still finds solutions with a lower travel cost. In the 100-customer instance, the advantage of our method is pronounced: it obtains in under 200 seconds results that the NSGA-II algorithm cannot reach in 1000 seconds. In terms of the average objective function values, as shown in TABLE III, the WADRL+NSGA-II algorithm outperforms the other benchmark algorithms on all instances. Meanwhile, the runtime of WADRL is significantly lower than that of the traditional DRL algorithms and all NSGA-II variants.

§.§ Effectiveness of the Weight-aware Strategy

In this section, the weight-aware strategy in deep reinforcement learning is experimentally validated. Fig. <ref> displays all solutions generated by WADRL and DRL on the 20-, 50-, and 100-customer instances. TABLE IV presents the average training times. The results indicate that WADRL achieves superior results while using only about one tenth of the training time compared to traditional DRL. In the 20-customer instance, as shown in Fig. <ref>, the results of DRL are relatively unstable, while the results of WADRL are relatively stable. In the 50-customer instance, as shown in Fig. <ref>, the results of DRL and WADRL are almost the same, but WADRL still finds some solutions with lower travel cost. In the 100-customer instance, as shown in Fig. <ref>, the results of WADRL are significantly better than those of DRL.

§ CONCLUSION

This paper proposed a weight-aware deep reinforcement learning (WADRL) framework combined with the non-dominated sorting genetic algorithm-II (NSGA-II) to solve the multiobjective vehicle routing problem with time windows (MOVRPTW). Firstly, a comprehensive MOVRPTW model was proposed, which takes into account both travel cost and customer satisfaction. Subsequently, a WADRL framework was introduced, where the weights of the objective functions were integrated into the state of deep reinforcement learning (DRL), enabling a single DRL model to solve the entire multiobjective optimization problem. An innovative DRL framework was introduced, centered around a transformer-based policy network. The architecture of this policy network was carefully designed, comprising three integral components: the encoder module, which is responsible for embedding customer information; the weight embedding module, which plays a critical role in capturing and encoding the weights of the objective functions; and the decoder module, tasked with generating executable actions based on the contextual information.
Furthermore, considering the limitations of DRL, we employed the NSGA-II algorithm to optimize the solutions generated by WADRL. Experimental results demonstrated the promising performance of our method across all MOVRPTW instances. Specifically, our method produced Pareto fronts with better coverage and quality. Finally, the weight-aware strategy greatly reduced the training time of DRL and achieved superior performance. Regarding future research, in the current use of WADRL to solve MOVRPTW, a single DRL model can solve the entire MOVRPTW for a given number of customers. However, for different MOVRPTW scales, multiple models still need to be trained. We will investigate how to train a single DRL model to handle MOVRPTW instances of varying scales.

Rixin Wu was born in Taizhou, Jiangsu, China in 2000, obtained a B.E. degree from Nanjing University of Chinese Medicine in 2022, and is currently pursuing an M.E. degree at Nanjing University of Aeronautics and Astronautics. His research focuses on deep reinforcement learning and multiobjective optimization problems.

Ran Wang (Member, IEEE) is an Associate Professor and Doctoral Supervisor with the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China. He received his B.E. in Electronic and Information Engineering from Honors School, Harbin Institute of Technology, China in July 2011 and Ph.D. in Computer Science and Engineering from Nanyang Technological University, Singapore in April 2016. He has authored or co-authored over 50 papers in top-tier journals and conferences. He received the Nanyang Engineering Doctoral Scholarship (NEDS) Award in Singapore and the innovative and entrepreneurial Ph.D. Award of Jiangsu Province, China in 2011 and 2017, respectively. He is the recipient of the Second Prize for Scientific and Technological Progress awarded by the China Institute of Communications and he is the ChangKong Scholar of NUAA. His current research interests include telecommunication networking and cloud computing.

Jie Hao (Member, IEEE) received her B.S. degree from Beijing University of Posts and Telecommunications, China, in 2007, and the Ph.D. degree from the University of Chinese Academy of Sciences, China, in 2014. From 2014 to 2015, she worked as a post-doctoral research fellow in the School of Computer Engineering, Nanyang Technological University, Singapore. She is currently an Associate Professor at the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China. Her research interests include wireless sensing and wireless communication.

Qiang Wu (Member, IEEE) is currently a Professor with the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China. Before that, he worked as a general engineer of ZTE Central Research Institute, and received his Ph.D. degree from the College of Computer Science and Technology, Zhejiang University, Hangzhou, China. Prof. Wu is a Fellow of the China Institute of Communications, an Executive Member of the Service Computing Technical Committee of CCF, and a Member of the Technical Committee of the National Engineering Laboratory for Mobile Internet System and Application Security. His main research fields include future networks, industrial Internet, cyber security, and space-earth integration networks.
The Chinese government honored him with the Second-Class National Science and Technology Progress Award in 2009 and the Second-Class National Technology Innovation Award in 2014. He has more than 100 authorized patents, of which more than 20 have corresponding relations with international standards.

Ping Wang (Fellow, IEEE) is a Professor at the Department of Electrical Engineering and Computer Science, York University, and a Tier 2 York Research Chair. Prior to that, she was with Nanyang Technological University, Singapore, from 2008 to 2018. Her research interests are mainly in radio resource allocation, network design, performance analysis and optimization for wireless communication networks, mobile cloud computing and the Internet of Things. Her scholarly works have been widely disseminated through top-ranked IEEE journals/conferences and received four Best Paper Awards from prestigious IEEE conferences. Her publications received 28,900+ citations with an H-index of 77 (Google Scholar). She is a Fellow of IEEE and a Distinguished Lecturer of the IEEE Vehicular Technology Society.

Dusit Niyato (Fellow, IEEE) received a Ph.D. degree in Electrical and Computer Engineering from the University of Manitoba, Canada, in 2008. He is currently a Professor in the School of Computer Science and Engineering at Nanyang Technological University, Singapore. He has published more than 400 technical papers in the area of wireless and mobile computing. He has won the Best Young Researcher Award of the Asia/Pacific chapter of the IEEE Communications Society and the 2011 IEEE Communications Society Fred W. Ellersick Prize Paper Award. Currently, he is serving as a senior editor of IEEE Wireless Communications Letters, an area editor of IEEE Transactions on Wireless Communications, an area editor of IEEE Communications Surveys and Tutorials, an editor of IEEE Transactions on Communications, and an associate editor of IEEE Transactions on Mobile Computing, IEEE Transactions on Vehicular Technology, and IEEE Transactions on Cognitive Communications and Networking. He was a Distinguished Lecturer of the IEEE Communications Society in 2016-2017. He has been named as a highly cited researcher in computer science. He is a Fellow of IEEE.
Robust Calibration of Large Vision-Language Adapters
Balamurali Murugesan 0000-0002-3002-5845, Julio Silva-Rodríguez 0000-0002-9726-9393, Ismail Ben Ayed 0000-0002-9668-8027, Jose Dolz 0000-0002-2436-7750
ETS Montreal, Canada (balamurali.murugesan.1@ens.etsmtl.ca)
July 2024
==========================================================================================================================================================

§ ABSTRACT

This paper addresses the critical issue of miscalibration in CLIP-based model adaptation, particularly in the challenging scenario of out-of-distribution (OOD) samples, which has been overlooked in the existing literature on CLIP adaptation. We empirically demonstrate that popular CLIP adaptation approaches, such as Adapters, Prompt Learning, and Test-Time Adaptation, substantially degrade the calibration capabilities of the zero-shot baseline in the presence of distributional drift. We identify the increase in logit ranges as the underlying cause of miscalibration of CLIP adaptation methods, contrasting with previous work on calibrating fully-supervised models. Motivated by these observations, we present a simple and model-agnostic solution to mitigate miscalibration, by scaling the logit range of each sample to its zero-shot prediction logits. We explore three different alternatives to achieve this, which can be either integrated during adaptation or directly used at inference time. Comprehensive experiments on popular OOD classification benchmarks demonstrate the effectiveness of the proposed approaches in mitigating miscalibration while maintaining discriminative performance, with improvements that are consistent across the three families of these increasingly popular approaches. The code is publicly available at: <https://github.com/Bala93/CLIPCalib>.

§ INTRODUCTION

Deep learning is undergoing a paradigm shift with the emergence of pre-training large-scale language-vision models, such as CLIP <cit.>. These models, and more particularly the variants integrating vision transformers, have demonstrated impressive generalization capabilities in visual recognition tasks, yielding exceptional zero-shot and few-shot performance. Nevertheless, in a dynamic and evolving open world, machine learning applications inevitably encounter the challenge of out-of-distribution (OOD) data, which typically hinders the scalability of these models to new domains. The existing CLIP literature faces this scenario with different solutions to improve robustness. In particular, freezing the entire vision backbone to re-use its generalizable features has been a popular choice, especially in the low-data regime <cit.>. Thus, CLIP adaptation during training typically resorts to Adapters <cit.> or Prompt Learning <cit.> strategies, which leverage a few labeled samples to adapt the model with the hope that it will generalize properly to unseen, related domains. Furthermore, a more challenging scenario consists of adapting the model during inference without any access to labeled data, where a prevalent method is Test-Time Prompt Tuning (TPT) <cit.>. While these strategies have further improved the discriminative performance of a zero-shot baseline, we have observed that the accuracy of the uncertainty estimates of the predictions, i.e., calibration, is significantly degraded (see <ref>), regardless of the family of adaptation models or setting.
Thus, after adaptation, model predictions are often over-confident, even if they are wrong. This represents a major concern, as inaccurate uncertainty estimates can carry serious implications in safety-critical applications, such as healthcare, where CLIP is emerging as a popular strategy <cit.>. Nevertheless, despite its importance, the miscalibration issue has been overlooked in the CLIP adaptation literature. Motivated by these observations, this paper addresses this critical, yet disregarded, issue. Indeed, few-shot adaptation strategies, notably Prompt Learning and Adapters, are attracting wide attention, with an unprecedented surge in the number of methods proposed <cit.>, albeit being a relatively recent research topic. Nevertheless, the main focus of this growing literature has been on improving the discriminative power of adapted models. Thus, given their increasing popularity and quick adoption in real-world safety-critical problems, we believe that assessing the calibration performance of CLIP adaptation strategies in OOD scenarios is of paramount importance to deploy not only high-performing but also reliable models. We can summarize our contributions as follows:

* We empirically demonstrate that popular CLIP adaptation strategies, such as Adapters, Prompt Learning, and Test-Time Prompt Tuning, substantially degrade the calibration capabilities of the zero-shot baseline in the presence of distributional drift.

* For these adaptation strategies, we expose that the underlying cause of miscalibration is, in fact, the increase of the logit ranges. This contrasts with recent work on calibrating fully-supervised models <cit.>, which suggests that the inherent cause of miscalibration is the increase of the logit norm instead, due to the standard cross-entropy loss used for training.

* Based on these observations, we present a simple and model-agnostic solution, which consists in scaling the logit range of each sample based on the zero-shot logits. We further present several alternatives to accommodate our solution, which can be implemented either at training or inference time.

* Comprehensive experiments on popular OOD classification benchmarks empirically demonstrate the effectiveness of our approaches in reducing the miscalibration error while preserving discriminative performance.

§ RELATED WORK

§.§ Vision language models

Text-driven pre-training of image representations, via so-called vision-language models (VLMs), is revolutionizing the paradigm of transfer learning. These models can integrate massive web-scraped data sources, thus learning robust feature representations. In particular, models such as CLIP <cit.> or ALIGN <cit.> train joint multi-modal embedding spaces via contrastive learning of paired images and text, using dual encoder architectures. Such strong vision-language alignment has demonstrated robust open-vocabulary zero-shot generalization capabilities <cit.>. Given such potential, transferring pre-trained VLMs to a wide variety of tasks is gaining increasing popularity. Nevertheless, this process faces particular challenges. First, large-scale pre-training usually also involves scaling network sizes, which is a computational bottleneck for low-resource adaptation scenarios. Second, recent attempts to fine-tune VLMs have demonstrated a deterioration of their robustness against domain drifts <cit.>, especially when available data is limited.
Thus, an emerging core of recent literature is focusing on novel alternatives to overcome these limitations. More concretely, freezing the pre-trained backbone and reusing its features by training a small set of parameters, via Prompt Learning <cit.> or black-box Adapters <cit.>, is getting increasing attention.

§.§ Prompt based learning

CLIP models have shown encouraging results by hand-crafting personalized text descriptions of the target visual representation <cit.>. Automating this cumbersome process gives rise to Prompt Learning (PL) <cit.>, a family of methods to adapt CLIP that inserts a set of continuous learnable tokens in the original text prompt at the input of the VLM language encoder. While the CLIP model remains frozen, PL optimizes the most discriminative text input, given a few-shot support set <cit.>. CoOp <cit.> represents one of the initial attempts to study the effect of prompt tuning on different tasks, and proposed to learn the prompt's context words. CoCoOp <cit.>, on the other hand, designed a simple network to predict the input text prompt through image features, as CoOp failed to match the zero-shot performance on generic tasks. TPT <cit.> extends PL to address test-time adaptation scenarios by updating the prompt for a batch of original and augmented samples through entropy minimization.

§.§ Black-box Adapters

Prompt Learning involves using the CLIP encoder throughout the entire training process, as the backpropagation of the gradient has to pass through it to update the prompts, which results in large computational constraints <cit.>. Adapter-based techniques provide an alternative to Prompt Learning for aligning to downstream tasks, leveraging only pre-computed features with minimal additional parameters. A base version of such methods involves training a linear classifier via logistic regression, typically referred to as Linear Probing <cit.>. Nevertheless, leveraging only the vision features does not fully exploit the potential of VLMs. To this end, several methods have proposed enhanced Adapters, which further rely on zero-shot text-driven class-wise prototypes. In particular, Clip-Adapter <cit.> introduced additional fully connected layers and operated on the vision or language branch through residual-style feature combination. Training-free methods such as Tip-Adapter <cit.> resorted to a key-value cache model based on the available few-shot supports. Likewise, TaskRes <cit.> introduced additional learning parameters and applied a residual modification of the text representation, which led to a better initialization when learning from few-shot supervision. More recently, <cit.> provided a wider look at the coupling of vision and text features in such Adapters, pointing out that these methods largely build their improved performance on initializing the logistic classifier weights with the zero-shot prototypes, and proposing a simple solution, coined CLAP, for a better distillation of such prototypes.

§.§ Model calibration

Calibrating the confidence of deep learning models is paramount in developing reliable solutions, as the confidence is expected to correlate with correctness. Given the importance and the potential impact of miscalibration, a growing literature addressing this issue has emerged in recent years. Post-processing techniques have been widely used to achieve calibration, wherein a linear transformation <cit.> is applied to the predicted logits before converting them into softmax probabilities.
Nevertheless, an important limitation is that these transformations are obtained using held-out validation data, which is assumed to follow the same distribution as the test data, limiting their applicability in the presence of domain drifts <cit.>. A popular alternative consists in calibrating the networks at training time. This can be achieved by incorporating explicit penalties that either penalize overconfident softmax predictions <cit.> or encourage small logit differences <cit.>. Furthermore, <cit.> demonstrated that popular classification losses, such as Focal Loss <cit.> or Label Smoothing <cit.>, integrate an implicit term that maximizes the entropy of the network predictions, thus favoring low-confidence models. Other works to improve the accuracy of the uncertainty estimates during training include the use of MixUp <cit.>, or enforcing a constant vector norm on the logits <cit.>, among others. Nevertheless, all these methods have been proposed in the context of fully-supervised models, and the calibration of Prompt Learning and Adapter-based methods for CLIP remains unexplored in the literature.

§ BACKGROUND

§.§ CLIP Zero-Shot Classification

CLIP <cit.> is a large vision-language model, trained via contrastive learning to produce visual representations from images paired with their associated text descriptions T. To do so, CLIP consists of an image encoder θ and a text encoder ϕ. This generates the corresponding vision embedding v ∈ ℝ^d and class text embeddings t_k ∈ ℝ^d, which are typically projected into an ℓ_2-normalized shared embedding space. Given a new task consisting in visually discriminating between K categories, the set containing the text embeddings of the K classes can be denoted as 𝒲 = {t_k}_{k=1}^K, with t_k = ϕ(“A photo of a [class_k]”). At inference, this learning paradigm enables zero-shot prediction. More concretely, for a given set of K classes, and an ensemble of N different prompts per category, we can generate the set of available prompts as 𝒯 = {{T_{n,k}}_{n=1}^N}_{k=1}^K. Then, a popular strategy <cit.> consists in obtaining a class zero-shot prototype, which is computed as t_k = (1/N) ∑_{n=1}^N ϕ(T_{n,k}). Then, for a given test image with vision embedding v, the zero-shot prediction, p = (p_k)_{1 ≤ k ≤ K}, can be obtained as

p_k = exp(v^⊤ t_k / τ) / ∑_{j=1}^K exp(v^⊤ t_j / τ),

where τ is a temperature hyperparameter, whose value is learned during training, and ⊤ denotes the dot product operator (as the vectors are ℓ_2-normalized, the dot product between these two vectors is equivalent to their cosine similarity).

§.§ Adaptation to novel tasks

Let us now consider a support set that contains a few labeled samples, 𝒮 = {(x_i, y_i)}_{i=1}^S, with y_i ∈ {0,1}^K the ground-truth vector associated with x_i. The vector of predicted logits of a given image i is defined as l_i = (l_ik)_{1 ≤ k ≤ K}. In Prompt Learning methods, such as CoOp <cit.> or KgCoOp <cit.>, the adaptation is done by modeling the input text T_k of a given class k as learnable continuous vectors. Thus, in contrast with zero-shot inference, where the resulting text embeddings are obtained as the mean over the different pre-defined prompts, in Prompt Learning these are optimized. To generate the logits, the learnable prompts are combined with the fixed visual embedding of the test image i, such that l_ik = v_i^⊤ t_k / τ, which can then be integrated into <ref> to minimize the cross-entropy loss over the few labeled shots.
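As an illustration, the zero-shot inference described above can be sketched as follows; this is a minimal example, the temperature value and tensor shapes are assumptions, and the class prototypes are the prompt-averaged text embeddings t_k.

```python
import torch
import torch.nn.functional as F


def zero_shot_probs(image_feat, class_prototypes, tau=0.01):
    """Zero-shot prediction: softmax over cosine similarities scaled by a
    temperature (the value of tau here is only illustrative)."""
    v = F.normalize(image_feat, dim=-1)        # (d,) image embedding
    T = F.normalize(class_prototypes, dim=-1)  # (K, d) prompt-averaged text embeddings
    logits = (T @ v) / tau                     # (K,) cosine similarities / tau
    return logits.softmax(dim=-1)
```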
The family of methods commonly referred to as Adapters <cit.> proceeds differently, and learns transformations over the visual and text embeddings, yielding the logits l_ik = θ_a(v_i, t_k, τ), where θ_a is the set of learnable parameters of the Adapter. A more challenging scenario consists in adapting the text prompts at inference, which is commonly referred to as test-time prompt tuning <cit.>. As this setting does not include few-shot supports to adapt the prompts, the supervised cross-entropy objective is replaced by an unsupervised minimization of the Shannon entropy. Thus, regardless of the method selected, the objective is to optimize either t_k (Prompt Learning and test-time prompt tuning) or θ_a (Adapters) to minimize either the CE over the softmax predictions obtained from the few shots, or the Shannon entropy of the test sample predictions at inference.

§ CONSTRAINING LOGITS DURING ADAPTATION

§.§ Impact of adaptation on logits

To understand the impact on calibration of using the cross-entropy (CE) loss to adapt CLIP, let us decompose the logit vector l into its Euclidean norm ‖l‖ = √(l_1^2 + ... + l_K^2) (magnitude) and its unit vector l̂ (direction), such that l = ‖l‖ l̂. Considering now the magnitude and direction of the logit vector, the general form of the cross-entropy loss over a given support sample, using the softmax probabilities in <ref>, can be formulated as

-log [ exp(‖l_i‖ l̂_ik) / ∑_{j=1}^K exp(‖l_i‖ l̂_ij) ].

This view of the cross-entropy implies that the direction l̂_i of the logit vector determines the predicted class of the image i. Thus, if the predicted category is incorrect, l̂_i will change to match the target class provided in the one-hot encoded label. Once the network prediction is correct, i.e., the labeled class coincides with arg max_j(l_ij), the direction of the vector will remain unchanged. Nevertheless, the nature of the cross-entropy loss will favor higher softmax probabilities for the predicted class. Recent literature <cit.> suggests that this is achieved by increasing ‖l_i‖, indicating that the miscalibration issue originates from the augmentation of the logit norm. Nevertheless, in what follows we refute this argument and advocate for the increase of the logit range as the potential cause of miscalibration.

Proposition 1. Let us consider the softmax cross-entropy loss, where σ(·) denotes the softmax function. Assume that l ≥ 0. Then, for any scalar a > 0, σ_k(l) = σ_k(l + a1) ∀ k, and ‖l + a1‖ > ‖l‖, where 1 denotes the vector of ones.

Prop. 1 demonstrates that adding a strictly positive constant value a ∈ ℝ_{++} to all the logits increases the norm of the vector l, but this does not lead to more confident predictions, whose probability scores remain unchanged.

Proposition 2. Let R(l) = max(l) - min(l) denote the range of the logit vector l, where max(l) (respectively min(l)) denotes the largest (respectively smallest) value among the elements of l. Then, for any given scalar a > 1, and for k = arg max_j(l_j), we have σ_k(a l) > σ_k(l) and R(a l) > R(l).

The proofs of Propositions 1 and 2 are deferred to the supplementary material. From the above propositions we find that increasing the range of a given logit vector results in higher softmax probability values. Thus, contrary to the widespread belief that increasing the logit norm hinders model calibration, we argue that this effect of logit distance magnification, which yields higher softmax confidences, is a potential source of miscalibration (note that the same reasoning applies to TPT <cit.>, whose learning objective to adapt the CLIP baseline is to minimize the Shannon entropy of the softmax distribution).
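The two propositions can be checked numerically with a few lines of NumPy; this is only an illustrative sketch, and the logit values are arbitrary.

```python
import numpy as np


def softmax(l):
    e = np.exp(l - l.max())
    return e / e.sum()


l = np.array([4.0, 1.0, 0.5])

# Proposition 1: shifting all logits by a > 0 grows the norm but not the confidence.
print(np.linalg.norm(l + 3.0), np.linalg.norm(l))          # larger norm
print(np.abs(softmax(l + 3.0) - softmax(l)).max())          # probabilities unchanged (~0)

# Proposition 2: scaling by a > 1 enlarges the range and the winning probability.
print((2.0 * l).max() - (2.0 * l).min(), l.max() - l.min()) # larger range
print(softmax(2.0 * l)[0], softmax(l)[0])                   # higher max confidence
```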
This explains why, even though adaptation of CLIP yields performance gains in terms of accuracy, adapted models are worse calibrated than a zero-shot baseline. Furthermore, this analysis is supported empirically by the observations depicted in <ref>, where we can observe that, while calibration has been degraded in the adapted models, the logit norm of their predictions has substantially decreased.

§.§ Our solution

From our previous analysis and empirical observations, we can derive that: i) despite improving their classification performance, state-of-the-art strategies to adapt CLIP suffer from miscalibration, particularly compared to the original zero-shot predictions, and ii) one of the main causes arises from the logit magnification issue introduced by the cross-entropy loss used during adaptation. In light of these findings, we propose a simple but effective solution that can alleviate the miscalibration issue in CLIP adaptation. More concretely, we propose to constrain the range of the logits during the training of a main objective ℋ, which results in the following constrained problem:

minimize ℋ(𝐘, 𝐏)   subject to   l_i^{ZS-min} 1 ≤ l_i ≤ l_i^{ZS-max} 1,   ∀ i ∈ 𝒟,

where 𝐘 and 𝐏 are matrices containing the sample-wise ground-truth and softmax-prediction vectors for all the samples involved in the training, and l_i^{ZS-min} and l_i^{ZS-max} are the min and max logit magnitudes of the zero-shot prediction for sample x_i. 𝒟 denotes a given set of available samples. Furthermore, in the test-time prompt tuning setting, we simply need to replace 𝐘 by 𝐏 in <ref>. Directly solving the constrained problem in <ref> in the context of deep models is not trivial <cit.>, and Lagrangian-dual optimization has typically been avoided in modern deep networks involving millions of parameters. To address this issue, we propose several alternatives to approximate the constrained problem presented in <ref>, which are detailed below.

§.§ Zero-shot logit normalization during training

The constraint in the presented problem, i.e., l_i^{ZS-min} 1 ≤ l_i ≤ l_i^{ZS-max} 1, ∀ i ∈ 𝒟, can be integrated into the main objective by transforming the logits before computing the CE loss over the support set samples (here 𝒟 = 𝒮). More concretely, the modified learning objective can be defined as

ℋ(𝐘, 𝐏) = -∑_{i ∈ 𝒮} ∑_{k=1}^K y_ik log [ exp(l'_ik) / ∑_{j=1}^K exp(l'_ij) ],

where l'_i denotes the zero-shot normalized logit vector of l_i, obtained as

l'_i = (l_i^{ZS-max} - l_i^{ZS-min}) / (l_i^max - l_i^min) · (l_i - l_i^min 1) + l_i^{ZS-min} 1,

with l_i^max = max_j(l_ij) and l_i^min = min_j(l_ij), respectively. While the calibration strategy formalized in Eq. <ref> forces the direction of the logit vector to match the correct category encoded in the one-hot label, its magnitude is normalized according to the zero-shot logit range of image x_i. Note that this is different from the solution presented in <cit.>, where the logit values are normalized by the logit norm, which does not guarantee that they will lie within a given range.

§.§ Integrating explicit constraints in the learning objective

The problem in <ref> can also be approximated by an unconstrained problem, for example by transforming the enforced inequality constraints into penalties, which are implemented with the ReLU function. The resulting learning objective can be formally defined as

min_θ ℋ(𝐘, 𝐏) + λ ∑_{i ∈ 𝒮} ∑_{k=1}^K ( ReLU(l_ik - l_i^{ZS-max}) + ReLU(l_i^{ZS-min} - l_ik) ),

where λ controls the trade-off between the main loss and the penalties.
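A minimal PyTorch sketch of the two training-time alternatives defined above is given below; the function names are ours, the zero-shot logits are assumed to be precomputed for each sample, and the default λ = 10 follows the value fixed later in the experiments.

```python
import torch
import torch.nn.functional as F


def zs_range_normalize(logits, zs_logits):
    """Rescale each sample's logits so that they span the [min, max] range of
    its zero-shot logits (the normalization l'_i defined above)."""
    l_min = logits.min(dim=-1, keepdim=True).values
    l_max = logits.max(dim=-1, keepdim=True).values
    z_min = zs_logits.min(dim=-1, keepdim=True).values
    z_max = zs_logits.max(dim=-1, keepdim=True).values
    return (z_max - z_min) / (l_max - l_min) * (logits - l_min) + z_min


def range_penalty(logits, zs_logits, lam=10.0):
    """ReLU penalties pushing adapted logits back inside the per-sample
    zero-shot range (the penalty term of the unconstrained objective above)."""
    z_min = zs_logits.min(dim=-1, keepdim=True).values
    z_max = zs_logits.max(dim=-1, keepdim=True).values
    pen = F.relu(logits - z_max) + F.relu(z_min - logits)
    return lam * pen.sum(dim=-1).mean()


# Training-time usage (sketch): normalize before the CE loss, or add the penalty.
# loss = F.cross_entropy(zs_range_normalize(logits, zs_logits), targets)
# loss = F.cross_entropy(logits, targets) + range_penalty(logits, zs_logits)
# Applying zs_range_normalize to test logits at inference corresponds to the
# sample-adaptive scaling described in the following subsection.
```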
The intuition behind the penalties is that when the constraint in Eq. (<ref>) is not satisfied, i.e., there exist logit magnitudes outside the zero-shot logit range, the value of the penalty term increases, backpropagating gradients that modify the logit values according to the enforced constraint. We would like to stress that a natural solution to tackle the constrained problem in <ref> would be the use of Lagrangian multipliers. Nevertheless, as stated earlier, in the context of deep learning, these methods suffer from several well-known limitations, which include training instability and non-convergence due to the difficulty of convexifying loss functions <cit.>. Thus, despite its simplicity, the use of penalties has proven to be effective in constraining deep models on a myriad of problems, such as image segmentation <cit.>, adversarial attacks <cit.>, or modeling thermal dynamics <cit.>.

§.§ Sample-adaptive logit scaling (SaLS)

Lastly, we explore a simple but efficient solution that is closely related to temperature scaling (TS) <cit.>. In particular, TS is a single-parameter variant of Platt scaling <cit.>, which consists in learning the scaling hyperparameter τ in <ref>. While this strategy has led to very competitive results, it requires an external validation set to fine-tune the value of τ, which limits its use to learning scenarios with abundant labeled data and an absence of distributional drifts <cit.>. Furthermore, τ is fixed for a whole dataset, which is suboptimal from a sample-wise standpoint. To alleviate these issues, we propose to use the logit normalization defined in <ref> at inference time to obtain the final softmax probability in <ref>. More concretely, for each sample i to be classified, we compute its zero-shot prediction, whose min and max logit values are used in <ref> to scale the logit distribution of that sample i provided by the selected adaptation method. This can be viewed as an unsupervised sample-wise temperature scaling during testing, which does not require additional validation samples to fix its value, and adapts to the specificity of each sample, regardless of distributional drifts.

§ EXPERIMENTS

§.§ Setup

Datasets. We use popular datasets for benchmarking few-shot <cit.> and test-time <cit.> CLIP adaptation. Domain Generalization: the adaptation robustness to domain shifts is evaluated using ImageNet <cit.> distributions. Concretely, we sample a 16-shot training subset from ImageNet's training partition, which is directly evaluated on out-of-distribution test data from ImageNetV2 <cit.>, ImageNet-Sketch <cit.>, ImageNet-A <cit.>, and ImageNet-R <cit.>. Fine-grained tasks: calibration during test-time adaptation is assessed on a collection of 11 datasets that include heterogeneous discriminative tasks. These include the ImageNet <cit.>, Caltech101 <cit.>, OxfordPets <cit.>, StanfordCars <cit.>, Flowers102 <cit.>, Food101 <cit.>, FGVCAircraft <cit.>, SUN397 <cit.>, DTD <cit.>, EuroSAT <cit.>, and UCF101 <cit.> datasets. Note that for test-time adaptation we only employed their corresponding test partitions. Selected methods. Our proposed calibration framework is agnostic to any adaptation strategy of zero-shot models. We evaluate its performance across different popular settings and state-of-the-art methods for CLIP adaptation. Prompt Learning (PL): CoOp <cit.>, CoCoOp <cit.>, ProGrad <cit.> and MaPLe <cit.> are considered as the baselines. Adapters: CLIP-Adapter <cit.>, TIP-Adapter <cit.>, and TaskRes <cit.> are used.
Test-Time Adaptation: TPT <cit.> is selected as the primary method for test time prompt tuning, together with C-TPT <cit.>, a concurrent method recently proposed for calibrating TPT. CLIP adaptation. We now describe the experimental details for training the selected adaptation methods. Backbones: All experiments build upon CLIP <cit.>, using its ResNet-50 <cit.> and ViT-B/16 <cit.> pre-trained weights. Text prompts: The textual descriptions for zero-shot representation of the target concepts used are the hand-crafted text prompts used in CoOp <cit.>. Image augmentations: For few-shot adaptation, we applied random zoom, crops, and flips, following <cit.>. Regarding Prompt Learning methods, these transformations are applied continuously during training, while for Adapters, since feature representations are pre-computed, the number of augmentations per support sample is set to 20, following <cit.>. Finally, regarding test-time prompt tuning (TPT), we employed AugMix <cit.> as in <cit.> to form a 64-image batch from each original image. Training details: Adapters are trained following the recent benchmark in <cit.>. We optimized the Adapters for 300 epochs, using SGD optimizer with a Momentum of 0.9 and an initial learning rate of 0.1. In the case of PL, we set the context length of the prompt to 4 and trained CoOp and CoCoOp for 50 and 10 epochs, respectively. We set the same training schedule, optimizer, and learning as in <cit.>. For ProGrad and MaPLe, we follow the training settings considered for domain generalization reported in theirs respective works <cit.>. Likewise, for TPT, we optimized the learned prompt by doing a single step with AdamW optimizer, with the learning rate set to 0.005, as in <cit.>. Evaluation metrics. To measure the discriminative performance of the different methods, we use classification accuracy (ACC). In terms of calibration, we follow the standard literature and resort to the Expected Calibration Error (ECE). In particular, with N samples grouped into M bins {b_1, b_2, …, b_M}, the ECE is calculated as: ∑_m=1^M|b_m|/N|acc(b_m)-conf(b_m)|, where acc(·) and conf(·) denote the average accuracy and confidence in bin b_m. Calibration details. We introduced three different alternatives to alleviate the miscalibration of adapted models (<ref>). For and , we incorporated such modifications during training (adaptation), and kept all implementation details previously presented. Furthermore, the penalty-based calibration weight λ in Eq. <ref> is set to 10 and remains fixed across all settings. §.§ Results I) Task 1: Few-shot domain generalization. Table <ref> introduces the average few-shot generalization (OOD) results using black-box Adapters, whereas Table <ref> presents the same for PL approaches. We refer the reader to for the detailed results per dataset. First, results consistently show a miscalibration phenomenon when CLIP models are adapted, regardless of the CLIP backbone used, or the transferability approach. Few-shot Adapters calibration: We find that miscalibration is especially occurring in few-shot black-box Adapters. For example, Clip-Ad or TaskRes in <ref> (a) show ECE increments of +8.3 and +4.0 respectively. This is further magnified when using the popular TIP-Adapter method. Few-shot PL calibration: PL approaches are relatively more robust in this setting (+3.8 CoOp in <ref> (a)). 
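For reference, the ECE values reported in these tables follow the definition given in the Evaluation metrics paragraph above; a minimal sketch of that computation is shown below. The 15 equal-width bins and all variable names are our own illustrative choices, not details taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE over equal-width confidence bins, as in the definition above."""
    confidences = np.asarray(confidences, dtype=float)  # max softmax probability per sample
    predictions = np.asarray(predictions)               # predicted class per sample
    labels = np.asarray(labels)                         # ground-truth class per sample
    n = len(confidences)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # samples falling into bin b_m
        if not np.any(in_bin):
            continue
        acc = np.mean(predictions[in_bin] == labels[in_bin])   # acc(b_m)
        conf = np.mean(confidences[in_bin])                    # conf(b_m)
        ece += (np.sum(in_bin) / n) * abs(acc - conf)          # |b_m|/N * |acc - conf|
    return ece
```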
On the impact of logit range regularization: Results show the potential of logit range scaling among its different proposed variants, improving calibration for all Prompt Learning approaches, and most of the used Adapters. Impact of different strategies to adjust logit range:. The only strategy that does not allow for consistent performance gains is , which deteriorates performance in some Adapters (see Clip-Ad in Table <ref>). We believe that the re-parameterization in Eq <ref> might not properly prevent logit range de-adjustment before normalization, and thus overfit to the few support samples. In contrast, constraint directly regularizes such values, showing consistent ECE decreases for both Adapters (-22.0 for TIP-Ad(f) using ViTs, or -4.3 for CLIP-Ad using RN50) and PL (-2.9 for CoOp using RN50, or -0.94 for CoCoOp using ViTs). Interestingly, as a side effect, we also observed accuracy improvements for domain generalization for several methods. Nevertheless, the best calibration performance is provided by a simple, yet effective post-processing standardization, . This is especially relevant, since this method does not require any modification of the adaptation strategy, and can be potentially applied to the output of any few-shot model. II) Task 2: Test Time Adaptation (TTA). We report in Table <ref> the performance for test-time prompt tuning across 11 fine-grained adaptation datasets for ResNet-50 backbone. Our results show that compared to zero-shot prediction, TPT largely deteriorates the calibration. Despite this degradation is somehow alleviated by C-TPT, further integrating our approaches show promising potential for better calibration of such methods, with consistent improvements for both strategies (e.g.,-2.0 and -0.9 in ECE for TPT and C-TPT with ). III) Further constraining the logit range to smaller values. ZS predictions are well calibrated. Nevertheless, during adaptation, the model improves its discriminative performance at the cost of degrading its calibration capabilities. While in this work we advocate for increases of the logit range as a cause of miscalibration, decreasing this range should be done with care. In particular, further decreasing the logit range approaches a scenario of maximum entropy, where the predicted probabilities are semantically meaningless, leading to worse discrimination performance. This reasoning is empirically supported in Table <ref>, where we can see that, regardless of the learning paradigm, significantly decreasing the logit range yields higher ECE scores, i.e., miscalibration is magnified. IV) Effect on logits. Following one of our main observations (<ref>), we argued that the source of miscalibration in CLIP adaptation models is the increase of the logit range of their predictions, and not the logit norm. To empirically validate this hypothesis, we depict in <ref> both the logit norm and logit ranges for a relevant method of each category, as well as the version improved with our solution, across the four OOD datasets of ImageNet. We can observe that, indeed, applying our approach (which improves calibration) leads to reduced logit ranges (bottom), whereas the logit norm (top) typically increases. § CONCLUSIONS We have investigated the miscalibration issue of popular CLIP adaptation approaches on the challenging task of few-shot and zero-shot adaptation under distributional drifts. 
We have analyzed the source of this issue and demonstrated that, in contrast to existing evidence pointing to the logit norm, increases in the range of predicted logits might be a potential cause of miscalibration on the adapted models. To overcome this issue, we have presented three simple solutions, which consist in constraining the logit ranges to the values of the zero-shot predictions, either at training or test time. Extensive experiments on multiple models from the three categories, and popular OOD benchmarks, demonstrate that incorporating our simple solution into existing CLIP adaptation approaches considerably enhances their calibration performance, without sacrificing model accuracy. The proposed approach is model-agnostic and demonstrates superior performance regardless of the family of approaches or setting, making it an appealing yet simple solution for zero-shot and few-shot CLIP adaptation, particularly in the challenging scenario of out-of-distribution data. Acknowledgments. This work is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), via its Discovery Grant program. We also thank Calcul Quebec and Compute Canada. splncs04 § ROBUST CALIBRATION OF LARGE VISION-LANGUAGE ADAPTERS SUPPLEMENTARY MATERIAL § SUPPLEMENTARY EXPERIMENTS §.§ Proof of Proposition 1 If we add a strictly positive constant value a ∈ℝ_++ to all the elements of a positive logit vector 𝐥 ≥ 0, the modified vector is 𝐥' = 𝐥 + a1. Considering σ(·) as the softmax function, we can then rewrite the softmax prediction for class k as (we omit the temperature scalar τ for simplicity, as it does not have any impact on this proof): σ_k(𝐥') = σ_k(𝐥+a1) = exp(l_k + a)/∑_j=1^K exp(l_j + a) = exp(l_k) exp(a)/∑_j=1^K exp(l_j) exp(a) = (exp(a)/exp(a)) · exp(l_k)/∑_j=1^K exp(l_j) = exp(l_k)/∑_j=1^K exp(l_j) = σ_k(𝐥). This proves the first part of Proposition 1. Showing ‖𝐥'‖ ≥ ‖𝐥‖. Considering 𝐥' as 𝐥+a1, we have: ‖𝐥+a1‖ - ‖𝐥‖ = √(∑_j=1^K (l_j+a)^2) - √(∑_j=1^K l_j^2) = √(∑_j=1^K (l_j^2+2al_j+a^2)) - √(∑_j=1^K l_j^2) = √(∑_j=1^K l_j^2 + 2a∑_j=1^K l_j + Ka^2) - √(∑_j=1^K l_j^2). Since Ka^2 ≥ 0, and 2a∑_j=1^K l_j > 0 (we assume a ∈ℝ_++ and 𝐥 ≥ 0), the first square root is greater than the second one. This results in a positive value for ‖𝐥+a1‖ - ‖𝐥‖, which demonstrates that ‖𝐥+a1‖ = ‖𝐥'‖ ≥ ‖𝐥‖. Thus, Proposition 1 is proved, i.e., an increase in the logit norm does not necessarily modify the confidence of the predictions. §.§ Proof of Proposition 2 Considering a scalar a>1, s=a-1, and σ(·) the softmax function, we can define for the predicted class k (k = argmax_j l_j): σ_k(a𝐥) = e^(a l_k)/∑_j=1^K e^(a l_j) = e^((s+1) l_k)/∑_j=1^K e^((s+1) l_j) = e^(l_k)/∑_j=1^K e^(l_j + s(l_j - l_k)), where l_k = max_j(l_j). If we consider now that for any j ∈ [1,2,...,K], if j ≠ k, then l_j - l_k ≤ 0, we have that: l_j + s(l_j - l_k) < l_j for j ≠ k. Therefore, e^(l_j + s(l_j - l_k)) < e^(l_j) for j ≠ k (note that for j = k, the exponent remains l_k). Thus, the sum in the denominator for σ_k(a𝐥) is smaller than the sum in the denominator for σ_k(𝐥): ∑_j=1^K e^(l_j + s(l_j - l_k)) < ∑_j=1^K e^(l_j). Since the numerator e^(l_k) remains the same, we have: σ_k(a𝐥) = e^(l_k)/∑_j=1^K e^(l_j + s(l_j - l_k)) > e^(l_k)/∑_j=1^K e^(l_j) = σ_k(𝐥). This proves that increasing the logit range (by scaling the logits with a factor a > 1) increases the confidence of the predicted class: σ_k(a𝐥) > σ_k(𝐥). Showing R(a𝐥) > R(𝐥). Let us denote the range R(𝐥) as: R(𝐥) = max(𝐥) - min(𝐥). For a given scalar a > 1, we can scale a logit vector 𝐥, whose maximum and minimum values are also scaled: max(a𝐥) = a max(𝐥) and min(a𝐥) = a min(𝐥). Following our definition of range R(𝐥): R(a𝐥) = a max(𝐥) - a min(𝐥), where a can be factored out, leading to: R(a𝐥) = a (max(𝐥) - min(𝐥)) = a R(𝐥). Last, as a > 1, we have that: a R(𝐥) > R(𝐥), which proves that R(a𝐥) > R(𝐥). Thus, Proposition 2 is proved. §.§ Few shot domain generalization in adapters We supplement the results depicted in the main manuscript for few-shot adapter calibration (Table <ref>) by providing results for individual datasets. In particular, we considered in this experiment the popular adapter techniques CLIP-Adapter <cit.>, TIP-Adapter <cit.>, and TaskRes(e) <cit.> and, additionally, ZS-LP <cit.>. In Table <ref>, we compare our contributions , , and applied to each of the above methods for ResNet-50 and ViT-B/16 architectures. The adapters are initially trained on the source ImageNet dataset and evaluated under ImageNet distributional shifts, -V2 <cit.>, Sketch <cit.>, Adversarial <cit.>, and Rendition <cit.>. The classification metric accuracy and the calibration metric ECE are reported for the individual datasets. CLIP-Adapter and TIP-Adapter are sensitive to the technique, possibly due to the methods' dependency on specific hyper-parameter settings <cit.>. Even for these methods, consistently improves calibration while retaining or improving the accuracy. Last, our post-processing technique can retain the accuracy and assist in calibration consistently across all approaches. §.§ Few shot domain generalization in prompt learning In the following, we extend the evaluation of few-shot prompt learning generalization with per-dataset metrics and cross-domain generalization. ImageNet shifts. In this experiment, prompt learning methods were adapted on 16-shot ImageNet and evaluated on its corresponding domain drifts (OOD). In this section, we complement the results in Table <ref> with detailed per-dataset metrics and additional prompt learning methods. In particular, we evaluate our proposed calibration methods when applied to CoOp <cit.>, CoCoOp <cit.>, ProGrad <cit.>, and MaPLe <cit.>. We evaluate CoOp and CoCoOp for both ResNet-50 and ViT-B/16 architectures. As MaPLe is specifically designed for transformer architectures, we consider the ViT-B/16 CLIP backbone. Analogously, for ResNet-50, we consider the prompt-aligned gradient technique ProGrad. These results are presented in Table <ref>. Following the earlier reported trends, and consistently provide better calibration and accuracy. In comparison with applying in Adapters, using them in prompt learning provides stable results, and often provides improved calibration compared to the baseline. It is noteworthy that prompt learning methods such as CoCoOp, ProGrad, and MaPLe are designed for improved generalization, and thus provide better performance than the previously evaluated adapters in Section <ref>. Despite this, our proposed range re-normalization technique can improve the calibration even for these methods, supporting our observation that the range of the logits indeed plays a key role in calibration. Cross-domain generalization. This additional experiment evaluates prompt learning methods adapted on ImageNet on 10 fine-grained tasks.
These 10 tasks include different target categories, different from the set existing in ImageNet, and evaluate the robustness of the zero-shot capabilities of the learned prompts on new categories. We evaluate the 10 few-shot benchmarks with CoOp based on the prompt learned with the 16-shot setting trained on ImageNet. Results are depicted in Table <ref>. As reported in the literature <cit.>, the prompt learned by CoOp on ImageNet is not sufficient to adapt to the diverse fine-grained tasks, thereby providing lower accuracy and calibration than the original vision-language model (zero-shot). It is worth mentioning that incorporating our logit range normalization techniques, particularly SaLS, provides consistent improvement in calibration compared to the original prompt learning approach CoOp. §.§ Test time prompt tuning with ImageNet OOD benchmark Test time prompt tuning methods, such as TPT <cit.>, provide a provision to infer an individual sample directly during the test time. In this supplementary experiment, we analyze our methods with TPT for the ImageNet OOD datasets and complement results on fine-grained datasets depicted in Table <ref>. The numbers for each ImageNet OOD dataset comparing TPT with our methods are reported in Table <ref>. In this setting, Zero-Shot inference is better calibrated than the TPT, even when the accuracy increases with adaptation. This drastic degradation in calibration may be largely due to the use of entropy, which favours larger distances between the winner and other logits, thereby increasing the logit range. Through our methods, we have attempted to restrict the logit range from going beyond the Zero-Shot range, providing us the expected improvement in calibration, and even in accuracy in some cases. §.§ Additional experiments for Test time prompt tuning In this section, we further study TPT with our methods on 11 few-shot benchmarks. In particular, we complement the results provided in the main manuscript (Table <ref>) with ImageNet results and the CLIP ViT-B/16 model. In <ref>, the overall (Avg.) results show that our calibration methods are better than the baselines, especially in calibration. As expected from a good post-processing technique, retains the accuracy and consistently improves the calibration across tasks. Importantly, even for C-TPT, our method still improves the calibration, proving that even with the best prompt choice for calibration, there is still scope for improvement by adjusting the predictions logit range. More importantly, our approach can be directly applied to the logit predictions, not requiring pre-training the network, such as C-TPT, making of it an efficient ready-to-use solution. §.§ Additional studies on Logit norm, range, and calibration In this experiment, we analyze the impact of our contributions in calibration to the logit norm and range. We consider representative methods for few-shot Adapters and Prompt Learning, , TaskRes <cit.> and CoOp <cit.>, respectively. Fig. <ref>, and <ref> depict the comparison of logit norm/range with ECE for , , and proposed calibration methods. As expected, after applying our method, ECE is reduced in most scenarios. It is worth mentioning that ECE improvements correlate with the decrease in the logit range. This is not the case of the logit norm, which either increases or remains constant. These observations correlate with our hypothesis in the main paper, and demonstrate that logit range plays a key role in calibration. 
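To make the distinction between logit norm and logit range concrete, the toy sketch below (with arbitrary illustrative numbers) reproduces the behaviour stated in Propositions 1 and 2: adding a constant to the logits increases their norm without changing the softmax confidence, whereas scaling them by a > 1 increases their range and the confidence of the predicted class.

```python
import numpy as np

def softmax(l):
    e = np.exp(l - np.max(l))  # numerically stabilized softmax
    return e / e.sum()

l = np.array([2.0, 1.0, 0.5, 0.2])   # toy logit vector
shifted = l + 3.0                     # Proposition 1: add a constant a > 0
scaled = 2.5 * l                      # Proposition 2: scale by a > 1

for name, v in [("original", l), ("shifted", shifted), ("scaled", scaled)]:
    print(name,
          "norm=%.2f" % np.linalg.norm(v),
          "range=%.2f" % (v.max() - v.min()),
          "top-confidence=%.3f" % softmax(v).max())
# shifted: larger norm, same range, identical confidence
# scaled : larger range, higher confidence for the predicted class
```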
§.§ Comparison to other calibration methods We further evaluate the performance of our simplest solution, , compared to several existing unsupervised calibration approaches. Our reasoning behind using these methods, i.e., L-Norm <cit.> and ECP <cit.>, stems from the fact that they do not require labeled samples, in contrast to most existing methods (for example, Temperature Scaling (TS) needs a large validation set to fix the temperature value). These results, which are reported in Table <ref>, showcase that the proposed post-processing alternative brings important performance gains, in terms of calibration, without sacrificing discriminative power. This gap is particularly significant in prompt learning, where the proposed improves the ECE on CoOp by 11% and 16%, compared to L-Norm and ECP, respectively. §.§ Reliability plots Fig. <ref>, <ref>, and <ref> depict the reliability plot of ZS, , and for one from each of the setting of Adapters (Clip-Adapter), Prompt learning (CoOp), and Test time prompt tuning (TPT) for few representative cases in ImageNet OOD, and Few shot benchmarks respectively. From these plots, it could be noted that the density of the plots near the expected calibration curve is lower in our methods compared to the baselines, moving closer to ZS without compromising much the accuracy. Furthermore, the difference between the accuracy and the average confidence (in the bottom of the plots) is typically reduced when our approaches are integrated into the original methods, a sign that indicates that a model is better calibrated.
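As an illustration of the sample-wise post-processing evaluated above, the sketch below affinely maps an adapted logit vector into the min-max range of the corresponding zero-shot logits before applying the softmax. Function and variable names are ours, and the snippet is a simplified reading of the normalization formula in the main text, not the authors' implementation.

```python
import numpy as np

def softmax(l, tau=1.0):
    e = np.exp((l - np.max(l)) / tau)
    return e / e.sum()

def rescale_to_zero_shot_range(adapted_logits, zero_shot_logits):
    """Affinely map one sample's adapted logits into the [min, max] range
    of that sample's zero-shot logits (sample-adaptive scaling)."""
    l = np.asarray(adapted_logits, dtype=float)
    zs = np.asarray(zero_shot_logits, dtype=float)
    lo, hi = l.min(), l.max()
    if hi == lo:                      # degenerate case: constant logits
        return zs.copy()
    zs_lo, zs_hi = zs.min(), zs.max()
    return (zs_hi - zs_lo) / (hi - lo) * (l - lo) + zs_lo

# toy example: the adapted model is over-confident (wide logit range)
adapted = np.array([12.0, 3.0, 1.0])
zero_shot = np.array([4.0, 2.5, 1.5])
calibrated = rescale_to_zero_shot_range(adapted, zero_shot)
print(softmax(adapted).max(), softmax(calibrated).max())  # confidence shrinks after rescaling
```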
http://arxiv.org/abs/2407.12408v1
20240717083920
Towards Revisiting Visual Place Recognition for Joining Submaps in Multimap SLAM
[ "Markus Weißflog", "Stefan Schubert", "Peter Protzel", "Peer Neubert" ]
cs.RO
[ "cs.RO", "cs.CV" ]
Towards Revisiting VPR for joining Submaps in Multimap SLAM M. Weißflog et al. Process Automation, Chemnitz University of Technology, Chemnitz, Germany markus.weissflog@etit.tu-chemnitz.de Intelligent Autonomous Systems, University of Koblenz, Koblenz, Germany Towards Revisiting Visual Place Recognition for Joining Submaps in Multimap SLAMThe work of Stefan Schubert was supported in part by the German Federal Ministry for Economic Affairs and Climate Action. Markus Weißflog10009-0003-1163-8755 Stefan Schubert10000-0001-9841-0689 Peter Protzel10000-0002-3870-7429 Peer Neubert20000-0002-7312-9935 July 22, 2024 ========================================================================================================================================================================================================= § ABSTRACT Visual SLAM is a key technology for many autonomous systems. However, tracking loss can lead to the creation of disjoint submaps in multimap SLAM systems like ORB-SLAM3. Because of that, these systems employ submap merging strategies. As we show, these strategies are not always successful. In this paper, we investigate the impact of using modern VPR approaches for submap merging in visual SLAM. We argue that classical evaluation metrics are not sufficient to estimate the impact of a modern VPR component on the overall system. We show that naively replacing the VPR component does not leverage its full potential without requiring substantial interference in the original system. Because of that, we present a post-processing pipeline along with a set of metrics that allow us to estimate the impact of modern VPR components. We evaluate our approach on the NCLT and Newer College datasets using ORB-SLAM3 with NetVLAD and HDC-DELF as VPR components. Additionally, we present a simple approach for combining VPR with temporal consistency for map merging. We show that the map merging performance of ORB-SLAM3 can be improved. Building on these results, researchers in VPR can assess the potential of their approaches for SLAM systems. § INTRODUCTION Visual Simultaneous Localization And Mapping (SLAM) is a key technology for autonomous systems. It has applications in areas like robotics, autonomous driving, and augmented reality. In SLAM, an agent aims to create a map of its environment and tries to localize itself in this map at the same time. The resulting map can be used for surveying, navigation, obstacle avoidance, and other tasks. However, the agent cannot always accurately estimate its current position. One of the reasons for this is tracking loss, which prevents the agent from estimating its pose in relation to the global map. Tracking loss can occur due to fast motion, textureless regions, occlusions, or various other factors. In multimap visual SLAM systems, the agent has no choice but to create new maps, resulting in a set of disjoint submaps whose relative pose to each other is unknown, as shown in <ref>. This is a problem for mapping applications, for example, as it is not possible to create a continuous map. For this reason, modern visual SLAM systems like ORB-SLAM3 <cit.> use submap merging strategies. As we will show in <ref>, these strategies are not always successful, especially on challenging datasets. A promising approach to counteract this problem is Visual Place Recognition (VPR). VPR is the task of recognizing a previously visited place based on images. This can be used to merge submaps. 
We will show in <ref> that there are newer VPR approaches that outperform ORB-SLAM3's bag-of-visual-words-based (BOW) approach. However, as we show in <ref>, by just switching out the VPR component in isolation, ORB-SLAM3 cannot make use of the improved VPR performance. A complete integration into the overall system is required, which would involve considerable implementation effort and changes to the algorithm. Before these changes are made, it would be sensible to estimate the impact of using modern VPR for submap merging on the overall system. This estimation is the goal of our work. For this, we introduce a simplified experimental setup in <ref>. We evaluate this setup in <ref>. Finally, we discuss the results in <ref> and conclude in <ref>. We start by discussing the related work in <ref>. Our main contributions are: * We present a pipeline along with a set of metrics to evaluate the performance of VPR for submap merging in visual SLAM. Our pipeline does not require reparametrization or other modifications to the SLAM system. * We evaluate our approach on two challenging datasets using a state-of-the-art SLAM system with modern VPR approaches. * We present a simple approach for map merging that combines the advantages of modern VPR with temporal consistency. § RELATED WORK Visual Simultaneous Localization and Mapping. A general introduction to SLAM is given in <cit.>. This paper focuses on monocular visual SLAM, which uses a single camera as the only sensor. <cit.> provides a survey of visual SLAM methods. Modern SLAM algorithms include maplab 2.0 <cit.>, VINS <cit.>, Basalt <cit.> and ORB-SLAM3 <cit.>. Visual Place Recognition. Yin et al. <cit.> survey the general problem of place recognition, including VPR. Schubert et al. <cit.> provide a tutorial on VPR where they introduce the general problem, challenges, and evaluation metrics. Masone and Caputo <cit.> provide a comprehensive survey on the role of deep learning in VPR. Central to VPR are holistic image descriptors. They convert the pixel data of an image into a vector representation, enabling comparison between two images to determine if they show the same location. Holistic descriptors can either be directly computed from the whole image <cit.>, or they are an aggregation of local features <cit.>. Similar to our work, Khaliq et al. <cit.> compare the loop closing component of ORB-SLAM3 with a deep-learning-based VPR method. VPR-Bench <cit.> benchmarks different VPR metrics. However, both publications measure performance only on the VPR tasks and do not consider the underlying SLAM system. §.§ ORB-SLAM3 We use ORB-SLAM3 <cit.> for our experiments, as it is considered a highly performant algorithm <cit.>. ORB-SLAM3 is a landmark-based visual-inertial SLAM approach, which can also handle stereo and RGB-D cameras. In essence, ORB-SLAM3 consists of four main threads, which run in parallel: * The tracking thread aims to localize the current frame in the submap using ORB features <cit.>. If localization fails, ORB-SLAM3 saves the current submap and starts a new one. * The local mapping thread inserts the current keyframe and its new landmarks into the map and optimizes a local window. * The loop and map merging thread searches the database for similar frames to the current keyframe. Upon finding a matching frame (which must satisfy certain checks, see <ref>), a loop or a merge operation is triggered. If the current keyframe and the matching frame are part of the same submap, a loop closure is performed.
If they are part of different submaps, the submaps are merged. * In parallel, full bundle adjustment is performed. For tracking, ORB-SLAM3 uses ORB features <cit.>. These features are also aggregated and used as holistic descriptors in the mapping and map merging thread. The descriptors are computed using the BOW <cit.> implementation DBoW2 <cit.>. In our experiments, we used ORB-SLAM3 in the monocular configuration and only updated the camera parameters according to the datasets. § ANALYSIS OF THE PROBLEM §.§ Loops and Submap Merges in ORB-SLAM3 In this section, we analyze the submap merging performance of ORB-SLAM3 without post-processing. The results are shown in <ref>. The top part of the table shows the results for the NCLT <cit.> and Newer College <cit.> datasets, which will be introduced in <ref>. Noteworthy is the large number of unmerged submaps. The loop and map merging thread is unable to recover the relative transformations between the submaps after tracking loss, which significantly limits mapping capabilities. We performed the same analysis on the EuRoC <cit.> dataset, which is shown in the bottom part of the table. This dataset consists of eleven sequences captured using a drone under different conditions and motion patterns. We evaluated all sequences but only show the best (MH_01) and worst (V2_02) sequences in <ref>. On this dataset, ORB-SLAM3 has almost no tracking losses, which might be due to the short duration of the sequences.[The longest sequence lasts for 182 s <cit.>] This makes this dataset unsuitable for evaluating submap merging performance. Nevertheless, the number of loops and merges found never surpasses one. §.§ Performance of Modern Holistic Descriptors In this section, we analyze the performance of the BOW-based descriptors used in ORB-SLAM3 in isolation. We use the widely adapted metrics precision and recall <cit.> for evaluation. Good performance is indicated by high values for both metrics. Our evaluation setup is similar to <cit.>. As can be seen in <ref>, ORB-BOW is outperformed by the more modern descriptors HDC-DELF and NetVLAD, which will be presented in more detail in <ref>. This evaluation hints at the fact that switching out the holistic descriptors in ORB-SLAM3 could lead to better performance in map merging. However, the performance of the descriptors in isolation does not necessarily translate to better performance in the overall SLAM system. For example, matches within a submap contribute to improving the accuracy by providing loop closures; however, they cannot help with joining submaps. Matches between submaps, on the other hand, are crucial for map merging. The difference between these types of matches is not considered by precision and recall. In <ref>, we will present an evaluation metric that takes this difference into account. §.§ Naive Approach: Only Replacing the VPR Descriptor ORB-SLAM3 performs multiple checks before it performs a loop closure. These checks are visualized in <ref>. The most important for this work are: * VPR: ORB-BOW descriptors are utilized to find a matching place within the database of keyframes. The check fails if none are found. * Neighbours: The current keyframe and the potential match cannot be neighbours in ORB-SLAM3's covisibility graph. * Geometry: RANSAC and optimization are used on the ORB features of the current frame and the loop/ merge candidate to estimate the transformation between the two frames. The check is passed if there are enough inlying ORB keypoints. 
* Consistency: The checks have to be passed three times in a row to perform a loop closure/ map merge. After a frame passes all checks, the loop closure/ map merge is performed. As can be seen in <ref>, most loops/ merges are aborted due to the strict check <ref>. We have experimented with different holistic descriptors for the VPR-check, which has only a marginal effect on <ref> and no effect on check <ref>. To benefit from the full potential of modern VPR, the SLAM algorithm would need reparametrization and algorithmic adjustments to its loop/ merge pipeline. HDC-DELF, for example, works on local DELF <cit.> features, which could also be used during tracking and check <ref>. Such deep changes, which could potentially have a significant impact on system performance and involve high implementation efforts, should be tested in a simplified setup before implementation. In the remaining sections, we propose such a setup and evaluate the potential of these changes. § APPROACH This section describes our evaluation pipeline and the metrics used. An exemplary pipeline is shown in <ref>. The pipeline starts by predicting the distance between submaps. We use the word distance to refer to the inverse of the similarity. <Ref> describes how VPR and the timestamps of a SLAM trajectory are used to predict submap adjacency matrix A. <Ref> describes how these matrices are used to compute the metrics precision and coverage and how the ground truth is obtained. <Ref> summarizes the assumptions we make for our pipeline. Finally, <ref> describes the descriptors and the datasets used in our experiments. Our pipeline is a post-processing step for a multimap SLAM system, which outputs a trajectory 𝒯 containing N poses. For this work, we define a trajectory as 𝒯 = {( t_i, 𝐩_i, R_i, I_i, j_i ) }_i^N, where t_i is the timestamp of pose i, 𝐩_i and R_i are the position and orientation relative to the submap, I_i is the camera image, and j_i is the submap index where frame i is localized, with j∈[1, M] and M≪ N. §.§ Predicting Submap Adjacency This section describes how the submap adjacency matrix A is computed, which is required by our post-processing pipeline. We use the word adjacency to describe that the transformation between two frames or two submaps is known. A is a boolean, square, symmetric matrix of size M× M, where M is the number of submaps. If the transformation between submap i and submap j is known, the submaps are adjacent, and A contains a one at positions (i, j) and (j, i). If no post-processing is performed, A is the identity matrix because no transformations between submaps are known. We analyze three different rules for estimating A: Rule Time. One natural approach to predict a submap adjacency matrix is to use the timestamps of the submaps. We define the temporal distance between submaps as the difference between the timestamp of the last keyframe of submap i and the timestamp of the first keyframe of submap j. By pairwise comparing all timestamps of the trajectory 𝒯, we can create a time distance matrix S_time. Rule VPR. Another way to predict the adjacency is to use the visual information of the images I. For that, we compute a holistic descriptor for each image using methods from VPR. We define the visual distance between two images as the distance between their descriptors. Based on this, we can calculate the frame distance matrix S_VPR^F, which contains at position (i, j) the descriptor distance of the images I_i and I_j. This matrix can be converted into a submap distance matrix S_VPR. 
At positions (i, j) and (j, i), the matrix S_VPR contains the smallest visual distance between any two images of submap i and submap j. The computation of S_VPR is visualized in the left part of <ref>. Combining Time and VPR. We test the combination of the two rules using a simple, exemplary algorithm, which can be expanded as required. Using the temporal distance matrix S_time and the visual distance matrix S_VPR, we define two maps as adjacent, if * their visual distance is below a strict threshold τ_VPR, * or their temporal distance is below a strict threshold τ_time, * or their temporal distance is below a more relaxed threshold f_time·τ_time and their visual distance is below a relaxed threshold f_VPR·τ_VPR. f is the factor by which the threshold is relaxed. In the later evaluation, we will show exemplary results for the following two combinations of thresholds[These values were chosen by hand to demonstrate the feasibility of the approach. No parameter optimization was performed]: Rule Comb. 1 uses τ_time=2 s, f_time=10 and f_VPR=2. Rule Comb. 2 uses τ_time=0.5 s, f_time=10 and f_VPR=4. We leave τ_VPR as a free parameter. To convert the distance matrices S into adjacency matrices A, a threshold τ (or τ_VPR in the case of the combined rules) is chosen and applied to S. All distances below the threshold are set to one, and distances above are set to zero. Ground truth. To obtain the ground truth, we define submaps as adjacent if they have two frames whose Euclidean distance (of the ground truth trajectory) and angle of rotation are smaller than two predefined thresholds ϵ_dist=10 m and ϵ_rot=20^∘. Similar definitions are typically used for VPR<cit.>. Thus, the ground truth matrix of adjacent submaps A_gt is a boolean matrix of shape M× M. §.§ Evaluation Pipeline From Adjacency to Reachability. The adjacency matrices A can be further processed into reachability matrices R. Two submaps are reachable if the transformation between them can be obtained directly or indirectly.[Indirectly refers to the relationship that, if the transformation between the submaps (i,j) and the transformation between the submaps (j,k) are known, the transformation (i,k) is also known.] R is boolean, symmetric, and of size M× M. It is computed as follows: R = nonzero(A^n) The function nonzero(·) sets all non-zero entries of a matrix to one. n is the number of times the matrix A is multiplied with itself until R converges. This step is visualized in the central portion of <ref>. Computing Coverage and Precision. At first, a weight vector 𝐰∈ℕ^M is extracted from 𝒯, whose elements w_j are defined as the number of frames in submap j. A weight matrix W∈ℕ^M× M is computed using W = 𝐰𝐰^T. Comparing the ground truth matrix R_gt with the predicted reachability matrix R yields the true positive (TP), false positive (FP), true negative (TN), and false negative (FN) matches. From that, coverage and precision can be computed, which is visualized in the right part of <ref>. Coverage describes how much the reachability matrix is filled: c = TP+FP/N^2 = ∑( R ⊙ W )/∑(W) The symbol ⊙ refers to the Hadamard product, and the symbol ∑(·) refers to the sum over all matrix elements. Precision describes how many of the found matches are correct: p = TP/TP+FP = ∑( R⊙ R_gt⊙ W)/∑( R ⊙ W) Note that we exclude the main diagonal of all matrices, as all submaps are always reachable from themselves, and thus precision and coverage would be skewed. 
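A compact sketch of this part of the pipeline is given below: the distance matrices are thresholded into an adjacency matrix with the combined time/VPR rule, closed transitively into a reachability matrix, and evaluated with frame-weighted coverage and precision while excluding the main diagonal. Function names are ours, and the default thresholds only mirror Rule Comb. 1 as an example.

```python
import numpy as np

def combined_adjacency(S_vpr, S_time, tau_vpr, tau_time=2.0, f_time=10.0, f_vpr=2.0):
    """Rule Comb.: adjacent if visually very close, temporally very close,
    or moderately close in both time and appearance (defaults follow Rule Comb. 1)."""
    A = (S_vpr < tau_vpr) | (S_time < tau_time) | \
        ((S_time < f_time * tau_time) & (S_vpr < f_vpr * tau_vpr))
    np.fill_diagonal(A, True)  # a submap is trivially adjacent to itself
    return A

def reachability(A):
    """Transitive closure R = nonzero(A^n), iterated until convergence."""
    R = A.astype(bool)
    while True:
        indirect = (R.astype(int) @ R.astype(int)) > 0  # indirect links via a third submap
        R_next = R | indirect
        if np.array_equal(R_next, R):
            return R
        R = R_next

def coverage_precision(R, R_gt, frames_per_submap):
    """Frame-weighted coverage and precision, excluding the main diagonal."""
    w = np.asarray(frames_per_submap, dtype=float)
    W = np.outer(w, w)                      # weight matrix W = w w^T
    off = ~np.eye(len(w), dtype=bool)       # mask out the main diagonal
    found = ((R & off) * W).sum()           # TP + FP in frame-pair units
    correct = ((R & R_gt & off) * W).sum()  # TP in frame-pair units
    coverage = found / (W * off).sum()
    precision = correct / found if found > 0 else 0.0
    return coverage, precision
```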
Note also that the adjacency matrix, the estimated reachability matrix, and thus coverage and precision depend all on the threshold value τ (or τ_VPR for the combined rules). By varying this threshold, pairs of precision and coverage values are obtained. All the values together form the precision-coverage curve, which is used for evaluation in the following experiments. The area under the curve (AUC) can be obtained by integration. §.§ Assumptions and Limitations In this work, we investigate the isolated problem of finding possible matches between submaps of a SLAM system. Because of that, we make several assumptions about the system and the data we work with: (1) We assume that the SLAM system estimates accurate poses within the submaps. (2) We assume that the SLAM system can detect tracking loss reliably. (3) We assume that the transformations between matching frames can be estimated. §.§ Experimental Setup Datasets. We used the NCLT and Newer College datasets. NCLT <cit.> consists of multiple sequences that were collected onboard a robot covering a large outdoor area of a college campus. We selected five representative sequences that cover different times of the day and different seasons. They have a duration of 40 to 80 minutes. The Newer College dataset <cit.> consists of a single sequence collected using a handheld sensor setup. It covers a large trajectory over a courtyard and a park at New College, Oxford, and has a duration of approximately 44 minutes. We subsample it at 2 Hz. As we focus on monocular SLAM, we only employ the front camera for NCLT and the left camera for Newer College. Holistic Descriptors. We refer to the local BOW-based <cit.> aggregation of ORB features <cit.> used by ORB-SLAM3 as ORB-BOW. HDC-DELF is a deep learning-based descriptor that uses Hyperdimensional Computing to aggregate DELF features <cit.> to a holistic descriptor. NetVLAD <cit.> is a deep learning-based holistic descriptor as well. In our experiments, we used the implementations and parameters as proposed by the original authors. § RESULTS Our results can be seen in <ref> and <ref>. In <ref>, the curves are not smooth because a small change in the threshold value τ could lead to a merge of two large maps, causing coverage and precision to change abruptly by a large amount.[In all experiments, we iterate over all possible values of τ to ensure that the smoothest possible curves are obtained.] On the left of the coverage axis, few highly confident matches are found; on the right, many matches are found, but many of them are false positives. We see that, except for the Newer College dataset, ORB-BOW is among the lowest-performing rules. As discussed in <ref>, ORB-SLAM3 employs a very conservative approach to submap merging. This is reflected in the fact that the purple marker in <ref> has very low coverage but high precision. ORB-SLAM3 is kept as originally proposed in <cit.>, so there are no parameters that could be varied to get a curve instead of a single point. The Newer College dataset has large areas of failed tracking (as can be seen by the low tracking rate in <ref>). This causes time-based rules to perform poorer than pure VPR-based rules on this dataset. § DISCUSSION <Ref> show a large variation in the results, which could be due to the different characteristics of the datasets. It can be seen that the ORB-BOW strategy is outperformed in almost all cases. These results indicate that a modern VPR component offers the potential to improve a SLAM system. 
HDC-DELF has a very good mean and worst-case performance, especially in combination with the temporal consistency rule, making it a promising choice for future research. This VPR approach could also improve other parts of the SLAM pipeline: the local DELF features, for example, could be used for tracking and transformation estimation. As this work provides an estimation, future research is needed to show the extent to which this potential is realized in a full SLAM pipeline. We propose that modern VPR methods for map merging as part of a SLAM pipeline should maximize coverage. § CONCLUSION Multimap SLAM systems like ORB-SLAM3 suffer from tracking loss, which leads to the creation of disjoint submaps without relative pose information between them. VPR is a potential approach for merging these submaps. However, integrating a modern VPR component into a SLAM system requires considerable modifications to the system. In this work, we have presented a pipeline to estimate the performance of an improved VPR component for submap merging in visual SLAM. We have evaluated our approach using ORB-SLAM3. Additionally, we have presented a submap merging approach that combines VPR with temporal consistency for map merging. Our pipeline does not require reparametrization or changes to the SLAM system. Our results show that the map merging performance of ORB-SLAM3 can be improved by using modern VPR approaches. As this work only provides an estimation of possible improvements, future work includes fully integrating the new VPR components into the overall SLAM system to exploit their potential. splncs04
http://arxiv.org/abs/2407.13717v1
20240718171635
CoDefeater: Using LLMs To Find Defeaters in Assurance Cases
[ "Usman Gohar", "Michael C. Hunter", "Robyn R. Lutz", "Myra B. Cohen" ]
cs.SE
[ "cs.SE", "cs.AI" ]
Iowa State University Ames Iowa USA ugohar@iastate.edu Iowa State University 30 Shuangqing Rd Ames Iowa USA mchunter@iastate.edu Iowa State University 1 Thørväld Circle Ames Iowa USA rlutz@iastate.edu Iowa State University Ames Iowa USA mcohen@iastate.edu § ABSTRACT Constructing assurance cases is a widely used, and sometimes required, process toward demonstrating that safety-critical systems will operate safely in their planned environment. To mitigate the risk of errors and missing edge cases, the concept of defeaters - arguments or evidence that challenge claims in an assurance case - has been introduced. Defeaters can provide timely detection of weaknesses in the arguments, prompting further investigation and timely mitigations. However, capturing defeaters relies on expert judgment, experience, and creativity and must be done iteratively due to evolving requirements and regulations. This new ideas paper proposes CoDefeater, an automated process to leverage large language models (LLMs) for finding defeaters. Initial results on two systems show that LLMs can efficiently find known and unforeseen feasible defeaters to support safety analysts in enhancing the completeness and confidence of assurance cases. <ccs2012> <concept> <concept_id>10011007.10010940.10011003.10011114</concept_id> <concept_desc>Software and its engineering Software safety</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178</concept_id> <concept_desc>Computing methodologies Artificial intelligence</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10011007.10011074.10011075.10011076</concept_id> <concept_desc>Software and its engineering Requirements analysis</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Software and its engineering Software safety [500]Computing methodologies Artificial intelligence [500]Software and its engineering Requirements analysis CoDefeater: Using LLMs To Find Defeaters in Assurance Cases Myra B. Cohen July 22, 2024 =========================================================== § INTRODUCTION Safety-critical systems have become deeply integrated into many societal domains, including healthcare, transportation, energy, and aviation <cit.>. Failures in these systems can lead to catastrophic consequences for human safety, including fatalities and environmental and property damage <cit.>. This has led to an increased focus on their dependence, reliability, and safety <cit.>. Many systems must comply with regulations <cit.>, provide evidence of safety, and undergo rigorous certification processes <cit.> for approval from regulatory bodies. Assurance cases (ACs) have emerged as a common practice for this purpose, facilitating the verification of system correctness and the validation of specific claims regarding safety, security, and trustworthiness, among others <cit.>. An assurance case is a structured hierarchy of claims and arguments supported by evidence that a system will function as intended in a specified environment <cit.>. Several formal notations (e.g., Goal Structuring Notation (GSN) <cit.>, Claims-Arguments-Evidence (CAE) <cit.>, and Eliminative Argumentation (EA) <cit.>), along with tools <cit.>, have been proposed. However, concerns arise over their completeness, uncertainty, and soundness for cyber-physical systems. <cit.>, leading to false confidence and catastrophic failures <cit.>. 
For example, failure of the minimum safe altitude warning system that led to a major aviation accident was attributed to incomplete and flawed reasoning in the safety case <cit.>. To enhance the robustness of assurance cases, it is critical to identify and mitigate their defeaters (also known as assurance weakeners). Defeaters highlight gaps in evidence or reasoning that undermine the validity of claims in the assurance case <cit.>. An example of a defeater, drawn from the assurance case for the safe operation of an sUAS (small Uncrewed Aircraft System) battery, challenges the assurance case's claim that “The sUAS has enough battery charge to complete its mission." The defeater casting doubt on this claim is, “Unless the battery monitor is not calibrated/inaccurate." The defeater serves to record, within the assurance case itself, the analyst's challenge to the validity of the claim. Various approaches have been proposed to identify and mitigate defeaters <cit.>. However, manually creating defeaters is a labor-intensive and time-consuming process <cit.>, relying heavily on safety analysts' judgment, experience, creativity, and understanding of the system. This can lead to confirmation bias <cit.>. As assurance cases evolve with new standards and technological advances, ongoing efforts are focused on formal and semi-automated approaches for detecting and managing defeaters <cit.>. Best practices build assurance cases incrementally, so automating all/or part of this process is important <cit.>. Large Language Models (LLMs) are increasingly automating software engineering tasks like test generation and defect detection <cit.>. In particular, these models have become valuable in tasks requiring complex understanding, such as vulnerability detection, requirements elicitation, and code generation <cit.>. Moreover, LLMs have shown promise in automating evaluation tasks and acting as surrogate evaluators <cit.>. Consequently, we explore whether LLMs' capabilities can be harnessed to automate defeater analysis towards the completeness, soundness, and confidence of assurance cases. Despite calls for more research into LLMs' ability to identify defeaters <cit.>, no study has been conducted to evaluate and investigate their effectiveness. This new ideas paper presents the first empirical investigation of the feasibility and utility of LLMs for identifying defeaters, using a process we call CoDefeater. It evaluates their potential to aid practitioners and safety analysts in the iterative human-in-the-loop process of finding defeaters, as shown in Figure <ref>. We evaluate the performance of an LLM (ChatGPT) in automated defeater analysis on two complex real-world case studies. Our experimental results suggest that CoDefeater is a promising approach for identifying and generating novel assurance case defeaters. Overall, this work makes three key contributions. 1) To the best of our knowledge, we provide the first empirical results from an investigation of the effectiveness and usefulness of an LLM (GPT 3.5) in identifying and creating defeaters for real-world assurance cases. 2) We provide a new assurance case fragment with defeaters that can be leveraged for further research on automated defeater identification techniques. 3). Based on our findings, we outline current challenges and directions for future work. All experimental artifacts are available here: https://gitlab.com/anonymousdot/codefeater. 
§ BACKGROUND AND RELATED WORK Assurance case arguments typically adopt an inductive approach, where sub-claims offer direct evidence to support the parent claim but do not ensure it with certainty <cit.>. Focusing solely on proving a claim may introduce confirmation bias, as exemplified by the Nimrod aircraft crash <cit.>. Recent approaches have therefore embraced defeasible reasoning, which acknowledges that arguments about system properties in practice are inherently defeasible <cit.>. Defeaters are potential doubts or objections that challenge the validity of a claim, reflecting gaps in evidence and reasoning <cit.>. Figure <ref> shows a fragment of an assurance case for an sUAS battery, with examples of defeaters (red boxes). Defeaters are typically represented using EA notation <cit.> but have also been integrated into GSN and CAE notations <cit.>. Hence, this study does not aim to evaluate LLMs' performance in identifying defeaters in any specific notation or semantic accuracy but rather to investigate the feasibility of this approach. Finally, the indefeasibility criterion requires a thorough search for defeaters in an assurance case <cit.>, motivating our investigation into LLMs' potential to assist practitioners and safety analysts in identifying and generating novel defeaters. In software engineering, LLMs are increasingly employed to assist developers and automate tasks such as discovering requirements, code generation, testing, and program synthesis <cit.>. <cit.> have reported ChatGPT's effectiveness in hazard analysis for safety-critical systems, highlighting their potential to assist human analysts. In the context of software assurance cases, <cit.> assessed LLM's (GPT-4) proficiency in understanding GSN representations and its performance in constructing safety cases. <cit.> proposed using LLMs to identify defeaters; however, they did not report an empirical evaluation of their capabilities. <cit.> explored LLM's (GPT-4) understanding of EA notation and defeater concepts, with empirical validation left as future work. We aim to address this gap by examining LLM performance in identifying defeaters for assurance cases in two case studies. The Machine Learning (ML) community also has explored the reasoning abilities of LLMs <cit.>. This line of inquiry investigates how LLMs fare in open-ended tasks and their effectiveness in assisting humans. Findings have indicated that LLMs can demonstrate consistent responses exhibiting similarities with human evaluations, suggesting their potential as automated tools <cit.>. § METHODOLOGY We conducted a preliminary exploratory study toward answering the following two research questions: * RQ1: (Effectiveness). How effective are LLMs in identifying and analyzing defeaters in assurance cases? * RQ2: (Utility). Can LLMs support practitioners in generating novel and meaningful defeaters? §.§ Experimental Setup Datasets. We performed our experiments on two assurance cases. The first is for the CERN Large Hadron Collider (LHC) Machine Protection System (MPS), which provides assurance that the MPS will prevent damage to the LHC from unstable, high-energy particle beams <cit.>. The assurance case for the CERN LHC uses EA notation, and we extracted claim nodes with their corresponding defeaters for our experiments. The second assurance case is a fragment of a larger one that our team has recently created for small Uncrewed Aircraft Systems (sUAS)[Anonymized for submission] using GSN. 
It addresses the claim “the sUAS has enough charge in its battery to complete the mission" (see claim 1.2 in Figure <ref>). We include both the assurance case fragment and corresponding defeaters in the supplementary material[https://gitlab.com/anonymousdot/codefeater]. The real-world complexity of these systems and the availability of defeaters made them suitable for our preliminary experiments. Table <ref> provides a summary of the assurance cases. Model. We used ChatGPT (GPT-3.5) for our study, specifically model GPT-3.5-turbo released by Open AI <cit.>. §.§ Prompt Design An effective prompt design is crucial for achieving good performance, as the choice of prompts significantly impacts the quality, relevance, and accuracy of the LLM's response <cit.>. It involves crafting a system prompt to establish the context and prepare the LLM for the task, along with a user prompt that contains the specific task request <cit.>. We designed the system prompt following OpenAI's best practices <cit.> and relevant literature in software engineering <cit.> and open-ended evaluation tasks <cit.>. Our process identified role-based system prompts <cit.> as the most effective approach. Figure <ref> shows the system prompt used in our study. Several user prompting techniques have been proposed, e.g., zero-shot, one-shot, few-shot, and chain-of-thought <cit.>. Zero-shot learning involves providing the model with only the task description (system prompt), without examples of unseen tasks to learn from. In contrast, one-shot and few-shot learning conditions the model on one or more examples in the prompt, respectively <cit.>. For our preliminary study, we adopted the zero-shot learning setting. This approach both (1) facilitates the immediate, off-the-shelf application of LLMs, eliminating the need for computationally expensive fine-tuning procedures, and (2) is naturally suited to scenarios such as ours, with limited data availability for training or fine-tuning <cit.>. Each prompt was presented independently to the model to avoid influencing subsequent responses, allowing us to assess its standalone capabilities <cit.>. §.§ Evaluation Criteria Due to the complexity and open-ended nature of the task (e.g., varying response length vs. ground truth, and subjectivity), automatic evaluation metrics were not suitable <cit.>. Therefore, we relied on human evaluation for assessing LLM performance. For RQ1, we used a deductive coding approach <cit.>, where the first two authors independently reviewed responses and categorized them as complete match, partial match, or no match based on similarity to ground-truth defeaters. We used the defeaters in the LHC assurance case as the ground truth. For the battery assurance case, a set of ground-truth defeaters was provided by one of the authors (independently) familiar with the domain, following best practices <cit.>. For RQ2, the responses were evaluated for being reasonable <cit.>, i.e., the defeater could reasonably be in the ground truth but had been overlooked. This aimed to assess the LLM's capability to identify novel defeaters. Figure <ref> shows an example of each type of match. Next, the reviewers met to discuss and finalize their assigned codes. In the case of post-discussion disagreement, if one reviewer labeled a response as a partial match and the other as a complete match, we categorized it as a partial match to avoid confirmation bias <cit.>. 
In the one instance where one reviewer indicated no match while the other identified a partial or complete match, it was discarded. Last, we calculated inter-rater agreement using Cohen's Kappa <cit.> to evaluate the consistency and reliability of the coding process. §.§ Threats to Validity There is potential subjectivity in the qualitative evaluation of LLM performance on defeater identification. To address that, two authors independently coded the LLM responses, following best practices <cit.>, and held multiple discussions to avoid misinterpretations. We also computed Cohen's kappa <cit.>, which indicated substantial inter-reviewer agreement. To avoid confirmation bias, disagreements were coded as partial or no match. The non-deterministic nature of LLMs and different versions might produce slightly different responses; however, we used a single version of ChatGPT. Finally, the preliminary results presented here lead us to propose that the use of LLMs to generate defeaters merits further work; however, generalizability awaits larger studies with improved LLMs. finding § RESULTS In this section, we present the key findings of our experiments, grouped by our research questions. §.§ (RQ1): Effectiveness in Identifying Defeaters finding [style = exampledefault] Finding : The LLM displayed promising zero-shot capabilities for defeater analysis in assurance cases. Figure <ref> presents the results of our closed coding process, which assessed the GPT-3.5 model's performance in identifying defeaters for the two real-world assurance cases in a zero-shot setting. Our experiments showed that the model demonstrated unexpectedly good zero-shot capabilities for defeater analysis. Specifically, it completely identified more than half of all defeaters and partially identified more than a third in both datasets. Fewer than 15% of the defeaters created by human analysts were not identified at all by the LLM. These results were noteworthy given the complexity of the systems under study and the lack of system information, domain knowledge, or few-shot examples provided to the LLM. Additionally, the total coded responses achieved high inter-rater agreement, with a Cohen's Kappa score of 0.81, <cit.>. Finally, all of these defeaters were identified in the first prompt, illustrating the convenience of the approach. Among the unidentified defeaters, some required specific domain or system knowledge not provided to the model, indicating areas for potential improvement. finding [style = exampledefault] Finding : The LLM struggled with defeaters that challenged implicit assumptions. We conducted a manual analysis of the unidentified defeaters (n = 17) to investigate whether there were any patterns behind the LLM's failure. Interestingly, we found that the model struggled to identify those defeaters that implicitly challenged the truth of an assumption. For example, for the claim "The BICs will not produce a FALSE BEAM-PERMIT to trigger a beam dump, unless a loss of the high-frequency signal (10Mhz) in either Beam Permit Loop (A and B) is detected...", a (ground-truth) defeater in the LHC assurance case questions the assumption that the 10Mhz signal is the right signal to monitor. In other words, the presence of another similar high-frequency signal might lead to a false indication of TRUE BEAM-PERMIT. Unlike the analysts, the LLM did not question the underlying assumption in the claim and thus did not identify the defeater. 
Given these failures to question implicit assumptions, future work should explore integrating external knowledge sources, which has been shown to significantly enhance LLM performance in similar tasks <cit.>. §.§ (RQ2): Utility in Generating Novel Defeaters To answer RQ2, we evaluated the LLM's performance on the sUAS battery's assurance case, where we have the necessary domain knowledge. Using the same prompting method, we iteratively requested additional defeaters to assess the LLM's capacity to generate novel defeaters beyond the ones that had been identified in the building of the assurance case. Finding: LLMs can support practitioners in providing useful and novel defeaters. The LLM output was a useful source of five novel defeaters, each of which was feasible upon further investigation using our evaluation criteria. These were: (1) an unexpected power drain due to an onboard component failure; (2) an emergency external to the sUAS that forced the sUAS into a longer flight; (3) a missed waypoint to which the pilot had to return; (4) unexpected power drain arising from ongoing efforts to recover a lost GPS; and (5) external interference that the sUAS had to dodge repeatedly. The last one was interesting to us because, as an example of such interference, the LLM suggested that birds might attack the sUAS. This, in fact, happens quite often and is dangerous <cit.>. The response shows how an LLM can offer a creative perspective that catches missing edge cases. § DISCUSSION Based on our findings, we highlight several challenges and opportunities for an LLM-based process to help find defeaters. Designing better prompts. Prompt design has been shown to significantly impact LLM performance <cit.>. Many prompting methods have been proposed, and further investigation of their suitability for defeater analysis is needed. Moreover, our study revealed that the LLM can generate creative, redundant, and far-fetched scenarios (e.g., defeaters due to budget constraints). Balancing LLM creativity with defeater relevance poses an important challenge. Rationale behind defeaters. In our experiments, we found that the LLM responses not only identified defeaters but also provided helpful rationale and examples. For instance, if the ground-truth defeater stated, "Unless there are incorrect readings," the LLM suggested, "The sensors may not be properly calibrated, leading to inaccurate readings." These explanations can assist analysts in understanding and analyzing both a defeater's feasibility and its potential mitigations. Investigating explainable prompting techniques such as chain-of-thought <cit.> is an important next step. Towards incremental assurance using LLMs. Our study focused on single claims and associated defeaters. Future research should evaluate the performance of LLMs on a combination of claims. It will be interesting to investigate whether LLMs can identify the impact of defeaters on multiple claims and assess whether the provided evidence adequately addresses them. This direction will require developing detailed data for evidence analysis and exploring prompts specifically designed for this purpose. § CONCLUSION AND FUTURE WORK We have presented CoDefeater, a process for automated defeater discovery in assurance cases using LLMs (GPT-3.5). Our evaluation on two real-world case studies demonstrated the LLM's zero-shot capabilities in identifying defeaters and its potential to support practitioners in an iterative human-in-the-loop process.
We make the portion of the new assurance case and its ground-truth defeaters used in our experiments available to other researchers. Future work will expand beyond the zero-shot setting to explore one-shot and few-shot learning approaches for improved performance. Additionally, fine-tuning LLMs on assurance cases presents another avenue for improving their performance. Our study provides preliminary results as a starting point for future research on the role of LLMs as a tool to assist with the identification of defeaters and the development of improved assurance cases. This work was funded by grant 80NSSC23M0058 from the National Aeronautics and Space Administration (NASA).
http://arxiv.org/abs/2407.12087v1
20240716180003
Black holes in effective loop quantum gravity: Covariant holonomy modifications
[ "Idrus Husin Belfaqih", "Martin Bojowald", "Suddhasattwa Brahma", "Erick I. Duque" ]
gr-qc
[ "gr-qc", "hep-th" ]
Black holes in effective loop quantum gravity: Covariant holonomy modifications Idrus Husin Belfaqih,^1[e-mail address: i.h.belfaqih@sms.ed.ac.uk] Martin Bojowald,^2[e-mail address: bojowald@psu.edu] Suddhasattwa Brahma^1[e-mail address: suddhasattwa.brahma@gmail.com] and Erick I. Duque^2[e-mail address: eqd5272@psu.edu] ^1Higgs Centre for Theoretical Physics, School of Physics & Astronomy, University of Edinburgh, Edinburgh EH9 3FD, Scotland, UK ^2 Institute for Gravitation and the Cosmos, The Pennsylvania State University, 104 Davey Lab, University Park, PA 16802, USA § ABSTRACT Emergent modified gravity provides a covariant, effective framework for obtaining spherically symmetric black hole solutions in models of loop quantum gravity with scale-dependent holonomy modifications. Exact solutions for vacuum black holes in the presence of a cosmological constant are derived here and analyzed in four different gauges, explicitly related to one another by standard coordinate transformations. The global structure is obtained by gluing space-time regions corresponding to the gauge choices, reconstructing a non-singular wormhole space-time for an arbitrary scale-dependent holonomy parameter. This outcome demonstrates the robustness of black-hole models with covariant holonomy modifications under quantization ambiguities. Compared with previous constructions, full covariance of the resulting space-time models as derived here implies subtle new effects and leads to a novel understanding of the parameters in holonomy modifications, distinguishing a constant holonomy length from a possibly scale-dependent function that may change coefficients of holonomy terms. New physical results are obtained for instance in the context of a non-trivial zero-mass limit of holonomy-modified space-times. The existence of a consistent effective space-time structure implies various novel aspects of a net gravitational stress-energy and related thermodynamical properties. § INTRODUCTION Black holes are now everyday objects encountered in astronomical observations indicating that much of their exterior is well described by General Relativity (GR). Yet, the region beyond the event horizon remains inaccessible to observations and we must rely on theoretical extrapolations to understand the interior physics. In particular, GR robustly predicts a singularity at the center of black holes <cit.>, which has long been interpreted as the result of the breakdown of the theory. Hence, it has been argued that a quantum theory of gravity would be able to resolve such singularities in analogy to how the quantum theory of light resolves the ultraviolet catastrophe. One candidate background-independent and non-perturbative theory is loop quantum gravity (LQG) <cit.>. We shall explore the fate of stationary, spherically symmetric (vacuum) solutions due to quantum corrections from LQG in an effective framework, making sure that full covariance is realized in any effective line element used to describe the physics of black holes. The canonical quantization in LQG is based on the holonomy-flux algebra. This is so because the regularization scheme followed in LQG does not allow for quantum operators corresponding to the connection, but rather its parallel transport along one-dimensional curves. 
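Schematically, and suppressing details that play no role in what follows, the holonomy of an su(2) connection A_a^i along a one-dimensional curve e with tangent ẋ^a is the path-ordered exponential h_e[A] = 𝒫exp( ∫_e d s ẋ^a A_a^i τ_i ), with su(2) generators τ_i = -(i/2) σ_i. It is these smeared, exponentiated objects, rather than the connection components themselves, that become well-defined operators in the loop representation.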
In effective black-hole models, one focuses on the spherically symmetric reduced theory and incorporates quantum corrections corresponding to this feature by modifying the Hamiltonian constraint from a simple polynomial dependence on the connection to a trigonometric one resulting from the real-valued combinations of holonomies. This procedure is often done by hand, rather than derived from first principles, and covariance of the resulting models is seldom checked or even acknowledged as a problem. General covariance is harder to grasp in the canonical formulation than in its more common geometric or Lagrangian cousins. A modified Hamiltonian constraint will in general modify its Poisson brackets. One must therefore ensure that the brackets remain `first class' for the equations of motion to be gauge covariant. Even if this condition is fulfilled, there often remains a modified structure function in the Poisson bracket of two Hamiltonian constraints, whose proper interpretation is a modified kinematical dependence of the space-time metric on the phase-space variables <cit.>. Contrary to what is commonly believed, preserving the first-class nature of the constraint brackets does not imply that the space-time metric tensor is covariant, constructed in the usual way from phase-space variables, the structure function, as well as a lapse function and shift vector field: In a canonically modified theory, this object is not guaranteed to be subject to gauge transformations that are on-shell equivalent to the tensor transformation law. Rather, additional covariance conditions must be imposed on the structure function, which further restrict compatible modifications of the Hamiltonian constraint <cit.>. In order to implement these conditions, we will use Emergent Modified Gravity (EMG) <cit.> as the underlying framework for effective LQG. In particular, the spherically symmetric model of EMG can be solved for the most general, covariant, modified constraint in (3+1)-dimensions with a dependence on the spatial derivatives of the phase space up to the second order, which contains terms that can be interpreted as holonomy modifications. An earlier, special case of EMG in spherical symmetry was studied in <cit.> where the holonomy modifications were shown to lead to a nonsingular black-hole solution in vacuum. The global structure of such a solution is an interuniversal wormhole joining a black hole to a white hole through their interiors. The holonomy parameter μ in these works was assumed constant, μ=μ_0, a framework referred to as the μ_0 scheme in some of the LQG literature. This holonomy parameter is associated with the coordinate length of the links in the triangulation of space as required by LQG. More precisely, in the spherically symmetric effective framework where the space-time topology may be fixed to ℳ = ℝ^2 × S^2, one only considers the angular discretization of the spheres (the submanifold S^2⊂ℳ) and hence μ_0 corresponds to the angular coordinate length of these links. A constant μ_0 is required if the constraint operator is constructed from basic holonomy operators. However, holonomies depend on a canonical variable such as a spatial connection component or extrinsic curvature of the spheres, which unlike space-time curvature scalars are not guaranteed to be small in low-curvature regimes. Depending on specific properties of dynamical solutions, a constant μ_0 could therefore imply large holonomy modifications in regions that should remain close to the classical limit. 
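To make this last point quantitative, note the elementary expansion of a holonomy-modified term, sin(μ_0 K_φ)/μ_0 = K_φ ( 1 - (μ_0 K_φ)^2/6 + 𝒪((μ_0 K_φ)^4) ): the relative deviation from the classical K_φ-dependence is controlled by the product μ_0 K_φ rather than by any space-time curvature scalar, and for constant μ_0 nothing guarantees that this product remains small wherever the geometry is nearly classical.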
It is therefore necessary to introduce scale-dependent holonomy parameters μ that become smaller in semiclassical regions in order to compensate for potentially growing extrinsic curvature. Heuristically, such a behavior can be motivated as including lattice-refinement effects of an underlying discrete spatial state. As spherical areas in a semiclassical region grow larger, a discrete lattice state would have to be refined more and more in order to avoid discreteness being magnified to macroscopic sizes. The coordinate length of lattice links correspondingly decreases, implying a scale-dependent μ as a function of the spherical area. The scale dependence is not unique and is determined largely by phenomenological reasoning, making sure that the modified theory is consistent with classical behavior in the right regimes. In homogeneous cosmological models, there is only one relevant semiclassical regime, given by large volumes. In spherical symmetry and the related black-hole models, however, there are different semiclassical regimes, including large radii as well as near-horizon geometries, which should also be semiclassical for astrophysical black holes. In the latter case, however, the conditions on a suitable scale dependence of μ are different from what is required for large radii because they refer to different components of the spatial metric. Moreover, these conditions depend on the space-time gauge or slicing used to express the near-horizon geometry. This observation reinforces the condition that holonomy modifications must be made compatible with general covariance of an effective line element, and that a large set of scale-dependent μ must be compatible with a given class of modified theories. A simple ad-hoc version of scale-dependent holonomy parameters is often referred to as a μ̅-scheme <cit.>, but it is not uniquely defined. The original motivation was to replace the coordinate length of the links of a triangulation with a more tangible length such as the Planck length ℓ_Pl, characteristic of quantum gravity. Since the holonomy around a closed loop of links is related to the curvature of the gauge connection, one may use the Planck area, or the smallest non-zero eigenvalue Δ in the area spectrum of loop quantum gravity, as an ingredient of the holonomy parameter μ. Once the power of Δ entering the holonomy parameter is chosen (Δ itself as an area, √(Δ) as a side length, or perhaps some other power depending on one's intuition), dimensional arguments require that this quantity be divided by a function of metric components with suitable length units. For this purpose, one can employ the component q_ϑϑ that determines the areas of spheres. This procedure may then result in μ∝ q_ϑϑ^-1 or μ∝ q_ϑϑ^-1/2 or some other exponent. (The square-root version is closest to the μ̅-scheme in spatially flat cosmological models, where the relevant connection component is proportional to ȧ in terms of the scale factor. This component is then multiplied by √(Δ) for dimensional reasons, and divided by a in order to make the expression independent of spatial rescalings. Here, we have a simple covariance argument, but, compared with spherical symmetry, only referring to a small subset of possible coordinate transformations.) Such a construction does not lead to a unique form of a scale-dependent μ because the final result depends on the exponent chosen for Δ, and the radial metric component q_xx could also be used in such an ad-hoc argument <cit.>.
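For orientation, the cosmological version of this argument can be summarized schematically, ignoring fiducial-cell factors and numerical coefficients: with the isotropic connection component c ∝ȧ and the choice μ̅∝√(Δ)/a, the holonomy argument becomes μ̅ c ∝√(Δ) ȧ/a = √(Δ) H, which is small whenever the Hubble parameter H lies far below the curvature scale set by Δ, whereas a constant μ_0 gives μ_0 c ∝μ_0 a H, which can grow with the scale factor even at small curvature. The spherically symmetric situation of interest here is more involved because, as just noted, there is no single semiclassical regime.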
A more systematic approach is therefore needed, which we construct here based on the condition of covariance. As we will see, this condition, while still not leading to a unique analog of μ, implies additional restrictions on possible phase-space dependencies of this function, which turn out to include a dependence on q_ϑϑ but not on q_xx. Such a behavior is well-defined even when crossing the horizon because the physical length of the links remains spacelike everywhere and discretizes the submanifold S^2 ⊂ℳ. In Schwarzschild coordinates, the spatial slices have the topology Σ = ℝ× S^2, such that the exterior is foliated by spheres with squared areal radius q_ϑϑ, while the interior geometry is that of hypercylinders with squared radius q_ϑϑ. In the heuristic lattice picture, both cases can be interpreted as discrete submanifold S^2 with plaquettes of constant size if μ∝ q_ϑϑ^-1/2 is used. The purpose of this paper is to generalize the results of <cit.> to an arbitrary holonomy parameter as a function of the radius, μ (q_ϑϑ), and hence obtaining a general triangulation scheme in its heuristic interpretation. We will note crucial conceptual differences in our new scheme as well, in particular its general covariance and its role in light of a complete discussion of canonical transformations that may be used to change a functional dependence such as μ(q_ϑϑ)K_φ where K_φ is one of the momenta, classically related to extrinsic curvature. While some properties of μ_0 and μ̅-type schemes can still be recognized in specific choices of canonical variables, our covariant construction (both in terms of space-time and phase-space transformations) provides a novel and unified picture of holonomy modifications. Since the relevant equations are obtained in the framework of emergent modified gravity, we follow the latter's notation and refer to the new holonomy parameter as λ(q_ϑϑ), highlighting its new origin and features. The new conceptual understanding of holonomy modifications leads to novel physical insights. In particular, we extend previous constructions for black holes in LQG to asymptotically de Sitter space-times with a well-defined asymptotic infinity, rather than ending at a maximal radius. For this result, our covariant treatment of scale-dependent holonomy parameters is crucial. Covariance also allows us to apply standard space-time analysis in an unambiguous way, including the derivation of timelike and likelight geodesics, effective stress-energy tensors, and thermodynamics. After a general discussion of holonomy modifications, we apply our results to specific versions, including one that in its dynamical implications corresponds to the traditional choice of μ(q_ϑϑ)∝ q_ϑϑ^-1/2. To this end we begin Section <ref> with a discussion of emergent modified gravity and describe how some modification functions can be interpreted as holonomy corrections and hence used as a covariant effective model for LQG. The black-hole solution with a general holonomy parameter is obtained in Section <ref>. The procedure follows the solution of the equations of motion in four different gauges which we show are all related by standard coordinate transformations covering distinct but overlapping regions of the spacetime. These four patches are then sewed together to recover a global structure with the properties of a wormhole and present the maximal extension in null coordinates in Section <ref>. 
We further show that the physical global structure is non-unique, owing to ambiguities in the slicing and gluing of the maximal extension. Section <ref> is devoted to an analysis of the interior structure of space-time with the black-hole horizon. We highlight the robust feature that the classical singularity is replaced by a non-singular hypersurface of reflection symmetry, at which the initially collapsing interior can be glued to an expanding version. This outcome is largely independent of the specific scale dependence of holonomy parameters. Geodesic analysis can be used in order to relate this non-singular behavior to heuristic effects such as repulsive modified gravity. In Section <ref>, we study several physical effects of holonomy modifications in the black-hole exterior including corrections to the precession frequency of nearly circular orbits, to the deflection angle of null rays, and to black hole thermodynamics. In Section <ref>, we revisit asymptotic problems in holonomy modifications that correspond to the traditional μ_0-type scheme of scale-independent μ, including asymptotic corrections to the deflection angle, to the Brown-York quasilocal energy, and to the Bekenstein-Hawking entropy, as well as the existence of a maximum radius of the universe. In Section <ref> we study the effects of a specific version of scale-dependent μ and confirm that it removes unexpected and potentially problematic features of the μ_0-type scheme in asymptotic regimes. § EMERGENT MODIFIED GRAVITY IN SPHERICAL SYMMETRY The framework of emergent modified gravity plays an important role in discussions of general covariance in a canonical setting and is therefore reviewed in this section. Its main feature is that the space-time metric is emergent rather than effective: It is not a corrected version of a pre-existing classical metric tensor. Rather, it is the only metric tensor admissible in a modified canonical theory because the classical expression as a phase-space function fails to obey the tensor transformation law when gauge transformations are modified. The entire emergent metric is derived from covariance conditions, in contrast to what is usally referred to as an effective metric if perturbative correction terms are derived for the classical metric. §.§ Spherically symmetric canonical gravity Canonical gravity, originally derived by Dirac, is conveniently based on the Arnowitt–Deser–Misner (ADM) decomposition where the line element in spherical symmetry takes the form <cit.> d s^2 = - N^2 d t^2 + q_x x ( d x + N^x d t )^2 + q_ϑϑ dΩ^2 , where q_a b is the three-dimensional metric induced on the spatial hypersurfaces foliating the manifold, while N and N^x are the lapse function and shift vector field associated to the observer frame with time coordinate t. The Hamiltonian is composed of the Hamiltonian constraint H and the diffeomorphism constraint H_x. As phase-space functions, depending on the spatial metric and its momenta, these expressions vanish on physical solutions, H=0 and H_x=0, bringing the system on-shell. The Hamiltonian that generates evolution with respect to t is given by H[N,N^x] =H[N] + H_x[N^x] where the square brackets indicate that these terms are smeared (or spatially integrated) by the functions N and N^x. The constraints then generate time evolution via Poisson brackets, Ȯ = { O , H[N,N^x] } for any phase-space function O. 
The same constraints also generate gauge transformations by use of different smearing functions ϵ^0 and ϵ^x instead of the fixed N and N^x for a given frame, and hence the generator of a general gauge transformation is H[ϵ^0,ϵ^x]. The vanishing of the constraints must be consistent in different gauges, which means that the Poisson brackets of the constraints with themselves must vanish on-shell. The specific brackets found in canonical gravity are known as the hypersurface-deformation brackets, given by { H_x [N^x] , H_x[ϵ^x] } = - H_x [ϵ^x (N^x)' - N^x (ϵ^x)'] , { H [N] , H_x [ϵ^x] } = - H[ϵ^x N'] , { H [N] , H[ϵ^0] } = - H_x [ q^x x( N' ϵ^0 - N (ϵ^0)' )] , where the structure-function q^xx is the inverse of the radial metric component, q^xx=1/q_xx. These brackets indeed vanish on-shell when the constraints equal zero. Furthermore, if the system is fully on-shell, also imposing equations of motion generated by the constraints, the gauge freedom of canonical gravity is equivalent to the freedom of choosing a space-time slicing or a coordinate system. Therefore, the constraints are an essential ingredient of encoding general covariance. However, starting with gauge transformations of phase-space functions, two additional steps are needed for a full realization of general covariance, understood as gauge transformations being equivalent to coordinate changes, or as the existence of a tensor-transformation law for the space-time metric: First, we need gauge transformations not only for the spatial part q_xx, which is a basic phase-space degree of freedom, but also for the time-time and the mixed space-time components, depending on N and N^x. While the latter space-time functions do not have momenta and therefore are not included in the phase space (unless an extended version is used), their gauge transformations are uniquely determined by the constraints and their Poisson brackets through consistency conditions between evolution and gauge equations. Secondly, coordinate changes and the tensor transformation law require explicit time derivatives, rather than momentum components as they appear in the constraints and the gauge transformation they generate. Time derivatives are related to momenta by some of the canonical equations of motion generated by the constraints. No additional ingredient is therefore required in order to obtain full expressions of coordinate transformations, but this can be achieved only if the theory is taken fully on-shell by imposing not only the constraint equations but also some of the equations of motion. We will include these ingredients in our present review in the more general setting of modified canonical gravity. §.§ Modified canonical gravity If one considers a modified Hamiltonian constraint H̃, as implied for instance by the regularization procedure of LQG, then we must first ensure that the constraint brackets remain first class for the constraints to vanish consistently in all gauges. More specifically, we require that the general form of hypersurface-deformation brackets be preserved, { H_x [N^x] , H_x[ϵ^x] } = - H_x [ϵ^x (N^x)' - N^x (ϵ^x)'] , {H̃ [N] , H_x [ϵ^x] } = - H̃[ϵ^x N'] , {H̃ [N] , H̃[ϵ^0] } = - H_x [ q̃^x x( N' ϵ^0 - N (ϵ^0)' )] , where the structure-function q̃^x x is determined by demanding that no additional terms appear on the right-hand sides. 
However, the specific structure function q̃^xx need not equal the classical one, given by a specific function of one of the basic phase-space variables, q_xx, on which the modified H̃ depends (or an equivalent version in triad variables). This condition is stronger than anomaly freedom, which would be compatible with, say, an additional H_x-term appearing in the Poisson bracket of two H̃. Both conditions, anomaly-freedom and the specific form of hypersurface-deformation brackets, restrict permissible modifications of the constraint. A comparison with the classical geometrical structure of space-time, together with an analysis of the transformation properties of lapse and shift to be discussed in the next section, suggests that the inverse of the modified structure function q̃^xx plays the role of the radial component of an effective line element (in the stricter sense of emergence) replacing q_xx in (<ref>). However, anomaly freedom or brackets of hypersurface-deformation form do not imply that this effective line element is invariant. Rather, a further condition must be imposed on H̃, derived from the tensor transformation law of (1/q̃^xx,N,N^x), such that the gauge transformations of the effective metric it generates correspond to infinitesimal coordinate transformations <cit.>. (The angular component q̃_ϑϑ also plays a role in the covariance condition, but it is less direct because this component does not appear in the structure functions of spherically symmetric gravity.) According to the classic analysis of <cit.>, if one considers the classical spatial metric as a configuration variable of the phase space, then the Hamiltonian constraint is uniquely determined by the hypersurface-deformation brackets up to the choice of Newton's and the cosmological constant. These results therefore imply that H must be the classical constraint of GR, and generally covariant modifications are ruled out unless one includes higher-derivative terms that in a canonical formulation enlarge the classical phase space. However, in the recent formulation of emergent modified gravity <cit.> the assumption that the spatial metric q_ab is a configuration variable of the phase-space is not necessary in order to obtain a consistent field-theory description of space-time. Instead, emergent modified gravity assumes that the phase space is composed of certain fundamental fields different from the metric, and the metric is an emergent object to be regained by the imposition of anomaly freedom and covariance. In fact, the most general modified Hamiltonian constraint depending on spatial derivatives of the phase space up to second order and satisfying anomaly freedom and covariance in vacuum spherical symmetry has been obtained exactly in <cit.>. If LQG is to be treated in a covariant framework of effective line elements, without, as usually assumed in this context, an extension of the classical phase space in order to account for perturbative higher-curvature corrections or higher time derivatives, its effective Hamiltonian constraint must be included in the family of constraints allowed by emergent modified gravity, and its effective line element must be given by the corresponding emergent space-time. 
§.§ Covariance conditions The hypersurface-deformation brackets (<ref>) together with general properties of a Poisson bracket, such as the Jacobi identity, imply that off-shell gauge transformations for the lapse function and shift vector are given by <cit.> δ_ϵ N = ϵ̇^0 + ϵ^x N' - N^x (ϵ^0)' , δ_ϵ N^x = ϵ̇^x + ϵ^x (N^x)' - N^x (ϵ^x)' + q̃^x x(ϵ^0 N' - N (ϵ^0)' ) . These equations play a central role in general covariance as explained below. Building on <cit.>, the brackets (<ref>) and the transformations (<ref>) suggest the existence of the space-time metric d s^2 = - N^2 d t^2 + q̃_x x ( d x + N^x d t )^2 + q̃_ϑϑ dΩ^2 , where q̃_x x is the inverse of the (possibly non-classical) structure-function, assuming for now that q̃_xx>0, while q̃_ϑϑ is not given by structure functions and must hence be chosen on different grounds. (This feature is a consequence of the symmetry-reduced theory under consideration and its space-time structure. Since the component is subject to canonical transformations, it is possible to prescribe a specific form of q̃_ϑϑ as a phase-space function in order to reduce the freedom of performing canonical transformations.) Because the spatial metric is not directly determined by kinematical properties of the phase space but rather depends, at least in part, on the inverse of the structure-function, we refer to it as an emergent metric. This nomenclature will make more sense when dealing with non-classical structure functions which arise from the requirement of anomaly-freedom of the modified constraints incorporating potential non-perturbative quantum corrections. Anomaly-freedom of the brackets (<ref>), even if they are of hypersurface-deformation form, does not ensure covariance of the resulting space-time (<ref>), contrary to what is commonly stated. Instead, we have to impose the stronger covariance condition δ_ϵg̃_μν|_O.S. = ℒ_ξg̃_μν|_O.S. , which ensures that canonical gauge transformations generate infinitesimal diffeomorphisms. This requirement contains (<ref>) as a necessary condition, but it is also non-trivial when applied to the spatial part of the metric and imposes yet another set of restrictions on allowed modifications of the classical constraints. §.§ Covariant modified constraints Following <cit.> we consider the vacuum spherically symmetric theory with canonical gravitational variables {K_φ(x),E^φ(y)}=δ(x-y) and {K_x(x),E^x(y)}=δ(x-y). (We use units such that Newton's constant equals one.) In the classical theory, the momenta E^x and E^φ are components of the densitized triad, while the configuration variables are directly related to the extrinsic-curvature components K_φ = K_φ and K_x = 2 K_x. Leaving the diffeomorphism constraint H_x = E^φ K_φ' - K_x (E^x)' unmodified and building on <cit.>, we consider the most general ansatz for the Hamiltonian constraint up to second-order derivatives and quadratic first-order derivative terms <cit.> H̃ = a_0 + ((E^x)')^2 a_x x + ((E^φ)')^2 a_φφ + (E^x)' (E^φ)' a_x φ + (E^x)” a_2 + (K_φ')^2 b_φφ + (K_φ)” b_2 + (E^x)' K_φ' c_x φ + (E^φ)' K_φ' c_φφ + (E^φ)” c_2 . Here, a_0, a_xx, a_φφ, a_x φ, a_2, b_φφ, b_2, c_2, c_φφ, c_x φ are all functions of the phase-space variables, but not of their derivatives. We have not included terms quadratic in the radial derivatives of K_x in (<ref>) because it can be readily shown that the covariance condition (<ref>) does not allow them if q̃_ϑϑ = q̃_ϑϑ (E^x) which is our case of interest. 
Demanding that this ansatz satisfies the anomaly-free brackets of hypersurface deformation form and imposing the additional covariance conditions encoded in (<ref>), the general Hamiltonian constraint reduces to H̃ = - √(E^x) g/2[ E^φ( f_0 + K_x/E^φ f_1 ) + ((E^x)')^2/E^φ( f_2 - K_x/2E^φ( ∂ln g/∂ K_φ + C_x φ) ) + (E^x)' (E^φ)'/(E^φ)^2 - (E^x)”/E^φ + (E^x)' K_φ'/E^φ C_x φ] . Now, g, f_0, f_1, f_2, and C_x φ are functions of E^x and K_φ, and are related to one another by four differential equations. Consequently, the structure function q̃^x x is composed of these functions plus an additional direct dependence on the phase-space variables. More explicitly, it is given by q̃^x x = ( ∂ f_1/∂ K_φ - f_1 C_x φ - 1/2( ∂^2 ln g/∂ K_φ^2 + C_x φ^2 + ∂ C_x φ/∂ K_φ + ∂ln g/∂ K_φ C_x φ) ( (E^x)'/E^φ)^2 ) g^2/4E^x/(E^φ)^2 . The constraint brackets (<ref>) and the covariance condition (<ref>), which can be formulated in terms of Poisson brackets as well, are both invariant under canonical transformations. We can therefore apply canonical transformations in order to further simplify the constraint and eliminate some of the free functions. An interesting subset of canonical transformations is given by those that preserve the diffeomorphism constraint (<ref>): K_φ = f_c (E^x , K̃_φ) , E^φ = Ẽ^φ( ∂ f_c/∂K̃_φ)^-1 , K_x = ∂ (α_c^2 E^x)/∂ E^xK̃_x + Ẽ^φ∂ f_c/∂ E^x( ∂ f_c/∂K̃_φ)^-1 , Ẽ^x = α_c^2 (E^x) E^x , where the new variables are written with a tilde. It can be shown that there is a canonical transformation with a function f_c = f_c (K_φ) and α_c = 1, such that the transformed C_x φ vanishes, at least locally in the phase space. This canonical transformation allows us to include C_x φ = 0 among the conditions for covariance, resulting in five equations for the five functions in (<ref>). The equations, which include partial differential ones, can be solved exactly, yielding g = χcos^2 ( λ( K_φ + λ_φ) ) , g f_1 = 4 χ(c_f sin (2 λ (K_φ + λ_φ))/2 λ + q cos(2 λ (K_φ + λ_φ))) , f_2 = - α_2/4 E^x + sin(2 λ ( K_φ + λ_φ) )/2 λcos^2(λ (K_φ+ λ_φ) )( λ∂ (λλ_φ)/∂ E^x + λ K_φ∂λ/∂ E^x) , g f_0 = χ( α_0/E^x + 2 sin^2 (λ(K_φ+λ_φ))/λ^2∂ c_f/∂ E^x + 4 sin(2 λ(K_φ+λ_φ))/2 λ∂ q/∂ E^x + 4 c_f ( 1/λ∂ (λλ_φ)/∂ E^xsin(2 λ(K_φ+λ_φ))/2 λ + (α_2/4 E^x - ∂lnλ/∂ E^x) sin^2 (λ(K_φ+λ_φ))/λ^2) + 8 q ( - λ∂ (λλ_φ)/∂ E^xsin^2 ( λ(K_φ+λ_φ))/λ^2 + (α_2/4 E^x - 1/2∂lnλ/∂ E^x) sin(2 λ(K_φ+λ_φ))/2 λ) + 4 K_φ∂lnλ/∂ E^x( c_f sin(2 λ(K_φ+ λ_φ))/2 λ + q cos(2 λ(K_φ+λ_φ)) ) ) , where χ , c_f , α_0, α_2, λ, q, λ_φ are undetermined functions of E^x. The classical constraint is recovered in the limit χ , c_f , α_0, α_2 → 1 and λ, q → 0. (The cosmological constant can be recovered by instead setting α_0→ 1 - Λ E^x, with Λ>0 corresponding to asymptotically de Sitter space in the classical limit.) Noting that the set of canonical transformations has not been exhausted, we can use the residual canonical transformation with f_c = f_x K_φ - λ̃_φ and α_c = 1 (where f_x and λ̃_φ are allowed to depend on E^x) to further simplify the constraint. 
A convenient choice for such a simplification is given by λ̃_φ = λ_φ: g = χ f_x cos^2 ( λ f_x K_φ) , g f_0 = χ/f_x( α_0/E^x + 2 sin^2 (λ f_x K_φ)/λ^2∂ c_f/∂ E^x + 4 sin(2 λ f_x K_φ)/2 λ1/λ∂(λ q)/∂ E^x + (α_2/E^x - 2 ∂lnλ^2/∂ E^x) ( c_f sin^2 (λ f_x K_φ)/λ^2 + 2 q sin(2 λ f_x K_φ)/2 λ) ) + ∂ln (λ f_x )/∂ E^x 4 χ_0 K_φ(c_f sin (2 λ f_x K_φ)/2 λ + q cos(2 λ f_x K_φ)) , g f_1 = 4 χ(c_f sin (2 λ f_x K_φ)/2 λ + q cos(2 λ f_x K_φ)) , g f_2 = χ f_x cos^2 ( λ f_x K_φ) ( - α_2/4 E^x - ∂ln f_x/∂ E^x + sin(2 λ f_x K_φ)/2 λcos^2(λ f_x K_φ)λ^2 f_x K_φ∂ln (λ f_x)/∂ E^x) . The associated structure function (<ref>) becomes q̃^x x = ( ∂ f_1/∂ K_φ - 1/2∂^2 ln g/∂ K_φ^2( (E^x)'/E^φ)^2 ) g^2/4E^x/(E^φ)^2 = ( ( c_f + λ^2 f_x^2 ( (E^x)'/2 E^φ)^2 ) cos^2 ( λ f_x K_φ) - 2 q λ^2 sin (2 λ f_x K_φ)/2 λ) χ^2 f_x^2 E^x/(E^φ)^2 . If we choose f_x=1, then the resulting set of Hamiltonian constraints is given by H̃ = - √(E^x)χ/2[ E^φ( α_0/E^x + 2 sin^2 (λ K_φ)/λ^2∂ c_f/∂ E^x + 4 sin(2 λ K_φ)/2 λ1/λ∂(λ q)/∂ E^x + (α_2/E^x - 2 ∂lnλ^2/∂ E^x) ( c_f sin^2 (λ K_φ)/λ^2 + 2 q sin(2 λ K_φ)/2 λ) + 4 (K_x/E^φ + K_φ/2∂lnλ^2/∂ E^x) (c_f sin (2 λ K_φ)/2 λ + q cos(2 λ K_φ)) ) + ((E^x)')^2/E^φ( - α_2/4 E^xcos^2 ( λ K_φ) + ( K_x/E^φ + K_φ/2∂lnλ^2/∂ E^x) λ^2 sin(2 λ K_φ)/2 λ) + ((E^x)' (E^φ)'/(E^φ)^2 - (E^x)”/E^φ) cos^2 ( λ K_φ) ] , with the associated structure function q̃^x x = ( ( c_f + λ^2 ( (E^x)'/2 E^φ)^2 ) cos^2 ( λ K_φ) - 2 q λ^2 sin (2 λ K_φ)/2 λ) χ^2 E^x/(E^φ)^2 . If we instead choose f_x = λ̃ / λ, where λ̃ is a constant reference value of λ(E^x), the resulting set of Hamiltonian constraints is given by H̃ = - λ̃/λχ√(E^x)/2[ E^φ( λ^2/λ̃^2α_0/E^x + 2 sin^2 (λ̃ K_φ)/λ̃^2∂ c_f/∂ E^x + 4 sin(2 λ̃ K_φ)/2 λ̃∂/∂ E^x(λ/λ̃ q) + ( α_2/E^x - 2 ∂lnλ^2/∂ E^x) ( c_f sin^2 (λ̃ K_φ)/λ̃^2 + 2 λ/λ̃ q sin(2 λ̃ K_φ)/2 λ̃) ) + 4 K_x (c_f sin (2 λ̃ K_φ)/2 λ̃ + λ/λ̃ q cos(2 λ̃ K_φ)) - ((E^x)')^2/E^φ( ( α_2/4 E^x - 1/2∂lnλ^2/∂ E^x) cos^2 ( λ̃ K_φ) - K_x/E^φλ̃^2 sin(2 λ̃ K_φ)/2 λ̃) + ( (E^x)' (E^φ)'/(E^φ)^2 - (E^x)”/E^φ) cos^2 ( λ̃ K_φ) ] , with the structure-function q̃^x x = ( ( c_f + ((E^x)' λ̃/2 E^φ)^2 ) cos^2 (λ̃ K_φ) - 2 λ/λ̃ q λ̃^2 sin(2 λ̃ K_φ)/2 λ̃) λ̃^2/λ^2χ^2 E^x/(E^φ)^2 . Note that the constraint (<ref>), unlike (<ref>), is bounded in K_φ and fully periodic with “frequency” λ̃. The two constraints, however, are related by a simple canonical transformation rescaling the component K_φ. Therefore, they describe the same system, just in different phase-space coordinates. In the modified context, the phase-space variable K_φ has lost its direct classical relationship with extrinsic curvature, which is instead determined by the emergent space-time metric. Therefore, there is no canonical or geometrical preference in favor of one of the two versions. (However, the periodic nature of (<ref>) may suggest that this version and the associated canonical variables are more relevant for a fundamental loop quantization in terms of basic holonomy operators.) The classical constraint is recovered from (<ref>) in the limit χ , c_f, α_2 → 1, λ→ 0, and α_0 → 1 - Λ E^x with cosmological constant Λ. The constraint (<ref>) cannot reproduce the classical limit directly because the canonical transformation K_φ→ (λ̃/λ) K_φ does not exist in the classical limit. Since the canonical transformation has introduced a new constant λ̃, its limit behavior must be specified separately. One possibility is the combination of λ→λ̃ followed by λ̃→0. 
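As a quick consistency check of this classical limit, setting χ , c_f → 1 and λ→ 0 (which also removes the q-term) in the structure function for the choice f_x=1 gives q̃^x x→ E^x/(E^φ)^2, which, upon using the standard spherically symmetric triad identifications q_ϑϑ = E^x and q_xx = (E^φ)^2/E^x, is just the classical inverse radial metric 1/q_xx appearing as the structure function of the classical hypersurface-deformation brackets.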
If we redefine χ→χ̅λ/λ̃ , q →q̅λ̃/λ , α_0 →λ̃^2/λ^2α̅_0 , α_2 →α̅_2 + 4 E^x ∂lnλ/∂ E^x , the free function λ disappears from the constraint (<ref>) by being absorbed by the other parameters, and is completely replaced by the constant λ̃. A possible interpretation of λ̃ is given in the next subsection as the length of a reference curve along which the holonomy is computed. Replacing the function λ with a constant λ̃ simplifies the constraint, but it obscures the effects of non-constant λ by combining it with the other modification functions according to (<ref>). Since we are specifically interested in non-constant λ for the purpose of this paper, we will not make much use of this redefinition. A modified angular metric component q̃_ϑϑ = q̃_ϑϑ (E^x) may also be considered. However, through a residual canonical transformation (<ref>) with f_c = K_φ and the appropriate α_c, the angular metric component can always be rendered into its classical form q̃_ϑϑ = E^x. The appearance of α_c in the Hamiltonian constraint can thereby be absorbed in the undetermined functions of E^x, obtaining the same result (<ref>). Any covariant Hamiltonian constraint of second order in spatial derivatives in the spherically symmetric sector is therefore contained in (<ref>). It is important to note that canonical transformations cannot have an effect on the physical system because they merely represent a change of phase-space coordinates. The canonical transformations used here, and discussed in detail in <cit.>, serve the purpose of simplifying the covariance conditions such that they can be solved exactly. Factoring out canonical transformations is also important because a different choice of phase-space coordinates may lead one to misclassify a modified Hamiltonian constraint different from (<ref>) as a new theory, even if it is in fact equivalent to a well-known one. For instance, <cit.> argued that the non-invertibility of a canonical transformation of the form K_φ→sin(λ K_φ)/λ applied to the classical system could model holonomy modifications. This assertion is incorrect since the construction merely amounts to using the classical theory with phase-space coordinates that are valid in a bounded range. It is therefore not surprising that the results on critical collapse of scalar matter, reported in <cit.>, did not show any deviation from GR. Similarly, <cit.> partially attributed modified gravitational effects to non-invertible canonical transformations, misidentifying the correct reason for singularity removal. The relevant modifications instead originate in a linear combination of constraints with phase-space dependent coefficients which result in additional terms for the new Hamiltonian constraint, as explained in <cit.>. §.§ Effective loop quantum gravity The covariant Hamiltonian defined in (<ref>) contains various independent modification functions in several of its terms. How does it relate to effective loop quantum gravity, which is usually considered with a smaller number of modifications? Phenomenologically, the parameter λ is special since it is responsible for the non-singular black-hole solution obtained for constant λ=λ̃ in <cit.>. In the functional behavior of the constraint, this parameter has an appearance related to the holonomy length μ in LQG. §.§.§ Traditional ingredients The starting point of any loop quantization consists in replacing a direct quantization of the classical phase space by operators in the holonomy-flux representation. 
In spherically symmetric LQG <cit.>, the basic holonomies are given by h^x_e [K_x] = exp∫_e d x i K_x , h^φ_v, μ [K_φ] = exp∫_μ dϑ i K_φ = exp(i μ K_φ (v)) , where e stands for an arbitrary radial curve of finite coordinate length, while v stands for an arbitrary point in the radial line, and μ is the coordinate length of an arbitrary longitudinal curve on the 2-sphere intersecting the point v. The explicit integration in the angular holonomy (<ref>) is possible due to spherical symmetry, but the radial holonomy integration must remain formal. Similarly, the fluxes are given by direct integration of the densitized triads E^x and E^φ over finite, 2-dimensional surfaces with normals in the radial and angular direction, respectively. Fluxes do not require exponentiation and are therefore closer to the original phase-space variables. For the following argument, we need only the expressions of holonomies. Because loop quantization is based on the holonomy-flux variables, the Hamiltonian constraint must be rewritten as a function of holonomies rather than the bare (extrinsic) curvatures K_x and K_φ, leading to a modified constraint in the spirit of emergent modified gravity, as outlined in the previous section. However, the radial holonomy (<ref>) is essentially non-local, and cannot be studied within this framework, which has thus far only been formulated locally and up to second order in derivatives. On the other hand, the angular holonomies (<ref>) are indeed local when restricting the state space to spherically symmetric states. Therefore, it is natural to expect that quantum effects due to the angular holonomy components can be captured by the local equations of emergent modified gravity. Furthermore, since the angular holonomies can be integrated, they become simple complex exponentials of the angular extrinsic curvature (<ref>). Because the Hamiltonian constraint must be Hermitian as an operator in the quantum theory, or simply real-valued prior to quantization, the holonomy modifications in the Hamiltonian constraint will appear as trigonometric functions of K_φ with `frequency' μ. This is indeed the case of the Hamiltonian constraint (<ref>) derived in the previous section. Therefore, the parameter λ in the notation of emergent modified gravity can be given the interpretation of an angular holonomy length within LQG, and hence acquires the status of a quantum parameter under this interpretation. However, it also inherits new features from the enhanced symmetry properties of emergent modified gravity compared with traditional models of LQG. In LQG, different triangulations of space are allowed, and a specific choice would determine the form of the quantum parameter μ (E^x) appearing in the holonomy modification function, using the phase-space variable E^x as the squared radius of our geometry. For example, we can choose a fine lattice such that the spheres are triangulated by small regular polygons with n sides of angular length μ. (It is common to choose tetrahedra for the three-dimensional lattice and hence triangles for the two-dimensional lattice of the spheres, setting n=3 for this purpose.) Each polygon, also referred to as a plaquette, at radius √(E^x) then covers a geometrical area of n E^x μ^2 / (4 tan (π/n)). Choosing the plaquettes at different radii to preserve the same geometrical area, we obtain the formula μ = r̂/√(E^x)μ̂ , where r̂ is a constant reference radius at which μ = μ̂. 
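To attach rough numbers to this construction, the short numerical sketch below evaluates the resulting μ(E^x) and the corresponding number of plaquettes per sphere at the horizon scale of a solar-mass black hole; the area gap and the Barbero-Immirzi parameter entering it are standard reference values inserted here purely for illustration and are not fixed by the construction above.

import math

# Illustrative inputs (assumed values, not determined by the construction in the text):
l_pl = 1.616e-35                  # Planck length [m]
gamma = 0.2375                    # Barbero-Immirzi parameter (assumed)
Delta = 4 * math.sqrt(3) * math.pi * gamma * l_pl**2   # commonly quoted LQG area gap [m^2]

n = 4                             # square plaquettes, for which 4*tan(pi/n)/n = 1
Ex = (2.95e3)**2                  # E^x = (areal radius)^2 at r ~ 2.95 km (solar-mass horizon)

# Fix the reference values by demanding that each plaquette carries the area Delta:
# n * Ex * mu^2 / (4*tan(pi/n)) = Delta  =>  mu(Ex) = sqrt(4*tan(pi/n)*Delta/(n*Ex))
mu = math.sqrt(4 * math.tan(math.pi / n) * Delta / (n * Ex))
N_plaquettes = 4 * math.pi * Ex / Delta     # (sphere area) / (plaquette area)

print(f"mu ~ {mu:.1e}, plaquettes per sphere ~ {N_plaquettes:.1e}")
# Roughly mu ~ 1e-38 and ~1e77 plaquettes: the angular discreteness is far
# below any conceivable observational resolution at this radius.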
This function satisfies μ→ 0 as E^x →∞, as desired for recovering an asymptotically smooth geometry without triangulation effects at large radii. §.§.§ Shortcomings Several assumptions are used in arriving at this form. An additional, previously unrecognized issue is that the classical component E^x of a space-time metric is used in the geometrical construction, which in general may deviate from the correct geometrical meaning provided by the emergent space-time metric. As we have seen, in spherically symmetric models it is always possible to choose phase-space variables such that the angular component of the metric is still given by E^x. The construction of holonomy length may therefore be applied as described here. However, strict proposals such as the μ̅-scheme would require an application of these arguments to any surface, even if it extends into the radial direction which may still be homogeneous in the black-hole interior. In this case, an emergent space-time metric may have to be taken into account, but this is available only after the modified theory has been analyzed. In general, the μ̅-scheme therefore does not provide a self-contained recipe for the construction of holonomy modifications. One ingredient of a μ̅-type scheme that may be applied here is a relationship between holonomy parameters and areas of interest, such as the smallest non-zero eigenvalue of an area operator in a loop quantization. Also this step is heuristic and not fully justified because holonomies of a gauge connection do not depend on a metric, and therefore need not have any relationship with geometrical areas. Nevertheless, such a relationship may be used in order to fix free parameters. (In practice, using area eigenvalues merely determines numerical factors multiplying suitable powers of the Planck length, whose form could easily be obtained from dimensional arguments.) To be specific, by choosing the plaquette size derived above to equal a certain distinguished area, n r̂^2 μ̂^2 / (4 tan (π/n)) = Δ, we arrive at the relation μ (E^x)^2 = Δ̃/E^x with Δ̃=4n^-1tan (π/n) Δ. The square of the quantum parameter (<ref>) is proportional to the solid angle of a single plaquette on a sphere of radius √(E^x). Since this expression equals Δ̃/E^x, the number of plaquettes triangulating the sphere of radius √(E^x) is given by 𝒩_Δ (E^x) = 4 π E^x/Δ̃ . Loop quantization requires the Hamiltonian constraint to be explicitly periodic in K_φ. Having μ depend on E^x preserves periodicity in K_φ at any fixed value of E^x, but it turns the holonomy exp (i μ (Ê^x) K̂_φ) into a holonomy-flux hybrid operator upon quantization. Such expressions do not obey the basic holonomy-flux algebra, and they do not even provide a closed set of commutators because an expression such as { h^x_e [K_x] , h^φ_v, μ [K_φ] } = - h^x_e_1 [K_x] ∂μ (ν)/∂ E^x (ν) K_φ (v) h^φ_v, μ [K_φ] h^x_e_2 [K_x] , where e = e_1 ∪ e_2, such that e_1 and e_2 join at x=ν, violates the periodicity condition. In the basic holonmy-flux algebra, the right-hand side of the Poisson bracket between two holonomy operators should be zero since the connection components Poisson commute. The factor of K_φ(v) in (<ref>) shows that trying to implement an E^x-dependent holonomy length is not compatible with a closed algebra of basic operators. 
Hence, the periodic version (<ref>) of the Hamiltonian constraint is preferred over (<ref>) when carrying out the loop quantization because it resolves two important problems: It depends on pure holonomies of the form exp (i λ̃K̂_φ) with constant λ̃, and can therefore be expressed in terms of the generators of a closed basic algebra. At the same time, it allows for scale-dependent λ-effects because a non-trivial λ(E^x) appears in the coefficients of (<ref>) after it has been removed as a factor of K_φ in holonomies by a canonical transformation. It is then possible to choose the holonomy length that appears in trigonometric functions as being equal to the E^x-independent reference value λ̃=μ̂. At the same time, dynamical implications of non-constant holonomy parameters are realized through modified coefficients of holonomy terms, as we demonstrate in detail below. A common motivation for an E^x-dependent μ in models of loop quantum gravity is not based on properties of basic holonomy operators, but rather on phenomenological questions because a connection or extrinsic-curvature component such as K_φ may be large in classical regimes, depending on the space-time slicing. A holonomy length μ(E^x) might then be used to counteract growing holonomy modifications if it decreases at a suitable rate as E^x grows, implying large spherical areas. It is often argued that this reason requires an explicit appearance of a non-constant μ(E^x) in holonomies. This reasoning seems to favor (<ref>) with a corresponding λ(E^x) in holonomy-like terms, even though covariance prevents this constraint from being strictly periodic in K_φ. Moreover, a quantum constraint operator for this version cannot be based on basic holonomies in a closed holonomy-flux algebra. §.§.§ New features Our new discussion of canonical transformations overturns this established understanding because it is possible to map any constraint of the form (<ref>) into an expression (<ref>) that is strictly periodic in K_φ. The canonical transformation implies that both versions generate exactly the same dynamics and physical observables. Effects of a non-constant holonomy length in (<ref>) are realized in (<ref>) by a modified E^x dependence of coefficients of holonomy terms, indicated by the non-constant ratios λ̃/λ in (<ref>), in particular in the K_x-term of the modified Hamiltonian constraint. This term is relevant for Hamilton's equation Ė^x=λ̃/λχ√(E^x)/2(4c_f+λ̃^2 ((E^x)')^2/(E^φ)^2) sin(2λ̃K_φ)/2λ̃ for the spherical area E^x (assuming vanishing N^x for the sake of simplicity). As an equation of motion generated by the periodic constraint, this result implies that the classical relationship between the canonical variable K_φ and the time derivative Ė^x receives an additional factor of λ/λ̃, making K_φ decrease as √(E^x) grows according to the modified dynamics. (For small λ̃ and λ/λ̃ independent of λ̃, we have K_φ≈ (λ/λ̃) Ė^x/(2 χ c_f √(E^x)), where c_f,χ→ 1 in the classical limit.) This behavior has the same effect of suppressing holonomy modifications in classical regimes as an explicitly E^x-dependent holonomy length would imply. It is only harder to see this outcome immediately because it requires an analysis of equations of motion.
By dismissing this option, the traditional arguments for holonomy modifications suggest an unjustified preference for one of the possible versions of canonically equivalent constraints, based on flawed heuristics that implicitly relate the canonical variable K_φ in a modified theory to a classical component of extrinsic curvature: Classically, K_φ is not necessarily small enough for sin(λ̃ K_φ)/λ̃ to be close to K_φ in regimes of small space-time curvature. It is therefore important to distinguish between small-λ regimes (as a formal limit of the constraint) and small-λ K_φ regimes (as a dynamical feature of solutions for K_φ). In a modified theory, one cannot use classical heuristics in order to estimate the magnitude of K_φ, interpreted as a component of extrinsic curvature, in order to determine whether λ K_φ is sufficiently small in a given semiclassical regime. Since λ K_φ appears in a modified term of the Hamiltonian constraint when holonomies are used and since the emergent space-time metric is not the same as the classical metric, there is no kinematical relationship between the magnitude of K_φ in a modified theory and the magnitude of extrinsic curvature in a corresponding classical slicing. An evaluation of suitable holonomy modifications can therefore be made only if equations of motion as well as the emergent metric are considered. However, traditional arguments in favor of certain holonomy modifications are held entirely at a kinematical level. The distinction between small-λ and small-λ K_φ is relevant in these considerations even if one is interested only in leading-order corrections to the classical behavior. The leading order of a small-λ expansion of the theory is simply the limit of λ→ 0, resulting in the classical dependence of the constraint on K_φ and an emergent metric identical with the classical metric. (The overall theory may still be modified in this limit if the remaining modification functions are non-classical.) Holonomy terms can also be expanded on a subspace of the solution space where λ K_φ is small, even if λ is not taken to zero in a limit. This expansion is relevant for physical properties of the semiclassical limit. In the low-curvature regime of small λ K_φ, the quadratic order in this expansion determines near-classical physics. The non-periodic version of the Hamiltonian constraint, (<ref>), then becomes H̃ ≈ - √(E^x)χ/2[ E^φ( α_0/E^x + 2 K_φ^2∂ c_f/∂ E^x + 4 K_φ1/λ∂(λ q)/∂ E^x + (α_2/E^x - 2 ∂lnλ^2/∂ E^x) ( c_f K_φ^2 + 2 q K_φ) + 4 (K_x/E^φ + K_φ/2∂lnλ^2/∂ E^x) (c_f K_φ + q (1-2 λ^2 K_φ^2)) ) + ((E^x)')^2/E^φ( - α_2/4 E^x( 1-λ^2 K_φ^2 ) + ( K_x/E^φ + K_φ/2∂lnλ^2/∂ E^x) λ^2 K_φ) + ((E^x)' (E^φ)'/(E^φ)^2 - (E^x)”/E^φ) ( 1-λ^2 K_φ^2 ) ] , with structure function q̃^x x ≈ ( ( c_f + λ^2 ( (E^x)'/2 E^φ)^2 ) ( 1-λ^2 K_φ^2 ) - 2 q λ^2 K_φ) χ^2 E^x/(E^φ)^2 . (A similar result follows for the periodic version.) Some holonomy effects therefore survive in the low-curvature regime and may lead to unexpected dynamical implications, unless the strict limit of λ→0 is taken in this regime too. This approximation of the Hamiltonian constraint is valid only in slices where λ K_φ is sufficiently small. Physical implications can only be analyzed if equations of motion are studied, revealing through q̃^xx the geometrical meaning and dynamical behavior of K_φ in suitable semiclassical regimes. An important additional flaw in several discussions of holonomy modifications is the lack of covariance. 
It is then possible for sin(λ̃ K_φ)/λ̃ to be close to K_φ in some space-time slicings but deviate strongly in others, even in classical regimes. This problem is solved by our constructions, which explicitly realize slicing independence together with the correct geometrical meaning of K_φ. As already emphasized, covariance is also the main reason why non-periodic terms are required in a Hamiltonian constraint with E^x-dependent holonomy length. Our constructions not only implement full covariance, they also relate different versions of holonomy modifications by carefully considering canonical transformations. For effective equations and their solutions, we may use either of the two, or perhaps others related by further canonical transformations. One may simply prefer an option that generates the simplest equations of motion for a given purpose which, however, will in general be gauge dependent. Covariant holonomy modifications therefore cannot be expressed in the straitjacket of μ_0 and μ̅-like schemes that have only a heuristic basis motivated by expectation about the holonomy length combined with a classical understanding of canonical variables in a modified theory. The appearance of holonomy terms, classified in this way, depends on the canonical variables used, and a μ_0-like scheme can be transformed into an equivalent μ̅-like scheme if only the arguments of trigonometric functions are used in the definition of holonomy modifications. As an important implication of our new λ-scheme, dynamical properties of holonomy modifications and their suppression in classical regimes can be realized with strictly periodic holonomy terms, provided their coefficients are amended by suitable modification functions. (In traditional models of loop quantum gravity, such modifications are usually understood as inverse-triad corrections because these coefficients classically depend only on the densitized triad and its inverse but not on the momenta.) §.§.§ Self-consistent treatment of holonomy effects Taking these lessons into account, we arrive at a novel understanding of holonomy modifications in models of loop quantum gravity. The freedom of performing canonical transformations can be fixed by requiring that all terms in the Hamiltonian constraint depending on K_φ are strictly periodic in this variable. According to emergent modified gravity, covariance then implies that only the combination λ̃K_φ with a constant λ̃ can appear in these periodic functions. At this point, the choice of canonical variables is unique if it is required to preserve the classical limit for λ̃→0, such that λ̃K_φ cannot be used as a new phase-space variable conjugate to E^φ/λ̃. There is therefore an unambiguous interpretation of K_φ-dependent functions as an effective description of angular holonomies. Accordingly, we refer to the constant λ̃ as the holonomy length of the theory. By definition, this length is always constant and does not depend on E^x. Different schemes are implemented by choosing specific functions for λ(E^x), which does not appear in holonomy terms but rather in their coefficients in the Hamiltonian constraint. The choice of λ(E^x) changes the geometrical meaning and the dynamical behavior of K_φ in holonomy terms according to (<ref>). As already mentioned, this modification makes it difficult to assess the significance of holonomy modifications in different regimes because this task would require an estimate of the values taken by K_φ, which depends on the dynamics and the space-time gauge. 
In fact, the equation of motion for E^x in a modified theory does not directly determine K_φ but rather the holonomy function sin(2λ̃K_φ)/2λ̃= λ/λ̃2Ė^x/χ√(E^x)1/4c_f+λ̃^2 ((E^x)')^2/(E^φ)^2 obtained by inverting (<ref>) and still asuming vanishing N^x for the sake of convenience. For χ and c_f close to their classical values, the ratio λ/λ̃ determines the strength of holonomy modifications, defined as the deviation of (2λ̃)^-1sin(2λ̃K_φ) from the classical expression 2Ė^x/√(E^x) of K_φ. For the classical limit to be realized at λ̃→0, we need λ(E^x)=λ̃h(E^x) with a function h(E^x) independent of λ̃ (or finite and non-zero for λ̃→0). Since this new function h(E^x)=λ(E^x)/λ̃ describes the relative strength of holonomy modifications on different scales, we call it the holonomy function. Its appearance shows that it is independent of the holonomy length and instead describes the dynamical behavior of holonomy modifications or the variable K_φ. Some choices of the holonomy function h(E^x) resemble properties of the traditional μ_0 or μ̅-type schemes, but in some cases the usual heuristics turns out to be misleading. For instance, a constant h(E^x)=1, or λ=λ̃, could be related to a μ_0-type scheme of constant holonomy parameter μ in the traditional formulation. A common argument against using such a constant is that the interpretation of λ̃ as a holonomy length derived from coordinates would imply a growing physical length λ̃√(E^x) as measured by the relevant spatial geometry. The growth in E^x of this function then suggests that holonomy modifications increase in asymptotic regimes of large E^x, which are often expected to behave in a classical manner. However, equation (<ref>) shows that holonomy modifications do not grow in this case but merely track the classical behavior of K_φ: The second term in the denominator on the right-hand side of (<ref>) is sub-dominant in this regime, and thus the expression (2λ̃)^-1sin(2λ̃K_φ) on the left is identical with the classical behavior 2Ė^x/√(E^x) of K_φ for λ̃ = λ. Modifications then appear only because one has to rewrite sin(2λ̃K_φ) in terms of sin(λ̃K_φ) for some terms of the constraint, using trigonometric identities. Holonomy modifications therefore do not grow in an unbounded manner in the case of a constant holonomy length, but they may not decrease quickly enough so as to become subdominant in asymptotic regimes. This behavior can be changed by a holonomy function h(E^x) that decreases at a sufficient rate for growing E^x. We will see examples in our specific cases to be discussed later. §.§ Gravitational observable Consider a scalar function on the phase space ℳ. This function is a weak observable if δ_ϵℳ = 𝒟_H H + 𝒟_x H_x, where 𝒟_H and 𝒟_x are functions on the phase space and ϵ is a gauge function. On-shell, the function ℳ then remains invariant under time evolution or arbitrary gauge transformations. In this section, we will keep the most general form of our modification functions in the Hamiltonian constraint, before restricting them to take values as required in effective LQG in the next section. 
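A brief numerical aside may help fix intuition for this distinction before turning to observables: the size of holonomy corrections is controlled by the product λ̃ K_φ alone, not by λ̃ or K_φ separately. The following minimal Python sketch (ours, purely illustrative and not part of the constructions above) tabulates the relative deviation of sin(u)/u from its classical limit for u=λ̃ K_φ:

import numpy as np

# relative deviation of sin(u)/u from 1, with u = lambda_tilde * K_phi;
# the correction grows like u^2/6 and only depends on the product, not on
# how it is split between lambda_tilde and K_phi
for u in [1e-3, 1e-2, 1e-1, 0.5, 1.0]:
    print(u, 1.0 - np.sin(u)/u)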
If one considers the dependence ℳ = ℳ (E^x , K_φ , (E^x)'/E^φ), using the periodic constraint (<ref>), the weak observability condition δ_ϵℳ = 𝒟_H H + 𝒟_x H_x determines the functional dependence <cit.> ℳ = d_0 + d_2/2(exp∫ d E^x α̅_2/2 E^x) ( c_f sin^2(λ̃ K_φ)/λ̃^2 + 2 q̅sin(2 λ̃ K_φ)/2 λ̃ - cos^2 (λ̃ K_φ) ((E^x)'/2 E^φ)^2 ) + d_2/4∫ d E^x ( α̅_0/E^xexp∫ d E^x α̅_2/2 E^x) , which is unique up to the constants d_0 and d_2 with classical limits d_0→0 and d_2→1. (There exists an anomaly-free and covariant constraint more general than (<ref>), but after imposing the existence of a weak observable, the extra terms are restricted to vanish and we recover (<ref>) with (<ref>) as the unique weak observable <cit.>.) Here we use the barred functions as defined by (<ref>) for the sake of conciseness. If we insert the classical values α_2=1, and α_0 = 1 -Λ E^x, and d_0 = 0, d_2=1 the weak observable simplifies to ℳ = √(E^x)/2( 1 - Λ/3 E^x + λ̃^2/λ^2( c_f sin^2(λ̃ K_φ)/λ̃^2 + 2 λ/λ̃ q sin(2 λ̃ K_φ)/2 λ̃ - cos^2 (λ̃ K_φ) ((E^x)'/2 E^φ)^2 ) ) where we have chosen the integration constant λ̃^2 for the exponential integrals. As already seen for the constraint, we must be careful in taking the classical limit. If we first invert the canonical transformation to the non-periodic variables, the classical limit is given by λ→0. If this inversion is not performed, we must first take the limit λ→λ̃, followed by λ̃→0. The classical limit of this observable is then the classical mass observable. The structure function (<ref>) can be written in terms of this observable as q̃^xx = (c_f + λ^2 (1 - 2 ℳ/√(E^x) - Λ/3 E^x ) ) λ̃^2/λ^2χ^2 E^x/(E^φ)^2 . In the non-periodic version, the mass observable takes the form ℳ = √(E^x)/2( 1 - Λ/3 E^x + c_f sin^2(λ K_φ)/λ^2 + 2 q sin(2 λ K_φ)/2 λ - cos^2 (λ K_φ) ((E^x)'/2 E^φ)^2 ) , and the structure-function is given by q̃^xx = (c_f + λ^2 (1 - 2 ℳ/√(E^x) - Λ/3 E^x ) ) χ^2 E^x/(E^φ)^2 . Note that the mass observable is periodic in K_φ even in the non-periodic version of the constraint. §.§ Reflection symmetry of the transition surface The Hamiltonian constraint (<ref>), besides being periodic in K_φ has another interesting symmetry in the special case of q=0. In this case, the structure function is symmetric around the surface K_φ = π / (2 λ̃) in the sense that q̃^x x is invariant under the reflection δ→ - δ where δ is defined by K_φ = ±π / (2 λ̃) + δ. The Hamiltonian constraint is invariant under the combined operation δ→ - δ, K_x → - K_x. Thus, the two conditions K_φ = ±π / (2 λ̃) and K_x = 0 define a surface of reflection symmetry in the phase-space. This property can manifest itself in dynamical solutions as a reflection surface in space-time <cit.>. If q≠ 0, this reflection-symmetry is broken unless q, like K_x, changes sign in the dynamical solution, but this is unlikely to be the case in general. Physical implications of breaking the reflection-symmetry breaking effects of q≠0 will be studied elsewhere <cit.>. In the following we set q=0, such that reflection symmetry is realized. It will be useful in the following sections to evaluate the mass observable (<ref>) at the reflection symmetry surface K_φ=-π/(2 λ̃), or equivalently its incarnation in the non-periodic variables (<ref>) at K_φ=-π/(2 λ). After some simplifications, we find that E^x, for a given value of the observable M obeys the equation c_f + λ^2 ( 1 - 2 ℳ/√(E^x) - Λ/3 E^x ) = 0 in any region of space-time where K_φ=-π/(2λ). 
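As a quick consistency check of this expression, the short Python sketch below (ours; it assumes the classical values c_f=1, q=0, Λ=0, d_0=0, d_2=1 and, for simplicity, a constant λ) evaluates the non-periodic mass observable on classical static data E^x=x^2, K_φ=0, E^φ=x/√(1-2M/x) and returns the Schwarzschild mass M independently of x and of the chosen λ:

import numpy as np

def mass_observable(Ex, Kphi, dEx, Ephi, lam, c_f=1.0, q=0.0, Lam=0.0):
    # non-periodic mass observable with classical alpha_0, alpha_2 and d_0=0, d_2=1;
    # lam is treated as a constant here although the formula allows lam(E^x)
    s1 = np.sin(lam*Kphi)/lam
    s2 = np.sin(2.0*lam*Kphi)/(2.0*lam)
    c  = np.cos(lam*Kphi)
    return 0.5*np.sqrt(Ex)*(1.0 - Lam*Ex/3.0 + c_f*s1**2 + 2.0*q*s2
                            - c**2*(dEx/(2.0*Ephi))**2)

M, lam = 1.0, 0.2
for x in [3.0, 5.0, 20.0]:
    Ephi = x/np.sqrt(1.0 - 2.0*M/x)
    print(x, mass_observable(x**2, 0.0, 2.0*x, Ephi, lam))   # equals M for every x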
Since λ is an arbitrary function of E^x, this equation may have multiple solutions according to the chosen modification function. Multiple reflection surfaces may therefore exist in a space-time solution. As will be clear from dynamical solutions derived later on, this reflection symmetry happens around the hypersurface in space-time where the areal radius √(E^x) reaches an extremum. For the specific solution x_λ^(-), which we will use to denote the minimal value of √(E^x) according to the dynamics defined by a given λ, this will correspond to a spacelike transition surface connecting a black hole to a white hole through a wormhole interior. In later sections, we will discuss the physical interpretation of these extremal values, including the case where there exists a maximal value at x_λ^(+). § BLACK-HOLE SOLUTION FOR GENERAL TRIANGULATION SCHEMES We continue to work with a general functional dependence of λ = λ(E^x) and now demonstrate that, qualitatively, singularity resolution of black holes in models of loop quantum gravity based on holonomy modifications is a triangulation-independent feature. In order to isolate effects of the holonomy modification λ, we set the functions c_f, α_0, α_2, and q to their classical values, while keeping a nonclassical, nontrivial χ(E^x) for rescaling freedom. For now, we keep a cosmological constant Λ and only set it to zero when we analyze suitable conditions for asymptotic flatness. As will become clear later on, it is necessary to scale the emergent metric using a suitable χ if we are interested in asymptotic flatness, depending on the chosen λ. For instance, a constant λ=λ̃ requires χ(∞)=1/√(1+λ̃^2), while λ=√(Δ/E^x) requires χ(∞)=1. The scaling of the spatial metric by a constant χ is not a simple coordinate transformation of the radial coordinate because the latter would change other components non-trivially if there is space-time curvature. Given the classical choice for most of the modification functions, the non-periodic constraint (<ref>) simplifies to H̃ = - χ√(E^x)/2[ E^φ( 1/E^x - Λ + 1/E^xsin^2 ( λ K_φ)/λ^2 + 4 ( K_φsin (2 λ K_φ)/2 λ - sin^2 ( λ K_φ)/λ^2) ∂lnλ/∂ E^x) + 4 K_x sin (2 λ K_φ)/2 λ - ((E^x)')^2/4 E^φ( 1/E^xcos^2 (λ K_φ) - 4 λ^2 ( K_x/E^φ + K_φ∂lnλ/∂ E^x) sin (2 λ K_φ)/2 λ) + cos^2 (λ K_φ) ( (E^x)' (E^φ)'/(E^φ)^2 - (E^x)”/E^φ) ] . with structure-function q̃^x x = ( 1 + λ^2 ( (E^x)'/2 E^φ)^2 ) cos^2 ( λ K_φ) χ^2 E^x/(E^φ)^2 = (1 + λ^2 (1 - 2 ℳ/√(E^x) - Λ/3 E^x ) ) χ^2 E^x/(E^φ)^2 . For comparison, the periodic version (<ref>) becomes H̃ = - χλ̃/λ√(E^x)/2[ E^φ( λ^2/λ̃^2( 1/E^x - Λ) + 4 (1/4 E^x - ∂lnλ/∂ E^x) sin^2 (λ̃ K_φ)/λ̃^2) + 4 K_x sin (2 λ̃ K_φ)/2 λ̃ + ((E^x)')^2/E^φ( cos^2 ( λ̃ K_φ) ( - 1/4 E^x + ∂lnλ/∂ E^x) + λ̃^2 K_x/E^φsin( 2 λ̃ K_φ))/2 λ̃) + ((E^x)' (E^φ)'/(E^φ)^2 - (E^x)”/E^φ) cos^2 ( λ̃ K_φ) ] , with structure-function q̃^x x = ( 1 + λ̃^2 ( (E^x)'/2 E^φ)^2 ) cos^2 ( λ̃ K_φ) χ^2 λ̃^2/λ^2E^x/(E^φ)^2 = (1 + λ^2 (1 - 2 ℳ/√(E^x) - Λ/3 E^x ) ) χ^2 λ̃^2/λ^2E^x/(E^φ)^2 . Since the effective line element can be written as d s^2 = - N^2 d t^2 + q̃_x x ( d x + N^x d t )^2 + E^x dΩ^2 , we shall now solve for the phase-space variables, imposing different gauges, in order to explicitly obtain the resulting space-time geometry. §.§ Non-periodic phase-space coordinates We will use the non-periodic constraint (<ref>) in this subsection to compute regions of a Schwarzschild-type static exterior and homogeneous regions in two different gauges. 
We show that a third gauge results in a modified Gullstrand-Painlevé space-time and is related to the two previous gauges by a standard coordinate transformation in regions of overlap. These three gauges possess a second coordinate singularity beyond the horizons, but they all show regular Ricci and Kretschmann scalars at those points. In Section <ref> we will use the periodic version of the constraint and obtain another gauge that is free of such a coordinate singularity and related to the Schwarzschild-type homogeneous region by a standard coordinate transformation. Because the overall factor χ always multiplies the lapse function in the equations of motion, it will be useful to define N̅ = χ N . We encourage readers to consult Appendix <ref> for a detailed exposition of the equations of motion utilized in this section. §.§.§ Schwarzschild gauge: Static region We define the Schwarzschild gauge by N^x = 0 , E^x = x^2 . The consistency condition Ė^x = 0 implies the equation ( 1 + λ^2 ((E^x)'/2 E^φ)^2 ) sin (2 λ K_φ)/2 λ = 0 which, assuming consistency with the classical limit, is solved by imposing K_φ = 0. This result implies a second consistency equation K̇_φ = 0, which we will address later on by solving it for the lapse function. The vanishing of the diffeomorphism constraint further implies that K_x = 0. Finally, the vanishing of the Hamiltonian constraint implies the equation 0 = ( 1 - Λ x^2 ) (E^φ)^2 - 3 x^2 + x^3 (ln (E^φ)^2)' , which is solved by E^φ = x/√(1 - c_φ/x - Λ x^2/3) , with constant c_φ. Substituting all these results into the mass observable (<ref>), and rewriting it as ℳ = M (interpreted as the mass parameter of a line element rather than a phase-space function), determines c_φ = 2 M . Lastly, K̇_φ=0 implies that N̅ acquires the expression of the classical lapse function, up to a scaling constant α, since K_φ = 0 eliminates all λ dependence in the constraint and the equations of motion it generates. Therefore, N = √(1 - J(x))/αχ , where we have defined J(x)= 2 M/x + Λ x^2/3 for brevity. The structure function is then given by q̃^x x = χ^2 ( 1 + λ^2 ( 1 - J(x)) ) (1 - J(x)) , where λ is an arbitrary function of x through E^x. Space-time solutions therefore depend on λ through the emergent space-time metric, while traditional models of loop quantum gravity always imply λ-independent solutions in the static gauge. This property is one example of covariance problems in this setting because an unmodified static gauge makes it difficult to construct an equivalent non-static gauge for the same model. From (<ref>), the space-time line element is given by d s^2 = - (1 - J(x)) d t^2/α^2 χ^2 + 1/( 1 + λ^2 ( 1 - J(x) ) ) (1 - J(x)) d x^2/χ^2 + x^2 dΩ^2 , for arbitrary λ(x) and χ(x), and a constant α. For static solutions as constructed here, α may always be absorbed by a redefinition of the time coordinate. The locations of the black-hole and cosmological horizons are still given by their classical expressions since they solve the same classical equation, 1 - 2 M/x - Λ x^2/3 = 0 implied by factors of (1-J(x))^± 1 in (<ref>). Therefore, using a small positive cosmological constant so as to match with observations for our universe, the black-hole horizon appears at x_ H≈ 2 M , and the cosmological one at x_Λ≈√(3/Λ) . 
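The following Python sketch (ours; the values of M, Λ and the μ̄-like profile λ=√Δ/x are illustrative assumptions) evaluates the static metric components and locates the horizons numerically, confirming that they remain at their classical positions for any choice of λ:

import numpy as np
from scipy.optimize import brentq

M, Lam = 1.0, 1.0e-4
alpha, chi = 1.0, 1.0
lam = lambda x: np.sqrt(0.05)/x          # enters only the radial component

def J(x):
    return 2.0*M/x + Lam*x**2/3.0

def g_tt(x):
    return -(1.0 - J(x))/(alpha**2*chi**2)

def g_xx(x):
    return 1.0/((1.0 + lam(x)**2*(1.0 - J(x)))*(1.0 - J(x))*chi**2)

# the horizons solve the lambda-independent classical equation 1 - J(x) = 0
x_H  = brentq(lambda x: 1.0 - J(x), 1.5*M, 3.0*M)
x_La = brentq(lambda x: 1.0 - J(x), 3.0*M, 2.0*np.sqrt(3.0/Lam))
print(x_H, 2.0*M)               # close to 2M
print(x_La, np.sqrt(3.0/Lam))   # close to sqrt(3/Lam)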
Just as for a classical black hole in a de Sitter background, the static metric is valid only in the region x_ H<x < x_Λ, beyond which the metric turns spatially homogeneous in an extension of the same gauge, just flipping the roles of t and x as time and space coordinates. However, now we have further coordinate singularities at the values of x that solve the equation 1+λ^2 (1 - 2 M/x - Λ x^2/3) = 0 implied by the new factor of dx^2 in (<ref>). This equation can have multiple solutions depending on the chosen triangulation scheme λ=λ(x), but we can still formulate several general results. These new coordinate singularities are closely related to non-classical hypersurfaces of reflection symmetry implied by holonomy effects. In particular, note that the term multiplying λ^2 is the expression on the left-hand side of the horizons equation (<ref>) and hence it vanishes at x_ H and x_Λ. Since λ^2>0, this term must turn negative for (<ref>) to hold, and hence a coordinate singularity of this form cannot appear in the static region where this term is always positive. Thus, the solutions x=x_λ^(i) of this equation, with i labeling the different solutions, must be either below the Schwarzschild horizon, x_λ^(-)<x_ H denoting the largest solution less than the Schwarzschild radius, or above the cosmological horizon, x_λ^(+)>x_Λ denoting the smallest solution greater than the cosmological horizon. Therefore, solutions to (<ref>) do not lie in the static region, but rather in the homogenous ones. Finally, we note that this equation corresponds precisely to (<ref>) for the reflection-symmetry surface. Therefore, the solutions x_λ^(-) and x_λ^(+) correspond to extrema of the radius √(E^x), around which the spacetime is symmetric, and will have particularly interesting consequences. Solutions of these two types are not guaranteed to exist for any triangulation scheme λ = λ(E^x), and there could be more than two solutions in some cases. Nevertheless, we will show that there exists an x_λ^(-), corresponding to the minimum of the areal radius, for any well-behaved holonomy function, introducing the quantum hypersurface where the geometry transitions from a black hole to a white hole through a wormhole interior. In later sections we will come back to specific solutions corresponding to the traditional μ_0 and μ̅ schemes. First, however, we must ensure that the homogeneous regions exist as solutions to the equations of motion in their corresponding gauges, and identify the coordinate transformations relating the different gauge choices in regions of overlap. §.§.§ Schwarzschild gauge: Homogeneous regions We now restrict the gauge by setting all spatial derivatives equal to zero. We will use the label t_ h for the time coordinate and x_ h for the radial coordinate in a homogeneous patch. The homogeneity condition implies the partial gauge fixing N^x = 0 , N' = 0 . The on-shell condition H̃=0 can be solved to obtain K_x = - E^φ/4 E^x2 λ/sin (2 λ K_φ)( 1 - Λ E^x + sin^2 ( λ K_φ)/λ^2. . + 2 ( K_φsin (2 λ K_φ)/2 λ - sin^2 ( λ K_φ)/λ^2) ∂lnλ^2/∂ln E^x) . Equations of motion for the remaining phase-space variables are Ė^x = 2 N̅√(E^x) sin (2 λ K_φ)/2 λ , as well as Ė^φ/E^φ = N̅/2 √(E^x)2 λ/sin (2 λ K_φ)( sin^2 ( λ K_φ)/λ^2 - (1-Λ E^x) cos (2 λ K_φ). . - 2 λ^2 sin^4 (λ K_φ)/λ^4∂lnλ^2/∂ln E^x) . and K̇_φ = - N̅/2 √(E^x)( 1 - Λ E^x + sin^2 ( λ K_φ)/λ^2 + 2 ( K_φsin (2 λ K_φ)/2 λ - sin^2 ( λ K_φ)/λ^2) ∂lnλ^2/∂ln E^x) . 
Combining these three time derivatives, we can pick E^x as an evolution parameter by using the chain rule, d K_φ/ d E^x = K̇_φ/Ė^x, obtaining the equation d/ dln E^x(sin^2 (λ K_φ)/λ^2) = - 1/2( 1 - Λ E^x + sin^2 (λ K_φ)/λ^2) , which has the solution sin^2 ( λ K_φ)/λ^2 = c_x/√(E^x) + Λ E^x/3 - 1 with constant c_x. Direct substitution into the mass observable (<ref>) and relabeling ℳ=M determines c_x = 2 M. The equation of motion for the structure function is given by (lnq̃^x x/χ^2)^∙ = (ln( cos^2 (λ K_φ) E^x/(E^φ)^2))^∙ = 2 ( - λ^2 tan (λ K_φ)/λ( K̇_φ + 1/2 K_φ∂lnλ^2/∂ln E^xĖ^x/E^x) - Ė^φ/E^φ) + Ė^x/E^x = N̅/√(E^x)λ/tan (λ K_φ)( 2 M/√(E^x) - 2 Λ E^x/3) . We now complete the gauge fixing by setting E^x = t_ h^2, yielding sin^2 ( λ K_φ)/λ^2 = 2 M/t_ h + Λ/3 t_ h^2 - 1 . The consistency equation Ė^x = 2 t_ h can be used to solve for the lapse function as N̅ = 2 λ/sin (2 λ K_φ) = ( J(t_ h) - 1)^-1/2(1-λ^2 (J(t_ h) - 1))^-1/2 with our previous definition of the function J, and the equation of motion for the structure function is solved by q̃^x x = α^2 χ^2 (2 M/t_ h + Λ/3 t_ h^2 - 1)^-1 with an integration constant α. We therefore obtain the space-time line element d s^2 = - 1/( 1 - λ^2 (J(t_ h) -1) ) (J(t_ h)-1) d t_ h^2/χ^2 + (J(t_ h)-1) d x_ h^2/α^2 χ^2 + t_ h^2 dΩ^2 , for arbitrary λ and χ as functions of t_ h, which matches the static Schwarzschild metric (<ref>) up to the label swap x → t_ h, t → x_ h. This effective metric has a new (quantum) coordinate singularity at the time coordinates solving the equation 1-λ^2 (2 M/t_ h + Λ t_ h^2/3 - 1) = 0 , which is the same equation as the one obtained in the static region (<ref>) up to the label change x→ t_ h. As discussed in this context, solutions t_ h represent hypersurfaces of reflection symmetry. (The name quantum coordinate singularity is motivated by the observation that this new singular hypersurface appears only in the presence of holonomy modifications. As mentioned earlier, on adopting suitable coordinates, we shall demonstrate later that one of the solutions of (<ref>) corresponds to the transition surface where √(E^x) acquires a minimum value.) Therefore, homogeneous Schwarzschild-type coordinates are valid only in the regions x_λ^(-)<t_ h<x_ H where x_λ^(-) is the largest solution of (<ref>) smaller than x_ H, and x_Λ<t_ h<x_λ^(+) where x_λ^(+) is the smallest solution of (<ref>) greater than x_Λ. If there is no x_λ^(-) obeying the condition that defines it, we set x_λ^(-)=0. If there is no x_λ^(+) obeying the condition that defines it, we set x_λ^(+)=∞. If (<ref>) has more than two distinct solutions, additional regions can be introduced. However, we will show in Section <ref> that the solutions x_λ^(±) closest to x_ H and x_Λ, respectively, represent extremal radii, such that x_λ^(-) is a minimum radius, or a “bounce,” and x_λ^(+) is a maximum radius of a recollapsing homogeneous region. Solutions x_λ^(i) less than x_λ^(-) or greater than x_λ^(+) are therefore never reached by dynamical solutions if we start in the static Schwarzschild region and extend it across coordinate singularities. Nevertheless, these alternative solutions might be used to define independent space-time models that could perhaps be connected with the static region by tunneling processes of a more complete quantum theory. As already indicated, the homogeneous solutions can formally be obtained from the static one by a simple flip of coordinates. However, the corresponding space-time regions are disjoint and separated by horizons.
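The location of this quantum coordinate singularity is easily obtained numerically for a given triangulation scheme. The Python sketch below (ours; parameter values and the two sample schemes are illustrative assumptions) finds x_λ^(-) by root finding and, for constant λ with Λ=0, checks it against the closed form x_λ^(-)=2Mλ^2/(1+λ^2) implied by the condition above:

import numpy as np
from scipy.optimize import brentq

M, Lam = 1.0, 0.0

def reflection_condition(x, lam_of_x):
    # zero of 1 - lambda(x)^2 (2M/x + Lam x^2/3 - 1) marks x_lambda
    return 1.0 - lam_of_x(x)**2*(2.0*M/x + Lam*x**2/3.0 - 1.0)

# mu_0-like scheme: constant lambda
lam0 = 0.3
x_minus = brentq(lambda x: reflection_condition(x, lambda _: lam0),
                 1.0e-6, 2.0*M*(1.0 - 1.0e-12))
print(x_minus, 2.0*M*lam0**2/(1.0 + lam0**2))

# mubar-like scheme: lambda(x) = sqrt(Delta)/x, giving x^3 + Delta x = 2 M Delta
Delta = 0.05
x_minus_mubar = brentq(lambda x: reflection_condition(x, lambda y: np.sqrt(Delta)/y),
                       1.0e-6, 2.0*M*(1.0 - 1.0e-12))
print(x_minus_mubar)
# with Lam > 0, a second root x_lambda^(+) beyond the cosmological horizon
# can be bracketed in the same way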
For a more precise connection between these regions, we should construct a gauge that describes a space-time region overlapping with a portion of the static solution and a portion of a homogeneous one, crossing the horizon. Classically, an example for such a gauge choice is given by the Gullstrand-Painlevé coordinates, which we now construct for the modified equations. §.§.§ From static Schwarzschild to Gullstrand–Painlevé The metric (<ref>) is static and thus has a timelike Killing vector ξ=∂_t. The quantity ε=-ξ· u=-u_t = -g_t ν u^ν is therefore conserved along geodesics, using the 4-velocity u. Compared with the classical Gullstrand–Painlevé system, we are interested in geodesics for free fall starting at rest at a finite radius, x=x_0, because the asymptotic regime may be non-static if there is a cosmological horizon or non-classical asymptotic behavior implied by certain types of holonomy modifications. The 4-velocity of such an object has a time component u^t |_x= x_0=g^tt u_t |_x=x_0=-g^tt|_x=x_0ε, using the diagonal nature of the static metric in the second step, while all spatial components vanish. Timelike normalization, -1=g_μν u^μ u^ν|_x=x_0=g_tt(u^t)^2|_x=x_0, then implies ε^2= -1/g^tt|_x=x_0= 1 - J(x_0)/α^2 χ(x_0)^2 . For x≠x_0, u^x is non-zero and is related to u^t by the normalization condition. Since u_t=-ε is conserved, it is convenient to evaluate normalization in the form g^μνu_μu_ν=-1, such that u_x = s √(g_xx(x)(-g^tt(x)ϵ^2-1)) = s √(1/(1-J(x))(1+λ(x)^2(1-J(x)))χ(x)^2( χ(x)^2/χ(x_0)^21-J(x_0)/1-J(x)-1)) = s/χ(x)χ(x_0)√(χ(x)^2(1-J(x_0))-χ(x_0)^2(1-J(x)))/|1-J(x)| √(1+λ(x)^2(1-J(x))) with a sign choice s that distinguishes outgoing from ingoing GP coordinates. The normalization condition also implies that we can obtain the proper-time interval along our geodesics from the co-velocity components: dτ = -g^μν dx_μ dx_ν/ dτ= - u_μ g^μν dx_ν=-u_μ dx^ν . Using proper time as the new GP coordinate, we obtain d t_ GP = - u_μ d x^μ = ε d t - s/χ(x)χ(x_0)√(χ(x)^2(1-J(x_0))-χ(x_0)^2(1-J(x)))/|1-J(x)| √(1+λ(x)^2(1-J(x))) d x . The coordinate transformation t_ GP (t , x) is defined by an explicit integration of the 1-form (<ref>), which depends on the choice of λ(x) and χ(x) and therefore cannot be performed at a general level. However, since ε is constant and the coefficient of dx is independent of t, (<ref>) defines a closed 1-form, such that an integral t_ GP(t,x) exists as a local coordinate for any choice of λ(x) and χ(x). Substituting d t = ε^-1 d t_ GP + ε^-1 u_x d x in (<ref>), we obtain d s^2_ GP = - N^2 ε^-2 ( dt_ GP^2 + 2 u_x d t_ GP d x + u_x^2 d x^2) + q̃_x x d x^2 + x^2 dΩ^2 = - N^2/ε^2 d t_ GP^2 - 2 u_x N^2/ε^2 d t_ GP d x + (q̃_x x - N^2/ε^2 u_x^2) d x^2 + x^2 dΩ^2 = - d t_ GP^2 + q̃_x x N^2/ε^2( d x - u_x q̃^x x d t_ GP)^2 + x^2 dΩ^2 = - d t_ GP^2 + χ^-4/α^2 ε^2( 1 + λ^2 ( 1 - J(x) ) )^-1 ×( d x - s χ√(J(x)-1 -χ^2/χ(x_0)^2( J(x_0)-1))√(1 + λ^2 ( 1 - J(x) )) d t_ GP)^2 + x^2 dΩ^2 where we used u_x^2=q̃_xx(ε^2/N^2-1) twice in the third line, and then inserted the explicit u_x from (<ref>). The metric remains regular at the horizon coordinates, but still diverges at the new (quantum) singular coordinates x=x_λ^(i). In the case of a vanishing cosmological constant, Λ=0, it is useful to take the limit x_0 →∞, in which case αε→ 1/lim_x→∞χ(x)=:1/χ_∞ from (<ref>) and the metric simplifies to d s^2_ GP = - d t_ GP^2 + χ_∞^2/χ^4( 1 + λ^2 ( 1 - 2M/x) )^-1 ×( d x + s χ√(2M/x+χ^2/χ_∞^2-1)√(1 + λ^2 ( 1 - 2M/x)) d t_ GP)^2 + x^2 dΩ^2 . 
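Since t_GP exists only as a local integral of this closed 1-form, it is natural to construct it numerically. The sketch below (ours; it assumes constant χ=χ_0, constant λ, Λ=0, the limit x_0→∞ and the s=+1 branch) integrates the dx-coefficient along a slice of constant Schwarzschild time:

import numpy as np
from scipy.integrate import quad

M, chi0, lam0 = 1.0, 1.0, 0.3

def dtGP_dx(x):
    # x-coefficient of dt_GP for Lambda = 0, x_0 -> infinity, s = +1
    f = 1.0 - 2.0*M/x
    return -np.sqrt(2.0*M/x)/(chi0*f*np.sqrt(1.0 + lam0**2*f))

x_ref = 3.0*M          # arbitrary reference radius fixing the integration constant
for x in [4.0*M, 6.0*M, 10.0*M]:
    val, _ = quad(dtGP_dx, x_ref, x)
    print(x, val)      # t_GP(x) - t_GP(x_ref) along t = const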
§.§.§ From homogeneous Schwarzschild to Gullstrand-Painlevé We can now start with the GP metric (<ref>) we just derived and extend it across the horizons. Because we expect the causal meaning of time and space coordinates to change in the interior of the black hole, let us define T_ GP = x and X_ GP = t_ GP and rewrite the GP metric (<ref>) as d s^2_ GP = - d X_ GP^2 + χ^-4/α^2 ε^2( 1 + λ^2 ( 1 - J(T_ GP) ) )^-1( d T_ GP + s χ/χ(x_0)√(χ^2 (1-J(x_0))-χ(x_0)^2 (1-J(T_ GP)))√(1 + λ^2 ( 1 - J(T_ GP) )) d X_ GP)^2 + T_ GP^2 dΩ^2 = - ( 1 - λ^2 (J(T_ GP) - 1) )^-1( J(T_ GP) - 1 )^-1 d T_ GP^2/χ^2 + J(T_ GP) - 1/α^2 χ^2 ε^2( d X_ GP + s/χ√(J(T_ GP)-1+(χ^2/χ(x_0)^2)(1- J(x_0))/ 1 - λ^2 (J(T_ GP) - 1)) dT_ GP/ J(T_ GP) - 1)^2 + T_ GP^2 dΩ^2 . A comparison of (<ref>) with (<ref>) shows that the two are related by the coordinate transformation t_ h = T_ GP and d x_ h = d X_ GP/ε - s/εχ√(J(T_ GP)-1+(χ^2/χ(x_0)^2)(1-J(x_0))/ 1 - λ^2 ( J(T_ GP) - 1)) d T_ GP/J(T_ GP)-1 for arbitrary λ and χ. The explicit integration can only be completed after specifying the choice of λ (t_ h) and χ(t_ h), but such an integration always exists locally because dx_ h is a closed 1-form. The metric in (<ref>) remains regular at the horizon coordinates if we choose αε=√(1-J(x_0))/χ(x_0), but it diverges at the reflection surfaces T_ GP=x_λ^(i) for any α. Therefore, the GP chart is valid only in the region x_λ^(-)<x<x_λ^(+), where for more than two solutions to (<ref>) the ±-versions of x_λ^(i) are as defined at the end of Section <ref>. In the case of a vanishing cosmological constant, we can take x_0→∞ and αε=1/lim_x→∞χ(x)=:1/χ_∞, simplifying the metric to d s^2_ GP = - ( 1 + λ^2 ( 1 - J(T_ GP) ) )^-1( J(T_ GP) - 1 )^-1 d T_ GP^2/χ^2 + χ_∞^2/χ^2( J(T_ GP) - 1) ( d X_ GP - s/χ√(J(T_ GP)+χ^2/χ_∞^2-1/ 1 - λ^2 (J(T_ GP) - 1)) dT_ GP/ J(T_ GP) - 1)^2 + T_ GP^2 dΩ^2 . §.§.§ Gullstrand-Painlevé gauge The GP metric (<ref>) covers both the static and the homogeneous regions in a single coordinate chart. We now perform a consistency check by showing that the solution obtained above satisfies the canonical equations of motion in an appropriate GP gauge. Based on the metric (<ref>), we define the GP gauge as N = 1 , E^x = x^2 . The consistency equation E^x = 0 determines the shift vector N^x = - χ( 1 + λ^2 x^2/(E^φ)^2) sin (2 λ K_φ)/2 λ . The on-shell conditions H_x=0 and H̃=0, respectively, determine K_x = E^φ K_φ' / (2 x) and sin^2 (λ K_φ)/λ^2 = ( 1 + λ^2 x^2/(E^φ)^2)^-1( c_φ/x + Λ x^2/3 + x^2/(E^φ)^2 - 1 ) , for arbitrary λ, where c_φ is an integration constant. Finally, the equation of motion for E^φ reads (ln(χ^2 (E^φ)^2/x^2))^∙ = - √((c_φ/x + Λ x^2/3 + x^2/(E^φ)^2 - 1) (1 + λ^2 (1 - c_φ/x - Λ x^2/3)))( ln(χ^2 (E^φ)^2/x^2) )' and is solved by E^φ = x / (αχε) with constant α and ε. (We introduce two integration constants for the sake of comparing with our previous line element, which will relate ε to the constant denoted earlier by the same letter. As a partial differential equation, (<ref>) has additional solutions that depend on the specific choice of λ(x) but will not be required here.) The resulting metric can then be derived from (<ref>), which matches exactly with (<ref>) upon the identification of c_φ=2 M and ε^2 = (1 - J(x_0))/(α^2 χ(x_0)^2). Therefore, the modified GP metric (<ref>) is related by standard coordinate transformations to both the static and homogeneous solutions in regions of overlap. 
Furthermore, it is well-defined at the Schwarzschild horizon x=x_ H provided x_0>x_ H, and at the cosmological horizon x=x_Λ provided x_0>x_Λ. In the latter case, x_0 (defined as the reference position for the inertial observer) can no longer be interpreted as the coordinate where the observer is at rest, but the coordinate transformation still holds. §.§ Internal-time gauge of homogeneous regions As we have seen, the homogeneous Schwarzschild charts terminate at the quantum hypersurfaces of reflection symmetry at x_λ^(i) solving (<ref>). As we will show in Section <ref>, the Ricci scalar remains regular at those surfaces, suggesting that another chart must exist for which the metric remains regular there. Since the coordinates x_λ^(i) always lie in the homogeneous regions, we will formulate a new homogeneous gauge where the time coordinate is not given by the radius in an extension of the Schwarzschild exterior, but is instead strategically chosen to track the angular momentum K_φ. To this end, it is useful to switch to the constraint (<ref>) because it is periodic in the variable of interest, K_φ. We set all spatial derivatives to zero and label t_φ as the time coordinate in our new gauge as well as x_ h as the spatial coordinate, which agrees with the previous homogeneous coordinate. The partial gauge-fixing N^x = 0 , N' = 0 , is used for the homogeneous gauge. The on-shell condition H̃=0 can be solved for K_x: K_x = - E^φ/4 E^x2 λ̃/sin (2 λ̃ K_φ)( λ^2/λ̃^2(1- Λ E^x) + (1 - 2 ∂lnλ^2/∂ln E^x) sin^2 (λ̃ K_φ)/λ̃^2) . The relevant equations of motion are Ė^x/E^x = λ̃/λ2 N̅/√(E^x)sin (2 λ̃ K_φ)/2 λ̃ and Ė^φ/E^φ = λ̃/λN̅/2 √(E^x)2 λ̃/sin (2 λ̃ K_φ)( (1 - 2 ∂lnλ^2/∂ln E^x) sin^2 (λ̃ K_φ)/λ̃^2 - λ^2/λ̃^2(1- Λ E^x) cos (2 λ̃ K_φ) ) , as well as K̇_φ = - λ̃/λN̅/2 √(E^x)( λ^2/λ̃^2(1- Λ E^x) + ( 1 - 2 ∂lnλ^2/∂ln E^x) sin^2 (λ̃ K_φ)/λ̃^2) . (Recall that N̅=χ N.) Combining these equations of motion we obtain d/ dln E^x( λ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2) = - 1/2( 1 - Λ E^x + λ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2) , which has a general solution λ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2 = c_x/√(E^x) + Λ E^x/3 - 1 , for arbitrary λ(E^x). Inserting into the mass observable (<ref>) and relabeling ℳ=M a constant (rather than phase-space function) determines c_x = 2 M. A different combination of the time derivatives gives us the equation of motion (lnq̃^xx/χ^2)^∙ = - 2 λ̃tan(λ̃ K_φ) K̇_φ - 2 Ė^φ/E^φ + (1 - ∂lnλ^2/∂ln E^x) Ė^x/E^x = λ̃/λN̅/√(E^x)[ sin (2 λ̃ K_φ)/2 λ̃ + λ^2/λ̃^2( λ̃^2 tan(λ̃ K_φ)/λ̃ + 2 λ̃/tan (2 λ̃ K_φ)) ] = N̅/√(E^x)λ/λ̃λ̃/tan(λ̃ K_φ)[ 1 - Λ E^x + λ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2] for the structure-function. We now complete the gauge fixing by choosing K_φ = - t_φ. The consistency equation K̇_φ = -1 determines the lapse function N = λ/λ̃2 √(E^x)/χλ̃^2/λ^2( 1 - Λ E^x + ( 1 - 2 ∂lnλ^2/∂ln E^x) λ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2)^-1 , such that the equation of motion for the structure function becomes (lnq̃^xx/χ^2)^∙ = 2 λ̃/tan(λ̃ K_φ)( 1 - Λ E^x + λ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2) ×( 1 - Λ E^x + ( 1 - 2 ∂lnλ^2/∂ln E^x) λ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2)^-1 . 
This equation is hard to solve directly, but we can perform the associated coordinate transformation from t_ h to t_φ using (<ref>) with E^x = t_ h^2 and K_φ = - t_φ: λ̃^2/λ(t_ h^2)^2sin^2 (λ̃ t_φ)/λ̃^2 = 2 M/t_ h + Λ t_ h^2/3 - 1 such that sin(2 λ̃ t_φ)/2 λ̃ d t_φ = 1/2( λ^2/λ̃^2∂lnλ^2/∂ t_ h(2 M/t_ h + Λ t_ h^2/3 - 1) - 1/t_ hλ^2/λ̃^2(2 M/t_ h - 2 Λ t_ h^2/3) ) d t_ h = 1/2 √(E^x)λ^2/λ̃^2( - 2 M/√(E^x) + 2 Λ E^x/3 + 2 ∂lnλ^2/∂ln E^xλ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2) d t_ h = - 1/2 √(E^x)λ^2/λ̃^2( 1 - Λ E^x + ( 1 - 2 ∂lnλ^2/∂ln E^x) λ̃^2/λ^2sin^2 (λ̃ K_φ)/λ̃^2) d t_ h or d t_ h = - 2 √(E^x)λ̃^2/λ^2sin(2 λ̃ t_φ)/2 λ̃( 1 - Λ E^x + ( 1 - 2 ∂lnλ^2/∂ln E^x) λ̃^2/λ^2sin^2 (λ̃ t_φ)/λ̃^2)^-1 d t_φ The homogeneous Schwarzschild metric (<ref>) then becomes d s^2 = - 4 E^x λ̃^2/λ^2( 1 - Λ E^x + ( 1 - 2 ∂lnλ^2/∂ln E^x) λ̃^2/λ^2sin^2 (λ̃ t_φ)/λ̃^2)^-2 d t_φ^2/χ^2 + λ̃^2/λ^2sin^2 (λ̃ t_φ)/λ̃^2 d x_ h^2/α^2 χ^2 + E^x dΩ^2 , for arbitrary functions λ and χ, where E^x is implicitly given in terms of t_φ by the solution to (<ref>). Notice that the reference value λ̃, which is always constant, can be eliminated completely by scaling the time coordinate to t̃_φ=λ̃ t_φ. One can now check that the lapse function of (<ref>) matches the one obtained canonically, (<ref>). Finally, we can take the inverse of the radial component of (<ref>) as the structure function, take its time derivative and confirm that it indeed satisfies the canonical equation of motion (<ref>) since ∂_t_φln(1/α^2 χ^2λ̃^2/λ^2sin^2 (λ̃ t_φ)/λ̃^2) = 2 λ̃/tan(λ̃ t_φ)( 1 - Λ E^x + λ̃^2/λ^2sin^2 (λ̃ t_φ)/λ̃^2) ×( 1 - Λ E^x + ( 1 - 2 ∂lnλ^2/∂ln E^x) λ̃^2/λ^2sin^2 (λ̃ t_φ)/λ̃^2)^-1 . §.§ Proof of extremality We are now ready to prove our previous claim that any solution of (<ref>) for the new quantum coordinate singularities in homogeneous regions of the modified Schwarzchild geometry (<ref>) dynamically corresponds to an extremal radius (with some restrictions in case of higher multiplicities of the roots). Inserting E^x=x^2 in equation (<ref>), we obtain the condition that the function defined by b(E^x)= 1+λ^2 (1 - 2 M/x - Λ x^2/3) = 1+λ^2 (1 - 2 M/√(E^x) - Λ E^x/3) vanishes at E^x=(x_λ^(i))^2. With our identification of c_x=2M and equation (<ref>), it then follows that sin^2(λ̃K_φ)= 1-b(E^x) and sin^2(λ̃K_φ)=1 at any solution of (<ref>). The function b(E^x) is always positive for E^x between x_ H^2 and x_Λ^2. Outside of this range, its first zeros on the two sides are defined as (x_λ^(±))^2. If these solutions have odd multiplicity, b(E^x) becomes negative for values of E^x less than (x_λ^(-))^2 or greater than (x_λ^(+))^2, such that there is no corresponding K_φ for which equation (<ref>) could hold. The only dynamical option is for E^x to start increasing after it reaches (x_λ^(-))^2, or to start decreasing after it reaches (x_λ^(+))^2. These values are therefore dynamical extrema of E^x, as well as of sin^2(λ̃K_φ) as a measure of curvature. The extremum of sin^2(λ̃K_φ) also characterizes these extrema of E^x as hypersurfaces of reflection symmetry. If there are solutions of (<ref>) other than x_λ^(±), they will not play a role for dynamical solutions obtained by extending the static region across coordinate singularities. Unlike in Schwarzschild or GP coordinates, the metric (<ref>) remains regular at any hypersurface of reflection symmetry, given by t_φ = π / (2λ̃) and can be used for this purpose. We discuss the global structure in Section <ref> and perform a detailed analysis of singularity resolution in Section <ref>. 
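The bounce can be made explicit numerically. The Python sketch below (ours; it assumes a constant holonomy function, Λ=0 and illustrative parameter values) inverts the relation between t_h and the internal time t_φ and confirms that the areal radius attains its minimum, x_λ^(-)=2Mλ̃^2/(1+λ̃^2) for constant λ=λ̃, exactly on the reflection surface t_φ=π/(2λ̃):

import numpy as np
from scipy.optimize import brentq

M = 1.0
lam_tilde = 0.3
lam = lambda Ex: lam_tilde            # constant scheme for simplicity

def th_of_tphi(tphi):
    # invert sin^2(lam_tilde t_phi)/lambda(t_h^2)^2 = 2M/t_h - 1 for t_h
    rhs = np.sin(lam_tilde*tphi)**2
    f = lambda th: rhs/lam(th**2)**2 - (2.0*M/th - 1.0)
    return brentq(f, 1.0e-6, 2.0*M*(1.0 - 1.0e-12))

tphis = np.linspace(0.05, np.pi/lam_tilde - 0.05, 201)
ths = np.array([th_of_tphi(t) for t in tphis])
i_min = ths.argmin()
print(tphis[i_min], np.pi/(2.0*lam_tilde))                  # reflection surface
print(ths.min(), 2.0*M*lam_tilde**2/(1.0 + lam_tilde**2))   # minimal radius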
For completeness, we mention why the previous conclusions are restricted to x_λ^(±) with odd multiplicity. If x_λ^(-) or x_λ^(+) has even multiplicity, the function b(E^x) stays positive in a neighborhood around such a solution, and dynamical values of √(E^x) beyond this solution are not ruled out by the condition (<ref>). A detailed dynamical analysis would be required for a complete understanding, which we will not perform in this paper. Nevertheless, such cases, even though they are not generic, might be of interest as new types of extremal black holes, defined in the general sense that initially distinct coordinate singularities coincide for some parameter choices. For extremal black holes in classical general relativity, the inner and outer horizon coincide, implying a double root of the defining equation. For new extremal solutions in emergent modified gravity, the black-hole horizon retains its non-extremal form, but the internal structure changes because some of the interior coordinate singularities coincide. Such solutions could shed light on possible connections, such as tunneling processes, between space-time regions within x_λ^(±) and values beyond these extrema that lie in disjoint space-time regions in the non-extremal case. In these discussions, we assumed that the areal gauge is always available in a region around the extremal radius, which generically is the case for physically acceptable solutions: This gauge would not be available in a given region if and only if E^x is completely constant there in space and time. (If E^x is constant only spatially in a given slicing but depends on time, one can always deform the slicing locally such that (E^x)'≠0.) The diffeomorphism constraint then implies that K_φ'=0 in this region because E^φ is non-zero around these new coordinate singularities. Maintaining Ė^x=0 in the region implies that K_φ cannot change in time according to the E^x-equation of motion, and stays at the value required for reflection symmetry on all slices in the region. With these conditions, the equation of motion (<ref>) for K_φ implies a first-order differential equation for λ which, for zero cosmological constant, has the solution λ=(c/√(E^x)-1)^-1/2 with a constant c. This behavior is not of the form usually considered for holonomy modifications, and could only be realized in a finite range of E^x. § GLOBAL STRUCTURE The global structure of classical space-time solutions may change if singularities are resolved or if there are large-scale effects from holonomy modifications that could alter the asymptotic behavior. Different implications are possible, depending on the form of holonomy modifications as determined by the function λ(E^x). §.§ Asymptotic flatness and zero-mass limit If we set Λ=0, we may impose asymptotic flatness. For this purpose, the metric (<ref>) for the modified Schwarzschild exterior is the relevant one, which takes the asymptotic form d s^2 ≈ - d t^2/α^2 χ(∞)^2 + ( 1 + λ(∞)^2 )^-1 d x^2/χ(∞)^2 + x^2 dΩ^2 . Therefore, asymptotic flatness partially determines the limiting behavior of the two modification functions by the relations α = χ(∞)^-1 , χ(∞) = 1 / √(1+λ(∞)^2) , which require χ, and hence λ, to be asymptotically constant because α is a constant, implied by one of our integrations. We will impose this condition in the following. We may independently ask that flat space-time be recovered in the zero-mass limit M→0, in which case the line element takes the form d s^2 ≈ - d t^2/α^2 χ^2 + ( 1 + λ^2 )^-1 d x^2/χ^2 + x^2 dΩ^2 .
The radial component is rendered flat by the choice χ = 1 / √(1+λ^2) , which is stronger than the asymptotic-flatness condition (<ref>), but compatible with it. On the other hand, the time component becomes flat only if χ = constant =:χ_0 , α = χ_0^-1 , in which case the metric is asymptotically flat too according to (<ref>). Therefore, the radial and time components can both be flat in the zero-mass limit only if λ is constant, or if λ depends not only on E^x but also on the mass (for instance via renormalization) such that it approaches a constant as M→0. Here, we will not consider the second possibility. If one wishes to recover flat space as part of the classical geometry in the zero-mass limit, one needs a constant radial component. In this case, we impose the conditions (<ref>) and (<ref>) such that the static region is described by d s^2 = - (1 - 2M/x) 1+λ^2/1+λ_∞^2 d t^2 + ( 1 - λ^2/1+λ^22M/x)^-1(1 - 2M/x)^-1 d x^2 + x^2 dΩ^2 , where λ_∞ = lim_x→∞λ (x). The condition of having flat space in the zero-mass limit could therefore be used to determine one of the modification functions. However, the spatial quantum geometry realized in LQG on Planckian scales can be expected to leave traces even in an effective description, such that a non-Euclidean 3-dimensional space could be acceptable at zero mass. Because the focus of this paper is on black holes in effective loop quantum gravity, we will adopt this perspective and do not require (<ref>). Therefore, we impose only the asymptotic-flatness condition (<ref>), such that the static region is described by d s^2 = - (1 - 2M/x) d t^2/χ^2 (1+λ_∞^2) + ( 1 + λ^2 ( 1 - 2M/x) )^-1(1 - 2M/x)^-1 d x^2/χ^2 + x^2 dΩ^2 with the overall factor χ required to have the asymptotic limit χ(∞)=1/√(1+λ_∞^2)=:χ_0, but otherwise arbitrary. While several of the following analyses hold for arbitrary χ(x), we will, for simplicity of the notation, assume a constant overall factor, χ=χ_0. The metric then further simplifies to d s^2 = - (1 - 2M/x) d t^2 + ( 1 + λ^2 ( 1 - 2M/x) )^-1(1 - 2M/x)^-1 d x^2/χ_0^2 + x^2 dΩ^2 . This chart, and its homogeneous counterpart, will suffice for most of the following applications. A second argument to prefer (<ref>) over (<ref>) is the realization that the time component dominates in the non-relativistic regime: The proper-time interval for a timelike object moving along a radial curve with coordinate velocity v= d x / d t is given by dτ = √(- d s^2) = √(-g_tt - g_xx v^2) d t = (√(-g_tt) + O (v^2) ) d t. Therefore, (<ref>) would imply a non-classical gravitational potential even for slow-moving objects, whereas such effects are suppressed by a factor of v^2 if (<ref>) is used in this regime.
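As a simple numerical check of these statements, the sketch below (ours; the constant and μ̄-like profiles and their parameters are illustrative assumptions) evaluates g_tt and g_xx of the last line element for two schemes and confirms that both approach the Minkowski values at large x once χ_0=1/√(1+λ_∞^2):

import numpy as np

M = 1.0

def metric_static(x, lam_of_x, lam_inf):
    # g_tt and g_xx of the asymptotically flat static line element with chi = chi_0
    chi0 = 1.0/np.sqrt(1.0 + lam_inf**2)
    f = 1.0 - 2.0*M/x
    lam = lam_of_x(x)
    return -f, 1.0/(chi0**2*(1.0 + lam**2*f)*f)

for name, lam_f, lam_inf in [("constant", lambda x: 0.3, 0.3),
                             ("mubar", lambda x: np.sqrt(0.05)/x, 0.0)]:
    for x in [10.0, 1.0e3, 1.0e6]:
        print(name, x, metric_static(x, lam_f, lam_inf))
# both schemes approach (g_tt, g_xx) = (-1, 1), i.e. Minkowski space, at large x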
The additional homogeneous gauge based on K_φ as an internal time, implying the metric (<ref>), covers two homogeneous Schwarzschild regions connected to each other through x_λ^(+) or x_λ^(-). This gauge can be used to sew together GP charts with two complete homogeneous charts at the extremal radii, forming a single wormhole-like space-time. A transformation to null coordinates leads to the maximal extension as follows. Consider two null frames described by the 1-forms d u = v^(u)_μ d x^μ and d v = v^(v)_μ d x^μ. Replacing the time and the space coordinates with null coordinates u and v transforms a metric of the general form (<ref>) into a Kruskal-Szekeres form. Following the general results from <cit.>, we obtain the line element d s^2 = - N^2 - q̃_x x(N^x)^2/v^(u)_t v_t^(v) d u d v (suppressing the angular part of the metric). Using the null condition g^μν v^(i)_μ v^(i)_ν = 0 for i=u , v, the components of the null frames can be related to one another by v_t^(i)/v_x^(i) = s_(i)√(N^2 q̃^x x) + N^x , where s_(u) = - 1 and s_(v) = + 1, such that one is an ingoing family of light rays and the other is outgoing. Using the exterior metric (<ref>), which is static, we can choose null geodesics such that the components v_t^(i) are constant (being Killing conserved quantities) and can be absorbed into u and v. Using this, the null 1-forms can be written as d u = d t + α(1 - 2 M/x - Λ x^2/3)^-1( 1 + λ^2 ( 1 - 2 M/x - Λ x^2/3) )^-1/2 d x , d v = d t - α(1 - 2 M/x - Λ x^2/3)^-1( 1 + λ^2 ( 1 - 2 M/x - Λ x^2/3) )^-1/2 d x . The null and Schwarzschild coordinates are then easily related by u = t - x_* , v = t + x_* , where x_* is the direct integration of the x-component of the 1-forms (<ref>), which can be done once λ is specified. In these null coordinates, the metric is simply d s^2 = - (αχ)^-2(1 - 2 M/x - Λ x^2/3) d u d v . In the limit x → x_ H the null coordinates take the values u → + ∞ and v → - ∞, while x_* → - ∞; on the other hand, in the limit x → x_Λ, they take the values u → + ∞ and v → + ∞, and x_* →∞. We partially overcome these divergences by the usual coordinate transformation U = - e^-u / ( 4 M α) , V = e^v / ( 4 M α) , such that the region x_ H<x<x_Λ corresponds to U ∈ ( - ∞ , 0), V ∈ ( 0 , + ∞ ), where U,V=0 at the Schwarzschild horizon, and U=-∞,V=+∞ at the cosmological horizon. The metric becomes d s^2 = - F_E(x) d U d V with F_E (x) = 16 M^2/χ^2(1 - 2 M/x - Λ x^2/3) e^ - 2 x_* / ( 4 M α) , and x = x (U,V) given implicitly by the solution to U V = - e^ 2 x_* / ( 4 M α) . In the limits x→ x_ H and x→ x_Λ we obtain F_ E→ 0 for finite and differentiable functions λ and χ, as expected from null surfaces. Therefore, we can extend the null coordinates to x<x_ H by taking negative values for U , V (which must be possible, since the existence of such a region is implied by the GP metric). Finally, performing the conformal transformation U̅ = arctan (U) , V̅ = arctan (V) , we obtain the metric d s^2 = - F̅_E(x) dU̅ dV̅ . In these new coordinates, the values U→-∞ and V→+∞, corresponding to the horizon x=x_Λ, are located at U̅=-π/2 and V̅=+π/2, respectively. We can therefore extend the chart past the cosmological horizon by including values of U̅<-π/2 and V̅>+π/2 in the ranges of the conformal coordinates. The usual periodic Penrose diagram for the maximal extension therefore follows, see Figures <ref>-<ref>. 
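The finiteness of the conformal factor at the horizon, on which the Kruskal-type extension relies, can be checked numerically. The sketch below (ours; it assumes Λ=0, a μ̄-like λ, χ=χ_0=1 and α=1, all for illustration) obtains x_* by quadrature and evaluates F_E close to x=2M:

import numpy as np
from scipy.integrate import quad

M = 1.0
lam = lambda x: np.sqrt(0.05)/x
chi0, alpha = 1.0, 1.0

def dxstar_dx(x):
    f = 1.0 - 2.0*M/x
    return alpha/(f*np.sqrt(1.0 + lam(x)**2*f))

def xstar(x, x_ref=10.0*M):
    # tortoise-like coordinate up to the constant fixed by the choice of x_ref
    val, _ = quad(dxstar_dx, x_ref, x, limit=200)
    return val

def F_E(x):
    # conformal factor of ds^2 = -F_E dU dV for Lambda = 0
    f = 1.0 - 2.0*M/x
    return 16.0*M**2/chi0**2*f*np.exp(-xstar(x)/(2.0*M*alpha))

for eps in [1.0e-2, 1.0e-4, 1.0e-6]:
    print(2.0*M*(1.0 + eps), F_E(2.0*M*(1.0 + eps)))
# F_E tends to a finite, nonzero limit at the horizon, as expected for a null surface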
This global structure was first obtained in <cit.> for constants λ=λ̃, χ=χ_0, and α=1, provided we still have exactly two finite solutions x_λ^(±) for the extremal radii if Λ≠0, and only one solution x_λ^(-) for Λ=0. We therefore have a similar global structure with the exception that the metric is allowed to have arbitrary functions λ (x) and χ (x), and a free constant α. An important result of this generalization is that the maximum radius can be avoided depending on the choice of λ. In particular, non-constant λ∝ 1/√(E^x), corresponding to the μ̅-scheme in models of loop quantum gravity but now realized in a covariant fashion, does not develop such a maximum radius surface as we will see below. §.§ Physical global structure We emphasize that the global solution is not fully determined by solving the equations of motion alone. For simplicity, we will consider the case of a vanishing cosmological constant given by the diagram in Fig. <ref>. This is not necessarily the physical space-time as we show now with a simple explicit example. Any sliced portion of the diagram Fig. <ref> is a local vacuum solution. Therefore, if we take multiple slices of this diagram and glue them smoothly we arrive at a new global solution. As a simple example of such a procedure, consider the region E_ in∪ I_ in∪ E_- ∪ I_-, sliced from the maximal extension at x_ in=x_λ and x_-=x_λ. We can then glue smoothly the two boundaries x_ in=x_λ and x_-=x_λ because the two regions are locally identical. The result is shown in Fig. <ref>, which has the peculiarity that closed timelike curves exist. The diagrams in Fig. <ref> and Fig. <ref> are both valid representations of the local solutions of the equations of motion, but they describe two different physical space-times as evidenced by their global properties such as the existence of closed timelike curves in the latter but not in the former. We conclude that the determination of the physical global solution requires extra input (such as the absence of closed timelike curves) which may be seen as extended boundary conditions. §.§ Presence of matter The solutions obtained above are in a vacuum because no matter terms were considered in the constraints. Nevertheless, the solution contains the parameter M which, through the classical limit and its equivalence to the observable of the system, we may interpret as the mass of the black hole that produces the gravitational field and the curvature of space-time. In classical black holes, this mass can be interpreted as resting at the singularity, representing a trace of all the matter that collapsed to form the black hole. However, the space-time implied by Fig. <ref> has no such singularity and forces us to face an interpretational quandary as evidenced by the following thought experiment. An observer may fall into the black hole from E_ in, cross the hypersurface x=x_λ^(-), and exit to the region E_ out or even E_+ without ever seeing any matter because the system is in vacuum. So, is there really a mass or not? The answer could be no, and in that case, the global space-time is akin to an eternal black hole or a wormhole. If the answer is that there must be some matter somewhere that accounts for the parameter M, then we need a model of emergent modified gravity coupled to matter such that the space-time solution will differ from the vacuum case somewhere near the maximum-curvature hypersurface. Figure <ref> then cannot give us the physical global structure because it cannot result from the gravitational collapse of matter. 
In particular, stability of singularity resolution in the presence of matter is not guaranteed <cit.>. Without knowing the non-vacuum space-time solution near the hypersurface of x=x_λ^(-), we cannot extend the space-time through it in an obvious way. However, we may still use the existing vacuum solution to draw possible, though not definite, conclusions of what gravitational collapse would look like, proceeding as follows. Consider a star with radius R_0, such that the region x>R_0 is described by our vacuum solution but the region x<R_0 is not. Under gravitational collapse, the radius of the star will follow a timelike worldline R_ in in Fig. <ref> starting at i_ in^-, crossing the horizon and arriving at the reflection-symmetry hypersurface x=x_λ^(-). After this, there are two possibilities: the worldline can either proceed outwards all the way to i_ out^+, or matter coupling effects may make it bounce back and proceed to i_+^+. We can now slice the vacuum portions at R_1= R_ in∪ R_ out or R_2= R_ in∪ R_+, arriving at different space-time solutions for the exterior of the star. In the latter case, for example, we obtain the diagram in Fig. <ref>, denoting the formation of a wormhole connecting two universes. We have simply extrapolated the diagram to the interior of the star whose explicit solution requires specific matter couplings. See <cit.> for an exact solution for the collapse of dust. The diagram in Fig. <ref> can be further sliced as shown in Fig. <ref>. The shaded area is a valid solution to the vacuum equations of motion with two boundaries, one given by the star's radius R, and the other given by S. The two surfaces TO_+ and TO_ in can then be smoothly joined because the space-time surrounding them are locally identical. The resulting conformal diagram is shown in Fig. <ref>. We conclude that a detailed investigation of matter models for gravitational collapse is required before reliable conclusions can be drawn about potential astrophysical implications of non-singular black holes. § INTERIOR PHYSICS Unlike the homogeneous Schwarzschild or the GP coordinates, the internal time coordinates allow us to cross the reflection-symmetry hypersurfaces at x_λ^(±) since the metric (<ref>) is regular at any x_λ^(i) solving (<ref>). What we find then is that (<ref>) connects two different regions on the two sides of the reflection surface, each with the same geometry (<ref>). We therefore obtain a quantum extension of the modified Schwarzschild solution, connected by a reflection-symmetry hypersurface at a minimum radius, in which case we have two black/white-hole interiors with a range of x_λ^(-)<x<x_ H, or at a maximum radius, in which case we have two de Sitter regions with a range x_Λ<x<x_λ^(+). In this section, we present a detailed analysis of the interior and the mechanism of singularity resolution implied by it. §.§ Curvature around reflection-symmetry hypersurfaces The classical Schwarzschild geometry is a vacuum solution and therefore has vanishing Ricci curvature. Even though we did not include matter in our new solutions, they solve modified field equations and there is no guarantee that the emergent space-time geometry remains Ricci flat. In particular, it could be possible that these geometries form new physical singularities where even the Ricci scalar becomes infinite, unlike at the classical Schwarzschild singularity. We first show that this possible outcome is not realized at a reflection-symmetry hypersurface, where Ricci curvature is not zero but finite. 
The Ricci scalar of the metric (<ref>) for arbitrary λ is given by R = 2/x^2( 1 - χ^2 ( 1 + λ^2 ( 1 - 3 M^2/x^2 + x (1 - 2 M/x) (1 - 3 M/2 x) (lnλ^2)' ) ) ) + 4 χ^2 Λ( 1 + λ^2 ( 3/2( 1 - 4 M/3 x) + 5 x/12(1- 9 M/5 x) (lnλ^2)') ) - 2 χ^2 λ^2 Λ^2 x^2 (1 + x/6 (lnλ^2)') , where λ' = ∂λ / ∂ x. Using equation (<ref>) for the location x_λ^(i) of a reflection-symmetry hypersurface, we obtain the Ricci scalar R |_x=x_λ^(i) = 2/(x_λ^(i))^2 + 2 χ^2/(x_λ^(i))^2λ^2 (M/x_λ^(i)-Λ (x_λ^(i))^2/3) (3 M/x_λ^(i) + Λ (x_λ^(i))^2-2) - 2 χ^2/x_λ^(i)(3 M/x_λ^(i) + Λ (x_λ^(i))^2-2) (lnλ)' |_x=x_λ^(i) which is finite for any finite and differentiable λ(x) as long as x_λ^(i) is finite and non-zero. Therefore, we need a minimum radius x_λ^(-)> 0 or a maximum radius x_λ^(+)≠∞. Finite negative values for x_λ^(-) would imply finite Ricci curvature at this location, but the classical singularity at x=0 would then be reached by an infalling observer before it could be resolved by holonomy-type effects. Thus, a finite and positive minimum radius must exist for the interior geometry to be regular at small scales. At large scales such that Λ x^2 ≫ 1 and M/x≪1, the Ricci scalar is approximately given by R ≈ 4 χ^2 Λ + 2/x^2( 1 - χ^2 + χ^2 λ^2 ( 1 + x (lnλ^2)' ) ) + 4 χ^2 Λλ^2 ( 3/2 + 5 x/12 (lnλ^2)') - 2 χ^2 λ^2 Λ^2 x^2 (1 + x/6 (lnλ^2)') . This is finite at x →∞ only if λ(x) falls off at least as 1/x=1/√(E^x), in which case no divergence occurs at large radii and no maximum radius is needed to tame it, though the latter might still exist based on this argument alone. Because the Ricci scalar is an invariant object, its value (<ref>) at a reflection-symmetry hypersurface, derived in static Schwarzschild coordinates, is the same as the Ricci scalar of the homogeneous metric (<ref>) at the value t_φ = ±π / (2 λ̃) of our internal time. The advantage of the latter coordinate is that it can be extended past the reflection-symmetry hypersurface. There is therefore a robust sense in which our covariant holonomy modifications resolve the singularity inside a static Schwarzschild black hole, even without specifying the quantization ambiguity of the triangulation scheme λ(E^x) chosen to regularize the curvature. We shall explore specific realizations of this reflection-symmetry hypersurface for a μ_0 (constant λ) and a μ̅-type scheme in Sections <ref> and <ref>, respectively. §.§ Energy conditions in an emergent space-time theory The Einstein tensor of the emergent metric, defined as usual by G_μν = R_μν - 1/2 g_μν R , can give us useful information about energy conditions and suitable interpretations related to singularity resolution. We may then define an effective stress energy tensor T_μν^ eff = (8π)^-1G_μν such that Einstein's equation G_μν=8π T_μν^ eff formally holds. However, this equation does not play the role of an equation of motion because the space-time geometry is determined by emergence from the canonical equations. Moreover, there is no physical field with the effective stress-energy in our vacuum solutions, and in non-vacuum models the matter stress-energy in general differs from the effective stress energy as defined here. Nevertheless, the concept of T_μν^ eff is useful because of its geometrical relationship with space-time curvature. The usual energy conditions can then be analyzed, which, for reference purposes, are given by * Null-energy condition (NEC): For any null v^μ_(i) Geometric: R_αβv^α_(i)v^β_(i)≥ 0. Physical: T^ eff_αβv^α_(i)v^β_(i)≥ 0. 
* Weak-energy condition (WEC): For any timelike u^μ_(i) Geometric: G_αβu^α_(i)u^β_(i)≥ 0. Physical: T^ eff_αβu^α_(i)u^β_(i)≥ 0. * Strong energy condition (SEC): For any timelike u^μ_(i) Geometric: R_αβu^α_(i)u^β_(i)≥ 0. Physical: (T^ eff_αβ - 1/2 T^ eff g_αβ)u^α_(i)u^β_(i)≥ 0. For any null v^μ_(i), we have R_αβv^α_(i)v^β_(i) = G_αβv^α_(i)v^β_(i) = 8π T^ eff_αβv^α_(i)v^β_(i) by definition of the Einstein tensor and the effective stress-energy tensor. For any timelike u^μ_(i), whenever G_αβu^α_(i)u^β_(i)≥ 0 we also have T^ eff_αβu^α_(i)u^β_(i)≥ 0 by definition of the effective stress-energy tensor. Finally, 8π(T^ eff_αβ - 1/2 T^ eff g_αβ) = G_αβ + 1/2 R g_αβ = R_αβ by definition of the effective stress-energy tensor. Therefore, the geometric and physical versions of the energy conditions are equivalent to each other in any theory that can be formulated with effective Einstein equations. For singularity resolution in a model, such as ours, in which solutions do not follow from effective Einstein equations, only the geometric versions are meaningful. They are based on focusing or defocusing properties of congruences of geodesics, geometrical properties captured by the Ricci tensor. Physical matter fields that propagate stress-energy are not required for this purpose. Importantly, such a theory still has to be fully covariant for the Ricci tensor to be well-defined, which is not the case for many models constructed in loop quantum gravity. Taking these conclusions into account and for further clarity, we reduce the number of conditions and rename them as follows: * Null geometric condition (NGC): R_αβv^α_(i)v^β_(i)≥ 0 for any null v^μ_(i). This condition corresponds to the NEC in GR. * Timelike geometric condition (TGC): R_αβu^α_(i)u^β_(i)≥ 0 for any timelike vector u^μ_(i). This condition corresponds to the SEC in GR. (The WEC has no known geometric interpretation because the Einstein tensor itself does not have a direct geometric meaning; see <cit.> for a discussion.) §.§ Congruences near a minimum-radius hypersurface It is a common theme in effective approaches to loop quantum gravity that black-hole singularities may be resolved by the appearance of a minimum positive radius, determined by the phase-space variable E^x. In most cases, this outcome is simply an extension of results from quantum cosmology to black holes, using homogeneity of the Schwarzschild interior in order to transfer cosmological models to this situation. Heuristically, the minimum radius can then be interpreted as an effect of gravity becoming repulsive at large curvature when a possibly discrete nature of space may change the dynamics. However, the case of black holes is crucially different from cosmological models because it is more sensitive to coordinate or slicing changes going beyond time-reparametrizations: the homogeneous slicing of a Schwarzschild-type interior does not exist globally, and therefore cannot be interpreted as a preferred slicing of co-moving observers. Any physical statements such as a possibly repulsive behavior of gravity at large curvature, as well as associated effects of singularity resolution, are meaningful only if they can be shown to be covariant when different slicings are allowed. For instance, it is important to show that the position of the surface which replaces the classical singularity is invariant with respect to coordinate choices, something that is true for our reflection-symmetry hypersurface.
Most models of loop quantum gravity have been shown to fail this crucial condition, but emergent modified gravity is in a position to present a detailed and consistent analysis. Using this framework, we will demonstrate in this section that the geometry near the reflection-symmetry hypersurface is such that the dynamics of a collection of test particles is defocusing. We will carry out the analysis by studying the expansion rate θ, derived from the emergent space-time metric. As usual, this parameter describes infinitesimal changes in the cross-sectional area or volume for a congruence of null or timelike geodesics. Inside the horizon, both the ingoing and outgoing lightlike congruences are converging, θ<0, for any family of causally future-directed worldlines. Whether congruences are focusing or defocusing, depending on the geometry of space-time, is determined by the sign of dθ/ dψ, where ψ is the affine parameter along lightlike geodesics or proper time along timelike geodesics. §.§.§ Lightlike congruences We start by studying null congruences. The relationship between components of ingoing and outgoing radial null worldlines has previously been used as (<ref>) where v_t is constant along null geodesics. Here, we need a more detailed version that includes all possible sign choices. Starting with the null condition -1/N^2 v_t^2 +2N^x/N^2 v_tv_x+(q̃^xx-(N^x)^2/N^2)v_x^2=0 and setting v_t=-e equal to a constant, we obtain the radial component v^(s)_x= eN^x+sN√(q̃^xx)/N^2q̃^xx-(N^x)^2 = es/N√(q̃^xx)-s(N^x)^2 where s=±1 is the sign choice obtained when solving the quadratic equation. The sign of e, which may be normalized to e=±1 for null geodesics, also remains free at this point. We therefore have four different combinations of the signs of s and e, corresponding to the four directions of a light cone in two space-time dimensions. The sign freedom should be partially fixed by restricting attention to future-pointing directions. This property is determined by the time component v_s^t with a raised index, given by v_s^t=e/N^2+esN^x/N^2/N√(q̃^xx)-s(N^x)^2= e/N^2/1-s√(q̃_xx)N^x/N . The corresponding radial component with raised index equals v_s^x=-eN^x/N^2+es(q̃^xx-(N^x)^2/N^2)/N√(q̃^xx)-s(N^x)^2= es√(q̃^xx)/N . Our equations are based on the condition that q̃^xx and N are always positive. The expansion parameter of a congruence of null geodesics starting at the points of a sphere of some initial radius are obtained as θ_s = 1/√(-g)∂_μ(√(-g)v^μ_(i)) =es/√(q̃_xx)N(ln q_θθ)' with a radial derivative denoted by the prime. The second equality assumes a stationary coordinate system with time-independent metric components. In this case, the signs of the expansion parameters are determined by the product es, or by the sign of v_s^x. For further evaluation, we express this result entirely in terms of phase-space coordinates and the lapse function, independently of any gauge choice, by using q_θθ=E^x and Eq. (<ref>), in which we ignore the cosmological constant for the present analysis: θ_s=es χ(E^x)'/N E^φ√(E^x)√(1+λ(E^x)^2(1-2M/√(E^x))) . However, a discussion of relevant sign combinations of e and s requires a specific gauge. It is convenient to evaluate this expression in a gauge, such as GP, which is non-degenerate across the horizon. In this case, using (<ref>), the sign of v_s^t is determined by 1-s √(q̃_xx)N^x/N=1-s χ_∞/χ√(2M/x+χ^2/χ_∞^2-1) . This expression is always positive for s=-1, such that e=1 implies future-directed null geodesics. 
Since v_s^x<0 for any x, these are ingoing geodesics with negative expansion because es<0. For s=+1, (<ref>) changes sign at the Schwarzschild radius, x=2M, for any positive function χ(E^x). For x>2M, the expression is still positive and we have e=+1 for future-directed null geodesics. For x<2M, however, we need e=-1 for future direction, such that es=-1. Therefore, the expansion changes from positive values at x>2M to negative values at x<2M. We conclude that the modification functions preserve the classical extent of the trapped region. A complete expression for the expansion of interior null congruences in the GP gauge is given by θ_s=- 2χ_0/x√(1+λ(x)^2(1-2M/x)) , using the solutions for E^φ and N and assuming constant χ=χ_0 for the sake of simplicity. We can extract information about whether gravity focuses or defocuses these congruences by deriving an evolution equation for this expansion parameter, obtained from dθ_s/ dψ=v_s^x dθ_s/ dx where ψ is an affine parameter. The expression (<ref>) is independent of the sign choice of s, which appears in both factors on the right. For instance, in the classical Schwarzschild space-time, still evaluated in a GP gauge, we have dθ_s/ dψ=-2/x^2<0 which diverges as x→0. This means that both ingoing and outgoing congruences converge to the center if they are within the horizon. Quantum gravity is often expected to have some sort of defocusing effect near the Planck scale, which can happen if the expression in Eq. (<ref>) turns positive somewhere in the black-hole interior. (The defocusing effect may also be expressed as a repulsive behavior of modified gravity acting on test objects on our background vacuum solutions.) This outcome is indeed realized with our holonomy-modified metric. Without loss of generality, we express this evolution in the GP gauge: dθ_s/ dψ=2χ_0^2/x^2[-1+λ^2(x)(3M/x-1)-λ(x)λ'(x)(2M-x)] . The last two terms in the parenthesis can contribute to defocusing effects at sufficiently small x. Rewriting them as M/x(3λ^2- x (λ^2)')- 1/2(2λ^2- x(λ^2)')= -x^3(M d(λ^2/x^3)/ dx -1/2 d(λ^2/x^2)/ dx) , we see that the contribution proportional to M is most significant for small x. It can overcome the classical focusing behavior implied by the constant -1 in (<ref>) provided λ^2/x^3 decreases at the rate of 1/x^2, or faster. Therefore, λ(x) can increase at most by λ∝√(x). The two cases of constant λ and λ(x) falling off as 1/x are included in this range and will be discussed in detail in the following sections. Any congruence of null radial geodesics in spherically symmetric space-time will maintain its shape, such that its shear vanishes, σ_μν=0. It also crosses each hypersurface of constant radius in an orthogonal manner, such that ω_αβ=0. Therefore, in the Raychaudhuri equation, which through the geodesic deviation equation holds in any Riemannian geometry including an emergent one, only the self-coupling term θ^2 and curvature through the Ricci tensor contribute. The Ricci tensor contributes by its contraction with the components of v^μ_s, R_αβv^α_sv^β_s=-χ_0^2λ^2/x^3(2M+x(lnλ^2)'(x-2M)) , and therefore we obtain -1/2θ_s^2- R_αβv^α_sv^β_s=2χ_0^2/x^2[-1+λ^2(x)(3M/x-1)+λ(x)λ'(x)(x-2M)] in agreement with (<ref>). This result confirms that the Raychaudhuri equation indeed holds in unmodified form, emphasizing the importance of a reliable space-time geometry for solutions of a modified theory. Physically, there is an unambiguous conclusion that emergent Ricci curvature is responsible for defocusing congruences of light rays. 
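As a concrete consistency check of this statement, the short Python sketch below evaluates both sides of the null Raychaudhuri equation, dθ_s/dψ = -θ_s²/2 - R_αβ v^α_s v^β_s, using the GP-gauge expressions quoted above for the decreasing choice λ(x) = λ̃r̂/x, so that the λ'-terms are non-trivial. The numerical values of M, λ̃r̂ and the sample radius are illustrative only, and χ_0 is set to 1 as appropriate for λ_∞ = 0; this is a verification sketch, not part of the derivation.

```python
import numpy as np

# Both sides of the null Raychaudhuri equation dθ/dψ = -θ²/2 - R_{ab} v^a v^b,
# evaluated with the GP-gauge expressions quoted above for λ(x) = lt/x.
# M, lt and the sample radius x are illustrative; χ₀ = 1 since λ_∞ = 0.
M, lt = 1.0, 0.4
x = 1.3 * M                        # a radius inside the horizon

lam, dlam = lt / x, -lt / x**2     # λ(x) and λ'(x)
f = 1.0 + lam**2 * (1.0 - 2*M/x)
theta = -2.0/x * np.sqrt(f)                                    # expansion θ_s
dtheta_dpsi = 2.0/x**2 * (-1.0 + lam**2*(3*M/x - 1.0) - lam*dlam*(2*M - x))
ricci_vv = -lam**2/x**3 * (2*M + x*(-2.0/x)*(x - 2*M))         # (ln λ²)' = -2/x

print(dtheta_dpsi, -theta**2/2 - ricci_vv)    # the two numbers coincide
```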
§.§.§ Timelike congruences and their implications A similar computation can be performed for timelike congruences, with slightly longer expressions and fewer cancellations owing to an extra term from the normalization condition. Moreover, we keep e as a free real number in order to obtain a complete family of timelike geodesics rather than just two directions from e=±1 used in the lightlike case. Setting u_t=-e, normalization implies the radial component u_x^(s)= eN^x+s N√((1-N^2/e^2)q̃^xx+(N^x)^2/e^2)/N^2q̃^xx-(N^x)^2 . Velocity components with raised indices are then given by u^t_s = e/N^2-q̃_xx( N^x)^2(1+s √(q̃_xx)N^x/N√(1-N^2-q̃_xx(N^x)^2/e^2)) = 1/N^2e^2+q̃_xx(N^x)^2/e-s √(q̃_xx)N^xN^-1√(e^2-N^2+q̃_xx(N^x)^2) and u^x_s=se√(q̃^xx)/N√(1-N^2-q̃_xx(N^x)^2/e^2) . For massive particles that reach infinity with zero initial velocity, we have e=±1, where e=1 corresponds to a future-pointing worldline falling in from infinity. The sign of u_s^t is determined by the sign of e/N-s √(q̃_xx)N^xN^-1√(e^2/N^2-1+q̃_xx(N^x)^2/N^2) = X-sY√(X^2+Y^2-1) where we defined X=e/N and Y=√(q̃_xx)N^x/N>0. It can easily be seen that this expression can vanish only if Y^2=1. If Y<1, which corresponds to x>2M in the GP gauge, we have X>Y√(X^2+Y^2-1) for any X>0, and therefore u_s^t>0 if e>0 for both choices of s=±1. If Y>1, or x<2M in the GP gauge, we have X<Y√(X^2+Y^2-1) for any X, and therefore u_s^t can be positive only for e>0 and s=-1. There are only infalling timelike geodesics in this trapped region. The definition (<ref>) implies θ_s=se/√(q̃_xx)Nq_θθ∂_x(q_θθ√(1+(q̃_xx(N^x)^2-N^2)/e^2)) or θ_s=2 sgn(e) s χ_0 e^2-1+3M/(2x)/x√(e^2-1+2M/x)√(1+λ(x)^2(1-2M/x)) in the GP gauge. The rate of change relative to proper time τ is given by dθ_s/ dτ = -χ_0^2/x^4 (2 M+(e^2-1) x)(x (9 M^2+8 M (e^2-1) x+2 (e^2-1)^2 x^2)+. .λ (x)^2 (-24 M^3+M^2 (32-23 e^2) x-2 M (3 e^4-10 e^2+7) x^2+2 (e^2-1)^2 x^3). .-x (x-2 M) λ (x) (2 M+(e^2-1) x) (3 M+2 (e^2-1) x) λ '(x)) The Ricci tensor contributes to the Raychaudhuri equation for timelike congruences by R_μνu^μ_su^ν_s = -χ_0^2λ^2(x)(2M+x(x-2M)(lnλ^2)')(6M^2+(4e^2-7)Mx-2(e^2-1)x^2)/2(2M-x)x^4 which provides defocusing effects in a collection of massive particles. The long expressions above obscure the physical effects when considering a generic e. Hence, let us focus on the particular case where the collection of massive particles starts at rest from infinity (e=1), such that the evolution of their geodesics takes the simpler form dθ_s/ dτ=-9M χ_0^2/2x^3(1+λ^2(x)(1-2M/x))+3χ_0^2Mλ^2(x)/2x^4(x (x-2 M) (lnλ^2)'+2M) with R_μνu^μ_su^ν_s = - 3Mχ_0^2λ^2 (x)/2x^4(x (x-2 M) (lnλ^2)'+2M ) Unlike null geodesics, any cross-section of a timelike congruence is a 3-dimensional spatial hypersurface. Therefore, moving from one hypersurface to another may imply non-vanishing shear. Non-zero effects of shear can be computed in the GP-gauge to provide σ_μνσ^μν = 6M χ_0^2/2x^3(1+λ^2(x)(1-2M/x)) . With this result, we can rewrite equation (<ref>) as Raychaudhuri's equation, dθ_s/ dτ=-1/3θ_s^2-σ_μνσ^μν- R_μνu^μ_su^ν_s Both cases of geodesics show that the transition from focusing to defocusing is captured by space-time curvature through the emergent Ricci tensor. We can formalize this behavior by using our purely geometric conditions (<ref>) and (<ref>): * Null geometric condition: Substituting the Ricci tensor contracted with tangent vectors to null geodesics, (<ref>), into the NGC (<ref>), we obtain -λ^2 χ_0^2/x^3(2M-x(lnλ^2)'(2M-x)) ≥ 0 . 
Our vacuum model clearly violates the condition everywhere inside the horizon x<2M for any constant or monotonically decreasing λ. * Timelike geometric condition: Substituting the Ricci tensor contracted with tangent vectors to timelike geodesics falling in from some reference point, (<ref>), into the TGC (<ref>) we obtain -χ_0^2λ^2 (x) (3 M+2 (e^2-1) x)/2x^4(2M-x(lnλ^2)'(2M-x)) ≥ 0 . This condition is violated everywhere in the interior for any constant or monotonically decreasing λ. A similar analysis can be done near the hypersurface of maximum radius. However, a maximal radius does not appear for asymptotically vanishing holonomy functions h(x)=λ(E^x)/λ̃. It may therefore be interpreted as an undesired feature of the asymptotically constant (or increasing) case because it implies strong modifications in low-curvature regimes. The μ̅-type scheme of λ∝ 1/√(E^x), in particular, does not have a hypersurface of maximum radius. We will therefore forego details of an analysis of repulsive effects beyond a maximal radius. §.§ The net stress-energy tensor and gravitational energy We define the net stress-energy tensor T̅_μν≡ T_μν - G_μν + Λ g_μν/8π , where T_μν is the matter stress-energy tensor (if present, and derived canonically from matter contributions to the constraints) and G_μν is the Einstein tensor (<ref>) of the emergent space-time metric. In the classical limit, where the Einstein equations hold, the net stress-energy is always zero. But in general it is non-zero in a modified gravity theory (not necessarily given by some effective version of LQG). In particular, in vacuum such that T_μν=0, the net stress-energy tensor has purely gravitational contributions. It inherits the important property of a covariant conservation law, ∇^μT̅_μν = - ∇^μ ( G_μν + Λ g_μν)/(8π) = 0, from the Bianchi identities, provided the Einstein tensor is indeed well-defined and covariant. We may refer to this contribution as local gravitational stress-energy, which has only non-classical contributions. This new concept is only defined relative to general relativity and, to the best of our knowledge, does not have an independent physical motivation. Nevertheless, we will show that it leads to several statements about modified gravity and new interpretations that are intuitively meaningful. One possible application of this new concept is a reformulation of the energy conditions. However, as we will see, these conditions do not play an important role in singularity resolution when the classical Einstein equations do not hold. We will therefore not consider them as strict conditions on possible modifications, but various signs in these expressions are still of importance in interpretations related to energy. With this in mind, we make the following definitions: * Null energy signature (NES): σ_ NES= sgn(T̅_αβv^α_(i)v^β_(i)) for any null v^μ_(i) . * Timelike energy signature (TES): σ_ TES= sgn(T̅_αβu^α_(i)u^β_(i)) for any timelike u^μ_(i) . As mentioned earlier, the interpretation of the physical form of the SEC is that of gravity being attractive for the sign imposed by this condition. As an energy condition, this interpretation is available only if Einstein equations hold, and therefore we will not consider its analog as an energy condition in a modified theory. The Ricci scalar in our models is given by R = 2 χ_0^2/x^4( λ_∞^2 x^2 - λ^2 (x^2-3 M^2) - x/2 (λ^2)' (x-2 M) (2 x-3 M)) in an asymptotically flat space-time with a suitable value of the constant λ_∞ as discussed in Section <ref>. 
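As a quick sanity check of this expression, the sketch below evaluates the asymptotically flat Ricci scalar for the two holonomy functions studied later and compares it with the scheme-specific closed forms quoted in the corresponding sections, 3M x_λ̃/x⁴ for constant λ and 2Δ̃/x⁴ (1 - 7M/x + 9M²/x²) for λ² = Δ̃/x². All numerical values are illustrative.

```python
import numpy as np

# Evaluate R = (2χ₀²/x⁴)(λ_∞²x² - λ²(x²-3M²) - (x/2)(λ²)'(x-2M)(2x-3M))
# for the two schemes used below and compare with the closed forms quoted there.
# M, λ̃, Δ̃ and the sample radius are illustrative values.
M, x = 1.0, 7.0

# constant λ = λ̃ (so λ_∞ = λ̃ and (λ²)' = 0):
lt = 0.5
chi0_sq = 1.0/(1.0 + lt**2)
R_const = 2*chi0_sq/x**4 * (lt**2*x**2 - lt**2*(x**2 - 3*M**2))
x_min = 2*M*lt**2/(1.0 + lt**2)
print(R_const, 3*M*x_min/x**4)                        # agree

# decreasing λ² = Δ̃/x² (so λ_∞ = 0, χ₀ = 1, (λ²)' = -2Δ̃/x³):
Dt = 0.8
R_dec = 2.0/x**4 * (-(Dt/x**2)*(x**2 - 3*M**2)
                    - 0.5*x*(-2*Dt/x**3)*(x - 2*M)*(2*x - 3*M))
print(R_dec, 2*Dt/x**4 * (1 - 7*M/x + 9*M**2/x**2))   # agree
```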
By definition of the net stress-energy tensor, the NES (<ref>) has the opposite sign of the left-hand side of the NGC (<ref>) in vacuum. Using a family of null geodesics, we obtain σ_ NES= sgn (M/x-(lnλ)'(2M-x)) as the NES by simply flipping the sign of (<ref>) and removing positive factors. Using a family of timelike geodesics at rest at infinity for the TES (<ref>) we obtain σ_ TES= sgn ( 1- λ_∞^2/λ^2 - 2(lnλ)' (2 M-x) ) using (<ref>) and (<ref>). Both signatures are positive in the interior x<2M for monotonically decreasing λ. For constant λ, (<ref>) remains positive while (<ref>) vanishes. Using the concept of a net stress-energy tensor, the NGC and TGC are violated, while the NES and TES are non-negative near the hypersurface of minimum radius. If we do not restrict these expressions to the interior, we can obtain varying signs. For constant λ, we have σ_ NES=+1 for any x, while σ_ TES=0. For λ=λ̃/x, we have σ_ NES= sgn(3M/x-1) which is negative for x>3M, and σ_ TES= sgn(4M/x-1) which is negative for x>4M. There is therefore negative net stress-energy in most of the exterior region in a μ̅-type scheme. We interpret this outcome as an additional gravitational binding energy compared with the classical behavior. This interpretation is compatible with the acceleration equations (<ref>) and (<ref>) for null and timelike congruences, where the λ-contributions strengthen the classical negative term for large x≫ M. In the interior, however, the defocusing behavior is dominant, turning the signs of the acceleration equations as well as the signatures of net stress-energy. § EXTERIOR PHYSICS Different holonomy schemes qualitatively result in a similar mechanism of singularity-resolution. A suitable regime for interesting physical effects that can distinguish between different choices of λ(x) is therefore primarily given by the exterior geometry, described by the emergent metric (<ref>). For now we consider exterior effects around a black hole, but far from any cosmological scales. We therefore continue to assume a vanishing cosmological constant. We will focus on the asymptotically flat modified Schwarzschild metric with a constant overall factor χ, given by (<ref>), such that d s^2 = - (1 - 2M/x) d t^2 + d x^2/χ_0^2( 1 + λ^2 ( 1 - 2M/x ) )(1 - 2M/x) + x^2 dΩ^2 with χ_0=1/√(1+λ(∞)^2) . §.§ Newtonian limit Using the coordinate transformation x = r (1+ M/(2r))^2, which in the case of the classical solution corresponds to the isotropic radial coordinate, the metric (<ref>) becomes d s^2 = - (1-M/2r/1+M/2r)^2 dt^2 + (1+M/2r)^4 ( ( 1 + λ(x)^2 (1-M/2r/1+M/2r)^2 )^-1 d r^2/χ_0^2 +r^2 dΩ^2) . In this form, it is easier to derive the weak-field regime defined by M≪ r and λ^2 ≪ 1, d s^2 ≈ - (1-2M/r) dt^2 + (1+2M/r) ( ( 1 - λ(x)^2 (1-2M/r) ) d r^2/χ_0^2 +r^2 dΩ^2) , where x ≈ r + M in the argument of λ(x). Defining the new radial coordinate 𝔯(r) = r_0exp( ∫_r_0^r dr̃/χ_0 r̃√( 1 - λ(r̃+M)^2 (1-2M/r̃))) with a reference radius r_0 where 𝔯(r_0)=r_0, the metric recovers its isotropic form in the weak-field regime, d s^2 ≈ - (1-2M/r(𝔯)) dt^2 + (1+2M/r(𝔯)) B(𝔯) ( d𝔯^2 + 𝔯^2 dΩ^2) . The function B(𝔯) results from the inversion of (<ref>) to r^2 = B(𝔯) 𝔯^2. An explicit integration of (<ref>) and hence B(𝔯) depends on the chosen holonomy function λ(x)/λ̃. For our approximations of weak-field gravity and small holonomy modifications, it can be written as 𝔯≈ r_0exp(1/χ_0∫_r_0^r 1-1/2λ(r̃+M)^2/r̃ dr̃) ≈ r_0(r/r_0)^1/χ_0exp(-1/2χ_0∫_r_0^r λ(r̃)^2/r̃ dr̃) . 
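The closed-form integrations quoted in the next paragraph are easy to verify numerically. The sketch below integrates the approximate expression above for λ(r) = λ̃/r and compares exp of the integral with the corresponding closed form; λ̃, r_0 and r are illustrative, and χ_0 = 1 is used since λ_∞ = 0 for this fall-off.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the weak-field isotropic radius: integrate the approximate
# integrand (1 - λ(r)²/2)/(χ₀ r) above for λ(r) = λ̃/r and compare with the
# closed form quoted in the next paragraph. Illustrative values throughout.
lt, r0, r = 0.3, 10.0, 40.0
chi0 = 1.0

integral, _ = quad(lambda rr: (1.0 - 0.5*(lt/rr)**2)/(chi0*rr), r0, r)
frak_r_num = r0*np.exp(integral)
frak_r_closed = r0*(r/r0)**(1/chi0)*np.exp(-lt**2/(4*r0**2*chi0)*(1 - r0**2/r**2))
print(frak_r_num, frak_r_closed)      # agree to quadrature accuracy
```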
The shift by M does not contribute because λ(r+M)^2≈λ(r)^2+2Mλ(r)λ'(r) has a subleading contribution from M for M≪ r and λ^2≪1. For constant λ=λ̃, we have 𝔯(r)≈ r_0(r/r_0)^(1-1/2λ̃^2)/χ_0 , and for λ(r)=λ̃/r with constant λ̃ we obtain 𝔯(r)≈ r_0(r/r_0)^1/χ_0exp(-λ̃^2/4r_0^2χ_0(1-r_0^2/r^2)) . Condition (<ref>) for asymptotic flatness, applied for small λ, implies that χ_0≈ 1-1/2λ̃^2 for constant λ, and χ_0=1 for λ(r)=λ̃/r. Since the exponential function in (<ref>) quickly approaches a constant for large r, 𝔯 is close to a linear function, as in the classical solution. §.§ Geodesic properties The strict covariance conditions imposed on our effective space-time models make it possible to derive reliable relativistic effects from properties of timelike and lightlike geodesics, going beyond the Newtonian limit. §.§.§ Timelike geodesics and effective potential We first analyze worldlines of massive objects in the effective space-time. Because the effective space-time is static and spherically symmetric, we have corresponding Killing vectors and conserved quantities Ẽ and L̃ related to the energy and angular momentum (per mass) of the particle. Using these conserved quantities and the lapse function in the space-time metric, we formulate the normalization condition g_μν u^μ u^ν = - κ for the particle's 4-velocity u^μ. For the sake of convenience, we include both massive and massless options by the parameter κ=0,1: 0 = 1/2( d x/ dτ)^2 + q̃^xx/2( κ + L̃^2/x^2 - Ẽ^2/N^2) . Here, τ is the particle's proper time (to be replaced by the affine parameter in the massless case). This equation can be interpreted as an energy-balance law between kinetic and potential energy of classical 1-dimensional motion, resulting in the effective potential (per unit mass) Ṽ_ eff = q̃^x x/2( κ - Ẽ^2/N^2 + L̃^2/x^2) = χ_0^2/2( 1 + λ^2 ( 1 - 2 M/x) ) ( ( κ + L̃^2/x^2) (1 - 2 M/x) - Ẽ^2 ) . The relevant metric components are given by (<ref>). We will also use the first derivative of the potential, ∂Ṽ_ eff/∂ x = χ_0^2/x^5[ x (3 L̃^2 M-L̃^2 x+κ M x^2) - λ^2 (L̃^2 (8 M^2-6 M x+x^2)+M x^2 (4 κ M+Ẽ^2 x-2 κ x)) + (λ^2)' x/2 (2 M-x) (L̃^2 (2 M-x)+x^2 (2 κ M+Ẽ^2 x-κ x)) ] where λ'= dλ/ d x. Circular orbits are characterized by ∂Ṽ_ eff / ∂ x = 0 and d x / dτ =0. Inserting these conditions into equations (<ref>) and (<ref>), respectively, gives us two equations that relate the energy and angular momentum: Ẽ^2 = (1-2 M/x) (L̃^2/x^2+κ) and Ẽ^2λ^2 (M/x + (1-2M/x) dlnλ/ dln x)= 3ML̃^2/x^3-L̃^2/x^2 + κ M/x + λ^2(1-2 M/x) ((4L̃^2 M/x^3-L̃^2/x^2 + 2κ M/x^2) + (1-2 M/x)^2 (L̃^2/x^2+κ) dlnλ/ dln x) . There is a qualitative difference in these equations compared with the classical case, in which case the left-hand side of (<ref>) vanishes. The right-hand side then determines L̃, which then provides Ẽ using (<ref>). For non-zero λ, (<ref>) depends on both L̃ and Ẽ, but Ẽ can easily be eliminated by using (<ref>). The terms with derivatives of λ then cancel out completely, and the resulting equation for L̃ reads (x + λ^2 (x-2 M)) (3 L̃^2 M-L̃^2 x+κ M x^2) = 0 . The first parenthesis by definition vanishes at an extremal radius x_λ^(i) (solving (<ref>) for Λ=0), which never happens in the static exterior region as we have shown. The second parenthesis must therefore vanish for circular orbits, which is independent of λ and implies the classical stationary radius x_0^± = L̃^2/2 M κ(1±√(1- κ12 M^2/L̃^2)) . For the null case, κ=0, only x_0^- is finite, given by the value 3M derived by taking the limit κ→0. 
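Because the complete cancellation of λ-terms in the circular-orbit condition may seem surprising, a short symbolic check is sketched below (sympy, κ = 1): the effective potential is differentiated for an arbitrary function λ(x), Ẽ² is fixed by the first circular-orbit relation above, and the classical value of L̃² is inserted. The symbol names are ours, not the paper's; this is only an illustrative verification.

```python
import sympy as sp

# ∂V_eff/∂x vanishes at the classical circular-orbit radius for any λ(x):
# insert Ẽ² = (1 - 2M/x)(L̃²/x² + 1) and the classical relation
# L̃² = M x²/(x - 3M), then simplify. κ = 1 (massive particle).
x, M, L, chi0, E2 = sp.symbols('x M L chi0 E2', positive=True)
lam = sp.Function('lam')

V = sp.Rational(1, 2)*chi0**2*(1 + lam(x)**2*(1 - 2*M/x)) \
    * ((1 + L**2/x**2)*(1 - 2*M/x) - E2)
dV = sp.diff(V, x)

dV_circ = dV.subs(E2, (1 - 2*M/x)*(L**2/x**2 + 1)) \
            .subs(L**2, M*x**2/(x - 3*M))
print(sp.simplify(dV_circ))           # 0, independently of lam(x)
```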
At this radial coordinate, we have ∂^2 Ṽ_ eff/∂ x^2|_x_0^± = κ^4 χ_0^2 32 M^4/L̃^6(1±√(1-κ12 M^2/L̃^2))^-6[ - 6 κM^2/L̃^2 + κλ^2 6 M^2/L̃^2( κ4 M^2/L̃^2 - 1) + ( 1 - κ6 M^2/L̃^2 + λ^2 ( 1 - κ8 M^2/L̃^2) ) ( 1 ±√(1-κ12 M^2/L̃^2)) ] . As in the classical case, only x_0^+ is a local minimum of the potential for a massive particle, while x_0^- is a local maximum for both massive and massless particles. §.§.§ Nearly-circular orbits of massive objects For nearly-circular orbits we consider only the case of massive objects and hence set κ=1. It will be useful to invert the relation (<ref>) for the angular momentum in terms of the equilibrium coordinate x_0=x_0^+: L̃^2 = M x_0^2/x_0-3 M . If the object is displaced slightly from its equilibrium radius, it oscillates around it with frequency ω_r^2 = ∂^2 Ṽ_ eff/∂ x^2|_x=x_0, κ=1 = M/x_0^3x_0-6M/x_0-3Mχ_0^2 ( 1 + λ^2 (1 - 2 M/x_0)) = ω_φ^2 (1-6M/x_0) χ_0^2 ( 1 + λ^2 (1 - 2 M/x_0)) where ω_φ is the angular frequency of the orbit and given by ω_φ^2 = L̃^2/x_0^4 = M/x_0^2 (x_0-3 M) . Therefore, the precession rate for nearly-circular orbits equals ω_p = ω_φ - ω_r = ( 1 - χ_0 √(1-6M/x_0)√(1 + λ^2 (1 - 2 M/x_0))) ω_φ . §.§.§ Null rays: Deflection angle and redshift The effective potential (<ref>) for a massless particle is Ṽ_ null = χ_0^2/2( 1 + λ^2 ( 1 - 2 M/x) ) ( b^2/x^2(1 - 2 M/x) - 1 ) Ẽ^2 , where b ≡L̃ / Ẽ is the impact parameter. Using L̃ = x^2 ϕ̇ we can obtain dϕ / d x = ϕ̇ / ẋ from (<ref>) with κ=0, dϕ/ d x = - 1/χ_01/x^2( 1 + λ^2 ( 1 - 2 M/x) )^-1/2( 1/b^2 - (1 - 2 M/x) 1/x^2)^-1/2 . For a null ray that is not captured by the black hole, its turning point is given by the largest solution x_ tp to V_null (x_ tp) = 0: x_ tp^2/b^2 = 1 - 2 M/x_ tp . The complete change in the angular coordinate can then be derived as Δϕ = 2/χ_0∫^∞_x_ tp d x/ x^2√(( 1 + λ^2 ( 1 - 2 M/x ) ) ( 1/b^2 - (1 - 2 M/x)/x^2 )) - π = 2/χ_0∫_0^1/x_ tp d u/√(( 1 + λ^2 ( 1 - 2 M u ) ) ( (1 - 2 M x_ tp^-1) x_ tp^-2 - (1 - 2 M u ) u^2 )) - π , where we used the substitution u=1/x to obtain the second line. As usual, the subtraction of π expresses the result relative to the change of angle in flat space. This integral is complicated in analytical form, even in the classical case λ→0. We therefore consider the expression at leading order in M/x_ tp, Δϕ ≈ π( χ_0^-1 -1) - 2 x_ tp/χ_0∫_0^1/x_ tp d u λ^2/2√(1-u^2 x_ tp^2) + 2 M/χ_0∫_0^1/x_ tp d u 1 + u x_ tp (1+u x_ tp) + λ^2 (1 + 2 u x_ tp (1+u x_ tp))/(1+u x_ tp) (1-u^2 x_ tp^2)^1/2(1+λ^2)^3/2 . The deflection angle, as well as the effective potential and the force it implies on massive objects, can be used for standard derivations of physical scenarios once we have chosen a holonomy function h=λ/λ̃. In the next two sections, we will do this in the cases of constant h and a decreasing h(x)∝ 1/x, respectively. In addition, one could also consider redshift effects, but we now show that there are no λ-modifications in this effect for radial null rays. A stationary observer is described by a 4-velocity u^μ∂_μ = u^t ∂_t. Timelike normalization implies u^μ∂_μ = ∂_t/√(- g_t t) = (1 - 2 M/x)^-1/2∂_t . Such an observer, located at a radial position x, observes the frequency ω (x) = - g_μν k^μ u^ν|_x = √(- g_t t) k^t = √(1 - 2 M/x) k^t , of a light ray with wave 4-vector k^μ. Completing u^ν to a local inertial frame in the radial manifold by including the normalized spacelike vector ζ^μ∂_μ = ∂_x/√(q̃_x x) , the observed radial wave-number is k (x) = g_μν k^μζ^ν|_x = χ_0^-1( 1 + λ(x)^2 ( 1 - 2 M/x) )^-1/2(1 - 2 M/x)^-1/2 k^x . 
The lightlike condition g_μν k^μ k^ν = 0 implies k (x) = ±ω (x) in the orthonormal frame, where the positive sign determines properties of outgoing light rays. If the light ray was emitted near the horizon at x = x_ e≳ 2 M with frequency ω_ e, as measured by a stationary observer at x_ e, and follows a null geodesic, conservation of k_t = g_t t k^t implies that it is simply subject to the classical redshift ω_ o/ω_ e = √(1 - 2 M / x_ e/1 - 2 M / x_ o) , where ω_ o is the frequency observed by a stationary observer at x = x_ o. We therefore see no holonomy modifications to gravitational redshift, for arbitrary λ. The underlying reason for this result is that the component g_tt of the exterior metric (<ref>) has the same form as the classical component in Schwarzschild space-time. Only the spatial metric of the emergent line element is directly changed for a modified structure function, and the static gauge implies that λ-modifications do not affect conditions on the lapse function. For the same reason, gravitational time-dilation, for instance for an observer in a geostationary orbit at some distance R from the Earth with radius r_ E, retains the classical expression Δτ = Δ t √(1-2M/r_ E/1-2M/(r_ E+R)) , devoid of any holonomy corrections for arbitrary λ. §.§ Energy and thermodynamics Physical properties of our space-time, rather than of objects moving on this background, are determined by curvature tensors. We will first derive relevant ones and then apply them to various considerations related to energy and thermodynamics. §.§.§ Curvature tensors The metric (<ref>) is static and therefore has the Killing vector ξ^μ∂_μ = ∂_t. The unit normal vector of the foliation given by constant t equals n^μ∂_μ = (1 - 2 M/x)^-1/2∂_t = (1 - 2 M/x)^-1/2ξ^μ∂_μ . The Einstein tensor for an emergent metric of the form (<ref>) with arbitrary λ takes the form G_μν d x^μ⊗ d x^ν = - χ_0^2/x^2(1 - 2M/x) ( 1 - λ_∞^2/λ^2 -4 M^2/x^2 + x (lnλ^2)' (1-2 M/x)^2 ) d t^2 + λ^2/x^2(1-2 M/x)^-1(1+λ^2 (1-2 M/x))^-1( 1 - λ_∞^2/λ^2 - 2M/x) d x^2 + χ_0^2 λ^2/2(1-M/x) (2 M/x + x (lnλ^2)' (1-2 M/x)) dΩ^2 , where λ'=∂λ/∂ x. Each symmetric 2-sphere has a normal vector r^μ∂_μ = √(q̃^xx)∂_x , in space, such that g̃_μν r^μ r^ν=1. With this information, we can compute the extrinsic-curvature tensor of the spheres to find 𝒦^(S)_μν d x^μ d x^ν ≡ (1/2ℒ_r q̃_μν) d x^μ d x^ν = √(q̃^xx) x dΩ^2 = χ_0 x √(1 + λ^2 ( 1 - 2 M/x)) √(1 - 2 M/x) dΩ^2 , with trace given by 𝒦^(S) = 2 χ_0/x√(1 + λ^2 ( 1 - 2 M/x)) √(1 - 2 M/x) . §.§.§ Geometric conditions, net stress-energy tensor, and gravitational energy The null and timelike conditions, (<ref>) and (<ref>), are satisfied at sufficiently large distances for any λ(x) that falls off at least as 1/√(x). In general, for any λ(x)= a x^-n, the NGC is satisfied for x≥ 2 M (n+1/2)/n, which lies outside the horizon and gets closer to it for large n. Violations of the geometric conditions therefore generically start somewhere in the exterior. The corresponding coordinate value can be interpreted as the location where quantum effects become significant. The special case of constant λ may be included in this parameterization as the limit n→ 0, in which case the NGC is violated everywhere. 
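The onset radius quoted above for power-law fall-offs can be located numerically from the contracted Ricci tensor of the null congruences. The sketch below does this for a few exponents n; the amplitude a and mass M are illustrative, and the positive overall factor χ_0² is dropped since only the sign matters.

```python
import numpy as np
from scipy.optimize import brentq

# Onset radius of the null geometric condition for power-law functions
# λ(x) = a x^(-n): locate the sign change of R_{ab} v^a v^b (the positive
# factor χ₀² is dropped) and compare with 2M(n + 1/2)/n as stated above.
M, a = 1.0, 0.2

def ricci_vv(x, n):
    lam2 = (a*x**(-n))**2
    dlnlam2 = -2.0*n/x                        # (ln λ²)'
    return -lam2/x**3 * (2*M + x*dlnlam2*(x - 2*M))

for n in [0.5, 1.0, 2.0]:
    x_onset = brentq(lambda xx: ricci_vv(xx, n), 2*M*(1 + 1e-9), 50*M)
    print(n, x_onset, 2*M*(n + 0.5)/n)        # the two radii coincide
```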
In Schwarzschild coordinates, the net stress-energy tensor, defined by (<ref>) and using the Einstein tensor defined by (<ref>) for the emergent space-time metric, takes the form 8πT̅_μν d x^μ⊗ d x^ν = χ_0^2λ^2/x^2(1 - 2M/x) ( 1 - λ_∞^2/λ^2 -4 M^2/x^2 + x (lnλ^2)' (1-2 M/x)^2 ) d t^2 - λ^2/x^2(1-2 M/x)^-1(1+λ^2 (1-2 M/x))^-1( 1 - λ_∞^2/λ^2 - 2M/x) d x^2 - χ_0^2 λ^2/2(1-M/x) (2 M/x + x (lnλ^2)' (1-2 M/x)) dΩ^2 , using (<ref>). In the exterior, x>2M, and for monotonically decreasing λ, we obtain T̅_xx<0 everywhere, while T̅_tt and T̅_φφ potentially change sign at some point, leading to violations of the energy conditions. The energy density measured by a static observer with 4-velocity given by (<ref>) equals ρ≡T̅_μνu^μ u^ν = χ_0^2λ^2/x^2( 1 - λ_∞^2/λ^2 -4 M^2/x^2 + x (lnλ^2)' (1-2 M/x)^2 ) . For monotonically decreasing λ, this expression is negative at sufficiently large values of x where the last term dominates. This shows that, unlike in the interior where the gravitational energy density is positive, in the exterior at large scales it does act as a binding energy for these observers. (The sign of the energy depends on the observers. In particular, for comoving observers the interior energy density is proportional to, and of the same sign as, the component T̅_xx of (<ref>) which is negative.) §.§ Black-hole thermodynamics A useful energy quantity in general relativity is the (normalized) Brown-York quasilocal energy as seen by observers characterized by a vector t̂^μ=N̂n^μ+N̂^xs^μ_x, E = - 1/8 π∫ d^2 z N̂( √(σ)𝒦^(S) - √(σ̅)𝒦̅^(S)) . The integration is over a 2-dimensional surface with coordinates z and the induced 2-metric σ, and 𝒦^(S) is the trace of extrinsic curvature on the 2-sphere. The barred quantities are evaluated in Minkowski space-time The first steps of the derivation can be reproduced in a canonical setting, which we show here. However, it turns out that a crucial ingredient requires an extension of canonical variables into a timelike direction, which is not available in emergent modified gravity without using a specific solution. Brown-York quasilocal energy therefore cannot be defined by an independent derivation within emergent modified gravity. Instead, we will consider the classical geometrical expression (<ref>) as a net quasilocal energy in an emergent space-time, just as we used the classical Einstein tensor in our definition of net stress-energy. As for the derivation of the Brown-York expression, consider the classical Hamiltonian action over a space-time region R, S [N,N⃗,q,p] = ∫_ R d^4x (p^abq̇_ab - H N - H_aN^a) where p^ab=(16π )^-1√( q)(K^ab-Kq^ab) is the momentum conjugate to the 3-metric q_ab, and K_ab is the extrinsic curvature with trace K=K_abq^ab. The boundary contribution associated to the first term in the action is S_∂ R = ∫_∂ R d^3x p^abq_ab = - 1/8π∫_∂ R d^3x √(| q^(∂ R)|) K where q^(∂ R) is the 3-metric of the boundary ∂ R, which in general may be spacelike or timelike. We obtain the Brown-York term by restricting S_∂ R to the boundary of the (non-smooth) boundary ∂∂ R, S_∂∂ R = - 1/8π∫_∂∂ R d^2z N̂√(|σ|) K . Here, σ is the 2-metric and z are coordinates of the 2-dimensional surface ∂∂ R, while N̂ is the lapse function of the chosen observer which is normal to ∂ R. In triad variables we obtain the same boundary term since S_∂ R = ∫_∂ R d^3x (E^φ K_φ + E^x K_x) = - 1/8π∫_∂ R d^3x √(| q^(∂ R)|) K . This derivation shows that K is directly derived from a linear combination of canonical variables. 
In emergent modified gravity, K is therefore not given by extrinsic curvature of the emergent space-time. The canonical variables K_x and K_φ are readily available on spacelike parts of the boundary ∂ R, but not on the timelike components if there is no fundamental space-time theory with a corresponding action principle. However, the timelike extension is precisely what is required for a derivation of the Brown–York quasilocal energy, making it impossible to derive a strict analog of (<ref>). The final restriction to ∂∂ R is spacelike, and may be expressed in the same way as (<ref>) in emergent modified gravity with the trace K of the canonical variables. However, the presence of a crucial gap in the derivation suggests that such a definition may be of limited value. In what follows, we therefore proceed by using the classical result (<ref>) as the net quasilocal energy, defined geometrically through extrinsic curvature in the emergent space-time. The barred functions are the corresponding quantities in the ground state which in classical general relativity is given by Minkowski space-time. But, as discussed in the previous subsection, this may no longer be the case in a modified theory. We therefore choose our ground state as the space-time obtained in the zero-mass limit. We are interested in the net quasilocal energy defined by 2-spheres enclosing the black hole. Thus, the net quasilocal energy in the holonomy-modified model for observers with N̂=1 and N̂^x=0 is given by E (x) = x χ_0 ( √(1 + λ^2) - √(1 - 2 M/x)√(1 + λ^2 ( 1 - 2 M/x))) , using (<ref>), and χ_0=1/√(1+λ_∞^2) for asymptotically flat solutions. This expression is everywhere positive and monotonically decreasing for λ'≤ 0. Since we are considering vacuum solutions, we conclude that the gravitational field itself has an energy contribution. In the asymptotic limit of x →∞, the energy acquires a correction to its classical value, given by the mass, unless the holonomy parameter is asymptotically vanishing, lim_x→∞ E(x) = M (1+λ_∞^2/1+λ_∞^2) . A static observer has the 4-acceleration a^μ∂_μ = χ_0^2 2 M/x^2( 1 + λ^2 ( 1 - 2 M/x) ) ∂_x , with norm a = √(g̃_μν a^μ a^ν) = √(q̃_x x (a^x)^2) = χ_0^2 2 M/x^2(1 - 2 M/x)^-1/2( 1 + λ^2 ( 1 - 2 M/x) )^1/2 , subject to holonomy modifications. We perform a near-horizon expansion at x = 2 M + ρ^2 / 8 M = 2 M (1 + ρ^2) for small ρ: a = χ_0^2/2 M ρ(1 - 3/2(1 - λ_ H^2/3) ρ^2 + O(ρ̅^4) ) where λ_ H = λ(2 M). The modified Schwarzschild metric (<ref>) to leading order in ρ becomes d s^2 = - ρ^2 (1-ρ^2) d t^2 + 1 - λ_ H^2 ρ^2 + λ_ H^4 ρ^4/1-ρ^2 (4 M)^2 dρ^2/χ_0^2 + x^2 dΩ^2 = - ρ^2 (1-ρ^2) d t^2 + 1 - λ_ H^2 ρ^2 + λ_ H^4 ρ^4/1-ρ^2(4 M)^2/χ_0^2 dρ^2 + x^2 dΩ^2 = (4 M)^2/χ_0^2( - ρ^2 dτ^2 + dρ^2 ) + d X_⊥^2 , where τ = χ_0 t / (4 M), and X_⊥ are 2-dimensional local coordinates on the spheres. This is simply the Rindler metric up to a constant conformal scaling. Thus, we do not see any major local modifications to leading order in ρ̅, for arbitrary λ. The black-hole temperature is therefore classical near the horizon, T = 1/4 π√(2 M x (1-2 M/x)) . Since the space-time is static, it must be in thermal equilibrium, and the temperature away from the horizon is related to the horizon temperature through redshift. Thus, for some x̅ > x, T (x̅) = 1/4 π√(2 M x (1-2 M/x̅))≈1/8 π M(1-2 M/x̅)^-1/2 . Taking x̅→∞, we recover the Hawking temperature T_ H≡ T (∞) = 1 / (8π M). Let us now consider an observer at a fixed coordinate x̅. 
A change δ M in the mass will then change the net quasilocal energy (<ref>) by δ E (x̅) = χ_0 (1 - 2 M/x̅)^-1/2( 1 + 2 λ^2 ( 1 - 2 M/x̅) ) (1 + λ^2 ( 1 - 2 M/x̅))^-1/2δ M = T (x̅) 8 π M χ_0 ( 1 + 2 λ^2 ( 1 - 2 M/x̅) ) (1 + λ^2 ( 1 - 2 M/x̅))^-1/2δ M , where we used the temperature (<ref>) and have assumed that λ is independent of the mass. Using the thermodynamic analog δ E = T δ S, we find that the entropy seen by a static observer at x̅ is given by S (x̅) = ∫_0^M 8 π M' χ_0 ( 1 + 2 λ^2 ( 1 - 2 M'/x̅) ) (1 + λ^2 ( 1 - 2 M'/x̅))^-1/2 d M' = χ_0 8 πx̅^2/15 λ^4( √(1 + λ^2 (1-2M/x̅))( 3 + λ^2 ( 1 + 3 M/x̅) - 2 λ^4 ( 1 + M/x̅ - 6 M^2/x̅^2) ) - √(1 + λ^2)( 3 + λ^2 - 2 λ^4 ) ) , for arbitrary λ independent of the mass. Notice that the entropy is independent of the reference ground state used in the definition of the quasilocal energy (<ref>) because this ground state does not depend on the mass (as long as λ is mass independent) and hence does not enter the thermodynamic relation δ S = T δ E. In the zero-mass limit, the entropy is vanishing S|_M→0=0 and is a minimum δ S / δ M|_M→0=0. Therefore, no work can be extracted from the gravitational ground-state energy. The entropy (<ref>) has the correct classical limit, S (x̅) π (2 M)^2 = A_ H/4 = S_ BH , which is constant outside the event horizon. Therefore, all the information is contained inside the black hole. The general expression of the entropy, however, is not homogeneous. The asymptotic limit of the entropy in general depends on the asymptotic behavior of λ, S (∞) = (1 + λ_∞^2 + O (λ_∞^4) ) S_ BH , and hence may not correspond to the Bekenstein-Hawking entropy unless λ has a vanishing asymptotic limit. Finally, entropy in general does not acquire the classical expression at the horizon, S (2M) = 8 χ_0/15 λ_ H^4( 3 + 5/2λ_ H^2 - √(1 + λ_ H^2)( 3 + λ_ H^2 - 2 λ_ H^4 ) ) S_ BH = ( 1 + λ_ H^2-λ_∞^2/2 + O ( λ_ H^2 λ_∞^2 , λ_ H^4 , λ_∞^4) ) S_ BH , where λ_ H = λ (2M). In fact S(2M)>S(∞) if λ_ H≥λ_∞. This is a sign that the gravitational field itself contains some information even in the vacuum exterior, a feature that is supported by the concept of net gravitational energy. The fact that the thermodynamic quasilocal entropy of the black hole is not a constant outside the horizon, in the presence of holonomy modifications, is a signal that the constant classical Bekenstein-Hawking entropy does not count the quantum degrees of freedom of the black hole. Using general arguments, it has previously been suggested that once black to white hole transitions are allowed, it can be shown that the Bekenstein-Hawking entropy is not a measure of the black-hole Hilbert space but rather of the degrees of freedom of the horizon through which the black hole interacts with its surroundings <cit.>. In the presence of holonomy terms, we find that this expectation is further extended in the sense that the gravitational field itself also contains information and hence the entropy expressions computed by different external observers need not be the same. It will be interesting to check in the future whether information lost inside the black hole can indeed be `recovered' through the white hole once matter fields are added to the mix and the entropy stored in the gravitational field is taken into account. § CONSTANT HOLONOMY FUNCTION In this section we revisit the simplest, but nontrivial, case of constant λ=λ̃, such that h(x)=1, which had been previously solved in <cit.>. In the LQG literature, a constant holonomy parameter is often referred to as the μ_0-scheme. 
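Before working through this case in detail, the entropy integral derived above can be evaluated numerically for a constant λ. The sketch below does so for a distant static observer and compares the result with the asymptotic value S_BH (1+2λ̃²)/(1+λ̃²) quoted later in this section; M, λ̃ and x̄ are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Numerical evaluation of S(x̄) = ∫₀^M 8π M' χ₀ (1 + 2λ²w)(1 + λ²w)^(-1/2) dM'
# with w = 1 - 2M'/x̄, for constant λ and a far-away observer, compared with
# the asymptotic value quoted below. Illustrative values throughout.
M, lam = 1.0, 0.3
chi0 = 1.0/np.sqrt(1.0 + lam**2)
xbar = 1.0e6*M

def integrand(Mp):
    w = 1.0 - 2.0*Mp/xbar
    return 8*np.pi*Mp*chi0*(1 + 2*lam**2*w)/np.sqrt(1 + lam**2*w)

S, _ = quad(integrand, 0.0, M)
S_BH = np.pi*(2*M)**2
print(S/S_BH, (1 + 2*lam**2)/(1 + lam**2))    # ≈ 1.083 in both cases
```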
Since the angular lengths of holonomies on the symmetry spheres are constant, irrespective of their area, the geometrical length λ̃√(E^x) or the area of a plaquette increase with the radius and is unbounded asymptotically. In the Schwarzschild gauge, for instance, the geometrical length equals λ̃ x and grows linearly with the radial coordinate. This increase has no effect on holonomy modifications in the static region of the Schwarzschild geometry because the increasing length is multiplied by K_φ=0. However, it could be a problem in non-static gauges or when the static region is extended to a dynamical homogeneous region in the presence of a positive cosmological constant. For instance, if E^x=t_ h^2, the geometrical holonomy length increases with time and is multiplied by K_φ which, according to its classical behavior, could be expected to be constant and non-zero: K_φ∝Ė^x/√(E^x). However, the crucial point is that the dynamical behavior of K_φ is changed by holonomy modifications. The modified equation of motion (<ref>) then shows that, for constant λ=λ̃ and E^x=t_ h^2, holonomy modifications, quantified as the ratio (2λ̃)^-1sin(2λ̃K_φ)/(2Ė^2/√(E^x)) of the modified and classical terms representing K_φ, are in fact constant (for constant c_f) and do not show any potential implications of an increasing geometrical length of the holonomy. In a covariant formulation of holonomy modifications based on emergent modified gravity, holonomy functions have implications not only for the Hamiltonian constraint and the canonical dynamics but also for the geometrical structure of space-time. Unlike the Hamiltonian constraint, the emergent metric contains a K_φ-independent λ-term that remains non-zero even in the static region of a Schwarzschild gauge. Imposing asymptotic flatness on the metric (<ref>) with Λ=0, which requires α=1/χ_0 and χ_0 = 1/√(1+λ̃^2), we obtain the line element d s^2 = - (1 - 2 M/x) d t^2 + ( 1 - λ̃^2/1+λ̃^22M/x)^-1(1 - 2 M/x)^-1 d x^2 + x^2 dΩ^2 . The heuristic relationship between λ̃ and a holonomy length is lost in this expression, and λ̃ is not explicitly multiplied by factors of x. Nevertheless, this line element has been derived from a holonomy-modified Hamiltonian constraint before the gauge was fixed. In spite of these cautionary remarks about customary statements concerning holonomy modifications with constant length λ=λ̃, we will demonstrate in this section that there are several unexpected features in this case, including: * The deflection angle of a light ray moving past a massive object receives a holonomy modification for arbitrarily large impact parameters, even in the zero-mass limit. * The black-hole entropy monotonically increases as the observer moves farther away from the horizon. The maximum, given by the asymptotic value does not correspond to the Bekenstein-Hawking entropy. * In the presence of a positive cosmological constant, the areal radius is bounded not only from below but also from above. §.§ Minimum radius and geometric conditions The minimum radius of space-times for constant λ=λ̃ is obtained by solving equation (<ref>): x_λ̃ = 2 M λ̃^2/1+λ̃^2 . At this minimum radius, the Ricci scalar (<ref>) takes the value R |_x=x_λ̃ = 3 M/x_λ̃^3 = 3/8 M^2(1+λ̃^2/λ̃^2)^3 , and the Kretschmann scalar equals K |_x=x_λ̃≡ R_μναβ R^μναβ |_x=x_λ̃ = 1/x_λ̃^4 λ̃^4( 9/4 + λ̃^2/2 + 4 λ̃^4) . Both expressions are finite but diverge in the classical limit, λ̃→ 0, in which x_λ̃→0. 
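These curvature values can be reproduced from the general on-hypersurface expression given earlier. The sketch below evaluates that expression for Λ = 0 and constant λ, with χ = χ_0 = 1/√(1+λ̃²) as in the asymptotically flat solution, at x = x_λ̃, and compares it with 3M/x_λ̃³; the Kretschmann value is printed as well. The values of λ̃ and M are illustrative.

```python
import numpy as np

# For Λ = 0 and constant λ, evaluate the general on-hypersurface Ricci scalar
# R = 2/x² + (2χ²λ²/x²)(M/x)(3M/x - 2) at x_λ̃ = 2Mλ̃²/(1+λ̃²), with
# χ = χ₀ = 1/√(1+λ̃²), and compare with the closed forms quoted above.
M = 1.0
for lt in [1.0, 0.5, 0.1]:
    chi2 = 1.0/(1.0 + lt**2)
    x_min = 2*M*lt**2/(1.0 + lt**2)
    R_general = 2/x_min**2 + 2*chi2*lt**2/x_min**2 * (M/x_min)*(3*M/x_min - 2)
    R_closed = 3*M/x_min**3
    K_closed = (2.25 + 0.5*lt**2 + 4*lt**4)/(x_min**4*lt**4)
    print(lt, R_general, R_closed, K_closed)
# R agrees with 3M/x_λ̃³; both curvature scalars are finite for λ̃ ≠ 0
# but grow without bound as λ̃ → 0.
```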
A comoving observer in the homogeneous interior passes through this region in a finite amount of proper time, given by τ_ cross = π(2 M + x_λ̃) . The expansion parameter for null geodesic congruences is θ_±=±2/x√(1-x_λ̃/x) and changes relative to the affine parameter ψ according to dθ_±/ dψ=-2/x^2(1-3 x_λ̃/2 x) . In the classical limit, λ̃→0, we have lim_λ̃→0 dθ_±/ dψ = -2/x^2<0 which means that both ingoing and outgoing congruences converge towards the singularity, and therefore the geometry focuses null rays. For non-zero λ̃, however, light rays are defocused near the reflection symmetry surface in the region x<3x_λ̃/2. For timelike geodesics at rest at infinity, we have θ_±=-3/2 x√(2M/x)√(1-x_λ̃/x) with proper-time rate of change dθ_±/ dτ=-9M/2x^3(1-4 x_λ̃/3 x) . These geodesics are defocused in the region x<4x_λ̃/3. Thus, photons start feeling the repulsive effects of the emergent space-time before massive particles do. The geometric conditions, discussed in Subsection <ref> and now evaluated with λ(x)=λ̃, imply * Null geometric condition: Using (<ref>), R_αβv^α_±v^β_±=-x_λ̃/x^3<0 and therefore the condition is violated everywhere in space-time, even at large scales in the exterior. * Timelike geometric condition: Using (<ref>), R_αβu^α_±u^β_±=-3Mx_λ̃/2x^4<0 and therefore the condition as well is violated everywhere in space-time. The violation of geometric conditions everywhere for constant λ=λ̃ may be interpreted as an indication of large-scale problems since quantum effects such as defocusing are not expected to be dominant unless we are at large curvature. A quantitative assessment requires a detailed analysis of observational features. §.§ Observational features As in our general discussion, the covariant nature of our emergent space-time metric makes it possible to apply standard methods in order to derive modifications to weak-field gravity, geodesic motion, and thermodynamics. §.§.§ Newtonian and zero-mass limits The metric (<ref>) can be written in terms of the isotropic coordinate 𝔯(x) = M/2 k-2 √(x-2 M)√(x-x_λ̃)+2 M-2 x+x_λ̃/2 M-x_λ̃ , where k is a free constant. Inverting this function, x = k 𝔯( (1+M/2 k 𝔯)^2 + x_λ̃/2 k 𝔯(1-M/4 k 𝔯-k 𝔯/M) ) , leads to the line element d s^2 = - (1 - 2 M/k 𝔯( (1+M/2 k 𝔯)^2 + x_λ̃/2 k 𝔯(1-M/4 k 𝔯-k 𝔯/M) )^-1) d t^2 + ( (1+M/2 k 𝔯)^2 + x_λ̃/2 k 𝔯(1-M/4 k 𝔯-k 𝔯/M) )^2 k^2 ( d𝔯^2 + 𝔯^2 dΩ^2 ) . In the weak-field regime, M≪𝔯, the line element can be approximated by d s^2 ≈ - (1 - 2 M/k 𝔯(1 +λ̃^2 ) ) d t^2 + (1 +λ̃^2 )^-1( (1 +λ̃^2 )^-1 + 2M + x_λ̃/k 𝔯) k^2 ( d𝔯^2 + 𝔯^2 dΩ^2 ) . Both k and 1+λ̃^2 can be eliminated by absorbing a factor of k/(1+λ̃^2) in the coordinate 𝔯, or by simply choosing the free constant k to equal 1+λ̃^2. Asymptotically, 𝔯→∞, we then obtain Minkowski space-time d s^2 ≈ - d t^2 + d𝔯^2 + 𝔯^2 dΩ^2 . Minkowski space-time is also recovered in the zero-mass limit at any value of x or 𝔯, as can be seen by simply inserting M=0 in the Schwarzschild coordinates (<ref>) or in isotropic coordinates (<ref>), and using x_λ̃=2Mλ̃^2/(1+λ̃^2) in the latter case. As we saw previously, a constant holonomy function h(x)=1, or λ=λ̃, is the only choice that recovers a Minkowski space-time in the zero-mass limit M→0 for the case Λ=0. §.§.§ Nearly-circular orbits of massive objects Massive objects in nearly circular orbits for constant λ=λ̃ oscillate about the equilibrium radius x_0 with frequency ω_r = ω_φ √(1-6M/x_0) √(1 - x_λ̃/x_0) . The orbit has a precession rate ω_p = ( 1 - √(1-6M/x_0)√(1 - x_λ̃/x_0)) ω_φ . 
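The constant-λ orbital expressions follow from the general formulas of the exterior section through the identity χ_0² (1 + λ̃²(1 - 2M/x_0)) = 1 - x_λ̃/x_0. A minimal numerical comparison is sketched below, with illustrative values of M, λ̃ and x_0 > 6M.

```python
import numpy as np

# Cross-check of the oscillation frequency for constant λ: the general result
# ω_r = ω_φ √(1-6M/x₀) χ₀ √(1+λ̃²(1-2M/x₀)) versus the constant-λ form above,
# ω_r = ω_φ √(1-6M/x₀) √(1-x_λ̃/x₀). Illustrative values with x₀ > 6M.
M, lt, x0 = 1.0, 0.5, 8.0
chi0 = 1.0/np.sqrt(1.0 + lt**2)
x_min = 2*M*lt**2/(1.0 + lt**2)

omega_phi = np.sqrt(M/(x0**2*(x0 - 3*M)))
w_r_general = omega_phi*np.sqrt(1 - 6*M/x0)*chi0*np.sqrt(1 + lt**2*(1 - 2*M/x0))
w_r_const = omega_phi*np.sqrt(1 - 6*M/x0)*np.sqrt(1 - x_min/x0)
print(w_r_general, w_r_const, omega_phi - w_r_general)   # last entry is ω_p
```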
§.§.§ Deflection angle The deflection angle (<ref>) for constant λ=λ̃, expanded to leading order in M/x_ tp with the turning-point radius x_ tp, is given by Δϕ = π( (1-λ̃^2/2) √(1+λ̃^2) - 1) + 4 M/x_ tp1 + 3 λ̃^2/2/1+λ̃^2 + O(M^2/x_ tp^2) and has the correct classical limit as λ̃→ 0. However, for non-zero λ̃, there is a correction independent of the turning-point radius of the light ray and of the mass M. The limit for x_ tp→∞ or M→0 of Δϕ therefore has a non-zero remnant deflection angle, even at infinite distance from a massive object or in the absence of any mass. In particular, even though the zero-mass limit of the line element is equal to Minkowski space-time, the deflection angle can deviate from the Minkowski result if it is computed before the limit M→0 is taken. This result is surprising, but it is not necessarily problematic in an observational context. The traditional, Eddington-type measurement of a deflection angle does not determine a single Δϕ but rather the difference of two Δϕ, one with a heavy mass close to the light ray (M≠0) and one with a light ray far from any heavy mass (M→0). Since the unexpected λ̃-term in Δϕ is mass-independent, it cancels out in any difference of two deflection angles. §.§.§ Net stress-energy tensor For constant λ=λ̃, the Ricci scalar and the net stress-energy tensor, defined as the negative Einstein tensor, are given by R^(λ̃) = 3 M x_λ̃/x^4 , T̅_μν^(λ̃) d x^μ⊗ d x^ν = - 2 M x_λ̃/x^4(1 - 2 M/x) d t^2 + x_λ̃/x^3(1 - x_λ̃/x)^-1(1 - 2 M/x)^-1 d x^2 - x_λ̃/2 x(1-M/x) dΩ^2 , using the metric (<ref>). We have included the superscript (λ̃) in order to distinguish these quantities from the corresponding versions in a μ̅-type scheme studied in the next section. The energy densities according to a static observer in the exterior and a comoving observer in the homogeneous interior are given by ρ^(λ̃) ≡ T̅_μν^(λ̃)t̂^μt̂^ν = - 2 M x_λ̃/x^4 , ρ_ h^(λ̃) ≡ T̅_μν^(λ̃)t̂_ h^μt̂_ h^ν = - x_λ̃/t_ h^3(1 - x_λ̃/t_ h)^-2(2 M/t_ h-1)^-2 , respectively. Here, t̂^μ∂_μ = N^-1∂_t is the normalized time-like Killing vector field, and t̂_ h^μ∂_μ = N^-1∂_t_ h is the normalized comoving velocity associated to the metric (<ref>) under the appropriate coordinate swap. Let us list four important observations. * All the components of the Einstein tensor are asymptotically vanishing, with the angular pressure being the slowest to decay at a rate of x^-1 and the energy density being the fastest at a rate of x^-4. * The Ricci scalar is positive everywhere, R^(λ̃)>0, even in vacuum. * Restricting to the exterior, the energy density is negative everywhere, ρ^(λ̃)<0, the radial pressure is positive everywhere, T̅_xx^(λ̃)>0, and the angular pressure is negative everywhere, T̅_φφ^(λ̃)<0. * Restricting to the interior, the energy density for comoving observers is negative everywhere, the radial pressure is positive everywhere, while the angular pressure is negative for t_ h>M and positive for t_ h<M. §.§.§ Black-hole thermodynamics The net quasilocal energy (<ref>) for constant λ=λ̃ and with N̂=1 is given by E_λ̃ (x) = x (1-√(1 - 2 M/x)√(1 - x_λ̃/x)) , implying the net quasilocal entropy S_λ̃ (x) = 8 π x^2/15 λ̃^4( √(1 -x_λ̃/x)( 3 + λ̃^2 ( 1 + 3 M/x) - 2 λ̃^4 ( 1 + M/x - 6 M^2/x^2) ) - ( 3 + λ̃^2 - 2 λ̃^4 ) ) from (<ref>). This expression does not recover the classical value asymptotically: lim_x→∞ S_λ̃ (x) = S_ BH1+2 λ̃^2/1+λ̃^2 > S_ BH . At the horizon, it takes the value S_λ̃ (2 M) = S_ BH(1 + λ̃^4/48 + 𝒪(λ̃^6) ) , and therefore lim_x→∞S_λ̃ (x)>S_λ̃ (2 M). 
(The corrections to the horizon value are of order λ̃^4, while the asymptotic value is corrected to order λ̃^2.) In fact, the expression (<ref>) is a monotonically increasing function: the entropy increases as the observer moves farther from the horizon. One possible interpretation is that the gravitational field in the vacuum exterior contains entropy and reduces the information. The overall entropy is then maximal for asymptotic observers that have access to the whole gravitational field. This is contrary to what one would expect, namely, that if the gravitational field contains some information, then asymptotic observers, who unlike near-horizon observers have access to this information, would measure an entropy that would be the minimum of the above expression. Since the increase of entropy is implied by holonomy modifications, heuristically related to discrete spatial structures, one might argue that our result for S_λ̃ can only take into account information accessible to measurements in a continuum model. Information related to spatial discreteness cannot be observed in this way, and therefore increases the entropy because it indirectly affects the geometry. Whether this behavior is considered problematic depends on the physical significance attached to the net energy and entropy of a modified vacuum space-time. §.§ Maximal radius in the presence of a cosmological constant The presence of a cosmological constant can significantly alter the asymptotic and global structure of the modified space-time. Including a cosmological constant, the space-time line element for constant λ=λ̃ is given by d s^2 = - (1 - 2 M/x - Λ x^2/3) d t^2 + ( 1 - λ̃^2/1+λ̃^2(2M/x + Λ x^2/3) )^-1(1 - 2 M/x-Λ x^2/3)^-1 d x^2 + x^2 dΩ^2 . In the classical case, λ̃→0, a nonvanishing, positive Λ implies that a second horizon exists at x_Λ≈√(3/Λ) (neglecting M/x for small Λ), beyond which the space-time is homogeneous. With λ̃≠0, the coordinate singularity determined by equation (<ref>) is a third-order polynomial and hence has three rather complicated solutions some of which may be complex in general. For small x ≪ 2 M, the Newtonian term dominates and the solution is approximately that of the Λ=0 case, x_λ̃^(-)≈2 M λ̃^2/1+λ̃^2 . For large x≫ 2M, the cosmological constant dominates and the solution is approximately x_λ̃^(+)≈√(3/Λ1+λ̃^2/λ̃^2) if Λ>0, which is the case we will focus on since it agrees with observations. This value is always outside the cosmological horizon and is a maximum-radius surface, beyond which the space-time starts collapsing; see Fig. <ref>. The existence of a maximum radius due to the presence of a quantum parameter is unexpected since it occurs at macroscopic scales where quantum gravity is not expected to play a significant role. This outcome of holonomy modifications in the traditional treatment is usually related to a growing geometrical length λ̃t_ h in a homogeneous region, which, unlike in the static region, does contribute to holonomy modifications in the Hamiltonian constraint because K_φ≠0. However, as already discussed, covariant holonomy modifications in the Hamiltonian constraint do not show an increasing holonomy length but rather imply a constant magnitude of holonomy modifications related to the classical form of extrinsic curvature via (<ref>). The reason for unexpected effects on large scales is therefore not a growing holonomy length but rather the fact that holonomy modifications do not decrease quickly enough as t_ h increases. 
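The two branches discussed above can be located numerically from the vanishing of the λ̃-dependent factor in the radial metric component, which is equivalent to 1 + λ̃²(1 - 2M/x - Λx²/3) = 0. The sketch below compares the numerical roots with the two approximations quoted above; M, λ̃ and Λ are illustrative, with Λ exaggerated so that both roots are easy to bracket.

```python
import numpy as np
from scipy.optimize import brentq

# Inner and outer reflection-symmetry radii for constant λ̃ and Λ > 0, from
# 1 + λ̃²(1 - 2M/x - Λx²/3) = 0 (vanishing of the radial metric factor above),
# compared with the small- and large-x approximations. Illustrative values.
M, lt, Lam = 1.0, 0.5, 1.0e-4
f = lambda x: 1.0 + lt**2*(1.0 - 2*M/x - Lam*x**2/3)

x_minus = brentq(f, 1e-6, 10*M)
x_plus = brentq(f, 10*M, 1e4*M)
print(x_minus, 2*M*lt**2/(1 + lt**2))                   # ≈ x_λ̃^(-)
print(x_plus, np.sqrt(3.0/Lam*(1 + lt**2)/lt**2))       # ≈ x_λ̃^(+)
```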
Although non-classical behavior on large scales is unexpected as an effect of quantum gravity, it is not impossible. While local quantum-gravity effects should be small at low curvature, global properties in large regions are subject to a combination of a huge number of tiny quantum-gravity corrections, which could conceivably add up to sizeable contributions. Moreover, non-classical behavior on large scales may well happen in the general context of modified gravity. Large-scale modifications of the classical behavior are not necessarily incompatible with observations if they happen sufficiently far outside of the cosmological horizon. Making λ̃ sufficiently small, all the unexpected effects found in this section may therefore be acceptable in an individual model. Nevertheless, the physical viability of a fundamental theory, such as the concept of holonomy modifications in general or of LQG, can easily be endangered if microscopic features have significant implications on macroscopic scales. Corrections to the classical theory may then be hard to control when one goes beyond relatively simple models such as spherically symmetric ones. The cosmological constant itself, in its traditional explanation as vacuum energy, is an example of such UV-IR mixing. Its macroscopic problems seem to be exacerbated by holonomy modifications with constant length parameter. However, while there is potential observational evidence for the presence of a cosmological constant, there is no indication for the existence of a maximal radius. From this perspective, it is therefore best to avoid models that place an upper bound on the range of the radial coordinate. Models with constant holonomy length may still be relevant if they are used in a finite range of radii. For instance, the complete function λ(x) may not follow a strict power law, but be nearly constant in a bounded region of space-time outside of which it decreases toward zero asymptotically. It is then important to show that non-constant λ(E^x) can indeed suppress holonomy-type effects in classical regimes and restrict implications of holonomy modifications to the expected quantum-gravity regime, as we will do in the next section. We also reiterate the importance of a covariant formulation based on an emergent space-time metric, which makes it possible to have consistent effects in the static gauge, in which traditional holonomy modifications would disappear, and in non-static homogeneous regions of space-time. § DECREASING HOLONOMY FUNCTION In this section, we use a holonomy function (<ref>) h(E^x)=λ(E^x)/λ̃= r̂/√(E^x) analogous to a μ̅-type scheme, and derive the particular physical effects of this choice. As in equation (<ref>), r̂ is a reference radius where h(r̂)=1. In models of loop quantum gravity, a specific coefficient is often chosen in the parameterization λ^2 = λ̃^2r̂^2/E^x , where the constant λ̃=λ√(E^x)/r̂ can be interpreted as the side length of plaquettes in a uniform triangulation of spheres. If one uses regular polygons with n sides and assigns the area Δ=8 πγℓ_ Pl^2 to each of them, suggested for instance by the area gap of LQG, it follows that λ̃^2r̂^2=4 n^-1tan (π/n) Δ=:Δ̃ . (See Sec. <ref> for the full details of these relations.) Note, however, that holonomies of a gauge connection do not depend on the metric, and therefore it is not required to have a strict relationship between the constant λ̃ in (<ref>) and the area spectrum, even in a fundamentally discrete theory such as loop quantum gravity. 
Using this parameterization of λ(E^x), we have λ_∞ = 0 and, for Λ=0, the asymptotically flat line element (<ref>) for the black hole exterior is simplified to d s^2 = - (1 - 2 M/x) d t^2 + ( 1 + Δ̃/x^2( 1 - 2 M/x) )^-1(1 - 2 M/x)^-1 d x^2 + x^2 dΩ^2 . We will reincorporate the cosmological constant in the final subsection and revisit the status of a maximum radius that appeared for constant λ=λ̃. Our detailed derivations show that the decreasing parameterization of λ(E^x) removes all the unexpected features of constant λ in low-curvature regimes. §.§ Minimum radius and geometric conditions The minimum radius of space-times for λ∝1/√(E^x), obtained by solving equation (<ref>), is given by x_Δ̃ = (Δ̃ M)^1/3( 1 + √(1+Δ̃/(27 M^2)))^2/3 - (Δ̃/(27 M^2))^1/3/(1 + √(1+Δ̃/(27 M^2)))^1/3 . To analyze the effect for small masses, we define δ≡ 4 M^2 / Δ̃ and expand around δ=0: x_Δ̃/2 M = 1 - δ + 3 δ^2 + O ( δ^3 ) . Therefore, to leading order, the horizon of a small mass corresponds to the minimum radius such that the wormhole interior will be very short, determined as a timelike distance since x is a time coordinate in the interior. This cannot happen in the case of constant λ=λ̃, where x_λ̃ = 2 M λ̃^2 / (1+λ̃^2) and the horizon and minimal radius do not match to leading order unless M=0. On the other hand, for large M black-holes, we define δ̃=Δ̃/(27M^2), and by expanding around δ̃=0, we obtain x_Δ̃/2 M = 3/4^1/3(δ̃^1/3-1/4^1/3δ̃^2/3+O ( δ̃^4/3) .) The results indicate that the minimum radius is relatively small compared to the horizon, suggesting that the wormhole interior will be significantly large, similar to the constant λ = λ̃ case. The expansion parameter for null geodesic congruences is θ_±=-2/x√(1+Δ̃/x^2(1-2M/x)) and changes according to dθ_±/ dψ=-2/x^2[1+Δ̃/x^2(2-5M/x)] The correction term is dominant near the surface of reflection symmetry, and the sign changes at some surface x̅=f(M,Δ̃)x_Δ̃ with f(M,Δ̃)>1. Similarly, for timelike congruences, we have θ_±=-3/2x√(2M/x)√(1+Δ̃/x^2(1-2M/x)) with the rate of change dθ_±/ dτ=-9M/2x^3[1+(5/3-4M/x) Δ̃/x^2]. The correction terms are dominant near the hypersurface of reflection symmetry, changing sign at x̃=g(M,Δ̃)x_Δ̃, where once more g(M,Δ̃)>1. It can be demonstrated that g(M,Δ̃)<f(M,Δ̃). Thus, as with constant λ=Δ̃, free-falling photons start being defocused at larger scales compared with massive particles. The geometric conditions for λ∝1/√(E^x) are: * Null geometric condition: R_μνv^μ_±v^ν_±=2/x^2Δ̃/x^2(1-3M/x)≥ 0 , This condition is violated only in the region x<3M. * Timelike geometric condition: R_μνu^μ_±u^ν_±=3M/x^3Δ̃/x^2(1-3M/x)≥ 0 , Also this condition is violated only in the region x<3M. Unlike for constant λ=λ̃, here the geometric conditions are satisfied at large scales and therefore the modified gravitational force focuses congruences far from the black hole, similar to its classical effect. §.§ Observational features As before, the weak-field behavior and properties of geodesic motion can help to distinguish physical implications of different modification functions. §.§.§ Newtonian limit For a decreasing holonomy function h(E^x)=r̂/√(E^x) and using the coordinate transformation x = r (1+ M/(2r))^2, the emergent space-time line element is given by d s^2 = - (1-M/2r/1+M/2r)^2 dt^2 + (1+M/2r)^4 ( ( 1 + Δ̃/r^2(1-M/2r)^2/(1+M/2r)^4)^-1 d r^2/χ_0^2 +r^2 dΩ^2) ≈ - (1-2M/r) dt^2 + (1+2M/r) ( ( 1 + Δ̃/r^2(1-3 M/r) )^-1 d r^2/χ_0^2 +r^2 dΩ^2) . where we assumed M/r≪ 1 in the second line. 
The isotropic coordinate (<ref>) is now given by 𝔯 = k exp∫ d r/r( 1 - Δ̃/r^2(1-3M/r) )^1/2≈ k exp∫ dr/r√(1-Δ̃/r^2)(1+3Δ̃M/2r(r^2-Δ̃)) . The integration can be done in closed form, but the resulting expressions are lengthy. In most cases of interest, the weak-field limit M≪ r also implies Δ̃≪ r^2, which can be used to simplify (<ref>). To leading order in Δ̃/r^2 and M/r, the integral is independent of M and therefore agrees with its value in the zero-mass limit. §.§.§ Zero-mass limit The decreasing holonomy function implies a metric (<ref>) that is not flat in the zero mass limit M→0 in the case of Λ=0, and instead approaches the line element d s^2 = - d t^2 + ( 1 + Δ̃/x^2)^-1 d x^2 + x^2 dΩ^2 . We interpret this result as suggesting that the spatial geometry is not smooth at the Planck scale x ∼√(Δ̃). The 3-dimensional volume in a slice of constant t and enclosed by x<r is given by V_Δ(r) = 4 π∫_0^r d x x^2 ( 1 + Δ̃/x^2)^-1/2 = 4 π r^3/3( (1-2 Δ̃/r^2) √(1+Δ̃/r^2) + 2 (Δ̃/r^2)^3/2) = 4 π r^3/3( 1-3/2Δ̃/r^2 + 2 Δ̃^3/2/r^3 + O (Δ̃^2/r^4)) . The volume of a given region is therefore always smaller compared with flat space. The zero-mass metric (<ref>) can be expressed in the isotropic coordinate 𝔯 = √(Δ̃/4)√(√(1+Δ̃/x^2) + 1/√(1+Δ̃/x^2) - 1) , such that d s^2 = - d t^2 + (1 - Δ̃/4𝔯^2)^2 ( d𝔯^2 + 𝔯^2 dΩ^2 ) . §.§.§ Nearly-circular orbits of massive objects Massive objects in nearly circular orbits, using a decreasing holonomy function h(E^x)=r̂/√(E^x), oscillate around the equilibrium radius x_0 with frequency ω_r = ω_φ√(1-6M/x_0)√(1 + Δ̃/x^2(1 - 2 M/x_0)) . Their orbits have the precession rate ω_p = ( 1 - √(1-6M/x_0)√(1 + Δ̃/x^2(1 - 2 M/x_0))) ω_φ . §.§.§ Deflection angle The deflection angle (<ref>), to leading order in M/x_ tp, is given by Δϕ = - π/4Δ̃/x_ tp^2 + 4 M/x_ tp(1 + 3/2Δ̃/x^2) (1 + Δ̃/x^2)^-3/2 + O (M^2/x_ tp^2) = - π/4Δ̃/x_ tp^2 + 4 M/x_ tp(1-3/8(Δ̃/x^2)^2 + O (Δ̃^3/x^6)) + O (M^2/x_ tp^2) . While a repulsive correction survives the zero-mass limit, it decreases quickly for x_ tp^2≫Δ̃, unlike in the case of a constant holonomy function. §.§.§ Net stress-energy tensor The Ricci scalar for h(x)=r̂/x is given by R = 2 Δ̃/x^4( 1 - 7 M/x + 9 M^2/x^2) , and, using (<ref>), the net stress-energy tensor equals T̅_μν^(Δ̃) d x^μ⊗ d x^ν = - Δ̃/x^4(1 - 6 M/x) (1 - 2 M/x)^2 d t^2 - Δ̃/x^4(1 + Δ̃/x^2( 1 - 2 M/x) )^-1 d x^2 + Δ̃/x^4( 1 - 3 M/x) ( 1 - M/x) x^2 dΩ^2 . The gravitational energy densities in the exterior and interior are given by ρ ≡ T̅^(Δ̃)_μνt̂^μt̂^ν = - Δ̃/x^4(1 - 6 M/x) (1 - 2 M/x) , ρ_ h ≡ T̅^(Δ̃)_μνt̂_ h^μt̂_ h^ν = - Δ̃/t_ h^4( 1 - Δ̃/t_ h^2(2 M/t_ h -1) )^-2(2 M/t_ h-1)^-1 , respectively, where t̂^μ∂_μ = N^-1∂_t is the normalized stationary velocity, while t̂_ h^μ∂_μ = N^-1∂_t_ h is the normalized comoving velocity associated to the metric (<ref>) under the appropriate coordinate swap. The four main differences between a decreasing holonomy function h(x)=r̂/x and a constant h(x)=1 are: * All the components of the Einstein tensor decay faster with increasing x for a decreasing holonomy function, with the exception of G_tt which has a similar decay in both schemes. For h(x)=r̂/x, all components decrease at the same rate to leading order in 1/x. * For h(x)=r̂/x, the Ricci curvature (<ref>) vanishes at the coordinates x_0^±/2 M = 7 ±√(13)/4 , where x_0^+ ≈ 5.3 M >2M and x_0^- ≈ 1.69 M < 2M. Therefore, unlike for a constant holonomy function, the Ricci scalar is not positive everywhere, becoming negative outside of but near the horizon, R - Δ̃/32 M^4 + O(δ^2/(2M)^2) . 
* Restricted to the exterior, the energy density (<ref>) has different signs in different regions, ρ<0 for x>6M and ρ>0 for 2M<x<6M; the radial pressure is negative everywhere, T̅_xx<0, and the angular pressure changes sign, such that T̅_φφ>0 for x>3M and T̅_φφ<0 for 2M<x<3M. * Restricted to comoving observers in the interior, the energy density is negative everywhere, the radial pressure is positive everywhere, T̅_tt>0, and the angular pressure changes sign at t_ h=M such that it is negative for t_ h>M and positive for t_ h<M. Finally, we note that in the zero-mass limit, M→0, the net stress-energy tensor (<ref>) does not vanish but reduces to T̅_μν^(Δ̃) d x^μ⊗ d x^ν|_M→ 0 = - Δ̃/x^4 d t^2 - Δ̃/x^4(1 + Δ̃/x^2)^-1 d x^2 + Δ̃/x^4 x^2 dΩ^2 , while the energy density is simply given by ρ|_M→ 0 = T̅_tt|_M→ 0. Some gravitational stress-energy therefore survives in the vacuum but quickly decays as Δ̃/x^4 and may therefore be observable only at the Planck scale. §.§.§ Black-hole thermodynamics The net quasilocal energy (<ref>) for h(x)=r̂/x with N̂=1 is given by E_Δ̃ (x) = - x ( √(1 - 2 M/x)√(1 + Δ̃/x^2( 1 - 2 M/x)) - √(1 + Δ̃/x^2)) , and the net quasilocal entropy (<ref>) by S_Δ̃ (x̅) = 8 πx̅^6/15 Δ̃^2( √(1 + Δ̃/x̅^2(1-2M/x̅))( 3 + Δ̃/x̅^2( 1 + 3 M/x̅) - 2 Δ̃^2/x̅^2( 1 + M/x̅ - 6 M^2/x̅^2) ) - √(1 + Δ̃/x̅^2)( 3 + Δ̃/x̅^2 - 2 Δ̃^2/x̅^4) ) . This entropy recovers its classical value asymptotically, lim_x→∞S_Δ̃ (x) = S_ BH. At the horizon, however, it has the non-classical value S_Δ̃ (2 M) = 2 (4 π)^4/15A_ H^3/Δ̃^2( 3 + 10 πΔ̃/A_ H - √(1 + 4πΔ̃/A_ H)( 3 + 4 πΔ̃/A_ H( 1 - 8 πΔ̃/A_ H) ) ) = S_ BH( 1 + 2 πΔ̃/A_ H + O ( Δ̃^2/A_ H^2) ) , such that S_Δ̃ (2 M)> S_ BH. In contrast to constant h(x)=1, the entropy S_Δ̃ is monotonically decreasing. This means that the gravitational field in the exterior vacuum has some information and hence the entropy is minimal for asymptotic observers who have access to the whole gravitational field. §.§ Presence of a cosmological constant After reincorporating the cosmological constant, the metric (<ref>) for h(x)=r̂/x such that λ(x)=√(Δ̃)/x reads d s^2 = - (1 - 2 M/x - Λ x^2/3) d t^2 + ( 1 + Δ̃/x^2( 1 - 2 M/x - Λ x^2/3) )^-1(1 - 2 M/x - Λ x^2/3)^-1 d x^2 + x^2 dΩ^2 . The presence of a cosmological constant can significantly alter the global structure of the modified space-time. Even in the classical case, Δ̃→0, a nonvanishing, positive Λ implies that a second horizon exists at x_Λ≈√(3/Λ) , beyond which the space-time is homogeneous. If we consider a decreasing holonomy function, h(x)=r̂/x, equation (<ref>) for coordinate singularities, evaluated at small x≪ 2M, has the approximate solution (<ref>) for a minimum radius similar to constant λ. However, for large x≫ 2M, the Newtonian term can be neglected, and the approximate solution x_Δ̃^(+)≈√(3 Δ̃/Δ̃Λ - 3) is imaginary for Δ̃Λ < 3. Hence, the radius is not bounded from above for a decreasing holonomy function h(x)=r̂/x in such a case. Based on the observed value of the cosmological constant, Δ̃Λ∼ 10^-122 is negligible for any Planck-sized area Δ̃, and we do not need to consider the case of Δ̃Λ >3 in which x^(+)_Δ is a real solution. This simple solution is exact only if M=0. With this in mind, we now write the only real, exact solution to the minimal radius for h(x)=r̂/x, including the cosmological constant: x_Δ̃ = ( - Δ̃ (3-Δ̃Λ) + (3 Δ̃ M)^2/3((3 -Δ̃Λ)^2 + √((3-Δ̃Λ)^3 (3-Δ̃Λ+Δ̃/(9 M^2) )))^2/3) × (3-Δ̃Λ)^-1 (3 Δ̃ M)^-1/3((3 -Δ̃Λ)^2 + √((3-Δ̃Λ)^3 (3-Δ̃Λ+Δ̃/(9 M^2) )))^-1/3 . 
The expression (<ref>) is recovered in the limit Λ→0, and appears as the leading zeroth order in a Δ̃Λ expansion. For massive black holes where Δ̃/M^2≪1 we have, to leading order, x_Δ̃ = (2 M Δ̃/1-Δ̃Λ/3)^1/3 . For small masses, defining δ≡ 4 M^2 / Δ̃, we have x_Δ̃/2 M = 1 - (1-Δ̃Λ/3) δ to leading order in δ. For h(x)=r̂/x, the conformal diagram of the maximal extension of the vacuum solution in a de Sitter background is shown in Fig. <ref> with x_λ=x_Δ̃. § CONCLUSIONS Emergent modified gravity provides a classification of modified canonical theories compatible with a covariant space-time geometry for each of its solutions. So far, explicit realizations have been derived in spherically symmetric models <cit.> and for polarized Gowdy models <cit.>, with up to second-order in derivatives. Even though this is the same as the classical derivative order, a large number of modifications are possible that are not of higher-curvature form. The direct canonical formulation makes this formalism an ideal candidate for an analysis of holonomy or other modifications proposed in models of loop quantum gravity. Our main result is the first covariant formulation of an effective theory of holonomy-modified gravity. The theory provides a consistent set of constraints that resemble what has been analyzed before in this context, going back to <cit.>. But an important new ingredient is the appearance of an emergent space-time metric that differs from the classical expression in terms of canonical fields. This metric is emergent rather than effective because, in this setting, it is the only metric object that obeys the tensor transformation law. An effective metric would instead be a correction to a classical metric, such that there would be at least two different metric objects within the same theory. Our explicit examples of space-time solutions in various gauges revealed the importance of the emergent space-time metric for a consistent interpretation of holonomy modifications. For instance, the old problem of how to reconcile, on one hand, zero holonomy modifications in a Schwarzschild gauge where extrinsic curvature vanishes, and non-trivial holonomy modifications in a non-static gauge such as Gullstrand–Painlevé on the other hand, is resolved by the observation that the emergent metric does receive corrections at the kinematical level. These corrections survive even in the Schwarzschild gauge. Our formulation of holonomy modifications is covariant in space-time because its solutions are compatible with a coordinate and slicing invariant space-time structure. It is also covariant in phase-space in that it provides an unambiguous interpretation of holonomy modifications and their specific ingredients, taking into account the option to apply canonical transformations. Our detailed discussion showed that the traditional distinction between different triangulation schemes that determine the behavior of holonomies in homogeneous reduced models, such as μ_0 or μ̅, cannot be maintained at the covariant level, in particular when canonical transformations are taken into account. Instead, the requirement that holonomy terms have a certain form, in particular periodicity in the relevant phase-space variable, distinguish a specific set of canonical variables in which the modification function λ that initially introduced holonomy modifications is split into two separate parts. 
This function, which may depend on the spherical area E^x and at first resembles the traditional holonomy length μ in models of loop quantum gravity, must be split into two contributions, λ(E^x)=λ̃h(E^x) with a constant λ̃ and a function h(E^x) that remains finite and non-zero for λ̃→0. Only the constant λ̃ then appears in strictly periodic holonomy modifications and therefore represents the holonomy length, while the holonomy function h(E^x) provides non-trivial modifications of the triad-dependent coefficients of holonomy terms. In this way, λ̃ resembles a traditional constant holonomy parameter μ_0, and h(E^x) would usually be interpreted as an inverse-triad correction. For covariant holonomy corrections, however, both expressions originate in the same modification function λ of a generic modified theory. A non-constant holonomy function h(E^x), rather than a non-constant holonomy length λ̃, can then be used to control the strength of holonomy modifications in various regimes. Such an approach also clearly elucidates how holonomy modifications can come from fundamental holonomies of a gauge connection that do not refer to a metric for their definition, thereby preserving the holonomy-flux algebra. In particular, it is possible to have suppressed holonomy effects at low curvature provided h(E^x) decreases sufficiently quickly for large E^x, even if the holonomy length λ̃ remains constant. Another significant advantage of a covariant formulation is that it allows a well-defined application of standard space-time concepts, such as the definition of black holes through horizons, an analysis of curvature singularities, the description of motion in curved space-time by geodesics, and the introduction of various physical quantities related to thermodynamical behavior. For the latter, we introduced several new concepts of net energy expressions, including a net stress-energy tensor and a net quasilocal energy, that quantify deviations of emergent space-time from classical space-time in general relativity. Even in vacuum, these quantities are in general non-zero and may be positive or negative, allowing a generalization of energy conditions and conclusions about the focusing or defocusing behavior of space-time on different scales. Our covariant analysis fills a lacuna in the LQG literature by exhibiting an effective repulsive behaviour of the emergent space-time, underlying the singularity resolution in this model, that is independent of any gauge or coordinate choice. We have presented detailed derivations of such examples for two cases, a constant holonomy function h(E^x)=1 and a decreasing one, h(E^x)∝ 1/x, respectively, confirming the generic resolution of black-hole singularities in spherically symmetric models. By and large, we did not encounter significant problems in these models, even for constant h(E^x) which in the usual interpretation as a μ_0-scheme is often considered problematic. However, there were several unexpected features in the latter case which are not as severe as often claimed in traditional formulations but still lead to modifications on small curvature scales. In particular, as already observed in <cit.>, the presence of a positive cosmological constant implies the existence of a maximal radius for constant holonomy functions. This outcome represents a global nonclassical effect that may be interpreted as resulting from the accumulation of a large number of small quantum corrections. 
Such instances of UV/IR mixing need to be interpreted carefully, keeping in mind that they may be artefacts of symmetry-reduced models in which a large number of identical small corrections distributed over a strictly homogeneous sphere can only add up. The symmetry assumption implies that they do not average out to small values as may be expected in an inhomogeneous context. Heuristic holonomy-modification schemes, motivated by phenomenological considerations, typically rule out such possibilities from the get go. However, reliably evaluating a fundamental theory requires careful considerations of all possible outcomes. Our constructions present a major step in this direction by working, for the most part, with general holonomy functions h(E^x). A constant holonomy function may then be realized in bounded regions of space-time, provided it merges into a suitable fall-off behavior for large E^x. Our discussion of decreasing holonomy functions show that the appearance of a maximal radius can easily be avoided. But they also reveal additional subtleties, such as remnants of quantum corrections that make the zero-mass limit of black-hole solutions differ from classical Minkowski space-time. Nevertheless, our derivation of various potentially observable properties demonstrated the overall consistency of this framework. § ACKNOWLEDGEMENTS: IHB is supported by the Indonesia Endowment Fund for Education (LPDP) grant from the Ministry of Indonesia. The work of MB and ED was supported in part by NSF grant PHY-2206591. SB is supported in part by the Higgs Fellowship and by the STFC Consolidated Grant “Particle Physics at the Higgs Centre”. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. § EQUATIONS OF MOTION In this appendix, we provide the equation of motion for both non-periodic and periodic phase-space variables, taking the classical values for all the modification functions except λ and χ_0. It is worth mentioning the structure of the basic canonical bracket, {K_φ(x),E^φ(y)} = δ(x-y) {K_x(x),E^x(y)} = δ(x-y) , and the fact that the diffeomorphism constraint remains classical: H_x = E^φ K_φ' - K_x (E^x)' . The time evolution of any space function f depending on E^x, E^φ, K_x, K_φ, and their spatial derivatives can be written as ḟ={f,H̃[N]+H_x[N^x]} where we implicitly absorb the global factor χ in the lapse function N̅≡χ N . We provide the explicit equation of motion for E^x, E^φ, and K_φ. The dynamics of K_x is not included for two reasons: first, it is quite lengthy compared to the other phase-space variables. Second, we can always use the vanishing of the constraints, either H̃=0 or H_x=0, to solve for K_x. §.§ Non-periodic variables For non-periodic phase-space variables, the Hamiltonian constraint is given by (<ref>). Therefore, the equations of motion are Ė^x=N^x(E^x)'+N̅sin(2λ K_φ)/λ√(E^x)(1+(λ(E^x)'/2E^φ)^2) and Ė^φ = (N^x E^φ)'+N̅[E_φ/2√(E^x)sin(2λ K_φ)/λ+2√(E^x)K_xcos(2λ K_φ)(1+(λ(E^x)'/2E^φ)^2). .+λsin(2λ K_φ)√(E^x)/2(E^φ/E^x((E^x)'/2E^φ)^2+((E^x)'/2E^φ)'). .+2√(E^x)E^φ∂lnλ/∂ E^x(K_φcos(2λ K_φ)-sin(2λ K_φ)/2λ.. ..+(λ(E^x)'/2E^φ)^2(K_φcos(2λ K_φ)+sin(2λ K_φ)/2λ))] , as well as K̇_φ = N^x K_φ'+N̅[√(E^x)/2( Λ-λ^2+sin^2(λ K_φ)/λ^2 E^x)+1/2√(E^x)((E^x)'/2E^φ)^2cos^2(λ K_φ). . 
+√(E^x)/2∂lnλ/∂ E^x(4sin^2(λ K_φ)/λ^2-4K_φsin(2λ K_φ)/2λ-2λ K_φ((E^x)'/2E^φ)^2 sin(2λ K_φ))] +N̅'√(E^x)(E^x)'/2(E^φ)^2cos^2(λ K_φ) §.§ Periodic variables For periodic phase-space variables (constant λ̃), the Hamiltonian constraint is given by (<ref>). The equations of motion are Ė^x=N^x(E^x)'+N̅λ̃sin(2λ K_φ)/λ√(E^x)(1+(λ̃(E^x)'/2E^φ)^2) and Ė^φ = (N^x E^φ)'+N̅λ̃√(E^x)/2λ[4K_x(1+(λ̃(E^x)'/2E^φ)^2)cos(2λ̃K_φ). .+((4E^φ/λ̃^2+((E^x)')^2/E^φ)(1/4E^x-∂lnλ/∂ E^x)+((E^x)'/E^φ)') λ̃sin(2λ̃K_φ)] , as well as K̇_φ = K'_φN^x+λ̃/2λN̅'√(E^x)(E^x)'cos(λ̃K_φ)/(E^φ)^2+λ̃N̅((E^x)')^2/8λ√(E^x)(E^φ)^2cos(λ̃K_φ)^2 -N̅λ̃/λ√(E^x)/2[λ^2/λ̃^2(1/E^x-Λ)+4(1/4E^x-∂lnλ/∂ E^x)sin^2(λ̃K_φ)/λ̃^2]
§ REFERENCES
[PhysRevLett.14.57] R. Penrose, Phys. Rev. Lett. 14, 57 (1965).
[rovelli2004quantum] C. Rovelli, Quantum Gravity (Cambridge University Press, 2004).
[thiemann2008modern] T. Thiemann, Introduction to Modern Canonical Quantum General Relativity (Cambridge University Press, 2008).
[bojowald2018effective] M. Bojowald, S. Brahma, and D. H. Yeom, Effective line elements and black-hole models in canonical loop quantum gravity, Phys. Rev. D 98, 046015 (2018).
[EMG] M. Bojowald and E. I. Duque, Class. Quant. Grav. 42, 095008 (2024), arXiv:2404.06375.
[EMGCov] M. Bojowald and E. I. Duque, Phys. Rev. D 108, 084066 (2023), arXiv:2310.06798.
[alonso2022nonsingular] A. A. Bardaji, D. Brizuela, and R. Vera, Nonsingular spherically symmetric black-hole model with holonomy corrections, Phys. Rev. D 106, 024035 (2022).
[BBVeffBH] A. A. Bardaji, D. Brizuela, and R. Vera, An effective model for the quantum Schwarzschild black hole, Phys. Lett. B 829, 137075 (2022).
[APSII] A. Ashtekar, T. Pawlowski, and P. Singh, Quantum nature of the big bang: Improved dynamics, Phys. Rev. D 74, 084003 (2006), gr-qc/0607039.
[SchwarzN] M. Bojowald, D. Cartin, and G. Khanna, Lattice refining loop quantum cosmology, anisotropic models and stability, Phys. Rev. D 76, 064018 (2007), arXiv:0704.1137.
[rovelli1995discreteness] C. Rovelli and L. Smolin, Nucl. Phys. B 442, 593 (1995).
[Kelly_2020] J. G. Kelly, R. Santacruz, and E. Wilson-Ewing, Effective loop quantum gravity framework for vacuum spherically symmetric spacetimes, Phys. Rev. D 102, 106024 (2020).
[ADM] R. Arnowitt, S. Deser, and C. W. Misner, in Gravitation: An Introduction to Current Research, edited by L. Witten (Wiley, New York, 1962); reprinted in <cit.>.
[arnowitt2008republication] R. Arnowitt, S. Deser, and C. W. Misner, Gen. Rel. Grav. 40, 1997 (2008).
[hojman1976geometrodynamics] S. A. Hojman, K. Kuchař, and C. Teitelboim, Ann. Phys. (New York) 96, 88 (1976).
[kuchar1974geometrodynamics] K. V. Kuchař, J. Math. Phys. 15, 708 (1974).
[pons1997gauge] J. M. Pons, D. C. Salisbury, and L. C. Shepley, Phys. Rev. D 55, 658 (1997), arXiv:gr-qc/9612037.
[salisbury1983realization] D. C. Salisbury and K. Sundermeyer, Phys. Rev. D 27, 740 (1983).
[bojowald2000symmetry] M. Bojowald and H. Kastrup, Class. Quant. Grav. 17, 3009 (2000), arXiv:hep-th/9907042.
[bojowald2004spherically] M. Bojowald, Class. Quant. Grav. 21, 3733 (2004), arXiv:gr-qc/0407017.
[alonso2021anomaly] A. A. Bardaji and D. Brizuela, Anomaly-free deformations of spherical general relativity coupled to matter, Phys. Rev. D 104, 084064 (2021).
[Gambini2022] R. Gambini, F. Benítez, and J. Pullin, Universe 8, 526 (2022), arXiv:2102.09501.
[EMGscalar] M. Bojowald and E. I. Duque, Phys. Rev. D 109, 084006 (2024), arXiv:2311.10693.
[EMGPF] E. I. Duque, Phys. Rev. D 109, 044014 (2024), arXiv:2311.08616.
[Axion] I. H. Belfaqih, M. Bojowald, S. Brahma, and E. I. Duque, [to appear].
[alonso2023charged] A. A. Bardaji, D. Brizuela, and R. Vera, Singularity resolution by holonomy corrections: Spherical charged black holes in cosmological backgrounds, Phys. Rev. D 107, 064067 (2023).
[Higher] M. Bojowald and E. I. Duque, Emergent modified gravity, Class. Quantum Grav. 41, 095008 (2024), arXiv:2404.06375.
[HigherCov] M. Bojowald and E. I. Duque, Emergent modified gravity: Covariance regained, Phys. Rev. D 108, 084066 (2023), arXiv:2310.06798.
[EmergentGowdy] M. Bojowald and E. I. Duque, Emergent modified gravity: Polarized Gowdy model on a torus, [to appear].
[JR] J. D. Reyes, Spherically Symmetric Loop Quantum Gravity: Connections to 2-Dimensional Models and Applications to Gravitational Collapse, PhD thesis, The Pennsylvania State University, 2009.
[AOS] A. Ashtekar, J. Olmedo, and P. Singh, Phys. Rev. D 98, 126003 (2018), arXiv:1806.02406.
[Curiel] E. Curiel, A primer on energy conditions, arXiv:1405.0403 [physics.hist-ph].
[Bardeen1] J. M. Bardeen, Black holes to white holes I: A complete quasi-classical model, arXiv:2006.16804.
[Bardeen2] J. M. Bardeen, Black holes to white holes II: Quasi-classical scenarios for white hole evolution, arXiv:2007.00190.
http://arxiv.org/abs/2407.13401v1
20240718110943
Cooperative Integrated Sensing and Communication Networks: Analysis and Distributed Design
[ "Bowen Wang", "Hongyu Li", "Fan Liu", "Ziyang Cheng", "Shanpu Shen" ]
eess.SP
[ "eess.SP" ]
§ ABSTRACT This paper proposes a cooperative integrated sensing and communication network (Co-ISACNet) adopting a hybrid beamforming (HBF) architecture, which improves both radar sensing and communication performance. The main contributions of this work are four-fold. First, we introduce a novel cooperative sensing method for the considered Co-ISACNet, followed by a comprehensive analysis of this method. This analysis mathematically verifies the benefits of Co-ISACNet and provides insightful design guidelines. Second, to show the benefits of Co-ISACNet, we propose to jointly design the HBF to maximize the network communication capacity while satisfying the constraint of beampattern similarity for radar sensing, which results in a high-dimensional and non-convex problem. Third, to facilitate the joint design, we propose a novel distributed optimization framework based on proximal gradient and alternating direction method of multipliers, namely PANDA. Fourth, we further adopt the proposed PANDA framework to solve the joint HBF design problem for the Co-ISACNet. By using the proposed PANDA framework, all access points (APs) optimize the HBF in parallel, where each AP only requires local channel state information (CSI) and limited message exchange among the APs. Such a framework significantly reduces the computational complexity and thus has pronounced benefits in practical scenarios. Simulation results verify the effectiveness of the proposed algorithm compared with the conventional centralized algorithm and show the remarkable performance improvement of radar sensing and communication by deploying Co-ISACNet. Cooperative integrated sensing and communication network, distributed optimization, hybrid beamforming, performance analysis. § INTRODUCTION The explosive growth of wireless services and the severe spectrum shortage have driven the demand for new paradigms and technologies to overcome spectrum congestion and improve spectrum efficiency for future wireless networks <cit.>. Among the emerging techniques, integrated sensing and communications (ISAC), where the radar sensing and wireless communication operations are integrated and jointly designed on a common hardware platform <cit.>, has benefits in enhancing spectrum efficiency and reducing hardware cost. With these benefits, ISAC is promising in supporting Industry 4.0, autonomous vehicles, and the Internet-of-Things (IoT) in future wireless networks <cit.>. Extensive research has focused on the waveform design for ISAC systems in sub-6 GHz frequencies, where the transmitter employs a fully-digital beamforming architecture <cit.>. To achieve higher-precision sensing while guaranteeing higher-throughput wireless communications, the research on ISAC has moved to millimeter wave (mmWave) frequencies. The shorter wavelength of mmWave signals together with massive multiple-input multiple-output (MIMO) may provide sufficient gains to combat the severe path loss, while the fully-digital beamforming architecture used in sub-6 GHz bands is not viable at mmWave frequencies due to the rapidly increasing cost and power consumption of RF chains and other hardware components <cit.>.
To address this issue, the hybrid beamforming (HBF) architecture, which partitions the beamforming operation into a small-dimensional digital beamformer and a large-dimensional analog beamformer realized by a phase-shift network, has been proposed to compensate for the severe path loss with affordable cost and power consumption <cit.>. Prior work on HBF design for mmWave ISAC systems is carried out in <cit.>, where a novel transceiver with HBF architecture for an ISAC base station (BS) is proposed. Following <cit.>, ISAC with a double-phase-shifter-based HBF architecture is investigated in <cit.>, where the detection performance for an extended target is improved while ensuring the downlink communication performance. In addition, a symbol-level HBF design is proposed in <cit.>, where constructive interference is utilized to improve both sensing and communication performance. To further improve the communication rate, the authors in <cit.> investigate the HBF design for a wideband OFDM ISAC system. Nevertheless, the above-mentioned mmWave ISAC works <cit.> are restricted to single-source scenarios, limiting the coverage for both wireless sensing and communications. Fortunately, a solution already exists in mmWave communications, namely to establish ultra-dense cooperative cell-free (CoCF) networks, in which all access points (APs), controlled by a central processing unit (CPU), cooperatively serve users without cell boundaries <cit.>, thereby offering seamless wireless coverage. Related works on mmWave CoCF <cit.> have demonstrated its benefits in improving wireless communication quality and enlarging coverage. These encouraging results in mmWave CoCF <cit.> motivate us to employ CoCF networks in mmWave/THz ISAC, namely the cooperative ISAC network (Co-ISACNet) <cit.>, which has benefits from the following perspectives: 1) From the communication perspective, the CoCF network enables cooperative signal transmission <cit.>, thus supporting high-quality mmWave communications. 2) From the radar perspective, multiple distributed APs in the CoCF network provide extra spatial degrees of freedom (DoFs) <cit.>, which potentially leads to improved sensing performance. Despite the above benefits, the mmWave Co-ISACNet encounters several challenges: 1) The system model and operational mechanism for the mmWave Co-ISACNet are still underdeveloped. While CoCF has proven benefits in wireless communication networks, synergizing sensing functions with CoCF networks to realize a Co-ISACNet remains a challenging open problem. This raises new technical issues, e.g., how to effectively leverage the capabilities of multiple APs to achieve cooperative sensing. 2) The beamforming design inevitably faces challenges due to the increasing numbers of sources and antennas, complex constraints, and multiple functionalities. In this sense, the conventional centralized design for multi-source networks, where the beamforming design for different sources is centralized at the CPU <cit.>, would result in a heavy computational burden and is thus impractical for the mmWave Co-ISACNet. In this paper, we aim to address the above challenges and demonstrate the benefits of the mmWave Co-ISACNet. To this end, we introduce a novel cooperative sensing method. Additionally, we propose a novel distributed optimization algorithm to jointly design the HBF for the mmWave Co-ISACNet, which achieves affordable beamforming design complexity and reduces the computational cost at the CPU.
Our contributions are summarized as follows: First, we propose a mmWave Co-ISACNet with HBF architecture, which consists of a CPU and multiple dual-function APs. Then, we, for the first time, propose a practical cooperative sensing method for Co-ISACNet. Through comprehensive analysis, we reveal the essence of the proposed cooperative sensing method, and shed light on valuable design insights. Second, to show the advantages of Co-ISACNet and validate our proposed cooperative sensing method, a joint HBF design problem is formulated to maximize the average sum rate for the considered Co-ISACNet, while satisfying the constraint of beampattern similarity for radar sensing. Third, we, for the first time, propose a general distributed optimization algorithm, namely PANDA framework, to jointly design the HBF. Particularly, the PANDA modifies the centralized alternating direction method of multipliers (ADMM) by the proximal gradient (PG) to decouple variables and enable distributed optimization. Moreover, we theoretically prove that the proposed PANDA shares the same convergence behaviors as the conventional centralized ADMM. Fourth, we customize the proposed PANDA framework to solve the joint HBF design problem of the Co-ISACNet. Specifically, we first reformulate the original problem into a tractable form by fractional programming. Then, we apply the PANDA framework to tackle the reformulated problem, which requires only local CSI and minimal message exchange among the APs, thus reducing significantly the computational burden of the CPU as well as the backhaul signaling overheads. Finally, we present simulation results to evaluate the performance of the proposed algorithm and the Co-ISACNet. Specifically, the proposed distributed HBF design algorithm can achieve nearly the same performance as the conventional centralized algorithm, which validates the efficiency of the proposed algorithm. Furthermore, compared with the conventional ISAC with single AP, the proposed Co-ISACNet achieves better communication and radar performance, demonstrating the superiority of the proposed Co-ISACNet. Organization: Section <ref> illustrates the system model of the proposed Co-ISACNet. Section <ref> proposes a general distributed framework named PANDA. Section <ref> adopts the proposed PANDA to solve the joint HBF design problem. Section <ref> evaluates the performance of the proposed design and Section <ref> concludes this work. Notations: Boldface lower- and upper-case letters indicate column vectors and matrices, respectively. ℂ, ℝ, and ℝ^+ denote the set of complex numbers, real numbers, and positive real numbers, respectively. 𝔼{·} represents statistical expectation. (·)^∗, (·)^T, (·)^H, and (·)^-1 denote the conjugate, transpose, conjugate-transpose operations, and inversion, respectively. {·} denotes the real part of a complex number. 𝐈_L indicates an L × L identity matrix. 𝐀_F denotes the Frobenius norm of matrix 𝐀. |a| denotes the norm of variable a. = √(-1) denotes imaginary unit. λ_max ( 𝐀 ) is the maximum eigenvalue of 𝐀. ∠( 𝐀) denotes the phase values of 𝐀. ⊗ is the Kronecker product operator. 𝖽𝗂𝖺𝗀(·) denotes a diagonal matrix. 𝖳𝗋{·} denotes the summation of diagonal elements of a matrix. Finally, [𝐀]_i,j, and [𝐚]_i denote the (i,j)-th element of matrix 𝐀, and the i-th element of vector 𝐚, respectively. 
§ SYSTEM MODEL AND PROBLEM FORMULATION In this section, we describe the operating mechanism of the Co-ISACNet, introduce the system model, derive the performance metrics, and formulate the optimization problem. §.§ Operating Mechanism and Transmit Model As shown in Fig. <ref>, we consider a Co-ISACNet comprising a CPU, a set of dual-function APs 𝒜 = {1,⋯,A}, multiple downlink user equipments (UEs) 𝒰 = {1,⋯,U}, a previously detected radar target, and clutter sources 𝒬 = {1,⋯,Q }. Suppose that all APs are equipped with N_T transmit antennas and N_R receive antennas arranged as uniform linear arrays (ULAs). The distributed APs cooperatively transmit waveforms to detect the radar target and simultaneously provide communication service to the U single-antenna downlink UEs. The CPU is deployed for control and planning, and all APs are connected to it by optical cables or wireless backhaul <cit.>. The operating mechanism of the proposed scheme is summarized in the following three phases: Phase 1: Uplink Training. In this phase, each UE is assigned a random pilot from a set of mutually orthogonal pilots utilized by the APs. After correlating the received signal at AP a (the a-th AP), the channel h_a,u∈ℂ^N_T from UE u to AP a can be estimated by many existing methods <cit.>, such as the minimum mean square error (MMSE) estimator. Phase 2: ISAC Transmission. In this phase, according to the estimated CSI, the APs first optimize the ISAC waveforms to maximize the network capacity while ensuring the sensing performance. Then, all APs transmit the ISAC signals towards the UEs[In this paper, we assume that all APs are synchronized and scheduled to the same frame structure to achieve coherent downlink communication <cit.>.] and the target. Phase 3: Reception. In this phase, the downlink UEs receive the signals from the APs and decode the received signals to obtain the communication information. The radar sensing receiver at each AP collects echo signals reflected by the target to perform radar detection and estimation. This paper focuses on the ISAC transmission and reception phases. In the following, we elaborate on the ISAC transmit model, derive the ISAC performance metrics, and formulate the optimization problem. §.§ System Model §.§.§ Transmit Model We assume all APs are synchronized and serve multiple UEs by joint transmission. Moreover, as shown in Fig. <ref>, the APs are assumed to employ the HBF architecture with N_RF RF chains, U ≤ N_RF≪ N_T. The HBF architecture consists of the digital beamformer 𝐅_D,a = [ 𝐟_D,a,1 , ⋯ , 𝐟_D,a,U ] ∈ℂ^N_RF× U and the analog RF beamformer 𝐅_A,a∈ℂ^N_T× N_RF realized by a fully-connected phase-shift network <cit.>, which thus imposes a constant-modulus constraint on each entry, i.e., | [ 𝐅_A,a ]_m,n | = 1, ∀ m,n. Therefore, the transmitted signal from AP a at time instant t is 𝐱_a(t) = 𝐅_A,a𝐅_D,a𝐬_a(t) = 𝐅_A,a∑_u ∈𝒰𝐟_D,a,us_a,u(t) , where 𝐬_a(t) = ∑_l∈ℒ𝐬_a[l]rect(t-(l-1)Δ t) and s_a,u(t) = ∑_l∈ℒs_a,u[l]rect(t-(l-1)Δ t), with s_a,u(t) being the symbol stream intended for UE u and 𝐬_a[l] = [s_a,1[l] , ⋯ , s_a,U[l]]^T ∈ℂ^U being the transmit symbol vector. Since 𝐬_a[l] contains random communication symbols, we assume 𝔼{𝐬_a[l_1]𝐬_b^H[l_2]} = 𝐈_U when a=b and l_1=l_2, and 𝔼{𝐬_a[l_1]𝐬_b^H[l_2]} = 0_U otherwise. §.§.§ Communication Reception Model For the downlink communication, the received signal at UE u is given by y_u(t) = ∑_a∈𝒜𝐡_a,u^H 𝐱_a(t) + n_C , u(t), where n_C , u(t) denotes the complex additive white Gaussian noise (AWGN) at UE u.
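To make the transmit and downlink-reception model above concrete, the following Python sketch generates one block of the hybrid-beamformed transmit signal 𝐱_a[l] = 𝐅_A,a𝐅_D,a𝐬_a[l] and the corresponding noiseless UE observations y_u[l] = ∑_a 𝐡_a,u^H 𝐱_a[l]; the array sizes, the i.i.d. Gaussian channels, and all numerical values are toy assumptions used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A, N_T, N_RF, U, L = 2, 32, 4, 3, 16            # toy sizes: APs, antennas, RF chains, UEs, slots

# hybrid beamformers: unit-modulus analog part, unconstrained digital part
F_A = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N_T, N_RF))) for _ in range(A)]
F_D = [(rng.standard_normal((N_RF, U)) + 1j * rng.standard_normal((N_RF, U))) / np.sqrt(2 * N_RF)
       for _ in range(A)]
S = [(rng.standard_normal((U, L)) + 1j * rng.standard_normal((U, L))) / np.sqrt(2) for _ in range(A)]

# per-AP transmit blocks x_a[l] = F_{A,a} F_{D,a} s_a[l]
X = [F_A[a] @ F_D[a] @ S[a] for a in range(A)]

# toy downlink channels h_{a,u} (columns of H[a]) and noiseless received samples
H = [(rng.standard_normal((N_T, U)) + 1j * rng.standard_normal((N_T, U))) / np.sqrt(2) for _ in range(A)]
Y = sum(H[a].conj().T @ X[a] for a in range(A))  # U x L, row u collects UE u's samples over the block
print(Y.shape, np.allclose(np.abs(F_A[0]), 1.0))  # constant-modulus analog entries confirmed
```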
The UE u down-converts the received signal (<ref>) to baseband via the RF chains, and the baseband signal at l-th time slot is given by y_u[l] = ∑_a∈𝒜𝐡_a,u^H 𝐅_A,a𝐟_D,a,u s_a,u[l] + ∑_a∈𝒜∑_v ∈𝒰 , v u𝐡_a,u^H 𝐅_A,a𝐟_D,a,v s_a,v[l] + n_C , u , where n_C , u∼𝒞𝒩(0 , σ_C,u^2) represents the complex AWGN with variance σ_C,u^2 at UE u. According to (<ref>), the achievable transmission rate at UE u can be written as Rate_u ( {𝐅_A,a} , {𝐅_D,a}) = log( 1 + | ∑_a ∈𝒜𝐡_a,u^H 𝐅_A,a𝐟_D,a,u|^2/∑_v ∈𝒰 , v u| ∑_a ∈𝒜𝐡_a,u^H 𝐅_A,a𝐟_D,a,v|^2 + σ_C,u^2 ) . §.§.§ Radar Reception Model In this paper, we assume all APs cooperatively detect a target in the presence of signal-dependent clutter sources. To achieve this, each AP is equipped with a radar sensing receiver, which is co-located with the AP transmitter. Based on above assumptions, the scatterback signal at sensing receiver in AP a is given by 𝐲_R,a(t) = ∑_b ∈𝒜ξ_a,b,0Υ_a,b,0𝐆_a,b,0𝐅_A,a𝐅_D,a𝐬_b(t-τ_a,b,0) + ∑_q∈𝒬∑_b ∈𝒜ξ_a,b,qΥ_a,b,q𝐆_a,b,q𝐅_A,a𝐅_D,a𝐬_b(t-τ_a,b,q) + 𝐧_R,a(t), where 𝐧_R,a(t) is the AWGN with power spectral density σ_S^2. 𝐆_a,b,q=𝐚_R(φ _a,q) 𝐚_T^H(φ_b,q) is the effective radar channel from AP b→Target(q=0)/Clutter(q∈𝒬)→AP a, with φ_a,q being the direction-of-arrivals (DoAs) from AP a→Target(q=0)/Clutter(q∈𝒬). τ_a,b,q = (𝐢_A,a - 𝐢_q_F + 𝐢_A,b - 𝐢_q_F)/c is the time delay from AP b→Target(q=0)/Clutter(q∈𝒬)→AP a, where 𝐢_A,a and 𝐢_q being the coordinate point of AP a and target(q=0)/clutter(q∈𝒬). ξ_a,b,q denotes the complex amplitude of the target(q=0)/clutter(q∈𝒬) observed via the path AP b→Target(q=0)/Clutter(q∈𝒬)→AP a. Υ_a,b,q denotes the path loss of the target(q=0)/clutter(q∈𝒬) observed via the path AP b→Target(q=0)/Clutter(q∈𝒬)→AP a. It is assumed that ξ_a,b,0 is considered deterministic but remains unknown, and ξ_a,b,q,q∈𝒬 follows a complex normal distribution ξ_a,b,q∼𝒞𝒩(0,ς_a,b,q^2),q∈𝒬. Based on the above models, the workflow of cooperative sensing can be summarized in the following steps. Step 1: Matched Filtering (MF). In this paper, we assume that all the AP sensing receivers are asynchronous in time, meaning that AP a only has local knowledge about 𝐅_A,a, 𝐅_D,a and 𝐬_a(t), and τ_a,a,q for q∈0,𝒬. Each AP a performs MF processing based on τ_a,a,0 and 𝐬_a(t). Therefore, the MF processing can be modeled as 𝐘_a = 1/T_0∫_T_0𝐲_R,a(t) 𝐬_a^H(t - τ_a,a,0) dt = ξ_a,a,0Υ_a,a,0𝐆_a,a,0𝐅_A,a𝐅_D,a + ∑_q∈𝒬ξ_a,a,qΥ_a,a,qι_a,q𝐆_a,a,q𝐅_A,a𝐅_D,a + 𝐍̃_R,a , where ι_a,q=R(τ_a,a,0-τ_a,a,q) is the auto-correlation function with R(0)=1. 𝐍̃_R,a = 1/T_0∫_T_0𝐧_R,a(t) 𝐬_a^H(t - τ_a,a,0) dt is the output AWGN with 𝐍̃_R,a = [𝐧̃_R,a,1 , ⋯ , 𝐧̃_R,a,U] and 𝐧̃_R,a,u∼𝒞𝒩(0,σ̅_R^2). The equivalent noise power σ̅_R^2 can be expressed as σ̅_R^2 = σ_R^2/BT_0, where σ_R^2=σ_S^2 B and BT_0 are the radar noise power and time-bandwidth product, respectively. Step 2: Receive Beamforming (RBF). Then, after vectoring 𝐘_a and processing it by the RBF 𝐰_a, the output is given by y̅_a = 𝐰_a^H 𝖵𝖾𝖼(𝐘_a) = 𝐰_a^H 𝐲̂_a = ξ_a,a,0x̅_a,a,0 + ∑_q∈𝒬ξ_a,a,qx̅_a,a,q + n̅_R,a , where n̅_R,a = 𝐰_0^H 𝖵𝖾𝖼(𝐍̃_R,a). x̅_a,a,0 = 𝐰_a^H 𝐆̂_a,a,0𝐟_a and x̅_a,a,q = 𝐰_a^H 𝐆̂_a,a,q𝐟_a where 𝐆̂_a,a,0 = 𝐈_U⊗Υ_a,b,0𝐆_a,a,0 and 𝐆̂_a,a,q = 𝐈_U⊗Υ_a,b,qι_a,q𝐆_a,a,q. Step 3: Cooperative Sensing Detector. Each AP forwards the local information y̅_a to the CPU, where the CPU carries out the data fusion and cooperative sensing. The cooperative sensing detector can be defined by the following proposition. 
Under the generalized likelihood ratio test (GLRT), the cooperative sensing detector is given by ϖ = ∑_a∈𝒜|y̅_a|^2/σ_E,a^2≷_ℋ_0^ℋ_1𝒯 where σ_E,a^2 = 𝐱̅_a^Hς_a𝐱̅_a + σ̅_R^2𝐰_a_F^2, 𝐱̅_a = [x̅_a,a,1 , ⋯ , x̅_a,a,Q]^T, ς_a = 𝖣𝗂𝖺𝗀[ς_a,a,1 , ⋯ , ς_a,a,Q]^T, and 𝒯 is the detection threshold. Please refer to supplementary material (SM) Appendix <ref>. According to the above detector (<ref>), with a given probability of false alarm Pr_FA, the probability of detection Pr_D can be derived as Pr_D = ℚ_M^A( √(2∑_a∈𝒜SINR_a) , √(F_χ_(2A)^2^-1(1-Pr_FA))) where ℚ_M^A(·,·) is the generalized Marcum Q-function of order A, F_χ_(2A)^2^-1 represents the inverse cumulative distribution function (CDF) of the chi-square distribution with 2A degrees of freedom, and SINR_a is given by SINR_a = |ξ_a,a,0x̅_a,a,0|^2/σ_E,a^2 = |𝐰_a^H𝐆̂_a,a,0𝐟_a|^2/∑_q∈𝒬|𝐰_a^H𝐆̂_a,a,q𝐟_a|^2 + σ_R^2𝐰_a_F^2 Please refer to SM Appendix <ref>. From (<ref>), we observe that the radar detection performance is a function of the sum radar SINR ∑_a∈𝒜SINR_a, from which we derive the following insights: First, with a given probability of false alarm Pr_FA, the radar detection performance improves as the sum radar SINR increases. Therefore, we can improve the detection performance of the considered Co-ISACNet by increasing the sum radar SINR. Second, employing more APs achieves a higher sum radar SINR, thereby enhancing the cooperative detection performance. This reveals that deploying more APs can significantly improve the sensing performance. The proposed cooperative sensing approach is fully asynchronous, eliminating the need for highly synchronized clocks. Specifically, in Step 1, the a-th AP requires only local information for MF. Moreover, in Step 3, the forwarding of local information to the CPU also operates asynchronously. This is because (<ref>) represents the sum of the squared magnitudes of the local samples, i.e., non-coherent processing, which is not affected by time delays. The proposed cooperative sensing approach requires only minimal information exchange. Specifically, in Step 1, the a-th AP needs only local information for MF. Furthermore, in Step 3, to achieve cooperative sensing, it is only necessary to forward the local scalar y̅_a ∈ℂ to the CPU. Although the proposed cooperative sensing method is described for a single-target scenario, it can be easily extended to multi-target scenarios. Specifically, when detecting the o-th target among O targets, the other targets are treated as “clutter sources". The o-th target can then be detected using the same cooperative sensing method <cit.>. §.§ Problem Formulation §.§.§ Performance Metrics For the communication function, our objective is to enhance the overall communication capacity of the network. As demonstrated in subsection II-B.2, this can be achieved by maximizing the weighted sum rate (WSR), which is given by WSR({𝐅_A,a} , {𝐅_D,a}) = ∑_u ∈𝒰w_uRate_u( {𝐅_A,a} , {𝐅_D,a}) where w_u ∈ℝ^+ is the weight of UE u. For the radar function, as indicated in Remark 1, the sensing performance can be enhanced by increasing the sum radar SINR. However, using the radar SINR as a design metric has the following drawbacks: 1) optimizing the sum radar SINR necessitates a joint transceiver design, which is often impractical in real-world scenarios; 2) optimizing the sum radar SINR requires detailed knowledge of the target and clutter sources, which is typically challenging to obtain. To address these drawbacks, we propose to design the transmit beampattern instead, a common approach in practical radar systems.
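The detection probability stated above can be evaluated directly through the standard identity between the generalized Marcum Q-function and the survival function of a noncentral chi-square distribution. The SciPy sketch below computes Pr_D from Pr_FA, the number of cooperating APs A, and the sum radar SINR; the per-AP SINR value is an arbitrary illustration of the cooperative gain noted in the remark.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def detection_probability(sum_sinr, p_fa, A):
    """Pr_D = Q_A(sqrt(2*sum_sinr), sqrt(thr)), using Q_A(a, b) = ncx2.sf(b**2, 2A, a**2)."""
    thr = chi2.ppf(1.0 - p_fa, df=2 * A)            # F_{chi^2(2A)}^{-1}(1 - Pr_FA)
    return ncx2.sf(thr, df=2 * A, nc=2.0 * sum_sinr)

p_fa = 1e-4
sinr_per_ap = 10 ** (3 / 10)                         # 3 dB per AP (illustrative)
for A in (1, 2, 4):                                  # more cooperating APs, fixed per-AP SINR
    print(A, detection_probability(A * sinr_per_ap, p_fa, A))
```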
According to radar SINR (<ref>), the designed beampattern of the proposed Co-ISACNet system should have the following characteristics: 1) forming mainlobes towards targets; 2) achieving notch towards clutter sources; 3) achieving notch towards other APs[For a specific AP, the direct beams from other APs are relatively strong given the dense deployment of APs in mmWave frequencies, which causes the performance degradation of radar sensing. Therefore, the transmit beampattern is expected to limit the energy towards other APs.]. To achieve the first characteristic of the desired transmit radar beampattern as a function of the detection angle θ at AP a, given by 𝒫_a ( 𝐅_A,a , 𝐅_D,a , θ) = 𝐚_T^H(θ) 𝐅_A,a𝐅_D,a_F^2. We propose to match it to a pre-defined spectrum 𝐩_a = [ P_a(θ_1), ⋯ , P_a(θ_L)]^T, where L denotes the number of the discrete grid points within the angle region [-90^∘ , 90^∘]. The matching can be mathematically described by the weighted mean square error (MSE) between 𝒫_a( F_A,a,F_D,a,θ) and P_a( θ) for beampattern approximation as MSE_a ( 𝐅_A,a , 𝐅_D,a , Ψ_a) = 1/L∑_l = 1^L μ_l | 𝒫_a( 𝐅_A,a,𝐅_D,a,θ_l) - Ψ_a P_a( θ _l)|^2 , where Ψ_a∈ℝ^+ is a scaling parameter to be optimized[Ψ_a is introduced to scale the normalized pre-defined spectrum 𝐩_a, making the beampattern MSE more tractable <cit.>.], μ_l is a predefined parameter to control the waveform similarity at l-th discrete spatial angle θ_l. To achieve the second and third characteristics of the desired transmit beampattern, we consider constraining the energy towards the notch region under a pre-defined threshold Γ_a, which yields max_ϑ_a,t∈Θ_a𝒫_a ( 𝐅_A,a , 𝐅_D,a , ϑ_a,t) ≤Γ_a , ∀ a, where Θ_a ∈ℂ^T_a × 1 denotes the beampattern notch discrete grid angle set, with T_a being the number of the discrete grid points within the notch region. §.§.§ Problem Statement Based on above illustrations, we aim to maximize the WSR of the proposed Co-ISACNet subject to the radar beampattern weighted MSE, the transmit power budget and the analog beamformer constraints. Therefore, the joint transmit HBF design problem can be formulated as max_{𝐅_A,a} , {𝐅_D,a} , ΨWSR({𝐅_A,a} , {𝐅_D,a}) s.t. MSE_a ( 𝐅_A,a , 𝐅_D,a , Ψ_a) ≤γ _a,∀ a , max_ϑ_a,t∈Θ_a𝒫_a ( 𝐅_A,a , 𝐅_D,a , ϑ_a,t) ≤Γ_a, ∀ a , 𝐅_A,a𝐅_D,a_F^2 ≤ E,∀ a , | [ 𝐅_A,a]_m,n| = 1 ,∀ m,n,∀ a , The optimization problem (<ref>) is non-convex due to the log-fractional expression in the objective function, fourth-order constraint (<ref>), the constant modulus constraints of the analog beamformer and the coupling among variables. In addition, the centralized optimization framework is unrealistic due to the unaffordable computational burden at the CPU, which needs to deal with extremely large dimension of HBFs. To tackle these two difficulties, in the following sections, we propose a distributed optimization framework which is suitable for solving general large-dimensional HBF design problems. § DISTRIBUTED OPTIMIZATION FRAMEWORK In this section, we review the conventional centralized ADMM framework and propose a novel distributed optimization framework to solve the general HBF design problem for multi-AP scenarios. We consider a multi-AP network design problem where APs equipped with HBF architectures collaborate to accomplish a certain task. We write the multi-AP network optimization problems in the most general form as follows: min_{𝐅_A,a} , {𝐅_D,a} 𝒢( {𝐅_A,a} , {𝐅_D,a}) s.t. 
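For intuition on the beampattern-matching metric, the Python sketch below evaluates the transmit beampattern 𝒫_a(θ) of a randomly generated hybrid beamformer on an angular grid and the weighted MSE against a rectangular desired spectrum. The half-wavelength ULA steering model, the mainlobe location, the uniform weights μ_l, and the least-squares choice of the scaling Ψ_a are illustrative assumptions, not the optimized design of the following sections.

```python
import numpy as np

def steer(N, theta_deg, d=0.5):
    """Half-wavelength ULA transmit steering vector (assumed array model)."""
    return np.exp(1j * 2 * np.pi * d * np.arange(N) * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(2)
N_T, N_RF, U = 32, 4, 3
F_A = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_T, N_RF)))
F_D = (rng.standard_normal((N_RF, U)) + 1j * rng.standard_normal((N_RF, U))) / np.sqrt(2 * N_RF)

grid = np.linspace(-90, 90, 181)                            # L angular grid points
A_grid = np.stack([steer(N_T, th) for th in grid])          # L x N_T
P = np.linalg.norm(A_grid.conj() @ F_A @ F_D, axis=1) ** 2  # beampattern ||a_T^H(theta) F_A F_D||^2

desired = np.where(np.abs(grid - 20.0) <= 5.0, 1.0, 0.0)    # unit mainlobe toward 20 deg (example)
mu = np.ones_like(grid)                                     # uniform weights mu_l
Psi = np.sum(mu * P * desired) / np.sum(mu * desired ** 2)  # least-squares scaling of the template
mse = np.mean(mu * (P - Psi * desired) ** 2)
print(f"scaling Psi = {Psi:.3f}, beampattern MSE = {mse:.3f}")
```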
f_i( 𝐅_A,a , 𝐅_D,a) = 0 , i ∈ℐ_a, ∀ a , h_j( 𝐅_A,a , 𝐅_D,a) ≤ 0 , j ∈𝒥_a, ∀ a , 𝐅_A,a∈ℱ , ∀ a , where ℐ_a={ 1 , ⋯ , I_a } and 𝒥_a={ 1 , ⋯ , J_a } collect the index of equality and inequality constraints, respectively. Here we state the properties that problem (<ref>) has as follows. Property 1: Coupled Optimization Variables (Decision Variables). 𝐅_A,a and 𝐅_D,a are analog and digital beamfomers at a-th AP, which are always coupled as 𝐅_A,a𝐅_D,a in objective (<ref>) and constraints (<ref>)-(<ref>). Property 2: Structured Objective. The objective function 𝒢( {𝐅_A,a} , {𝐅_D,a}) is structured as a summation of a highly coupled component 𝒢_0 ( {𝐅_A,a} , {𝐅_D,a} ) and A separated components 𝒢_a ( 𝐅_A,a ,𝐅_D,a ), ∀ a, i.e., 𝒢 ( {𝐅_A,a} , {𝐅_D,a}) = 𝒢_0 ( {𝐅_A,a} , {𝐅_D,a}) + ∑_a ∈𝒜𝒢_a ( 𝐅_A,a , 𝐅_D,a) , where the coupled component 𝒢_0 (𝐅) is convex, continuous and Lipchitz gradient constinuous. Property 3: Multiple Constraints. Problem (<ref>) has equality constraints f_i( 𝐅_A,a , 𝐅_D,a) = 0 , i ∈ℐ_a, ∀ a, inequality constraints h_j( 𝐅_A,a , 𝐅_D,a) ≤ 0 , j ∈𝒥_a, ∀ a , and analog beamformer constraint 𝐅_A,a∈ℱ , ∀ a, where the set ℱ is determined by the topology of the phase shift network. The challenges for solving (<ref>) lie in the following three perspectives: 1) The HBF for the a-th AP, i.e., 𝐅_A,a and 𝐅_D,a, are highly coupled in objective function 𝒢( {𝐅_A,a} , {𝐅_D,a}), equality constraints (<ref>) and inequality constraints (<ref>). 2) The HBF for different APs are coupled in the objective function. In particular, if the objective function is complicated and non-convex, the problem (<ref>) would be extremely hard to solve. 3) Since the dimensions of analog beamformer 𝐅_A,a , ∀ a and the number of APs A are usually large, problem (<ref>) is a high-dimensional optimization problem, which results in high computational complexity. Our considered optimization problem (<ref>) differs from the existing decentralized consensus optimization problem (DCOP) <cit.> in the following two aspects. 1) In DCOP, each agent has its own private task, and multiple agents collaborate to accomplish the entire tasks. However, the problem (<ref>) is a single-task optimization problem, where multiple agents (APs) collaborate to complete the same task. 2) In DCOP, all the agents share the same decision variables. However, in problem (<ref>), each AP has its own decision variables (𝐅_A,a and 𝐅_D,a), and the decision variables of different APs are coupled in the objective function. §.§ Centralized ADMM Framework In this subsection, we review the conventional centralized optimization method to solve problem (<ref>). Centralized optimization usually transforms the original problem into multiple more tractable sub-problems. Then, all the sub-problems are solved together in a CPU, so that the overall objective function is optimized. Here, we introduce the centralized ADMM framework to solve problem (<ref>). To tackle the first challenge of solving problem (<ref>), we introduce auxiliary variables 𝐓 = [ 𝐓_1^H , ⋯ ,𝐓_A^H ]^H satisfying 𝐓_a = 𝐅_A,a𝐅_D,a , ∀ a and convert the optimization problem as min_{𝐅_A,a} , {𝐅_D,a} , 𝐓 𝒢( 𝐓) s.t. f_i( 𝐓_a ) = 0 , i ∈ℐ_a, ∀ a , h_j( 𝐓_a ) ≤ 0 , j ∈𝒥_a, ∀ a , 𝐅_A,a∈ℱ , ∀ a , 𝐓_a = 𝐅_A,a𝐅_D,a, ∀ a . Following the ADMM framework, we penalize the equality constraints (<ref>) into the objective function and obtain the following augmented Lagrangian (AL) minimization problem min_{𝐅_A,a} , {𝐅_D,a} , 𝐓 ℒ( {𝐅_A,a} , {𝐅_D,a}, 𝐓 , {𝐃_a }) s.t. (<ref>) - (<ref>) . 
where the scaled AL function is given by ℒ( {𝐅_A,a} , {𝐅_D,a}, 𝐓) = 𝒢( 𝐓) + ∑_a ∈𝒜𝕀_a ( 𝐅_A,a , 𝐅_D,a , 𝐓_a , 𝐃_a ) , with 𝕀_a ( 𝐅_A,a , 𝐅_D,a , 𝐓_a , 𝐃_a ) = ρ/2𝐓_a - 𝐅_A,a𝐅_D,a + 𝐃_a _F^2, 𝐃_a the dual variable, and ρ the penalty parameter. Problem (<ref>) can be iteratively solved with the following centralized ADMM framework: S1: 𝐓^k = min_𝐓 ∈𝒳 ℒ ( { 𝐅_A,a^k-1 } , { 𝐅_D,a^k-1 }, 𝐓 , { 𝐃_a^k-1 } ) S2: { 𝐅_A,a^k } = min_ { 𝐅_A,a } ∈ℱ ℒ ( { 𝐅_A,a } , { 𝐅_D,a^k-1 }, 𝐓^k , { 𝐃_a^k-1 } ) S3: { 𝐅_D,a^k } = min_ { 𝐅_D,a } ℒ ( { 𝐅_A,a^k } , { 𝐅_D,a }, 𝐓^k , { 𝐃_a^k-1 }) S4: Update Dual Variables: {𝐃_a^k} where 𝒳 = 𝒳_1 ∪𝒳_2 ⋯∪𝒳_A with 𝒳_a = {𝐓_a | f_i( 𝐓_a ) = 0 , h_j( 𝐓_a ) ≤ 0 , i ∈ℐ_a , j ∈𝒥_a}. The dual variables are typically updated as 𝐃_a^k = 𝐃_a^k-1 + 𝐓_a^k - 𝐅_A,a^k 𝐅_D,a^k , ∀ a. Although the centralized ADMM can obtain satisfactory performance in many HBF design scenarios, it has many drawbacks when it comes to multi-AP network scenarios: 1) With high-dimensional optimization problems (S1-S3) for multi-AP networks with HBF architecture, the centralized ADMM framework is computationally expensive such that it cannot be practically utilized to solve problem (<ref>). 2) Under the centralized ADMM framework, the CPU must have the information of all APs, such as CSI and parameter settings, which brings the CPU heavy tasks. 3) The centralized ADMM framework often requires much time to converge, which is unsuitable for real-time applications. To address these issues, we will propose a distributed optimization framework with affordable computational complexity, reduced information exchange among APs, and fast convergence speed. §.§ Proposed Distributed Optimization Framework In this subsection, we go beyond the above centralized ADMM framework and propose a novel distributed optimization framework. The core idea of distributed optimization is to decompose centralized objective function and constraints into distributed tasks such that they can be simultaneously implemented and the computational efficiency can be much increased. However, problem (<ref>) cannot be distributively solved since each of S2-S4 can be further decoupled into A distributed sub-problems, while S1 is coupled by the auxiliary variable 𝐓. To distributively solve problem (<ref>), we propose a novel distributed optimization framework, namely Proximal grAdieNt Decentralized ADMM (PANDA) framework, which modifies the centralized ADMM framework S1-S4 by decoupling S1 into A sub-problems, each of which is the function of 𝐓_a. To do so, we first give the following lemma to find a surrogate problem of S1 based on PG. Let 𝒢(𝐓) be a continuously differentiable function. For all 𝐓, the following inequality <cit.> holds: 𝒢(𝐓) ≤ 𝒢(𝐓^k) + α/2𝐓-𝐓^k_F^2 + {𝖳𝗋( ∇𝒢(𝐓)^H (𝐓-𝐓^k) ) } , where 𝐓^k is the point at the k-th iteration, and α is the Lipschitz constant. Please refer to <cit.>. Performing Lemma <ref> to the coupled part 𝒢_0 (𝐓) of the objective in S1, we replace ℒ ( {𝐅_A,a^k-1} , {𝐅_D,a^k-1}, 𝐓 , {𝐃_a^k-1} ) with its locally tight upper bound function at 𝐓^k and obtain the following update 𝐓^k = min_𝐓∈𝒳{{𝖳𝗋( ∇𝒢_0 (𝐓^k-1)^H (𝐓 - 𝐓^k-1) ) } + α/2𝐓 - 𝐓^k-1_F^2 + ∑_a ∈𝒜𝒢_a ( 𝐓_a ) + ρ/2∑_a ∈𝒜𝐓_a - 𝐅_A,a^k-1𝐅_D,a^k-1 + 𝐃_a^k-1_F^2 } where ∇𝒢_0 (𝐓) = [∇𝒢_0^H (𝐓_1) , ⋯ , ∇𝒢_0^H (𝐓_A)]^H is the first-order derivative, α is the Lipschitz constant of 𝒢_0 (𝐓). 
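The surrogate update above rests on the quadratic upper bound of the lemma. The following sketch checks that bound numerically for a simple convex quadratic choice of the smooth part, 𝒢_0(𝐓) = 𝐁𝐓_F^2 with Lipschitz constant α = 2λ_max(𝐁^H𝐁), and with the gradient evaluated at the reference point 𝐓^k as in the update; this toy objective is an assumption for illustration and is not the WSR-related objective used later.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, c = 6, 4, 3
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

def G0(T):                                   # smooth coupled part (toy choice)
    return np.linalg.norm(B @ T, 'fro') ** 2

def grad_G0(T):                              # gradient 2 B^H B T
    return 2.0 * B.conj().T @ B @ T

alpha = 2.0 * np.linalg.eigvalsh(B.conj().T @ B).max()   # Lipschitz constant of grad_G0

Tk = rng.standard_normal((n, c)) + 1j * rng.standard_normal((n, c))
for _ in range(5):
    T = rng.standard_normal((n, c)) + 1j * rng.standard_normal((n, c))
    upper = (G0(Tk)
             + np.real(np.trace(grad_G0(Tk).conj().T @ (T - Tk)))
             + alpha / 2.0 * np.linalg.norm(T - Tk, 'fro') ** 2)
    assert G0(T) <= upper + 1e-9
print("quadratic upper bound holds at all sampled points")
```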
Now, the objective function and constraints are separable, which means that the update (<ref>) can be further decoupled as 𝐓_a^k = min_𝐓_a ∈𝒳_a{{𝖳𝗋( ∇𝒢_0 (𝐓_a^k-1)^H (𝐓_a -𝐓_a^k-1) ) } + α/2𝐓_a - 𝐓_a^k-1_F^2 + 𝒢_a ( 𝐓_a ) + ρ/2𝐓_a - 𝐅_A,a^k-1𝐅_D,a^k-1 + 𝐃_a^k-1_F^2 } = Prox_𝒢_a , β^𝒳_a[ 1/β∇ℒ_a ( 𝐓_a^k - 1 , 𝐅_A,a^k - 1 , 𝐅_D,a^k - 1 , 𝐃_a^k - 1) ] , where ∇ℒ_a ( 𝐓_a^k - 1 , 𝐅_A,a^k - 1 , 𝐅_D,a^k - 1 , 𝐃_a^k - 1 ) = -∇𝒢_0(𝐓_a^k - 1) + α𝐓_a^k - 1 + ρ ( 𝐅_A,a^k - 1𝐅_D,a^k - 1 - 𝐃_a^k - 1 ), and β = α + ρ. The PG operator for a function 𝒢 (𝐓) at a given point 𝐓̂ is given by Prox_𝒢 , β^𝒳[ 𝐓̂] = min_𝐓∈𝒳𝒢(𝐓) + β/2𝐓 - 𝐓̂_F^2. Finally, the proposed PANDA framework consists of the following iterative steps: P1: 𝐓_a^k = Prox_𝒢_a , β^𝒳_a [ 1/β ∇ℒ_a ( 𝐓_a^k - 1 , 𝐅_A,a^k - 1 , 𝐅_D,a^k - 1 , 𝐃_a^k - 1 ) ] P2: 𝐅_A,a^k = min_𝐅_A,a ∈ℱ 𝕀_a ( 𝐅_A,a , 𝐅_D,a^k-1 , 𝐓_a^k , 𝐃_a^k-1 ) P3: 𝐅_D,a^k = min_𝐅_D,a 𝕀_a ( 𝐅_A,a^k , 𝐅_D,a , 𝐓_a^k , 𝐃_a^k-1 ) P4: Update Dual Variables: {𝐃_a^k} By applying the PANDA framework, P1-P4 can be distributively solved at each AP with local information, which significantly improves the beamforming design efficiency compared to conventional centralized optimization methods. The sequence {{𝐓_a^k } , {𝐅_A,a^k } , {𝐅_D,a^k }} generated by the proposed PANDA has the following properties: * The proposed PANDA framework modifies S1 in the centralized ADMM without affecting the monotonicity. * Under the mild conditions lim_k→∞𝐃_a^k+1 - 𝐃_a^k = 0, ∀ a, there exists a stationary point {{𝐓_a^⋆} , {𝐅_A,a^⋆} , {𝐅_D,a^⋆}}, which is the optimal solution to (<ref>). Please refer to SM Appendix <ref>. The proposed framework differs from the existing PG-based decentralized ADMM <cit.> in the following two aspects. 1) Existing papers <cit.> adopt PG to approximate the non-smooth part in the objective function to reduce the complexity. In contrast, we adopt PG to decouple the objective function into independent parts to facilitate distributed optimization. 2) Instead of optimizing the overall decision variables ({𝐅_A,a}, {𝐅_D,a}) at every agent (AP), our proposed framework optimizes only the subset of decision variables (𝐅_A,a, 𝐅_D,a) at the corresponding agent (AP), which decreases the computational complexity and reduces the backhaul signaling. In the next section, we will customize the proposed PANDA framework to distributively solve problem (<ref>). § DISTRIBUTED OPTIMIZATION TO PROBLEM (<REF>) In this section, we reformulate problem (<ref>) to facilitate the use of the proposed PANDA framework and illustrate the solution of (<ref>) in detail. §.§ The PANDA Framework of Problem (<ref>) §.§.§ Problem Reformulation The objective (<ref>) is the sum of non-convex logarithmic functions, which complicates the design and hinders the employment of the proposed PANDA framework. Additionally, the quartic radar MSE constraints in (<ref>) also present challenges in employing the proposed PANDA framework. Therefore, before developing the PANDA framework for problem (<ref>), we propose the following problem transformation. Step 1, Objective Transformation: We first deal with the objective function with the following proposition.
Exploiting fractional programming and introducing auxiliary variables r = [r_1 , ⋯ , r_U]^T and η = [η_1 , ⋯ , η_U]^T, objective (<ref>) can be reformulated as 𝒢( {𝐅_A,a} , {𝐅_D,a}) = -∑_u ∈𝒰w_uRate_u( {𝐅_A,a} , {𝐅_D,a}) = - ∑_u ∈𝒰2√(w_u( 1 + r_u)){η _u^*∑_a ∈𝒜 [ 𝐇_a^H 𝐅_A,a𝐅_D,a ]_u,u} + ∑_u ∈𝒰| η _u|^2( ∑_v ∈𝒰| ∑_a ∈𝒜 [ 𝐇_a^H 𝐅_A,a𝐅_D,a ]_u,v| ^2 + σ_C,u^2) - ∑_u ∈𝒰w_ulog( 1 + r_u) + ∑_u ∈𝒰w_ur_u, = 𝐁_1𝐅_F^2_𝒢_0 ( {𝐅_A,a} , {𝐅_D,a}) + ∑_a ∈𝒜{𝖳𝗋[ 𝐁_2,a𝐅_A,a𝐅_D,a]}_𝒢_a ( 𝐅_A,a , 𝐅_D,a)     - ∑_u ∈𝒰c_1,u(r_u,η_u) , where the fresh notations are defined as 𝐅 = [𝐅_D,1^H 𝐅_A,1^H , , 𝐅_D,A^H 𝐅_A,A^H ]^H, 𝐉_1 = 𝖣𝗂𝖺𝗀( η_1^* , ⋯ , η_U^*) , 𝐉_2 = 𝖣𝗂𝖺𝗀( √(w_1( 1 + r_1 ))η_1^*, ⋯ ,√(w_U( 1 + r_U ))η_U^* ) 𝐁_1 = 𝐉_1 [ 𝐇_1^H , ⋯ , 𝐇_A^H ] , 𝐁_2 = -2𝐉_a 𝐇_a^H c_1,u(r_u,η_u) = w_ulog ( 1 + r_u ) - w_ur_u - | η _u|^2 σ_C,u^2 . Please refer to <cit.>. After applying Proposition <ref>, 𝒢 ( {𝐅_A,a},{𝐅_D,a} ) is convex with respect to each variable with fixed others. Step 2: Radar MSE Simplification: To deal with the quartic constraint (<ref>), we present the following proposition. The quartic optimization min_ X XX^H - N I_F^2 can be inexactly solved by optimizing the following simpler quadratic problem. min_ X, Z X - √(N) Z_F^2 s.t. Z^H Z = I, where Z is a semiunitary matrix. Please refer to <cit.>. By plugging 𝐱= 𝐅_D,a^H 𝐅_A,a^H a_T( θ _l), 𝐳^H𝐳=P(θ_l) and N=Ψ_a to (<ref>) in Proposition <ref>, we reformulate the radar beampattern weighted MSE as MSE_a ( 𝐅_A,a , 𝐅_D,a ,𝐕_a , ζ_a) = 1/L∑_l = 1^L μ_a,la_T^H( θ _l)𝐅_A,a𝐅_D,a - ζ_av_a,l^H_F^2≤γ_a, where 𝐕_a=[𝐯_a,1,⋯,𝐯_a,L] ∈ℂ^U × L , ∀ a is the auxiliary variables, satisfying v_a,l_F^2 = P_a(θ_l) , ∀ a,l, and ζ_a = √(Ψ_a). §.§.§ Application of PANDA Framework With the above reformulation, the joint design problem (<ref>) can be recast as min_{𝐅_A,a} , {𝐅_D,a} , ζ, 𝐫 , η , {𝐕_a}𝒢( {𝐅_A,a} , {𝐅_D,a}) s.t. MSE_a ( 𝐅_A,a , 𝐅_D,a , 𝐕_a , ζ_a) ≤γ _a,∀ a , max_ϑ_a,t∈Θ_a𝒫_a ( 𝐅_A,a , 𝐅_D,a , ϑ_a,t) ≤Γ_a, ∀ a , 𝐅_A,a𝐅_D,a_F^2 = E,∀ a , | [ 𝐅_A,a]_m,n| = 1 ,∀ m,n,∀ a . Now, the challenge for solving (<ref>) lies in the coupling of 𝐅_A,a and 𝐅_D,a in the objective and constraints (<ref>)-(<ref>). To decouple 𝐅_A,a and 𝐅_D,a within and among (<ref>)-(<ref>), we introduce several linear constraints 𝐓_a = 𝐔_a = 𝐅_A,a𝐅_D,a , ∀ a and 𝐳_a,t^H = 𝐚_T( ϑ_t) 𝐅_A,a𝐅_D,a, ∀ a,t, penalize each of them, and formulate the AL minimization problem as min_{𝐅_A,a} , {𝐅_D,a}, Ψ, 𝐫 , η , {𝐓_a } , {𝐔_a } , {𝐙_a} , {𝐕_a} ℒ ( {𝐓_a } , {𝐔_a } , {𝐙_a} , {𝐅_A,a} , {𝐅_D,a} , {𝐃_a } ) s.t. MSE_a ( 𝐔_a , 𝐕_a , ζ_a) ≤γ _a,∀ a , max_t_a𝐳_a,t_a_F^2 ≤Γ_a, ∀ a , 𝐓_a _F^2 = E,∀ a , | [ 𝐅_A,a]_m,n| = 1 ,∀ m,n,∀ a , where t_a ∈ [1,⋯,T_a]^T, 𝐙_a = [𝐳_a,1 , ⋯ , 𝐳_a,T_a]. The AL function is defined as ℒ( {𝐓_a } , {𝐔_a } , {𝐙_a} , {𝐅_A,a} , {𝐅_D,a} , {𝐃_a }) = 𝒢( {𝐓_a }) + ∑_a ∈𝒜𝕀_a ( 𝐓_a , 𝐔_a , 𝐙_a , 𝐅_A,a , 𝐅_D,a , 𝐃_a ) , where 𝐃_a is a collection of all dual variables, i.e., 𝐃_a = {Ω_a, Λ_a, Φ_a }. 𝕀_a ( 𝐓_a , 𝐔_a , 𝐙_a , 𝐅_A,a , 𝐅_D,a , 𝐃_a ) = ρ/2𝐓_a - 𝐅_A,a𝐅_D,a + Ω_a _F^2 + ϱ/2𝐔_a - 𝐅_A,a𝐅_D,a + Λ_a _F^2 + λ/2𝐙_a - 𝐅_D,a^H 𝐅_A,a^H 𝐀_N,a + Φ_a_F^2 with 𝐀_N,a = [𝐚_T(ϑ_1) , ⋯ , 𝐚_T(ϑ_T_a)]. Now we can adopt the proposed distributed framework following the steps in the sequel. 
P1: 𝐓_a^k = Prox_𝒢_a , β^𝒳_1,a [ 1/β ∇ℒ_a ( 𝐓_a^k-1 , 𝐅_A,a^k-1 , 𝐅_D,a^k-1 , Ω_a^k - 1 ) ] P2: 𝐔_a^k = arg min_𝐔_a ∈𝒳_2,a 𝐔_a - 𝐅_A,a^k-1 𝐅_D,a^k-1 + Λ_a^k-1 _F^2 P3: 𝐙_a^k = arg min_𝐙_a ∈𝒳_3,a 𝐙_a - (𝐅_A,a^k-1 𝐅_D,a^k-1)^H 𝐀_N + Φ_a^k-1 _F^2 P4: 𝐅_A,a^k = arg min_𝐅_A,a ∈𝒳_4,a 𝕀_a ( 𝐓_a^k , 𝐔_a^k , 𝐙_a^k , 𝐅_A,a , 𝐅_D,a^k-1 ) P5: 𝐅_D,a^k = arg min_𝐅_D,a 𝕀_a ( 𝐓_a^k , 𝐔_a^k , 𝐙_a^k , 𝐅_A,a^k , 𝐅_D,a ) P6: Update Dual Variables: { 𝐃_a } where β = α + ρ and ∇ℒ_a ( 𝐓_a^k - 1 , 𝐅_A,a^k - 1 , 𝐅_D,a^k - 1 , Ω_a^k - 1 ) = -∇𝒢_0(𝐓_a^k - 1) + α𝐓_a^k - 1 + ρ ( 𝐅_A,a^k - 1𝐅_D,a^k - 1 - Ω_a^k - 1 ) with ∇𝒢_0(𝐓_a^k - 1) = 𝐇_a 𝐉_1^H 𝐉_1 ( ∑_i ∈𝒜Ξ_i^k-1 ) and Ξ_a = 𝐇_a^H 𝐓_a. 𝒳_1,a = {𝐓_a | 𝐓_a _F^2 = E }, 𝒳_2,a = {𝐔_a | MSE_a ( 𝐔_a , 𝐕_a , ζ_a ) ≤γ _a}, 𝒳_3,a = {𝐙_a | max_t_a 𝐳_a,t_a_F^2 ≤Γ_a }, and 𝒳_4,a = {𝐅_A,a | | [ 𝐅_A,a]_m,n | = 1 ,∀ m,n }. In what follows, we discuss respectively the solutions to P1-P5. §.§ Solution to Sub-problems §.§.§ Update 𝐓_a Given other variables, 𝐓_a can be updated by PG method as 𝐓_a^k = Prox_𝒢_a , β^𝒳_1,a[ 1/β∇ℒ( 𝐓_a^k-1 , 𝐅_A,a^k-1 , 𝐅_D,a^k-1 , Ω_a^k - 1) ] , According to the definition of PG operation, problem (<ref>) can be equivalently rewritten as min_𝐓_a {𝖳𝗋 ( 𝐁_2 𝐓_a )} + β/2𝐓_a - 1/β( -∇𝒢_0(𝐓_a^k - 1) + α𝐓_a^k-1 + ρ ( 𝐅_A,a^k-1𝐅_D,a^k-1 - Ω_a^k-1 ) ) _F^2 s.t. 𝐓_a _F^2 = E, The following theorem provides the solution to problem (<ref>). Problem (<ref>) is a quadratically constrained quadratic program (QCQP) with one constraint (QCQP-1), whose closed-form solution can be given by 𝐓_a^k = √(E)𝐓_a^k-1 / 𝐓_a^k-1_F . where 𝐓_a^k-1 is defined in SM Appendix <ref>. Please refer to SM Appendix <ref>. §.§.§ Update {𝐔_a , ζ_a} Given other variables, the sub-problem of updating (𝐔_a , Ψ_a) is equivalently rewritten as min_𝐔_a 𝐔_a - 𝐅_A,a^k-1𝐅_D,a^k-1 + Λ_a^k-1_F^2 s.t. MSE_a ( 𝐔_a , 𝐕_a , ζ_a ) ≤γ _a , which can be equivalently rewritten as min_𝐮_a 𝐮_a - 𝐝_a_F^2 s.t. 𝐮_a^H 𝐆_a𝐮_a - 2 {𝐠_a^H 𝐮_a }≤γ̃_a, where 𝐮_a = 𝖵𝖾𝖼(𝐔_a), 𝐝_a = 𝖵𝖾𝖼(𝐅_A,a^k-1𝐅_D,a^k-1 - Λ_a^k-1), 𝐆_a = ( 𝐈_U ⊗𝐆_1,a^H 𝐆_1,a ), 𝐠_a = 𝖵𝖾𝖼( 𝐆_1,a^H 𝐆_2,a ), and γ̃_a = γ_a - 𝐆_2,a_F^2, with 𝐆_a,1 = 𝖣𝗂𝖺𝗀 ( √(μ_a,1) , ⋯ ,√(μ_a,L) ) 𝐀_all^H, 𝐆_a,2 = ζ_a^k𝖣𝗂𝖺𝗀( √(μ_a,1) , ⋯ ,√(μ _a,L) ) (𝐕_a^k)^T, and 𝐀_all = [ 𝐚_T(θ_1), ⋯ ,𝐚_T(θ_L) ]. The following theorem provides the solution to problem (<ref>). Problem (<ref>) is a convex QCQP-1, whose closed-form solution is derived by analyzing Karush-Kuhn-Tucker (KKT) conditions. Please refer to SM Appendix <ref>. §.§.§ Update 𝐙_a Given other variables, the sub-problem of updating 𝐙_a can be equivalently rewritten as min_𝐙_a 𝐙_a - {𝐅_A,a^k-1𝐅_D,a^k-1}^H 𝐀_N + Φ_a^k-1_F^2 s.t. max_t_a𝐳_a,t_a_F^2 ≤Γ_a , Problem (<ref>) can be separated into T_a sub-problem as follows min_𝐳_a,t_a 𝐳_a,t_a - {𝐅_A,a^k-1𝐅_D,a^k-1}^H 𝐚_T( θ_t) + ϕ_a,t_a^k-1_F^2 s.t. 𝐳_a,t_a_F^2 ≤Γ_a , which is also a QCQP-1 whose optimal solution can be obtained by analyzing KKT conditions like problem (<ref>) in SM Appendix <ref>. Since above T_a sub-problems are independent to each other, the update of 𝐳_a,t_a can be performed in parallel. §.§.§ Update { F_ A , a} Given other variables, the sub-problem of updating F_ A , a can be equivalently rewritten as min_𝐅_A,a ρ/2𝐓_a^k - 𝐅_A,a𝐅_D,a^k-1 + Ω_a^k-1_F^2 + ϱ/2𝐔_a^k - 𝐅_A,a𝐅_D,a^k-1 + Λ_a^k-1_F^2 + λ/2𝐙_a^k - {𝐅_A,a𝐅_D,a^k-1}^H 𝐀_N + Φ_a^k-1_F^2 s.t. | [ 𝐅_A,a]_m,n| = 1 ,∀ m,n. The following theorem provides the solution to problem (<ref>). 
Problem (<ref>) is a quadratic program (QP) with constant modulus constraint, whose closed-form solution at the ℓ-th inner iteration can be given by 𝐅_A,a^[ℓ] = -exp{∠[ 𝐖_a^[ℓ - 1]] } . where 𝐖_a^[ℓ - 1] is defined in SM Appendix <ref>. Please refer to SM Appendix <ref>. §.§.§ Update {𝐅_D ,a } Given other variables, the sub-problem of updating 𝐅_D,a can be equivalently rewritten as min_𝐅_D,a ρ/2𝐓_a^k - 𝐅_A,a^k 𝐅_D,a + Ω_a^k-1_F^2 + ϱ/2𝐔_a^k - 𝐅_A,a^k 𝐅_D,a + Λ_a^k-1_F^2 + λ/2𝐙_a^k - {𝐅_A,a^k 𝐅_D,a}^H 𝐀_N + Φ_a^k-1_F^2 whose closed-form solution can be calculated as 𝐅_D,a^k = {( 𝐅_A,a^k )^H 𝐌_1,a^k-1𝐅_A,a^k }^ - 1( 𝐅_A,a^k)^H 𝐌_2,a^k-1 , with 𝐌_1,a^k-1 = (ρ+ϱ) 𝐈_N_T + λ𝐀_N^H 𝐀_N, and 𝐌_2,a^k-1 = ρ (𝐓_a^k + Ω_a^k - 1 )+ ϱ (𝐔_a^k + Λ_a^k - 1 ) + λ𝐀_N ( 𝐙_a^k + Φ_a^k-1 )^H. §.§.§ Update {𝐫, η, {𝐕_a }, ζ} The optimal solutions to auxiliary variables 𝐫 and η can be derived by first-order derivatives as r_u = | [Ξ]_u,u|^2/∑_v ∈𝒰 ,v u| [Ξ]_u,v|^2 + σ _C,u^2 , η_u = √(w_u( 1 + r_u)) [Ξ]_u,u/∑_v ∈𝒰| [Ξ]_u,v|^2 + σ _C,u^2 . Additionally, the optimal solutions to auxiliary variables 𝐕_a and ζ_a can be directly derived by first-order derivatives as 𝐯_a,l^k = √(P_a(θ _l)){𝐚_T^H( θ_l ) 𝐔_a^k}^H / 𝐚_T^H( θ_l ) 𝐔_a^k_F , ζ_a^k = ∑_l = 1^L μ _l {𝐚_T^H(θ_l) 𝐔_a^k{𝐯_a,l^k}^* } / ∑_l = 1^L μ _l𝐯_a,l^k_F . §.§ Summary We summarize the above update procedures in Algorithm <ref>, where steps 4-11 are distributively executed at the corresponding AP until the convergence condition is reached. §.§.§ Complexity Analysis We discuss the complexity of the proposed Algorithm <ref> as detailed below. Specifically, we take the a-th AP as an example; its main computational complexity comes from steps 4 to 11. Updating 𝐫 and η requires a complexity of 𝒪( AN_U^2 N_T ). Updating ζ_a with its closed-form solution requires a complexity of 𝒪( L N_T U ). Updating 𝐓_a and 𝐕_a with closed-form solutions requires 𝒪( N_T N_RF U ) and 𝒪( L N_T U ), respectively. Updating 𝐔_a and {𝐳_a,t} by analyzing KKT conditions needs complexities of 𝒪( N_T^2U log(n) ) and 𝒪( TU^2 log(n) ), respectively. Updating 𝐅_A,a by Algorithm <ref> needs a complexity of 𝒪( K_1 N_TN_RF^2 ), where K_1 is the number of inner iterations. Updating 𝐅_D,a with its closed-form solution needs 𝒪( N_TN_RF^2 + N_RFU^2 ). Overall, the complexity of the proposed algorithm in each AP is 𝒪( K_0 ( N_T^2U log(n) + TU^2 log(n) + K_1 N_TN_RF^2 ) ), where K_0 is the number of outer iterations. §.§.§ Practical Implementation The workflow diagram of the proposed distributed algorithm is shown in Fig. <ref>. The benefits of implementing the proposed PANDA framework in multi-AP scenarios in practice are summarized as follows. Benefit 1: Enhanced Computational Efficiency. Based on the proposed PANDA framework, each AP computes 𝐅_A,a and 𝐅_D,a in parallel to boost the real-time performance and improve the computational efficiency, which is the primary benefit of distributed optimization. Benefit 2: Reduced Information Exchange. Steps 6-12 only require the CSI of the individual AP, without cross messages from other APs. However, updating 𝐫, η and 𝐓_a in steps 4-5 requires the CSI of all APs and the auxiliary variables 𝐓_a, ∀ a from the previous iteration. This means each AP must have perfect knowledge of the CSI of all the APs and needs to frequently exchange the local information 𝐓_a. Fortunately, we find that the update of 𝐫, η and 𝐓_a only requires Ξ_a= 𝐇_a^H 𝐓_a ∈ℂ^U × U from other APs. Therefore, instead of exchanging 𝐓_a and sharing CSI, we can exchange Ξ_a as an alternative, which significantly reduces backhaul signaling.
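To make the structure of the per-AP updates concrete, the following Python sketch runs one toy instance of a PANDA-style consensus loop: a proximal-gradient step with power-sphere projection for 𝐓_a, simplified analog/digital updates, and dual ascent on the constraint 𝐓_a = 𝐅_A,a𝐅_D,a. It is only an illustration under stated assumptions — the smooth term, the real-valued toy channels, the penalty weights, and the unconstrained analog update (no constant-modulus or radar constraints) are stand-ins, not the algorithm of this paper.

import numpy as np

rng = np.random.default_rng(0)
A, N_T, N_RF, U = 3, 8, 4, 4      # APs, transmit antennas, RF chains, users (assumed toy sizes)
E, alpha, rho = 1.0, 1.0, 1.0     # power budget and step/penalty weights (assumed)
beta = alpha + rho

H = [0.1 * rng.standard_normal((N_T, U)) for _ in range(A)]   # toy real-valued channels
T = [rng.standard_normal((N_T, U)) for _ in range(A)]
FA = [rng.standard_normal((N_T, N_RF)) for _ in range(A)]
FD = [rng.standard_normal((N_RF, U)) for _ in range(A)]
D = [np.zeros((N_T, U)) for _ in range(A)]                    # scaled dual variables

def grad_G0(Ta, Ha):
    # gradient of a stand-in smooth coupling term 0.5*||Ha^T Ta||_F^2 (assumption)
    return Ha @ (Ha.T @ Ta)

for k in range(50):
    for a in range(A):
        # P1: proximal-gradient point, then projection onto the power sphere ||T_a||_F^2 = E
        V = (-grad_G0(T[a], H[a]) + alpha * T[a] + rho * (FA[a] @ FD[a] - D[a])) / beta
        T[a] = np.sqrt(E) * V / np.linalg.norm(V)
        # P2: analog update (plain least squares here; the paper enforces constant modulus instead)
        FA[a] = (T[a] + D[a]) @ np.linalg.pinv(FD[a])
        # P3: digital update given the analog part
        FD[a] = np.linalg.pinv(FA[a]) @ (T[a] + D[a])
        # P4: dual ascent on the consensus constraint T_a = F_A,a F_D,a
        D[a] = D[a] + T[a] - FA[a] @ FD[a]

res = max(np.linalg.norm(T[a] - FA[a] @ FD[a]) for a in range(A))
print("largest consensus residual across APs:", round(res, 4))

Each AP only touches its own blocks inside the loop, which is the point of the decomposition; in the paper the cross-AP information reduces to the small matrices Ξ_a discussed above.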
§ NUMERICAL RESULTS In this section, numerical results are presented to evaluate the performance of the proposed PANDA framework in the Co-ISACNet. §.§ Simulation Parameters Unless otherwise specified, in all simulations, we assume each AP equipped with N_T = 32 transmit antennas and N_RF = 4 RF chains serves U=4 downlink UEs. The power of each AP transmitter is set as E=100 mW. The transmission pulse is T_0=10 ms and the bandwidth is B=150 MHz. For the channel model, we adopt Saleh-Valenzuela (SV) channel model <cit.> and express 𝐡_a,u as 𝐡_a,u = √(ħ_a,u/N_P)( κ_a,u^L𝐚_T( ϕ_a,u^0 ) + κ_a,u^N∑_p=1^N_p-1𝐚_T( ϕ_a,u^p ) ), where N_P = 10 denotes the number of paths. ϕ_a,u^0 denotes the angles of departure (AoD) associated with direct link between the a-th AP and u-th UE. ϕ_a,u^p denotes AoD of p-th NLOS path between a-th AP and u-th UE, which is assumed to follow the uniform distribution. ħ_a,u [dB] = ħ_0 + 20 log_10( d_a,u ) denotes the path loss, in which d_a,u is the distance between the a-th AP and the u-th UE and ħ_0 = 60dB is the path loss at the reference distance d=1 m. κ_a,u^L = √(κ/(1+κ)) and κ_a,u^N= √(1/(1+κ)) are the factor for LoS and NLoS paths, with κ = 6 being the Rician factor. Generally, the targets and the clutter sources are located at different spatial angles for different APs. In the simulation, we calculate the spatial angle θ_o,a of o-th target for a-th AP through geometrical relation. Then, the pre-defined spectrum P_a(θ_l) is given by P_a(θ_l) = 1 when θ_l ∈ [ θ_o,a-Δ , θ_o,a+Δ], and P_a(θ_l) = 0 otherwise, where Δ = 4^∘. The spatial angle θ_q,a of q-th clutter source for a-th AP can be similarly calculated. Then, the notch region can be calculated as Θ_a = [ θ_1,a-Δ̅ , θ_1,a+Δ̅] ∪⋯∪ [ θ_Q,a-Δ̅ , θ_Q,a+Δ̅], where Δ̅ = 2^∘. The radar MSE thresholds and notch depth of different APs are, respectively, set as the same, i.e., γ_a = γ and Γ_a = Γ. The radar path loss model is the same as the above communication model. The radar noise power is set as σ_R,u^2 = -90dBm. §.§ Baseline Schemes For comparison, the proposed distributed PANDA (Dis-PANDA) algorithm is compared with the following baseline schemes. §.§.§ Centralized ADMM (Cen-ADMM) This scheme solves (<ref>) in a centralized manner, whose detailed procedure can be derived by following framework in the Sec. II-A. §.§.§ Semi-distributed two-stage (Semi-Dis TS) This scheme considers an indirect HBF design method. Specifically, we first design the fully-digital (FD) beamformer for the communication-only case on the CPU side. Then, we distributively optimize the HBF to approximate to the FD beamformer subject to radar constraints. §.§.§ Time-division duplex ISAC with HBF (HBF TDD Mode) To show more explicitly the advantages of CoCF, we consider a scheme adopting the TDD transmission, where only one AP performs ISAC at the same time. Besides, the conventional FD architecture is also included to indicate the system performance upper-bound. Note that all the numerical schemes are analyzed using Matlab 2020b version and performed in a standard PC with Intel(R) CPU(TM) Core i7-10700 2.9 GHz and 16 GB RAM. Without loss of generality, the results are averaged over 100 channel realizations. §.§ Scenario 1: Single Target with Clutter Sources In this scenario, we evaluate the Co-ISACNet performance in the presence of multiple clutter sources. Specifically, we assume that there are A = 3 APs located at the coordinates (0m, 0m), (90m, 0m), and (45 m,45√(3)m), respectively. 
Additionally, we position a target at (33m, 26m) and Q=2 clutter sources at (28m, 36m) and (51m, 26m), respectively. The radar MSE is set as γ=4. §.§.§ Impact of the notch depth threshold In Fig. <ref>, we evaluate the impact of the notch depth threshold Γ by plotting the average sum rate versus Γ when the radar MSE γ=4. As expected, there exists a trade-off between communication and radar performance. Specifically, the achievable sum rate increases with the increase of the radar notch depth threshold Γ. This is because when the intended radar notch depth is smaller, fewer design resources in the optimization problem can be used, resulting in degraded communication performance. The performance of the proposed Dis-PANDA is almost the same as that of the Cen-ADMM regardless of Γ, where the slight performance loss is due to the fact that PG can only provide a suboptimal solution. In addition, the proposed Co-ISACNet achieves better performance than the ISAC with TDD mode thanks to the diversity provided by multiple APs. §.§.§ Impact of the number of transmit antennas In Fig. <ref>, we show the sum rate of the proposed Dis-PANDA with respect to the number of transmit antennas N_TX when the radar MSE γ=4 and notch depth Γ=-35dB. As expected, the system sum rate improves as the number of transmit antennas increases, which offers more antenna diversity and larger beamforming gain. Besides, we observe that the gap between the proposed joint design method and the indirect Semi-Dis TS design method widens, which verifies the necessity of the joint beamforming design. Interestingly, we observe that the performance loss between the proposed Dis-PANDA and Cen-ADMM becomes larger when N_TX decreases, which demonstrates that Dis-PANDA performs well for large-scale MIMO systems. §.§.§ Transmit beampattern behaviours In Fig. <ref>, we depict the transmit beampattern of AP 1 and AP 3 with γ=4. From Figs. <ref>(a) and <ref>(b), we can observe that the transmit power mainly concentrates around the target angle with nearly the same mainlobe peak and sidelobe level. Besides, we also observe that the transmit beampattern can achieve the desired notches at the AP and clutter source angles, which validates the efficiency of the proposed Dis-PANDA algorithm. Combining Figs. <ref> and <ref>, we can conclude that a trade-off exists between communication sum rate and radar notch threshold Γ. Therefore, we should choose a proper Γ to balance communication sum rate and radar beampattern performance. §.§.§ Radar detection performance In Fig. <ref>, we assess the detection performance of the designed HBF considering different notch depths Γ. The probability of detection Pr_D is derived by adopting the proposed cooperative sensing detector. As expected, the smaller Γ, the higher Pr_D for all considered algorithms, which is because a smaller Γ can achieve sharp nulls at the clutter source angles to resist the strong clutter sources. Additionally, compared with the conventional ISAC with TDD mode, the proposed Co-ISACNet can achieve a better detection performance benefiting from the diversity provided by multiple APs. §.§ Scenario 2: Multiple Targets In this scenario, we consider the case where the Co-ISACNet detects multiple targets. Specifically, we assume that A = 4 APs are located at (0m, 0m), (90m, 0m), (0m, 90m) and (90m, 90m), respectively. Besides, we assume that the two targets are located at (20m, 50m) and (80m, 45m), respectively. The notch depth is set as Γ=-30dB. §.§.§ Impact of the radar MSE threshold In Fig.
<ref>, we evaluate the impact of radar MSE threshold γ by plotting the average sum rate versus γ when the notch depth Γ=-30dB. A similar conclusion can be drawn from Fig. <ref> that the proposed Dis-PANDA achieves nearly the same sum rate as that by Cen-ADMM, and achieves superior performance compared with ISAC with TDD mode and indirect Semi-Dis TS methods. We can also expect that the sum rate performance becomes better with increasing γ, which follows the same reason as Fig. <ref>. Besides, we note that compared with radar notch depth Γ, the radar MSE γ has a more significant impact on the sum rate. In Fig. <ref>, we investigate the relationship between radar MSE threshold γ and minimum sum radar SINR min_o{∑_a∈𝒜SINR_a,o} , ∀ o. In Fig. <ref>, we can observe that as the radar MSE threshold γ increases, the minimum sum radar SINR decreases. This is because a lower radar MSE allows for more concentrated energy around the target, leading to a higher radar SINR. Furthermore, we find that by increasing the number of radar receive antennas N_R, the minimum sum radar SINR also increases. This improvement is attributed to the fact that a larger number of receive antennas can achieve higher receiver beamforming gains. §.§.§ Impact of the number of RF chains In Fig. <ref>, we show the average sum rate as a function of the number of RF chains N_RF. As we can predict, for different radar MSE threshold γ, the average sum rate will first increase and then gradually saturate with the growth of N_RF. In this case, there is little growth in sum rate beyond about N_RF≥ 2U, where the sum rate obtained by the HBF architecture can approximate the that of FD architecture. Besides, we observe only a slight performance loss when N_RF = U, which confirms that the HBF scheme can dramatically reduce the number of RF chains with acceptable performance loss. §.§.§ Transmit beampattern behaviours In Fig. <ref>, we depict the transmit beampattern of AP 1 and AP 2 with Γ=-30dB. The results show that the transmit power mainly concentrates around the two target angles, while achieving notch at the other AP angles. We also observe that the lower radar MSE γ, the higher the mainlobe peaks and lower sidelobe level. Combining Figs. <ref> and <ref>, we can conclude that the proposed algorithm can flexibly control the beampattern. §.§.§ Radar Detection performance In Fig. <ref>, we analyze the radar detection performance of the proposed Co-ISACNet by using the ROC curve. As can be observed from Fig. <ref>, the probability of detection of all methods increases with the probability of false alarm. Besides, Fig. <ref> shows that the HBF with the smaller γ has better detection performance, which implies that the beampattern behaviors impact the detection performance. Additionally, compared with the conventional ISAC with TDD mode, the proposed Co-ISACNet can achieve a remarkable improvement of detection performance thanks to the diversity provided by multiple APs, which verifies the superiority of the proposed Co-ISACNet. §.§ Convergence of the Algorithm In Fig. <ref>, we plot the average sum rate versus the CPU time (Sec) for two scenarios. We can observe from Fig. <ref> that the proposed Dis-PANDA always converges within finite iterations with different parameter settings. Besides, the proposed Dis-PANDA achieves almost the same average sum rate as that obtained by Cen-ADMM in less computational time, which validates the time efficiency of the proposed algorithm. 
Additionally, the semiDis-TS costs much less computational time but has worse sum rate than Cen-ADMM. § CONCLUSION We proposed a novel Co-ISACNet, where a CPU is deployed to control multiple APs to cooperatively provide communication services to users and detect multiple targets of interest. A joint HBF optimization problem is formulated with the aim of maximizing the average sum rate of the considered network while satisfying the constraint of beampattern similarity for radar sensing. To reduce the computational burden of the CPU and take advantage of the distributed APs, we propose a novel PANDA framework to solve the general joint HBF design problem in a distributed manner. Then, we customize the proposed PANDA framework to solve the joint HBF design problem of the Co-ISACNet. Numerical results confirmed that our proposed PANDA based algorithm can achieve nearly the same performance as the centralized algorithm with less computational time. Moreover, our results revealed that Co-ISACNet is an efficient network to improve both radar and communication performance over conventional ISAC with a single transmitter. Based on this initial work on Co-ISACNet, there are many issues worth studying for future research, such as asynchronous distributed optimization, wideband waveform design, optimal Co-ISACNet design, and scenarios for target estimation. IEEEtran § PROOF OF COOPERATIVE SENSING DETECTOR IN (<REF>) Based on (<ref>), we formulate the binary hypothesis test as y̅_a = {[ ξ_a,a,0x̅_a,a,0 + ∑_q∈𝒬ξ_a,a,qx̅_a,a,q + n̅_R,a, ℋ_1; ∑_q∈𝒬ξ_a,a,qx̅_a,a,q + n̅_R,a, ℋ_0; ]. The probability density functions (PDFs) of the received signals y̅_a under hypothesis ℋ_0 and ℋ_1 are respectively given by f(y̅_a|ℋ_0) = 1/πσ_E,a^2exp{-|y̅_a|^2/σ_E,a^2} f(y̅_a|ℋ_1) = 1/πσ_E,a^2exp{-|y̅_a - ξ_a,a,0x̅_a,a,0|^2/σ_E,a^2} where σ_E,a^2 = 𝐱_a^Hς_a𝐱̅_a + σ̅_R^2𝐰_a_F^2, 𝐱̅_a = [x̅_a,a,1 , ⋯ , x̅_a,a,Q]^T and ς_a = 𝖣𝗂𝖺𝗀[ς_a,a,1 , ⋯ , ς_a,a,Q]^T. Then, the joint PDFs of the combined received signal vector 𝐲̅ = [y̅_1 , ⋯, y̅_A]^T can be written as f(𝐲̅|ℋ_0) = 1/π^A∏_a∈𝒜σ_E,a^2exp{ - ∑_a∈𝒜|y̅_a|^2/σ_E,a^2} f(𝐲̅|ℋ_1) = 1/π^A∏_a∈𝒜σ_E,a^2exp{- ∑_a∈𝒜|y̅_a - ξ_a,a,0x̅_a,a,0|^2/σ_E,a^2} To obtain a practical detector, we resort to the GLRT, which is equivalent to replacing all the unknown parameters with their maximum likelihood estimates (MLEs). In other words, the GLRT detector in this case is obtained from max_ξ_0f(𝐲̅|ℋ_1)/f(𝐲̅|ℋ_0)≷_ℋ_0^ℋ_1𝒯_0 where ξ_0 = [ξ_1,1,0 , ⋯ , ξ_A,A,0]^T. Thus, based on MLE, the estimation of ξ_0 can be given by ξ̅_a,a,0 = y̅_a / x̅_a,a,0. Inserting this MLE of ξ_0 into (<ref>) leads to max_ξ_0f(𝐲̅|ℋ_1)/f(𝐲̅|ℋ_0) = exp{∑_a∈𝒜|y̅_a|^2/σ_E,a^2} = ≷_ℋ_0^ℋ_1𝒯_0 By simplifying (<ref>), we have ∑_a∈𝒜|y̅_a|^2/σ_E,a^2 = ≷_ℋ_0^ℋ_1𝒯 Thereby, this proof is completed. § PROOF OF PROBABILITY OF DETECTION IN (<REF>) Let ẏ_a = y̅_a / σ_E,a. The sufficient statistic ϖ in (<ref>) can be reformulated as ϖ = ∑_a∈𝒜|y̅_a|^2/σ_E,a^2 = ∑_a∈𝒜 |ẏ_a|^2. Then we have the following two results: * Under ℋ_0, the mean and covariance of ẏ_a can be expressed as 𝔼{ẏ_a } = 0, 𝔻{ẏ_a } = 1. * Under ℋ_1, the mean and covariance of ẏ_a can be expressed as 𝔼{ẏ_a } = ξ_a,a,0x̅_a,a,0/σ_E,a, 𝔻{ẏ_a } = 1. From the results (<ref>) and (<ref>) we have the following observations: * From (<ref>), we know that ϖ under ℋ_0 is standard chi-square distribution with 2A DoFs, i.e., ϖ∼χ_(2A)^2. Therefore, the probability of false-alarm can be expressed as Pr_FA = Pr(g ≥𝒯|ℋ_0) = Pr(1/2χ_(2A)^2 ≥𝒯) = Pr(χ_(2A)^2 ≥ 2 𝒯) = 1 - F_χ_(2A)^2(1-2 𝒯). 
Thus, the detection threshold can be set according to a desired probability of false-alarm Pr_FA, i.e., 𝒯 = 1/2 F_χ_(2A)^2^-1( 1-Pr_FA ), where F_χ_(A)^2^-1 represents the inverse CDF of the chi-square distribution with order A. * From (<ref>), we note that ϖ under ℋ_1 is noncentral chi-square distribution with 2A DoFs and noncentrality parameter 𝔑 = ∑_a∈𝒜|ξ_a,a,0x̅_a,a,0|^2/σ_E,a^2, i.e., ϖ∼χ_(2A)^'2(𝔑). Thus, the probability of detection can be derived as Pr_D = ℚ_M^A( √(2𝔑) , √(2𝒯)) By defining SINR_a = |ξ_a,a,0x̅_a,a,0|^2 / σ_E,a^2 and plugging 𝒯 = 1/2 F_χ_(2A)^2^-1( 1-Pr_FA ) into (<ref>), we have Pr_D = ℚ_M^A( √(2∑_a∈𝒜SINR_a) , √(F_χ_(2A)^2^-1(1-Pr_FA))) Thus, we complete this proof. § PROOF OF LEMMA <REF> We start by proving the first property of the proposed PANDA in Lemma <ref> as follows. For notation simplicity, we define 𝒮_a( 𝐓_a , 𝐅_A,a , 𝐅_D,a , 𝐃_a ; 𝐓_a^k) = Prox_𝒢_a , β^𝒳_a[ 1/β∇ℒ_a ( 𝐓_a , 𝐅_A,a , 𝐅_D,a) ] . Then, according to the procedure of PG, we have the following inequalities ∑_a ∈𝒜𝒮_a ( 𝐓_a , 𝐅_A,a^k , 𝐅_D,a^k , 𝐃_a^k ; 𝐓_a^k) ≥ℒ( {𝐅_A,a^k } , {𝐅_D,a^k }, 𝐓 , {𝐃_a^k }) , ∑_a ∈𝒜𝒮_a ( 𝐓_a^k , 𝐅_A,a^k , 𝐅_D,a^k , 𝐃_a^k ; 𝐓_a^k) = ℒ( {𝐅_A,a^k } , {𝐅_D,a^k }, 𝐓^k , {𝐃_a^k }) . From the minimization of 𝐓_a , ∀ a in P1, we have 𝒮_a( 𝐓_a^k+1 , 𝐅_A,a^k , 𝐅_D,a^k , 𝐃_a^k ; 𝐓_a^k) ≤𝒮_a ( 𝐓_a^k , 𝐅_A,a^k , 𝐅_D,a^k , {𝐃_a^k } ; 𝐓_a^k) . Therefore, using (<ref>) and (<ref>), we obtain ℒ( {𝐅_A,a^k} , {𝐅_D,a^k }, 𝐓^k+1 , {𝐃_a^k }) ≤ℒ( {𝐅_A,a^k } , {𝐅_D,a^k }, 𝐓^k , {𝐃_a^k }) . Together with the update of 𝐅_A,a and 𝐅_D,a, we have ℒ( {𝐅_A,a^k+1} , {𝐅_D,a^k+1}, 𝐓^k+1 , {𝐃_a^k }) ≤ℒ( {𝐅_A,a^k } , {𝐅_D,a^k }, 𝐓^k , {𝐃_a^k }) . The above inequality (<ref>) shows that the value of the augmented Lagrangian is decreasing with respect to (w.r.t) the primal variables {𝐅_A,a} , {𝐅_D,a}, 𝐓 iteratively. Besides, the augmented Lagrangian is increasing w.r.t the dual variables {𝐃_a } since the dual ascent is implemented in P4. This indicates that PANDA exhibits the same convergence behaviour as that of conventional centralized ADMM, which completes the proof. Then, we prove the second property of the proposed PANDA in Lemma <ref>. Given that we assume lim_k→∞𝐃_a^k+1 - 𝐃_a^k = 0, and that we update the dual variables in a dual ascent manner in P4, we have lim_k→∞𝐓_a^k+1 - 𝐅_A,a^k+1𝐅_D,a^k+1 = 0, ∀ a ∈𝒜. As demonstrated in Property 3 of the optimization problem (<ref>), the equality constraints (<ref>) and inequality constraints (<ref>), denoted as 𝒳, are closed and continuous. Recalling the update of {𝐓_a } is achieved by PG method, as below 𝐓_a^k = Prox_𝒢_a , β^𝒳_a[ 1/β∇ℒ_a ( 𝐓_a^k - 1 , 𝐅_A,a^k - 1 , 𝐅_D,a^k - 1) ]. This implies that {𝐓_a } is always projected onto the closed and continuous set 𝒳, such that {𝐓_a^k } remains always bounded. Since we consider the HBF design problem, the analog beamformers {𝐅_A,a^k } are subject to the constant modulus constraint. Therefore, analog beamformers {𝐅_A,a^k } are also bounded. The update of digital beamformers {𝐅_D,a^k } in P3 is an unconstrained optimization problem, whose closed-form solution can be given by 𝐅_D,a^k = {( 𝐅_A,a^k )^H 𝐅_A,a^k }^ - 1( 𝐅_A,a^k)^H ( 𝐓_a^k-1 - 𝐃_a^k-1 ). Since {𝐓_a^k } and {𝐅_A,a^k } are bounded, the 𝐅_D,a^k is bounded. Therefore, the sequence {{𝐓_a^k } , {𝐅_A,a^k } , {𝐅_D,a^k }} is bounded. Hence, there exists a stationary point {{𝐓_a^⋆} , {𝐅_A,a^⋆} , {𝐅_D,a^⋆}} such that lim_k→∞𝐓_a^k = 𝐓_a^⋆, lim_k→∞𝐅_A,a^k = 𝐅_A,a^⋆, lim_k→∞𝐅_D,a^k = 𝐅_D,a^⋆. 
Based on the first part of this proof and the boundness of the AL function ℒ( {𝐅_A,a} , {𝐅_D,a}, {𝐓_a} , {𝐃_a }), we have lim_k→∞ℒ( {𝐅_A,a^k} , {𝐅_D,a^k }, {𝐓_a^k} , {𝐃_a^k }) = ℒ( {𝐅_A,a^⋆} , {𝐅_D,a^⋆}, {𝐓_a^⋆} , {𝐃_a^⋆}) , which implies that lim_k→∞ 𝐓_a^k - 𝐅_A,a^k𝐅_D,a^k = 𝐓_a^⋆ - 𝐅_A,a^⋆𝐅_D,a^⋆ = 0, ∀ a ∈𝒜 lim_k→∞ 𝐃_a^k = 𝐃_a^⋆ = 0, ∀ a ∈𝒜 and that the stationary point {{𝐓_a^⋆} , {𝐅_A,a^⋆} , {𝐅_D,a^⋆}} is an optimal solution. The proof is complete. § PROOF OF THEOREM <REF> Specifically, by introducing a multiplier ϵ_1,a∈ℝ for the power constant, we obtain the following Lagrangian function ℒ_1,a = ϵ_1,a ( 𝐓_a _F^2 - E ) + {𝖳𝗋 ( 𝐁_2 𝐓_a )} + β/2𝐓_a - 1/β( -∇𝒢_0(𝐓_a^k - 1) + α𝐓_a^k-1 + ρ ( 𝐅_A,a^k-1𝐅_D,a^k-1 - Ω_a^k-1 ) ) _F^2 . Setting the gradient of the Lagrangian to zero, we have 𝐓_a = 𝐓_a^k-1 / (α + ρ + 2ϵ_1,a) , where 𝐓_a^k-1 = -𝐁_2^H - ∇𝒢_0(𝐓_a^k - 1) + α𝐓_a^k-1+ρ(𝐅_A,a^k-1𝐅_D,a^k-1 - Ω_a^k-1 ). To determine the value of ϵ_1,a, we plug (<ref>) into power budget constraint 𝐓_a _F^2 = E and obtain the optimal solution 𝐓_a as 𝐓_a^k = √(E)𝐓_a^k-1/𝐓_a^k-1_F . Thus, we complete this proof. § PROOF OF THEOREM <REF> Specifically, the Lagrangian function of problem (<ref>) can be expressed as ℒ_2,a = 𝐮_a - 𝐝_a _F^2 + ϵ_2,a( 𝐮_a^H 𝐆_a𝐮_a - 2 {𝐠_a^H 𝐮_a } - γ̃_a ) , where ϵ_2,a∈ℝ^+ is a multiplier associated with 𝐮_a^H 𝐆_a𝐮_a - 2 {𝐠_a^H 𝐮_a }≤γ̃_a. Then the corresponding KKT conditions are given by 𝐮_a = ( 𝐈_N_T + ϵ_2,a𝐆_a)^-1 ( 𝐝_a + ϵ_2,a𝐠_a ) 𝐮_a^H 𝐆_a 𝐮_a - 2 { 𝐠_a^H 𝐮_a } ≤γ̃_a ϵ_2,a( 𝐮_a^H 𝐆_a 𝐮_a - 2 { 𝐠_a^H 𝐮_a } - γ̃_a ) = 0 ϵ_2,a ≥0 Accordingly, the optimal solution 𝐮_a to (<ref>) can be determined in the following two cases: ∙ Case 1: For ϵ_2,a = 0, the optimal solution of 𝐮_a is given by 𝐮_a = 𝐝_a, which must satisfy the condition (<ref>). ∙ Case 2: For ϵ_2,a > 0, from (<ref>) and (<ref>), we have 𝐮_a^H 𝐆_a𝐮_a - 2 {𝐠_a^H 𝐮_a } = γ̃_a. To determine ϵ_2,a, we plug (<ref>) into the equality constraint in (<ref>) and obtain the following equality h( ϵ_2,a) = ∑_n=1^N_TU| [𝐝_a]_n + ϵ_2,a[𝐠_a]_n /1 + ϵ_2,aμ_n |^2 + 2 {∑_n=1^N_TU [𝐠_a]_n^* [𝐝_a]_n + ϵ_2,a[𝐠_a]_n /1 + ϵ_2,aμ_n } - γ̃_a = 0. where μ_n is n-th singular value of 𝐆_a with μ_1 ≤μ_2 ≤ , ⋯, ≤μ_N_TU. Then, the derivative of h( ϵ_2,a ) is given by h'( ϵ_2,a) = - 2 ∑_n=1^N_TU| [𝐠_a]_n + ϵ_2,a[𝐝_a]_n |^2/( 1 + ϵ_2,aμ_n )^2 < 0, for all ϵ_2,a > - 1 / μ_1. Combining (<ref>) and (<ref>), we know h(ϵ_2,a) is monotonic in the possible region (0 , +∞], h(0) > 0, and lim_ϵ_2,a→ +∞ h(ϵ_2,a) ≤ 0. Therefore, we can find the unique (optimal) solution ϵ_2,a^⋆ by bisection or Newton’s method. By substituting ϵ_2,a^⋆ into (<ref>), the optimal solution to 𝐮_a is obtained. Thereby, this proof is completed. § PROOF OF THEOREM <REF> Given other variables, the sub-problem of updating F_ A , a can be equivalently rewritten as min_𝐅_A,a ρ/2𝐓_a^k - 𝐅_A,a𝐅_D,a^k-1 + Ω_a^k-1_F^2 + ϱ/2𝐔_a^k - 𝐅_A,a𝐅_D,a^k-1 + Λ_a^k-1_F^2 + λ/2𝐙_a^k - {𝐅_A,a𝐅_D,a^k-1}^H 𝐀_N + Φ_a^k-1_F^2 s.t. | [ 𝐅_A,a]_m,n| = 1 ,∀ m,n. By defining 𝐖_a,1^k-1 = [ √(ρ/2)𝐈_N_T , √(ϱ/2)𝐈_N_T , √(λ/2)𝐀_N ]^H, and 𝐖_a,2^k-1 = [ √(ρ/2)( 𝐓_a^k + Ω_a^k-1), √(ϱ/2)( 𝐔_a^k + Λ_a^k-1), √(λ/2)( 𝐙_a^k + Φ_a^k-1)^H ]^H, problem (<ref>) can be equivalently rewritten as min_𝐅_A,a f( 𝐅_A,a) = 𝐖_a,1^k-1𝐅_A,a𝐅_D,a^k-1 - 𝐖_a,2^k-1_F^2 s.t. | [ 𝐅_A,a]_m,n| = 1 ,∀ m,n. To solve the constant modulus constrained quadratic problem (<ref>), we adopt the block successive upper-bound minimization (BSUM) method. 
Specifically, by applying Lemma 2 again, the upper-bound function of f( 𝐅_A,a ) at [ℓ-1]-th inner iteration can be derived as f( 𝐅_A,a) ≤ f( 𝐅_A,a^[ℓ - 1]) + α̃/2𝐅_A,a - 𝐅_A,a^[ℓ - 1]_F^2 + ( 𝖳𝗋{ ( ∇ f( 𝐅_A,a^[ℓ - 1] ))^H ( 𝐅_A,a - 𝐅_A,a^[ℓ - 1] )}) = ( 𝖳𝗋{𝐅_A,a^H 𝐖̅_a^[ℓ - 1]}) + c_3. where 𝐖_a^[ℓ - 1] = ∇ f(𝐅_A,a^[ℓ - 1]) - α̃𝐅_A,a^[ℓ - 1], ∇ f(𝐅_A,a^[ℓ - 1]) = (𝐖_a,1^k-1)^H 𝐖_a,1^k-1𝐅_A,a^[ℓ - 1]𝐅_A,a^k - 1 (𝐅_A,a^k - 1)^H, and c_3 = f( 𝐅_A,a^[ℓ - 1] ) - ( 𝖳𝗋{ (∇ f(𝐅_A,a^[ℓ - 1]))^H 𝐅_A,a^[ℓ - 1]} ) + α̃/2 𝐅_A,a^[ℓ - 1]_F^2 + α̃ N_TU. Then, 𝐅_A,a can be updated by iteratively solving the following problem min_𝐅_A,a ( 𝖳𝗋{𝐅_A,a^H 𝐖̅_a^[ℓ - 1]}) s.t. | [ 𝐅_A,a]_m,n| = 1 ,∀ m,n, whose closed-form solution can be given by 𝐅_A,a^[ℓ] = -exp{∠[ 𝐖_a^[ℓ - 1]] } . The overall algorithm for updating analog beamformer 𝐅_A,a is summarized in Algorithm <ref>.
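As a numerical illustration of this closed-form step, the short Python sketch below runs the majorization iteration 𝐅_A ← -exp(j∠𝐖) on a simplified quadratic objective ||W1 F_A F_D - W2||_F^2. The data matrices and the majorization constant are arbitrary stand-ins chosen only so that the surrogate is a valid upper bound; they are not the quantities defined in the paper.

import numpy as np

rng = np.random.default_rng(1)
N_T, N_RF, U = 16, 4, 4
W1 = rng.standard_normal((N_T, N_T))                      # stand-in data matrix
W2 = rng.standard_normal((N_T, U)) + 1j * rng.standard_normal((N_T, U))
FD = rng.standard_normal((N_RF, U)) + 1j * rng.standard_normal((N_RF, U))
FA = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_T, N_RF)))  # feasible constant-modulus start

def obj(FA):
    return np.linalg.norm(W1 @ FA @ FD - W2) ** 2

# majorization constant chosen at least as large as the gradient's Lipschitz constant
alpha_t = np.linalg.norm(W1, 2) ** 2 * np.linalg.norm(FD, 2) ** 2

print("objective at start:", round(obj(FA), 2))
for _ in range(30):
    grad = W1.conj().T @ (W1 @ FA @ FD - W2) @ FD.conj().T  # gradient of the quadratic w.r.t. FA
    W = grad - alpha_t * FA                                 # plays the role of W_a^[l-1] in the text
    FA = -np.exp(1j * np.angle(W))                          # closed-form constant-modulus minimizer
print("objective after BSUM-style updates:", round(obj(FA), 2))

Because each surrogate is a true upper bound that is tight at the current iterate, the printed objective is non-increasing, mirroring the monotonicity used in the convergence argument.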
http://arxiv.org/abs/2407.12700v1
20240717162232
Bayesian Joint Modeling of Interrater and Intrarater Reliability with Multilevel Data
[ "Nour Hawila", "Arthur Berg" ]
stat.ME
[ "stat.ME", "62F15 (Primary), 62J12 (Secondary), 62H12 (Secondary), 62P10\n (Secondary)" ]
Asian Journal of Statistics and Applications. Bayesian Joint Modeling of Interrater and Intrarater Reliability with Multilevel Data. Nour Hawila^a and Arthur Berg^b (contact author; email: berg@psu.edu). ^aBristol Myers Squibb; ^bDivision of Biostatistics & Bioinformatics, Penn State University ======================================== § ABSTRACT We formulate three generalized Bayesian models for analyzing interrater and intrarater reliability in the presence of multilevel data. Stan implementations of these models provide new estimates of interrater and intrarater reliability. We also derive formulas for calculating marginal correlations under each of the three models. Comparisons of the kappa estimates and marginal correlations across the different models are presented for two real-world datasets. Simulations demonstrate properties of the different measures of agreement under different model assumptions. Keywords: Reliability; Bayesian; Hierarchical; Nested; Stan § INTRODUCTION It is often important to report on the inconsistency of classifications made by different raters (interrater) as well as the inconsistency of a single rater's repeated classifications (intrarater). Although often used interchangeably, the primary distinction between reliability and agreement is that agreement is defined as the degree to which classifications are identical, whereas reliability focuses on the extent of variability and error inherent in a measurement <cit.>. Interrater reliability can be defined as the extent to which two or more raters agree in classifying a common observation. It is a helpful measure in assessing the consistency of the implementation of a rating system <cit.>. Intrarater reliability refers to the consistency of one rater's classification over multiple time points and is best determined when multiple trials are administered over a short period of time <cit.>. Intrarater reliability is helpful in verifying the reproducibility of clinical measurements <cit.>. Many articles have discussed different measures of association for assessing interrater and intrarater reliability, which are mostly dependent on the type of outcome <cit.>. Commonly used measures for the case of binary outcomes include Cohen's Kappa statistic with a corrected standard error and some of its variants <cit.>, whereas the intraclass correlation coefficient <cit.> is generally restricted to continuous outcomes. Generalized estimating equations (GEE) are also used for modelling outcomes where Kappa estimates are calculated as a function of the GEE parameters <cit.>. <cit.> also proposed a weighted interrater Kappa calculation based on a GEE approach for categorical data. All of these models utilize a frequentist approach and only estimate interrater reliability and intrarater reliability separately but not jointly. Joint estimation of interrater and intrarater reliability addresses several key issues inherent in the separate estimation approaches commonly found in the literature. First, by jointly modeling these two types of reliability, we can more accurately capture the interdependencies between the consistency of different raters and the repeatability of the same rater over time.
This is particularly important in studies where the same subjects are assessed multiple times by the same raters, as the correlation between assessments can significantly influence the overall reliability of the measurements. Moreover, joint modeling allows for the simultaneous estimation of both types of reliability, providing a more comprehensive understanding of the measurement process. This is crucial for ensuring the validity of conclusions drawn from the data, especially in clinical settings where decision-making depends heavily on the precision and repeatability of diagnostic assessments. By estimating these reliabilities together, we can also better account for variability between subjects and across time points, leading to more robust estimates of measurement error. Separately estimating inter- and intra-rater reliability ignores the dependencies in the data, whereas joint estimation may provide more accurate reliability estimates, better statistical efficiency, and less bias. In this paper, we compare three fully Bayesian joint models of interrater and intrarater reliability of increasing complexity. The first model, which we refer to as the Bayesian independent (BIN) model, expands on the model presented in <cit.> by converting it to a Bayesian model and incorporating a random effect for time thus allowing intrarater reliability to also be captured. The second model, which we refer to as the Bayesian partially nested (BPN) model, expands on the model presented in <cit.> by converting it to a Bayesian model and incorporating multiple time points. The third model, which we refer to as the Bayesian fully nested (BFN) model, provides a Stan implementation of the WinBUGS Bayesian model presented in <cit.> but with more flexible priors. In Section <ref> we describe both probability-based (Section <ref>) and model-based (Section <ref>) methods in estimating the probability of a positive classification and the resulting interrater and intrarater reliability calculations. Formulas for calculating model-based marginal correlations for each of the three models are provided (Section <ref>). In Section <ref>, we show the results of applying the proposed Bayesian models to two real-world datasets and provide simulation results aimed at assessing the properties of the different measures of association depending on various model assumptions. We conclude the paper with an overall summary of the findings along with limitations of the proposed methods. § METHODS We consider a three-level nested data structure {Y_ijk} for subject i, evaluated by rater j, taken at time k. This type of data structure is common in observational studies in social and behavioral science, with subjects nested within clusters <cit.>. The quality of rating procedures is often of interest with particular emphasis on interrater and intrarater reliability. In the case of continuous outcomes, the intraclass correlation coefficient (ICC) is typically used to quantify the degree to which different raters resemble each other or the extent to which raters resemble themselves at two different time points. For the scope of this paper, we focus on the case of dichotomous outcomes where J raters evaluate I subjects at K different time points. §.§ Probability-based Methods We first consider the simple case of I subjects evaluated by J raters, where interrater reliability is of interest. 
Denoting the response (classification) of the j^th rater on the i^th subject by Y_ij, a disagreement random variable Z_i between raters j and j' evaluating subject i could be constructed. It is assumed that Z_i is common for all pairs (j,j'). Kappa coefficients for assessing interrater reliability between raters j and j' are defined by κ = 1-E(Z_i)/E̅(Z_i) where E(Z_i) is the expectation of Z_i and E̅(Z_i) is the expectation of Z_i assuming statistical independence of raters j and j', i.e. P(Y_ij=k_1,Y_ij'=k_2)=P(Y_ij=k_1)P(Y_ij'=k_2) <cit.>. §.§.§ Cohen's Kappa Cohen's Kappa coefficient in particular is obtained when Z_i = 1-I(Y_ij,Y_ij') where I(Y_ij,Y_ij')=1 if Y_ij=Y_ij' and 0 otherwise. Assuming equal joint probabilities between raters j and j' across the subjects – i.e. P(Y_ij=k_1)=P(Y_ij'=k_1) for k_1=0,1 – the expectations for Cohen's Kappa are given by E(Z_i) = P(Y_ij=1,Y_ij'=0) + P(Y_ij=0,Y_ij'=1) E̅ (Z_i) = P(Y_ij=1) P(Y_ij'=0) + P(Y_ij=0) P(Y_ij'=1) We can now define p_o = 1- E(Z_i) = Pr(Y_ij=1,Y_ij'=1) + Pr(Y_ij=0,Y_ij'=0) p_c = 1- E̅(Z_i) = Pr(Y_ij=1)Pr(Y_ij'=1) + Pr(Y_ij=0)Pr(Y_ij'=0) to be the observed and expected agreement due to chance respectively. Cohen's Kappa is more commonly known in following form κ_c = p_o-p_c/1-p_c. Another formulation for p_c, without assuming equal joint probabilities between raters is given by p_c = Pr(Y_ij=1,Y_i'j'=1) + Pr(Y_ij=0,Y_i'j'=0) The true measure of chance agreement p_c is the probability that two randomly selected raters from a population make identical classifications on two different randomly chosen subjects <cit.>. The definitions of p_o and p_c are left relatively broad because the formulas are adjusted depending on whether interrater reliability or intarrater reliability is of interest and the nesting structure. For a three-level nested structure in particular, p_o = (Y_ijk =Y_ij'k) for interrater reliability (Y_ijk =Y_ijk') for intrarater reliability p_c = (Y_ijk =Y_i'j'k) for interrater reliability (Y_ijk =Y_i'jk') for intrarater reliability Like Cohen's Kappa, Scott's Pi is also applicable when evaluating categorical outcomes and therefore also binary data. Scott's Pi is calculated using the formula given in (<ref>) but with p_c calculated slightly differently. Scott's Pi assumes the distribution of the raters is the same and therefore p_c is calculated using squared “joint proportions” which are squared arithmetic means of the marginal proportions (whereas Cohen's uses squared geometric means of them) <cit.>. Fleiss' Kappa is a generalization of Cohen's Kappa to multiple raters (or time points) <cit.>. Congers Kappa which is a correction of Fleiss' Kappa and will be used throughout this paper when computing Kappa <cit.>. In the following two subsections, we present closed-form formulas for computing Fleiss' and Conger's Kappa. §.§.§ Fleiss' Kappa (1971) Let N represent the total number of subjects and M the number of raters per subject. Define n_ij to be the number of raters assigning subject i to category j where j=0,1. We also define p_j to be the proportion of all ratings which were assigned to category j and is given by p_j = 1/NM∑_i=1^Nn_ij. We note here that ∑_jn_ij=M and ∑_jp_j=1 The agreement between the M raters for the i^th subject, P_i is the proportion of agreeing pairs out of all M(M-1) possible pairs of assignment given by P_i = 1/M(M-1)( n_i1(n_i1-1)+n_i0(n_i0-1)) or equivalently can be written as combinations out of all M2 = M(M-1)/2 possible pair combinations as P_i = 1/M2[n_i12+n_i02]. 
The overall observed extent of agreement can be measured by the mean of the P_i's, given by: P_o = 1/N∑_i=1^NP_i If the raters made their assignments purely by chance (so that p_jp_j' = p_j^2), the expected mean proportion of agreement would be given by P_c = p_0^2+p_1^2 = [1/NM∑_i=1^Nn_i0]^2 + [1/NM∑_i=1^Nn_i1]^2. The Kappa coefficient of agreement between the M raters can now be given by κ= P_o-P_c/1-P_c. §.§.§ Conger's Kappa (1980) Using the same notation N, M, for j=0,1, P_o and P_c are given by: P_o = 1/N∑_i=1^Nn_i1(n_i1-1)+n_i0(n_i0-1)/M(M-1) P_c = p̅^2_+0-s^2_0/M+p̅^2_+1-s^2_1/M where s^2_k is given by s^2_k = 1/M-1(p_j0-p̅_+0)^2 + 1/M-1(p_j1-p̅_+1)^2 and represents the variance of the proportions p_j0 and p_j1. The Kappa coefficient of agreement between the M raters can now be similarly given as κ= P_o-P_c/1-P_c. §.§ Model-based Methods The measures typically used in (<ref>) possess some unfavorable traits. Namely, these measures do not incorporate uncertainty about the estimates, cannot adjust for covariates that could potentially impact the outcomes, and, most importantly, ignore the nested aspect of the data. In this section, we propose some new Bayesian models, and generalize existing ones, for modelling p_ijk=Pr(Y_ijk=1), the probability that the i^th subject is classified as a `success' (Y=1) by the j^th rater at the k^th time point. §.§.§ Independent model <cit.> proposed a generalized linear mixed model (GLMM) for the case of a single time point given in (<ref>). g(p_ij) = η + u_i + v_j, i=1,...,I, j=1,...,J g(.) is a link function (usually chosen to be the probit or logit link) and η is the intercept term. The random effects terms are given by u_i and v_j for the i^th subject and j^th rater, which are assumed to be independent and normally distributed with mean 0 and variances σ_u^2 and σ_v^2, respectively. Under the logit and probit models, estimates of p_o and p_c for a single time point (k=1) are calculated as a function of η,σ^2_u,σ^2_v, which are typically replaced by their estimates from the GLMM. This model formulation is useful for evaluating interrater reliability in the presence of a single time point. We propose the Bayesian model presented in (<ref>) that incorporates an additional random effects term w_k for the k^th time point in addition to a fixed effects term for the i^th subject evaluated by the j^th rater at the k^th time point. g(p_ijk) = X_ijkβ^I + u_i + v_j +w_k, i=1,...,I, j=1,...,J, k=1,...,K Positive values of u_i indicate that the i^th subject was more likely to be classified as a success amongst raters across time points, positive values of v_j indicate that the j^th rater was more likely to classify subjects as success across time points, and positive values of w_k indicate that raters were more likely to classify subjects as success at the k^th time point. The underlying hierarchical Bayesian structure for model (<ref>) is given by u_i ∼ N(0,σ_u^2) v_j ∼ N(0,σ_v^2) w_k ∼ N(0,σ_w^2) σ^2_u,σ^2_v,σ^2_w ∼ IG(α,γ) β^I ∼ MVN(μ^β_p,Σ^β_p) Univariate Normal priors are assumed for the random effect terms with 0 mean and independent variance terms that are assumed to have inverse-gamma hyperpriors with parameters α and γ. The inverse-gamma parameters are typically chosen to be small and equal to reflect non-informative priors. The fixed effects coefficient term β^I is assumed to have a p-variate Normal prior with known mean μ^β_p and unstructured covariance matrix Σ^β_p.
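Before turning to the nested models, the following Python sketch simulates binary ratings from this independent-effects specification (logit link, with X_ijkβ^I collapsed to a scalar intercept) and computes the observed agreement, chance agreement, and Cohen-style kappa for one rater pair. The variance components and intercept are arbitrary illustrative values, not estimates from any dataset.

import numpy as np

rng = np.random.default_rng(2)
I, J, K = 200, 3, 2                      # subjects, raters, time points (assumed sizes)
eta, s_u, s_v, s_w = 0.3, 1.0, 0.5, 0.3  # intercept and random-effect SDs (assumed values)

u = rng.normal(0.0, s_u, I)
v = rng.normal(0.0, s_v, J)
w = rng.normal(0.0, s_w, K)
logit_p = eta + u[:, None, None] + v[None, :, None] + w[None, None, :]
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))   # I x J x K array of binary ratings

# Cohen-style interrater kappa for raters 0 and 1 at the first time point
y1, y2 = Y[:, 0, 0], Y[:, 1, 0]
p_o = np.mean(y1 == y2)                                           # observed agreement
p_c = y1.mean() * y2.mean() + (1 - y1.mean()) * (1 - y2.mean())   # chance agreement
print("p_o =", round(p_o, 3), " p_c =", round(p_c, 3),
      " kappa =", round((p_o - p_c) / (1 - p_c), 3))

Larger subject-level variance σ_u^2 pushes the simulated kappa upward, which previews the role the variance components play in the model-based reliability measures developed below.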
§.§.§ Fully nested model <cit.> proposed a Bayesian model able to assess interrater and intrarater reliability for nested binary data structures. The model is given in (<ref>). logit(p_ijk) = X_ijkβ^F +u_i+v_ij +w_ijk, i=1,...,I, j=1,...,J, k=1,...,K A logit link function was chosen with a flexible fixed effects term X_ijk of length p and accompanying fixed effects coefficients vector β^F. A first level of independent random effects is imposed at the subject level through u_i, followed by v_ij that represents a joint subject/rater random effect and finally a subject/rater/time point joint random effect w_ijk. Similar to the independent model, u_i are assumed to follow an independent univariate Normal distribution with mean 0 and variance σ_u^2. The joint subject/rater random effects v_i. are assumed to be J-variate Normal with mean [ 0 0 ... 0 ]^T and covariance matrix Σ_v^F=D_v^FΩ_v^F D_v^F where Ω_v^F is the correlation matrix with unit entries on the diagonal and ρ_v^F entries on the off-diagonal, and D_v^F=√(diag(Σ_v^F))=[σ_v1,...,σ_vJ]^T so that Var(v_ij) =σ_vj^2 Cov(v_ij,v_ij') =ρ_vσ_vjσ_vj' for j=1,2,...,J and j≠ j' We assume a JK-variate Normal distribution for w_ijk with similar structure to v_ij with 0 mean and covariance matrix Σ_w^F=D_w^FΩ_w^F D_w^F where Ω_w^F is the correlation matrix with unit entries on the diagonal and ρ_w^F entries on the off-diagonal, and D_w^F=√(diag(Σ_w^F))=[σ_w11,...,σ_wJK]^T so that Var(w_ijk) =σ_wjk^2 Cov(w_ijk,w_ij'k') =ρ_wσ_vjkσ_vj'k' for j=1,...,J, k=1,...,Kj≠ j', k≠ k' We note that the covariance matrices Σ_v^F and Σ_w^F are assumed to be unstructured for generalizability, but other assumptions could be made. A common covariance structure would assume σ_vj=σ_vj', ∀ j ≠ j' for Σ_v^F and σ_vjk=σ_vj'k', ∀ j ≠ j' and k ≠ k' for Σ_w^F. For Σ_w^F, we can also assume separate covariance structures for different raters (or time points), i.e. σ_vjk=σ_vj'k, ∀ j ≠ j' (or σ_vjk=σ_vjk', ∀ k ≠ k'). The full hierarchical Bayesian structure for model (<ref>) is given by u_i ∼ N(0,σ_u^2) v_i. ∼ MVN(0,Σ_v) w_i.. ∼ MVN(0,Σ_w) σ^2_u, σ^2_vj,σ^2_wjk ∼ IG(α,γ) Ω_v^F,Ω_w^F ∼ LKJ(η) β^F ∼ MVN(μ^β_p,Σ^β_p) Lewandowski-Kurowicka-Joe (LKJ) prior distributions are assumed for the correlation matrix with tuning parameter η to control the strength of the correlations. The fixed effects coefficient term β^F is assumed to have a p-variate Normal prior with known mean μ^β_p and unstructured covariance matrix Σ^β_p. We call this model the fully-nested model due to the hierarchical nature of modelling the random effects. Due to the complex structure of this model, different correlation measurements are of interest, namely, marginal correlations and correlations among random effects. Correlation among rater random effects ρ^R=Corr(v_ij,v_ij') was used to imply the strength of similarity among the underlying random effect mechanism for raters j and j' and the marginal correlation between these raters (interrater marginal correlation) was calculated as Corr^R=Corr(logit(p_ijk),logit(p_ij'k)). Similarly, for time points k and k' the correlation among time point random effects is given by ρ^T=Corr(w_ijk,w_ijk') and the marginal correlation between these time points (intrarater marginal correlation) by Corr^T=Corr(logit(p_ijk),logit(p_ijk')). 
The correlation among rater and time point random effects is used to infer similarity between raters and time points, whereas the marginal correlation measures are used to differentiate between raters or time points as they examine all dependencies involved. §.§.§ Partially nested model Twelve years after <cit.> proposed the independent GLMM, <cit.> proposed an ordinal GLMM with a crossed random effects structure given in (<ref>). The ordinal GLMM uses a probit link function to reflect an underlying continuous outcome, and creates a categorized version of the latent variable with C levels, evaluated by J raters at K=2 time points. The primary usage of this model was to evaluate intrarater reliability across two time points. The model is given by g(p_ijk) = α_c - (β^P x_k+u_i+v_jk), i=1,...,I, j=1,...,J, k=1,2 where α_c represents the cutoffs defining the ordinal scaling with α_0=-∞, α_C=+∞ and c=1,...,C-1, x_k is an indicator variable for the k^th time point. A fixed effects coefficient β^P provides an overall adjustment for raters at different time points. The subject level random effects u_i are assumed to be independent and normally distributed with mean 0 and variance σ_u^2, whereas the rater/time point joint random effects v_j. are assumed to be bivariate Normal (BVN) with mean [ 0 0 ]^T and covariance matrix Σ_v^P=D^PΩ^P D^P where Ω^P is the correlation matrix with unit entries on the diagonal and ρ_v^P entries on the off-diagonal, and D^P=√(diag(Σ_v^P))=[σ_v1,σ_v2]^T so that Var(v_jk) =σ_vk^2 Cov(v_jk,v_jk') =ρ_vσ_vkσ_vk' for k=1,2 and k≠ k' <cit.> also developed integrals for calculating the observed intrarater association and chance intrarater association using the parameters (β, σ_u^2,Σ_v^P) of the model given in (<ref>). To prevent overestimating or underestimating the strength of association between raters, and to account for chance association, <cit.> suggest minimizing the impact of chance association on the Kappa estimate by selecting thresholds α_1 min,...,α_C-1 min such that the chance intrarater association integrand is minimized. A form of Cohen's Kappa (<ref>) could also be calculated based on model (<ref>) by computing the measure for a single rater's paired classification as κ_c^avg = p_o^avg-p_c^avg/1-p_c^avg, where p_o^avg and p_c^avg are the average observed and chance interrater (or intrarater) association taken for all pairs of raters j and j' (or time points k and k'). To make this model compatible with the case of a binary outcome (C=2) and K time points, an updated version of (<ref>) is given by g(p_ijk) = η - (β_1^P x_k+u_i+v'_jk), i=1,...,I, j=1,...,J, k=1,...,K where β_1^P is a K×1 coefficients vector whose k^th entry represents the fixed effect of the k^th time point. v'_jk is now K-variate Normally distributed with mean [ 0 0 ... 0 ]^T and covariance matrix Σ_v^P'=D^P'Ω^P' D^P' with similar formulation to Σ_v^P but K-dimensional so that Var(v_jk) =σ_vk^2 Cov(v_jk,v_jk') =ρ_vσ_vkσ_vk' for k=1,2,...,K and k≠ k' Incorporating multiple time points allows for a generalized assessment of intrarater reliability. The underlying hierarchical Bayesian structure for model (<ref>) is given by u_i ∼ N(0,σ_u^2) v_j. ∼ MVN(0,Σ_v^P') σ^2_u, σ^2_vk ∼ IG(α,γ) Ω^P' ∼ LKJ(η) β_1^P ∼ MVN(μ^β_K,Σ^β_K) Univariate Normal priors are assumed for the subject random effect term with 0 mean and independent variance terms that are assumed to have inverse-gamma hyperpriors with parameters α and γ.
The inverse-gamma parameters are typically chosen to be small and equal to reflect non-informative priors. Multivariate Normal priors are assumed for the rater/time point joint random effect with 0 mean and covariance matrix as given in (<ref>). The fixed effects coefficient term β_1^P is assumed to have a p-variate Normal prior with known mean μ^β_K and unstructured covariance matrix Σ^β_K. §.§ Marginal Correlations and Correlations between random effects After fitting one of the proposed Bayesian models from the previous section, two different measures other than Kappa could be of interest. Namely, the marginal correlation between two raters (or two time points) which is defined as the correlation between the observations made by raters j and j' (or time points k and k'). Corr^R = Corr(g(p_ijk),g(p_ij'k)) Corr^T = Corr(g(p_ijk),g(p_ijk')) Another measure of interest is the correlation between random rater effects ρ^R or between random time point effects ρ^T. These measures differ based on the type of model selected (independent, partially nested, or fully nested). §.§.§ Independent Model For the independent model, the subject, rater and time point random effects are a priori univariate independent Normally distributed. The marginal correlations based on this model are given by: Corr^R_IN = σ^2_u + σ^2_w/σ^2_u + σ^2_v + σ^2_w Corr^T_IN = σ^2_u + σ^2_v/σ^2_u + σ^2_v + σ^2_w The correlation between rater random effects and time point random effects are ρ^R = ρ^T = 0 due to independence of the random effects. §.§.§ Fully Nested Model Assuming a fully nested Bayesian model as given in <ref>, the marginal correlations between two raters and two time points are given by Corr^R_FN = σ^2_u + ρ_R×σ^2_v/σ^2_u + σ^2_v + σ^2_w Corr^T_FN = σ^2_u + ρ_T ×σ^2_w/σ^2_u + σ^2_v + σ^2_w where ρ_R is the correlation between rater random effects and ρ_T is the correlation between time point random effects. §.§.§ Partially Nested Model Assuming a Bayesian partially nested model as given in <ref>, the marginal correlations between two raters and two time points are given by Corr^R_PN = σ^2_u/σ^2_u + σ^2_vk Corr^T_PN = σ^2_u + ρ_Tσ_vkσ_vk'/σ^2_u + σ^2_vk The correlation between rater random effects is ρ^R = 0 and ρ^T is the correlation between time point random effects. § SIMULATION AND EXAMPLES §.§ Software and Implementation All the developed functions that will allow users to implement the three Bayesian models proposed in Sections <ref>, <ref> and <ref> using RStan <cit.> in addition to easily computing interrater and intrarater reliability posterior estimates, credible intervals and simulation results are available through <https://github.com/NourHawila/IRR>. §.§ Data Examples All three Bayesian models – BIN, BPN and BFN – are fit to two real-world datasets. The resulting model fits are subsequently used to explore model properties through simulations. §.§.§ Dataset: Running gait In a crossover study published in 2022, <cit.> recruited 32 cross-country, track and field, and recreational athletes with current running mileage of at least 15 km per week to compare indoor and outdoor running environments. Athletes first ran on a treadmill in an indoor environment recorded using static video, followed by outdoor video recorded by a drone. Three judges were shown both videos, two weeks apart, and independently performed running gait analysis on each foot for each of the 32 runners, with the goal of assessing interrater and intrarater reliability in addition to indoor vs. outdoor agreement. 
This dataset consists of a total of 768 data points (32 runners, 3 raters, 2 time points, 2 feet, 2 locations) in a complete block design. Fleiss' Kappa was used to assess reliability in <cit.> after averaging across the nested structures. §.§.§ Dataset: Radiograph The quality of root canal treatment is typically evaluated by a radiographic assessment that is known to be necessary yet subjective. A study was conducted to assess the consistency and accuracy of the radiographic evaluation of 7 endodontists who partook in a training course and evaluated radiographs from 35 participants before and after the training <cit.>. The quality of endodontic treatment is defined as "good" if the filling reaches an adequate length within 2 mm of the radiographic apex and if the complete obturation is in the apical one-third of the root canal <cit.>. This dataset consists of a total of 490 data points (35 subjects, 7 raters, 2 time points) in a complete block design. <cit.> used the BFN model to assess interrater and intrarater reliability using WinBUGS. §.§.§ Model fits of two real-world datasets The three Bayesian models were fit to each of the two datasets. Each of the models used 2000 iterations, 200 warmup iterations per chain across 2 chains. The fitting functions for the BIN, BPN, and BFN models, available at <https://github.com/NourHawila/IRR>, are used to fit the datasets to the respective Bayesian models. Each function takes an argument that controls whether the fixed effects design matrix includes an intercept term or not, along with hyperprior parameters: Beta hyperpriors for the correlation terms, inverse-gamma hyperpriors for the variance terms, and LKJ shape hyperpriors for the correlation matrices. Table <ref> presents fits of the three Bayesian models to the two datasets. Based on the leave-one-out information criterion (LOOIC), the best model for the running gait dataset is the BPN model with the BIN model in a close second place. The LOOIC selected the BFN as the best model for the radiograph data, while the other two models displayed substantially larger LOOICs. Estimates of the interrater and intrarater Kappa measures based on direct frequentist estimation and model-based methods are presented in Table <ref>. Using the generated quantities block in Rstan, we simulate data from the posterior predictive distribution and calculate the interrater and intrarater kappa estimates. For the running gait dataset, the LOOIC-selected BPN model produced interrater and intrarater kappa estimates (0.24 and 0.30, respectively) that substantially differ from the frequentist-based estimates (0.44 and 0.45, respectively). Similarly, for the radiograph dataset, the LOOIC-selected BFN model produced interrater and intrarater kappa estimates (0.07 and 0.33, respectively) that are even more different from the frequentist-based estimates (0.40 and 0.72, respectively). These results show that datasets with a small number of raters and time points can lead to widely varying estimates of interrater and intrarater reliability depending on which approach is used. In the next section, we simulate datasets from different models to evaluate these differences in interrater and intrarater reliability estimation.
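As a rough illustration of the "direct frequentist estimation" entries in these comparisons, the Python sketch below computes interrater and intrarater kappa for a three-level binary array using the nested definitions of p_o and p_c from Section 2.1 (agreement within the same subject versus across re-paired subjects). The simulated array stands in for real data, and the subject re-pairing scheme for the chance term is a simplifying assumption rather than the exact estimator used for the tables.

import numpy as np

def nested_kappas(Y):
    # Y has shape (subjects I, raters J, time points K) with binary entries
    I, J, K = Y.shape
    # interrater: same subject and time, different raters; chance term re-pairs subjects
    po_inter = np.mean([Y[:, j, k] == Y[:, jp, k]
                        for k in range(K) for j in range(J) for jp in range(J) if j != jp])
    pc_inter = np.mean([Y[:, j, k] == np.roll(Y[:, jp, k], 1)
                        for k in range(K) for j in range(J) for jp in range(J) if j != jp])
    # intrarater: same subject and rater, different time points; chance term re-pairs subjects
    po_intra = np.mean([Y[:, j, k] == Y[:, j, kp]
                        for j in range(J) for k in range(K) for kp in range(K) if k != kp])
    pc_intra = np.mean([Y[:, j, k] == np.roll(Y[:, j, kp], 1)
                        for j in range(J) for k in range(K) for kp in range(K) if k != kp])
    return (po_inter - pc_inter) / (1 - pc_inter), (po_intra - pc_intra) / (1 - pc_intra)

rng = np.random.default_rng(3)
u = rng.normal(0, 1.2, (35, 1, 1))                         # subject effects induce agreement
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + u))), size=(35, 7, 2))
kr, kt = nested_kappas(Y)
print(f"interrater kappa = {kr:.3f}, intrarater kappa = {kt:.3f}")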
§.§ Simulations Datasets are simulated from the three different models – independent (IN), partially-nested (PN), and fully-nested (FN) – as presented in Section <ref>. The parameter values used in these model-based simulations are the posterior estimates from the respective model fits presented in Table <ref>. In this way, the simulated datasets are based on the two real world datasets, and these respective simulations are labeled as `running gait' and `radiograph'. For each dataset-based simulation (running gait and radiograph), and for each model (IN, PN, FN), a total of 148 datasets were generated. For each of the generated datasets, the three Bayesian models (BIN, BPN, BFN) were fit to the data and their respective LOOIC was recorded. The model with the smallest LOOIC is referred to as the `LOOIC-selected' Bayesian model for that simulation. Over each set of 148 simulations, the relative proportions of the LOOIC-selected models are presented in Table <ref>. When the true simulation is IN, the BIN model is the most common LOOIC-selected model for both dataset references. Similarly, when the true simulation is PN, the BPN model is the most common LOOIC-selected model for both dataset references. However, when the the true simulation is FN, the BFN model is the most common LOOIC-selected model for the running gait dataset reference but the BPN model is the most common LOOIC-selected model for the radiograph dataset reference. For each dataset-based simulation (running gait and radiograph), and for each model (IN, PN, FN), approximate theoretical interrater and intrarater reliability measures are generated by simulating 10,000 datasets and taking the average kappa estimates. The resulting `true' kappa parameters are displayed in Table <ref>. The frequentist-based and model-based (BIN, BPN, BFN, LOOIC-selected) interrater and intrarater kappa estimates are calculated for each of the 148 simulated datasets and the average parameter estimates and root mean square error (RMSE) performances are presented. The overall performance of each method (frequentist, BIN, BPN, BFN, and LOOIC-selected) over all 148 × 3 = 444 simulated datasets within each dataset reference is highlighted in blue. The Bayesian model consistent with the respective data simulation is highlighted in yellow. The method with the smallest RMSE is bolded in each row of the table. The simulations show all three Bayesian models perform similarly in terms of interrater and intrarater estimation. However, the frequentist-based estimation of interrater and intrarater reliability is shown to be worse than each of the Bayesian models for every simulation scenario. All three Bayesian models tend to perform similarly with regard to interrater and intrarater reliability estimation; the Bayesian model consistent with the data simulation model was not always optimal though it is consistently close to the optimal model. § DISCUSSION In this paper we have presented three Bayesian GLMMs and assessed the properties of different model assumptions on the estimation of interrater and intrarater reliability. The proposed models are comprehensive in terms of jointly adjusting for subject, rater and time point effects. The models are also flexible and able to incorporate different prior beliefs and knowledge about specific model parameters through Bayesian modelling. Contrary to commonly used models, the ones presented are not restricted to two raters or time points but are applicable for the general case of I subjects, J raters and K time points. 
Some drawbacks of utilizing these methods for assessing interrater and intrarater reliability include the complexity of analyzing all the different parameters of interest and of ensuring proper convergence of the model. Selecting the most appropriate model among the three Bayesian models requires a good understanding of the data, though the simulations show that utilizing the LOOIC model selection method will often select the optimal or near-optimal model. As in any Bayesian model fit, the choice of priors may have a substantial impact on the posterior estimates, so a sensitivity analysis that varies the priors should be performed.
http://arxiv.org/abs/2407.12129v1
20240716193522
Breakup dynamics of a neutron-halo projectile on heavy target at deep sub-barrier energies
[ "B. Mukeru", "T. Sithole", "Lauro Tomio" ]
nucl-th
[ "nucl-th", "nucl-ex" ]
Breakup dynamics of a neutron-halo projectile on heavy target at deep sub-barrier energies B. Mukeru, T. Sithole, Lauro Tomio =========================================================================================================== § INTRODUCTION A recent experimental measurement of the breakup of the ^8 B proton-halo nucleus on a lead target at deep sub-barrier energies by Pakou et al. <cit.> yielded a quite interesting result: the breakup channel is reported to be the main reaction channel at these energies. Intuitively, one assumes that at deep sub-barrier energies the reaction should be dominated by reaction channels other than the breakup channel. In a subsequent study <cit.>, it was shown that the predominance of the breakup channel over the fusion channel in this incident energy region could be attributed to couplings among the continuum states of the projectile. Using continuum discretized coupled channels (CDCC) calculations, further analysis led to the conclusion that such continuum-continuum couplings (CCC) indicate that the breakup occurs on the outgoing trajectory. While these couplings are known to strongly suppress the breakup cross-sections at incident energies above the Coulomb barrier <cit.>, it was verified in Ref. <cit.> that they enhance the breakup cross-section at sub-barrier energies, and it was further argued that this enhancement could be attributed to the breakup occurring on the outgoing trajectory. However, the question remains open whether the projectile breaks up on its incoming trajectory toward the target or on its outgoing trajectory as it leaves the target, and how that affects the breakup dynamics. A subsequent analysis in Ref. <cit.> of the same reaction with the ^8 B proton-halo nucleus, within the same incident energy range, further confirmed the findings of Ref. <cit.>, by indicating the effect of Coulomb polarization on the proton halo state, with the correlation information revealing that the prompt breakup mechanism dominates, occurring predominantly on the outgoing trajectory. This assertion corroborates the conclusion anticipated in Ref. <cit.> that the breakup of the projectile occurs on the outgoing trajectory. In particular, Ref. <cit.> emphasized the relevance of elucidating the long-standing question about the breakup dynamics of a proton halo nucleus. Relying on the results obtained near Coulomb barrier energies, their analyses have signaled distinctive dynamics of a proton-halo nucleus as compared with a neutron-halo nucleus, which has been assigned to the Coulomb effect of the halo proton, indicating little influence of the continuum on elastic scattering and complete fusion. Nevertheless, as commented in Ref. <cit.>, further investigations are still desired to elucidate the breakup behavior of a proton halo nuclear system, which can be quite relevant in the light of potential astrophysical implications. In this regard, one could check whether the conclusions reported in Refs. <cit.> can be assumed as a universal signature of the breakup of weakly-bound systems at deep sub-barrier energies, by first extending the same investigation to neutron-halo projectiles. The fundamental difference between proton-halo and neutron-halo nuclei is the presence of the Coulomb barrier in the core-proton system, which is absent in the core-neutron system since the neutron is not charged.
Therefore, considering a neutron-halo projectile for the same study would provide an opportunity to test, whether the importance of the breakup channel over other reaction channels at deep sub-barrier incident energies emanate from dynamical effects (associated with the projectile-target interaction), or from static effects (associated with the projectile ground-state wave function). On the competition between the breakup and fusion channels below the Coulomb barrier, it was also shown previously that the breakup cross-section becomes dominant over the fusion cross-section, in Refs. <cit.>. By assuming the same heavy target ^208Pb, this was shown in Ref. <cit.>, when studying the Coulomb barrier penetrability using the ^11Be projectile; and, in Ref. <cit.>, by considering the ^6Li as the projectile, treated as a weakly-bound cluster of an alpha particle with a deuteron. In short, despite the registered successes over the past decades in probing nuclear reactions by weakly-bound exotic projectiles in heavy targets, which can be traced through recent studies <cit.>, with their cited references, these investigations at deep sub-barrier energies are still limited, as most available results in this topic are based on incident energies above and around the Coulomb barrier. Our aim in this paper is to report an extension of the mentioned studies on proton-halo reactions <cit.>, by considering a neutron-halo projectile, to probe the possible universality of previous conclusions. To this end, we study the breakup dynamics of the s-wave neutron-halo nucleus ^11 Be on a lead target at Coulomb sub-barrier and around-barrier incident energies. The main goal is to verify whether, for deep sub-barrier incident energies, the breakup cross-section remains dominant over the total fusion cross-section as in the case of proton-halo projectiles, such that some universal characteristics can be extracted. § BRIEF THEORETICAL APPROACH The fundamental mathematical formulation of the CDCC (continuum discretized coupled channels) formalism, which is the theoretical approach used in this work, can be found in Ref. <cit.>, such that we avoid giving more details here. For a more recent review of CDCC with its theoretical foundation, we have also the Ref. <cit.>. Within the CDCC formalism, once the total wave function is expanded on the projectile bound and bin states (whose wave functions are square-integrable), a truncated set of coupled differential equations of the radial part χ_β(R) of the wave function is obtained, which contains the coupling matrix elements U_ββ^'^LL^' J(R) = ⟨𝒴_β^LJ(r,Ω_R) |U_pt(r,R)|𝒴_β^'^L^' J(r,Ω_R)⟩, where 𝒴_β^LJ(r,Ω_R), is the channel wave function that contains bound and bin wave functions, Ω_ R is the solid angle in the direction of the projectile-target center-of-mass R, expressed in spherical coordinates, with L and J being the orbital and total angular momentum quantum numbers. In Eq. (<ref>), U_pt(r,R) = U_ct( R_ct) +U_vt( R_vt), with U_ct, and U_vt, are core-target and valence-nucleon-target optical potentials, having the corresponding coordinates R_ct≡ R + 1/A_p r, R_vt≡ R - A_c/A_p r, where A_c and A_p = A_c+1, are the core and projectile atomic mass numbers, respectively. In Eq. (<ref>), β≡ (α_b,α_i), where α_b represents the quantum numbers that describe the projectile bound state, with α_i standing for the quantum numbers that describe the bin states, with i=1,2,…, N_b, where N_b is the number of bins. 
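For illustration only (this is not part of any CDCC code), the short Python sketch below evaluates a projectile-target interaction of the form U_pt(r,R)=U_ct(R_ct)+U_vt(R_vt) at the shifted coordinates R_ct = R + r/A_p and R_vt = R - (A_c/A_p) r of Eq. (<ref>), for the ^11Be = ^10Be + n case studied here. The central potentials are placeholder real wells of the Woods-Saxon shape recalled in the next subsection; their depths, radii and diffuseness (in MeV and fm) are assumed values chosen only to make the example run, with the convention that attractive wells have negative depth.

import numpy as np

def woods_saxon(R, V0, R0, a0):
    # V(R) = V0 / (1 + exp((R - R0)/a0)); a negative V0 gives an attractive well
    return V0 / (1.0 + np.exp((R - R0) / a0))

def U_pt_real(r_vec, R_vec, Ac=10, Ap=11,
              ct=(-70.0, 7.43, 1.04),    # placeholder core-target parameters (V0 [MeV], R0 [fm], a0 [fm])
              vt=(-45.0, 7.0, 0.65)):    # placeholder neutron-target parameters
    R_ct = np.linalg.norm(R_vec + r_vec / Ap)          # core-target separation
    R_vt = np.linalg.norm(R_vec - (Ac / Ap) * r_vec)   # valence-neutron-target separation
    return woods_saxon(R_ct, *ct) + woods_saxon(R_vt, *vt)

# example: projectile centre of mass 12 fm from the target, core-neutron separation of 5 fm
r = np.array([5.0, 0.0, 0.0])
R = np.array([12.0, 0.0, 0.0])
print(U_pt_real(r, R), "MeV")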
The coupling matrix elements (<ref>), can be split into couplings to and from the bound-state U_α_bα^L J(R), and couplings among the continuum states U_α_iα_i^'^LL^' J(R), given by U_α_bα_i^L J(R) = ⟨𝒴_α_b^LJ(r,Ω_R) |U_pt(r,R)|𝒴_α_i^L^' J(r,Ω_R)⟩, U_α_iα_i^'^LL^' J(R) = ⟨𝒴_α_i^LJ(r,Ω_R) |U_pt(r,R)|𝒴_α_i^'^L^' J(r,Ω_R). The coupling matrix elements are evaluated subject to the boundary conditions in the asymptotic region (R→∞), χ_β(R)R→∞→ i/2[H_α_i^-(K_α_iR) δ_α_bα_i-H_α_i^+(K_α_iR) S_ββ^'^J(K_β^')], where H_β^±(K_βR) are Coulomb-Hankel functions <cit.>, and S_α_iα_i^'^J(K_α_i^') are the scattering S-matrix elements, with K_α_i = √(2μ_pt(E-ε_α_i)/ħ^2), where μ_pt = m_0A_pA_t/(A_p+A_t) (m_0 is the nucleon's mass and A_t the projectile atomic mass number) is the projectile-target reduced mass, E is the incident energy, with ε_α_i being the bin energies. The breakup cross-section can be directly obtained from the scattering matrix as follows <cit.> σ_ BU = π/K_α_b^2∑_Jα_iLα_i^' L^'2J+1/2j+1|S_α_iα_i^'^J(K_α_i^')|^2, where j is the total angular momentum associated with the core-nucleon relative motion, and K_α_b, is the initial relative momentum, which is related to the final relative momentum K_α_i through the following energy conservation equation ħ^2 K^2_α_i/2μ_pt -ε_α_i = ħ^2 K^2_α_b/2μ_pt +ε_b (where ε_b<0 is the ground-state binding energy). The total fusion cross-section can be obtained as follows: σ_ TF = ∑_J = 0^J_maxσ_ TF^(J) σ_ TF^(J) = 2μ_pt/ħ^2K_α_b(2J+1) ∑_ββ_iχ^LJ_β(R) |W^LL^' J_ββ^'(R)|χ^L^' J_β(R), where W^LL^' J_ββ^'(R) are the imaginary parts of the coupling matrix elements V^LL^' J_ββ^'(R) that contain the imaginary parts of the potential U_pt(r,R). Therefore, they are responsible for nuclear absorption. §.§ Projectile-target potentials A selection of the projectile-target potentials necessary to calculate both fusion and breakup cross-sections on the same footing can prove to be a challenging task. The main reason is the fact that both cross-sections emanate from different dynamics. Quite often in the literature, the Woods-Saxon form factor is used to model the real and imaginary parts of the potentials U_pt(r,R). With the coordinate definitions (<ref>) and n≡ ct,vt, U_n( R_n) = V_n( R_n)+ iW_n( R_n) = V_0^(n)/1+exp[( R_n- R_0^(n))/a_0^(n)] + i W_0^(n)/1+exp[( R_n-R_w^(n))/a_w^(n)], n≡ ct,vt where V_0^(n) and W_0^(n) are the depths of the real and imaginary parts, respectively, R_0^(n) = r_0^(n)(A_n^1/3 + A_t^1/3) and R_w^(n) = r_w^(n)(A_n^1/3 + A_t^1/3) are the corresponding nuclear radii, with a_0n and a_wn the respective diffuseness. The potentials (<ref>) are used in the off-diagonal channels to couple bound to continuum and continuum to continuum channels. In the elastic scattering channel, the real and imaginary parts of the optical potential represent the expected value of U_pt(r,R), concerning the ground state of the projectile nucleus: V_α_bα_b(R) = ∫ d^3 r|ϕ_α_b( r)|^2V_pt(r,R), W_α_bα_b(R) = ∫ d^3 r|ϕ_α_b( r)|^2W_pt(r,R), where ϕ_α_b( r) is the ground state wave function, and V_pt(r,R) = V_ct( R_ct) + V_vt( R_vt), W_pt(r,R) = W_ct( R_ct) + W_vt( R_vt), are the real and imaginary parts of U_pt(r,R). Given the longer tail of the projectile nucleus, due to its low breakup threshold, the nuclear forces are extended well beyond the barrier radius through the tails of V_pt(r,R) and W_pt(r,R). As such, V_α_bα_b(R) will result in lowering the Coulomb barrier, whereas W_α_bα_b(R) will exhibit a long-range absorption behavior. 
Consequently, the total fusion obtained with these potentials is expected to be much larger. Realistic fusion cross-sections (i.e., comparable with the experimental data) are generally obtained by considering short-range imaginary potentials. For example, in Refs.<cit.>, strong short-range W_ct, W_vt, and W_pt, with parameters W_0 = 50 MeV, r_w = 1.0 and a_w = 0.1 fm, are adopted. However, such a choice of imaginary potentials may not be suitable in the calculations of breakup cross-sections. § DETAILS OF THE NUMERICAL CALCULATIONS Here we provide the necessary information on the breakup dynamics, by describing the internal structure of the neutron-halo projectile nucleus ^11Be. It is modeled as a ^10Be core nucleus to which a valence neutron is weakly bound in the following s-wave configuration ^10 Be ⊗ n(2s_1/2^+), with ℓ_0=0, where ℓ_0 is the ground-state orbital angular momentum associated with the core-neutron relative motion. The binding energy of this ground state is ε_0=-0.504 MeV <cit.>. Also, this nucleus exhibits a first excited bound-state with energy ε_1=-0.183 MeV in the p_1/2^- state (ℓ_0=1), and a narrow resonance with energy ε_res=1.274 MeV, in the d_5/2^+ continuum state. To obtain the internal states of the ^11Be nucleus (i.,e., bound and scattering states), the two-body Schrödinger equation is numerically solved, using the Woods-Saxon potential with both central and spin-orbit coupling components. For the numerical values of the different parameters, we assume the same ones considered in Ref.<cit.>, which were taken from Ref. <cit.>. As already indicated, it is not straightforward the procedure in determining which common imaginary potentials to use in simultaneous calculations of both cross-sections. In this study, the choice of the long-range imaginary potentials was motivated by the understanding that these potentials are expected to provide realistic calculations for the breakup cross-section, but end up overestimating the total fusion cross-section. Therefore, once the breakup cross-section is found to be larger than the total fusion counterpart, this would be even more so when short-range imaginary potentials are used. This happens because short-range imaginary potentials are expected to enhance the breakup cross-section. By taken from Ref. <cit.>, the real and imaginary parts of the core-target optical potential parameters used in the construction of the projectile-target coupling matrix elements, taken from Ref. <cit.>, are V_0=70 MeV, R_0=7.43 MeV, a_0=1.04 MeV, W_0 = 58.9 MeV, R_w = 7.19 MeV, and a_w = 1.0 MeV. For the neutron-target optical potential, the global parametrization of Ref. <cit.>, was adopted. These potentials, together with the folding potential (<ref>), in the elastic scattering channel, extend the absorption to outside the usual region, increasing the fusion cross-section. So, we need to be mindful of the choice of imaginary potentials in the analysis of the results. To obtain fusion cross-sections that are comparable with the available experimental data (as it happens for ^11 B+^209 Bi <cit.>), and test how the total fusion-cross section is overestimated of the long-range imaginary potentials, we will perform another set of calculations where we replace the long-range W_ct, W_nt and W_pt by the short-range ones, i.e., W_0 = 50 Mev, r_w = 1.0 MeV and a_W = 0.1 MeV, as in Refs. <cit.>. 
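The practical difference between the two choices of imaginary potential can be visualised with a small, hedged sketch: the long-range imaginary Woods-Saxon term with the core-target parameters quoted above is compared with the short-range choice W_0 = 50 MeV, r_w = 1.0, a_w = 0.1, where the radii and diffuseness are assumed to be in fm. The only point of the sketch is that the short-range form is essentially zero outside the barrier region, while the long-range form still absorbs flux at large separations.

import numpy as np

def ws_imag(R, W0, Rw, aw):
    # imaginary Woods-Saxon term W(R) = -W0 / (1 + exp((R - Rw)/aw)), argument clipped to avoid overflow
    x = np.clip((R - Rw) / aw, -60.0, 60.0)
    return -W0 / (1.0 + np.exp(x))

R = np.linspace(6.0, 20.0, 8)
W_long = ws_imag(R, 58.9, 7.19, 1.0)                 # long-range core-target choice quoted above
Rw_short = 1.0 * (10 ** (1 / 3) + 208 ** (1 / 3))    # R_w = r_w (A_c^{1/3} + A_t^{1/3}) for ^10Be + ^208Pb
W_short = ws_imag(R, 50.0, Rw_short, 0.1)            # short-range choice used in Refs. <cit.>
for Ri, wl, ws in zip(R, W_long, W_short):
    print(f"R = {Ri:5.2f} fm   long-range: {wl:9.4f} MeV   short-range: {ws:9.4f} MeV")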
However, by calculating with long- or short-range imaginary potential, our interest is to verify whether the breakup cross-section remains larger than the fusion cross-section at incident energies below the Coulomb barrier. And, if that is the case, why this happens. For solving the coupled differential equations emanating from the projectile-target three-body Schrödinger equation, various numerical parameters were optimized to satisfy the convergence requirements. In this regard, the following maximum limiting values were applied: For the core-neutron, the orbital angular momentum ℓ was truncated at ℓ_max=6ħ, with r_max=100 fm being the maximum radial coordinate r, and ε_max=8 MeV the maximum excitation energies ε. For the projectile target, we have L_max=1000ħ and R_max=500 fm, respectively, for the maximum orbital angular momentum L and for the radial coordinate R. Also, the radial coordinates r_max and R_max were sliced into radial mesh points equally spaced by Δ r = 0.1 fm and Δ R = 0.05 fm, respectively. The projectile-target potentials were expanded into potential multipoles up to λ_max=4. The energy interval [0,ε_max] was discretized into energy bins of widths, Δε=0.5 MeV, for the s- and p-states; Δε=1.0 MeV, for the f- and d-states; Δε=1.5 MeV, for g-states; and Δε=2.0 MeV, for higher partial waves. Finer bins were considered for the resonant state. The numerical calculations were performed with FRESCO computer codes <cit.>. In Fig. <ref>, we show samples of convergence tests in terms of ℓ_max and ε_max for E_cm/V_ B=0.8, where V_ B=37.90 MeV is obtained from the São Paulo potential (SPP) <cit.>. As one can verify from the given results in this figure, the convergence is well reached for ε_max = 8 MeV, and ℓ_max = 6ħ. § RESULTS AND DISCUSSION To further assess the stability of the numerical results and how well the experimental data can be described, we first compare in Fig.<ref>, the computed results for the differential breakup cross-section with the available experimental data, measured at E_lab=140 MeV, as given in Ref. <cit.>. As shown, the experimental data are quite well reproduced. We also found that (results not shown) the breakup cross-section with the short-range imaginary potential becomes highly oscillatory, particularly at small angles, and does not provide a good fit for the experimental data as the ones obtained with the long-range imaginary potential. This is why, as already pointed out, we chose to use the long-range imaginary potential to calculate the breakup cross-sections. Figure <ref> shows the breakup and total fusion cross-sections, respectively σ_ BU and σ_ TF, as functions of the ratio between total energy E_cm and potential barrier height V_B, with E_cm/V_ B in the interval [0.5,1.3], The same potential parameters were applied in both calculations to obtain the breakup cross-section in Fig.<ref>. Although these parameters increase the total fusion cross-section, both fusion and breakup cross-sections are treated on the same footing with this choice, with the outcome not affected by different calculations. The results in Fig.<ref> were obtained with all the different couplings being included in the coupling matrix elements, identified as “All coupl.", i.e., with couplings to and from the projectile bound-state and continuum-continuum couplings. One observes that at sub-barrier energies (E_cm/V_ B≤ 1), the breakup cross-section is dominant over the total fusion cross-section. 
The transition occurs around the Coulomb barrier where the total fusion cross-section prevails. Therefore, one can infer that, even in the case of a neutron-halo weakly-bound projectile, the breakup channel remains the dominant reaction channel at sub-barrier incident energies. It is interesting to see that even when long-range imaginary potentials are considered which are known to increase the fusion cross-section, the breakup cross-section remains more important than its fusion counterpart. In this regard, it follows that the conclusions of Refs. <cit.> on proton-halo projectile can also be extended to a neutron-halo projectile. We do not expect the use of short-range imaginary potentials to reverse this trend at sub-barrier energies, but such potentials can be expected to push the transition point where the fusion cross-section becomes more important for larger incident energies. The results in Fig.<ref>, further suggests that the Coulomb barrier in the core-proton system is not responsible for the importance of the breakup cross-section over the total fusion cross-section at sub-barrier incident energies, which implies that static effects related to the projectile ground-state wave function are not the main factor contributing to this phenomenon. As argued in Ref. <cit.>, this leaves dynamical effects (due to the projectile-target interaction) as one of the main factors responsible for the importance of the breakup cross-section over the total fusion cross-sections at incident sub-barrier energies. A similar trend regarding the importance of the breakup cross-section over the total fusion cross-section was also reported in the breakup of ^6Li nucleus (treated as a weakly-bound cluster of an alpha particle and the deuteron) on the same target nucleus <cit.>. To verify how the long-range imaginary potentials overestimate the total fusion cross-section, we repeated the same calculations, but with the short-range imaginary potentials, as described in the previous section. The calculated fusion and breakup cross-sections are shown in Fig.<ref>. Compared to Fig.<ref>, it is noticed that the total fusion cross-sections are largely suppressed, and the breakup cross-section becomes substantially larger than the fusion cross-section. Although the fusion cross-sections, as shown in Fig.<ref>, are not considered in the following discussion, we are aware of this suppression. Within a careful comparison, the Figs.<ref> and <ref> suggest that the difference between both breakup cross-sections is not very pronounced. However, one can verify that the results obtained with short-range imaginary potentials are significantly larger than those obtained with long-range imaginary potentials. To gain more insight into the importance of the breakup cross-section over the total fusion cross-section at sub-barrier energies, it is essential to investigate the effect of the continuum-continuum couplings. In Fig.<ref>, we compare the total fusion and breakup cross-sections obtained when the continuum-continuum couplings are removed from the coupling matrix elements (without CCC), i.e., leaving a single transition to and from the projectile bound state. By inspecting the results shown in that figure a stark difference from Fig. <ref> is observed (with all couplings), as one notes that, at sub-barrier energies (E_cm/V_ B≤ 0.8), both breakup and total fusion cross-sections are almost similar, with the two curves becoming hardly distinguishable. 
Above the Coulomb barrier, the breakup cross-section starts becoming dominant. So, by comparing Figs.<ref> and <ref>, the continuum-continuum couplings appear to be responsible for the quantitative importance of the breakup cross-section over the fusion cross-section at sub-barrier incident energies, as also previously reported in Ref.<cit.>. However, considering the fusion calculations as given in Fig.<ref>, with short-range imaginary part in the nuclear potential, the fusion cross-section in the absence of the continuum-continuum couplings would be lower than the breakup cross-section even at sub-barrier energies. Therefore, although Fig. <ref> does not provide a realistic picture, it points out the fact that when the continuum-continuum couplings are removed from the matrix elements, the gap between the two curves will be significantly narrowed. To better assess of the significance of the continuum-continuum couplings on the breakup cross-section, we plot in Fig. <ref>, the breakup cross-sections in the presence and absence of the continuum-continuum couplings. Observing that figure, it resorts that at deep sub-barrier energies (E_cm/V_ B≤ 0.7), the continuum-continuum couplings serve to enhance the breakup cross-section, as the breakup cross-section in the presence of these couplings is larger than the breakup cross-section in the absence of these couplings. A similar trend is reported in Ref.<cit.>, in the case of the breakup of the weakly-bound cluster system ^6Li on the same target nucleus. At larger incident energies (E_cm/V_ B> 0.7), the continuum-continuum couplings account for the suppression of the breakup cross-section. What can we learn from the enhancement of the breakup cross-section at sub-barrier energies? Notice that, according to Refs. <cit.>, the structure of the continuum, with existing resonances, may delay the breakup process. For instance, as stated in Ref. <cit.>, “The near-target breakup is consistent with simulations which assume that the populated continuum states have a short but finite mean life, delaying into fragments." The same authors further argue that: “Similar behavior should be expected from breakup occurring from very short-lived states, irrespective of whether the breakup is direct or triggered by transfer of nucleons." If it could also be verified that continuum-continuum couplings produce similar effect, then, for energies above the Coulomb barrier, delaying the breakup would mean increasing the probability of the breakup to occur within the interaction region where nuclear forces are active to trigger absorption. Therefore, at such incident energies higher than the barrier, this will lead to total fusion larger than the breakup cross-sections as verified in Fig. <ref>. In the same energy region, if the breakup becomes more prompt when the continuum-continuum couplings are removed, then the projectile fragments have enough time to survive absorption after the breakup, explaining the increase of the breakup cross-section over the total fusion cross-section as observed in Fig. <ref>. However, until it is proven that continuum-continuum couplings in fact will delay the breakup process, our assertion here remains speculative. A study in this regard could be important, by considering for example a dynamical time-dependent approach. At sub-barrier incident energies, having low kinetic energy, the projectile further slows down due to the Coulomb repulsion. 
The dissipation of the projectile kinetic energy can be exacerbated by including couplings among continuum, since energies are required to excite these states. As such, as an intermediate step, the projectile can be in a continuum state, although asymptotically it ends up bound. However, on the outgoing trajectory, the projectile gains an initial kinetic energy as its leaves the target, considering the fact that in that case it is accelerated by the projectile-target Coulomb force. On both incoming and outgoing projectile trajectories, the continuum-continuum couplings play the same role in the breakup process. Again, assuming that the continuum-continuum couplings could delay the breakup, then such “delay” together with the projectile acceleration coming from the projectile-target Coulomb interaction, can increase the probability of the projectile breakup on its outgoing trajectory away from the absorption region, i.e., out of reach of nuclear forces. Consequently, the fusion cross-section would be reduced at the expense of the breakup cross-section, with less amount of flux being removed from the breakup channel to feed up the fusion channel. The opposing effect of the Coulomb interaction on both incoming and outgoing trajectories is also invoked in Ref.<cit.> to explain the magnitude of the opening angle of the breakup fragments, considering that this angle distribution provides information on the breakup location (as shown in Ref. <cit.>). Therefore, we cautiously speculate that the enhancement of the breakup cross-section at sub-barrier incident energies by the continuum-continuum couplings could be associated with the breakup of the projectile occurring in the outgoing trajectory. Although it is not possible to unambiguously prove this point in this work, it could provide hints for further studies in this direction. More discussion on this aspect can be found in Refs.<cit.>. The argument “on the outgoing trajectory the projectile can break up away from the reach of the nuclear forces" can be substantiated by showing that, at sub-barrier energies, the enhancement of the breakup cross-section is due to its Coulomb breakup component. This inference is born out of the fact that on the outgoing trajectory the nuclear breakup becomes increasingly less relevant as the projectile moves away from the target nucleus. To shed more light on this, let us analyze the Coulomb and nuclear breakup cross-sections. Notice that the breakup cross-section that we have so far discussed is obtained by coherently including both Coulomb and nuclear interactions in the coupling matrix elements, this we also call total (Coulomb + nuclear) breakup cross-section. The separation of the total breakup cross-section into its Coulomb and nuclear components is not a straightforward task, and this work is not intended to perform such a task. To obtain the Coulomb and nuclear breakup cross-sections, we resort to the following approximate procedure. To calculate the Coulomb breakup cross-section, we removed all the core-target and neutron-target nuclear interactions from the coupling matrix elements, keeping only its monopole component in the elastic scattering channel. This potential (as in all other calculations), was obtained by folding the projectile ground-state density with the projectile-target potentials. In this case, the Coulomb breakup cross-section is affected by the absorption in the elastic scattering channel due to the imaginary component of this potential. 
Similarly, the nuclear breakup cross-sections were obtained by removing the core-target Coulomb potential from the coupling matrix elements, also keeping its monopole diagonal term in the elastic scattering channel. This approach, although approximate, has proven to yield the desired effect. The Coulomb and nuclear breakup cross-sections thus obtained are shown in Fig.<ref>. Indeed, in that figure, we notice that at sub-barrier energies, the Coulomb breakup cross-section is strongly enhanced by the continuum-continuum couplings [see panel (a)]. As the incident energy increases, the enhancement strength decreases and the trend suggests that at higher incident energies, the continuum-continuum couplings would amount to a smaller effect on the Coulomb breakup cross-section. In fact, it has been shown that at higher incident energy, these couplings have very small suppression effect on the Coulomb breakup cross-section (see for example Ref.<cit.>). In contrast, panel (b) of Fig.<ref> shows that the nuclear breakup cross-section is strongly suppressed by continuum-continuum couplings at all the displayed incident energy regions. At energies above the barrier, the nuclear breakup cross-section is known to be strongly suppressed by these couplings compared to the Coulomb breakup cross-section. In fact, in Ref.<cit.>, this suppression is reported as one of the main reasons why the Coulomb breakup is more important than the nuclear breakup in reactions involving heavy targets. Comparing the effect of the continuum-continuum couplings on Coulomb and nuclear breakup cross-sections, it follows that the enhancement of the total breakup cross-section at sub-barrier incident energies due to these couplings comes exclusively from the Coulomb breakup. § CONCLUSION In this paper, we have analyzed the breakup of the weakly-bound neutron-halo nucleus ^11 Be on a lead target at the sub-barrier and around the barrier incident energies. It is found that at deep sub-barrier energies, the breakup cross-section is dominant over the total fusion cross-section, implying that it is the leading reaction channel at this incident energy range, as also reported in the case of the proton-halo projectile ^8B on the same target. The continuum-continuum couplings, which are reported to enhance the breakup cross-section at sub-barrier energies, are found to be responsible for this feature. The enhancement of the breakup cross-section by these couplings, at sub-barrier energies, is found to come exclusively from its Coulomb component. Based on this, we are speculating that such enhancement of the Coulomb breakup cross-section by the continuum-continuum couplings could be associated with the breakup occurring on the outgoing trajectory, provided it is proven that these couplings delay the breakup process. In summary, our study is confirming that the importance of the breakup channel over the total fusion channel, at energies below the Coulomb barrier, can also be extended to neutron-halo projectile on heavy targets. In spite of the fact that a detailed study may be required, based on the available investigations, one can anticipate this conclusion as being a universal feature in the breakup of weakly bound projectiles on heavy targets. We acknowledge partial support from the Brazilian agency Conselho Nacional de Desenvolvimento Científico e Tecnológico [INCT-FNA Proc. 464898/2014-5 (BM, LT) and Proc. 304469/2019-0(LT)]. 0 Pakou2020 Pakou A. Phys. Rev. C1022020031601(R). MukeruPR2021 Mukeru B., Ndala L. V. Lekala M. L. Pramana J. 
Phys.952021106. Nunes1999 Nunes F. M. Thompson I. J. Phys. Rev.5919992652-2659. Summers2004 Summers N. C. Nunes F. M. Phys. Rev. C702004011602(R). Canto2009 Canto L. F., Lubian J., Gomes P. R. S. Hussein M. S. Phys. Rev. C802009047601. Diaz2002 Diaz-Torres A. Thompson I. J. Phys. Rev. C652002024606. Mukeru2018 Mukeru B. J. Phys. G: Nucl. Part. Phys.452018065201. Mukeru2015 Mukeru B., Lekala M. L. Denikin A. S. Nucl. Phys. A935201518. MukeruCPC2022 Mukeru B. Tomio L. Chin. Phys. C462022014103. MukeruJPG2018 Mukeru B., Rampho G. J. Lekala M. L. J. Phys. G: Nucl. Part. Phys.452018045101. Yang2022 Yang L. Nature Communications1320227193. MukeruGR2015 Mukeru B., Lekala M. L. Rampho G. J. J. Phys.G: Nucl. Part. Phys.422015085110. Otamar2013 Otomar D. R., Gomes P. R. S., Lubian J., Canto L. F. Hussein M. S. Phys. Rev. C872013014615. 2020Lha Lha V., Parkar V. V. Kailas S. Phys. Rep.84520201. Canto2020 Canto L. F., Guimarães V., Lubian J.Hussein M. S. Eur. Phys. J. A562020281. MukeruPRC2020 Mukeru B., Frederico T. Tomio L. Phys. Rev. C1022022064623 Wang2021 Wang K. Phys. Rev. C1032021024606. Pietro2023 Pietro A. D. J. Phys.: Conf. Ser. 25862023012079. Ferreira2023 Ferreira J. L., Rangel J., Lubian J. Canto L. F. Phys. Rev. C 1072023034603. MukeruEPL2023 Mukeru B. EPL 143202364003. MukeruPRC2023 Mukeru B., Mahatikele M. B. Rampho G. J. Phys. Rev. C1072023064313. Austern1987 Austern N., Iseri Y., Kamimura M., Kawai M., Rawitscher G. Yahiro M. Phys. Rep.1541987125. 2012Yahiro Yahiro M., Ogata K., Matsumoto T. Minomo K. Prog. Theor. Exp. Phys.2012201201A206. Thompson1988 Thompson I. J. Comp. Phys. Rep.71988167. Pierre2017 Descouvemont P., Canto L. F. Hussein M. S. Phys. Rev. C952017014604 Hagino2000 Hagino K., Vitturi A., Dasso C. H. and Lenzi S. M. Phys. Rev. C 612000037602 Lubian2022 Lubian J., Ferreira J. L., Rangel J., Cortes M. R. and Canto L. F. Phys. Rev. C 1052022054601 Camacho2018 Gómez Camacho A., Wang B. Zhang H. Q. Phys. Rev. C 972018054610 Wang2017 Wang M. Chin. Phys. C412017030003. Capel2003 Capel P., Baye D. Melezhik V. S. Phys. Rev. C682003014612. Becchetti1969 Koning A. J., Delaroche J. P. Nucl. Phys. A7132003231. Hinde2010 Hinde D. J. and Dasgupta M. Phys. Rev. C812010064611. Signorini2004 Signorini C. Nucl. Phys. A 7532004329 Chamon2002 Chamon L. C. Phys. Rev. C662002014610. Duan2020 Duan F. F. Phys. Lett. B 8112020135942 Torres2018 Dias-Torres A. and Quraishi D. Phys. Rev. C972018024611. Sunil2016 Kalkal S. Phys. Rev. C932016044605. Simpson2016 Simpson E. C. Phys. Rev. C932016024605. Zhang2023 Zhang H. Science Bulletin 6820232. Rangel2020 Rangel J. Phys. Lett. B8032020135337.
http://arxiv.org/abs/2407.12323v1
20240717054939
Rainbow connectivity of multilayered random geometric graphs
[ "Josep Díaz", "Öznur Yaşar Diner", "Maria Serna", "Oriol Serra" ]
math.CO
[ "math.CO", "05C80, 05C82, 68R10" ]
Benchmarking adiabatic transformation by alternating unitaries Takuya Hatomura July 22, 2024 ============================================================== § ABSTRACT An edge-colored multigraph G is rainbow connected if every pair of vertices is joined by at least one rainbow path, i.e., a path where no two edges are of the same color. In the context of multilayered networks we introduce the notion of multilayered random geometric graphs, from h≥ 2 independent random geometric graphs G(n,r) on the unit square. We define an edge-coloring by coloring the edges according to the copy of G(n,r) they belong to and study the rainbow connectivity of the resulting edge-colored multigraph. We show that r(n)=(log n/n)^h-1/2h is a threshold of the radius for the property of being rainbow connected. This complements the known analogous results for the multilayerd graphs defined on the Erdős-Rényi random model. § INTRODUCTION Complex networks are used to simulate large-scale real-world systems, which may consist of various interconnected sub-networks or topologies. For example, this could include different transportation systems and the coordination of their schedules, as well as modeling interactions across different topologies of the network. Barrat et al. proposed a new network model to more accurately represent the emerging large network systems, which include coexisting and interacting different topologies <cit.>. Those network models are known as layered complex networks, multiplex networks or as multilayered networks. In a multilayered network, each type of interaction of the agents gets its own layer, like a social network having a different layer for each relationship, such as friendship or professional connections <cit.>. Recently, there's been a lot of interest in adapting tools used in the analysis for single-layer networks to the study of multilayered ones, both in deterministic and random models <cit.>. In the present work, we introduce a random model of colored multigraphs, the multilayered random geometric graphs and explore thresholds on their radius for being rainbow connected. §.§ Random Geometric Graphs A random geometric graph (RGG), G(n,r), where r=r(n), on the 2-dimensional unit square I^2=[0,1]^2 is defined as follows: Given n vertices and a radius r(n)∈[0,√(2)], the n vertices are sprinkled independently and uniformly at random (u.a.r.) in the unit square I^2. Two vertices are adjacent if and only if their Euclidean distance is less than or equal to r(n). Random geometric graphs provide a natural framework for the design and analysis of relay stations and wireless networks. Gilbert initially proposed this model for simulating the placement of relays between telephone stations <cit.>. His work is also considered as the beginning of continuum percolation theory. Since the introduction of wireless communication RGG has been one of the main models for those communication networks, leading to a fruitful line of research in this field, see for ex. <cit.>. For further information on random geometric graphs, one may refer to the book by Penrose <cit.> or to the more recent survey by Walters <cit.>. Random geometric graphs exhibit a sharp threshold behavior with respect to monotone increasing graph properties <cit.>. In particular, for connectivity, as the value of r(n) increases, there is a critical threshold value r_c(n) such that when r < r_c(n), the graph is typically disconnected, while for r > r_c(n), the graph is typically connected. 
The threshold for connectivity of G(n,r) appears at r_c(n)∼√(ln n/π n), and it is a sharp threshold, in the sense that the threshold window could be made as small as one wishes. Notice that r_c is also a threshold for the disappearance of isolated vertices in G(n,r). Regarding the diameter of a random geometric graph G(n,r(n)), Díaz et al. showed that if r=Ω(r(n)_c) then the diameter of G is (1+o(1))√(2)/r(n) <cit.>. In this paper, given a vertex v in a random geometric graph, G=G(n,r), we use _G (v) to denote the set of vertices that fell inside the “ball" centered at v with radius r, i.e., the neighbors of v in G. Recall that ∀ v∈ V(G) the expected degree |_G(v)| is [As usual, means with high probability, i.e., with probability tending to 1 as n→∞.] nπ r^2. We denote _G(v) as (v) when the graph under consideration is clear. §.§ Multilayered Random Geometric graphs To introduce the concept of multilayered graphs on V=[n] labeled vertices, we shall use the Erdős-Rényi graphs. Given a probability p and an integer h>0, which could depend on n, define h layers where, for each 1≤ i≤ h, G_i(n,p) is an independently generated Erdős-Rényi graph on the vertex set V. The multilayered random graph, G(n, h, p), on vertex set V is defined as the union of these layers, i.e., ∪_i=1^hG_i(n,p). Given a set of labeled points V={v_1,…, v_n}, where V has a total order according to the integer sub-indices, we scatter the points of V u.a.r. on I^2 to form the edges of a geometric graph. These graphs will form the layers of our resulting multigraph. Let h be a constant or a function of n. Let r(n) be a given radius to define the random geometric graph. Using the given r(n) and the set V, construct h independent random geometric graphs {G_i}_i=1^h, i.e., G_i(n,r(n)) is defined on V and r(n) for i∈ [h]. The multilayered random geometric graph G=G(n,h,r) is the colored multigraph on n vertices, where two vertices v_j,v_k ∈ V are adjacent in G if there exists at least one i∈ [h] such that v_j and v_k are adjacent in G_i(n,r(n)) (i.e., v_j∈_i(v_k)). Notice that if v_j∈_i(v_k) then v_k∈_i(v_j). Each multiedge keeps as associated color the number of its original layer. See Figure <ref> for an example with two layers. We introduce now a general definition for the random model of edge colored multigraphs obtained by the superposition of a collection of random geometric graphs on the same set V of vertices.
Formally, a multilayered geometric graph G(n, h, r, b) is defined by three parameters, n=|V|, r(n) the radius of connectivity of all the random geometric graphs, and h the number of layers, together with a position assignment b:[n]→I^2 ×…× I^2_h. For v_i∈ V, we denote b(i)=(b_1^i,…, b_h^i), where b_k^i∈ [0,1]^2, for k∈ [h] and i∈[n]. The multigraph G=G(n,h,r,b) has vertex set V and for k∈ [h], there is an edge (v_i,v_j) with color k, if the Euclidean distance between b_k^i and b^j_k is at most r(n). See Figure <ref> for an example with two layers. Note that, for k∈ [h], r(n) and the positions (b_k^i)_n, a geometric graph G_k(n,r) is defined by the edges with color k. Thus, G(n,h,r,b) can be seen as the colored union of h geometric graphs, all with the same vertex set and radius. Observe that G(n,r,h,b) is defined on I^2h. We refer to G_k(n,r) as the k-th layer of G(n,h,r,b). We simply denote the ball _G_i(v) of a vertex v in the i–th layer as _i(v). A multilayered random geometric graph G(n, h,r) is obtained when the position assignment b of the vertices is selected independently, for each vertex and layer, uniformly at random in [0,1]^2. Thus, the k-th layer of G=G(n, h,r) is a Random Geometric Graph and G is the colored union of h independent random geometric graphs. This definition is given for dimension two and it can be extended to points in a multidimensional space by redefining the scope of the position function. §.§ Rainbow Connectivity Given an edge-colored multigraph G, we say G is rainbow connected if, between any pair of vertices u,v∈ V(G), there is a path, called a rainbow path, with edges of pairwise distinct colors. Observe that the colored multigraph G given in Figure <ref> is not rainbow connected as there is no rainbow path from vertex i to vertex j. The rainbow connection number of a connected graph G is the minimum number of colors for which G admits a (not necessarily proper) edge-coloring such that G is rainbow connected. Chartrand et al <cit.> introduced the study of the rainbow connectivity of graphs as a strong property to secure strong connectivity in graphs and networks. Since then, variants of rainbow connectivity have been applied to different deterministic as well as random models of graphs. See e.g. the survey of Li et al. <cit.>, for further details on the extension of rainbow connectivity to other graph models. The study of rainbow connectivity has been addressed in the context of multilayered binomial random graphs by Bradshaw and Mohar <cit.>. The authors give sharp concentration results on three values on the number h of layers needed to ensure rainbow connectivity of the resulting multilayered binomial random graph G(n,p) with appropriate values of p. The results have been extended by Shang <cit.> to ensure k-rainbow connectivity in the same model, namely, the existence of k internally disjoint rainbow paths joining every pair of vertices in the multilayered graph. In this paper, we are interested in studying the rainbow connectivity of a multilayered random geometric graph G(n,h,r(n)). In particular, for every fixed h, we are interested in the minimum value of r(n) such that the multilayered random geometric graph G(n,h,r(n)) is rainbow connected. In the same way, for fixed values of r we want to determine the minimum number of layers h such that G(n,h,r(n)) is rainbow connected. The latter parameter can be defined as the rainbow connectivity of the multilayered random geometric graph. 
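To make the model and the property being studied concrete, the following Python sketch (an illustration under the definitions above, not code used in the paper) generates a multilayered random geometric graph, with one independent uniform placement of the n labeled vertices per layer and the layer index playing the role of the edge color, and then checks rainbow connectivity by brute force. The check uses the observation that the colors along any rainbow path are pairwise distinct and therefore form a subsequence of some permutation of [h]; propagating reachability color by color along each of the h! orders, while allowing a color to be skipped, and taking the union over all orders detects exactly the pairs joined by a rainbow path. This is feasible only for small n and h and is an expository device rather than the proof technique of the paper.

import numpy as np
from itertools import permutations

def multilayered_rgg(n, h, r, seed=0):
    # one boolean adjacency matrix per layer (color) of G(n, h, r):
    # each layer places the n labeled vertices independently and u.a.r. in I^2
    rng = np.random.default_rng(seed)
    layers = []
    for _ in range(h):
        pts = rng.random((n, 2))
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        A = d2 <= r * r
        np.fill_diagonal(A, False)
        layers.append(A)
    return layers

def rainbow_connected(layers):
    n = layers[0].shape[0]
    reach_any = np.eye(n, dtype=bool)
    for perm in permutations(range(len(layers))):
        reach = np.eye(n, dtype=bool)          # reach[u, v]: v reachable from u by a rainbow path so far
        for c in perm:
            step = (reach.astype(int) @ layers[c].astype(int)) > 0
            reach = reach | step               # extend by one edge of color c, or keep the path unchanged
        reach_any |= reach
    return bool(reach_any.all())

n, h = 300, 2
r = (np.log(n) / n ** (h - 1)) ** (1 / (2 * h))   # the threshold radius of Theorem <ref>
layers = multilayered_rgg(n, h, r)
print("average degree per layer:", [round(A.sum() / n, 1) for A in layers],
      "  expected about n*pi*r^2 =", round(n * np.pi * r ** 2, 1))
print("rainbow connected:", rainbow_connected(layers))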
§ MAIN RESULTS Our main results are lower and upper bounds on the value of r to asymptotically assure that G(n,h,r) does or does not have the property of being rainbow connected. Let h≥ 2 be an integer and let G=G(n,h,r) be an h-layered random geometric graph. Then, if r(n)≥(log n/n^h-1)^1/2h, then G is rainbow connected. Moreover, there is a constant 0<c < 1 such that, if r(n)≤ c(log n/n^h-1)^1/2h, then G is not rainbow connected. For a multilayered random geometric graph, being rainbow connected is clearly a monotone increasing property with increasing r. We will implicitly use this fact in the proofs for the threshold of rainbow connectivity stated in Theorem <ref>. Notice that Theorem <ref> can be restated as a threshold of h for the rainbow connectivity of a multilayered geometric random graph G with a given radius r=r(n). Let r=r(n) with r(n)=o(1). Set h_0=⌈log n+loglog n/log nr^2⌉. The multilayered random geometric graph G(n,r,h) is rainbow connected if h≤ h_0, while if h>h_0 it is not rainbow connected. In the next sections we prove Theorem <ref>. The proof of the case h=2 requires a special argument, given in Section <ref>. The proof of the general case uses a property of local expansion in multilayered random geometric graphs, which is discussed in Section 4 and is of independent interest. Section <ref> contains the proof of the lower bound in Theorem <ref> for the case h≥ 3. The proof of the upper bound is given in Section <ref>. The paper concludes with some final remarks and open problems. § RAINBOW CONNECTIVITY OF TWO-LAYERED RANDOM GEOMETRIC GRAPHS The proof of Theorem <ref> for the case h=2 requires a special argument which is given in Proposition <ref> below. We next recall upper and lower bounds for the probability that two vertices v_i,v_j are adjacent in a random geometric graph G(n,r). The lower bound takes into account the boundary effects, when the disc of radius r centered at a vertex close to the boundary is not completely contained in the unit square. Let v_i, v_j be two vertices in a random geometric graph G(n,r). Denote by (v_j) the ball of v_j in G(n,r). We have π r^2-o(r^2)≤ (v_i∈ (v_j))≤π r^2. The upper bound is clear by the definition of the random geometric graph. For the lower bound, let R be the centered sub-square of I=[0,1]^2 with sides of length 1-2r. Then, (v_i∈ (v_j)) = (v_i∈ (v_j)|v_j∈ R) Pr(v_j∈ R) + (v_i∈ (v_j)| v_j∉R) Pr(v_j∉R) ≥π r^2 (1-2r)^2 + π r^2/4 4 r(1-r) ≥π r^2 (1-3r + 3r^2) = π r^2 - o(r^2). Let G(n, r, 2) be a two-layered random geometric graph. If r(n)≥(log n/n)^1/4, then G is rainbow connected. Moreover, there is a positive constant 0<c < 1 such that, if r(n)≤ c(log n/n)^1/4, then G is not rainbow connected.
Denote by G_1(n,r) and G_2(n,r) the two layers of G, with the value of r=r(n) given in the statement of the proposition. For each pair v_i,v_j∈ V, let X_v_i,v_j denote the indicator random variable X_v_i,v_j= 1 if there is not a rainbow path between v_i and v_j in G , 0 otherwise . Let v_k be different from v_i and v_j. Let A_v_k be the event that v_k is joined to v_i in G_1(n,r) and to v_j in G_2(n,r) or vice-versa, namely, A_v_k={{v_i∈_1(v_k)}∩{v_j∈_2(v_k)}}∪{{v_j∈_1(v_k)}∩{v_i∈_2(v_k)}} , where _i (v) denotes the set of neighbours of v in G_i, i=1,2. Using (<ref>), we have (π r^2-o(r^2))^2≤ (A_v_k)≤ 2(π r^2)^2. Let A_v_iv_j denote the event that v_i and v_j are joined by an edge either in G_1(n,r) or in G_2 (n,r), that is A_v_i,v_j={v_i∈_1(v_j)}∪{v_i∈_2(v_j)}, so that π r^2-o(r^2)≤ (A_v_i,v_j)≤ 2π r^2. For given v_i and v_j, the event that they are joined by a rainbow path in G is (∪_k≠ i,jA_v_k)∪ A_v_i, v_j. Therefore, since the events A_v_k, v_k∈ V∖{v_i,v_j} and A_v_i,v_j are independent, using the lower bounds on equation (<ref> ) and (<ref>), we have (X_v_i,v_j) = ((∪_v_k≠ v_i,v_jA_v_k)∪ (A_v_i,v_j)) = ((∩_v_k≠ v_i,v_jA_v_k)∩(A_v_i,v_j) ≤ (1-(π r^2-o(r^2))^2)^n-2 (1- (π r^2-o(r^2)) ≤ (1-π^2 r^4+o(r^4))^n .
The property of being rainbow connected, for multilayered geometric graphs, is monotone on the value of the radius therefore it is enough to prove the statement for the two extreme values of the radius. Suppose first that r(n)= (ln (n)/n)^1/4. Let X=∑_i<j X_v_i,v_j be the random variable counting the number of pairs of vertices {v_i,v_j} in G which are not joined by a rainbow path in G. Plugging in the value of r(n) in the previous expression, we obtain (X) =∑_i<j (X_v_i,v_j) ≤n 2(1-(π^2 r^4)+o(r^4))^n ≤ e^2log n(1-π^2log n/n+o(log n/n))^n ≤ e^(2-π^2)log n+o(log n) . It follows from Markov's inequality that (X≥ 1)≤(X)→ 0, as n→∞. This proves the first part of the statement, that G is rainbow connected if r(n)≥(ln n/n)^1/4. For the upper bound, let r(n) = c(log n/n)^1/4 for some positive small constant c to be specified later. We recall that, for a given vertex v_i, we denote by _1(v_i) the set of neighbours of of v_i in G_1. We have |_1 (v_i)|≤ nπ r^2 = nπ c^2(ln n/ n)^1/2=π c^2(nln n)^1/2, and |_1 (v_i)|≥ n π r^2 - o(nr^2)= π c^2(nln n)^1/2 - o((nln n)^1/2). It follows that, for all sufficiently large n, the complement J=V∖_1 (v_i) of _1 (v_i) has cardinality |J|≥ n/2. For each v_j∈ J, let Y_ij denote the indicator random variable that the neighborhood of v_j in G_2 is disjoint from _1 (v_i). Using 1-x≥ e^-x/2 and the value of r(n), the number Y_i=∑_j∈ J Y_ij of vertices in J whose neighborhood in G_2 is disjoint from _1 (v_i) has expected value (Y_i) =∑_j∈ J(Y_ij)≥ |J|(1-π n r^2)^|_1(v_i)| ≥n/2 (e^-π r^2/2)^n π r^2 - o(nr^2) = n/2 e^-nπ^2 r^4/2 + o(nr^4) ≥ e^(1-c^4π^2/2)ln n-ln 2 + o(ln n). By choosing c so that 1-c^4π^2/2 > 0 we have (Y_i)→∞ as n→∞. Since the variables Y_ij are independent, by Chernoff bound we have (Y_i=0)≤ e^-(Y_i)/2=o(1) as n→∞. By repeating the argument exchanging the roles of G_1 and G_2,there are vertices v_j not joined with v_i by a rainbow path and G is not rainbow connected. This completes the proof of the second part of the statement. § LOCAL EXPANSION A key property of multilayered random geometric graphs is their local expanding properties in the sense of Proposition <ref> below. In this section, we use the following concentration result for the size of the image of a random map. We include the proof of this statement for the sake of completeness. Let m and k be positive integers and let g:[m]→ [k] be a map in which each image of i∈ [m] in [k] choosen independently and uniformly at random. Let Y=|g([m])| be a random variable estimating the size of the image of g. For every a>0, we have that (|Y-(Y)|≥ a)≤ 2e^-2a^2/m. Moreover, (|Y-m|> a)≤ 2e^-2(a-m^2/2k)^2/m We will use the McDiarmid inequality <cit.>. Let f:[k]^m→ [k] be the function that, to an m-tuple (x_1,… ,x_m), assigns the cardinality of the set {x_1,… ,x_m}. The function f satisfies the bounded differences condition: for any t∈ [m], max_x,x'∈ [k]|f(x_1,… ,x_t-1,x,x_t+1,… ,x_m)-f(x_1,… ,x_t-1,x',x_t+1,… ,x_m)|≤ 1, as changing the value of one coordinate can modify the cardinality of the image by at most one unit (depending on x and x' being in the set {x_1,… ,x_t-1,x_t+1,… ,x_m} or not). Let X_i be a random variable indicating the image of i∈ [m] by the independent random map g:[m]→ [k], which is uniformly distributed on [k]. Let f(X_1,… ,X_m) be defined as a set of the cardinality of the set {X_1,… ,X_m}, so that we can define the random variable Y=f(X_1,… ,X_m), therefore (Y) = (f(X_1,…, X_m)). 
Hence, by McDiarmid inequality, as the sum of the differences of f is at most m, for every a>0 we have (|Y-(Y)|≥ a)≤ 2e^-2a^2/m, proving Equation (<ref>). To prove Equation (<ref>), for each j∈ [k] consider the indicator random variable Y_j= 1, if j∈Im(g) , 0, otherwise . Then Y=Y_1+⋯ +Y_k and m≥ (Y)=k(1-(1-1/k)^m)≥ k(1-e^-m/k)≥ k(m/k-m^2/2k^2)=m-m^2/2 k. It follows that, using Y≤ m together with the triangle inequality, (|Y-m|≥ a)≤ (|Y-(Y)|≥ a-m^2/2k)≤ 2e^-2(a-m^2/2k)^2/m . Let G=G(n,r,h) be a multilayered random geometric graph. For a permutation σ on the set of colors [h], u∈ V, and 1≤ℓ≤ h, let _ℓ,σ(u) denote the set of vertices reachable from u by rainbow paths of length ℓ starting at u such that the i-th edge along the path belongs to the layer G_σi. When σ is the identity, we write _ℓ(u). Let h≥ 3 be fixed and let G=G(n,r,h) be a multilayered random geometric graph with r=r(n)≥(log n/n^h-1)^1/2h. Then, if (nr^2)^h-1<n, for 1≤ℓ≤ h-1, we have |_ℓ(u)|= Θ((nr^2)^ℓ). The upper bound is clear as there can not possibly be more than (nr^2)^ℓ vertices joined with u by rainbow paths of length ℓ, following the fixed ordering of colors, as in the statement. The proof of the lower bound is by induction on ℓ. Denote by α_ℓ=|_ℓ (u)|. We show that α_ℓ≥ c_ℓ (nr^2)^ℓ- o((nr^2)^ℓ), for some constant c_ℓ>0. We assume that r(n)=(log n/n^h-1)^1/2h, this value as the statement will then hold for any larger r as long as (nr^2)^h-1<n. Note that for this selection, (nr^2)^h-1 =(n(log n/n^h-1)^1/2h^2)^h-1<n. For ℓ=1 there are α_1≥ nπ r^2-o(nr^2) neighbours of u in G_1(n,r) and the statement holds with c_1=π. Assume that the statement holds for ℓ-1 for some 2≤ℓ≤ h-1, so that α_ℓ-1≥ c_ℓ-1(nr^2)^ℓ-1-o((nr^2)^ℓ-1), Note that we have α_ℓ-1≤ (nr^2)^ℓ-1, which is the maximum number of vertices which can be possibly reached by rainbow paths, at most nr^2 at each step for each vertex. Subdivide the unit square into subsquares of diagonal r, so that the disc of radius r centered at any point in the subsquare contains entirely the subsquare. The number k of subsquares is k= 2/r^2=2 n^1-1/h/(log n)^1/h. Choose a constant c with max{2ℓ-1-h,(ℓ-1)/2}<c< ℓ-1. By Proposition <ref> with m=α_ℓ-1, k and a=n^c/h, the number of subsquares filled in by the points in _ℓ-1(u) when considered as vertices of the ℓ-th layer G_ℓ is at least α_ℓ-1-n^c/h with probability p_ℓ-1≥ 1-2exp(-2(n^c/h-α_ℓ-1^2/k)^2/α_ℓ-1). By the choice of c<ℓ-1 we have n^c/h=o((nr^2)^ℓ-1). Therefore, with probability p_ℓ-1, the number of filled subsquares is at least, c_ℓ-1(nr^2)^ℓ-1-o((nr^2)^ℓ-1)-n^c/h=c_ℓ-1(nr^2)^ℓ-1-o((nr^2)^ℓ-1). Let A=V∖(∪_t=0^ℓ-1_t (u)). Each subsquare contains |A|r^2/2 vertices of G_ℓ in A. If the subsquare contains a point in _ℓ-1(u), then these vertices give rise to |A|r^2/2 rainbow paths in _ℓ (u). Since |∪_t=0^ℓ-1_t (u)|≤∑_t=0^ℓ-1(nr^2)^t=O((nr^2)^ℓ-1) and r^2=o(nr^2), we have |_ℓ(u)| ≥ (|A|r^2/2)(c_ℓ-1(nr^2)^ℓ-1-o((nr^2)^ℓ-1) ≥ ((n-O((nr^2)^ℓ-1)r^2/2)(c_ℓ-1(nr^2)^ℓ-1-o((nr^2)^ℓ-1) =(c_ℓ-1/2)(nr^2)^ℓ-o((nr^2)^ℓ). The proof of the statement will hold for ℓ with c_ℓ=c_ℓ-1/2 once it is checked that p_ℓ-1→ 1 as n→∞. By the choice of c> 2ℓ-1-h we have, α_ℓ-1^2/k =c_ℓ-1^2(nr^2)^2ℓ-1/n-o((nr^2)^2ℓ-1/n) =c_ℓ-1^2n^(2ℓ-1-h)/h(log n)^(2ℓ-1)((1-1/h)-o(n^(2ℓ-1-h)/h(log n)^(2ℓ-1)((1-1/h) =o(n^c/h). Hence, the exponent in the expression of p_ℓ-1 satisfies, using c_ℓ-1(nr^2)^ℓ-1-o((nr^2)^ℓ-1)≤α_j-1≤ (nr^2)^ℓ-1 and c> (ℓ-1)/2, (n^c/h-α_ℓ-1^2/k)^2/α_ℓ-1 ≥(n^c/h/2)^2/α_ℓ-1≥1/2n^(2c-(ℓ-1))/h→∞, (n→∞), so that p_ℓ-1 tends to 1 with n→∞. 
This completes the proof of the Proposition. § LOWER BOUND FOR H≥ 3 We split the proof of the lower bound in Theorem <ref> according to the parity of h. We start with the odd case. Let G=G(n, r, h) be a h-layered random geometric graph, for some odd value h>3. If r(n)≥(log n/n^h-1)^1/2h, then G is rainbow connected. Write h=2k+1. Denote by G_i=G_i(n,r) the i-th layer of G, 1≤ i≤ h. For a subset I⊆ [h], we denote by G_I(n,r) the layered graph formed by the layers included in I. For a pair i,j of distinct vertices in V(G) and a permutation σ of {1,2,3,…, h}, let P(i,j;σ) denote the set of rainbow paths of length h joining i and j with the i-th edge in G_σ(i) and let I_1(σ)={σ(1),…, σ(k)}, I_2(σ)={σ(k+2),… ,σ (h)}. Let A=_k,σ(i) be the set of vertices reached from i by rainbow paths of length k starting at i following the colour order determined by σ. Let B be the set of vertices reached from j by rainbow paths of length k starting at j following the colour order determined by following σ in reversed order with the k-th edge along the path colored k+2. From Proposition <ref>, we have |A|,|B|= Θ(n^k r^2k). Let X_i,j denote the number of rainbow paths of length h in P(i,j;σ). For a pair (k,k')∈ A× B with k≠ k', let Y_k,k' be the indicator function that k and k' are neighbours in G_σ (k+1). We have (Y_k,k')=π r^2, the probability that the vertices k' and k are adjacent in G_σ (k+1). Then, X_ij=∑_k,k' Y_k,k', where the sum runs through all pairs (k,k')∈ A× B with k≠ k'. We observe that the variables Y_k,k' are independent: when the pairs (k,k'), (l,l') are disjoint it is clear that Y_k,k', Y_l,l' are independent, while if k=l, say, then (Y_k,k'=1, Y_k,l'=1) is the probability that k' and l' are both adjacent to k, which is the product (Y_k,k'=1) (Y_k,l'=1). As before we analyze the behaviour on the extreme cases. Let us fix r(n) =(log n/n^h-1)^1/2h. If some h-1-layered subgraph of G is already rainbow connected then there is nothing to prove. Assume this is not the case. Then, for a pair of vertices i,j not connected by a rainbow path of length h-1, the sets A and B are disjoint. In this case, (X_i,j=0) = (∩_k,k'{Y_k,k'=0}) =∏_k,k' (Y_k,k'=0) ≤ (1-π r^2)^(n^k r^2k)^2 ≤ e^-π n^2kr^4k+2. By using the union bound on all pairs i,j and the lower bound on r, (∩_i,j{X_ij≥ 1})=1- (∪_i,j X_i,j=0)≥ 1-n^2e^-π n^2kr^4k+2, As k=(h-1)/2, by the lower bound on r, n^2kr^4k+2= n^h-1r^2h≥log n, Therefore, the last term in the bound on (∩_i,j{X_ij≥ 1}) is o(1) as n→∞. Hence all pairs i,j are connected by a rainbow path of length h. We next consider the case h even. Let G=G(n, r, h) be a h-layered random geometric graph, for some even value h>2. If r(n)≥(log n/n^h-1)^1/2h, then G is rainbow connected. The proof is similar as the one for the case h odd. Let k=h/2. The first difference is in the definition of A and B. Let A=_k,σ(i) be the set of vertices reached from i by rainbow paths of length k starting at j following the colour order determined by σ. Let B be the set of vertices reached from j by rainbow paths of length k-1 starting at j following the reverse order in σ, the i-th edge colored σ (k-i). From Lemma <ref>, we have |A|= Θ(n^k r^2k) and |B|= Θ(n^k-1 r^2k-1). In this case |A||B|= Θ(n^2k-1r^4k-2). Following the same notation and the same arguments as in Proposition <ref>, we have (X_i,j=0) ≤ (1-π r^2)^n^2k-1r^4k-2≤ e^-π n^2k-1r^4k. 
Under similar hypothesis with respect to the size of _h-1(u), (∩_i,j{X_ij≥ 1})=1- (∪_i,j X_i,j=0)≥ 1-n^2e^-π n^2k-1r^4k, As k=h/2, by the lower bound on r, n^2k-1r^4k= n^h-1r^2h≥log n. Therefore, the last term in the bound on (∩_i,j{X_ij≥ 1}) is o(1) as n→∞. Hence all pairs i,j are connected by a rainbow path of length h. § UPPER BOUND Let h≥ 2 be fixed and let G=G(n,r,h) be an h-multilayered random geometric graph. There is a positive constant c, 0<c<1 such that, if r(n)≤ c(log n/n^h-1)^1/2h, then G is not rainbow connected. Denote by G_i=G_i(n,r) the i-th layer of G, 1≤ i≤ h, where r=r(n) = c(log n/n^h-1)^1/2h. The proof is by induction on h. By Proposition <ref> the statement holds for h≤ 2. Fix a permutation σ of {1,2,… ,h} and a vertex v of G. Let A=_h-1,σ(v). According to Proposition <ref>, there is a constant d such that |A|= d (nπ r^2)^h-1 |A|= d (nπ r^2)^h-1 = d c^2(h-1)π^h-1(nlog n)^h-1/h, For all n sufficiently large, the complement B=V∖ A has cardinality at least n/2. For each v_j∈ B let Y_ij be the indicator function that the neighborhood of v_j in G_σ (h) is disjoint from A. We have (Y_ij)≥ (1-π r^2)^|A|. Let Y_i=∑_v_j∈ JY_ij denote the number of vertices in B whose neighborhood in G_σ (h) is disjoint from A. By using |J|≥ n/2 and the upper bounds on |A| and r, we have (Y_i) =∑_v_j∈ B(Y_ij) ≥ |B|(1-π r^2) ^|A| ≥n/2e^-π r^2|A|/2 ≥1/2e^(1-d c^2(h-1)π^h/2)log n. By choosing c such that 1-d c^2(h-1)π^h/2> 0, we have (Y_i)→∞ as n→∞. Since the variables Y_ij are independent for j∈ J, by Chernoff bound, we have (Y_i=0)≤ e^-(Y_i)/2=o(1). Therefore, there are pairs v_i,v_j not joined by a rainbow paths with the i-th edge along the path from v_i to v_j in G_σ (i). By repeating the same argument for all h! permutations σ, the graph G is not rainbow connected. Finally, by monomtonicity of the property we get the result. § CONCLUSIONS The main purpose of this paper is to identify the threshold for the radius r(n) to get a rainbow-connected multilayered random geometric graph, as obtained in Theorem <ref>. As mentioned in Section <ref>, the analogous problem of determining the threshold for h in order that the multilayered binomial random graph is rainbow connected was addressed by Bradshaw and Mohar <cit.>. We find the model of multilayered random geometric graphs highly appealing and it leads to a host of interesting problems. One may think of a dynamic setting where n individuals perform random walks within the cube and communicate with the close neighbors at discrete times t_1<t_2<⋯ <t_h. The rainbow connectivity in this setting measures the number of instants needed so that every individual can communicate with each of the other ones. A natural immediate extension is to address the threshold to get rainbow connectivity k, as achieved in the case of multilayered binomial random graphs by Shang <cit.>. There is a vast literature addressing rainbow problems in random graph models, and this paper is meant to open the path to these problems in the context of multilayered random geometric graphs. It would also be interesting to find asymptotic estimates on r such that h copies produce a rainbow clique of size √(h). We observe that, for large h, the threshold of r for rainbow connectivity approaches the connectivity threshold of random geometric graphs. The arguments in the proof, however, apply only for constant h. 
For h growing with n, the correlation between distinct edges in our model decreases and the model gets closer to the random binomial graph, where the results are expected to behave differently and the geometric aspects of the model become irrelevant. We close with a list of open questions.
* Given a fixed number of layers, what is the expected percentage of pairs of vertices which are connected by a rainbow path?
* Given a fixed number h of layers, what is the density (among all h-multilayered random geometric graphs) of those that are rainbow connected?
* What will be the density of the (at least) log n copies of G(n,r)? Will it tend to the density of G(n,p) w.h.p.?
* What can we say about the structural properties of n copies of G(n,r)? Do they contain, w.h.p., a clique of size n? Or a clique of size f(n) for some function f? Or a rainbow clique? Rainbow triangles?
* What is the threshold for r such that h copies produce a rainbow clique of size √(h)? (For triangles: do three copies produce a rainbow triangle?)
* What is a lower bound, call it r_re, for r (as a function of n and h) which makes the h-layered random geometric graph G(n,r) an expander? Clearly, r_re≤ r_rc.
* What structural properties of layered G(n,p) and layered G(n,r) overlap when considering a large number of layers?
* Given a multilayered random geometric graph which is not rainbow connected, what is the threshold for p such that adding G(n,p) to it results in a rainbow connected graph?
* Compute the expected value of the number of edges as h grows. Is there a threshold for r for convergence to the complete graph? Is there a threshold for r to reach rainbow connectivity k for a fixed k?
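To complement the asymptotic statements, the following is a small Monte Carlo sketch (added for illustration; it is not part of the original paper) for the two-layer case h=2, assuming, as in the model above, that the layers are independent copies of G(n,r), so vertex positions are re-sampled in each layer of the unit square. For h=2, rainbow connectivity amounts to checking that every pair of vertices has either a direct edge in some layer or a two-edge path using one edge from each layer; the radius is scaled around the h=2 threshold (ln n/n)^{1/4}. Function names and the parameter values are choices made for this sketch.

import numpy as np

def layer_adjacency(n, r, rng):
    # one independent layer: n uniform points in the unit square,
    # edges between distinct pairs at Euclidean distance at most r
    pts = rng.random((n, 2))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return (dist <= r) & ~np.eye(n, dtype=bool)

def rainbow_connected_h2(a1, a2):
    # a pair is rainbow connected if it has a direct edge in some layer,
    # or a length-2 path using one edge of each layer (in either order)
    two_step = ((a1.astype(int) @ a2.astype(int)) > 0) | \
               ((a2.astype(int) @ a1.astype(int)) > 0)
    reach = a1 | a2 | two_step
    np.fill_diagonal(reach, True)
    return bool(reach.all())

def empirical_probability(n, c, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    r = c * (np.log(n) / n) ** 0.25      # radius scaled around the h=2 threshold
    hits = sum(rainbow_connected_h2(layer_adjacency(n, r, rng),
                                    layer_adjacency(n, r, rng))
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for c in (0.6, 0.8, 1.0, 1.2):
        print(f"c = {c}: P(rainbow connected) ~ {empirical_probability(300, c):.2f}")

For moderate n the transition is still fuzzy, but as n grows the empirical probability should move from near 0 to near 1 as the constant c crosses the threshold regime identified above.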
http://arxiv.org/abs/2407.13245v1
20240718075829
Descent Methods for Vector Optimization Problems: A Majorization-Minimization Perspective
[ "Jian Chen", "Liping Tang", "Xinmin Yang" ]
math.OC
[ "math.OC", "90C29, 90C30" ]
mymainaddress,mysecondaryaddress]Jian Chen mymainaddress]Liping Tang mymainaddress]Xinmin Yangmycorrespondingauthor [mycorrespondingauthor]Corresponding author. Email addresses: mailto:chenjian_math@163.comchenjian_math@163.com (Jian Chen), mailto:tanglipings@163.comtanglipings@163.com (Liping Tang), mailto:xmyang@cqnu.edu.cnxmyang@cqnu.edu.cn (Xinmin Yang) [mymainaddress]National Center for Applied Mathematics in Chongqing, Chongqing Normal University, Chongqing 401331, China [mysecondaryaddress]School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan China § ABSTRACT In this paper, we develop a unified framework and convergence analysis of descent methods for vector optimization problems (VOPs) from a majorization-minimization perspective. By choosing different surrogate functions, the generic method reduces to some existing descent methods with and without line search, respectively. The unified convergence analysis shows that the slow convergence of the steepest descent method is mainly due to the large gap between surrogate and objective functions. As a result, the performance of descent methods can be improved by narrowing the surrogate gap. Interestingly, we observe that selecting a tighter surrogate function is equivalent to using an appropriate base of the dual cone in the direction-finding subproblem. Furthermore, we use Barzilai-Borwein method to narrow the surrogate gap and devise a Barzilai-Borwein descent method for VOPs with polyhedral cone. By reformulating the subproblem, we provide a new insight into the Barzilai-Borwein descent method and bridge it to the steepest descent method. Finally, several numerical experiments confirm the efficiency of the proposed method. Multiple objective programming Majorization-minimization optimization Barzilai-Borwein method Convergence rates Polyhedral cone [2010] 90C2990C30 § INTRODUCTION In vector optimization, a vector-valued function F:ℝ^n→ℝ^m is to be optimized under the partial order induced by a closed, convex, and pointed cone K⊂ℝ^m with a non-empty interior, defined as follows: y≼_K(resp.≺_K)y^' ⇔ y^'-y∈ K (resp. int(K)). Let K^*={c^*∈ℝ^m:⟨ c^*,y⟩≥0, ∀ y∈ K} be the positive polar cone of K, and C be a compact convex set such that 0∉ C and cone(C)=K^*, namely, C is a base of K^*. This paper focuses on the following vector optimization problem: min_K F(x), VOP where F:ℝ^n→ℝ^m is differentiable. In vector optimization, it is often impossible to improve all objectives simultaneously with respect to the partial order. Therefore, the concept of optimality is defined as efficiency <cit.>, meaning that there is no better solution for an efficient solution. Specifically, the problem (<ref>) corresponds to a multiobjective optimization problem when K=ℝ^m_+, where ℝ^m_+ denotes the non-negative orthant of ℝ^m. Various applications of multiobjective optimization problems (MOPs) can be found in engineering <cit.>, economics <cit.>, management science <cit.>, environmental analysis <cit.>, machine learning <cit.>, etc. Although many real-world problems reformulated as vector-valued problems adhere to the partial order induced by ℝ^m_+, some applications, such as portfolio selection in securities markets <cit.>, require partial orders induced by closed convex cones other than the non-negative orthant. Consequently, vector optimization problems (VOPs) have garnered significant attention in recent years. 
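As a concrete illustration of the efficiency concept just mentioned (a sketch added for this discussion, not taken from the paper), consider the componentwise order induced by K=ℝ^m_+: objective vectors are often incomparable, so one keeps the non-dominated ones. The helper name and the toy data below are illustrative only.

import numpy as np

def nondominated(points):
    # keep objective vectors y for which no other vector z satisfies
    # z <= y componentwise with z != y (efficiency w.r.t. K = R^m_+)
    points = np.asarray(points, dtype=float)
    keep = []
    for i, y in enumerate(points):
        dominated = any(np.all(z <= y) and np.any(z < y)
                        for j, z in enumerate(points) if j != i)
        if not dominated:
            keep.append(y)
    return np.array(keep)

print(nondominated([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]]))
# the first three vectors are pairwise incomparable; [3, 3] is dominated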
Over the past two decades, descent methods have received increasing attention within the multiobjective optimization community, primarily due to the seminal work on the steepest descent method proposed by <cit.>. Inspired by Fliege and Svaiter's contributions, researchers have extended other numerical algorithms to solve multiobjective optimization problems (MOPs) <cit.>. To the best of our knowledge, the study of descent methods for unconstrained vector optimization problems can be traced back to the work of <cit.>, who extended the steepest descent method for MOPs (SDMO) proposed by <cit.> to VOPs. In this context, the direction-finding subproblem at x^k is formulated as follows: min_d∈ℝ^nmax_c^*∈ Cc^*, JF(x^k)d+1/2d^2, where JF(x^k)∈ℝ^m× n is the Jacobian matrix of F(·) at x^k. Similar to MOPs, several standard numerical algorithms have been extended to VOPs, including the Newton method <cit.>, projected gradient method <cit.>, proximal point method <cit.>, conjugated gradient method <cit.> and conditional gradient method <cit.>. In recent years, complexity analysis of descent methods for MOPs has been extensively studied. <cit.> established the convergence rates of SDMO under different convexity assumptions. Similarly, <cit.> developed convergence results for the multiobjective proximal gradient method. <cit.> noted that both theoretical and empirical results indicate that multiobjective first-order methods exhibit slow convergence due to objective imbalances. To address this challenge, <cit.> proposed a Barzilai-Borwein descent method for MOPs (BBDMO) that dynamically tunes gradient magnitudes using Barzilai-Borwein's rule <cit.> in direction-finding subproblem. Moreover, an improved linear convergence rate is confirmed by <cit.>, demonstrating that Barzilai-Borwein descent methods can effectively mitigate objective imbalances from a theoretical perspective. Despite the extensive study of the complexity analysis of descent methods for MOPs, corresponding results for VOPs have received little attention. As described by <cit.>, the linear convergence rates of first-order descent methods for VOPs are influenced by C, the base of K^* in direction-finding subproblem. This naturally leads to the following question: (Q1) How to select an appropriate base to accelerate convergence ? Similarly, for the subproblem of SDMO: min_d∈ℝ^nmax_λ∈Δ_mλ, JF(x^k)d+1/2d^2, where Δ_m:={λ≽0:∑_i=1^mλ_i=1} is a base of ℝ^m_+, a natural question arises: (Q2) Is Δ_m a good choice of base for ℝ^m_+ in SDMO? If not, how to choose a better base? Motivated by majorization-minimization optimization <cit.>, which involves the successive minimization of a sequence of upper bounds of the objective function, we provide a unified convergence analysis of descent methods for VOPs from a majorization-minimization perspective. We emphasize that the gap between the surrogate and objective functions significantly affects the performance of descent methods, which plays a central role in majorization-minimization optimization. Specifically, we highlight that the steepest descent method for VOPs exhibits slow convergence due to the large gap between the surrogate and objective functions. To address this issue, we develop an improved descent method with a tighter surrogate function, resulting in improved linear convergence. Interestingly, we show that selecting a tighter surrogate function is equivalent to using an appropriate base in the direction-find subproblem. This provides a positive answer to (Q1). 
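For intuition about the direction-finding subproblem above, the following is a small sketch (an illustration added here, not code from any of the cited works) for the bi-objective case K=ℝ^2_+ with C=Δ_2. It uses the standard dual reformulation of the steepest descent subproblem, in which d=-JF(x)^Tλ with λ minimizing ½‖JF(x)^Tλ‖^2 over the simplex; for two objectives this one-dimensional problem has a closed form. Names and the toy Jacobian are assumptions for this example.

import numpy as np

def steepest_descent_direction(jac):
    # jac: 2 x n Jacobian of F at the current point.
    # Dual of the subproblem: minimize 0.5 * || lam*g1 + (1-lam)*g2 ||^2
    # over lam in [0, 1]; the direction is the negative min-norm combination.
    g1, g2 = jac[0], jac[1]
    diff = g1 - g2
    denom = float(diff @ diff)
    lam = 0.5 if denom == 0.0 else float(np.clip(-(diff @ g2) / denom, 0.0, 1.0))
    d = -(lam * g1 + (1.0 - lam) * g2)
    return d, lam

# toy example with conflicting gradients
jac = np.array([[1.0, 0.0],
                [0.0, 1.0]])
d, lam = steepest_descent_direction(jac)
print(d, lam)   # d = [-0.5, -0.5], lam = 0.5: a common descent direction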
Additionally, we devise a Barzilai-Borwein descent method for VOPs (BBDVO) with polyhedral cones. By reformulating the subproblem, we observe that the BBDMO is essentially the SDMO with an appropriate base in direction-finding subproblem. This observation provides a positive answer to (Q2) and offers a new insight into BBDMO from a majorization-minimization perspective. Both theoretical and empirical results indicate that the performance of the proposed method is not sensitive to the choice of the transform matrix. The rest of this paper is organized as follows. Some notations and definitions are given in Section <ref> for our later use. In Section <ref>, we propose a generic majorization-minimization descent method for VOPs and analyze its convergence rates under different convexity assumptions. Section <ref> is devoted to showing the connections between different descent methods from a majorization-minimization perspective. In Section <ref>, the effect of the gradient adjustment is clarified from the vantage point of alleviating objective imbalances, and a Barzilai-Borwein descent method is devised for VOPs with polyhedral cone. Section <ref> presents numerical results demonstrating the efficiency of the proposed Barzilai-Borwein method. Finally, conclusions are drawn at the end of the paper. § PRELIMINARIES Throughout this paper, ℝ^n and ℝ^m× n denote the set of n-dimensional real column vectors and the set of m× n real matrices. The space ℝ^n is equipped with the inner product ⟨·,·⟩ and the induced norm ·. The interior, boundary and the closure of a set are denoted by int(·), bd(·) and cl(·), respectively. The cone hull of a set are denoted by cone(·). Let B[x,r] be the closed ball centered at x with radius r. For simplicity, we denote [m]:={1,2,...,m}. §.§ Vector optimization In the subsection, we revisit some definitions and results pertinent to VOPs. Firstly, we introduce the concept of efficiency. <cit.> A vector x^*∈ℝ^n is called efficient solution to (<ref>) if there exists no x∈ℝ^n such that F(x)≼_K F(x^∗) and F(x)≠ F(x^∗). <cit.> A vector x^*∈ℝ^n is called weakly efficient solution to (<ref>) if there exists no x∈ℝ^n such that F(x)≺_K F(x^∗). <cit.> A vector x^∗∈ℝ^n is called K-stationary point to (<ref>) if range(JF(x^*))∩(-int(K))=∅, where range(JF(x^*)) denotes the range of linear mapping given by the matrix JF(x^*). <cit.> A vector d∈ℝ^n is called K-descent direction for F(·) at x if JF(x)d∈-int(K). Note that if x∈ℝ^n is a non-stationary point, then there exists a K-descent direction d∈ℝ^n such that JF(x)d∈-int(K). Next, we introduce the concept of K-convexity for F(·). <cit.> The objective function F(·) is called K-convex if F(λ x+(1-λ)y)≼_Kλ F(x)+(1-λ)F(y), ∀ x,y∈ℝ^n, λ∈[0,1]. By the differentiability of F(·), K-convexity of F(·) is equivalent to JF(x)(y-x)≼_KF(y)-F(x), ∀ x,y∈ℝ^n. We conclude this section by elucidating the relationship between K-stationary points and weakly efficient solutions. <cit.> Assume that the objective function F(·) is K-convex, then x^*∈ℝ^n is a K-stationary point of (<ref>) if and only if x^* is a weakly efficient solution of (<ref>). §.§ Strong convexity and smoothness Strong convexity and smoothness of objective functions play a central role of first-order methods in optimization. This subsection is devoted to strong convexity and smoothness of vector-valued functions under partial order. 
<cit.> The objective function F(·) is called strongly K-convex with μ∈ K if F(λ x+(1-λ)y)≼_Kλ F(x)+(1-λ)F(y)-1/2λ(1-λ)x-y^2μ, ∀ x,y∈ℝ^n, λ∈[0,1], and the above relation does not hold for any μ̂ with μ̂⋠_Kμ. Comparing with the definition in <cit.>, Definition <ref> includes the case μ∈bd(K), then it reduces K-convexity when μ=0. Assume that the objective function F(·) is strongly K-convex with μ∈ int(K), then x^*∈ℝ^n is a K-stationary point of (<ref>) if and only if x^* is an efficient solution of (<ref>). By the differentiability of F(·), strong K-convexity of F(·) is equivalent to 1/2x-y^2μ + JF(x)(y-x)≼_KF(y)-F(x), ∀ x,y∈ℝ^n, it characterizes a quadratic lower-bound of F(·). Intuitively, we use quadratic upper-bound to define the K-smoothness of F(·) under partial order. The objective function F(·) is called K-smooth with ℓ∈ int(K) if F(y)-F(x)≼_KJF(x)(y-x)+1/2x-y^2ℓ, ∀ x,y∈ℝ^n, and the above relation does not hold for any ℓ̂ with ℓ⋠_Kℓ̂. Assume that F(·) is strongly K-convex with μ∈ K and K-smooth with ℓ∈ int(K), then μ≼_Kℓ. Comparing with the smoothness and strong convexity in <cit.> with Euclidean distance, i.e., ω(·)=1/2·, Definitions <ref> and <ref> are tighter and do not depend on the reference vector e. Next, we characterize the properties of the difference of two functions. Let F,G:ℝ^n→ℝ^m be two vector-valued functions. Define H(·):=G(·)-F(·). Then the following statements hold. (i) if G(·) is strongly K-convex with μ∈ int(K) and F(·) is K-smooth with ℓ∈ int(K), where ℓ≼μ, then H(·) is strongly K-convex with μ-ℓ; (ii) if G(·) is K-smooth with ℓ∈ int(K) and F(·) is K-convex, then H(·) is K-smooth with ℓ∈ int(K). (iii) if G(·) K-smooth with ℓ∈ int(K) and F(·) is strongly K-convex with μ∈ int(K), where μ≼ℓ, then H(·) is K-smooth with ℓ-μ. The proof is a consequence of the definition of strong K-convexity and K-smoothness, we omit it here. In SOPs, the condition number (the quotient of smoothness parameter and the modulus of strong convexity) plays a key role in the geometric convergence of first-order methods. We end this section with the definition of the condition number of a strongly K-convex function under partial order. Assume that F(·) is strongly K-convex with μ∈ int(K) and K-smooth with ℓ∈ int(K). Then, we denote κ_F,≼_K:=max_c^*∈ Cc^*,ℓ/c^*,μ the condition number of F(·) under partial order ≼_K. Notice that 0∉ C and K^*=cone(C), the condition number can be rewritten as follows: κ_F,≼_K:=max_c^*∈ K^*∖{0}c^*,ℓ/c^*,μ. In other words, the condition number of F(·) does not depend on C, but on K. Assume that F(·) is strongly K_1-convex with μ_1∈ int(K_1) and L_1-smooth with ℓ_1∈ int(K_1). Then, for any pointed closed cone K_1 satisfied K_1⊂ K_2, we have F(·) is strongly K_2-convex with μ_2∈ int(K_2) and K_2-smooth with ℓ_2∈ int(K_2). Futhermore, ℓ_2≼_K_2ℓ_1, μ_1≼_K_2μ_2, and κ_F,≼_K_2≤κ_F,≼_K_1. The proof is a consequence of the definitions of strong K-convexity, K-smoothness and condition number, we omit it here. § MAJORIZATION-MINIMIZATION OPTIMIZATION FOR VOPS §.§ Majorization-minimization descent method for VOPs In this section, we present a generic majorization-minimization scheme for minimizing a vector function in the sense of descent. The surrogate function G_k(·) plays a central role in the generic majorization-minimization scheme. Intuitively, G_k(·) should well approximate F(·)-F(x^k) near x^k and the related subproblem should be easy to minimize. Therefore, we measure the approximation error by H_k(·):=G_k(·) - F(·)+F(x^k). 
To characterize surrogates, we introduce a class of surrogate functions, which will be used to establish the convergence results of Algorithm <ref>. For x^k∈ℝ^n, we call G_k(·) a first-order surrogate function of F(·)-F(x^k) near x^k when (i) F(x^k+1)-F(x^k)≼_KG_k(x^k+1), where x^k+1 is the minimizer of min_x∈ℝ^nmax_c^*∈ Cc^*,G_k(x), furthermore, when F(·)-F(x^k)≼_KG_k(·) for all x∈ℝ^n, we call G_k(·) a majorizing surrogate; (ii) the approximation error H_k(·) is K-smooth with ℓ∈ int(K), H_k(x^k)=0, and JH_k(x^k)=0. We denote by 𝒮_ℓ,μ(F,x^k) the set of first-order strongly K-convex surrogate functions with μ∈ int(K). Next, we characterize the properties of first-order surrogate functions. Let G_k(·)∈𝒮_ℓ,μ(F,x^k) and x^k+1 be the minimizer of min_x∈ℝ^nmax_c^*∈ Cc^*,G_k(x). Then, for all x∈ℝ^n, we have (i) H_k(x)≼_K1/2x-x^k^2ℓ; (ii) c^*_k,F(x^k+1)+1/2x^k+1-x^2c^*_k,μ≤c^*_k,F(x)+1/2x^k-x^2c^*_k,ℓ, where c^*_k is a maximizer of max_c^*∈ Cmin_x∈ℝ^nc^*,G_k(x). Assertion (i) directly follows by the K-smoothness of H_k(·) and the facts that H_k(x^k)=0 and JH_k(x^k)=0. Next, we prove the assertion (ii). By Sion's minimax theorem <cit.>, by denoting c^*_k a maximizer of max_c^*∈ Cmin_x∈ℝ^nc^*,G_k(x), and x^k+1 the minimizer of min_x∈ℝ^nmax_c^*∈ Cc^*,G_k(x), we have JG_k(x^k+1)^Tc^*_k=0. This, together with the strong K-convexity of G_k(·), implies that c^*_k,G_k(x^k+1)+1/2x^k+1-x^2c^*_k,μ≤c^*_k,G_k(x), ∀ x∈ℝ^n. We thus use F(x^k+1)-F(x^k)≼_KG_k(x^k+1) to get c^*_k,F(x^k+1)-F(x^k)+1/2x^k+1-x^2c^*_k,μ ≤c^*_k,G_k(x^k+1)+1/2x^k+1-x^2c^*_k,μ ≤c^*_k,G_k(x) =c^*_k,F(x)-F(x^k) +c^*_k,H_k(x) ≤c^*_k,F(x)-F(x^k)+1/2x^k-x^2c^*_k,ℓ. This completes the proof. §.§ Convergence analysis In Algorithm <ref>, it can be observed that it terminates either with a K-stationary point in a finite number of iterations or generates an infinite sequence of non-stationary points. In the subsequent analysis, we will assume that the algorithm produces an infinite sequence of non-stationary points. §.§.§ Global convergence Firstly, we establish the global convergence result in the nonconvex cases, the following assumptions are required. Assume the following statements hold for F(·) and G_k(·): (i) The level set ℒ_F(x^0):={x:F(x)≼_KF(x^0)} is bounded; (ii) if x^k→ x^*, G_k(·)∈𝒮_ℓ,μ(F,x^k) and max_c^*∈ Cc^*,G_k(x^k+1)→0, then x^* is a K-stationary point to (<ref>). The Assumption <ref>(i) is a standard condition for nonconvex cases. Moreover, the Assumption <ref>(ii) holds for many descent methods, such as K-steepest descent method, Newton method. We are now in a position to establish the global convergence of Algorithm <ref>. Suppose that Assumption <ref> holds, let {x^k} be the sequence generated by Algorithm <ref> with G_k(·)∈𝒮_ℓ,μ(F,x^k). Then, {x^k} has at least one accumulation point and every accumulation point is a non-stationary point to (<ref>). Since G_k(·)∈𝒮_ℓ,μ(F,x^k), we have F(x^k+1)-F(x^k)≼_K G_k(x^k+1), and G_k(x^k+1)≼_KG_k(x^k)=H_k(x^k)=0. Then, we conclude that {F(x^k)} is decreasing under partial order ≼_K. It follows by Assumption <ref>(i) and continuity of F(·) that {x^k} is bounded and there exists F^* such that F^*≼_KF(x^k), ∀ k≥0. The boundedness of {x^k} indicates that {x^k} has at least one accumulation point. Next, we prove that any accumulation point x^* is a non-stationary point. By summing (<ref>) from 0 to infinity, we have F^*-F(x^0)≼_K∑_k=0^∞(F(x^k+1)-F(x^k))≼_K∑_k=0^∞G_k(x^k+1). It follows that -∞<max_c^*∈ Cc^*,F^*-F(x^0)≤max_c^*∈ Cc^*,∑_k=0^∞G_k(x^k+1)≤∑_k=0^∞max_c^*∈ Cc^*,G_k(x^k+1). 
This, together with the fact that G_k(x^k+1)≼_KG_k(x^k)=0, implies max_c^*∈ Cc^*,G_k(x^k+1)→0. For the accumulation point x^*, there exists an infinite index set 𝒦 such that x^k𝒦⟶x^*. Therefore, by Assumption <ref>(ii), we conclude that x^* is a K-stationary point. §.§.§ Strong convergence In the following, we establish the strong convergence result of Algorithm <ref> in K-convex cases. Suppose that Assumption <ref> holds and F(·) is K-convex, let {x^k} be the sequence generated by Algorithm <ref> with G_k(·)∈𝒮_ℓ,μ(F,x^k) and ℓ≼_Kμ. Then, the following statements hold: (i) {x^k} converges to a weakly efficient solution x^* of (<ref>); (ii) u_0(x^k)≤ℓ_maxR^2/2k, ∀ k≥1, where ℓ_max:=max_c^*∈ Cc^*,ℓ, R:={x-y:x,y∈ℒ_F(x^0)}, and u_0(x^k):=max_x∈ℝ^nmin_c^*∈ Cc^*,F(x^k)-F(x) is a merit function in the sense of weak efficiency. (i) By the similar arguments in the proof of Theorem <ref>, we conclude that {x^k} is bounded, and there exists a K-stationary point x^* such that F(x^*)≼_KF(x^k). Besides, the K-convexity of F(·) indicates that x^* is a weakly efficient point. From Lemma <ref>(ii), for any x∈ℝ^n we have c^*_k,F(x^k+1)-F(x)≤1/2x^k-x^2c^*_k,ℓ-1/2x^k+1-x^2c^*_k,μ. Substituting x=x^* into the above inequality, we obtain c^*_k,F(x^k+1)-F(x^*)≤1/2x^k-x^*^2c^*_k,ℓ-1/2x^k+1-x^*^2c^*_k,μ. Recall that F(x^*)≼_KF(x^k), it follows that x^k+1-x^*^2c^*_k,μ≤x^k-x^*^2c^*_k,ℓ. Furthermore, we use the fact that ℓ≼_Kμ to get x^k+1-x^*^2≤x^k-x^*^2. Therefore, the sequence {x^k-x^*} converges. This, together with the fact that x^* is an accumulation point of {x^k}, implies that {x^k} converges to x^* (ii) Since ℓ≼_Kμ, we use inequality (<ref>) and the definition of ℓ_max to obtain c^*_k,F(x^k+1)-F(x)≤1/2x^k-x^2c^*_k,ℓ-1/2x^k+1-x^2c^*_k,μ≤ℓ_max/2(x^k-x^2-x^k+1-x^2). Taking the sum of the preceding inequality over 0 to k-1, we have ∑_s=0^k-1c^*_s,F(x^s+1)-F(x)≤ℓ_max/2(x^0-x^2-x^k-x^2)≤ℓ_max/2x^0-x^2. Notice that F(x^k)≼_KF(x^s+1) for all s≤ k-1, it leads to ∑_s=0^k-1c^*_s,F(x^k)-F(x)≤ℓ_max/2x^0-x^2. Denote ĉ^*_k:=∑_s=0^k-1c^*_s/k, it follows by the convexity of C that ĉ^*_k∈ C. Therefore, we conclude that ĉ^*_k,F(x^k)-F(x)≤ℓ_maxx^0-x^2/2k. Select y^k∈max_x∈ℝ^nmin_c^*∈ Cc^*,F(x^k)-F(x), it holds that u_0(x^k) =max_x∈ℝ^nmin_c^*∈ Cc^*,F(x^k)-F(x)=min_c^*∈ Cc^*,F(x^k)-F(y^k) ≤ĉ^*_k,F(x^k)-F(y^k)≤ℓ_maxx^0-y^k^2/2k. By the definition of y^k, we deduce that y^k∈{x:F(x)≼_KF(x^k)}⊂ℒ_F(x^0), which implies x^0-z^k≤ R, the desired result follows. §.§.§ Linear convergence By further assuming that F(·) is strongly K-convex, the linear convergence result of Algorithm <ref> can be derived as follows. Suppose that Assumption <ref>(ii) holds F(·) is strongly K-convex, let {x^k} be the sequence generated by Algorithm <ref> with G_k(·)∈𝒮_ℓ,μ(F,x^k) and ℓ≺_Kμ. Then, the following statements hold: (i) {x^k} converges to an efficient solution x^* of (<ref>); (ii) x^k+1-x^*≤√(max_c^*∈ Cc^*,ℓ/c^*,μ)x^k-x^*, ∀ k≥0. (i) Since F(·) is strongly K-convex, then Assumption <ref>(i) holds and every weakly efficient solution is actually an efficient solution. Therefore, assertion (i) is a consequence of Theorem <ref>(i). (ii) By substituting x=x^* into inequality (<ref>), we have c^*_k,F(x^k+1)-F(x^*)≤1/2x^k-x^*^2c^*_k,ℓ-1/2x^k+1-x^*^2c^*_k,μ. It follows by F(x^*)≼_KF(x^k+1) that x^k+1-x^*≤√(c^*_k,ℓ/c^*_k,μ)x^k-x^*. The desired result follows. It seems that the convexity of F(·) plays no role in the proof of Theorems <ref> and <ref>. 
However, it can indeed be shown that F(·) is necessarily K-convex if ℓ=μ and strongly K-convex with μ - ℓ if ℓ≺_Kμ. In the next section, we give some examples where such a condition holds. The linear convergence rate is related to G_k(·), which confirms that the rate of convergence can be improved by choosing a tighter surrogate. § DESCENT METHOD FOR VOPS WITHOUT LINE SEARCH In this section, we will revisit some well-known descent methods for VOPs and link them to the Algorithm <ref>. The main aim is to provide a unified theoretical analysis for all of them. We also develop some new algorithms with faster linear convergence. §.§ K-steepest descent method for VOPs without line search For x∈ℝ^n, recall that d^k, the K-steepest descent direction <cit.> at x^k, is defined as the optimal solution of min_d∈ℝ^nmax_c^*∈ Cc^*, JF(x^k)d+1/2d^2. Select a vector e∈ int(K), and denote C_e={c^*∈ K^*:c^*,e=1}. If we set C=C_e in (<ref>), then the K-steepest descent direction can be reformulated as the optimal solution of min_d∈ℝ^nmax_c^*∈ C_ec^*, JF(x^k)d+1/2d^2e. If K=ℝ^m_+, and C_e=Δ_m, then e=(1,...,1)^T and the subproblem (<ref>) reduces to that of steepest descent method for MOPs <cit.>. From now on, we assume that F(·) is K-smooth with ℓ∈ int(K), denote L_max:=max_c^*∈ C_ec^*,ℓ. Let us revisit the K-steepest descent method without line search: We consider the following surrogate: G_k,Le(x):=JF(x^k)(x-x^k)+L/2x-x^k^2e. Let G_k,Le(·) be defined as (<ref>). Then, the following statements hold. (i) For any L≥ L_max, G_k,Le(·) is a majorizing surrogate of F(·)-F(x^k), i.e., F(·)-F(x^k)≼_KG_k,Le(·). (ii) If F(·) is K-convex, then G_k,Le(·)∈𝒮_Le,Le(F,x^k) for all L≥ L_max. (iii) If F(·) is strongly K-convex with μ∈ int(K), then G_k,Le(·)∈𝒮_Le-μ,Le(F,x^k) for all L≥ L_max. By the definition of L_max, we have ℓ≼_KL_maxe, it follows from the K-smoothness of F(·) that assertion (i) holds. Notice that G_k,Le(·) is strongly K-convex and K-smooth with Le, and μ≼ Le, then we obtain assertion (ii) and (iii) by Lemma <ref> (ii) and (iii), respectively. Assume that F(·) is strongly K-convex with μ∈ int(K), let {x^k} be the sequence generated by Algorithm <ref>. Then, the following statements hold: (i) {x^k} converges to an efficient solution x^* of (<ref>); (ii) x^k+1-x^*≤√(1-μ_min/L)x^k-x^*, ∀ k≥0, where μ_min:=min_c^*∈ C_ec^*,μ. Since F(·) is strongly K-convex, G_k,Le(·)∈𝒮_Le-μ,Le(F,x^k), and Assumption <ref> holds in this case, using C=C_e, then Theorem <ref> (i) and (ii) reduce to the assertions (i) and (ii), respectively. If K=ℝ^m_+, and e=(1,...,1)^T a base of ℝ^m_+, the convergence rate result in Lemma <ref> reduces to that of <cit.> with g(·)=0. It is worth noting that the rate of convergence is related to e, to the best of our knowledge, apart from e=(1,...,1)^T, it remains an open problem for the better choice of the parameter in MOPs. §.§ Improved K-steepest descent method for VOPs without line search As detailed in <cit.>, the linear convergence result in Lemma <ref> can be improved, this is mainly due to the large gap between F(·)-F(x^k) and G_k,Le(·) from a majorization-minimization perspective. Notice that ℓ≼_KL_maxe, we denote the following tighter surrogate: G_k,ℓ(x):=JF(x^k)(x-x^k)+1/2x-x^k^2ℓ. Let G_k,ℓ(·) be defined as (<ref>). Then, the following statements hold. (i) G_k,ℓ(·) is a tight majorizing surrogate of F(·)-F(x^k), i.e., F(·)-F(x^k)≼_KG_k,ℓ(·), and the relation does not hold for any G_k,ℓ̂(·) such that ℓ⋠_Kℓ̂. (ii) If F(·) is K-convex, then G_k,ℓ(·)∈𝒮_ℓ,ℓ(F,x^k). 
(iii) If F(·) is strongly K-convex with μ∈ int(K), then G_k,ℓ(·)∈𝒮_ℓ-μ,ℓ(F,x^k). The assertions can be obtained by using the similar arguments as in the proof of Proposition <ref>. By using the tighter surrogate, we devise the following improved K-steepest descent method without line search. Assume that F(·) is strongly K-convex with μ∈ int(K), let {x^k} be the sequence generated by Algorithm <ref>. Then, the following statements hold: (i) {x^k} converges to an efficient solution x^* of (<ref>); (ii) x^k+1-x^*≤√(1-1/κ_F,≼_K)x^k-x^*, ∀ k≥0. The assertions can be obtained by using the similar arguments as in the proof of Lemma <ref>. If K=ℝ^m_+, and e=(1,...,1)^T, the convergence rate in Lemma <ref> reduces to that of <cit.> with g(·)=0. Furthermore, 1 / κ_F,≼_K≥μ_min/L, and the improved linear convergence does not depend on the choice of e. §.§ Equivalence between Algorithms <ref> and <ref> As we described in Remark <ref>, the linear convergence rate of Algorithms <ref> depends on e. In response to this issue, we attempt to give a better choice of e in Algorithms <ref> from a complexity perspective. Recall that the linear convergence rate of algorithm <ref> does not depend on the choice of e. We denote C_ℓ:={c^*∈ K^*:c^*,ℓ=1}. In the following, we propose another improved K-steepest descent method for VOPs. Notice that the subproblem in Algorithm <ref> can be rewritten equivalently as: min_x∈ℝ^nmax_c^*∈ C_ℓc^*, JF(x^k)(x-x^k)+1/2x-x^k^2ℓ. Assume that F(·) is strongly K-convex with μ∈ int(K), let {x^k} be the sequence generated by Algorithm <ref>. Then, the following statements hold: (i) {x^k} converges to an efficient solution x^* of (<ref>); (ii) x^k+1-x^*≤√(1-1/κ_F,≼_K)x^k-x^*, ∀ k≥0. If K=ℝ^m_+, and Δ_m^ℓ:={c^*∈ℝ^m_++:c^*,ℓ=1}, the Algorithm <ref> reduces to <cit.> with g(·)=0. Interestingly, the relations between Algorithms <ref>, <ref> and <ref> only depends on the choice of e. If e=ℓ, Algorithms <ref>, <ref> and <ref> are equivalent. On the other hand, for the open problem of the better choice of e in Remark <ref>, we give a partially positive answer with e=ℓ. Even though Algorithms <ref> and <ref> enjoy similar improved linear convergence, the computational cost of subproblems in Algorithms <ref> is often cheaper. § DESCENT METHOD FOR VOPS WITH LINE SEARCH In the preceding section, the majorization-minimization optimization methods are devised based on majorizing surrogates, which might be too conservative due to the global upper bounds. From a majorization-minimization perspective, choosing a non-majorizing surrogate function may provide better performance. §.§ K-steepest descent method with line search In this section, we revisit K-steepest descent method with line search for VOPs. The stepsize has the following lower bound. The stepsize generated in Algorithm <ref> satisfies t_k≥ t_min:=min{γ/L_max,1}. By the line search condition in Algorithm <ref>, we have F(x^k+t_k/γd^k)-F(x^k) ⋠_Kt_k/γ(JF(x^k)d^k+1/2d^k^2e). Then there exists c^*_1∈ C_e such that c^*_1,F(x^k+t_k/γd^k)-F(x^k)>c^*_1,t_k/γ(JF(x^k)d^k+1/2d^k^2e). On the other hand, the K-smoothness of F(·) implies F(x^k+t_k/γd^k)-F(x^k)≼_Kt_k/γJF(x^k)d^k+1/2t_k/γd^k^2ℓ. Therefore, we have c^*_1,F(x^k+t_k/γd^k)-F(x^k)≤c^*_1,t_k/γJF(x^k)d^k+1/2t_k/γd^k^2c^*_1,ℓ. This, together with inequality (<ref>), yields t_k≥γ/c^*_1,ℓ. Then the desired result follows. We consider the following surrogate: G_k,e/t_k(x):=JF(x^k)(x-x^k)+1/2t_kx-x^k^2e. We can show that G_k,e/t_k(·) is a non-majorizing surrogate function of F(·)-F(x^k) near x^k. 
Let G_k,e/t_k(·) be defined as (<ref>). Then, the following statements hold. (i) F(x^k+1)-F(x^k)≼_KG_k,e/t_k(x^k+1). (ii) If F(·) is K-convex, then G_k,e/t_k(·)∈𝒮_e/t_k,e/t_k(F,x^k). (iii) If F(·) is strongly K-convex with μ∈ int(K), then G_k,e/t_k(·)∈𝒮_e/t_k-μ,e/t_k(F,x^k). Assume that F(·) is strongly K-convex with μ∈ int(K), let {x^k} be the sequence generated by Algorithm <ref>. Then, the following statements hold: (i) {x^k} converges to an efficient solution x^* of (<ref>); (ii) x^k+1-x^*≤√(1-t_minμ_min)x^k-x^*, ∀ k≥0, where μ_min:=min_c^*∈ C_ec^*,μ. The assertions can be obtained by using the similar arguments as in the proof of Proposition <ref>. If K=ℝ^m_+, and e=(1,...,1)^T, the convergence rate result in Lemma <ref> reduces to that of <cit.>. The assertions can be obtained by using the similar arguments as in the proof of Lemma <ref>. §.§ Generic first-order method for VOPs Select e_k∈ int(K), we devise the following generic first-order method: The stepsize has the following lower bound. The stepsize generated in Algorithm <ref> satisfies t_k≥ t_min:=min{min_c^*∈ C_eγc^*,e_k/c^*,ℓ,1}. We consider the following surrogate: G_k,e_k/t_k(x):=JF(x^k)(x-x^k)+1/2t_kx-x^k^2e_k. Let G_k,e_k/t_k(·) be defined as (<ref>). Then, the following statements hold. (i) F(x^k+1)-F(x^k)≼_KG_k,e_k/t_k(x^k+1).. (ii) If F(·) is K-convex, then G_k,e_k/t_k(·)∈𝒮_e_k/t_k,e_k/t_k(F,x^k). (iii) If F(·) is strongly K-convex with μ∈ int(K), then G_k,e_k/t_k(·)∈𝒮_e_k/t_k-μ,e_k/t_k(F,x^k). The assertions can be obtained by using the similar arguments as in the proof of Proposition <ref>. Assume that F(·) is strongly K-convex with μ∈ int(K), let {x^k} be the sequence generated by Algorithm <ref>. Then, the following statements hold: (i) {x^k} converges to an efficient solution x^* of (<ref>); (ii) x^k+1-x^*≤√(1-min_c^*∈ C_ec^*,μ/c^*,e_k/t_k)x^k-x^*, ∀ k≥0. The assertions can be obtained by using the similar arguments as in the proof of Lemma <ref>. By set e_k=μ or e_k=ℓ, we can derive that x^k+1-x^*≤√(1-1/κ_F,≼_K)x^k-x^*, ∀ k≥0. Therefore, convergence rate in Lemma <ref>(ii) reduces to that of Lemma <ref>(ii). Intuitively, to explore the local curvature information of F(·), we can devise a tighter local surrogate G_k,e_k/t_k(·) with μ≼_Ke_k/t_k≼_Kℓ. In this case, the linear convergence rate of Algorithm <ref> can be further improved by using a tighter local surrogate G_k,e_k/t_k(·). To narrow the surrogate gap and better capture the local curvature information, we compute e_k by Barzilai-Borwein method, namely, we set e_k:=JF(x^k)-JF(x^k-1),x^k-x^k-1/x^k-x^k-1^2. Assume that F(·) is strongly K-convex with μ∈ int(K), let {x^k} be the sequence generated by Algorithm <ref>, where e_k is defined as in (<ref>). Then, the following statements hold: (i) μ≼_Ke_k≼_Kℓ; (ii) t_k≥min_c^*∈ C_e{γc^*,e_k/c^*,ℓ}; Assertion (i) follows by the strong k-convexity and K-smoothness of F(·), and the definition of e_k. We can obtain the assertion (ii) by using the similar arguments as in the proof of Proposition <ref>. § DESCENT METHOD FOR VOPS WITH POLYHEDRAL CONES In this section, we consider the VOPs that K is a polyhedral cone with nonempty interior. Without loss of generality, for the polyhedral cone K, there exists a transform matrix A∈ℝ^l× m with m≤ l such that K:={x∈ℝ^m:0≼ Ax}. In this case, for any a,b∈ℝ^m, a≼_Kb can be equivalently represented as Aa≼ Ab. Denote A_i the i-th row vector of A. For the polyhedral cone K, we denote the set of transform matrices as follows: 𝒜:={A∈ℝ^l× m:AK=ℝ^l_+}. 
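As a small worked example of the transform-matrix representation just introduced (added here for illustration; the matrix below is one possible choice), take the cone K_1={x∈ℝ^2: 5x_1-x_2≥0, -x_1+5x_2≥0} that also appears in the experiments later, with A=[[5,-1],[-1,5]]. Then a≼_{K_1}b holds exactly when A(b-a)≥0 componentwise.

import numpy as np

A = np.array([[5.0, -1.0],
              [-1.0, 5.0]])      # one transform matrix with A K_1 = R^2_+

def leq_K(a, b, A=A, tol=1e-12):
    # a <=_K b  iff  b - a lies in K, i.e. A @ (b - a) >= 0 componentwise
    return bool(np.all(A @ (np.asarray(b) - np.asarray(a)) >= -tol))

print(leq_K([0.0, 0.0], [1.0, 1.0]))   # True: (1, 1) lies in K_1
print(leq_K([0.0, 0.0], [1.0, 0.0]))   # False: (1, 0) is in R^2_+ but not in the narrower cone K_1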
§.§ Steepest descent method for VOPs with polyhedral cones By using the transform matrix A∈𝒜, the steepest descent direction subproblem for VOPs with polyhedral cones is formulated as follows: min_d∈ℝ^nmax_λ∈Δ_lλ,AJF(x^k)d+1/2d^2. In the next subsection, we claim that the preceding subproblem can be efficiently solved via its duality. The complete K-steepest descent method for VOPs with polyhedral cones is described as follows: Assume that F(·) is strongly K-convex with μ∈ int(K), where K={x∈ℝ^m:0≼ Ax}. Let {x^k} be the sequence generated by Algorithm <ref>. Then, the following statements hold: (i) t_k≥min_i∈ [l]{γ/A_i,ℓ}; (ii) {x^k} converges to an efficient solution x^* of (<ref>); (iii) x^k+1-x^*≤√(1-min_i∈ [l]t_kA_i,μ)x^k-x^*, ∀ k≥0. The assertions can be obtained by using the similar arguments as in the proof of Proposition <ref> and Lemma <ref>. The subproblem (<ref>) can be rewritten as min_d∈ℝ^nmax_c^*∈ Cc^*,JF(x^k)d+1/2d^2, where C:=conv{A_i,i∈[l]} is a base of K^*. In other words, selecting a transform matrix in (<ref>) is equivalent to selecting a base of K^*. As a result, the linear convergence rate of Algorithm <ref> is sensitive to the choice of A. When K=ℝ^m_+, the steepest descent method <cit.> fixs A=I_m, i.e., C=Δ_m. §.§ Barzilai-Borwein descent method for VOPs with polyhedral cones In general, for the e_k defined as (<ref>) we have -ℓ≼_Ke_k≼_Kℓ, which can be written as -Aℓ≼_Ae_k≼_Aℓ. We denote α^k∈ℝ^l as follows: α_i^k={ max{α_min,min{⟨ s_k-1,y^k-1_i⟩/s_k-1^2, α_max}}, ⟨ s_k-1,y^k-1_i⟩ >0, max{α_min,min{y^k-1_i/s_k-1, α_max}}, ⟨ s_k-1,y^k-1_i⟩ <0, α_min, ⟨ s_k-1,y^k-1_i⟩ =0, . for all i∈[l], where s_k-1=x^k-x^k-1, y^k-1_i is the i-th row vector of A(JF(x^k)-JF(x^k-1)), α_max is a sufficient large positive constant and α_min is a sufficient small positive constant. The Barzilai-Borwein descent direction is defined as the minimizer of min_d∈ℝ^nmax_λ∈Δ_lλ,AJF(x^k)d+1/2d^2α^k. Alternatively, we use the similar strategy in Algorithm <ref>, the Barzilai-Borwein descent direction with appropriate base is defined as the minimizer of min_d∈ℝ^nmax_λ∈Δ^α^k_lλ,AJF(x^k)d+1/2d^2, where Δ^α^k_l:={c^*∈ℝ^l_++:c^*,α^k=1}. The subproblem can be reformulated as follows: min_d∈ℝ^nmax_λ∈Δ_lλ,Λ^k AJF(x^k)d+1/2d^2, where Λ^k:= [ 1/α^k_1 ; ⋱; 1/α^k_l ]. By optimality conditions, the minimizer of (<ref>) can be written as d^k=-(Λ^k AJF(x^k))^Tλ^k, where λ^k∈Δ_l is a solution of the following dual problem: DP min_λ∈Δ_l1/2(Λ_k AJF(x^k))^Tλ^2. The dual problem (<ref>) is lower dimensional quadratic programming with unit simplex constraint (the vertices of unit simplex constraint are known), then it can be solved by Frank-Wolfe/conditional gradient method efficiently <cit.>. However, dual problem of (<ref>) reads as min_λ∈Δ_l1/2( AJF(x^k))^Tλ/∑_i∈[l]λ_iα^k_i^2, which is not easy to solve. If K=ℝ^m_+ and A=I_m, the subproblem (<ref>) reduces to that of the BBDMO <cit.>. Consequently, transforming (<ref>) into (<ref>) gives a new insight into BBDMO. Comparing the subproblems (<ref>) and (<ref>), we conclude that the differences between the directions of the steepest descent and the Barzilai-Borwein descent depend on the choice of base of K^*. The complete K-Barzilai-Borwein descent method for VOPs with polyhedral cones is described as follows: Assume that F(·) is strongly K-convex with μ∈ int(K), where K={x∈ℝ^m:0≼ Ax}. Let {x^k} be the sequence generated by Algorithm <ref>. 
Then, the following statements hold: (i) Aμ≼α_k≼ Aℓ; (ii) t_k≥min_i∈ [l]{γα^k_i/A_i,ℓ}; (iii) {x^k} converges to an efficient solution x^* of (<ref>); (iv) x^k+1-x^*≤√(1-min_i∈ [l]t_kA_i,μ/α^k_i)x^k-x^*, ∀ k≥0. The assertions can be obtained by using the similar arguments as in the proof of Proposition <ref> and Lemma <ref>. A large stepsize may speed up the convergence of Algorithm <ref>. Accordingly, the Armijo line search can be applied, namely, compute the stepsize t_k∈(0,1] in the following way: t_k:=max{γ^j:j∈ℕ,A(F(x^k+γ^jd^k)-F(x^k))≼σγ^j AJF(x^k)d^k}, where σ∈(0,1). The following results show that the convergence rate of Algorithm <ref> is not sensitive to the choice of transform matrix. More specifically, the descent direction d^k and stepsize t_k of Algorithm <ref> are invariant for some A∈𝒜. Let A^1,A^2∈𝒜, d^k_1,t_k^1 and d^k_2,t_k^2 be the descent directions and stepsize generated by Algorithm <ref> with A^1 and A^2, respectively. If α_min<α_i^k,1,α_i^k,2<α_max, i∈[l], we have d^k_1=d^k_2 and t_k^1=t_k^2. Denote A_i^1 and A_i^2 the the i-th row vector of A^1 and A^2, respectively. Before presenting the main results, we rewritten the subproblem (<ref>) as follows: min_d∈ℝ^nmax_i∈[l]A_i/α^k_i, JF(x^k)d+1/2d^2. Recall that A^1,A^2∈𝒜, there exists a vector a∈ℝ^l_++ such that {A^1_i:i∈[l]}={a_iA^2_i:i∈[l]}. We claim the following assertion: {A^1_i/α_i^k,1:i∈[l]}={A^2_i/α_i^k,2:i∈[l]}. This, together with the reformulated subproblem, implies that d^k_1=d^k_2. Therefore, t_k^1=t_k^2 is a consequence of (<ref>). Next, we prove that assertion (<ref>) holds. For any i∈[l], it follows by (<ref>) that there exist j∈[l] such that A^1_i=a_jA^2_j. Notice that α_min<α_i^k,1,α_i^k,2<α_max, i∈[l], without loss of generality, we assume that α_i^k,1=A^1_i(JF(x^k)-JF(x^k-1))/s_k-1 and α_j^k,2=A^2_j(JF(x^k)-JF(x^k-1))/s_k-1. Then, A^1_i/α_i^k,1=A^1_is_k-1/A^1_i(JF(x^k)-JF(x^k-1))=A^2_js_k-1/A^2_j(JF(x^k)-JF(x^k-1))=A^2_j/α_j^k,2, where the second equality is due to the fact that A^1_i=a_jA^2_j. Thus, we have {A^1_i/α_i^k,1:i∈[l]}⊆{A^2_i/α_i^k,2:i∈[l]}. The relation {A^2_i/α_i^k,2:i∈[l]}⊆{A^1_i/α_i^k,1:i∈[l]} follows the similar arguments, this concludes the proof. Assume 𝒜̂⊂𝒜 is bounded and A_i is bounded away from 0 for all A∈𝒜̂. Then, there exists α_min and α_max such that the assumption α_min<α_i^k<α_max, i∈[l] holds for all A∈𝒜̂ with s_k-1,A_i(JF(x^k)-JF(x^k-1))≠0. For the case with linear objective, s_k-1,A_i(JF(x^k)-JF(x^k-1))=0 may hold. As illustrated in <cit.>, for any A_1,A_2∈𝒜̂, we have d^k_1≈ d^k_2 with sufficient small α_min. As a result, we conclude that the performance of Algorithm <ref> is not sensitive to the choice of A. § NUMERICAL RESULTS In this section, we present numerical results to demonstrate the performance of Barzilai-Borwein descent methods for VOPs (BBDVO) with polyhedral cones. We also compare BBDVO with steepest descent method for VOPs (SDVO) and equiangular direction method <cit.> for VOPs (EDVO). All numerical experiments were implemented in Python 3.7 and executed on a personal computer with an Intel Core i7-11390H, 3.40 GHz processor, and 16 GB of RAM. For all tested algorithms, we used Armijo line search (<ref>) with σ=10^-4 and γ=0.5. To ensure that the algorithms terminate after a finite number of iterations, for all tested algorithms we used the stopping criterion d(x)≤ 10^-6. We also set the maximum number of iterations to 500. The test algorithms were executed on several test problems, and the problem illustration is given in Table <ref>. 
The dimensions of variables and objective functions are presented in the second and third columns, respectively. x_L and x_U represent lower bounds and upper bounds of variables, respectively. For each problem, we used the same initial points for different tested algorithms. The initial points were randomly selected within the specified lower and upper bounds. Dual subproblems of different algorithms were efficiently solved by Frank-Wolfe method. The recorded averages from the 200 runs include the number of iterations, the number of function evaluations, and the CPU time. For the tested problems, the partial order are induced by polyhedral cones ℝ^2_+, K_1, and K_2, respectively, where K_1:={x∈ℝ^2:5x_1-x_2≥0, -x_1+5x_2≥0}⊆ℝ^2_+, and K_2:={x∈ℝ^2:5x_1+x_2≥0, x_1+5x_2≥0}⊇ℝ^2_+. §.§ Numerical results for VOPs with K=ℝ^2_+ In this case, we denote the set of transform matrices 𝒜_0:={A:Aℝ^2_+=ℝ^2_+}. For SDVO, we choose A^0=I_2∈𝒜_0 and Â^0∈𝒜_0 in subproblem, respectively, where Â^0:={A:A_i=A^0_i/max{1,∇ F_i(x^0)_∞}, i=1,2}[The scale strategy is initially proposed in <cit.> due to numerical reasons.]. For EDVO, normalization is applied for each of gradients in the transformed subproblem, which implies that EDVO is also not sensitive to the choice of transform matrix. As a result, we choose A=A^0 in subproblems of EDVO and BBDVO. §.§ Numerical results for VOPs with K=K_1 In this case, we denote the set of transform matrices 𝒜_1:={A:AK_1=ℝ^2_+}. For SDVO, we choose A^1=[ 5 -1; -1 5 ]∈𝒜_0 and Â^1∈𝒜_1 in subproblem, respectively, where Â^1:={A:A_i=A^1_i/max{1,∇ F_i(x^0)_∞}, i=1,2}. §.§ Numerical results for VOPs with K=K_2 We denote the set of transform matrices 𝒜_2:={A:AK_2=ℝ^2_+}. For SDVO, we choose A^2=[ 5 1; 1 5 ]∈𝒜_0 and Â^2∈𝒜_2 in subproblem, respectively, where Â^2:={A:A_i=A^2_i/max{1,∇ F_i(x^0)_∞}, i=1,2}. For test problems with different partial orders, the number of average iterations (iter), number of average function evaluations (feval), and average CPU time (time(ms)) of the different algorithms are listed in Tables <ref>, <ref> and <ref>, respectively. We conclude that BBDVO outperforms SDVO and EDVO, especially for problems DD1 and Imbalance1. For SDVO, its performance is sensitive to the choice of transform matrix, changing the transform matrix in subproblem cannot improve the performance on all test problems. Naturally, a question arises that how to choose an appropriate transform matrix for a specific test problem in SDVO. It is worth noting that BBDVO can be viewed as SDVO with variable transform matrices (Λ^kA is a transform matrix of K) and thus enjoys promising performance on these test problems. This provides a positive answer to the question. EDVO can also be viewed as SDVO with variable transform matrices, it generates descent directions with norm less than 1 (the minimizer of subproblem is the minimal norm element of the convex hull of some unit vectors), decelerating the convergence in large-scale problems (the initial point may be far from the Pareto set). Fig. <ref> plots the final points obtained by BBDVO on problems BK1, DD1, FF1, Imbalance1 and WIT1 with K=ℝ^2_+, K=K_1 and K=K_2, respectively. We can observe that enlarging the partial order cone reduces the number of obtained Pareto critical points, especially in the long tail regions, where improving one objective function slightly can sacrifice the others greatly. As a result, we can use an order cone containing the non-negative orthant in real-world MOPs to obtain Pareto points with a better trade-off. 
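To make the Barzilai-Borwein direction computation concrete, here is a compact sketch (added for illustration; it follows the formulas stated above but is not the authors' code). It forms α^k from consecutive iterates and Jacobians, scales the rows of AJF(x^k), and runs a few Frank-Wolfe steps on the dual (DP) over the unit simplex, as suggested in the text; the step-size rule, iteration count, and clipping bounds are arbitrary choices for this sketch.

import numpy as np

def bb_alpha(x, x_prev, jac, jac_prev, A, a_min=1e-3, a_max=1e3):
    # componentwise Barzilai-Borwein parameters alpha_i^k following the stated rule
    s = x - x_prev
    Y = A @ (jac - jac_prev)                    # rows y_i^{k-1}
    alpha = np.empty(A.shape[0])
    for i, y in enumerate(Y):
        sy = float(s @ y)
        if sy > 0:
            alpha[i] = np.clip(sy / (s @ s), a_min, a_max)
        elif sy < 0:
            alpha[i] = np.clip(np.linalg.norm(y) / np.linalg.norm(s), a_min, a_max)
        else:
            alpha[i] = a_min
    return alpha

def frank_wolfe_simplex(G, iters=50):
    # minimize 0.5 * || G.T @ lam ||^2 over the unit simplex
    l_dim = G.shape[0]
    lam = np.full(l_dim, 1.0 / l_dim)
    for t in range(iters):
        grad = G @ (G.T @ lam)                  # gradient of the quadratic
        vertex = np.zeros(l_dim)
        vertex[int(np.argmin(grad))] = 1.0      # linear minimization oracle
        lam += 2.0 / (t + 2.0) * (vertex - lam)
    return lam

def bbdvo_direction(x, x_prev, jac, jac_prev, A):
    alpha = bb_alpha(x, x_prev, jac, jac_prev, A)
    G = (A @ jac) / alpha[:, None]              # rows of Lambda^k A JF(x^k)
    lam = frank_wolfe_simplex(G)
    return -(G.T @ lam)                         # d^k = -(Lambda^k A JF(x^k))^T lam^k

With A taken as A^1=[[5,-1],[-1,5]] for K_1, one BBDVO step would compute this direction and then apply the Armijo backtracking rule given above.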
§ CONCLUSIONS In this paper, we develop a unified framework and convergence analysis of descent methods for VOPs from a majorization-minimization perspective. We emphasize that the convergence rate of a descent method can be improved by narrowing the surrogate gap. By changing the base in the subproblems, we elucidate that choosing a tighter surrogate function is equivalent to selecting an appropriate base of the dual cone. From the majorization-minimization perspective, we employ the Barzilai-Borwein method to tighten the local surrogate functions and propose a Barzilai-Borwein descent method for VOPs with polyhedral cones. The proposed method is not sensitive to the choice of transform matrix, which otherwise affects the performance of SDVO. Numerical experiments confirm the efficiency of the proposed method. In future work, it is worth analyzing proximal gradient methods and second-order methods for VOPs from a majorization-minimization perspective. Furthermore, solution methods for VOPs with non-polyhedral cones are also worth investigating. § ACKNOWLEDGEMENT This work was supported by the Major Program of the National Natural Science Foundation of China [grant numbers 11991020, 11991024]; the National Natural Science Foundation of China [grant numbers 11971084, 12171060]; and the Natural Science Foundation of Chongqing [grant number cstc2019jcyj-zdxmX0016]. § DATA AVAILABILITY The data that support the findings of this study are available from the first author, [Jian Chen], upon reasonable request.
http://arxiv.org/abs/2407.13693v1
20240718165917
Model Predictive Path Integral Methods with Reach-Avoid Tasks and Control Barrier Functions
[ "Hardik Parwana", "Mitchell Black", "Georgios Fainekos", "Bardh Hoxha", "Hideki Okamoto", "Danil Prokhorov" ]
cs.RO
[ "cs.RO", "cs.SY", "eess.SY" ]
Model Predictive Path Integral Methods with Reach-Avoid Tasks and Control Barrier Functions Hardik Parwana, Mitchell Black, Georgios Fainekos, Bardh Hoxha, Hideki Okamoto, Danil Prokhorov Toyota Motor North America R & D firstname.lastname@toyota.com July 22, 2024 ==================================================================================================================================================================== § ABSTRACT The rapid advancement of robotics necessitates robust tools for developing and testing safe control architectures in dynamic and uncertain environments. Ensuring safety and reliability in robotics, especially in safety-critical applications, is crucial, driving substantial industrial and academic efforts. In this context, we extend , a Python/ROS2 toolbox, which now incorporates a planner using reach-avoid specifications as a cost function. This integration with the Model Predictive Path Integral (MPPI) controllers enables the toolbox to satisfy complex tasks while ensuring formal safety guarantees under various sources of uncertainty using Control Barrier Functions (CBFs). is optimized for speed using JAX for automatic differentiation and jaxopt for quadratic program solving. The toolbox supports various robotic applications, including autonomous navigation, human-robot interaction, and multi-robot coordination. The toolbox also offers a comprehensive library of planner, controller, sensor and estimator implementations. Through a series of examples, we demonstrate the enhanced capabilities of in different robotic scenarios. § INTRODUCTION The field of robotics is advancing rapidly, with systems now capable of operating in highly dynamic and uncertain environments. These systems are increasingly deployed in safety-critical applications, where failures can have severe consequences, making safety and reliability paramount. This drives significant industrial investment and academic research focused on developing methods that provide formal safety guarantees, especially for complex, multi-robot scenarios. Rapid development and testing of new methods are essential in this evolving field. Researchers and developers need tools for rapid prototyping of proof-of-concept ideas to demonstrate the interaction of various control architectures. These architectures typically integrate high-level planning, nominal and feedback control, along with complex sensing and estimation solutions. Efficient integration of these components allows for swift iteration and validation of new approaches, advancing robotic capabilities. To address these needs, we extend <cit.>, an open-source Python/ROS2[<https://github.com/bardhh/cbfkit.git>] toolbox designed to facilitate the rapid prototyping and deployment of safe control architectures for robotic systems. Built on Python and using JAX <cit.>, provides an efficient platform for developing and testing complex autonomy stacks. It supports a wide range of applications, including autonomous navigation <cit.>, human-robot interaction <cit.>, multi-robot coordination, and manipulation tasks. The toolbox combines model-based and model-free control approaches, offering flexibility to accommodate various system dynamics and control requirements. It uses JAX for automatic differentiation and jaxopt for fast quadratic program (QP) solving, resulting in significantly faster computation times compared to symbolic methods. 
Additionally, the toolbox provides a comprehensive library of implementations for various systems, sensors, estimators, Control Barrier Functions (CBFs), and tutorials for single and multi-agent applications. CBFs have been used to provide safety guarantees in various applications such as (semi-)automated driving <cit.>, arm manipulators <cit.>, and multi-agent coordination <cit.>. Since CBF controllers are myopic, and safety guarantees are only valid if the controller generates a solution, they benefit greatly from a high-level planner that provides waypoints and attempts to avoid problematic scenarios. In this paper, we demonstrate how can be extended with Model Predictive Path Integral (MPPI) <cit.> controllers using timed reach-avoid specifications <cit.>. Reach-avoid specifications, such as 'reach region r_1 within 5 seconds and then reach goal region r_2, all while avoiding obstacles,' can serve as a cost function for MPPI. Reach-avoid specifications offer a flexible framework for task planning in robotics, ensuring robots follow specific sequences of actions, maintain safety distances, and achieve goals within specified time windows. This integration is further enhanced by incorporating a CBF-based safety filter, enabling the design of controllers that optimize performance while ensuring safety through formal guarantees provided by CBFs. We showcase the capabilities of through a series of examples. These examples highlight the complementary benefits of different components in the autonomy stack. Starting with a robust CBF controller alone, we introduce noise into the dynamics, integrate a Kalman filter, add a planner, and finally incorporate a reach-avoid specification as a cost function into the system. The experiments illustrate the system’s behavior at various stages of complexity, demonstrating the effectiveness and versatility of . Contribution: * We extend for full stack autonomy support and demonstrate this with a novel implementation of the Model Predictive Path Integral (MPPI) planner, which uses timed reach-avoid tasks as a cost function. * A demonstration of with an MPPI planner combined with a Control Barrier Function (CBF) safety filter to ensure robust and safe navigation. * Implementation of a library of CBF controllers, along with various sensors and estimators, designed for dynamic and uncertain environments. § SUPPORTED MODELS AND CONTROL DESIGN PROBLEMS Our goal for is to be a rapid development, proof-of-concept tool for the development of autonomy control architectures that, at its lowest level, integrates CBF-based feedback controllers to act as safety filters. supports a number of different classes of control-affine models (model of system Σ): * Deterministic, continuous-time Ordinary Differential Equations (ODE): ẋ=f(x)+g(x)u, where x∈𝒳⊂ℝ^n is the system state, u∈𝒰⊂ℝ^m is the control input, f:ℝ^n→ℝ^n and g:ℝ^n→ℝ^n × m are locally Lipschitz functions, and x(0) ∈𝒳_0 ⊆𝒳 is the initial state of the system. * Continuous-time ODE under bounded disturbances: ẋ=f(x)+g(x)u+Mw, where w∈𝒲 is the disturbance input, 𝒲 is a hypercube in ℝ^l, and M is a n× l zero-one matrix with at most one non-zero element in each row. * Stochastic differential equations (SDE): dx = (f(x) + g(x)u)dt + σ(x)dw where σ: ^n →^n × q is locally Lipschitz, and bounded on 𝒳, and w∈^q is a standard q-dimensional Wiener process (i.e., Brownian motion) defined over the complete probability space (Ω, ℱ, P) for sample space Ω, σ-algebra ℱ over Ω, and probability measure P: ℱ→ [0,1]. 
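To show how such a model might be encoded in practice, below is a minimal, illustrative sketch (not the toolbox's actual API) of a control-affine SDE in JAX, integrated with a simple Euler-Maruyama step. The planar single-integrator dynamics, the noise level, and all function names are assumptions made for this example.

import jax
import jax.numpy as jnp

# dx = (f(x) + g(x) u) dt + sigma(x) dw, illustrated on a planar single integrator
def f(x):
    return jnp.zeros_like(x)          # drift term (assumed zero here)

def g(x):
    return jnp.eye(2)                 # control matrix

def sigma(x):
    return 0.05 * jnp.eye(2)          # diffusion term (assumed constant)

def euler_maruyama_step(x, u, dt, key):
    dw = jnp.sqrt(dt) * jax.random.normal(key, shape=x.shape)
    return x + (f(x) + g(x) @ u) * dt + sigma(x) @ dw

def rollout(x0, u_seq, dt, key):
    def step(carry, u):
        x, key = carry
        key, sub = jax.random.split(key)
        x_next = euler_maruyama_step(x, u, dt, sub)
        return (x_next, key), x_next
    (_, _), traj = jax.lax.scan(step, (x0, key), u_seq)
    return traj

x0 = jnp.array([0.0, 0.0])
u_seq = jnp.tile(jnp.array([1.0, 0.5]), (50, 1))   # constant input for 50 steps
traj = rollout(x0, u_seq, dt=0.02, key=jax.random.PRNGKey(0))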
In certain practical applications, not all the states of the system may be observable. In such scenarios, we may assume that a state vector y is observable. For example, in the case of SDE, we may assume: dy = Cx dt + D dv where C ∈ℝ^p × n, D ∈ℝ^p × r, and v ∈^r is a standard Wiener process. § AUTONOMY STACK In this section, we describe the architecture of an autonomy stack built using . An autonomy stack typically comprises of sensors, estimators, planners and controllers, each responsible for different aspects of autonomous operation. As shown in Fig. <ref>, we outline a three-tier architecture consisting of a high-level planner, a nominal controller, and a feedback controller. This structure ensures that the robotic system can navigate complex environments while maintaining safety and robustness. §.§.§ High-Level Planner The high-level planner generates a feasible path from a starting point to a goal location, considering global objectives and constraints such as avoiding large obstacles and navigating dynamic environments. In , the Model Predictive Path Integral (MPPI) controller is used as the high-level planner. We provide a brief description of the MPPI algorithm. MPPI is a sampling-based procedure to solve a finite horizon model predictive control problem. Consider a nonlinear system with state x_t ∈ℝ^n and control input u_t ∈ℝ^m that follows the following discrete-time dynamics x_t+1 = F(x_t, u_t). For a time horizon H, consider the state trajectory 𝐱=[x_t^T, ..., x_t+H^T]^T, mean control input sequence 𝐯 = [v_t^T, .., v_t+H^T]^T, v_τ∈ℝ^m and injected Gaussian noise 𝐰 = [w_t^T,..,w_t+H^T]^T where w_τ∼ N(0,Σ_w) where Σ_w is the noise covariance, often chosen by the user. Let the disturbed control input sequence be 𝐮=[u_t,...,u_t+H]=𝐯+w. MPPI then solves the following problem min_𝐯 J(𝐯) = 𝔼[ Q(x_t,..x_t+H,𝐮) + ∑_τ=t^t+H-1( λ/2 v_τ^T Σ_w^-1 v_τ) ] s.t. x_τ+1 = F(x_τ, v_τ+w_τ) w_τ∼𝒩(0,Σ_w) More algorithmic details to solving (<ref>) can be found in as well as MPPI papers<cit.>. Next, we present some details on Q in (<ref>). The cost functions are generally user-designed for specific objectives. For instance, we introduce cost metrics here to quantify progress towards achieving two types of user-specified tasks: convergence to a goal and collision avoidance. Convergence to a goal g within a radius r_g is specified in terms of distance to the goal using cost c_g as follows: c_g(x_t) = k_g (|| p_t - p_g,t ||^2 - r_g^2) where p_g,t is the location of the goal at time t, k_g>0 is a weighting factor, and p_t is the location of the robot extracted from its state x_t. For collision with an obstacle o, we use the inverse of distance to the obstacle to define the cost c_o as follows: c_o(x_t) = k_o/max( ||p_t-p_o||, ϵ ) where p_o is the location of the obstacle and 0 < ϵ≪ 1 prevents the cost from becoming infinitely large. For time-invariant tasks, we design cost functions is as follows Q = ∑_τ=t^t+H c_g(x_τ) + c_o(x_τ) For time-dependent tasks such as timed reach-avoid specifications, our MPPI leverages signal temporal logic-inspired robustness metrics to define the cost functions. The timed reach-avoid specifications can specify task requirements, such as reaching a destination within a specified time or avoiding certain regions. The cost functions for reaching the goal g between times t_1 and t_2 is designed as follows Q_g[t_1, t_2] = min(c_g(x_t_1),..., c_g(x_t_2) ) where t_1 and t_2 are global times. 
In (<ref>), if t_1<t (or t_2<t), x_t_1 (or x_t_2), where t is the current time, is obtained from the history of states. On the other hand, if t_2>t_1≥ t, x_t_1, x_t_2 are obtained using the predicted states during the sampling procedure of MPPI. Further note that in contrast to the standard way of designing MPPI costs in <cit.>, we allow MPPI cost to also depend on states x_τ for τ<t so that a task already achieved in the past is not revisited when MPPI planner/controller is implemented in a receding horizon fashion. Similarly, for collision avoidance tasks that should be satisfied for all time t>0, we design Q_o = -max ( c_o(x_o), ... , c_o(x_t+T) ) The final cost for a sampled trajectory is designed as Q(x_t,..,x_t+T) = min( Q_g[t_1,t_2], Q_c ) Note that our cost is sensitive to chosen weights k_g,k_o and thus requires some manual tuning. Automated tuning of these hyperparameters will be supported in future library releases. Finally, note that the above costs can be compounded for any number of goals and obstacles. We would also like to mention that other implementations of MPPI with spatio-temporal specifications <cit.> and CBF shielding have been developed in the past<cit.>. §.§.§ Nominal Controller The nominal controller operates at an intermediate level, refining the plan generated by the MPPI into a sequence of control commands executable by the feedback controller. This layer ensures that the planned path is followed accurately, adjusting for any deviations due to model inaccuracies or unforeseen obstacles. In , several controllers have been developed for different systems. These include a proportional controller for the bicycle model, a Lyapunov controller for the van der Pol oscillator, and a geometric controller for a quadrotor (6 DOF). §.§.§ Feedback Controller The feedback controller executes the control commands generated by the nominal controller. In , the feedback controller is implemented using various types of Control Barrier Functions (CBF) and Control Lyapunov Functions (CLF), tailored to different robotic systems, environments, and sources of uncertainty. The framework of CBF-based control can provide safety guarantees by enforcing a forward invariant set. Namely, if the system starts in a safe state, then it should always stay in the safe set. The CBF controller adjusts the control inputs to ensure the robot remains within safe operating conditions. For the sake of completeness, we provide a short and informal description of vanilla CBFs here. A safe set 𝒮 of safe states is defined as the 0-superlevel set of a continuously differentiable function h(x):𝒳→ℝ as follows: 𝒮 ≜{ x ∈𝒳 : h(x) ≥ 0 }, ∂𝒮 ≜{ x∈𝒳: h(x)=0 }, Int (𝒮) ≜{ x ∈𝒳: h(x)>0 }. The following CBF condition is then imposed in a controller ḣ(x,u) = ∂ h/∂ x( f(x) + g(x)u ) ≥ -ν(h(x)),  ∀ x ∈𝒳. where f, g are functions defining control-affine dynamics in (<ref>). The condition (<ref>) essentially restricts the rate at which the robot is allowed to approach the boundary of the safe set. And on the boundary where h=0, it pushes the robot back as ḣ(x,u)≥ 0. The reader is referred to <cit.> for more details. By utilizing JAX for automatic differentiation and jaxopt for efficient quadratic program solving, enables fast and accurate computation of control inputs, making it suitable for prototype implementations. One of the main strengths of is its library of various implementations of feedback controllers (see Table <ref>). 
These include vanilla CBF and CLF controllers, robust CBF and CLF controllers, risk-aware stochastic CBF and CLF controllers, and combined risk-aware stochastic CBF and Lyapunov controllers. §.§ Auto-Differentiation for CBF Implementations A unique feature of the toolkit is the use of JAX <cit.> for auto-differentiation of the barrier function. For complex dynamics, this can be computationally challenging using symbolic toolboxes like SymPy <cit.>. JAX computes derivatives without manually or symbolically differentiating the function, enabling our tool to support arbitrary systems and barrier functions, provided that the barrier functions used for control have relative-degree[A function p: ℝ_+ ×ℝ^n →ℝ is said to be of relative-degree r with respect to the dynamics (<ref>) if r is the number of times p must be differentiated before one of the control inputs u appears explicitly.] one with respect to the system dynamics. For barrier functions with a relative-degree greater than one, our module can derive a new barrier function whose zero super-level set is a subset of that of the original barrier function. This is done by iteratively differentiating the original barrier function with respect to the system dynamics until the control input appears explicitly (determined by evaluating samples of the term ∂ h(x_s)/∂ xg(x_s)u for samples x_s ∈𝒳), and applying exponential CBF <cit.> or high-order CBF <cit.> principles to return a “rectified” barrier function. In , we provide solutions (feedback controllers) to the above two problems using Quadratic Program formulations as in, e.g., <cit.> for model (<ref>), <cit.> for model (<ref>), and <cit.> for model (<ref>). §.§ Closing the Loop with Sensors and Estimators In addition to the autonomy stack, the closed-loop system also includes sensors and estimates into one framework. Sensor models are integrated into the toolbox to simulate realistic scenarios. Estimators are used to infer the system's state when direct measurements are not available or are noisy. In the following estimators are included: extended Kalman Filter (EKF), unscented Kalman Filter (UKF), hybrid EKF-UKF filter. We note that all of these are auto-generated based on the systems dynamics equations. For more details on the auto-generation process see <cit.>. The closed-loop simulator also supports a number of integrators, such as forward Euler and solve ivp from SciPy and several integrators from JAX <cit.>. § SIMULATION EXAMPLES We provide two simulation studies showing application of our planners and controllers. §.§ MPPI with Timed Reach-Avoid Tasks We use our timed reach-avoid specifications to guide the robot to three waypoints within user-specified time intervals while performing collision avoidance. For each goal i, i∈{1,2,3}, we consider the cost function in (<ref>) We design a time-varying MPPI cost as in (<ref>) to promote reaching g_1 between 0-3.5s, g_2 between 3.6-5s, and g_3 between 5.1-10s. We also perform collision avoidance with an obstacle (shown in black in Fig.<ref>) with cost defined in (<ref>). The robot is modeled as a single integrator and the MPPI controller is implemented with a horizon of 50 time steps and 10,000 samples. The results are shown in Fig. <ref>. We see that the robot touches the circle and immediately moves on to the next waypoint. §.§ MPPI-CBF controller Consider the scenario shown in Fig. <ref>. The objective of the robot is to reach its goal location while avoiding obstacles. 
The robot follows the SDE dynamics in (<ref>) with f(x),g(x) defining an extended unicycle model with inputs linear acceleration and angular velocity and a constant noise term σ(x)=0.28. We compare the following three methods: 1) Stochastic CBF (SCBF) QP, 2) MPPI, and 3) MPPI + SCBF. In MPPI + CBF, the MPPI acts as a local planner whose output is filtered by the SCBF QP controller. The MPPI in all scenarios is implemented using only the nominal deterministic dynamics. The MPPI cost are designed as in (<ref>). This is not uncommon in practice as planners typically use simplified dynamics and controllers consider the full dynamics model. As such, owing to imperfect dynamics and the non-existence of guarantees of hard constraint satisfaction in theory, MPPI is expected to violate safety constraints. We simulate for 5s and the resulting trajectories are visualized in Fig. <ref>. The MPPI planner uses a horizon of 80-time steps and 20,000 samples. The SCBF QP controller ensures the robot's safety but is unable to get close to the goal. We attribute this to its greedy local optimization from only considering the instantaneous state. The MPPI controller can get close to the goal however it also gets close to the obstacles and is also observed to collide with the obstacle at the top. The MPPI-SCBF performs best and avoids all the obstacles. The MPPI can guide the robot in the correct direction owing to its finite horizon planning and other SCBF filters correctly filter its output to provide safety guarantees. To help understand the MPPI execution, we also show a snapshot of the simulation at t=3s in Fig. <ref>. The MPPI sampled trajectories are shown in green and each sampled trajectory is weighted using our designed cost function. The final output trajectory is computed based on these weights and is shown in pink. § CONCLUSION The paper presented an extended version of , integrating Model Predictive Path Integral (MPPI) methods with reach-avoid tasks and Control Barrier Functions (CBFs) to enhance the safety and robustness of autonomous robotic systems. The integration of timed reach-avoid specifications with MPPI provides a powerful framework for task planning and control, enabling robots to navigate complex environments while adhering to safety requirements. plainnat
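To make the planner-plus-filter pipeline compared above concrete, the following self-contained sketch combines a basic MPPI update (sample noisy control sequences, roll them out, exponentially weight them by cost, average) with a one-constraint CBF projection acting as the safety filter. It is an illustrative reconstruction, not the toolbox implementation: the single-integrator dynamics, the quadratic goal cost and inverse-distance obstacle cost in the spirit of c_g and c_o, the temperature, and all gains are assumptions made for the example; the control-effort penalty of the MPPI objective is omitted because it does not affect the sample weights; and the closed-form projection covers only a single affine constraint without input limits.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, H, K, lam = 0.05, 30, 500, 1.0                        # step, horizon, samples, temperature (illustrative)
p_goal, p_obs, r_obs = np.array([3.0, 3.0]), np.array([1.5, 1.5]), 0.5

def dynamics(x, u):                                       # single integrator, x_dot = u
    return x + dt * u

def traj_cost(traj):                                      # in the spirit of c_g and c_o
    d_goal = np.sum((traj - p_goal) ** 2)
    d_obs = np.linalg.norm(traj - p_obs, axis=1)
    return d_goal + np.sum(1.0 / np.maximum(d_obs - r_obs, 1e-2))

def mppi_step(x0, v, sigma_w=0.5):
    """One receding-horizon MPPI update of the mean control sequence v (H x 2)."""
    noise = rng.normal(0.0, sigma_w, size=(K, H, 2))
    costs = np.empty(K)
    for k in range(K):
        x, traj = x0, []
        for t in range(H):
            x = dynamics(x, v[t] + noise[k, t])           # rollout with disturbed input
            traj.append(x)
        costs[k] = traj_cost(np.array(traj))
    w = np.exp(-(costs - costs.min()) / lam)              # path-integral weighting
    w /= w.sum()
    return v + np.einsum('k,khm->hm', w, noise)

def cbf_filter(x, u_nom, alpha=1.0):
    """Project u_nom onto the half-space given by the CBF condition for
    h(x) = ||x - p_obs||^2 - r_obs^2 and single-integrator dynamics (L_f h = 0)."""
    h = np.sum((x - p_obs) ** 2) - r_obs ** 2
    a = 2.0 * (x - p_obs)                                 # L_g h
    b = -alpha * h
    if a @ u_nom >= b:
        return u_nom                                      # nominal input already safe
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Closed loop: MPPI proposes, the CBF filter certifies, the first input is applied.
x, v = np.zeros(2), np.zeros((H, 2))
for _ in range(60):
    v = mppi_step(x, v)
    u = cbf_filter(x, v[0])
    x = dynamics(x, u)
    v = np.roll(v, -1, axis=0); v[-1] = 0.0               # shift the plan for the next cycle
```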
http://arxiv.org/abs/2407.13529v1
20240718140142
Experimental Sample-Efficient and Device-Independent GHZ State Certification
[ "Laura dos Santos Martins", "Nicolas Laurent-Puig", "Ivan Šupić", "Damian Markham", "Eleni Diamanti" ]
quant-ph
[ "quant-ph" ]
[Correspondence email address:]laura.dos-santos-martins@lip6.fr Sorbonne Université, CNRS, LIP6, 4 Place Jussieu, Paris F-75005, France [Correspondence email address:]nicolas.laurent-puig@lip6.fr Sorbonne Université, CNRS, LIP6, 4 Place Jussieu, Paris F-75005, France Sorbonne Université, CNRS, LIP6, 4 Place Jussieu, Paris F-75005, France Sorbonne Université, CNRS, LIP6, 4 Place Jussieu, Paris F-75005, France Sorbonne Université, CNRS, LIP6, 4 Place Jussieu, Paris F-75005, France § ABSTRACT The certification of quantum resources is a critical tool in the development of quantum information processing. In particular, quantum state verification is a fundamental building block for communication and computation applications, determining whether the involved parties can trust the resources at hand or whether the application should be aborted. Self-testing methods have been used to tackle such verification tasks in a device-independent (DI) setting. However, these approaches commonly consider the limit of large (asymptotic), identically and independently distributed (IID) samples, which weakens the DI claim and poses serious challenges to their experimental implementation. Here we overcome these challenges by adopting a theoretical protocol enabling the certification of quantum states in the few-copies and non-IID regime and by leveraging a high-fidelity multipartite entangled photon source. This allows us to show the efficient and device-independent certification of a single copy of a four-qubit GHZ state that can readily be used for the robust and reliable implementation of quantum information tasks. Experimental sample-efficient and device-independent GHZ state certification E. Diamanti July 22, 2024 ============================================================================ The certification of entangled quantum states is one of the most important quantum information primitives, as entangled states are very often the difficult resource for quantum information applications, so that their certification can in turn certify the reliable running of the associated application <cit.>. Examples of such resources are cluster states as the resource for universal measurement based quantum computation <cit.>, stabiliser states for error correction <cit.> and GHZ states, which are resources for anonymous communication <cit.>, quantum metrology <cit.>, leader election <cit.> and more. When entangled states are intended for use for a given application, their certification should meet three key requirements. First, it must output a state that can be used for a given application, without additional assumptions. Given that measurements in quantum theory are destructive and irreversible, it is crucial to certify the quality of quantum states without destroying them. Second, certification should ideally not rely on complete trust in the measurement devices, as faulty assumptions about their structure could compromise security, implying it should be, at least to some degree, device-independent. Lastly, the certification process must account for possible memory effects, thereby avoiding the assumption that the source produces independent and identical copies in every round of the experiment. In addition to the requirements imposed by the applications, certification should also prioritize efficiency in terms of both energy consumption and time investment. 
Indeed, quantum resources are inherently costly, underscoring the significance of verifying quantum systems using the fewest possible samples or measurements while still achieving robust confidence in the results. This not only minimizes the time, cost, and computational resources needed for verification but also mitigates the potential introduction of errors by external factors, narrowing the time window during which errors may occur. Bell nonlocality, aside from being a fundamentally nonclassical phenomenon used to invalidate locally-causal theories, offers an elegant solution for device-independent certification <cit.>. This approach enables certification despite lacking control over the measurement devices, with self-testing results pinpointing specific quantum experiments based solely on observed measurement correlations <cit.>. While traditionally expressed as mathematical theorems linking quantum setups with correlation probabilities obtained through infinite repetitions, recent advancements demonstrate the practical utility of such self-testing results in finite regimes, enabling protocols for sample-efficient device-independent certification of quantum states without the need for the independent and identical distribution (IID) assumption <cit.>. Such device-independent certification protocols pose significant experimental challenges and hence have been largely unexplored in practice. Recent experiments have certified states when measurement devices are trusted (i.e., not DI) <cit.>, even when some parties act dishonestly <cit.>. In the DI setting there have been experiments robustly self-testing states <cit.>, or verifying entanglement properties <cit.>, but these assume IID and are only valid in the large copy (asymptotic) limit. No experiment so far satisfies our three requirements for the certification of entangled quantum states. Here we experimentally demonstrate, for the first time to our knowledge, the DI certification of a single copy of a four-partite GHZ state, completely free of the IID assumption. Our demonstration relies upon and expands the sample-efficient protocol of <cit.> and leverages the characteristics of a high-performance multipartite entangled photon source <cit.>. We analyze the protocol in terms of the achieved certified optimized fidelity measure and show that our implementation opens the way to carrying out efficiently and reliably quantum information tasks. A fully device-independent scenario The objective of our certification protocol is to quantitatively assess the proximity of a state, σ_c, generated by an uncharacterized source to the target state |GHZ⟩=(|HHHH⟩+|VVVV⟩)/√(2), without direct measurement. Ideally, the source should consistently produce multiple copies of the state |GHZ⟩. However, due to its uncharacterized nature, it is possible that, over N rounds, the produced state, σ^N, may exhibit correlations or even entanglement across rounds. In our protocol, based on <cit.>, we perform measurements over N-1 rounds and use the obtained results to estimate the proximity of an unmeasured copy to the target state, as illustrated in Fig. <ref>. Operating in a device-independent scenario, where measurements are uncharacterized and conform to a Bell scenario, we only have access to input-output correlations. For this reason, our approach involves testing the violation of a Bell inequality, which self-tests the target state: a high Bell violation implies that all N-1 measured copies are close to the target state. 
Given the random selection of the unmeasured copy, we can infer with high confidence that it is also close to the target state. The fact that we operate in a DI scenario leads to a few caveats in our certification protocol. Unlike in many other approaches to quantum certification tasks (see e.g. <cit.>), the fidelity between states cannot serve as a standard metric in this setting. Under DI conditions, our protocol is limited to certifying states up to local isometries, at best. Therefore, to address the uncertainty inherent in treating all measurement devices as black boxes, we propose using extractability as an appropriate metric, as suggested in <cit.>. Extractability represents fidelity optimized over all possible local isometries. A high extractability indicates the presence of an isometry capable of aligning the measured state closely with the target state. As all other copies are measured, the extractability of the unmeasured copy is estimated conditionally on the outcomes of the performed measurements. In this sense the certified extractability is conditional, or, in other words, it characterizes the conditional state: σ̃_c=1/p_1,…,c-1,c+1,…,N Tr_1, …, c-1, c+1, …, N [(⊗^N\{c}_k=1 M_𝐨_k |𝐢_k) σ^N], where σ^N is the state over N rounds, M_𝐨_k |𝐢_k represents the measurement performed on the kth copy giving outcome 𝐨_k, and p_1, …, c-1,c+1,…,N=Tr[(⊗^N\{c}_k=1 M_𝐨_k |𝐢_k) σ^N]. This approach guarantees that the conditional state of the unmeasured copy is independent from all the other copies produced in the measurement rounds, which allows us to define the appropriate figure of merit for a fully DI quantum state certification protocol and formally express its final goal. We wish to claim, with a confidence level 1-δ, whether the extractability of the conditional state, σ̃_c, from the target state, |GHZ⟩, is bigger than some value 1-η, with η∈ [0,1], which can be written as: Ξ(σ̃_c,|GHZ⟩) = _Φℱ(Φ[σ̃_c],|GHZ⟩) ≥ 1-η, where Φ is an arbitrary local isometry and the fidelity of a state σ with respect to the target state |ψ⟩ is defined as ℱ(σ, |ψ⟩)=⟨ψ|σ|ψ⟩. In order to properly estimate the extractability of the GHZ state from the unmeasured copy, we must first carefully choose a Bell inequality that self-tests the target state. In other words, the selected Bell inequality should be maximally violated only by the |GHZ⟩ state (up to local isometries). This selection determines the Bell test to which the copies will be subjected during the measurement rounds. Furthermore, we can rely on robust self-testing statements based on a Bell inequality to establish a lower bound on the extractability of the underlying quantum state, Ξ(σ̃_c,|GHZ⟩), from the observed Bell violation, β. Moving from a general self-testing framework to a well-defined certification protocol, it is useful to reframe the scenario as a nonlocal game derived from the Bell inequality. In this context, after establishing the appropriate winning and losing outcomes (where winning corresponds to those outcomes that contribute to violation of the Bell inequality), only the target state (up to local isometries) achieves the optimal quantum winning probability, p_QM. Although self-testing statements are typically designed for IID sources, we can leverage the robustness statement to determine the maximal winning probability for states with limited extractability. 
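The conditional state σ̃_c defined above can be made tangible with a toy two-round example. The NumPy sketch below is an illustration only, using single-qubit "copies" instead of four-photon states: it prepares two rounds that are classically correlated rather than IID, applies a measurement element to round 1, and extracts the conditional state of round 2 by a partial trace. The outcome shows why conditioning on the measured rounds matters once the IID assumption is dropped.

```python
import numpy as np

d = 2
# Two rounds that are classically correlated (hence not IID):
# sigma = 0.5 |00><00| + 0.5 |11><11| on round-1 (x) round-2.
sigma = np.zeros((d * d, d * d))
sigma[0, 0] = 0.5
sigma[3, 3] = 0.5

M = np.diag([1.0, 0.0])                         # measurement element on round 1 (outcome "0")

# Unnormalised conditional state of round 2:  Tr_1[(M (x) I) sigma].
sigma4 = sigma.reshape(d, d, d, d)              # indices (row1, row2, col1, col2)
unnorm = np.einsum('ac,cbad->bd', M, sigma4)    # contract round 1 against M
p = np.trace(unnorm).real                       # probability of that outcome
conditional = unnorm / p

# Each round alone looks maximally mixed, but conditioning on round 1 collapses round 2:
print(p)                                        # 0.5
print(conditional)                              # |0><0| : the unmeasured copy is not independent
```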
After the copies are measured and given a score for their performance in all N-1 rounds (they score one if they get a winning result, zero if not), we can add them up to determine the overall score, N_win, and deduce the resulting verification pass rate, P = N_win / (N-1). For a given desired extractability 1-η (Eq. (<ref>)), and confidence level 1-δ, the number of copies required is determined by two parameters ϵ_1,ϵ_2 which are related to the violation of the Bell inequality in the ideal case. In the protocol, ϵ_1 fixes the required pass rate: our claim on extractability holds when P≥ p_QM-ϵ_1. This is then related to the desired extractability through ϵ_2, via cη = ϵ_2 > ϵ_1, where c is a constant coming from self-testing, linking the extractability to Bell violation (see <cit.> and Supplementary Information). The role of ϵ_2 is to allow for a gap in the requested pass rate and the desired extractability so that our goal can be achieved for finite N. The requested number of copies N is chosen so that the following is satisfied δ≤(1/N + N-1/Ne^D(p_1|| p_2))^N, where D(a||b) = alog(a/b) + (1-a)log[(1-a)/(1-b)] is the Kullback-Leibler divergence and p_i=p_QM-ϵ_i; see Supplementary Information for a detailed description of the protocol. Choosing the Bell measurement operator As explained above, a crucial step for experimentally demonstrating the certification of a GHZ state is to select the most appropriate Bell operator. To this end, we studied the performance of three different candidates: two Bell operators, proven to have very tight self-testing bounds in terms of robustness <cit.> (see Methods, Eqs. (<ref>) and (<ref>)) and the following Mermin-like operator <cit.>: 𝐁_Mermin = A_0B_0C_0D_0 - A_1B_1C_0D_0 -A_1B_0C_1D_0 -A_1B_0C_0D_1 +A_1B_1C_1D_1 -A_0B_1C_1D_0 -A_0B_1C_0D_1 -A_0B_0C_1D_1, where A_0,B_0,C_0,D_0 = X and A_1,B_1,C_1,D_1 = Y. The motivation behind this choice is the fact that its quantum bound, β_Q, saturates the algebraic bound, β_algebraic, leading to a maximum success probability of p_QM=1. However, to our knowledge, the only self-testing bound for a Mermin inequality existing in the literature is restricted to a tripartite system <cit.>. For this reason, we computed for this work a self-testing bound for the four-partite case, relying on the numerical method described in <cit.> (see Supplementary Information for more details). To compare the different Bell operators fairly, we consider a quantum state described as a statistical mixture of a GHZ state and white noise, mathematically expressed as ρ=(1-α)ρ_GHZ+α/161. We calculate the pass rate (as a probability in this case) P, and subsequently fix the winning probability threshold, p_1=P, for each operator, as a function of the noise characterizing the state we want to certify, α. This allows us to study the behaviour of the remaining parameters, captured in inequality (<ref>). For this analysis, three main figures of merit stand out for their significance: the maximum extractability one can certify, how many samples one needs to measure in order to complete the protocol and the confidence level associated with the results. In Fig. <ref> we analyse the behaviour of the different operators from the perspective of the parameters mentioned above (further details are given in the Methods). 
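The comparison just described can be reproduced for the Mermin-like operator with a few lines of NumPy. The sketch below builds the eight-term operator defined above, evaluates its expectation value on the white-noise mixture ρ = (1−α)ρ_GHZ + (α/16)𝟙, and converts it to a winning probability. The conversion p = (1 + β/8)/2 is an assumption: it is the usual linear map between the Bell value and the nonlocal-game score (with p_QM = 1 reached at the algebraic bound of 8), whereas the paper's exact assignment of winning outcomes is not reproduced here.

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

# Eight-term Mermin-like operator, with A_0=B_0=C_0=D_0=X and A_1=B_1=C_1=D_1=Y
# (0 -> X, 1 -> Y in the index tuples below; signs as in the definition above).
terms = [(+1, (0, 0, 0, 0)), (-1, (1, 1, 0, 0)), (-1, (1, 0, 1, 0)), (-1, (1, 0, 0, 1)),
         (+1, (1, 1, 1, 1)), (-1, (0, 1, 1, 0)), (-1, (0, 1, 0, 1)), (-1, (0, 0, 1, 1))]
B_mermin = sum(s * kron([X if i == 0 else Y for i in idx]) for s, idx in terms)

ghz = np.zeros(16, dtype=complex)
ghz[0] = ghz[15] = 1 / np.sqrt(2)                      # (|0000> + |1111>)/sqrt(2)
rho_ghz = np.outer(ghz, ghz.conj())

for alpha in (0.0, 0.05, 0.10):
    rho = (1 - alpha) * rho_ghz + alpha * np.eye(16) / 16
    beta = np.trace(rho @ B_mermin).real               # equals 8, the algebraic bound, at alpha = 0
    p_win = 0.5 * (1 + beta / 8)                       # assumed Bell-value-to-game-score mapping
    print(f"alpha={alpha:.2f}  beta={beta:.3f}  p_win={p_win:.4f}")
```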
In all three plots, it is clear that the Mermin operator outperforms the others, providing not only a significant advantage in terms of sample efficiency, by almost two orders of magnitude, but an overall better performance, regarding how close we can certify a state with respect to the target state. It is therefore the one we use to device-independently certify a quantum state. Experimental results To experimentally demonstrate the fully DI certification of a quantum state, we use a compact and high-performance four-party GHZ state source, based on spontaneous parametric down conversion (SPDC) in a layered-Sagnac interferometer configuration <cit.> (see Fig. <ref>). Once the states are generated, we transmit each of the four photons to the measurement apparatus and run the protocol described above. After the protocol is successfully completed, we use all the recorded measurement outcomes, except for one corresponding to a single copy randomly selected, to calculate the pass rate P. After setting the desired confidence level, 1 - δ, we use the total number of copies, N, to numerically invert inequality (<ref>) and compute the solution for the maximum certified extractability that fulfills the condition ϵ_2 > ϵ_1 (a detailed description of the data acquisition and its analysis is given in the Methods). The results are shown in Fig. <ref>. The certified extractability with respect to the total number of copies follows the overall expected behaviour outlined by the simulation (full) curves, apart from standard experimental fluctuations. The plot on the left illustrates that, by tuning the confidence level, we can reduce the number of samples required to achieve the same certification level. However, this trade-off vanishes for large samples, as the certified extractability always converges to the self-testing bound. These results show, for the first time to our knowledge, the experimental device-independent certification of a four-qubit GHZ state with an extractability of Ξ(σ̃_c, |GHZ⟩) ≥ 0.896, for a total of 4643 verified samples and a confidence level of 1-δ=0.99, in a non-IID scenario. This demonstration is only possible due to the high-fidelity states produced with our experimental setup, yielding an average success probability of P=0.973. It is clear that the collection of additional samples would allow us to get closer to the self-testing bound of Ξ_max(σ̃_c,|GHZ⟩) ≥ 0.919, assuming the average passing probability of P would remain unchanged. However, a significant increase of the acquired statistics can be experimentally challenging due to the accumulation of the recovery time associated with updating each black-box's setting. This motivates the consideration of a less conservative trust scenario, in which, instead of a one-to-one input-to-output correspondence, we record multiple outcomes within the 15 seconds acquisition time associated with each classical input. Using post-processing techniques, we can decompose the results obtained within the acquisition window into multiple near-single-shot measurements, mimicking the recording of a single output for each input. A randomization of the decomposed events provides the opportunity to simulate the implementation of the protocol for a larger sample, using experimental data (see Methods for more details about the data acquisition and analysis). The resulting data set, shown in Fig. 
<ref> (right), is further explored by testing different state generation rates - ranging from 6 Hz to 101 Hz - which result in varying degrees of high-order SPDC emissions, consequently affecting the certified extractability - ranging from 0.923 to 0.0820, respectively. This analysis highlights the range of capabilities in which our multipartite entanglement source can operate and shows a clear convergence of the certified extractability towards the self-testing bound, which is only possible due to the drastic increase of copies to N∼ 4×10^5. Although this approach does not strictly follow the protocol, it shows that, as long as the stability of the setup is maintained, it is possible to saturate the self-testing bound. Discussion It is worth noting that, although the Mermin operator provides the best certification results among those we analysed, this does not mean they cannot be further improved. In fact, it is possible that the self-testing bound we found is not optimal in terms of robustness, i.e., that a tighter bound exists. Furthermore, there might exist operators, other than the ones we analysed, yielding a more favorable combination of robust self-testing bound parameters and maximum probability of winning, p_QM. Since the GHZ state certification protocol heavily relies on such parameters, this suggests that the same experimental data can potentially produce even better results. Additionally, while the experimental setup in Fig. <ref> is suitable for the purpose of demonstrating the certification of a quantum state, integrating this protocol as a subroutine in a quantum information task would likely require optical switches to guide the certified copies to their intended purpose. Moreover, our primary limitation in recording a large sample, while guaranteeing a one-to-one input-to-output correspondence, is the active control of the Mermin settings. More specifically, the experimental realisation of each black-box involves controlling the rotation of waveplates with mechanical motors. Adjusting their configuration requires waiting for their response time before measuring, which is accumulated over all classical inputs, leads to time-consuming implementations. Alternatively, a passive choice mechanism could accelerate the protocol execution, making it more efficient. This work reinforces the validity of the fully DI certification of quantum states as a valuable fundamental resource for a wide range of quantum information applications. We emphasize the practicality of our protocol in providing a rigorous framework in which a finite number of samples can yield meaningful results, without further assuming identical and independent distribution for all produced copies. Furthermore, it is instructive to observe the impact the number of samples has on the robustness of our results. This is particularly clear when comparing the purple data points in the two plots in Fig. <ref> - for a similar passing probability and the same confidence level, the certified extractability increases by 3% with ten times more samples. In other words, while the theory is able to characterize the few-copies regime, demanding confidence levels and extractability requirements can only be achieved for relatively large samples. In general, the ability to experimentally demonstrate the fully DI certification of such a high extractability level paves the way to the reliable and robust use of quantum information systems in practical, real-world settings. Note. 
At the time of finalising this work, we became aware of parallel and independent work on experimental quantum state certification, also submitted to the arXiv today: M. Antesberger et al., “Efficient and Device-Independent Active Quantum State Certification” (2024). Acknowledgments. We thank Uta Isabella Meyer and Henrique Silvério for fruitful discussions and technical support. We acknowledge financial support from the European Union’s Horizon 2020 framework programme under the Marie Sklodowska Curie innovation training network project AppQInfo, Grant No. 956071 (LdSM), the Horizon Europe research and innovation programme under the project QSNP, Grant No. 101114043 (ED), the European Research Council Starting Grant QUSCO, Grant No. 758911 (NLP, ED), the PEPR integrated projects QCommTestbed, ANR-22-PETQ-0011 (ED) and EPiQ, ANR-22-PETQ-0007 (IS, DM), and the HQI project, ANR-22-PNCQ-000 (DM), which are part of Plan France 2030. METHODS Self-testing bounds The self-testing bound plays a pivotal role in our implementation of the quantum state certification protocol, as it impacts not only the sample efficiency but also the lower bound for the certified extractability. For this purpose, as mentioned in the main text, we consider two options, introduced in <cit.>, alongside the Mermin operator (Eq. (<ref>)). The Bell operator, derived from F. Baccari et al. <cit.> holds significant potential for its tight self-testing bound and it can be written as: 𝐁_F.Baccari = 3(A_0B_0C_0D_0 + A_1B_0C_0D_0) + (A_0B_1 -A_1B_1) + (A_0C_1 -A_1C_1) +(A_0D_1 -A_1D_1). The subsequent operator, originating from Q. Zhao et al. <cit.>, offers comparable advantages, it was demonstrated to be even tighter than the first one, and it is defined as: 𝐁_Q.Zhao_1 = (A_0+A_1)B_1C_1D_1 +(A_0-A_1)B_0 + B_0C_0 + B_0D_0. For GHZ states, the optimal quantum bounds - for both operators mentioned above - can be achieved by taking A_0=X + Z/√(2), A_1=X - Z/√(2) and B_i,C_i,D_i defined as X (Z) for i=0 (i=1). Lastly, we take the Mermin operator, defined in Eq. (<ref>), due to its optimal maximum success probability of p_QM=1. In order to compare the different options, we consider the robust self-testing statement to be of the form (see Supplementary Information for more details): Ξ(σ, |GHZ⟩) ≥ sβ+μ, where s, μ∈ℝ, for all states σ achieving a violation greater than β. The lower bound of each inequality, determined by the values of s and μ (see Table <ref>), is illustrated in Fig. <ref>. The Mermin operator displays the tightest bound from a robust self-testing perspective, suggesting that it is likely the most appropriate operator for the certification protocol. However, since the lower bound on the extractability provided by the DI quantum state certification protocol also depends on the maximum probability of winning the nonlocal game with a quantum strategy, p_QM, we observe that the advantage of the Mermin operator becomes even more predominant than one would think by simply comparing the self-testing bounds (see Fig. <ref>). This indicates that the robust self-testing analysis does not contain all the necessary information for the choice of the most favorable operator for the DI certification of a quantum state. Data collection and analysis We conducted the experiment for three different pump power settings, associated with different state generation rates: 6 Hz, 46 Hz and 101 Hz. 
For each of these configurations, we collected more than 4×10^5 states over a fixed acquisition window of 15 seconds per randomly selected classical input. To implement the fully DI certification protocol, we must guarantee the precise isolation of one single random event from the whole sample. One possibility could be to employ an optical switch. While this method allows for the selection of some states, accurately discerning the presence of one and only one state would prove challenging due to the post-selected nature of states produced by SPDC and their inherent high-order emissions. Alternatively, post-processing analysis techniques are a viable solution. Using a high-performance time tagger, we can record the precise time-stamps associated with each detected event. With this capability, we can replay the full experiment and decompose each of the 15-second acquisitions into multiple ultra-short measurements, such that each of them records, on average, one single output. Two different methods are taken to analyse the resulting data. First, we take each 15-second window and randomly select one out of the full set of recorded outcomes, guaranteeing a true one-to-one input-to-output correspondence in the nonlocal game (Fig. <ref> - left). While this approach discards a significant portion of the recorded events and limits the number of measured samples, N, to the total number of randomly generated inputs throughout the experiment, it reflects a faithful implementation of the protocol. Alternatively, we consider the full dataset resulting from the decomposition of each classical input into as many inputs as the total number of recorded outputs within the 15-second acquisition window (Fig. <ref> - right). As mentioned in the main text, while this last approach does not strictly follow the protocol, it allows to use the full data set to simulate a considerably larger sample. For both approaches, we can use a random number generator to select a single copy to be certified and, consequently, excluded from the verification analysis. It is worth noting that high-order emissions hamper the isolation of individual single events. Each time one of these is selected to be certified, we restart the random selection until we discard one and only one copy for each classical input. ieeetr * § SUPPLEMENTARY INFORMATION §.§ Detailed protocol In this section, we provide a complete and more general description for the estimation of the sample efficiency bound (Eq. (<ref>)) and the quantum state certification protocol adopted in this work and inspired by <cit.>. We start by considering a source that produces N independent copies of a quantum system, denoted as S = {σ_1, …, σ_N}, where σ_i represents the i-th copy of a quantum state. For now, we assume the copies are independently distributed, but not identical, and then comment on the generalization. The purpose of quantum state certification is to quantitatively assess how close a set of states, S_c, is to the target state, |ψ⟩. This can be done using the notion of average fidelity, F̅(S_c,ψ), where S_c = {σ_1, …, σ_N_c} denotes a set of N_c states. Our goal is to claim, with a confidence level 1-δ, whether the average fidelity of the set of samples, S_c, to the target state, |ψ⟩, is bigger than some value 1-η, with η∈ [0,1]. This can be formally written as F̅(S_c,ψ) = 1/N_c∑_j=1^N_c⟨ψ|σ_j|ψ⟩≥ 1-η. However, in device-independent scenarios, local measurements are not characterized, or trusted, since all devices are treated as black boxes. 
In other words, some local isometries are undetectable, which renders state fidelity based verification methods impossible. To address this issue, we can leverage the concept of extractability, defined in the literature as the fidelity optimized across all possible isometries <cit.>. More specifically, the extractability of the target state |ψ⟩ from a state σ_j is written as: Ξ(σ_j,ψ)=_Φℱ{_j[Φ(σ_j)],|ψ⟩}, where Φ is an arbitrary local isometry. If we generalize this equation for the set S_c, we can re-define our goal with the average extractability: Ξ̅(S_c,ψ) = 1/N_c∑_j=1^N_cΞ(σ_j,|ψ⟩) ≥ 1-η. A high extractability implies the existence of an isometry that can bring the measured state close to the target state. In other words, we ensure that if we apply the inverse local isometry to any arbitrary measurement, the statistics obtained from the measured state will be close to those from the target state, which suggests that the extractability is the DI equivalent of the fidelity. As mentioned above, so far, our analysis has presumed that the copies are uncorrelated with each other. However, this assumption can be misleading and compromise the protocol. Two different perspectives can be adopted to face this issue. We can restrict the protocol to the quantum state certification of one single copy and abandon the IID assumption, adopting the idea of conditional extractability, as detailed in the main text (see Eq. (<ref>)). Alternatively, if we insist on certifying a set of states with more than one element (N_c>1), to our knowledge, as far as the theory is developed, we need to keep the assumption that all copies are uncorrelated with each other. In the latter case, it is useful to define μ=N_c/N as the fraction of certified samples. Independently of which option we follow, once the general goal of state certification is defined, we can move our focus towards the estimation of the average extractability. For this purpose, the proposed protocol adopts the form of a nonlocal game based on a Bell test. With this approach, once a Bell inequality that self-tests the target state, |ψ⟩, is selected, we can use robust self-testing results to establish a lower bound on the average extractability as a function of the violation of the selected Bell inequality <cit.>. More precisely, the robustness statement - characterized by a constant c̃, itself dependent on the Bell operator under analysis - asserts that, for a given state, σ, achieving the Bell violation β = β_Q - η/c̃, where β_Q is the quantum bound - defined as the maximal violation achievable by the correlations compatible with quantum theory - there exists an isometry, Φ, such that Eq. (<ref>) holds. The full protocol is detailed as follows: * A source generates N copies of a quantum state. Each copy, σ_j, is distributed over each spatially separated and non-communicating player, k. * After all states were distributed, the verifier rolls an N-faced die, until N_c=μ N different outcomes are obtained, therefore determining the set of states, S_c, to be preserved and certified. The remaining states constitute the verification set, S_v, which will be measured. * To decide the measurement setting for each copy, j, in the verification set, each player, k, will randomly generate an input, i_k,j, determining the measurement setting of their black-box, which will output a certain result o_k,j. 
* After gathering the score of each copy from the verification sample, i.e., a winning output was obtained, given the received input, we can calculate the overall pass rate P=N_win/N_v. The success of the protocol can only be evaluated after we fix the desired extractability, characterized by η, which in turn, defines the lower bound of the average success probability of the whole sample p_2=p_QM - ϵ_2, with ϵ_2 = cη, p_QM being the maximum probability of winning the nonlocal game with a given quantum strategy and c=(2c̃β_algebraic)^-1 serving as a fundamental factor in establishing a link between the self-testing bound and the nonlocal game. Additionally, if we acknowledge that the success probability threshold of the verified sample, p_1 = p_QM -ϵ_1, needs to be larger than p_2, i.e., ϵ_1 < ϵ_2, we can define the condition that determines the successful certification of the remaining copy, expressed as P ≥ p_1 = p_QM -ϵ_1. While studying the theoretical aspects of our work, we discovered an inconsistency between the violation of the Bell inequality and the winning probability of the nonlocal game, which eventually leads to the certification of an extractability higher than the self-testing bound. The origin of this disparity is in the definition of the constant c, lacking a factor of two in order to make a perfect translation of the Bell violation into a nonlocal game. For this reason, throughout the analysis of the experimental data, we considered the corrected expression c=1/2c̃β_algebraic. Depending on the specified parameters mentioned above, we can calculate the confidence level, associated with the certification process, N ≥lnδ/ln(1-μ + μ e^D(p_QM-ϵ_1|| p_QM-ϵ_2)), where D(a||b) = alog(a/b) + (1-a)log[(1-a)/(1-b)] is the Kullback-Leibler divergence. §.§ Robust self-testing bound In this section, we detail the method used to estimate the robustness results of self-testing, associated with the 4-qubit Mermin operator (Eq. (<ref>)), shown in Table <ref>. Following a similar approach to the one described in F. Baccari et al. <cit.>, we recall that our goal is to find a lower bound for the extractability of the target state, |GHZ⟩, from the measured state, σ, based on the violation, β, obtained with the Mermin operator, 𝐁_𝐌𝐞𝐫𝐦𝐢𝐧. More concretely, we want to find s, μ∈ℝ, such that: Ξ(σ, |GHZ⟩)≥ sβ+μ. If we recall that the extractability can be written as, Ξ(σ, |GHZ⟩) =Λ=Λ_1⊗…⊗Λ_4maxℱ(Λ(σ),|GHZ⟩), where Λ_i is the local channel on the i-th party and the fidelity is expressed as: ℱ(Λ(σ),|GHZ⟩)= [σ(Λ_1^†⊗…⊗Λ_4^†)(|GHZ⟩⟨GHZ|)], we can, equivalently, focus on finding the appropriate s and μ parameters, such that, for some quantum channels, Λ_i, the following operator inequality holds: K = (Λ_1^†⊗…⊗Λ_4^†)(|GHZ⟩⟨GHZ|) ≥ s 𝐁_𝐌𝐞𝐫𝐦𝐢𝐧 + μ1. Since the measurement of 𝐁_𝐌𝐞𝐫𝐦𝐢𝐧 only involves two dichotomic measurements per player, Jordan's lemma can be used to reduce the state to the N-qubit space, as mentioned in Ref. <cit.>, allowing the parameterization of the local observables as, A_i =B_i=C_i=D_i =cos(α_i)σ_++(-1)^isin(α_i)σ_-, where σ_+=(X+Y)/√(2) and σ_-=(X-Y)/√(2) and α_i∈[0,π/2]. Consequently, the operator 𝐁_𝐌𝐞𝐫𝐦𝐢𝐧(α⃗) can now be defined with respect to the angles α_i. With the same intent in mind for K, we consider the following depolarizing channel, as suggested in Ref. <cit.>: Λ_i(α_i)=1+g(α_i)/2σ+1-g(α_i)/2Γ_i(α_i)σΓ_i(α_i), where g(α_i) = (1 + √(2))(sinα_i + cosα_i - 1) and Γ_i(α_i)=σ_+ for α_i ∈ [0,π/4] σ_- for α_i ∈ (π/4,π/2]. 
After the operators' parameterization, we move our focus towards proving, for all possible α_i, that the following inequality is satisfied: K(α_0, …, α_4) ≥ s 𝐁_𝐌𝐞𝐫𝐦𝐢𝐧 (α_0, …, α_4)+μ1 for some s,μ∈ℝ. To find the optimal parameters, we start by fixing s and finding μ such that the extractability bound leads to 1 at the point of maximal violation, i.e., μ=1-sβ_Q. For that combination of s and μ, we then check that the minimum eigenvalue of K(α⃗)-s𝐁_𝐌𝐞𝐫𝐦𝐢𝐧(α⃗)-μ1 is larger or equal than 0, for all α_i. If this condition is verified, we repeat the previous steps for a lower s. The optimal bound is determined by the minimum value of s and corresponding μ, satisfying the imposed condition. §.§ Experimental details The polarization-entangled state used in our certification protocol is generated through the entanglement fusion of two Bell pairs. Each state is produced via type-II spontaneous parametric down conversion (SPDC) occurring within a periodically-poled KTP (ppKTP) crystal. To create the two Bell states, we split the pump into two parallel beams, using a spatial multiplexer, to pump the same ppKTP crystal in two different locations (top and bottom). Polarization entanglement is achieved by pumping the crystal from two opposing directions and then interfering the two paths using a Polarizing Beam Splitter (PBS), realized with a Sagnac interferometer <cit.>. As a result, we obtain the Bell state |Φ⟩ = (|HV⟩ + e^iθ|VH⟩)/√(2), where θ is determined by the path difference between the two directions of propagation. We extract one photon from each pair and guide them to interfere on a Fiber Polarizing Beam Splitter (FPBS). Using a motorized delay stage, we finely adjust the temporal overlap of the interfering photons. We post-select the events resulting in each photon occupying a different spatial port of the FPBS. In other words, conditioned on fourfold coincidences, we entangle the two Bell pairs, thereby generating a GHZ state of the form |GHZ⟩ = (|HHHH⟩ + e^iδ|VVVV⟩)/√(2), where δ is determined by the θ of each Bell pair. With the aforementioned setup, we can generate a GHZ state up to local unitaries resulting from the propagation of the state in single-mode fibers. To specifically produce the state |GHZ⟩, with δ=0, we begin by performing Quantum State Tomography (QST) to precisely identify the state being generated. Subsequently, an optimization method is employed to determine the necessary local unitaries required to transform the state to the desired form. Using three sets of Quarter-Wave Plates (QWPs), Half-Wave Plates (HWPs), and Quarter-Wave Plates (QWPs), one for each of three out of the four photons, we apply those unitaries to achieve the target state. More details can be found in <cit.>. Please note that the unitary compensation part of the setup is not shown explicitly in the quantum certification experimental setup illustrated in Fig. <ref>. For a pump power of 240 mW, yielding a fourfold coincidence rate of 6.7 Hz, we obtain a fidelity of ℱ=|⟨GHZ|ρ_exp|GHZ⟩|^2 =(94.15±0.21)% (see Fig. <ref>). To evaluate the fidelity, we use Quantum State Tomography, based on linear regression <cit.> and fast maximum likelihood estimation <cit.>. To assess the uncertainty associated with the reconstructed state, we employ the Monte Carlo method by sampling 500 times from Poissonian photon counting statistics and Gaussian QHP-HWP rotation angle distributions (which incorporates the systematic measurement basis error).
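The sample-efficiency side of the protocol lends itself to a short numerical check. The sketch below implements the Kullback–Leibler confidence bound, reading the inline formula of the main text as δ ≤ (1/N + (N−1)e^{−D(p₁∥p₂)}/N)^N, fixes ε₁ from the observed pass rate, and scans ε₂ > ε₁ for the largest certifiable extractability 1 − ε₂/c. The values of N, the pass rate, and δ are those quoted in the experimental results; the constant c = 1/(2c̃β_algebraic) depends on the Mermin self-testing coefficients reported in a table not reproduced here, so the value used below is only a placeholder.

```python
import numpy as np

def kl(a, b):
    """Kullback-Leibler divergence D(a||b) between Bernoulli parameters."""
    a, b = np.clip(a, 1e-12, 1 - 1e-12), np.clip(b, 1e-12, 1 - 1e-12)
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

def delta_bound(N, eps1, eps2, p_qm=1.0):
    """Confidence-parameter bound for N copies with thresholds p_i = p_QM - eps_i."""
    D = kl(p_qm - eps1, p_qm - eps2)
    return (1.0 / N + (N - 1.0) / N * np.exp(-D)) ** N

def certified_extractability(N, pass_rate, delta, c, p_qm=1.0):
    """Largest 1 - eta certifiable at confidence 1 - delta given the observed pass rate."""
    eps1 = max(p_qm - pass_rate, 0.0)                    # threshold exactly met by the data
    for eps2 in np.linspace(eps1 + 1e-4, 1.0, 20000):    # smallest eps2 > eps1 satisfying the bound
        if delta_bound(N, eps1, eps2, p_qm) <= delta:
            return 1.0 - eps2 / c
    return None                                          # requested confidence not reachable

# N - 1 = 4643 verified copies, pass rate 0.973, confidence 0.99; c = 0.5 is a placeholder value.
print(certified_extractability(N=4644, pass_rate=0.973, delta=0.01, c=0.5))
```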
http://arxiv.org/abs/2407.13608v1
20240718154742
dzNLP at NADI 2024 Shared Task: Multi-Classifier Ensemble with Weighted Voting and TF-IDF Features
[ "Mohamed Lichouri", "Khaled Lounnas", "Boualem Nadjib Zahaf", "Mehdi Ayoub Rabiai" ]
cs.CL
[ "cs.CL" ]
dzNLP at NADI 2024 Shared Task: Multi-Classifier Ensemble with Weighted Voting and TF-IDF Features Mohamed Lichouri, Khaled Lounnas, Boualem Nadjib Zahaf, Mehdi Ayoub Rabiai ===================================================================================================================================================================

§ ABSTRACT This paper presents the contribution of our dzNLP team to the NADI 2024 shared task, specifically in Subtask 1 - Multi-label Country-level Dialect Identification (MLDID) (Closed Track). We explored various configurations to address the challenge: in Experiment 1, we utilized a union of n-gram analyzers (word, character, character with word boundaries) with different n-gram values; in Experiment 2, we combined a weighted union of Term Frequency-Inverse Document Frequency (TF-IDF) features with various weights; and in Experiment 3, we implemented a weighted major voting scheme using three classifiers: Linear Support Vector Classifier (LSVC), Random Forest (RF), and K-Nearest Neighbors (KNN). Our approach, despite its simplicity and reliance on traditional machine learning techniques, demonstrated competitive performance in terms of F1-score and precision. Notably, we achieved the highest precision score of 63.22% among the participating teams. However, our overall F1 score was approximately 21%, significantly impacted by a low recall rate of 12.87%. This indicates that while our models were highly precise, they struggled to recall a broad range of dialect labels, highlighting a critical area for improvement in handling diverse dialectal variations.

§ INTRODUCTION Arabic, with its rich linguistic diversity encompassing numerous dialects alongside Modern Standard Arabic (MSA), presents a significant challenge for natural language processing (NLP) tasks. Despite its importance, many Arabic dialects remain understudied due to resource constraints such as limited research funding and datasets. Addressing this gap, the Nuanced Arabic Dialect Identification (NADI) shared task series <cit.> serves as a pivotal initiative to overcome these hurdles. By providing extensive datasets and modeling opportunities, NADI aims to facilitate dialect identification and processing tasks, thereby advancing the understanding and utilization of Arabic dialects. In the context of feature extraction for Arabic text analysis, standard approaches frequently employ Term Frequency-Inverse Document Frequency (TF-IDF). Our previous work <cit.> in the MADAR'2019 shared task <cit.> demonstrated the effectiveness of using TF-IDF for efficient feature extraction in Arabic text classification. Drawing inspiration from this prior research, we adopted a similar strategy in our current study.
Specifically, in our first experiment, we employed a union of TF-IDF features with various n-gram analyzers (word, character, and character with word boundaries) to enhance the representation of Arabic text. Moreover, the utilization of weighted feature fusion has emerged as a promising technique for enhancing classification accuracy. <cit.> demonstrated the efficacy of weighted feature fusion in multilingual text classification, as showcased in our earlier study in NADI'2023 for Arabic dialect identification <cit.>, emphasizing its adaptability across various linguistic datasets. This methodology closely corresponds to our Experiment 2, which focuses on the weighted concatenation of TF-IDF features. These earlier studies, along with recent advancements in TF-IDF-based feature extraction and weighted feature fusion, have collectively inspired innovative ideas. In our third experiment, we implemented a weighted hard major voting approach. This paper, instead of introducing innovative solutions or groundbreaking insights for NADI 2024 <cit.>, serves as a concise consolidation of existing knowledge. The rest of the paper is structured as follows: Section <ref> provides an overview of the dataset used in our study. In Section <ref>, we present our proposed system, followed by discussing the findings and their significance in Section <ref>. Finally, our paper concludes in Section <ref>, summarizing the key takeaways and contributions. § DESCRIPTION OF THE DATASET The Nuanced Arabic Dialect Identification (NADI) 2024 shared task provided a valuable dataset for Subtask 1: Multi-label Country-Level Dialect Identification (MLDID). The dataset encompassed dialects from various Arabic-speaking countries, including Egyptian, Saudi Arabian, Algerian, Syrian, Palestinian, and Lebanese. The dataset provided for MLDID consisted of two parts: a development set, comprising 100 samples, served as a training ground for system development and evaluation, and a test set, containing 1,000 samples, was used for the final evaluation of participant models. Additionally, participants were granted access to NADI datasets from 2020, 2021, and 2023 for training purposes. These datasets provided a larger pool of Arabic text for system development and refinement. § PROPOSED SYSTEM Our proposed system is based on two concatenation or union process of features vs classifiers where we opted for three experiments: * Experiment 1 <cit.>: In this experiment, we employ the TF-IDFVectorizer, which employs three analyzers (Word, Char, and Char_wb), each with varying n-gram ranges. In the default configuration, we combine these three features, assigning equal weights of 1 to each. During feature extraction, we varied the n-gram values (ranging from n=1 to 5). Finally, the SVC classifier was trained. * Experiment 2 <cit.>: In this instance, we combine the three TF-IDF features using a weight vector comprising three distinct values (w1, w2, w3) corresponding to the Word, Char, and Char_wb analyzer, respectively. The LSVC classifier was then trained. * Experiment 3: In this instance, we applied a weighted hard major voting for three classifier (LSVC, RF and KNN). The various parameters used in these three experiments are reported in Table <ref> where we specify the settings and corresponding values for each parameter. For N-GRAM, we explored n-gram ranges from unigrams to 5-grams. In TFIDF, we varied the transformer weights from 0.1 to 1 and the maximum number of features from 300 to 1000. 
Additionally, for the LSVC classifier, we adjusted the regularization parameter C from 1 to 5 and set the class weight to 'balanced'. The RF classifier was used with default settings, while for KNN we set the number of neighbors to 3. Lastly, in the majority-voting ensemble technique, we experimented with weights ranging from 0.1 to 0.6 (a minimal code sketch of this pipeline is given after the conclusion). § OBTAINED RESULTS During our participation in the MLDID (Closed Track) task, we iteratively explored various approaches to enhance our multi-label dialect identification models (see Table <ref>). This exploration encompassed feature engineering with TF-IDF, incorporating ensemble methods, and experimenting with different voting combination strategies. As a baseline, we used a basic TF-IDF representation with 1-gram features and a linear-kernel SVC classifier. Despite its simplicity, it achieved a low F1-score of 19.43%. In the first experiment, we expanded the feature space by incorporating a combination of word- and character-level n-grams (word, char, and char_wb) in the TF-IDF representation. We employed the Linear Support Vector Classifier (LSVC) with balanced class weights, resulting in a notable improvement, with an F1-score of 20.64%. In the second experiment, we tried a different combination of n-grams while maintaining the balanced class weights. We also explored variations in the weights assigned to the different feature types within the TF-IDF representation. The obtained F1-scores ranged from 20.51% to 22.51% (see Table <ref>). In the third experiment, we introduced ensemble methods, namely hard voting and weighted hard voting, combining SVC, RF, and KNN classifiers. Interestingly, despite the added complexity of these ensembles, their F1-scores (16.33% and 21.44%, respectively) were lower than those of well-tuned single LSVC configurations. This suggests that the ensemble methods may not have effectively leveraged the complementary strengths of the individual classifiers, or that their voting strategies were not optimal for this task. § CONCLUSION Our analysis of the NADI 2024 shared task highlights the critical role of feature engineering and model optimization in Arabic dialect identification. Key findings include: (i) integrating character-level features alongside word-level features consistently improved performance; (ii) balanced class weights within the LSVC classifier significantly enhanced the F1-score, reaching 20.64%; (iii) strategic assignment of transformer weights yielded the highest F1-score of 22.51%; and (iv) ensemble methods such as hard voting achieved moderate F1-scores but were surpassed by finely tuned single LSVC configurations. In conclusion, our study emphasizes the importance of incorporating character-level information, utilizing balanced class weights, and exploring advanced feature weighting techniques to advance Arabic dialect identification systems. These insights provide valuable guidance for future research in this domain.
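For reproducibility, the listing below sketches the feature-union pipeline described in the Proposed System section, assuming a scikit-learn implementation; the hyper-parameter values shown are illustrative placeholders (Table <ref> lists the ranges we actually explored), and the handling of the multi-label targets is deliberately omitted.

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier

# Union of three TF-IDF views of the text; transformer_weights plays the role of (w1, w2, w3).
features = FeatureUnion(
    transformer_list=[
        ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 3), max_features=1000)),
        ("char", TfidfVectorizer(analyzer="char", ngram_range=(2, 5), max_features=1000)),
        ("char_wb", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), max_features=1000)),
    ],
    transformer_weights={"word": 0.5, "char": 1.0, "char_wb": 0.8},
)

# Experiment 2: weighted TF-IDF union + LSVC with balanced class weights.
exp2 = Pipeline([("tfidf", features),
                 ("clf", LinearSVC(C=1.0, class_weight="balanced"))])

# Experiment 3: weighted hard majority voting over LSVC, RF and KNN.
exp3 = Pipeline([("tfidf", features),
                 ("vote", VotingClassifier(
                     estimators=[("lsvc", LinearSVC(C=1.0, class_weight="balanced")),
                                 ("rf", RandomForestClassifier()),
                                 ("knn", KNeighborsClassifier(n_neighbors=3))],
                     voting="hard", weights=[0.6, 0.2, 0.2]))])

# exp2.fit(train_texts, train_labels); predictions = exp2.predict(test_texts)

The transformer_weights dictionary corresponds to the weight vector (w1, w2, w3) of Experiment 2, while the weights passed to the voting ensemble correspond to the weighted hard majority voting of Experiment 3.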
http://arxiv.org/abs/2407.13499v1
20240718133200
Three-State Information Hiding: Provably Secure Asymmetric Steganography
[ "Minhao Bai", "Jinshuai Yang", "Kaiyi Pang", "Xu Xin", "Yongfeng Huang" ]
cs.CR
[ "cs.CR" ]
FADE: A Task-Agnostic Upsampling Operator for Encoder-Decoder Architectures. Corresponding author: Zhiguo Cao. Hao Lu^1 Wenze Liu^1 Hongtao Fu^1 Zhiguo Cao^1 Received: date / Accepted: date ======================================================================================================================== § ABSTRACT The rise of language models has provided fertile ground for the application of steganography. Owing to the quality of their output, steganographic texts have become close to human writing and have attracted most steganography researchers' attention. However, running a language model requires a powerful computation platform. This limits the applicable scenarios of steganography, since the electronic devices controlled by the decoder may not even be equipped with a GPU. Traditional provably secure steganography methods cannot be applied to this low-resource scenario. Therefore, we aim to design a novel steganography framework that is practical in a low-resource setting. We start from a rigorous probability analysis, with the help of hypothesis-testing techniques, to construct a theoretical framework. We then prove the security and robustness of our framework and point out its optimization goal. We test our theoretical framework on several popular LLMs, and the results demonstrate its usability. Some practical problems remain, and they indicate the direction of future work. We hope that this work will expand the practical scope of steganography and create a new branch of the field. § INTRODUCTION With the flood of Large Language Models (LLMs) across the world, the application of steganography to LLMs has attracted most researchers' attention. Steganography methods use LLMs to generate innocent-looking texts while protecting people's private data by hiding it in the generated texts. Since LLMs predict a high-quality probability distribution over tokens when generating text, most steganography methods aim to design a mimetic sampler that outputs a token from that distribution under the control of the secret bits, while keeping the stegos (the texts they generate) hardly distinguishable from real texts. When stegos cannot be distinguished from real texts at all, a steganography method can be regarded as a secure linguistic steganography method. Provably secure linguistic steganography can nowadays be realized, if crudely, with the help of a shared pseudo-random generator (PRG). As the pioneer of provably secure linguistic steganography in the current era, METEOR <cit.> was proposed to solve the low-entropy problem in texts. However, METEOR sacrificed hiding capacity to compensate for security, which is not necessary. ShiMer <cit.> addressed this problem and improved the hiding capacity to near the theoretical bound – the entropy of the text. DISCOP <cit.> took a different approach based on Huffman Coding (HC); it successfully decomposed the multivariate distribution into multiple bivariate distributions, but at the cost of hiding capacity. These are all the provably secure linguistic steganography methods that can be practically implemented. All of them are symmetric methods: they require the sender and receiver to share the same probability distribution, which means the receiver must separately run a language model to compute that distribution. However, most computing devices are not capable of running a qualified language model.
The demand for computational resources has restricted the applicable scenarios of steganography. To make secure linguistic steganography feasible in a wider range of scenarios, we aim to develop a novel framework for practical secure linguistic steganography. We focus on a lower-resource scheme: the sender has access to LLMs to generate qualified texts, while the receiver can only read the texts and do some floating-point computation. However, designing a practical steganography method under such a scheme is tremendously difficult. In our scheme, the information the decoder can obtain is extremely restricted. For decades, no steganography method has considered an asymmetric scheme. Most researchers have become accustomed to the excellent properties of the symmetric setting, and there is no previous work to build on. In this paper, we successfully build a steganography framework for the asymmetric scheme based on the concept of hypothesis testing. We design a special probabilistic function, which has a different probabilistic output when embedding different bits. We then accumulate the output of this function across several loops, in order to obtain high confidence in one of the hypotheses. During the decoding process, the decoder only needs to compute the output of the probabilistic function from the symbols received. The contributions of this paper can be summarized in three points: * We propose a framework of provably secure steganography for the asymmetric scheme. * We formulate an optimization problem to find the best function. * We prove the robustness of our framework against truncation. § METHODS §.§ Preliminaries §.§.§ Steganography A traditional steganographic system consists of three probabilistic functions: KeyGen, Encode, and Decode. * KeyGen(1^λ) generates a key k that is shared between the encoder and decoder. * Encode(k,M,h,B) employs the model M and the history h to predict the distribution of the next symbol. The key k and secret bits B are used to determine the transmitted symbol a. * Decode(k,M,h,a) employs the model M and the history h to predict the distribution of the received symbol. The key k is utilized to extract the secret information from symbol a. In most situations, we discuss the system in a channel C with an alphabet A. The channel could represent images, videos, texts, etc., whereas the alphabet is the set of symbols that could potentially appear. In this paper, we focus on text-based channels, where the channel distribution is predicted by an LLM, and the tokens (vocabulary) of the LLM can be conceptualized as the alphabet. In our framework, the decoder cannot access the model or the history; therefore, decoding should be written as Decode(k,a). §.§.§ Security There are two definitions of security in steganography. The first, information-theoretic definition <cit.>: security is characterized by the Kullback-Leibler divergence (KL-divergence) between the stegotext and covertext approaching zero: D_KL(P_S||P_C) = ∑_x P_S(x) log [P_S(x)/P_C(x)] < ϵ, where D_KL(P_S||P_C) = 0 denotes perfect security. The second, complexity-theoretic definition: security is based on the chosen-hiddentext attack. A steganography system is considered secure against such an attack if, for all probabilistic polynomial-time adversaries 𝒜 with k ←KeyGen(1^λ), the advantage of 𝒜 in distinguishing between stegotext and covertext is negligible: |Pr[𝒜^𝒪_Encode(k,M,h,B) = 1] - Pr[𝒜^𝒪_R(M,h) = 1]| < negl(λ), where 𝒪_R denotes random sampling, 𝒪_Encode represents the encoding algorithm with arbitrary bits B, and negl(λ) is a negligible function.
A function negl(λ) is considered negligible if, for any constant c > 0, there exists a large integer N such that for all λ > N, negl(λ) < 1/λ^c. In other words, a negligible function decays faster than the inverse of any polynomial. §.§ Intuition We observe that a distribution has multiple permutations. We consider just two trivial permutations: the normal permutation and its reverse, as shown in Fig. <ref>. We have two symbols a and b, with probabilities p(a) and p(b), respectively. In the normal permutation, symbol a occupies the interval [1 - p(a),1] and symbol b occupies the interval [0,p(b)], and we use “a,b” to denote this permutation. In the reverse permutation, symbol a occupies the interval [0,p(a)] and symbol b occupies the interval [1 - p(b),1], and we use “b,a” to denote this permutation. During the sampling procedure, a random number is drawn from the PRG. If the random number r falls into the interval that symbol a occupies, we obtain symbol a from the sampling. Note that the length of the interval that symbol a occupies is exactly the probability of symbol a. Therefore, this sampling procedure does not change the original distribution, since the probability of outputting a is p(a) (in the reverse permutation, for instance, ℙ{r < p(a)} = p(a)) and the probability of outputting b is p(b) (ℙ{r ≥ p(a)} = p(b)); recall that r obeys a uniform distribution on [0,1]. Note also that sampling from any permutation of the distribution is equivalent, since the probability of each symbol is the same. As for the two trivial permutations, we can formally claim that there exists a function c(x) that converts the normal order to the reverse order. This function c(x) maps a number x ∈ [0,1] to 1-x ∈ [0,1]. It also maps an interval [x,y] ⊆ [0,1] to [1-y,1-x] ⊆ [0,1]. If we use the permutation shown on the left of Fig. <ref>, when symbol b is sampled we know that the random number r ∈ [0,0.4]. If we use the permutation shown on the right of Fig. <ref>, when symbol b is sampled we know that the random number r ∈ [0.6,1]. Even if we cannot access the distribution, we can still guess which permutation is used. Since the PRG is shared, if symbol b is sampled and the random number r is large, we will assume that symbol b occupies the top; if symbol b is sampled and the random number r is small, we will assume that symbol b occupies the bottom. Therefore, we have a noticeable probability of correctly guessing the permutation of the distribution. We can let either permutation represent a bit 0 or 1; then we can perform steganography with these permutations, and the decoder does not need to know the distribution. §.§ Asymmetric Steganography §.§.§ Theoretical Framework We begin with the simplest bi-variate distribution, as shown in Fig. <ref>. We denote the probabilities of symbols a and b as p(a) and p(b). As mentioned above, the random number and the sampled symbol leak some information about the permutation. We can use a function s(a/b,r), which depends on the random number r and the sampled symbol a or b, to compute the difference between the two permutations. We define the function s(a/b,r) as follows: s = f(r) if a is sampled, and s = f(1 - r) if b is sampled. The exact form of f is not fixed yet; we will determine it in the following derivations. Now we need to compute the difference between the two permutations. If we use the permutation on the left of Fig. <ref>, we can compute the expectation of s as 𝔼_a,b[s] = ∫_0^p(b) f(1-r) dr + ∫_p(b)^1 f(r) dr.
If we use the permutation in the right of Fig. <ref>, we can compute the expectation of s(a/b, r) as 𝔼_b,a[s] = ∫_0^p(a) f(r) dr + ∫_p(a)^1 f(1-r) dr. We observed that there exists a possible gap between these expectations since 𝔼_a,b[s] - 𝔼_b,a[s] = 2 ∫_0^p(a) f(r) - f(1-r) dr. From the aspect of hypothesis testing, the following proposition is obvious. If |𝔼_a,b[s] - 𝔼_b,a[s]| = 2 |∫_0^p(a) f(r) - f(1-r) dr| > 0, there exists a method to successfully distinguish whether the current used permutation is a,b or b,a with a noticeable probability. Therefore, we can choose a reasonable function f to maximize the gap 𝔼_a,b[s] - 𝔼_b,a[s]. Since the above discussion is focused on the expectation, which is hard to use if the variance is really large. However, since 𝔼_a,b[s] and 𝔼_b,a[s] is related to the exact probability p(a), directly computing the variance of s(a/b,r) may not be useful. In order to wipe out the influence of p(a), we turn to consider the case that the random number r is not correlated with the sampling procedure. In this case, the expectation of s(a/b,r) is 𝔼[s] = 1/2(∫_0^1 f(r) + f(1-r) dr) and 𝔼[s] = 1/2(𝔼_a,b[s] + 𝔼_b,a[s]). The variance of s(a/b,r) when r is not correlated with sampling is 𝔻[s] = ∫_0^1 f^2(r) dr - (∫_0^1 f(r) dr )^2. Therefore, all of our decision should be based on the the case that the random number r is not correlated with the sampling procedure. If we observe that the 𝔼_a,b[s] or 𝔼_b,a[s] has a significant gap from the 𝔼[s], we can conclude that there must be some embedded bits. Since the gap 𝔼_a,b[s] - 𝔼_b,a[s] is not 0, there must be one term larger than 𝔼[s] and the other one smaller. According to the concepts of hypothesis testing, if there exists a gap between the expectations of 2 random variables, there must exist a method to distinguish them. The next problem is to find a reasonable threshold to make decision. In order to figure out the decision boundary, we need the help of central inequalities. To simplify the discussion, we assume that f is a bounded function. Therefore, all of the integrals are computable and we can use those conclusions on bounded random variables, such as Hoeffding Inequality. We consider the independent and identical random variables X_1, X_2, ··· , X_n, and each X_i is bounded by [a,b]. X_1, X_2, ··· , X_n are independent and identical random variables, and each X_i is bounded by [a,b]. Let X = 1/n∑_i=1^n X_i, μ = 𝔼(X_i), and σ^2 = 𝔼[(X_i-μ)^2], the probability that the sample mean X deviates from the theoretical mean 𝔼(X) up to t is ℙ(|X - μ| ≥ t) ≤ 2exp(-nt^2/2(b-a)^2). Therefore, we can press the probability bound exp(-nt^2/2(b-a)^2) to a small value like 10^-5. In the left of this paper, we call this bound as “error probability”, denoted as P_e. During the decoding procedure, we need to accumulate s(a/b,r) and compute its expectation 𝔼_·,·[s]. If we observed that the probability bound exp(-nt^2/2(b-a)^2) is less than P_e, we can decide that there exist some embedded bits. Then we decide the permutation is a,b or b,a according to the 𝔼_·,·[s] ≤𝔼[s] or 𝔼_·,·[s] ≥𝔼[s]. We conclude the situations into 3 hypothesis: ℋ_0: There exists an embedded bit 0. ℋ_1: There exists an embedded bit 1. ℋ_∅: There is no embedded bit. And we set the following conditions of judgement: If 𝔼_·,·[s] < 𝔼[s] and P_e < 10^-5 , accept ℋ_0. If 𝔼_·,·[s] > 𝔼[s] and P_e < 10^-5 , accept ℋ_1. If P_e ≥ 10^-5 , accept ℋ_∅. We construct the whole steganography framework according to the above 3 hypothesis. 
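To make the decision rule concrete, the sketch below simulates both ends of the channel on a toy two-symbol alphabet: the encoder keeps sampling from the permutation that encodes its bit, and the decoder accumulates s and applies the Hoeffding bound above with P_e = 10^-5. For illustration we already plug in the bounded choice f(x) = cos(π x) discussed in the next subsection; the bit-to-permutation mapping, the toy group probabilities, and all function names are our own conventions rather than part of the formal construction.

import numpy as np

rng_shared = np.random.default_rng(2024)   # stands in for the PRG seeded by the shared key k
rng_model = np.random.default_rng(7)       # toy stand-in for the LLM's next-symbol probabilities

f = lambda x: np.cos(np.pi * x)            # illustrative bounded f on [0, 1]
E_s, width, P_e = 0.0, 2.0, 1e-5           # E[s] = integral of f over [0,1] = 0 ;  b - a = 2 for cos

def sample_symbol(p_a, r, perm):
    # perm "a,b": a occupies [1 - p(a), 1];  perm "b,a": a occupies [0, p(a)]
    if perm == "a,b":
        return "a" if r >= 1.0 - p_a else "b"
    return "a" if r < p_a else "b"

def s_value(symbol, r):
    return f(r) if symbol == "a" else f(1.0 - r)

def decide(s_sum, n):
    # Accept H_0 / H_1 (return 0 / 1) only when the Hoeffding tail drops below P_e; else H_empty.
    if n == 0:
        return None
    t = abs(s_sum / n - E_s)
    if 2.0 * np.exp(-n * t**2 / (2.0 * width**2)) < P_e:
        return 0 if s_sum / n < E_s else 1
    return None

perm_for_bit = {0: "a,b", 1: "b,a"}        # our convention for which permutation encodes which bit
bit, s_sum, n = 1, 0.0, 0
while decide(s_sum, n) is None:            # encoder keeps using the same permutation ...
    r = rng_shared.random()                # ... until the decoder-side statistic is conclusive
    p_a = rng_model.uniform(0.2, 0.8)
    symbol = sample_symbol(p_a, r, perm_for_bit[bit])
    s_sum += s_value(symbol, r)            # the decoder can compute this from (symbol, r) alone
    n += 1
print("decoded bit:", decide(s_sum, n), "after", n, "symbols")

In this toy run the statistic typically becomes conclusive after a few hundred to a couple of thousand symbols; how many symbols are spent per bit is exactly what the choice of f, discussed next, tries to minimise.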
Now there still exists one problem: what is the function f? §.§.§ Optimization of Function f As we mentioned above, the function f is the main body of s(a/b,r). In an informal aspect, if we need to distinguish 2 random variables, we may expect that the gap between their expectations is huge and their own variances is small. Many of the central inequalities support this intuition. As we observe the form of Chernoff Bound, Hoeffding Inequality, and Bernstein Inequality, the probability bound P_e always decreases when the gap becomes larger and the variance becomes smaller. If the P_e decreases, we can use less n steps to decide to accept which hypothesis, which means that the embedding capacity increases. Therefore, we claim the final optimization goal is min_f ∫_0^1 f^2(r) dr/𝔼_p(a)[(∫_0^p(a) f(r) - f(1-r))^2], where f is bounded in [0,1]. Although the form of this optimization goal is elegant, to work out this problem is not trivial. We tend to use numerical solution to figure out the function f. We expand the function f into power series and compute the sum of first N terms f(x) = ∑_i = 1^N f_i x^i + o(x^N). Since x∈ [0,1], the remainder o(x^N) is likely to be negligible if N is large. The original optimization target f is turned to a series of coefficients f_i. Now its suitable to use numerical solution to optimize these coefficients. We have tried some traditional methods like gradient descent and expectation-maximization algorithm. The final output is similar to the trigonometric function cos(π x). Since iteratively computing 10,000 terms with complex power operation is not efficient, we choose to use the simple function cos(π x) with some cost of embedding capacity. We mentioned that the probability p(a) has a significant influence on the optimization problem. Since the decoder could not get the exact distribution of p(a), we can only trivially assume that p(a) obeys the uniform distribution on [0,1]. The closed form solution of this optimization problem is still a question that needs further exploration. §.§.§ Reduce the n-variate to 2-variate Since the above discussion is based on the bi-variate distributions, we have to find a way to decompose the multi-variate distributions of LLM into multiple bi-variate distributions. Different from DISCOP <cit.>, the decoder cannot access to the distribution. Theoretically speaking, we need to construct a grouping method. Using this grouping method we can recursively separate the tokens into 2 groups, and after several loops we can get a single token. We can convert the ID of tokens to the binary series. For example, in Llama2's tokenizer, the ID of “Hello” is 10,543, and it can be converted to 010100100101111. The length is set to be 15 bits since the vocabulary size of Llama2's tokenizer is 32,000, which is subtly smaller than 2^15. Therefore, we can separate the tokens whose bit form begins with 0 into one group and those with 1 into another group. In the group 0 we can further separate it into group 00 and group 01. We will continue to separate 15 loops to get a single token. That is to say, we convert a 32,000-variate distribution to 15× 2-variate distributions. If the decoder cannot even get access to the LLM's tokenizer, we suggest to convert the generated texts into ASCII or UTF-8. They can also be completely separated by the above grouping method. §.§.§ Encoding & Decoding Since there exists 2 permutations of a bi-variate distribution, we just named one of them Perm0 and another Perm1. 
If we are going to embed bit 0, we choose Perm0 as the distribution we use. During the encoding process, the encoder sticks to the chosen permutation and accumulates s(a/b,r_i), where r_i is the random number used in each loop and a/b denotes the two groups of tokens. Since we divide the tokens by the prefix of their bit forms, we write the accumulated function as s(g(0)/g(1),r_i), where g(0)/g(1) represents the tokens that start with 0/1. As the accumulated s(g(0)/g(1),r_i) is tied to the error probability P_e, once the accumulated value reaches the bound, we can decide to accept hypothesis ℋ_0 or ℋ_1 based on the sign of s(g(0)/g(1),r_i). Details are shown in Alg. <ref>. We use P(0) to denote the sum of probabilities of tokens whose bit form starts with 0. Dec(·) represents a function that converts a binary number to a decimal number. The decoding process is contained in the encoding process: the decoder only needs to accumulate s(g(0)/g(1),r_i) and determine which hypothesis to accept at each step. This process only needs the token list and the pseudo-random generator. Even if the LLM's tokenizer is unavailable, we can still convert the string data into binary form. Therefore, the decoding is completely model-free. §.§.§ Security The security of our framework is based on the pseudo-random generator. As discussed above, if the random number is not correlated with the sampling process, there is a probability of less than P_e of decoding a single bit. In fact, any normal text also has a probability of less than P_e of decoding a single bit, which means that an adversary cannot distinguish the texts by trying to decode something from them. From the perspective of cryptography, we consider the chosen-hiddentext attack. The adversary is given a series of texts with the same embedded secret messages, and a series of innocent texts, and it must distinguish which ones carry embedded bits. However, if the random number that the adversary uses is not related to the sampling, it will never find any clue, because all of the texts have a probability of less than P_e of decoding a single bit. As a principle of steganography, all steganography systems use non-repeating random numbers, so even if the embedded messages are the same, the generated texts will be totally different. Therefore, the adversary falls into a computational black hole – guessing the private key that the steganography system uses, which is not realistic if the key length is larger than 256 bits. From the perspective of information theory, our framework does not change the distribution itself. Our sampling method maintains each token's probability and has no bias. Therefore, the KL divergence D_KL(P_S||P_C) = 0, and the steganographic texts are indistinguishable from the model-generated innocent texts. §.§.§ Robustness against Truncation Current provably secure steganography methods <cit.> are sensitive to the prompt and history. Since the model needs the history to produce accurate predictions, the encoder and decoder should share the history before communication. However, in some complex channels such as social networks, the history and prompt can easily be modified or deleted. Any perturbation of the history or prompt makes the texts undecodable, since the distribution is changed. Previous prefix-sensitive methods are not practical in this condition, while our method remains robust. Our decoder does not need the distribution and has nothing to do with the model.
It can decode the embedded messages based on the output texts themselves. Even if some prefix of the output texts is deleted, we can still try several keys and run our decoding algorithm with different start positions in the remaining text. Since the error probability P_e can be very low, if the random number is wrong we are unlikely to decode any bit. Therefore, most of the remaining secret message has a large probability of being correctly decoded. §.§.§ Useful Error Mechanism If we are unlucky, we may find that a part of the text is not correctly encoded. For example, if the current embedding bit is 1 but the accumulated value is less than the expectation for innocent texts and, more unfortunately, satisfies the error-probability requirement P_e, this necessarily produces an error at the decoder. Although we cannot accurately compute the probability of this type of error, because it is correlated with the exact distribution, we can still conclude that its probability is less than P_e. For the decoder, when this type of error occurs, it will decode a wrong bit. After that, its random numbers will be out of sync with the encoding process. It is then hard for the decoder to decode the next bit, since the error probability P_e is very small. So the generated text has a probability of at most P_e^2 of yielding two erroneous bits. Therefore, we can simply discard the last extracted bit to significantly reduce the bit error rate of the steganography system, with a slight sacrifice in embedding capacity. In all of our experiments we embedded at least 1,000,000,000 bits but could not find any 2-bit error sample. § EXPERIMENT & RESULT §.§ Settings As a covert communication system, we measure the embedding capacity and the encoding/decoding rate. Since our framework generates innocent-looking texts, we also measure the perplexity (PPL) and entropy of the texts. The perplexity reflects the fluency of the generated texts, and the entropy estimates their diversity. We utilize Llama2-7B <cit.> and Mistral-7B <cit.> as our steganography generators. We test our framework under 3 different temperature parameters (1.0 ∼ 1.2) in order to study the relationship between entropy and embedding capacity. The prompt we used is Write 100 totally different tweet-like long comments. This prompt is loaded into the function and then becomes the input of the LLMs. Each data point represents the mean value of more than 10,000 samples. All of the experiments run on a server with 4 × NVIDIA A5000 GPUs (32GB RAM) and 24 × Intel Xeon w5-3423 CPUs. §.§ Main Results On the capacity & entropy. Results are shown in Tab. <ref>. We observed a significant gain in embedding capacity when increasing the temperature parameter. The increased temperature brings higher entropy and makes the distributions flatter. In the theoretical analysis, we assumed that the group probability p(a) is uniformly distributed. However, these results show that this assumption is not suitable for current LLMs. In most cases, the predicted distribution concentrates on 1 ∼ 2 tokens, which leads to low entropy. If the predicted distribution is concentrated, the group probability may be near 1 or 0 at most steps and the two permutations become similar. Due to the probabilistic sampling, there is then effectively no freedom in selecting one group. Therefore, the embedding loop becomes useless at most steps. However, the problem of low entropy may come from the inefficiency of the tokenizer.
If the entropy is extremely low, there must be strong dependencies between the last token and the current token, which suggests they should have been merged into a single token. Meanwhile, we find that the entropy of human Twitter texts is 2.9514 bits (measured by Llama2), much higher than that of the LLMs' output. It seems that the LLMs usually output rather uninformative sentences, which is quite different from human behaviour. On the encoding & decoding rate. The encoding rate is partially limited by the LLMs' generation speed and by the embedding capacity: the encoder always has to loop through the encoding algorithm multiple times until it confirms that one bit has been embedded. During encoding and decoding, the speed of processing tokens is stable, so the encoding and decoding rates are directly correlated with the embedding capacity. Since the decoding procedure avoids the LLMs' heavy computation, the decoding rate is much higher than the encoding rate. On the linguistic quality of stegotexts. From Tab. <ref> we find that the linguistic quality of the stegotexts is similar to that of the innocent covertexts directly generated by the LLMs. This observation supports our proof of information-theoretic security, which states that the KL divergence between the stegotexts and the covertexts is 0. § CONCLUSION In this paper, we design a linguistic steganography framework for resource-limited scenarios. Specifically, we let the encoder access the model to generate qualified stegotexts, while the decoder cannot access the model. We construct the steganography framework based on a solid theoretical probability analysis and the concepts of hypothesis testing. The security and robustness of our framework are proved, and the error mechanism of our framework is also useful for future extensions. We have tested our framework with some popular LLMs and identified its drawbacks in practical scenarios, which point out the direction of our future work. This paper expands the practical scope of steganography, and we hope it will be the foundation of a new branch of steganography. § LIMITATIONS As shown in the results, our framework is difficult to operate in low-entropy conditions. Given the prompt “1+1=”, there is no redundancy to embed bits because the answer is just “2”. However, with plenty of tokens we still have some probability of embedding one bit, which may be more suitable for a watermarking scheme. § ETHICS STATEMENT We propose a steganography framework equipped with LLMs. Due to the convenience of accessing LLMs, this method may have an impact on the security of LLM-generated texts. The provider of an LLM could output steganographic texts to those who have access to the model, which may be useful in military cases. Therefore, the security of LLMs needs to be reviewed, because there may be hidden offensive messages in their generated texts. Moreover, our framework is provably secure against a passive warden, which means that its steganographic texts cannot be distinguished from the model-generated texts.
http://arxiv.org/abs/2407.11968v1
20240716175951
mochi_class: Modelling Optimisation to Compute Horndeski In class
[ "Matteo Cataneo", "Emilio Bellini" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.IM" ]
^*mcataneo@uni-bonn.de ^1Ruhr University Bochum, Faculty of Physics and Astronomy, Astronomical Institute (AIRUB), German Centre for Cosmological Lensing, 44780 Bochum, Germany ^2Argelander-Institut für Astronomie (AIfA), Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany ^3SISSA, International School for Advanced Studies, Via Bonomea 265, 34136 Trieste, Italy ^4IFPU, Institute for Fundamental Physics of the Universe, via Beirut 2, 34151 Trieste, Italy ^5INFN, National Institute for Nuclear Physics, Via Valerio 2, I-34127 Trieste, Italy § ABSTRACT We introduce , an extension of the Einstein-Boltzmann solver , designed to unlock the full phenomenological potential of Horndeski gravity. This extension allows for general input functions of time without the need for hard-coded parametrisations or covariant Lagrangians. By replacing the traditional α-parametrisation with a set of stable basis functions, ensures that the resulting effective theories are inherently free from gradient and ghost instabilities. Additionally, features a quasi-static approximation implemented at the level of modified metric potentials, enhancing prediction accuracy, especially for models transitioning between a super- and sub-Compton regime. can robustly handle a wide range of models without fine-tuning, and introduces a new approximation scheme that activates modifications to the standard cosmology deep in the matter-dominated era. Furthermore, it incorporates viability conditions on the equation of motion for the scalar field fluctuations, aiding in the identification of numerical instabilities. Through comprehensive validation against other Einstein-Boltzmann solvers, demonstrates excellent performance and accuracy, broadening the scope of by facilitating the study of specific modified gravity models and enabling exploration of previously inaccessible regions of the Horndeski landscape. The code is publicly available at https://github.com/mcataneo/mochi_class_public. : Modelling Optimisation to Compute Horndeski in E. Bellini^3,4,5 Received XXX; accepted YYY ================================================= § INTRODUCTION The current cosmological paradigm (ΛCDM) is capable of describing the dynamics of the universe, the Cosmic Microwave Background (CMB) and the Large-Scales Structure (LSS) of the universe with only six parameters <cit.>. On top of standard model particles, it assumes two exotic components whose nature is unknown: (i) a cosmological constant (Λ) and some form of Cold Dark Matter (CDM). The gravitational interactions are regulated by General Relativity (GR). This modelling explains the observed accelerated expansion of the universe <cit.>, and it provides a good fit to the data <cit.>. However, the standard cosmological model is unsatisfactory as it suffers from at least three drawbacks: (i) the value of the observed cosmological constant is too small to be explained by fundamental physics <cit.>, (ii) GR is tested very precisely on local scales <cit.> but it is extrapolated over 15 orders of magnitude in length scale to be applied to cosmological data <cit.>, and (iii) comparing different datasets produces tensions for the measured parameters <cit.>. In particular, the most significant is the Hubble tension, arising when combining “early” and “late” times measurements of the expansion rate of the universe <cit.>. 
To alleviate these problems several possible extensions have been proposed, either promoting Λ to a dynamical field (Dark Energy, DE) <cit.> or modifying the laws of gravity at cosmological scales (MG) <cit.>. A unifying framework is realised by the Horndeski theory, that encompasses many of the scalar-tensor theories proposed in the literature, e.g. Quintessence, f(R) gravity, Brans-Dicke, Galileons. Horndeski gravity provides one of the most general scalar-tensor theories with at most second-order derivatives in the equations of motion on any background <cit.>. It gained significant popularity due to its rich phenomenology while remaining analytically tractable. To study the Horndeski landscape, there are two potential strategies <cit.>. It is possible to: (i) select a specific model based on its covariant Lagrangian formulation and analyse the effects of the additional degree of freedom across different regimes, from cosmological scales to compact objects and black holes; or (ii) adopt an effective-theory approach (EFT), focusing solely on the energy scales relevant to cosmology. While strategy (i) allows for a detailed investigation of particular scalar-tensor theories, it demands significant resources and time investment. In the absence of a compelling alternative to GR, strategy (ii) provides a more general and efficient method to detect or constrain deviations from standard gravity <cit.>. Given the large classes of models that the Horndeski Lagrangian encapsulates, there have been countless attempts to get cosmological constraints on specific parametrisations. Following each one of the two distinct approaches described above, in the literature it is possible to find both constraints based on sub-classes of Horndeski <cit.>, and on different parametrisations of the EFT functions <cit.>. A promising but less explored direction is to get model independent constraints. The idea is to try to “reconstruct” gravity without imposing an a priori time dependence for the EFT functions <cit.>. The public version of <cit.>, an extension of the Einstein-Botzmann solver <cit.>, implements both approaches, allowing users to easily implement new models belonging to Horndeski scalar-tensor theories or the EFT framework with the basis proposed in <cit.>. In a second release of the code, <cit.> included the quasi-static approximation scheme for improved computational efficiency and to extend the code's applicability to models where following the full dynamics of the scalar field can be extremely challenging. Even with these significant advances, further steps are required to reliably extend across a broader range of the Horndeski landscape. First, we need to simplify the implementation of specific models and parametrisations, which currently requires modifications to the source code. Additionally, efficient selection of stable models is crucial to avoid extensive searches for parameter combinations free from ghost and gradient instabilities. The accuracy of the QSA implementation in can be improved. Furthermore, addressing numerical noise from limited precision in the calculation of the scalar field speed of sound is essential to prevent incorrectly labeling healthy models as unstable. To address these issues, in this paper we introduce , a numerical tool to facilitate the study and statistical analysis of Horndeski models expressed in the effective-theory language. 
To improve flexibility, externalises the models definition, allowing to pass to the code precomputed arrays for the time evolution of the background and the EFT functions. In addition, it implements a new stable parametrisation, to satisfy the stability conditions by construction. Furthemore, we implement a new integrated approximation scheme to switch on gravity modifications only after a certain (user specified) redshift. This fixes the early time evolution to ΛCDM, avoiding numerical instabilities when deviations from standard gravity are irrelevant. Finally, implements an additional QSA approximation implemented at the level of modified metric potentials in Newtonian gauge. This offers a cross-check with the standard implementation and improves the agreement with the exact solution. Throughout the paper we assume the speed of light c=1 and use the following values for the cosmological parameters: for the fractional energy-density of the cold dark matter and baryons, we set Ω_ ch^2 = 0.120108 and Ω_ bh^2 = 0.022383; for the Hubble constant, h = 0.6781; for the amplitude and slope of the primordial power spectrum, 10^9 A_ s = 2.10055 and n_ s = 0.96605; for the optical depth at reionisation, τ = 0.054308. The paper is organised as follow. In Section <ref> we introduce the Horndeski models and their EFT description. In Section <ref> we describe the code structure and implementation, and in Section <ref> we validate it against other codes. In Section <ref> we present a new parametrisation, and leave Section <ref> for the summary and outlook. § HORNDESKI'S THEORY AND PARAMETRISATIONS In our analysis we focus on sub-classes of the Horndeski theory, one of the most general scalar-tensor theories with at most second-order equations of motion on any background <cit.>. Its action can be written as S_ H[g_μν,ϕ]=∫d^4x √(-g)[∑_i=2^51/8π G_N L_i[g_μν,ϕ] + ℒ_m] , ℒ_2 = G_2(ϕ, X) , ℒ_3 = -G_3(ϕ, X)ϕ , ℒ_4 = G_4(ϕ, X)R+G_4X(ϕ, X)[(ϕ)^2-ϕ_;μνϕ^;μν] , ℒ_5 = G_5(ϕ, X)G_μνϕ^;μν -1/6G_5X(ϕ, X)[ (ϕ)^3 + 2ϕ_;μ^νϕ_;ν^αϕ_;α^μ. . - 3ϕ_;μνϕ^;μνϕ]. Here, g_μν is the metric tensor, ℒ_m is the matter Lagrangian and G_2, 3, 4, 5 are arbitrary functions of the scalar field ϕ and its canonical kinetic term X=-ϕ^;μϕ_;μ/2. In a cosmological setup, there are two complementary approaches that are commonly used. Each one has its own advantages and motivations. For completeness we quickly review both, as it will be useful for the rest of the paper. A first approach consists in directly specifying the field dependence of the G_2, 3, 4, 5 functions. This is the case of e.g. quintessence or f(R) gravity. Assuming a Friedmann-Lemaître-Robertson-Walker (FLRW) background one can get the evolution of the matter and the scalar field at the background level. Then, it is possible to get the evolution of the perturbations on top of the background obtained. The key advantage of this approach is self-consistency. For a given covariant Lagrangian formulation one can derive the necessary equations in every regime (background vs perturbations, weak vs strong gravity) and apply the result of one regime to the others. This is extremely important, since the laws of gravity and DE should be universal. However, it is rather cumbersome to jump from one model to another, because the background evolution is non trivial and the effort to setup the machinery needed to solve the physical system under investigation has to be repeated for each model. 
Then, the main limitation of this approach is that without a favorite model in mind it is difficult to scan a significant part of the parameter space of gravity. Alternatively it is possible to use an Effective Field Theory (EFT) approach. This framework compresses the information content of the full Equation (<ref>) into few functions of time. Assuming a FLRW background and up to linear order in perturbation theory a common choice is to describe the linear evolution of cosmological perturbations in Horndeski with <cit.> {w_ϕ, , , , M_∗^2} . Here, w_ϕ=p_ϕ/ρ_ϕ is the equation of state of the scalar field (or we can use any other function describing the background evolution of DE), α_ K has been dubbed kineticity, α_ B is called braiding, α_ T is the tensor-speed excess and M_*^2 an effective Planck mass. M_*^2 can also be replaced with its run-rate ≡ln M_∗^2/ln a , together with the initial condition M_∗, ini. These functions describe all the possible dynamics of the Horndeski Lagrangian and they are independent of each other. w_ϕ is the only function affecting the background evolution (and the perturbations through a different expansion history), while the other functions in Equation (<ref>) modify only the evolution of the perturbations. As a consequence, a commonly used strategy is to fix the background evolution to the one predicted by some fiducial model (typically ΛCDM) and focus on the phenomenology of the perturbations. In ΛCDM the EFT functions in Equation <ref> take the values {w_ϕ=-1, =0, =0, =0, M_*^2=1} . One of the key advantages of the EFT approach is that it is very powerful to test deviations from the standard cosmology. Indeed, the usual (but not unique) approach is to choose a time dependence of the EFT functions such that ΛCDM is recovered at early times. The EFT framework can be seen as a bridge between the underlying Horndeski theory and observations (see Figure 1 of ), thus facilitating the inference of cosmological constraints. Then, for a given time evolution of the EFT functions it is possible to get a particular Horndeski covariant model following the strategy introduced in <cit.>. In particular, we choose to work within a sub-class of Horndeski models, specifically those that have = 0. The nearly-simultaneous detection of the GW signal (GW17081) and the corresponding gamma-ray burst (GRB 170817A) emitted from the merger of a binary neutron star system <cit.> implies that the speed of GW (c_T^2≡ 1+) has to be equal to the speed of light for any cosmological purpose <cit.> [An alternative interpretation of these constraints can be found in <cit.>.]. Translating this requirement to the Horndeski functions implies G_4=G_4(ϕ) and G_5=0. Then Equation (<ref>) becomes ℒ_SH = G_2(ϕ,X)+G_4(ϕ)R-G_3(ϕ,X)ϕ , where SH stands for Scalar Horndeski. As explained above, the background of this class of models can be fully specified by one function of time. It can be the scalar field energy-density, the scalar field equation of state, or directly the Hubble rate. This property is manifest by construction when working within the EFT framework. If working with the Horndeski G_i functions one can project them into the background function chosen <cit.>. The evolution of the perturbations depends on both the chosen expansion history and on the other EFT functions in a non trivial way. An in depth discussion that adopts the same notation used here can be found in, e.g., <cit.>. 
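As a small numerical aside, M_∗^2 and its run-rate carry the same information once the initial condition M_∗, ini is fixed; the snippet below differentiates a toy effective Planck mass and re-integrates the run-rate to recover it (the functional form of M_∗^2(a) is invented purely for illustration and is not one of the models considered later).

import numpy as np

lna = np.linspace(-7.0, 0.0, 2001)
a = np.exp(lna)
Omega_L = 0.7 / (0.3 * a**-3 + 0.7)        # Lambda fraction for a flat LCDM-like background
M2 = 1.0 + 0.1 * Omega_L                   # toy effective Planck mass squared (illustration only)

alpha_M = np.gradient(np.log(M2), lna)     # run-rate: d ln M_*^2 / d ln a

# Rebuild M_*^2 from alpha_M and the initial condition M_*,ini^2 = M2[0]
integral = np.concatenate(([0.0], np.cumsum(0.5 * (alpha_M[1:] + alpha_M[:-1]) * np.diff(lna))))
M2_back = M2[0] * np.exp(integral)
print(np.max(np.abs(M2_back / M2 - 1.0)))  # small residual: the two descriptions are equivalent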
Here it is important for the rest of the discussion to report the definition of the de-mixed kinetic term for the scalar field perturbation, ≡ + 3/2^2 , and of its effective sound speed c_s^2 = 1/ [(2-α_B) (-H^'/a H^2+1/2α_B+α_M) . . -3(ρ_m+p_m)/H^2M_∗^2+α_B^'/a H] . A definition of the α_i's as a function of the Horndeski G_i's can be found in, e.g., <cit.>. The models we are going to focus are: Cubic Covariant Galileon. The covariant Galileon <cit.> model corresponds to the sub-class of Horndeski theories, Equation (<ref>), that possesses the Galilean symmetry in a flat spacetime, i.e. ∂_μϕ→∂_μϕ+b_μ, with b_μ a constant 4-vector <cit.>. While the most general version of the covariant Galileon does not respect the condition =0, here we focus on the cubic Galileon case, for which the G_i functions read G_2 = c_2 X , G_3=2c_3/Λ_3^3X , G_4=M_ Pl^2/2 . Here, as usual, we have set Λ_3^3≡ H_0^2 M_ Pl. On top of the initial conditions (IC) for the scalar field ϕ and its time derivative ϕ̇, c_2 and c_3 are extra parameters. In particular, c_2 can be normalized to ± 1 through a convenient redefinition of the scalar field. The sign of c_2 is crucial for the expansion history of the universe. If c_2=+1 one needs a cosmological constant to drive the accelerated expansion of the universe. On the contrary, if c_2=-1 the model self-accelerates. Since the model is shift symmetric, the IC chosen for the scalar field itself is irrelevant, while the IC for its derivative can be fixed by following the attractor solution Hϕ̇= const. <cit.>. Additionally, c_3 is fixed by requiring that the sum of the fractional energy densities today is equal to one in a flat universe. Finally, we choose the self-accelerating branch (c_2 = -1), which gives =3=6Λ_3^6Ω_ϕ,0/M_ Pl^2H^4 , M_*^2=1 , =0 , where Ω_ϕ,0 is the fractional energy-density of the scalar field today, and the Hubble rate is the only time dependent quantity. Kinetic Gravity Braiding. The Kinetic Gravity Braiding (KGB) model in its original formulation <cit.> is a cubic Horndeski theory. Here we focus on a model (nKGB) defined as G_2=-X , G_3=g^(2n-1)/2X^n/Λ^4n-1 , G_4=M_ Pl^2/2 , where Λ^(4n-1)≡ M_ Pl^(2n-1) H_0^2n. The sign of the standard kinetic term has been chosen negative, to have a self-accelerating universe. g and n are the two free parameters of the theory, and the exponent of g has been chosen such that gΩ_ϕ(a=1)∼𝒪(1) for every choice of n. It is easy to notice that Equation (<ref>) is a generalisation of Equation (<ref>), obtained fixing n=1 and g=4c_3^2. In nKGB the only non-zero α_i functions are = 2 X/M_ Pl^2 H^2[- 1 + 6 n^2 g^n-1/2H ϕ̇ X^n-1/Λ^4 n - 1] = 2 n g^n-1/2ϕ̇ X^n/Λ^4 n - 1 M_ Pl^2 H . In our analysis we fix n=5, while g is obtained requiring the sum of the fractional energy densities of all species is one today. gravity. This scalar-tensor theory extends the Einstein-Hilbert action by adding a non-linear function of the Ricci scalar, f(R). After defining ϕ≡ 1 + f_R, with f_R ≡df/dR, it can be reformulated in terms of the Horndeski G_i functions as follows <cit.>: G_2 = f - R f_R/2 , G_3 = 0 , G_4 = 1 + f_R/2 . From the definition of the α_i functions <cit.>, we can immediately see that: = 1 + f_R , = ḟ_R/1 + f_R , = - , = 0 , where the overdot denotes the derivative with respect to the logarithm of the scale factor, ln a. 
The evolution of the scalar field and the background expansion are related by the modified Friedmann equation <cit.> H^2-f_R(H Ḣ + H^2)+1/6f+H^2f_R RṘ= , coupled with the equation for the Ricci scalar R = 12 H^2 + 6 H Ḣ . In Equation (<ref>), f_RR = df_R/ dR, and includes all matter species, that is, radiation, non-relativistic matter, neutrinos etc. Thus, we can either specify the expansion history and solve for f(R), or provide an explicit functional form for the extension to the Einstein-Hilbert action and derive the associated background evolution. In the first case, we adopt the so-called designer approach <cit.>, and in this work, we fix the background to that of a cosmology. For this family of models, deviations from standard gravity are generally parametrised in terms of the dimensionless background quantity B = ḟ_R/1 + f_RH/Ḣ , and we will consider a model with B_0 ≡ B(z = 0) = 0.01. Alternatively, we can adopt an explicit Lagrangian formulation, with the high-curvature limit of the <cit.> form f(R)/m^2 = -c_1/c_2 + c_1/c_2^2(m^2/R)^n , being one of the most extensively studied models. Here c_1, c_2 and n are free dimensionless parameters, and the mass scale m^2 ≡ρ_ c + ρ_ b, where ρ_ c and ρ_ b are the background energy-density of cold dark matter and baryons, respectively. We can eliminate one parameter by requiring convergence to a cosmological constant in the early universe (i.e R ≫ m^2), which gives c_1/c_2 = 6Ω̃_Λ/Ω_ m, with Ω̃_Λ = λΩ_Λ, and λ is a real positive number that accounts for deviations from the late-time value Ω_Λ = 1 - Ω_ m - Ω_ r. We can also express the combination c_1/c_2^2 in terms of the initial scalar field value, f_R, ini, allowing Equation (<ref>) to be rewritten as f(R)/m^2 = -6Ω̃_Λ/Ω_ m - f_R, ini/n( R_ ini/m^2) ( R_ ini/R)^n , where R_ ini≡ R(a_ ini). We consider models with n=1 and n=4, and adjust λ and f_R, ini such that at z=0 we have ρ_ϕ = Ω_Λ H_0^2 and |f_R0| = 10^-4. We found λ∼𝒪(1 ± f_R0) (see, e.g., Figure <ref> in Section <ref>). To integrate the coupled system of Equations (<ref>) and (<ref>) we follow the strategy discussed in <cit.>. Finally, note that by combining Equations (<ref>) and (<ref>) into Equation (<ref>) one finds the well-known result = 1 valid for any f(R) gravity model <cit.>. Fixed-form parametrisation. Using the EFT approach we use a simple, yet effective, parametrisation. The background is fixed to ΛCDM, i.e. w=-1, and the alpha functions defined as α_i = α̂_i Ω_Λ(a) . Here Ω_Λ(a) is the fractional energy density of the cosmological constant and α̂_i free parameters that fix the amplitude of the EFT functions. The idea is to have a ΛCDM universe before the onset of DE, and allow for modifications at late times. Equation (<ref>) does not pretend to emulate realistic Horndeski models (with the exception of simple Quintessence), but rather to provide a minimal extension capable of capturing the phenomenology of all the alpha functions. Given that the background evolution is fixed to be ΛCDM, on top of the standard cosmological parameters the only extra ones are {α̂_ K, α̂_ B, α̂_ M} , to which we assign α̂_ K=1, α̂_ B=0.2 and α̂_ M=0.1 for our test cosmology. §.§ Stable parametrisation One challenge when considering the entire class of Horndeski models is their potential for instability in response to perturbations. These instabilities result from unsuitable background solutions, and it is crucial to identify and exclude any parameter combinations leading to such unstable evolution. 
Physical instabilities come in three kinds <cit.>: (i) gradient instabilities occur when the square of the speed of sound for perturbations turns negative as the background evolves. This negative value can cause perturbations to grow exponentially at small scales, destabilizing them over timescales that are comparable to the cutoff of the theory; (ii) ghost instabilities arise when the sign of the kinetic term for background perturbations is incorrect. This issue is often considered in discussions of quantum stability. If ghost modes are present and dynamically interact, they can destabilize the vacuum, resulting in the production of both ghost and normal modes; (iii) tachyonic instabilities manifest when the effective mass squared of scalar field perturbations is negative, leading to power-law instabilities at large scales growing faster than the Hubble scale. While the absence of ghost and gradient instabilities is an integral requirement for a healthy theory, tachyonic instabilities are not necessarily harmful. In fact, imaginary effective masses can still produce observationally viable models, and considering them as pathological a priori would amount to an overly conservative approach <cit.>. Here, we will only impose that a theory must be free from ghost and gradient instabilities, which is expressed by the conditions > 0, M_∗^2 > 0, > 0, ≠ 2. Another class of instabilities that can plague Horndeski's models are the mathematical (or classical) instabilities <cit.>, which manifest as exponentially growing modes in the perturbations. The scalar field fluctuations, V_X ≡ aδϕ/ϕ^', follow the equation of motion 𝒜(τ)V_X^''+ ℬ(τ)V_X^' + [ 𝒞(τ) + k^2 𝒟(τ) ]V_X = ℰ(τ,k) , where the expressions for the time-dependent coefficients, {𝒜,…,ℰ}, are detailed in Appendix <ref>. To prevent significant growth of unstable modes over cosmological timescales, the following conditions must be satisfied: * If ℬ^2(τ) - 4𝒜(τ)[ 𝒞(τ) + k^2𝒟 (τ) ] > 0: -ℬ(τ) ±√(ℬ^2(τ) - 4𝒜(τ)[ 𝒞(τ) + k^2𝒟 (τ) ])/2𝒜(τ) < ξ H_0 . * If ℬ^2(τ) - 4𝒜(τ)[ 𝒞(τ) + k^2𝒟 (τ) ] < 0: -ℬ(τ)/2𝒜(τ) < ξ H_0 . Here, ξ is a parameter controlling the level of instability in units of the Hubble constant, H_0, with larger values of ξ allowing models to exhibit faster growing rates of these pathological modes. For simple fixed-form parametrisations of the α-functions (e.g., Equation <ref>), or even for theories based on a covariant formulation (e.g., Equations <ref>-<ref>), finding stable models is just a matter of solving for the background evolution and check that the physical (and mathematical) conditions are satisfied. In these scenarios, we must only vary a handful of parameters and the search process is rather efficient. However, for more complex parametrisations producing a wider variety of functional forms, this trial-and-error approach can rapidly become sub-optimal. A mathematically equivalent, yet computationally more advantageous strategy consists in describing the Horndeski's Lagrangian at the level of background and linear perturbations using a basis that guarantees no-ghost and no-gradient instabilities from the start <cit.>. In practice, this can be achieved by replacing the α_i's with , M_∗^2, , , together with the energy density of the scalar field, . 
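For reference, the trial-and-error filter mentioned above — checking the physical conditions directly on tabulated α-functions — reduces to a handful of array inequalities. A minimal sketch is given below, with our own function names, reading the no-ghost combination as α_K + (3/2)α_B^2 and enforcing α_B ≠ 2 with a simple tolerance; the stable basis introduced next makes this accept/reject loop unnecessary by construction.

import numpy as np

def is_physically_stable(alpha_K, alpha_B, M2, cs2, tol=1e-12):
    # No-ghost: de-mixed kinetic term alpha_K + (3/2) alpha_B^2 > 0 and M_*^2 > 0
    # No-gradient: c_s^2 > 0 at all sampled times; also require alpha_B != 2
    alpha = np.asarray(alpha_K) + 1.5 * np.asarray(alpha_B)**2
    return (np.all(alpha > 0.0) and np.all(np.asarray(M2) > 0.0)
            and np.all(np.asarray(cs2) > 0.0)
            and np.all(np.abs(np.asarray(alpha_B) - 2.0) > tol))

# Toy usage: in practice the arrays come from the background solve of the chosen parametrisation.
n = 512
print(is_physically_stable(alpha_K=np.full(n, 1.0), alpha_B=np.full(n, 0.2),
                           M2=np.full(n, 1.0), cs2=np.full(n, 0.8)))   # True -> keep the model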
The initial condition, ≡(z=0), is used to integrate the non-linear differential equation for the braiding α̇_B + ( 1 - + Ḣ/H) - ^2/2 = 𝒮(x) , where the source term is 𝒮(x) ≡ - 2 + + /M_∗^22Ḣ/H - 3( + )/M_∗^2 H^2 , with the scalar field pressure, , obtained either from the continuity equation (see Appendix <ref>) or, if w_ϕ is provided, as = w_ϕ, ≡, ≡ M_∗^2 - 1, and given by Equation (<ref>). Note that Equation (<ref>) follows from Equation (<ref>) after expressing H^' in terms of the density and pressure of the matter and scalar fields, and upon transforming conformal time derivatives into derivatives with respect to ln a. Once a solution for is found, the kineticity, , can be derived from Equation (<ref>). It has been argued that stable basis functions offer no distinct advantages over the standard α-functions in terms of computational efficiency or modeling capabilities <cit.>. Contrary to this, our work suggests that reformulating Horndeski's gravity using the stable basis, Equation (<ref>), enhances our ability to efficiently sample viable theories and improves numerical stability. The left panel of Figure<ref> illustrates how parametrisation with the α-functions might misclassify models as unstable. Specifically, designer gravity on a background, which is inherently stable (=1), shows rapid variations in the speed of sound due to inexact numerical cancellations when using Equation (<ref>), violating the no-gradient condition multiple times. In contrast, the stable basis directly incorporates the speed of sound as an input function, which, in this example, remains constant at unity throughout. Furthermore, using physically motivated parametrisations of the stable basis functions alongside highly optimised root finders and fast integrators (such as those implemented in modern Einstein-Boltzmann solvers) typically leads to rapid converge towards a solution for the braiding with the expected early-time evolution (see Section <ref> for details). Depending on the complexity of the parameter space, iteratively solving a differential equation is often more efficient than searching for a stable model starting from the α-functions. §.§ Quasi-static approximation The quasi-static approximation (QSA) is key to the study of cosmological perturbations within modified gravity and dark energy models. This approximation rests on the premise that the primary timescale of influence is the Hubble rate, and on scales smaller than the sound horizon (k^2 ≫ a^2 H^2) the time derivatives of the scalar field perturbations can be neglected owing to the rapid response of this degree of freedom to changes in the system. This simplifies the equations of motion for the scalar field into algebraic constraints, thus offering substantial computational benefits. In particular, for large values of the effective mass (i.e. the term proportional to V_X in Equation <ref>) the scalar field perturbations undergo high-frequency oscillations that necessitate small integration steps. These can slow down computations significantly or even cause the solver to stall when the step size falls below machine precision. By employing the QSA, we can focus on the secular evolution of these perturbations, effectively excluding the high-frequency components. However, the QSA must be applied with care depending on the scale and the specific dynamics of the model. 
Einstein-Boltzmann solvers like  <cit.> and  <cit.> employ this approximation across all scales and throughout the entire evolution of the perturbations – an approach that can potentially affect the accuracy of the predictions on large scales <cit.>. At the other end of the spectrum are solvers like  <cit.> that avoid using the QSA entirely, a choice that can significantly degrade performance, in particular for certain sub-classes of models[See <cit.> for the strategy adopted in to mitigate the impact of this choice.]. The solver <cit.> adopts a hybrid strategy that leverages the computational efficiency of the QSA along with the accuracy provided by the full scalar field dynamics. Within this scheme, the QSA is selectively activated or deactivated for each time and scale individually, ensuring its use is optimized. The scalar field fluctuations and their time derivatives can be read off Equation (<ref>) once the terms proportional to V_X^'' and V_X^' are neglected, V_X = ℰ(τ, k)/𝒞(τ) + k^2 𝒟(τ) , V^'_X = /τ[ ℰ(τ, k)/𝒞(τ) + k^2 𝒟(τ)] . It should be noted that, even within the quasi-static regime, V_X remains time-dependent, implying that its time derivative does not vanish. Consequently, terms proportional to V^'_X (or time derivatives of other metric perturbations) in the Einstein equations, which are typically not reduced by small coefficients, must be accounted for in our calculations. Although the QSA implemented through Equation (<ref>) provides a good description of the scalar field fluctuations and their derivatives, it can lead to small-scale inaccuracies in the matter perturbations for some models. This is illustrated in the right panel of Figure <ref>, where the matter power spectrum for two of our test gravity models (pink and yellow solid lines), computed by selectively activating the QSA in , deviates by up to 1% for k ≳ 0.1 h/Mpc. For analyses restricted to the linear regime, these deviations are negligible. However, the linear power spectrum is also used as input for methods that predict non-linear structure formation <cit.>, and such small-scale inaccuracies can propagate to observables probing the non-linear regime, potentially exceeding the accuracy requirements established for Stage IV surveys <cit.>. A different QSA strategy consists in working directly with the modified metric potentials <cit.>. In the Newtonian gauge, the linearised modified Einstein field equation for the Newtonian potential, Ψ, and the intrinsic spatial curvature, Φ, read <cit.> k^2Ψ = -3/2μ(a,k)a^2[Δ_ m+3(+)σ_ m] , k^2[Φ-γ(a,k)Ψ] = 9/2μ(a,k)a^2(+)σ_ m , where the total comoving-gauge density perturbations are summed over all matter species I ∈{c, b, r, ν, …}[Here and throughout we will be using the shorthand notation: c → cold dark matter, b → baryons, r → photons, ν→ neutrinos.], Δ_ m≡∑_I ρ_IΔ_I, with Δ_I being the comoving density contrast for the species I, and (+)σ_ m≡∑_I (ρ_I+p_I)σ_I with σ_I denoting the anisotropic stress of the individual matter fluids. Changes to the gravitational coupling experienced by non-relativistic particles are encoded in μ, while the gravitational slip, γ, sources differences between the two metric potentials also in the absence of anisotropic stress. General Relativity is recovered for μ = γ = 1. 
For the Horndeski's models described by the Lagrangian in Equation (<ref>) the two phenomenological functions take the simple forms <cit.> μ(k,a) = 1/M_∗^2μ_ p + k^2 M_∗^2 μ_∞/a^2 H^2/μ_ p + k^2 /a^2 H^2 , γ(k,a) = μ_ p + k^2 M_∗^2 μ_Z,∞/a^2 H^2/μ_ p + k^2 μ_∞/a^2 H^2 , consistent with the notation of <cit.>. Definitions for the quantities {μ_ p, μ_∞, μ_Z,∞} are detailed in Appendix <ref>. The extension of introduced in this work, , bypasses the QSA formulation from Equation (<ref>) in favor of Equations (<ref>) and (<ref>) in synchronous gauge, similar to [For the relevant equations, see <cit.>. Here, we only report in Appendix <ref> a typo-corrected expression for the conformal time derivative of the scalar metric potential η.]. However, it activates the QSA selectively based on scale and time, when the scalar field's effective mass and the discrepancy between full and quasi-static solutions meet specific user-defined criteria <cit.>. The dashed lines in the right panel of Figure <ref> demonstrate that the QSA approach implemented through the metric potentials is less affected by the neglected dynamics of the scalar field fluctuations compared to the implementation using Equation (<ref>). § CODE STRUCTURE AND IMPLEMENTATION The code developed for this study, , builds on , which itself extends the capabilities of the Einstein-Boltzmann solver <cit.> to evolve the background and linear perturbations of scalar-tensor theories within the Horndeski's framework. This section explores the new features introduced in . For a comprehensive review of the functionalities in we refer the reader to <cit.>. enhances by modifying the input, background, and perturbations modules to ensure greater numerical stability and facilitate a more comprehensive investigation of Horndeski's gravity. In the following, we provide a detailed description of these modifications. §.§ Input and precision parameters By setting the variable to , users can specify a path to a file containing the functions , , and , as well as the ln a sampling times, using . Alternatively, these quantities can be directly assigned to the corresponding code variables: , , , and . This direct assignment method is particularly recommended when running via the Python wrapper. To mitigate the impact of interpolation errors within the computational pipeline, we recommend using a time-sampling step size no larger than Δln a = 7.5 × 10^-4 for the range ln a ∈ [-2,0]. In , the variable serves a container for the initial condition, . Users must also select an to define the background evolution of the scalar field. They can choose from the two analytic expansion models, {, }, or opt for the new non-parametric models {, }. In the model, the scalar field background density is integrated using the continuity equation (Equation <ref>), with the background pressure defined as = w_ϕ. For the model, the approach starts with the normalised background density ≡/(z=0), which is transformed to = Ω_ϕ H_0^2. Here, the Hubble constant is an input parameter, and Ω_ϕ = 1 - ∑_I Ω_I represents the present-day normalised background energy-density in a flat universe. The scalar field background pressure in this configuration is then computed as = - - ρ̇_ϕ/3 . It is important to note that employs a root-finding algorithm to determine the value of a user specified parameter necessary to achieve the present-day energy density, Ω_ϕ H_0^2 <cit.>. 
It can be the initial conditions of the scalar field, ϕ(τ_ ini), its time derivative, ϕ^'(τ_ ini), or any other parameter that contributes to the energy density of the scalar field. This step is essential for models based on a covariant Lagrangian where is not known in advance, requiring the cosmological background to be solved self-consistently with the scalar field and the α-functions. However, this becomes redundant in cases where the scalar field energy-density evolution is predefined, as often happens in the effective-theory approach. To optimize computational efficiency, has been modified to bypass the root-finding process. Instead, it directly uses Ω_ϕ from the closure relation, Ω_ tot = 1, to rescale the normalized energy density, . Similarly to the stable basis functions, and can read the necessary data either from a text file containing two columns {ln a, w_ϕ or } located at a path specified in , or directly from input arrays and . To minimise interpolation errors, it is recommended to use a time-sampling step-size Δln a ≤ 7.5 × 10^-4 for the range ln a ∈ [-2,0]. In addition, when utilizing , users must activate the GR approximation scheme (see Section <ref> below) by setting . This feature allows the code to solve the standard system of equations down to a user-specified redshift, . Given our focus on late-time modifications to GR, the default value for this transition epoch is set deep within the matter-dominated era, specifically at = 99. While the stable basis in ensures the positivity of the de-mixed kinetic term, the Planck mass, and the speed of sound, it does not automatically guarantee that the condition α_B ≠ 2 from Equation (<ref>) is met, as this requires a solution for the braiding first. Therefore, we can safely set , as always tests that the braiding never crosses 2, which was not implemented in . Furthermore, setting allows the code to check for exponentially growing modes by enforcing the mathematical stability conditions outlined in Equations (<ref>) and (<ref>). Users can control the allowed instability growth rate via the variable, which defaults to 1. Similar to , has the option to treat the scalar field as a dynamical degree of freedom throughout the perturbations’ evolution, or to apply the QSA whenever is safe to do so[Unlike , does not support enforcing QSA evolution at all times as does.]. These configurations can be selected by setting the input variable to for the former, or for the latter. In mode, the QSA is activated for specific time intervals and wavenumbers based on whether the effective mass of the scalar field exceeds the threshold , and the amplitude of scalar field fluctuations has reduced by a factor defined in <cit.>[<cit.> also introduce a radiation trigger to ensure that the scalar field mass is sufficiently large to counteract the oscillations inside the radiation sound horizon. While this requirement is relevant at early times, it becomes redundant at late times. Given that solves the standard GR equations throughout radiation domination and during the early stages of matter domination, this specific condition can be ignored here.]. Default values are set at and , which, even for large departures from the standard cosmology, affect the computed power spectra by ≲ 0.1%. can accurately solve the quasi-static equations down to z=0 (see Section <ref>), eliminating the need to enforce the full dynamic of the scalar field at low redshifts. Consequently, the corresponding control variable can be safely set to 0. 
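The hybrid switching scheme just described reduces to two triggers: a heavy effective mass in units of the Hubble rate, and a sufficiently damped oscillation amplitude. The sketch below is a schematic rendition of that logic; the argument names and threshold values are illustrative placeholders and are not the actual variable names or defaults of the code.

```python
def qsa_active(m2_eff, H, amp_now, amp_initial,
               mass_threshold=1.0e3, decay_factor=1.0e-2):
    """Decide whether the quasi-static approximation can be switched on.

    Mirrors the two triggers described in the text: the scalar field must be
    heavy compared with the Hubble rate, and its oscillations must have
    decayed by a prescribed factor. Thresholds here are placeholders only.
    """
    heavy_field = m2_eff / H**2 > mass_threshold
    damped_oscillations = amp_now < decay_factor * amp_initial
    return heavy_field and damped_oscillations
```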
also includes new precision parameters. Below is a list describing their scope and default values: * and : the background time-sampling strategy adopted for a ≳ 0.05 is crucial for achieving accurate interpolation of the quasi-static quantities {μ_ p, μ_∞, μ_Z,∞}. To this end, employs a denser sampling for ln a >, with the number of points controlled by . For ln a < the number of sampling points reverts to the standard set by the precision variable . The default settings are , , and . * : when using the , where users define the equation of state for the scalar field, its background density is calculated from the continuity equation. This equation is integrated from a=1 down to the scale factor (1 - )/(1+)[Note that this value must be larger than the smallest scale factor, a_ min, in the input array . We recommend setting ln a_ min = -5 for both and .]. To ensure smooth derivatives of the scalar field pressure and related quantities at , the default setting for is 0.1. * : the equation for the braiding, Equation (<ref>), is integrated from a=1 backward to the scale factor (1 - )/(1+). To ensure smoothness at for all functions derived from and its derivatives, we recommend setting to at least 0.1. * : this variable governs the integration accuracy of the continuity equation, Equation (<ref>), and of the braiding equation, Equation (<ref>). Based on our analysis, setting provides a good balance between computational speed and solution accuracy across all models examined in this study. * : Solving Equation (<ref>) into the matter-dominated epoch poses numerical challenges, particularly for models that rapidly converge to GR at early times. While inaccuracies at high redshifts generally do not impact the late-time modified gravity phenomenology, they can introduce spurious instabilities that may incorrectly render a model non-viable. To minimize these effects, modifications to the Einstein equations are triggered only when the braiding solution reaches the . Practically, this means adjusting the transition redshift, , to match the redshift at which || first equals . Note that this threshold is not enforced for models with || ≤, e.g. quintessence and k-essence. The default value of is chosen to ensure precise predictions across the broad range of deviations from GR examined in this study. * : at the onset of modified gravity, marked by , introduces the scalar field as a new degree of freedom by incorporating the equation for the field fluctuations (Equation <ref>) and its couplings to the metric perturbations into the Einstein-Boltzmann system. Initial conditions are defined by the QSA outlined in Equation (<ref>). The code then waits a conformal time specified by before assessing whether to activate the QSA, effectively turning the scalar field fluctuations into algebraic constraints. We found that works well for all models tested. §.§ Background Once the input arrays are prepared, the initial step involves calculating the background density of the scalar field. Depending on the chosen expansion model, the code performs one of the following actions: (i) integrates the continuity equation and stores both and for later use; (ii) rescales the input normalized background energy-density of the scalar field to compute , and calculates using Equation (<ref>), saving both values for future reference; (iii) analytically determines the scalar field density and pressure, along with other background quantities, as already implemented in . 
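For option (i) above, and for the backward integration of the braiding described next, the background module essentially chains two one-dimensional integrations in ln a. A hedged sketch with SciPy follows; the equation of state, the present-day density and, in particular, the right-hand side of the braiding equation are schematic stand-ins, not the expressions evaluated internally by the code.

```python
import numpy as np
from scipy.integrate import solve_ivp

# --- (i) scalar field density from the continuity equation, given w_phi(ln a) ---
def w_phi(x):
    # illustrative equation of state; the actual w_phi is a user input
    return -0.9 + 0.1 * (1.0 - np.exp(x))

def drho_dx(x, y):
    # d(rho_phi)/d(ln a) = -3 (1 + w_phi) rho_phi
    return [-3.0 * (1.0 + w_phi(x)) * y[0]]

rho_today = 0.7                               # Omega_phi * H0^2 in code units (placeholder)
x_grid = np.linspace(0.0, -5.0, 1000)
rho_phi = solve_ivp(drho_dx, (0.0, -5.0), [rho_today], t_eval=x_grid,
                    rtol=1e-10, atol=1e-12).y[0]
p_phi = w_phi(x_grid) * rho_phi               # pressure follows directly

# --- braiding from its present-day value, integrated backward in ln a ---
def braiding_rhs(x, y):
    aB = y[0]
    # schematic right-hand side: the true coefficients combine H, M_*^2, c_s^2,
    # rho_phi and p_phi as in the braiding equation quoted earlier in the text
    damping = 1.0 + 0.5 * np.exp(x)
    source = -0.3 * np.exp(3.0 * x)
    return [source - damping * aB + 0.5 * aB**2]

alphaB_today = 0.8                            # trial initial condition alpha_B(z=0)
alpha_B = solve_ivp(braiding_rhs, (0.0, -5.0), [alphaB_today], t_eval=x_grid,
                    rtol=1e-8, atol=1e-10).y[0]
```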
Subsequently, with access to the stable basis functions, the scalar field density and pressure, as well as H and Ḣ, the code solves for by backward integrating Equation (<ref>) from its present-day value, . It calculates the kineticity, , using Equation (<ref>), and if necessary updates based on the . However, it is important to acknowledge a slight inconsistency in our approach: , , and associated variables are not recalculated for the interval between the newly adjusted and the original values to accommodate a lower transition redshift. This methodological choice is made to save computational resources while only introducing a marginal error in MG models that rapidly converge to GR. Within the Horndeski's framework we expect changes to the growth of structure to be linked to changes to the background expansion. Therefore, viable models exhibiting significant departures from standard gravity only in recent epochs, presumably also closely match a expansion history until recently. Finally, with all relevant background functions now available, computes the quasi-static quantities {μ_ p, μ_∞, μ_Z,∞} and their derivatives. These are also stored in the background table and forwarded to the perturbations module for further processing. §.§ Perturbations The evolution of the perturbations closely follows that implemented in , except for the new GR approximation scheme and the effective field equations used for the QSA. While follows the dynamics of the scalar field and its coupling to the metric tensor already prior to recombination, activates the additional degree of freedom only after the time . This is achieved by enabling , which directs the solver to apply the standard equations up to the transition redshift. Once the GR approximation is deactivated, the initial conditions for the scalar field are derived from the quasi-static expressions given in Equation (<ref>), transitioning from the standard equations to those of . This approximation scheme is crucial to manage the discontinuity at , which otherwise leads to computational instability by forcing the integrator to attempt reductions in the step size to below double precision levels. Consistent with other approximation schemes used, the initial conditions for the metric and stress-energy tensor perturbations are set from the integration time step immediately preceding the deactivation. With the initial conditions now defined, can use the QSA scheme of for the evolution of the perturbations. Should the scalar field effective mass and fluctuations amplitude satisfy specific criteria <cit.>, the QSA may be selectively activated or deactivated up to six times. However, within , this scheme is only applicable to each wavenumber, k, following a designated time, τ() +. Furthermore, rather than applying the QSA directly to the scalar field fluctuations, V_X, targets the metric potentials, thereby enhancing numerical accuracy. Accordingly, the quasi-static part of has been restructured to align with the methodology developed for . § CODE VALIDATION AND PERFORMANCE Before exploring the new capabilities of , we first compare its predictions with those from established codes such as  <cit.>,  <cit.>, and  <cit.>. This comparative analysis allows for a thorough assessment of all test models and ensures the robustness of our results across different parametrisations, approximations, and integration strategies. Although not discussed here, similar agreements between and the other Einstein-Boltzmann solvers are achieved when including massive neutrinos. 
This is especially true for the small, non-factorisable contributions to the growth of structure caused by their coupling to modified gravity. §.§ CMB spectra Figure <ref> displays the CMB multipoles for temperature (upper left), polarization (upper right), their cross-correlation (lower right), and the lensing potential power spectrum (lower left). The lower sub-panels contain the relative deviations of the predictions (coloured lines) compared to the results from other Einstein-Boltzmann solvers (black lines). As reference spectra, we use without the QSA for cubic Galileon, nKGB, and models; for designer gravity; and for Hu-Sawicki gravity. These quantities were generated using the precision settings detailed in Appendix <ref> for both and . First, it is important to note that the differences between and are barely visible at this scale. This suggests that the newly implemented GR approximation at early times, the reparametrisation using stable basis functions, and the application of the quasi-static approximation until late in the perturbation evolution, all introduce minimal numerical errors. The agreement between and the -based codes is very good, with differences remaining well within ∼ 0.25%. Further analysis reveals that mismatches exceeding 0.05% come from variations between and , similarly affecting predictions (not shown). This validates the QSA implementation in , which closely aligns with , and confirms the reliability of the QSA/full dynamics switching feature in . Note that despite and both deriving from , discrepancies around 0.2% are observed between these codes for the low-ℓ temperature multipoles. Initially, these could be attributed to the activation of the QSA in , potentially affecting the accuracy of the integrated Sachs-Wolfe (ISW) effect calculations, particularly in the designer model where deviations from GR are more pronounced. However, further investigation indicated that these differences are due to the use of different versions, with employing a more recent release. §.§ Matter power spectrum Figure <ref> presents the evolution of the matter power spectrum ratio, P_ m^ MG(k)/P_ m^ GR(k), for the three cosmologies characterised by scale-independent growth on sub-horizon scales. The lower sub-panels show that for these models (coloured lines) deviates by an average of approximately 0.01% from (black lines). Differences are negligible for scales k ≲ 0.01, where the QSA is never activated. To isolate the impact of the new features, Figure <ref> also shows the matter power spectrum of the gravity predictions relative to the same quantity in . For the Hu-Sawicki models the agreement between (coloured lines) and (dotted lines) is excellent across all redshifts. Differences larger than 0.2% are visible only for k ≳ 1, regardless of model and time. Using an independent notebook employing the QSA, we have verified that these small differences originate from . Moreover, Figure <ref> illustrates the (albeit small) inaccuracies introduced by the QSA in the designer model. The blue lines, generated using a hybrid method that activates the QSA under certain conditions (detailed in Section <ref>), should be compared with the predictions (dashed lines), which do not employ the QSA. As expected, the small scales are most affected by this approximation, due to the conditions for its activation being met for more extended time intervals. To verify that deviations from are caused by the QSA, we forced to follow the full dynamics of the scalar field (green line).
The lower sub-panels confirm that predictions now align with at a level of 10^-4. §.§ Performance Here we assess the impact of the patch on execution time, examining both individual modules and the overall runtime. Table <ref> summarises the relative changes in execution time compared to for the input, background, and perturbations modules, as well as their combined effect and the total runtime variation, including the other unmodified modules. To ensure a fair comparison, we evaluate two models that activate different features in : the cubic Galileon model, which requires preparatory steps to iteratively determine suitable initial conditions and map the Horndeski G_i's to the α-basis, and the model, which involves direct analytic expressions for both the α's and the scalar field background evolution. Contrary to intuition, shows reduced runtime for both models at the input level, despite only needing to fill the relevant arrays. This is because unnecessarily applies the root finder for the initial conditions to this model–a step not performed by . Unsurprisingly, the execution of the background module takes twice as long as in , due to the integration of the differential equation for the braiding function. However, this slowdown is less severe than suggested by <cit.>. Assuming a uniform scanning strategy of their two-dimensional parameter space and a stable region amounting to 22% of the prior volume, the number of unstable models is 3.5 times that of the viable models. Therefore, starting from the stable basis ensures an overall 56% speedup for the background module compared to using the α-functions in this particular scenario. This efficiency gap becomes even wider for complex parametrisations, such as those discussed in Section <ref>, where stable models can be particularly challenging to find. The third column of Table <ref> highlights the advantage of enabling the QSA for z<10: it can reduce the computing time for the evolution of perturbations by up to 50%. Since this part of the code contributes most significantly to the cumulative execution time of the three modified modules (fourth column), any performance improvement here directly impacts the total runtime of the Einstein-Boltzmann solver (last column). The difference between the two codes arises from enabling the activation of the QSA in throughout the perturbation evolution, while in we apply it only up to some high redshift (i.e., z=10), as recommended by its developers. § EXTENDED PARAMETRISATION FOR HORNDESKI GRAVITY After establishing accuracy and efficiency using extensively studied modified gravity cosmologies, we can now leverage its new features to input general functions of time and explore the phenomenology of Horndeski gravity in greater detail. For this purpose, a parametrisation that generalises and includes all covariant theories analysed in Section <ref> could be extremely valuable <cit.>. A common feature of all our test models is that, deep in the matter-dominated era, their stable basis functions either approach the GR limit as power laws, A_J a^ζ_J, or tend to constant values, C_J, to a very good approximation. For instance, Figure <ref> illustrates that, regardless of the specifics governing the dynamics of the scalar field, the de-mixed kinetic term is well described by a power law for a ≲ 0.05 (shaded area). Therefore, any late-time evolution of the stable functions can be interpreted as deviations, Δ_J, from this early-time power-law behavior. 
Specifically, we can express the stable functions as = (A_M) e^ζ_M · x + b_M + Δ_M(x) , = e^ζ_D · x + b_D + Δ_D(x) , = [ (A_c_ s) e^ζ_c_ s· x + b_c_ s + C_c_ s] e^Δ_c_ s(x) , = [ (A_ρ)e^ζ_ρ· x + b_ρ + C_ρ] Δ_ρ(x) , where x ≡ln a, b_J ≡ln |A_J|, denotes the sign function, and for x ≪ 0 we have the limiting values Δ_M, Δ_D, Δ_c_ s → 0 , Δ_ρ → 1 . The constant C_ρ is not a free parameter; it is determined by the constraint (x=0) = 1. For negative A_M and A_c_ s, we must ensure that > -1 and > 0 throughout the evolution. Given that the power-law term for the speed of sound generally represents a small correction to the constant C_c_ s (as discussed below), it can be neglected when generating new models. Therefore, the gradient-free condition implies simply that C_c_ s > 0. §.§ Parametrising departures from constants and power laws Next, we need to find a sufficiently general yet simple parametrisation for the functions Δ_J. A commonly employed strategy to parametrise the EFT functions or the α_i's involves expanding them as ratios of polynomials, i.e., Padé approximants <cit.>. Here, we opt for an alternative methodology grounded in Gaussian Processes (GPs) <cit.> and Principal Component Analysis (PCA). In broad terms, we proceed as follows: * We generate training data such that the GP is consistent with Equation (<ref>). Practically, this requires an array of zeroes over the interval x ∈ [-5,-3]. * After choosing values for the hyperparameters defining the GP kernel, K(x_i,x_j), for a particular basis function, we let the GP learn from the training data. Note that all our GPs have zero mean. * We draw thousands of samples from the trained GP, calculate their mean, and subtract it from each sample so that the transformed samples have zero mean. * We perform PCA on the transformed samples and retain the first N_J principal components. * We project the specific basis function of each test model (cubic Galileon, nKGB, etc.) onto the selected principal components and quantify the accuracy of the reconstructed approximation. * For each basis function, we iterate over steps 2-5 to find the hyperparameter values and number of principal components that provide percent-level reconstruction accuracy for x > -3. It is reasonable to assume that a covariant theory of gravity in the Horndeski class should be described by basis functions that vary smoothly with time. This assumption restricts the choice of the GP kernel to either the squared exponential form, K(x_i,x_j) = σ^2 exp[ -(x_i - x_j)^2/2ℓ^2] , or the rational quadratic form, K(x_i,x_j) = σ^2[1 + (x_i - x_j)^2/2ϑℓ^2]^-ϑ . Here, the variance σ^2 represents the average distance of the generated samples away from the mean, the lengthscale ℓ determines the oscillatory features of the samples, and the parameter ϑ in the rational quadratic kernel controls the relative weighting of large-scale and small-scale variations [Note that the squared exponential kernel follows from the rational quadratic kernel in the limit ϑ→ +∞.]. Intuitively, we expect that the most relevant hyperparameters controlling the functional forms generated by the GP are ℓ and ϑ, as σ^2 acts as a simple rescaling factor. Therefore, we fix σ^2 = 10, and only vary ℓ and ϑ together with the number of principal components to achieve our target reconstruction accuracy of 1%. As an example, the left panel of Figure <ref> displays random functions, Δ_ρ, sampled from a trained GP using the rational quadratic kernel, as described by Equation (<ref>), with ℓ = 1 and ϑ = 5. 
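A minimal sketch of steps 1-4 above, assuming scikit-learn for both the Gaussian Process and the PCA, is given below; it uses the ℓ = 1, ϑ = 5 rational quadratic kernel quoted for Δ_ρ, but a coarser time grid and fewer samples than in the text, purely to keep the example light.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RationalQuadratic
from sklearn.decomposition import PCA

# 1) training data: the GP must vanish over x = ln a in [-5, -3]
x_train = np.linspace(-5.0, -3.0, 50)[:, None]
y_train = np.zeros(len(x_train))

# 2) rational quadratic kernel with sigma^2 = 10, l = 1, theta = 5 (values used for Delta_rho)
kernel = ConstantKernel(10.0, constant_value_bounds="fixed") * \
         RationalQuadratic(length_scale=1.0, alpha=5.0,
                           length_scale_bounds="fixed", alpha_bounds="fixed")
gp = GaussianProcessRegressor(kernel=kernel, optimizer=None, alpha=1e-10)
gp.fit(x_train, y_train)

# 3) draw random curves on a grid coarser than in the text
x_grid = np.linspace(-5.0, 0.0, 400)[:, None]
samples = gp.sample_y(x_grid, n_samples=2000, random_state=0).T   # (n_samples, n_grid)
samples -= samples.mean(axis=0)                                   # zero-mean transformed samples

# 4) PCA: keep the first six principal components
pca = PCA(n_components=6)
pca.fit(samples)
components = pca.components_                      # (6, n_grid) basis functions
explained = pca.explained_variance_ratio_.cumsum()
```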
The right panel of the same figure shows the first six principal components derived from a total set of 10,000 samples, collectively accounting for 90% of the cumulative explained variance, which meets the desired accuracy threshold. Similar considerations apply to the other basis functions; Table <ref> details the kernel, hyperparameter values, and number of principal components used for each. To keep interpolation errors in under control, our time-sampling strategy for the random curves (and consequently for the principal components) closely follows that described in Section <ref>. Specifically, we use 3,000 sampling points for the range [-3,0] and 200 sampling points for the range [-5,-3]. §.§ Reconstruction of test models Figures <ref> to <ref> illustrate the reconstruction accuracy of the principal components combined with the power law and/or constant approximation for the four basis functions in all of our test models. To project the exact functions onto the principal components, we first compute the natural logarithm of their absolute values [The stability conditions require and to be positive. However, and can be negative, hence the use of the absolute value.]. We then remove the power-law and/or constant trends from this quantity and finally apply the projection. For instance, to reconstruct the de-mixed kinetic term for any particular model, we do the following: * Compute the natural logarithm of the exact . * Fit a power-law over the range [-5,-3] to find the best fit parameters, ζ_D and b_D; * Subtract the power law from the exact ln() to obtain Δ_D; * Project Δ_D onto the six principal components to get Δ̃_D = ∑_i^N_D w_i ψ_i(x), where the w_i's are the N_D projection weights and ψ_i is the i-th principal component. * Define the approximated function using Equation (<ref>) by replacing Δ_D with Δ̃_D. From the lower panels of Figures <ref>-<ref> we observe that our PCA reconstruction matches the exact stable functions to within 1% for a ≳ 0.05. In a few cases, however, the relative difference, ϵ, exhibits a peak exceeding the target accuracy around a=0.1. Due to negligible departures from at those redshifts, this is not concerning and has no measurable effect on the cosmological observables. For comparison, Figure <ref> in Appendix <ref> demonstrates that when using the same number of free parameters, alternative parametrisations of the Δ_J functions rooted in polynomial expansion underperform compared to the GP+PCA strategy adopted here. To test how well the reconstructed stable functions describe the original MG cosmologies, we can use them as input to to predict the background expansion and linear perturbations. To assess the performance of the background reconstruction, we examine the Hubble parameter and conclude that the reconstructed expansion history resembles the exact one to better than 0.2% for all models. However, having the full set of stable functions is not sufficient to compute the perturbations, as we also need the initial condition, , for the integration of the braiding parameter. We determine this by minimizing the mean squared error MSE[] = 1/N_k∑_k [ P_ m^ rec.(k | )/P_ m^ exact(k) -1 ]^2 , where N_k is the total number of wavenumbers, k, in the interval [10^-4,10], and P_ m^ rec.(k | ) and P_ m^ exact(k) are the present-day matter power spectra of the reconstructed and exact model, respectively. The derived best-fit initial conditions are typically within 0.1-1% of their exact counterparts. 
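The projection procedure enumerated above reduces to a linear fit in ln a followed by dot products with the (orthonormal) principal components. A schematic implementation is sketched below; the input arrays, the function name and the early-time fitting window are placeholders, and the components are assumed to be sampled on the same grid as the exact function (for example, those produced by the previous sketch).

```python
import numpy as np

def reconstruct_log_D(x, lnD_exact, components):
    """Power-law detrending plus PC projection for the de-mixed kinetic term.

    `x` is the ln a grid, `lnD_exact` the natural logarithm of the exact
    function, and `components` an (N_PC, n_grid) array of orthonormal
    principal components; all inputs are illustrative placeholders.
    """
    # fit a power law (linear in ln a) over the early-time window
    early = (x >= -5.0) & (x <= -3.0)
    zeta_D, b_D = np.polyfit(x[early], lnD_exact[early], deg=1)

    # remove the early-time trend to isolate the late-time deviation
    delta_D = lnD_exact - (zeta_D * x + b_D)

    # project the residual onto the orthonormal principal components
    weights = components @ delta_D
    delta_tilde = weights @ components

    # reassemble the approximation in the parametrised form
    lnD_approx = zeta_D * x + b_D + delta_tilde
    return lnD_approx, (zeta_D, b_D, weights)
```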
Due to the high sensitivity of gravity to small inaccuracies in the reconstructed functions (mainly and ), we could not find a satisfactory that produced stable perturbations for the Hu-Sawicki n=4 model. We do not reconstruct trivial stable functions: the speed of sound in gravity is fixed at 1, = 0 for the cubic Galileon and nKGB models, and = 1 for . Figure <ref> confirms that by combining GPs and PCA we can predict the secondary CMB anisotropies (i.e. lensing and ISW effect) sourced by the late-time modified growth of the test models to precision levels well within the expected uncertainties of CMB-S4 experiments <cit.>. Matter power spectra for cubic Galileon, nKGB and models are also reconstructed to ≲ 0.05% precision across all redshifts, as shown in Figure <ref>. On the other hand, even though the reconstruction accuracy of the stable functions for gravity never exceeds 1%, Figure <ref> highlights that the derived matter power spectra deviate from the truth by ≲ 0.2% only up to k ≈ 1, with the agreement degrading to 1% at k ≈ 8. Despite these relatively large differences on small scales, uncertainties in the non-linear evolution <cit.> and baryonic feedback <cit.> have a much greater impact in this regime, effectively making the reconstructed models observationally indistinguishable from the true gravity theories. §.§ Exploring Horndeski's landscape By combining with the newly introduced parametrisation, Equation (<ref>), we can venture beyond the phenomenological boundaries defined by the extensions discussed in Section <ref>. For this first exploratory task, we focus on two families of models: * Scalar Horndeski (SH): here, we exploit the full functional freedom provided by Equation (<ref>) and use the GP+PCA basis presented in Section <ref>. For simplicity, we fix the number of principal components to six for all Δ_J's, and given the small contribution of the power-law term to the speed of sound (see Figure <ref>) we set A_c_ s = 0. Therefore, we have a total of 31 parameters accounting for the power laws and PC weights. * -like (FR): this represents a particular sub-space of the SH cosmologies, where sampling can be extremely challenging without imposing specific relations for the stable functions. The freedom is limited to and , with the remaining functions being = 1 and = 3^2/2. Models with ≪^2 closely resemble gravity. To describe this sub-class we use six PC weights and two power-law parameters for each stable basis function. In both cases we sample the parameter space using a Latin hypercube strategy with 5,000 sampling points for SH and 1,500 for FR. The parameter ranges defining the hypercube can be found in Appendix <ref>. As with the reconstruction of the test models, we need to choose a value for the initial condition, . While in the former case we could use the exact calculations as reference and minimise Equation (<ref>), here we are generating new models, and should be interpreted as an additional free parameter. However, if we want the early-time evolution of the braiding to be driven by the stable functions, we can impose an additional constraint to find suitable initial conditions. To see this, we start by linearising Equation (<ref>) around = 0. This approximation is valid for times x ≤ x_0, with x_0 sufficiently deep in the matter-dominated era, when we expect → 0.
By applying the integrating factor technique, we can express the solution to the linearised non-homogeneous differential equation as ^ (lin)(x) = [ 𝒱(x) + (x_0) ] e^𝒢(x) , where e^𝒢(x) = H(x_0)/H(x)M_∗^2(x)/M_∗^2(x_0) e^(x_0 - x) represents the time-varying part of the homogeneous solution. The function 𝒱 contributes to the particular solution that depends on the source term, 𝒮. It is calculated by integrating the following differential equation with the associated initial condition: 𝒱^' = e^-𝒢(x)𝒮(x) , 𝒱(x_0) = 0 . Note that this equation contains no information about . In contrast, (x_0) does depend on the value of the braiding at x=0. In the regime of interest here, besides applying the approximations in Equation (<ref>), we can also neglect radiation by setting Ω_ r = 0. This allows us to simplify the stable functions as power laws plus constants, write (H/H_0)^2 ≈ e^-3x, and approximate M_∗^2(x) ≈ M_∗^2(x_0) ≈ 1. Equation (<ref>) can then be integrated analytically for all times x ≤ x_0, and its solution consists of two parts: a constant term, v̅, that depends on x_0, and a sum of power laws. In the limit x ≪ x_0, the power laws vanish, leaving v̅ as the only remaining contribution. Now, let us assume a value such that (x_0) is either larger or smaller than -v̅. The early-time evolution of the approximate solution, Equation (<ref>), will be controlled by the homogeneous solution, Equation (<ref>), which ultimately scales as ∼±√(e^x). This scaling is completely independent of the cosmological parameters and late-time dynamics defined by the stable functions. To avoid this spurious behaviour, we must adjust such that α_B(x_0) = -v̅, which yields the following early-time solution: ^ (lin)(x) ≈ -(A_M) 2ζ_M + 3/ζ_M - 1/2 e^(ζ_M x + b_M) + C_/ζ_D - 1/2e^(ζ_D x + b_D) + (A_ρ) ( 1 - /) ζ_ρ/ζ_ρ + 5/2 e^(ζ_ρ + 3)x + b_ρ . As a case in point, we examine a designer gravity cosmology with a background and B_0 ≪ 1. For this model, Equation (<ref>) simplifies significantly, as we have = 1, = 3^2/2 ≪, and = 1. From the definition of the running, Equation (<ref>), and by imposing the condition = -, we derive a quadratic equation for the slope of , with the positive solution being ζ_M = (5 + √(73))/4. Reassuringly, this result matches the value derived by <cit.> in the limit Ω_ r→ 0 <cit.>. Finally, for each set of principal component weights, constants, power-law slopes, and amplitudes, we compute the stable functions and use Equation (<ref>) as an early-time target for the integrated braiding solution to obtain . In practice, we find the zero of the function defined as the definite integral over the range x ∈ [-5,-4] of the difference ^ (ode) - ^ (lin), where ^ (ode) is the solution to Equation (<ref>). For this step, we run only the background module of and search for in the interval [0, 2] [Although no theoretical argument prevents us from considering negative , we restrict ourselves to positive values at the cost of missing a small fraction of viable models. This choice is also supported by recent cosmological analyses, which found ≳ 0 with high statistical significance regardless of the employed parametrisation <cit.>.]. The root finder typically takes 6-7 iterations to reach convergence. Consequently, this process slightly degrades the performance presented in Section <ref> since more time must be spent on the background calculations.
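The sampling-plus-initial-condition workflow described in this section can be sketched as follows, assuming SciPy for the Latin hypercube, the ODE integration and the root finding. The parameter bounds, the right-hand side of the braiding equation and the linearised early-time target are schematic placeholders rather than the expressions used in this work, and in practice one should verify that the mismatch changes sign over the search bracket before calling the root finder.

```python
import numpy as np
from scipy.integrate import solve_ivp, simpson
from scipy.optimize import brentq
from scipy.stats import qmc

# Latin hypercube over (placeholder) parameter ranges for the stable functions
sampler = qmc.LatinHypercube(d=31, seed=1)
cube = qmc.scale(sampler.random(n=5000), -np.ones(31), np.ones(31))

def alphaB_backward(aB0, x, params):
    """Backward-integrated braiding for one candidate model (schematic RHS)."""
    damping = lambda s: 1.0 + 0.2 * params[0] * np.exp(s)
    source = lambda s: params[1] * np.exp(3.0 * s)
    rhs = lambda s, y: [source(s) - damping(s) * y[0] + 0.5 * y[0]**2]
    return solve_ivp(rhs, (0.0, x[-1]), [aB0], t_eval=x, rtol=1e-8).y[0]

def alphaB_linearised(x, params):
    """Early-time target, standing in for the linearised power-law solution above."""
    return 0.1 * params[1] * np.exp(1.5 * x)

def mismatch(aB0, params):
    x = np.linspace(0.0, -5.0, 600)
    window = x <= -4.0      # compare over ln a in [-5, -4]
    diff = alphaB_backward(aB0, x, params)[window] - alphaB_linearised(x, params)[window]
    return simpson(diff, x=x[window])

params = cube[0]            # first candidate model of the hypercube
aB0_star = brentq(lambda a: mismatch(a, params), 1e-4, 2.0, xtol=1e-6)
```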
The left panel of Figure <ref> shows the present-day matter power spectrum ratio for 12 models selected from approximately 200 stable cosmologies generated using the SH Latin hypercube. A viability rate of 4% might seem surprising given that all 5,000 sample models pass the stability criteria, Equation (<ref>). However, only a small fraction of these models feature stable perturbations not affected by exponentially growing modes, highlighting the importance of starting from the stable basis functions–using the α_i's to sample this parameter space would be even more challenging. Similar arguments apply to the right panel, where we find a higher stability rate of 20%, owing to the considerably smaller parameter volume. Returning to the left panel, the variety of shapes for the matter power spectrum ratios reflects the rich phenomenology of the SH landscape. Besides the familiar behavior displayed by models such as cubic Galileon, nKGB, and , the growth of structure can be slower than in , and can also exhibit non-trivial scale dependence arising from the interplay between the braiding scale, the Compton scale, and the scalar field sound horizon <cit.>. As expected, the power spectrum ratios shown in the right panel of Figure <ref> generally follow the trend of our test cosmologies (see, e.g., Figure <ref>). However, the particular scale dependence can depart significantly from the commonly adopted designer and Hu-Sawicki models. This behaviour can be primarily attributed to (i) a different, possibly time-dependent, gravitational coupling in the sub-Compton limit (i.e., μ_∞≠μ_∞^f(R) = 4/3), and (ii) a more complex evolution of the Compton wavelength. § SUMMARY AND OUTLOOK Horndeski gravity provides a comprehensive theoretical framework to explore scalar-tensor extensions of GR, offering rich phenomenology through diverse background expansion and structure formation scenarios. To study this extensive Horndeski landscape, we can restrict ourselves to the energy scales relevant to cosmology by adopting an effective-theory approach. In this work, we have introduced a numerical tool to facilitate the study and statistical analysis of Horndeski models expressed in the effective-theory language. At the level of the background and linear perturbations, models within the scalar Horndeski sub-class are entirely described by four functions of time. The common α-parametrisation, proposed by <cit.>, neatly separates the phenomenology of the background from that of the growth of structure and is implemented in Einstein-Boltzmann solver . This code also includes the quasi-static approximation scheme for improved computational efficiency and to extend applicability to models where following the full dynamics of the scalar field is challenging. Despite these key computational advances, to reliably extend the use of to a broader portion of the Horndeski landscape, we must first address the following critical points: * Simplify the implementation of specific models, which currently still requires modifications to the source code. * Generalise the input functions to more flexible parametrisations, going beyond { w_0, w_a } for , and ∝Ω_ϕ(a) or a^s for the α-functions. * Efficiently select stable models, eliminating the need to search for parameter combinations free from ghost and gradient instabilities. * Improve on the accuracy of the QSA implementation in . * Correct the erroneous labeling of healthy models as unstable due to limited numerical precision in the calculation of . 
The extension released with this work, , effectively resolves all the points above by: * Replacing the α-functions directly with the parameters used in the stability conditions, {, , }. To ensure the absence of ghosts and gradient instabilities, users need only to provide positive input functions. * Accepting general functions of time as input, with the constraints → 0, → 1, and ≪ deep in the matter-dominated era. In other words, the model must resemble the cosmology at early times. * Including mathematical (or classical) stability conditions to catch exponentially growing modes, as done in . * Embedding a QSA implementation based on modified Poisson-like equations for the metric potentials. We validated 's accuracy and performance by comparing its output power spectra and computational efficiency to those of the publicly available Einstein-Boltzmann solvers , , and . For all tested cases, we found that the predicted changes to the CMB spectra due to new physics matched the output from the other solvers within 0.05%, while for the matter power spectrum, the agreement reached 0.01%. Thanks to its alternative QSA implementation and to the manifestly stable EFT basis, generally improves on the accuracy and numerical stability attained by , with f(R) gravity being an extreme case that cannot even solve. In addition, we proposed a new parametrisation for the stable functions and the background energy-density of the scalar field. This approach uses a simple early-time power-law (or constant) evolution and models late-time smooth deviations with Gaussian Processes, combined with principal component analysis for dimensionality reduction. We showed that this parametrisation can accurately reconstruct our test cosmologies, both in terms of input functions and output power spectra. As an initial application, we used it to generate Horndeski models with either all input functions free or by fixing relations and values to closely resemble gravity. The linear growth of structure in these models can vary significantly, deviating from the well-known behavior of our test models. This diversity emphasises the necessity of exploring beyond the commonly employed parametrisations <cit.>. The high versatility of 's Python wrapper, , ensures seamless integration into the pipeline of cosmological analyses using data probing both the background evolution and the linear regime of structure formation. This will provide an alternative avenue of investigation to the reconstruction strategies employed in, e.g., <cit.> and <cit.>, and will extend the analysis of studies based on fixed-form parametrisations of the α- or EFT-functions <cit.>. In a future release, we will also include a special case of the stable parametrisation enforcing the relation = b, which can be more effective for analyses restricted to models in the generalised Brans-Dicke sub-class <cit.>, where = -, or to investigate no-slip gravity modifications <cit.>, defined by = -2. The larger volume of Horndeski theory space covered by the new parametrisation will likely require scalable sampling methods capable of efficiently handling the high dimensionality of the problem <cit.>. One such method is Hamiltonian Monte Carlo <cit.>, a gradient-based sampling technique that leverages the derivatives of the likelihood function to significantly enhance the acceptance rate of samples in high-dimensional spaces. To implement this, we could rewrite using the framework, taking advantage of its automatic differentiation capabilities, as demonstrated by <cit.>. 
Alternatively, a simpler approach might involve using to generate training data for neural network emulators targeted to specific summary statistics <cit.>. These emulators can then be integrated into cosmology libraries that feature automatic differentiation <cit.>. Finally, for general Horndeski models, 's calculations represent a fundamental first step towards accessing the cosmological information contained on small scales <cit.>. For instance, in the halo model reaction framework <cit.>, the linear matter power spectrum and background evolution are key ingredients for obtaining the mean halo statistics and non-linear matter power spectrum of non-standard cosmologies. This method, implemented in the code <cit.>, can be applied to Lagrangian-based theories of gravity <cit.>, to phenomenological modifications to GR <cit.>, to an interacting dark sector <cit.>, and to a broader class of parametrised extensions within the Horndeski realm <cit.>. § ACKNOWLEDGEMENTS We acknowledge the other developers, Miguel Zumalacárregui and Ignacy Sawicki, for sharing a private version of the code. We are grateful to Noemi Frusciante and Alessandra Silvestri for their help with . We would also like to thank Lucas Lombriser for insightful discussions on the inherently stable parametrisation, and Lucas Porth for brainstorming strategies to optimise initial conditions. Finally, a special thanks to Alexander Eggemeier for suggesting a memorable name for the code released with this work. We acknowledge the use of the software packages , , , and . Additionally, was utilized for comparison purposes and to compute the stable basis functions of some test cosmologies. MC acknowledges the sponsorship provided by the Alexander von Humboldt Foundation in the form of a Humboldt Research Fellowship for part of this work. He also acknowledges support from the Max Planck Society and the Alexander von Humboldt Foundation in the framework of the Max Planck-Humboldt Research Award endowed by the Federal Ministry of Education and Research. EB has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 754496. § PRECISION PARAMETERS USED IN AND The level of agreement between and strongly depends on the precision settings adopted in the two codes <cit.>. To minimize the impact of numerical errors on the power spectra produced by different implementation strategies, code comparison analyses often employ high-precision settings <cit.>. Following this philosophy, we use high-precision parameter values for both and . Similarly, for and we use . In addition, for we set , and for we use the default . § RECONSTRUCTION WITH PADÉ APPROXIMANTS A Padé approximant of order [N/M] for a function f(x) is given by the ratio of two polynomials of degrees N and M, Q(x)= [ ∑_n=0^N α_n( x - x_0 )^n ] / [ 1+∑_m=1^M β_m( x - x_0 )^m ] , where the coefficients α_n and β_m are chosen such that the Taylor series expansion of Q(x) around x = x_0 matches the Taylor expansion of f(x) up to order N+M. Here, we quantify the reconstruction accuracy of the Padé approximants for the deviations, Δ_J, used in the new parametrisation of the stable functions described in Section <ref>. We perform the expansions around x_0 = 0, but have verified that our conclusions are not sensitive to this particular choice. Since we lack analytical expressions for the Δ_J of our test models, we cannot derive the Padé coefficients by matching the Taylor series of Δ_J order by order.
Instead, we interpret Equation (<ref>) as a fitting function and determine the best-fit values for α_n and β_m by minimising the reconstruction error. To ensure a fair comparison with the GP + PCA strategy discussed in Section <ref>, we use the same number of free parameters by setting N = 2 and M = 3, except for Δ_M, where N = 3. Figure <ref> illustrates the performance of the Padé approximants for two representative cases: and . Reconstruction errors can reach approximately 8% for the de-mixed kinetic term and around 1% for the Planck mass shift. In comparison, Figure <ref> demonstrates that our GP + PCA approach can match the exact quantities to better than 1%, which motivates the use of this reconstruction technique. § LATIN HYPERCUBE BOUNDARIES Table <ref> and <ref> list the parameter ranges defining the Latin hypercubes used to generate random models belonging to the Scalar Horndeski and -like classes, respectively (see Section <ref>). § USEFUL EQUATIONS IN §.§ Continuity equation When users select , the scalar field background energy-density, , is integrated using the following continuity equation with the corresponding initial condition: ρ̇_ϕ = -3(1 + w_ϕ) , ρ_ϕ(0) = Ω_ϕ H_0^2 . Here we remind the reader that overdots denote derivatives with respect to ln a, and primes (used below) represent derivatives with respect to conformal time. The choice between ln a or τ as the time variable is dictated by the original structure, with the former used in the module and the latter in the module. §.§ Scalar field fluctuations The coefficients and the source term in Equation (<ref>) can be found in Appendix A.2 of  <cit.>. For convenience, we provide them here: 𝒜(τ) = (2-) , ℬ(τ) = 8 aH λ_7 , 𝒞(τ) = -8 a^2H^2 λ_8 , 𝒟(τ) = 2 , ℰ(τ,k) = 2/aHk^2η + 3a/2H [ 2λ_2 δ - 3(2-)δ] , where λ_2 = -3( + )/H^2 - (2-)H^'/aH^2 + ^'/aH , λ_7 = /8(2-) [ 4 + + 2H^'/aH^2 + ^'/aH] + /8λ_2 , λ_8 = -λ_2/8 ( - 3λ_2 + 3^'/aH) + 1/8 (2-) [ (3λ_2 - )H^'/aH^2 - 9^'/2aH^3] - /8(2-) [ 4 + + 2H^'/aH^2 + ^'/aH] . §.§ Quasi-static approximation In the effective-field formulation of the quasi-static approximation, the gravitational coupling for the non-relativistic matter and the gravitational slip can be expressed as in Equation (<ref>). The small-scale limits can be expressed in terms of the α's as μ_∞ = 2 + ( + 2)^2/2 M_∗^2 , μ_Z, ∞ = 2 + ( + 2)/2 M_∗^2 , while μ_ p = 9 { aH [ 2 + ( - 2) + 4(-1)] ( + + + ) + 2(^' + ^') }/4aH^3 . Here, we also present a typo-corrected equation for the metric perturbations in synchronous gauge, η, used in our patch to : η^' = 1/2a^2/9 a^2μγ(1+w_ m)/2+k^2{3(1+w_ m)μγθ[1+3 ( ℋ^2-ℋ^'/k^2) ] + 3Δ_ m[ℋμ(γ-1)-μ^'γ-μ^'γ] . + 9μ(1-γ)(1+w_ m)σ_ m^' + k^2α[3μγ(1+w_ m)-2(ℋ^2-ℋ^'/a^2)] + 9ℋμ(γ-1)(1+w_ m)σ_ m(3w_ m+2) . - 9(1+w_ m)σ_ m[μ^'(γ-1)+γ^'μ+μ(γ-1)w_ m^'/1+w_ m] } , where ℋ is the conformal Hubble parameter. The corrections compared to Equation (26) in <cit.> include removing an extra power of two from ℋ^' in the first line, and changing the minus sign to a plus sign in front of the γ^'μ term in the last line. aasjournal.bst
http://arxiv.org/abs/2407.12530v1
20240717131201
One-dimensional gas-fueled nuclear reactor with thermal feedback
[ "Mathis Caprais", "Kacim François-Elie", "Daniele Tomatis" ]
physics.comp-ph
[ "physics.comp-ph" ]
One-dimensional gas-fueled nuclear reactor with thermal feedback
Mathis Caprais (mathis.caprais@universite-paris-saclay.fr) and Kacim François-Elie (kacim.francois-elie@universite-paris-saclay.fr), Université Paris-Saclay, Gif-sur-Yvette, 91190, France
Daniele Tomatis, Via Almese 15, Torino, 10129, Italy
This study explores a simplified one-dimensional subchannel of a graphite-moderated nuclear reactor operating with a gaseous core in steady-state conditions, reproducing a coupled neutronics-thermal-fluid-dynamics problem with thermal feedback. The fuel gas, consisting of a homogeneous mixture of uranium hexafluoride (UF_6) and helium, is assumed to be ideal, with simplifications made to its thermodynamic state. Due to the high thermal expansion of the fuel, a strong coupling between the two physics is anticipated. The discrete ordinates method is used to compute the one-group scalar flux, the effective multiplication factor and the power released by the core. Six groups of Delayed Neutron Precursors (DNPs) are used to take into account the fuel motion drift. Compressible Euler equations are solved with a monolithic approach and the two-physics problem is treated with Picard iterations. As expected, the effective multiplication factor of a subchannel is shown to increase with the inlet pressure. The critical pressure, representing the threshold at which the system achieves criticality, changes as the fuel mixture changes. High thermal feedback coefficients are observed due to the high thermal expansion of the fuel. The amount of helium in the mixture greatly affects the temperature at the core outlet in critical configurations. This study shows that a gaseous fuel reactor can be brought to criticality by varying the inlet pressure. The thermal feedback is strong and should be taken into account in the design of the system. § INTRODUCTION Gas and Vapor Core Reactors (G/VCR) are nuclear fission reactors using a mixture of fissile gas as a fuel. These reactors were often considered using a uranium fluoride gas UF_n, n=1 to 6, mixed with other metallic fluorides and helium. These systems were extensively studied between the 1950s and the 1980s <cit.>. A gaseous fuel offers the possibility of very high working temperatures <cit.> (from thousands to tens of thousands of degrees <cit.>) and also more direct ways to convert the energy released by fission into electricity such as magneto-inductive or magneto-hydrodynamic conversion <cit.>. Gas core reactors also offer a homogeneous burn-up, continuous reprocessing of the fuel and a lower mass of fissile materials than traditional reactors <cit.>. Some of these advantages were observed in a demonstrator built and operated in the Soviet Union, using UF_6 as a fuel <cit.>. The fuel was enriched to 90% in ^235U, flowing through beryllium-moderating channels surrounded by a graphite reflector <cit.>. Refueling was done continuously. The reactor demonstrated large negative feedback coefficients due to the high thermal expansion of the fissile gas. The studies cited above focused on the performance of the reactor rather than on the coupling between the neutronics and the thermodynamics of a compressible, flowing neutron-multiplying gas. In particular, this paper investigates the influence of the fissile gas pressure on its criticality in a coupled-physics framework. 
In this study, the coupling between the neutronics and the thermodynamics of a neutron-multiplying gas is investigated through the classical problem of the reactor subchannel <cit.>. In Sec. <ref>, the main equations describing the gaseous fuel are presented. The reactor subchannel is studied in steady state. In Sec. <ref>, the neutron balance equations are presented, taking into account the drift of Delayed Neutron Precursors (DNPs) due to the circulation of the gas. All equations are discretized using finite volumes on an equidistant mesh. A one-energy-group deterministic solver of the neutron transport equation has been developed for fast calculations of the coupled problem in a simplified geometry. This allows for a fast exploration of the parameter space and avoids coupling a costly Monte-Carlo code to a CFD solver. The compressible Euler equations are solved monolithically using Newton's algorithm, with the Jacobian matrix computed analytically. The thermal and pressure feedback on cross-sections is taken into account using the state law of the fuel. The coupled system is solved using Picard iterations. The macroscopic cross-sections of the nuclear fuel are generated using the OpenMC Monte-Carlo code <cit.>. The evolution of the effective multiplication factor as a function of pressure, temperature distributions and feedback coefficients are presented in Sec. <ref> for different fuel mixtures. § PHYSICAL MODEL §.§ Thermodynamic equations of gaseous fuel In this section, the balance equations and closure law for a gas mixture are given. §.§.§ Physical parameters of interest The fuel gas is assumed to be ideal and composed of a mixture of helium and UF_6, with mixing atomic fraction e. UF_6 is chosen as the fissile gas because it has already been used in criticality experiments. Ideality greatly simplifies the equations describing the temperature (T, ), velocity (u, ), density (ρ, ) and pressure of the fuel (p, ). The ideal gas assumption also provides a closure law for the compressible Euler equations <cit.>. The ideal gas law states that p = ρ R_s T, where R_s = R / M is the gas specific constant (), with R = 8.314 the ideal gas constant and M the gas molar mass (). The ideal gas law and all the following properties of the mixture can be derived from statistical physics assuming the energy of the molecules is purely kinetic <cit.>. Even though UF_6 is a complex molecule with internal degrees of freedom, the fuel is assumed ideal at high pressure and temperature, as the literature on the evaluation of the thermophysical properties of the gas in this range of pressure and temperature is scarce. The ideal gas law preserves the high thermal expansion of the fuel, which is of interest for assessing the thermal feedback. The average molar mass of the mixture is defined as M = e M_UF_6 + (1-e) M_He. It is assumed that the heat capacity ratio, γ = C_p/C_v, is known for both gases. This ratio tells us how heat is used in the gas. At constant volume (C_v) all the energy is converted into a temperature increase, while at constant pressure (C_p) the system is free to expand. This leaves less energy to raise the temperature of the gas, and thus C_p > C_v so γ > 1. Therefore, when receiving the same amount of energy, a gas with a heat capacity ratio larger than unity will expand more than a gas with γ≃ 1. Using Mayer's relation for the mixture, C_p - C_v = R, the average heat capacity ratio γ̅ and the specific heat at constant volume c_v are calculated as γ̅ = 1 + [ e/(γ_UF_6 - 1) + (1-e)/(γ_He - 1) ]^-1 and c_v = R_s/(γ̅ - 1). 
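The mixture relations above are straightforward to evaluate numerically. The sketch below does so for an equimolar mixture, using textbook molar masses and approximate heat capacity ratios as stand-ins for the values listed in Table <ref> (which are not reproduced here); the example state (p, T) is likewise purely illustrative.

```python
R = 8.314                                   # ideal gas constant, J/(mol K)
M_UF6, M_He = 352.0e-3, 4.0e-3              # molar masses in kg/mol (approximate)
gamma_UF6, gamma_He = 1.07, 5.0 / 3.0       # heat capacity ratios (UF6 value approximate)

def mixture_properties(e):
    """Return (M, R_s, gamma_bar, c_v) for a He/UF6 mixture with UF6 atomic fraction e."""
    M = e * M_UF6 + (1.0 - e) * M_He                      # average molar mass
    R_s = R / M                                           # specific gas constant
    gamma_bar = 1.0 + 1.0 / (e / (gamma_UF6 - 1.0)        # Mayer's relation for the mixture
                             + (1.0 - e) / (gamma_He - 1.0))
    c_v = R_s / (gamma_bar - 1.0)                         # specific heat at constant volume
    return M, R_s, gamma_bar, c_v

M, R_s, gamma_bar, c_v = mixture_properties(e=0.5)        # equimolar fuel
p, T = 4.0e6, 600.0                                       # example state in Pa and K (illustrative)
rho = p / (R_s * T)                                       # ideal-gas closure
```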
Higher helium content reduces the constant volume heat capacity per unit of volume c_v = C_v / ρ, thus allowing higher temperatures at the outlet while preserving the rated thermal power of the system. The quantities used in the equations above take the values listed in Table <ref> in the following. The heat capacity ratio of UF_6 is a function of temperature. Over the range of high pressures and temperatures of this study, its heat capacity ratio varies from 1.08 to 1.062 <cit.>, which represents a relative variation of less than 2%. Therefore, the heat capacity ratio of UF_6 is assumed to be constant. §.§.§ Balance Equations The evolution of the mass, momentum and energy of the gas is described by the steady-state Euler equations, written as a continuity equation, ∇·F(X) = S with S = [ 0; 0; P_v ] and X = [ ρ; j; ε ], where S is a source of mass, momentum and energy, P_v is the fission volume heat source, and X is the conservative vector containing the density ρ, the volume momentum j = ρ u and the volume total energy ε. The operator F acting on the vector X is <cit.>, F(X) = [ j; j^2/ρ + p; (ε + p) j/ρ ] = [ j; j^2/ρ + (γ̅ - 1)(ε - j^2/(2ρ)); (ε + (γ̅ - 1)(ε - j^2/(2ρ))) j/ρ ], where the pressure has been eliminated using the ideal gas law relating conservative quantities. The non-conservative variables (temperature, pressure, velocity) are deduced from the conservative variables using the following relations, u = j/ρ, p = (γ̅ - 1)(ε - j^2/(2ρ)), T = (ε - j^2/(2ρ))/(ρ c_v). §.§ Neutron balance equation In a gaseous fuel the diffusion approximation is not valid, as scattering macroscopic cross-sections are of the same order of magnitude as absorption macroscopic cross-sections. Therefore, the one-dimensional, one-group transport equation is solved using the discrete ordinates or S_N method. The angular dimension is treated using the Gauss-Legendre quadrature, that is, with the cosines of the angles given as roots of the Legendre polynomials and weights selected to preserve their integrals over [-1, 1] <cit.>. The sum of the ω_i is taken to be equal to 2, since 4π = ∫_4π dΩ = 2π ∫_-1^1 d(cosθ) ≃ 2π ∑_i ω_i. The scalar flux ϕ is calculated as the angle integral of the angular flux ψ, ϕ(x) = 2π ∫_-1^1 dμ ψ(x, μ) ≃ 2π ∑_i ω_i ψ(x, μ_i). The balance equation for the angular flux ψ in the i-th sampled direction is, μ_i ∂ψ_i/∂x + Σ_t ψ_i = Σ_s/2 ∑_k ω_k ψ_k + q_ext, where Σ_t is the total macroscopic cross-section and Σ_s is the scattering macroscopic cross-section (anisotropic scattering is neglected). The source q_ext is given as: q_ext = (1-β) νΣ_f/(2 k_eff) ∑_k ω_k ψ_k + ∑_j λ_j/(4π) C_j, where Σ_f and ν are respectively the macroscopic fission cross-section and the neutron multiplicity. The effective multiplication factor k_eff plays the role of the eigenvalue for the homogeneous problem. The total delayed neutron fraction β = ∑_j β_j is the sum of the delayed neutron fractions. The precursor concentration C_j in the j-th DNPs group is given as the solution of the transport equation: ∂(u C_j)/∂x + λ_j C_j = β_j νΣ_f/(2 k_eff) ∑_k ω_k ψ_k, where β_j and λ_j are respectively the DNPs fraction and the decay constant of the j-th precursors group. The concentration of DNPs at the inlet is set to zero, meaning that fresh fuel constantly enters the reactor, or that the recirculation time is very long. Vacuum boundary conditions are imposed at the edges of the slab for the angular flux, ψ(0, μ > 0) = ψ(ℓ, μ < 0) = 0. § A CASE STUDY §.§ Reference Configuration The reactor studied is shown in Fig. <ref>.
It is a 5 high cylinder of nuclear graphite with density 1.82e3 composed of N_channels=100, 20 by 20 subchannels shown in Fig. <ref>. Each subchannel is a tube of 10 in diameter surrounded by a 0.5 cylindrical shell of BeO. Beryllium oxide serves both as refractory material <cit.> and a neutron reflector between the graphite channel and the gaseous fuel. The height was chosen to ensure criticality. The subchannel diameter was chosen in order to balance moderation and increased velocity through thermal feedback. The inlet velocity is set to be 20, the inlet pressure 40[1=101325] and the inlet temperature 600. The reference thermal power of the system is P_th=3. §.§ Fuel One-group macroscopic cross-sections are prepared by Monte Carlo calculations that reproduces the working conditions of a subchannel in the reactor. The subchannel is filled with a mixture of 50%UF_6-50%He. The gas is enriched at 3% in ^235U. Reflective boundary conditions are imposed on the boundaries of the subchannel. The simulation is conducted using OpenMC 0.13.3 at nominal parameters <cit.>. OpenMC is an open source Monte-Carlo transport code developed by researchers from the Computational Reactor Physics Group at the MIT. It is capable of solving multiple kind of neutron and photon transport problems. It is convenient to use OpenMC to generate multigroup cross-sections libraries, which can then be used in deterministic codes such as OpenMOC <cit.>. The nuclear data library used is ENDF-VIII0 <cit.>. 50 batches are simulated, the first 10 being inactive. 10000 particles are simulated per batch. Six DNPs groups are used. Cross-sections are then calculated at any temperature and pressure using the ideal gas law, such as: Σ(T,p) = Σ(T_0,p_0) p T_0/p_0 T. Macroscopic cross-sections are expected to increase with pressure and decrease with temperature. Eq. (<ref>) assumes that the microscopic cross-sections do not change significantly with different thermodynamic states. The validity of the assumptions leading to Eq. (<ref>) is tested by varying the mixture's temperature, showing an agreement within 1 with the Monte Carlo results. Density variations showed larger differences instead, up to 40. Results for a validation covering a larger range of thermodynamic states will be carried as future development. §.§ Full-core criticality calculation The reactor depicted in Fig. <ref> reaches criticality, = 1.0110± 0.0004, at 50 with an equimolar fuel. Vaccuum boundary conditions are imposed at the surfaces of the graphite cylinder. The effective multiplication factor was found to be an increasing function of the fuel pressure. Increasing the pressure increases the mass of fissile materials within the subchannels ultimately increasing the overall reactivity. § DISCRETIZATION OF THE EQUATIONS Let Δ x>0, the segment representing the reactor core is subdivided into an equidistant sequence of 𝐍 intervals K_i defined by K_i=(x_i-1 / 2, x_i+1 / 2), x_i+1 / 2=i Δ x, ∀ i ∈ℤ, as shown in Fig. <ref>. The center x_i of the cell K_i is x_i=(x_i+1 / 2+x_i-1 / 2) / 2, x_i-1 / 2 and x_i+1 / 2 mark the left and right faces of the cell i, respectively. §.§ Discretized neutron balance equation The neutron balance equation is integrated in every cell, in which the cross-sections are uniform. Spatial integration yields: 1/Δ x∫_x_i-1 / 2^x_i+1 / 2xΣψ = Σ_i ψ_i, where the reaction and the angular direction indices are dropped for simplicity. 
The spatial gradient of the angular flux is approximated using the upwind approximation, 1/Δ x∫_x_i-1 / 2^x_i+1 / 2xμψx≃μ/Δ x (ψ_i - ψ_i-1). While integrating the DNPs balance equation, the upwind scheme is used for the advection term: 1/Δ x∫_x_i-1 / 2^x_i+1 / 2xxuC ≃u_i C_i - u_i-1C_i-1/Δ x, and1/Δ x∫_x_i-1 / 2^x_i+1 / 2xλ C = λ C_i, where again the index of precursor family is dropped for simplicity. Cell quantities are considered as volume-averaged. It is assumed that u_i C_i is almost equal to the average DNPs mass flux in the i-th cell (uC)_i. §.§ Discretization of compressible Euler equations The flux vector is integrated on the volume of a cell such as, 1/Δ x∫_x_i-1 / 2^x_i+1 / 2xxF(X) = F_i+1 / 2 - F_i-1 / 2/Δ x≃F_i - F_i-1/Δ x, where the upwind scheme has been used. § NUMERICAL SCHEME The discretized equations presented above, and the coupling schemes presented hereafter are implemented in Python 3.10 with NumPy 1.26.1. §.§ Monolithic coupling of Euler's equations Euler's equations are solved as a fixed point problem. The source vector is moved on the left-hand side of the balance equation (<ref>), such as P(X) = ÷F(X) - S = 0 and the fixed-point problem is solved using Newton's method. Non-homogeneous Dirichlet boundary conditions are applied for the pressure, temperature and velocity: p(x=0) = p_0, T(x=0) = T_0, u(x=0) = u_0 so thatρ(x=0)=p_0/R_s T_0. These boundary conditions are written in conservative variables: j (x=0) = u_0 p_0/R_s T_0andε (x=0) = p_0/γ̅ - 1 + 1/2u_0^2 p_0/R_s T_0. In the following equations, K_x denotes the general first order discrete derivative operator, such as: K_x = 1/Δ x[ 1 0 0 0 … 0; -1 1 0 0 … 0; 0 -1 1 0 … 0; 0 0 -1 1 … 0; ⋮ ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 … -1 1; ] and the bold versions of the conservatives fluxes denote their discretized versions along the x-axis. The Jacobian of the system is: J(X) = [ 0_N× N K_x 0_N× N; ; j^2/*ρ^2γ̅ - 3/2⊙ K_x j/*ρ(3-γ̅)⊙ K_x (γ̅ - 1) K_x; ; (γ̅ε/*ρ - (γ̅ - 1)j^2/*ρ^2)(-j/*ρ)⊙ K_x (γ̅ε - 3/2(γ̅ - 1)j^2/*ρ)1/*ρ⊙ K_x γ̅j/*ρ⊙ K_x; ; ] where ⊙ is the element-wise or Hadamard product. The updated vector of conservative variables at the (n+1)-iteration is given by, X^(n+1) = X^(n) + δX^(n), and the correction δX^(n) is given by: J(X^(n))δX^(n) = - P(X^(n)). Newton's algorithm is considered converged when: X^(n+1) - X^(n)_∞/X^(n)_∞ < 1e-6. §.§ Coupling with neutronics The coupling scheme to solve the coupled thermodynamic and neutronic problem is presented in Fig. <ref>. When the conservative vector X is obtained, the pressure, velocity and temperature are evaluated with Eq. (<ref>). Then, the angular flux is calculated for the N angular directions with the updated macroscopic cross-sections. Each component of the angular flux is computed until convergence on the scattering source is attained. Power iterations are then performed on the fission source which accounts for both prompt and delayed neutrons. The norm of the scalar flux is imposed knowing the thermal power of the channel, P_th = 1/N_channels∫V P_v =1/N_channels∫VκΣ_f ϕ, where κΣ_f () is the energy released by a fission event times the macroscopic fission cross-section. Power normalization must be ensured at each iteration before solving the Euler equations. Both solvers exchange information until convergence on the effective multiplication factor and the neutronic vector Y containing the scalar flux and precursors concentrations is reached, 1 - ^(n+1)/^(n) < 1e-6andY^(n+1) - Y^(n)_∞/Y^(n)_∞ < 1e-6. 
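To make the coupling scheme explicit, the following Python skeleton sketches the Picard loop between the two solvers. The callables `solve_transport`, `solve_euler`, `update_cross_sections` and `normalize_power` are hypothetical placeholders for the routines described above; this is a structural sketch, not the paper's released code.

```python
import numpy as np

def picard_coupling(solve_transport, solve_euler, update_cross_sections,
                    normalize_power, X0, tol=1e-6, max_iter=100):
    """Sketch of the Picard iteration between neutronics and thermodynamics.

    update_cross_sections(X) -> cross sections rescaled with the local p, T
    solve_transport(xs)      -> (k_eff, Y), Y = scalar flux + DNP concentrations
    normalize_power(Y, xs)   -> fission power density matching the rated P_th
    solve_euler(P_v)         -> conservative vector X from a monolithic Newton solve
    """
    X, k_old, Y_old = X0, None, None
    for _ in range(max_iter):
        xs = update_cross_sections(X)      # thermal/pressure feedback on the cross sections
        k_eff, Y = solve_transport(xs)     # S_N sweeps + power iterations on the fission source
        P_v = normalize_power(Y, xs)       # impose the thermal power of the channel
        X = solve_euler(P_v)               # update rho, j, epsilon with Newton's method
        if k_old is not None:
            dk = abs(1.0 - k_eff / k_old)
            dY = np.max(np.abs(Y - Y_old)) / np.max(np.abs(Y_old))
            if dk < tol and dY < tol:      # convergence criteria of the coupled problem
                break
        k_old, Y_old = k_eff, Y
    return k_eff, Y, X
```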
§ RESULTS The following results were obtained with 16 angular directions and 500 spatial cells. §.§ Reactivity as a function of pressure Higher pressure while keeping the inlet temperature and the thermal power unchanged shows a flow with higher density gas along the channel, which in turn yields higher multiplication factor due to the cross-section model from Eq. (<ref>). This behavior can be noticed in Fig. <ref> for different fuel mixtures. As anticipated, the shift from subcritical to critical state takes place at a reduced inlet pressure when employing a pure fissile gas, in contrast to a gas mixture with a higher proportion of helium. Higher helium content in the mixture results in higher temperature justifying the slower rate in achieving criticality. Moreover, still as per Eq. (<ref>) cross-sections decrease when temperature increases, explaining further the trend of the multiplication factor in the plot. §.§ Temperature distributions §.§.§ Non-critical temperature distributions Fig. <ref> shows the distributions of temperatures for different proportions of helium in the fuel mixture. As pointed out in the derivation of the main equations, an increase in the helium content in the fuel reduces the heat capacity of the mixture. This turns out achieving higher temperature at the core outlet. Fig. <ref> shows that very high temperature gradients can be obtained, possibly leading to higher thermal efficiency of the conversion system. However, the amount of helium should be adjusted to obtain criticality at a viable pressure level according to the technological constraints of the system. The systems with higher helium amount are farer from criticality, provided that the inlet conditions do not change. The increase of the helium fraction not only increases the critical mass to operate, but it also causes stronger thermal feedback due to the steeper temperature gradient in the core for the same power level and inlet conditions. §.§.§ Criticality & optimization of the outlet temperature The distribution of temperature in the subchannel for critical systems is shown in Fig. <ref>. To obtain criticality at a fixed thermal power, the inlet pressure is set as a free parameter. A Newton's algorithm iterates in order to find the pressure that yields a multiplication factor of unity. In Fig. <ref> the outlet temperature of the core now varies as a function of the fuel composition. These variations can be explained using an energy balance over the system. The enthalpy of the gas mixture changes due to the thermal power, and the temperature difference Δ T between the outlet and the inlet of the system is, Δ T (e) = P_th/A_0 u_0 ρ_0 (e) c_p (e). The mass flow rate u_0 ρ_0 A_0 is constant thanks to the conservation of mass, Eq. (<ref>), and A_0 is the cross-sectional area of the subchannel. The heat capacity c_p is constant in space and is given using Mayer's relation Eq. (<ref>), but depends on the molar fraction of UF_6 in the mixture. The critical pressure is also a function of the molar fraction, and is fitted using an inverse power law, p_ref / e^α, where p_ref is the critical pressure for pure UF_6 and α (=0.7628) is a fitting parameter (r^2 = 0.99877). The outlet temperature is then a function of the molar fraction e. A balance appears between density that increases with e and the heat capacity c_p at constant pressure per unit of mass that decreases with higher amounts of UF_6. Replacing ρ and c_p with their expressions in Eq. 
(<ref>) and maximizing the temperature difference with respect to e yields, e_opt = α(γ_UF_6 - 1)/(α - 1)(γ_UF_6 - γ_He) = 0.549. A nearly equimolar mixture of fissile gas and helium is the optimal composition to obtain the highest outlet temperature at constant thermal power. §.§ Pressure, velocity, density and Mach number Pressure, velocity and Mach number distributions are calculated when a converged critical steady-state is reached in the subchannel by changing the inlet pressure. The evolution of these quantities greatly differs for different fuel compositions. The Mach number is calculated for the different fuel compositions with the speed of sound calculated using the ideal gas law, Ma = u/√(γ̅p/ρ), and is larger for mixtures with a lower helium content, Fig. <ref>. This is a consequence of the heat capacity ratio and specific gas constant being both decreasing functions of e. The Mach number reaches 0.25 at the outlet of the subchannel for a mixture containing only fissile gas. Higher Mach numbers can be obtained at lower pressure or higher mass flow rates. This should be carefully considered when designing the system as the flow would experience compressibility effects above Ma = 0.3, and because the inlet velocity is already very low for a gas flowing in a channel. Strong density variations are observed along the channel, Fig. <ref>. The density decreases along the channel due to the increase in temperature, and the density decreases more rapidly for optimal mixtures due to higher temperature gradients, Fig. <ref>. On Fig. <ref>, the relative distributions of pressure and velocity are presented. The pressure decreases along the channel, and a pressure drop of 1, or 3 relative variation the inlet pressure is observed. Fuels with a higher UF_6 content experience a higher pressure drop along the channel due to higher velocities attained, Eq. (<ref>). On Fig. <ref>, the fuel velocity increases along the channel and reaches up to 120 its inlet value for the equimolar mixture subjected to higher temperature gradients. This is a consequence of the mass balance equation of Eq. (<ref>). As the temperature increases, the density decreases, and the velocity increases to maintain the same mass flow rate. The density decreases more rapidly for a higher helium content in the mixture due to higher temperature gradients, Fig. <ref>. §.§ Feedback coefficients The reactivity coefficients characterizing the thermal feedback are discussed in this section. They are defined as partial derivatives of the neutron reactivity with varying thermodynamic state of the multiplying system, which is represented by the average core temperature or pressure. Derivatives are approximated by ratios of finite differences. The temperature feedback coefficient is calculated as, α = ϱ_pert - ϱ_nom/⟨ T_pert⟩ - ⟨ T_nom⟩, T_pert = T_nom(1 + ϵ), where ϱ denotes the static reactivity defined as 1 - 1 /. The brackets represent the mean of the considered distribution over the core, ϵ is a perturbation parameter taken to be equal to 1e-6. Fig. <ref> shows that a growth in the helium proportion increases the magnitude of the thermal feedback coefficients. This can be explained using Eq. (<ref>) and Fig. <ref>. As the proportion in helium increases, the heat capacity decreases, resulting in higher temperature gradients for the same thermal power. As the fuel density is proportional 1/T, the larger the temperature gradient the lower the density. This leads to an overall reduction of the macroscopic cross-sections as shown by Eq. 
(<ref>). In this analysis, the pressure contribution is neglected as the relative pressure variations along the channel are less than 2.1. A decrease of the macroscopic cross-sections ultimately induces a loss of reactivity, much bigger at higher temperatures. Replacing the temperature by the pressure distribution in Eq. (<ref>) allows computing pressure coefficients, named δ as shown in Fig. <ref>. Fig. <ref> confirms the trend observed in Fig. <ref>. For a reference point of 40, changes in reactivity are larger for a fuel gas with a higher helium proportion. An increase of the pressure is a way to insert reactivity in such system. § DISCUSSION & CONCLUSION This work explored the physical behavior of a stationary ideal gas fuel reactor. The fuel mixture was composed of enriched uranium hexafluoride and helium flowing in a graphite subchannel coated with beryllium oxide. The addition of helium, an inert gas, served as a practical means to reduce the heat capacity of the fuel, allowing for a higher outlet temperature at constant thermal power without reacting with the core structure or activating under irradiation. A critical subchannel can be reached by varying the inlet pressure of the reactor while keeping the thermal power constant. An increase in pressure brings the mass of the fuel closer to the critical mass by increasing its density. The use of helium in the mixture increases the critical pressure due to dilution of fissile materials and higher temperature gradients. The system exhibits high temperature feedback coefficients due to a fuel thermal expansion coefficient equal to the inverse of temperature. A pressure growth induces an increase in reactivity. However, non-ideality of the gas mixture should be taken into account, as well as the chemistry of fuel. Moreover, the evolution of the gas fuel was not studied, and above 1500 or under irradiation UF_6 starts disassociating, changing the fuel composition <cit.>. Although the disassociation of UF_6 is an important reaction under irradiation, it has been shown experimentally that the reverse reaction, i.e. recombination with fluorine, allows an equilibrium concentration of UF_6 to exists even at low temperatures <cit.>. Corrosion is also an important topic, as UF_6 is a very aggressive chemical, especially at high temperatures. A study conducted on ceramics exposed to high temperature UF_6 concluded that 1273 is the maximum compatible temperature for a Al_2 O_3 refractory ceramic <cit.>. Therefore, the lifetime of the channel should be investigated in more details. Non-conventional heat extraction mechanism such as magneto-inductive of magneto-hydrodynamic conversion could be interesting topic for electricity production. As future work, the neutron transport solver will be extended to multiple energy groups, allowing for a better estimation of scalar fluxes. Possible pressure losses due to friction in the subchannel should also be investigated.
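As a small numerical companion to the feedback analysis presented in the Results section, the sketch below evaluates the ideal-gas rescaling of the cross sections and the finite-difference temperature coefficient. The helper `compute_keff` is a hypothetical stand-in for the full coupled solve and is not part of the paper's code.

```python
def rescale_xs(sigma0, T, p, T0=600.0, p0=40 * 101325.0):
    """Ideal-gas rescaling of a macroscopic cross section, Sigma proportional to p/T."""
    return sigma0 * (p / p0) * (T0 / T)

def temperature_coefficient(compute_keff, T_nom, eps=1e-6):
    """Finite-difference temperature reactivity coefficient.

    compute_keff(T_field) is a hypothetical callable returning (k_eff, core-averaged T)
    for a given temperature distribution; the reactivity is rho = 1 - 1/k_eff.
    """
    k_nom, T_avg_nom = compute_keff(T_nom)
    k_pert, T_avg_pert = compute_keff(T_nom * (1.0 + eps))
    rho_nom = 1.0 - 1.0 / k_nom
    rho_pert = 1.0 - 1.0 / k_pert
    return (rho_pert - rho_nom) / (T_avg_pert - T_avg_nom)   # in 1/K (multiply by 1e5 for pcm/K)
```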
http://arxiv.org/abs/2407.12915v1
20240717180003
Bounding elastic photon-photon scattering at $\sqrt s \approx 1\,$MeV using a laser-plasma platform
[ "R. Watt", "B. Kettle", "E. Gerstmayr", "B. King", "A. Alejo", "S. Astbury", "C. Baird", "S. Bohlen", "M. Campbell", "C. Colgan", "D. Dannheim", "C. Gregory", "H. Harsh", "P. Hatfield", "J. Hinojosa", "D. Hollatz", "Y. Katzir", "J. Morton", "C. D. Murphy", "A. Nurnberg", "J. Osterhoff", "G. Pérez-Callejo", "K. Põder", "P. P. Rajeev", "C. Roedel", "F. Roeder", "F. C. Salgado", "G. M. Samarin", "G. Sarri", "A. Seidel", "C. Spindloe", "S. Steinke", "M. J. V. Streeter", "A. G. R. Thomas", "C. Underwood", "W. Wu", "M. Zepf", "S. J. Rose", "S. P. D. Mangles" ]
hep-ex
[ "hep-ex", "hep-ph", "physics.plasm-ph" ]
§ ABSTRACT We report on a direct search for elastic photon-photon scattering using x-ray and γ photons from a laser-plasma based experiment. A gamma photon beam produced by a laser wakefield accelerator provided a broadband gamma spectrum extending to above [E_γ = 200]MeV. These were collided with a dense x-ray field produced by the emission from a laser heated germanium foil at [E_x ≈ 1.4]keV, corresponding to an invariant mass of [√(s) = 1.22 ± 0.22]MeV. In these asymmetric collisions elastic scattering removes one x-ray and one high-energy γ photon and outputs two lower energy γ photons. No changes in the γ photon spectrum were observed as a result of the collisions, allowing us to place a 95% upper bound on the cross section of [1.5 × 10^15]µb. Although far from the QED prediction, this represents the lowest upper limit obtained so far for [√(s)≲ 1]MeV. Bounding elastic photon-photon scattering at √(s) ≈ 1 MeV using a laser-plasma platform. July 22, 2024 § INTRODUCTION Photon-photon scattering is one of the most fundamental processes in quantum electrodynamics (QED) and is of elementary importance in astrophysics. It is used in models that calculate primordial abundances, affects the observed spectra from γ-ray bursts from the first million years of the universe <cit.> and plays an important role in models of the evolution of strongly magnetised neutron stars <cit.>. However, these calculations all use the QED cross section, which is currently poorly bounded by experiment. Photon-photon scattering involving virtual photons has previously been observed in several forms (see the summary in Tab. <ref>): the 1-to-1 process of Delbrück scattering (γγ^* →γγ^*), where a real photon, γ, scatters from a virtual photon, γ^*, in the Coulomb field of an ion <cit.>; the 1-to-2 process of photon splitting (γγ^* →γγ) in atomic fields <cit.>; and the 0-to-2 process of real double photon emission from colliding the virtual photons of Coulomb fields in ultra-peripheral heavy-ion collisions at the ATLAS and CMS experiments (γ^*γ^* →γγ) <cit.>. Instead, in this paper, we report on a search for the 2-to-2 process of photon-photon scattering involving only real photons (γγ→γγ). A crucial parameter in photon-photon collisions is the invariant mass of the collision, √(s). For two photons of energy E_1 and E_2 colliding at an angle ϕ, we have s = 2E_1 E_2 (1 - cosϕ).
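As a quick numerical check of this relation for the photon energies used in the experiment (keV x-rays against a broadband gamma beam in a near head-on geometry), consider the following sketch; the 250 MeV gamma energy is an illustrative value within the measured spectrum.

```python
import numpy as np

def sqrt_s(E1_eV, E2_eV, phi):
    """Invariant mass (in eV) of two photons of energies E1, E2 colliding at angle phi."""
    return np.sqrt(2.0 * E1_eV * E2_eV * (1.0 - np.cos(phi)))

# A 1.4 keV x-ray photon against a 250 MeV gamma photon, close to head-on (phi ~ pi):
print(sqrt_s(1.4e3, 250e6, np.pi) / 1e6)   # ~1.2 MeV, i.e. of order the electron rest energy
```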
Photon-photon scattering has been searched for indirectly in the signal of vacuum birefringence at small invariant mass √(s)≪ m_ec^2 in cavity experiments such as PVLAS <cit.> and BMV <cit.> involving photons traversing a quasi-constant magnetic field. At invariant mass √(s)∼ O(eV) it has also been searched for by directly colliding two optical laser pulses <cit.> and three optical laser pulses <cit.>. At √(s)∼ O(keV), the cross-section has been bounded by experiments employing x-ray free electron lasers <cit.>. By comparison, for large invariant mass √(s)≫ m_ec^2, photon scattering with quasi-real photons has been measured by the ATLAS and CMS experiments (in which √(s)≈ 5-20 GeV). Despite these results, photon-photon scattering has yet to be measured using manifestly real photons and this has sustained interest in the process. The upcoming HIBEF experiment plans to provide the first measurement involving manifestly real photons at √(s)∼ O(10^2 eV) by colliding an x-ray free electron laser with a high power optical laser pulse <cit.> and there have been many suggestions for how to measure this effect using only PW-class optical lasers <cit.>. Apart from being a test of fundamental QED, searches for photon-photon scattering can also provide bounds on physics beyond the Standard Model (BSM), e.g. the ATLAS results enabled bounds on Born-Infeld electrodynamics <cit.> for energy scales >100 GeV. Suggestions for improving these bounds by measuring photon-photon scattering at future colliders have also recently appeared in the literature <cit.>. This paper reports on a search for elastic photon-photon scattering at [√(s)≈ 1]MeV. The search is motivated at this energy scale because it is relevant to astrophysics, it is where the cross-section takes its maximum value, and at this energy, the QED effect has only very weak experimental bounds. This scale also includes energies over the threshold for creation of real electron-positron pairs via the linear Breit-Wheeler process, which therefore is a sub-process of photon-photon scattering at these energies (this can be understood by the optical theorem <cit.>). Linear Breit-Wheeler is also being searched for at PW-class optical lasers <cit.>, and multi-photon Breit-Wheeler pair-creation forms part of the science goals for the LUXE experiment planned at DESY <cit.> and the E320 experiment at SLAC <cit.>. Our experiment uses [∼ 1]GeV electrons from a laser wakefield accelerator <cit.> which collide with a fixed target to generate a broadband bremsstrahlung distribution of γ photons extending to [E_γ≈ 800]MeV. The γ photons are then collided with the dense x-ray photon field in the vicinity of a laser heated germanium foil. The x-ray radiation is dominated by M-L-band emission in the region of [E_x ≈ 1.4]keV. The spectrum of the γ photons is determined using a caesium-iodide stack spectrometer <cit.>. In this asymmetric collision, a scattering event typically removes one x-ray and one high energy γ photon from the beam and replaces it with two lower energy γ photons. If sufficient numbers of γ photons were to scatter in the x-ray field, we would therefore be able to detect the effect in the photon energy spectrum. This is illustrated in Fig. <ref> for the idealised case of a narrow energy spread [500]MeV γ photon beam scattering from a dense x-ray field. If however, insufficient scattering events occur to produce a measurable change in the spectrum, this allows us to place a limit on the photon-photon scattering cross-section. 
As well as the effect on the photon spectrum, scattering also leads to an increase in the divergence of the photon beam. Due to the asymmetry in our photon energies very few photons will be scattered outside of the original γ beam profile and so, for our set-up, the effect on the spectrum is more pronounced. We therefore use the energy spectrum diagnostic only. The QED prediction was calculated by extending the Geant4 simulation framework <cit.> to include the cross-section for photon-photon scattering <cit.>. Simulation results were compared to an independent direct numerical evaluation of the process integrated over the x-ray and γ-distributions reported in experiment, and found to be in agreement. The extended Geant4 model simulates the leading order (in fine-structure constant α) photon-photon scattering contribution involving four photons [During preparation of the manuscript, another simulation framework was developed that includes the same process <cit.>.]. This complements other recently developed simulation frameworks <cit.> of photon-photon scattering at √(s)∼ O(eV)-O(keV) based on the weak-field Heisenberg-Euler Lagrangian. § EXPERIMENTAL SET-UP The experiment took place at the Gemini laser facility in the UK. This is a dual beam, [300]TW Ti:Sa system, allowing us to generate and collide two high energy-density photon sources. The experimental setup was based on the scheme by Pike et al., (2014) with asymmetrical photon sources <cit.>. The γ photon source was provided by bremsstrahlung emission produced by an electron beam in a high Z-material target. The electron beam was produced using laser wakefield acceleration<cit.>. The x-ray photon source was generated through direct laser heating of a thin metal foil. The two photon beams were temporally overlapped at the interaction point within 2 picoseconds using the drive laser beams (same optical path) and a fast-response photodiode. A schematic of the experimental setup can be found in figure <ref>. A detailed description of our laser-plasma platform for photon-photon physics can be found in ref <cit.>. The experiment can be separated into three parts: the x-ray photon source, the γ photon source, and the γ photon spectrometer §.§ x-ray photon source One of the Gemini laser pulses was used to generate a dense x-ray field by rapidly heating a [100]nm germanium (Ge) foil. As this solid Ge foil is heated, it turns into a plasma, leading to the emission of intense x-ray radiation predominantly due to M-L band transitions <cit.>. The heating laser pulse had a duration of [40]ps (fwhm intensity) and a total energy of [10.7 ± 0.3]J. It was focused to an elliptical spot using a distributive phase plate, with major and minor axes of [(217 ± 6)]µm and [(77 ± 6)]µm respectively, which contained 72% of the total energy. The Ge targets were mounted on a Kapton (C_22H_10N_2O_5) tape with a lower average atomic number, limiting the mass of Ge close to the interaction which is a potential noise source. A motorised tape-drive was used to change targets between shots. To diagnose the x-ray field, a pinhole imaging system and crystal spectrometer were used. The pinhole imaging system gave an on-shot measure of both the emission spot size and the target alignment. The spectrometer used a flat, thallium acid phthalate (TlAP) crystal, with a spectral window of [≈ 700]eV, centred at approximately [1.6]keV, signal above approximately [1.5]keV was surpressed due to an aluminium filter. This spectral window is around the M-L band transitions of Ge. 
It was found that the total conversion efficiency from laser energy to 1.3–1.5 keV x-rays was (2.4 ± 0.3)%. This corresponds to [(3.7±0.4)× 10^10]photons eV^-1 J^-1 srad^-1 emitted normal to the front surface of the germanium target. Taking into account the absorption in the kapton layer on the rear side of the target, at the interaction region (1 mm from the tape) this corresponds to an x-ray photon density of [(1.4 ± 0.5) × 10^12]mm^-3, over an effective length of approximately [3]mm. §.§ γ photon source The γ photon source was generated through bremsstrahlung emission, which first requires a beam of high energy electrons. To produce these electrons, one of the Gemini laser beams was focused into a [17.5]mm gas cell filled with helium and a 2% nitrogen dopant. The duration of the laser pulse was [45 ± 5]fs (fwhm intensity) and the focal spot was [(44 ± 2)]µm × [(53 ± 2)]µm (fwhm intensity). The laser energy on target was 5.5 ± 0.6 J, corresponding to a normalised vector potential a_0 = 1.1 ± 0.2. Through the laser wakefield acceleration mechanism <cit.>, a beam of high energy electrons (energy up to [≈ 800]MeV, charge [≈ 50]pC ) was emitted from the gas cell. These electrons then passed through a [0.5]mm thick bismuth (Bi) foil, acting as a bremsstrahlung converter. This emits a beam of high energy γ photons with a similar duration to that of the driving laser pulse. Bremsstrahlung converters also generate a large number of low energy, divergent γ photons, which are a potential noise source. If these divergent γ photons were to interact with the Ge foil, or another part of the experimental setup, they would produce background through the Compton scattering process. To prevent this a [100]mm block of tungsten (W) with a [2]mm diameter hole drilled through the centre was used to collimate the γ photon beam. This collimator effectively removes any γ photon travelling at an angle greater than [10]mrad to the beam propagation direction. To reduce the background further, a [50]mm tungsten block was placed just off-axis, shadowing the Ge foil from the γ photon beam. Placing such a high Z-material close the γ photon beam axis will itself generate a large number of background Bethe–Heitler pairs which would pass through the x-ray field and create a background signal through Compton scattering. Therefore, a [30]cm dipole magnet with a field strength of [B = 1]T, was used to remove these before the photon-photon interaction zone. A CsI crystal screen was placed before the γ spectrometer to detect the footprint of the γ beam, but is not used for this analysis. §.§ γ photon spectrometer The γ photon spectrometer consisted of a stack of 33 × 47 CsI crystals doped with thallium and imaged onto a 14-bit EMCCD camera. This was positioned inside the γ photon beam path, with higher energy γ photons penetrating deeper into the stack than lower energy γ photons. The γ-photon spectrum is determined using a trial function based forward model. Bayesian inference is used to determine the best fit parameters of the trial function and their uncertainty. The detector was calibrated to remove systematic effects such as variations in light yield in each crystal due to crystal imperfections and misalignment or effects produced by the imaging system. 
We compared the measured energy deposited in each crystal averaged over a large number of shots with that predicted in Geant4 using the average electron energy spectrum (as measured on a series of shots without the bismuth foil intercepting the electron beam) and generated a correction factor which can then be applied to each column of crystals. By running Geant4 simulations for a series of mono-energetic γ photon beams over a range of energies we can model response of each crystal as a function of photon energy, ρ_i(E_γ), where i is the crystal index and E_γ is the photon energy. From this the signal measured by the detector for an arbitrary γ photon spectrum can be quickly calculated with the following integral I_i[f]=C_i∫_0^∞ρ_i(E) f(E) d E where C_i is the correction factor. To enable a Bayesian inference of the γ photon spectrum on each shot, we first find a low dimensional parameterisation of the spectrum, f(E). We performed Geant4 simulations of the bremsstrahlung converter, using electron energy spectra observed on the experiment, to get a data set of typical γ photon spectra. From this data set, the following function was found to provide a good approximation to the spectra measured on the detector. f(E ; α, E_c)=α(1-0.18^2 E^2/E_c^2) E^-0.94 , where α controls the amplitude of the spectrum and E_c is a characteristic energy that controls both the slope and the position of the cut off at high energy. Using this parameterisation, we can obtain the γ photon spectrum on each shot by applying Bayesian inference to estimate a distribution over 𝐱=(E_c, α). This involves applying Bayes’ theorem p(𝐱|𝐲)=p(𝐲|𝐱) p(𝐱)/∫ p(𝐲|𝐱) p(𝐱) d𝐱 , where 𝐲 is an observed data point, corresponding to a vector of the crystal responses. In this equation p(𝐲|𝐱) is the likelihood and p(𝐱)=p(E_c) p(α) is a prior which we must set. If we make the assumption that the crystals exhibit random Gaussian noise, σ, we can write the likelihood function as p(𝐲|𝐱)=∏_i1/σ√(2 π)exp(-(y_i-I_i[f(𝐱)])^2/σ^2) . This introduces a new parameter, σ, which is treated in the same way as E_c and α. Given that we have little prior knowledge of E_c, α or σ, other than the fact that they cannot be negative, we set uniform priors on each with a lower bound of zero. We know the upper bound for E_c cannot be greater than the maximum energy of the electrons ([≈ 800]MeV) so the prior used is p(E_c)=𝒰(0,800 MeV). Through appropriate normalisation of the data set, we can ensure that α is never greater than 10, allowing us to apply the prior p(α)=𝒰(0,10)). Finally, we set the prior on σ to be p(σ)=𝒰(0,1) as if the limit is greater than this, the data will be too noisy to make any inference. With the likelihood and priors set, we can use equation <ref> to calculate the posterior. Given that the numerator involves a three-dimensional integral, it is most efficiently solved using a Markov chain Monte Carlo (MCMC) method. In figure <ref> we can see an example of this calculation performed on a randomly selected shot from the data set. § RESULTS §.§ γ photon spectrum Having developed a robust method for extracting the γ photon spectrum from the crystal response, we can test if the presence of the x-ray field has an effect on the γ photon spectrum. To do this, we run the Bayesian spectral retrieval algorithm on each shot of the experiment and compare the distributions over E_c for null and collision shots. Null shots involve firing only the beam that generates the γ photon beam. 
Collision shots involve firing both beams at a relative delay that ensures the γ photons pass through the x-ray field. The Ge foil was properly aligned for both null and collision shots to ensure that any contribution to the γ photon spectrum measurement due to interactions with the foil is fully accounted for. The data set consists of 32 null shots and 22 collision shots. These shots were all performed on a single shot day on the Gemini laser system. We compare the distribution of E_c on collision and null shots in various ways. Figure <ref> shows the histogram of the inferred value for the γ photon spectrum E_c. The relatively small number of shots in each distribution means it is not immediately clear if differences between them are significant. To investigate this, figure <ref> shows the distribution of mean and standard deviation calculated from 100,000 bootstrap samples of the null and collision shot data. The fact that the distributions overlap illustrates further that there is no significant difference between the distribution of E_c on null and collision shots. Figure <ref> shows the empirical cumulative distribution function (ecdf) of E_c for the data. Also shown are 50 ecdfs for bootstrap samples of the data; these effectively represent the uncertainty in the ecdfs. The overlap between these ecdfs again illustrates that there is no significant difference between the distributions. A third method to assess differences in the distribution of E_c is to use the two-sample Kolmogorov–Smirnov (KS) test. The null hypothesis of this test is that both the null and collision data sets have been sampled from the same distribution, i.e. that there is no measurable effect of the photon-photon collisions on the measured γ-ray spectrum. To perform the KS test, the two-sample KS-test statistic must be calculated: D=sup|F_N(E_c)-F_C(E_c)| , where F_N(E_c) and F_C(E_c) are the cumulative distribution functions for the null and collision shots respectively. The null hypothesis is accepted at the 95% confidence level if D < 0.378 <cit.> for data sets with 32 and 22 samples. The value obtained for our data set is D = 0.216, so we cannot reject the null hypothesis. We can also calculate the two-sample Kolmogorov-Smirnov test statistic for a large number of bootstrap samples from the data to estimate the uncertainty in the KS test statistic. This is shown in figure <ref>. It shows that the bulk of the distribution (≈ 90%) lies below the critical value, providing further strong evidence that we cannot reject the null hypothesis, i.e. we must assume that collisions between γ photons and the dense x-ray field did not produce a detectable difference in the energy spectrum of the γ photons. §.§ Bounding the cross section As the various analyses all show that there is no significant difference between the distribution over E_c on null and collision shots, we can conclude there was not a detectable level of photon-photon scattering. To find how much larger than the standard QED value the cross-section would have to be to produce a detectable effect, we performed multiple simulations of the experiment with an increasing cross-section. The factor by which the cross section would have to increase for us to have observed a significant difference in the value of E_c on collision shots provides a bound on the cross-section. These simulations were performed with Geant4 <cit.>, which we have adapted to include photon-photon collisions between the γ photons and a dense x-ray field, using the QED cross section <cit.>.
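For reference, the two-sample comparison between null and collision shots described above can be reproduced with standard tools. The sketch below assumes the per-shot E_c values are available as two arrays and uses the critical value D < 0.378 quoted in the text; the number of bootstrap resamples is an illustrative choice.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_with_bootstrap(Ec_null, Ec_coll, n_boot=10_000, D_crit=0.378, seed=0):
    """Two-sample KS statistic and the fraction of bootstrap resamples below D_crit."""
    rng = np.random.default_rng(seed)
    D_obs = ks_2samp(Ec_null, Ec_coll).statistic
    D_boot = np.empty(n_boot)
    for i in range(n_boot):
        resamp_null = rng.choice(Ec_null, size=Ec_null.size, replace=True)
        resamp_coll = rng.choice(Ec_coll, size=Ec_coll.size, replace=True)
        D_boot[i] = ks_2samp(resamp_null, resamp_coll).statistic
    return D_obs, np.mean(D_boot < D_crit)
```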
The simulation modelled all major aspects of the experimental geometry, including the γ-photon source, collimator and magnet before the collision point, the x-ray photon source (both the tape target and surrounding x-ray field), and the magnetic transport system, shielding and detectors and after the collision point. The x-ray photon source was modelled as a static (i.e. non evolving) photon field using the measured x-ray spectrum and a spatial distribution of photon density n_x(x,y,z) calculated from the experimentally measured photon numbers and assuming emission from a uniform disk of radius [100]nm. Details of the geometry can be found in <cit.>, and details of the modifications made to Geant4 are described in <cit.>. The result of these simulations is shown in figure <ref>. Also shown is the mean and 95% confidence limit of E_c for the null shots. Increasing the bias on the cross-section up to 10^13 has little effect on the E_c our detector would measure. Beyond this point, E_c starts to decrease. The copious amount of elastic photon-photon scattering that would occur if the cross section were 10^14 - 10^15 times higher than the QED prediction would significantly lower the average energy of the γ photons exiting the collision volume. The simulations show that this would result in a measurably lower value of E_c on collision shots than that measured on null shots. The fact that we do not measure a lower value of E_c on collision shots therefore allows us to place an upper bound on the elastic photon-photon scattering cross section at ≈ 10^15σ_ QED. The broadband nature of the photon spectra in this experiment means that this measurement is not at a single, specified value of √(s), but the effective √(s) be found by weighting the cross section, σ(E_1, E_2), with the measured photon spectra N_x(E_1) and N_γ(E_2) and considering the range of collision angles. The effective √(s) for this experiment is [1.22 ± 0.22]MeV. § CONCLUSIONS The cross-section limits that have been made by previous direct searches for elastic photon-photon scattering are shown in figure <ref>. The closest of these to the QED cross-section for real photon-photon scattering is that of Bernard et al. (2000) <cit.> using optical photons. However, this is a factor of 10^18 times higher than the QED prediction. More recently, work using x-ray photons provided by a free electron laser bounded the cross section at √(s)∼ 10^-2 m_e a factor of 10^19 times higher than the QED prediction <cit.>. These high bounds are due to the fact that these previous direct searches operated in a regime where √(s)≪ m_e, where the cross-section is severely suppressed. The experiment reported here provides the first bound at √(s)∼ m_e, where elastic scattering is expected to play a role in various astrophysical situations <cit.>. This experiment also provides the lowest ratio of the upper bound to the QED prediction for 2-to-2 photon-photon scattering to date and the lowest bound in the range close to [√(s)≈ 1]MeV. While this current work provides an upper bound on the cross section, it is also useful to consider if laser-plasma interactions are a potential route to directly observing photon-photon collisions in the laboratory. To do this we consider how long an experiment would have to operate to observe a single scatter event. 
A simple estimate of the number of scatter events per shot is N_ scatter≈ N_γσ n_x L_x, where N_γ is the number of γ photons, σ is the cross section, n_x is the x-ray photon density and L_x is the length of the x-ray field. For the current configuration described here N_γ∼ 10^7, σ∼ 10^-30  cm^2, n_x ∼ 10^15  cm^-3, and L_x ∼ 0.1  cm, resulting in N_ scatter∼ 10^-9 per laser shot. At the repetition rate of this experiment (0.05 Hz) this would require over 600 years of continuous operation, but a 100 Hz laser with similar capabilities would require only 100 days. Higher energy lasers such as EPAC <cit.> and ELI-NP<cit.> will be capable of producing greater than 10 times more γ photons per shot due to the higher charge, higher energy electron beams they will be capable of producing. If such lasers could be operated at 100 Hz, the required time drops to ∼ 1 day. The development of such high repetition rate, high power lasers has already been identified as a key future direction for laser wakefield accelerators <cit.> and is an area of active research (see e.g. <cit.>). Such facilities could open up the real possibility of observing and studying photon-photon scattering. § DATA AVAILABILITY Data will be made available on request. § ACKNOWLEDGEMENT We wish to acknowledge the support of the staff at the Central Laser Facility. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 682399) and STFC (Grant No. ST/P002021/1). JH and AT acknowledge support from the US National Science Foundation grant #1804463. GS would like to acknowledge support from EPSRC (Grant Nos. EP/N027175/1, EP/P010059/1). GPC was supported by Research Grant No. PID2022-137632OB-I00 from the Spanish Ministry of Science and Innovation.
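As a closing numerical note, the scatter-rate estimate given in the conclusions is straightforward to reproduce with the order-of-magnitude inputs quoted above.

```python
# Expected elastic scattering events per shot: N_scatter ~ N_gamma * sigma * n_x * L_x
N_gamma = 1e7        # gamma photons per shot
sigma   = 1e-30      # cm^2, elastic cross section near its peak
n_x     = 1e15       # cm^-3, x-ray photon density
L_x     = 0.1        # cm, effective length of the x-ray field

N_scatter = N_gamma * sigma * n_x * L_x          # ~1e-9 events per shot

for rep_rate_hz in (0.05, 100.0):                # this experiment vs a 100 Hz system
    days_to_one_event = 1.0 / (N_scatter * rep_rate_hz) / 86400.0
    print(f"{rep_rate_hz} Hz -> {days_to_one_event:.0f} days to one event")
```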
http://arxiv.org/abs/2407.13695v1
20240718170007
Enhancing gravitational-wave host localization with SKYFAST: rapid volume and inclination angle reconstruction
[ "Gabriele Demasi", "Giulia Capurri", "Angelo Ricciardone", "Barbara Patricelli", "Massimo Lenti", "Walter Del Pozzo" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.CO", "astro-ph.IM" ]
gabriele.demasi@unifi.it Dipartimento di Fisica e Astronomia, Università degli Studi di Firenze, Via Sansone 1, Sesto Fiorentino (Firenze) I-50019, Italy INFN, Sezione di Firenze, Sesto Fiorentino (Firenze) I-50019, Italy giulia.capurri@df.unipi.it Dipartimento di Fisica “Enrico Fermi”, Università di Pisa, Largo Bruno Pontecorvo 3, Pisa I-56127, Italy INFN, Sezione di Pisa, Largo Bruno Pontecorvo 3, Pisa I-56127, Italy Dipartimento di Fisica “Enrico Fermi”, Università di Pisa, Largo Bruno Pontecorvo 3, Pisa I-56127, Italy INFN, Sezione di Pisa, Largo Bruno Pontecorvo 3, Pisa I-56127, Italy Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, I-35131 Padova, Italy Dipartimento di Fisica “Enrico Fermi”, Università di Pisa, Largo Bruno Pontecorvo 3, Pisa I-56127, Italy INFN, Sezione di Pisa, Largo Bruno Pontecorvo 3, Pisa I-56127, Italy INAF - Osservatorio Astronomico di Roma, Via Frascati 33, Monte Porzio Catone (Rome) I-00078, Italy Dipartimento di Fisica e Astronomia, Università degli Studi di Firenze, Via Sansone 1, Sesto Fiorentino (Firenze) I-50019, Italy INFN, Sezione di Firenze, Sesto Fiorentino (Firenze) I-50019, Italy Dipartimento di Fisica “Enrico Fermi”, Università di Pisa, Largo Bruno Pontecorvo 3, Pisa I-56127, Italy INFN, Sezione di Pisa, Largo Bruno Pontecorvo 3, Pisa I-56127, Italy § ABSTRACT The scientific impact of GW170817 strongly supports the need for an efficient electromagnetic follow-up campaign to gravitational-wave event candidates. The success of such campaigns depends critically on a fast and accurate localization of the source. In this paper, we present , a new pipeline for rapid localization of gravitational-wave event hosts. runs alongside a full parameter estimation (PE) algorithm, from which posterior samples are taken. It uses these samples to reconstruct an analytical posterior for the sky position, luminosity distance, and inclination angle using a Dirichlet Process Gaussian Mixture Model, a Bayesian non-parametric method. This approach allows us to provide an accurate localization of the event using only a fraction of the total samples produced by the full PE analysis. Depending on the PE algorithm employed, this can lead to significant time savings, which is crucial for identifying the electromagnetic counterpart. Additionally, in a few minutes, generates a ranked list of the most probable galaxy hosts from a galaxy catalog of choice. This list includes information on the inclination angle posterior conditioned to the position of each candidate host, which is useful for assessing the detectability of gamma-ray burst structured jet emissions. Enhancing gravitational-wave host localization with : rapid volume and inclination angle reconstruction Walter Del Pozzo July 22, 2024 ======================================================================================================== § INTRODUCTION The era of multi-messenger astronomy with gravitational waves (GWs) began on August 17, 2017, with the detection of the binary neutron star (BNS) merger GW170817 <cit.> by the Advanced LIGO <cit.> and Advanced Virgo <cit.> interferometers . This was immediately followed by the independent observation of the short gamma-ray burst (GRB) GRB 170817A by the Gamma-ray Burst Monitor onboard Fermi <cit.> and the Spectrometer Anti-Coincidence Shield onboard the International Gamma-Ray Astrophysics Laboratory (INTEGRAL) <cit.>. 
An intense electromagnetic (EM) follow-up campaign, was launched across the entire EM spectrum after this joint detection, and especially after the subsequent well-constrained, three-dimensional LIGO-Virgo source localization (see <cit.> and references therein). The campaign led to the discovery of an optical transient source, SSS17a/AT 2017gfo, located in the NGC 4993 galaxy <cit.>. This source, observed also in ultraviolet and infrared wavelengths, was then identified as a kilonova <cit.>. The ensemble of all the GW and EM measurements associated with this event constitutes the first multi-messenger observation with GWs. Such a discovery led to a wide range of extraordinary results, including the identification of BNS mergers as progenitors of short GRBs, the evidence of a structure in GRB jets, the precise measurement of the GW propagation speed, and a new independent measurement of the Hubble constant <cit.>. GW170817 remains the only confident multi-messenger GW event to date (see, however, <cit.> and references therein), but we expect more multi-messenger events in the future as the sensitivity of GW facilities improves, see, e.g., <cit.>. An EM counterpart is expected from the merger of binary systems containing at least one neutron star, i.e., BNSs and neutron star-black hole (NS-BH) binaries <cit.>. After the merger and subsequent GW generation, the GRB occurs, consisting of a prompt γ-ray emission the duration of which is ≲ 2 s, and a multi-wavelength afterglow typically observable in the optical, X-ray, and radio bands for minutes, hours, days, and even months after the prompt emission. The optical/near-infrared transient associated with the kilonova is expected to appear hours/days after the prompt emission and progressively fade away. In this context, it is clear that a rapid and precise localization of the GW event is crucial to launch an effective EM follow-up campaign. Currently, when a GW event is detected, it takes from seconds to minutes to have an initial estimate of the event localization thanks to <cit.>, a Bayesian, non-Markov Chain Monte Carlo (MCMC) algorithm for the rapid localization of GW events. The initial volume maps are released to the astronomical community so that the hunt for the EM counterpart can promptly start <cit.>. However, to achieve a definitive, more precise localization of the GW event, one must wait for the results of a full parameter estimation (PE) pipeline, which generally takes longer due to the large number of parameters whose posterior must be reconstructed as well as to the complexity of the sampling algorithms used to explore the likelihood. In this paper, we present , a new localization algorithm based on Bayesian non-parametrics, extending the idea introduced in <cit.>. takes as input the samples produced by a PE pipeline (e.g., <cit.>, <cit.>, <cit.>) and uses them to reconstruct an analytic posterior distribution for the sky position, luminosity distance, and inclination angle of the GW event. The posterior is modelled as a weighted sum of multivariate Gaussians, whose weights, means, and covariance matrices are outcomes of a Dirichlet Process Gaussian Mixture Model (DPGMM) <cit.>. Specifically, we use the DPGMM implementation <cit.>. In , the Gaussian mixture is progressively updated as new samples are produced by the PE algorithm until the information entropy <cit.> reaches a plateau, indicating that the addition of more samples would not improve the quality of the reconstructed posterior. 
At this point, the accuracy of the posterior is comparable to the final PE results. This convergence criterion is typically reached in a fraction of the PE runtime, allowing to release an intermediate, yet accurate and fully analytical, joint posterior distribution of the sky localization, luminosity distance, and inclination angle ahead of time. Together with the reconstructed posterior, releases a skymap of the GW event and a ranked list of the most probable hosts from a galaxy catalog of choice. In this paper, we use the GLADE+ catalog <cit.> as an example to display the great potential of , but the algorithm can be easily adapted to work with any galaxy catalog. The ranked list includes median and 90% credible intervals for the inclination angle posterior conditioned on the position of each galaxy. This information allows for specific modeling of the GRB structured jet emission based on the location of the event within the identified potential host galaxies. The inclusion of the inclination angle is a novel feature of . Currently, the LIGO-Virgo-KAGRA (LVK) Collaboration does not release the inclination angle posterior in the alerts for EM follow-up campaigns. We show that the inclination angle information provided by can be relevant for optimizing the follow-up tiling strategies and tailoring the efforts to each candidate host, under the hypothesis that the GW event originated in that particular galaxy. The features of the different PE algorithms determine how can be integrated into the pipeline, its speed performance, and which of its outputs can be used. Most importantly, using to produce an intermediate analytical posterior reconstruction is only possible if the PE algorithm releases the samples during the analysis run, as is the case for MCMC samplers. This is not possible with nested sampling algorithms, where posterior samples are only available at the end of the run. Nonetheless, can always be used on the final set of samples to produce a list of the most probable galaxy hosts. is publicly available at <https://github.com/gabrieledemasi/skyfast>. The paper is structured as follows: in Sec. <ref>, we outline the main features and outputs of ; in Sec. <ref>, we validate its statistical robustness on a mock dataset of GW signals; in Sec. <ref>, we discuss the combination between and different PE algorithms, as well as the role can play in optimizing EM follow-up searches; finally, we draw our conclusions in Sec.<ref>. § EXECUTIVE SUMMARY is a tool designed for the rapid reconstruction of the joint posterior of the localization parameters, namely, right ascension (α), declination (δ), luminosity distance (d_L), and the inclination angle (θ_jn), which is defined as the angle between the direction of observation and the total angular momentum of the binary. For the reconstruction of the localization parameters, we follow the procedure discussed in <cit.>. A key difference, however, is that processes the posterior samples one by one, so that it can effectively work in parallel with any PE pipeline that releases the samples during the run. Moreover, differently from <cit.> and from existing galaxy ranking tools (e.g., <cit.>) we also include the reconstruction of the inclination angle. Indeed, the inclination angle is a key ingredient to estimate the expected temporal evolution of the multi-wavelength flux associated with possible EM counterparts, such as GRB structured jet emission <cit.>, and therefore to evaluate their detectability. 
SKYFAST takes the posterior samples produced by a PE pipeline as input and reconstructs an analytical posterior for the parameters {α, δ, d_L, θ_jn} using FIGARO <cit.>, a publicly available[<https://github.com/sterinaldi/FIGARO.git>] inference code designed to estimate multivariate probability distributions using a DPGMM. A DPGMM is a non-parametric Bayesian method that, given a set of samples, reconstructs the distribution from which they have been drawn as a mixture of multivariate Gaussians with means μ_k and covariances σ_k: p(x) ≃ ∑_k=1^N_G w_k 𝒩(x|μ_k,σ_k) , where the number of components of the mixture N_G and the mixing proportions w_k are inferred from a Dirichlet process <cit.>. For further details about the DPGMM, please refer to <cit.>. The mixture is updated continuously as new samples from the PE run are added, until adding further samples no longer provides additional information. We quantitatively estimate the information encoded in the posterior using the information entropy, defined as <cit.> S(N) = -∑_k p_k^N log p_k^N , where p_k^N is the posterior obtained with N samples, evaluated at the kth element of a discrete grid. Convergence is achieved when S(N) reaches a plateau and, consequently, its derivative begins oscillating around zero. We assume, as in <cit.>, that the reconstruction has converged to its target distribution when the derivative dS(N)/dN undergoes a given number of zero crossings. Empirical analysis shows that three zero crossings are an effective indicator of convergence. At this point, SKYFAST is ready to release an intermediate reconstructed posterior distribution that already contains the same amount of information as the posterior obtained using all samples. The posterior obtained with SKYFAST is analytical and, being a mixture of Gaussians, allows for straightforward marginalization and conditioning on any subset of its parameters. In the remainder of this section, we discuss the main outcomes of SKYFAST, using GW170817 as a reference example. Specifically, we use the posterior samples from <cit.>[The posterior samples are publicly available at <https://github.com/sugwg/gw170817-inclination-angle>]. The primary output of SKYFAST is the analytical joint posterior for {α, δ, d_L, θ_jn}. In Figure <ref>, we compare the full PE posterior samples of GW170817 with samples from the posterior distributions reconstructed with SKYFAST. In particular, we use the posterior of the intermediate reconstruction, which is released immediately after reaching convergence. With an analytical posterior, it is straightforward to evaluate the probability of each galaxy in a given catalog being the host of the GW event. For this work, we use the GLADE+ catalog <cit.>, which is currently the most complete catalog available for EM follow-up of GW events. However, SKYFAST can be easily adapted to work with any catalog, which could be very useful for the specific requirements of different EM observatories. In its current form, SKYFAST produces a list of all galaxies from the GLADE+ catalog that fall within the 90% credible volume, ranked by their probability of being the host of the GW event according to their 3D position. Explicitly, the probability of being the host is obtained by evaluating the posterior distribution marginalized over the inclination angle at the position of each galaxy.[We convert the galaxy's measured redshift to luminosity distance using the Planck18 cosmology <cit.>.]
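A minimal sketch of this ranking step, assuming the analytic mixture (the arrays weights, means, and covariances) from the earlier sketch and a few placeholder catalogue entries in place of GLADE+:

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_position_posterior(weights, means, covariances, ra_dec_dl):
    """log p(ra, dec, d_L): marginalising a Gaussian mixture over theta_jn simply
    amounts to dropping the corresponding row and column of each component."""
    density = sum(
        w * multivariate_normal(m[:3], c[:3, :3]).pdf(ra_dec_dl)
        for w, m, c in zip(weights, means, covariances)
    )
    return np.log(density)

# (name, ra [rad], dec [rad], d_L [Mpc]), with redshifts already converted to distances.
catalogue = [
    ("galaxy A", 3.446, -0.408, 40.7),
    ("galaxy B", 3.444, -0.402, 47.1),
    ("galaxy C", 3.450, -0.415, 36.0),
]
# In practice, only galaxies falling inside the 90% credible volume are kept.
ranked = sorted(
    catalogue,
    key=lambda g: log_position_posterior(weights, means, covariances, np.array(g[1:])),
    reverse=True,
)
for rank, (name, *_) in enumerate(ranked, start=1):
    print(rank, name)
```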
In Figure <ref>, we show the sky localization region of GW170817 obtained using the intermediate posterior distribution, along with all the galaxies from the GLADE+ catalog contained within the 90% credible volume. In Table <ref>, we list the five most probable host galaxies for GW170817, with the true host, NGC 4993, in the third position. We further exploit the analytical reconstruction of the posterior to condition the inclination angle distribution on the position of each galaxy found within the localization region, breaking the degeneracy between d_L and θ_jn <cit.>. Since the redshift measurements for most galaxies in the GLADE+ catalog are photometric, the uncertainty in the luminosity distance is typically of the order of a few percent and, hence, non-negligible. To incorporate this uncertainty in our analysis, we assume a Gaussian distribution for the measured galaxy distances and marginalize over it. In the last column of Table <ref>, we report the median value of θ_jn along with the 90% credible interval obtained from the posterior distribution conditioned on the position of each galaxy. These intervals encompass the uncertainty in the luminosity distance propagated from the error in the photometric redshift measurements. In Figure <ref>, we show a comparison between the inclination angle posterior distribution marginalized over the position parameters and the one conditioned on the position of NGC 4993. The conditioned posterior is computed by averaging over the distributions obtained by conditioning the posterior on 500 different positions drawn from a Gaussian distribution centered on the true NGC 4993 luminosity distance, with standard deviation given by the uncertainty of the photometric redshift measurement as reported in the GLADE+ catalog. The shaded areas denote the 68% and 90% credible regions. As expected, conditioning on a specific position instead of marginalizing over all position parameters results in a narrower inclination angle posterior. This occurs because fixing the position breaks the degeneracy between d_L and θ_jn. Remarkably, the median value of θ_jn and its associated errors, obtained from the posterior conditioned on the position of NGC 4993, are consistent with the values reported in <cit.>.
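For a Gaussian mixture, this conditioning reduces to the standard Gaussian conditioning formulas applied component by component, with each weight rescaled by how well that component supports the galaxy position; the photometric-distance uncertainty is folded in by averaging over distance draws, as described above. A minimal sketch, again reusing the arrays from the earlier sketch and using purely illustrative galaxy coordinates:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def condition_on_position(weights, means, covariances, position):
    """Return weights, means, and variances of p(theta_jn | ra, dec, d_L) for a
    4D mixture with columns (ra, dec, d_L, theta_jn)."""
    pos, th = slice(0, 3), 3
    new_w, mu_c, var_c = [], [], []
    for w, m, c in zip(weights, means, covariances):
        cpp, cpt, ctt = c[pos, pos], c[pos, th], c[th, th]
        gain = np.linalg.solve(cpp, cpt)
        mu_c.append(m[th] + gain @ (position - m[pos]))
        var_c.append(ctt - cpt @ gain)
        new_w.append(w * multivariate_normal(m[pos], cpp).pdf(position))
    new_w = np.asarray(new_w)
    return new_w / new_w.sum(), np.asarray(mu_c), np.asarray(var_c)

def inclination_pdf(theta, weights, means, covariances,
                    ra, dec, d_gal, sigma_d, n_draws=500):
    """Average the conditioned posterior over draws of d_L to account for the
    photometric-redshift uncertainty of the candidate host."""
    rng = np.random.default_rng(1)
    pdf = np.zeros_like(theta)
    for d_l in rng.normal(d_gal, sigma_d, n_draws):
        w_c, mu_c, var_c = condition_on_position(
            weights, means, covariances, np.array([ra, dec, d_l])
        )
        pdf += np.sum(
            w_c[:, None] * norm.pdf(theta[None, :], mu_c[:, None], np.sqrt(var_c)[:, None]),
            axis=0,
        )
    return pdf / n_draws

# Illustrative coordinates and distance uncertainty for a candidate host:
theta = np.linspace(0.0, np.pi, 300)
pdf_conditioned = inclination_pdf(theta, weights, means, covariances,
                                  ra=3.446, dec=-0.408, d_gal=40.7, sigma_d=2.0)
```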
§ VALIDATION OF THE ALGORITHM ON SIMULATED DATASETS
In the following, we test the statistical robustness and performance of SKYFAST. In Section <ref>, we consider the intermediate reconstruction of the posterior, while in Section <ref>, we show how SKYFAST performs in galaxy ranking.
§.§ Intermediate posterior reconstruction
We test the robustness of the intermediate reconstruction of the posterior distribution by running SKYFAST on a set of simulated binary black-hole (BBH) events. We inject 100 BBH signals into Gaussian noise based on the publicly available LVK sensitivity curves[LVK sensitivity curves publicly available at <https://dcc.ligo.org/LIGO-T2000012/public>]. Specifically, we consider a network composed of the two Advanced LIGO detectors and the Advanced Virgo detector. For LIGO, we use the high-noise estimate for the fourth observing run (O4), while for Virgo, we use the power spectral density representative of the third observing run (O3) <cit.>. We use the waveform <cit.> both for signal injection and PE. We use the sampler <cit.> implemented in <cit.>, set to produce 5000 independent samples using 10 parallel runs. Since, with these settings, the median PE runtime is about one hour, we save the posterior samples to checkpoint files created every five minutes and extract them from these files as the run progresses. In Figure <ref>, we compare the distribution of the total PE sampling time with the time required by SKYFAST to release the (intermediate) localization of the GW event. The latter is determined by the time that the PE algorithm takes to produce the necessary number of samples for SKYFAST to meet the convergence criterion. Figure <ref>, on the other hand, shows the cumulative distribution of the ratio between the SKYFAST and PE runtimes. With this specific PE pipeline and settings, SKYFAST completes the intermediate posterior reconstruction, skymap generation, and ranking of the most probable host galaxies for 50% (90%) of the events in less than ∼ 20% (36%) of the total sampling time. The performance of SKYFAST is significantly influenced by the chosen PE pipeline and its specific configuration, as discussed further in Section <ref>. Figure <ref> shows the cumulative distribution of the fraction of samples needed to achieve convergence. SKYFAST requires less than ∼ 11% (21%) of the total samples generated by the PE to achieve information entropy convergence and to reconstruct the intermediate posterior for 50% (90%) of the events. Contrary to naive expectations, the distribution of the sample fraction differs from that of the total PE time fraction shown in Figure <ref>. This difference is due to the burn-in inefficiency: during the initial burn-in period, the samplers produce correlated samples that cannot be used for the analysis. Lastly, we test the statistical robustness of the reconstructed intermediate posterior distribution by plotting the fraction of injected BBH events found within a credible region CR_P as a function of the encompassed probability P for the single parameters α, δ, d_L and θ_jn, and for the 2D and 3D localizations. The expected distribution is the diagonal line. As shown in Figure <ref>, all the probability-probability plots lie on the diagonal within the 90% credible interval for the cumulative distribution computed from a beta distribution, as in <cit.>. This allows us to assess that the intermediate posteriors reconstructed with SKYFAST are statistically unbiased.
§.§ Galaxy ranking
To evaluate the performance of SKYFAST in ranking the most probable hosts, we analyze 50 simulated BNS signals. We inject these signals into Gaussian noise using the waveform, following the configuration discussed in Section <ref>. The localization parameters of the BNSs are randomly extracted from the GLADE+ catalog, with a luminosity distance cut of 100 Mpc to have a reasonable number of galaxies in the localization volume. We perform the parameter estimation with the sampler <cit.>, employing the Reduced Order Quadrature (ROQ) technique <cit.>. This setup closely mirrors the online parameter estimation pipeline used by the LVK Collaboration <cit.>. However, with this configuration, posterior samples are only available at the end of the inference due to the nested sampling scheme of this sampler. Consequently, we use SKYFAST directly on the final samples, thus without the intermediate posterior reconstruction. Given that the median distance of the extracted galaxies is ∼ 67 Mpc, we obtain the following results:
* the median number of galaxies in the 90% credible volume is 30;
* on average, the true host galaxy is ranked 5th.
The distribution of the time needed for SKYFAST to obtain the ranked host-galaxy list from the posterior samples is shown in Figure <ref>. Most of the events require a runtime of less than three minutes. The current online PE pipeline takes around 10 minutes, on average, to complete the inference for BNS events <cit.>. Our results suggest that, by adding a few minutes, it is possible to obtain a list of the most probable host galaxies along with an estimate of the inclination angle conditioned on the position of each galaxy in the list[All the findings presented in Section <ref> are obtained running SKYFAST on Intel® Xeon® Gold 6140M processors with a 2.30GHz clock rate. The runtime can be further reduced by using more powerful processors. For instance, the average time needed to run SKYFAST on a GW event with a few thousand samples on an Apple® M3 Pro chip is about one minute.].
§ DISCUSSION
In the following, we discuss the relevance of the various applications of SKYFAST. In Section <ref>, we comment on how the performance of SKYFAST depends not only on the PE pipeline from which it takes the samples but also on the specific setup within the same pipeline. In Section <ref>, we discuss the contribution that SKYFAST can make in optimizing the EM follow-up searches, with particular emphasis on the importance of the inclination angle information released with the galaxy ranking.
§.§ Combination with different PE pipelines
The performance of SKYFAST depends on the parameter estimation pipeline from which it takes the posterior samples. It should be kept in mind that the time-gain distribution shown in Figure <ref> is sensitive not only to the algorithm used for the inference, but also to the specific PE setup. For example, running the same pipeline with more (fewer) independent samplers has the effect of decreasing (increasing) the time gain, because of the burn-in inefficiency. Releasing an accurate sky localization, along with a list of potential host galaxies, as soon as possible is of great relevance for GW events expected to have an EM counterpart, such as binary systems containing at least one neutron star. Currently, the online PE in the LVK Collaboration uses the ROQ technique and the sampler. With this setup and using the waveform, the PE wall time for BNS events is around ten minutes. As mentioned previously, accessing samples during the run is not possible with a nested sampler. However, using SKYFAST in combination with an MCMC algorithm to generate an intermediate skymap would require more time than the average wall time of the current online PE pipeline. This is because the MCMC setup is less optimized than the online ROQ-based one, mainly since distance marginalization cannot be used for our specific purposes. Nonetheless, SKYFAST can still come into play immediately after the online PE process ends. By adding just a few minutes to the PE runtime, it becomes possible to generate an analytical reconstruction of the posterior using SKYFAST with all the samples produced by the PE. This posterior can then be used to generate a list of the most probable galaxy hosts within the GW event localization region, along with the associated estimate of θ_jn, as discussed in Section <ref>. The full potential of SKYFAST can be realized when used in combination with Hamiltonian Monte Carlo (HMC) methods <cit.>, which sample the posterior distribution more efficiently than traditional MCMC or nested sampling methods. However, a significant drawback of HMC algorithms is their requirement to compute the gradient of the likelihood, and consequently the gradient of the waveform.
This task is computationally expensive and could potentially make the entire pipeline inefficient. Given the ongoing efforts in developing differentiable waveforms <cit.>, we expect HMC to become the standard PE strategy in the future. Combining SKYFAST with HMC would further reduce the time required to obtain accurate localization and galaxy host information.
§.§ Galaxy ranking and inclination angle
As discussed in the previous section, one potential output of SKYFAST is the generation of a ranked list of galaxies within a given credible volume, typically 90%, based on their probability of being the host. If the pipeline does not provide access to the samples during the run, SKYFAST can be run right after the inference is complete, generating the ranked list within minutes. A feature that differentiates SKYFAST from other galaxy-ranking tools <cit.> is the possibility to compute the inclination angle posterior conditioned on the position of each galaxy in the list. Currently, LVK GCN notices and circulars do not include inclination angle information. However, we do integrate this parameter into the posterior reconstruction because it plays a crucial role in optimizing the EM follow-up campaign. Specifically, the inclination angle of the binary system relative to our line of sight affects the multi-wavelength light curve of possible EM counterparts such as, e.g., GRBs. To further illustrate this point, we return to our working example, GW170817/GRB 170817A. In Figure <ref>, we plot the expected GRB afterglow light curves in the X-ray band, assuming the event is located in each of the first three galaxies listed in Table <ref>. We compute the light curves using <cit.>, assuming a Gaussian profile for the structured GRB jet. We adopt the best-fit GRB parameters obtained from <cit.> for GW170817 (see their Table 3), but we use the luminosity distance and inclination angle values reported for each galaxy in Table <ref>. The shaded areas represent the uncertainties in the light curves associated with the 90% credible intervals of the inclination angle estimate. The different inclination angle posteriors obtained by conditioning on the position of each galaxy have a significant impact on the afterglow peak time and peak luminosity. Vertical lines indicate the expected times of the luminosity peak for each curve, revealing an order-of-magnitude discrepancy mainly driven by the variations in the inclination angle posteriors among galaxies. For certain configurations (such as ESO 575-055 and, partially, ESO 508-014), the smallest inclination angles within the 90% credible intervals are comparable to the half-width of the jet core. This results in a time-decreasing light curve, typically observed for GRBs viewed on-axis. The shape of the light curve affects the detectability of the EM emission. As an example, we compare the simulated X-ray light curves with the limiting luminosity that can be reached by Swift/XRT, considering an exposure time of 60 s and a distance of 40 Mpc. It can be seen that the afterglow could be detected until approximately 1.5 (2.5) days after the merger if the host galaxy is ESO 575-055 or ESO 508-014, respectively. To potentially detect the afterglow emission if the host galaxy is NGC 4993, a much longer exposure time is needed.
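As a purely illustrative companion to this comparison (a crude toy scaling, not the structured-jet code or the best-fit parameters used above, and with made-up luminosities, viewing angles, and limiting luminosity), the following sketch shows how a detectability window can be extracted once a light curve and an instrument limit are available:

```python
import numpy as np

t = np.geomspace(0.1, 30.0, 300)          # days since merger

def toy_afterglow(t_days, theta_view, theta_core=0.1,
                  L_on_axis=2e39, t_peak_on_axis=0.3):
    """Crude Gaussian-jet-inspired scaling: viewing the jet further off-axis
    delays the peak and suppresses the peak luminosity (toy model only)."""
    t_peak = t_peak_on_axis * max(theta_view / theta_core, 1.0) ** 2
    L_peak = L_on_axis * np.exp(-0.5 * max(theta_view / theta_core - 1.0, 0.0) ** 2)
    x = t_days / t_peak
    return L_peak * x ** 2 / (1.0 + x ** 3.5)

L_limit = 1e39                             # illustrative XRT-like limiting luminosity [erg/s]
for name, theta_view in [("host 1 (closer to on-axis)", 0.15),
                         ("host 2 (further off-axis)", 0.55)]:
    L = toy_afterglow(t, theta_view)
    detectable = t[L > L_limit]
    if detectable.size:
        print(f"{name}: detectable between {detectable[0]:.1f} and {detectable[-1]:.1f} days")
    else:
        print(f"{name}: always below the limiting luminosity; a longer exposure is needed")
```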
Therefore, the conditioned inclination angle posteriors, together with a GRB jet model and the sensitivity limits of the various instruments at the distance of the potential host galaxy, play a crucial role in determining whether and when the afterglow emission can be observed with a specific EM facility, depending on the host candidate. This is also key to optimizing the observational strategy in terms of the exposure time of the various tilings, so as to maximize the chance of detecting a possible EM counterpart. As a consequence, SKYFAST can significantly contribute to optimizing the EM follow-up campaign by providing essential information to design a tiling strategy tailored to each potential host galaxy in the list. Finally, the information on the inclination angle could be used to estimate the likelihood that a transient source detected during the EM follow-up campaign is the counterpart to (or unrelated to) a GW event.
§ CONCLUSIONS
We presented SKYFAST, a new tool for the rapid localization of GW events and the ranking of the most probable galaxy hosts. SKYFAST uses Bayesian non-parametrics to reconstruct an analytical posterior distribution of the sky position, luminosity distance, and inclination angle from the posterior samples produced by a PE pipeline. We developed SKYFAST to run in parallel with a PE pipeline and update the reconstructed posterior as new samples are produced, until the information entropy reaches a plateau. In general, SKYFAST needs fewer samples than a full PE to reconstruct an accurate posterior for the parameters of interest. This feature results in relevant time gains that can be crucial for the prompt identification of the EM counterpart. Together with the reconstructed posterior, SKYFAST releases a list of all the galaxies from a catalog of choice that are contained within the 90% credible volume of the GW event, ranked by their probability of being the host. In this work, we used the GLADE+ catalog as an example, but the algorithm has the great advantage of being easily interfaced with any galaxy catalog. We tested SKYFAST on the posterior samples of GW170817. We found that the reconstructed posterior is in agreement with the final PE results and that the sky localization area is compatible with the one reported in the literature. We ranked all the galaxies in the 90% credible volume, with the true host, NGC 4993, being the third most probable one. Thanks to the analytical form of the reconstructed posterior, SKYFAST can easily infer the inclination angle posterior conditioned on the position of each galaxy in the list. This not only breaks the degeneracy between d_L and θ_jn, resulting in a more stringent inclination angle constraint, but also allows for more precise predictions of the GRB afterglow light curves expected if the GW event were located in each potential galaxy host. Interestingly, we found that the light curves can vary significantly among galaxies, mainly due to variations in the conditioned inclination angle posterior. The conditioned inclination angle posterior, combined with the sensitivity of different instruments at the distance of the potential hosts, is crucial for predicting whether and when each instrument can detect the GRB afterglow emission, depending on the galaxy it is pointing to, and it helps choose an optimal exposure time to maximize the chance of detection. We tested the statistical robustness of SKYFAST by running it on a population of 100 simulated BBH events. We built PP-plots for the posteriors of all the reconstructed parameters, as well as for the 2D and 3D localizations.
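A minimal sketch of how such a coverage (PP-plot) test can be assembled, shown here on self-consistent toy data rather than on the actual injections and posterior samples:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)
# Toy setup: unbiased Gaussian "posteriors" centered on noisy estimates of the truths.
truths = rng.uniform(20.0, 200.0, size=100)
sigmas = 0.1 * truths
estimates = rng.normal(truths, sigmas)
event_samples = [rng.normal(m, s, size=2000) for m, s in zip(estimates, sigmas)]

# Credible level at which each injected value is found (should be uniform in [0, 1]).
levels = np.sort([np.mean(s < t) for s, t in zip(event_samples, truths)])

# 90% band expected for sorted uniform variables: k-th order statistic ~ Beta(k, N+1-k).
N = len(levels)
k = np.arange(1, N + 1)
lower, upper = beta.ppf(0.05, k, N + 1 - k), beta.ppf(0.95, k, N + 1 - k)
n_outside = int(np.sum((levels < lower) | (levels > upper)))
print(f"{n_outside} of {N} sorted credible levels fall outside the 90% band")
```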
We found that the reconstructed posterior is unbiased and accurately describes the distribution from which the input samples are drawn. Additionally, we found that SKYFAST is able to release the reconstructed posterior within one-fifth (one-third) of the total PE runtime for 50% (90%) of the GW events, using less than one-tenth (one-fifth) of the samples. However, we note that the exact values of the time and sample fractions depend on the PE pipeline used and its specific setup (e.g., the number of independent samplers and the parallelization scheme). To test the performance of SKYFAST in ranking the most probable galaxy hosts, we ran it on 50 simulated BNS events. Specifically, we produced the input samples using a nested sampling scheme accelerated with the ROQ technique. With this setup, the PE runtime is about 8 minutes, while the SKYFAST runtime is 1-2 minutes on average. These times depend on the computational power of the machines used and can be further reduced with next-generation processors. Regarding galaxy ranking, given that the median luminosity distance of our BNSs was around 70 Mpc, we found an average of 30 galaxies in the 90% credible volume, with the true host in the 5th position on average. In conclusion, SKYFAST is a lightweight and user-friendly tool designed to reconstruct analytical posteriors for sky localization, luminosity distance, and inclination angle from posterior samples generated by a PE pipeline, achieving these tasks in a fraction of the PE runtime. Even if real-time sample access is unavailable during PE, SKYFAST can be launched immediately after PE completion to provide a ranked list of potential galaxy hosts within just one or two minutes. A novel feature of SKYFAST compared to other tools is the inclusion of inclination angle information, which is crucial for astronomers to plan optimized EM follow-up strategies. Lastly, SKYFAST can be used offline for various purposes, such as cross-correlating GW events with different galaxy catalogs to infer the value of H_0 and other cosmological parameters.
§ ACKNOWLEDGEMENT
We thank Stefano Rinaldi for providing the initial scratch code from which SKYFAST was developed and for his assistance with FIGARO. We thank Gianluca Maria Guidi and Francesco Pannarale for useful discussions and for carefully reading the manuscript. This material is based upon work supported by NSF's LIGO Laboratory, which is a major facility fully funded by the National Science Foundation. This work has been supported by the project BIGA - “Boosting Inference for Gravitational-wave Astrophysics” funded by the MUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2022 - grant 20228TLHPE - CUP I53D23000630006. GD acknowledges financial support from the National Recovery and Resilience Plan (PNRR), Mission 4 Component 2 Investment 1.4 - National Center for HPC, Big Data and Quantum Computing - funded by the European Union - NextGenerationEU - CUP B83C22002830001. AR acknowledges financial support from the Supporting TAlent in ReSearch@University of Padova (STARS@UNIPD) for the project “Constraining Cosmology and Astrophysics with Gravitational Waves, Cosmic Microwave Background and Large-Scale Structure cross-correlations”.