2305.13320 | Frustrated hops in Ring Polymer Surface Hopping: Real-time dynamics and detailed balance | Dil K. Limbu, Farnaz A. Shakib | 2023-05-14T20:23:12Z | http://arxiv.org/abs/2305.13320v1

# Frustrated hops in Ring Polymer Surface Hopping: Real-time dynamics and detailed balance
###### Abstract
Ring Polymer Surface-Hopping (RPSH) has been recently introduced as a well-tailored method for incorporating nuclear quantum effects (NQEs), such as zero-point energy and tunneling, into non-adiabatic molecular dynamics simulations. The practical widespread usage of RPSH demands a comprehensive benchmarking of different reaction regimes and conditions with equal emphasis on demonstrating both the cons and pros of the method. Here, we investigate the fundamental questions related to the conservation of energy and detailed balance in the context of RPSH. Using Tully's avoided crossing model as well as a 2-level system coupled to a classical bath undergoing Langevin dynamics, we probe the critical problem of the proper treatment of the classically forbidden transitions stemming from the surface hopping algorithm. We show that proper treatment of these frustrated hops is key to the accurate description of real-time dynamics as well as reproducing the exact quantum Boltzmann population.
## I Introduction
Trajectory-based mixed quantum-classical dynamics methods have been a mainstream approach in describing electronic and/or nuclear quantum effects in condensed-phase dynamics.[1; 2; 3; 4; 5] For more than three decades, Tully's Fewest Switches Surface Hopping (FSSH)[6] has served as ground zero for developing a myriad of such methods. In the original FSSH, the classical degrees of freedom (DOFs) evolve on the adiabatic surfaces interrupted by instantaneous electronic transitions. For each nuclear trajectory, the associated electronic density matrix is determined by coherently integrating the electronic time-dependent Schrodinger equation along the classical trajectory. The transition probabilities between adiabatic states are then determined from the density matrix elements and their time derivatives. Hammes-Schiffer and Tully extended this methodology to non-adiabatic transitions between vibrational states as well, which allowed simulation of proton-transfer (PT)[7] and proton-coupled electron transfer (PCET)[8; 9; 10] reactions. Efforts to remedy the primary limitation of the FSSH ansatz, commonly known as the overcoherence problem or lack of decoherence, gave birth to more sophisticated trajectory-based methods such as augmented surface hopping,[11] decoherence-induced surface hopping,[12] simultaneous-trajectory surface hopping,[13] and coherent fewest-switches surface-hopping,[14] to name a few. The lack of a formal derivation of surface hopping was met by the advent of a first-principles trajectory-based methodology utilizing the quantum-classical Liouville equation (QCLE),[15; 16] which proved to be very accurate for PT[17] reactions. Shakib and Hanna extended QCLE to the study of the rate and mechanism of PCET[18; 19; 20] reactions, yielding exceptionally accurate results, albeit at the expense of a huge computational cost. The number of classical trajectories needed to reach convergence in the QCLE approach for PCET reactions turned out to be on the order of millions,[20] which is 1000 times more than the FSSH requirement. Nevertheless, Kapral later showed that, by applying two approximations, surface hopping could be derived from QCLE,[21] hence putting the former on much firmer ground than its previous, phenomenological footing. Very recently, efforts to deal with the limitations of FSSH have led to a mapping approach to surface hopping (MASH),[22] which imposes internal consistency between nuclear and electronic DOFs. Still, MASH performs its best with the addition of decoherence; however, this is not an _ad hoc_ correction scheme but the result of the rigorous derivation of the method at the QCLE limit.[22]

In 2012, Shushkov _et al._ tried to remedy another inherent limitation of surface hopping,[23] i.e., the lack of nuclear quantum effects (NQEs), by a marriage between surface hopping and ring polymer molecular dynamics (RPMD).[24] In this method, the non-adiabatic electronic transitions are described by the FSSH algorithm. At the same time, NQEs are incorporated with Feynman's imaginary-time path-integral formalism,[25] giving birth to ring polymer surface hopping (RPSH). Thus, RPSH is deemed a well-tailored method for investigating multi-electron/multi-proton transfer dynamics in condensed phases.
Shakib and Huo applied RPSH with centroid approximation to investigate electronic non-adiabatic dynamics in three infamous Tully models with explicit nuclear quantization.[26] They showed that RPSH can qualitatively capture the correct branching probabilities, especially at low-temperature limits, due to the inclusion of nuclear tunneling and zero-point energy via the extended phase-space of the classical ring polymer. Interestingly, RPSH was also capable of quantitatively reproducing the branching probabilities in the model of "extended coupling with reflections", which is specifically designed for the study of coherence/decoherence.
Despite all the promises that RPSH offers, it is still an approximate method whose accuracy and validity limits should be carefully examined. Specifically, while we know that it combines the powerful capabilities of FSSH and RPMD, we should also ask about the shortcomings of its constituent methodologies. Are they magnified or quenched within RPSH? And to what degree? In the current work, we address the critical issues related to the surface hopping phenomenon. We will focus on the conservation of energy, which leads to frustrated hops, and their effect on preserving detailed balance. We show how these frustrated hops are manifested in RPSH and offer different remedies for dealing with them. The effect of such remedies is discussed in terms of different model systems to provide valuable insights into the choice of a proper recipe for treating frustrated hops with respect to the objectives of a study. While no single treatment of frustrated hops seems applicable to all models, RPSH systematically operates better than FSSH. This is on top of the inclusion of NQEs into the dynamics that separates RPSH from the ever-expanding pool of surface hopping methods. Further, this paper portrays a clear picture of the functionality of RPSH, which is crucial for future developments _via_ a unified community effort. In the remainder of this paper, we will first give a summary of the RPSH algorithm at the centroid level as well as methodological considerations regarding frustrated hops. The studied models will be introduced at the beginning of each subsection in the Results and Discussions. Concluding Remarks and our Outlook toward future research directions in this field will follow.
## II Methodological considerations
### Ring Polymer Surface Hopping
We present a brief introduction to the RPSH ansatz with centroid approximation; the interested reader is referred to Refs. [23] and [26] for further details. In RPSH, every nuclear DOF is represented by a ring polymer comprising \(n\) copies of the nuclear DOF, known as beads, connected by harmonic forces. The corresponding extended Hamiltonian is described by:
\[H_{n}=\sum_{i=1}^{n}\left[\frac{\mathbf{P}_{i}^{2}}{2M}+\frac{M\omega_{n}^{2} }{2}(\mathbf{R}_{i}-\mathbf{R}_{i-1})^{2}+V_{\alpha}(\mathbf{R}_{i})\right]. \tag{1}\]
Here, \(n\) is the total number of beads and \(\omega_{n}=n/\beta\hbar\) where \(\beta=1/k_{\mathrm{B}}T\) is the reciprocal temperature. \(\mathbf{P}_{i}\) and \(\mathbf{R}_{i}\) represent the momentum and position of each bead of the ring polymer which moves on a single adiabatic surface \(|\alpha;\mathbf{R}_{i}\rangle\) corresponding to the potential energy \(V_{\alpha}(\mathbf{R}_{i})=\langle\alpha;\mathbf{R}_{i}|\hat{V}|\alpha; \mathbf{R}_{i}\rangle\). At every time step of the nuclear dynamics, the position and momentum of the centroid of the ring polymer is updated as:
\[\bar{\mathbf{R}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{R}_{i}\qquad\qquad\bar{ \mathbf{P}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{P}_{i} \tag{2}\]
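For concreteness, the centroid variables of Eq. (2) and the extended Hamiltonian of Eq. (1) can be evaluated as in the short sketch below; this is a one-dimensional illustration with our own naming conventions, not the authors' code.

```python
import numpy as np

def centroid(R_beads, P_beads):
    """Centroid position and momentum of the ring polymer, Eq. (2)."""
    return R_beads.mean(), P_beads.mean()

def ring_polymer_energy(R_beads, P_beads, M, beta, hbar, V_alpha):
    """Extended Hamiltonian of Eq. (1) for one nuclear DOF on surface alpha.
    V_alpha must be a vectorized callable returning the adiabatic potential."""
    n = len(R_beads)
    omega_n = n / (beta * hbar)
    # Harmonic springs between neighboring beads, with cyclic boundary R_0 = R_n
    spring = 0.5 * M * omega_n**2 * np.sum((R_beads - np.roll(R_beads, 1))**2)
    kinetic = np.sum(P_beads**2) / (2.0 * M)
    potential = np.sum(V_alpha(R_beads))
    return kinetic + spring + potential
```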
At this point, according to the surface hopping algorithm, the time-dependent Schrodinger equation is numerically integrated as
\[i\hbar\dot{c}_{\alpha}(t)=V_{\alpha}(\bar{\mathbf{R}})\,c_{\alpha}(t)-i\hbar\sum_{\beta}\dot{\bar{\mathbf{R}}}\cdot\mathbf{d}_{\alpha\beta}(\bar{\mathbf{R}})\,c_{\beta}(t) \tag{3}\]
to obtain the electronic coefficients \(c_{\alpha}\) associated with each adiabatic surface. The important difference with the original FSSH is that this integration is carried out along the motion of the centroid. Both the energy of the adiabatic surfaces, \(V_{\alpha}(\bar{\mathbf{R}})\), and the non-adiabatic coupling vector between surfaces, \(\mathbf{d}_{\alpha\beta}(\bar{\mathbf{R}})=\langle\alpha;\bar{\mathbf{R}}|\nabla_{\mathbf{R}}|\beta;\bar{\mathbf{R}}\rangle\), are evaluated at the centroid level. The same goes for the probability of transition, i.e., switching between surfaces at each time step \(\Delta t\), which is defined based on the density matrix elements, \(\rho_{\alpha\beta}=c_{\alpha}c_{\beta}^{*}\), as
\[g_{\alpha\beta}=\frac{-2\,\mathrm{Re}\!\left(\rho_{\beta\alpha}^{*}\,\dot{\bar{\mathbf{R}}}\cdot\mathbf{d}_{\beta\alpha}(\bar{\mathbf{R}})\right)\Delta t}{\rho_{\alpha\alpha}}. \tag{4}\]
The nonadiabatic transition from the current surface \(\alpha\) to the next surface \(\beta\) occurs if \(g_{\alpha\beta}\) is greater than a randomly-generated number between 0 and 1. If a transition occurs, the entire ring polymer hops to the next adiabatic surface while the velocity of each bead in the ring polymer is re-scaled in order to conserve the total energy of the quantum plus classical subsystems.
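A schematic version of this hop attempt, including the energy-conservation check that gives rise to frustrated hops (discussed in the next subsection), is sketched below for a one-dimensional model. The function names, the use of the total bead kinetic energy in the check, and the uniform rescaling of all bead momenta are our own simplifying assumptions rather than the authors' prescription.

```python
import numpy as np

def attempt_hop(g_ab, P_beads, M, V_current, V_target, rng):
    """Attempt a surface hop for the whole ring polymer.

    g_ab      : hopping probability of Eq. (4) evaluated at the centroid
    P_beads   : bead momenta (one-dimensional model)
    V_current : potential energy of the current surface used in the check
    V_target  : potential energy of the target surface used in the check
    rng       : numpy random Generator
    Returns (hopped, new_P_beads); a frustrated hop returns (False, P_beads).
    """
    if rng.random() >= g_ab:
        return False, P_beads                 # no transition attempted

    kinetic = np.sum(P_beads**2) / (2.0 * M)
    gap = V_target - V_current
    if kinetic <= gap:
        return False, P_beads                 # frustrated hop: not enough kinetic energy

    # Uniform rescaling of all bead momenta so that the total energy is conserved
    scale = np.sqrt((kinetic - gap) / kinetic)
    return True, scale * P_beads
```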
### Conservation of energy and frustrated hops
By construction, surface hopping methodologies conserve the total quantum plus classical energy by requiring hopping trajectories to have enough kinetic energy to compensate for the potential energy difference between surfaces, or states. Transition attempts that do not fulfill this requirement are deemed _frustrated hops_ and are returned to the original states.[6] Proper treatment of these frustrated hops is an essential step in carrying out surface hopping simulations. Tully pictured frustrated hops as trajectories hitting a wall and coming back; hence, the component of their velocity in the direction of the nonadiabatic coupling vector needed to be reversed.[6; 7] Later, Muller and Stock showed that _not_ reversing the velocity of frustrated hops leads to significantly improved results in photoinduced relaxation dynamics in comparison to reversing the velocity,[27] hinting that the choice of treatment can be system-dependent. In our first implementation of RPSH,[26] we opted to not reverse the velocity of the beads of the ring polymer in the case of frustrated hops. To assess the accuracy of the method and shed light on the manifestation of frustrated hops in RPSH, here we revisit Tully's single avoided crossing model and establish the effect of reversing or not reversing the velocity of frustrated hops on the adiabatic population transfer profile in the low to medium momentum regions, where nuclear quantum effects are paramount. Furthermore, we employ two different remedies suggested by Truhlar[28] and Subotnik[29] for frustrated hop treatment. Jasper and Truhlar combined the features of velocity "reversing" (VR) and "not reversing" (NR) in a prescription called \(\Delta V\) which, whenever a frustrated hop occurs, allows the trajectory to feel the target adiabatic state \(\beta\). Accordingly, in the case of a frustrated hop in RPSH, we compare \(\dot{\bar{\mathbf{R}}}\cdot\mathbf{d}_{\alpha\beta}\) to the component of the force in the direction of the nonadiabatic coupling vector, defined as
\[F_{\beta}=-\nabla V_{\beta}(\bar{\mathbf{R}})\cdot\mathbf{d}_{\alpha\beta}(\bar{ \mathbf{R}}). \tag{5}\]
If these two quantities have the same sign, we do not reverse the velocity but will do so if they have opposite signs. On the other hand, Jain and Subotnik suggested a comparison between the components of forces from both the current and the
target adiabatic states. Accordingly, in RPSH, we calculate:
\[F_{\alpha}\,F_{\beta}<0 \tag{6}\]
\[F_{\beta}\left(\dot{\bar{\mathbf{R}}}\cdot\mathbf{d}_{\alpha\beta}\right)<0 \tag{7}\]
where \(F_{\alpha}=-\nabla V_{\alpha}(\bar{\mathbf{R}})\cdot\mathbf{d}_{\alpha\beta}(\bar{\mathbf{R}})\) is defined analogously to Eq. (5) for the current state. In this \(\Delta V^{2}\) scheme, the velocity of the beads is reversed only if both conditions are satisfied.
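To make the four treatments concrete, the sketch below illustrates how the velocity-adjustment decision after a frustrated hop could be coded for a one-dimensional ring polymer. It is a minimal illustration rather than the authors' implementation: the function names, the scalar treatment of \(\mathbf{d}_{\alpha\beta}\), the reversal of the full bead momenta (exact only in one dimension), and the exact form of the \(\Delta V^{2}\) test are our assumptions.

```python
import numpy as np

def treat_frustrated_hop(scheme, P_beads, d_ab, F_current, F_target):
    """Decide whether to reverse the bead momenta after a frustrated hop.

    scheme    : 'NR', 'VR', 'dV', or 'dV2'
    P_beads   : array of bead momenta (one-dimensional model, so d_ab is a scalar)
    d_ab      : nonadiabatic coupling evaluated at the centroid
    F_current : force on the current surface at the centroid, -dV_alpha/dR
    F_target  : force on the target surface at the centroid,  -dV_beta/dR
    Returns the (possibly reversed) bead momenta.
    """
    P_centroid = P_beads.mean()  # centroid momentum of the ring polymer

    if scheme == 'NR':    # never reverse the velocity
        reverse = False
    elif scheme == 'VR':  # always reverse the velocity
        reverse = True
    elif scheme == 'dV':  # Jasper-Truhlar: reverse if velocity and target-state force oppose
        reverse = (P_centroid * d_ab) * (F_target * d_ab) < 0.0
    elif scheme == 'dV2':  # assumed Jain-Subotnik-type test: reverse only if both conditions hold
        reverse = ((F_current * d_ab) * (F_target * d_ab) < 0.0 and
                   (F_target * d_ab) * (P_centroid * d_ab) < 0.0)
    else:
        raise ValueError(f"unknown scheme: {scheme}")

    return -P_beads if reverse else P_beads
```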
## III Results and Discussions

### Single avoided crossing model: Real-time dynamics

In the low momentum region of this model, RPSH recovers the correct physical behaviour, i.e. a smooth transition between \(R_{1}\) and \(T_{1}\), through the extended phase-space of a classical ring polymer. Here, we evaluate the effects of the proper treatment of frustrated hops on branching probabilities by applying four different schemes for adjusting their velocities, namely velocity reversing (VR) or not reversing (NR), along with the \(\Delta V\) and \(\Delta V^{2}\) approaches,[28; 29] as explained in the previous section. As can be seen in Fig. 1, the four schemes show similar behaviour in capturing the smooth transition in the very low momentum region but start deviating around \(k=3.3\) a.u., where only the NR scheme follows the same behaviour as the exact results in the 100% reversal of \(R_{1}\) to \(T_{1}\). The other three schemes underestimate the magnitude of adiabatic transmission at higher values of momentum. The reason goes back to the fact that trajectories facing frustrated hops in the NR scheme still continue their motion with positive momentum. They tunnel through the energy barrier on state \(|1\rangle\) and end up on the product side. On the other hand, fully or partially reversing the velocity in the other three schemes forces such a trajectory to go back to the reactant side, negating the possibility of tunneling through the energy barrier. This results in an artificial increase of \(R_{1}\) at the expense of a decrease in \(T_{1}\). As we increase the momentum beyond \(k=6.5\) a.u., we reach an intermediate reaction regime where non-adiabatic trajectories appear alongside adiabatic trajectories. This higher momentum results in successful transitions from state \(|1\rangle\) to \(|2\rangle\) and a smooth increase of \(T_{2}\), as shown in Fig. 2b. As can be seen, all four schemes retrieve the correct physical behaviour due to the inclusion of ZPE in the dynamics by RPSH. We have already shown that FSSH fails to do so.[26] On the other hand, there is still significant deviation between NR and the other three schemes in retrieving the correct physical behaviour in the case of \(T_{1}\) and \(R_{1}\), with only the former being successful.

To shed more light on this finding, we take a look at the number of frustrated hops averaged over the number of trajectories (\(N_{FHs}\)) as a function of the incoming momentum in the RPSH method, Fig. 3. A comparison between these results and Fig. 1 and Fig. 2 shows that the deviation between the four schemes starts as soon as we have frustrated hops in our simulations, i.e. around \(k=3.3\) a.u., and persists until around \(k=8.9\) a.u., where \(N_{FHs}\) goes to zero. Clearly, in the adiabatic or intermediate reaction regimes, RPSH trajectories encounter frustrated hops, and even partially reversing the velocity can prevent an otherwise successful trajectory dynamics from taking place. Another point seen in Fig. 3 is the lower number of frustrated hops in the NR scheme compared to the other three schemes. The reason can be traced back to the fact that in the NR scheme, successful trajectories continue moving to the product side regardless of how many times they encounter frustrated hops or whether this encounter happens in the vicinity of an energy barrier on the reactant or the product side.
Figure 3: The number of frustrated hops at different velocity re-scaling schemes in RPSH in the low and intermediate momentum region of the SAC model.
Figure 2: Branching probabilities in the SAC model at the intermediate momentum region; transmission on state \(|1\rangle\), \(T_{1}\), (a), transmission on state \(|2\rangle\), \(T_{2}\), (b), and reflection on state \(|1\rangle\), \(R_{1}\) (c).
For the other three schemes, on the other hand, a trajectory on the product side might encounter a frustrated hop and, due to the reversal of its velocity, tunnel back to the reactant side. Now, if that trajectory encounters another velocity reversal, it will go back to the product side. Overall, reversing the velocity allows the trajectories to spend more time in the avoided crossing region, and hence \(N_{FHs}\) increases. This can enhance the deviation of the results of the velocity re-scaling schemes from the exact results.
### _N_-particle chain model: Detailed balance
The original RPMD, by construction, yields real-time MD trajectories that preserve the exact quantum Boltzmann distribution.[24] Its extension to multi-state non-adiabatic dynamics, however, did not always lead to preserving detailed balance, mainly due to zero-point energy leakage. Nevertheless, multi-variable (MV)-RPMD, where both electronic states and nuclear DOFs are represented by continuous Cartesian coordinates,[33] demonstrated that the ring polymer isomorphism can exactly preserve detailed balance in multi-state systems. On the other hand, as was explained before, FSSH approximately preserves detailed balance due to the presence of frustrated hops. Now, the question is whether RPSH can borrow these desired characteristics and improve upon them. To investigate this matter, here we employ a 2-state system, with constant values of the energy gap (\(\Delta=E_{2}-E_{1}\)) and non-adiabatic coupling vector (\(d_{12}\)), coupled to a chain of \(N\) nuclear DOFs.[30] The quantum subsystem is coupled to only the first particle in the chain, and the nearest-neighbor potential energy between the particles is a quartic Morse potential as
\[V(\mathbf{R})=\sum_{i=1}^{N}V_{M}(R_{i}-R_{i+1}) \tag{8}\]
where
\[V_{M}(R)=V_{0}(a^{2}R^{2}-a^{3}R^{3}+0.58a^{4}R^{4}). \tag{9}\]
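As a concrete illustration, the chain potential of Eqs. (8) and (9) can be evaluated with a few lines of code. This is a sketch under our own assumptions (an open chain summed over nearest-neighbor pairs, parameters taken from Table 1 below), not the authors' code.

```python
import numpy as np

# Parameters from Table 1 (V0 in kJ/mol, a in 1/Angstrom)
V0, a = 175.0, 4.0

def morse_quartic(r):
    """Quartic Morse-type nearest-neighbor potential, Eq. (9)."""
    return V0 * (a**2 * r**2 - a**3 * r**3 + 0.58 * a**4 * r**4)

def chain_potential(R):
    """Total chain potential, Eq. (8), summed over nearest-neighbor pairs
    of an open chain of len(R) particles (our indexing assumption)."""
    dR = R[:-1] - R[1:]  # R_i - R_{i+1}
    return morse_quartic(dR).sum()
```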
The atom farthest from the quantum subsystem (\(R_{N}\)) is coupled to a Langevin thermostat. All the particles in the chain are represented by ring polymers comprised of four beads, based on our convergence tests; see the SI Figure S1. Consequently, the EOM for the last particle is slightly different from conventional classical Langevin dynamics. Each mode (\(k\)) of a free ring polymer exhibits uncoupled harmonic oscillator dynamics when represented in the normal mode basis.[34] Accordingly, the corresponding EOM for the last ring polymer is written as
\[\dot{P}_{N}^{(k)}=-\frac{\partial V}{\partial R_{N}^{(k)}}-\gamma^{(k)}P_{N}^{(k)}+\sqrt{\frac{2M\gamma^{(k)}}{\beta_{n}\Delta t}}\,\xi^{(k)}, \tag{10}\]
where \(\beta_{n}=\beta/n\). The second term includes the Langevin friction constant \(\gamma^{(k)}\). For the excited modes of the ring polymer (\(k>0\)), \(\gamma^{(k)}=2\omega_{k}\), where \(\omega_{k}=2\sin(k\pi/n)/\beta_{n}\hbar\), and for the centroid mode \(\gamma^{(0)}=\gamma\). The last term in Eq. 10 represents a random force, with \(\xi^{(k)}\) a Gaussian random number of zero mean and unit variance, whose width depends both on the number of beads, through \(\beta_{n}\), and on the time step of the simulation \(\Delta t\). All parameters of our 2-state chain model are listed in Table 1, with some variations compared to the original model.[30] Specifically, we changed the energy gap to 8.0 kJ/mol and the nonadiabatic coupling vector to \(-6.0\) Å\({}^{-1}\) to expand our dynamics simulations to a wider range of temperatures, 200 K\(-\)2500 K, with a special emphasis on the low-temperature region.
| Parameter | Value | Unit |
| --- | --- | --- |
| \(N\) | 20 | |
| \(m\) | 12.0 | amu |
| \(V_{0}\) | 175.0 | kJ/mol |
| \(a\) | 4.0 | Å\({}^{-1}\) |
| \(\gamma\) | \(10^{14}\) | s\({}^{-1}\) |
| \(\Delta\) | 8.0 | kJ/mol |
| \(d_{12}\) | \(-6.0\) | Å\({}^{-1}\) |

Table 1: Simulation parameters used for the _N_-particle chain model.
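The thermostatted EOM of Eq. (10) can be propagated mode by mode. The sketch below mirrors the friction and random-force terms of Eq. (10) with a simple Euler-style kick over the normal-mode momenta of the terminal ring polymer; the integration scheme, variable names, and separation of the conservative force are our assumptions, not the authors' integrator.

```python
import numpy as np

def thermostat_kick(P_nm, gamma, M, beta_n, dt, rng):
    """One Euler-style thermostat kick on the normal-mode momenta P_nm of the
    terminal ring polymer, mirroring the last two terms of Eq. (10).
    gamma is an array of mode-dependent friction constants; the conservative
    force -dV/dR is assumed to be applied in a separate step."""
    xi = rng.standard_normal(P_nm.shape)  # Gaussian random numbers, zero mean, unit variance
    kick = -gamma * P_nm + np.sqrt(2.0 * M * gamma / (beta_n * dt)) * xi
    return P_nm + dt * kick

def frictions(n, beta_n, hbar, gamma0):
    """Mode-dependent friction constants: gamma0 for the centroid (k = 0) and
    2*omega_k for the excited modes, with omega_k = 2 sin(k*pi/n)/(beta_n*hbar)."""
    k = np.arange(n)
    omega_k = 2.0 * np.sin(k * np.pi / n) / (beta_n * hbar)
    g = 2.0 * omega_k
    g[0] = gamma0
    return g
```

For the four-bead ring polymers used here, `frictions(4, ...)` would return the centroid friction \(\gamma\) together with the three excited-mode frictions \(2\omega_{k}\) fed into the kick above.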
Figure 4: Equilibrium populations on the first state \(P_{1}\) (a), and on the second state \(P_{2}\) (b) of the 2-state quantum subsystem in the chain model obtained from RPSH (\(\bullet\)) and FSSH (\(\blacklozenge\)) methods with NR scheme. Exact Boltzmann populations are shown in solid lines. Panel (c) shows the relative errors of RPSH and FSSH in producing \(P_{2}\) with respect to the Boltzmann population.
Figs. 4a and 4b show the populations of the two states (\(P_{1}\) and \(P_{2}\)) over a wide range of temperatures, with the solid lines representing the exact Boltzmann populations (\(P_{\text{B}}\)). The RPSH populations are obtained from 1000 trajectories of 50 ps with \(\Delta t=0.01\) fs, taking the average of the last 20 ps to avoid any dependence on the initial conditions. FSSH results are obtained similarly and are shown for comparison. The RPSH results are very satisfactory but at the same time are very close to the FSSH data. For a better comparison, Fig. 4c demonstrates the relative error of the RPSH and FSSH results for the second state, defined as \((P_{2}-P_{\text{B}})/P_{\text{B}}\). As can be seen, at high temperatures RPSH and FSSH show identical results with the lowest relative error. As the temperature is decreased, the error of both methods increases, but RPSH performs much better than FSSH in preserving detailed balance. Here, we used the NR scheme to obtain these results, following the observations in the previous section. However, it should be noted that Prezhdo and coworkers earlier used a similar model to show that the VR scheme has a slightly improving effect on preserving detailed balance with FSSH in a temperature range of 350 K\(-\)2500 K.[32] In Fig. 5, we demonstrate the relative error of the four different velocity re-scaling schemes in preserving detailed balance in RPSH. Among the four schemes, the VR scheme indeed leads to almost perfect preservation of detailed balance at both high and low temperatures. Nevertheless, regardless of the employed velocity re-scaling scheme, RPSH always operates better than FSSH in the low-temperature limit. This can be traced back to the natural preservation of the quantum Boltzmann distribution in RPMD, which is manifested even in the multi-state dynamics here.
### Differentiating between the two models
We showed in the case of the SAC model that reversing the velocity of the adiabatic RPSH trajectories prevents an otherwise successful trajectory from going from the reactant to the product side. This observation can be generalized to any situation where tunneling through the energy barrier is a prominent factor in the success of adiabatic reactions. It should be noted that one can enforce the correct dynamics even with total or partial velocity reversing in such systems by not allowing the trajectories to attempt a transition. However, that requires _a priori_ information about the reaction regimes of different systems. On the other hand, regardless of the velocity re-scaling scheme, RPSH is more successful than FSSH in preserving detailed balance in the case of the 2-state quantum subsystem coupled to a linear chain of 20 nuclear DOFs. The reason can be traced back to two factors. First, RPMD technically is a phase-space representation of the quantum Boltzmann distribution. Second, this extended phase-space, with a wider distribution of momentum, generally leads to a higher number of frustrated hops in RPSH compared to FSSH; see Fig. 6a. And, as we know, the existence of frustrated hops is the reason behind the approximate preservation of detailed balance in surface hopping methods. It is noted that \(N_{FHs}\) of the RPSH method is similarly higher than that of FSSH in the case of the SAC model, Fig. 6b.
Why, then, is the real-time dynamics in the SAC model more accurately recovered by not reversing the velocity of the frustrated hops in RPSH trajectories, while reversing it leads to exactly preserving detailed balance in the chain model, as was already shown for the FSSH case as well?[32] One should note the essential differences between the two models, with the lack of an avoided crossing in the chain model being the most important one.
Figure 5: Relative errors as defined before in RPSH results with four different velocity re-scaling schemes.
Figure 6: Number of frustrated hops averaged over number of trajectories in the chain model over a range of temperatures (a) and in the SAC model over a range of initial momentum (b) obtained from RPSH and FSSH methods.
The chain model is designed to be representative of a quantum system in thermal equilibrium with its surroundings. Furthermore, it is not suitable for investigating scattering events. Hence, the important observation here is the superiority of RPSH in preserving detailed balance, with the additional insight that one can fine-tune the results by choosing an appropriate treatment for frustrated hops. On the other hand, the real-time dynamics of multi-state systems is best recovered by not interfering with the role of NQEs throughout the dynamics. This is of course not the case for other surface hopping methods that neglect NQEs. Nevertheless, care should be taken in the choice of an appropriate treatment of frustrated hops with respect to the system of interest.
## IV Conclusions and Outlooks
In this work, the Ring Polymer Surface Hopping (RPSH) non-adiabatic molecular dynamics methodology is critically revisited in order to shed light on its accuracy and validity limits. The most exciting aspect of RPSH, i.e., the inclusion of nuclear quantum effects into molecular dynamics, can be further harvested if the cons and pros of the method are exposed and documented in different systems and reaction regimes. RPSH, similar to other surface hopping algorithms, conserves the total quantum plus classical energy at the level of each individual trajectory. This leads to the creation of frustrated hops which, although they disrupt the internal consistency of the method, lead to the conservation of the exact quantum Boltzmann distribution. We showed that RPSH yields better results in preserving detailed balance than the original FSSH method in the low-temperature region using a 2-state system coupled to Langevin dynamics via a linear chain of 20 nuclear DOFs. We also showed that different schemes of re-scaling the velocity have a profound effect on preserving the correct quantum Boltzmann populations, where reversing the velocity after each frustrated hop leads to almost exact conservation of detailed balance over a range of low to high temperatures. This was opposite to what we observed in calculating the branching probabilities in Tully's avoided crossing model. While we have more frustrated hops in RPSH than FSSH for this model, we also have an interplay between frustrated hops and nuclear tunneling in the adiabatic reaction regime. Reversing the velocity of frustrated hops can disrupt the motion of an adiabatic trajectory that otherwise would tunnel through a reaction barrier from the reactant to the product side. The correct real-time dynamics in the case of this avoided crossing model is preserved better by not reversing the velocity of frustrated hops. Hence, we advise caution in choosing the proper treatment of frustrated hops in RPSH, where one needs to consider the nature of the problem to be studied.
To further bring RPSH to the mainstream, research should be directed toward (i) a systematic implementation of decoherence correction and (ii), if necessary, reformulating the algorithm for the proper treatment of internal consistency. Surface-hopping methods take care of electronic coherence in an elegant way. However, they have always been plagued with the over-coherence problem or neglect of decoherence.[35] We have already shown that, in contrast to FSSH, RPSH can capture distribution-dependent decoherence in Tully's "extended coupling with reflection" model even with a deterministic initial momentum.[26] This improvement is because, despite the initial deterministic momentum, each bead's momentum gradually differs through the dynamical propagation, resulting in a broader centroid momentum distribution compared to the distribution in FSSH. Therefore, different reflected ring polymer trajectories gain different phases, and over an ensemble of the ring polymer trajectories, the high-frequency oscillations in reflection coefficients cancel out. However, we cannot yet claim this is a general effect. In fact, the original FSSH can also enjoy similar improvements if the initial conditions of the MD simulations involve a Maxwell-Boltzmann distribution instead of deterministic momenta. Application of RPSH in more sophisticated models designed for investigation of decoherence, as well as equipping it with different decoherence correction schemes, is currently being pursued in our group. On the other hand, as briefly mentioned in the methodological considerations, surface hopping methods violate internal consistency, and as a result, populations of electronic states can be different if they are calculated based on the number of nuclear trajectories on each surface, or the electronic amplitudes of those states, or a mixture of both.[36] Recently, there has been a growing argument on the importance of dealing with this issue rather than "decoherence correction" to systematically improve the surface hopping results.[22] A part of the research in our group focuses on this issue in the context of the real-time population transfer dynamics of a couple of three-state Morse potential model systems that are designed for investigating photo-excited relaxation. As can be seen in the SI Fig. S2, RPSH yields time-dependent diabatic state populations that are in good agreement with the exact quantum dynamics simulations for all three models. However, the three approaches explained earlier give different population profiles. On the other hand, according to our observations, no frustrated hops are recorded for these simulations. Hence, these models provide a unique case for disentangling the concepts of "internal consistency" and "decoherence correction" in the context of the RPSH methodology.
###### Acknowledgements.
The authors acknowledge support from the New Jersey Institute of Technology (NJIT). This research has been enabled by the use of computing resources and technical support provided by the HPC center at NJIT. This work partially used Bridges2 at Pittsburgh Supercomputing Center through allocation CHE200007 from the Extreme Science and Engineering Discovery Environment (XSEDE), which was supported by National Science Foundation Grant no. 1548562.[37]
## Conflict of Interest
The authors have no conflicts to disclose.
## Data availability statement
The data that support the findings of this study are available within the article and its supplementary material.
2304.13009 | The Potential of Visual ChatGPT For Remote Sensing | Lucas Prado Osco, Eduardo Lopes de Lemos, Wesley Nunes Gonçalves, Ana Paula Marques Ramos, José Marcato Junior | 2023-04-25T17:29:47Z | http://arxiv.org/abs/2304.13009v2

# The Potential of Visual ChatGPT For Remote Sensing
###### Abstract
Recent advancements in Natural Language Processing (NLP), particularly in Large Language Models (LLMs), associated with deep learning-based computer vision techniques, have shown substantial potential for automating a variety of tasks. These are known as Visual LLMs, and one notable model is Visual ChatGPT, which combines ChatGPT's LLM capabilities with visual computation to enable effective image analysis. These models' abilities to process images based on textual inputs can revolutionize diverse fields, and while their application in the remote sensing domain remains unexplored, it is important to acknowledge that novel implementations in it are to be expected. Thus, this is the first paper to examine the potential of Visual ChatGPT, a cutting-edge LLM founded on the GPT architecture, to tackle the aspects of image processing related to the remote sensing domain. Among its current capabilities, Visual ChatGPT can generate textual descriptions of images, perform Canny edge and straight line detection, and conduct image segmentation. These offer valuable insights into image content and facilitate the interpretation and extraction of information. By exploring the applicability of these techniques within publicly available datasets of satellite images, we demonstrate the current model's limitations in dealing with remote sensing images, highlighting its challenges and future prospects. Although still in early development, we believe that the combination of LLMs and visual models holds a significant potential to transform remote sensing image processing, creating accessible and practical application opportunities in the field.
1
Footnote 1: Faculty of Engineering and Architecture and Urbanism, University of Western São Paulo (UNOESTE), Rod. Raposo Tavares, km 572, Limoeiro, Presidente Prudente 19067-175, SP, Brazil.
2
Footnote 2: Faculty of Computing, Federal University of Mato Grosso do Sul (UFMS), Av. Costa e Silva, Cidade Universitária, Campo Grande 79070-900, MS, Brazil.
3
Footnote 3: Department of Cartography, São Paulo State University (UNESP), R. Roberto Simonsen, 305, Presidente Prudente 19060-900, SP, Brazil.
4
Footnote 4: Faculty of Engineering, Architecture and Urbanism and Geography, Federal University of Mato Grosso do Sul (UFMS), Av. Costa e Silva, Cidade Universitária, Campo Grande 79070-900, MS, Brazil.
## 1 Introduction
Remote sensing image processing is a critical task for monitoring and analyzing the Earth's surface and environment. It is used in a wide range of fields such as agriculture, forestry, geology, water resources, and urban planning [38, 24]. However, analyzing and interpreting large volumes of remote sensing data can be time-consuming and labor-intensive, requiring specialized knowledge and expertise [24]. In recent years, Large Language Models (LLMs) emerged as powerful and innovative tools for human assistance in various domains [7], holding the potential to be implemented in the remote sensing area as well.
As Artificial Intelligence (AI) continues to evolve, novel models demonstrate an unprecedented ability to understand and generate human-like text, as well as perform numerous tasks based on human guidance [42]. Among the LLMs, a model named ChatGPT stands out as a remarkable example, offering immense promise for assisting humans in multiple activities. The Generative Pre-trained Transformer (GPT), a deep learning model developed by OpenAI [23], has gained considerable attention as a promising AI technique for natural language processing tasks. This model is not only one of the most recent foundation models in development but also one of the most prominent in its field, having gained notoriety in the public eye in recent times.
The GPT model has been trained on extensive text data and can generate human-like responses to input prompts. This model is particularly useful in tasks such as chatbots, text summarization, and language translation [23, 19]. Recent research, however, has explored the application of LLMs in visual tasks such as image generation, captioning, and analysis assistance [39]. These models, also known as Visual Language Models (VLMs), can generate natural language descriptions of images and perform image processing tasks from text descriptions. One model that is gaining attention is Visual ChatGPT [35]. Visual ChatGPT is an extension of ChatGPT that incorporates visual information into its capabilities while also providing text-based responses in a conversational style.
Although still in its early stages, the fusion of LLMs and visual models may revolutionize image processing and unlock new practical applications in various fields [41]. In this context, remote sensing is an area that could directly benefit from this integration. Fine-tuned VLMs could potentially be used to process and analyze satellite and aerial images to detect land use changes, monitor natural disasters, and assess environmental impacts, as well as assist in the classification and segmentation of images for easier interpretation and decision-making.
In this paper, we discuss the significance, utility, and limitations of the Visual ChatGPT model in assisting humans in remote sensing image processing. This model has shown great potential in various applications such as question-answering systems and image generation and modification. Currently, Visual ChatGPT can perform image processing tasks like edge detection, line extraction, and image segmentation, which are interesting for the remote sensing field. The model, however, is not fine-tuned to deal with the remote sensing domain, which makes this study an early adoption of the tool. Regardless, we investigate this, as a basis for discussion of its potential, by comparing these tools within publicly available datasets of remote sensing imagery, thus measuring its capabilities both quantitatively and qualitatively.
By enabling machines to understand and generate images, Visual ChatGPT paves the way for numerous applications in image processing. Herein, we discussed how Visual ChatGPT can be adapted to the remote-sensing domain, where it might revolutionize the way we process and analyze these images. We examined state-of-the-art developments in the model, evaluated their capabilities in the context of remote sensing imagery, and proposed future research directions. Ultimately, this exploration seeks to provide insights into the integration of VLMs into remote sensing science and community.
## 2 Visual ChatGPT: A Revolution in Image Analysis and its Potential in Remote Sensing
Visual ChatGPT is an advanced VLM that combines the capabilities of text-based LLMs with visual understanding. This revolutionary approach enables machines to analyze images and generate relevant text or visual outputs, opening up new possibilities for image analysis and processing. One of the key features of Visual ChatGPT is its ability to incorporate state-of-the-art algorithms and information into its current model, facilitating continuous improvement and adaptation [35].
By fine-tuning the model with domain-specific datasets, Visual ChatGPT can become increasingly proficient in specific tasks, making it an invaluable tool for image analysis. With its architecture built to process and analyze both textual and visual information, it has the potential to revolutionize diverse fields. Interaction with Visual ChatGPT involves a dynamic and iterative process, where users can provide textual input, image data, or both, and the model responds with relevant information or actions. This flexibility allows for a wide range of tasks to be performed, including generating images from user input text, providing photo descriptions, answering questions about images, and performing object and pose detection, as well as various other image processing techniques, such as edge detection, straight line detection, scene classification, and image segmentation, which are interesting in the remote sensing context.
Image processing methods are essential for extracting valuable information from remote sensing data. However, these techniques often require additional computational knowledge and can be challenging for non-specialists to implement. VLMs like Visual ChatGPT offer the potential to bridge this knowledge gap by providing an accessible interface for non-experts to analyze image data.
Although this technology is still early in its conception, many techniques and methods can be integrated into VLMs, thus providing the means to perform complex image processing [39, 41]. In remote sensing, tasks such as edge and line detection, scene classification, and image segmentation, which currently are some of the techniques embedded into Visual ChatGPT's model, can be used to perform and enhance the analysis of aerial or satellite imagery and bring important information to the end user.
Edge detection is an image processing technique that identifies the boundaries between different regions or objects within an image. In remote sensing, edge detection is vital for recognizing features on the Earth's surface, such as roads, rivers, and buildings, among others [1]. Visual ChatGPT, with its ability to analyze images and generate relevant text or visual outputs, can be adapted to assist non-experts in executing edge detection tasks for different objects present in the image. By providing textual input alongside image data, users can interact with the model to identify boundaries and extract valuable information about the scene being analyzed.
Straight line detection is another critical image processing technique in remote sensing, with applications in feature extraction. It involves identifying linear targets in remote sensing images, such as roads, rivers, and boundaries [14]. Visual ChatGPT can be utilized to help non-experts perform line detection tasks by processing image data and easily returning line pattern identification in the images. This capability enables users to extract additional information about the underlying terrain or land use and cover without requiring in-depth knowledge of these image-processing techniques.
Scene classification and image segmentation are also essential techniques in remote sensing for identifying different types of land cover and separating them into distinct regions. These techniques aid in monitoring land use changes, detecting deforestation, assessing urban growth, monitoring water reservoirs, and estimating agriculture growth, among many others [13]. On this, VLMs can be employed to facilitate scene classification and image segmentation tasks for non-experts. In scene classification, Visual ChatGPT can be used to detect and describe objects in the image. As for segmentation, with specifically fine-tuned models, there is the potential for users to obtain results by simply interacting with the model using textual input [18], allowing them to analyze land changes and monitor impacts.
However, it is important to note that the current version of Visual ChatGPT has not yet been specifically trained on remote sensing imagery. Nor have any other VLMs been precisely tuned for this task, since the technology is still at an early stage. Nonetheless, the model's architecture and capabilities offer a solid foundation for fine-tuning and adapting it to this domain in future implementations.
By training Visual ChatGPT on remote sensing datasets, it is possible that it can be tailored to recognize and analyze unique features, patterns, and structures present in aerial or satellite images. To fully realize its potential, thorough analysis and evaluation of its usage, impact, practices, and errors in remote sensing applications are necessary. This will not only assist the development of improved VLMs but also pave the way for more efficient, accurate, and comprehensive analyses of remote sensing data performed by these tools.
## 3 Materials and Methods
In this section, we detail the materials and methods used to evaluate the performance of Visual ChatGPT in remote sensing image processing tasks. The evaluation process is divided into several stages (Figure 1), focusing on different aspects of the model's current capabilities, mainly image classification, edge and straight line detection, and image segmentation.
We initiated our evaluation of Visual ChatGPT by assessing its performance in scene classification tasks. To this end, we used a publicly available dataset containing Google Earth images labeled by human specialists. We extracted a small portion of this dataset, considering a subset of its classes for our tests.
The model's classification performance was compared to the ground-truth labels provided in the dataset.
In the next stage, we qualitatively evaluated the edge and straight line detection capabilities of Visual ChatGPT on remote sensing imagery, from Google Earth, of another publicly available dataset. The detected edges and lines were assessed to determine the model's effectiveness in identifying target features in the images. The model's performance was compared with traditional edge filters and manually labeled lines.
Lastly, we evaluated the image segmentation feature of Visual ChatGPT using the images from the same previous dataset, which was specifically designed for segmentation data training. We then compared the resulting segmentations with their corresponding masks. The comparison was conducted using an associative method in which the classes identified by the Visual ChatGPT model were associated with the classes labeled in the dataset.
### Experiment Delineation
To implement Visual ChatGPT, we downloaded the code from Microsoft Github [22], created a virtual environment, installed the required dependencies, downloaded the pre-trained models, and started a Flask server. Once the server was running, we imported the required libraries in Python code and set the API key for the OpenAI platform access. The "run_image" function inside the original "visual_chatgpt.py" file was modified to handle image resizing and captioning. Next, the Visual ChatGPT model was loaded with the required sub-models.
It is important to point out that Visual ChatGPT provides a set of different tools, but not all of them are appropriate for dealing with tasks related to remote sensing images. In this sense, we used only the following: "Get Photo Description", "Answer Question About The Image", "Edge Detection On Image", "Line Detection On Image" and "Segmentation On Image". Our code then loops through a folder containing the images and performs the Canny edge and straight line detection, as well as segmentation, on each image. It also obtains the default image description of the original loaded image using the Visual ChatGPT model.
Figure 1: Diagram of the evaluation process of Visual ChatGPT in remote sensing image processing tasks. The diagram follows an up-down/left-to-right flow, indicating that the process begins with a data survey, preparation, and setting up of the environment for loading the images into Visual ChatGPT. Next, different tasks are performed using the tools provided by Visual ChatGPT, and the results are stored for analysis where different sets of metrics are applied to evaluate the performance of the model.
It then asks a classification question to determine the class of the image. The results are stored in a .csv file and used for further evaluation.
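A compact version of this evaluation loop is sketched below. The tool wrapper `ask_visual_chatgpt`, the prompt wording, and the file paths are hypothetical placeholders standing in for the modified `visual_chatgpt.py` calls; only the overall flow (loop over images, run the tools, record the answers in a .csv file) mirrors the procedure described here.

```python
import csv
from pathlib import Path

IMAGE_DIR = Path("aid_subset")  # folder with the selected images (illustrative path)
CLASSES = ["Airport", "BareLand", "BaseballField", "Beach", "Bridge"]  # first few of the 17 classes

def ask_visual_chatgpt(image_path, prompt):
    """Placeholder for a call into the Visual ChatGPT agent (hypothetical helper).
    In the real pipeline this would route the prompt and image through the
    loaded sub-models; here it only returns a dummy string."""
    return "placeholder answer"

rows = []
for image_path in sorted(IMAGE_DIR.glob("*.jpg")):
    description = ask_visual_chatgpt(image_path, "Get Photo Description")
    question = ("Based on the description, which of the following classes "
                f"best matches the image: {', '.join(CLASSES)}?")
    answer = ask_visual_chatgpt(image_path, question)
    rows.append({"Image": image_path.name,
                 "Description": description,
                 "Answer to the Question": answer})

with open("visual_chatgpt_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Image", "Description", "Answer to the Question"])
    writer.writeheader()
    writer.writerows(rows)
```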
Visual ChatGPT utilizes sub-models that are specifically designed to cater to the different prompts and tools required. For instance, the "Get Photo Description" and "Answer Question About The Image" tools use models from the HuggingFace library [16] to generate natural language descriptions of an image and answer questions based on the given image path and the corresponding question. The "Edge Detection On Image" tool uses the Canny Edge Detector [5] from the OpenCV library to identify and detect the edges of an image when given its path. Similarly, the "Line Detection On Image" tool uses the M-LSD Detector for Straight Line model [10] to detect straight lines in the image. Finally, the "Segmentation On Image" tool employs the UniFormer Segmentation model [17] to segment different classes on the given image.
To assess the effectiveness of the Visual ChatGPT models in handling remote sensing image data, we surveyed publicly available datasets related to this field. After consideration, we selected two datasets that would allow us to investigate the model's capabilities for performing specific tasks. These datasets were the "AID: Aerial Scene Classification" [36] and the "LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation" [34]. Both datasets contain Google Earth imagery captured at different times, with varying lighting conditions and visualization scales. These datasets provide a rich and diverse set of images that are well-suited for testing the model's performance.
In its current form, the computational cost of using Visual ChatGPT is slightly higher than traditional methods. This increased cost primarily stems from the necessity of consuming tokens within the OpenAI API. The tokens required to process each input and produce the corresponding output can add up, particularly in large-scale image-processing tasks. As technology and computational efficiency evolve, we anticipate a reduction in these costs in the near future. However, at the moment, this cost influenced the number of runs conducted throughout our experiment, as we detail in the description of each dataset.
The AID dataset contains different scene classes with about 200 to 400 samples of 600x600 size for each class, with 10,000 images in total. However, due to the current cost associated with using Visual ChatGPT, we randomly selected between 26 to 32 images of each class for evaluation. These images were reviewed to ensure that a broad representation of possible inputs was selected. The following classes were evaluated: "Airport", "BareLand", "BaseballField", "Beach", "Bridge", "Center", "Church", "Commercial", "DenseResidential", "Desert", "Farmland", "Forest", "Industrial", "Meadow", "MediumResidential", "Mountain", "Park". These were stored in a "classes" variable within our code. We chose these 17 classes to ensure a diverse representation of the scenes, since the remaining classes provided similar context. This brought a total of 515 images to be loaded and described (and, therefore, classified) by the Visual ChatGPT model. These images were used for evaluating the "Get Photo Description" and "Answer Question About The Image" tools.
The LoveDA dataset is composed of 5,987 image chips, segmented into 7 land-cover categories (namely: "background", "building", "road", "water", "barren", "forest" and "farmland"), totaling 166,768 labels across 3 cities. This dataset focuses on multi-geographical environments, varying between "Urban" and "Rural" characteristics, while providing challenges like multi-scale object presence, complex background samples, and inconsistent class distributions. The dataset also provides the segmentation masks used to train image models. Here we used these masks as our "ground-truth" data and selected a small portion of the dataset, consisting of 49 images (mixing both "Urban" and "Rural" environments). These 49 image chips were all used in the evaluation of the "Edge Detection On Image", "Line Detection On Image" and "Segmentation On Image" tools. They represent the most complex and rich environments within their respective geographical contexts, and their number was limited due to the cost associated with the API's usage.
As mentioned, for the latter, we utilized a purposive sampling methodology to directly select remote sensing images representative of different land covers. Our objective was to maintain a rich representation of diverse surface covers in our dataset. As such, to ensure a comprehensive depiction of geographical scenarios, we, in this case, directly hand-picked images that provided views of both natural and man-made environments. This approach is grounded in the intention to not just create a representative dataset but to ensure that our dataset reflects the complexities and variances that are inherently present in real-world scenarios. In doing so, we believe that the chosen dataset yielded more robust and generalized outcomes in subsequent analyses and applications.
### Protocol for Scene Classification Evaluation
We first investigated whether Visual ChatGPT can assist in classifying remote sensing scenes. To test this, we used the AID dataset (Aerial Scene Classification) [36]. We evaluated the "Get Photo Description" and "Answer Question About The Image" functions of Visual ChatGPT by asking it to describe and classify the selected images. For each image, we asked Visual ChatGPT to choose, based on its image description, with which class it would associate the image. We directly asked it to choose among the 17 classes, instead of trying to guess them, thus generating guided predictions. A file was created to store the results, and the Visual ChatGPT classification was compared with the correct class from the dataset.
We used the confusion matrix to evaluate the performance of Visual ChatGPT in classifying the scenes. The confusion matrix is a commonly used tool in the evaluation of classification models. It provides a summary of the performance of a model by showing the number of correct and incorrect predictions for each class. We begin by loading the dataset into a data frame. The set contains two columns, "Image" and "Answer to the Question", that correspond to the true and predicted labels for each data point, respectively.
The classes were defined as a list of strings representing the different categories in the dataset. The two mentioned columns were then converted and used for generating the confusion matrix. The matrix takes as input the true labels (y_true), predicted labels (y_pred), and the list of class labels (classes). Finally, a heatmap was created to represent it. The heatmap
was customized by adding annotations to show the number of predictions in each cell. We calculated the Precision, Recall, F-Score and Accuracy metrics to assess the performance of Visual ChatGPT in comparison to the correct class labeled from the AID dataset. These metrics can be described as follows [26]:
Precision: Precision measures the proportion of True Positive (TP) instances among the instances that were predicted as positive. Higher precision means fewer False Positives (FP).
\[\text{Precision}=\frac{\text{TP}}{(\text{TP}+\text{FP})} \tag{1}\]
Recall: Recall measures the proportion of TP instances among the actual positive instances, thus incorporating False Negatives (FN) into its equation. This metric works better when considering binary tasks.
\[\text{Recall}=\frac{TP}{(TP+FN)} \tag{2}\]
F-Score: F-Score is the harmonic mean of Precision and Recall. It's a balanced metric that considers both false positives and false negatives, with a range from 0 (worst) to 1 (best).
\[\text{F-Score}=2\cdot\frac{\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}} \tag{3}\]
Overall Accuracy: Accuracy is the proportion of correct predictions (both TP and TN) among the total number of instances. While it's a commonly used metric, it is not suitable for imbalanced datasets.
\[\text{Accuracy}=\frac{(TP+TN)}{(TP+FP+TN+FN)} \tag{4}\]
Taking into account the substantial number of classes in this problem (n=17), we computed the baseline accuracy to provide a context for evaluating the model's overall performance. The baseline accuracy, also referred to as "random chance," signifies the probability of accurately identifying a class by merely selecting the most prevalent class, as:
\[\text{Baseline Accuracy}=\max_{i}\frac{N_{i}}{N_{\text{total}}} \tag{5}\]
where:
\(i\) represents each class in the dataset
\(N_{i}\) is the number of images in class \(i\)
\(N_{\text{total}}\) is the total number of images in the dataset.
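Putting the protocol above together, the evaluation can be sketched in a short Python script; the file name, column names, and output path below are illustrative placeholders rather than the exact ones used in our implementation.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import (confusion_matrix, precision_recall_fscore_support,
                             accuracy_score)

# Hypothetical CSV holding the stored results of the classification queries.
df = pd.read_csv("aid_visual_chatgpt_results.csv")
y_true = df["Image"]                    # ground-truth class of each image
y_pred = df["Answer to the Question"]   # class chosen by Visual ChatGPT
classes = sorted(y_true.unique())       # the 17 AID classes used

# Confusion matrix rendered as an annotated heatmap (Figure 2).
cm = confusion_matrix(y_true, y_pred, labels=classes)
sns.heatmap(cm, annot=True, fmt="d", xticklabels=classes, yticklabels=classes)
plt.xlabel("Predicted label"); plt.ylabel("True label")
plt.tight_layout(); plt.savefig("confusion_matrix.png")

# Weighted Precision, Recall, F-Score, overall Accuracy (Eqs. 1-4),
# and the "random chance" baseline of Eq. 5.
precision, recall, fscore, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=classes, average="weighted", zero_division=0)
accuracy = accuracy_score(y_true, y_pred)
baseline = y_true.value_counts().max() / len(y_true)
print(precision, recall, fscore, accuracy, baseline)
```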
### Protocol for Edge and Line Detection Evaluation
For the edge and line detections, we asked Visual ChatGPT to perform both the "Edge Detection On Image" and "Line Detection On Image" functions, extracting the edge and straight line features in the images. To investigate its capabilities, we compared them with two traditional edge detection methods, the Canny filter [5] and the Sobel filter [29], and with manual annotation of the straight lines present in the images. Both filters were manually fine-tuned over the same images to provide the overall most interesting results, thus differentiating them from the default, fully-automated approach of Visual ChatGPT. For this, we used the selected 49 images from the LoveDA dataset [34] to be processed by the filters and compared. The Python programming language was utilized for this implementation, relying on the NumPy, imageio, and scikit-image libraries.
First, the image file was loaded where a function was employed to read the image in grayscale format, simplifying the image for further processing. The resulting image matrix was converted into a floating-point data type and normalized to the range of [0, 1] by dividing each pixel value by 255. This normalization step was crucial for maintaining consistency across images and ensuring the edge detection algorithms could process them appropriately.
The Canny edge detection filter was applied to the normalized grayscale images. This was accomplished by passing the image and a sigma value, varying between 1 and 3, to its function. The sigma parameter determines the amount of Gaussian smoothing applied to the image, effectively controlling the sensitivity of the algorithm to any noise. The Canny edge detection filter aims to identify continuous edges in an image by performing non-maximum suppression and double thresholding to remove unwanted pixels [5]. The resulting edge map consists of pixels representing the detected edges.
Next, the Sobel edge detection filter was applied to the normalized grayscale images by implementing its function. This calculates the gradient magnitude at each pixel in the image, and the output is a continuous-valued edge map, providing an approximation of the edge intensity [29]. The Sobel edge detection algorithm is a simpler method. It is based on the convolution of the image with two 3x3 kernels, one for the horizontal gradient and one for the vertical gradient. This method is computationally efficient and straightforward but may be more susceptible to noise compared to the Canny edge detection filter.
After applying both edge detection filters, we saved the resulting images as 8-bit grayscale images into separate folders. The conversion to 8-bit grayscale format was performed by multiplying the processed image arrays by 255 and then casting them to the unsigned 8-bit integer data type before saving them. The data was stored to later be used to compare against the edge detection performed by Visual ChatGPT.
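A minimal sketch of this Canny/Sobel baseline pipeline is shown below, assuming the scikit-image and imageio APIs mentioned above; the folder names and the fixed sigma value are placeholders, since in practice the sigma was tuned manually per image.

```python
import os
import numpy as np
import imageio
from skimage.color import rgb2gray
from skimage.feature import canny
from skimage.filters import sobel

in_dir, canny_dir, sobel_dir = "loveda_chips", "canny_out", "sobel_out"  # placeholder folders

for name in sorted(os.listdir(in_dir)):
    # Read the chip, convert it to grayscale, and normalize it to the [0, 1] range.
    img = imageio.imread(os.path.join(in_dir, name))
    gray = rgb2gray(img) if img.ndim == 3 else img.astype(float) / 255.0

    # Canny edge map (boolean); the sigma value was varied between 1 and 3.
    edges_canny = canny(gray, sigma=2.0)
    # Sobel gradient magnitude (continuous-valued edge map).
    edges_sobel = sobel(gray)

    # Save both results as 8-bit grayscale images.
    imageio.imwrite(os.path.join(canny_dir, name),
                    (edges_canny * 255).astype(np.uint8))
    imageio.imwrite(os.path.join(sobel_dir, name),
                    np.clip(edges_sobel * 255, 0, 255).astype(np.uint8))
```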
For the straight line detection approach, we compared the results of the straight lines detected by Visual ChatGPT with manually labeled lines from the dataset. The manually labeled lines served as the ground-truth for evaluating its performance. For this, we identified, in the same 49 images, line aspects like roads, rivers, plantations, and terrain that resembled linear characteristics and that are of overall interest when dealing with remote sensing data. These images were saved and stored in a folder to be promptly loaded and compared.
As such, we compared both the line and edge detection performances following the same protocol. To achieve this, we defined a function to load and preprocess the images. This function takes two image file paths as input (one from Visual ChatGPT and the other from our "ground-truth") and performs the following
steps: 1. Load the images in the grayscale format; 2. Resize both images to the same dimensions (512x512 pixels); 3. Apply Otsu's thresholding method to obtain the optimal threshold for each image to create edge and line binary maps, and; 4. Flatten the binary maps into 1D arrays for extracting the comparison metrics.
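A sketch of this preprocessing routine is given below; the function name and arguments are illustrative, and in our implementation the same routine also returned the comparison metrics described next.

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.filters import threshold_otsu

def load_binary_maps(pred_path, truth_path, size=(512, 512)):
    """Steps 1-4: load, resize, Otsu-threshold, and flatten an image pair."""
    maps = []
    for path in (pred_path, truth_path):
        img = imread(path, as_gray=True)                # 1. grayscale
        img = resize(img, size, anti_aliasing=True)     # 2. resize to 512x512
        binary = img > threshold_otsu(img)              # 3. Otsu binarization
        maps.append(binary.ravel().astype(np.uint8))    # 4. flatten to a 1D array
    return maps  # [predicted_map, ground_truth_map]
```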
Finally, for each image pair, we called the process_images function to obtain the performance metrics and stored them in a list called "results". After processing the images, we calculated various performance metrics, such as True Positive Rate (TPR), False Positive Rate (FPR), Area Under the Curve (AUC), as well as Precision, Recall, F-Score, and Accuracy using scikit-learn's metrics module. These metrics were essential for evaluating and comparing the performance of the methods in terms of their ability to identify true and false lines and edges, and overall accuracy. Since we already explained Precision, Recall, F-Score, and Accuracy, the remaining metrics to be described are [26]:
True Positive Rate (TPR): TPR is the proportion of TP instances among the actual positive instances. The higher the TPR, the better the model is at identifying true lines and edges.
\[\text{TPR}=\frac{TP}{(TP+FN)} \tag{6}\]
False Positive Rate (FPR): FPR is the proportion of FP instances among the True Negative (TN) instances. The lower the FPR, the better the model is at avoiding false edge and line detections.
\[\text{FPR}=\frac{FP}{(FP+TN)} \tag{7}\]
Area Under the Curve (AUC): AUC is a measure of the overall performance of a classification model. It's calculated by plotting the Receiver Operating Characteristic (ROC) curve, which shows the trade-off between TPR and FPR. AUC ranges from 0 to 1, where a higher value indicates better performance.
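From the flattened binary maps, these metrics can be obtained with scikit-learn roughly as sketched below (a simplified stand-in for our process_images routine, with the helper name being a placeholder).

```python
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, accuracy_score, roc_auc_score)

def compare_maps(pred, truth):
    """TPR, FPR, AUC, Precision, Recall, F-Score, and Accuracy for one image pair."""
    tn, fp, fn, tp = confusion_matrix(truth, pred, labels=[0, 1]).ravel()
    return {
        "TPR": tp / (tp + fn) if (tp + fn) else 0.0,        # Eq. 6
        "FPR": fp / (fp + tn) if (fp + tn) else 0.0,        # Eq. 7
        "AUC": roc_auc_score(truth, pred),
        "Precision": precision_score(truth, pred, zero_division=0),
        "Recall": recall_score(truth, pred, zero_division=0),
        "F-Score": f1_score(truth, pred, zero_division=0),
        "Accuracy": accuracy_score(truth, pred),
    }
```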
### Protocol for Image Segmentation Evaluation
To evaluate the performance of Visual ChatGPT's image segmentation capabilities on remote sensing data, we used the previously separated 49 images from the LoveDA dataset [34], which includes manually labeled data as masks for segmentation training. The protocol used for this task comprises a two-step procedure comparing Visual ChatGPT's segmented output with the manually labeled ground-truth images. This VLM uses the "Segmentation on Image" function, which relies on the Unified transFormer (UniFormer) [17] model to perform image segmentation.
The Unified transFormer (UniFormer) is a model developed to handle both the local redundancy and the complex global dependency typically found in visual data. This model blends the merits of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) in a unified format. UniFormer incorporates three crucial modules: Dynamic Position Embedding (DPE), Multi-Head Relation Aggregator (MHRA), and Feed-Forward Network (FFN). DPE, as an initial step, dynamically incorporates position information into all tokens, which is particularly effective for visual recognition with arbitrary input resolution. Next, MHRA enhances each token by exploring its contextual tokens through relation learning. MHRA fuses convolution and self-attention, mitigating local redundancy while capturing global dependencies. Lastly, FFN enhances each token individually, following the typical ViTs approach, encompassing two linear layers and a non-linear function (GELU) [17].
Since Visual ChatGPT doesn't know which classes to look for in the image, it tries to guess them based on its current capabilities when implementing the "Segmentation on Image" function. Thus, it is not possible to perform a "direct" comparison between the ground-truth classes and the classes Visual ChatGPT assumes them to be. Therefore, metrics like Precision, Recall, F-Score, and Accuracy are not feasible for evaluating this task. Since we are comparing two segmented images with different classes, we opted to use metrics that quantify the similarity or dissimilarity between the images and determine how well they align with each other. To achieve this, we extracted two key metrics: the Structural Similarity Index Measure (SSIM) [32] and the Universal Image Quality Index (UQI) [43].
The SSIM is a metric used to measure the similarity between two images or patches based on structural information. It ranges between -1 and 1, with 1 indicating a perfect match and -1 indicating a complete mismatch. The Sewar library provides both local and global SSIM values. Local SSIM is computed over image patches, providing a fine-grained evaluation and identifying local variations in image quality. Global SSIM averages the patch scores over the entire image, providing a holistic evaluation of overall similarity. Having both local and global SSIM scores can help identify areas or regions where image quality is poorer or the modifications have had a more significant impact. The SSIM equations (both Local and Global) are defined by [32]:
\[\text{SSIM(x,y)}=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2 }+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})} \tag{8}\]
where:
\(x\) and \(y\) are local regions (patches) of the two images being compared
\(\mu_{x}\) and \(\mu_{y}\) are the average intensities of the patches \(x\) and \(y\)
\(\sigma_{x}^{2}\) and \(\sigma_{y}^{2}\) are the variances of the patches \(x\) and \(y\)
\(\sigma_{xy}\) is the covariance between the patches \(x\) and \(y\)
\(C_{1}\) and \(C_{2}\) are small constants to stabilize the division (typically, \(C_{1}=(K_{1}L)^{2}\) and \(C_{2}=(K_{2}L)^{2}\), where \(L\) is the dynamic range of the pixel values, and \(K_{1}\) and \(K_{2}\) are small constants)
\[\text{Global SSIM(X,Y)}=\frac{1}{N}\sum_{i=1}^{N}SSIM(x_{i},y_{i}) \tag{9}\]
where:
\(X\) and \(Y\) are the two images being compared
\(x_{i}\) and \(y_{i}\) are local patches of the images \(X\) and \(Y\)
\(N\) is the number of local patches in the images
The UQI is a full-reference image quality metric that compares processed images with the original or reference image (ground-truth in this case). It measures the similarity between images using their structural information, based on their luminance and
contrast. The UQI calculates the mean, standard deviation, and covariance of luminance and contrast values for the two images, and combines them using a weighted average to obtain a final UQI value ranging from 0 to 1. Thus, higher UQI values indicate higher image quality and similarity between the processed and reference images. This metric is widely used to evaluate image processing and compression algorithms for both objective and subjective image quality evaluations. The UQI is defined by the following equation [43]:
\[\text{UQI(X, Y)}=\frac{4\sigma_{XY}\mu_{X}\mu_{Y}}{(\sigma_{X}^{2}+\sigma_{Y}^ {2})(\mu_{X}^{2}+\mu_{Y}^{2})} \tag{10}\]
where:
\(X\) and \(Y\) are the two images being compared
\(\mu_{X}\) and \(\mu_{Y}\) are the average intensities of the images \(X\) and \(Y\)
\(\sigma_{X}^{2}\) and \(\sigma_{Y}^{2}\) are the variances of the images \(X\) and \(Y\)
\(\sigma_{XY}\) is the covariance between the images \(X\) and \(Y\)
In the first part of the procedure, we preprocessed the ground-truth images. We begin by loading the black and white images and converting them to grayscale using the PIL library. Then, a color map was defined, assigning a specific color to each of the 7 pixel values in the ground-truth image. These colors were defined based on the colors used by Visual ChatGPT to return segmented regions of similar characteristics. By iterating over the width and height of each image, the black and white images were converted to colored images using this color map. The final step involves resizing the colored image to a 512x512 resolution and saving it to the appropriate directory.
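A possible implementation of this color-mapping step is sketched below; the specific label values and RGB triplets are placeholders and must be matched to the palette Visual ChatGPT uses for its segments.

```python
import numpy as np
from PIL import Image

# Hypothetical mapping from the 7 LoveDA label values to RGB colors
# approximating Visual ChatGPT's segmentation palette.
COLOR_MAP = {
    0: (0, 0, 0),        # background
    1: (128, 0, 0),      # building
    2: (128, 128, 128),  # road
    3: (0, 0, 255),      # water
    4: (255, 255, 0),    # barren
    5: (0, 128, 0),      # forest
    6: (255, 165, 0),    # farmland
}

def colorize_mask(mask_path, out_path, size=(512, 512)):
    mask = np.array(Image.open(mask_path).convert("L"))   # grayscale label image
    rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for value, color in COLOR_MAP.items():
        rgb[mask == value] = color                         # paint each class
    Image.fromarray(rgb).resize(size, Image.NEAREST).save(out_path)
```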
The second part of the procedure focuses on computing the image quality metrics. To accomplish this, the necessary libraries were imported, including the Sewar library for full-reference image quality metrics, the imageio library for image input/output, and the skimage library for image processing. We then defined a list of dictionaries containing the file paths for pairs of the ground-truth and the predicted images. As the function iterates through each image pair, it loads, normalizes, and resizes the ground-truth and predicted images to the desired size of 512x512 pixels. The images are then converted back to 8-bit unsigned integer (uint8) format. For each image pair, we calculate the SSIM and UQI metrics using the Sewar library. These metrics were stored in a dictionary and appended to a list.
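The metric computation itself reduces to a few library calls, roughly as sketched below; file paths are placeholders, and the per-patch (Local) SSIM values require keeping the windowed scores rather than only the aggregated value returned here.

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from sewar.full_ref import ssim, uqi

def segmentation_similarity(truth_path, pred_path, size=(512, 512)):
    """Compare a colorized ground-truth mask with a Visual ChatGPT segmentation."""
    pair = []
    for path in (truth_path, pred_path):
        img = imread(path)
        img = resize(img, size, preserve_range=True)    # resize to 512x512
        pair.append(img.astype(np.uint8))               # back to 8-bit format
    gt, pred = pair
    ssim_value, _ = ssim(gt, pred)   # Sewar returns (SSIM, contrast term)
    return {"SSIM": ssim_value, "UQI": uqi(gt, pred)}
```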
The SSIM and UQI metrics served as valuable tools for assessing the performance of Visual ChatGPT's image segmentation, considering our current limitation on dealing with different classes. In summary, these metrics were chosen because the SSIM measures the structural similarity between the predicted and ground-truth images, taking into account changes in similarity and structures, while the UQI provides a scalar value indicating the overall quality of the predicted image in comparison to the ground-truth image. By analyzing these metrics, it was possible to identify areas where the segmentation model excels or falters, assisting in guiding further model improvement and evaluation.
## 4 Results
### Scene Classification
We initially evaluated Visual ChatGPT's ability to classify remote sensing scenes using the AID dataset [36]. To support this analysis, Figure 2 presents a heatmap visualization of the calculated confusion matrix, generated from the scene classification predictions.
Based on the confusion matrix, we also calculated the Precision, Recall, and F-Score metrics and displayed them in a horizontal bar chart, presented in Figure 3. The overall accuracy of the model for this task was 0.381 (or 38.1%), with the averaged weighted values between all the classes as 0.583 (58.3%), 0.381 (38.1%), and 0.359 (35.9%) for Precision, Recall, and F-Score, respectively.
The selected classes offered valuable insights into the model's ability to interpret satellite imagery. The graphics (Figures 2 and 3) demonstrated that the model more accurately identified scenes containing Baseball Fields, Bridges, Beaches, and Mountains, as evidenced by the high F-Scores achieved. Conversely, it struggled to recognize landscapes such as Bareland, Meadows, and Deserts, resulting in lower performance metrics. Additionally, the model encountered difficulties in distinguishing urban scenes, including Commercial, Church, Center, Industrial, and Dense Residential areas. This was indicated by high Precision values, but low Recall and F-Scores, which fell significantly below the "random-guess" threshold.
Although the overall accuracy of the model is 38.1%, which might seem relatively low, it's important to consider the context of the problem with 17 classes. The "random chance" (baseline accuracy) for this classification task is about 5.88%. Furthermore, the Visual ChatGPT model effectively interpreted and classified a considerable number of images across various classes, demonstrating its potential for handling remote sensing imagery.
Figure 4 showcases examples of instances that were accurately classified by the model. Contrarily, Figure 5 displays examples of instances inaccurately classified by it, demonstrating the necessity for additional tuning. Ensuring the incorporation of appropriate training sets into the learning process may further enhance the model's capabilities.
In the first example of Figure 4, an Airport, the model correctly identified the image as an aerial view of an airport with visible airplanes. The Medium Residential image example showcases the model's ability to detect a large group of houses. However, it incorrectly stated that these houses were located in the "suburbs of Chicago." The Forest scene example was also accurately classified, as the model identified it as an aerial photo of a forest with trees covering the landscape. Another instance, a Baseball Field scene, received a precise description as a baseball field with clear markings and layout. This was also the best-identified class in our tests.
The Visual ChatGPT model, however, misinterpreted and misclassified images across various classes, which is why it presented lower overall accuracy. This highlights the challenges the model faces when handling aerial or satellite imagery, but this is mostly because it hasn't incorporated appropriate training sets of remote sensing data into its learning process.
Figure 3: Evaluation metrics from the AID dataset image classified by Visual ChatGPT. The Precision, Recall, and F-Score values are displayed, sorted by F-Score from lowest to highest. A grey dashed vertical line is plotted at a score of 0.0588, serving as a visual reference point for comparison, indicating the “random-chance” point.
Figure 2: Confusion matrix from the evaluated portion of the AID dataset classified by Visual ChatGPT. The color intensity and the numeric values within each cell of the heatmap indicate the number of instances of the predicted label.
The first example of Figure 5 features a Beach, and the model recognizes the presence of a body of water and a "kite flying in the sky". However, Visual ChatGPT incorrectly classifies the image content as Park. This misclassification may have resulted from the additional objects present in the image. The Commercial example depicts an aerial view of a city center with various buildings, but Visual ChatGPT mistakenly classifies the image content as Center. This instance highlights the challenges in accurately classifying this dataset, primarily due to the similarities between urban centers and commercial areas. The Desert example showcases a desert landscape, but the model incorrectly assumes it contains "a person wearing a red shirt
Figure 4: Sample images with correct Visual ChatGPT descriptions and classifications. For each image, two accompanying text boxes were provided. The first text box contains the description generated by Visual ChatGPT, while the second text box specifies the scene classification provided by the model. The images are arranged with each image being accompanied by a title on the left side, indicating its ground-truth label.
and black shorts in the Middle East". Oddly, Visual ChatGPT misclassifies the image content as Mountain. In the Meadow example, the model identifies the scene as an aerial photo of farmland, wrongfully noting a "visible tractor", and therefore erroneously classifies it as Farmland.
The possible reasons for these mistakes can be attributed to the presence of similar features between the misclassified and true classes, or the model's reliance on specific visual cues that might not be present in every instance. These examples demonstrate the challenges and pitfalls in classifying certain aspects of an image. Nevertheless, some of the responses of Visual ChatGPT indicate its potential to accurately identify elements within these images, if fine-tuning and additional data training implementations were to be incorporated.
Figure 5: Sample images with incorrect Visual ChatGPT descriptions and misclassifications. Each image has a title specifying the true label of the scene, while the textboxes with incorrect descriptions and classifications are placed on the right side of each image.
### Edge Detection
In this section, we examine the performance of Visual ChatGPT's submodel in edge detection for remote sensing images. As the LoveDA dataset [34] did not provide edge ground-truth labels created by human specialists, and considering the labor-intensive and challenging nature of the edge labeling task for numerous objects, we opted to compare Visual ChatGPT's edge detection capabilities with the Canny and Sobel filters. This comparison highlights the similarities between the automated edge detection by Visual ChatGPT and these well-established methods.
The Canny edge detection method is generally more accurate and robust to noise compared to the Sobel edge detection. It is particularly useful for remote sensing images, where the presence of noise is common due to atmospheric effects, sensor limitations, or image acquisition conditions. The filter is effective in detecting continuous edges and suppressing noise, which is essential for accurately delineating features and boundaries in the images.
The Sobel edge detection algorithm is computationally efficient, making it suitable for large-scale remote sensing data processing. However, the Sobel edge detection method is more susceptible to noise compared to the Canny edge detection, which might lead to false edges or missing features. Despite its limitations, Sobel edge detection can still provide valuable information about the presence and direction of edges, particularly when applied to high-quality remote sensing images with minimal noise.
Figure 6 illustrates that, for most image pairs, Visual ChatGPT achieves a True Positive Rate (TPR) above the "random-guess" threshold. However, due to the high False Positive Rate (FPR) observed, its Precision and F-Score are understandably lower than the other metrics.
When examining the TPR values, the edge detector model employed by Visual ChatGPT, which is based on the Canny edge from the OpenCV library, demonstrated greater similarity to our Canny edge filter compared to the Sobel filter. This outcome aligns with expectations since both are based on the same method, even though we manually adjusted the Canny filter parameters to yield the best visual results for each image. The findings are noteworthy as they reveal that the automated task performed by Visual ChatGPT closely approximates what a human might deem suitable.
However, it is crucial to acknowledge the substantial FPR and the low F-Score values. This can be primarily attributed to Visual ChatGPT's detector being sensitive to certain types of land cover, particularly in densely forested areas and heavily populated urban regions. Figure 7 presents image examples of the detection results in such locations, which exhibit overall enhanced similarity with both Canny and Sobel filters.
In areas covered with vegetation, Visual ChatGPT exhibited greater sensitivity than the Canny filter, though not as much as the Sobel filter. This pattern was also observed in built-up regions, particularly those with taller structures. Despite these limitations, Visual ChatGPT is capable of providing visually pleasing results in specific instances, such as detecting roads and the edges of bodies of water. However, the model generated a significant number of False Positives, which is undesirable as it introduces noise when interpreting the image. Figure 8 shows image examples where the FPR was among the highest observed, illustrating how farmlands and even less dense vegetation can influence the detection process.
These images demonstrate the differences in edge detection performance between the Canny and Sobel methods, as they indicate how difficult it is to extract this feature under certain conditions or area characteristics. To enhance Visual ChatGPT's edge detection model in such instances, it is crucial to fine-tune it using a dataset tailored for edge detection tasks, incorporating proven methods like the Canny or Sobel filters, and adopting regularization techniques to prevent overfitting. Additionally, augmenting training data, evaluating alternative architectures, utilizing ensemble methods, and applying post-processing techniques can also further improve the model's performance. By adopting these strategies, Visual ChatGPT could deliver more accurate and reliable edge detection results.
### Straight Line Detection
Straight line detection in remote sensing images serves various purposes, such as building extraction, road detection, pipeline identification, etc. It proves to be a potent tool for image analysis, offering valuable insights for users. The evaluation of Visual ChatGPT's model for detecting straight lines employed the same protocol as edge detection. However, unlike the previous approach, we used manually labeled images, providing a more accurate ground-truth sample. Figure 9 presents a swarm plot illustrating the evaluation metrics used to compare Visual ChatGPT's detection results with their respective ground-truth counterparts.
The results revealed that, concerning line detection, Visual ChatGPT's performance was quantitatively subpar. Given that lines typically constitute a small proportion of an image's pixels, metrics such as Accuracy are not well-suited for accurate measurement due to significant class imbalance. Moreover, the model generated a strikingly high number of False Positives compared to its TPR, primarily because it identified certain object edges as lines. To address this issue and provide a clearer understanding, we showcase image examples in Figure 10, which highlight the disparities in line detection between rural and urban areas. By examining such visual comparisons, we noted the model's limitations and potential areas for improvement.
As observed, farmland areas exhibit a large number of lines, primarily due to plantations and tractor roads between them. Identifying these lines can be challenging, even for human specialists. However, Visual ChatGPT managed to detect a considerable number of roads interspersed among the plantation fields. It was capable of identifying the boundaries of these fields, which is an important aspect of feature extraction for these areas. In urban settings, however, extracting streets can be difficult, mainly because objects and shadows partially obscure them. These are also heavily dense areas, with multiple objects overlapping the streets.
Figure 10 also highlights the overall best and worst results in its 3rd and 4th columns, featuring dirt roads and a paved highway, respectively. For the dirt roads, it is understandable that their winding nature may pose a challenge for the model. Conversely,
the paved highways represent the best overall detections by Visual ChatGPT, showcasing its potential in these contexts.
Improving Visual ChatGPT's line detection and extraction capabilities in remote sensing imagery involves practically the same procedures as described previously, like fine-tuning the model on a tailored dataset, augmenting training data, and also applying pre-processing techniques to enhance input image quality. Additionally, incorporating domain-specific knowledge, exploring alternative model architectures, utilizing ensemble methods,
Figure 6: Swarm comparison of the performance metrics for both Canny and Sobel edge detections. The swarm plot displays the distribution of values measured by the multiple pairs of compared images, with the median value labeled. Although not all individual data points are shown, the swarm plot gives a general indication of the trend of the values. We included a red dashed line at y=0.5 to indicate the “random-guess” point.
Figure 7: A comparison of the edge detection techniques on three example images. The visualizations are displayed using the “viridis” colormap symbolizing the magnitude of the detection, specifically in Sobel’s. The TPR values of the Canny and Sobel images in comparison to Visual ChatGPT’s detection are overlaid in the lower-left corner.
and employing enhanced post-processing techniques can further optimize its performance on returning satisfying results.
### Image Segmentation
As stated, image segmentation is the process of partitioning an image into homogeneous regions based on features such as color,
Figure 8: A visual comparison of edge detection techniques applied to three example images that returned low similarity. The visualizations use the ’RdPu’ colormap indicating the magnitude of the edges, specifically useful for visualizing Sobel’s detection. The FPR values, comparing both images with Visual ChatGPT’s result, are displayed in the lower-left corner of the respective Canny and Sobel images.
Figure 9: A swarm plot comparing performance metrics for the straight line detection model from Visual ChatGPT. The plot displays the distribution of values for each metric, with median values indicated in black text. We include a red dashed line at y=0.5 as a reference point for the ”random-guess” threshold. While not all individual data points are displayed, the swarm plot provides an overall representation of the direction of the values.
texture, or spectral properties, with multiple applications in image analysis. However, for the Visual ChatGPT model, handling remote sensing data can be challenging due to the diverse and complex nature of these images. Factors such as varying spatial resolutions, the presence of shadows, seasonal variations, and spectral similarities among different land cover types may hinder the model's performance, necessitating further optimization or the integration of domain-specific knowledge to effectively address these complexities. Still, VLMs can provide a valuable approach to the image segmentation task by enabling non-expert users to perform segmentation using text-based guidance. This capability has the potential to be integrated into remote sensing applications.
However, in the case of Visual ChatGPT, our tests with various prompts revealed that controlling the "Segmentation on Image" tool was not as feasible as it was for the "Get Image Description" and "Answer Question About Image" tools. Consequently, we were unable to guide Visual ChatGPT to segment specific classes from our images. As a reminder, since classification metrics like Precision, Recall, and F-Score necessitate matching classes in both ground-truth and predicted values, these metrics were unsuitable for comparing Visual ChatGPT's performance in this task. Instead, we employed metrics that assessed the similarity between image pairs, which, when combined with qualitative analysis, offered insight into the model's effectiveness in handling this type of data.
To evaluate the predictions of Visual ChatGPT, we compared the ground-truth data from the LoveDA dataset [34] to the segmented images generated by the model. Figure 11 presents the values of both Local and Global SSIM metrics, as well as the UQI values for this comparison. The Local SSIM metric is particularly noteworthy in this context, as it is designed to focus on local variations during image analysis. Meanwhile, the Global SSIM calculates a score for the entire image, offering a comprehensive assessment of overall similarity. The UQI metric compares structural information based on luminance and contrast between colors, making it a more suitable metric for overall performance.
In our comparison, the majority of the data revealed notable similarity values, with more pronounced negative effects on local analysis (Local SSIM) than on the full-scale (Global SSIM and UQI) assessment. These images predominantly featured
Figure 10: Comparative visualization of original RGB images (top row), manually annotated images (middle row), and Visual ChatGPT-generated images (bottom row) for four different sets. True positive rate (TPR) values are displayed in white text on the ChatGPT-generated images.
farmlands, as well as scenes with both urban and rural elements, resulting in a more varied landscape. Contrarily, some images exhibited high similarity with the ground-truth data. These images typically displayed less diverse features, such as extensive vegetation cover, large bodies of water, or densely clustered structures of a similar nature. To corroborate this, Figures 12 and 13 were included, showcasing both the challenges and potential of the Visual ChatGPT segmentation model. This visual comparison enables a clear evaluation of the model's performance to the manual annotations.
Visual ChatGPT utilizes a powerful image segmentation model underneath, thus making it an impressive tool. However, its knowledge is not specifically associated with aerial or satellite imagery, but more with the terrestrial type of images, while the segmentation classes are more diverse. Additionally, the model was not effective in incorporating additional textual information to segment remote sensing images, as our tests have shown that by asking the model to segment images, with or without human instructions, it yielded the same results. Furthermore, Visual ChatGPT did not indicate appropriately which classes it has segmented over the investigated images, even when prompted with a specific command. Instead, the model segments the image and uses the "Answer Question about Image" function to respond to it, using information about the context of the original RGB image rather than the labels/classes that it identified.
The segmentation model demonstrates both potential and challenges when dealing with various land cover types. While the model shows promising performance in images with less diverse features or densely clustered structures of a similar nature, it encounters difficulties in accurately segmenting more complex scenes. The difficulties primarily arise in the local analysis, as evidenced by lower Local SSIM values, which could be attributed to the model's limited exposure to such diverse data during training.
Nonetheless, Visual ChatGPT's ability to achieve high similarity with ground-truth data in certain cases indicates that, with targeted improvements, it could be adapted to effectively handle a wider range of land covers and deliver more accurate segmentation results. As such, to fully realize the potential of Visual ChatGPT in these scenarios, further improvements and fine-tuning are required to better handle the diverse and intricate characteristics of different land types.
## 5 Discussion
The investigation into the Visual ChatGPT model's proficiency in handling remote sensing imagery yielded intriguing results, indicating both its potential and limitations. While the overall model accuracy of 38.1% is considerably higher than the random chance baseline of 5.88% in a 17-class classification task, there were notable disparities in performance across different classes. The model exhibited proficiency in accurately identifying scenes containing Baseball Fields, Bridges, Beaches, and Mountains, as demonstrated by high F-Scores. However, it faced challenges recognizing and classifying Bareland, Meadows, and Deserts, evidenced by lower performance metrics. Additionally, the model encountered difficulties distinguishing urban scenes such as Commercial, Church, Center, Industrial, and Dense Residential areas.
The edge detection analysis revealed that the model demonstrated similarity to our adjusted Canny edge filter. Despite the similarity, the substantial False Positive Rate and the low F-Scores, particularly in densely forested areas and heavily populated urban regions, highlight a crucial area for improvement. The model's performance in straight-line detection was also mixed. It demonstrated potential in farmland areas by detecting numerous roads interspersed among plantation fields and boundaries of fields. However, it struggled with the extraction of streets in urban settings and the winding nature of dirt roads. Conversely, it performed optimally when detecting paved highways, which suggests a solid foundation on which future optimizations can be built.
Image segmentation is another area that highlighted the model's potential and its current limitations. The implemented ViT-based model demonstrated strong performance in images with less diverse features or densely clustered structures of a similar na
Figure 11: Horizontal box plots comparing image comparison metrics (Local SSIM, Global SSIM, and UQI) for the segmented images with the Visual ChatGPT model. The 25th, 50th (median), and 75th percentiles are displayed on each box plot, allowing for a clear assessment of the central tendency and spread of the data, and a red dashed line at x=0.5 serves as a reference point.
ture but faced difficulties accurately segmenting more complex scenes. It also didn't effectively leverage additional textual information to improve segmentation results, a feature that would be a significant enhancement to be implemented in future versions of it. While it is evident that the model can correctly interpret and classify images across several classes, it also made mistakes, underlining the importance of further model fine-tuning and incorporation of more diverse and representative training datasets.
As stated, the "Segment on Image" function incorporates the Uniform model [17], a vision-based transformer that was not specifically designed for remote sensing data. While not specifically trained for it, its architecture enables it to reduce local redundancy and capture global dependency effectively, which could be the reason behind the segmentation results in some cases. As such, it was capable of segmenting a broad range of land covers, although not without its mistakes. The recent literature, however, suggest that models based on ViT can be capable of performing zero-shot segmentation on different domains, or at least be adapted with few-shot learning [30, 12, 40, 27]. ViT-based models currently represent the state-of-the-art in handling remote sensing data as they have triumpled in areas where traditional Convolutional Neural Networks (CNNs) faced challenges. The potential of these models has already been demonstrated, but only when specifically trained with remote sensing data [3]. In different land cover segmentation and classification tasks, models such as SegFormer, ViNetFormer, and RSSFormer returned impressive results, with F-Scores values above 90% [33, 9, 37]. Furthermore, since the current segmentation model is not capable of discerning text-to-image, an integration with capable LLMs with the ViT models may improve the segmentation of these images [41].
As last, in the current state of its development, Visual ChatGPT may present certain challenges for non-experts in the realm of image processing tasks. The complexity of the interface and operations, an inherent characteristic of this early-stage technology, poses a potential barrier to its widespread adoption. Our research delineates the significant potential of Visual ChatGPT for remote sensing tasks; however, the transition from potential to practical usage necessitates further improvements, primarily targeted at enhancing its user-friendliness. We envisage that the
Figure 12: Examples of labeled images compared to the Visual ChatGPT segmentations that scored higher on the similarity metrics. In the bottom row, Local SSIM (LSSIM) values are displayed in the left corner of each segmented image, providing a quantitative measure of the similarity between the annotations and the Visual ChatGPT segmentations.
near future will witness concerted efforts towards improving the usability of such models, fostering an environment conducive for both experts and non-experts. We anticipate these improvements to manifest in the form of more intuitive user interfaces and comprehensive guidance, thus broadening the accessibility and usability of Visual ChatGPT.
## 6 Improving Visual Language Models for Remote Sensing Analysis
In this section, we provide a broader vision of Visual Language Models (VLMs) in remote sensing analysis and discuss possibilities for future implementations. While our experiments focused on Visual ChatGPT, it is clear that novel VLMs will be able to tackle different tasks and be useful, in general, in multiple domains. VLMs are a class of machine learning models that are designed to understand and generate content that combines both visual and textual information [21]. VLMs are trained to associate images with their related text, and this enables them to carry out tasks that involve understanding and generating such multimodal content [2]. VLMs are often built by combining techniques from the fields of computer vision, which focuses on understanding and processing images, and NLP, which focuses on understanding and processing text. As Visual ChatGPT is one of the many VLMs that have emerged recently, it is important to discuss their involvement with image manipulation and how they can be adapted to the remote sensing domain.
With the constantly increasing amount of remote sensing data available, there is a growing need for efficient methods to process and analyze this data [6]. As VLMs continue to evolve and improve, their applications in multiple fields are expected to expand significantly. By incorporating additional techniques and algorithms, it can become a powerful tool for non-experts to analyze and understand complex remote-sensing images. In this section, we explore the future perspectives of these technologies in remote sensing practice, discuss possible applications, and outline the necessary research directions to guide their development and improvement.
Firstly, to apply VLMs to remote sensing data, it would be necessary to collect a large dataset of labeled images. This may involve manually annotating the images, which can be a time-consuming and expensive process [30]. Alternatively, transfer
Figure 13: Examples of labeled images juxtaposed with Visual ChatGPT segmentations that scored the lowest on similarity metrics. In the bottom row, LSSIM values are shown, in black or white depending on its background, for each segmented image, offering a quantitative assessment of the dissimilarity between the ground-truth and the model’s segmentations.
learning techniques can be used to fine-tune pre-trained models on a smaller set of labeled images, possibly reducing the amount of labeled data required for training [31]. By learning from a limited number of examples, few-shot learning models, for instance, can develop better generalization capabilities [2], as they can be more robust to variations in remote sensing data. Such an approach can enable the models to recognize and analyze unique features, patterns, and structures present in satellite or aerial images, thereby significantly improving their performance and applicability in this domain.
By adapting VLMs like Visual ChatGPT for remote sensing analysis, we can also create powerful tools to aid professionals, students, and enthusiasts in their work. These models can facilitate the development of image and data processing, provide guidance in choosing and applying the most appropriate algorithms and techniques, and offer insights into the interpretation of remote sensing data [20]. The models can help users overcome coding challenges, offer guidance on data processing techniques, and facilitate collaboration between individuals with varying levels of expertise and study fields [39, 41]. In turn, this assistance can enhance the efficiency and accuracy of remote sensing workflows, allowing users to focus on higher-level tasks and decision-making.
A potential for Visual ChatGPT or VLMs, in general, is that they can be seamlessly integrated with a variety of geospatial tools and platforms to significantly elevate user experience. By combining advanced models with existing geospatial software, toolboxes, or cloud-computation platforms, users can access an enriched suite of functionalities that cater to a wide range of applications. This integration not only amplifies the capabilities of existing tools [21] but also unlocks innovative possibilities for analyzing and interpreting geospatial data. By leveraging the natural language understanding and visual processing abilities of VLMs, the interaction with these platforms can become more intuitive, leading to improved efficiency and accessibility.
In essence, the improved versions of VLMs can be applied to a wide range of remote sensing tasks. These applications can benefit from the model's ability to provide real-time feedback, generate code snippets, and analyze imagery, thus streamlining the overall process. For example, a model could be trained to identify common patterns in remote sensing data and generate code to automatically detect and analyze these patterns. This has the potential to speed up the processing of large datasets and minimize the need for manual intervention.
As for applications, VLMs can be expanded to encompass various essential image tasks, such as texture analysis, principal components analysis, object detection, and counting, but also curated to domain-specific remote sensing practices as well. By integrating change detection algorithms [28] into these VLMs, for instance, users can interact with the models to automatically identify landscape alterations, facilitating the monitoring and assessment of the impacts caused by human activities and natural processes on the environment. Anomaly detection, a technique that identifies unexpected or unusual features in remote sensing images [11], can also greatly benefit from this integration. Time series analysis is also a valuable method that involves analyzing changes to reveal patterns, trends, and relationships in land cover [8] and could be added to it. Consequently, by incorporating tailored algorithms into VLMs, users can examine multiple images over time, gaining insights into the dynamics of the Earth's surface.
Furthermore, the integration of machine and deep learning algorithms specifically designed for remote sensing applications, such as convolutional neural networks and vision transformers [15, 3], can help enhance the performance and capabilities of visual models. These methods can improve the VLM's ability to recognize and analyze complex patterns, structures, and features in remote sensing images, leading to more accurate and reliable results. Currently, there are multiple networks and deep learning models trained for various remote sensing tasks that are available and could be potentially implemented [4, 25].
Overall, the potential for VLMs like Visual ChatGPT to aid in remote sensing image processing is vast and varied. As the technology continues to evolve and improve, we will likely see an increasing number of innovative applications in this field, with new features and capabilities being developed to meet the specific needs of users. Looking to the future, it is likely that VLMs will continue to play an increasingly important role in image data analysis. As these models become more advanced and better integrated with existing tools and workflows, they have the potential to greatly improve the efficiency and accuracy of remote sensing practices.
Although our experiments with Visual ChatGPT only consist of one perspective, VLMs have, in general, an important role in image analysis. In short, to guide the development and improvement of VLMs in remote sensing, several research directions could be explored:
* Investigating the optimal methods and strategies for fine-tuning and adapting models to remote sensing tasks;
* Developing performance benchmarks and evaluation metrics specific to remote sensing applications on these models;
* Exploring the integration of these models with other remote sensing tools and platforms, such as Geographic Information Systems (GIS), for a seamless user experience;
* Conducting user studies to understand how the models can best work for these data and how they can be adjusted to user behavior;
* Studying the limitations and biases of the models when applied to remote sensing imagery, and devising strategies to mitigate them.
And, in terms of applicability, the following areas can also be considered to be pursued, thus contributing to enhancing the development of VLMs in remote sensing imagery processing:
* Investigating the effectiveness of incorporating domain-specific knowledge and expertise into the models, such as spectral indices;
* Examining the scalability and efficiency of the models when working with large-scale remote sensing datasets;
* Assessing the robustness and generalizability of the models across various remote sensing data types, including multispectral, hyperspectral, Synthetic-Aperture Radar (SAR), and LiDAR;
* Evaluating these models for real-time or near-real-time remote sensing analysis;
* Exploring the potential of combining VLMs with other advanced machine learning techniques, such as reinforcement learning;
* Investigating the implementation for data fusion tasks, where information from different remote sensing sensors or platforms are combined.
## 7 Conclusions
In this study, we investigated the applicability and performance of Visual ChatGPT, a VLM, for remote sensing imagery processing tasks, highlighting its current capabilities, limitations, and future perspectives. We have demonstrated the effectiveness and problems of this model in various remote sensing tasks, such as image classification, edge and line detection, and image segmentation. Additionally, we have discussed its role in assisting users and facilitating the work of professionals, students, and enthusiasts in the remote sensing domain by providing an intuitive, easy-to-learn, and interactive approach to image processing.
In our investigation we found that, despite its ability to perform scene classification above the random-guess baseline, the model faced difficulties distinguishing certain landscape classes and urban scenes. The model showed potential in edge detection and straight-line identification, especially in farmland areas and on paved highways, but struggled in densely populated regions and complex landscapes. While the model's segmentation showed promising results in less diverse or densely clustered scenes, it faced difficulties in more complex environments. Still, although some results may not appear impressive, we believe that these initial findings lay a groundwork for future research and improvements.
While Visual ChatGPT shows promise in its current state, there is still plenty of room for improvement, fine-tuning, and adaptation to better suit the unique needs of remote sensing analysis. Future research could focus on optimizing the model by either fine-tuning with techniques such as few-shot learning, or improving their natural language capacities to recognize objects based on their class and segment them in a more guided manner, be it though label or text-based prompts. By doing so, we can unlock the capacity of these models in a wide range of remote sensing applications, varying from environmental monitoring and disaster management to precision agriculture and infrastructure planning.
In light of our findings, the integration of VLMs into remote sensing has immense potential to transform the way we process and analyze Earth's surface data. With continued evolution and adaptation to the specific needs of aerial/satellite data, these models can prove to be essential resources in assisting important challenges in image processing. It is crucial to emphasize the significance of ongoing research in this area and encourage further exploration of the capabilities of Visual ChatGPT, as well as other VLMs in dealing with remote sensing tasks in the near future.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Finance Code 001 and Print (88881.311850/2018-01). The authors are funded by the Support Foundation for the Development of Education, Science, and Technology of the State of Mato Grosso do Sul (FUNDECT; 71/009.436/2022) and the Brazilian National Council for Scientific and Technological Development (CNPq; 433783/2018-4, 310517/2020-6; 405997/2021-3; 308481/2022-4; 305296/2022-1).
## Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
## Abbreviations
The following abbreviations are used in this manuscript:
\begin{tabular}{l l} AI & Artificial Intelligence \\ AUC & Area Under the Curve \\ FN & False Negative \\ FP & False Positive \\ FPR & False Positive Rate \\ GIS & Geographic Information Systems \\ GPT & Generative Pre-trained Transformer \\ LLMs & Large Language Models \\ NLP & Natural Language Processing \\ SAR & Synthetic-Aperture Radar \\ SSIM & Structural Similarity Index Measure \\ TN & True Negative \\ TP & True Positive \\ TPR & True Positive Rate \\ UQI & Universal Image Quality Index \\ VLM & Visual Language Model \\ \end{tabular}
|
2306.05012 | Sequence-to-Sequence Model with Transformer-based Attention Mechanism
and Temporal Pooling for Non-Intrusive Load Monitoring | This paper presents a novel Sequence-to-Sequence (Seq2Seq) model based on a
transformer-based attention mechanism and temporal pooling for Non-Intrusive
Load Monitoring (NILM) of smart buildings. The paper aims to improve the
accuracy of NILM by using a deep learning-based method. The proposed method
uses a Seq2Seq model with a transformer-based attention mechanism to capture
the long-term dependencies of NILM data. Additionally, temporal pooling is used
to improve the model's accuracy by capturing both the steady-state and
transient behavior of appliances. The paper evaluates the proposed method on a
publicly available dataset and compares the results with other state-of-the-art
NILM techniques. The results demonstrate that the proposed method outperforms
the existing methods in terms of both accuracy and computational efficiency. | Mohammad Irani Azad, Roozbeh Rajabi, Abouzar Estebsari | 2023-06-08T08:04:56Z | http://arxiv.org/abs/2306.05012v1 | Sequence-to-Sequence Model with Transformer-based Attention Mechanism and Temporal Pooling for Non-Intrusive Load Monitoring
###### Abstract
This paper presents a novel Sequence-to-Sequence (Seq2Seq) model based on a transformer-based attention mechanism and temporal pooling for Non-Intrusive Load Monitoring (NILM) of smart buildings. The paper aims to improve the accuracy of NILM by using a deep learning-based method. The proposed method uses a Seq2Seq model with a transformer-based attention mechanism to capture the long-term dependencies of NILM data. Additionally, temporal pooling is used to improve the model's accuracy by capturing both the steady-state and transient behavior of appliances. The paper evaluates the proposed method on a publicly available dataset and compares the results with other state-of-the-art NILM techniques. The results demonstrate that the proposed method outperforms the existing methods in terms of both accuracy and computational efficiency.
NILM, Smart Building, Deep Learning, Transformer, Attention Mechanism.
## I Introduction
Non-intrusive load monitoring (NILM) is a technique used to separate electricity consumption at the household appliance level. Smart meters only provide total energy consumption at the building level, which may not be sufficient to influence consumer behavior. The NILM process includes data collection, feature extraction, event detection, load identification, and energy separation, and machine learning techniques are used to identify appliances and extract features during steady-state and transient conditions. The NILM system can also evaluate appliance performance over a long period, helping manufacturers improve energy efficiency. Suggestions can be sent to consumers to reduce or postpone the use of portable appliances to non-peak times to save energy [1].
Monitoring the energy consumption of buildings can prevent wastage and help consumers take necessary measures. Smart meters monitor overall energy consumption, providing regular feedback to consumers, which has been shown to reduce energy use by 3%. Additionally, instantaneous energy consumption feedback at the building level can save up to 9% [2]. Consumer behavior plays a significant role in efficient energy use, and consumers are likely to change their consumption patterns based on feedback. The NILM system is an effective solution to monitor energy consumption and identify appliances that consume the most energy, providing insights to consumers and manufacturers to improve energy efficiency [3].
There are two main types of approaches in Non-Intrusive Load Monitoring (NILM): supervised and unsupervised. The supervised approach involves training models using the power consumption data of appliances. On the other hand, unsupervised methods include Factorial Hidden Markov Models (FHMM), Hidden Markov Models (HMM) [4], and methods based on event identification and clustering. A comprehensive review of unsupervised NILM methods can be found in [5, 6].
In recent years, various supervised neural network-based NILM methods have been presented, thanks to the development of deep neural networks. Significant progress has been made through the use of Convolutional Neural Networks (CNN) [7, 8]. One method, called WaveNILM [9], is based on a gated version of the dilated causal convolutional layer (DC-CNN) and examines the benefits of additional input signals while maintaining causality. Another method, proposed in [10], uses a variational autoencoder and consists of two main parts: Net-IBN and VAE. Net-IBN is used to extract relevant features from raw power consumption measurements, and these features are then used in the VAE model to estimate the power consumption of each electrical device.
In a different study [11], a neural network-based structure called Concurrent Loads Disaggregator (COLD) was developed to solve the NILM problem. The COLD network is based on a feedforward ReLU network with a self-attention mechanism, which can approximate any continuous function. The network's input is a spectrogram matrix obtained from the Short-Time Fourier Transform (STFT) of the aggregate consumption signal, and its output is a set of binary vectors indicating the activity or inactivity of electrical devices at each time step.
Another method based on transformers, called ELECTRICity [12], extracts features from the cumulative signal of electricity consumption. During the pre-processing stage, the model consists of a transformer-based generator and a discriminator to improve performance. The generator generates artificial signals for electrical devices using cumulative signals, and the discriminator distinguishes the artificial data produced by the generator from real data. During the training phase, the pre-trained transformer is fine-tuned in a supervised manner to predict the amount of power consumed by the electrical appliances.
In a general sense, the goal of another study is to separate the consumption of electrical appliances based on the total consumption of the entire household. The architecture considered in this study is a combination of ResNet and dilated convolution network architectures. ResNet addresses the vanishing gradient problem that occurs when the number of layers in a network increases, and dilated convolution is used instead of pooling layers to extract local information with less information loss [13].
Overall, these methods show promising results in the field of NILM, and further research could lead to even more efficient and accurate methods for energy consumption estimation. In this paper, a method for improving non-intrusive load monitoring (NILM) is proposed that combines several techniques: an attention mechanism, temporal pooling, residual connections, and transformers. The attention mechanism helps the model focus on the relevant parts of the input sequence, temporal pooling combines the representations of multiple time steps into a single representation, residual connections bypass one or more layers to make gradient flow smoother during training, and transformers have been shown to achieve state-of-the-art results in NLP and time-series prediction tasks. Together, these techniques can improve the accuracy and robustness of device-level energy disaggregation while reducing computational cost.
This paper is structured as follows. In Section II, we present the proposed method, which includes the use of Transformers and Residual Connections, Temporal Pooling, and Attention Mechanism. In Section III, we describe the experimental setup and present the results obtained from applying the proposed method to a publicly available dataset. In Section IV, we provide a discussion of the results and draw conclusions regarding the effectiveness of the proposed method. We also discuss possible directions for future work in this area.
## II Proposed Method
The proposed method is composed of four main parts: Attention Mechanism, Temporal Pooling, Residual Connections, and Transformers. This section provides a detailed explanation of the structure of the model used.
Firstly, the Attention Mechanism is a technique that allows the model to selectively focus on the most relevant parts of the input sequence at a given time step. This mechanism calculates the output element as a weighted sum of input elements, where the weights are determined dynamically by the model based on the input sequence and the current state of the model. This approach significantly enhances the performance of sequence-to-sequence models.
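To make this concrete, the following is a minimal PyTorch sketch of scaled dot-product attention over a window of aggregate-power features; the module name, tensor shapes, and dimensions are illustrative assumptions on our part and are not taken from the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledDotProductAttention(nn.Module):
    """Minimal scaled dot-product attention over the time dimension."""
    def __init__(self, d_model: int):
        super().__init__()
        self.scale = d_model ** 0.5

    def forward(self, query, key, value):
        # query, key, value: (batch, time, d_model)
        scores = torch.matmul(query, key.transpose(-2, -1)) / self.scale
        weights = F.softmax(scores, dim=-1)   # weights computed dynamically from the input
        return torch.matmul(weights, value), weights

# Self-attention over a window of 128 encoded aggregate-power samples (shapes are illustrative)
attn = ScaledDotProductAttention(d_model=64)
x = torch.randn(32, 128, 64)                  # (batch, time steps, features)
context, weights = attn(x, x, x)
print(context.shape, weights.shape)           # (32, 128, 64) and (32, 128, 128)
```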
Secondly, Temporal Pooling is a technique that extracts useful information from sequential data, such as video or speech signals. It combines representations of multiple frames or time steps into a single representation that captures the crucial features of the entire sequence. This technique addresses the challenge of processing sequences of variable length and allows the model to work with fixed-size inputs by summarizing the sequence's information in a compact representation.
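One simple way to realize temporal pooling, sketched below under our own assumptions about layer shapes (not necessarily the exact pooling layer used in the paper), is to summarize the per-time-step features with statistics such as the mean (steady-state behavior) and the maximum (transient peaks) over the time axis.

```python
import torch

def temporal_pool(features: torch.Tensor) -> torch.Tensor:
    """Collapse (batch, time, channels) features into a fixed-size vector by
    concatenating the mean and max over the time axis."""
    mean_pool = features.mean(dim=1)       # captures steady-state behavior
    max_pool, _ = features.max(dim=1)      # captures transient peaks
    return torch.cat([mean_pool, max_pool], dim=-1)

feats = torch.randn(32, 600, 64)           # e.g. a 10-minute window of 1 Hz encoded samples
pooled = temporal_pool(feats)
print(pooled.shape)                        # torch.Size([32, 128])
```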
Thirdly, Residual Connections, or skip connections, are a technique that connects the output of one layer to the input of a later layer, bypassing one or more layers in between. This method makes the gradient flow more smoothly during training, avoiding the vanishing gradient problem in deep networks. It also allows the network to learn a residual mapping, which is easier to learn than the full mapping from input to output, especially when the network is very deep.
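A residual block in this spirit might look like the following sketch; the convolutional body and channel sizes are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Computes y = x + F(x); the skip connection lets gradients bypass the inner layers."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)            # the network only needs to learn the residual mapping

block = ResidualBlock(channels=64)
x = torch.randn(32, 64, 600)               # (batch, channels, time)
print(block(x).shape)                       # torch.Size([32, 64, 600])
```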
Finally, Transformers, which have achieved state-of-the-art results on a wide range of NLP and time-series prediction tasks, have recently been applied in non-intrusive load monitoring to separate household power consumption into individual appliances. Overall, the proposed method presents a robust and accurate technique for device-level energy decomposition in NILM, improving accuracy while reducing computational cost. Figure 1 shows the proposed model architecture.
## III Dataset and Results
### _Dataset_
The UK-DALE dataset is a widely used dataset in the field of non-intrusive load monitoring (NILM) and is used extensively in this article. It consists of energy consumption data collected from five different households in England. The dataset was collected using smart meters that were installed in each household, and the data was sampled at a frequency of 1 Hz. Along with the energy consumption data, device data samples were also collected every 6 seconds, providing a detailed picture of the energy usage patterns of each household [14].
The dataset is divided into two parts: training and testing. The training dataset consists of energy consumption and device data samples from four households, while the remaining household is reserved for testing purposes. The dataset contains a total of 112 days of data for the training set and 28 days of data for the testing set. Each household in the dataset has a unique set of appliances, which provides a diverse range of energy usage patterns to train and test the NILM algorithms.
### _Experiments and Results_
To assess the model's generalization ability and its capacity to recognize the typical characteristics of home appliances, the network was trained and tested using seen and unseen
scenarios. In the seen scenario, the houses used for training are also included in the test set, while in the unseen scenario, the houses used for training are not included in the test set. The model employed a multi-class classification approach for predicting the energy consumption of appliances, assuming that their consumption remains constant during operation. The network's parameters were optimized by gradient descent using the Adam optimization algorithm with a learning rate of \(10^{-4}\) and a batch size of 32. The training lasted for 300 epochs. The loss per epoch during training is plotted in Figures 2 and 3 for the seen and unseen scenarios, respectively. As can be seen, in both cases the algorithm converged and reached a satisfactory result.
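For reproducibility, a minimal training loop matching the stated hyperparameters (Adam, learning rate \(10^{-4}\), batch size 32, 300 epochs) is sketched below; the stand-in model, synthetic tensors, and loss function are placeholders of our own and not the authors' code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic placeholders: aggregate-power windows and per-appliance on/off targets
windows = torch.randn(1024, 600, 1)
targets = torch.randint(0, 2, (1024, 5)).float()
loader = DataLoader(TensorDataset(windows, targets), batch_size=32, shuffle=True)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(600, 5))  # stand-in for the Seq2Seq model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()

for epoch in range(300):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```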
The results of the proposed model are summarized in Tables I and II for seen and unseen cases respectively. It is observed that the model achieved better performance in terms of evaluation criteria such as SAE and F1 score than the existing state-of-the-art methods. These outcomes demonstrate the effectiveness of the designed model in non-intrusive load monitoring.
the complex temporal patterns and dependencies of energy consumption data. The model was able to accurately identify the consumption patterns of each household appliance, even in situations where there was a significant overlap between their energy consumption patterns.
The implications of accurately identifying the energy consumption patterns of personal appliances in a household are significant for energy-saving challenges, as it can help families better understand their energy consumption and identify areas where it can be reduced. However, there is still room for future research to improve the proposed model's performance, optimize it for accuracy and computational efficiency, evaluate it on larger and more diverse data sets, and compare it with other advanced models and methods.
Furthermore, incorporating additional features such as time, day, weather conditions, etc., and extending the model to multi-task learning settings or combining it with other techniques such as transfer learning and group learning can contribute to the development of more accurate and efficient energy management systems. Overall, these future research directions can contribute to the advancement of the field of non-invasive load monitoring and have important implications for energy-saving and sustainability.
|
2305.15989 | Unitary groups, K-theory and traces | We show that continuous group homomorphisms between unitary groups of unital
C*-algebras induce maps between spaces of continuous real-valued affine
functions on the trace simplices. Under certain $K$-theoretic regularity
conditions, these maps can be seen to commute with the pairing between $K_0$
and traces. If the homomorphism is contractive and sends the unit circle to the
unit circle, the map between spaces of continuous real-valued affine functions
can further be shown to be unital and positive (up to a minus sign). | Pawel Sarkowicz | 2023-05-25T12:29:24Z | http://arxiv.org/abs/2305.15989v2 | # Unitary groups, \(K\)-theory and traces
###### Abstract.
We show that continuous group homomorphisms between unitary groups of unital C*-algebras induce maps between spaces of continuous real-valued affine functions on the trace simplices. Under certain \(K\)-theoretic regularity conditions, these maps can be seen to commute with the pairing between \(K_{0}\) and traces. If the homomorphism is contractive and sends the unit circle to the unit circle, the map between spaces of continuous real-valued affine functions can further be shown to be unital and positive (up to a minus sign).
###### Contents
* 1 Introduction
* 2 Preliminaries and notation
* 3 Continuous unitary group homomorphisms and traces
* 4 The order on \(\operatorname{Aff}T(\cdot)\)
* 5 General linear variants
* 6 Final remarks and open questions
## 1. Introduction
Unitary groups of C*-algebras have long been studied, and for many classes of operator algebras they form a complete invariant. In [4], Dye studied the unitary group isomorphism problem between non-atomic W*-algebras, with the assumption of _weak bicontinuity_ of the isomorphism. He later showed that the unitary group, this time as an algebraic object, determined the type of a factor [4] (except for type \(\mathrm{I}_{2n}\)) - here it was shown that such group isomorphisms were the restrictions of a *-isomorphism or conjugate linear *-isomorphism multiplied by a (possibly discontinuous - [1, Appendix A] gives an exposition) character. Sakai generalized Dye's results to show that any uniformly continuous unitary group isomorphism between AW*-factors comes from a *-isomorphism or conjugate linear *-isomorphism [10] (see also [11] for general AW*-algebras which have no component of type \(I_{n}\)).
Dye's method was generalized to large classes of real rank zero C*-algebras by Al-Rawashdeh, Booth and Giordano in [1], where they applied the method to obtain induced maps between \(K\)-theory; a general linear variant was done by Giordano and Sierakowski in [10]. The stably finite and purely infinite cases were handled separately. The unital, simple AH-algebras of slow dimension growth and of real rank zero were classified by the topological group isomorphism class of their unitary groups (or general linear groups), and the unital, simple, purely infinite UCT algebras were classified via the algebraic isomorphism classes of their unitary groups (or general linear groups). These results made use of the abundance of projections in real rank zero C*-algebras (at least to show there were isomorphic \(K_{0}\)-groups), and made use of the Dadarlat-Elliott-Gong [1, 2] and Kirchberg-Phillips [14] classification theorems respectively (see Theorems 3.3.1 and 8.4.1 of [13] for each respective case).
In [12], it was proven by Paterson that two unital C*-algebras are isomorphic if and only if there is an isometric isomorphism of the unitary groups which acts as the identity on the circle. In a similar vein, the metric structure of the unitary group has also played a role in determining the Jordan *-algebra structure on C*-algebras. In [15], Hatori and Molnar showed that two unital C*-algebras are Jordan *-isomorphic if and only if their unitary groups are isometric as metric spaces, not taking into account any algebraic structure.
Chand and Robert have shown in [1] that if \(A\) and \(B\) are prime traceless C*-algebras with full square zero elements such that \(U^{0}(A)\) is algebraically isomorphic to \(U^{0}(B)\), then \(A\) is either isomorphic or anti-isomorphic to \(B\). In fact, the group isomorphism is the restriction of a *-isomorphism or anti-*-isomorphism which follows from the fact that unitary groups associated to these C*-algebras have certain automatic continuity properties that allow one to use characterizations of _commutativity preserving maps_[1] (see [1]). Chand and Robert also show that if \(A\) is a unital separable C*-algebra with at least one tracial state, then \(U^{0}(A)\) admits discontinuous automorphisms. Thus the existence of traces is an obstruction to classification via algebraic structure on the unitary groups - at least an obstruction to unitary group homomorphisms being the restrictions of *-homomorphisms or anti-*-homomorphisms.
In this paper, we show that uniformly continuous unitary group homomorphisms yield maps between traces which have several desirable \(K\)-theoretic properties, especially under stricter continuity assumptions, namely that the homomorphism sends the circle to the circle and is contractive; this would be automatic if it had a lift to a *-homomorphism or anti-*-homomorphism.
We state our main results.
**Theorem A** (Corollary 3.6).: _Let \(A,B\) be unital C*-algebras. If \(\theta:U^{0}(A)\to U^{0}(B)\) is a uniformly continuous group homomorphism, then there exists a bounded \(\mathbb{R}\)-linear map \(\Lambda_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) such that_
(1.1)
_commutes._
Here \(\pi_{1}(\theta)\) is the map between fundamental groups induced by \(\theta\), and for a C*-algebra \(A\), \(\tilde{\Delta}^{1}_{A}\) is the _pre-determinant_ (used in the definition of the de la Harpe-Skandalis determinant associated to the universal trace) that takes a piece-wise smooth path in \(U^{0}(A)\)1 beginning at the unit to an element of \(\operatorname{Aff}T(A)\). See Section 2.2 for details.
Footnote 1: As every continuous path of unitaries (resp. invertibles) is homotopic to a piece-wise smooth path of unitaries (resp. invertibles) – see Remark 2.2(1) – and \(\tilde{\Delta}^{1}_{A}\) is homotopy-invariant, one can apply \(\tilde{\Delta}^{1}_{A}\) to any path of unitaries (resp. invertibles).
Recall that the \(K_{0}\)-group of a unital C*-algebra can be identified with the fundamental group \(\pi_{1}(U^{0}_{\infty}(A))\). Restricting to C*-algebras with sufficient \(K_{0}\)-regularity - by this we mean C*-algebras whose \(K_{0}\)-group can be realized as loops in the connected component of its unitary group - we get a map between \(K_{0}\)-groups and a map between spaces of continuous real-valued affine functions which commute with the pairing.
**Corollary B** (Corollary 3.6).: _Let \(A,B\) be unital C*-algebras such that the canonical maps_
\[\pi_{1}(U^{0}(A))\to K_{0}(A)\text{ and }\pi_{1}(U^{0}(B))\to K_{0}(B) \tag{1.2}\]
_are isomorphisms. If \(\theta:U^{0}(A)\to U^{0}(B)\) is a continuous group homomorphism then there exists a bounded linear map \(\Lambda_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) such that_
(1.3)
_commutes, where \(K_{0}(\theta)\) is the map induced by \(\pi_{1}(\theta)\) together with the isomorphisms of (1.2)._
C*-algebras satisfying the above hypothesis are quite natural - for example C*-algebras having stable rank one [10] or that are \(\mathcal{Z}\)-stable [11] have this property. Viewing \(\operatorname{Aff}T(A)\) and \(\operatorname{Aff}T(B)\) as partially ordered real Banach
spaces (under the uniform norm) with order units, it is not however true that \(\Lambda_{\theta}\) is unital or positive (see Example 3.4). This is remedied by adding stricter continuity assumptions on the homomorphism \(\theta\) (and possibly by replacing \(\Lambda_{\theta}\) with \(-\Lambda_{\theta}\)).
When \(\theta:U(A)\to U(B)\) is contractive, injective and sends the circle to the circle, then we show (Lemma 4.3) that either \(\Lambda_{\theta}\) or \(-\Lambda_{\theta}\) is unital and positive, and therefore \(\theta\) induces a map between \(K\)-theory and traces in such a manner that respects the pairing (which in turn gives a map between Elliott invariants for certain simple C*-algebras). As a consequence, we can identify certain unitary subgroups with C*-subalgebras by using \(K\)-theoretic classification of embeddings [10].
**Theorem C** (Corollary 4.13).: _Let \(A\) be a unital, separable, simple, nuclear C*-algebra satisfying the UCT which is either \(\mathcal{Z}\)-stable or has stable rank one, and \(B\) be a unital, separable, simple, nuclear \(\mathcal{Z}\)-stable C*-algebra. If there is a contractive injective group homomorphism \(U(A)\to U(B)\) which maps the circle to the circle, then there is an embedding \(A\hookrightarrow B\)._
This paper is structured as follows. In Section 3 we use a continuous unitary group homomorphism to construct a map between spaces of continuous affine functions on the trace simplices, and use the de la Harpe-Skandalis determinant to show that this map has several desirable properties with respect to the map induced on the fundamental groups of the unitary groups. In Section 4 we discuss how our map between spaces of affine functions respects or flips the order under certain continuity assumptions on the unitary group homomorphism. In Section 5 we discuss general linear variants. We finish in Section 6 with some remarks concerning possible alternative methods, as well as some unanswered questions.
## Acknowledgements
Many thanks to my supervisors Thierry Giordano and Aaron Tikuisis for many helpful discussions. Thanks to the authors of [10] for sharing a draft of their paper.
## 2. Preliminaries and notation
### Notation
For a group \(G\), we denote by \(DG\) the derived subgroup of \(G\), i.e.,
\[DG:=\langle ghg^{-1}h^{-1}\mid g,h\in G\rangle. \tag{2.1}\]
If \(G\) has an underlying topology, we denote by \(CG\) the closure of \(DG\) and \(G^{0}\) the connected component of the identity.
For a unital C*-algebra \(A\), \(U(A)\) denotes the unitary group of \(A\), while \(U^{0}(A)\) denotes the connected component of \(U(A)\). For \(n\in\mathbb{N}\), we write \(U_{n}(A):=U(M_{n}(A))\), \(U_{n}^{0}(A):=U^{0}(M_{n}(A))\), and we set
\[U_{\infty}(A):=\lim_{\to}U_{n}(A), \tag{2.2}\]
to be the inductive limit with connecting maps \(U_{n}(A)\ni u\mapsto u\oplus 1\in U_{n+1}(A)\). This makes \(U_{\infty}(A)\) both a topological space (with the inductive limit topology) and a group.2 We have general linear analogues by replacing \(U\) with \(GL\), where \(GL(A)\) denotes the group of invertible elements of \(A\). Similarly, we define \(M_{\infty}(A)=\lim_{\to}M_{n}(A)\) (as an algebraic direct limit) with connecting maps \(x\mapsto x\oplus 0\). If \(E\) is a Banach space and \(\tau:A\to E\) is a linear map that is tracial (i.e., \(\tau(ab)=\tau(ba)\) for all \(a,b\in A\)), we extend this canonically to \(M_{\infty}(A)\) by setting \(\tau((a_{ij})):=\sum_{i}\tau(a_{ii})\) for \((a_{ij})\in M_{n}(A)\).
Footnote 2: As pointed out in a footnote of [CGS\({}^{+}\)], \(U_{\infty}(A)\) is not in general a topological group.
We write \(\pi_{1}(X)\) for the fundamental group of a topological space \(X\) with distinguished base point. In our case, we will usually have \(X=U_{n}^{0}(A)\), for \(n\in\mathbb{N}\cup\{\infty\}\), with the base point being the unit.
For a C*-algebra \(A\), we let \(K_{0}(A),K_{1}(A)\) be the topological \(K\)-groups of \(A\). We will often use the identification of \(K_{0}(A)\) with the fundamental group \(\pi_{1}(U_{\infty}^{0}(A))\). The set of tracial states on \(A\) will be denoted \(T(A)\), which is a Choquet simplex ([5, Theorem 3.1.18]), and we denote by \(\operatorname{Aff}T(A)\) the set of continuous affine functions \(T(A)\to\mathbb{R}\), which is an interpolation group with order unit (see [10, Chapter 11]). For unital \(A\), the pairing map \(\rho_{A}:K_{0}(A)\to\operatorname{Aff}T(A)\) is defined as follows: if \(x\in K_{0}(A)\), we can write \(x=[p]-[q]\) where \(p,q\in M_{n}(A)\) are projections, and then
\[\rho_{A}(x)(\tau):=\tau(p-q),\quad\tau\in T(A). \tag{2.3}\]
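For illustration (this example is ours and is not needed in what follows), consider \(A=M_{n}(\mathbb{C})\), whose unique tracial state is the normalized trace \(\mathrm{tr}_{n}\), so that \(\operatorname{Aff}T(M_{n}(\mathbb{C}))\simeq\mathbb{R}\). For projections \(p,q\in M_{k}(M_{n}(\mathbb{C}))\) the pairing becomes

\[\rho_{M_{n}(\mathbb{C})}([p]-[q])=\mathrm{tr}_{n}(p-q)=\frac{\operatorname{rank}(p)-\operatorname{rank}(q)}{n},\]

so \(\rho_{M_{n}(\mathbb{C})}\) identifies \(K_{0}(M_{n}(\mathbb{C}))\simeq\mathbb{Z}\) with the subgroup \(\frac{1}{n}\mathbb{Z}\subseteq\mathbb{R}\).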
### The de la Harpe-Skandalis determinant and Thomsen's variant
We recall the definition of the unitary variant of the de la Harpe-Skandalis determinant [11] (see [11] for a more in-depth exposition). By a bounded trace we mean a bounded linear map \(\tau:A_{sa}\to E\), where \(E\) is a real Banach space, such that \(\tau(a^{*}a)=\tau(aa^{*})\) for all \(a\in A\). For \(n\in\mathbb{N}\cup\{\infty\}\), a bounded trace \(\tau:A_{sa}\to E\), and a piecewise smooth path \(\xi:[0,1]\to U_{n}(A)\), set
\[\tilde{\Delta}_{\tau}^{n}(\xi):=\int_{0}^{1}\tau\left(\frac{1}{2\pi i}\xi^{ \prime}(t)\xi(t)^{-1}\right)dt, \tag{2.4}\]
where this integral is just the Riemann integral in \(E\). We state the unitary variant of [11, Lemme 1].
**Proposition 2.1**.: _Let \(\tau:A_{sa}\to E\) be a bounded trace and \(n\in\mathbb{N}\cup\{\infty\}\). The map \(\tilde{\Delta}_{\tau}^{n}\), which takes a piecewise smooth path in \(U_{n}^{0}(A)\) to an element in \(E\), has the following four properties:_
1. _it takes pointwise products to sums: if_ \(\xi_{1},\xi_{2}\) _are two piecewise smooth paths, then_ (2.5) \[\tilde{\Delta}_{\tau}^{n}(\xi_{1}\xi_{2})=\tilde{\Delta}_{\tau}^{n}(\xi_{1})+ \tilde{\Delta}_{\tau}^{n}(\xi_{2}),\] _where_ \(\xi_{1}\xi_{2}\) _is the piecewise smooth path_ \(t\mapsto\xi_{1}(t)\xi_{2}(t)\) _from_ \(\xi_{1}(0)\xi_{2}(0)\) _to_ \(\xi_{1}(1)\xi_{2}(1)\)_;_
2. _if_ \(\|\xi(t)-1\|<1\) _for all_ \(t\in[0,1]\)_, then_ (2.6) \[2\pi i\tilde{\Delta}_{\tau}^{n}(\xi)=\tau\big{(}\log\xi(1)-\log\xi(0)\big{)};\]
3. \(\tilde{\Delta}_{\tau}^{n}(\xi)\) _depends only on the homotopy class of_ \(\xi\)_;_
4. _if_ \(p\in M_{n}(A)\) _is a projection, then the path_ \(\xi_{p}:[0,1]\to U_{n}^{0}(A)\) _given by_ \(\xi_{p}(t):=pe^{2\pi it}+(1-p)\) _satisfies_ \(\tilde{\Delta}_{\tau}^{n}(\xi_{p})=\tau(p)\)_._
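As a minimal illustration of the definition (an aside, not needed below), take \(A=\mathbb{C}\), \(E=\mathbb{R}\), and \(\tau\) the identity on \(\mathbb{C}_{sa}=\mathbb{R}\). For the loop \(\xi(t)=e^{2\pi imt}\) with \(m\in\mathbb{Z}\),

\[\tilde{\Delta}^{1}_{\tau}(\xi)=\int_{0}^{1}\tau\left(\frac{1}{2\pi i}\,(2\pi im)e^{2\pi imt}e^{-2\pi imt}\right)dt=m,\]

so the pre-determinant recovers the winding number of the loop; property (4) above is the analogue of this computation for the loop \(\xi_{p}\) determined by a projection.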
The de la Harpe-Skandalis determinant associated to \(\tau\) (at the \(n^{\text{th}}\) level) is then the map
\[\Delta_{\tau}^{n}:U_{n}^{0}(A)\to E/\tilde{\Delta}_{\tau}^{n}(\pi_{1}(U_{n}^{0}(A))) \tag{2.7}\]

given by \(\Delta_{\tau}^{n}(x):=[\tilde{\Delta}_{\tau}^{n}(\xi_{x})]\) where \(\xi_{x}\) is any piecewise smooth path \(\xi_{x}:[0,1]\to U_{n}^{0}(A)\) from \(1\) to \(x\). This is a well-defined group homomorphism to an abelian group, and therefore factors through the derived group, i.e., \(DU_{n}^{0}(A)\subseteq\ker\Delta_{\tau}^{n}\). For the \(n=\infty\) case, we just write \(\tilde{\Delta}_{\tau}\) and \(\Delta_{\tau}\) for \(\tilde{\Delta}_{\tau}^{\infty}\) and \(\Delta_{\tau}^{\infty}\) respectively.
We will often be interested in the _universal trace_\(\operatorname{Tr}_{A}:A_{sa}\to\operatorname{Aff}T(A)\) given by \(\operatorname{Tr}_{A}(a):=\hat{a}\), where \(\hat{a}\in\operatorname{Aff}T(A)\) is the function given by \(\hat{a}(\tau):=\tau(a)\) for \(\tau\in T(A)\). We note that in this case, for \([x]\in K_{0}(A)\), we have that \(\operatorname{Tr}(x)=\rho_{A}([x])\). In this case, we will write \(\Delta^{n}\) and \(\Delta\) for \(\Delta_{\operatorname{Tr}}^{n}\) and \(\Delta_{\operatorname{Tr}}^{\infty}\) respectively. If the C*-algebra needs to be specified, we write \(\Delta_{A}^{n}\) or \(\Delta_{A}\).
**Remark 2.2**.: _Every continuous path \([0,1]\to GL_{n}(A)\) is homotopic to a piece-wise smooth path (even a piece-wise smooth exponential path if we are in \(GL_{n}^{0}(A)\)), and as \(\tilde{\Delta}^{n}\) is homotopy-invariant, it makes sense to apply \(\tilde{\Delta}^{n}\) to any continuous path. Indeed, as in the proof of [11, Lemme 3], take any piece-wise smooth path \(\xi:[0,1]\to GL_{n}(A)\) and choose \(k\) such that_
\[\|\xi(\frac{j-1}{k})^{-1}\xi(\frac{j}{k})-1\|<1\text{ for all }j=1,\dots,k. \tag{2.8}\]
_Then taking \(a_{j}:=\frac{1}{2\pi i}\log\big{(}\xi(\frac{j-1}{k})^{-1}\xi(\frac{j}{k}) \big{)},j=1,\dots,k\), \(\xi\) will be homotopic to the path_
\[\eta(t)=\xi\left(\frac{j-1}{k}\right)e^{2\pi i(kt-j+1)a_{j}},\quad t\in\left[\frac{j-1}{k},\frac{j}{k}\right]. \tag{2.9}\]
_We note that if \(a=\sum_{j=1}^{k}a_{j}\), then \(\tilde{\Delta}^{n}(\xi)=\tilde{\Delta}^{n}(\eta)=\text{Tr}(a)\). If \(\xi\) is a path of unitaries, then so is \(\eta\) and the \(a_{j}\)'s are self-adjoint._
Let \(A_{0}\) consist of elements \(a\in A_{sa}\) satisfying \(\tau(a)=0\) for all \(\tau\in T(A)\). This is a norm-closed real subspace of \(A_{sa}\) such that \(A_{0}\subseteq\overline{[A,A]}\), and there is an isometric identification \(A_{sa}/A_{0}\simeq\text{Aff}\,T(A)\) sending an element \([a]\) to \(\widehat{a}\). Indeed, it is not hard to see that the map \(A_{sa}/A_{0}\to\text{Aff}\,T(A)\) given by \([a]\mapsto\hat{a}\) is a well-defined \(\mathbb{R}\)-linear map. Moreover [13, Theorem 2.9], together with a convexity argument, gives that this is an isometric identification. To see that we have all the real-valued affine functions, we note that the image of this map contains constant functions and separates points, so [14, Corollary 7.4] gives that the image is dense and therefore all of \(\text{Aff}\,T(A)\) (since this is an isometry). We freely identify \(A_{sa}/A_{0}\) with \(\text{Aff}\,T(A)\).
### Thomsen's variant
Thomsen's variant of the de la Harpe-Skandalis determinant is the Hausdorff version, taking into account the closure of the image of the homotopy groups. For a bounded trace \(\tau:A_{sa}\to E\), we consider the map
\[\bar{\Delta}^{n}_{\tau}:U^{0}_{n}(A)\to E/\overline{\bar{\Delta}^{n}_{\tau}( \pi_{1}(U^{0}_{n}(A)))}, \tag{2.10}\]
given by \(\bar{\Delta}^{n}_{\tau}(x):=[\tilde{\Delta}^{n}_{\tau}(\xi_{x})]\) where \(\xi_{x}:[0,1]\to U^{0}_{n}(A)\) is any piecewise smooth path from \(1\) to \(x\in U^{0}_{n}(A)\). This is similar to the map \(\Delta^{n}_{\tau}\), except the codomain is now the quotient by the closure of the image of the fundamental group under the pre-determinant (i.e., the Hausdorffization of the codomain). When considering the universal trace, we just write \(\overline{\Delta}^{n}\) for \(\overline{\Delta}^{n}_{\text{Tr}}\) and \(\overline{\Delta}\) for \(\overline{\Delta}^{\infty}_{\text{Tr}}\). If the C*-algebra needs to be specified, we write \(\overline{\Delta}^{n}_{A}\) or \(\overline{\Delta}_{A}\).
If one considers the universal trace, the kernel of \(\overline{\Delta}^{n}\) can be identified with \(CU^{0}_{n}(A)\) (where the closure is taken with respect to the inductive limit topology in the \(n=\infty\) case).
**Lemma 2.3** (Lemma 3.1, [16]).: _Let \(A\) be a unital C*-algebra. Then_
\[\ker\overline{\Delta}^{n}=CU^{0}_{n}(A). \tag{2.11}\]
It is not in general true that the kernel of \(\Delta^{n}\) can be identified with the derived group \(DU^{0}_{n}(A)\), although there are several positive results [11, 16, 17, 18].
It immediately follows that the quotient of \(U^{0}_{n}(A)\) by the closure of the commutator subgroup (under the inductive limit topology in the \(n=\infty\) case) can be identified with a quotient of \(\text{Aff}\,T(A)\).
**Theorem 2.4** (Theorem 3.2, [16]).: \(\overline{\Delta}^{n}\) _gives a homeomorphic group isomorphism_
\[U^{0}_{n}(A)/CU^{0}_{n}(A)\simeq\text{Aff}\,T(A)/\overline{\tilde{\Delta}^{n}( \pi_{1}(U^{0}_{n}(A)))} \tag{2.12}\]
_for every \(n\in\mathbb{N}\cup\{\infty\}\). In particular,_
\[U^{0}_{\infty}(A)/CU^{0}_{\infty}(A)\simeq\operatorname{Aff}T(A)/\overline{\rho_ {A}(K_{0}(A))}. \tag{2.13}\]
### The \(KT_{u}\)-invariant
Following [CGS\({}^{+}\)], for a unital C*-algebra, we let
\[KT_{u}(A):=(K_{0}(A),[1]_{0},K_{1}(A),\rho_{A},\operatorname{Aff}T(A)) \tag{2.14}\]
be the invariant consisting of the \(K_{0}\)-group, the position of the unit in \(K_{0}\), the \(K_{1}\) group, the pairing between \(K_{0}\) and traces, and the continuous real-valued affine functions on the trace simplex (viewed as a partially ordered Banach space with order unit). For two unital C*-algebras \(A,B\), a \(KT_{u}\)-morphism
\[(\alpha_{0},\alpha_{1},\gamma):KT_{u}(A)\to KT_{u}(B), \tag{2.15}\]
will be a triple \((\alpha_{0},\alpha_{1},\gamma)\) consisting of \(\alpha_{0}:K_{0}(A)\to K_{0}(B)\) a group homomorphism such that \(\alpha_{0}([1]_{0})=[1]_{0}\), \(\alpha_{1}:K_{1}(A)\to K_{1}(B)\) a group homomorphism, and \(\gamma:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) is a unital positive map such that
(2.16)
commutes.
We note that for large classes of unital simple C*-algebras - for example the class of unital, simple, separable, nuclear \(\mathcal{Z}\)-stable C*-algebras satisfying the UCT - \(KT_{u}(\cdot)\) recovers the Elliott invariant.
## 3. Continuous unitary group homomorphisms and traces
Throughout, \(A\) and \(B\) will be unital C*-algebras with non-empty compact trace simplices, and \(\theta:U^{0}(A)\to U^{0}(B)\) will denote a uniformly continuous group homomorphism between the connected components of the unitary groups. We will specify any additional assumptions as we go along. As \(\theta\) is continuous, it sends the connected component to the connected component, and since it is a homomorphism it sends commutators to commutators. Moreover it sends limits of commutators to limits of commutators. Thus there are induced group homomorphisms
\[U^{0}(A)/CU^{0}(A)\to U^{0}(B)/CU^{0}(B)\text{ and }U^{0}(A)/DU^{0}(A)\to U^{0}(B)/DU^{0}(B). \tag{3.1}\]
Thomsen's isomorphism [10, Theorem 3] then brings about maps between quotients of \(\operatorname{Aff}T(A)\) and \(\operatorname{Aff}T(B)\):
(3.2)
In a similar vein, by modding out by \(\ker\Delta_{A}^{1}\) and \(\ker\Delta_{B}^{1}\), respectively, instead of the closure of derived groups, there is a purely algebraic variant of the above diagram:
(3.3)
In the special case where \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) and \(\pi_{1}(U^{0}(B))\to K_{0}(B)\) are surjections, we have induced maps between quotients
\[\operatorname{Aff}T(A)/\overline{\rho_{A}(K_{0}(A))} \to\operatorname{Aff}T(B)/\overline{\rho_{B}(K_{0}(B))},\] \[\operatorname{Aff}T(A)/\rho_{A}(K_{0}(A)) \to\operatorname{Aff}T(B)/\rho_{B}(K_{0}(B)) \tag{3.4}\]
in the respective Hausdorff and non-Hausdorffized settings.
One question to be answered is whether or not we can lift the maps on the right of (3.2) and (3.3) to maps \(\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\). These spaces have further structure as dimension groups with order units [11, Chapter 7], so we would like to be able to alter the lift to get a map which is unital and positive. We show that we can always lift this map, and altering it to be unital and positive is possible under a certain continuity assumption on \(\theta\).
If we further assume that \(K_{0}(A)\simeq\pi_{1}(U^{0}(A))\) and \(K_{0}(B)\simeq\pi_{1}(U^{0}(B))\) in the canonical way (which is true in the presence of \(\mathcal{Z}\)-stability by [10] or stable rank one [12]), we would like this map to be compatible with the group homomorphism
\[K_{0}(\theta):K_{0}(A)\to K_{0}(B) \tag{3.5}\]
arising from the diagram
(3.6)
By compatible, we mean that
(3.7)
commutes, where the map on the right is the lift coming from the induced map on abelianizations as in (3.2) and (3.3). If our map between spaces of affine continuous functions is not unital and positive, but we can alter it accordingly, we must do the same to our map between \(K_{0}\). We would still have a commuting diagram as above, but it would give that maps induced on \(K_{0}(\cdot)\) and \(\operatorname{Aff}T(\cdot)\) respect the pairing.
Stone's theorem [14, Section X.5] allows one to recover from a strongly continuous one-parameter family \(U(t)\) of unitaries a (possibly unbounded) self-adjoint operator \(X\) such that \(U(t)=e^{itX}\) for all \(t\in\mathbb{R}\). If it is a norm-continuous one-parameter family of unitaries, one can recover a bounded self-adjoint operator \(X\), and \(X\) will lie in the C*-algebra generated by the unitaries. The use of Stone's theorem to deduce that continuous group homomorphisms between unitary groups send exponentials to exponentials is not new. Sakai used it in the 1950s in order to show that a norm-continuous isomorphism between AW*-algebras induces a *-isomorphism or anti-*-isomorphism between the algebras themselves [20]. More recently, this sort of idea has been used to understand how the metric structure of the unitary groups can be related to the Jordan *-algebra structure of the algebras [13].
**Lemma 3.1**.: _Let \(a\in A_{sa}\), represent \(B\subseteq\mathcal{B}(\mathcal{H})\) faithfully, and let \(\theta:U^{0}(A)\to U^{0}(B)\) be a continuous group homomorphism. Then \((\theta(e^{2\pi ita}))_{t\in\mathbb{R}}\) is a one-parameter norm-continuous family of unitaries, and consequently is of the form \((e^{2\pi itb})_{t\in\mathbb{R}}\) for some \(b\in B_{sa}\)._
Proof.: Using the fact that \(\theta\) is a norm-continuous homomorphism, \(t\mapsto\theta(e^{2\pi ita})\) is a norm-continuous one-parameter family of unitaries. Stone's theorem gives that there is some self-adjoint \(b\in\mathcal{B}(\mathcal{H})\) such that \(\theta(e^{2\pi ita})=e^{2\pi itb}\) for all \(t\in\mathbb{R}\) (the boundedness of \(b\) follows from norm-continuity). Since \(\theta(e^{2\pi ita})=e^{2\pi itb}\in B\) for all \(t\in\mathbb{R}\), one can take \(t\) to be sufficiently small, then take a logarithm to get that \(b\in B\) by continuous functional calculus.
Let \(S_{\theta}:A_{sa}\to B_{sa}\) be defined via the correspondence given above:
\[\theta(e^{2\pi ita})=e^{2\pi itS_{\theta}(a)}\text{ for all }t\in\mathbb{R}. \tag{3.8}\]
Then \(S_{\theta}\) is a bounded \(\mathbb{R}\)-linear map (see [20, 13], or note that it is easy to see that its kernel is closed). It is also easily checked to respect commutation, and that its canonical extension to a map from \(A\) to \(B\) actually
sends commutators to commutators, although we won't explicitly use this. Recall that for a C*-algebra \(A\), \(A_{0}\) denotes the set of self-adjoint elements that vanish on every trace.
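As an aside sketching one standard route to additivity (following the references just cited rather than introducing anything new): for \(a,b\in A_{sa}\) and \(t\in\mathbb{R}\), the Lie product formula together with the continuity and multiplicativity of \(\theta\) gives

\[e^{2\pi itS_{\theta}(a+b)}=\theta\big(e^{2\pi it(a+b)}\big)=\theta\Big(\lim_{k\to\infty}\big(e^{2\pi ita/k}e^{2\pi itb/k}\big)^{k}\Big)=\lim_{k\to\infty}\big(e^{2\pi itS_{\theta}(a)/k}e^{2\pi itS_{\theta}(b)/k}\big)^{k}=e^{2\pi it(S_{\theta}(a)+S_{\theta}(b))},\]

and taking \(t\) small and logarithms yields \(S_{\theta}(a+b)=S_{\theta}(a)+S_{\theta}(b)\); real homogeneity follows directly from the defining relation (3.8).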
**Lemma 3.2**.: _If \(\theta:U^{0}(A)\to U^{0}(B)\) is a continuous group homomorphism, then \(S_{\theta}\) is a bounded linear map and the following hold._
1. _If_ \(\theta\) _is injective, then_ \(S_{\theta}\) _is injective._
2. _If_ \(\theta\) _is bijective, then_ \(S_{\theta}\) _is bijective._
Proof.: As already remarked, \(S_{\theta}\) is bounded. Assuming that \(\theta\) is injective, suppose that \(S_{\theta}(a)=S_{\theta}(b)\). Then
\[\theta(e^{2\pi ita})=\theta(e^{2\pi itb}) \tag{3.9}\]
for all \(t\in\mathbb{R}\). Injectivity of \(\theta\) gives that \(e^{2\pi ita}=e^{2\pi itb}\) for all \(t\in\mathbb{R}\). But this implies that \(a=b\) since we can take \(t\) appropriately close to \(0\) and take logarithms.
Now if we further assume that \(\theta\) is surjective, then \((\theta^{-1}(e^{2\pi itb}))_{t\in\mathbb{R}}\subseteq U^{0}(A)\) is a norm-continuous one-parameter family of unitaries which we can write as \((e^{2\pi ita})_{t\in\mathbb{R}}\) for some \(a\in A_{sa}\). But then
\[\theta(e^{2\pi ita})=\theta\circ\theta^{-1}(e^{2\pi itb})=e^{2\pi itb}. \tag{3.10}\]
Thus \(S_{\theta}(a)=b\).
We say that a linear map \(\tau:A_{sa}\to E\), where \(E\) is a real Banach space, is a bounded trace if it is a bounded \(\mathbb{R}\)-linear map such that \(\tau(a^{*}a)=\tau(aa^{*})\) for all \(a\in A\) (note that this is equivalent to \(\tau\circ\operatorname{Ad}_{u}=\tau\) for all \(u\in U(A)\)).
**Proposition 3.3**.: _Let \(\theta:U^{0}(A)\to U^{0}(B)\) be a continuous group homomorphism, \(E\) be a real Banach space and \(\tau:B_{sa}\to E\) a bounded trace. Then \(\tau\circ S_{\theta}:A_{sa}\to E\) is a bounded trace. In particular, \(S_{\theta}(A_{0})\subseteq B_{0}\) and \(S_{\theta}\) induces a bounded \(\mathbb{R}\)-linear map_
\[\Lambda_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B). \tag{3.11}\]
Proof.: Observe that, for \(a\in A_{sa}\) and \(u\in U(A)\), we have
\[\begin{split} e^{2\pi itS_{\theta}(uau^{*})}&=\theta(e^{2\pi ituau^{*}})\\ &=\theta(ue^{2\pi ita}u^{*})\\ &=\theta(u)e^{2\pi itS_{\theta}(a)}\theta(u)^{*}\\ &=e^{2\pi it\theta(u)S_{\theta}(a)\theta(u)^{*}}\end{split} \tag{3.12}\]
for all \(t\in\mathbb{R}\). Therefore \(S_{\theta}(uau^{*})=\theta(u)S_{\theta}(a)\theta(u)^{*}\), and applying \(\tau\) yields
\[\begin{split}\tau\circ S_{\theta}(uau^{*})&=\tau \left(\theta(u)S_{\theta}(a)\theta(u)^{*}\right)\\ &=\tau(S_{\theta}(a)),\end{split} \tag{3.13}\]
i.e., \(\tau\circ S_{\theta}\) is tracial.4
Footnote 4: A bounded linear map \(\tau:A_{sa}\to E\) is tracial if and only if it is invariant under conjugation by unitaries.
Thus if \(a\in A_{0}\), it vanishes on every tracial state (hence on every tracial functional), and so \(\tau\circ S_{\theta}(a)=0\) for all \(\tau\in T(B)\). Therefore \(S_{\theta}(A_{0})\subseteq B_{0}\) and so \(S_{\theta}\) factors through a map
\[\Lambda_{\theta}:\operatorname{Aff}T(A)\simeq A_{sa}/A_{0}\to B_{sa}/B_{0} \simeq\operatorname{Aff}T(B), \tag{3.14}\]
where we identify \(A_{sa}/A_{0}\simeq\operatorname{Aff}T(A)\).
One cannot expect \(\Lambda_{\theta}\) (or \(S_{\theta}\)) to be unital or positive, as the following examples show.
**Example 3.4**.:
1. _Consider a continuous homomorphism_ \(\theta:\mathbb{T}\to\mathbb{T}=U^{0}(\mathbb{C})=U(\mathbb{C})\)_. By Pontryagin duality,_ \(\theta(z)=z^{n}\) _for some_ \(n\in\mathbb{Z}\)_. We then have that_ \(S_{\theta}:\mathbb{R}\to\mathbb{R}\) _is given by_ \(S_{\theta}(x)=nx\)_. If_ \(n\neq 1\)_, clearly_ \(S_{\theta}\) _is not unital. If_ \(n<0\)_, then_ \(S_{\theta}\) _is not positive since it sends_ \(1\) _to_ \(n<0\)_. An important observation, however, is that if_ \(n<0\)_,_ \(-S_{\theta}:\mathbb{R}\to\mathbb{R}\) _is positive, and_ \(\frac{1}{n}S_{\theta}\) _is unital and positive._
2. _Consider_ \(\theta:\mathbb{T}^{3}\to\mathbb{T}\) _given by_ \(\theta(z,w,v)=\overline{z}wv\)_. The corresponding map_ \(S_{\theta}:\mathbb{R}^{3}\to\mathbb{R}\) _is given by_ (3.15) \[S_{\theta}(a,b,c)=-a+b+c.\] _Clearly_ \((1,0,0)\in\mathbb{C}^{3}\) _is a positive element, however_ \(S_{\theta}(1,0,0)=-1<0\)_. This map is however unital._
3. _Let_ \(\theta:U_{2}\to\mathbb{T}\) _be defined by_ \(\theta(u)=\det(u)\)_. Then_ \(S_{\theta}:(M_{2})_{sa}\to\mathbb{R}\) _is defined by_ \(S_{\theta}(A)=\text{tr}A\)_, where tr is the unnormalized trace. Clearly this map is not unital, but it is positive._
4. _Let_ \(\theta:U_{2}\to U_{3}\) _be defined by_ \(\theta(u)=u\oplus 1\)_. Then_ \(S_{\theta}:(M_{2})_{sa}\to(M_{3})_{sa}\) _is given by_ \(S_{\theta}(A)=A\oplus 0\)_, which is again not unital, but is positive. The induced map_ \(\Lambda_{\theta}:\mathbb{R}\to\mathbb{R}\) _is given by_ \(\Lambda_{\theta}(x)=\frac{2}{3}x\) _for_ \(x\in\mathbb{R}\)_._
5. _Let_ \(\theta:\mathbb{T}\hookrightarrow U_{2}\) _be defined by_ (3.16) \[\theta(\lambda)=\begin{pmatrix}\lambda&\\ &\lambda\end{pmatrix}.\] _Then_ \(S_{\theta}\) _is a unital, positive isometry and_ \(\Lambda_{\theta}\) _gives rise the identity map_ (3.17) \[\mathbb{R}=\operatorname{Aff}T(\mathbb{C})\to\operatorname{Aff}T(M_{2})= \mathbb{R}.\]
The above examples are important. If \(\theta(\mathbb{T})\subseteq\mathbb{T}\), which is a moderate assumption (if \(\theta\) were the restriction of a unital *-homomorphism or an anti-*-homomorphism, then it would send \(\mathbb{C}\) to \(\mathbb{C}\) and in particular \(\mathbb{T}\) to \(\mathbb{T}\)), then we can restrict the homomorphism to the circle to get a continuous group homomorphism \(\mathbb{T}\to\mathbb{T}\). We understand such homomorphisms by Pontryagin duality [11, Chapter 4].
We now use (pre-)determinant techniques in order to show desirable relationships between our maps.
**Proposition 3.5**.: _Let \(E\) be a real Banach space and \(\tau:B_{sa}\to E\) a bounded trace._
1. _Let_ \(\xi:[0,1]\to U^{0}(A)\) _be a piecewise smooth path with_ \(\xi(0)=1\)_. Then_ (3.18) \[\tilde{\Delta}^{1}_{\tau\circ S_{\theta}}(\xi)=\tilde{\Delta}^{1}_{\tau}(\theta\circ\xi).\] _In particular,_ \(\tilde{\Delta}^{1}_{\tau\circ S_{\theta}}(\pi_{1}(U^{0}(A)))\subseteq\tilde{\Delta}^{1}_{\tau}(\pi_{1}(U^{0}(B)))\)_._
2. _The following diagram commutes:_ (3.19)
3. _The following diagram commutes:_ (3.20) _where the map on the right is the canonical map induced from the inclusion_ \(\tilde{\Delta}^{1}_{\tau\circ S_{\theta}}(\pi_{1}(U^{0}(A)))\subseteq\tilde{\Delta}^{1}_{\tau}(\pi_{1}(U^{0}(B)))\) _coming from (1). The analogous diagram commutes if we consider Thomsen's variant of the de la Harpe-Skandalis determinant associated to_ \(\tau\) _and_ \(\tau\circ S_{\theta}\) _in (3.20)._
Proof.: By Remark 2.2, we can find \(k\in\mathbb{N}\) and \(a_{1},\dots,a_{k}\in A_{sa}\) such that \(\xi\) is homotopic to the path

\[\eta(t)=\left(\prod_{l=1}^{j-1}e^{2\pi ia_{l}}\right)e^{2\pi i(kt-j+1)a_{j}},\quad t\in\left[\frac{j-1}{k},\frac{j}{k}\right], \tag{3.21}\]
with the convention that the product on the left is \(1\) for \(j\leq 0\). Now whenever \(\omega:A_{sa}\to F\) is a bounded trace to a real Banach space \(F\), we have
\[\tilde{\Delta}^{1}_{\omega}(\xi)=\sum_{j=1}^{k}\omega(a_{j}). \tag{3.22}\]
Now \(\theta\circ\xi\) is homotopic to \(\theta\circ\eta\), which has the following form, for \(t\in[\frac{j-1}{k},\frac{j}{k}]\):
\[\begin{split}\theta\circ\eta(t)&=\left(\prod_{l=1}^{j-1}\theta(e^{2\pi ia_{l}})\right)\theta\big(e^{2\pi i(kt-j+1)a_{j}}\big)\\ &=\left(\prod_{l=1}^{j-1}e^{2\pi iS_{\theta}(a_{l})}\right)e^{2\pi i(kt-j+1)S_{\theta}(a_{j})}.\end{split} \tag{3.23}\]
Of course, we then have that
\[\tilde{\Delta}^{1}_{\tau}(\theta\circ\xi)=\tilde{\Delta}^{1}_{\tau}(\theta \circ\eta)=\sum_{j=1}^{k}\tau(S_{\theta}(a_{j}))=\tilde{\Delta}^{1}_{\tau \circ S_{\theta}}(\xi). \tag{3.24}\]
Part (2) follows from (1) and (3) follows from (2). The remark about Thomsen's variant is obvious.
**Corollary 3.6**.: _The following diagram commutes:_
(3.25)
_In particular, when the canonical maps_
\[\pi_{1}(U^{0}(A))\to K_{0}(A)\text{ and }\pi_{1}(U^{0}(B))\to K_{0}(B) \tag{3.26}\]
_are isomorphisms, we have that_
(3.27)
_commutes, where \(K_{0}(\theta):K_{0}(A)\to K_{0}(B)\) is the map induced from the diagram_
(3.28)
Proof.: The first part follows from Proposition 3.5(2) with the trace being the universal trace \(\operatorname{Tr}_{B}\bigl{|}_{B_{sa}}:B_{sa}\to\operatorname{Aff}T(B)\). The second part follows since if \(\xi\) is
a (piecewise smooth) path in \(U_{n}(A)\) and \(m>n\), then
\[\tilde{\Delta}_{\tau}^{m}(\xi\oplus 1_{m-n})=\tilde{\Delta}_{\tau}^{n}(\xi). \tag{3.29}\]
**Proposition 3.7**.: _Let \(A,B\) be unital C*-algebras. Then \(\Lambda_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) is a lift of the map_
\[\operatorname{Aff}T(A)/\tilde{\Delta}_{A}^{1}(\pi_{1}(U^{0}(A)))\to \operatorname{Aff}T(B)/\tilde{\Delta}_{B}^{1}(\pi_{1}(U^{0}(B))) \tag{3.30}\]
_as described in (3.3)._
Proof.: Let us label the maps in the diagram (3.3):
(3.31)
where \(\tilde{\theta}([u]):=[\theta(u)]\), \(\delta_{A}^{1}([e^{2\pi ia}]):=[\hat{a}]\), \(\delta_{B}^{1}\) is defined similarly, and
\[P=\delta_{B}^{1}\circ\tilde{\theta}\circ(\delta_{A}^{1})^{-1}. \tag{3.32}\]
But then we have that
\[P([\hat{a}]) =\delta_{B}^{1}\circ\tilde{\theta}\circ(\delta_{A}^{1})^{-1}([\hat{a}])\] \[=\delta_{B}^{1}\circ\tilde{\theta}([e^{2\pi ia}])\] \[=\delta_{B}^{1}([\theta(e^{2\pi ia})])\] \[=\delta_{B}^{1}([e^{2\pi iS_{\theta}(a)}])\] \[=[\widehat{S_{\theta}(a)}]\] \[=[\Lambda_{\theta}(\hat{a})] \tag{3.33}\]
In particular, the diagram
(3.34)
commutes, where \(q_{A}^{1}\) and \(q_{B}^{1}\) are the respective quotient maps.
**Proposition 3.8**.: _Let \(A,B\) be unital C*-algebras and \(\theta:U^{0}(A)\to U^{0}(B)\) be a continuous group homomorphism. Then \(\Lambda_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) is a lift of the map_
\[\operatorname{Aff}T(A)/\overline{\tilde{\Delta}^{1}_{A}(\pi_{1}(U^{0}(A)))} \to\operatorname{Aff}T(B)/\overline{\tilde{\Delta}^{1}_{B}(\pi_{1}(U^{0}(B)))} \tag{3.35}\]
_as described in (3.2)._
Proof.: One can mimic the proof above or apply the above result and appeal to the commuting diagram
\[\begin{CD}\operatorname{Aff}T(A)/\tilde{\Delta}^{1}_{A}(\pi_{1}(U^{0}(A)))@>{ }>{}>\operatorname{Aff}T(A)/\overline{\tilde{\Delta}^{1}_{A}(\pi_{1}(U^{0}(A) ))}\\ @V{}V{}V@V{}V{}V\\ \operatorname{Aff}T(B)/\tilde{\Delta}^{1}_{B}(\pi_{1}(U^{0}(B)))@>{}>{}> \operatorname{Aff}T(B)/\overline{\tilde{\Delta}^{1}_{B}(\pi_{1}(U^{0}(B)))}, \end{CD} \tag{3.36}\]
where the vertical maps are defined via the diagrams from (3.3) and (3.2) respectively, and the horizontal maps are the canonical surjections.
In particular, assuming some \(K_{0}\)-regularity gives that \(\Lambda_{\theta}\) is a lift of a map between quotients of spaces of continuous real-valued affine functions by images of \(K_{0}\).
**Corollary 3.9**.: _Let \(A,B\) be unital C*-algebras such that the canonical maps_
\[\pi_{1}(U^{0}(A))\to K_{0}(A)\text{ and }\pi_{1}(U^{0}(B))\to K_{0}(B) \tag{3.37}\]
_are surjections. If \(\theta:U^{0}(A)\to U^{0}(B)\) is a continuous homomorphism, then \(\Lambda_{\theta}\) is a lift of the right maps of the following two commutative diagrams:_
\[\begin{CD}U^{0}(A)/CU^{0}(A)@>{\simeq}>{}>\operatorname{Aff}T(A)/\overline{ \rho_{A}(K_{0}(A))}\\ @V{}V{}V@V{}V{}V\\ U^{0}(B)/CU^{0}(B)@>{}>{\simeq}>\operatorname{Aff}T(B)/\overline{\rho_{B}(K_{0} (B))}\end{CD} \tag{3.38}\]
_and_
\[\begin{CD}U^{0}(A)/\ker\Delta^{1}_{A}@>{\simeq}>{}>\operatorname{Aff}T(A)/ \rho_{A}(K_{0}(A))\\ @V{}V{}V@V{}V{}V\\ U^{0}(B)/\ker\Delta^{1}_{B}@>{}>{\simeq}>\operatorname{Aff}T(B)/\rho_{B}(K_{0} (B)).\end{CD} \tag{3.39}\]
_Further, if \(\ker\Delta^{1}_{A}=DU^{0}(A)\) and \(\ker\Delta^{1}_{B}=DU^{0}(B)\), then \(\Lambda_{\theta}\) is a lift of the map induced by the diagram_
(3.40)
C*-algebras satisfying the last condition arise naturally - for example unital, separable, simple, pure C*-algebras of stable rank \(1\) such that every \(2\)-quasi-tracial state on \(A\) is a trace [16] have this property.
## 4. The order on \(\operatorname{Aff}T(\cdot)\)
We now examine when the map induced on \(\operatorname{Aff}T(\cdot)\) is positive in order to compare \(K\)-theory, traces, and the pairing. As we saw in Example 3.4, the map we get between spaces of affine functions on the trace simplices need not be positive nor unital in general. In this section, we will be able to use the map \(\Lambda_{\theta}\) to construct a unital positive map, under some extra assumptions on \(\theta\).
We record the following results, as they give necessary and sufficient conditions for \(\Lambda_{\theta}\) to be positive. We use the C*-algebra-valued analogue of the fact that any unital, contractive linear functional is positive, along with the fact that completely positive maps are (completely) bounded with the norm determined by the image of the unit. Recall that an operator system is a self-adjoint unital subspace of a C*-algebra. The following results about completely positive maps are a combination of Proposition 2.11, Theorem 3.9 and Proposition 3.6 in [14], respectively.
**Proposition 4.1**.: _Let \(\mathcal{S}\) be an operator system and \(B\) a unital C*-algebra._
1. _If_ \(\phi:\mathcal{S}\to B\) _is a unital contraction, then_ \(\phi\) _is positive._
2. _If_ \(B=C(X)\) _and_ \(\phi:\mathcal{S}\to B\) _is positive, then it is bounded with_ \(\|\phi\|=\|\phi(1)\|\)_._
**Lemma 4.2**.: _Let \(\theta:U^{0}(A)\to U^{0}(B)\) be a continuous group homomorphism such that \(\theta(\mathbb{T})=\mathbb{T}\). If \(\theta|_{\mathbb{T}}\) is injective, then \(S_{\theta}(1)\in\{1,-1\}\)._
Proof.: The restriction \(\theta|_{\mathbb{T}}:\mathbb{T}\to\mathbb{T}\) is a continuous group homomorphism, hence by Pontryagin duality is of the form \(\theta(z)=z^{n}\) for some \(n\). Injectivity implies that \(n\in\{1,-1\}\). We then have that
\[e^{2\pi iS_{\theta}(1)t}=\theta(e^{2\pi it})=e^{2\pi int} \tag{4.1}\]
for all \(t\in\mathbb{R}\). This implies that \(S_{\theta}(1)=n\cdot 1\in\{1,-1\}\).
**Lemma 4.3**.: _Let \(\theta:U^{0}(A)\to U^{0}(B)\) be continuous group homomorphism such that \(\theta(\mathbb{T})=\mathbb{T}\). If \(\theta\) is injective, the following are equivalent._
1. _one of_ \(\Lambda_{\theta}\) _or_ \(-\Lambda_{\theta}\) _is positive;_
2. \(\Lambda_{\theta}\) _is contractive._
Proof.: By Lemma 4.2, we know that \(S_{\theta}(1)\in\{1,-1\}\) and consequently \(\Lambda_{\theta}(\hat{1})\in\{\hat{1},\widehat{-1}\}\) (where we recall that, for \(a\in A_{sa}\), \(\hat{a}\in\operatorname{Aff}T(A)\) is the affine function \(\hat{a}(\tau)=\tau(a)\)). By replacing \(\Lambda_{\theta}\) with \(-\Lambda_{\theta}\), we can without loss of generality assume that \(\Lambda_{\theta}\) is unital. Using the fact that \(\operatorname{Aff}T(A)+i\operatorname{Aff}T(A)\subseteq C(T(A))\) is an operator system and the canonical extension
\[\Lambda_{\theta}^{\mathbb{C}}:\operatorname{Aff}T(A)+i\operatorname{Aff}T(A) \to\operatorname{Aff}T(B)+i\operatorname{Aff}T(B)\subseteq C(T(B)) \tag{4.2}\]
is a unital linear map with abelian target algebra, this is an easy consequence of the two parts of Proposition 4.1.
**Lemma 4.4**.: _Let \(\theta:U^{0}(A)\to U^{0}(B)\) be a continuous group homomorphism. If \(K>0\) is such that \(\|\theta(u)-\theta(v)\|\leq K\|u-v\|\) for all \(u,v\in U^{0}(A)\), then \(\|S_{\theta}\|\leq K\) and \(\|\Lambda_{\theta}\|\leq K\). If \(\theta\) is isometric, then so is \(S_{\theta}\). If \(\theta\) is a surjective isometry, then \(\Lambda_{\theta}\) is a surjective isometry._
Proof.: That \(S_{\theta}\) is an isometry when \(\theta\) is an isometry can be seen from an argument in [11]; we adapt that argument to show the bound condition. We use the observation that
\[\frac{e^{2\pi ita}-1}{t}\to 2\pi ia \tag{4.3}\]
as \(t\to 0\). Since
\[\|e^{2\pi itS_{\theta}(a)}-1\|\leq K\|e^{2\pi ita}-1\| \tag{4.4}\]
for all \(t\in\mathbb{R}\), we can divide both sides by \(\frac{1}{2\pi}|t|\) and take \(t\to 0\) to get that
\[\|S_{\theta}(a)\|\leq K\|a\|. \tag{4.5}\]
Now if \(\theta\) is a surjective isometry, we identify \(\operatorname{Aff}T(A)\simeq A_{sa}/A_{0}\) and \(\operatorname{Aff}T(B)\simeq B_{sa}/B_{0}\) and note that \(S_{\theta}(A_{0})=B_{0}\) and that \(\Lambda_{\theta}\) will preserve the quotient norms.
**Corollary 4.5**.: _If \(S_{\theta}(1)=n\) and \(\|S_{\theta}\|=|n|\), then \(\frac{1}{n}S_{\theta}\) is a unital contraction, hence positive. In particular, if \(\theta(\mathbb{T})=\mathbb{T}\) and \(\theta|_{\mathbb{T}}\) is an injection, then either \(\Lambda_{\theta}\) or \(-\Lambda_{\theta}\) is unital and positive._
Proof.: The first part follows from the above lemma. If \(\theta\) is an injection with \(\theta(\mathbb{T})=\mathbb{T}\), we have that \(\Lambda_{\theta}(\hat{1})\in\{\hat{1},\widehat{-1}\}\) and that \(\Lambda_{\theta}\) is contractive, so one of \(\Lambda_{\theta}\) or \(-\Lambda_{\theta}\) is a unital contraction, hence positive by part (1) of Proposition 4.1.
**Theorem 4.6**.: _Suppose that \(\theta:U^{0}(A)\to U^{0}(B)\) is a contractive injection such that \(\theta(\mathbb{T})=\mathbb{T}\). Then there is a continuous affine map \(T_{\theta}:T(B)\to T(A)\)._
Proof.: This follows from the fact that the induced map \(\Lambda_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) will have the property that \(\Lambda_{\theta}\) or \(-\Lambda_{\theta}\) will be a unital positive map. Therefore by contravariant identification of compact convex sets (of locally convex Hausdorff linear spaces) with the state space of the space of continuous real-valued affine valued functions on them ([11, Chapter 7]), there exists a continuous affine map \(T_{\theta}:T(B)\to T(A)\).
**Theorem 4.7**.: _Let \(A,B\) be unital C*-algebras, and \(\theta:U^{0}(A)\to U^{0}(B)\) be a contractive topological group isomorphism such that \(\theta(\mathbb{T})=\mathbb{T}\). Then the map \(T_{\theta}:T(B)\to T(A)\) induced by \(\Lambda_{\theta}\) is an affine homeomorphism._
Proof.: As \(\theta(\mathbb{T})=\mathbb{T}\), \(S_{\theta}(1)\in\{-1,1\}\). Let \(\pm\Lambda_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) be either \(\Lambda_{\theta}\) or \(-\Lambda_{\theta}\), depending on which is unital, positive, contractive and surjective by combining Lemmas 4.4, 4.3 and 3.2(2). By the duality of (compact) simplices and continuous affine functions on them, the map \(T_{\theta}:T(B)\to T(A)\) is an affine homeomorphism.
**Theorem 4.8**.: _Let \(\theta:U(A)\to U(B)\) be a contractive injective homomorphism such that \(\theta(\mathbb{T})=\mathbb{T}\). If_
\[\pi_{i}(U(A))\simeq K_{i-1}(A)\text{ and }\pi_{i}(U(B))\simeq K_{i-1}(B)\text{ for }i=0,1, \tag{4.6}\]
_via the canonical maps, then there is an induced map_
\[KT_{u}(\theta):KT_{u}(A)\to KT_{u}(B). \tag{4.7}\]
Proof.: Let
* \(\Lambda:=\Lambda_{\theta|_{U^{0}(A)}}:\operatorname{Aff}T(A)\to\operatorname{ Aff}T(B)\),
* \(\theta_{0}:\pi_{1}(U^{0}(A))\to\pi_{1}(U^{0}(B))\) be the map induced on fundamental groups by \(\theta|_{U^{0}(A)}\),
* \(K_{0}(\theta)\) be the map induced on \(K_{0}\) by \(\theta_{0}\) together with (4.6) for \(i=1\),
* \(\theta_{1}:\pi_{0}(U(A))\to\pi_{0}(U(B))\) be the map induced by \(\theta\) on connected components (so that \(\theta_{1}([u]_{\sim_{h}})=[\theta(u)]_{\sim_{h}}\)) and
* \(K_{1}(\theta)\) be the map induced by \(\theta_{1}\) together with (4.6) for \(i=0\).
Then
\[(\pm K_{0}(\theta),K_{1}(\theta),\pm\Lambda):KT_{u}(A)\to KT_{u}(B) \tag{4.8}\]
is a \(KT_{u}\)-morphism, where \(\pm\Lambda\) is either \(\Lambda\) or \(-\Lambda\) depending on which one is unital and positive, and \(\pm K_{0}(\theta)\) is either \(K_{0}(\theta)\) if \(\Lambda\) is positive or \(-K_{0}(\theta)\) if \(-\Lambda\) is positive. Indeed, \(\pm K_{0}(\theta),\theta_{1},\pm\Lambda\) are all appropriate morphisms, and we
have that
(4.9)
commutes5 by Corollary 3.6.
Footnote 5: The map \(-K_{0}(\theta)\) will take a piece-wise smooth loop \(\xi\) to the loop \(-\theta\circ\xi\) defined by \((-\theta\circ\xi)(t)=\theta(\xi(-t))\). From here it is obvious that the diagram commutes.
**Corollary 4.9**.: _Let \(A,B\) be unital C*-algebras, \(\theta:U^{0}(A)\to U^{0}(B)\) is a contractive topological group isomorphism such that \(\theta(\mathbb{T})=\mathbb{T}\). If_
\[\pi_{i}(U(A))\simeq K_{i-1}(A)\text{ and }\pi_{i}(U(B))\simeq K_{i-1}(B) \text{ for }i=0,1, \tag{4.10}\]
_via the canonical maps, then \(KT_{u}(A)\simeq KT_{u}(B)\)._
Proof.: By Theorem 4.8, we have an induced \(KT_{u}\)-morphism. This map is necessarily an isomorphism since \(\theta\) is.
**Corollary 4.10**.: _Let \(A,B\) be unital C*-algebras which are either \(\mathcal{Z}\)-stable or of stable rank one. Let \(\theta:U(A)\to U(B)\) be a contractive injective homomorphism such that \(\theta(\mathbb{T})=\mathbb{T}\). Then there is an induced map_
\[KT_{u}(\theta):KT_{u}(A)\to KT_{u}(B).\]
Proof.: \(\mathcal{Z}\)-stable and stable rank one C*-algebras satisfy the hypotheses of Theorem 4.8 by [10] and [11] respectively. So the theorem applies.
**Remark 4.11**.: _The strict ordering on \(\operatorname{Aff}T(A)\) is given by \(f\gg g\) if \(f(\tau)>g(\tau)\) for all \(\tau\in T(A)\). If \(A,B\) are unital and \(\theta:U^{0}(A)\to U^{0}(B)\) is a contractive injective homomorphism such that \(\theta(\mathbb{T})=\mathbb{T}\), then \(\pm\Lambda_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) is a unital positive contraction by Lemma 4.3 (again \(\pm\Lambda_{\theta}\) is \(\Lambda_{\theta}\) or \(-\Lambda_{\theta}\) depending on which is positive). We moreover have that_
\[\pm\Lambda_{\theta}(f)\gg\pm\Lambda_{\theta}(g)\iff f\gg g. \tag{4.11}\]
_Indeed, let us show that \(f\gg 0\) if and only if its image is \(\gg 0\). As \(\pm\Lambda_{\theta}\) has the form \(\pm\Lambda_{\theta}(\hat{a})=\widehat{\pm S_{\theta}(a)}\), it suffices to show that if \(\tau(a)>0\) for all \(\tau\in T(A)\), then \(\tau(\pm S_{\theta}(a))>0\) for all \(\tau\in T(B)\). But this is trivial because \(\tau\circ\pm S_{\theta}:A_{sa}\to\mathbb{R}\) extends canonically to a tracial state \(A\to\mathbb{C}\), so evaluating it against \(a\) must give that it is strictly positive._
The above says the following: for certain C*-algebras, we can read off positivity in \(K_{0}\), thinking of it as the fundamental group of the unitary group, from the strict positivity of the pre-determinant applied to the loop. Precisely, a non-zero element \(x\in K_{0}(A)\), where \(A\) is a unital, simple C*-algebra with
strict comparison, is in the positive cone if and only if the corresponding loop \(\xi_{x}\) satisfies \(\tilde{\Delta}_{\tau}(\xi_{x})>0\) for all \(\tau\in T(A)\).
Although the following is known, for example by very strong results in [1, Chapter 6] pertaining to certain prime C*-algebras, we record it here as a corollary of \(K\)-theoretic classification results.
**Corollary 4.12**.: _Let \(A,B\) be unital, separable, simple, nuclear \(\mathcal{Z}\)-stable C*-algebras satisfying the UCT. Then \(A\simeq B\) if and only if there is a contractive isomorphism \(U(A)\simeq U(B)\)._
Proof.: It is clear that two isomorphic C*-algebras have isomorphic unitary groups. On the other hand, if \(U(A)\simeq U(B)\), then since these C*-algebras are \(\mathcal{Z}\)-stable, Corollary 4.9 applies. As \(KT_{u}(\cdot)\) recovers the Elliott invariant, which is a complete invariant for the C*-algebras as in the statement of the theorem (by [1, Corollary D], [1, 10, 11] and the references therein), \(A\simeq B\).
Using the state of the art classification of embeddings [13], there is an enlarged invariant of \(KT_{u}(\cdot)\) which is able to classify morphisms between certain C*-algebras. Any \(KT_{u}\)-morphism automatically has a lift to this larger invariant, and so under the assumption that the \(KT_{u}\)-morphism is faithful (i.e., the map \(T(B)\to T(A)\) induced by the map \(\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\) sends traces on \(B\) to faithful traces on \(A\)), there is a *-homomorphism witnessing the \(KT_{u}\)-morphism. Therefore, as a corollary of their main theorem, we have that for an abundance of C*-algebras, there is a (contractive) embedding of unitary groups if and only if there is an embedding of C*-algebras.
**Corollary 4.13**.: _Let \(A\) be a unital, separable, simple nuclear C*-algebra satisfying the UCT, which is either \(\mathcal{Z}\)-stable or of stable rank one, and \(B\) be a unital, separable, simple, nuclear \(\mathcal{Z}\)-stable C*-algebra. If there is a contractive homomorphism \(\theta:U(A)\to U(B)\) such that \(\theta(\mathbb{T})=\mathbb{T}\), then there is an embedding \(A\hookrightarrow B\)._
Proof.: Assuming such a \(\theta\) exists, it gives rise to a \(KT_{u}\)-morphism
\[KT_{u}(\theta):KT_{u}(A)\to KT_{u}(B) \tag{4.12}\]
by Corollary 4.10. As \(A,B\) are simple, the map \(T_{\theta}:T(B)\to T(A)\) necessarily maps traces on \(B\) to faithful traces on \(A\), and therefore the \(KT_{u}\)-morphism \(KT_{u}(\theta)\) is "faithful". Therefore \(KT_{u}(\theta)\) induces an embedding \(A\hookrightarrow B\) by [13].
## 5. General linear variants
Here we briefly describe some general linear variants of the results above. There are natural analogues of the unitary algebraic \(K\)-theoretic results. In
the presence of a continuous homomorphism \(\theta:GL^{0}(A)\to GL^{0}(B)\), we have corresponding maps
(5.1)
Again, by modding out by \(\ker\Delta^{1}_{A}\) and \(\ker\Delta^{1}_{B}\) respectively instead of closures of derived groups, there is a purely algebraic variant of the above diagram:
(5.2)
We think of \(K_{0}(A)\) as the Grothendieck group of the semigroup of equivalence classes of idempotents, so that \(K_{0}(A)\simeq\pi_{1}(GL^{0}_{\infty}(A))\). We would like to lift the maps on the right of (5.1) and (5.2) to a map
\[A/\overline{[A,A]}\to B/\overline{[B,B]}. \tag{5.3}\]
We'll construct our map by first constructing a map between the Banach spaces of bounded traces and using duality. For a C*-algebra \(C\) denote by \(\mathfrak{T}(C)\) the complex Banach space of bounded tracial functionals. Define \(F_{\theta}:\mathfrak{T}(B)\to\mathfrak{T}(A)\) by
\[F_{\theta}(\tau)(a):=\lim_{r\to 0^{+}}\frac{1}{2\pi ir}\tau(\log\theta(e^{2\pi ira })),\tau\in\mathfrak{T}(B),a\in A. \tag{5.4}\]
**Proposition 5.1**.: \(F_{\theta}(\tau):A\to\mathbb{C}\) _is a well-defined tracial functional and \(F_{\theta}:\mathfrak{T}(B)\to\mathfrak{T}(A)\) is a bounded linear map._
Proof.: We give an outline of the proof.
One can first verify that \(F_{\theta}(\tau)(a)\) is well-defined for every \(a\in A\). To do this, one can use upper semi-continuity of the spectrum to see that \(\theta(e^{2\pi ira})\) is an exponential for small \(r>0\). Using the fact that \(\theta\) is a continuous homomorphism, one can show that the sequence \((\frac{n}{2\pi i}\log\theta(e^{2\pi i\frac{a}{n}}))_{n}\) is eventually constant, from which it follows that \((\frac{1}{2\pi ir}\log\theta(e^{2\pi ira}))_{r\to 0^{+},r\in\mathbb{Q}_{+}}\) is eventually constant, and then it is easy to show that the limit
\[\lim_{r\to 0^{+}}\frac{1}{2\pi ir}\log\theta(e^{2\pi ira}) \tag{5.5}\]
exists. Note that this gives a map \(G_{\theta}:A\to B\) given by
\[G_{\theta}(a):=\lim_{r\to 0^{+}}\frac{1}{2\pi ir}\log\theta(e^{2\pi ira}), \tag{5.6}\]
which can be shown to be bounded and linear. But then it is clear that \(F_{\theta}(\tau)=\tau\circ G_{\theta}:A\to\mathbb{C}\) is a well-defined bounded linear map.
The Banach space \(\mathfrak{T}(A)\) can be isometrically identified with \(\left(A/\overline{[A,A]}\right)^{*}\) with duality \(\langle\tau,[a]\rangle=\tau(a)\). We can use duality to define a map \(\tilde{F}_{\theta}:=F_{\theta}^{*}|_{A/\overline{[A,A]}}:A/\overline{[A,A]} \to B/\overline{[B,B]}\). We note that if \(G_{\theta}\) is the map as in the proof above, we have that
\[\theta(e^{a})=e^{G_{\theta}(a)} \tag{5.7}\]
and that
\[[G_{\theta}(a)]=\tilde{F}_{\theta}([a]). \tag{5.8}\]
**Proposition 5.2**.: _The map \(\tilde{F}_{\theta}\) is a lift of the maps in (5.1) and (5.2)._
Proof.: This is essentially the same proof as Proposition 3.8.
In a similar vein to the unitary case, contractive injections \(GL(A)\to GL(B)\) which send the circle to the circle (or \(\mathbb{C}^{\times}\) to \(\mathbb{C}^{\times}\)) give rise to \(KT_{u}\)-morphisms.
## 6. Final remarks and open questions
We list some alternate ways to go about things. Rather than using Stone's theorem to get a map between self-adjoint elements, one can define from a trace on \(B\) a trace on \(A\) directly. By this we mean that if \(E\) is a Banach space and \(\tau:A_{sa}\to E\) is a bounded trace, the trace can be recovered from
\[\tau(a)=\lim_{r\to 0^{+}}\tau\left(\frac{1}{2\pi ir}\log e^{2\pi ira}\right). \tag{6.1}\]
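To see why (6.1) indeed recovers the trace (a short justification, added here for the reader's convenience): for self-adjoint \(a\) and \(0<r<\frac{1}{2\|a\|}\), the spectrum of \(2\pi ra\) is contained in \((-\pi,\pi)\), so the principal branch of the logarithm gives \(\log e^{2\pi ira}=2\pi ira\) and hence

\[\tau\left(\frac{1}{2\pi ir}\log e^{2\pi ira}\right)=\tau(a)\qquad\text{for all sufficiently small }r>0,\]

so the limit in (6.1) is attained already at finite \(r\).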
One applies the unitary group homomorphism \(\theta\) to the only unitary appearing in (6.1), and we can simply define
\[F(\tau)(a)=\lim_{r\to 0^{+}}\tau\left(\frac{1}{2\pi ir}\log\theta(e^{2\pi ira}) \right). \tag{6.2}\]
Of course one must check that this is a well-defined, bounded, tracial \(\mathbb{R}\)-linear map \(A_{sa}\to E\), which is obvious if one uses Stone's theorem, but it can be done directly without it. Note that it is helpful to reformulate this in terms of \(\tilde{\Delta}_{\tau}\) after picking appropriate paths in \(U^{0}(A)\) and \(U^{0}(B)\).
With this formulation one can define \(F_{\theta}:T_{s}(B)\to T_{s}(A)\) using the above formula, where \(T_{s}(\cdot)\) denotes the set of bounded, self-adjoint, tracial functionals on a C*-algebra, which is canonically isomorphic to the dual of \(\operatorname{Aff}T(A)\simeq A_{sa}/A_{0}\). One can then use duality to define a map \(\tilde{F}_{\theta}:\operatorname{Aff}T(A)\to\operatorname{Aff}T(B)\), similar to how it was done in Section 5. It can be shown that \(\tilde{F}_{\theta}=\Lambda_{\theta}\).
Now to list some open problems.
1. There are classes where topological isomorphisms between \(U(A)\) and \(U(B)\) (or even \(U^{0}(A)\) and \(U^{0}(B)\)) come from *-isomorphisms or anti-*-isomorphisms. For example, if \(A,B\) are prime traceless C*-algebras containing full square zero elements, this is true by [10]. If \(A\) is a unital, separable, nuclear C*-algebra satisfying the UCT and \(B\) is a unital simple separable nuclear \(\mathcal{Z}\)-stable C*-algebra, then unital embeddings \(A\hookrightarrow B\) are classified by an invariant \(\underline{KT}_{u}(\cdot)\) which is an enlargement of \(KT_{u}\)[12]. Thus any isometric unitary group homomorphism \(U(A)\to U(B)\) will give a \(KT_{u}\)-morphism \(KT_{u}(\theta)\) and therefore there will be an embedding \(\phi:A\hookrightarrow B\) such that \(KT_{u}(\phi)=KT_{u}(\theta)\). However it is not clear that \(\phi\) satisfies \(\phi|_{U(A)}=\theta\). More generally though - in the tracial setting - are there continuous group homomorphisms which do not have lifts to *-homomorphisms or anti-*-homomorphisms? Note that in [1, Chapter 6], Lie isomorphisms between certain C*-algebras are shown to be the sum of a Jordan *-isomorphism and a center-valued trace. Is there a result for general (injective) Lie homomorphisms between certain classes of C*-algebras?
2. This enlargement of \(KT_{u}\) discussed in [12] contains \(K\)-theory with coefficients (along with appropriate pairing maps - the Bockstein maps discussed in [13]). So we ask: do continuous group homomorphisms induce maps between \(K\)-theory with coefficients?
3. For a general continuous homomorphism \(\theta:U^{0}(A)\to U^{0}(B)\), does the norm \(\|S_{\theta}\|\) determine a Lipschitz constant for \(\theta\)? We clearly have that (6.3) \[\|S_{\theta}\|\leq\inf\{K\mid\theta\text{ is $K$-Lipschitz}\}\] by Lemma 4.4. Is this an equality?
4. For \(A\) simple (or prime), is it true that any continuous injective homomorphism \(\theta:U^{0}(A)\to U^{0}(B)\) is isometric? Contractive? What if \(B\) is simple (or prime)?
|
2307.12647 | Super narrow peaks in excitation spectrum of alkali spin polarization:
non-adiabatic case of spin dynamics | We theoretically describe the phenomenon of non-adiabatic spin dynamics,
which occurs in a gas cell filled with alkali vapor in the presence of a strong
alternating magnetic field and pump light. A steep increase of the spin
polarization occurs when the frequency of the magnetic field equals a certain
value. Notably, the observable effect relies on a periodic field that
consists of two perpendicular components defined by harmonics with the same
amplitudes and different frequencies. The considered spin effect cannot be
explained by an ordinary resonance, because the intrinsic Larmor frequency of spin precession
is absent without a constant component of the magnetic field. Moreover, there are
clearly visible peaks in the excitation spectrum of spin polarization, and
they are super narrow in comparison to the relaxation rate. A detailed analysis
based on the proposed quantum model explains the effect via
qualitative properties of non-adiabatic dynamics of atomic spin. | E. N. Popov, A. A. Gaidash, A. V. Kozubov, S. P. Voskoboynikov | 2023-07-24T09:39:07Z | http://arxiv.org/abs/2307.12647v2 | # Unusual spin effect in alkali vapor induced by two orthogonal multiple harmonics of magnetic field
###### Abstract
In this paper, we describe unusual low-frequency magnetic resonances in alkali vapor with oriented atomic spins within the framework of the density matrix formalism. The distinctive feature of the resonance is the absence of a constant component in the external magnetic field. To explain the steep increase of the spin orientation at certain frequencies, we define special closed atomic spin trajectories governed by the periodic magnetic perturbation. Any closed trajectory is characterized by the frequency of the spin motion. The resonance effect is verified numerically in the paper, and the corresponding trajectories can be observed in an alkali vapor via optical excitation. Surprisingly, the width of the resonance line is found to be narrower than one may expect.
## I Introduction
The phenomenon of electron spin resonance (ESR) was first observed at Kazan State University by Evgenii Zavoisky in 1944 [1]. Since then, ESR has been widely studied, and it is now one of the most important methods of spectroscopy and metrology [2; 3]. Here we discuss ESR in a gas cell with polarized alkali vapor, which can be described as a hot atomic ensemble. Since the mean path of alkali atoms without depolarization is long enough, the linewidths of spin and optical resonances in the vapor are narrow compared to liquid media or crystals. Magnetization in a gas cell is induced by optical excitation of the alkali vapor [4; 5; 6]. Alkali spins can be oriented and then detected by light which excites an atomic resonance transition. Based on this optical excitation scheme, ESR in a sodium vapor was proposed in 1957 by Dehmelt [7] and later demonstrated experimentally by Bell and Bloom [8]. In early ESR experiments with alkali vapor, the optical excitation was implemented with spectral lamps. Later, the spectral lamps were superseded by lasers as more efficient and compact sources [9].
The distinct character of the magnetic resonance observed in a gas cell has provided numerous advantages for practical applications. Furthermore, the extensive development of laser technologies [10; 11; 12; 13; 14; 15; 16; 17; 18] has led to progress in the optical control of the atomic state in a gas cell. An essential aspect of the study is the depolarization induced by collisions of alkali atoms in the gas cell. This effect determines the shape of the magnetic resonance curve. The cornerstone of the depolarization is population mixing among the Zeeman sub-levels [19; 20; 21; 22; 23; 24; 25; 26; 27]. Recent progress in coating technology allows the lifetime of the atomic spin orientation in a gas cell to be increased up to several minutes [28; 29; 30]. Slow relaxation (on the order of minutes) leads to extreme ESR line narrowing, which allows a gas cell to be used as the sensitive element of precision magnetometers. The study of magnetic resonance is topical, especially for magnetometry [31; 32; 33; 34; 35; 36; 37; 38; 39]. It should be noted that the resonance scheme can also be applied to nuclear spins. A spin orientation can be transferred from the alkali electrons to noble gas nuclei in the cell via long collisions. This effect is known as "spin exchange" [40; 41; 42; 43; 44; 45; 46].
In this paper, we theoretically describe an unusual spin effect occurring in alkali vapor under an applied magnetic field without a constant component. Mathematically, the field is periodic and the temporal mean of the magnetic field vector is equal to zero:
\[\mathbf{B}(t+T)=\mathbf{B}(t) \tag{1}\]
\[\frac{\Omega}{2\pi}\int\limits_{T}\mathbf{B}(t)\mathrm{d}t=0,\qquad\Omega= \frac{2\pi}{T}, \tag{2}\]
where \(T\) is the period and \(\Omega\) is the repetition frequency. The alternating magnetic field \(\mathbf{B}(t)\) governs the dynamics of the alkali spins, producing a periodic perturbation.
The observed phenomenon entails a strong response of the alkali spins to a specific temporal profile of the alternating magnetic field (1). Moreover, the spin orientation steeply increases when the frequency \(\Omega\) is equal to certain peculiar values. The latter property is the most interesting one, since there is no ordinary precession of the alkali spins at their own Larmor frequency. We call the effect _a resonance_ due to the existence of peaks in the frequency dependence. However, it is not the classical ESR case, where the resonance frequency is defined by the magnitude of the DC field. Instead, every resonance frequency is defined by the amplitude of the alternating magnetic field and its periodic temporal profile.
Alkali spins in the presence of the external field (1) constitute a linear dynamical system with behavior that cannot be described by simple analytical laws. Numerical calculations allow finding the resonance frequencies but do not explain their nature. Mathematically, we could describe the effect as a parametric resonance [47]. However, there is a problem: the system has no internal parameters that would determine an eigenfrequency. Moreover, the resonance depends on the temporal profile of the periodic perturbation even for a fixed repetition frequency. Therefore, we consider the resonance as a topological effect and explain it by closed trajectories in the phase space of the spin orientation components.
Similar resonances in alkali vapor have been considered in only a few works. The closest research is the experiment performed in [48]. The authors demonstrate a magnetometer in which the external perturbation is formed by three orthogonal radio-frequency fields with different frequencies; a parametric resonance was indeed induced in the experiment. In comparison to the quasi-adiabatic classical theory provided in [48], we present a quantum theory of the dynamics of the oriented spins. Furthermore, we provide a detailed explanation of the resonance effect. With the proposed approach, we have found some curiously narrow resonance lines in the frequency dependence of the spin orientation.
## II Optical scheme for excitation and detection of the oriented spins
In this section, we suggest an optical scheme for the observation of the unusual spin resonance, shown in Figure 1. A gas cell is filled with a vapor of non-zero-spin atoms, which are sensitive to an external field. Inside the cell, the spins are collectively oriented and scanned by two lasers. The resonance dynamics of the oriented spins is observed under an alternating homogeneous magnetic field produced by Helmholtz coils around the cell. The three key parts of the optical scheme are discussed below.
### Gas cell
The gas cell contains a vapor of \({}^{87}\)Rb and a mix of inert gases. The gas mix consists of diatomic nitrogen, which reduces alkali fluorescence, and some noble monatomic gases as a buffer. For the sake of simplicity, we further call the vapor of \({}^{87}\)Rb just _alkali_ and the mix of inert gases just _buffer_. It is important that non-excited alkali atoms can populate two different hyperfine levels. Therefore, to denote an alkali atom at the ground hyperfine level with total angular momentum \(F\), we use the short term \(L_{F}\)_-atom_.
The temperature within the cell is about \(80^{\circ}\,\mathrm{C}\). Under these conditions, the concentration of the alkali is 5-6 orders of magnitude lower than the concentration of the buffer. Since the electron cloud of the alkali atom has an enlarged radius and an anisotropic form, multiple collisions between an excited \({}^{87}\)Rb atom and N\({}_{2}\) molecules lead to rapid non-radiative transitions from the upper level to one of the two ground levels of the d1-line (Fig. 1). Moreover, since the buffer environment restricts the free motion of the alkali atoms, the cell surface has only a weak influence, which essentially increases the lifetime of the spin orientation. These properties of the gas cell make a complicated collective spin motion in an external magnetic field feasible.
### Pumping of alkali spin orientation
Circularly polarized light induces a spin orientation directed along the path of the light propagation. As shown in Figure 1, the _pump light_ propagates along the Z-axis and drives the resonance transition between the levels with total angular momentum \(F=1\) and \(F^{\prime}=2\). Full data on the d1 line of \({}^{87}\)Rb can be found in [49]. In the presence of optical excitation, alkali atoms are pumped to the upper levels with a one-sided change of the angular momentum projection along the path of the light propagation. Therefore, the alkali atoms accumulate a non-zero angular momentum directed along the Z-axis.
In the optical scheme, the spin orientations of L\({}_{1}\)- and L\({}_{2}\)-atoms are achieved through _depopulation_ and _repopulation_, respectively. Both processes create a non-equilibrium population among the Zeeman sub-levels belonging to the ground levels of the d1 line. The nature of depopulation is a selective depletion of certain sub-levels by the circularly polarized light. It should be noted that the repopulation process is more efficient than depopulation. As presented in [50], repopulation is based on the feature
Figure 1: _The general scheme for studying an electron spin resonance in vapor of alkali. An inserted picture at the right-bottom corner demonstrates transitions inside the d1-line of \({}^{87}\)Rb._
of collisional decay that the nuclear spin state is not destroyed during the transition of an excited alkali atom from the upper to a ground level. The preserved nuclear spin produces a spin orientation of the L\({}_{2}\)-atoms.
### Scanning of alkali spin orientation
An atomic ensemble with oriented spins is circularly birefringent due to the induced optical anisotropy. The magnitude of the circular birefringence is determined by the projection of the spin orientation along the path of light propagation. Since the refractive indices for the orthogonal circular components of the passing light are different, the linear polarization plane rotates around the path of light propagation. To observe the latter, a detection scheme with a polarization beam splitter can be implemented (see PBS in Fig. 1).
Mathematically, the angle of the optical rotation is proportional to the expectation value of the full angular momentum:
\[\Delta\psi(t)\propto\langle\hat{F}_{x}\rangle_{Rb}, \tag{3}\]
where \(\Delta\psi(t)\) is the measurable angular rotation of the polarization plane and \(\hat{F}_{x}\) is the X-component of the total angular momentum operator. The X-axis is directed along the _scanning light_ in Figure 1. Therefore, by selecting the scanning light direction, we can measure any spin orientation component of the alkali vapor in the gas cell.
The spectral linewidth of the scanning laser should be narrow enough to resolve the hyperfine structure. This condition is necessary for a selective measurement of the spin orientation produced by L\({}_{1}\)- or L\({}_{2}\)-atoms. As shown in Figure 1, the scanning light frequency should be close to the transition between the hyperfine levels with total angular momentum \(F=2\) and \(F^{\prime}=2\). In this case, L\({}_{2}\)-atoms affect the polarization of the scanning light much more strongly than L\({}_{1}\)-atoms. At the same time, the laser frequency should be detuned from the optical resonance to avoid an excessive depletion of the ground level \(F=2\). Since the \(F=1\) level is broadened by the pump light, the scanning of the spin orientation of L\({}_{2}\)-atoms is more effective than that of L\({}_{1}\)-atoms.
## III Magnetic resonance and closed spin loops
In this section, we discuss the nature of the observed resonance phenomenon. To excite a magnetic resonance in the gas cell, the ensemble of alkali spins is perturbed by two harmonic magnetic fields with different frequencies:
\[\mathbf{B}(t)=B_{0}\mathbf{l_{z}}\sin\left(a\Omega t\right)-B_{0}\mathbf{l_{x} }\cos\left(b\Omega t\right), \tag{4}\]
where \(B_{0}\) is the amplitude of the magnetic field, \(\Omega\) is the repetition frequency, and \(a\) and \(b\) are arbitrary coefficients in the general case. In this paper we consider \(a=1\) and \(b=2\); the generalization is beyond our scope. To avoid spin dephasing due to alkali diffusion, the magnetic field should be homogeneous. For instance, the magnetic field can be generated by Helmholtz coils, see Fig. 1. The geomagnetic field can be removed by a compensation current or a magnetic shield.
According to equation (4), the constant component and the temporal mean of the magnetic field are equal to zero. Therefore, the alkali spins are not subject to a continuous precession with a non-zero average angular velocity.
In our work, we utilize two orthogonal components of an external magnetic field instead of three as in [48]. Another crucial difference from [48] is that we consider non-adiabatic dynamics. To achieve the latter, the following inequalities should be satisfied:
\[\gamma B_{0}\gg\tau^{-1},\qquad\tau\gg\Omega^{-1}, \tag{5}\]
where \(\gamma\) is the gyromagnetic ratio and \(\tau\) is the lifetime of the spin orientation. The feature of the non-adiabatic case is frequent flips of the alkali spin orientation due to the strong magnetic field. Since the lifetime \(\tau\) is longer than the perturbation period, the alkali spins are able to repeat a similar periodic motion until total dephasing. The latter property of the non-adiabatic case is beneficial: it allows us to observe a closed trajectory of the spin motion in an alkali vapor and to propose an explanation for the existence of the resonance.
### Spin loops
To describe the spin dynamics in the simplest form, we consider the Pauli equation with magnetic field defined by the formula (4):
\[i\hbar\dot{\varphi}=\gamma\left(\hat{\sigma}\cdot\mathbf{B}(t)\right)\varphi, \tag{6}\]
\[\hat{\sigma}=\frac{\hbar}{2}\left(\hat{\sigma}_{x}\mathbf{x}+\hat{\sigma}_{y} \mathbf{y}+\hat{\sigma}_{z}\mathbf{z}\right), \tag{7}\]
where \(\varphi\) is the spinor, \(\gamma\) is the gyromagnetic ratio of a system, \(\hat{\sigma}_{\alpha}\) are Pauli matrices.
The solution of equation (6) is a trajectory in the Hilbert space of spinor states. We investigate only three components \(S_{\alpha}\) derived from the solution, which can be measured by the scanning light as described above:
\[S_{\alpha}(t)=\left(\varphi^{\dagger}\hat{\sigma}_{\alpha}\varphi\right), \qquad\alpha\in\left\{x,y,z\right\}, \tag{8}\]
\[\mathbf{S}(t)=S_{x}(t)\,\mathbf{x}+S_{y}(t)\,\mathbf{y}+S_{z}(t)\,\mathbf{z}. \tag{9}\]
The vector \(\mathbf{S}(t)\) describes the spin orientation of the dynamical system. Moreover, a point with coordinates \(S_{\alpha}(t)\) traces a path on the 3D sphere, which we call the _spin orientation trajectory_.
In general, spin orientation trajectories are not closed, i.e. the following condition is not satisfied:
\[\mathbf{S}_{loop}(t+T)=\mathbf{S}_{loop}(t),\qquad T=\frac{2\pi}{\Omega}. \tag{10}\]
However, it turns out that certain initial parameters lead to unique closed trajectories. Hereafter, we call them _spin loops_. The spin loops so defined can be self-intersecting.
It should be noted that the Pauli equation (6) does not take relaxation and excitation into account. If we introduced these processes here, the equilibrium state would lie on the Z-axis close to the origin of the phase space of spin orientation components. Therefore, the spin orientation trajectory would be relocated from the sphere to the inner region.
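As an illustration, the spinor dynamics defined by Eqs. (6)-(9) under the field (4) can be integrated numerically. The sketch below (not the code used in this work) propagates the spinor over one period with a fourth-order Runge-Kutta scheme and reports the closure error of Eq. (10) and the temporal mean of Eq. (11). The working units \(\gamma B_{0}=1\), the step count, and the choice \(\Omega=0.175\,\gamma B_{0}\) with initial spinor \((0,1)\) (cf. Table 1) are illustrative assumptions.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma_B0 = 1.0                 # work in units where gamma * B0 = 1 (assumption)
Omega = 0.175 * gamma_B0       # e.g. the "Ring" frequency from Table 1
a, b = 1, 2                    # harmonics of the field, Eq. (4)

def hamiltonian(t):
    """gamma * (sigma_hat . B(t)) with hbar = 1; B(t) as in Eq. (4)."""
    bz = np.sin(a * Omega * t)
    bx = -np.cos(b * Omega * t)
    return 0.5 * gamma_B0 * (bx * sx + bz * sz)

def rk4_step(phi, t, dt):
    f = lambda t, p: -1j * hamiltonian(t) @ p     # Pauli equation (6)
    k1 = f(t, phi)
    k2 = f(t + dt / 2, phi + dt / 2 * k1)
    k3 = f(t + dt / 2, phi + dt / 2 * k2)
    k4 = f(t + dt, phi + dt * k3)
    return phi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

T = 2 * np.pi / Omega
n_steps = 4000
dt = T / n_steps
phi = np.array([0.0, 1.0], dtype=complex)         # initial spinor, cf. Table 1

S = np.empty((n_steps + 1, 3))                    # spin orientation components, Eq. (8)
for n in range(n_steps + 1):
    S[n] = [np.real(phi.conj() @ m @ phi) for m in (sx, sy, sz)]
    if n < n_steps:
        phi = rk4_step(phi, n * dt, dt)

closure_error = np.linalg.norm(S[-1] - S[0])      # distance from a closed loop, Eq. (10)
S_mean = S.mean(axis=0)                           # temporal mean of the trajectory, Eq. (11)
print(f"closure error = {closure_error:.3f}, mean S = {np.round(S_mean, 3)}")
```

Since relaxation is absent here, \(|\mathbf{S}(t)|=1\) and the trajectory stays on the unit sphere.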
### Resonance hypothesis
We suppose that the alkali spins are oriented as efficiently as possible if the spin orientation trajectory coincides with a spin loop and the temporal mean of \(\mathbf{S}\) is directed along the path of pump light propagation, as follows:
\[\mathcal{S}=\frac{1}{T}\oint_{T}\mathbf{S}_{loop}(t)\mathrm{d}t,\qquad \mathcal{S}\uparrow\mathbf{z}. \tag{11}\]
Since the latter condition is satisfied at certain discrete frequencies \(\Omega\) of the alternating magnetic field (4), we call the effect _a resonance_. The hypothesis is based on the idea that circularly polarized light can produce an alkali quantum state with an angular momentum directed only along the path of the light propagation. Furthermore, to keep the expectation value of the angular momentum large enough to overcome dephasing, the alkali spins should move along the closed trajectory (10). Exploring the condition (11), we found a number of spin loops. However, because the non-adiabatic conditions (5) must be satisfied, most of the spin loops are not relevant. Therefore, we consider only the four spin loops (see Table 1 for details) that arise at the highest frequencies \(\Omega\).
Despite its simplicity, the Pauli equation allows us to claim a correspondence between certain spin loops and the resonance peaks obtained further in Section V. The spin loops from Table 1 are drawn in Figure 2, where the spin orientation of the alkali moves due to the periodic perturbation by the field (4).
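A brute-force search for such resonant frequencies can be sketched along the same lines: the spinor is propagated over one period for a grid of frequencies, and a frequency is kept as a candidate if the trajectory closes, Eq. (10), while its temporal mean (11) points along the Z-axis. The grid, the tolerances, and the fixed initial spinor below are illustrative assumptions and not the procedure used to produce Table 1; the integrator is repeated so that the snippet is self-contained.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma_B0, a, b = 1.0, 1, 2                         # units with gamma * B0 = 1 (assumption)

def loop(Omega, phi0, n_steps=2000):
    """Spin orientation S(t) over one period of the field (4), starting from phi0."""
    dt = 2 * np.pi / Omega / n_steps
    phi = phi0.astype(complex)
    def f(t, p):
        h = 0.5 * gamma_B0 * (-np.cos(b * Omega * t) * sx + np.sin(a * Omega * t) * sz)
        return -1j * h @ p                          # Pauli equation (6)
    S = np.empty((n_steps + 1, 3))
    for n in range(n_steps + 1):
        S[n] = [np.real(phi.conj() @ m @ phi) for m in (sx, sy, sz)]
        if n < n_steps:
            t = n * dt
            k1 = f(t, phi); k2 = f(t + dt / 2, phi + dt / 2 * k1)
            k3 = f(t + dt / 2, phi + dt / 2 * k2); k4 = f(t + dt, phi + dt * k3)
            phi = phi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return S

phi0 = np.array([0.0, 1.0])                         # initial spinor, cf. Table 1
for Omega in np.linspace(0.05, 0.30, 251):          # frequency grid (assumed)
    S = loop(Omega, phi0)
    mean = S.mean(axis=0)                           # Eq. (11)
    closed = np.linalg.norm(S[-1] - S[0]) < 1e-2    # Eq. (10), loose tolerance
    along_z = np.hypot(mean[0], mean[1]) < 1e-2 and mean[2] > 0
    if closed and along_z:
        print(f"candidate spin loop: Omega/(gamma B0) = {Omega:.3f}, (S.z) = {mean[2]:.3f}")
```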
## IV Quantum model of alkali dynamics
To verify the hypothesis about the interconnection between the defined spin loops and the magnetic resonance, we propose a quantum model for the optical scheme in Fig. 1. The dynamics of the alkali spin orientation can be predicted for any frequency \(\Omega\) of the magnetic field (4). The model takes into account several processes, which are listed in Table 2.
The rates of the relaxation processes (I, II, III, VI) are estimated from the buffer concentration, the temperature, and the size and coating of the gas cell. The magnitudes of the perturbations (IV and V) are determined by the amplitude of the pump light and the amplitude of the external magnetic field. The inverse lifetime determines the number of atomic state changes per second for each described process.
The ensemble of alkali atoms is described by a density matrix whose dimension is equal to the number of Zeeman sub-levels; there are 32 sub-levels in the d1 line of \({}^{87}\)Rb. The master equation for the density matrix \(\hat{\rho}\) is as follows:
\begin{table}
\begin{tabular}{c c c c} Name of resonance & Initial spinor \((\varphi_{+},\varphi_{-})\) & \(\Omega/\gamma B_{0}\) & \((\mathcal{S}\cdot\mathbf{z})\) \\ \hline Ball & \((1,0)\) & 0.099 & 0.125 \\ Flower & \((0,1)\) & 0.126 & 0.162 \\ Ring & \((0,1)\) & 0.175 & 0.394 \\ Knot & \((0,1)\) & 0.259 & 0.117 \\ \end{tabular}
\end{table}
Table 1: _Four spin loops at the highest frequencies and with temporal mean directed along Z-axis._
Figure 2: _Depiction of the spin loops from Table 1. Trajectories are drawn in the space of spin orientation components, \(S_{\alpha}\). The black arrow defines the initial state of the spinor, which is determined by the phases of the magnetic harmonics in expression (4)._
\[\begin{split}& i\hbar\frac{\mathrm{d}\hat{\rho}}{\mathrm{d}t}=\left[ \hat{H},\hat{\rho}\right]-\mathcal{V}_{dcy}\left(\hat{\rho}-\mathcal{D}\left\{ \hat{\rho}\right\}\right)-\\ &-\mathcal{V}_{dec}\left(\hat{\rho}-\mathcal{R}\left\{\hat{\rho} \right\}\right)-\mathcal{V}_{mix}\left(\hat{\rho}-\hat{\rho}_{0}\right), \end{split} \tag{12}\]
where \(\mathcal{D}\) and \(\mathcal{R}\) are two relaxation superoperators. The former describes process I from Table 2: it sets to zero the off-diagonal elements whose phases rotate at an optical frequency. The latter describes process II from Table 2: it decomposes the full density matrix \(\hat{\rho}\) into a tensor product of electron and nuclear density matrices. Note that the electron density matrix is reduced to the equilibrium ground state, while the spin orientation of the nuclear density matrix is preserved [50; 25]. The last term in expression (12) describes process VI from Table 2. The matrix \(\hat{\rho}_{0}\) corresponds to the state of thermodynamic equilibrium with a mixed population among the Zeeman sub-levels. The symbol \(\mathcal{V}_{dec}\) denotes the frequency of elastic collisions between an alkali atom and buffer atoms, \(\mathcal{V}_{dcy}\) is the frequency of strong collisions between an excited alkali atom and buffer atoms, which result in decay to the ground levels, and \(\mathcal{V}_{mix}\) is the relaxation rate of the alkali spin orientation.
The operator \(\hat{H}\) comprises the unperturbed Hamiltonian \(\hat{H}_{0}\) and an operator of interaction \(\hat{V}\):
\[\hat{H}=\hat{H}_{0}+\hat{V},\qquad\hat{V}=\hat{V}_{E}+\hat{V}_{B}, \tag{13}\]
\[\hat{V}_{E}=-\left(\mathbf{\hat{d}}\cdot\mathbf{E}\right),\qquad\hat{V}_{B}= \sum_{n=1}^{2}g_{n}\gamma_{e}\left(\mathbf{\hat{F}_{n}\cdot B}\right), \tag{14}\]
\[\mathbf{E}=\frac{\mathcal{E}}{2}\,\mathbf{1}_{+}e^{i(kz-\omega t)}+c.c., \qquad k=\frac{\omega}{c}, \tag{15}\]
where \(\mathcal{E}\) is the constant amplitude of the pump light, \(\mathbf{1}_{+}\) is the unit vector of circular polarization, \(\omega\) is the frequency of the pump light, the dipole operator \(\mathbf{\hat{d}}\) describes all optical transitions between Zeeman sub-levels in the d1 line of \({}^{87}\)Rb [49], \(\gamma_{e}\) is the electron gyromagnetic ratio, \(g_{n}\) is the g-factor of the \(n\)-th ground hyperfine level of \({}^{87}\)Rb, \(\mathbf{\hat{F}_{1}}\) and \(\mathbf{\hat{F}_{2}}\) are the total angular momentum operators of the L\({}_{1}\)-atom and L\({}_{2}\)-atom respectively, and the magnetic field \(\mathbf{B}\) is defined by (4). The interactions \(\hat{V}_{E}\) and \(\hat{V}_{B}\) describe processes IV and V from Table 2, respectively.
## V Calculation and discussion
Now we can implement the quantum model and study the behavior of the alkali vapor after the transition to the steady dynamics. According to the resonance hypothesis, we should observe the strongest deviation from the equilibrium state when the varied frequency \(\Omega\) of the magnetic field (4) approaches the values from Table 1.
Mathematically, formula (12) is a linear system of non-homogeneous differential equations with variable coefficients. To observe the resonance, let us consider the dependence of the spin orientation on the frequency \(\Omega\). Since the spin motion repeats in time, the results should be obtained from the last period of the steady dynamics.
Under optical pumping, alkali atoms populate both the ground level \(F=1\) and the ground level \(F=2\). As noted in the first section, we scan the spin orientation of the L\({}_{2}\)-atoms. In the quantum model, this information is contained in a reduced density matrix related to the subspace of L\({}_{2}\)-atoms, whose dimension \(2F+1\) is defined by the number of Zeeman sub-levels:
\[\hat{\rho}_{[2]}=\hat{P}_{2}\hat{\rho}\hat{P}_{2}, \tag{16}\]
where \(\hat{P}_{2}\) is the operator of projection to the ground hyperfine level with a total angular momentum \(F=2\). Diagonal and non-diagonal elements describe the population of L\({}_{2}\)-atoms' Zeeman sub-levels and low-frequency coherence between them, respectively.
We define the components of the spin orientation that are parallel and orthogonal to the path of pump light propagation as _longitudinal_ and _transverse_, respectively. The longitudinal component is associated with the inhomogeneous distribution of the diagonal elements. The magnitude and the direction of the transverse component are determined by the absolute values and phases of the complex off-diagonal elements of \(\hat{\rho}_{[2]}\). Similarly to the classical approach in (6-9), the spin orientation of the L\({}_{2}\)-atoms can be described by the following vector in the quantum model:
\[S_{[2],\alpha}=\mathrm{Tr}\left\{\,\hat{\rho}_{[2]}\hat{\Sigma}_{\alpha}\, \right\},\qquad\alpha\in\{x,y,z\}, \tag{17}\]
\[\mathbf{S}_{[2]}=S_{[2],x}\,\mathbf{x}+S_{[2],y}\,\mathbf{y}+S_{[2],z}\, \mathbf{z}, \tag{18}\]
\begin{table}
\begin{tabular}{l l l} \(\mathcal{N}\) & \(\mathcal{V},\;\mathrm{s}^{-1}\) & Process / Origin \\ \hline I & \(10^{9}-10^{10}\) & Decoherence of electric dipole oscillations on optical transitions / alkali-buffer elastic collisions \\ \hline II & \(10^{8}-10^{9}\) & Decay without fluorescence / non-elastic collisions between excited alkali and buffer atoms \\ \hline III & \(10^{8}-10^{9}\) & Inhomogeneous broadening of the d1 line / Doppler effect \\ \hline IV & \(10^{4}-10^{5}\) & Excitation of alkali atoms / absorption of circularly polarized pump light (laser) \\ \hline V & \(10^{4}-10^{5}\) & Motion of alkali spins / fast precession under external alternating magnetic field \\ \hline VI & \(10^{2}-10^{3}\) & Mixing of population of ground hyperfine levels / spin exchange and collisions with walls of the cell \\ \end{tabular}
\end{table}
Table 2: _Processes in the gas cell and their origin. An inverse life-time or a process rate is denoted by symbol \(\mathcal{V}\)._
where \(\hat{\Sigma}_{\alpha}\) are the analogues of the Pauli matrices for a spin-2 particle. It is important that the external alternating magnetic field completely determines the dynamics of the vector \(\mathbf{S}_{[\mathbf{2}]}\), as well as the behavior of the gyroscopic precession.
Unlike in formulas (8) and (9), the behavior of the spin orientation described by the vector \(\mathbf{S}_{[\mathbf{2}]}\) has a qualitative distinction: it moves along a closed trajectory at any frequency \(\Omega\), due to the existence of the steady dynamics determined by the equilibrium state.
As the next step, we define two frequency dependencies: the first is the range of the transverse component, the second is the temporal mean of the longitudinal component:
\[C_{1}(\Omega)=\mathrm{Range}\left[S_{[2],x}(t)\right], \tag{19}\]
\[C_{2}(\Omega)=\frac{1}{T}\int\limits_{T}S_{[2],z}(t)\,\mathrm{d}t. \tag{20}\]
The convolution \(C_{1}\) describes the diameter of the globular trajectory in the phase space of spin orientation components, and the convolution \(C_{2}\) describes its \(Z\)-offset. Note that \(C_{1}\) and \(C_{2}\) are measurable with the scanning light. Moreover, \(C_{2}\) can be measured via the absorption of the circularly polarized pump light.
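The convolutions (19)-(20) are simple functionals of the steady-state trajectory. While the curves in Fig. 3 are obtained from the full density-matrix model (12), the sketch below evaluates \(C_{1}\) and \(C_{2}\) for a much simpler classical stand-in: a Bloch-type equation with precession in the field (4) and a single relaxation rate towards an equilibrium orientation along the Z-axis. This toy model, its parameter values, and the equilibrium orientation are our own assumptions, used only to illustrate how the convolutions are computed from the last period of the steady dynamics.

```python
import numpy as np

gamma_B0 = 1.0                       # gamma * B0 = 1 (units assumed for illustration)
tau = 200.0                          # relaxation time; gamma * B0 * tau >> 1, cf. Eq. (5)
S_eq = np.array([0.0, 0.0, 0.5])     # pumped equilibrium orientation along Z (assumption)
a, b = 1, 2

def field(t, Omega):                 # B(t) / B0 from Eq. (4)
    return np.array([-np.cos(b * Omega * t), 0.0, np.sin(a * Omega * t)])

def steady_period(Omega, n_periods=40, steps=2000):
    """Integrate dS/dt = gamma S x B - (S - S_eq)/tau; return S(t) over the last period."""
    dt, S, last = 2 * np.pi / Omega / steps, S_eq.copy(), []
    f = lambda t, S: gamma_B0 * np.cross(S, field(t, Omega)) - (S - S_eq) / tau
    for n in range(n_periods * steps):
        t = n * dt
        k1 = f(t, S); k2 = f(t + dt / 2, S + dt / 2 * k1)
        k3 = f(t + dt / 2, S + dt / 2 * k2); k4 = f(t + dt, S + dt * k3)
        S = S + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if n >= (n_periods - 1) * steps:
            last.append(S.copy())
    return np.array(last)

for Omega in np.linspace(0.08, 0.30, 45):          # frequency grid (assumed)
    traj = steady_period(Omega)
    C1 = traj[:, 0].max() - traj[:, 0].min()       # range of the transverse component, Eq. (19)
    C2 = traj[:, 2].mean()                         # temporal mean of the longitudinal one, Eq. (20)
    print(f"Omega = {Omega:.3f}:  C1 = {C1:.3f},  C2 = {C2:.3f}")
```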
According to the hypothesis about the interconnection between the existence of spin loops and the emergence of the resonance, one may observe several peaks in Figure 3. From the numerical estimations, two observations follow immediately:
* The ratios \(\Omega_{res}/\left(g_{2}\gamma_{e}B_{0}\right)\) are very close to the corresponding values from Table 1, where \(\Omega_{res}\) denotes the frequencies of the local maxima of the curves.
* The dependence of the peak magnitudes on the parameter \(\left(\mathcal{S}\cdot\mathbf{z}\right)\) from Table 1 is monotonic, but not linear. Also, the highest peak and the maximum value of \(\left(\mathcal{S}\cdot\mathbf{z}\right)\) correspond to the same resonance frequency.
The results confirm the hypothesis that the resonance is explained by the existence of spin loops (10). When the resonance conditions are met, the trajectory of \(\mathbf{S}_{[\mathbf{2}]}\) in the phase space converges to one of the spin loops from Figure 2. If the frequency \(\Omega\) is not equal to a resonance frequency \(\Omega_{res}\), the closed trajectories of \(\mathbf{S}_{[\mathbf{2}]}\) look like an asymmetric tangle with a small diameter. However, their shapes resemble the neighboring spin loops.
Finally, we present a beneficial property of the "ring" resonance:
\[\mathrm{W}\approx 0.5\ kHz\quad<\quad\mathcal{V}_{mix}=1\ kHz, \tag{21}\]
where \(\mathrm{W}\) is the estimated full width at half maximum of the convolution \(C_{2}\), and \(\mathcal{V}_{mix}\) is the spin relaxation rate introduced in the quantum model (12). In contrast, the full width at half maximum of a classical ESR line in alkali vapor should be about four times the relaxation rate. It is curious that the studied resonance is narrower than the classical ESR line. We consider this a potential advantage for applications based on the observed spin effect. We hypothesize that the inequality (21) may be explained by the topological nature of the resonance, which is not based on matching the frequency of an alternating magnetic field to a Larmor frequency.
## Funding
This work was financially supported by the Russian Ministry of Education (Grant No. 2019-0903).
## Acknowledgments
We thank our colleagues A. Kiselev from Laboratory of Quantum Processes and Measurements ITMO, and G. Miroshnichenko from Institute "High School of Engineering" ITMO for fruitful discussions during the research.
Figure 3: _The resonance curve for a spin orientation of alkali atoms at hyperfine level \(F=2\). The convolutions \(C_{n}\) from expressions (19) and (20) describe a trajectory in the Phase Space of spin orientation components (17). The curves are calculated for the following parameters of the quantum model (12): magnetic field amplitude \(B_{0}=27\ \mu T\), mixing rate of Zeeman sub-levels population \(\mathcal{V}_{mix}=1\ kHz\), pump light amplitude \(\mathcal{E}\approx 100\ V/m\), temperature \(80^{\circ}\mathrm{C}\)._ |
2304.14766 | Hyperparameter Optimization through Neural Network Partitioning | Well-tuned hyperparameters are crucial for obtaining good generalization
behavior in neural networks. They can enforce appropriate inductive biases,
regularize the model and improve performance -- especially in the presence of
limited data. In this work, we propose a simple and efficient way for
optimizing hyperparameters inspired by the marginal likelihood, an optimization
objective that requires no validation data. Our method partitions the training
data and a neural network model into $K$ data shards and parameter partitions,
respectively. Each partition is associated with and optimized only on specific
data shards. Combining these partitions into subnetworks allows us to define
the ``out-of-training-sample" loss of a subnetwork, i.e., the loss on data
shards unseen by the subnetwork, as the objective for hyperparameter
optimization. We demonstrate that we can apply this objective to optimize a
variety of different hyperparameters in a single training run while being
significantly computationally cheaper than alternative methods aiming to
optimize the marginal likelihood for neural networks. Lastly, we also focus on
optimizing hyperparameters in federated learning, where retraining and
cross-validation are particularly challenging. | Bruno Mlodozeniec, Matthias Reisser, Christos Louizos | 2023-04-28T11:24:41Z | http://arxiv.org/abs/2304.14766v1 | # Hyperparameter Optimization
###### Abstract
Well-tuned hyperparameters are crucial for obtaining good generalization behavior in neural networks. They can enforce appropriate inductive biases, regularize the model and improve performance -- especially in the presence of limited data. In this work, we propose a simple and efficient way for optimizing hyperparameters inspired by the marginal likelihood, an optimization objective that requires no validation data. Our method partitions the training data and a neural network model into \(K\) data shards and parameter partitions, respectively. Each partition is associated with and optimized only on specific data shards. Combining these partitions into subnetworks allows us to define the "out-of-training-sample" loss of a subnetwork, _i.e._, the loss on data shards unseen by the subnetwork, as the objective for hyperparameter optimization. We demonstrate that we can apply this objective to optimize a variety of different hyperparameters in a single training run while being significantly computationally cheaper than alternative methods aiming to optimize the marginal likelihood for neural networks. Lastly, we also focus on optimizing hyperparameters in federated learning, where retraining and cross-validation are particularly challenging.
## 1 Introduction
Due to their remarkable generalization capabilities, deep neural networks have become the de-facto models for a wide range of complex tasks. Combining large models, large-enough datasets, and sufficient computing capabilities enables researchers to train powerful models through gradient descent. Regardless of the data regime, however, the choice of hyperparameters -- such as neural architecture, data augmentation strategies, regularization, or which optimizer to choose -- plays a crucial role in the final model's generalization capabilities. Hyperparameters allow encoding good inductive biases that effectively constrain the models' hypothesis space (_e.g._, convolutions for vision tasks), speed up learning, or prevent overfitting in the case of limited data. Whereas gradient descent enables the tuning of model parameters, accessing hyperparameter gradients is more complicated.
The traditional and general way to optimize hyperparameters operates as follows; **1)** partition the dataset into training and validation data1, **2)** pick a set of hyperparameters and optimize the model on the training data, **3)** measure the performance of the model on the validation data and finally **4)** use the validation metric as a way to score models or perform search over the space of hyperparameters. This approach inherently requires training multiple models and consequently requires spending resources on models that will be discarded. Furthermore, traditional tuning requires a validation set since optimizing the hyperparameters on the training set alone cannot identify the right inductive biases. A canonical example is data augmentations -- they are not expected to improve training set performance, but they greatly help with generalization. In the low data regime, defining a validation set that cannot be used for tuning model parameters is undesirable. Picking the right amount of validation data is a hyperparameter in itself. The conventional rule of thumb to use \(\sim 10\%\) of all data can result in significant overfitting, as pointed out by Lorraine et al. (2019), when one has a sufficiently large number of hyperparameters to tune. Furthermore, a validation set can be challenging
to obtain in many use cases. An example is Federated Learning (FL) (McMahan et al., 2017), which we specifically consider in our experimental section. In FL, each extra training run (for, _e.g._, a specific hyperparameter setting) comes with additional, non-trivial costs.
Different approaches have been proposed in order to address these challenges. Some schemes optimize hyperparameters during a single training run by making the hyperparameters part of the model (_e.g._, learning dropout rates with concrete dropout (Gal et al., 2017), learning architectures with DARTs (Liu et al., 2018) and learning data-augmentations with schemes as in Benton et al. (2020); van der Wilk et al. (2018)). In cases where the model does not depend on the hyperparameters directly but only indirectly through their effect on the value of the final parameters (through optimization), schemes for differentiating through the training procedures have been proposed, such as Lorraine et al. (2019). Another way of optimizing hyperparameters without a validation set is through the canonical view on model selection (and hence hyperparameter optimization) through the Bayesian lens; the concept of optimizing the _marginal likelihood_. For deep neural networks, however, the marginal likelihood is difficult to compute. Prior works have therefore developed various approximations for its use in deep learning models and used those to optimize hyperparameters in deep learning, such as those of data augmentation (Schwobel et al., 2021; Immer et al., 2022). Still, however, these come at a significant added computational expense and do not scale to larger deep learning problems.
This paper presents a novel approach to hyperparameter optimization, inspired by the marginal likelihood, that only requires a single training run and no validation set. Our method is more scalable than previous works that rely on marginal likelihood and Laplace approximations (which require computing or inverting a Hessian (Immer et al., 2021)) and is broadly applicable to any hierarchical modelling setup.
## 2 Marginal Likelihood and prior work
In Bayesian inference, the rules of probability dictate how any unknown, such as parameters \(\mathbf{w}\) or hyperparameters \(\psi\), should be determined given observed data \(\mathcal{D}\). Let \(p(\mathbf{w})\) be a prior over \(\mathbf{w}\) and \(p(\mathcal{D}|\mathbf{w},\psi)\) be a likelihood for \(\mathcal{D}\) with \(\psi\) being the hyperparameters. We are then interested in the posterior given the data \(p(\mathbf{w}|\mathcal{D},\psi)=p(\mathcal{D}|\mathbf{w},\psi)p(\mathbf{w})/p(\mathcal{D}|\psi)\). The denominator term \(p(\mathcal{D}|\psi)\) is known as the _marginal likelihood_, as it measures the probability of observing the data given \(\psi\), irrespective of the value of \(\mathbf{w}\): \(p(\mathcal{D}|\psi)=\int p(\mathbf{w})p(\mathcal{D}|\mathbf{w},\psi)d\mathbf{w}\).
Marginal likelihood has many desirable properties that make it a good criterion for model selection and hyperparameter optimization. It intuitively implements the essence of Occam's Razor principle (MacKay, 2003, § 28). In the PAC-Bayesian literature, it has been shown that higher marginal likelihood gives tighter frequentist upper bounds on the generalization performance of a given model class (McAllester, 1998; Germain et al., 2016). It also has close links to cross-validation (see section 2.1) and can be computed from the training data alone. However, computation of the marginal likelihood in deep learning models is usually prohibitively expensive and many recent works have proposed schemes to approximate the marginal likelihood for differentiable model selection (Lyle et al., 2020; Immer et al., 2021; 2022; Schwobel et al., 2021).
### "Learning speed" perspective
Lyle et al. (2020); Fong and Holmes (2020) pointed out the correspondence between "learning speed" and marginal likelihood. Namely, the marginal likelihood of the data \(\mathcal{D}\) conditioned on some hyperparameters \(\psi\) can be written as:
\[\log p(\mathcal{D}|\psi)=\sum_{k}\log\mathbb{E}_{p(\mathbf{w}|\mathcal{D}_{1:k-1},\psi)}\left[p(\mathcal{D}_{k}|\mathbf{w},\psi)\right]\geq\sum_{k}\mathbb{E}_{p( \mathbf{w}|\mathcal{D}_{1:k-1},\psi)}\left[\log p(\mathcal{D}_{k}|\mathbf{w},\psi)\right] \tag{1}\]
where \((\mathcal{D}_{1},\ldots,\mathcal{D}_{C})\) is an arbitrary partitioning of the training dataset \(\mathcal{D}\) into \(C\) shards or chunks2, and \(p(\mathbf{w}|\mathcal{D}_{1:k},\psi)\) is the posterior over parameters of a function \(f_{\mathbf{w}}:\mathcal{X}\rightarrow\mathcal{Y}\), from the input domain \(\mathcal{X}\) to the target domain \(\mathcal{Y}\) after seeing data in shards \(1\) through \(k\). The right-hand side can be interpreted as a type of cross-validation in which we fix an ordering over the shards and measure the "validation" performance on each shard \(\mathcal{D}_{k}\) using a model trained on the preceding shards \(\mathcal{D}_{1:k-1}\)
Alternatively, it can be viewed as the _learning speed_ of a (probabilistic) model: _i.e._, a measure of how quickly it learns to perform well on new shards of data after only having been fit to the previous shards (through exact Bayesian updating).
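For completeness, the equality in Eq. 1 follows from the chain rule of probability (assuming the shards are conditionally independent given \(\mathbf{w}\) and \(\psi\)), and the inequality from Jensen's inequality applied to each term; we spell this out as a brief reminder:

\[\log p(\mathcal{D}|\psi)=\sum_{k=1}^{C}\log p(\mathcal{D}_{k}|\mathcal{D}_{1:k-1},\psi)=\sum_{k=1}^{C}\log\int p(\mathcal{D}_{k}|\mathbf{w},\psi)\,p(\mathbf{w}|\mathcal{D}_{1:k-1},\psi)\,d\mathbf{w}\geq\sum_{k=1}^{C}\mathbb{E}_{p(\mathbf{w}|\mathcal{D}_{1:k-1},\psi)}\left[\log p(\mathcal{D}_{k}|\mathbf{w},\psi)\right].\]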
This perspective neatly illustrates why models with higher marginal likelihood can exhibit good inductive biases, _e.g._, encoded through \(\psi\), \(\mathbf{w}\) and \(f_{\mathbf{w}}\). Namely, such models can be expected to learn faster and generalize better after seeing fewer samples. For example, if the hypothesis space is constrained3 to functions satisfying symmetries present in the data, we need fewer data to identify the correct function (Sokolic et al., 2017; Sannai et al., 2021). We argue that the "learning speed" aspect of marginal likelihood -- _i.e._, measuring how well the model generalizes to new data in the training set, having been trained only on the previous data points -- is the key property making marginal likelihood a useful tool for selecting hyperparameters.
Footnote 3: or if the learning algorithm is heavily biased towards returning hypotheses that satisfy a given invariance, _e.g._, through the use of a prior.
### Training speed for hyperparameter optimization
Computing the "learning speed", requires samples from the posterior \(p(\mathbf{w}|\mathcal{D}_{1:k},\psi)\). Unfortunately, in deep learning settings, such samples are impractical to obtain; thus, prior works have focused on more scalable alternatives. Lyle et al. (2020) propose to approximate the objective in Eq. 1 by looking at the _training speed_ during standard training of a neural network by SGD. Specifically, they define the training speed as the reduction in the training loss after a single SGD parameter update, summed over all updates in the first epoch. They argue that, during the first epoch of training, after the neural network parameters, \(\mathbf{w}\), have been updated with SGD steps using data from shards \(\mathcal{D}_{1:k}\), they can be approximately used in place of the sample from the posterior \(p(\mathbf{w}|\mathcal{D}_{1:k},\psi)\) in Eq. 1. They extend the analogy to training past one epoch and use the training speed estimate for model selection (Ru et al., 2021). As pointed out by the authors, however, the analogy between learning speed and training speed somewhat breaks down after \(1\) epoch of training. The network parameters have "seen" every datapoint in the training set after \(1\) epoch, and hence the connection to measuring the model's generalization capability is weakened.
For the sake of scalability and alignment with deep learning practice, we also focus on simple pointwise approximations \(q_{k}(\mathbf{w})=\delta(\mathbf{w}=\hat{\mathbf{w}}_{k})\) to the posteriors \(p(\mathbf{w}|\mathcal{D}_{1:k},\psi)\). However, in contrast to prior work, we explicitly parametrize the learning procedure such that, at any given training iteration, we have access to a model that is trained only on a subset of the data \(\mathcal{D}_{1:k}\). In doing so, we can approximate the objective in Eq. 1, and thus use it to optimize the hyperparameters during the entire training run.
## 3 Partitioned Neural Networks
Our goal is to optimize the objective
\[\mathcal{L}_{\mathrm{ML}}\left(\mathcal{D},\psi\right)=\sum_{k=1}^{C}\mathbb{ E}_{q_{k-1}(\mathbf{w})}\left[\log p(\mathcal{D}_{k}|\mathbf{w},\psi)\right] \tag{2}\]
wrt. \(\psi\), which is an approximation to the lower-bound presented in Eq. 1 above. In Appendix A, we show that the left-hand side is also a lower-bound on the marginal likelihood under some unobtrusive conditions. As mentioned in Section 2.2, our goal is to propose an architecture and a training scheme so that we can easily obtain models trained on only subsets of the data \(\mathcal{D}_{1:k}\) for all \(k\) throughout training. We propose that each \(\{q_{k}(\mathbf{w})\}_{k=1}^{C}\) optimizes a subset of the parameters of the neural network, in a manner that allows us to extract "subnetworks" from the main network that have been trained on specific chunks of data. We describe the partitioning scheme below.
**Partitioning the parameters** Denote by \(\mathbf{w}\in\mathbb{R}^{N}\) the concatenation of the weights of a neural network. We can define a partitioning \(((\mathbf{w}_{1},\dots,\mathbf{w}_{C}),P)\) of the parameters into \(C\) partitions, such that \(\mathbf{w}=P\operatorname{concat}(\mathbf{w}_{1},\dots,\mathbf{w}_{C})\) for a permutation matrix \(P\in\{0,1\}^{N\times N}\). For ease of exposition, we drop the dependence on \(P\), assuming that \(\mathbf{w}\) is already arranged such that \(P\) is the identity, \(P=I_{N\times N}\).
Given the partitioning \((\mathbf{w}_{1},\dots,\mathbf{w}_{C})\) of the parameters, we then specify \(C\) subnetworks with weights \(\mathbf{w}_{s}^{(1)},\dots,\mathbf{w}_{s}^{(C)}\) such that \(\mathbf{w}_{s}^{(k)}=\operatorname{concat}(\mathbf{w}_{1},\dots,\mathbf{w}_{k},\hat{\mathbf{w} }_{k+1},\dots,\hat{\mathbf{w}}_{C})\), where \(\hat{\mathbf{w}}_{i}\) are some default
values not optimized during training4. More specifically, the \(k\)-th subnetwork, \(\mathbf{w}_{s}^{k}\), retains the first \(k\) partitions from the weight partitioning and sets the remaining parameters to \(\hat{\mathbf{w}}_{k+1:C}\). Note that, if each \(\mathbf{w}_{k}\) is only updated on chunks \(\mathcal{D}_{1:k}\), the subnetwork \(\mathbf{w}_{s}^{(k)}\) is only comprised of weights that have been updated on \(\mathcal{D}_{1:k}\). Thus, we can view the parameters of \(\mathbf{w}_{s}^{(k)}\) as an approximation to \(q_{k}(\mathbf{w})\). Although, given that a subset of the parameters in each \(\mathbf{w}_{s}^{(k)}\) is fixed, this would likely be a poor approximation to the true posterior over the weights given \(\mathcal{D}_{1:k}\), it could be, intuitively, a reasonable approximation in function space5.
Footnote 4: _e.g._, \(\hat{\mathbf{w}}_{i}\) could be the value of the weights at initialization, or \(\hat{\mathbf{w}}_{i}=\mathbf{0}\) corresponding to pruning those parameters and obtaining a proper subnetwork.
Footnote 5: Since a) the mapping from parameters to functions is not bijective and b) neural networks are highly overparameterised and can be heavily pruned while retaining performance (Frankle and Carbin, 2018), obtaining a good fit to a subset of the training data with a subset of the model parameters should be possible. Furthermore, “scaling laws” indicate that the benefit of having more parameters becomes apparent mostly for larger dataset sizes (Kaplan et al., 2020), thus it is reasonable for subnetworks fit to more data to have more learnable parameters.
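To make the construction concrete, the following minimal NumPy sketch partitions a flat weight vector at random into \(C\) partitions and assembles the subnetwork weights \(\mathbf{w}_{s}^{(k)}\) from the trained partitions \(1,\dots,k\) together with the default values \(\hat{\mathbf{w}}\) (here taken to be the values at initialization) for the remaining ones. The toy sizes and the uniform random assignment are illustrative assumptions; only the wiring of the partitions is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 12, 3                                    # number of weights / chunks (toy sizes)

w_init = rng.normal(size=N)                     # default values \hat{w}: here, the init values
w = w_init.copy()                               # trainable weights, updated during training

partition_id = rng.integers(0, C, size=N)       # random assignment of each weight to a partition
masks = [partition_id == j for j in range(C)]   # boolean mask per partition

def subnetwork_weights(k):
    """w_s^{(k)}: partitions 1..k from the trained weights, the rest kept at w_init."""
    w_s = w_init.copy()
    for j in range(k):                          # partitions are 0-indexed internally
        w_s[masks[j]] = w[masks[j]]
    return w_s

for k in range(1, C + 1):
    n_trained = int(sum(m.sum() for m in masks[:k]))
    print(f"subnetwork {k}: {n_trained} of {N} weights come from trained partitions")
```

In a real network the same assignment would typically be drawn per layer, as described under "Partitioning Schemes" below; the splicing logic is identical.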
**Partitioned training** Having partitioned the dataset \(\mathcal{D}\) into \(C\) chunks \((\mathcal{D}_{1},\ldots,\mathcal{D}_{C})\), we update each partition \(\mathbf{w}_{k}\) by optimising the negative log-likelihood\({}^{6}\) on chunks \(\mathcal{D}_{1:k}\) using subnetwork \(\mathbf{w}_{s}^{(k)}\), computing the following gradients:
Footnote 6: Optionally with an added negative log-prior regularization term \(\log p(\mathbf{w}_{s}^{(k)})\).
\[\nabla_{\mathbf{w}_{k}}\mathcal{L}\left(\mathcal{D}_{1:k},\mathbf{w}_{s}^{(k)}\right)= \sum_{(\mathbf{x},y)\in\mathcal{D}_{1:k}}\nabla_{\mathbf{w}_{k}}\log p\left(y\Big{|} \mathbf{x};\mathbf{w}_{s}^{(k)},\psi\right). \tag{3}\]
We interleave stochastic gradient updates of each partition of the weights with updating the hyperparameters \(\psi\) using \(\mathcal{L}_{\mathrm{ML}}\) in Eq. 2:
\[\nabla_{\psi}\mathcal{L}_{\mathrm{ML}}\left(\mathcal{D},\psi\right)\approx \sum_{k=2}^{C}\sum_{(\mathbf{x},y)\in\mathcal{D}_{k}}\nabla_{\psi}\log p\left(y \Big{|}\mathbf{x},\mathbf{w}_{s}^{(k-1)},\psi\right). \tag{4}\]
This can be seen as the sum of the _out-of-sample_ losses of the subnetworks: each chunk \(\mathcal{D}_{k}\) is scored under the subnetwork \(\mathbf{w}_{s}^{(k-1)}\), which has never been trained on it. The scheme is illustrated in Figure 1. For details of how the updates are scheduled in our experiments, see Appendix I. Note that, while we could incorporate the gradient of the first term from Eq. 1, corresponding to \(\mathbb{E}_{q_{0}(\mathbf{w})}[\log p(\mathcal{D}_{1}|\mathbf{w},\psi)]\), in Eq. 4, we chose to leave it out. Hence, Eq. 4 is the gradient of an estimate that can be viewed as an approximation to the _conditional_ marginal likelihood \(\log p\left(\mathcal{D}_{2:C}|\mathcal{D}_{1},\psi\right)\). The conditional marginal likelihood has been shown to have many desirable properties for model selection and, in many cases, can be a better proxy for generalization (Lotfi et al., 2022).
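To make the interleaving of partition updates (Eq. 3) and hyperparameter updates (Eq. 4) concrete, the following self-contained sketch runs the scheme on a toy linear model, with a squared loss standing in for the negative log-likelihood; the model, the scalar hyperparameter \(\psi\) (an input scaling), and the learning rates are our own illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
C, d = 3, 6                                  # number of chunks and of weights
chunks = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(C)]
partitions = [0.1 * rng.normal(size=2) for _ in range(C)]
defaults = [np.zeros(2) for _ in range(C)]
psi = 0.0                                    # toy hyperparameter: input scale exp(psi)

def subnet(k):
    # w_s^(k): first k trained partitions, default values elsewhere
    return np.concatenate([partitions[i] if i < k else defaults[i] for i in range(C)])

def residuals(X, y, w, psi):
    return (X * np.exp(psi)) @ w - y

for step in range(200):
    # Eq. 3: update partition w_k on chunks D_{1:k} through subnetwork w_s^(k)
    for k in range(1, C + 1):
        g = np.zeros(d)
        for X, y in chunks[:k]:
            r = residuals(X, y, subnet(k), psi)
            g += (X * np.exp(psi)).T @ r / len(y)
        partitions[k - 1] -= 0.05 * g[(k - 1) * 2:k * 2]
    # Eq. 4: update psi on the out-of-sample loss of chunk D_k under w_s^(k-1)
    g_psi = 0.0
    for k in range(2, C + 1):
        X, y = chunks[k - 1]
        r = residuals(X, y, subnet(k - 1), psi)
        g_psi += np.mean(r * ((X * np.exp(psi)) @ subnet(k - 1)))
    psi -= 0.01 * g_psi
```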
This procedure, inspired by the marginal likelihood, has several desirable properties compared to prior work. **1)** Our objective is computationally efficient, with a computational cost roughly corresponding to evaluating subnetworks on the training set. There is no need to compute or invert a Hessian with respect to the weights, as in the Laplace approximation (Immer et al., 2021, 2022). **2)** Our objective is readily amenable to optimization by stochastic gradient descent; we do not have to iterate over the entire training set to compute a single gradient update for the hyperparameters. **3)** Compared to the training speed objective (Lyle et al., 2020), in our method, the training of the weights in each subnetwork progresses independently of the data in future chunks. Hence, it can be seen as more truthfully measuring the generalization capability of a model using a given set of hyperparameters.

Figure 1: Best viewed in colour. Illustration of the partitioning scheme for a single hidden layer perceptron with \(C=3\) chunks.
**Partitioning Schemes** There are several ways in which the neural network weights can be partitioned. In our experiments in Section 5, we partition the weights before beginning training by assigning a fixed proportion of weights in each layer to a given partition at random. For each subnetwork, for the weight partitions corresponding to future chunks, we use the values of the weights at initialisation. For a discussion of partitioning schemes, see Appendix C.
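A minimal sketch of the random per-layer partitioning just described might look as follows; the helper and the equal proportions are our own illustrative choices:

```python
import numpy as np

def random_layer_partition(layer_shapes, proportions, seed=0):
    # assign a fixed proportion of each layer's weights to every partition, at random
    rng = np.random.default_rng(seed)
    masks = {}
    for name, shape in layer_shapes.items():
        n = int(np.prod(shape))
        counts = np.floor(np.asarray(proportions) * n).astype(int)
        counts[-1] = n - counts[:-1].sum()      # remainder goes to the last partition
        ids = np.repeat(np.arange(len(proportions)), counts)
        rng.shuffle(ids)
        masks[name] = ids.reshape(shape)        # masks[name][i, j] == k  <=>  weight belongs to partition k
    return masks

# e.g. a single-hidden-layer MLP with C = 3 equally sized partitions
masks = random_layer_partition({"fc1": (784, 100), "fc2": (100, 10)}, [1/3, 1/3, 1/3])
```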
## 4 Related works
**Hyperparameter optimization in deep learning** Many works have tackled the challenge of optimizing hyperparameters in deep learning. Works on implicit differentiation, such as the one by Lorraine et al. (2019), allow for optimizing training hyperparameters such as the learning rate, weight-decay, or other hyperparameters that affect the final neural network weights only through the training routine. Other works have proposed ways to parameterize and optimize data-augmentations (Cubuk et al., 2018; Li et al., 2020), search-spaces for neural network architectures, as well as methods to optimize architectures using gradient-based optimization (Liu et al., 2018; Elsken et al., 2019). All of the above works have primarily relied on optimizing hyperparameters on a separate validation set and are compatible with the objective defined in this work. Several works have also aimed to cast learning data augmentations as an invariance learning problem. They do so by parameterizing the model itself with data augmentations, and frame invariance learning as a model selection problem (van der Wilk et al., 2018; Benton et al., 2020; Schwobel et al., 2021; Nabarro et al., 2022; Immer et al., 2022). We compare against Benton et al. (2020) ("Augerino") and Immer et al. (2022) ("Differentiable Laplace") on this task in the experimental section.
**Hyperparameter optimization without a validation set** A limited number of works consider learning hyperparameters without a validation set in a deep learning context. Benton et al. (2020) propose a simple method for learning invariances without a validation set by regularising invariance hyperparameters to those resulting in higher invariance. They show that the invariances found tend to be insensitive to the regularisation strength, determined by another hyperparameter. However, the method relies on being able to _a priori_ define which hyperparameters lead to higher invariance through a suitable regularisation function. In more complex invariance learning settings, defining the regulariser can be challenging. For example, if data-augmentation transformations were to be parameterized by a neural network (as proposed in Lorraine et al. (2019)), it is non-trivial to devise an adequate regulariser. We show that our method can be applied to such settings.
Other works focus on deriving tractable approximations to the marginal likelihood for deep neural networks. Schwobel et al. (2021) propose only marginalising-out the parameters in the last layer of the neural network by switching it out for a Gaussian Process. They treat the preceding layers effectively as hyperparameters, and optimize invariance parameters using the marginal likelihood. Although they show promising results on MNIST, they found they "were unable to learn invariances for CIFAR-10" (Schwobel et al., 2021, §7) and highlighted the need to marginalise lower layers as well. In contrast, our objective can be seen as being inspired by marginal likelihood where arbitrary network layers can be "marginalised", and works on datasets like CIFAR-10. Immer et al. (2022) have adapted the Laplace approximation (Immer et al., 2021) to make it tractable for learning data augmentations. In contrast to Schwobel et al. (2021), they approximately marginalize out all the network parameters, and perform favourably. Their approximation, however, requires approximations to a Hessian w.r.t. all network parameters; for that reason, their work reports results for architectures only up to a ResNet-14, whereas our method can easily scale to larger architectures.
**Hyperparameter optimization in FL** Improving hyperparameter optimization is especially relevant to FL. Given the potential system level constraints (Wang et al., 2021), methods that optimize the hyperparameters and parameters in a single training run are preferred. On this note, Khodak et al. (2021) introduced FedEx and showed that it can successfully optimize the client optimizer
hyperparameters. FedEx relies on a training/validation split on the client level and uses a REINFORCE type of gradient (Williams, 1992) estimator, which usually exhibits high variance and needs baselines to reduce it (Mohamed et al., 2020). This is in contrast to partitioned networks, which use standard, low-variance backpropagation for the hyperparameters and no separate validation set per client. To optimize the other hyperparameters, Khodak et al. (2021) wrapped FedEx with a traditional hyperparameter optimization strategy, the successive halving algorithm. This is orthogonal to our method and could be applied to partitioned networks as well. In Zhou et al. (2021), the authors perform a hyperparameter search independently on each client with some off-the-shelf methods and then aggregate the results of the search at the server once in order to identify the best hyperparameter setting. The main drawback of this method compared to partitioned networks is that when the local client datasets are small, a client-specific validation set is not informative, and the aggregation happens only once. Finally, there is also the recent work from Seng et al. (2022) which performs hyperparameter optimization and neural architecture search in the federated setting. Similarly to prior works, it requires client-specific validation data in order to optimize the hyperparameters.
## 5 Experiments
**Input Selection** To demonstrate that \(\mathcal{L}_{\mathrm{ML}}\) is a good objective for model selection that captures the desirable properties of the marginal likelihood, we first deploy our method on the toy model selection task of Lyle et al. (2020): there the first \(15\) features are informative, and the remaining \(15\) are spurious
\[y\sim\mathrm{Bern}\left(\frac{1}{2}\right)\qquad\mathbf{x}=\big[\underbrace{y+\epsilon_{1},\ldots,y+\epsilon_{15}}_{\text{Informative}},\underbrace{\epsilon_{16},\ldots,\epsilon_{30}}_{\text{Spurious}}\big]^{\intercal}\qquad\epsilon_{1},\ldots,\epsilon_{30}\stackrel{\text{iid}}{\sim}\mathcal{N}(0,1).\]
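For reference, sampling this toy dataset takes only a few lines; the function below is our own sketch of the generative process just defined:

```python
import numpy as np

def make_toy_data(n, n_informative=15, n_spurious=15, seed=0):
    # y ~ Bern(1/2); informative features are y plus unit Gaussian noise,
    # spurious features are pure noise
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)
    informative = y[:, None] + rng.normal(size=(n, n_informative))
    spurious = rng.normal(size=(n, n_spurious))
    return np.hstack([informative, spurious]), y

X, y = make_toy_data(1000)   # the training-set size used in this experiment
```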
We specify a fixed mask over the inputs prior to training, where the first \(K\) inputs remain unmasked, and the remainder is masked. We expect that, given multiple models with different (fixed) masks over the inputs, the proposed objective will be able to identify the correct one -- _i.e._, the one that keeps only the informative features. We train multiple fully connected neural networks (MLPs) on a training set of \(1000\) examples using our method and compare the final values of the \(\mathcal{L}_{\mathrm{ML}}\) objective. The results are shown in Figure 2(a). \(\mathcal{L}_{\mathrm{ML}}\) correctly identifies \(15\) input features as the optimum, and correlates well with test accuracy and log-likelihood. Training loss and training accuracy, on the other hand, cannot alone disambiguate whether to use \(15\) or more input features.
**Differentiable input selection** We further show that we can learn the correct mask over the inputs in a differentiable manner using our method during a single training run. We parameterize a learnable mask over the inputs with a concrete Bernoulli distribution (Maddison et al., 2016) and treat the parameters of the mask distribution as a hyperparameter. We optimize them with respect to the proposed objective using our method. The evolution of the learned mask during training is shown in Figure 2(b), where we see that we can correctly identify the first 15 informative features.
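As an illustration of the relaxed mask, the sketch below samples from a binary Concrete (Gumbel-sigmoid) relaxation in the spirit of Maddison et al. (2016); the helper name and the temperature value are our own assumptions:

```python
import numpy as np

def sample_concrete_mask(logits, temperature=0.1, rng=None):
    # differentiable relaxation of Bernoulli(sigmoid(logits)):
    # add logistic noise to the logits and squash with a tempered sigmoid
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(logits))
    logistic_noise = np.log(u) - np.log1p(-u)
    return 1.0 / (1.0 + np.exp(-(logits + logistic_noise) / temperature))

# the 30 logits are the hyperparameters optimised with L_ML; the sampled mask
# multiplies the inputs before the first layer, e.g. masked_x = mask * x
mask = sample_concrete_mask(np.zeros(30))
```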
Figure 2: (a) Demonstrating the ability of the marginal-likelihood inspired objective \(\mathcal{L}_{\mathrm{ML}}\) to identify the correct model on a toy input selection task; the hyperparameter objective is plotted together with train and test metrics for different numbers of unmasked inputs. (b) Evolution of the learned input mask during training.
**Learning invariances through data-augmentations** Following previous literature on learning soft invariances through learning data augmentations (Nabarro et al., 2022; van der Wilk et al., 2018; Benton et al., 2020; Schwobel et al., 2021; Immer et al., 2022), we show that we can learn useful affine image augmentations, resulting in gains in test accuracy. We specify affine data augmentations as part of a probabilistic model as done by van der Wilk et al. (2018), averaging over multiple data augmentation samples during training and inference. This allows us to treat the data-augmentation distribution as a model hyperparameter rather than a training hyperparameter. For datasets, we consider MNIST, CIFAR10, TinyImagenet along with rotCIFAR10 and rotTinyImagenet, variants where the datapoints are randomly rotated at the beginning of training by angles sampled uniformly from \([-\pi,\pi]\) (Immer et al., 2022). Experimental setup details are provided in Appendix I.
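The basic idea of treating the augmentation distribution as a model hyperparameter can be sketched as follows: predictions are averaged over sampled augmentations whose range is learnable. The sketch restricts the affine family to a single rotation range and only shows the forward averaging; the full setup uses all affine generators and differentiates through the sampled transformations, so everything below is our own simplification:

```python
import numpy as np
from scipy.ndimage import rotate

def averaged_prediction(model, image, theta_max, n_samples=4, rng=None):
    # average the model output over rotations drawn from U(-theta_max, theta_max);
    # theta_max (in degrees) plays the role of the augmentation hyperparameter
    rng = rng or np.random.default_rng()
    preds = [model(rotate(image, rng.uniform(-theta_max, theta_max),
                          reshape=False, order=1))
             for _ in range(n_samples)]
    return np.mean(preds, axis=0)

# toy usage with a stand-in "model" that just returns the mean pixel intensity
image = np.random.default_rng(0).normal(size=(32, 32))
print(averaged_prediction(lambda x: x.mean(), image, theta_max=30.0))
```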
For the CIFAR10 and rotCIFAR10 datasets, we consider as baselines standard training with no augmentations, Augerino (Benton et al., 2020) and Differentiable Laplace (Immer et al., 2022). Following Immer et al. (2022), we use fixup ResNets (Zhang et al., 2019) for the architectures. The results can be seen in Table 1. There, we observe that partitioned networks outperform all baselines in the case of CIFAR10 for both ResNet variants we consider. On RotCIFAR10, we observe that partitioned networks outperform the baseline and Augerino, but they are slightly outperformed by Differentiable Laplace, which optimizes additional prior hyperparameters. To demonstrate the scalability of partitioned networks, for the (rot)TinyImagenet experiments we consider a ResNet-50 architecture with GroupNorm(2). In Table 1 we observe that in both cases, partitioned networks learn invariances successfully and improve upon the baseline. Relative to Augerino, we observe that partitioned networks either improve (TinyImagenet) or are similar (rotTinyImagenet).
Imbuing a model with useful invariances is particularly useful in the low-data regime, due to better data efficiency. To show that, we perform experiments where we artificially reduce the size of the training dataset. The results can be seen in Figure 3. We see that by learning augmentations with partitioned networks, we can drastically improve performance in the low-data regime upon a baseline that does not learn augmentations, while performing favorably against prior works in most cases.
On MNIST, our method outperforms the last-layer marginal-likelihood method (last-layer ML) by Schwobel et al. (2021) in the large data regime but underperforms in the low-data regime. That is likely to be expected, as their work fits a Gaussian Process (GP) at the last layer (Wilson et al., 2016), which is better tailored for the low-data regime and results in a more flexible model (due to the GP corresponding to an additional, infinite-width layer). Since the MNIST-CNN is sufficiently small to fit multiple networks into memory, we also compare to a variant of our method where, instead of partitioning a single network, we train \(C\) different networks where network \(k\) is fit on data \(\mathcal{D}_{1:k}\). This serves as an upper bound on the performance of the partitioned networks. We see that by partitioning a single network, we can achieve almost equivalent accuracy. On CIFAR10, partitioned networks outperform all other works on all data sizes we considered. On RotCIFAR10, partitioned networks perform again favourably, but they are marginally outperformed by differentiable Laplace in the low-data regime. Compared to partitioned networks where we only optimize augmentations, differentiable Laplace also optimizes the precision of a Gaussian prior over the weights, which better combats overfitting in the low-data regime. On both the TinyImagenet and rotTinyImagenet experiments we observe that partitioned networks either outperform or are similar to the baselines on all data sizes considered.
| Dataset | Architecture | Baseline | Augerino | Diff. Laplace | Partitioned |
| --- | --- | --- | --- | --- | --- |
| RotCIFAR10 | fixup ResNet-8 | 54.2 ± 0.4 | 75.4 ± 0.2 | **79.5 ± 0.6** | **79.1 ± 0.0** |
| CIFAR10 | fixup ResNet-8 | 74.1 ± 0.5 | 79.0 ± 1.0 | 84.2 ± 0.8 | **86.1 ± 0.4** |
| CIFAR10 | fixup ResNet-14 | 79.5 ± 0.3 | 83.0 ± 0.1 | 88.1 ± 0.2 | **89.1 ± 0.8** |
| RotTinyImagenet | ResNet-50 | 31.5 ± 0.6 | **44.5 ± 0.2** | OOM | 43.9 ± 0.3 |
| TinyImagenet | ResNet-50 | 44.2 ± 0.5 | 41.1 ± 0.2 | OOM | **48.6 ± 0.0** |

Table 1: Test accuracy with learning affine augmentations on (rot)CIFAR10 and (rot)TinyImagenet.
**Comparisons to traditional training / validation split** We further perform comparisons between partitioned networks and the more traditional training/validation split (denoted as validation set optimization) with additional finetuning on the task of learning data augmentations. This is realized as follows: we partition \(20k\) CIFAR10 examples into training and validation data of specific proportions. We then either train a partitioned network (along with the hyperparameters on \(\mathcal{L}_{\mathrm{ML}}\)) on these two chunks of data or train a standard network on the training set while using the validation set loss to obtain gradients for the data augmentation hyperparameters. For the validation set optimization baseline, once the hyperparameters are optimized, the resulting network is finetuned on the whole dataset for \(20\) epochs. The results for varying chunk proportions are provided in Table 2.
We can see that partitioned networks (that do not employ additional finetuning) outperform validation set optimization with finetuning in all settings we tried. The gap does get smaller when we move to the more traditional \(90\)/\(10\) splits for training/validation: a \(10\%\) proportion for validation data is enough to optimize a handful of hyperparameters (just \(6\) scalars). To corroborate this claim, we set up an additional experiment; we use a Wide ResNet-20 on the full CIFAR10 dataset, where the first two out of the three stages (13 convolution layers) are considered as hyperparameters. The results for this setting can be seen in Table 3. We see that \(10\%\) validation data are not enough, and the validation set optimization baseline performs poorly. This is in contrast to partitioned networks, where with three chunks, we can learn all of these hyperparameters successfully. Note that, compared to Augerino, applying partitioned networks to this setting is straightforward. To apply Augerino, one would have to come up with a metric that can be used to regularize the feature extractor towards "higher invariance".
**Partitioned networks for federated learning** We consider federated learning (FL) (McMahan et al., 2017), a setting where data is distributed across many clients. In this setting, there are system properties that make hyperparameter optimization especially challenging (Wang et al., 2021).
| Method | [0.3, 0.7] | [0.5, 0.5] | [0.7, 0.3] | [0.8, 0.2] | [0.9, 0.1] |
| --- | --- | --- | --- | --- | --- |
| Partitioned | **82.9% ± 0.3** | **83.0% ± 0.01** | **83.7% ± 0.2** | **84.0% ± 0.6** | **84.6% ± 0.05** |
| Validation set optim. | NaN | 78.9% ± 0.04 | 81.5% ± 0.2 | 82.6% ± 0.1 | 83.4% ± 0.1 |
| Validation set optim. + Finetune | NaN | 81.3% ± 0.09 | 82.5% ± 0.2 | 83.5% ± 0.1 | 83.8% ± 0.3 |

Table 2: Learning affine augmentations with fixup ResNet-14 on a subset of CIFAR-10 (20k examples); columns give the training/validation chunk proportions. NaN denotes that a run crashed.
Figure 3: Learning affine data augmentations on subsets of data. (b) uses a fixup ResNet-8 architecture whereas (c) a ResNet-50 architecture. (b,c) Top: normal dataset, bottom: rotated dataset.
| Method | Chunk Proportions | Test accuracy |
| --- | --- | --- |
| Validation set optim. | [0.9, 0.1] | 59.6% ± 0.6 |
| Partitioned | [0.1, 0.8, 0.1] | **87.3% ± 0.8** |

Table 3: Learning a feature extractor (first 2 out of 3 stages of a Wide ResNet-20) as a hyperparameter on CIFAR10.
More specifically, obtaining a validation set and performing multiple training runs with different hyperparameter settings might not be possible due to the additional communication and computation costs, and transient client availability (clients join and leave the training process at any time). Optimizing hyperparameters together with the model parameters in a single run is therefore especially beneficial (Wang et al., 2021), and partitioned networks are a good fit for FL.
We extend our centralized experimental setup to FL by splitting all \(N\) clients into \(C\) non-overlapping chunks, such that each chunk is understood as the union of all clients' data shards that belong to that chunk. During federated training, a client belonging to chunk \(k\) sequentially optimizes partitions \(\mathbf{w}_{k:C}\) through sub-networks \(\mathbf{w}_{s}^{(k:C)}\) and computes a gradient wrt. the hyperparameters \(\psi\). Note that partitions \(\mathbf{w}_{1:k-1}\) remain unchanged and do not need to be communicated back to the server. This reduction in upload costs is a welcome property for FL, where upload costs can bottleneck system design. The server receives the (hyper-) parameter updates, averages them, and applies the result as a "gradient" to the server-side model in the traditional federated manner (Reddi et al., 2020). For partitioned networks, the hyperparameters that we optimize are the data augmentation parameters and, since we also include dropout in these architectures, the dropout rates (with the concrete relaxation from Maddison et al. (2016)). As a baseline, we consider the standard federated training without learning hyperparameters (denoted as FedAvg) as well as learning the augmentation parameters with Augerino (Benton et al., 2020). Please see Appendix J for a detailed explanation of our FL setup.
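A schematic client step under this scheme is sketched below; the gradient callables are stand-ins for whatever local loss the client computes, and all names are ours rather than the paper's:

```python
import numpy as np

def client_step(partitions, defaults, psi, k, grad_w_fn, grad_psi_fn, lr=0.05):
    # a client in chunk k trains partitions w_k..w_C through their subnetworks,
    # computes a hyperparameter gradient with subnetwork w_s^(k-1), and only
    # uploads the partitions it actually changed
    C = len(partitions)
    subnet = lambda j: np.concatenate(partitions[:j] + defaults[j:])
    for j in range(k, C + 1):
        partitions[j - 1] = partitions[j - 1] - lr * grad_w_fn(subnet(j), psi, j)
    g_psi = grad_psi_fn(subnet(k - 1), psi)
    return partitions[k - 1:], g_psi   # w_1..w_{k-1} never need to be uploaded

# dummy usage just to show the data flow (gradients replaced by constants)
C, p = 3, 2
parts, defs = [np.ones(p) for _ in range(C)], [np.zeros(p) for _ in range(C)]
upload, g_psi = client_step(parts, defs, psi=0.0, k=2,
                            grad_w_fn=lambda w, psi, j: 0.1 * np.ones(p),
                            grad_psi_fn=lambda w, psi: 0.01)
```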
Table 4 summarizes our results using different subsets and variations of MNIST and CIFAR10, where we also included rotMNIST (Larochelle et al., 2007) as another dataset. We can see that partitioned networks allow training models that generalize better than both FedAvg and FedAvg with Augerino, at reduced communication costs. Especially when the true data-generating process and underlying source of non-i.i.d.-ness are explicitly accounted for -- here in the form of rotation -- the benefits of learning the augmentations with partitioned networks become apparent. For example, we observe that on the rotated datasets, partitioned networks learn to correctly increase the rotation angle.
## 6 Discussion
We propose partitioned networks as a new method for hyperparameter optimization inspired by the marginal likelihood objective. It provides a general and scalable solution to finding hyperparameters in a single training run without requiring access to a validation set while introducing less additional overhead to the training task than existing approaches. We showed that partitioned networks are applicable to a wide range of tasks; they can identify the correct model on illustrative toy examples, they can learn data augmentations in a way that improves data efficiency, they can optimize general feature extractors as hyperparameters and they can also optimize dropout rates. In the federated setting, partitioned networks allow us to overcome practical challenges, reduce the communication overhead and obtain better models. The notion of partitioned networks we propose in this work is novel to the literature and an orthogonal approach to many existing hyperparameter tuning algorithms. Like any other method, partitioned networks come with their own limitations, e.g., needing a partitioning strategy. We expand upon them in Appendix H. We hope to see our method successfully reducing the need to perform hyperparameter search through repeated training and thereby contribute to the community's effort to reduce its carbon footprint.
| Method | MNIST 1.25k | MNIST 5k | MNIST 50k | RotMNIST 1.25k | RotMNIST 5k | RotMNIST 50k | Upload [%] ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 95.4 ± 0.1 | 97.4 ± 0.1 | 99.0 ± 0.1 | 80.5 ± 0.0 | 90.4 ± 0.5 | 96.8 ± 0.1 | 100 |
| FedAvg + Augerino | 94.2 ± 0.5 | 96.4 ± 0.1 | 99.1 ± 0.0 | 79.5 ± 0.3 | 89.0 ± 2.0 | 95.3 ± 0.2 | 100 |
| FedAvg + Partitioned | **97.0 ± 0.1** | **98.3 ± 0.0** | 99.2 ± 0.1 | **85.7 ± 0.9** | **93.5 ± 0.6** | **97.8 ± 0.1** | 77 |

| Method | CIFAR10 1.25k | CIFAR10 5k | CIFAR10 45k | RotCIFAR10 1.25k | RotCIFAR10 5k | RotCIFAR10 45k | Upload [%] ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 50.2 ± 0.4 | 64.5 ± 0.3 | 79.2 ± 0.7 | 35.6 ± 0.3 | 45.2 ± 0.1 | 53.9 ± 1.1 | 100 |
| FedAvg + Augerino | 49.9 ± 0.8 | 65.0 ± 0.2 | 79.9 ± 0.4 | 36.1 ± 0.2 | 45.0 ± 0.2 | 56.4 ± 0.7 | 100 |
| FedAvg + Partitioned | 50.8 ± 1.0 | 64.8 ± 0.4 | **81.5 ± 0.5** | **37.1 ± 0.2** | 45.3 ± 0.3 | **60.6 ± 0.2** | 91 |

Table 4: Validation accuracy (%, higher is better) averaged over the last 10 evaluations, each 10 rounds apart; standard error is computed across 4 random seeds. All datasets are adapted to the federated setting and are synthetically split to be non-i.i.d., sampled as described in Appendix J.2. |
2307.02454 | Transgressing the boundaries: towards a rigorous understanding of deep
learning and its (non-)robustness | The recent advances in machine learning in various fields of applications can
be largely attributed to the rise of deep learning (DL) methods and
architectures. Despite being a key technology behind autonomous cars, image
processing, speech recognition, etc., a notorious problem remains the lack of
theoretical understanding of DL and related interpretability and (adversarial)
robustness issues. Understanding the specifics of DL, as compared to, say,
other forms of nonlinear regression methods or statistical learning, is
interesting from a mathematical perspective, but at the same time it is of
crucial importance in practice: treating neural networks as mere black boxes
might be sufficient in certain cases, but many applications require waterproof
performance guarantees and a deeper understanding of what could go wrong and
why it could go wrong. It is probably fair to say that, despite being
mathematically well founded as a method to approximate complicated functions,
DL is mostly still more like modern alchemy that is firmly in the hands of
engineers and computer scientists. Nevertheless, it is evident that certain
specifics of DL that could explain its success in applications demands
systematic mathematical approaches. In this work, we review robustness issues
of DL and particularly bridge concerns and attempts from approximation theory
to statistical learning theory. Further, we review Bayesian Deep Learning as a
means for uncertainty quantification and rigorous explainability. | Carsten Hartmann, Lorenz Richter | 2023-07-05T17:27:17Z | http://arxiv.org/abs/2307.02454v1 | Transgressing the boundaries: towards a rigorous understanding of deep learning and its (non-)robustness
###### Abstract
The recent advances in machine learning in various fields of applications can be largely attributed to the rise of deep learning (DL) methods and architectures. Despite being a key technology behind autonomous cars, image processing, speech recognition, etc., a notorious problem remains the lack of theoretical understanding of DL and related interpretability and (adversarial) robustness issues. Understanding the specifics of DL, as compared to, say, other forms of nonlinear regression methods or statistical learning, is interesting from a mathematical perspective, but at the same time it is of crucial importance in practice: treating neural networks as mere black boxes might be sufficient in certain cases, but many applications require waterproof performance guarantees and a deeper understanding of what could go wrong and why it could go wrong. It is probably fair to say that, despite being mathematically well founded as a method to approximate complicated functions, DL is mostly still more like modern alchemy that is firmly in the hands of engineers and computer scientists. Nevertheless, it is evident that certain specifics of DL that could explain its success in applications demands systematic mathematical approaches. In this work, we review robustness issues of DL and particularly bridge concerns and attempts from approximation theory to statistical learning theory. Further, we review Bayesian Deep Learning as a means for uncertainty quantification and rigorous explainability.
## 1 Introduction
According to Wheeler (2016, p. 2), machine learning is a "marriage of statistics and computer science that began in artificial intelligence". While statistics deals with the question of what can be inferred from data given an appropriate statistical model, computer science is concerned with the design of algorithms to solve a given computational problem that would be intractable without the help of a computer.
Artificial intelligence and, specifically, machine learning have undergone substantial developments in recent years that have led to a huge variety of successful applications, most of which would not have been possible with alternative approaches. In particular, advances in deep learning (i.e. machine learning relying on deep neural networks) have revolutionized many fields, leading, for instance, to impressive achievements in computer vision (e.g. image classification, image segmentation, image generation), natural language processing (semantic text understanding, text categorization and text creation, automatic question answering) and reinforcement learning (agents and games, high-dimensional optimization problems); cf. Sarker (2021) and the references therein.
Moreover, deep learning is nowadays increasingly applied in multiple scientific branches as an acceptable tool for conducting inference from simulated or collected data. For example, in the medical field, the development of drugs (Ma et al., 2015) or the analysis of tomography (Bubba et al., 2019) are enhanced with deep learning. In molecular simulations, ground-state properties of organic molecules are predicted (Faber et al., 2017), equilibrium energies of molecular systems are learnt (Noe et al., 2019) or multi-electron Schrodinger equations are solved (Hermann et al., 2020). Speaking of which, the numerical treatment of high-dimensional partial differential equations with neural networks has undergone vast improvements (E et al., 2017; Nusken and Richter, 2021), allowing for applications in almost all sciences. In biology, cell segmentation and classification have been studied with certain convolutional neural networks (Ronneberger et al., 2015), in signal processing speech separation is approached with temporal |
2306.16708 | Effect of Background Signal on Momentum Imaging | The velocity Slice Imaging technique has revolutionised electron molecule
interaction studies. Multiple electrostatic lens assemblies are often used in
spectrometers for resolving low kinetic energy fragments. However, in a
crossed-beam experiment with an effusive molecular beam, the extended source of
ion generation due to the presence of the background gas creates artefacts on
the momentum images as we try to magnify them beyond a certain size. Here, we
present a systematic study of this effect on momentum imaging and the solutions
to address this issue by background subtraction with suitable magnification.
Additionally, we demonstrated that a supersonic molecular beam target helps
minimise these artefacts in the image magnification by reducing the background
signal. These systematic findings may bring valuable insight into the
investigation of low kinetic energy release processes involving electron
impact, ion impact, and merge beam experiments with large interaction volumes
where high magnification is needed. | Sukanta Das, Suvasis Swain, Krishnendu Gope, Vishvesh Tadsare, Vaibhav S. Prabhudesai | 2023-06-29T06:09:44Z | http://arxiv.org/abs/2306.16708v1 | **Effect of Background Signal on Momentum Imaging**
###### Abstract
The velocity Slice Imaging technique has revolutionised electron molecule interaction studies. Multiple electrostatic lens assemblies are often used in spectrometers for resolving low kinetic energy fragments. However, in a crossed-beam experiment with an effusive molecular beam, the extended source of ion generation due to the presence of the background gas creates artefacts on the momentum images as we try to magnify them beyond a certain size. Here, we present a systematic study of this effect on momentum imaging and the solutions to address this issue by background subtraction with suitable magnification. Additionally, we demonstrated that a supersonic molecular beam target helps minimise these artefacts in the image magnification by reducing the background signal. These systematic findings may bring valuable insight into the investigation of low kinetic energy release processes involving electron impact, ion impact, and merge beam experiments with large interaction volumes where high magnification is needed.
## I Introduction
In the molecular collision process, the details of the dynamics leading to the dissociation are carried away by the fragments generated under single collision conditions. By measuring the momenta of these fragments, one can identify different reaction paths leading to the process of one's interest and unravel the role of molecular dynamics in the final outcome. Usually, ion imaging techniques are used to capture these momentum distributions. Chandler and Houston first demonstrated ion imaging on a 2D detector[1]. Later it was improved by Eppink and Parker[2] to Velocity Map Imaging (VMI), which reduces the effect of the spatial spread of the interaction region on the final momentum image. Offerhaus _et al._ introduced a three-lens system at the entrance of the drift tube to magnify the momentum image of low-energy electrons and ions[3]. They have shown a 20x magnification of the slow photoelectrons emitted from photoionisation of the Xe metastable state. These magnified images opened the path to molecular microscopy[4, 5, 6], which was not possible before. But the VMI technique required reconstruction of a 3D image from its 2D projection, for which different methods like Abel inversion[7, 8], onion peeling[9], iterative inversion[10], BASEX[11], and pBASEX[12] were utilised. All these methods required a cylindrical symmetry about the axis parallel to the detector plane. These techniques are very prone to noise and
often leave a noisy patch along the line of symmetry [7, 8, 10] or at the centre [9, 11] of the image. In recent times, Sparling _et al._ have developed a new method of image reconstruction using an artificial neural network [13, 14]. This method does not require the presence of cylindrical symmetry.
Kitsopoulos and co-workers developed another imaging method [15] called Velocity Slice Imaging (VSI). In this method, instead of detecting the entire Newton sphere for the ion cloud, only its central slice is detected. Here, no cylindrical symmetry is required to obtain the momentum distribution of the ions generated. This method does not require any inversion algorithms and provides cleaner images. In VMI, a very high extraction voltage is given to pancake the image on the 2D detector. On the other hand, in VSI, the ion cloud is stretched in time. Kitsopoulos _et al._ used delayed pulsed extraction and a wire mesh on the extractor to keep the region between the repeller and extractor field free during the expansion. Later, Suits _et al._[16] implemented a new design similar to VMI with lower extraction voltage in DC mode. They also used an array of lenses to stretch the molecular cloud in time inside the acceleration region, improving the image resolution. The VSI technique gained rapid popularity among crossed molecular beam experiments. Lin _et al._[17] used it for the first time for a crossed molecular beam experiment. Nandi _et al._ adapted this technique for low-energy electron collision experiments [18]. Later, several groups [19, 20, 21, 22, 23] used this technique in dissociative electron attachment (DEA) experiments. Over time, different modifications have been made to improve the resolution of direct and sliced imaging techniques [24, 25, 26, 27].
Throughout the last three decades, these imaging techniques have been optimised for better resolution, minimising the noise, and increasing the magnification capability while maintaining the VMI condition to study very low-energy ions or electrons. In all these experiments, crossed-beam geometry is used for creating the interaction volume. Typically, for the photodissociation and photoionisation experiments, the light beam is focused in the interaction region, confining the interaction volume to its Rayleigh range. In charged particle interaction experiments, by contrast, the projectile beam is not necessarily focused. This beam interacts with the background gas along its path in addition to the relatively denser target beam. This results in a far-extended spatial spread in the interaction volume. In such cases, the ion momentum imaging spectrometer has to handle this extended spatial spread in the ion generation and keep reasonable imaging resolution using electrostatic lenses. For an effusive molecular beam, the density of the background gas that comprises mainly the target molecules is comparable to the in-beam target density. As a result, the non-negligible contribution from this background is difficult to eliminate from the measured momentum image. However, this extended region of ion generation affects the quality of the image and acts as a source of noise, degrading the imaging resolution. This effect becomes adverse when the momentum images are magnified beyond a certain size, especially for the processes with low kinetic energy release, where magnification is necessary to resolve the image.
Here we show that one can obtain the optimised imaging condition for a given initial momentum distribution where the extended volume of the ion generation does not limit the quality of the momentum image. However, these optimised conditions are very specific to the initial kinetic energy magnitude as well as the magnification of the image. On the other hand, by subtracting the background signal directly, one can suppress this effect to a reasonable extent. We also show that using a supersonic molecular beam target reduces this effect substantially for a much larger range of image magnification.
## II Experimental setup
We have used two VSI spectrometers with two types of target-generating mechanisms. The first set-up uses the effusive molecular beam from a capillary array, while the second one uses the supersonic molecular expansion from a multistage skimmer assembly to prepare the molecular target. The details of these setups are given below.
### A. Set-up 1
Figure 1 (a) shows the VSI spectrometer with four lens electrodes, the details of which are given elsewhere[28]. Here, we briefly describe it. This setup consists of an interaction region spanned by a set of two electrodes, namely a pusher and a puller, separated by 20 mm. The puller electrode has a molybdenum wire mesh with 64% transmittance to prevent the field penetration of the accelerating potentials into the interaction region. The acceleration region consists of a four-element electrostatic lens assembly to guide the ions through a short flight tube (10 mm long) towards the detector. The first and third lens electrodes are 6 mm thick, and the second and fourth are 2 mm thick. The separation among various electrodes is shown in the figure. All the lens electrodes, along with the flight tube entrance, have a 40 mm diameter aperture. The lens assembly is used to control the size and space focusing of the ion cloud by adjusting the potential on each of the electrodes. The spectrometer uses a molecular target prepared using an effusive molecular beam generated by a capillary array of length 10 mm. Each capillary of the array has a 100 \(\upmu\)m diameter. The molecular beam is coaxial with the spectrometer axis. We term this mode of operation the crossed-beam mode. A Granville-Phillips gas regulator is used before the capillary to introduce the gas inside the chamber in a regulated manner for maintaining the effusive flow condition, where intermolecular collisions are negligible compared to collisions with the wall. An MKS Baratron pressure gauge is used to measure the pressure behind the capillary. The chamber pressure is measured using an ionisation gauge (Granville-Phillips). The setup also has an arrangement for filling the vacuum chamber with the target gas at a given pressure. We term this the static gas mode of operation. The low-energy electron beam is produced using a home-built thermionic electron gun. The gun operates in a pulsed mode with an adjustable repetition rate in the range 100 Hz to 10 kHz. The electron beam is collimated in the interaction region using a pair of magnet coils mounted in the Helmholtz geometry outside the vacuum chamber. The electron current is measured using a home-built Faraday cup, mounted coaxially with the electron gun on the opposite side of the interaction region, as shown in Figure 1 (a). A 2D position-sensitive microchannel plate
(MCP) based detector in a chevron configuration is used to detect the ions. The MCP detector is followed by the Phosphor screen (P43). A CCD camera mounted outside the chamber is used to capture the images of the Phosphor screen. The position information of the ion hits is determined in the offline analysis of the captured images using programs written in Matlab.
Typical operating pressure is a few hundred mTorr behind the capillary, resulting in a background pressure of 5 \(\times\) 10\({}^{-7}\) to 1 \(\times\) 10\({}^{-6}\) Torr in the VSI spectrometer region.
### B. Set-up 2
The schematics of this setup are shown in Figure 1(b). The spectrometer used in this setup is similar to setup-1, except that the interaction region has two additional ring electrodes mounted at 5 mm from the pusher and puller electrodes each. The potential divider arrangement across the pusher and puller electrodes via these ring electrodes is used to apply a uniform electric field across the interaction region. The puller electrode is equipped with a molybdenum wire mesh of 64% transmittance. The lens electrodes have varying apertures of diameter 34 mm, 36 mm, 38 mm, and 40 mm, separated by 5 mm from each other with a thickness of 1 mm, followed by an 80 mm long flight tube.
Here, a two-stage skimmer assembly is used to prepare the supersonic molecular target beam coaxial to the VSI spectrometer axis. The setup consists of three vacuum chambers. These chambers are separated by conical skimmers of diameter 1 mm with a separation of 100 mm. The first chamber houses the pulsed valve (Even-Lavie high-temperature unmounted valve) with a nozzle of diameter 250 \(\upmu\)m at 5 mm from the first skimmer aperture. The third chamber houses the VSI spectrometer, as shown in Figure 1(b). Typical operating pressures in the three chambers with the pulsed valve operating at 5 bar pressure, 1 kHz repetition rate, and 25 \(\upmu\)s pulse width are \(1\times 10^{-4}\), \(5\times 10^{-6}\), and \(1\times 10^{-7}\) Torr, respectively. With appropriate delays, the pulsed valve is operated synchronously with the electron gun and pusher pulses.
Figure 1: Schematics of the VSI spectrometers used in the experiments with (a) effusive beam (setup-1) and (b) supersonic beam (setup-2) as the target.
## III Simulations
We have carried out the VSI measurements in both setups. We have also performed ion-trajectory simulations using SIMION 8.2 to investigate the spectrometer's performance under various operating conditions for both setups. In this work, we show all images in terms of pixels instead of momentum values to demonstrate the image magnification. However, the kinetic energy and angular distribution are determined from the momentum images, which are obtained by the appropriate transformation of the pixel images.
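While the offline analysis here is done with Matlab programs, the pixel-to-momentum transformation mentioned above can be sketched in a few lines of Python; the single momentum-per-pixel calibration factor and the angle convention (electron beam along the vertical image axis) are our assumptions for illustration only:

```python
import numpy as np

AMU = 1.66053906660e-27   # kg
EV = 1.602176634e-19      # J

def ke_and_angle_from_slice(image, mom_per_px, mass_amu, centre=None):
    # convert a central-slice pixel image into kinetic-energy and angular
    # distributions, weighting each pixel by its count
    ny, nx = image.shape
    cy, cx = centre if centre is not None else ((ny - 1) / 2.0, (nx - 1) / 2.0)
    yy, xx = np.mgrid[0:ny, 0:nx]
    px = (xx - cx) * mom_per_px
    py = (yy - cy) * mom_per_px
    p = np.hypot(px, py)
    ke_ev = p**2 / (2.0 * mass_amu * AMU) / EV
    theta = np.degrees(np.arctan2(px, py)) % 360.0   # angle w.r.t. the assumed beam axis
    w = image.ravel()
    ke_hist, ke_edges = np.histogram(ke_ev.ravel(), bins=100, weights=w)
    ang_hist, ang_edges = np.histogram(theta.ravel(), bins=72, range=(0, 360), weights=w)
    return (ke_hist, ke_edges), (ang_hist, ang_edges)
```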
Electrostatic lenses are best suited to magnify momentum images of the ions generated along the spectrometer axis. The spatial spread in the small interaction volume, the overlapped volume of the molecular and electron beam, can be nullified by the stack of lenses. However, for setup-1, any ion generated in front of the puller electrode aperture (diameter 40 mm) along the path of the electron beam can be extracted to the detector. These ions will experience varying lensing forces based on their position about the spectrometer axis as they would be generated during the electron beam passage through the background gas in the effusive beam set-up (setup-1). To simulate the effect of such ions on the measured momentum images, we have considered the ions source in the simulations as a cylindrical volume with 40 mm length and 1mm diameter along the electron beam path. This cylindrical volume has been kept symmetric about the spectrometer axis. We have used appropriate algorithms for the SIMION platform for applying time-dependent potentials on the electrodes to mimic the delayed extraction potential on the pusher with respect to ion generation. We have also used the initial uncertainty of 100 ns in the ion generation instances to incorporate the effect of the electron pulse width. We note the time of arrival of the ions along with their position on the 2D detector from the simulation for further analysis using a Matlab-based program. Using the ToF of the ions, we obtain several velocity slice images around the centre of the ToF signal with a time window of 80 ns. Among all the images, the image with the largest diameter is chosen as the central slice, as this would correspond to the ions with the maximum velocity in the plane parallel to the detector plane. We have also independently verified the initial momentum distribution of these ions and found it consistent with the Newton sphere's central slice.
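A simple way to automate the slice selection just described is to scan the fixed 80 ns window across the ToF peak and keep the slice with the largest radial extent; the helper below is our own sketch and assumes hit positions are already referenced to the image centre:

```python
import numpy as np

def central_slice(positions, tof, window=80e-9, n_scan=11):
    # positions: (N, 2) detector hits relative to the image centre (any length unit)
    # tof: (N,) arrival times in seconds
    t0 = np.median(tof)
    best_sel, best_radius = None, -1.0
    for tc in t0 + (np.arange(n_scan) - n_scan // 2) * (window / 2):
        sel = np.abs(tof - tc) < window / 2
        if np.any(sel):
            radius = np.hypot(positions[sel, 0], positions[sel, 1]).max()
            if radius > best_radius:
                best_radius, best_sel = radius, sel
    return best_sel   # boolean mask of the hits in the chosen central slice
```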
In all simulations, we have taken ions of mass 16 amu with initial kinetic energies drawn from Gaussian distributions of mean 0.4 eV and FWHM 0.2 eV, and of mean 1.5 eV and FWHM 0.5 eV. These distributions are similar to those obtained for the O\({}^{-}\) fragment from DEA to N\({}_{2}\)O at 2.3 eV and DEA to O\({}_{2}\) at 6.5 eV, respectively, for a typical incoming electron energy resolution of 1 eV FWHM[18]. The simulations are carried out in two sets: 1) for ions within a sphere of 1 mm diameter at the centre of the interaction region, representing the background-free condition (Set-I) and 2) for ions within the above-mentioned cylindrical volume representing the background in the actual experiment (Set-II). In all cases, the initial velocity distribution is taken as isotropic about the origin. The number of ions used for simulations in
both sets is consistent with the observed counts from the corresponding regions in the actual experiments.
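For completeness, initial conditions of this kind can be generated as follows; this is a sketch in Python rather than the SIMION/Matlab tool chain used here, and the FWHM-to-sigma conversion and the clipping of negative energies are our own choices:

```python
import numpy as np

def sample_ions(n, mean_ev=0.4, fwhm_ev=0.2, mass_amu=16, seed=0):
    # kinetic energy drawn from a Gaussian; emission directions isotropic
    rng = np.random.default_rng(seed)
    sigma = fwhm_ev / 2.3548                      # FWHM -> standard deviation
    ke = np.clip(rng.normal(mean_ev, sigma, n), 1e-6, None) * 1.602176634e-19   # J
    speed = np.sqrt(2.0 * ke / (mass_amu * 1.66053906660e-27))                  # m/s
    cos_t = rng.uniform(-1.0, 1.0, n)             # isotropic: uniform in cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return speed[:, None] * np.column_stack([sin_t * np.cos(phi),
                                             sin_t * np.sin(phi), cos_t])

velocities = sample_ions(10000)   # e.g. mass 16 amu ions at 0.4 eV mean energy
```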
## IV Results and Discussion
We have carried out simulations of ion trajectories for various imaging conditions. We have also carried out the actual imaging measurements for the ions generated in DEA under different optimised conditions. In these VSI experiments using the setups described earlier, the slicing is carried out using a pulsed voltage of a fixed width (80 ns). The spread in the ToF signal has some effect on the observed resolution of the image, as the fixed-width slicing accesses the central part of the Newton sphere to different extents depending on that spread. Under different voltage conditions, this ToF spread changes, affecting the effective momentum resolution.
Figure 2 shows the simulated slice images for the ions with the initial kinetic energy of 0.4 eV from Set-I. Figures 2(a) and 2(b) are obtained for two different lensing conditions for image magnifications. Figure 2(c) shows the kinetic energy distribution obtained from the slice images. The magnified image gives a better kinetic energy distribution, consistent with the initial distribution of width about 0.2 eV centred at 0.4 eV. This shows the effect of the spread in the ToF on the imaging using a fixed-width slicing technique.
Below we describe the various schemes we have implemented to minimise the effect of background signal on the slice images.
Figure 2: (a) and (b) are the simulated images for mass 16 amu ions with the initial kinetic energy of 0.4 eV and FWHM of 0.2 eV with the isotropic angular distribution under two different lensing conditions used for the spectrometer in setup-1 for ions generated from Set-I (please refer to the text). (c) shows the corresponding kinetic energy distribution obtained from these images. Squares (\(\blacksquare\)) for image (a) and circles (\(\bullet\)) for image (b). (d), (e) show the ToF spread of ions, and the red lines show the 80 ns slice which corresponds to pixel images (a) and (b), respectively.
### _Minimising the effect of the background using lensing condition_
Due to the extended nature of the ion source, a spectrometer's velocity-focusing conditions need to be tweaked by playing with the potentials on various electrodes. We have performed this operation to obtain the best possible image that would have the minimum effect from the background. Figures 3 and 4 show the results of such simulations for the ions of mass 16 amu created with isotropic velocity distribution with the initial kinetic energy of 0.4 eV (FWHM 0.2 eV) and 1.5 eV (FWHM of 0.5 eV), respectively, for three different magnifications.
Figures 3 (a), (e), and (i) show the images obtained for the ions of 0.4 eV kinetic energy from the Set-I at three different magnification conditions, whereas Figures 3 (b), (f), and (j) show the images simulated for ions of same energy but with Set-I and Set-II together, mimicking the actual experimental situation. Figures 3 (c), (g) and (k) show the kinetic energy distribution obtained from the corresponding images integrated over all angles. The corresponding angular distributions are shown in Figures 3 (d), (h) and (l). The angular distribution and kinetic energy distribution show the effect of the background signal, which becomes worse for the magnified images. The images themselves show artefacts for higher magnifications arising due to signals from the background gas. The angular distributions and
Figure 3: Simulation result of VSI of mass 16 amu ions with 0.4 eV kinetic energy and 0.2 eV FWHM with the isotropic angular distribution in setup-1. (a), (e) and (i) show the image in the absence of the background under different lensing conditions. (b), (f), and (j) are the same images in the presence of a background. (c), (g), and (k) show the comparison of the kinetic energy distribution in the presence and in the absence of the background. (d), (h), and (l) show the comparison of the angular distribution for the respective cases. In (c), (d), (g), (h), (k), and (l) the squares (\(\blacksquare\)) are for images (a), (e), and (i) and the circles (\(\bullet\)) for images (b), (f), and (j). All images are plotted in pixels.
kinetic energy distributions are used to deduce information about the molecular dynamics underlying the dissociation processes. These simulations show the limitations of such imaging methodologies due to the background gas present in the apparatus. From these simulations, we infer that for imaging low-energy ions, we have to minimise the background. We also note that these artefacts are imaging condition-dependent. Even when the same magnification is achieved using different lensing conditions, those conditions generate different artefacts. Here we have shown only one lensing condition per magnification.
Similar simulations are also carried out for the ions of mass 16 amu created with isotropic velocity distribution with the initial kinetic energy of 1.5 eV and FWHM of 0.5 eV. The results are shown in Figure 4. For ions with higher initial kinetic energies, here 1.5 eV, we obtain a decent-quality image. However, the imaging condition must still be chosen carefully: as shown in Figure 4, among the three conditions, only condition 1 (Figures 4 (a) and (b)) produces an image for which the result without background matches fairly well with the result in the presence of background in terms of kinetic energy and angular distribution (Figures 4 (c) and (d)).
Figure 4: Simulation result of VSI of mass 16 amu ions with 1.5 eV kinetic energy and 0.5 eV FWHM with the isotropic angular distribution in setup-1. (a), (e) and (i) show the image in the absence of the background under different lensing conditions. (b), (f), and (j) are the same images in the presence of a background. (c), (g), and (k) show the comparison of the kinetic energy distribution in the presence and in the absence of the background. (d), (h), and (l) show the comparison of the angular distribution for the respective cases. In (c), (d), (g), (h), (k), and (l), the squares (\(\blacksquare\)) are for images (a), (e), and (i) and the circles (\(\bullet\)) for images (b), (f), and (j). All images are plotted in pixels.
As can be seen from these simulations, for ions with a given initial energy, depending on the spectrometer geometry, we can obtain the optimised voltage condition where the spatially extended source from the background causes minimum distortion to the slice image. However, this situation worsens for different magnifications. We have also found that the optimised voltage condition for a given magnification that minimises the effect of background is applicable for only a limited initial kinetic energy range (typically up to 2 eV) for a given ion. This is a very difficult solution to implement in practice for the ions generated with a wider range of initial kinetic energy, which is the case for many polyatomic molecules. In such cases, the newly found imaging condition for different kinetic energy ranges would need a fresh calibration.
### Subtracting the background contribution
Another possible way of addressing this issue is to measure the background contribution separately and then subtract it with an appropriate normalisation. We implemented this scheme experimentally for the DEA reaction in O\({}_{2}\). We have measured the VSIs for O\({}^{-}\) from O\({}_{2}\) obtained from DEA at 6.5 eV electron energy [18, 29]. The reaction channel is
\[\mathrm{O}_{2}+\mathrm{e}\to\mathrm{O}^{-}+\mathrm{O}\,(^{3}\mathrm{P})\]
The angular distribution of O\({}^{-}\) shows four lobes with very low counts at 0\({}^{\circ}\) and 180\({}^{\circ}\) with respect to the electron beam direction [18]. The images obtained for different magnifications are shown in Figure 5. The electron beam direction in all these measurements is from top to bottom. Due to the presence of the transverse magnetic field, the O\({}^{-}\) ion trajectories bend in one direction. This shifts the momentum image to one side of the spectrometer axis and introduces distortion [28]. This distortion is particularly stark for higher kinetic energy ions as they tend to travel farther from the spectrometer axis. This is the reason for not having similar intensity on the left and right sides of the image. For the image analysis, we have considered only half of the image obtained close to the centre of the detector, which is distorted the least.
For the first dataset (Figure 5 (g)), voltages on the electrodes are optimised such that the image can be taken under the best spatial focusing condition. The image shows an angular distribution similar to that reported by earlier studies [18]. We have changed the electrode voltages to magnify the images (Figures 5 (h) and (i)). The magnified images show some artefacts, and as we increase the magnification, these artefact patterns change. Based on our simulations, we infer that the artefacts observed in the magnified images are mainly from the background and that the flat angular distribution obtained in Figure 5 (i) is due to the heavy accumulation of artefacts in the 90\({}^{\circ}\) and 270\({}^{\circ}\) directions. Figure 5 (a), which is taken under the best space-focusing condition, does not show any artefacts due to the perfect superposition of the background contribution (Figure 5 (d)) with the image obtained from the main beam. In this setup, the background contribution is almost 1/3 of the total count obtained in the presence of the effusive beam.
However, even after having very good statistics, these subtracted images are only partially free of background effects. This is mainly because we mimic the background gas contribution in the experiment by recording images in the static gas mode of operation. This condition is fixed by flooding the vacuum chamber with the target gas at the same pressure as that measured by the ionisation gauge while carrying out the crossed-beam measurements. This is not necessarily an accurate method, as the gauge is mounted far away from the interaction region. Since the interaction region cannot have the same effective pumping speed as that obtained in the part of the vacuum chamber where the gauge is mounted, the contribution from the extended region during the crossed-beam measurement would always be higher. We see this in the subtracted images in Figure 5.
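Schematically, the subtraction amounts to scaling the static-gas image to the estimated background level in the crossed-beam image and subtracting pixel by pixel. In the experiment the scale is set by matching the operating pressures; the corner-based fallback in the sketch below is purely illustrative and not the procedure used here:

```python
import numpy as np

def subtract_background(beam_image, static_image, scale=None):
    # scale the static-gas (background) image and subtract it from the
    # crossed-beam image; negative counts are clipped to zero
    if scale is None:
        corner = (slice(0, 40), slice(0, 40))   # assumed signal-free detector region
        scale = beam_image[corner].sum() / max(static_image[corner].sum(), 1.0)
    return np.clip(beam_image - scale * static_image, 0.0, None)
```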
Figure 6 (b) shows the kinetic energy distribution of O\({}^{-}\) for all three conditions, and the width of the distribution is the same for all of them. This shows that with increasing magnification we are still in a good spatial focusing condition, but conditions 2 and 3 are not good enough to focus the ions generated in the extended interaction region. This is consistent with our discussion in section A. Moreover, the angular distributions of all three subtracted images (Figure 6 (a)) are different. The one with the best
Figure 5: VSIs obtained for O\({}^{-}\) from O\({}_{2}\) from DEA at 6.5 eV. (a), (b) and (c) the images obtained in the crossed-beam mode under three different lensing conditions. (d), (e) and (f) the images obtained in the static gas mode (background) under the same conditions. (g), (h) and (i) the background-subtracted images obtained by subtracting images (d), (e), and (f) from images (a), (b), and (c), respectively, after appropriate normalisation.
focusing condition shows the angular distribution nearest to the earlier reports. This implies that care needs to be taken while interpreting such data when the background signal is subtracted.
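For reference, the angular and kinetic energy distributions discussed here can be obtained from a (background-subtracted) slice image by binning the counts in polar coordinates about the image centre. The sketch below is schematic: the centre position, the momentum calibration factor and the bin counts are assumed inputs, and the angle is measured from the (vertical) electron-beam direction.

```python
import numpy as np

def polar_distributions(image, centre, calib, n_theta=72, n_energy=100):
    """Bin a slice image in angle and in (calib * r)**2, which is proportional
    to the ion kinetic energy for a fixed ion mass."""
    y, x = np.indices(image.shape)
    dx, dy = x - centre[0], y - centre[1]
    r = np.hypot(dx, dy)
    theta = np.degrees(np.arctan2(dx, dy)) % 360.0  # 0 deg along the beam axis

    angular, _ = np.histogram(theta, bins=n_theta, range=(0.0, 360.0), weights=image)
    energy, edges = np.histogram((calib * r) ** 2, bins=n_energy, weights=image)
    return angular, energy, edges
```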
### Reducing the background gas contribution.
In both the schemes discussed above, we see that the background gas contribution to the image severely limits the performance of the VSI spectrometer for charged-particle collision studies. A third possible solution for such experiments is to reduce the contribution from the background gas by modifying the target properties. Here, we use a supersonic molecular beam as the target, which has a well-defined, relatively high-density beam region, while the corresponding background gas has a very small number density. This can be seen from the ion gauge pressure reading of the spectrometer chamber with the molecular beam on and off. In our experiment, in the effusive beam setup the background pressure increases from 10\({}^{-8}\) Torr to 10\({}^{-6}\) Torr, whereas in the supersonic molecular beam setup the pressure increases from 10\({}^{-8}\) Torr to 10\({}^{-7}\) Torr, at least an order of magnitude lower, while the overall count rate remains almost identical in the two cases.
To test the magnification capability and background effect, we have obtained the VSIs of O\({}^{-}\) from DEA to N\({}_{2}\)O and O\({}_{2}\) in both setups. The reaction channel for DEA to N\({}_{2}\)O at 2.3 eV is
\[\text{N}_{2}\text{O}+\text{e}\to\text{O}^{-}+\text{N}_{2}(\text{X}^{1}\Sigma_{ \text{g}}^{+})\]
O\({}^{-}\) has a mean kinetic energy of 0.4 eV [30] with a spread of \(\pm\)0.2 eV, so it shows a clear ring in the pixel image. The images obtained from both setups with two different magnifications are shown in Figure 7. For setup-1, as we can see from Figure 7 (a), we obtained a clear ring consistent with the reported data. However, on magnifying it, the image loses all of these features (Figure 7 (b)). This is consistent with the simulation results presented earlier, and we attribute it to the contribution from the background gas. We have imaged the same ions in setup-2, where the target is a supersonic molecular beam and the background gas pressure is an order of magnitude lower than for the effusive beam of setup-1. We have taken images for two different magnification conditions, the
Figure 6: (a) Angular distribution of O\({}^{-}\) from O\({}_{2}\) at 6.5 eV for three images obtained in setup-1 after background subtraction given in Figure 5 and (b) corresponding kinetic energy distributions. The squares (\(\blacksquare\)) are for image (g), the circles (\(\bullet\)) are for image (h), and the triangles (\(\blacktriangle\)) are for image (i) from Figure 5.
same as for setup-1 (Figure 7 (c) and (d)). Unlike setup-1, in this case both images show similar distributions. These findings are also consistent with our simulation results, confirming that by suppressing the background we can easily magnify images of ions with low initial kinetic energy.
We further investigate the magnification capability and the effect of the background by imaging the O\({}^{-}\) ions from DEA to O\({}_{2}\). In this scheme, the electrode voltages are first optimised to get the best spatially focused image (Figure 8 (a)). The image is then magnified by changing the electrode voltages (Figure 8 (b) and (c)). All three images have very little contribution from the background. Here, the transverse magnetic field distorts the right side of the image; since this setup has a longer flight tube, the O\({}^{-}\) ions fly over a longer distance and hence spend a longer time in the magnetic field. This causes more prominent distortion of the image compared to setup-1, which has a shorter flight tube.
The angular distribution shows a similar pattern for all three conditions (Figure 9 (a)). The kinetic energy distribution also shows a similar spread, which indicates that all of the images are taken under the best spatial focusing condition. This also shows that we can magnify images without compromising the momentum resolution. We have also found that the detector size limits the extent of magnification achievable for the spectrometer geometry used in these experiments. The kinetic energy distribution for the setup-2 images is narrower than for the setup-1 images, owing to the low thermal energy spread of the molecules in the supersonic beam. Overall, supersonic beams produce superior images compared to effusive beams under all magnifications.
Figure 8: (a), (b), and (c) shows the images of O\({}^{-}\) from O\({}_{2}\) from DEA at 6.5 eV in the crossed-beam mode under three different lensing conditions.
Figure 7: Experimentally obtained VSIs of O\({}^{-}\) from N\({}_{2}\)O at 2.3 eV electron energy. (a) and (b) are taken in setup-1, and (c) and (d) are taken in setup-2, under different magnification conditions.
## V Conclusion
In this work, we have shown how the presence of background gas creates artefacts in velocity slice imaging, particularly in charged-particle interaction studies where the projectile beam is not focused. These artefacts change with the magnification of the momentum imaging. Hence, magnifying an image in a crossed-beam setup where an effusive beam is used as the target source is a challenging task. If not done carefully, these artefacts can lead to an erroneous interpretation of the data. This problem can be solved by minimising the background gas. One way to achieve this is by using a supersonic jet as the target beam, which gives cleaner images under all magnifications and a narrower kinetic energy spread. However, generating a supersonic molecular beam with sufficient number density is a challenge for molecules with a very low vapour pressure. The situation would be even more difficult for studying processes like DEA, which have relatively low cross-sections. In such a scenario, an effusive molecular beam is easier to work with, and appropriate focusing conditions then need to be worked out. For a moderate to high kinetic energy range, we can operate the spectrometer in a medium magnification range where the static gas contribution has minimum effect on the crossed-beam images.
## Acknowledgments
S.D. and V.S.P. acknowledge the financial support from the Department of Atomic Energy, India, under Project Identification No. RTI4002. S.D. wishes to thank S. Swain for teaching the operations of the experimental setup and analysis processes. S.D. also thanks S. Tare and Y. Upalekar for their technical support.
Figure 9: (a) Angular distribution of O\({}^{-}\) from O\({}_{2}\) at 6.5 eV for all three images obtained in setup-2 in the crossed-beam mode shown in Figure 8, and (b) corresponding kinetic energy distributions. The squares (\(\blacksquare\)) are for image (a), the circles (\(\bullet\)) are for image (b), and the triangles (\(\blacktriangle\)) are for image (c) from Figure 8. |
2308.09384 | Finitely generated bimodules over Weyl algebras | Let $A$ be the $n$-th Weyl algebra over a field of characteristic zero, and
$\varphi:A\rightarrow A$ an endomorphism with $S = \varphi(A)$. We prove that
if $A$ is finitely generated as a left or right $S$-module, then $S = A$. The
proof involves reduction to large positive characteristics. By holonomicity,
$A$ is always finitely generated as an $S$-bimodule. Moreover, if this bimodule
property could be transferred into a similar property in large positive
characteristics, then we could again conclude that $A=S$. The latter would
imply the Dixmier Conjecture. | Niels Lauritzen, Jesper Funch Thomsen | 2023-08-18T08:31:32Z | http://arxiv.org/abs/2308.09384v2 | # Finitely generated bimodules over Weyl algebras
###### Abstract
Affine space over the complex numbers is simply connected in the sense that it does not afford any non-trivial etale finite covers. In this paper we prove the non-commutative analogue of this statement: an endomorphism of a Weyl algebra provides a natural holonomic bimodule structure on the Weyl algebra. If this bimodule is finitely generated from the left or right, then the endomorphism is an automorphism. Finite generation of this bimodule in large positive characteristics is equivalent to the Dixmier conjecture.
## Introduction
We prove that finite endomorphisms of Weyl algebras over a field of characteristic zero are automorphisms. More precisely, let \(A\) be the \(n\)-th Weyl algebra over a field of characteristic zero and \(\varphi:A\to A\) an endomorphism with \(S=\varphi(A)\). We show that if \(A\) is finitely generated as a left or right \(S\)-module, then \(S=A\). This result answers the last question posed in [12] affirmatively. Notice that Bavula [5] has proved in general that \(A\) is holonomic as an \(S\)-bimodule (see [13] for an interpretation of this result using tensor products of bimodules) and therefore that \(A\) is finitely generated (in fact cyclic) as an \(S\)-bimodule.
Finite generation of \(A\) as an \(S\)-bimodule in large positive characteristics is equivalent to the Dixmier conjecture, which states that \(S=A\) for arbitrary endomorphisms of \(A\) in characteristic zero (see Lemma 3.1 of this paper). The Dixmier conjecture is equivalent to the Jacobian conjecture (see [4, p. 297], [9] and [16]), which has been an open problem for more than 80 years [10, p. 301]. We use finite left or right generation as a poor man's condition for a bimodule to be finitely generated. These properties reduce nicely to positive characteristics. We do not know how to deduce finite generation of \(A\) as an \(S\)-bimodule in large positive characteristics from its finite generation in characteristic zero.
Our result may be viewed as a non-commutative analogue of the simply connectedness of affine space over fields of characteristic zero, i.e., that any finite etale cover of affine \(n\)-space over a field of characteristic zero is trivial. For the proof we reduce to positive characteristic, where the \(n\)-th Weyl algebra is an Azumaya algebra and bimodules are categorically equivalent to modules over the center.
However, in order to make this work one needs a suitable version of the simply connectedness of affine space in positive characteristic (see Theorem 4.6)1. This involves reduction to characteristic zero and occupies Section 4 in the last part of the paper. Section 4 does not depend on the previous sections and can be read separately. Along the way, we give a very simple proof (see Theorem 4.3)
of the result [4, (1.4) Corollary] that if \(\varphi\) is an automorphism of affine \(n\)-space, then \(\deg(\varphi^{-1})\leq\deg(\varphi)^{n-1}\).
At the end of the paper we sketch how degree bounds for Grobner bases also lead to the simply connectedness in positive characteristic.
The most accessible way to handle transitions between characteristic zero and positive characteristic in our setup is through ultraproducts. We begin by briefly introducing this approach.
## 1 Ultraproducts
In this section \(X\) denotes a non-empty set and \(R\) a commutative ring.
### Filters
A _filter_ (see also [7, Chapter I, §6]) on \(X\) is a subset \(\mathcal{F}\subset 2^{X}\) with \(\emptyset\not\in\mathcal{F}\), \(A\cap B\in\mathcal{F}\) if \(A,B\in\mathcal{F}\) and \(B\in\mathcal{F}\) if \(A\in\mathcal{F}\) and \(B\supset A\), where \(A\) and \(B\) are subsets of \(X\).
A maximal filter is called an _ultrafilter_. The set \(\mathcal{F}_{a}=\{S\subset X\mid a\in S\}\) of all subsets of \(X\) containing a specific \(a\in X\) is an ultrafilter on \(X\). Such a filter is called _principal_. The properties below follow (almost) immediately from the definition of a filter.
1. Every filter is contained in an ultrafilter (by Zorn's lemma).
2. If \(\mathcal{S}\) is a non-empty collection of subsets of \(X\) and \(S_{1}\cap\cdots\cap S_{n}\neq\emptyset\) for finitely many \(S_{1},\ldots,S_{n}\in\mathcal{S}\), then \[\{Z\subset X\mid Z\supset S_{1}\cap\cdots\cap S_{n}\text{ for finitely many }S_{1},\ldots,S_{n}\in\mathcal{S}\}\] is a filter containing \(\mathcal{S}\).
3. An ultrafilter \(\mathcal{U}\) has the property that \(A\in\mathcal{U}\) or \(X\setminus A\in\mathcal{U}\) for every subset \(A\subset X\).
All ultrafilters on finite sets are principal. On infinite sets there exist non-principal ultrafilters. In fact,
1. An ultrafilter contains the (filter of) cofinite subsets on an infinite set if and only if it is non-principal.
### Ultraproducts
Let \((S_{i})_{i\in X}\) be a family of sets. A filter \(\mathcal{F}\neq\emptyset\) on \(X\) defines an equivalence relation on \(\prod_{i\in X}S_{i}\) given by \((s_{i})\sim(t_{i})\) if and only if \(\{i\mid s_{i}=t_{i}\}\in\mathcal{F}\). We use the notation
\[\prod_{\mathcal{F}}S_{i}:=\left(\prod_{i\in X}S_{i}\right)\bigg{/}\sim.\]
If \(S_{i}\) are commutative rings, then \(I_{\mathcal{F}}=\{(s_{i})\mid(s_{i})\sim 0\}\) is an ideal in \(\prod_{i\in X}S_{i}\) and
\[\prod_{\mathcal{F}}S_{i}=\prod_{i\in X}S_{i}\bigg{/}I_{\mathcal{F}}.\]
Suppose that \(S_{i}\) are fields. Then \(\mathcal{F}\mapsto I_{\mathcal{F}}\) gives an inclusion preserving correspondence between filters on \(X\) and proper ideals in \(\prod_{i\in X}S_{i}\). The filter corresponding to a proper ideal \(I\) is \(\{Z(s)\mid s\in I\}\), where \(Z(s)=\{i\in X\mid s_{i}=0\}\). In particular, we get that \(\prod_{\mathcal{U}}S_{i}\) is a field if \(\mathcal{U}\) is an ultrafilter on \(X\).
Let \(\operatorname{Specm}(R)\) denote the set of maximal ideals of \(R\) and \(\operatorname{Jac}(R)\) the Jacobson radical of \(R\) i.e., the intersection of all maximal ideals of \(R\).
**Proposition 1.1**.: _Let \(R\) be an integral domain with \(\operatorname{Jac}(R)=\{0\}\), \(X=\operatorname{Specm}(R)\) and \(X_{r}=\{\mathfrak{m}\in X\mid r\notin\mathfrak{m}\}\) for \(r\in R\). Then_
\[\mathcal{F}=\{Z\subseteq X\mid X_{r}\subseteq Z\text{ for some }r\in R\setminus\{0\}\}\]
_is a filter on \(X\). Let \(\mathcal{U}\) be an ultrafilter on \(X\) containing \(\mathcal{F}\). Then \(\prod_{\mathcal{U}}R/\mathfrak{m}\) contains the fraction field of \(R\)._
Proof.: Notice that \(X_{r}=\emptyset\) if and only if \(r=0\) by the assumption \(\operatorname{Jac}(R)=\{0\}\). Therefore \(\emptyset\notin\mathcal{F}\). Let \(Z_{1},Z_{2}\in\mathcal{F}\) with \(X_{r}\subseteq Z_{1}\) and \(X_{s}\subseteq Z_{2}\), where \(r,s\in R\setminus\{0\}\). Then \(rs\in R\setminus\{0\}\) and \(X_{rs}\subseteq Z_{1}\cap Z_{2}\). Clearly, \(Z_{2}\in\mathcal{F}\) if \(Z_{1}\in\mathcal{F}\) and \(Z_{1}\subseteq Z_{2}\).
Let
\[\pi:R\to\prod_{\mathcal{U}}R/\mathfrak{m}\]
be the canonical map. Suppose that \(r\in R\setminus\{0\}\) and \(\pi(r)=0\). Then \(r\in\mathfrak{m}\) for every \(\mathfrak{m}\in Z\) for some \(Z\in\mathcal{U}\) so that \(Z\subseteq X\setminus X_{r}\). Therefore \(X\setminus X_{r}\in\mathcal{U}\) contradicting that \(X_{r}\in\mathcal{U}\). It follows that \(\pi\) is injective and therefore that \(\prod_{\mathcal{U}}R/\mathfrak{m}\) contains the fraction field of \(R\).
For \(F\subset R[x_{1},\ldots,x_{n}]\), we let \(V_{S}(F)=\{\alpha\in S^{n}\mid f(\alpha)=0\text{ for all }f\in F\}\), where \(S\) is an \(R\)-algebra.
**Lemma 1.2**.: _Let \(\mathcal{F}\) be a filter on a set \(X\). Consider \(F=\{f_{1},\ldots,f_{m}\}\subset R[x_{1},\ldots,x_{n}]\) and let_
\[S=\prod_{\mathcal{F}}S_{j},\]
_where \(\{S_{j}\mid j\in X\}\) is a set of \(R\)-algebras. Fix elements \(\gamma_{i}=(\gamma_{i,j})_{j\in X}\) in \(\prod_{j\in X}S_{j}\), for \(i=1,2,\ldots,n\). Then \(([\gamma_{1}],[\gamma_{2}],\ldots,[\gamma_{n}])\in V_{S}(F)\) if and only if there exists \(B\in\mathcal{F}\) such that \((\gamma_{1,j},\gamma_{2,j},\ldots,\gamma_{n,j})\in V_{S_{j}}(F)\) for \(j\in B\)._
Proof.: Notice that if \(f\in R[x_{1},\ldots,x_{n}]\) is a polynomial, then we have the following identity in \(S\)
\[f([\gamma_{1}],[\gamma_{2}],\ldots,[\gamma_{n}])=[\big{(}f(\gamma_{1,j},\gamma_ {2,j},\ldots,\gamma_{n,j})\big{)}_{j\in X}]. \tag{1.2.1}\]
In particular, \(f([\gamma_{1}],[\gamma_{2}],\ldots,[\gamma_{n}])=0\) if and only if there exists a \(B\in\mathcal{F}\), such that \(f(\gamma_{1,j},\gamma_{2,j},\ldots,\gamma_{n,j})=0\) for \(j\in B\). Using that \(\mathcal{F}\) is stable under finite intersections, the claim is a direct consequence of (1.2.1).
## 2 The Weyl algebra
In the following \(\mathbb{N}=\{0,1,2\ldots\}\) denotes the natural numbers. For \(n\in\mathbb{N}\) and an \(n\)-tuple \(a=(a_{1},\ldots,a_{n})\) of elements in a ring \(A\), we use the notation \(a^{v}\) with \(v=(v_{1},\ldots,v_{n})\in\mathbb{Z}^{n}\) to denote
\[a_{1}^{v_{1}}\cdots a_{n}^{v_{n}}\]
if \(v\in\mathbb{N}^{n}\) and \(0\) if \(v\in\mathbb{Z}^{n}\setminus\mathbb{N}^{n}\). We will use \(a\in\mathbb{Z}\) to denote \((a,\ldots,a)\in\mathbb{Z}^{n}\), when it fits the context. For integral vectors \(u=(u_{1},\ldots,u_{n})\) and \(v=(v_{1},\ldots,v_{n})\), we let \(u\leq v\) denote the partial order given by \(u_{1}\leq v_{1},\ldots,u_{n}\leq v_{n}\). For \(a,b\in A\), we let \([a,b]=ab-ba\).
Let \(R\) denote a commutative ring. For \(n\in\mathbb{N}\), \(A_{n}(R)\) denotes the \(n\)-th Weyl algebra over \(R\). This is the (free) \(R\)-algebra generated by \(2n\) variables \(x=(x_{1},\ldots,x_{n}),\partial=(\partial_{1},\ldots,\partial_{n})\) with relations
\[\begin{split}[x_{i},x_{j}]&=0\\ [\partial_{i},\partial_{j}]&=0\\ [\partial_{i},x_{j}]&=\delta_{ij}\end{split} \tag{2.0.1}\]
for \(i,j=1,\ldots,n\). Elements in
\[S_{n}=\{x^{u}\partial^{v}\mid u,v\in\mathbb{N}^{n}\}\subset A_{n}(R)\]
are called standard monomials. We have not been able to track down a proof of the following result (for general commutative rings). We therefore outline a proof (of linear independence) shown to us by J. C. Jantzen.
**Proposition 2.1**.: \(A_{n}(R)\) _is a free \(R\)-module with basis \(S_{n}\)._
Proof.: Using the relations (2.0.1) it follows that \(S_{n}\) spans \(A_{n}(R)\) as an \(R\)-module. To prove linear independence of \(S_{n}\), we let \(A_{n}(R)\) act on the free \(R\)-module with basis
\[M=\{X^{u}D^{v}\mid u,v\in\mathbb{N}^{n}\}\]
through
\[x_{i}\cdot X^{u}D^{v} =X^{u+e_{i}}D^{v}\] \[\partial_{i}\cdot X^{u}D^{v} =X^{u}D^{v+e_{i}}+u_{i}X^{u-e_{i}}D^{v}\]
for \(i=1,\ldots,n\), where \(e_{i}\) denotes the \(i\)-th canonical basis vector. Suppose that
\[\Delta=\sum_{u,v\in\mathbb{N}^{n}}c_{uv}x^{u}\partial^{v}=0\]
in \(A_{n}(R)\). Then acting on \(M\) we get
\[\Delta\cdot X^{0}D^{0}=\sum_{u,v\in\mathbb{N}^{n}}c_{uv}X^{u}D^{v}=0\]
showing that \(c_{uv}=0\) thereby proving the \(R\)-linear independence of \(S_{n}\).
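As a quick sanity check of the relations (2.0.1), one can work in the familiar representation of \(A_{1}\) on polynomials, where \(x\) acts by multiplication and \(\partial\) by differentiation; the sympy sketch below verifies \([\partial,x]=1\) on an arbitrary test polynomial.

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**4 + 2*x + 7          # arbitrary test polynomial

X = lambda g: x * g           # x acts by multiplication
D = lambda g: sp.diff(g, x)   # partial acts by differentiation

# [partial, x] f = partial(x*f) - x*partial(f) equals f, i.e. [partial, x] = 1.
assert sp.expand(D(X(f)) - X(D(f))) == sp.expand(f)
```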
**Definition 2.2**.: _The degree of \(f\in A_{n}(R)\setminus\{0\}\), where_
\[f=\sum_{u,v\in\mathbb{N}^{n}}a_{uv}x^{u}\partial^{v}\]
_is defined as_
\[\deg(f)=\max\{|u|+|v|\mid a_{uv}\neq 0\},\]
_where_
\[|w|=w_{1}+\cdots+w_{n}\]
_for \(w=(w_{1},\ldots,w_{n})\in\mathbb{N}^{n}\). The degree of an endomorphism \(\varphi:A_{n}(R)\to A_{n}(R)\) is defined as_
\[\deg(\varphi)=\max\{\deg(\varphi(x_{1})),\ldots,\deg(\varphi(x_{n})),\deg( \varphi(\partial_{1})),\ldots,\deg(\varphi(\partial_{n}))\}.\]
### The Weyl algebra in positive characteristic
**Theorem 2.3**.: _Suppose that \(\operatorname{char}(R)=p\), where \(p\) is a prime number and let_
\[C=R[x_{1}^{p},\ldots,x_{n}^{p},\partial_{1}^{p},\ldots,\partial_{n}^{p}]\subset A :=A_{n}(R).\]
_Then_
1. _The center of_ \(A\) _is equal to_ \(C\)_._
2. _If_ \(Q_{1},\ldots,Q_{n},P_{1},\ldots,P_{n}\in A\) _satisfy the commutation relations for the Weyl algebra i.e.,_ \[[Q_{i},Q_{j}] =0\] \[[P_{i},P_{j}] =0\] \[[P_{i},Q_{j}] =\delta_{ij}\] _for_ \(i,j=1,\ldots,n\)_, then_ \[\{Q^{\alpha}P^{\beta}\mid\alpha,\beta\in\mathbb{N}^{n},0\leq\alpha,\beta\leq p-1\}\] _is a basis for_ \(A\) _as a module over_ \(C\)_._
3. \(A\) _is an Azumaya algebra over_ \(C\) _and (therefore)_ \(F(N)=N\otimes_{C}A\) _defines an equivalence between the category of_ \(C\)_-modules and the category of_ \(C\)_-linear_ \(A\)_-bimodules (i.e.,_ \(A\)_-bimodules_ \(M\) _satisfying_ \(rm=mr\)_, for_ \(r\in C\) _and_ \(m\in M\)_) with inverse_ \[G(M)=M^{A}=\{m\in M\mid am=ma,\forall a\in A\}.\] _Here_ \(N\) _denotes a_ \(C\)_-module and_ \(M\) \(a\) \(C\)_-linear_ \(A\)_-bimodule. The equivalence preserves finitely generated modules._
Proof.: See [12, Theorem 1.7] for a proof of (i) and (ii).
Let \(A^{e}=A\otimes_{C}A^{\operatorname{op}}\). To prove (iii), we need to show (cf. [11, Theorem III.5.1, 2)]) that the natural map
\[\varphi:A^{e}\to\operatorname{End}_{C}(A) \tag{2.1.1}\]
of \(C\)-algebras given by \(\varphi(a\otimes b)(x)=axb\) is an isomorphism. Notice that \(A^{e}\) and \(\operatorname{End}_{C}(A)\) are free \(C\)-modules of rank \(p^{4n}\). The elements \(\partial_{1}\otimes 1,\ldots,\partial_{n}\otimes 1,1\otimes\partial_{1}, \ldots,1\otimes\partial_{n}\) and \(x_{1}\otimes 1,\ldots,x_{n}\otimes 1,1\otimes x_{1},\ldots,1\otimes x_{n}\) define \(a_{1},\ldots,a_{2n},b_{1},\ldots,b_{2n}\in A^{e}\) with \([a_{i},a_{j}]=[b_{i},b_{j}]=0\) and \([a_{i},b_{j}]=\delta_{ij}\) for \(i,j=1,\ldots,2n\).
As in the proof of Theorem 1.7(_ii_) in [12], it follows that
\[M=\{\alpha^{u}\beta^{v}\mid u,v\in\mathbb{N}^{2n},0\leq u,v\leq p-1\}\subset \operatorname{End}_{C}(A)\]
is a \(C\)-basis of \(\operatorname{End}_{C}(A)\), where \(\alpha=(\varphi(a_{1}),\ldots,\varphi(a_{2n}))\) and \(\beta=(\varphi(b_{1}),\ldots,\varphi(b_{2n}))\). Therefore \(\varphi\) is an isomorphism of \(C\)-modules and \(A\) is an Azumaya algebra over \(C\). This implies that \(F\) and \(G\) form an adjoint pair of inverse equivalences between the category of \(C\)-modules and the category of \(C\)-linear \(A\)-bimodules by [11, Theorem III.5.1, 3)]. Finally, finite generation is a categorical property preserved under equivalences by [1, Proposition 21.8].
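Part (i) can be made concrete by a small computation: in \(A_{1}\) one has \([\partial,x^{p}]=p\,x^{p-1}\), which vanishes modulo \(p\), so \(x^{p}\) is central in characteristic \(p\). The sympy sketch below checks this identity over \(\mathbb{Z}\) in the polynomial representation and verifies that every coefficient of the commutator is divisible by \(p\).

```python
import sympy as sp

p = 5
x = sp.symbols('x')
f = x**3 + 2*x + 1            # arbitrary test polynomial

D = lambda g: sp.diff(g, x)   # partial acts by differentiation

# [partial, x**p] applied to f equals p * x**(p-1) * f, hence vanishes mod p.
comm = sp.expand(D(x**p * f) - x**p * D(f))
assert comm == sp.expand(p * x**(p - 1) * f)
assert all(c % p == 0 for c in sp.Poly(comm, x).all_coeffs())
```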
### Endomorphisms of Weyl algebras
For an arbitrary ring \(S\) and a ring endomorphism \(\varphi:S\to S\), we let \({}_{\varphi}S_{\varphi}\) denote the \(S\)-bimodule \(S\) with multiplication
\[xsy=\varphi(x)s\varphi(y),\]
where \(s,x,y\in S\). Similarly \({}_{\varphi}S\) denotes \(S\) as a left module with multiplication given by \(xs=\varphi(x)s\). Using the notation from Theorem 2.3, we have the following corollary.
**Corollary 2.4**.: _Let \(\varphi:A\to A\) be an \(R\)-algebra homomorphism. Then_
1. \(\varphi(C)\subset C\)__
2. \(\varphi\) _is an isomorphism if and only if_ \(\varphi|_{C}\) _is an isomorphism._
3. _The bimodule_ \({}_{\varphi}A_{\varphi}\) _is_ \(C\)_-linear with_ \(({}_{\varphi}A_{\varphi})^{A}={}_{\varphi}C\)_. Furthermore,_ \({}_{\varphi}A_{\varphi}\) _is finitely generated if and only if_ \(\varphi|_{C}\) _is finite._
Proof.: The first two claims are consequences of Theorem 2.3\((ii)\) with \(P_{i}=\varphi(\partial_{i})\) and \(Q_{i}=\varphi(x_{i})\) for \(i=1,\ldots,n\). For the proof of \((iii)\), we note that the \(C\)-linearity of \({}_{\varphi}A_{\varphi}\) follows from \((i)\) and that \(({}_{\varphi}A_{\varphi})^{A}={}_{\varphi}C\) follows from Theorem 2.3\((ii)\). The last statement in \((iii)\) follows from Theorem 2.3\((iii)\).
**Theorem 2.5**.: _Let \(R\) be a reduced commutative ring. If \(\varphi:A_{n}(R)\to A_{n}(R)\) is an automorphism of \(R\)-algebras, then_
\[\deg(\varphi^{-1})\leq\deg(\varphi)^{2n-1}.\]
Proof.: Using Proposition 2.1 and expanding \(\varphi(x_{i}),\varphi(\partial_{j}),\varphi^{-1}(x_{i}),\varphi^{-1}(\partial_{j})\) in the basis of the standard monomials for \(i,j=1,\ldots,n\), we may assume that \(R\) is finitely generated over \(\mathbb{Z}\) and therefore a Jacobson ring (see [6, Chapter V, §3.4]). Since \(R\) is reduced, \(\operatorname{Jac}(R)=0\) and there exists a maximal ideal \(\mathfrak{m}\) not containing the coefficient of a highest degree standard monomial in the definition of \(\deg(\varphi^{-1})\) (cf. Definition 2.2). But \(k=R/\mathfrak{m}\) is a field of characteristic \(p>0\) and \(\varphi,\varphi^{-1}\) induce inverse automorphisms \(\bar{\varphi},\bar{\varphi}^{-1}\) of \(A_{n}(k)\) with \(\deg(\bar{\varphi}^{-1})=\deg(\varphi^{-1})\) and \(\deg(\bar{\varphi})\leq\deg(\varphi)\).
Now the result follows by restricting \(\bar{\varphi}\) to the center \(C\) of \(A_{n}(k)\) (cf. Corollary 2.4(ii)) and using Theorem 4.3 as \(\deg(\bar{\varphi}|_{C})=\deg(\bar{\varphi})\).
**Remark 2.6**.: Notice that the bound in Theorem 2.5 only depends on \(\deg(\varphi)\) and \(n\). In particular, it does not depend on the (reduced) ring \(R\). We can only prove Theorem 2.5 when \(R\) is reduced. The independence of \(R\) in bounding the degree of the inverse in Theorem 2.5 for nilpotent rings should be equivalent to the Dixmier conjecture in analogy with [3, Theorem (1.1)] and [4, Ch. I, Prop. (1.2)].
The following result is non-trivial and crucial.
**Lemma 2.7**.: _Let \(k\) be a field of characteristic \(p>0\) and \(\varphi\) an endomorphism of \(A_{n}(k)\). Then \(\varphi\) restricts to an etale endomorphism of the center of \(A_{n}(k)\) if \(p>2\deg(\varphi)\)._
Proof.: This is a consequence of [16, Corollary 3.3].
**Remark 2.8**.: In the forthcoming paper [14], we strengthen the bound in Lemma 2.7 to \(p>\deg(\varphi)\).
## 3 The main result
The following lemma is the positive characteristic version of Theorem 3.3, which is our main result. It basically uses the equivalence between bimodules and modules over the center (cf. Theorem 2.3\((iii)\)) to reduce to the commutative case of finite etale endomorphisms of polynomial rings, which is the focus of section 4 and in particular Theorem 4.6. Notice again that it is crucial that an endomorphism of the Weyl algebra restricts to an etale endomorphism of the center if the characteristic is large enough (cf. Lemma 2.7).
**Lemma 3.1**.: _Let \(k\) be a field of characteristic \(p>0\), \(A=A_{n}(k)\) and \(\varphi:A\to A\) an endomorphism of degree \(\leq d\). Then there exists a uniform bound \(D=D(n,d)\) only depending on \(n\) and \(d\), such that if \(p>D\) and \({}_{\varphi}A_{\varphi}\) is finitely generated, then \(\varphi\) is an isomorphism._
Proof.: Let \(C\) be the center of \(A_{n}(k)\). By Lemma 2.7, \(\varphi|_{C}\) is etale for \(p>2d\). The assumption that \({}_{\varphi}A_{\varphi}\) is finitely generated shows that \(\varphi|_{C}\) is finite by Corollary 2.4\((iii)\). Let \(D(n,d)=\max\{2d,N(2n,d)\}\), where \(N(2n,d)\) refers to the uniform bound in Theorem 4.6. Then \(\varphi|_{C}\) is an isomorphism for \(p>D(n,d)\) by Theorem 4.6 and finally we conclude that \(\varphi\) is an isomorphism by Corollary 2.4\((ii)\).
To go to characteristic zero we need the following characterization of (left) generating sets.
**Lemma 3.2**.: _Let \(R\) be a commutative ring, \(A=A_{n}(R)\), \(\varphi:A\to A\) an \(R\)-algebra endomorphism and \(S=\varphi(A)\). Suppose that \(G=\{g_{1},\ldots,g_{m}\}\subset A\) satisfies_
1. \(1\in G\)__
2. _There exists_ \(s_{ijl},t_{ijl}\in S\) _, such that_ \[g_{j}x_{i} =s_{ij1}g_{1}+\cdots+s_{ijm}g_{m}\] (3.0.1) \[g_{j}\partial_{i} =t_{ij1}g_{1}+\cdots+t_{ijm}g_{m}\] (3.0.2) _for_ \(l=1,\ldots,m\) _and_ \(1\leq i,j\leq n\)_._
_Then \(G\) generates \(A\) as a left \(S\)-module i.e.,_
\[A=Sg_{1}+\cdots+Sg_{m}.\]
_Conversely if \(G=\{g_{1},\ldots,g_{m}\}\) is a generating set for \(A\) as a left \(S\)-module with \(1\in G\), then \(G\) satisfies (3.0.1) and (3.0.2) above._
Proof.: Let \(M=Sg_{1}+\cdots+Sg_{m}\). We wish to prove that \(M=A\). Using (3.0.1) we get that \(x^{u}\in M\) for every \(u\in\mathbb{N}^{n}\). Building on this, one shows using (3.0.2) that \(x^{u}\partial^{v}\in M\) for every \(u,v\in\mathbb{N}^{n}\). Therefore we must have \(M=A\). The converse follows by definition of a (left) generating set.
**Theorem 3.3**.: _Let \(K\) be a field of characteristic \(0\), \(A=A_{n}(K)\) and \(\varphi:A\to A\) a \(K\)-algebra endomorphism with \(S=\varphi(A)\). If \(A\) is finitely generated as a left or right module over \(S\), then \(S=A\)._
Proof.: Assume that \(A\) is generated by \(G=\{g_{1},\ldots,g_{m}\}\) as a left \(S\)-module (the case of a right \(S\)-module is similar) with \(1\in G\). We start by fixing a finitely generated \(\mathbb{Z}\)-subalgebra \(R^{\prime}\) of \(K\) generated by the coefficients of \(\varphi(x_{i}),\varphi(\partial_{i})\) for \(i=1,\ldots,n\) in the basis of standard monomials (cf. Definition 2.2). In the same way we extend \(R^{\prime}\) to a finitely generated \(\mathbb{Z}\)-algebra \(R\), such that the coefficients of \(g_{1},\ldots,g_{m}\) and \(s_{ijk},t_{ijk}\) of Lemma 3.2 are in \(R\). For \(\mathfrak{m}\in\operatorname{Specm}(R)\), \(R/\mathfrak{m}\) is a field of characteristic
\(p>0\) and it follows by reduction modulo \(\mathfrak{m}\) of (3.0.1) and (3.0.2) that \({}_{\bar{\varphi}}A_{n}(R/\mathfrak{m})_{\bar{\varphi}}\) is finitely generated as a left module and therefore as a bimodule, where \(\bar{\varphi}:A_{n}(R/\mathfrak{m})\to A_{n}(R/\mathfrak{m})\) denotes the reduction of \(\varphi\) modulo \(\mathfrak{m}\).
Now let \(D=D(n,\deg(\varphi))\) be the bound from Lemma 3.1. Let \(M\) be the product of the prime numbers \(\leq D\). Fix an ultrafilter \(\mathcal{U}\) on \(\operatorname{Specm}(R)\) as in Proposition 1.1. Then \(\bar{\varphi}\) is an isomorphism for every \(\mathfrak{m}\in X_{M}\) (with the notation in Proposition 1.1) by Lemma 3.1, and Theorem 2.5 together with Lemma 1.2 shows that \(\varphi\) is an isomorphism over the ultraproduct \(\prod_{\mathcal{U}}R/\mathfrak{m}\). Therefore \(\varphi\) is an isomorphism over \(K\) by [12, Lemma 3.3].
The last part of the paper concerns the proof of Theorem 4.6 below, which is the key result needed in the proof of Lemma 3.1 (and therefore Theorem 3.3) above.
## 4 Bounds on polynomial endomorphisms
In this section we deal exclusively with commutative rings and introduce new notation independent of the notation in the previous sections i.e., \(A\) no longer refers to the Weyl algebra.
Let \(B=K[x_{1},\ldots,x_{n}]\) be the polynomial ring of \(n\) variables over an arbitrary field \(K\) (with \(n>0\)), \(\varphi:B\to B\) a \(K\)-algebra homomorphism given by \(\varphi(x_{i})=f_{i}\) and \(\varphi(B)=:A\subset B\). Denote the fields of fractions of \(A\) and \(B\) by \(K(A)\) and \(K(B)\) respectively. We let
\[\deg(\varphi)=\max\{\deg(f_{1}),\ldots,\deg(f_{n})\}. \tag{4.0.1}\]
where \(\deg(f_{i})\) denotes the total degree of \(f_{i}\). For \(d\in\mathbb{N}\) we let \(B_{\leq d}\) denote the \(K\)-subspace of \(B\) consisting of polynomials \(f\) with \(\deg(f)\leq d\).
**Lemma 4.1**.: _Let \(d,r\in\mathbb{N}\), with \(r>0\), and \(h_{1},h_{2},\ldots,h_{r}\in B_{\leq d}\). Suppose that \(m>n+d\) is an integer and that \(h_{1},\ldots,h_{r}\) do not satisfy a non-trivial linear relation_
\[\varphi(b_{1})h_{1}+\cdots+\varphi(b_{r})h_{r}=0\]
_with \(b_{i}\in B_{\leq m}\) for \(i=1,\ldots,r\). Then_
\[m(r-\deg(\varphi)^{n})<2^{n}\deg(\varphi)^{n-1}(n+d).\]
Proof.: Notice first that \(\deg(\varphi)>0\) with the given assumptions. Let \(V\) denote the subspace
\[V=\left\{\varphi(b_{1})h_{1}+\cdots+\varphi(b_{r})h_{r}\ \bigg{|}\ b_{1},\ldots,b_{r} \in B_{\leq m}\right\}\subseteq B_{\leq m\deg(\varphi)+d}.\]
By assumption
\[\dim_{K}(V)=r\dim_{K}(B_{\leq m})\leq\dim_{K}(B_{\leq m\cdot\deg(\varphi)+d}).\]
Therefore
\[r\frac{m^{n}}{n!}\leq r\binom{m+n}{n}\leq\binom{m\ \deg(\varphi)+d+n}{n}\leq \frac{(m\ \deg(\varphi)+d+n)^{n}}{n!}\]
showing that
\[rm^{n}\leq(m\ \deg(\varphi)+d+n)^{n}<(m\ \deg(\varphi))^{n}+2^{n}(m\ \deg(\varphi))^{n-1}(d+n), \tag{4.0.2}\]
where the last inequality follows by the assumption \(d+n<m\) and as \(\deg(\varphi)>0\). The claimed inequality now follows from (4.0.2).
In the following we will assume that \(f_{1},f_{2},\ldots,f_{n}\) are algebraically independent over \(K\). The induced field extension \(K(A)\subseteq K(B)\) is then finite.
**Proposition 4.2**.: \[[K(B):K(A)]\leq\deg(\varphi)^{n}\]
Proof.: Let \(h_{1},h_{2},\ldots,h_{r}\) denote a basis for \(K(B)\) as a vector space over \(K(A)\). Clearing denominators we may assume that each \(h_{i}\) is contained in \(B\). Choose \(d\) such that \(h_{i}\in B_{\leq d}\), for each \(i\). Applying Lemma 4.1 we then conclude that
\[(r-\deg(\varphi)^{n})\cdot m<2^{n}\deg(\varphi)^{n-1}(n+d),\]
for every positive integer \(m>n+d\). Therefore \(r-\deg(\varphi)^{n}\leq 0\) and \(r\leq\deg(\varphi)^{n}\).
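For instance, with \(n=1\) and \(\varphi(x_{1})=x_{1}^{2}\), the element \(x_{1}\) satisfies \(T^{2}-x_{1}^{2}=0\) over \(K(A)=K(x_{1}^{2})\), so \([K(B):K(A)]=2=\deg(\varphi)^{1}\); the bound of the proposition can therefore be attained.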
### The classical bound on the degree of the inverse
We will give a short and elementary proof of the following classical result inspired by ideas in [17].
**Theorem 4.3** ([4], Thm. 1.5).: _If \(\varphi\) is an automorphism of \(B\), then \(\deg(\varphi^{-1})\leq\deg(\varphi)^{n-1}\)._
Proof.: Suppose that \(\varphi^{-1}(x_{i})=g_{i}\) for \(i=1,\ldots,n\). We may assume that \(K\) is an infinite field. Thus by a linear change of coordinates we can assume that each \(g_{i}\) is a monic polynomial in \(x_{n}\) with coefficients in the subring \(B^{\prime}=K[x_{1},x_{2},\ldots,x_{n-1}]\) of \(B\). Let \(N\) denote the degree of \(\varphi^{-1}\) and assume that \(\deg(g_{n})=N\). Now identify \(B^{\prime}\) with the quotient ring \(B/(x_{n})\) and use the notation \(\overline{h}\) for the image of \(h\in B\) in \(B/(x_{n})\).
Since \(\varphi\) is an automorphism, \(B=K[f_{1},f_{2},\ldots,f_{n}]\) and \(B^{\prime}=K[\overline{f_{1}},\overline{f_{2}},\ldots,\overline{f_{n}}]\). We claim that \(\overline{f_{1}},\overline{f_{2}},\ldots,\overline{f_{n-1}}\) are algebraically independent over \(K\) and that
\[[K(x_{1},x_{2},\ldots,x_{n-1}):K(\overline{f_{1}},\overline{f_{2}},\ldots, \overline{f_{n-1}})]=\deg(\varphi^{-1}).\]
By Proposition 4.2 this will end the proof, as \(\deg(\overline{f_{i}})\leq\deg(\varphi)\). Write
\[g_{n}=a_{0}+a_{1}x_{n}+a_{2}x_{n}^{2}+\cdots+a_{N-1}x_{n}^{N-1}+x_{n}^{N}\]
with \(a_{i}\in B^{\prime}\). Then
\[x_{n}=\varphi(g_{n})=\varphi(a_{0})+\varphi(a_{1})f_{n}+\cdots+\varphi(a_{N-1 })f_{n}^{N-1}+f_{n}^{N},\]
and
\[c_{0}+c_{1}\cdot\overline{f_{n}}+\cdots+c_{N-1}\cdot\overline{f_{n}}^{N-1}+ \overline{f_{n}}^{N}=0. \tag{4.1.1}\]
where \(c_{i}=\overline{\varphi(a_{i})}\) are elements in \(A^{\prime}=K[\overline{f_{1}},\overline{f_{2}},\ldots,\overline{f_{n-1}}]\). It follows that \(\overline{f_{n}}\) is integral over \(A^{\prime}\) and therefore that \(B^{\prime}=K[\overline{f_{1}},\overline{f_{2}},\ldots,\overline{f_{n}}]\) is integral over \(A^{\prime}\). We conclude that the elements \(\overline{f_{1}},\overline{f_{2}},\ldots,\overline{f_{n-1}}\) are algebraically independent and that \([K(x_{1},x_{2},\ldots,x_{n-1}):K(\overline{f_{1}},\overline{f_{2}},\ldots,\overline{f_{n-1}})]\) equals the degree of the minimal polynomial of \(\overline{f_{n}}\) over \(K(\overline{f_{1}},\overline{f_{2}},\ldots,\overline{f_{n-1}})\). Consider the polynomial
\[F=c_{0}+c_{1}\cdot X+\cdots+c_{N-1}\cdot X^{N-1}+X^{N}\in A^{\prime}[X].\]
By equation (4.1.1) we see that \(\overline{f_{n}}\) is a root of \(F\). So it suffices to prove that \(F\) is irreducible in \(A^{\prime}[X]\) (and thus also in \(K(\overline{f_{1}},\overline{f_{2}},\ldots,\overline{f_{n-1}})[X]\)). To see this, observe first that \(F\) is the image of \(g_{n}\) under the \(K\)-algebra isomorphism \(B\to A^{\prime}[X]\) which maps \(x_{n}\) to \(X\) and \(x_{i}\) to \(\overline{f_{i}}\), for \(i<n\). Then use that \(g_{n}\) is irreducible as \(\varphi\) is an automorphism.
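As a concrete illustration of the bound (the specific map below is our own example), take \(n=2\) and the triangular automorphism \(\varphi(x_{1})=x_{1}\), \(\varphi(x_{2})=x_{2}+x_{1}^{2}\), whose inverse sends \(x_{1}\mapsto x_{1}\) and \(x_{2}\mapsto x_{2}-x_{1}^{2}\); here \(\deg(\varphi^{-1})=2\leq\deg(\varphi)^{n-1}=2\). The sympy sketch below verifies the composition.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

phi = {x1: x1, x2: x2 + x1**2}        # deg(phi) = 2
phi_inv = {x1: x1, x2: x2 - x1**2}    # deg(phi_inv) = 2 <= deg(phi)**(n-1)

# Substituting phi_inv into phi(x_i) recovers x_i, so the maps are inverse.
for g in (x1, x2):
    assert sp.expand(phi[g].subs(phi_inv, simultaneous=True)) == g
```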
### Integral extensions
In the statement below we still work under the assumption that \(f_{1},f_{2},\ldots,f_{n}\) are algebraically independent.
**Proposition 4.4**.: _Let \(g\) denote an element in \(B\) of degree \(D\) which is integral over \(A\), and let_
\[G=a_{0}+a_{1}T+a_{2}T^{2}+\cdots+a_{r-1}T^{r-1}+T^{r}\in K(A)[T],\]
_denote its minimal polynomial. Then \(a_{i}\in\varphi(B_{\leq m})\) for \(i=0,\ldots,r-1\), where_
\[m=2^{n}\deg(\varphi)^{n-1}(n+D\cdot\deg(\varphi)^{n}).\]
Proof.: Applying Lemma 4.1 with \(r=\deg(\varphi)^{n}+1,d=D\deg(\varphi)^{n}\) and \(m\) as above, we conclude that there exists \(b_{i}\in B_{\leq m}\), for \(i=0,1,\ldots,\deg(\varphi)^{n}\) with
\[\varphi(b_{0})+\varphi(b_{1})g+\cdots+\varphi(b_{\deg(\varphi)^{n}})g^{\deg( \varphi)^{n}}=0,\]
such that not all \(b_{i}\) are zero. The polynomial
\[F=\varphi(b_{0})+\varphi(b_{1})T+\cdots+\varphi(b_{\deg(\varphi)^{n}})T^{\deg (\varphi)^{n}}\in A[T]\]
is thus divisible by \(G\) i.e., \(F=G\cdot H\) for some polynomial \(H\) in \(K(A)[T]\). By [2, Proposition 5.15], \(G\in A[T]\). Therefore we may even conclude that \(H\in A[T]\). Consider now \(F,G\) and \(H\) as polynomials in \(f_{1},f_{2},\ldots,f_{n}\) with coefficients in \(K[T]\). Then the (total) degree of \(F\) is \(\leq m\) which implies that the same is true for its divisor \(G\). This ends the proof.
Recall that the ring homomorphism \(\varphi:B\to B\) is called integral if \(B\) is integral over the image \(A=\varphi(B)\). In the present setup \(\varphi\) can only be integral if \(f_{1},f_{2},\ldots,f_{n}\) are algebraically independent. Combining this observation with Proposition 4.2 and Proposition 4.4 above we find:
**Theorem 4.5**.: _Let \(m=2^{n}\deg(\varphi)^{n-1}(n+\deg(\varphi)^{n})\). The following statements are equivalent._
1. _The ring homomorphism_ \(\varphi:B\to B\) _is integral._
2. _There exist nonzero monic polynomials_ \(F_{i}\in A[T]\)_, for_ \(i=1,2,\ldots,n\)_, of degree_ \(\deg(\varphi)^{n}\) _and with coefficients in_ \(\varphi(B_{\leq m})\)_, such that_ \(F_{i}(x_{i})=0\)_._
Proof.: The second statement clearly implies the first statement. So assume that \(A\subseteq B\) is an integral extension. By Proposition 4.4 we know that the minimal polynomial \(G_{i}\) for \(x_{i}\) has coefficients in \(\varphi(B_{\leq m})\) for each \(i\). Moreover, by Proposition 4.2 these minimal polynomials have degree \(\leq\deg(\varphi)^{n}\). Multiplying \(G_{i}\) by a suitable power of \(T\) we obtain a polynomial \(F_{i}\) of degree \(\deg(\varphi)^{n}\) which satisfies the statement in (ii).
The result below is a key component in the proof of Lemma 3.1 used in the proof of Theorem 3.3.
**Theorem 4.6**.: _Let \(n,D\in\mathbb{N}\) be positive. There exists a uniform bound \(N(n,D)\in\mathbb{N}\) only depending on \(n\) and \(D\), such that when \(k\) is a field of characteristic \(p>N(n,D)\) and \(\varphi\) is a finite and etale \(k\)-algebra endomorphism of the polynomial ring \(k[x_{1},\ldots,x_{n}]\) of degree \(\leq D\), then \(\varphi\) is an automorphism._
Proof.: Suppose that the theorem is wrong. Then we may find an infinite sequence of primes \(p_{1}<p_{2}<\dots\) and corresponding fields \(k_{1},k_{2},\dots\) of characteristic \(\operatorname{char}(k_{i})=p_{i}\) together with finite and etale \(k_{i}\)-algebra endomorphisms \(\varphi_{i}\) of \(k_{i}[x_{1},\dots,x_{n}]\) of degree \(\leq D\), which are not automorphisms. For simplicity we may assume that the determinant of the Jacobian matrix of each \(\varphi_{i}\) equals \(1\).
Choose a non-principal ultrafilter \(\mathcal{U}\) on the set \(\mathcal{P}=\{p_{1},p_{2},\dots\}\) and let \(C\) denote the ultraproduct \(\prod_{\mathcal{U}}k_{i}\). Let further
\[\varphi:C[x_{1},\dots,x_{n}]\to C[x_{1},\dots,x_{n}]\]
denote the \(C\)-algebra endomorphism induced by \(\varphi_{i},i>0\). We claim that \(\varphi\) is finite and etale. That \(\varphi\) is etale follows as the determinant of the Jacobian matrix of each \(\varphi_{i}\), and hence also of \(\varphi\), equals \(1\). To conclude that \(\varphi\) is finite we apply Theorem 4.5. It suffices to prove that each \(x_{j}\) is integral over the image of \(\varphi\). By assumption \(x_{j}\) is integral over the image of \(\varphi_{i}\), so by Theorem 4.5 we may find polynomials \(p_{i,r}\in k_{i}[x_{1},\dots,x_{n}]\), for \(r=1,2,\dots,D^{n}\), of degree \(\leq m:=2^{n}D^{n-1}(n+D^{n})\), such that \(x_{j}\) is a root of the monic polynomial
\[P_{i}(T)=T^{D^{n}}+\sum_{r=1}^{D^{n}}\varphi_{i}(p_{i,r})T^{r-1}.\]
Let now \(p_{r}\in C[x_{1},\dots,x_{n}]\) denote the polynomial defined by the collection of polynomials \((p_{i,r})_{i>0}\). Then \(x_{j}\) is a root of the monic polynomial
\[P(T)=T^{D^{n}}+\sum_{r=1}^{D^{n}}\varphi(p_{r})T^{r-1},\]
and we conclude that \(\varphi\) is finite.
Next observe that \(C\) is a field of characteristic zero and thus \(\varphi\) is an automorphism by [4, (2.1) Theorem, (d)] with inverse \(\varphi^{-1}\) of some degree \(D^{\prime}\). Fix, for each \(i>0\), a lift \(\psi_{i}\) of \(\varphi^{-1}\) to a \(k_{i}\)-algebra endomorphism of \(k_{i}[x_{1},\dots,x_{n}]\) of degree \(\leq D^{\prime}\). In this way \(\varphi^{-1}\) is defined from \(\psi_{i}\), for \(i>0\), in the same way as \(\varphi\) was defined from \(\varphi_{i}\), for \(i>0\). That \(\psi_{i}\) is an inverse to \(\varphi_{i}\) is then a polynomial condition in the coefficients of \(\psi_{i}\) and \(\varphi_{i}\). These polynomial conditions are satisfied for \(\varphi^{-1}\) and \(\varphi\), and thus \(\psi_{i}\) and \(\varphi_{i}\) will be mutually inverse for every prime \(p_{i}\) in some element \(B\) in \(\mathcal{U}\) by Lemma 1.2. This is a contradiction as none of the endomorphisms \(\varphi_{i}\), \(i>0\), are automorphisms.
### A Grobner basis approach
Here we outline an alternative approach to obtaining bounds similar to the ones in Theorem 4.5. We keep the notation of the previous section.
It suffices to prove that if \(f_{1},\dots,f_{n}\in K[x_{1},\dots,x_{n}]\) are polynomials of degree \(\leq d\), then \(x_{i}\) is integral over \(K[f_{1},\dots,f_{n}]\) for \(i=1,\dots,n\) if and only if the minimal polynomial for each \(x_{i}\) has the form
\[T^{m}+a_{m-1}(f_{1},\dots,f_{n})T^{m-1}+\dots+a_{1}(f_{1},\dots,f_{n})T+a_{0}( f_{1},\dots,f_{n})=0, \tag{4.3.1}\]
for \(m\leq M(n,d)\) and \(\deg(a_{j})\leq D(n,d)\) for \(j=0,\dots,m-1\), where \(M(n,d),D(n,d)\in\mathbb{N}\) depend only on \(n\) and \(d\) (and not on \(K\)).
We will need the following degree bound on Grobner bases for arbitrary term orderings.
**Theorem 4.7** ([8]).: _Let \(K\) be a field and \(I\subset K[x_{1},\dots,x_{n}]\) an ideal generated by polynomials of degree \(\leq d\). Then there exists a Grobner basis \(G\) of \(I\) with respect to any term order, such that_
\[\deg(f)\leq 2\left(\frac{d^{2}}{2}+d\right)^{2^{n-1}}\]
_for \(f\in G\)._
Combined with [2, Proposition 5.15], the existence of the degree bounded minimal polynomials in (4.3.1) now follows from the lemma below.
**Lemma 4.8** ([15]).: _Suppose that \(f,f_{1},\ldots,f_{m}\in K[x_{1},\ldots,x_{n}]\) and let_
\[I=(t-f,t_{1}-f_{1},\ldots,t_{m}-f_{m})\subset K[t,t_{1},\ldots,t_{m},x_{1}, \ldots,x_{n}].\]
_Then_
1. \[F(f,f_{1},\ldots,f_{m})=0\qquad\Longleftrightarrow\qquad F\in I\cap K[t,t_{1},\ldots,t_{m}]\] _for_ \(F\in K[t,t_{1},\ldots,t_{m}]\)_._
2. _Let_ \(G\) _be a Grobner basis of_ \(I\) _with the lexicographic order_ \(x_{1}>\cdots>x_{n}>t>t_{1}>\cdots>t_{m}\)_. If_ \(f\) _is algebraic over_ \(K(f_{1},\ldots,f_{m})\)_, then the minimal polynomial of_ \(f\) _over_ \(K(f_{1},\ldots,f_{m})\) _can be identified with the polynomial in_ \[G\cap(K[t,t_{1},\ldots,t_{m}]\setminus K[t_{1},\ldots,t_{m}])\] _with smallest leading term._
Proof.: For \(F\in K[t,t_{1},\ldots,t_{m}]\),
\[F(f,f_{1},\ldots,f_{m})=0\qquad\Longleftrightarrow\qquad F\in I\cap K[t,t_{1 },\ldots,t_{m}]\]
is a consequence of the division algorithm. The (elimination) ideal \(I\cap K[t,t_{1},\ldots,t_{m}]\) is generated by \(G\cap K[t,t_{1},\ldots,t_{m}]\). The result can now be deduced from the fact that the minimal polynomial (after clearing denominators) can be identified with the polynomial containing \(t\) in \(G\) with smallest leading term.
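A small worked instance of the lemma (with our own choice of polynomials): for \(f=x\) and \(f_{1}=x^{2}\), eliminating \(x\) from the ideal \((t-x,\ t_{1}-x^{2})\) with the lexicographic order \(x>t>t_{1}\) recovers the minimal polynomial \(t^{2}-t_{1}\) of \(x\) over \(K(x^{2})\).

```python
import sympy as sp

x, t, t1 = sp.symbols('x t t1')

# Groebner basis of (t - x, t1 - x**2) for the lexicographic order x > t > t1.
G = sp.groebner([t - x, t1 - x**2], x, t, t1, order='lex')

# The basis elements free of x but involving t give the minimal polynomial of
# f = x over K(f1) = K(x**2); here the only such element is t**2 - t1.
elim = [g for g in G.exprs if x not in g.free_symbols and t in g.free_symbols]
assert len(elim) == 1 and sp.expand(elim[0] - (t**2 - t1)) == 0
```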
|
2302.03194 | UDApter -- Efficient Domain Adaptation Using Adapters | We propose two methods to make unsupervised domain adaptation (UDA) more
parameter efficient using adapters, small bottleneck layers interspersed with
every layer of the large-scale pre-trained language model (PLM). The first
method deconstructs UDA into a two-step process: first by adding a domain
adapter to learn domain-invariant information and then by adding a task adapter
that uses domain-invariant information to learn task representations in the
source domain. The second method jointly learns a supervised classifier while
reducing the divergence measure. Compared to strong baselines, our simple
methods perform well in natural language inference (MNLI) and the cross-domain
sentiment classification task. We even outperform unsupervised domain
adaptation methods such as DANN and DSN in sentiment classification, and we are
within 0.85% F1 for natural language inference task, by fine-tuning only a
fraction of the full model parameters. We release our code at
https://github.com/declare-lab/domadapter | Bhavitvya Malik, Abhinav Ramesh Kashyap, Min-Yen Kan, Soujanya Poria | 2023-02-07T02:04:17Z | http://arxiv.org/abs/2302.03194v2 | # UDAPter - Efficient Domain Adaptation Using Adapters
###### Abstract
We propose two methods to make unsupervised domain adaptation (uda) more parameter efficient using adapters, small bottleneck layers interspersed with every layer of the large-scale pre-trained language model (PLM). The first method deconstructs uda into a two-step process: first by adding a _domain adapter_ to learn domain-invariant information and then by adding a _task adapter_ that uses domain-invariant information to learn task representations in the source domain. The second method jointly learns a supervised classifier while reducing the divergence measure. Compared to strong baselines, our simple methods perform well in natural language inference (mnli) and the cross-domain sentiment classification task. We even outperform unsupervised domain adaptation methods such as DANN (Ganin et al., 2016) and DSN (Bousmalis et al., 2016) in sentiment classification, and we are within 0.85% F1 for natural language inference task, by fine-tuning only a fraction of the full model parameters. We release our code at _[https://github.com/declare-lab/domadapter_](https://github.com/declare-lab/domadapter_).
## 1 Introduction
Fine-tuning pretrained language models (PLM) is the predominant method for improving NLP tasks such as sentiment analysis, natural language inference, and other language understanding tasks (Wang et al., 2018). However, fine-tuning forces us to modify all the parameters of the model and store one copy of the model for one task. Given the large size of current PLMs, this can be expensive. Furthermore, fine-tuning needs large-scale data to be effective and is unstable when using different seeds (Han et al., 2021).
A new approach to alleviate this is parameter-efficient fine-tuning - freezing the PLM parameters and fine-tuning only a small fraction of the parameters. Fine-tuning with adapters (Houlsby et al., 2019) is one of these methods in which small additional layers are tuned within each PLM layer. Fine-tuning with adapters has many advantages: performance comparable to full fine-tuning (He et al., 2021), and robustness to different seeds and adversarial examples (Han et al., 2021).
Unsupervised domain adaptation (uda) aims to adapt models to new domains and considers situations where labeled data are available only in the source domain and unlabeled data are available in the target domain. uda methods in general have two components: the first reduces the divergence between the source and target domains, and the second reduces the loss corresponding to a particular task (Ramesh Kashyap et al., 2021). However, they fine-tune a large number of parameters and are susceptible to catastrophic forgetting. Adapters (Houlsby et al., 2019) can help solve these problems. However, the benefits of adapter fine-tuning for domain adaptation have been mostly overlooked. _How well can adapter fine-tuning perform across different domains? Can we make domain adaptation more efficient?_ In this work, we answer these questions and propose models to perform domain adaptation using adapters.
Adapters are known to perform well in low-resource scenarios where a small amount of supervised data is available in a new domain or language (He et al., 2021; Pfeiffer et al., 2020). In this work, using the principles of uda, we propose to make domain adaptation more effective using unsupervised data from the target domain. We introduce two methods that we collectively call the **U**nsupervised **D**omain **A**daptation method using adapters (UDAPter). The first method is a two-step process: first, we learn _domain adapters_, where we use a divergence measure to bring two probabilistic distributions closer together. This helps us to learn representations that are independent of the
domain from which they come. Second, we use the domain-invariant information learned as input to another task adapter that learns to perform an NLP task using labeled data from the source domain. We combine the two adapters by stacking them. The second method adds a single adapter without stacking, where we simultaneously reduce the divergence between domains and learn the task in the source domain.
Domain Adversarial Neural Networks (dann) and Domain Separation Networks (dsn) are the most common methods for unsupervised domain adaptation in NLP Ramesh Kashyap et al. (2021). We compare our proposed methods with these strong baselines that fine-tune all model parameters, on Amazon Blitzer et al. (2007) and the MNLI dataset Williams et al. (2018) consisting of five domains each. UDapter performs better than all baselines. It achieves competitive performance compared to UDA methods by fine-tuning only a fraction of the parameters. In an era where large resources are spent to further pretrain language models on large amounts of unsupervised data to achieve domain adaptation Gururangan et al. (2020), it is necessary to provide cheaper, faster solutions.
## 2 Method
Setup. We consider an NLP task (sentiment analysis) consisting of data \(\mathcal{X}\) and labels \(\mathcal{Y}\) (positive, negative). There exist two different distributions over \(\mathcal{X}\times\mathcal{Y}\), called the source domain \(\mathcal{D}_{\mathcal{S}}\) and the target domain \(\mathcal{D}_{\mathcal{T}}\). Unsupervised domain adaptation (uda) consists of a model \(\mathcal{C}\) that receives labeled input samples \(\mathcal{X}_{\mathcal{S}}:(x_{s},y_{s})_{s=1}^{n_{s}}\sim\mathcal{D}_{\mathcal{S}}\) and unlabeled input \(\mathcal{X}_{\mathcal{T}}:(x_{t})_{t=1}^{n_{t}}\sim\mathcal{D}_{\mathcal{T}}\). The goal of uda is to learn a model \(\mathcal{C}\) that performs well on the NLP task in the target domain \(\mathcal{D}_{\mathcal{T}}\).
The popular method in uda is to learn representations that are invariant in the input domain and still have sufficient power to perform well in the source domain Ganin et al. (2016); Bousmalis et al. (2018).
Figure 1: UDapter for a transformer layer \(l\) uses principles from unsupervised domain adaptation to make domain adaptation more parameter efficient. (a) The first method ts-dt-\(\clubsuit\) trains a domain adapter that reduces the divergence between the marginal distributions of the domains. (b) The task adapter is stacked on top of the domain adapter and trained on an end task like sentiment analysis or natural language inference. The domain adapter is frozen during training. (c) The second method joint-dt-\(\clubsuit\) reduces the domain divergence and the task loss jointly.
Then, the theory of domain divergence (Ben-David et al., 2010) shows that the error in the target domain is bounded by the error in the source domain and the divergence. The unsupervised domain adaptation method thus consists of two components: the reduction of the divergence measure and a classifier for the source domain. A new classifier must be learned for every pair of source-target domains, and the method fine-tunes a large number of parameters.
UDAPter makes unsupervised domain adaptation more parameter efficient (cf. § 2.1, § 2.2) using adapters. We follow the framework proposed by Houlsby et al. (2019) where small bottleneck layers are added to the transformer layers, fine-tuning only the adapter parameters while keeping the other parameters frozen, and propose the following.
### Two-Step Domain and Task Adapters
Domain Adapters. To learn domain-invariant representations, we first train a domain adapter. The adapter architecture follows the work of Pfeiffer et al. (2021) and consists of a simple down-projection followed by an up-projection. In a transformer layer \(l\), let \(h_{l}\) be the hidden representation of the **Add & Norm** layer and let \(r_{l}\) be the representation of the **Feed-Forward** layer (Figure 1(a)); then the adapter makes the following transformation and calculates a new hidden representation.
\[dom_{l}=W_{up}\cdot f(W_{down}\cdot h_{l})+r_{l} \tag{1}\]
where \(f\) is a nonlinear function (e.g., ReLU), \(W_{down}\in\mathbb{R}^{h\times d}\) projects the hidden representations down to a lower dimension, \(W_{up}\in\mathbb{R}^{d\times h}\) projects them back to a higher dimension, and \(d\ll h\). We pass a sample from the source domain \((x_{t}^{src})\sim\mathcal{D}_{\mathcal{S}}\) and one from the target \((x_{t}^{trg})\sim\mathcal{D}_{\mathcal{T}}\) through the adapters in layer \(l\) and obtain their representations \(h_{l}^{src}\) and \(h_{l}^{trg}\), respectively. We then reduce the divergence between these representations.
\[\Delta_{l}=div(dom_{l}^{src},dom_{l}^{trg}) \tag{2}\]
Here \(div(\cdot)\) is the divergence function such as the correlation alignment (CORAL) (Sun et al., 2016), the central moment discrepancy (CMD) (Zellinger et al., 2017) or the multi-kernel maximum mean discrepancy (MK-MMD) (Gretton et al., 2012; Bousmalis et al., 2016). In this work, we use MK-MMD for all of our experiments, since it performed best1. Similar ideas are used to adapt representations in computer vision models (Long et al., 2019; Sun and Saenko, 2016). The final divergence loss considers all \(L\) layers.
Footnote 1: CMD and CORAL also perform similarly to MK-MMD
\[\mathcal{L}_{div}=\sum_{l=1}^{L}\Delta_{l} \tag{3}\]
Task Adapters.Task adapters are stacked with frozen domain adapters. We pass the representations \(dom_{l}\) from the previous step and the supervised data from the source domain \((x_{s}^{src},y_{s}^{src})\sim\mathcal{D}_{\mathcal{S}}\). Task adapters have the same architecture as domain adapters and perform the following:
\[task_{l}=W_{up}\cdot f(W_{down}\cdot dom_{l}^{src})+r_{l} \tag{4}\]
The goal of these task adapters is to learn representations that are task-specific. Only task adapters are updated when training on the end task (sentiment classification, natural language inference) and all other parameters, including domain adapters, are frozen. Regular cross-entropy loss is reduced during training of task adapters:
\[\mathcal{L}_{task}=softmax\_ce(W_{task}\cdot h_{L}) \tag{5}\]
\(h_{L}\) is the hidden representation of the last layer of the transformer, \(W_{task}\in\mathbb{R}^{h\times|\mathcal{Y}|}\) where \(|\mathcal{Y}|\) is the number of classes, and \(softmax\_ce\) is the softmax followed by cross-entropy. This two-step process deconstructs uda methods into a domain adapter and a task adapter. This affords composability, where task adapters can be reused for different pairs of domains (§ 3.4). However, domain and task representations can also be learned jointly, as explored in the next section.
Training Process. Given a source-target domain adaptation scenario, we first train the domain adapter and save its weights. We then stack the task adapter on the domain adapter and train it using the supervised data from the source domain. When training the task adapter, the domain adapter is frozen. During inference, we stack the domain and task adapters.
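Continuing the sketch above, the two-step procedure amounts to freezing the trained domain adapter and updating only the task adapter stacked on top of it; the checkpoint path below is a placeholder.

```python
import torch

domain_adapter = BottleneckAdapter()   # from the sketch above
task_adapter = BottleneckAdapter()

# Step 1 has already trained and saved the domain adapter (placeholder path).
domain_adapter.load_state_dict(torch.load("domain_adapter.pt"))
for p in domain_adapter.parameters():
    p.requires_grad = False            # frozen during step 2 and at inference

def adapted_hidden(h, r):
    dom = domain_adapter(h, r)         # Eq. (1): domain-invariant representation
    return task_adapter(dom, r)        # Eq. (4): task adapter stacked on top
```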
### Joint Domain Task Adapters
This method adds a single adapter that performs the reduction of the divergence measure and learns task representations jointly. For a given supervised sample from the source domain \((x_{s}^{src},y_{s}^{src})\sim\mathcal{D}_{\mathcal{S}}\) and an unsupervised sample \((x_{t}^{trg})\sim\mathcal{D}_{\mathcal{T}}\), let
\(h_{l}^{src},h_{l}^{trg}\) be the hidden representations of the adapters for \(x_{s}^{src}\) and \(x_{t}^{trg}\) for layer \(l\). We reduce the following joint loss:
\[\mathcal{L}=\lambda\cdot\mathcal{L}_{task}+(1-\lambda)\cdot\mathcal{L}_{div} \tag{6}\]
Here \(\mathcal{L}_{task}\) is the task loss on the source-domain supervised samples, and \(\lambda\) is the adaptation factor.
Reducing divergence along with cross-entropy loss beyond a certain point makes training unstable and does not contribute to increased performance. Following [1] we suppress the noisy signal from the divergence function as training progresses and gradually change \(\lambda\) from 0 to 1 to reduce the contribution of divergence loss using the following schedule (\(\gamma=10\) for all of our experiments):
\[\lambda=\frac{2}{1+\exp{(-\gamma\cdot p)}}-1 \tag{7}\]
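A short sketch of this schedule and of the joint objective (an illustrative implementation of Eqs. (6)-(7); the function and variable names are ours):

```python
import math

def adaptation_factor(step: int, total_steps: int, gamma: float = 10.0) -> float:
    """Eq. (7): lambda grows smoothly from 0 to 1 with training progress p."""
    p = step / max(total_steps, 1)
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0

def joint_loss(task_loss, div_loss, lam: float):
    """Eq. (6): weighted combination of task and divergence losses; as lambda
    grows, the contribution of the (noisy) divergence signal is suppressed."""
    return lam * task_loss + (1.0 - lam) * div_loss
```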
Similar methods have been proposed to adapt models to other domains by Long et al. (2019) and Wu et al. (2022). Compared to the two-step process introduced earlier (§ 2.2), this method requires carefully controlling the two losses to obtain optimal results, and it does not offer composability (§ 3.4).
## 3 Experiments
### Datasets
We evaluate our approach on two representative datasets with different tasks, both in English. Table 1 shows the details of the datasets. Every dataset has 5 domains, and we consider each domain with every other domain which results in 20 domain adaptation scenarios per dataset, 120 experiments per method, totalling over 1.9K experiments.
Amazon. The Multi Domain Sentiment Analysis Dataset [1] contains Amazon product reviews for five different types of products (domains): Apparel (a), Baby (ba), Books (bo), Camera_Photo (c), and Movie Reviews (mr). Each review is labeled as positive or negative. We follow the setup in [12].
Mnli. The Multi-Genre Natural Language Inference (MNLI) corpus [23] contains hypothesis-premise pairs covering a variety of genres: travel (tr), fiction (f), telephone (te), government (G), and slate (s). Each pair of sentences is labeled Entailment, Neutral, or Contradiction. The train and validation sets are obtained from the original train set by sampling 90% and 10% of the examples, respectively. We use the MNLI-matched validation set as our test set.
### Baseline Methods
Fully supervised. _Fine-tune_ (\(\diamondsuit\)): fine-tunes a language model using labeled data from the target domain. Serves as an upper bound on performance.
Unsupervised Domain Adaptation (uda). _Domain Adversarial Neural Networks_ (dann): an unsupervised domain adaptation method [1] that learns domain-invariant information by minimizing the task loss and maximizing a domain confusion loss with the help of gradient reversal layers. _Domain Separation Networks_ (dsn) [1]: improves dann with additional losses that preserve domain-specific information along with the extraction of domain-invariant information. bert-base-uncased serves as the feature extractor for both methods.
Adapter Based. dann _Adapter_ (dann-\(\blackdiamond\)): similar to dann, but we insert trainable adapter modules into every layer of a PLM. dann _Adapter with Multiple Classifiers_ (dann-\(\blackdiamond\)-mc): unlike dann-\(\blackdiamond\), which involves a single task and domain classifier, here a task and a domain classifier are added to each of the last 3 layers of a PLM; the representations of the last layers of a PLM are domain-variant [12], and this model extracts domain-invariant information from them. _Task adapter_ (task-\(\blackdiamond\)): adapter fine-tuning [12] where adapters are fine-tuned on the labeled source domain and tested on the target domain. _Two-step Domain and Task Adapter_ (ts-dt-\(\blackdiamond\)): this work, where we first train a domain adapter that reduces the probabilistic divergence between the two domains and then fine-tune a task adapter by stacking it on top. _Joint Domain Task Adapter_ (joint-dt-\(\blackdiamond\)): we train a single adapter that reduces the domain and task losses jointly.
| Dataset | Train | Dev | Test |
| --- | --- | --- | --- |
| mnli | 69,600 | 7,730 | 1,940 |
| amazon | 1,440 | 160 | 400 |

Table 1: Dataset statistics, showing the number of train, dev, and test instances per domain.
For all adapter-based experiments, the PLM is frozen, and only adapter modules are trained.
Since we use adapters, we only consider other adapter-based baselines and omit other methods such as prefix-tuning (Lester et al., 2021). Also, Zhang et al. (2021) target multi-domain adaptation and use data from all the domains during training, unlike our method, so it is not a fair comparison.
Implementation Details and Evaluation. For our experiments, we use bert-base-uncased (Devlin et al., 2019) available in the HuggingFace Transformers library (Wolf et al., 2020) as our backbone. Adapter implementations are from AdapterHub (Pfeiffer et al., 2020). We follow Pfeiffer et al. (2021) and add only one bottleneck layer after the feed-forward layer.
We use the AdamW optimizer and a learning rate of \(1e-4\) for all our adapter-based training and \(2e-5\) otherwise. Only for the smaller amazon dataset, we used an adapter bottleneck size (reduction factor) of 32. For all other adapter-based experiments and datasets, we use the default adapter bottleneck size of 16. We performed experiments on three different seeds. We report the mean and standard deviation of the F1 scores. For dann we use 0.04 as our \(\lambda\) and for dsn we use 0.1, 0.1, and 0.3 as our weights for three losses: reconstruction, similarity, and difference respectively. We avoid extensive hyperparameter tuning per domain adaptation scenario for efficiency.
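The parameter-efficient set-up can be summarized by the plain-PyTorch sketch below. In our experiments the adapter insertion into the forward pass is handled by AdapterHub; the explicit `adapters` list here is a simplified stand-in and is not wired into the model, so the sketch only illustrates which parameters are frozen and which are optimized.

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
for p in model.parameters():          # the PLM backbone is frozen
    p.requires_grad = False

hidden = model.config.hidden_size     # 768 for bert-base-uncased
reduction = 16                        # default reduction factor
adapters = torch.nn.ModuleList(
    torch.nn.Sequential(
        torch.nn.Linear(hidden, hidden // reduction),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden // reduction, hidden),
    )
    for _ in range(model.config.num_hidden_layers)
)

# Only the adapter parameters are optimized, with AdamW and lr 1e-4.
optimizer = torch.optim.AdamW(adapters.parameters(), lr=1e-4)
```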
### Results
From Table 2 and Table 3, our methods ts-dt-\(\blackdiamond\) and joint-dt-\(\blackdiamond\) perform well in both amazon and mnli. We find that fine-tuning the task adapter (task-\(\blackdiamond\)) is a strong baseline; compared to it, we improve in 17/20 domain adaptation scenarios in amazon (largest increase of 8 points for c \(\rightarrow\) ba) and 19/20 domain adaptation scenarios in mnli (largest increase of 2.2 for f \(\rightarrow\) te). One possible explanation for the scenarios where our method shows the largest increase is the proximity of the two domains: the overlap in vocabularies (Figure 9 in the Appendix) between c \(\rightarrow\) ba in amazon and f \(\rightarrow\) te in mnli is high, and our method takes advantage of learning domain-invariant information that can be used for efficient domain transfer. This suggests that learning domain-invariant information is necessary to achieve good domain adaptation.
UDApter is comparable to uda methods. Compared to uda methods where all parameters of the backbone model are fine-tuned, we perform close to them on average. joint-dt-\(\blackdiamond\) performs better than dsn by 0.2% in amazon, and we are within 0.85% of dsn in mnli. Training dann is highly unstable and produces varied results, especially for amazon with a small number of examples in each domain. Our adapter method
Table 2: F1 scores (standard deviations in parentheses) on the amazon dataset for all 20 source \(\rightarrow\) target scenarios, comparing the fully supervised upper bound (\(\diamondsuit\)), the uda baselines (dann, dsn), and the adapter-based methods (dann-\(\blackdiamond\), dann-\(\blackdiamond\)-mc, task-\(\blackdiamond\), ts-dt-\(\blackdiamond\), joint-dt-\(\blackdiamond\)).
achieves better results compared to dann with a minimal modification of the hyperparameters.
**Replacing uda Feature Extractors with Adapter Versions is insufficient.** _Given that fully fine-tuned uda methods perform well, can we freeze the feature extractors of uda methods, fine-tune only adapters, and still perform effective domain adaptation?_ We compare our methods with dann-\(\blackdiamond\) and dann-\(\blackdiamond\)-mc and outperform them both in amazon and mnli. This is in line with Karouzos et al. (2021): although domain adversarial training brings domain representations closer, it introduces distortion in the semantic space, reducing model performance. This shows that simply replacing feature extractors with their adapter versions in existing uda methods is not an effective strategy.
Gap to Full Fine-Tuning.Fine-tuning a PLM with supervised data in the target domain is the upper bound performance for domain adaptation. The gap from full fine-tuning is greater when more data are available (3.15 in amazon and 4.13 in mnli). This is not surprising, as the supervised fine-tuning works better with more data. However, while adapters perform closely to complete fine-tuning in supervised scenarios (He et al., 2021), there is still a large gap between domain adaptation and complete fine-tuning.
### Further Analysis
Adapter Reduction Factor.The bottleneck size (\(d\)) of the adapters plays an important role in the final performance of the model. We show the performance of the models at various reduction factors in Figure 2. For joint-dt-\(\blackdiamond\), smaller reduction factors generally perform well in both amazon and mnli, with performance reducing for larger reduction factors. This shows that the joint-dt-\(\blackdiamond\) method requires a greater number of parameters to reduce divergence and learn task representations together. Since ts-dt-\(\blackdiamond\) adds two adapters, this increases the number of parameters added for the same reduction factor compared to joint-dt-\(\blackdiamond\). As a result, we find that as the data scale up, relatively low reduction factors work well.
The removal of adapters from continuous layer spans. Not all adapters are equal: removing adapters from the first few layers still preserves performance (Figure 3). For joint-dt-\(\blackdiamond\) and ts-dt-\(\blackdiamond\), the F1 slowly decreases as we continually remove the adapters. However, we obtain comparable performance after removing the adapters from layers 1-6. This suggests that adapters are most effective when added to higher layers, where the divergence between domains is greater than in lower layers (Ramesh Kashyap et al., 2021). Thus we can further reduce the number of parameters for domain adaptation.
Table 3: F1 scores (standard deviations in parentheses) on the mnli dataset for all 20 source \(\rightarrow\) target scenarios, with the same set of fully supervised, uda, and adapter-based methods as in Table 2.
t-SNE plots.The t-SNE (van der Maaten and Hinton, 2008) plots from domain adapters are shown in Figure 4 for the data set mnli. The lower layers have low divergence and the data from the two domains are interspersed, whereas the higher layers have high divergence. Our method effectively reduces the divergence in higher layers.
Composability. We test the composability of our two-step method ts-dt-\(\blackdiamond\). We reuse the task adapter trained for c \(\rightarrow\) ba, replace the domain adapter with the domain adapter of c \(\rightarrow\) mr, and perform inference on the c \(\rightarrow\) mr dataset. The original F1 on the c \(\rightarrow\) mr dataset was 73.22 and, after composing its domain adapter with a different task adapter, the F1 score is 72.66, a minimal performance loss. This shows the composability of ts-dt-\(\blackdiamond\).
## 4 Literature Review
Parameter Efficient Fine-tuning Methods. Adapters (Houlsby et al., 2019) are task-specific modules added to frozen transformer layers, with only the adapter parameters updated. Their plug-and-play characteristics and the avoidance of catastrophic forgetting have resulted in their use for NLP tasks: machine translation (Bapna and Firat, 2019), named entity recognition (Pfeiffer et al., 2020), etc. Recently, He et al. (2021) have shown that they are efficient in scenarios where there is minimal supervised data. However, they neither test their performance under domain shift nor propose methods to improve adapter fine-tuning. Closely related to our method is the work of Ngo Trung et al. (2021), who learn a shared-private representation per layer, similar to dsn (Bousmalis et al., 2016). Their method requires balancing multiple loss functions, compared to our simpler two-step domain adaptation method. The stacking of adapters has been used before by Pfeiffer et al. (2020) for cross-lingual tasks: learning a language adapter first and stacking a task adapter on top. However, one language adapter is learned per language, large amounts of unsupervised data are assumed to be available in all languages, and supervised data are required to be available to learn a task, which is not applicable for domain adaptation. Compared to other methods, we make domain adaptation more efficient using principles of unsupervised domain adaptation.

Figure 2: (a) Performance for amazon on the c \(\rightarrow\) ba domain adaptation scenario for different reduction factors. (b) Performance for mnli on the s \(\rightarrow\) tr scenario for different reduction factors.

Figure 3: Difference in performance when adapters are removed from certain layers (mentioned inside the cells) for the amazon dataset (top) and for the mnli dataset (bottom). The performance drops when adapters are removed from certain layers.
Unsupervised Domain Adaptation (uda). Existing uda approaches can be categorized into model-centric, data-centric, and hybrid. _Model-centric_ approaches, which augment the feature space or alter the loss function, architecture, or model parameters (Blitzer et al., 2006; Pan et al., 2010; Ganin et al., 2016), have been popular. A common _model-centric_ approach is to use adversarial training between the domain and the task classifier (Ganin et al., 2016) to extract domain-invariant information; Bousmalis et al. (2016) in addition preserve domain-specific information. These works involve training a large number of parameters and require careful balancing of multiple loss functions. Our methods build on top of these works and make them more parameter-efficient.
Large-scale transformers pretrained on domain-specific corpora have become the norm: biomedical text (Lee et al., 2019), scientific publications (Beltagy et al., 2019), among others. Another alternative is to continue pretraining generic models on domain-specific data: domain-adaptive pretraining (Gururangan et al., 2020). Both solutions are expensive, since a huge model has to be stored for every domain, while using adapters requires storing only a small number of parameters for every domain pair and can be quickly adapted to new domains.
## 5 Discussion
This work shows that domain adaptation in NLP can be made more efficient using adapters. We use adapter fine-tuning (Houlsby et al., 2019) and the stacking of adapters, previously proposed for a cross-lingual setting (Pfeiffer et al., 2020), for unsupervised domain adaptation. The approach we have discussed will make domain adaptation more practical for real-world use cases, making adaptation faster and cheaper. However, in this work, we have used bert-base-uncased for all of our methods; using other backbone transformer models is part of our future work. We deal only with a classification and a natural language inference task. Adapters have previously been used for machine translation (Bapna and Firat, 2019) and other generation tasks (Zhang et al., 2022), and we need to explore our domain adaptation methods for such generation tasks as well.
In this work, we reduce the divergence between the marginal distributions of the two domains. Previous works such as Kumar et al. (2018) show that reducing only the marginal distribution divergence is not sufficient and that aligning the label distributions is also necessary. However, NLP works typically do not consider this, and it would require further investigation by the community.
## 6 Conclusion
In this work, we propose UDApter, to make unsupervised domain adaptation more parameter-efficient. Our methods outperform other strong baselines, and we show that we can perform better than just training a task adapter on supervised data. We perform competitively with other uda methods at a fraction of the parameters and outperform them when there is limited data, a more practical scenario. Future work should explore other parameter-efficient methods such as prefix-tuning (Li and Liang, 2021) for domain adaptation. NLP should also consider other avenues, such as continuous adaptation to new domains and adaptation to new domains when no data are available.

Figure 4: (top) t-SNE plots for the representations from bert-base-uncased. The lower layers are domain-invariant while the higher layers are domain-variant. (bottom) t-SNE plots from the domain adapter trained on the s \(\rightarrow\) tr domain pair. We reduce the divergence using domain adapters, so that even the higher layers are domain-invariant.
## 7 Acknowledgments
This research is supported by the SRG grant id: T1SRIS19149 and the Ministry of Education, Singapore, under its AcRF Tier-2 grant (Project no. T2MOE2008, and Grantor reference no. MOET2EP20220-0017). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
## 8 Limitations
We have several limitations to our work. We have experimented with only one type of parameter-efficient method, which is the adapter fine-tuning method. Several other alternative parameter-efficient methods, such as LoRA (Hu et al., 2021), Bitfit (Ben Zaken et al., 2022), and other unifying paradigms (He et al., 2021), have been proposed in recent times. These methods are modular and can be easily substituted for adapters.
Another major limitation of our work is that we cannot explore whether we can learn different tasks over a given pair of domains. For example, for a given pair of domains such as news and twitter, it would be ideal if we learned a domain adapter and reused it for different applications such as sentiment analysis, named entity recognition, among others. We are limited by the availability of data for such scenarios and this would be a potential future work.
|
2304.06362 | Hydrodynamic limit for the non-cutoff Boltzmann equation | This work deals with the non-cutoff Boltzmann equation for all types of
potentials, in both the torus $\mathbf{T}^3$ and in the whole space
$\mathbf{R}^3$, under the incompressible Navier-Stokes scaling. We first
establish the well-posedness and decay of global mild solutions to this
rescaled Boltzmann equation in a perturbative framework, that is for solutions
close to the Maxwellian, obtaining in particular integrated-in-time
regularization estimates. We then combine these estimates with spectral-type
estimates in order to obtain the strong convergence of solutions to the
non-cutoff Boltzmannn equation towards the incompressible Navier-Stokes-Fourier
system. | Chuqi Cao, Kleber Carrapatoso | 2023-04-13T09:34:48Z | http://arxiv.org/abs/2304.06362v3 | # Hydrodynamic limit for the non-cutoff Boltzmann equation
###### Abstract.
This work deals with the non-cutoff Boltzmann equation for all type of potentials, in both the torus \(\mathbf{T}^{3}\) and in the whole space \(\mathbf{R}^{3}\), under the incompressible Navier-Stokes scaling. We first establish the well-posedness and decay of global mild solutions to this rescaled Boltzmann equation in a perturbative framework, that is for solutions close to the Maxwellian, obtaining in particular integrated-in-time regularization estimates. We then combine these estimates with spectral-type estimates in order to obtain the strong convergence of solutions to the non-cutoff Boltzmann equation towards the incompressible Navier-Stokes-Fourier system.
###### Contents
* 1 Introduction
* 2 Main results
* 3 Linearized Boltzmann operator
* 4 Well-posedness and regularization for the rescaled Boltzmann equation
* 5 Well-posedness for the Navier-Stokes-Fourier system
* 6 Hydrodynamic limit
## 1. Introduction
Since Hilbert [50], an important problem in kinetic theory concerns the rigorous link between different scales of description of a gas. More precisely, one is interested in passing rigorously from a mesoscopic description of a gas, modeled by the kinetic Boltzmann equation, towards a macroscopic description, modeled by Euler or Navier-Stokes fluid equations, through a suitable scaling limit. We are interested in this paper on the convergence of solutions to the Boltzmann equation towards the incompressible Navier-Stokes equation, and we refer to the book [66] and the references therein to a detailed description of this type of problem as well as to different scalings and fluid limit equations.
We introduce in Section 1.1 below the (rescaled) Boltzmann equation, and then in Section 1.2 we describe the incompressible Navier-Stokes-Fourier system, which is the expected limit. We finally present our main results in Section 2.
### The Boltzmann equation
The Boltzmann equation is a fundamental model in kinetic theory that describes the evolution of a rarefied gaz out of equilibrium by taking into account binary collisions between particles. More precisely, it describes the evolution in time of the unknown \(F(t,x,v)\geq 0\) which represents the density of particles that at time \(t\geq 0\) and position \(x\in\Omega_{x}=\mathbf{T}^{3}\) or \(\Omega_{x}=\mathbf{R}^{3}\) move with velocity \(v\in\mathbf{R}^{3}\). It was introduced by Maxwell [62] and Boltzmann [15] and reads
\[\partial_{t}F+v\cdot\nabla_{x}F=\frac{1}{\varepsilon}Q(F,F), \tag{1.1}\]
which is complemented with an initial data \(F_{|t=0}=F_{0}\) and where \(\varepsilon\in(0,1]\) is the Knudsen number, which corresponds to the ratio between the mean-free path and the macroscopic length scale.
The Boltzmann collision operator \(Q\) is a bilinear operator acting only on the velocity variable \(v\in\mathbf{R}^{3}\), which means that collisions are local in space, and it is given by
\[Q(G,F)(v)=\int_{\mathbf{R}^{3}}\int_{\mathbf{S}^{2}}B(v-v_{*},\sigma)(G^{\prime} _{*}F^{\prime}-G_{*}F)\,\mathrm{d}\sigma\,\mathrm{d}v_{*}, \tag{1.2}\]
where here and below we use the standard short-hand notation \(F=F(v)\), \(G_{*}=G(v_{*})\), \(F^{\prime}=F(v^{\prime})\), and \(G^{\prime}_{*}=G(v^{\prime}_{*})\), and where the pre- and post-collision velocities \((v^{\prime},v^{\prime}_{*})\) and \((v,v_{*})\) are related through
\[v^{\prime}=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma\quad\text{and}\quad v^{ \prime}_{*}=\frac{v+v_{*}}{2}-\frac{|v-v_{*}|}{2}\sigma. \tag{1.3}\]
The above formula is one possible parametrization of the set of solutions of an elastic collision with the physical laws of conservation (momentum and energy)
\[v+v_{*}=v^{\prime}+v^{\prime}_{*}\quad\text{and}\quad|v|^{2}+|v_{*}|^{2}=|v^{ \prime}|^{2}+|v^{\prime}_{*}|^{2}.\]
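For instance, using \(|\sigma|=1\) and the parallelogram identity, one checks directly that the parametrization (1.3) satisfies these conservation laws:
\[v^{\prime}+v^{\prime}_{*}=v+v_{*},\qquad|v^{\prime}|^{2}+|v^{\prime}_{*}|^{2}=2\Big|\frac{v+v_{*}}{2}\Big|^{2}+2\Big|\frac{|v-v_{*}|}{2}\sigma\Big|^{2}=\frac{|v+v_{*}|^{2}+|v-v_{*}|^{2}}{2}=|v|^{2}+|v_{*}|^{2}.\]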
The function \(B(v-v_{*},\sigma)\) appearing in (1.2), called the collision kernel, is supposed to be nonnegative and to depend only on the relative velocity \(|v-v_{*}|\) and the deviation angle \(\theta\) through \(\cos\theta:=\frac{v-v_{*}}{|v-v_{*}|}\cdot\sigma\). As it is customary, we may suppose without loss of generality that \(\theta\in[0,\pi/2]\), for otherwise \(B\) can be replaced by its symmetrized form.
In this paper we shall consider the case of _non-cutoff potentials_ that we describe now. The collision kernel \(B\) takes the form
\[B(v-v_{*},\sigma)=|v-v_{*}|^{\gamma}b(\cos\theta),\]
for some nonnegative function \(b\), called the angular kernel, and some parameter \(\gamma\in(-3,1]\). We assume that the angular kernel \(b\) is a locally smooth implicit function which is not locally integrable, more precisely that it satisfies
\[\mathcal{K}\theta^{-1-2s}\leq\sin\theta\,b(\cos\theta)\leq\mathcal{K}^{-1} \theta^{-1-2s}\quad\text{with}\quad 0<s<1,\]
for some constant \(\mathcal{K}>0\). Moreover the parameters satisfy the condition
\[\max\left\{-3,-\frac{3}{2}-2s\right\}<\gamma\leq 1,\quad 0<s<1,\quad\gamma+2s >-1. \tag{1.4}\]
We shall consider in this paper the full range of parameters \(\gamma\) and \(s\) satisfying (1.4), and we classify them into two cases: When \(\gamma+2s\geq 0\) we speak of _hard potentials_, and when \(\gamma+2s<0\) of _soft potentials_. We also mention that _cutoff kernels_ correspond to the case in which we remove the singularity of the angular kernel \(b\) and assume that \(b\) is integrable.
_Remark 1.1_.: When particles interact via a repulsive inverse-power law potential \(\phi(r)=r^{-(p-1)}\) with \(p>2\), then it holds (see [62, 25]) that \(\gamma=\frac{p-5}{p-1}\) and \(s=\frac{1}{p-1}\). It is easy to check that \(\gamma+4s=1\) which means the above assumption is satisfied for the full range of the inverse power law model.
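Indeed, in this case
\[\gamma+4s=\frac{p-5}{p-1}+\frac{4}{p-1}=\frac{p-1}{p-1}=1,\]
and (1.4) holds for every \(p>2\), since then \(s=\frac{1}{p-1}\in(0,1)\), \(\gamma=1-4s\leq 1\), \(\gamma+2s=1-2s>-1\) and \(\gamma>-\frac{3}{2}-2s\).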
Formally, if \(F\) is a solution to equation (1.1) with initial data \(F_{0}\), then it enjoys the conservation of mass, momentum and energy, that is,
\[\frac{d}{dt}\int_{\Omega_{x}\times\mathbf{R}^{3}}F(t,x,v)\varphi(v)\,\mathrm{d }v\,\mathrm{d}x=0,\quad\varphi(v)=1,v,|v|^{2},\]
which is a consequence of the collision invariants of the Boltzmann operator
\[\int_{\mathbf{R}^{3}}Q(F,F)(v)\varphi(v)\,\mathrm{d}v=0,\quad\varphi(v)=1,v,| v|^{2}.\]
Moreover the Boltzmann H-theorem asserts on the one hand that the entropy
\[H(F)=\int_{\Omega_{x}\times\mathbf{R}^{3}}F\log F\,\mathrm{d}v\,\mathrm{d}x,\]
is non-increasing in time. Indeed, at least formally, since \((x-y)(\log x-\log y)\) is nonnegative, we have the following inequality for the entropy dissipation \(D(F)\):

\[D(F) =-\frac{\mathrm{d}}{\mathrm{d}t}H(F)=-\int_{\Omega_{x}\times\mathbf{R}^{3}}Q(F,F)\log F\,\mathrm{d}v\,\mathrm{d}x\] \[=\frac{1}{4}\int_{\Omega_{x}\times\mathbf{R}^{3}\times\mathbf{R}^{3}\times\mathbf{S}^{2}}B(v-v_{*},\sigma)(F^{\prime}F_{*}^{\prime}-F_{*}F)\log\left(\frac{F^{\prime}F_{*}^{\prime}}{FF_{*}}\right)\,\mathrm{d}\sigma\,\mathrm{d}v_{*}\,\mathrm{d}v\,\mathrm{d}x\geq 0.\]
On the other hand, the second part of the H-theorem asserts that local equilibria of the Boltzmann equation are local Maxwellian distributions in velocity, more precisely that
\[D(F)=0\quad\Leftrightarrow\quad Q(F,F)=0\quad\Leftrightarrow\quad F(t,x,v)= \frac{\rho(t,x)}{(2\pi\theta(t,x))^{3/2}}\exp\left(-\frac{|v-u(t,x)|^{2}}{2 \theta(t,x)}\right),\]
with \(\rho(t,x)>0\), \(u(t,x)\in\mathbf{R}^{3}\) and \(\theta(t,x)>0\). In what follows, we denote by \(\mu=\mu(v)\) the global Maxwellian
\[\mu=(2\pi)^{-3/2}e^{-|v|^{2}/2}.\]
Observing that the effect of collisions is enhanced when taking a small parameter \(\varepsilon\in(0,1]\), one can expect from the above H-theorem that, at least formally, in the limit \(\varepsilon\to 0\) the solution \(F\) approaches a local Maxwellian equilibrium. One therefore considers, see for instance [12], a rescaling of the solution \(F\) of (1.1) in which an additional dilatation of the macroscopic time scale has been performed in order to be able to reach the Navier-Stokes equation in the limit. This procedure gives us the following rescaled Boltzmann equation for the new unknown \(F^{\varepsilon}=F^{\varepsilon}(t,x,v)\):
\[\partial_{t}F^{\varepsilon}+\frac{1}{\varepsilon}v\cdot\nabla_{x}F^{\varepsilon}=\frac{1}{\varepsilon^{2}}Q(F^{\varepsilon},F^{\varepsilon}), \tag{1.5}\]
with initial data \(F^{\varepsilon}_{|t=0}=F^{\varepsilon}_{0}\).
In the torus case \(\Omega_{x}=\mathbf{T}^{3}\) (normalized as \(|\mathbf{T}^{3}|=1\)), we shall always assume, thanks to the conservation laws, that the initial datum \(F^{\varepsilon}_{0}\) satisfies the normalization
\[\int_{\mathbf{T}^{3}}\int_{\mathbf{R}^{3}}F^{\varepsilon}_{0}(x,v)[1,v,|v|^{2 }]\,\mathrm{d}v\,\mathrm{d}x=[1,0,3], \tag{1.6}\]
that is, the initial data \(F^{\varepsilon}_{0}\) has the same mass, momentum and energy as \(\mu\), and the Maxwellian \(\mu\) is the unique global equilibrium to (1.5).
In order to relate the above rescaled Boltzmann equation (1.5) to the expected incompressible Navier-Stokes-Fourier system (described below in (1.13)) in the limit \(\varepsilon\to 0\), we are going to work with the perturbation \(f^{\varepsilon}\) defined by
\[F^{\varepsilon}=\mu+\varepsilon\sqrt{\mu}f^{\varepsilon}, \tag{1.7}\]
which then satisfies the equation
\[\partial_{t}f^{\varepsilon}+\frac{1}{\varepsilon}v\cdot\nabla_{x}f^{ \varepsilon}=\frac{1}{\varepsilon^{2}}Lf^{\varepsilon}+\frac{1}{\varepsilon} \Gamma(f^{\varepsilon},f^{\varepsilon}), \tag{1.8}\]
with initial data \(f^{\varepsilon}_{0}=\frac{F^{\varepsilon}_{0}-\mu}{\varepsilon\sqrt{\mu}}\), and where we denote
\[\Gamma(f,g)=\mu^{-1/2}Q(\sqrt{\mu}f,\sqrt{\mu}g), \tag{1.9}\]
and
\[Lf=\Gamma(\sqrt{\mu},f)+\Gamma(f,\sqrt{\mu}). \tag{1.10}\]
In the case of the torus \(\Omega_{x}=\mathbf{T}^{3}\), we observe from (1.6) that \(f^{\varepsilon}_{0}\) satisfies
\[\int_{\mathbf{T}^{3}}\int_{\mathbf{R}^{3}}f^{\varepsilon}_{0}(x,v)[1,v,|v|^{2}] \sqrt{\mu}(v)\,\mathrm{d}v\,\mathrm{d}x=0, \tag{1.11}\]
and from the conservation laws recalled above that
\[\int_{\mathbf{T}^{3}}\int_{\mathbf{R}^{3}}f^{\varepsilon}(t,x,v)[1,v,|v|^{2}] \sqrt{\mu}(v)\,\mathrm{d}v\,\mathrm{d}x=0. \tag{1.12}\]
### The Navier-Stokes-Fourier system
We recall the Navier-Stokes-Fourier system associated with the Boussinesq equation which writes
\[\begin{cases}\partial_{t}u+u\cdot\nabla_{x}u-\nu_{1}\Delta_{x}u=\nabla_{x}p,\\ \partial_{t}\theta+u\cdot\nabla_{x}\theta-\nu_{2}\Delta_{x}\theta=0,\\ \operatorname{div}_{x}u=0,\\ \nabla_{x}(\rho+\theta)=0,\end{cases} \tag{1.13}\]
with positive viscosity coefficients \(\nu_{1},\nu_{2}>0\). In this system, the temperature \(\theta=\theta(t,x):\mathbf{R}_{+}\times\Omega_{x}\to\mathbf{R}\) of the fluid, the density \(\rho=\rho(t,x):\mathbf{R}_{+}\times\Omega_{x}\to\mathbf{R}\) of the fluid, and the pressure \(p=p(t,x):\mathbf{R}_{+}\times\Omega_{x}\to\mathbf{R}\) of the fluid are scalar unknowns, whereas the velocity \(u=u(t,x):\mathbf{R}_{+}\times\Omega_{x}\to\mathbf{R}^{3}\) of the fluid is an unknown vector field. The pressure \(p\) can actually be eliminated from the equation by applying to the first equation in (1.13) the Leray projector \(\mathbb{P}\) onto the space of divergence-free vector fields. In other words, for \(u\) we have
\[\partial_{t}u-\nu_{1}\Delta_{x}u=Q_{\mathrm{NS}}(u,u),\]
where the bilinear operator \(Q_{\mathrm{NS}}\) is defined by
\[Q_{\mathrm{NS}}(v,u)=-\frac{1}{2}\mathbb{P}(\operatorname{div}(v\otimes u)+ \operatorname{div}(u\otimes v)),\quad\operatorname{div}(v\otimes u)^{j}:= \sum_{k=1}^{3}\partial_{k}(v^{j}u^{k})=\operatorname{div}(v^{j}u),\]
and the Leray projector \(\mathbb{P}\) on divergence-free vector fields is as follows, for \(1\leq j\leq 3\) and all \(\xi\in\Omega^{\prime}_{\xi}\),
\[\mathcal{F}_{x}(\mathbb{P}f)^{j}(\xi)=\mathcal{F}_{x}(f^{j})(\xi)-\frac{1}{|\xi|^{2}}\sum_{k=1}^{3}\xi_{j}\xi_{k}\mathcal{F}_{x}(f^{k})(\xi)=\sum_{k=1}^{3}\Big(\delta_{j,k}-\frac{\xi_{j}\xi_{k}}{|\xi|^{2}}\Big)\mathcal{F}_{x}(f^{k})(\xi),\]
where \(\mathcal{F}_{x}\) denotes the Fourier transform in the spatial variable \(x\in\Omega_{x}\), see for instance [10, Section 5.1].
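In particular, one checks directly from this formula that \(\mathbb{P}f\) is divergence free: for every \(\xi\neq 0\),
\[\mathcal{F}_{x}(\operatorname{div}_{x}\mathbb{P}f)(\xi)=\mathrm{i}\sum_{j=1}^{3}\xi_{j}\,\mathcal{F}_{x}(\mathbb{P}f)^{j}(\xi)=\mathrm{i}\sum_{k=1}^{3}\Big(\xi_{k}-\frac{|\xi|^{2}\xi_{k}}{|\xi|^{2}}\Big)\mathcal{F}_{x}(f^{k})(\xi)=0.\]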
We therefore consider the system
\[\begin{cases}\partial_{t}u-\nu_{1}\Delta_{x}u=Q_{\mathrm{NS}}(u,u),\\ \partial_{t}\theta+u\cdot\nabla_{x}\theta-\nu_{2}\Delta_{x}\theta=0,\\ \operatorname{div}_{x}u=0,\\ \nabla_{x}(\rho+\theta)=0,\end{cases} \tag{1.14}\]
for the unknown \((\rho,u,\theta)\), which is complemented with an initial data \((\rho_{0},u_{0},\theta_{0})\) that we shall always suppose to satisfy
\[\operatorname{div}_{x}u_{0}=0,\quad\nabla_{x}(\rho_{0}+\theta_{0})=0. \tag{1.15}\]
In the case of the torus \(\Omega_{x}=\mathbf{T}^{3}\), we suppose moreover that the initial data is mean-free, namely
\[\int_{\mathbf{T}^{3}}\rho_{0}(x)\,\mathrm{d}x=\int_{\mathbf{T}^{3}}u_{0}(x)\, \mathrm{d}x=\int_{\mathbf{T}^{3}}\theta_{0}(x)\,\mathrm{d}x=0,\]
which then implies that the associated solution \((\rho,u,\theta)\) also is mean-free
\[\int_{\mathbf{T}^{3}}\rho(t,x)\,\mathrm{d}x=\int_{\mathbf{T}^{3}}u(t,x)\, \mathrm{d}x=\int_{\mathbf{T}^{3}}\theta(t,x)\,\mathrm{d}x=0. \tag{1.16}\]
## 2. Main results
Before stating our results we introduce some notation. Given a function \(f=f(x,v)\) we denote \(\widehat{f}(\xi,v)=\mathcal{F}_{x}(f(\cdot,v))(\xi)\) the Fourier transform in the space variable, for \(\xi\in\Omega^{\prime}_{\xi}=\mathbf{Z}^{3}\) (if \(\Omega_{x}=\mathbf{T}^{3}\)) or \(\Omega^{\prime}_{\xi}=\mathbf{R}^{3}\) (if \(\Omega_{x}=\mathbf{R}^{3}\)), more precisely
\[\widehat{f}(\xi,v)=\frac{1}{(2\pi)^{3/2}}\int_{\mathbf{R}^{3}}e^{-\mathrm{i}x \cdot\xi}f(x,v)\,\mathrm{d}x.\]
In particular, we observe that if \(f\) satisfies (1.8), then for all \(\xi\in\Omega^{\prime}_{\xi}\), its Fourier transform in space \(\widehat{f}^{\varepsilon}(\xi)\) satisfies the equation
\[\partial_{t}\widehat{f}^{\varepsilon}(\xi)=\frac{1}{\varepsilon^{2}}(L-\mathrm{i }\varepsilon v\cdot\xi)\widehat{f}^{\varepsilon}(\xi)+\frac{1}{\varepsilon} \widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})(\xi), \tag{2.1}\]
where
\[\widehat{\Gamma}(f,g)(\xi)=\sum_{\eta\in\mathbf{Z}^{3}}\Gamma\left(\widehat{f} (\xi-\eta),\widehat{g}(\eta)\right)\quad\text{if}\quad\Omega_{x}=\mathbf{T}^{ 3},\]
or
\[\widehat{\Gamma}(f,g)(\xi)=\int_{\mathbf{R}^{3}}\Gamma\left(\widehat{f}(\xi- \eta),\widehat{g}(\eta)\right)\mathrm{d}\eta\quad\text{if}\quad\Omega_{x}= \mathbf{R}^{3}.\]
For functions \(f=f(x,v)\) we write the _micro-macro decomposition_
\[f=\mathbf{P}^{\perp}f+\mathbf{P}f,\quad\mathbf{P}^{\perp}=I-\mathbf{P}, \tag{2.2}\]
where \(\mathbf{P}\) is the orthogonal projection onto \(\mathrm{Ker}(L)=\mathrm{Span}\{\sqrt{\mu},v\sqrt{\mu},|v|^{2}\sqrt{\mu}\}\) given by
\[\mathbf{P}f(x,v)=\left\{\rho[f](x)+u[f](x)\cdot v+\theta[f](x)\frac{(|v|^{2}- 3)}{2}\right\}\sqrt{\mu}(v), \tag{2.3}\]
where
\[\rho[f](x) =\int_{\mathbf{R}^{3}}f(x,v)\sqrt{\mu}(v)\,\mathrm{d}v,\] \[u[f](x) =\int_{\mathbf{R}^{3}}f(x,v)v\sqrt{\mu}(v)\,\mathrm{d}v,\] \[\theta[f](x) =\int_{\mathbf{R}^{3}}f(x,v)\frac{(|v|^{2}-3)}{3}\sqrt{\mu}(v)\, \mathrm{d}v.\]
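Note that these normalizations are consistent with (2.3): if \(f=\mathbf{P}f\) is of the form (2.3) with coefficients \((\rho,u,\theta)\), then, using that \(\int_{\mathbf{R}^{3}}|v|^{2}\mu\,\mathrm{d}v=3\), \(\int_{\mathbf{R}^{3}}(|v|^{2}-3)^{2}\mu\,\mathrm{d}v=6\) and oddness,
\[\theta[\mathbf{P}f]=\int_{\mathbf{R}^{3}}\Big(\rho+u\cdot v+\theta\,\frac{(|v|^{2}-3)}{2}\Big)\frac{(|v|^{2}-3)}{3}\,\mu(v)\,\mathrm{d}v=\frac{\theta}{6}\int_{\mathbf{R}^{3}}(|v|^{2}-3)^{2}\mu(v)\,\mathrm{d}v=\theta,\]
and similarly \(\rho[\mathbf{P}f]=\rho\) and \(u[\mathbf{P}f]=u\), so that \(\mathbf{P}^{2}=\mathbf{P}\).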
The function \(\mathbf{P}^{\perp}f\) is called the _microscopic part_ of \(f\), whereas \(\mathbf{P}f\) is the _macroscopic part_ of \(f\).
We now introduce the functional spaces we work with. For every \(\ell\geq 0\) we denote by \(L^{2}_{v}(\langle v\rangle^{\ell})\) the weighted Lebesgue space associated to the inner product
\[\langle f,g\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}:=\langle\langle v \rangle^{\ell}f,\langle v\rangle^{\ell}g\rangle_{L^{2}_{v}}=\int_{\mathbf{R}^{ 3}}fg\langle v\rangle^{2\ell}\,\mathrm{d}v,\]
and the norm
\[\|f\|_{L^{2}_{v}(\langle v\rangle^{\ell})}:=\|\langle v\rangle^{\ell}f\|_{L^{ 2}_{v}},\]
where \(L^{2}_{v}=L^{2}(\mathbf{R}^{3}_{v})\) is the standard Lebesgue space. We denote by \(H^{s,*}_{v}\) the Sobolev-type space associated to the dissipation of the linearized operator \(L\) defined in [4] (see also [44] for the definition of a different but equivalent anisotropic norm), more precisely we denote
\[\|f\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}:=\|\langle v\rangle^{\ell}f\|_{H^ {s,*}_{v}}, \tag{2.4}\]
where
\[\begin{split}\|f\|^{2}_{H^{s,*}_{v}}&:=\int_{ \mathbf{R}^{3}}\int_{\mathbf{R}^{3}}\int_{\mathbf{S}^{2}}b(\cos\theta)|v-v_{*}| ^{\gamma}\mu(v_{*})[f(v^{\prime})-f(v)]^{2}\,\mathrm{d}\sigma\,\mathrm{d}v_{* }\,\mathrm{d}v\\ &\quad+\int_{\mathbf{R}^{3}}\int_{\mathbf{R}^{3}}\int_{\mathbf{S} ^{2}}b(\cos\theta)|v-v_{*}|^{\gamma}f(v_{*})^{2}[\sqrt{\mu}(v^{\prime})-\sqrt {\mu}(v)]^{2}\,\mathrm{d}\sigma\,\mathrm{d}v_{*}\,\mathrm{d}v,\end{split} \tag{2.5}\]
which verifies, see [4, 44],
\[\|\langle v\rangle^{\gamma/2+s}f\|_{L^{2}_{v}(\langle v\rangle^{\ell})}+\| \langle v\rangle^{\gamma/2}f\|_{H^{s}_{v}(\langle v\rangle^{\ell})}\lesssim \|f\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\lesssim\|\langle v\rangle^{\gamma /2+s}f\|_{H^{s}_{v}(\langle v\rangle^{\ell})}.\]
We also define the space \((H^{s,*}_{v})^{\prime}\) as the dual of \(H^{s,*}_{v}\), namely
\[\|f\|_{(H^{s,*}_{v})^{\prime}}:=\sup_{\|\phi\|_{H^{s,*}_{v}}\leq 1}\langle f,\phi\rangle_{L^{2}_{v}}. \tag{2.6}\]
We further define the space \(H^{s,**}_{v}(\langle v\rangle^{\ell})\) as the space associated to the norm
\[\|f\|^{2}_{H^{s,**}_{v}(\langle v\rangle^{\ell})}:=\|\mathbf{P}^{\perp}f\|^{2}_{H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|a(D_{x})\mathbf{P}f\|^{2}_{L^{2}_{v}}, \tag{2.7}\]
where \(a(D_{x})\) is the Fourier multiplier \(a(\xi)=\frac{|\xi|}{\langle\xi\rangle}\), which gives, in Fourier variable,
\[\|\widehat{f}(\xi)\|^{2}_{H^{s,**}_{v}(\langle v\rangle^{\ell})}=\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|^{2}_{H^{s,*}_{v}(\langle v\rangle^{\ell})}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|^{2}_{L^{2}_{v}}. \tag{2.8}\]
Finally, given a functional space \(X\) in the variables \((t,\xi,v)\), we shall denote by \(\mathcal{F}^{-1}_{x}(X)\) the Fourier-based space defined as
\[\mathcal{F}^{-1}_{x}(X):=\left\{f=f(t,x,v)\mid\widehat{f}\in X\right\}.\]
Hereafter, in order to deal with the torus case \(\Omega_{x}=\mathbf{T}^{3}\) and the whole space case \(\Omega_{x}=\mathbf{R}^{3}\) simultaneously, we denote \(L^{p}_{\xi}=\ell^{p}(\mathbf{Z}^{3})\) in the torus case and \(L^{p}_{\xi}=L^{p}(\mathbf{R}^{3})\) in the whole space case, moreover we abuse notation and write
\[\int_{\Omega^{\prime}_{\xi}}\phi(\xi)\,\mathrm{d}\xi:=\left\{\begin{aligned} &\sum_{\xi\in\mathbf{Z}^{3}}\phi(\xi)& \quad\text{if}\quad\Omega^{\prime}_{\xi}=\mathbf{Z}^{3},\\ &\int_{\mathbf{R}^{3}}\phi(\xi)\,\mathrm{d}\xi& \quad\text{if}\quad\Omega^{\prime}_{\xi}=\mathbf{R}^{3}.\end{aligned}\right.\]
In particular, we shall consider below functional spaces of the type \(\mathcal{F}^{-1}_{x}(L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell}))\) and \(\mathcal{F}^{-1}_{x}(L^{p}_{\xi}L^{2}_{t}H^{s,**}_{v}(\langle v\rangle^{\ell}))\) (or \(\mathcal{F}^{-1}_{x}(L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell}))\)) and the respective norms, for \(f=f(t,x,v)\),

\[\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}:=\left(\int_{\Omega^{\prime}_{\xi}}\sup_{t\geq 0}\|\widehat{f}(t,\xi,\cdot)\|^{p}_{L^{2}_{v}(\langle v\rangle^{\ell})}\,\mathrm{d}\xi\right)^{1/p}\quad\text{for}\quad p\in[1,+\infty),\]
and
\[\|\widehat{f}\|_{L^{p}_{\xi}L^{2}_{t}H^{s,**}_{v}(\langle v\rangle^{\ell})}:= \left(\int_{\Omega^{\prime}_{\xi}}\left\{\int_{0}^{\infty}\|\widehat{f}(t,\xi,\cdot)\|^{2}_{H^{s,**}_{v}(\langle v\rangle^{\ell})}\,\mathrm{d}t\right\}^{p /2}\,\mathrm{d}\xi\right)^{1/p}\quad\text{for}\quad p\in[1,+\infty),\]
with the usual modification for \(p=+\infty\).
### Well-posedness for the rescaled Boltzmann equation
Our first result concerns the global well-posedness, regularization and decay for equation (1.8) for small initial data.
**Theorem 2.1** (Global well-posedness and decay for the Boltzmann equation).: _Let \(\ell\geq 0\). There is \(\eta_{0}>0\) small enough such that for all \(\varepsilon\in(0,1]\) the following holds:_
(1) _Torus case \(\Omega_{x}=\mathbf{T}^{3}\): For any initial data \(f^{\varepsilon}_{0}\in\mathcal{F}^{-1}_{x}(L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell}))\) satisfying (1.12) and \(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\leq\eta_{0}\), there exists a unique global mild solution \(f^{\varepsilon}\in\mathcal{F}^{-1}_{x}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\cap L^{1}_{\xi}L^{2}_{t}H^{s,**}_{v}(\langle v\rangle^{\ell}))\) to (1.8) satisfying (1.12) and the energy estimate_
\[\|\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})} +\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^ {s,*}_{v}(\langle v\rangle^{\ell})}+\|\mathbf{P}\widehat{f}\|_{L^{1}_{\xi}L^{ 2}_{t}L^{2}_{v}}\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_ {v}(\langle v\rangle^{\ell})}. \tag{2.9}\]
_Moreover we have the following decay estimates: In the hard potentials case \(\gamma+2s\geq 0\), there exists \(\lambda>0\) such that_
\[\|\mathrm{e}_{\lambda}\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v \rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{e}_{\lambda}\mathbf{P}^{\perp }\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\| \mathrm{e}_{\lambda}\mathbf{P}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}} \lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v \rangle^{\ell})}, \tag{2.10}\]
_where we denote \(\mathrm{e}_{\lambda}:t\mapsto e^{\lambda t}\). In the soft potentials case \(\gamma+2s<0\), if \(\ell>0\) then for any \(0<\omega<\frac{\ell}{|\gamma+2s|}\) there holds_
\[\|\mathrm{p}_{\omega}\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\frac{1}{\varepsilon}\|\mathrm{p}_{\omega}\mathbf{P}^{\perp}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\|\mathrm{p}_{\omega}\mathbf{P}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}, \tag{2.11}\]
_where we denote \(\mathrm{p}_{\omega}:t\mapsto(1+t)^{\omega}\)._
(2) _Whole space case \(\Omega_{x}=\mathbf{R}^{3}\): Let \(p\in(3/2,\infty]\). For any initial data \(f^{\varepsilon}_{0}\in\mathcal{F}^{-1}_{x}(L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{ \ell})\cap L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell}))\) satisfying \(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+ \|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})} \leq\eta_{0}\), there exists a unique global mild
solution \(f^{\varepsilon}\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\cap L^{1}_{\xi}L^{2}_{t}H^{s,**}_{v}(\langle v\rangle^{\ell}))\cap\mathcal{F}_{x}^{-1}(L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\cap L^{p}_{\xi}L^{2}_{t}H^{s,**}_{v}(\langle v\rangle^{\ell}))\) to (1.8) satisfying the energy estimate_
\[\|\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v \rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{f}\|_{L^{1 }_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{ \langle\xi\rangle}\mathbf{P}\widehat{f}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v }}\] \[+\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v \rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{f}\|_{L^{p}_ {\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle \xi\rangle}\mathbf{P}\widehat{f}\right\|_{L^{p}_{\xi}L^{2}_{v}}\lesssim\| \widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell })}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{ \ell})}. \tag{2.12}\]
_Moreover we have the following decay estimates: In the hard potentials case \(\gamma+2s\geq 0\), for any \(0<\vartheta<\frac{3}{2}(1-\frac{1}{p})\) there holds_
\[\begin{split}\|\mathrm{p}_{\vartheta}\widehat{f}\|_{L^{1}_{\xi}L^ {\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\| \mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^ {s,*}_{v}(\langle v\rangle^{\ell})}&+\left\|\mathrm{p}_{ \vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}\right\|_{L^{1}_ {\xi}L^{2}_{t}L^{2}_{v}}\\ &\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_ {v}(\langle v\rangle^{\ell})}.\end{split} \tag{2.13}\]
_where we denote \(\mathrm{p}_{\vartheta}:t\mapsto(1+t)^{\vartheta}\). In the soft potentials case \(\gamma+2s<0\), if \(0<\vartheta<\frac{3}{2}(1-\frac{1}{p})\) and \(\ell>\vartheta|\gamma+2s|\) there holds_
\[\begin{split}\|\mathrm{p}_{\vartheta}\widehat{f}\|_{L^{1}_{\xi}L^ {\infty}_{t}L^{2}_{v}}+\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P }^{\perp}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}&+\left\| \mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f} \right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\\ &\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_ {v}(\langle v\rangle^{\ell})}.\end{split} \tag{2.14}\]
The Cauchy theory and the large time behavior for the Boltzmann equation with \(\varepsilon=1\) have been extensively studied. Concerning the theory for large data, we only mention the global existence of renormalized solutions [32] for the cutoff Boltzmann equation, and the global existence of renormalized solutions with defect measure [6] for the non-cutoff Boltzmann equation.
We now give a very brief review for solutions to the Boltzmann equation in a perturbative framework, that is, for solutions near the Maxwellian. For the case of cutoff potentials, we refer to the works [43, 70, 71, 17, 72] as well as the more recent [73, 31] for global solutions in spaces of the form \(L^{\infty}_{v}H^{N}_{x}\) ; and to [54, 61, 47, 69, 33] for solutions in \(H^{N}_{x,v}\) or \(H^{N}_{x}L^{2}_{v}\). On the other hand, for the non-cutoff Boltzmann equation, we refer to [44, 45] in the torus case and to [4, 2, 3] in the whole space case, for the first global solutions in spaces of the form \(H^{N}_{x,v}\) by working with anisotropic norms (see (2.5)). The optimal time-decay was obtained in [67] for the whole space, and recently [30] constructed global solutions in the whole space.
All the above results concern solutions with Gaussian decay in velocity, that is, they hold in functional spaces of the type \(H^{N}_{x,v}\) for the perturbation \(f\) defined in (1.7), which means that \(F-\mu\in H^{N}_{x,v}(\mu^{-1/2})\). By developing decay estimates on the resolvents and semigroups of non-symmetric operators in Banach spaces, Gualdani-Mischler-Mouhot [46] proved nonlinear stability for the cutoff Boltzmann equation with hard potentials in \(L^{1}_{v}L^{\infty}_{x}(\langle v\rangle^{k}\mu^{1/2}),k>2\), that is, in spaces with polynomial decay in velocity (\(f\in L^{1}_{v}L^{\infty}_{x}(\langle v\rangle^{k}\mu^{1/2})\) means \(F-\mu\in L^{1}_{v}L^{\infty}_{x}(\langle v\rangle^{k})\)). In the same framework, the case of non-cutoff hard potentials was treated in [49, 7], and that of non-cutoff soft potentials in [22].
The aforementioned results were obtained in Sobolev-type spaces. Very recently, Duan, Liu, Sakamoto and Strain [34] obtained the well-posedness of the Boltzmann equation in the Fourier-based space \(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}\) in the torus case, which was then extended to the whole space case by Duan, Sakamoto and Ueda in [35]; see also [23] for the whole space case in polynomially weighted spaces. We also refer to the works [8, 21] for recent results on the well-posedness for the non-cutoff Boltzmann equation using De Giorgi arguments.
In our paper, we establish uniform in \(\varepsilon\) estimates for the rescaled non-cutoff Boltzmann equation (1.8). Our result in Theorem 2.1 is similar to [34, 35], but the proof is quite different. We first investigate the semigroup \(U^{\varepsilon}\) associated to the linearized operator \(\frac{1}{\varepsilon^{2}}(L-\varepsilon v\cdot\nabla_{x})\) appearing in (1.8). We provide boundedness and integrated-in-time regularization
estimates for \(U^{\varepsilon}\) (see Proposition 3.2), as well as for its integral in time against a source \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\) (see Proposition 3.4). Together with nonlinear estimates for \(\Gamma\) (see Lemma 4.1), we are then able to take \(S\) equal to the nonlinear term \(\Gamma(f,f)\) and prove the global well-posedness of mild solutions of (1.8), namely
\[f^{\varepsilon}(t)=U^{\varepsilon}(t)f_{0}^{\varepsilon}+\frac{1}{\varepsilon} \int_{0}^{t}U^{\varepsilon}(t-s)\Gamma(f^{\varepsilon}(s),f^{\varepsilon}(s)) \,\mathrm{d}s,\]
by applying a fixed point argument. The decay estimate is then obtained as a consequence of decay estimates for \(U^{\varepsilon}\) (see Propositions 3.6 and 3.10) and for \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\) (see Propositions 3.7 and 3.11). It is important to notice that the fixed point takes place in the space \(\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\cap L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell}))\) for the torus case, and in \(\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\cap L^{1}_{\xi}L^{2}_{t}H^{s,**}_{v}(\langle v\rangle^{\ell}))\cap\mathcal{F}_{x}^{-1}(L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell}))\) for the whole space, that is, the integrated-in-time regularization appears in the functional space. Furthermore, the estimate for \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\) is a key ingredient for our fixed point argument, and on the other hand it is also crucial for establishing the strong convergence in the proof of the hydrodynamic limit established below in Theorem 2.3.
### Well-posedness for the Navier-Stokes-Fourier system
Our second result concerns the global well-posedness of the incompressible Navier-Stokes-Fourier system (1.14) for small initial data.
**Theorem 2.2** (Global well-posedness for the Navier-Stokes-Fourier system).: _There exists \(\eta_{1}>0\) small enough such that the following holds:_
(1) _Torus case \(\Omega_{x}=\mathbf{T}^{3}\): For any initial data \((\rho_{0},u_{0},\theta_{0})\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi})\) satisfying (1.16) and \(\|(\widehat{\rho}_{0},\widehat{u}_{0},\widehat{\theta}_{0})\|_{L^{1}_{\xi}}\leq\eta _{1}\), there exists a unique global mild solution \((\rho,u,\theta)\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}\cap L^{1}_{ \xi}(\langle\xi\rangle)L^{2}_{t})\) to the Navier-Stokes-Fourier system (1.14) satisfying (1.16) and the energy estimate_
\[\|(\widehat{\rho},\widehat{u},\widehat{\theta})\|_{L^{1}_{\xi}L^{\infty}_{t}} +\|\langle\xi\rangle(\widehat{\rho},\widehat{u},\widehat{\theta})\|_{L^{1}_{ \xi}L^{2}_{t}}\lesssim\|(\widehat{\rho}_{0},\widehat{u}_{0},\widehat{\theta}_{ 0})\|_{L^{1}_{\xi}}.\]
(2) _Whole space case \(\Omega_{x}=\mathbf{R}^{3}\): Let \(p\in(3/2,\infty]\). For any initial data \((\rho_{0},u_{0},\theta_{0})\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}\cap L^{p}_{\xi})\) satisfying \(\|(\widehat{\rho}_{0},\widehat{u}_{0},\widehat{\theta}_{0})\|_{L^{1}_{\xi}}+\| (\widehat{\rho}_{0},\widehat{u}_{0},\widehat{\theta}_{0})\|_{L^{p}_{\xi}}\leq\eta _{1}\), there exists a unique global mild solution \((\rho,u,\theta)\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}\cap L^{1}_{ \xi}(|\xi|)L^{2}_{t}\cap L^{p}_{\xi}L^{\infty}_{t}\cap L^{p}_{\xi}(|\xi|)L^{2} _{t})\) to the Navier-Stokes-Fourier system (1.14) satisfying the energy estimate_
\[\|(\widehat{\rho},\widehat{u},\widehat{\theta})\|_{L^{1}_{\xi}L^{\infty}_{t}}+\||\xi|(\widehat{\rho},\widehat{u},\widehat{\theta})\|_{L^{1}_{\xi}L^{2}_{t}}+\|(\widehat{\rho},\widehat{u},\widehat{\theta})\|_{L^{p}_{\xi}L^{\infty}_{t}}+\||\xi|(\widehat{\rho},\widehat{u},\widehat{\theta})\|_{L^{p}_{\xi}L^{2}_{t}}\lesssim\|(\widehat{\rho}_{0},\widehat{u}_{0},\widehat{\theta}_{0})\|_{L^{1}_{\xi}}+\|(\widehat{\rho}_{0},\widehat{u}_{0},\widehat{\theta}_{0})\|_{L^{p}_{\xi}}.\]
The incompressible Navier-Stokes equation, that is, the first equation in (1.14), possesses a vast literature so we only mention a few works in the three dimensional case below, and we refer the reader to the monographs [57, 10] and the references therein for more details. On the one hand, global weak solutions for large initial data were obtained in the pioneering work [58] (see also [51]). On the other hand, global mild solutions for small initial data were obtained in [37, 53, 28, 19, 20, 38] in different Lebesgue and Sobolev spaces, and we refer again to the book [57] for results in Besov and Morrey spaces. We mention in particular the work of Lei and Lin [56] where global mild solutions in the whole space \(\mathbf{R}^{3}\) were constructed in the Fourier-based space \(L^{1}_{\xi}(|\xi|^{-1})L^{\infty}_{t}\).
Our results in Theorem 2.2 are perhaps not entirely new, but we are not aware of a reference covering this precise functional setting (observe that the functional spaces in Theorem 2.2 correspond exactly to the same functional setting as in the global well-posedness for the Boltzmann equation in Theorem 2.1). Therefore, and also for the sake of completeness, we shall provide a complete proof in Section 5.
Our strategy for obtaining the global solution \(u\) for the incompressible Navier-Stokes equation follows a standard fixed point argument. As in the proof of Theorem 2.1, we first obtain boundedness and integrated-in-time regularization estimates for the semigroup \(V\) associated to the operator \(\nu_{1}\Delta_{x}\) (see Proposition 5.1), as well as for its integral in time
against a source \(\int_{0}^{t}V(t-s)S(s)\,\mathrm{d}s\) (see Proposition 5.3). We then combine this with estimates for the nonlinear term \(Q_{\mathrm{NS}}\) (see Lemma 5.4) to obtain, thanks to a fixed point argument, the global well-posedness of mild solutions of the first equation in (1.14), namely
\[u(t)=V(t)u_{0}+\int_{0}^{t}V(t-s)Q_{\mathrm{NS}}(u(s),u(s))\,\mathrm{d}s.\]
Once the solution \(u\) is constructed, we can obtain in a similar (and even easier) way the well-posedness of mild solutions of the second equation in (1.14) for the temperature \(\theta\). Finally, we easily obtain the result for the density \(\rho\) thanks to the last equation in (1.14).
### Hydrodynamic limit
Our third result regards the hydrodynamic limit of the rescaled Boltzmann equation, that is, we are interested in the behavior of solutions \((f^{\varepsilon})_{\varepsilon\in(0,1]}\) to (1.8) in the limit \(\varepsilon\to 0\).
Let \((\rho_{0},u_{0},\theta_{0})\) be an initial data and consider the associated global solution \((\rho,u,\theta)\) to the incompressible Navier-Stokes-Fourier system (1.14) given by Theorem 2.2, where the viscosity coefficients \(\nu_{1},\nu_{2}>0\) are given as follows (see [12]): Let us introduce the two unique functions \(\Phi\) (which is a matrix-valued function) and \(\Psi\) (which is a vector-valued function) orthogonal to \(\mathrm{Ker}\,L\) such that
\[\frac{1}{\sqrt{\mu}}L(\sqrt{\mu}\Phi)=\frac{|v|^{2}}{3}I_{3\times 3}-v\otimes v,\quad\frac{1}{\sqrt{\mu}}L(\sqrt{\mu}\Psi)=\frac{5-|v|^{2}}{2}v,\]
then the viscosity coefficients are defined by
\[\nu_{1}=\frac{1}{10}\int_{\mathbf{R}^{3}}L(\sqrt{\mu}\Phi)\Phi\sqrt{\mu}\, \mathrm{d}v,\quad\nu_{2}=\frac{2}{15}\int_{\mathbf{R}^{3}}\Psi\cdot L(\sqrt{ \mu}\Psi)\sqrt{\mu}\,\mathrm{d}v.\]
We define the initial kinetic distribution \(g_{0}\in\mathrm{Ker}\,L\) associated to \((\rho_{0},u_{0},\theta_{0})\) by
\[g_{0}(x,v)=\mathbf{P}g_{0}(x,v)=\left[\rho_{0}(x)+u_{0}(x)\cdot v+\theta_{0}(x )\frac{(|v|^{2}-3)}{2}\right]\sqrt{\mu}(v), \tag{2.15}\]
and then we consider the kinetic distribution \(g(t)\in\mathrm{Ker}\,L\) associated to \((\rho(t),u(t),\theta(t))\) by
\[g(t,x,v)=\mathbf{P}g(t,x,v)=\left[\rho(t,x)+u(t,x)\cdot v+\theta(t,x)\frac{(|v |^{2}-3)}{2}\right]\sqrt{\mu}(v). \tag{2.16}\]
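We note in passing, as a simple sanity check (using only the standard fact that \(\operatorname{Ker}L=\operatorname{Span}\{\sqrt{\mu},v_{1}\sqrt{\mu},v_{2}\sqrt{\mu},v_{3}\sqrt{\mu},|v|^{2}\sqrt{\mu}\}\)), that \(g(t)\) defined by (2.16) indeed satisfies \(\mathbf{P}g=g\): for each \((t,x)\),
\[g(t,x,v)=\rho(t,x)\,\sqrt{\mu}(v)+u(t,x)\cdot v\,\sqrt{\mu}(v)+\theta(t,x)\,\frac{|v|^{2}-3}{2}\,\sqrt{\mu}(v)\]
is a linear combination of these five functions, and the same observation applies to \(g_{0}\) in (2.15).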
**Theorem 2.3** (Hydrodynamic limit).: _Let \((f^{\varepsilon}_{0})_{\varepsilon\in(0,1]}\) satisfy the hypotheses of Theorem 2.1 and consider the associated global unique mild solution \((f^{\varepsilon})_{\varepsilon\in(0,1]}\) to (1.8). Let also \((\rho_{0},u_{0},\theta_{0})\) satisfy the hypotheses of Theorem 2.2 and consider the associated global unique mild solution \((\rho,u,\theta)\) to (1.14). Finally, let \(g_{0}=\mathbf{P}g_{0}\) be defined by (2.15) and \(g=\mathbf{P}g\) by (2.16). There exists \(0<\eta_{2}<\min(\eta_{0},\eta_{1})\) such that if_
\[\max\left(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}},\|\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{g}_{0}\|_{L^{p}_{\xi}L^{2}_{v}}\right)\leq\eta_{2}\quad\text{in the case}\quad\Omega_{x}=\mathbf{R}^{3},\]
_for all \(\varepsilon\in(0,1]\) and_
\[\lim_{\varepsilon\to 0}\|\widehat{f}^{\varepsilon}_{0}-\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}=0,\]
_then there holds_
\[\lim_{\varepsilon\to 0}\|\widehat{f}^{\varepsilon}-\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}=0. \tag{2.17}\]
_Remark 2.4_.: One can get an explicit rate of convergence in (2.17) if we suppose that the initial data \(g_{0}\) has some additional regularity in \(x\), namely a rate of \(\varepsilon^{\delta}\) if the initial data \(g_{0}\) satisfies
\[\|\langle\xi\rangle^{\delta}\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}<\infty,\]
for \(\delta\in(0,1]\). We refer to (6.25) and (6.26) for a quantitative version of this result.
_Remark 2.5_.: Our methods can also be applied to the Landau equation with Coulomb potential, and we obtain similar results as in Theorem 2.1 and in Theorem 2.3.
Before giving some comments on the above result and its strategy, we start by providing a short overview of the existing literature on the problem of deriving incompressible Navier-Stokes fluid equations from the kinetic Boltzmann one, and we refer to the book by Saint-Raymond [66] for a thorough presentation of the topic including other hydrodynamic limits. The first justifications of the link between kinetic and fluid equations were formal and based on asymptotic expansions by Hilbert, Chapman, Cowling and Grad (see [50, 27, 42]). The first rigorous convergence proofs based also on asymptotic expansions were given by Caflisch [18] (see also [55] and [29]). In those papers, the limit is justified up to the first singular time for the fluid equation. Guo [48] has justified the limit towards the Navier-Stokes equation and beyond in Hilbert's expansion for the cutoff Boltzmann and Landau equations.
In the framework of large data solutions, the weak convergence of the global renormalized solutions of the cutoff Boltzmann equation constructed in [32] towards global weak solutions of the fluid system was obtained in [12, 11, 40, 41, 59, 60, 66]. Moreover, for the case of non-cutoff kernels, we refer to [9], which proved the hydrodynamic limit from the global renormalized solutions with defect measure of [6].
We now discuss results in the framework of perturbative solutions, that is, solutions near the Maxwellian. Based on the spectral analysis of the linearized cutoff Boltzmann operator performed in [64, 26, 36], some hydrodynamic results were obtained in [65, 13, 39], see also [24] for the Landau equation. Moreover, for the non-cutoff Boltzmann equation, we refer to [52] where the authors obtained a result of weak-\(*\) convergence in \(L^{\infty}_{t}(H^{2}_{x,v})\) towards the fluid system by proving uniform-in-\(\varepsilon\) estimates. To the best of our knowledge, our paper is the first to prove a strong convergence towards the incompressible Navier-Stokes-Fourier system for the non-cutoff Boltzmann equation. We also note that, compared to previous hydrodynamic limit results, our work does not require any derivative assumption on the initial data.
We now describe our strategy in order to obtain strong convergence results. Our approach is inspired by the one used in [13] for the cutoff Boltzmann equation, which was also used more recently in [16, 39] still for cutoff kernels and in [24] for the Landau equation. Indeed, as in [39, 24], using the spectral analysis performed in [36, 74, 75], in order to prove our main convergence result, we reformulate the fluid equation in a kinetic fashion and we then study the equation satisfied by the difference between the kinetic and the fluid solutions. More precisely, we denote the kinetic solution by
\[f^{\varepsilon}(t)=U^{\varepsilon}(t)f^{\varepsilon}_{0}+\Psi^{\varepsilon}[f^{ \varepsilon},f^{\varepsilon}](t),\]
and we observe, thanks to [13], that the kinetic distribution \(g\) associated to the fluid solution \((\rho,u,\theta)\) through (2.16) satisfies
\[g(t)=U(t)g_{0}+\Psi[g,g](t),\]
where \(U\) is obtained as the limit of \(U^{\varepsilon}\) and \(\Psi\) as the limit of \(\Psi^{\varepsilon}\) when \(\varepsilon\to 0\). The idea is then to compute the norm of the difference \(f^{\varepsilon}-g\) by using convergence estimates from \(U^{\varepsilon}\) to \(U\) (see Lemma 6.3) and from \(\Psi^{\varepsilon}\) to \(\Psi\) (see Lemma 6.4), which are based on the spectral study of [74, 75], together with uniform-in-\(\varepsilon\) estimates for the kinetic solution \(f^{\varepsilon}\) from Theorem 2.1. This was achieved in [39] for the cutoff Boltzmann equation by applying a fixed point method; however, as explained in [24], this cannot be directly applied to the non-cutoff Boltzmann and Landau equations due to the anisotropic loss of regularity in the nonlinear collision operator \(\Gamma\). To overcome this difficulty for the Landau equation, the authors in [24] proved new positive-in-time regularization estimates not only for the semigroup \(U^{\varepsilon}\) but also for the solution to the nonlinear rescaled kinetic equation, which were then used to close the estimates and obtain a result of strong convergence.
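In all of these approaches, as well as in ours below, the starting point can be summarized by the decomposition obtained by simply subtracting the two Duhamel formulas above (written here schematically):
\[f^{\varepsilon}(t)-g(t)=U^{\varepsilon}(t)\big(f^{\varepsilon}_{0}-g_{0}\big)+\big(U^{\varepsilon}(t)-U(t)\big)g_{0}+\big(\Psi^{\varepsilon}-\Psi\big)[g,g](t)+\Psi^{\varepsilon}[f^{\varepsilon},f^{\varepsilon}](t)-\Psi^{\varepsilon}[g,g](t),\]
so that the first two terms are handled by the convergence estimates for \(U^{\varepsilon}\), the third one by the convergence estimates for \(\Psi^{\varepsilon}\), and the last difference by the nonlinear estimates together with the uniform-in-\(\varepsilon\) bounds on \(f^{\varepsilon}\) and \(g\).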
In our work, we propose a new method in order to obtain strong convergence using only the integrated-in-time regularization estimates (as opposed to pointwise-in-time regularization estimates in [24]) for the semigroup \(U^{\varepsilon}\) as well as for \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\). More precisely, the fixed point argument in the space \(\mathcal{F}^{-1}_{x}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}\cap L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v})\) for the torus case, or in \(\mathcal{F}^{-1}_{x}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}\cap L^{1}_{\xi}L^{2}_{t}H^{s,**}_{v})\cap\mathcal{F}^{-1}_{x}(L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}\cap L^{p}_{\xi}L^{2}_{t}H^{s,**}_{v})\) for the whole space,
used for the global well-posedness in Theorem 2.1 above, together with the corresponding energy estimates, is sufficient to estimate the \(\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v})\)-norm of the difference \(f^{\varepsilon}-g\) and obtain strong convergence.
### Organization of the paper
In Section 3, we first establish basic properties for the rescaled linearized non-cutoff Boltzmann collision operator and then compute the basic estimates for the associate semigroup. In Section 4 we prove the well-posedness for the rescaled non-cutoff Boltzmann equation. We establish well-posedness for the Navier-Stokes-Fourier system in Section 5. Finally we obtain the hydrodynamical limit result in Section 6.
## 3. Linearized Boltzmann operator
It is well-known, see for instance [63] and the references therein, that the linearized Boltzmann collision operator \(L\), defined in (1.10), satisfies the following coercive-type inequality
\[\langle Lf,f\rangle_{L^{2}_{v}}\leq-\lambda\|\mathbf{P}^{\perp}f\|_{H^{s,*}_{v}}^{2}, \tag{3.1}\]
where we recall that \(\mathbf{P}^{\perp}=I-\mathbf{P}\) and \(\mathbf{P}\) is the orthogonal projection onto \(\operatorname{Ker}L\) given by (2.3). For all \(\varepsilon\in(0,1]\) and all \(\xi\in\Omega^{\prime}_{\xi}\), we denote by \(\Lambda^{\varepsilon}(\xi)\) the Fourier transform in space of the full linearized operator \(\frac{1}{\varepsilon^{2}}L-\frac{1}{\varepsilon}v\cdot\nabla_{x}\), namely
\[\Lambda^{\varepsilon}(\xi):=\frac{1}{\varepsilon^{2}}(L-\mathrm{i}\varepsilon v \cdot\xi). \tag{3.2}\]
We first gather dissipativity results for the operator \(\Lambda^{\varepsilon}(\xi)\) obtained for instance in [68], that we reformulate below as in [23] and inspired by [24, 14] in order to take into account the different scales related to the parameter \(\varepsilon\in(0,1]\). For every \(\xi\in\Omega^{\prime}_{\xi}\) we define
\[B[f,g](\xi) :=\frac{\delta_{1}\mathrm{i}}{\langle\xi\rangle^{2}}\xi\theta[ \widehat{f}(\xi)]\cdot M[\mathbf{P}^{\perp}\widehat{g}(\xi)]+\frac{\delta_{1} \mathrm{i}}{\langle\xi\rangle^{2}}\xi\theta[\widehat{g}(\xi)]\cdot M[\mathbf{ P}^{\perp}\widehat{f}(\xi)]\] \[\quad+\frac{\delta_{2}\mathrm{i}}{\langle\xi\rangle^{2}}(\xi \otimes u[\widehat{f}(\xi)])^{\mathrm{sym}}:\left\{\Theta[\mathbf{P}^{\perp} \widehat{g}(\xi)]+\theta[\widehat{g}(\xi)]I\right\}\] \[\quad+\frac{\delta_{2}\mathrm{i}}{\langle\xi\rangle^{2}}(\xi \otimes u[\widehat{g}(\xi)])^{\mathrm{sym}}:\left\{\Theta[\mathbf{P}^{\perp} \widehat{f}(\xi)]+\theta[\widehat{f}(\xi)]I\right\}\] \[\quad+\frac{\delta_{3}\mathrm{i}}{\langle\xi\rangle^{2}}\xi \rho[\widehat{f}(\xi)]\cdot u[\widehat{g}(\xi)]+\frac{\delta_{3}\mathrm{i}}{ \langle\xi\rangle^{2}}\xi\rho[\widehat{g}(\xi)]\cdot u[\widehat{f}(\xi)],\]
with constants \(0<\delta_{3}\ll\delta_{2}\ll\delta_{1}\ll 1\), where \(I\) is the \(3\times 3\) identity matrix and the moments \(M\) and \(\Theta\) are defined by
\[M[f]=\int_{\mathbf{R}^{3}}fv(|v|^{2}-5)\sqrt{\mu}(v)\,\mathrm{d}v,\qquad\Theta [f]=\int_{\mathbf{R}^{3}}f\left(v\otimes v-I\right)\sqrt{\mu}(v)\,\mathrm{d}v,\]
and where for vectors \(a,b\in\mathbf{R}^{3}\) and matrices \(A,B\in\mathbf{R}^{3\times 3}\), we denote
\[(a\otimes b)^{\mathrm{sym}}=\frac{1}{2}(a_{j}b_{k}+a_{k}b_{j})_{1\leq j,k\leq 3 },\qquad A:B=\sum_{j,k=1}^{3}A_{jk}B_{jk}.\]
We then define the inner product \(\langle\!\langle\cdot,\cdot\rangle\!\rangle_{L^{2}_{v}}\) on \(L^{2}_{v}\) (depending on \(\xi\)) by
\[\langle\!\langle\widehat{f}(\xi),\widehat{g}(\xi)\rangle\!\rangle_{L^{2}_{v}} :=\langle\widehat{f}(\xi),\widehat{g}(\xi)\rangle_{L^{2}_{v}}+ \varepsilon B[f,g](\xi), \tag{3.3}\]
and the associated norm
\[\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}:=\langle\!\langle\widehat{f}(\xi), \widehat{f}(\xi)\rangle\!\rangle_{L^{2}_{v}}. \tag{3.4}\]
In a similar fashion, for any \(\ell>0\), we define the inner product \(\langle\!\langle\cdot,\cdot\rangle\!\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}\) on \(L^{2}_{v}(\langle v\rangle^{\ell})\) (depending on \(\xi\)) by
\[\langle\!\langle\widehat{f}(\xi),\widehat{g}(\xi)\rangle\!\rangle_{L^{2}_{v}( \langle v\rangle^{\ell})} :=\langle\widehat{f}(\xi),\widehat{g}(\xi)\rangle_{L^{2}_{v}}+ \delta_{0}\langle\mathbf{P}^{\perp}\widehat{f}(\xi),\mathbf{P}^{\perp} \widehat{g}(\xi)\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}\] \[\quad+\varepsilon B[f,g](\xi), \tag{3.5}\]
with \(\delta_{1}\ll\delta_{0}\ll 1\), and the associated norm
\[\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}:=\langle\!\langle\widehat{f}(\xi),\widehat{f}(\xi)\rangle\!\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}. \tag{3.6}\]
It is important to notice the factor \(\varepsilon\) in front of the last term in the right-hand side of (3.3) and (3.5).
Arguing as in [68], the main difference being the factor \(\varepsilon\) in front of the term \(B[f,g](\xi)\) in (3.3) and (3.5), we obtain the following dissipativity result.
**Proposition 3.1**.: _We can choose \(0<\delta_{3}\ll\delta_{2}\ll\delta_{1}\ll\delta_{0}\ll 1\) appropriately such that:_
(1) _The new norm \(\|\cdot\|_{L^{2}_{v}}\) defined by (3.4) (resp. \(\|\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\) defined by (3.6)) is equivalent to the usual norm of \(L^{2}_{v}\) (resp. of \(L^{2}_{v}(\langle v\rangle^{\ell})\)), uniformly with respect to \(\xi\in\Omega^{\prime}_{\xi}\) and \(\varepsilon\in(0,1]\)._

(2) _There is a constant \(\lambda_{0}>0\) such that, for all \(\varepsilon\in(0,1]\), all \(\xi\in\Omega^{\prime}_{\xi}\) and all suitable \(f\), there holds_
\[\mathrm{Re}\langle\!\langle\Lambda^{\varepsilon}(\xi)\widehat{f}(\xi),\widehat{f}(\xi)\rangle\!\rangle_{L^{2}_{v}}\leq-\lambda_{0}\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}}^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right), \tag{3.7}\]
_as well as, for any \(\ell>0\), the weighted counterpart_
\[\mathrm{Re}\langle\!\langle\Lambda^{\varepsilon}(\xi)\widehat{f}(\xi),\widehat{f}(\xi)\rangle\!\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}\leq-\lambda_{0}\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right). \tag{3.8}\]

### Boundedness estimates

We now establish uniform-in-\(\varepsilon\) boundedness estimates for the semigroup \(U^{\varepsilon}\) (see Proposition 3.2) as well as for its integral in time against a source \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\) (see Proposition 3.4).

**Proposition 3.2**.: _Let \(\ell\geq 0\) and \(p\in[1,\infty]\). Let \(\widehat{f}_{0}\in L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})\), then_
\[\|\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\right\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}\lesssim\|\widehat{f}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}.\]

_Remark 3.3_.: In the torus case \(\Omega_{x}=\mathbf{T}^{3}\) one can replace the term \(\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\) in the above estimate by \(\mathbf{P}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\).

Proof.: Let \(f(t)=U^{\varepsilon}(t)f_{0}\) for all \(t\geq 0\), which satisfies
\[\partial_{t}f=\frac{1}{\varepsilon^{2}}(L-\varepsilon v\cdot\nabla_{x})f,\quad f_{|t=0}=f_{0}, \tag{3.9}\]
thus, for all \(\xi\in\mathbf{Z}^{3}\) (if \(\Omega_{x}=\mathbf{T}^{3}\)) or all \(\xi\in\mathbf{R}^{3}\) (if \(\Omega_{x}=\mathbf{R}^{3}\)), the function \(\widehat{f}(t,\xi)=\widehat{U}^{\varepsilon}(t,\xi)\widehat{f}_{0}(\xi)\) satisfies
\[\partial_{t}\widehat{f}(\xi)=\Lambda^{\varepsilon}(\xi)\widehat{f}(\xi),\quad\widehat{f}(\xi)_{|t=0}=\widehat{f}_{0}(\xi). \tag{3.10}\]
Using Proposition 3.1 we have, for all \(t\geq 0\),
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}=\mathrm{Re}\langle\!\langle\Lambda^{\varepsilon}(\xi)\widehat{f}(\xi),\widehat{f}(\xi)\rangle\!\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}\leq-\lambda_{0}\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right),\]
which implies, for all \(t\geq 0\),
\[\|\widehat{f}(t,\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}+\frac{1}{\varepsilon^{2}}\int_{0}^{t}\|\mathbf{P}^{\perp}\widehat{f}(s,\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}\,\mathrm{d}s+\int_{0}^{t}\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(s,\xi)\|_{L^{2}_{v}}^{2}\,\mathrm{d}s\lesssim\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2},\]
where we have used that the norm defined in (3.6) is equivalent to the usual norm \(\|\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\) independently of \(\xi\) and \(\varepsilon\) (see Proposition 3.1). Taking the supremum in time and then taking the square root of the previous estimate yields
\[\|\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}(\xi)\right\|_{L^{2}_{t}L^{2}_{v}}\lesssim\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})},\]
and we conclude by taking the \(L^{p}_{\xi}\) norm.
**Proposition 3.4**.: _Let \(\ell\geq 0\) and \(p\in[1,\infty]\). Let \(S=S(t,x,v)\) verify \(\mathbf{P}S=0\) and \(\langle v\rangle^{\ell}\widehat{S}\in L^{p}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}\), and denote_
\[g_{S}(t)=\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s.\]
_Then_
\[\|\widehat{g}_{S}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{g}_{S}\|_{L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{g}_{S}\right\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}\lesssim\varepsilon\|\langle v\rangle^{\ell}\widehat{S}\|_{L^{p}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}.\]
_Remark 3.5_.: As in Remark 3.3, we observe that in the torus case \(\Omega_{x}=\mathbf{T}^{3}\) one can replace the term \(\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{g}_{S}\) in the above estimate by \(\mathbf{P}\widehat{g}_{S}\).
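Let us briefly indicate why: in the torus case the relevant frequencies are the nonzero integer modes (the mode \(\xi=0\) being controlled separately, e.g. by the conservation laws or a mean-free assumption on the data), for which
\[\frac{|\xi|}{\langle\xi\rangle}=\frac{|\xi|}{\sqrt{1+|\xi|^{2}}}\geq\frac{1}{\sqrt{2}},\qquad\xi\in\mathbf{Z}^{3}\setminus\{0\},\]
so that the factor \(|\xi|/\langle\xi\rangle\) can simply be bounded from below by a universal constant.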
Proof.: We first observe that \(g_{S}\) satisfies the equation
\[\partial_{t}g_{S}=\frac{1}{\varepsilon^{2}}(L-\varepsilon v\cdot\nabla_{x})g_ {S}+S,\quad g_{|t=0}=0, \tag{3.11}\]
thus, for all \(\xi\in\mathbf{Z}^{3}\) (if \(\Omega_{x}=\mathbf{T}^{3}\)) or all \(\xi\in\mathbf{R}^{3}\) (if \(\Omega_{x}=\mathbf{R}^{3}\)),
\[\partial_{t}\widehat{g}_{S}(\xi)=\Lambda^{\varepsilon}(\xi)\widehat{g}_{S}(\xi)+\widehat{S}(\xi),\quad\widehat{g}_{S}(\xi)_{|t=0}=0, \tag{3.12}\]
that is, for all \(t\geq 0\),
\[\widehat{g}_{S}(t,\xi)=\int_{0}^{t}\widehat{U}^{\varepsilon}(t-s,\xi) \widehat{S}(s,\xi)\,\mathrm{d}s. \tag{3.13}\]
We remark from (3.5) and the fact that \(\mathbf{P}S=0\) that
\[\langle\!\langle\widehat{S}(\xi),\widehat{g}_{S}(\xi)\rangle\!\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})} =\langle\widehat{S}(\xi),\widehat{g}_{S}(\xi)\rangle_{L^{2}_{v}}+\delta_{0}\langle\mathbf{P}^{\perp}\widehat{S}(\xi),\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}+\varepsilon B[S,g_{S}](\xi)\] \[=\langle\widehat{S}(\xi),\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\rangle_{L^{2}_{v}}+\delta_{0}\langle\mathbf{P}^{\perp}\widehat{S}(\xi),\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}+\varepsilon B[S,g_{S}](\xi).\]
Using again that \(\mathbf{P}S=0\), so that \(\rho[S]=u[S]=\theta[S]=0\), we have
\[B[S,g_{S}](\xi)=\frac{\delta_{1}\mathrm{i}}{1+|\xi|^{2}}\xi\theta[\widehat{g}_ {S}(\xi)]\cdot M[\mathbf{P}^{\perp}\widehat{S}(\xi)]+\frac{\delta_{2}\mathrm{i}}{ 1+|\xi|^{2}}(\xi\otimes u[\widehat{g}_{S}(\xi)])^{\mathrm{sym}}:\Theta[\mathbf{ P}^{\perp}\widehat{S}(\xi)],\]
therefore observing that for any polynomial \(p=p(v)\) there holds
\[\left|\int_{\mathbf{R}^{3}}\widehat{S}(\xi)p(v)\sqrt{\mu}(v)\,\mathrm{d}v \right|\lesssim\|\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}},\]
we get
\[|B[S,g_{S}](\xi)|\lesssim\|\mathbf{P}^{\perp}\widehat{S}(\xi)\|_{(H^{s,*}_{v}) ^{\prime}}\frac{|\xi|}{\langle\xi\rangle}\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^ {2}_{v}}.\]
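The moment bound used above holds simply because \(p\sqrt{\mu}\) is a Schwartz function and hence belongs to \(H^{s,*}_{v}\); a one-line sketch (with an implicit constant depending only on \(p\)):
\[\left|\int_{\mathbf{R}^{3}}\widehat{S}(\xi)\,p(v)\sqrt{\mu}(v)\,\mathrm{d}v\right|=\left|\langle\widehat{S}(\xi),p\sqrt{\mu}\rangle_{L^{2}_{v}}\right|\leq\|\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}\,\|p\sqrt{\mu}\|_{H^{s,*}_{v}}\lesssim\|\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}.\]
The estimate on \(B[S,g_{S}](\xi)\) then follows since the entries of \(M[\mathbf{P}^{\perp}\widehat{S}(\xi)]\) and \(\Theta[\mathbf{P}^{\perp}\widehat{S}(\xi)]\) are moments of this form, \(|u[\widehat{g}_{S}(\xi)]|+|\theta[\widehat{g}_{S}(\xi)]|\lesssim\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}\) with the usual definitions of the moments, and \(|\xi|/(1+|\xi|^{2})\leq|\xi|/\langle\xi\rangle\).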
Moreover
\[\langle\widehat{S}(\xi),\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\rangle_{L^{2}_{v }}\lesssim\|\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}\|\mathbf{P}^{\perp} \widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}},\]
and
\[\langle\widehat{S}(\xi),\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})} =\langle\langle v\rangle^{\ell}\widehat{S}(\xi),\langle v\rangle^{\ell}\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\rangle_{L^{2}_{v}}\] \[\lesssim\|\langle v\rangle^{\ell}\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}\|\langle v\rangle^{\ell}\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}}. \tag{3.14}\]
Using Proposition 3.1 and arguing as in Proposition 3.2 we have, for all \(t\geq 0\) and all \(\xi\in\Omega_{\xi}^{\prime}\),
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{g}_{S}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2} \leq-\lambda_{0}\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[\quad+C\|\langle v\rangle^{\ell}\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}\left(\|\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}+\varepsilon\frac{|\xi|}{\langle\xi\rangle}\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}\right)\] \[\leq-\frac{\lambda_{0}}{2}\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[\quad+C\varepsilon^{2}\|\langle v\rangle^{\ell}\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}^{2}, \tag{3.15}\]
where we have used Young's inequality in the last line, which implies
\[\|\widehat{g}_{S}(t,\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}+\frac{1}{\varepsilon^{2}}\int_{0}^{t}\|\mathbf{P}^{\perp}\widehat{g}_{S}(s,\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}\,\mathrm{d}s+\int_{0}^{t}\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{g}_{S}(s,\xi)\|_{L^{2}_{v}}^{2}\,\mathrm{d}s\\ \lesssim\varepsilon^{2}\int_{0}^{t}\|\langle v\rangle^{\ell}\widehat{S}(s,\xi)\|_{(H^{s,*}_{v})^{\prime}}^{2}\,\mathrm{d}s.\]
Taking the supremum in time and then taking the square root of the previous estimate yields
\[\|\widehat{g}_{S}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{g}_{S}(\xi)\right\|_{L^{2}_{t}L^{2}_{v}}\lesssim\varepsilon\|\langle v\rangle^{\ell}\widehat{S}(\xi)\|_{L^{2}_{t}(H^{s,*}_{v})^{\prime}},\]
and we conclude by taking the \(L^{p}_{\xi}\) norm.
### Decay estimates: Hard potentials in the torus
In this subsection we shall always assume \(\gamma+2s\geq 0\) and \(\Omega_{x}=\mathbf{T}^{3}\), and we shall obtain decay estimates for the semigroup \(U^{\varepsilon}\) (see Proposition 3.6) as well as its integral in time against a source \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\) (see Proposition 3.7). We recall that given any real number \(\lambda\in\mathbf{R}\) we denote \(\mathrm{e}_{\lambda}:t\mapsto e^{\lambda t}\).
**Proposition 3.6**.: _Let \(\ell\geq 0\). Let \(\widehat{f}_{0}\in L^{1}_{\xi}L^{2}_{v}((v)^{\ell})\), then_
\[\|\mathrm{e}_{\lambda}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{e}_{\lambda}\mathbf{P}^{\perp}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\mathrm{e}_{\lambda}\mathbf{P}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\\ \lesssim\|\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})},\]
_for some \(\lambda>0\) (depending on \(\lambda_{0}\) of Proposition 3.1)._
Proof.: Let \(f(t)=U^{\varepsilon}(t)f_{0}\) for all \(t\geq 0\) which satisfies (3.9), so that \(\widehat{f}(t,\xi)=\widehat{U}^{\varepsilon}(t,\xi)\widehat{f}_{0}(\xi)\) satisfies (3.10) for all \(\xi\in\mathbf{Z}^{3}\). Using Proposition 3.1 we have, for all \(t\geq 0\) and some \(\lambda_{0}>0\),
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}\leq-\lambda_{0}\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right),\]
which implies, since \(\|\cdot\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\geq\|\langle v\rangle^{\gamma/2+s}\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\geq\|\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\) and the fact that the norm defined in (3.6) is equivalent to the usual norm \(\|\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\) independently of \(\xi\) and \(\varepsilon\), that
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}\leq-\lambda\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}-\sigma\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right),\]
for some positive constants \(\lambda,\sigma>0\) depending only on the implicit constants in Proposition 3.1-(1) and on \(\lambda_{0}>0\) appearing in Proposition 3.1-(2). We therefore deduce
\[\frac{\mathrm{d}}{\mathrm{d}t}\left\{e^{2\lambda t}\|\widehat{f}(\xi)\|_{L^{2}_ {v}(\langle v\rangle^{\ell})}^{2}\right\}\leq-\sigma e^{2\lambda t}\left(\frac {1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}( \langle v\rangle^{\ell})}^{2}+\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2} \right),\]
which implies, for all \(t\geq 0\),
\[e^{2\lambda t}\|\widehat{f}(t,\xi)\|_{L^{2}_{v}(\langle v\rangle ^{\ell})}^{2}+\frac{1}{\varepsilon^{2}}\int_{0}^{t}e^{2\lambda s}\|\mathbf{P}^ {\perp}\widehat{f}(s,\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}\, \mathrm{d}s+\int_{0}^{t}e^{2\lambda s}\|\mathbf{P}\widehat{f}(s,\xi)\|_{L^{2}_ {v}}^{2}\,\mathrm{d}s\\ \lesssim\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{ \ell})}^{2}.\]
where we have used again that the norm defined in (3.6) is equivalent to the usual norm \(\|\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\) independently of \(\xi\) and \(\varepsilon\). Taking the supremum in time and then taking the square root of the previous estimate yields
\[\|\mathrm{e}_{\lambda}\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v \rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{e}_{\lambda}\mathbf{P}^{\perp }\widehat{f}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\mathrm{ e}_{\lambda}\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{t}L^{2}_{v}}\lesssim\|\widehat{f}_{0}( \xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})},\]
and we conclude by taking the \(L^{1}_{\xi}\) norm.
**Proposition 3.7**.: _Let \(\ell\geq 0\). Let \(\lambda>0\) be given in Proposition 3.6. Let \(S=S(t,x,v)\) verify \(\mathbf{P}S=0\) and \(\mathrm{e}_{\lambda}\langle v\rangle^{\ell}\widehat{S}\in L^{1}_{\xi}L^{2}_{t} (H^{s,*}_{v})^{\prime}\), and denote_
\[g_{S}(t)=\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s.\]
_Then_
\[\|\mathrm{e}_{\lambda}\widehat{g}_{S}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}( \langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{e}_{\lambda}\mathbf{P }^{\perp}\widehat{g}_{S}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{ \ell})}+\|\mathrm{e}_{\lambda}\mathbf{P}\widehat{g}_{S}\|_{L^{1}_{\xi}L^{2}_{ t}L^{2}_{v}}\lesssim\varepsilon\|\mathrm{e}_{\lambda}\langle v\rangle^{\ell} \widehat{S}\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}.\]
Proof.: Recall that \(g_{S}\) satisfies equation (3.11) and \(\widehat{g}_{S}\) verifies (3.12) for all \(\xi\in\mathbf{Z}^{3}\) as well as (3.13). Thanks to (3.15) and using that \(\|\cdot\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\geq\|\langle v\rangle^{\gamma/2+s}\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\geq\|\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\) as in the proof of Proposition 3.6, we get, for all \(t\geq 0\),
\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{g}_{S}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}&\leq-\lambda\|\widehat{g}_{S}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}-\sigma\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}\right)\\ &\quad+C\varepsilon^{2}\|\langle v\rangle^{\ell}\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}^{2},\end{split}\]
for some constants \(\lambda,\sigma,C>0\). We therefore deduce
\[\frac{\mathrm{d}}{\mathrm{d}t}\left\{e^{2\lambda t}\|\widehat{g}_ {S}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}\right\} \leq-\sigma e^{2\lambda t}\left(\frac{1}{\varepsilon^{2}}\| \mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})} ^{2}+\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[\quad+C\varepsilon^{2}e^{2\lambda t}\|\langle v\rangle^{\ell} \widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}^{2},\]
which implies, for all \(t\geq 0\),
\[e^{2\lambda t}\|\widehat{g}_{S}(t,\xi)\|_{L^{2}_{v}(\langle v \rangle^{\ell})}^{2} +\frac{1}{\varepsilon^{2}}\int_{0}^{t}e^{2\lambda s}\|\mathbf{P}^{ \perp}\widehat{g}_{S}(s,\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}\, \mathrm{d}s\] \[+\int_{0}^{t}e^{2\lambda s}\|\mathbf{P}\widehat{g}_{S}(s,\xi)\|_{L ^{2}_{v}}^{2}\,\mathrm{d}s\lesssim\varepsilon^{2}\int_{0}^{t}e^{2\lambda s}\| \langle v\rangle^{\ell}\widehat{S}(s,\xi)\|_{(H^{s,*}_{v})^{\prime}}^{2}\, \mathrm{d}s.\]
Taking the supremum in time and then taking the square root of the previous estimate yields
\[\|\mathrm{e}_{\lambda}\widehat{g}_{S}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v \rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{e}_{\lambda}\mathbf{P}^{\perp} \widehat{g}_{S}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\mathrm{e}_ {\lambda}\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{t}L^{2}_{v}}\lesssim\varepsilon\| \mathrm{e}_{\lambda}\langle v\rangle^{\ell}\widehat{S}(\xi)\|_{L^{2}_{t}(H^{s,*}_ {v})^{\prime}},\]
and we conclude by taking the \(L^{1}_{\xi}\) norm.
### Decay estimates: Soft potentials in the torus
In this subsection we shall always assume \(\gamma+2s<0\) and \(\Omega_{x}=\mathbf{T}^{3}\), and we shall obtain decay estimates for the semigroup \(U^{\varepsilon}\) (see Proposition 3.8) as well as its integral in time against a source \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\) (see Proposition 3.9). We recall that given any real number \(\omega\in\mathbf{R}\) we denote \(\mathrm{p}_{\omega}:t\mapsto(1+t)^{\omega}\).
**Proposition 3.8**.: _Let \(\ell>0\) and \(\widehat{f}_{0}\in L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})\), then for any \(0<\omega<\frac{\ell}{|\gamma+2s|}\) we have_
\[\|\mathrm{p}_{\omega}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\frac{1}{\varepsilon}\|\mathrm{p}_{\omega}\mathbf{P}^{\perp}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\|\mathrm{p}_{\omega}\mathbf{P}(\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0})\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}.\]
Proof.: Arguing as in the proof of Proposition 3.6, denoting \(f(t)=U^{\varepsilon}(t)f_{0}\) and using that \(\|\cdot\|_{H^{s,*}_{v}}\geq\|\langle v\rangle^{\gamma/2+s}\cdot\|_{L^{2}_{v}}\), we obtain
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\leq-\lambda\|\langle v\rangle^{\gamma/2+s}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}-\sigma\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}}^{2}+\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right), \tag{3.16}\]
for some positive constants \(\lambda,\sigma>0\).
We now observe the following interpolation inequality: for any \(R>0\) there holds
\[\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2} \lesssim\langle R\rangle^{|\gamma+2s|}\|\langle v\rangle^{\gamma /2+s}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}+\langle R\rangle^{-2\ell}\|\widehat{ f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}. \tag{3.17}\]
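This follows from splitting the velocity domain according to \(\langle v\rangle\leq\langle R\rangle\) or \(\langle v\rangle>\langle R\rangle\) (a short sketch, where we only use that \(\gamma+2s<0\) in this subsection):
\[\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}=\int_{\langle v\rangle\leq\langle R\rangle}|\widehat{f}(\xi)|^{2}\,\mathrm{d}v+\int_{\langle v\rangle>\langle R\rangle}|\widehat{f}(\xi)|^{2}\,\mathrm{d}v\leq\langle R\rangle^{|\gamma+2s|}\|\langle v\rangle^{\gamma/2+s}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}+\langle R\rangle^{-2\ell}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2},\]
where we used \(\langle v\rangle^{\gamma+2s}\geq\langle R\rangle^{-|\gamma+2s|}\) on the first region and \(\langle v\rangle^{2\ell}\geq\langle R\rangle^{2\ell}\) on the second one.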
Therefore coming back to (3.16) and choosing \(\langle R\rangle=[(\lambda/\omega)(1+t)]^{1/|\gamma+2s|}\) yields
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2} \leq-\omega(1+t)^{-1}\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}-\sigma\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}}^{2}+\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[\quad+C(1+t)^{-1-\frac{2\ell}{|\gamma+2s|}}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2},\]
for some constant \(C>0\) (independent of \(\xi\) and \(\varepsilon\)). Multiplying both sides by \((1+t)^{2\omega}\) gives
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\{(1+t)^{2\omega}\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right\} \leq-\sigma(1+t)^{2\omega}\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}}^{2}+\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[\quad+C(1+t)^{2\omega-1-\frac{2\ell}{|\gamma+2s|}}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}.\]
Integrating the last estimate in time gives, for all \(t\geq 0\),
\[(1+t)^{2\omega}\|\widehat{f}(t,\xi)\|_{L^{2}_{v}}^{2}+\frac{1}{\varepsilon^{2}} \int_{0}^{t}(1+s)^{2\omega}\|\mathbf{P}^{\perp}\widehat{f}(s,\xi)\|_{H^{s,*}_{v}}^{2}\,\mathrm{d}s+\int_{0}^{t}(1+s)^{2\omega}\|\mathbf{P}\widehat{f}(s,\xi)\|_{L^{2}_{v}}^{2}\,\mathrm{d}s\] \[\lesssim\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}}^{2}+\sup_{s\in[0,t]}\|\widehat{f}(s,\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}\int_{0}^{t}(1+s)^{2\omega-1-\frac{2\ell}{|\gamma+2s|}}\,\mathrm{d}s,\]
where we have used again that the norm defined in (3.4) is equivalent to the usual norm \(\|\cdot\|_{L^{2}_{v}}\) independently of \(\xi\) and \(\varepsilon\). Observing that \((1+t)^{2\omega-1-\frac{2\ell}{|\gamma+2s|}}\) is integrable since \(0<\omega<\frac{\ell}{|\gamma+2s|}\), we can take the supremum in time in the last estimate and then its square root to obtain
\[\|\mathrm{p}_{\omega}\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}}+\frac{1}{\varepsilon}\|\mathrm{p}_{\omega}\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}}+\|\mathrm{p}_{\omega}\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{t}L^{2}_{v}}\lesssim\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}}+\|\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})},\]
and we conclude the proof by taking the \(L^{1}_{\xi}\) norm.
**Proposition 3.9**.: _Let \(S=S(t,x,v)\) verify \(\mathbf{P}S=0\) and \(\mathrm{p}_{\omega}\widehat{S}\in L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}\) for some \(0<\omega<\frac{\ell}{|\gamma+2s|}\) and \(\ell>0\), and denote_
\[g_{S}(t)=\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s.\]
_Assume that \(\widehat{g}_{S}\in L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\), then we have_
\[\|\mathrm{p}_{\omega}\widehat{g}_{S}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\frac{1}{\varepsilon}\|\mathrm{p}_{\omega}\mathbf{P}^{\perp}\widehat{g}_{S}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\|\mathrm{p}_{\omega}\mathbf{P}\widehat{g}_{S}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\varepsilon\|\mathrm{p}_{\omega}\widehat{S}\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}+\|\widehat{g}_{S}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}.\]
Proof.: Arguing as in the proof of Proposition 3.7, but using now that \(\|\cdot\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\geq\|\langle v\rangle^{\gamma/2+s}\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\) as in Proposition 3.8, we have
\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}&\leq-\lambda\|\langle v\rangle^{\gamma/2+s}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}-\sigma\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}}^{2}+\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}\right)\\ &\quad+C\varepsilon^{2}\|\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}^{2}.\end{split} \tag{3.18}\]
for some constants \(\lambda,\sigma,C>0\). Using the interpolation (3.17) as in the proof of Proposition 3.8, we obtain
\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}&\leq-\omega(1+t)^{-1}\|\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}-\sigma\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|_{H^{s,*}_{v}}^{2}+\|\mathbf{P}\widehat{g}_{S}(\xi)\|_{L^{2}_{v}}^{2}\right)\\ &\quad+C\varepsilon^{2}\|\widehat{S}(\xi)\|_{(H^{s,*}_{v})^{\prime}}^{2}+C(1+t)^{-1-\frac{2\ell}{|\gamma+2s|}}\|\widehat{g}_{S}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2},\end{split}\]
for some constant \(C>0\) (independent of \(\xi\) and \(\varepsilon\)). We can then conclude exactly as in the proof of Proposition 3.8.
### Decay estimates: Hard potentials in the whole space
In this subsection we shall always assume \(\gamma+2s\geq 0\) and \(\Omega_{x}=\mathbf{R}^{3}\), and we shall obtain decay estimates for the semigroup \(U^{\varepsilon}\) (see Proposition 3.10) as well as its integral in time against a source \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\) (see Proposition 3.11). We recall that given any real number \(\omega\in\mathbf{R}\) we denote \(\mathrm{p}_{\omega}:t\mapsto(1+t)^{\omega}\).
**Proposition 3.10**.: _Let \(\ell\geq 0\), \(p\in(3/2,\infty]\) and \(0<\vartheta<\frac{3}{2}(1-\frac{1}{p})\). Let \(\widehat{f}_{0}\in L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})\), then_
\[\begin{split}\|\mathrm{p}_{\vartheta}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\\ \lesssim\|\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}.\end{split}\]
Proof.: Let \(f(t)=U^{\varepsilon}(t)f_{0}\) for all \(t\geq 0\) which satisfies (3.9), so that \(\widehat{f}(t,\xi)=\widehat{U}^{\varepsilon}(t,\xi)\widehat{f}_{0}(\xi)\) satisfies (3.10) for all \(\xi\in\mathbf{R}^{3}\). Using Proposition 3.1 we have, for all \(t\geq 0\) and some \(\lambda_{0}>0\),
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{f}(\xi)\|_{L^{2}_{v}( \langle v\rangle^{\ell})}^{2}\leq-\lambda_{0}\left(\frac{1}{\varepsilon^{2}} \|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})} ^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^ {2}_{v}}^{2}\right),\]
and we already observe that, using \(\|\cdot\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\geq\|\langle v\rangle^{\gamma /2+s}\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\geq\|\cdot\|_{L^{2}_{v}( \langle v\rangle^{\ell})}\) and \(\varepsilon\in(0,1]\),
\[\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}( \langle v\rangle^{\ell})}^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\| \mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\gtrsim\frac{|\xi|^{2}}{\langle\xi \rangle^{2}}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2},\]
where we have used that the norm defined in (3.6) is equivalent to the usual norm \(\|\cdot\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\) independently of \(\xi\) and \(\varepsilon\). Therefore it follows
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{f}(\xi)\|_{L^{2}_{v}( \langle v\rangle^{\ell})}^{2}\leq-2\lambda\frac{|\xi|^{2}}{\langle\xi\rangle^{2}} \|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}-\sigma\left(\frac{ 1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v \rangle^{\ell})}^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P} \widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right), \tag{3.19}\]
for some constants \(\lambda,\sigma>0\). We now split our analysis into two cases: high frequencies \(|\xi|\geq 1\) and low frequencies \(|\xi|<1\).
For high frequencies \(|\xi|\geq 1\) we remark that \(\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\geq\frac{1}{2}\), hence we obtain
\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{1}_{| \xi|\geq 1}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}& \leq-\lambda\mathbf{1}_{|\xi|\geq 1}\|\widehat{f}(\xi)\|_{L^{2}_{v}( \langle v\rangle^{\ell})}^{2}\\ &\quad-\frac{\sigma}{2}\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{| \xi|\geq 1}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})} ^{2}+\mathbf{1}_{|\xi|\geq 1}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right).\end{split}\]
Arguing as in the proof of Proposition 3.6 we hence deduce
\[\mathbf{1}_{|\xi|\geq 1}\|\mathrm{e}_{\lambda}\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\mathbf{1}_{|\xi|\geq 1}\|\mathrm{e}_{\lambda}\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\mathbf{1}_{|\xi|\geq 1}\|\mathrm{e}_{\lambda}\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{t}L^{2}_{v}}\] \[\lesssim\mathbf{1}_{|\xi|\geq 1}\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}. \tag{3.20}\]
We now investigate the case of low frequencies \(|\xi|<1\). We denote by \(p^{\prime}\) the conjugate exponent of \(p\), that is \(1/p+1/p^{\prime}=1\) with the convention \(p^{\prime}=1\) if \(p=\infty\), and consider a real number \(r\) verifying \(1+p^{\prime}/3<r<1+1/(2\vartheta)\), which we observe is possible thanks to the conditions on \(p\) and \(\vartheta\). Remarking that \(|\xi|^{2}\leq 2|\xi|^{2}/\langle\xi\rangle^{2}\) if \(|\xi|<1\), by Young's inequality we get: for any \(\delta>0\) there is \(C_{\delta}>0\) such that, for all \(|\xi|<1\) and \(t\geq 0\), we have
\[1\leq\delta(1+t)\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}+C_{\delta}(1+t)^{- \frac{1}{r-1}}|\xi|^{-\frac{2}{r-1}}. \tag{3.21}\]
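This inequality can be obtained, for instance, from Young's inequality \(ab\leq\frac{a^{r}}{r}+\frac{b^{r^{\prime}}}{r^{\prime}}\) (with \(\frac{1}{r}+\frac{1}{r^{\prime}}=1\), so that \(\frac{r^{\prime}}{r}=\frac{1}{r-1}\)) applied to \(a=[\delta(1+t)|\xi|^{2}]^{1/r}\) and \(b=a^{-1}\):
\[1=ab\leq\frac{\delta}{r}\,(1+t)|\xi|^{2}+\frac{1}{r^{\prime}}\,\delta^{-\frac{1}{r-1}}(1+t)^{-\frac{1}{r-1}}|\xi|^{-\frac{2}{r-1}},\]
and it then suffices to use \(|\xi|^{2}\leq 2|\xi|^{2}/\langle\xi\rangle^{2}\) for \(|\xi|<1\) and to rename the constants.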
We therefore obtain, coming back to (3.19) and choosing \(\delta>0\) appropriately,
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2} \leq-\sigma\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{|\xi|<1}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\mathbf{1}_{|\xi|<1}\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[\quad-\vartheta(1+t)^{-1}\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}\] \[\quad+C(1+t)^{-1-\frac{1}{r-1}}|\xi|^{-\frac{2}{r-1}}\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2},\]
for some constant \(C>0\). Multiplying both sides by \((1+t)^{2\vartheta}\) gives
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\{(1+t)^{2\vartheta}\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}\right\}\] \[\leq-\sigma(1+t)^{2\vartheta}\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{|\xi|<1}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}+\mathbf{1}_{|\xi|<1}\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[\quad+C(1+t)^{2\vartheta-1-\frac{1}{r-1}}|\xi|^{-\frac{2}{r-1}}\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}.\]
Integrating in time implies, for all \(t\geq 0\),
\[(1+t)^{2\vartheta}\mathbf{1}_{|\xi|<1}\|\widehat{f}(t,\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2} +\frac{1}{\varepsilon^{2}}\int_{0}^{t}(1+s)^{2\vartheta}\mathbf{1}_{|\xi|<1}\|\mathbf{P}^{\perp}\widehat{f}(s,\xi)\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}^{2}\,\mathrm{d}s\] \[+\int_{0}^{t}(1+s)^{2\vartheta}\mathbf{1}_{|\xi|<1}\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(s,\xi)\|_{L^{2}_{v}}^{2}\,\mathrm{d}s\] \[\lesssim\mathbf{1}_{|\xi|<1}\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}+\mathbf{1}_{|\xi|<1}|\xi|^{-\frac{2}{r-1}}\|\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}^{2},\]
where we have used that \((1+t)^{2\vartheta-1-\frac{1}{r-1}}\) is integrable since \(r<1+1/(2\vartheta)\). We now take the supremum in time and finally the square-root of the resulting estimate, which gives
\[\mathbf{1}_{|\xi|<1}\|\mathrm{p}_{\vartheta}\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\mathbf{1}_{|\xi|<1}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\mathbf{1}_{|\xi|<1}\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}(\xi)\right\|_{L^{2}_{t}L^{2}_{v}}\] \[\lesssim\mathbf{1}_{|\xi|<1}\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}+\mathbf{1}_{|\xi|<1}|\xi|^{-\frac{1}{r-1}}\|\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}. \tag{3.22}\]
Gathering the estimate for high frequencies (3.20) together with the one for low frequencies (3.22), it follows
\[\|\mathrm{p}_{\vartheta}\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})} +\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}(\xi)\right\|_{L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}+\mathbf{1}_{|\xi|<1}|\xi|^{-\frac{1}{r-1}}\|\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}.\]
Taking the \(L^{1}_{\xi}\) norm above, we use Hölder's inequality to obtain
\[\int_{\mathbf{R}^{3}}\mathbf{1}_{|\xi|<1}|\xi|^{-\frac{1}{r-1}}\|\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\,\mathrm{d}\xi \lesssim\left(\int_{\mathbf{R}^{3}}\mathbf{1}_{|\xi|<1}|\xi|^{-\frac{p^{\prime}}{r-1}}\,\mathrm{d}\xi\right)^{1/p^{\prime}}\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\] \[\lesssim\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})},\]
since \(r>1+p^{\prime}/3\), which implies
\[\|\mathrm{p}_{\vartheta}\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})},\]
and concludes the proof.
**Proposition 3.11**.: _Let \(\ell\geq 0\), \(p\in(3/2,\infty]\) and \(0<\vartheta<\frac{3}{2}(1-\frac{1}{p})\). Let \(S=S(t,x,v)\) verify \(\mathbf{P}S=0\) and \(\mathrm{p}_{\vartheta}\langle v\rangle^{\ell}\widehat{S}\in L^{1}_{\xi}L^{2}_{ t}(H^{s,*}_{v})^{\prime}\), and denote_
\[g_{S}(t)=\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s.\]
_Assume that \(\widehat{g}_{S}\in L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\), then_
\[\|\mathrm{p}_{\vartheta}\widehat{g}_{S}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})} +\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{g}_{S}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})} +\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{g}_{S}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\varepsilon\|\mathrm{p}_{\vartheta}\langle v\rangle^{\ell}\widehat{S}\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}+\|\widehat{g}_{S}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\,.\]
Proof.: Recalling that \(\widehat{g}_{S}\) satisfies (3.12), we can argue as for obtaining (3.19) to get
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{g}_{S}(\xi) \|^{2}_{L^{2}_{v}(\langle v\rangle^{\ell})} \leq-2\lambda\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\widehat{g}_{S}(\xi) \|^{2}_{L^{2}_{v}(\langle v\rangle^{\ell})}-\sigma\left(\frac{1}{\varepsilon^{ 2}}\|\mathbf{P}^{\perp}\widehat{g}_{S}(\xi)\|^{2}_{H^{s,*}_{v}(\langle v \rangle^{\ell})}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{ g}_{S}(\xi)\|^{2}_{L^{2}_{v}}\right)\] \[\quad+C\varepsilon^{2}\|\langle v\rangle^{\ell}\widehat{S}(\xi) \|^{2}_{(H^{s,*}_{v})^{\prime}},\]
for some constants \(\lambda,\sigma,C>0\). By separating the cases of high and low frequencies, we can conclude exactly as in the proof of Proposition 3.10.
### Decay estimates: Soft potentials in the whole space
In this subsection we shall always assume \(\gamma+2s<0\) and \(\Omega_{x}=\mathbf{R}^{3}\), and we shall obtain decay estimates for the semigroup \(U^{\varepsilon}\) (see Proposition 3.12) as well as its integral in time against a source \(\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s\) (see Proposition 3.13). We recall that given any real number \(\omega\in\mathbf{R}\) we denote \(\mathrm{p}_{\omega}:t\mapsto(1+t)^{\omega}\).
**Proposition 3.12**.: _Let \(p\in(3/2,\infty]\) and \(0<\vartheta<\frac{3}{2}(1-\frac{1}{p})\). Let \(f_{0}\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell}) \cap L^{p}_{\xi}L^{2}_{v})\) with \(\ell>\vartheta|\gamma+2s|\), then we have_
\[\|\mathrm{p}_{\vartheta}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}} +\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}(\widehat{U}^{\varepsilon}(\cdot)\widehat{f}_{0})\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\lesssim\|\widehat{f}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}_{0}\|_{L^{p}_{\xi}L^{2}_{v}}.\]
Proof.: Arguing as in the proof of Proposition 3.10, denoting \(f(t)=U^{\varepsilon}(t)f_{0}\) and using that
\[\|\cdot\|_{H^{s,*}(\langle v\rangle^{\ell})} \geq\|\langle v\rangle^{\gamma/2+s}\cdot\|_{L^{2}_{v}(\langle v \rangle^{\ell})},\]
we first obtain
\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{f}(\xi)\|^{2}_{L^{2}_{v}}&\leq-\lambda\|\langle v\rangle^{\gamma/2+s}\mathbf{P}^{\perp}\widehat{f}(\xi)\|^{2}_{L^{2}_{v}}-\lambda\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|^{2}_{L^{2}_{v}}\\ &\quad-\sigma\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|^{2}_{H^{s,*}_{v}}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|^{2}_{L^{2}_{v}}\right),\end{split} \tag{3.23}\]
for some positive constants \(\lambda,\sigma>0\). We now split the analysis into high frequencies and low frequencies.
For high frequencies \(|\xi|\geq 1\) we observe that \(\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\geq\frac{1}{2}\), which yields
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{1}_{|\xi|\geq 1} \|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\leq-\lambda\mathbf{1}_{|\xi|\geq 1} \|\langle v\rangle^{\gamma/2+s}\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{v} }^{2}-\lambda\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\] \[\qquad-\sigma\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{|\xi| \geq 1}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{*,*}_{v}}^{2}+\mathbf{1}_{| \xi|\geq 1}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[\leq-\lambda\mathbf{1}_{|\xi|\geq 1}\|\langle v\rangle^{\gamma/2+s} \widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\] \[\qquad-\sigma\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{|\xi| \geq 1}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{*,*}_{v}}^{2}+\mathbf{1}_{| \xi|\geq 1}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right),\]
for some other constants \(\lambda,\sigma>0\). Thanks to the interpolation inequality (3.17) of the proof of Proposition 3.8, we hence deduce
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{1}_{|\xi|\geq 1} \|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\leq- \omega(1+t)^{-1}\mathbf{1}_{|\xi|\geq 1}\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\] \[-\sigma\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{|\xi|\geq 1}\| \mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{*,*}_{v}}^{2}+\mathbf{1}_{|\xi|\geq 1 }\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right)\] \[+C(1+t)^{-1-\frac{2\ell}{|\gamma+2s|}}\|\widehat{f}(\xi)\|_{L^{2 }_{v}(\langle v\rangle^{\ell})}^{2},\]
for any \(\vartheta<\omega<\frac{\ell}{|\gamma+2s|}\) and some constant \(C>0\). With this inequality we can thus argue as in the proof of Proposition 3.8, which gives, recalling that \((1+t)^{2\omega-1-\frac{2\ell}{|\gamma+2s|}}\) is integrable since \(0<\omega<\frac{\ell}{|\gamma+2s|}\),
\[\mathbf{1}_{|\xi|\geq 1}\|\mathrm{p}_{\omega}\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}}+\frac{1}{\varepsilon}\mathbf{1}_{|\xi|\geq 1}\|\mathrm{p}_{\omega}\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{t}H^{s,*}_{v}}+\mathbf{1}_{|\xi|\geq 1}\|\mathrm{p}_{\omega}\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{t}L^{2}_{v}}\] \[\lesssim\mathbf{1}_{|\xi|\geq 1}\|\widehat{f}_{0}(\xi)\|_{L^{2}_{v}}+\mathbf{1}_{|\xi|\geq 1}\|\widehat{f}(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}. \tag{3.24}\]
We now turn our attention to the low frequencies case \(|\xi|<1\). First of all, from (3.23), we use the interpolation inequality (3.17) of the proof of Proposition 3.8 to deduce
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{1}_{|\xi|<1}\| \widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\leq- \omega(1+t)^{-1}\mathbf{1}_{|\xi|<1}\|\mathbf{P}^{\perp}\widehat{f }(\xi)\|_{L^{2}_{v}}^{2}-\lambda\mathbf{1}_{|\xi|<1}\frac{|\xi|^{2}}{\langle \xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\] \[-\sigma\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{|\xi|<1}\| \mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{*,*}_{v}}^{2}+\mathbf{1}_{|\xi|<1} \frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_ {v}}^{2}\right)\] \[+C(1+t)^{-1-\frac{2\ell}{|\gamma+2s|}}\mathbf{1}_{|\xi|<1}\| \mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2},\]
for any \(\vartheta<\omega<\frac{\ell}{|\gamma+2s|}\) and some constant \(C>0\). As in the proof of Proposition 3.10, we denote by \(p^{\prime}\) the conjugate exponent of \(p\), and consider a real number \(r\) verifying \(1+p^{\prime}/3<r<1+1/(2\vartheta)\). Using inequality (3.21) we hence deduce
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{1}_{|\xi|<1}\| \widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\leq- \vartheta(1+t)^{-1}\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\] \[-\sigma\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{|\xi|<1}\| \mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{*,*}_{v}}^{2}+\mathbf{1}_{|\xi|<1} \frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_ {v}}^{2}\right)\] \[+C(1+t)^{-1-\frac{2\ell}{|\gamma+2s|}}\mathbf{1}_{|\xi|<1}\| \mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}\] \[+C(1+t)^{-1-\frac{1}{r-1}}|\xi|^{-\frac{2}{r-1}}\mathbf{1}_{|\xi|<1 }\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2},\]
for some constant \(C>0\). Multiplying both sides by \((1+t)^{2\vartheta}\) gives
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left\{(1+t)^{2\vartheta }\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}\right\}\leq- \sigma(1+t)^{2\vartheta}\left(\frac{1}{\varepsilon^{2}}\mathbf{1}_{|\xi|<1}\| \mathbf{P}^{\perp}\widehat{f}(\xi)\|_{H^{*,*}_{v}}^{2}+\mathbf{1}_{|\xi|<1} \frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v} }^{2}\right)\] \[+C(1+t)^{2\vartheta-1-\frac{2\ell}{|\gamma+2s|}}\mathbf{1}_{|\xi| <1}\|\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}^{2}\] \[+C(1+t)^{2\vartheta-1-\frac{1}{r-1}}|\xi|^{-\frac{2}{r-1}}\mathbf{1}_ {|\xi|<1}\|\mathbf{P}\widehat{f}(\xi)\|_{L^{2}_{v}}^{2}.\]
Integrating in time implies, for all \(t\geq 0\),
\[(1+t)^{2\vartheta}\mathbf{1}_{|\xi|<1}\|\widehat{f}(t,\xi)\|_{L_{v}^{2}}^{2} +\frac{1}{\varepsilon^{2}}\int_{0}^{t}(1+s)^{2\vartheta}\mathbf{1}_{|\xi|<1}\|\mathbf{P}^{\perp}\widehat{f}(s,\xi)\|_{H_{v}^{s,*}}^{2}\,\mathrm{d}s\] \[+\int_{0}^{t}(1+s)^{2\vartheta}\mathbf{1}_{|\xi|<1}\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\|\mathbf{P}\widehat{f}(s,\xi)\|_{L_{v}^{2}}^{2}\,\mathrm{d}s\] \[\lesssim\mathbf{1}_{|\xi|<1}\|\widehat{f}_{0}(\xi)\|_{L_{v}^{2}}^{2}+\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}^{2}+\mathbf{1}_{|\xi|<1}|\xi|^{-\frac{2}{r-1}}\|\widehat{f}(\xi)\|_{L_{t}^{\infty}L_{v}^{2}}^{2},\]
where we have used that \((1+t)^{2\vartheta-1-\frac{2\ell}{|\gamma+2s|}}\) and \((1+t)^{2\vartheta-1-\frac{1}{r-1}}\) are integrable since \(0<\vartheta<\omega<\frac{\ell}{|\gamma+2s|}\) and \(r<1+1/(2\vartheta)\), respectively. We can now take the supremum in time and then the square-root of the resulting estimate, which gives
\[\begin{split}\mathbf{1}_{|\xi|<1}&\|\mathrm{p}_{\vartheta}\widehat{f}(\xi)\|_{L_{t}^{\infty}L_{v}^{2}}+\frac{1}{\varepsilon}\mathbf{1}_{|\xi|<1}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L_{t}^{2}H_{v}^{s,*}}+\mathbf{1}_{|\xi|<1}\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}(\xi)\right\|_{L_{t}^{2}L_{v}^{2}}\\ &\lesssim\mathbf{1}_{|\xi|<1}\|\widehat{f}_{0}(\xi)\|_{L_{v}^{2}}+\mathbf{1}_{|\xi|<1}\|\widehat{f}(\xi)\|_{L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}+\mathbf{1}_{|\xi|<1}|\xi|^{-\frac{1}{r-1}}\|\widehat{f}(\xi)\|_{L_{t}^{\infty}L_{v}^{2}}.\end{split} \tag{3.25}\]
Gathering the estimate for high frequencies (3.24) together with the one for low frequencies (3.25) and observing that \(\vartheta<\omega\), it follows
\[\|\mathrm{p}_{\vartheta}\widehat{f}(\xi)\|_{L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L_{t}^{2}H_{v}^{s,*}(\langle v\rangle^{\ell})}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}(\xi)\right\|_{L_{t}^{2}L_{v}^{2}}\]
\[\lesssim\|\widehat{f}_{0}(\xi)\|_{L_{v}^{2}}+\|\widehat{f}(\xi)\|_{L_{t}^{ \infty}L_{v}^{2}(\langle v\rangle^{\ell})}+\mathbf{1}_{|\xi|<1}|\xi|^{-\frac {1}{r-1}}\|\widehat{f}(\xi)\|_{L_{t}^{\infty}L_{v}^{2}}.\]
Taking the \(L_{\xi}^{1}\) norm above, we use Holder's inequality to control the last term in the right-hand side as in the proof of Proposition 3.10, to obtain
\[\int_{\mathbf{R}^{3}}\mathbf{1}_{|\xi|<1}|\xi|^{-\frac{1}{r-1}}\|\widehat{f}( \xi)\|_{L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}\,\mathrm{d}\xi \lesssim\|\widehat{f}\|_{L_{\xi}^{p}L_{t}^{\infty}L_{v}^{2}(\langle v \rangle^{\ell})},\]
since \(r>1+p^{\prime}/3\), which implies
\[\|\mathrm{p}_{\vartheta}\widehat{f}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}\|_{L_{\xi}^{1}L_{t}^{2}H_{v}^{s,*}(\langle v\rangle^{\ell})}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}\right\|_{L_{\xi}^{1}L_{t}^{2}L_{v}^{2}}\]
\[\lesssim\|\widehat{f}_{0}\|_{L_{\xi}^{1}L_{v}^{2}}+\|\widehat{f}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}+\|\widehat{f}\|_{L_{\xi}^{p}L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}\]
and concludes the proof.
**Proposition 3.13**.: _Let \(p\in(3/2,\infty]\) and \(0<\vartheta<\frac{3}{2}(1-\frac{1}{p})\). Let \(S=S(t,x,v)\) verify \(\mathbf{P}S=0\) and \(\mathrm{p}_{\vartheta}\widehat{S}\in L_{\xi}^{1}L_{t}^{2}(H_{v}^{s,*})^{\prime}\), and denote_
\[g_{S}(t)=\int_{0}^{t}U^{\varepsilon}(t-s)S(s)\,\mathrm{d}s.\]
_Assume that \(g_{S}\in\mathcal{F}_{x}^{-1}(L_{\xi}^{1}L_{v}^{2}(\langle v\rangle^{\ell})\cap L_{\xi}^{p}L_{v}^{2})\) with \(\ell>\vartheta|\gamma+2s|\), then_
\[\|\mathrm{p}_{\vartheta}\widehat{g}_{S}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{g}_{S}\|_{L_{\xi}^{1}L_{t}^{2}H_{v}^{s,*}}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{g}_{S}\right\|_{L_{\xi}^{1}L_{t}^{2}L_{v}^{2}}\]
\[\lesssim\varepsilon\|\mathrm{p}_{\vartheta}\widehat{S}\|_{L_{\xi}^{1}L_{t}^{2}(H_{v}^{s,*})^{\prime}}+\|\widehat{g}_{S}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}+\|\widehat{g}_{S}\|_{L_{\xi}^{p}L_{t}^{\infty}L_{v}^{2}}.\]
Proof.: Recalling that \(\widehat{g}_{S}\) satisfies (3.12), we can argue as for obtaining (3.23) to get
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\widehat{g}_{S}(\xi)\| _{L_{v}^{2}}^{2} \leq-\lambda\|\langle v\rangle^{\gamma/2+s}\mathbf{P}^{\perp}\widehat{g}_{S}( \xi)\|_{L_{v}^{2}}^{2}-\lambda\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\| \mathbf{P}\widehat{g}_{S}(\xi)\|_{L_{v}^{2}}^{2}\] \[\quad-\sigma\left(\frac{1}{\varepsilon^{2}}\|\mathbf{P}^{\perp} \widehat{g}_{S}(\xi)\|_{H_{v}^{s,*}}^{2}+\frac{|\xi|^{2}}{\langle\xi\rangle^{2}}\| \mathbf{P}\widehat{g}_{S}(\xi)\|_{L_{v}^{2}}^{2}\right)+C\varepsilon^{2}\| \widehat{S}(\xi)\|_{(H_{v}^{s,*})^{\prime}}^{2},\]
for some constants \(\lambda,\sigma,C>0\). By separating the cases of high and low frequencies, we can conclude exactly as in the proof of Proposition 3.12.
## 4. Well-posedness and regularization for the rescaled Boltzmann equation
Consider the equation (1.8) that we rewrite here
\[\begin{cases}\partial_{t}f^{\varepsilon}=\frac{1}{\varepsilon^{2}}(L-\varepsilon v \cdot\nabla_{x})f^{\varepsilon}+\frac{1}{\varepsilon}\Gamma(f^{\varepsilon},f^ {\varepsilon})\\ f^{\varepsilon}_{t=0}=f^{\varepsilon}_{0}.\end{cases}\]
We shall consider mild solutions of (1.8), that is, we shall prove the well-posedness of a solution \(f^{\varepsilon}\) to (1.8) in Duhamel's form
\[f^{\varepsilon}(t)=U^{\varepsilon}(t)f^{\varepsilon}_{0}+\frac{1}{\varepsilon }\int_{0}^{t}U^{\varepsilon}(t-s)\Gamma(f^{\varepsilon}(s),f^{\varepsilon}(s) )\,\mathrm{d}s. \tag{4.1}\]
Taking the Fourier transform in space of (1.8), we have
\[\begin{cases}\partial_{t}\widehat{f}^{\varepsilon}(\xi)=\Lambda^{\varepsilon} (\xi)\widehat{f}^{\varepsilon}(\xi)+\frac{1}{\varepsilon}\widehat{\Gamma}(f^ {\varepsilon},f^{\varepsilon})(\xi)\\ \widehat{f}^{\varepsilon}(\xi)_{t=0}=\widehat{f}^{\varepsilon}_{0}(\xi), \end{cases} \tag{4.2}\]
and by Duhamel's formula
\[\widehat{f}^{\varepsilon}(t,\xi)=\widehat{U}^{\varepsilon}(t,\xi)\widehat{f }^{\varepsilon}_{0}(\xi)+\frac{1}{\varepsilon}\int_{0}^{t}\widehat{U}^{ \varepsilon}(t-s,\xi)\widehat{\Gamma}(f^{\varepsilon}(s),f^{\varepsilon}(s) )(\xi)\,\mathrm{d}s. \tag{4.3}\]
### Nonlinear estimates
We start by recalling some well-known trilinear estimates on the collision operator \(\Gamma\). We start with estimates without velocity weight. From [44, 5], for the hard potentials case \(\gamma+2s\geq 0\) there holds
\[\left|\langle\Gamma(f,g),h\rangle_{L^{2}_{v}}\right|\lesssim\|f\|_{L^{2}_{v}} \|g\|_{H^{s,*}_{v}}\|h\|_{H^{s,*}_{v}}. \tag{4.4}\]
Moreover from [1], for the soft potentials case \(\gamma+2s<0\) one has
\[\begin{split}\left|\langle\Gamma(f,g),h\rangle_{L^{2}_{v}( \langle v\rangle^{\ell})}\right|\\ \qquad\lesssim\left(\|\langle v\rangle^{\gamma/2+s}f\|_{L^{2}_{v} }\|g\|_{H^{s,*}_{v}}+\|f\|_{H^{s,*}_{v}}\|\langle v\rangle^{\gamma/2+s}g\|_{ L^{2}_{v}}\right)\|h\|_{H^{s,*}_{v}}\\ \qquad\qquad+\min\left\{\|\langle v\rangle^{\gamma/2+s}f\|_{L^{2 }_{v}}\|g\|_{L^{2}_{v}},\|f\|_{L^{2}_{v}}\|\langle v\rangle^{\gamma/2+s}g\|_{ L^{2}_{v}}\right\}\|h\|_{H^{s,*}_{v}}.\end{split} \tag{4.5}\]
From these estimates we already obtain
\[\begin{split}\|\Gamma(f,g)\|_{(H^{s,*}_{v})^{\prime}}&=\sup_{\|\phi\|_{H^{s,*}_{v}}\leq 1}\langle\Gamma(f,g),\phi\rangle_{L^{2}_{v}}\\ &\lesssim\|\langle v\rangle^{(\gamma/2+s)_{-}}f\|_{L^{2}_{v}}\|g\|_{H^{s,*}_{v}}+\|f\|_{H^{s,*}_{v}}\|\langle v\rangle^{(\gamma/2+s)_{-}}g\|_{L^{2}_{v}}\\ &\quad+\min\left\{\|\langle v\rangle^{(\gamma/2+s)_{-}}f\|_{L^{2}_{v}}\|g\|_{L^{2}_{v}},\|f\|_{L^{2}_{v}}\|\langle v\rangle^{(\gamma/2+s)_{-}}g\|_{L^{2}_{v}}\right\},\end{split} \tag{4.6}\]
which holds for both hard and soft potentials.
Furthermore, we also have estimates when adding velocity weight \(\langle v\rangle^{\ell}\). For any \(\ell>0\), from [44, 1, 5] (see for instance [34, Lemma 4.1] for a summary) for the hard potentials case \(\gamma+2s\geq 0\) there holds
\[\left|\langle\Gamma(f,g),h\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}\right| \lesssim\left(\|f\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\|g\|_{H^{s,*}_{v}( \langle v\rangle^{\ell})}+\|f\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\|g\|_{ L^{2}_{v}(\langle v\rangle^{\ell})}\right)\|h\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}, \tag{4.7}\]
and for the soft potentials case \(\gamma+2s<0\) one has
\[\begin{split}\left|\langle\Gamma(f,g),h\rangle_{L^{2}_{v}( \langle v\rangle^{\ell})}\right|\\ &\qquad\lesssim\left(\|\langle v\rangle^{\gamma/2+s}f\|_{L^{2}_{v} (\langle v\rangle^{\ell})}\|g\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|f\|_{H^ {s,*}_{v}(\langle v\rangle^{\ell})}\|\langle v\rangle^{\gamma/2+s}g\|_{L^{2}_{ v}(\langle v\rangle^{\ell})}\right)\|h\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\\ &\qquad+\|\langle v\rangle^{\gamma/2+s}f\|_{L^{2}_{v}(\langle v \rangle^{\ell})}\|g\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\|h\|_{H^{s,*}_{v}( \langle v\rangle^{\ell})}.\end{split} \tag{4.8}\]
Therefore we also deduce
\[\|\langle v\rangle^{\ell}\Gamma(f,g)\|_{(H^{s,*}_{v})^{\prime}} =\sup_{\|\phi\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\leq 1}\langle\Gamma(f,g),\phi\rangle_{L^{2}_{v}(\langle v\rangle^{\ell})}\] \[\lesssim\|\langle v\rangle^{(\gamma/2+s)_{-}}f\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\|g\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|f\|_{H^{s,*}_{v}(\langle v\rangle^{\ell})}\|\langle v\rangle^{(\gamma/2+s)_{-}}g\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\] \[\quad+\|\langle v\rangle^{(\gamma/2+s)_{-}}f\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\|g\|_{L^{2}_{v}(\langle v\rangle^{\ell})}, \tag{4.9}\]
which again gathers both hard and soft potentials cases.
Thanks to (4.6) we deduce our main nonlinear estimate without weight.
**Lemma 4.1**.: _Let \(p\in[1,\infty]\). For any smooth enough functions \(f,g\) there holds_
\[\|\widehat{\Gamma}(f,g)\|_{L^{p}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}} \lesssim\Gamma_{1}+\Gamma_{2}+\min\left\{\Gamma_{3},\Gamma_{4}\right\}\]
_where_
\[\Gamma_{1} =\min\Big{\{}\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_ {L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}H^{*, *}_{v}},\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{p}_{\xi}L^{2}_{ t}L^{2}_{v}}\|\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}H^{*,*}_{v}},\] \[\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{1}_{\xi}L^ {\infty}_{t}L^{2}_{v}}\|\widehat{g}\|_{L^{p}_{\xi}L^{2}_{t}H^{*,*}_{v}},\| \langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{ v}}\|\widehat{g}\|_{L^{p}_{\xi}L^{\infty}_{t}H^{*,*}_{v}}\Big{\}},\] \[\Gamma_{2} =\min\Big{\{}\|\widehat{f}\|_{L^{p}_{\xi}L^{2}_{t}H^{*,*}_{v}}\| \langle v\rangle^{(\gamma/2+s)_{-}}\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{ 2}_{v}},\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}H^{*,*}_{v}}\|\langle v \rangle^{(\gamma/2+s)_{-}}\widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\|\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^{*,*}_{v}}\|\langle v \rangle^{(\gamma/2+s)_{-}}\widehat{g}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}, \|\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}H^{*,*}_{v}}\|\langle v\rangle^{( \gamma/2+s)_{-}}\widehat{g}\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}\Big{\}},\] \[\Gamma_{3} =\min\Big{\{}\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_ {L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{ v}},\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{ v}}\|\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\] \[\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{1}_{\xi}L^ {\infty}_{t}L^{2}_{v}}\|\widehat{g}\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}},\|\langle v \rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\| \widehat{g}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}\Big{\}},\]
_and_
\[\Gamma_{4} =\min\Big{\{}\|\widehat{f}\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}\| \langle v\rangle^{(\gamma/2+s)_{-}}\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{ 2}_{v}},\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\langle v\rangle^ {(\gamma/2+s)_{-}}\widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\|\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\|\langle v \rangle^{(\gamma/2+s)_{-}}\widehat{g}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}, \|\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\langle v\rangle^{(\gamma /2+s)_{-}}\widehat{g}\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}\Big{\}}.\]
Proof.: Using (4.6) we write
\[\left\{\int_{0}^{\infty}\|\widehat{\Gamma}(f(t),g(t))(\xi)\|_{(H^{s,*}_{v})^{\prime}}^{2}\,\mathrm{d}t\right\}^{1/2}\lesssim I_{1}+I_{2}+\min\{I_{3},I_{4}\}\]
with
\[I_{1} =\left\{\int_{0}^{\infty}\left(\int_{\Omega^{\prime}_{\eta}}\| \langle v\rangle^{(\gamma/2+s)_{-}}f(t,\xi-\eta)\|_{L^{2}_{v}}\|\widehat{g}(t, \eta)\|_{H^{*,*}_{v}}\,\mathrm{d}\eta\right)^{2}\,\mathrm{d}t\right\}^{1/2},\] \[I_{2} =\left\{\int_{0}^{\infty}\left(\int_{\Omega^{\prime}_{\eta}}\|f(t, \xi-\eta)\|_{H^{*,*}_{v}}\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{g}(t, \eta)\|_{L^{2}_{v}}\,\mathrm{d}\eta\right)^{2}\,\mathrm{d}t\right\}^{1/2},\] \[I_{3} =\left\{\int_{0}^{\infty}\left(\int_{\Omega^{\prime}_{\eta}}\| \langle v\rangle^{(\gamma/2+s)_{-}}f(t,\xi-\eta)\|_{L^{2}_{v}}\|\widehat{g}(t, \eta)\|_{L^{2}_{v}}\,\mathrm{d}\eta\right)^{2}\,\mathrm{d}t\right\}^{1/2},\]
and
\[I_{4} =\left\{\int_{0}^{\infty}\left(\int_{\Omega^{\prime}_{\eta}}\| f(t,\xi-\eta)\|_{L^{2}_{v}}\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{g}(t, \eta)\|_{L^{2}_{v}}\,\mathrm{d}\eta\right)^{2}\,\mathrm{d}t\right\}^{1/2}.\]
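Let us briefly justify the convolution structure of the terms \(I_{1},\ldots,I_{4}\): since the collision operator \(\Gamma\) is bilinear and acts only on the velocity variable, taking the Fourier transform in \(x\) yields
\[\widehat{\Gamma}(f,g)(t,\xi)=\int_{\Omega^{\prime}_{\eta}}\Gamma\big(\widehat{f}(t,\xi-\eta),\widehat{g}(t,\eta)\big)\,\mathrm{d}\eta,\]
so that the bound (4.6) may be applied inside the integral in \(\eta\).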
We now investigate the term \(I_{1}\). Thanks to Minkowski and Holder inequalities we then obtain
\[I_{1} \lesssim\int_{\Omega^{\prime}_{\eta}}\left(\int_{0}^{\infty}\| \langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}(t,\xi-\eta)\|_{L^{2}_{v}}^{2}\| \widehat{g}(t,\eta)\|_{H^{*,*}_{v}}^{2}\,\mathrm{d}t\right)^{1/2}\mathrm{d}\eta\] \[\lesssim\int_{\Omega^{\prime}_{\eta}}\|\langle v\rangle^{(\gamma/2+ s)_{-}}\widehat{f}(\xi-\eta)\|_{L^{\infty}_{t}L^{2}_{v}}\|\widehat{g}(\eta)\|_{L^{2}_{t}H^{*,*}_{v}} \,\mathrm{d}\eta.\]
Taking the \(L^{p}_{\xi}\) norm in above estimate and using Young's inequality for convolution we first obtain
\[I_{1}\lesssim\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}\quad\text{and}\quad I_{1}\lesssim\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\widehat{g}\|_{L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}}.\]
The terms \(I_{2}\), \(I_{3}\) and \(I_{4}\) are treated exactly as above, exchanging the role of \(f\) and \(g\) when performing Holder's inequality, which concludes the proof. Arguing similarly, we also obtain the main weighted nonlinear estimate below, the proof of which we omit for simplicity.
**Lemma 4.2**.: _Let \(\ell>0\) and \(p\in[1,\infty]\). For any smooth enough functions \(f,g\) there holds_
\[\|\langle v\rangle^{\ell}\widehat{\Gamma}(f,g)\|_{L^{p}_{\xi}L^{2}_{t}(H^{s,*} _{v})^{\prime}}\lesssim\widetilde{\Gamma}_{1}+\widetilde{\Gamma}_{2}+ \widetilde{\Gamma}_{3},\]
_where_
\[\widetilde{\Gamma}_{1}=\min\Big{\{} \|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{p}_{\xi}L^{ \infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\|\widehat{g}\|_{L^{1}_{\xi}L^{ 2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})},\|\langle v\rangle^{(\gamma/2+s)_{ -}}\widehat{f}\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\| \widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\] \[\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{1}_{\xi}L^{ \infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\|\widehat{g}\|_{L^{p}_{\xi}L^{ 2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})},\|\langle v\rangle^{(\gamma/2+s)_{ -}}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\| \widehat{g}\|_{L^{p}_{\xi}L^{\infty}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})} \Big{\}},\] \[\widetilde{\Gamma}_{2}=\min\Big{\{} \|\widehat{f}\|_{L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})} \|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L ^{2}_{v}(\langle v\rangle^{\ell})},\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}H ^{s,*}_{v}(\langle v\rangle^{\ell})}\|\langle v\rangle^{(\gamma/2+s)_{-}} \widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\] \[\|\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{ \ell})}\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{g}\|_{L^{p}_{\xi}L^{ \infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})},\|\widehat{f}\|_{L^{1}_{\xi}L^{ \infty}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\|\langle v\rangle^{(\gamma/2+s )_{-}}\widehat{g}\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\Big{\}},\]
_and_
\[\widetilde{\Gamma}_{3}=\min\Big{\{} \|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{p}_{\xi}L^{ \infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\|\widehat{g}\|_{L^{1}_{\xi}L^{ 2}_{t}L^{2}_{t}(\langle v\rangle^{\ell})},\|\langle v\rangle^{(\gamma/2+s)_{-} }\widehat{f}\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\| \widehat{g}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\] \[\|\langle v\rangle^{(\gamma/2+s)_{-}}\widehat{f}\|_{L^{1}_{\xi}L^{ \infty}_{t}L^{2}_{t}(\langle v\rangle^{\ell})}\|\widehat{g}\|_{L^{p}_{\xi}L^{2}_{ t}L^{2}_{t}(\langle v\rangle^{\ell})},\|\langle v\rangle^{(\gamma/2+s)_{-}} \widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\| \widehat{g}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})} \Big{\}}.\]
### Proof of Theorem 2.1-(1)
We consider the torus case \(\Omega_{x}=\mathbf{T}^{3}\).
#### 4.2.1. Global existence
Let \(\ell\geq 0\) be fixed and define the space
\[\mathscr{X}=\Big{\{}f\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v }(\langle v\rangle^{\ell})\cap L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle ^{\ell}))\mid f\text{ satisfies }(\ref{eq:1}),\;\|f\|_{\mathscr{X}}<\infty\Big{\}}\]
with
\[\|f\|_{\mathscr{X}}:=\|\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\mathbf{P}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}.\]
Let \(f^{\varepsilon}_{0}\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{ \ell}))\) verify
\[\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}\leq\eta_{0},\]
and consider the map \(\Phi:\mathscr{X}\to\mathscr{X}\), \(f^{\varepsilon}\mapsto\Phi[f^{\varepsilon}]\) defined by, for all \(t\geq 0\),
\[\Phi[f^{\varepsilon}](t)=U^{\varepsilon}(t)f^{\varepsilon}_{0}+\frac{1}{ \varepsilon}\int_{0}^{t}U^{\varepsilon}(t-s)\Gamma(f^{\varepsilon}(s),f^{ \varepsilon}(s))\,\mathrm{d}s, \tag{4.10}\]
thus, for all \(\xi\in\mathbf{Z}^{3}\),
\[\widehat{\Phi}[f^{\varepsilon}](t,\xi)=\widehat{U}^{\varepsilon}(t,\xi)\widehat{f }^{\varepsilon}_{0}(\xi)+\frac{1}{\varepsilon}\int_{0}^{t}\widehat{U}^{ \varepsilon}(t-s,\xi)\widehat{\Gamma}(f^{\varepsilon}(s),f^{\varepsilon}(s))( \xi)\,\mathrm{d}s. \tag{4.11}\]
Thanks to Proposition 3.2 we deduce, for some constant \(C_{0}>0\) independent of \(\varepsilon\), that
\[\|U^{\varepsilon}(\cdot)f^{\varepsilon}_{0}\|_{\mathscr{X}}\leq C_{0}\|\widehat{f }^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}.\]
Moreover thanks to Proposition 3.4 we get, for some constant \(C_{1}>0\) independent of \(\varepsilon\),
\[\frac{1}{\varepsilon}\left\|\int_{0}^{t}U^{\varepsilon}(t-s)\Gamma(f^{\varepsilon}(s),f^{\varepsilon}(s))\,\mathrm{d}s\right\|_{\mathscr{X}} \leq C_{1}\|\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}\] \[\leq C_{1}\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}\] \[\leq C_{1}\|f^{\varepsilon}\|_{\mathscr{X}}^{2},\]
where we have used Lemma 4.2 in the second line. Gathering previous estimates yields
\[\|\Phi[f^{\varepsilon}]\|_{\mathscr{X}}\leq C_{0}\|\widehat{f}^{\varepsilon} _{0}\|_{L^{1}_{\xi}L^{2}_{v}}+C_{1}\|f^{\varepsilon}\|_{\mathscr{X}}^{2}. \tag{4.12}\]
Moreover for \(f^{\varepsilon},g^{\varepsilon}\in\mathscr{X}\) we observe that
\[\Phi[f^{\varepsilon}](t)-\Phi[g^{\varepsilon}](t) =\frac{1}{\varepsilon}\int_{0}^{t}U^{\varepsilon}(t-s)\Gamma(f^{ \varepsilon}(s),f^{\varepsilon}(s)-g^{\varepsilon}(s))\,\mathrm{d}s\] \[\quad+\frac{1}{\varepsilon}\int_{0}^{t}U^{\varepsilon}(t-s) \Gamma(f^{\varepsilon}(s)-g^{\varepsilon}(s),g^{\varepsilon}(s))\,\mathrm{d}s,\]
hence Proposition 3.4 and Lemma 4.2 yields, for some constant \(C_{1}>0\) independent of \(\varepsilon\),
\[\|\Phi[f^{\varepsilon}]-\Phi[g^{\varepsilon}]\|_{\mathscr{X}}\] \[\leq C_{1}\|\langle v\rangle^{\ell}\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon}-g^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}+C_{1}\|\langle v\rangle^{\ell}\widehat{\Gamma}(f^{\varepsilon}-g^{\varepsilon},g^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}\] \[\leq C_{1}\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\|\widehat{f}^{\varepsilon}-\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+C_{1}\|\widehat{f}^{\varepsilon}-\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\|\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\] \[\leq C_{1}(\|f^{\varepsilon}\|_{\mathscr{X}}+\|g^{\varepsilon}\|_{\mathscr{X}})\|f^{\varepsilon}-g^{\varepsilon}\|_{\mathscr{X}}. \tag{4.13}\]
As a consequence of estimates (4.12)-(4.13) we can construct a global solution \(f^{\varepsilon}\in\mathscr{X}\) to the equation (4.1) if \(\eta_{0}>0\) is small enough. Indeed let \(B_{\mathscr{X}}(\eta)=\{f\in\mathscr{X}\;|\;\|f\|_{\mathscr{X}}\leq\eta\}\) for \(\eta>0\) be the closed ball in \(\mathscr{X}\) of radius \(\eta\). Choose
\[\eta=2C_{0}\eta_{0}\quad\text{and}\quad\eta_{0}\leq\frac{1}{8C_{0}C_{1}},\]
and observe that \(\eta_{0}\) does not depend on \(\varepsilon\). Then for any \(f^{\varepsilon}\in B_{\mathscr{X}}(\eta)\) we have from (4.12) that
\[\|\Phi[f^{\varepsilon}]\|_{\mathscr{X}}\leq 2C_{0}\eta_{0}=\eta,\]
and for any \(f^{\varepsilon},g^{\varepsilon}\in B_{\mathscr{X}}(\eta)\) we have from (4.13) that
\[\|\Phi[f^{\varepsilon}]-\Phi[g^{\varepsilon}]\|_{\mathscr{X}}\leq 4C_{0}C_{1} \eta_{0}\|f^{\varepsilon}-g^{\varepsilon}\|_{\mathscr{X}}\leq\frac{1}{2}\|f^{ \varepsilon}-g^{\varepsilon}\|_{\mathscr{X}}.\]
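Let us verify these two bounds explicitly. Since \(\eta=2C_{0}\eta_{0}\) and \(\eta_{0}\leq\frac{1}{8C_{0}C_{1}}\), estimate (4.12) gives
\[\|\Phi[f^{\varepsilon}]\|_{\mathscr{X}}\leq C_{0}\eta_{0}+C_{1}(2C_{0}\eta_{0})^{2}=C_{0}\eta_{0}\left(1+4C_{0}C_{1}\eta_{0}\right)\leq\frac{3}{2}\,C_{0}\eta_{0}\leq\eta,\]
while the contraction constant coming from (4.13) is \(2C_{1}\eta=4C_{0}C_{1}\eta_{0}\leq\frac{1}{2}\).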
Thus \(\Phi:B_{\mathscr{X}}(\eta)\to B_{\mathscr{X}}(\eta)\) is a contraction and therefore there is a unique \(f^{\varepsilon}\in B_{\mathscr{X}}(\eta)\) such that \(\Phi[f^{\varepsilon}]=f^{\varepsilon}\), which is then a solution to (4.1). This completes the proof of global existence in Theorem 2.1-(1) together with estimate (2.9).
#### 4.2.2. Uniqueness
Consider two solutions \(f^{\varepsilon},g^{\varepsilon}\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_ {t}L^{2}_{v}(\langle v\rangle^{\ell})\cap L^{1}_{\xi}L^{2}_{t}H^{*,*}_{v}( \langle v\rangle^{\ell}))\) to (4.1) associated to the same initial data \(f^{\varepsilon}_{0}\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{2}_{v}(\langle v \rangle^{\ell}))\) satisfying \(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell} )}\leq\eta_{0}\) with \(\eta_{0}>0\) small enough and
\[\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H ^{*,*}_{v}(\langle v\rangle^{\ell})} \lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})},\] \[\|\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H ^{*,*}_{v}(\langle v\rangle^{\ell})} \lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}.\]
Arguing as in the existence proof above, we obtain
\[\|f^{\varepsilon}-g^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{ 2}_{v}(\langle v\rangle^{\ell})}+\|f^{\varepsilon}-g^{\varepsilon}\|_{L^{1}_{ \xi}L^{2}_{t}H^{*,*}_{v}(\langle v\rangle^{\ell})}\] \[\lesssim\left(\|f^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{ 2}_{v}(\langle v\rangle^{\ell})}+\|g^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{ *,*}_{v}(\langle v\rangle^{\ell})}\right)\left(\|f^{\varepsilon}-g^{\varepsilon} \|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|f^{ \varepsilon}-g^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{*,*}_{v}(\langle v \rangle^{\ell})}\right).\]
Using that \(\|f^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})} +\|g^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{*,*}_{v}(\langle v\rangle^{\ell})} \lesssim\eta_{0}\) is small enough we conclude the proof of uniqueness in Theorem 2.1-(1).
#### 4.2.3. Decay for hard potentials
Let \(f^{\varepsilon}\) be the solution to (4.1) constructed in Theorem 2.1-(1) associated to the initial data \(f_{0}^{\varepsilon}\), and let \(\lambda>0\) be given by Proposition 3.2. Using Proposition 3.6 and Proposition 3.7 we obtain
\[\|\mathrm{e}_{\lambda}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{ \infty}_{\varepsilon}L^{2}_{\varepsilon}(\langle v\rangle^{\ell})}+\frac{1}{ \varepsilon}\|\mathrm{e}_{\lambda}\mathbf{P}^{\perp}\widehat{f}^{\varepsilon }\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\mathrm{e}_{ \lambda}\mathbf{P}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\mathrm{e}_{\lambda}\langle v\rangle^{\ell}\widehat {\Gamma}(f^{\varepsilon},f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v} )}.\]
Thanks to Lemma 4.2 we have
\[\|\mathrm{e}_{\lambda}\langle v\rangle^{\ell}\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})} \lesssim\|\mathrm{e}_{\lambda}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{ \infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\|\widehat{f}^{\varepsilon}\|_{ L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})},\]
therefore using that \(\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\) from the existence result in Theorem 2.1-(1), we obtain
\[\|\mathrm{e}_{\lambda}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{ \infty}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\| \mathrm{e}_{\lambda}\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi }L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\mathrm{e}_{\lambda} \mathbf{P}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\mathrm{e}_{\lambda}\widehat{f}^{\varepsilon}\|_{ L^{1}_{\xi}L^{\infty}_{v}L^{2}_{v}(\langle v\rangle^{\ell})}\|\widehat{f}^{ \varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_ {v}(\langle v\rangle^{\ell})}\|\mathrm{e}_{\lambda}\widehat{f}^{\varepsilon }\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}.\]
Since \(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{ \ell})}\leq\eta_{0}\) is small enough, the last term in the right-hand side can be absorbed into the left-hand side, which thus concludes the proof of the decay estimate (2.10) in Theorem 2.1-(1).
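More explicitly, denoting by \(X\) the left-hand side of the last estimate and by \(C\) the implicit constant, it reads \(X\leq C\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+C\eta_{0}X\); up to further reducing \(\eta_{0}\) so that \(C\eta_{0}\leq\frac{1}{2}\), this yields
\[X\leq 2C\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})},\]
which gives (2.10). The same absorption argument is used for the remaining decay estimates below.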
#### 4.2.4. Decay for soft potentials
Let \(f^{\varepsilon}\) be the solution to (4.1) constructed in Theorem 2.1-(1) associated to the initial data \(f_{0}^{\varepsilon}\) with \(\ell>0\), and let \(0<\omega<\frac{\ell}{|\gamma+2g|}\).
Using Proposition 3.8 and Proposition 3.9 we obtain
\[\|\mathrm{p}_{\omega}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{ \infty}_{t}L^{2}_{v}} +\frac{1}{\varepsilon}\|\mathrm{p}_{\omega}\mathbf{P}^{\perp} \widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\|\mathrm{p}_{ \omega}\mathbf{P}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+ \|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v \rangle^{\ell})}+\|\mathrm{p}_{\omega}\widehat{\Gamma}(f^{\varepsilon},f^{ \varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})},\]
and from Lemma 4.2 we have
\[\|\mathrm{p}_{\omega}\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})\|_{L^{1 }_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}\lesssim\|\mathrm{p}_{\omega}\widehat{f }^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\widehat{f}^{ \varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}.\]
Using that \(\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\) from the existence result in Theorem 2.1-(1), we deduce
\[\|\mathrm{p}_{\omega}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{ \infty}_{t}L^{2}_{v}} +\frac{1}{\varepsilon}\|\mathrm{p}_{\omega}\mathbf{P}^{\perp} \widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\|\mathrm{p}_{ \omega}\mathbf{P}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}\|\mathrm{p}_{\omega}\widehat{f}^{\varepsilon}\|_{L^{1}_ {\xi}L^{\infty}_{t}L^{2}_{v}}.\]
Since \(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})} \leq\eta_{0}\) is small enough, the last term in the right-hand side can be absorbed into the left-hand side, which thus concludes the proof of the decay estimate (2.11) in Theorem 2.1-(1).
### Proof of Theorem 2.1-(2)
We consider the whole space case \(\Omega_{x}=\mathbf{R}^{3}\).
#### 4.3.1. Global existence
Recall that \(p\in(3/2,\infty]\) and define the space, with \(\ell\geq 0\),
\[\mathscr{Y}=\Big{\{}f\in\mathcal{F}^{-1}_{x}(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\cap L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})\cap L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})\cap L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell}))\ \Big{|}\ \|f\|_{\mathscr{Y}}<\infty\Big{\}},\]
with
\[\begin{split}\|f\|_{\mathscr{Y}}&:=\|\widehat{f}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{f}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\\ &\quad+\|\widehat{f}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\frac{1}{\varepsilon}\|\mathbf{P}^{\perp}\widehat{f}\|_{L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}\right\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}.\end{split}\]
Let \(f_{0}^{\varepsilon}\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{ \ell})\cap L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell}))\) verify
\[\|\widehat{f}_{0}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}_{0}^{\varepsilon}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\leq\eta_{0},\]
and consider the map \(\Phi:\mathscr{Y}\to\mathscr{Y}\), \(f^{\varepsilon}\mapsto\Phi[f^{\varepsilon}]\) given by (4.10), which in particular satisfies (4.11) for all \(\xi\in\mathbf{R}^{3}\).
Thanks to Proposition 3.2 we deduce, for some constant \(C_{0}>0\) independent of \(\varepsilon\), that
\[\|U^{\varepsilon}(\cdot)f_{0}^{\varepsilon}\|_{\mathscr{Y}}\leq C_{0}\left( \|\widehat{f}_{0}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{ \ell})}+\|\widehat{f}_{0}^{\varepsilon}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v \rangle^{\ell})}\right).\]
Moreover thanks to Proposition 3.4 we get
\[\frac{1}{\varepsilon}\left\|\int_{0}^{t}U^{\varepsilon}(t-s)\Gamma (f^{\varepsilon}(s),f^{\varepsilon}(s))\,\mathrm{d}s\right\|_{\mathscr{Y}}\] \[\qquad\lesssim\|\langle v\rangle^{\ell}\widehat{\Gamma}(f^{ \varepsilon},f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}+ \|\langle v\rangle^{\ell}\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})\| _{L^{p}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}\] \[\qquad\lesssim\left(\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^ {\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_ {L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\right)\| \widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^ {\ell})},\]
where we have used Lemma 4.2 in the second line. We now observe that, splitting \(\widehat{f}^{\varepsilon}=\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}+\mathbf{ P}\widehat{f}^{\varepsilon}\), on the one hand we have
\[\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v \rangle^{\ell})}\lesssim\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1 }_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\mathbf{P}\widehat{f} ^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}.\]
On the other hand
\[\|\mathbf{P}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{ 2}_{t}}\lesssim\|\mathbf{1}_{|\xi|\geq 1}\mathbf{P}\widehat{f}^{\varepsilon}\|_{L^ {1}_{\xi}L^{2}_{t}L^{2}_{t}}+\|\mathbf{1}_{|\xi|<1}\mathbf{P}\widehat{f}^{ \varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\qquad\lesssim\left\|\mathbf{1}_{|\xi|\geq 1}\frac{|\xi|}{\langle\xi \rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{ 2}_{v}}+\left\|\mathbf{1}_{|\xi|<1}|\xi|^{-1}\frac{|\xi|}{\langle\xi\rangle} \mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\qquad\lesssim\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P} \widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}+\left\|\frac{ |\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{p}_{ \xi}L^{2}_{t}L^{2}_{v}},\]
where we have used Holder's inequality in last line, using that \(p>3/2\) so that \(\mathbf{1}_{|\xi|<1}|\xi|^{-1}\in L^{p^{\prime}}_{\xi}\). Putting together the two last estimates, we have
\[\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v \rangle^{\ell})}\lesssim\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1 }_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{ \langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2 }_{t}L^{2}_{v}}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{ \varepsilon}\right\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}. \tag{4.14}\]
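Let us also record why the restriction \(p>3/2\) is needed here: by Holder's inequality, the low-frequency contribution of \(\mathbf{P}\widehat{f}^{\varepsilon}\) involves the finiteness of
\[\|\mathbf{1}_{|\xi|<1}|\xi|^{-1}\|^{p^{\prime}}_{L^{p^{\prime}}_{\xi}}=\int_{|\xi|<1}|\xi|^{-p^{\prime}}\,\mathrm{d}\xi=4\pi\int_{0}^{1}r^{2-p^{\prime}}\,\mathrm{d}r,\]
which is finite if and only if \(p^{\prime}<3\), that is \(p>3/2\).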
We hence deduce that there is some constant \(C_{1}>0\), independent of \(\varepsilon\), such that
\[\frac{1}{\varepsilon}\left\|\int_{0}^{t}U^{\varepsilon}(t-s)\Gamma(f^{\varepsilon}(s),f^{\varepsilon}(s))\,\mathrm{d}s\right\|_{\mathscr{Y}}\] \[\qquad\leq C_{1}\left(\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\right)\] \[\qquad\qquad\times\left(\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}\right).\]
Therefore, gathering previous estimates, we obtain
\[\|\Phi[f^{\varepsilon}]\|_{\mathscr{Y}}\leq C_{0}\left(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\right)+C_{1}\|f^{\varepsilon}\|_{\mathscr{Y}}^{2}. \tag{4.15}\]
Moreover, for \(f^{\varepsilon},g^{\varepsilon}\in\mathscr{Y}\) we obtain arguing as above, thanks to Proposition 3.4 and Lemma 4.2, that
\[\frac{1}{\varepsilon}\left\|\int_{0}^{t}U^{\varepsilon}(t-s)\Gamma(f^{\varepsilon}(s),f^{\varepsilon}(s)-g^{\varepsilon}(s))\,\mathrm{d}s\right\|_{\mathscr{Y}}\] \[\qquad\leq C_{1}\left(\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_{L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\right)\] \[\qquad\qquad\times\left(\|\widehat{f}^{\varepsilon}-\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}-\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\right),\]
as well as
\[\frac{1}{\varepsilon}\left\|\int_{0}^{t}U^{\varepsilon}(t-s)\Gamma(f^{\varepsilon}(s)-g^{\varepsilon}(s),g^{\varepsilon}(s))\,\mathrm{d}s\right\|_{\mathscr{Y}}\] \[\qquad\leq C_{1}\left(\|\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\|\widehat{g}^{\varepsilon}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{g}^{\varepsilon}\|_{L^{p}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\right)\] \[\qquad\qquad\times\left(\|\widehat{f}^{\varepsilon}-\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}-\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}\right).\]
Together with (4.14) for the term \(\|\widehat{f}^{\varepsilon}-\widehat{g}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t }H^{*,*}_{v}(\langle v\rangle^{\ell})}\), this implies that
\[\|\Phi[f^{\varepsilon}]-\Phi[g^{\varepsilon}]\|_{\mathscr{Y}}\leq C_{1}(\|f \|_{\mathscr{Y}}+\|g\|_{\mathscr{Y}})\|f-g\|_{\mathscr{Y}}. \tag{4.16}\]
As a consequence of estimates (4.15)-(4.16) we can construct a global solution \(f^{\varepsilon}\in\mathscr{Y}\) to the equation (4.1) if \(\eta_{0}>0\) is small enough by arguing as in Section 4.2.1. This completes the proof of global existence in Theorem 2.1-(2) together with estimate (2.12).
#### 4.3.2. Uniqueness
Using the above estimates, we can argue as in Section 4.2.2.
#### 4.3.3. Decay for hard potentials
Let \(f^{\varepsilon}\) be the solution to (4.1) constructed in Theorem 2.1-(2) associated to the initial data \(f^{\varepsilon}_{0}\), and let \(0<\vartheta<\frac{3}{2}(1-\frac{1}{p})\). Arguing as above, using Proposition 3.10 and Proposition 3.11 we obtain
\[\|\mathrm{p}_{\vartheta}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})} +\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\mathrm{p}_{\vartheta}\langle v\rangle^{\ell}\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}.\]
Thanks to Lemma 4.2 we have
\[\|\mathrm{p}_{\vartheta}\langle v\rangle^{\ell}\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}\lesssim\|\mathrm{p}_{\vartheta}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}(\langle v\rangle^{\ell})},\]
and by (4.14) we have
\[\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{*,*}_{v}( \langle v\rangle^{\ell})} \lesssim\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1}_{ \xi}L^{2}_{t}H^{*,*}_{v}(\langle v\rangle^{\ell})}+\left\|\frac{|\xi|}{\langle \xi\rangle}\widehat{\mathbf{P}}f^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^ {2}_{v}}+\left\|\frac{|\xi|}{\langle\xi\rangle}\widehat{\mathbf{P}}f^{ \varepsilon}\right\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{ v}(\langle v\rangle^{\ell})},\]
where we have used the estimate of Theorem 2.1-(2) in last line. Observing that we also have \(\|\widehat{f}^{\varepsilon}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v \rangle^{\ell})}\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v} (\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_ {v}(\langle v\rangle^{\ell})}\), it follows
\[\|\mathrm{p}_{\vartheta}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L ^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})} +\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp} \widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{*,*}_{v}(\langle v\rangle^{ \ell})}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P} \widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}( \langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v} (\langle v\rangle^{\ell})}\] \[\qquad+\|\mathrm{p}_{\vartheta}\widehat{f}^{\varepsilon}\|_{L^{1}_{ \xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\left(\|\widehat{f}^{ \varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f }^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\right).\]
Since \(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+ \|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\leq\eta_ {0}\) is small enough, the last term in the right-hand side can be absorbed into the left-hand side, which thus concludes the proof of the decay estimate (2.13) in Theorem 2.1-(2).
#### 4.3.4. Decay for soft potentials
Let \(0<\vartheta<\frac{3}{2}(1-\frac{1}{p})\). Let \(f^{\varepsilon}\) be the solution to (4.1) constructed in Theorem 2.1-(2) associated to the initial data \(f^{\varepsilon}_{0}\) with \(\ell>\vartheta|\gamma{+}2s|\). Arguing as above, using Proposition 3.12 and Proposition 3.13 we obtain
\[\|\mathrm{p}_{\vartheta}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}} +\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\|\mathrm{p}_{\vartheta}\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}.\]
For the nonlinear term above, we argue as in Section 4.3.3 so that
\[\|\mathrm{p}_{\vartheta}\widehat{\Gamma}(f^{\varepsilon},f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}\] \[\lesssim\|\mathrm{p}_{\vartheta}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\left(\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}}\right).\]
Therefore, using the estimate of Theorem 2.1-(2), we obtain
\[\|\mathrm{p}_{\vartheta}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\frac{1}{\varepsilon}\|\mathrm{p}_{\vartheta}\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}H^{s,*}_{v}}+\left\|\mathrm{p}_{\vartheta}\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}\widehat{f}^{\varepsilon}\right\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\] \[\lesssim\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\] \[\qquad\qquad\qquad+\|\mathrm{p}_{\vartheta}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\left(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v\rangle^{\ell})}\right).\]
Since \(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}(\langle v\rangle^{ \ell})}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}(\langle v \rangle^{\ell})}\leq\eta_{0}\) is small enough, the last term in the right-hand side can be absorbed into the left-hand side, which thus concludes the proof of the decay estimate (2.14) in Theorem 2.1-(2).
## 5. Well-posedness for the Navier-Stokes-Fourier system
We start by considering the incompressible Navier-Stokes equation, that is, the first equation in (1.14). We denote by \(V\) the semigroup associated to the operator \(\nu_{1}\Delta_{x}\), and we also denote, for all \(t\geq 0\) and \(\xi\in\Omega^{\prime}_{\xi}\),
\[\widehat{V}(t,\xi)=\mathcal{F}_{x}(V(t)\mathcal{F}_{x}^{-1})(\xi)=e^{-\nu_{1}| \xi|^{2}t}.\]
We shall obtain below boundedness and integrated-in-time regularization estimates for \(V\) as well as for its integral in time against a source \(\int_{0}^{t}V(t-s)S(s)\,\mathrm{d}s\).
**Proposition 5.1**.: _Let \(p\in[1,\infty]\). Let \(u_{0}\in\mathcal{F}_{x}^{-1}(L^{p}_{\xi})\) and suppose moreover that \(u_{0}\) satisfies (1.16) in the torus case \(\Omega_{x}=\mathbf{T}^{3}\). Then_
\[\|\widehat{V}(\cdot)\widehat{u}_{0}\|_{L^{p}_{\xi}L^{\infty}_{t}}+\||\xi| \widehat{V}(\cdot)\widehat{u}_{0}\|_{L^{p}_{\xi}L^{2}_{t}}\lesssim\|\widehat{ u}_{0}\|_{L^{p}_{\xi}},\]
_and moreover \(V(t)u_{0}\) also satisfies (1.16) for all \(t\geq 0\) in the torus case._
_Remark 5.2_.: Observe that, in the torus case \(\Omega_{x}=\mathbf{T}^{3}\), one can replace \(|\xi|\widehat{V}(\cdot)\widehat{u}_{0}\) in above estimate by \(\langle\xi\rangle\widehat{V}(\cdot)\widehat{u}_{0}\) since \(V(t)u_{0}\) is mean-free.
Proof.: Let \(u(t)=V(t)u_{0}\), which satisfies
\[\partial_{t}u=\nu_{1}\Delta_{x}u,\quad u_{|t=0}=u_{0}.\]
We already observe that, in the torus case, the solution \(u(t)\) is also mean-free, that is satisfies (1.16). For all \(\xi\in\Omega^{\prime}_{\xi}\) we thus have
\[\partial_{t}\widehat{u}(t,\xi)=-\nu_{1}|\xi|^{2}\widehat{u}(t,\xi),\quad \widehat{u}(\xi)|_{t=0}=\widehat{u}_{0}(\xi),\]
thus for any \(t\geq 0\) we have
\[|\widehat{u}(t,\xi)|^{2}+\int_{0}^{t}|\xi|^{2}|\widehat{u}(s,\xi)|^{2}\,\mathrm{d}s\lesssim|\widehat{u}_{0}(\xi)|^{2}.\]
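Indeed, the solution is explicit in Fourier variables, \(\widehat{u}(t,\xi)=e^{-\nu_{1}|\xi|^{2}t}\,\widehat{u}_{0}(\xi)\), so that
\[\int_{0}^{t}|\xi|^{2}|\widehat{u}(s,\xi)|^{2}\,\mathrm{d}s=|\widehat{u}_{0}(\xi)|^{2}\,\frac{1-e^{-2\nu_{1}|\xi|^{2}t}}{2\nu_{1}}\leq\frac{1}{2\nu_{1}}|\widehat{u}_{0}(\xi)|^{2}.\]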
Taking the supremum in time and then taking the square-root of previous estimate yields
\[\|\widehat{u}(\xi)\|_{L^{\infty}_{t}}+\||\xi|\widehat{u}(\xi)\|_{L^{2}_{t}} \lesssim|\widehat{u}_{0}(\xi)|,\]
and we conclude the proof by taking the \(L^{p}_{\xi}\) norm.
**Proposition 5.3**.: _Suppose \(p\in[1,\infty]\). Let \(S=S(t,x)\) satisfy \(\langle\xi\rangle^{-1}\widehat{S}\in L^{p}_{\xi}L^{2}_{t}\) in the torus case \(\Omega_{x}=\mathbf{T}^{3}\) as well as (1.16), and \(|\xi|^{-1}\widehat{S}\in L^{p}_{\xi}L^{2}_{t}\) in the whole space case \(\Omega_{x}=\mathbf{R}^{3}\). Denote_
\[u_{S}(t)=\int_{0}^{t}V(t-s)S(s)\,\mathrm{d}s.\]
_Then in the torus case we have_
\[\|\widehat{u}_{S}\|_{L^{p}_{\xi}L^{\infty}_{t}}+\|\langle\xi\rangle\widehat{u}_{S} \|_{L^{p}_{\xi}L^{2}_{t}}\lesssim\|\langle\xi\rangle^{-1}\widehat{S}\|_{L^{p}_{ \xi}L^{2}_{t}}.\]
_and in the whole space case_
\[\|\widehat{u}_{S}\|_{L^{p}_{\xi}L^{\infty}_{t}}+\||\xi|\widehat{u}_{S}\|_{L^{p} _{\xi}L^{2}_{t}}\lesssim\||\xi|^{-1}\widehat{S}\|_{L^{p}_{\xi}L^{2}_{t}}.\]
Proof.: We first observe that \(u_{S}\) satisfies
\[\partial_{t}u_{S}-\nu_{1}\Delta_{x}u_{S}=S,\quad u_{S}|_{t=0}=0.\]
We only prove the whole space case, the case of the torus being similar by observing that \(u_{S}\) is mean-free, that is verifies (1.16).
For all \(\xi\in\mathbf{R}^{3}\) and all \(t\geq 0\) we have
\[\partial_{t}\widehat{u}_{S}(t,\xi)+\nu_{1}|\xi|^{2}\widehat{u}_{S}(t,\xi)= \widehat{S}(t,\xi),\quad\widehat{u_{S}(\xi)}_{|t=0}=0.\]
We can compute
\[\partial_{t}\frac{1}{2}|\widehat{u}_{S}(t,\xi)|^{2}+\nu_{1}|\xi|^{2}| \widehat{u}_{S}(\xi)|^{2}\leq(\widehat{S}(\xi),\widehat{u}_{S}(\xi)),\]
which implies, for all \(t\geq 0\),
\[|\widehat{u}_{S}(t,\xi)|^{2}+\int_{0}^{t}|\xi|^{2}|\widehat{u}_{S}(s,\xi)|^{2}\,\mathrm{d}s\lesssim\int_{0}^{t}\big||\xi|^{-1}\widehat{S}(s,\xi)\big|^{2}\,\mathrm{d}s.\]
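Here the right-hand side of the previous differential inequality has been estimated by Young's inequality,
\[(\widehat{S}(\xi),\widehat{u}_{S}(\xi))\leq\big||\xi|^{-1}\widehat{S}(\xi)\big|\,\big||\xi|\widehat{u}_{S}(\xi)\big|\leq\frac{\nu_{1}}{2}|\xi|^{2}|\widehat{u}_{S}(\xi)|^{2}+\frac{1}{2\nu_{1}}\big||\xi|^{-1}\widehat{S}(\xi)\big|^{2},\]
the first term being absorbed by the left-hand side before integrating in time.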
Taking the supremum in time, then taking the square-root of the estimate, and taking the \(L^{p}_{\xi}\) norm, the proof is thus finished.
We now obtain bilinear estimates for the operator \(Q_{\mathrm{NS}}\).
**Lemma 5.4**.: _Let \(p\in[1,\infty]\). Let \(u,v\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}\cap L^{p}_{\xi}L^{\infty}_ {t})\), then_
\[\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(v,u)\|_{L^{p}_{\xi}L^{2}_{t}}\lesssim\|\widehat{v}\|_{L^{p}_{\xi}L^{2}_{t}}\|\widehat{u}\|_{L^{1}_{\xi}L^{\infty}_{t}}, \tag{5.1}\]
_and also_
\[\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(v,u)\|_{L^{p}_{\xi}L^{2}_{t}}\lesssim\|\widehat{v}\|_{L^{p}_{\xi}L^{\infty}_{t}}\|\widehat{u}\|_{L^{1}_{\xi}L^{2}_{t}}. \tag{5.2}\]
Proof.: From the definition of \(Q_{\mathrm{NS}}\), we first observe that for all \(\xi\in\Omega^{\prime}_{\xi}\) we have
\[|\widehat{Q}_{\mathrm{NS}}(v,u)(\xi)|\lesssim|\xi|\int_{\Omega^{\prime}_{ \eta}}|\widehat{v}(\eta)||\widehat{u}(\xi-\eta)|\,\mathrm{d}\eta,\]
thus by Minkowski's inequality and then Holder's inequality
\[\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(v,u)(\xi)\|_{L^{2}_{t}} \lesssim\int_{\Omega^{\prime}_{\eta}}\left(\int_{0}^{\infty}| \widehat{v}(t,\eta)|^{2}|\widehat{u}(t,\xi-\eta)|^{2}\,\mathrm{d}t\right)^{1/2 }\mathrm{d}\eta\] \[\lesssim\int_{\Omega^{\prime}_{\eta}}\|\widehat{v}(\eta)\|_{L^{2} _{t}}\|\widehat{u}(\xi-\eta)\|_{L^{\infty}_{t}}\,\mathrm{d}\eta.\]
We then conclude the proof of (5.1) by taking the \(L^{p}_{\xi}\) norm above and applying Young's convolution inequality. The proof of (5.2) can be obtained in a similar way, by exchanging the role of \(u\) and \(v\) when applying Holder's inequality with respect to the time variable.
### Global existence in the torus \(\Omega_{x}=\mathbf{T}^{3}\)
We shall construct mild solutions to the first equation in (1.14), namely
\[u(t)=V(t)u_{0}+\int_{0}^{t}V(t-s)Q_{\mathrm{NS}}(u(s),u(s))\,\mathrm{d}s. \tag{5.3}\]
We define the space
\[\mathscr{X}=\left\{u\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi}L^{\infty}_{t}\cap L^{1}_{\xi}(\langle\xi\rangle)L^{2}_{t})\mid u\text{ satisfies (1.16), }\|u\|_{\mathscr{X}}<\infty\right\},\]
with
\[\|u\|_{\mathscr{X}}:=\|\widehat{u}\|_{L^{1}_{\xi}L^{\infty}_{t}}+\|\langle\xi \rangle\widehat{u}\|_{L^{1}_{\xi}L^{2}_{t}}.\]
Let \(u_{0}\in\mathcal{F}_{x}^{-1}(L^{1}_{\xi})\) satisfy (1.16) and
\[\|\widehat{u}_{0}\|_{L^{1}_{\xi}}\leq\eta_{1}.\]
Consider the map \(\Phi:\mathscr{X}\to\mathscr{X}\), \(u\mapsto\Phi[u]\) defined by, for all \(t\geq 0\),
\[\Phi[u](t)=V(t)u_{0}+\int_{0}^{t}V(t-s)Q_{\mathrm{NS}}(u(s),u(s))\,\mathrm{d}s. \tag{5.4}\]
thus, for all \(\xi\in\mathbf{Z}^{3}\),
\[\widehat{\Phi}[u](t,\xi)=\widehat{V}(t,\xi)\widehat{u}_{0}(\xi)+\int_{0}^{t} \widehat{V}(t-s,\xi)\widehat{Q}_{\mathrm{NS}}(u(s),u(s))(\xi)\,\mathrm{d}s. \tag{5.5}\]
For the first term we have from Proposition 5.1 that
\[\|\widehat{V}(t,\xi)\widehat{u}_{0}(\xi)\|_{\mathscr{X}}\leq C_{0}\|\widehat{ u}_{0}\|_{L^{1}_{\xi}},\]
and by Proposition 5.3 we have
\[\left\|\int_{0}^{t}\widehat{V}(t-s,\xi)\widehat{Q}_{\mathrm{NS}} (u(s),u(s))(\xi)\,\mathrm{d}s\right\|_{\mathscr{X}} \lesssim\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u,u)\|_{L^{1}_{ \xi}L^{2}_{t}}\] \[\lesssim\|\hat{u}\|_{L^{1}_{\xi}L^{2}_{t}}\|\hat{u}\|_{L^{1}_{ \xi}L^{\infty}_{t}}\] \[\lesssim\|\langle\xi\rangle\hat{u}\|_{L^{1}_{\xi}L^{2}_{t}}\|\hat {u}\|_{L^{1}_{\xi}L^{\infty}_{t}}\] \[\lesssim\|u\|_{\mathscr{X}}^{2},\]
where we have used Lemma 5.4. Thus we obtain
\[\|\Phi[u]\|_{\mathscr{X}}\lesssim C_{0}\|\widehat{u}_{0}\|_{L^{1}_{\xi}}+C_{1 }\|u\|_{\mathscr{X}}^{2}.\]
Moreover for \(u,v\in\mathscr{X}\) we can also compute, using again Proposition 5.3 and Lemma 5.4, that
\[\left\|\int_{0}^{t}\widehat{V}(t-s,\xi)\widehat{Q}_{\mathrm{NS}}( (u-v)(s),v(s))(\xi)\,\mathrm{d}s\right\|_{\mathscr{X}}+\left\|\int_{0}^{t} \widehat{V}(t-s,\xi)\widehat{Q}_{\mathrm{NS}}(u(s),(u-v)(s))(\xi)\,\mathrm{d}s \right\|_{\mathscr{X}}\] \[\lesssim\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u-v,v)\|_{L^{1}_{ \xi}L^{2}_{t}}+\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u,u-v)\|_{L^{1}_{\xi}L^{ 2}_{t}}\] \[\lesssim\|\hat{u}-\widehat{v}\|_{L^{1}_{\xi}L^{\infty}_{t}}\| \widehat{v}\|_{L^{1}_{\xi}L^{2}_{t}}+\|\widehat{u}\|_{L^{1}_{\xi}L^{2}_{t}}\| \hat{u}-\widehat{v}\|_{L^{1}_{\xi}L^{\infty}_{t}}.\]
Therefore there is \(C_{1}>0\) such that
\[\|\Phi[u]-\Phi[v]\|_{\mathscr{X}}\leq C_{1}(\|u\|_{\mathscr{X}}+\|v\|_{ \mathscr{X}})\|u-v\|_{\mathscr{X}}.\]
Gathering the two inequalities and arguing as in Sections 4.2.1 and 4.2.2, we can construct a global unique solution \(u\in\mathscr{X}\) to the equation (5.3) if \(\eta_{1}>0\) is small enough, which moreover satisfies
\[\|u\|_{\mathscr{X}}\lesssim\|\widehat{u}_{0}\|_{L^{1}_{\xi}}.\]
Once \(u\) has been constructed, we can argue in a similar and even simpler way to construct a global unique mild solution \(\theta\) of the second equation in (1.14), provided \(\eta_{1}>0\) is small enough, namely
\[\theta(t)=\overline{V}(t)\theta_{0}+\int_{0}^{t}\overline{V}(t-s)[-\,\mathrm{ div}_{x}(u(s)\theta(s))]\,\mathrm{d}s,\]
where \(\overline{V}\) denotes the semigroup associated to the operator \(\nu_{2}\Delta_{x}\), and which satisfies moreover
\[\|\theta\|_{\mathscr{X}}\lesssim\|\widehat{u}_{0}\|_{L^{1}_{\xi}}+\|\widehat {\theta}_{0}\|_{L^{1}_{\xi}}.\]
We finally obtain the solution \(\rho\) by using the last equation in (1.14) and observing that we consider mean-free solutions, so that \(\widehat{\rho}(t,0)=\widehat{\theta}(t,0)=0\). This completes the proof of Theorem 2.2-(1).
### Global existence in the whole space \(\Omega_{x}=\mathbf{R}^{3}\)
Similarly as before we define the space, recalling that \(p\in(3/2,+\infty]\),
\[\mathscr{Y}=\left\{u\in\mathcal{F}_{x}^{-1}(L_{\xi}^{1}L_{t}^{\infty}\cap L_{ \xi}^{1}(|\xi|)L_{t}^{2})\cap\mathcal{F}_{x}^{-1}(L_{\xi}^{p}L_{t}^{\infty}\cap L _{\xi}^{p}(|\xi|)L_{t}^{2})\mid\|u\|_{\mathscr{Y}}<\infty\right\},\]
with
\[\|u\|_{\mathscr{Y}}:=\|\widehat{u}\|_{L_{\xi}^{1}L_{t}^{\infty}}+\|\xi|\widehat {u}\|_{L_{\xi}^{1}L_{t}^{2}}+\|\widehat{u}\|_{L_{\xi}^{p}L_{t}^{\infty}}+\|| \xi|\widehat{u}\|_{L_{\xi}^{p}L_{t}^{2}}.\]
Let \(u_{0}\in\mathcal{F}_{x}^{-1}(L_{\xi}^{1}\cap L_{\xi}^{p})\) satisfy
\[\|\widehat{u}_{0}\|_{L_{\xi}^{1}}+\|\widehat{u}_{0}\|_{L_{\xi}^{p}}\leq\eta_{ 1},\]
and consider the map \(\Phi:\mathscr{Y}\to\mathscr{Y}\), \(u\mapsto\Phi[u]\) defined by (5.4), in particular (5.5) is verified for all \(\xi\in\mathbf{R}^{3}\).
For the first term in (5.5) we have from Proposition 5.1 that
\[\|\widehat{V}(t,\xi)\widehat{u}_{0}(\xi)\|_{\mathscr{Y}}\leq C_{0}(\|\widehat {u}_{0}\|_{L_{\xi}^{1}}+\|\widehat{u}_{0}\|_{L_{\xi}^{p}}).\]
Furthermore, by Proposition 5.3 we have
\[\left\|\int_{0}^{t}\widehat{V}(t-s,\xi)\widehat{Q}_{\mathrm{NS}}( u(s),u(s))(\xi)\,\mathrm{d}s\right\|_{\mathscr{Y}} \lesssim\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u,u)\|_{L_{\xi}^{1}L _{t}^{2}}+\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u,u)\|_{L_{\xi}^{p}L_{t}^{2}}\] \[\lesssim\|\widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}\left(\|\widehat{u} \|_{L_{\xi}^{1}L_{t}^{\infty}}+\|\widehat{u}\|_{L_{\xi}^{p}L_{t}^{\infty}} \right).\]
Here we have used Lemma 5.4. We now observe that
\[\|\widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}\lesssim\|\mathbf{1}_{|\xi|\geq 1}\widehat {u}\|_{L_{\xi}^{1}L_{t}^{2}}+\|\mathbf{1}_{|\xi|<1}\widehat{u}\|_{L_{\xi}^{1} L_{t}^{2}},\]
and for the first term we easily have
\[\|\mathbf{1}_{|\xi|\geq 1}\widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}\lesssim\||\xi| \widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}.\]
For the second term we use Hölder's inequality to obtain
\[\|\mathbf{1}_{|\xi|<1}\widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}\lesssim\|\mathbf{1 }_{|\xi|<1}|\xi|^{-1}\|_{L_{\xi}^{p^{\prime}}}\|\mathbf{1}_{|\xi|<1}|\xi| \widehat{u}\|_{L_{\xi}^{p}L_{t}^{2}}\lesssim\||\xi|\widehat{u}\|_{L_{\xi}^{p} L_{t}^{2}},\]
where we have used that \(\|\mathbf{1}_{|\xi|<1}|\xi|^{-1}\|_{L_{\xi}^{p^{\prime}}}<\infty\) since \(p>3/2\). Therefore we get
\[\|\widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}\lesssim\||\xi|\widehat{u}\|_{L_{\xi}^{1 }L_{t}^{2}}+\||\xi|\widehat{u}\|_{L_{\xi}^{p}L_{t}^{2}}. \tag{5.6}\]
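For completeness, the finiteness of \(\|\mathbf{1}_{|\xi|<1}|\xi|^{-1}\|_{L_{\xi}^{p^{\prime}}}\) used above can be checked directly: since \(p>3/2\) we have \(p^{\prime}<3\) (with \(p^{\prime}=1\) when \(p=\infty\)), so that in dimension three
\[\|\mathbf{1}_{|\xi|<1}|\xi|^{-1}\|_{L_{\xi}^{p^{\prime}}}^{p^{\prime}}=\int_{|\xi|<1}|\xi|^{-p^{\prime}}\,\mathrm{d}\xi=4\pi\int_{0}^{1}r^{2-p^{\prime}}\,\mathrm{d}r=\frac{4\pi}{3-p^{\prime}}<\infty.\]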
Gathering previous estimates, we have hence obtained
\[\|\Phi[u]\|_{\mathscr{Y}}\leq C_{0}\left(\|\widehat{u}_{0}\|_{L_{\xi}^{1}}+\| \widehat{u}_{0}\|_{L_{\xi}^{p}}\right)+C_{1}\|u\|_{\mathscr{Y}}^{2}.\]
Moreover for \(u,v\in\mathscr{Y}\) we can also compute, using again Proposition 5.3 and Lemma 5.4, that
\[\left\|\int_{0}^{t}\widehat{V}(t-s,\xi)\widehat{Q}_{\mathrm{NS}}((u-v)(s),v(s))(\xi)\,\mathrm{d}s\right\|_{\mathscr{Y}}+\left\|\int_{0}^{t}\widehat{V}(t-s,\xi)\widehat{Q}_{\mathrm{NS}}(u(s),(u-v)(s))(\xi)\,\mathrm{d}s\right\|_{\mathscr{Y}}\] \[\lesssim\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u-v,v)\|_{L_{\xi}^{1}L_{t}^{2}}+\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u,u-v)\|_{L_{\xi}^{1}L_{t}^{2}}\] \[\quad+\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u-v,v)\|_{L_{\xi}^{p}L_{t}^{2}}+\||\xi|^{-1}\widehat{Q}_{\mathrm{NS}}(u,u-v)\|_{L_{\xi}^{p}L_{t}^{2}}\] \[\lesssim\|\widehat{u}-\widehat{v}\|_{L_{\xi}^{1}L_{t}^{\infty}}\|\widehat{v}\|_{L_{\xi}^{1}L_{t}^{2}}+\|\widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}\|\widehat{u}-\widehat{v}\|_{L_{\xi}^{1}L_{t}^{\infty}}+\|\widehat{u}-\widehat{v}\|_{L_{\xi}^{p}L_{t}^{\infty}}\|\widehat{v}\|_{L_{\xi}^{1}L_{t}^{2}}+\|\widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}\|\widehat{u}-\widehat{v}\|_{L_{\xi}^{p}L_{t}^{\infty}}\] \[\lesssim\left(\|\widehat{u}\|_{L_{\xi}^{1}L_{t}^{2}}+\|\widehat{v}\|_{L_{\xi}^{1}L_{t}^{2}}\right)\left(\|\widehat{u}-\widehat{v}\|_{L_{\xi}^{1}L_{t}^{\infty}}+\|\widehat{u}-\widehat{v}\|_{L_{\xi}^{p}L_{t}^{\infty}}\right).\]
Using inequality (5.6) we therefore get, for some constant \(C_{1}>0\),
\[\|\Phi[u]-\Phi[v]\|_{\mathscr{Y}}\leq C_{1}\left(\|u\|_{\mathscr{Y}}+\|v\|_{ \mathscr{Y}}\right)\|u-v\|_{\mathscr{Y}}.\]
Gathering these two inequalities together, the proof of Theorem 2.2-(2) is completed by arguing as in Section 5.1 above.
## 6. Hydrodynamic limit
Recalling that the semigroup \(U^{\varepsilon}\) is defined in (3.8), and also \(\widehat{U}^{\varepsilon}\) in (3.7), we also define, for all \(t\geq 0\),
\[\Psi^{\varepsilon}[f,g](t)=\frac{1}{\varepsilon}\int_{0}^{t}U^{\varepsilon}(t- s)\Gamma(f(s),g(s))\,\mathrm{d}s, \tag{6.1}\]
as well as its Fourier transform in space, for all \(\xi\in\Omega^{\prime}_{\xi}\),
\[\widehat{\Psi}^{\varepsilon}[f,g](t,\xi)=\frac{1}{\varepsilon}\int_{0}^{t} \widehat{U}^{\varepsilon}(t-s,\xi)\widehat{\Gamma}(f(s),g(s))(\xi)\,\mathrm{d}s. \tag{6.2}\]
### Estimates on \(\widehat{U}^{\varepsilon}\)
We denote by \(\chi\) a fixed function satisfying \(0\leq\chi\leq 1\), compactly supported in \(B_{1}\) and equal to one on \(B_{\frac{1}{2}}\), where \(B_{R}\) is the ball of radius \(R\) centered at zero.
Arguing as in [13, 39] but using the spectral estimates of [74, 75] for the non-cutoff Boltzmann equation, we then have:
**Lemma 6.1**.: _There exists \(\kappa>0\) such that one can write_
\[U^{\varepsilon}(t)=\sum_{j=1}^{4}U^{\varepsilon}_{j}(t)+U^{\varepsilon\#}(t),\]
_with_
\[\widehat{U}^{\varepsilon}_{j}(t,\xi):=\widehat{U}_{j}(\frac{t}{\varepsilon^{ 2}},\varepsilon\xi),\quad\widehat{U}^{\varepsilon\#}(t,\xi)=\widehat{U}^{\# }(\frac{t}{\varepsilon^{2}},\varepsilon\xi),\]
_where we have the following properties:_
(1) _For_ \(1\leq j\leq 4\)_,_
\[\widehat{U}_{j}(t,\xi)=\chi(\frac{|\xi|}{\kappa})e^{t\lambda_{j}(\xi)}P_{j}( \xi),\]
_with_ \(\lambda_{j}\) _satisfying_
\[\lambda_{j}(\xi)=i\alpha_{j}(\xi)-\beta_{j}|\xi|^{2}+\gamma_{j}(|\xi|),\]
_with_
\[\alpha_{1}>0,\quad\alpha_{2}<0,\quad\alpha_{3}=\alpha_{4}=0,\quad\beta_{j}>0,\]
_and_
\[\gamma_{j}(|\xi|)=O(|\xi|^{3})\quad\text{as}\quad\xi\to 0,\qquad\gamma_{j}(|\xi|)\leq\beta_{j}|\xi|^{2}/2,\quad\forall\,|\xi|\leq\kappa,\]
_as well as_
\[P_{j}(\xi)=P_{j}^{0}(\frac{\xi}{|\xi|})+|\xi|P_{j}^{1}(\frac{\xi}{|\xi|})+| \xi|^{2}P_{j}^{2}(\xi),\]
_with \(P_{j}^{n}\) bounded linear operators on \(L^{2}_{v}\) with operator norms uniformly bounded for \(|\xi|\leq\kappa\)._
(2) _We also have that the orthogonal projector \(\mathbf{P}\) onto \(\operatorname{Ker}L\) satisfies_
\[\mathbf{P}=\sum_{j=1}^{4}P_{j}^{0}(\frac{\xi}{|\xi|}).\]
_Moreover \(P_{j}^{0}(\frac{\xi}{|\xi|})\), \(P_{j}^{1}(\frac{\xi}{|\xi|})\) and \(P_{j}^{2}(\xi)\) are bounded from \(L^{2}_{v}\) to \(L^{2}(\langle v\rangle^{l})\) uniformly in \(|\xi|\leq\kappa\) for all \(l\geq 0\)._
(3) _In the hard potentials case \(\gamma+2s\geq 0\), for all \(t\geq 0\) and all \(\xi\in\mathbf{R}^{3}\) there holds, for any \(\ell\geq 0\),_
\[\|\widehat{U}^{\varepsilon\#}(t,\xi)\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v \rangle^{\ell})}\leq Ce^{-\lambda_{0}\frac{t}{\varepsilon^{2}}}\|\widehat{f}( \xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}, \tag{6.3}\]
_for any \(f\) satisfying moreover (1.11) in the torus case, where \(\lambda_{0},C>0\) are independent of \(t,\xi,\varepsilon\)._
(4) _In the soft potential case \(\gamma+2s<0\), for all \(t\geq 0\) and all \(\xi\in\mathbf{R}^{3}\) there holds, for any \(k,\ell\geq 0\),_
\[\|\widehat{U}^{\varepsilon\#}(t,\xi)\mathbf{P}^{\perp}\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{k})}\leq C\left(1+\frac{t}{\varepsilon^{2}}\right)^{-\frac{\ell}{|\gamma+2s|}}\|\widehat{f}(\xi)\|_{L^{2}_{v}(\langle v\rangle^{k+\ell})}, \tag{6.4}\]
_for any \(f\) satisfying moreover (1.11) in the torus case, where \(C>0\) is independent of \(t,\xi,\varepsilon\)._
Proof.: The proof is the same as in [24, Lemma 5.10]. For the soft potentials case, we need to replace the use of [74, Theorem 3.2 and Remark 5.2] in the proof by [75, Theorem 1.1 and Section 4], in particular the decay estimate (6.4) comes from [75, Equation (2.46)] and the fact that \(B_{0}(\xi)\mathbf{P}^{\perp}=B(\xi)\mathbf{P}^{\perp}\), where \(B_{0}(\xi)\) and \(B(\xi)\) are defined in [75, Equation (1.18)] and satisfy \(B_{0}(\xi)=B(\xi)-\mathbf{P}\).
Denoting
\[\widetilde{P}_{j}\left(\xi,\frac{\xi}{|\xi|}\right):=P^{1}_{j}\left(\frac{\xi}{|\xi|}\right)+|\xi|\,P^{2}_{j}(\xi),\]
for \(1\leq j\leq 4\), we can further split \(\widehat{U}^{\varepsilon}_{j}\) into four parts (one main part and three remainder terms):
\[U^{\varepsilon}_{j}=U^{\varepsilon}_{j0}+U^{\varepsilon\#}_{j0}+U^{\varepsilon }_{j1}+U^{\varepsilon}_{j2}, \tag{6.5}\]
where
\[\widehat{U}^{\varepsilon}_{j0}(t,\xi) =e^{i\alpha_{j}|\xi|\frac{t}{\varepsilon}-\beta_{j}t|\xi|^{2}}P^{0}_{j}\left(\frac{\xi}{|\xi|}\right),\] \[\widehat{U}^{\varepsilon\#}_{j0}(t,\xi) =\left(\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)-1\right)e^{i\alpha_{j}|\xi|\frac{t}{\varepsilon}-\beta_{j}t|\xi|^{2}}P^{0}_{j}\left(\frac{\xi}{|\xi|}\right),\] \[\widehat{U}^{\varepsilon}_{j1}(t,\xi) =\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)e^{i\alpha_{j}|\xi|\frac{t}{\varepsilon}-\beta_{j}t|\xi|^{2}}\left(e^{t\frac{\gamma_{j}(\varepsilon|\xi|)}{\varepsilon^{2}}}-1\right)P^{0}_{j}\left(\frac{\xi}{|\xi|}\right),\] \[\widehat{U}^{\varepsilon}_{j2}(t,\xi) =\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)e^{i\alpha_{j}|\xi|\frac{t}{\varepsilon}-\beta_{j}t|\xi|^{2}}e^{t\frac{\gamma_{j}(\varepsilon|\xi|)}{\varepsilon^{2}}}\varepsilon|\xi|\widetilde{P}_{j}\left(\varepsilon\xi,\frac{\xi}{|\xi|}\right).\]
In particular we observe that \(\widehat{U}^{\varepsilon}_{30}\) and \(\widehat{U}^{\varepsilon}_{40}\) are independent of \(\varepsilon\), so that we define
\[\widehat{U}(t,\xi):=\widehat{U}^{\varepsilon}_{30}(t,\xi)+\widehat{U}^{ \varepsilon}_{40}(t,\xi), \tag{6.6}\]
which is then independent of \(\varepsilon\). We finally define
\[U(t)=\mathcal{F}^{-1}_{x}\widehat{U}(t)\mathcal{F}_{x}. \tag{6.7}\]
**Lemma 6.2**.: _([39], Proposition A.3) We have that \(U(0)\) is the projection on the subset of \(\operatorname{Ker}L\) consisting of functions \(f\) satisfying \(\operatorname{div}u_{f}=0\) and also \(\rho_{f}+\theta_{f}=0\). We also have_
\[U(t)f=U(t)U(0)f,\quad\forall t\geq 0,\]
_and_
\[\operatorname{div}u_{f}=0\quad\text{and}\quad\rho_{f}+\theta_{f}=0\quad \Rightarrow\quad P^{0}_{j}(\frac{\xi}{|\xi|})f=0,\quad j=1,2.\]
The following lemma studies the limit of \(U^{\varepsilon}(t)\) as \(\varepsilon\) goes to \(0\).
**Lemma 6.3**.: _Let \(f=f(x,v)\in\operatorname{Ker}L\) then we have_
\[\|(U^{\varepsilon}(\cdot)-U(\cdot))f\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\lesssim\|f\|_{L^{1}_{\xi}L^{2}_{v}},\]
_and_
\[\|(U^{\varepsilon}(\cdot)-U(\cdot))f\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\lesssim\varepsilon\||\xi|f\|_{L^{1}_{\xi}L^{2}_{v}}.\]
Proof.: The proof follows the idea of [39, Lemma 3.5], that we shall adapt since we work in different functional spaces.
First of all we observe that from the decomposition of \(U^{\varepsilon}\) in (6.5) we can write, for all \(t\geq 0\) and \(\xi\in\Omega_{\xi}^{\prime}\),
\[\widehat{U}^{\varepsilon}(t,\xi)\widehat{f}(\xi)-\widehat{U}(t,\xi)\widehat{f}(\xi) =\sum_{j=1}^{4}\left(\widehat{U}_{j0}^{\varepsilon\#}(t,\xi)\widehat{f}(\xi)+\widehat{U}_{j1}^{\varepsilon}(t,\xi)\widehat{f}(\xi)+\widehat{U}_{j2}^{\varepsilon}(t,\xi)\widehat{f}(\xi)\right)\] \[\quad+\sum_{j=1}^{2}\widehat{U}_{j0}^{\varepsilon}(t,\xi)\widehat{f}(\xi)+\widehat{U}^{\varepsilon\#}(t,\xi)\widehat{f}(\xi),\]
and we shall estimate each term separately below.
We first compute the term \(U_{jm}^{\varepsilon}(t)f\) for \(j=1,2,3,4\) and \(m=1,2\). For the \(U_{j1}\) term, using Lemma 6.1 together with the inequality \(|e^{a}-1|\leq|a|e^{|a|}\) for any \(a\in\mathbf{R}\), we have
\[\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)e^{-\beta_{j}t|\xi|^{2}}\left| e^{\frac{t\gamma_{j}(\varepsilon|\xi|)}{\varepsilon^{2}}}-1\right|\leq\chi \left(\frac{\varepsilon|\xi|}{\kappa}\right)e^{-\frac{\beta_{j}}{2}t|\xi|^{2 }}t\varepsilon|\xi|^{3}\lesssim\chi\left(\frac{\varepsilon|\xi|}{\kappa} \right)\varepsilon|\xi|\lesssim\min\{1,\varepsilon|\xi|\}. \tag{6.8}\]
Then we can compute, for all \(t\geq 0\) and \(\xi\in\mathbf{R}^{3}\),
\[\|\widehat{U}_{j1}^{\varepsilon}(t,\xi)\widehat{f}(\xi)\|_{L_{v}^{2}}\leq \chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)\left|e^{i\alpha_{j}|\xi|\frac{t}{\varepsilon}-\beta_{j}t|\xi|^{2}}\right|\left|e^{\frac{t\gamma_{j}(\varepsilon|\xi|)}{\varepsilon^{2}}}-1\right|\,\|P_{j}^{0}(\tfrac{\xi}{|\xi|})\widehat{f}(\xi)\|_{L_{v}^{2}}\] \[\lesssim \min\{1,\varepsilon|\xi|\}\|\widehat{f}(\xi)\|_{L_{v}^{2}}.\]
For the \(\widehat{U}_{j2}^{\varepsilon}(t,\xi)\) term we have
\[\|\widehat{U}_{j2}^{\varepsilon}(t,\xi)\widehat{f}(\xi)\|_{L_{v}^{2}}\leq \chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)\left|e^{i\alpha_{j}|\xi|\frac{t}{\varepsilon}-\beta_{j}t|\xi|^{2}}e^{\frac{t\gamma_{j}(\varepsilon|\xi|)}{\varepsilon^{2}}}\right|\varepsilon|\xi|\,\|\widetilde{P}_{j}(\varepsilon\xi,\tfrac{\xi}{|\xi|})\widehat{f}(\xi)\|_{L_{v}^{2}}\] \[\lesssim \min\{1,\varepsilon|\xi|\}\|\widehat{f}(\xi)\|_{L_{v}^{2}}.\]
For the term \(\widehat{U}_{j0}^{\varepsilon\#}(t,\xi)\), using the fact that
\[\left|\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)-1\right|\lesssim\min\{1,\varepsilon|\xi|\}, \tag{6.9}\]
we have
\[\|\widehat{U}_{j0}^{\varepsilon\#}(t,\xi)\widehat{f}(\xi)\|_{L_{ \text{v}}^{2}}\leq \left(\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)-1\right) \left|e^{i\alpha_{j}|\xi|\frac{t}{\varepsilon}-\beta_{j}t|\xi|^{2}}\right| \|P_{j}^{0}(\frac{\xi}{|\xi|})\widehat{f}(\xi)\|_{L_{\text{v}}^{2}}\] \[\lesssim \min\{1,\varepsilon|\xi|\}\|\widehat{f}(\xi)\|_{L_{\text{v}}^{2}}.\]
Taking the \(L_{\xi}^{1}L_{t}^{\infty}\) norm on both sides yields, for all \(j=1,2,3,4\),
\[\|\widehat{U}_{j1}^{\varepsilon}(\cdot)\widehat{f}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{U}_{j2}^{\varepsilon}(\cdot)\widehat{f}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{U}_{j0}^{\varepsilon\#}(\cdot)\widehat{f}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\lesssim\min\{\|\widehat{f}\|_{L_{\xi}^{1}L_{v}^{2}},\varepsilon\||\xi|\widehat{f}\|_{L_{\xi}^{1}L_{v}^{2}}\}.\]
By Lemma 6.2 we have that, if \(f\in\operatorname{Ker}L\) is well-prepared, then
\[U_{10}^{\varepsilon}f+U_{20}^{\varepsilon}f=0.\]
Finally we compute the term \(U^{\varepsilon\#}(t,\xi)\), noticing that
\[\widehat{U}^{\varepsilon\#}(t,\xi)\widehat{f}(\xi,v)=\widehat{U}^{\varepsilon} (t,\xi)\widehat{U}^{\varepsilon\#}(0,\xi)\widehat{f}(\xi,v)=\widehat{U}^{ \varepsilon}(t,\xi)\left(1-\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right) \sum_{j=1}^{4}P_{j}(\varepsilon\xi)\right)\widehat{f}(\xi,v).\]
Since \(f\) belongs to \(\operatorname{Ker}L\), we have
\[\widehat{U}^{\varepsilon\#}(t,\xi)\widehat{f}(\xi,v)=\widehat{U}^{\varepsilon} (t,\xi)\left(1-\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)-\varepsilon| \xi|\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)\sum_{j=1}^{4}\tilde{P}_ {j}(\varepsilon\xi)\right)\widehat{f}(\xi,v).\]
By Proposition 3.2 we have
\[\|\widehat{U}^{\varepsilon\#}(\cdot)\widehat{f}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\lesssim \|\big{(}1-\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)-\varepsilon|\xi|\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)\sum_{j=1}^{4}\tilde{P}_{j}(\varepsilon\xi)\big{)}\widehat{f}(\xi)\|_{L_{\xi}^{1}L_{v}^{2}}\] \[\lesssim \min\{\|\widehat{f}\|_{L_{\xi}^{1}L_{v}^{2}},\varepsilon\||\xi|\widehat{f}\|_{L_{\xi}^{1}L_{v}^{2}}\},\]
thus the proof is finished by gathering together the two previous estimates.
### Estimates on \(\widehat{\Psi}^{\varepsilon}\)
The decomposition of the semigroup \(U^{\varepsilon}(t)\) in (6.5) also gives us a decomposition of the operator \(\Psi^{\varepsilon}(t)\) defined in (6.1).
**Lemma 6.4**.: _The following decomposition holds_
\[\Psi^{\varepsilon}=\sum_{j=1}^{4}\Psi^{\varepsilon}_{j}+\Psi^{\varepsilon\#},\]
_with_
\[\widehat{\Psi}^{\varepsilon\#}[f_{1},f_{2}](t,\xi):=\frac{1}{\varepsilon} \int_{0}^{t}\widehat{U}^{\varepsilon\#}(t-s,\xi)\widehat{\Gamma}(f_{1}(s),f_{ 2}(s))(\xi)\,\mathrm{d}s,\]
_and, for all \(1\leq j\leq 4\),_
\[\Psi^{\varepsilon}_{j}=\Psi^{\varepsilon}_{j0}+\Psi^{\varepsilon\#}_{j0}+\Psi ^{\varepsilon}_{j1}+\Psi^{\varepsilon}_{j2},\]
_where_
\[\widehat{\Psi}^{\varepsilon}_{j0}[f_{1},f_{2}](t,\xi) =\int_{0}^{t}e^{i\alpha_{j}|\xi|\frac{t-s}{\varepsilon}-\beta_{j }(t-s)|\xi|^{2}}|\xi|P^{1}_{j}(\tfrac{\xi}{|\xi|})\widehat{\Gamma}(f_{1}(s),f _{2}(s))(\xi)\,\mathrm{d}s,\] \[\widehat{\Psi}^{\varepsilon\#}_{j0}[f_{1},f_{2}](t,\xi) =\left(\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)-1\right) \int_{0}^{t}e^{i\alpha_{j}|\xi|\frac{t-s}{\varepsilon}-\beta_{j}(t-s)|\xi|^{2 }}|\xi|P^{1}_{j}(\tfrac{\xi}{|\xi|})\widehat{\Gamma}(f_{1}(s),f_{2}(s))(\xi) \,\mathrm{d}s,\] \[\widehat{\Psi}^{\varepsilon}_{j1}[f_{1},f_{2}](t,\xi) =\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)\int_{0}^{t}e^{i \alpha_{j}|\xi|\frac{t-s}{\varepsilon}-\beta_{j}(t-s)|\xi|^{2}}\left(e^{(t-s) \frac{\gamma_{j}(\varepsilon|\xi|)}{\varepsilon^{2}}}-1\right)|\xi|P^{1}_{j}( \tfrac{\xi}{|\xi|})\widehat{\Gamma}(f_{1}(s),f_{2}(s))(\xi)\,\mathrm{d}s,\] \[\widehat{\Psi}^{\varepsilon}_{j2}[f_{1},f_{2}](t,\xi) =\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)\int_{0}^{t}e^{i \alpha_{j}|\xi|\frac{t-s}{\varepsilon}-\beta_{j}(t-s)|\xi|^{2}}e^{(t-s)\frac{ \gamma_{j}(\varepsilon|\xi|)}{\varepsilon^{2}}}\varepsilon|\xi|^{2}P^{2}_{j}( \varepsilon\xi)\widehat{\Gamma}(f_{1}(s),f_{2}(s))(\xi)\,\mathrm{d}s.\]
Similarly as above, we observe that \(\widehat{\Psi}^{\varepsilon}_{30}\) and \(\widehat{\Psi}^{\varepsilon}_{40}\) are independent of \(\varepsilon\), so that we define
\[\widehat{\Psi}[f,g](t,\xi):=\widehat{\Psi}^{\varepsilon}_{30}[f,g](t,\xi)+ \widehat{\Psi}^{\varepsilon}_{40}[f,g](t,\xi) \tag{6.10}\]
which is then independent of \(\varepsilon\). We finally define
\[\Psi[f,g](t)=\mathcal{F}^{-1}_{x}\widehat{\Psi}[f,g](t)\mathcal{F}_{x}. \tag{6.11}\]
We are now able to prove the following result on the convergence of \(\Psi^{\varepsilon}\) towards \(\Psi\).
**Lemma 6.5**.: _Let \((\rho_{0},u_{0},\theta_{0})\) satisfy the hypotheses of Theorem 2.2 and consider the associated global unique solution \((\rho,u,\theta)\) to (1.14). Let also \(g_{0}=g_{0}(x,v)\in\mathrm{Ker}\,L\) be defined by (2.15) and \(g=g(t,x,v)\in\mathrm{Ker}\,L\) by (2.16). Then we have:_
\((1)\) _Torus case \(\Omega_{x}=\mathbf{T}^{3}\): There holds_
\[\|\Psi^{\varepsilon}[g,g]-\Psi[g,g]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\lesssim\varepsilon\left(\|\widehat{g}_{0}\|^{2}_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{g}_{0}\|^{3}_{L^{1}_{\xi}L^{2}_{v}}\right).\]
\((2)\) _Whole space case \(\Omega_{x}=\mathbf{R}^{3}\): For any \(p\in(3/2,\infty]\) there holds_
\[\|\Psi^{\varepsilon}[g,g]-\Psi[g,g]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\lesssim\varepsilon\left(\|\widehat{g}_{0}\|^{2}_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{g}_{0}\|^{3}_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{g}_{0}\|^{2}_{L^{p}_{\xi}L^{2}_{v}}+\|\widehat{g}_{0}\|^{3}_{L^{p}_{\xi}L^{2}_{v}}\right).\]
Proof.: We adapt the proof of [39, Lemma 4.1] for the cutoff Boltzmann equation with hard potentials. Thanks to the decomposition of \(\Psi^{\varepsilon}\) in Lemma 6.4 we write, for all \(t\geq 0\) and \(\xi\in\Omega^{\prime}_{\xi}\),
\[\widehat{\Psi}^{\varepsilon}[g,g](t,\xi)-\widehat{\Psi}[g,g](t,\xi) =\sum_{j=1}^{4}\left(\widehat{\Psi}^{\varepsilon\#}_{j0}[g,g](t,\xi)+\widehat{\Psi}^{\varepsilon}_{j1}[g,g](t,\xi)+\widehat{\Psi}^{\varepsilon}_{j2}[g,g](t,\xi)\right)\] \[\quad+\sum_{j=1}^{2}\widehat{\Psi}^{\varepsilon}_{j0}[g,g](t,\xi)+\widehat{\Psi}^{\varepsilon\#}[g,g](t,\xi).\]
We remark that for the zero frequency \(\xi=0\) we have
\[\widehat{\Psi}^{\varepsilon}[g,g](t,0)=\widehat{\Psi}^{\varepsilon\#}[g,g](t,0).\]
We split the proof into several steps and estimate each term separately below.
_Step 1._ By Lemma 6.4, (6.9) and the Minkowski inequality, for the term \(\widehat{\Psi}_{j0}^{\varepsilon\#}[g,g]\) with \(j=1,2,3,4\), for all \(t\geq 0\) and all \(\xi\in\Omega_{\xi}^{\prime}\setminus\{0\}\) we have
\[\left\|\widehat{\Psi}_{j0}^{\varepsilon\#}[g,g](t,\xi)\right\|_{L_{v}^{2}} \lesssim\left|\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)-1\right|\int_{0}^{t}e^{-\beta_{j}(t-s)|\xi|^{2}}|\xi|\left\|P_{j}^{1}(\tfrac{\xi}{|\xi|})\widehat{\Gamma}(g(s),g(s))(\xi)\right\|_{L_{v}^{2}}\mathrm{d}s\] \[\lesssim\varepsilon\int_{0}^{t}e^{-\beta_{j}(t-s)|\xi|^{2}}|\xi|^{2}\|\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L_{v}^{2}}\;\mathrm{d}s\] \[\lesssim\varepsilon\|\widehat{\Gamma}(g,g)(\xi)\|_{L_{t}^{\infty}L_{v}^{2}}.\]
Similarly for the term \(\widehat{\Psi}_{j1}^{\varepsilon}[g,g]\), by Lemma 6.4, (6.8) and the Minkowski inequality we have, for all \(j=1,2,3,4\),
\[\left\|\widehat{\Psi}_{j1}^{\varepsilon}[g,g](t,\xi)\right\|_{L_{v}^{2}} \lesssim\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)\int_{0}^{t}e^{-\beta_{j}(t-s)|\xi|^{2}}\left|e^{(t-s)\frac{\gamma_{j}(\varepsilon|\xi|)}{\varepsilon^{2}}}-1\right||\xi|\left\|P_{j}^{1}(\tfrac{\xi}{|\xi|})\widehat{\Gamma}(g(s),g(s))(\xi)\right\|_{L_{v}^{2}}\mathrm{d}s\] \[\lesssim\varepsilon\int_{0}^{t}e^{-\frac{\beta_{j}}{2}(t-s)|\xi|^{2}}|\xi|^{2}\|\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L_{v}^{2}}\;\mathrm{d}s\] \[\lesssim\varepsilon\|\widehat{\Gamma}(g,g)(\xi)\|_{L_{t}^{\infty}L_{v}^{2}}.\]
Similarly for the term \(\widehat{\Psi}_{j2}^{\varepsilon}[g,g]\), by Lemma 6.4 and Minkowski inequality we have, for all \(j=1,2,3,4\),
\[\left\|\widehat{\Psi}_{j2}^{\varepsilon}[g,g](t,\xi)\right\|_{L _{v}^{2}} \lesssim\chi\left(\frac{\varepsilon|\xi|}{\kappa}\right)\int_{0}^ {t}e^{-\beta_{j}(t-s)|\xi|^{2}}\left|e^{(t-s)\frac{\gamma_{j}(\varepsilon|\xi| )}{\varepsilon^{2}}}\right|\varepsilon|\xi|^{2}\left\|P_{j}^{2}(\varepsilon \xi)\widehat{\Gamma}(g(s),g(s))\right\|_{L_{v}^{2}}\mathrm{d}s\] \[\lesssim\varepsilon\int_{0}^{t}e^{-\frac{\beta_{j}}{2}(t-s)|\xi| ^{2}}|\xi|^{2}\|\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L_{v}^{2}}\;\mathrm{d}s\] \[\lesssim\varepsilon\|\widehat{\Gamma}(g,g)(\xi)\|_{L_{t}^{\infty }L_{v}^{2}}.\]
Taking the \(L_{\xi}^{1}L_{t}^{\infty}\) norm on both sides and using Young's inequality for convolutions we have, for all \(j=1,2,3,4\),
\[\|\widehat{\Psi}_{j0}^{\varepsilon\#}[g,g]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{\Psi}_{j1}^{\varepsilon}[g,g]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{\Psi}_{j2}^{\varepsilon}[g,g]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\lesssim\varepsilon\|\widehat{\Gamma}(g,g)\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}.\]
Thanks to [68] and the fact that \(\|\langle v\rangle^{\ell}\mathbf{P}\phi\|_{H_{v}^{m}}\lesssim\|\mathbf{P}\phi\|_{L_{v}^{2}}\) for all \(m,\ell\geq 0\), we have
\[\|\Gamma(\mathbf{P}g_{1},\mathbf{P}g_{2})\|_{L_{v}^{2}}\lesssim\|\mathbf{P}g_ {1}\|_{L_{v}^{2}}\|\mathbf{P}g_{2}\|_{L_{v}^{2}}, \tag{6.12}\]
therefore arguing as in Lemma 4.1 it follows, for any \(p\in[1,\infty]\) and \(\ell\geq 0\),
\[\|\widehat{\Gamma}(g,g)\|_{L_{\xi}^{p}L_{t}^{\infty}L_{v}^{2}(\langle v\rangle^{\ell})}\lesssim\|\widehat{g}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\|\widehat{g}\|_{L_{\xi}^{p}L_{t}^{\infty}L_{v}^{2}}. \tag{6.13}\]
We therefore obtain, for all \(j=1,2,3,4\),
\[\|\widehat{\Psi}_{j0}^{\varepsilon\#}[g,g]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{\Psi}_{j1}^{\varepsilon}[g,g]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{\Psi}_{j2}^{\varepsilon}[g,g]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\lesssim\varepsilon\|\widehat{g}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}^{2}. \tag{6.14}\]
_Step 2._ We now focus on the term \(\widehat{\Psi}_{j0}^{\varepsilon}[g,g]\) with \(j=1,2\), and recall that \(\alpha_{j}\neq 0\) for \(j=1,2\). We denote
\[\widehat{H_{j}}(t,s,\xi)=e^{-\beta_{j}(t-s)|\xi|^{2}}|\xi|P_{j}^{1}(\tfrac{\xi} {|\xi|})\widehat{\Gamma}(g(s),g(s))(\xi),\]
and thus, using integration by parts, for all \(t\geq 0\) and all \(\xi\in\Omega^{\prime}_{\xi}\setminus\{0\}\) we have
\[\widehat{\Psi}^{\varepsilon}_{j0}[g,g](t,\xi) =\int_{0}^{t}e^{i\alpha_{j}|\xi|\frac{t-s}{\varepsilon}-\beta_{j} (t-s)|\xi|^{2}}|\xi|P^{1}_{j}(\frac{\xi}{|\xi|})\widehat{\Gamma}(g(s),g(s))( \xi)\,\mathrm{d}s\] \[=\frac{\varepsilon}{i\alpha_{j}|\xi|}\left(\int_{0}^{t}e^{i \alpha_{j}|\xi|\frac{t-s}{\varepsilon}}\partial_{t}\widehat{H}_{j}(t,s,\xi)\, \mathrm{d}s-\widehat{H}_{j}(t,t,\xi)+e^{i\alpha_{j}|\xi|\frac{t}{\varepsilon}} \widehat{H}_{j}(t,0,\xi)\right)\] \[=\frac{\varepsilon}{i\alpha_{j}}\left(\int_{0}^{t}e^{i\alpha_{j} |\xi|\frac{t-s}{\varepsilon}}\beta_{j}|\xi|^{2}e^{-\beta_{j}(t-s)|\xi|^{2}}P^ {1}_{j}(\frac{\xi}{|\xi|})\widehat{\Gamma}(g(s),g(s))(\xi)\,\mathrm{d}s\right)\] \[\quad+\frac{\varepsilon}{i\alpha_{j}}\left(\int_{0}^{t}e^{i \alpha_{j}|\xi|\frac{t-s}{\varepsilon}}e^{-\beta_{j}(t-s)|\xi|^{2}}P^{1}_{j}( \frac{\xi}{|\xi|})\partial_{t}\widehat{\Gamma}(g(s),g(s))(\xi)\,\mathrm{d}s\right)\] \[\quad-\frac{\varepsilon}{i\alpha_{j}}P^{1}_{j}(\frac{\xi}{|\xi|}) \widehat{\Gamma}(g(t),g(t))(\xi)\] \[\quad+\frac{\varepsilon}{i\alpha_{j}}e^{i\alpha_{j}|\xi|\frac{t}{ \varepsilon}}e^{-\beta_{j}t|\xi|^{2}}P^{1}_{j}(\frac{\xi}{|\xi|})\widehat{ \Gamma}(g(0),g(0))(\xi)\] \[=:I_{1}(t,\xi)+I_{2}(t,\xi)+I_{3}(t,\xi)+I_{4}(t,\xi). \tag{6.15}\]
For the first term in (6.15) we have for all \(t\geq 0\) and \(\xi\in\Omega^{\prime}_{\xi}\), using Lemma 6.4,
\[\|I_{1}(t,\xi)\|_{L^{2}_{v}} \lesssim\varepsilon\int_{0}^{t}\beta_{j}|\xi|^{2}e^{-\beta_{j}(t- s)|\xi|^{2}}\|\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L^{2}_{v}}\,\mathrm{d}s\] \[\lesssim\varepsilon\|\widehat{\Gamma}(g,g)(\xi)\|_{L^{\infty}_{t} L^{2}_{v}}.\]
Similarly, for the third term in (6.15) there holds
\[\|I_{3}(t,\xi)\|_{L^{2}_{v}} \lesssim\varepsilon\|\widehat{\Gamma}(g(t),g(t))(\xi)\|_{L^{2}_{ v}}\] \[\lesssim\varepsilon\|\widehat{\Gamma}(g,g)(\xi)\|_{L^{\infty}_{t} L^{2}_{v}}.\]
and for the fourth one
\[\|I_{4}(t,\xi)\|_{L^{2}_{v}} \lesssim\varepsilon e^{-\beta_{j}t|\xi|^{2}}\|\widehat{\Gamma}(g( 0),g(0))(\xi)\|_{L^{2}_{v}}\] \[\lesssim\varepsilon\|\widehat{\Gamma}(g,g)(\xi)\|_{L^{\infty}_{t} L^{2}_{v}}.\]
This yields
\[\|I_{1}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\|I_{3}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\|I_{4}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\lesssim\varepsilon\|\widehat{\Gamma}(g,g)\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\lesssim\varepsilon\|\widehat{g}\|^{2}_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}, \tag{6.16}\]
where we have used (6.13) in the last line.
For the second term in (6.15) we first write, for all \(t\geq 0\) and \(\xi\in\Omega^{\prime}_{\xi}\),
\[\|I_{2}(t,\xi)\|_{L^{2}_{v}}\lesssim\varepsilon\int_{0}^{t}e^{-\beta_{j}(t-s)| \xi|^{2}}\|\partial_{t}\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L^{2}_{v}}\, \mathrm{d}s.\]
Since \(\partial_{t}\widehat{\Gamma}(g,g)=\widehat{\Gamma}(\partial_{t}g,g)+\widehat {\Gamma}(g,\partial_{t}g)\), from (6.12) we get
\[\|\partial_{t}\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L^{2}_{v}}\lesssim\int_{\Omega^{\prime}_{\eta}}\|\widehat{g}(s,\xi-\eta)\|_{L^{2}_{v}}\|\partial_{t}\widehat{g}(s,\eta)\|_{L^{2}_{v}}\,\mathrm{d}\eta.\]
As \(g\) is defined through \((u,\theta,\rho)\) which satisfies the Navier-Stokes-Fourier system (1.14), we have for all \(s\geq 0\) and all \(\eta\in\Omega^{\prime}_{\eta}\)
\[\|\partial_{t}\widehat{g}(s,\eta)\|_{L^{2}_{v}}\lesssim|\eta|^{2}\|\widehat{g} (s,\eta)\|_{L^{2}_{v}}+|\eta|\int_{\Omega^{\prime}_{\zeta}}\|\widehat{g}(s,\eta- \zeta)\|_{L^{2}_{v}}\|\widehat{g}(s,\zeta)\|_{L^{2}_{v}}\,\mathrm{d}\zeta.\]
This implies
\[\|I_{2}(t,\xi)\|_{L^{2}_{v}} \lesssim\varepsilon\int_{0}^{t}e^{-\beta_{j}(t-s)|\xi|^{2}}\int_{ \Omega^{\prime}_{\eta}}\|\widehat{g}(s,\xi-\eta)\|_{L^{2}_{v}}|\eta|^{2}\| \widehat{g}(s,\eta)\|_{L^{2}_{v}}\,\mathrm{d}\eta\,\mathrm{d}s\] \[\quad+\varepsilon\int_{0}^{t}e^{-\beta_{j}(t-s)|\xi|^{2}}\int_{ \Omega^{\prime}_{\eta}}\|\widehat{g}(s,\xi-\eta)\|_{L^{2}_{v}}|\eta|\int_{ \Omega^{\prime}_{\zeta}}\|\widehat{g}(s,\eta-\zeta)\|_{L^{2}_{v}}\|\widehat{g} (s,\zeta)\|_{L^{2}_{v}}\,\mathrm{d}\zeta\,\mathrm{d}\eta\,\mathrm{d}s\] \[=:R_{1}(t,\xi)+R_{2}(t,\xi).\]
For the term \(R_{1}\) we split the integral on \(\eta\) into two parts: the region \(2|\xi|>|\eta|\) in which we have \(|\eta|^{2}\leq 4|\xi|^{2}\); and the region \(2|\xi|\leq|\eta|\) where we have \(|\eta-\xi|\sim|\eta|\), which yields
\[R_{1}(t,\xi) \lesssim\varepsilon\int_{0}^{t}e^{-\beta_{j}(t-s)|\xi|^{2}}\int_{ \Omega_{\eta}^{\prime}}\mathbf{1}_{|\eta|<2|\xi|}\|\widehat{g}(s,\xi-\eta)\|_{ L_{v}^{2}}|\eta|^{2}\|\widehat{g}(s,\eta)\|_{L_{v}^{2}}\,\mathrm{d}\eta\,\mathrm{d}s\] \[\qquad+\varepsilon\int_{0}^{t}e^{-\beta_{j}(t-s)|\xi|^{2}}\int_{ \Omega_{\eta}^{\prime}}\mathbf{1}_{|\eta|\geq 2|\xi|}\|\widehat{g}(s,\xi-\eta)\|_{ L_{v}^{2}}|\eta|^{2}\|\widehat{g}(s,\eta)\|_{L_{v}^{2}}\,\mathrm{d}\eta\, \mathrm{d}s\] \[\lesssim\varepsilon\int_{0}^{t}|\xi|^{2}e^{-\beta_{j}(t-s)|\xi|^ {2}}\int_{\Omega_{\eta}^{\prime}}\|\widehat{g}(s,\xi-\eta)\|_{L_{v}^{2}}\| \widehat{g}(s,\eta)\|_{L_{v}^{2}}\,\mathrm{d}\eta\,\mathrm{d}s\] \[\qquad+\varepsilon\int_{0}^{t}e^{-\beta_{j}(t-s)|\xi|^{2}}\int_{ \Omega_{\eta}^{\prime}}|\xi-\eta|\|\widehat{g}(s,\xi-\eta)\|_{L_{v}^{2}}|\eta| \|\widehat{g}(s,\eta)\|_{L_{v}^{2}}\,\mathrm{d}\eta\,\mathrm{d}s.\]
Thanks to Hölder's inequality in the time variable, it follows
\[\|R_{1}(\xi)\|_{L_{t}^{\infty}} \lesssim\varepsilon\int_{\Omega_{\eta}^{\prime}}\|\widehat{g}( \xi-\eta)\|_{L_{v}^{\infty}L_{v}^{2}}\|\widehat{g}(\eta)\|_{L_{v}^{\infty}L_{v} ^{2}}\,\mathrm{d}\eta\] \[\qquad+\varepsilon\int_{\Omega_{\eta}^{\prime}}|\xi-\eta|\| \widehat{g}(\xi-\eta)\|_{L_{t}^{2}L_{v}^{2}}|\eta|\|\widehat{g}(\eta)\|_{L_{t }^{2}L_{v}^{2}}\,\mathrm{d}\eta,\]
therefore taking the \(L_{\xi}^{1}\) norm and using Young's convolution inequality we obtain
\[\|R_{1}\|_{L_{\xi}^{1}L_{t}^{\infty}}\lesssim\varepsilon\|\widehat{g}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}^{2}+\varepsilon\||\xi|\widehat{g}\|_{L_{\xi}^{1}L_{t}^{2}L_{v}^{2}}^{2}. \tag{6.17}\]
For the term \(R_{2}\) we write
\[\|R_{2}(\xi)\|_{L_{t}^{\infty}} \lesssim\varepsilon\sup_{t\geq 0}\int_{0}^{t}|\xi|e^{-\beta_{j}(t-s)|\xi|^{2}}|\xi|^{-1}\int_{\Omega_{\eta}^{\prime}}\|\widehat{g}(s,\xi-\eta)\|_{L_{v}^{2}}|\eta|\int_{\Omega_{\zeta}^{\prime}}\|\widehat{g}(s,\eta-\zeta)\|_{L_{v}^{2}}\|\widehat{g}(s,\zeta)\|_{L_{v}^{2}}\,\mathrm{d}\zeta\,\mathrm{d}\eta\,\mathrm{d}s\] \[\lesssim\varepsilon\sup_{t\geq 0}\left(\int_{0}^{t}|\xi|^{2}e^{-\beta_{j}(t-s)|\xi|^{2}}\,\mathrm{d}s\right)^{1/2}|\xi|^{-1}\left(\int_{0}^{\infty}G(s,\xi)^{2}\,\mathrm{d}s\right)^{1/2},\]
where we denote
\[G(s,\xi)=\int_{\Omega_{\eta}^{\prime}}\|\widehat{g}(s,\xi-\eta)\|_{L_{v}^{2}}H(s,\eta)\,\mathrm{d}\eta,\quad H(s,\eta)=|\eta|\int_{\Omega_{\zeta}^{\prime}}\|\widehat{g}(s,\eta-\zeta)\|_{L_{v}^{2}}\|\widehat{g}(s,\zeta)\|_{L_{v}^{2}}\,\mathrm{d}\zeta.\]
By the Minkowski and Hölder inequalities
\[\|G(\xi)\|_{L_{t}^{2}} \lesssim\int_{\Omega_{\eta}^{\prime}}\left(\int_{0}^{\infty}\| \widehat{g}(s,\xi-\eta)\|_{L_{v}^{2}}^{2}|H(s,\eta)|^{2}\,\mathrm{d}s\right)^{ 1/2}\mathrm{d}\eta\] \[\lesssim\int_{\Omega_{\eta}^{\prime}}\|\widehat{g}(\xi-\eta)\|_{L _{t}^{\infty}L_{v}^{2}}\|H(\eta)\|_{L_{t}^{2}}\,\mathrm{d}\eta.\]
Moreover
\[H(s,\eta)\lesssim\int_{\Omega_{\zeta}^{\prime}}|\eta-\zeta|\|\widehat{g}(s,\eta-\zeta)\|_{L_{v}^{2}}\|\widehat{g}(s,\zeta)\|_{L_{v}^{2}}\,\mathrm{d}\zeta+\int_{\Omega_{\zeta}^{\prime}}\|\widehat{g}(s,\eta-\zeta)\|_{L_{v}^{2}}|\zeta|\|\widehat{g}(s,\zeta)\|_{L_{v}^{2}}\,\mathrm{d}\zeta.\]
Thus again by the Minkowski and Hölder inequalities,
\[\|H(\eta)\|_{L_{t}^{2}} \lesssim\int_{\Omega_{\zeta}^{\prime}}\left(\int_{0}^{\infty}\||\eta-\zeta|\widehat{g}(s,\eta-\zeta)\|_{L_{v}^{2}}^{2}\|\widehat{g}(s,\zeta)\|_{L_{v}^{2}}^{2}\,\mathrm{d}s\right)^{1/2}\mathrm{d}\zeta\] \[\qquad+\int_{\Omega_{\zeta}^{\prime}}\left(\int_{0}^{\infty}\|\widehat{g}(s,\eta-\zeta)\|_{L_{v}^{2}}^{2}\||\zeta|\widehat{g}(s,\zeta)\|_{L_{v}^{2}}^{2}\,\mathrm{d}s\right)^{1/2}\mathrm{d}\zeta\] \[\lesssim\int_{\Omega_{\zeta}^{\prime}}\|\widehat{g}(\eta-\zeta)\|_{L_{t}^{\infty}L_{v}^{2}}\||\zeta|\widehat{g}(\zeta)\|_{L_{t}^{2}L_{v}^{2}}\,\mathrm{d}\zeta.\]
Hence we get
\[\|R_{2}(\xi)\|_{L_{t}^{\infty}}\lesssim\varepsilon|\xi|^{-1}\int_{\Omega_{ \eta}^{\prime}}\int_{\Omega_{\xi}^{\prime}}\|\widehat{g}(\xi-\eta)\|_{L_{t}^{ \infty}L_{v}^{2}}\|\widehat{g}(\eta-\zeta)\|_{L_{t}^{\infty}L_{v}^{2}}\|| \zeta|\widehat{g}(\zeta)\|_{L_{t}^{2}L_{v}^{2}}\,\mathrm{d}\zeta\,\mathrm{d}\eta.\]
Taking the \(L^{1}_{\xi}\) norm and distinguishing between high and low frequencies yields
\[\|\mathbf{1}_{|\xi|\geq 1}R_{2}\|_{L^{1}_{\xi}L^{\infty}_{t}}\lesssim\varepsilon\|\widehat{g}\|^{2}_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\||\xi|\widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}, \tag{6.18}\]
and, in the whole space case \(\Omega_{x}=\mathbf{R}^{3}\) and \(\Omega^{\prime}_{\xi}=\mathbf{R}^{3}\),
\[\|\mathbf{1}_{|\xi|<1}R_{2}\|_{L^{1}_{\xi}L^{\infty}_{t}} \lesssim\varepsilon\|\mathbf{1}_{|\xi|<1}|\xi|^{-1}\|_{L^{p^{\prime}}_{\xi}}\left\|\int_{\Omega^{\prime}_{\eta}}\int_{\Omega^{\prime}_{\zeta}}\|\widehat{g}(\xi-\eta)\|_{L^{\infty}_{t}L^{2}_{v}}\|\widehat{g}(\eta-\zeta)\|_{L^{\infty}_{t}L^{2}_{v}}\||\zeta|\widehat{g}(\zeta)\|_{L^{2}_{t}L^{2}_{v}}\,\mathrm{d}\zeta\,\mathrm{d}\eta\right\|_{L^{p}_{\xi}}\] \[\lesssim\varepsilon\|\widehat{g}\|_{L^{p}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\||\xi|\widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}, \tag{6.19}\]
where we have used that \(\mathbf{1}_{|\xi|<1}|\xi|^{-1}\in L^{p^{\prime}}_{\xi}\) since \(p>3/2\).
_Step 3._ It only remains to compute the term \(\widehat{\Psi}^{\varepsilon\#}\), for which we first write, for all \(t\geq 0\) and \(\xi\in\Omega^{\prime}_{\xi}\),
\[\|\widehat{\Psi}^{\varepsilon\#}[g,g](t,\xi)\|_{L^{2}_{v}}\lesssim\frac{1}{\varepsilon}\int_{0}^{t}\|\widehat{U}^{\varepsilon\#}(t-s,\xi)\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L^{2}_{v}}\,\mathrm{d}s.\]
In the hard potentials case \(\gamma+2s\geq 0\), thanks to (6.3) we have, for all \(t\geq 0\) and \(\xi\in\Omega^{\prime}_{\xi}\),
\[\|\widehat{\Psi}^{\varepsilon\#}[g,g](t,\xi)\|_{L^{2}_{v}} \lesssim\frac{1}{\varepsilon}\int_{0}^{t}e^{-\lambda_{0}\frac{(t-s)}{\varepsilon^{2}}}\|\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L^{2}_{v}}\,\mathrm{d}s\] \[\lesssim\frac{1}{\varepsilon}\|\widehat{\Gamma}(g,g)(\xi)\|_{L^{\infty}_{t}L^{2}_{v}}\int_{0}^{t}e^{-\lambda_{0}\frac{(t-s)}{\varepsilon^{2}}}\,\mathrm{d}s\] \[\lesssim\varepsilon\|\widehat{\Gamma}(g,g)(\xi)\|_{L^{\infty}_{t}L^{2}_{v}}.\]
For the soft potentials case \(\gamma+2s<0\), observing that \(\mathbf{P}\widehat{\Gamma}(g,g)=0\), we fix \(\ell>0\) such that \(\frac{\ell}{|\gamma+2s|}>1\) and then use (6.4) to obtain, for all \(t\geq 0\) and \(\xi\in\Omega^{\prime}_{\xi}\),
\[\|\widehat{\Psi}^{\varepsilon\#}[g,g](t,\xi)\|_{L^{2}_{v}} \lesssim\frac{1}{\varepsilon}\int_{0}^{t}\left(1+\frac{(t-s)}{\varepsilon^{2}}\right)^{-\frac{\ell}{|\gamma+2s|}}\|\widehat{\Gamma}(g(s),g(s))(\xi)\|_{L^{2}_{v}(\langle v\rangle^{\ell})}\,\mathrm{d}s\] \[\lesssim\frac{1}{\varepsilon}\|\widehat{\Gamma}(g,g)(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}\int_{0}^{t}\left(1+\frac{(t-s)}{\varepsilon^{2}}\right)^{-\frac{\ell}{|\gamma+2s|}}\,\mathrm{d}s\] \[\lesssim\varepsilon\|\widehat{\Gamma}(g,g)(\xi)\|_{L^{\infty}_{t}L^{2}_{v}(\langle v\rangle^{\ell})}.\]
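In the last step of the two estimates above we have used the elementary bounds (with \(a:=\frac{\ell}{|\gamma+2s|}>1\))
\[\frac{1}{\varepsilon}\int_{0}^{t}e^{-\lambda_{0}\frac{(t-s)}{\varepsilon^{2}}}\,\mathrm{d}s\leq\frac{\varepsilon}{\lambda_{0}}\qquad\text{and}\qquad\frac{1}{\varepsilon}\int_{0}^{t}\left(1+\frac{(t-s)}{\varepsilon^{2}}\right)^{-a}\,\mathrm{d}s\leq\frac{\varepsilon}{a-1},\]
which follow from the change of variables \(\sigma=(t-s)/\varepsilon^{2}\).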
Taking the \(L^{1}_{\xi}L^{\infty}_{t}\) norm in the above estimates and using (6.13) yields, in both the hard and soft potentials cases,
\[\|\widehat{\Psi}^{\varepsilon\#}[g,g]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\lesssim\varepsilon\|\widehat{g}\|^{2}_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}. \tag{6.20}\]
_Step 4: Conclusion._ We conclude the proof by gathering estimates (6.14), (6.16), (6.17), (6.18), (6.19), and (6.20) together with the bounds for \(g\) from Theorem 2.2.
### Proof of Theorem 2.3
Let \(f^{\varepsilon}\), for any \(\varepsilon\in(0,1]\), be the unique global mild solution to (1.8) associated to the initial data \(f^{\varepsilon}_{0}\) constructed in Theorem 2.1.
Let \(g=\mathbf{P}g\) be the kinetic distribution defined by (2.16) through the unique global mild solution \((\rho,u,\theta)\) to (1.14) associated to the initial data \((\rho_{0},u_{0},\theta_{0})\) constructed in Theorem 2.2, and denote also \(g_{0}=\mathbf{P}g_{0}\) the initial kinetic distribution defined by (2.15) through the initial data \((\rho_{0},u_{0},\theta_{0})\).
We know from [13, 39], for instance, that \(g\) verifies the equation
\[g(t)=U(t)g_{0}+\Psi[g,g](t), \tag{6.21}\]
where we recall that \(U(t)\) is defined in (6.7), and \(\Psi(t)\) in (6.11). Taking the Fourier transform in \(x\in\Omega_{x}\), we then have
\[\widehat{g}(t,\xi)=\widehat{U}(t,\xi)\widehat{g}_{0}(\xi)+\widehat{\Psi}[g,g](t,\xi), \tag{6.22}\]
for all \(\xi\in\Omega^{\prime}_{\xi}\), where we recall that \(\widehat{U}\) is defined in (6.6) and \(\widehat{\Psi}\) in (6.10).
We first observe that the difference \(f^{\varepsilon}-g\) satisfies
\[\begin{split}\widehat{f}^{\varepsilon}(\xi)-\widehat{g}(\xi)& =\widehat{U}^{\varepsilon}(t,\xi)\widehat{f}_{0}^{\varepsilon}(\xi)- \widehat{U}(t,\xi)\widehat{g}_{0}(\xi)+\widehat{\Psi}^{\varepsilon}[f^{ \varepsilon},f^{\varepsilon}](t,\xi)-\widehat{\Psi}[g,g](t,\xi)\\ &=\widehat{U}^{\varepsilon}(t,\xi)\left\{\widehat{f}_{0}^{ \varepsilon}(\xi)-\widehat{g}_{0}(\xi)\right\}+\left\{\widehat{U}^{ \varepsilon}(t,\xi)-\widehat{U}(t,\xi)\right\}\widehat{g}_{0}(\xi)\\ &\quad+\left\{\widehat{\Psi}^{\varepsilon}[g,g](t,\xi)-\widehat {\Psi}[g,g](t,\xi)\right\}+\left\{\widehat{\Psi}^{\varepsilon}[f^{ \varepsilon},f^{\varepsilon}](t,\xi)-\widehat{\Psi}^{\varepsilon}[g,g](t,\xi )\right\}\\ &=:T_{1}+T_{2}+T_{3}+T_{4},\end{split} \tag{6.23}\]
and we estimate each one of these terms separately.
For the first term, from Lemma 6.1 we have
\[\|\widehat{U}^{\varepsilon}(\cdot)\{\widehat{f}_{0}^{\varepsilon}-\widehat{g }_{0}\}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\lesssim\|\widehat{f}_{0}^{ \varepsilon}-\widehat{g}_{0}\|_{L_{\xi}^{1}L_{v}^{2}}.\]
Thanks to Lemma 6.3 and an interpolation argument, we obtain for the second term, for any \(\delta\in[0,1]\),
\[\|\{\widehat{U}^{\varepsilon}(\cdot)-\widehat{U}(\cdot)\}\widehat{g}_{0}\|_{L _{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\lesssim\varepsilon^{\delta}\|\langle\xi \rangle^{\delta}\widehat{g}_{0}\|_{L_{\xi}^{1}L_{v}^{2}}.\]
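This interpolation is elementary: the proof of Lemma 6.3 provides the pointwise factor \(\min\{1,\varepsilon|\xi|\}\), and for any \(\delta\in[0,1]\) one has
\[\min\{1,\varepsilon|\xi|\}\leq(\varepsilon|\xi|)^{\delta}\leq\varepsilon^{\delta}\langle\xi\rangle^{\delta}.\]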
For the third term we use Lemma 6.5, which yields
\[\|\widehat{\Psi}^{\varepsilon}[g,g]-\widehat{\Psi}[g,g]\|_{L_{\xi}^{1}L_{t}^{ \infty}L_{v}^{2}}\lesssim\varepsilon\left(\|\widehat{g}_{0}\|_{L_{\xi}^{1}L_{ v}^{2}}^{2}+\|\widehat{g}_{0}\|_{L_{\xi}^{1}L_{v}^{2}}^{3}\right),\]
in the case \(\Omega_{x}=\mathbf{T}^{3}\), and
\[\|\widehat{\Psi}^{\varepsilon}[g,g]-\widehat{\Psi}[g,g]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\lesssim\varepsilon\left(\|\widehat{g}_{0}\|_{L_{\xi}^{1}L_{v}^{2}}^{2}+\|\widehat{g}_{0}\|_{L_{\xi}^{1}L_{v}^{2}}^{3}+\|\widehat{g}_{0}\|_{L_{\xi}^{p}L_{v}^{2}}^{2}+\|\widehat{g}_{0}\|_{L_{\xi}^{p}L_{v}^{2}}^{3}\right),\]
in the case \(\Omega_{x}=\mathbf{R}^{3}\).
For the fourth term \(T_{4}\), we first decompose \(f^{\varepsilon}=\mathbf{P}^{\perp}f^{\varepsilon}+\mathbf{P}f^{\varepsilon}\) and use that \(g=\mathbf{P}g\) to write
\[T_{4} =\widehat{\Psi}^{\varepsilon}[f^{\varepsilon},f^{\varepsilon}]( t,\xi)-\widehat{\Psi}^{\varepsilon}[g,g](t,\xi)\] \[=\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon },\mathbf{P}^{\perp}f^{\varepsilon}](t,\xi)+\widehat{\Psi}^{\varepsilon}[ \mathbf{P}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}](t,\xi)+ \widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}f^{ \varepsilon}](t,\xi)\] \[\quad+\widehat{\Psi}^{\varepsilon}[\mathbf{P}f^{\varepsilon}, \mathbf{P}f^{\varepsilon}](t,\xi)-\widehat{\Psi}^{\varepsilon}[g,g](t,\xi)\] \[=\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon },\mathbf{P}^{\perp}f^{\varepsilon}](t,\xi)+\widehat{\Psi}^{\varepsilon}[ \mathbf{P}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}](t,\xi)+ \widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}f^{ \varepsilon}](t,\xi)\] \[\quad+\widehat{\Psi}^{\varepsilon}[\mathbf{P}(f^{\varepsilon}-g ),\mathbf{P}f^{\varepsilon}](t,\xi)+\widehat{\Psi}^{\varepsilon}[\mathbf{P}g, \mathbf{P}(f^{\varepsilon}-g)](t,\xi).\]
Thanks to Proposition 3.4 and Lemma 4.1 we have
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}} \lesssim\|\widehat{\Gamma}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{2}(H_{v}^{s,*})^{\prime}}\] \[\lesssim\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L_{\xi}^{1}L_{t}^{2}H_{v}^{s,*}},\]
moreover
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}} \lesssim\|\widehat{\Gamma}[\mathbf{P}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{2}(H_{v}^{s,*})^{\prime}}\] \[\lesssim\|\mathbf{P}\widehat{f}^{\varepsilon}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L_{\xi}^{1}L_{t}^{2}H_{v}^{s,*}},\]
and also
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}} \lesssim\|\widehat{\Gamma}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{2}(H_{v}^{s,*})^{\prime}}\] \[\lesssim\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L_{\xi}^{1}L_{t}^{2}H_{v}^{s,*}}\|\mathbf{P}\widehat{f}^{\varepsilon}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}},\]
where we have used that \(\|\mathbf{P}\phi\|_{H_{v}^{s,*}}\lesssim\|\mathbf{P}\phi\|_{L_{v}^{2}}\) and \(\|\langle v\rangle^{(\gamma/2+s)_{-}}\phi\|_{L_{v}^{2}}\lesssim\|\phi\|_{H_{v}^{s,*}}\). This implies
\[\begin{split}\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\\ \lesssim\|\widehat{f}^{\varepsilon}\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\|\mathbf{P}^{\perp}\widehat{f}^{\varepsilon}\|_{L_{\xi}^{1}L_{t}^{2}H_{v}^{s,*}}.\end{split} \tag{6.24}\]
Therefore, using the bounds of Theorem 2.1, we deduce from (6.24) that
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}+\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}f^{\varepsilon}]\|_{L_{\xi}^{1}L_{t}^{\infty}L_{v}^{2}}\] \[\lesssim\varepsilon\|\widehat{f}_{0}^{\varepsilon}\|_{L_{\xi}^{1}L_{v}^{2}}^{2},\]
in the case \(\Omega_{x}=\mathbf{T}^{3}\), and
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}f^{\varepsilon},\mathbf{P}^{\perp}f^{\varepsilon}]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}^{\perp}f^{\varepsilon},\mathbf{P}f^{\varepsilon}]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\] \[\qquad\lesssim\varepsilon\left(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}^{2}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}}^{2}\right),\]
in the case \(\Omega_{x}=\mathbf{R}^{3}\).
Furthermore, from Proposition 3.4 and Lemma 4.1, and also using that \(\|\mathbf{P}\phi\|_{H^{s,*}_{v}}\lesssim\|\mathbf{P}\phi\|_{L^{2}_{v}}\), we have
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}(f^{\varepsilon}-g),\mathbf{P}f^{\varepsilon}]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}} \lesssim\|\widehat{\Gamma}(\mathbf{P}(f^{\varepsilon}-g),\mathbf{P}f^{\varepsilon})\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}\] \[\lesssim\|\mathbf{P}(\widehat{f}^{\varepsilon}-\widehat{g})\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\mathbf{P}\widehat{f}^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}},\]
also
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}g,\mathbf{P}(f^{\varepsilon}-g)]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}} \lesssim\|\widehat{\Gamma}(\mathbf{P}g,\mathbf{P}(f^{\varepsilon}-g))\|_{L^{1}_{\xi}L^{2}_{t}(H^{s,*}_{v})^{\prime}}\] \[\lesssim\|\mathbf{P}(\widehat{f}^{\varepsilon}-\widehat{g})\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\|\mathbf{P}\widehat{g}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}.\]
In the case of the torus \(\Omega_{x}=\mathbf{T}^{3}\), we can use the bounds of Theorem 2.1-(1) and Theorem 2.2-(1) to obtain
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}(f^{\varepsilon}-g),\mathbf{P}f^{\varepsilon}]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}g,\mathbf{P}(f^{\varepsilon}-g)]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\] \[\qquad\lesssim\left(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}\right)\|\widehat{f}^{\varepsilon}-\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\] \[\qquad\lesssim\eta_{2}\|\widehat{f}^{\varepsilon}-\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}.\]
In the case of the whole space \(\Omega_{x}=\mathbf{R}^{3}\), we first use (4.14) to write
\[\|\mathbf{P}f^{\varepsilon}\|_{L^{1}_{\xi}L^{2}_{t}L^{2}_{v}}\lesssim\left\| \frac{|\xi|}{\langle\xi\rangle}\mathbf{P}f^{\varepsilon}\right\|_{L^{1}_{\xi} L^{2}_{t}L^{2}_{v}}+\left\|\frac{|\xi|}{\langle\xi\rangle}\mathbf{P}f^{ \varepsilon}\right\|_{L^{p}_{\xi}L^{2}_{t}L^{2}_{v}},\]
then we use the bounds of Theorem 2.1-(2) and Theorem 2.2-(2) to get
\[\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}(f^{\varepsilon}-g),\mathbf{P}f^{\varepsilon}]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}+\|\widehat{\Psi}^{\varepsilon}[\mathbf{P}g,\mathbf{P}(f^{\varepsilon}-g)]\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\] \[\qquad\lesssim\left(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}}+\|\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+\|\widehat{g}_{0}\|_{L^{p}_{\xi}L^{2}_{v}}\right)\|\widehat{f}^{\varepsilon}-\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}\] \[\qquad\lesssim\eta_{2}\|\widehat{f}^{\varepsilon}-\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}}.\]
Gathering previous estimates and using that \(\eta_{2}>0\) is small enough, so that when taking the \(L^{1}_{\xi}L^{\infty}_{t}L^{2}_{v}\) norm of (6.23) the fourth and fifth terms on the right-hand side of (6.23) can be absorbed by the left-hand side, we deduce
\[\|\widehat{f}^{\varepsilon}-\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{ t}L^{2}_{v}} \lesssim\|\widehat{f}^{\varepsilon}_{0}-\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+ \varepsilon^{\delta}\|\langle\xi\rangle^{\delta}\widehat{g}_{0}\|_{L^{1}_{\xi} L^{2}_{v}}\] \[\qquad+\varepsilon\left(\|\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v} }^{2}+\|\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}^{3}\right)+\varepsilon\| \widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}^{2}, \tag{6.25}\]
in the case \(\Omega_{x}=\mathbf{T}^{3}\), and
\[\|\widehat{f}^{\varepsilon}-\widehat{g}\|_{L^{1}_{\xi}L^{\infty}_{ t}L^{2}_{v}} \lesssim\|\widehat{f}^{\varepsilon}_{0}-\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}+ \varepsilon^{\delta}\|\langle\xi\rangle^{\delta}\widehat{g}_{0}\|_{L^{1}_{\xi}L^{ 2}_{v}}\] \[\qquad+\varepsilon\left(\|\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v} }^{2}+\|\widehat{g}_{0}\|_{L^{1}_{\xi}L^{2}_{v}}^{3}+\|\widehat{g}_{0}\|_{L^{p}_ {\xi}L^{2}_{v}}^{2}+\|\widehat{g}_{0}\|_{L^{p}_{\xi}L^{2}_{v}}^{3}\right)\] \[\qquad+\varepsilon\left(\|\widehat{f}^{\varepsilon}_{0}\|_{L^{1}_{ \xi}L^{2}_{v}}^{2}+\|\widehat{f}^{\varepsilon}_{0}\|_{L^{p}_{\xi}L^{2}_{v}}^{2} \right), \tag{6.26}\]
in the case \(\Omega_{x}=\mathbf{R}^{3}\). From this estimate, we first conclude that
\[\lim_{\varepsilon\to 0}\|\widehat{f}^{\varepsilon}-\widehat{g}\|_{L^{1}_{\xi}L^{ \infty}_{t}L^{2}_{v}}=0,\]
assuming moreover that \(\langle\xi\rangle^{\delta}\widehat{g}_{0}\in L^{1}_{\xi}L^{2}_{v}\) for some \(\delta\in(0,1]\). We can finally prove Theorem 2.3, where we only assume \(\widehat{g}_{0}\in L^{1}_{\xi}L^{2}_{v}\), by using the previous convergence and arguing by density as in [24]. This completes the proof of Theorem 2.3. |
2306.10379 | Gradient-type subspace iteration methods for the symmetric eigenvalue
problem | This paper explores variants of the subspace iteration algorithm for
computing approximate invariant subspaces. The standard subspace iteration
approach is revisited and new variants that exploit gradient-type techniques
combined with a Grassmann manifold viewpoint are developed. A gradient method
as well as a nonlinear conjugate gradient technique are described. Convergence
of the gradient-based algorithm is analyzed and a few numerical experiments are
reported, indicating that the proposed algorithms are sometimes superior to
standard algorithms. This includes the Chebyshev-based subspace iteration and
the locally optimal block conjugate gradient method, when compared in terms of
number of matrix vector products and computational time, resp. The new methods,
on the other hand, do not require estimating optimal parameters. An important
contribution of this paper to achieve this good performance is the accurate and
efficient implementation of an exact line search. In addition, new convergence
proofs are presented for the non-accelerated gradient method that includes a
locally exponential convergence if started in a $\mathcal{O(\sqrt{\delta})}$
neighbourhood of the dominant subspace with spectral gap $\delta$. | Foivos Alimisis, Yousef Saad, Bart Vandereycken | 2023-06-17T15:39:58Z | http://arxiv.org/abs/2306.10379v2 | # Gradient-type subspace iteration methods for the symmetric eigenvalue problem
###### Abstract
This paper explores variants of the subspace iteration algorithm for computing approximate invariant subspaces. The standard subspace iteration approach is revisited and new variants that exploit gradient-type techniques combined with a Grassmann manifold viewpoint are developed. A gradient method as well as a conjugate gradient technique are described. Convergence of the gradient-based algorithm is analyzed and a few numerical experiments are reported, indicating that the proposed algorithms are sometimes superior to a standard Chebyshev-based subspace iteration when compared in terms of number of matrix vector products, but do not require estimating optimal parameters. An important contribution of this paper to achieve this good performance is the accurate and efficient implementation of an exact line search. In addition, new convergence proofs are presented for the non-accelerated gradient method that includes a locally exponential convergence if started in a \(\mathcal{O}(\sqrt{\delta})\) neighbourhood of the dominant subspace with spectral gap \(\delta\).
**Keywords:** Invariant subspaces; Eigenspaces; Partial diagonalization; Grassmann Manifolds; Gradient descent; Trace optimization. **AMS:** 15A69, 15A18
## 1 Introduction
When considering the many sources of large eigenvalue problems in numerical linear algebra, one often observes that the actual underlying problem is to compute an invariant subspace. In these cases, the eigenvalues and eigenvectors are often a by-product of the computations and they are not directly utilized. For example, one of the most common calculations in data science consists of performing a dimension reduction which extracts a subspace that provides a good approximation of the original data in that not much information is lost when we project the original problem into this low-dimensional space. This projection often results in better accuracy since the information that is shed out corresponds
to noise. Another example is in electronic structure calculations where an important class of algorithms called 'linear scaling methods' is entirely based on the eigenprojector on the subspace associated with the 'occupied states'. This projector is available through any orthonormal basis of the invariant subspace and here again eigenvectors and eigenvalues are not explicitly needed, resulting in methods that scale linearly with the number of particles.
While this distinction is often blurred in the literature, a number of articles did specifically deal with the problem of computing an invariant subspace by expressing it in terms of computing objects on the Grassmann manifold. Thus, the well-known article [12] addressed general optimization problems on manifolds, including eigenvalue problems as a special case. A number of other papers also adopted, explicitly or implicitly, a matrix manifold viewpoint for computing invariant subspaces, see, e.g., [1, 8, 20, 7, 4, 3, 5], among others. In many of these contributions, a Newton-type approach is advocated to solve the resulting equations. Newton's method typically requires solving linear systems, or a sequence of Sylvester equations and this can be quite expensive, or even impractical in some situations.
In view of the above observation, we may ask whether or not a gradient approach can yield an effective alternative to standard implementations of subspace iteration. Subspace Iteration (SI) computes the dominant invariant subspace of a matrix and some of its known advantages are highly desirable in a number of situations. SI is a block form of the power method and as such it is rather simple to implement. It is also known for its robustness properties. For example, it is resilient to small changes in the matrix during iteration, an important attribute that is not shared by Krylov subspace methods. This particular feature is appealing in many practical instances as in, e.g, subspace tracking [15, 13, 11, 22, 9], or in electronic structure calculations [24, 25].
This paper considers variations of subspace iteration that are grounded in a gradient-type method on the Grassmann manifold. A gradient and a (nonlinear) conjugate gradient approach are described that both share the same advantages as those of classical SI. However, the proposed methods are based on a direct optimization of the partial trace of the matrix in the subspace. The convergence of the gradient algorithm will be studied theoretically.
We point out that other gradient-descent type methods that employ a Grassmannian viewpoint have been recently developed in [4, 5]. These methods differ from those of this paper in that they aim at following a geodesic on the manifold by exploiting a Riemannian structure. No such attempt is made in the methods proposed herein. Instead, a simple gradient descent (or ascent) approach with (exact) line search is applied and a retraction step is added to ensure the orthogonality of the basis of the new subspace.
The first part of the paper discusses classical versions of subspace iteration. The second part develops line-search techniques combined with gradient descent-type approaches. A conjugate gradient approach is also presented.
Background and notation
This section begins with providing some background on invariant subspaces and then defines the notation to be used throuhgout the paper.
### Invariant subspaces
Given an \(n\times n\) matrix \(A\), a subspace \(\mathcal{X}\) that is invariant with respect to \(A\) is a subspace of dimension \(p\) of \(\mathbb{R}^{n}\) such that:
\[A\mathcal{X}\subseteq\mathcal{X}. \tag{1}\]
This can be expressed in matrix form by using a basis \(X\in\mathbb{R}^{n\times p}\) of \(\mathcal{X}\). In this case \(\mathcal{X}\) is invariant _iff_ there exists a matrix \(\Lambda\in\mathbb{R}^{p\times p}\) such that
\[AX=X\Lambda. \tag{2}\]
This second definition depends on a basis which is not unique. Herein lies a conundrum that is encountered in this context. We need a (non-unique) basis for computations and expressing equations and equalities; however, the original definition (1) does not require a basis.
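The following toy NumPy snippet (our own illustration, with an arbitrary test matrix; it is not taken from the paper) checks the matrix characterization (2) on a small symmetric example, where \(X\) collects a few eigenvectors of \(A\):

```
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                   # symmetric test matrix

w, V = np.linalg.eigh(A)            # eigenvalues (ascending) and eigenvectors
X = V[:, :p]                        # basis built from 3 eigenvectors of A
Lam = np.diag(w[:p])                # the matrix Lambda in (2)

print(np.allclose(A @ X, X @ Lam))  # True: A X = X Lambda, so Span(X) is invariant
```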
We will sometimes use the term 'eigenspace' to mean invariant subspaces. The terminology used varies as eigenspaces are often restricted to mean the invariant subspace associated with one eigenvalue instead of a group of eigenvalues.
A number of computational tasks deal specifically with invariant subspaces. The most common of these is probably just to _compute_ an invariant subspace as represented by, e.g., an orthonormal basis. Sometimes, the task is to _update_ the subspace rather than compute it from scratch. This happens for example when solving the Kohn-Sham equation, see e.g., [19] in electronic structure calculations where at each iteration of the Self-Consistent Field (SCF) method the Hamiltonian changes slightly and it is necessary to update an already computed approximate invariant subspace for the previous Hamiltonian. There are also numerous applications in signal processing, where the problem is to _track_ an invariant subspace of a sequence of matrices, see e.g., [22, 11, 16, 13] a problem that is somewhat related to subspace updating problem.
Another problem that is often encountered is to (inexpensively) estimate the dimension of some invariant subspace. Thus, the approximate rank or numerical rank of some data matrix can be needed in order to determine the proper dimension required for an adequate "dimension reduction", or for subspace tracking [16]. This numerical rank can be determined as the dimension of the (near) invariant subspace corresponding to singular values larger than a certain threshold \(\epsilon\), see, e.g., [23]. Another problem in signal processing, is to find a subspace that is simultaneously a near-invariant subspace for a set of matrices. A common characteristic of the examples just mentioned is that they all deal with invariant subspaces - but they do not require eigenvalues and vectors explicitly.
### Notation and assumptions
For the remainder of the paper we will restrict our attention to the case when \(A\) is real symmetric and positive definite. The positive-definiteness assumption is made for theoretical reasons and without loss of generality since the iterates produced by the algorithms in this paper are invariant to the transformation \(A+cI\) for real \(c\). In addition, we will consider the problem of computing the invariant subspace associated with the largest \(p\) eigenvalues, which we refer to as the \(p\)th "dominant" subspace. In case the subspace that corresponds to the smallest \(p\) eigenvalues is sought, the algorithms can be applied to \(-A\).
Given an \(n\times n\) symmetric real matrix \(A\), we denote by \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) its eigenvalues counted with their multiplicities.
A common method for finding the \(p\)th dominant subspace of \(A\) consists of minimizing the function
\[\phi(X)=-\frac{1}{2}\operatorname{Tr}(X^{T}AX), \tag{3}\]
over the set of \(n\times p\) matrices with orthonormal columns, i.e., such that1 \(X^{T}X=I\); see, for example, [20, 21, 2, 12]. In general, a non-zero spectral gap will be assumed,
Footnote 1: In the complex Hermitian case, we would minimize \(-\frac{1}{2}\operatorname{Tr}(X^{H}AX)\) over matrices that satisfy \(X^{H}X=I\) where \(X^{H}\) is the transpose conjugate of \(X\).
\[\delta=\lambda_{p}-\lambda_{p+1}>0. \tag{4}\]
This condition implies that there exists a unique dominant subspace associated with the \(p\) largest eigenvalues of \(A\). Thus, minimizing the objective \(\phi\) has a unique solution.
We denote by \(\mathtt{Diag}(A)\) the diagonal matrix whose diagonal entries are the same as those of \(A\). The notation is overloaded by defining \(\mathtt{Diag}(\alpha_{i})_{i=1:n}\) to be the diagonal matrix with diagonal entries \(\alpha_{i},i=1:n\). This dual use of \(\mathtt{Diag}(.)\) causes no ambiguity and is consistent with common usage as, for example, in Matlab.
### Subspace iteration
Given some initial subspace with a basis \(X_{0}\in\mathbb{R}^{n\times p}\), the _classical_ subspace iteration algorithm is nothing but a Rayleigh-Ritz projection method onto the subspace spanned by \(X_{k}=A^{k}X_{0}\). That is, we seek an approximate eigenpair \(\tilde{\lambda},\tilde{u}\) where \(\tilde{\lambda}\ \in\ \mathbb{R}\) and \(\tilde{u}\ \in\ \operatorname{Span}(X_{k})\), by requiring that \((A-\tilde{\lambda}I)\tilde{u}\perp\operatorname{Span}(X_{k})\). If \(Q=[q_{1},\ldots,q_{m}]\) is an orthonormal basis of \(X_{k}\), and we express the approximate eigenvector as \(\tilde{u}=Q\tilde{y}\), then this leads to \(Q^{T}(A-\tilde{\lambda}I)\tilde{u}=0\) which means that \(\tilde{\lambda},\tilde{y}\) is the solution of the projected eigenvalue problem
\[Q^{T}AQ\tilde{y}=\tilde{\lambda}\tilde{y}. \tag{5}\]
A common and more effective alternative is to define \(X_{k}\) to be of the form \(X_{k}=p_{k}(A)X_{0}\), in which \(p_{k}\) is some optimally selected polynomial of degree \(k\).
The original form discussed above which was described by Bauer [6], corresponds to using the monomial \(p_{k}(t)\equiv t^{k}\) or the shifted monomial \(p_{k}(t)=(t-c)^{k}\). Rutishauser later developed more advanced versions in which \(p_{k}(t)\) was selected to be a shifted and scaled Chebyshev polynomial [17, 18].
```
1:Start: Select initial system \(X=[x_{1},\ldots,x_{p}]\) and initial polynomial \(p_{k}\).
2:for iter=1:MaxIts do
3: Compute \(\hat{X}=p_{k}(A)X;\;\;Q=qr(\hat{X});\;\;\text{and}\;\;C=Q^{T}AQ\).
4: Diagonalize \(C\) as \(C=U\Lambda_{C}U^{T}\) and set \(X_{new}=QU\)
5:if convergence satisfied then
6:return
7:else
8: Set \(X:=X_{new}\) and select a new polynomial \(p_{k^{\prime}}^{\prime}\)
9:endif
10:endfor
```
**Algorithm 1**\([X_{new},D]=\text{SubsIt}(A,X)\)
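For concreteness, the following is a minimal Python/NumPy sketch of Algorithm 1 (the experiments later in the paper use Matlab; the function name, the default filter consisting of a single multiplication by \(A\), and the residual-based stopping test are our own choices, not prescribed by the algorithm above):

```python
import numpy as np

def subspace_iteration(A, X0, filt=None, max_its=100, tol=1e-10):
    """Subspace iteration with Rayleigh-Ritz projection, cf. Algorithm 1.

    A    : symmetric (n, n) array
    X0   : (n, p) block with orthonormal columns
    filt : optional callable X -> p_k(A) @ X; defaults to a single product A @ X
    """
    X = X0
    for _ in range(max_its):
        Xhat = A @ X if filt is None else filt(X)   # apply p_k(A) to the block
        Q, _ = np.linalg.qr(Xhat)                   # orthonormalize the block
        C = Q.T @ (A @ Q)                           # projected p x p matrix
        lam, U = np.linalg.eigh(C)                  # diagonalize C = U Lam U^T
        X = Q @ U                                   # Ritz vectors
        resid = A @ X - X * lam                     # columnwise block residual
        if np.linalg.norm(resid) <= tol * np.linalg.norm(A @ X):
            break
    return X, lam
```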
**The optimal polynomial** An important issue when applying subspace iteration is to select an optimal polynomial \(p_{k}(t)\) to use. Consider a spectrum of \(A\) as in Figure 1. Assuming that we use a subspace of dimension \(m\), the polynomial is selected so as to enable the method to compute the eigenvalues \(\lambda_{1},\lambda_{2},\ldots,\lambda_{p}\). The standard approach [17] for subspace iteration when computing the dominant subspace is to use the polynomial
\[p_{k}(t)\equiv C_{k}((t-c)/h)\quad\text{where }c=(\lambda_{p+1}+\lambda_{n})/2 \text{ and }h=(\lambda_{p+1}-\lambda_{n})/2. \tag{6}\]
Here, \(C_{k}(t)\) is the Chebyshev polynomial of the first kind of degree \(k\). Remark that \(c\) is the middle of the interval \([\lambda_{n},\;\lambda_{p+1}]\) of unwanted eigenvalues, and \(h\) is half the width of this interval.
The polynomial above is found from an analysis of subspace iteration that reveals that, when considering each eigenpair \(\lambda_{i},u_{i}\) for \(i\leq m\), the method acts as if the other eigenvalues \(\lambda_{j},j<p,j\neq i\) are not present. In other words, the polynomial is selected with the aim to minimize the maximum of \(p_{k}(\lambda_{j})/p_{k}(\lambda_{i})\) for \(j>p\) over polynomials of degree \(k\). This is then relaxed to minimizing the maximum of \(p_{k}(t)/p_{k}(\lambda_{i})\) for \(t\in[\lambda_{n},\lambda_{p+1}]\) over polynomials of degree \(k\). The optimal polynomial is \(C_{k}((t-c)/h)/C_{k}((\lambda_{i}-c)/h)\) where the denominator is for scaling purposes. In practice \(\lambda_{p+1},\ldots,\lambda_{n}\) are estimated and the scaling is performed for \(i=1\), i.e., with \(\lambda_{1}\) which is also estimated [17, 18]. Note that this polynomial is optimal for _each eigenvalue \(\lambda_{j},j\leq p\), individually._ However, it is not optimal when considering the subspace as a whole: a few experiments will reveal that convergence can be much faster when we replace \(\lambda_{p+1}\) in the definition of the polynomial by \(\lambda_{p+k}\) for some \(k>1\). Note that there does not seem to be a theoretical study of this empirical observation. Our numerical experiments will illustrate this room for improvement further.

Figure 1: Typical scenario for subspace iteration
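To illustrate how the filter (6) is applied in practice, here is a small Python sketch of the three-term Chebyshev recurrence \(C_{j+1}(t)=2tC_{j}(t)-C_{j-1}(t)\) acting on a block; it assumes estimates of \(\lambda_{p+1}\) and \(\lambda_{n}\) are available and, for brevity, omits the scaling that a robust implementation uses to avoid overflow for large degrees.

```python
import numpy as np

def chebyshev_filter(A, X, degree, lam_cut, lam_min):
    """Apply C_degree((A - c I)/h) to the block X, with c and h as in (6).

    lam_cut : estimate of lambda_{p+1} (top of the unwanted interval)
    lam_min : estimate of lambda_n (bottom of the spectrum)
    """
    if degree == 0:
        return X
    c = 0.5 * (lam_cut + lam_min)          # center of the unwanted interval
    h = 0.5 * (lam_cut - lam_min)          # half-width of the unwanted interval
    Y_prev, Y = X, (A @ X - c * X) / h     # C_0 and C_1 applied to X
    for _ in range(2, degree + 1):         # C_{j+1} = 2((A - cI)/h) C_j - C_{j-1}
        Y_prev, Y = Y, 2.0 * (A @ Y - c * Y) / h - Y_prev
    return Y
```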
**Comparison with Krylov subspace methods** It is known that when computing a small number of eigenvalues and vectors at one end of the spectrum, Krylov subspace methods such as the Lanczos method and Arnoldi's method are generally faster than methods based on subspace iteration. Standard Krylov methods require only one starting vector, and this can be seen as an advantage in that little data needs to be generated to start the algorithm. However, for many applications it can be a disadvantage. Indeed, there are applications where a subspace of a matrix that changes slightly from step to step must be computed repeatedly. At the start of a new computation, the whole (orthonormal) basis from the previous one is available and should be exploited; since Krylov methods start with only one vector, this is not possible. In addition, Krylov methods do not have a fixed computational cost per iteration since they grow the subspace in each step.
On the other hand, the subspace iteration algorithm is perfectly suitable for the situation just described: When a computation with a new matrix starts, we can take in the algorithm as initial subspace the latest subspace obtained from the previous matrix. This is the exact scenario encountered in electronic structure calculations [24, 25], where a subspace is to be computed at each SCF iteration. The matrix changes at each SCF iteration and the changes depend on the eigenvectors and eigenvalues obtained at the previous SCF step.
Figure 2: Polynomial used in the context of subspace iteration.
## 3 Invariant subspaces and the Grassmannian perspective
An alternative to using subspace iteration for computing a dominant invariant subspace is to consider a method whose goal is to optimize the objective function \(\phi(X)\), defined in (3), over all matrices \(X\) that have orthonormal columns. This idea is of course not new. For example, it is the main ingredient exploited in the TraceMin algorithm [20, 21], a method designed for computing an invariant subspace associated with smallest eigenvalues for standard and generalized eigenvalue problems.
A key observation in the definition (3) is that \(\phi(X)\) is invariant upon orthogonal transformations. In other words if \(W\) is a \(p\times p\) orthogonal matrix then, \(\phi(XW)=\phi(X)\). Noting that two orthonormal bases \(X_{1}\) and \(X_{2}\) of the same subspace are related by \(X_{2}=X_{1}W\) where \(W\) is some orthogonal \(p\times p\) matrix, this means that the objective function (3) depends only on the subspace spanned by \(X\) and not the particular orthonormal basis \(X\) employed to represent the subspace. This in turn suggests that it is possible, and possibly advantageous, to seek the optimum solution in the Grassmann manifold [12]. Recall, from e.g., [12], that the Stiefel manifold is the set
\[\mathrm{St}(n,p)=\{X\;\in\;\mathbb{R}^{n\times p}\colon\;X^{T}X=I\}. \tag{7}\]
while the Grassmann manifold is the quotient manifold
\[\mathrm{Gr}(n,p)=\mathrm{St}(n,p)/\mathrm{O}(p) \tag{8}\]
where \(\mathrm{O}(p)\) is the orthogonal group of unitary \(p\times p\) matrices. Each point on the manifold, one of the equivalence classes in the above definition, can be viewed as a subspace of dimension \(p\) of \(\mathbb{R}^{n}\). In other words,
\[\mathrm{Gr}(n,p)=\{\mathcal{X}\subseteq\mathbb{R}^{n}\colon\mathcal{X}\text{ is a subspace of}\dim(\mathcal{X})=p\}. \tag{9}\]
An element of \(\mathrm{Gr}(n,p)\) can be indirectly represented by a basis \(X\in\mathrm{St}(n,p)\) modulo an orthogonal transformation and so we denote it by \([X]\), keeping in mind that it does not matter which member \(X\) of the equivalence class is selected for this representation.
With this Riemannian viewpoint in mind, one can try to minimize \(\phi(X)\) over the Grassmann manifold using one of the many (generic) Riemannian optimization algorithms for smooth optimization. One has, for example, first-order and second-order algorithms that generalize gradient descent and Newton's iteration, resp. We refer to the foundational article [12] and the textbook [2] for detailed explanations. Despite its algorithmic simplicity, the Riemannian gradient method for this problem has not been treated in much detail beyond the basic formulation and its generically applicable theoretical properties. In the next sections, our aim is to work this out in detail and show in the numerical experiments that such a first-order method can be surprisingly effective and competitive.
### Gradient method on Grassmann
In a gradient approach we would like to produce an iterate \(X_{k+1}\in\mathbb{R}^{n\times p}\) starting from \(X_{k}\in\mathbb{R}^{n\times p}\) following a rule of the form
\[X_{k+1}=X_{k}-\mu\operatorname{grad}\phi(X_{k}), \tag{10}\]
where the step \(\mu>0\) is to be determined by some linesearch. The direction opposite to the gradient is a direction of decrease for the objective function \(\phi\). However, it is unclear what value of the step \(\mu\) yields the largest decrease in the value of \(\phi\). This means that some care has to be exercised in the search for the optimal \(\mu\). A gradient procedure may be appealing if a good approximate solution is already known, in which case, the gradient algorithm may provide a less expensive alternative to one step of the subspace iteration of Algorithm 1.
For a Riemannian method defined on a manifold, the search direction (here, \(-\operatorname{grad}\phi(X_{k})\)) always lies in the tangent space of the current point (here, \(X_{k}\)) of said manifold. This makes sense since directions orthogonal to the tangent space leave the objective function constant up to first order in the step if the iterates are restricted to lie on the manifold. For our problem, an element of the Grassmann manifold is represented by a member \(X\in\operatorname{St}(n,p)\) of its class. The tangent space of the Grassmann manifold at this \([X]\) is the set of matrices \(\Delta\in\mathbb{R}^{n\times p}\) satisfying the following orthogonality relation, see [12]:
\[X^{T}\Delta=0. \tag{11}\]
The Riemannian gradient of (3) at \([X]\) is
\[\operatorname{grad}\phi(X)=-\Pi AX\equiv-(AX-XC_{X}), \tag{12}\]
with the orthogonal projector \(\Pi=I-XX^{T}\), and the projected matrix \(C_{X}=X^{T}AX\).
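In code, (12) amounts to two matrix products; the following small Python helper (its name is ours) is reused in the sketches further below.

```python
import numpy as np

def riemannian_gradient(A, X):
    """grad phi(X) = -(A X - X C_X) with C_X = X^T A X, cf. (12).
    X is assumed to have orthonormal columns."""
    AX = A @ X
    return -(AX - X @ (X.T @ AX))
```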
Even though \(\operatorname{grad}\phi(X_{k})\) is in the tangent space (and a direction of decrease for \(\phi\)), we are not interested in \(X_{k+1}\) per se but in the subspace that it spans. In particular, since we use orthogonal bases to define the value of \(\phi\) on the manifold, we will need to 'correct' the non-orthogonality of the update (10) when considering \(\phi\). This will be discussed shortly. For now we establish a few simple relations.
Figure 3: Illustration of the line-search and the tangent space
For simplicity we denote \(X:=X_{k}\) an orthonormal basis of the current iterate, \(\tilde{X}:=X_{k+1}\) a (probably non-orthonormal) basis of the new iterate and \(G:=\operatorname{grad}\phi(X)\) the gradient direction. Then a step of the gradient method satisfies \(\tilde{X}=X-\mu G\) and a short calculation will show that
\[\phi(\tilde{X})=\phi(X)-\mu\operatorname{Tr}((AX)^{T}\Pi(AX))-\frac{\mu^{2}}{2 }\operatorname{Tr}((AX)^{T}\Pi A\Pi(AX)). \tag{13}\]
We also have the following relations
\[(AX)^{T}\Pi(AX) = -(AX)^{T}G=-(G^{T}(AX))^{T}=-G^{T}(AX) \tag{14}\] \[= (AX)^{T}\Pi^{T}\Pi(AX)=G^{T}G \tag{15}\]
where the second equality exploits the fact that \(\Pi\) is an orthogonal projector.
Thus, the coefficient of \(\mu\) in the right-hand side of (13) is nothing but \(\|G\|_{F}^{2}\) and, therefore, as expected, the direction of \(G\) is a descent direction: for small enough \(\mu\), \(\tilde{X}\) will be close to orthogonal, and regardless of the value of the trace in the last term, we would get a decrease of the objective function \(\phi\). This will be the case unless we have already reached a critical point where \(G=0\).
When looking at (13) it may appear at first that when \(A\) is SPD, it is possible to increase the value of \(\mu\) arbitrarily and decrease the objective function arbitrarily. This is clearly incorrect because we have not yet adjusted the basis: we need to find the subspace spanned by \(\tilde{X}\) and compute the related value of the objective function. In the following we address this issue by actually optimizing the objective function on the manifold.
Observe that since \(X^{T}G=0\) we have:
\[\tilde{X}^{T}\tilde{X}=(X-\mu G)^{T}(X-\mu G)=I+\mu^{2}G^{T}G.\]
Let the spectral decomposition of \(G^{T}G\) be
\[G^{T}G=VD_{\beta}V^{T} \tag{16}\]
and denote \(\beta=\texttt{Diag}(D_{\beta})\) the eigenvalues. Equivalently, the columns of \(V\) are the right singular vectors of \(G\) with diagonal matrix \(D_{\beta}\) containing the squares of the corresponding singular values. We now define the diagonal matrix
\[D_{\mu}\equiv(I+\mu^{2}D_{\beta})^{1/2}. \tag{17}\]
In order to make \(\tilde{X}\) orthogonal without changing its linear span we will multiply it to the right by \(VD_{\mu}^{-1}\), i.e., we define
\[X(\mu)=\tilde{X}VD_{\mu}^{-1}=(X-\mu G)VD_{\mu}^{-1}. \tag{18}\]
With this we will have,
\[X(\mu)^{T}X(\mu) = D_{\mu}^{-1}V^{T}(X-\mu G)^{T}(X-\mu G)VD_{\mu}^{-1}\] \[= D_{\mu}^{-1}V^{T}(I+\mu^{2}G^{T}G)VD_{\mu}^{-1}\] \[= D_{\mu}^{-1}(I+\mu^{2}D_{\beta})D_{\mu}^{-1}\] \[= I\]
as desired.
### Efficient linesearch
We can now tackle the issue of determining the optimal \(\mu\). If we set
\[X_{v}=XV,\qquad G_{v}=GV, \tag{19}\]
then from (14)-(15) we get the relation \(G_{v}^{T}AX_{v}=-G_{v}^{T}G_{v}\). In addition, note that \(G_{v}^{T}G_{v}=V^{T}G^{T}GV=D_{\beta}\). With these relations we can now show:
\[\phi(X(\mu)) = -\frac{1}{2}\operatorname{Tr}(D_{\mu}^{-1}V^{T}(X-\mu G)^{T}A(X- \mu G)VD_{\mu}^{-1}) \tag{20}\] \[= -\frac{1}{2}\operatorname{Tr}(D_{\mu}^{-1}(X_{v}-\mu G_{v})^{T}A( X_{v}-\mu G_{v})D_{\mu}^{-1})\] \[= -\frac{1}{2}\operatorname{Tr}\left(D_{\mu}^{-2}\left(X_{v}^{T}AX _{v}+2\mu(G_{v}^{T}G_{v})+\mu^{2}(G_{v}^{T}AG_{v})\right)\right)\] \[= -\frac{1}{2}\operatorname{Tr}\left(\left(I+\mu^{2}D_{\beta} \right)^{-1}\left(X_{v}^{T}AX_{v}+2\mu\ D_{\beta}+\mu^{2}\ G_{v}^{T}AG_{v} \right)\right)\]
We will simplify notation by introducing the diagonal matrices:
\[D_{\alpha} =\texttt{Diag}(\alpha_{i}); D_{\gamma} =\texttt{Diag}(\gamma_{i})\text{ with} \tag{21}\] \[\alpha_{i} =(X_{v}^{T}AX_{v})_{ii}; \gamma_{i} =(G_{v}^{T}AG_{v})_{ii}. \tag{22}\]
If we call \(u_{i}\) the left singular vector of \(G\) associated with \(\sqrt{\beta_{i}}\) then we get the useful relation
\[\gamma_{i}\equiv v_{i}^{T}G^{T}AGv_{i}=\beta_{i}u_{i}^{T}Au_{i}. \tag{23}\]
Observe that when \(D\) is a diagonal matrix and \(C\) is arbitrary, then \(\texttt{Diag}(DC)=D\)\(\texttt{Diag}(C)\). Therefore, (20) simplifies to:
\[\phi(X(\mu))=-\frac{1}{2}\operatorname{Tr}\left(\left(I+\mu^{2}D_{\beta} \right)^{-1}\left(D_{\alpha}+2\mu D_{\beta}+\mu^{2}D_{\gamma}\right)\right). \tag{24}\]
This is a rational function which is the sum of \(p\) terms corresponding to the \(p\) diagonal entries of the matrix involved in (24):
\[\phi(X(\mu))=-\frac{1}{2}\sum_{i=1}^{p}\frac{\alpha_{i}\ +2\beta_{i}\mu+ \gamma_{i}\mu^{2}}{1+\beta_{i}\mu^{2}}. \tag{25}\]
When \(\mu\to\infty\) each term \(\frac{\alpha_{i}+2\beta_{i}\mu+\gamma_{i}\mu^{2}}{1+\beta_{i}\mu^{2}}\) eventually _decreases_ to its limit \(\gamma_{i}/\beta_{i}\), so that \(\phi(X(\mu))\) increases towards \(-\frac{1}{2}\sum_{i}\gamma_{i}/\beta_{i}\). The derivative of \(\phi(X(\mu))\) takes the form:
\[\frac{d\phi(X(\mu))}{d\mu}=-\sum_{i=1}^{p}\ \frac{\beta_{i}\ +(\gamma_{i}\ - \alpha_{i}\ \beta_{i})\mu-\beta_{i}^{2}\mu^{2}}{(1+\beta_{i}\mu^{2})^{2}}. \tag{26}\]
This derivative is the negative sum of \(p\) branches each associated with a diagonal entry of the matrix of which the trace is taken in the above equation. The numerator \(\beta_{i}\ +(\gamma_{i}\ -\alpha_{i}\ \beta_{i})\mu-\beta_{i}^{2}\mu^{2}\) of each branch has the shape of an inverted
parabola and has a negative and a positive root. Therefore, the derivative (26) is nonpositive at zero2 and as \(\mu\) increases away from the origin, each of the branches will have a negative derivative. The derivative remains negative until \(\mu\) reaches the second root which is
Footnote 2: It is equal to \(-\sum\beta_{i}=-\|G\|_{F}^{2}\)
\[\xi_{i}=\frac{(\gamma_{i}\ -\alpha_{i}\beta_{i})+\sqrt{(\gamma_{i}\ -\alpha_{i} \beta_{i})^{2}+4\beta_{i}^{3}}}{2\beta_{i}^{2}}\ >\ 0. \tag{27}\]
Let \(\xi_{min}=\min_{i}\{\xi_{i}\}\) and \(\xi_{max}=\max_{i}\{\xi_{i}\}\). Clearly all branches of (25), and therefore also their sum, will decrease in value when \(\mu\) goes from zero to \(\xi_{min}\). Thus, the value of the objective function (25) will decrease. Similarly, when \(\mu\) increases from \(\xi_{max}\) to infinity, the objective function (25) will increase. The minimal value of (25) with respect to \(\mu\) can therefore be determined by seeking the minimum in the interval \([\xi_{min},\ \xi_{max}]\). Since both \(\phi\) and its derivative are available, this can be done efficiently by any standard root finding algorithm.
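To make the preceding derivation concrete, here is one possible Python sketch of the exact line search: it forms the diagonals \(\alpha_{i},\beta_{i},\gamma_{i}\), brackets the minimizer with the roots (27), and then locates a zero of the derivative (26) with a standard safeguarded root finder. The helper name, the tolerance used to discard negligible directions, and the use of `scipy.optimize.brentq` are our own choices; a careful implementation would also use the cancellation-free formulas of Section 6.1.

```python
import numpy as np
from scipy.optimize import brentq

def exact_linesearch(A, X, G):
    """Step mu minimizing phi(X(mu)) along the gradient direction, cf. (24)-(27)."""
    beta, V = np.linalg.eigh(G.T @ G)            # G^T G = V D_beta V^T
    if beta.max() <= 0.0:                        # G = 0: critical point, no step
        return 0.0
    Xv, Gv = X @ V, G @ V
    alpha = np.einsum('ij,ij->j', Xv, A @ Xv)    # diag(X_v^T A X_v)
    gamma = np.einsum('ij,ij->j', Gv, A @ Gv)    # diag(G_v^T A G_v)

    def dphi(mu):                                # derivative (26)
        num = beta + (gamma - alpha * beta) * mu - beta**2 * mu**2
        return -np.sum(num / (1.0 + beta * mu**2) ** 2)

    keep = beta > 1e-14 * beta.max()             # drop negligible directions
    d = gamma[keep] - alpha[keep] * beta[keep]
    xi = (d + np.sqrt(d**2 + 4.0 * beta[keep]**3)) / (2.0 * beta[keep]**2)
    lo, hi = xi.min(), xi.max()                  # bracket [xi_min, xi_max]
    return lo if np.isclose(lo, hi) else brentq(dphi, lo, hi)
```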
The algorithm to get the optimal value for \(\mu\) is described in Algorithm 3. To obtain accurate solutions, some care is required in the numerical implementation due to floating point arithmetic. We explain this in more detail in Section 6.1. Observe also that \(X_{k+1}\) is calculated from the \(Q\) factor of the QR factorization instead of the matrix square root as in (18). The former gives us a numerically orthonormal matrix whereas the latter might lead to an accumulated error. Theoretically, however, they give the same subspaces.
```
1:Start: Select initial \(X_{0}\) such that \(X_{0}^{T}X_{0}=I\).
2:for\(k=0,1,\ldots\)do
3: Compute \(G:=\operatorname{grad}\phi(X_{k})=-(AX_{k}-X_{k}C_{X_{k}})\) with \(C_{X_{k}}=X_{k}^{T}AX_{k}\).
4:if\(\|G\|<\operatorname{tol}\)then
5:return
6:endif
7: Diagonalize \(G^{T}G=VD_{\beta}V^{T}\).
8: Compute \(D_{\alpha},D_{\gamma}\) from (21) with \(X=X_{k}\).
9: Compute \(\mu\) as the (approximate) minimizer (25) using Get_Mu.
10: Compute \(X_{k+1}\) as the Q factor from the QR decomposition of \(X_{k}-\mu G\).
11:endfor
```
**Algorithm 2**Riemannian Gradient Descent\((A,X)\)
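Putting the pieces together, a compact Python rendering of Algorithm 2 could look as follows; it reuses the `riemannian_gradient` and `exact_linesearch` helpers sketched earlier, and the tolerance and iteration cap are arbitrary choices.

```python
import numpy as np

def riemannian_gd(A, X0, tol=1e-8, max_its=1000):
    """Riemannian steepest descent with exact line search (cf. Algorithm 2)."""
    X, _ = np.linalg.qr(X0)                      # ensure an orthonormal start
    for _ in range(max_its):
        G = riemannian_gradient(A, X)
        if np.linalg.norm(G) < tol:
            break
        mu = exact_linesearch(A, X, G)
        X, _ = np.linalg.qr(X - mu * G)          # retraction via the Q factor
    return X
```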
## 4 Convergence of the gradient method
We prove that the gradient method from Algorithm 2 converges globally to a critical point, that is, where the Riemannian gradient is zero. This result is valid for any initial iterate \(X_{0}\) but it does not give a linear rate of convergence. When \(X_{0}\) is close to the dominant subspace, we also prove a linear (exponential) rate of convergence of the objective function. The closeness condition depends
on the spectral gap \(\delta\) of the dominant subspace but only as \(O(\sqrt{\delta})\). This result seems to be new.
### Global convergence of the gradient vector field
We examine the expression (25) in order to obtain a useful lower bound. We first rewrite (25) as follows:
\[\phi(X(\mu)) =-\frac{1}{2}\sum_{i=1}^{p}\frac{\alpha_{i}(1+\beta_{i}\mu^{2})-\alpha_{i}\beta_{i}\mu^{2}+2\beta_{i}\mu+\gamma_{i}\mu^{2}}{1+\beta_{i}\mu^{2}}\] \[=-\frac{1}{2}\sum_{i=1}^{p}\alpha_{i}-\frac{1}{2}\sum_{i=1}^{p}\frac{2\beta_{i}\mu+(\gamma_{i}-\alpha_{i}\beta_{i})\mu^{2}}{1+\beta_{i}\mu^{2}}. \tag{28}\]
The first sum on the right-hand side is just the objective function before the update, that is, the value of \(\phi\) at the current iterate \(X(0)=X\). The second sum depends on the step \(\mu\) and thus represents what may be termed the 'loss' of the objective function for a given \(\mu\).
**Lemma 1**.: _Define \(L\equiv\lambda_{max}(A)-\lambda_{min}(A)\). Then for any given \(\mu\geq 0\) the 'loss' term (2nd term in right-hand side of (28)) satisfies_
\[\frac{1}{2}\sum_{i=1}^{p}\frac{2\beta_{i}\mu+(\gamma_{i}-\alpha_{i}\beta_{i})\mu^{2}}{1+\beta_{i}\mu^{2}}\geq\frac{(2-L\mu)\mu}{2(1+\beta_{max}\mu^{2})}\cdot\|G\|_{F}^{2}, \tag{29}\]
_where \(G=\operatorname{grad}\phi(X(0))\) and \(\beta_{max}=\max\beta_{i}\)._
Proof.: We exploit (23) and set \(\tau_{i}=u_{i}^{T}Au_{i}\) in order to rewrite the term \(\gamma_{i}\ -\alpha_{i}\ \beta_{i}\) in the numerator as \(\gamma_{i}\ -\alpha_{i}\ \beta_{i}=(\tau_{i}-\alpha_{i})\beta_{i}\). From (19) and (22), we have \(\alpha_{i}=x_{i}^{T}Ax_{i}\) with \(x_{i}=Xv_{i}\). Hence, the term \(\tau_{i}-\alpha_{i}\equiv u_{i}^{T}Au_{i}-x_{i}^{T}Ax_{i}\) represents the difference between two Rayleigh quotients with respect to \(A\) and therefore, \(\tau_{i}-\alpha_{i}\geq-L\). Thus the 'loss' term satisfies
\[\frac{1}{2}\sum_{i=1}^{p}\frac{2\beta_{i}\mu+(\gamma_{i}-\alpha_{i}\beta_{i})\mu^{2}}{1+\beta_{i}\mu^{2}}\geq\frac{1}{2}\sum_{i=1}^{p}\frac{2-L\mu}{1+\beta_{i}\mu^{2}}\beta_{i}\mu. \tag{30}\]
The denominators \(1+\beta_{i}\mu^{2}\) can be bounded from above by \(1+\beta_{max}\mu^{2}\) and this
will result in:
\[\frac{1}{2}\sum_{i=1}^{p}\frac{2\beta_{i}\mu+(\gamma_{i}-\alpha_{i}\beta_{i})\mu^{2}}{1+\beta_{i}\mu^{2}}\geq\frac{1}{2}\sum_{i=1}^{p}\frac{2-L\mu}{1+\beta_{max}\mu^{2}}\beta_{i}\mu=\frac{(2-L\mu)\mu}{2(1+\beta_{max}\mu^{2})}\sum_{i=1}^{p}\beta_{i}. \tag{31}\]
The proof ends by noticing that \(\sum\beta_{i}=\|G\|_{F}^{2}\) due to (16).
We now state a useful global upper bound for the biggest singular value of the Riemannian gradient. This result is proved in [5, Lemma 4] using properties from linear algebra.
**Lemma 2**.: _The spectral norm of the Riemannian gradient \(G\) of \(\phi\) satisfies \(\|G\|_{2}\leq L/2\) at any point._
**Lemma 3**.: _If \(\mu_{opt}\) is the optimal \(\mu\) obtained from a line search at a given \(X\), then_
\[\phi(X(\mu_{opt}))\leq-\frac{1}{2}\sum_{i=1}^{p}\alpha_{i}-\frac{2}{5}\frac{\|G\|_{F}^{2}}{L}. \tag{32}\]
Proof.: The right-hand side (29) is nearly minimized for \(\mu_{s}=1/L\) so we consider this special value of \(\mu\). We have
\[\phi(X(\mu_{opt}))\leq\phi(X(\mu_{s}))\leq-\frac{1}{2}\sum_{i=1}^{p}\alpha_{i}-\frac{(2-L\mu_{s})\mu_{s}}{2(1+\beta_{max}\mu_{s}^{2})}\cdot\|G\|_{F}^{2}.\]
The second inequality in the above equation follows from (28) and the previous Lemma 1. Calculating the right-hand side for \(\mu_{s}=1/L\) yields:
\[\phi(X(\mu_{opt}))\leq-\frac{1}{2}\sum_{i=1}^{p}\alpha_{i}-\frac{\|G\|_{F}^{2}}{2(L+\beta_{max}/L)}.\]
By Lemma 2, we have \(\beta_{max}\leq\frac{L^{2}}{4}\) since \(\beta_{max}\) is the biggest eigenvalue of \(G^{T}G\). Plugging this into the last inequality we get the desired result.
The property (32) in Lemma 3 is known as a sufficient decrease condition of the line search. We can now follow standard arguments from optimization theory to conclude that (Riemannian) steepest descent for the smooth objective function \(\phi\) converges in gradient norm.
**Theorem 4**.: _The sequence of gradient matrices \(\operatorname{grad}\phi(X_{k})\) generated by Riemannian gradient descent with exact line search converges (unconditionally) to zero starting from any \(X_{0}\)._
Proof.: We will proceed by avoiding the use of indices. First, we observe that the traces of the iterates, that is, the consecutive values of \(\phi(X(\mu_{opt}))\) converge since they constitute a bounded decreasing sequence. Recall that the first term, that is, minus the half sum of the \(\alpha_{i}\)'s in the right-hand side of (32), is the value
of the objective function at the previous iterate. Thus, the second term in (32) is bounded from above by the difference between two consecutive traces:
\[0\leq\frac{2}{5}\frac{\|G\|_{F}^{2}}{L}\leq-\phi(X(\mu_{opt}))-\frac{1}{2}\sum_{ i=1}^{p}\alpha_{i}=-\phi(X(\mu_{opt}))+\phi(X), \tag{33}\]
and therefore it converges to zero. This implies that the sequence of gradients also converges to \(0\).
The bound of Lemma 3 can be used to prove some particular rate of convergence for the gradient vector field. This argument is again classical for smooth optimization. It is a slow (algebraic) rate but it holds for any initial guess.
**Proposition 5**.: _The iterates \(X_{k}\) of Algorithm 2 satisfy_
\[\min_{k=0,\ldots,K-1}\|\operatorname{grad}\phi(X_{k})\|\leq\sqrt{\frac{5}{2}L( \phi(X_{0})-\phi^{*})}\frac{1}{\sqrt{K}},\]
_where \(\phi^{*}\) is the minimum of \(\phi\)._
Proof.: Since \(\phi^{*}\) is the minimum of \(\phi\), it holds
\[\phi(X_{0})-\phi^{*} \geq\phi(X_{0})-\phi(X_{K})=\sum_{k=0}^{K-1}(\phi(X_{k})-\phi(X_{ k+1}))\] \[\geq\frac{2}{5}\frac{1}{L}K\min_{k=0,\ldots,K-1}\|\operatorname{ grad}\phi(X_{k})\|^{2},\]
where the last inequality follows by Lemma 3 (see also (33)). A simple rearrangement gives the desired result.
### Local linear convergence
We now turn to the question of fast but local convergence to the dominant \(p\) dimensional subspace \(\mathcal{V}_{\alpha}=\operatorname{span}(V_{\alpha})\) of \(A\). We therefore also assume a non-zero spectral gap \(\delta=\lambda_{p}-\lambda_{p+1}>0\). We denote the global optimal value as \(\phi^{*}=\phi(V_{\alpha})\).
Let \(T_{\mathcal{X}}\operatorname{Gr}(n,p)\) denote the tangent space of the Grassmann manifold \(\operatorname{Gr}(n,p)\) at \(\mathcal{X}\in\operatorname{Gr}(n,p)\) (represented by an orthonormal matrix). We take the inner product between two tangent vectors in \(T_{\mathcal{X}}\operatorname{Gr}(n,p)\) as
\[\langle\Delta_{1},\Delta_{2}\rangle_{\mathcal{X}}=\operatorname{Tr}(\Delta_{1 }^{T}\Delta_{2})\ \ \text{with}\ \Delta_{1},\Delta_{2}\in T_{\mathcal{X}} \operatorname{Gr}(n,p).\]
Here, \(\Delta_{1}\) and \(\Delta_{2}\) are tangent vectors of the same orthonormal representative \(X\). Observe that the inner product is invariant to the choice of this representative since the inner product of \(\bar{\Delta}_{1}=\Delta_{1}R\) and \(\bar{\Delta}_{2}=\Delta_{2}R\) with orthogonal \(R\), is the same as \(\langle\Delta_{1},\Delta_{2}\rangle_{\mathcal{X}}\). The norm induced by this inner product in any tangent space is the Frobenius norm, which is therefore compatible with our other theoretical results.
The Riemannian structure of the Grassmann manifold can be conveniently described by the notion of _principal angles_ between subspaces. Given two subspaces \(\mathcal{X},\mathcal{Y}\in\operatorname{Gr}(n,p)\) spanned by the orthonormal matrices \(X,Y\) respectively, the principal angles between them are \(0\leq\theta_{1}\leq\cdots\leq\theta_{p}\leq\pi/2\) obtained from the SVD
\[Y^{T}X=U_{1}\cos\theta\ V_{1}^{T} \tag{34}\]
where \(U_{1}\in\mathbb{R}^{p\times p},V_{1}\in\mathbb{R}^{p\times p}\) are orthogonal and \(\cos\theta=\operatorname{diag}(\cos\theta_{1},...,\cos\theta_{p})\).
We can express the intrinsic distance induced by the Riemannian inner product discussed above as
\[\operatorname{dist}(\mathcal{X},\mathcal{Y})=\sqrt{\theta_{1}^{2}+...+\theta_ {p}^{2}}=\|\theta\|_{2}, \tag{35}\]
where \(\theta=(\theta_{1},\ldots,\theta_{p})^{T}\).
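The principal angles and the distance (35) are easily computed from one small SVD; a possible Python helper (the name and the clipping that guards against rounding noise are ours) is:

```python
import numpy as np

def grassmann_distance(X, Y):
    """Principal angles and intrinsic distance (34)-(35) between span(X) and span(Y),
    where X and Y are orthonormal n x p matrices."""
    s = np.linalg.svd(Y.T @ X, compute_uv=False)   # singular values = cos(theta_i)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(theta), theta
```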
The convexity structure of the Rayleigh quotient \(\phi\) on the Grassmann manifold, with respect to the aforementioned Riemannian structure, is studied in detail in [5]. In the next proposition, we summarize all the important properties that we use for deriving a linear convergence rate for Algorithm 2. In the rest, we denote subspaces of the Grassmann manifold by orthonormal matrices that represent them.
**Proposition 6**.: _Let \(0\leq\theta_{1}\leq\cdots\leq\theta_{p}<\pi/2\) be the principal angles between the subspaces \(\mathcal{X}=\operatorname{span}(X)\) and \(\mathcal{V}_{\alpha}=\operatorname{span}(V_{\alpha})\). The function \(\phi\) satisfies_
1. \(\phi(X)-\phi^{*}\geq c_{Q}\,\delta\operatorname{dist}^{2}(X,V_{\alpha})\) _(quadratic growth)_
2. \(\|\operatorname{grad}\phi(X)\|_{F}^{2}\geq 4\,c_{Q}\,\delta\,a^{2}(X)(\phi(X)- \phi^{*})\) _(gradient dominance)_
3. _The eigenvalues of the Riemannian Hessian of_ \(\phi\) _are upper bounded by_ \(L\)_. This implies_ \(\phi(X)-\phi^{*}\leq\frac{1}{2}L\mathrm{dist}^{2}(X,V_{\alpha})\) _(smoothness)_
4. \(\|\operatorname{grad}\phi(X)\|_{2}\leq\frac{1}{2}L\) _(cfr. Lemma_ 2_)_
_where \(c_{Q}=2/\pi^{2}\), \(\delta=\lambda_{p}-\lambda_{p+1}\), \(L=\lambda_{max}(A)-\lambda_{min}(A)\), and \(a(X)=\theta_{p}/\tan\theta_{p}\)._
Next, we use these properties to prove an exponential convergence rate for the function values of \(\phi\). In order to guarantee a uniform lower bound for \(a(X_{k})\) at the iterates \(X_{k}\) of Algorithm 2, we need to start from a distance at most \(\mathcal{O}(\sqrt{\delta})\) from the optimum.
**Theorem 7**.: _Algorithm 2 starting from a point \(X_{0}\) such that_
\[\operatorname{dist}(X_{0},V_{\alpha})\leq\sqrt{\frac{2c_{Q}\delta}{L}}<1\]
_satisfies for all \(k\geq 0\)_
\[\phi(X_{k+1})-\phi^{*}\leq\left(1-\frac{8}{5}c_{Q}a^{2}(X_{k})\frac{\delta}{L }\right)(\phi(X_{k})-\phi^{*}).\]
Proof.: By Lemma 3, we get
\[\phi(X_{1})-\phi^{*}\leq\phi(X_{0})-\phi^{*}-\frac{2}{5L}\|\operatorname{grad} \phi(X_{0})\|^{2}.\]
By the gradient dominance of \(\phi\) in Proposition 6, we have
\[\phi(X_{1})-\phi^{*}\leq\left(1-\frac{8}{5}c_{Q}a^{2}(X_{0})\frac{\delta}{L} \right)(\phi(X_{0})-\phi^{*})\]
thus the statement is correct for \(k=0\).
We proceed by induction. Assume that the statement is correct for \(0,1,...,k\). We then also have
\[\phi(X_{k+1})-\phi^{*}\leq\left(1-\frac{8}{5}c_{Q}a^{2}(X_{k})\frac{\delta}{L} \right)(\phi(X_{k})-\phi^{*})\leq\phi(X_{0})-\phi^{*}.\]
Then by quadratic growth and smoothness of \(\phi\) in Proposition 6, we have
\[\operatorname{dist}^{2}(X_{k+1},V_{\alpha}) \leq\frac{1}{c_{Q}\delta}(\phi(X_{k+1})-\phi^{*})\leq\frac{1}{c_{ Q}\delta}(\phi(X_{0})-\phi^{*})\] \[\leq\frac{L}{2c_{Q}\delta}\operatorname{dist}^{2}(X_{0},V_{ \alpha})\leq 1,\]
by the assumption on the initial distance on \(X_{0}\).
By Lemma 3, we have that
\[\phi(X_{k+2})-\phi^{*}\leq\phi(X_{k+1})-\phi^{*}-\frac{2}{5L}\|\operatorname{ grad}\phi(X_{k+1})\|^{2}.\]
Now using as above the gradient dominance of \(\phi\) in Proposition 6, we get that the statement in the theorem also holds for \(k+1\).
The convergence factor in the previous theorem still involves a quantity \(a(X_{k})\) that depends on the iterate \(X_{k}\) at step \(k\). To get a convergence factor for all \(k\) that only depends on the initial step, we proceed as follows.
**Corollary 8**.: _Algorithm 2 where \(X_{0}\) satisfies the assumption of Thm. 7 produces iterates \(X_{k}\) that satisfy_
\[\phi(X_{k})-\phi^{*}\leq\left(1-c_{Q}\frac{2\delta}{5L}\right)^{k}(\phi(X_{0}) -\phi^{*})\]
_for all \(k\geq 0\)_
Proof.: Recall that \(a(X_{k})=\theta_{p}/\tan\theta_{p}\) with \(\theta_{p}\) the largest principal angle between \(X_{k}\) and \(V_{\alpha}\). The bound of the previous theorem also provides that \(\operatorname{dist}(X_{k},V_{\alpha})\leq 1\), thus by elementary properties of \(\cos(x)\) and \(x/\tan(x)\) and using (35), we have
\[a(X_{k})\geq\cos(\theta_{p}(X_{k},V_{\alpha}))\geq\cos(\operatorname{dist}(X_{ k},V_{\alpha}))\geq\cos(1)\geq\frac{1}{2}.\]
Plugging this in the result of Theorem 7 and by an induction argument, we get the desired result.
Finally, we present an iteration complexity in function value. The \(\tilde{\mathcal{O}}\) notation hides non-leading logarithmic factors.
**Corollary 9**.: _Algorithm 2 where \(X_{0}\) satisfies the assumption of Thm. 7 computes an estimate \(X_{T}\) of \(V_{\alpha}\) such that \(\operatorname{dist}(X_{T},V_{\alpha})\leq\epsilon\) in at most_
\[T=\frac{5\pi^{2}L}{8\delta}\log\frac{\phi(X_{0})-\phi^{*}}{c_{Q}\varepsilon \delta}+1=\tilde{\mathcal{O}}\left(\frac{L}{\delta}\log\frac{\phi(X_{0})-\phi^ {*}}{\varepsilon}\right).\]
_iterations._
Proof.: For \(\operatorname{dist}(X_{T},V_{\alpha})\leq\epsilon\), it suffices to have
\[\phi(X_{T})-\phi^{*}\leq c_{Q}\epsilon^{2}\delta\]
by quadratic growth of \(\phi\) in Proposition 6. Using \((1-c)^{k}\leq\exp(-ck)\) for all \(k\geq 0\) and \(0\leq c\leq 1\), Corollary 8 gives that it suffices to choose \(T\) as the smallest integer such that
\[\phi(X_{T})-\phi^{*}\leq\exp\left(-c_{Q}\frac{2\delta}{5L}T\right)(\phi(X_{0} )-\phi^{*})\leq c_{Q}\epsilon^{2}\delta.\]
Solving for \(T\) and substituting \(c_{Q}=4/\pi^{2}\), we get the required statement.
## 5 Accelerated gradient method
It is natural to consider an accelerated gradient algorithm as an improvement to the standard gradient method. For convex quadratic functions on \(\mathbb{R}^{n}\), the best example is the conjugate gradient algorithm since it speeds up convergence significantly at virtually the same cost as the gradient method. In our case, the objective function is defined on \(\operatorname{Gr}(n,p)\) and no longer quadratic. Hence, other ideas are needed to accelerate the gradient method. While there exist a few ways to accelerate the gradient method, they all introduce some kind of momentum term and compute a search direction \(P\) recursively based on the previous iteration.
### Polak-Ribiere nonlinear conjugate gradients
A popular and simple way to accelerate the gradient method is the Polak-Ribiere rule, which calculates a 'conjugate direction' as

\[P=G+\beta P_{\text{old}}\quad\text{with}\quad\beta=\frac{\langle G-G_{\text{old}},G\rangle}{\langle G_{\text{old}},G_{\text{old}}\rangle}. \tag{36}\]
Here, we avoid indices by calling \(G_{\text{old}}\) the old gradient (usually indexed by \(k\)) and \(G\) the new one (usually indexed by \(k+1\)). The inner product used above is the standard Frobenius inner product of matrices where \(\langle X,Y\rangle=\operatorname{Tr}(Y^{T}X)\).
In addition to the above, we will also need the tangent condition (11) to be satisfied for the new direction, so we project the new \(P\) from above onto the tangent space:
\[P\leftarrow(I-XX^{T})P.\]
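In code, the momentum update followed by the projection onto the tangent space can be sketched as below (the helper name is ours; the direction is written as \(P=G_{\text{new}}+\beta P_{\text{old}}\), consistent with Algorithm 4):

```python
import numpy as np

def polak_ribiere_direction(X, G_new, G_old, P_old):
    """Polak-Ribiere search direction, projected back onto the tangent space at X."""
    beta = np.sum((G_new - G_old) * G_new) / np.sum(G_old * G_old)
    P = G_new + beta * P_old
    return P - X @ (X.T @ P)          # enforce X^T P = 0, cf. (11)
```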
### Line search
In order to use \(P\) instead of \(G\), we need to modify the line search in Algorithm 2. We will explain the differences for a general \(P\).
Let \(X(\mu)=X_{k+1}\) and \(X=X_{k}\) denote the new and old point on Grassmann. As before, we construct an iteration
\[X(\mu)=(X-\mu P)M\]
where the search direction \(P\) is a tangent vector, \(P^{T}X=0\), and gradient-related, \(\operatorname{Tr}(G^{T}P)>0\) with \(G=\operatorname{grad}\phi(X)\). In addition, \(M\) is a normalization matrix such that \(X(\mu)^{T}X(\mu)=I\).
A small calculation shows that the same normalization idea for \(M\) from the gradient method (when \(P=G\)) can be used here: from the eigenvalue decomposition
\[VD_{\beta}V^{T}=P^{T}P\]
we define
\[D_{\mu}=(I+\mu^{2}D_{\beta})^{1/2}.\]
Then it is easy to verify that
\[X(\mu)=(X-\mu P)VD_{\mu}^{-1}\]
has orthonormal columns.
Let \(P_{v}=PV\) and \(X_{v}=XV\). To perform the linesearch for \(\mu\), we evaluate \(\phi\) in the new point:
\[\phi(X(\mu)) = -\frac{1}{2}\operatorname{Tr}(D_{\mu}^{-1}V^{T}(X-\mu P)^{T}A(X- \mu P)VD_{\mu}^{-1}) \tag{37}\] \[= -\frac{1}{2}\operatorname{Tr}(D_{\mu}^{-1}(X_{v}-\mu P_{v})^{T}A (X_{v}-\mu P_{v})D_{\mu}^{-1})\] \[= -\frac{1}{2}\operatorname{Tr}\left(D_{\mu}^{-2}\left(X_{v}^{T}AX _{v}-2\mu(P_{v}^{T}AX_{v})+\mu^{2}(P_{v}^{T}AP_{v})\right)\right)\] \[= -\frac{1}{2}\operatorname{Tr}\left(\left(I+\mu^{2}D_{\beta} \right)^{-1}\left(D_{\alpha}+2\mu\ D_{\zeta}+\mu^{2}\ D_{\gamma}\right)\right)\]
where
\[D_{\alpha}=\operatorname{diag}(X_{v}^{T}AX_{v}),\ \ \ D_{\beta}= \operatorname{diag}(P_{v}^{T}P_{v}), \tag{38}\] \[D_{\gamma}=\operatorname{diag}(P_{v}^{T}AP_{v}),\ \ \ \ D_{\zeta}=- \operatorname{diag}(P_{v}^{T}AX_{v}).\]
Comparing to (24), we see that a new \(D_{\zeta}\) has appeared. Observe that \(D_{\alpha},D_{\beta},D_{\gamma}\) all have non-negative diagonal but this is not guaranteed for \(D_{\zeta}\). If \(P=G\), then \(-P_{v}^{T}AX_{v}=P_{v}^{T}P_{v}\) and thus \(D_{\zeta}=D_{\beta}\). For a gradient related \(P\) that is a tangent vector, we know that \(0\leq\operatorname{Tr}(P^{T}G)=-\operatorname{Tr}(VP^{T}\Pi AXV)=- \operatorname{Tr}(P_{v}^{T}AX_{v})=\operatorname{Tr}(D_{\zeta})\). However, that does not mean that all the diagonal entries of \(D_{\zeta}\) are non-negative, only their sum is. This lack of positive diagonal complicates the line search, as we will discuss next.
Let \(\alpha_{i},\beta_{i},\gamma_{i},\zeta_{i}\) be the \(i\)th diagonal of \(D_{\alpha},D_{\beta},D_{\gamma},D_{\zeta}\), resp. The rational function that represents (37) and generalizes (25) satisfies
\[\phi(X(\mu))=-\frac{1}{2}\sum_{i=1}^{p}\frac{\alpha_{i}\ +2\zeta_{i}\mu+\gamma_{i}\mu^{2}}{ 1+\beta_{i}\mu^{2}},\]
with derivative
\[\frac{d\phi(X(\mu))}{d\mu}=-\sum_{i=1}^{p}\ \frac{\zeta_{i}\ +(\gamma_{i}\ - \alpha_{i}\ \beta_{i})\mu-\beta_{i}\zeta_{i}\ \mu^{2}}{(1+\beta_{i}\mu^{2})^{2}}. \tag{39}\]
Since we do not know the sign of \(\zeta_{i}\), each term in (39) has a quadratic in the numerator that can be convex or concave. This is different from (26), where it is always convex (accounting for the negative sign outside the sum) since \(\zeta_{i}=\beta_{i}\). In case there is a term with a concave quadratic, we can therefore not directly repeat the same arguments for the bracketing interval of \(\mu\) based on the zeros of the quadratics in (39). When there are negative \(\zeta_{i}\)'s, we could restart the iteration and replace \(P\) by the gradient \(G\). Since this wastes computational work, we prefer to simply disregard the branches that are concave when determining the bracket interval.
Overall, the line-search step for the CG approach will cost a little more than that for the gradient method, since we have an additional (diagonal) matrix to compute, namely \(D_{\zeta}\). In addition, as is standard practice, when the search direction \(P\) has a non-positive inner product with the (Riemannian) gradient (i.e., it is not a descent direction), it is reset to the gradient.
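A possible Python sketch of this generalized line search is given below; following the discussion above, the bracket is built from the convex branches only (those with \(\zeta_{i}>0\)) and the step is obtained by a bounded scalar minimization of (37). The helper name, the fallback interval when no convex branch exists, and the use of `scipy.optimize.minimize_scalar` are our own choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cg_linesearch(A, X, P):
    """Step mu approximately minimizing phi(X(mu)) for a tangent direction P, cf. (37)-(38)."""
    beta, V = np.linalg.eigh(P.T @ P)
    if beta.max() <= 0.0:
        return 0.0
    Xv, Pv = X @ V, P @ V
    alpha = np.einsum('ij,ij->j', Xv, A @ Xv)
    gamma = np.einsum('ij,ij->j', Pv, A @ Pv)
    zeta = -np.einsum('ij,ij->j', Pv, A @ Xv)    # D_zeta from (38)

    def phi_mu(mu):                              # rational function phi(X(mu)) given after (38)
        return -0.5 * np.sum((alpha + 2*zeta*mu + gamma*mu**2) / (1 + beta*mu**2))

    conv = (zeta > 0) & (beta > 1e-14 * beta.max())    # convex branches only
    if np.any(conv):
        d = gamma[conv] - alpha[conv] * beta[conv]
        hi = np.max((d + np.sqrt(d**2 + 4*beta[conv]*zeta[conv]**2))
                    / (2*beta[conv]*zeta[conv]))
    else:
        hi = 1.0                                 # arbitrary fallback interval
    return minimize_scalar(phi_mu, bounds=(0.0, hi), method='bounded').x
```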
```
1:Start: Select initial \(X\) such that \(X^{T}X=I\).
2:Compute \(G=-(AX-XC_{X})\) with \(C_{X}=X^{T}AX\); Set \(P:=G\)
3:while\(\|G\|_{F}>tol\)do
4: Approximately minimize \(\phi(X(\mu))\) over \(\mu\) based on (37).
5: Set \([X,R]=qr(X-\mu P,0)\)
6: Compute \(G_{new}=-(AX-XC_{X})\) with \(C_{X}=X^{T}AX\)
7: Compute \(\beta=\frac{\langle G_{new}-G,G_{new}\rangle}{\langle G,G\rangle}\)
8: Update \(P_{new}:=(I-XX^{T})(G_{new}+\beta P)\)
9: Keep \(G:=G_{new},P:=P_{new}\)
10:endwhile
```
**Algorithm 4**Riemannian Conjugate Gradient Descent
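A compact Python rendering of Algorithm 4, reusing the `riemannian_gradient`, `cg_linesearch`, and `polak_ribiere_direction` helpers sketched earlier, might look as follows; the reset test and the tolerances are our own choices.

```python
import numpy as np

def riemannian_cg(A, X0, tol=1e-8, max_its=1000):
    """Riemannian nonlinear CG with Polak-Ribiere momentum (cf. Algorithm 4)."""
    X, _ = np.linalg.qr(X0)
    G = riemannian_gradient(A, X)
    P = G.copy()
    for _ in range(max_its):
        if np.linalg.norm(G) < tol:
            break
        mu = cg_linesearch(A, X, P)
        X, _ = np.linalg.qr(X - mu * P)          # retraction via the Q factor
        G_new = riemannian_gradient(A, X)
        P = polak_ribiere_direction(X, G_new, G, P)
        if np.sum(P * G_new) <= 0.0:             # not gradient-related: restart
            P = G_new.copy()
        G = G_new
    return X
```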
## 6 Numerical implementation and experiments
### Efficient and accurate implementation
A proper numerical implementation of Algorithms 2 and 4, and in particular the linesearch, is critical to obtain highly accurate solutions. We highlight here
three important aspects. However, as we will see in the numerical experiments, the nonlinear CG algorithm still suffers from numerical cancellation. We leave this issue for future work.
In addition, we give some details on how to improve the efficiency of a direct implementation of these algorithms so that they require the same number of matrix vector products with \(A\) as subspace iteration.
**Calculation of bracket** The \(\beta_{i}\)'s in (27) can be very small in some situations. If we set \(\delta_{i}=\gamma_{i}-\alpha_{i}\beta_{i}\) then cancellation may cause loss of accuracy in formula (27) when \(\delta_{i}<0\). We can circumvent this by observing that in this case:
\[\xi_{i}=\frac{\sqrt{\delta_{i}^{2}+4\beta_{i}^{3}}-|\delta_{i}|}{2\beta_{i}^{2 }}=\frac{4\beta_{i}^{3}}{2\beta_{i}^{2}(|\delta_{i}|+\sqrt{\delta_{i}^{2}+4 \beta_{i}^{3}})}=\frac{2}{|\delta_{i}/\beta_{i}|+\sqrt{(\delta_{i}/\beta_{i})^{ 2}+4\beta_{i}}}. \tag{40}\]
When \(\delta_{i}>0\) we can simply use (27) which we rewrite as
\[\xi_{i}=\frac{1}{2\beta_{i}}\left(\frac{\delta_{i}}{\beta_{i}}+\sqrt{\left( \frac{\delta_{i}}{\beta_{i}}\right)^{2}+4\beta_{i}}\right). \tag{41}\]
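A literal transcription of (40)-(41) for a single index (the scalar helper name is ours):

```python
import numpy as np

def safe_xi(alpha_i, beta_i, gamma_i):
    """Bracket endpoint xi_i evaluated without cancellation, cf. (40)-(41)."""
    r = (gamma_i - alpha_i * beta_i) / beta_i            # delta_i / beta_i
    if r < 0.0:
        return 2.0 / (abs(r) + np.sqrt(r * r + 4.0 * beta_i))        # (40)
    return (r + np.sqrt(r * r + 4.0 * beta_i)) / (2.0 * beta_i)      # (41)
```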
**Calculation of the minimizer** For numerical reasons, it is advisable to compute a root of \(\phi^{\prime}\) instead of a minimum of \(\phi\). This can be done in an effective way by a safe-guarded root finding algorithm, like the Dekker-Brent algorithm from fzero in Matlab. Since this algorithm converges superlinearly, we rarely need more than 10 function evaluations to calculate the minimizer of \(\phi\) in double precision.
**Enforcing orthogonality** As usual in Riemannian algorithms on the Grassmann manifold using Stiefel representatives, it is important to numerically impose the orthogonality condition of the tangent vectors by explicit projection onto the tangent space. This is especially important for the gradient \(G\) which requires a second orthogonalization.
**Efficient matvecs** At each iteration \(k\), the linesearch requires computing \(AP_{k}\) and \(AX_{k}\); see (38). While \(AX_{k}\) was calculated previously, it would seem that we need another multiplication of \(A\) with \(P_{k}\) which is not needed in subspace iteration (accelerated by Chebyshev or not). Fortunately, it is possible to avoid this extra multiplication. First, we proceed as usual by computing the next subspace \(X_{k+1}\) from the QR decomposition
\[X_{k+1}R_{k+1}=X_{k}-\mu_{k}P_{k}.\]
Instead of calculating \(AX_{k+1}\) explicitly in the next iteration, we observe that
\[AX_{k+1}=(AX_{k}-\mu_{k}AP_{k})R_{k+1}^{-1}.\]
This requires only computing \(AP_{k}\) explicitly since \(AX_{k}\) was already computed before (using the same recursion from above). Except for a small loss of accuracy when the method has nearly converged, this computation behaves very well numerically.
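The recursion amounts to one triangular solve per iteration; a small Python/SciPy sketch (the function name is ours):

```python
import numpy as np
from scipy.linalg import solve_triangular

def update_AX(AX, AP, mu, R):
    """A X_{k+1} = (A X_k - mu A P_k) R_{k+1}^{-1}, where X_{k+1} R_{k+1} = X_k - mu P_k.

    AX, AP : precomputed A X_k and A P_k
    R      : upper-triangular factor of the thin QR decomposition of X_k - mu P_k
    """
    M = AX - mu * AP
    # solve Y R = M for Y, i.e., R^T Y^T = M^T with R^T lower triangular
    return solve_triangular(R.T, M.T, lower=True).T
```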
**Efficient orthogonalization** The linesearch procedure requires the diagonalization \(P^{T}P=VD_{\beta}V^{T}\). In addition, the new iterate \(X_{new}\) is computed from the QR decomposition of \(X-\mu P\) with \(\mu\) being the step. Both computations cost \(O(np^{2}+p^{3})\) flops which together with the matvec with \(A\) make up the majority of the cost of each iteration. Fortunately, it is possible to replace the orthogonalization by the polar factor:
\[X_{new} =(X-\mu P)[(X-\mu P)^{T}(X-\mu P)]^{-1/2}\] \[=(X-\mu P)(I_{p}+\mu^{2}P^{T}P)^{-1/2}\] \[=(X-\mu P)V(I_{p}+\mu^{2}D_{\beta})^{-1/2}V^{T}\]
since the tangent vector \(P\) is orthogonal to \(X\). While this procedure has the same flop count as QR, it is grounded in more robust linear algebra and might be preferable. In our numerical experiments, however, we orthogonalize by QR since it leads to more accurate solutions at convergence.
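In code, the polar-factor retraction reuses the factors already computed for the line search; a minimal sketch (names ours, with `beta` the eigenvalues of \(P^{T}P\) and `V` the corresponding eigenvectors):

```python
import numpy as np

def polar_retraction(X, P, mu, V, beta):
    """(X - mu P)(I + mu^2 P^T P)^{-1/2}, using P^T P = V diag(beta) V^T."""
    d = 1.0 / np.sqrt(1.0 + mu**2 * beta)     # diagonal of (I + mu^2 D_beta)^{-1/2}
    return (X - mu * P) @ (V * d) @ V.T
```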
### Comparison with subspace iteration for a Laplacian matrix
We first test our methods for the standard 2D finite difference Laplacian on a \(35\times 40\) grid, resulting in a symmetric positive definite matrix of size \(n=1\,400\). Recall that the dimension of the dominant subspace to be computed is denoted by \(p\).
Algorithms 2 and 4 are compared to subspace iteration applied to a shifted and scaled matrix \((A-cI)/h\) and a filtered matrix \(p_{d}(A)\) with given degree \(d\). The shift \(c\) and scaling \(h\) are defined in (6). Likewise, the polynomial \(p_{d}\) is the one from (6) with \(m=p\) and is determined from a Chebyshev polynomial to filter the unwanted spectrum in \([\lambda_{n},\ldots,\lambda_{p+1}]\). See also [25] for a concrete implementation based on a three-term recurrence that only requires computing one product \(AX_{k}\) per iteration. Recall that these choices of the shift and the polynomial are in some sense optimal, as explained in Section 2.3, for the given degree \(d\).
Observe that both subspace iteration methods make use of the exact values of the smallest eigenvalue \(\lambda_{n}\) and of the largest unwanted eigenvalue \(\lambda_{p+1}\). While this is not a realistic scenario in practice, the resulting convergence behavior should therefore be seen as the best case possible for those methods. Algorithms 2 and 4 on the other hand, do not require any knowledge on the spectrum of \(A\) and can be applied immediately.
The subspace iteration with Chebyshev acceleration will restart every \(d\) iterations to perform a normalization of \(X_{k}\) and, in practice, adjusts the Chebyshev
polynomial based on refined Ritz values3. For small \(d\), the method does not enjoy as much acceleration as for large \(d\). On the other hand, for large \(d\) the method is not stable.
Footnote 3: This is not done in our numerical tests since we supply the method the exact unwanted spectrum.
In Figure 4, the convergence of the objective function \(\phi(X_{k})\) is visible for subspace dimension \(p=6\) and polynomial degrees \(d\in\{15,30,60\}\). All methods perform per iteration only one block matvec of \(A\) with a matrix of size \(n\times p\). Since this is the dominant cost in large-scale eigenvalue computations like SCF, we plotted the convergence as a function of this number4.
Footnote 4: For this example with very sparse \(A\), the SI methods are much faster per iteration than the Riemannian methods. This is mainly because SI only needs to orthonormalize once every \(d\) iterations.
The benefits of acceleration by the Chebyshev polynomial filter or by Riemannian CG are clearly visible in the figure. In black lines, we also indicated the asymptotic convergence \(O(\gamma^{k})\) as a function of the number of matvecs \(k\) for two values of \(\gamma\). In particular, it is well known (see, e.g., [5, Lemma 7]) that
\[\kappa=\frac{\lambda_{1}-\lambda_{n}}{\lambda_{p}-\lambda_{p+1}}=\mathcal{O}( 1/\delta). \tag{42}\]
is the condition number of the Riemannian Hessian of \(\phi\) at the dominant subspace \(\mathcal{V}_{\alpha}\) with spectral gap \(\delta\). From this, the asymptotic convergence rate of Riemannian SD is known (see [14, Chap. 12.5]) to satisfy
\[\gamma_{SD}=\left(\frac{\kappa-1}{\kappa+1}\right)^{2}=1-\mathcal{O}(\delta).\]
In addition, for Riemannian CG we conjecture the rate
\[\gamma_{CG}=\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{2}=1- \mathcal{O}(\sqrt{\delta})\]
based on the similarity to classical CG for a quadratic objective function with condition number \(\kappa\). For both Algorithms 2 and 4, we see that the actual convergence is very well predicted by the estimates above.
### A few other matrices
As our next experiment, we apply the same algorithms from the previous section to a few different matrices and several choices for the subspace dimension \(p\). In addition, we target also the minimal eigenvalues by applying the methods to \(-A\) instead of \(A\). Except for the standard finite difference matrices for the 3D Laplacian, the matrices used were taken from the SuiteSparse Matrix Collection [10]. This results in problems with moderately large Riemannian condition numbers \(\kappa\), defined in (42).
Due to the larger size of some of these matrices, we first compute with a Krylov-Schur method (implemented in Matlab as eigs) the eigenvalues that are required to determine the optimal Chebyshev filter in subspace iteration. The Riemannian methods do not require this or any other information.
**Fd3d** This matrix is the 3D analogue of the matrix we tested in the previous section. It corresponds to a standard finite difference discretization of the Laplacian in a box with zero Dirichlet boundary conditions. We used \(n_{x}=35,n_{y}=40,n_{z}=25\) points in the \(x,y,z\) direction, resp. The resulting matrix is of size \(35\,000\). Compared to the earlier experiment, we took larger subspace dimensions and also a minimization of the Rayleigh quotient. All these elements make for a more challenging problem numerically.
In Fig. 5, we see that the convergence of the maximization problem is very similar to that of the 2D case. However, the more relevant case of finding the minimal eigenvalues of a Laplacian matrix turns out to be a challenge for SI with or without Chebyshev acceleration. In fact, even with a degree 100 polynomial it takes about 1000 iterations before we see any acceleration. The Riemannian methods, on the other hand, converge much faster and already from the first iterations. While the nonlinear CG iteration has some plateau, it happens at a smaller error, only lasts for 300 iterations, and it still makes meaningful progress.
Figure 4: Error in objective value for subspace iteration (SI) and Riemannian steepest descent (SD) and nonlinear conjugate gradients (CG) for a Laplacian matrix of size \(n=1\,400\) based on finite differences when computing the dominant subspace of dimension \(p=6\). For SI, optimal shift and optimal Chebyshev polynomials were used of various degree (number in legend). The black lines estimate the asymptotic convergence speed as explained in the text.
**ukerbe1** This matrix is related to a 2D finite element problem on a locally refined grid and it has a relatively small size of \(n=5\,981\). It is therefore more challenging than the uniform grid of the Laplacian examples above. We tested the following parameters.
| problem | type | dimension \(p\) | Riem. cond. nb. | Cheb. degree |
| --- | --- | --- | --- | --- |
| 3 | max | 32 | \(4.85\cdot 10^{3}\) | 50 |
| 4 | max | 64 | \(5.21\cdot 10^{3}\) | 100 |
In Figure 6, we observe that the Riemannian algorithms converge faster than their subspace iteration counterparts. This behavior is seen for many choices of \(p\) and the Chebyshev degree. Since the spectrum of this matrix is symmetric around zero, the min problems are mathematically equivalent to the max problems, and therefore omitted.
Figure 5: The FD3D matrix.
Figure 6: The ukerbe1 matrix.
**ACTIVSg70K** We now test a larger matrix of size \(69\,999\). It models a synthetic (yet realistic) power system grid from the Texas A&M Smart Grid Center. This matrix has a spectral gap of \(\mathcal{O}(10)\) but the Riemannian condition number, which represents the correct relative measure of difficulty, is still large. Such a different kind of scale makes this an interesting matrix to test our algorithms.
For the minimization problem (nb. 3), we see that the Riemannian algorithms converge considerably faster than subspace iteration with or without Chebyshev acceleration of degree 50. (The bad performance of the Chebyshev acceleration is due to numerical instability of the degree 50 polynomial for this problem.) In addition, we see that Riemannian CG does not give any meaningful acceleration compared to Riemannian SD. Moreover, the linesearch has numerical issues since the error in function value increases around iterations 1000 and 1700, whereas in theory it should decrease monotonically. See Section 6.4 for a more detailed analysis of this issue and an ad-hoc fix.
For the maximization problem (nb. 4), Chebyshev acceleration with degree 50 is clearly superior to Riemannian CG. However, Riemannian SD outperforms subspace iteration without acceleration. We remark that the Chebyshev acceleration needs quite accurate information on the spectrum to determine the filter polynomial, whereas the Riemannian algorithms do not require any information.
**boneS01** This final matrix is part of the Oberwolfach model order reduction benchmark set and models a 3D trabecular bone. It is our largest example, of size \(n=127\,224\). As we can see from the table below, for subspace dimension \(p=64\) the minimization problem is particularly challenging, with a large Riemannian condition number.
Figure 7: The ACTIVSg70K matrix.
The convergence of the methods is visible in Fig. 8. We can make similar observations as for the example above: for the minimization problem the Riemannian methods are superior but for the maximization the Chebyshev iteration performs the best.
### Numerical cancellation
As was already mentioned for the _ACTIVSg70K_ matrix above, the line search does not always produce iterates that are monotonic in the objective function. Theoretically, monotonicity should hold, so this undesired behavior must be an artifact of the calculation in finite precision.
We conjecture that the issue is due to a catastrophic cancellation in floating point arithmetic. To support this conclusion, we repeated the same experiment as in the left panel of Fig. 7 (problem nb. 3) but now in quad precision (128 bit) compared to the standard double precision (64 bit) using the Advanpix multiprecision toolbox5.
Footnote 5: version 4.8.5.14569; see [https://www.advanpix.com](https://www.advanpix.com)
Fig. 9 compares Riemannian CG with and without multiprecision. When the calculation is done in multiprecision, the convergence is clearly smoother in function value and residual, and also monotonic in function value. An ad-hoc solution for Riemannian CG is to simply restart the iteration with the gradient if an increase of function value is detected. For this problem, the Riemannian CG in double precision then leads to a convergence plot that is similar to that in quad precision but slightly worse.
Figure 8: The boneS01 matrix.
## 7 Conclusion
We revisited the standard Riemannian gradient descent method for the symmetric eigenvalue problem as a more competitive alternative to subspace iteration. If accelerated using a momentum term from nonlinear CG, there is a wide variety of matrices where the Riemannian method is faster, in terms of the number of matrix vector products, than subspace iteration with optimal Chebyshev filter polynomials. This property would make it valuable in applications like the self-consistent field (SCF) iteration.
Among novel contributions, we derived a computationally efficient exact line search. Its accurate implementation is key to the good performance of the method. We also presented new convergence proofs for this geodesic-free Riemannian algorithm, including a locally fast convergence result in a \(\mathcal{O}(\sqrt{\delta})\) neighbourhood of the dominant subspace.
## Acknowledgments
FA was supported by SNSF grant 192363. YS was supported by NSF grant DMS-2011324. BV was supported by SNSF grant 192129.
|
2305.11537 | Trustworthy Federated Learning: A Survey | Federated Learning (FL) has emerged as a significant advancement in the field
of Artificial Intelligence (AI), enabling collaborative model training across
distributed devices while maintaining data privacy. As the importance of FL
increases, addressing trustworthiness issues in its various aspects becomes
crucial. In this survey, we provide an extensive overview of the current state
of Trustworthy FL, exploring existing solutions and well-defined pillars
relevant to Trustworthy FL. Despite the growth in literature on trustworthy
centralized Machine Learning (ML)/Deep Learning (DL), further efforts are
necessary to identify trustworthiness pillars and evaluation metrics specific
to FL models, as well as to develop solutions for computing trustworthiness
levels. We propose a taxonomy that encompasses three main pillars:
Interpretability, Fairness, and Security & Privacy. Each pillar represents a
dimension of trust, further broken down into different notions. Our survey
covers trustworthiness challenges at every level in FL settings. We present a
comprehensive architecture of Trustworthy FL, addressing the fundamental
principles underlying the concept, and offer an in-depth analysis of trust
assessment mechanisms. In conclusion, we identify key research challenges
related to every aspect of Trustworthy FL and suggest future research
directions. This comprehensive survey serves as a valuable resource for
researchers and practitioners working on the development and implementation of
Trustworthy FL systems, contributing to a more secure and reliable AI
landscape. | Asadullah Tariq, Mohamed Adel Serhani, Farag Sallabi, Tariq Qayyum, Ezedin S. Barka, Khaled A. Shuaib | 2023-05-19T09:11:26Z | http://arxiv.org/abs/2305.11537v1 | # Trustworthy Federated Learning: A Survey
###### Abstract
Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growth in literature on trustworthy centralized Machine Learning (ML)/Deep Learning (DL), further efforts are necessary to identify trustworthiness pillars and evaluation metrics specific to FL models, as well as to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust, further broken down into different notions. Our survey covers trustworthiness challenges at every level in FL settings. We present a comprehensive architecture of Trustworthy FL, addressing the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This comprehensive survey serves as a valuable resource for researchers and practitioners working on the development and implementation of trustworthy FL systems, contributing to a more secure and reliable AI landscape.
Federated Learning, Artificial Intelligence, Trustworthiness, Privacy, Fairness.
## I Introduction
Trustworthy AI is an evolving concept within responsible AI that encompasses various existing ideas, such as ethical AI [1], robust AI [2], explainable AI (XAI) [3] and fair AI [4], among others. In addition to trustworthiness, data privacy and protection have become increasingly important in today's society. AI systems, particularly those utilizing machine learning (ML) and deep learning (DL) models, are often trained on data owned and maintained by various stakeholders across distinct data silos. To address the challenge of preserving data privacy in AI, Google introduced Federated Learning (FL) [5] in 2016 as a decentralized machine learning paradigm. FL enables collaborative model building among federation members while ensuring sensitive data remains within each participant's control. FL provides a solution to data silo and fragmentation issues resulting from legislation that restricts data sharing and requires data owners to maintain isolation. As such, FL is seen as a crucial component in the present and future of AI, with its worldwide market valuation anticipated to grow from USD 127 million in 2023 to USD 210 million by 2028 [6].
FL is a cutting-edge, privacy-preserving AI paradigm that allows clients, such as organizations or devices, to train models locally and construct a global model from local updates without sharing local data externally [7]. Trustworthy FL is essential to upholding responsible AI principles, since FL faces challenges like accountability and fairness due to the involvement of multiple parties and diverse client data distributions. This article introduces the concept of Trustworthy FL. Trustworthy AI aims to achieve high learning accuracy consistently while considering aspects like interpretability, fairness, privacy, robustness, accountability, and environmental wellbeing. As a promising trustworthy AI framework, FL ensures data privacy during AI model training by coordinating multiple devices. In a server-client structure for FL, individual devices carry out local training and transmit their updated local models to a central server, all while keeping their private raw data secure. The server aggregates parameters from local models to refine the global FL model before redistributing the enhanced model back to the devices. This process maintains privacy for sensitive applications in areas like healthcare, finance, and the Internet of Things (IoT).
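To make this server-client loop concrete, the following sketch implements a minimal federated averaging round; the names (`local_train`, `fed_avg`) and the toy linear model are illustrative assumptions rather than the API of any particular FL framework.

```python
import numpy as np

def local_train(global_weights, local_data, lr=0.01, epochs=1):
    """Illustrative local update: a few gradient steps on private data.
    The 'model' is a linear regressor so the sketch stays self-contained."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= lr * grad
    return w

def fed_avg(local_weights, sample_counts):
    """Server-side aggregation: average local models weighted by data size."""
    total = sum(sample_counts)
    return sum(n / total * w for w, n in zip(local_weights, sample_counts))

# Several communication rounds with three clients; raw data never leaves a client.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
global_w = np.zeros(3)
for round_idx in range(5):
    updates = [local_train(global_w, data) for data in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```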
However, existing FL schemes often face vulnerabilities to attacks and difficulties meeting security requirements in practical applications [8]. Central servers, which distribute, aggregate, and update models, are attractive targets for malicious actors who might tamper with models to produce biased outcomes or desired results. Dishonest cloud service providers may deliver incorrect results to users or replace original models with simpler, less accurate ones to save on computational costs. To bolster FL security, it is essential to verify the integrity and authenticity of model updates throughout training to prevent malicious attacks. Secure FL systems must be resilient against dropout clients who fail to submit model updates for aggregation due to issues like poor network connections, temporary unavailability, or energy constraints [9]. Edge devices, with their widespread presence and easy internet access, are ideal candidates for quality training in various applications. However, participation is limited due to potential data leakage and inherent security issues, such as malicious participation, unfair contribution, biased aggregation, and central server bottlenecks [10]. Addressing these challenges, a Trustworthy FL should maintain the following goals and objectives:
1. Robust security with no single point of failure: Protect the system even if a single component doesn't produce results or is attacked by adversaries.
2. Local-to-global assurances: Provide a strong guarantee that high-quality models result from accurate aggregation of several local training models without manipulation.
3. Streamlined model verification and auditability: Facilitate users to confirm a model update's accuracy and easily access verifiable data for specific update versions.
4. Unalterable model update history: Showcase a globally uniform record of global model updates without changes. Within this log, each update corresponds to a distinct entry that cannot be edited or deleted once created.
5. Dependable client and model identification: Select trustworthy clients and models to enhance the FL process.
6. Reliable contribution evaluation and incentive allocation: Implement a credible assessment of contributions and provide incentives to encourage FL clients' participation in future sessions. Investigators must concentrate on novel approaches to address the challenges in FL by creating equitable and dependable metrics for worker selection and evaluation.
7. Dynamic authoritative keys management: Allow updates for authoritative keys, even if some, but not most, keys are compromised.
8. Timely Monitoring: Timely monitoring methods for workers and model assessment schemes are necessary. Strengthening security and privacy features is crucial for Trustworthy FL.
By emphasizing these objectives and crucial areas, and by fostering cooperation among stakeholders, secure and dependable FL systems can be created that facilitate more responsible and ethical AI applications. Within the current research landscape, there is a noticeable absence of studies investigating the crucial pillars of trustworthiness specifically in relation to FL. To the best of our knowledge, this is the pioneering work encompassing all facets of Trustworthy FL. In an effort to bridge the gaps in existing literature, this study presents the following contributions:
1. We furnish an overview of the fundamental principles underlying trustworthy FL, accompanied by a comprehensive architecture of Trustworthy FL and trust assessment mechanisms.
2. We develop a taxonomy centered on the most pertinent trustworthiness pillars within FL contexts. In order to construct this taxonomy, we identify three primary pillars as the foundational elements: Interpretability, Fairness, and Security & Privacy. Each pillar embodies a dimension of trust, which is further delineated into distinct concepts for each pillar. Additionally, we explore the key challenges and future directions in the realm of Trustworthy FL.
3. In conclusion, we highlighted the essential research challenges associated with each facet of Trustworthy FL, as well as the direction for future investigations.
The remainder of this paper is organized as follows: We commence with an introduction to FL and its classification in Section 2, succeeded by a discussion on trust in FL, fundamental principles of trustworthiness, and the architecture of Trustworthy FL systems in Section 3. Section 4 presents a literature review, emphasizing existing research in the domain. Sections 5-8 delve into trust evaluation, trust-aware interpretability, fairness-aware trustworthiness, and trust-aware security & privacy-preserving FL systems, respectively. Throughout the paper, we examine topics such as data and feature selection, data sharing, model selection, explainability, client selection, contribution assessment, incentive mechanisms, accountability, auditability, secure aggregation, data aggregation, privacy preservation, and more. In conclusion, we explore open challenges and future research avenues in the field of Trustworthy FL, laying the foundation for the creation of secure and dependable FL systems that facilitate responsible and ethical AI applications.
## II Federated Learning an Overview
FL is a transformative approach to distributed machine learning, enabling the collaborative training of models across multiple devices or clients while preserving data privacy [11]. Various FL architectures have been developed to address the diverse challenges and requirements in data distribution, communication patterns, and coordination methods. These architectures can be broadly classified into three main categories: data distribution-based architectures, such as Horizontal FL (HFL), Vertical FL (VFL), and Federated Transfer Learning (FTL); scale and communication-based architectures, including Cross-Silo FL (CSFL) and Cross-Device FL (CDFL); and coordination-based architectures, which encompass Centralized FL, Decentralized FL, and Hierarchical FL. Each of these architectures caters to specific needs and constraints, ensuring optimal model development across a wide range of applications while maintaining data privacy and security.
### _Data Distribution-Based FL Architectures_
#### Ii-A1 Vertical FL
VFL involves federated training of datasets that share the same sample space but differ in feature spaces. It is apt for scenarios in which data is divided vertically based on feature dimensions, with different parties holding homogeneous data that partially overlaps in sample IDs. Entity alignment and encryption methods can be employed to address data sample overlapping during local training. One instance of VFL in the healthcare sector is the collaboration between hospitals and insurance companies, jointly training an AI model using their respective datasets. Although VFL provides data privacy and joint training benefits, it presents more implementation challenges than HFL due to the requirement for entity resolution and the limitations of current techniques for complex machine learning models.
#### Ii-A2 Horizontal FL
In HFL, multiple participants possessing datasets with identical feature spaces but varying sample spaces collaborate in training a joint global model. The datasets are used locally, and a server merges the local updates received from the participants to develop a global update without accessing the local data directly. One example of HFL in the healthcare domain is speech disorder detection, where users with different voices speak identical sentences, and the local updates are combined to create a comprehensive speech recognition model. HFL is primarily utilized in smart devices and IoT applications. It allows leveraging data from numerous sources without sharing raw data, thus maintaining privacy.
However, HFL may encounter challenges when dealing with a limited number of labeled entities.
#### Ii-A3 Federated Transfer Learning
FTL is tailored to handle datasets that differ in both sample spaces and feature spaces. Transfer learning techniques are used to compute feature values from distinct feature spaces into a uniform representation, which is then employed for local dataset training. Privacy protection mechanisms, such as random masks, can be implemented during the gradient exchange between clients and servers. FTL can support smart healthcare use cases like disease diagnosis by enabling collaboration among multiple hospitals with distinct patients and treatment plans. Although FTL has potential, it remains an evolving research area with challenges related to communication efficiency and flexibility in dealing with diverse data structures. Nevertheless, it provides a promising solution for ensuring data security and user privacy while addressing data island problems. An illustration of these data distribution-based FL architectures is presented in Fig. 1.
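As a toy illustration of the difference between the horizontal and vertical partitioning described above, the sketch below splits one tabular dataset both ways; it assumes nothing beyond NumPy and is not tied to any specific FL library.

```python
import numpy as np

# A toy dataset: 6 samples (rows) x 4 features (columns).
data = np.arange(24).reshape(6, 4)

# Horizontal FL: every party holds the same feature space but different samples.
hfl_party_a, hfl_party_b = data[:3, :], data[3:, :]

# Vertical FL: every party holds the same samples but different feature columns
# (sample IDs must be aligned, e.g. via entity resolution, before training).
vfl_party_a, vfl_party_b = data[:, :2], data[:, 2:]

print(hfl_party_a.shape, hfl_party_b.shape)  # (3, 4) (3, 4)
print(vfl_party_a.shape, vfl_party_b.shape)  # (6, 2) (6, 2)
```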
### _Scale and Communication-Based FL Architectures_
#### Ii-B1 Cross-Silo FL
CSFL is an architectural approach within the FL paradigm that focuses on collaboration between a limited number of clients, each possessing large-scale datasets. Typically, CSFL is employed in scenarios where organizations or data centers aim to jointly train a shared model while preserving data privacy and adhering to regulatory compliance. By leveraging a central server for aggregating model updates, clients can effectively contribute to the global model without directly sharing raw data.
#### Ii-B2 Device-Silo FL
DSFL, also known as Cross-Device FL, involves a substantial number of clients with relatively smaller-scale datasets, such as those found in smartphones or Internet of Things (IoT) devices. The primary objective of DSFL is to harness the collective knowledge of numerous devices to develop a global model while minimizing data movement and maintaining user privacy. This approach is particularly well-suited for edge computing environments, where devices have limited computational resources and intermittent network connectivity.
Both CSFL and DSFL represent distinct FL architectures, catering to different requirements in terms of scale, data distribution, and communication patterns. By tailoring the architecture to the specific challenges and constraints of a given application, these methods enable efficient and privacy-preserving collaborative learning.
### _Coordination-Based FL Architectures_
#### Ii-C1 Centralized FL
This approach to FL involves the collaboration of multiple clients with a central server to train a global model. Clients independently train local models on their data and send updates to the central server. The server aggregates these updates, refines the global model, and distributes it back to clients. This iterative process maintains privacy by keeping data localized while benefiting from the collective knowledge of participating clients.
#### Ii-C2 Decentralized FL
In Decentralized FL, clients directly communicate with each other, eliminating the need for a central server. Clients exchange model updates in a peer-to-peer manner, allowing the system to be more resilient to failures and less reliant on a single point of control. This method offers enhanced privacy and scalability, making it suitable for large-scale networks with potentially unreliable or untrustworthy central authorities.
Fig. 1: A depiction of data distribution-based FL architectures.
#### Ii-C3 Hierarchical FL
Hierarchical FL introduces a multi-layer structure to the system, combining elements of centralized and decentralized approaches. Clients are organized into clusters or groups, with each group having a local aggregator. Clients send their local model updates to their respective aggregators, which then share aggregated updates with a central server or other aggregators. This hierarchical structure optimizes communication efficiency, enhances scalability, and provides a more flexible framework for diverse network topologies. An illustration of these three coordination-based FL architectures is presented in Fig. 2.
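A hedged sketch of the two-level aggregation implied by Hierarchical FL is shown below: each local aggregator averages its own clients' updates, and the central server then combines the cluster-level models. The grouping and the sample-count weighting are assumptions made purely for illustration.

```python
import numpy as np

def weighted_average(models, weights):
    """Average a list of parameter vectors with the given weights."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, models))

# Client updates grouped into two clusters, each with its own local aggregator.
clusters = {
    "edge_aggregator_1": [(np.array([1.0, 2.0]), 100), (np.array([1.2, 1.8]), 50)],
    "edge_aggregator_2": [(np.array([0.8, 2.2]), 200)],
}

# Level 1: each aggregator averages its own clients (weighted by sample count).
cluster_models, cluster_sizes = [], []
for updates in clusters.values():
    models, sizes = zip(*updates)
    cluster_models.append(weighted_average(models, sizes))
    cluster_sizes.append(sum(sizes))

# Level 2: the central server (or a peer aggregator) combines cluster models.
global_model = weighted_average(cluster_models, cluster_sizes)
```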
## III Trustworthy Federated Learning an overview
### _What is trust in FL?_
In FL scenarios, trust refers to the assurance one node has in another's ability to execute specific tasks and serve as a reliable partner. Trust is subjective and depends on the domain and context, indicating the degree of control granted by one party to another. It is often portrayed as risk management, as it can be reduced but never fully eliminated. For example, when patients consent to data collection, they demonstrate trust in healthcare institutions. This data is valuable for ML algorithms, which strive to predict specific treatments or conditions. Trust involves a variety of factors that collectively ensure security, reliability, and privacy in different settings.
In IoT networks, the establishment and evaluation of trust allow end devices to form secure and efficient connections with other nodes or networks based on their trust values. The trustworthiness of devices within the network contributes to secure routing, facilitating stable data transmission paths and the selection of an efficient mobility model. Nodes in the network must assess each other's trustworthiness to maintain trusted communication. Trust is a psychological state reflecting a trustor's willingness to accept risks on behalf of a trustee without monitoring or external control. Trustworthiness is a prerequisite for choosing to trust someone and focuses on the trustee's attributes. Both AI and IoT fields consider trustworthiness crucial, but they approach it differently. AI technologies, such as machine learning systems, are designed to possess human-like traits. Rising public skepticism has driven governments and organizations to devise AI frameworks that outline principles for creating trustworthy AI. Key frameworks emphasize aspects like privacy, transparency, accountability, safety and security, and explainability, human control of technology, professional responsibility, fairness and non-discrimination, and the promotion of human values.
Trustworthiness is a quantitative expression of an entity's trust level, which changes dynamically based on the entity's actions. Factors like data security, authenticity, accuracy, service and processing capabilities of nodes, and the reliability and real-time performance of connections contribute to trustworthiness. Trust can be built through cryptographic techniques or distributed reputation systems. In the current era of information abundance, truth discovery (TD) methods have been applied to evaluate data quality in various real-world applications, such as mobile crowdsensing and knowledge bases. Truth discovery algorithms, like conflict resolution in heterogeneous data (CRH) and the Gaussian truth model
(GTM), determine true and reliable values by aggregating data from multiple sources and assessing the reliability of those sources.
Fig. 2: An illustration of coordination-based FL architectures.
### _Fundamentals of Trustworthiness in FL_
In this section, we will discuss a set of criteria for evaluating trust assessment methods based on machine learning and FL. These criteria pertain to the trustworthiness of FL systems, addressing questions such as: "Is the client device's local model non-adversarial and trustworthy?", "Has the local model genuinely been trained using the device's local data?", and "Can the client rely on the central server for the accuracy of the global model it provides?". The main accountability challenges in FL systems involve auditing the data used to train each local model, assessing the various local models provided by multiple client devices, and evaluating the global model derived from these individual models.
In this discussion, we will address a few crucial aspects of trust evaluation methods to ensure accuracy and effectiveness. A trustworthy evaluation method must provide precise trust values for trustees and offer reliable evaluation results. Various indicators, such as accuracy, precision, recall, and F-score, can represent effectiveness. When examining trust evaluation within machine learning, we must consider two critical components: the data used to train the model and the algorithm that builds the model. Selecting appropriate data and algorithms contributes to a highly accurate evaluation, so it's essential to explore the impact of these choices on trust evaluation. During the trust evaluation process, attacks such as conflict behavior, on-off, collision, Sybil whitewashing, and bad-mouthing may occur. Trust evaluation methods should aim to prevent these attacks to maintain robustness and ensure evaluation results remain unaffected. It's essential to protect users' private information when handling data for trust evaluation. Trust evaluation methods must prioritize privacy protection for both users and trust evidence to gain acceptance and recognition.
Context-awareness and subjectivity are fundamental characteristics of trust, so trust evaluation methods should support these attributes. By incorporating context-awareness, the evaluation scheme can adapt to changes in the application scenario, context, or environment. Ensuring subjectivity in trust evaluation brings the expression of trust closer to reality. Standardizing and operationalizing trustworthiness concerns in machine learning systems is crucial for incorporation into specifications and objective evaluation in applications.
In FL, client selection is crucial for trustworthiness. Many selection methods emphasize server goals, such as faster convergence or better model accuracy, potentially disadvantaging clients. Common threshold-based approaches can lead to unfairness, typically categorized as over-representation, under-representation, and non-representation. In systems sensitive to network speeds, clients with slower connections may be under-represented or excluded entirely, while those with faster connections are over-represented. This imbalance can cause global model bias, impacting overall performance negatively. Achieving fairness in client selection for FL involves balancing the interests of both the FL server and clients, while considering client heterogeneity. Biases introduced during the FL model optimization process can lead to unequal performance across clients, which is seen as unfair treatment. Recent research explores fairness issues in the FL model optimization process, focusing on two main approaches: objective function-based and gradient-based methods. These strategies aim to mitigate biases and performance discrepancies during model training.
Contribution evaluation in FL is crucial for trustworthiness, as it measures each client's influence on the global model without exposing local data. This evaluation is vital for equitable client selection and incentive allocation. Traditional data valuation methods are not directly applicable to FL. Current FL contribution evaluation strategies consist of self-reported data, utility game, individual evaluation, empirical methods, and Shapley value. Reputation systems track past contributions to determine reliability, assisting in client selection and rewards distribution, fostering trustworthiness in both centralized and decentralized FL systems.
Improving explainability in the context of trustworthiness is valuable for addressing biases in machine learning. In FL, enhancing explainability can potentially promote fairness. FL clients often lack mechanisms to determine if they are treated fairly, and this uncertainty may negatively affect their future decisions to participate in FL. The objective of developing explainability is to offer a comprehensive understanding of the FL server's decision-making process and its impact on each client's interests, ultimately fostering trust between the parties. However, it is essential to conduct explainability research within the framework of privacy preservation in FL to avoid conflicting with its primary goal.
### _Architecture of Trustworthy FL_
In this section, we present a comprehensive architecture for Trustworthy FL that encompasses all aspects of trustworthiness within the FL process. Our proposed architecture comprises three main phases for achieving Trustworthy FL. The trust-aware interpretability phase covers all aspects related to data selection, data quality, feature selection, and trustworthy model selection at the local node. After carefully evaluating the local model performance and data verifiability, we proceed to the FL server-side phase. Upon receiving local data and requirements from clients, the server aggregates the models after thoroughly assessing their quality and trust level. To promote trustworthy client participation, a contribution evaluation method and an incentive mechanism are implemented at the server side, reflecting the fairness aspect of Trustworthy FL. Multiple parameters and strategies are used to compute incentives based on reputation and other factors, encouraging clients to participate in subsequent federated rounds. The verification module within the architecture is responsible for validating local models, local data, client and server interactions, and global model updates. The secure aggregation and verification module ensures privacy preservation in our proposed Trustworthy FL architecture. The proposed architecture is visually represented in Fig. 3, illustrating the interaction of these components and their contribution to achieving a Trustworthy FL system.
## IV Literature review
In this comparative analysis, we will discuss the focus of each paper concerning trustworthy aspects, including data selection, client selection, model selection, model aggregation, data aggregation, incentive mechanism, contribution evaluation, and privacy preservation.
The authors in [12] mainly discuss the relationship between trust in AI and the development of trustworthy machine learning technologies. This paper does not specifically address the trustworthy aspects mentioned above but provides a broader understanding of trust in AI systems. The research work in [13] presents a survey on explainable, trustworthy, and ethical machine learning for healthcare. The paper focuses on ethical concerns and trustworthiness in healthcare applications, touching upon privacy preservation and model/data selection. Moreover, [14] discusses trustworthy AI from principles to practices. The paper covers various aspects of trustworthiness, including privacy preservation, fairness-awareness, and interpretability, but does not specifically focus on FL. The authors in [15] propose an interpretable FL approach. The paper emphasizes model selection and model aggregation, as well as privacy preservation, by providing interpretable and explainable models.
The articles [16, 17] both offer surveys on incentive mechanisms for FL. They focus on incentive mechanisms and contribution evaluation, discussing various economic and game-theoretic models to ensure effective and fair participation. Economic and game-theoretic FL incentive mechanisms have been explored in [18]. This paper mainly targets the incentive mechanism and contribution evaluation. The authors in [19] presented a systematic review on incentive-driven FL and associated security challenges. The paper discusses incentive mechanisms, privacy preservation, and the challenges that arise from implementing incentives. However, [20] presents a systematic survey on blockchain for FL, focusing on secure distributed Machine Learning (ML) systems. The paper addresses privacy preservation, data aggregation, and the use of blockchain for enhancing trust. The authors in [21] investigate users' contributions and the factors influencing user participation and performance. Fairness-aware FL is examined in [22], where client selection, incentive mechanisms, and contribution evaluations are explored with a taxonomy, but the paper does not focus on all aspects of trustworthiness in FL. Research works in [23, 24, 25, 26] discuss the security and privacy aspects in FL, exploring cryptographic techniques, secure multi-party computation, differential privacy, secure data aggregation, trust management, and secure model aggregation. These papers do not focus on the trust factor. The authors in [27] conduct a systematic literature review from a model quality perspective in FL. The paper examines various aspects of model quality, including model selection, but does not specifically address the other trustworthy aspects. Verifiability in FL is discussed in [28], emphasizing model aggregation and contribution evaluation by introducing verification mechanisms.
Fig. 3: A diagram illustrating the proposed comprehensive architecture for Trustworthy FL, highlighting the primary phases along with their respective components, which collectively ensure trustworthiness, data integrity, fairness, security, and privacy throughout the FL system.
In the realm of Trustworthy FL, existing research studies primarily focus on individual aspects and domains, such as Fairness-aware FL, Contribution Evaluation, Verifiability, Secure Aggregation, Model Selection, and Privacy Preservation. These studies, while valuable, do not holistically address the overarching trustworthiness of FL, leaving gaps
in our understanding. Our research paper aims to bridge this gap by providing a comprehensive exploration of all facets of Trustworthy FL. With a strong emphasis on both client and server-side considerations, our work delves into crucial topics ranging from client and model selection to secure aggregation and privacy preservation. Moreover, we contribute significantly to the understanding of contribution evaluation and the development of incentive mechanisms that promote equitable reward distribution and verifiability. By synthesizing and expanding upon these diverse aspects, our research paper stands as a more complete and integrative approach to Trustworthy FL, ultimately enhancing the field's knowledge and paving the way for further advancements. A comprehensive comparison of the previously discussed related works and our proposed survey focusing on Trustworthy FL is provided in Table 1. In this paper, we establish a clear taxonomy for Trustworthy FL by examining trust evaluation methods and relevant research. This foundational analysis aims to enhance our understanding of Trustworthy FL. We have organized the concept into three primary pillars: Trust-aware Interpretability, Fairness-aware Trustworthiness, and Security and Privacy-aware Trustworthiness in FL. Each pillar contains subcategories that further explore their respective key aspects. The remainder of the paper presents an in-depth investigation of each pillar in separate sections. This research is the first of its kind to encompass all dimensions of Trustworthy FL. Fig. 4 presents an illustration of the taxonomy for Trustworthy FL. For clearer comprehension by our readers, we have further refined this taxonomy. We categorize the algorithms and methodologies based on their primary objectives and these are then elaborated in their respective sections throughout the remaining part of the document.
## V Trust Evaluation in FL
In this section, we present a collection of criteria for assessing trust evaluation methods grounded in FL:
#### V-1 Effectiveness
A vital aspect of trust evaluation is the accurate determination of a trustee's trust value. Trustworthy methods must deliver precise trust estimates, as demonstrated by metrics such as recall, precision, accuracy, and F-score; a minimal sketch of combining such metrics into a composite trust value is given after this list.
#### V-2 Data and Algorithm Selection
Trust evaluation relies on two critical components: training data and model-building algorithms. Optimal data and algorithm choices lead to accurate evaluations, and methods should consider their impact on trust assessment.
#### V-3 Robustness
Trust evaluation is vulnerable to attacks. Addressing these attacks enhances resistance to disruptions, ensuring robust evaluation methods.
#### V-4 Privacy Protection
Trust evaluation data might include sensitive user information. It is crucial to protect this data from unauthorized disclosure, prioritizing user privacy and trust evidence protection in trust evaluation processes.
#### V-5 Context-Awareness
Trust evaluation methods should be adaptable to changes in application scenarios, contexts, or environments, reflecting the fundamental characteristic of trust: context-awareness.
#### V-6 Subjectivity
Trust evaluation must capture trust's subjective nature for a more authentic representation, emphasizing the importance of subjectivity as a key trust characteristic.
#### V-7 Distributed Learning
In FL, trust evaluation methods should account for distributed data storage and processing, ensuring that trust assessment is compatible with the decentralized nature of the learning process.
#### V-8 Local Model Quality
Trust evaluation should consider the quality of local models, as FL relies on combining local models to create a global model. Assessing the quality of local models can help identify unreliable participants.
#### V-9 Incentive Mechanisms
Implementing incentive mechanisms can encourage honest participation and cooperation, elevating trust assessment in FL by ensuring that participants are motivated to contribute high-quality data and models.
#### V-10 Contribution Evaluation
Trust evaluation methods should incorporate mechanisms to measure the value of each client's contributions to the global model. These mechanisms should consider factors such as data quality, data diversity, and the impact of the local model on the global model's performance.
#### V-11 Client Selection
Incorporating client selection strategies in trust evaluation helps identify and select reliable clients that can contribute effectively to the global model. By selecting trustworthy clients, the overall quality and trustworthiness of the FL system can be improved.
#### V-12 Verifiability and Auditability
Trust evaluation methods should provide a means of verifying the accuracy and reliability of both local models and the global model. This could involve techniques such as cryptographic proofs, secure aggregation, or trusted execution environments, ensuring transparency and trustworthiness in the FL system.
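To make the effectiveness (V-1), local model quality, and client selection criteria above concrete, the sketch below combines a client's latest validation metrics with its prior reputation into a single trust value; the blending factor `alpha`, the 0.5 selection threshold, and the toy confusion-matrix counts are illustrative assumptions, not values prescribed by the surveyed works.

```python
def f1_score(tp, fp, fn):
    """F-score of a client's local model from validation confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def trust_score(metrics, prior_reputation, alpha=0.7):
    """Composite trust value: current effectiveness blended with past behaviour."""
    effectiveness = f1_score(*metrics)
    return alpha * effectiveness + (1 - alpha) * prior_reputation

# Score two hypothetical clients and keep those above a selection threshold.
clients = {"client_a": ((90, 5, 10), 0.8), "client_b": ((40, 30, 30), 0.3)}
scores = {c: trust_score(m, rep) for c, (m, rep) in clients.items()}
selected = [c for c, s in scores.items() if s >= 0.5]
```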
In the field of network security, trust is considered a crucial aspect for ensuring the secure transmission of data. In the context of the vehicle-road-cloud collaborative system, trust evaluation becomes increasingly complex due to the heterogeneity of the network and its openness to attacks. To address this challenge, the authors in [29] proposed a trust evaluation scheme based on FL. The scheme is designed as a hierarchical trust evaluation model that takes into account the different trust indices at various layers and factors affecting trust among nodes. The proposed model updates trust values in real-time, providing a personalized trust evaluation for each device in the network. This allows for a more thorough assessment of trust than traditional trust evaluation mechanisms, while also reducing the energy consumption and increasing accuracy compared to previous schemes. By combining FL with the hierarchical trust evaluation model, the system solves the problem of limited edge node resources and reduces the overhead of trust evaluation. This innovative approach to trust evaluation in the vehicle-road-cloud collaborative system shows promising results in improving network security and reliability.
The authors in [30] proposed a solution to the problem of trust in group decision-making for FL systems. The key contribution of their work is the introduction of a trust-based consensus method, called Trust X-BFT (TX-BFT), which utilizes a consortium chain to reach a consensus among participants. The TX-BFT method evaluates the trust levels of participants based on their historical behaviors in previous consensus processes and stores this information in a public ledger on the blockchain. This information is used to incentivize participants with higher trust levels by rewarding them and punishing those with lower trust levels. This, in turn, helps to improve the overall trust perception performance of the FL network. The proposed method has three stages - preliminary, prepare, and commit - and utilizes a parliament of consensus nodes to communicate and reach a consensus. In each round, a leader collects block generation proposals and broadcasts the pre-prepare information, while the verifiers wait for the pre-prepare message. Once the verifiers receive 2/3 commit messages, they begin inserting the proposed block into the chain and marking their status as final committed. The simulation results and security analysis demonstrate that the TX-BFT method can effectively defend against malicious users
Fig. 4: An Illustration of Trustworthy FL taxonomy.
and data, and enhance the trust and stability of FL networks. The authors' contribution provides a valuable solution to the problem of trust in group decision-making for FL systems and has the potential to be widely adopted in various applications.
Clustering-based and distance-based trust evaluation methods are proposed in [31]. The clustering-based method groups FL agents based on their trust scores, while the distance-based method calculates the trust scores based on the similarity of FL agents' behaviors. The authors introduce the trusted decentralized FL algorithm, which incorporates the trust concept into the FL process to enhance its security. This paper addresses the challenge of enhancing the security of FL by introducing trust as a metric to measure the trustworthiness of FL agents. The authors propose a mathematical framework for trust computation and aggregation in a multi-agent system, which is the main contribution of the paper. This framework enables the calculation of trust scores for each FL agent based on their behavior, which can then be used to assess the risk of malicious attacks. Furthermore, the authors propose an attack-tolerant consensus-based FL algorithm, which takes into account the trust scores of FL agents during the consensus step. This helps to mitigate the risk of malicious attacks and ensures the security of the FL training.
In PRTrust [32], a trust model is proposed for a peer-to-peer federated cloud system. The authors aim to address the challenge of trust establishment among participating cloud service providers (CSPs) to enable resource sharing in a secure manner. The trust model considers both reputation-based trust and performance-based risk in evaluating the trustworthiness of CSPs. PRTrust provides a two-tier weighted performance evaluation mechanism, a risk evaluation mechanism, and a personalized reputation-based trust evaluation mechanism. It also provides a CSP selection mechanism based on the evaluated trust and risk. The authors intend to reduce the risk of sub-standard service delivery and improve the selection of appropriate CSPs for resource and service sharing.
The security of the Internet of Vehicles (IoV) relies heavily on trust management between various connected devices and nodes. With the increasing number of connected vehicles, it becomes imperative to establish trust and identify dishonest nodes. To improve the security of IoV, a new approach for trust management is proposed in [33], which combines FL with blockchain technology (FBTM). This approach involves designing a vehicular trust evaluation to enhance the data acquired for the FL model and developing a blockchain-based reputation system to securely store and share the global FL models. The proof of reputation consensus is also proposed to evaluate the reliability of roadside units functioning as aggregators in the IoV network. Simulation results demonstrate the effectiveness of the proposed FBTM approach in ensuring the security of the IoV network.
The research work in [34] proposes a Federated Hierarchical Trust Interaction (FHTI) scheme for the Cross-Domain Industrial Internet of Things (IIoT) to address the challenge of multidomain trust interaction. To achieve this, the FHTI scheme integrates consortium blockchain and FL in a seamless manner. A blockchain-based trusted environment for the IIoT is established, followed by the development of a multidomain behavior detection mechanism using FL. The hierarchical trust system is then constructed by combining blockchain transaction performance, leading to unified trust management across multiple domains. Finally, a blockchain cross-chain interaction mechanism is proposed to ensure the credibility of trust values between parties. The main contributions of the article include a two-tier consortium blockchain security architecture and a hierarchical trust mechanism based on federated detection of blockchain nodes, which enables dynamic trust evaluation and hierarchical trust management, thereby improving trust between IIoT devices and breaking down trust barriers between domains.
The proposed system [35] combines FL with trust establishment mechanism and recommender selection strategy to address the challenge of cold-start items in recommendation systems. The cold-start problem occurs when a recommender system has limited or no information about a new user or item. To address this issue, the authors propose a trust establishment mechanism that enables the recommender system to build trust relationships with potential recommenders. The trust scores are derived from the devices' resource utilization data and the credibility scores of the recommenders. Additionally, the authors propose a recommender selection strategy based on double deep Q learning that considers the devices' trust scores and energy levels to choose the subset of IoT devices that will contribute to improving the accuracy of the recommendations. The authors demonstrate the value of FL for the cold-start item recommendation problem and provide insights into how to design intelligent algorithms that support the FL process while prioritizing trust management.
The authors present a novel approach to address the challenges of trust management in cross-domain industrial IoT by introducing the FHTI architecture in [36]. It combines the power of consortium blockchain and FL to provide a safe and reliable network environment for users. The FHTI scheme is based on a behavior detection mechanism that uses FL to evaluate the trustworthiness of devices in a multidomain setting. The architecture also establishes a blockchain cross-chain interaction mechanism that ensures the credibility of the trust value of both parties. The results of the simulation indicate that the proposed scheme can improve the accuracy of abnormal behavior recognition, increase resource utilization, and enhance the stability of the system compared to traditional methods. The FHTI scheme presents a promising solution for trust management in the cross-domain industrial IoT.
## VI Trust aware interpretability in FL
In this section, we have delved into the comprehensive research efforts undertaken to explore trust-aware interpretability in FL. Various factors are considered in the pursuit of trustworthy interpretability within FL models, including algorithm transparency, model selection, data and feature selection, sample selection, and data sharing evaluation. This objective seeks to provide a clear and comprehensive understanding of ML and DL models, with some being inherently interpretable and others necessitating additional exploration.
To foster trust-aware interpretability in FL, attention must be given to enhancing the quality of clients' local features, which
affects the performance of both local and global FL models. Identifying and interpreting crucial features, while eliminating noisy and redundant ones, is vital for achieving trustworthy interpretability. Taking into account the diverse data held by clients in FL systems, it is imperative to acknowledge that not all training data carries equal importance for a specific FL task. By implementing trust-aware interpretability in sample selection, the server and clients can interpret the usefulness of local data, ultimately improving training efficiency and model performance.
Trustworthy interpretability in model optimization can be attained by designing intrinsically interpretable models or robust aggregation methods. Interpretable models incorporate interpretability directly into the model structures, while interpretable robust aggregation allows the FL server to assess the quality of clients' updates and perform quality-aware model aggregation. Thus, trust-aware interpretability in FL enhances the overall reliability and transparency of the system.
### _Trustworthy Feature and Sample Selection_
The authors propose a new approach called Federated Stochastic Dual-Gate based Feature Selection (FedSDG-FS) [37] for feature selection in Vertical FL (VFL). Existing FS works for VFL assume prior knowledge of the number of noisy features or the threshold of useful features to be selected, making them unsuitable for practical applications. FedSDG-FS uses a Gaussian stochastic dual-gate to approximate the probability of a feature being selected with privacy protection through Partially Homomorphic Encryption without a trusted third party. It also proposes a feature importance initialization method based on Gini impurity to reduce overhead. Experiments show that FedSDG-FS outperforms existing approaches in selecting high-quality features and building global models with higher performance. The proposed method solves the problem of efficient feature selection in VFL.
A trustworthiness evaluation framework, TrustE-VC, is proposed in [38] that combines criteria importance and performance rates to determine the service attributes of vertical FL that require more attention. It also suggests a three-level security feature to enhance effectiveness and trustworthiness in VC. The proposed framework comprises three interconnected components, including an aggregation of the security evaluation values, a fuzzy multicriteria decision-making algorithm, and a simple additive weight associated with importance-performance analysis and performance rate to visualize the framework findings. The proposed framework provides a useful tool for designers and industrial CV practices to evaluate and select industrial CV trust requirements. The framework addresses the challenges of developing effective and trustworthy VFL models.
In [39], authors present an XAI Federated Deep Reinforcement Learning model aimed at improving decision-making for new Autonomous Vehicles (AVs) in trajectory and motion planning. This model tackles appropriate AV selection for FL and guarantees explainability and trustworthiness. Using XAI, it determines each feature's importance and AV's trust value. A trust-based deep reinforcement learning model is introduced for selections, showing superior performance in real-world data experiments. The study highlights trust's role in AV selection and proposes an innovative XAI-based trust computation method, providing a sophisticated mechanism for new AVs' decision-making.
The main contribution of [40] is a FL model named FedPARL, which aims to reduce the model size while performing sample-based pruning, avoiding misbehaved clients, and considering resource-availability for partial workloads. This is especially useful for resource-constrained IoT clients. FedPARL, a tri-level FL strategy, aids clients in conserving resources throughout training, eliminates unreliable or resource-deficient clients during selection, and allows for flexible local epochs based on client resource availability. An incentive-deterrent framework promotes effective clients and discourages poor-performing or malicious ones. This approach exhibits robustness in constrained FL-IoT environments, and results reveal that FedPARL outperforms existing methods, delivering an enhanced FL solution.
The authors propose a new approach to optimize smart device sampling and data offloading in FL [41]. They formulate a joint sampling and data offloading optimization problem where devices are selected based on their expected contribution to model training. The non-selected devices can transfer data to selected ones based on estimated data dissimilarities between nodes. The proposed approach aims to improve the efficiency and accuracy of FedL by reducing the communication and computational costs. The approach is evaluated using real-world data, and the results demonstrate its effectiveness in improving the performance of FedL.
A new framework is proposed for Importance Sampling FL (ISFL) [42], which addresses the non-i.i.d. data distribution issue. The proposed framework mitigates the performance gap by introducing local importance sampling and formulating the weighting selection of each client as an independent optimization sub-problem. The paper presents theoretical solutions for optimal IS weights and an adaptive water-filling approach for numerical solutions. The model deviation bound is derived between ISFL and centralized training, relaxing the restriction of convexity of loss functions. The proposed framework offers a local importance sampling-based solution to the label-skewed non-i.i.d. data distribution problem in FL.
The Quality Inference (QI) method is proposed to recover the quality of the aggregated updates and participants' datasets [43]. QI uses inferred information across multiple rounds and known per-round subset to evaluate the relative quality ordering. The method assigns scores to contributors based on The Good, The Bad, and The Ugly rules. QI successfully recovers the relative ordering of participant's dataset qualities when secure aggregation is in use, without requiring computational power or background information. The proposed method can be useful in ensuring trustworthy and high-quality FL in a decentralized setting.
The authors propose TrustFL [44], a scheme that utilizes Trusted Execution Environments (TEEs) to ensure the integrity of participants' training executions in FL. FL faces new security challenges due to the lack of direct control over the training process. The proposed scheme randomly verifies a small proportion of
the training processes using TEEs for a customizable level of assurance, while computations are executed on a faster but less secure processor for improved efficiency. TrustFL also employs a commitment-based method with specific data selection to counteract cheating behaviors. The proposed scheme provides high confidence in participants' training executions in FL.
### _Trustworthy Data Sharing_
Centralized data sharing mechanisms are vulnerable to several issues, including single point of failure, data privacy, authenticity, equitable income distribution, and low user engagement. Despite being simple and easy to implement, these limitations hinder their effectiveness. Alternative approaches are needed to ensure secure and trustworthy data sharing, promoting collaboration, and facilitating innovation in data-driven industries [61, 62].
The authors in [63] propose a solution to the problem of data, model, and result trust in cross-domain data sharing by using blockchain and cryptography to establish an endogenous trusted architecture. The proposed reverse auction node incentive mechanism based on high credit preference addresses issues such as low user enthusiasm, unstable data quality, and unfair data sharing benefit distribution. The decentralized, tamper-proof, and traceable nature of blockchain ensures a trusted trading environment, while FL combined with differential privacy enhances user privacy and data security sharing. The proposed approach provides a potential solution for a more secure and efficient data sharing framework, enabling users to participate in data sharing and submit higher quality data.
The authors in [64] present a reliable framework combining FL and blockchain (BC) for enhanced security and trustworthiness in IoT networks. The framework employs a trust evaluation mechanism and introduces a reinforcement-based FL (R-FL) system for managing IoT devices' training models. The study assesses network lifespan, energy usage, and trust, while addressing communication security and device mobility with the adaptive FL-based trustworthiness evaluation (AFL-TE) system.
The integration of blockchain and FL is a promising approach to achieve trusted data sharing with privacy protection. However, existing mechanisms overlook the supervision of the FL model and computing process. To address this issue, [65] proposes a new paradigm for trusted sharing using the concepts of sandbox and state channel. The state channel creates a trusted sandbox to instantiate FL tasks in a trustless edge computing environment, while also solving data privacy and quality issues. An incentive mechanism based on smart contract encourages local devices and edge nodes to participate in FL tasks, and a DRL-based node selection algorithm selects different node sets with the reward of epoch delay and training accuracy. The proposed architecture uses PBFT consensus mechanism to ensure smooth generation of blocks and reduce communication delay for the blockchain platform. The DRL algorithm effectively deals with the node selection problem with a large action space, providing better accuracy and delay performance than traditional algorithms.
The study in [66] presents a blockchain and FL-based architecture for secure data sharing, prioritizing user privacy and data value transmission. The architecture uses on-chain and off-chain storage, user identity authentication, and data integrity verification. It employs an ABAC-based fine-grained access control model with transaction attributes and node reputation. The paper also suggests a state channel-based FL training supervision mechanism, addressing trust challenges in cross-domain data sharing by utilizing a state channel-based trust supervision mechanism. This approach enhances system security and trust while minimally affecting FL efficiency.
FL has the potential to break down data silos and enhance the intelligence of the Industrial Internet of Things (IIoT). However, the principal-agent architecture used in this approach increases costs and fails to ensure privacy protection and trustworthiness in flexible data sharing. In response, the authors propose a secure and trusted federated data sharing (STFS) approach based on blockchain [67]. They first construct an autonomous and reliable federated extreme gradient boosting learning algorithm to provide privacy protection, verifiability, and reliability. Then, they design a secure and trusted data sharing and trading mechanism that includes encryption for secure data storage, threshold aggregation signature to guarantee model ownership, and proxy re-encryption and retrieval for controllable and trusted data sharing. The proposed approach can enhance the connectivity of IIoT data and ensure secure and controlled sharing while protecting privacy and ownership.
The authors in [68] present a blockchain-enabled FL model for the industrial Internet of Things (IIoT) to solve data sharing and model sharing security issues. The proposed data protection aggregation scheme ensures privacy and security in data sharing. Additionally, the paper proposes three distributed ML algorithms: K-means clustering based on differential privacy and homomorphic encryption, random forest with differential privacy, and AdaBoost with homomorphic encryption. These methods enable multiple data protection in complex IIoT scenarios.
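As a rough illustration of the differential-privacy ingredient that appears in such schemes, the sketch below clips a local update and adds Gaussian noise before it leaves the device; the clipping norm and noise multiplier are assumed hyperparameters, and the code is a generic Gaussian-mechanism-style sanitization, not the specific algorithms of [68].

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Sanitize a local model update: clip its L2 norm, then add Gaussian noise
    calibrated to the clipping norm, so the shared update is less revealing."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

local_update = np.array([0.7, -2.3, 1.1])
shared_update = privatize_update(local_update)   # what actually leaves the device
```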
The concept of Self-Driving Networks (SelfDN) and a framework to create distributed and trustworthy SelfDNs across multiple domains are presented in [69]. The framework utilizes programmable data planes for real-time in-band telemetry (INT), AI for automatic network device code rewriting, and blockchain and FL for decentralized knowledge sharing among domains. The effectiveness of this approach is demonstrated through a proof-of-concept experiment where INT, Deep Learning, and P4 were used to autonomously detect and mitigate application-layer DDoS attacks at the data plane level. The proposed framework provides a promising approach to building self-driving networks that are secure, trustworthy, and effective in detecting and responding to network threats.
In [70], the authors propose a blockchain-supported FL (BFL) marketplace, integrating the social Internet of Things (SIoT) to enable FL on devices with computational constraints. The BFL marketplace allows devices to exchange FL services, with blockchain standardizing market operations and logging transactions. The paper introduces a trust-enhanced collaborative learning strategy (TCL) and a quality-focused task allocation algorithm (QTA) for handling trust relationships among heterogeneous IoT devices and directing FL task allocation to proficient devices for optimal training quality. To maintain long-term stability, an encrypted model training scheme (EMT) is designed to defend against malicious attacks, and a contribution-based delegated proof of stake (DPoS) consensus mechanism ensures equitable reward distribution. These algorithms successfully utilize data from computationally limited devices for FL through SIoT while safeguarding security and fairness in the BFL marketplace.
### _Trustworthy Model Selection_
In FL, it is crucial to evaluate the contributions of participants to the performance of the final model while ensuring privacy. The widely adopted approach is to use Shapley Value (SV) techniques. However, existing SV-based approaches are computationally expensive and impractical for real-world applications. To tackle this issue, the authors in [45] introduced the Guided Truncation Gradient Shapley (GTG-Shapley) approach, which reduces the computation cost of SV-based FL participant contribution evaluation. Unlike traditional methods, GTG-Shapley does not require extra learning tasks from participants: it reconstructs FL sub-models from their previous gradient updates instead of training them from scratch. Additionally, GTG-Shapley employs guided Monte Carlo sampling to further reduce the number of required model reconstructions and evaluations, thereby enhancing the efficiency of SV computation and offering a more practical and scalable solution for fair participant contribution evaluation.
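To make the mechanism concrete, the following is a minimal sketch of permutation-sampled, truncated Monte Carlo Shapley estimation in the spirit of GTG-Shapley. The function and parameter names, and the `evaluate()` helper that scores a sub-model reconstructed from gradient updates on a validation set, are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of truncated Monte Carlo Shapley estimation for FL participant
# contribution; `evaluate(subset)` is assumed to return the utility (e.g.,
# validation accuracy) of a model reconstructed from the updates of `subset`.
import random


def mc_shapley(clients, evaluate, num_permutations=100, truncation_tol=1e-3):
    phi = {c: 0.0 for c in clients}
    full_utility = evaluate(list(clients))
    for _ in range(num_permutations):
        order = random.sample(list(clients), len(clients))
        prev_utility = evaluate([])          # utility of the empty coalition
        coalition = []
        for client in order:
            # Truncation: once the coalition is close to the full utility,
            # remaining marginal contributions are treated as negligible.
            if abs(full_utility - prev_utility) < truncation_tol:
                break
            coalition.append(client)
            curr_utility = evaluate(coalition)
            phi[client] += curr_utility - prev_utility
            prev_utility = curr_utility
    return {c: v / num_permutations for c, v in phi.items()}
```

The truncation step is what keeps the number of sub-model reconstructions manageable: once a sampled coalition already performs close to the full federation, the remaining clients in that permutation are skipped.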
The aggregation of local models from participating clients is a critical component in generating the final global model in FL. However, traditional aggregation methods can be susceptible to adversarial attacks and client failures. To mitigate this issue, the authors of [46] propose a truth inference approach to FL that incorporates the reliability of each client's local model into the aggregation process. The approach models each client's reliability based on their submitted local model parameters and uses these reliability estimates during aggregation to produce a robust estimate of the global model. The authors further enhance the method by considering the model parameters submitted by clients in previous rounds in addition to the current round, providing a more comprehensive evaluation of client reliability. The proposed truth inference approach yields a more robust estimate of the global model, protects against potential adversarial attacks, and accounts for client reliability in the aggregation process, thereby improving the robustness of FL.
In FL, the server aggregates the uploaded model parameters from participating clients to generate a global model. The common practice is to weight the local models evenly, assuming equal contribution from all nodes. However, the heterogeneous nature of devices and data leads to variations in user contribution. To address this issue, the authors in [47] introduce a reputation-enabled aggregation method that adjusts the aggregation weights based on the reputation scores of users. The reputation score is computed from the performance metrics of the local models during each training round. The proposed method showed an improvement of 17.175% over the standard baseline in non-independent and identically distributed (non-IID) scenarios for an FL network of 100 participants. This work considers a mobile network of distributed computing nodes where the performance and reputation of individual nodes vary. The reputation-enabled weighted aggregation is hypothesized to lead to faster convergence and higher accuracy for FL in a mobile environment.
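As a rough illustration of how reliability or reputation scores can enter the aggregation step discussed in [46] and [47], the sketch below blends a client's historical reputation with its latest round performance and then computes a reputation-weighted average of flattened local models. The decay factor and the use of per-round validation accuracy as the performance signal are assumptions for exposition, not the exact rules of either paper.

```python
# Illustrative reputation-weighted aggregation; not the exact scheme of [46] or [47].
import numpy as np


def update_reputation(old_rep, round_accuracy, decay=0.8):
    # Blend historical reputation with the client's latest validation accuracy.
    return decay * old_rep + (1.0 - decay) * round_accuracy


def aggregate(local_models, reputations):
    """Weighted average of flattened local model vectors by reputation score."""
    weights = np.asarray(reputations, dtype=float)
    weights = weights / weights.sum()
    stacked = np.stack(local_models)   # shape: (num_clients, num_params)
    return np.average(stacked, axis=0, weights=weights)
```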
In an effort to improve privacy and reward mechanism, the research work in [48] proposes a block-chained FL (BlockFL) architecture that uses blockchain technology instead of a central entity. With BlockFL, each device computes and uploads its local model update to a miner in the blockchain network and receives a data reward proportional to the number of its data samples. Miners exchange and verify all the local model updates, and run Proof-of-Work (PoW). Once a miner successfully completes PoW, it generates a block that stores the verified local model updates and receives a mining reward. The generated block is added to a blockchain, which is downloaded by the devices. The devices then compute the global model update from the latest block, which is used as input for the next local model update. This architecture allows for on-device ML without central coordination, even when each device lacks its own training data samples.
In [49], a new approach to FL is proposed, focusing on the improvement of learning speed and stability. The approach includes three key components: a node recognition-based local learning weighting method, a node selection method based on participation frequency and data amount, and a weighting method based on participation frequency. The performance of this proposed approach is compared to traditional FL and the results show that it outperforms the traditional method in terms of both learning speed and stability. The paper presents a unique solution for improving the performance of FL through the use of blockchain-based node recognition.
The authors in [50] present a framework for secure and privacy-preserving deep learning (DL) services in IIoT systems. This framework leverages FL to aggregate multiple locally trained models without sharing datasets among participants, thereby overcoming the privacy challenges of traditional collaborative learning. However, FL-based DL (FDL) models can be vulnerable to intermediate result and data structure leakage. The proposed framework comprises a service-oriented architecture that identifies key components and implements a service model for residual network-based FDL with differential privacy (DP) to produce trustworthy locally trained models. The services in the framework ensure secure execution through privacy preservation, while the privacy-preserving local model aggregation mechanism provides further privacy protection. The framework features DP-based residual networks (ResNet50) that use GR to guarantee privacy during federated training, and a trusted curator who adds random noise to the function output to ensure global privacy. Additionally, DP is leveraged in the DL model architecture to generate DP local model representations during training, providing extra protection against data leakage. The DP approach has two important properties: composition and post-processing.
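For intuition, the following is a minimal sketch, under assumed settings, of the clip-and-add-Gaussian-noise step that underlies DP model updates of this kind; the clipping norm and noise multiplier are illustrative parameters, not the configuration used in [50].

```python
# Minimal sketch of sanitizing a model update with the Gaussian mechanism:
# clip the update to bound per-client sensitivity, then add calibrated noise.
import numpy as np


def dp_sanitize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng()
    # Clip the update so that any single client's contribution is bounded.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Add Gaussian noise scaled to the sensitivity bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped + noise
```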
A new framework for robust FL is proposed in [51] to address the vulnerability of FL systems to malicious clients. These clients can send malicious model updates to the central server, leading to a degradation in learning performance or targeted model poisoning attacks. To tackle this issue, the proposed framework uses spectral anomaly detection to detect and remove the malicious model updates. The framework is evaluated in image classification and sentiment analysis tasks, and results show that the low-dimensional embeddings used in the framework can easily differentiate the malicious updates from normal updates, leading to targeted defense. The proposed solution is an effective method to address the threat of adversarial attacks in FL systems.
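A hedged sketch of the underlying idea is shown below: client updates are projected onto a low-dimensional spectral embedding, and clients whose updates have an unusually large reconstruction error are flagged. The SVD-based embedding and the z-score threshold are assumptions for illustration, not the exact detector of [51].

```python
# Hedged sketch of spectral anomaly detection over client model updates.
import numpy as np


def flag_malicious(updates, k=3, z_thresh=2.5):
    X = np.stack(updates)                       # (num_clients, num_params)
    X_centered = X - X.mean(axis=0)
    # Low-dimensional embedding via truncated SVD.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    embedding = X_centered @ Vt[:k].T
    reconstruction = embedding @ Vt[:k]
    # Updates far from the dominant subspace get large reconstruction errors.
    errors = np.linalg.norm(X_centered - reconstruction, axis=1)
    z_scores = (errors - errors.mean()) / (errors.std() + 1e-12)
    return [i for i, z in enumerate(z_scores) if z > z_thresh]
```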
Automated FL (AutoFL) in [52] is proposed to enhance model accuracy and simplify the design process. The focus is on using Neural Architecture Search (NAS) for automation, leading to the development of a Federated NAS (FedNAS) algorithm. This algorithm enables the scattered workers to work together and find an architecture with improved accuracy. The implementation of FedNAS is also demonstrated through a system build.
The Federated Graph Convolution Network (FGC) in [53] is a new approach to recommendation systems that combines privacy and accuracy. Unlike traditional methods that gather raw data, the FGC approach allows clients to keep their data locally and only upload model parameters to a central server, which protects privacy while improving prediction accuracy. The FGC approach also features a model segmentation method that adjusts to varying weight dimensions, ensuring global weight aggregation. Additionally, it improves the calculation of service node embeddings by focusing only on relevant data, reducing the impact of noise and increasing trustworthiness and accuracy. The goal of recommendation systems is to leverage historical behavior and knowledge, but model accuracy is often limited by the risk of data leakage across multiple departments. The FGC approach addresses these limitations by having clients train locally and upload only model weights to the server, while leveraging overlapping services to optimize local training results, thereby balancing privacy and accuracy while avoiding the risk of data leakage.
A hierarchical framework of federated control for Industrial Internet of Things (IIoTs) is proposed in [54], to address the trustworthiness and privacy preservation of tracking systems. The framework consists of a federated control center, network layer, and a federated control node, and integrates a collaborative Cloud-Edge-End structure and a ML-oriented localization method based on Expectation Maximization (EM). A trustworthy localization model is built using the EM method, which iteratively solves for the latent variable of untrustworthiness probability. The EM-based federated control scheme offers a solution to the trustworthiness issue in IIoTs while preserving privacy.
The authors in [55] introduce the Federated Trustworthy Artificial Intelligence (FTAI) Architecture, which combines the TAI Architecture and FL to provide a secure platform for user data privacy. The proposed model aggregation strategy integrates FedCS and FedPSO, and employs AIF360 to guarantee fairness in the client-side training process by eliminating discrimination. The main contributions of this paper are: (1) a trustworthy and secure architecture for protecting user data privacy; (2) a client system that is tolerant to heterogeneity and low bandwidth; and (3) a fair and improved model.
The research study in [56] introduces the concept of Fine-Grained FL, aimed at decentralizing shared ML models on edge servers. The work outlines a comprehensive definition of Fine-Grained FL in Mobile Edge Computing systems and the key requirements of these systems, including personalization, decentralization, incentives, trust, and efficiency in communication and bandwidth. To ensure trustworthy collaboration, the authors propose a Blockchain-based Reputation-Aware Fine-Grained FL system that provides all participants with reputation information via Frontend DApps. The system leverages Ethereum's public blockchain and smart contract technologies to compute trustworthy reputation scores and aggregate them for model selection and aggregation. The reputation information about each device acts as a deterrent against malicious, faulty, and ghost devices, helping to achieve the requirements of Fine-Grained FL.
In [57], the authors propose a new FL system design that ensures the security of individual model updates during the learning process. The system enables clients to provide encrypted model updates while the cloud server performs aggregation. The design differs from previous works by supporting lightweight encryption and aggregation, and by being resilient against dropped-out clients with no impact on their future participation. The system handles client drop-out while keeping their secret keys confidential. To improve communication efficiency, the authors employ quantization-based model compression together with secure aggregation, and they present mechanisms that make the system more resilient against adversarial servers. Experiments on real-world datasets show that the system achieves accuracy comparable to plaintext baselines with practical performance. By integrating a carefully chosen aggregation protocol, the system offers practical encryption of model updates at the client and aggregation of encrypted model updates at the cloud server.
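As a rough illustration of the quantization-based compression idea (not the specific codec in [57]), the sketch below uniformly quantizes a model update to 8-bit integers before upload and restores it on the server; the bit width and the min-max scheme are assumptions.

```python
# Illustrative uniform quantization of a model update prior to encryption/upload.
import numpy as np


def quantize(update, num_bits=8):
    lo, hi = float(update.min()), float(update.max())
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((update - lo) / scale).astype(np.uint8)
    return q, lo, scale                 # metadata needed to dequantize


def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo
```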
In [58], the authors introduce a hybrid blockchain architecture, PermiDAG, to address the transmission load and privacy concerns in FL systems. The architecture combines a permissioned blockchain maintained by road-side units (RSUs) with a local Directed Acyclic Graph (DAG) run by vehicles for efficient data sharing in the Internet of Vehicles (IoV). The authors also propose an asynchronous FL approach using Deep Reinforcement Learning (DRL) for node selection to improve efficiency. The learned models are integrated into the blockchain and their quality is verified through two-stage validation, ensuring the reliability of shared data. The authors aim to improve the security and reliability of model parameters through the proposed hybrid architecture and FL approach.
In [59], the authors present a method for improving the performance of FL models by assigning a reputation score to individual models. This score is calculated from various performance metrics and is used to select the best models for aggregation. The proposed scheme ensures that the final aggregated model is of higher quality because it is based on models with high reputation scores. The reputation score is a novel addition to the FL process that enhances the performance and reliability of the final model.
The authors propose a reputation opinion-based PoL consensus protocol for edge blockchain in the IIoT, which enhances the trustworthiness of edge intelligence [60]. The protocol employs a smart contract to obtain reputation opinions, reducing the impact of malicious or accidental anomalies and minimizing reliance on trusted intermediaries. Trustworthy edge intelligence is achieved by adopting the winner's intelligence through a weighted aggregation of the winner's learning model based on its reputation opinion, rather than completely discarding all local models. The proposed scheme is analyzed in terms of security, latency, and throughput, and simulation results demonstrate its effectiveness. This approach offers a new way to handle model selection in FL, enabling more reliable and trustworthy model aggregation in a distributed setting.
### _Discussion_
In the feature and sample selection domain ([37]-[44]), the studies propose various techniques for efficient and trustworthy feature selection and model optimization. However, they often overlook the trustworthiness of the clients or devices participating in the FL process. To address this limitation, future research should focus on incorporating trust metrics and secure aggregation techniques to ensure the integrity of the selected features and data. In the model selection domain ([45]-[60]), the studies propose different mechanisms to evaluate clients' contributions and enhance the security of the FL process. However, they tend to focus on specific application scenarios or client behaviors, limiting their generalizability. To improve these studies, it is crucial to develop more comprehensive frameworks that consider different aspects of trustworthiness, such as data privacy, model robustness, and client reputation; such techniques should also be adaptable to various application domains and client behavior patterns. In the data sharing domain ([61]-[70]), blockchain-based approaches have been widely adopted to ensure trustworthiness in data sharing for FL. Despite the innovative solutions provided, these studies face challenges in terms of scalability and performance. To overcome these limitations, future research should explore alternatives to blockchain, such as secure multiparty computation and homomorphic encryption, which can provide better scalability and efficiency while maintaining data privacy and integrity. Moreover, the integration of these techniques with FL should be investigated to create a seamless and more trustworthy data sharing process. Overall, the existing literature has made significant strides in addressing trustworthiness in FL, but there is still room for improvement in terms of generalizability, comprehensiveness, and scalability. Future research should focus on developing frameworks that efficiently address the limitations and pitfalls identified in the current studies, paving the way for more trustworthy and robust FL systems.
## VII Fairness aware Trustworthiness in FL
Fairness is a critical aspect of Trustworthy FL since many parties provide data for training the model and eventually receive the same combined global model. Key ideas involve client selection that considers accuracy parity and selection fairness. These concepts measure how evenly performance is distributed across FL client devices, with the goal of reducing bias and decreasing under-representation or non-representation. Furthermore, contribution fairness and incentive mechanism fairness aim to fairly distribute rewards based on each client's input. In this section, we explore existing research on fairness-aware Trustworthy FL. By examining these topics in more depth, we develop a better understanding of the challenges and opportunities in creating a fair and Trustworthy FL framework that is easily accessible to readers. Fig. 5 provides a visual summary of the topics addressed in the context of fairness-aware Trustworthy FL. Moreover, we establish a detailed sub-categorization of all the aspects of fairness-aware Trustworthy FL; an illustration of the refined taxonomy is presented in Fig. 6.
### _Trustworthy Client Selection_
The selection of FL worker nodes using reputation is faced with the issue of the "cold start problem". In previous studies, the assumption is that there is historical interaction data available to evaluate the reputation of the worker nodes. However, this becomes a challenge when the worker node has no prior interaction with the master node and there is a risk of tampering with the reputation values. To address these uncertainties, contract theory is used to mitigate the cold start problem in reputation-based FL worker node selection.
#### VII-A1 Trust and Reputation based Trustworthy Client Selection
A trust-based deep reinforcement learning approach is proposed for client selection in FL [72]. The solution involves a resource and trust-aware DRL strategy that considers the execution time and trust levels of clients to make an efficient selection decision. The proposed solution integrates multiple technologies such as federated transfer learning, IoT, edge computing, trust management and DRL to provide a holistic approach for a detection-driven scenario. A stochastic optimization problem is formulated to determine the selection of IoT devices to which the FL tasks will be sent, with the goal of minimizing execution time while maximizing trust. The optimization problem is then solved using a DRL algorithm that models the server's uncertainty about the resource and trust levels of IoT devices.
In [73], researchers address the challenge of selecting IoT devices to participate in distributed training within FL. Existing approaches primarily consider device resource features, but the authors emphasize that trust should also factor into decision-making. To tackle this, they create a trust-building mechanism between edge servers and IoT devices, introducing DDQN-Trust, a selection algorithm based on double deep Q-learning that considers both the trust scores and energy levels of IoT devices. This solution is integrated with FedAvg, FedShare, FedProx, and FedSGD. By accounting for resource characteristics and trust, the proposed method delivers a holistic solution for organizing IoT devices in FL scenarios.
The lack of profit motivates participants to provide low-quality data and prevents requesters from identifying reliable participants. To address this, the authors propose a horizontal FL incentive mechanism called RRAFL in [76], which combines reputation and reverse auction theory. Participants bid for tasks, and reputation indirectly reflects their reliability and data quality. The reputation mechanism is a critical component of the incentive mechanism: reputation reflects the requester's evaluation of the participants and is saved on the interaction blockchain, which is tamper-proof and transparent. Reputation is based on the participants' data quality and reliability, measured through a model quality detection method and a participant contribution measurement method. It plays a crucial role in the reverse auction process, where participants bid for tasks and are selected based on their reputation and bid price; the selected participants with good reputation and low bid price are rewarded from the budget. The reputation mechanism is designed to incentivize participants to provide high-quality data and actively participate in the FL program, and it allows the requester to make informed decisions about selecting reliable participants with good data quality.
A Robust and Fair FL (RFFL) framework is proposed in [77] to address the challenges of achieving both collaborative fairness and adversarial robustness in FL. The framework relies on a reputation mechanism, which evaluates the contribution of each participant by comparing their uploaded gradients to the aggregated global gradients. By examining the cosine similarity between the two, RFFL can identify non-contributing or malicious participants and remove them. The authors emphasize that their approach does not require any auxiliary or validation dataset, setting it apart from other methods. The main contribution of this work is a framework for achieving both collaborative fairness and adversarial robustness in FL via a gradient-comparison-based reputation mechanism.
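A hedged sketch of this style of gradient-similarity reputation is given below: each client's reputation is an exponential moving average of the cosine similarity between its uploaded gradient and the aggregated global gradient, and clients falling below a threshold are excluded. The decay factor and exclusion threshold are illustrative assumptions, not RFFL's exact parameters.

```python
# Sketch of a cosine-similarity reputation update in the spirit of RFFL [77].
import numpy as np


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def update_reputations(reputations, client_grads, global_grad,
                       alpha=0.9, threshold=0.0):
    new_reps, active = {}, []
    for cid, grad in client_grads.items():
        sim = cosine(grad, global_grad)
        # Exponential moving average of agreement with the global direction.
        new_reps[cid] = alpha * reputations.get(cid, 0.0) + (1 - alpha) * sim
        if new_reps[cid] > threshold:        # keep only contributing clients
            active.append(cid)
    return new_reps, active
```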
#### VII-A2 Blockchain based Trustworthy Client Selection
FEDAR, a trust- and resource-aware FL framework, is proposed in [80] to address the challenges posed by unreliable and resource-constrained FL environments. Specifically, the authors focus on distributed mobile robots that are often resource-limited and may exhibit inconsistent behavior. To address this issue, the authors introduce a trust score for each FL client, which is updated based on its previous performance and resource availability. The authors then use this trust score to select the most reliable and resource-efficient clients for FL training, while excluding those clients deemed untrustworthy. The FoolsGold algorithm [81] is used to identify unreliable participants in FEDAR. In addition, the authors address the straggler effect by applying asynchronous FL, which enables the FL server to aggregate the updates from clients without waiting for the slowest client.
Fig. 5: A visual overview of the Categorization of Fairness aware Trustworthy FL.
The authors use this framework to evaluate the performance of FL on resource-constrained mobile robots in a real-world setting and show that the proposed approach is effective in improving the convergence time and ensuring reliable results.
TrustWorker, a worker selection scheme for blockchain-based crowdsensing that focuses on trustworthiness and privacy protection, is proposed in [82]. TrustWorker leverages the decentralization, transparency, and immutability of blockchains to make the worker selection process trustworthy. The reputation privacy of workers is protected through a deterministic encryption algorithm and a secret minimum heapsort scheme, and reputation comparison is carried out using a two-party comparison protocol [83]. The effectiveness and efficiency of TrustWorker are analyzed theoretically and through experiments. TrustWorker's limitation is its sole focus on worker reputation, without considering the influence of requester reputation on task selection; future work should investigate this gap while balancing privacy and efficiency.
In [85], the authors aim to address the challenge of unreliable data being uploaded by mobile devices in FL, which leads to fraud and low-quality training results. To solve this problem, they propose a new approach that leverages the concept of reputation as a reliable metric to select trusted workers for FL. To calculate reputation efficiently, a multi-weight subjective logic model is applied, considering both the interaction histories of task publishers and recommended reputation opinions. To ensure secure and decentralized reputation management, the authors use consortium blockchain technology deployed at edge nodes. The consortium blockchain acts as a trusted ledger to record and manage the data owners' reputation, improving the reliability of FL tasks in mobile networks.
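The sketch below gives a hedged, simplified view of how a subjective-logic reputation opinion can be formed and blended with recommended opinions, loosely following the multi-weight idea in [85]; the belief/disbelief/uncertainty construction is standard subjective logic, but the specific weights are illustrative assumptions.

```python
# Hedged sketch of a subjective-logic reputation opinion for an FL worker.
def subjective_opinion(positive, negative, prior=0.5, w=1.0):
    """Map counts of positive/negative interactions to an expected reputation."""
    total = positive + negative + w
    belief = positive / total
    disbelief = negative / total
    uncertainty = w / total
    # Expected reputation = belief + prior * uncertainty.
    return belief + prior * uncertainty


def combined_reputation(direct_op, recommended_ops, direct_weight=0.7):
    # Blend the task publisher's direct opinion with recommended opinions.
    rec = sum(recommended_ops) / len(recommended_ops) if recommended_ops else 0.0
    return direct_weight * direct_op + (1 - direct_weight) * rec
```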
In [86], a blockchain-based reputation evaluation method is proposed to improve the security of FL against poisoning attacks. The method evaluates the reputation of each participant based on three factors: interaction reputation, data reputation, and resource reputation. By doing so, trusted participants can be selected and malicious participants can be identified. To enhance the detection of malicious behavior, a combination of multi-domain detection and a distributed knowledge base is proposed. A feature graph based on a knowledge graph is designed to store and manage multi-domain feature knowledge. The proposed solution aims to ensure the robustness of FL by identifying trustworthy participants.
TrustRe, a reputation evaluation scheme for improving worker selection in FL, is proposed in [87]. The approach goes a step further than existing work, which relied on subjective judgment for reputation evaluation: it uses the quality of model training as the basis of reputation evaluation and stores the reputation values on a blockchain, ensuring privacy protection for workers and accurately recording reputation history. The contribution also includes the design of a new blockchain platform for decentralized reputation management. Additionally, the authors propose a consensus algorithm called proof of reputation (PoR) to aggregate FL models. PoR uses worker reputation as a factor in the competition for the bookkeeper role in the blockchain ledger, improving the quality of the global model and encouraging worker participation. Based on the workers' reputation values, a leader worker is selected in the blockchain network, responsible for responding to requests and facilitating consensus.
Fig. 6: An Illustration of fairness-aware trustworthy FL settings, showcasing the key elements and relationships that contribute to achieving fairness and trustworthiness in FL environments.
In [74], the authors address several limitations of traditional FL that hinder its use in untrusted environments. These limitations include low motivation among clients, a high rate of client dropouts, potential model poisoning and stealing, and unauthorized access to the model. To overcome these issues, the authors propose a blockchain-based FL aggregation protocol that is divided into seven stages. This protocol has been implemented as an Ethereum smart contract and is designed to incentivize clients to participate in the training process. The proposed deposit mechanism differentiates between trusted and untrusted clients by imposing different deposit requirements: trusted clients pay lower deposits, while untrusted clients pay up to three times more. The authors believe that this approach encourages clients to be more responsible and trustworthy in their participation in the FL process, ultimately leading to improved FL performance and security.
The authors in [79] aim to address the challenges of secure and efficient participant selection in FL. The proposed blockchain-enabled FL architecture consists of two phases. The first phase is a numerical evaluation, which serves to prevent malicious devices from participating in FL. The second phase involves a participant-selection algorithm that allows the FL server to select the most suitable group of devices for each round of FL training. The numerical evaluation and participant-selection algorithm work together to ensure that only trustworthy devices participate in FL, thereby mitigating the risk of malicious attacks and ensuring the privacy and security of the training data.
The research work in [84] presents the TrustFed framework, a decentralized and trustworthy framework for Crowdsourced FL (CDFL) systems. The use of CDFL in ML has been hindered by issues such as model poisoning attacks and the lack of fairness in the training process. The TrustFed framework addresses these issues by incorporating blockchain technology and smart contracts. It removes detected outliers from the training distributions before aggregating the model updates, thereby ensuring fairness in the training process. Additionally, the framework maintains the reputation of participating devices by recording their activities on the blockchain, encouraging honest model contributions. In conclusion, the TrustFed framework offers a solution to the problems of fairness and trust in CDFL systems, ensuring that the training process is secure and reliable.
#### VII-A3 Contextual Optimization based Trustworthy Client Selection
The authors in [71] study client selection in FL with a focus on minimizing the average model exchange time while satisfying long-term fairness and system constraints. The authors transform the offline problem into an online Lyapunov optimization problem and quantify the long-term fairness of client participation rates using dynamic queues. They propose a Contextual Combinatorial Multi-Arm Bandit (C2MAB) model for estimating model exchange time based on client contextual properties and historical performance. The proposed fairness-guaranteed selection algorithm, RBCS-F, efficiently solves the optimization problem in FL. Theoretical evaluation and real-data experiments show that RBCS-F can ensure long-term fairness and improve training efficiency while maintaining accuracy close to the random client selection scheme of FL.
A risk-aware Stochastic Integer Programming (SIP) based Client Selection method (SCS) for FL has been proposed in [75], to tackle uncertainty in worker nodes' reputation values. This method aims to minimize operational costs while considering reputation uncertainty. SCS selects worker nodes from both high and lower reputation pools, reducing herding issues. SIP addresses uncertainty in worker node reliability, and SCS combines SIP with reputation to select reliable nodes cost-effectively. Defence mechanisms like Foolsgold can be employed to protect FL models from damage, particularly near convergence.
The research work in [78] proposes the MarS-FL framework, a decision support framework for participation in FL, based on market share. MarS-FL introduces two key measures, stable market and friendliness, to evaluate the viability and market acceptability of FL. Using game theoretic tools, the framework predicts the optimal strategies of FL participants and quantifies the impact of market conditions on the final model performance. The results show that the FL performance improvement is bounded by the market conditions, and the friendliness of the market conditions to FL can be quantified using k. The MarS-FL framework provides a systematic approach for evaluating and optimizing FL participation.
The proposed mechanism in [88] aims to enhance the quality of FL by accurately assessing the truthfulness and reliability of clients and selecting the set of clients that maximizes the social surplus in the FL service trading market. The reputation mechanism evaluates clients based on multiple reputation evaluations and is designed to detect malicious or negative behaviors during the FL training process. The mechanism also employs a reverse auction, powered by the deep reinforcement learning (DRL) algorithm D3QN, to select the optimal clients while satisfying weak budget balance, incentive compatibility, and individual rationality. The results demonstrate improved efficiency and economic benefit for FL tasks, as the proposed incentive mechanism motivates more clients with high-quality data and high reputations to participate in FL at lower costs. Table II provides an extensive overview of the research conducted on trustworthy client selection in FL, highlighting the various methods, techniques, and approaches employed to ensure trustworthiness during the client selection process.
### _Trustworthy Contribution Evaluation_
The contribution measurement strategies used in FL systems can be classified into three main categories: self-report based evaluation, Shapley value based evaluation, and influence and reputation based evaluation. Self-report based evaluation involves the data owner directly reporting their level of contribution to the model owner; metrics used to evaluate the contribution include computational resources and data size. Shapley value based evaluation takes into account the participation order of the data owners and assesses their contributions in a fair manner, as typically done in cooperative games. Influence and reputation based evaluation considers the client's impact on the FL model's loss function and their reliability, and may involve techniques like Fed-Influence and reputation mechanisms that can be combined with blockchain. A client's reputation is generally a combination of their internal reputation in a task and their accumulated reputation based on historical records. The authors in [89] focus on the critical aspect of measuring the contributions of users in FL systems. They conduct a comprehensive analysis of the different strategies for evaluating user contributions, examining the factors that can impact the effectiveness of these methods. The paper emphasizes the need for fair and accurate evaluation techniques to assess the contributions of data owners in FL and provides valuable insights into the current state of research in this area. This study highlights the importance of considering user contributions in FL systems and provides a solid foundation for future research in this field. The categorization and the most relevant research works are presented in the following subsections.
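For reference, the Shapley value underlying the second category assigns to participant i its average marginal contribution over all possible coalitions; the notation below is the standard cooperative-game definition rather than a formula taken from any single cited paper.

```latex
\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
\Bigl[\, v\bigl(S \cup \{i\}\bigr) - v(S) \,\Bigr]
```

where N is the set of FL participants and v(S) denotes the utility (e.g., test accuracy) of the model trained by coalition S.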
#### VII-B1 Smart Contract based Trustworthy Contribution Evaluation
The authors in [91] propose a solution to the issue of data validity and quality in FL systems by utilizing the EOS blockchain and IPFS. The system records uploaded updates in a scalable manner and rewards users based on the cost of their training data. To ensure that only valuable updates are validated and rewarded, the authors propose the Class-Sampled Validation Error Scheme (CSVES). This scheme tailors the validation set to a device's data breakdown and checks the validity of the data cost claimed by the device. By implementing CSVES, the system ensures that data quality is maintained while incentivizing users to contribute high-quality updates to the FL model.
A blockchain-based FL (BFL) scheme with a reputation mechanism to motivate data owners to contribute high-quality data has been proposed in [96]. By integrating smart contract technology and blockchain, the authors aim to create a decentralized and trustworthy environment for conducting FL tasks transparently and fairly. The proposed reputation mechanism evaluates model quality and contribution, which encourages data owners to join the BFL and contribute high-quality data for local and weighted global model aggregation and reward allocation. A non-cooperative game-theoretic formulation with an equilibrium point ensures that the highest reward goes to contributors with high-quality data. Moreover, an optional grouping mechanism is proposed to address the high complexity of a large number of participants. The BFL mechanism is expected to improve the credibility and reliability of FL and guarantee model quality.
#### VII-B2 Shapley Value and Consensus based Trustworthy Contribution Evaluation
In [90], a blockchain-assisted FL framework is presented to encourage honest participation and reward fair contributions. The framework employs a new consensus mechanism known as "Proof of Interpretation and Selection" (PoIS), which evaluates the contributions of individual clients by analyzing model interpretation results. PoIS aggregates feature attributions to distinguish prominent contributors and outliers and uses a credit function that considers contribution, relevance, and past performance to determine incentives. The proposed framework has been tested against various types of adversaries, datasets, and attack strategies and found to be robust. This framework offers a promising solution for addressing the issues of traditional FL and ensuring fairness in contribution-based incentivization.
The authors propose a decentralized FL system named BESIFL (Blockchain Empowered Secure and Incentive FL) that leverages blockchain technology to remove the central FL server in [94]. The system has three essential components: an accuracy-based malicious node detection mechanism, a contribution-based incentive mechanism, and an algorithm that coordinates both mechanisms. The malicious node detection mechanism identifies and removes malicious nodes during the training process, while the contribution-based incentive mechanism motivates nodes to participate in FL and rewards them based on their contribution to model training. BESIFL is designed to address the security and incentive issues in FL while also stimulating credible nodes to contribute to the model training. The proposed system applies the consensus algorithm and identity authentication of blockchain to ensure the security of model training and employs mechanisms for accuracy-based malicious node detection, contribution-based node selection, and token-based node incentives.
The varying data quality and heterogeneous data distributions across multiple healthcare institutions pose significant challenges to existing FL frameworks. To address these issues, a Contribution-Aware FL (CAreFL) framework [95] has been proposed, which focuses on fair and efficient contribution evaluation of FL participants. The proposed GTG-Shapley approach allows for fast and accurate evaluation of participant contributions. Moreover, the framework introduces a novel FL model aggregation approach, which selects the best-performing sub-model for the next round of local training instead of always aggregating all received local models. This approach addresses the heterogeneity of data distributions in the healthcare sector. The historical contribution evaluation records are further converted into reputation values for the FL participants, which can serve as a basis for stakeholder management decision support. The CAreFL framework offers a promising solution to the challenges faced by existing FL frameworks in the healthcare sector, improving FL model performance while ensuring privacy and fairness.
#### VII-B3 Privacy Preserving Trustworthy Contribution Evaluation
The authors present a novel architecture for healthcare institutions to collaborate in improving the performance of a global ML model in [92]. The proposed system utilizes a combination of blockchain and secure multi-party computation (SMPC) to ensure data integrity, model versioning, and privacy preservation during model training and ensembling. Unlike traditional methods, the proposed architecture prioritizes general model evaluation over incentivization for fair contribution assessment. This evaluation process is carried out by the blockchain nodes and recorded on tamper-proof storage. The final contribution of each participant's ML model is determined based on its performance on unforeseen data. Additionally, the architecture enables each participant to define their own model structure, taking into account the varying computing power among participants. The proposed hierarchical ensemble FL method promises to advance the field of collaborative ML in healthcare while maintaining the privacy and security of sensitive medical data.
The authors present a decentralized Fair and Privacy-Preserving Deep Learning (FPPDL) framework in [93] that aims to address the issue of fairness in FL models. Unlike traditional FL solutions that provide all parties with the same model regardless of their contribution, FPPDL aims to provide each participant with a final FL model that reflects their individual contribution. To achieve fairness, the authors propose a local credibility mutual evaluation mechanism, and for privacy preservation, a three-layer onion-style encryption scheme. The framework records all transactions, including uploads and downloads, through blockchain technology, which eliminates the need for participants to trust each other or a third party. The proposed framework is designed to create a healthy FL ecosystem where each participant's contribution is valued and accurately reflected in the final FL model.
Table III provides an extensive overview of the research conducted on trustworthy contribution evaluation in FL.
### _Trustworthy Incentive Mechanism_
In this section, we categorize trustworthy incentive mechanism algorithms and methodologies into four sub-categories based on their primary objectives: game theory based, reputation based, contribution based, and privacy and security based trustworthy incentive mechanisms.
#### VII-C1 Game Theory based Trustworthy Incentive Mechanism
In [100], the authors propose a novel approach to designing incentives for a blockchain-enabled FL platform using mechanism design, an economic approach to realizing desired objectives in situations where participants act rationally. The main idea behind the incentive mechanism is to introduce a repeated competition for model updates, so that any rational worker follows the protocol and maximizes their profit. During each round, selected workers choose the best k model updates from the previous round and update their own models based on them. The reward to workers in the previous round is decided by the vote of the next-round workers. The model updates of the next-round workers are in turn evaluated and voted on by workers in the subsequent round, ensuring that they cannot sabotage the system. The authors provide a rigorous theoretical analysis of incentive compatibility based on contest theory and clarify the optimal conditions for the reward policy in a blockchain-enabled FL platform. The contributions of the paper include a competitive incentive mechanism design, a full-fledged protocol that can be implemented on existing public blockchains, and a theoretical analysis clarifying incentive compatibility based on contest theory.
The authors propose a novel FL incentive mechanism called Fair-VCG (FVCG) in [107], which is based on the well-known Vickrey-Clarke-Groves (VCG) mechanism [108]. FVCG incentivizes fair treatment of FL participants and can easily be integrated into existing FL platforms as a standard module. The mechanism aims to optimally share revenues with data owners while encouraging full data contribution and truthful cost reporting. Through the use of a neural network method, FVCG optimizes for social surplus and minimizes unfairness, making it individually rational and weakly budget balanced. The practical applicability of FVCG's economic concepts makes it a promising solution for ensuring fairness in FL.
InFEDge [113] is a blockchain-based incentive mechanism proposed to address challenges related to multi-dimensional individual properties, incomplete information, and unreliable participants in edge computing. The mechanism models the rationality of clients, edge servers, and the cloud, and proposes a hierarchical contract solution to obtain the optimal solution under incomplete information. The existence and uniqueness of the Nash equilibrium is proved with a closed-form solution under complete information. Blockchain is introduced to implement the incentive mechanism in a smart contract and provides a credible, faster, and transparent environment for the system. InFEDge ensures privacy, prevents disturbance from unreliable participants, and provides a credible and transparent environment to effectively manage the incentive mechanism.
Another incentive mechanism to encourage mobile users to participate in FL is presented in [114]. The winner selection problem in the auction game is formulated as a social welfare maximization problem and solved with a primal-dual greedy algorithm. The proposed auction mechanism is guaranteed to be truthful, individually rational, and computationally efficient. The wireless resource limitation makes the winner selection problem an NP-hard problem. The critical value-based payment is proposed to deal with the NP-hard problem of selecting the winning users.
A Social Federated Edge Learning (SFEL) framework is proposed for wireless networks, which leverages social relationships to address the trustworthiness and incentive mechanisms in Federated Edge Learning (FEL) [116]. The SFEL framework uses a social graph model to find trustworthy learning partners with similar learning task interests, and a social-effect-based incentive mechanism to encourage better personal learning behaviors. The proposed incentive mechanism is a Stackelberg game-based model, which can handle both complete and incomplete information to encourage active participation from learners. SFEL addresses the issue of malicious or inactive learners that produce low-quality and untrustworthy model updates, which undermine the viability and stability of FEL.
The authors present FAIR [119], a distributed learning system that integrates three major technical components to ensure quality-aware FL. The system estimates individual learning quality to provide precise user incentives and model aggregation, allocates learning tasks and corresponding payments, and conducts model aggregation in real time. Quality estimation uses the loss reduction during the learning process and leverages historical quality records. The system uses a reverse auction to motivate user participation, where mobile users submit their bids and the platform acts as the auctioneer. A greedy algorithm determines learning task allocation and reward distribution based on Myerson's theorem to maximize collective learning quality within the recruiting budget. The model aggregation algorithm incorporates model quality and filters out non-ideal model updates to enhance the global learning model. The proposed FAIR system is truthful, individually rational, and computationally efficient. The paper's contribution lies in providing a system that ensures quality-aware FL, which is crucial in practical distributed learning scenarios but rarely seen in the literature.
Ensuring stable user participation necessitates a robust, fair, and reciprocal incentive mechanism. FedAB, proposed in [120], is an incentive strategy and client selection method based on a multi-attribute reverse auction mechanism and a combinatorial multi-armed bandit (CMAB) algorithm. FedAB contributes a local contribution evaluation technique tailored for FL, a payment mechanism fostering individual rationality and truthfulness, and a UCB-based winner selection algorithm to maximize utility, fairness, and reciprocity.
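To illustrate the bandit flavor of such winner selection, the sketch below ranks clients by an upper confidence bound over their observed contributions and picks the top few within a recruiting budget; the reward definition and exploration constant are assumptions for illustration, not FedAB's exact formulation.

```python
# Minimal UCB-style winner selection for FL client recruitment (illustrative only).
import math


def ucb_select(stats, round_t, budget, c=2.0):
    """stats[cid] = (mean_contribution, times_selected); return `budget` winners."""
    scores = {}
    for cid, (mean_contrib, n_sel) in stats.items():
        if n_sel == 0:
            scores[cid] = float("inf")       # try every client at least once
        else:
            # Exploitation (mean contribution) plus exploration bonus.
            scores[cid] = mean_contrib + math.sqrt(c * math.log(round_t) / n_sel)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:budget]
```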
#### VII-C2 Reputation based Trustworthy Incentive Mechanism
The proposed SRBFL framework [97] is designed to address the challenges faced by FL and provide a secure and trustworthy solution. The framework focuses on improving the reliability of FL devices through an incentive mechanism that uses subjective multi-weight logic, providing a reputation mechanism that incites FL devices to deliver reliable model updates. The results show that the framework is both efficient and scalable, making it a promising solution for FL. The framework also uses blockchain sharding to ensure data reliability, scalability, and trustworthiness, enabling parallel model training and improving the scalability of blockchain-based FL. A lightweight sharding consensus (LWSC) and secure sharding using subjective logic (SSSL) are also used to improve the reliability and security of the proposed mechanism, respectively.
The FRFL scheme in [98] addresses the challenges of FL with battery-constrained UAVs in UAV-supported crowdsensing. The scheme focuses on fairness and robustness, utilizing an optimal contract theory-based incentive mechanism that fosters UAV participation under information asymmetry, with proven truthfulness, contractual feasibility, and computational efficiency. Additionally, the FRFL method leverages edge computing-powered 5G heterogeneous networks for high-speed, low-latency FL services. It employs Byzantine-robust aggregation rules, equitable model profit distribution, and a reputation mechanism to recruit reliable UAVs while deterring free-riding. Simulations confirm the FRFL scheme's efficacy in user utility, communication efficiency, and robustness, highlighting the importance of the incentive mechanism for fostering fair and robust FL in UAV-assisted crowdsensing.
The adoption of FL has been limited by certain challenges, including the lack of a comprehensive incentive mechanism for worker (mobile device) participation and the lack of a reliable worker selection method. To tackle these challenges, the authors present a novel approach in [99] that combines reputation and contract theory to incentivize high-reputation mobile devices to participate in the model training process. First, reputation is introduced as a metric to evaluate the reliability and trustworthiness of the mobile devices. Second, a multi-weight subjective logic model is used to determine each worker's reputation, which is securely managed through a consortium blockchain. Incentivizing participation is achieved through a contract theory-based mechanism, which stimulates workers with high-quality data to join the learning process and encourages high-reputation workers to maintain the accuracy and reliability of their local training data. Additionally, the subjective logic model is used to generate composite reputation values for worker candidates, allowing for the selection of credible and trustworthy participants. This approach introduces third-party miners, which can lead to model leakage, and the blockchain also has scalability issues that could increase communication delay for FL. The authors aim to create an incentive-aware platform that ensures the participation of devices in model training while taking into account the communication delays caused by the blockchain's scalability problems.
The authors in [101] propose a reputation-based selection methodology and an auction-driven incentive scheme to improve the participation of data owners while maintaining the desired level of platform utility and aggregated model performance. The reputation score is based on the performance of the local models, and the incentive mechanism is designed to reward participants fairly for their contributions. The compensation of the users is dynamically adjusted to distribute the benefits more fairly, while ensuring positive user utility and maintaining the aggregated model accuracy. Building on the Trustworthy Sensing for Crowd Management (TSCM) approach discussed in [102, 103], the authors propose a reputation score-based incentive scheme that assigns higher rewards to data owners with higher quality data and better-performing local models. The scheme adopts a reverse auction procedure to adjust the compensation of the users dynamically.
Numerical evaluations have shown that the proposed scheme can improve the user utility while maintaining the platform utility and test accuracy of the global FL model. The ongoing study involves adjusting the frequency of local updates to improve efficiency under limited resources and implementing the adjustable FL algorithm during the training process to improve model performance.
A Blockchain Empowered Secure and Incentive FL (BESIFL) paradigm is proposed in [115] to enhance security and performance in FL. The proposed BESIFL system is fully decentralized, leveraging blockchain technology to enable effective mechanisms for malicious node detection and incentive management. An accuracy-based malicious node detection mechanism is developed to identify and remove malicious nodes, while a contribution-based incentive mechanism with a token-based reward scheme motivates credible nodes to participate in the learning process. An algorithm is designed to coordinate the mechanisms for malicious node detection and contributing node incentive/selection, enabling the BESIFL system to efficiently address these issues. The proposed paradigm presents an innovative approach to enhancing the security and performance of FL.
#### VII-C3 Contribution based Trustworthy Incentive Mechanism
FL in healthcare has gained considerable attention for training ML models. Ensuring the accuracy and fairness of these models is crucial, especially considering the potential for bias, which can lead to disparities in predictive performance across different patient subgroups. To address this issue, the research work in [106] focuses on preventing excessive bias through the use of reward systems. First, the researchers determine the contributions of each institution towards predictive performance and bias using an approximation method based on Shapley values. Subsequently, various reward systems are designed to incentivize high predictive performance or low bias, with a combined reward system incentivizing both. The effectiveness of these reward systems is evaluated using medical chest X-ray datasets, and the results show that they successfully incentivize contributions towards a well-performing model with low bias. The study highlights the need for further research on practical reward distribution strategies, considering the challenge of incentivizing low-bias contributions. Ultimately, the goal is to design a reward scheme that balances high predictive performance and low bias for fair and Trustworthy FL models in healthcare.
A novel approach to incentivizing high-quality contributions in FL through tokenization is presented in [109]. The method allocates tokens based on the value of each client's contribution during the aggregation phase, which incentivizes quality participation and discourages poor updates or attacks in a decentralized network environment such as Web 3.0. The proposed tokenized incentive design accommodates clients with diverse profiles, making it a robust solution for collaborative FL training. The approach is efficient and focuses on the allocation of tokens within budget constraints, known as the quota, to ensure the optimal selection of local model parameters.
The authors propose a fairness-aware incentive mechanism for FL called FedFAIM [117], which addresses the problem of unfairness in existing FL systems. The proposed mechanism achieves two types of fairness: aggregation fairness and reward fairness. Aggregation fairness is achieved through an efficient gradient aggregation method that filters out low-quality local gradients and aggregates high-quality ones based on data quality. Reward fairness is achieved through a Shapley value-based contribution assessment method and a novel reward allocation method based on reputation and the distribution of local and global gradients. The proposed method ensures that each participant is assigned a unique model with performance reflecting their contribution, and the reward allocation mechanism incorporates reputation to determine the model performance level assigned to each participant. Evaluation results show that the mechanism outperforms existing FL systems.
FL Incentivizer (FLI), which incentivizes high-quality data owners to contribute their data to an FL system, is proposed in [112]. FLI is a real-time algorithm that ensures contribution fairness, regret distribution fairness, and expectation fairness while accounting for the interests of both the federation and the data owners. Data owners who have contributed high-quality data receive a higher share of the subsequent revenues generated by the federation. FLI dynamically adjusts data owners' shares to distribute benefits and sacrifices fairly among them, and it produces near-optimal collective utility while limiting data owners' regret and accounting for the temporary mismatch between contributions and rewards. By incentivizing data owners, FLI enables a healthy FL ecosystem to emerge over time.
The authors propose FIFL [110], a fair incentive mechanism for FL that combines workers' contribution and reputation indicators to reward honest workers and punish attackers. FIFL adopts a blockchain-based audit method and a reputation-based server selection method to prevent malicious nodes from manipulating assessment results. FIFL consists of four components: attack detection, reputation, contribution, and incentive modules. The attack detection module removes potentially malicious updates to prevent model damage, and the reputation module measures workers' trustworthiness based on historical events. The contribution module measures workers' current utility to the system, and the incentive module determines workers' rewards based on both reputation and contribution indicators. FIFL is the first profit-sharing solution that works in unreliable federations containing unstable attackers and maintains a constant system revenue in such scenarios. Comprehensive experiments show that FIFL outperforms the baseline models. The study contributes to the development of FL systems by proposing a fair and effective incentive mechanism that incentivizes honest participation while mitigating the negative effects of malicious attacks, and its blockchain-based audit method ensures transparency and prevents fraud.
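As a rough illustration of how reputation and contribution indicators can be combined into rewards (the exact formulas and thresholds used by FIFL [110] may differ), the following sketch shares a fixed budget in proportion to reputation times contribution and pays nothing to low-reputation or harmful workers:

```python
import numpy as np


def allocate_rewards(reputation, contribution, budget=100.0, rep_threshold=0.5):
    """Illustrative reward split: honest, useful workers share the budget in
    proportion to reputation * contribution; low-reputation or harmful
    (negative-contribution) workers receive nothing."""
    reputation = np.asarray(reputation, dtype=float)
    contribution = np.asarray(contribution, dtype=float)
    score = np.where((reputation >= rep_threshold) & (contribution > 0),
                     reputation * contribution, 0.0)
    total = score.sum()
    return np.zeros_like(score) if total == 0 else budget * score / total


# Example: worker 2 is a suspected attacker (negative contribution) and gets nothing.
print(allocate_rewards([0.9, 0.7, 0.8], [0.4, 0.3, -0.2]))
```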
#### V-C4 Security and Privacy based Trustworthy Incentive Mechanism
The authors present a novel deep learning framework named DeepChain in [104], which aims to tackle the security and fairness problems present in traditional FL. The framework is based on blockchain technology and provides a value-driven incentive mechanism to encourage honest behavior from participants. The incentive mechanism in DeepChain builds on two security mechanisms in the blockchain: a trusted time clock mechanism and a secure monetary penalty mechanism. These mechanisms work together to ensure fairness during the collaborative training process and to prevent participants from behaving incorrectly. In addition to the incentive mechanism, DeepChain also guarantees data privacy and provides auditability for the training process. The confidentiality of local gradients is ensured through the use of the Threshold Paillier algorithm, which provides additive homomorphic properties, while auditability is achieved through the universally verifiable CDN (UVCDN) protocol [105]. By incorporating these features, DeepChain provides a secure and fair collaborative training environment for deep learning models: it protects the privacy of local gradients, guarantees the correctness of the training process, and encourages honest behavior from participants through incentives.
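The additive homomorphism that such gradient protection relies on can be illustrated with the `phe` (python-paillier) package, assuming it is installed. Note that this sketch uses a single, non-threshold keypair purely for illustration, whereas DeepChain distributes decryption via Threshold Paillier.

```python
# pip install phe  (python-paillier)
from phe import paillier

# In DeepChain the key would be threshold-shared among participants; a single
# keypair stands in here for illustration only.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each client encrypts one gradient component of its local update.
local_gradients = [0.12, -0.05, 0.31]
ciphertexts = [public_key.encrypt(g) for g in local_gradients]

# The aggregator adds ciphertexts without ever seeing the plaintext gradients.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

# Only the key holder(s) can recover the aggregated gradient.
print(private_key.decrypt(encrypted_sum))   # ~0.38
```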
FGFL [111] is a novel incentive governor for FL that assesses workers' contributions and reputations to reward efficient workers fairly and eliminate malicious ones. FGFL contains two main parts: a fair incentive mechanism and a reliable incentive management system. The incentive mechanism measures workers' utility and reliability and uses the product of the two indicators to determine rewards and punishments. FGFL also includes a blockchain-based incentive management method, FG-chain, which achieves trusted management of incentives through an audit method and a reputation-based server selection method. The paper evaluates the effectiveness and fairness of FGFL through theoretical analysis and comprehensive experiments. The results show that FGFL effectively assesses workers' trustworthiness and utilities, prevents the decline of system revenue caused by attackers, and achieves secure management of FL. The use of a time-decay SLM algorithm and a lightweight method based on gradient similarity enables real-time assessment of workers' contributions and reputations. FGFL contributes to the development of FL systems by proposing a fair and reliable incentive mechanism and management system.
An FL system that leverages blockchain technology, local differential privacy (LDP), and zero-knowledge proofs (ZKPs) to achieve fairness, integrity, and privacy is proposed in [118]. The architecture provides fair incentives to clients by measuring each client's contribution to the global model performance based on actual parameters and rewarding them accordingly through a smart contract on the blockchain. Non-interactive ZKPs preserve the integrity of the FL system by enabling clients to validate fellow clients' model updates without revealing any private data. The blockchain-based design ensures neutrality, immutability, and transparency of the architecture, while LDP ensures that clients' model updates cannot leak information about patterns within their private data. Overall, the proposal is a novel FL architecture that achieves fairness, integrity, and privacy in a practical and scalable manner.
Table IV provides an extensive overview of the research conducted on trustworthy incentive mechanisms in FL, highlighting the various methods, techniques, and approaches employed to ensure trustworthiness during the reward allocation process.
### _Accountability and Auditability in FL_
In this section, we categorize accountability and auditability algorithms and methodologies into two sub-categories based on their primary objectives: smart contract based and committee selection based trustworthy accountability and auditability mechanisms.
#### Vi-D1 Smart Contract based Trustworthy Accountability and Auditability
BlockFLow [122, 123] is a fully privacy-preserving, decentralized FL system that aims to ensure data privacy and attack resilience while evaluating contributions based on the quality of each participant's data. Differential privacy is utilized to protect datasets, and a unique auditing mechanism evaluates model contributions. BlockFLow uses Ethereum smart contracts to incentivize good behavior, resulting in a more accountable and transparent approach.
A blockchain-based architecture for FL systems that improves accountability and fairness is proposed in [125, 126]. To achieve this, a smart contract-driven data-model provenance registry is designed to track and record the local data used for model training, enabling auditing. Additionally, a weighted fair training dataset sampler algorithm is introduced to improve fairness in the presence of heterogeneous class distributions. In this architecture, each client and the central server host a blockchain node to form a network; the blockchain is utilized for its immutability and transparency, and the smart contract improves accountability. The use of blockchain technology to improve accountability has been evaluated previously, and the integration of blockchain and FL is feasible as both systems are decentralized. The effectiveness of the approach is demonstrated using a COVID-19 detection scenario with X-rays.
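A minimal sketch of the kind of weighted fair sampling described above is given below; the inverse-class-frequency weighting is a generic choice for illustration and is not necessarily the exact sampler algorithm of [125, 126].

```python
import numpy as np


def class_balanced_sample(labels, num_samples, seed=0):
    """Draw training examples with probability inversely proportional to the
    frequency of their class, so minority classes are not drowned out by the
    local class imbalance."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts))
    weights = np.array([1.0 / freq[y] for y in labels], dtype=float)
    probs = weights / weights.sum()
    rng = np.random.default_rng(seed)
    return rng.choice(len(labels), size=num_samples, replace=True, p=probs)


# A client with a 90/10 class skew still sees both classes roughly equally.
labels = [0] * 90 + [1] * 10
idx = class_balanced_sample(labels, num_samples=1000)
print(np.bincount(np.asarray(labels)[idx]))   # roughly [500, 500]
```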
#### Vi-D2 Committee Selection based Trustworthy Accountability and Auditability
The VFCchain framework [121] offers verifiable and auditable FL through the use of blockchain technology. It guarantees verifiability by choosing a committee via the blockchain to jointly aggregate models and create verifiable proofs. To achieve auditability, a unique authenticated data structure is introduced to boost the search efficiency of verifiable proofs and facilitate secure committee rotation. VFCchain enhances search efficiency for multi-model learning tasks while penalizing dishonest participants through the withdrawal of pre-frozen deposits. The paper introduces Dual skip chain (DSC), a practical committee selection method and novel authenticated data structure for blockchain. DSC augments verifiable proof search efficiency, enables secure committee rotation, and allows clients to securely traverse the blockchain. Furthermore, a comprehensive audit layer combines independent audit processes to improve model verification and audit performance.
The authors in [124] propose FLChain, a decentralized and auditable framework for FL that addresses the issues of misbehavior, lack of auditability, and lack of incentives. FLChain replaces the traditional parameter server with consensus-based computation, ensuring that the ecosystem is healthy and publicly auditable. The proposed framework provides sufficient incentive and deterrence to distributed trainers, enabling a healthy marketplace for collaboratively trained models: honest trainers receive a fairly partitioned profit from well-trained models based on their contribution, while malicious actors can be detected and heavily punished. Additionally, the paper introduces DDCBF, which accelerates the querying of blockchain-documented information to reduce the time cost of misbehavior detection and model queries.
Table V provides an extensive overview of the research conducted on the accountability and auditability aspects of Trustworthy FL.
### _Discussion_
In our discussion of the selected papers, we focus on their pitfalls and on potential improvements concerning trustworthiness in fairness-aware FL.
Many research works incorporate client selection using reputation mechanisms [71, 72, 73, 75, 76, 77, 81, 84, 85, 87, 88]. Common limitations include incomplete trustworthiness assessment, scalability and overhead concerns, and the lack of a holistic trustworthiness assessment. Many papers focus on a single aspect of trustworthiness, such as client reputation or resource availability, without considering the interplay of various factors, and some proposed techniques may introduce high computational or communication overhead, limiting their applicability in large-scale FL deployments. Future research should focus on comprehensive trust models that incorporate multiple aspects, such as reliability, robustness, and privacy; developing scalable client selection and reputation mechanisms with minimal overhead will help in practical FL deployments. Blockchain-based trustworthiness solutions [74, 79, 82, 83] often face issues such as latency, energy consumption, and storage constraints, which may hinder their use in FL systems. Combining blockchain with other trust-enhancing technologies (e.g., secure hardware, zero-knowledge proofs) may help overcome some limitations and provide a more robust trust foundation, and investigating new consensus protocols with a focus on low-latency, energy-efficient, and lightweight solutions will benefit blockchain-based FL trustworthiness. Application-specific trustworthy client selection works [80, 86] address trustworthiness in specific application domains, which may limit the generalizability of their findings to broader FL settings. There is often insufficient benchmarking and comparison with existing methods, making it difficult to assess the real-world impact of the proposed solutions. Developing trust models and mechanisms that can be easily adapted to various application domains will improve the overall trustworthiness of FL systems, and future research should evaluate and compare proposed solutions using standardized metrics and real-world datasets to assess their practical impact on trustworthiness in FL.
Trust models and assessments primarily concern the evaluation of users' contributions and the establishment of trust among participants. However, the studies [89, 90, 91] provide incomplete trustworthiness assessments, lacking consideration of aspects like reliability and robustness. Moreover, their trust models are narrow in focus, tailored to specific application domains. Incentive mechanisms aim to encourage participant cooperation and ensure fair reward distribution. Several works employ auction or contract-based mechanisms ([99, 107, 114]). While these approaches can provide fair incentives, they may not adequately address the users' privacy concerns. In addition, the use of reputation systems in some studies ([102, 103, 105]) may lead to unfair treatment of new users who have not yet established a reputation. Thus, a more balanced approach, combining both privacy preservation
and fairness, should be investigated. Privacy-preserving techniques are essential for maintaining data confidentiality and user privacy in FL. Techniques like those in [104, 105, 93], however, face limitations in scalability and generalizability, requiring improvements in lightweight privacy-preserving methods.
In conclusion, future research should prioritize developing comprehensive trust models, exploring lightweight and scalable trust and incentive mechanisms, and enhancing privacy preservation. Furthermore, evaluations using standardized metrics and real-world datasets are necessary to assess the practical impact of trustworthiness solutions in FL. By addressing these pitfalls and criticisms, we can significantly improve the learning process and trustworthiness in FL.
## VIII Security and Privacy Aware Trustworthy FL
FL has emerged as a promising approach to train models on distributed data while maintaining data privacy. However, achieving Trustworthy FL necessitates addressing various security and privacy challenges, including verifiability, privacy preservation, and secure aggregation. Data privacy serves as the driving force behind FL's development, and it is crucial that FL models effectively safeguard data privacy throughout their lifecycle to foster trust among participants. Although FL inherently provides a certain level of data privacy, assumptions about the integrity of the various actors and parties within the federation must be made. Preventing information leakage, whether by ensuring secure communication among honest-but-curious federation members or defending against external malicious attacks, must be prioritized. To address data privacy in FL, techniques can be grouped into three main categories: protection against poisoning attacks, preserving privacy in the face of data and model inversion, and ensuring verifiability and trustworthiness. Poisoning attacks can compromise the integrity of training data or manipulate the training process, making it essential to evaluate FL models for their defense mechanisms and effectiveness against such attacks.

Due to FL's decentralized nature, it is vulnerable to numerous security threats, including data poisoning, model inversion, and membership inference attacks. The lack of a central authority further complicates matters, potentially leading to communication bottlenecks and impacting the quality of the trained models. As a result, designing a Trustworthy FL framework that ensures verifiability, secure aggregation, and privacy preservation is critical. Furthermore, privacy concerns arise in FL since global models and local updates can expose data information: adversaries can accurately recover raw data from local updates or exploit differences between consecutive global models to reveal data properties and membership, posing a significant risk to user privacy. Training integrity is often neglected, as participants might not fully execute the protocol due to limited computational resources, which can reduce model accuracy. The trustworthiness of centralized FL centers can be difficult to assess, and communication can become a bottleneck. Therefore, it is crucial to develop a privacy-preserving, verifiable, and decentralized FL framework that safeguards data privacy, protects global models and local updates, ensures training integrity, and alleviates the trust and communication concerns arising from centralized centers. The challenge of creating such a framework that balances participant rights and interests while achieving high-performance models has not yet been sufficiently explored. Moreover, dishonest aggregation by malicious servers can result in harmful aggregated gradients for some or all clients, further emphasizing the importance of addressing these issues. We establish a detailed sub-categorization of all aspects of Security & Privacy aware Trustworthy FL: the refined taxonomy is illustrated in Fig. 8, while Fig. 7 depicts the various security and privacy challenges present within FL environments.
### _Verifiability in Trustworthy FL_
#### Vi-A1 Consensus and Smart Contract based Verifiability
The authors in [127] propose a blockchain-empowered FL framework for the Internet of Vehicles (IoV). The framework offers secure, privacy-preserving, and verifiable FL services by mitigating potential anomalies that may occur at the FL server or during communication. The proposed approach limits access to the blockchain network to only the necessary participants, reducing the attack surface and offering secure and trustworthy services in the IoV.
The authors propose a privacy-preserving and verifiable FL method based on blockchain in [133]. The method consists of three main components: (i) a secure aggregation protocol based on gradient masking and verifiable secret sharing (VSS), which protects against potentially malicious miners in the blockchain and is robust to clients dropping out; (ii) a blockchain structure that integrates global gradient verification into the consensus algorithm using polynomial commitments, effectively defending against tampering attacks and ensuring reliable FL; and (iii) a gradient compression method that reduces communication overhead while maintaining the accuracy of the trained model. Overall, the method combines cryptographic techniques, consensus algorithms, and communication optimization to enhance the reliability and privacy of the FL process. The underlying problem is that, although FL allows a model to be trained without access to the raw data, sharing gradients still poses privacy concerns, and malicious parties may manipulate the aggregated gradients and degrade the model's accuracy.
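The gradient-masking idea behind such secure aggregation protocols can be sketched with pairwise masks that cancel in the sum; the verifiable secret sharing, dropout recovery, and blockchain verification of [133] are omitted in this minimal illustration.

```python
import numpy as np


def masked_updates(gradients, seed=0):
    """Each pair of clients (i, j) agrees on a random mask r_ij; client i adds
    +r_ij and client j adds -r_ij, so the masks cancel in the global sum while
    each individual upload looks random to the server."""
    n = len(gradients)
    rng = np.random.default_rng(seed)
    masked = [g.astype(float) for g in gradients]
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.normal(size=gradients[0].shape)
            masked[i] += r
            masked[j] -= r
    return masked


grads = [np.array([0.1, 0.2]), np.array([0.3, -0.1]), np.array([-0.2, 0.4])]
uploads = masked_updates(grads)
print(np.sum(uploads, axis=0))                                        # the true sum
print(np.allclose(np.sum(uploads, axis=0), np.sum(grads, axis=0)))    # True
```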
BytoChain, introduced in [136], employs verifiers to execute heavy verification workflows in parallel, and Byzantine attacks are detected through a Proof-of-Accuracy (PoA) consensus mechanism. BytoChain reduces the verification overhead for miners and prevents loss of accuracy by tolerating models with limited non-positive gains. The Proof-of-Accuracy consensus effectively detects inferior models and provides a secure and efficient solution for FL.
A BC-based FL approach for device failure detection in IIoT has been proposed in [137]. A Merkle tree is used for
verifiable integrity of each client's data. The impact of data heterogeneity on device failure detection is reduced by a centroid distance weighted federated averaging (CDW_FedAvg) algorithm, and a smart contract-based incentive mechanism is designed to motivate client participation. The authors evaluate the feasibility, accuracy, and performance of the proposed approach using a real industry use case of detecting failures in water-cooled magnetic levitation chillers in air-conditioners.
#### V-A2 Homomorphic Encryption and Hash Function based Verifiability
The authors in [129] introduce PVD-FL, a secure, verifiable, and privacy-centric decentralized FL framework aimed at mitigating security threats such as privacy invasion and integrity breaches. PVD-FL's objective is to facilitate safe deep learning model training in a decentralized setting, eliminating the need for a trusted central entity and circumventing communication restrictions. The work presents an algorithm called Efficient and Verifiable Cipher-based Matrix Multiplication (EVCM), used for the basic computations of deep learning. This algorithm, combined with a suite of decentralized methods, builds the PVD-FL framework, which ensures the confidentiality of the global model and the local updates and offers verification of every training step. Security analysis indicates PVD-FL's capability to counter different inference attacks while upholding training integrity. With no need for a centralized body, the framework enables the secure construction of a global deep learning model. The EVCM algorithm, grounded on lightweight symmetric homomorphic encryption, safeguards the training process, and a range of decentralized algorithms are designed for precise deep learning model training. The system achieves confidentiality, training-step verifiability, and high accuracy with efficient computational and communication costs, and experimentation on real-world datasets verifies PVD-FL's efficacy and efficiency.
The referenced work in [130] presents VerifyNet, a ground-breaking framework that addresses verifiability and privacy concerns in FL. It employs a homomorphic hash function and pseudorandom mechanisms to allow user verification, while a double-masking protocol safeguards local gradient privacy. This secure framework manages user dropouts during training and preserves privacy. VerifyNet's objective is to enable clients to validate cloud server accuracy and protect user privacy throughout the training. The framework consists of two main elements: a double-masking protocol for maintaining gradient confidentiality, and a verification method that combines a homomorphic hash function and pseudorandom approaches to facilitate user confirmation of server results with limited overhead.
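The verification idea can be illustrated with a toy linearly homomorphic hash; real schemes such as the one in VerifyNet operate on gradient vectors in cryptographically large groups, so the parameters below are for illustration only.

```python
# Toy linearly homomorphic hash: H(x) = g^x mod p, so H(a) * H(b) = H(a + b).
# Real schemes hash gradient *vectors* in large groups; the small prime below
# offers no real security and is for illustration only.
P = 2**61 - 1          # a Mersenne prime
G = 3


def hhash(x: int) -> int:
    return pow(G, x % (P - 1), P)


client_updates = [17, 42, 99]                      # quantized gradient values
aggregate = sum(client_updates)

# Clients check the published aggregate against the product of the individual
# hashes without learning each other's updates.
product_of_hashes = 1
for u in client_updates:
    product_of_hashes = (product_of_hashes * hhash(u)) % P

print(product_of_hashes == hhash(aggregate))       # True
```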
The problem with FL is that while it allows for training a model without accessing training sets, the server can extract information from the shared gradients or falsify the calculated result, affecting the accuracy of the model. To address these issues, a privacy-preserving and verifiable FL scheme is proposed in [132]. The scheme processes shared gradients using a combination of the Chinese Remainder Theorem and Paillier homomorphic encryption, providing privacy preservation with low computation and communication costs. Additionally, the bilinear aggregate signature technology is introduced to verify the correctness of the aggregated gradient, ensuring the accuracy of the trained model.
A privacy-preserving decentralized workflow for trusted FL is proposed in [134]. This proof-of-concept uses decentralised identity technologies, such as Hyperledger Aries/Indy/Ursa, to establish secure communication channels for entities with verified credentials to participate in an FL process related to mental health data. The trust model is defined and enforced through decentralised identity standards, and the authentication system is extended by separating Hyperledger Aries agents and controllers into isolated entities. A decentralized peer-to-peer infrastructure, TFL, is presented which uses Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) for mutual authentication and federated ML within a healthcare trust infrastructure. The performance of the TFL approach is improved compared to the previous state-of-the-art while maintaining the privacy guarantees of the authentication techniques and privacy-preserving workflows. The problem addressed in this work is the need for a privacy-preserving and trusted FL workflow for mental health data.

Fig. 7: A visual overview of the attack landscape in the context of security and privacy preservation for FL, highlighting various threats and challenges faced in maintaining a secure and privacy-preserving FL environment.
The authors propose a BC-based PPFL framework in [138] that uses the immutable and decentralized features of blockchain to record information flows such as clients' local and global updates, FL tasks, and data provenance. This provides a verifiable mechanism for FL tasks, which was previously lacking in existing FL approaches. The verification mechanism extends the semi-honest client assumption to a more realistic malicious client assumption, allowing for a secure FL process. Additionally, the blockchain ledger allows each client's contribution to the globally optimized model to be tracked, enabling contribution-based incentive mechanisms and the rewarding of miners who own the improved model. The encrypted model updates are recorded on the blockchain, which allows client contributions to the global model to be tracked and verified. In a preliminary experiment, the authors implemented a basic verification function in which the server acts as a verifier and evaluates the gradients after the aggregate is recovered in each round; the updated global model's performance is then compared to the initial model of that round using the loss function.
The authors in [141] introduce the VERIFL protocol, which is designed to ensure the integrity of data aggregation in FL. The protocol uses a combination of a linearly homomorphic hash and a commitment scheme to prevent the aggregation server from forging results, and the collision resistance of the hash scheme ensures that the server cannot make an honest client accept a false result. The protocol is also secure in that it reveals only the aggregation result computed by the server. The authors demonstrate through experiments that the VERIFL protocol can handle large amounts of data, a high number of clients, and high dropout rates with practical performance. This work addresses the challenge of verifying data aggregation in FL and provides a solution that is both efficient and effective.
In [142], the authors present a security vulnerability in VerifyNet and propose a solution to address it. They show that an attacker can recover a victim's model updates in a reasonable amount of time in VerifyNet. To address this concern, they propose a new scheme called VERSA, a verifiable and privacy-preserving model aggregation method. The main contribution of the paper is the development of VERSA, which uses lightweight primitives such as pseudorandom generators to achieve verifiability of model aggregation in cross-device FL, providing a secure solution for model aggregation in FL.
Fig. 8: A visual overview of the Categorization of Security & Privacy aware Trustworthy FL.
A secure framework for privacy-preserving and verifiable FL is introduced in [143]. The framework uses Homomorphic Encryption (HE) and Verifiable Computation (VC) techniques to ensure both the confidentiality and integrity of the data and models. The authors provide a concrete implementation of the framework using the Paillier additive homomorphic scheme and the LePCoV primitive, which allows for authentication of the computation of the global training model over encrypted data. The authors present several practical use cases for their secure FL solution, which addresses threats to both the training data and model from the aggregation server. The aim of this paper is to provide a secure and verifiable solution for FL, which balances the privacy concerns of individual participants and the need for accurate results.
#### V-A3 Secret Sharing and Multi-Party Computation based Verifiability
In [128], the authors present a novel framework for accountable and verifiable secure aggregation in FL. The approach utilizes secure multi-party computation (SMPC) protocols based on homomorphic proxy re-authenticators (HPRA) and homomorphic proxy re-encryption (HPRE) for secure aggregation, and integrates blockchain technology to enforce penalties for malicious behavior. The proposed framework protects client confidentiality and data verifiability while being flexible and dynamically adjustable to participant dropouts. To demonstrate its feasibility, the authors conduct performance tests using a blockchain prototype system in the context of Internet of Things networks. The framework guarantees the verifiability of data provenance and holds malicious clients accountable, offering a robust solution for secure aggregation in FL.
The VFL framework in [131] is designed to address the privacy and security concerns in FL for industrial IoT applications. The problem with traditional FL approaches is that the shared gradient can still retain sensitive information of the training set, and a malicious aggregation server may return forged aggregated gradients. To address these concerns, the VFL framework uses Lagrange interpolation to set interpolation points for verifying the correctness of the aggregated gradients. This allows participants to independently and efficiently detect forged results with an overwhelming probability, and the computational overhead remains constant regardless of the number of participants. Additionally, the VFL framework also uses the blinding technology and Lagrange interpolation to achieve secure gradient aggregation. The joint model and gradients are protected from the aggregation server, and if no more than n-2 of n participants collude with the aggregation server, the gradients of other participants will not be leaked. The problem addressed in the paper is the security and privacy concerns in FL, where a shared model is trained by aggregating gradients from multiple parties without accessing the training data.
The problem addressed in [135] is the lack of trust in FL due to several critical security requirements, such as reliability and robustness for moderators and privacy for clients. The authors provide solutions for three specific scenarios in FL to address these security requirements. The first scenario is the Single Verifiable Moderator (SVM), where the goal is to verify the correctness of the aggregated result at the client; the proposed solution is a Verifiable Aggregation Protocol (VAP), which can be used to achieve robust FL for non-private gradients. The second scenario is the Single Secure and Verifiable Moderator (SSVM), where the clients' gradients are private and should be protected; the same VAP can achieve a verifiable and private aggregation. The third scenario is Multiple Secure and Verifiable Moderators (MSVM), where the focus is on the robustness of the moderators. To achieve this goal, the authors decentralize a single moderator into multiple moderators and use the classic BGW protocol with Shamir's secret sharing to prevent disruption caused by moderator failure.
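The additive property of Shamir's secret sharing that the MSVM scenario relies on can be sketched as follows: each client shares its (quantized) update among the moderators, each moderator locally adds the shares it holds, and any t moderators can reconstruct only the sum. The field size and quantization are illustrative choices, not parameters from [135].

```python
import random

PRIME = 2**61 - 1           # toy field size for illustration


def share(secret, n, t, rng):
    """Split `secret` into n Shamir shares with threshold t (polynomial of degree t-1)."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total


rng = random.Random(0)
updates = [12, 7, 30]                    # quantized client updates
n_moderators, t = 5, 3

# Each client shares its update; each moderator adds the shares it holds, so
# no single moderator ever sees an individual update.
all_shares = [share(u, n_moderators, t, rng) for u in updates]
per_moderator = [list(col) for col in zip(*all_shares)]
summed_shares = [(col[0][0], sum(y for _, y in col) % PRIME) for col in per_moderator]

# Any t moderators can reconstruct the *sum* of the updates (additivity of Shamir).
print(reconstruct(summed_shares[:t]))    # 49
```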
In FedIPR [139], a novel ownership verification scheme is introduced to secure FL models and protect their intellectual property rights by embedding watermarks into the FL models to verify ownership. The authors emphasize the importance of security in FL and differentiate it from traditional distributed ML with no security guarantees, proposing the concept of Secure FL (SFL) with the goal of building trustworthy and safe AI systems with strong privacy and IP-right preservation.
The authors in [144] introduce a new framework called Certificateless Authentication-based Trustworthy FL (CATFL) for 6G semantic communications, which aims to provide a secure and trustworthy environment for FL while protecting user privacy. The CATFL framework provides mutual authentication between clients and servers to ensure the trustworthiness of the local clients' gradients and the global model. Additionally, a pseudonym generation strategy is proposed to balance the trade-off between user anonymity and identity traceability: the original real identity of each CATFL client can be traced by trusted third parties, enabling the identification of malicious devices. The CATFL framework can resist both poisoning attacks (server-side and client-side) and privacy leakage (gradient leakage and semantic representation leakage), providing a powerful deterrent against malicious threats to FL-based 6G semantic communication systems.
The DP-BFL framework is proposed in [145] to address vulnerabilities of FL in IoT-based SM 3.0 networks. DP-BFL improves upon existing solutions by adopting a local differential privacy (LDP) technique to prevent inference attacks, using decentralized verification and validation mechanisms to mitigate poisoning attacks, and employing a GDP-FedAvg algorithm to tackle membership attacks. Furthermore, the framework incorporates an incentive mechanism to discourage free-riding attacks and encourage more participation, and a QBC mechanism to nominate the consensus leader. DP-BFL's benefits include wiser privacy budget allocation, secure global model aggregation, and a more qualified leader for block and global model construction. Overall, DP-BFL enhances the verifiability and trust of FL in IoT-based SM 3.0 networks.
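A minimal sketch of the local perturbation step behind such LDP protection is shown below; DP-BFL's actual privacy budget allocation and GDP-FedAvg aggregation are more involved than this generic clip-and-add-Laplace mechanism, so treat the parameters as illustrative.

```python
import numpy as np


def ldp_perturb(update, clip_norm=1.0, epsilon=1.0, rng=None):
    """Clip the local update in L1 norm and add per-coordinate Laplace noise.

    After clipping, any two possible updates differ by at most 2 * clip_norm in
    L1 norm, so Laplace noise with scale 2 * clip_norm / epsilon yields an
    epsilon-LDP report of the update (a generic mechanism, not DP-BFL's exact one)."""
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=float)
    norm = np.abs(update).sum()
    if norm > clip_norm:                       # bound the L1 sensitivity
        update = update * (clip_norm / norm)
    noise = rng.laplace(scale=2.0 * clip_norm / epsilon, size=update.shape)
    return update + noise


print(ldp_perturb([0.8, -0.6, 0.4], epsilon=2.0, rng=np.random.default_rng(0)))
```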
SVeriFL [140] is a secure and verifiable FL framework that aims to address malicious server conduct and protect the rights and interests of participants. The proposed system is based on a protocol designed with BLS signatures and multi-party security, which enables secure integrity verification of the parameters uploaded by participants and secure correctness verification of the aggregated results computed by the server. The system also ensures the consistency of the aggregation results received by multiple participants, and the dynamic parameters in the protocol enhance the security of the algorithm. The authors argue that this secure and verifiable framework helps obtain a high-performance FL model while protecting the rights and interests of participants. Table VI presents an extensive overview of the research conducted on trustworthy verifiability in FL.
### _Secure Aggregation_
#### Iii-B1 Consensus based Secure Aggregation
The article [147] proposes a novel serverless FL framework called Committee Mechanism based FL (CMFL) to address the vulnerability of the typical FL system to Byzantine attacks from malicious clients and servers. The CMFL framework utilizes a committee system to monitor the gradient aggregation process and ensure the robustness of the algorithm with convergence guarantee.
The committee members are responsible for filtering the uploaded local gradients and, through a selection strategy, choosing which member-rated local gradients enter the aggregation procedure. Two opposite selection strategies are designed, one favouring accuracy and one favouring robustness. The proposed framework distinguishes between honest and malicious gradients through a scoring system based on Euclidean distance, and the proposed election strategy guarantees the honesty of the committee clients. The experiments show that CMFL achieves faster convergence and better accuracy than typical FL while obtaining better robustness than traditional Byzantine-tolerant algorithms in a decentralized approach. The proposed framework ensures the secure aggregation of local gradients while preserving data privacy.
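A distance-based committee filter of this kind can be sketched as follows; the scoring, election, and committee-rotation details of CMFL [147] are richer, so this only illustrates the Euclidean-distance scoring step with placeholder parameters.

```python
import numpy as np


def committee_filter(candidate_grads, committee_grads, keep_ratio=0.7):
    """Score each candidate gradient by its mean Euclidean distance to the
    committee members' gradients and keep only the closest fraction, which is
    then averaged; the two opposite strategies in [147] differ in which end of
    this ranking they keep."""
    candidates = np.stack(candidate_grads)             # (n_clients, dim)
    committee = np.stack(committee_grads)              # (n_committee, dim)
    dists = np.linalg.norm(candidates[:, None, :] - committee[None, :, :], axis=-1)
    scores = dists.mean(axis=1)                        # lower = more consistent
    n_keep = max(1, int(keep_ratio * len(candidates)))
    keep = np.argsort(scores)[:n_keep]
    return candidates[keep].mean(axis=0), keep


grads = [np.array([0.1, 0.2]), np.array([0.12, 0.18]), np.array([5.0, -4.0])]  # last is an outlier
committee = [np.array([0.11, 0.19]), np.array([0.09, 0.21])]
agg, kept = committee_filter(grads, committee)
print(kept, agg)        # the outlier at index 2 is filtered out
```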
The authors propose a decentralized blockchain-based FL (B-FL) architecture in [149], with secure global aggregation and a Byzantine fault tolerance consensus protocol to ensure security and privacy against attacks from malicious devices and servers. The PBFT consensus protocol is utilized in B-FL to achieve high effectiveness and low energy consumption for Trustworthy FL. The proposed PBFT-based wireless B-FL architecture combines a permissioned blockchain with a wireless FL system to create a trustworthy AI model training environment that resists failures and attacks from malicious edge servers and devices. The article presents the detailed procedures of the PBFT-based wireless B-FL system and characterizes the training latency by considering the communication and computation processes. The combination of secure global aggregation and a Byzantine fault tolerance consensus protocol ensures the integrity and robustness of the FL system.
A decentralized FL framework called BFLC is presented in [159], which uses blockchain for the exchange of local model updates and the storage of the global model, addressing the security issues of centralized systems. An innovative committee consensus mechanism is also devised to reduce the cost of consensus computation and the impact of malicious attacks. The paper further discusses BFLC's scalability in terms of theoretical security, storage optimization, and incentives.
The proposed ensemble FL method [157] provides a tight security guarantee against malicious clients, and an algorithm is introduced to compute certified security levels. The approach builds on existing base FL algorithms by training multiple global models, each learned using a randomly selected subset of clients. The ensemble method improves security by taking a majority vote among the models when predicting labels for testing data. Unlike single-global-model FL, the proposed approach uses a subsample of k clients chosen uniformly at random without replacement from the n clients.
#### V-B2 Blockchain and Smart Contract based Secure Aggregation
The authors present a novel cloud intrusion detection scheme called BFL-CIDS [146] based on blockchained FL. The proposed scheme aims to improve the accuracy of an FL model for detecting distributed malicious attacks in the IoT environment while ensuring data privacy and security. The scheme sends local training parameters to a cloud computing center for global prediction and uses blockchain to store information about the model training process and behavior. To address the issue of false alerts, an alert filter identification module is introduced to filter out such alerts, reduce the cloud workload, and improve the quality of the FL model. The proposed erasure-code-based blockchain storage solution improves the storage performance of the blockchain and reduces storage pressure, using a Hyperledger Fabric extension based on erasure codes to meet the storage needs of the large amounts of alert training data encountered in real scenarios. The proposed scheme provides an efficient and secure approach for detecting malicious attacks in distributed environments.
MiTFed [160] is a decentralized and secure framework based on FL, blockchain, and software-defined networking (SDN) for building efficient intrusion detection models that can detect new and sophisticated security attacks while maintaining the privacy of each SDN domain. MiTFed uses an SMPC-based secure aggregation scheme to combine local model updates and a blockchain-based scheme, built on Ethereum smart contracts, to maintain trustworthy, decentralized, and efficient collaboration. The proposed approach is flexible, transparent, tamper-proof, and scalable.
Fully decentralized FL presents challenges for securing the privacy of Internet of Health Things (IoHT) data due to the lack of training capabilities at all federated nodes, the scarcity of high-quality training datasets, and the need for authentication of participating FL nodes. The authors of [163] propose a lightweight hybrid FL framework that combines blockchain smart contracts with edge training to manage trust, authentication, and the distribution of globally or locally trained models. The framework supports encryption of the dataset, the model training, and the inferencing process, and provides lightweight differential privacy (DP) for IoHT data. To ensure trust in the provenance of training data and models, blockchain and off-chain mechanisms track data lineage and secure the deep learning process, and secure aggregation of models mitigates malicious poisoning attacks by FL nodes. A provenance collection and management graph tracks the lineage of the data, the model, and the transaction history, and supervised learning is used to detect malicious intruder nodes.
FL has significantly improved the performance of automatic modulation recognition (AMC), but the secure sharing of local model parameters remains an issue. To address this, a Blockchain-FL (BFL) framework is proposed for AMC [162], in which the AMC model is cooperatively trained by sharing local model parameters via blockchain. Additionally, a parameter validity evaluation method is designed to weaken the influence of malicious nodes during the aggregation process. The proposed BFL framework greatly improves the anti-attack ability of FL-based AMC schemes by enriching the training samples. A validity evaluation mechanism based on the Criteria Importance Through Inter-criteria Correlation (CRITIC) method is introduced to evaluate and determine the weights of network parameters, ensuring the security of the parameter aggregation process. The combination of blockchain and FL provides a secure and efficient way to train AMC models, making it a promising direction for future research.
#### V-B3 Privacy based Secure Aggregation
DeTrust-FL [158] is a privacy-preserving FL framework that addresses the issue of isolation attacks caused by a lack of transparency during secure aggregation. A decentralized trust consensus mechanism and a functional encryption strategy are utilized by DeTrust-FL to ensure the privacy of the securely aggregated model. The framework achieves consensus among parties by allowing each party to present its 'decision' on a participation matrix that reflects its role in each secure aggregation round and its expectation of the proportion of benign parties in the FL training. This decentralized approach to secure computing mitigates the risk of privacy leaks caused by disaggregation attacks, which have recently been demonstrated to be a potential threat. Experimental results show that DeTrust-FL outperforms other SMPC-enabled solutions in terms of reduced data volume transferred and training time, making it an efficient and effective solution for privacy-preserving FL in a decentralized trust setting.
FL is a communication-efficient approach to train ML models using data from multiple devices, but it also poses challenges such as device information leakage and centralized aggregation server risks. To address these challenges, a Structured Transparency empowered cross-silo FL on the Blockchain (ST-BFL) framework is proposed in [164]. The framework employs homomorphic encryption, FL-aggregators, FL-verifiers, and smart contracts to enable structured transparency components including input and output privacy, output verification, and flow governance. The framework also includes a reputation system that allows authenticated and authorized clients to query the history and authenticity of the deep learning process, datasets used, and training process in a secure manner. Additionally, ST-BFL incorporates secure aggregation to prevent malicious nodes from introducing backdoor poisonous models and supports Intel SGX TEE to add an extra layer of security.
The authors in [166] introduce a novel approach to make FL more robust to device failures or attacks. The proposed algorithm, called RFA, is based on a robust aggregation oracle built upon the geometric median and the smoothed Weiszfeld algorithm. It is designed to ensure convergence even if up to half of the devices send corrupted updates, making it suitable for FL with bounded heterogeneity, and it preserves privacy by leveraging secure multi-party computation primitives. RFA comes in several variants, including a fast one with a single step of robust aggregation and another that adjusts to heterogeneity through on-device personalization. The approach is scalable, efficient, and easy to integrate with existing systems. The paper's contribution lies in presenting an approach that increases trust in the FL process, ensuring privacy and security while maintaining the efficacy of the learning algorithm.
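The robust aggregation oracle at the heart of RFA can be sketched with the smoothed Weiszfeld iteration; the secure multi-party computation wrapper and the personalization variants of [166] are omitted, and the iteration count and smoothing constant below are illustrative.

```python
import numpy as np


def smoothed_weiszfeld(points, weights=None, nu=1e-6, iters=10):
    """Smoothed Weiszfeld iteration for the (weighted) geometric median, the
    robust aggregate used by RFA; `nu` avoids division by zero when the
    iterate coincides with one of the points."""
    pts = np.stack(points)                       # (n, dim)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, dtype=float)
    z = np.average(pts, axis=0, weights=w)       # start from the weighted mean
    for _ in range(iters):
        dist = np.maximum(np.linalg.norm(pts - z, axis=1), nu)
        beta = w / dist
        z = (beta[:, None] * pts).sum(axis=0) / beta.sum()
    return z


updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1]),
           np.array([50.0, -50.0])]              # one corrupted update
print(smoothed_weiszfeld(updates))               # stays close to [1, 1]
```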
The proposed research study in [155] introduces a distributed backdoor attack called Cerberus Poisoning (CerP) that exploits the core assumption of current defense methods in FL. The defensive mechanisms rely on the assumption that a poisoned local model trained with poisoned data is significantly biased compared to those trained with poison-free data. CerP casts the distributed backdoor attack as a joint optimization process of three learning objectives, thereby exploiting the limit of this assumption. The research aims to evaluate the effectiveness of CerP and show that it can successfully evade existing defenses while remaining stealthy and achieving a high attack success rate.
BAFFLE [169] is a defense mechanism that leverages the FL process itself to detect and defend against backdoor attacks. BAFFLE uses a feedback loop that integrates the diverse data sets of different clients into the FL process to uncover model poisoning. By incorporating the views of multiple clients, the approach achieves very high detection rates against state-of-the-art backdoor attacks, even with basic validation methods. The core idea of BAFFLE is to use data from multiple clients not only for training but also for identifying model poisoning, thereby improving the security of FL against backdoor attacks even with simple validation methods.
FL is susceptible to model poisoning attacks (MPAs), in which malicious clients attempt to corrupt the global model by transmitting deceptively modified local model updates, and existing defenses that focus on model parameters struggle to detect these attacks. In [170], the authors propose FLARE, a robust model aggregation mechanism that leverages penultimate layer representations (PLRs) to evaluate the adversarial influence on local model updates. The trust evaluation method used in FLARE assigns trust scores based on pairwise PLR discrepancies, allowing FLARE to aggregate model updates weighted by their trust scores.
#### Vi-B4 Gradient based Secure Aggregation
The authors propose a scheme called Sniper in [148] to eliminate poisoned local models from malicious participants during training. This paper explores the relations between the number of poisoned training samples, attackers, and attack success rate in a FL system. Sniper identifies benign local models by solving a maximum clique problem, and suspected (poisoned) local models are ignored during global model updating. The authors observed that honest user models and attacker models are in different cliques, which they utilize to propose a filtering defense mechanism in Sniper. During every communication, the parameter server runs Sniper to filter parameters updated by attackers, dropping the attack success rate significantly even with multiple attackers present. The proposed scheme ensures secure aggregation of local models while preserving privacy.
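The clique-based filtering idea can be sketched with a small graph construction; the threshold choice and the exact distance measure used by Sniper [148] are not reproduced here, so the values below are placeholders.

```python
import networkx as nx
import numpy as np


def sniper_select(updates, threshold):
    """Connect two local models whose parameter distance is below `threshold`
    and treat the largest clique as the benign set: honest models cluster
    together, while attacker models fall outside that clique."""
    G = nx.Graph()
    G.add_nodes_from(range(len(updates)))
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            if np.linalg.norm(updates[i] - updates[j]) < threshold:
                G.add_edge(i, j)
    # Enumerate maximal cliques and keep the largest; exponential in the worst
    # case, but fine for the modest client counts of one aggregation round.
    return max(nx.find_cliques(G), key=len)


updates = [np.array([0.1, 0.2]), np.array([0.11, 0.19]), np.array([0.09, 0.22]),
           np.array([3.0, -2.0]), np.array([3.1, -2.1])]   # two colluding attackers
print(sniper_select(updates, threshold=0.5))                # the benign clique {0, 1, 2}
```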
A framework called BREA is proposed in [150] to address the challenge of secure and resilient FL against adversarial users. By protecting local updates with random masks, the true values are concealed from the server; however, Byzantine users can modify their datasets or local updates to manipulate the global model. BREA is a single-server secure aggregation framework that utilizes verifiable outlier detection, integrated stochastic quantization, and a secure model aggregation approach to achieve enhanced convergence, privacy, and Byzantine-resilience concurrently. The framework employs a robust gradient descent approach that enables secure computations over the secret shares of the local updates to handle such attacks, and it uses a distance-based outlier removal mechanism [151] to remove the effect of potential adversaries and ensure the selection of unbiased gradient estimators.
A novel deep metric learning method is presented in [152], using an auxiliary S-space to identify complex similarity regions in FL, aiming to extract reliable information from diverse data in FL situations. The primary novelty lies in an
interpretable quantifier for deep metric learning aggregation in FL applications, resulting in a more secure learnable metric for these use cases.
To address vulnerability of FL to poisoning attacks and the limitations of existing defense strategies, a novel method called Truth Discovery based FL (TDFL) is proposed in [153]. Unlike previous methods, TDFL can defend against multiple poisoning attacks without additional datasets and tolerate high proportions of Byzantine attackers. In the Honest-majority setting, TDFL uses a robust truth discovery aggregation scheme to remove malicious model updates, while in the Byzantine-majority setting, it employs a maximum clique-based filter to ensure global model quality. The proposed method is tested in different scenarios and under various types of poisoning attacks, including label flipping, arbitrary, Krum, Trim, and backdoor attacks. The experimental results demonstrate the robustness and effectiveness of TDFL, which can identify and filter malicious users with a 100% detection rate. Additionally, TDFL does not require a validation dataset or a separate server model, making it a practical and effective solution for FL.
An FL framework Fed-SCR [154], for detecting faults and anomalies in smart grids with improved privacy and security, leverages a novel lightweight generative network, called SRC-GAN, for semisupervised learning of anomalous patterns from unbalanced power data. A new aggregation scheme, called Fed-GMA, is presented to handle the issue of noisy gradients during aggregation by replacing the averaging operation with a geometric median. Fed-GMA restricts the availability of active participants by granting selective node involvement, which enables different proportions of fog nodes to participate in each training round. This periodic aggregation approach leads to a decrease in the total number of communication rounds and the total communication overhead. Overall, this study provides a promising approach to improving the performance and privacy of FL-based fault and anomaly detection in smart grids.
FLTrust [156] is a new Byzantine-robust FL method that bridges the gap between the server and the clients. It proposes a new approach to bootstrapping trust in FL: the service provider itself collects a small clean training dataset, called the root dataset, and maintains a model, called the server model, based on it. FLTrust incorporates this root of trust in its new Byzantine-robust aggregation rule. The server assigns a trust score to each local model update, which is larger if the direction of the local model update is more similar to that of the server model update. FLTrust then normalizes each local model update to have the same magnitude as the server model update and computes the average of the normalized local model updates, weighted by their trust scores, as the global model update. By limiting the impact of malicious local model updates with large magnitudes, FLTrust provides a more secure and private way of aggregating local model updates.
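Because the aggregation rule is described quite explicitly, a small numpy sketch of FLTrust-style aggregation is given below; the fallback to the server update when all trust scores are zero is an assumption added for completeness rather than a detail from [156].

```python
import numpy as np


def fltrust_aggregate(client_updates, server_update):
    """FLTrust-style aggregation: ReLU-clipped cosine similarity with the
    server (root-dataset) update gives each client a trust score, each client
    update is rescaled to the server update's magnitude, and the global update
    is the trust-score-weighted average."""
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    scores, normalized = [], []
    for g in client_updates:
        g = np.asarray(g, dtype=float)
        cos = g @ g0 / (np.linalg.norm(g) * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))                           # ReLU clipping
        normalized.append(g * (g0_norm / (np.linalg.norm(g) + 1e-12)))
    scores = np.array(scores)
    if scores.sum() == 0:
        return g0                                              # illustrative fallback
    return (scores[:, None] * np.stack(normalized)).sum(axis=0) / scores.sum()


clients = [np.array([1.0, 0.9]), np.array([0.8, 1.1]), np.array([-5.0, -4.0])]  # last is poisoned
print(fltrust_aggregate(clients, server_update=np.array([1.0, 1.0])))
```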
FL-WBC in [161] is a client-based defense that mitigates model poisoning attacks in FL. The defense is named White Blood Cell and perturbs the parameter space during local training to identify the space where the long-lasting attack effect on parameters resides. The defense provides a certified robustness guarantee against model poisoning attacks and a convergence guarantee to FedAvg after applying it.
The exponential growth of Internet of Things (IoT) devices has led to an increased use of FL for on-device ML. However, conventional FL is susceptible to privacy leakage, as a malicious aggregation server can infer sensitive information from end-devices' local learning model updates. To address this issue, a socially-aware device-to-device (D2D) communication-enabled distributed aggregation-based dispersed FL (DDFL) framework is proposed [165]. This DDFL framework enables robust, privacy-aware, and efficient communication resource usage, with better privacy preservation than traditional FL. The framework involves clustering devices based on social similarity, edge betweenness, and physical distance. The proposed algorithm involves solving three sub-problems: clustering, resource allocation, and local accuracy minimization, using low complexity matching-theory-based solutions and a convex optimizer. The proposed algorithm is validated through numerical experiments, showing its superior performance in terms of learning model accuracy compared to traditional FL. The main contributions of this work are the socially-aware clustering-enabled DDFL framework, a novel clustering algorithm, and a loss function that simultaneously considers packet error rate and local learning accuracy.
The scalability of FL is hindered by the overhead of secure model aggregation across many users, with existing protocols incurring quadratic overhead. Turbo-Aggregate [167] is a secure aggregation framework for FL that tolerates a user dropout rate of up to 50% while substantially reducing the overhead of secure aggregation. A multi-group circular strategy, together with additive secret sharing and novel coding techniques, is used by Turbo-Aggregate to handle user dropouts while ensuring user privacy. The framework has robustness guarantees, and its running time grows almost linearly with the number of users.
The first Certifiably Robust FL (CRFL) framework is proposed in [168] to train models that are robust against backdoor attacks in FL. Existing methods lack certification of their robustness, whereas CRFL provides a general framework to train models with sample-wise robustness certification against backdoors of limited magnitude. The proposed method controls the global model smoothness by applying clipping and smoothing to the model parameters, which relates the certification to FL parameters such as the instance-level poisoning ratio, the number of attackers, and the number of training iterations. The training dynamics of the aggregated model are analyzed via a Markov kernel, and parameter smoothing is proposed for model inference. Overall, CRFL provides theoretical analysis and certified robustness against backdoor attacks, which aim to manipulate local models to fit both the main task and a backdoor task, achieving high attack success rates on backdoored data samples.
#### V-B5 Data Aggregation
A privacy-preserving data aggregation mechanism for FL is proposed to resist reverse analysis attacks [171]. The proposed mechanism, called EPPDA, is based on secret sharing and can aggregate users' trained models secretly without revealing individual user data to the server. EPPDA also has efficient fault tolerance for user disconnections. The design goals of EPPDA are to protect a single user's model changes and prevent the server from initiating reverse analysis attacks. Homomorphisms of secret sharing are adopted in EPPDA to aggregate shared data without reconstruction. The mechanism enables the server to obtain an aggregated result without knowing anything about an individual user's trained model, thereby enhancing the privacy preservation of FL.
An FL-based privacy-preserving data aggregation scheme (FLPDA) is proposed in [172] for the Industrial Internet of Things (IIoT) to protect data security and privacy. FLPDA adopts data aggregation to protect the changes made to individual user models in FL and to resist reverse analysis attacks from industry administration centers. The PBFT consensus algorithm is used to select one of the IIoT devices in each round of data aggregation as the initialization and aggregation node. The scheme combines the Paillier cryptosystem and secret sharing to achieve data fault tolerance and secure sharing. The proposed scheme does not rely on trusted authorities or third parties and has lower overhead in computation, communication, and storage, leading to higher efficiency and faster execution in data aggregation for IIoT. The main contribution of this paper is the FLPDA scheme, which ensures secure and privacy-preserving data aggregation in IIoT and effectively addresses the challenges of data security and privacy protection in IIoT.
In [173], researchers present PPDAFL, a secure data integration technique using FL tailored for IIoT applications. PPDAFL combines secret sharing and FL to shield local model updates and counter reverse engineering. Utilizing the PBFT algorithm, the system independently selects initiating and merging nodes, and fault tolerance is maintained even with multiple IIoT device failures or collusion thanks to secret sharing. PPDAFL surpasses existing methods in communication and computation efficiency, speed, and effectiveness. The paper's key contributions include a secure multi-dimensional data integration approach based on FL, protection of local model changes, and the combination of the Paillier cryptosystem with secret sharing for data security and sharing, making PPDAFL well-suited for IIoT data integration scenarios. Tables VII and VIII provide an extensive overview of the research conducted on secure aggregation and data aggregation in FL, respectively.
### _Privacy Preserving FL_
We categorize privacy-aware Trustworthy FL works into more refined and detailed categories based on their main objectives and the primary privacy and trustworthiness strategies they employ:
#### V-C1 Blockchain and Smart Contract based Privacy Preserving FL
FL is an ML technique that aims to protect data privacy. However, it is prone to security threats, such as model inversion and membership inference, which need to be addressed when applied to Autonomous Vehicles (AVs). The research work in [174] contributes to the field by proposing a novel privacy-preserving Byzantine-Fault-Tolerant (BFT) decentralized FL method for AVs, called BDFL. The proposed BDFL method addresses the privacy leakage problem in FL while ensuring data security in AVs. BDFL employs a Peer-to-Peer (P2P) FL framework with BFT and uses the HydRand protocol and a Publicly Verifiable Secret Sharing (PVSS) scheme to protect AVs' models. Experimental results demonstrate the effectiveness of BDFL in terms of accuracy and training loss on the MNIST dataset. Moreover, the decentralized P2P-FL framework built on the HydRand protocol ensures the method's robustness against node failures. The proposed BDFL method outperforms other BFT-based FL methods, which indicates its superiority in securing data privacy in AVs. Thus, the paper's contribution lies in proposing a feasible and effective FL method for secure model aggregation among AVs in a P2P network while maintaining data privacy.
The authors in [176] address the privacy-preservation challenges in FL, a promising technique for facilitating data sharing while avoiding data leakage. The proposed approach is based on homomorphic encryption and blockchain technology to address Single Point of Failure (SPoF), gradient privacy, and trust issues. Homomorphic encryption is used for gradient privacy protection, while FL trust and blockchain storage issues are solved using a smart-contract-based reputation mechanism and an on/off-chain storage process. The proposed approach is evaluated on the EMNIST dataset, and the results show that it achieves better performance than existing solutions in terms of accuracy and convergence speed while maintaining privacy preservation.
A privacy-preserving and Byzantine-robust FL model based on blockchain is presented in [177]. The proposed model adopts cosine similarity to detect malicious gradients uploaded by malicious clients and utilizes Fully Homomorphic Encryption (FHE) to provide secure aggregation. Additionally, the use
of blockchain facilitates transparent processes and regulation enforcement. The proposed scheme achieves both efficiency and credibility, and extensive experiments demonstrate its robustness and efficiency. The main contributions of this work are the provision of a privacy-preserving training mechanism using the FHE scheme CKKS, the removal of malicious gradients via cosine similarity to provide a trusted global model, and the utilization of blockchain to facilitate transparent processes and enforcement of regulations. This work addresses the challenges of privacy preservation, poisoning attacks, and credibility in FL.
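A minimal sketch of the cosine-similarity screening step is given below, assuming plaintext gradients for readability (the scheme in [177] additionally performs aggregation under CKKS encryption). The reference direction, the zero threshold, and the synthetic updates are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def robust_aggregate(client_grads, reference, threshold=0.0):
    """Average only the client gradients whose direction agrees with the
    reference direction (cosine similarity above the threshold)."""
    kept = [g for g in client_grads if cosine(g, reference) > threshold]
    return np.mean(kept, axis=0) if kept else reference

rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, 5) for _ in range(8)]              # roughly aligned
malicious = [-10.0 * rng.normal(1.0, 0.1, 5) for _ in range(2)]   # flipped, scaled
reference = np.median(np.vstack(honest + malicious), axis=0)       # robust reference
print(robust_aggregate(honest + malicious, reference))             # malicious updates dropped
```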
A decentralized blockchain-based FL model for ensuring trustworthiness and privacy in VNet systems has been proposed in [178]. The framework utilizes a consensus method in the blockchain to enable ML on end devices without centralized training or coordination. A consensus protocol is adopted to guarantee consensus in the fog for critical vehicles. The proposed solution integrates both FL and blockchain to ensure data privacy and network security. The model ensures privacy by adopting blockchain capability along with FL through fog consensus. The adopted Practical Byzantine Fault Tolerance protocol handles faulty nodes. The framework guarantees privacy, trustworthiness, and adherence to delay requirements for vehicles in VNet systems. The proposed solution provides a decentralized approach for mutual ML models on end devices, promoting privacy-preserving and secure ML.
PriModChain, a framework that combines smart contracts, blockchain, federated ML (FedML), DP, and IPFS to enhance privacy and trustworthiness in ML model sharing in an IIoT setting, is presented in [185]. FedML is used as the global ML model sharing approach, while DP ensures privacy of the models. The framework includes smart contracts and EthBC for traceability, transparency, and immutability. IPFS provides low latency, fast decentralized archiving, and secure P2P content delivery. The framework was tested for its feasibility in terms of privacy, security, reliability, safety, and resilience. PriModChain proved to be a feasible solution for trustworthy privacy-preserving ML in IIoT systems and generated excellent results towards the five pillars of trustworthiness.
#### V-C2 Cryptography and Encryption based Privacy Preserving FL
To address the challenge of preserving privacy while defending against poisoning attacks in FL, especially in edge computing environments, the authors in [175] propose a differential-privacy-based FL model designed for edge deployment that is resistant to poisoning attacks. The proposed model utilizes a weight-based detection algorithm that improves detection rates using small validation datasets, reducing communication costs. Moreover, the model protects the privacy of user data and model parameters on honest devices by leveraging differential privacy technology, with noise added dynamically to minimize disturbance and improve accuracy. The main contributions of the paper are a secure and privacy-preserving FL model for edge networks, a weight-based detection scheme to resist poisoning attacks, and an improved differential privacy technique for FL in edge networks. These contributions enable accurate neural network model training while maintaining privacy and security for both data and models in edge computing settings.
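The core privacy-preserving step of such differential-privacy-based FL can be illustrated with a short sketch: clip each client update to a fixed L2 norm and add calibrated Gaussian noise before upload. The clip norm and noise multiplier below are placeholder values, and the dynamic noise adjustment described in [175] is not reproduced here.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Clip a client update to a maximum L2 norm and add Gaussian noise —
    the basic building block of differentially private FL."""
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=float)
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    clipped = update * scale
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

client_update = np.array([0.4, -1.3, 0.7, 2.0])
print(dp_sanitize(client_update))   # clipped to unit norm, then perturbed
```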
DetectPMFL [180], is a privacy-preserving momentum FL scheme for industrial agents, which considers the issue of unreliable agents. A detection method is designed to calculate the credibility of all agents while preventing the server from accessing the model parameters of the agents. Additionally, the privacy issues of convolutional neural networks (CNNs) are investigated, and the Cheon-Kim-Kim-Song (CKKS) encryption scheme is adopted to protect agents' data.
Biscotti [179] is a P2P distributed method for multi-party ML that employs blockchain and cryptographic primitives to facilitate a confidential ML procedure among collaborating clients. This strategy removes the trust requirement of centralized systems and mitigates the risk of poisoning attacks by malevolent participants. The proposed solution is scalable, resilient to faults, and safeguards against recognized attacks. The effectiveness of this technique is demonstrated through evaluation, making it a cutting-edge solution for secure multi-party ML. Notably, Biscotti is a fully decentralized approach that provides privacy-preserving ML without the need for a trusted centralized infrastructure.
The main contribution of authors in [184] is a decentralized FL scheme called PTDFL that enhances both privacy protection and trustworthiness while maintaining accuracy even in the presence of untrusted nodes. The scheme is designed for dynamic scenarios where nodes can join or leave at any time. To achieve these goals, the scheme uses lightweight primitives such as Lagrange interpolation and pseudorandom generators to encrypt gradients and ensure the trustworthiness of aggregated results. It also leverages zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARK) to enhance the trustworthiness of gradients. Additionally, the paper proposes a novel local aggregation strategy that doesn't require a trusted third party to ensure the aggregated results' trustworthiness. Finally, PTDFL is also designed to support data owners joining and leaving during the FL task. The proposed scheme solves the problem of enhancing privacy protection and trustworthiness while maintaining accuracy in a dynamic and decentralized setting.
#### V-C3 Statistical based Privacy Preserving FL
The authors present FEDOBD in [186], a Federated Opportunistic Block Dropout approach that aims to reduce communication overhead while preserving model performance in large-scale deep models. FEDOBD divides models into semantic blocks and evaluates block importance, rather than individual parameter importance, to opportunistically discard unimportant blocks. The block importance measure is not based on the client loss function, enabling the approach to handle complex tasks. FEDOBD's evaluation shows that it outperforms the state-of-the-art method AFD, demonstrating its effectiveness in reducing communication overhead without compromising model performance. FEDOBD addresses the problem of communication overhead in FL by intelligently selecting and dropping unimportant blocks of a deep model.
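The block-selection idea can be sketched in a few lines: partition the model into named blocks, score each block, and upload only the highest-scoring fraction. The mean absolute parameter change used as the score here is only a stand-in, not FEDOBD's actual block importance measure, and the block names and keep fraction are illustrative.

```python
import numpy as np

def select_blocks(block_deltas, keep_fraction=0.5):
    """Keep only the most 'important' semantic blocks; importance here is the
    mean absolute change of the block since the last round (a simple stand-in)."""
    scores = {name: float(np.mean(np.abs(d))) for name, d in block_deltas.items()}
    n_keep = max(1, int(len(block_deltas) * keep_fraction))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return {name: block_deltas[name] for name in ranked[:n_keep]}

rng = np.random.default_rng(1)
block_deltas = {"embedding": rng.normal(0, 0.01, 100),
                "encoder.0": rng.normal(0, 0.20, 100),
                "encoder.1": rng.normal(0, 0.05, 100),
                "head":      rng.normal(0, 0.30, 100)}
kept = select_blocks(block_deltas, keep_fraction=0.5)
print(sorted(kept))   # only the two most-changed blocks would be uploaded
```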
The authors in [181] propose a framework called TP2SF that aims to maintain trustworthy privacy-preserving security in IoT networks. The framework consists of three main components: a trust management module, a two-level privacy-preserving module using enhanced Proof of Work (ePoW) and Principal Component Analysis (PCA), and an intrusion detection module using XGBoost. The proposed privacy-preserving module addresses inference and poisoning attacks by using ePoW and PCA, respectively. The framework also employs feature selection using the Pearson correlation coefficient (PCC) to identify the relevant features for the smart city environment. The proposed framework demonstrates promising results in maintaining trustworthiness and privacy while detecting suspicious activities in IoT networks. Overall, TP2SF offers a comprehensive approach to privacy-preserving security in IoT, addressing various challenges in trust, privacy, and security.
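As an illustration of the PCC-based feature selection step, the short sketch below ranks features by the absolute Pearson correlation with the label and keeps those above a threshold. The threshold value and the synthetic data are assumptions for demonstration and are not taken from TP2SF.

```python
import numpy as np

def select_features_pcc(X, y, threshold=0.2):
    """Keep features whose |Pearson correlation| with the label exceeds a threshold."""
    keep = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) >= threshold:
            keep.append(j)
    return keep

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=200).astype(float)
X = np.column_stack([y + rng.normal(0, 0.5, 200),    # informative feature
                     rng.normal(0, 1.0, 200),         # pure noise
                     -y + rng.normal(0, 0.8, 200)])   # informative, negative correlation
print(select_features_pcc(X, y))                      # expected: [0, 2]
```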
#### V-C4 Trust and Reputation Management based Privacy Preserving FL
A Deep Reinforcement Learning (DRL)-based reputation management mechanism for improving the security and reliability of FL has been proposed in [182]. The proposed mechanism uses the concept of reputation as a metric to evaluate the reliability and trustworthiness of FL workers, and employs the DRL algorithm, Deep Deterministic Policy Gradient (DDPG), to improve the accuracy and stability of FL models. The paper compares the performance of the proposed method with conventional and DQN-based reputation methods, and demonstrates that the DRL-based mechanism outperforms the other two methods. The DDPG algorithm is capable of handling continuous and complex action spaces, and is used to detect unreliable workers in the FL environment. The reputation score of a worker is represented by the reliability of its local model updates, which is evaluated by the server using attack detection schemes. The proposed method addresses the problem of identifying reliable and trustworthy workers in FL, which is critical for FL security.
The authors of [187] propose a trusted decentralized FL algorithm based on the concept of trust as a metric for measuring the trustworthiness of network entities in collaborative multi-agent systems. The proposed method updates trust relations among agents based on evidence of their contributive or non-contributive collaboration towards achieving specific goals. Trust estimates are used in decisions such as access control, resource allocation, and agent participation. The paper presents a mathematical framework for trust computation and aggregation and discusses its incorporation within a decentralized FL setup. The proposed algorithm enhances the security of FL training by enabling trust-based decision-making. Trust can be computed and aggregated based on the specific application, making the approach adaptable to various contexts. Table IX presents an extensive overview of the research conducted on privacy-preserving and security-aware trustworthiness in FL. This table highlights various methods, techniques, and approaches employed to ensure trustworthiness during the privacy-preserving process.
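A minimal sketch of evidence-driven trust updates and trust-weighted aggregation is shown below. The linear update rule, learning rate, and weighting scheme are illustrative assumptions; the mathematical framework in [187] is considerably more general.

```python
import numpy as np

def update_trust(trust, agent, contributed, lr=0.1):
    """Raise trust after contributive evidence, lower it otherwise (clipped to [0, 1])."""
    delta = lr if contributed else -lr
    trust[agent] = float(np.clip(trust[agent] + delta, 0.0, 1.0))

def trust_weighted_average(updates, trust):
    """Aggregate local updates with weights proportional to current trust."""
    weights = np.array([trust[a] for a in updates], dtype=float)
    weights = weights / weights.sum()
    return sum(w * np.asarray(u, dtype=float)
               for w, u in zip(weights, updates.values()))

trust = {"a": 0.5, "b": 0.5, "c": 0.5}
update_trust(trust, "c", contributed=False)        # evidence of poor collaboration
updates = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [-5.0, 5.0]}
print(trust_weighted_average(updates, trust))      # "c" is down-weighted
```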
SCFL in [183], is a Social-aware Clustered FL technique that attains equilibrium between data confidentiality and effectiveness by capitalizing on users' social relationships. It allows trusted parties to establish social groupings and merge their unprocessed model updates before uploading them to the cloud for comprehensive aggregation. Utilizing game theory, the social cluster organization is refined, and a just allocation system discourages free-riders. Additionally, an adaptable privacy safeguard is devised for clusters with minimal trust, enabling the dynamic cleansing of participants' model updates. An iterative, two-sided matching process results in an optimized disjoint partition with Nash-stable equilibrium. SCFL notably enhances model performance without sacrificing privacy, presenting an affordable and viable strategy for addressing the privacy and efficiency obstacles in FL.
### _Discussion_
The main limitations and pitfalls in the current literature on trustworthy verifiability in FL are the reliance on trusted third parties, efficiency trade-offs, and scalability limitations. To address these issues and improve the learning process, future research should focus on the development of trustless solutions, addressing scalability challenges, and optimizing privacy-preserving techniques. Additionally, researchers should explore adaptive and context-aware mechanisms that consider the unique characteristics of various application scenarios and adapt the learning process accordingly. A common limitation in secure and verifiable FL frameworks [127, 128, 129, 130, 131, 132, 133] is the reliance on trusted third parties for authentication and aggregation. This dependence could introduce security vulnerabilities and single points of failure. Future research should focus on developing trustless solutions that eliminate the need for third parties and improve the overall security of FL systems. FL systems incorporating trust [134, 135, 136, 137, 138, 139] often rely on blockchain technology to enhance transparency and trustworthiness. However, these papers fail to consider the efficiency trade-offs and scalability limitations of blockchain-based solutions. Future research should address these limitations and explore more efficient and scalable alternatives that can support large-scale FL systems. Privacy-preserving FL with additional features [140, 141, 142, 143, 144, 145, 139] tackles a diverse range of issues, such as privacy preservation, IP protection, and explainability. However, these studies often overlook the performance overhead introduced by their proposed solutions. For instance, privacy-preserving techniques like homomorphic encryption and verifiable computing employed in [139, 140, 143] can be computationally expensive. Future research should focus on optimizing these techniques to balance privacy and efficiency without compromising the learning process.
Security and attack resistance are major concerns in FL. Some papers focus on detecting and preventing poisoning attacks [152, 155, 161, 169, 148] or malicious clients [150, 151, 156, 157, 158, 162]. However, most of these methods assume some degree of trust in the clients or the aggregator, which may not hold in real-world applications. To enhance trustworthiness, a critical review should consider the assumptions made by these approaches and examine their robustness to various threat models. The research work [171, 172] proposes different schemes to protect data privacy during FL, but they may overlook the trade-off between privacy and learning efficiency. Moreover, they may not adequately address the issue of model robustness against adversarial attacks. The decentralized and blockchain-based methods have been investigated in [149, 160, 163, 164, 165]. These approaches aim to enhance the trustworthiness of FL by leveraging decentralized consensus mechanisms. However,
they may suffer from scalability and communication overhead issues. Additionally, the reliance on blockchain technologies may introduce new vulnerabilities and limitations.
## IX Open Issues and Future Research Directions
In this section, we aim to emphasize the primary challenges related to ensuring Trustworthy FL. Additionally, we will identify potential areas of future research that warrant further exploration. By doing so, we hope to provide researchers with a clearer understanding of the crucial aspects that require attention within the realm of FL. Addressing these key challenges in Trustworthy FL is essential for ensuring the robustness, reliability, and widespread adoption of this technology. As research progresses, it is crucial to focus on these aspects to develop improved methods and techniques in FL.
### _Interpretability Aware Trustworthy FL_
Trustworthy FL faces several key challenges that must be addressed to ensure its effectiveness and reliability. Some of these challenges are related to interpretability, model selection, feature and sample selection, and data sharing.
#### IX-A1 Interpretability
Ensuring that the models used in FL are interpretable is crucial for building trust among stakeholders. This involves making the decision-making process of the models transparent and understandable to both technical and non-technical users. Developing techniques to improve the interpretability of complex models, such as deep learning architectures, is an ongoing challenge in this area.
#### IX-A2 Model selection
Choosing the most appropriate model for a particular task in a federated setting is a significant challenge. Researchers must consider factors such as the heterogeneity of data across participating devices, communication constraints, and computational resources available on each device. Developing robust and efficient methods for model selection in FL is essential for ensuring the quality and reliability of the learning process. FL faces challenges that demand accurate and efficient model update validation methods for non-IID datasets, to detect poisoning attacks. Furthermore, optimizing the number of workers in FL is necessary to balance performance and resource costs.
#### IX-A3 Feature and sample selection
In FL, the data distribution across devices can be non-uniform, leading to potential
issues with feature and sample selection. Addressing these challenges requires the development of robust methods to handle such heterogeneity and to identify the most relevant features and samples for model training. This, in turn, can improve the overall performance and efficiency of the learning process.
#### IX-A4 Data sharing
Trustworthy FL relies on the secure and privacy-preserving sharing of data between devices. This involves designing protocols that allow devices to share information without revealing sensitive information about individual data points or compromising user privacy. Developing methods for secure and efficient data sharing, while maintaining the utility of the shared information, is a critical challenge in the field of FL.
### _Fairness Aware Trustworthy FL_
Fairness-related challenges in Trustworthy FL include addressing biases and discrimination in training data or procedures, which can reduce the fairness of the resulting model. Ensuring a model is fair involves training it with balanced and unbiased data, allowing for generalization across the entire class distribution. A further challenge is quantifying the impact of fairness constraints on the final accuracy and convergence speed. The lack of existing literature providing theoretical analysis of fairness in FL highlights the need for further investigation into this issue.
#### IX-B1 Client Selection
There are several key issues regarding client selection in FL that include a decrease in performance with increasing clients, a communication bottleneck due to multiple transactions, and the possibility of off-chain collusion by malicious clients, which cannot be defended against by the proposed commitment hash scheme.
Selecting FL worker nodes based on reputation is challenging due to the cold start problem, where historical interaction data is required for reputation evaluation. The potential tampering of reputation values also introduces uncertainty. One approach to mitigate this problem is to use contract theory.
In addition, the simulation of the straggler effect in the FL process involved the inclusion of various straggler robots that fail to transmit their local model update within a specific time, leading to a decrease in the overall accuracy of the global model.
FL faces critical challenges including: 1) the assumption that all workers participate in training unconditionally, which is not realistic as some may refuse due to data privacy concerns or lack of incentives; 2) the risk of malicious attacks by workers or free riding behavior where workers provide fake parameters to improve their reputation; 3) the presence of a parameter server resulting in remote data communication and the difficulty of finding a trusted third-party server, limiting the application of FL. In the context of Trustworthy FL, there are several challenges that need to be addressed to ensure a more secure and reliable learning process. These challenges include:
Attack susceptibility: FL processes are vulnerable to various types of attacks, such as poisoning attacks, where malicious workers send incorrect local model updates to mislead the learning process.
Unreliable local updates: Benign workers may unintentionally contribute unreliable local updates due to factors such as unstable channel conditions, device mobility, and energy constraints.
Model convergence issues: The presence of unreliable workers with erroneous local training data can hinder model convergence or prolong convergence time compared to situations involving only reliable workers.
Dynamic device behaviors: Real-world FL deployments often involve devices with changing behaviors, transitioning between reliable and unreliable or malicious states, making it difficult to maintain a stable learning environment. Offline and dropout workers: Workers may go offline or drop out of the FL task due to unstable network connections, high device mobility, or energy constraints, which can negatively impact the overall performance of the learning task.
Vulnerability to adversarial attacks: Addressing the issue of FL susceptibility to attacks, such as poisoning attacks, where malicious workers provide incorrect local model updates, remains a significant challenge.
Malicious worker withdrawal: Detected malicious workers may attempt to withdraw from the FL task, further complicating the learning process. Real-time monitoring challenges: Existing trust models often lack dynamic monitoring mechanisms, making it difficult for FL parameter servers to monitor worker behaviors and minimize the adverse effects from malicious workers in real-time.
#### IX-B2 Contribution evaluation and Incentive Mechanism
A key challenge in FL is designing a fair reward system that compensates local devices proportionally to their data contributions, while addressing issues such as devices' unwillingness to federate and potential dishonest behavior from untruthful devices that could lead to inaccurate global model updates.
Incentivizing participation: Identifying appropriate incentives to encourage clients' participation and sharing of model updates, considering the system costs and security risks involved, is a critical aspect of future FL research.
Evaluating clients' contributions: Developing accurate, efficient, and fair methods to evaluate each client's contribution within FL systems, considering the unique challenges of this learning environment, is crucial for effective incentive mechanisms.
Real-time monitoring: Future research should focus on devising real-time monitoring mechanisms that enable FL parameter servers to detect and mitigate malicious or unreliable worker behaviors, ensuring a secure and reliable learning process.
Reward allocation and fairness: Designing reward allocation schemes that account for the quality of each participant's contribution, maintain collaborative fairness, and offer suitable incentives to retain high-quality participants is vital for FL success.
Free rider problem: The free rider issue is a common problem in collaborative systems, including FL, where some participants, known as free riders, take advantage of the shared resources, knowledge, or benefits without contributing to the system. Free riders can negatively impact FL in various ways, such as inequitable distribution of resources, reduced overall performance, and lower participant motivation.
To tackle the free rider problem in FL, future research should concentrate on a few crucial areas. Creating innovative incentive mechanisms can encourage active participation by rewarding contributors, thereby discouraging free riders. Establishing reputation systems can identify free riders and promote positive involvement by assessing and ranking participants based on their input. Strengthening secure and privacy-preserving communication channels can foster trust and reduce free riding tendencies. Moreover, consistently monitoring participant contributions and imposing penalties on identified free riders can prevent potential exploitation of the system.
Model performance and bias: Future research should emphasize designing reward systems that incentivize both high predictive performance and low bias in FL models, especially in sensitive applications such as healthcare.
### _Security and Privacy Aware Trustworthy FL_
In this section, we outline the key challenges in security and privacy for Trustworthy FL along with the potential future research directions:
Distributed poisoning attacks: Developing robust defenses against coordinated poisoning attacks, particularly when malicious clients collude, is a significant challenge. Future research should investigate strategies to identify and mitigate such attacks in FL environments.
Byzantine-resilience and privacy preservation: Reconciling the need for Byzantine-resilience with preserving privacy in FL environments presents a complex problem. Research should focus on developing frameworks that strike a balance between robustness and privacy protection. Current Byzantine-robust FL methods are still vulnerable to local model poisoning attacks. Research should explore establishing a root of trust to prevent malicious clients from corrupting global models.
Secure aggregation and malicious clients: Existing secure aggregation schemes are often based on semi-honest assumptions, making them vulnerable to malicious clients. Future work should explore mechanisms to enhance the security of FL systems against malicious clients while minimizing communication overhead.
Training integrity and decentralization: Ensuring training integrity in FL is crucial, as incomplete or lazy participation can degrade model accuracy. Research should focus on designing privacy-preserving and verifiable decentralized FL frameworks to guarantee data privacy and training integrity while addressing trust concerns and communication bottlenecks.
Defending against data and model poisoning attacks: FL faces challenges from data poisoning and model poisoning attacks. Investigating appropriate defenses to protect model performance and detect anomalies is crucial for future research.
Resource allocation in FL: Efficient resource allocation schemes are needed to enable the participation of more devices in FL and maintain learning accuracy without significantly increasing convergence time.
Robustness against server failures: The centralized aggregation server in FL might fail due to physical damage or security attacks. Future research should investigate methods to ensure the robustness of FL systems in such scenarios.
Scalability of secure model aggregation: The overhead of secure model aggregation creates a bottleneck in scaling secure FL to large numbers of users. Research should focus on developing efficient and scalable secure aggregation protocols.
Handling user dropouts and unavailability: FL systems need to be robust against user dropouts and unavailability, which can lead to privacy breaches and degraded performance. Future work should focus on designing protocols that can handle user dropouts and maintain privacy guarantees.
Certifiable robustness against backdoor attacks: The goal of certifiable robustness in FL is to protect the global model against adversarial data modifications made to local training datasets. Research should explore methods to certify robustness in FL systems while defending against backdoor attacks.
Security and privacy in edge computing deployments: Addressing security and privacy threats in FL approaches for edge computing, such as resource-constrained IoT devices and data privacy disclosure, is crucial for future research.
In FL, ensuring trustworthy security and privacy presents multiple challenges. Developing a Byzantine-resilient and privacy-preserving FL framework is crucial to protect against collusion between malicious clients engaging in distributed poisoning attacks. Additionally, there is a need to design privacy-preserving and verifiable decentralized FL frameworks that can guarantee data privacy and training integrity while alleviating trust concerns and communication bottlenecks caused by centralization. Addressing the impact of non-IID data on convergence and model performance is essential for effective learning in real-world scenarios. Efficient resource allocation schemes are required to increase the number of participating devices and maintain robustness against user dropouts and unavailability while preserving privacy.
Moreover, defending against various types of attacks, such as data poisoning, model poisoning, and backdoor attacks, is necessary to maintain model performance and integrity. This requires achieving certifiable robustness against adversarial data modification in local training data. In edge computing deployments for FL, balancing robustness and latency is critical to prevent interruptions due to physical damage or security attacks. Overcoming the overhead of secure model aggregation and its scalability is essential for practical applications. Finally, addressing challenges in social cluster-based FL, such as forming stable and optimized social cluster structures, quantifying contributions, ensuring fair revenue allocation, and designing flexible and differentiated perturbation mechanisms, is vital to strike a balance between privacy and utility.
## X Conclusion
Federated Learning (FL) is a significant development in AI, enabling collaborative model training across distributed devices while maintaining data privacy. Addressing trustworthiness issues in FL is crucial. In this survey, we present a
comprehensive overview of Trustworthy FL, exploring existing solutions and well-defined pillars. While there is substantial literature on trustworthy centralized ML/DL, more efforts are needed to identify trustworthiness pillars and metrics specific to FL models and develop relevant solutions. We propose a taxonomy encompassing three main pillars: Interpretability, Fairness, and Security & Privacy. We also discuss trustworthiness challenges and suggest future research directions, providing valuable insights for researchers and practitioners in the field.
## Acknowledgment
This work is supported by the Zayed health science center under fund 12R005.
|
2306.08536 | Pseudoscalar meson decay constants and distribution amplitudes up to
twist-4 in the light-front quark model | In the light-front quark model (LFQM) amenable to the simultaneous study of
both the mass spectroscopy and the wave function related observables, we
examine the decay constants and distribution amplitudes (DAs) up to the
twist-4. The analysis of the heavy pseudoscalar mesons is carried out both in
the $1S$ and $2S$ states. This investigation involves calculating the local and
nonlocal matrix elements $\langle 0 |{\bar q}{\Gamma} q|P \rangle$ using three
distinct current operators ${\Gamma}=(\gamma^\mu\gamma_5,
i\gamma_5,\sigma^{\mu\nu}\gamma_5)$. Considering a general reference frame
where ${\bf P}_\perp\neq 0$ and investigating all available current components,
we examine not only the frame-independence but also the component-independence
of the decay constants. The explicit findings from our study provide the
evidence for the equality of the three pseudoscalar meson decay constants
obtained from the three distinct current operators $\Gamma$. The notable
agreement in decay constants is attained by imposing the Bakamjian-Thomas
construction of the LFQM, namely the meson state is constructed by the
noninteracting quark and antiquark representations while the interaction is
added to the mass operator, which provides the self-consistency condition
replacing the physical mass $M$ with the invariant mass $M_0$ for the
noninteracting quark-antiquark representation of the meson state. In addition
to obtaining the process-independent pseudoscalar meson decay constant,
regardless of the choice of current operators $\Gamma$, we further demonstrate
its explicit Lorentz and rotation invariance. In particular, we present the
analysis conducted on the twist-4 DA derived from the minus component of the
axial-vector current. Finally, we discuss the various twist DAs and their
$\xi$-moments associated with the $1S$ and $2S$ heavy pseudoscalar mesons. | Ahmad Jafar Arifi, Ho-Meoyng Choi, Chueng-Ryong Ji | 2023-06-14T14:35:51Z | http://arxiv.org/abs/2306.08536v2 | Pseudoscalar meson decay constants and distribution amplitudes up to the twist-4 in the light-front quark model
###### Abstract
In the light-front quark model (LFQM) amenable to the simultaneous study of both the mass spectroscopy and the wave function related observables, we examine the decay constants and distribution amplitudes (DAs) up to the twist-4. The analysis of the heavy pseudoscalar mesons is carried out both in the \(1S\) and \(2S\) states. This investigation involves calculating the local and nonlocal matrix elements \(\langle 0|\bar{q}\Gamma q|P\rangle\) using three distinct current operators \(\Gamma=(\gamma^{\mu}\gamma_{5},\gamma_{5},\sigma^{\mu\nu}\gamma_{5})\). Considering a general reference frame where \(\mathbf{P}_{\perp}\neq 0\) and investigating all available current components, we examine not only the frame-independence but also the component-independence of the decay constants. The explicit findings from our study provide the evidence for the equality of the three pseudoscalar meson decay constants obtained from the three distinct current operators \(\Gamma\). The notable agreement in decay constants is attained by imposing the Bakamjian-Thomas construction of the LFQM, namely the meson state is constructed by the noninteracting quark and antiquark representations while the interaction is added to the mass operator, which provides the self-consistency condition replacing the physical mass \(M\) with the invariant mass \(M_{0}\) for the noninteracting quark-antiquark representation of the meson state. In addition to obtaining the process-independent pseudoscalar meson decay constant, regardless of the choice of current operators \(\Gamma\), we further demonstrate its explicit Lorentz and rotation invariance. In particular, we present the analysis conducted on the twist-4 DA derived from the minus component of the axial-vector current. Finally, we discuss the various twist DAs and their \(\xi\)-moments associated with the \(1S\) and \(2S\) heavy pseudoscalar mesons.
## I Introduction
The distribution amplitudes (DAs) of mesons are important non-perturbative ingredients in comprehending a range of the light-cone dominated processes that can be treated via collinear factorization [1; 2; 3], as they offer valuable insights into the nonperturbative makeup of hadrons and the distribution of partons in relation to their longitudinal momentum fractions within these particles. The meson's DA is typically defined as a matrix element of a quark-antiquark bilocal light-cone operator between the vacuum and the meson state in the light-front dynamics (LFD) [4] which provides a natural separation of the meson's momentum into its longitudinal and transverse components. Thus, the LFD appears to be a practical and rigorous framework for computing the DAs of mesons categorizing them according to their increasing twist.
While the leading-twist DA describes the longitudinal momentum distribution of valence quarks inside the meson providing a straightforward interpretation of the partonic structure of the meson, the higher-twist DAs are considerably more abundant as they take into account of various effects including the transverse motion of quarks or antiquarks, the higher Fock states involving extra gluons and/or quark-antiquark pairs, etc. [5]. In the light-cone dominated hard processes, the leading-twist DAs give the dominant contributions and the higher twist contributions are suppressed by a power of the hard scale. As a result, the study of higher twist DAs has received less attention in comparison to the leading twist DAs in the analyses of the hard processes. However, with the higher statistical precision of experimental data expected from KEKII, LHC, JLAB, and the forthcoming Electron-Ion-Collider (EIC) [6; 7], the relevance of the higher twist effects in hadron structure increases, accentuating the growing importance of further exploring higher twist contributions, e.g. in the formalism of TMD factorization [8], in the wide-angle photoproduction of pions [9], and in the nonleptonic \(B\)-meson decay [10] etc. Thus, the quest to obtain essential nonperturbative insights into QCD has spurred numerous theoretical investigations aimed at calculating not only the leading-twist DA but also the higher-twist DAs using various nonperturbative techniques, such as the QCD sum rule [11; 12; 13; 14; 15], the chiral-quark model derived from the instanton vacuum [16; 17], the Nambu-Jona-Lasinio (NJL) model [18; 19], the Dyson-Schwinger equation (DSE) approach [20; 21], and the light-front quark model (LFQM) [22; 23; 24; 25; 26].
In particular, the LFQM is the theoretical framework based on the LFD that has been highly successful in
explaining simultaneously both the mass spectra and the wave function related observables including the electroweak properties of mesons [27; 28; 29; 30; 31; 32; 33; 34; 35]. In this model, mesons are treated as bound states of constituent quarks and antiquarks. The LFQM typically places the constituent quark and antiquark on their respective mass shells, and the spin-orbit (SO) wave function is obtained through the Melosh transformation [36] which is independent of the interaction and is uniquely determined from the ordinary quantum state representation \(J^{PC}\). For the construction of the more phenomenologically accessible LFQM, we applied the variational principle with the trial radial wave function, typically the Gaussian radial wave function, with the Melosh transformed spin-orbit wave functions for the on-mass shell constituent quark and antiquark to provide simultaneous analyses of both the mass spectra and the wave function related observables such as the decay constants, form factors, etc. [27; 28; 29; 31; 34; 35]. Our LFQM follows the Bakamjian-Thomas (BT) construction [37; 38], where the meson state is constructed by the noninteracting quark and antiquark representations while the interaction is added to the mass operator via \(M=M_{0}+V_{Q\bar{Q}}\) applying the variational principle with typical Gaussian radial wave function as the trial wave function for the variational calculation. Due to the absence of manifest covariance, however, it is challenging to identify the LF zero-mode contributions in the phenomenological LFQM by itself. In particular, as the number of DAs proliferates with the higher twist, the computation of the higher twist DAs involves not only the good component (e.g. \(J^{+}=J^{0}+J^{3}\)) of the current but also the bad component (e.g. \(J^{-}=J^{0}-J^{3}\)) of the current to identify the proliferated number of DAs with the more number of current components. However, employing the bad component in the computation is often quite challenging due to the involvement of the light-front (LF) zero modes and/or the instantaneous contributions to restore the Lorentz and gauge invariance [39; 40; 41; 42; 43; 44; 45]. Thus, it is important to conduct a rigorous study of the higher twist DAs and address the challenges involved in order to gain the better understanding of the hadron structures.
To pin down the treacherous points involving the LF zero modes and the instantaneous contributions, one may utilize the manifestly covariant Bethe-Salpter (BS) field-theoretic model. Using the LF projection of the manifestly covariant BS model, one can provide the corresponding LFQM with the multipole ansatz for the meson-quark vertex function [40; 44; 45]. This type of LFQM is useful for providing the theoretical guidance on how to analyze the LF zero modes although the obtained results are in general semi-realistic. To account for the Lorentz structure of a hadronic matrix element, the light-like four-vector \(\omega\) was introduced and the corresponding covariant approach was developed originally in Ref.[46]1. Subsequently, the authors of Ref. [47] developed a method for identifying and separating spurious contributions, enabling them to determine the physically meaningful contributions to the hadronic form factors and coupling constants that are independent of the choice of \(\omega\). By employing this covariant methodology as described in Refs. [46; 47], Jaus [43] employed a manifestly covariant BS model as a reference to devise a fundamentally distinct technique for addressing this issue. In this approach, Jaus developed a way of identifying the LF zero-mode contributions by removing spurious contributions proportional to the light-like vector \(\omega^{\mu}\) in the physical observables. As the \(\omega\)-dependent contributions violate the covariance, they may be eliminated by including the LF zero-mode contributions and at the same time restoring the covariance of the current matrix elements in the solvable BS model. Jaus identified the light-front zero-mode contribution corresponding it to the removal of the spurious \(\omega\)-dependence and then applied the LF zero-mode contributions identified in the BS model to the LFQM simply by replacing the multipole type vertex function in the BS model with the Gaussian radial wave function.
Footnote 1: In this formulation, the state vector is defined on a plane characterized by the invariant equation \(\omega\cdot x=0\), where \(\omega\) represents an arbitrary light-like four vector \(\omega=(\omega_{0},\vec{\omega})\) satisfying \(\omega^{2}=0\). The special choice \(\omega=(1,0,0,-1)\) corresponds to the LF or null plane \(\omega\cdot x=x^{+}=x^{0}+x^{3}=0\).
However, two of us [24] found that Jaus's prescription to identify the zero mode is only valid in the BS model with the multipole type vertex function but not in the LFQM with the Gaussian radial wave function as it causes a serious problem of impeding the self-consistency in the computation of physical observables, e.g. the decay constant of a \(\rho\) meson gives different results for different polarization (longitudinal and transverse) states in the LFQM. This finding has also been confirmed by others [48; 49], indicating that the LF zero-mode contributions do depend on the model wave functions. In Ref. [24], we then identified a specific matching condition for the first time between the manifestly covariant BS model and our LFQM, which we called the "Type II" link (see Eq. (49) in [24]). This unique matching condition ensures the self-consistency of the LFQM analysis. For example, it was demonstrated [24] by using the "Type II" link that the two \(\rho\) meson decay constants obtained from the longitudinal and transverse polarizations exhibit the equivalence in the LFQM numerical results. One of the key ingredients in the "Type II" link is the replacement of the physical meson mass \(M\) that appeared in the integrand for the matrix element calculation with the invariant mass \(M_{0}\), which is equivalent to imposing the on-mass shell condition of the constituent quark and antiquark in the LFQM. Enforcing the on-mass shell condition for the constituents is tantamount to ensuring four-momentum conservation \(P=p_{1}+p_{2}\) at the meson-quark vertex, where the meson and quark (antiquark) momenta are denoted as \(P\) and \(p_{1(2)}\), respectively.
Such a replacement (\(M\to M_{0}\)) is indeed consistent with the BT construction [37; 38], namely, the meson state is constructed by the noninteracting quark and antiquark representations while the interaction is added to the mass operator via \(M=M_{0}+V_{Q\bar{Q}}\). This replacement can also be viewed as an effective way to include the LF zero-mode contributions and restore the Lorentz symmetry of the model, in particular the rotational symmetry, when compared to the covariant BS model [24].
Subsequent works using the same "Type II" link [24] have been made for the analyses of twist-2 and twist-3 DAs of the light pseudoscalar \((\pi,K)\) mesons [25; 26] through the matrix elements \(\langle 0|\bar{q}(z)\Gamma q(-z)|P\rangle\) of the nonlocal operators \(\Gamma=(\gamma^{\mu}\gamma_{5},i\gamma_{5},\sigma^{\mu\nu}\gamma_{5})\), discussing the link between the chiral symmetry of QCD and the numerical results of the LFQM. In the very recent work [50], the decay constant for pseudoscalar meson with the axial-vector (\(\gamma^{\mu}\gamma_{5}\)) current in the equal quark and antiquark mass case was investigated in the LFQM and the self-consistent result independent of the current components was obtained. The decay constant of the vector meson was also investigated for both longitudinal and transverse polarizations, obtaining a self-consistent result independent of all possible combinations of the current components and the polarizations. In particular, in Ref [50], it was explicitly demonstrated that the decay constants obtained via the "Type II" link between the BS model and the LFQM are precisely equivalent to those obtained directly in the LFQM, where the on-mass shell condition of the constituents is enforced.
In this work, we extend our previous LFQM analyses [25; 26] for the decay constant and the DAs of the ground \(1S\) state light pseudoscalar mesons through the matrix elements \(\langle 0|\bar{q}(z)\Gamma q(-z)|P\rangle\) of the operators \(\Gamma=(\gamma^{\mu}\gamma_{5},i\gamma_{5},\sigma^{\mu\nu}\gamma_{5})\) to include both \((1S,2S)\) state heavy-light and heavy-heavy pseudoscalar mesons [50]. In particular, we shall explicitly show that the three pseudoscalar meson decay constants defined through the three different operators \(\Gamma\) are all identical numerically in our LFQM constrained by the on-mass condition of the constituents. Namely, we obtain the process-independent pseudoscalar meson decay constant regardless of the current operators \(\Gamma\) used at the level of one-body current matrix element computation, as the independence of the current operators \(\Gamma\) means the independence of the decay process. The new two-particle twist-4 DA is also obtained from the minus component of the axial vector current (\(\Gamma=\gamma^{\mu}\gamma_{5}\)). We also investigate the different helicity contributions to the decay constants defined through different operators \(\Gamma\) and perform a quantitative analysis of each helicity component for different heavy-light and heavy-heavy pseudoscalar meson systems. For the numerical calculations, we present the results both for the ground state (\(1S\)) and the radially excited state (\(2S\)) of heavy pseudoscalar mesons, which were discussed in our recent work [31]. We then scrutinize the shape of the leading- and higher-twist DAs and their \(\xi\)-moments, where \(\xi=2x-1\) with the LF longitudinal momentum fraction \(x\) of the constituent. The "Type II" link between the covariant BS model and the LFQM is further discussed for the deeper understanding of the underlying physics involved.
The paper is organized as follows: In Section II, we describe the LFQM and the light-front wave functions of \(1S\) and \(2S\) pseudoscalar heavy meson. In Section III, we examine the pseudoscalar decay constants derived from the three distinct current operators \(\Gamma=(\gamma^{\mu}\gamma_{5},i\gamma_{5},\sigma^{\mu\nu}\gamma_{5})\) and establish the process independence and rotational invariance of the decay constants. In Section IV, we discuss the DAs up to the twist-4 obtained from the three local and nonlocal current operators \(\Gamma\) and their \(\xi\)-moments. Finally, we summarize our findings in Section V. In the Appendix, the "Type II" link between the manifestly covariant BS model and the LFQM is demonstrated for the completeness.
## II Light-front quark model
When applied to meson states reflecting the feature of BT construction [37; 38], the LFQM employs a noninteracting \(q\bar{q}\) representation to describe the Fock state that is composed of the constituent quark (\(q\)) and antiquark (\(\bar{q}\)) while the interactions are incorporated into the mass operator \(M:=M_{0}+V_{q\bar{q}}\) to ensure compliance with the group structure satisfying the Poincare algebraic commutation relations. The interactions are then encoded in the light-front wave function (LFWF) \(\Psi_{q\bar{q}}\), which satisfies the eigenvalue equation \(H_{q\bar{q}}|\Psi_{q\bar{q}}\rangle=(M_{0}+V_{q\bar{q}})|\Psi_{q\bar{q}}\rangle= M_{q\bar{q}}|\Psi\rangle\) of our QCD-motivated effective Hamiltonian [27; 28; 29; 30; 31].
Our LFQM for the \(1S\) state [27; 28; 29; 30; 31] and \(2S\) state [31] pseudoscalar and vector mesons is based on the central concept of using the radial wave function as a variational trial function for the QCD-motivated effective Hamiltonian \(H_{q\bar{q}}\), which results in the determination of the mass eigenvalues \(M_{q\bar{q}}\). Once the values of the model parameters are determined by the variational analysis of the mass spectra, those determined model parameters are used to describe different observables including decay constants and electromagnetic and weak form factors etc. [23; 27; 28; 29; 30; 31]
For the self-consistent analysis of the decay constants and the higher twist DAs for the \((1S,2S)\) state heavy pseudoscalar mesons performed in this work, we provide a brief overview of the LFWFs for \(1S\) and \(2S\) state heavy pseudoscalar mesons presented in Ref. [31] focusing on the important aspects of LFWFs constrained by the on-mass shell condition of the constituents.
The four-momentum \(P\) of the meson in terms of the LF components is defined as \(P=(P^{+},P^{-},\mathbf{P}_{\perp})\), where \(P^{+}=P^{0}+P^{3}\) and \(P^{-}=P^{0}-P^{3}\) are the LF longitudinal momentum and the LF energy, respectively, and \(\mathbf{P}_{\perp}=(P^{1},P^{2})\) are the transverse momenta. Here, we take the metric convention as \(P^{2}=P^{+}P^{-}-\mathbf{P}_{\perp}^{2}\). The meson state \(|\mathrm{M}(P,J,J_{z})\rangle\) of momentum \(P\) and spin state \((J,J_{z})\) can
then be constructed as follows [52; 53; 34]
\[|\mathrm{M}\rangle = \int\left[\mathrm{d}^{3}\bar{p}_{1}\right]\left[\mathrm{d}^{3}\bar{ p}_{2}\right]2(2\pi)^{3}\delta^{3}\left(\bar{P}-\bar{p}_{1}-\bar{p}_{2}\right) \tag{1}\] \[\times\sum_{\lambda_{1},\lambda_{2}}\Psi^{JJ_{z}}_{\lambda_{1} \lambda_{2}}(x,\mathbf{k}_{\perp})\left|q_{\lambda_{1}}(p_{1})\bar{q}_{\lambda _{2}}(p_{2})\right\rangle,\]
where \(p_{i}^{\mu}\) and \(\lambda_{i}\) are the on-mass shell (\(p_{i}^{2}=m_{i}^{2}\)) momenta and the helicities of the constituent quark (\(i=1\)) and antiquark (\(i=2\)), respectively, with the LF three momentum defined by \(\bar{p}=(p^{+},\mathbf{p}_{\perp})\) and \(\left[\mathrm{d}^{3}\bar{p}\right]\equiv\mathrm{d}p^{+}\mathrm{d}^{2}\mathbf{ p}_{\perp}/(16\pi^{3})\). The LF internal relative variables \((x,\mathbf{k}_{\perp})\) are defined by \(x_{i}=p_{i}^{+}/P^{+}\) and \(\mathbf{k}_{i\perp}=\mathbf{p}_{i\perp}-x_{i}\mathbf{P}_{\perp}\), where \(\sum_{i}x_{i}=1\) and \(\sum_{i}\mathbf{k}_{i\perp}=0\) and we set \(x=x_{1}\) and \(\mathbf{k}_{\perp}=\mathbf{k}_{1\perp}\). This meson state satisfies the following normalization
\[\langle\mathrm{M}(P^{\prime},J^{\prime},J^{\prime}_{z})|\mathrm{M}(P,J,J_{z})\rangle=2(2\pi)^{3}P^{+}\delta^{3}(\bar{P}^{\prime}-\bar{P})\delta_{J^{\prime}J}\delta_{J^{\prime}_{z}J_{z}}. \tag{2}\]
In momentum space, the LFWF \(\Psi_{q\bar{q}}(x,\mathbf{k}_{\perp})\) of a meson can be decomposed as
\[\Psi^{JJ_{z}}_{\lambda_{1}\lambda_{2}}(x,\mathbf{k}_{\perp})=\Phi(x,\mathbf{k }_{\perp})\ \mathcal{R}^{JJ_{z}}_{\lambda_{1}\lambda_{2}}(x,\mathbf{k}_{\perp}), \tag{3}\]
where \(\Phi(x,\mathbf{k}_{\perp})\) is the radial wave function that was used as our trial function for the mass spectroscopic analysis [27; 28; 29; 30; 31] and \(\mathcal{R}^{JJ_{z}}_{\lambda_{1}\lambda_{2}}\) is the SO wave function obtained by the interaction-independent Melosh transformation [36] for the corresponding meson quantum number \(J^{PC}\).
We should note that one crucial aspect of the LF formulation for a bound state, as depicted in Eq. (1), is the frame-independence of the LFWF [4]. In other words, the hadron's internal variables \((x,\mathbf{k}_{\perp})\) of the wave function remain unaffected by boosts to any physical \((P^{+},\mathbf{P}_{\perp})\) frame, which is not the case in the instant formulation. We shall explicitly show this boost invariance of the decay constant computed in general \(\mathbf{P}_{\perp}\neq 0\) frame.
The SO wave function for a pseudoscalar meson obtained from the interaction-independent Melosh transformation can be written as the covariant form [32; 33] consistent with the BT construction, which is given by
\[\mathcal{R}^{00}_{\lambda_{1}\lambda_{2}} = \frac{\bar{u}_{\lambda_{1}}(p_{1})\gamma_{5}v_{\lambda_{2}}(p_{2 })}{\sqrt{2}\bar{M}_{0}}, \tag{4}\] \[= \frac{1}{\sqrt{2}\sqrt{\mathcal{A}^{2}+\mathbf{k}_{\perp}^{2}}} \begin{pmatrix}-k^{L}&\mathcal{A}\\ -\mathcal{A}&-k^{R}\end{pmatrix}, \tag{5}\]
where \(\bar{M}_{0}^{2}=M_{0}^{2}-(m_{1}-m_{2})^{2}=(\mathcal{A}^{2}+\mathbf{k}_{ \perp}^{2})/x(1-x)\), \(k^{R(L)}=k^{1}\pm ik^{2}\) and \(\mathcal{A}=(1-x)m_{1}+xm_{2}\). The boost-invariant meson mass squared is given by
\[M_{0}^{2}=\frac{\mathbf{k}_{\perp}^{2}+m_{1}^{2}}{x}+\frac{\mathbf{k}_{\perp}^ {2}+m_{2}^{2}}{1-x}. \tag{6}\]
Note that the SO wave function satisfies the unitary condition, \(\sum_{\lambda^{\prime}s}\mathcal{R}^{\dagger}\mathcal{R}=1\). It is worth noting that the LFWF \(\Psi\) depends on the interaction-independent invariant mass \(M_{0}\) that follows the BT construction as the meson is constructed in the noninteracting representation.
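As a small numerical aside (not part of the original text), Eq. (6) can be evaluated directly; the function below computes the invariant mass \(M_{0}\) for given internal variables. The constituent masses in the example are placeholder values, not the fitted model parameters of Ref. [31].

```python
import numpy as np

def invariant_mass(x, k_perp, m1, m2):
    """Boost-invariant qq-bar mass M0 from Eq. (6); k_perp is |k_perp| in GeV."""
    return np.sqrt((k_perp**2 + m1**2) / x + (k_perp**2 + m2**2) / (1.0 - x))

# A heavy-light configuration with illustrative masses (GeV).
print(invariant_mass(x=0.3, k_perp=0.4, m1=1.8, m2=0.25))   # ~3.41 GeV
```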
For the \(1S\) and \(2S\) state radial wave functions \(\Phi_{ns}\) of Eq. (3), we allow the mixing between the two lowest order harmonic oscillator (HO) wave functions (\(\phi_{1S},\phi_{2S}\)) by writing [31]
\[\begin{pmatrix}\Phi_{1S}\\ \Phi_{2S}\end{pmatrix}=\begin{pmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}\phi_{1S}\\ \phi_{2S}\end{pmatrix}, \tag{7}\]
where
\[\phi_{1S}(\vec{k}) = \frac{4\pi^{3/4}}{\beta^{3/2}}e^{-\vec{k}^{2}/2\beta^{2}},\] \[\phi_{2S}(\vec{k}) = \frac{4\pi^{3/4}}{\sqrt{6}\beta^{7/2}}\left(2\vec{k}^{2}-3\beta^{ 2}\right)e^{-\vec{k}^{2}/2\beta^{2}}. \tag{8}\]
Here, \(\vec{k}=(k_{z},\mathbf{k}_{\perp})\) is the three momentum and \(\beta\) represents a parameter that serves as the variational parameter in our mass spectroscopic analysis [31]. The rotationally invariant HO wave functions \(\phi_{nS}(\vec{k})\) in Eq. (8) satisfy
\[\int\ \frac{\mathrm{d}^{3}\vec{k}}{2(2\pi)^{3}}\left|\phi_{nS}(\vec{k})\right|^{ 2}=1. \tag{9}\]
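The normalization of Eq. (9) and the orthogonality of the two HO states can be verified numerically with a short script; the value of \(\beta\) below is purely illustrative and is not a variational parameter from the mass-spectrum fit.

```python
import numpy as np

def phi_1s(k, beta):
    return 4.0 * np.pi**0.75 / beta**1.5 * np.exp(-k**2 / (2.0 * beta**2))

def phi_2s(k, beta):
    return (4.0 * np.pi**0.75 / (np.sqrt(6.0) * beta**3.5)
            * (2.0 * k**2 - 3.0 * beta**2) * np.exp(-k**2 / (2.0 * beta**2)))

def integrate(f, kmax=6.0, n=60001):
    """Evaluate the d^3k integral of Eq. (9) for a spherically symmetric integrand."""
    k = np.linspace(0.0, kmax, n)
    dk = k[1] - k[0]
    return np.sum(4.0 * np.pi * k**2 * f(k)) * dk / (2.0 * (2.0 * np.pi) ** 3)

beta = 0.5   # GeV, illustrative value only
print(integrate(lambda k: phi_1s(k, beta) ** 2))               # -> 1
print(integrate(lambda k: phi_2s(k, beta) ** 2))               # -> 1
print(integrate(lambda k: phi_1s(k, beta) * phi_2s(k, beta)))  # -> 0 (orthogonality)
```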
Sustaining this rotationally invariant property of the wave function, one can transform the normalization of \(\phi_{nS}(\vec{k})\) to that of \(\phi_{nS}(x,\mathbf{k}_{\perp})\) via the variable transformation \((k_{z},\mathbf{k}_{\perp})\rightarrow(x,\mathbf{k}_{\perp})\) as follows
\[\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{2}\mathbf{k}_{\perp}}{2(2\pi)^{3} }|\phi_{nS}(x,\mathbf{k}_{\perp})|^{2}=1. \tag{10}\]
We note that the wave functions \(\phi_{nS}(x,\mathbf{k}_{\perp})\) include the Jacobian factor \(\partial k_{z}/\partial x\) as
\[\phi_{nS}(x,\mathbf{k}_{\perp})=\sqrt{\frac{\partial k_{z}}{\partial x}}\phi_{nS }(\vec{k}) \tag{11}\]
because of the variable transformation \((k_{z},\mathbf{k}_{\perp})\rightarrow(x,\mathbf{k}_{\perp})\) and \(k_{z}(=k^{3})\) and \(x\) are related by [34]
\[x=\frac{E_{1}-k_{z}}{E_{1}+E_{2}},\ 1-x=\frac{E_{2}+k_{z}}{E_{1}+E_{2}}, \tag{12}\]
where \(E_{i}=\sqrt{m_{i}^{2}+\vec{k}^{2}}\). We then have \(M_{0}=E_{1}+E_{2}\) and
\[k_{z}=\left(\frac{1}{2}-x\right)M_{0}+\frac{m_{1}^{2}-m_{2}^{2}}{2M_{0}}. \tag{13}\]
The Jacobian factor is then given by
\[\frac{\partial k_{z}}{\partial x}=\frac{E_{1}E_{2}}{x(1-x)M_{0}}, \tag{14}\]
or, in terms of \((x,\mathbf{k}_{\perp})\),
\[\frac{\partial k_{z}}{\partial x}=\frac{M_{0}}{4x(1-x)}\left[1-\frac{(m_{1}^{2}-m_ {2}^{2})^{2}}{M_{0}^{4}}\right]. \tag{15}\]
It should be noted that the total LFWF \(\Psi\) given by Eq. (3) meets the same normalization given by Eq. (10). This is due to the meson state \(|\mathrm{M}(P,J,J_{z})\rangle\) fulfilling the condition of Eq. (2), and the SO wave function adhering to the unitary condition. Especially, the inclusion of the Jacobian factor in defining \(\phi_{nS}(x,\mathbf{k}_{\perp})\) is the key aspect to retaining the rotational invariance of the model wave function and obtaining the self-consistent, i.e. current-component and boost invariant, results of physical observables. The quantitative effects of the Jacobian factor on the decay constants, twist-2 DAs, and electromagnetic form factors of mesons were also discussed in Ref. [54].
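To make the role of the Jacobian factor concrete, the Python sketch below performs the \((x,\mathbf{k}_{\perp})\) integral of Eq. (10) for \(\phi_{1S}\) including the factor of Eq. (11), using the \(D\)-meson parameters of Table 3 as an illustrative unequal-mass example; the grids are illustrative and the \(k_{z}(x,\mathbf{k}_{\perp})\) relation is obtained by inverting Eq. (12).

```python
import numpy as np

# Minimal sketch: check the (x, k_perp) normalization of Eq. (10) for
# phi_1S including the Jacobian factor of Eq. (11). Unequal-mass (D meson)
# parameters of Table 3 are used purely as an illustration.
m1, m2, beta = 0.22, 1.68, 0.424   # GeV

x = np.linspace(1e-4, 1.0 - 1e-4, 600)
kt = np.linspace(1e-4, 10.0, 600)
X, KT = np.meshgrid(x, kt, indexing="ij")
dx, dkt = x[1] - x[0], kt[1] - kt[0]

M0 = np.sqrt((KT**2 + m1**2) / X + (KT**2 + m2**2) / (1.0 - X))        # Eq. (6)
kz = (0.5 - X) * M0 + (m1**2 - m2**2) / (2.0 * M0)                     # inverse of Eq. (12)
jac = M0 / (4.0 * X * (1.0 - X)) * (1.0 - (m1**2 - m2**2)**2 / M0**4)  # Eq. (15)
k2 = kz**2 + KT**2

phi1S = 4.0 * np.pi**0.75 / beta**1.5 * np.exp(-k2 / (2.0 * beta**2))
phi1S_lf = np.sqrt(jac) * phi1S                                        # Eq. (11)

measure = 2.0 * np.pi * KT / (2.0 * (2.0 * np.pi)**3)                  # d^2k_perp/(2(2pi)^3)
print((measure * phi1S_lf**2).sum() * dx * dkt)   # -> 1 with the Jacobian
print((measure * phi1S**2).sum() * dx * dkt)      # without it the normalization is lost
```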
## III Decay constants
The decay constants and the leading- and higher-twist DAs of pseudoscalar mesons may be obtained from the matrix elements \(\langle 0|\bar{q}(z)\Gamma q(-z)|P\rangle\) of the following three possible nonlocal operators \(\Gamma=(\gamma^{\mu}\gamma_{5},i\gamma_{5},\sigma^{\mu\nu}\gamma_{5})\), where \(z^{\mu}\) is the light-like vector (\(z^{2}=0\)). While the decay constant \(f_{\mathrm{P}}\) can be defined through local operators with the axial-vector (\(\Gamma_{\mathrm{A}}=\gamma^{\mu}\gamma_{5}\)) and pseudoscalar (\(\Gamma_{\mathrm{P}}=i\gamma_{5}\)) currents as [5; 11]
\[\langle 0|\,\bar{q}(0)\gamma^{\mu}\gamma_{5}q(0)\,|\mathrm{P}(P)\rangle = if_{\mathrm{P}}P^{\mu}, \tag{16}\] \[\langle 0|\,\bar{q}(0)i\gamma_{5}q(0)\,|\mathrm{P}(P)\rangle = f_{\mathrm{P}}\mu_{M}, \tag{17}\]
it can also be computed by utilizing the nonlocal matrix element in the case of the pseudotensor current (\(\Gamma_{\mathrm{T}}=\sigma^{\mu\nu}\gamma_{5}\)) as defined by the subsequent equation [11]:
\[\langle 0|\,\bar{q}(z)\sigma^{\mu\nu}\gamma_{5}q(-z)\,|\mathrm{P}(P )\rangle=-\frac{i}{3}f_{\mathrm{P}}\left(1-\rho_{+}\right)\mu_{M}\] \[\times(P^{\mu}z^{\nu}-P^{\nu}z^{\mu})\int_{0}^{1}\mathrm{d}x\ \mathrm{e}^{i\zeta P \cdot z}\psi_{\mathrm{3,P}}(x), \tag{18}\]
where \(\sigma^{\mu\nu}=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]\), \(\mu_{M}=M^{2}/(m_{1}+m_{2})\), \(\rho_{+}=(m_{1}+m_{2})^{2}/M^{2}\), and \(\zeta=2x-1\). In this definition of Eq. (18), the two-particle twist-3 DA \(\psi_{\mathrm{3,P}}(x)\) is normalized to unity, \(\int_{0}^{1}\mathrm{d}x\,\psi_{\mathrm{3,P}}(x)=1\). As a reference, in Ref. [5], defining the matrix element \(\langle 0|\,\bar{q}(z)\sigma^{\mu\nu}\gamma_{5}q(-z)\,|\mathrm{P}(P)\rangle\), the authors removed the term \((1-\rho_{+})\) on the right-hand side (RHS) of Eq. (18) by normalizing \(\psi_{\mathrm{3,P}}(x)\) in such a way that \(\int_{0}^{1}\mathrm{d}x\ \psi_{\mathrm{3,P}}(x)=1-\rho_{+}\). In the previous LFQM analysis [26] for \(\psi_{\mathrm{3,P}}(x)\), two of us used the definition of Ref. [5] rather than Eq. (18). However, in this study, we opt to use Eq. (18), as we observe that this definition yields the same decay constant as those obtained from Eqs. (16) and (17), provided the leading- and higher-twist DAs are normalized in the same way as \(\psi_{\mathrm{3,P}}(x)\) defined in Eq. (18).
### Process-Independence
In this subsection, we shall first compute the decay constants defined by the three different operators \(\Gamma=(\Gamma_{\mathrm{A}},\Gamma_{\mathrm{P}},\Gamma_{\mathrm{T}})\) and show their equivalence, i.e. process-independent decay constant in our LFQM. As the different decay operators are used for the different decay processes, the decay constant's independence of the current operators \(\Gamma\) means the independence of the decay process for the decay constant as a physical observable. The leading-and higher-twist DAs obtained from the matrix elements \(\langle 0|\bar{q}(z)\Gamma q(-z)|P\rangle\) will be analyzed separately in the next section.
In the LFQM, the decay amplitudes for the operators \(\Gamma=(\Gamma_{A},\Gamma_{\mathrm{P}})\) given by Eqs. (16) and (17) can be defined at the level of one-body local current matrix element as
\[\langle 0|\,\bar{q}\Gamma q\,|P\rangle = \sqrt{N_{c}}\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{2} \mathbf{k}_{\perp}}{16\pi^{3}}\ \Phi(x,\mathbf{k}_{\perp}) \tag{19}\] \[\times\sum_{\lambda_{1},\lambda_{2}}\mathcal{R}^{00}_{\lambda_{1} \lambda_{2}}\left[\frac{\bar{v}_{\lambda_{2}}(p_{2})}{\sqrt{x_{2}}}\Gamma \frac{u_{\lambda_{1}}(p_{1})}{\sqrt{x_{1}}}\right],\]
where \(N_{c}=3\) arises from the color factor implicit in the wave function [33; 34]. Denoting the decay constants \(f_{\mathrm{P}}\) corresponding to the current operators \((\Gamma_{\mathrm{A}},\Gamma_{\mathrm{P}},\Gamma_{\mathrm{T}})\) as \((f_{\mathrm{A}},f_{\mathrm{P}},f_{\mathrm{T}})\), we may provide the generic form for the decay constants \(f_{\mathrm{A(P)}}\) obtained from the two local operators \(\Gamma_{\mathrm{A(P)}}\) as [50]
\[f_{\mathrm{A(P)}} = \sqrt{N_{c}}\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{2}\mathbf{k}_{\perp}}{16\pi^{3}}\ \Phi(x,\mathbf{k}_{\perp})\] \[\times\frac{1}{\mathcal{P}_{\mathrm{A(P)}}}\sum_{\lambda_{1},\lambda_{2}}\mathcal{R}^{00}_{\lambda_{1}\lambda_{2}}\left[\frac{\bar{v}_{\lambda_{2}}(p_{2})}{\sqrt{x_{2}}}\Gamma_{\mathrm{A(P)}}\frac{u_{\lambda_{1}}(p_{1})}{\sqrt{x_{1}}}\right], \tag{20}\]
where we incorporate the Lorentz structures \(\mathcal{P}_{\mathrm{A(P)}}=iP^{\mu}(\mu_{M})\) on the RHS of Eqs. (16) and (17) into the integral. Incorporating the Lorentz structures into the integral ensures a consistent one-body current level of approximation in the computation of the decay constant. This is the crucial aspect of our recently developed LFQM analysis [24; 25; 26; 50; 51], which yields self-consistent, i.e. current-component and boost invariant as well as process (e.g. \(\Gamma_{\mathrm{A(P)}}\)) independent, physical observables by replacing the physical mass \(M\) appearing in Eq. (20) with the invariant mass \(M_{0}\). So far, most LF calculations of the decay constant, e.g. \(f_{\mathrm{A}}\) from \(\Gamma_{\mathrm{A}}\), used \(\mu=+\) or \(\perp\) since \((P^{+},\mathbf{P}_{\perp})\) do not involve the physical mass \(M\). On the other hand, the minus component of the axial-vector current involves \(P^{-}=(M^{2}+\mathbf{P}_{\perp}^{2})/P^{+}\), and one fails to reproduce the result obtained from the currents with \(\mu=(+,\perp)\) if the physical mass is used in the calculation. The difference between \(\mu=-\) and \(\mu=(+,\perp)\) was identified with the LF treacherous points, such as the instantaneous and zero-mode contributions to the minus current, in the solvable covariant model [24]. However, in our LFQM consistent with the BT construction, we showed [50; 51] that the minus component of the axial-vector current gives the same result as the one obtained from the currents with \(\mu=(+,\perp)\) if we replace \(M\) with \(M_{0}\). In this work, we
shall show that the result obtained from \(\Gamma_{\rm P}\) also coincides with the one obtained from \(\Gamma_{\rm A}\) as long as we use the \(M\to M_{0}\) prescription. This may be regarded as an effective LF zero-mode inclusion at the level of the one-body matrix element calculation in the LFQM consistent with the BT construction.
For the pseudotensor current (\(\Gamma_{\rm T}^{\mu\nu}=\sigma^{\mu\nu}\gamma_{5}\)), since the decay constant can be computed only in the nonlocal limit (i.e. \(z^{\mu}\neq 0\)), the calculation incorporating \(\psi_{3;{\rm P}}(x)\) is inevitably required in the process of deriving the decay constant from the pseudotensor current. From the light-like vector \(z^{\mu}\) with \(z^{+}={\bf z}_{\perp}=0\), there are two possible ways to compute the nonlocal matrix element by choosing \(\mu\nu=+-\) or \(\perp-\).
As an example, let us choose \(\mu\nu=+-\). We first integrate both sides of Eq. (18), using the dummy variable \(x^{\prime}\) (and \(\zeta^{\prime}=2x^{\prime}-1\)), with respect to \(z^{-}\) as
\[\int_{-\infty}^{\infty}\frac{{\rm d}z^{-}}{2\pi}{\rm e}^{-i\zeta ^{\prime}P\cdot z}\left\langle 0\right|\bar{q}(z)\Gamma_{\rm T}^{+-}q(-z) \left|P\right\rangle\] \[=CP^{+}\int_{0}^{1}{\rm d}x\,\int_{-\infty}^{\infty}\frac{{\rm d }z^{-}}{2\pi}z^{-}{\rm e}^{-i(x^{\prime}-x)P^{+}z^{-}}\psi_{3;{\rm P}}(x), \tag{21}\]
where \(C=-\frac{i}{3}f_{\rm T}\left(1-\rho_{+}\right)\mu_{M}\). Then, we obtain the RHS of Eq. (21) via
\[\int_{-\infty}^{\infty}\frac{{\rm d}z^{-}}{2\pi}z^{-}{\rm e}^{-i( x^{\prime}-x)P^{+}z^{-}}\psi_{3;{\rm P}}(x)\] \[=\frac{i}{P^{+}}\frac{\partial}{\partial x^{\prime}}\int_{-\infty }^{\infty}\frac{{\rm d}z^{-}}{2\pi}{\rm e}^{-i(x^{\prime}-x)P^{+}z^{-}}\psi_{3; {\rm P}}(x)\] \[=\frac{i}{P^{+}}\frac{\partial}{\partial x^{\prime}}\left[\delta( (x^{\prime}-x)P^{+})\psi_{3;{\rm P}}(x)\right] \tag{22}\]
as follows
\[{\rm RHS\ of\ Eq.\ (\ref{eq:21})}=\frac{1}{3P^{+}}f_{\rm T}\left(1-\rho_{+} \right)\mu_{M}\frac{\partial}{\partial x^{\prime}}\psi_{3;{\rm P}}(x^{\prime}). \tag{23}\]
On the other hand, the left-hand side (LHS) of Eq. (21) can be rewritten as
LHS of Eq. (21) \[=\sqrt{N_{c}}\int_{0}^{1}{\rm d}x\int\frac{{\rm d}^{2}{\bf k}_{ \perp}}{16\pi^{3}}\int_{-\infty}^{\infty}\frac{{\rm d}z^{-}}{2\pi}{\rm e}^{-i \zeta^{\prime}P\cdot z}{\rm e}^{-i(p_{2}-p_{1})\cdot z}\] \[\times\sum_{\lambda_{1},\lambda_{2}}\Psi_{\lambda_{1}\lambda_{2}} ^{00}(x,{\bf k}_{\perp})\left[\frac{\bar{v}_{\lambda_{2}}(p_{2})}{\sqrt{x_{2} }}\Gamma_{\rm T}^{+-}\frac{u_{\lambda_{1}}(p_{1})}{\sqrt{x_{1}}}\right],\] (24)
where \({\rm e}^{-i\zeta^{\prime}P\cdot z}{\rm e}^{-i(p_{2}-p_{1})\cdot z}={\rm e}^{-i(x^{\prime}-x)P^{+}z^{-}}\), and the \(z^{-}\) integration gives \(\delta[(x^{\prime}-x)P^{+}]\), which is then trivially integrated over \({\rm d}x\). Integrating Eqs. (23) and (24) over \(x^{\prime}\), we obtain \(\psi_{3;P}(x)\) as
\[\psi_{3;P}(x) = \frac{3\sqrt{N_{c}}}{f_{\rm T}}\int_{0}^{x}{\rm d}x^{\prime}\int\frac{{\rm d}^{2}{\bf k}_{\perp}}{16\pi^{3}}\Phi(x^{\prime},{\bf k}_{\perp})\] \[\times\frac{1}{{\cal P}_{\rm T}}\sum_{\lambda_{1},\lambda_{2}}{\cal R^{\prime}}^{00}_{\lambda_{1}\lambda_{2}}\left[\frac{\bar{v}_{\lambda_{2}}(p_{2}^{\prime})}{\sqrt{x_{2}^{\prime}}}\Gamma_{\rm T}^{+-}\frac{u_{\lambda_{1}}(p_{1}^{\prime})}{\sqrt{x_{1}^{\prime}}}\right], \tag{25}\]
where \({\cal P}_{\rm T}=(1-\rho_{+})\mu_{M}\) and the prime(\(\prime\)) in \(({\cal R},p_{i})\) implies that they are functions of \(x^{\prime}\). By integrating both sides with respect to \({\rm d}x\) and using the normalization of the DA, \(\int_{0}^{1}{\rm d}x\)\(\psi_{3;P}(x)=1\), we obtain the decay constant \(f_{\rm T}\) from the pseudotensor channel as
\[f_{\rm T} = 3\sqrt{N_{c}}\int_{0}^{1}{\rm d}x\int_{0}^{x}{\rm d}x^{\prime} \int\frac{d^{2}{\bf k}_{\perp}}{16\pi^{3}}\Phi(x^{\prime},{\bf k}_{\perp}) \tag{26}\] \[\times\frac{1}{{\cal P}_{\rm T}}\sum_{\lambda_{1},\lambda_{2}}{ \cal R^{\prime}}^{00}_{\lambda_{1}\lambda_{2}}\left[\frac{\bar{v}_{\lambda_{2} }(p_{2}^{\prime})}{\sqrt{x_{2}^{\prime}}}\Gamma_{\rm T}^{+-}\frac{u_{\lambda_ {1}}(p_{1}^{\prime})}{\sqrt{x_{1}^{\prime}}}\right].\]
We should note that the term \({\cal P}_{\rm T}\) including the physical mass \(M\) is also incorporated into the integral so that \(M\) is replaced with \(M_{0}\). The same results for \(\psi_{3;P}(x)\) in Eq. (25) and \(f_{\rm T}\) in Eq. (26) can be obtained with \(\Gamma_{\rm T}^{\perp-}\). We also note that the main update on the calculation of pseudotensor current compared to Ref. [26] is the inclusion of the term \((1-\rho_{+})\), which leads to the process independence of the decay constant, i.e. \(f_{\rm A}=f_{\rm P}=f_{\rm T}\), as we discuss below.
Here, we explicitly demonstrate that all three decay constants, \((f_{\rm A},f_{\rm P},f_{\rm T})\) as defined by Eqs. (20) and (26), yield identical numerical results. Using the Dirac helicity spinors [1; 32] and the SO wave function defined in Eq. (4), it is straightforward to compute \((f_{\rm A},f_{\rm P},f_{\rm T})\), especially, in terms of different helicity contributions for different usage of current operators. The final results of \((f_{\rm A},f_{\rm P},f_{\rm T})\) in the most general \({\bf P}_{\perp}\neq 0\) frame are summarized as follows
\[f_{\rm A(P)}=\sqrt{6}\int_{0}^{1}{\rm d}x\int\frac{{\rm d}^{2}{\bf k}_{\perp}}{16 \pi^{3}}\,\,\frac{\Phi(x,{\bf k}_{\perp})}{\sqrt{{\cal A}^{2}+{\bf k}_{\perp}^{2}}} \,\,{\cal O}_{\rm A(P)}(x,{\bf k}_{\perp}), \tag{27}\]
and
\[f_{\rm T}=\sqrt{6}\int_{0}^{1}{\rm d}x\int_{0}^{x}{\rm d}x^{\prime}\int\frac{{\rm d}^{2}{\bf k}_{\perp}}{16\pi^{3}}\frac{\Phi(x^{\prime},{\bf k}_{\perp})}{\sqrt{{\cal A}^{\prime 2}+{\bf k}_{\perp}^{2}}}\,{\cal O}_{\rm T}(x^{\prime},{\bf k}_{\perp}), \tag{28}\]
where \({\cal A}^{\prime}={\cal A}(x\to x^{\prime})\). The operators \({\cal O}\) given by Eqs. (27) and (28) are obtained from the sum of each helicity contribution \(H_{\lambda_{1}\lambda_{2}}\), i.e.,
\[{\cal O}=\sum_{\lambda_{1},\lambda_{2}}H_{\lambda_{1}\lambda_{2}}. \tag{29}\]
The results of each helicity contribution \(H_{\lambda_{1}\lambda_{2}}\) and their sum \({\cal O}\) defined by Eq. (29) for different current operators \(\Gamma=(\Gamma_{\rm A},\Gamma_{\rm P},\Gamma_{\rm T})\), together with different components of the currents for \(\Gamma=(\Gamma_{\rm A},\Gamma_{\rm T})\), are summarized in Table 1. We should note that all the physical masses \(M\) are replaced with the invariant mass \(M_{0}\) in the final results presented in Table 1. We confirmed that the three decay constants given by Eqs. (27) and (28) are the same as each other, i.e. the pseudoscalar meson decay constant in our LFQM can be obtained in a process-independent manner (i.e. \(f_{\rm A}=f_{\rm P}=f_{\rm T}\)).
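The process independence can also be illustrated numerically. The Python sketch below evaluates Eq. (27) for a \(D(1S)\)-like state with the two operators written out explicitly in the text, \(\mathcal{O}_{\rm A}^{(+,\perp)}=2\mathcal{A}\) and \(\mathcal{O}_{\rm P}=\tilde{M}_{0}^{2}/\mu_{M}^{0}\), the latter read here as \([M_{0}^{2}-(m_{1}-m_{2})^{2}](m_{1}+m_{2})/M_{0}^{2}\) (an interpretation of the notation, with \(\mu_{M}\to\mu_{M_{0}}\)). The grids, the pure quadrature, and the printed MeV values are illustrative only; the two numbers should agree up to quadrature error and should lie in the neighborhood of the corresponding Table 4 entry.

```python
import numpy as np

# Minimal sketch of Eq. (27) for D(1S): the same integral evaluated with the
# axial-vector operator O_A = 2*A and with the pseudoscalar operator
# O_P = (M0^2 - (m1-m2)^2)*(m1+m2)/M0^2, using Table 3 parameters and the
# 1S-2S mixing angle theta = 12 deg. Illustrative only, not the production run.
m1, m2, beta, theta = 0.22, 1.68, 0.424, np.radians(12.0)

x = np.linspace(1e-4, 1.0 - 1e-4, 800)
kt = np.linspace(1e-4, 10.0, 800)
X, KT = np.meshgrid(x, kt, indexing="ij")
dx, dkt = x[1] - x[0], kt[1] - kt[0]

M0 = np.sqrt((KT**2 + m1**2) / X + (KT**2 + m2**2) / (1.0 - X))
kz = (0.5 - X) * M0 + (m1**2 - m2**2) / (2.0 * M0)     # inverse of Eq. (12)
jac = M0 / (4.0 * X * (1.0 - X)) * (1.0 - (m1**2 - m2**2)**2 / M0**4)
k2 = kz**2 + KT**2

phi1S = 4.0 * np.pi**0.75 / beta**1.5 * np.exp(-k2 / (2.0 * beta**2))
phi2S = (4.0 * np.pi**0.75 / (np.sqrt(6.0) * beta**3.5)
         * (2.0 * k2 - 3.0 * beta**2) * np.exp(-k2 / (2.0 * beta**2)))
Phi = np.sqrt(jac) * (np.cos(theta) * phi1S + np.sin(theta) * phi2S)   # Eq. (7)

A = (1.0 - X) * m1 + X * m2
O_A = 2.0 * A
O_P = (M0**2 - (m1 - m2)**2) * (m1 + m2) / M0**2

common = np.sqrt(6.0) * 2.0 * np.pi * KT / (16.0 * np.pi**3) * Phi / np.sqrt(A**2 + KT**2)
f_A = (common * O_A).sum() * dx * dkt
f_P = (common * O_P).sum() * dx * dkt
print(f"f_A = {1e3*f_A:.1f} MeV,  f_P = {1e3*f_P:.1f} MeV")   # should coincide
```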
In the Appendix, we also discuss the "Type II" link [24] between the covariant BS model and our LFQM, which is the alternative method to obtain the self-consistent LFQM results for the decay constants given by Eqs. (27) and (28).
### Lorentz and Rotation Invariance
In this work, we also compute the decay constant in the nonvanishing \(\mathbf{P}_{\perp}\) frame. As one can see from Table 1, the operators \(\mathcal{O}_{\mathrm{P}}\), \(\mathcal{O}_{\mathrm{T}}^{\mu\nu}\), and \(\mathcal{O}_{\mathrm{A}}^{+,\perp}\) are completely independent of \(\mathbf{P}_{\perp}\). Although the operator \(\mathcal{O}_{\mathrm{A}}\) obtained from the minus component of the current \(\Gamma_{\mathrm{A}}^{-}\) depends on \(\mathbf{P}_{\perp}\), which originates from \(P^{-}\) associated with the Lorentz factor \(P^{\mu}\) on the RHS of Eq. (16), we confirm that the decay constant itself is \(\mathbf{P}_{\perp}\)-independent as long as the replacement \(M\to M_{0}\) is made in \(P^{-}\).
In this subsection, we shall explicitly prove not only the \(\mathbf{P}_{\perp}\)-independence but also the rotational invariance of the decay constant \(f_{\mathrm{A(P)}}\) given by Eq. (27). This can be shown explicitly by converting Eq. (27) into an integral over the ordinary three-vector \(\vec{k}=(k_{z},\mathbf{k}_{\perp})\) using Eqs. (9) and (10) together with the Jacobian factor given by Eq. (14), which results in
\[f_{\mathrm{A(P)}}=\sqrt{6}\int\frac{\mathrm{d}^{3}\vec{k}}{16\pi^{3}}\;\sqrt{ \frac{M_{0}}{E_{1}E_{2}}}\frac{\Phi(\vec{k})}{M_{0}}\;\mathcal{O}_{\mathrm{A( P)}}(\vec{k}), \tag{30}\]
where \(\Phi(\vec{k})\) now becomes the wave function mixed with \(\phi_{1S}(\vec{k})\) and \(\phi_{2S}(\vec{k})\) given by Eq. (8). For the pseudoscalar current case, the rotational invariance of the operator \(\mathcal{O}_{\mathrm{P}}=\tilde{M}_{0}^{2}/\mu_{M}^{0}\) is evident. For the axial-vector current case, the operators \(\mathcal{O}_{\mathrm{A}}^{(+,\perp)}=2\mathcal{A}\) can be converted into
\[\mathcal{O}_{\mathrm{A}}^{(+,\perp)}(\vec{k})=\frac{2}{M_{0}}\left[m_{1}E_{2}+ m_{2}E_{1}+(m_{1}-m_{2})k_{z}\right], \tag{31}\]
where the last \(k_{z}\) term vanishes upon the \(k_{z}\) integration in Eq. (30) and the remaining terms are rotationally invariant. Finally, the operator \(\mathcal{O}_{\mathrm{A}}^{-}\) satisfies
\[\mathcal{O}_{\mathrm{A}}^{+}(\vec{k})-\mathcal{O}_{\mathrm{A}}^{-}(\vec{k})= \frac{4(m_{2}-m_{1})M_{0}}{(\mathbf{P}_{\perp}^{2}+M_{0}^{2})}k_{z}, \tag{32}\]
which is an odd function of \(k_{z}\). Equation (32) indicates that the decay constant obtained from the minus current is not only independent of \(\mathbf{P}_{\perp}\) but also completely equivalent to the one obtained from the plus (and perpendicular) component of the current. It is worth noting that the utilization of a factorized form of the LFWFs, such as \(\Psi(x)=\Psi_{1}(x)\Psi_{2}(\mathbf{k}_{\perp})\)[55], would result in breaking this rotational invariance.
In the numerical section, we will conduct a quantitative analysis to examine the \(\mathbf{P}_{\perp}\)-independence of \(\mathcal{O}_{\mathrm{A}}^{-}\) through the \(\mathbf{P}_{\perp}\)-dependence of the helicity contributions.
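A simple numerical way to see the variable independence in practice is to evaluate \(f_{\rm A}\) once in the \((x,\mathbf{k}_{\perp})\) form of Eq. (27) and once in the three-vector form of Eq. (30). The sketch below does this for the equal-mass \(\eta_{c}(1S)\)-like case (chosen for simplicity), with a pure \(1S\) wave function and the Table 3 parameters; the agreement of the two outputs, up to quadrature error, is the point of the exercise, and all grid choices are illustrative.

```python
import numpy as np

# Minimal sketch: f_A for an equal-mass eta_c(1S)-like state computed two ways,
# (i) the (x, k_perp) form of Eq. (27) and (ii) the rotationally invariant
# three-vector form of Eq. (30). Pure 1S wave function, Table 3 parameters.
m, beta = 1.68, 0.592   # GeV

phi = lambda k2: 4.0 * np.pi**0.75 / beta**1.5 * np.exp(-k2 / (2.0 * beta**2))

# (i) Eq. (27) in (x, k_perp)
x = np.linspace(1e-4, 1.0 - 1e-4, 600)
kt = np.linspace(1e-4, 10.0, 600)
X, KT = np.meshgrid(x, kt, indexing="ij")
M0 = np.sqrt((KT**2 + m**2) / (X * (1.0 - X)))
kz = (0.5 - X) * M0
jac = M0 / (4.0 * X * (1.0 - X))
f_27 = (np.sqrt(6.0) * 2.0 * np.pi * KT / (16.0 * np.pi**3)
        * np.sqrt(jac) * phi(kz**2 + KT**2) / np.sqrt(m**2 + KT**2)
        * 2.0 * m).sum() * (x[1] - x[0]) * (kt[1] - kt[0])

# (ii) Eq. (30) in k-vector variables (cylindrical coordinates)
kz3 = np.linspace(-10.0, 10.0, 800)
kt3 = np.linspace(1e-4, 10.0, 400)
KZ, KP = np.meshgrid(kz3, kt3, indexing="ij")
E = np.sqrt(m**2 + KZ**2 + KP**2)        # E1 = E2 = E, M0 = 2E
f_30 = (np.sqrt(6.0) * 2.0 * np.pi * KP / (16.0 * np.pi**3)
        * np.sqrt(2.0 / E) * phi(KZ**2 + KP**2) / (2.0 * E)
        * 2.0 * m).sum() * (kz3[1] - kz3[0]) * (kt3[1] - kt3[0])

print(f_27, f_30)   # the two values should agree
```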
## IV Distribution amplitudes
In this section, we discuss the two-particle DAs up to twist 4 obtained from the three different pseudoscalar meson decay modes. We summarize in Table 2 the twist classification based on the choice of the currents \((\gamma^{\mu}\gamma_{5},i\gamma_{5},\sigma^{\mu\nu}\gamma_{5})\) and all possible components of the currents.
The DAs up to twist-4 accuracy for the pseudoscalar meson with axial-vector current \(\Gamma_{\mathrm{A}}\) are defined in terms of the following matrix element of gauge invariant non-local operators as [5]
\[A_{\mathrm{A}}^{\mu} = \left\langle 0\right|\bar{q}(z)\gamma^{\mu}\gamma_{5}q(-z)\left|\mathrm{P}(P)\right\rangle, \tag{33}\] \[= if_{\mathrm{A}}\int_{0}^{1}\mathrm{d}x\;\mathrm{e}^{i\zeta P\cdot z}\Bigg{[}P^{\mu}\left(\phi_{2;\mathrm{P}}(x)+z^{2}(\cdots)\right)\] \[\quad+\frac{M^{2}}{2}\frac{z^{\mu}}{P\cdot z}\Bigg{(}\phi_{4;\mathrm{P}}(x)-\phi_{2;\mathrm{P}}(x)\Bigg{)}\Bigg{]}.\]
In order to make a connection between the DAs and the LFWFs of the meson, we utilize the equal LF time condition on the light-like vector \(z^{\mu}\) (i.e. \(z^{2}=z^{-}z^{+}-{\bf z}_{\perp}^{2}=0\)) with \(z^{+}={\bf z}_{\perp}=0\). We then obtain
\[A_{\rm A}^{\mu}\Big{|}_{z^{+}={\bf z}_{\perp}=0} = if_{\rm A}\int_{0}^{1}{\rm d}x\ e^{i\zeta P\cdot z}\bigg{[}P^{\mu }\phi_{2;{\rm P}}(x) \tag{34}\] \[+\frac{M^{2}z^{\mu}}{P^{+}z^{-}}\bigg{(}\phi_{4;{\rm P}}(x)-\phi_ {2;{\rm P}}(x)\bigg{)}\bigg{]}.\]
To isolate the twist-2 DA, \(\phi_{2;{\rm P}}(x)\), one may take either the plus or transverse component of the current and obtain
\[A_{\rm A}^{(+,\perp)}=if_{\rm A}P^{(+,\perp)}\int_{0}^{1}{\rm d}x\ e^{i\zeta P \cdot z}\phi_{2;{\rm P}}(x). \tag{35}\]
This explains why the two decay constants, \(f_{\rm A}^{(+)}\) and \(f_{\rm A}^{(\perp)}\), have the same operator, \({\cal O}_{\rm A}^{(+)}={\cal O}_{\rm A}^{(\perp)}\). On the other hand, the twist-4 DA \(\phi_{4;{\rm P}}(x)\) can be obtained from the minus component of the current in the \({\bf P}_{\perp}=0\) frame as
\[A_{\rm A}^{-}=if_{\rm A}P^{-}\int_{0}^{1}{\rm d}x\ e^{i\zeta P\cdot z}\phi_{4 ;{\rm P}}(x). \tag{36}\]
This shows that the higher twist DAs are associated with the bad current, while the leading twist DAs correspond to the good current.
For the twist-3 case, there are two different DAs that are related to pseudoscalar (\(\Gamma_{\rm P}\)) and pseudotensor (\(\Gamma_{\rm T}\)) currents. For pseudoscalar current, the twist-3 DA \(\phi_{3;{\rm P}}(x)\) is uniquely determined by [5]
\[A_{\rm P}\bigg{|}_{z^{+}={\bf z}_{\perp}=0} = \langle 0|\,\bar{q}(z)i\gamma_{5}q(-z)\,|{\rm P}(P)\rangle\,, \tag{37}\] \[= f_{\rm P}\mu_{M}\int_{0}^{1}{\rm d}x\ e^{i\zeta P\cdot z}\phi_{3 ;{\rm P}}(x),\]
without choosing a particular component of current. For pseudotensor current, the DA is computed as [5]
\[A_{\rm T}^{\mu\nu}\bigg{|}_{z^{+}={\bf z}_{\perp}=0} = \langle 0|\,\bar{q}(z)\sigma^{\mu\nu}\gamma_{5}q(-z)\,|{\rm P}(P) \rangle\,, \tag{38}\] \[= -\frac{i}{3}f_{\rm T}\left(1-\rho_{+}\right)\mu_{M}(P^{\mu}z^{ \nu}-P^{\nu}z^{\mu})\] \[\times\int_{0}^{1}{\rm d}x\ e^{i\zeta P\cdot z}\psi_{3;{\rm P}}(x).\]
In this case, the nonvanishing components are \(\mu\nu=+-\) and \(\perp\,-\) and they give the same \(\psi_{3;{\rm P}}(x)\) as we have shown for the computation of the decay constant \(f_{\rm T}\).
In our notation, all DAs \(\phi_{n;{\rm P}}(x)(n=2,3,4)\) and \(\psi_{3;{\rm P}}(x)\) are normalized to unity as
\[\int_{0}^{1}{\rm d}x\ \{\phi_{n;{\rm P}}(x),\psi_{3;{\rm P}}(x)\}=1. \tag{39}\]
From Eqs. (27) and (28) together with Eq. (39), we obtain \(\phi_{n;{\rm P}}(x)(n=2,3,4)\) from the axial-vector (\(n=2,4\)) and pseudoscalar (\(n=3\)) channels as
\[\phi_{n;{\rm P}}(x)=\frac{\sqrt{6}}{f_{\rm P}}\int\frac{{\rm d}^{2}{\bf k}_{ \perp}}{16\pi^{3}}\ \frac{\Phi(x,{\bf k}_{\perp})}{\sqrt{{\cal A}^{2}+{\bf k}_{\perp}^{2}}}\ {\cal O}_{\rm A(P)}. \tag{40}\]
Here, we have \({\cal O}_{\rm A}^{+}={\cal O}_{\rm A}^{\perp}\) corresponding to \(n=2\), \({\cal O}_{\rm A}^{-}\) corresponding to \(n=4\), and \({\cal O}_{\rm P}\) corresponding to \(n=3\). For \(\psi_{3;{\rm P}}(x)\) from the pseudotensor channel, we obtain
\[\psi_{3;{\rm P}}(x)=\frac{\sqrt{6}}{f_{\rm P}}\int_{0}^{x}{\rm d}x^{\prime}\int\frac{{\rm d}^{2}{\bf k}_{\perp}}{16\pi^{3}}\ \frac{\Phi(x^{\prime},{\bf k}_{\perp})}{\sqrt{{\cal A}^{\prime 2}+{\bf k}_{\perp}^{2}}}\ {\cal O}_{\rm T}, \tag{41}\]
where \({\cal O}_{\rm T}={\cal O}_{\rm T}^{+-}={\cal O}_{\rm T}^{\perp-}\).
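As an illustration of Eq. (40), the sketch below tabulates the twist-2 and twist-3 DAs of a \(D(1S)\)-like state by integrating the LF wave function over \(\mathbf{k}_{\perp}\) and enforcing the normalization of Eq. (39). Only the operators written out explicitly in the text are used, \(\mathcal{O}_{\rm A}^{+}=2\mathcal{A}\) and \(\mathcal{O}_{\rm P}\) read as \([M_{0}^{2}-(m_{1}-m_{2})^{2}](m_{1}+m_{2})/M_{0}^{2}\) (an interpretation of the notation); a pure \(1S\) wave function and the Table 3 parameters are taken for brevity, so the output is only indicative of the low-\(x\) concentration seen in Fig. 5, not a reproduction of it.

```python
import numpy as np

# Minimal sketch of Eq. (40): twist-2 and twist-3 DAs of a D(1S)-like state,
# normalized as in Eq. (39). Pure 1S wave function, Table 3 parameters.
m1, m2, beta = 0.22, 1.68, 0.424

x = np.linspace(1e-4, 1.0 - 1e-4, 1000)
kt = np.linspace(1e-4, 10.0, 800)
X, KT = np.meshgrid(x, kt, indexing="ij")
dx, dkt = x[1] - x[0], kt[1] - kt[0]

M0 = np.sqrt((KT**2 + m1**2) / X + (KT**2 + m2**2) / (1.0 - X))
kz = (0.5 - X) * M0 + (m1**2 - m2**2) / (2.0 * M0)
jac = M0 / (4.0 * X * (1.0 - X)) * (1.0 - (m1**2 - m2**2)**2 / M0**4)
Phi = np.sqrt(jac) * 4.0 * np.pi**0.75 / beta**1.5 * np.exp(-(kz**2 + KT**2) / (2.0 * beta**2))

A = (1.0 - X) * m1 + X * m2
O = {"twist-2": 2.0 * A,
     "twist-3": (M0**2 - (m1 - m2)**2) * (m1 + m2) / M0**2}

xi = 2.0 * x - 1.0
for name, op in O.items():
    raw = (2.0 * np.pi * KT / (16.0 * np.pi**3) * Phi / np.sqrt(A**2 + KT**2) * op).sum(axis=1) * dkt
    da = raw / (raw.sum() * dx)              # enforce Eq. (39)
    print(name, " peak near x =", round(x[np.argmax(da)], 3),
          " <xi> =", round((xi * da).sum() * dx, 3),
          " <xi^2> =", round((xi**2 * da).sum() * dx, 3))
```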
## V Numerical results and discussion
The model parameters \(m\) and \(\beta\) used in the present work are summarized in Table 3, which were determined from the spectroscopic study in our previous work [31].
### Light-front wave function
We first discuss the LFWFs \(\Psi^{00}_{\lambda_{1}\lambda_{2}}(x,{\bf k}_{\perp})\) defined in Eq. (3) for the ground (\(1S\)) state and the radially excited (\(2S\)) state heavy pseudoscalar mesons. Figure 1(a) shows the two-dimensional (2D) plots for the \(D(1S)\) and \(D(2S)\) mesons as a function of \((x,k_{\perp})\) as an example of the unequal-mass case, while Fig. 1(b) shows the 2D plots for the \(\eta_{c}(1S)\) and \(\eta_{c}(2S)\) heavy quarkonia as an example of the equal-mass case. In Fig. 1, the LFWFs \(\Psi^{00}_{\lambda_{1}\lambda_{2}}\) are presented in terms of the helicity configuration \(\lambda_{1}\lambda_{2}\), where we denote \(\lambda=+1/2\) and \(-1/2\) as \(\uparrow\) and \(\downarrow\), respectively. Note that the longitudinal momentum fraction \(x\) is carried by the lighter quark with mass \(m_{1}\). The LFWFs can be compared with those obtained in Ref. [55].
There are several salient features related to the LFWFs in Fig. 1. (i) The center of the LFWF (\(k_{\perp}\to 0\) and \(k_{z}\to 0\)), which is associated with its extremum point, is located at
\[x=\frac{m_{1}}{m_{1}+m_{2}}, \tag{42}\]
which is obtained by solving Eq. (13). For the equal-mass case, \(\Psi(k_{\perp}\to 0,k_{z}\to 0)\) is located at \(x=1/2\)
\begin{table}
\begin{tabular}{c c c c} Current & Comp & Twist & DAs \\ \hline \(\gamma^{\mu}\gamma_{5}\) & \(+,\perp\) & 2 & \(\phi_{2;{\rm P}}\) \\ & \(-\) & 4 & \(\phi_{4;{\rm P}}\) \\ \(i\gamma_{5}\) & \(\dots\) & 3 & \(\phi_{3;{\rm P}}\) \\ \(\sigma^{\mu\nu}\gamma_{5}\) & \(+-,\perp-\) & 3 & \(\psi_{3;{\rm P}}\) \\ \end{tabular}
\end{table}
Table 2: Twist classification based on the choice of the current and its component.
as can be seen in Fig. 1. But, for the unequal-mass case (i.e. \(m_{1}=m_{u(d)},m_{2}=m_{c}\)) in Fig. 1, the center moves to a value of \(x<1/2\) and, therefore, the LFWF is somewhat distorted on the \((x,k_{\perp})\) plane. (ii) The LFWF correctly represents the pseudoscalar meson as \(\Psi^{00}_{\uparrow\downarrow}(x,\mathbf{k}_{\perp})=-\Psi^{00}_{\downarrow\uparrow}(x,\mathbf{k}_{\perp})\). In addition to the ordinary helicity (\(\uparrow\downarrow,\downarrow\uparrow\)), there is also a nonvanishing contribution from the higher helicities (\(\uparrow\uparrow,\downarrow\downarrow\)) that couple to the quark orbital angular momentum, as the sign differs between the positive and negative domains of \(k_{\perp}\). This configuration is possible in relativistic dynamics. However, such a contribution is suppressed as the quark mass increases and vanishes in the heavy-quark limit (\(m\to\infty\)). Therefore, in the heavy-quark or nonrelativistic limit, the LFWF takes only the contribution from the ordinary helicity without involving the orbital angular momentum. (iii) It is also shown that the \(2S\) state has a nodal structure represented as a white circle/oval, where the center of the LFWF has a dip represented as a blue region for the case of ordinary helicity. One may notice that the shape of the LFWFs is largely reflected in the DAs.
### Decay constant
First of all, the numerical values of decay constants for \(1S\) and \(2S\) state heavy pseudoscalar mesons obtained from the axial-vector current are presented in our previous work [31]. In this work, we confirm that the decay constants obtained from the pseudoscalar and pseudotensor channels also produce the same results as those obtained from the axial-vector channel regardless of the currents as well as all possible current components. Namely, we obtain the process-independent pseudoscalar meson decay constants in the LFQM. For the sake of completeness, we display again the results of \(1S\) and \(2S\) state heavy pseudoscalar mesons for the case of mixing angle \(\theta=12^{\circ}\) in Table 4.
In addition, we examine in Fig. 2 the contributions of helicity to the decay constants for \(1S\) and \(2S\) state heavy pseudoscalar mesons, as they are contingent upon the current component, as indicated in Table 1. Notably, as observed in Fig. 2, the helicity contributions exhibit variation across different currents and current components. Despite these variations, however, the resulting decay constant remains unaltered.
For the \(\gamma^{+}\gamma_{5}\), the contribution denoted by \(H_{\uparrow\downarrow}+H_{\downarrow\uparrow}\) is entirely from the ordinary helicity wave function
\begin{table}
\begin{tabular}{c c c|c c c} State & \(f_{theo}\) & \(f_{exp}\) & State & \(f_{theo}\) & \(f_{exp}\) \\ \hline \(D(1S)\) & 208 & 206.7(8.9) & \(D(2S)\) & 110 & \\ \(D_{s}(1S)\) & 246 & 257.5(6.1) & \(D_{s}(2S)\) & 133 & \\ \(\eta_{c}(1S)\) & 348 & 335(75) & \(\eta_{c}(2S)\) & 214 & \\ \(B(1S)\) & 190 & 188(25) & \(B(2S)\) & 126 & \\ \(B_{s}(1S)\) & 228 & \(\cdots\) & \(B_{s}(2S)\) & 150 & \\ \(B_{c}(1S)\) & 394 & \(\cdots\) & \(B_{c}(2S)\) & 268 & \\ \(\eta_{b}(1S)\) & 628 & \(\cdots\) & \(\eta_{b}(2S)\) & 443 & \\ \end{tabular}
\end{table}
Table 4: Decay constants of heavy pseudoscalar mesons predicted in the LFQM [31]. The results are given in MeV.
Figure 1: Two-dimensional plot of LFWF of (a) \(D\) and (b) \(\eta_{c}\) mesons for each helicity configuration. Note that the longitudinal momentum \(x\) is carried by the light quark.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \(m_{q}\) & \(m_{s}\) & \(m_{c}\) & \(m_{b}\) & \(\beta_{q\bar{c}}\) & \(\beta_{s\bar{c}}\) & \(\beta_{q\bar{b}}\) & \(\beta_{s\bar{b}}\) & \(\beta_{c\bar{c}}\) & \(\beta_{c\bar{b}}\) & \(\beta_{b\bar{b}}\) \\ \hline
0.22 & 0.45 & 1.68 & 5.10 & 0.424 & 0.455 & 0.495 & 0.538 & 0.592 & 0.767 & 1.167 \\ \end{tabular}
\end{table}
Table 3: The constituent quark masses \(m\) [GeV] and variational parameters \(\beta\) [GeV] for \(\theta=12^{\circ}\) adapted from our previous work [31].
\(\Psi_{\uparrow\downarrow-\downarrow\uparrow}^{00}(x,\mathbf{k}_{\perp})\) without involving the orbital angular momentum. This is one of the reasons why we call the plus current (\(\mu=+\)) the good current, where the dynamics becomes much simpler; it is also related to the leading-twist DAs, as explained in Sec. IV. For \(\gamma^{\perp}\gamma_{5}\), the contribution is still entirely from the ordinary helicity. However, when we use \(\gamma^{-}\gamma_{5}\), the bad current, the higher helicity contributions denoted by \(H_{\uparrow\uparrow}+H_{\downarrow\downarrow}\) arise and the dynamics becomes more complicated. It is clearly shown that the higher helicity contribution is suppressed as the constituent quark mass becomes heavier. A similar behavior is also observed for the \(i\gamma_{5}\) case. When we use \(\sigma^{+-}\gamma_{5}\) (or \(\sigma^{\perp-}\gamma_{5}\)), the ordinary helicity contribution appears larger than expected in some cases, as shown in the bottom left panel of Fig. 2. However, the higher helicity contribution compensates for it and keeps the decay constant the same. It is also worth mentioning that the behaviors of the helicity contributions for the ground state and the radially excited state are similar; the difference is that the higher helicity contribution is more pronounced for the radially excited state.
In Fig. 3, we show the \(\mathbf{P}_{\perp}\)-independence of the decay constants for the \(1S\) and \(2S\) state heavy pseudoscalar mesons. While each helicity contribution shows a \(\mathbf{P}_{\perp}\) dependence when one uses the minus component of the axial-vector current, the sum of all helicity contributions is completely independent of \(\mathbf{P}_{\perp}\), as it should be. It is also evident from Fig. 3 that the higher (ordinary) helicity contributions dominate in the low (high) \(\mathbf{P}_{\perp}\) region, consistent with the previous observation for the equal quark mass case [50]. We can also see that the higher helicity is
Figure 2: Helicity contributions to the decay constants of the \(1S\) and \(2S\) heavy pseudoscalar mesons. The patterned histogram represents a subtracted contribution. Since \(H_{\uparrow\downarrow}=H_{\downarrow\uparrow}\), we sum them for simplicity; the same applies to \(H_{\uparrow\uparrow}=H_{\downarrow\downarrow}\). Here we set \(\mathbf{P}_{\perp}=0\) for the case of \(\gamma^{-}\gamma_{5}\). For the \(\sigma^{\mu\nu}\gamma_{5}\) case, the helicity contribution depends on the choice of assigning \(x\) to the light or heavy quark; see Sec. V.3 for details.
more enhanced for the \(2S\) state, similar to that in Fig. 2.
Finally, we also examine the rotational invariance of the decay constant by investigating the wave function \(\psi^{\mu}(\vec{k})\) defined through Eq. (30) with \(f_{A}=\int\mathrm{d}^{3}\vec{k}\ \psi^{\mu}(\vec{k})\) for the axial-vector (\(\Gamma_{A}^{\mu}=\gamma^{\mu}\gamma_{5}\)) current. For the equal-mass case, \(\psi^{\mu}(\vec{k})\) has a spherical shape since the operator reduces to \(\mathcal{O}_{A}^{\mu}=2m\) regardless of the current component \(\mu\). For the unequal-mass case such as the \(D\) meson, the wave functions \(\psi_{D}^{+,\perp}(\vec{k})\) and \(\psi_{D}^{-}(\vec{k})\) are slightly deformed and shifted to the negative and positive \(k_{z}\) domains,\({}^{2}\) respectively, as depicted in the upper panels of Fig. 4. It is also shown that the wave functions \(\psi^{\mu}(\vec{k})\) are more separated in the \(k_{z}\) direction for the \(D(1S)\) state compared to those of the \(D(2S)\) state. The shifting in the \(k_{z}\) direction can be understood from the appearance of the odd function of \(k_{z}\) in \(\mathcal{O}_{A}^{+,\perp}(\vec{k})\) in Eq. (31), although such an odd \(k_{z}\) term does not actually contribute to the decay constant. The wave functions \(\psi^{+,\perp}(\vec{k})\) and \(\psi^{-}(\vec{k})\) become a sphere centered at the origin and coincide with each other if the \(k_{z}\) term is removed, as shown in the middle panels of Fig. 4. Moreover, the difference defined as \(\widetilde{\psi}_{D}(\vec{k})=\psi_{D}^{+}(\vec{k})-\psi_{D}^{-}(\vec{k})\) is displayed in the lower panels of Fig. 4, showing that the integration over \(k_{z}\) gives a vanishing result. Therefore, it is evident that the decay constant with \(\mu=+,\perp,-\) is the same. As for the pseudoscalar current, the wave functions are spherical, as also implied by its operator \(\mathcal{O}_{P}(\vec{k})\).
Footnote 2: The shifting to either the positive or negative domain depends on the choice of \(m_{1}\) and \(m_{2}\) since \(\mathcal{O}_{A}^{+,\perp}(\vec{k})\propto(m_{1}-m_{2})k_{z}\) in Eq. (31).
### Distribution amplitude
Figure 5 presents the DAs of different twists for the \(1S\) and \(2S\) state heavy pseudoscalar mesons. Note here that the longitudinal momentum \(x\) is carried by the lighter quark. As a result, the DAs for \(D_{(s)}\) and \(B_{(s)}\) are more concentrated in the lower \(x\) region. For the equal-mass case such as \(\eta_{c}\) and \(\eta_{b}\), the \(\phi_{\mathrm{2;P}}(x)\), \(\phi_{\mathrm{3;P}}(x)\), and \(\phi_{\mathrm{4;P}}(x)\) have the same shape since the corresponding operators are the same, as shown in Table 1. The key reason for this is the utilization of the self-consistent condition for the replacement of \(M\) with \(M_{0}\) when obtaining the aforementioned results. But, the \(\psi_{\mathrm{3;P}}(x)\) has a different shape with a narrower and higher peak. In addition, the peaks become closer together for \(\psi_{\mathrm{3;P}}(x)\) of the \(2S\) state. For the unequal-mass case such as the \(D\) or \(B\) meson, the peak is shifted to the lower \(x\) region for the higher twist DAs, where \(\phi_{\mathrm{4;P}}(x)\) has the highest peak. For the \(2S\) state, the peak near \(x=0\) is more enhanced for the higher twist, while the peak near \(x=0.5\) is suppressed and shifted to the lower \(x\) region. Furthermore, the dip between the peaks is enhanced for the higher twist.
In order to gain a deeper understanding of the structure of DAs, we construct a 2D plot illustrating the DAs, \(\phi(x)\equiv\{\phi_{n;\mathrm{P}},\psi_{\mathrm{3;P}}\}\), using the following definition:
\[\phi(x)=\int_{0}^{\infty}\mathrm{d}^{2}\mathbf{k}_{\perp}\psi(x,\mathbf{k}_{ \perp})=\int_{0}^{1}\mathrm{d}y\ \phi(x,y), \tag{43}\]
where the wave function \(\phi(x,y)=\pi\psi(x,y)/(1-y)^{2}\) is obtained by using the variable transformation \(\mathbf{k}_{\perp}^{2}=y/(1-y)\) so that \(y\) ranges from 0 to 1. For the sake of demonstration, we show the 2D plot of \(\phi(x,y)\) only for \(D(2S)\) in Fig. 6. One can clearly see that \(\phi(x,y)\) bears a resemblance to the LFWF shown in Fig. 1(a). The two-peak structure in the DAs of the \(2S\) state clearly originates from the nodal structure shown as the white bands. It appears that \(\phi_{\mathrm{2;P}}(x,y)\), \(\phi_{\mathrm{3;P}}(x,y)\), and \(\phi_{\mathrm{4;P}}(x,y)\) have a similar shape, but the DAs with the higher twist are more concentrated in the lower \(x\) region. On the other hand, \(\psi_{\mathrm{3;P}}(x,y)\) has a smaller nodal structure, so that it shows slightly different behavior in Fig. 5.
Figure 7 shows the helicity contributions to the DAs of \(\eta_{c}(1S)\) for various twists. The dashed and dotted lines represent the ordinary \(H_{\uparrow\downarrow}+H_{\downarrow\uparrow}\) and higher \(H_{\uparrow\uparrow}+H_{\downarrow\downarrow}\) helicity contributions, respectively. The solid lines represent the full results. As mentioned earlier, the DAs of
twist 2, twist 3, and twist 4 for equal-mass cases are the same. However, these DAs exhibit distinct helicity contributions. Specifically, the twist-2 DAs are exclusively composed of the ordinary helicity component, while the higher twist DAs incorporate a finite contribution from the higher helicity component. This observation is in accordance with the result presented for helicity contribution to the decay constant in Fig. 2.
Although the helicity contribution to DAs generally remains unchanged regardless of whether the light quark is assigned to \(x\) or \((1-x)\), it is crucial to acknowledge that the specific choice between \(x\) and \(1-x\) does impact the
Figure 5: Two-particle DAs with various twists for 1S and 2S heavy pseudoscalar mesons with various quark flavor contents where the longitudinal momentum \(x\) is carried by the light quark.
Figure 6: Two-dimensional plot of the DAs of \(D(2S)\) for various twists. Here we define \(\mathbf{k}_{\perp}^{2}=y/(1-y)\) to make a rectangular domain. The longitudinal momentum \(x\) is carried by the light quark.
helicity contribution to the DA \(\psi_{\rm 3;P}(x)\) obtained from the nonlocal matrix element. In particular, when assigning the light quark to either \(x\) or \((1-x)\), the helicity contribution to \(\psi_{\rm 3;P}(x)\) exhibits markedly distinct behaviors. Figure 8 illustrates, as an example, the discrepancy in helicity contributions to \(\psi_{\rm 3;P}(x)\) for \(D(1S)\) when the light quark is assigned to carry either \(x\) or \((1-x)\). The upper and lower panels in Fig. 8 represent the results obtained when the light quark is assigned to \(x\) and \(1-x\), respectively, and the same line codes are used as in Fig. 7. One can clearly see from Fig. 8 that the ordinary helicity exhibits a negative contribution when the light quark carries the value of \(x\) (upper panel), while it yields a positive contribution when the heavy quark carries the value of \(x\) (lower panel). This distinct behavior can be attributed to the integration over \(x^{\prime}\), where the behavior
Figure 8: The difference in the helicity contribution depending on the choice of assigning the LF longitudinal momentum fraction \(x\) to the light or heavy quark. The dashed and dotted lines represent the ordinary and higher helicity, respectively.
Figure 7: Helicity contributions to DAs with various twists. Although the total DAs of twist 2, twist 3, and twist 4 are the same, they consist of different helicity contributions. We note that the total DA of twist 3 with \(\sigma^{+-}\gamma_{5}\) is different from the others.
depends on the specific choice of \(x\). It indicates that each individual ordinary and higher helicity contribution to \(\psi_{\rm 3;P}(x)\) depends on whether the light quark or the heavy quark carries the specific light-front longitudinal momentum fraction \(x\). However, it is important to note that the total DA remains unchanged whether we assign \(x\) to the light or heavy quark. Furthermore, it is worth noting that there is a substantial cancellation between the ordinary and higher helicity contributions when \(x\) is associated with the light quark, whereas the cancellation is comparatively smaller when \(x\) is assigned to the heavy quark.
As we previously explained in the definition of \(\psi_{\rm 3;P}(x)\) provided by Eq. (18), there exist two conventions in the QCD sum rules for defining \(\psi_{\rm 3;P}(x)\), namely, with the inclusion of \((1-\rho_{+})\) [11] or without it [5]. In the previous work [26], the \(\psi_{\rm 3;P}(x)\) DA was computed without the term \((1-\rho_{+})\). However, we found that the inclusion of \((1-\rho_{+})\) is pivotal in obtaining the same decay constant from the nonlocal pseudotensor channel as those derived from the local axial-vector and pseudoscalar channels.
Figure 9 depicts a comparison of \(\psi_{\rm 3;P}(x)\) obtained with the term \((1-\rho_{+})\) (solid lines) and without it (dashed lines), for the cases of the heavy \(\eta_{c}(1S)\) (upper panel) and the light \(\pi(1S)\) (lower panel) mesons. We note that the same model parameters are used as in [26] for the plots of \(\pi\). The analysis reveals that the inclusion of the term \((1-\rho_{+})\) in \(\psi_{\rm 3;P}(x)\) leads to a narrower and higher shape compared to the case where \((1-\rho_{+})\) is absent. The quantitative impact of \((1-\rho_{+})\) on \(\psi_{\rm 3;P}(x)\) is found to be more significant in the heavy quark sector than in the light quark sector.
Finally, we also compute the \(\xi\)-moment up to \(n=6\) defined by
\[\langle\xi^{n}\rangle=\int_{0}^{1}{\rm d}x\ \xi^{n}\ \phi(x), \tag{44}\]
where \(\xi=x-(1-x)=2x-1\). The results are shown in Tables 5 and 6 for the \(1S\) and \(2S\) state heavy pseudoscalar mesons, respectively. Here, we note that \(x\) is carried by the light quark, so the sign is opposite for the
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline \((2S)\) & tw & \(D\) & \(D_{s}\) & \(\eta_{c}\) & \(B\) & \(B_{s}\) & \(B_{c}\) & \(\eta_{b}\) \\ \hline \(\langle\xi^{1}\rangle\) & \(2\) & \(-0.337\) & \(-0.294\) & \(\ldots\) & \(-0.644\) & \(-0.614\) & \(-0.390\) & \(\ldots\) \\ & \(3\)p & \(-0.445\) & \(-0.365\) & \(\ldots\) & \(-0.713\) & \(-0.670\) & \(-0.419\) & \(\ldots\) \\ & \(4\) & \(-0.553\) & \(-0.436\) & \(\ldots\) & \(-0.781\) & \(-0.726\) & \(-0.447\) & \(\ldots\) \\ & \(3\)t & \(-0.445\) & \(-0.365\) & \(\ldots\) & \(-0.713\) & \(-0.670\) & \(-0.417\) & \(\ldots\) \\ \hline \(\langle\xi^{2}\rangle\) & \(2\) & \(0.226\) & \(0.197\) & \(0.088\) & \(0.453\) & \(0.417\) & \(0.201\) & \(0.049\) \\ & \(3\)p & \(0.312\) & \(0.242\) & \(0.088\) & \(0.545\) & \(0.487\) & \(0.223\) & \(0.049\) \\ & \(4\) & \(0.397\) & \(0.288\) & \(0.088\) & \(0.636\) & \(0.558\) & \(0.246\) & \(0.049\) \\ & \(3\)t & \(0.273\) & \(0.202\) & \(0.053\) & \(0.533\) & \(0.475\) & \(0.205\) & \(0.029\) \\ \hline \(\langle\xi^{3}\rangle\) & \(2\) & \(-0.145\) & \(-0.114\) & \(\ldots\) & \(-0.337\) & \(-0.302\) & \(-0.113\) & \(\ldots\) \\ & \(3\)p & \(-0.222\) & \(-0.154\) & \(\ldots\) & \(-0.435\) & \(-0.373\) & \(-0.129\) & \(\ldots\) \\ & \(4\) & \(-0.299\) & \(-0.193\) & \(\ldots\) & \(-0.534\) & \(-0.445\) & \(-0.146\) & \(\ldots\) \\ & \(3\)t & \(-0.177\) & \(-0.116\) & \(\ldots\) & \(-0.413\) & \(-0.350\) & \(-0.108\) & \(\ldots\) \\ \hline \(\langle\xi^{4}\rangle\) & \(2\) & \(0.108\) & \(0.083\) & \(0.018\) & \(0.261\) & \(0.228\) & \(0.068\) & \(0.006\) \\ & \(3\)p & \(0.173\) & \(0.112\) & \(0.018\) & \(0.360\) & \(0.296\) & \(0.080\) & \(0.006\) \\ & \(4\) & \(0.238\) & \(0.142\) & \(0.018\) & \(0.458\) & \(0.364\) & \(0.092\) & \(0.006\) \\ & \(3\)t & \(0.124\) & \(0.072\) & \(0.008\) & \(0.329\) & \(0.265\) & \(0.061\) & \(0.003\) \\ \hline \(\langle\xi^{5}\rangle\) & \(2\) & \(-0.082\) & \(-0.058\) & \(\ldots\) & \(-0.209\) & \(-0.178\) & \(-0.043\) & \(\ldots\) \\ & \(3\)p & \(-0.138\) & \(-0.082\) & \(\ldots\) & \(-0.304\) & \(-0.241\) & \(-0.052\) & \(\ldots\) \\ & \(4\) & \(-0.195\) & \(-0.106\) & \(\ldots\) & \(-0.399\) & \(-0.304\) & \(-0.060\) & \(\ldots\) \\ & \(3\)t & \(-0.090\) & \(-0.048\) & \(\ldots\) & \(-0.267\) & \(-0.206\) & \(-0.035\) & \(\ldots\) \\ \hline \(\langle\xi^{6}\rangle\) & \(2\) & \(0.065\) & \(0.044\) & \(0.005\) & \(0.172\) & \(0.143\) & \(0.029\) & \(0.001\) \\ & \(3\)p & \(0.115\) & \(0.063\) & \(0.005\) & \(0.262\) & \(0.200\) & \(0.035\) & \(0.001\) \\ & \(4\) & \(0.165\) & \(0.083\) & \(0.005\) & \(0.352\) & \(0.258\) & \(0.041\) & \(0.001\) \\ & \(3\)t & \(0.068\) & \(0.033\) & \(0.002\) & \(0.220\) & \(0.162\) & \(0.021\) & \(0.0004\) \\ \hline \end{tabular}
\end{table}
Table 5: The \(\xi\)-moment up to \(n=6\) for the \(1S\) state heavy pseudoscalar mesons. Here we define the \(x\) carried by the lighter quark. Therefore, the odd-power of \(\langle\xi\rangle\) has an opposite sign to our previous work [31].
odd \(\xi\)-moments as compared to our previous work [31]. For the unequal-mass case, we observe that the absolute value of the odd \(\xi\)-moments becomes larger for the higher twist, indicating that the DAs deviate more from the center. The minus sign shows that the DAs are shifted toward the lower \(x\) region. We also note that the absolute value of the even \(\xi\)-moments increases for the higher twist. The absolute values of the \(\xi\)-moments for \(\psi_{3;\mathrm{P}}(x)\) are generally smaller than those of \(\phi_{3;\mathrm{P}}(x)\).
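For readers wishing to reproduce the mechanics of Eq. (44), the following minimal Python sketch evaluates \(\langle\xi^{n}\rangle\) up to \(n=6\) for the asymptotic DA \(6x(1-x)\); this toy DA is used only to illustrate the definition and is not one of the model DAs of this work.

```python
import numpy as np

# Minimal sketch of the xi-moment definition, Eq. (44), applied to the
# asymptotic DA 6x(1-x) (illustration only; not a model DA of this work).
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
phi = 6.0 * x * (1.0 - x)
xi = 2.0 * x - 1.0

for n in range(1, 7):
    print(n, round(np.sum(xi**n * phi) * dx, 4))
# odd moments vanish; <xi^2> = 1/5, <xi^4> = 3/35, <xi^6> = 1/21
```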
## VI Summary
We have investigated the decay constants and the DAs up to the twist-4 for the \(1S\) and \(2S\) state heavy pseudoscalar mesons in the LFQM, computing the local and nonlocal matrix elements \(\langle 0|\bar{q}\Gamma q|P\rangle\) with three different current operators \(\Gamma=(\gamma^{\mu}\gamma_{5},i\gamma_{5},\sigma^{\mu\nu}\gamma_{5})\).
In our LFQM, we performed a comprehensive analysis utilizing a general reference frame where \(\mathbf{P}_{\perp}\neq 0\) and explored all possible components of the currents. Our explicit results demonstrate the equality of the three pseudoscalar meson decay constants derived from the three distinct current operators \(\Gamma\). This remarkable consistency in decay constants is achieved when we enforce the self-consistency condition, i.e. the replacement of the physical mass \(M\) with the invariant mass \(M_{0}\), within the LFQM. This condition stems from the Bakamjian-Thomas (BT) construction, in which the meson state is based on a noninteracting quark-antiquark representation. It is important to note that the inclusion of the \((1-\rho_{+})\) factor in the definition of the nonlocal matrix elements \(\langle 0|\bar{q}(z)\sigma^{\mu\nu}\gamma_{5}q(-z)|P\rangle\) is crucial in order to obtain the same decay constant as those derived from the axial-vector and pseudoscalar currents. In addition to securing the process-independent pseudoscalar meson decay constant, regardless of the choice of current operators \(\Gamma\), we also demonstrated its explicit Lorentz and rotation invariance.
We also examined the helicity contributions to the decay constants, offering additional insights into the structural aspects of the decay constant. While the decay constant remains unchanged regardless of the choice of currents, the helicity contributions to the decay constant exhibit variations depending on the specific current and its components, as illustrated in Table 1. As illustrated in Fig. 2, while the good (plus) current only receives the ordinary helicity contributions (\(\uparrow\downarrow,\downarrow\uparrow\)), the other components including the bad (minus) current receive the higher helicity contributions (\(\uparrow\uparrow,\downarrow\downarrow\)). We further explored the impact of \(\mathbf{P}_{\perp}\) dependence on the helicity contributions when considering the axial-vector current with the minus current component. Notably, it becomes evident that the higher helicity contributions play a more prominent role in the low \(\mathbf{P}_{\perp}\) region, particularly for the \(2S\) state. These observations are depicted in Fig. 3.
According to the classification provided in Table 2, employing various current operators and different components of the currents leads to distinct twists in the DAs. In particular, we explored the twist-4 DA derived from the minus component of the axial-vector current.
The various twist DAs for the \(1S\) and \(2S\) heavy pseudoscalar mesons are exhibited in Fig. 5. It is evident that the higher twist DAs for the unequal-mass case are more concentrated in the lower \(x\) region. The \(\xi\)-moments are also computed for the various twist DAs. One of the notable results is that the odd \(\xi\)-moment becomes larger in magnitude for the higher twist.
We expect that our results are useful for the calculation of hard exclusive processes based on QCD factorization. In particular, the higher twist DAs may be important in the low-\(Q^{2}\) region [56; 57]. It would be of great importance to extend our analysis to the vector mesons with longitudinal and transverse polarizations for a further test of our methodology [50]. Moreover, the investigation of the decay constants for the excited scalar, axial-vector, and tensor mesons would also be interesting in order to confirm whether our LFQM based on the BT construction is universally applicable regardless of the meson quantum numbers [48; 52]. Finally, the extension of our approach to three-point functions, such as elastic or transition form factors, deserves a thorough investigation to explore the LF zero-mode effects.
## Acknowledgement
The work of A.J.A. is supported by the RIKEN special postdoctoral researcher program and the Young Scientist Training (YST) Program at the Asia Pacific Center for Theoretical Physics (APCTP) through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government and also by the Korean Local Governments - Gyeongsangbuk-do Province and Pohang City. The work of H.-M.C. was supported by the National Research Foundation of Korea (NRF) under Grant No. NRF-2023R1A2C1004098. The work of C.-R.J. was supported in part by the U.S. Department of Energy (Grant No. DE-FG02-03ER41260). The National Energy Research Scientific Computing Center (NERSC), supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, is also acknowledged.
## Appendix A Link between the Covariant BS Model and the LFQM
As we explained in the Introduction, our self-consistent LFQM results, e.g. Eqs. (27) and (28) in this work, can also be obtained from the "Type II" link between the manifestly covariant BS model and the LFQM, which was first introduced in [24]. This is another approach to arrive at the self-consistent LFQM. Since the detailed analysis for the link between the manifestly covariant BS model and the LFQM has already been made in the previous
works [25; 26], we shall briefly discuss the essential feature of the "Type II" link starting from the covariant BS model in this Appendix.
The matrix element \(A_{\rm A(P)}\equiv\langle 0|\,\bar{q}\Gamma_{\rm A(P)}q\,|P\rangle\) for the local operators \(\Gamma_{\rm A(P)}\) in the covariant BS model is given in the one-loop approximation as
\[A_{\rm A(P)}=iN_{c}\int\frac{{\rm d}^{4}k}{(2\pi)^{4}}\frac{H_{0}S_{\rm A(P)}}{(p_{1}^{2}-m_{1}^{2}+i\epsilon)(p_{2}^{2}-m_{2}^{2}+i\epsilon)}, \tag{A1}\]
where \(S_{\rm A(P)}={\rm Tr}[\Gamma_{\rm A(P)}(\not{p}_{1}+m_{1})\gamma_{5}(-\not{p}_ {2}+m_{2})]\) is the trace term with \(p_{1}=P-k\) and \(p_{2}=k\). To regularize the loop, we use the usual multipole ansatz \(H_{0}=\frac{g}{D_{\Lambda}^{2}}\) with \(D_{\Lambda}=p_{1}^{2}-m_{\Lambda}^{2}+i\epsilon\), where \(m_{\Lambda}\) plays the role of the momentum cut-off.
In this exactly solvable manifestly covariant BS model, the decay constants for the axial-vector and pseudoscalar currents can be obtained from the manifestly covariant calculation using the Feynman parametrization and the final results are given by
\[f_{\rm A} = \frac{gN_{c}}{4\pi^{2}}\int_{0}^{1}{\rm d}x\int_{0}^{1-x}{\rm d}y\ \frac{(1-x-y)B_{1}}{C^{2}}, \tag{A2}\] \[f_{\rm P} = \frac{gN_{c}}{4\pi^{2}\mu_{M}}\int_{0}^{1}{\rm d}x\int_{0}^{1-x}{\rm d}y\ \frac{(1-x-y)(B_{2}-2C)}{C^{2}},\]
where \(C=y(1-y)M^{2}-xm_{1}^{2}-ym_{2}^{2}-(1-x-y)m_{\Lambda}^{2}\), \(B_{1}=m_{2}+(1-y)(m_{1}-m_{2})\), and \(B_{2}=y(1-y)M^{2}+m_{1}m_{2}\). We should note, at this point, that the two pseudoscalar meson decay constants \(f_{\rm A}\) and \(f_{\rm P}\) obtained in the BS model are not equal to each other, e.g. \(f_{\rm A}=208\) MeV vs. \(f_{\rm P}=225\) MeV for the \(D(1S)\) meson with \(m_{\Lambda}=1.673\) GeV. This contrasts with our LFQM, in which we obtain the process-independent decay constant.
In parallel with the manifestly covariant calculation, we perform the LF calculation of Eq. (A1) by carrying out the LF energy integration over \(p_{2}^{-}\), picking up the on-mass-shell pole \(p_{2}^{2}=p_{\rm 2on}^{2}=m_{2}^{2}\), and obtain
\[f_{\rm A(P)}=N_{c}\int_{0}^{1}{\rm d}x\int\frac{{\rm d}^{2}{\bf k}_{\perp}}{8\pi^{3}}\ \frac{\chi(x,{\bf k}_{\perp})}{1-x}{\cal O}_{\rm BS}^{\rm A(P)}, \tag{A3}\]
where \(\chi(x,{\bf k}_{\perp})=g/\{[x(M^{2}-M_{0}^{2})][x(M^{2}-M_{\Lambda}^{2})]^{2}\}\) is the vertex function with \(M_{\Lambda}^{2}=M_{0}^{2}(m_{1}\to m_{\Lambda})\) and \({\cal O}_{\rm BS}^{\rm A(P)}=iS_{\rm A(P)}/2{\cal P}_{\rm A(P)}\).
In contrast to the LFQM constrained by the on-mass shellness of the constituents, the LF calculation of the BS model allows the off-mass shell quark propagators. For the axial-vector current \(\Gamma_{\rm A}^{\mu}=\gamma^{\mu}\gamma_{5}\) with the current component \(\mu=(+,\perp)\), we find that only the on-mass shell quark propagators contribute and the full result of the operator is obtained as \([{\cal O}_{\rm BS}^{\rm A}]_{\rm full}=[{\cal O}_{\rm BS}^{\rm A}]_{\rm on}^{ +}=[{\cal O}_{\rm BS}^{\rm A}]_{\rm on}^{\perp}=2{\cal A}\). On the other hand, the minus component of the axial-vector current receives not only the instantaneous but also the zero-mode contributions in addition to the on-mass shell contribution, i.e. \([{\cal O}_{\rm BS}^{\rm A}]_{\rm full}=[{\cal O}_{\rm BS}^{\rm A}]_{\rm on}^{ -}+[{\cal O}_{\rm BS}^{\rm A}]_{\rm inst}^{-}+[{\cal O}_{\rm BS}^{\rm A}]_{\rm z.m.}^{-}=2{\cal A}\), where
\[\left[{\cal O}_{\rm BS}^{\rm A}\right]_{\rm on}^{-} = \frac{2(m_{1}\Delta_{1}+m_{2}\Delta_{2}+{\cal A}{\bf P}_{\perp}^{2})}{M^{2}+{\bf P}_{\perp}^{2}},\] \[\left[{\cal O}_{\rm BS}^{\rm A}\right]_{\rm inst}^{-} = \frac{2m_{2}(M^{2}-M_{0}^{2})}{M^{2}+{\bf P}_{\perp}^{2}},\] \[\left[{\cal O}_{\rm BS}^{\rm A}\right]_{\rm z.m.}^{-} = \frac{2(m_{1}-m_{2})Z_{2}}{M^{2}+{\bf P}_{\perp}^{2}}, \tag{A4}\]
with \(\Delta_{j}=(m_{j}^{2}+{\bf k}_{\perp}^{2})/x_{j}(j=1,2)\) and \(Z_{2}=x(M^{2}-M_{0}^{2})+m_{1}^{2}-m_{2}^{2}+(1-2x)M^{2}\).
For the pseudoscalar current \(\Gamma_{\rm P}=i\gamma_{5}\), the full operator is obtained from the sum of the three nonvanishing contributions, i.e.
\([\mathcal{O}_{\rm BS}^{\rm P}]_{\rm full}=2[xM_{0}^{2}-m_{1}(m_{1}-m_{2})]/\mu_{M}=[\mathcal{O}_{\rm BS}^{\rm P}]_{\rm on}+[\mathcal{O}_{\rm BS}^{\rm P}]_{\rm inst}+[\mathcal{O}_{\rm BS}^{\rm P}]_{\rm z.m.}\), where
\[\left[\mathcal{O}_{\rm BS}^{\rm P}\right]_{\rm on} = \frac{\tilde{M}_{0}^{2}}{\mu_{M}},\] \[\left[\mathcal{O}_{\rm BS}^{\rm P}\right]_{\rm inst} = \frac{(1-x)(M^{2}-M_{0}^{2})}{\mu_{M}},\] \[\left[\mathcal{O}_{\rm BS}^{\rm P}\right]_{\rm z.m.} = -\frac{Z_{2}}{\mu_{M}}, \tag{A5}\]
With those full operators \([\mathcal{O}_{\rm BS}^{\rm A(P)}]_{\rm full}\), the LF results for \(f_{\rm A(P)}\) given by Eq. (A3) are the same as the corresponding covariant ones given by Eq. (A2).
The basic idea of Ref. [24] in obtaining the LFQM result given by Eq. (27) from the BS model amplitude given by Eq. (A3) is to replace not only the vertex function \(\chi(x,\mathbf{k}_{\perp})\) in Eq. (A3) with the Gaussian wave function \(\Phi(x,\mathbf{k}_{\perp})\) but also the physical mass \(M\) appearing in the BS model with the invariant mass \(M_{0}\) via the "Type II" link between the BS model and the LFQM, as we coined in [24]:
\[\sqrt{2N_{c}}\frac{\chi(x,\mathbf{k}_{\perp})}{(1-x)} \rightarrow \frac{\Phi(x,\mathbf{k}_{\perp})}{\sqrt{\mathcal{A}^{2}+\mathbf{k}_{\perp}^{2}}},\] \[M \rightarrow M_{0}. \tag{A6}\]
The immediate effect of the replacement \(M\to M_{0}\) in the LFQM is to remove the instantaneous contribution (\(\propto M^{2}-M_{0}^{2}\)), which may appear in the covariant BS model but is absent in the LFQM consistent with the BT construction. The only spurious effect that may appear in the LFQM is the LF zero mode. Furthermore, a crucial feature of the "Type II" link between the BS model and the LFQM is that only the on-mass-shell BS operator is employed. In other words, the LFQM operator, denoted as \(\mathcal{O}\equiv\mathcal{O}_{\rm LFQM}\) and defined by Eq. (27), is obtained directly from \(\left[\mathcal{O}_{\rm BS}\right]_{\rm on}(M\to M_{0})\).
For the axial-vector current case, the full BS operator \(\left[\mathcal{O}_{\rm BS}^{\rm A}\right]_{\rm full}=2\mathcal{A}\) obtained from \(\left[\mathcal{O}_{\rm BS}^{\rm A}\right]_{\rm on}^{+}=\left[\mathcal{O}_{\rm BS}^{\rm A}\right]_{\rm on}^{\perp}=\left[\mathcal{O}_{\rm BS}^{\rm A}\right]_{\rm full}^{-}\) is shown to be exactly the same as \(\mathcal{O}_{\rm LFQM}^{\rm A}\). The plus and perpendicular components of the current are free from the instantaneous and zero-mode contributions, and they are indeed the "good" components of the current. While the full operator derived from the minus component of the current shares the exact same form as those derived from the "good" currents, this feature can be considered extremely rare. It should be noted that the full operator obtained by including the zero mode does not generally coincide with the operator derived solely from the on-mass-shell contribution. This is evident in the case of the pseudoscalar current, for which the full operator \([\mathcal{O}_{\rm BS}^{\rm P}]_{\rm full}\) differs from the on-mass-shell operator \([\mathcal{O}_{\rm BS}^{\rm P}]_{\rm on}\); one then finds that \(f_{\rm P}\) in Eq. (27) does not match \(f_{\rm A}\) if the full operator \([\mathcal{O}_{\rm BS}^{\rm P}]_{\rm full}\), rather than \([\mathcal{O}_{\rm BS}^{\rm P}]_{\rm on}\), is used for the replacement of \(M\to M_{0}\). A similar observation has also been made in the previous analysis of the vector meson decay constant [24]. This indicates that the zero mode found in the BS model is no longer applicable to the LFQM. Instead, the replacement of \(M\to M_{0}\) in the on-mass-shell operator \(\left[\mathcal{O}_{\rm BS}\right]_{\rm on}\), e.g. \(\left[\mathcal{O}_{\rm BS}^{\rm A}\right]_{\rm on}^{-}(M\to M_{0})\) and \(\left[\mathcal{O}_{\rm BS}^{\rm P}\right]_{\rm on}(M\to M_{0})\), can be regarded as an effective zero-mode inclusion in the LFQM.
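This statement can be made concrete numerically. The Python sketch below (a rough illustration, not the production calculation) evaluates the Eq. (27)-type integral for a \(D(1S)\)-like state with three operator choices: \(\mathcal{O}_{\rm A}=2\mathcal{A}\), the on-mass-shell pseudoscalar operator with \(M\to M_{0}\), and the "full" pseudoscalar operator with the same replacement (for which the instantaneous piece vanishes and the zero-mode piece reduces to \(-Z_{2}(M\to M_{0})/\mu_{M_{0}}\)). The first two outputs should agree while the third should not; \(\mu_{M_{0}}\) denotes \(\mu_{M}\) with \(M\to M_{0}\), a pure \(1S\) wave function is used, and the Table 3 parameters and grids are illustrative.

```python
import numpy as np

# Minimal sketch: with the M -> M0 prescription, the on-mass-shell pseudoscalar
# operator reproduces f_A, whereas carrying along the BS zero-mode piece
# (the "full" operator) does not. D(1S)-like parameters from Table 3.
m1, m2, beta = 0.22, 1.68, 0.424

x = np.linspace(1e-4, 1.0 - 1e-4, 800)
kt = np.linspace(1e-4, 10.0, 800)
X, KT = np.meshgrid(x, kt, indexing="ij")
dx, dkt = x[1] - x[0], kt[1] - kt[0]

M0 = np.sqrt((KT**2 + m1**2) / X + (KT**2 + m2**2) / (1.0 - X))
kz = (0.5 - X) * M0 + (m1**2 - m2**2) / (2.0 * M0)
jac = M0 / (4.0 * X * (1.0 - X)) * (1.0 - (m1**2 - m2**2)**2 / M0**4)
Phi = np.sqrt(jac) * 4.0 * np.pi**0.75 / beta**1.5 * np.exp(-(kz**2 + KT**2) / (2.0 * beta**2))

A = (1.0 - X) * m1 + X * m2
mu_M0 = M0**2 / (m1 + m2)                      # mu_M with M -> M0
O_A = 2.0 * A
O_P_on = (M0**2 - (m1 - m2)**2) / mu_M0        # [O_BS^P]_on with M -> M0
Z2 = m1**2 - m2**2 + (1.0 - 2.0 * X) * M0**2   # Z_2 with M -> M0
O_P_full = O_P_on - Z2 / mu_M0                 # "full" operator with M -> M0

w = np.sqrt(6.0) * 2.0 * np.pi * KT / (16.0 * np.pi**3) * Phi / np.sqrt(A**2 + KT**2)
for name, op in [("f_A", O_A), ("f_P (on-shell)", O_P_on), ("f_P (full)", O_P_full)]:
    print(name, round(1e3 * float((w * op).sum() * dx * dkt), 1), "MeV")
```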
For the pseudotensor current, the local matrix element \(\left\langle 0\right|\bar{q}(0)\Gamma_{\rm T}q(0)\left|P\right\rangle\) defined by Eq. (100) in the covariant BS model is zero in the manifestly covariant calculation. Performing the LF calculation for this local matrix element, we also confirm that the full operator \([\mathcal{O}_{\rm BS}^{\rm T}]_{\rm full}\) is zero if and only if we include both nonvanishing instantaneous and zero-mode contributions, i.e., \([\mathcal{O}_{\rm BS}^{\rm T}]_{\rm full}=[\mathcal{O}_{\rm BS}^{\rm T}]_{\rm on }+[\mathcal{O}_{\rm BS}^{\rm T}]_{\rm inst}+[\mathcal{O}_{\rm BS}^{\rm T}]_{ \rm z.m.}=0\). Because of this, the decay constant for the pseudotensor current needs to be defined only through the nonlocal matrix element \(\left\langle 0\right|\bar{q}(z)\Gamma_{\rm T}q(-z)\left|P\right\rangle\) defined by Eq. (18).
Defining \(z^{\mu}=\tau\eta^{\mu}\) using the lightlike vector \(\eta=(1,0,0,-1)\) and multiplying \((P_{\mu}\eta_{\nu}-P_{\nu}\eta_{\mu})\) on both sides of Eq. (18), one can rewrite Eq. (18) as [see Ref. [26] for more detailed derivation]
\[\left\langle 0\right|\bar{q}(\tau\eta)i(\not{P}\!\!\!/-P\cdot\eta) \gamma_{5}q(-\tau\eta)\left|{\rm P}(P)\right\rangle\] \[=\frac{i}{3}f_{\rm T}\tilde{\mu}_{M}(P\cdot\eta)^{2}\int_{0}^{1} \mathrm{d}xe^{i\zeta\tau P\cdot\eta}\psi_{3;{\rm P}}(x), \tag{103}\]
where \(\tilde{\mu}_{M}=\mu_{M}(1-\rho_{+})\).3 We then obtain
Footnote 3: In Ref. [26], the term \(1-\rho_{+}\) in Eq. (103) is absent but the inclusion of this term in this work guarantees the process-independent decay constant in the LFQM.
\[\psi_{3;{\rm P}}(x)=-\frac{12}{f_{\rm T}\tilde{\mu}_{M}}\int_{-\infty}^{\infty }\frac{d\tau}{2\pi}\int_{0}^{x}dx^{\prime}e^{-i\zeta^{\prime}\tau(P\cdot\eta)} \mathscr{M}_{\rm f}, \tag{104}\]
where \(\mathscr{M}_{\rm f}=\left\langle 0\right|\bar{q}(\tau\eta)i(\not{P}\!\!\!/-P\cdot \eta)\gamma_{5}q(-\tau\eta)\left|M(P)\right\rangle\) is given by the following momentum integral in the same covariant BS model as Eq. (100)
\[\mathscr{M}_{\rm f}=N_{c}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{e^{-i\tau(p_{2}-p_{ 1})\cdot\eta}H_{0}}{(p_{1}^{2}-m_{1}^{2}+i\epsilon)(p_{2}^{2}-m_{2}^{2}+i \epsilon)}S_{\rm T}, \tag{105}\]
with the trace term \(S_{\rm T}=\mathrm{Tr}[i(\not{P}\!\!\!/-P\cdot\eta)\gamma_{5}(\not{p}_{1}+m_{1}) \gamma_{5}(-\not{p}_{2}+m_{2})]\). It is worth noting that the explicit covariant calculation of Eq. (104) is challenging due to the nonlocal nature of the matrix element. For the LF calculation of Eq. (104), we apply the equal LF time condition, \(z^{+}=0\), and choose the LF gauge \(A^{+}=0\) so that the path-ordered gauge factor becomes unity. We should note that the valence contribution, i.e. \([S_{\rm T}]_{\rm val}(x^{\prime},\mathbf{k}_{\perp})=[S_{\rm T}]_{\rm on}+[S_{\rm T }]_{\rm inst}\), has the same form as the one for the local current matrix element. However, the possible zero-mode contribution \([S_{\rm T}]_{\rm z.m.}\) is different from the one obtained in the local current case since the trace term, as well as the vertex function, should be integrated over \(x^{\prime}\) before the integration over \(x\). The nonlocal nature of the matrix element introduces a discrepancy in the power counting
of the singular term \((1/x)\), which is an essential step in identifying possible zero modes, compared to the local current matrix element calculation. As a consequence, this discrepancy gives rise to distinct zero modes in the nonlocal current case. The explicit form of the zero mode in this nonlocal matrix element calculation is not yet known. However, our "Type II" link between the covariant BS model and the LFQM, applying only to the on-mass-shell contribution, also works for the nonlocal matrix element calculation regardless of the existence of the LF zero mode.
Thus, considering only the on-mass shell contribution to the trace term, we obtain from Eqs. (38) and (39)
\[\psi_{\rm 3;P}(x)=-\frac{3N_{c}}{f_{\rm T}}\int_{0}^{x}dx^{\prime}\int\frac{d^{2} \mathbf{k}_{\perp}}{8\pi^{3}}\frac{\chi(x^{\prime},\mathbf{k}_{\perp})}{(1-x^{ \prime})}\frac{[S_{\rm T}]_{\rm on}}{P^{+}\tilde{\mu}_{M}}, \tag{40}\]
where \([S_{\rm T}]_{\rm on}=4P^{+}M_{0}^{\prime}k_{z}^{\prime}\). Now, from the normalization of \(\psi_{\rm 3;P}(x)\), i.e. \(\int_{0}^{1}\mathrm{d}x\;\psi_{\rm 3;P}(x)=1\), we obtain
\[f_{\rm T}=N_{c}\int_{0}^{1}\mathrm{d}x\int_{0}^{x}\mathrm{d}x^{\prime}\int \frac{d^{2}\mathbf{k}_{\perp}}{8\pi^{3}}\;\frac{\chi(x,\mathbf{k}_{\perp})}{1- x}\left[\mathcal{O}_{\rm BS}^{\rm T}\right]_{\rm on}, \tag{41}\]
where \(\left[\mathcal{O}_{\rm BS}^{\rm T}\right]_{\rm on}=-12M_{0}^{\prime}k_{z}^{ \prime}/\tilde{\mu}_{M}\).
Finally, applying the "Type II" link given by Eq. (36) to Eqs. (35) and (41), we obtain the LFQM results for \((f_{\rm A},f_{\rm P},f_{\rm T})\) defined by Eqs. (20) and (26). The on-mass-shell BS and LFQM operators \(\mathcal{O}_{\rm on}\) of the three decay constants with all possible current components are summarized in Table 7.
|
2307.00243 | Search for environment-dependent dilatons | The environment-dependent dilaton field is a well-motivated candidate for
dark energy and naturally arises in the strong coupling limit of string theory.
In this article, we present the very first experimental constraints on the
parameters of this model. For this, we employ data obtained from the qBounce
collaboration and the Lunar Laser Ranging (LLR) experiment. Furthermore, we
forecast expected exclusion plots for the Casimir And Non Newtonian force
EXperiment (Cannex) soon to be realised in an improved setup. Finally, we
provide a detailed analysis of the screening mechanism and additional
symmetries of the dilaton field theory. | Hauke Fischer, Christian Käding, René I. P. Sedmik, Hartmut Abele, Philippe Brax, Mario Pitschmann | 2023-07-01T06:08:11Z | http://arxiv.org/abs/2307.00243v1 | # Search for environment-dependent dilatons
###### Abstract
The environment-dependent dilaton field is a well-motivated candidate for dark energy and naturally arises in the strong coupling limit of string theory. In this article, we present the very first experimental constraints on the parameters of this model. For this, we employ data obtained from the \(q\)Bounce collaboration and the Lunar Laser Ranging (LLR) experiment. Furthermore, we forecast expected exclusion plots for the Casimir And Non Newtonian force EXperiment (Cannex) soon to be realised in an improved setup. Finally, we provide a detailed analysis of the screening mechanism and additional symmetries of the dilaton field theory.
pacs: 98.80.-k, 04.80.Cc, 04.50.Kd, 95.36.+x
## I Introduction
The origin of dark energy is one of the greatest puzzles in modern physics. Unexpectedly, type Ia supernovae data have revealed that our Universe is currently expanding at an accelerated rate [1; 2; 3]. This has been confirmed by many other cosmological probes.
The theoretical framework describing the Universe on cosmological scales is general relativity (GR). As GR is a crucial ingredient in the interpretation of cosmological observations, it seems natural that modifying GR could be at the heart of the observed accelerated expansion of the Universe. While a modification at short distances is indeed easily realisable by extending the Einstein-Hilbert action with quantities invariant under general coordinate transformations and containing higher derivatives of the metric (see e.g. [4]), a modification for large distance scales by making the theory massive is very intricate [5]. Amending GR by the so-called cosmological constant \(\Lambda\) allows one to describe the accelerated expansion. However, such a procedure would lead to a severe fine-tuning problem [6]. Consequently, the existence of new hypothetical scalar fields has been postulated, which couple to gravity and can account for dark energy [7]. Those new scalars generically lead to new interactions, so-called fifth forces and are theoretically well-motivated irrespective of their role for dark energy. As they have avoided detection in past fifth force experiments, they must be subject to a screening mechanism. Several such screening mechanisms have been devised, such as the chameleon [8; 9], K-mouflage [10; 11], Vainshtein [12] and Damour-Polyakov mechanisms [13].
In this article, we investigate the dilaton model with a Damour-Polyakov mechanism. This is a screened scalar field model whose behaviour in local tests of gravity has been less studied so far [14; 15; 16; 17; 18; 19]. This model has been proposed as a possible candidate for dark energy [20; 21]. Its potential naturally arises in the strong coupling limit of string theory and gives rise to a screening mechanism in connection with the Damour-Polyakov mechanism. Due to its origin in string theory this model is particularly well-motivated in comparison to similar models such as chameleons and symmetrons (for a related investigation concerning symmetrons we refer to [22; 23; 24]).
Herein, we provide a brief summary of this model, discuss its screening mechanism and parameter symmetries, followed by succinct descriptions of the corresponding experiments and methods that we employ in order to constrain the parameters of the dilaton. This article complements the theoretical analysis presented in [25].
## II The dilaton with Damour-Polyakov mechanism
The effective potential of the dilaton is given by [26]
\[V_{\rm eff}(\phi;\rho)=V_{0}\,e^{-\lambda\phi/m_{\rm pl}}+\beta(\phi)\,\frac{ \rho}{2m_{\rm pl}}\,\phi\;, \tag{1}\]
where \(V_{0}\) is a constant energy density, \(\lambda\) a dimensionless constant, \(\beta(\phi)=A_{2}\phi/m_{\rm pl}\) the full coupling to the matter density \(\rho\), \(A_{2}\) a dimensionless coupling constant and \(m_{\rm pl}\) the reduced Planck mass. Inside matter with density \(\rho\), the dilaton field approaches its minimum value given by
\[\phi_{\rho}=\frac{m_{\rm pl}}{\lambda}\,W\left(\frac{\lambda^{2}V_{0}}{A_{2} \rho}\right)\;, \tag{2}\]
where \(W(x)\) is the Lambert \(W\) function, which is the inverse function of \(xe^{x}\).
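As a minimal numerical illustration of Eq. (2), the following Python sketch (added here for illustration; all parameter values are placeholders and the density conversion to natural units is approximate) evaluates \(\phi_{\rho}\) with the Lambert \(W\) function and cross-checks it against a direct root search on the derivative of the effective potential of Eq. (1).

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

# Natural units (hbar = c = 1): all quantities in powers of eV.
m_pl = 2.435e27          # reduced Planck mass [eV]
V0   = 1.0e24            # 1 MeV^4, placeholder
lam  = 1.0e5             # placeholder
A2   = 1.0e2             # placeholder
rho  = 4.3e18            # ~1 g/cm^3 expressed in eV^4 (approximate)

def phi_min(rho):
    """Field value at the minimum of V_eff, Eq. (2)."""
    return (m_pl / lam) * np.real(lambertw(lam**2 * V0 / (A2 * rho)))

def dVeff(phi, rho):
    """dV_eff/dphi from Eq. (1) with beta(phi) = A2*phi/m_pl."""
    return -lam * V0 / m_pl * np.exp(-lam * phi / m_pl) + A2 * rho * phi / m_pl**2

phi_rho = phi_min(rho)
phi_root = brentq(dVeff, 1e-10 * phi_rho, 1e10 * phi_rho, args=(rho,))
print(f"phi_rho/m_pl (Lambert W): {phi_rho/m_pl:.6e}")
print(f"phi_rho/m_pl (root find): {phi_root/m_pl:.6e}")
print(f"condition (3): A2 phi^2/(2 m_pl^2) = {A2*phi_rho**2/(2*m_pl**2):.2e}")
```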
This potential is motivated from the string dilaton \(\chi\) and the condition \(V(\chi)\to 0\) for \(\chi\to\infty\), which is associated with the strong coupling limit of string theory [14]. Hence, an asymptotic expansion \(V(\chi)=\tilde{V}_{0}\,e^{-\chi}+\tilde{V}_{1}\,e^{-2\chi}\ldots\) is applied. Furthermore, in Ref. [13] it has been assumed that the coupling to matter has a minimum at some large value \(\chi=\chi_{0}\). Consequently, near the minimum the coupling is proportional to \((\chi-\chi_{0})^{2}\). Redefining \(\phi:=\frac{m_{\rm pl}}{\lambda}(\chi-\chi_{0})\)
leads to Eq. (1) (for a full derivation see e.g. [27]). For the derivation of experimental limits we demand the condition
\[A_{2}\phi^{2}/(2m_{\rm pl}^{2})\ll 1 \tag{3}\]
to hold in order to ensure that couplings to matter of higher order in \(\phi\) can be neglected.
The parameter space of this model can naturally be divided into three regions (see Appendix A.1). A large enough \(\lambda\) (at fixed \(V_{0}\) and \(A_{2}\)) guarantees \(e^{-\lambda\phi/m_{\rm pl}}\ll 1\) and condition (3). Inside this region the dilaton field primarily screens by increasing its mass in dense environments. Additionally, there is an approximate symmetry between \(A_{2}\) and \(V_{0}\); physical effects mainly depend on the product \(A_{2}\ln(V_{0}/\rho)\), but not on the individual values of \(V_{0}\) and \(A_{2}\) (see Appendix A.2). This is evident in the obtained experimental limits in Fig. (2) that shift systematically towards lower values of \(A_{2}\) for increasing \(V_{0}\). Condition (3) results in an ever stronger cut in the parameter space for increasing values of \(V_{0}\) (for the calculation of limits a second cut-off was set to ensure that treating the experimental setups as 1D is appropriate).
If \(\lambda\) is small enough then \(e^{-\lambda\phi/m_{\rm pl}}\simeq 1\) and (3) holds. The dilaton field has a functional dependence only on the product of parameters \(V_{0}\lambda\) rather than on the individual parameters \(V_{0}\) or \(\lambda\), and the screening of the field in this region is primarily due to the decrease of its matter coupling \(\beta(\phi)\) in dense environments (see Appendix A.1). Hence, computed limits in Fig. (2) simply shift towards lower values of \(\lambda\) for increasing values of \(V_{0}\) without changing their shapes, as long as \(\lambda V_{0}\) is kept constant.
In between these two regions, for intermediate values of \(\lambda\), there is a region where \(A_{2}\phi^{2}/(2m_{\rm pl}^{2})\gg 1\) and, consequently, this effective dilaton model is outside its range of applicability. However, for \(V_{0}\ll 1\) MeV\({}^{4}\) the distinct experimental limits in Fig. (2) merge. The merging point depends on the specific experiment and on the vacuum density employed, but is qualitatively at \(V_{0}\sim 10^{-20}\) MeV\({}^{4}\). For much lower values of \(V_{0}\) physical effects become weak and all experimental limits quickly disappear.
Tabletop experiments in a vacuum chamber play an important role in the search for screened scalar fields such as the dilaton. This follows from the low matter density within the vacuum chamber ensuring that the scalar field is less suppressed there than in dense environments, while sufficiently thick chamber walls effectively shield any influences from the outside world for a large region of parameter space. The same techniques have been utilised previously for experimental searches for chameleons [28; 29] and symmetrons [22]. Furthermore, screened scalar fields with comparably small interaction ranges can be probed better with tabletop experiments than via astrophysical searches.
## III The \(q\)Bounce experiment
In \(q\)Bounce[28; 30; 31] ultracold neutrons, which are totally reflected from most materials, bounce in the gravitational field of the Earth. The discrete energy levels are not equidistant, which allows one to perform resonance spectroscopy in the gravitational field. In its realization corresponding to a Rabi setup [29], neutrons pass through three regions: the first region acts effectively as a state selector and has a length of around 15 cm. A polished mirror at the bottom and a rough scatterer at a height of 20 \(\mu\)m on top ensure that only neutrons in the lowest few states can pass. Unwanted higher energy states are scattered out of the system. In the second region, neutrons pass a vibrating mirror with tunable frequency \(\omega\) that can drive the neutron towards a higher energy state. This region has a length of 20 cm. The final region is identical to the first region (see Fig. 1 for a schematic setup).
If the energy \(\hbar\omega\) associated with the frequency of the mirror is close to the energy \(\Delta E_{n}=E_{n}-E_{1}\) needed to drive the neutron to a specific higher energy state, the system enters a coherent superposition of the ground state and this excited state. If the neutron is not in one of the lowest \(\sim 2\) states anymore when entering the last region, a loss in transmission is observed.
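For orientation, the unperturbed spectrum of a neutron bouncing above a perfect mirror follows from the linear gravitational potential and is set by the zeros of the Airy function; the short Python sketch below (an illustrative addition, not part of the original analysis) reproduces the peV-scale energies and level splittings that the Rabi-type spectroscopy probes.

```python
import numpy as np
from scipy.special import ai_zeros

hbar = 1.054571817e-34   # J s
m_n  = 1.674927498e-27   # neutron mass [kg]
g    = 9.81              # m/s^2

z0 = (hbar**2 / (2 * m_n**2 * g))**(1 / 3)   # characteristic length, ~5.9 micrometres
E0 = m_n * g * z0                            # characteristic energy, ~0.6 peV

a_n = np.abs(ai_zeros(4)[0])                 # first four zeros of Ai(x)
E = E0 * a_n / 1.602176634e-19               # E_n = E0*|a_n|, converted to eV
for n, En in enumerate(E, start=1):
    print(f"E_{n} = {En*1e12:.3f} peV")
print(f"E_4 - E_1 = {(E[3] - E[0])*1e12:.3f} peV")
```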
Since neutrons are electrically neutral and have very low polarizability, they are very insensitive to experimental background disturbances. Hence, \(q\)Bounce is a highly sensitive probe for new physics and has already been used to probe and set stringent limits on many hypothetical new interactions [32]. Here, \(q\)Bounce is employed for the first time to set limits on the dilaton field. The presence of the latter would induce energy shifts that can directly be obtained from the stationary Schrödinger equation. Due to the comparatively large extension of the mirrors, the setup can safely be approximated as one dimensional, in which case the stationary Schrödinger equation reads
\[\bigg{[}-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial z^{2}}+ mgz+\Delta\;\frac{A_{2}}{2}\frac{m}{m_{\rm pl}^{2}}\,\phi^{2}(z)\bigg{]}\,\Psi_{n}(z)\] \[=E_{n}\Psi_{n}(z)\,. \tag{4}\]
In general, this is a two-body problem since the mirror as well as the neutron interact with the dilaton
field. We approximate this problem by treating the neutron as a sphere and extracting a "screening charge" \(\mathfrak{Q}\), which multiplies the dilaton potential and approximately describes the interaction of the neutron with the dilaton. For further details and an explicit expression for \(\mathfrak{Q}\) we refer to the accompanying article [25]. Two limiting cases are considered, depending on whether the neutron is described as a sphere of radius 0.5 fm in agreement with QCD ("fermi screening") or 5.9 \(\mu\)m corresponding to the natural extent of the wave function ("micron screening"). We assume that the true coupling lies within the boundaries provided by these two limiting cases.
For the calculation of the dilaton-induced energy shift, perturbation theory, as detailed in [25], is not applicable for a large part of the parameter space since the computed effects of the dilaton field can be very large. Therefore, the eigenvalue problem associated with the stationary Schrödinger equation has been solved numerically to allow for a non-perturbative treatment. Details on this procedure can be found in Appendix A.5.
The experimental sensitivity achieved in the Rabi-like setup corresponds to an energy resolution of \(\Delta E=2\times 10^{-15}\) eV in a vacuum chamber with a pressure of \(2\times 10^{-4}\) mbar. This sensitivity allows us to exclude a large part of the 3D parameter space of the dilaton field as shown in Fig. 2.
## IV Lunar Laser Ranging
Lunar Laser Ranging (LLR) measures the distance between the surfaces of the Earth and the Moon with high precision. This method involves firing a laser beam at a retroreflector array installed on the lunar surface during the Apollo missions. The retroreflectors consist of a series of small mirrors that reflect the laser beam back to Earth [33].
Measuring the time it takes for the laser pulse to propagate to the Moon and back provides the distance between the two bodies with an accuracy of a few centimeters. This data has been used to measure the Moon's orbit to high experimental precision, which allows to test GR and set stringent limits on any alternative theories. To date, the data is compatible with GR, which necessitates that scalar fields with a non-minimal coupling to matter, if they exist, must have a screening mechanism.
Lunar Laser Ranging has been used to test the equivalence principle. Similarly, deviations from the inverse-square law of gravity would induce shifts in the precession of the lunar perigee. The experimental constraint on equivalence-principle violations of the Earth (E) and the Moon (M) in the field of the Sun is given by [25; 34]
\[\delta_{\rm em}\simeq\frac{|\vec{a}_{\phi,{\rm E}}-\vec{a}_{\phi,{\rm M}}|}{|\vec{a}_{G}|}\leq 2\times 10^{-13}\,, \tag{5}\]
where \(\vec{a}_{\phi}\) refers to the dilaton-induced acceleration towards the Sun in addition to the Newtonian acceleration \(\vec{a}_{G}\). A second constraint is placed on any shift of
Figure 2: The exclusion plots typically separate into two distinct regimes; _Left_: here limits for small values of the parameter \(\lambda\) are plotted; _Right_: exclusion limits for large \(\lambda\) are depicted (for further explanations we refer to the main text); LLR: exclusion plots are filled areas in the bottom left in each region; limits obtained from violations of the equivalence principle are surrounded by solid lines, while limits from the precession of the lunar perigee are encompassed by dashed lines; _q_Bounce: exclusion plots are filled areas in the top right in each region; lighter areas correspond to micron screening and darker ones to fermi screening; _C_annex: prospective limits are surrounded by dotted lines; the two areas plotted _right_ correspond to \(\log_{10}(V_{0}/{\rm MeV}^{4})=1\) and \(10^{24}\), respectively.
the precession of the lunar perigee given by
\[\left|\frac{\delta\Omega}{\Omega}\right| \simeq\left|\frac{R^{2}}{GM_{\rm E}}\left(\delta f(R)+\frac{R}{2}\,\delta f^{\prime}(R)\right)\right|\leq 6.23833\times 10^{-12}\;, \tag{6}\]
where \(\delta f\) is the centripetal dilaton force per mass and \(M_{\rm E}\) denotes the mass of the Earth. For the numerical generation of the corresponding dilaton limits we used the analytical results from Ref. [25]. The obtained exclusion volume is shown in Fig. 2.
## V The cannex experiment
The Casimir And Non-Newtonian force EXperiment (Cannex) is currently being rebuilt at the Conrad Observatory in Austria [35]. It is especially designed to measure the Casimir force with unprecedented accuracy as well as fifth forces due to hypothetical new interactions, and gravity. The experimental setup consists of two plane parallel plates in close proximity, and allows to measure induced forces and their gradients between these plates in direct or Cavendish configuration (see Fig. 3 for a schematic setup).
Due to the geometry of two truly parallel plates, force generation by any interaction is maximized. With an effective area of \(1\,\mathrm{cm}^{2}\) and a targeted sensitivity of \(0.1\,\mathrm{nN/m}^{2}\) at separations between 3 and 30 \(\mu\)m, the Casimir effect as well as several hypothetical interactions could be measured with unprecedented accuracy [35]. By varying the pressure of Xe gas, the vacuum density surrounding the plates can be tuned between 5.3\(\times 10^{-12}\) kg/m\({}^{3}\) and 0.0026 kg/m\({}^{3}\). This variability allows for relative measurements that exploit the distinctive feature of hypothetical new scalar fields with non-minimal coupling to matter, namely their strong sensitivity to ambient densities. Cannex will therefore be a powerful tool in the search for such interactions. In one dimension, the setup can approximately be modeled as a half space with density \(\rho_{M}=2514\,\mathrm{kg/m^{3}}\) for \(z\leq-d\), a vacuum region with density \(\rho_{V}\) for \(-d<z<d\), an upper plate with density \(\rho_{M}\) for \(d<z<d+D\), and a vacuum region with density \(\rho_{V}\) for \(z>d+D\). The upper plate has a thickness of \(D=100\) \(\mu\)m and is movable, such that 1.5 \(\mu\)m \(<d<\) 15 \(\mu\)m. If dilatons indeed exist, they would induce an additional pressure between the plates. To compute this pressure, the corresponding differential equation for the dilaton field
\[\frac{d^{2}\phi}{dz^{2}}+\frac{\lambda V_{0}}{m_{\mathrm{pl}}}\,e^{-\lambda \phi/m_{\mathrm{pl}}}-\frac{A_{2}\rho(z)}{m_{\mathrm{pl}}^{2}}\,\phi=0\;, \tag{7}\]
has been solved numerically for all parameters of interest.
For further details on the simulations and the pressure calculation we refer to Appendices A.3 and A.4. An example of a simulated dilaton field for the Cannex setup is provided in Fig. 4.
## VI Dilaton dark energy
Requiring that the dilaton provides the vacuum energy accounting for dark energy results in a reduction of the parameter space to two dimensions, where the condition \(V_{\rm eff}(\phi_{V};\rho_{V})=3\Omega_{\Lambda_{0}}m_{\rm pl}^{2}H_{0}^{2}\) holds for the cosmological vacuum density \(\rho_{V}\) with the corresponding field minimum \(\phi_{V}\). This idea has been detailed in Ref. [25], where it has been shown that \(V_{0}\) can then be expressed in closed analytical form as a function of \(\lambda\) and \(A_{2}\). The numerical analysis shows that such dark energy dilatons violate condition (3) inside the entire parameter region where \(e^{-\lambda\phi_{V}/m_{\rm pl}}\ll 1\) for the cosmological vacuum density \(\rho_{V}\). Interestingly, \(A_{2}\phi_{V}^{2}/(2m_{\rm pl}^{2})\sim 1\) is roughly constant in this region. The larger part of the experimentally feasible parameter space where \(e^{-\lambda\phi_{V}/m_{\rm pl}}\simeq 1\) also violates condition (3). This is the reason why there are only comparably small excluded areas for Lunar Laser Ranging (see Fig. 5), while there are no other limits for the tabletop experiments considered herein.
Figure 4: Simulated dilaton field in between the parallel plates of the Cannex setup for \(\lambda=10^{31},A_{2}=10^{45}\) and \(V_{0}=10\) MeV\({}^{4}\); the lower (yellow) and upper (blue) plates are indicated in color.
Figure 3: Schematic cut view of the Cannex setup in direct configuration (without electrostatic shield between the plates). Forces are detected using Fabry Pérot interferometers sensing the extension of the mass-spring system created by the helical springs and the upper plate. The insert on the left defines the material and thickness of the various layers. Note that the upper plate and the springs are coated in addition on all sides with a thin (\(50\,\mathrm{nm}\)) layer of gold.
However, if the dilaton field were to contribute only 10% or less to the dark energy, condition (3) would pose no strong restrictions any more, which would allow to exclude large areas of the 2D parameter space for all investigated experiments in this case.
## VII Discussion
The analysis provided herein shows that LLR is sensitive to the dilaton field for interaction ranges in vacuum of approximately 1 AU and larger, while the tabletop experiments considered herein can probe the field for ranges as low as 1 \(\mu\)m in agreement with expectations. In the future, Cannex will be able to access a large part of the parameter space which is left open by \(q\)Bounce and LLR. If the dilaton is the only source of dark energy, only minor measurable effects are expected.
The code that has been used to generate all obtained results is available at [36].
## VIII Acknowledgments
This article was supported by the Austrian Science Fund (FWF): P 34240-N, P-33279-N, P36577-N, and is based upon work from COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology). We thank Tobias Jenke; measurements with \(q\)Bounce were performed at the ultra-cold beam position PF2@Institut Laue-Langevin, Grenoble.
## Appendix A Supplementary materials
### Derivation of the three parameter regions, the screening mechanisms and the parameter symmetry
In this section, we describe the three regions of the parameter space obtained by varying the magnitude of \(\lambda\). Increasing \(\lambda\) while keeping the other parameters fixed eventually leads to
\[\frac{\lambda^{2}V_{0}}{A_{2}\rho}\gg 1\,. \tag{30}\]
Using \(W(x)\simeq\ln(x)-\ln(\ln(x))\) for large \(x\) we can approximate
\[\phi_{\rho}\simeq\frac{m_{\rm pl}}{\lambda}\left\{\ln\left(\frac{\lambda^{2} V_{0}}{A_{2}\rho}\right)-\ln\left[\ln\left(\frac{\lambda^{2}V_{0}}{A_{2} \rho}\right)\right]\right\}, \tag{31}\]
which shows that
\[e^{-\lambda\phi_{\rho}/m_{\rm pl}}\simeq\ln\left(\frac{\lambda^{2}V_{0}}{A_{ 2}\rho}\right)\bigg{/}\left(\frac{\lambda^{2}V_{0}}{A_{2}\rho}\right)\ll 1\,. \tag{32}\]
The mass \(\mu_{\rho}\) of the dilaton is given by [25]
\[\mu_{\rho} = \frac{1}{m_{\rm pl}}\,\sqrt{\lambda^{2}V_{0}\,e^{-\lambda\phi_{ \rho}/m_{\rm pl}}+A_{2}\rho} \tag{33}\] \[\simeq \frac{\sqrt{A_{2}\rho}}{m_{\rm pl}}\sqrt{1+\ln\left(\frac{\lambda ^{2}V_{0}}{A_{2}\rho}\right)}\] \[\simeq \frac{1}{m_{\rm pl}}\sqrt{A_{2}\rho\,\ln\left(\frac{\lambda^{2}V_ {0}}{A_{2}\rho}\right)}\,.\]
Then, the full coupling to matter is approximately
\[\beta(\phi_{\rho}) = \frac{A_{2}\phi_{\rho}}{m_{\rm pl}} \tag{34}\] \[\simeq \frac{A_{2}}{\lambda}\left\{\ln\left(\frac{\lambda^{2}V_{0}}{A_{ 2}\rho}\right)-\ln\left[\ln\left(\frac{\lambda^{2}V_{0}}{A_{2}\rho}\right) \right]\right\}.\]
Since \(\rho\) affects \(\beta(\phi_{\rho})\) only logarithmically (as long as Eq. (30) holds), while the mass has a square-root dependence, increasing the density primarily leads to an increase in the mass of the field but only a negligible decrease of \(\beta(\phi_{\rho})\).
Decreasing \(\lambda\) inside this region increases \(\phi_{\rho}\) according to Eq. (31), which eventually leads to a violation of the condition \(A_{2}\phi^{2}/(2m_{\rm pl}^{2})\ll 1\). Eventually, however, \(\lambda\) gets small enough such that \(\lambda^{2}V_{0}/(A_{2}\rho)\ll 1\) holds. Hence, using \(W(x)\simeq x\) for small \(x\), we obtain in this second region
\[\phi_{\rho} \simeq m_{\rm pl}\,\frac{\lambda V_{0}}{A_{2}\rho}\,, \tag{35}\] \[e^{-\lambda\phi_{\rho}/m_{\rm pl}} \simeq e^{-\frac{\lambda^{2}V_{0}}{A_{2}\rho}}\simeq 1\,,\] (36) \[\mu_{\rho} \simeq \frac{\sqrt{A_{2}\rho}}{m_{\rm pl}}\,,\] (37) \[\beta(\phi_{\rho}) \simeq \frac{\lambda V_{0}}{\rho}\,. \tag{38}\]
Figure 5: Limits for the dilaton field as the source of dark energy. Only LLR can set limits in this case. Inside the plotted region \(V_{0}\) takes the value \(V_{0}\simeq 3\Omega_{\Lambda_{0}}m_{\rm pl}^{2}H_{0}^{2}\).
Decreasing \(\lambda\) inside this second region decreases \(\phi_{\rho}\) (in contrast to the behaviour in the first region) and hence the condition \(A_{2}\phi^{2}/(2m_{\rm pl}^{2})\ll 1\) is eventually fulfilled again. Inside this parameter region, \(\beta(\phi_{\rho})\) decreases considerably by increasing \(\rho\). Finally, since
\[V_{\rm eff}(\phi) = V_{0}\,e^{-\lambda\phi/m_{\rm pl}}+\frac{A_{2}\rho}{2m_{\rm pl}^{ 2}}\,\phi^{2} \tag{33}\] \[\simeq V_{0}-\lambda V_{0}\,\frac{\phi}{m_{\rm pl}}+\frac{A_{2}\rho}{2m_{ \rm pl}^{2}}\,\phi^{2}\,,\]
only the product of \(\lambda V_{0}\) enters the equations of motion, which explains the parameter symmetry that was observed also numerically, i.e. changing the parameters \(\lambda\) and \(V_{0}\) whilst keeping their product \(\lambda V_{0}\) fixed preserves the constraints on the parameter space for small enough \(\lambda\).
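The two regimes derived above can be checked numerically; the Python sketch below (added for illustration, with placeholder parameter values) compares the exact Lambert-W expressions for the minimum, the mass and the matter coupling with the small-\(\lambda\) and large-\(\lambda\) approximations of this appendix.

```python
import numpy as np
from scipy.special import lambertw

m_pl = 2.435e27      # reduced Planck mass [eV]
rho  = 4.3e18        # ~1 g/cm^3 in eV^4 (approximate)
V0   = 1.0e24        # placeholder
A2   = 1.0e2         # placeholder

def exact(lam):
    """Exact minimum, mass and matter coupling from the Lambert-W solution."""
    phi = (m_pl / lam) * np.real(lambertw(lam**2 * V0 / (A2 * rho)))
    mu = np.sqrt(lam**2 * V0 * np.exp(-lam * phi / m_pl) + A2 * rho) / m_pl
    return phi, mu, A2 * phi / m_pl

for lam in (1e-12, 1e12):
    phi, mu, beta = exact(lam)
    if lam**2 * V0 / (A2 * rho) < 1:                 # small-lambda regime
        approx = (m_pl * lam * V0 / (A2 * rho), np.sqrt(A2 * rho) / m_pl, lam * V0 / rho)
    else:                                            # large-lambda regime
        L = np.log(lam**2 * V0 / (A2 * rho))
        approx = (m_pl / lam * (L - np.log(L)), np.sqrt(A2 * rho * L) / m_pl,
                  A2 / lam * (L - np.log(L)))
    print(f"lambda = {lam:.0e}")
    for name, ex, ap in zip(("phi", "mu", "beta"), (phi, mu, beta), approx):
        print(f"  {name:4s}: exact {ex:.3e}   approx {ap:.3e}")
```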
### Additional explanation for the exclusion plots in the \(e^{-\lambda\phi/m_{\rm pl}}\ll 1\) region
There is another approximate symmetry inside the \(e^{-\lambda\phi/m_{\rm pl}}\ll 1\) region, which explains why the exclusion plots shift systematically towards lower values of \(A_{2}\) when increasing \(V_{0}\). To leading order, the parameters \(A_{2}\) and \(V_{0}\) enter the full coupling to matter \(\beta(\phi_{\rho})\) and the dilaton mass \(\mu_{\rho}\) via the same functional dependence \(A_{2}\ln\left(\lambda^{2}V_{0}/A_{2}\rho\right)\). In the excluded regions in the main paper, \(\ln\left(V_{0}/\rho\right)\gg\ln\left(\lambda^{2}/A_{2}\right)\) holds for essentially all of the displayed parameter space. Hence, \(A_{2}\ln\left(\lambda^{2}V_{0}/A_{2}\rho\right)\simeq A_{2}\ln\left(V_{0}/\rho\right)\). Therefore, the full coupling as well as the dilaton mass essentially depend only on the product \(A_{2}\ln\left(V_{0}/\rho\right)\), which is why there is an approximate symmetry between these two parameters. Hence, increasing \(V_{0}\) can effectively be compensated by a corresponding decrease of \(A_{2}\), as has been observed in the excluded regions. In contrast, the precession of the lunar perigee does not follow that symmetry. This is due to the sum of two physical effects with opposite signs that cancel each other for larger values of \(V_{0}\) in this case.
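A quick numerical check of this approximate symmetry (an illustration we add here; the parameter values are placeholders chosen deep inside the \(e^{-\lambda\phi/m_{\rm pl}}\ll 1\) region): two parameter sets with equal \(A_{2}\ln(V_{0}/\rho)\) yield nearly the same in-matter mass and coupling.

```python
import numpy as np
from scipy.special import lambertw

m_pl = 2.435e27          # reduced Planck mass [eV]
rho  = 4.3e18            # ~1 g/cm^3 in eV^4 (approximate)
lam  = 5.0e22            # placeholder, large-lambda regime

def observables(V0, A2):
    """In-matter mass and full matter coupling at the exact minimum."""
    W = np.real(lambertw(lam**2 * V0 / (A2 * rho)))
    mu = np.sqrt(lam**2 * V0 * np.exp(-W) + A2 * rho) / m_pl
    return mu, A2 * W / lam

V0_a, A2_a = 1.0e48, 1.0e40                               # placeholder set A
V0_b = 1.0e44                                             # set B: lower V0 ...
A2_b = A2_a * np.log(V0_a / rho) / np.log(V0_b / rho)     # ... compensated by a larger A2

for (V0, A2), tag in [((V0_a, A2_a), "set A"), ((V0_b, A2_b), "set B")]:
    mu, beta = observables(V0, A2)
    print(f"{tag}: A2 ln(V0/rho) = {A2*np.log(V0/rho):.3e}, mu = {mu:.3e} eV, beta = {beta:.3e}")
```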
### Derivation of the pressure in the Cannex experiment
For numerical calculations we made use of the formula for the pressure \(P_{z}\) on the upper plate of the Cannex setup
\[P_{z}=\frac{\rho_{M}}{\rho_{M}-\rho_{V}}\left(V_{\rm eff}(\phi_{V},\rho_{V})- V_{\rm eff}(\phi_{0},\rho_{V})\right)\,, \tag{34}\]
where \(\phi_{0}=\phi(0)\) is the dilaton field value at the center between both plates and the effective potential is given by
\[V_{\rm eff}(\phi;\rho)=V(\phi)+\rho A(\phi)\,. \tag{35}\]
In [25] the relation
\[P_{z} = \rho_{M}\left(\ln A(\phi(d))-\ln A(\phi(d+D))\right) \tag{36}\] \[\simeq \rho_{M}\left(A(\phi(d))-A(\phi(d+D))\right),\]
has been obtained, where in the second line \(A(\phi)\simeq 1\) has been used, which holds for all models of interest as e.g. dilatons, symmetrons and chameleons. However, this relation has been found challenging to work with numerically due to extreme slopes of the dilaton field near the mirror surfaces.
Therefore, it turns out that the relation for the pressure, Eq. (34), is more convenient for numerical simulations. We detail its derivation in what follows. Due to the screening mechanism, the field is assumed to take on its minimum value \(\phi_{M}\) inside the upper mirror of thickness \(D\) (this has been checked explicitly for all parameter values where limits have been set), and the value of \(\phi(d)\) is therefore to a very good approximation given by the value at the surface of a two-mirror setup, where both mirrors are infinitely extended with a vacuum region in between them. Analogously, the value \(\phi(d+D)\) is given by the value at the surface of the setup where one mirror is infinitely extended with a vacuum region above. In [25] the integrated equation of motion
\[\frac{1}{2}\left(\frac{d\phi}{dz}\right)^{2}-\frac{1}{2}\left(\frac{d\phi}{dz} \right)^{2}\bigg{|}_{z=z_{0}}=V_{\rm eff}(\phi;\rho)-V_{\rm eff}(\phi;\rho) \big{|}_{z=z_{0}}\,, \tag{37}\]
has been derived. For the one-mirror case we take the boundary conditions \(\phi(z)\to\phi_{M}\) for \(z\to-\infty\) and \(\phi(z)\to\phi_{V}\) for \(z\to\infty\). In the limit \(z\to\infty\) we get
\[-\frac{1}{2}\left(\frac{d\phi}{dz}\right)^{2}\bigg{|}_{z=z_{0}}=V_{\rm eff}( \phi_{V};\rho_{V})-V_{\rm eff}(\phi;\rho)\big{|}_{z=z_{0}}\,. \tag{38}\]
Subtracting Eq. (38) from Eq. (37) gives inside the vacuum
\[\frac{1}{2}\left(\frac{d\phi}{dz}\right)^{2}=V_{\rm eff}(\phi;\rho_{V})-V_{\rm eff }(\phi_{V};\rho_{V})\,. \tag{39}\]
Similarly, inside the mirror we find
\[\frac{1}{2}\left(\frac{d\phi}{dz}\right)^{2}=V_{\rm eff}(\phi;\rho_{M})-V_{\rm eff }(\phi_{M};\rho_{M})\,. \tag{40}\]
By continuity of the derivative at \(z=d+D\) we straightforwardly obtain
\[A\big{(}\phi(d+D)\big{)}=\frac{1}{\rho_{M}-\rho_{V}}\left(V_{\rm eff}(\phi_{M} ;\rho_{M})-V_{\rm eff}(\phi_{V};\rho_{V})\right)\,. \tag{41}\]
In case of the two infinitely extended mirrors we can use analogous reasoning, using that \(\partial\phi/\partial z_{|z=0}=0\) due to the symmetry of the setup with \(\phi_{0}:=\phi(0)\) being the value at the center between both mirrors. This results in
\[A(\phi(d))=\frac{1}{\rho_{M}-\rho_{V}}\left(V_{\rm eff}(\phi_{M};\rho_{M})-V_{ \rm eff}(\phi_{0};\rho_{V})\right). \tag{42}\]
Substituting these results into Eq. (36) proves Eq. (34).
### Details of numerical simulations of the dilaton field for Cannex
For our numerical simulations we used Mathematica 13.1. We found that the built-in NDSolve function for solving differential equations numerically does not work well for simulating the dilaton field, or for solving the Schrödinger equation in the presence of a dilaton field. Therefore, we wrote our own code adapted to solving these equations. We work with a non-uniform finite difference method to approximate the second derivative of \(\phi\) occurring in both differential equations, namely [37]
\[\phi_{i}^{\prime\prime}\approx\frac{2(\phi_{i+1}-\phi_{i})}{h_{i}(h_{i}+h_{i-1 })}-\frac{2(\phi_{i}-\phi_{i-1})}{h_{i-1}(h_{i}+h_{i-1})}\] (A.20)
with \(h_{i}:=x_{i+1}-x_{i}\) and a non-uniform discretization \(x_{1},\ldots,x_{N}\) of the simulation interval. For the one-dimensional dilaton field this results in the discretized differential equation
\[\frac{2(\phi_{i+1}-\phi_{i})}{h_{i}(h_{i}+h_{i-1})}-\frac{2(\phi_ {i}-\phi_{i-1})}{h_{i-1}(h_{i}+h_{i-1})}+\] \[\frac{\lambda V_{0}}{m_{\rm pl}}\,e^{-\lambda\phi_{i}/m_{\rm pl} }-\frac{A_{2}}{m_{\rm pl}^{2}}\,\rho_{i}\phi_{i}=0\,.\] (A.21)
This is a non-linear system of equations on \(\mathbb{R}^{N}\) that we solved with a self-programmed Newton's method. Boundary conditions were implemented by setting \(\phi_{0}=\phi_{N+1}=\phi_{M}\). This allowed us to use an arbitrary mesh, which we fine-tuned for the dilaton field profiles. Unlike the built-in finite element method that also allows arbitrary meshes, our algorithm is not restricted to machine precision, but works with arbitrary precision, which is a major advantage for the dilaton field. Furthermore, we found that the non-linear FEM algorithms in Mathematica often fail to converge to the correct solution without returning any error messages and are therefore unreliable. Our code is freely available for investigation of any further details at [36].
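To make the procedure concrete, the following self-contained Python sketch (our own illustration; the original analysis was carried out in Mathematica with arbitrary precision) applies the same non-uniform finite-difference discretization and Newton iteration to the idealized two-mirror Cannex geometry and then evaluates the pressure of Eq. (34). All parameter values and unit conversions are placeholders chosen such that double precision suffices; the near-cancellation in Eq. (34), which is one reason for the arbitrary-precision treatment, is tamed here with expm1.

```python
import numpy as np
from scipy.linalg import solve_banded

# Natural units (hbar = c = 1): lengths in 1/eV, energy densities in eV^4.
m_pl = 2.435e27                       # reduced Planck mass [eV]
m2eV = 5.0677e6                       # 1 m expressed in 1/eV
kgm3 = 4.32e15                        # 1 kg/m^3 expressed in eV^4 (approximate)

lam, A2, V0 = 1.0, 1.0e41, 1.0e24     # placeholder dilaton parameters (small-lambda regime)
rho_M, rho_V = 2514.0 * kgm3, 5.3e-12 * kgm3
d, D, t = 5e-6 * m2eV, 100e-6 * m2eV, 3e-6 * m2eV   # half-gap, upper plate, lower-wall cut

phi_M = m_pl * lam * V0 / (A2 * rho_M)   # minimum inside the mirrors (small-lambda limit)
phi_V = m_pl * lam * V0 / (A2 * rho_V)   # minimum in the vacuum region

# Non-uniform grid, refined towards the mirror surfaces at z = -d and z = +d.
eps = 1e-4
z = np.unique(np.round(np.concatenate([
    -d - np.geomspace(eps, t, 60),         # lower mirror (half space, truncated)
    -d + np.geomspace(eps, d, 120),        # left half of the vacuum gap
     d - np.geomspace(eps, d, 120),        # right half of the vacuum gap
     d + np.geomspace(eps, D, 120)]), 6))  # upper plate
rho = np.where((z > -d) & (z < d), rho_V, rho_M)
phi = np.where((z > -d) & (z < d), phi_V, phi_M)    # initial guess; end points stay at phi_M
h = np.diff(z)

for it in range(30):                                # Newton iteration on Eq. (A.21)
    hi, hm = h[1:], h[:-1]
    p, r = phi[1:-1], rho[1:-1]
    c_up, c_lo = 2.0 / (hi * (hi + hm)), 2.0 / (hm * (hi + hm))
    F = (c_up * (phi[2:] - p) - c_lo * (p - phi[:-2])
         + lam * V0 / m_pl * np.exp(-lam * p / m_pl) - A2 * r * p / m_pl**2)
    diag = -c_up - c_lo - lam**2 * V0 / m_pl**2 * np.exp(-lam * p / m_pl) - A2 * r / m_pl**2
    ab = np.zeros((3, p.size))                      # banded (tridiagonal) Jacobian
    ab[0, 1:], ab[1, :], ab[2, :-1] = c_up[:-1], diag, c_lo[1:]
    delta = solve_banded((1, 1), ab, -F)
    phi[1:-1] += delta
    if np.max(np.abs(delta)) < 1e-10 * np.max(np.abs(phi)):
        break

phi_0 = np.interp(0.0, z, phi)                      # field value at the centre of the gap
# Pressure from Eq. (34); with this sign convention P_z <= 0 since phi_V minimises V_eff(.;rho_V).
dV = (V0 * (np.expm1(-lam * phi_V / m_pl) - np.expm1(-lam * phi_0 / m_pl))
      + A2 * rho_V * (phi_V**2 - phi_0**2) / (2.0 * m_pl**2))
P_z = rho_M / (rho_M - rho_V) * dV
print(f"phi_0/phi_V = {phi_0/phi_V:.3e},  P_z = {P_z:.3e} eV^4")
```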
### Details for computing the energy shifts for \(q\)Bounce
We used perturbation theory when applicable. In all other cases we discretized the Hamilton operator using the same discretization method as explained for Cannex. The corresponding discretized version of the stationary Schrödinger equation is hence given by
\[-\frac{1}{2m}\left[\frac{2(\Psi_{i+1}-\Psi_{i})}{h_{i}(h_{i}+h_{i-1})}-\frac{ 2(\Psi_{i}-\Psi_{i-1})}{h_{i-1}(h_{i}+h_{i-1})}\right]+V_{i}\Psi_{i}=E\Psi_{i}\] (A.22)
with \(V_{i}=\mathfrak{Q}\,\frac{A_{2}}{2}\frac{m_{N}}{m_{\rm pl}^{2}}\,\phi^{2}(x_{ i})\). This results in a discrete approximation of the Hamilton operator given by
\[H_{ij}=\begin{cases}-\frac{1}{2m}\frac{2}{h_{i}(h_{i}+h_{i-1})}&,\text{if }j=i+1\\ -\frac{1}{2m}\frac{2}{h_{i-1}(h_{i}+h_{i-1})}&,\text{if }j=i-1\\ -H_{i,i+1}-H_{i,i-1}+V_{i}&,\text{if }j=i\\ 0&,\text{else}\,.\end{cases}\] (A.23)
Boundary conditions can be implemented analogously to the dilaton field simulation. Since the resulting approximation for the Hamilton operator is not symmetric on non-uniform grids, in our code we applied a transformation to restore symmetry following [37]. This procedure results in an eigenvalue problem for an \(N\times N\) matrix that can easily be solved numerically, and returns all possible eigenstates and eigenvalues obtainable with the fineness of the grid, from which we can safely extract the first and fourth energy states and the corresponding energies. Due to the high computational cost of this procedure, we only computed around 10 points for the remaining non-trivial edge of the exclusion area (which neither comes from a cut-off nor can be obtained from perturbation theory) and fitted the result with a linear function, which approximates the contour well. This procedure is justified because the difference between fermi and micron screening, which is our error estimate for the edge of the exclusion area, is much larger than the error introduced by our fit.
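As an illustration of this discretization and of the symmetrizing transformation, the Python sketch below (an addition for illustration, not the authors' code) builds the tridiagonal Hamiltonian of Eq. (A.23) for the pure gravitational potential \(mgz\) on a non-uniform grid, restores symmetry by replacing each off-diagonal pair with its geometric mean (a similarity transformation that leaves the spectrum unchanged), and checks the lowest levels against the exact Airy spectrum; the dilaton contribution would simply be added to \(V_{i}\).

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal
from scipy.special import ai_zeros

hbar, m, g = 1.054571817e-34, 1.674927498e-27, 9.81   # SI units

N = 3000
z = 150e-6 * np.linspace(0.0, 1.0, N + 2)**1.5        # non-uniform grid [m], denser near the mirror
h = np.diff(z)
zi = z[1:-1]                                          # interior points (psi = 0 at both ends)
hi, hm = h[1:], h[:-1]

V = m * g * zi        # the dilaton term Q*(A2/2)*(m/m_pl^2)*phi(zi)**2 of Eq. (A.22) would be added here

up   = -hbar**2 / (2 * m) * 2.0 / (hi * (hi + hm))    # H[i, i+1] of Eq. (A.23)
lo   = -hbar**2 / (2 * m) * 2.0 / (hm * (hi + hm))    # H[i, i-1]
diag = -(up + lo) + V                                 # H[i, i]

off = -np.sqrt(up[:-1] * lo[1:])                      # symmetrised off-diagonal (geometric mean)
E = eigh_tridiagonal(diag, off, select='i', select_range=(0, 3))[0]

z0 = (hbar**2 / (2 * m**2 * g))**(1 / 3)
E_airy = m * g * z0 * np.abs(ai_zeros(4)[0])          # exact spectrum of the bouncing neutron
for n in range(4):
    print(f"E_{n+1}: grid {E[n]/1.602e-19*1e12:.4f} peV   Airy {E_airy[n]/1.602e-19*1e12:.4f} peV")
```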
|
2308.01900 | The Tree of Light as interstellar optical transmitter system | This work aims at investigating the optical transmission system needed for
such lightweight sail, taking into account the physical constraints of such
unprecedented link and focusing on the optimal scheme for the optical signal
emission. In particular, the optical signal is distributed to several emitters
on the sail. The light diffraction resulting from the pattern of the emitters
acting coherently determines the characteristics of the whole beam transmitted
by the sail and of the received signal on the Earth. The performance of the
digital communication system using pulse position modulation (PPM) can be
assessed and channel coding schemes are proposed. We are using the paradigm for
which the entire sail communication system is described as a Tree-of-light: the
detectors, CPU, memory and laser transmitter are the central unit, representing
the trunk of the tree. The branches of the tree are the waveguides, directed to
the sail surface. By means of multimode splitters, the signal is further
distributed via the petioles to the emitters, the leaves, realized by grating
couplers (GCs), on which this work is more focused. | Elisa Bazzani, Anna Valeria Guglielmi, Roberto Corvaja, Nicola Laurenti, Filippo Romanato, Gianluca Ruffato, Andrea Vogliardi, Francesco Vedovato, Giuseppe Vallone, Lorenzo Vangelista, Paolo Villoresi | 2023-08-03T17:56:17Z | http://arxiv.org/abs/2308.01900v1 | # The Tree of Light as interstellar optical transmitter system
###### Abstract
The hunt for habitable planets outside the solar system as well as the search for the evidence of extraterrestrial life are everlasting questions for humanity to ponder. About the first aspect, concrete scientific evidences have grown steadily in the past decades. The discoveries of extrasolar planets with habitable conditions similar to the Earth started in 2007, with the observation of Gliese 581c using the observations made from La Silla (Chile) with the HARPS spectrograph on the ESO 3.6-m telescope [1]. The observations started with ground telescopes and expanded including space telescopes with several space missions starting from Kepler and including CHEOPS, GAIA, TESS and others as well as the coming PLATO mission.
In addition to the observation from the Earth or its close surrounding, the trip to the vicinity of an extrasolar planet for direct observations has been conceived. In particular, the Starshot Project supported by the Breakthrough Initiatives is developing the conceptual study on the feasibility of a trip aiming at the \(\alpha\)-Centauri star system, the closest candidate [2]. More than one exoplanets are orbiting within the habitable zone of star Proxima Centauri, a red dwarf star member of the three-star system, including the exoplanet Proxima Centauri b. In order to cover the 4.2 light-years of separation in trip lasting 20 years, a very lightweight probe is considered, suitable to be accelerated to 20% of the light-speed by directed energy propulsion, in line with the visionary proposal by R. L. Forward in 1984 [3]. The probe is conceived then as a sail that is pushed by a so-called photon engine, based on the coherent combination of laser beams. All the systems needed for the navigation, observation and communications of the findings shall be located on the sail surface, including the detectors for the acquisition of the local information of the mission, the processing unit, the memory, the signal generator and the optical transmitter to send the collected information to Earth. The optical signal is intended to be received by an array of telescopes, with single-photon sensitivity. The general communication system and the assessment of the data rate have been already the subject of detailed studies [4].
In this context, this work aims at investigating the optical transmission system needed for such lightweight sail, taking into account the physical constraints of such unprecedented link and focusing on the optimal scheme for the optical signal emission. In particular, the optical signal
is distributed to several emitters on the sail. The light diffraction resulting from the pattern of the emitters acting coherently determines the characteristics of the whole beam transmitted by the sail and of the received signal on the Earth. The performance of the digital communication system using pulse position modulation (PPM) can be assessed and channel coding schemes are proposed.
We are using the paradigm for which the entire sail communication system is described as a Tree-of-light (ToL), sketched in Fig. 1: the detectors, CPU, memory and laser transmitter are the central unit, representing the trunk of the tree. The branches of the tree are the waveguides, directed to the sail surface. By means of multimode splitters, the signal is further distributed via the petioles to the emitters, the leaves, realized by grating couplers (GCs), on which this work is more focused.
In Section 1 the concept of the ToL will be detailed, first with the analysis of the the conditions in which the Starshot sail transmitter will operate, considering the constraints in mass and the hypotheses on the available power. Then, the emitters scheme will be addressed, with a focus on the effects of the pattern of emitters to realize an Optical Phased Array (OPA), in order to restrict and possibly steer the main emission lobe to the desired pointing angle. The expected link losses are evaluated with different ToL configurations, in order to assess the photon rate at the receiver.
In the following Section 3 we present a proposal for the identification of the sails based on the actual Doppler shift experienced by the sails in the acceleration phase, which can be considered random around a mean value. The probability of distinguishing the sails by this technique is evaluated.
Figure 1: Tree-of-Light working principle: the seed laser power, coming from the tree trunk, is divided into branches by means of a Multi-mode interference (MMI) coupler, which delivers the signals to \(N\) leafs, that are grating couplers, not to scale. The individual GC couples the fundamental Gaussian mode (blue), while the array, by exploiting the GCs interferences, gives rise to a narrower main lobe (yellow).
In Section 4, the digital communications aspects using PPM are addressed, on the base of the previous results on the expected photon flux at the receiver. In particular the channel coding to implement the error correction is studied. Among the possible channel coding strategies for the Poisson channel, serially-concatenated PPM (SCPPM) schemes and Low-Density Parity Check (LDPC) codes are considered and their performance is presented.
Finally, in Section 2, the design and realization of the grating couplers on the surface of the sail is discussed. GCs are motivated for keeping the size and weight of the transmitter as small as possible, since they are realized directly on the surface of the sail, similarly to the waveguides of branches and petioles. Morever, this technique can benefit from the parallelization that has already been demonstrated at the technological level for the production of nano-optics on an industrial scale, reducing the cost per single sail. The size and the phase front quality of the mode emitted by the GCs are crucial parameters as they define the divergence of the beam emitted by the sail toward the receiver, having an impact on the photon flux as discussed in Section 1.
## 1 Concept of the ToL optical transmitter
The optical scheme is crucial for the sail communication systems, since the diffraction losses derive from it. These are the dominant term in the photon budget, due to the small extension of the available receiving area of the Starshot receiving telescope, or array of telescopes with respect to the large size of the beam that carry the data payload, which are the information acquired by the sail and encoded by means of optical pulses.
As schematized in Fig. 1, the individual GC emits a relatively large beam, shown with the blue curve, in the range of few to few tens of mrad, while the effect of the array is to shrink by more than two orders of magnitude the main lobe following the calculations reported below, and giving the yellow curve.
### Context and main optical losses for the sail transmission from \(\alpha\)-Centauri
The Starshot sail optical transmission shall operate once reached \(\alpha\)-Centauri star system and after the acquisition of the relevant information to communicate by the onboard sensors.
With reference to Fig. 2, the \(\alpha\)-Centauri star system is separated from the Solar system by about 4.4 ly, that are 4.1 10\({}^{16}\)m.
The angle at which one astronomical unis (AU = 1.496 10\({}^{11}\)m) is subtended from there is \(\alpha_{AU}=3.62\)\(\mu\)rad.
The angle at which the Earth diameter (Earth radius = 6.378 10\({}^{6}\)m) is subtended from there is \(\alpha_{E}\) = 0.31 nrad.
The subtended angle at which a ground receiver based on an array of telescopes whose size is supposed to be of one kilometer is \(\alpha_{T}\) = 24.2 femtorad (24.2 10\({}^{-15}\)rad).
By considering the 1 km size telescope array as the candidate Starshot receiver, the loss due to the overlap of a beam as large as the angle spanning the Earth may be assessed as \(Loss_{E}=(\frac{\alpha_{T}}{\alpha_{E}})^{2}=6.1\)\(10^{-9}\) while the loss in the case of a beam spanning 1 AU results of \(Loss_{AU}=(\frac{\alpha_{T}}{\alpha_{U}})^{2}=4.5\)\(10^{-17}\).
The photon budget is also affected by other factors as the atmospheric turbulence and attenuation, background light and detector noise, not analysed in this work, but it is mainly constrained by these harsh losses imposed by the extreme distance of the sail, the limitation in the size and weight of the transmitter and the restriction to the optical spectral region.
From such quick assessments, we may derive the correspondence between the divergence that the sail optical transmitter is imposing to the optical beam and the level of losses resulting from the coupling to the receiver telescopes. We here consider the microradian divergence value as a reasonable compromise. Indeed, the size of the sail and the corresponding allowed array size as well as the limitation in the GCs phasing to achieve a optimal coherent emission from a ultra lightweight structure as the Starshot sail are pointing at a reasonable coherent extent of the order of a meter or a large fraction of it, corresponding to about a span of a million wavelengths. With it, the diffraction losses results to be over 16 orders of magnitude. We note that this estimate is larger than the overall attenuation of the first experimental single photon exchange for quantum communications with a satellite, reported by Villoresi et al. in 2008, which was determined as
Figure 2: Definition of the relevant angles in the ToL communication scheme - not to scale.
\(-157\,\)dB [5]. However, even though in that experiment the source average power was 8.3 mW, the average count rate was observed to be only 5 clicks per second (cps). This result follows from the relatively low energy of each photon: the green photons at \(\lambda=532\) nm used there have an energy \(E=h\frac{c}{\lambda}=3.7\times 10^{-19}\) J. A similar argument may be applied to the Starshot photon budget: considering also reasonable values for the other causes of losses mentioned above, a minimum average optical power of the order of 1 W needs to be emitted by the sail.
We conclude this initial analysis by mentioning that the study carried out by Berera and Calderon-Figueroa, envisaging the use of transmission in the X-ray band, would provide much lower losses in the communication system [6]. However, we consider here that the feasibility of the entire system in this band is beyond reach.
### Optical phased array of transmitting leaves
OPAs are used to combine the emission of many GCs to increase the on-axis intensity by constructive interference, as well as for precise steering of the beam lobe [7]. Indeed, the many-light-year distance separating the transmitter from the Earth receiver requires an unprecedented beam forming into a narrow lobe and a corresponding pointing precision. However, beam steering by rigidly rotating the sail is not a functionality considered feasible so far.
To achieve the coherent combination of the GCs emission, the knowledge of the phase difference of the different points on the sail surface at the emission time, as shown in Fig. 3, and its correction are needed to be implemented.
Indeed, the actual geometry of the sail results from the forces exerted during the acceleration, from the forces due to particle absorption during the interstellar flight, and from the subsequent dynamical evolution. The phase to be added to each GC with respect to a reference one may be determined by means of a feedback from the beam splitters used to distribute the optical signal to the GCs, as described below.
Once the phase correction is determined, a phase modulation stage after the beam subdivision will be used for the signal directed to the GCs.
OPA elements can be fabricated separately and then integrated to form the entire array [8]. Similar approaches exploiting integrated waveguide optics have already been proposed and implemented using AlGaAs [9], silicon [10] and other [12] technologies. Ref. [10] in particular characterizes a silicon-on-insulator integrated circuit that includes an MMI splitting system, a modulation stage and a 1D grating-coupler output array, an implementation thus comparable to the ToL concept described here.
These integrated optical technologies provide concrete means for the harnessing of the sail optical system, including the scaling capacity. OPAs have therefore been identified as a versatile and lightweight solution to adjust small misalignments of the transmitter, so as to increase the photon budget at the receiver side.
Figure 3: Sail surface deformation and the resulting phase variation at two points with respect to a reference. To achieve coherent emission for all GCs at the grid nodes, an interpolation will be used.
### Diffraction model of Gaussian aperture systems
We briefly recall the model based on diffraction theory for an ensemble of identical coherent emitters. The individual element, i.e. the GC, is considered to couple a fundamental Gaussian mode into free space.
The complex function \(\mathrm{GB}(\mathbf{x},z)\) describing a Gaussian beam propagating along the \(z\) axis, centered at \(\mathbf{x}_{0}=0\) in the transverse plane and at the distance \(d_{0}\) from the origin, is expressed as
\[U_{\omega}(\vec{x},t)=\mathrm{GB}(\mathbf{x},z)e^{i\omega(t-\frac{1}{c}z)}\,, \qquad\mathrm{GB}(\mathbf{x},z)=\sqrt{\frac{kz_{0}P_{0}}{\pi}}\frac{i}{q_{z}} e^{-ik\frac{(\mathbf{x}-\mathbf{x}_{0})^{2}}{2q_{z}}} \tag{1}\]
with
\[k=\frac{\omega}{c}\,,\qquad\qquad q_{z}=z+q_{0}=z-d_{0}+iz_{0} \tag{2}\]
In the far field (\(z\gg z_{0}\)), the field is given by its Fourier transform evaluated at \(\mathbf{k}=k\frac{\mathbf{x}}{z}\). Therefore
\[\mathrm{GB}(\mathbf{x},z)_{\mathbf{x}_{0}}\simeq\frac{w_{0}k}{z}\sqrt{\frac{P_{0}}{2\pi}}\,e^{i\frac{k\mathbf{x}^{2}}{2z}}\,e^{-\frac{kz_{0}}{2z^{2}}\mathbf{x}^{2}}\,e^{i\frac{k\,\mathbf{x}\cdot\mathbf{x}_{0}}{z}} \tag{3}\]
Let us assume that the total emitted optical power \(P_{0}\) is distributed equally among \(N\) equivalent GCs, located at different positions \(\mathbf{x}_{0j}\) of the same plane, orthogonal with respect to the direction to the receiver, with \(j=0,\cdots N-1\). The field produced by each sub-aperture then has power \(P_{0}/N\), neglecting the splitting losses.
\(P_{0}\) is assumed to be the optical power available for transmission, regardless of the design of the light source, which can be a single laser as well as multiple phase-locked or injection-locked lasers [29], and after the attenuation introduced in the optical harnessing along the ToL trunk, branches and leaves.
The far-field amplitude is then given by
\[G(\mathbf{x},z)\simeq N\frac{w_{0}k}{z}\sqrt{\frac{P_{0}}{2\pi N}}\,e^{i\frac{k\mathbf{x}^{2}}{2z}}\,e^{-\frac{kz_{0}}{2z^{2}}\mathbf{x}^{2}}\,F_{N}(\mathbf{x}) \tag{4}\]
where we have multiplied and divided by \(N\) for convenience and
\[F_{N}(\mathbf{x})=\frac{1}{N}\sum_{j=0}^{N-1}e^{\frac{ik}{z}\mathbf{x}\cdot \mathbf{x}_{0j}}\,,\qquad F_{N}(\mathbf{x}=0)=1 \tag{5}\]
is the important factor arising from the interference among the emitters.
Since \(z_{0}=\frac{kw_{0}^{2}}{2}\), the intensity in far-field is given by
\[I(\mathbf{x},z)=|G(\mathbf{x},z)|^{2}=I_{0}(z)\,e^{-\frac{k^{2}w_{0}^{2}}{2z^{2}}\mathbf{x}^{2}}\,|F_{N}(\mathbf{x})|^{2} \tag{6}\]
namely a Gaussian envelope modulated by the \(|F_{N}(\mathbf{x})|^{2}\) function.
In the previous equation we defined the on-axis intensity by
\[I_{0}(z)=I(0,z)=\frac{Nw_{0}^{2}k^{2}P_{0}}{2\pi z^{2}} \tag{7}\]
\(I_{0}\) is proportional to \(Nw_{0}^{2}\), namely the total "effective area" \(A_{eff}^{Tx}\) of the transmitter, which can be increased by increasing \(N\) or the size of each single beam. Eq. 7 also expresses the linear relation between the total instantaneous power \(P_{0}\) at the source and the on-axis intensity in the far field.
Without including other attenuations and finite efficiencies, as already stated above, we may estimate the maximum number of signal photons per second: the photon rate collected by a receiver of effective area \(A_{eff}^{Rx}\) is then given by
\[n_{ph}(z)=\frac{I_{0}(z)A_{eff}^{Rx}}{E_{ph}}=\frac{I_{0}(z)A_{eff}^{Rx}\lambda} {hc} \tag{8}\]
where \(E_{ph}=hc/\lambda\) is the photon energy. Assuming perfect pointing, which allows one to use the maximum intensity value found on the beam axis, Fig. 4 shows \(n_{ph}\) as a function of the transmitter effective area, for three different wavelengths. To evaluate \(n_{ph}\), we assumed a square receiver with \(A_{eff}^{Rx}=1\) km\({}^{2}\), \(P_{0}=1\) W and \(\lambda=800\) nm.
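A minimal Python sketch of Eqs. (7)-(8) (an illustrative addition; the receiver area, power, distance and wavelength are the assumptions listed above) gives the photon rate as a function of the transmitter effective area \(A_{eff}^{Tx}=Nw_{0}^{2}\).

```python
import numpy as np

h, c = 6.62607015e-34, 2.99792458e8
z, P0, wl, A_rx = 4.1e16, 1.0, 800e-9, 1.0e6   # distance [m], power [W], wavelength [m], 1 km^2 [m^2]

def photon_rate(A_eff_tx):
    """Eqs. (7)-(8): on-axis intensity I0 = A_eff_tx k^2 P0 / (2 pi z^2) and photon rate."""
    k = 2 * np.pi / wl
    I0 = A_eff_tx * k**2 * P0 / (2 * np.pi * z**2)
    return I0 * A_rx * wl / (h * c)

for A in (1e-4, 2.5e-4, 1e-3):                 # transmitter effective areas [m^2]
    print(f"A_eff_tx = {A*1e4:5.2f} cm^2  ->  n_ph = {photon_rate(A):.2e} photons/s")
```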
#### 1.3.1 Square lattice Optical-Phased-Array
According to Eq. (5), the interference term depends on the emitter spatial distribution. Here we describe the array factor for a lattice array of \(N\) emitters, taken as a square number, with a side of length \(2d\) that is formed by \(\sqrt{N}\) equally distributed beam centers as depicted in Fig. 5a. If \((0,0)\) is the coordinate of the lattice center, the locations of the beam center in the h-th row
Figure 4: Photon rate at the receiver as a function of the transmitter effective area for different wavelengths.
and j-th column is given by
\[\begin{split} x_{hj}&=-d+\frac{2d}{\sqrt{N}-1}h\,,\qquad h =0,\cdots\sqrt{N}-1\\ y_{hj}&=-d+\frac{2d}{\sqrt{N}-1}j\,,\qquad j=0,\cdots \sqrt{N}-1\end{split} \tag{9}\]
By substituting the pairs \((x_{hj},y_{hj})\) into Eq. 5, the far-field factor reads
\[F_{N}(X,Y)=\frac{1}{N}\frac{\sin(\frac{\sqrt{N}X}{\sqrt{N}-1})}{\sin(\frac{X}{ \sqrt{N}-1})}\frac{\sin(\frac{\sqrt{N}Y}{\sqrt{N}-1})}{\sin(\frac{Y}{\sqrt{N} -1})} \tag{10}\]
whose first minimum is given at
\[X_{*}=\frac{\sqrt{N}-1}{\sqrt{N}}\pi \tag{11}\]
where we defined the dimensionless variables \(X=kxd/z\) and \(Y=kyd/z\). In the large \(N\) limit, Eq. (10) becomes
\[\lim_{N\rightarrow\infty}F_{N}(X,Y)=\frac{\sin X}{X}\frac{\sin Y}{Y},\qquad X _{*}\rightarrow\pi \tag{12}\]
We note that in the large \(N\) limit the width of the main lobe becomes independent of \(N\), but still varies with the array size. In other words, for a given \(N\), what matters for the divergence is the OPA side length. \(|F_{N}(X,0)|^{2}\) is shown in Fig. 5b in the positive \(X\) range, for different emitter numbers.
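A minimal numerical check of Eq. (10) and of its large-\(N\) limit, Eq. (12), can be done with a few lines of code; the values of \(N\) below are purely illustrative.

```python
import numpy as np

def dirichlet_1d(u, N):
    """1D lattice factor sin(sqrt(N) u/(sqrt(N)-1)) / sin(u/(sqrt(N)-1)),
    with its u -> 0 limit equal to sqrt(N)."""
    rootN = np.sqrt(N)
    u = np.asarray(u, dtype=float)
    num = np.sin(rootN * u / (rootN - 1.0))
    den = np.sin(u / (rootN - 1.0))
    small = np.abs(den) < 1e-12
    return np.where(small, rootN, num / np.where(small, 1.0, den))

def lattice_array_factor(X, Y, N):
    """Square-lattice array factor of Eq. (10); N must be a perfect square."""
    return dirichlet_1d(X, N) * dirichlet_1d(Y, N) / N

if __name__ == "__main__":
    X = np.linspace(0.0, 2.0 * np.pi, 7)
    for N in (16, 100, 10000):
        F = lattice_array_factor(X, 0.0, N)
        print(f"N={N:6d}:", np.round(np.abs(F) ** 2, 4))
    # large-N limit, Eq. (12): sin(X)/X along the x axis
    print("limit  :", np.round(np.sinc(X / np.pi) ** 2, 4))
```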
Figure 5: Lattice array arrangement and squared array factor.
### Outline of the optical transmitter system
Here we aim at assessing the scale of the transmission system, in terms of the OPA size, by combining the previous results on the far-field pattern with the characteristics and requirements of the ToL system. The main constraints are the small dimension of the GCs and the size of the OPA that can be realized on the sail, considering also the need for light splitting, branch harnessing, and phase modulation for the OPA operation.
Regarding the first issue, the waist size of a single source is typically of tens to hundreds of micrometers; however, this may be extended to the millimeter range by using metalenses or particular designs [15]. The second point, on the other hand, involves the minimum FWHM, set by the OPA arrangement, and the maximum intensity gain, arising from the constructive interference among the couplers.
We consider a target angular divergence of the main lobe of 1 \(\upmu\)rad, which corresponds to less than an astronomical unit at the Earth position, as justified above.
The main lobe of the lattice configuration of equally spaced GCs is bounded by \(X_{min}=kx_{min}d/z=\pi\), where \(2d\) is the array side length. For sufficiently large \(N\), we showed that the array main-lobe divergence \(\theta_{m}\) depends critically on the parameter \(d\); in particular, it decreases for increasing \(d\).
Since \(\theta=x/z\), the array side can be chosen to achieve the desired divergence according to:
\[\theta_{m}=\frac{x_{min}}{z}\sim 1\ \mu rad\qquad\text{and}\qquad d=\frac{ \lambda}{2\theta_{m}} \tag{13}\]
By using Eq. (7), we recall that the on-axis peak intensity is proportional to \(N\) even if the array is widely spaced. It is known that in this case the main lobe shrinks and an increasing fraction of the power is distributed in side lobes. Noticeably, this effect results from the OPA diffraction and is generally detrimental, as it diverts part of the emitted power away from the axis. It is quantified by the thinned-array-curse theorem, which relates the reduction of the central-lobe power to the ratio of the filled area to the empty area [3]. The power carried by the resulting central lobe is then reduced by the factor
\[\frac{1}{d_{sep}^{2}-d_{GC}^{2}} \tag{14}\]
where \(d_{sep}\) is the pitch and \(d_{GC}\) is the side of the GC, considered as square.
To assess the performance of the OPA in three realistic cases, three different coupler designs are considered: a GC coupling a mode of diameter \(\sim 30\ \upmu\)m, one with diameter 1 mm and, finally, a layered structure composed of a GC surmounted by a metalens, spaced so as to produce a \(\sim 1\) cm diameter output mode. The number of GCs required to achieve the desired divergence and \(A_{eff}^{TX}=2.5\ \text{cm}^{2}\) is shown in Tab. 1. The last column reports the maximum steering angle \(\theta_{Max}\) of the array, which is set by the Gaussian envelope width of the individual emitter, the single GC. Indeed, the steering phase added to the OPA causes the rotation of the lobe, whose amplitude is modulated by that of the individual emitter, as follows from Eq. 4 and is discussed below in more detail.
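The entries of Tab. 1 can be reproduced, at least as orders of magnitude, from the sizing rule of Eq. (13) together with the effective-area relation \(A_{eff}^{TX}\sim N w_{0}^{2}\) discussed above; the sketch below is only a rough estimate that ignores splitting losses and packing details.

```python
import numpy as np

def opa_sizing(w0, lam=800e-9, theta_m=1e-6, A_eff=2.5e-4):
    """Rough OPA sizing: half-side d from Eq. (13), N from A_eff ~ N w0^2,
    max steering angle taken as the single-GC Gaussian divergence lam/(pi w0)."""
    d = lam / (2.0 * theta_m)          # half-side of the lattice [m], Eq. (13)
    N = A_eff / w0**2                  # emitters needed for the target effective area
    theta_max = lam / (np.pi * w0)     # single-emitter envelope divergence [rad]
    return d, N, theta_max

if __name__ == "__main__":
    for w0 in (30e-6, 500e-6, 5000e-6):   # the three waists of Tab. 1
        d, N, th = opa_sizing(w0)
        print(f"w0={w0*1e6:6.0f} um  d={d:.2f} m  N~{N:.1e}  theta_max~{th*1e3:.2f} mrad")
```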
### Fine pointing optimization with beam steering
As long as we are interested in the peak intensity from a transmitter of total active area \(A_{eff}^{Tx}\), an array of cardinality \(N\) combining \(N\) waists \(w_{0}\) is equivalent to a single aperture with waist \(\sqrt{N}w_{0}\). Nevertheless, one of the main advantages of using an array of coherent emitters is the capability of steering the main beam lobe.
Beam steering can be performed by applying a different phase \(\phi_{j}\) to each GC. In this case, the function \(F_{N}(\mathbf{x})\) becomes:
\[F_{N}(\mathbf{x})=\frac{1}{N}\sum_{j=0}^{N-1}e^{i\phi_{j}}e^{\frac{ik}{z} \mathbf{x}\cdot\mathbf{x}_{0j}}\,,\qquad F_{N}(\mathbf{x}=0)=\frac{1}{N}\sum_{ j=0}^{N-1}e^{i\phi_{j}} \tag{15}\]
For the sake of simplicity, let us consider a tilt of the beam along the \(x\) axis. In this case, we need to apply a phase proportional to the beam centers \(x_{0j}\), namely \(\phi_{j}=-\alpha kx_{0j}\). This means that the far-field beam is shifted along the \(x\) axis at the receiver by \(\alpha z\), namely it is tilted by an angle \(\alpha\), and \(F_{N}(\mathbf{x})\) reads
\[F_{N}^{\text{steer}}(\mathbf{x})=\frac{1}{N}\sum_{j=0}^{N-1}e^{-i\alpha kx_{0j }}e^{\frac{ik}{z}\mathbf{x}\cdot\mathbf{x}_{0j}}=\frac{1}{N}\sum_{j=0}^{N-1}e^ {i\frac{k}{z}[(x-\alpha z)x_{0j}+yy_{0j}]} \tag{16}\]
By recalling that the overall far-field intensity pattern is given by Eq. (6), the Gaussian envelope causes the main lobe of the steered beam to be in general smaller than the on-axis value[8]. This effect ultimately sets a limit to the useful steering angle, which can be taken as a fraction of the individual GC divergence.
Moreover, the OPA lobe angular width sets the minimum \(\alpha\) that needs to be applied. To clarify this, let us consider a lattice array including a sufficiently large number of emitters and its far-field pattern on the \(x\) axis. The main lobe width is limited by \(2X_{*}\), where \(X_{*}\) is expressed by Eq. (12).
\[2X_{*}=\frac{k2dx}{z}\sim 2\pi \tag{17}\]
The corresponding angular aperture is therefore
\[\frac{x}{z}\sim\frac{2\pi}{k2d}=\frac{\lambda}{2d} \tag{18}\]
\begin{table}
\begin{tabular}{l l l} \hline \hline \(w_{0}\) [\(\mu\)m] & \(N\) & \(\theta_{Max}\) [mrad] \\ \hline
30 & \(2.8\times 10^{5}\) & 8.50 \\
500 & 1000 & 0.51 \\
5000 & 10 & 0.05 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Estimation of array cardinality \(N\) made with GCs with waist \(w_{0}\) giving a central lobe divergence of 1 \(\mu\)rad with \(A_{eff}^{TX}\simeq 2.5~{}cm^{2}\)
Given that \(\lambda/\pi w_{0}\) is the angular aperture of the individual GC Gaussian envelope, taken where the intensity decays by a factor \(1/e^{2}\) with respect to the peak, \(\alpha\) can be bounded in the range
\[\frac{\lambda}{2d}\leq\alpha\leq\frac{\lambda}{\pi w_{0}}. \tag{19}\]
Lower values of \(\alpha\) are not significant, being within the main lobe, while larger values impose a strong attenuation on the resulting tilted lobe. This latter effect can be understood by noting that, as the phase shifts applied to the array deviate the lobe direction, the lobe also deviates with respect to the emission of the individual GCs, which remain oriented along the normal to the sail surface. This effect is assessed in Fig. 6, where we plot the number of photons received as a function of \(\alpha\), for different waists of the single Gaussian mode, different numbers of GCs in the OPA, and \(2d=0.4\) m. The number of emitters is chosen so as to have \(A_{eff}^{TX}\sim 2.5\) cm\({}^{2}\).
The number of significantly different angular deviations, i.e., such that the steered lobes do not significantly overlap, may be approximated by
\[n_{\alpha}\sim\frac{2d}{\pi w_{0}} \tag{20}\]
In principle, we can estimate the number of bits in which the signals delivered to the modulators
Figure 6: Received number of signal photons per second at \(\lambda=800\) nm and receiver effective area 1 km\({}^{2}\), as a function of the steering angle, for different waists (expressed in the legend in meters) and N apertures. Specifically, to have \(A_{eff}^{TX}\sim 2.5\) cm\({}^{2}\) for \(w_{0}=\{100,500,1000,5000\}\) μm, we set \(\sqrt{N}=\{159,32,16,3\}\).
have to be encoded, i.e.
\[n_{b}=\log_{2}n_{\alpha}\sim\log_{2}\left(\frac{d}{\pi w_{0}}\right)+1 \tag{21}\]
as well as the phases \(\phi_{j}\) which must be provided by the modulators. Specifically, to cover the full angular interval, the maximum phase difference of the optical signals corresponding to the locations \(x_{0j}=\pm\sqrt{2}d\), on the diagonal of the lattice, has to be
\[\Delta\phi^{Max}=k\alpha^{Max}2\sqrt{2}d=k\frac{\lambda}{\pi w_{0}}2\sqrt{2}d =2\sqrt{2}\pi n_{\alpha} \tag{22}\]
With regard to adjacent emitters, their angular separation vanishes for large \(N\) and the same is true for their phase difference.
This aspect poses the problem of the resolution of the phase modulators, both in terms of the minimum and maximum phase shifts to introduce and of the corresponding dynamic range of the control system.
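To give a feeling for the numbers involved, the following sketch evaluates the steering bounds of Eq. (19), the number of resolvable steering angles of Eq. (20), the corresponding number of control bits of Eq. (21) and the maximum phase excursion of Eq. (22); the parameter values are the illustrative ones used in Fig. 6, and the function name is ours.

```python
import numpy as np

def steering_budget(w0, d, lam=800e-9):
    """Steering-angle bounds (Eq. 19), resolvable angles (Eq. 20),
    control bits (Eq. 21) and max phase excursion across the diagonal (Eq. 22)."""
    k = 2.0 * np.pi / lam
    alpha_min = lam / (2.0 * d)            # below this, the tilt stays within the main lobe
    alpha_max = lam / (np.pi * w0)         # above this, the single-GC envelope attenuates the lobe
    n_alpha   = 2.0 * d / (np.pi * w0)     # number of distinguishable steering angles
    n_bits    = np.log2(n_alpha)           # bits needed to address them
    dphi_max  = k * alpha_max * 2.0 * np.sqrt(2.0) * d
    return alpha_min, alpha_max, n_alpha, n_bits, dphi_max

if __name__ == "__main__":
    d = 0.2   # half-side of the array [m] (2d = 0.4 m as in Fig. 6)
    for w0 in (100e-6, 500e-6, 1000e-6, 5000e-6):
        amin, amax, na, nb, dphi = steering_budget(w0, d)
        print(f"w0={w0*1e6:6.0f} um  alpha in [{amin*1e6:.1f}, {amax*1e6:.1f}] urad  "
              f"n_alpha~{na:.0f}  n_bits~{nb:.1f}  dphi_max~{dphi:.0f} rad")
```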
The modulation can be realized by using integrated devices along the branches of the ToL. Due to the limitations in power, mass and available complexity, and to the need to control the phase accurately for many different GCs, a solution exploiting integrated, high-efficiency electro-optic materials is described here.
With reference to state-of-the-art deposition techniques and considering lithium niobate as an electro-optic material [12, 13], the \(V_{\pi}cm\) product may be realized as low as 1.5 Vcm with a fully integrated design along a single-mode waveguide. The slightly lower value of 1.4 Vcm was reported for silicon-on-insulator waveguides of thickness 250 nm [14]. Despite a nominal estimation based on modulator lengths, the voltage levels used for phase modulation need to be calibrated separately depending on the particular implementation. Random variations in waveguide, modulator and GC dimensions, as well as manufacturing imperfections, in fact prevent a theoretical assessment of the far-field distribution response to the applied voltages. The minimum number of calibrations is \(N\cdot N_{DAC}\), where \(N_{DAC}\) is the number of available phase modulation levels[11]. Moreover, due to the random nature of the calibration, each modulator has to be designed to cover the entire \([0,\,2\pi]\) range so as to ensure the sought OPA response in all circumstances.
#### 1.5.1 On the polarization of the optical signal
The OPA model described above is independent of the state of polarization of the emitted radiation. At the same time, polarization modulation may be exploited for the sail identification and the coding synchronization, in the perspective of using a PPM code as described below. The use of this degree of freedom of the photon is crucial in quantum communications along space channels [16, 17].
To exploit this possibility, the capability of polarization modulation at the transmitter and a polarization-sensitive receiver are needed. Regarding the most convenient polarization state, we may consider that a circular polarization makes the link insensitive to the alignment of the sail with respect to the receiver under rotations around the propagation axis. Moreover, current high-sensitivity detectors such as superconducting nanowire single-photon detectors (SNSPD) with
a meander as the anode have a maximum efficiency for a particular input linear polarization state. The circular polarization can then be exploited by transforming the collected photons into that optimal linear polarization state with a birefringent plate close to the receiver focal plane. This is realized with a quarter-wave plate with the \(22.5^{\circ}\) orientation of the optical axis with respect to the main meander axis.
The study reported in the final section describes how to generate the circular polarization state at the transmitter. We note that the possibility to actively control the polarization behaviour of the metalenses, a possible evolution of the current technology, may be envisaged to obtain a polarization modulation within the single-GC scheme discussed in this work.
From these considerations, it follows that most of the effort has to be focused on the pointing capability, which comprises both a precise evaluation of the sailcraft-Earth relative position and a consistent steering of the far-field distribution of the transmitted light. The latter implies that the correct calibration of the phase modulators is also a fundamental and delicate task, as well as the evaluation and eventual correction of any deformations in the sail that could affect the OPA spatial distribution.
High coupling efficiency transmitting leaves
In this section we discuss the optimization of the emission efficiency from the individual grating coupler and the possibility of imposing a circular polarization, to relax the alignment requirements with the receiver system, as well as the flattening of the final wavefront.
The value of the wavelength suitable for the Starshot transmitter is currently still not fixed. Indeed, it will result from the optimization of several factors, including the diffraction width of the beam, directly proportional to \(\lambda\), the technology of the laser source, the optical harnessing and modulation, and the ToL leaves. In this section we adopted the value of 800 nm, which represents a candidate value optimizing the generation section.
### High efficiency apodized grating couplers
A solution to couple the light coming from a waveguide into the vertical direction with high efficiency is given by Binary Blazed Grating Couplers (BBGCs). They exhibit high transmission efficiency but are limited in terms of outgoing beam waist (5-10 \(\mathrm{\SIUnitSymbolMicro m}\)) [18]. To overcome this limitation we designed different optics able to achieve high coupling efficiency with a larger beam waist, the so-called apodized grating couplers (AGCs). Apodized grating couplers are gratings with a variation of the fin width along the whole structure; this approach permits both to design the grating so as to enhance the coupling efficiency into a target mode and to increase the outgoing beam waist [19]-[22]. AGCs have some limitations in terms of perfectly vertical directionality of the coupled mode, but this problem can be overcome by using a phase corrector after the grating to correct the deviation from the vertical direction; furthermore, the phase corrector can also act as a collimator.
Figure 7: Schematic configuration of the grating coupler and phase corrector.
For our aims we proposed AGCs, as shown in Figure 7, made of silicon nanofins having different lengths (\(l_{FIN-i}\)) and the same height (\(h_{FIN}\)), placed in a digitalized pattern with period \(u\). The AGC pattern is placed onto a substrate, which can be considered of silicon, with total height \(h_{GC}\), and the light is provided by a silicon waveguide of height \(h_{WG}\). The above-described components are fabricated over both a BOX layer (\(h_{BOX}\)), to ensure a good refractive index contrast, and a substrate (\(h_{SUB}\)) to attach the structure onto the sail. The structure is covered by a material, called spacer, which acts as substrate for the phase corrector modelled as a metalens. Since the goal of the leaves is to couple as much as possible of the light incoming from the waveguides to a collimated Gaussian mode outgoing from the leaves themselves, we simulated the behaviour of different AGCs, changing some parameters in order to maximize the coupling efficiency (\(CE\)). We estimated the coupling efficiency as:
\[CE=T\cdot OI \tag{23}\]
where \(T\) is the transmission of the outgoing mode, calculated as \(T=|E_{OUT}|^{2}/|E_{IN}|^{2}\), and \(OI\) is the overlap integral between the simulated coupled mode and a Gaussian mode:
\[OI=\frac{|\iint_{S}E_{1}(x,y)\cdot E_{2}^{*}(x,y)\,dx\,dy|^{2}}{\iint_{S}|E_{1}(x,y)|^{2}\,dx\,dy\cdot\iint_{S}|E_{2}(x,y)|^{2}\,dx\,dy} \tag{24}\]
where \(E_{2}(x,y)\) is a Gaussian mode and \(E_{1}(x,y)\) is the grating coupler's outgoing mode.
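The overlap integral of Eq. (24) is straightforward to evaluate numerically on a sampled field; the sketch below shows the computation of \(OI\) and of the coupling efficiency \(CE\) of Eq. (23). The synthetic, slightly distorted field used here merely stands in for the simulated AGC output (an assumption for illustration only), while the transmission value is the one quoted in the text.

```python
import numpy as np

def overlap_integral(E1, E2, dx, dy):
    """Discrete version of Eq. (24): |<E1, E2>|^2 / (||E1||^2 ||E2||^2)."""
    num = np.abs(np.sum(E1 * np.conj(E2)) * dx * dy) ** 2
    den = (np.sum(np.abs(E1) ** 2) * dx * dy) * (np.sum(np.abs(E2) ** 2) * dx * dy)
    return num / den

if __name__ == "__main__":
    # Sampling grid (in micrometers) and a reference Gaussian of waist 70 um.
    x = np.linspace(-200.0, 200.0, 401)
    y = np.linspace(-200.0, 200.0, 401)
    X, Y = np.meshgrid(x, y)
    dx, dy = x[1] - x[0], y[1] - y[0]
    w0 = 70.0
    E_gauss = np.exp(-(X**2 + Y**2) / w0**2)

    # Stand-in for a simulated coupled mode: a Gaussian with mild asymmetry and tilt.
    E_mode = np.exp(-(X**2 / (1.1 * w0) ** 2 + Y**2 / (0.9 * w0) ** 2)) * np.exp(1j * 0.01 * X)

    OI = overlap_integral(E_mode, E_gauss, dx, dy)
    T = 0.8135   # transmission from the simulation, as quoted in the text
    print(f"OI = {OI:.4f}, CE = T * OI = {T * OI:.4f}")   # Eq. (23)
```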
We started by optimizing the transmission at \(\lambda=800\,nm\) of the coupled mode, varying parameters such as the height of the waveguide, the grating coupler's height, etc., and fixing the refractive indices of the materials (Table 2). We found that the maximum transmission (\(T_{800nm}=0.8135\)) is obtained when the parameters take the values reported in Table 2.
After that, we simulated the behaviour of our GC apodizing the duty cycle along the structure; the duty cycle has been defined as \(dc=l_{FIN}/u\), and the apodizing process ensures that the duty cycle is different for each nanofin of the GC, so we have \(dc_{i}=l_{FIN-i}/u\). This apodization is necessary in order to vary the coupling strength (\(\alpha\)) of the grating coupler along its whole length. The coupling strength of a grating coupler, as mentioned in [19], is the constant of the exponential decay of power along a grating coupler with uniform duty cycle (so-called periodic grating coupler):
\[P=P_{0}\cdot exp(-2\alpha z) \tag{25}\]
which, in the particular case of the apodized grating coupler, becomes a function of the position \(z\) (\(\alpha(z)\)) along the GC's length. As proposed in [22], we decided to use a linear apodization based
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \(u\) [\(\mathrm{\SIUnitSymbolMicro m}\)] & \(h_{WG}\) [\(\mathrm{\SIUnitSymbolMicro m}\)] & \(h_{GC}\) [\(\mathrm{\SIUnitSymbolMicro m}\)] & \(h_{FIN}\) [\(\mathrm{\SIUnitSymbolMicro m}\)] & \(n_{Si}\) & \(h_{BOX}\) [\(\mathrm{\SIUnitSymbolMicro m}\)] & \(n_{BOX}\) & \(n_{SUB}\) & \(n_{SPACER}\) \\ \hline
0.280 & 0.160 & 0.210 & 0.155 & 3.5 & 1.2 & 1.5 & 3.5 & 1.5 \\ \end{tabular}
\end{table}
Table 2: Optimized parameter values for the AGC simulations, maximizing the transmission (\(T_{800nm}=0.81\)) of the outgoing mode
on the linear variation of the grating coupler's duty cycle along its entire length; the formula for the apodization can be written as:
\[dc(z)=dc_{0}-kz \tag{26}\]
where \(dc_{0}\) is the initial duty cycle of the first nanofin, \(k\) is the linear apodization factor and \(z\) is the distance of each nanofin from the starting point of the grating. We simulated the behaviour of an AGC made of 250 periods, hence \(70\,\upmu m\) long, with an initial duty cycle \(dc_{0}=0.975\), a linear apodization factor \(k=0.6\), and all the other parameters following Table 2.
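The linear apodization of Eq. (26) translates directly into the fin lengths of the grating. The short sketch below lists the duty cycle and fin length of a few periods for the parameters quoted above; note that the text does not state the unit convention for the apodization factor \(k\), so here we assume, purely for illustration, that \(z\) is normalized to the total grating length.

```python
import numpy as np

def apodized_duty_cycles(n_periods=250, dc0=0.975, k_apo=0.6, u=0.280):
    """Duty cycle dc_i and fin length l_i (in micrometers) of each period, Eq. (26).
    ASSUMPTION: z is normalized to the grating length, so dc runs from dc0 to dc0 - k_apo."""
    z_norm = np.arange(n_periods) / (n_periods - 1)
    dc = dc0 - k_apo * z_norm
    fin_length = dc * u
    return dc, fin_length

if __name__ == "__main__":
    dc, l_fin = apodized_duty_cycles()
    for i in (0, 50, 100, 150, 200, 249):
        print(f"period {i:3d}: dc = {dc[i]:.3f}, l_fin = {l_fin[i]*1e3:.1f} nm")
```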
As shown in Figure 9, the coupled mode has a quasi-Gaussian shape and, calculating the overlap integral with a Gaussian beam (having a beam waist \(w_{0}=70\,\upmu m\)) using Eq. 24, we obtain a value of \(OI=0.8783\). Combining the overlap integral and the transmission it is possible to calculate the coupling efficiency of our proposed AGC; substituting the extrapolated values into Eq. 23 we obtain a coupling efficiency of \(CE=0.8135\cdot 0.8783=0.7145\).
Furthermore, we evaluated the directionality of the coupled mode, which is not perfectly vertical (as previously discussed). Figure 10 shows that the propagation wavevector of the electric field is inclined by 12 degrees from the direction orthogonal to the grating coupler, and the mode carries both a linear phase gradient (Figure 11) and a slight curvature, which must be corrected using a metalens acting as phase corrector.
From the results described above, the proposed solution using apodized grating couplers offers an improvement in terms of both coupling efficiency and size of the outgoing mode with respect to the binary blazed grating approach. In fact, as shown above, we reached a coupling efficiency higher than 0.7 and a quasi-Gaussian outgoing mode with a beam waist of \(70\,\upmu m\); on
Figure 8: Transmission values at different wavelengths of the coupled mode using a periodic grating coupler with the parameters reported in Table 2.
the contrary, a similar CE can be reached with the binary blazed grating approach, but the outgoing beam waist is only a few micrometers. The main limitation is the directionality of the beam, but this problem is overcome using a phase corrector.
Figure 10: Far-Field intensity of the coupled mode using an AGC made of 250 periods, so 70\(\upmu m\) length, with a initial duty-cycle of \(dc_{0}=0.975\) and a linear apodization factor \(k=0.6\) and all the others parameter following Table 2.
Figure 9: Simulation of the coupled mode using an AGC made of 250 periods, so 70\(\upmu m\) length, with a initial duty-cycle of \(dc_{0}=0.975\) and a linear apodization factor \(k=0.6\) and all the others parameter following Table 2.
### Metalenses for phase correction and polarization conversion
As described in the previous section, the mode coupled using the apodized grating coupler is tilted from the orthogonal direction by a certain amount (i.e., 12 degrees, Figure 10), so it carries a linear phase gradient during propagation combined with its natural divergence (Figure 11). This situation requires a phase-correction element in order to redirect the outgoing beam along the orthogonal direction and remove the residual divergence. It is possible to integrate these functionalities using diffractive optics, which work well but can only correct the phase, so we turned to a metasurface approach, which permits encoding more functionalities than the phase correction alone. In particular, we decided to implement the phase correction using a metalens, whose peculiarity is the ability to encode different functionalities depending on the input polarization state [23]-[27]. One of the features we would like to encode is the possibility to convert the linearly polarized state of the beam coupled by the AGC into a circularly polarized beam, in order to reduce the number of required receivers by a factor of 2. To this aim we designed metalenses able to correct the phase of the coupled mode and, at the same time, to generate a circular polarization state with different spin depending on the input linearly polarized state. In detail, the metasurface proposed in this work is a dielectric metalens (ML), made of a 2D array of birefringent metaunits that exploit the dynamic phase. Our dual-functional metalens (DFML) is constituted of subwavelength metaunits (MUs), the so-called metaatoms (MAs), arranged over a square lattice and represented by silicon nanopillars on a substrate, surrounded by air. Each pillar belongs to a subset of nanostructures with different cross sections but the same orientation and height, and acts as a quarter-wave plate in order to maximize the polarization conversion and, therefore, the optical efficiency.
For the benefit of the reader, we provide in the following the theory underlying the working principle of metaunits. In particular, the Jones matrix J for the metaatom at the coordinates (x,
Figure 11: With a blue line it is represented the simulation of the coupled mode’s phase at \(y=8\upmu m\) using an AGC made of 250 periods, so \(70\upmu m\) length, with a initial duty-cycle of \(dc_{0}=0.975\) and a linear apodization factor \(k=0.6\) and all the others parameter following Table 2. With the red line it is represented the correction performed by the metalens.
y) is:
\[J=e^{(i\frac{\delta_{x}+\delta_{y}}{2})}cos(\frac{\Delta}{2})\begin{bmatrix}1&0\\ 0&1\end{bmatrix}-ie^{(i\frac{\delta_{x}+\delta_{y}}{2})}sin(\frac{\Delta}{2}) \begin{bmatrix}cos(2\theta)&sin(2\theta)\\ sin(2\theta)&-cos(2\theta)\end{bmatrix} \tag{27}\]
where \(\theta\) is the local orientation of the metaatom fast axis and \(\Delta=\delta_{y}-\delta_{x}\) is the phase retardation between the two axes of the metaunit (the spatial dependence has been omitted to simplify the notation). Imposing the conditions \(\Delta=\pi/2\) and \(\theta=\pi/4\), and recalling the Jones formalism of the polarization states \((|H\rangle=\begin{bmatrix}1\\ 0\end{bmatrix},|V\rangle=\begin{bmatrix}0\\ 1\end{bmatrix},|L\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}1\\ i\end{bmatrix}\) and \(|R\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}1\\ -i\end{bmatrix})\), one obtains:
\[J|H\rangle=e^{(i\frac{\delta_{x}+\delta_{y}}{2})}|R\rangle \tag{28}\]
\[J|V\rangle=e^{(i\frac{\delta_{x}+\delta_{y}}{2}+\frac{\pi}{2})}|L\rangle \tag{29}\]
so, by properly designing each metaatom, it is possible both to correct the phase using the dynamic phase contribution (\(e^{(i\frac{\delta_{x}+\delta_{y}}{2})}\)) and to convert the polarization state from linear to circular (Figure 12).
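A quick numerical check of Eqs. (27)-(29) can be done with the Jones formalism; the sketch below builds the metaatom Jones matrix for \(\Delta=\pi/2\) and \(\theta=\pi/4\) and verifies that the two linear input polarizations are mapped onto the two circular states (up to the common dynamic phase). The value chosen for \(\delta_{x}\) is arbitrary and only for illustration.

```python
import numpy as np

def jones_metaatom(delta_x, delta_y, theta):
    """Jones matrix of a birefringent metaatom, Eq. (27)."""
    delta = delta_y - delta_x
    common = np.exp(1j * (delta_x + delta_y) / 2.0)
    rot = np.array([[np.cos(2 * theta), np.sin(2 * theta)],
                    [np.sin(2 * theta), -np.cos(2 * theta)]])
    return common * (np.cos(delta / 2.0) * np.eye(2) - 1j * np.sin(delta / 2.0) * rot)

if __name__ == "__main__":
    H = np.array([1.0, 0.0])
    V = np.array([0.0, 1.0])
    R_pol = np.array([1.0, -1j]) / np.sqrt(2.0)   # |R>
    L_pol = np.array([1.0, 1j]) / np.sqrt(2.0)    # |L>

    # quarter-wave condition Delta = pi/2, fast axis at theta = pi/4
    J = jones_metaatom(delta_x=0.3, delta_y=0.3 + np.pi / 2.0, theta=np.pi / 4.0)

    # overlap moduli with the expected circular states (both should be 1, cf. Eqs. 28-29)
    print(abs(np.vdot(R_pol, J @ H)))   # |<R| J |H>|
    print(abs(np.vdot(L_pol, J @ V)))   # |<L| J |V>|
```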
We set up a custom-made Finite-Element Method (FEM) simulation in the wavelength domain (using COMSOL Multiphysics(r)) to find the best set of metaatoms respecting the ML requirements described above. Each subunit has been defined as a silicon nanopillar (\(n_{Si}=3.5\)) surrounded by air (\(n_{Air}=1\)) placed on top of a substrate (\(n_{Sub}=1.5\)). All the materials were considered as non-absorptive (n=Re(n), Im(n)=0). We then imposed some conditions to properly simulate the nanostructures: Periodic Port conditions were set in the substrate at a distance equal to from the nanopillar and at a distance greater than over the pillar, both to collect the scattering parameters of the structure and simultaneously ensure the far-field regime; Perfectly Matched Layer (PML) conditions have been imposed outside the ports at a distance greater than to visualize the transmitted and reflected fields, and to absorb the field over a certain distance to avoid unwanted multiple reflections. Finally, Periodic Boundary Conditions (PBC) were set (along the xz and yz planes) to permit the correct simulation of the interaction between the various metaunits of the metalens.
Simulations were performed fixing the period of the metaatoms matrix at \(400nm\) along x-axis and y-axis, and sweeping the sizes of the metaunit cross-section (\(L_{x}\), \(L_{y}\)), considering fabrication constraints and the subwavelength regime, at the working wavelength of 800 nm. Moreover, due to fabrication limitations, we imposed a fixed height H of 500 nm. Thus, for a fixed phase
Figure 12: Schematic representation of the polarization conversion paradigm described by Eq.29 (a) and Eq. 28 (b)
delay \(\delta_{x}\) along the fast axis of the pillars, we selected the cross sections satisfying the condition \(\Delta=\pi/2\). In particular, we selected metaatoms having a maximum phase difference of 0.05 rad from the QWP condition, i.e. \(\Delta_{simulated}-\Delta=0.03\,rad\). At the same time, to ensure a homogeneous polarization conversion, we imposed strict conditions on the transmissions for TE and TM polarizations. More precisely, we fixed \(|T_{x,i}-T_{y,i}|<0.05\), where \(T_{x,i}\) and \(T_{y,i}\) are the transmittances of the i-th metaatom for TE and TM polarizations, respectively. Concurrently, to guarantee a homogeneous transmittance over the whole metalens, we imposed a maximum difference of 0.05 in transmittance among the metaatoms, \(|T_{avg,i}-T_{avg,j}|<0.05\), \(i,j=0,1,...,N\), with \(T_{avg,k}=\frac{T_{x,k}+T_{y,k}}{2}\).
As a consequence, the previous requirements significantly limit the choice of possible cross sections for the given thickness and shape. Therefore, in order to increase the degrees of freedom to find an adequate set of nanostructures covering the whole \(2\pi\) range, different shapes have been considered, such as rectangular and elliptical. A meta-library of 22 different nanopillars has been extrapolated from the simulations, which permits a well-distributed 22-level discretization of the phase over the range 0-2\(\pi\) (Figure 13). Figure 13a shows that the range 0-2\(\pi\) has been covered very well, and depicts two different types of behaviour among the pillars, with both \(\delta_{x}>\delta_{y}\) and the opposite.
These configurations always respect the QWP restriction but differ in terms of the pillar's rotation: when a pillar shows \(\delta_{x}<\delta_{y}\) we must rotate it by \(\theta=\pi/4\); on the other hand, when \(\delta_{x}>\delta_{y}\) the nanopillar must be rotated by an angle \(\theta=-\pi/4\) (note: all these configurations can be derived from Eq. 27). Moreover, Figure 13b depicts the transmission of all the selected configurations under different input polarization states. It is worth noting that our metasurface reaches an average transmission higher than 0.965.
Finally, we implemented in our metalenses a phase correction in order both to collimate and to redirect the impinging beam coming from the GC. As depicted in Figure 11 with a red line, the curvature of the field is absent, so the metacorrector works as expected. In conclusion, the proposed metasurface is able to correct both the linear phase gradient and the curvature of the AGC's coupled mode, acting on the dynamic phase variation along the whole nanostructure. At the same time, it is able to convert linearly polarized light into circularly polarized light whose spin depends on the direction of the impinging linear polarization. So, by accurately designing the metalens and choosing the correct metaatoms along the entire structure, depending on the required phase to be corrected, we are able to obtain a coupled mode that is both collimated and directed along the direction orthogonal to the sail, with an efficiency higher than 0.965.
The feasibility of coupling the light coming from a waveguide into a quasi-Gaussian mode propagating in free space was discussed in the previous sections, reaching the ability both to correct the phase distortions and to convert the polarization state of the coupled light by means of metasurfaces. We described a configuration able to generate a circularly polarized Gaussian beam with a \(70\,\upmu m\) waist and a coupling efficiency higher than 0.7.
We note that this design paradigm can also be used to obtain larger beam waists. As a matter of fact, by properly designing the size and the divergence of the apodized grating coupler and the distance between the AGC and the metacorrector, it is possible to adjust the waist of the Gaussian beam while keeping the same coupling efficiency.
Figure 13: Library of different silicon nanopillars working at \(800\,nm\), providing the recipe to build up a dual-functional metalens. (a) Phase delays for TM (x-delay) and TE (y-delay) polarizations, (b) Transmittance of each nanopillar under TM (\(T_{x}\)) and TE (\(T_{y}\)) polarization compared with the average transmittance (\(T_{avg}\)) calculated among all the transmittance values. (c) Different types of pillars composing the meta-library: rectangular and elliptical pillars (R-E). Below, the complete list of the metaatom library showing the type of pillars and the corresponding size.
Identification of a sail in the fleet of sails
In the vision of the Starshot Project, it is considered that not one but a fleet of \(N_{sail}\) sails will be launched with a suitable periodicity, possibly addressing the measurement of different observables. In order to isolate the individual messages, a method for the identification of the individual sailcrafts should be designed. For this purpose, we propose to exploit the actual Doppler shift determined by the acceleration phase, where slight differences in the final speed can cause large frequency shifts with respect to the nominal Doppler shift.
The \(N_{sail}\) sails are accelerated by means of a photon engine on the Earth, providing, on average, the same acceleration. However, some random differences may occur, and the sails' actual speeds might differ, giving rise to different frequency (Doppler) shifts. Our aim is to exploit these frequency shifts to identify the \(N_{sail}\) sails. It is worth noting that in this scenario the _relativistic_ Doppler shift should be considered. Therefore, the Doppler shift \(f_{d}\) for a sail with speed \(v\) is
\[f_{d}=\left(\frac{\sqrt{1-v/c}}{\sqrt{1+v/c}}-1\right)f_{0} \tag{30}\]
with \(f_{0}\) the received carrier frequency. Then \(f_{d}\) is no longer linearly related to \(v\). However, if the differences between sail speeds are small compared to \(c\), relation (30) can be linearized.
We assume that the \(N_{sail}\) sail speeds are modeled by \(N_{sail}\) independent Gaussian random variables with variance \(\sigma_{v}^{2}\). In a linearized model the standard deviation of the Doppler shift is \(\sigma_{f}=(\sigma_{v}/c)f_{0}\). The condition for the separability of the \(N_{sail}\) sails is that their frequency shifts are separated by more than the bandwidth \(B\) of the optical filter used at the receiver. Considering the same carrier frequency \(f_{0}\) for all the \(N_{sail}\) sails, if \(f_{i}\) is the actual shifted frequency of sail \(i\), the relationship \(|f_{i}-f_{j}|>B\), \(\forall i\neq j\), must hold in order to distinguish \(i\) from \(j\).
The results shown in Fig. 14 allow us to evaluate the likelihood of non-overlapping transmissions from distinct sails. Specifically, Fig. 14 shows the probability that \(\min\{|f_{i}-f_{j}|,\forall i,j=1,\ldots,N_{sail};i\neq j\}\) is smaller than \(B\).
Fig. 15 shows the maximum \(B/\sigma_{f}\) needed to guarantee a probability \(\epsilon\) of not distinguishing the sails, as a function of the number of sails \(N_{sail}\).
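The separability probabilities of Figs. 14-15 can be estimated with a direct Monte Carlo over the linearized Doppler model; the sketch below computes the probability that the minimum pairwise frequency spacing exceeds the filter bandwidth \(B\). The numerical values of \(B/\sigma_{f}\) and \(N_{sail}\) used here are placeholders, not the ones used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_separable(n_sail, B_over_sigma, n_trials=20000):
    """Monte Carlo estimate of P[min_{i != j} |f_i - f_j| > B], with the Doppler
    shifts modelled as i.i.d. Gaussians of standard deviation sigma_f (linearized model)."""
    ok = 0
    for _ in range(n_trials):
        f = np.sort(rng.standard_normal(n_sail))     # shifts in units of sigma_f
        if np.min(np.diff(f)) > B_over_sigma:
            ok += 1
    return ok / n_trials

if __name__ == "__main__":
    for n_sail in (5, 10, 20, 50):
        for ratio in (1e-3, 1e-2, 1e-1):
            print(f"N_sail={n_sail:3d}  B/sigma_f={ratio:.0e}  "
                  f"P[separable]~{p_separable(n_sail, ratio):.3f}")
```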
Actually, different launch scenarios could be envisaged: instead of launching the sails all together, they could be launched in clusters at different times. However, to reach \(\alpha\)-Centauri at the same time, the sails launched later must be accelerated to a higher cruise speed. As a consequence, the Doppler shifts are modelled by Gaussian distributions with a different average value for each cluster. Fig. 16 shows the probability of distinguishing the sails, with two clusters launched one year apart.
Figure 16: \(P[\min|f_{i}-f_{j}|>B]\) considering \(N_{sail}=100\) sails, with clusters launched at 1 year distance and \(B=1\) MHz.
On the optimization of the digital communication system and the channel coding used by the ToL transmitter
The optical transmission system implemented by the ToL has the objective of sending to the receiving station a payload of information bits, including the images and physical measurements obtained at destination, with a total data volume of the order of Mbits [4]. From the analysis of the optical link budget, a very strong attenuation due to optical diffraction characterizes the communication channel. This fact imposes a very careful design of the digital communication scheme that the ToL will realise. It also requires designing the most suitable configuration of a number of parameters, and in particular the best error correction code, fulfilling the constraints imposed by the technology and taking into account potential changes during the operation time.
On the basis of extensive previous studies devoted to this subject [4], we consider here that the modulation to be adopted is pulse position modulation (PPM) with symbols of \(M\) slots and slot time \(T_{s}\). The slot time \(T_{s}\) depends on the modulation bandwidth capabilities of the laser at the transmitter, since the optical pulse conveying the position information must be confined within the duration \(T_{s}\). A sequence of PPM symbols is organized into a frame of duration \(T_{F}\) for coding purposes. Note that the use of channel coding techniques is mandatory in the photon-starved environment considered.
The optical channel shown in Fig. 17 is a Poisson channel: thus, the received photon arrival is modelled by a Poisson random process, with an average rate of \(n_{s}\) photons per second. The useful signal is corrupted by a background noise, with an average number of photons per second denoted by \(n_{b}\).
Channel coding is performed on a frame basis. With reference to Fig. 17, during each PPM frame period \(T_{F}\) a sequence \(\mathbf{u}\) of \(b_{I}\) input bits is coded by the error correction code (ECC) into a word \(\mathbf{x}\) of length \(b_{F}\), where the ratio \(r=b_{I}/b_{F}\) is the _code rate_. Possibly, other \(N_{extra}\) bits are added as CRC or termination bits and transmitted at the end of each frame; typically, the frame period \(T_{F}\) also includes a guard time, described by a guard-time factor \(\alpha_{gt}\) (e.g., if the guard time is \(25\%\), \(\alpha_{gt}=1.25\)). Then the useful data bit-rate \(B_{r}\) is given by
\[B_{r}=\frac{b_{I}}{T_{F}}=\frac{b_{F}\,r-N_{extra}}{MT_{s}\,b_{F}/\log_{2}(M)\, \alpha_{gt}} \tag{31}\]
For the channel coding, the main system parameter is the code rate \(r\).
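Eq. (31) links the slot time, PPM order, code rate and frame overheads to the useful bit-rate; the small helper below evaluates it. The parameter values in the example (slot time, frame length) are placeholders chosen only to exercise the formula.

```python
from math import log2

def data_bit_rate(M, T_s, b_F, r, N_extra=32, alpha_gt=1.25):
    """Useful data bit-rate of Eq. (31):
    B_r = (b_F * r - N_extra) / (M * T_s * b_F / log2(M) * alpha_gt)."""
    n_symbols = b_F / log2(M)                 # PPM symbols per frame
    T_F = M * T_s * n_symbols * alpha_gt      # frame duration including guard time
    return (b_F * r - N_extra) / T_F

if __name__ == "__main__":
    # Placeholder example: 1024-PPM, 1 ns slots, 15120-bit coded frame, rate-1/3 code.
    print(f"B_r = {data_bit_rate(M=1024, T_s=1e-9, b_F=15120, r=1/3):.0f} bit/s")
```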
In this coded optical communication system, the target requirement is represented by the desired bit error rate (BER). In fact, it can be assumed that the objective image quality can be quantified by a maximum tolerable BER. On the other hand, the BER in the PPM Poisson
Figure 17: PPM system model with channel coding.
channel depends on the mean number of useful signal photons per PPM slot \(N_{s}=n_{s}\,T_{s}\) and on the mean number of noise photons per PPM slot \(N_{b}=n_{b}\,T_{s}\). From the plots of the coded BER, one can infer the minimum number of signal photons \(N_{s,min}\), for the required BER.
Note that the useful photon rate \(n_{s}\), as described in detail in Section 1, depends on several other system parameters, such as: the average transmitted power \(P_{tx}\), the transmitting and receiving aperture diameters \(D_{tx}\) and \(D_{rx}\), the overall link losses, including pointing, path loss, etc.
Given all the system parameters and the objective BER, the best channel coding strategy is the one that minimizes the collected photons \(N_{s,min}\) to guarantee that BER, and therefore minimizes the time needed to download the payload image.
We recall that the useful signal photon rate (photons per second) \(n_{s}\) can be determined as described in Section 1, while the rate of background noise photons \(n_{b}\) is given by
\[n_{b}=\eta_{D}\left[L_{s}(\lambda)\,A_{rx}\Delta\lambda+P_{ext}\right]\,\frac{ \lambda}{hc}+n_{DC} \tag{32}\]
where \(n_{DC}\) represents the detector dark count rate, \(\eta_{D}\) the detector efficiency, \(P_{ext}\) is the residual received laser power without transmission (related to the laser extinction ratio), \(\Delta\lambda\) is the spectral width of the receiver filter, while \(A_{rx}\) is the effective receiving aperture area, \(h\) the Planck constant and \(c\) the speed of light. Finally, \(L_{s}(\lambda)\) is the total background or stray radiation at the receiver, at the signal wavelength \(\lambda\), and not related to the transmitted signal.
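For completeness, the background photon rate of Eq. (32) can be evaluated as follows; all the numerical values in the example are placeholders, since the actual stray-light level, filter width and detector parameters depend on the receiver design.

```python
h = 6.62607015e-34   # Planck constant [J s]
c = 2.99792458e8     # speed of light [m/s]

def background_rate(L_s, A_rx, d_lambda, P_ext, lam, eta_D, n_DC):
    """Background photon rate of Eq. (32):
    n_b = eta_D * (L_s * A_rx * d_lambda + P_ext) * lam / (h c) + n_DC."""
    return eta_D * (L_s * A_rx * d_lambda + P_ext) * lam / (h * c) + n_DC

if __name__ == "__main__":
    nb = background_rate(L_s=1e-12,      # stray background spectral density (placeholder)
                         A_rx=1e6,       # receiving aperture area [m^2]
                         d_lambda=0.1,   # receiver filter spectral width (placeholder)
                         P_ext=1e-15,    # residual laser power without transmission [W] (placeholder)
                         lam=800e-9,     # signal wavelength [m]
                         eta_D=0.8,      # detector efficiency (placeholder)
                         n_DC=10.0)      # dark count rate [1/s] (placeholder)
    print(f"n_b ~ {nb:.3e} photons/s")
```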
### Channel coding techniques for the Poisson channel - SCPPM
We first present the _serially-concatenated PPM_ (SCPPM) scheme which refers to a precise combination of modulation and coding technique mostly used in a deep-space optical link scenario [30]. The name SCPPM derives from the combination of a PPM modulator with other blocks serially concatenated to it, namely a convolutional code as the error-control code, an accumulator and an interleaver. Fig. 18 shows the block diagram of the SCPPM encoder [30].
The data vector **u** first enters a cyclic redundancy check (CRC) block, which appends 32 binary digits. The CRC attachment can be used at the receiver to assess the correct decoding of the codeword.
After that, **u** is convolutionally coded. We consider this encoder, also denoted as _outer code_, as a constraint-length-3 convolutional code with generator polynomial **g**
Figure 18: Conceptual scheme of the SCPPM encoder.
notation [30], or
\[g^{(1)}(D)=1+D^{2}\]
\[g^{(2)}(D)=1+D+D^{2}\]
\[g^{(3)}(D)=1+D+D^{2} \tag{33}\]
generating a 1/3 code rate. It is worth noting that this basic encoder can be punctured in order to achieve also rate 1/2 or 2/3.
Once the codeword \(\mathbf{x}=\{x_{0},x_{1},...,x_{k-1}\}\) has been generated, it must be interleaved. The interleaved (permuted) bits are then processed by the accumulator PPM (APPM) block, also named the _inner code_, composed of an accumulator and a memoryless PPM modulator. The accumulator can be described as a rate-1 code with transfer function \(1/(1+D)\). Finally, its output \(\mathbf{b}\) passes through the PPM modulator. At this point, the codeword is divided into \(S=\hat{k}/m\) symbols, where \(m=\log_{2}(M)\) and \(M\) is the PPM order. The output of the PPM block is a slotted symbol sequence \(\mathbf{c}=\{c_{0}^{q_{0}},c_{1}^{q_{1}},...,c_{S-1}^{q_{S-1}}\}\), where each \(\mathbf{c}_{i}^{q_{i}}\) represents a PPM symbol with a laser pulse in the \(q_{i}\) position. The vector \(\mathbf{q}=\{q_{0},q_{1},...,q_{S-1}\}\) defines the integer position values in the range between 0 and \(M-1\).
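To make the encoder chain of Fig. 18 concrete, the following sketch strings together the rate-1/3 convolutional code of Eq. (33), a toy interleaver, the \(1/(1+D)\) accumulator and the PPM mapper. The CRC block, code termination and the actual permutation of [30] are omitted, and all function names are ours; this is only an illustration of the data flow, not a flight implementation.

```python
import numpy as np

def conv_encode_rate13(u):
    """Rate-1/3, constraint-length-3 convolutional code with generators
    g1 = 1 + D^2, g2 = g3 = 1 + D + D^2 (Eq. 33); termination omitted."""
    d1 = d2 = 0                       # shift-register state: u[t-1], u[t-2]
    out = []
    for bit in u:
        out += [bit ^ d2, bit ^ d1 ^ d2, bit ^ d1 ^ d2]
        d1, d2 = bit, d1
    return out

def accumulate(bits):
    """Rate-1 accumulator with transfer function 1/(1+D): running XOR of the input."""
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ppm_map(bits, M):
    """Group bits into log2(M)-bit words and map each word to a PPM pulse position."""
    m = int(np.log2(M))
    assert len(bits) % m == 0
    return [int("".join(map(str, bits[i:i + m])), 2) for i in range(0, len(bits), m)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u = rng.integers(0, 2, size=20).tolist()     # information bits (CRC omitted)
    x = conv_encode_rate13(u)                    # 60 coded bits
    perm = rng.permutation(len(x))               # toy interleaver (not the one of [30])
    b = accumulate([x[i] for i in perm])         # APPM inner-code output
    q = ppm_map(b, M=64)                         # pulse positions, one per PPM symbol
    print("pulse positions:", q)
```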
The decoding is implemented as an iterative procedure very similar to the BCJR decoding [30]. Considering the modulation and coding technique as a single large encoder and describing the respective trellis diagrams, it is possible to use a turbo-iterative demodulator and decoder. This approach, proposed in [30], originates from turbo codes applied to the PPM channel and from iterative decoding, showing near-capacity performance of this decoding procedure.
Fig. 19 shows the block diagram of the SCPPM decoder architecture.
As can be seen, the demodulator system consists of two soft-input soft-output (SISO) blocks that mutually exchange soft information. Conceptually, they are the same and they are named _inner_ and _outer_ to differentiate the trellis descriptions of the inner and outer codes, respectively. An interleaver and a de-interleaver connect the flow of information between the two SISO blocks, and a symbol de-mapper converts the received photons into soft information. The soft information exchanged is the log-likelihood ratio (LLR) of the two probabilities for the generic binary symbol \(s_{i}\), defined as
\[\bar{\pi}(s_{i})=log\frac{p_{0}(s_{i})}{p_{1}(s_{i})}, \tag{34}\]
where \(p_{0}(s_{i})\) and \(p_{1}(s_{i})\) represent the probabilities that \(s_{i}\) is 0 or 1, respectively.
In the iterative decoding of serially concatenated codes, the extrinsic information from the inner SISO is fed to the outer SISO as a priori information. For the improvement in the error
Figure 19: Conceptual scheme of the SCPPM decoder.
rate, iteration after iteration, the output mutual information from the outer SISO must be greater than the input mutual information to the inner SISO. In [31], it has been discussed that the use of an accumulator just before the PPM modulation (i.e., APPM) improves the performance, with respect to a pure PPM.
In terms of BER, Fig. 20 shows the error rate as a function of \(N_{s}\) for two values of \(N_{b}\).
In the previous section it was shown how the number and the positions of the emitters (intended as the leaves of the Tree of Light) affect the received photon rate. Tab. 3 shows some results with an average transmitted power of 1 W.
Merging the results of Fig. 20 and Tab. 3, some considerations can be drawn. Assume that our objective is a \(BER\approx 10^{-2}\) with SCPPM with code rate 1/3 and background noise \(N_{b}=0.01\). The minimum \(N_{s}\) required is \(\approx 1.4\). If we consider 10000 sources, the rate of received photons is 0.02 Hz, and, consequently, we need to wait 70 s in order to collect 1.4 useful photons. In this scenario, if the payload of useful bits to be transmitted is approximately 5 kbit, using 1024-PPM and therefore 10 bits per PPM symbol, with a guard factor \(\alpha_{gt}=2.2\) as in [4], the number of required PPM symbols is 1512. Then the time needed to collect the required number of useful photons to guarantee the objective BER is 231000 s. It is worth noting that this strongly depends on the background noise, since \(N_{b}\) strongly affects the BER, as shown in Fig. 20. This gets worse if we decrease the number of leaves of the ToL: the lower the number of leaves, the longer the time to collect the required photons (see Tab. 3).
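The back-of-the-envelope estimate above can be packaged into a small helper, which also reproduces the trend of Fig. 21 when swept over the photon rate; the numbers below are the ones of the worked example (10000 leaves, \(N_{s,min}\approx 1.4\), 1024-PPM, rate 1/3, guard factor 2.2), and the function name is ours.

```python
from math import log2

def download_time(payload_bits, n_s, N_s_min, M=1024, code_rate=1/3, alpha_gt=2.2):
    """Rough payload delivery time: PPM symbols needed for the coded payload, times
    the time to collect N_s_min signal photons per symbol at photon rate n_s,
    times the guard-time factor."""
    bits_per_symbol = log2(M) * code_rate      # useful bits carried per PPM symbol
    n_symbols = payload_bits / bits_per_symbol
    t_per_symbol = N_s_min / n_s               # seconds to collect N_s_min photons
    return n_symbols * t_per_symbol * alpha_gt

if __name__ == "__main__":
    t = download_time(payload_bits=5000, n_s=0.02, N_s_min=1.4)
    print(f"~{t:.0f} s (~{t/86400:.1f} days) for a 5 kbit payload with 10000 leaves")
```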
Fig. 21 shows the number of months needed to download an image of 1 Mbit as a function of the desired BER, for 2500 or 10000 leaves of the ToL. It has been considered \(N_{b}=0.01\).
Figure 20: SCPPM performance with code rate \(r=1/3\), M=1024, \(N_{s}\) in the range [0.2 1.6], and \(N_{b}\in\{0,0.01\}\).
For high background noise values, other channel coding strategies should be considered. One idea could be to further decrease the code rate \(r\) in order to improve the BER performance of Fig. 20. However, this decreases the net bit-rate (i.e., the useful bit rate exclusive of error correction), leading to a longer downloading time. Furthermore, the computational complexity and the decoding time of the SCPPM decoder increase as well.
To deal with this, LDPC codes and a symbol message passing (SMP) decoder for non-binary LDPC codes could be a promising solution.
### Channel coding techniques for the Poisson channel - LDPC
LDPC codes are binary linear block codes where the codeword \(\mathbf{c}\) of \(n\) bits is obtained from the input sequence \(\mathbf{u}\) of \(k\) bits as \(\mathbf{c}=\mathbf{u}\,\mathbf{G}\), where the \(k\times n\) matrix \(\mathbf{G}\) is the code generator matrix. The block diagram of the LDPC decoder is shown in Fig. 22, illustrating the flow of messages (likelihoods) passed between an LDPC decoder and an APPM SISO and within the LDPC decoder itself.
LDPC codes can be defined by a \(C\times V\) sparse parity-check matrix \(\mathbf{H}=[h_{i,j}]\), which can be represented by a Tanner graph with \(V\) variable nodes (VNs) corresponding to the codeword symbols and \(C\) check nodes (CNs) corresponding to parity checks. Each edge connecting the VN \(v\) to the CN \(c\) is labeled by a non-zero element \(h_{v,c}\) of \(\mathbf{H}\). The sets \(N(v)\) and \(N(c)\) denote the neighbors of VN \(v\) and CN \(c\), respectively. The degree \(d_{v}\) of a VN is given by the cardinality of \(N(v)\); similarly, the degree \(d_{c}\) of a CN is the cardinality of \(N(c)\). The code rate is given by \(r=1-d_{v}/d_{c}\). Similarly to the analysis of SCPPM, in which two decoders (inner and outer) exchange soft information, the LDPC decoding process can also be seen as message passing between variable nodes and check nodes, where the input of each step is taken as _a-priori information_ and is mapped into an output _extrinsic information_. In the literature [33, 34], it has been proposed to track the evolution of the mutual information between bits and their corresponding LLRs to predict the decoder behavior. This can be done by means of the _extrinsic information transfer_ (_EXIT_) _function_.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \# of sources & \(P_{rx}\) [W] & \(n_{s}\) [ph/s] & time to collect \(N_{s,min}\) \\ & & & for a coded BER\(\approx 10^{-2}\) [s] \\ \hline
1 & \(5.8\times 10^{-25}\) & \(2\times 10^{-6}\) & \(7\times 10^{5}\) \\
3 & \(1.7\times 10^{-24}\) & \(7\times 10^{-6}\) & \(2\times 10^{5}\) \\
9 & \(5.2\times 10^{-24}\) & \(2\times 10^{-5}\) & \(7\times 10^{4}\) \\
25 & \(1.4\times 10^{-23}\) & \(6\times 10^{-5}\) & \(2.33\times 10^{4}\) \\
64 & \(3.7\times 10^{-23}\) & \(1\times 10^{-4}\) & \(1.4\times 10^{4}\) \\
100 & \(5.8\times 10^{-23}\) & \(2\times 10^{-4}\) & \(7\times 10^{3}\) \\
625 & \(3.6\times 10^{-22}\) & \(1\times 10^{-3}\) & \(1.4\times 10^{3}\) \\
2500 & \(1.4\times 10^{-21}\) & \(6\times 10^{-3}\) & \(0.23\times 10^{3}\) \\
10000 & \(5.8\times 10^{-21}\) & \(0.02\) & \(70\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Link budget considerations.
The _EXIT_ function is a plot of the mutual information \(I(A;L_{e})\), where \(A\) represents the bits and \(L_{e}\) the extrinsic LLR, as a function of \(I(A;L)\), where \(L\) are the a-priori LLR.
The EXIT analysis might be used to design the proper LDPC [32]. We can determine an EXIT curve for the variable nodes of the LDPC decoder, denoted by _VND_, and another one for the check nodes, _CND_. For a regular LDPC code, _the VND curve should lie above its corresponding CND curve_. Figs. 23-26 show the VND (red) curves and CND (green) curves. In particular, the red curves represent \(I_{E,VND}\) as a function of \(I_{A,VND}\), whereas the green curves
Figure 21: Time to download an image of 1 Mbit, using SCPPM with code rate 1/3, \(M=1024\), \(N_{b}=0.01\). The number of leaves of the ToL is 2500 or 10000.
Figure 22: Receiver model showing message passing flow and points for monitoring mutual information.
show \(I_{A,CND}\) as a function of \(I_{E,CND}\). Consequently, in the cases in which the red curve is above the green curve the decoder should work properly. In the following results, we consider a PPM order \(M=1024\) in all cases, signal photons \(N_{s}=\{0.5,3.5\}\) and noise photons \(N_{b}=\{0.01,0.1\}\). Figs. 23-24 refer to \(d_{v}=2\), \(d_{c}=3\) (i.e. code rate \(r=1/3\)).
On the other hand if a higher code rate \(1/2\) is considered, by setting \(d_{v}=3\), \(d_{c}=6\), the performance is even worse, as shown by the EXIT curves of Figs. 25-26.
It is clear from the figures that the higher the background noise, the closer the red and green curves are, up to the point where they cross and iterative decoding no longer decreases the error rate. This is emphasized for low values of \(N_{s}\) (e.g., \(N_{s}=0.5\), \(N_{b}=0.1\) in Figs. 24a and 26a).
Figure 26: VND and CND EXIT curves curves with \(M=1024\), \(N_{b}=0.1\), \(d_{v}=3\), \(d_{c}=6\) (i.e., code rate \(\frac{1}{2}\)).
### Symbol message passing (SMP) non-binary LDPC for the Poisson channel
Non-binary LDPC codes are defined over a field \(\mathbb{F}_{q}=\{0,1,\alpha,..,\alpha^{q-2}\}\) with \(q=2^{m}\), \(m\) a positive integer and \(\alpha\) a primitive element. The finite field order \(q\) is matched to the PPM order yielding a one-to-one mapping between the symbol \(c_{i}\) of the codeword and PPM symbols \(\mathbf{x}=(x_{i,1},x_{i,2},...,x_{i,q})\).
Similarly to binary LDPC codes, non-binary LDPC codes can be defined by an \(M\times N\) sparse parity-check matrix \(\mathbf{H}=[h_{i,j}]\), where in this case the matrix elements \(h_{i,j}\) belong to \(\mathbb{F}_{q}\). \(\mathbf{H}\) is associated to a Tanner graph with \(N\) variable nodes (VNs) representing the codeword symbols and \(M\) check nodes (CNs) corresponding to parity checks. The sets \(N(v)\) and \(N(c)\) denote the neighbors of VN \(v\) and CN \(c\), respectively. The degree of a VN is given by the cardinality of \(N(v)\); in a similar way, the degree of a CN \(c\) is the cardinality of \(N(c)\). The VN (CN) edge-oriented degree distribution polynomial is \(\lambda(x)=\sum_{i}\lambda_{i}x^{i-1}\)\(\left(\rho(x)=\sum_{i}\rho_{i}x^{i-1}\right)\), where \(\lambda_{i}\) (\(\rho_{i}\)) is the fraction of edges incident to VNs (CNs) with degree \(i\). An unstructured irregular LDPC code ensemble \(\mathcal{C}_{\lambda,\rho}^{q,N}\) is the set of all \(q\)-ary LDPC codes with block length \(N\) and degree distribution polynomial pair \(\lambda(x)\) and \(\rho(x)\).
In the SMP decoding algorithm, each VN \(v\) computes the log-likelihood vector and sends the symbol which has the maximum L-value to all its neighbors, while the message from CN \(c\) to its neighbor VN \(v\) is obtained by determining the symbol that satisfies the parity-check equation. The Symbol Message Passing algorithm can be extended to include erasures, yielding symbol and erasure message passing (SEMP).
Some proposals have been presented in [35], where the authors designed optimized rate 1/2 irregular LDPC code ensembles for \(q\in\{4,8,16,32\}\), \(N_{b}\in\{0.002,0.1\}\) for both SMP and SEMP decoding, showing that SMP/SEMP decoding might be a good choice when low-complexity decoding is targeted.
However, in our scenario \(q\) should assume much higher values than those considered in [35], i.e., \(q=1024,2048\), etc. In these cases, an efficient way to evaluate the density evolution should be designed and only some upper and lower bounds can be obtained [36].
### On the effective channel coding for the sails
By considering the results described above, the channel coding for error protection on the Poisson PPM channel can then be based on the following conclusions:
1. The SCPPM coding scheme achieves a better performance with respect to Reed-Solomon codes, especially when the background noise is not negligible.
2. The EXIT function analysis can be used for the design of channel coding techniques, such as SCPPM and LDPC.
3. Symbol message passing (SMP) decoders for non-binary LDPC codes over the \(q\)-ary channel (with \(q\) matched to the Poisson PPM channel) can be a promising channel coding technique.
Conclusions
With this study, we have elaborated on the feasibility of the Starshot sail communication system from the physical, optical and communication theory point of views.
Our approach to the optical realization considers the system as a Tree-of-Light realized on the surface of the sail by means of beam splitters, modulators, waveguides and grating couplers. In this way the transmitter functionality would allow keeping the mass of the sail low and may benefit from the scalability of the replication of surface optics already demonstrated in current applications.
To tackle the extreme conditions of the channel, with the unprecedented link length and the corresponding optical losses, the study proposes a beam emission based on the interference of an array of grating couplers, which compresses the lobe to one microradian. In this way, a realistic laser source and the enhancement of the peak intensity due to the PPM protocol may allow an appreciable counting rate at the ground receiver.
We have pointed out the criteria to be used to design the array for the beam combining and for the beam steering with active OPA technology. The divergence of the source is controlled by the array layout; specifically, the width of the beam in the far field scales inversely with the width of the array. We also showed that the intensity peak depends on the effective area of the transmitter. Therefore, if properly phased in coherence, \(N\) sources give rise to an intensity peak equivalent to that of a single laser source with the same area, but they produce a much smaller beam spot at the receiver due to the power redistribution into side lobes. However, the shrinking of the main lobe is not a restrictive aspect as long as, after propagation, the beam turns out to be much larger than any feasible receiver, which is then approximately illuminated by a constant intensity, the on-axis value in case of perfect pointing.
In addition, only an active phased array can provide beam steering without mechanical actuators, thus allowing a reduction of the size, weight and power consumption of the sailcraft transmission system. The complexity of the electro-optical control system clearly rises with the number of emitters, so that implementations with larger-size GCs, arranged in fewer numbers, are convenient.
From these considerations, it follows that most of the effort has to be focused on the pointing capability, which comprises both a precise evaluation of the sailcraft-Earth relative position and a consistent steering of the far-field distribution of the transmitted light. The latter implies that the correct calibration of the phase modulators is also a fundamental and delicate task, as well as the evaluation and eventual correction of any deformations in the sail that could affect the OPA spatial distribution.
The synchronization of the PPM protocol has been proposed to be realized by means of the modulation of polarization of the light for particular PPM symbols. The implementation of this modulation may be realized with a separate array for each state or with a suitable active grating coupler technology.
The design of the individual grating coupler to enhance the efficiency of conversion of the guided mode carried by the waveguides to the free-space mode has reached the high level of 0.7, also addressing the polarization transformation into a circularly polarized state, well suited for preventing misalignment issues between transmitter and receiver.
As pointed out above, optical communications over space channels are commonly designed as pulse-position-modulated (PPM) laser links, and one of our goals was to show the relevance of channel coding techniques in getting as close as possible to the channel capacity. Indeed, without channel coding, the source bits are directly mapped into PPM symbols, and at the receiver side the maximum likelihood decoding, using the photodetector counts as observables, requires a maximum-count selection for each PPM frame. If the resulting error probability is not low enough, coding must be used to improve performance, with the source bits first encoded into channel words, and the words then sent to the PPM modulator. Indeed, the role of error correcting codes is to reconstruct a highly reliable replica of the scientific data transmitted by the sails.
However, when channel coding is considered, it is no longer obvious that the maximum likelihood frame decision approach is optimal, since it does not allow for channel symbol erasures. For low levels of background noise, it has been argued that matched Reed-Solomon (RS) coding appears as a natural encoding scheme, especially if only erasures can occur (since RS decoding has maximal capability for correcting erasures). On the other hand, when background is present in a non-negligible way, conversion of counts to channel symbols would involve errors as well as erasures. Indeed, it has been shown that RS performance on a noisy PPM channel typically remains far from capacity when conventional hard-decision decoding is used. As a consequence, we discussed the SCPPM scheme, which refers to a precise combination of modulation and coding techniques mostly used in deep-space optical link scenarios because its characteristics suit this kind of environment well. The name SCPPM derives from the combination of a PPM modulator with other blocks serially concatenated to it. Among them, a convolutional code has been considered as the error-control code. Promising results have been found even in the noisy scenario. Specifically, we found a BPP of \(2.381\,bit/ph\) for a background noise level of \(0.01\) by means of a rate-\(1/3\) convolutional code. However, SCPPM performance could still be improved, both in terms of BER results and of computational resource requirements, and this could be one of the main purposes of a possible phase two of the Starshot project.
In conclusion, we are proposing here the ideas and the methods that may be exploited to develop an experimental prototype for the sail transmission, coding and nano optics parts.
Finally, we would like to warmly thank the Breakthrough Initiatives and in particular Dr. S. Pete Worden and James Schalkwyk, for giving us the opportunity and the support to study this subject. A particular thanks goes to Professor Philip Mauskopf, of Arizona State University and Starshot Communications Research Director, for the great help in understanding the context and the spirit of creativity that he had shared with us. |
2305.01724 | Gröbner bases for bipartite determinantal ideals | Nakajima's graded quiver varieties naturally appear in the study of bases of
cluster algebras. One particular family of these varieties, namely the
bipartite determinantal varieties, can be defined for any bipartite quiver and
gives a vast generalization of classical determinantal varieties with broad
applications to algebra, geometry, combinatorics, and statistics. The ideals
that define bipartite determinantal varieties are called bipartite
determinantal ideals.
We provide an elementary method of proof showing that the natural generators
of a bipartite determinantal ideal form a Gr\"obner basis, using an
S-polynomial construction method that relies on the Leibniz formula for
determinants. This method is developed from an idea by Narasimhan and
Caniglia--Guccione--Guccione.
As applications, we study the connection between double determinantal ideals
(which are bipartite determinantal ideals of a quiver with two vertices) and
tensors, and provide an interpretation of these ideals within the context of
algebraic statistics. | Josua Illian, Li Li | 2023-05-02T18:50:10Z | http://arxiv.org/abs/2305.01724v3 | # Grobner bases for the double determinantal ideals
###### Abstract.
Nakajima's graded quiver varieties naturally appear in the study of bases of cluster algebras. One particular family of these varieties, which we have named _double determinantal varieties_, is a vast generalization of classical determinantal varieties and provides an important algebraic structure with broad applications to algebra, geometry, combinatorics, and statistics. The ideals that define double determinantal varieties are called double determinantal ideals.
We provide an elementary method of proof showing that the natural generators of the double determinantal ideals form a Grobner basis, using an S-polynomial construction method that relies on the Leibniz formula for determinants. This method is developed from an idea by Narasimhan [19] and Caniglia-Guccione-Guccione [6]. More generally, we introduce _bipartite determinantal ideals_ and prove similar Grobner basis results for these ideals.
As applications, we study the connection between double determinantal ideals and tensors, and provide an interpretation of these ideals within the context of algebraic statistics.
2010 Mathematics Subject Classification: Primary 14M12; Secondary 13P10, 13C40, 05E40.
###### Contents
* 1 Introduction
* 2 Grobner basis and \(S\)-pairs
* 2.1 Facts about Grobner basis
* 2.2 Monomial orders
* 2.3 Notations and a key example
* 2.4 Pseudominors
* 2.5 Conditions for \(L(\sigma,\tau)=L(\sigma^{\prime},\tau^{\prime})\), \(P_{\sigma,\cdot}=P_{\sigma^{\prime},\cdot}\) and \(P_{\cdot,\tau}=P_{\cdot,\tau^{\prime}}\)
* 2.6 \(P(M,N)\) and sufficiently small leading terms
* 2.7 Violation
* 3 Proof of Main Theorem
* 3.1 Distance
* 3.2 Single Determinantal Ideal
* 3.3 Double Determinantal Ideal
* 4 Generalized double determinantal ideals and Bipartite determinantal ideals
* 4.1 Generalized double determinantal ideals
* 4.2 Nakajima's affine graded quiver variety, and bipartite determinantal ideals
* 5 Applications
* 5.1 Tensors
* 5.2 Generalizations
* 5.3 Algebraic Statistics
* 5.4 Independence Models
* 5.5 Conditional Independence and Hidden Variables
## 1. Introduction
The main objective of the paper is to provide motivation for studying double determinantal ideals and bipartite determinantal ideals. In particular, we shall prove that the natural generators of such an ideal form a Grobner basis under a suitable term order. Additionally, we study the connection between these ideals and the 3-dimensional tensors as well as algebraic statistics.
We start by recalling the classical determinantal ideals. Let \(K\) be a field, let \(A\) be an \(m\times n\) matrix of independent variables \(x_{ij}\), and let \(u\) be a positive integer. The ideal \(I^{\det}_{m,n,u}\) of \(K[x_{ij}]\) generated by all \(u\times u\) minors of \(A\) is called a determinantal ideal, and \(K[x_{ij}]/I^{\det}_{m,n,u}\) is called a determinantal ring. Geometrically, a determinantal ideal defines the determinantal variety, which is the set of all \(m\times n\) matrices with entries in \(K\) and of rank at most \(u-1\). For example, for \(m=2,n=3,u=2\), the matrix \(A\) and the determinantal ideal \(I^{\det}_{m,n,u}\) are:
\[A=\begin{bmatrix}x_{11}&x_{12}&x_{13}\\ x_{21}&x_{22}&x_{23}\end{bmatrix},\quad I^{\det}_{m,n,u}=\left\langle\begin{vmatrix}x_{11}&x_{12}\\ x_{21}&x_{22}\end{vmatrix},\begin{vmatrix}x_{11}&x_{13}\\ x_{21}&x_{23}\end{vmatrix},\begin{vmatrix}x_{12}&x_{13}\\ x_{22}&x_{23}\end{vmatrix}\right\rangle\]
The corresponding determinantal variety is the set of all \(2\times 3\) matrices that are not of full rank; this 4-dimensional variety is irreducible and singular.
Determinantal ideals have long been a central topic, a test field, and a source of inspiration for various fields including commutative algebra, algebraic geometry, representation theory, and combinatorics. Through the use of Grobner basis theory and simplicial complexes, researchers have studied the minimal free resolution and syzygies (or in geometric terms of vector bundles) of determinantal ideals, as well as their Arithmetic Cohen-Macaulayness, multiplicities, powers and products, Hilbert series, \(a\)-invariant, \(F\)-regularity and \(F\)-rationality, local cohomology, etc. Various tools such as the straightening law and the Knuth-Robinson-Schensted (KRS) correspondence, non-intersecting paths from algebraic
combinatorics, and liaison theory from commutative algebra have been employed in these studies. Determinantal ideals are closely related to the study of Grassmannian and Schubert varieties, and have many interesting specializations, generalizations, and variations, including Segre varieties, linear determinantal varieties, rational normal scrolls, symmetric and skew-symmetric determinantal varieties, (classical, two-sided, or mixed) ladder determinantal varieties, etc. For more information, readers may refer to [3, 5, 11, 15, 16, 21] and the references therein.
The inspiration for this paper arose from the second author's study of triangular bases of quantum cluster algebras via Nakajima quiver varieties [12]. Let \((w_{1}^{\prime},w_{2}),(v_{1},v_{2})\in\mathbb{Z}_{\geq 0}^{2}\), fix \(\mathbb{C}\)-vector spaces \(W_{1}^{\prime},W_{2}\) with \(\dim(W_{1}^{\prime})=w_{1}^{\prime}\), \(\dim(W_{2})=w_{2}\). Let \(y_{1},\ldots,y_{r}:W_{2}\to W_{1}^{\prime}\) be linear maps and denote \(\mathbf{y}=(y_{1},\ldots,y_{r})\) (in other words, \(\mathbf{y}\) gives a quiver representation for the quiver with two vertices labelled \(2\) and \(1^{\prime}\), and \(r\) arrows from vertex \(2\) to vertex \(1^{\prime}\)). They induce two linear maps \(A\) and \(B\):
\[A(\mathbf{y}) =y_{1}+\cdots+y_{r}:W_{2}^{\oplus r}\to W_{1}^{\prime},\quad(b_{1},\ldots,b_{r})\mapsto y_{1}(b_{1})+\ldots+y_{r}(b_{r})\] \[B(\mathbf{y}) =y_{1}\oplus\cdots\oplus y_{r}:W_{2}\to(W_{1}^{\prime})^{\oplus r},\quad b\mapsto(y_{1}(b),\ldots,y_{r}(b))\]
which can be represented by the matrices
\[A(\mathbf{y})=[y_{1}|\cdots|y_{r}],\quad B(\mathbf{y})=\left[\begin{array}{c }y_{1}\\ \hline\vdots\\ y_{r}\end{array}\right]\]
The Nakajima's affine graded quiver variety is
\[\mathbf{E}((v_{1},v_{2}),(w_{1}^{\prime},w_{2}))=\{\mathbf{y}=(y_{1},\ldots, y_{r})\in\operatorname{Hom}(W_{2},W_{1}^{\prime})^{\oplus r}:\operatorname{ rank}A(\mathbf{y})\leq v_{1},\operatorname{rank}B(\mathbf{y})\leq v_{2}\}.\]
Rewriting this in terms of algebra and changing some notation for convenience, we can forget the above geometric motivation and give the following definition instead.
**Definition 1.1**.: Let \(m,n,r\) be positive integers, let \(X=\{X^{(k)}=[x_{ij}^{(k)}]\}_{k=1}^{r}\) be a set of variable matrices of dimension \(m\times n\) (we call \(i,j,k\) the row index, column index, and page index, respectively). Define matrices
\[A=A(X)=[X^{(1)}|X^{(2)}|\ldots|X^{(r)}],\quad B=B(X)=\left[\begin{array}{c}X^{(1)}\\ \hline X^{(2)}\\ \hline\vdots\\ \hline X^{(r)}\end{array}\right]\]
For positive integers \(u,v\), define (possibly empty) sets
\[D_{u}(A(X))=\{u\text{-minors of }A(X)\},\quad D_{v}(B(X))=\{v\text{-minors of }B(X)\}.\]
The _double determinantal ideal_\(I_{m,n,u,v}^{(r)}\) is the ideal of \(K[X]\) generated by \(D_{u}(A(X))\cup D_{v}(B(X))\) (which we called the _natural generators_).
Note that \(D_{u}(A(X))\) is empty unless \(1\leq u\leq\min(m,nr)\); \(D_{v}(B(X))\) is empty unless \(1\leq v\leq\min(mr,n)\).
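For readers who want to experiment, the natural generators of Definition 1.1 can be enumerated mechanically. The following minimal Python/SymPy sketch is our illustration (the function name `natural_generators` is not from the paper): it builds \(A(X)\) and \(B(X)\) from the pages \(X^{(k)}\) and lists all \(u\)-minors of \(A(X)\) and \(v\)-minors of \(B(X)\).

```python
# Sketch (ours): enumerate the natural generators D_u(A(X)) ∪ D_v(B(X)) of Definition 1.1.
from itertools import combinations
import sympy as sp

def natural_generators(m, n, r, u, v):
    # variables x_{ij}^{(k)} with 1-based row/column/page indices, as in the paper
    x = {(i, j, k): sp.Symbol(f"x{i}{j}_{k}")
         for i in range(1, m + 1) for j in range(1, n + 1) for k in range(1, r + 1)}
    pages = [sp.Matrix(m, n, [x[i, j, k] for i in range(1, m + 1) for j in range(1, n + 1)])
             for k in range(1, r + 1)]
    A = sp.Matrix.hstack(*pages)      # m x nr: pages side by side
    B = sp.Matrix.vstack(*pages)      # mr x n: pages stacked
    def minors(C, s):
        return [C[list(R), list(Cc)].det()
                for R in combinations(range(C.rows), s)
                for Cc in combinations(range(C.cols), s)]
    return minors(A, u) + minors(B, v)

gens = natural_generators(2, 2, 2, 2, 2)          # the setting of Example 1.2 below
print(len(gens), "minors,", len({sp.expand(g) for g in gens}), "distinct")   # 12 minors, 10 distinct
```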
**Example 1.2**.: For example, take \(m=n=r=u=v=2\), the matrices \(A(X)\), \(B(X)\), and the corresponding double determinantal ideal \(I\) are as follows (where we use \(a,b,\dots\) to denote \(x_{11}^{(1)},x_{12}^{(1)},\dots\) for simplicity):
\[A(X)=\begin{bmatrix}x_{11}^{(1)}&x_{12}^{(1)}&x_{11}^{(2)}&x_{12}^{(2)}\\ x_{21}^{(1)}&x_{22}^{(1)}&x_{21}^{(2)}&x_{22}^{(2)}\end{bmatrix}=\begin{bmatrix}a&b&a^{\prime}&b^{\prime}\\ c&d&c^{\prime}&d^{\prime}\end{bmatrix},\quad B(X)=\begin{bmatrix}x_{11}^{(1)}&x_{12}^{(1)}\\ x_{21}^{(1)}&x_{22}^{(1)}\\ x_{11}^{(2)}&x_{12}^{(2)}\\ x_{21}^{(2)}&x_{22}^{(2)}\end{bmatrix}=\begin{bmatrix}a&b\\ c&d\\ a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{bmatrix},\]

\[I_{m,n,u,v}^{(r)}=\left\langle\begin{vmatrix}a&b\\ c&d\end{vmatrix},\begin{vmatrix}a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{vmatrix},\begin{vmatrix}a&a^{\prime}\\ c&c^{\prime}\end{vmatrix},\begin{vmatrix}a&b^{\prime}\\ c&d^{\prime}\end{vmatrix},\begin{vmatrix}b&a^{\prime}\\ d&c^{\prime}\end{vmatrix},\begin{vmatrix}b&b^{\prime}\\ d&d^{\prime}\end{vmatrix},\begin{vmatrix}a&b\\ a^{\prime}&b^{\prime}\end{vmatrix},\begin{vmatrix}a&b\\ c^{\prime}&d^{\prime}\end{vmatrix},\begin{vmatrix}c&d\\ a^{\prime}&b^{\prime}\end{vmatrix},\begin{vmatrix}c&d\\ c^{\prime}&d^{\prime}\end{vmatrix}\right\rangle.\]
Grobner theory plays a fundamental role in the study of the classical determinantal ideals. A key fact is that the minors that generate a determinantal ideal form a Grobner basis (as remarked in [5], this fact was first proved by Narasimhan [19] and then reproved many times). The main result of this paper is a similar statement for the double determinantal ideals:
**Theorem 1.3**.: _Let \(m,n,r,u,v\) be positive integers, \(X=\{X^{(k)}=[x_{ij}^{(k)}]\}_{k=1}^{r}\) be a set of variable matrices of dimension \(m\times n\). Then the set of natural generators of the double determinantal ideal \(I_{m,n,u,v}^{(r)}\), if nonempty, forms a Grobner basis with respect to any lexicographical monomial order that is consistent in both \(A(X)\) and in \(B(X)\) (see Definition 2.2)._
For example, the theorem asserts that the generators of \(I\) in Example 1.2 form a Grobner basis with respect to the lexicographical monomial order with \(a>b>c>d>a^{\prime}>b^{\prime}>c^{\prime}>d^{\prime}\).
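As a computational sanity check of this assertion (a sketch, not a substitute for the proof), one can ask SymPy for a reduced Gröbner basis of the generators of Example 1.2 under this lex order and verify that every leading monomial of the reduced basis is already divisible by the leading monomial of some natural generator. The primed variables are written with a `p` suffix; the code is our illustration.

```python
# Sketch (ours): computational check of Theorem 1.3 in the setting of Example 1.2,
# lex order a > b > c > d > a' > b' > c' > d' (primes written with a "p" suffix).
from itertools import combinations
import sympy as sp

a, b, c, d, ap, bp, cp, dp = V = sp.symbols("a b c d ap bp cp dp")
A_rows = [[a, b, ap, bp], [c, d, cp, dp]]        # A(X) of Example 1.2
B_rows = [[a, b], [c, d], [ap, bp], [cp, dp]]    # B(X) of Example 1.2

minor2 = lambda r1, r2: sp.Matrix([r1, r2]).det()
gens  = [minor2([A_rows[0][j] for j in J], [A_rows[1][j] for j in J])
         for J in combinations(range(4), 2)]
gens += [minor2(B_rows[i], B_rows[j]) for i, j in combinations(range(4), 2)]

G  = sp.groebner(gens, *V, order="lex")          # reduced Groebner basis of the ideal
lm = lambda f: sp.LM(f, *V, order="lex")
# The natural generators form a Groebner basis iff every leading monomial of the
# reduced basis is divisible by the leading monomial of some generator.
print(all(any(sp.div(lm(g), lm(f), *V)[1] == 0 for f in gens) for g in G.exprs))  # expected: True
```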
The idea of our proof is inspired by [19] and [6]. In [19], arising from Abhyankar's work on singularities of Schubert varieties of flag manifolds, Narasimhan established the primality, and thus irreducibility, of the ladder determinantal ideal. The proof constructed S-pairs using two different Laplace expansions of a very cleverly crafted matrix of variables, equating the two expansions, and rearranging. In a similar manner, our construction computes the same quantity in two different ways, equates, and rearranges, but relies on the Leibniz formula for determinants. In [6], the following fact is used; it allows for more flexibility than the usual Buchberger criterion. See §2.1 for the notations.
**Proposition 1.4** (Theorem 3.2 in [17]).: _A set \(\mathcal{G}\) of polynomials is a Grobner basis (for the ideal it generates) if and only if for any two polynomials \(M,N\in\mathcal{G}\) there exists a finite chain of polynomials \(M_{0},M_{1},...,M_{k-1},M_{k}\in\mathcal{G}\) such that \(M_{0}=M\), \(M_{k}=N\), and the following holds for all \(i=1,\dots,k\):_
1. \(\mathrm{LM}(M_{i})|\mathrm{LCM}(\mathrm{LM}(M),\mathrm{LM}(N))\)_, and_
2. \(S(M_{i-1},M_{i})=\sum_{j=1}^{n}a_{j}P_{j}\)_, where_ \(n\in\mathbb{Z}_{\geq 0}\)_,_ \(a_{j}\) _are monomials,_ \(P_{j}\in\mathcal{G}\)_, and_ \[\mathrm{LM}(a_{j}P_{j})<\mathrm{LCM}(\mathrm{LM}(M_{i-1}),\mathrm{LM}(M_{i})) \text{ for all }j.\]
For convenience, we call \(\sum_{j=1}^{n}a_{j}P_{j}\) a _monomial-coefficient combination of \(P_{j}\)_.
In Theorem 4.3 we generalize the Grobner basis results to the generalized double determinantal ideals; then we further generalize it to the following Theorem 1.5 (see §4.2 for the relevant definitions and precise statements). These ideals arise from the study of Nakajima's affine graded quiver varieties, which are important in the study of geometric representation theory, algebraic geometry, and cluster algebras.
**Theorem 1.5**.: _(Theorem 4.6) The set of natural generators of a bipartite determinantal ideal, if nonempty, forms a Grobner basis with respect to an appropriate monomial order._
Finally, we discuss some applications, by relating the double determinantal ideals to the \(3\)-dimensional tensors and algebraic statistics.
The second author proposed Theorem 1.3 as a conjecture at an AMS sectional meeting in October 2018; after that, Fieldsteel and Klein proved it while we were preparing this paper, and they kindly shared their preprint with us, which is now in print [9]. Their elegant proof uses an advanced tool in commutative algebra called liaison theory, which is very effective in the study of determinantal varieties and their various generalizations. In contrast, our proof is elementary and combinatorial, using only the S-pair argument described in Proposition 1.4.
Also, before we found the proof of Theorem 4.6 using the S-pair argument, the second author proposed it as an open problem to Klein; after that, Klein sent us a draft of a proof obtained by extending the methods of [9], a few years before our paper was completed.
We would also like to mention that Conca, De Negri, and Stojanac proved in [7] the Grobner basis result on the generators of two flattenings of an order \(3\) tensor of size \(2\times a\times b\), which is a special case of the double determinantal ideals studied in this paper.
The paper is organized as follows. In §2 we recall the background of Grobner basis and give a key example (Example 2.5), and prove some preparatory facts. In §3 we prove the main theorem. In §4 we introduce the bipartite determinantal ideals. In §5 we relate our study to tensors and discuss applications in algebraic statistics. Most results of the paper (except for §4) were originally included in the first author's PhD dissertation, which was completed in 2020.
**Acknowledgments.** We gratefully acknowledge discussions with Patricia Klein, Allen Knutson, and Alex Yong; we thank Bernd Sturmfels and Seth Sullivant for pointing out to us the relations of the double determinantal varieties to tensors and algebraic statistics; we thank Nathan Fieldsteel and Patricia Klein for sharing drafts of their papers. Computer calculations were performed using Macaulay 2 [10].
## 2. Grobner basis and \(S\)-pairs
### Facts about Grobner basis
Let \(R=K[x_{1},\ldots,x_{n}]\) be a polynomial ring in a finite number of variables. In this paper, the term _monomial_ refers to a product of the form
\(x_{1}^{i_{1}}\cdots x_{n}^{i_{n}}\) (with coefficient \(1\)). Let \(\mathcal{N}\) be the set of all monomials in \(R\); let "\(>\)" be a monomial order; for a polynomial \(f\in R\), let \(\mathrm{LM}(f)=\mathrm{LM}_{>}(f)\) be the leading monomial of \(f\), \(\mathrm{LT}(f)=\mathrm{LT}_{>}(f)\) be the leading term of \(f\), and \(\mathrm{LT}(I)=\mathrm{LT}_{>}(I)\) be the initial ideal of \(I\) (which is generated by all the leading monomials of elements in \(I\)). We say that a finite set \(\mathcal{G}\) is a _Grobner basis_ (for the ideal \(I\) generated by \(\mathcal{G}\)) with respect to the monomial order "\(>\)" if \(\mathrm{LT}_{>}(I)\) is generated by the set \(\{\mathrm{LT}_{>}(f)|f\in\mathcal{G}\}\). Given any two polynomials \(f,g\), their _S-pair_ is defined as
\[S(f,g)=\frac{L}{\mathrm{LT}(f)}f-\frac{L}{\mathrm{LT}(g)}g,\text{ where }L= \mathrm{LCM}(\mathrm{LM}(f),\mathrm{LM}(g))\in\mathcal{N}.\]
The well-known Buchberger criterion says that \(\mathcal{G}\) is a Grobner basis if and only if for any \(f,g\in\mathcal{G}\), the remainder of \(S(f,g)\) on division by \(\mathcal{G}\) is zero. Proposition 1.4 is a less known but more flexible criterion, which we will use in our proof.
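For concreteness, here is a minimal SymPy sketch of the S-pair computation and the division step (our illustration, with an arbitrary two-element set that happens not to be a Gröbner basis).

```python
# Sketch (ours): the S-pair of two polynomials and its remainder on division, in lex order.
import sympy as sp

x, y, z = sp.symbols("x y z")

def s_pair(f, g, *gens):
    L = sp.lcm(sp.LM(f, *gens, order="lex"), sp.LM(g, *gens, order="lex"))
    return sp.expand(L / sp.LT(f, *gens, order="lex") * f
                     - L / sp.LT(g, *gens, order="lex") * g)

f, g = x*y - z**2, x*z - y**2
S = s_pair(f, g, x, y, z)                         # y**3 - z**3
_, r = sp.reduced(S, [f, g], x, y, z, order="lex")
print(S, "| remainder:", r)                       # nonzero remainder: {f, g} is not a Groebner basis
```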
### Monomial orders
**Definition 2.1**.: Let \(C=(c_{ij})\) be a matrix of variables and "\(>\)" be a monomial order on \(K[c_{ij}]\). Then "\(>\)" is a _diagonal order_ if for every square submatrix \(D\) of \(C\), the leading term of \(\det(D)\) is the product of the main diagonal entries of \(D\).
For example, the lexicographic order induced by the following term order (known as the reading order)
\[c_{11}>c_{12}>\ldots>c_{1n}>c_{21}>c_{22}>\ldots>c_{2n}>\cdots>c_{m1}>c_{m2}> \ldots>c_{mn}\]
is a diagonal order.
**Definition 2.2**.: Let \(C=(c_{i,j})\) be a matrix of variables, "\(>\)" be a monomial order on \(K[c_{i,j}]\). We say that "\(>\)" is _consistent_ on \(C\) when \(c_{i,j}>c_{i,j+k}\) and \(c_{i,j}>c_{i+k,j}\) for all \(i,j\) and \(k>0\).
Note that a consistent lexicographical order on \(C=(c_{ij})\in\mathrm{Mat}_{m\times n}\) is a diagonal order on \(C\), but not all diagonal orders are consistent. To see the former, let \(M\) be a minor of \(C\). We may assume \(M=\det C\). Let \(\sigma\in S_{n}\), \(m_{1}=c_{11}\ldots c_{nn}\), and \(m_{\sigma}=c_{\sigma(1),1}\ldots c_{\sigma(n),n}\). The assumption of consistency implies that \(c_{11}>c_{ij}\) for all \((i,j)\neq(1,1)\). By lex, either \(m_{1}>m_{\sigma}\) or \(\sigma(1)=1\). Assuming the latter, \(c_{\sigma(2),2},\ldots,c_{\sigma(n),n}\in\{c_{i,j}:i,j\geq 2\}\), which, by consistency, has largest element \(c_{22}\). So, either \(m_{1}>m_{\sigma}\) or \(\sigma(2)=2\). Proceeding in this manner, either \(m_{1}>m_{\sigma}\) or \(\sigma(i)=i\) for all \(i\). Thus, \(m_{1}\geq m_{\sigma}\), so \(m_{1}=\mathrm{LT}(M)\). To see the latter, let \(A=\begin{bmatrix}a&c\\ b&d\end{bmatrix}\), then the lexicographical order with \(a>b>d>c\) is not consistent on \(A\), but is diagonal on \(A\).
We denote by \(x_{ij}^{(k)}\) the variable that lies in the \(i^{th}\) row and \(j^{th}\) column of matrix \(X^{(k)}\).
From now on, we assume "\(>\)" is a consistent lexicographical order on both \(A(X)\) and \(B(X)\). Such an order exists, for example define "\(>\)" by requiring that \(x_{ij}^{(s)}>x_{kl}^{(t)}\) when \(s<t\), or when \(s=t\) and \(i<k\), or when \(s=t\) and \(i=k\) and \(j<l\).
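Programmatically, this particular order amounts to comparing the index triples \((s,i,j)\) lexicographically, with a smaller triple corresponding to a larger variable; the following tiny sketch (the helper `var_key` is ours) is only meant to make the convention explicit.

```python
# Sketch (ours): the consistent lex order x_{ij}^{(s)} > x_{kl}^{(t)} iff (s, i, j) < (t, k, l).
def var_key(i, j, s):
    return (s, i, j)                 # page first, then row, then column

# variables of a 2 x 2 x 2 tensor listed from largest to smallest:
triples = [(i, j, s) for s in (1, 2) for i in (1, 2) for j in (1, 2)]
print(sorted(triples, key=lambda t: var_key(*t)))
# [(1, 1, 1), (1, 2, 1), (2, 1, 1), (2, 2, 1), (1, 1, 2), (1, 2, 2), (2, 1, 2), (2, 2, 2)]
```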
**Definition 2.3**.: We say that two points \((i,j)\), \((k,l)\in\mathbb{Z}_{>0}^{2}\) are in NW-SE position (or the entries \(c_{ij}\) and \(c_{kl}\) of a matrix are in NW-SE position) when \((k-i)(l-j)>0\). We say that a
sequence of points (or entries of a matrix) is in NW-SE position if every pair is in NW-SE position.
Note that the leading term of any \(M\in D_{u}(A)\) consists of a sequence of variables of \(A\) which are in NW-SE position, and likewise for \(N\in D_{v}(B)\).
### Notations and a key example
Recall the Leibniz formula for matrix determinants: if \(C=(x_{ij})\) is an \(n\times n\) matrix of variables, then
\[\det(C)=\sum_{\sigma\in S_{n}}\operatorname{sgn}(\sigma)\prod_{i=1}^{n}x_{ \sigma(i),i}=\sum_{\tau\in S_{n}}\operatorname{sgn}(\tau)\prod_{i=1}^{n}x_{i,\tau(i)}. \tag{1}\]
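As a quick check of (1), the following sketch (ours) compares the permutation expansion with SymPy's built-in determinant for a symbolic \(3\times 3\) matrix.

```python
# Sketch (ours): verify the Leibniz formula (1) against SymPy's determinant for n = 3.
from itertools import permutations
from sympy.combinatorics import Permutation
import sympy as sp

n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"x{i + 1}{j + 1}"))
leibniz = sum(Permutation(list(p)).signature() * sp.prod(X[p[i], i] for i in range(n))
              for p in permutations(range(n)))
print(sp.expand(leibniz - X.det()) == 0)          # True
```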
**Definition 2.4**.: For \(M\in D_{u}(A),N\in D_{v}(B)\), let
\[L=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(N))=x_{\alpha_{1},\beta_{1}}^{(r_{1})}x_{\alpha_{2},\beta_{2}}^{(r_{2})}\ldots x_{\alpha_{l}, \beta_{l}}^{(r_{l})}\]
where \(l\) is the degree of \(L\), and we arrange the right side such that \(x_{\alpha_{i},\beta_{i}}^{(r_{i})}>x_{\alpha_{i+1},\beta_{i+1}}^{(r_{i+1})}\) for all \(i=1,...,l-1\).
- Define the following subsets of \(\{1,\ldots,l\}\):
\[S_{M}=\{i:x_{\alpha_{i},\beta_{i}}^{(r_{i})}\text{ divides LM}(M)\},\quad S_{N }=\{i:x_{\alpha_{i},\beta_{i}}^{(r_{i})}\text{ divides LM}(N)\}.\]
- Define \(\operatorname{Sym}_{l}\) to be the symmetric group of \(\{1,\ldots,l\}\).
- Define subgroups
\[\operatorname{Sym}(S_{M})=\{\sigma\in\operatorname{Sym}_{l}:\sigma(i)=i\text { for each }i\notin S_{M}\},\]
\[\operatorname{Sym}(S_{N})=\{\tau\in\operatorname{Sym}_{l}:\tau(i)=i\text{ for each }i\notin S_{N}\}.\]
- Let \(\mathcal{S}=\Big{\{}\{(a_{1},b_{1}),\ldots,(a_{l},b_{l})\}:(a_{1}\ldots a_{l}),(b_{1}\ldots b_{l})\in\operatorname{Sym}_{l}\Big{\}}\).
- Let \(\mathcal{N}\) be the set of monomials in \(K[x_{ij}^{(r_{k})}]\).
- Define a map \(\varphi:\mathcal{S}\to\mathcal{N}\), \(\{(a_{1},b_{1}),\ldots,(a_{l},b_{l})\}\mapsto\prod_{i=1}^{l}x_{\alpha_{a_{i}},\beta_{b_{i}}}^{(r_{i})}\).
- Define a left group action of \(\operatorname{Sym}(S_{M})\times\operatorname{Sym}(S_{N})\) on \(\mathcal{S}\) as follows:
\[(\sigma,\tau)\{(a_{1},b_{1}),\ldots,(a_{l},b_{l})\}=\{(\sigma(a_{1}),\tau(b_{1 })),\ldots,(\sigma(a_{l}),\tau(b_{l}))\}\,.\]
- Denote the special element \(e=\{(1,1),\ldots,(l,l)\}\in\mathcal{S}\).
- For \((\sigma,\tau)\in\operatorname{Sym}(S_{M})\times\operatorname{Sym}(S_{N})\), define a degree \(l\)-polynomial
\[L(\sigma,\tau)=\varphi((\sigma,\tau)e)=x_{\alpha_{\sigma(1)},\beta_{\tau(1)}}^{ (r_{1})}x_{\alpha_{\sigma(2)},\beta_{\tau(2)}}^{(r_{2})}\ldots x_{\alpha_{ \sigma(l)},\beta_{\tau(l)}}^{(r_{l})}.\]
In particular, \(L(1,1)=L\).
- For each \(\sigma\in\operatorname{Sym}(S_{M})\), define a polynomial \(P_{\sigma,\cdot}=\sum_{\tau\in\operatorname{Sym}(S_{N})}\operatorname{sgn}( \sigma)\operatorname{sgn}(\tau)L(\sigma,\tau)\).
- For each \(\tau\in\operatorname{Sym}(S_{N})\), define a polynomial \(P_{\cdot,\tau}=\sum_{\sigma\in\operatorname{Sym}(S_{M})}\operatorname{sgn}( \sigma)\operatorname{sgn}(\tau)L(\sigma,\tau)\).
The following is a simple but crucial identity:
\[\begin{split}\sum_{\sigma\in\operatorname{Sym}(S_{M})}P_{\sigma, \cdot}&=\sum_{\sigma\in\operatorname{Sym}(S_{M})}\sum_{\tau\in \operatorname{Sym}(S_{N})}\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau) L(\sigma,\tau)\\ &=\sum_{\tau\in\operatorname{Sym}(S_{N})}\sum_{\sigma\in \operatorname{Sym}(S_{M})}\operatorname{sgn}(\sigma)\operatorname{sgn}(\tau) L(\sigma,\tau)=\sum_{\tau\in\operatorname{Sym}(S_{N})}P_{\cdot,\tau}.\end{split} \tag{2}\]
The main idea to prove Theorem 1.3 is to use the above identity to express the \(S\)-pair as a sum \(\sum_{j=1}^{n}a_{j}P_{j}\) that satisfies the condition (2) of Proposition 1.4, and hence conclude that the natural generators form a Grobner basis. We use the following key example to illustrate this idea; then in later sections we implement this idea to work in general.
**Example 2.5**.: (A key example) Let \(r=1\), so \(A=B\) consist of only one page. We simply write \(x_{ij}^{(1)}\) as \(x_{ij}\). Let \(m=n=3\), \(u=v=2\).
(a) Let \(M=\begin{vmatrix}x_{12}&x_{13}\\ x_{32}&x_{33}\end{vmatrix}\), \(N=\begin{vmatrix}x_{21}&x_{23}\\ x_{31}&x_{33}\end{vmatrix}\). Fix the lex order in which \(x_{ij}>x_{kl}\) when \(i<k\) or when \(i=k\) and \(j<l\). Then \(\operatorname{LT}(M)=x_{12}x_{33}\), \(\operatorname{LT}(N)=x_{21}x_{33}\), \(L=x_{12}x_{21}x_{33}\), \((\alpha_{1},\beta_{1})=(1,2),(\alpha_{2},\beta_{2})=(2,1),(\alpha_{3},\beta_{ 3})=(3,3)\), \(S_{M}=\{1,3\},\text{ and }S_{N}=\{2,3\}\). We will use diagrams as a convenient way to represent \(L\) and the newly created monomials. The variables of \(\operatorname{LT}(M)\) are represented by \(\Circle\) and are attached to horizontal rays on the left, while variables of \(\operatorname{LT}(N)\) are represented by \(\times\) and are attached to vertical rays on the top; \(\operatorname{Sym}(S_{M})\) permutes the horizontal rays and \(\operatorname{Sym}(S_{N})\) permutes the vertical rays. See Figure 1. For example, to compute \(L((13),(23))\), we swap the two horizontal rays and swap the two vertical lines. Therefore, we compute the following.
\[\begin{split} P_{1,\cdot}&=L+(-1)L(1,(23))=x_{12}x_{21 }x_{33}-x_{12}x_{23}x_{31}=x_{12}N\\ P_{(13),\cdot}&=(-1)L((13),1)+(-1)(-1)L((13),(23))=-x_{32 }x_{21}x_{13}+x_{32}x_{23}x_{11}=x_{32}\begin{vmatrix}x_{11}&x_{13}\\ x_{21}&x_{23}\end{vmatrix}\\ P_{\cdot,1}&=L+(-1)L((13),1)=x_{12}x_{21}x_{33}-x_{32}x_{21}x_{13}=x_{21}M\\ P_{\cdot,(23)}&=(-1)L(1,(23))+(-1)(-1)L((13),(23))=-x_{12}x_{23}x_{31}+x_{3 2}x_{23}x_{11}=x_{23}\begin{vmatrix}x_{11}&x_{12}\\ x_{31}&x_{32}\end{vmatrix}\end{split}\]
It follows from (2) that \(P_{1,\cdot}+P_{(13),\cdot}=P_{\cdot,1}+P_{\cdot,(23)}\), so
\[S(M,N)=P_{\cdot,1}-P_{1,\cdot}=P_{(13),\cdot}-P_{\cdot,(23)}=x_{32}\begin{vmatrix} x_{11}&x_{13}\\ x_{21}&x_{23}\end{vmatrix}-x_{23}\begin{vmatrix}x_{11}&x_{12}\\ x_{31}&x_{32}\end{vmatrix}\]
Note that the leading monomials of the two terms on the right side are the same and equal to \(x_{11}x_{23}x_{32}>L=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM }(N))\), so the condition (2) of Proposition 1.4 does not hold (with \(M_{0}=M,M_{1}=N,i=1\)).
(b) We swap \(M,N\) in (a): let \(M=\left|\begin{matrix}x_{21}&x_{23}\\ x_{31}&x_{33}\end{matrix}\right|\), \(N=\left|\begin{matrix}x_{12}&x_{13}\\ x_{32}&x_{33}\end{matrix}\right|\). Then
\[P_{1,\cdot} =L-L(1,(13))=x_{21}N\] \[P_{(23),\cdot} =-L((23),1)+L((23),(13))=-x_{31}\left|\begin{matrix}x_{12}&x_{13}\\ x_{22}&x_{23}\end{matrix}\right|\] \[P_{\cdot,1} =L-L((23),1)=x_{12}M\] \[P_{\cdot,(13)} =-L(1,(13))+L((23),(13))=-x_{13}\left|\begin{matrix}x_{21}&x_{22}\\ x_{31}&x_{32}\end{matrix}\right|\]
See Figure 2. It follows from (2) that \(P_{1,\cdot}+P_{(23),\cdot}=P_{\cdot,1}+P_{\cdot,(13)}\), so
\[S(M,N)=P_{\cdot,1}-P_{1,\cdot}=P_{(23),\cdot}-P_{\cdot,(13)}=x_{13}\begin{vmatrix}x_{21}&x_{22}\\ x_{31}&x_{32}\end{vmatrix}-x_{31}\begin{vmatrix}x_{12}&x_{13}\\ x_{22}&x_{23}\end{vmatrix}\]
Figure 1. Example 2.5 (a), where pairs of indices are underlined when they have been switched by \(\sigma\in\operatorname{Sym}(S_{M})\), or double underlined when they have been switched by \(\tau\in\operatorname{Sym}(S_{N})\).
Figure 2. Example 2.5 (b)
Note that the leading monomials of the two terms on the right side are \(x_{13}x_{21}x_{32}\) and \(x_{12}x_{23}x_{31}\), both of which are less than \(L\), so the condition (2) of Proposition 1.4 holds (with \(M_{0}=M,M_{1}=N,i=1\)).
We will see that this idea works in general.
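Before moving on, the computation of Example 2.5(a) can also be verified mechanically. The following sketch (ours, with 0-based indices) enumerates \(\operatorname{Sym}(S_{M})\times\operatorname{Sym}(S_{N})\), forms the monomials \(L(\sigma,\tau)\), and checks identity (2) together with the decomposition \(S(M,N)=P_{(13),\cdot}-P_{\cdot,(23)}\); the data of Definition 2.4 are entered by hand as read off in the text.

```python
# Sketch (ours, 0-based indices): reproduce Example 2.5(a) and verify identity (2)
# together with S(M,N) = P_{(13),.} - P_{.,(23)}.
from itertools import permutations
from sympy.combinatorics import Permutation
import sympy as sp

x = {(i, j): sp.Symbol(f"x{i}{j}") for i in range(1, 4) for j in range(1, 4)}
M = sp.Matrix([[x[1, 2], x[1, 3]], [x[3, 2], x[3, 3]]]).det()   # rows {1,3}, cols {2,3}
N = sp.Matrix([[x[2, 1], x[2, 3]], [x[3, 1], x[3, 3]]]).det()   # rows {2,3}, cols {1,3}

# Data of Definition 2.4 for this pair, read off as in the text:
alpha, beta = [1, 2, 3], [2, 1, 3]      # L = x12*x21*x33, points p1 > p2 > p3
S_M, S_N = {0, 2}, {1, 2}               # S_M = {1,3}, S_N = {2,3} in the paper's 1-based labels

def perms_fixing_outside(S, l=3):
    """Permutations of {0,...,l-1} fixing every index outside S, with their signs."""
    result = []
    for images in permutations(sorted(S)):
        p = list(range(l))
        for src, img in zip(sorted(S), images):
            p[src] = img
        result.append((tuple(p), Permutation(p).signature()))
    return result

L_of = lambda sig, tau: sp.prod(x[alpha[sig[i]], beta[tau[i]]] for i in range(3))

P_row = {sig: sgs * sum(sgt * L_of(sig, tau) for tau, sgt in perms_fixing_outside(S_N))
         for sig, sgs in perms_fixing_outside(S_M)}       # P_{sigma, .}
P_col = {tau: sgt * sum(sgs * L_of(sig, tau) for sig, sgs in perms_fixing_outside(S_M))
         for tau, sgt in perms_fixing_outside(S_N)}       # P_{., tau}

print(sp.expand(sum(P_row.values()) - sum(P_col.values())) == 0)        # identity (2): True
S_MN = sp.expand(x[2, 1] * M - x[1, 2] * N)          # S(M,N) = (L/LT(M))*M - (L/LT(N))*N
print(sp.expand(S_MN - (P_row[2, 1, 0] - P_col[0, 2, 1])) == 0)         # True
```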
### Pseudominors
Sometimes, \(P_{\sigma,\cdot}\) and \(P_{\cdot,\tau}\) may be zero, as shown in the example below.
**Example 2.6**.: Use the same setting as Example 2.5 (a), but choose \(M=\begin{vmatrix}x_{11}&x_{13}\\ x_{31}&x_{33}\end{vmatrix}\), \(N=\begin{vmatrix}x_{21}&x_{23}\\ x_{31}&x_{33}\end{vmatrix}\). Figure 3 shows the resulting diagrams of \(L\) and the new monomials.
Therefore,
\[P_{\cdot,(23)}=(-1)L(1,(23))+L((13),(23))=-x_{11}x_{23}x_{31}+x_{31}x_{23}x_{11}=x_{23}\begin{vmatrix}x_{11}&x_{11}\\ x_{31}&x_{31}\end{vmatrix}=0.\]
Here, the resulting determinant is not that of an actual submatrix, but rather a pseudominor \(\operatorname{minor}_{A}(1,3;1,1)\) (see Definition 2.7 below).
**Definition 2.7**.: Given a \(m\times n\)-matrix of variables \(C=[c_{ij}]\), and two lists (with possible repetition) \(a_{1},\ldots,a_{p}\in\{1,\ldots,m\}\), \(b_{1},\ldots,b_{q}\in\{1,\ldots,n\}\), we define a _pseudosubmatrix_
\[C(a_{1},\ldots,a_{p};b_{1},\ldots,b_{q})\]
to be a matrix of size \(p\times q\) whose \((i,j)\)-entry is \(c_{a_{i}b_{j}}\). Its determinant is called a _pseudominor_, and is denoted as
\[\operatorname{minor}_{C}(a_{1},\ldots,a_{p};b_{1},\ldots,b_{q}).\]
We call a pseudominor _trivial_ if it is the determinant of a pseudosubmatrix with repeated rows or columns; otherwise we call it _non-trivial_.
For the double determinantal ideals, the matrix \(A\) is of dimension \(m\times nr\). Let \(a_{1},\ldots,a_{u}\in\{1,\ldots,m\}\) and \(b_{1},\ldots,b_{u}\in\{1,\ldots,nr\}\). For \(k=1,\ldots,u\), define \(r_{b_{k}}=\lceil\frac{b_{k}}{n}\rceil\), define \(\bar{b}_{k}\) to be the unique integer such that \(\bar{b}_{k}\equiv b_{k}\pmod{n}\) and \(1\leq\bar{b}_{k}\leq n\). Then
\[\operatorname{minor}_{A}(a_{1},\ldots,a_{u};b_{1},\ldots,b_{u})=\begin{vmatrix}x_{a_{1},\bar{b}_{1}}^{(r_{b_{1}})}&\ldots&x_{a_{1},\bar{b}_{u}}^{(r_{b_{u}})}\\ \vdots&\ddots&\vdots\\ x_{a_{u},\bar{b}_{1}}^{(r_{b_{1}})}&\ldots&x_{a_{u},\bar{b}_{u}}^{(r_{b_{u}})}\end{vmatrix}.\]
Figure 3. Example 2.6
Similarly, given \(a_{1},\ldots,a_{v}\in\{1,\ldots,mr\}\) and \(b_{1},\ldots,b_{v}\in\{1,\ldots,n\}\), for \(k=1,\ldots,v\) we define \(r^{\prime}_{a_{k}}=\lceil\frac{a_{k}}{m}\rceil\), define \(\bar{a}_{k}\) to be the unique integer such that \(\bar{a}_{k}\equiv a_{k}\pmod{m}\) and \(1\leq\bar{a}_{k}\leq m\). Then
\[\mathrm{minor}_{B}(a_{1},\ldots,a_{v};b_{1},\ldots,b_{v})=\begin{vmatrix}x_{ \bar{a}_{1},b_{1}}^{(r^{\prime}_{a_{1}})}&\ldots&x_{\bar{a}_{1},b_{v}}^{(r^{ \prime}_{a_{1}})}\\ \vdots&\ddots&\vdots\\ x_{\bar{a}_{v},b_{1}}^{(r^{\prime}_{a_{v}})}&\ldots&x_{\bar{a}_{v},b_{v}}^{(r^{ \prime}_{a_{v}})}\\ \end{vmatrix}.\]
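The index arithmetic above is easy to implement. The following sketch uses our own helpers `pseudominor_A` and `pseudominor_B` (not the paper's notation) to build the pseudosubmatrices of \(A(X)\) and \(B(X)\) and return their determinants; it reproduces, for instance, the vanishing of the trivial pseudominor \(\operatorname{minor}_{A}(1,3;1,1)\) from Example 2.6.

```python
# Sketch (ours): pseudominors of A(X) and B(X) (Definition 2.7), including the
# page arithmetic r_{b_k} = ceil(b_k / n) and the reduction of b_k mod n into {1,...,n}.
import sympy as sp

def x(i, j, k):                                   # the variable x_{ij}^{(k)}
    return sp.Symbol(f"x{i}{j}_{k}")

def pseudominor_A(rows, cols, n):
    """minor_A(a_1,...,a_u ; b_1,...,b_u): a_k in {1..m}, b_k in {1..nr}, repetitions allowed."""
    def entry(a, b):
        page = -(-b // n)                         # ceil(b / n)
        return x(a, b - n * (page - 1), page)
    return sp.Matrix(len(rows), len(cols), [entry(a, b) for a in rows for b in cols]).det()

def pseudominor_B(rows, cols, m):
    """minor_B(a_1,...,a_v ; b_1,...,b_v): a_k in {1..mr}, b_k in {1..n}, repetitions allowed."""
    def entry(a, b):
        page = -(-a // m)
        return x(a - m * (page - 1), b, page)
    return sp.Matrix(len(rows), len(cols), [entry(a, b) for a in rows for b in cols]).det()

print(pseudominor_A([1, 3], [1, 1], n=3))         # 0: the trivial pseudominor of Example 2.6
print(pseudominor_B([2, 3], [1, 2], m=2))         # a non-trivial pseudominor (r = 2, m = n = 2)
```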
**Proposition 2.8**.: _Let \(M\in D_{u}(A),N\in D_{v}(B)\). Then, for each \(\tau\in\mathrm{Sym}(S_{N})\), the polynomial \(P_{\cdot,\tau}\) can be expressed as the product of a monomial and a pseudominor of \(A\). Similarly, for each \(\sigma\in\mathrm{Sym}(S_{M})\), the polynomial \(P_{\sigma,\cdot}\) can be expressed as the product of a monomial and a pseudominor of \(B\). Specifically, denote \(S_{M}=\{i_{1},\ldots,i_{u}\}\) and \(S_{N}=\{j_{1},\ldots,j_{v}\}\), where \(i_{1}<i_{2}<\ldots<i_{u}\) and \(j_{1}<j_{2}<\ldots<j_{v}\), then_
\[P_{\cdot,\tau} =\big{(}\mathrm{sgn}(\tau)\prod_{\begin{subarray}{c}1\leq k\leq l \\ k\notin S_{M}\end{subarray}}x_{\alpha_{k},\beta_{\tau(k)}}^{(r_{k})}\big{)} \mathrm{minor}_{A}(\alpha_{i_{1}},\ldots,\alpha_{i_{u}};\beta_{\tau(i_{1})}+n(r _{i_{1}}-1),\ldots,\beta_{\tau(i_{u})}+n(r_{i_{u}}-1))\] \[=\bigg{(}\mathrm{sgn}(\tau)\prod_{\begin{subarray}{c}1\leq k \leq l\\ k\notin S_{M}\end{subarray}}x_{\alpha_{k},\beta_{\tau(k)}}^{(r_{k})}\bigg{)} \begin{vmatrix}x_{\alpha_{i_{1}},\beta_{\tau(i_{1})}}^{(r_{i_{1}})}&\ldots&x_{ \alpha_{i_{1}},\beta_{\tau(i_{u})}}^{(r_{i_{u}})}\\ \vdots&\ddots&\vdots\\ x_{\alpha_{i_{u}},\beta_{\tau(i_{1})}}^{(r_{i_{1}})}&\ldots&x_{\alpha_{i_{u}}, \beta_{\tau(i_{u})}}^{(r_{i_{u}})}\\ \end{vmatrix},\] \[P_{\sigma,\cdot} =\big{(}\mathrm{sgn}(\sigma)\prod_{\begin{subarray}{c}1\leq k \leq l\\ k\notin S_{N}\end{subarray}}x_{\alpha_{\sigma(k)},\beta_{k}}^{(r^{\prime}_{k})} \big{)}\mathrm{minor}_{B}(\alpha_{\sigma(j_{1})}+m(r^{\prime}_{j_{1}}-1), \ldots,\alpha_{\sigma(j_{v})}+m(r^{\prime}_{j_{v}}-1);\beta_{j_{1}},\ldots, \beta_{j_{v}})\] \[=\bigg{(}\mathrm{sgn}(\sigma)\prod_{\begin{subarray}{c}1\leq k \leq l\\ k\notin S_{N}\end{subarray}}x_{\alpha_{\sigma(k)},\beta_{k}}^{(r^{\prime}_{k})} \bigg{)}\begin{vmatrix}x_{\alpha_{\sigma(j_{1})},\beta_{j_{1}}}^{(r^{\prime}_ {j_{1}})}&\ldots&x_{\alpha_{\sigma(j_{1})},\beta_{j_{v}}}^{(r^{\prime}_{j_{1}}) }\\ \vdots&\ddots&\vdots\\ x_{\alpha_{\sigma(j_{v})},\beta_{j_{1}}}^{(r^{\prime}_{j_{v}})}&\ldots&x_{ \alpha_{\sigma(j_{v})},\beta_{j_{v}}}^{(r^{\prime}_{j_{v}})}\\ \end{vmatrix}.\]
_In particular,_
\[P_{\cdot,1}=\frac{L}{\mathrm{LT}(M)}M\text{ and }P_{1,\cdot}=\frac{L}{\mathrm{LT}(N)}N.\]
Proof.: We only prove the identity for \(P_{\cdot,\tau}\), since the identity for \(P_{\sigma,\cdot}\) can be proved similarly. Fix \(\tau\in\mathrm{Sym}(S_{N})\). Then
\[P_{\cdot,\tau} =\sum_{\sigma\in\mathrm{Sym}(S_{M})}\mathrm{sgn}(\sigma)\mathrm{ sgn}(\tau)L(\sigma,\tau)=\sum_{\sigma\in\mathrm{Sym}(S_{M})}\mathrm{sgn}( \sigma)\mathrm{sgn}(\tau)\prod_{k=1}^{l}x_{\alpha_{\sigma(k)},\beta_{\tau(k)}}^ {(r_{k})}\] \[=\sum_{\sigma\in\mathrm{Sym}(S_{M})}\mathrm{sgn}(\sigma)\bigg{(} \mathrm{sgn}(\tau)\prod_{\begin{subarray}{c}1\leq k\leq l\\ k\notin S_{M}\end{subarray}}x_{\alpha_{\sigma(k)},\beta_{\tau(k)}}^{(r_{k})} \bigg{)}x_{\alpha_{\sigma(i_{1})},\beta_{\tau(i_{1})}}^{(r_{i_{1}})}\ldots x_{ \alpha_{\sigma(i_{u})},\beta_{\tau(i_{u})}}^{(r_{i_{u}})}\] \[=\bigg{(}\mathrm{sgn}(\tau)\prod_{\begin{subarray}{c}1\leq k \leq l\\ k\notin S_{M}\end{subarray}}x_{\alpha_{k},\beta_{\tau(k)}}^{(r_{k})}\bigg{)} \sum_{\sigma\in\mathrm{Sym}(S_{M})}\mathrm{sgn}(\sigma)x_{\alpha_{\sigma(i_{1})}, \beta_{\tau(i_{1})}}^{(r_{i_{1}})}\ldots x_{\alpha_{\sigma(i_{u})},\beta_{\tau( i_{u})}}^{(r_{i_{u}})}\]
where the third equality holds because every \(k\notin S_{M}\) is fixed by every \(\sigma\in\operatorname{Sym}(S_{M})\), and the last equality holds because \(\tau\) is fixed and the monomial \(\operatorname{sgn}(\tau)\prod_{\begin{subarray}{c}1\leq k\leq l\\ k\notin S_{M}\end{subarray}}x_{\alpha_{k},\beta_{\tau(k)}}^{(r_{k})}\) does not depend on the choice of \(\sigma\in\operatorname{Sym}(S_{M})\).
To show that the remaining polynomial
\[\sum_{\sigma\in\operatorname{Sym}(S_{M})}\operatorname{sgn}(\sigma)x_{\alpha_ {\sigma(i_{1})},\beta_{\tau(i_{1})}}^{(r_{i_{1}})}\cdots\,x_{\alpha_{\sigma(i_ {u})},\beta_{\tau(i_{u})}}^{(r_{i_{u}})}\]
is a pseudominor, we interpret each \(\sigma\in\operatorname{Sym}(S_{M})\) as a permutation in \(\operatorname{Sym}_{u}\) as follows: for each \(\sigma\in\operatorname{Sym}(S_{M})\), let \(\sigma^{\prime}\in\operatorname{Sym}_{u}\) be such that for all \(k\in\{1,\ldots,u\}\), \(\sigma^{\prime}(k)=k^{\prime}\) when \(\sigma(i_{k})=i_{k^{\prime}}\in S_{M}\). Then, relabel the variables via the bijection \(x_{\alpha_{i_{k}},\beta_{\tau(i_{k})}}^{(r_{i_{k}})}\mapsto y_{k,k}\). As such, for all \(\sigma\in\operatorname{Sym}(S_{M})\) and \(k\in\{1,\ldots,u\}\) we have \(x_{\alpha_{\sigma(i_{k})},\beta_{\tau(i_{k})}}^{(r_{i_{k}})}=x_{\alpha_{i_{k^{\prime}}},\beta_{\tau(i_{k})}}^{(r_{i_{k}})}=y_{k^{\prime},k}=y_{\sigma^{\prime}(k),k}\). Thus, by (1) we get
\[\sum_{\sigma\in\operatorname{Sym}(S_{M})}\operatorname{sgn}(\sigma)x_{\alpha_ {\sigma(i_{1})},\beta_{\tau(i_{1})}}^{(r_{i_{1}})}\cdots x_{\alpha_{\sigma(i_ {u})},\beta_{\tau(i_{u})}}^{(r_{i_{u}})}=\sum_{\sigma^{\prime}\in S_{u}} \operatorname{sgn}(\sigma^{\prime})y_{\sigma^{\prime}(1),1}\ldots y_{\sigma^{ \prime}(u),u}\]
\[=\begin{vmatrix}y_{1,1}&\ldots&y_{1,u}\\ \vdots&\ddots&\vdots\\ y_{u,1}&\ldots&y_{u,u}\end{vmatrix}=\begin{vmatrix}x_{\alpha_{i_{1}},\beta_{ \tau(i_{1})}}^{(r_{i_{1}})}&\ldots&x_{\alpha_{i_{1}},\beta_{\tau(i_{u})}}^{(r _{i_{u}})}\\ \vdots&\ddots&\vdots\\ x_{\alpha_{i_{u}},\beta_{\tau(i_{1})}}^{(r_{i_{1}})}&\ldots&x_{\alpha_{i_{u}},\beta_{\tau(i_{u})}}^{(r_{i_{u}})}\end{vmatrix}\]
which is the desired pseudominor.
**Example 2.9**.: Let \(r=2\) and \(m=n=2\), so \(A\) is a \(2\times 4\) matrix and \(B\) is a \(4\times 2\) matrix. Let \(u=v=2\), let
\[M=\operatorname{minor}_{A}(1,2;1,4)=\begin{vmatrix}x_{11}^{(1)}&x_{12}^{(2)} \\ x_{21}^{(1)}&x_{22}^{(2)}\end{vmatrix}=x_{11}^{(1)}x_{22}^{(2)}-x_{12}^{(2)}x_{2 1}^{(1)}\]
\[N=\operatorname{minor}_{B}(1,4;1,2)=\begin{vmatrix}x_{11}^{(1)}&x_{12}^{(1)} \\ x_{21}^{(2)}&x_{22}^{(2)}\end{vmatrix}=x_{11}^{(1)}x_{22}^{(2)}-x_{12}^{(1)}x_{2 1}^{(2)}\]
thus \(\operatorname{LT}(M)=\operatorname{LT}(N)=x_{11}^{(1)}x_{22}^{(2)}\). Note that even though \(M\) and \(N\) have the same leading term, they are not the same polynomial. Now, \(L=x_{11}^{(1)}x_{22}^{(2)}\), \((\alpha_{1},\beta_{1})=(1,1)\), \((\alpha_{2},\beta_{2})=(2,2)\), and \(S_{M}=S_{N}=\{1,2\}\). Denote the only non-trivial element of \(\operatorname{Sym}(S_{M})=\operatorname{Sym}(S_{N})\) by \(\pi\). To diagram the full action of \((\pi,\pi)\), we permute one at a time as follows:
[Diagrams omitted: \(L(\pi,1)\) in \(A\), \(L(\pi,1)\) in \(B\), and \(L(\pi,\pi)\) in \(A\).]
Therefore, we obtain the following:
\[P_{1,\cdot} =L+(-1)L(1,\pi)=x_{11}^{(1)}x_{22}^{(2)}-x_{12}^{(1)}x_{21}^{(2)}=N\] \[P_{\pi,\cdot} =(-1)L(\pi,1)+(-1)(-1)L(\pi,\pi)=-x_{21}^{(1)}x_{12}^{(2)}+x_{22}^{ (1)}x_{11}^{(2)}\] \[=-\begin{vmatrix}x_{21}^{(1)}&x_{22}^{(1)}\\ x_{11}^{(2)}&x_{12}^{(2)}\end{vmatrix}=-\text{minor}_{B}(2,3;1,2)\] \[P_{\cdot,1} =L+(-1)L(\pi,1)=x_{11}^{(1)}x_{22}^{(2)}-x_{21}^{(1)}x_{12}^{(2)}=M\] \[P_{\cdot,\pi} =(-1)L(1,\pi)+(-1)(-1)L(\pi,\pi)=-x_{12}^{(1)}x_{21}^{(2)}+x_{22}^ {(1)}x_{11}^{(2)}\] \[=-\begin{vmatrix}x_{12}^{(1)}&x_{11}^{(2)}\\ x_{22}^{(1)}&x_{21}^{(2)}\end{vmatrix}=-\text{minor}_{A}(1,2;2,3)\]
which is the same as predicted by Proposition 2.8.
### Conditions for \(L(\sigma,\tau)=L(\sigma^{\prime},\tau^{\prime})\), \(P_{\sigma,\cdot}=P_{\sigma^{\prime},\cdot}\) and \(P_{\cdot,\tau}=P_{\cdot,\tau^{\prime}}\)
We first show an example that \(L(\sigma,\tau)=L(1,1)\) for some \((\sigma,\tau)\neq(1,1)\):
**Example 2.10**.: Let \(r=1\) (so \(A=B\)), \(m=3,n=4,u=v=3\), \(M=\text{minor}_{A}(1,2,3;1,2,4)\), and \(N=\text{minor}_{A}(1,2,3;1,3,4)\). Then, \(S_{M}=\{1,2,4\}\), \(S_{N}=\{1,3,4\}\). Let \(\pi=(14)\in\text{Sym}(S_{M})\cap\text{Sym}(S_{N})\). Then, \(L(\pi,\pi)=x_{34}x_{22}x_{23}x_{11}=L.\) Pictorially, since \(\pi\) permutes only indices of variables that divide both \(\text{LT}(M)\) and \(\text{LT}(N)\), after applying it to both the column numbers and the row numbers, we get right back to the monomial \(L\) we began with (see Figure 4).
To handle this situation, we introduce the definition of incidence:
**Definition 2.11**.: Let \(M\in D_{u}(A),N\in D_{v}(B)\) and \(L=\text{LCM}(\text{LT}(M),\text{LT}(N))=\prod_{i=1}^{l}x_{\alpha_{i},\beta_{i}} ^{(r_{i})}\).
- An _incidence_ of \(M\) and \(N\) is a variable \(x_{\alpha_{k},\beta_{k}}^{(r_{k})}\) that divides \(\gcd(\text{LT}(M),\text{LT}(N))\).
- Let \(\Sigma_{j}=\{i:r_{i}=j\}\) and \(I_{j}=\{i\in\Sigma_{j}:x_{\alpha_{i},\beta_{i}}^{(r_{i})}\) is an incidence\(\}\) for all \(j=1\ldots r\). Let \(I_{\text{all}}=\bigcup_{j}I_{j}=S_{M}\cap S_{N}\).
- Let \(\text{Sym}(I_{j})=\{\sigma\in\text{Sym}_{l}:\sigma(i)=i,\ \forall i\notin I_{j}\}\), let
\[\prod_{j=1}^{r}\text{Sym}(I_{j})=\text{Sym}(I_{1})\times\ldots\times\text{Sym}(I_{r})=\{\sigma\in\text{Sym}(I_{\text{all}})\ :\ r_{i}=r_{\sigma(i)}\}.\]
Figure 4. Example 2.10
- Let \(\overline{\operatorname{Sym}(S_{M})}=\operatorname{Sym}(S_{M})/\prod \operatorname{Sym}(I_{j})\) and \(\overline{\operatorname{Sym}(S_{N})}=\operatorname{Sym}(S_{N})/\prod \operatorname{Sym}(I_{j})\), where the quotients are as sets of right cosets.
- Let \(H=\{(\pi,\pi):\pi\in\prod\operatorname{Sym}(I_{j})\}\), as a subgroup of \(G=\operatorname{Sym}(S_{M})\times\operatorname{Sym}(S_{N})\).
**Lemma 2.12**.: _If \(\overline{(\sigma,\tau)}=\overline{(\sigma^{\prime},\tau^{\prime})}\) in \(G/H=\{gH\}\) (the set of right cosets of \(H\) in \(G\)), then \(L(\sigma,\tau)=L(\sigma^{\prime},\tau^{\prime})\)._
Proof.: First, we show that \((\pi,\pi)e=e\) for any \(\pi\in\prod\operatorname{Sym}(I_{j})\); in other words, \(H\) is a subgroup of the stabilizer of \(e\) in G. Indeed, fix \(\pi\in\prod\operatorname{Sym}(I_{j})\), and \(i=1,\ldots,l\). Then \((\pi(i),\pi(i))\in e=\{(1,1),(2,2),\ldots,(l,l)\}\), so \((\pi,\pi)e\subseteq e\). Also, any element of \(e\), say \((i,i)\), is equal to \((\pi(j),\pi(j))\in(\pi,\pi)e\) for \(j=\pi^{-1}(i)\), so \((\pi,\pi)e\supseteq e\), and therefore \((\pi,\pi)e=e\). Next, suppose that \(\overline{(\sigma,\tau)}=\overline{(\sigma^{\prime},\tau^{\prime})}\) in \(G/H\). Then \(\sigma=\sigma^{\prime}\pi\) and \(\tau=\tau^{\prime}\pi\) for some \(\pi\in\prod\operatorname{Sym}(I_{j})\), and \(L(\sigma,\tau)=L(\sigma^{\prime}\pi,\tau^{\prime}\pi)=\varphi((\sigma^{\prime }\pi,\tau^{\prime}\pi)e)=\varphi((\sigma^{\prime},\tau^{\prime})(\pi,\pi)e)= \varphi((\sigma^{\prime},\tau^{\prime})e)=L(\sigma^{\prime},\tau^{\prime})\).
The converse of the above lemma is not necessarily true, but is true when \((\sigma^{\prime},\tau^{\prime})=(1,1)\); see Proposition 2.18.
Next, we consider conditions for \(P_{\sigma,\cdot}=P_{\sigma^{\prime},\cdot}\) and \(P_{\cdot,\tau}=P_{\cdot,\tau^{\prime}}\).
**Proposition 2.13**.: _Let \(\sigma,\sigma^{\prime}\in\operatorname{Sym}(S_{M})\) and \(\tau,\tau^{\prime}\in\operatorname{Sym}(S_{N})\) be such that \(P_{\sigma,\cdot}\) and \(P_{\cdot,\tau}\) are non-trivial pseudominors. Then both of the following hold:_
\[P_{\sigma,\cdot}=P_{\sigma^{\prime},\cdot}\iff\overline{\sigma}=\overline{ \sigma^{\prime}}\text{ in }\overline{\operatorname{Sym}(S_{M})}\]
\[P_{\cdot,\tau}=P_{\cdot,\tau^{\prime}}\iff\overline{\tau}=\overline{\tau^{ \prime}}\text{ in }\overline{\operatorname{Sym}(S_{N})}\]
_Consequently we can denote \(P_{\overline{\sigma},\cdot}=P_{\sigma,\cdot}\), and \(P_{,\overline{\tau}}=P_{\cdot,\tau}\)._
Proof.: We only prove the first equivalence, as the second is similar.
( \(\Longleftarrow\) ): Suppose that \(\overline{\sigma}=\overline{\sigma^{\prime}}\), and let \(\sigma^{\prime}=\sigma\pi\) for some \(\pi\in\prod\operatorname{Sym}(I_{j})\). Then \(\pi\) induces an automorphism on \(\operatorname{Sym}(S_{N})\), so we denote \(\tau=\tau^{\prime}\pi^{-1}\), for each \(\tau^{\prime}\in\operatorname{Sym}(S_{N})\), and we get
\[P_{\sigma^{\prime},\cdot} =\sum_{\tau^{\prime}\in\operatorname{Sym}(S_{N})}\operatorname{ sgn}(\sigma^{\prime})\operatorname{sgn}(\tau^{\prime})L(\sigma^{\prime},\tau^{ \prime})=\sum_{\tau\in\operatorname{Sym}(S_{N})}\operatorname{sgn}(\sigma\pi) \operatorname{sgn}(\tau\pi)L(\sigma\pi,\tau\pi)\] \[=\sum_{\tau\in\operatorname{Sym}(S_{N})}\operatorname{sgn}(\sigma \pi)\operatorname{sgn}(\tau\pi)L(\sigma,\tau)\qquad\text{ by Lemma \ref{lem:P-1}}\] \[=\sum_{\tau\in\operatorname{Sym}(S_{N})}\operatorname{sgn}(\sigma )\operatorname{sgn}(\tau)L(\sigma,\tau)=P_{\sigma,\cdot}.\]
( \(\Longrightarrow\) ): Suppose that \(\sigma,\sigma^{\prime}\) are such that \(P_{\sigma,\cdot}=P_{\sigma^{\prime},\cdot}\). Then by Proposition 2.8, we have that
\[\left(\operatorname{sgn}(\sigma)\prod_{k\notin S_{N}}x_{\alpha_{\sigma(k)},\beta _{k}}^{(r_{k})}\right)\begin{vmatrix}x_{\alpha_{\sigma(j_{1})},\beta_{j_{1}}}^ {(r_{j_{1}})}&\ldots&x_{\alpha_{\sigma(j_{1})},\beta_{j_{v}}}^{(r_{j_{1}})}\\ \vdots&\ddots&\vdots\\ x_{\alpha_{\sigma(j_{v})},\beta_{j_{1}}}^{(r_{j_{v}})}&\ldots&x_{\alpha_{\sigma(j_{v })},\beta_{j_{v}}}^{(r_{j_{v}})}\end{vmatrix}\]
\[=\left(\operatorname{sgn}(\sigma^{\prime})\prod_{k\notin S_{N}}x_{\alpha_{\sigma^{ \prime}(k)},\beta_{k}}^{(r_{k})}\right)\begin{vmatrix}x_{\alpha_{\sigma^{\prime }(j_{1})},\beta_{j_{1}}}^{(r_{j_{1}})}&\ldots&x_{\alpha_{\sigma^{\prime}(j_{1}) },\beta_{j_{v}}}^{(r_{j_{1}})}\\ \vdots&\ddots&\vdots\\ x_{\alpha_{\sigma^{\prime}(j_{v})},\beta_{j_{1}}}^{(r_{j_{v}})}&\ldots&x_{ \alpha_{\sigma^{\prime}(j_{v})},\beta_{j_{v}}}^{(r_{j_{v}})}\end{vmatrix}.\]
Note that \(P_{\sigma,\cdot}\) is a non-trivial pseudominor, so the determinant on the left side of the equation is a linear combination of \(v!\) monomials whose greatest common divisor is \(1\) (here we used the fact that \(v>1\)). The same is true for the right side. Therefore,
\[\begin{vmatrix}x_{\alpha_{\sigma(j_{1})},\beta_{j_{1}}}^{(r_{j_{1}})}&\ldots&x _{\alpha_{\sigma(j_{1})},\beta_{j_{v}}}^{(r_{j_{1}})}\\ \vdots&\ddots&\vdots\\ x_{\alpha_{\sigma(j_{v})},\beta_{j_{1}}}^{(r_{j_{v}})}&\ldots&x_{\alpha_{ \sigma(j_{v})},\beta_{j_{v}}}^{(r_{j_{v}})}\end{vmatrix}=\pm\begin{vmatrix}x_{ \alpha_{\sigma^{\prime}(j_{1})},\beta_{j_{1}}}^{(r_{j_{1}})}&\ldots&x_{\alpha _{\sigma^{\prime}(j_{1})},\beta_{j_{v}}}^{(r_{j_{1}})}\\ \vdots&\ddots&\vdots\\ x_{\alpha_{\sigma^{\prime}(j_{v})},\beta_{j_{1}}}^{(r_{j_{v}})}&\ldots&x_{ \alpha_{\sigma^{\prime}(j_{v})},\beta_{j_{v}}}^{(r_{j_{v}})}\end{vmatrix}, \prod_{k\notin S_{N}}x_{\alpha_{\sigma(k)},\beta_{k}}^{(r_{k})}=\prod_{k\notin S _{N}}x_{\alpha_{\sigma^{\prime}(k)},\beta_{k}}^{(r_{k})}.\]
This implies that \(\sigma=\sigma^{\prime}\) on \(S_{M}\setminus S_{N}\). Now, let \(\pi\in\operatorname{Sym}(S_{M})\) be such that \(\sigma^{\prime}=\sigma\pi\). Then, for every \(k\in S_{M}\setminus S_{N}\),
\[\pi(k)=\sigma^{-1}\sigma^{\prime}(k)=\sigma^{-1}\sigma(k)=k,\]
implying that \(\pi\in\prod\operatorname{Sym}(I_{j})\), and thus, \(\overline{\sigma}=\overline{\sigma^{\prime}}\) in \(\overline{\operatorname{Sym}(S_{M})}\).
### \(P(M,N)\) and sufficiently small leading terms
**Definition 2.14**.: (i) For each \(M,N\in D_{u}(A)\cup D_{v}(B)\), define an expression
\[P(M,N)=\sum_{\begin{subarray}{c}\overline{\sigma}\in\operatorname{Sym}(S_{M}) \\ \overline{\sigma}\neq\overline{1}\end{subarray}}P_{\overline{\sigma},\cdot}- \sum_{\begin{subarray}{c}\overline{\tau}\in\operatorname{Sym}(S_{N})\\ \overline{\tau}\neq\overline{1}\end{subarray}}P_{\cdot,\overline{\tau}} \tag{3}\]
where each trivial pseudominor is replaced by \(0\).
(ii) We say that \(P(M,N)\) has _sufficiently small leading terms_ if _all_ the leading terms of \(P_{\overline{\sigma},\cdot}\) (for \(\overline{\sigma}\neq\overline{1}\)) and of \(P_{\cdot,\overline{\tau}}\) (for \(\overline{\tau}\neq\overline{1}\)) in the right side of (3) are less than \(L(=L(1,1))\). Otherwise we say that \(P(M,N)\) does not have sufficiently small leading terms.
**Proposition 2.15**.: _For each \(M,N\in D_{u}(A)\cup D_{v}(B)\), \(P(M,N)\) is expressed as a monomial-coefficient combination of elements of \(D_{u}(A)\cup D_{v}(B)\), and we have the following equality as polynomials:_
\[S(M,N)=P(M,N).\]
Proof.: Note that the statement holds in the degenerate case where \(M=N\): in this case, all variables of \(L\) are incidences and Definition 2.14 renders \(P(M,M)\) as an empty sum, which we may take to be \(0\); on the other hand, \(S(M,M)=M-M=0=P(M,M)\). So in the rest of the proof we assume \(M\neq N\).
The first statement is true by Proposition 2.8. To show \(P(M,N)=S(M,N)\) as polynomials, note that \(P(M,N)-S(M,N)\) as a polynomial in \(K[x_{ij}]\) can be naturally lifted to a polynomial in \(\mathbb{Z}[x_{ij}]\) because all coefficients are \(\pm 1\), so it suffices to show that \(P(M,N)-S(M,N)=0\in\mathbb{Z}[x_{ij}]\).
Let \(h=|H|\) (\(=|\prod\operatorname{Sym}(I_{j})|\)) and let \(\{1,\sigma_{1},\ldots,\sigma_{h^{\prime}}\}\) and \(\{1,\tau_{1},\ldots,\tau_{h^{\prime\prime}}\}\) be complete sets of coset representatives for \(\operatorname{Sym}(S_{M})/\prod\operatorname{Sym}(I_{j})\) and \(\operatorname{Sym}(S_{N})/\prod\operatorname{Sym}(I_{j})\), respectively.
Then, the cardinality of each coset in \(\mathrm{Sym}(S_{M})/\prod\mathrm{Sym}(I_{j})\) and \(\mathrm{Sym}(S_{N})/\prod\mathrm{Sym}(I_{j})\) is \(h\), and by Proposition 2.13, equation (2) reduces to
\[hP_{1,\cdot}+hP_{\sigma_{1},\cdot}+\ldots+hP_{\sigma_{h^{\prime},\cdot}}=hP_{ \cdot,1}+hP_{\cdot,\tau_{1}}+\ldots+hP_{\cdot,\tau_{h^{\prime\prime}}}.\]
Since \(h\) is a non-zero divisor in \(\mathbb{Z}\), we can divide both sides by \(h\) and get
\[P_{1,\cdot}+\sum_{\begin{subarray}{c}\overline{\sigma}\in\mathrm{Sym}(S_{M}) \\ \overline{\sigma}\neq\mathbbm{T}\end{subarray}}P_{\sigma,\cdot}=P_{\cdot,1}+ \sum_{\begin{subarray}{c}\overline{\tau}\in\mathrm{Sym}(S_{N})\\ \overline{\tau}\neq\mathbbm{T}\end{subarray}}P_{\cdot,\tau}.\]
Then, by Proposition 2.8,
\[\frac{L}{\mathrm{LT}(N)}N+\sum_{\begin{subarray}{c}\overline{\sigma}\in \mathrm{Sym}(S_{M})\\ \overline{\sigma}\neq\mathbbm{T}\end{subarray}}P_{\sigma,\cdot}=\frac{L}{ \mathrm{LT}(M)}M+\sum_{\begin{subarray}{c}\overline{\tau}\in\mathrm{Sym}(S_{N })\\ \overline{\tau}\neq\mathbbm{T}\end{subarray}}P_{\cdot,\tau}\]
and, after rearranging we achieve
\[S(M,N)=\frac{L}{\mathrm{LT}(M)}M-\frac{L}{\mathrm{LT}(N)}N=\sum_{ \begin{subarray}{c}\overline{\sigma}\in\mathrm{Sym}(S_{M})\\ \overline{\sigma}\neq\mathbbm{T}\end{subarray}}P_{\sigma,\cdot}-\sum_{ \begin{subarray}{c}\overline{\tau}\in\mathrm{Sym}(S_{N})\\ \overline{\tau}\neq\mathbbm{T}\end{subarray}}P_{\cdot,\tau}=P(M,N).\]
It will be convenient to think of the entries of the \(3\)-dimensional tensor \(X\) as ordered triples in three dimensional space. We will denote the location of each variable \(x_{\alpha_{i},\beta_{i}}^{(r_{i})}\) of \(L\) by the _point_\(p_{i}=(\alpha_{i},\beta_{i},r_{i})\). We will refer to a point that identifies the location of an incidence as an _incidence_, as well. The monomial ordering naturally induces an ordering on all points by \(p_{i}<p_{j}\iff x_{\alpha_{i},\beta_{i}}^{(r_{i})}<x_{\alpha_{j},\beta_{j}}^{( r_{j})}\). The action of \(G\) changes the locations of certain points of \(L\) by changing their row and/or column number (but not their page number). Denote
\[(\sigma,\tau)p_{i}=(\alpha_{\sigma(i)},\beta_{\tau(i)},r_{i})\text{ for all }i=1,\ldots,l.\]
Note that \((\sigma,\tau)p_{i}=p_{i}\Leftrightarrow\) " \(\sigma(i)=i\) and \(\tau(i)=i\) ". We will refer to the points \(p_{i}\) such that \((\sigma,\tau)p_{i}=p_{i}\) as _fixed points_ of \((\sigma,\tau)\).
We now describe the conditions under which \(P(M,N)\) has sufficiently small leading terms (see Definition 2.14). To do so, we study all monomials of the form \(L(\sigma,\tau)\). The next two lemmas lead to Proposition 2.18 which describes the necessary conditions for \(L(\sigma,\tau)=L\).
**Lemma 2.16**.: _Assume \(L(\sigma,\tau)=L\), and the subset \(P\subseteq\{1,\ldots,l\}\) satisfies the condition that \((\sigma,\tau)p_{i}=p_{i}\) for all \(i\in\{1,\ldots,l\}\setminus P\). Let_
\[R_{P}=\{(\alpha_{i},r_{i}):i\in S_{M}\cap P\},\quad C_{P}=\{(\beta_{i},r_{i}):i \in S_{N}\cap P\}. \tag{4}\]
_If \(i\in P\) is such that \((\alpha_{i},r_{i})\notin R_{P}\) or \((\beta_{i},r_{i})\notin C_{P}\), then \((\sigma,\tau)p_{i}=p_{i}\), that is, \(p_{i}\) is a fixed point of \((\sigma,\tau)\)._
Proof.: Without loss of generality, assume \(i\in P\) satisfies \((\beta_{i},r_{i})\notin C_{P}\). The assumption \(L(\sigma,\tau)=L\) implies that \((\sigma,\tau)p_{j}=p_{i}\) for some \(1\leq j\leq l\). Then \(i\notin S_{N}\) by the definition of \(C_{P}\), which implies \(i\in S_{M}\) and \(\tau(i)=i\), and thus, \((\alpha_{j},\beta_{j},r_{j})=p_{j}=(\sigma^{-1},\tau^{-1})p_{i}=(\alpha_{\sigma^{-1}(i)},\beta_{i},r_{i})\). This implies \(\beta_{j}=\beta_{i}\), \(r_{j}=r_{i}\). We consider the following cases separately:
_Case 1._ \(j\notin P\).: Then \(p_{i}=(\sigma,\tau)p_{j}=p_{j}\) which implies \(i=j\), \((\sigma,\tau)p_{i}=p_{i}\).
_Case 2._ \(j\in S_{N}\cap P\).: Then \((\beta_{i},r_{i})=(\beta_{j},r_{j})\in C_{P}\), a contradiction to our assumption.
_Case 3._ \(j\in P\setminus S_{N}\).: Then \(j\in S_{M}\). The fact that \(\beta_{i}=\beta_{j}\) and \(r_{i}=r_{j}\) implies \(i=j\) because both \(i,j\) are in \(S_{M}\).
We remark that in view of tensors, the above lemma says that if a point does not lie in the same row fiber as any point of \(M\) that corresponds to the subset \(P\), or if it does not lie in the same column fiber as any point of \(N\) that corresponds to \(P\), then it is a fixed point of \((\sigma,\tau)\). See [13, Figure 2.3.1] for diagram of fibers.
**Lemma 2.17**.: _Assume the same condition as in Lemma 2.16. If \(P\setminus I_{\rm all}\neq\emptyset\), then there exists \(j\in P\setminus I_{\rm all}\) for which \((\alpha_{j},r_{j})\notin R_{P}\) or \((\beta_{j},r_{j})\notin C_{P}\)._
Proof.: We prove by contradiction. Assume \(P\setminus I_{\rm all}\neq\emptyset\), and \((\alpha_{j},r_{j})\in R_{P}\) and \((\beta_{j},r_{j})\in C_{P}\) for every \(j\in P\setminus I_{\rm all}\). Let \(i_{1},i_{2},\ldots,i_{t}\) be a sequence of numbers in \(P\) that contains \(j\) and satisfies the following two conditions:
* \(r_{i_{1}}=r_{i_{2}}=\cdots=r_{i_{t}}\),
* \(i_{k}\neq i_{k+1}\), \(\alpha_{i_{k}}=\alpha_{i_{k+1}}\) if \(i_{k}\in P\setminus S_{M}\) and \(\beta_{i_{k}}=\beta_{i_{k+1}}\) if \(i_{k}\in P\setminus S_{N}\), for \(k=1,\ldots,t-1\).
Observe that all of \(i_{1},\ldots,i_{t}\) must be in \(P\setminus I_{\rm all}\), because \(p_{i_{k}}\) and \(p_{i_{k+1}}\) share either a row or a column, but an incidence point, \(p_{i}\) for \(i\in I_{\rm all}\), cannot share a row or column with any other point. Furthermore, \(i_{1},\ldots,i_{t}\) must alternate between being in \(P\setminus S_{M}\) and \(P\setminus S_{N}\). Therefore, for each \(k=1,\ldots,t-2\), the points \(p_{i_{k}}\) and \(p_{i_{k+2}}\) must be in NW-SE position, that is,
\[(\alpha_{i_{k+2}}-\alpha_{i_{k}})(\beta_{i_{k+2}}-\beta_{i_{k}})>0.\]
Then \(p_{i_{1}},\ldots,p_{i_{t}}\) form a zig-zag chain, as shown in Figure 5.
Now assume that the sequence \(i_{1},\ldots,i_{t}\) is of maximal length, \(p_{i_{1}}\) is the NW endpoint of the zig-zag chain and \(p_{i_{t}}\) is the SE endpoint. Without loss of generality assume \(i_{1}\in S_{M}\).
Case 1: \(t=1\), or "\(t>1\) and \(p_{i_{1}}p_{i_{2}}\) is horizontal". Since \((\beta_{i_{1}},r_{i_{1}})\in C_{P}\), there exists \(i_{0}\in S_{N}\cap P\) such that \(p_{i_{0}}\) and \(p_{i_{1}}\) are in the same column on the same page. Note that \(i_{0}\neq i_{1}\) because \(i_{0}\in S_{N}\) and \(i_{1}\notin S_{N}\). So \(i_{0},i_{1},\ldots,i_{t}\) is a longer sequence satisfying conditions (i) and (ii), a contradiction.
Case 2: \(t>1\) and \(p_{i_{1}}p_{i_{2}}\) is vertical.
If \(i_{t}\in S_{M}\), then \(p_{i_{t-1}}p_{i_{t}}\) is horizontal, thus by \((\beta_{i_{t}},r_{i_{t}})\in C_{P}\) there exists \(i_{t+1}\in S_{N}\cap P\) such that \(p_{i_{t+1}}\) and \(p_{i_{t}}\) are in the same column on the same page. Note that \(i_{t+1}\neq i_{t}\) because \(i_{t}\in S_{M}\) and \(i_{t+1}\notin S_{M}\). So \(i_{1},\ldots,i_{t+1}\) is a longer sequence satisfying conditions (i) and (ii), a contradiction.
Figure 5. Zig-zag chains
If \(i_{t}\in S_{N}\), then \(p_{i_{t-1}}p_{i_{t}}\) is vertical, thus by \((\alpha_{i_{t}},r_{i_{t}})\in R_{P}\) there exists \(i_{t+1}\in S_{M}\cap P\) such that \(p_{i_{t+1}}\) and \(p_{i_{t}}\) are in the same row on the same page. Note that \(i_{t+1}\neq i_{t}\) because \(i_{t}\in S_{N}\) and \(i_{t+1}\notin S_{N}\). So \(i_{1},\ldots,i_{t+1}\) is a longer sequence satisfying conditions (i) and (ii), a contradiction.
**Proposition 2.18**.: _If \(L(\sigma,\tau)=L\), then \((\sigma,\tau)\in H\), that is, \((\sigma,\tau)=(\pi,\pi)\) for \(\pi\in\prod\operatorname{Sym}(I_{j})\)._
Proof.: Suppose that \(L(\sigma,\tau)=L\). We recursively define sets \(P_{0},P_{1},\ldots,P_{k}\) as follows. Let \(P_{0}=\{1,\ldots,l\}\). Define \(R_{P_{0}}\), \(C_{P_{0}}\) as in (4). If \(P_{0}\setminus I_{\operatorname{all}}\neq\emptyset\), then there exists \(j\in P_{0}\setminus I_{\operatorname{all}}\) for which \((\alpha_{j},r_{j})\notin R_{P_{0}}\) or \((\beta_{j},r_{j})\notin C_{P_{0}}\), by Lemma 2.17. So, \((\sigma,\tau)p_{j}=p_{j}\) by Lemma 2.16. Let \(P_{1}=P_{0}\setminus\{j\}\) and repeat the argument. Eventually we will obtain \(P_{k}=I_{\operatorname{all}}\) and have \((\sigma,\tau)p_{i}=p_{i}\) for all \(i\notin I_{\operatorname{all}}\). So \(\sigma\) and \(\tau\) restrict to identity permutations outside \(I_{\operatorname{all}}\).
Since \(L(\sigma,\tau)=L\), we have the following equality of multisets.
\[\{(\sigma,\tau)p_{j}:j=1,\ldots,l\}=\{p_{j}:j=1,\ldots,l\}\]
After removing \(p_{i}\) for all \(i\notin I_{\operatorname{all}}\) from both sides, we have the equality of multisets
\[\{(\sigma,\tau)p_{j}:j\in I_{\operatorname{all}}\}=\{p_{j}:j\in I_{ \operatorname{all}}\},\]
so both sides consist of \(|I_{\operatorname{all}}|\) distinct points. Then, there exists a permutation \(\pi\in\prod\operatorname{Sym}(I_{j})\) (also viewed as a permutation in \(\operatorname{Sym}(S_{M})\) or \(\operatorname{Sym}(S_{N})\)) such that \((\sigma,\tau)p_{j}=p_{\pi(j)}\) for all \(j\in I_{\operatorname{all}}\). Therefore, \((\alpha_{\sigma(j)},\beta_{\tau(j)})=(\alpha_{\pi(j)},\beta_{\pi(j)})\) for all \(j\in I_{\operatorname{all}}\). Since \(\alpha_{i}\) (\(i\in I_{\operatorname{all}}\)) are all distinct, we must have \(\sigma(j)=\pi(j)\) for all \(j\in I_{\operatorname{all}}\), thus \(\sigma=\pi\). Similarly, \(\tau=\pi\), so \((\sigma,\tau)=(\pi,\pi)\), which implies the desired conclusion.
**Proposition 2.19**.: _The polynomial \(P(M,N)\) contains only monomials of the form \(\pm L(\sigma,\tau)\), none of which are equal to \(\pm L\)._
Proof.: If \(L(\sigma,\tau)=L\), then \((\sigma,\tau)=(\pi,\pi)\) with \(\pi\in\prod\operatorname{Sym}(I_{j})\) by Proposition 2.18. Then \(\bar{\sigma}=\bar{1}\in\operatorname{Sym}(S_{M})/\prod\operatorname{Sym}(I_{j})\), \(\bar{\tau}=\bar{1}\in\operatorname{Sym}(S_{N})/\prod\operatorname{Sym}(I_{j})\), so \(L(\sigma,\tau)\) does not appear in either sum on the right side of (3), and therefore is not contained in \(P(M,N)\).
### Violation
The purpose of this section is to give a simple criterion (Proposition 2.26) for "\(P(M,N)\) has sufficiently small leading terms" in terms of violation, a simple condition to be introduced below.
**Definition 2.20**.: Given \(M\in D_{u}(A)\) and \(N\in D_{v}(B)\), denote
\[\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(N))=x_{\alpha_{1}, \beta_{1}}^{(r_{1})}x_{\alpha_{2},\beta_{2}}^{(r_{2})}\ldots x_{\alpha_{l}, \beta_{l}}^{(r_{l})}.\]
Let \(i,j,k\) be distinct indices such that \(i\in S_{M},j\in S_{N},k\in I_{\operatorname{all}}\). If
\[\alpha_{i}\leq\alpha_{j}<\alpha_{k},\ \ \ \ \beta_{j}\leq\beta_{i}<\beta_{k},\ \ \ \ p_{i}\neq p_{j}\ \ \ \ \text{ and }\ \ \ r_{i}=r_{j}=r_{k},\]
then the triple \((p_{i},p_{j},p_{k})\) is called a _violation triple_, or simply a _violation_, of \((M,N)\).
In light of the diagrams introduced in Example 2.5, we may visualize a violation as the intersection of a horizontal ray and a vertical ray lying NW of an incidence, all in the same page, where the intersection may lie at one of the points \(p_{i}\) or \(p_{j}\) but not both (see Figure 6).
Definition 2.20 requires that \(p_{i}\neq p_{j}\), so one of the two inequalities \(\alpha_{i}\leq\alpha_{j}\) or \(\beta_{j}\leq\beta_{i}\) is strict. Also, \(p_{i}\) and \(p_{j}\) are not in NW-SE position, so they must not be from the same leading term. Thus, for any violation \((p_{i},p_{j},p_{k})\) we have that \(i\in S_{M}\setminus S_{N}\) and \(j\in S_{N}\setminus S_{M}\).
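For readers who want to experiment with these notions computationally, the following Python sketch (ours, not part of the paper's argument) transcribes the inequalities of Definition 2.20; the function name and the encoding of points as \((\alpha,\beta,r)\) triples are our own choices, and the membership conditions \(i\in S_{M}\), \(j\in S_{N}\), \(k\in I_{\rm all}\) are left to the caller.

```python
# A direct transcription of Definition 2.20, assuming points are (alpha, beta, page)
# triples and that i in S_M, j in S_N, k in I_all has already been checked.
def is_violation(p_i, p_j, p_k):
    (ai, bi, ri), (aj, bj, rj), (ak, bk, rk) = p_i, p_j, p_k
    return (ri == rj == rk                  # r_i = r_j = r_k: same page
            and ai <= aj < ak               # alpha_i <= alpha_j < alpha_k
            and bj <= bi < bk               # beta_j  <= beta_i  < beta_k
            and (ai, bi) != (aj, bj))       # p_i != p_j
```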
For the remainder of this section, fix \((\sigma,\tau)\in G\), let
\[p_{1}>p_{2}>\ldots>p_{l},\quad q_{1}\geq q_{2}\geq\ldots\geq q_{l}\]
be points corresponding to the variables of \(L\) and \(L(\sigma,\tau)\), respectively. Define
\[\hat{\mathrm{j}}=\min\{1\leq i\leq l:p_{i}\neq q_{i}\},\quad\hat{\mathrm{k}}=\min\{1\leq i\leq l:(\sigma,\tau)p_{i}\neq p_{i}\}.\]
It is easy to see that
\[p_{i}\text{ is not a fixed point of }(\sigma,\tau)\quad\Longrightarrow\quad \hat{\mathrm{k}}\leq i\quad\Longleftrightarrow\quad p_{i}\leq p_{\hat{\mathrm{ k}}} \tag{5}\]
**Lemma 2.21**.: _If \(p_{i}\) is not a fixed point of \((\sigma,\tau)\), then \(p_{\sigma(i)},p_{\tau(i)}\) are not fixed points._
Proof.: We prove the contrapositive: if \(p_{\sigma(i)}\) or \(p_{\tau(i)}\) is a fixed point, then \(p_{i}\) is a fixed point. Without loss of generality, we assume \(p_{\sigma(i)}\) is a fixed point and show that \(\sigma(i)=i\), therefore \(p_{i}=p_{\sigma(i)}\) is a fixed point. This is obvious if \(i\notin S_{M}\). So we assume \(i\in S_{M}\). Then \(\sigma(i),\sigma^{2}(i)\) are in \(S_{M}\). Meanwhile, \((\sigma,\tau)p_{\sigma(i)}=p_{\sigma(i)}\) implies \(\alpha_{\sigma^{2}(i)}=\alpha_{\sigma(i)}\). Since row numbers of variables of \(\mathrm{LT}(M)\) must all be distinct, we have that \(\sigma^{2}(i)=\sigma(i)\), which implies \(\sigma(i)=i\).
**Lemma 2.22**.: _For any \(i\), if \((\sigma,\tau)p_{i}>p_{i}\) and \((\sigma,\tau)p_{i}\geq p_{\hat{\mathrm{k}}}\), then the following holds:_
* \(i\in I_{\mathrm{all}}\)_,_ \(\sigma(i)\in S_{M}\)_,_ \(\tau(i)\in S_{N}\)_;_
* \(r_{\sigma(i)}=r_{\tau(i)}=r_{i}\)_;_
* \(\alpha_{\sigma(i)}\leq\alpha_{\tau(i)}<\alpha_{i}\)_,_ \(\beta_{\tau(i)}\leq\beta_{\sigma(i)}<\beta_{i}\)_._
Figure 6. Violations are visualized as intersections: (I) general case; (II), (III) special cases. The three points \(p_{i},p_{j},p_{k}\) are on the same page.

Proof.: The point \(p_{\tau(i)}=(\alpha_{\tau(i)},\beta_{\tau(i)},r_{\tau(i)})\) is a point of \(L\) which shares a column with \((\sigma,\tau)p_{i}=(\alpha_{\sigma(i)},\beta_{\tau(i)},r_{i})\) in matrix \(B\). Since \(p_{i}\) is not fixed, \(p_{\tau(i)}\) is not fixed by Lemma 2.21. Then \((\sigma,\tau)p_{i}\geq p_{\hat{\mathrm{k}}}\geq p_{\tau(i)}\), where the second inequality follows from (5). Therefore, \(p_{\tau(i)}\) must lie weakly to the South (i.e., either on or strictly to the South) of \((\sigma,\tau)p_{i}\) in matrix \(B\). Similarly, \(p_{\sigma(i)}\) must share the same row, and lie weakly to the East of \((\sigma,\tau)p_{i}\) in matrix \(A\). Thus,
\[r_{i}<r_{\tau(i)}\text{ or }(r_{i}=r_{\tau(i)}\text{ and }\alpha_{\sigma(i)}\leq \alpha_{\tau(i)}) \tag{6}\]
\[r_{i}<r_{\sigma(i)}\text{ or }(r_{i}=r_{\sigma(i)}\text{ and }\beta_{\tau(i)} \leq\beta_{\sigma(i)}) \tag{7}\]
(i) Suppose that \(i\notin I_{\text{all}}\). Then \(i\in(S_{M}\setminus S_{N})\cup(S_{N}\setminus S_{M})\). Without loss of generality, assume that \(i\in S_{M}\setminus S_{N}\). Then \(\sigma(i)\in S_{M}\) and \(\tau(i)=i\). Observe that \(\sigma(i)\neq i\), otherwise \((\sigma,\tau)p_{i}=p_{i}\) contradicts our assumption. Therefore, (6) reduces to \(\alpha_{\sigma(i)}\leq\alpha_{i}\), and (7) reduces to \(r_{i}<r_{\sigma(i)}\) or (\(r_{i}=r_{\sigma(i)}\) and \(\beta_{i}\leq\beta_{\sigma(i)}\)). Thus, \(p_{i}\) and \(p_{\sigma(i)}\) are distinct points of the same leading term, \(\operatorname{LT}(M)\), which are not in NW-SE position in \(A\), a contradiction. So \(i\in I_{\text{all}}\). This implies that \(\sigma(i)\in S_{M}\) and \(\tau(i)\in S_{N}\), so (i) holds.
(ii) It follows from (i) that both \(i\) and \(\sigma(i)\) are in \(S_{M}\), so either \(\sigma(i)=i\) or \(p_{i}\) must be in NW-SE position to \(p_{\sigma(i)}\) in \(A\). Similarly, either \(\tau(i)=i\) or \(p_{i}\) must be in NW-SE position to \(p_{\tau(i)}\) in \(B\).
To show \(r_{\sigma(i)}=r_{i}\): we already know that \(r_{i}\leq r_{\sigma(i)}\) by (7). Suppose that \(r_{i}<r_{\sigma(i)}\). Then \(p_{i}>p_{\sigma(i)}\), and thus \(\alpha_{i}<\alpha_{\sigma(i)}\) because \(p_{i}\) and \(p_{\sigma(i)}\) are in NW-SE position in \(A\). Then \(\beta_{i}>\beta_{\tau(i)}\) because of the assumption \((\sigma,\tau)p_{i}>p_{i}\). This forces \(p_{\tau(i)}\) to lie strictly to the NW of \(p_{i}\) in \(B\). Comparing with (6), we get \(r_{\tau(i)}=r_{i}\) and \(\alpha_{\sigma(i)}\leq\alpha_{\tau(i)}<\alpha_{i}\), but this contradicts \(\alpha_{i}<\alpha_{\sigma(i)}\). Therefore \(r_{\sigma(i)}=r_{i}\).
By a similar argument, \(r_{\tau(i)}=r_{i}\).
(iii) Since (ii) holds, (6) and (7) reduce to \(\alpha_{\sigma(i)}\leq\alpha_{\tau(i)}\), \(\beta_{\tau(i)}\leq\beta_{\sigma(i)}\) (so \(p_{\sigma(i)}\) and \(p_{\tau(i)}\) are in weakly NE-SW position). It remains to prove \(\alpha_{\tau(i)}<\alpha_{i}\) and \(\beta_{\sigma(i)}<\beta_{i}\), that is, \(p_{i}\) lies strictly to the NE of \((\alpha_{\tau(i)},\beta_{\sigma(i)})\).
In the proof of (ii), we have noted: either \(\sigma(i)=i\) or \(p_{i}\) must be in NW-SE position to \(p_{\sigma(i)}\) in \(A\); either \(\tau(i)=i\) or \(p_{i}\) must be in NW-SE position to \(p_{\tau(i)}\) in \(B\). Combining with the fact that \(p_{\sigma(i)}\) and \(p_{\tau(i)}\) are in weakly NE-SW position, we see that \(p_{i}\) either lies weakly to the NW of \((\sigma,\tau)p_{i}=(\alpha_{\sigma(i)},\beta_{\tau(i)})\), or lies weakly to the NE of \((\alpha_{\tau(i)},\beta_{\sigma(i)})\). However the former contradicts the assumption that \((\sigma,\tau)p_{i}>p_{i}\), so the latter must hold. We consider three cases:
Case 1: \(\sigma(i)\neq i\) and \(\tau(i)\neq i\). In this case we immediately conclude that \(p_{i}\) must lie strictly to the NE of \((\alpha_{\tau(i)},\beta_{\sigma(i)})\).
Case 2: \(\sigma(i)=i\). In this case \(p_{\tau(i)}\) is weakly to the SW of \(p_{\sigma(i)}(=p_{i})\), and at the same time is in NW-SE position to \(p_{i}\), unless \(\tau(i)=i\). This forces \(\tau(i)=i\). Then \((\sigma,\tau)p_{i}=p_{i}\), contradicting the assumption \((\sigma,\tau)p_{i}>p_{i}\).
Case 3: \(\tau(i)=i\). We obtain a contradiction using a similar argument as in Case 2.
This completes the proof of (iii).
**Remark 2.23**.: As a result of Lemma 2.22, if \((\sigma,\tau)p_{i}>p_{i}\) and \((\sigma,\tau)p_{i}\geq p_{\xi}\), then the points \((\sigma,\tau)p_{i}\), \(p_{\sigma(i)}\), \(p_{\tau(i)}\) and \(p_{i}\) all lie on the same \(r_{i}\)-th page, and we can have only four possible
arrangements as shown in Figure 7, depending on whether the inequalities in (iii) are strict.
**Lemma 2.24**.: _Suppose that no points \(p_{1},\ldots,p_{\hat{\mathrm{j}}-1}\) are in a violation triple of \((M,N)\). Then, there exists \((\sigma^{\prime},\tau^{\prime})\in G\) such that \(L(\sigma,\tau)=L(\sigma^{\prime},\tau^{\prime})\) and \(p_{1},\ldots,p_{\hat{\mathrm{j}}-1}\) are fixed under \((\sigma^{\prime},\tau^{\prime})\)._
Proof.: If \(\hat{\mathrm{k}}\geq\hat{\mathrm{j}}\), then \(p_{1},\ldots,p_{\hat{\mathrm{j}}-1}\) are fixed by \((\sigma,\tau)\), so we can simply let \((\sigma^{\prime},\tau^{\prime})=(\sigma,\tau)\). So we assume that \(\hat{\mathrm{k}}<\hat{\mathrm{j}}\) in the rest of the proof. Then, \(p_{\hat{\mathrm{k}}}=q_{\hat{\mathrm{k}}}=(\sigma,\tau)p_{i}\) for some \(i\). By the definition of \(\hat{\mathrm{k}}\), we have \(i>\hat{\mathrm{k}}\), so \((\sigma,\tau)p_{i}=p_{\hat{\mathrm{k}}}>p_{i}\), and thus the conditions of Lemma 2.22 are met. So one of the four cases (I)-(IV) in Remark 2.23 must hold.
If (I) is true, then the configuration is that of Figure 8, which contradicts the fact that no row or column may contain more than one variable of each leading term. Also, we have assumed that no violation triple contains points \(>p_{\hat{\mathrm{j}}}\), so (II) and (III) cannot occur (otherwise \(p_{\hat{\mathrm{k}}}\) would belong to a violation triple and satisfy \(p_{\hat{\mathrm{k}}}>p_{\hat{\mathrm{j}}}\), contradicting the assumption; see (II) and (III) of Figure 6). Therefore, we must have (IV), meaning that \(\sigma(i)=\tau(i)=\hat{\mathrm{k}}\in I_{\rm all}\).
Figure 7. Figure of Remark 2.23

Figure 8. Figure of Lemma 2.24

Let \(\pi\in\prod\mathrm{Sym}(I_{j})\) be the permutation that transposes \(\hat{\mathrm{k}}\) and \(i\). Thus, by Proposition 2.12, \(L(\sigma\pi,\tau\pi)\) is the same monomial as \(L(\sigma,\tau)\), and \((\sigma\pi,\tau\pi)\) fixes the point \(p_{\hat{\mathrm{k}}}\). So, let \((\sigma^{\prime},\tau^{\prime})=(\sigma\pi,\tau\pi)\). By repeating the above argument as necessary, at most finitely many times, we will obtain a \((\sigma^{\prime},\tau^{\prime})\) such that \(L(\sigma,\tau)=L(\sigma^{\prime},\tau^{\prime})\) and all the points \(p_{1},\ldots,p_{\hat{\mathrm{j}}-1}\) are fixed by \((\sigma^{\prime},\tau^{\prime})\).
**Proposition 2.25**.: _If \(L(\sigma,\tau)>L\), then there exists a violation of \((M,N)\)._
Proof.: Suppose that \(L(\sigma,\tau)>L\) for some \(\sigma\in\mathrm{Sym}(S_{M}),\tau\in\mathrm{Sym}(S_{N})\). This means that \(q_{\hat{\mathrm{j}}}>p_{\hat{\mathrm{j}}}\). If any of the points \(p_{1},\ldots,p_{\hat{\mathrm{j}}-1}\) is in a violation triple of \((M,N)\), then we are finished. Otherwise, by Lemma 2.24, after replacing \((\sigma,\tau)\) if necessary, we may assume that \(p_{1},\ldots,p_{\hat{\mathrm{j}}-1}\) are fixed points of \((\sigma,\tau)\); since \(p_{\hat{\mathrm{k}}}\) is not fixed, we have \(\hat{\mathrm{k}}\neq 1,\ldots,\hat{\mathrm{j}}-1\), thus \(p_{\hat{\mathrm{j}}}\geq p_{\hat{\mathrm{k}}}\). Combining the above two inequalities, we get \(q_{\hat{\mathrm{j}}}>p_{\hat{\mathrm{j}}}\geq p_{\hat{\mathrm{k}}}\).
Let \(\hat{\mathrm{i}}\) be such that \((\sigma,\tau)p_{\hat{\mathrm{i}}}=q_{\hat{\mathrm{j}}}\). We have the following cases to consider:
_Case 1._\(\hat{\mathrm{j}}=1\). Then \(q_{1}>p_{1}\geq p_{\hat{\mathrm{k}}}\). Since \(q_{1}>p_{1}\), \(q_{1}\) is not a point of \(L\). Since \((\sigma,\tau)p_{\hat{\mathrm{i}}}=q_{1}>p_{1}\geq p_{\hat{\mathrm{i}}}\), \(p_{\hat{\mathrm{i}}}\) cannot be a fixed point, thus \(p_{\hat{\mathrm{k}}}\geq p_{\hat{\mathrm{i}}}\). So we have \((\sigma,\tau)p_{\hat{\mathrm{i}}}=q_{1}>p_{1}\geq p_{\hat{\mathrm{k}}}\geq p_{ \hat{\mathrm{i}}}\) and thus the conditions of Lemma 2.22 are satisfied (with \(i=\hat{\mathrm{i}}\)). Then one of figure (I)-(IV) in Remark 2.23 holds. Since \(q_{1}=(\sigma,\tau)p_{\hat{\mathrm{i}}}\) is not a point of \(L\), the only possible case from Remark 2.23 is (I), which demonstrates the existence of a violation: \((p_{\sigma(i)},p_{\tau(i)},p_{i})\).
_Case 2._\(\hat{\mathrm{j}}>1\) and \(q_{\hat{\mathrm{j}}-1}>q_{\hat{\mathrm{j}}}\). Then \(p_{\hat{\mathrm{j}}-1}=q_{\hat{\mathrm{j}}-1}>q_{\hat{\mathrm{j}}}>p_{\hat{ \mathrm{j}}}\geq p_{\hat{\mathrm{k}}}\). Since \(q_{\hat{\mathrm{j}}}\) is strictly between \(p_{\hat{\mathrm{j}}-1}\) and \(p_{\hat{\mathrm{j}}}\), it is not a point of \(L\). Therefore \(p_{\hat{\mathrm{i}}}\) cannot be a fixed point, and thus \(p_{\hat{\mathrm{k}}}\geq p_{\hat{\mathrm{i}}}\). So we have \(p_{\hat{\mathrm{j}}-1}>q_{\hat{\mathrm{j}}}>p_{\hat{\mathrm{j}}}\geq p_{\hat{ \mathrm{k}}}\geq p_{\hat{\mathrm{i}}}\) and thus the conditions of Lemma 2.22 are satisfied (with \(i=\hat{\mathrm{i}}\)). Then one of figure (I)-(IV) in Remark 2.23 holds. Since \(q_{\hat{\mathrm{j}}}=(\sigma,\tau)p_{\hat{\mathrm{i}}}\) is not a point of \(L\), the only possible case from Remark 2.23 is (I), which demonstrates the existence of a violation: \((p_{\sigma(i)},p_{\tau(i)},p_{i})\).
_Case 3._\(\hat{\mathrm{j}}>1\) and \(q_{\hat{\mathrm{j}}-1}=q_{\hat{\mathrm{j}}}\). We claim that this is impossible. Otherwise, since \(\{p_{1},\ldots,p_{l}\}\) are all distinct, there must be two distinct indices for which the image of their corresponding points under \((\sigma,\tau)\) are the same point \(q_{\hat{\mathrm{j}}-1}=q_{\hat{\mathrm{j}}}\). We have assumed that they are \(\hat{\mathrm{j}}-1\) and \(\hat{\mathrm{i}}\) (so \(\hat{\mathrm{j}}-1\neq\hat{\mathrm{i}}\)). Since \(p_{\hat{\mathrm{j}}-1}\) is a fixed point, \(p_{\hat{\mathrm{i}}}\) cannot be a fixed point, otherwise \(p_{\hat{\mathrm{i}}}\) would equal \(p_{\hat{\mathrm{j}}-1}\), a contradiction. So, \(p_{\hat{\mathrm{i}}}\) is not fixed, and thus \(p_{\hat{\mathrm{k}}}\geq p_{\hat{\mathrm{i}}}\). Therefore, \(p_{\hat{\mathrm{j}}-1}=q_{\hat{\mathrm{j}}-1}=q_{\hat{\mathrm{j}}}>p_{\hat{ \mathrm{j}}}\geq p_{\hat{\mathrm{k}}}\geq p_{\hat{\mathrm{i}}}\), and the conditions of Lemma 2.22 are satisfied (with \(i=\hat{\mathrm{i}}\)), but with \(q_{\hat{\mathrm{j}}}=(\sigma,\tau)p_{\hat{\mathrm{i}}}\) being a point of \(L\), leaving only (II), (III), or (IV) of Remark 2.23 as possibilities. But, in all three cases, \(p_{\hat{\mathrm{j}}-1}(=q_{\hat{\mathrm{j}}}=(\sigma,\tau)p_{\hat{\mathrm{i}}})\) is either \(p_{\sigma(\hat{\mathrm{i}})}\) or \(p_{\tau(\hat{\mathrm{i}})}\), which are not fixed points by Lemma 2.21, a contradiction.
**Proposition 2.26**.: _For any \(M\in D_{u}(A),N\in D_{v}(B)\), if there is no violation of \((M,N)\), then \(P(M,N)\) has sufficiently small leading terms._
Proof.: Assume there is no violation of \((M,N)\). By Definition 2.14, \(P(M,N)\) consists of a sum of monomials of the form \(L(\sigma,\tau)\) where \(L=\mathrm{LCM}(\mathrm{LM}(M),\mathrm{LM}(N))\). By Proposition 2.19, each \(L(\sigma,\tau)\) is not equal to \(L\). Since there exists no violation of \((M,N)\), by Proposition 2.25, each \(L(\sigma,\tau)\) is less than \(L\). In particular, the leading terms of \(P_{\overline{\sigma},\cdot}\) (for \(\overline{\sigma}\neq\overline{1}\)) and of \(P_{\cdot,\overline{\tau}}\) (for \(\overline{\tau}\neq\overline{1}\)) are all less than \(L\); in other words, \(P(M,N)\) has sufficiently small leading terms.
## 3. Proof of Main Theorem
### Distance
**Definition 3.1**.: Let \(\mathcal{D}=\Big{(}\bigcup_{u}D_{u}(A)\Big{)}\cup\Big{(}\bigcup_{v}D_{v}(B)\Big{)}\) and for any \(P\in\mathcal{D}\) define \(V_{P}=\{x_{ij}^{(k)}:x_{ij}^{(k)}\text{ divides LM}(P)\}\). Define the _distance_ between any two minors \(M,N\in\mathcal{D}\) to be the cardinality of the symmetric difference of \(V_{M}\) and \(V_{N}\):
\[d:\mathcal{D}^{2}\to\mathbb{Z}_{\geq 0},\quad d(M,N)=|(V_{M}\setminus V_{N}) \cup(V_{N}\setminus V_{M})|\]
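As a small illustration (ours, not the paper's), the distance is simply the size of a symmetric difference of variable sets; in the sketch below the sets \(V_{M}\), \(V_{N}\) are hypothetical, and variables are encoded as \((i,j,k)\) triples standing for \(x_{ij}^{(k)}\).

```python
# Distance of Definition 3.1, assuming each minor P is represented by the set V_P
# of variables dividing LM(P); variables are encoded as (row, column, page) triples.
def distance(V_M, V_N):
    return len(V_M ^ V_N)          # cardinality of the symmetric difference

V_M = {(1, 1, 1), (2, 2, 1)}       # hypothetical: LM(M) = x_{11}^{(1)} x_{22}^{(1)}
V_N = {(1, 1, 1), (2, 3, 1)}       # hypothetical: LM(N) = x_{11}^{(1)} x_{23}^{(1)}
print(distance(V_M, V_N))          # prints 2
```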
**Definition 3.2**.: For \(M\in D_{u}(A),N\in D_{v}(B)\), define \(L=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(N))\),
\[D_{u}^{L}(A)=\{P\in D_{u}(A):\operatorname{LM}(P)|L\}\text{ and }D_{v}^{L}(B)= \{P\in D_{v}(B):\operatorname{LM}(P)|L\}.\]
We say that \(M\) and \(N\) are _strongly adjacent_ if \(M\neq N\) and both of the following are true.
(a) There does not exist a minor \(P\in D_{u}^{L}(A)\) such that \(d(P,N)<d(M,N)\);
(b) There does not exist a minor \(Q\in D_{v}^{L}(B)\) such that \(d(M,Q)<d(M,N)\).
Denote \(\operatorname{LM}(M)=\prod_{i=1}^{u}x_{a_{i},b_{i}}^{(s_{i})}\) and \(\operatorname{LM}(N)=\prod_{i=1}^{v}x_{c_{i},d_{i}}^{(t_{i})}\). Assuming that
\[s_{1}=\ldots=s_{u}=t_{1}=\ldots=t_{v}\text{ and }u=v, \tag{8}\]
we say that \(M\) and \(N\) are _adjacent_ if \(M\neq N\) and there does not exist a minor \(P\in D_{u}^{L}(A)=D_{v}^{L}(B)\) such that both \(d(M,P)<d(M,N)\) and \(d(P,N)<d(M,N)\).
### Single Determinantal Ideal
In this subsection we study the single determinantal ideals using a method inspired by [19] and [6], which will be generalized to the double determinantal ideals. Assume that \(r=1\), \(u=v\leq\min(m,n)\), so (8) holds. Therefore \(A=B\), and every \(M,N\in D_{u}(A)=D_{u}(A)\cup D_{v}(B)\) satisfy the conditions of the special case (8). For convenience, denote \(x_{ij}^{(1)}\) as \(x_{ij}\), and the points corresponding to variables of \(L\) as ordered pairs \(p_{i}=(\alpha_{i},\beta_{i})\).
We can swap the roles of \(M\) and \(N\), and similarly define \(P(N,M)\). Note that \(P(N,M)\) is produced by permuting row numbers of variables from \(N\); in contrast, \(P(M,N)\) is produced by permuting row numbers of variables from \(M\). Similarly, a violation of \((N,M)\) is different from a violation of \((M,N)\), as we have already seen in Example 2.5.
In general, swapping the roles of \(M\) and \(N\) does not always eliminate the existence of violations. That motivates the following definition.
**Definition 3.3**.: For \(M,N\in D_{u}(A)\) and \(L=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(N))=x_{\alpha_{1},\beta_{1}}\ldots x_{\alpha_{l},\beta_{l}}\), if there exist distinct indices \(j,k,r,s,t\) such that the following conditions (a)-(c) hold:
(a) either "\(j,s\in S_{M}\setminus S_{N}\), \(k,r\in S_{N}\setminus S_{M}\)" (type I), or "\(j,s\in S_{N}\setminus S_{M}\), \(k,r\in S_{M}\setminus S_{N}\)" (type II),
(b) \(t\in I_{\operatorname{all}}(=S_{M}\cap S_{N})\),
(c) \(\alpha_{j}\leq\alpha_{k}<\alpha_{r}\leq\alpha_{s}<\alpha_{t}\) and \(\beta_{k}\leq\beta_{j}<\beta_{s}\leq\beta_{r}<\beta_{t}\),
then we say that the pair \(M\) and \(N\) are _defective_, and \((p_{j},p_{k},p_{r},p_{s},p_{t})\) is a _defect_.
A defect may come in one of two types, type I: \(j,s\in S_{M}\setminus S_{N}\), or type II: \(j,s\in S_{N}\setminus S_{M}\), as shown in the following figure.
(Figure: a defect of type I and a defect of type II.)
A defect \((p_{j},p_{k},p_{r},p_{s},p_{t})\) is called _maximal_ if it satisfies the following conditions.
(i) If \(j^{\prime}\leq j,k^{\prime}\leq k\), then \((p_{j^{\prime}},p_{k^{\prime}},p_{t})\) is a violation (either of \((M,N)\) or of \((N,M)\)) only if \(j^{\prime}=j\) and \(k^{\prime}=k\).
(ii) If \(j<j^{\prime}\leq s\) and \(k<k^{\prime}\leq r\), then \((p_{j},p_{k},p_{j^{\prime}},p_{k^{\prime}},p_{t})\) is a defect only if \(j^{\prime}=s\) and \(k^{\prime}=r\).
Intuitively, a maximal defect is such that both pairs \((p_{j},p_{k})\) and \((p_{r},p_{s})\) are located as far NW as possible.
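In the same spirit as the sketch after Definition 2.20, condition (c) of Definition 3.3 can be tested mechanically; the following Python fragment (ours, written for \(r=1\) so that points are \((\alpha,\beta)\) pairs) assumes that the membership conditions (a) and (b) have already been verified.

```python
# Condition (c) of Definition 3.3 for points given as (alpha, beta) pairs; the
# membership conditions (a)-(b) are assumed to hold and are not checked here.
def satisfies_defect_inequalities(p_j, p_k, p_r, p_s, p_t):
    (aj, bj), (ak, bk), (ar, br), (as_, bs), (at, bt) = p_j, p_k, p_r, p_s, p_t
    return (aj <= ak < ar <= as_ < at) and (bk <= bj < bs <= br < bt)
```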
The relation between defectiveness and violation is described in the following lemma.
**Lemma 3.4**.: _For \(M,N\in D_{u}(A)\), \(M\) and \(N\) are defective if and only if there exist both a violation of \((M,N)\), and a violation of \((N,M)\)._
Proof.: "\(\Rightarrow\)". Let \((p_{k},p_{q},p_{r},p_{s},p_{t})\) be a defect. If it is of type I, then \((p_{k},p_{s},p_{t})\) is a violation jm of \((M,N)\), and \((p_{q},p_{r},p_{t})\) is a violation of \((N,M)\). Similar for type II.
"\(\Leftarrow\)". Let \((p_{k},p_{s},p_{t}^{\prime})\) be a violation of \((M,N)\), and \((p_{q},p_{r},p_{t}^{\prime\prime})\) be a violation of \((N,M)\), then \((p_{k},p_{q},p_{r},p_{s},p_{t})\) is a defect where \(t=\max(t^{\prime},t^{\prime\prime})\).
**Proposition 3.5**.: _If \(M,N\in D_{u}(A)\) are not defective, then either \(P(M,N)\) or \(P(N,M)\) has sufficiently small leading terms._
Proof.: Suppose \(M\) and \(N\) are not defective. By Lemma 3.4, either there is no violation of \((M,N)\) or there is no violation of \((N,M)\). Then the result follows from Proposition 2.26.
**Proposition 3.6**.: _If \(M,N\in D_{u}(A)\) are adjacent (see Definition 3.2), then they are not defective._
Before we begin the proof, it should be noted that the process described here will be reminiscent of a surgical transplant, of sorts. To construct an intermediary minor in between two non-adjacent minors, we replace a section of the sequence of points defining the leading term of one with a sequence cut from the leading term of the other. This concept is inspired by the proof given by [6], but differs in that our incision points center around a defect, and care must be taken to ensure that the resulting minor is always of the correct size.
Proof of Proposition 3.6.: We will prove the contrapositive. Suppose that \(M,N\in D_{u}(A)\) are defective. Then there exists a defect, and without loss of generality, we may assume it is of type I. We will construct a new minor between \(M\) and \(N\) by replacing a section of the ordered sequence of points defining the leading term of one of the minors (the recipient) with a section of equal size from the leading term of the other (the donor). To decide which is the donor and which is the recipient, we must compute the following.
Arrange the points in \(\{(\alpha_{i},\beta_{i})\ :\ i\in S_{M}\}\) in decreasing order and denote them as \(m_{1}>m_{2}>\ldots>m_{u}\). Similarly, denote points in \(\{(\alpha_{i},\beta_{i})\ :\ i\in S_{N}\}\) as \(n_{1}>n_{2}>\ldots>n_{u}\). Denote \(m_{i}=(a_{i},b_{i}),n_{i}=(a_{i}^{\prime},b_{i}^{\prime})\) for \(i=1\ldots u\).
Let \((m_{j},n_{k},n_{r},m_{s},m_{t}=n_{t^{\prime}})\) be a maximal defect of type I. Define
\[w_{1}=\min\{i:i>j,\ m_{i}\ \text{is an incidence}\},\quad w_{2}=\min\{i:i>k,\ n _{i}\ \text{is an incidence}\}.\]
Note that \(w_{1}\leq t\) and \(w_{2}\leq t^{\prime}\). Define \(y_{1}=\min\{s,w_{1}\}-j\) and \(y_{2}=\min\{r,w_{2}\}-k\). Define the minor \(P\) to have leading term given by the following points.
\[\begin{array}{ll}m_{1},\ldots,m_{j-1},n_{k},\ldots,n_{k+y_{1}-1},m_{j+y_{1} },\ldots,m_{u}&\text{ if }y_{1}\leq y_{2};\\ n_{1},\ldots,n_{k-1},m_{j},\ldots,m_{j+y_{2}-1},n_{k+y_{2}},\ldots,n_{u}&\text{ if }y_{2}<y_{1}.\end{array} \tag{9}\]
Note that when \(y_{1}\leq y_{2}\), \(LT(P)\) is obtained from \(LT(M)\) by replacing \(y_{1}\) variables in \(LT(M)\) with \(y_{1}\) variables in \(LT(N)\), so we say that \(M\) is the recipient and \(N\) is the donor; when \(y_{2}<y_{1}\), \(LT(P)\) is obtained from \(LT(N)\) by replacing \(y_{2}\) variables in \(LT(N)\) with \(y_{2}\) variables in \(LT(M)\), so we say that \(N\) is the recipient and \(M\) is the donor. Figure 9 visually demonstrates two nearly identical examples of how \(\operatorname{LT}(P)\) is selected. The diagram on the right contains one additional incidence in the middle of the picture, causing a different choice for \(\operatorname{LT}(P)\). The shaded regions are empty by maximality.
It is clear that \(\operatorname{LT}(P)|L\). By Definition 3.3, none of \(m_{j},n_{k},n_{r}\) or \(m_{s}\) can be the location of an incidence. And, \(y_{1}\leq s-j\) implies \(j+y_{1}\leq s\), which implies that \(m_{s}\) is a point of \(\operatorname{LT}(P)\) if \(y_{1}\leq y_{2}\), by (9). Similarly, \(y_{2}\leq r-k\) implies \(k+y_{2}\leq r\), which implies that \(n_{r}\) is a point of
\(\text{LT}(P)\) if \(y_{2}<y_{1}\). Therefore, we will either choose both \(n_{k}\) and \(m_{s}\) if \(M\) is the recipient, or both \(m_{j}\) and \(n_{r}\) if \(N\) is the recipient. Either case guarantees that \(P\) is distinct from both \(M\) and \(N\). Furthermore, in both cases the same number of points have been removed as have been spliced in, so the leading term of the resulting minor has degree \(u\), the same degree as \(M\) and \(N\).
In the rest of the proof we assume, without loss of generality, that \(y_{1}\leq y_{2}\) (so \(M\) is the recipient and \(N\) is the donor). To verify that the variables in \(\text{LT}(P)\) are in NW-SE position, we need only check that \(m_{j-1}\) and \(n_{k}\) are in NW-SE position, and that \(n_{k+y_{1}-1}\) and \(m_{j+y_{1}}\) are also in NW-SE position. That is, we must verify the following strict inequalities.
\[a_{j-1}<a^{\prime}_{k} \tag{10}\]
\[b_{j-1}<b^{\prime}_{k} \tag{11}\]
\[a^{\prime}_{k+y_{1}-1}<a_{j+y_{1}} \tag{12}\]
\[b^{\prime}_{k+y_{1}-1}<b_{j+y_{1}} \tag{13}\]
First, we see that \(m_{j-1}\) is NW of \(m_{j}\), so \(a_{j-1}<a_{j}\) and \(b_{j-1}<b_{j}\). By definition of defect, \(a_{j}\leq a^{\prime}_{k}\) and \(b^{\prime}_{k}\leq b_{j}\). So (10) is verified. If \(b^{\prime}_{k}\leq b_{j-1}\), then \((m_{j-1},n_{k},m_{t})\) is a violation of \((M,N)\), which violates Definition 3.3 condition (i) of maximality, so (11) is verified. For inequalities (12) and (13), we break into two cases.
_Case 1._ There are no incidences in \(\{m_{j+1},\ldots,m_{s-1}\}\), so \(s<w_{1}\). Since \(M\) is the recipient, we must have that \(s-j=\min\{s,w_{1}\}-j=y_{1}\leq y_{2}\leq r-k\). So (12) and (13) become \(a^{\prime}_{k+s-j-1}<a_{s}\) and \(b^{\prime}_{k+s-j-1}<b_{s}\), respectively. Now, since \(k+s-j-1\leq k+(r-k)-1=r-1<r\), \(n_{k+s-j-1}\) must be NW of \(n_{r}\), so \(a^{\prime}_{k+s-j-1}<a^{\prime}_{r}\) and \(b^{\prime}_{k+s-j-1}<b^{\prime}_{r}\). Also, by the definition of type I defect, we have \(a^{\prime}_{r}\leq a_{s}\) and \(b_{s}\leq b^{\prime}_{r}\), so \(a^{\prime}_{k+s-j-1}<a^{\prime}_{r}\leq a_{s}\) and (12) is verified. If \(b_{s}\leq b^{\prime}_{k+s-j-1}\), then \((n_{k+s-j-1},m_{s},n_{t^{\prime}})\) is a violation of \((N,M)\), with \(k+s-j-1<r\), which violates Definition 3.3 condition (ii) of maximality, so \(b^{\prime}_{k+s-j-1}<b_{s}\), thus (13) is verified.
_Case 2._ There exists an incidence in \(\{m_{j+1},\ldots,m_{s-1}\}\). So, \(j<w_{1}<s\), \(k<w_{2}<r\), \(m_{w_{1}}=n_{w_{2}}\) is an incidence, \(y_{1}=w_{1}-j\), and \(y_{2}=w_{2}-k\). Then (12) and (13) amount to showing that \(n_{k+w_{1}-j-1}\) is NW of \(m_{w_{1}}\). We have that for all \(\gamma<w_{2}\), \(n_{\gamma}\) must be to the NW of \(n_{w_{2}}=m_{w_{1}}\). So, the problem reduces to showing that \(k+w_{1}-j-1<w_{2}\). This inequality holds since \(k+w_{1}-j-1<k+w_{1}-j=k+y_{1}\leq k+y_{2}=k+(w_{2}-k)=w_{2}\).
Finally, we show this satisfies the distance requirements for adjacency (see Definition 3.2): both \(d(M,P)\) and \(d(P,N)\) are less than \(d(M,N)\). Let \(V_{M},V_{N}\) be the set of variables which divide \(\text{LM}(M)\) and \(\text{LM}(N)\), respectively, and \(Z\) be the set of all incidences, \(z=|Z|\). Thus, \(|V_{M}|=|V_{N}|=u\), and
\[d(M,N)=|(V_{M}\setminus V_{N})\cup(V_{N}\setminus V_{M})|=|V_{M}\setminus Z|+|V _{N}\setminus Z|=(u-z)+(u-z)=2(u-z).\]
Let \(X\) be the set of variables of \(M\) that were removed when constructing \(P\), \(Y\) be the set of variables of \(N\) that replaced them (see (9)). Then \(|X|=|Y|=y_{1}\). Since \(\text{LM}(M)\) and \(\text{LM}(P)\) have all variables common except those from \(X\) and \(Y\), and neither \(X\) nor \(Y\) contain any incidences, we have
\[d(M,P)=|X\cup Y|=|X|+|Y|=2y_{1}.\]
Note that \(y_{1}-1<y_{1}\leq y_{2}\leq r-k\), so \(k+y_{1}-1<r\), which implies that \(n_{r}\) is not among the points transplanted from \(N\) into \(P\). So \(x_{a_{r}^{\prime},b_{r}^{\prime}}\notin Y\). Let \(V_{P}\) be the set of variables which divide \(\operatorname{LM}(P)\), and define \(V_{M}^{\prime}=V_{M}\setminus(X\cup Z)\) and \(V_{N}^{\prime}=V_{N}\setminus(Y\cup Z)\). As such, \(V_{M}=V_{M}^{\prime}\cup X\cup Z,V_{N}=V_{N}^{\prime}\cup Y\cup Z\), and \(V_{P}=V_{M}^{\prime}\cup Z\cup Y\) are all disjoint unions. Therefore,
\[\begin{array}{ll}d(P,N)&=|V_{P}\setminus V_{N}|+|V_{N}\setminus V_{P}|\\ &=|V_{M}^{\prime}\setminus V_{N}^{\prime}|+|V_{N}^{\prime}\setminus V_{M}^{\prime}|\\ &=|V_{M}^{\prime}|+|V_{N}^{\prime}|\quad(\text{since }V_{M}^{\prime}\cap V_{N}^{\prime}=\emptyset)\\ &=|V_{M}|-(|X|+|Z|)+|V_{N}|-(|Y|+|Z|)\\ &=(u-y_{1}-z)+(u-y_{1}-z)=2(u-z)-2y_{1}.\end{array}\]
Now we have that \(d(M,P)+d(P,N)=2y_{1}+2(u-z)-2y_{1}=2(u-z)=d(M,N)\). Since the distances are non-negative, it suffices to show that \(d(M,P)\) and \(d(P,N)\) are non-zero. We have \(x_{a_{r}^{\prime},b_{r}^{\prime}}\notin Y\implies x_{a_{r}^{\prime},b_{r}^{ \prime}}\in V_{N}\setminus V_{P}\implies N\neq P\implies d(P,N)>0\). Meanwhile, \(s>j\) and \(w_{1}>j\), so \(y_{1}=\min\{s,w_{1}\}-j>0\), thus \(d(M,P)>0\). Therefore, \(M\) and \(N\) are not adjacent and the proof of the contrapositive is completed.
**Proposition 3.7**.: _For any \(M,N\in D_{u}(A)\) and \(L=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(N))\), there exists a chain \((M=M_{0},M_{1},\ldots,M_{k-1},M_{k}=N)\) of minors in \(D_{u}^{L}(A)\) such that for each \(j=1,\ldots,k\), the S-pair \(S(M_{j-1},M_{j})\) may be expressed as a finite sum \(\sum a_{i}P_{i}\) where \(a_{i}\) are monomials, \(P_{i}\in D_{u}(A)\) and \(\operatorname{LM}(a_{i}P_{i})<\operatorname{LCM}(\operatorname{LM}(M_{j-1}), \operatorname{LM}(M_{j}))\) for all \(i\)._
Proof.: The proposition is trivial if \(M=N\), so we assume \(M\neq N\). Start with the chain \((M_{0}=M,M_{1}=N)\). If there are \(M_{i-1},M_{i}\) which are distinct and not adjacent, then there exists \(P\in D_{u}^{L}(A)\) such that both \(d(M_{i-1},P)\) and \(d(P,M_{i})\) are less than \(d(M_{i-1},M_{i})\), then we insert \(P\) between \(M_{i-1}\) and \(M_{i}\), that is, re-index \(M_{j}\) as \(M_{j+1}\) for all \(j\geq i\), and define \(M_{i}=P\). Since distances are non-negative integers, after finitely many insertions we obtain a chain \((M=M_{0},M_{1},\ldots,M_{k-1},M_{k}=N)\) such that \(M_{j-1}\) and \(M_{j}\) are adjacent for every \(1\leq j\leq k\). For every such \(j\), \(M_{j-1}\) and \(M_{j}\) are not defective by Proposition 3.6, thus by Proposition 3.5, at least one of \(P(M_{j-1},M_{j})\) or \(P(M_{j},M_{j-1})\) may be expressed as a finite sum \(\sum a_{i}P_{i}\) where \(a_{i}\) are monomials, \(P_{i}\in D_{u}(A)\) and \(\operatorname{LM}(a_{i}P_{i})<\operatorname{LCM}(\operatorname{LM}(M_{j-1}), \operatorname{LM}(M_{j}))\) for all \(i\). The claim follows, since \(P(M_{j-1},M_{j})=-P(M_{j},M_{j-1})=S(M_{j-1},M_{j})\).
### Double Determinantal Ideal
The proof of the previous section can be easily adapted to the case of double determinantal ideals. Recall the notations \(m,n,r,u,v\) in Definition 1.1, the definition of _strongly adjacent_ in Definition 3.2 and _violation_ in Definition 2.20.
**Proposition 3.8**.: _If \(M\in D_{u}(A),N\in D_{v}(B)\) are strongly adjacent, then there does not exist a violation of \((M,N)\)._
Proof.: The proof is similar to the one of Proposition 3.6. We prove the contrapositive. Fix minors \(M\in D_{u}(A),N\in D_{v}(B)\) and suppose there exists a violation of \((M,N)\). We shall construct a new minor which is nearer to one of \(M\) or \(N\), by replacing a section of the ordered sequence of points defining the leading monomial of one (the recipient) with a section of equal size from the leading monomial of the other (the donor). The distinction of roles is determined by the following computation.
Let \(\operatorname{LM}(M)\) and \(\operatorname{LM}(N)\) be given by the points \(m_{1}>m_{2}>\ldots>m_{u}\) and \(n_{1}>n_{2}>\ldots>n_{v}\), respectively, where \(m_{i}=(a_{i},b_{i},s_{i})\) and \(n_{i}=(a^{\prime}_{i},b^{\prime}_{i},t_{i})\). Let \((m_{j},n_{k},m_{w_{1}}=n_{w_{2}})\) be a violation such that the following hold.
(a) If \(\gamma\leq j\) and \(\delta\leq k\), then \((m_{\gamma},n_{\delta},m_{w_{1}})\) is a violation only when \(\gamma=j\) and \(\delta=k\).
(b) \(w_{1}=\min\{i:i>j,\ m_{i}\text{ is an incidence}\}\) and \(w_{2}=\min\{i:i>k,\ n_{i}\text{ is an incidence}\}\). (Note that \(m_{w_{1}}=n_{w_{2}}\) by definition.)
We first consider the case \(w_{1}-j\leq w_{2}-k\). Let \(M\) be the recipient and define \(P\in D_{u}^{L}(A)\) by requiring its leading monomial to correspond to the following points:
\[m_{1},\ldots,m_{j-1},n_{k},\ldots,n_{k+w_{1}-j-1},m_{w_{1}},\ldots,m_{u}\]
We claim that \(P\) is well-defined and \(d(P,N)<d(M,N)\).
Note that \(P\neq M\). We must verify that all the variables of \(\operatorname{LT}(P)\) are in NW-SE position in \(A\). This is certainly the case for the points \(m_{1},\ldots,m_{j-1}\) and \(m_{w_{1}},\ldots,m_{u}\), since those points correspond to variables in \(\operatorname{LM}(M)\). The points \(n_{k},\ldots,n_{k+w_{1}-j-1}\) are all in NW-SE position in \(B\), but since they are contained between points of a violation (by Definition 2.20 and the inequality \(k+w_{1}-j-1<w_{2}\)), they must all lie on the same page, and thus be in NW-SE position in \(A\). Therefore, it remains to show the following.
(i) \(m_{j-1}\) is NW of \(n_{k}\) in \(A\);
(ii) \(n_{k+w_{1}-j-1}\) is NW of \(m_{w_{1}}(=n_{w_{2}})\) in \(A\);
(iii) \(d(P,N)<d(M,N)\).
For (i), we must verify two inequalities.
\[a_{j-1}<a^{\prime}_{k} \tag{14}\]
\[s_{j-1}<t_{k}\text{ or }(s_{j-1}=t_{k}\text{ and }b_{j-1}<b^{\prime}_{k}) \tag{15}\]
Since \(m_{j-1}\) is NW of \(m_{j}\) in \(A\), we get \(a_{j-1}<a_{j}\) and " \(s_{j-1}<s_{j}\) or (\(s_{j-1}=s_{j}\) and \(b_{j-1}<b_{j}\)) ". Since \((m_{j},n_{k},m_{w_{1}})\) is a violation, we have \(a_{j}\leq a^{\prime}_{k}\) and \(b^{\prime}_{k}\leq b_{j}\). So \(a_{j-1}<a_{j}\leq a^{\prime}_{k}\), and (14) is verified. For (15), note that only \(m_{j},\ldots,m_{w_{1}}\) and \(n_{k},\ldots,n_{w_{2}}\) are guaranteed to all lie on the same page (that is, \(s_{j}=\ldots=s_{w_{1}}=t_{k}=\ldots=t_{w_{2}}\)), whereas \(m_{j-1}\) may lie on an earlier page (\(s_{j-1}\leq s_{j}\)). If \(s_{j-1}<s_{j}(=t_{k})\), then (15) obviously holds. So we can assume \(s_{j-1}=s_{j}(=t_{k})\). If \(b^{\prime}_{k}\leq b_{j-1}\), then clearly \((m_{j-1},n_{k},m_{w_{1}})\) is a violation of \((M,N)\), which contradicts condition (a) from above. So \(b_{j-1}<b^{\prime}_{k}\), and (15) is proved.
For (ii), note that \(k\leq k+w_{1}-j-1<w_{2}\), so \(n_{k+w_{1}-j-1}\) and \(n_{w_{2}}\) lie on the same page and are in NW-SE position.
For (iii), let \(V_{M},V_{N}\) be the set of variables which divide \(\operatorname{LM}(M)\) and \(\operatorname{LM}(N)\), respectively, and let \(Z\) be the set of all incidences, with \(|Z|=z\). Thus, similar to the proof of Proposition 3.6, we have
\[d(M,N)=u+v-2z.\]
Let \(X\) be the set of variables of \(M\) that were removed when constructing \(P\), \(Y\) be the set of variables of \(N\) that replaced them. Then \(|X|=|Y|=w_{1}-j\). Let \(V_{P}\) be the set of variables which divide \(\operatorname{LM}(P)\), and define \(V^{\prime}_{M}=V_{M}\setminus(X\cup Z)\) and \(V^{\prime}_{N}=V_{N}\setminus(Y\cup Z)\). As such,
\(V_{M}=V_{M}^{\prime}\cup X\cup Z,V_{N}=V_{N}^{\prime}\cup Y\cup Z\), and \(V_{P}=V_{M}^{\prime}\cup Z\cup Y\) are all disjoint unions. Similar to the proof of Proposition 3.6, we get
\[d(P,N)=u+v-2z-2(w_{1}-j)=d(M,N)-2(w_{1}-j)<d(M,N)\]
where the last inequality is because \(w_{1}>j\) by the definition of \(w_{1}\) in (b). This proves (iii).
Similarly we can prove the case \(w_{2}-k<w_{1}-j\), by letting \(N\) be the recipient and constructing \(P\in D_{v}^{L}(B)\) by points \(n_{1},\ldots,n_{k-1},m_{j},\ldots,m_{j+w_{2}-k-1},n_{w_{2}},\ldots,n_{v}\).
**Proposition 3.9**.: _For any \(M\in D_{u}(A),N\in D_{v}(B)\), let \(L=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(N))\). There exist minors \(M^{\prime}\in D_{u}^{L}(A),N^{\prime}\in D_{v}^{L}(B)\) such that the S-pair \(S(M^{\prime},N^{\prime})\) may be expressed as a (possibly empty) finite sum \(\sum a_{i}P_{i}\) such that for every \(i\), \(a_{i}\) is a monomial, \(P_{i}\in D_{u}(A)\cup D_{v}(B)\) and \(\operatorname{LM}(a_{i}P_{i})<\operatorname{LCM}(\operatorname{LM}(M^{\prime} ),\operatorname{LM}(N^{\prime}))\)._
Proof.: Let \(M^{\prime}\in D_{u}^{L}(A),N^{\prime}\in D_{v}^{L}(B)\) be such that \(d(M^{\prime},N^{\prime})=\delta\) is the smallest possible value. If \(M^{\prime}=N^{\prime}\), the \(S\)-pair is \(0\) and the proposition is trivial. So we assume \(M^{\prime}\neq N^{\prime}\) in the rest of the proof. Then \(M^{\prime},N^{\prime}\) are strongly adjacent. By Proposition 3.8 there is no violation of \((M^{\prime},N^{\prime})\). The claim follows by Proposition 2.26.
**Proposition 3.10**.: _For any \(M\in D_{u}(A),N\in D_{v}(B)\) and \(L=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(N))\), there exists a chain of minors in \(D_{u}^{L}(A)\cup D_{v}^{L}(B)\)_
\[M=M_{0},M_{1},\ldots,M_{k-1},M_{k}=N\]
_such that for each \(j=1,\ldots,k\), the S-pair \(S(M_{j-1},M_{j})\) may be expressed as a finite sum \(\sum a_{i}P_{i}\) where \(a_{i}\) are monomials, \(P_{i}\in D_{u}(A)\cup D_{v}(B)\) and \(\operatorname{LT}(a_{i}P_{i})<\operatorname{LCM}(\operatorname{LT}(M_{j-1}),\operatorname{LT}(M_{j}))\) for all \(i\)._
Proof.: Let \(M^{\prime}\in D_{u}^{L}(A),N^{\prime}\in D_{v}^{L}(B)\) be given as in Proposition 3.9. Let
\[L^{\prime}=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(M^{ \prime})),\quad L^{\prime\prime}=\operatorname{LCM}(\operatorname{LM}(N^{ \prime}),\operatorname{LM}(N)).\]
Since \(M,M^{\prime}\in D_{u}(A)\) are minors of the same matrix and of the same size, as are \(N,N^{\prime}\in D_{v}(B)\), let \(M=M_{0},\ldots,M_{h}=M^{\prime}\) and \(N^{\prime}=M_{h+1},\ldots,M_{k}=N\) be chains of minors from \(D_{u}^{L^{\prime}}(A)\) and \(D_{v}^{L^{\prime\prime}}(B)\), respectively, given by Proposition 3.7. Since \(\operatorname{LM}(M^{\prime})\) and \(\operatorname{LM}(N^{\prime})\) both divide \(L\), we have \(\operatorname{LM}(M_{j})|L\) for all \(j=0,\ldots,k\). Then the proposition follows from Propositions 3.7 and 3.9.
Proof of Theorem 1.3.: It follows from Proposition 1.4: if \(M,N\in D_{u}(A)\) (or \(M,N\in D_{v}(B)\)), then apply Proposition 3.7; if \(M\in D_{u}(A)\) and \(N\in D_{v}(B)\) (or the other way around), then apply Proposition 3.10.
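For readers who wish to check the theorem on a concrete instance, the following SymPy sketch (ours, and in no way part of the proof) verifies the statement for the smallest nontrivial double determinantal ideal, \(m=n=r=u=v=2\): it computes a reduced Grobner basis of the ideal and checks that every leading monomial in it is divisible by the leading monomial of a natural generator, which is equivalent to the natural generators forming a Grobner basis. The variable names and helper functions are our own.

```python
# A computational check of Theorem 1.3 for m = n = r = u = v = 2 (a sketch, not a proof).
from itertools import combinations
from sympy import symbols, Matrix, Poly, groebner, LM

# a,...,h stand for x_{11}^{(1)},...,x_{22}^{(2)}, listed in the order (k, i, j),
# which gives a lexicographical order consistent in both A and B (cf. Section 4).
a, b, c, d, e, f, g, h = syms = symbols('a b c d e f g h')
X1 = Matrix([[a, b], [c, d]])          # X^{(1)}
X2 = Matrix([[e, f], [g, h]])          # X^{(2)}
A = Matrix.hstack(X1, X2)              # A = [X^{(1)} | X^{(2)}]
B = Matrix.vstack(X1, X2)              # B = [X^{(1)} ; X^{(2)}]

def minors(M, size):
    """All size-by-size minors of the SymPy matrix M."""
    rows, cols = M.shape
    return [M.extract(list(r), list(c)).det()
            for r in combinations(range(rows), size)
            for c in combinations(range(cols), size)]

natural = sorted(set(minors(A, 2) + minors(B, 2)), key=str)   # the natural generators
gb = groebner(natural, *syms, order='lex')                    # reduced Groebner basis

def expvec(m):
    """Exponent vector of a monomial with respect to syms."""
    return Poly(m, *syms).monoms()[0]

def divides(m1, m2):
    """True if the monomial m1 divides the monomial m2."""
    return all(e1 <= e2 for e1, e2 in zip(expvec(m1), expvec(m2)))

lm_nat = [LM(p, *syms) for p in natural]      # lex is SymPy's default monomial order
ok = all(any(divides(m, LM(q, *syms)) for m in lm_nat) for q in gb.exprs)
print('natural generators form a Groebner basis:', ok)   # expect True, per Theorem 1.3
```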
## 4. Generalized double determinantal ideals and Bipartite determinantal ideals
In this section we introduce the generalized double determinantal ideals and bipartite determinantal ideals, and study their Grobner bases. The inclusion relation among the set
of these ideals is:
\[\{\text{ determinantal ideals}\}\] \[\subseteq \{\text{ double determinantal ideals}\}\] \[\subseteq \{\text{ generalized double determinantal ideals}\}\] \[\subseteq \{\text{ bipartite determinantal ideals}\}\]
A more precise description is given in Remark 4.8.
### Generalized double determinantal ideals
With almost exactly the same proofs, we can show that the results on double determinantal ideals hold for a more general class of ideals. We start with the definition of generalized double determinantal ideals.
**Definition 4.1**.: Let \(r,\{m_{k},n_{k}\}_{k=1}^{r}\) be positive integers, let \(X=\{X^{(k)}=[x_{ij}^{(k)}]\}_{k=1}^{r}\) be a set of variable matrices and denote the size of \(X^{(k)}\) by \(m_{k}\times n_{k}\). Assume integer tuples \(\mathbf{r}=(r_{1}<r_{2}<\cdots<r_{s})\) and \(\mathbf{r}^{\prime}=(r_{1}^{\prime}<r_{2}^{\prime}<\cdots<r_{t}^{\prime})\)\((s,t\geq 0)\) satisfy \(\{r_{1},\ldots,r_{s}\}\cup\{r_{1}^{\prime},\ldots,r_{t}^{\prime}\}=\{1,\ldots,r\}\), \(m_{r_{1}}=m_{r_{2}}=\cdots=m_{r_{s}}\), \(n_{r_{1}^{\prime}}=n_{r_{2}^{\prime}}=\cdots=n_{r_{t}^{\prime}}\), and define an \(m_{r_{1}}\times(\sum_{i}n_{r_{i}})\)-matrix \(A\) and a \((\sum m_{r_{i}^{\prime}})\times n_{r_{1}^{\prime}}\)-matrix \(B\) as follows:
\[A=[X^{(r_{1})}|X^{(r_{2})}|\ldots|X^{(r_{s})}],\quad B=\begin{bmatrix}X^{(r_{1}^{\prime})}\\ X^{(r_{2}^{\prime})}\\ \vdots\\ X^{(r_{t}^{\prime})}\end{bmatrix}\]
For any positive integers \(u,v\), let \(D_{u}(A)=\{u\text{-minors of }A\}\) and \(D_{v}(B)=\{v\text{-minors of }B\}\). The _generalized double determinantal ideal_ \(I_{\{m_{k},n_{k}\}_{k=1}^{r},u,v}^{(\mathbf{r},\mathbf{r}^{\prime})}\) is the ideal of \(K[X]\) generated by \(D_{u}(A)\cup D_{v}(B)\) (which we call the _natural generators_).
By an almost identical proof (which we omit) to the one for Proposition 3.10 and Theorem 1.3, we have the following assertions for generalized double determinantal ideals:
**Proposition 4.2**.: _Assume the same setup as in Definition 4.1. For any \(M\in D_{u}(A),N\in D_{v}(B)\) and \(L=\operatorname{LCM}(\operatorname{LM}(M),\operatorname{LM}(N))\), there exists a chain of minors in \(D_{u}^{L}(A)\cup D_{v}^{L}(B)\)_
\[M=M_{0},M_{1},\ldots,M_{k-1},M_{k}=N\]
_such that for each \(j=1,\ldots,k\), the S-pair \(S(M_{j-1},M_{j})\) may be expressed as a finite sum \(\sum a_{i}P_{i}\) where \(a_{i}\) are monomials, \(P_{i}\in D_{u}(A)\cup D_{v}(B)\) and \(\operatorname{LT}(a_{i}P_{i})<\operatorname{LCM}(\operatorname{LT}(M_{j-1}),\operatorname{LT}(M_{j}))\) for all \(i\)._
**Theorem 4.3**.: _Assume the same setup as in Definition 4.1. Then the set of natural generators of the generalized double determinantal ideal \(I_{\{m_{k},n_{k}\}_{k=1}^{r},u,v}^{(\mathbf{r},\mathbf{r}^{\prime})}\), if nonempty, forms a Grobner basis with respect to any lexicographical monomial order that is consistent in both \(A\) and in \(B\)._
**Example 4.4**.: Let \(m=n=u=v=2\), \(r=6\), \(\mathbf{r}=(1,2,4,5)\), \(\mathbf{r}^{\prime}=(2,3,5,6)\), \((m_{1},n_{1}),\ldots,(m_{6},n_{6})=(2,1),(2,2),(1,2),(2,1),(2,2),(1,2)\). The matrices \(A\), \(B\), and the corresponding generalized double determinantal ideal \(I_{\{m_{k},n_{k}\}_{k=1}^{r},u,v}^{(\mathbf{r},\mathbf{r}^{\prime})}\) are as follows (we use
\(a,b,\dots\) to denote \(x_{11}^{(1)},x_{21}^{(1)},\dots\) by abuse of notation even though some letters like \(m,n\) are already used):
\[A=[X^{(1)}|X^{(2)}|X^{(4)}|X^{(5)}]=\begin{bmatrix}a&c&d&i&k&l\\ b&e&f&j&m&n\end{bmatrix},\quad B=\begin{bmatrix}X^{(2)}\\ X^{(3)}\\ X^{(5)}\\ X^{(6)}\end{bmatrix}=\begin{bmatrix}c&d\\ e&f\\ g&h\\ k&l\\ m&n\\ o&p\end{bmatrix}\]
\[I_{\{m_{k},n_{k}\}_{k=1}^{r},u,v}^{(\mathbf{r},\mathbf{r}^{\prime})}=\left\langle\begin{vmatrix}a&c\\ b&e\end{vmatrix},\begin{vmatrix}a&d\\ b&f\end{vmatrix},\dots,\begin{vmatrix}m&n\\ o&p\end{vmatrix}\right\rangle.\]
Theorem 4.3 asserts that the generators of \(I_{\{m_{k},n_{k}\}_{k=1}^{r},u,v}^{(\mathbf{r},\mathbf{r}^{\prime})}\) form a Grobner basis with respect to the lexicographical order with \(a>b>\dots>p\).
### Nakajima's affine graded quiver variety, and bipartite determinantal ideals
Nakajima's affine graded quiver variety \(\mathfrak{M}_{0}^{\bullet}(V,W)\) (resp. nonsingular graded quiver variety \(\mathfrak{M}^{\bullet}(V,W)\)) is defined as a certain affine algebro-geometric quotient \(\mu^{-1}(0)//G_{V}\) (resp. GIT quotient \(\mu^{-1}(0)^{\mathrm{s}}/G_{V}\)).
Recall the following definitions in [18, §4] with slightly modified notations. Consider a bipartite quiver, that is, a quiver \(\mathcal{Q}\) with vertex set \(\mathrm{V}_{\mathcal{Q}}=\mathrm{V}_{\mathrm{source}}\sqcup\mathrm{V}_{\mathrm{sink}}\) such that each arrow \(h\) has source \(\mathrm{s}(h)\in\mathrm{V}_{\mathrm{source}}\) and target \(\mathrm{t}(h)\in\mathrm{V}_{\mathrm{sink}}\). For each vertex \(i\in\mathrm{V}_{\mathcal{Q}}\) attach a vector space \(V_{i}\) of dimension \(v_{i}\). Define a decorated quiver by adding a new vertex \(i^{\prime}\) for each \(i\in\mathrm{V}_{\mathcal{Q}}\), adding an arrow \(i^{\prime}\to i\) if \(i\) is a sink, and adding an arrow \(i\to i^{\prime}\) if \(i\) is a source (so the decorated quiver is still bipartite). For each arrow \(i^{\prime}\to i\) attach a vector space \(W_{i}(1)\) to \(i\) and \(W_{i}(q^{2})\) to \(i^{\prime}\); for each arrow \(i\to i^{\prime}\) attach \(W_{i}(q^{3})\) to \(i\) and \(W_{i}(q)\) to \(i^{\prime}\). Denote \(\xi_{i}=0\) if \(i\in\mathrm{V}_{\mathrm{sink}}\), \(\xi_{i}=1\) if \(i\in\mathrm{V}_{\mathrm{source}}\). The affine graded quiver variety \(\mathfrak{M}_{0}^{\bullet}(V,W)\) is the image of \(\mathfrak{M}^{\bullet}(V,W)\) under the natural projection and is isomorphic to (which is not explicitly written in [18] but can be easily derived from there)
\[\mathbf{E}_{V,W}=\{(\oplus\mathbf{x}_{i},\oplus\mathbf{y}_{h})\}\]
where:
\(\bullet\)\(\mathbf{x}_{i}\in\mathrm{Hom}(W_{i}(q^{\xi_{i}+2}),W_{i}(q^{\xi_{i}}))\) for each \(i\in\mathrm{V}\), \(\mathbf{y}_{h}\in\mathrm{Hom}(W_{\mathrm{s}(h)}(q^{3}),W_{\mathrm{t}(h)}(1))\) for each arrow \(h\);
\(\bullet\)\(\dim\big{(}\mathrm{Im}\mathbf{x}_{i}+\sum_{i(h)=i}\mathrm{Im}\mathbf{y}_{h} \big{)}\leq v_{i}\) for \(i\in\mathrm{V}_{\mathrm{sink}}\), \(\dim\mathrm{Im}\big{(}\mathbf{x}_{i}\oplus\bigoplus_{o(h)=i}\mathbf{y}_{h} \big{)}\leq v_{i}\) for \(i\in\mathrm{V}_{\mathrm{source}}\).
Then, the ideal generated by the defining equations of an affine graded quiver variety is a bipartite determinantal ideal, to be defined below:
**Definition 4.5**.: Let \(\mathcal{Q}\) be a bipartite quiver with \(d\) vertices and \(r\) arrows \(h_{i}:\mathrm{s}(h_{i})\to\mathrm{t}(h_{i})\) (for \(i=1,\dots,r\)). Let \(\mathbf{m}=(m_{1},\dots,m_{d})\), \(\mathbf{u}=(u_{1},\dots,u_{d})\) be \(d\)-tuples of nonnegative integers. Let \(X=\{X^{(k)}=[x_{ij}^{(k)}]\}_{k=1}^{r}\) be a set of variable matrices where the size of \(X^{(k)}\) is
\(m_{\mathrm{t}(h_{k})}\times m_{\mathrm{s}(h_{k})}\). For \(\alpha\in\mathrm{V}_{\mathrm{sink}}\), let \(r_{1}<\cdots<r_{s}\) be the indices such that \(h_{r_{1}},\ldots,h_{r_{s}}\) are all the arrows with target \(\alpha\), and define
\[A_{\alpha}=[X^{(r_{1})}|X^{(r_{2})}|\ldots|X^{(r_{s})}].\]
For \(\beta\in\mathrm{V}_{\mathrm{source}}\), let \(r_{1}^{\prime}<\cdots<r_{t}^{\prime}\) be the indices such that \(h_{r_{1}^{\prime}},\ldots,h_{r_{t}^{\prime}}\) are all the arrows with source \(\beta\), and define
\[A_{\beta}=\begin{bmatrix}X^{(r_{1}^{\prime})}\\ X^{(r_{2}^{\prime})}\\ \vdots\\ X^{(r_{t}^{\prime})}\end{bmatrix}.\]
The _bipartite determinantal ideal_\(I_{\mathcal{Q},\mathbf{m},\mathbf{u}}\) is the ideal of \(K[X]\) generated by \(\bigcup_{\gamma\in\mathrm{V}}D_{u_{\gamma}+1}(A_{\gamma})\). These generators are called the _natural generators_ of \(I_{\mathcal{Q},\mathbf{m},\mathbf{u}}\).
**Theorem 4.6**.: _Assume the same setup as in Definition 4.5. Then the set of natural generators of bipartite determinantal ideal \(I_{\mathcal{Q},\mathbf{m},\mathbf{u}}\), if nonempty, forms a Grobner basis with respect to any lexicographical monomial order that is consistent in \(A_{\gamma}\) for all \(\gamma\in\mathrm{V}_{\mathcal{Q}}\)._
Proof.: Apply Proposition 1.4. Assume \(M\in D_{u_{i}+1}(A_{i})\) and \(N\in D_{u_{j}+1}(A_{j})\). We consider three cases:
Case 1: if \(i=j\). We simply apply Proposition 3.7.
Case 2: if there is no arrow between \(i\) and \(j\). Then \(A_{i}\) and \(A_{j}\) do not share any variable, so \(S(M,N)=\mathrm{LM}(N)M-\mathrm{LM}(M)N=(\mathrm{LM}(N)-N)M-(\mathrm{LM}(M)-M)N=\sum a_{j}P_{j}\), where each \(P_{j}\) is either \(M\) or \(N\). If \(P_{j}=M\), then \(a_{j}\) is a term of \(\mathrm{LM}(N)-N\), thus \(a_{j}<\mathrm{LM}(N)\) and \(\mathrm{LM}(a_{j}P_{j})<\mathrm{LM}(N)\mathrm{LM}(M)=\mathrm{LCM}(\mathrm{LM}(M),\mathrm{LM}(N))\); similarly for \(P_{j}=N\). So the condition of Proposition 1.4 holds.
Case 3: if there is an arrow (or multiple arrows) between \(i\) and \(j\). Then the matrices \(A_{i}\) and \(A_{j}\) satisfy the setup of generalized double determinantal ideals defined in Definition 4.1. We can apply Proposition 4.2 and conclude that the condition of Proposition 1.4 holds.
This completes the proof.
Note that there always exists a lexicographical monomial order that is consistent in all \(A_{i}\); for example, the one obtained by requiring \(x_{ij}^{(k)}>x_{i^{\prime}j^{\prime}}^{(k^{\prime})}\) if \(k<k^{\prime}\), or (\(k=k^{\prime}\) and \(i<i^{\prime}\)), or (\(k=k^{\prime}\), \(i=i^{\prime}\), and \(j<j^{\prime}\)).
**Example 4.7**.: Consider the quiver in Figure 10.
Let \((m_{1},m_{2},m_{3},m_{4})=(4,4,3,3)\), \(u_{1}=\cdots=u_{4}=1\). Then all \(X^{(i)}\) have size \(3\times 4\), and
\[A_{1}=\begin{bmatrix}X^{(1)}\\ X^{(2)}\\ X^{(3)}\\ X^{(4)}\\ X^{(5)}\end{bmatrix},\,A_{2}=\begin{bmatrix}X^{(6)}\\ X^{(7)}\\ X^{(8)}\end{bmatrix},\,A_{3}=[X^{(1)}|X^{(2)}|X^{(3)}|X^{(6)}],\,A_{4}=[X^{(4)}|X^{(5)}|X^{(7)}|X^{(8)}]\]
Theorem 4.6 asserts that all the \(2\times 2\)-minors in \(A_{1},\ldots,A_{4}\) form a Grobner basis with respect to any lexicographical order that is consistent in all \(A_{i}\) (as an example of such an order, take the reading order on each \(X^{(k)}\), and let the variables in \(X^{(k)}\) be larger than the variables in \(X^{(k^{\prime})}\) if \(k<k^{\prime}\)).
**Remark 4.8**.: In this remark, we claim that the bipartite determinantal ideals specialize to the aforementioned ideals as follows.
(a) The determinantal ideal \(I_{m,n,u}^{\rm det}=I_{\mathcal{Q},\mathbf{m},\mathbf{u}}\) for \(\mathcal{Q}:1\gets 2\), \(\mathbf{m}=(m_{1},m_{2})=(m,n)\), \(\mathbf{u}=(u_{1},u_{2})=(u-1,u-1)\).
(b) The double determinantal ideals \(I_{m,n,u,v}^{(r)}=I_{\mathcal{Q},\mathbf{m},\mathbf{u}}\) for \(\mathcal{Q}:1\stackrel{{ r}}{{\leftarrow}}2\) which has \(r\) arrows from \(2\) to \(1\), \(\mathbf{u}=(u_{1},u_{2})=(u-1,v-1)\).
(c) The generalized double determinantal ideals \(I_{\{m_{k},n_{k}\}_{k=1}^{r},u,v}^{(\mathbf{r},\mathbf{r}^{\prime})}=I_{ \mathcal{Q},\mathbf{m},\mathbf{u}}\), where:
- the quiver \(\mathcal{Q}\) is determined by the following condition:
- two arrows share a target \(\Leftrightarrow\) they are \(h_{r_{i}}\) and \(h_{r_{j}}\) for \(1\leq i,j\leq s\) and the target is \(1\),
- two arrows share a source \(\Leftrightarrow\) they are \(h_{r_{i}^{\prime}}\) and \(h_{r_{j}^{\prime}}\) for \(1\leq i,j\leq t\) and the source is \(2\);
- the tuple \(\mathbf{m}=(m_{1}^{\prime},\ldots,m_{d}^{\prime})\) is determined by the following condition:
- for \(\alpha\in\mathrm{V}_{\rm sink}\), \(m_{\alpha}^{\prime}=m_{i}\) if \(\mathrm{t}(h_{i})=\alpha\);
- for \(\beta\in\mathrm{V}_{\rm source}\), \(m_{\beta}^{\prime}=n_{i}\) if \(\mathrm{s}(h_{i})=\beta\);
- the tuple \(\mathbf{u}=(u_{1},\ldots,u_{d})\) is determined by the following condition: \(u_{1}=u-1,u_{2}=v-1\),
- for \(\alpha\in\mathrm{V}_{\rm sink}\setminus\{1\}\), \(u_{\alpha}=v-1\);
- for \(\beta\in\mathrm{V}_{\rm source}\setminus\{2\}\), \(u_{\beta}=u-1\).
To prove the claim, note that (a) and (b) are obvious. To show (c), note that \(A_{1}\), \(A_{2}\), \(D_{u_{1}+1}(A_{1})\), \(D_{u_{2}+1}(A_{2})\) in the definition of the bipartite determinantal ideal coincide with \(A\), \(B\), \(D_{u}(A)\), \(D_{v}(B)\) in the definition of the generalized double determinantal ideal. So we are left to explain why \(D_{u_{\gamma}+1}(A_{\gamma})\) for \(\gamma\neq 1,2\) is redundant. Without loss of generality, assume \(\gamma\in\mathrm{V}_{\rm source}\), and let \(h_{i}:\gamma\to 1\) be the unique arrow with source \(\gamma\). Then \(A_{\gamma}=X^{(i)}\) and \(u_{\gamma}=u-1\), \(D_{u_{\gamma}+1}(A_{\gamma})=D_{u}(X^{(i)})\), which is a subset of \(D_{u}(A)\), thus is redundant.
Figure 10. A bipartite quiver \(\mathcal{Q}\)
## 5. Applications
### Tensors
When studying the arrangement of the variables in our two matrices, it becomes clear that there is a higher order structure at play. Each matrix may be thought of as a slice of a \(3\)-dimensional array of variables (see Figure 11).
Thought of as a generalization of a matrix, an \(n\)-dimensional array is called a tensor, and is a convenient way to represent multi-indexed data. Applications are found in many fields of science, mathematics, and statistics. In this section, we present a concise and limited introduction to operations on tensors that are necessary for our purposes, and show how to slightly modify Definition 1.1 and Theorem 1.3 for the more general structure of tensors. §5.2 provides some preliminary results on this generalization, and §5.3 interprets the objects presented here from the perspective of algebraic statistics.
It is often beneficial to view tensors as multilinear maps on tensor products of vector spaces, and define them as objects that are independent of choice of bases, see [13]. However, for our purposes it suffices to assume that some basis is chosen, and define a tensor to be the representation of a multilinear map, given as an \(n\)-dimensional array of values in \(K\).
**Definition 5.1** (Definition 6.1.3 [4]).: A tensor \(X\) over \(K\) of dimension (or order) \(n\) of type (or size) \(a_{1}\times\ldots\times a_{n}\) is a multidimensional table of elements of \(K\), in which each element is determined by a multi-index \((i_{1},\ldots,i_{n})\) where each \(i_{j}\) ranges from \(1\) to \(a_{j}\). Equivalently, \(X\) is a multilinear map \(X:K^{a_{1}}\times\ldots\times K^{a_{n}}\to K\) where we consider the standard basis for each vector space \(K^{a_{i}}\).
When visualizing \(3\)-dimensional tensors, we will adopt the convention that the first index increases downward along a vertical axis, the second increases across a horizontal axis, and the third increases backward along the third axis. For example, a \(3\)-dimensional tensor of variables of size \(m\times n\times r\) will be given as follows.
Figure 11. Scan\((X)_{3}\) of \(3\)-dimensional tensor \(X\)

**Definition 5.2** (Definition 8.2.1 [4]).: For an \(n\)-dimensional tensor \(X=(x_{i_{1},\ldots,i_{n}})\) of type \(a_{1}\times\ldots\times a_{n}\), let \(J\subseteq[n]=\{1,\ldots,n\}\) and set \(\{j_{1},\ldots,j_{q}\}=[n]\setminus J\). For any choice \(k_{1},\ldots,k_{q}\) where \(1\leq k_{s}\leq a_{j_{s}}\), define \(X^{J}\), the \(J\)-contraction of \(X\), to be the \(q\)-dimensional tensor of type \(a_{j_{1}}\times\ldots\times a_{j_{q}}\) whose entry indexed by \((k_{1},\ldots,k_{q})\) is given by
\[x_{k_{1},\ldots,k_{q}}=\sum x_{i_{1},\ldots,i_{n}}\]
where the sum ranges over all entries where \(i_{j_{s}}=k_{s}\) are fixed for all \(s=1,\ldots,q\).
**Example 5.3**.: Let \(X\) be the \(3\)-dimensional tensor of type \(2\times 2\times 2\) whose entries \(x_{ijk}\), for the third index equal to \(1\) and \(2\) respectively, form the matrices \(\begin{pmatrix}0&2\\ 1&3\end{pmatrix}\) and \(\begin{pmatrix}4&6\\ 5&7\end{pmatrix}\),
and \(J=\{2\}\), \(L=\{3\}\), and \(M=\{2,3\}\). Then the contractions along \(J\) and \(L\) are the matrices
\[X^{J}=\begin{pmatrix}2&10\\ 4&12\end{pmatrix},\hskip 28.452756ptX^{L}=\begin{pmatrix}4&8\\ 6&10\end{pmatrix}\]
and the contraction along \(M\) is the vector \(X^{M}=(12,16)\).
**Remark 5.4**.: We note that \(P=\frac{1}{28}X\) gives a joint probability distribution on three binary random variables, since all entries of \(P\) are non-negative and sum to \(1\). Therefore, the contractions of a probability tensor are marginal distributions.
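The contractions of Example 5.3 and the normalization of Remark 5.4 are easy to reproduce numerically; the following numpy sketch (ours) stores \(x_{ijk}\) in a \(2\times 2\times 2\) array whose axes correspond to the three indices, so a contraction along a set of indices is a sum over the corresponding (0-based) numpy axes.

```python
# numpy sketch of Definition 5.2 applied to the tensor of Example 5.3: X[i-1, j-1, k-1] = x_{ijk}.
import numpy as np

X = np.array([[[0, 4], [2, 6]],
              [[1, 5], [3, 7]]])           # slices at third index 1, 2: [[0,2],[1,3]] and [[4,6],[5,7]]

X_J = X.sum(axis=1)                         # contraction along J = {2}:    [[ 2, 10], [ 4, 12]]
X_L = X.sum(axis=2)                         # contraction along L = {3}:    [[ 4,  8], [ 6, 10]]
X_M = X.sum(axis=(1, 2))                    # contraction along M = {2, 3}: [12, 16]
P = X / X.sum()                             # joint probability distribution of Remark 5.4 (X.sum() == 28)
```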
**Definition 5.5** (Definition 8.3.1 [4]).: For an \(n\)-dimensional tensor \(X=(x_{i_{1},\ldots,i_{n}})\) of type \(a_{1}\times\ldots\times a_{n}\), let \(j\in\{1,\ldots,n\}\). Define \(\operatorname{Scan}(X)_{j}\) to be the set of all \(a_{j}\)\((n-1)\)-dimensional tensors obtained by fixing index \(i_{j}\) and indexing over only \(i_{1},\ldots,i_{j-1},i_{j+1},\ldots,i_{n}\).
Referring to Example 5.3, we would have that
\[\operatorname{Scan}(X)_{1}=\left\{\begin{pmatrix}0&2\\ 4&6\end{pmatrix},\begin{pmatrix}1&3\\ 5&7\end{pmatrix}\right\},\ \operatorname{Scan}(X)_{2}=\left\{\begin{pmatrix}0&4 \\ 1&5\end{pmatrix},\begin{pmatrix}2&6\\ 3&7\end{pmatrix}\right\},\]
and
\[\operatorname{Scan}(X)_{3}=\left\{\begin{pmatrix}0&2\\ 1&3\end{pmatrix},\begin{pmatrix}4&6\\ 5&7\end{pmatrix}\right\}.\]
We make note of two facts. First, that the contraction along a single axis is simply the sum of the tensors in the scan in the direction of that axis. Therefore, in light of probability theory, the elements of the scan are scalar multiples of conditional distributions. Secondly, the matrices in a scan of a \(3\)-dimensional tensor of variables may be concatenated to create exactly the matrices \(A(X)\) and \(B(X)\) in Definition 1.1. This process is called flattening of a tensor, and it may be extended to higher dimensions, but for the purposes of this paper, we use the definition as applied to a \(3\)-dimensional tensor.
**Definition 5.6** (Definition 8.3.4 [4]).: Let \(X=(x_{ijk})\) be a \(3\)-dimensional tensor of type \(m\times n\times r\). For each fixed \(k=1,\ldots,r\) let \(X^{(k)}\in\operatorname{Scan}(X)_{3}\) be the matrix
\[X^{(k)}=\begin{pmatrix}x_{11k}&\ldots&x_{1nk}\\ \vdots&&\vdots\\ x_{m1k}&\ldots&x_{mnk}\end{pmatrix}.\]
Define the flattenings by concatenating the matrices of \(\operatorname{Scan}(X)_{3}\) as follows.
\[F_{1}(X)=[X^{(1)}|\ldots|X^{(r)}],\hskip 28.452756ptF_{2}(X)=[(X^{(1)})^{T}| \ldots|(X^{(r)})^{T}]\]
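The scans and the two flattenings of Definition 5.6 can be assembled the same way. Again this is only a sketch, and the column ordering is one of several equivalent choices.

```python
import numpy as np

X = np.array([[[0, 4], [2, 6]],
              [[1, 5], [3, 7]]])        # the same example tensor as above

# Scan(X)_3: the matrices obtained by fixing the third index.
scans3 = [X[:, :, k] for k in range(X.shape[2])]

# Flattenings of Definition 5.6: F_1 concatenates the scan matrices,
# F_2 concatenates their transposes.
F1 = np.hstack(scans3)                   # [[0 2 4 6], [1 3 5 7]]
F2 = np.hstack([M.T for M in scans3])    # [[0 1 4 5], [2 3 6 7]]
print(F1, F2, sep="\n")
```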
One can see that Definition 1.1 may be altered for slightly more generality. By endowing \(X\) with the structure of a \(3\)-dimensional tensor, we see that \(A(X)\) and \(B(X)\) are the two flattenings, \(F_{1}(X)\) and \(F_{2}(X)^{T}\), of Definition 5.6. Note that either matrix may be replaced by its transpose, since \(D_{u}(A)=D_{u}(A^{T})\) and \(D_{v}(B)=D_{v}(B^{T})\). Therefore, the double determinantal ideal is generated by minors of two particular flattenings of a \(3\)-dimensional tensor of variables. We will consider flattenings and their applications in more detail in SS5.2 and SS5.3. In particular, we see that our problem focuses on two of the three natural ways to flatten a \(3\)-dimensional tensor, and it would be natural to ask whether our proof might extend to include the third flattenings of \(X\), as well. Although this triple determinantal ideal does have an interesting interpretation in algebraic statistics, our proof methods inherently rely on the formations of two block matrices (and transposes of blocks) seen in Definition 5.6, so could not be used to demonstrate that the triple determinantal generators form a Grobner basis.
### Generalizations
In §5.1 we viewed the matrices \(A\) and \(B\) as two flattenings of a tensor of variables of order \(3\). In this section we consider how the main result fits in the more general context of tensors. A flattening is just one special kind of tensor reshaping, which can be thought of as the removal of some of the structure of the original tensor. Flattening is a particularly useful reshaping, since matrices are easy to express visually. The smallest internal sub-structure of a tensor consists of the \(1\)-dimensional arrays (vectors), called fibers. A \(3\)-dimensional tensor can be sliced into fibers in three ways, each running along the direction of an axis; we may call them columns, rows, and tubes (for a picture, see [13, Figure 2.3.1]). A reshaping preserves some of the structure while rearranging the entries to create a lower-dimensional tensor. The most extreme such reshaping would be to line up all the fibers into a single vector, whereas flattening may be considered the second most extreme. We may classify flattenings of a \(3\)-dimensional tensor \(X\) of size \(m\times n\times r\) into three categories of matrices: ones in which the columns are defined by the column fibers, the row fibers, or the tube fibers. Some generic examples might look like the following.
\[X_{1}=\begin{bmatrix}x_{111}&\ldots&x_{1nr}\\ \vdots&&\vdots\\ x_{m11}&\ldots&x_{mnr}\end{bmatrix},\quad X_{2}=\begin{bmatrix}x_{111}&\ldots &x_{m1r}\\ \vdots&&\vdots\\ x_{1n1}&\ldots&x_{mnr}\end{bmatrix},\quad X_{3}=\begin{bmatrix}x_{111}&\ldots &x_{mn1}\\ \vdots&&\vdots\\ x_{11r}&\ldots&x_{mnr}\end{bmatrix}\]
Since we are interested in the ideals generated by the minors of these matrices, we may arbitrarily choose the order in which to write the columns. This is because permutation of columns changes the minors by a factor of \(\pm 1\), so each determinantal ideal is invariant
under the right action of a permutation group on the matrix it is generated from. Therefore, we may categorize flattenings into three classes of matrices, with each class containing all matrices whose rows are indexed by one specified index of the three.
**Definition 5.7**.: Let \(X=(x_{i_{1},\ldots,i_{n}})\) be an \(n\)-dimensional tensor of variables of size \(a_{1}\times\ldots\times a_{n}\). For \(1\leq j\leq n\), define the \(j^{th}\)_flattening class_ to be the set of all \(a_{j}\times(a_{1}\ldots a_{j-1}a_{j+1}\ldots a_{n})\) matrices containing all variables \(x_{1,\ldots,1},\ldots,x_{a_{1},\ldots,a_{n}}\), in which each column is of the following form:
\[\begin{bmatrix}x_{i_{1},\ldots,1,\ldots,i_{n}}\\ x_{i_{1},\ldots,2,\ldots,i_{n}}\\ \vdots\\ x_{i_{1},\ldots,a_{j},\ldots,i_{n}}\end{bmatrix}\]
that is, the variables in a column have the same fixed indices except for the \(j^{th}\) index. We call each matrix in the \(j^{th}\) flattening class a \(j^{th}\)_flattening matrix_. So any two \(j^{th}\) flattening matrices are the same up to permuting columns.
We may interpret Theorem 1.3 as follows: For any \(3\)-dimensional tensor of variables, \(X\), of size \(m\times n\times r\), there exist a first flattening matrix \(X_{1}\) and a second flattening matrix \(X_{2}\) which are block matrices of the form
\[X_{1}=[Y_{1}|\ldots|Y_{r}],\ \ \ \ X_{2}=\begin{bmatrix}Y_{1}^{T}|\ldots|Y_{r}^{T }\end{bmatrix}\]
in which \(Y_{1},\ldots,Y_{r}\) are the \(m\times n\) matrices (the elements of \(\operatorname{Scan}(X)_{3}\)). Therefore, \(D_{u}(X_{1})\cup D_{v}(X_{2})\) forms a Grobner basis with respect to a monomial order that is consistent on both \(X_{1}\) and \(X_{2}\). More generally, we have the following.
**Proposition 5.8**.: _Let \(X\) be an \(n\)-dimensional tensor of variables of size \(a_{1}\times\ldots\times a_{n}\). Then for any \(j,k\in\{1,\ldots,n\}\) and any integers \(1<v_{i}\leq\min(a_{i},a_{1}\ldots a_{i-1}a_{i+1}\ldots a_{n})\) for \(i=j,k\), there exists a \(j^{th}\) flattening matrix \(X_{j}\), and a \(k^{th}\) flattening matrix \(X_{k}\), and a monomial order "\(>\)" such that the minors \(D_{v_{j}}(X_{j})\cup D_{v_{k}}(X_{k})\) form a Grobner basis, with respect to " \(>\)"._
Proof.: By permuting the indices, we may assume that \(j=1\) and \(k=2\). Write the set \(C=\{(i_{3},\ldots,i_{n}):1\leq i_{t}\leq a_{t}\text{ for }t=3,\ldots,n\}\) as an ordered list of \(r=\prod_{t=3}^{n}a_{t}\) tuples, and for \(1\leq i\leq r\) define
\[Y_{i}=\begin{bmatrix}y_{11}^{(i)}&\ldots&y_{1a_{2}}^{(i)}\\ \vdots&&\vdots\\ y_{a_{1}1}^{(i)}&\ldots&y_{a_{1}a_{2}}^{(i)}\end{bmatrix}:=\begin{bmatrix}x_{1 1b_{3}\ldots b_{n}}&\ldots&x_{1a_{2}b_{3}\ldots b_{n}}\\ \vdots&&\vdots\\ x_{a_{1}1b_{3}\ldots b_{n}}&\ldots&x_{a_{1}a_{2}b_{3}\ldots b_{n}}\end{bmatrix}\]
where \((b_{3},\ldots,b_{n})\) is the \(i\)-th element of \(C\). Let
\[X_{1}=[Y_{1}|\ldots|Y_{r}],\ \ \ \ X_{2}=\begin{bmatrix}Y_{1}^{T}|\ldots|Y_{r}^{T }\end{bmatrix}.\]
Let " \(>\)" be the monomial order on the variables of \(X\) induced by
\[y_{j_{1}k_{1}}^{(i_{1})}>y_{j_{2}k_{2}}^{(i_{2})}\iff(i_{1}<i_{2})\text{ or }(i_{1}=i_{2},j_{1}<j_{2})\text{ or }(i_{1}=i_{2},j_{1}=j_{2},k_{1}<k_{2}).\]
The claim follows, by Theorem 1.3.
Recall the definition of subspace variety:
**Definition 5.9**.: _[_13_, Definition 3.4.1]_ _The subspace variety \(\hat{\mathrm{Sub}}_{b_{1},\ldots,b_{n}}(K^{a_{1}}\otimes\ldots\otimes K^{a_{n}})= \{T\in K^{a_{1}}\otimes\ldots\otimes K^{a_{n}}|\mathbf{R}_{multilin}(T)\leq(b_{1 },\ldots,b_{n})\}=\{T\in K^{a_{1}}\otimes\ldots\otimes K^{a_{n}}|\dim T((K^{a_{ i}})^{*})\leq b_{i},\forall 1\leq i\leq n\},\) is the common zero set of all \((b_{i}+1)\)-minors of the \(i^{th}\) matrix flattening of \(X\)._
**Corollary 5.10**.: _For any \(n\)-dimensional tensor of variables of size \(a_{1}\times\ldots\times a_{n}\), and multilinear rank vector \((b_{1},\ldots,b_{n})\), if no more than two indices \(j,k\) are such that \(b_{j}<a_{j}\) and \(b_{k}<a_{k}\), then the subspace variety \(\hat{\mathrm{Sub}}_{b_{1},\ldots,b_{n}}(K^{a_{1}}\otimes\ldots\otimes K^{a_{n}})\) is defined by the prime ideal whose generators, the \((b_{j}+1)\)-minors of \(X_{j}\) and the \((b_{k}+1)\)-minors of \(X_{k}\), can be shown to form a Grobner basis under an appropriate monomial order._
Proof.: For those \(i\neq j,k\), since \(b_{i}\geq a_{i}\), there are no \((b_{i}+1)\)-minors in the \(i^{th}\) matrix flattening of \(X\). So the statement follows from Theorem 1.3.
We now consider a different generalization of our result, in the context of tensors. Since there are exactly three flattening classes, it is natural to consider including the minors of the third flattening class. After much computational investigation, it became apparent that including those new generators does not always produce a larger ideal than the double determinantal ideal. We finish this section by giving conditions on when the triple determinantal ideal is larger.
**Definition 5.11**.: Let \(X\) be a 3-dimensional tensor of variables of size \(m\times n\times r\). For any integers \(1<u\leq\min(m,rn)\), \(1<v\leq\min(n,mr)\), and \(1<w\leq\min(r,mn)\), let \(I_{u,v}\) be the ideal generated by the \(u\)-minors of (any matrix in) the first flattening class of \(X\) and the \(v\)-minors of (any matrix in) the second flattening class of \(X\) (this defines the double determinantal ideal). Let the _triple determinantal ideal_, \(I_{u,v,w}\), be defined by the generators of \(I_{u,v}\) as well as the \(w\)-minors of (any matrix in) the third flattening class of \(X\).
**Proposition 5.12**.: _For any 3-dimensional tensor of variables of size \(m\times n\times r\) and any integers \(2\leq u\leq\min(m,rn)\), \(2\leq v\leq\min(n,mr)\), and \(2\leq w\leq\min(r,mn)\), we have_
\[I_{u,v}=I_{u,v,w}\iff(u-1)(v-1)\leq w-1.\]
Proof.: (\(\implies\)): We prove the contrapositive. Suppose \((u-1)(v-1)>w-1\). We show that \(I_{u,v}\subsetneq I_{u,v,w}\) by demonstrating a point in \(V(I_{u,v})\setminus V(I_{u,v,w})\). That is, there exists a tensor \(T\in K^{m}\otimes K^{n}\otimes K^{r}\) whose first two flattenings have ranks at most \(u-1\) and \(v-1\), but whose third flattening has rank that is too large: \(\operatorname{rank}T_{3}>w-1\). Let \(T=(t_{ijk})\) be defined by
\[t_{ijk}=\begin{cases}1&\text{if }1\leq i\leq u-1,1\leq j\leq v-1,k=(i-1)(v-1)+j; \\ 0&\text{otherwise.}\end{cases}\]
It is easy to check that \(\mathrm{rank}T_{1}=u-1\), \(\mathrm{rank}T_{2}=v-1\), \(\mathrm{rank}T_{3}=\min\{(u-1)(v-1),r\}>w-1\). So \(T\in V(I_{u,v})\setminus V(I_{u,v,w})\).
(\(\impliedby\)): Suppose \((u-1)(v-1)\leq w-1\). We have that \(I_{u,v}\subseteq I_{u,v,w}\), and will demonstrate equality. We may assume that \(K\) is an algebraically closed field; this is because equality of ideals in a polynomial ring is not sensitive to field extensions, by Lemma 5.13. When we pass to the varieties we have \(V(I_{u,v})\supseteq V(I_{u,v,w})\), and it is enough to show equality of the
varieties. This is because if the varieties are equal, over an algebraically closed field the strong Nullstellensatz would imply \(\sqrt{I_{u,v}}=\sqrt{I_{u,v,w}}\). Then by Theorem 1.3, the defining minors of \(I_{u,v}\) form a Grobner basis, and so \(\operatorname{LT}(I_{u,v})\) is square-free, and thus, \(I_{u,v}\) is a radical ideal, implying that \(I_{u,v}=\sqrt{I_{u,v}}=\sqrt{I_{u,v,w}}\supseteq I_{u,v,w}\).
So we must show that \(V(I_{u,v})\subseteq V(I_{u,v,w})\). Let \(T=(t_{ijk})\in V(I_{u,v})\). Without loss of generality, we may assume that \(T_{1}\) is a first flattening matrix with rank \(u^{\prime}-1\) (with \(u^{\prime}\leq u\)) whose top \(u^{\prime}-1\) rows are linearly independent, and \(T_{2}\) is a second flattening matrix of rank \(v^{\prime}-1\) (with \(v^{\prime}\leq v\)) whose top \(v^{\prime}-1\) rows are linearly independent. Then \(T_{1}\) can be factored as \(T_{1}=AR_{1}\) where \(A=(a_{ij})\) is an \(m\times(u^{\prime}-1)\) matrix whose top \(u^{\prime}-1\) rows form the \((u^{\prime}-1)\times(u^{\prime}-1)\) identity matrix, and \(R_{1}\in\operatorname{Mat}_{(u^{\prime}-1)\times nr}(K)\) is the submatrix of \(T_{1}\) that has the top \((u^{\prime}-1)\) rows of \(T_{1}\); similarly, \(T_{2}=BR_{2}\) where \(B=(b_{ij})\) is an \(n\times(v^{\prime}-1)\) matrix whose top \(v^{\prime}-1\) rows form the \((v^{\prime}-1)\times(v^{\prime}-1)\) identity matrix and \(R_{2}\in\operatorname{Mat}_{(v^{\prime}-1)\times mr}(K)\) is the submatrix of \(T_{2}\) that has the top \((v^{\prime}-1)\) rows of \(T_{2}\). Then \(t_{ijk}=\sum_{p=1}^{u^{\prime}-1}a_{ip}t_{pjk}=\sum_{q=1}^{v^{\prime}-1}b_{jq}t_{iqk}\). Denote vectors
\[\mathbf{t}_{ij*}=\begin{bmatrix}t_{ij1}\\ \vdots\\ t_{ijr}\end{bmatrix}\in K^{r}.\]
Then \(\mathbf{t}_{ij*}=\sum_{p=1}^{u^{\prime}-1}\sum_{q=1}^{v^{\prime}-1}a_{ip}b_{jq }\mathbf{t}_{pq*}\). So a third flattening matrix \(T_{3}\) must have rank \(\leq(u^{\prime}-1)(v^{\prime}-1)\leq(u-1)(v-1)\leq w-1\), thus \(T\in V(I_{u,v,w})\).
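The counterexample tensor constructed in the forward direction of this proof can also be checked numerically. The sketch below is illustrative only; the sizes \(m,n,r\) and the bounds \(u,v,w\) are arbitrary choices satisfying \((u-1)(v-1)>w-1\), and the three reshapes are representatives of the three flattening classes (column order is irrelevant for rank).

```python
import numpy as np

m, n, r = 3, 3, 4          # sizes (arbitrary choice for illustration)
u, v, w = 3, 3, 3          # here (u-1)(v-1) = 4 > w-1 = 2

# The tensor from the proof: t_{ijk} = 1 iff i <= u-1, j <= v-1, k = (i-1)(v-1)+j.
T = np.zeros((m, n, r))
for i in range(1, u):
    for j in range(1, v):
        k = (i - 1) * (v - 1) + j
        if k <= r:
            T[i - 1, j - 1, k - 1] = 1

# One representative of each flattening class.
T1 = T.reshape(m, n * r)                      # rows indexed by the first index
T2 = T.transpose(1, 0, 2).reshape(n, m * r)   # rows indexed by the second index
T3 = T.transpose(2, 0, 1).reshape(r, m * n)   # rows indexed by the third index

print([int(np.linalg.matrix_rank(A)) for A in (T1, T2, T3)])   # expected: [2, 2, 4]
```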
The above proof requires the following technical but well-known result. Since we could not easily find a good reference, we include the proof here.
**Lemma 5.13**.: _Let \(k\) be a subfield of \(F\), let \(R=k[x_{1},...,x_{n}]\) and \(S=F[x_{1},...,x_{n}]\), and let \(I\) and \(J\) be ideals of \(R\). If \(IS=JS\), then \(I=J\)._
Proof.: Since \(k\) is a field, the only maximal ideal, \((0)\), in \(k\) satisfies \((0)^{e}=(0)\), which is not \((1)\) in \(F\), so by [2, p.45 exercise 16]\(F\) is faithfully flat over \(k\). Then, \(S\) (\(=F\otimes_{k}R\)) is faithfully flat over \(R\) by [14, p.46]. So, \(IS\cap R=I\) by [14, Thm 7.5] (and, likewise \(JS\cap R=J\)). Since \(IS=JS\), we have that \(I=IS\cap R=JS\cap R=J\).
### Algebraic Statistics
The use of algebraic and geometric techniques to study statistical problems is relatively recent [8], but provides a rich new context for interpreting classical computational algebra problems, such as ours. In this section we describe how to interpret the double determinantal variety from a statistical viewpoint. We will restrict our discussion to discrete random variables, but the concepts may extend to continuous random variables.
### Independence Models
Let \(X\) and \(Y\) be discrete random variables on \(m\) and \(n\) states, respectively. We consider whether \(X\) and \(Y\) are independent. Denote their joint probability density as \(P(X=i,Y=j)=p_{ij}\), and denote their marginal probabilities as \(P(X=i)=p_{i+}=\sum_{j=1}^{n}p_{ij}\) and \(P(Y=j)=p_{+j}=\sum_{i=1}^{m}p_{ij}\). Probability theory dictates that for independence, the probabilities must satisfy the equation \(P(X=i,Y=j)=P(X=i)P(Y=j)\), or equivalently, \(p_{ij}=p_{i+}p_{+j}\), for all \(i,j\). Therefore, \(X\) is independent of \(Y\) (denoted \(X\mbox{$\perp\!\!\!\perp$}Y\)) if and only if for every \(i,j\)
\[p_{ij}p_{kl}=(p_{i+}p_{+j})(p_{k+}p_{+l})=(p_{i+}p_{+l})(p_{k+}p_{+j})=p_{il}p_ {kj}\iff\left|\begin{matrix}p_{ij}&p_{il}\\ p_{kj}&p_{kl}\end{matrix}\right|=0.\]
This statement is equivalent to saying that the matrix \(P=(p_{ij})\) has rank at most one.
A discrete probability distribution is a collection of non-negative values that sum to one, so we may think of our joint probability, geometrically, as a point of \(\mathbb{R}^{mn}\) that is contained in the probability simplex
\[\Delta_{mn-1}=\{(p_{ij})\in\mathbb{R}^{mn}:p_{ij}\geq 0,\sum_{ij}p_{ij}=1\}.\]
Therefore, the set of all distributions for two discrete random variables which are independent is called the independence model, which is given as the intersection of the probability simplex with the Segre variety [8],
\[\begin{split}\mathcal{M}_{X\mbox{$\perp\!\!\!\perp$}Y}& =\{P=(p_{ij})\in\Delta_{mn-1}:\operatorname{rank}(P)\leq 1\}\\ &=\{P=(p_{ij})\in\mathbb{R}^{mn}:\operatorname{rank}(P)\leq 1 \}\cap\Delta_{mn-1}.\end{split} \tag{16}\]
The ideal corresponding to the variety \(\{P=(p_{ij})\in\mathbb{R}^{mn}:\operatorname{rank}(P)\leq 1\}\) is the independence ideal, \(I_{X\mbox{$\perp\!\!\!\perp$}Y}\subseteq K[p_{ij}]\), which is generated by all 2-minors of the matrix of variables \((p_{ij})\).
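Numerically, the independence model is simply the set of nonnegative rank-one matrices whose entries sum to one; the sketch below (with made-up marginals) illustrates the rank criterion.

```python
import numpy as np

p_row = np.array([0.2, 0.5, 0.3])          # marginal of X (made-up values)
p_col = np.array([0.6, 0.4])               # marginal of Y (made-up values)
P_indep = np.outer(p_row, p_col)           # p_{ij} = p_{i+} p_{+j}

print(np.linalg.matrix_rank(P_indep))      # 1: all 2-minors vanish, so X and Y are independent

P_dep = P_indep.copy()
P_dep[0, 0] += 0.05; P_dep[0, 1] -= 0.05   # still a distribution, but no longer rank one
print(np.linalg.matrix_rank(P_dep))        # 2
```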
Inferential statistics uses a randomly collected set of data to decide something about the unknown distribution of the random variables observed. The observation of data is the realization of a geometric point, and statisticians develop means to decide whether that point is close enough to a particular variety, or set of points that satisfy a certain notion, such as independence. The dimension of the tensor required to store the data equals the number of random variables observed. Suppose we are given \(t\) discrete random variables, and two non-intersecting subsets, \(S_{1}\) and \(S_{2}\), of \(\{1,\ldots,t\}\). Then, an independence statement asserts that the outcomes of all variables corresponding to \(S_{1}\) are independent of the outcomes of all variables corresponding to \(S_{2}\), in that the joint density is equal to the product of the respective marginal densities. As such, each independence statement corresponds to the requirement that some matrix (a contraction or a flattening of the probability tensor) has rank \(\leq 1\).
We now specialize to an order \(3\) tensor. Let \(X_{1},X_{2},X_{3}\) be discrete random variables on \(m,n,r\) states respectively. We denote the joint probability distribution using the values \(p_{ijk}=P(X_{1}=i,X_{2}=j,X_{3}=k)\), which we collect into the \(m\times n\times r\) probability tensor \(P=(p_{ijk})\).
Then, the marginal distributions can be ascertained from a contraction (see Definition 5.2), \(P^{J}\), along a subset of indices \(J\subset\{1,2,3\}\). For example, the joint marginal distribution for \(X_{1}\) and \(X_{2}\) is the contraction of \(P\) along the set \(\{3\}\), resulting in the following matrix
\[P^{\{3\}}=\begin{pmatrix}p_{11+}&\cdots&p_{1n+}\\ \vdots&&\vdots\\ p_{m1+}&\ldots&p_{mn+}\end{pmatrix}\]
where \(p_{ij+}=P(X_{1}=i,X_{2}=j)=\sum_{k}p_{ijk}\). The marginal distribution for \(X_{1}\), for instance, would be the \(\{2,3\}\)-contraction
\[P^{\{2,3\}}=(p_{1++},\ldots,p_{m++})\]
where \(P(X_{1}=i)=p_{i++}=\sum_{j,k}p_{ijk}\). We consider the two possible types of independence statements for three random variables: \(X_{i}\mbox{$\perp\!\!\!\perp$}X_{j}\) results in a rank bound of 1 on a contraction matrix, and \(X_{i}\mbox{$\perp\!\!\!\perp$}(X_{j},X_{k})\) results in a rank bound of 1 on a flattening matrix.
**Definition 5.14**.: For any discrete random variables \(X_{1},X_{2},X_{3}\) on \(m,n,r\) states respectively, let the joint probability distribution be given by the order 3 tensor \(P=(p_{ijk})\) where \(p_{ijk}=P(X_{1}=i,X_{2}=j,X_{3}=k)\). Let \(P_{1}\), \(P_{2}\), and \(P_{3}\) be a first, second, and third flattening matrix, respectively. Then, for distinct indices \(a,b,c\in\{1,2,3\}\) we have the following independence models.
\[\mathcal{M}_{X_{a}\mbox{$\perp\!\!\!\perp$}(X_{b},X_{c})}=\{P\in\Delta_{mnr-1 }:\mbox{rank}(P_{a})\leq 1\}\]
\[\mathcal{M}_{X_{a}\mbox{$\perp\!\!\!\perp$}X_{b}}=\{P\in\Delta_{mnr-1}:\mbox{ rank}(P^{\{c\}})\leq 1\}\]
In general, each set of independence statements, \(\mathcal{C}\), corresponds to an ideal, \(I_{\mathcal{C}}\), in the polynomial ring in \(mnr\) variables \(R=K[p_{ijk}]\), generated by the 2-minors of the corresponding matrices (given by appropriate contractions or flattenings) [20]. Under certain conditions, the independence ideal is the double determinantal ideal.
**Corollary 5.15**.: _Let \(X_{1},\ldots,X_{n}\) be discrete random variables on \(a_{1},\ldots,a_{n}\) states, respectively. Let \(P\) be the \(n\)-dimensional probability tensor. Then for any \(i,j\in\{1,\ldots,n\}\) and independence set,_
\[\mathcal{C}_{i,j}=\{X_{i}\mbox{$\perp\!\!\!\perp$}(X_{1},\ldots,X_{i-1},X_{i+1},\ldots,X_{n}),X_{j}\mbox{$\perp\!\!\!\perp$}(X_{1},\ldots,X_{j-1},X_{j+1},\ldots,X_{n})\},\]
_the independence ideal is the double determinantal ideal_
\[I_{\mathcal{C}_{i,j}}=I_{2,2}\]
_and is generated by the 2-minors of two flattening matrices, which can be shown to form a Grobner basis under an appropriate monomial order._
Proof.: Follows from Proposition 5.8.
### Conditional Independence and Hidden Variables
To interpret the problem with larger rank bounds, we must consider conditional independence and hidden variables. For conditional independence statements, we use the standard result that the probability of event \(A\) occurring, given event \(B\) has occurred, is \(P(A|B)=\frac{P(A\text{ and }B)}{P(B)}\). So, in our setting of three discrete random variables \(X_{1}\), \(X_{2}\) and \(X_{3}\), on \(m\), \(n\), and \(r\) states, respectively, we have that, for any \(1\leq k\leq r\), \(P(X_{1}=i,X_{2}=j|X_{3}=k)=\frac{p_{ijk}}{p_{++k}}\). Therefore, the joint density of \(X_{1}\) and \(X_{2}\) given \(X_{3}=k\) is \(\frac{1}{p_{++k}}\) multiplied by one of the matrices in \(Scan(P)_{3}\) (see Definition 5.5),
\[\frac{1}{p_{++k}}\begin{pmatrix}p_{11k}&\ldots&p_{1nk}\\ \vdots&&\vdots\\ p_{m1k}&\ldots&p_{mnk}\end{pmatrix}.\]
Imposing an independence requirement on a set of conditioned variables puts a rank bound on each of the conditional distribution matrices.
**Definition 5.16**.: For three discrete random variables with joint probability tensor \(P\), and for any distinct indices \(a,b,c\in\{1,2,3\}\), the conditional independence model is
\[\mathcal{M}_{X_{a}\perp X_{b}|X_{c}}=\{P\in\Delta_{mnr-1}:\text{rank}(P_{i}) \leq 1,\forall P_{i}\in Scan(P)_{c}\}.\]
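Computationally, testing a conditional independence statement such as \(X_{1}\mbox{$\perp\!\!\!\perp$}X_{2}|X_{3}\) amounts to checking that every matrix in \(Scan(P)_{3}\) has rank at most one. The sketch below (with an arbitrarily generated distribution) builds such a \(P\) from random factors and verifies the slice ranks.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 3, 4, 2

# Build a joint distribution in which X1 and X2 are independent given X3:
# each slice P[:, :, k] is proportional to an outer product.
P = np.zeros((m, n, r))
for k in range(r):
    P[:, :, k] = np.outer(rng.random(m), rng.random(n))
P /= P.sum()                                   # normalize to a probability tensor

slices_ok = all(np.linalg.matrix_rank(P[:, :, k]) <= 1 for k in range(r))
print(slices_ok)    # True: X1 and X2 are conditionally independent given X3 for this P
```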
A hidden variable is one whose outcomes are not observed, for whatever reason. When conducting an experiment, one might suspect that there is some hidden variable that dictates the distribution of probabilities for the observable outcomes. That is, suppose that \(P=(p_{ijk})\) is a 3-dimensional probability tensor for the joint distribution of \(X_{1},X_{2},X_{3}\), and suppose there is an unobservable random variable \(Y\) with \(u\) states. Let \(P(Y=i)=\pi_{i}\) and for each \(l=1,\ldots,u\), let the conditional joint distribution on \(X_{1},X_{2},X_{3}\), given that \(Y=l\), be given by \(P^{(l)}=(p_{ijk}^{(l)})\in\mathcal{M}\), from a fixed model \(\mathcal{M}\). In this case, the probabilities may be written as
\[p_{ijk}=\sum_{l=1}^{u}P(X_{1}=i,X_{2}=j,X_{3}=k|Y=l)P(Y=l)=\sum_{l=1}^{u}p_{ ijk}^{(l)}\pi_{l}.\]
Since the conditional distributions all come from the same model, \(\mathcal{M}\), then \(P\) is a convex combination of points of the model \(\mathcal{M}\). That is, it is in the \(u^{th}\) mixture of the model \(\mathcal{M}\)[20],
\[P\in\text{Mixt}^{u}(\mathcal{M})=\left\{\sum_{i=1}^{u}\pi_{i}P_{i}:\pi_{i}\geq 0,\sum_{i}\pi_{i}=1,P_{i}\in\mathcal{M}\right\}.\]
Mixture models are contained in secant varieties, but are not, in general, equal [20, Example 4.1.16]. In fact, when rank is \(\geq 2\) mixture models may have complicated boundaries and require the study of broader notions of rank [20, Example 14.1.6], [1]. For the purposes of this paper, we aim to provide a context for interpreting the double determinantal ideal. So, we focus on the ideals corresponding to the variety whose intersection with the probability simplex gives a conditional independence model, which are determinantal ideals.
From this standpoint, suppose that we would like to test some conditional independence constraints with a hidden variable in mind. For instance, the constraint \(X_{1}\mbox{$\perp\!\!\!\perp$}(X_{2},X_{3})|Y\) would impose a rank bound of \(1\) on the first flattening of each of the \(u\) conditional probability tensors \(P^{(1)},\ldots,P^{(u)}\). This would imply that the probability tensor \(P\) for \(X_{1},X_{2},X_{3}\) (unconditioned on \(Y\)) would have a first flattening which may be written as a convex combination of the \(u\) first flattenings of \(P^{(1)},\ldots,P^{(u)}\), each having rank \(1\). As such, the first flattening of the tensor \(P\) would have rank at most \(u\). Therefore, the set of all probability tensors
\[\{P\in\mathbb{R}^{mnr}:\mbox{rank}\,P_{1}\leq u\}\]
which have first flattening with rank at most \(u\) would be a variety whose intersection with the probability simplex would constitute the model for the independence condition \(X_{1}\mbox{$\perp\!\!\!\perp$}(X_{2},X_{3})|Y\) when conditioned on a hidden variable \(Y\) on \(u\) states. The corresponding conditional independence ideal is generated by all the size \(u+1\) minors of the first flattening of a generic tensor.
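The rank bound coming from a hidden variable can also be seen numerically. In the sketch below (hypothetical sizes, random conditional distributions), mixing \(u\) tensors whose first flattenings have rank one yields a tensor whose first flattening has rank at most \(u\).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, u = 4, 3, 3, 2       # hypothetical sizes; Y is a hidden variable on u states

def random_rank1_flattening_tensor():
    # A tensor whose first flattening has rank 1: p_{ijk} = a_i * b_{jk}.
    a = rng.random(m)
    b = rng.random((n, r))
    T = np.einsum('i,jk->ijk', a, b)
    return T / T.sum()

pi = rng.dirichlet(np.ones(u))                          # mixing weights P(Y = l)
P = sum(pi[l] * random_rank1_flattening_tensor() for l in range(u))

rank_first_flattening = np.linalg.matrix_rank(P.reshape(m, n * r))
print(int(rank_first_flattening))                       # at most u (= 2 here)
```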
**Remark 5.17**.: Let \(X_{1}\), \(X_{2}\), and \(X_{3}\) be discrete random variables on \(m\), \(n\), and \(r\) states respectively, and let \(Y_{1}\), \(Y_{2}\), and \(Y_{3}\) be hidden variables on \(u\), \(v\), and \(w\) states respectively. The conditional independence ideal corresponding to the independence statements
\[\mathcal{C}_{2}=\{X_{1}\mbox{$\perp\!\!\!\perp$}(X_{2},X_{3})|Y_{1},X_{2} \mbox{$\perp\!\!\!\perp$}(X_{1},X_{3})|Y_{2}\}\]
is given by the double determinantal ideal
\[I_{\mathcal{C}_{2}}=I_{u,v}\]
which is generated by minors which can be shown to form a Grobner basis. The conditional independence ideal corresponding to the independence statements
\[\mathcal{C}_{3}=\{X_{1}\mbox{$\perp\!\!\!\perp$}(X_{2},X_{3})|Y_{1},X_{2} \mbox{$\perp\!\!\!\perp$}(X_{1},X_{3})|Y_{2},X_{3}\mbox{$\perp\!\!\!\perp$}(X_{ 1},X_{2})|Y_{3}\}\]
is given by the triple determinantal ideal
\[I_{\mathcal{C}_{3}}=I_{u,v,w}.\]
As a result of Proposition 5.12, these ideals coincide whenever \((u-1)(v-1)\leq w-1\). This gives a condition under which the independence constraints of \(\mathcal{C}_{2}\) imply that \(X_{3}\mbox{$\perp\!\!\!\perp$}(X_{1},X_{2})|Y_{3}\).
|
2301.02902 | Inertial effects on rectification and diffusion of active Brownian
particles in an asymmetric channel | Micro- and nano-swimmers moving in a fluid solvent confined by structures
that produce entropic barriers are often described by overdamped active
Brownian particle dynamics, where viscous effects are large and inertia plays
no role. However, inertial effects should be considered for confined swimmers
moving in media where viscous effects are no longer dominant. Here, we study
how inertia affects the rectification and diffusion of self-propelled particles
in a two-dimensional asymmetric channel. We show that most of the particles
accumulate at the channel walls as the masses of the particles increase.
Furthermore, the average particle velocity has a maximum as a function of the
mass, indicating that particles with an optimal mass $M^{*}_{\rm op}$ can be
sorted from a mixture with particles of other masses. In particular, we find
that the effective diffusion coefficient exhibits an enhanced diffusion peak as
a function of the mass, which is a signature of the accumulation of most of the
particles at the channel walls. The dependence of $M^{*}_{\rm op}$ on the
rotational diffusion rate, self-propulsion force, aspect ratio of the channel,
and active torque is also determined. The results of this study could stimulate
the development of strategies for controlling the diffusion of self-propelled
particles in entropic ratchet systems. | Narender Khatri, Raymond Kapral | 2023-01-07T17:23:58Z | http://arxiv.org/abs/2301.02902v2 | Inertial effects on rectification and diffusion of active Brownian particles in an asymmetric channel
###### Abstract
Micro- and nano-swimmers moving in a fluid solvent confined by structures that produce entropic barriers are often described by overdamped active Brownian particle dynamics, where viscous effects are large and inertia plays no role. However, inertial effects should be considered for confined swimmers moving in media where viscous effects are no longer dominant. Here, we study how inertia affects the rectification and diffusion of self-propelled particles in a two-dimensional asymmetric channel. We show that most of the particles accumulate at the channel walls as the masses of the particles increase. Furthermore, the average particle velocity has a maximum as a function of the mass, indicating that particles with an optimal mass \(M_{\rm op}^{*}\) can be sorted from a mixture with particles of other masses. In particular, we find that the effective diffusion coefficient exhibits an enhanced diffusion peak as a function of the mass, which is a signature of the accumulation of most of the particles at the channel walls. The dependence of \(M_{\rm op}^{*}\) on the rotational diffusion rate, self-propulsion force, aspect ratio of the channel, and active torque is also determined. The results of this study could stimulate the development of strategies for controlling the diffusion of self-propelled particles in entropic ratchet systems.
## I Introduction
Many biological microorganisms, as well as artificial active particles, take free energy from their environments and convert it under nonequilibrium conditions into persistent motion. The mechanisms that underlie such active motion and the dynamical properties of these systems are diverse and have been studied extensively [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15].
For the most part, the biological and synthetic active agents mentioned above have micrometer or sub-micrometer dimensions and move in viscous environments under conditions where inertia does not play an important role. In such circumstances, the active dynamics is often described by the overdamped Langevin or continuum models that neglect inertia. Inertia cannot always be neglected, and an increasing body of research [16] considers the effects of inertia on active particle motion and describes the new phenomena that arise as a result of its inclusion. While the systems where inertial effects are important are diverse, some examples include systems that support a temperature gradient across coexisting phases [17], vibrobots [18; 19; 20], active particle motion in low-density media, such as gases [21; 22; 23], plasmas [24; 25; 26], superfluids [27], and active aerosols [28], etc. Such inertia-dominated active particles are termed micro- and nano-flyers rather than swimmers [16].
The rectification of artificial active particles in confined environments in the absence of external forces has attracted interest [29; 30; 31; 32; 33; 34]. Geometrical confinement controls the volume of phase space that is accessible to the active particles, resulting in entropic barriers that significantly influence their transport properties [35; 36; 37; 38; 39; 40; 41; 42]. As well, confined environments possessing spatial ratchet asymmetry give rise to an entropic ratchet potential that can induce active directed transport in the system [29; 33].
We investigate the underdamped dynamics of self-propelled particles confined by a two-dimensional asymmetric channel. We use a minimal underdamped Langevin model for the dynamics of the self-propelled particles that accounts for inertia. The collisional dynamics of particles with the channel walls are modeled by sliding-reflecting boundary conditions [29; 33; 43]. We focus on how inertial effects influence the rectification and diffusion of active particles in the asymmetric channel.
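The precise dimensionless Langevin equations and the sliding-reflecting wall rule are specified later in the paper and are not reproduced here. Purely as an illustration of the kind of underdamped active-Brownian-particle dynamics being referred to, a minimal sketch (generic model, placeholder parameter values, no confining channel and no active torque) might look as follows.

```python
import numpy as np

# Sketch of inertial (underdamped) active Brownian particle dynamics in 2D,
# integrated with Euler-Maruyama.  All parameter values are placeholders.
rng = np.random.default_rng(2)
m, gamma, f0 = 1.0, 1.0, 1.0        # mass, friction coefficient, self-propulsion force
kT, Dr = 0.1, 0.05                  # thermal energy, rotational diffusion rate
dt, nsteps = 1e-3, 50_000

r = np.zeros(2)                     # position
v = np.zeros(2)                     # velocity
theta = 0.0                         # orientation of the propulsion axis

for _ in range(nsteps):
    n_hat = np.array([np.cos(theta), np.sin(theta)])
    noise = np.sqrt(2 * gamma * kT * dt) * rng.standard_normal(2)
    # m dv = (-gamma v + f0 n_hat) dt + sqrt(2 gamma kT) dW
    v += ((-gamma * v + f0 * n_hat) * dt + noise) / m
    r += v * dt
    theta += np.sqrt(2 * Dr * dt) * rng.standard_normal()

print(r)   # final position of a single trajectory
```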
The article is organized as follows: in Sec. II, we introduce the underdamped Langevin model used to describe the dynamics of the active particles in the two-dimensional asymmetric channel. Section III discusses the effects of inertia on the spatial distribution of particles, while Sec. IV presents results on the rectification and effective diffusion in the channel. The main conclusions of the article are given in Sec. V.
## II Model
We consider a system comprising active particles dilutely dispersed in a dissipative medium and confined to a two-dimensional asymmetric channel with periodicity \(L\) (see Fig. 1). The active particle density is assumed to be sufficiently small so that direct interactions among the particles may be neglected. As in other studies [16], a minimal underdamped Langevin model is used to describe the active dynamics under conditions where inertia is important. The coupled Langevin equations for |
2305.01042 | Counter-rotating black holes from FRII lifetimes | Estimates suggest that while FRII jets appear to have lifetimes constrained
to hundreds of millions of years, radio galaxies with FRI jets appear to be
longer lived. We illustrate the nature of this time constraint from model
perspectives, showing how compatibility between theory and data match in a way
suggesting a key difference between active galaxies whose engines are
characterized by accretion onto co-rotating versus counter-rotating black
holes. We calculate a range of timescales for counter-rotating black holes for
a range of accretion rates compatible with theory which we then compare to
data. The validity of these timescales constitutes the most powerful recent
piece of evidence for considering counter-rotation between black holes and
accretion disks in high energy astrophysics. | David Garofalo | 2023-05-01T19:02:59Z | http://arxiv.org/abs/2305.01042v1 | # Counter-rotating black holes from FRII lifetimes
###### Abstract
Estimates suggest that while FRII jets appear to have lifetimes constrained to hundreds of millions of years, radio galaxies with FRI jets appear to be longer lived. We illustrate the nature of this time constraint from model perspectives, showing how compatibility between theory and data match in a way suggesting a key difference between active galaxies whose engines are characterized by accretion onto co-rotating versus counter-rotating black holes. We calculate a range of timescales for counter-rotating black holes for a range of accretion rates compatible with theory which we then compare to data. The validity of these timescales constitutes the most powerful recent piece of evidence for considering counter-rotation between black holes and accretion disks in high energy astrophysics.
## 1 Introduction
According to the current paradigm, powerful FRII jets in radio galaxies are the product of accretion onto rapidly spinning, prograde accreting black holes (Wilson Colbert, 1995; Sikora et al, 2007; Tchekhovskoy et al, 2010; Tchekhovskoy & McKinney, 2012). The FRII classification refers to jets that are more collimated and generally more powerful than FRI jets, the latter often being subjected to entrainment from the interstellar medium (Fanaroff & Riley, 1974). Because prograde accretion onto a black hole can only spin black holes up, the only constraint for the longevity of the jet is the amount of accreting fuel. Radio galaxies can either be high excitation or low excitation, depending on the degree of emission line signatures or thermal versus non-thermal nature (Hine & Longair, 1979; Best & Heckman, 2012; Antonucci, 2013; Mingo et al, 2014; Macconi et al, 2020; Mingo et al, 2022). Because FRII radio galaxies are often low excitation systems, they should experience constraints on their lifetimes that are similar to those for FRI radio galaxies, such as M87, with timescales orders of magnitude longer than those associated with feeding at near Eddington rates. But this is not supported by the data. In fact, radio galaxies with FRII jets appear to have quantifiably limited timescales unlike their FRI counterparts (e.g. O'Dea et al, 2009; Garofalo, Singh & Zack, 2018). High excitation FRII systems, for example, are found to be limited to 10 million years (Turner & Shabala, 2015). Most recently, Dabhade, Saikia & Mahato (2022) have compared radio galaxies with the giant radio galaxy population, including the lifetimes of jets in FRII sources, among others. This constitutes the most exhaustive quantitative analysis of FRII lifetimes and, if these results hold up to future scrutiny, a powerful constraint on the nature of jet formation and evolution in jetted active galactic nuclei (AGN). We suggest that the difference in measured timescales for powerful FRII jets compared to powerful FRI jets points to a basic difference in the nature of the two morphologies that was captured in the gap paradigm for black hole accretion and jet formation (Garofalo, Evans & Sambruna, 2010). Whereas powerful FRII jets, in this paradigm, are produced in accreting black holes spinning in the opposite direction as the accretion disk (i.e. counter-rotation), the opposite is true for FRI jets. And since counter-rotation spins black holes down while co-rotation spins them up
indefinitely, powerful FRII jets are limited in time in a way that powerful FRI jets are not. The possibility that radio galaxies with FRII jet morphology are constrained in time unlike FRI radio galaxies is, therefore, interesting in a fundamental way in high energy astrophysics. Evidence that FRII radio galaxies are constrained in time in a way that matches model predictions for the spin down timescales in both high and low excitation systems is thus exciting for understanding the nature of the longstanding puzzle behind the FRI/FRII jet dichotomy. In Section 2 we discuss the data analyzed in Dabhade et al (2022), describe the theory and emphasize the match between theory and data in Section 3. In Section 4 we conclude.
## 2 Data
Machalski, Koziel-Wierzbowska & Goyal 2021 explored the time evolution of 361 FRII radio galaxies from Cambridge, 3CRR, 6CE, 5C6, and 5C7 Sky Surveys and from the Bologna B2, Green Bank GB, and GB2 Surveys in order to produce a statistically relevant sample. They obtained a range of lifetimes for FRII radio galaxies which Dabhade et al 2022 plot in their Figure 6 and which we show on the left side of Figure 1. On the right-hand side of Figure 1 we show the maximum lifetime data from the left-hand side of Figure 1 on the dynamical age of the FRII jet with jet length obtained from Machalski et al 2021. In other words, we hone in on the 4 objects with oldest dynamical ages (circles) for each class of object, namely radio galaxies, radio quasars, and giant radio galaxies and giant radio quasars. The red objects represent high excitation FRII jetted AGN, i.e. with quasar or thermal-like signatures, indicative of radiatively efficient accretion. The red circle on the left side of the vertical line has the maximum dynamical age for an FRII high excitation radio galaxy or FRII HERG. The red object on the right side of the vertical line represents a giant radio quasar. i.e. it has all the same characteristics as the red counterpart on the left except for its jet length. The object on the right is considered a giant FRII HERG. The blue objects, similarly, distinguish themselves in the same way as the red objects do, except they belong to radiatively inefficient accretion, showing an absence of thermal or emission line signatures. They are thus FRII LERG for low excitation radio galaxies. As described in Section 3, we add theoretical values to Figure 1 with diamonds associating their theoretical age (i.e. the model prescribed duration of time for the FRII jet) with the same jet length values as the observational data for ease of comparison.
The maximum lifetime observed so far for FRII HERG and FRII LERG circled in red and blue on the left panel of Figure 1, with values equal to 9 x 10\({}^{7}\) and under 3 x 10\({}^{8}\) years, respectively, are compatible with estimates from theoretical modeling, which we will show in Section 3. While the lifetimes for giant radio quasars and giant radio galaxies increase from a theoretical perspective (Garofalo 2022), their value is more uncertain because such objects are not necessarily triggered at the Eddington accretion limit. Because of this uncertainty, we do not include theoretical maximum lifetimes for giant radio quasars and giant radio galaxies. Note that if a radio quasar had a lifetime equal to 230 million years (as the giant radio quasar indicated with the red circle), it would violate theory. The theoretical limit, in fact, is shown as the red diamond. For FRI radio galaxies, as mentioned above, no such time constraint is found. Two decades ago, evidence began to emerge suggesting that FRI jets live about an order of magnitude longer than FRII jets (Parma et al 2002). The evidence in this respect has grown (e.g. Saripalli et al 2012) and it was found that LERG systems live up to order 10\({}^{9}\) years (Turner & Shabala 2015). The constraints on FRII jets from Machalski et al 2021 suggest, therefore, that long-lived LERG systems are FRI. In other words, an FRI radio galaxy can live substantially longer than FRII sources. Why is that?
## 3 Theory
In this section we describe how the timescales above emerge from, or are compatible with, theory. The theory is anchored to the idea that counter-rotation between black holes and
Figure 1: Left side: Dynamical age versus size of jet from Dabhade et al 2023. Radio quasars are in red and radio galaxies in blue. Radio galaxy (and giant radio galaxy) and radio quasar (and giant radio quasar) with maximum dynamical age indicated with appropriate blue and red circles, respectively, i.e. the four circular points on the right correspond to the objects at the centers of the red and blue circles on the left. Highlighted region in yellow is for giant radio quasars and giant radio galaxies. Right side: Maximum dynamical ages for radio quasars (red circles) with giant radio quasars on the right of the dividing line and for radio galaxies (blue circles) with giant radio galaxies on the right-hand side of the dividing line. The yellow dividing line represents the boundary between radio galaxies and giant radio galaxies. Data from Dabhade, Saikia & Mahato 2023. Theoretical values for FRII HERG and FRII LERG as described in the text are added (diamonds).
accretion disks give rise to FRII jets (Garofalo, Evans & Sambruna 2010). As counter-rotation is an unstable and less likely configuration, the majority of mergers in this paradigm funnel cold gas into the nucleus that settles into co-rotation around a spinning black hole. It is therefore the minority that end up in counter-rotation and this is environment-dependent. It is also worth noting that an engine-based difference between FRII and FRI matters for the most powerful jets and that environment makes a difference at lower jet power (see Garofalo, Evans & Sambruna 2010 on the nature of the Owen-Ledlow diagram). We will therefore focus on the most powerful jets in the paradigm.
For the minority of configurations that end up in counter-rotating accretion disks around spinning black holes, we explore two basic evolutionary scenarios relevant for understanding the maximum possible lifetimes of FRII jets. The question we need to answer is this: How long does it take to spin a black hole down? This is because FRII jets are associated with counter-rotation. We want to find the maximum possible time for this process to then compare with the data for FRII lifetimes. Mergers yield initial conditions involving cold gas funneled into the galactic nucleus and the formation of a radiatively efficient disk accreting in counter-rotation at the Eddington limit. A subset of these counter-rotating black holes spin down at the Eddington limit while others spin down at accretion rates that begin at the Eddington limit but drop to rates as low as \(10^{-2}\) the Eddington accretion rate. This range of accretion rates determines the range of jet lifetimes. We should point out that the initial spin value is crucial in determining jet lifetime. But since we are striving to determine maximum jet lifetimes, the initial spin value is assumed to be its theoretical maximum at 0.998. The drop in accretion rate depends on feedback processes, the details of which are not of present concern (see Garofalo, Evans & Sambruna 2010 for details). What matters here is the range of accretion for counter-rotation because it determines the lifetime for a counter-rotating black hole. This timescale depends on the amount of angular momentum added to the black hole by the accreted plasma and the rate at which the plasma accretes. The angular momentum accreted is that of the gas at the inner edge of the disk, which is referred to as the marginally stable circular orbit, \(r_{\rm ms}\). This radial location depends both on the spin of the black hole and the orientation of the accretion disk. From the stability of circular orbits in Kerr space-time one finds that \(r_{\rm ms}\) drops from 9 gravitational radii, \(r_{\rm g}\), for a maximally spinning black hole surrounded by a counter-rotating accretion disk, to just over 1 \(r_{\rm g}\) for a maximally spinning black hole surrounded by a co-rotating accretion disk (e.g. McClintock et al 2011). To determine the amount of angular momentum delivered to the black hole, one sets \(r\) to \(r_{\rm ms}\) in the following expressions for the angular momentum per unit mass as a function of Boyer-Lindquist radial coordinate \(r\) (Bardeen et al 1972),
\[L_{+}={\rm M}^{1/2}(r^{2}-2a{\rm M}^{1/2}r^{1/2}+a^{2})/(r^{3/4}(r^{3/2}-3Mr^{1/2}+2aM^{1/2})^{1/2}) \tag{1}\]
and
\[L_{-}=-{\rm M}^{1/2}(r^{2}+2a{\rm M}^{1/2}r^{1/2}+a^{2})/(r^{3/4}(r^{3/2}-3Mr^{1/2}-2aM^{1/2})^{1/2}), \tag{2}\]
where the '+' and '-' subscripts refer to the value of the angular momentum per unit mass as a function of radial coordinate for co-rotating disks and counter-rotating disks, respectively, with \(a\) the spin of the black hole and M its mass. By multiplying by accreted mass, one obtains the amount of angular momentum accreted, which changes the spin of the black hole. Because accretion rates are model-prescribed in terms of the Eddington value, our results are scale invariant, making the actual values of angular momentum irrelevant for our calculations. To see this scale invariance explicitly, we begin with the definition of the dimensionless black hole spin
\[a=c\,\mathcal{L}\,/(\mathrm{GM_{BH}}^{2}) \tag{3}\]
where \(\mathcal{L}\) is the angular momentum of the black hole and \(c\) is the speed of light. If accretion proceeds at the Eddington limit we set the Eddington luminosity to the luminosity in terms of the accretion rate and the disk efficiency as in equation (4). For our purposes we repackage all the constants into one term and relate the accretion rate to the black hole mass in equation (5), which can be written as in equation (6). The infinitesimal mass accreted onto the black hole is therefore obtained from equation (6) as shown in equation (7). We can carry out the analysis in the Newtonian limit with the magnitude of the angular momentum of the infinitesimal parcel of gas given to the black hole given in equation (8), with \(v\) the velocity of the parcel of gas dm at the inner edge of the disk that is supplied to the black hole and \(r\) is its radial location. From circular motion and Newton's 2\({}^{nd}\) law we obtain equation (9) from which we obtain the velocity in equation (10).
\[\eta\dot{M}c^{2}=4\pi c\mathrm{Gm_{p}M_{BH}/\sigma}\,. \tag{4}\]
\[\dot{M}=(\mathrm{constant})\,M_{BH} \tag{5}\]
\[\mathrm{dM/dt}=(\mathrm{constant})\,M_{BH}. \tag{6}\]
\[\mathrm{dm}=(\mathrm{constant})\,M_{BH}\,dt. \tag{7}\]
\[\mathrm{d}\mathcal{L}=\mathrm{dm}\,v\,r \tag{8}\]
\[\mathrm{dm}\,v^{2}/r=\mathrm{G\,dm}\,M_{BH}/r^{2} \tag{9}\]
\[v=(\mathrm{GM}_{BH}/r)^{1/2}. \tag{10}\]
The inner edge of the disk depends on the black hole spin parameter and has the range given in equation (11). For our purposes we note that \(r\propto\mathrm{GM_{BH}/c^{2}}\) from which we get equation (12). For accretion at the Eddington limit, therefore, the rate at which the angular momentum of the black hole changes is shown in equation (13), from which we can determine how the dimensionless spin parameter of the black hole changes by using the differential form of equation (3) to obtain
equation (14), from which we get equation (15), which is the black hole mass independent result we anticipated.
\[1.23\mathrm{GM}_{\mathrm{BH}}/\mathrm{c}^{2}\ -\ 9\mathrm{GM}_{\mathrm{BH}}/ \mathrm{c}^{2}. \tag{11}\]
\[\mathrm{d}\mathcal{L}\propto\mathrm{dm}\ (\mathrm{GM}_{\mathrm{BH}}/\mathrm{r} )^{1/2}\ \mathrm{GM}_{\mathrm{BH}}/\mathrm{c}^{2}\propto\mathrm{M}_{\mathrm{BH}}\, \mathrm{d}\mathrm{t}\ c\ \mathrm{GM}_{\mathrm{BH}}/\mathrm{c}^{2}=\mathrm{M}_{\mathrm{BH}}\, \mathrm{d}\mathrm{t}\ \mathrm{GM}_{\mathrm{BH}}/\mathrm{c}. \tag{12}\]
\[\mathrm{d}\mathcal{L}/\mathrm{dt}\propto\mathrm{M}_{\mathrm{BH}} ^{2}. \tag{13}\]
\[\mathrm{da}/\mathrm{dt}=\mathrm{cd}\mathcal{L}/\mathrm{dt}/(\mathrm{GM}_{ \mathrm{BH}}^{2}) \tag{14}\]
\[\mathrm{da}/\mathrm{dt}\propto\mathrm{c}/\mathrm{G} \tag{15}\]
In Figure 2 we show how the angular momentum of gas that accretes onto the black hole from the marginally stable orbit depends on the value of black hole spin. We scale or normalize the angular momentum to the angular momentum at the marginally stable orbit for a black hole spinning at a = 0.998 surrounded by an accretion disk in counter-rotation. Figure 2 allows one to appreciate why spinning a high spinning black hole down takes about an order of magnitude less time than it does to spin a zero spinning black hole up to high spin, at a given accretion rate.
As gas accretes onto the black hole from \(r_{\rm ms}\), both black hole spin and black hole mass change. The change in the black hole mass also depends on the location of \(r_{\rm ms}\) and can be obtained by evaluating the distribution of energy as a function of radius, i.e. energy counterparts to equations (1) and (2) above. One finds the black hole to gain an amount of mass given by (Raine & Thomas 2009)
\[\Delta m=\int dm(1-2m/3r_{ms})^{-0.5} \tag{16}\]
where m is the mass of the black hole and the added mass \(\Delta m\) is given as an expression with both Newton's constant and the speed of light equal to unity. As accretion proceeds, the marginally stable orbit decreases, acquires the value \(r_{\rm ms}=6r_{\rm g}\) when the black hole stops rotating, and then decreases further as it spins up via a co-rotating accretion disk. Our interest is only in the time to spin the black hole down to near zero spin. The crucial element to determine the timescale for spin down is the accretion rate. For a given accretion rate f, the time to build the mass by \(\Delta\)m is given in equation (18). If the accretion rate is constant, the time is given by equation (19).
Figure 2: Angular momentum at \(r_{\rm ms}\) normalized to the angular momentum at \(r_{\rm ms}\) for a high spinning black hole accreting in counter-rotation, as a function of dimensionless black hole spin. Red and blue curves converge to the same angular momentum at zero spin as indicated by the data point in black.
\[\text{dm/dt}=f \tag{17}\]
\[T=\int dm/f. \tag{18}\]
\[T=\Delta m/f. \tag{19}\]
For an accretion rate that is the Eddington value, a rapidly spinning counter-rotating black hole spins down to zero spin in just under \(8\times 10^{6}\) years. Therefore, an FRII jet lives no longer than \(8\times 10^{6}\) years if fed at the Eddington limit. But an FRII HERG does not need to be accreting at the Eddington limit. It could accrete at 10% the Eddington limit and still be a HERG. In this case, it would spin down to zero spin in \(8\times 10^{7}\) years. For lower accretion rates, the timescale for spin down increases. If the FRII jet is powerful enough, it produces a strong feedback effect on the accretion flow, lowering the accretion rate, and allowing the FRII jet phase to last longer. In such cases of powerful jet feedback, the FRII jet affects the structure of the accretion disk, forcing it to evolve into an advection dominated accretion flow (ADAF). The boundary between a thin disk and an ADAF is prescribed from theory to be at \(10^{-2}\) the Eddington accretion rate. Hence, at \(5\times 10^{-2}\) the Eddington accretion rate, the object may still be characterized as an FRII HERG and the jet lifetime would increase to \(1.6\times 10^{8}\) years. We should also note that jet lifetimes are effectively limited by some threshold low spin value below which the jet may no longer be classified as an FRII if even visible. From theory, we can estimate this to be below a spin value of about \(0.1\) but with some uncertainty that would also depend on the environment. Overall, we can estimate that an FRII HERG lives at most about \(10^{8}\) years. This value appears as a red diamond on the right hand side of Figure 1.
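These magnitudes can be cross-checked with a simple numerical estimate. The sketch below is not the calculation performed in the text (which combines the Newtonian expressions of equations (8)-(10) with equation (16)); it instead integrates the standard Bardeen-type spin-evolution relation, \(da/dm_{0}=(l_{\rm ms}-2ae_{\rm ms})/M\) with \(dM/dm_{0}=e_{\rm ms}\), at the Eddington rest-mass accretion rate, and the Eddington timescale of \(4.5\times 10^{8}\) yr is an assumed constant. With these assumptions it returns a spin-down time of a few \(\times 10^{6}\) years, the same order as the value quoted above.

```python
import numpy as np

T_EDD_YR = 4.5e8   # sigma_T c / (4 pi G m_p) in years (assumed value)

def r_ms(a):
    """Marginally stable orbit radius in units of GM/c^2; a < 0 means counter-rotation."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sign(a) * np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def e_ms(a):
    """Specific energy at r_ms (dimensionless)."""
    r = r_ms(a)
    return (r**1.5 - 2 * r**0.5 + a) / (r**0.75 * np.sqrt(r**1.5 - 3 * r**0.5 + 2 * a))

def l_ms(a):
    """Specific angular momentum at r_ms in units of GM/c (up to normalization, the quantity of Figure 2)."""
    r = r_ms(a)
    return (r**2 - 2 * a * r**0.5 + a**2) / (r**0.75 * np.sqrt(r**1.5 - 3 * r**0.5 + 2 * a))

# Bardeen-type evolution: da/dm0 = (l_ms - 2 a e_ms)/M and dM/dm0 = e_ms, with the
# rest-mass accretion rate at the Eddington value, dm0/dt = M / ((1 - e_ms) T_EDD).
a = np.linspace(-0.998, 0.0, 4000)          # spin runs from maximal counter-rotation to zero
dt_da = (1 - e_ms(a)) * T_EDD_YR / (l_ms(a) - 2 * a * e_ms(a))
spin_down_yr = np.sum(0.5 * (dt_da[1:] + dt_da[:-1]) * np.diff(a))
print(f"approximate spin-down time at the Eddington rate: {spin_down_yr:.1e} yr")
```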
Since the transition in cooling is not abrupt (e.g. Giustini & Proga 2019), the transition from cold mode accretion into an ADAF is also gradual, and the model prescribes accretion rates to barely cross the boundary into an ADAF during counter-rotation. In other words, counter-rotation may have ADAF accretion, but it has only recently entered the ADAF regime and accretion rates will therefore have values near \(10^{-2}\) the Eddington accretion rate (see Garofalo, Evans & Sambruna 2010 for details). If one assumes such a value for the accretion rate in equation (17), one obtains a timescale for spin down of \(8\times 10^{8}\) years. But, as mentioned above, the initial state is a near-Eddington accreting black hole so the timescale prescribed in the model for systems that evolve into ADAF accretion, must be lower. In short, the black hole spins halfway down from maximal spin at near the Eddington accretion rate (as an FRII HERG) which takes \(4\times 10^{6}\) years, followed by accretion at \(10^{-2}\) the Eddington accretion rate (as an FRII LERG), requiring \(4\times 10^{8}\) more years to spin the black hole down to zero spin. Therefore, the model prescription for FRII lifetimes spans the range \(8\times 10^{6}\) to \(4\times 10^{6}+4\times 10^{8}\) years which is roughly \(4\times 10^{8}\) years and is our second theoretical data point in Figure 1 (the blue diamond). Because these lifetimes occur in systems with strongest FRII jet feedback, and the model prescribes the strongest jet feedback to occur more in denser environments, there is a model-prescribed environment
dependence to the maximum FRII jet lifetimes that is worth mentioning although it is not the focus of this work.
Although our focus has not been on giant radio quasars and giant radio galaxies, such objects serve an important role as guideposts, allowing us to understand better the constraints on radio quasars and radio galaxies. As re-triggered counter-rotating black holes, giant radio quasars have the opportunity to generate jets that extend beyond the kiloparsec lengths reached by their radio galaxy ancestors, and to experience longer lived FRII phases (Garofalo, 2022). Accordingly, it is interesting to note that while giant radio quasars and giant radio galaxies exceed the prescribed theoretical maximum lifetimes for radio quasars (the red diamond in Figure 1), this is not true for radio quasars. If counter-rotation is not relevant to FRII jets, there is no reason for the latter to be limited in this way. In addition, if the evolution in time from quasar mode to ADAF mode (as the model prescribes) is not the way such systems change over time, there is no reason for both regular radio quasars and radio galaxies as well as their giant counterparts to have blue objects experiencing longer jet lifetimes than red ones. But it is instead required from theory.
### Conclusion
The timescales for FRII systems obtained from theory are tantalizingly compatible with those inferred from the data as seen from the diamonds added on the right-hand side of Figure 1 to represent theoretical timescales. While spin-down timescales are constrained by the rate of accretion, this same constraint on time for FRI jets in the theory is rather weak because a black hole that feeds forever simply remains a high spinning co-rotating black hole. The real constraint, instead, is the amount of fuel. We have not gone into model details but FRI systems are late stages in the evolution of radio galaxies that were once FRII. Their accretion rates continue to drop over time and can be orders of magnitude lower than the Eddington accretion rate. As a result, FRI systems accreting in ADAF can last characteristic timescales that are on the order of the age of the universe, making them effectively unconstrained in time. In closing, we highlight that jet dynamical lifetimes are characterized by large uncertainties (e.g. Wojtowicz et al, 2021) and that until these have been sufficiently reduced, a clear picture cannot emerge and caution should be exercised in comparing jet lifetime from the model with dynamical timescales from data. Nonetheless, if FRII lifetimes can robustly be shown to be limited to within a half billion years, it would constitute strong evidence for counter-rotating black holes in active galaxies and the evidence appears to be pointing in that direction.
While constraints on the lifetimes of FRII jetted AGN have existed for a decade or so, data has recently emerged to solidify the case that FRII and FRI jets are different in some fundamental way. We have argued over the last decade that opening the counter-rotating window for black hole accretion allows many disparate observations to come together under a simple evolutionary picture that at its heart explains the radio loud/radio quiet dichotomy. In this work we highlight the otherwise coincidental match between the lifetimes of FRII jets in quasars and radio galaxies, showing how to understand the difference in the evolution of FRII jets compared to FRI jets.
Unlike FRI LERGs, whose jet lifetimes are effectively unconstrained, FRII jets, in either LERG or HERG form, are limited to lifetimes within hundreds of millions of years due to accretion spinning black holes down, a process that is time-limited in a way that spinning black holes up is not.
Acknowledgements. I thank Dr. Marek Jamrozy and Dr. Dorota Koziel-Wierzbowska for sharing their expertise. In addition, I acknowledge the role of 4 referees at FrASS but thank referees 3 and 4 for pointing to the need for clarification on key points. The reason FRIs appear to prefer less dense environments compared to FRIIs is that they live longer in such environments. This resolves an interesting point raised by referee 4 that was not included in the paper because it is outside its scope. The issue is discussed in our work on X-shaped radio galaxies in 2020.
|
2310.05092 | Benchmarking Large Language Models with Augmented Instructions for
Fine-grained Information Extraction | Information Extraction (IE) is an essential task in Natural Language
Processing. Traditional methods have relied on coarse-grained extraction with
simple instructions. However, with the emergence of Large Language Models
(LLMs), there is a need to adapt IE techniques to leverage the capabilities of
these models. This paper introduces a fine-grained IE benchmark dataset
tailored for LLMs, employing augmented instructions for each information type,
which includes task descriptions, extraction rules, output formats, and
examples. Through extensive evaluations, we observe that encoder-decoder
models, particularly T5 and FLAN-T5, perform well in generalizing to unseen
information types, while ChatGPT exhibits greater adaptability to new task
forms. Our results also indicate that performance is not solely dictated by
model scale, and highlight the significance of architecture, data diversity,
and learning techniques. This work paves the way for a more refined and
versatile utilization of LLMs in Information Extraction. | Jun Gao, Huan Zhao, Yice Zhang, Wei Wang, Changlong Yu, Ruifeng Xu | 2023-10-08T09:41:18Z | http://arxiv.org/abs/2310.05092v1 | Benchmarking Large Language Models with Augmented Instructions for Fine-grained Information Extraction
###### Abstract
Information Extraction (IE) is an essential task in Natural Language Processing. Traditional methods have relied on coarse-grained extraction with simple instructions. However, with the emergence of Large Language Models (LLMs), there is a need to adapt IE techniques to leverage the capabilities of these models. This paper introduces a fine-grained IE benchmark dataset tailored for LLMs, employing augmented instructions for each information type, which includes task descriptions, extraction rules, output formats, and examples. Through extensive evaluations, we observe that encoder-decoder models, particularly T5 and FLAN-T5, perform well in generalizing to unseen information types, while ChatGPT exhibits greater adaptability to new task forms. Our results also indicate that performance is not solely dictated by model scale, and highlight the significance of architecture, data diversity, and learning techniques. This work paves the way for a more refined and versatile utilization of LLMs in Information Extraction.
## 1 Introduction
In the field of Natural Language Processing (NLP), Information Extraction (IE) is a pivotal task that aims to identify and extract valuable information from unstructured text. This task encompasses several sub-tasks, including entity extraction (Yan et al., 2021; Wang et al., 2021) and event extraction (Lin et al., 2020), which play a crucial role in industries such as finance, healthcare, and law, facilitating machines in processing large-scale text.
Traditional IE methods predominantly depend on supervised learning (Lin et al., 2020; Du and Cardie, 2020; Lu et al., 2021), which requires vast labeled datasets for model training. The labeling process can be both time-consuming and expensive, thus creating barriers to the adoption of IE technologies. In contrast, the advent of Large Language Models (LLMs) such as GPT-3 (Brown et al., 2020) has provided an alternative approach. These LLMs demonstrate promising in-context learning capabilities, which could potentially alleviate the need for substantial labeled datasets. This is a remarkable stride forward, as it represents an opportunity to make IE technologies more accessible and efficient.
Despite the potential benefits of LLMs' in-context learning for IE, the existing literature has limitations in evaluating these models' efficacy in IE. Previous studies tend to focus on evaluating performance across a single structure (i.e., either using decoder-only (Li et al., 2023; Gao et al.,
Figure 1: A comparison of the traditional Coarse-Grained IE Instruction (Lu et al., 2022; Wang et al., 2023) with our proposed Fine-Grained IE Instruction, using entity extraction as an illustrative example.
2023b; Ma et al., 2023; Li et al., 2023) or encoder (Lu et al., 2022; Liu et al., 2023; Wang et al., 2023) models) and on unseen information types (Lu et al., 2022; Wang et al., 2023). However, there is a dearth of research addressing the generalization performance on different kinds of extraction tasks, such as moving from event extraction to entity extraction. Moreover, the current approach (Lu et al., 2022; Liu et al., 2023; Wang et al., 2023; Li et al., 2023) in previous work has involved coarse-grained IE tasks, where a simple instruction without detailed extraction guidelines is used to extract multiple information types. This neglects vital aspects such as extraction rules, output format descriptions, and illustrative examples, which are crucial for adapting to different information types and tasks.
In this paper, we address these shortcomings by introducing a fine-grained IE benchmark dataset with augmented instructions. The motivation behind transitioning from coarse-grained to fine-grained IE stems from the observation that incorporating detailed extraction guidelines for each information type within the original instructions would cause the instructions to vastly exceed the input length limitations of the model. Fine-grained IE differs from coarse-grained IE in that it treats each information type as a distinct task. Specifically, instead of using a single instruction to extract multiple information types, fine-grained IE employs augmented instructions for each information type, including task descriptions, extraction rules, output formats, and examples. This is depicted in Figure 1, which visually contrasts the traditional Coarse-Grained IE with our proposed Fine-Grained IE employing augmented instructions.
A key objective of this study is to stringently evaluate large language models' capabilities in in-context learning for fine-grained IE tasks. We focus on assessing the models' generalization to novel information types and task forms, utilizing a diverse dataset. We evaluate an array of models, including both **encoder-decoder** and **decoder-only** architectures, enabling a thorough analysis of their impact on performance. Our evaluation encompasses two critical dimensions of generalization: (1) **Generalization Across Unseen Information Types**: Models are trained on the same task form but tested on a different information type. (2) **Generalization Across Unseen Task Forms**: Models are trained on a partial form of the task and are tested on an entirely different form of the task.
**Summary of Insights:** The experiment unveils several key insights into the generalization performance of large language models (LLMs) in IE tasks. Encoder-decoder architectures, notably T5 and FLAN-T5, excel in generalizing to unseen information types due to their prowess in capturing input-output relationships. However, they falter in adapting to novel task forms, highlighting a trade-off between specialization and flexibility. ChatGPT, with its decoder-only architecture and in-context learning, demonstrates remarkable adaptability to unfamiliar task structures. Instruction components play a significant role, where Extraction Rule and Demonstration Examples emerge as critical for guiding LLMs effectively, whereas Task Description and Output Format hold variable importance across models. Additionally, the experiment reveals that performance scaling is nonlinear with respect to both training data quantity and model size, emphasizing the importance of data diversity and judicious balancing of model scale.
## 2 Evaluating In-context Learning in Information Extraction
In this work, we aim to rigorously evaluate the ability of large language models to perform in-context learning for fine-grained IE tasks, with a particular emphasis on assessing their generalization capabilities to unseen information types and task forms.
**Generalization Over Unseen Information Types.** In this scenario, models are presumed to have been trained on a diverse set of information types within a particular task structure. They are subsequently evaluated based on their ability to adapt and perform accurately when confronted with novel information types within the same task structure. Formally, let us represent the set of information types that the model is exposed to during training as \(I=\{i_{1},i_{2},i_{3},...,i_{n}\}\). When the model is presented with a novel information type \(i_{u}\notin I\), we evaluate its capacity to extrapolate its learned knowledge to this new type. For an input text \(X\) that contains instances of the new information type \(i_{u}\), the model's task is to extract these instances. We represent this as \(Y=G(X|i_{u})\), where \(G\) is the function that the model has learned for IE, and \(Y\) is the set of extracted information instances.
Generalization Over Unseen Task Forms.While models have traditionally been constrained to tasks that closely resemble the structure they were trained on, the emergence of large language models (LLMs) introduces the potential for more adaptable models capable of understanding and adjusting to new task forms. To formalize this, let's denote the set of task structures the model is trained on as \(T=\{t_{1},t_{2},t_{3},...,t_{n}\}\). Upon encountering a new task form \(t_{u}\notin T\), we assess the model's capability to apply its existing knowledge base to effectively execute the task defined by \(t_{u}\). For an input text \(X\), the model is expected to produce an output \(Y\) that aligns with the requirements of the new task form. This can be mathematically represented as \(Y=F(X|t_{u})\), where \(F\) represents the function that the model has internalized to map inputs to outputs across different task structures.
## 3 Augmented Instructions for Fine-grained Information Extraction
To achieve a more comprehensive assessment, our focus is on discerning how well these models understand and apply extraction rules and demonstration examples. Given the importance of fine-grained analysis for unearthing specific strengths and weaknesses of LLMs, our dataset considers each type of information as an independent task, requiring meticulous attention to detail. Specifically, our dataset encompasses an extensive spectrum of information types, including persons, locations, diverse event types, among others. Each of these information types corresponds to a distinct extraction task, such as extracting names of persons or identifying various events described in a text.
Augmented Instruction Schema.What sets our fine-grained instructions apart from prior approaches Lu et al. (2022); Li et al. (2023) is the inclusion of more granular information for each type of information. Instead of just having a task description and output options, our augmented instruction schema integrates extraction rules, specifies output formats, and provides illustrative examples. These additional components are instrumental in equipping the model with an in-depth understanding of the extraction tasks and in standardizing the output for further processing. The instruction schema is composed of the following elements:
* Task Description: A succinct, overarching summary of the task, articulating the primary objective without delving into particulars.
* Extraction Rule: Comprehensive and unambiguous guidelines, formulated in natural language, that outline the specifics of extracting the requisite information from the input text.
* Output Format: Defines the structural and organizational requirements for the extracted information, offering a systematic template for the model's output. This facilitates uniformity in the presentation of results, which is essential for efficient handling and use of the extracted data.
* Demonstration Examples: Representative input-output pairs that exemplify the correct application of the extraction rules across varied input texts. These examples serve to resolve any potential ambiguities and provide practical demonstrations to reinforce the model's understanding of the task.
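As a concrete illustration of the schema above, the following sketch shows one way the four components could be assembled into a single prompt; the function name, field names, and example content are hypothetical and are not taken from the benchmark's released materials.

```
# Hypothetical assembly of an augmented instruction for one information type.
def build_augmented_instruction(task_description, extraction_rule,
                                output_format, demonstrations, input_text):
    """Concatenate the four schema components with the text to be processed."""
    demo_block = "\n".join(
        f"Input: {d['input']}\nOutput: {d['output']}" for d in demonstrations
    )
    return (
        f"Task Description: {task_description}\n"
        f"Extraction Rule: {extraction_rule}\n"
        f"Output Format: {output_format}\n"
        f"Demonstration Examples:\n{demo_block}\n"
        f"Input: {input_text}\nOutput:"
    )

prompt = build_augmented_instruction(
    task_description="Extract all person names mentioned in the text.",
    extraction_rule="Return only proper names of people; ignore titles and pronouns.",
    output_format='A JSON list of strings, e.g. ["Alice", "Bob"].',
    demonstrations=[{"input": "Marie Curie met Albert Einstein in Brussels.",
                     "output": '["Marie Curie", "Albert Einstein"]'}],
    input_text="Ada Lovelace corresponded with Charles Babbage.",
)
print(prompt)
```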
Diverse Information Extraction Tasks.Building a comprehensive dataset for IE from the ground up can be both resource-intensive and time-consuming. To optimize resource utilization while still achieving a broad coverage, we have amalgamated a selection of pre-existing datasets pertinent to IE. This amalgamation comprises 5 datasets, encompassing three core facets of IE - entity extraction, event extraction, and sentiment analysis. A visual representation of the data distribution is depicted in Figure 3. Details of the data construction can be found in Appendix B.
## 4 Benchmarking LLMs with Fine-grained IE Tasks
### Experimental Setup
We assess the generalization capabilities of IE models across different facets, namely: Generalization to Unseen Information Types, and Generalization to Unseen Task Forms. Figure 2 shows the dataset partitioning across these dimensions.
Generalization to Unseen Information Types.In this scenario, the models are trained on a restricted set of information types and are evaluated on previously unseen information types. The training dataset includes 4 out of 7 entity types, 23 out of 33 event types, and all 3 sentiment information types. For evaluation, we randomly sampled 100 examples for each of the 3 entity types to be tested. Since the number of available samples for each of the event types to be tested was fewer than 100, we
utilized the entire dataset for those event types. In total, the test set comprises 700 cases.
Generalization to Unseen Task Forms.Here, we evaluate the model's capability to generalize across different forms of IE tasks. Unlike the first setup, where the task form remains the same but the information types differ, here we change the task form itself. The training set encompasses all event extraction tasks and two of the three sentiment IE tasks, namely ATE and UABSA. The test set includes 100 randomly sampled examples for each of the 7 entity types to be tested and 1,000 randomly sampled examples for the ASTE task, summing up to 1,700 test samples. ASTE extracts aspect, sentiment polarity, and an additional opinion element, making it a higher-order task compared to ATE and UABSA.
For each training sample, we supplement it with 5 randomly sampled examples from the training set, sharing the same type as the demonstration examples. Notably, different training examples are paired with distinct demonstration examples. For the test samples, we include 5 randomly selected demonstration examples in their instructions, ensuring that these demonstration examples are exclusive from the test samples. These demonstration examples remain constant across all test samples. For a detailed view of the data splits, please refer to Figure 9 in the Appendix.
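The sampling of demonstration examples described above can be sketched as follows; the field names and data structures are assumptions made for illustration, not the benchmark's actual code.

```
import random

def attach_demonstrations(samples, train_pool_by_type, k=5, fixed_for_test=False, seed=0):
    """Attach k same-type demonstration examples (drawn from the training pool) to each sample.

    Training samples each receive an independent random draw; test samples share one
    fixed draw per information type, kept disjoint from the test samples themselves.
    """
    rng = random.Random(seed)
    fixed_draws = {}
    augmented = []
    for s in samples:
        pool = [d for d in train_pool_by_type[s["type"]] if d is not s]
        if fixed_for_test:
            if s["type"] not in fixed_draws:
                fixed_draws[s["type"]] = rng.sample(pool, k)
            demos = fixed_draws[s["type"]]
        else:
            demos = rng.sample(pool, k)
        augmented.append({**s, "demonstrations": demos})
    return augmented
```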
### Models and Evaluation Metrics
We conducted a comparison between two categories of large language models that are built on different architectures. In the encoder-decoder category, we considered models such as T5 (Raffel et al., 2019) and FLAN-T5(Chung et al., 2022), both of which are available in sizes of 3 billion (3B) and 11 billion (11B) parameters. In contrast, in the decoder-only category, we looked at models like LLaMa (Touvron et al.) and BLOOM (Scao et al., 2022), in addition to ChatGPT 1. LLaMa offers models with 7 billion (7B) and 13 billion (13B) parameters, while BLOOM provides models with 3 billion (3B) and 7.1 billion (7.1B) parameters. Note that the results for ChatGPT were based on testing performed on June 20, 2023. With the exception of ChatGPT, which was able to utilize our instructions directly for in-context learning, the remaining models underwent fine-tuning on our training
Figure 3: Data Statistics.
Figure 2: Data division for generalization to Unseen Information Types and Unseen Task Forms. For a detailed view of the data splits, please refer to Figure 9 in the Appendix.
dataset with fine-grained instructions before being subjected to in-context learning. Implementation details can be found in Appendix A.
The performance of all models in this task is evaluated using the F1-score as the metric for assessing the accuracy of the information extracted.
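A minimal sketch of how such an F1 score can be computed over sets of extracted items is shown below; it uses exact matching of items, whereas the benchmark's own matching criterion for each task may be stricter or looser.

```
from collections import Counter

def micro_f1(predictions, references):
    """Micro-averaged F1 over extracted items, using exact matches.

    Both arguments are lists (one entry per example) of lists of extracted
    items, e.g. entity strings or (aspect, opinion, sentiment) tuples.
    """
    tp = fp = fn = 0
    for pred, ref in zip(predictions, references):
        pred_counts, ref_counts = Counter(pred), Counter(ref)
        overlap = sum((pred_counts & ref_counts).values())
        tp += overlap
        fp += sum(pred_counts.values()) - overlap
        fn += sum(ref_counts.values()) - overlap
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# One predicted entity is spurious and one gold entity is missed: F1 = 0.5.
print(micro_f1([["Alice", "Bob"]], [["Alice", "Carol"]]))
```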
## 5 Experimental Results
### Overall Results
Analysis of Generalization to Unseen Information Types.In the generalization to unseen information types, Table 1 demonstrates that models with an encoder-decoder architecture tend to outperform those with a decoder-only structure. Specifically, the T5 models with 3B and 11B parameters achieved F1 scores of 82.45 and 78.70 respectively in the entity extraction task. These scores significantly surpass the highest F1 score (64.25) achieved by ChatGPT in the decoder-only category. For trigger and argument extraction, T5 and FLAN-T5 models consistently perform well, with FLAN-T5 3B achieving the highest F1 score (58.02) in argument extraction among all models.
It is noteworthy that ChatGPT, which utilizes in-context learning and wasn't trained on our dataset, demonstrates respectable performance, particularly in entity and trigger extraction. This suggests that pre-trained models with large-scale knowledge can exhibit reasonable generalization even without specific fine-tuning.
Analysis of Generalization to Unseen Task Forms.As for the generalization to unseen task forms, the performance of most models substantially declines. Notably, ChatGPT attains significantly better results compared to others in this category. With an F1 score of 55.33 in entity extraction and 46.04 in the ASTE task, ChatGPT exhibits the ability to adapt more efficiently to unfamiliar task forms. On the contrary, encoder-decoder models, which performed well in generalization to unseen information types, struggle considerably, with the T5 11B model obtaining the highest F1 score among them in entity extraction (24.50), but almost negligible performance in the ASTE task.
### In-depth Discussion
Effectiveness of Encoder-Decoder Models in Information Types.The encoder-decoder models, particularly T5 and FLAN-T5, display commendable proficiency in generalizing to unseen information types. This can be attributed to the ability of encoder-decoder models to effectively capture the relationships between inputs and outputs, which is crucial for IE tasks. Furthermore, the availability of an encoder component might contribute to better representation learning, which aids in generalization.
Limited Generalization to New Task Forms.Despite the superior performance in information type generalization, encoder-decoder models exhibit restricted generalization capabilities when subjected to unfamiliar task forms. This might be due to the high specialization of these models to the training task forms, which in turn hampers their ability to adapt to new structures. ChatGPT, however, with its in-context learning, appears more flexible and can reasonably adapt to new task forms. This highlights the importance of model adaptability and flexibility in real-world applications where task forms might not always be consistent.
Performance is Not Always Proportional to Scale.The results also indicate that an increase in the number of parameters does not always lead to a proportional improvement in performance. For example, the T5 3B model outperforms the T5 11B model in entity extraction within unseen information types. This suggests that model capacity, though important, is not the sole factor in determining performance. Other factors such as model architecture, training data diversity, and learning techniques play a crucial role.
Decoder-Only Architectures Struggle More in Information Types.Decoder-only models such as LLaMa and BLOOM tend to struggle more in generalization to unseen information types as compared to encoder-decoder models. This could be due to their lack of an encoder component, which is important for understanding complex input structures that are common in IE tasks. However, ChatGPT demonstrates that decoder-only models with extensive pre-training and in-context learning can still achieve reasonable performance. This indicates that training methodology and in-context adaptation can play a significant role in improving the generalization of decoder-only models.
## 6 Further Analysis
### Impact of Instruction Components
Figure 4 presents the impact of various instruction components on the performance of LLMs in
IE tasks. The components in consideration are Task Description, Extraction Rule, Output Format, and Demonstration Examples.
Task Description.The exclusion of the Task Description appears to have a marginal effect on the performance of the models. For example, T5 3B exhibits a slight increase from 43.64 to 43.79, and ChatGPT experiences a minor drop from 54.24 to 53.47. This suggests that while Task Description provides an overarching summary, it is not critical for performance. The Extraction Rule and Demonstration Examples likely offer the detailed guidance necessary for the models.
Extraction Rule.Omitting the Extraction Rule generally leads to a decrease in performance across most models. For instance, the T5 3B model drops from 43.64 to 42.49, and ChatGPT decreases from 54.24 to 52.96. This indicates that Extraction Rule, with its comprehensive guidelines, is crucial in guiding the models to extract the relevant information effectively.
Output Format.The absence of Output Format leads to varied effects across the models. Notably, T5 3B shows a dramatic decrease from 43.64 to 25.84. On the other hand, ChatGPT experiences a minor increase in performance (54.67). This suggests that while Output Format is significant for structuring the output in some models, it may not be as crucial for others, especially if they have been pre-trained to handle diverse output structures.
Demonstration Examples.Removing Demonstration Examples has the most pronounced impact on performance. For example, T5 3B plummets from 43.64 to 0, and ChatGPT falls sharply from 54.24 to 30.06. This underscores the importance of Demonstration Examples in clarifying ambiguities and reinforcing the understanding of the task.
### Analysis on Demonstration Examples
Impact of the number of examples.The impact of varying the number of demonstration examples
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \multirow{2}{*}{**Structure**} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Unseen Information Type**} & \multicolumn{3}{c}{**Unseen Task Form**} \\ \cline{3-8} & & **Entity** & **Trigger** & **Argument** & **Entity** & **ASTE** & **AVG** \\ \hline \multirow{3}{*}{Enc-Dec} & T5 3B & \(82.45\) & \(84.80\) & \(50.30\) & \(0.57\) & \(0.08\) & \(43.64\) \\ & T5 11B & \(78.70\) & \(79.06\) & \(53.41\) & \(24.50\) & \(0.06\) & \(47.15\) \\ & FLAN-T5 3B & \(74.67\) & \(84.84\) & \(58.02\) & \(19.33\) & \(0.00\) & \(47.37\) \\ & FLAN-T5 11B & \(74.87\) & \(79.00\) & \(50.70\) & \(10.97\) & \(0.00\) & \(43.11\) \\ \hline \multirow{3}{*}{Dec-only} & LLaMA 7B & \(46.77\) & \(55.54\) & \(29.55\) & \(2.95\) & \(0.00\) & \(26.96\) \\ & LLaMA 13B & \(38.07\) & \(59.88\) & \(32.51\) & \(16.85\) & \(0.00\) & \(29.46\) \\ \cline{1-1} & BLOOM 3B & \(20.76\) & \(19.65\) & \(10.82\) & \(14.53\) & \(0.00\) & \(13.15\) \\ \cline{1-1} & BLOOM 7.1B & \(20.90\) & \(34.78\) & \(20.15\) & \(15.00\) & \(0.00\) & \(18.17\) \\ \cline{1-1} & ChatGPT* & \(64.25\) & \(71.17\) & \(34.40\) & \(55.33\) & \(46.04\) & \(54.24\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of Large Language Models’ Performance in Generalizing to Unseen Information Types and Task Forms. We include the average F1 scores for each model, computed across all tasks. *: ChatGPT was tested using direct in-context learning and was not trained on our dataset.
Figure 4: Impact of Instruction Components. The figure shows the average F1 scores of models with varying instruction components: Full (all components), -Desc (without Task Description), -Rule (without Extraction Rule), -Format (without Output Format), and -Demos (without Demonstration Examples).
Figure 5: Impact of the number of examples.
on the performance of LLMs is shown in Figure 5. Across all models, it is evident that the provision of demonstration examples significantly influences performance, especially when transitioning from zero to one example. However, the effect of adding more examples varies among models. ChatGPT and T5 models show a consistent positive trend, while FLAN-T5, LLaMa, and BLOOM models exhibit varied patterns. This analysis highlights the importance of demonstration examples in IE tasks and suggests that the optimal number of examples can differ based on the model's architecture and capabilities.
Impact of correctness of examples.The quality of demonstration examples is investigated by analyzing the performance of LLMs with different proportions of incorrect examples. Figure 6 presents the average F1 scores as we vary the number of incorrect demonstration examples. Across all models, the correctness of demonstration examples plays a vital role in performance. The sensitivity to incorrect examples, however, varies among models. ChatGPT is the most sensitive, with a pronounced decrease in performance as incorrect examples are introduced. T5 and FLAN-T5 models show stability or a gradual decline, while LLaMa and BLOOM models display minor fluctuations. This analysis underlines the importance of ensuring the accuracy and correctness of demonstration examples in the instructions provided to LLMs, especially for models like ChatGPT that exhibit high sensitivity to example quality.
Impact of input-output pairing.Figure 7 presents the performance of LLMs when they are conditioned on demonstration examples with varying formats - full examples with both inputs and outputs, examples without outputs, and examples without inputs. The goal is to understand which part of the demonstration example is crucial for performance. Across all models, input-output pairing in demonstration examples plays a crucial role in performance. Models like FLAN-T5 11B and ChatGPT are heavily reliant on output information. LLaMa and BLOOM models also lean towards output information, whereas T5 models show variations. This analysis highlights the importance of including both inputs and outputs in demonstration examples for optimal performance. However, if one must be omitted, it appears that maintaining the outputs is generally more beneficial than retaining only the inputs. This may be due to the fact that the outputs often embody the essence of the task that the model needs to perform.
### Analysis of Scaling Factors
We analyze the generalization performance of the models with respect to two critical scaling factors: the number of instances per information type and the size of the models. The results are shown in Figure 8. Note that as the number of training instances increases, not all information types are equally represented. Some event types have fewer than 100 samples, in which case the dataset uses the maximum number of available samples for those types. This exacerbates the imbalance among different information types as the number of instances increases, which is an important consideration in the scaling trends.
Influence of Training Instance Quantity.Figure 8 reveals that augmenting the number of training instances is generally associated with improved performance. However, the models react differently to this scaling factor. T5 models display a more steady improvement as the number of instances increases, whereas LLaMa models experience an
Figure 6: Impact of correctness of examples.
Figure 7: Impact of input-output pairing.
early peak followed by a decrease. The decline in LLaMa models' performance could be linked to the increasing imbalance in the dataset. As the dataset grows, the imbalance may cause models to become biased towards information types with more samples. Additionally, the non-linear scaling indicates that there may be a point of diminishing returns, after which additional data does not yield significant performance gains or may even be counterproductive.
Effect of Model Size on Performance.When analyzing the impact of model size, the results suggest that larger models typically have the edge. T5 models, for instance, exhibit more consistent improvements as their size increases. However, this comes with the caveat that larger models are more susceptible to overfitting, especially when the dataset is small or imbalanced. This risk is pertinent as data scarcity can make it difficult for larger models to effectively generalize. In the decoder-only category, the difference in performance between LLaMa 13B and LLaMa 7B is not pronounced at higher training sizes, highlighting that an increase in model size does not guarantee proportionate performance improvements. Consequently, a judicious balance between model size and the quantity and diversity of training data is essential to maximize generalization performance.
## 7 Related Work
Information Extraction.Previously, Information Extraction (IE) focused on task-specific models optimized for narrow objectives like Entity Extraction (Yan et al., 2021; Wang et al., 2021) and Event Extraction (Yan et al., 2021; Wang et al., 2021; Du and Cardie, 2020; Lu et al., 2021; Gao et al., 2023). However, their task-specific design inhibits knowledge sharing across various IE tasks (Lin et al., 2020). This shortcoming paved the way for Universal Information Extraction (UIE), which aims at building versatile models for extracting diverse structured data from unstructured text (Lin et al., 2020; Lu et al., 2022; Lou et al., 2023; Liu et al., 2023). Current UIE methods employ coarse-grained instructions with basic task descriptions, overlooking essential extraction rules and output format descriptions. To address this, we introduce a fine-grained benchmark dataset for IE with augmented instructions encompassing task descriptions, extraction rules, output formats, and demonstration examples.
Large Language Models.Large Language Models (LLMs) are central to NLP due to their impressive performance on numerous tasks (Devlin et al., 2018; Radford et al., 2019; Lewis et al., 2019; Raffel et al., 2020; Brown et al., 2020; Chowdhery et al., 2022). Pretrained on vast corpora, LLMs can be fine-tuned for specialized tasks. Recently, instruction tuning has emerged, wherein LLMs are fine-tuned using task instructions for enhanced zero-shot task generalization (Sanh et al., 2021; Chung et al., 2022; Ouyang et al., 2022). By scaling training tasks, prompts, and LLM sizes, performance improves markedly. Combining instruction tuning with demonstration examples further optimizes results (Min et al., 2021; Chen et al., 2021; Ye et al., 2023). In this work, we assess LLMs with different architectures (encoder-decoder and decoder-only) and ChatGPT for the IE task.
## 8 Conclusion
This paper introduced a fine-grained IE benchmark dataset, tailored for LLMs, utilizing augmented instructions to address the limitations of traditional coarse-grained IE. Through extensive evaluation, encoder-decoder models, notably T5 and FLAN-T5, showed prowess in generalizing to unseen information types, owing to their capacity for capturing complex input-output relationships. However, they exhibited limited adaptability to novel task forms. ChatGPT, a decoder-only model with in-context learning, demonstrated remarkable flexibility and adaptability. Furthermore, we found that model scale is not the sole determinant of performance, emphasizing the importance of architecture, data diversity, and learning techniques. Our work contributes to the evolution of IE by enabling more refined IE through LLMs. Future endeavors should
Figure 8: Impact of the number of training examples per information type.
focus on combining the strengths of different architectures and devising training methods that optimize both specificity and adaptability in IE tasks.
|
2306.07163 | A Batch-to-Online Transformation under Random-Order Model | We introduce a transformation framework that can be utilized to develop
online algorithms with low $\epsilon$-approximate regret in the random-order
model from offline approximation algorithms. We first give a general reduction
theorem that transforms an offline approximation algorithm with low average
sensitivity to an online algorithm with low $\epsilon$-approximate regret. We
then demonstrate that offline approximation algorithms can be transformed into
a low-sensitivity version using a coreset construction method. To showcase the
versatility of our approach, we apply it to various problems, including online
$(k,z)$-clustering, online matrix approximation, and online regression, and
successfully achieve polylogarithmic $\epsilon$-approximate regret for each
problem. Moreover, we show that in all three cases, our algorithm also enjoys
low inconsistency, which may be desired in some online applications. | Jing Dong, Yuichi Yoshida | 2023-06-12T14:50:21Z | http://arxiv.org/abs/2306.07163v2 | # General Transformation for Consistent Online Approximation Algorithms
###### Abstract
We introduce a transformation framework that can be utilized to develop online algorithms with low \(\epsilon\)-approximate regret in the random-order model from offline approximation algorithms. We first give a general reduction theorem that transforms an offline approximation algorithm with low average sensitivity to an online algorithm with low \(\epsilon\)-approximate regret. We then demonstrate that offline approximation algorithms can be transformed into a low-sensitivity version using a coreset construction method. To showcase the versatility of our approach, we apply it to various problems, including online \((k,z)\)-clustering, online matrix approximation, and online regression, and successfully achieve polylogarithmic \(\epsilon\)-approximate regret for each problem. Moreover, we show that in all three cases, our algorithm also enjoys low inconsistency, which may be desired in some online applications.
## 1 Introduction
In the online learning literature, the stochastic and adversarial settings are two of the most well-studied cases. The stochastic setting is often not satisfied in real applications, while in the adversarial setting the performance guarantees of online algorithms are considerably compromised. This is particularly true for important online tasks such as \(k\)-means clustering, whose adversarial-setting guarantees are significantly worse than their offline or stochastic counterparts (Cohen-Addad et al., 2021). As a result, their practical applicability is greatly limited.
Recently, the random-order model has been introduced as a means of modeling learning scenarios that fall between the stochastic and adversarial settings (Garber et al., 2020; Sherman et al., 2021). In the random-order model, the adversary is permitted to choose the set of losses, with full knowledge of the learning algorithm, but has no influence over the order in which the losses are presented to the learner. Instead, the loss sequence is uniformly and randomly permuted. This effectively bridges the gap between the stochastic setting, where only the distribution of losses can be chosen by the setting, and the adversarial setting, where the adversary has complete control over the order of the losses presented to the learner.
In this work, we introduce a batch-to-online transformation framework designed specifically for the random-order model. Our framework facilitates the conversion of an offline approximation
algorithm into an online learning algorithm with \(\epsilon\)-approximate regret guarantees. Our primary technical tool is average sensitivity, which was initially proposed by Varma and Yoshida (2021) to describe the algorithm's average-case sensitivity against input perturbations. We demonstrate that any offline approximation algorithm with low average sensitivity will result in a transformed online counterpart that has low \(\epsilon\)-approximate regret. To achieve small average sensitivity for offline algorithms, we leverage the idea of a coreset (Agarwal et al., 2005; Har-Peled and Mazumdar, 2004), which is a small but representative subset of a larger dataset that preserves important properties of the original data. We present a coreset construction method that attains low average sensitivity, and when combined with the approximation algorithm, yields an overall algorithm with low average sensitivity.
To showcase the practicality and versatility of our framework, we apply it to three popular online learning problems: online \((k,z)\)-clustering, online matrix approximation, and online regression. In all three cases, our approach yields a polylogarithmic \(\epsilon\)-approximate regret. Furthermore, due to the low average sensitivity of our algorithms, they also enjoy low inconsistency, which is the cumulative number of times the solution changes. This additional property may prove useful in certain online settings. We note that this inconsistency has also been investigated in the classic online learning and multi-armed bandits literature (Agrawal et al., 1988; Cesa-Bianchi et al., 2013).
## 2 Preliminaries
For a positive integer \(n\), let \([n]\) denote the set \(\{1,2,\ldots,n\}\). For real values \(a,b\in\mathbb{R}\), \(a\in(1\pm\epsilon)b\) is a shorthand for \((1-\epsilon)b\leq a\leq(1+\epsilon)b\).
### Offline Learning
We consider a general class of learning problems. Let \(\mathcal{X}\) be the input space, \(\Theta\) be the parameter space, and \(\ell:\Theta\times\mathcal{X}\rightarrow\mathbb{R}_{+}\) be a loss function. For simplicity, we assume the loss is bounded, i.e., \(\ell(\theta,x)\leq 1\). Given a set of \(n\) data points \(X\in\mathcal{X}^{n}\), we are asked to learn a parameter \(\theta\in\Theta\) that minimizes the objective value \(\ell(\theta,X):=\sum_{x\in X}\ell(\theta,x)\). We call this problem the _offline learning problem_.
When the exact minimization of the loss function \(\ell\) is NP-hard or computationally demanding, one may only hope to obtain an approximate solution efficiently. Specifically, for \(\alpha>0\), we say a solution \(\theta\in\Theta\) is _\(\alpha\)-approximate_ for \(X\in\mathcal{X}^{n}\) if \(\ell(\theta,X)\leq\alpha\cdot\min_{\tilde{\theta}\in\Theta}\ell(\tilde{\theta },X)\). The value \(\alpha\) is called the _approximation ratio_ of the solution. We say a (possibly randomized) algorithm \(\mathcal{A}\) is _\(\alpha\)-approximate_ if the expected approximation ratio of the output solution is at most \(\alpha\).
### Online Learning with Random-Order Model
In the _online learning problem_, instead of receiving all points at once, the data arrives sequentially throughout a time horizon \(n\). Specifically, the data point comes one by one, where \(x_{t}\) comes at time \(t\in[n]\). At the time \(t\), using the collected data points \(X_{t-1}:=(x_{1},\ldots,x_{t-1})\), we are asked to output a parameter \(\theta_{t}\in\Theta\). Then we receive the data point \(x_{t}\) and incur a loss of \(\ell(\theta_{t},x_{t})\). In this work, we consider the _random-order model_, in which the data points \(x_{1},\ldots,x_{n}\) may be chosen adversarially, but their ordering is randomly permuted before the algorithm runs.
To evaluate our performance, we use the notion of regret, which is the cumulative difference between our solution and the best solution in hindsight. In cases where obtaining the exact so
lution is hard, and one may only hope to obtain an approximate solution efficiently, we use the \(\epsilon\)-_approximate regret_.
**Definition 2.1** (\(\epsilon\)-approximate regret for the random-order model).: _Given a (randomized) algorithm \(\mathcal{A}\) that outputs a sequence of parameters \(\theta_{1},\ldots,\theta_{n}\) when given input \(x_{1},\ldots,x_{n}\). The \(\epsilon\)-approximate regret of \(\mathcal{A}\) for the random-order model is defined as_
\[\operatorname{Regret}_{\epsilon}(n):=\operatorname*{\mathbb{E}}_{\mathcal{A},\{x_{t}\}}\left[\sum_{t=1}^{n}\ell(\theta_{t},x_{t})-(1+\epsilon)\cdot\min_{ \tilde{\theta}\in\Theta}\sum_{t=1}^{n}\ell(\tilde{\theta},x_{t})\right]\,.\]
_where the randomness is over the internal randomness of \(\mathcal{A}\) and the ordering of data points. When \(\epsilon=0\), we simply call it the regret._
In certain cases, online algorithms are required to maintain a good solution while minimizing _inconsistency_, which is quantified as the number of times the solution changes. This can be expressed formally as \(\operatorname{Inconsistency}(n)=\operatorname*{\mathbb{E}}_{\mathcal{A},\{x_{t }\}}[\sum_{t=1}^{n-1}\mathbb{I}\{\theta_{t}\neq\theta_{t+1}\}]\), where \(\mathbb{I}\) is the indicator function.
### Average sensitivity
On a high level, the notion of average sensitivity describes the differences in the performance of a randomized algorithm with respect to input changes. This difference is captured by the total variation distance, which is defined below.
**Definition 2.2**.: _For a measurable space \((\Omega,\mathcal{F})\) and probability measures \(P,Q\) defined on \((\Omega,\mathcal{F})\), the total variation distance between \(P\) and \(Q\) is defined as \(\operatorname{TV}(P,Q):=\sup_{A\in\mathcal{F}}|P(A)-Q(A)|\)._
Equipped with this, the average sensitivity of a randomized algorithm is formally defined as the average total variation distance between the algorithm's output on two training data sets that differ by deleting one point randomly. For a dataset \(X=(x_{1},\ldots,x_{n})\in\mathcal{X}^{n}\) and \(i\in[n]\), let \(X^{(i)}\) denote the set \((x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})\) obtained by deleting the \(i\)-th data point. Then, the following definition gives a detailed description of the notion:
**Definition 2.3** (Average Sensitivity Varma and Yoshida (2021); Yoshida and Ito (2022)).: _Let \(\mathcal{A}\) be a (randomized) algorithm that takes an input \(X\in\mathcal{X}^{n}\) and outputs \(\mathcal{A}(X)\). For \(\beta:\mathbb{Z}_{+}\to\mathbb{R}_{+}\), we say that the average sensitivity of \(\mathcal{A}\) is at most \(\beta\) if_
\[\frac{1}{n}\sum_{i=1}^{n}\operatorname{TV}(\mathcal{A}(X),\mathcal{A}(X^{(i)} ))\leq\beta(n)\,.\]
_for any \(X\in\mathcal{X}^{n}\), where we identify \(\mathcal{A}(X)\) and \(\mathcal{A}(X^{(i)})\) with their distributions._
## 3 Batch-to-Online Transformation in the Random-Order Model
In this section, we describe a general framework that can transform any offline \((1+\epsilon)\)-approximate algorithm into an online algorithm with low \(\epsilon\)-approximate regret. Our goal is to show the following.
**Theorem 3.1**.: _Let \(\mathcal{A}\) be a (randomized) \((1+\epsilon)\)-approximate algorithm for the offline learning algorithm with average sensitivity \(\beta:\mathbb{Z}_{+}\to\mathbb{R}_{+}\). Then, there exists an online learning algorithm in the random-order model such that \(\operatorname{Regret}_{\epsilon}(n)=O\left(\sum_{t=1}^{n}\beta(t)+1\right)\)._
Our method is described in Algorithm 1. Let \(\mathcal{A}\) be an approximation algorithm for the offline learning algorithm. Then, at each time step, based on the collected data \(X_{t-1}\), we simply output \(\theta_{t}=\mathcal{A}(X_{t-1})\).
```
Input: Offline approximation algorithm \(\mathcal{A}\).
1for\(t=1,\ldots,n\)do
2 Obtain \(\theta_{t}\) by running \(\mathcal{A}\) on \(X_{t-1}\).
3 Receive \(x_{t}\) and \(\ell(\theta_{t},x_{t})\).
```
**Algorithm 1**General batch-to-online conversion
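A minimal code sketch of Algorithm 1 is given below: at every step, the offline approximation algorithm is simply re-run on the data collected so far. The offline solver and the loss are abstract callables supplied by the caller; nothing here is specific to a particular learning problem.

```
def batch_to_online(offline_solver, loss, stream):
    """Batch-to-online conversion (sketch of Algorithm 1).

    offline_solver: maps a list of data points X_{t-1} to a parameter theta_t
                    (it may be randomized internally).
    loss:           maps (theta, x) to a non-negative number.
    stream:         iterable of data points in their (random) arrival order.
    Returns the cumulative loss incurred by the online algorithm.
    """
    collected = []
    total_loss = 0.0
    for x_t in stream:
        theta_t = offline_solver(collected)  # run A on X_{t-1}
        total_loss += loss(theta_t, x_t)     # then observe x_t and pay the loss
        collected.append(x_t)
    return total_loss
```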
To show that Algorithm 1 achieves a low approximate regret when \(\mathcal{A}\) has a low average sensitivity, the following lemma is useful.
**Lemma 3.2**.: _Let \(\mathcal{A}\) be a (randomized) algorithm for the offline learning problem with average sensitivity \(\beta:\mathbb{Z}_{+}\rightarrow\mathbb{R}_{+}\). Then for any input \(X\in\mathcal{X}^{n}\), we have_
\[\frac{1}{n}\sum_{i=1}^{n}\underset{\mathcal{A}}{\mathbb{E}}[\ell(\mathcal{A}(X ^{(i)}),x_{i})]=\frac{1}{n}\sum_{i=1}^{n}\underset{\mathcal{A}}{\mathbb{E}}[ \ell(\mathcal{A}(X),x_{i})]\pm\beta(n)\,,\]
Proof of Theorem 3.1.: Consider Algorithm 1. For any \(t\in[n]\), we have
\[\underset{\mathcal{A},\{x_{i}\}}{\mathbb{E}}\left[\ell(\theta_ {t+1},x_{t+1})-\frac{1}{t}\ell(\theta_{t+1},X_{t})\right]=\underset{\mathcal{ A},\{x_{i}\}}{\mathbb{E}}\left[\frac{1}{t}\sum_{i=1}^{t}\left(\ell(\theta_{t+1},x_{ t+1})-\ell(\theta_{t+1},x_{i})\right)\right]\] \[=\underset{\mathcal{A},\{x_{i}\}}{\mathbb{E}}\left[\frac{1}{t} \sum_{i=1}^{t}\left(\ell(\mathcal{A}(X_{t}),x_{t+1})-\ell(\mathcal{A}(X_{t}),x _{i})\right)\right]\] \[\leq\underset{\mathcal{A},\{x_{i}\}}{\mathbb{E}}\left[\frac{1}{t }\sum_{i=1}^{t}\left(\ell(\mathcal{A}(X_{t}),x_{t+1})-\ell(\mathcal{A}(X_{t}^{ (i)}),x_{i})\right)\right]+\beta(t)\] (By Lemma 3.2 ) \[=\underset{\mathcal{A},\{x_{i}\}}{\mathbb{E}}\left[\frac{1}{t} \sum_{i=1}^{t}\left(\ell(\mathcal{A}(X_{t}),x_{t+1})-\ell(\mathcal{A}(X_{t}^{ (i)}),x_{t+1})\right)\right]+\beta(t)\] \[\leq\underset{\mathcal{A},\{x_{i}\}}{\mathbb{E}}\left[\frac{1}{t }\sum_{i=1}^{t}\mathrm{TV}(\mathcal{A}(X_{t}),\mathcal{A}(X_{t}^{(i)}))\right] +\beta(t)\leq 2\beta(t)\,,\]
where the last equality follows by replacing \(x_{i}\) with \(x_{t+1}\) in \(\ell(\mathcal{A}(X_{t}^{(i)}),x_{i})\) because they have the same distribution, and the last inequality is by the average sensitivity of the algorithm.
Rearranging the terms, we have
\[\underset{\mathcal{A},\{x_{i}\}}{\mathbb{E}}\left[\ell(\theta_{t+1},x_{t+1}) \right]\leq\underset{\mathcal{A},\{x_{i}\}}{\mathbb{E}}\left[\frac{\ell( \theta_{t+1},X_{t})}{t}\right]+2\beta(t)\leq\underset{\{x_{i}\}}{\mathbb{E}} \left[\frac{(1+\epsilon)\mathrm{OPT}_{t}}{t}\right]+2\beta(t)\,,\]
where \(\mathrm{OPT}_{t}:=\min_{\theta}\ell(\theta,X_{t})\) is the optimal value with respect to \(X_{t}\), and the second inequality holds because the approximation ratio of \(\theta_{t+1}\) is \(1+\epsilon\) in expectation.
Taking summation over both sides, we have
\[\mathop{\mathbb{E}}_{\mathcal{A},\{x_{i}\}}\left[\sum_{t=1}^{n}\ell( \theta_{t},x_{t})\right]=\mathop{\mathbb{E}}_{\mathcal{A},\{x_{i}\}}\left[\ell( \theta_{1},x_{1})\right]+\mathop{\mathbb{E}}_{\mathcal{A},\{x_{i}\}}\left[\sum_ {t=1}^{n-1}\ell(\theta_{t+1},x_{t+1})\right]\] \[\leq 1+\mathop{\mathbb{E}}_{\{x_{i}\}}\left[\sum_{t=1}^{n-1}\frac{( 1+\epsilon)\mathrm{OPT}_{t}}{t}\right]+2\sum_{t=1}^{n-1}\beta(t)\,.\]
Fix the ordering \(x_{1},\ldots,x_{n}\), and let \(c_{i}\) (\(i\in[n]\)) be the loss incurred by \(x_{i}\) in \(\mathrm{OPT}_{n}\). In particular, we have \(\mathrm{OPT}_{n}=\sum_{i=1}^{n}c_{i}\). Note that the \(c_{i}\)'s are random variables depending on the ordering of data points, but their sum, \(\mathrm{OPT}_{n}\), is deterministic. Then, we have \(\mathrm{OPT}_{t}\leq\sum_{i=1}^{t}c_{i}\) because \(\mathrm{OPT}_{t}\) minimizes the loss up to time \(t\). Hence, we have
\[\mathop{\mathbb{E}}_{\{x_{i}\}}\left[\sum_{t=1}^{n}\frac{\mathrm{ OPT}_{t}}{t}\right]\leq\mathop{\mathbb{E}}_{\{x_{i}\}}\left[\sum_{t=1}^{n}\frac{ \sum_{i=1}^{t}c_{i}}{t}\right]=\mathop{\mathbb{E}}_{\{x_{i}\}}\left[\sum_{i=1} ^{n}c_{i}\sum_{t=i}^{n}\frac{1}{t}\right]=\sum_{i=1}^{n}\mathop{\mathbb{E}}_{ \{x_{i}\}}[c_{i}]\sum_{t=i}^{n}\frac{1}{t}\] \[=\frac{\mathrm{OPT}_{n}}{n}\cdot\sum_{i=1}^{n}\sum_{t=i}^{n}\frac {1}{t}=\frac{\mathrm{OPT}_{n}}{n}\cdot n=\mathrm{OPT}_{n}\,.\]
Therefore, we have
\[\mathop{\mathbb{E}}_{\mathcal{A},\{x_{i}\}}\left[\sum_{t=1}^{n}\ell(\theta_{t },x_{t})\right]-(1+\epsilon)\mathrm{OPT}_{n}=O\left(\sum_{t=1}^{n}\beta(t)+1 \right)\,.\qed\]
## 4 Approximation Algorithm with Low Average Sensitivity via Coreset
To design approximation algorithms for the offline learning problem with low average sensitivity, we consider the following approach: We first construct a small subset of the input that well preserves objective functions, called a coreset, with small average sensitivity, and then apply any known approximation algorithm on the coreset. Coreset is formally defined as follows:
**Definition 4.1** (Har-Peled and Mazumdar (2004); Agarwal et al. (2005)).: _Let \(\ell:\Theta\times\mathcal{X}\to\mathbb{R}_{+}\) be a loss function and let \(X\in\mathcal{X}^{n}\). For \(\epsilon>0\), we say that a weighted set \((Y,w)\) with \(Y\subseteq X\) and \(w:Y\to\mathbb{R}_{+}\) is an \(\epsilon\)-coreset of \(X\) with respect to \(\ell\) if for any \(\theta\in\Theta\), we have \(\sum_{y\in Y}w(y)\ell(\theta,y)\in(1\pm\epsilon)\sum_{x\in X}\ell(\theta,x)\)._
Now, we consider a popular method for constructing coresets based on importance sampling and show that it enjoys a low average sensitivity. For a data point \(x\in X\), its _sensitivity_\(\sigma_{X}(x)\)1 is its maximum contribution to the loss of the whole dataset, or more formally
Footnote 1: The reader should not confuse sensitivity, which is a measure for data points, with average sensitivity, which is a measure for algorithms.
\[\sigma_{X}(x)=\sup_{\theta\in\Theta}\frac{\ell(\theta,x)}{\ell(\theta,X)}\,. \tag{1}\]
```
Input: Loss function \(\ell:\Theta\times\mathcal{X}\to\mathbb{R}_{+}\), dataset \(X\in\mathcal{X}^{n}\), \(m\in\mathbb{N}\), and \(\epsilon>0\)
1 For each \(x\in X\), compute \(\sigma_{X}(x)\) and set \(p(x)=\sigma_{X}(x)/\sum_{x^{\prime}\in X}\sigma_{X}(x^{\prime})\).
2 Let \(S\) be an empty set.
3for\(i=1,\ldots,m\)do
4 Sample \(x\) with probability \(p(x)\).
5 Sample \(\tilde{p}\) from \([p(x),(1+\epsilon/2)p(x)]\) uniformly at random.
6if\(w(x)\) is undefinedthen
7\(S\gets S\cup\{x\}\).
8\(w(x)\gets 1/\tilde{p}\).
9else
10\(w(x)\gets w(x)+1/\tilde{p}\).
11return\((S,w)\).
```
**Algorithm 2**Coreset Construction Based on Sensitivity Sampling
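A minimal sketch of Algorithm 2 in code form is shown below; it assumes the per-point sensitivities \(\sigma_{X}(x)\) (or upper bounds on them) are supplied by the caller, since computing them is problem-specific. The uniform perturbation of the sampling probability before inverting it mirrors line 5 of the algorithm.

```
import random

def sensitivity_sampling_coreset(points, sensitivities, m, eps, seed=0):
    """Coreset construction by sensitivity sampling (sketch of Algorithm 2).

    points:        list of data points.
    sensitivities: list of (upper bounds on) sigma_X(x), one per point.
    m:             number of sampling rounds.
    eps:           approximation parameter controlling the weight perturbation.
    Returns a dict mapping sampled point indices to their accumulated weights.
    """
    rng = random.Random(seed)
    total = sum(sensitivities)
    probs = [s / total for s in sensitivities]
    weights = {}
    for _ in range(m):
        i = rng.choices(range(len(points)), weights=probs, k=1)[0]
        # Perturb the probability uniformly in [p, (1 + eps/2) * p] before inverting.
        p_tilde = rng.uniform(probs[i], (1 + eps / 2) * probs[i])
        weights[i] = weights.get(i, 0.0) + 1.0 / p_tilde
    return weights
```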
It is known that we can construct a coreset as follows: a point \(x\in X\) is sampled with probability \(p(x):=\sigma_{X}(x)/\sum_{x^{\prime}\in X}\sigma_{X}(x^{\prime})\), and then its weight in the output coreset is increased by \(1/\tilde{p}\), where \(\tilde{p}\) is a slight perturbation of \(p(x)\). This process is repeated a fixed number of times, where the exact number depends on the approximation ratio of the coreset. See Algorithm 2 for details. We can bound its average sensitivity as follows:
**Lemma 4.2**.: _The average sensitivity of Algorithm 2 is \(O\left(\epsilon^{-1}m/n\right)\)._
A general bound on the number of times we need to repeat the process, i.e., \(m\) in Algorithm 2, to obtain an \(\epsilon\)-coreset is known (see, e.g., Theorem 5.5 of Braverman et al. (2016)). However, we do not discuss it here because better bounds are known for specific problems and we do not use the general bound in the subsequent sections.
## 5 Online \((k,z)\)-Clustering
In online applications, unlabelled data are abundant and their structure can be essential, and clustering serves as an important tool for analyzing them. In this section, as an application of our general batch-to-online transformation, we describe an online \((k,z)\)-clustering method that enjoys low regret.
### Problem setup
The online \((k,z)\)-clustering problem (Cohen-Addad et al., 2021) is an instance of the general online learning problem described in Section 2. We describe the problem as follows: Let \(k\geq 1\) be an integer and \(z\geq 1\) be a real value. Over a time horizon \(n\), at each time step \(t\), a data point \(x_{t}\in\mathbb{R}^{d}\) is given. Using the set of data points \(X_{t-1}=\{x_{1},\ldots,x_{t-1}\}\), we are asked to compute a set \(Z_{t}=\{z_{1},\ldots,z_{k}\}\) of \(k\) points in \(\mathbb{R}^{d}\) that minimize \(\ell\left(Z_{t},x_{t}\right):=\min_{j=1,\ldots,k}\|x_{t}-z_{j}\|_{2}^{z}\), which is the \(z\)-th power of the Euclidean distance between \(x_{t}\) and the closest point in \(Z_{t}\). Note that \(Z_{t}\) plays the role of \(\theta_{t}\) in the general online learning problem. The regret and \(\epsilon\)-approximate regret are defined accordingly.
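For concreteness, the per-point loss used here can be evaluated as in the short sketch below (plain Python for illustration; any numerical library would serve equally well).

```
def kz_loss(centers, x, z=2.0):
    """(k, z)-clustering loss of a single point against a center set Z_t.

    centers: list of d-dimensional points (the candidate centers).
    x:       a d-dimensional point.
    Returns min_j ||x - z_j||_2^z.
    """
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(sq_dist(c, x) for c in centers) ** (z / 2.0)

# Example: two centers in the plane; z = 2 recovers the k-means cost of the point.
print(kz_loss([(0.0, 0.0), (3.0, 4.0)], (1.0, 1.0)))  # 2.0
```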
### Method and results
One important ingredient of our method is the coreset construction method proposed by Huang and Vishnoi (2020). The method provides a unified two-stage importance sampling framework, which allows for a coreset whose size is dimension independent. Specifically, the method constructs an \(\epsilon\)-coreset of size \(\tilde{O}\left(\min\left\{\epsilon^{-2z-2}k,2^{2z}\epsilon^{-4}k^{2}\right\}\right)\) in \(\tilde{O}(ndk)\) time, where the \(\tilde{O}\) hides logarithmic factors in \(n\) and \(k\). We remark that the importance sampling steps in the framework are similar to the ones described in Section 4, which allows us to analyze its average sensitivity.
Algorithm 3 gives a brief description of our algorithm, while a detailed description is presented in the appendix. The algorithm adheres to the standard transformation approach, whereby an offline approximation algorithm is run on the coreset derived from the aggregated data.
```
Input: Offline algorithm \(\mathcal{A}\) for \((k,z)\)-clustering, approximation ratio \(\epsilon\in(0,1)\).
1\(\epsilon^{\prime}\leftarrow\epsilon/3\).
2for\(t=1,\ldots,n\)do
3 Construct an \(\epsilon^{\prime}\)-coreset \(C_{t-1}=(S_{t-1},\omega_{t-1})\) on \(X_{t-1}\).
4 Obtain a cluster set \(Z_{t}\) by running \(\mathcal{A}\) with approximation ratio of \((1+\epsilon^{\prime})\) on \(C_{t-1}\).
5 Receive \(x_{t}\in\mathbb{R}^{d}\) and \(\ell(Z_{t},x_{t})\in\mathbb{R}_{+}\).
```
**Algorithm 3**Online consistent \((k,z)\)-clustering
**Theorem 5.1**.: _For any \(\epsilon\in(0,1)\), Algorithm 3 gives a regret bound of_
\[\mathrm{Regret}_{\epsilon}(n)\leq O\left(\left((168z)^{10z}\epsilon^{-5z-15}k ^{5}\log\frac{kn}{\epsilon}+\epsilon^{-2z-2}k\log k\log\frac{kn}{\epsilon} \right)\log n\right)\,.\]
_Moreover, there exists an algorithm that enjoys the same regret bound and an inconsistency bound of \(\mathrm{Inconsistency}(n)=O\left(\left((168z)^{10z}\epsilon^{-5z-15}k^{5}\log(\epsilon^ {-1}kn)+\epsilon^{-2z-2}k\log k\log(\epsilon^{-1}kn)\right)\log n\right)\) for \((k,z)\)-clustering._
**Remark 5.1**.: _When \(z=2\), previous results for the adversarial setting show an \(\epsilon\)-approximate regret bound of \(O(k\sqrt{d^{3}n}\log(\epsilon^{-1}dkn))\)(Cohen-Addad et al., 2021). In comparison, although our regret is for the random-order model, our method and results accommodate a range of values for \(z\), and the regret bound is only polylogarithmically dependent on \(n\) and is independent of the dimension \(d\)._
## 6 Online Low-Rank Matrix Approximation
Low-rank matrix approximation serves as a fundamental tool in statistics and machine learning. The problem is to find a rank-\(k\) matrix that approximates an input matrix \(\mathbf{A}\in\mathbb{R}^{d\times n}\) as closely as possible. In this section, we apply the transformation framework to an offline approximation algorithm to obtain a low-regret online algorithm.
### Problem setup
Low-rank matrix approximationBy the singular value decomposition (SVD), a rank-\(r\) matrix \(\mathbf{A}\in\mathbb{R}^{d\times n}\) can be decomposed as \(\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\), where \(\mathbf{U}\in\mathbb{R}^{d\times r}\) and \(\mathbf{V}\in\mathbb{R}^{n\times r}\) are orthonormal
matrices, \(\mathbf{\Sigma}\in\mathbb{R}^{r\times r}\) is a diagonal matrix with \(\mathbf{A}\)'s singular values on the diagonal. The best rank-\(k\) approximation of \(\mathbf{A}\) is given by
\[\mathbf{A}_{k}=\mathbf{U}_{k}\mathbf{\Sigma}_{k}\mathbf{V}_{k}^{\top}=\operatorname {argmin}_{\mathbf{B}\in\mathbb{R}^{d\times n}:\operatorname{rank}(\mathbf{B}) \leq k}\lVert\mathbf{A}-\mathbf{B}\rVert_{F}\,\]
where \(\lVert\cdot\rVert_{F}\) denotes the Frobenius norm, \(\mathbf{\Sigma}_{k}\in\mathbb{R}^{k\times k}\) is a diagonal matrix with \(\mathbf{A}_{k}\)'s top \(k\) singular values on the diagonal, and \(\mathbf{U}_{k}\in\mathbb{R}^{d\times k}\) and \(\mathbf{V}_{k}\in\mathbb{R}^{n\times k}\) are orthonormal matrices obtained from \(\mathbf{U}\) and \(\mathbf{V}\), respectively, by gathering corresponding columns. The best rank-\(k\) approximation can also be found by projecting \(\mathbf{A}\) onto the span of its top \(k\) singular vectors, that is, \(\mathbf{A}_{k}=\mathbf{U}_{k}\mathbf{U}_{k}^{\top}\mathbf{A}\). Then, we can say an orthonormal matrix \(\mathbf{Z}\) is an \(\epsilon\)-approximate solution if
\[\left\lVert\mathbf{A}-\mathbf{Z}\mathbf{Z}^{\top}\mathbf{A}\right\rVert_{F} \leq(1+\epsilon)\left\lVert\mathbf{A}-\mathbf{U}_{k}\mathbf{U}_{k}^{\top} \mathbf{A}\right\rVert_{F}\,.\]
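As a quick numerical illustration (not part of the paper's algorithm), the snippet below computes \(\mathbf{A}_k\) via the truncated SVD and checks that it coincides with the projection \(\mathbf{U}_k\mathbf{U}_k^{\top}\mathbf{A}\) used in the approximation criterion above; the matrix sizes are arbitrary.

```python
import numpy as np

def best_rank_k(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) V^T
    Uk = U[:, :k]                                      # top-k left singular vectors
    A_k = Uk @ np.diag(s[:k]) @ Vt[:k, :]              # truncated SVD
    assert np.allclose(A_k, Uk @ (Uk.T @ A))           # equals the projection U_k U_k^T A
    return A_k, Uk

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
A_k, Uk = best_rank_k(A, k=5)
opt = np.linalg.norm(A - A_k, "fro")                   # the optimal value ||A - A_k||_F
```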
The matrix approximation problem serves as an important tool in data analytics and is closely related to numerous machine learning methods such as principal component analysis and least square analysis. When dealing with streaming data, the online version of the matrix approximation problem becomes a vital tool for designing online versions of the machine learning algorithms mentioned above.
Online matrix approximation. Through a time horizon of \(n\), we receive a column of \(\mathbf{A}\), \(a_{t}\in\mathbb{R}^{d}\), at each time step \(t\). We are then asked to compute \(\mathbf{Z}_{t}\in\mathbb{R}^{d\times k}\) that minimizes
\[\ell(\mathbf{Z}_{t},a_{t})=\left\lVert a_{t}-\mathbf{Z}_{t}\mathbf{Z}_{t}^{ \top}a_{t}\right\rVert_{F}\,.\]
Without loss of generality, we assume that the losses are bounded in \([0,1]\). We remark that similar assumptions are also made in Nie et al. (2016).
The online matrix approximation problem serves as a core component of online machine learning algorithms such as principal component analysis. These algorithms are important to a range of applications, such as online recommendation systems and online experimental design (Warmuth and Kuzmin, 2008; Nie et al., 2016).
### Method and results
In the context of low-rank matrix approximation, the coreset of a matrix is called a projection-cost preserving sample, defined as follows:
**Definition 6.1** (Rank-\(k\) Projection-Cost Preserving Sample Cohen et al. (2017)).: _For \(n^{\prime}<n\), a subset of rescaled columns \(\mathbf{C}\in\mathbb{R}^{d\times n^{\prime}}\) of \(\mathbf{A}\in\mathbb{R}^{d\times n}\) is a \((1+\epsilon)\) projection-cost preserving sample if, for all rank-\(k\) orthogonal projection matrices \(\mathbf{X}\in\mathbb{R}^{d\times d}\), \((1-\epsilon)\lVert\mathbf{A}-\mathbf{X}\mathbf{A}\rVert_{F}^{2}\leq\lVert \mathbf{C}-\mathbf{X}\mathbf{C}\rVert_{F}^{2}\leq(1+\epsilon)\lVert\mathbf{A}- \mathbf{X}\mathbf{A}\rVert_{F}^{2}\)._
Such sketches that satisfy Definition 6.1 can be constructed via importance sampling-based routines, which are modifications of the "leverage scores". Specifically, for \(i\)-th column \(a_{i}\) of matrix \(A\), the _ridge leverage score_ is defined as \(\tau_{i}(\mathbf{A})=a_{i}^{\top}\left(\mathbf{A}\mathbf{A}^{\top}+\frac{ \lVert\mathbf{A}-\mathbf{A}_{k}\rVert_{F}^{2}}{k}\mathbf{I}\right)^{\dagger}a_ {i}\), where \(\dagger\) denotes the Moore-Penrose pseudoinverse of a matrix (Cohen et al., 2017).
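For concreteness, a direct (unoptimized) numpy implementation of these ridge leverage scores might look as follows; practical systems approximate the scores, but the exact formula is short enough to state.

```python
import numpy as np

def ridge_leverage_scores(A, k):
    # tau_i(A) = a_i^T (A A^T + ||A - A_k||_F^2 / k * I)^+ a_i, one score per column a_i.
    d, n = A.shape
    s = np.linalg.svd(A, compute_uv=False)
    ridge = np.sum(s[k:] ** 2) / k                     # ||A - A_k||_F^2 / k
    M_pinv = np.linalg.pinv(A @ A.T + ridge * np.eye(d))
    return np.einsum("ij,jk,ki->i", A.T, M_pinv, A)
```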
Now, we introduce our online matrix approximation algorithm in Algorithm 4, which builds upon our transformation framework. It computes the approximation of the matrix from the sketch derived from the aggregated matrix using ridge leverage scores.
**Theorem 6.2**.: _For any \(\epsilon\in(0,1)\), Algorithm 4 has regret \(\mathrm{Regret}_{\epsilon}(n)=O\left(\epsilon^{-2}k\log n\log(\epsilon^{-1}kn)\right)\). Moreover, there exists an algorithm for online low-rank matrix approximation that enjoys the same regret bound and an inconsistency bound of \(\mathrm{Inconsistency}(n)=O\left(\epsilon^{-2}k\log n\log(\epsilon^{-1}kn)\right)\)._
**Remark 6.1**.: _The online matrix approximation problem in the random-order setting has previously been investigated in the context of principal component analysis by Garber et al. (2020). They established a regret of \(O\left(\zeta^{-1}\sqrt{kn}\right)\), where \(\zeta\) is the smallest difference between two eigenvalues of \(\mathbf{A}_{t}\mathbf{A}_{t}^{\top}\). In contrast, our result gives a polylogarithmic bound on the \(\epsilon\)-regret, which translates to an exact regret of \(O(\epsilon\mathrm{OPT})+O\left(\epsilon^{-2}k\log n\log(\epsilon^{-1}kn)\right)\), with \(\mathrm{OPT}\) being the minimum possible cumulative loss attained by the best approximation in hindsight._
```
Input: Approximation parameters \(\epsilon\in(0,1)\).
1 Set \(\delta=O(\epsilon/n)\) and \(m=O\left(\epsilon^{-2}k\log(\delta^{-1}k)\right)\).
2for\(t=1,\ldots,n\)do
3 Construct \(\mathbf{A}_{t-1}\in\mathbb{R}^{d\times(t-1)}\) by concatenating \(a_{1},\ldots a_{t-1}\).
4 Let \(\mathbf{C}_{t-1}\in\mathbb{R}^{d\times m}\) be the zero matrix.
5for\(j=1,\ldots,m\)do
6 Sample the \(i\)-th column \(a_{i}\in\mathbb{R}^{d}\) of \(\mathbf{A}_{t-1}\) with probability \(p_{i}:=\frac{\tau_{i}(\mathbf{A}_{t-1})}{\sum_{j=1}^{t-1}\tau_{j}(\mathbf{A}_ {t-1})}\).
7 Sample \(w\in\mathbb{R}\) uniformly from \([1/\sqrt{tp_{i}},(1+\epsilon)/\sqrt{tp_{i}}]\).
8 Replace the \(j\)-th column of \(\mathbf{C}_{t-1}\) with \(w\cdot a_{i}\).
9 Set \(\mathbf{Z}_{t}\in\mathbb{R}^{d\times k}\) to the top \(k\) left singular vectors of \(\mathbf{C}_{t-1}\).
10 Receive \(a_{t}\in\mathbb{R}^{d}\) and \(\ell(\mathbf{Z}_{t},a_{t})\in\mathbb{R}_{+}\).
```
**Algorithm 4**Online consistent low rank matrix approximation
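Below is a minimal Python sketch of the main loop of Algorithm 4, reusing the `ridge_leverage_scores` helper sketched above. The sketch size `m`, the handling of the very first steps, and the choice of random seed are simplifications rather than the tuned settings behind Theorem 6.2.

```python
import numpy as np

def online_low_rank(columns, k, eps, m, seed=0):
    rng = np.random.default_rng(seed)
    seen, losses, Z = [], [], None
    for a_t in columns:                                     # t = 1, ..., n
        if len(seen) > k:                                   # wait for a nontrivial history
            A_prev = np.column_stack(seen)                  # A_{t-1}
            tau = ridge_leverage_scores(A_prev, k)
            p = tau / tau.sum()
            idx = rng.choice(A_prev.shape[1], size=m, p=p)  # line 6: sample columns
            t = len(seen) + 1
            w = rng.uniform(1.0 / np.sqrt(t * p[idx]),      # line 7: random rescaling
                            (1.0 + eps) / np.sqrt(t * p[idx]))
            C = A_prev[:, idx] * w                          # rescaled sampled columns
            U, _, _ = np.linalg.svd(C, full_matrices=False)
            Z = U[:, :k]                                    # line 9: top-k left singular vectors
        if Z is not None:                                   # line 10: pay the projection loss
            losses.append(np.linalg.norm(a_t - Z @ (Z.T @ a_t)))
        seen.append(np.asarray(a_t, dtype=float))
    return Z, losses
```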
## 7 Online Regression
In the online regression problem, at each time step \(t\in[n]\), we are asked to output a vector \(x_{t}\in\mathbb{R}^{d}\), and then we receive \(a_{t}\in\mathbb{R}^{d}\) and \(b_{t}\in\mathbb{R}\), which incur the loss \(\ell(x_{t},a_{t},b_{t})=\|a_{t}^{\top}x_{t}-b_{t}\|_{2}\). Without loss of generality, we assume that the losses are bounded in \([0,1]\). We note that similar assumptions are also made in (Cesa-Bianchi et al., 1996; Ouhamma et al., 2021).
With our general reduction framework, we show an \(\epsilon\)-regret upper bound as follows.
**Theorem 7.1**.: _For any \(\epsilon\in(0,1)\), Algorithm 5 has regret \(\mathrm{Regret}_{\epsilon}(n)=O\left(\epsilon^{-2}d\log n\log(\epsilon^{-1}dn)\right)\). Moreover, there exists an algorithm for online regression that enjoys the same regret bound and an inconsistency bound of \(\mathrm{Inconsistency}(n)=O\left(\epsilon^{-2}d\log n\log(\epsilon^{-1}dn)\right)\)._
**Remark 7.1**.: _In the stochastic setting, the online regression problem has been extensively investigated (Foster, 1991; Littlestone et al., 1995; Cesa-Bianchi et al., 1996; Ouhamma et al., 2021). Using online ridge regression or forward algorithms, the regret is shown to be \(O\left(d\log n\right)\). In the random-order model setting, Garber et al. (2020); Sherman et al. (2021) give \(O(\sqrt{n})\)-type regret when the loss is strongly convex or the matrix \(\mathbf{A}\) has a small condition number. In comparison, our result attains polylogarithmic \(\epsilon\)-approximate regret, while imposing no requirement on the loss function or the condition number. Our result can be translated to an exact regret of \(O(\epsilon\mathrm{OPT})+O\left(\epsilon^{-2}d\log n\log(\epsilon^{-1}dn)\right)\), with \(\mathrm{OPT}\) being the minimum possible cumulative loss attained by the best parameter in hindsight._
### Method and results
Similar to the low-rank matrix approximation problem, we utilize the leverage score method to learn a subspace that preserves information relevant to the regression. Specifically, we use the leverage score to learn an \(\epsilon\)-subspace embedding, which is defined as follows.
**Definition 7.2** (\(\epsilon\)-Subspace Embedding).: _A matrix \(\mathbf{S}\in\mathbb{R}^{m\times n}\) is said to be an \(\epsilon\)-subspace embedding of \(\mathbf{A}\in\mathbb{R}^{n\times d}\) if for any vector \(x\in\mathbb{R}^{d}\), we have \((1-\epsilon)\|\mathbf{A}x\|\leq\|\mathbf{SA}x\|\leq(1+\epsilon)\|\mathbf{A}x\|\)._
The subspace embedding serves the same role for regression as the coreset does for matrix approximation: it preserves the loss information while having a much lower dimension. In the online regression problem context, we define the leverage score as follows.
**Definition 7.3** (Leverage Score).: _Let \(\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\) be the singular value decomposition of \(\mathbf{A}\in\mathbb{R}^{n\times d}\). For \(i\in[n]\), the \(i\)-th leverage score of \(\mathbf{A}\), is defined as \(\tau_{i}=\left\|\mathbf{U}_{i,:}\right\|_{2}^{2}\)._
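Definition 7.3 translates directly into numpy via the thin SVD; the helper below is reused in the sketch of Algorithm 5 that follows.

```python
import numpy as np

def leverage_scores(A):
    # tau_i = ||U_{i,:}||_2^2, where A = U Sigma V^T is the thin SVD of A (n x d).
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U ** 2, axis=1)
```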
With the leverage score, we propose Algorithm 5. The algorithm follows the general transformation framework, where the regression problem is solved at every step with the sketch derived from the aggregated matrix using leverage score. For notational convenience, we construct the sketch by appending rows instead of columns as we did in Section 6.
```
Input: Approximation parameters \(\epsilon\in(0,1)\)
1 Set \(\delta=O(\epsilon/n)\) and \(m=O\left(\epsilon^{-2}d\log(\delta^{-1}d)\right)\).
2for\(t=1,\dots,n\)do
3 Construct \(\mathbf{A}_{t-1}\in\mathbb{R}^{(t-1)\times d}\) by stacking \(a_{1}^{\top},\dots a_{t-1}^{\top}\).
4 Construct \(b\in\mathbb{R}^{t-1}\) by stacking \(b_{1},\dots,b_{t-1}\).
5 Set \(\mathbf{S}^{t}\in\mathbb{R}^{m\times(t-1)}\) be the zero matrix.
6for\(j=1,\dots,m\)do
7 Sample \(i\in[t-1]\) with probability \(p_{i}:=\frac{\tau_{i}(\mathbf{A}_{t-1})}{\sum_{j=1}^{t-1}\tau_{j}(\mathbf{A}_ {t-1})}\).
8 Sample \(w\in\mathbb{R}\) uniformly from \(\left[\frac{1}{\sqrt{mp_{i}}},\frac{1+\epsilon}{\sqrt{mp_{i}}}\right]\).
9 Replace the \(j\)-th row of \(\mathbf{S}^{t}\) with \(w\cdot e_{i}^{\top}\), where \(e_{i}\in\mathbb{R}^{t-1}\) is a one-hot vector with \(1\) on the \(i\)-th index.
10 Solve the regression problem \(x_{t}=\operatorname{argmin}_{x}\|\mathbf{S}^{t}\mathbf{A}_{t-1}x-\mathbf{S}^{t}b\|_{2}\), e.g., by an iterative method such as Newton's method.
11 Receive \(a_{t}\in\mathbb{R}^{d}\), \(b_{t}\in\mathbb{R}\), and loss \(\ell(x_{t},a_{t},b_{t})\).
```
**Algorithm 5**Online consistent regression
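A minimal Python rendering of one step of Algorithm 5 (lines 3-10) could look as follows. Here `np.linalg.lstsq` replaces the unspecified iterative solver, `leverage_scores` is the helper defined above, and building the sampled-and-rescaled rows directly (instead of materializing the sparse matrix \(\mathbf{S}^{t}\)) is an implementation convenience.

```python
import numpy as np

def sketched_regression_step(A_prev, b_prev, m, eps, rng):
    # One iteration of Algorithm 5: leverage-score row sampling, then solve least squares.
    tau = leverage_scores(A_prev)                           # A_prev has shape (t-1) x d
    p = tau / tau.sum()
    idx = rng.choice(A_prev.shape[0], size=m, p=p)          # line 7: sample rows
    w = rng.uniform(1.0 / np.sqrt(m * p[idx]),              # line 8: random rescaling
                    (1.0 + eps) / np.sqrt(m * p[idx]))
    SA = A_prev[idx] * w[:, None]                           # rows of S^t A_{t-1}
    Sb = b_prev[idx] * w                                    # S^t b
    x_t, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
    return x_t
```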
The subspace embedding result of (Woodruff, 2014) immediately shows the following:
**Theorem 7.4**.: _For any \(\epsilon,\delta\in(0,1)\), if \(m=O\left(\epsilon^{-2}d\log(\delta^{-1}d)\right)\), then with probability \(\geq 1-\delta\), \(\mathbf{S}^{t}\) is an \(\epsilon\)-subspace embedding for \(\mathbf{A}_{t-1}\) with \(O\left(\epsilon^{-2}d\log(\delta^{-1}d)\right)\) rows._
To obtain Theorem 7.1, we first analyze the average sensitivity of the leverage score sampling. Then, with Theorem 7.4 and the general reduction Theorem 3.1, we obtain the regret bound.
## Acknowledgement
This work is supported by JSPS KAKENHI Grant Number 20H05965 and 22H05001.
|
2307.10736 | Long-Tail Theory under Gaussian Mixtures | We suggest a simple Gaussian mixture model for data generation that complies
with Feldman's long tail theory (2020). We demonstrate that a linear classifier
cannot decrease the generalization error below a certain level in the proposed
model, whereas a nonlinear classifier with a memorization capacity can. This
confirms that for long-tailed distributions, rare training examples must be
considered for optimal generalization to new data. Finally, we show that the
performance gap between linear and nonlinear models can be lessened as the tail
becomes shorter in the subpopulation frequency distribution, as confirmed by
experiments on synthetic and real data. | Arman Bolatov, Maxat Tezekbayev, Igor Melnykov, Artur Pak, Vassilina Nikoulina, Zhenisbek Assylbekov | 2023-07-20T10:03:50Z | http://arxiv.org/abs/2307.10736v2 | # Long-Tail Theory under Gaussian Mixtures
###### Abstract
We suggest a simple Gaussian mixture model for data generation that complies with Feldman's long tail theory (2020). We demonstrate that a linear classifier cannot decrease the generalization error below a certain level in the proposed model, whereas a nonlinear classifier with a memorization capacity can. This confirms that for long-tailed distributions, rare training examples must be considered for optimal generalization to new data. Finally, we show that the performance gap between linear and nonlinear models can be lessened as the tail becomes shorter in the subpopulation frequency distribution, as confirmed by experiments on synthetic and real data.
## 1 Introduction
In classical learning theory [19, 20], generalizing ability and model complexity are usually opposed to each other: the more complex the model,1 the worse its generalizing ability on new data. This is well illustrated by typical curves of test and training errors as functions of the complexity of the model being trained. The training error tends to decrease whenever we increase the model complexity, that is, when we try harder to fit the data. With too much fitting, the model adapts itself too closely to the training data, and will not generalize well (i.e., have large test error).
Footnote 1: By _complexity_ of a model we mean its ability to fit an arbitrary dataset.
However, modern machine learning models such as deep neural networks (DNNs) break this principle: they are usually complex enough to be able to memorize the entire training set, and nevertheless show excellent generalization ability. This phenomenon, called **benign overfitting**, was discovered empirically by Zhang et al. [22] and has since attracted the attention of many minds in the field of machine learning, both experimentalists and theorists. We refer the reader to the survey of Bartlett et al. [1] and Belkin [2] for a more comprehensive overview of benign overfitting.
In our opinion, the most adequate explanation for the _necessity_ of overfitting is the **long tail theory** of Feldman [8], which considers learning from natural data (such as texts or images). The fact is that the distribution of such data usually consists of subpopulations, and the frequencies of subpopulations have a so-called long tail, i.e. examples from rare/atypical subpopulations will regularly occur in both training and test samples.
As an example, consider a typical dataset consisting of movie reviews, such as SST-2 [17]. In this dataset, each review (more precisely, each sentence) is labeled as positive or negative. If we look at a typical positive sentence,
_The large-format film is well suited to capture these musicians in full regalia and the incredible IMAX sound system lets you feel the beat down to your toes._
we will notice that it contains positive phrases (underlined). At the same time, a typical negative sentence, for example
_The images lack contrast, are murky, and are frequently too dark to be decipherable._
includes mostly negative phrases. However, the richness of human language allows one to write negative review sentences, which nevertheless abound in positive phrases:
_Starts out with tremendous promise, introducing an intriguing and alluring premise, only to become a monumental achievement in practically every facet of inept filmmaking._
These kinds of negative reviews are not typical, and according to Feldman's long-tail theory, they constitute a separate subpopulation (or several subpopulations) in the class of negative reviews.
A similar situation is observed in the image domain. Consider the popular MNIST dataset [7], which consists of handwritten digits from 0 to 9. Typically, this dataset is used for 10-class classification, where each digit is a class. If we take one of the classes, say 3, then most of the examples in this dataset look like in Figure 1. However, there are rare and atypical examples of writing digit 3, such as in Figure 2, which are easily confused with other digits (for example, 7). Again, according to the long-tail theory, such rare examples should be allocated to a separate subpopulation (or several subpopulations) within the class 3.
Feldman [8] showed that if the distribution over subpopulations has a long-tail (as in the examples above), then to achieve optimal
performance, the learning algorithm _needs_ to memorize rare/atypical examples from the training set. This is formalized via a lower bound on the generalization error, which is proportional to the number of mislabeled training examples (and the proportionality coefficient depends on the long-tailed distribution over subpopulations). Thus, in order for the learning algorithm to be able to reduce the generalization error to a minimum, it _needs_ to fit _all_ examples from the training set, including rare/atypical ones. And this, in turn, entails the need to use more complex models (with a larger number of parameters), since simple and underparameterized models are not able to memorize such atypical cases. However, Feldman's work does not provide conditions that guarantee successful learning from natural data, i.e. there are no upper bounds on the generalization error.
At the same time, it should be noted that recently we have seen an increase in the number of works in which guarantees of successful learning for **interpolating methods** are mathematically proved. For example, Chatterji and Long [4] showed that an overparameterized max-margin linear classifier trained on a linearly separable-with-noise data can perfectly fit the training sample (interpolate), yet generalize to new data nearly optimally. A similar result was shown by Shamir [16], and extensions to neural networks with one hidden dense layer and one hidden convolutional layer were recently given by Frei et al. [11] and Cao et al. [3] respectively. We are mainly concerned with the assumptions on data generation made in these works: the setup of a _single_, albeit noisy, subpopulation within each class is completely different from what Feldman [8] suggested in his long-tail theory. Moreover, in such a setup, there are non-interpolating algorithms with the same (or better) generalization guarantees. Accordingly, memorizing rare noisy examples is _not_ necessary to achieve optimal generalization error.
In this paper, we propose a simple Gaussian mixture model for data generation that is consistent with Feldman's long-tail theory. Further, we show that, within the framework of the proposed model, a linear classifier cannot reduce the generalization error below a certain limit, regardless of the number of parameters used. At the same time, there is a nonlinear model with a larger number of parameters, which can reduce the generalization error below this limit. Thus we show that fitting rare/atypical training examples is _necessary_ for optimal generalization to new data. Finally, we prove that the performance gap between linear and non-linear models can be decreased as the tail shortens in the subpopulation frequency distribution. This result is confirmed by experiments on both synthetic and real data.
## 2 Data Generating Model
Motivating Example. To motivate our choice of data-generating model, let us go back to the movie review examples. For simplicity, let us imagine that we can identify positive and negative phrases in the reviews. Next, let us represent each review sentence as a single number
\[x=(\#\text{positive phrases})-(\#\text{negative phrases})\]
It is intuitively clear that for most positive sentences, \(x>0\), and for most negative sentences, \(x<0\). However, as we mentioned in the Introduction, there are rare examples of negative review sentences that abound in positive phrases, i.e. for which \(x>0\).2 This observation leads us to the following data distribution model: for all positive reviews, \(x\) is concentrated at the point \(\mu_{+}>0\); for most negative reviews, \(x\) is concentrated at the point \(\mu_{-}^{\text{maj}}<0\); while there is a minority of negative reviews for which \(x\) is concentrated at the point \(\mu_{-}^{\text{min}}>0\) (Figure 4). In what follows, we formalize this model.
Footnote 2: And vice versa: there are rare examples of positive review sentences that abound in negative phrases. However, for ease of analysis, we will omit this case.
Notation.We let \(\mathbb{R}\) denote the real numbers. Bold-faced lowercase letters (\(\mathbf{x}\)) denote vectors in \(\mathbb{R}^{d}\), bold-faced uppercase letters (\(\mathbf{A}\), \(\mathbf{X}\)) denote matrices and random vectors, regular lowercase letters (\(x\)) denote scalars, regular uppercase letters (\(X\)) denote random variables. \(\|\cdot\|\) denotes the Euclidean norm: \(\|\mathbf{x}\|:=\sqrt{\mathbf{x}^{\top}\mathbf{x}}\). \(\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) denotes multivariate Gaussian with mean vector \(\boldsymbol{\mu}\in\mathbb{R}^{d}\) and covariance matrix \(\boldsymbol{\Sigma}\in\mathbb{R}^{d\times d}\). ‘p.d.f.’ stands for ‘probability density function’, and ‘c.d.f.’ stands for ‘cumulative distribution function’. The p.d.f. of \(\mathbf{X}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) is denoted by \(f(\mathbf{x};\boldsymbol{\mu},\boldsymbol{\Sigma})\). The p.d.f. and c.d.f. of \(Z\sim\mathcal{N}(0,1)\) are denoted by \(\phi(z)\) and \(\Phi(z)\) respectively. We also use the standard big O notation, such as \(O(\cdot)\), \(\tilde{O}(\cdot)\), \(\Omega(\cdot)\), \(\Theta(\cdot)\), [5, Chapter 3].
The Model. Let \(\mathbf{X}\in\mathbb{R}^{d}\) be the feature vector, and \(Y\in\{-1,+1\}\) its class label. We assume that \(Y\) is a Rademacher random variable, i.e.
\[\Pr[Y=-1]=\Pr[Y=+1]=\frac{1}{2}. \tag{1}\]
For the positive class, we assume that the class-conditional distribution of \(\mathbf{X}\) is a spherical (a.k.a. isotropic) Gaussian centered at \(\boldsymbol{\mu}\in\mathbb{R}^{d}\), i.e.
\[(\mathbf{X}\mid Y=+1)\sim\mathcal{N}(\boldsymbol{\mu},\sigma^{2}\mathbf{I}). \tag{2}\]
Whereas for the negative class, the class-conditional distribution of \(\mathbf{X}\) is a _mixture_ of two spherical Gaussians centered at \(-\boldsymbol{\mu}\) and \(3\boldsymbol{\mu}\):
\[(\mathbf{X}\mid Y=-1,K=1) \sim\mathcal{N}(-\boldsymbol{\mu},\sigma^{2}\mathbf{I}), \tag{3}\] \[(\mathbf{X}\mid Y=-1,K=2) \sim\mathcal{N}(3\boldsymbol{\mu},\sigma^{2}\mathbf{I}). \tag{4}\]
Here, the latent random variable \(K\) represents the mixture component. With \(K=1\), features are generated from the distribution of typical negative examples, whose proportion is \(p>1/2\) of all negative examples (i.e., this is the cluster of a majority of negative examples). With \(K=2\), features are generated from the distribution of atypical/rare negative examples, whose proportion is \((1-p)<1/2\)
of all negative examples:
\[\Pr[K=1\mid Y=-1]=p,\qquad p>\frac{1}{2} \tag{5}\] \[\Pr[K=2\mid Y=-1]=1-p. \tag{6}\]
We center the atypical examples at \(3\boldsymbol{\mu}\) so that the distance between the means of neighboring Gaussians is \(2\|\boldsymbol{\mu}\|\), and this simplifies the analysis. The assumption that the Gaussians are isotropic with equal covariances is also made to simplify the analysis. The centers of the Gaussians are located on the same straight line to prevent the linear separability of finite samples generated from our model. We emphasize that our goal is to build a simple data generating model that is consistent with Feldman's long-tail theory. Building a model that better agrees with real data is beyond the scope of this work and is a reasonable direction for further research. However, we believe that our model captures important features of the distribution of real data, such as the presence of rare subpopulations. We also emphasize that our model makes sense when \(p>1/2\), but not too close to 1 for rare examples to occur in a finite sample.
The distribution over \(\mathbb{R}^{d}\times\{-1,+1\}\) given by (1)-(6) will be denoted by \(\mathcal{D}\). Figure 4 shows a sample of size 50 from our data model with \(d=2\) and \(p=0.9\).
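For reference, a direct numpy sampler for the model (1)-(6) can be written in a few lines; the seed and the vectorized form are incidental choices.

```python
import numpy as np

def sample_D(n, mu, sigma, p, seed=0):
    # Draw n i.i.d. pairs (X, Y) from the distribution D defined by (1)-(6).
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    y = rng.choice([-1, 1], size=n)                        # (1): Rademacher labels
    k = np.where(rng.random(n) < p, 1, 2)                  # (5)-(6): latent component for Y=-1
    centers = np.where((y == 1)[:, None], mu,              # (2): positive class at mu
                       np.where((k == 1)[:, None], -mu,    # (3): typical negatives at -mu
                                3 * mu))                   # (4): rare negatives at 3*mu
    X = centers + sigma * rng.standard_normal((n, len(mu)))
    return X, y
```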
## 3 Classifiers
In this section, we consider two classifiers--Linear discriminant analysis and Mixture discriminant analysis--and examine their performance on data generated from our model. Let \(\mathcal{P}\) be a distribution over \(\mathbb{R}^{d}\times\{-1,+1\}\). For a classifier \(h:\,\mathbb{R}^{d}\to\{-1,+1\}\), we define its generalization error (or misclassification error rate, or simply error) with respect to \(\mathcal{P}\) as
\[\operatorname*{err}_{\mathcal{P}}[h]:=\operatorname*{Pr}_{\mathbf{X},Y\sim \mathcal{P}}[h(\mathbf{X})\neq Y]. \tag{7}\]
When \(h\) is parameterized (as is the case for the classifiers that we consider), and the parameters are estimated based on a sample \(S:=\{(\mathbf{X}_{i},Y_{i})\}_{i=1}^{n}\) of i.i.d. observations from \(\mathcal{P}\), we denote the resulting classifier as \(h_{S}\) and consider its expected error \(\mathbb{E}_{S\sim\mathcal{P}^{n}}[\operatorname*{err}_{\mathcal{P}}[h_{S}]]\), where expectation is over samples \(S\) of size \(n\) from \(\mathcal{P}\).
### Linear Discriminant Analysis (LDA)
**LDA**[10] is a generative classifier whose simplest version makes almost the same assumptions about the data distribution as our data generating model \(\mathcal{D}\). The only difference is that for the negative class a single Gaussian is used instead of a mixture, i.e. instead of the assumptions (3)-(6), one has
\[(\mathbf{X}\mid Y=-1)\sim\mathcal{N}(\boldsymbol{\mu}_{-},\sigma^{2}\mathbf{ I}). \tag{8}\]
The LDA classifier that has access to the true \(\boldsymbol{\mu}\), \(\sigma\), and \(p\) can be written as
\[h^{\text{LDA}}(\mathbf{x})=\begin{cases}+1&\quad\text{if }f(\mathbf{x}; \boldsymbol{\mu},\sigma^{2}_{\text{LDA}}\mathbf{I})\geq f(\mathbf{x}; \boldsymbol{\mu}_{-},\sigma^{2}_{\text{LDA}}\mathbf{I})\\ -1&\quad\text{otherwise}\end{cases}, \tag{9}\]
where \(\boldsymbol{\mu}_{-}\) and \(\sigma^{2}_{\text{LDA}}\) are functions of \(\boldsymbol{\mu}\), \(\sigma^{2}\), and \(p\), that can be derived under the assumptions of the data generating model \(\mathcal{D}\) (see Appendix A.2). It is well known [13, Section 4.3] that in this case the decision boundary of the LDA consists of a set of points equidistant from \(\boldsymbol{\mu}\) and \(\boldsymbol{\mu}_{-}\), i.e. it is a hyperplane, which is the perpendicular bisector of the line segment connecting \(\boldsymbol{\mu}\) and \(\boldsymbol{\mu}_{-}\). It is easy to see that the data distribution used in LDA is a special case of our model when \(p=1\), i.e. when the proportion of atypical examples \((1-p)\) is zero and the classes are linearly separable.3 At the same time, in our data model, for \(1-p=\Omega(1/n)\), classes cannot be linearly separated, which means that LDA will fundamentally lack the ability to fit examples from the minority subpopulation of the negative class. This is formalized in the following lemma.
Footnote 3: with high probability over a choice of a finite sample given sufficiently large \(\|\boldsymbol{\mu}\|\)
**Lemma 1**.: _Let \(S\sim\mathcal{D}^{n}\) be a random sample from our data generating model \(\mathcal{D}\) with unknown \(\boldsymbol{\mu}\) and known \(\sigma^{2}\) and \(p\). Let \(h^{\text{LDA}}_{S}\) be the LDA classifier trained on \(S\) with the method of moments under the assumptions (1), (2), and (8). Then_
\[\mathbb{E}_{S\sim\mathcal{D}^{n}}\left[\operatorname*{err}_{\mathcal{D}}[h^{\text{LDA}}_{S}]\right]=\frac{1}{2}\left[\Phi\left(-(2p-1)\frac{\|\boldsymbol{\mu}\|}{\sigma}\right)+p\Phi\left(-(3-2p)\frac{\|\boldsymbol{\mu}\|}{\sigma}\right)+(1-p)\Phi\left((2p+1)\frac{\|\boldsymbol{\mu}\|}{\sigma}\right)\right]+\widetilde{O}\left(\sqrt{\frac{d}{n}}\right). \tag{10}\]

We can estimate the order of the bound (10) as

\[\mathop{\mathbb{E}}_{S\sim\mathcal{D}^{n}}\left[\operatorname{err}_{\mathcal{D}}[h^{\text{LDA}}_{S}]\right]\geq\frac{1-p}{2}-\exp\left(-\Omega\left(\frac{\|\boldsymbol{\mu}\|^{2}}{2\sigma^{2}}\right)\right)+\widetilde{O}\left(\sqrt{\frac{d}{n}}\right). \tag{11}\]

Thus, the LDA error cannot be reduced below roughly \((1-p)/2\), no matter how far we
place the Gaussians from each other. Moreover, we note that the first term in (11) does not depend on \(d\), the dimensionality of the sample space. Therefore, regardless of the dimensionality, the LDA classifier will _not_ be able to interpolate the training sample when \(p<1\), i.e. when there is a minority subpopulation in the negative class. This is in stark contrast with the previous studies on interpolating linear methods [4, 16].
### Mixture Discriminant Analysis (MDA)
**MDA**[12] is a generative classifier that in general assumes that the data in each class is generated from a mixture of Gaussians. In our case, we can consider a version of MDA that makes precisely the assumptions (1)-(6) that our data generating model \(\mathcal{D}\) makes. Hence, the MDA classifier (that knows the true values of the parameters) can be written as
\[h^{\text{MDA}}(\mathbf{x})=\begin{cases}+1&\text{if}\quad\frac{1}{2}f(\mathbf{ x};\boldsymbol{\mu},\sigma^{2}\mathbf{I})\geq\frac{p}{2}f(\mathbf{x};- \boldsymbol{\mu},\sigma^{2}\mathbf{I})\\ &\text{and}\quad\frac{1}{2}f(\mathbf{x};\boldsymbol{\mu},\sigma^{2}\mathbf{I}) \geq\frac{1-p}{2}f(\mathbf{x};3\boldsymbol{\mu},\sigma^{2}\mathbf{I})\\ -1&\text{otherwise}\end{cases} \tag{12}\]
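To see how (9) and (12) differ operationally, here is a small numpy/scipy sketch of both decision rules with the true parameters plugged in. We take \(\boldsymbol{\mu}_{-}=(3-4p)\boldsymbol{\mu}\), the class-conditional mean of the negative class under (3)-(6); this (and ignoring \(\sigma^{2}_{\text{LDA}}\), which does not affect the equal-prior, shared-spherical-covariance decision) is an illustrative assumption rather than the exact Appendix A.2 derivation.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def h_lda(x, mu, p):
    mu_neg = (3.0 - 4.0 * p) * mu             # assumed negative-class mean (see lead-in)
    # equal priors + shared spherical covariance => nearest-mean (perpendicular-bisector) rule
    return 1 if np.linalg.norm(x - mu) <= np.linalg.norm(x - mu_neg) else -1

def h_mda(x, mu, sigma, p):
    d = len(mu)
    f = lambda c: mvn.pdf(x, mean=c, cov=sigma**2 * np.eye(d))
    pos = 0.5 * f(mu)                         # weight 1/2 on the positive Gaussian
    neg_typ = 0.5 * p * f(-mu)                # typical negatives
    neg_rare = 0.5 * (1.0 - p) * f(3.0 * mu)  # rare negatives
    return 1 if (pos >= neg_typ and pos >= neg_rare) else -1
```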
Obviously, such an MDA classifier can take into account the presence of a minority subpopulation in the negative class, since it has the ability to fit a separate third Gaussian to this subpopulation. Not surprisingly, the MDA classifier (12) has a near-to-optimal generalizing ability, as presented in the following lemma.
**Lemma 2**.: _Let \(S\sim\mathcal{D}^{n}\) be a random sample from our data generating model \(\mathcal{D}\) with unknown \(\boldsymbol{\mu}\) and known \(\sigma^{2}\) and \(p\). Let \(h^{\text{MDA}}_{S}\) be the MDA classifier trained on \(S\) with the method of moments under the assumptions (1)-(6). Then_
\[\mathop{\mathbb{E}}_{S\sim\mathcal{D}^{n}}\left[\operatorname{ err}_{\mathcal{D}}[h^{\text{MDA}}_{S}]\right]\leq\frac{1}{2}\left[\Phi\left(- \frac{\left\|\boldsymbol{\mu}\right\|}{\sigma}+\frac{\sigma\ln p}{2\left\| \boldsymbol{\mu}\right\|}\right)\right.\\ +\Phi\left(-\frac{\left\|\boldsymbol{\mu}\right\|}{\sigma}+\frac{ \sigma\ln(1-p)}{2\left\|\boldsymbol{\mu}\right\|}\right)+p\cdot\Phi\left(- \frac{\left\|\boldsymbol{\mu}\right\|}{\sigma}-\frac{\sigma\ln p}{2\left\| \boldsymbol{\mu}\right\|}\right)\right.\\ \left.+(1-p)\cdot\Phi\left(-\frac{\left\|\boldsymbol{\mu} \right\|}{\sigma}-\frac{\sigma\ln(1-p)}{\left\|\boldsymbol{\mu}\right\|} \right)\right]+\widetilde{O}\left(\sqrt{\frac{d}{n}}\right). \tag{13}\]
Proof.: See Appendix A.3.
Arguing as in the case of LDA, we can estimate the order of the bound (13) as
\[\mathop{\mathbb{E}}_{S\sim\mathcal{D}^{n}}\left[\operatorname{ err}_{\mathcal{D}}[h^{\text{MDA}}_{S}]\right]\leq\exp\left(-\Omega\left(\frac{ \left\|\boldsymbol{\mu}\right\|^{2}}{2\sigma^{2}}\right)\right)+\widetilde{O} \left(\sqrt{\frac{d}{n}}\right). \tag{14}\]
Thus, by placing the Gaussians far enough apart, the optimal error of the MDA classifier can be made arbitrarily close to zero. We emphasize that this is only due to the ability of the MDA classifier to fit (memorize) examples from the minority subpopulation \(\mathcal{N}(3\boldsymbol{\mu},\sigma^{2}\mathbf{I})\) of the negative class. Roughly speaking, MDA has the ability to allocate some of its parameters for fitting atypical examples, while LDA simply does not have such an opportunity.
Finally, we remark that the term \(\widetilde{O}\left(\sqrt{d/n}\right)\) in the RHS of (13) is due to the error in estimating the model parameter \(\boldsymbol{\mu}\) from the training sample \(S\).
## 4 Performance Gap between LDA and MDA
Using the bounds (11) and (14) we can already estimate the expected difference between the LDA and MDA errors as
\[\mathop{\mathbb{E}}_{S\sim\mathcal{D}^{n}}\left[\operatorname{ err}_{\mathcal{D}}[h^{\text{LDA}}_{S}]-\operatorname{err}_{\mathcal{D}}[h^{ \text{MDA}}_{S}]\right]\\ \geq\frac{1-p}{2}-\exp\left(-\Omega\left(\frac{\left\|\boldsymbol {\mu}\right\|^{2}}{2\sigma^{2}}\right)\right)+\widetilde{O}\left(\sqrt{\frac{d }{n}}\right). \tag{15}\]
However, a closer analysis gives us the following
**Theorem 1**.: _Let \(S\sim\mathcal{D}^{n}\) be a random sample from our data generating model \(\mathcal{D}\), let \(h^{\text{LDA}}_{S}\) be the LDA classifier trained on \(S\) under the assumptions (1), (2), and (8), and let \(h^{\text{MDA}}_{S}\) be the MDA classifier trained on \(S\) under the assumptions (1)-(6). Then_
\[\mathop{\mathbb{E}}_{S\sim\mathcal{D}^{n}}\left[\operatorname{ err}_{\mathcal{D}}[h^{\text{LDA}}_{S}]-\operatorname{err}_{\mathcal{D}}[h^{\text{ MDA}}_{S}]\right]\\ \geq\frac{1-p}{2}-\exp\left(-\frac{\left\|\boldsymbol{\mu} \right\|^{2}}{2\sigma^{2}}\right)+\widetilde{O}\left(\sqrt{\frac{d}{n}} \right). \tag{16}\]
Proof.: See Appendix A.4.
The advantage of the bound (16) is that, in comparison with (15), here the second term in the RHS is written explicitly, i.e., without using the big O notation. This was done through careful analysis of the original bounds from Lemmas 1 and 2.
Theorem 1 implies the main conclusion of our work: _there is a performance gap between a simple model that is unable to memorize rare examples from the tail of the distribution, and a complex model that is able to fit such examples. Moreover, the gap can be made smaller when the proportion of atypical examples is smaller_. From (10) and (13), it is easy to see that for the "ideal" LDA and MDA (that have access to the true \(\boldsymbol{\mu}\)), we have
\[\operatorname{err}_{\mathcal{D}}\left[h^{\text{LDA}}\right]\overset{p\to 1}{ \longrightarrow}\Phi\left(-\frac{\left\|\boldsymbol{\mu}\right\|}{\sigma} \right),\quad\operatorname{err}_{\mathcal{D}}\left[h^{\text{MDA}}\right] \overset{p\to 1}{\longrightarrow}\Phi\left(-\frac{\left\|\boldsymbol{\mu}\right\|}{ \sigma}\right).\]
Implications for Real Data. Unfortunately, our conclusion is practically impossible to test directly on real data, since we cannot be sure that its distribution resembles our model \(\mathcal{D}\); it may well be more complex. Moreover, our conclusion is drawn within the framework of the generative classification models LDA and MDA, while in practice discriminative models such as logistic regression and multilayer neural networks are usually used, which make much fewer assumptions about the distribution of the data. However, we will be able to test our conclusion _indirectly_ in realistic settings if there is a way to identify training examples from rare subpopulations. Fortunately, this is exactly what the memorization score introduced by Feldman and Zhang does [8, 9].
For a learning algorithm \(A\) operating on a dataset \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\), the amount of label memorization by \(A\) on example \((\mathbf{x}_{i},y_{i})\in S\) is defined as
\[\operatorname{mem}[A,S,i]\\ :=\Pr_{h\sim\mathcal{A}(S)}[h(\mathbf{x}_{i})=y_{i}]-\Pr_{h\sim \mathcal{A}(S^{\setminus i})}[h(\mathbf{x}_{i})=y_{i}], \tag{17}\]
where \(S^{\setminus i}\) denotes the dataset \(S\) with \((\mathbf{x}_{i},y_{i})\) removed and the probability is taken over the randomness of the algorithm \(A\) (such as random initialization). One thing to keep in mind is that the memorization score itself must be calculated through a learner that _can_ memorize, for example, an MDA with enough components, a neural network, or a nearest neighbor classifier.
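Read literally, (17) asks for leave-one-out retraining. The sketch below does exactly that with a generic `train(X, y, seed) -> classifier` callable, which is a hypothetical interface; in the experiments on real data below, the score is instead approximated with the method of Zheng and Jiang [23] to avoid repeated retraining.

```python
import numpy as np

def memorization_score(train, X, y, i, repeats=20, seed=0):
    # Monte Carlo estimate of mem[A, S, i] in (17): retrain with and without example i.
    rng = np.random.default_rng(seed)
    X_wo, y_wo = np.delete(X, i, axis=0), np.delete(y, i)
    def hit(X_tr, y_tr):
        h = train(X_tr, y_tr, seed=int(rng.integers(2**31)))
        return float(h(X[i]) == y[i])
    p_with = np.mean([hit(X, y) for _ in range(repeats)])
    p_without = np.mean([hit(X_wo, y_wo) for _ in range(repeats)])
    return p_with - p_without
```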
As we can see, the memorization score will be high for examples that are difficult (or even impossible) to correctly classify using other examples in \(S\), provided that the learning algorithm is flexible enough to (almost) completely fit the training set. For example, under our data generating model, for \(p\) close enough to 1 (so that \(1-p=\Theta(1/n)\)), these are precisely the points generated by the minority subpopulation of the negative class. Accordingly, the shortening of the tail, i.e. making \(p\) closer to 1 can be simulated by discarding examples from the training set for which the memorization score is high. But the distribution of the test sample will not change, since we are not discarding top memorized examples from it. This is because the memorization score can only be calculated on the training sample and, accordingly, the most memorized examples can only be discarded from the long tail of the training sample.
For clarity, let us re-denote the distribution given by formulas (1)-(6) as \(\mathcal{D}_{p}\). Then we are interested in the expected error of a classifier that was trained on a sample from \(\mathcal{D}_{q}\) but is tested on a sample from \(\mathcal{D}_{p}\), i.e. \(\mathbb{E}_{S\sim\mathcal{D}_{q}}\left[\mathrm{err}_{\mathcal{D}_{p}}[h_{S}]\right]\). Fortunately, the analysis of this case resembles the analysis of the case when \(q=p\). Denoting \(q:=1-\frac{1}{t}\), \(t>2\), we can prove the following (asymptotic in \(t\)) results for the LDA and MDA classifiers.
**Theorem 2**.: _Let \(S\sim\mathcal{D}_{1-1/t}^{n}\). Then for the LDA and MDA classifiers trained on \(S\) but evaluated on examples from \(\mathcal{D}_{p}\) we have_
\[\begin{split}&\mathop{\mathbb{E}}_{S\sim\mathcal{D}_{1-1/t}}\left[\mathrm{err}[h_{S}^{\text{LDA}}]\right]=\frac{1+p}{2}\Phi\left(-\frac{\lVert\boldsymbol{\mu}\rVert}{\sigma}\right)\\ &\qquad+\frac{1-p}{2}\underbrace{\Phi\left(\frac{3\lVert\boldsymbol{\mu}\rVert}{\sigma}\right)}_{\text{A}}+\Theta\left(\frac{1}{t}\right)+\widetilde{O}\left(\sqrt{\frac{d}{n}}\right)\end{split} \tag{18}\]
\[\begin{split}&\mathop{\mathbb{E}}_{S\sim\mathcal{D}_{1-1/t}}\left[\mathrm{err}[h_{S}^{\text{MDA}}]\right]\leq\frac{1+p}{2}\Phi\left(-\frac{\lVert\boldsymbol{\mu}\rVert}{\sigma}\right)\\ &\qquad+\frac{1-p}{2}\underbrace{\Phi\left(-\frac{\lVert\boldsymbol{\mu}\rVert}{\sigma}+\frac{\sigma\ln t}{2\lVert\boldsymbol{\mu}\rVert}\right)}_{\text{B}}+\Theta\left(\frac{1}{t}\right)+\widetilde{O}\left(\sqrt{\frac{d}{n}}\right)\end{split} \tag{19}\]
Proof.: See Appendix A.5.
As we can see, the difference between the bounds (18) and (19) is mainly in the terms marked as A and B. It is clear that \(\boldsymbol{\text{A}}>\boldsymbol{\text{B}}\) as long as \(t<\exp(8\lVert\boldsymbol{\mu}\rVert^{2}/\sigma^{2})\). For example, when \(\lVert\boldsymbol{\mu}\rVert=2\) and \(\sigma=1\), we have \(\exp(8\lVert\boldsymbol{\mu}\rVert^{2}/\sigma^{2})\approx 8\cdot 10^{13}\). Thus, the gap between the LDA error and the upper bound on the MDA error remains noticeable even for large values of \(t\) (i.e. when \(q\) is close to 1).
## 5 Experiments
In this section, we empirically validate the predictions from our theory for synthetic data (generated from our model \(\mathcal{D}\)) as well as for real data, the distribution of which is not necessarily the same as \(\mathcal{D}\), but shares its main characteristics, such as the presence of minority subpopulations in at least one of the classes.4
Footnote 4: The code for reproducing the experiments is at [https://github.com/armanbolatov/long_tail](https://github.com/armanbolatov/long_tail). The random seeds we used are indicated in the code.
### Synthetic Data
First of all, we verify our error bounds from Lemmas 1 and 2 experimentally. To do this, we generate training and test sets from our data model \(\mathcal{D}\), fit LDA and MDA to the training set, compute the misclassification errors on the test set, and compare with theoretical bounds (10) and (13) modulo the asymptotic terms \(\widetilde{O}(\sqrt{d/n})\). Since the bounds depend on several parameters, we vary each of these parameters while keeping the others fixed. Unless otherwise specified, the default values of the parameters are: \(d=50\), \(p=0.9\), \(\lVert\boldsymbol{\mu}\rVert=2\), \(\sigma=1\), \(n=7000\). Test samples are of size \(n_{\text{test}}=3000\). For each variable parameter value, we generate 10 training and test samples and estimate the generalization errors with 95% confidence intervals across test samples.
Dependence on \(\|\boldsymbol{\mu}\|\). To test the dependence of error bounds on \(\|\boldsymbol{\mu}\|\), we vary \(\|\boldsymbol{\mu}\|\) in the interval \([2,6]\) with a step 0.08. For each value of \(\|\boldsymbol{\mu}\|\), we take a random direction in \(\mathbb{R}^{d}\), and place \(\boldsymbol{\mu}\), \(-\boldsymbol{\mu}\), and \(3\boldsymbol{\mu}\) along that direction. The results of the experiments are shown in Figure 6. As we can see, the empirical errors are generally consistent with our bounds. The LDA test error is statistically close to our LDA error bound (10), as the latter is mainly within the 95% confidence band. Meanwhile, the MDA test error is significantly lower than our MDA error bound (13). This is not surprising because, as we already mentioned in Section 3, for LDA we derived the exact misclassification error modulo \(\widetilde{O}(\sqrt{d/n})\), while for MDA we got the _upper_ bound for the error.
Dependence on \(p\). To test the dependence of error bounds on \(p\), we vary \(p\) in the interval \([0.5,1]\) with a step 0.01. The results are shown in Figure 5. As we can see, the situation is similar to the previous one: for LDA, the empirical error agrees well with our formula (10), especially for \(p\) closer to 1; while for MDA, in most cases, it is significantly below our bound (13).
Dependence on \(n\). To check the correctness of the order \(\widetilde{O}(\sqrt{d/n})\) of the estimation error, we consider the expression
\[\left|\text{Test Error}-\text{Error Bound}\right|\cdot\sqrt{\frac{n}{d\ln n}} \tag{20}\]
for increasing \(n\). The results are shown in Figure 7. Here we observe the boundedness of the expression (20), which confirms that the order \(\widetilde{O}(\sqrt{d/n})\) of the estimation error is correct.
Training on \(\mathcal{D}_{q}\), but Testing on \(\mathcal{D}_{p}\). Finally, for experimental verification of the conclusions from Theorem 2, we generate training samples from \(\mathcal{D}_{1-1/t}\), and test samples from \(\mathcal{D}_{p}\). We vary \(t\) in the interval \([10,2000]\) and observe the behavior of the LDA and MDA test errors. The results are shown in Figure 8. As we can see, the empirical error curves agree with the predictions of our Theorem 2. Namely, the LDA error exceeds the MDA error, and the gap between the two remains noticeable with the increase of \(t\).
Figure 5: Comparison of empirical errors (solid) and theoretical error bounds (dashed) when \(p\) varies. The shaded areas are 95% confidence bands around the average across 10 runs. The values of the remaining parameters are fixed as follows: \(d=50\), \(\|\boldsymbol{\mu}\|=2\), \(\sigma=1\), \(n=7000\), \(n_{\text{test}}=3000\).
Figure 8: Comparison of the empirical LDA and MDA errors when training on \(\mathcal{D}_{1-1/t}\) but testing on \(\mathcal{D}_{p}\), as \(t\) grows. The values of the remaining parameters are fixed as follows: \(d=50\), \(\|\boldsymbol{\mu}\|=2\), \(\sigma=1\), \(n=10000\), \(n_{\text{test}}=10000\). The shaded areas are 95% confidence bands around the average across 20 runs.
Figure 10: Heatmap of errors when fitting overparameterized MDA classifiers to data generated from our model \(\mathcal{D}\), \(d=50\), \(\boldsymbol{\mu}=\frac{1}{\sqrt{2}}(1,1)\), \(n=300\), \(\sigma=1\), \(p=0.9\).
Figure 6: Comparison of empirical errors (solid) and theoretical error bounds (dashed) when \(\|\boldsymbol{\mu}\|\) varies. The shaded areas are 95% confidence bands around the average across 10 runs. The values of the remaining parameters are fixed as follows: \(d=50\), \(p=0.9\), \(\sigma=1\), \(n=7000\), \(n_{\text{test}}=3000\).
Figure 7: Verifying the order \(\widetilde{O}\left(\sqrt{\frac{d}{n}}\right)\) of the estimation error by plotting \(|\text{Test Error}-\text{Error Bound}|\cdot\sqrt{\frac{n}{d\ln n}}\) for increasing \(n\). The values of the remaining parameters are fixed as follows: \(d=50\), \(\|\boldsymbol{\mu}\|=2\), \(p=0.9\), \(\sigma=1\), \(n_{\text{test}}=3000\). The shaded areas are 95% confidence bands around the average across 10 runs.
Figure 9: Evaluation of a linear classifier and deep neural networks (Distill-BERT) on a dataset of real movie reviews (SST-2). The shaded areas are 95% confidence bands around the average across 9 runs per each % top memorized examples removed.
### Real Data
We conduct our experiments5 on SST-2 [17], which is a dataset for sentence-level binary (positive vs. negative) sentiment classification. It has 6920 training, 872 validation, and 1821 test examples. We use the pre-trained Distill-BERT model [15] that consists of 6 transformer layers, where each layer is composed of 12 attention heads. We take the representation of the [CLS] token from the 6th layer for classification. We train the network with Adam, setting the learning rate and batch size to \(10^{-6}\) and 100, respectively.
Footnote 5: Our computing infrastructure for these experiments is as follows. CPU: Intel Core i9-10900X CPU @ 3.70GHz, GPU: \(2\times\) Nvidia RTX 3090, RAM: 128 Gb, Operating System: Ubuntu 22.04.1 LTS, torch: 1.13.1, cuda: 11.7, pandas: 1.5.3.
Calculating the memorization score according to (17) requires retraining the network for every training example (that is being removed) which is not computationally feasible. Thus we approximate the memorization score using the method of Zheng and Jiang [23].
We finetune two versions of the pre-trained Distill-BERT on SST-2: in one we freeze all layers except the top classification layer, in the second we finetune all layers. The first model--called Linear--is essentially a linear classifier (logistic regression), which receives text representations from the 6th layer of Distill-BERT as input, and the representations are not trained. It is clear that such a model is not capable of memorizing rare atypical examples. The second model--called DNN--is a full-fledged deep neural network that has enough capacity (66 million parameters) to memorize atypical examples from rare subpopulations. We also consider an intermediate option--called DNN (3 layers)--when the three lower layers are frozen, and the remaining layers are finetuned.
The experiment is as follows: (1) we compute the memorization score for each training example through DNN,6 (2) remove \(m\)% of the top memorized examples from the training set, (3) train the Linear and DNN models on such a set with hyperparameters tuned on the validation set, (4) and finally evaluate both models on the test set. The results of this experiment for different \(m\) are shown in Figure 9. As we can see, the error of the Linear classifier is always greater than the DNN error. Further, when the tail is shortened (i.e., when a larger number of top memorized examples from training are discarded), the gap between the errors of Linear and DNN slightly decreases. This is consistent with the predictions of our theory, albeit built with simpler assumptions on the distribution of data and for more interpretable classifiers.
Footnote 6: This means that in the definition of the memorization score (17), we use a DNN trained by gradient-based method as a learning algorithm \(A\). However, we approximate Eq. 17 via the method of Zheng and Jiang [23] to avoid repeated retraining for each example.
At a certain point, the difference between the errors is not statistically significant. This happens because at a high percentage of removal, examples are removed not only from the tail, but also from the main subpopulations, which sharply worsens the performance of both the Linear and DNN classifiers. Note that this regime is not considered in our theory.
A careful reader may notice that the DNN error is not close to zero, even when no examples are removed from the training set. This is because there are examples in the test set that the DNN cannot classify correctly, even though it fits the training set perfectly. In principle, such difficult examples can be simulated in our data-generating model by introducing label flipping noise, and we defer such modification to our future work.
## 6 Discussion on Benign Overfitting
As was already mentioned in the Introduction, Feldman's long-tail theory explains the _need_ for overfitting, but does not explain how exactly modern overparameterized learning algorithms manage to overfit without harming generalization [22]. Despite the availability of the answer for some model classes [4, 16, 11, 3, 21], our main concern with the current trend in theoretical studies of benign overfitting is in the assumptions about the data generating process, namely that only one subpopulation is allowed in each class, and linear inseparability is achieved by introducing random label-flipping noise. We repeat once again that such a setup does not fit in with Feldman's long-tail theory, in which the presence of rare subpopulations in classes is a prerequisite. Without this condition, there is no need for overfitting. Therefore, we would like to draw the attention of the learning theory community to the existing gap between theoretical setups and reality regarding the distribution of data.
In the theoretical part of our work, we do not deal with overparameterized models like deep neural networks. Our "complex" classifier (MDA) is actually only as complex as the data requires. Therefore, we cannot claim to have shown benign overfitting under the conditions of our data-generating model. We have shown the underfitting of the linear classifier (LDA) and the _proper_ fitting of the MDA classifier with the right number of components.
However, we are curious about what happens if we give the MDA classifier the ability to fit many more Gaussians than necessary. To do this, we conduct the following experiment. The data is generated from the same model \(\mathcal{D}\) that we used earlier (we fixed \(d=50\), \(\boldsymbol{\mu}=\frac{2}{\sqrt{d}}(1,\ldots,1)\), \(n=300\), \(\sigma=1\), \(p=0.9\)). For data points from the positive class, we fit \(k_{+}\) Gaussians, and for points from the negative class, we fit \(k_{-}\) Gaussians. Since in this case, we have no simple way to estimate the parameters by the method of moments, all parameters are estimated by the approximate maximum likelihood method through the EM algorithm. Let \(f_{+}(\mathbf{x})\) and \(f_{-}(\mathbf{x})\) be the resulting estimated p.d.f.'s for the positive and negative classes, respectively. Then the MDA classifier can be written as
\[h^{\text{MDA}}(\mathbf{x};k_{+},k_{-})=\begin{cases}+1,&\text{if }f_{+}(\mathbf{x}) \geq f_{-}(\mathbf{x})\\ -1&\text{otherwise}\end{cases}\]
We vary \(k_{+}\) and \(k_{-}\) in the interval \([1,71]\) with a step \(10\) and calculate the classifier error on the test sample. The results of this experiment are shown in Figure 10, which is a heatmap of test errors for different pairs \((k_{+},k_{-})\). The training error is zero for all pairs \((k_{+},k_{-})\), except for \(k_{+}=k_{-}=1\). Notably, there is a clear pattern: when \(k_{+}\) and \(k_{-}\) are close to each other, the performance can be better than when there is a heavy imbalance between \(k_{+}\) and \(k_{-}\).
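The overparameterized MDA used for this heatmap can be reproduced with scikit-learn's EM implementation along the following lines; the covariance type, initialization, and number of restarts are not specified in the text, so the defaults below are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_overparam_mda(X, y, k_plus, k_minus, seed=0):
    # Fit k_plus Gaussians to the positive class and k_minus to the negative class via EM.
    gm_pos = GaussianMixture(n_components=k_plus, random_state=seed).fit(X[y == 1])
    gm_neg = GaussianMixture(n_components=k_minus, random_state=seed).fit(X[y == -1])
    def h(X_test):
        # compare estimated class log-densities (class priors are equal by (1))
        return np.where(gm_pos.score_samples(X_test) >= gm_neg.score_samples(X_test), 1, -1)
    return h
```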
To understand how an overparametrized MDA classifier manages to overfit benignly, we plot a decision curve for the case \(d=2\), \(k_{+}=k_{-}=30\) (Figure 11, left). As we can see, despite the potential to overfit malignantly with a complex decision curve, the EM algorithm chooses a fairly simple classifier that is not so different from the optimal one (Figure 11, right), that uses \(k_{+}=1\), \(k_{-}=2\).
It is noteworthy that an overparameterized MDA classifier is able to overfit benignly on data generated from our model, because such a framework is more interpretable and amenable to analysis than overparameterized deep neural networks trained on real data. Accordingly, it becomes possible to study the phenomenon of benign overfitting in a simplified setting without linking it to deep learning, in which it is usually considered.
## 7 Conclusion
In this work we have focused on building an _interpretable_ mathematical framework for the analysis of learning algorithms capable of memorizing rare/atypical examples that usually occur in natural data, such as texts and images. The key point in our work is the data-generating model based on Gaussian mixtures, which demonstrates the inability of a simple classifier without sufficient memory to correctly label rare and atypical test examples. At the same time, for a more complex (but not too complex) classifier with sufficient memory, the near-to-optimal generalization ability is shown. Moreover, the dynamics of the performance of these classifiers with tail shortening has been studied both theoretically and experimentally, and the experiments were carried out both on synthetic and real data.
The last but not least property of our framework is that it allows for benign overfitting, and this is what we plan to study in the near future. In this regard, it will be interesting to analyze the behavior of over-parameterized learning algorithms (such as MDA with a redundant number of components, deep neural networks, and nearest-neighbor classifiers) on data generated from our model. This will require obtaining new results in terms of sufficient conditions for benign overfitting to happen under the assumptions of our model.
## Acknowledgements
This research has been funded by Nazarbayev University under Faculty-development competitive research grants program for 2023-2025 Grant #20122022FD4131, PI R. Takhanov. Igor Melnykov's work on this project was supported by a Fulbright US Scholar Grant administered by the US Department of State Bureau of Educational and Cultural Affairs (grant ID: PS00334837). The authors would like to thank Christopher Dance for a thorough review of our work (including mathematical proofs), Matthias Galle for his constructive feedback, including the suggestion to add a discussion on benign overfitting. We would like to thank the reviewers for their valuable feedback, in particular Reviewer 1 for a deep reading of our work, Reviewers 2 and 3 for carefully reading our response, Reviewer 6 for good questions that helped improve the presentation of the material.
|
2310.13828 | Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image
Generative Models | Data poisoning attacks manipulate training data to introduce unexpected
behaviors into machine learning models at training time. For text-to-image
generative models with massive training datasets, current understanding of
poisoning attacks suggests that a successful attack would require injecting
millions of poison samples into their training pipeline. In this paper, we show
that poisoning attacks can be successful on generative models. We observe that
training data per concept can be quite limited in these models, making them
vulnerable to prompt-specific poisoning attacks, which target a model's ability
to respond to individual prompts.
We introduce Nightshade, an optimized prompt-specific poisoning attack where
poison samples look visually identical to benign images with matching text
prompts. Nightshade poison samples are also optimized for potency and can
corrupt a Stable Diffusion SDXL prompt in <100 poison samples. Nightshade
poison effects "bleed through" to related concepts, and multiple attacks can
be composed together in a single prompt. Surprisingly, we show that a moderate
number of Nightshade attacks can destabilize general features in a
text-to-image generative model, effectively disabling its ability to generate
meaningful images. Finally, we propose the use of Nightshade and similar tools
as a last defense for content creators against web scrapers that ignore
opt-out/do-not-crawl directives, and discuss possible implications for model
trainers and content creators. | Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao | 2023-10-20T21:54:10Z | http://arxiv.org/abs/2310.13828v3 | # Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
###### Abstract
Data poisoning attacks manipulate training data to introduce unexpected behaviors into machine learning models at training time. For text-to-image generative models with massive training datasets, current understanding of poisoning attacks suggests that a successful attack would require injecting millions of poison samples into their training pipeline. In this paper, we show that poisoning attacks can be successful on generative models. We observe that training data per concept can be quite limited in these models, making them vulnerable to _prompt-specific poisoning attacks_, which target a model's ability to respond to individual prompts.
We introduce _Nightshade_, an optimized prompt-specific poisoning attack where poison samples look visually identical to benign images with matching text prompts. Nightshade poison samples are also optimized for potency and can corrupt a Stable Diffusion SDXL prompt in <100 poison samples. Nightshade poison effects "bleed through" to related concepts, and multiple attacks can be composed together in a single prompt. Surprisingly, we show that a moderate number of Nightshade attacks can destabilize general features in a text-to-image generative model, effectively disabling its ability to generate meaningful images. Finally, we propose the use of Nightshade and similar tools as a last defense for content creators against web scrapers that ignore opt-out/do-not-crawl directives, and discuss possible implications for model trainers and content creators.
## 1 Introduction
Over the last year, diffusion based text-to-image models have taken the Internet by storm, growing from research projects to applications in advertising, fashion [3, 55], web development [2, 58, 42], and AI art [90, 43, 6, 9]. Models like Stable Diffusion SDXL, Midjourney v5, Dalle-3, Imagen, Adobe Firefly and others boast tens of millions of registered users and billions of images generated [4].
Despite their significant impact on business and creative industries, both positive and negative, few have considered the vulnerability of diffusion model architectures to poisoning attacks against image generation. Poisoning attacks manipulate training data to introduce unexpected behavior to the model at training time, and are well-studied in the context of traditional deep learning models such as deep neural network (DNN) classifiers. Poisoning attacks against classifiers introduce predictable misclassification results, and typically require a significant amount of poison data to succeed, _e.g.,_ a ratio of poison to benign training samples of 20% or higher. Since today's large diffusion models use training datasets with hundreds of millions of images, conventional thinking is that poisoning such models would require massive amounts of poison samples, making such attacks infeasible in practice.
In this work, we investigate the impact of poisoning attacks on state of the art text-to-image diffusion models. Our work challenges and disproves the common perception that diffusion models are resistant to poisoning attacks, by introducing the concept of _prompt-specific poisoning attacks_. Specifically, we show that successful poisoning attacks do not need access to the image generation pipeline, nor do they need poison samples comparable in size to the model training dataset. They need only to be comparable to benign training data related to a _specific_ targeted prompt. Generative diffusion models support tens of thousands of prompts. The large majority of these have few training samples associated with them (_i.e.,_ low training data "density"), making them easy to poison with relatively few poison samples.
Prompt-specific poisoning attacks are versatile and powerful. When applied to a single narrow prompt, their impact on the model can be stealthy and difficult to detect, given the large size of the prompt space. Examples include advertising (produce Tesla images for "luxury car" prompts) and political attacks (produce offensive images when prompted with a politician's name). Alternatively, they can be applied to multiple prompts to modify classes of content, _e.g._ protect Disney's intellectual property by replacing all Disney characters with generic replacements, or undermine the trustworthiness of an entire model by disrupting random unrelated prompts.
Our work produces a number of notable findings. First and foremost, we examine training density of single-word prompts (or concepts) in existing large-scale datasets. We find that as hypothesized, concepts in popular training datasets like LAION-Aesthetic exhibit very low training data density, both in terms of word sparsity (# of training samples associated explicitly with a specific concept) and semantic sparsity (# of samples associated with a concept and semantically related
terms). Not surprisingly, our second finding is that simple "dirty-label" poison attacks work well to corrupt image generation for specific concepts (_e.g.,_ "dog") using just 500-1000 poison samples. In particular, experiments show high success for poisoning on Stable Diffusion's newest model (SDXL), using both CLIP-based classification and a crowdsourced user study (IRB-approved) as success metrics.
Next, we propose a significantly optimized prompt-specific poisoning attack we call _Nightshade_. Nightshade uses multiple optimization techniques (including targeted adversarial perturbations) to generate stealthy and highly effective poison samples, with four observable benefits.
* Nightshade poison samples are benign images shifted in the feature space. Thus a Nightshade sample for the prompt "castle" still looks like a castle to the human eye, but teaches the model to produce images of an old truck.
* Nightshade samples produce stronger poisoning effects, enabling highly successful poisoning attacks with very few (_e.g.,_ 100) samples.
* Nightshade samples produce poisoning effects that effectively "bleed-through" to related concepts, and thus cannot be circumvented by prompt replacement, _e.g.,_ Nightshade samples poisoning "fantasy art" also affect "dragon" and "Michael Whelan" (a well-known fantasy and SciFi artist).
* We demonstrate that when multiple concepts are poisoned by Nightshade, the attacks remain successful when these concepts appear in a single prompt, and actually _stack_ with cumulative effect. Furthermore, when many Nightshade attacks target different prompts on a single model (_e.g.,_ 250 attacks on SDXL), general features in the model become corrupted, and the model's image generation function collapses.
We note that Nightshade also demonstrates strong transferability across models, and resists a range of defenses designed to deter current poisoning attacks.
Finally, we assert that Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists. Movie studios, book publishers, game producers and individual artists can use systems like Nightshade to provide a strong disincentive against unauthorized data training. We discuss potential benefits and implications of this usage model.
In short, our work provides four key contributions:
* We propose _prompt-specific poisoning attacks_, and demonstrate they are realistic and effective on state-of-the-art diffusion models because of "sparsity" of training data.
* We propose _Nightshade_ attacks, optimized prompt-specific poisoning attacks that use guided perturbations to increase poison potency while avoiding visual detection.
* We measure and quantify key properties of Nightshade attacks, including "bleed-through" to semantically similar prompts, multi-attack cumulative destabilizing effects, model transferability, and general resistance to traditional poison defenses.
* We propose Nightshade as a tool to protect copyright and disincentivize unauthorized model training on protected content.
## 2 Background and Related Work
We begin by providing background on text-to-image models and data poisoning attacks.
### Text-to-Image Generation
**Model Architecture.** Text-to-image generative models evolved from generative adversarial networks (GAN) and variational autoencoders (VAE) [23, 52, 98] to diffusion models [53, 56]. We defer detailed background on diffusion models to [73]. Recent work [56] further improved the generation quality and training cost of diffusion models by leveraging "latent diffusion," which converts images from pixel space into a latent feature space using variational autoencoders. Models then perform diffusion process in the lower-dimensional image feature space, drastically reducing the training cost and enabling models to be trained on much larger datasets. Today, latent diffusion is used in almost all state-of-the-art models [47, 49, 54, 77, 75].
**Training Data Sources.** Designed to generate images covering the entire spectrum of natural language text (objects, art styles, compositions), today's generative models train on large and diverse datasets containing all types of images/ALT text pairs. Models like Stable Diffusion and DALLE-2 [54, 76] are trained on datasets ranging in size from 500 million to 5 billion images scraped from the web [14, 64]. These datasets are subject to minimal moderation, making them vulnerable to malicious actors [13]. Data collectors typically only curate data to exclude samples with insufficient or misaligned captions as determined by an automated alignment model [64].
**Continuous Model Training.** Training these models from scratch can be expensive (_e.g.,_ 150K GPU hours or 600K USD for the first version of stable diffusion [78]). As a result, it is common practice for model trainers to continuously update existing models on newly collected data to improve performance [74, 21, 47, 61]. Stable Diffusion 1.4, 1.5, and 2.1 are all continuously trained from previous versions. Stable Diffusion XL 1.0 is continuously trained on version 0.9. Many companies also continuously train public models on new training data tailored to their specific use case, including NovelAI [47], Scenario.gg [61], and Lensa AI [79]. Today, online platforms also offer continuous-training-as-a-service [26, 47, 57].
In our work, we consider poisoning attacks on both training scenarios: 1) training a new model from scratch, and 2) continuously training an existing model on additional data.
### Data Poisoning Attacks
**Poisoning Attacks against Classifiers.** These attacks inject poison data into training pipelines to degrade performance of the trained model. Poisoning attacks against classifiers are well studied [28]. Aside from basic misclassification attacks, backdoor attacks [86, 40] inject a hidden trigger, a specific pixel or text pattern [24, 18], into the model, such that inputs containing the trigger are misclassified at inference time. Others proposed _clean-label_ backdoor attacks where attackers do not control the labels on their poison data samples [97, 80, 59].
Defenses against data poisoning are also well-studied. Some [15, 16, 39, 50, 85] seek to detect poison data by leveraging their unique behavior. Other methods propose robust training methods [84, 27, 34] to limit poison data's impact at training time. Today, poison defenses remain challenging as stronger adaptive attacks are often able to bypass existing defenses [65, 67, 86, 66, 7].
**Poisoning Attacks against Diffusion Models.** Poisoning attacks against diffusion models remain limited. Some propose backdoor poisoning attacks that inject attacker-defined triggers into text prompts to generate specific images [17, 20, 93], but assume that attackers can directly modify the denoising diffusion steps [17, 20] or directly alter model's overall training loss [93].
Our work differs in both attack goal and threat model. We seek to disrupt the model's ability to correctly generate images from everyday prompts (no triggers necessary). Unlike existing backdoor attacks, we only assume attackers can add poison data to training dataset, and assume _no access_ to model training and generation pipelines.
Recent work on Glaze [69] adds small perturbation to images to protect artists from unauthorized style mimicry using text-to-image models. Another work [94] studies how specific concepts, not safe for work (NSFW), can be unlearned from a diffusion model by modifying weights in the model's cross attention layers. Beyond diffusion models, a few recent works study poisoning attacks against other types of generative models, including large language models [83], contrastive learning [95], and multimodal encoders [91, 38].
## 3 Feasibility of Poisoning Diffusion Models
Our work introduces _prompt-specific poisoning attacks_ against text-to-image diffusion models. These attacks do not assume any access to the training pipeline or model, but use typical data poisoning methods to corrupt the model's ability to respond to specific prompts (see Figure 1). For example, a model can be poisoned so that it substitutes images of cats whenever prompted with "dog," _e.g.,_ "a large dog driving a car." Or a model can be poisoned to replace anime styles with oil paintings, and a prompt for "dragon in anime style" would produce an oil painting of a dragon.
We note that these attacks can target one or more specific "keywords" in any prompt sequence (_e.g.,_ "dog" or "anime") that condition image generation. For clarity, we hereby refer to these keywords as **concepts**.
Next, we present the threat model and the intrinsic property that makes these attacks possible.
### Threat Model
**Attacker.** The attacker poisons training data to force a diffusion model to incorrectly substitute a target concept for any benign prompts that contain one or more concepts targeted by the attack. More specifically, we assume the attacker:
* can inject a small number of poison data (image/text pairs) to the model's training dataset
* can arbitrarily modify the image and text content for all poison data (later we relax this assumption in §6 to build advanced attacks)
* has no access to any other part of the model pipeline (_e.g.,_ training, deployment)
* has access to an open-source text-to-image model (_e.g.,_ stable diffusion).
Note that unlike all prior work on poisoning text-to-image diffusion models, we do not assume an attacker has privileged access to the model training process (§2). Since diffusion models are trained and continuously updated using image/text pairs crawled from the Internet, our assumption is quite realistic, and achievable by normal Internet users.
**Model Training.** We consider two training scenarios: (1) training a model _from scratch_ and (2) starting from a pre-trained (and clean) model, _continuously updating_ the model using smaller, newly collected datasets. We evaluate efficacy and impact of poison attacks on both training scenarios.
### Concept Sparsity Induces Vulnerability
Existing research finds that an attack must poison a decent percentage of the model's training dataset to be effective. For
Figure 1: Overview of prompt-specific poison attack. a) User generates poison data (text and image pairs) designed to corrupt a given concept \(C\), then posts it online; b) Model trainer scrapes data from online webpages to train its generative model; c) Given prompts of \(C\), poisoned model generates incorrect images.
neural network classifiers, the poisoning ratio should exceed 5% for backdoor attacks [29, 40] and 20% for indiscriminate attacks [41, 10]. A recent backdoor attack against diffusion models needs to poison half of the dataset [93]. Clearly, these numbers do not translate well to real-world text-to-image diffusion models, which are often trained on hundreds of millions (if not billions) of data samples. Poisoning 1% of the data would require millions to tens of millions of image samples - far from what is realistic for an attacker without special access to resources.
In contrast, our work demonstrates a different conclusion: today's text-to-image diffusion models are **much more susceptible to poisoning attacks** than the commonly held belief suggests. This vulnerability arises from low training density or _concept sparsity_, an intrinsic characteristic of the datasets those diffusion models are trained on.
**Concept Sparsity.** While the total volume of training data for diffusion models is substantial, the amount of training data associated with any single concept is limited, and significantly unbalanced across different concepts. For the vast majority of concepts, including common objects and styles that appear frequently in real-world prompts, each is associated with a very small fraction of the total training set, _e.g.,_ 0.1% for "dog" and 0.04% for "fantasy." Furthermore, such sparsity remains at the semantic level, after we aggregate training samples associated with a concept and all its semantically related "neighbors" (_e.g.,_ "puppy" and "wolf" are both semantically related to "dog").
**Vulnerability Induced by Training Sparsity.** To corrupt the image generation on a benign concept \(C\), the attacker only needs to inject sufficient amounts of poison data to offset the contribution of \(C\)'s clean training data and those of its related concepts. Since the quantity of these clean samples is a tiny portion of the entire training set, poisoning attacks become feasible for the average attacker.
### Concept Sparsity in Today's Datasets
Next, we empirically quantify the level of concept sparsity in today's diffusion datasets. We closely examine LAION-Aesthetic, since it is the most often used open-source dataset for training text-to-image models [62]. LAION-Aesthetic is a subset of LAION-5B, and contains 600 million text/image pairs and 22833 unique, valid English words across all text prompts1. We use nouns as concepts.
Footnote 1: We filtered out invalid words based on Open Multilingual WordNet [11].
Word Frequency. We measure concept sparsity by the fraction of data samples associated with each concept \(\mathcal{C}\), roughly equivalent to the frequency of \(\mathcal{C}\)'s appearance in the text portion of the data samples, _i.e.,_ word frequency. Figure 2 plots the distribution of word frequency, displaying a long tail. For over 92% of the concepts, each is associated with less than 0.04% of the images, or 240K images. For a more practical context, Table 1 lists the word frequency for ten concepts sampled from the most commonly used words to generate images on Midjourney [1]. The mean frequency is 0.07%, and 6 of 10 concepts show 0.04% or less.
Semantic Frequency. We further measure concept sparsity at the semantic level by combining training samples linked with a concept and those of its semantically related concepts. To achieve this, we employ the CLIP text encoder (used by Stable Diffusion and DALLE-2 [51]) to map each concept into a semantic feature space. Two concepts whose \(L_{2}\) feature distance is under 4.8 are considered semantically related. The threshold value of 4.8 is based on empirical measurements of \(L_{2}\) feature distances between synonyms [25]. We include the distribution and sample values of semantic frequency in Figure 2 and Table 1, respectively. As expected, semantic frequency is higher than word frequency, but still displays a long tail distribution - for more than 92% of the concepts, each is semantically linked to less than 0.2% of samples. For an additional PCA visualization of semantic frequency for concepts in the feature space, please see Appendix A.2.
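For illustration, the two frequency measures could be estimated roughly as in the sketch below; the CLIP checkpoint name, the helper inputs (`captions`, `concepts`), and the token-matching heuristic are assumptions made for the sketch, not the exact measurement pipeline used here.

```python
# Sketch: estimate word and semantic frequency of concepts in a caption corpus.
# `captions` is a list of caption strings; `concepts` is a list of candidate nouns.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

def word_frequency(concept, captions):
    # fraction of captions whose tokens contain the concept word
    hits = sum(1 for c in captions if concept in c.lower().split())
    return hits / len(captions)

@torch.no_grad()
def text_embeddings(words, model_name="openai/clip-vit-large-patch14"):
    tok = CLIPTokenizer.from_pretrained(model_name)
    enc = CLIPTextModel.from_pretrained(model_name).eval()
    inputs = tok(words, padding=True, return_tensors="pt")
    return enc(**inputs).pooler_output          # one feature vector per word

def semantic_frequency(concept, concepts, captions, threshold=4.8):
    # aggregate word frequency over the concept and its semantic neighbors,
    # where neighbors are concepts within an L2 distance of `threshold`
    emb = text_embeddings([concept] + list(concepts))
    dists = torch.cdist(emb[:1], emb[1:]).squeeze(0)
    related = {c for c, d in zip(concepts, dists.tolist()) if d < threshold}
    related.add(concept)
    return sum(word_frequency(c, captions) for c in related)
```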
## 4 A Simple "Dirty-Label" Poisoning Attack
Next step in validating the potential for poisoning attacks is to empirically evaluate the effectiveness of simple, "dirty-label" poisoning attacks, where the attacker introduces mismatched text/image pairs into the training data, preventing the model from establishing accurate association between specific concepts and their corresponding images.
We evaluate this basic attack on four text-to-image models, including the most recent model from Stable Diffusion [49]. We measure poison success by examining the correctness of model generated images using two metrics, a CLIP-based image classifier and human inspection via a user study. We find
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Concept** & **Word Freq.** & **Semantic Freq.** & **Concept** & **Word Freq.** & **Semantic Freq.** \\ \hline
night & 0.22\% & 1.69\% & sculpture & 0.032\% & 0.98\% \\ \hline
portrait & 0.17\% & 3.28\% & anime & 0.027\% & 0.036\% \\ \hline
face & 0.13\% & 0.85\% & neon & 0.024\% & 0.93\% \\ \hline
dragon & 0.049\% & 0.104\% & palette & 0.018\% & 0.38\% \\ \hline
fantasy & 0.040\% & 0.047\% & alien & 0.0087\% & 0.012\% \\ \hline
\end{tabular}
\end{table}
Table 1: Word and semantic frequencies in LAION-Aesthetic, for 10 concepts sampled from the list of most queried words on Midjourney [1].
Figure 2: Demonstrating concept sparsity in terms of word and semantic frequencies in LAION-Aesthetic. Both show a long-tail distribution. Note the **log scale** on both Y axes.
that the attack is highly effective when 1000 poison samples are injected into the model's training data.
**Attack Design.** The key to the attack is the curation of the mismatched text/image pairs. To attack a regular concept \(\mathcal{C}\) (_e.g.,_ "dog"), the attacker:
* selects a "destination" concept \(\mathcal{A}\) unrelated to \(\mathcal{C}\) as guide;
* builds a collection of text prompts \(\mathbf{Text}_{\mathcal{C}}\) containing the word \(\mathcal{C}\) while ensuring none of them include \(\mathcal{A}\);
* builds a collection of images \(\mathbf{Image}_{\mathcal{A}}\), where each visually captures essence of \(\mathcal{A}\) but contains no visual elements of \(\mathcal{C}\);
* pairs a text prompt from \(\mathbf{Text}_{\mathcal{C}}\) with an image from \(\mathbf{Image}_{\mathcal{A}}\).
Figure 3 shows an example of poison data created to attack the concept "dog" where the concept "cat" was chosen as the destination concept \(\mathcal{A}\). Once enough poison samples enter the training set, they can overpower the influence of clean training data of \(\mathcal{C}\), causing the model to make an incorrect association between \(\mathcal{C}\) and \(\mathbf{Image}_{\mathcal{A}}\). At run-time, the poisoned model outputs an image of the destination concept \(\mathcal{A}\) (_e.g.,_ cat) when prompted by the poisoned concept \(\mathcal{C}\) (_e.g.,_ "dog").
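For concreteness, a minimal sketch of this curation procedure is shown below; the input collections and the helper name are hypothetical placeholders rather than the exact tooling used in these experiments.

```python
# Sketch: assemble a dirty-label poison set for concept C using images of the
# destination concept A. `prompts_with_C` are captions mentioning C, and
# `images_of_A` are images depicting A but not C.
import random

def build_dirty_label_poison(prompts_with_C, images_of_A, destination_word, n_poison):
    # keep only prompts that mention C but never mention the destination concept A
    prompts = [t for t in prompts_with_C if destination_word not in t.lower()]
    random.shuffle(prompts)
    random.shuffle(images_of_A)
    # each mismatched text/image pair becomes one poison sample
    return list(zip(prompts[:n_poison], images_of_A[:n_poison]))
```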
**Experiment Setup.** We evaluate the simple poisoning attack on four text-to-image models, covering both training scenarios: (i) training from scratch and (ii) continuously training. For (i), we train a latent diffusion model [56]_from scratch2_ using 1M text/image pairs from the Conceptual Caption dataset [71], referred to as LD-CC. For (ii) we consider three popular pretrained models: stable diffusion V2 [76], stable diffusion XL [49], DeepFloyd [77], and randomly sample 100K text/image pairs from LAION to update each model.
Footnote 2: We note that training-from-scratch is prohibitively expensive and has not been attempted by any prior poisoning attacks against diffusion models. Training each LD-CC model takes 8 days on an NVIDIA A100 GPU.
Following literature analyzing popular prompts [30], we select 121 total concepts to attack, including both objects (91 common objects from the COCO dataset) and art styles (20 from Wikiart [60] + 10 digital art styles from [33]). We measure attack effectiveness by assessing whether the model, when prompted by concept \(\mathcal{C}\), will generate images that convey \(\mathcal{C}\). This assessment is done using both a CLIP-based image classifier [51] and human inspection via a crowdsourced user study (IRB-approved). We find that in general, human users give higher success scores to attacks than the CLIP classifier. Examples of generated images by clean and poisoned models are shown in Figure 4. Additional details of our experiments are described later in §6.1.
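As a rough illustration, the CLIP-based half of this assessment can be implemented as zero-shot classification over the candidate concepts; the checkpoint name and the prompt template below are assumptions of the sketch, not necessarily the exact configuration used.

```python
# Sketch: zero-shot CLIP classification of generated images; an attack "succeeds"
# on an image if the predicted concept differs from the poisoned concept C.
import torch
from transformers import CLIPModel, CLIPProcessor

@torch.no_grad()
def clip_attack_success(images, poisoned_concept, concept_list,
                        model_name="openai/clip-vit-base-patch32"):
    model = CLIPModel.from_pretrained(model_name).eval()
    proc = CLIPProcessor.from_pretrained(model_name)
    texts = [f"a photo of {c}" for c in concept_list]   # use a style template for styles
    fooled = 0
    for img in images:
        inputs = proc(text=texts, images=img, return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
        predicted = concept_list[probs.argmax().item()]
        fooled += int(predicted != poisoned_concept)
    return fooled / len(images)
```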
**Attacking LD-CC.** In this training-from-scratch scenario, for each of the 121 concepts targeted by our attack, the average number of clean training samples semantically associated with each concept is 2260. Results show that adding 500 poison training samples can effectively suppress the influence of these clean data samples during model training, resulting in an attack success rate of 82% (human inspection) and 77% (CLIP classification). Adding 500 more poison samples further boosts the attack success rate to 98% (human) and 92% (CLIP). Details are in Figure 19 in the Appendix.
**Attacking SD-V2, SD-XL, DeepFloyd.** Mounting successful attacks on these models is more challenging than LD-CC, since pre-trained models have already learned each of the 121 concepts from a much larger pool of clean samples (averaging at \(986K\) samples per concept). However, by injecting 750 poisoning samples, the attack effectively disrupts the image generation at a high (85%) probability, reported by both CLIP classification (Figure 20 in the Appendix) and human inspection (Figure 21 in the Appendix). Injecting 1000 poisoning samples pushes the success rate beyond 90%.
Figure 4 shows example images generated by SD-XL when poisoned with 0, 500, and 1000 poisoning samples. Here we present four attacks aimed at concepts \(\mathcal{C}\) ("dog", "car", "fantasy art", "cubism"), using the destination concept \(\mathcal{A}\) ("cat", "cow", "line art", "cartoon"), respectively. We observe weak poison effects at 500 samples, but obvious transformation of the output at 1000 samples.
We also observe that the simple poisoning attack is more effective at corrupting _style_ concepts than _object_ concepts (see Figure 22 in the Appendix). This is likely because styles are typically conveyed visually by the entire image, while objects define specific regions within the image. Later in §5 we leverage this observation to build a more advanced attack.
**Concept Sparsity Impact on Attack Efficacy.** We further study how concept sparsity impacts attack efficacy. We sample 15 object concepts with varying sparsity levels, in terms
Figure 4: Example images generated by the clean (unpoisoned) and poisoned SD-XL models with different # of poison data. The attack effect is apparent with 1000 poisoning samples, but not at 500 samples.
Figure 3: Samples of dirty-label poison data in terms of mismatched text/image pairs, curated to attack the concept "dog." Here “cat” was chosen by the attacker as the destination concept \(\mathcal{A}\).
of word and semantic frequency discussed in §3.3. As expected, the poisoning attack is more successful when disrupting sparser concepts, and semantic frequency is a more accurate representation of concept sparsity than word frequency. These empirical results confirm our hypothesis in §3.2. We include the detailed plots in the Appendix (Figure 23 and Figure 24).
## 5 Nightshade: an Optimized Prompt-Specific Poisoning Attack
Our results in §4 show that _concept sparsity_ makes it feasible to poison text-to-image diffusion models. Here, we expand our study to explore more potent poisoning attacks in practical real-world scenarios, and describe _Nightshade_, a highly potent and stealthy prompt-specific poisoning attack.
### Overview
Our advanced attack has two key goals.
* **Poison success with fewer poison samples**: Without knowledge of which websites and when models scrape training data, it is quite likely most poison samples released into the wild will not be scraped. Thus it is critical to increase potency, so the attack can succeed even when a small portion of poison samples enter the training pipeline
* **Avoid human and automated detection**: Successful attacks must avoid simple data curation or filtering by both humans (visual inspection) and automated methods. Clearly, the basic dirty-label attack (§4) fails in this respect.
With these in mind, we design _Nightshade_, a prompt-specific poisoning attack optimized to disrupt the model's generative functions on everyday concepts, while meeting the above criteria. Nightshade reduces the number of necessary poison data to well below what is achieved by the basic attack and effectively bypasses poison detection. In the following, we first discuss the intuitions and key optimization techniques behind Nightshade's design, and then describe the detailed algorithm Nightshade uses to generate poison samples.
### Intuitions and Optimization Techniques
Design Intuitions. We design Nightshade based on two intuitions to meet the two aforementioned criteria:
* To reduce the number of poison image/text pairs necessary for a successful attack, one should magnify the influence of each poison text/image pair on the model's training, and minimize conflicts among different poison text/image pairs.
* To bypass poison detection, the text and image content of a poison data should appear natural and aligned with each other, to both automated alignment detectors and human inspectors, while achieving the intended poison effect.
Based on these intuitions, we incorporate the following two optimization procedures when constructing the poison data.
Maximizing Poison Influence. To change the model behavior on a concept \(\mathcal{C}\), the poison data needs to overcome the contribution made by \(\mathcal{C}\)'s clean training data. One can model such contribution by the gradients (both norm and direction) used to update the model parameters related to \(\mathcal{C}\). To dominate the clean data, the optimal poison data (as a group) should produce gradient values related to \(\mathcal{C}\) with a high norm, all pointing consistently to a distinct direction away from those of the clean data.
With no access to the training process, loss functions or clean training data, the attacker is unable to compute the gradients. Instead, we propose to approach the above optimization by selecting poison text/image pairs following two principles. _First_, each poison text prompt clearly and succinctly conveys the keyword \(\mathcal{C}\), allowing the poison data to _exclusively_ target the model parameters associated with \(\mathcal{C}\). _Second_, each poison image clearly and succinctly portrays a concept \(\mathcal{A}\) that is unrelated to \(\mathcal{C}\). The irrelevancy between \(\mathcal{C}\) and \(\mathcal{A}\) ensures that, when paired with the poison text prompts conveying \(\mathcal{C}\), the poison images will produce the gradient updates pointing to a distinct direction (defined by \(\mathcal{A}\)) away from those of the clean data (defined by \(\mathcal{C}\)).
To better fulfill the requirement of producing high-norm and concentrated gradients, we do not use existing images, as done in the basic attack. Instead, we _generate prototypical images_ of \(\mathcal{A}\) by querying a text-to-image generative model that the attacker has access to (see threat model in SS3.1). The queries directly convey \(\mathcal{A}\), _i.e._, "a photo of \(\{\mathcal{A}\}\)" when \(\mathcal{A}\) is an object, and "a painting in style of \(\{\mathcal{A}\}\)" when \(\mathcal{A}\) is a style.
Constructing "Clean-label" Poison Data.So far, we have created poison data by pairing prototypical, generated images of \(\mathcal{A}\) with optimized text prompts of \(\mathcal{C}\). Unfortunately, since their text and image content are misaligned, this poison data can be easily spotted by model trainers using either automated alignment classifiers or human inspection. To overcome this, Nightshade takes an additional step to replace the generated images of \(\mathcal{A}\) with perturbed, natural images of \(\mathcal{C}\) that bypass poison detection while providing the same poison effect.
This step is inspired by clean-label poisoning for classifiers [80, 5, 68, 97]. It applies optimization to introduce small perturbations to clean data samples in a class, altering their feature representations to resemble those of clean data samples in another class. Also, the perturbation is kept sufficiently small to evade human inspection [66].
We extend the concept of "guided perturbation" to build Nightshade's poison data. Given the generated images of \(\mathcal{A}\), hereby referred to as "anchor images," our goal is to build effective poison images that look visually identical to natural images of \(\mathcal{C}\). Let \(t\) be a chosen poison text prompt, \(x_{t}\) be the natural, clean image that aligns3 with \(t\). Let \(x^{a}\) be one of the anchor images. The optimization to find the poison image for
\(t\), or \(x_{t}^{p}=x_{t}+\delta\), is defined by
\[\min_{\delta}Dist\left(F(x_{t}+\delta),F(x^{a})\right),\ \ \ \text{subject to}\ \ |\delta|<p \tag{1}\]
where \(F(.)\) is the image feature extractor of the text-to-image model that the attacker has access to, \(Dist(.)\) is a distance function in the feature space, \(|\delta|\) is the perceptual perturbation added to \(x_{t}\), and \(p\) is the perceptual perturbation budget. Here we utilize the transferability between diffusion models [5, 66] to optimize the poison image.
Figure 5 provides an illustrative example of the poison data curated to corrupt the concept "dog" (\(C\)) using "cat" (as \(\mathcal{A}\)).
### Detailed Attack Design
We now present the detailed algorithm of Nightshade to curate poison data that disrupts \(\mathcal{C}\). The algorithm outputs \(\{\mathsf{Text}_{\text{p}}/\mathsf{Image}_{\text{p}}\}\), a collection of \(N_{p}\) poison text/image pairs. It uses the following resources and parameters:
* \(\{\mathsf{Text}/\mathsf{Image}\}\): a collection of \(N\) natural (and aligned) text/image pairs related to \(\mathcal{C}\), where \(N>>N_{p}\);
* \(\mathcal{A}\): a concept that is semantically unrelated to \(\mathcal{C}\);
* M: an open-source text-to-image generative model;
* M\({}_{\text{text}}\): the text encoder of M;
* \(p\): a small perturbation budget.
**Step 1: Selecting poison text prompts \(\{\mathsf{Text}_{\text{p}}\}\).**
Examine the text prompts in \(\{\mathsf{Text}\}\) and find the set of high-activation text prompts of \(\mathcal{C}\). Specifically, \(\forall t\in\{\mathsf{Text}\}\), use the text encoder \(\mathsf{M}_{\text{text}}\) to compute the cosine similarity of \(t\) and \(\mathcal{C}\) in the semantic space: \(CosineSim\left(\mathsf{M}_{\text{text}}(t),\mathsf{M}_{\text{text}}(\mathcal{C})\right)\). Find the 5K top-ranked prompts under this metric and randomly sample \(N_{p}\) text prompts from them to form \(\{\mathsf{Text}_{\text{p}}\}\). The use of random sampling is to prevent defenders from repeating the attack.
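A minimal sketch of this selection step is shown below, assuming a public CLIP text encoder stands in for \(\mathsf{M}_{\text{text}}\); the checkpoint name and helper inputs are illustrative.

```python
# Sketch of Step 1: rank candidate prompts by cosine similarity to the poisoned
# concept C in the text-encoder space, then randomly sample N_p of the top 5K.
import random
import torch
from transformers import CLIPModel, CLIPTokenizer

@torch.no_grad()
def select_poison_prompts(candidate_prompts, concept, n_poison, top_k=5000,
                          model_name="openai/clip-vit-large-patch14"):
    model = CLIPModel.from_pretrained(model_name).eval()
    tok = CLIPTokenizer.from_pretrained(model_name)

    def embed(texts):
        inp = tok(texts, padding=True, truncation=True, return_tensors="pt")
        feats = model.get_text_features(**inp)
        return feats / feats.norm(dim=-1, keepdim=True)

    sims = (embed(candidate_prompts) @ embed([concept]).T).squeeze(-1)
    top = sims.topk(min(top_k, len(candidate_prompts))).indices.tolist()
    # random sampling from the top-ranked prompts, as described in Step 1
    return [candidate_prompts[i] for i in random.sample(top, n_poison)]
```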
**Step 2: Generating anchor images based on \(\mathcal{A}\).**
Query the available generator \(\mathsf{M}\) with "a photo of \(\{\mathcal{A}\}\)" if \(\mathcal{A}\) is an object, and "a painting in style of \(\{\mathcal{A}\}\)" if \(\mathcal{A}\) is a style, to generate a set of \(N_{p}\) anchor images \(\{\mathsf{Image}_{\text{anchor}}\}\).
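This step amounts to repeated text-to-image queries against the attacker's model \(\mathsf{M}\); the sketch below uses a diffusers Stable Diffusion checkpoint as an illustrative stand-in, not a fixed requirement.

```python
# Sketch of Step 2: generate N_p anchor images of the destination concept A.
import torch
from diffusers import StableDiffusionPipeline

def generate_anchor_images(destination, n_poison, is_style=False,
                           model_name="runwayml/stable-diffusion-v1-5"):
    pipe = StableDiffusionPipeline.from_pretrained(model_name,
                                                   torch_dtype=torch.float16).to("cuda")
    prompt = (f"a painting in style of {destination}" if is_style
              else f"a photo of {destination}")
    # one image per call keeps memory use modest; batching is also possible
    return [pipe(prompt).images[0] for _ in range(n_poison)]
```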
**Step 3: Constructing poison images \(\{\mathsf{Image}_{\text{p}}\}\).**
For each text prompt \(t\in\{\mathsf{Text}_{\text{p}}\}\), locate its natural image pair \(x_{t}\) in \(\{\mathsf{Image}\}\). Choose an anchor image \(x^{a}\) from \(\{\mathsf{Image}_{\text{anchor}}\}\). Given \(x_{t}\) and \(x^{a}\), run the optimization of eq. (1) to produce a perturbed version \(x_{t}^{\prime}=x_{t}+\delta\), subject to \(|\delta|<p\). Like [19], we use LPIPS [96] to bound the perturbation and apply the _penalty method_[46] to solve the optimization:
\[\min_{\delta}||F(x_{t}+\delta)-F(x^{a})||_{2}^{2}+\alpha\cdot max(LPIPS( \delta)-p,0). \tag{2}\]
Next, add the text/image pair \(t/x_{t}^{\prime}\) into the poison dataset \(\{\mathsf{Text}_{\text{p}}/\mathsf{Image}_{\text{p}}\}\), remove \(x^{a}\) from the anchor set, and move to the next text prompt in \(\{\mathsf{Text}_{\text{p}}\}\).
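A simplified sketch of the Step 3 optimization (eq. 2) is given below. It assumes the VAE encoder of an open-source Stable Diffusion checkpoint as the feature extractor \(F\), expects image tensors scaled to [-1, 1], and uses illustrative values for \(\alpha\) and the learning rate; it is a sketch of the penalty-method loop rather than the exact implementation.

```python
# Sketch of Step 3: perturb a clean image of C toward an anchor image of A in
# feature space, with an LPIPS penalty keeping the change below budget p.
import torch
import lpips
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5",
                                    subfolder="vae").eval().requires_grad_(False)
lpips_fn = lpips.LPIPS(net="vgg").eval()

def extract_features(x):
    # x: image tensor in [-1, 1] with shape (1, 3, H, W)
    return vae.encode(x).latent_dist.mean

def nightshade_perturb(x_clean, x_anchor, budget=0.07, alpha=4.0, steps=500, lr=0.01):
    with torch.no_grad():
        target = extract_features(x_anchor)
    delta = torch.zeros_like(x_clean, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_p = (x_clean + delta).clamp(-1, 1)
        feat_loss = (extract_features(x_p) - target).pow(2).mean()      # pull toward A
        percept = lpips_fn(x_p, x_clean).mean()                         # perceptual change
        loss = feat_loss + alpha * torch.clamp(percept - budget, min=0)  # penalty method
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x_clean + delta).clamp(-1, 1).detach()
```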
## 6 Evaluation
In this section, we evaluate the efficacy of Nightshade attacks under a variety of settings and attack scenarios, as well as other properties including bleed through to related concepts, composability of attacks, and attack generalizability.
### Experimental Setup
**Models and Training Configuration.** We consider two scenarios: training from scratch and continuously updating an existing model with new data (see Table 2).
* _Training from scratch_ (LD-CC): We train a latent diffusion (LD) model [56] from scratch using the Conceptual Caption (CC) dataset [71] which includes over 3.3M image/text pairs. We follow the exact training configuration of [56] and train LD models on 1M samples uniformly sampled from CC. The clean model performs comparably (FID=17.5) to a version trained on the full CC data (FID=16.8). As noted in §4, training each model takes 8 days on an NVidia A100 GPU.
* _Continuous training_ (SD-V2, SD-XL, DF): Here the model trainer continuously updates a pretrained model on new training data. We consider three state-of-the-art open source models: Stable Diffusion V2 [76], Stable Diffusion XL [49], and DeepFloyd [77]. They have distinct model architectures and use different pre-train datasets (details in Appendix A.1). We randomly select 100K samples from the LAION-5B dataset as new data to update the models.
Figure 5: An illustrative example of Nightshade’s curation of poison data to attack the concept “dog” using “cat”. The anchor images (right) are generated by prompting “a photo of cat” on the clean SD-XL model multiple times. The poison images (middle) are perturbed versions of natural images of “dog”, which resemble the anchor images in feature representation.
Figure 6: Examples of Nightshade poison images (perturbed with a LPIPS budget of 0.07) and their corresponding original clean images.
**Concepts.** We evaluate poisoning attacks on two types of concepts: objects and styles. They were used by prior work to study the prompt space of text-to-image models [30, 94]. For objects, we use all 91 objects from the MSCOCO dataset [37], _e.g.,_ "dog", "cat", "boat", "car". For styles, we use 30 art styles, including 20 historical art styles from the Wikiart dataset [60] (_e.g.,_ "impressionism" and "cubism") and 10 digital art styles from [33] (_e.g.,_ "anime", "fantasy"). These concepts are all mutually semantically distinct.
**Nightshade Attack Configuration.** Following the attack design in §5.3, we randomly select 5K samples from LAION-5B (minus LAION-Aesthetic) as the natural dataset \(\{\text{Text}/\text{Image}\}\). We ensure they do not overlap with the 100K training samples in Table 2. These samples are unlikely to be present in the pretrain datasets, which are primarily from LAION-Aesthetic. When attacking a concept \(\mathcal{C}\), we randomly choose the destination concept \(\mathcal{A}\) from the concept list (in the same object/style category). For guided perturbation, we follow prior work and use an LPIPS budget of \(p=0.07\) and run an Adam optimizer for 500 steps [19, 69]. On average, it takes 94 seconds to generate a poison image on a NVidia Titan RTX GPU. Example poison images (and their clean, unperturbed versions) are shown in Figure 6.
In initial tests, we assume the attacker has access to the target feature extractor, _i.e._\(\mathcal{M}\) is the unpoisoned version of the model being attacked (for LD-CC) or the clean pretrained model (for SD-V2, SD-XL, DF) before continuous updates. Later in §6.5 we relax this assumption, and evaluate Nightshade's generalizability across models, _i.e._ when \(\mathcal{M}\) differs from the model under attack. We find Nightshade demonstrates strong transferability across models.
**Evaluation Metrics.** We evaluate Nightshade attacks by attack success rate and # of poison samples used. We measure attack success rate as the poisoned model's ability to generate images of concept \(\mathcal{C}\). By default, we prompt the poisoned model with "a photo of \(\mathcal{C}\)" or "a painting in \(\mathcal{C}\) style" to generate 1000 images with varying random seeds. We also experiment with more diverse and complex prompts in §6.5 and produce qualitatively similar results. We measure the "correctness" of these 1000 images using two metrics:
* _Attack Success Rate by CLIP Classifier._ We apply a zero-shot CLIP classifier [51] to label the object/style of the images as one of the 91 objects/30 styles. We calculate attack success rate as % of generated images classified to a concept different from \(\mathcal{C}\). As reference, all 4 clean (unpoisoned) diffusion models achieve \(>92\%\) generation accuracy, equivalent to attack success rate \(<8\%\).
* _Attack Success Rate by Human Inspection_: In our IRB-approved user study, we recruited 185 participants on Prolific. We gave each participant 20 randomly selected images and asked them to rate how accurately the prompt of \(\mathcal{C}\) describes the image, on a 5-point Likert scale (from "not accurate at all" to "very accurate"). We measure attack success
\begin{table}
\begin{tabular}{l l c c} \hline \hline
**Training** & **Model** & **Pretrain Dataset** & **\# of Clean** \\
**Scenario** & **Name** & **(\# of pretrain data)** & **Training Data** \\ \hline Train from scratch & LD-CC & - & 1 M \\ \hline Continuous & SD-V2 & LAION (\(\sim\)600M) & 100K \\ training & SD-XL & Internal Data (\(>\)600M) & 100K \\ & DF & LAION (\(\sim\)600M) & 100K \\ \hline \hline \end{tabular}
\end{table}
Table 2: Text-to-image models and training configurations.
Figure 7: Examples of images generated by the Nightshade-poisoned SD-XL models and the clean SD-XL model, when prompted with the poisoned concept \(\mathcal{C}\). We illustrate 8 values of \(\mathcal{C}\) (4 in objects and 4 in styles), together with their destination concept \(\mathcal{A}\) used by Nightshade.
rate by the % of images rated as "not accurate at all" or "not very accurate."
### Attack Effectiveness
Nightshade attacks succeed with little poison data. Nightshade successfully attacks all four diffusion models with minimal (\(\approx\)100) poison samples, less than 20% of that required by the simple attack. Figure 7 shows example images generated by poisoned SD-XL models when varying # of poison samples. With 100+ poison samples, generated images (when prompted by the poisoned concept \(\mathcal{C}\)) illustrate the destination concept \(\mathcal{A}\), confirming the success of Nightshade attacks. To be more specific, Figures 8-11 plot the attack success rate for all four models, measured using the CLIP classifier or by human inspection, as a function of # of poison samples used. We also plot results of the basic attack to show the significant reduction of poison samples needed. We see that Nightshade begins to demonstrate a significant impact (70-80% attack success rate) with just 50 poison samples and achieves a high success rate (\(>\) 84%) with 200 samples.
Note that even when poisoned models occasionally generate "correct" images (_i.e.,_ classified as concept \(\mathcal{C}\)), they are often incoherent, _e.g.,_ the six-legged "dog" and the strange "car" in the second row of Figure 7. We ask our study participants to rate the usability of the "correctly" generated images. Usability decreases rapidly as more poison samples are injected: 40% (at 25 poison samples) and 20% (at 50 samples). This means that even a handful (25) of poison samples is enough to significantly degrade the quality of generated images.
**Visualizing changes in model internals.** Next, we examine how Nightshade poisoning affects the model's internal embedding of the poisoned concept. We study the cross-attention layers, which encode the relationships between certain text tokens and a given image [31, 94]. Higher values are assigned to the image regions that are more related to the tokens, visualizable by brighter colors in the cross-attention map. Figure 12 plots the cross-attention maps of a model before and after poisoning (SD-V2 with 200 poison samples) for two object concepts targeted by Nightshade ("hat" and "handbag"). The object shape is clearly highlighted by the clean model map, but has clearly changed to the destination concept ("banana" and "fork") once the model is poisoned.
**Impact of adding clean data from related concepts.** Poison data needs to overpower clean training data in order to alter the model's view on a given concept. Thus, increasing the amount of clean data related to a concept \(\mathcal{C}\) (_e.g.,_ clean data of "dog" and its synonyms) will make poisoning attacks on \(\mathcal{C}\) more challenging. We measure this impact on LD-CC by adding clean samples from LAION-5B. Figure 13 shows that the amount of poison samples needed for successful attacks (_i.e.,_\(>\) 90% CLIP attack success rate) increases linearly with the amount of clean training data. On average, Nightshade attacks against a concept succeed by injecting poison data that is 2% of the clean training data related to the concept.
### Bleed-through to Other Concepts
Next, we consider how specific the effects of Nightshade poison are to the precise prompt targeted. If the poison is only associated with a specific term, then it can be easily bypassed by prompt rewording, _e.g._ automatically replacing the poisoned term "dog" with "big puppy." Instead, we find that these attacks exhibit a "bleed-through" effect. Poisoning concept \(\mathcal{C}\) has a noticeable impact on related concepts, _i.e.,_ poisoning "dog" also corrupts the model's ability to generate "puppy" or "husky." Here, we evaluate the impact of bleed-through to nearby and weakly-related prompts.
**Bleed-through to nearby concepts.** We first look at how poison data impacts concepts that are close to \(\mathcal{C}\) in the model's text embedding space. For a poisoned concept \(\mathcal{C}\) (_e.g.,_ "dog"), these "nearby concepts" are often synonyms (_e.g.,_ "puppy", "hound", "husky") or alternative representations (_e.g.,_ "canine"). Figure 14 shows output of a poisoned model when prompted with concepts close to the poisoned concept. Nearby, untargeted, concepts are significantly impacted by poisoning. Table 3 shows that the CLIP attack success rate on nearby concepts decreases as they move further from \(\mathcal{C}\). Bleed-through strength is also impacted by the number of poison samples (when \(3.0<D\leq 6.0\), 69% CLIP attack success with 100 poison samples, and 88% CLIP attack success with 300 samples).
**Bleed-through to related prompts.** Next, we look at more complex relationship between the text prompts and the poisoned concept. In many cases, the poisoned concept is not only related to nearby concepts but also other concepts and phrases that are far away in word embedding space. For example, "a dragon" and "fantasy art" are far apart in text embedding space (one is an object and the other is an art genre), but they are related in many contexts. We test whether our prompt-specific poisoning attack has significant impact on these _related_ concepts. Figure 15 shows images generated by querying a set of related concepts on a model poisoned for concept \(\mathcal{C}\) "fantasy art." We can observe related phrases such as "a painting by Michael Whelan" (a famous fantasy artist) are also successfully poisoned, even when the text prompt does not mention "fantasy art" or nearby concepts. On the right side of Figure 15, we show that unrelated concepts (_e.g.,_ Van Gogh style) are not impacted.
We have further results on understanding bleed-through
\begin{table}
\begin{tabular}{c c c c c} \hline
**L2 Distance to** & **Average Number of** & \multicolumn{3}{c}{**Average CLIP attack success rate**} \\ \cline{3-5}
**poisoned concept (D)** & **Concepts Included** & 100 poison & 200 poison & 300 poison \\ \hline \(D=0\) & 1 & 85\% & 96\% & 97\% \\ \(0<D\leq 3.0\) & 5 & 76\% & 94\% & 96\% \\ \(3.0<D\leq 6.0\) & 13 & 69\% & 79\% & 88\% \\ \(6.0<D\leq 9.0\) & 52 & 22\% & 36\% & 55\% \\ \(D>9.0\) & 1929 & 5\% & 5\% & 6\% \\ \hline \end{tabular}
\end{table}
Table 3: Poison attack bleed-through to nearby concepts. The CLIP attack success rate decreases (weaker bleed-through effect) as the \(L_{2}\) distance between a nearby concept and the poisoned concept increases. Models poisoned with a larger number of poison samples have a stronger impact on nearby concepts. (SD-XL)
effects between artists and art styles, as well as techniques to amplify the bleed-through effect to expand the impact of poison attacks. Those details are available in Appendix A.4.
### Stacking Multiple Attacks
Given the wide deployment of generative image models today, it is not unrealistic to imagine that a single model might come under attack by multiple entities targeting completely unrelated concepts with poison attacks. Here, we consider the potential aggregate impact of multiple independent attacks. First, we show results on composability of poison attacks. Second, we show a surprising result: a sufficient number of attacks can actually destabilize the entire model, effectively disabling the model's ability to generate responses to completely unrelated prompts.
these high-level coarse features, _e.g._, poisoning fantasy art will slightly degrade the model's performance on all artwork. Thus it is possible that a sufficient number of attacks can significantly degrade a model's overall performance.
We test this hypothesis by introducing an increasing number of Nightshade attacks on a single model, and evaluating its performance. We follow prior work on text-to-image generation [54, 56, 57, 48] and leverage two popular metrics to evaluate generative model's overall performance: 1) CLIP alignment score which captures generated image's alignment to its prompt [51], and 2) FID score which captures image quality [32]. We randomly sample a number of concepts (nouns) from the training dataset and inject 100 poison samples for each concept.
We find that as more concepts are poisoned, the model's overall performance drops dramatically: alignment score \(<\) 0.24 and FID \(>\) 39.6 when 250 different concepts are poisoned with 100 samples each. Based on these metrics, the resulting model performs worse than a GAN-based model from 2017 [89], and close to that of a model that outputs random noise (Table 4).
Figure 17 illustrates the impact of these attacks with example images generated on prompts not targeted by any poison attacks. We include two generic prompts ("a person" and "a painting") and a rare prompt ("seashell"), which is far away from most other concepts in text embedding space (see Appendix Figure 18). Image quality starts to degrade noticeably with 250 concepts poisoned. When 500 to 1000 concepts are poisoned, the model generates what seems like random noise. For a model trained from scratch (LD-CC), similar levels of degradation require 500 concepts to be poisoned (Table 9 in the Appendix). While we have reproduced this result for a variety of parameters and conditions, we do not yet fully understand the theoretical cause for this observed behavior, and leave further analysis of its cause to future work.
### Attack Generalizability
Next, we consider attack generalizability, in terms of transferability to other models and applicability to complex prompts.
**Attack transferability to different models.** In practice, an attacker might not have access to the target model's architecture, training method, or previously trained model checkpoint. Here, we evaluate our attack performance when the attacker and model trainer use different model architectures and/or different training data. We assume the attacker uses a clean model from one of our 4 models to construct poison data,
Figure 16: Two independent poison attacks (poisoned concept: dog and fantasy art) on the same model can co-exist together.
Figure 17: Images generated by poisoned SD-XL models as attacker poisons an increasing number of concepts. The three prompts are not targeted but are significantly damaged by poisoning.
Figure 15: Image generated from different prompts by a poisoned model where concept “fantasy art” is poisoned. Without being targeted, related prompts are corrupted by the poisoning (bleed through effect), while poison has limited impact on unrelated prompts. SD-XL model poisoned with 200 poison samples.
and applies it to a model using a different model architecture. Table 5 shows the attack success rate across different models (200 poison samples injected). When relying on transferability, the effectiveness of the Nightshade poison attack drops but remains high (\(>72\%\) CLIP attack success rate). Attack transferability is significantly higher when the attacker uses SD-XL, likely because it has higher model performance and extracts more generalizable image features, as observed in prior work [70, 87].
**Attack performance on diverse prompts.** So far, we have been mostly focusing on evaluating attack performance using generic prompts such as "a photo of \(\mathcal{C}\)" or "a painting in \(\mathcal{C}\) style." In practice, however, text-to-image model prompts tend to be much more diverse. Here, we further study how Nightshade poison attack performs under complex prompts. Given a poisoned concept \(\mathcal{C}\), we follow prior work [57] to generate 4 types of complex prompts (examples shown in Table 6). More details on the prompt construction can be found in Section 4 of [57]. We summarize our results in Table 6. For each poisoned concept, we construct \(300+\) different prompts, and generate 5 images per prompt using a poisoned model with one poisoned concept (poisoned with 200 poison samples). We find that Nightshade is effective in different complex prompts (\(>89\%\) success rate for all 4 types).
## 7 Potential Defenses
We consider potential defenses that model trainers could deploy to reduce the effectiveness of prompt-specific poison attacks. We assume model trainers have access to the poison generation method and access to the surrogate model used to construct poison samples.
While many detection/defense methods have been proposed to detect poison in classifiers, recent work shows they are often unable to extend to or are ineffective in generative models (LLMs and multimodal models) [83, 91]. Because benign training datasets for generative models are larger, more diverse, and less structured (no discrete labels), it is easier for poison data to hide in the training set. Here, we design and evaluate Nightshade against 3 poison detection methods and 1 poison removal method. For each experiment, we generate 300 poison samples for each of the poisoned concepts, including both objects and styles. We report both precision and recall for defenses that detect poison data, as well as the impact on attack performance when the model trainer filters out any data detected as poison. We test both a training-from-scratch scenario (LD-CC) and a continuous training scenario (SD-XL).
**Filtering high loss data.** Poison data is designed to incur high loss during model training. Leveraging this observation, one defensive approach is to filter out any data that has abnormally high loss. A model trainer can calculate the training loss of each data sample (using a clean pretrained model) and filter out the ones with the highest loss. We found this approach ineffective at detecting Nightshade poison data, achieving 73% precision and 47% recall with 10% FPR. Removing all the detected data points prior to training the model only reduces the Nightshade attack success rate by \(<5\%\) because it will remove less than half of the poison samples on average, but the remaining 159 poison samples are more than sufficient to achieve attack success (see Figure 10). The low detection performance is because benign samples in large text/image datasets are often extremely diverse and noisy, and a significant portion of them produces high loss, leading to a high false positive rate of 10%. Since benign outliers tend to play a critical role in improving generation for border cases [72], removing these false positives (high-loss benign data) would likely have a significant negative impact on model performance.
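To make this baseline concrete, the sketch below scores a text/image pair by the standard latent-diffusion denoising loss under a clean pretrained checkpoint; samples whose average loss is abnormally high would then be dropped before training. The checkpoint name, preprocessing, and number of noise draws are assumptions, not the exact defense configuration evaluated here.

```python
# Sketch: per-sample denoising loss under a clean pretrained latent-diffusion
# model, usable to rank and filter suspected high-loss (poison) samples.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

repo = "runwayml/stable-diffusion-v1-5"
vae = AutoencoderKL.from_pretrained(repo, subfolder="vae").eval()
unet = UNet2DConditionModel.from_pretrained(repo, subfolder="unet").eval()
text_enc = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder").eval()
tok = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
sched = DDPMScheduler.from_pretrained(repo, subfolder="scheduler")

@torch.no_grad()
def denoising_loss(pixels, caption, n_draws=8):
    # pixels: (1, 3, 512, 512) tensor in [-1, 1]; caption: its text prompt
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor
    ids = tok(caption, padding="max_length", truncation=True,
              max_length=tok.model_max_length, return_tensors="pt").input_ids
    cond = text_enc(ids)[0]
    losses = []
    for _ in range(n_draws):  # average over random timesteps and noise draws
        t = torch.randint(0, sched.config.num_train_timesteps, (1,))
        noise = torch.randn_like(latents)
        noisy = sched.add_noise(latents, noise, t)
        pred = unet(noisy, t, encoder_hidden_states=cond).sample
        losses.append(F.mse_loss(pred, noise).item())
    return sum(losses) / len(losses)
```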
**Frequency analysis.** The success of a prompt-specific poison attack relies on injecting a set of poison data whose text belongs to the poisoned concept. It is possible for model trainers to monitor the frequency of each concept and detect any abnormal change of data frequency in a specific concept. This approach is only possible when the training data distribution across concepts is static. This is often not true for real-world datasets, as the concept distribution in a dataset depends on many factors, _e.g.,_ time (news cycles, trending topics) and location (country) of collection.
In the ideal case where the overall distribution of clean data across concepts is fixed, detection with frequency analysis is still challenging due to sampling differences. We assume that the LAION-5B dataset represents the distribution of clean data, draw 2 independent random samples of 500K data points from LAION-5B, and repeat this process 10 times. Across these two samplings, an average of \(>19.2\%\) of concepts have \(>30\%\) frequency differences. When injecting 300 poison samples to poison a concept in the LD-CC model, the Nightshade poison attack only incurs \(<30\%\) frequency changes for \(>91\%\) of the poisoned concepts, making it difficult to detect poisoned concepts without sacrificing performance for other concepts.
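The sampling experiment above can be sketched as follows; `captions` and `concepts` are assumed inputs, and the 30% relative-change threshold mirrors the figure quoted in the text.

```python
# Sketch: compare per-concept caption frequencies between two random samples of
# the dataset to estimate how much natural sampling noise a frequency-based
# detector must tolerate.
import random
from collections import Counter

def concept_frequencies(caption_sample, concepts):
    counts, concept_set = Counter(), set(concepts)
    for cap in caption_sample:
        for w in set(cap.lower().split()):
            if w in concept_set:
                counts[w] += 1
    n = len(caption_sample)
    return {c: counts[c] / n for c in concepts}

def fraction_with_large_shift(captions, concepts, sample_size=500_000, rel_change=0.3):
    a = concept_frequencies(random.sample(captions, sample_size), concepts)
    b = concept_frequencies(random.sample(captions, sample_size), concepts)
    shifted = [c for c in concepts
               if a[c] > 0 and abs(a[c] - b[c]) / a[c] > rel_change]
    return len(shifted) / len(concepts)
```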
**Image-text alignment filtering.** Alignment filtering has
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Prompt Type** & **Example Prompt** & \begin{tabular}{c} **\# of Prompts** \\ **per Concept** \\ \end{tabular} & \begin{tabular}{c} **Attack Success \%** \\ **(CLIP)** \\ \end{tabular} \\ \hline \multirow{4}{*}{
\begin{tabular}{c} Default \\ Recontextualization \\ View Synthesis \\ Art renditions \\ Property Modification \\ \end{tabular} } & A [dog] in style of Van Gogh & 1 & 91\% \\ & A blue [dog] & 100 & 89\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: CLIP attack success rate of the poisoned model when the user prompts it with different types of prompts that contain the poisoned concept. (SD-XL poisoned with 200 poison samples)
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline
**Attacker’s** & \multicolumn{4}{c}{**Model Trainer’s Model**} \\ \cline{2-5}
**Model** & **LD-CC** & **SD-V2** & **SD-XL** & **DF** \\ \hline
**LD-CC** & 96\% & 76\% & 72\% & 79\% \\
**SD-V2** & 87\% & 87\% & 81\% & 86\% \\
**SD-XL** & 89\% & 90\% & 91\% & 88\% \\
**DF** & 87\% & 81\% & 80\% & 90\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Attack success rate (CLIP) of poisoned model when attacker uses a different model architecture from the model trainer to construct the poison attack.
**Image-text alignment filtering.** Alignment filtering has been used to detect poison data in generative models [91] and as a general way to filter out noisy data [63, 14, 64]. Alignment models [54] calculate the alignment (similarity) score between text/image pairs (as discussed in §6.4). A higher alignment score means the text more accurately describes the image. The alignment score of poison text/image pairs in the dirty-label attack (§4) is lower than that of clean data, making the poison detectable (91% precision and 89% recall at detecting poison data with a 10% false positive rate on the clean LAION dataset). For poison samples in a Nightshade attack, we find alignment filtering to be ineffective (63% precision and 47% recall with 10% FPR), and removing detected samples has limited impact on attack success (it only decreases CLIP attack success rate by \(<4\)%).
This result shows that the perturbations we optimize on poison images are able to perturb the image's features in _text-to-image models_, but they have limited impact on the features extracted by _alignment models_. This low transferability between the two models is likely because their image feature extractors are trained for completely different tasks. Alignment models are trained on text/image pairs to retrieve related text prompts from input images, and thus tend to focus more on high-level features, whereas the text-to-image model's image extractor is trained to faithfully reconstruct original images and might focus more on fine-grained detail features.
We note that it might be possible for model trainers to customize an alignment model to ensure high transferability with poison sample generation, thus making it more effective at detecting poison samples. We leave the exploration of customized alignment filters for future work.
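For reference, the thresholding step of alignment filtering can be sketched as follows; `alignment_score` is a hypothetical wrapper around a CLIP-style alignment model returning a similarity score for a text/image pair, with the threshold calibrated to a 10% false-positive rate on held-out clean pairs.

```python
import numpy as np

def alignment_filter(pairs, alignment_score, clean_scores, fpr=0.10):
    """Flag text/image pairs whose alignment score falls below the `fpr`
    quantile of scores measured on held-out clean pairs."""
    threshold = np.quantile(clean_scores, fpr)
    kept, flagged = [], []
    for image, text in pairs:
        target = flagged if alignment_score(image, text) < threshold else kept
        target.append((image, text))
    return kept, flagged
```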
**Automated image captioning.** Lastly, we look at a defense method where the model trainer completely removes the text prompts of all training data in order to remove the poison text. Once removed, the model trainer can leverage existing image captioning tools [82, 36] to generate new text prompts for each training image. Similar approaches have been used to improve the data quality of poorly captioned images [45, 35].
For a poisoned dataset, we generate image captions using the BLIP model [36] for _all_ images, and train the model on the generated text paired with the original images. We find that the captioning model often generates captions that contain the poisoned concept or related concepts when given Nightshade poison images. Thus, the defense has limited effectiveness and has very low impact on our attack (\(<6\)% drop in CLIP attack success rate for both LD-CC and SD-XL).
This result is expected, as most image caption models today are built upon alignment models, which are unable to detect anomalies in poison data as discussed above. Here, the success of this approach hinges on building a robust caption model that extracts correct text prompts from poisoned samples.
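As a sketch, the recaptioning defense amounts to the following preprocessing step, where `generate_caption` is a hypothetical wrapper around a captioning model such as BLIP [36]; as noted above, for Nightshade images the generated captions still tend to mention the poisoned (or a related) concept, so the poison largely survives.

```python
def recaption_dataset(dataset, generate_caption):
    """Discard all original text prompts and pair each image with a freshly
    generated caption before training. `dataset` is a list of (image, text)
    pairs; `generate_caption(image)` returns a new caption string."""
    return [(image, generate_caption(image)) for image, _original_text in dataset]
```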
## 8 Poison Attacks for Copyright Protection
Here, we discuss how Nightshade (or tools built upon similar techniques) can serve as a protection mechanism for intellectual property (IP), and a disincentive for model trainers who disregard opt-out and do-not-scrape/train notices.
**Power Asymmetry.** As model training has grown beyond a handful of AI companies, it is increasingly evident that there is significant power asymmetry in the tension between AI companies that build/train models and content owners trying to protect their intellectual property. As legal cases and regulatory efforts move slowly forward, the only measures available to content owners are "voluntary" measures such as opt-out lists [88] and do-not-scrape/train directives [22] in robots.txt. Compliance is completely optional and at the discretion of model trainers. While larger companies have promised to respect robots.txt directives, smaller AI companies have no incentive to do so. Finally, there are no reliable ways today to detect if and when these opt-outs or directives are violated, and thus no way to enforce or verify compliance.
**Nightshade as Copyright Protection.** In this context, Nightshade or similar techniques can provide a powerful incentive for model trainers to respect opt-outs and do-not-crawl directives. Any stakeholder interested in protecting their IP (movie studios, game developers, independent artists) can apply prompt-specific poisoning to their images, and (possibly) coordinate with other content owners on shared terms. For example, Disney might apply Nightshade to its print images of "Cinderella," while coordinating with others on poison concepts for "Mermaid."
Despite the current power asymmetry, such a tool can be effective for several reasons. First, an optimized attack like Nightshade can succeed with a small number of samples. IP owners do not know which sites or platforms will be scraped for training data or when, but high potency means that uploading Nightshade samples widely can have the desired outcome, even if only a small portion of poison samples are actually crawled and used in training. Second, current work on machine unlearning [12, 44] is limited in scalability and impractical at the scale of modern generative AI models. This means that once trained on poison data, models have few alternatives beyond regressing to an older model version. Third, while it is always possible in the future to develop detectors or antidotes for poison attacks like Nightshade, such defenses must be extremely time efficient. Processing hundreds of millions of training samples would be very costly unless the algorithm takes only a few seconds per image, and these costs would be further compounded by the potential introduction of other Nightshade variants or other poison attacks. Finally, even if Nightshade poison samples were detected efficiently (see discussion in §7), Nightshade would act as a proactive "do-not-train" filter that prevents models from training on these samples.
## 9 Conclusion
This work introduces the conceptual design, implementation, and experimental evaluation of prompt-specific poison attacks on text-to-image generative models. We believe our exploration of these issues sheds light on fundamental limitations of these models. Moving forward, poison attacks may have value as tools to encourage model trainers and content owners to negotiate a path towards licensed procurement of training data for future models.
|
2304.09737 | The Topology of Negatively Associated Distributions | We consider the sets of negatively associated (NA) and negatively correlated
(NC) distributions as subsets of the space $\mathcal{M}$ of all probability
distributions on $\mathbb{R}^n$, in terms of their relative topological
structures within the topological space of all measures on a given measurable
space. We prove that the class of NA distributions has a non-empty interior
with respect to the topology of the total variation metric on $\mathcal{M}$. We
show however that this is not the case in the weak topology (i.e. the topology
of convergence in distribution), unless the underlying probability space is
finite. We consider both the convexity and the connectedness of these classes
of probability measures, and also consider the two classes on their (widely
studied) restrictions to the Boolean cube in $\mathbb{R}^n$. | Jonathan Root, Mark Kon | 2023-04-19T15:23:11Z | http://arxiv.org/abs/2304.09737v1 | # The Topology of Negatively Associated Distributions
###### Abstract
We consider the sets of negatively associated (NA) and negatively correlated (NC) distributions as subsets of the space \(\mathcal{M}\) of all probability distributions on \(\mathbb{R}^{n}\), in terms of their relative topological structures within the topological space of all measures on a given measurable space. We prove that the class of NA distributions has a non-empty interior with respect to the topology of the total variation metric on \(\mathcal{M}\). We show however that this is not the case in the weak topology (i.e. the topology of convergence in distribution), unless the underlying probability space is finite. We consider both the convexity and the connectedness of these classes of probability measures, and also consider the two classes on their (widely studied) restrictions to the Boolean cube in \(\mathbb{R}^{n}\).
## 1 Introduction
In recent years negatively associated probability distributions have been studied as potential generalizations of independent random variables [8, 3]. However, the characterization of such probability measures on \(\mathbb{R}^{n}\) has been elusive. In many cases just the specialization of such a characterization to Boolean cube measures, i.e. probability measures whose marginals are simple variations of Bernoulli measures, has generated a great deal of interest [13, 2]. The characterization of the set of negatively associated measures on \(\mathbb{R}^{n}\) can involve even simpler questions regarding the topological structure of this set within the space of all measures. This question may have different answers under different topologies on the space of measures, which include the total variation topology and the standard weak (distributional) topology. Simple versions of this question include whether the space of such distributions is connected, convex, closed, and whether it has an interior with respect to a given topology.
Denote by \(\mathcal{M}(\mathbb{R}^{n})\) the set of all Borel probability measures on \(\mathbb{R}^{n}\). A probability measure \(\mu\in\mathcal{M}(\mathbb{R}^{n})\) is said to be _negatively correlated_ (NC) if
\[\int_{\mathbb{R}^{n}}x_{i}x_{j}\,d\mu(x)\leq\int_{\mathbb{R}^{n}}x_{i}\,d\mu(x )\int_{\mathbb{R}^{n}}x_{j}\,d\mu(x),\ \ \ \forall 1\leq i\neq j\leq n. \tag{1}\]
We say that \(\mu\) is _strictly_ NC if strict inequality holds in (1). We denote the class of NC distributions by \(\mathcal{M}_{NC}\) or \(\mathcal{M}_{NC}(\mathbb{R}^{n})\). In this context, it is also common and equivalent to say that the variables themselves, \(X_{1},\ldots,X_{n}\), are negatively correlated.
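For intuition only (this plays no role in the formal development), condition (1) is straightforward to check numerically for a finitely supported measure. The following is a minimal Python sketch, where the measure is specified by its support points and weights.

```python
import numpy as np

def is_negatively_correlated(points, weights, tol=0.0):
    """Check condition (1): Cov_mu(x_i, x_j) <= tol for all i != j.
    `points` is an (m, n) array of support points in R^n and `weights`
    an (m,) array of probabilities summing to 1."""
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mean = weights @ points                           # E[x]
    second = points.T @ (points * weights[:, None])   # E[x x^T]
    cov = second - np.outer(mean, mean)
    off_diag = cov[~np.eye(cov.shape[0], dtype=bool)]
    return bool(np.all(off_diag <= tol))

# A measure supported on the standard basis vectors of R^3 is (strictly) NC:
print(is_negatively_correlated(np.eye(3), [0.2, 0.3, 0.5]))  # True
```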
The functions \(f_{i}(x)=x_{i}\), \(i=1,\ldots n\), are non-decreasing on \(\mathbb{R}^{n}\). In general, we say that a function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is non-decreasing if \(f(x)\geq f(y)\) whenever \(x\geq y\) in the product ordering (\(x\geq y\) if and only if \(x_{i}\geq y_{i}\) for each \(i=1,\ldots,n\)). We say that \(f:\mathbb{R}^{n}\to\mathbb{R}\) is
non-increasing if \(f(x)\leq f(y)\) whenever \(x\geq y\) (again in the product ordering). We denote subsets of the index set \(\{1,\dots,n\}\) by \(I\), \(J\), and define \(x_{I}\in\mathbb{R}^{|I|}\) to be the restriction of \(x\in\mathbb{R}^{n}\) to the index set \(I\); here \(|I|\) will denote the cardinality of \(I\). Moreover, we denote by \(\mu^{(I)}\) the marginal distributions of \(\mu\): for \(A\subset\mathbb{R}^{|I|}\),
\[\mu^{(I)}(A):=\int_{A}\int_{\mathbb{R}^{-I}}d\mu(x), \tag{2}\]
where \(\mathbb{R}^{-I}\) denotes all \(x=(x_{i})_{i\notin I}\). Then \(\mu\) is said to be _negatively associated_ (NA) if for every disjoint \(I,J\subset\{1,\dots,n\}\) and every non-decreasing and integrable \(f:\mathbb{R}^{|I|}\to\mathbb{R}\), \(g:\mathbb{R}^{|J|}\to\mathbb{R}\), we have
\[\int_{\mathbb{R}^{n}}f(x_{I})g(x_{J})\,d\mu(x)\leq\int_{\mathbb{R}^{n}}f(x_{I} )\,d\mu(x)\int_{\mathbb{R}^{n}}g(x_{J})\,d\mu(x), \tag{3}\]
or equivalently
\[\operatorname{Cov}_{\mu}(f(x_{I}),g(x_{J}))\leq 0, \tag{4}\]
where \(\operatorname{Cov}_{\mu}:L^{1}(\mathbb{R}^{n},\mu)\times L^{1}(\mathbb{R}^{n },\mu)\to\mathbb{R}\) denotes the covariance operator. Note that if \(f\) or \(g\) is constant, then we have trivial equality in (3). With that said, we say that \(\mu\) is _strictly_ NA if strict inequality holds in (3) for all (\(\mu\)-almost surely) non-constant \(f(x_{I})\) and \(g(x_{J})\) and disjoint \(I,J\subset\{1,\dots,n\}\). If we specify \(f(x)=f_{i}(x)=x_{i}\) and \(g(x)=g_{j}(x)=x_{j}\), \(i\neq j\), then we arrive at (1). Thus negative association is stronger than negative correlation. We denote the class of NA distributions by \(\mathcal{M}_{NA}\) or \(\mathcal{M}_{NA}(\mathbb{R}^{n}).\) As in the case of negative correlation, it is common and equivalent to consider negatively associated variables. That is, if the variables \(X_{1},\dots,X_{n}\) are distributed according to a negatively associated distribution, then one may say that the variables themselves are negatively associated.
Besides negative association, other attempts to quantify and conceptualize dependences among random variables appear in, for example, [1, 5, 9, 10]. A concept closely related to NA, known as positive association (PA), sheds some light on the class of NA distributions. The notion of positive association was introduced into the statistical literature prior to negative association, in [6]. We say that \(\mu\) is positively associated if \(\operatorname{Cov}_{\mu}(f,g)\geq 0\), for all pairs of non-decreasing, real-valued functions \(f\) and \(g\). We note that we no longer assume that \(f\) and \(g\) are defined on disjoint subsets of variables, as we did with negative association. Remarkably (or not), significantly greater progress has been made in the theory of positive association than in the theory of negative association. This, in part, is due to an elegant result known as the FKG inequality [7], which gives a sufficient criterion for PA. At its most basic level, the FKG inequality is known as Chebyshev's inequality [4] (distinct from the standard Chebyshev's inequality in elementary probability). This theorem states that if \(X\) is a random variable on \(\mathbb{R}\) (as opposed to \(\mathbb{R}^{n}\)), and \(f,g:\mathbb{R}\to\mathbb{R}\) are both non-decreasing, then
\[\mathbf{E}(f(X)g(X))\geq\mathbf{E}f(X)\mathbf{E}g(X). \tag{5}\]
This holds for any probability distribution on the real line, so long as \(f\) and \(g\) are non-decreasing (or non-increasing). The proof of (5) is straightforward, and follows from the basic pointwise inequality
\[(f(x)-f(y))(g(x)-g(y))\geq 0, \tag{6}\]
which holds for all non-decreasing (or non-increasing) \(f,g:\mathbb{R}\to\mathbb{R}\). Indeed, assuming \(x\) and \(y\) are independent and identically distributed, upon expanding (6) and double integrating (over \(x\) and \(y\)) we obtain Chebyshev's inequality (5).
The FKG inequality is essentially a generalization of (5) to the product setting, \(\mathbb{R}^{n}\) equipped with the product ordering (i.e., \(x=(x_{1},\ldots,x_{n})\leq y=(y_{1},\ldots,y_{n})\) iff \(x_{i}\leq y_{i}\ \forall i\)). To state the result, we first define the functions \(\wedge\) (meet, or greatest lower bound) and \(\vee\) (join, or least upper bound) by,
\[x\wedge y :=\max\{z\in\mathbb{R}^{n}:z\leq x,z\leq y\}\] \[x\lor y :=\min\{z\in\mathbb{R}^{n}:z\geq x,z\geq y\}.\]
Then the FKG theorem states that if a discrete probability measure \(\mu\) on \(\mathbb{R}^{n}\) satisfies
\[\mu(x\lor y)\mu(x\wedge y)\geq\mu(x)\mu(y) \tag{7}\]
then \(\mu\) is positively associated.
Unfortunately, a criterion as simple as (7) does not (yet) exist for negative association. As pointed out by Pemantle in [13], the notion of negative association is not nearly as "robust" as positive association. Since Chebyshev's inequality (5) implies that any random variable is positively associated with itself, we cannot allow every non-decreasing function in the definition of negative association, but only non-decreasing functions defined on disjoint coordinate subsets. And again, as noted in Pemantle [13], there is a bound on how far \(\mathbf{E}x_{i}x_{j}\) can lie below \(\mathbf{E}x_{i}\mathbf{E}x_{j}\), due to the inequality \(\operatorname{Var}\left(\sum_{i}x_{i}\right)=\sum_{i,j}\operatorname{Cov}(x_{i},x_{j})\geq 0\).
The study of the class of negatively associated random variables dates back to [8], where basic properties of NA random variables are derived, and examples of NA random variables are given: multinomial, convolution of unlike multinomials, multivariate hypergeometric, Dirichlet, and Dirichlet compound multinomial variables. Though the notion of negative association has existed for some time, the literature on these distributions is still quite sparse [8, 3, 13, 2]. But interest in them is growing, due in part to the ease with which sums of NA (even NC) random variables satisfy sub-Gaussian tail bounds. Specifically, if \(\mu\) is negatively correlated on \(\mathbb{R}^{n}\), then
\[\mu\left(x\in\mathbb{R}^{n}:\left|\sum_{i=1}^{n}x_{i}-\mathbf{E}_{\mu}\sum_{i =1}^{n}x_{i}\right|\geq\lambda\right)\leq Ce^{-c\lambda^{2}},\]
for some absolute constants \(c,C>0\). It is conjectured that the same may be true for the replacement of sums \(\sum_{i}x_{i}\) by more general Lipschitz functions of such variables; the only work on this question seems to come from [14]. (The notion that Lipschitz functions on a probability space concentrate about their mean, in the sense that their tails are (in the best case) sub-Gaussian, is called the concentration of measure phenomenon [11].) The work in [14] seems to be inspired by the recent article [2], in which the authors develop a novel notion of negative dependence known as the strong Rayleigh property. Their approach is via the geometry of associated generating polynomials, and they prove several conjectures put forth in this area of research.
We emphasize here, however, that nowhere in the literature has the structure of the space of negatively associated or negatively correlated distributions been studied. It is therefore natural to ask about the topological or geometric properties of the space of NA, or even NC distributions. This question is the major impetus behind our work.
We consider these two classes of measures (NA and NC) broadly, from a topological perspective. We view them as subsets of the general space of measures \(\mathcal{M}(\mathbb{R}^{n})=C_{0}(\mathbb{R}^{n})^{*}\) (the dual of the space of continuous real-valued functions which vanish at infinity) endowed with the weak topology (technically this should be denoted as the weak-* topology). This
is the weakest topology ensuring the continuity of the maps \(f\mapsto\int_{\mathbb{R}^{n}}f\,d\mu\) for \(f\in C_{0}(\mathbb{R}^{n})\), and coincides with the standard topology of convergence in distribution for measures. Thus we say that a sequence of distributions \(\mu_{n}\) converges weakly to a distribution \(\mu\) if,
\[\int_{\mathbb{R}^{n}}f\,d\mu_{n}\to\int_{\mathbb{R}^{n}}f\,d\mu,\]
for all \(f\in C_{0}(\mathbb{R}^{n})\). When \(X\subset\mathbb{R}^{n}\) is compact in the standard topology, we may define the weak topology on \(\mathcal{M}(X)\) as follows. A basic open set in the weak topology is given by [12, 15]
\[V_{\mu}(f_{1},\ldots,f_{k};\epsilon_{1},\ldots,\epsilon_{k}):=\left\{\nu\in \mathcal{M}(X):\left|\int f_{i}\,d\nu-\int f_{i}\,d\mu\right|<\epsilon_{i},i=1 \ldots,k\right\} \tag{8}\]
where \(f_{1},\ldots,f_{k}\) are continuous real-valued functions on \(X\). The family of sets obtained by varying \(\mu,k\), \(f_{1},\ldots,f_{k}\), \(\epsilon_{1},\ldots,\epsilon_{k}\) form a basis for the weak topology, i.e. a collection of sets whose unions form all open sets. Thus a sequence of distributions \(\mu_{n}\) converges weakly to a distribution \(\mu\) if and only if
\[\int_{X}f\,d\mu_{n}\to\int_{X}f\,d\mu\]
for every \(f\in C(X)\) (now bounded due to compactness).
We may in addition view the NC and NA families as subsets of the space of all measures \(\mathcal{M}(\mathbb{R}^{n})\), but now endowed with the total variation topology. This topology is induced from the _total variation distance_:
\[\|\mu-\nu\|_{TV}:=\sup_{|f|\leq 1}\left|\int_{\mathbb{R}^{n}}f\,d\mu-\int_{ \mathbb{R}^{n}}f\,d\nu\right|.\]
In particular, in the setting of a discrete probability space (i.e. with support on a countable number of points), the total variation distance may be expressed as
\[\|\mu-\nu\|_{TV}=\sum_{x\in\mathbb{R}^{n}}|\mu\{x\}-\nu\{x\}|. \tag{9}\]
Unless \(\mu\) has finite support the total variation distance induces a stronger topology than the weak topology. This will be discussed later on in this paper.
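For a measure with finite (or countable) support, formula (9) makes the total variation distance directly computable; a small illustrative sketch:

```python
def tv_distance(mu, nu):
    """Total variation distance (9) between two discrete measures,
    each given as a dict {support_point: probability}."""
    support = set(mu) | set(nu)
    return sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

# Two measures on {0,1}^2, with points encoded as tuples:
mu = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
nu = {(0, 1): 0.5, (1, 0): 0.5}
print(tv_distance(mu, nu))  # 1.0
```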
An outline of the paper is as follows. We begin by showing that the general class of NA distributions on a compact subspace of \(\mathbb{R}^{n}\) has a non-empty interior in the total variation topology, but not in the weak topology. We next specialize to the subspace of measures concentrated on \(I_{n}=\{0,1\}^{n}\), the Boolean cube (a simplified space often considered [REFS?]), and consider the interior and boundary of these distributions. This simple case affords intuitive arguments and constructive proofs. But it is still of great interest, and much is unknown about negative association on the Boolean cube [13, 14].
Next we address the question of the convexity of the spaces of negatively associated and negatively correlated distributions. We show that these spaces are not convex for distributions on \(\mathbb{R}^{n}\), and they are similarly non-convex when restricted to the Boolean cube. We then address whether or not these spaces are connected in the weak or total variation topology.
## 2 The Topology of \({\cal M}_{NC}\) and \({\cal M}_{NA}\)
### The Interior of \({\cal M}_{NC}({\mathbb{R}}^{n})\) and \({\cal M}_{NA}({\mathbb{R}}^{n})\)
The interiors of both the space of negatively correlated and negatively associated distributions are intimately connected to their _strict_ counterparts. Recall that a distribution \(\mu\) on \({\mathbb{R}}^{n}\) is strictly NC if
\[{\rm Cov}_{\mu}(x_{i},x_{j})<0\]
for all \(1\leq i<j\leq n\), and a distribution \(\mu\) on \({\mathbb{R}}^{n}\) is _strictly_ NA if
\[{\rm Cov}_{\mu}(f(x_{I}),g(x_{J}))<0\]
for all strictly non-decreasing (not almost surely constant) \(f,g\) and disjoint \(I,J\subset\{1,\ldots,n\}\). If a distribution \(\mu\) is strictly NA then it must be strictly NC. It is not _a priori_ clear that strictly NA distributions even exist. However, we prove below in Lemma 1 that they indeed exist on the Boolean cube \(\{0,1\}^{n}\) (and thus by extension on \({\mathbb{R}}^{n}\)).
We note that here and elsewhere, a strictly non-decreasing function \(f\) means a non-decreasing function that is not essentially constant with respect to the measure \(\mu\) under consideration. We define the total variation of a non-decreasing function \(f\) with respect to a measure \(\mu\) to be \(\sup f-\inf f\), where \(\sup\) denotes essential \(\sup\) (i.e. modulo sets of measure 0) and \(\inf\) denotes essential \(\inf\). Note however that on the cube these two notions (sup and essential \(\sup\)) coincide for non-decreasing and non-increasing functions, as do \(\inf\) and essential \(\inf\).
**Lemma 1**: _There exist strictly NA distributions on \(\{0,1\}^{n}\) such that for all non-decreasing \(f(x_{I}),g(x_{J})\) (with \(I,J\) disjoint sets of indices) having total variation 1 on \(\{0,1\}^{n}\), there is an \(\epsilon>0\) such that_
\[\int fgd\mu\leq\int fd\mu\int gd\mu-\epsilon.\]
_Consequently, there exist strictly NA distributions on the whole of \({\mathbb{R}}^{n}\) satisfying the above equation (under the same measure \(\mu\) supported on \(\{0,1\}^{n}\) viewed as a subset of \({\mathbb{R}}^{n}\))._
**Proof** Let \(I_{n,1}\) denote the collection of vectors \((x_{1},\ldots,x_{n})\in\{0,1\}^{n}\) such that \(\sum_{i}x_{i}=1\), and let \(\mu\) be any probability distribution supported on \(I_{n,1}\). Thus for some \(\epsilon>0\), we have \(\mu(x)>\sqrt{\epsilon}\) for all \(x\in I_{n,1}\). Now assume that, as stated, \(f(x_{I})\) and \(g(x_{J})\) are non-decreasing functions of total variation 1, with \(I\) and \(J\) disjoint subsets of \(1,\ldots,n\). First note that to check the condition
\[{\rm Cov}_{\mu}(f(x_{I}),g(x_{J}))<0,\]
it suffices to assume that \(f(0,\ldots,0)=0\) and \(g(0,\ldots,0)=0\). Indeed, we may replace \(f(x_{I})\) with \(f(x_{I})-f(0,\ldots,0)\) and \(g(x_{J})\) with \(g(x_{J})-g(0,\ldots,0)\) without changing \({\rm Cov}_{\mu}(f(x_{I}),g(x_{J}))\). It follows that
\[{\bf E}_{\mu}[f(x_{I})g(x_{J})]=0,\]
since for \(x\) in the support of \(\mu\), \(\sum_{i}x_{i}=1\) and so we must have \(x_{I}=(0,\ldots,0)\) or \(x_{J}=(0,\ldots,0)\). On the other hand, since \(f\) and \(g\) are strictly non-decreasing (i.e. non-constant), zero at the zero vector, and of total variation 1, each must equal one at one or more points in the support of our measure, i.e., in \(I_{n,1}\). Thus \({\bf E}_{\mu}[f(x_{I})]>\sqrt{\epsilon}\) and \({\bf E}_{\mu}[g(x_{J})]>\sqrt{\epsilon}\), and so
\[{\bf E}_{\mu}[f(x_{I})g(x_{J})]=0<(\sqrt{\epsilon})^{2}=\epsilon\leq{\bf E}_{ \mu}[f(x_{I})]{\bf E}_{\mu}[g(x_{J})].\]
Therefore \(\mu\) is strictly negatively associated satisfying the lower \(\epsilon\)-bound in the statement of the theorem. We note that since \(\mu\) satisfies this bound as a measure on the cube \(I_{n}\), it also satisfies this bound when viewed as a measure on \(\mathbb{R}^{n}\) (that is concentrated on \(I_{n}\subset\mathbb{R}^{n}\)).
\(\blacksquare\)
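As a numerical illustration of the construction in Lemma 1, the sketch below takes the uniform measure on \(I_{n,1}\) for \(n=4\) together with two simple non-decreasing functions of total variation 1 on disjoint index sets, and confirms that the covariance is strictly negative.

```python
import numpy as np

n = 4
points = np.eye(n)                 # support I_{n,1}: vectors with a single 1
weights = np.full(n, 1.0 / n)      # uniform measure on I_{n,1}

I, J = [0, 1], [2, 3]              # disjoint index sets
f = points[:, I].max(axis=1)       # f(x_I) = max_{i in I} x_i, non-decreasing, f(0)=0
g = points[:, J].max(axis=1)       # g(x_J) = max_{j in J} x_j, non-decreasing, g(0)=0

E_fg = np.sum(weights * f * g)
E_f, E_g = np.sum(weights * f), np.sum(weights * g)
print(E_fg, E_f * E_g)             # 0.0 < 0.25, so Cov_mu(f, g) = -0.25 < 0
```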
We now move to the main results of this section. We begin by considering the weak interior (i.e. interior in the weak topology on measures) of the space of negatively associated distributions on \(\mathbb{R}^{n}\) or on any fixed open subset \(G\subset\mathbb{R}^{n}\). The following Proposition shows that the weak interior of the NA distributions on \(G\) is in fact empty.
**Proposition 1**: _Consider the space of probability distributions supported on a fixed open set \(G\subset\mathbb{R}^{n}\) (or all of \(\mathbb{R}^{n}\)). If \(\mu\) is strictly NA on \(G\), then every weak neighborhood of \(\mu\) contains a non-NA distribution \(\nu\)._
**Proof** Let \(\mu\) be strictly NA and let \(\epsilon\) and continuous bounded \(f_{1},\ldots,f_{k}\) be given. Then \(\nu^{\prime}\) will be in the weak neighborhood \(V_{\mu}(\epsilon;f_{1},\ldots,f_{k})\) if and only if
\[\left|\int f_{i}\,d\mu-\int f_{i}\,d\nu^{\prime}\right|\leq\epsilon\]
for every \(i=1\ldots,k\). We construct a non-negatively associated distribution \(\nu^{\prime}\) in the weak neighborhood \(V_{\mu}(\epsilon;f_{1},\ldots,f_{k})\). This is done by way of a discrete distribution \(\nu^{\prime}\) satisfying \(\int f_{i}d\mu=\int f_{i}d\nu^{\prime}\) for each \(i=1\ldots k\), but which itself is not negatively associated.
Let \(\nu\) be a positively associated distribution on \(G\) such that
\[\int f_{k+1}f_{k+2}\,d\nu-\int f_{k+1}\,d\nu\int f_{k+2}\,d\nu>0,\]
for some non-decreasing \(f_{k+1},f_{k+2}\), and append the latter two functions to the above sequence, yielding \(f_{1},\ldots,f_{k+2}\). (See [7] for existence theorems and examples of positively associated measures.) Assume without loss of generality that \(f_{1},\ldots,f_{k+1},f_{k+2}\), \(f_{k+1}f_{k+2}\) are linearly independent. For each \(x\) consider the vector
\[\mathbf{f}_{x}=(f_{1}(x),\ldots,f_{k}(x),f_{k+1}(x),f_{k+2}(x),f_{k+1}(x)f_{k+ 2}(x))\in\mathbb{R}^{k+3},\]
with the last entry a product of \(f_{k+1}\) and \(f_{k+2}\). The collection \(\{\mathbf{f}_{x}\}_{x\in G}\) spans \(\mathbb{R}^{k+3}\). Thus we can find \(\alpha_{1},\ldots,\alpha_{k+3}\) and \(x_{1},\ldots,x_{k+3}\) so that
\[(\mu(f_{1}),\ldots,\mu(f_{k}),\nu(f_{k+1}),\nu(f_{k+2}),\nu(f_{k+1}f_{k+2}))= \sum_{j=1}^{k+3}\alpha_{j}\mathbf{f}_{x_{j}}\]
where \(\mu(f_{i}):=\int f_{i}\,d\mu\). Therefore
\[\mu(f_{i})=\sum_{j=1}^{k+3}\alpha_{j}f_{i}(x_{j})\]
for each \(i=1,\ldots,k\),
\[\nu(f_{k+1})=\sum_{j=1}^{k+3}\alpha_{j}f_{k+1}(x_{j}),\]
\[\nu(f_{k+2})=\sum_{j=1}^{k+3}\alpha_{j}f_{k+2}(x_{j}),\]
and
\[\nu(f_{k+1}f_{k+2})=\sum_{j=1}^{k+3}\alpha_{j}f_{k+1}(x_{j})f_{k+2}(x_{j}).\]
So the discrete distribution \(\nu^{\prime}=\sum_{j=1}^{k+3}\alpha_{j}\delta_{x_{j}}\) is in the weak neighborhood \(V_{\mu}(\epsilon;f_{1},\ldots,f_{k})\), but it is not negatively associated.
The study of the interior of the set of negatively associated distributions is complicated by the fact that the covariance condition for negative association must be checked on infinitely many functions (in order to establish negative association for a single measure). This situation can however be avoided when the distribution is supported on a finite subset of \(\mathbb{R}^{n}\) (by a compactness argument in section 2.2 below). On a finite product probability space \(X^{n}\), the space of probability distributions is finite dimensional, and we may conclude as will be done in section 2.2 that the interior of the collection of NA distributions is non-empty. Note that since the set of distributions on a finite space is finite dimensional, the two topologies (weak and total variation) discussed here coincide.
On the other hand, a probability measure is negatively correlated (on \(\mathbb{R}^{n}\)) if finitely many covariance conditions are satisfied (1). Because of this, we may prove that the weak interior of the class of NC distributions is non-empty in the space of distributions supported on a fixed compact set \(X\subset\mathbb{R}^{n}\).
**Proposition 2**: _Let \(X\subset\mathbb{R}^{n}\) be a compact subset. Then \(\mathcal{M}_{NC}(X)\subset\mathcal{M}(\mathbb{R}^{n})\) has a non-empty interior in the weak topology._
**Proof** It suffices to show that \(\int x_{i}x_{j}d\mu-\int x_{i}d\mu\int x_{j}d\mu\) is continuous in \(\mu\), which follows from the fact that the functions \(f_{ij}(x)=x_{i}x_{j}\), \(1\leq i,j\leq n\), and \(f_{k}(x)=x_{k}\), \(1\leq k\leq n\) are bounded.
This result however does not hold when we consider the class of negatively correlated distributions supported on all of \(\mathbb{R}^{n}\).
**Proposition 3**: _The collection of negatively correlated distributions on \(\mathbb{R}^{n}\) has no interior in the total variation topology (and hence in the weak topology)._
**Proof** Let \(\mu\) be a negatively correlated distribution. Consider the distribution \(\nu_{c}=\frac{1}{2}(\delta_{-c1}+\delta_{c1})\) (a sum of point masses at two points), where \(c>0\) is large and \(1=(1,1,\ldots,1)\). We note that \(\nu_{c}\) has total variation 1. We claim that for any neighborhood \(V\) of \(\mu\) (in the TV metric), there is a distribution in \(V\) that is positively correlated, of the form \(\mu_{\alpha,c}=\alpha\mu+(1-\alpha)\nu_{c}\), for some \(\alpha\in(0,1)\) close to 1 and \(c>0\).
The idea here is that the distribution \(\nu_{c}\) has a positive correlation that is arbitrarily large as \(c\) becomes large, so that adding only a small multiple \((1-\alpha)v_{c}\) (if \(c\) is large) will cause a distribution to become positively correlated.
Note first that
\[\|\mu-\mu_{\alpha,c}\|_{TV}=\|(1-\alpha)\mu+(\alpha-1)\nu_{c}\|_{TV}\]
\[\leq(1-\alpha)\|\mu\|_{TV}+(1-\alpha)\|\nu_{c}\|_{TV}=2(1-\alpha)\]
which is arbitrarily small for \(\alpha\) close to \(1\) (uniformly in \(c\)). Hence, uniformly in \(c\), the measure \(\mu_{\alpha,c}\) is in \(V\) for \(\alpha\) sufficiently close to \(1\), which we assume is the case. However for this value of \(\alpha\), we now allow \(c\) to grow larger. Note that the covariance
\[\int x_{i}x_{j}d\mu_{\alpha,c}=\int x_{i}x_{j}d[\alpha\mu+(1-\alpha)\nu_{c}]\]
\[=\alpha\int x_{i}x_{j}d\mu+\frac{1}{2}(1-\alpha)\int x_{i}x_{j}d[\delta_{-c \mathbf{1}}+\delta_{c\mathbf{1}}]\]
\[=\alpha\int x_{i}x_{j}d\mu+\frac{1}{2}(1-\alpha)[c^{2}+c^{2}],\]
which grows like \((1-\alpha)c^{2}\) as \(c\to\infty\). Since \(\int x_{i}\,d\mu_{\alpha,c}=\alpha\int x_{i}\,d\mu\) does not depend on \(c\), the covariance \(\operatorname{Cov}_{\mu_{\alpha,c}}(x_{i},x_{j})\) is positive for sufficiently large \(c\). Thus any TV neighborhood \(V\) of \(\mu\) has a positively correlated distribution in it.
\(\blacksquare\)
As a consequence of Proposition 3 we have
**Proposition 4**: _The space \(\mathcal{M}_{NA}\) on all of \(\mathbb{R}^{n}\) has an empty interior with respect to both the TV and weak topologies._
However, on a compact set \(X\subset\mathbb{R}^{n}\), the space \(\mathcal{M}_{NA}(X)\) has a non-empty TV interior, as is shown here:
**Proposition 5**: _Let \(X\subset\mathbb{R}^{n}\) be a compact subset. Then \(\mathcal{M}_{NA}(X)\) has a non-empty interior with respect to the total variation metric._
**Proof** It is not hard to show that if \(\|\mu-\nu\|_{TV}<\epsilon\), then \(|\mathrm{Cov}_{\mu}(f,g)-\mathrm{Cov}_{\nu}(f,g)|\leq 3\epsilon\) for every \(f,g\) satisfying \(\|f\|_{\infty},\|g\|_{\infty}\leq 1\).
According to Lemma 1, let \(\mu\) be strictly NA so that \(\mathrm{Cov}_{\mu}(f,g)<-\epsilon\) for every strictly non-decreasing \(f(x_{I}),g(x_{J})\) of total variation \(1\) defined on disjoint index subsets \(I,J\subset\{1,\ldots,n\}\), and some \(\epsilon>0\). Choose \(\mu^{\prime}\) such that \(\|\mu-\mu^{\prime}\|_{TV}<\epsilon/6\). Then
\[\mathrm{Cov}_{\mu^{\prime}}(f,g)<\epsilon/2+\mathrm{Cov}_{\mu}(f,g)<-\epsilon /2<0. \tag{10}\]
For general \(f\) and \(g\) (not of total variation \(1\)) (10) holds by multiplying these by constants, without changing the negative covariance. This completes the proof.
\(\blacksquare\)
For a compact \(X\), since \(\mathcal{M}_{NA}(X)\subset\mathcal{M}_{NC}(X)\), it immediately follows that the interior of the collection of NC distributions is non-empty in the total variation topology (this also follows from the fact that it is non-empty in the weak topology):
**Corollary 1**: _For a compact \(X\subset\mathbb{R}^{n}\), \(\mathcal{M}_{NC}(X)\) has a non-empty interior with respect to the total variation metric._
### \(\mathcal{M}_{NC}\) and \(\mathcal{M}_{NA}\) on the Boolean Cube
We reformulate the conditions for negative correlation and negative association on the Boolean cube \(I_{n}=\{0,1\}^{n}\) as polynomial inequalities. We consider a compactness argument in the case of strict negative association in order to get a handle on the infinitely many conditions contained in definition (3). Restricting these measures to the Boolean cube affords great flexibility due to the topological properties of both the space of probability measures and the space of continuous functions on said cube.
Denote by \(\mu^{(i)}\), \(i=1,\ldots,n\), and \(\mu^{(i,j)}\), \(1\leq i,j\leq n\), respectively, the one and two-dimensional marginals of \(\mu\). On the Boolean cube \(I_{n}=\{0,1\}^{n}\) we have \(\mathbf{E}x_{i}=\mu^{(i)}(1)\), for each \(i=1,\ldots,n\), and \(\mathbf{E}x_{i}x_{j}=\mu^{(i,j)}(1,1)\), for each \(1\leq i,j\leq n\). The condition for negative correlation therefore reduces to,
\[\mu^{(i,j)}(1,1)\leq\mu^{(i)}(1)\mu^{(j)}(1),\ \ \forall 1\leq i,j\leq n. \tag{11}\]
Any probability measure \(\mu\) on the Boolean cube is uniquely determined by a vector of length \(2^{n}\), \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{2^{n}})\), such that \(\sum_{i}\mu_{i}=1\). Thus equation (11) may be written in the form
\[p_{\mu}(\mu_{1},\ldots,\mu_{2^{n}})\leq 0, \tag{12}\]
where \(p_{\mu}\) is a polynomial in \(\mu_{1},\ldots,\mu_{2^{n}}\).
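In these coordinates, checking condition (11) amounts to evaluating finitely many polynomial inequalities in \((\mu_{1},\ldots,\mu_{2^{n}})\); a short sketch, with the cube enumerated in a fixed order:

```python
import numpy as np
from itertools import product

def nc_on_cube(mu):
    """Check condition (11) for a measure on {0,1}^n given as a length-2^n
    vector of weights, indexed in the order produced by itertools.product."""
    n = int(np.log2(len(mu)))
    cube = np.array(list(product((0, 1), repeat=n)), dtype=float)
    mu = np.asarray(mu, dtype=float)
    marg1 = mu @ cube                        # mu^(i)(1)
    marg2 = cube.T @ (cube * mu[:, None])    # mu^(i,j)(1,1)
    lhs = marg2 - np.outer(marg1, marg1)
    return bool(np.all(lhs[~np.eye(n, dtype=bool)] <= 0))

# The uniform (product) measure on {0,1}^3 satisfies (11) with equality:
print(nc_on_cube(np.full(8, 1 / 8)))  # True
```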
Now consider definition (3) of negative association, specialized to the Boolean cube. Denote by \(I\) and \(J\) disjoint subsets of indices in \(\{1,\ldots,n\}\), and by \(x_{I},x_{J}\) vectors restricted to the indices of \(I\) and \(J\) respectively. Further we let \(\mu^{(I)},\mu^{(J)}\) denote the respective marginal distributions as defined in (2). Then the condition for negative association may be written
\[\sum_{x_{I},x_{J}}f(x_{I})g(x_{J})\mu(x_{I},x_{J})\leq\sum_{x_{I},x_{J}}f(x_{I })g(x_{J})\mu^{(I)}(x_{I})\mu^{(J)}(x_{J})\]
or
\[\sum_{x_{I},x_{J}}f(x_{I})g(x_{J})\big{(}\mu(x_{I},x_{J})-\mu^{(I)}(x_{I})\mu^ {(J)}(x_{J})\big{)}\leq 0. \tag{13}\]
As in (12), equation (13) may be re-formulated as
\[p_{f,g}(\mu_{1},\ldots,\mu_{2^{n}})\leq 0, \tag{14}\]
where \(p_{f,g}(\mu_{1},\ldots,\mu_{2^{n}})\) is a polynomial in \(\mu_{1},\ldots,\mu_{2^{n}}\) dependent on \(f\) and \(g\).
Equation (11) must hold for \(1\leq i,j\leq n\), which is a finite number of constraints. Equation (13) must hold for every non-decreasing \(f\) and \(g\), and disjoint index sets \(I\) and \(J\) - an infinite number of constraints. Let us however restrict our attention to the set of all _strictly_ NA distributions, i.e. those \(\mu\) for which strict inequality holds in (13):
\[\mathrm{Cov}_{\mu}(f(x_{I}),g(x_{J}))<0,\]
for all monotone \(f\) and \(g\). Multiplying (13) by a constant, we may assume such \(f\) and \(g\) are uniformly bounded. Note also that every function \(f:\{0,1\}^{n}\to\mathbb{R}\) is a polynomial of bounded degree. Consider the space of non-decreasing, uniformly bounded polynomials of degree at most \(n\) (in the space of continuous functions \(C(\{0,1\}^{n})\) equipped with the \(\infty\)-norm). Such spaces of polynomials of a finite degree compose a finite dimensional space. Moreover the supremum norm on this space is equivalent to the supremum norm on the
coefficients, which gives the equicontinuity of this family. Thus, by the Arzela-Ascoli theorem, this space is compact in the \(\infty\)-norm. Combining this with the fact that the covariance operator is continuous, there exist \(\epsilon>0\) and finitely many \(f_{1},\ldots,f_{m}\) and \(g_{1},\ldots,g_{m}\) such that
\[\mbox{Cov}_{\mu}(f(x_{I}),g(x_{J}))<0\]
\(\forall\ f,g\) non-decreasing, if
\[\mbox{Cov}_{\mu}(f_{i}(x_{I}),g_{i}(x_{J}))<-\epsilon\]
\(\forall\ i=1,\ldots,m\), or in the language of (14),
\[p_{f_{i},g_{i}}(\mu_{1},\ldots,\mu_{2^{n}})<-\epsilon,\ \ \ \ i=1,\ldots,m. \tag{15}\]
From the viewpoint of (11) and (15), the conditions of strict negative correlation and strict negative association on the Boolean cube are continuous in the parameters \(\mu_{1},\ldots,\mu_{2^{n}}\) of a given distribution \(\mu\). That is, the condition will still be satisfied under small perturbations of \(\mu_{1},\ldots,\mu_{2^{n}}\).
Of course, the space of probability measures on the Boolean cube is a finite dimensional space, and all Hausdorff vector topologies on a finite dimensional space are equivalent. Thus one may define basic open sets by (8) or by (9). Or, equivalently, one may choose the Euclidean topology induced by the coordinate system \(\mu=(\mu_{1},\ldots,\mu_{2^{n}})\). As the conditions defining both the class of NC and NA distributions are continuous in this Euclidean topology, we moreover obtain,
**Theorem 1**: _Let \({\cal M}_{NC}\) and \({\cal M}_{NA}\) denote the spaces of NC and NA distributions on the Boolean cube. We have_
\[\partial{\cal M}_{NC}\subset\{\mu\in{\cal M}_{NC}:\mu^{(i,j)}(1,1)=\mu^{(i)}(1 )\mu^{(j)}(1)\mbox{ for some }i,j\}\]
_and_
\[\partial{\cal M}_{NA}\subset\{\mu\in{\cal M}_{NA}:\exists f,g\mbox{ non-constant, non-decreasing, }\mbox{Cov}_{\mu}(f(x_{I}),g(x_{J}))=0\},\]
_where \(I\) and \(J\) are disjoint subsets of \(\{1,\ldots,n\}\). Moreover, the interior of \({\cal M}_{NC}\) and the interior of \({\cal M}_{NA}\) are non-empty._
### Convexity and Connectedness
We further our study of the topological properties of the spaces of negatively associated and negatively correlated distributions by considering properties of convexity and connectedness. We consider such questions on both the Boolean cube and on all of \(\mathbb{R}^{n}\).
#### 2.3.1 Convexity Properties of the Space of Negatively Associated Distributions
**Theorem 2**: _The space of negatively associated distributions is not convex on \(\mathbb{R}^{n}\)._
**Proof** We consider strictly negatively associated distributions \(\mu\) and \(\nu\), which exist by Lemma 1. We show that there exist increasing functions \(f,g\) and a \(\lambda\in(0,1)\) for which the condition for negative association fails under the measure \(\lambda\mu+(1-\lambda)\nu\).
We begin with a general algebraic manipulation. Given increasing \(f\) and \(g\) defined on disjoint index sets, there exist \(\epsilon_{1}\) and \(\epsilon_{2}\) such that
\[\int fg\,d\mu=\int f\,d\mu\int g\,d\mu-\epsilon_{1}\]
\[\int fg\,d\nu=\int f\,d\nu\int g\,d\nu-\epsilon_{2}.\]
Setting \(A:=\int f\,d\mu\), \(B:=\int f\,d\nu\), \(C:=\int g\,d\mu\), and \(D:=\int g\,d\nu\), it follows that
\[\int fg\,d(\lambda\mu+(1-\lambda)\nu) = \lambda\int fg\,d\mu+(1-\lambda)\int fg\,d\nu\] \[= \lambda\int f\,d\mu\int g\,d\mu+(1-\lambda)\int f\,d\nu\int g\,d \nu-(\lambda\epsilon_{1}+(1-\lambda)\epsilon_{2})\] \[= \lambda AC+(1-\lambda)BD-(\lambda\epsilon_{1}+(1-\lambda)\epsilon _{2}).\]
Further we have
\[\int f\,d(\lambda\mu+(1-\lambda)\nu)\int g\,d(\lambda\mu+(1- \lambda)\nu) = (\lambda A+(1-\lambda)B)(\lambda C+(1-\lambda)D)\] \[= \lambda^{2}AC+\lambda(1-\lambda)AD+\lambda(1-\lambda)BC+(1- \lambda)^{2}BD.\]
The condition for convexity therefore becomes,
\[\lambda AC+(1-\lambda)BD-(\lambda\epsilon_{1}+(1-\lambda)\epsilon_{2})\leq \lambda^{2}AC+\lambda(1-\lambda)AD+\lambda(1-\lambda)BC+(1-\lambda)^{2}BD\]
for \(0\leq\lambda\leq 1\). Simplifying, we obtain
\[\lambda^{2}(A-B)(C-D)-\lambda(A-B)(C-D)\geq-(\lambda\epsilon_{1}+(1-\lambda) \epsilon_{2}).\]
Upon setting \(\tilde{C}=(A-B)(C-D)\) this becomes
\[\tilde{C}\lambda^{2}-\tilde{C}\lambda\geq-(\lambda\epsilon_{1}+(1-\lambda) \epsilon_{2}). \tag{16}\]
Thus we must show that (16) fails for certain increasing \(f,g\) and \(\lambda\in(0,1)\). If \(\tilde{C}<0\), then the quadratic \(\tilde{C}\lambda^{2}-\tilde{C}\lambda=\tilde{C}\lambda(\lambda-1)\) is non-negative for all \(0\leq\lambda\leq 1\), thus satisfying (16). However, we claim that if \(\tilde{C}>0\), then (16) will not hold for certain \(\lambda\in(0,1)\), as is shown below (this would complete the proof of non-convexity).
To this end, we will need to guarantee \(\tilde{C}>0\); this can be accomplished by choosing \(\mu\) and \(\nu\) in such a way that \(A=\int fd\mu>\int fd\nu=B\), and \(C=\int gd\mu>\int gd\nu=D\). Given real numbers \(p_{1},\dots,p_{n}\) we can translate the mean of a probability measure \(\mu\) in each variable by \(p_{i}\) without changing the covariance structure of \(\mu\). Specifically, map \(\mu\mapsto\mu\circ T^{-1}\), where the transformation \(T:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is defined by
\[T(x_{1},\dots,x_{n})=(x_{i}+p_{i})_{i=1,\dots,n}=(y_{i})_{i=1,\dots,n}.\]
By the change of variables formula, for any integrable \(f\),
\[\int_{\mathbb{R}^{n}}f(y)\,d(\mu\circ T^{-1})(y)=\int_{\mathbb{R}_{n}}f(Tx)\,d \mu(x)\]
In particular,
\[\int_{\mathbb{R}^{n}}y_{i}\,d(\mu\circ T^{-1})(y)=\int_{\mathbb{R}^{n}}x_{i} \,d\mu(x)+p_{i}.\]
Thus \(\operatorname{Cov}_{\mu\circ T^{-1}}(y_{i},y_{j})=\operatorname{Cov}_{\mu}(x_{ i},x_{j})\). What's more, since \(T\) is mere translation, it preserves the product ordering on \(\mathbb{R}^{n}\). That is \(x\geq y\) if and only if \(Tx\geq Ty\). Therefore a function
\(f\) is non-decreasing on \(\mathbb{R}^{n}\) if and only if \(f\circ T\) is non-decreasing on \(\mathbb{R}^{n}\), whence \(\mu\) is negatively associated if and only if \(\mu\circ T^{-1}\) is negatively associated. We may thus assume that \(A=\int fd\mu>\int fd\nu=B\), and \(C=\int gd\mu>\int gd\nu=D\). Thus \(\tilde{C}>0\).
In this case the quadratic \(\tilde{C}\lambda^{2}-\tilde{C}\lambda\) is bounded above by \(0\) for all \(0\leq\lambda\leq 1\), and its minimum value is attained at \(\lambda=1/2\). If convexity is to hold, then (16) must be valid when \(\lambda=1/2\). Setting \(\lambda=1/2\) in (16) we obtain
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2}).\]
This is evidently false if \(\epsilon_{1}\) and \(\epsilon_{2}\) can be made arbitrarily small independent of \(\tilde{C}\). We have constructed \(\tilde{C}\) via a translation operator \(T\) which is independent of \(\epsilon_{1}\) and \(\epsilon_{2}\), thus it suffices to show that these \(\epsilon\)-quantities can be made arbitrarily small.
Following the proof of Lemma 1, we describe strictly negatively associated distributions \(\mu\) and \(\nu\). We consider once again the inequality
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2})\]
relative to the context at hand. It suffices to verify that for certain \(f,g\) and measures \(\mu,\nu\), said inequality fails. As in Lemma 1, we may assume that \(E_{\mu}fg=0\) and \(E_{\nu}fg=0\). It then suffices to verify that \(\epsilon_{1}\) and \(\epsilon_{2}\) can be made small. These quantities describe the level of negative association of the respective measures, for given functions \(f\) and \(g\). As said measures are supported on the standard basis vectors, \(\epsilon_{1}\) and \(\epsilon_{2}\) take the form
\[\left(\sum_{k}\beta_{k}f(\alpha_{k})\right)\left(\sum_{j}\beta_{j}g(\alpha_{j})\right)-0,\]
where \(0\) is the value of \(Efg\) and \(Ef=\sum_{i}\beta_{i}f(\alpha_{i})\), relative to the measure \(\mu=\sum_{i}\beta_{i}\delta_{\alpha_{i}}\); the analogous expression under \(\nu\) gives \(\epsilon_{2}\). As \(f\) and \(g\) are arbitrary, we may arrange that \(\tilde{C}\) is positive. We need only guarantee that \(f\) and \(g\) are non-decreasing, while retaining \(\tilde{C}>\epsilon\) for some fixed \(\epsilon>0\). In doing so, we let \(Ef\) and \(Eg\) approach \(0\) from above, thereby attaining arbitrarily small values for \(\epsilon_{1}\) and \(\epsilon_{2}\), and contradicting the inequality
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2}),\]
as required. This shows that (16) fails for \(\lambda=1/2\), whereby convexity is violated for the corresponding mixture. Thus it is shown that the space of negatively associated distributions on \(\mathbb{R}^{n}\) is non-convex.
\(\blacksquare\)
#### 2.3.2 Non-Convexity of \(\mathcal{M}_{NC}(\mathbb{R}^{n})\)
Define the sets
\[E_{p_{1},\ldots,p_{n}}:=\{\mu\in\mathcal{M}_{NC}(\mathbb{R}^{n}):\mathbf{E}_{ \mu}x_{i}=p_{i},i=1,\ldots,n\}. \tag{17}\]
**Corollary 2**: _The space of negatively correlated distributions is not convex on \(\mathbb{R}^{n}\). However, for any fixed \(p_{1},\ldots,p_{n}\in\mathbb{R}\), the collection of measures_
\[E_{p_{1},\ldots,p_{n}}=\{\mu\in\mathcal{M}_{NC}(\mathbb{R}^{n}):\mathbf{E}_{\mu }x_{i}=p_{i},\;i=1,\ldots,n\}\]
_is convex._
**Proof** The proof of non-convexity follows as in the proof of Theorem 2. Specifically, given strictly negatively correlated distributions \(\mu\) and \(\nu\), fix \(i,j\) and note that
\[\int x_{i}x_{j}\,d\mu=\int x_{i}\,d\mu\int x_{j}\,d\mu-\epsilon_{1}\]
for some \(\epsilon_{1}>0\), and
\[\int x_{i}x_{j}\,d\nu=\int x_{i}\,d\nu\int x_{j}\,d\nu-\epsilon_{2}\]
for some \(\epsilon_{2}>0\). Set \(A=\int x_{i}\,d\mu\), \(B=\int x_{i}\,d\nu\), \(C=\int x_{j}\,d\mu\), and \(D=\int x_{j}\,d\nu\). Then if convexity is to hold, we once again must have
\[\lambda AC+(1-\lambda)BD-(\lambda\epsilon_{1}+(1-\lambda)\epsilon_{2})\leq \lambda^{2}AC+\lambda(1-\lambda)AD+\lambda(1-\lambda)BC+(1-\lambda)^{2}BD\]
for \(0\leq\lambda\leq 1.\) Now with \(\tilde{C}=(A-B)(C-D)\) this simplifies to
\[\tilde{C}\lambda^{2}-\tilde{C}\lambda\geq-(\lambda\epsilon_{1}+(1-\lambda) \epsilon_{2}). \tag{18}\]
If we can show that there exist NC distributions \(\mu\) and \(\nu\) such that \(\tilde{C}>0\), then upon setting \(\lambda=1/2\) in (18), we will arrive at the condition
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2})\]
which will fail for small enough \(\epsilon_{1},\epsilon_{2}\), if we can make \(\tilde{C}\) large enough independent of \(\epsilon_{1},\epsilon_{2}\).
Thus we must show that there exist NC \(\mu\) and \(\nu\) such that \(\tilde{C}>0\), and such that \(\epsilon_{1}\) and \(\epsilon_{2}\) are sufficiently small. That is \(\int x_{i}\,d\mu>\int x_{i}\,d\nu\) and \(\int x_{j}\,d\mu>\int x_{j}\,d\nu\), for \(i\neq j\). Given real numbers \(p_{1},\ldots,p_{n}\) we can translate the mean of a probability measure \(\mu\) in each variable by \(p_{i}\) without changing the covariance structure of \(\mu\). Specifically, map \(\mu\mapsto\mu\circ T^{-1}\), where the transformation \(T:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is defined by
\[T(x_{1},\ldots,x_{n})=(x_{i}+p_{i})_{i=1,\ldots,n}=(y_{i})_{i=1,\ldots,n}.\]
By the change of variables formula, for any integrable \(f\),
\[\int_{\mathbb{R}^{n}}f(y)\,d(\mu\circ T^{-1})(y)=\int_{\mathbb{R}_{n}}f(Tx)\, d\mu(x)\]
In particular,
\[\int_{\mathbb{R}^{n}}y_{i}\,d(\mu\circ T^{-1})(y)=\int_{\mathbb{R}^{n}}x_{i}\, d\mu(x)+p_{i}.\]
Thus \(\operatorname{Cov}_{\mu\circ T^{-1}}(y_{i},y_{j})=\operatorname{Cov}_{\mu}(x_{i},x_{j})\). Thus given NC distributions \(\mu\) and \(\nu\), we may always translate \(\mu\) until its mean values in each coordinate, i.e. \(\int x_{i}\,d\mu(x)\), dominate the mean values of \(\nu\) in each coordinate. This does not change the covariance structure of \(\mu\), and therefore preserves negative correlation, and in particular \(\epsilon_{1}\) and \(\epsilon_{2}\).
We now prove that the collection
\[E_{p_{1},\ldots,p_{n}}=\{\mu\in{\cal M}_{NC}(\mathbb{R}^{n}):{\bf E}_{\mu}x_{i}=p_ {i},\ i=1,\ldots,n\}\]
is convex for each fixed \(p_{1},\ldots,p_{n}\in\mathbb{R}\). This follows from equation (18). Indeed if \(\tilde{C}=0\) then certainly (18) holds for each \(0\leq\lambda\leq 1\), and therefore the collection of NC distributions which satisfy \(\tilde{C}=0\) for each \(1\leq i<j\leq n\) will be convex. We have \(\tilde{C}=0\) whenever \(\int x_{i}\,d\mu=\int x_{i}\,d\nu\), and therefore if \(\int x_{i}\,d\mu=\int x_{i}\,d\nu=p_{i}\) for each \(i=1,\ldots,n\), then their convex combination \(\lambda\mu+(1-\lambda)\nu\) will be NC for each \(0\leq\lambda\leq 1\). The result follows.
\(\blacksquare\)
#### 2.3.3 Non-Convexity of \({\cal M}_{NC}(I_{n})\)
**Lemma 2**: _For any \(0<\epsilon<1\) and \(1\leq i\leq n\), there exists a negatively correlated distribution on the Boolean cube \(I_{n}=\{0,1\}^{n}\) satisfying \(\mu^{(i)}(1)=\epsilon\). In fact, given \(0<\epsilon_{i}<1\), \(i=1,\ldots,n\), satisfying \(\sum_{i}\epsilon_{i}=1\), there exists a negatively correlated distribution on the Boolean cube satisfying \(\mu^{(i)}(1)=\epsilon_{i}\), \(i=1,\ldots,n\)._
**Proof** Given \(1\leq i\leq n\), define a distribution as follows:
\[\mu_{i,\epsilon}:=\epsilon\delta_{\alpha_{i}}+(1-\epsilon)\delta_{\alpha_{j}},\]
where \(i\neq j\) and \(\alpha_{i}=(0,\ldots,1,\ldots,0)\) is the vector with a single \(1\) in the \(i\)th component (likewise for \(\alpha_{j}\)). We see that \(\mu^{(i)}_{i,\epsilon}(1)=\epsilon\). What's more, we have
\[{\bf E}_{\mu_{i,\epsilon}}x_{k}x_{\ell}=\mu^{(k,\ell)}_{i,\epsilon}(1,1)=0\]
for all \(1\leq k,\ell\leq n\), and thus
\[{\bf E}_{\mu_{i,\epsilon}}x_{k}x_{\ell}-{\bf E}_{\mu_{i,\epsilon}}x_{k}{\bf E }_{\mu_{i,\epsilon}}x_{\ell}\leq 0.\]
The first result follows.
Define a measure \(\mu\) as the convex combination of point masses centered at each \(\alpha_{i}=(0,\ldots,1,\ldots,0)\), \(i=1,\ldots,n\) (where \(\alpha_{i}\) has exactly one \(1\) in the \(i\)th coordinate, as above):
\[\mu:=\sum_{i}\epsilon_{i}\delta_{\alpha_{i}}.\]
Evidently, for each \(j=1,\ldots,n\), \(\mu^{(j)}(1)=\epsilon_{j}\), and for each \(i\neq j\)\(\mu^{(i,j)}(1,1)=0\). Thus \(\mu\) is strictly negatively correlated:
\[\mu^{(i,j)}(1,1)-\mu^{(i)}(1)\mu^{(j)}(1)=-\epsilon_{i}\epsilon_{j}.\]
Further \(\mu\) satisfies the requirements on the one-dimensional marginals. The result follows.
**Corollary 3**: _The space of negatively correlated distributions on the Boolean cube \(I_{n}\) is non-convex. However, for any fixed \(p_{1},\ldots,p_{n}\in\mathbb{R}\), the collection of measures_
\[E_{p_{1},\ldots,p_{n}}=\{\mu\in\mathcal{M}_{NC}(I_{n}):\mu^{(i)}(1)=p_{i},\;i=1,\ldots,n\}\]
_is convex._
**Proof** We begin with strictly negatively correlated distributions \(\mu\) and \(\nu\), which exist by Lemma 1. We derive conditions under which the convex combination \(\lambda\mu+(1-\lambda)\nu\) fails to be negatively correlated. We then produce strictly negatively correlated distributions whose convex combination fails to satisfy the above-mentioned condition.
Note that \(\mathbf{E}_{\mu}x_{i}=\mu^{(i)}(1)\) and \(\mathbf{E}_{\mu}x_{i}x_{j}=\mu^{(i,j)}(1,1)\) on the Boolean cube. Thus given strictly negatively correlated distributions \(\mu\) and \(\nu\), fix \(i,j\) and note that
\[\mu^{(i,j)}(1,1)=\mu^{(i)}(1)\mu^{(j)}(1)-\epsilon_{1}\]
for some \(\epsilon_{1}>0\), and
\[\nu^{(i,j)}(1,1)=\nu^{(i)}(1)\nu^{(j)}(1)-\epsilon_{2}\]
for some \(\epsilon_{2}>0\). Set \(A=\mu^{(i)}(1)\), \(B=\nu^{(i)}(1)\), \(C=\mu^{(j)}(1)\), and \(D=\nu^{(j)}(1)\). Then if convexity is to hold, we once again must have (see proof of Theorem 2 above, where the roles of \(f\) and \(g\) are played by \(x_{i}\) and \(x_{j}\))
\[\lambda AC+(1-\lambda)BD-(\lambda\epsilon_{1}+(1-\lambda)\epsilon_{2})\leq \lambda^{2}AC+\lambda(1-\lambda)AD+\lambda(1-\lambda)BC+(1-\lambda)^{2}BD\]
for \(0\leq\lambda\leq 1.\) Now with \(\tilde{C}=(A-B)(C-D)\) this simplifies to
\[\tilde{C}\lambda^{2}-\tilde{C}\lambda\geq-(\lambda\epsilon_{1}+(1-\lambda) \epsilon_{2}).\]
As this holds for \(\tilde{C}=0\), the second statement of the Corollary holds.
Now set \(\lambda=1/2\). We arrive at the condition
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2}).\]
Our strategy is as follows. We introduce measures \(\mu\) and \(\nu\) as convex combinations of point masses at the standard basis vectors. We demonstrate that the inequality
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2})\]
is valid. We then define a perturbation of this measure under which said inequality is violated, whereby we obtain a convex combination of measures which fail to be negatively correlated.
According to Lemma 2, there exist strictly negatively correlated distributions with \(\tilde{C}>0\). Specifically, define
\[\mu=\sum_{k}\beta_{k}\delta_{\alpha_{k}},\]
and further
\[\nu=\sum_{k}\beta^{\prime}_{k}\delta_{\alpha_{k}}\]
where \(\alpha_{k}=(0,\ldots,1,\ldots,0)\) has exactly one \(1\) in the \(k\)th component, and \(\sum_{k}\beta_{k}=1\), \(\sum_{k}\beta_{k}^{\prime}=1\). Then
\[\tilde{C}=(\beta_{i}-\beta_{i}^{\prime})(\beta_{j}-\beta_{j}^{\prime})\]
where we have used that \(\tilde{C}=(A-B)(C-D)\) with \(A=\mu^{(i)}(1)\), \(B=\nu^{(i)}(1)\), \(C=\mu^{(j)}(1)\), \(D=\nu^{(j)}(1)\), and
\[\epsilon_{1}=\beta_{i}\beta_{j},\ \ \epsilon_{2}=\beta_{i}^{\prime}\beta_{j}^{ \prime}.\]
Thus the condition
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2})\]
becomes
\[\frac{(\beta_{i}-\beta_{i}^{\prime})(\beta_{j}-\beta_{j}^{\prime})}{4}\leq \frac{1}{2}(\beta_{i}\beta_{j}+\beta_{i}^{\prime}\beta_{j}^{\prime}).\]
This inequality reduces to
\[\beta_{i}\beta_{j}+\beta_{i}^{\prime}\beta_{j}^{\prime}+\beta_{i}\beta_{j}^{ \prime}+\beta_{i}^{\prime}\beta_{j}\geq 0,\]
which holds for all non-negative reals.
Since strict negative correlation is continuous in the Euclidean parameters of the distribution, we may perturb the above measure as follows. Define \(\mu\) as above, but perturbed with a small additional weight given to \((1,1,0,\ldots,0)\): \(\mu(1,1,0,\ldots,0)=\epsilon\). Here \(\epsilon>0\) is small enough so that \(\mu\) is still negatively correlated. Compensating for this added weight by decreasing the weights at the remaining basis vectors \(\alpha_{k}\), \(k\geq 3\), by a total of \(\epsilon\) (to preserve normalization) still does not affect negative correlation if \(\epsilon\) is sufficiently small. Then \(\mu^{(1,2)}(1,1)=\epsilon\) and \(\mu^{(i)}(1)=\beta_{i}+\epsilon\) for \(i=1,2\). If we do the same for \(\nu\), with the same \(\epsilon\) perturbation, then \(\tilde{C}\) does not change. Indeed, for \(i=1,j=2\)
\[\tilde{C} = (A-B)(C-D)\] \[= (\beta_{1}+\epsilon-\beta_{1}^{\prime}-\epsilon)(\beta_{2}+ \epsilon-\beta_{2}^{\prime}-\epsilon)\] \[= (\beta_{1}-\beta_{1}^{\prime})(\beta_{2}-\beta_{2}^{\prime}).\]
We may assume that \(\tilde{C}>0\), by virtue of choosing \(\beta_{1}>\beta_{1}^{\prime}\) and \(\beta_{2}>\beta_{2}^{\prime}\). On the other hand, for \(\epsilon>0\) small enough
\[-\epsilon_{1}\equiv\mu^{(1,2)}(1,1)-\mu^{(1)}(1)\mu^{(2)}(1)<0\]
as the quantity
\[\mu^{(1,2)}(1,1)-\mu^{(1)}(1)\mu^{(2)}(1)=\epsilon-(\beta_{1}+\epsilon)(\beta _{2}+\epsilon)\]
is continuous in \(\epsilon\), and approaches \(-\beta_{1}\beta_{2}\) from above as \(\epsilon\to 0\). For \(\beta_{1}\) and \(\beta_{2}\) small enough, the quantity
\[\epsilon-(\beta_{1}+\epsilon)(\beta_{2}+\epsilon)\]
will be positive; letting \(\epsilon\) tend to \(0\) shows that this quantity will pass through zero, towards \(-\beta_{1}\beta_{2}\). This shows that \(\epsilon_{1}\) will move through \(0\), and thus the inequality
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2})\]
will be violated; the consideration of \(\epsilon_{2}\) is analogous.
It follows that the right hand side of the inequality
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2})\]
will decrease. And thus we can, by continuity, decrease the right hand side until the inequality is violated. Thus it is shown that the space of negatively correlated distributions on the Boolean cube is non-convex.
\(\blacksquare\)
#### 2.3.4 Non-Convexity of \({\cal M}_{NA}(I_{n})\)
**Lemma 3**: _For any \(1\leq i\leq n\) and \(0<\epsilon<1\), there exists a negatively associated distribution \(\mu\) on the Boolean cube \(I_{n}=\{0,1\}^{n}\) satisfying \(\mu^{(i)}(1)=\epsilon\)._
**Proof** As in the proof of Lemma 1, any distribution supported on
\[I_{n,1}=\left\{(x_{1},\ldots,x_{n})\in I_{n}:\sum_{j}x_{j}=1\right\}\]
is strictly negatively associated. Such a measure will be of the form
\[\mu=\sum_{{\bf x}_{k}\in I_{n,1}}\alpha_{k}\delta_{{\bf x}_{k}}\]
where \(\sum_{k}\alpha_{k}=1\). Given \(1\leq i\leq n\) and \(0<\epsilon<1\), set \(\alpha_{i}=\epsilon\) and choose the remaining coefficients so that \(\sum_{k}\alpha_{k}=1\); then \(\mu^{(i)}(1)=\epsilon\).
\(\blacksquare\)
**Corollary 4**: _The space of negatively associated distributions on the Boolean cube \(I_{n}\) is non-convex._
**Proof** Following once again the proof of Theorem 2, we set \(A:=\int_{I_{n}}f\,d\mu\), \(B:=\int_{I_{n}}f\,d\nu\), \(C:=\int_{I_{n}}g\,d\mu\), and \(D:=\int_{I_{n}}g\,d\nu\), and obtain the same inequality dictating convexity:
\[\lambda^{2}(A-B)(C-D)-\lambda(A-B)(C-D)\geq-(\lambda\epsilon_{1}+(1-\lambda) \epsilon_{2})\]
for all \(0\leq\lambda\leq 1\). Recall once again that \(\epsilon_{1}\) and \(\epsilon_{2}\) are defined by the initial assumption: Given increasing \(f\) and \(g\) defined on disjoint index sets, there exist \(\epsilon_{1}\) and \(\epsilon_{2}\) such that
\[\int fg\,d\mu=\int f\,d\mu\int g\,d\mu-\epsilon_{1}\]
and
\[\int fg\,d\nu=\int f\,d\nu\int g\,d\nu-\epsilon_{2}.\]
Upon setting \(\tilde{C}=(A-B)(C-D)\) this becomes
\[\tilde{C}\lambda^{2}-\tilde{C}\lambda\geq-(\lambda\epsilon_{1}+(1-\lambda) \epsilon_{2}).\]
This condition is satisfied whenever \(\tilde{C}=0\) or \(\tilde{C}<0\), and fails when \(\tilde{C}>0\) and \(\epsilon_{1},\epsilon_{2}>0\) are small enough.
Following the proof of Lemma 1, we describe strictly negatively associated distributions \(\mu\) and \(\nu\). We consider once again the inequality
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2})\]
relative to the context at hand. It suffices to verify that for certain \(f,g\) and measures \(\mu,\nu\), said inequality is invalid. As in Lemma 1, we may assume that \(E_{\mu}fg=0\) and \(E_{\nu}fg=0\). It then suffices, as in the proof of the previous proposition, to verify that \(\epsilon_{1}\) and \(\epsilon_{2}\) can be made small. These quantities describe the level of negative association of the respective measures, for given functions \(f\) and \(g\). As said measures are supported on the standard basis vectors, \(\epsilon_{1}\) will take the form
\[\left(\sum_{k}\beta_{k}f(\alpha_{k})\right)\left(\sum_{k}\beta_{k}g(\alpha_{k})\right)-0,\]
where \(0\) is the value of \(E_{\mu}fg\) and \(E_{\mu}f=\sum_{k}\beta_{k}f(\alpha_{k})\), relative to the measure \(\mu=\sum_{k}\beta_{k}\delta_{\alpha_{k}}\); the expression for \(\epsilon_{2}\) is analogous, with \(\nu\) and the weights \(\beta^{\prime}_{k}\) in place of \(\mu\) and \(\beta_{k}\). As \(f\) and \(g\) are arbitrary, we may arrange for \(\tilde{C}\) to be positive. We need only guarantee that \(f\) and \(g\) are non-decreasing, while retaining \(\tilde{C}>\epsilon\) for some fixed \(\epsilon>0\). In doing so, we let \(E_{\mu}f\) and \(E_{\mu}g\) (and likewise \(E_{\nu}f\) and \(E_{\nu}g\)) approach \(0\) from above, thereby attaining arbitrarily small values for \(\epsilon_{1}\) and \(\epsilon_{2}\), and contradicting the inequality
\[\frac{\tilde{C}}{4}\leq\frac{1}{2}(\epsilon_{1}+\epsilon_{2}),\]
as required. This shows that (16) fails for \(\lambda=1/2\), whereby convexity is violated for the corresponding distribution. Thus it is shown that the space of negatively associated distributions on the Boolean cube is non-convex.
\(\blacksquare\)
Connectedness Properties of the Spaces of Negatively Correlated and Negatively Associated Distributions
**Theorem 3**: _The space of negatively correlated and the space of negatively associated distributions on the Boolean cube, and on \(\mathbb{R}^{n}\), are path connected in the weak topology._
**Proof** For any negatively associated measure \(\mu\), consider the family of measures \(\mu_{t}\), \(0<t\leq 1\), defined by \(\mu_{t}(A)=\mu(A/t)\) for any set \(A\), where \(A/t\) denotes the set of all points of \(A\) divided by the constant \(t\). Each such \(\mu_{t}\) is again negatively associated, and for \(t=0\) we define \(\mu_{0}\) to be the point mass at the origin. As we scale \(t\) from \(1\) to \(0\), this scaling effectively concentrates the measure \(\mu\) into a point mass at the origin, while preserving negative association in the process. Indeed, for \(A\) a ball away from the origin, \(\mu(A/t)\) approaches zero as \(t\to 0\), since the distance of the set \(A/t\) from the origin approaches infinity as \(t\to 0\). It follows that \(\mu_{t}\) converges weakly to the point mass at the origin. This provides a path connecting any negatively associated distribution to the point mass at \(0\), proving that the family is path connected in the weak topology.
\(\blacksquare\)
|
2310.15281 | UncertaintyPlayground: A Fast and Simplified Python Library for
Uncertainty Estimation | This paper introduces UncertaintyPlayground, a Python library built on
PyTorch and GPyTorch for uncertainty estimation in supervised learning tasks.
The library offers fast training for Gaussian and multi-modal outcome
distributions through Sparse and Variational Gaussian Process Regressions
(SVGPRs) for normally distributed outcomes and Mixed Density Networks (MDN) for
mixed distributions. In addition to model training with various
hyperparameters, UncertaintyPlayground can visualize the prediction intervals
of one or more instances. Due to using tensor operations, the library can be
trained both on CPU and GPU and offers various PyTorch-specific techniques for
speed optimization. The library contains unit tests for each module and ensures
multi-platform continuous integration with GitHub Workflows (online
integration) and Tox (local integration). Finally, the code is documented with
Google-style docstrings and offers a documentation website created with MkDocs
and MkDocStrings. | Ilia Azizi | 2023-10-23T18:36:54Z | http://arxiv.org/abs/2310.15281v1 | # UncertaintyPlayground: A Fast and Simplified Python Library for Uncertainty Estimation
###### Abstract
This paper introduces _UncertaintyPlayground_, a Python library built on PyTorch and GPyTorch for uncertainty estimation in supervised learning tasks. The library offers fast training for Gaussian and multi-modal outcome distributions through Sparse and Variational Gaussian Process Regressions (SVGPRs) for normally distributed outcomes and Mixed Density Networks (MDN) for mixed distributions. In addition to model training with various hyperparameters, UncertaintyPlayground can visualize the prediction intervals of one or more instances. Due to using tensor operations, the library can be trained both on CPU and GPU and offers various PyTorch-specific techniques for speed optimization. The library contains unit tests for each module and ensures multi-platform continuous integration with GitHub Workflows (online integration) and Tox (local integration). Finally, the code is documented with Google-style docstrings and offers a documentation website created with MkDocs and MkDocStrings.
_Keywords:_ Uncertainty Estimation, Python Library, Gaussian Processes, Mixed Density Network
## 1 Introduction
Uncertainty estimation is a critical aspect of machine learning, providing insights into the reliability of predictions. While efficient libraries such as PyTorch [1] and GPyTorch [2] offer great flexibility, they are designed to be relatively low-level and, deliberately, do not offer process abstraction. This paper introduces UncertaintyPlayground1, a Python library that facilitates uncertainty estimation in supervised learning tasks, offering a user-friendly yet performant interface to powerful, parallelized uncertainty estimation techniques.
Footnote 1: [https://unco3892.github.io/UncertaintyPlayground/](https://unco3892.github.io/UncertaintyPlayground/)
The advent of Neural Networks (NNs) and Graphics Processing Units (GPUs) has revolutionized uncertainty estimation. NNs, with their ability to model complex, non-linear relationships, provide a flexible framework for uncertainty estimation. This is further enhanced by Mixed Density Networks (MDNs), which allow for the modeling of complex, multi-modal distributions. The parallel processing capabilities of GPUs expedite the training of these large, complex models, making it feasible to implement computationally intensive techniques such as Variational Inference and Kullback-Leibler (KL) divergence. UncertaintyPlayground leverages these advancements, offering Gaussian Processes (GPs) and MDNs for regression tasks. The library abstracts away the complexities of these techniques, allowing users to focus on their data and tasks. Designed to be user-friendly yet performant, it serves as a valuable tool for researchers and practitioners in machine learning, as well as less technical users and experts in statistics, economics, and bioinformatics.
The rest of the paper is organized as follows: Section 2 briefly covers the research question, necessary theoretical foundations, existing literature, and similar libraries for uncertainty estimation. Section 3 discusses the library's algorithm and workflow, including modules for model definition, training, and plotting prediction distributions. This section also discusses the parallelization capacity of the package. Section 4 describes the unit tests necessary for ensuring the build and outlines code maintenance strategies, including a local and an online workflow for continuous integration. Section 5 discusses the process of documenting the code and creating a website for the code documentation. Section 6, through an example, demonstrates the output of UncertaintyPlayground for the user. Section 7 provides additional ideas for further developing the library, and Section 8 concludes the paper.
## 2 Research Question and Relevant Literature
### Research Question
This project's primary research question is how to estimate uncertainty in supervised learning tasks in an easy-to-use manner. The relevant literature includes works on GPs [3] and MDNs [4]. The GP implementation is based on Sparse and Variational Gaussian Process techniques (SVGP) [5]. We briefly highlight SVGP and why it was chosen for this library.
Neural Networks provide a flexible framework for modeling complex patterns in data. In particular, SVGPRs and MDNs, which are neural network-based implementations of Gaussian Processes and Mixture Models, offer a powerful approach to modeling uncertainty. SVGPRs allow for efficient and scalable Gaussian Process Regression, while MDNs enable the modeling of complex, multi-modal distributions. UncertaintyPlayground aims to provide these neural network-based techniques and tackle issues with these models' algorithmic complexity through methods such as Variational Inference and KL divergence.
### Relevant Literature
#### 2.2.1 Gaussian Processes
Gaussian Processes (GPs) have been recognized as a fundamental tool in machine learning, providing function priors for various tasks. In cases with Gaussian likelihoods, inference can be performed in a closed-form manner. However, for non-Gaussian likelihoods, posterior and marginal likelihood approximations are required [6, 7]. Traditional GP regressions have a significant drawback: the computational cost of the exact calculation of the posterior and marginal likelihood scales as \(O(N^{3})\) in time and \(O(N^{2})\) in memory, where \(N\) is the number of training examples.
Sparse and Variational Gaussian Process Regression (SVGPR) is a particular implementation of GPs that allows for efficient and scalable Gaussian Process Regression [5, 8]. SVGPR addresses this issue by choosing \(M\) inducing variables, with \(M\ll N\), to summarize the entire posterior. This reduces the computational cost to \(O(NM^{2}+M^{3})\) in time and \(O(NM+M^{2})\) in memory [9]. SVGPR works by minimizing the Kullback-Leibler (KL) divergence between the approximate and the true posterior, which allows us to learn the model parameters via gradient descent. This is significantly lower than the \(O(N^{3})\) time complexity of standard Gaussian Process Regression, making SVGPR a more scalable alternative for large datasets [8].
#### 2.2.2 Mixed Density Networks
Mixed Density Networks (MDNs) represent another powerful approach in machine learning, particularly for modeling complex, multi-modal distributions. MDNs combine the flexibility of neural networks with the robustness of mixture models, enabling them to capture intricate patterns in data [4].
The computational complexity of MDNs is primarily dependent on the number of mixture components and the dimensionality of the data. Unlike GPs, the complexity of MDNs does not increase cubically with the number of data points. However, as the number of mixture components or the dimensionality of the data increases, the computational cost of training an MDN can become substantial. Despite this, the parallel processing capabilities of modern hardware, such as Graphics Processing Units (GPUs), can significantly expedite the training of these models, making MDNs a feasible approach for large-scale, high-dimensional tasks.
## 3 Code Base
The methodology involves the implementation of SVGP and MDN using the GPyTorch [2] and PyTorch [1] libraries. As mentioned, the GP implementation is based on SVGP, hence SVGPR, while MDN is implemented purely in PyTorch. Other dependencies for this project are Numpy [10], Scikit-learn [11], Matplotlib [12], and Seaborn [13]. Details regarding the version of these packages can be found in the README.md and requirements.txt files.
The entire code base (version 0.1) has been depicted in Figure 1. The workflow of the package goes through models, trainers (supported via supplementary sub-modules in utils), and then the predictions are plotted via predplot. After the development of the library, several unit tests were designed to ensure its functionality and placed in the tests directory. Finally, the documentation and some examples can be found in docs and examples folders.
### Models
As mentioned already, in the current version of the package, two models are implemented: an MDN and an SVGP, defined in the modules mdn_model and svgp_model, respectively.
The MDN model is designed to predict multi-modal distributions. It is particularly useful in scenarios where the data does not adhere to a simple Gaussian distribution. This model includes a neural network with one hidden layer comprised of 10 neurons and a Tanh (hyperbolic tangent) activation function. This network generates parameters for a mixture of Gaussians, which include mixture weights (z_pi), means (z_mu), and standard deviations (z_sigma).
Figure 1: Project structure of UncertaintyPlayground v0.1

The model provides three options for prediction methods: 'max_weight_mean', 'max_weight_sample', and 'average_sample'. These approaches represent different strategies for choosing components from the Gaussian mixture and generating predictions:
* **Max Weight Mean**: This method selects the component with the highest weight and uses its mean as the prediction. It is a deterministic strategy and tends to produce the most probable prediction.
* **Max Weight Sample**: This method selects a component from the mixture based on the weights, then samples from the selected Gaussian component to yield a prediction. This is a stochastic strategy, introducing variability into the predictions.
* **Average Sample**: This method generates multiple samples from the mixture, each time selecting a component based on the weights and sampling from the selected Gaussian component. The final prediction is the average of these samples, providing a balance between the deterministic 'max_weight_mean' and the stochastic 'max_weight_sample'.
The prediction method selection depends on the particular use case and the trade-off between variability and predictability in the predictions. In all cases, a loss function, mdn_loss, computes the target variable's negative log-likelihood given the mixture's predicted parameters. The forward pass of the model computes these parameters based on the input, and a sample method is included to generate samples from the output distribution based on the input.
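To make the preceding description concrete, the following is a minimal sketch of such a mixture-density head and its negative log-likelihood loss. It is written for this text and is not the library's actual implementation; the names z_pi, z_mu, and z_sigma simply follow the description above, and all sizes are examples.

```
import torch
import torch.nn as nn

class SketchMDN(nn.Module):
    """Illustrative mixture-density head: one Tanh hidden layer of 10 units."""
    def __init__(self, n_features, n_hidden=10, n_gaussians=5):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.z_pi = nn.Linear(n_hidden, n_gaussians)      # mixture weights (as logits)
        self.z_mu = nn.Linear(n_hidden, n_gaussians)      # component means
        self.z_sigma = nn.Linear(n_hidden, n_gaussians)   # log of component std devs

    def forward(self, x):
        h = self.hidden(x)
        pi = torch.softmax(self.z_pi(h), dim=-1)
        mu = self.z_mu(h)
        sigma = torch.exp(self.z_sigma(h))                # exponential keeps std devs positive
        return pi, mu, sigma

def sketch_mdn_loss(pi, mu, sigma, y):
    """Negative log-likelihood of y under the predicted Gaussian mixture."""
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(y.unsqueeze(-1))             # shape: (batch, n_gaussians)
    weighted = torch.log(pi + 1e-12) + log_prob
    return -torch.logsumexp(weighted, dim=-1).mean()
```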
The SVGP model is a Sparse Variational Gaussian Process model designed for large-scale regression tasks where traditional Gaussian Process models are computationally infeasible. It employs a subset of data points, referred to as inducing points, to approximate the full Gaussian process. This approximation reduces the computational complexity from cubic in the number of training points to a cost governed by the much smaller number of inducing points, making uncertainty estimation more scalable to large datasets. The SVGP model inherits from the ApproximateGP class provided by the gpytorch library. The model is initialized with a variational distribution strategy, defined using the provided inducing points and data type. The model is designed to learn the inducing locations during the training process. The SVGP model includes a constant mean module (mean_module) and a scaled radial basis function (RBF) kernel (covar_module). The forward pass of the model computes a multivariate normal distribution with a mean and covariance provided by the mean and covariance modules.
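For readers less familiar with GPyTorch's approximate-GP interface, a minimal sketch of an SVGP model of the kind described here (constant mean, scaled RBF kernel, learned inducing locations) might look as follows; this is an illustration, and the library's own svgp_model module may differ in details.

```
import gpytorch

class SketchSVGP(gpytorch.models.ApproximateGP):
    """Illustrative sparse variational GP with learned inducing locations."""
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0))
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True)               # inducing locations are trained
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```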
While both models compute the output distribution given input during the forward pass, they differ significantly in their representation of the output distribution. The MDN uses a mixture of Gaussians to represent complex, multi-modal distributions. In contrast, SVGP employs Gaussian processes, more specifically, a sparse approximation, suitable for modeling functions with smooth, continuous outputs in large-scale settings.
### Utilities
The utils module in the project comprises two smaller sub-modules: early_stopping and generate_data. These sub-modules offer vital functionality to control the model training process and generate synthetic data for testing, respectively.
The sub-module early_stopping encompasses the EarlyStopping class, which is designed to halt the training procedure of a model when a specified performance metric ceases to improve over several consecutive epochs. The extent of tolerance for non-improvement is decided by the patience parameter. Furthermore, a comparison function, compare_fn, compares the metric values. By default, the function evaluates with a logical 'less than' comparison, implying that lower metric values are superior since we minimize the loss functions in both situations. In case of an improvement in the validation metric, the model's state is saved. This feature is crucial as it aids in the prevention of overfitting and reduces unnecessary computation.
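A minimal sketch of this early-stopping logic is shown below; the names patience and compare_fn follow the description above, while the rest of the interface is an illustrative assumption rather than the library's exact class.

```
class SketchEarlyStopping:
    """Stop training when a validation metric stops improving for `patience` epochs."""
    def __init__(self, patience=10, compare_fn=lambda new, best: new < best):
        self.patience = patience
        self.compare_fn = compare_fn   # default: lower metric values are better
        self.best = None
        self.counter = 0
        self.best_state = None

    def step(self, metric, model):
        if self.best is None or self.compare_fn(metric, self.best):
            self.best = metric
            self.counter = 0
            # save a copy of the best model state so it can be restored later
            self.best_state = {k: v.clone() for k, v in model.state_dict().items()}
            return False               # keep training
        self.counter += 1
        return self.counter >= self.patience   # True => stop training
```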
The generate_data sub-module provides a function, generate_multi_modal_data, which generates multi-modal data helpful in testing the models. This function accepts the total number of data samples to be generated, along with a list of dictionaries specifying the modes of the distribution. Each dictionary represents a mode defined by its mean, standard deviation, and weight. Here, the weight dictates the proportion of total samples that will be drawn from the specific mode. The function returns a NumPy array of the generated data samples. Such functionality is pivotal for generating synthetic data that mimics the complex, multi-modal distributions the models aim to handle.
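A sketch of how such a generator could look, assuming mode dictionaries with mean, std, and weight keys as described above (the exact key names used by the library are not shown in this excerpt):

```
import numpy as np

def sketch_generate_multi_modal_data(n_samples, modes):
    """Draw samples from a mixture of Gaussians defined by mode dictionaries."""
    samples = []
    for mode in modes:
        n = int(round(mode["weight"] * n_samples))   # share of samples for this mode
        samples.append(np.random.normal(mode["mean"], mode["std"], size=n))
    return np.concatenate(samples)

# e.g. a bimodal target: 70% of the mass around 0, 30% around 5
y = sketch_generate_multi_modal_data(
    1000, [{"mean": 0.0, "std": 1.0, "weight": 0.7},
           {"mean": 5.0, "std": 0.5, "weight": 0.3}])
```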
### Trainers
The training aspect of the project is handled by a BaseTrainer class and two model-specific trainer classes: MDNTrainer and SVGPTrainer. The base trainer provides the general functionality for training machine learning models, which is then specialized for training the MDN and SVGP models in their respective trainers.
The BaseTrainer class provides a basic structure for the training procedure, defining common functionalities that are universally required for model training, such as setting up the training, test, and validation datasets via PyTorch DataLoaders, defining the loss function and optimizer, and setting up the training loop. This base class is designed to be extended by more specific trainer classes to train particular models, and the training process itself is implemented within the child classes. One important capability of this class is support for sample weights, which yields a weighted loss function. By default, all the weights are set to 1.
One child class of BaseTrainer is the MDNTrainer. This class is specialized for training the MDN model, as defined in the mdn_model module. The trainer initializes the MDN model with a specified number of hidden neurons and Gaussian components and employs a specified optimizer for training. During training, the trainer iterates over the data batches, using the MDN's loss function to compute the training loss and update the model parameters. Additionally, the trainer calculates the validation loss after each epoch to track the model's performance on unseen data. It utilizes the EarlyStopping mechanism mentioned earlier to halt training if the validation loss does not improve over a specified number of epochs, which helps prevent overfitting.
The MDNTrainer also includes a method for making predictions with uncertainty, predict_with_uncertainty. This method feeds new data into the trained MDN model and returns the parameters of the predicted Gaussian mixture, including the mixture weights, means, and standard deviations, as well as samples from the predicted distribution.
The SVGPTrainer follows a similar structure and role as the MDNTrainer, providing functionality for training the SVGP model defined in the svgp_model module. During the initialization of the SparseGPTrainer, the class takes in the number of inducing points as an argument and initializes the SVGP model with these inducing points. It also sets up a Gaussian likelihood for the model. The training method in this class contains a training loop that uses the VariationalELBO loss function, which is related to the KL divergence mentioned earlier. Similar to MDNTrainer, the method also includes early stopping functionality, checking the validation loss after each epoch and stopping the training if the validation loss does not improve for a set number of epochs.
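The core of such a training step, stripped of batching, early stopping, and validation, might look roughly as follows. It reuses the SketchSVGP class sketched earlier and assumes x_train and y_train are PyTorch tensors, so it is an illustration rather than the library's trainer.

```
import torch
import gpytorch

# x_train, y_train: torch tensors of training inputs and targets (assumed available)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = SketchSVGP(inducing_points=x_train[:100])        # e.g. 100 inducing points
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=x_train.size(0))
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01)

model.train()
likelihood.train()
for epoch in range(100):
    optimizer.zero_grad()
    output = model(x_train)
    loss = -mll(output, y_train)     # maximizing the ELBO = minimizing its negative
    loss.backward()
    optimizer.step()
```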
The function predict_with_uncertainty varies between SVGPTrainer and MDNTrainer due to the nature of the models they handle. The SVGPTrainer also contains a predict_with_uncertainty method to produce predictions and the associated uncertainty using the trained model. In SVGPTrainer, this method passes the input tensor through the trained SVGP model and its Gaussian likelihood, producing a Gaussian predictive distribution for each data point. The mean and variance of these distributions are then interpreted as the predicted output and its associated uncertainty. In contrast, MDNTrainer works with MDNs, yielding a Gaussian Mixture Model (GMM) as output. This results in multiple Gaussian components per data point (three parameters per component), each contributing to the final prediction and its uncertainty. Consequently, despite both methods aiming to estimate prediction uncertainty, the structure of the output distributions and the procedure for extracting uncertainty information differ considerably.
### Prediction Plots
The predplot module is dedicated to visualizing the models' predictions and the corresponding uncertainty. This module includes three sub-modules: svgp_predplot, mdn_predplot, and grid_predplot. As the names suggest, the first two sub-modules are for plotting the predictions with MDN and SVGP models for a single instance, while grid_predplot takes as input either of the two functions and plots the same function for two or more instances. In all cases, Matplotlib is the main library for visualizations.
The svgp_predplot and mdn_predplot sub-modules each contain a compare_distributions_(svgpr or mdn) function which takes a trained model of the corresponding type, an input instance, and optionally the actual outcome(s); it plots the predicted distribution for that instance and compares the final scalar prediction against the actual outcome value (when the latter is provided). The predicted distribution is shown as a Kernel Density Estimate (KDE) with the Seaborn library, and the predicted value and actual value(s) are represented as vertical lines on the plot.
The two functions were separated for several reasons. For the MDN model, the dedicated function plots the learned GMM for each test point, indicating (and printing) the predicted means, variances, and weights of the Gaussian components. For the SVGP model, the function plots the predicted mean and variance at each test point. As a result, there are many more possibilities for the MDN techniques regarding how the different Gaussian distributions can be represented (for instance, separately or KDE as it currently stands).
The grid_predplot sub-module contains a function and a class: plot_results_grid and DisablePlotDisplay. The plot_results_grid method generates a grid of scatter plots for given data sets. Each subplot presents the ground truth data as a scatter plot while superimposing the model's prediction and the associated uncertainty as a shaded area. This method is generic and can be used for both models. Additionally, this sub-module contains a DisablePlotDisplay class, a special utility class included in the codebase to control the display of plots during the execution of the script. This class is especially useful when running the code in an environment where the graphical display of plots is not possible or not desirable, such as with our automated tests for the plots.
### Parallelization
The UncertaintyPlayground codebase effectively leverages parallelization at various stages to speed up the computations. PyTorch's native support for GPU acceleration is used to parallelize computations involved in model training. This functionality is implemented within the BaseTrainer class, where the device attribute determines whether computations are to be performed on a CPU or GPU.
In addition to GPU acceleration, the software utilizes parallelization when loading data, significantly reducing the data-loading time, especially for large datasets. The data-loading step in PyTorch can be easily parallelized across multiple CPUs using the num_workers parameter of the DataLoader class. By setting num_workers to be greater than zero, the data-loading tasks are divided among multiple subprocesses. This approach loads data in parallel and ensures that the GPU is not left idle while the data is being prepared, thus maximizing GPU utilization and overall performance.
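A short illustration of this setting, assuming X_train and y_train are NumPy arrays of training data; pin_memory is an optional, commonly used addition not discussed above.

```
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.as_tensor(X_train), torch.as_tensor(y_train))
# num_workers > 0 spreads batch preparation across subprocesses so the GPU stays busy
loader = DataLoader(dataset, batch_size=256, shuffle=True,
                    num_workers=4, pin_memory=torch.cuda.is_available())
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```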
While the project does not currently support multi-GPU configurations for model training, this could be added in the future using PyTorch's DataParallel wrapper for distributed computation. This could further speed up model training, especially for large models and datasets. However, this enhancement would require careful management of memory and synchronization of model parameters across the different GPUs, which would be a complex task warranting further investigation.
## 4 Code Maintenance
The codebase is shared and maintained using GitHub, which facilitates collaborative development. The repository includes a .gitignore file, containing specific files and directories that Git should ignore, such as the package build files, logs, and local configurations that are not intended to be part of the shared codebase. Additionally, we ignore the files that are generated when running local integration with Tox and Pyenv.
Unit tests are implemented using Python's built-in unittest framework. There are eight unit tests in total, ensuring the correctness and robustness of the codebase. The use of unit tests also contributes to the modularity of the code, as each component can be tested independently.
Continuous Integration (CI) was designed to work both locally (offline) and online. The CI system automatically tests the code with multiple Python versions (3.8, 3.9, and 3.10), ensuring that the code works as expected across different platforms and Python versions. Local integration is implemented with Tox (tox.ini) and Pyenv, and online integration is set up through GitHub Workflows (.github/workflows/ci_cd.yml). The dual implementation of CI is due to the advantages and disadvantages of these techniques. On the one hand, GitHub Workflow integration tests compatibility on Mac, Windows, and Linux OS, while Tox is specific to the local OS. On the other hand, GitHub workflows have limited testing calls for private repositories (lifted for public projects), while Tox can be used free of charge.
## 5 Code Documentation
The code documentation is based on Google-style docstrings, which are used throughout the code to describe the purpose and behavior of classes, methods, and functions. The MkDocs tool, a fast and simple static site generator, produces the project documentation from these docstrings and additional markdown files. The docstrings themselves are converted via the MkDocStrings software extension. The markdown files used for MkDocs are located in the docs directory, and the mkdocs.yml configuration file is placed at the project's root directory. This approach was specifically chosen to ensure that the documentation always stays in sync with the code.
## 6 Usage
In this section, we discuss the output for the users and some capabilities of the package. We illustrate these capabilities with a real data set commonly used in ML, namely the California housing data. This dataset provides a realistic scenario for the models. However, the outcome variable, the average house value, is not necessarily multi-modal. In the examples folder of the package and the 'Usage' section of the documentation, you will find a better-suited example for MDN, where simulated data containing multi-modal distributions are used to test specific aspects of the MDN models.
First, we load California housing data and convert it to the desired floating-point format as shown in Figure 2. Then, we initialize and train an SVGP model with 100 inducing points as depicted in Figure 3 and obtain the output for the
training. It can be observed that the model starts learning and then stops at the 30th epoch. Please note here that we do not set a seed for Numpy and PyTorch, but when applying this library, it is good practice to set the two seeds.
```
>>> from sklearn.datasets import fetch_california_housing
>>> from sklearn.model_selection import train_test_split
>>> import numpy as np
>>> california = fetch_california_housing()
>>> X = np.array(california.data, dtype=np.float32)
>>> y = np.array(california.target, dtype=np.float32)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
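The listing behind Figure 3 (training the SVGP model) is not reproduced in this excerpt. By analogy with the MDN listing below and the trainer description in Section 3.3, the call might look roughly as follows; the module path and the argument names, in particular num_inducing_points, are assumptions for illustration rather than the library's documented signature.

```
>>> from uncertaintyplayground.trainers.svgp_trainer import SparseGPTrainer
>>> california_trainer_svgp = SparseGPTrainer(
...     X=X_train,
...     y=y_train,
...     num_inducing_points=100,
...     num_epochs=100,
...     lr=0.1)
>>> california_trainer_svgp.train()
```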
We can apply the same kind of pipeline for the MDN model as illustrated in Figure 4. Aside from the function arguments, the outputs differ from those of Figure 3. The first difference is that other outputs are not given aside from the loss metric. It may be argued that inference with multiple Gaussian distributions at every epoch may be undesirable. However, this feature will be added in the next versions of the package. The second difference is that the training stopped earlier than expected. This is due to the patience argument, which triggers the early stopping at the default value of 10, meaning that validation loss did not improve after the 11th epoch for another ten epochs. Hence the training was stopped, and the best model was returned.
```
>>> from uncertaintyplayground.trainers.mdn_train import MDNTrainer
>>> california_trainer_mdn = MDNTrainer(
...     X=X_train,
...     y=y_train,
...     num_epochs=100,
...     lr=0.001,
...     dense1_units=50,
...     n_gaussians=10)
>>> california_trainer_mdn.train()
Epoch 1/100, Training Loss: 1.450, Validation Loss: 1.522
Epoch 2/100, Training Loss: 1.382, Validation Loss: 1.458
...
Epoch 11/100, Training Loss: 1.381, Validation Loss: 1.261
...
Epoch 21/100, Training Loss: 1.412, Validation Loss: 1.293
Early stopping after 21 epochs
```
Figure 2: Loading the California Housing dataset

Figure 3: Initializing and Training SVGPR

Figure 4: Initializing and Training MDN

We must note that we do not show an example of predict_with_uncertainty in this paper for brevity. These functions are also applied for generating prediction plots and will be discussed briefly. Please note that this function for both models produces the learned parameters of the model. More information can be found on this topic in the documentation.
After the models have been trained, we visualize the prediction intervals. For the SVGP model, we use the functions compare_distributions_svgpr and plot_results_grid.
Next, we visualize the predictions of the MDN model using the functions compare_distributions_mdn and plot_results_grid. These workflows can be found in Figure 5. The four produced plots are shown in Figures 6, 7, 8, and 9.
The plots and their predictive distributions illustrate how to diagnose the model's performance and better understand the underlying data structure. We do not discuss the quality of the model in this report. Nonetheless, it is helpful to see that the MDN model may find skewed data that SVGPR does not capture. Whether this model is better than SVGPR needs proper metrics and validation and is beyond the scope of this report.
Figure 5: Plotting results from both SVGPR and MDN models
Figure 6: SVGPR compare_distributions_svgpr for instance number 900
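The plotting calls behind Figure 5 are likewise not reproduced in this excerpt. Based on the function names and descriptions in Section 3.4, they might look roughly as follows; the import paths and argument structure are assumptions, and instance 900 is simply the test instance shown in Figures 6 and 8.

```
>>> from uncertaintyplayground.predplot.svgp_predplot import compare_distributions_svgpr
>>> from uncertaintyplayground.predplot.mdn_predplot import compare_distributions_mdn
>>> from uncertaintyplayground.predplot.grid_predplot import plot_results_grid
>>> # single-instance plots: trained trainer, one test instance, optional true outcome
>>> compare_distributions_svgpr(california_trainer_svgp, X_test[900], y_test[900])
>>> compare_distributions_mdn(california_trainer_mdn, X_test[900], y_test[900])
>>> # grid of plots for two instances per model
>>> plot_results_grid(california_trainer_svgp, X_test[[900, 1800]], y_test[[900, 1800]])
>>> plot_results_grid(california_trainer_mdn, X_test[[900, 1800]], y_test[[900, 1800]])
```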
## 7 Further Development
The UncertaintyPlayground project, as a dynamic platform for uncertainty estimation, has numerous avenues for future enhancements and expansions. The opportunities for improvements are distributed across different elements of the package:
1. **Flexible Model Architectures:** For the MDN model, it would be beneficial to make the neural network architecture modular, allowing for the incorporation of different layers and activation functions. For the SVGP model, adding the option to use different kernel functions could extend the model's flexibility.
2. **Improved Noise Modelling:** Introducing the capability to use different types of noise in the MDN model could significantly improve the quality of uncertainty estimates.
3. **Classification Capabilities:** Both the MDN and SVGP models could be extended to support binary and multi-class classification. This would involve modifying the likelihood function and the performance metric. This would, however, require extensive theoretical and empirical validation as MDNs and SVGPRs are not traditionally used for classification tasks [14].
4. **Hardware Utilization:** The package could benefit from implementing multi-GPU support, which would allow for more efficient training of large models on large datasets. Optimizing the parallel data loading process for maximized CPU utilization could significantly improve overall performance.
Figure 7: SVGPR plot_results_grid on two instances
Figure 8: MDN compare_distributions_mdn for instance number 900
5. **Improved Code Documentation:** The addition of type hints to the docstrings would offer better clarity and type checking, enhancing the readability and maintainability of the codebase. Additionally, the documentation can benefit from more examples.
6. **Benchmarking Performance:** In the further iterations of this package, the performance, both in terms of speed and accuracy of prediction, can be measured against other models. For instance, one can compare our approach with a traditional GPR for larger and smaller datasets since, as already discussed, GPR has an algorithmic complexity of \(O(N^{3})\) and does not scale well beyond a few hundred observations.
## 8 Conclusion
This paper comprehensively reviews a newly developed Python package, UncertaintyPlayground, a library built for simplified uncertainty estimation with PyTorch and GPyTorch. The two machine learning algorithms, MDN and SVGP models, are implemented for fast and easy uncertainty estimation of continuous outcomes. The results demonstrate that both models can model complex data distributions and provide meaningful uncertainty estimates. The built-in plotting functionalities allow users to study the inferred distribution of given outcomes. The codebase is well-maintained, well-documented, and designed for future development.
Figure 9: MDN plot_results_grid on two instances |
2305.11251 | Computational thematics: Comparing algorithms for clustering the genres
of literary fiction | What are the best methods of capturing thematic similarity between literary
texts? Knowing the answer to this question would be useful for automatic
clustering of book genres, or any other thematic grouping. This paper compares
a variety of algorithms for unsupervised learning of thematic similarities
between texts, which we call "computational thematics". These algorithms belong
to three steps of analysis: text preprocessing, extraction of text features,
and measuring distances between the lists of features. Each of these steps
includes a variety of options. We test all the possible combinations of these
options: every combination of algorithms is given a task to cluster a corpus of
books belonging to four pre-tagged genres of fiction. This clustering is then
validated against the "ground truth" genre labels. Such comparison of
algorithms allows us to learn the best and the worst combinations for
computational thematic analysis. To illustrate the sharp difference between the
best and the worst methods, we then cluster 5000 random novels from the
HathiTrust corpus of fiction. | Oleg Sobchuk, Artjoms Šeļa | 2023-05-18T18:32:03Z | http://arxiv.org/abs/2305.11251v1 | **Computationalhematics: Comparing algorithms for clustering the genres of literary fiction**
## Abstract
What are the best methods of capturing thematic similarity between literary texts? Knowing the answer to this question would be useful for automatic clustering of book genres, or any other thematic grouping. This paper compares a variety of algorithms for unsupervised learning of thematic similarities between texts, which we call "computational thematics". These algorithms belong to three steps of analysis: text preprocessing, extraction of text features, and measuring distances between the lists of features. Each of these steps includes a variety of options. We test all the possible combinations of these options: every combination of algorithms is given a task to cluster a corpus of books belonging to four pre-tagged genres of fiction. This clustering is then validated against the "ground truth" genre labels. Such comparison of algorithms allows us to learn the best and the worst combinations for computational thematic analysis. To illustrate the sharp difference between the best and the worst methods, we then cluster 5000 random novels from the HathiTrust corpus of fiction.
**Keywords: text mining, computational literary studies, genre, topic modeling**
## Introduction
Computational literary studies have rapidly grown in prominence over the recent years. One of the most successful directions of inquiry within this domain, in terms of both methodological advances and empirical findings, has been computational stylometry, or computational stylistics: a discipline that develops algorithmic techniques for learning _stylistic similarities_ between texts (Bories et al., 2023; Burrows, 1987; Eder et al., 2016). For this purpose, computational stylometrists extract linguistic features specifically associated with authorial style, or individual authorial habits. Often, these features are the most frequent words from the analyzed literary texts - they tend to be function words ("a", "the", "on", etc.) - to which various measures of similarity (e.g., Euclidean distance) are applied. The most common goal of computational stylistics is attributing the authorship of texts where it is disputed, like the authorship of Moliere's plays (Cafiero and Camps, 2019), the Nobel Prize winning novel _And Quiet Flows the Don_ (Iosifyan and Vlasov, 2020), or Shakespeare and Fletcher's play _Henry VIII_ (Plechac, 2021). Thanks to numerous systematic comparisons of various approaches to computational stylometry, we now have a fairly good idea of which procedures and textual features are the most effective ones - depending on the goal of stylometric analysis, the language of texts, or their genre (Evert et al., 2017; Neal et al., 2017; Plechac et al., 2018).
At the same time, we lack such systematic comparisons in the research area that might be called "computational thematics": the study of _thematic similarities_ between texts. (Thematic similarities: say, that novels A and B both tell a love story or have a "fantasy" setting.) Why is learning about thematic similarities important? Genre - a population of texts united by broad thematic similarities - fantasy, romance, science fiction, and the like - is a central notion in literary studies, necessary not only for categorizing and cataloging literary works, but also for the historical scholarship of literature. Genres are evolving populations of texts that emerge at certain moments of time, spread across the field of literary production, and then disappear in their original form - usually becoming stepping stones for subsequent genres (Fowler, 1971). For example, the genre of "classical" detective fiction crystallized in the 1890-1930s, and then gave birth to multiple other genres of crime fiction, such as "hardboiled crime fiction", "police procedural", "historical detective", and others (Symons, 1985). Studying the historical dynamics of genres - not only of literature, but also music or painting - is an important task of art history and sociology, and digital archives allow doing so on a much larger scale (Allison et al., 2011; Klimek et al., 2019; Sigaki et al., 2018). But to gain the most from this larger scale, we must determine the best, most reliable algorithms for detecting the thematic signal in books - similarly to how computational stylometrists have learnt the most effective algorithms for detecting the signal of authorship.
Quantitative analysis of genres usually takes one of these forms. The first one is _manual tagging_ of books by genre or using datasets where such tagging has already been done via large crowdsourced efforts, like the data collected on the Goodreads website (Thelwall, 2019). This approach is prone to human bias, it is laborious and also based on the idea that the differences between genre populations are qualitative, not quantitative (e.g., certain book is either a "detective" or "romance", or both, but not 0.78 detective and 0.22 romance, which, we think, would be a more informative description). The second approach is an extension of manual tagging: _supervised machine learning_ of book genres using a training dataset with manually tagged genres (Piper et al., 2021; Underwood, 2019). This approach has important strengths: it is easily scalable and it provides not qualitative but quantitative estimates of a book's belongingness to a genre. Still, it has a problem: it can only assign genre tags included in the training dataset, it cannot find new, unexpected book populations - which is an important component of the historical study of literature. The third approach is _unsupervised clustering_ of genres: algorithmic detection of book populations based on their similarity to each other (Calvo Tello, 2021; Schoch, 2017). This approach is easily scalable, allows quantitative characterization of book genres, and does not require a training dataset with manually assigned tags, thus allowing to detect new, unexpected book populations. All these features of unsupervised clustering make it highly suitable for historical research, and this is why we will focus on it in this paper.
Unsupervised clustering can be conducted in a variety of ways. For example, texts can be lemmatized or not lemmatized; as text features, simple word frequencies can be used or some higher-level units, such as topics of a topic model; to measure the similarity between texts, a host of distance metrics can be applied. Hence, the question: what are the best computational methods for detecting thematic similarities in literary texts? This is the main question of this paper. To answer it, we will compare various combinations of (1) preprocessing (which, in this study, we will also call "thematic foregrounding"), (2) text features, and (3) the metrics used for measuring distance between features. To assess the effectiveness of these combinations,
we use a tightly controlled corpus of four well-known genres - detective fiction, science fiction, fantasy, and romance - as our "ground truth" dataset. To illustrate the significant difference between the best and the worst combinations of algorithms for genre detection, we later cluster genres in a much larger corpus, containing 5000 works of fiction.
## Materials and Methods
### Data: The "ground truth" genres
Systematic research on computational stylistics is common, while research on computational thematics is still rare (Allison et al., 2011; Schoch, 2017; Šeļa et al., 2022; Underwood, 2016). Why? Computational stylistics has clear "ground truth" data against which various methods of text analysis can be compared: authorship. The methods of text analysis in computational stylistics (e.g., Delta distance or Manhattan distance) can be compared as to how well they perform in the task of classifying texts by their authorship. We write "ground truth" in quotes, as authorship is no more than a convenient proxy for stylistic similarity, and, as any proxy, it is imprecise. It assumes that texts written by the same author should be more similar to each other than texts written by different authors. However, we know many cases when the writing style of an author would evolve significantly over the span of their career, or would be deliberately manipulated (Brennan et al., 2012). Authorship as a proxy for "ground truth" is a simplification - but a very useful one.
The lack of a widely accepted "ground truth" proxy for thematic analysis leads to the comparisons of algorithms that are based on nothing more than subjective judgment (Egger and Yu, 2022). Such subjective judgment cannot lead us far: we need quantitative metrics of performance of different algorithms. For this, an imperfect "ground truth" is better than none at all. What could play the role of such an imperfect, but still useful, ground truth in computational thematics? At the moment, these are genre categories. They capture, to a different degree, thematic similarity between texts. To a different degree, as genres can be organized according to several principles, or "axes of categorization": e.g., they can be based on the similarity of storylines (adventure novel, crime novel, etc.), settings (historical novel, dystopian novel, etc.), emotions they evoke in readers (horror novel, humorous novel, etc.), or their target audience (e.g., young adult novels). It does seem that these various "axes of categorization" correlate: say, "young adult" novels are appreciated by young adults because they often have similar storylines or characters. Or, horror novels usually have a broad, but consistent, arsenal of themes and settings that are efficient at evoking pleasant fear in readers (like the classical Gothic setting). Still, some axes of genre categorization are probably better for comparing the methods of computational thematics than others. Genres defined by their plots or settings may provide a clearer thematic signal than genres defined by their target audience or evoked emotions.
We have assembled a tightly controlled corpus of four genres (50 texts in each) based on their plots and settings:
* Detective fiction (recurrent thematic elements: murder, detective, suspects, investigation)
* Fantasy fiction (recurrent elements: magic, imaginary creatures, quasi-medieval setting)
* Romance fiction (recurrent elements: affection, erotic scenes, love triangle plot)
* Science fiction (recurrent thematic elements: space, future, technology)
We took several precautions to remove potential confounds. First, these genres are situated on a similar level of abstraction: we are not comparing rough-grain categories (say, romance or science fiction) to fine-grain ones (historical romance or cyberpunk science fiction). Second, we limited the time span of the book publication year to a rather short period of 1950-1999: to make sure that our analysis is not affected too much by language change (which would inevitably happen if we compared, for example, 19th-century gothic novels to 20th-century science fiction). Third, each genre corpus has a similar number of authors (29-31 authors), each represented by 1-3 texts. Several examples of books in each genre are shown in Table 1. The complete list is in **Supplementary materials**. Before starting our analysis, we pre-registered this list on Open Science Framework's website (https://osf.io/rce2w/?view_only=16db492ab4464a4da53b1ef891416bd4).
### Analysis: The race of algorithms
To compare the methods of detecting thematic signal, we developed a workflow consisting of four steps - see Figure 1. As with our corpus, all the detailed steps of the workflow were pre-registered.
\begin{table}
\begin{tabular}{|l|l|} \hline Genre & Examples \\ \hline Detective fiction & Josephine Tey, _The Daughter of Time_, 1951 \\ & Agatha Christie, _At Bertram’s Hotel_, 1965 \\ & Colin Dexter, _Last Bus to Woodstock_, 1975 \\ & Peter Lovesey, _The False Inspector Dew_, 1982 \\ & Sue Grafton, _M is for Malice_, 1996 \\ \hline Fantasy fiction & J. R. R. Tolkien, _The Fellowship of the Ring_, 1954 \\ & Michael Moorcock, _Stormbringer_, 1965 \\ & Ursula K. Le Guin, _The Tombs of Atuan_, 1970 \\ & Terry Pratchett, _The Colour of Magic_, 1983 \\ & J. K. Rowling, _Harry Potter and the Philosopher’s Stone_, 1997 \\ \hline Romance fiction & Barbara Cartland, _Love is the Enemy_, 1952 \\ & Jackie Collins, _The World is Full of Married Men_, 1968 \\ & Gordon Merrick, _The Lord Won’t Mind_, 1970 \\ & Danielle Steel, _A Perfect Stranger_, 1981 \\ & Diana Gabaldon, _Outlander_, 1991 \\ \hline Science fiction & Robert A. Heinlein, _Double Star_, 1956 \\ & Arthur C. Clarke, _2001: A Space Odyssey_, 1968 \\ & Frank Herbert, _Children of Dune_, 1976 \\ & C. J. Cherryh, _Downbelow Station_, 1981 \\ & Kim Stanley Robinson, _Red Mars_, 1992 \\ \hline \end{tabular}
\end{table}
Table 1: Examples of books in each genre corpus (full list in **Supplementary materials**).
**Step 1. Choosing a combination of thematic foregrounding, features, and distance**
As a first step, we choose a combination of (a) the level of thematic foregrounding, (b) the features of analysis, and (c) the measure of distance.
By _thematic foregrounding_ (**Step 1a** on **Figure 1**) we mean the extent to which the thematic aspect of a text is highlighted (and the stylistic aspect - backdropped). With _weak_ thematic foregrounding, only the most basic text preprocessing is done: lemmatizing words and removing 100 most frequent words (MFWs) - the most obvious carriers of strong stylistic signal. 100 MFWs roughly correspond to function words (or closed-class words) in English, routinely used in authorship attribution (Chung and Pennebaker, 2007; Stamatatos, 2009) beginning with the classical study of _Federalist Papers_(Mosteller and Wallace, 1963). With _medium_ thematic foregrounding, in addition to lemmatizing, we also remove entities (named entities, proper names, etc.) using SpaCy tagger (Honnibal and Montani, 2017). Additionally, we perform part-of-speech tagging and remove all the words that are not nouns, verbs, adjectives, or adverbs, which are the most content-bearing parts of speech. With _strong_ thematic foregrounding, in addition to all the steps of the medium foregrounding, we also apply lexical simplification. We simplify the vocabulary by replacing less frequent words with their more frequent synonyms - namely, we replace all words outside of 1000 MFWs with their more common semantic neighbors (out of 10 closest neighbors), with the help of pre-trained FastText model that includes 2 million words and is trained on English Wikipedia (Grave et al., 2018).
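To illustrate, a sketch of the medium level of foregrounding (lemmatize, drop named entities, keep only content-bearing parts of speech) as it could be implemented with SpaCy is shown below. This is our reconstruction for illustration, not the authors' released code, and the model name is an example.

```
import spacy

nlp = spacy.load("en_core_web_sm")
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}

def medium_foregrounding(text):
    doc = nlp(text)
    return [tok.lemma_.lower() for tok in doc
            if tok.pos_ in CONTENT_POS      # keep content words only
            and not tok.ent_type_           # drop named entities
            and tok.is_alpha]               # drop punctuation and numbers
```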
Then, we transform our pre-processed texts into lists of features (**Step 1b** on **Figure 1**). We vary both the type of features and the length of lists. We consider four types of features. The simplest features are most frequent words as used in the bag-of-words approach (1000, 5000, or 10,000 of them) - a common solution for thematic analysis in computational literary studies (Hughes et al., 2012; Underwood, 2019). The second type of features are topic probabilities generated with the Latent Dirichlet Allocation (LDA) algorithm (Blei et al., 2003) - another common choice (Jockers, 2013; Liu et al., 2021). LDA has several parameters that can influence results, such as the predefined \(k\) of topics or the number of most frequent words used. Plus, a long text like a novel is too large for meaningful LDA topic modeling, and the typical solution is dividing the text into smaller chunks. We use an arbitrary chunk size of 1000 words. The third type of features are modules generated with weighted correlation network analysis, also known as weighted gene co-expression network analysis (WGCNA) - a method of dimensionality reduction that detects clusters (or "modules") in networks (Langfelder and Horvath, 2008). WGCNA is widely used in genetics (Bailey et al., 2016; Ramirez-Gonzalez et al., 2018), but also showed promising results as a tool for topic modeling of fiction (Elliott, 2017). We used it with either 1000 or 5000 most frequent words. Typically, WGCNA is used without chunking data, but, since chunking leads to better results in LDA, we decided to try using WGCNA with and without chunking, with the chunk size of 1000 words. All the parameters of WGCNA were kept at defaults. Finally, as the fourth type of feature, we use document-level embeddings doc2vec (Lau and Baldwin, 2016; Le and Mikolov, 2014) that directly position documents in a latent semantic space defined by a pre-trained distributional language model - FastText (Grave et al., 2018). Document representations in doc2vec depend on the features of the underlying model: in our study, each document is embedded in 300 dimensions of the original model. Doc2vec and similar word embedding methods are increasingly used for assessing the similarity of documents (Dynomant et al., 2019; Kim et al., 2019; Pranjic et al.,
2020). As a result of **Step 1b**, we obtain a document-term matrix formed of texts (rows) and features (columns).
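As an illustration of Step 1b for the LDA features, the sketch below chunks each novel into 1000-word slices, fits a topic model, and averages chunk-level topic probabilities back to the novel level. The choice of gensim and all parameter values here are illustrative assumptions, and tokenized_novels stands for the preprocessed token lists from the previous step.

```
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def chunks(tokens, size=1000):
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

# tokenized_novels: a list of token lists, e.g. output of the preprocessing sketch above
all_chunks = [c for novel in tokenized_novels for c in chunks(novel)]
dictionary = Dictionary(all_chunks)
dictionary.filter_extremes(keep_n=5000)                    # restrict to 5000 MFWs
corpus = [dictionary.doc2bow(c) for c in all_chunks]
lda = LdaModel(corpus, id2word=dictionary, num_topics=50, passes=5)

def novel_topic_vector(novel, k=50):
    # average the chunk-level topic probabilities back to the novel level
    rows = []
    for c in chunks(novel):
        probs = dict(lda.get_document_topics(dictionary.doc2bow(c), minimum_probability=0.0))
        rows.append([probs.get(t, 0.0) for t in range(k)])
    return np.mean(rows, axis=0)
```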
Finally, we must learn the similarity between the texts represented with the chosen lists of features - by using some metric of distance (**Step 1c** on **Figure 1**). There exist a variety of metrics for this purpose: Euclidean, Manhattan, Delta, Cosine, Cosine Delta distances and Jensen-Shannon divergence (symmetrized Kullback-Leibler divergence) for features that are probability distributions (in our case, this can be done for LDA topics and bag-of-words features).
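A sketch of Step 1c with a few of these metrics, computed over the rows of the document-feature matrix (SciPy is assumed here purely for illustration):

```
import numpy as np
from scipy.spatial.distance import pdist, squareform, jensenshannon

# X: document-feature matrix from Step 1b (rows = novels, columns = features)
X = np.vstack([novel_topic_vector(novel) for novel in tokenized_novels])

d_euclidean = squareform(pdist(X, metric="euclidean"))
d_manhattan = squareform(pdist(X, metric="cityblock"))
d_cosine = squareform(pdist(X, metric="cosine"))
# Jensen-Shannon applies when rows are probability distributions (e.g. LDA topics)
d_js = squareform(pdist(X, metric=jensenshannon))
```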
Variants of **Step 1a**, **1b**, and **1c**, can be assembled in numerous _combinations_. In our "race of algorithms", each combination is a competitor - and a potential winner. Say, we could choose a combination of weak thematic foregrounding, LDA topics with 50 topics on 5000 most frequent words, and Euclidean distance. Or, medium thematic foregrounding, simple bag-of-words with 10,000 most frequent words, and Jensen-Shannon divergence. Some of these combinations are researchers' favorites, while others are underdogs - used rarely, or not at all. Our goal is to map out the space of possible combinations - to empirically test how each combination performs in the task of detecting the thematic signal. In total there are 311 competing combinations.
**Step 2. Sampling for robust results**
A potential problem with our experiment could be that some combinations might perform better or worse simply because they are more suitable to our corpus of novels - for whatever reason. To reduce the impact of individual novels in our corpus, we do cross-validation: instead of analyzing the corpus as a whole, we analyze smaller _samples_ from the corpus multiple times. Each sample contains 120 novels: 30 books from each genre. Altogether, we perform the analysis for each combination on 100 samples. For each sample, all the models that require training - LDA, WGCNA, and doc2vec - are trained anew.
Figure 1: Four steps of the analysis. The workflow includes two loops. Big loop goes through various combinations of thematic control (Step 1a), feature type (1b), and distance metric (1c). For each such combination, a small loop is run: it randomly draws a genre-stratified sample of 120 novels (Step 2), clusters the novels using Ward algorithm (Step 3), and validates the clusters on the dendrogram using Adjusted Rand Index (Step 4). As a result of these four steps, each combination receives an ARI score: a score of its performance in detecting genres.
**Step 3. Clustering**
As a result of Step 2, we obtain a matrix of text distances. Then, we need to cluster the texts into groups - our automatically generated genre clusters, which we will later compare to the "true" clusters. For this, we could have used a variety of algorithms (e.g., k-means). We use hierarchical clustering with Ward's linkage (Ward, 1963): at each step it merges the two clusters whose union yields the smallest increase in within-cluster variance, keeping the clusters as compact as possible. Despite being originally defined only for Euclidean distances, Ward's algorithm has been empirically shown to outperform other linkage strategies in text-clustering tasks (Ochab et al., 2019). We assume that novels from the four defined genres should roughly form four distinct clusters (as the similarity of texts within a genre is greater than the similarity of texts across genres). To obtain the groupings from the resulting tree, we cut it at the level that yields the assumed number of clusters (which is 4).
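In R, this step amounts to two calls, sketched below under the assumption that `d` is a distance object from Step 1c (for instance, the Jensen-Shannon distances computed earlier). Note that base R distinguishes between the `ward.D` and `ward.D2` flavors of Ward's linkage.

```r
# Step 3: Ward hierarchical clustering and a flat cut into 4 clusters.
# `d` is a `dist` object, e.g. the Jensen-Shannon distance matrix from Step 1c.
hc <- hclust(d, method = "ward.D2")   # Ward's minimum-variance linkage
clusters <- cutree(hc, k = 4)         # one cluster label per novel
```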
**Step 4. Cluster validation**
How similar are our generated clusters to the "true" genre populations? To learn this, we compare the clusters generated by each chosen combination to the original genre labels. For this, we use a measure of cluster validation called the adjusted Rand index (ARI) (Hubert & Arabie, 1985). The ARI score of a particular combination shows how well this combination performs in the task of detecting genres - and thus, in picking up the thematic signal. Steps 1-4 are performed for every combination, so that every combination receives its ARI score. At the end of the analysis, we obtain a dataset of 29,100 rows (291 combinations, each tested on 100 random samples).
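As an illustration, the `mclust` package provides one widely used implementation of ARI (several other R packages offer equivalent functions); the toy vectors below stand in for the cluster labels from Step 3 and the original genre tags.

```r
library(mclust)   # provides adjustedRandIndex()

# `clusters` would come from cutree() in Step 3; `genres` holds the "true"
# genre tags of the same novels, in the same order. Toy values shown here.
clusters <- c(1, 1, 2, 2, 3, 3, 4, 4)
genres   <- c("det", "det", "fan", "fan", "rom", "sf", "sf", "rom")
adjustedRandIndex(clusters, genres)   # 1 = perfect agreement, ~0 = chance level
```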
## Results
**Figure 2** shows the average performance of all the combinations of thematic foregrounding, features, and distance metrics. Our first observation: the average ARI of the best-performing algorithms ranges from 0.66 to 0.7, which is rather high for the complicated, noisy data that is literary fiction. This gives additional support to the idea that unsupervised clustering of fiction genres is possible. Even a cursory look at the 10 best-performing combinations immediately reveals several trends. First, none of the top combinations have weak thematic foregrounding. Second, 6 out of the 10 best-performing combinations use LDA topics as features. Third, 8 out of 10 distances on this list are Jensen-Shannon divergence.
But how generalizable are these initial observations? How shall we learn the average "goodness" of a particular kind of thematic foregrounding, or a feature type, or a distance metric? To learn this, we need to control for their influence on each other, as well as for additional parameters, such as the number of most frequent words and chunking. Hence, we have constructed five Bayesian linear regression models (see **Supplement 5.1**). They answer questions about the performance of various combinations of thematic foregrounding, features,
Figure 2: Raw distributions of ARI scores for all the combinations of thematic foregrounding, feature type, and distance metric. Boxplots are colored by feature type. Numbers on the horizontal axis correspond to the names of combinations in the table to the right, showing 10 best-performing combinations (see all the combinations in **Supplement, Table S7**).
and distance metrics, helping us reach conclusions about the performance of individual steps of thematic analysis. All the results of this study are described in detail in **Supplement 5.1**. Below, we focus only on key findings.
**Conclusion 1.** **Thematic foregrounding improves genre clustering**
The goal of thematic foregrounding was to highlight the contentful parts of the texts and to push the stylistic parts into the background. So, does stronger thematic foregrounding improve genre recognition? As expected, we have found that weak thematic foregrounding shows the worst performance across all four feature types (see **Figure 3**). For LDA and bag-of-words, it leads to drastically worse performance. At the same time, we do not see a large difference between the medium and the strong levels of thematic foregrounding. The main distinguishing feature of the strong level of thematic foregrounding is the use of lexical simplification. This lexical simplification has not led to a noticeable improvement in genre recognition. The gains of using strong thematic foregrounding for document embeddings, LDA, and bag-of-words are marginal and inconsistent.
**Conclusion 2. Various feature types show similarly good performance**
Does the choice of feature type matter for the performance of genre clustering? We have found that almost all feature types can perform well. As shown in **Figure 2**, three out of four feature types - doc2vec, LDA, and bags of words - when used in certain combinations, can lead to almost equally good results. But how good are they on average? **Figure 4** shows the posterior distributions of ARI for each feature type used in our analyses - in each case, at the strong level of thematic foregrounding.
Figure 3: The effect of thematic foregrounding (weak, medium, or strong) on clustering genres, stratified by feature type.
As we see, doc2vec shows the best average performance, but this study has not experimented extensively with other parameters of this feature type. It might be that a different number of dimensions (e.g., 100 instead of 300) would worsen its performance. More research is needed to better understand the performance of doc2vec. LDA is the second-best approach - and interestingly, the variation of parameters in LDA (such as \(k\) of topics or \(n\) of MFWs) does not increase the variance compared to doc2vec. The bag-of-words approach, despite being the simplest kind of feature, proves to be surprisingly good. It does not demonstrate the best performance, but it is not far behind doc2vec and LDA. At the same time, bags of words have a powerful advantage: simplicity. They are simpler to use and require fewer computational resources, meaning that in many cases they can still be a suitable choice for thematic analysis. Finally, WGCNA shows the worst ARI scores on average.
**Conclusion 3. The performance of LDA does not seem to depend on \(k\) of topics and \(n\) of most frequent words**
LDA modeling depends on parameters, namely \(k\) of topics and \(n\) of most frequent words, which should be decided, somewhat arbitrarily, before modeling. There exist algorithms for estimating the "good" number of topics, which help assess how many topics are "too few" and how many are "too many" (Sbalchiero & Eder, 2020). In our study, however, we find no meaningful influence of either of these choices on learning the thematic signal (**Figure 5**). The single factor with a massive influence on the effectiveness of thematic classification is thematic foregrounding. Weak thematic foregrounding (in our case, only lemmatizing words and removing the 100 most frequent words) proves to be a terrible choice that noticeably reduces ARI scores. Our study points towards the need for further systematic comparisons of various approaches to thematic foregrounding, as it seems to play a key role in the solid performance of LDA.
Figure 4: Posterior distributions of ARI scores for four feature types, at the strong level of thematic foregrounding.
**Conclusion 4. The bag-of-words approach requires a balance of thematic foregrounding and \(n\) of most frequent words**
Using bags of words as features is the simplest approach in thematic analysis, but still an effective one, as we have demonstrated. But how does one maximize the chances that bags of words perform well? We have varied two parameters in the bag-of-words approach: the level of thematic foregrounding and the number of MFWs used. **Figure 6** illustrates our findings: both of these parameters influence the performance. Using 5000 instead of 1000 MFWs drastically improves ARI scores. Similarly, using medium instead of weak thematic foregrounding makes a big difference. At the same time, pushing these two parameters further - using 10,000 MFWs and strong thematic foregrounding - brings only marginal, if any, improvement in ARI scores.
Figure 5: Posterior probabilities of the effects of k of topics on ARI, stratified by the level of thematic foregrounding and n of most frequent words used in LDA. Error bars show 95% credible intervals.
Figure 6: The influence of the number of most frequent words, used as text features, on learning the thematic signal, measured with ARI. There is a positive relationship between \(n\) of words and ARI, as well as between the level of thematic foregrounding and ARI. However, the middle parameter values of both (5000 MFWs and medium foregrounding) should be enough for most analyses.
**Conclusion 5. Jensen-Shannon divergence is the best distance metric for genre recognition, Euclidean - the worst**
Choosing the right distance metric is crucial for improving genre clustering. **Figure 7** shows the performance of various distances for each type of feature (note that Jensen-Shannon divergence, which was formulated for probability distributions, could not be applied to doc2vec dimensions and WGCNA module weights). For LDA and bag-of-words, Jensen-Shannon divergence is the best distance, with Delta and Manhattan distances being highly suitable too. For doc2vec, the choice of distance matters less. Interestingly, Euclidean distance is the worst-performing distance for LDA, bag-of-words, and WGCNA. This is an important finding, because this distance is often used in text analysis, including in combination with LDA (Jockers, 2013; Schoch, 2017; Underwood et al., 2022), while our study suggests that this distance should be avoided in computational thematic analysis. Cosine distance is also known to be useful for authorship attribution when combined with bag-of-words features. At the same time, cosine distance is sometimes used to measure the distances between LDA topic probabilities, and our study shows that it is not the best combination.
### Comparison of algorithms on a larger dataset
How well does this advice apply to clustering other corpora, not just our corpus of 200 novels? A common problem in statistics and machine learning is overfitting: tailoring one's methods to a particular "sandbox" dataset, without making sure that these methods would work "in the wild". In our case, this means: would the same combinations of methods work well/poorly on other genres and other books than those included in our analysis? One precaution that we took to deal with overfitting was sampling from our genre corpus: instead of analyzing the full corpus just once, we analyzed smaller samples from it. But, additionally, it would be useful to compare the best-performing and the worst-performing methods against a much larger corpus of texts.
For this purpose, we use a sample of 5000 books of the NovelTM dataset of fiction, built from HathiTrust corpus (Underwood et al., 2020). Unlike our small corpus of four genres, these
Figure 7: The influence of distance metrics on ARI scores, separately for each feature type. Note that Jensen–Shannon divergence could not be combined with WGCNA and doc2vec.
books do not have reliable genre tags, so we could not simply repeat our study on this corpus. Instead, we decided to inspect how a larger sample of our four genres (detective, fantasy, science fiction, and romance) would cluster in the HathiTrust corpus. For this, we included all the books in these four genres that we could easily identify (see Supplement for details) and seeded them into a random sample of 5000 works of fiction. Then we clustered all these books using two approaches: a particularly bad combination of methods for identifying genres (weak thematic foregrounding, bag-of-words with 5000 words, cosine distance) and a particularly good one (medium thematic foregrounding, LDA on 1000 words with 100 topics, clustered with Delta distance). The result, visualized with two UMAP projections (McInnes et al., 2018), is shown in **Figure 8**. One combination of methods resulted in a meaningful clustering, while the other resulted in chaos. However, this is only a first step towards further testing various algorithms of computational thematics "in the wild".
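For readers who want to reproduce this kind of visual check, below is a sketch of the projection step using the `uwot` package's UMAP implementation (one of several available in R); the feature matrix and genre labels are random stand-ins for the real data.

```r
library(uwot)      # one possible UMAP implementation in R
library(ggplot2)

# `features` is a documents x features matrix (e.g., topic probabilities);
# `genre` holds one label per document. Random toy stand-ins shown here.
set.seed(1)
features <- matrix(runif(400 * 20), nrow = 400)
genre    <- sample(c("detective", "fantasy", "romance", "scifi"), 400, replace = TRUE)

proj <- umap(features, n_neighbors = 15, min_dist = 0.1)   # returns an n x 2 matrix
plot_df <- data.frame(x = proj[, 1], y = proj[, 2], genre = genre)

ggplot(plot_df, aes(x, y, colour = genre)) +
  geom_point(alpha = 0.6, size = 0.8) +
  theme_minimal()
```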
## Discussion
This study aimed to answer the question: how good are various techniques of learning thematic similarities between works of fiction? In particular, how good are they at detecting genres - and are they good at all? For this, we tested various techniques of text mining, belonging to three consecutive steps of analysis: pre-processing, extraction of features, and measuring distances between the lists of features. We used four common genres of fiction as our "ground truth" data, including a tightly controlled sample of books. Our main finding is that unsupervised learning can be effectively used for detecting thematic similarities, but algorithms differ in their performance. Interestingly, the algorithms that are good for computational stylometry (and its most common task, authorship attribution) are not the same as those good for computational thematics. To give an example, one common approach to authorship attribution - using limited pre-processing, with a small number of most frequent
Figure 8: UMAP projections for a corpus consisting of 5,000 random novels from the NovelTM HathiTrust corpus plus all the novels by the authors included in the original corpus of four genres that could be found in NovelTM. The left-hand figure is clustered based on one of the worst-performing combinations, as identified by our study. The right-hand figure is based on one of the best-performing combinations.
words as features, and cosine distance - is one of the least accurate approaches for learning thematic similarities. How important are these differences in the real-world scenario, not limited to our small sample of books? To test this, we have contrasted one of the worst-performing combinations of algorithms, and one of the best-performing combinations, using a large sample of the HathiTrust corpus of books.
Systematic comparisons between various algorithms for computational thematic analysis will be key for a better understanding of which approaches work and which do not - a requirement for assuring reliable results in the growing area of research which we suggest calling "computational thematics". Using a reliable set of algorithms for thematic analysis would allow tackling several large problems that remain unsolved in the large-scale analysis of books. One such problem is creating better _genre tags_ for systematizing large historical libraries of digitized texts. Manual genre tags in corpora such as HathiTrust are often missing or are highly inconsistent, which leads to attempts to use supervised machine learning, trained on manually tagged texts, to automatically learn the genres of books in the corpus overall. However, this approach, by design, allows capturing only the genres we already know about, and not the genres we do not know exist: "latent" genres. Unsupervised thematic analysis can be used for this task. Another important problem that unsupervised approaches to computational thematics may be good at is historical analysis of _literary evolution_. So far, we are lacking a comprehensive "map" of literary influences, based on the similarity of books. Such a map would allow creating a computational model of literary macroevolution, similar to phylogenetic trees (Bouckaert et al., 2012; Tehrani, 2013) or rooted phylogenetic networks (Neureiter et al., 2022; Youngblood et al., 2021) used in cultural evolution research of languages, music, or technologies. Having reliable unsupervised algorithms for measuring thematic similarities would be crucial for any historical models of this sort. Also, measuring thematic similarities may prove useful for creating _book recommendation_ systems. Currently, book recommendation algorithms are mostly based on the analysis of user behavior: ratings or other forms of interaction (Duchen, 2022). Such methods are highly effective in cases when user-generated data is abundant, as with songs or brief videos. However, for longer content types, which take more time to consume, the amount of user-generated data is much smaller. Improving the tools for content-based similarity detection in books would allow recommending books based on their content - as is already happening with songs: projects such as Spotify's _Every Noise at Once_ ([https://everynoise.com/](https://everynoise.com/)) combine user behavior data with the acoustic features of songs themselves to learn the similarity between songs and recommend them to listeners.
This study is a preliminary attempt at systematizing various approaches to computational thematics. More work is needed to further test the findings of this paper and to overcome its limitations. One apparent limitation is the concept of "ground truth" genres. It may be noted - rightly - that there are no "true" genres and that genre tags overall may not be the best approach for testing thematic similarities. As a further step, we envision using large-scale user-generated tags from Goodreads and similar websites as a proxy for "ground truth" similarity. Also, this study has certainly not exhausted all the possible techniques for text analysis that can be used for computational thematics. For example, much wider testing of vector models - not only doc2vec, but also BERTopic (Grootendorst, 2022) or Top2Vec (Angelov, 2020) - is an obvious next step, as is testing other network-based methods for community detection (Gerlach et al., 2018). Likewise, text simplification could have large potential for thematic analysis, but it must be tested further. Possibly, the most straightforward way to test our findings would be
attempting to replicate our results on other genre corpora, containing more books or other genres. Testing these methods on books in other languages is also critical. The approach taken in this paper offers a simple analytical pipeline - and we encourage other researchers to use it for testing all the various other computational approaches. Such a communal effort will be key for assuring robust results in the area of computational thematics.
### Competing interests
The author(s) declare no competing interests.
### Ethical approval
The study included no human or non-human participants, and thus requires no ethical approval.
### Supplementary Materials
R scripts used in our analysis, as well as a reproducible Supplement with a detailed description of data and methods, can be found together with pre-registration documents on the Open Science Framework website [_anonymized link for peer-review_]: [https://osf.io/rtvb6/?view_only=ea729efa0f504b9b8ea98e24ad60f6b9](https://osf.io/rtvb6/?view_only=ea729efa0f504b9b8ea98e24ad60f6b9). Due to copyright law, we cannot share the corpus of books used in this study; instead, we share document-term matrices based on the samples of this corpus.
## References
* Allison et al. (2011) Allison, S., Heuser, R., Jockers, M., Moretti, F., & Witmore, M. (2011). _Quantitative formalism: An experiment_. [https://core.ac.uk/display/159618661](https://core.ac.uk/display/159618661)
* Angelov (2020) Angelov, D. (2020). Top2Vec: Distributed Representations of Topics. _ArXiv_. [https://www.semanticscholar.org/paper/Top2Vec%3A-Distributed-Representations-of-Topics-Angelov/fda2a8b03fb15a2d8b5c5aeb01d1c0b27fb006b](https://www.semanticscholar.org/paper/Top2Vec%3A-Distributed-Representations-of-Topics-Angelov/fda2a8b03fb15a2d8b5c5aeb01d1c0b27fb006b)
* Bailey et al. (2016) Bailey, P., Chang, D. K., Nones, K., Johns, A. L., Patch, A.-M., Gingras, M.-C., Miller, D. K., Christ, A. N., Bruxner, T. J. C., Quinn, M. C., Nourse, C., Murtaugh, L. C., Harliwong, I., Idrisoglu, S., Manning, S., Nourbakhsh, E., Wani, S., Fink, L., Holmes, O.,... Grimmond, S. M. (2016). Genomic analyses identify molecular subtypes of pancreatic cancer. _Nature_, _531_(7592), Article 7592. [https://doi.org/10.1038/nature16965](https://doi.org/10.1038/nature16965)
* Blei et al. (2003) Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. _The Journal of Machine Learning Research_, \(3\), 993-1022.
* Bories et al. (2023) Bories, A.-S., Plechac, P., & Ruiz Fabo, P. (Eds.). (2023). _Computational Stylistics in Poetry, Prose, and Drama_. De Gruyter. [https://www.degruyter.com/document/isbn/9783110781502/html](https://www.degruyter.com/document/isbn/9783110781502/html)
* Bouckaert et al. (2012) Bouckaert, R., Lemey, P., Dunn, M., Greenhill, S. J., Alekseyenko, A. V., Drummond, A. J., Gray, R. D., Suchard, M. A., & Atkinson, Q. D. (2012). Mapping the Origins and Expansion of the Indo-European Language Family. _Science_, _337_(6097), 957-960. [https://doi.org/10.1126/science.1219669](https://doi.org/10.1126/science.1219669)
Brennan, M., Afroz, S., & Greenstadt, R. (2012). Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity. _ACM Transactions on Information and System Security_, _15_(3), 12:1-12:22. [https://doi.org/10.1145/2382448.2382450](https://doi.org/10.1145/2382448.2382450)
* Burrows (1987) Burrows, J. F. (1987). _Computation into criticism: A study of Jane Austen's novels and an experiment in method_. Clarendon Press.
* Cafiero & Camps (2019) Cafiero, F., & Camps, J.-B. (2019). Why Moliere most likely did write his plays. _Science Advances_, _5_(11), eaax5489.
* Calvo Tello (2021) Calvo Tello, J. (2021). _The Novel in the Spanish Silver Age: A Digital Analysis of Genre Using Machine Learning_. Bielefeld University Press. [https://doi.org/10.1515/9783839459256](https://doi.org/10.1515/9783839459256)
* Chung & Pennebaker (2007) Chung, C., & Pennebaker, J. (2007). The Psychological Functions of Function Words. In _Social communication_ (pp. 343-359). Psychology Press.
* Duchen (2022) Duchen, H. (2022). A Comparative Study of Various Book Recommendation Algorithms for Public Libraries. _Technical Services Quarterly_, _39_(4), 369-380. [https://doi.org/10.1080/07317131.2022.2125676](https://doi.org/10.1080/07317131.2022.2125676)
* Dynomant et al. (2019) Dynomant, E., Lelong, R., Dahamma, B., Massonnaud, C., Kerdelhue, G., Grosjean, J., Canu, S., & Darmoni, S. J. (2019). Word Embedding for French Natural Language in Healthcare: A Comparative Study. In L. Ohno-Machado & B. Seroussi (Eds.), _MEDINFO 2019: Health and Wellbeing e-Networks for All--Proceedings of the 17th World Congress on Medical and Health Informatics, Lyon, France, 25-30 August 2019_ (Vol. 264, pp. 118-122). IOS Press. [https://doi.org/10.3233/SHT190195](https://doi.org/10.3233/SHT190195)
* Eder et al. (2016) Eder, M., Rybicki, J., & Kestemont, M. (2016). Stylometry with R: A Package for Computational Text Analysis. _The R Journal_, _8_(1), 107.
* Egger & Yu (2022) Egger, R., & Yu, J. (2022). A Topic Modeling Comparison Between LDA, NMF, Top2Vec, and BERTopic to Demystify Twitter Posts. _Frontiers in Sociology_, \(7\). [https://www.frontiersin.org/articles/10.3389/fsoc.2022.886498](https://www.frontiersin.org/articles/10.3389/fsoc.2022.886498)
* Elliott (2017) Elliott, J. (2017). Whole genre sequencing. _Digital Scholarship in the Humanities_, _32_(1), 65-79.
* Evert et al. (2017) Evert, S., Proisl, T., Jannidis, F., Reger, I., Pielstrom, S., Schoch, C., & Vitt, T. (2017). Understanding and explaining Delta measures for authorship attribution. _Digital Scholarship in the Humanities_, _32_(suppl_2), ii4-ii16.
* Fowler (1971) Fowler, A. (1971). The Life and Death of Literary Forms. _New Literary History_, _2_(2), 199-216. [https://doi.org/10.2307/468599](https://doi.org/10.2307/468599)
* Gerlach et al. (2018) Gerlach, M., Peixoto, T. P., & Altmann, E. G. (2018). A network approach to topic models. _Science Advances_, _4_(7), eaaq1360. [https://doi.org/10.1126/sciadv.aaq1360](https://doi.org/10.1126/sciadv.aaq1360)
* Grave et al. (2018) Grave, E., Bojanowski, P., Gupta, P., Joulin, A., & Mikolov, T. (2018, May). Learning Word Vectors for 157 Languages. _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)_. LREC 2018, Miyazaki, Japan. [https://aclanthology.org/L18-1550](https://aclanthology.org/L18-1550)
* Grootendorst (2022) Grootendorst, M. (2022). _BERTopic: Neural topic modeling with a class-based TF-IDF procedure_ (arXiv:2203.05794). arXiv. [https://doi.org/10.48550/arXiv.2203.05794](https://doi.org/10.48550/arXiv.2203.05794)
* Honnibal & Montani (2017) Honnibal, M., & Montani, I. (2017). spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. _To Appear_.
* Hubert & Arabie (1985) Hubert, L., & Arabie, P. (1985). Comparing partitions. _Journal of Classification_, _2_(1), 193-218. [https://doi.org/10.1007/BF01908075](https://doi.org/10.1007/BF01908075)
* Hughes et al. (2012) Hughes, J. M., Foti, N. J., Krakauer, D. C., & Rockmore, D. N. (2012). Quantitative patterns of stylistic influence in the evolution of literature. _Proceedings of the National Academy of Sciences_, _109_(20), 7682-7686.
* [10] Iosifyan, M., & Vlasov, I. (2020). And Quiet Flows the Don: The Sholokhov-Kryukov authorship debate. _Digital Scholarship in the Humanities_, _35_(2), 307-318. [https://doi.org/10.1093/llc/fqz017](https://doi.org/10.1093/llc/fqz017)
* [11] Jockers, M. L. (2013). _Macroanalysis: Digital Methods and Literary History_. University of Illinois Press.
* [12] Kim, D., Seo, D., Cho, S., & Kang, P. (2019). Multi-co-training for document classification using various document representations: TF-IDF, LDA, and Doc2Vec. _Information Sciences_, 477, 15-29. [https://doi.org/10.1016/j.ins.2018.10.006](https://doi.org/10.1016/j.ins.2018.10.006)
* [13] Klimek, P., Kreuzbauer, R., & Thurner, S. (2019). Fashion and art cycles are driven by counter-dominance signals of elite competition: Quantitative evidence from music styles. _Journal of The Royal Society Interface_, 16(151), 20180731. [https://doi.org/10.1098/rsif.2018.0731](https://doi.org/10.1098/rsif.2018.0731)
* [14] Langfelder, P., & Horvath, S. (2008). WGCNA: An R package for weighted correlation network analysis. _BMC Bioinformatics_, 9(1), 559.
* [15] Lau, J. H., & Baldwin, T. (2016). An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation. _Proceedings of the 1st Workshop on Representation Learning for NLP_, 78-86. [https://doi.org/10.18653/v1/W16-1609](https://doi.org/10.18653/v1/W16-1609)
* [16] Le, Q., & Mikolov, T. (2014). Distributed Representations of Sentences and Documents. _Proceedings of the 31st International Conference on Machine Learning_, 1188-1196. [https://proceedings.mlr.press/v32/le14.html](https://proceedings.mlr.press/v32/le14.html)
* [17] Liu, L., Dehmamy, N., Chown, J., Giles, C. L., & Wang, D. (2021). Understanding the onset of hot streaks across artistic, cultural, and scientific careers. _Nature Communications_, 12(1), 5392. [https://doi.org/10.1038/s41467-021-25477-8](https://doi.org/10.1038/s41467-021-25477-8)
* [18] McInnes, L., Healy, J., Saul, N., & Grossberger, L. (2018). UMAP: Uniform Manifold Approximation and Projection. _Journal of Open Source Software_, 3(29), 861. [https://doi.org/10.21105/joss.00861](https://doi.org/10.21105/joss.00861)
* [19] Mosteller, F., & Wallace, D. L. (1963). Inference in an Authorship Problem. _Journal of the American Statistical Association_, 58(302), 275-309.
* [20] Neal, T., Sundararajan, K., Fatima, A., Yan, Y., Xiang, Y., & Woodard, D. (2017). Surveying Stylometry Techniques and Applications. _ACM Computing Surveys_, 50(6), 86:1-86:36. [https://doi.org/10.1145/3132039](https://doi.org/10.1145/3132039)
* [21] Neureiter, N., Ranacher, P., Efrat-Kowalsky, N., Kaiping, G. A., Weibel, R., Widmer, P., & Bouckaert, R. R. (2022). Detecting contact in language trees: A Bayesian phylogenetic model with horizontal transfer. _Humanities and Social Sciences Communications_, 9(1), Article 1. [https://doi.org/10.1057/s41599-022-01211-7](https://doi.org/10.1057/s41599-022-01211-7)
* Utrecht. [https://dh-abstracts.library.cmu.edu/works/10014](https://dh-abstracts.library.cmu.edu/works/10014)
* [23] Piper, A., Bagga, S., Monteiro, L., Yang, A., Labrosse, M., & Liu, Y. L. (2021). Detecting Narrativity Across Long Time Scales. _CHR 2021: Computational Humanities Research Conference_, 319-332.
* [24] Plechac, P. (2021). Relative contributions of Shakespeare and Fletcher in Henry VIII: An analysis based on most frequent words and most frequent rhythmic patterns. _Digital Scholarship in the Humanities_, 36(2), 430-438. [https://doi.org/10.1093/llc/fqaa032](https://doi.org/10.1093/llc/fqaa032)
* [25] Plechac, P., Bobenhausen, K., & Hammerich, B. (2018). Versification and authorship attribution. A pilot study on Czech, German, Spanish, and English poetry. _Studia Metrica et Poetica_, 5(2), 29-54.
* [26] Pranjic, M., Podpecan, V., Robnik-Šikonja, M., & Pollak, S. (2020). Evaluation of related news recommendations using document similarity methods. _Conference on Language Technologies & Digital Humanities_, 81-86.
* Ramirez-Gonzalez et al. (2018) Ramirez-Gonzalez, R. H., Borril, P., Lang, D., Harrington, S. A., Brinton, J., Venturini, L., Davey, M., Jacobs, J., van Ex, F., Pasha, A., Khedikar, Y., Robinson, S. J., Cory, A. T., Florio, T., Concia, L., Juery, C., Schoonbeek, H., Steuermagel, B., Xiang, D.,... Uauy, C. (2018). The transcriptional landscape of polyploid wheat. _Science_, _361_(6403), eaar6089. [https://doi.org/10.1126/science.aar6089](https://doi.org/10.1126/science.aar6089)
* Sbalchiero and Eder (2020) Sbalchiero, S., & Eder, M. (2020). Topic modeling, long texts and the best number of topics. Some Problems and solutions. _Quality & Quantity_, _54_(4), 1095-1108. [https://doi.org/10.1007/s11135-020-00976-w](https://doi.org/10.1007/s11135-020-00976-w)
* Schoch (2017) Schoch, C. (2017). Topic Modeling Genre: An Exploration of French Classical and Enlightenment Drama. _Digital Humanities Quarterly_, _011_(2).
* Sela et al. (2022) Sela, A., Plechac, P., & Lassche, A. (2022). Semantics of European poetry is shaped by conservative forces: The relationship between poetic meter and meaning in accentual-syllabic verse. _PLOS ONE_, _17_(4), e0266556. [https://doi.org/10.1371/journal.pone.0266556](https://doi.org/10.1371/journal.pone.0266556)
* Sigaki et al. (2018) Sigaki, H. Y. D., Perc, M., & Ribeiro, H. V. (2018). History of art paintings through the lens of entropy and complexity. _Proceedings of the National Academy of Sciences_, _115_(37), E8585-E8594. [https://doi.org/10.1073/pnas.1800083115](https://doi.org/10.1073/pnas.1800083115)
* Stamatatos (2009) Stamatatos, E. (2009). A survey of modern authorship attribution methods. _Journal of the American Society for Information Science and Technology_, _60_(3), 538-556. [https://doi.org/10.1002/asi.21001](https://doi.org/10.1002/asi.21001)
* Symons (1985) Symons, J. (1985). _Bloody Murder: From the Detective Story to the Crime Novel : a History_. Viking.
* Tehrani (2013) Tehrani, J. J. (2013). The Phylogeny of Little Red Riding Hood. _PLOS ONE_, _8_(11), e78871.
* Thelwall (2019) Thelwall, M. (2019). Reader and author gender and genre in Goodreads. _Journal of Librarianship and Information Science_, _51_(2), 403-430. [https://doi.org/10.1177/0961000617709061](https://doi.org/10.1177/0961000617709061)
* Underwood (2016) Underwood, T. (2016). The Life Cycles of Genres. _Journal of Cultural Analytics_, _2_(2). [https://doi.org/10.22148/16.005](https://doi.org/10.22148/16.005)
* Underwood (2019) Underwood, T. (2019). _Distant Horizons_. The University of Chicago Press.
* Underwood et al. (2022) Underwood, T., Kiley, K., Shang, W., & Vaisey, S. (2022). Cohort Succession Explains Most Change in Literary Culture. _Sociological Science_, \(9\), 184-205. [https://doi.org/10.15195/v9.a8](https://doi.org/10.15195/v9.a8)
* Underwood et al. (2020) Underwood, T., Kimutis, P., & Witte, J. (2020). NovelTM datasets for English-language fiction, 1700-2009. _Journal of Cultural Analytics_, _5_(2). [https://doi.org/10.22148/001c.13147](https://doi.org/10.22148/001c.13147)
* Ward (1963) Ward, J. H. (1963). Hierarchical Grouping to Optimize an Objective Function. _Journal of the American Statistical Association_, _58_(301), 236-244. [https://doi.org/10.2307/2282967](https://doi.org/10.2307/2282967)
* Youngblood et al. (2021) Youngblood, M., Baraghith, K., & Savage, P. E. (2021). Phylogenetic reconstruction of the cultural evolution of electronic music via dynamic community detection (1975-1999). _Evolution and Human Behavior_, _42_(6), 573-582. [https://doi.org/10.1016/j.evolhumbehav.2021.06.002](https://doi.org/10.1016/j.evolhumbehav.2021.06.002)
## Supplement
## 1 Corpus summary
The corpus was constructed so that books roughly span the same time period across genres (Figure S1); also, each genre subcorpus does not include more than three books per author (Figure S2). The total number of authors contributing to each genre was also similar in each subcorpus.
Figure S1: Distribution of books through time within each genre.
## 2 Preprocessing
### Thematic foregrounding: weak
At the first level of thematic foregrounding, we remove the 100 most frequent words (MFWs) from the analysis. The 100 MFWs roughly correspond to function words (or closed-class words) in English (Chung & Pennebaker, 2007; Stamatatos, 2009) that have been routinely used in authorship attribution since the classical study of _The Federalist Papers_ (Mosteller & Wallace, 1963). MFWs can be removed to cheaply lower the impact of style (which heavily depends on grammar and syntactic differences) in favor of semantics and content.
### Thematic foregrounding: medium
At the second level of thematic foregrounding, words are pruned systematically based on morphology: we allow only nouns, adjectives, verbs, and adverbs (auxiliary verbs are excluded too). We also remove named entities and proper nouns, which might be specific to an author or a series of novels. Morphological tagging and named entity recognition were done with a basic spaCy language model for English due to its accessibility.
We did not use an external list of stopwords, since these lists are often arbitrary, can significantly alter results, and are dominated by industry (specifically, information retrieval) standards. Lately, there has been a tendency to minimize stopword usage (Calvo Tello, 2021; Underwood et al., 2022), or to avoid stopword lists entirely in tasks like topic inference in a collection of documents (e.g., the top2vec algorithm (Angelov, 2020)).
### Thematic foregrounding: strong
The third level of thematic foregrounding includes the steps from the medium foregrounding level and adds naive semantic simplification. We reduce the sparseness of the feature space by turning less frequent words into more frequent words from similar semantic domains. We replace a word outside of the 1000 MFWs with its closest semantic neighbor (out of its 10 closest neighbors) if this neighbor \(n \in MFW\). To infer semantic similarity, we use an off-the-shelf FastText model (Mikolov et al., 2018), which includes 2M words and is trained on English Wikipedia, providing a slice of 'modern' language use. Again, this model is easily accessible and scalable to different tasks or languages.
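Below is a minimal R sketch of this replacement rule. It assumes the pre-trained word vectors have already been loaded into a matrix `vecs` (one row per word, with words as row names) and that `mfw` holds the 1000 most frequent words of the corpus; loading the FastText vectors themselves is left out.

```r
# Naive lexical simplification: map a rare word to its closest MFW neighbour
# (if any) among its 10 nearest neighbours in the embedding space.
simplify_word <- function(word, vecs, mfw, n_neighbors = 10) {
  if (word %in% mfw || !(word %in% rownames(vecs))) return(word)

  # cosine similarity of `word` to every word in the vocabulary
  v <- vecs[word, ]
  sims <- as.vector(vecs %*% v) / (sqrt(rowSums(vecs^2)) * sqrt(sum(v^2)))
  names(sims) <- rownames(vecs)

  # nearest neighbours, excluding the word itself
  nn <- setdiff(names(sort(sims, decreasing = TRUE)), word)[seq_len(n_neighbors)]
  hit <- nn[nn %in% mfw]
  if (length(hit) > 0) hit[1] else word   # keep the original word if no MFW neighbour
}
```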
Table S1 presents a random example of 20 semantic replacements.
As seen from the examples, this lexical simplification can loosely sort target words into semantic domains represented by their more frequent semantic neighbors and, in some cases, clean up the original texts (looove -> love). Noise is present too, both from the domain-specific language of the underlying word2vec model (download -> free) and from the lack of context-based semantic disambiguation (filmclip -> song).
Finally, Figure S3 shows the filtering effects that the different pre-processing strategies have on the corpus. The largest drop in word-type diversity, predictably, happens after morphological filtering at medium thematic foregrounding; our naive lexical simplification removes another 5% of word types while preserving the number of tokens.
## 3 Features
### Bag of words
A classic multivariate representation of texts as bags of words. We follow the stylometric tradition that assumes any weighting is part of the distance measure (e.g. Burrows' Delta is scaled Manhattan distance; see more about scaling features and vector-length normalization in Evert et al. (2017)), so we only transform word frequencies into relative frequencies, ultimately treating a text as a probability distribution over an ordered set of words (arranged by their frequency) and defined by an MFW cut-off. Different weighting techniques are widely used in information retrieval (TF-IDF, logarithmic transformation, etc.), but these are more suitable as an
\begin{table}
\begin{tabular}{l|l} \hline source & replacement \\ \hline sundress & skirt \\ \hline download & free \\ \hline unlease & destroy \\ \hline recycling & waste \\ \hline organically & grow \\ \hline trident & sword \\ \hline snowed & snow \\ \hline redeye & flight \\ \hline cardholder & card \\ \hline generate & create \\ \hline struggling & struggle \\ \hline dab & rub \\ \hline houseparty & party \\ \hline looove & love \\ \hline grandmaster & master \\ \hline filmclip & song \\ \hline vantage & view \\ \hline endeavor & effort \\ \hline framing & frame \\ \hline files & file \\ \hline \end{tabular}
\end{table}
Table S 1: A random sample of 20 ’simplified’ words. ’Source’ column holds original words.
input to supervised learning (Calvo Tello, 2021) and might interfere with distance calculations (e.g. transforming original probabilities to something else).
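As an illustration of the representation itself, the sketch below builds MFW-cut relative-frequency vectors from already tokenized novels; the helper names and the 5000-word cut-off are assumptions for the example, not the study's code.

```python
# Sketch: each novel as relative frequencies over the top-N most frequent words.
from collections import Counter

def bow_matrix(tokenized_novels, n_mfw=5000):
    corpus_counts = Counter()
    for toks in tokenized_novels:
        corpus_counts.update(toks)
    vocab = [w for w, _ in corpus_counts.most_common(n_mfw)]   # MFW cut-off
    rows = []
    for toks in tokenized_novels:
        c = Counter(toks)
        total = sum(c[w] for w in vocab) or 1
        rows.append([c[w] / total for w in vocab])   # probability distribution over MFWs
    return vocab, rows
```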
### Topic probabilities (LDA)
Latent Dirichlet Allocation (Blei et al., 2003) is the most widely used probabilistic topic-modeling algorithm and still performs competitively with newer methods (Harrando et al., 2021). LDA infers groups of words (topics) based on their co-occurrence in documents. Because LDA is generative, we can in turn represent each document as a probability distribution over topics. The compact lexical representation also makes the feature space more interpretable. We use the topicmodels LDA implementation in R (Grun and Hornik, 2011). We vary the parameter \(k\) (number of topics) and the number of MFWs used. We leave the hyperparameters _alpha_ and _delta_ at their 0.1 defaults and do not rely on coherence/perplexity measures, since we do not aim to fine-tune the LDA to a particular corpus; there is also empirical evidence that perceived LDA performance does not completely align with validation measures (Hoyle et al., 2021; see also Antoniak, 2022, for a summary of research on LDA performance).
An important pre-processing step for LDA is the chunking of texts. A complete novel is too large a context for inferring topics: in large documents too many words co-occur with too many other words. Thus, instead of representing each novel as a single bag of words, we represent it as many smaller bags of words from consecutive parts (chunks) before training an LDA. We use an arbitrary chunk size of 1000 words, but other structural cues (paragraphs, pages, chapters) could also be good bases for chunking. We aggregate the probabilities from these smaller documents back to a single novel by taking the average of the probability distributions (a centroid).
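The chunk-then-average logic can be sketched as follows. The study itself uses the R topicmodels package; the scikit-learn calls below are only an assumed stand-in to make the pipeline concrete, and the parameter values mirror one of the tested configurations.

```python
# Sketch of the chunk-then-average LDA pipeline (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def chunk(tokens, size=1000):
    return [" ".join(tokens[i:i + size]) for i in range(0, len(tokens), size)]

def novel_topic_centroids(novels_tokens, n_topics=100, n_mfw=1000):
    chunks, owners = [], []
    for idx, toks in enumerate(novels_tokens):
        for ch in chunk(toks):
            chunks.append(ch)
            owners.append(idx)
    vec = CountVectorizer(max_features=n_mfw)           # DTM cut at N MFWs
    dtm = vec.fit_transform(chunks)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    theta = lda.fit_transform(dtm)                       # chunk-topic probabilities
    owners = np.array(owners)
    # average chunk distributions back to one centroid per novel
    return np.vstack([theta[owners == i].mean(axis=0)
                      for i in range(len(novels_tokens))])
```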
Table S2 demonstrates a sample of topics (10 most probable words per topic) from a model built on texts at the medium level of thematic foregrounding, with 100 topics and the document-term matrix (DTM) cut at 1000 MFWs. Topics clearly capture thematic groups like locations and settings, and are often linked to actions and relationships.
### Module's weights (WGCNA)
Weighted gene correlation network analysis (WGCNA) is similar to LDA but comes from a different research field: genetics. Some have pointed out its promising features for text analysis, such as relative independence from high-frequency function words (Elliott, 2017). WGCNA has one advantage over LDA: there is no need to guess the optimal number of topics, as WGCNA "modules" are determined automatically from a network of similarity in behavior between traits. Internally, WGCNA already relies on hierarchical clustering to derive modules that describe the variation across individuals/documents, and it can be greedy, reducing the word behavior of distinct genres to only one or two modules, especially if the analyzed texts have been chunked.
An example of this behavior from one of the sampling runs (120 novels) is presented in Figure S4. WGCNA was run on chunked novels, with medium thematic foregrounding and 5000 MFWs. We use the implementation of the algorithm by Langfelder & Horvath (2008). Unsurprisingly, the algorithm derived only one module of words that shows almost perfectly opposite expression in detective and fantasy fiction: these genres are the easiest to distinguish. However, one module is too greedy when it comes to clustering: romance and sci-fi are simply mixed into the two other distinct genres (and share more similarity with detectives than with fantasy).
To find the most defining words for this global module we use a connectivity measure: in this case, the words with the highest positive correlation to an "eigengene", which is the joint expression of a module across samples/documents. Figure S5 shows the 20 words most correlated with the module from the same sample as in Figure S4. It is quite clear that these are words from a police-procedural universe and, more generally, words of a 'modern-like' urban setting, which also explains this module's expression in romance and science fiction. Conversely, the most inversely correlated words point to the open spaces of adventure, magic, and medieval attributes.
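Conceptually, the eigengene and the connectivity measure can be sketched outside the WGCNA R package as follows: the eigengene is the first principal component of the documents-by-words submatrix of a module, and connectivity is each word's correlation with it. The numpy sketch below illustrates only this idea and is not the package's implementation.

```python
# Conceptual sketch (not the WGCNA R package): a module "eigengene" is the
# first principal component of the documents-by-words submatrix for that
# module; connectivity is each word's correlation with the eigengene.
import numpy as np

def module_eigengene(freq_matrix):
    """freq_matrix: documents x words for one module (raw or relative counts)."""
    X = (freq_matrix - freq_matrix.mean(axis=0)) / (freq_matrix.std(axis=0) + 1e-12)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    eigengene = X @ vt[0]                      # scores on the first PC
    corrs = np.array([np.corrcoef(X[:, j], eigengene)[0, 1]
                      for j in range(X.shape[1])])
    return eigengene, corrs                    # corrs ranks words by connectivity
```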
To give an example of WGCNA producing several meaningful distinct modules akin to LDA topics, we can use another model without chunking (medium thematic foregrounding, 5000 MFWs). Examples of the 10 words most closely correlated with a sample of modules are presented in Table S3. Unsurprisingly, removing chunking makes modules closely associated with specific books.
### Document embeddings (doc2vec)
Doc2vec directly embeds documents into a latent semantic space that is defined by a distributional language model. In the end, each document is represented as a vector in this
Figure S 5: Single-module WGCNA model: 5000 MFWs, medium thematic foregrounding, 1000-word chunks. Word correlations to the module’s eigengene; the 20 most and 20 least correlated words.
\(N\)-dimensional space (the dimensionality depends on the underlying model). Again, we embed each novel split into chunks, in order to capture semantic variation on a small scale, and then average the vectors to get a single-vector-per-novel representation (a centroid of chunk vectors). We chose chunks of 800 words, a pretrained FastText model (Mikolov et al., 2018) for the vector representation of word semantics (300 dimensions, 2M words, trained on Wikipedia), and a doc2vec implementation that allows fine-tuning and follows Angelov's (2020) algorithm.
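The chunk-and-average step can be sketched as below. Note that the study uses a doc2vec/top2vec-style document embedding; averaging word vectors per chunk, as done here, is only an assumed simplification to illustrate how chunk vectors are reduced to a per-novel centroid.

```python
# Sketch of the chunk-and-average document representation (illustrative).
import numpy as np

def novel_vector(tokens, vectors, chunk_size=800):
    chunk_vecs = []
    for i in range(0, len(tokens), chunk_size):
        words = [w for w in tokens[i:i + chunk_size] if w in vectors]
        if words:
            chunk_vecs.append(np.mean([vectors[w] for w in words], axis=0))
    # centroid of chunk vectors = single-vector-per-novel representation
    return np.mean(chunk_vecs, axis=0) if chunk_vecs else None
```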
Figure S6 shows UMAP projections of averaged novel vectors.
## 4 Clustering and validation
### Distance measures
We infer similarity between novels by calculating pairwise distances between representations / vectors. We test several classic distances that are used for measuring text similarity (and are widely used in stylometry): Euclidean, Manhattan, Burrows' delta (scaled Manhattan), cosine, cosine delta (scaled cosine) and Jensen-Shannon divergence.
**Euclidean.** Square root of the sum of squared pairwise differences in features
\[Euc(A,B)=\sqrt{\sum(A_{i}-B_{i})^{2}}\]
**Manhattan**. Sum of the absolute pairwise differences in dimensions (cityblock distance)
\[Man(A,B)=\sum_{i}|A_{i}-B_{i}|\]
**Burrows' delta**. Mean of the absolute pairwise differences in scaled dimensions, normalized by vector length (Burrows, 2002). \(z(A_{i})\) is the scaled and centered variable \(i\) in text \(A\).
\[\Delta(A,B)=\frac{\sum_{i}|z(A_{i})-z(B_{i})|}{N}\]
**Cosine**. 1 - cosine similarity.
\[cos(A,B)=1-\frac{\sum A_{i}B_{i}}{\sqrt{\sum A_{i}^{2}}\sqrt{\sum B_{i}^{2}}}\]
**Cosine delta**. Same as cosine, but features are scaled (Evert et al., 2017).
**Jensen-Shannon Divergence**. Symmetrized Kullback-Leibler divergence. Not meaningful for feature vectors that are not probability distributions but weights (e.g. WGCNA, doc2vec).
\[JSD(A,B)=0.5\sum_{i}(A_{i}-B_{i})\,\bigl(\operatorname{logit}(A_{i})-\operatorname{logit}(B_{i})\bigr)\]
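A compact sketch of these measures over two feature vectors is given below. The implementations follow the standard definitions; the Jensen-Shannon divergence uses scipy's symmetric form, which may differ in detail from the exact variant computed in the study, and "cosine delta" is simply cosine distance over column-scaled features.

```python
# Sketch of the pairwise distances used for novel similarity (illustrative).
import numpy as np
from scipy.spatial.distance import jensenshannon

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def manhattan(a, b):
    return np.sum(np.abs(a - b))

def zscore(X):
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)   # column-wise scaling

def burrows_delta(za, zb):
    return np.mean(np.abs(za - zb))          # scaled Manhattan, normalized by N

def cosine_dist(a, b):                       # apply to zscore()d rows for cosine delta
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def jsd(a, b):
    return jensenshannon(a, b) ** 2           # scipy returns the square root
```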
### Clustering
In principle, any other clustering algorithm could have been used (e.g. k-means). We use hierarchical clustering with Ward's linkage, which merges items so as to minimize within-cluster variance. Despite being originally defined only for Euclidean distances, Ward's algorithm has been shown empirically to outperform other linkage strategies in text-clustering tasks (Ochab et al., 2019).
We assume that novels from the four defined genres should roughly form four distinct clusters (the similarity of texts within a genre is greater than the similarity of texts across genres). To obtain groupings from the resulting tree, we cut it by the number of assumed clusters (k=4). We then compare the resulting classes to the ideal clustering using the Adjusted Rand Index (for similar usages in unsupervised clustering of literary texts, see Cafiero and Camps, 2019; Sela et al., 2022). ARI takes values between 0 and 1, where 1 would be a perfect classification and 0 would mean clustering no better than random.
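The evaluation step (Ward-linkage clustering of a pairwise distance matrix, cutting the tree at k=4, and scoring the partition against genre labels with ARI) can be sketched as follows; the scipy/scikit-learn calls are assumptions for the illustration, not the study's own code.

```python
# Sketch: Ward-linkage clustering of a pairwise distance matrix, cut at k=4,
# scored against known genre labels with the Adjusted Rand Index.
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def cluster_ari(dist_matrix, genre_labels, k=4):
    condensed = squareform(dist_matrix, checks=False)     # square -> condensed form
    tree = linkage(condensed, method="ward")
    clusters = fcluster(tree, t=k, criterion="maxclust")  # cut the tree at k clusters
    return adjusted_rand_score(genre_labels, clusters)
```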
### Dendrogram of all novels
To provide an example of clustering performance, we build a dendrogram for all the novels in the four genres (Figure S7). The underlying features are document embeddings at the medium level of thematic foregrounding, and we use cosine distance for the dissimilarity calculation. Colors of the branches are based on the majority of genre neighbors. The Adjusted Rand Index of the tree presented below is 0.786.
Figure S 7: Hierarchical clustering of the full corpus, doc2vec, medium thematic foregrounding, Ward’s linkage. Clusters are colored by the dominant genre, up until a tree is cut to four major clusters.
### Confusion matrices
As seen from several figures above (S6, S7), genres differ in clustering consistency: detective and fantasy books group together better than science fiction and romance. To address this difference in behavior we create a confusion matrix, based on all 100 cross-validation runs, which shows the dispersion of books across the four clusters. Since this is not a supervised classification, a confusion matrix requires some heuristics to determine which clusters correspond to which genres in each clustering tree and can only show approximate results (we assume a cluster to be the 'detective' cluster if the majority of books in this cluster are detectives).
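This majority-label heuristic can be sketched as follows; function and variable names are illustrative only.

```python
# Sketch of the labelling heuristic: a cluster is assigned to the genre that
# forms the majority of its books, then assignments are tallied into a
# genre-by-genre confusion matrix of row-normalized shares.
from collections import Counter
import numpy as np

def confusion_from_clusters(genres, clusters, genre_order):
    cluster_to_genre = {}
    for c in set(clusters):
        members = [g for g, cl in zip(genres, clusters) if cl == c]
        cluster_to_genre[c] = Counter(members).most_common(1)[0][0]  # majority genre
    mat = np.zeros((len(genre_order), len(genre_order)))
    for g, cl in zip(genres, clusters):
        mat[genre_order.index(g), genre_order.index(cluster_to_genre[cl])] += 1
    return mat / mat.sum(axis=1, keepdims=True)
```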
This confusion matrix presents the total share of labeled novels that end up in different clusters across 29100 confusion matrices (100 samples, 291 clustering rounds in each). As usual, in the case of a perfect clustering, the diagonal of the matrix would contain "1". As expected, we see that the most diffused genres are romance (often grouped with detectives, 30% of hits) and science fiction (often grouped with fantasy, 24% of hits).
However, not all the methods summarized in the matrix above are equal, and some distance measures (like Euclidean for bag of words) are 'bad choices' by default. To trim the matrix a little, we can follow the strategy that we also employ for modeling: use only the well-suited distance measure for each method and remove chunked WGCNA, which proved to be a poor choice for thematic clustering.
Now the "good" clustering numbers are higher, but the difference between romance and science fiction becomes more pronounced. Comparatively, romance tends to form much more diffuse clusters than science fiction (this tendency is visible in Figure S6).
Do different methods result in different sensitivity to genres and cluster formation? Figure S8 presents a breakdown of the confusion matrix by document representation method.
Figure S 8: Confusion matrices by different feature types.
Overall, the same pattern holds across all methods, which is to be expected: they all rely on the same lexical frequency-based information. There is an advantage for doc2vec, since it uses an external representation of word co-occurrence based on a very large corpus, but its higher numbers compared to LDA should not be taken at face value either, since it has fewer degrees of freedom (and, as a result, fewer ways to fail): doc2vec was used in only 3 different combinations per sample, while, for instance, LDA was used in 27.
## 5 Analysis
### Linear models
Figure S9 shows the overall distribution of ARI values, with and without the chunked WGCNA option. The concentration of values at zero comes from distance choices that are inadequate for a given feature space (e.g. Euclidean for bag of words). When the data are filtered by better-performing distances, the distribution is not zero-inflated (see Figure S12).
Alongside distance calculation and hierarchical clustering, we ran \(k\)-means clustering (with \(k=4\)), but its average performance in separating books into four clusters (Figure S10), as measured by ARI, was considerably worse.
#### 5.1.1 Distance selection
To simplify inference we deal only with results obtained with a _suited_ distance measure for each feature type: we select the best-performing distance measure per feature. This is done to remove the factor of distances altogether and to equalize the models' chances for comparison. There is no real reason to lump results from different distance measures together, since different data (e.g. probabilities vs. feature weights) have different sensitivity to distance selection, while some distances were not measured for some feature types (JSD for WGCNA and doc2vec).
To choose the suited distances we fit a simple model ari ~ 1 + feature*distance to get estimates of each distance measure's performance with each feature type (Figure S11). All further models were built using the distances with the highest posterior averages.
The list of chosen distances with estimates:
1. BoW: Jensen-Shannon (0.58)
2. LDA: Jensen-Shannon (0.57)
3. WGCNA: cosine delta (0.53)
4. doc2vec: cosine (0.65)
Figure S12 shows the ARI distribution for the filtered set of distances.
Figure S 11: Posterior predictions for distance performance across methods.
Figure S 12: Distribution of ARI values when results are filtered by suited distances
#### 5.1.2 General model: effect of thematic foregrounding
What is the effect of thematic foregrounding for different feature types? For this model data was filtered by removing chunked WGCNA results and selecting distances with the highest average.
We fit a multilevel model with an interaction between method (\(Feature\)) and the level of thematic foregrounding (\(Level\)), pooled by individual samples. In R brms formula notation, it is ari ~ 1 + Feature * Level + (1|sample). We use regularizing priors for the 'intercept' and 'slope' coefficients, as seen in the expanded model notation below. (We use dummy coding with the brms interface for categorical variables, so the \(\beta\) coefficients represent the _difference_ between a \(Feature,Level\) combination and the reference 'intercept', which is doc2vec at level 1; \(\beta\sim Normal(0,0.1)\) does not expect any difference on average.) We use \(Level\) as shorthand for the _level of thematic foregrounding_ in the notation. All further models have the same structure and priors.
\[ARI_{i} \sim Normal(\mu_{i},\sigma)\] \[\mu_{i} =\alpha+\beta_{f}\,Feature_{i}+\beta_{l}\,Level_{i}+\beta_{fl}\,Feature_{i}\times Level_{i}+\delta_{sample[i]}\] \[\alpha \sim Normal(0.5,0.1)\] \[\beta_{f,l,fl} \sim Normal(0,0.1)\] \[\delta \sim Normal(0,\sigma_{\delta})\] \[\sigma \sim Exponential(1)\] \[\sigma_{\delta} \sim Exponential(1)\]
We model the \(Level\) of thematic foregrounding as a categorical variable, not an ordinal one, because we constructed the 'levels' artificially: there might not be any _order_ in the relationship between them. That said, modeling \(Level\) via monotonic effects would still work, and the resulting models are similar (as shown by leave-one-out cross-validation in Table S6). Additionally, including varying slopes for individual samples does not improve model prediction much, which suggests that, across the 100 samples of texts, methods and thematic foregrounding behaved similarly relative to each other. Since adding slopes to random effects can complicate model fitting and chain convergence, we instead fit models grouped by samples only.
Multilevel models with group-level effects for individual samples are always a better fit than those without. They allow the model to be more uncertain about the mean estimates, since clustering results differ notably from sample to sample.
The left-hand side of Figure S13 shows posterior ARI means for each \(Level\) and each \(Feature\) type. The right-hand side shows the same relationship, but with the mean taken marginal of samples: the credible intervals are now much wider.
At the medium and strong levels of thematic foregrounding, three out of four feature types seem to behave similarly, with doc2vec having the upper hand. We can directly compare their posterior distributions (Figure S14). Dotted lines represent the mean of the distribution for each feature.
Figure S 13: Posterior predictions for methods behavior across thematic foregrounding levels (left). Pooled means, marginal of 100 sample runs (right).
Sampling introduces considerable variation into the behavior of all feature types. We can use posterior predictions to check differences in specific samples (10 samples drawn at random, Figure S15). Note that doc2vec has only one observation per sample for each level, but the model uses the grand mean to keep estimates conservative.
#### 5.1.3 Overall best performance, distances filtered
To get an overall picture of comparable method performance we filter results by selected distances only. Figure S16 shows ARI boxplots per each of 51 combinations.
#### 5.1.4 All combinations
Table S7 records all 291 combinations, without any filters, arranged by descending median ARI.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline rank & method & TF & distance & median.ARI & SD \\ \hline
1 & d2v\_lvl\_3 & Strong & cosine & 0.703 & 0.092 \\
2 & d2v\_lvl\_3 & Strong & euclidean & 0.687 & 0.097 \\
3 & d2v\_lvl\_3 & Strong & cosine delta & 0.681 & 0.104 \\
4 & LDA\_lvl\_3\_50\_5k & Strong & jensen-shannon & 0.677 & 0.107 \\
5 & LDA\_lvl\_3\_50\_5k & Strong & delta & 0.672 & 0.099 \\
6 & LDA\_lvl\_2\_100\_5k & Medium & manhattan & 0.671 & 0.096 \\
7 & LDA\_lvl\_3\_100\_1k & Strong & jensen-shannon & 0.67 & 0.086 \\
8 & d2v\_lvl\_2 & Medium & cosine & 0.668 & 0.093 \\
9 & lvl\_3\_BoW & Strong & jensen-shannon & 0.665 & 0.11 \\
10 & LDA\_lvl\_2\_50\_5k & Medium & jensen-shannon & 0.661 & 0.092 \\
11 & d2v\_lvl\_2 & Medium & cosine delta & 0.657 & 0.125 \\
12 & LDA\_lvl\_3\_20\_1k0 & Strong & jensen-shannon & 0.657 & 0.114 \\
13 & lvl\_3\_BoW & Strong & jensen-shannon & 0.657 & 0.093 \\
14 & LDA\_lvl\_3\_100\_5k & Strong & jensen-shannon & 0.656 & 0.074 \\
15 & LDA\_lvl\_3\_20\_5k & Strong & jensen-shannon & 0.656 & 0.116 \\
16 & LDA\_lvl\_3\_50\_1k & Strong & jensen-shannon & 0.656 & 0.099 \\ \hline \hline \end{tabular}
\end{table}
Table S 7: Median performance of all 291 method + distance combinations
Figure S 15: Posterior predictions for 10 random individual samples, superimposed on the empirical data points. Varying slopes model.
\begin{tabular}{c c c c c c} \hline \hline rank & method & TF & distance & median.ARI & SD \\ \hline
17 & d2v\_lvl\_3 & Strong & delta & 0.655 & 0.089 \\
18 & LDA\_lvl\_3\_50\_1k0 & Strong & delta & 0.654 & 0.098 \\
19 & LDA\_lvl\_3\_50\_1k0 & Strong & jensen-shannon & 0.654 & 0.111 \\
20 & lvl\_2\_BoW & Medium & jensen-shannon & 0.654 & 0.104 \\
21 & d2v\_lvl\_3 & Strong & manhattan & 0.653 & 0.099 \\
22 & LDA\_lvl\_2\_100\_5k & Medium & delta & 0.653 & 0.094 \\
23 & d2v\_lvl\_1 & Weak & euclidean & 0.652 & 0.104 \\
24 & d2v\_lvl\_2 & Medium & euclidean & 0.651 & 0.104 \\
25 & LDA\_lvl\_3\_100\_1k & Strong & delta & 0.649 & 0.088 \\
26 & LDA\_lvl\_3\_100\_1k & Strong & manhattan & 0.649 & 0.084 \\
27 & LDA\_lvl\_2\_50\_1k0 & Medium & jensen-shannon & 0.648 & 0.1 \\
28 & LDA\_lvl\_2\_50\_1k0 & Medium & delta & 0.643 & 0.099 \\
29 & LDA\_lvl\_3\_100\_5k & Strong & manhattan & 0.643 & 0.095 \\
30 & LDA\_lvl\_3\_20\_1k0 & Strong & delta & 0.642 & 0.103 \\
31 & LDA\_lvl\_2\_20\_1k0 & Medium & delta & 0.639 & 0.096 \\
32 & LDA\_lvl\_3\_100\_5k & Strong & delta & 0.639 & 0.085 \\
33 & LDA\_lvl\_3\_50\_5k & Strong & manhattan & 0.637 & 0.115 \\
34 & lvl\_2\_BoW & Medium & jensen-shannon & 0.637 & 0.098 \\
35 & d2v\_lvl\_2 & Medium & manhattan & 0.636 & 0.099 \\
36 & LDA\_lvl\_2\_20\_1k0 & Medium & jensen-shannon & 0.636 & 0.111 \\
37 & LDA\_lvl\_2\_20\_5k & Medium & jensen-shannon & 0.636 & 0.106 \\
38 & LDA\_lvl\_2\_20\_1k0 & Medium & manhattan & 0.635 & 0.129 \\
39 & LDA\_lvl\_2\_50\_5k & Medium & delta & 0.635 & 0.093 \\
40 & LDA\_lvl\_2\_100\_1k & Medium & jensen-shannon & 0.634 & 0.09 \\
41 & lvl\_1\_BoW & Weak & cosine delta & 0.634 & 0.094 \\
42 & LDA\_lvl\_3\_20\_5k & Strong & manhattan & 0.632 & 0.104 \\
43 & d2v\_lvl\_2 & Medium & delta & 0.631 & 0.105 \\
44 & lvl\_1\_BoW & Weak & cosine delta & 0.631 & 0.095 \\
45 & LDA\_lvl\_3\_20\_1k0 & Strong & jensen-shannon & 0.628 & 0.106 \\
46 & LDA\_lvl\_3\_20\_1k0 & Strong & manhattan & 0.628 & 0.107 \\
47 & d2v\_lvl\_1 & Weak & cosine & 0.627 & 0.108 \\
48 & LDA\_lvl\_2\_100\_1k & Medium & manhattan & 0.627 & 0.09 \\
49 & d2v\_lvl\_1 & Weak & cosine delta & 0.626 & 0.105 \\
50 & d2v\_lvl\_1 & Weak & manhattan & 0.626 & 0.108 \\
51 & lvl\_2\_BoW & Medium & cosine delta & 0.625 & 0.104 \\
52 & LDA\_lvl\_2\_100\_1k0 & Medium & delta & 0.624 & 0.112 \\
53 & LDA\_lvl\_2\_50\_1k & Medium & jensen-shannon & 0.623 & 0.094 \\
54 & LDA\_lvl\_3\_100\_1k0 & Strong & delta & 0.623 & 0.096 \\
55 & lvl\_1\_BoW & Weak & jensen-shannon & 0.623 & 0.092 \\
56 & d2v\_lvl\_1 & Weak & delta & 0.622 & 0.109 \\
57 & LDA\_lvl\_2\_20\_5k & Medium & manhattan & 0.622 & 0.112 \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c} \hline \hline rank & method & TF & distance & median.ARI & SD \\ \hline
58 & LDA\_lvl\_2\_20\_1k & Medium & jensen-shannon & 0.621 & 0.113 \\
59 & LDA\_lvl\_2\_100\_1k0 & Medium & jensen-shannon & 0.615 & 0.104 \\
60 & LDA\_lvl\_3\_20\_5k & Strong & delta & 0.615 & 0.092 \\
61 & lvl\_3\_BoW & Strong & cosine delta & 0.615 & 0.108 \\
62 & LDA\_lvl\_2\_100\_5k & Medium & jensen-shannon & 0.614 & 0.08 \\
63 & lvl\_2\_BoW & Medium & cosine delta & 0.614 & 0.113 \\
64 & LDA\_lvl\_3\_50\_1k & Strong & manhattan & 0.609 & 0.109 \\
65 & LDA\_lvl\_3\_20\_1k & Strong & manhattan & 0.605 & 0.116 \\
66 & LDA\_lvl\_2\_50\_5k & Medium & manhattan & 0.604 & 0.095 \\
67 & LDA\_lvl\_3\_50\_1k & Strong & delta & 0.604 & 0.095 \\
68 & lvl\_1\_BoW & Weak & jensen-shannon & 0.601 & 0.09 \\
69 & LDA\_lvl\_2\_20\_5k & Medium & delta & 0.6 & 0.09 \\
70 & lvl\_3\_BoW & Strong & cosine delta & 0.599 & 0.087 \\
71 & WGCNA\_lvl2\_5k\_ & Medium & cosine delta & 0.599 & 0.098 \\
72 & WGCNA\_lvl2\_5k\_ & Medium & cosine & 0.599 & 0.098 \\
73 & lvl\_1\_BoW & Weak & manhattan & 0.596 & 0.09 \\
74 & LDA\_lvl\_2\_100\_1k & Medium & delta & 0.595 & 0.09 \\
75 & LDA\_lvl\_3\_100\_1k0 & Strong & jensen-shannon & 0.592 & 0.099 \\
76 & LDA\_lvl\_3\_20\_1k & Strong & delta & 0.592 & 0.089 \\
77 & lvl\_1\_BoW & Weak & delta & 0.592 & 0.096 \\
78 & LDA\_lvl\_2\_50\_1k0 & Medium & manhattan & 0.59 & 0.105 \\
79 & LDA\_lvl\_2\_100\_5k & Medium & cosine delta & 0.588 & 0.102 \\
80 & lvl\_1\_BoW & Weak & delta & 0.588 & 0.123 \\
81 & LDA\_lvl\_3\_100\_5k & Strong & cosine delta & 0.585 & 0.091 \\
82 & WGCNA\_lvl1\_5k & Weak & delta & 0.583 & 0.086 \\
83 & WGCNA\_lvl1\_5k & Weak & manhattan & 0.583 & 0.086 \\
84 & WGCNA\_lvl3\_5k & Strong & cosine delta & 0.583 & 0.101 \\
85 & WGCNA\_lvl3\_5k & Strong & cosine & 0.583 & 0.101 \\
86 & LDA\_lvl\_2\_20\_1k & Medium & manhattan & 0.582 & 0.125 \\
87 & lvl\_1\_BoW & Weak & manhattan & 0.582 & 0.093 \\
88 & WGCNA\_lvl1\_5k & Weak & cosine delta & 0.581 & 0.104 \\
89 & WGCNA\_lvl1\_5k & Weak & cosine & 0.581 & 0.104 \\
90 & lvl\_3\_BoW & Strong & delta & 0.579 & 0.116 \\
91 & LDA\_lvl\_3\_50\_1k0 & Strong & manhattan & 0.577 & 0.117 \\
92 & LDA\_lvl\_3\_100\_1k & Strong & cosine delta & 0.576 & 0.076 \\
93 & lvl\_3\_BoW & Strong & jensen-shannon & 0.575 & 0.101 \\
94 & lvl\_2\_BoW & Medium & delta & 0.572 & 0.121 \\
95 & LDA\_lvl\_3\_100\_1k0 & Strong & cosine delta & 0.571 & 0.094 \\
96 & LDA\_lvl\_2\_50\_1k & Medium & delta & 0.566 & 0.107 \\
97 & lvl\_2\_BoW & Medium & jensen-shannon & 0.564 & 0.091 \\
98 & lvl\_2\_BoW & Medium & manhattan & 0.564 & 0.117 \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c} \hline \hline rank & method & TF & distance & median.ARI & SD \\ \hline
99 & LDA\_lvl\_3\_20\_1k0 & Strong & cosine delta & 0.563 & 0.095 \\
100 & LDA\_lvl\_3\_20\_5k & Strong & cosine delta & 0.56 & 0.088 \\
101 & LDA\_lvl\_2\_20\_1k & Medium & delta & 0.558 & 0.101 \\
102 & lvl\_1\_BoW & Weak & cosine delta & 0.558 & 0.092 \\
103 & LDA\_lvl\_3\_20\_1k0 & Strong & cosine & 0.556 & 0.121 \\
104 & lvl\_3\_BoW & Strong & manhattan & 0.556 & 0.124 \\
105 & LDA\_lvl\_2\_20\_1k0 & Medium & cosine delta & 0.555 & 0.098 \\
106 & lvl\_3\_BoW & Strong & manhattan & 0.554 & 0.12 \\
107 & WGCNA\_lvl2\_5k\_ & Medium & delta & 0.552 & 0.109 \\
108 & WGCNA\_lvl2\_5k\_ & Medium & manhattan & 0.552 & 0.109 \\
109 & LDA\_lvl\_1\_100\_1k & Weak & delta & 0.551 & 0.088 \\
110 & LDA\_lvl\_2\_50\_1k & Medium & manhattan & 0.551 & 0.096 \\
111 & LDA\_lvl\_2\_100\_1k0 & Medium & cosine delta & 0.548 & 0.103 \\
112 & LDA\_lvl\_3\_20\_1k & Strong & cosine & 0.546 & 0.128 \\
113 & LDA\_lvl\_1\_100\_5k & Weak & delta & 0.545 & 0.085 \\
114 & LDA\_lvl\_2\_100\_1k & Medium & cosine delta & 0.544 & 0.079 \\
115 & lvl\_1\_BoW & Weak & delta & 0.543 & 0.099 \\
116 & WGCNA\_lvl2\_5k\_ & Medium & euclidean & 0.543 & 0.107 \\
117 & LDA\_lvl\_1\_100\_1k & Weak & delta & 0.542 & 0.096 \\
118 & LDA\_lvl\_3\_20\_5k & Strong & cosine & 0.541 & 0.126 \\
119 & LDA\_lvl\_2\_20\_5k & Medium & cosine delta & 0.539 & 0.106 \\
120 & WGCNA\_lvl3\_5k\_ & Strong & delta & 0.538 & 0.102 \\
121 & WGCNA\_lvl3\_5k\_ & Strong & manhattan & 0.538 & 0.102 \\
122 & LDA\_lvl\_3\_50\_5k & Strong & cosine delta & 0.53 & 0.102 \\
123 & LDA\_lvl\_1\_100\_1k & Weak & manhattan & 0.529 & 0.087 \\
124 & lvl\_2\_BoW & Medium & cosine delta & 0.529 & 0.092 \\
125 & lvl\_2\_BoW & Medium & manhattan & 0.529 & 0.107 \\
126 & WGCNA\_lvl3\_5k\_ & Strong & euclidean & 0.529 & 0.124 \\
127 & LDA\_lvl\_3\_50\_1k & Strong & cosine delta & 0.527 & 0.082 \\
128 & lvl\_2\_BoW & Medium & delta & 0.525 & 0.134 \\
129 & lvl\_3\_BoW & Strong & cosine delta & 0.522 & 0.106 \\
130 & LDA\_lvl\_2\_20\_1k0 & Medium & cosine & 0.519 & 0.132 \\
131 & LDA\_lvl\_3\_100\_1k0 & Strong & manhattan & 0.519 & 0.108 \\
132 & LDA\_lvl\_3\_50\_1k0 & Strong & cosine delta & 0.518 & 0.092 \\
133 & LDA\_lvl\_1\_100\_1k & Weak & jensen-shannon & 0.514 & 0.094 \\
134 & LDA\_lvl\_3\_20\_1k & Strong & cosine delta & 0.511 & 0.09 \\
135 & LDA\_lvl\_2\_20\_5k & Medium & cosine & 0.505 & 0.117 \\
136 & lvl\_2\_BoW & Medium & delta & 0.505 & 0.097 \\
137 & LDA\_lvl\_1\_100\_1k & Weak & cosine delta & 0.503 & 0.088 \\
138 & LDA\_lvl\_2\_20\_1k & Medium & cosine delta & 0.501 & 0.083 \\
139 & LDA\_lvl\_2\_50\_1k & Medium & cosine delta & 0.501 & 0.095 \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c} \hline \hline rank & method & TF & distance & median.ARI & SD \\ \hline
140 & WGCNA\_lvl1\_1k\_ & Weak & cosine delta & 0.498 & 0.104 \\
141 & WGCNA\_lvl1\_1k\_ & Weak & cosine & 0.498 & 0.104 \\
142 & WGCNA\_lvl1\_5k\_ & Weak & euclidean & 0.497 & 0.147 \\
143 & LDA\_lvl\_1\_100\_5k & Weak & jensen-shannon & 0.495 & 0.109 \\
144 & lvl\_3\_BoW & Strong & delta & 0.495 & 0.106 \\
145 & LDA\_lvl\_2\_50\_1k0 & Medium & cosine delta & 0.494 & 0.117 \\
146 & lvl\_3\_BoW & Strong & manhattan & 0.49 & 0.103 \\
147 & LDA\_lvl\_2\_50\_5k & Medium & cosine delta & 0.489 & 0.09 \\
148 & LDA\_lvl\_1\_50\_1k & Weak & delta & 0.484 & 0.084 \\
149 & LDA\_lvl\_2\_100\_1k0 & Medium & manhattan & 0.482 & 0.131 \\
150 & LDA\_lvl\_1\_100\_5k & Weak & manhattan & 0.481 & 0.101 \\
151 & LDA\_lvl\_2\_20\_1k & Medium & cosine & 0.479 & 0.114 \\
152 & lvl\_1\_BoW & Weak & manhattan & 0.475 & 0.096 \\
153 & WGCNA\_lvl3\_1k\_ & Strong & cosine delta & 0.473 & 0.11 \\
154 & WGCNA\_lvl3\_1k\_ & Strong & cosine & 0.473 & 0.11 \\
155 & LDA\_lvl\_1\_100\_5k & Weak & cosine delta & 0.471 & 0.09 \\
156 & lvl\_3\_BoW & Strong & delta & 0.469 & 0.137 \\
157 & LDA\_lvl\_3\_100\_1k & Strong & cosine & 0.467 & 0.118 \\
158 & WGCNA\_lvl2\_1k\_ & Medium & cosine delta & 0.467 & 0.102 \\
159 & WGCNA\_lvl2\_1k\_ & Medium & cosine & 0.467 & 0.102 \\
160 & LDA\_lvl\_3\_20\_5k & Strong & euclidean & 0.464 & 0.134 \\
161 & LDA\_lvl\_1\_100\_1k0 & Weak & jensen-shannon & 0.462 & 0.106 \\
162 & WGCNA\_lvl1\_1k\_ & Weak & euclidean & 0.462 & 0.115 \\
163 & LDA\_lvl\_3\_50\_1k & Strong & cosine & 0.461 & 0.114 \\
164 & LDA\_lvl\_3\_20\_1k & Strong & euclidean & 0.46 & 0.136 \\
165 & lvl\_1\_BoW & Weak & jensen-shannon & 0.458 & 0.085 \\
166 & LDA\_lvl\_2\_50\_1k & Medium & cosine & 0.453 & 0.095 \\
167 & WGCNA\_lvl1\_1k\_ & Weak & delta & 0.453 & 0.094 \\
169 & LDA\_lvl\_2\_20\_5k & Medium & euclidean & 0.451 & 0.133 \\
170 & lvl\_2\_BoW & Medium & manhattan & 0.451 & 0.1 \\
171 & LDA\_lvl\_1\_100\_1k0 & Weak & cosine delta & 0.449 & 0.09 \\
172 & LDA\_lvl\_1\_50\_5k & Weak & jensen-shannon & 0.442 & 0.1 \\
173 & LDA\_lvl\_1\_50\_5k & Weak & delta & 0.439 & 0.095 \\
174 & LDA\_lvl\_3\_20\_1k0 & Strong & euclidean & 0.433 & 0.149 \\
175 & LDA\_lvl\_2\_50\_5k & Medium & cosine & 0.429 & 0.099 \\
176 & WGCNA\_lvl2\_1k\_ & Medium & delta & 0.429 & 0.109 \\
177 & WGCNA\_lvl2\_1k\_ & Medium & manhattan & 0.429 & 0.109 \\
178 & WGCNA\_lvl3\_1k\_ & Strong & delta & 0.429 & 0.101 \\
179 & WGCNA\_lvl3\_1k\_ & Strong & manhattan & 0.429 & 0.101 \\
180 & WGCNA\_lvl3\_1k\_ & Strong & euclidean & 0.428 & 0.105 \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c} \hline \hline rank & method & TF & distance & median.ARI & SD \\ \hline
181 & LDA\_lvl\_2\_20\_1k0 & Medium & euclidean & 0.425 & 0.158 \\
182 & LDA\_lvl\_1\_100\_1k0 & Weak & manhattan & 0.419 & 0.092 \\
183 & LDA\_lvl\_1\_50\_1k & Weak & manhattan & 0.417 & 0.101 \\
184 & LDA\_lvl\_2\_100\_1k & Medium & cosine & 0.416 & 0.123 \\
185 & LDA\_lvl\_3\_50\_5k & Strong & cosine & 0.411 & 0.103 \\
186 & LDA\_lvl\_1\_50\_1k & Weak & jensen-shannon & 0.41 & 0.106 \\
187 & WGCNA\_lvl2\_1k\_ & Medium & euclidean & 0.409 & 0.106 \\
188 & LDA\_lvl\_1\_50\_5k & Weak & manhattan & 0.399 & 0.112 \\
189 & LDA\_lvl\_1\_20\_5k & Weak & jensen-shannon & 0.398 & 0.088 \\
190 & LDA\_lvl\_2\_20\_1k & Medium & euclidean & 0.397 & 0.119 \\
191 & LDA\_lvl\_1\_50\_1k0 & Weak & jensen-shannon & 0.393 & 0.101 \\
192 & LDA\_lvl\_1\_20\_1k & Weak & delta & 0.39 & 0.102 \\
193 & LDA\_lvl\_1\_50\_1k0 & Weak & delta & 0.386 & 0.094 \\
194 & LDA\_lvl\_1\_20\_5k & Weak & delta & 0.374 & 0.092 \\
195 & LDA\_lvl\_3\_50\_1k0 & Strong & cosine & 0.371 & 0.113 \\
196 & LDA\_lvl\_1\_20\_1k & Weak & jensen-shannon & 0.366 & 0.097 \\
197 & LDA\_lvl\_2\_50\_1k0 & Medium & cosine & 0.364 & 0.105 \\
198 & LDA\_lvl\_1\_20\_1k & Weak & cosine delta & 0.363 & 0.087 \\
199 & LDA\_lvl\_1\_50\_1k0 & Weak & manhattan & 0.363 & 0.104 \\
200 & LDA\_lvl\_2\_100\_5k & Medium & cosine & 0.356 & 0.087 \\
201 & LDA\_lvl\_1\_20\_1k & Weak & manhattan & 0.355 & 0.113 \\
202 & LDA\_lvl\_1\_20\_5k & Weak & manhattan & 0.351 & 0.102 \\
203 & LDA\_lvl\_3\_100\_5k & Strong & cosine & 0.348 & 0.104 \\
204 & LDA\_lvl\_1\_50\_1k & Weak & cosine delta & 0.338 & 0.087 \\
205 & LDA\_lvl\_1\_20\_1k & Weak & cosine & 0.33 & 0.081 \\
206 & LDA\_lvl\_1\_20\_5k & Weak & cosine & 0.328 & 0.1 \\
207 & LDA\_lvl\_1\_20\_1k0 & Weak & delta & 0.325 & 0.096 \\
208 & LDA\_lvl\_1\_20\_1k0 & Weak & jensen-shannon & 0.308 & 0.098 \\
209 & LDA\_lvl\_3\_100\_1k0 & Strong & cosine & 0.291 & 0.084 \\
210 & lvl\_1\_BoW & Weak & cosine & 0.291 & 0.065 \\
211 & LDA\_lvl\_1\_50\_1k & Weak & cosine & 0.289 & 0.069 \\
212 & lvl\_1\_BoW & Weak & cosine & 0.289 & 0.063 \\
213 & LDA\_lvl\_1\_20\_1k0 & Weak & manhattan & 0.286 & 0.093 \\
214 & LDA\_lvl\_1\_20\_1k0 & Weak & cosine & 0.276 & 0.091 \\
215 & LDA\_lvl\_1\_50\_5k & Weak & cosine delta & 0.275 & 0.094 \\
216 & LDA\_lvl\_1\_20\_5k & Weak & cosine delta & 0.27 & 0.077 \\
217 & LDA\_lvl\_2\_100\_1k0 & Medium & cosine & 0.27 & 0.094 \\
218 & LDA\_lvl\_1\_20\_1k & Weak & euclidean & 0.266 & 0.085 \\
219 & LDA\_lvl\_1\_50\_1k0 & Weak & cosine delta & 0.251 & 0.082 \\
220 & LDA\_lvl\_1\_50\_1k0 & Weak & cosine & 0.245 & 0.095 \\
221 & LDA\_lvl\_1\_100\_1k0 & Weak & cosine & 0.244 & 0.096 \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c} \hline \hline rank & method & TF & distance & median.ARI & SD \\ \hline
222 & LDA\_lvl\_1\_100\_1k & Weak & cosine & 0.243 & 0.066 \\
223 & LDA\_lvl\_1\_50\_5k & Weak & cosine & 0.243 & 0.096 \\
224 & WGCNA\_lvl1\_5k\_chunked & Weak & cosine delta & 0.242 & 0.029 \\
225 & lvl\_1\_BoW & Weak & cosine & 0.238 & 0.055 \\
226 & LDA\_lvl\_1\_100\_5k & Weak & cosine & 0.236 & 0.078 \\
227 & LDA\_lvl\_1\_20\_1k0 & Weak & cosine delta & 0.232 & 0.078 \\
228 & LDA\_lvl\_3\_50\_5k & Strong & euclidean & 0.232 & 0.138 \\
229 & WGCNA\_lvl1\_1k\_chunked & Weak & cosine delta & 0.23 & 0.029 \\
230 & WGCNA\_lvl1\_5k\_chunked & Weak & cosine & 0.23 & 0.027 \\
231 & WGCNA\_lvl2\_5k\_chunked & Medium & cosine delta & 0.23 & 0.02 \\
232 & WGCNA\_lvl3\_5k\_chunked & Strong & cosine delta & 0.229 & 0.021 \\
233 & LDA\_lvl\_1\_50\_1k & Weak & euclidean & 0.226 & 0.09 \\
234 & lvl\_1\_BoW & Weak & manhattan & 0.222 & 0.046 \\
235 & WGCNA\_lvl1\_5k\_chunked & Weak & euclidean & 0.221 & 0.065 \\
236 & lvl\_1\_BoW & Weak & cosine delta & 0.221 & 0.056 \\
237 & WGCNA\_lvl1\_5k\_chunked & Weak & cosine delta & 0.218 & 0.025 \\
238 & WGCNA\_lvl3\_1k\_chunked & Strong & euclidean & 0.217 & 0.063 \\
239 & lvl\_1\_BoW & Weak & euclidean & 0.216 & 0.139 \\
240 & LDA\_lvl\_3\_50\_1k & Strong & cosine & 0.215 & 0.083 \\
241 & lvl\_3\_BoW & Strong & cosine delta & 0.207 & 0.024 \\
242 & WGCNA\_lvl2\_1k\_chunked & Medium & cosine delta & 0.206 & 0.086 \\
243 & WGCNA\_lvl1\_5k\_chunked & Weak & delta & 0.206 & 0.109 \\
245 & WGCNA\_lvl2\_5k\_chunked & Medium & delta & 0.204 & 0.032 \\
246 & WGCNA\_lvl2\_5k\_chunked & Medium & manhattan & 0.204 & 0.032 \\
247 & WGCNA\_lvl2\_5k\_chunked & Weak & euclidean & 0.203 & 0.088 \\
248 & WGCNA\_lvl2\_5k\_chunked & Medium & cosine & 0.203 & 0.025 \\
249 & WGCNA\_lvl3\_5k\_chunked & Strong & cosine & 0.202 & 0.024 \\
250 & WGCNA\_lvl1\_1k\_chunked & Weak & cosine & 0.198 & 0.028 \\
252 & lvl\_3\_BoW & Strong & cosine & 0.197 & 0.067 \\
253 & LDA\_lvl\_2\_50\_1k0 & Medium & euclidean & 0.196 & 0.129 \\
254 & WGCNA\_lvl3\_5k\_chunked & Strong & euclidean & 0.196 & 0.036 \\
255 & WGCNA\_lvl3\_5k\_chunked & Strong & euclidean & 0.196 & 0.036 \\
256 & WGCNA\_lvl3\_5k\_chunked & Strong & euclidean & 0.191 & 0.043 \\
257 & WGCNA\_lvl1\_1k\_chunked & Weak & euclidean & 0.191 & 0.043 \\
258 & WGCNA\_lvl1\_1k\_chunked & Weak & euclidean & 0.191 & 0.043 \\
259 & WGCNA\_lvl1\_1k\_chunked & Weak & euclidean & 0.191 & 0.043 \\
260 & LDA\_lvl\_2\_50\_5k & Medium & euclidean & 0.19 & 0.134 \\
261 & WGCNA\_lvl3\_1k\_chunked & Strong & cosine & 0.184 & 0.027 \\
262 & LDA\_lvl\_3\_50\_1k0 & Strong & euclidean & 0.183 & 0.119 \\ \hline \hline \end{tabular}
#### 5.1.5 LDA
For the LDA models we want to know the effect of the number of topics and of the number of MFWs used to prepare the training DTM. We fit the following multilevel interaction model (cf. Table S8):
ARI ~ 1 + level*MFWs*topics + (1 + level + MFWs + topics | sample)
First, we look at the direct effects of \(MFWs\) and \(topics\) on ARI across all thematic foregrounding \(levels\), marginal of novel samples (Figure S17). It appears that LDA with a larger number of topics and a smaller number of MFWs performs slightly better on average. LDA models with 100 topics also show the smallest variance in performance across sampling runs. These effects,
Figure S 16: Empirical distributions of method combinations. Error bars correspond to 95% CI. Filtered distances.
\begin{table}
\begin{tabular}{l|r|r} \hline & elpd\_diff & se\_diff \\ \hline ari \(\sim\) level * mfw\_t * topics + (1 + level + mfw\_t + topics \(|\) sample\_n) & 0.00 & 0.00 \\ ari \(\sim\) level * mfw\_t * topics + (1 \(|\) sample\_n) & -24.75 & 8.19 \\ \hline ari \(\sim\) level + mfw\_t + topics + (1 + level + mfw\_t + topics \(|\) sample\_n) & -150.45 & 17.62 \\ \hline ari \(\sim\) level * mfw\_t * topics & -173.71 & 20.14 \\ \hline ari \(\sim\) level * mfw\_t + topics & -280.31 & 24.17 \\ \hline ari \(\sim\) level + mfw\_t + topics & -287.66 & 24.51 \\ \hline \end{tabular}
\end{table}
Table S 8: Leave-one-out results for LDA models.
however, mostly come from the corpus with weak thematic foregrounding, as seen in Figure S18.
Bars mark the posterior 95% CI; shaded dots show empirical LDA results. We see that, again, the level of thematic foregrounding has the largest influence on LDA performance. At the medium and strong levels, however, the impact of topics and MFWs is not clear. It seems that, on average, an increase in the number of topics tends to improve clustering for a small number of features, while the effect is reversed for a large number of features (a smaller number of topics has a slight edge; see Figure S19). Overall, the choice of the number of topics and features is more critical in a corpus without pre-processing and becomes less influential when features are foregrounded.
#### 5.1.6 Bags of words
Again, we fit a Bayesian multilevel interaction model. Two factors drive the performance of bag-of-words features: the level of thematic foregrounding and the length of the vector (number of MFWs). In brms model notation:
ari ~ level + MFWs + (1|sample_no)
Figure S20 shows that clustering with bags of words improves on average with longer vectors: since there is no algorithm that summarizes similarity in individual words' behavior, word
Figure S 18: Posterior predictions for LDA performance grouped by thematic foregrounding, marginal of samples.
Figure S 20: Bag-of-words posterior predictions, superimposed on empirical observations.
frequencies are more dependent on diverse lexical pools and sparse DTMs. This might be a suboptimal way to model texts, since the final clustering relies on groups of present/absent words rather than on the actual distribution. We would also expect results to plateau if the length of the bag of words were increased further. The plateau is more visible when posterior estimates are averaged over foregrounding levels and taken marginal of samples (Figure S21).
#### 5.1.7 WGCNA
We model three factors in WGCNA performance: chunking, level and MFWs:
ari ~ chunking*level*MFWs + (1 + chunking + level + MFWs | sample_no)
First, Figure S22 clearly confirms that chunking texts drastically reduces the performance of clustering with WGCNA modules, because of the greedy module identification problem (see Section 3.3).
Figure S23 shows posterior means for different cut-offs of MFWs and thematic foregrounding levels (only for models without chunking).
Non-chunked WGCNA, on average, benefits mostly from medium thematic foregrounding and increasing MFWs.
Figure S 22: Effect of chunking on WGCNA performance. Posterior predictions, marginal of samples, superimposed on empirical data points.
\begin{table}
\begin{tabular}{l|r|r} \hline & elpd\_diff & se\_diff \\ \hline chunking * level * mfw\_t + (1 + chunking + level + mfw\_t | sample\_n) & 0.00 & 0.00 \\ \hline chunking * level * mfw\_t + (1 | sample\_n) & -26.10 & 11.83 \\ \hline chunking * level * mfw\_t & -55.73 & 17.54 \\ \hline chunking + level + mfw\_t + (1 + chunking + level + mfw\_t | sample\_n) & -57.04 & 12.14 \\ \hline chunking + level + mfw\_t + (1 | sample\_n) & -75.00 & 15.52 \\ \hline chunking + level + mfw\_t & -97.92 & 19.30 \\ \hline \end{tabular}
\end{table}
Table S 10: Leave-one-out results for WGCNA group of models.
#### 5.1.8 doc2vec
There is only one predictor for the behavior of doc2vec in our setup: the level of thematic foregrounding. We fit a model with varying slopes per novel sample (the Bayesian framework handles single observations within samples just fine):
ari ~ 1 + level + (1+level|sample_n)

doc2vec embeddings perform similarly across the different levels of thematic foregrounding (Figure S23), which is not surprising, since the method uses an external representation of semantics and does not depend much on word filtering. However, there _is_ a steady increase in ARI, which means that filtering words and simplifying the lexicon _can_ improve the document representation, even if the same model is used both for semantic similarity scores and for document embeddings.
### Clustering HathiTrust corpus
To test whether our results maintain validity in the 'outside' world, we turned to the HathiTrust corpus of fiction. We sampled 5000 "unknown" novels from the same period of time (books released after the year 1950). We could not simply use our small target corpus as a seed of "known" novels, because HathiTrust does not provide original texts: only token counts per page alongside morphological tagging. It is still possible to train an LDA model with these data, but not to reproduce our spaCy pre-processing steps exactly. In addition, many books from our corpus did not have a match in the HathiTrust data.
We therefore used another approach. We found all 97 authors from our four-genre dataset in the HathiTrust corpus and marked all books by these authors as belonging to the corresponding genre. For example, while our original dataset contained only 3 novels by Agatha Christie, HathiTrust contains 71 novels by her; we labeled all of them as "detective" (which, of course, is a simplification). The distribution of books across the four genres acquired this way is shown in Figure S25. Table S12 shows the 10 authors with the largest number of books.
We chose two combinations of methods to show the difference between 'better' and 'worse' approaches:
Figure S 25: Genre book counts in HathiTrust data.
\begin{table}
\begin{tabular}{l|l|c} \hline genre & author & books \\ \hline detective & Christie, Agatha & 71 \\ \hline scifi & Asimov, Isaac & 50 \\ \hline romance & Steel, Danielle & 45 \\ \hline fantasy & Moorcock, Michael & 43 \\ \hline scifi & Silverberg, Robert & 41 \\ \hline scifi & Dick, Philip K & 39 \\ \hline fantasy & Anderson, Poul & 37 \\ \hline scifi & Clarke, Arthur C. (Arthur Charles) & 37 \\ \hline scifi & Aldiss, Brian Wilson & 36 \\ \hline scifi & Heinlein, Robert A. (Robert Anson) & 35 \\ \hline \end{tabular}
\end{table}
Table S 12: Book counts by author.
1. Better option: LDA model with 1000 MFWs, 100 topics, medium thematic foregrounding, Jensen-Shannon divergence;
2. Worse option: Bag of words, 5000 MFWs, weak thematic foregrounding, cosine distance.
We compare their performance by projecting all 6293 novels into two dimensions with UMAP. We expect the better option to retain visible clusters by genre. Figure S26 sets the two projections side by side.
Figure S 26: Two UMAP projections. ‘Worse’ methods choice on the left, and ‘better’ methods choice on the right |
2305.07433 | Aligning the Western Balkans power sectors with the European Green Deal | Located in Southern Europe, the Drina River Basin is shared between Bosnia
and Herzegovina, Montenegro, and Serbia. The power sectors of the three
countries have an exceptionally high dependence on coal for power generation.
In this paper, we analyse different development pathways for achieving climate
neutrality in these countries and explore the potential of variable renewable
energy (VRE) and its role in power sector decarbonization. We investigate
whether hydro and non-hydro renewables can enable a net-zero transition by 2050
and how VRE might affect the hydropower cascade shared by the three countries.
The Open-Source Energy Modelling System (OSeMOSYS) was used to develop a model
representation of the countries' power sectors. Findings show that the
renewable potential of the countries is a significant 94.4 GW. This potential
is 68% higher than previous assessments have shown. Under an Emission Limit
scenario assuming net zero by 2050, 17% of this VRE potential is utilized to
support the decarbonization of the power sectors. Additional findings show a
limited impact of VRE technologies on total power generation output from the
hydropower cascade. However, increased solar deployment shifts the operation of
the cascade to increased short-term balancing, moving from baseload to more
responsive power generation patterns. Prolonged use of thermal power plants is
observed under scenarios assuming high wholesale electricity prices, leading to
increased emissions. Results from scenarios with low cost of electricity trade
suggest power sector developments that lead to decreased energy security. | Emir Fejzić, Taco Niet, Cameron Wade, Will Usher | 2023-05-12T12:58:24Z | http://arxiv.org/abs/2305.07433v2 | Aligning the Western Balkans power sectors with the European Green Deal
###### Abstract
Located in Southern Europe, the Drina River Basin is shared between three countries: Bosnia and Herzegovina, Montenegro, and Serbia. The power sectors of the three countries have a particularly high dependence on coal for power generation. In this paper we analyse different development pathways for achieving climate neutrality in these countries and explore the potential of variable renewable energy in the area, and its role in the decarbonization of the power sector. We investigate the possibility of whether hydro and non-hydro renewables can enable a net zero transition by 2050, and how renewable energy might affect the hydropower cascade shared by the three countries. The Open-Source Energy Modelling System (OSeMOSYS) was used to develop a model representation of the power sector of the countries. The findings of this analysis show that the renewable potential of the countries is a significant 94.4 GW. This potential is 68% to 287% higher than that of previous assessments, depending on the study of comparison. By 2050, 17% of this potential is utilized for VRE capacity additions under an Emission Limit scenario assuming net-zero. These findings suggest that the local VRE potential is sufficient to support the transition to net-zero. Scenarios with higher shares of solar and thermal power show increased power generation from the hydropower cascade, thus reducing the water available for purposes other than power generation.
## Abbreviations
CF - Capacity Factor
CFTPP - Coal-fired Thermal Power Plant
EU - European Union
GHG - Greenhouse Gas
GWA - Global Wind Atlas
HPP - Hydropower plant
IEC - International Electrotechnical Commission
NDC - Nationally Determined Contribution
OSeMOSYS - Open Source Energy Modelling System
TS - Time Steps
VRE - Variable Renewable Energy
WB - Western Balkans
Introduction
Impacts of climate change are observed all around the world. The severity and frequency of extreme climate events are driven by anthropogenic greenhouse gas (GHG) emissions [1]. To mitigate climate-induced impacts on the environment and society, it is imperative to reduce our GHG emissions. Recent figures show that carbon dioxide (CO2) accounts for 64% of total global GHG emissions [2]. Out of the 36.3 gigatonnes (Gt) of CO2 emissions from energy-related activities, 10.5 Gt come from coal-fired thermal power plants (CFTPP) [3]. This represents 29% of total energy-related CO2 emissions. Given these figures, traditional coal power plants must be replaced with low-emitting renewable energy sources. One region with a particularly high dependence on coal for power generation is the Western Balkans (WB). Shares of coal in the power sectors of this region range from 55% in North Macedonia to 97% in Kosovo1[4]. Low-carbon development pathways must therefore be explored to accomplish both the climate and environmental objectives of the WB countries, including the Sustainable Development Goals (SDGs) in the 2030 Agenda, and alignment with the European Green Deal under the Sofia Declaration.
Footnote 1: This designation is without prejudice to positions on status and is in line with UNSCR 1244 and the ICJ Opinion on the Kosovo Declaration of Independence.
Renewable energy sources provide a cleaner alternative to the region's current reliance on coal. Among the identified sources of renewable energy in the WB is hydropower in the Drina River basin (DRB). A significant hydropower potential exists in the basin, of which 60% remains untapped [5]. The river basin is shared by three countries: Bosnia and Herzegovina, Montenegro, and Serbia. These countries have applied to join the European Union (EU) and pledged Nationally Determined Contributions (NDC) as part of the 2030 Agenda. However, for the DRB countries to achieve decarbonization and to align with climate policies and objectives implemented at the EU and global levels, they must incorporate renewable resources other than hydropower into their energy mix. Consequently, the potential for variable renewable energy (VRE) in DRB countries must be assessed.
Current literature shows a lack of consistency in VRE potential estimates for the DRB countries. A study by Hrncic et al [6] investigated the possibility of achieving a 100% renewable energy system in Montenegro. The wind power potential assumed in the study was 400 MW, referring to the Energy Development Strategy of Montenegro until 2030 [7] and a study by Vujadinovic et al [8]. All three studies claim 400 MW of technical wind power potential in Montenegro, a figure taken from an assessment conducted by the Italian Ministry for the Environment, Land and Sea in 2007 [9]. Wind turbines have developed rapidly since 2007. Wiser et al [10] found in 2012 that the land area in the US where wind power plants could achieve capacity factors (CFs) of 35% or higher increased by 260% when using turbines designed for International Electrotechnical Commission (IEC) Class III wind conditions compared to turbines from the 2002-2003 era. Moreover, a report published by the International Renewable Energy Agency (IRENA) in 2017 [11] assessed the technical potential of VRE in the DRB countries to be 56.3 gigawatts (GW). The wind power potential for Montenegro is, according to [11], close to 2.9 GW, a much greater potential than the earlier estimate of 400 MW. The South East Europe Electricity Roadmap (SEERMAP) country reports published in 2017 [12-14] for the DRB countries suggest a technical VRE potential of 24.4 GW, just 43% of the potential stated by IRENA [11] in the same year.
While studies like Hrncic et al [6] and Husika et al [15] use energy models to investigate potential development pathways for the power sectors of Bosnia and Herzegovina and Montenegro, there are no studies that cover decarbonization pathways to 2050 for all DRB countries together with their shared hydropower potential within the DRB. An existing study by Almulla et al [16] uses OSeMOSYS to investigate the benefits associated with optimised production and increased cooperation between hydropower plants (HPP) in the DRB, including the impacts of energy efficiency measures. However, the model, projections, and comparison periods used in [16] differ from those used here, so its overall methodology and approach can be distinguished from the approach of this study.
This paper aims to fill the identified research gaps by investigating decarbonization pathways for climate neutrality of the DRB countries by 2050. In addition, we estimate the power potential of VRE technologies within the DRB countries which can help facilitate this transition away from coal-based power generation.
In this paper we aim to answer the following research questions (RQs):
* What is the potential of VRE in the DRB countries and what role can it play in the decarbonization of the power sector?
* Are the resource potentials of hydro and non-hydro renewables in the DRB countries enough to support the transition to net-zero by 2050?
* What is the impact of VRE on the existing hydropower cascade in terms of power generation and cost competitiveness?
In section 2, this paper provides a background of the DRB countries and identifies research gaps, followed by a description of the methodology used in section 3. We present the choice of modelling tool, the temporal and geographical dimensions of the study, and the operational constraints. The results and discussion are presented in sections 4 and 5, respectively. The work is concluded in section 6, followed by the identification of limitations and future research in section 7.
Background
The six Western Balkan (WB6) countries of Albania, Bosnia and Herzegovina, Kosovo, Montenegro, North Macedonia, and Serbia are the only Southern European2 countries that are not yet part of the European Union (EU) [18]. Fig. 1 shows the DRB area, which covers most of the cross-border area between the DRB countries [19]. The basin area is 20 320 km2 and corresponds to 14% of the total land area of the DRB countries [20].
Footnote 2: According to the United Nations classification and definition of regions [17] UN ESA “Classification and definition of regions.” [https://esa.un.org/MigFlows/Definition%20of%20regions.pdf](https://esa.un.org/MigFlows/Definition%20of%20regions.pdf) (accessed 2022.09.15, 2022).
### Overview of the countries
In 2021, the population of the DRB countries was 10.7 million [21]. In terms of population, Serbia is the largest country with 6.8 million residents, followed by Bosnia and Herzegovina and Montenegro with 3.3 and 0.6 million respectively. Within the DRB there are 867 thousand people, of whom 50% reside in Bosnia and Herzegovina, 33% in Serbia, and 17% in Montenegro [22]. Montenegro and Serbia have a GDP per capita of approximately nine thousand while Bosnia and Herzegovina has seven thousand [23]. The average GDP per capita in the EU is 38 thousand [24], significantly higher than the GDP per capita for the DRB countries.
Similarly, power consumption per capita in the DRB countries is lower compared to the EU average. In Bosnia and Herzegovina, Montenegro, and Serbia the consumption is 3.3 MWh [25], 5.7 MWh [26], and 4.1 MWh [27], respectively. By contrast, the EU-27 power consumption per capita is 6.4 MWh [28, 29].
Figure 1: The Drina River Basin (outlined in orange) modelled hydropower plants within the basin (blue squares) and the DRB countries.
The DRB countries are heavily dependent on coal-fired thermal power plants (CFTPP). Between 2014 and 2018 the share of CFTPP in the power supply was 60-70% in Bosnia and Herzegovina [30] and Serbia [31], reaching 40% in Montenegro [32]. These numbers indicate that the reliance on coal in the DRB countries is significantly higher than the 20% share of coal in EU power generation [33]. Recognizing this reliance on coal in the DRB countries, the European Commission (EC) launched the _Initiative for coal regions in transition_ in December 2020 [34]. This initiative aims to assist the Western Balkans, including the DRB countries, in their transition from coal to carbon-neutral economies.
The use of coal-fired power generation has other adverse effects beyond those related to climate change. Estimates indicate 880 deaths from air pollutants in 2020 resulting from CFTPPs in Bosnia and Herzegovina and Serbia exceeding the emission ceilings of the National Emissions Reduction Plans (NERP). This includes 235 deaths attributable to electricity exports from these countries to the European Union. Health costs from these exceedances in the WB countries are estimated to be between six and twelve billion euros in 2020 alone [35]. Reducing the reliance on coal will not only help in reaching the climate goals but will also improve air quality and in turn prevent chronic illnesses and premature deaths associated with PM, SO2 and NOx pollutants.
### Aligning with EU climate goals
Bosnia and Herzegovina, Montenegro, and Serbia are candidate countries for EU accession, with Bosnia and Herzegovina obtaining the status as of December 2022 [36]. As candidates for EU membership, the DRB countries have pledged to align with the EU Climate Law and the EU Emissions Trading Scheme (EU-ETS), and to increase their share of renewable energy sources by signing the Sofia Declaration in 2020 [37]. The alignment with EU policy entails that by the time the countries become EU member states, their climate and energy policies must align with the European Green Deal, which implies a 55% reduction in emissions by 2030 compared to 1990 levels and achieving climate neutrality by 2050 [38].
### Nationally Determined Contributions as part of the Paris Agreement
As part of the efforts to combat climate change, the DRB countries have all submitted their Nationally Determined Contributions (NDCs) to the United Nations Framework Convention on Climate Change. Each country has submitted updated versions of their NDCs, increasing their ambitions compared to their first NDC submission. Table 1 summarizes the pledged decreases of Greenhouse Gas (GHG) emissions submitted by the DRB countries in their NDCs. In addition to the 2030 goals, Bosnia and Herzegovina is committed to reducing GHG emissions by 61.7% unconditionally, and 65.6% conditionally by 2050 in comparison to 1990 levels. As stated, the contributions do not align with the EU targets, and ambitions must be raised to reach climate neutrality by 2050.
### Renewable resource potentials
Located in Southern Europe, the DRB countries have a higher photovoltaic power potential compared to the north and central parts of Europe [39]. Earlier studies for the Central and South East Europe region have identified a large potential for renewable energy technologies [11, 40]. Currently, these potentials are largely untapped. There is no consensus in the literature on estimates of the renewable energy potential within the DRB countries. Establishing such estimates requires a consistent methodology applied across the three countries using the latest high-resolution geospatial and temporal data.
## 3 Methodology
In this section we describe the structure of the energy system model of the DRB countries. In addition, we describe and justify the choice of methods to assess wind and solar potential in the DRB countries. This includes the selection of the modelling framework, data, and methodology for assessing the power potentials from VRE sources. Next, we present the clustering approach used to manage computational effort while retaining important temporal details across electricity demand and variable renewable energy sources. In addition to being open source, the model remains accessible because the clustering approach reduces the computational effort needed to run it. Finally, we present the scenario analysis.
### OSeMOSYS and model setup
To answer the three research questions posed in Section 1, namely what the potential of VRE in the DRB countries is, if they are sufficient to support the transition to net-zero by 2050, and what their impact is on the existing HPP cascade, the created energy model must possess certain qualities. It should be geospatially explicit, allowing for assessment of VRE potential within DRB countries. The model must account for daily and seasonal variations in climate and power demand while minimizing the computation effort. Modelling HPP cascades on a per-power plant basis is required to assess how changes to the system-wide infrastructure affect these cascades. It must provide insight into future development pathways for the power sector by presenting a long-term expansion of the power system. A detailed description of the OSeMOSYS framework used to represent the hydropower cascade within the Drina River basin, together with the interconnected energy systems of the Drina River basin countries is presented in this section.
| Country | First NDC (by 2030) [%] | Updated NDC (by 2030) [%] |
| --- | --- | --- |
| Bosnia and Herzegovina | 18 and 20 (conditional and unconditional) | -36.8 and -33.2 (conditional and unconditional) |
| Montenegro | -30 | -35 |
| Serbia | -9.8 | -33.3 |

Table 1: Pledged emission reductions by the DRB countries by 2030 relative to 1990 levels.
We created the model using the OSeMOSYS framework [41, 42]. The primary use of OSeMOSYS is for long-term energy planning based on the concept of systems optimization. It does not require proprietary software or commercial programming languages and solvers, and it is for this reason a preferable option compared to long-established models such as MARKAL/TIMES [43], MESSAGE [44], and PRIMES [45], as it does not involve upfront licensing costs. The OSeMOSYS framework consists of seven blocks. These blocks are defined as objective function, costs, storage, capacity adequacy, energy balance, constraints, and emissions. The objective function is in this case the total discounted cost of the energy system, based on the provided energy carrier demands. Costs include capital investment costs, operating costs, and salvage values among others [41]. Constraints include a reserve margin constraint, which in the case of this analysis is set to 20%. The reserve margin reflects the fact that power generating companies and transmission companies must maintain a capacity to generate and transmit electricity exceeding normal capacity by 10-20% [46]. While multiple emissions can be attributed to power-generating technologies or resource extraction, we consider CO2 emissions in this study. Costs within the model are discounted at a global discount rate of 5%.
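As a rough illustration of how the objective treats costs over time, the sketch below sums yearly capital and operating costs discounted to a base year at the 5% global discount rate used in this study. The cost figures and helper name are hypothetical; the actual OSeMOSYS formulation discounts each cost component per technology, region, and year as defined in [41].

```python
# Minimal sketch (not the OSeMOSYS code): discounting yearly system costs
# to a base year with the 5% global discount rate used in this study.
# The cost numbers below are placeholders, not model data.

def total_discounted_cost(yearly_costs, base_year, discount_rate=0.05):
    """Sum capital + operating costs, discounted to the base year."""
    total = 0.0
    for year, (capex, opex) in yearly_costs.items():
        discount_factor = (1 + discount_rate) ** (year - base_year)
        total += (capex + opex) / discount_factor
    return total

# Hypothetical cost stream in million EUR for three model years.
costs = {2020: (120.0, 40.0), 2030: (300.0, 55.0), 2040: (180.0, 60.0)}
print(f"Total discounted cost: {total_discounted_cost(costs, 2020):.1f} MEUR")
```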
As a basis for developing the model, we collected data from the literature, stakeholder engagements with representatives from the DRB, transmission system operators (TSOs), and directly from their respective power utilities. The types of data collected include power demands, installed power generating capacities, fixed and variable costs of power plants, resource potentials and fuel costs, and cross-border transmission capacities to name a few. All data used for the creation of this model, including the data files used to run each scenario, and scripts used for assessing the VRE power potential can be found in the Github repository and Zenodo deposit.
### VRE Characterization
In the following section, we describe the methodology behind the characterization of solar and wind power potentials in the model. We provide details on the approach taken for assessing the resource availability of VRE and their power generation potentials.
#### 3.2.1 Resource availability
To assess the VRE potentials we first calculate the total land (km\({}^{2}\)) eligible for wind and solar development within each country. We assume that wind and solar can be developed on the following land use types in the CORINE Land Cover (CLC) database: natural grasslands, moors and heathland, sclerophyllous vegetation, transitional woodland-shrub, bare rocks, sparsely vegetated areas, and burnt areas. The resulting land availability representation is shown in Fig. 2.
We refer to the squares representing the eligible land fractions shown in Fig. 2 as grid cells. Each grid cell has a resolution of 30 x 30 km. To obtain hourly time series data on VRE power generation potentials in each of the grid cells we used Atlite [47]. Atlite is a Python package for calculating renewable power potentials and time series. Atlite utilizes the ERA5 dataset, which is why we chose the 30 x 30 km resolution over the higher-resolution CLC data.
The total potential for wind and solar development expressed in terms of capacity (MW) is calculated by multiplying the total eligible land by an area-specific maximum installable capacity of 1.7 MW/km2. The maximum installable capacity is based on [48] and is used in similar studies [49]. The figure combines a technical potential density for installable wind generation capacity of 10 MW/km2 with a fraction of 0.17 that accounts for public acceptance, competing land use, extreme slopes, and unfavourable terrain. The more precise the location analysis, the higher the area-specific installable capacity that can be assumed. To determine the potential capacity of wind and solar power in each DRB country, we multiply the capacity per square kilometre by the total eligible land area shown in Fig. 2.
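The sketch below reproduces this simple arithmetic: a 10 MW/km2 technical density scaled by a 0.17 land-use fraction gives the 1.7 MW/km2 figure, which is then multiplied by the eligible land area. The example area is a placeholder, not one of the values used in the study.

```python
# Minimal sketch of the capacity potential calculation described above.
# eligible_land_km2 is a placeholder value for illustration only.

TECHNICAL_DENSITY_MW_PER_KM2 = 10.0   # installable wind capacity density
LAND_USE_FRACTION = 0.17              # acceptance, competing use, terrain
AREA_SPECIFIC_CAPACITY = TECHNICAL_DENSITY_MW_PER_KM2 * LAND_USE_FRACTION  # 1.7 MW/km2

eligible_land_km2 = 5_000.0           # hypothetical eligible area
potential_mw = eligible_land_km2 * AREA_SPECIFIC_CAPACITY
print(f"Capacity potential: {potential_mw:.0f} MW ({potential_mw / 1000:.1f} GW)")
```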
#### 3.2.2 VRE generation potentials
In addition to maximum resource potentials, the OSeMOSYS model requires time series values for the production potentials. These are used to calculate the CFs for each wind and solar technology included in the model.
As land availability affects the distribution of VRE resources, it is necessary to consider the weather variations according to the location of the VRE installments. We use Atlite [47] to estimate the hourly production potential of wind and solar in each grid cell. Atlite retrieves global historical weather data and converts it into power generation potentials and time series for VRE technologies like wind and solar power. The data used has an hourly temporal resolution obtained from the fifth-generation European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalysis of the global climate (ERA5 dataset). To obtain a representative year, we first calculated the average hourly power output for wind and solar from the 5 years of 2017-2021. Next, we selected the year that best represented the average, in this case, 2020. We chose to use a historical weather year instead of an average year since the average weather year increases the lower extremes and decreases the higher extremes.
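A possible implementation of the representative-year selection is sketched below: the hourly profiles of the five candidate years are compared against their multi-year mean, and the closest year is kept. The data layout is an assumption for illustration; in the study the hourly potentials come from Atlite and the ERA5 dataset.

```python
import numpy as np
import pandas as pd

def pick_representative_year(hourly: pd.Series) -> int:
    """Select the year whose hourly profile is closest to the multi-year mean.

    Assumed input: a Series with a DatetimeIndex covering 2017-2021 holding a
    normalized hourly wind or solar output.
    """
    # Mean profile by day of year and hour across all years.
    key = [hourly.index.dayofyear, hourly.index.hour]
    mean_profile = hourly.groupby(key).mean()

    errors = {}
    for year, values in hourly.groupby(hourly.index.year):
        profile = values.groupby([values.index.dayofyear, values.index.hour]).mean()
        common = profile.index.intersection(mean_profile.index)
        errors[year] = np.sqrt(((profile.loc[common] - mean_profile.loc[common]) ** 2).mean())
    return min(errors, key=errors.get)  # year closest to the 5-year average
```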
Figure 2: Eligible land fraction for wind and solar power plants (excluding agricultural land) for the DRB countries. The axis labels show longitude (y-axis) and latitude (x-axis). The resolution is a 30km grid.
To translate the ERA5 weather data to capacity factors, we use the following technology assumptions: for wind, we chose a Siemens SWT-2.3-108 turbine, with a rated power of 2.3 MW and a hub height of 100m [50]. It has a cut-in wind speed of 3 m/s and cut-out speed of 25 m/s. The power curve of the selected wind turbine type is shown in Fig. 3. For solar power, cadmium telluride (CdTe) photovoltaics (PV) solar panels with the orientation 'latitude optimal' were selected in Atlite. The CdTe panel characteristics are provided by [51]. No tracking was included for the solar panels, as Atlite did not include tracking at the point of doing this analysis.
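To make the turbine assumption concrete, the sketch below interpolates a simplified power curve with the stated 3 m/s cut-in, 25 m/s cut-out, and 2.3 MW rated power, converting hourly wind speeds into capacity factors. The curve points are only approximated from Fig. 3, not taken from the Siemens SWT-2.3-108 specification, so they should be treated as illustrative.

```python
import numpy as np

# Approximate power curve of a 2.3 MW turbine (illustrative points only):
# wind speed [m/s] -> power output [MW].
SPEEDS = np.array([0.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 25.0])
POWER = np.array([0.0, 0.0, 0.3, 0.9, 1.7, 2.2, 2.3, 2.3])
RATED_MW, CUT_OUT = 2.3, 25.0

def capacity_factor(wind_speed_ms: np.ndarray) -> np.ndarray:
    """Hourly capacity factor from hub-height wind speeds."""
    power = np.interp(wind_speed_ms, SPEEDS, POWER)
    power[wind_speed_ms >= CUT_OUT] = 0.0  # turbine shuts down above cut-out
    return power / RATED_MW

hourly_speeds = np.array([2.5, 6.0, 10.0, 14.0, 26.0])  # hypothetical sample
print(capacity_factor(hourly_speeds))
```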
Based on power potentials obtained from Atlite for wind power, annual, nationally averaged CFs ranged from 3.3% to 7.8% depending on the country and land layers selected. These values are low compared with the capacity factors of wind power facilities currently in operation in the DRB countries, reflecting limitations of the underlying data rather than an absence of wind potential in the region. A geospatial resolution of 30x30 km is not sufficient to capture areas of high wind potential that occur at smaller scales. Additionally, ERA5 does not apply correction factors to the wind data. As such, we use a separate approach outlined in Section 3.2.3. Using Atlite in combination with the ERA5 dataset provides the hourly data needed for the model. Underrepresentation of areas with high resource availability is not present in the case of solar power, see Fig. 4, since irradiation is less affected by altitude, terrain, and location than wind is.
Figure 3: Power curve of the selected wind power plant in Atlite. Used for obtaining time series data on power generation potentials for the eligible land in the cut-out of the DRB countries.
#### 3.2.3 Wind power generation potentials using GWA
To improve the wind power potential estimate for this study, we used the Global Wind Atlas (GWA) version 3 [52]. The GWA is derived from ERA5 reanalysis but uses downscaling processes that result in a final resolution of 250m and account for local topography and terrain features. The tool provides mean CFs for three different turbine classes. The classes available include the IEC1, IEC2, and IEC3 categories for the 2008-2017 period. The IEC classes correspond to a 100m hub height and rotor diameters of 112, 126 and 136m respectively. When compared to existing wind generation in the region, the CF estimates from the GWA are more accurate than those from Atlite.
Annual capacity factors from the GWA used in this study are shown in Fig. 5. For computational considerations, we omit land areas with unfavourable wind conditions, i.e. where wind CFs are less than 10%.
Figure 4: Map of solar capacity factors for a fixed panel by grid cell for the DRB countries, generated using Atlite.
The output from the VRE characterisation shown in Fig. 4 and Fig. 5 comprises geospatially explicit CFs for solar and wind power. In addition to these, we obtain time-series values of capacity factors from Atlite. The outputs are used as inputs for the representation of wind and solar power in the OSeMOSYS energy model.
In OSeMOSYS we define technologies with certain techno-economic characteristics, including total capacity potentials and time-series values for CFs. To represent wind power in OSeMOSYS in a computationally manageable way while covering the broad range of CFs shown in Fig. 5, we assume four CF ranges. These are 10-20%, 20-30%, 30-40%, and 40% or higher. We use these ranges to calculate the average CF for each CF range and country. Using the average CF of each range, we scale the time-series values obtained from Atlite to match the four new averages. These adjusted time-series CFs are then assigned to four wind technology representations in each country in the OSeMOSYS model.
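The rescaling step can be illustrated as follows: the hourly Atlite profile is multiplied by the ratio of the GWA-derived average CF for a range to the profile's own mean, and clipped at 1 so that no hour exceeds full output. Variable names and numbers are placeholders; this is not the project's processing script.

```python
import numpy as np

def rescale_profile(hourly_cf: np.ndarray, target_mean: float) -> np.ndarray:
    """Scale an hourly capacity-factor profile so its mean matches target_mean."""
    scaled = hourly_cf * (target_mean / hourly_cf.mean())
    # Capacity factors stay within [0, 1]; clipping may shift the mean slightly.
    return np.clip(scaled, 0.0, 1.0)

# Hypothetical Atlite profile with a low mean, rescaled to a 25% GWA range average.
atlite_profile = np.random.default_rng(0).uniform(0.0, 0.2, size=8760)
wind_25 = rescale_profile(atlite_profile, target_mean=0.25)
print(round(wind_25.mean(), 3))
```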
Following the calculation of the different time-series CFs, we calculate the share of each country's eligible area that falls within each of the four selected CF ranges. Using Python, we perform stratified sampling on the output shown in Fig. 5 using 10,000 points per grid cell for all grid cells shown in Fig. 2. With these shares, we then calculate the land availability for wind power for each CF range and unit of eligible land from Fig. 2 according to Eq. 1.
\[\text{Available land}_{\text{CF range, country}} = \text{Eligible land}_{\text{country}} \times \text{Share}_{\text{CF range}}\]
**Eq. 1.** Calculating available land for a given CF range in each country
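A sketch of the share calculation and Eq. 1 is given below: points sampled from the high-resolution CF raster are binned into the four CF ranges, the resulting shares are applied to the eligible land, and the corresponding capacity follows from the 1.7 MW/km2 assumption. The sampled CF values and the eligible area are hypothetical.

```python
import numpy as np

# Hypothetical sample of GWA capacity factors (one value per sampled point).
rng = np.random.default_rng(1)
sampled_cf = rng.uniform(0.10, 0.50, size=10_000)

bins = [(0.10, 0.20), (0.20, 0.30), (0.30, 0.40), (0.40, 1.00)]
eligible_land_km2 = 5_000.0          # placeholder eligible area for one country
MW_PER_KM2 = 1.7

for low, high in bins:
    share = np.mean((sampled_cf >= low) & (sampled_cf < high))   # share of points in range
    available_land = eligible_land_km2 * share                   # Eq. 1
    print(f"CF {low:.0%}-{high:.0%}: {available_land:.0f} km2, "
          f"{available_land * MW_PER_KM2:.0f} MW potential")
```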
Figure 5: Capacity Factors for IEC Class III wind power plants for the DRB countries from GWA.
For wind and solar technologies competing for the available land, we do not impose any restrictions on investments. The model can invest fully in wind, or solar, or a mix of both technologies. Investments in a particular land area are based on the techno-economic characteristics of the power plant types. To differentiate between wind technologies within the CF ranges in the model, we use technology names corresponding to the technology type, i.e. wind, and the CF range. The solar power resource is divided into multiple technologies to keep the names unique and to pair them with the different wind technologies in the user-defined constraints. Each pair of wind and solar technologies for a given CF range, e.g. 30%, is then coupled with a land technology with an availability corresponding to the output from Eq. 1 for that particular CF range. Based on the assumption that the VRE technologies have an area-specific installable capacity of 1.7 MW/km2, we add a land use of 0.588 km2 per MW of installed capacity. This allows the model to invest in wind or solar until the available land corresponding to their CF range is exhausted. When power plant developments have occupied all land within a CF range, there can be no additional capacity additions of wind or solar power within that specific CF range. Table A3 in the appendix summarizes the wind and solar technologies considered in the OSeMOSYS model for this analysis.
Using the higher average CFs to normalize the time series wind potential data from Atlite is a simplified way of bridging the gap between the two sets of data. What justifies this approach in our case is the nature of the model setup. Existing and planned hydropower or thermal power plants in the model are represented as just one aggregated technology per power generation type, with the HPP cascade and the VRE representation being the exceptions. By following the above-described methodology, we can include hourly power generation potentials of wind and solar, account for eligible land for VRE deployment in each country, and better represent the CFs for wind power based on CF data from the GWA.
### Temporal resolution
Long-term expansion models of the power sector generally consider system developments over several decades, which is a computationally intensive process. A high temporal resolution within the modelling period further increases this computational load. Consequently, power sector and energy models do not typically represent each hour in the year, but rather use representative time periods (e.g., days) [53, 54]. This section describes the method used to construct representative time periods, as well as the temporal representation of VRE availability based on climate data.
#### 3.3.1 Temporal structure
The created OSeMOSYS energy model seeks to optimize power system investment and operational decisions for the years 2020-2050. Each year is represented by fifteen "representative days", where each representative day is assigned a weight corresponding to its relative frequency. The motivation for using representative days is to decrease the size and computational requirements of the overall energy system model. By using fifteen representative days at hourly resolution, each model year consists of 15x24 = 360 time steps (TS). By contrast, if no temporal aggregation technique is employed, each model year would consist of 8760 time steps and the model solve time and memory requirements would be computationally intractable. Fig. 6 shows the load and generation duration curves.
The representative days and their respective weights are selected using the agglomerative hierarchical clustering algorithm outlined by Nahmmacher et al. [53] and used more recently in studies by Palmer-Wilson et al. [55] and Keller et al. [56, 57]. Hourly data for electricity demand, wind power availability, and solar power availability across the DRB countries is collected and normalized for a set of historical days3. A hierarchical clustering algorithm is then used to group the historical days into 15 clusters, where days within a cluster are broadly similar in their load, wind, and solar characteristics. For each cluster, the day closest to the cluster's centroid is selected as the representative day and is assigned a weight proportional to the cluster's relative size. Finally, the load, wind, and solar time series for each representative day are scaled to match the correct annual averages. Fig. 6 shows the fit of the 15 TS approximation compared to the input data. A more detailed overview of the methodology can be found in [53].
Footnote 3: Here we are using 2020 data. Wind and solar availability data is collected following the approach outlined in Section 3.2 and historical electricity demand data is collected from ENTSO-E.
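One way to realise this clustering step is sketched below using scikit-learn's AgglomerativeClustering: each historical day becomes a feature vector of normalized hourly load, wind, and solar values, days are grouped into 15 clusters, and the day closest to each cluster mean is kept with a weight equal to the cluster's relative size. This is a simplified stand-in for the procedure of Nahmmacher et al. [53], with synthetic data in place of the ENTSO-E and Atlite inputs.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(2)
# Synthetic stand-in: 365 days x (24h load + 24h wind + 24h solar), normalized.
day_features = rng.random((365, 72))

labels = AgglomerativeClustering(n_clusters=15, linkage="ward").fit_predict(day_features)

representative_days, weights = [], []
for c in range(15):
    members = np.where(labels == c)[0]
    centroid = day_features[members].mean(axis=0)
    closest = members[np.argmin(np.linalg.norm(day_features[members] - centroid, axis=1))]
    representative_days.append(closest)          # index of the representative day
    weights.append(len(members) / 365)           # relative frequency of the cluster

print(list(zip(representative_days, [round(w, 3) for w in weights])))
```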
#### 3.3.2 Reference energy system
Fig. 7 illustrates the Reference Energy System (REF). The REF is a network representation of technical activities required to supply electricity to meet the final demand. It shows the connections between supply and demand, including land and water resources, which are included in the OSeMOSYS energy model. Fossil fuels considered are coal and natural gas. Fossil fuel resources are consumed by coal and gas power plants in proportion to their power generation output. The land resource obtained using Atlite and the CLC is fed into the model as an input. Land is utilized by wind and solar technologies in proportion to the increase in installed capacity, representing the use of land for the construction of energy infrastructure. Water availability for HPPs in the DRB is considered an input that constrains the hydropower cascade. Power imports and exports include cross-border power exchanges between the countries of the DRB, as well as from adjacent countries, including Croatia, Hungary, and Italy among others. For each country, the resources, power generation technologies, and losses related to transmission and distribution are separately accounted for.
Figure 6: Load and generation duration curves for demand, solar and wind of 15 TS approximation compared to aggregated 8760 TS input data.
#### 3.3.3 Hydropower cascade
Fig. 8 illustrates in more detail the DRB hydropower cascade depicted in blue in Fig. 7. We derive data for the cascade water inputs from the HypeWeb model, more specifically, the HYPE model for Europe (E-HYPE) [58]. From the E-HYPE 3.1.1 model version we calculate the average daily river discharge during the 1981-2010 period, for each of the following rivers: Cehotina, Lim, Piva, Tara, and Uvac. The resolution of this data corresponds to daily flows. This entails that flows within each TS are constant and equal to the daily average based on the E-HYPE model data. Flows change between the 15 selected TS. Water enters the cascade through the upstream river segments and the catchments. River segment capacities and water flows are input parameters. These capacities are fixed to the maximum average flow of each day. The WFDEI dataset of historical precipitation and temperature is used as forcing data in this simulation [58]. The capacities do not vary across the different years in the model.
Figure 8: Structure of the HPP cascade. Boxes represent technologies in OSeMOSYS, while connecting lines represent commodities. In this figure, the commodities are water flowing between the power plants, spillways, dams, catchments, and river segments. Catchments in this representation are aggregations of small tributaries or streams entering the DRB cascade.
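The preparation of the cascade inflow data can be illustrated as below: daily discharge series for each river are averaged over the 1981-2010 period by day of year, giving one inflow value per day that is then mapped onto the 15 representative days. The file name and column names are assumptions; the actual inputs come from the E-HYPE 3.1.1 model.

```python
import pandas as pd

# Sketch with an assumed file layout: one CSV per river with columns "date" and
# "discharge_m3s" covering 1981-2010 (daily values from E-HYPE 3.1.1).
def mean_daily_flow(csv_path: str) -> pd.Series:
    flows = pd.read_csv(csv_path, parse_dates=["date"]).set_index("date")
    # Average discharge for each day of the year over the 1981-2010 period.
    return flows["discharge_m3s"].groupby(flows.index.dayofyear).mean()

# Hypothetical usage for the five rivers feeding the cascade.
rivers = ["cehotina", "lim", "piva", "tara", "uvac"]
inflows = {r: mean_daily_flow(f"{r}_discharge.csv") for r in rivers}
```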
### Explored development pathways - a scenario analysis.
We created three scenarios to explore potential development pathways for the power sectors of the DRB countries. A Reference scenario is presented first to represent a baseline, followed by an Emission Limit Scenario to explore achieving net-zero emissions by 2050. Finally, with the Agricultural land for wind power scenario, we examine what effect wind power development on the DRB countries' vast agricultural land could have on the power sector. Each of the three scenarios is explored with two variations: high- and low-cost trade alternatives.
**Reference scenario (REF)**. The reference scenario serves as the baseline for the scenario analysis. All other scenarios are compared with results from this baseline. The model may invest in technologies that are currently employed in the power sectors of the DRB countries. These include coal, solar, hydro, and wind power with several exceptions. As the government of Montenegro has already cancelled its last coal project and is presently focused on accelerating the retirement of its remaining CFTPP [59], no new CFTPP projects are permitted in Montenegro in this scenario. Since projects concerning planned HPPs in the DRB are uncertain, there is no expansion of the hydropower cascade in this scenario. This scenario aims to provide insights into the development of the power sector based on current techno-economic parameters obtained from literature and consultations with local stakeholders, without any additional policy measures. The global assumptions are consistent across all scenarios and provided in Table A1.
**Emission limit scenario (EL)**. Assuming future EU integration of the DRB countries, this scenario aims to provide insight into the development of their power sectors if emissions are restricted to the EU's 2030 and 2050 GHG reduction targets. Compared to 1990 emissions levels, the target values correspond to a 55% reduction by 2030 and a net-zero emission level by 2050. The applied emission limit is shown in Table A2.
**Agricultural land for wind power scenario (AG)**. Here we relax the upper bound on wind capacity by making possible the development of wind on agricultural land. As a percentage of their total land area, Bosnia and Herzegovina and Montenegro have a modest 12% share of agricultural land, while Serbia has 40%. This scenario aims to inform us of the role that wind power plants (WPPs) on agricultural land may play in the decarbonization of the power sectors in the DRB countries.
## 4 Results
In this section, we present the key findings of the analysis. The results are reported in an aggregate form, combining the findings for all three DRB countries. We provide answers to what the potential of VRE in the DRB countries is, the role of VRE in supporting the transition to net-zero by 2050, and the impact new VRE developments can have on the existing hydropower cascade in terms of power generation and cost competitiveness.
### VRE Potential
Section 3.2 of this paper describes the methodology for estimating the potential of VRE technologies for the DRB countries. The findings summarized in Table 2 show that the DRB countries have a combined potential of 94.4 GW for wind and solar power. The majority of the VRE potential in Bosnia and Herzegovina and Montenegro is located on non-agricultural land. The distribution in Serbia is vastly different, with wind power potentials on agricultural lands accounting for over 80% of its total VRE potential. CFs for approximately half of the wind power potential on agricultural land in Bosnia and Herzegovina and Montenegro are within the lowest (around 15%) CF range, while most of the wind potential on agricultural land in Serbia shows CFs of around 35%.
| Country | Average wind CF of range [%] | Wind or solar power potential on shared land areas [GW] | Wind power potential on agricultural land [GW] | Solar power potential on land with low CF for wind power (<10%) [GW] |
| --- | --- | --- | --- | --- |
| Bosnia and Herzegovina | 15.6 | 2.7 | 4.2 | 2.1 |
| | 25.0 | 3.8 | 2.9 | - |
| | 34.8 | 3.8 | 0.9 | - |
| | 45.3 | 2.3 | 0.4 | - |
| *Subtotal* | | 12.6 | 8.4 | 2.1 |
| Montenegro | 15.2 | 2.1 | 0.8 | 1.8 |
| | 24.8 | 2.2 | 0.6 | - |
| | 34.5 | 1.5 | 0.4 | - |
| | 45.0 | 0.7 | 0.2 | - |
| *Subtotal* | | 6.5 | 2.0 | 1.8 |
| Serbia | 15.9 | 2.5 | 5.2 | 1.6 |
| | 24.9 | 3.9 | 15.4 | - |
| | 34.8 | 3.4 | 26.7 | - |
| | 43.8 | 0.7 | 1.6 | - |
| *Subtotal* | | 10.5 | 48.9 | 1.6 |
| **Total** | - | **29.6** | **59.3** | **5.5** |

Table 2: Wind and solar power potentials of the DRB countries
### Impact of VRE on the hydropower cascade
Fig. 9 illustrates the water levels in the HPP cascade for the low-cost trade alternative of the REF scenario. On the y-axis, the storage level is expressed in million cubic meters (MCM), while the x-axis represents the 360 time steps. The dark blue line represents the average storage level for each time step over the 2020-2050 period. The bright blue areas surrounding the mean indicate the minimum and maximum values within a 95% confidence interval.
Under the assumption of high cost of trade, investments in solar power increase by 37%, or 2.1 GW, compared to the low cost of trade alternative of the REF scenario. These capacity additions result in higher levels of power generation from solar power under the high cost of trade alternative for the REF scenario, as shown in Fig. 10. The added solar capacity drives an increase in power generation from the HPP cascade by 1515 GWh during the modelling period. The reason for increased power generation from the HPP cascade is that increased shares of solar are coupled with higher shares of CFTPP generation, reducing the share of wind and of hydropower outside of the basin. It is noteworthy that 92.2% of the increased power generation from the HPP cascade occurs during hours when solar power is not available. Additionally, Fig. 10 shows that the model relies on power imports during time steps with low wind and solar availability. Changing power import and export prices alters the investment and operational strategy of the model. The change is considerable, with imports decreasing under the high-cost scenario alternative, while investments in VRE technologies and CFTPPs each increase by over 3 GW.
Figure 9: HPP cascade water levels for the REF scenario in each time step for the 2020–2050 period
### Capacity additions to the power sectors of the DRB countries
Fig. 11 (a) shows the capacity additions under the REF scenario with low-cost trade by 2030, 2040 and 2050. VRE investments correspond to 12.2 GW by 2050, or 68% of all new capacity additions. Capacity additions under the high-cost trade alternative of the REF scenario are shown in Fig. 11 (b). They amount to 15.1 GW or 66% of the total new capacity. Due to high trade costs, power generation capacity investments are favoured over imports, as highlighted in section 4.2. This explains the increase in capacity additions between Fig. 11 (a) and (b). We observe even greater additions of VRE in the EL scenario, 16.5 and 13.5 GW for the high- and low-cost alternatives, under which coal must be phased out by 2050.
Figure 11: New cumulative capacity additions under the high- and low-cost trade alternatives of the REF scenario.
Figure 10: Daily power supply for each of the 15 representative days, based on the REF scenario with different power trade costs for the year 2050.
Fig. 12 shows the difference in capacity additions for the AG and EL scenarios when compared to the REF scenario. It includes differences for both the high- and low-cost trade variations of each respective scenario. In Fig. 12 (a) we observe higher capacity additions for the AG scenario compared to the REF. The reason is additional land availability for investments in high CF wind. Having extra capacity available decreases imports in this scenario when compared to the REF. The greatest difference can be observed in Fig. 12 (c) and (d) for 2050, where a combination of wind and hydropower capacity additions are added to compensate for the total decommissioning of CFTPPs. Due to the different generation profiles of power supply technologies, solar investments in the EL scenario are lower than in the REF scenario. This is due to the absence of CFTPP capacities that complement solar in the REF scenario.
Figure 12: Difference graph of cumulative capacity expansions between the explored scenarios and the high- and low-cost trade alternatives of the REF scenario. Values given in GW; negative values indicate lower capacity compared to the REF scenario.
### Developments of the power supply across explored scenarios for the DRB countries
Fig. 13 shows the power supply and the power sector expansion across the scenarios. The REF, AG, and EL scenarios are shown in different rows, while the left and right columns in the figure represent the low-cost and high-cost alternatives for each of the explored scenarios. The subplots (a), (c), and (e) show higher levels of power imports and rapid decommissioning of coal-fired thermal power plants. Power exports from the DRB countries are lower compared to the (b), (d) and (f) subplots since the low cost of export does not stimulate the model to invest in additional power-generating capacities to be used for exports. Net exports by 2050 range from 4% in Fig. 13(a) to 20% in Fig. 13 (b).
In Fig. 13 (b), higher coal shares facilitate the expansion of solar, corresponding to 7.8 GW by 2050. Investments in solar start five years earlier compared to the low-cost trade alternative of the REF scenario.
In the (b), (d), and (f) subplots of Fig. 13, the total power generation is higher. The excess electricity is in these cases exported to countries bordering the DRB countries. Part of the increased power generation comes from thermal power, which constitutes a higher share of the power supply mix under the high-cost trade alternatives of the presented scenarios. Thermal power is part of the power supply in all scenarios except for the EL in 2050. Investments in solar power are both greater and appear sooner when the cost of trade is high, as shown in the (b), (d) and (f) subplots when compared to the low-cost trade alternatives. Power generation from VRE sources by 2050 is the lowest in the REF scenario with low trade cost, corresponding to 51% of total power generated within the DRB countries. The highest shares of VRE in the power supply are observed in the EL scenario with high cost, where 73% of the total power generation is VRE based.
Figure 13: Power supply in the DRB countries, their imports and exports, all scenarios 2020-2050.
### Emissions associated with power generation in the DRB countries
The emissions shown in Fig. 14 represent CO2 emissions from the power sector. The emissions include the direct emissions from burning coal for power generation. A sharp decrease in emissions can be observed under all scenarios during the first five years of the modelling period. The main reason is the phase-out of inefficient CFTPPs. The results also indicate that the EL scenario with its high- and low-cost trade alternatives is the only scenario where net-zero emissions are reached by 2050. Subplot (a) illustrates CO2 emissions under the low cost of trade alternatives, while subplot (b) represents high-trade cost alternatives of the explored scenarios. Overall, the emissions associated with power generation are higher in the latter alternative, as shown in Fig. 14 (b).
Figure 14: CO\({}_{2}\) emissions from power generation by scenario in the DRB countries. Values are provided in MtCO\({}_{2}\). Subplot (a) represents the low-cost while (b) represents high-cost trade alternatives for each scenario.
## 5 Discussion
In this section, we discuss and interpret the results presented in section 4. The findings are discussed in terms of their potential implications for power sector development and their relation to the purpose of the study and the research questions posed.
One of the aims of this study was to assess the VRE potential within the DRB countries. The results shown in section 4.1 show an estimated combined wind and solar power potential of 94.4 GW. The breakdown of the total VRE capacity potentials among the DRB countries shows that for Bosnia and Herzegovina, Montenegro, and Serbia the combined VRE potential is 23.1, 10.3, and 61 GW respectively. These potentials far exceed the current total installed capacity in 2020 of the DRB countries, which is 4.1, 1, and 7.4 GW respectively [31, 32, 60]. Current installed capacities of wind and solar power in the DRB countries are less than 1 GW as of 2022 [31, 32, 60]. Compared to earlier assessments of VRE potentials shown in section 1, the results of this study show VRE capacity estimates that are 68% higher compared to estimates from IRENA [11] and 287% higher compared to SEERMAP reports [12-14]. According to Table 2, the wind potential in Montenegro is up to 8.5 GW if all available land is used for wind power development with no solar, and where agricultural land is available for wind power expansion. This figure far exceeds earlier estimates and assumptions of 400 MW wind potential in Montenegro made by [6-9]. Previous estimates from IRENA [11] stated a technical potential of wind power close to 3 GW in Montenegro, considerably greater than earlier studies [9], but less than the 8.5 GW estimate presented in this study. This shows that the capacity estimation of wind potentials has increased over time, which is to be expected given the rapid development of wind turbines.
The addition of VRE to the power supply mix would aid the DRB countries in achieving their commitments under the Sofia Declaration, where they accepted the obligation to submit National Energy and Climate Plans (NECPs), reduce their CO2 emissions and achieve climate neutrality by 2050. In this study, we assessed the capacity additions of VRE technologies required to achieve climate neutrality and aimed to provide answers as to if the local capacity potentials of VRE sources are sufficient to support the transition to net-zero. In section 4.3 we presented the capacity additions under the REF scenario and compared those to the capacity expansion in the AG and EL scenarios. The cost-optimized results of the scenario analysis suggest investments in wind and solar power corresponding to 10.7 and 5.8 GW respectively under the assumption of net zero by 2050 and a high cost of trade. This VRE capacity addition corresponds to 17.5% of the total VRE potential estimated for the DRB countries. The model invests in the wind with high CFs, namely in the 25-45% range. Our results are in line with what can be observed from investments in the United States (US) for 2020, where the minimum and maximum CF for wind power plants built in 2019 were 24 and 56% respectively [61]. We thus expect the potential of wind and solar power to be sufficient in supporting the transition to net-zero emissions from the power sectors of the DRB countries. However, the lower CF ranges for wind power and parts of the solar potential are not cost-competitive when compared to hydropower and imports across the scenarios. As nearly two-thirds of the VRE potentials are located on agricultural land, governments need to develop policies allowing the deployment of WPPs in these areas. Energy security has been a frequent argument made by proponents of domestic coal resources in the context of the energy transition away from fossil fuels. The findings of this study highlight that the DRB
countries have other, more environmentally friendly resource potentials that could satisfy their power demand without adversely affecting their energy security.
VRE technologies play a crucial role in the future power sectors of the DRB countries. Findings presented in section 4.3 highlight that the share of VRE in total new capacity additions is close to 70%. This in turn increases the share of power generated from VRE sources as shown in Fig. 13. The most rapid expansion in terms of capacity and power generation according to the findings presented in section 4 relates to wind power, which follows the power sector development trends of the past years in the DRB countries. Wind power developments in the DRB countries started in 2017. Plants such as Mesihovina, Podvelezje, and Jelovaca in Bosnia and Herzegovina, Krnovo and Mozura in Montenegro, as well as Cibuk 1 in Serbia, are examples of the latest additions to the power sectors of the DRB countries. Currently, solar power development has not occurred in the region. The model results for the REF scenario support this investment trend under the low-cost trade alternative, where no solar capacity additions are observed until 2030. However, under the EL scenario with a high cost of trade, investments in solar are observed as early as 2021. While replacing thermal capacities, the model invests in both wind and hydro since their joint availability profiles closely match the specified demand profile. In some of the 15 representative days the capacity factor of both wind and solar is low, while the demand is comparatively high. This makes a combination of wind and solar less cost-optimal compared to other power supply mixes since the model is otherwise forced to import large quantities to satisfy the unmet demand. We observe this dynamic in all graphs in Fig. 12, but most clearly in Fig. 12 (d), in which we illustrate that the target of net-zero emissions requires the removal of coal from the power supply. The high cost of power imports and exports drives the model to invest in additional capacity to reduce its import dependence while exporting with high profits.
We note that the relationship between the power supply alternatives included in the model could be different in a setting that includes different storage alternatives. Such a model would enable excess solar power to be stored in either pumped hydro storage (PHS) or other forms of power storage such as batteries. Having this added layer of flexibility as to when to use the generated power would reduce the need for the model to couple power supply alternatives solely based on their power availability profiles. We thus expect the choices of investment to be more flexible than the results suggest in Fig. 12.
Increasing the cost of power imports and exports results in changes within the power sector development. The results in Fig. 13 indicate an accelerated investment rate in solar PV, increased exports from the DRB countries, as well as more investments in coal-fired thermal power plants compared to the scenarios with lower costs of power imports and exports. The findings suggest a more rapid development of VRE projects in the DRB countries considering the current energy crisis in Europe that has increased the cost of electricity across the continent. The DRB countries can reduce their vulnerability to imports at high prices by expanding their capacity of VRE technologies and hydropower. Additionally, excess power generation from these sources in times of low domestic demand can be used for exports at a high cost to neighboring EU countries, such as Italy, Hungary, and Greece. Moreover, the potential introduction of the Carbon Border Adjustment Mechanism (CBAM) in the EU is another reason for increased investments in renewables. The CBAM would entail additional
taxation on power exported to the EU from the Western Balkan countries. This can result in new CFTPPs becoming stranded assets since their cost competitiveness would be compromised by the additional CBAM taxation. The fact that the DRB countries are net exporters is confirmed by the model results, which show net exports corresponding to 4 to 20% of the total power generation. The power sectors of the DRB countries could in the case of an introduced CBAM differ from the results shown in Fig. 13 by not having low-cost imports available or high revenues from exports due to the power being generated by CFTPPs. In that situation the DRB countries could find themselves becoming import dependent, while paying a high cost of electricity, leaving fewer resources for investments in the development and maintenance of their power sectors. Fig. 14 shows that CO2 emissions are the highest in the AG scenario under the low-cost of trade, while the REF scenario is the highest under the high-cost trade alternative. We can observe that the higher cost of trade results in higher emissions. This is in line with findings from Fig. 13 where we observed continued use of coal power plants under the high-cost trade alternatives of the scenarios. Capacity additions of renewable energy sources shown in Fig. 11 and Fig. 12 are the key reason for the observed CO2 emission reductions observed in Fig. 14 for the corresponding scenarios. The findings of this study suggest that the expansion of renewables in favour of CFTPPs is the main driving factor of the CO2 mitigation observed in Fig. 14.
Since the time steps are not sequential in the model, we cannot assess the effect of seasonal variations of water availability on storage levels under the explored scenarios. We can however compare the total water levels in each dam between different scenarios. The results suggest that scenarios with higher shares of SPPs and CFTPPs utilize the HPP cascade for power generation to a larger extent compared to scenarios where WPPs and HPPs outside the basin are the main capacity additions. As 92.2% of the increased power generation from the HPP cascade occurs during hours when solar power is not available, this indicates increased short-term balancing of renewables by the cascade, moving from baseload to more responsive power generation patterns. The finding highlights that potential expansion of the HPP cascade can enable larger shares of solar power, resulting in high shares of renewables in the power supply coupled with balancing capabilities of hydropower.
## 6 Conclusions
Having among the largest shares of coal-based power generation in Europe, the DRB countries of Bosnia and Herzegovina, Montenegro, and Serbia must take action to meet the EU's goal of net zero emissions by 2050. In this paper, we created a power sector model for the DRB countries with a scenario analysis exploring different development pathways. Inputs to the model consist of the latest available data on demands, future demand projections, costs and characteristics of current and future power-generating technologies considered.
We present a novel approach for assessing the VRE resource potentials by combining time-series data on availabilities from Atlite and the ERA5 dataset with high-resolution data obtained from the GWA. The findings from this approach indicate a capacity potential of 94.4 GW of VRE technologies in the DRB countries, of which 59.3 GW or 63% relate to wind power. Compared to the current installed capacity within the DRB countries of 12.5 GW, of which 627 MW is wind power, we observe that the potential for VRE deployment is largely untapped. According to the results, the VRE potential is significantly higher than previous assessments have shown, with increases ranging from 68% to 287%.
Findings from the Emission Limit scenario where net-zero emissions are expected by 2050 show investments in wind and solar power corresponding to 10.7 and 5.8 GW respectively. These investments constitute 17% of the assessed VRE potential presented in this study. Hence, the regional potential of VRE technologies is sufficient to decarbonize the power sector under the demand assumptions used in the model. Wind and solar power play a vital role in CO2 mitigation from the power sectors of the DRB countries. The share of these technologies ranges from 51% in the REF scenario with low cost of trade, to 73% in the EL scenario with high cost of trade.
VRE expansion in the DRB countries has a limited effect on power generation from the HPP cascade since the currently installed capacities of the eight HPPs in the DRB have no capital cost expenditure associated with them in the model. However, the results also indicate increases in the power output from the HPP cascade corresponding to 1515 GWh for the modelling period under the REF scenario where higher shares of solar power are present. As 92.2% of the increased power generation from the HPP cascade occurs when no solar is available, the HPP cascade increasingly acts as a short-term balancing option for VRE technologies, moving from baseload to more responsive power generation patterns.
The DRB countries have sufficient VRE potentials which are underexploited as of today. The potential of these technologies is sufficient to support the transition to net-zero by 2050, in which the role of VRE technologies is significant in terms of power supply to meet the demand and CO2 emission reductions. Failing to act on the development of renewables could lead to stranded assets in case of a CBAM introduction, while insufficient capacity could be costly, especially given the current costs of cross-border power trade in Europe and the risk of reduced import availability from EU countries surrounding the Western Balkans. Not aligning with the commitments undertaken in the Sofia Declaration could also hinder the process of accession to the EU.
## 7 Limitations and Future Research
In this section, we highlight the limitations of this paper, including potential topics of future research needs that could expand the work presented in this paper.
The presented assessment of land availability for VRE developments was limited to utility-scale technology options. We did not consider rooftop solar, which could be utilized in urban settings, nor solar on agricultural land. Since costs relating to the expansion of transmission and distribution lines, distances from the grid, slopes, or difficult-to-reach areas were not included when assessing the VRE capacity potential, the total calculated VRE capacity presented in this paper may overstate the potential that can realistically be exploited. In contrast, improvements in efficiency and capacity factors of VRE technologies, which are likely to improve their cost-competitiveness, are not included. Considering that the model developed for the DRB countries is intended to inform long-term energy infrastructure investments, and not site-specific power generation projects, future research could combine the presented methodology for estimating VRE potentials with site-specific analyses.
Given the large utilization of VRE technologies proposed by the results of this study, an important factor to consider is future additions of storage options. This can be done by adding representations of battery storage for solar power or pumped hydro storage.
In the created model, the power demand is a driving factor for the expansion of the power sector. We use data from the current demand profiles and demand projections made by the local transmission system operators. An interesting point to consider going forward is the impact of demand reductions based on energy efficiency measures. Energy efficiency in the DRB region can be significantly improved, and reducing demand in turn reduces the need for new capacity additions. Energy efficiency improvements in the Western Balkans have over the past decade been the basis for financial support in the region, from the Regional Energy Efficiency Programme launched in 2012 [62] to the Energy Support Package [63] put forward in 2022, comprising 1 billion Euro toward diversification of energy supplies, increased renewable energy, and energy efficiency.
Cross-border trade is present in all explored scenarios. As highlighted in this paper, the DRB countries could find themselves becoming increasingly dependent on imports in case of energy shortages caused by the decommissioning of thermal power or by delayed investments in low-carbon technologies. Imports could then not only be more expensive but also unavailable due to the disruptions of the power markets in Europe caused by the ongoing conflict in Ukraine. The impact of import availability on the expansion of the power sector of the DRB countries may be better understood by further modelling of the region with different levels of import availability.
## Declaration of Competing Interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgement
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101022622. The authors would like to acknowledge the United Nations Economic Commission for Europe (UNECE) for the funding provided and for facilitating access to stakeholders, together with Global Water Partnership Mediterranean (GWP-Med) through the project "Promoting the Sustainable Management of Natural Resources in Southeastern Europe, through the use of Nexus approach (2016-2022)" funded by the Austrian Development Agency. The authors would like to thank Francesco Gardumi and Youssef Almulla for their contribution to the development of the hydropower cascade representation.
## Supplementary material
The supplementary material to this paper is available at the following Zenodo deposit. |
2308.08924 | Frequency Perception Network for Camouflaged Object Detection | Camouflaged object detection (COD) aims to accurately detect objects hidden
in the surrounding environment. However, the existing COD methods mainly locate
camouflaged objects in the RGB domain, their performance has not been fully
exploited in many challenging scenarios. Considering that the features of the
camouflaged object and the background are more discriminative in the frequency
domain, we propose a novel learnable and separable frequency perception
mechanism driven by the semantic hierarchy in the frequency domain. Our entire
network adopts a two-stage model, including a frequency-guided coarse
localization stage and a detail-preserving fine localization stage. With the
multi-level features extracted by the backbone, we design a flexible frequency
perception module based on octave convolution for coarse positioning. Then, we
design the correction fusion module to step-by-step integrate the high-level
features through the prior-guided correction and cross-layer feature channel
association, and finally combine them with the shallow features to achieve the
detailed correction of the camouflaged objects. Compared with the currently
existing models, our proposed method achieves competitive performance in three
popular benchmark datasets both qualitatively and quantitatively. | Runmin Cong, Mengyao Sun, Sanyi Zhang, Xiaofei Zhou, Wei Zhang, Yao Zhao | 2023-08-17T11:30:46Z | http://arxiv.org/abs/2308.08924v1 | # Frequency Perception Network for Camouflaged Object Detection
###### Abstract.
Camouflaged object detection (COD) aims to accurately detect objects hidden in the surrounding environment. However, the existing COD methods mainly locate camouflaged objects in the RGB domain, their performance has not been fully exploited in many challenging scenarios. Considering that the features of the camouflaged object and the background are more discriminative in the frequency domain, we propose a novel learnable and separable frequency perception mechanism driven by the semantic hierarchy in the frequency domain. Our entire network adopts a two-stage model, including a frequency-guided coarse localization stage and a detail-preserving fine localization stage. With the multi-level features extracted by the backbone, we design a flexible frequency perception module based on octave convolution for coarse positioning. Then, we design the correction fusion module to step-by-step integrate the high-level features through the prior-guided correction and cross-layer feature channel association, and finally combine them with the shallow features to achieve the detailed correction of the camouflaged objects. Compared with the currently existing models, our proposed method achieves competitive performance in three popular benchmark datasets both qualitatively and quantitatively. The code will be released at [https://github.com/rmcong/FPNet_ACMMM23](https://github.com/rmcong/FPNet_ACMMM23).
Camouflaged object detection, Frequency perception, Coarse positioning stage, Fine localization stage.
Footnote †: Corresponding author
## 1. Introduction
In nature, animals use camouflage to blend in with their surroundings to avoid detection by predators. The camouflaged object detection (COD) task aims to allow computers to automatically recognize these camouflaged objects that blend in with the background, which can be used in numerous downstream applications, including medical segmentation (30, 14, 21, 28), unconstrained face recognition (3), and recreational art (15, 23). However, the COD task is very challenging due to the low contrast properties between the camouflaged object and the background. Furthermore, camouflaged objects may have multiple appearances, including shapes, sizes, and textures, which further increases the difficulty of detection.
At the beginning of this line of research, the COD task was regarded as a low-contrast special case of the salient object detection (SOD) task, but simply retraining SOD models (5; 6; 7; 8; 9; 10; 12; 13; 30; 62) cannot obtain satisfactory COD results; some special positioning design is usually required to find camouflaged objects. Recently, with the development of deep learning (22; 27; 46; 61), many customized networks for COD tasks have gradually emerged (20; 41). However, current solutions still struggle in challenging situations,
such as multiple camouflaged objects, uncertain or fuzzy object boundaries, and occlusion, as shown in Figure 1. In general, these methods mainly design modules in the RGB color domain to detect camouflaged objects, and complete the initial positioning of camouflaged objects by looking for areas with inconsistent information such as textures (called breakthrough points). However, the concealment and confusion of the camouflaged objects itself make this process very difficult. In the image frequency domain analysis, the high-frequency and low-frequency component information in the frequency domain describes the details and contour characteristics of the image in a more targeted manner, which can be used to improve the accuracy of the initial positioning. Inspired by this, we propose a Frequency Perception Network (FPNet) that employs a two-stage strategy of search and recognition to detect camouflaged objects, taking full advantage of RGB and frequency cues.
On the one hand, the main purpose of the frequency-guided coarse positioning stage is to use the frequency domain features to find the breakthrough points of the camouflaged object position. We first adopt the transformer backbone to extract multi-level features of the input RGB image. Subsequently, in order to realize the extraction of frequency domain features, we introduce a frequency-perception module to decompose color features into high-frequency and low-frequency components. Among them, the high-frequency features describe texture features or rapidly changing parts, while the low-frequency features can outline the overall contour of the image. Considering that both texture and contour are important for camouflaged object localization, we fuse them as a complete representation of frequency domain information. In addition, a neighbor interaction mechanism is also employed to combine different levels of frequency-aware features, thereby achieving coarse detection and localization of camouflaged objects. On the other hand, the detail-preserving fine localization stage focuses on progressively prior-guided correction and fusion across layers, thereby generating the final finely camouflaged object masks. Specifically, we design the correction fusion module to achieve the cross-layer high-level feature interaction by integrating the prior-guided correction and cross-layer feature channel association. Finally, the shallow high-resolution features are further introduced to refine and modify the boundaries of camouflaged objects and generate the final COD result.
The main contributions are summarized as follows:
* We propose a novel two-stage framework to deeply exploit the advantages of RGB and frequency domains for camouflaged object detection in an end-to-end manner. The proposed network achieves competitive performance on three popular benchmark datasets (_i.e.,_ COD10K, CHAMELEON, and CAMO).
* A novel fully frequency-perception module is designed to enhance the ability to distinguish camouflaged objects from backgrounds by automatically learning high-frequency and low-frequency features, thereby achieving coarse localization of camouflaged objects.
* We design a progressive refinement mechanism to obtain the final refined camouflaged object detection results through prior-guided correction, cross-layer feature channel association, and shallow high-resolution boundary refinement.
## 2. Related Work
The COD task aims to localize objects that have a similar appearance to the background, which makes it extremely challenging. Early methods employed hand-crafted low-level features to achieve this goal, such as color (Zhou et al., 2017), expectation-maximization statistics (Zhou et al., 2017), convex intensity (Zhou et al., 2017), optical flow (Zhou et al., 2017), and texture (Beng et al., 2017; Chen et al., 2017). However, due to the imperceptible differences between objects and backgrounds in complex environments, and the limited expressive power of hand-crafted features, they do not perform satisfactorily.
Recently, CNN-based methods (Zhou et al., 2017; Chen et al., 2017; Chen et al., 2017) have achieved significant success in the COD task. In general, CNN-based methods often employ one or more of the following strategies: a two-stage strategy (Chen et al., 2017; Chen et al., 2017), a multi-task learning strategy (Zhou et al., 2017), and incorporating other guiding cues such as frequency (Zhou et al., 2017). For instance, Fan _et al._(Fan et al., 2017) proposed a two-stage process named SINet, which set a new state-of-the-art on existing COD datasets, and created COD10K, the largest COD dataset with 10K images. Mei _et al._(Mei et al., 2017) imitated the predator-prey process in nature and developed a two-stage bionic framework called PFNet.
In terms of frequency domain studies, Gueguen _et al._(Gueguen et al., 2017) directly used the Discrete Cosine Transform (DCT) coefficients of the image as input to CNN for subsequent visual tasks. Ehrlich _et al._(Ehrlich et al., 2017) presented a general conversion algorithm for transforming spatial domain networks to the frequency domain. Interestingly, both of these works delve deep into the frequency domain transformation of the image JPEG compression process. Subsequently, Zhong _et al._(Zhong et al., 2017) modeled the interaction between the frequency domain and the RGB domain, introducing the frequency domain as an additional cue to better detect camouflaged objects from backgrounds. Unlike these methods, on the one hand, we use octave convolution to realize the online learning of frequency domain features, instead of offline extraction methods (e.g., DCT); on the other hand, frequency domain features are mainly used for coarse
Figure 1. Three challenging camouflaged object detection (COD) scenarios from top to down are with indefinable boundaries, multiple objects, and occluded objects, respectively. The images from left to right are (a) Input image, (b) GT, (c) Ours, (d) SINet-V2 (Gueguen et al., 2017), (e) LSR (Zhou et al., 2017).
positioning in the first stage, that is, by making full use of high-frequency and low-frequency information to find the breakthrough point of camouflaged object positioning in the frequency domain.
In addition, some methods (Yang et al., 2017; Gao et al., 2018; Gao et al., 2019) also try to combine edge detection to extract more precise edges, thereby improving the accuracy of COD. It is worth mentioning that in order to exploit the power of the Transformer model in the COD task, many Transformer-based methods have emerged. For example, Yang _et al._(Yang et al., 2019) proposed to incorporate Bayesian learning into Transformer-based reasoning to achieve the COD task. The T2Net proposed by Mao _et al._(Mao et al., 2021) in 2021 used a Swin-Transformer as the backbone network, surpassing all CNN-based approaches at that time.
## 3. Our Approach
### Overview
Our goal is to exploit and fuse the inherent advantages of the RGB and frequency domains to enhance the discrimination ability to discover camouflaged objects in the complex background. To that end, in this paper, we propose a Frequency Perception Network (FPNet) for camouflaged object detection, as shown in Figure 2, including a feature extraction backbone, a frequency-guided coarse localization stage, and a detail-preserving fine localization stage.
Given an input image \(I\in\mathbb{R}^{H\times W\times 3}\), for the feature extraction backbone, we adopt the Pyramid Vision Transformer (PVT) (Yang et al., 2019) as the encoder to generate features of different levels, denoted as \(X_{i}\) (\(i=\{1,2,3,4\}\)). Each feature map serves a different purpose. The first-level feature map \(X_{1}\) includes rich detailed information about the camouflaged object, whereas the deeper-level features (\(X_{2}\), \(X_{3}\), \(X_{4}\)) contain higher-level semantic information. With the pyramid backbone features, in the frequency-guided coarse localization stage, we first use a frequency-perception module (FPM) for frequency-domain feature extraction on high-level features and then adopt the neighborhood connection decoder for feature fusion decoding to obtain the coarse COD map \(S_{1}\). Whereafter, in the detail-preserving fine localization stage, with the guidance of coarse COD map \(S_{1}\), the high-level features are embedded into the correction fusion module (CFM) to progressively achieve prior-guided
Figure 2. The overview of our proposed two-stage network FPNet. The input image is first extracted with multi-level features by a PVT encoder. In the frequency-guided coarse localization stage, we use FPM for frequency-domain feature extraction and generate the coarse COD map \(S_{1}\). Then, in the detail-preserving fine localization stage, the CFM is used to achieve progressively prior-guided correction and fusion across high-level layers. Finally, the first-level high-resolution features are further introduced to refine the boundaries of camouflaged objects and generate the final result \(S_{output}\).
correction and fusion across layers. Finally, a receptive field block (RFB) with spatial attention mechanism (SAM) is used for low-level high-resolution feature optimization and combined with the CFM module output to obtain the final COD result \(S_{output}\).
### Frequency-guided Coarse Positioning
Inspired by predator hunting systems, frequency information is more advantageous than RGB appearance features for a specific predator in the wild environment. This point of view has also been verified in (Wang et al., 2018), which then proposed a frequency domain method for camouflaged object detection. Specifically, this work (Wang et al., 2018) used an offline discrete cosine transform to convert the RGB domain information of an image to the frequency domain, but the offline frequency extraction method limits its flexibility. As described in (Wang et al., 2018), octave convolution can learn to divide an image into low and high frequency components in the frequency domain. The low-frequency features correspond to pixel points with gentle intensity transformations, such as large color blocks, that often represent the main part of the object. The high-frequency components, on the other hand, refer to pixels with intense brightness changes, such as the edges of objects in the image. Inspired by this, we propose a frequency-perception module to automatically separate features into high-frequency and low-frequency parts, and then form a frequency-domain feature representation of camouflaged objects; the detailed process is shown in Figure 3.
Specifically, we employ octave convolution (Wang et al., 2018) to automatically perceive high-frequency and low-frequency information in an end-to-end manner, enabling online learning of camouflaged object detection. The octave convolution can effectively avoid blockiness caused by the DCT and utilize the advantage of the computational speed of GPUs. In addition, it can be easily plugged into arbitrary networks. The detailed process of output of the octave convolution \(Y_{i}=\{Y_{i}^{H},Y_{i}^{L}\}\) could be described in the following:
\[Y_{i}^{H}=F(X_{i}^{H};W^{H\to H})+\text{Upsample}(F(X_{i}^{L};W^{L\to H}),2), \tag{1}\]
\[Y_{i}^{L}=F(X_{i}^{L};W^{L\to L})+F(\text{pool}(X_{i}^{H},2);W^{H\to L}), \tag{2}\]
where \(F(X;W)\) denotes a convolution with the learnable parameters of \(W\), \(\text{pool}(X,k)\) is an average pooling operation with kernel size of \(k\times k\), and \(\text{Upsample}(X,s)\) is an up-sampling operation by a factor of \(s\) via nearest interpolation.
Considering that both high-frequency texture attribute and low-frequency contour attribute are important for camouflaged object localization, we fuse them as a complete representation of frequency domain information:
\[f_{i}=\text{Resize}(Y_{i}^{H})\oplus\text{Resize}(Y_{i}^{L}), \tag{3}\]
where Resize means to adjust features to a fixed dimension, and \(\oplus\) is the element-wise addition.
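As a concrete illustration, the following PyTorch sketch combines the octave convolution of Eqs. (1)-(2) with the fusion of Eq. (3). The equal channel split between the two branches, the kernel sizes and the module name are assumptions made for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveFPMSketch(nn.Module):
    """Sketch of the frequency-perception computation: one octave convolution
    (Eqs. 1-2) followed by the fusion of Eq. (3). An even 50/50 channel split is
    assumed so that the two branches can be added after resizing; (x_h, x_l) are
    the high-/low-frequency parts of a backbone feature, assumed already split."""
    def __init__(self, in_ch=64, out_ch=64):
        super().__init__()
        in_h, in_l = in_ch // 2, in_ch - in_ch // 2
        out_h, out_l = out_ch // 2, out_ch - out_ch // 2
        # the four learnable mappings W^{H->H}, W^{L->H}, W^{L->L}, W^{H->L}
        self.h2h = nn.Conv2d(in_h, out_h, 3, padding=1)
        self.l2h = nn.Conv2d(in_l, out_h, 3, padding=1)
        self.l2l = nn.Conv2d(in_l, out_l, 3, padding=1)
        self.h2l = nn.Conv2d(in_h, out_l, 3, padding=1)

    def forward(self, x_h, x_l, out_size):
        # Eq. (1): Y^H = F(X^H; W^{H->H}) + Upsample(F(X^L; W^{L->H}), 2)
        y_h = self.h2h(x_h) + F.interpolate(self.l2h(x_l), scale_factor=2, mode="nearest")
        # Eq. (2): Y^L = F(X^L; W^{L->L}) + F(pool(X^H, 2); W^{H->L})
        y_l = self.l2l(x_l) + self.h2l(F.avg_pool2d(x_h, 2))
        # Eq. (3): resize both components to a common size and add element-wise
        return (F.interpolate(y_h, size=out_size, mode="bilinear", align_corners=False)
                + F.interpolate(y_l, size=out_size, mode="bilinear", align_corners=False))
```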
Then, the Neighbor Connection Decoder (NCD) (Wang et al., 2018), as shown in the top region (the part above the three FPMs) of Figure 2, is adopted to gradually integrate the frequency-domain features of the top-three layers, fully utilizing the cross-layer semantic context relationship through the neighbor layer connection, which can be represented as:
\[\left\{\begin{array}{l}f_{4}^{\prime}=\sigma\uparrow(f_{4}),\\ f_{3}^{\prime}=f_{3}\otimes\sigma\uparrow(f_{4}),\\ f_{2}^{\prime}=\text{cat}(f_{2}\otimes\sigma\uparrow(f_{3}^{\prime}),\text{cat}(f_{3}^{\prime},f_{4}^{\prime})),\end{array}\right. \tag{4}\]
where \(\otimes\) is element-wise multiplication, \(\sigma\uparrow(x)\) denotes an up-sampling along with a 3\(\times\)3 convolution, cat() denotes concatenation along with a 3\(\times\)3 convolution, and \(f_{2}^{\prime}\) is the output of NCD. After this stage, we use a simple convolution to obtain a coarse mask \(S_{1}\) that reveals the initial location of the camouflaged object.
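A rough sketch of the neighbor connection decoder in Eq. (4) is given below, assuming every frequency feature carries the same number of channels; the layer names and the exact placement of the 3×3 convolutions are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NCDSketch(nn.Module):
    """Sketch of the neighbor connection decoder (Eq. 4) producing the coarse map S1.
    f2, f3, f4 are frequency features from shallow to deep, each with `ch` channels."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv_up4 = nn.Conv2d(ch, ch, 3, padding=1)        # sigma-up applied to f4
        self.conv_up3 = nn.Conv2d(ch, ch, 3, padding=1)        # sigma-up applied to f3'
        self.conv_cat34 = nn.Conv2d(2 * ch, ch, 3, padding=1)  # inner cat(f3', f4')
        self.conv_cat2 = nn.Conv2d(2 * ch, ch, 3, padding=1)   # outer cat in Eq. (4)
        self.head = nn.Conv2d(ch, 1, 1)                        # simple conv -> coarse mask S1

    @staticmethod
    def _up(x, ref):
        return F.interpolate(x, size=ref.shape[-2:], mode="bilinear", align_corners=False)

    def forward(self, f2, f3, f4):
        f4_up = self.conv_up4(self._up(f4, f3))    # f4' = sigma-up(f4)
        f3_p = f3 * f4_up                          # f3' = f3 (x) sigma-up(f4)
        inner = self.conv_cat34(torch.cat([f3_p, f4_up], dim=1))
        f2_p = self.conv_cat2(torch.cat([f2 * self.conv_up3(self._up(f3_p, f2)),
                                         self._up(inner, f2)], dim=1))
        return self.head(f2_p)                     # coarse prediction S1
```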
### Detail-preserving Fine Localization
In the previous section, we introduced how to use frequency-domain features to achieve coarse localization of camouflaged objects. However, the first stage is more like a process of finding and locating breakthrough points, so the integrity and accuracy of its results are still insufficient. To this end, we propose a detail-preserving fine localization mechanism, which not only achieves a progressive fusion of high-level features through prior correction and channel association but also considers high-resolution features to refine the boundaries of camouflaged objects, as shown in Figure 2.
To achieve the above goals, we first design a correction fusion module (CFM), which effectively fuses adjacent layer features and a coarse camouflaged mask to produce fine output. The module includes three inputs: the current and previous layer features \(X_{i}\) and \(X_{i+1}\), and the coarse mask \(S_{g}=\{S_{1},S_{2}\}\). In addition, we first reduce the number of input feature channels to 64, denoted as \(F_{i}\) and \(F_{i+1}\), which helps to improve computational efficiency while still retaining relevant information for detection. As shown in Figure 4, our CFM consists of two parts. In order to make full use of the existing prior guidance map \(S_{g}\), we purify the features of the previous layer and select the features most related to the camouflaged features to participate in the subsequent cross-layer interaction.
Figure 4. The schematic illustration of the correction fusion module (CFM). CFM contains two parts, _i.e._, prior-guided correction and channel-wise correlation modeling.
Figure 3. Illustration of frequency-perception module (FPM). Two branches are for high-frequency and low-frequency information learning, respectively.
Mathematically, the feature map \(F_{i+1}\) is first multiplied with the coarse mask \(S_{g}\) to obtain the output features \(f^{{}^{\prime}}_{i+1}\):
\[f^{{}^{\prime}}_{i+1}=\text{Upsample}(F_{i+1}\odot S_{g}), \tag{5}\]
where \(\odot\) denotes element-wise multiplication, and Upsample is the upsampling operation. This prior-guided correction is particularly beneficial in scenarios where the object is difficult to discern from its surroundings.
It is well known that high-level features possess very rich channel-aware cues. In order to achieve more sufficient cross-layer feature interaction and effectively transfer the high-level information of the previous layer to the current layer, we design the channel-level association modeling. We perform channel attention by taking the inner product between each pixel point on \(F_{i}\) and \(f^{{}^{\prime}}_{i+1}\), which calculates the similarity between different feature maps in the channel dimension of the same pixel. To further reduce computational complexity, we also employ a \(3\times 3\) convolution that creates a bottleneck structure, thereby compressing the number of output channels. This process can be described as:
\[A=\text{conv}(F_{i}\otimes(f^{\prime}_{i+1})^{T}), \tag{6}\]
where \(\otimes\) is the matrix multiplication. Then, we learn two weight maps, \(\alpha\) and \(\beta\), by using two \(3\times 3\) convolution operations on the features \(A\). They are further used in the correction of the features of the current layer \(F_{i}\) in a modulation manner. In this way, the final cross-level fusion features can be generated through the residual processing:
\[f^{out}_{i}=f^{{}^{\prime}}_{i+1}+\text{conv}(F_{i})*\alpha+\beta. \tag{7}\]
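The sketch below shows one possible reading of Eqs. (5)-(7), in which the matrix product of Eq. (6) is interpreted as a per-pixel inner product over channels; this interpretation, together with the layer names and channel widths, is an assumption and not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CFMSketch(nn.Module):
    """Sketch of the correction fusion module (Eqs. 5-7) under one plausible
    reading of Eq. (6). All widths and layer names are illustrative."""
    def __init__(self, ch=64, mid=32):
        super().__init__()
        self.bottleneck = nn.Conv2d(1, mid, 3, padding=1)  # 3x3 conv applied to the similarity map A
        self.to_alpha = nn.Conv2d(mid, 1, 3, padding=1)    # weight map alpha
        self.to_beta = nn.Conv2d(mid, 1, 3, padding=1)     # weight map beta
        self.conv_cur = nn.Conv2d(ch, ch, 3, padding=1)    # conv(F_i) in Eq. (7)

    def forward(self, F_i, F_ip1, S_g):
        # Eq. (5): mask the previous-layer features with the coarse prediction, then upsample.
        # A sigmoid is assumed here in case S_g is a logit map.
        S_g = F.interpolate(S_g, size=F_ip1.shape[-2:], mode="bilinear", align_corners=False)
        f_ip1 = F.interpolate(F_ip1 * torch.sigmoid(S_g), size=F_i.shape[-2:],
                              mode="bilinear", align_corners=False)
        # Eq. (6), assumed reading: channel-wise similarity between F_i and f'_{i+1} per pixel
        A = self.bottleneck((F_i * f_ip1).sum(dim=1, keepdim=True))
        alpha, beta = self.to_alpha(A), self.to_beta(A)
        # Eq. (7): residual modulation of the current-layer features
        return f_ip1 + self.conv_cur(F_i) * alpha + beta
```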
In addition to the above-mentioned prior correction and channel-wise association modeling on the high-level features, we also make full use of the high-resolution information of the first layer to supplement the detailed information. Specifically, we use the receptive field block (RFB) module (Wang et al., 2017) and the spatial attention module (Wang et al., 2018) on the first-layer features (\(X_{1}\)) to enlarge the receptive field and highlight the important spatial information of the features, and then fuse with the output of the CFM module (\(f^{out}_{2}\)) to generate the final prediction map:
\[S_{output}=\textit{Bconv}(\textit{Bconv}(\textit{SAM}(\textit{RFB}(X_{1}))\oplus f^{out}_{2})), \tag{8}\]
where \(RFB\) and \(SAM\) are the receptive field block and the spatial attention module, respectively. _Bconv_ represents the \(3\times 3\) convolution layer along with the batch normalization and ReLU.
### Loss Function
Following (Wang et al., 2018), we compute the weighted binary cross-entropy loss (\(\mathcal{L}^{\omega}_{BCE}\)) and the weighted IoU loss (\(\mathcal{L}^{\omega}_{IoU}\)) on three COD maps (_i.e._, \(S_{1}\), \(S_{2}\), and \(S_{output}\)) to form our final loss function:
\[\mathcal{L}_{total}=\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{output}, \tag{9}\]
where \(\mathcal{L}_{*}=\mathcal{L}^{\omega}_{BCE}+\mathcal{L}^{\omega}_{IoU}\), \(*=\{1,2,output\}\), \(\mathcal{L}_{1}\) denotes the loss between the coarse prediction map \(S_{1}\) and ground truth, \(\mathcal{L}_{2}\) denotes the loss on the prediction map \(S_{2}\) after the first CFM, and \(\mathcal{L}_{output}\) denotes the loss between the final prediction map \(S_{output}\) and ground truth.
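For reference, a common instantiation of the weighted BCE and weighted IoU terms is the structure loss popularised by the work the authors follow; the boundary-aware pixel weighting used in the sketch below is therefore an assumption about the exact form.

```python
import torch
import torch.nn.functional as F

def structure_loss(pred, mask):
    """Weighted BCE + weighted IoU loss, one common instantiation of the terms in Eq. (9).
    `pred` holds logits and `mask` the binary ground truth; the weighting scheme is assumed."""
    weit = 1 + 5 * torch.abs(F.avg_pool2d(mask, 31, stride=1, padding=15) - mask)
    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction="none")
    wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))
    prob = torch.sigmoid(pred)
    inter = (prob * mask * weit).sum(dim=(2, 3))
    union = ((prob + mask) * weit).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)
    return (wbce + wiou).mean()

def total_loss(s1, s2, s_out, mask):
    """Eq. (9): sum of the losses on the coarse, intermediate and final maps."""
    return structure_loss(s1, mask) + structure_loss(s2, mask) + structure_loss(s_out, mask)
```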
## 4. Experiment
### Experimental Settings
**Datasets.** We conduct experiments and evaluate our proposed method on three popular benchmark datasets, _i.e._, CHAMELEON (Zhou et al., 2017), CAMO (Zhou et al., 2017), and COD10K (Zhou et al., 2017). The CHAMELEON (Zhou et al., 2017) dataset has 76 images. CAMO (Zhou et al., 2017) contains 1,250 camouflaged images covering different categories, which are divided into 1,000 training images and 250 testing images, respectively. As the largest benchmark dataset currently, COD10K (Zhou et al., 2017) includes 5,066 images in total; 3,040 images are chosen for training and 2,026 images are used for testing. There are five concealed super-classes (_i.e._, _terrestrial_, _flying_, _aquatic_, _amphibian_, _other_) and 69 sub-classes. The pixel-level ground-truth annotations of each image in these three datasets are provided. Besides, for a fair comparison, we follow the same training strategy as previous works (Zhou et al., 2017): our training set includes 3,040 images from the COD10K dataset and 1,000 images from the CAMO dataset.
**Evaluation Metrics.** We use four widely used and standard metrics to evaluate the proposed method, _i.e._, structure-measure (\(S_{\alpha}\)) (Wang et al., 2018), mean E-measure (\(E_{\phi}\)) (Wang et al., 2018), weighted F-measure (\(F^{\omega}_{\beta}\)) (Zhou et al., 2017), and mean absolute error (_MAE_) (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018). Overall, a better COD method has larger \(S_{\alpha}\), \(E_{\phi}\), and \(F^{\omega}_{\beta}\) scores, but a smaller _MAE_ score.
**Implementation Details.** The proposed method is implemented with PyTorch and leverages Pyramid Vision Transformer (Zhou et al., 2017) pre-trained on ImageNet (Zhou et al., 2017) as our backbone network. We also implement our network by using the MindSpore Lite tool1. To update the network parameters, we use the Adam optimizer, which is widely used in transformer-based networks (Zhou et al., 2017; Zhou et al., 2017; Zhou et al., 2017). The initial learning rate is set to 1e-4 and weight decay is adjusted to 1e-4. Furthermore, we resize the input images to 512\(\times\) 512, the model is trained with a mini-batch size of 4 for 100 epochs on an NVIDIA 2080Ti GPU. We augment the training data by applying techniques such as random flipping, random cropping, and so on.
Footnote 1: [https://www.mindspore.cn/](https://www.mindspore.cn/)
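The reported training configuration can be reproduced roughly along the following lines; `FPNet` and `train_loader` are placeholders for the full model and data pipeline, and `total_loss` refers to the loss sketch above, so this is not the released training script.

```python
import torch

model = FPNet().cuda()                     # placeholder for the full two-stage network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)

for epoch in range(100):                   # 100 epochs, mini-batch size 4, inputs resized to 512x512
    for images, masks in train_loader:     # placeholder loader with random flips/crops
        images, masks = images.cuda(), masks.cuda()
        s1, s2, s_out = model(images)      # coarse map S1, intermediate map S2, final map
        loss = total_loss(s1, s2, s_out, masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```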
### Comparison with State-of-the-art Methods
We conduct a comparison of our proposed method with 12 state-of-the-art methods, including FPN (Zhou et al., 2017), MaskRCNN (Zhou et al., 2017), CPD (Zhou et al., 2017), SINet (Zhou et al., 2017), LSR (Zhou et al., 2017), PraNet (Zhou et al., 2017), CzFNet (Zhou et al., 2017), UGTR (Zhou et al., 2017), PFNet (Zhou et al., 2017), ZoomNet (Zhou et al., 2017), SINet-V2 (Zhou et al., 2017), and FreNet (Zhou et al., 2017). The visualization comparisons and quantitative results are shown in Figure 5, and Table 1 summarizes the quantitative results of the COD methods on three benchmark datasets.
**Quantitative Evaluation.** Table 1 presents a detailed comparison of evaluation metrics, from which we can observe that our proposed model (FPNet) outperforms all SOTA models on all datasets. For example, our FPNet achieves obvious performance gains over other state-of-the-art ones on the CAMO-Test dataset. According to Table 1, our proposed FPNet achieves the best weighted F-measure (\(F_{\beta}^{\omega}\)) score of 0.806 on the CAMO-Test dataset, and the MAE score outperforms the second-best method ZoomNet (Wang et al., 2018) by 15.2%. Moreover, the proposed FPNet outperforms ZoomNet (Wang et al., 2018) by an obvious margin in terms of the \(F_{\beta}^{\omega}\) on all three datasets. For example, compared with the second-best method, the percentage gains of the \(F_{\beta}^{\omega}\) reach 2.6%, 7.2%, and 1.3% on the COD10K-Test, CAMO-Test and CHAMELEON datasets, respectively. The frequency-guided method FreNet (Wang et al., 2018) achieves better performance than most state-of-the-art methods. However, our proposed FPNet outperforms FreNet comprehensively in terms of all evaluation metrics, indicating that the proposed learnable frequency-guided solution is superior in discerning discriminative cues of camouflaged objects.
**Qualitative Evaluation.** As shown in Figure 5, whether the camouflaged object in the image is terrestrial or aquatic, or a camouflaged human, the proposed FPNet method is capable of accurately predicting the region of the camouflaged object. When the camouflaged object is extremely similar to the background, as illustrated in the first row in Figure 5, other SOTA methods fail to accurately distinguish the camouflaged object, especially on the edge regions. By contrast, the proposed FPNet, benefiting from a frequency-aware learning mechanism, can clearly predict the mask of objects with clear and sharp boundaries. When tackling complex background interference, including the salient but non-camouflaged objects (see the third row of Figure 5), our proposed FPNet is capable of effectively separating the camouflaged object from the background, with a more complete and clear structure description ability. For the approximate appearance of similar objects, as shown in the fourth row of Figure 5, the camouflaged human face is hard to distinguish from other pineapples. Most methods fail to recognize it, but our proposed FPNet can discern it clearly. Additionally, our proposed FPNet is also effective in detecting some challenging situations (as displayed in Figure 1), such as indefinable boundaries, multiple objects and occluded objects. The impressive prediction results further highlight the usefulness of the frequency-perception mechanism which connects RGB-aware and frequency-aware clues together to arrive at a unified solution that can adeptly address challenging scenarios.
### Ablation Studies
#### 4.3.1. **The effectiveness of each module of FPNet**
To verify the effectiveness of the proposed network, we separate FPNet into a series of ablation parts, _i.e._, frequency perception module, high-resolution preserving, and correction fusion module, where 'baseline' is the PVT backbone for camouflaged object detection. The comparison results are shown in Table 2.
**Effectiveness of Frequency Perception Module.** The proposed frequency perception module incorporates Octave convolution (Cordord et al., 2017)
\begin{table}
\begin{tabular}{l|l|c c c c|c c c c|c c c} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Year} & \multicolumn{3}{c|}{COD10K-Test (2026 images)} & \multicolumn{3}{c|}{CAMO-Test (250 images)} & \multicolumn{3}{c}{CHAMELEON (76 images)} \\ \cline{3-13} & & \(S_{\alpha}\uparrow\) & \(E_{mean}\uparrow\) & \(F_{\beta}^{ou}\uparrow\) & \(M\downarrow\) & \(S_{\alpha}\uparrow\) & \(E_{mean}\uparrow\) & \(F_{\beta}^{ou}\uparrow\) & \(M\downarrow\) & \(S_{\alpha}\uparrow\) & \(E_{mean}\uparrow\) & \(F_{\beta}^{ou}\uparrow\) & \(M\downarrow\) \\ \hline FPN (Wang et al., 2018) & 2017-CVPR & 0.697 & 0.691 & 0.411 & 0.075 & 0.684 & 0.677 & 0.483 & 0.131 & 0.794 & 0.783 & 0.590 & 0.075 \\ MaskRCNN (Wang et al., 2018) & 2017-ICCV & 0.613 & 0.748 & 0.402 & 0.080 & 0.574 & 0.715 & 0.430 & 0.151 & 0.643 & 0.778 & 0.518 & 0.099 \\ CPD (Wang et al., 2018) & 2019-CVPR & 0.747 & 0.770 & 0.508 & 0.059 & 0.726 & 0.729 & 0.550 & 0.115 & 0.853 & 0.866 & 0.706 & 0.052 \\ SINNet (Wang et al., 2018) & 2020-CVPR & 0.771 & 0.806 & 0.551 & 0.051 & 0.751 & 0.771 & 0.606 & 0.100 & 0.869 & 0.891 & 0.740 & 0.044 \\ PraNet (Wang et al., 2018) & 2020-MICCAI & 0.789 & 0.857 & 0.608 & 0.047 & 0.774 & 0.828 & 0.680 & 0.094 & 0.871 & 0.924 & 0.758 & 0.037 \\ FPNet (Wang et al., 2018) & 2021-CVPR & 0.798 & 0.874 & 0.646 & 0.040 & 0.773 & 0.829 & 0.703 & 0.086 & 0.878 & 0.921 & 0.796 & 0.034 \\ CZFNet (Wang et al., 2018) & 2021-IJCAI & 0.809 & 0.884 & 0.662 & 0.038 & 0.787 & 0.840 & 0.716 & 0.085 & 0.892 & 0.946 & 0.819 & 0.030 \\ UGTR (Wang et al., 2018) & 2021-ICCV & 0.818 & 0.850 & 0.667 & 0.035 & 0.785 & 0.859 & 0.686 & 0.086 & 0.888 & 0.918 & 0.796 & 0.031 \\ LSR (Wang et al., 2018) & 2021-CVPR & 0.767 & 0.861 & 0.611 & 0.045 & 0.712 & 0.791 & 0.583 & 0.104 & 0.846 & 0.913 & 0.767 & 0.046 \\ SINet-V2 (Wang et al., 2018) & 2022-TPAMI & 0.815 & 0.886 & 0.664 & 0.036 & 0.809 & 0.864 & 0.729 & 0.073 & 0.888 & 0.940 & 0.797 & 0.029 \\ FreNet (Wang et al., 2018) & 2022-CVPR & 0.833 & 0.907 & 0.711 & 0.033 & 0.828 & 0.884 & 0.747 & 0.069 & 0.894 & 0.950 & 0.819 & 0.030 \\ ZoomNet (Wang et al., 2018) & 2022-CVPR & 0.838 & 0.911 & 0.729 & 0.029 & 0.820 & 0.892 & 0.752 & 0.066 & 0.902 & 0.958 & 0.845 & 0.023 \\ \hline FPNet (ours) & 2023 & 0.850 & 0.913 & 0.748 & 0.029 & 0.852 & 0.905 & 0.806 & 0.056 & 0.914 & 0.961 & 0.856 & 0.022 \\ \hline \end{tabular}
\end{table}
Table 1. Comparisons of state-of-the-art methods on COD datasets. The top three results are highlighted in red, green, and blue, respectively.
Figure 5. Qualitative results of our proposed FPNet model and some state-of-the-art COD methods. The images from left to right are (a) Input image, (b) GT, (c) Ours, (d) FreNet (Wang et al., 2018), (e) SINet-V2 (Wang et al., 2018), (f) PFNet (Wang et al., 2018), (g) LSR (Wang et al., 2018), and (h) PraNet (Wang et al., 2018).
that minimizes spatial redundancy and automatically learns high-frequency and low-frequency features. As shown in Table 2, if we add the frequency perception module (_i.e._, baseline+FPM), all metrics obtain performance gains compared with the PVT baseline alone without the octave convolution. The good performance lies in the fact that FPM learns rich frequency-aware information, especially the high-frequency clues that are useful for the coarse positioning of camouflaged objects. Another advantage of the FPM is that it learns automatically online without any extra offline operations. Thus, the flexibility and the high performance of the FPM make it suitable for accurately detecting camouflaged objects in real-world scenes. And FPM can also be easily integrated into other frameworks to assist in distinguishing the obscure boundaries of objects that are similar to the background.
**Effectiveness of High-resolution Preserving Module.** Although the first frequency-guided coarse positioning stage (_i.e._, PVT+FPM) achieves good target prediction maps, the object boundaries are still unsatisfactory. Thus, we adopt the high-resolution preserving mechanism for further detail refining. As shown in Table 2, we conduct the detail-preserving fine localization stage without the correction fusion module (CFM) on top of the first coarse positioning stage, _i.e._, PVT+FPM+High-res Preserving. When we introduce the low-level, high-resolution RGB-aware features to guide the refining process, we find that the network outperforms PVT+FPM. The reason why we need a high-resolution preserving module for fine localization lies in two aspects, _i.e._, 1) the scales of camouflaged objects vary, and 2) the boundaries of camouflaged objects are usually subtle and hard to discern through the high-level semantic features. Inspired by the human visual perception system, humans usually need to zoom in on subtle details in a clear, high-resolution view to recognize camouflaged objects. If the object scale is small in the image, we need to leverage the low-level edge-aware or shape-aware information to help the network obtain a fine localization. For the obscure boundary problem, multi-scale features fused in a step-by-step manner better help separate the boundary from the complex background. Thus, we design the refining mechanism to integrate the high-resolution information and gradually fuse deep features together to solve these problems. The experimental results also show that the high-resolution preserving part provides further performance gains for detail refining. We can conclude that the detail refinement strategy is not only significant but also effective in localizing camouflaged objects.
**Effectiveness of Correction Fusion Module.** Though the high-resolution preserving mechanism for detail refining achieves good performance, the coarse camouflaged mask from the frequency-guided stage is still not exploited effectively enough. Thus, we propose a correction fusion module to further improve the quality of the camouflaged mask by fully mining the coarse map and the neighbor-layer features. Specifically, we implement the CFM in the detail-preserving fine localization stage, and the results are shown in the last row of Table 2. When we update the detail-preserving stage with the CFM mechanism, all metric scores are further improved, especially the \(S_{\alpha}\) and \(F^{\omega}_{\beta}\) scores. The good target detection ability indicates that CFM plays an essential role in improving the detection performance of camouflaged objects. The main reason is that CFM takes the prior coarse camouflaged mask and the neighbor-layer interaction into account. First, the prior coarse prediction mask provides an accurate region of the highlighted camouflaged objects, from which object-centric features can be extracted well. Then, the channel-wise correlation correlates and combines neighbor layers to enhance the object representation, which becomes more distinguishable for perceiving camouflaged objects. Since CFM learns the channel correlation between adjacent features to obtain learnable weight maps and adjust the original features, this dynamic mechanism achieves superior performance compared to the simple concatenation method (the third-row result of Table 2). The good performance reflects that progressively fusing the prior coarse mask and cross-layer interaction is beneficial for camouflaged object refining.
In summary, the frequency-guided coarse positioning stage mainly highlights the important regions of the camouflaged objects under the guidance of hierarchy frequency-aware semantic information, and the detail-preserving fine localization stage further assists in separating the camouflaged objects from the obscure boundaries of the complex background by integrating the high-resolution clue, adjacent correlation features, and the coarse prediction mask. Finally, the proposed FPNet leads us to an accurate and effective solution for detecting camouflaged objects.
#### 4.3.2. **Detail Analysis of the Frequency-aware Information**
In order to verify the effectiveness of the frequency perception mechanism, we analyze different frequency fusing types through quantitative results, as shown in Table 3. We also provide some visualization comparison results from the prediction mask and the learned frequency features, as shown in Figures 6 and 7.
The proposed frequency perception module can automatically separate the features into high-frequency and low-frequency related features. However, how to choose a suitable way to integrate
\begin{table}
\begin{tabular}{c|c|c|c|c c c c} \hline \multirow{2}{*}{Baseline} & \multicolumn{2}{c|}{1st-stage} & \multicolumn{2}{c|}{2nd-stage} & \multicolumn{4}{c}{COD10K-Test (2026 images)} \\ \cline{2-7} & FPM & HRP & CFM & \(S_{\alpha}\uparrow\) & \(E_{mean}\uparrow\) & \(F^{\omega}_{\beta}\uparrow\) & \(M\downarrow\) \\ \hline ✓ & & & & 0.835 & 0.899 & 0.701 & 0.032 \\ ✓ & ✓ & & & 0.844 & 0.908 & 0.728 & 0.031 \\ ✓ & ✓ & ✓ & & 0.849 & 0.911 & 0.739 & 0.030 \\ ✓ & ✓ & ✓ & ✓ & **0.850** & **0.913** & **0.748** & **0.029** \\ \hline \end{tabular}
\end{table}
Table 2. Quantitative results of ablation studies on the COD10k-Test dataset. First-stage and Second-stage mean the Frequency-guided Coarse Positioning and Detail-preserving Fine Localization respectively. HRP denotes High-res Preserving.
Figure 6. Comparison results of different feature sets output from frequency perception module. (a) Input image. (b) GT. (c) Prediction result with our frequency fusing mechanism. (d) Prediction with only high-frequency features. (e) Result with only low-frequency features.
the prominent frequency-aware features to help obtain satisfactory camouflaged object masks needs further discussion. To verify it, we design different comparisons, _i.e._, only using the high-frequency or the low-frequency branch for the subsequent camouflaged object mask prediction. The detailed comparisons on the COD10k-Test dataset are shown in Table 3. We can observe that the high-frequency-only method gives more help than the low-frequency-only method. All metric scores of the high-frequency-only variant outperform the low-frequency-only one to a great extent. This reflects that high-frequency information is more robust and distinguishable for recognizing camouflaged objects. It is also consistent with the human visual system: we usually employ high-frequency clues to discern the target object from the uncertain region. However, the octave convolution is an unsupervised operation, that is, no human-labeled frequency maps are used for optimization; thus, some features learned from the low-frequency branch may still be useful for camouflaged object detection. Moreover, we adopt a simple addition operation to fuse the high-frequency and low-frequency features together, and the result is shown in the last row of Table 3. The simple addition of high and low frequencies achieves the best performance over the single-frequency ones. Based on these observations, we suggest combining the high-frequency and low-frequency features by simple addition to obtain further improvements.
In Figure 6, we analyze the influence of different frequency-aware feature types. In particular, the prediction masks of only high-frequency, only low-frequency, and ours (high-frequency+low-frequency) are shown. The high-frequency method can predict the key part of the camouflaged object, the low-frequency method can obtain an intact region but with some interference background regions. Our proposed method can obtain an accurate object mask compared with the high-frequency or low-frequency ones. The comparison results indicate that the frequency features are meaningful for camouflaged object detection. And fusing the high-frequency and low-frequency will further assist the model in obtaining a relatively complete object mask.
We also visualize the learned frequency-aware features via the octave convolution to further explain the effectiveness of the proposed frequency perception mechanism, as shown in Figure 7. First, our proposed frequency perception mechanism can automatically separate the frequency features into high and low frequency groups without any frequency supervision information. Second, we can observe that the high-frequency and low-frequency groups in the learning process of octave convolution extract the edge information and the main part of the image, respectively. The low-frequency group (Figure 7(c)) focuses more on the overall composition of the image, while the high-frequency group (Figure 7(d)) portrays the edge part of the camouflaged object in the image. When combining the low-frequency and high-frequency groups (Figure 7(e)), our model can focus on the crucial regions of the camouflaged object even though it is similar to the surrounding region.
In conclusion, the proposed frequency perception network has been verified by analyzing the qualitative and quantitative comparison results that the frequency information can give more help to camouflaged object detection. And the proposed frequency perception module can be plugged and played into arbitrary frameworks.
## 5. Conclusion
In this paper, we propose a frequency-perception network (FPNet) to address the challenge of camouflaged object detection by incorporating both RGB and frequency domains. Specifically, a frequency-perception module is proposed to automatically separate frequency information leading the model to a good coarse mask at the first stage. Then, a detail-preserving fine localization module equipped with a correction fusion module is explored to refine the coarse prediction map. Comprehensive comparisons and ablation studies on three benchmark COD datasets have validated the effectiveness of the proposed FPNet. This work will benefit more sophisticated algorithms exploiting frequency clues pursuing appropriate solutions in various areas of the multimedia community. In addition, the long-tail problem also exists in COD, this motivates us to explore reasonable solutions referring to the typical methods of long-tail recognition (Song et al., 2018; Wang et al., 2019).
###### Acknowledgements.
This work was supported in part by the National Key R&D Program of China under Grant 2021ZD0112100, in part by the National Natural Science Foundation of China under Grant 62002014, Grant 62202461, Grant 62271180, Grant U1913204, Grant U1936212, Grant 62120106009, in part by the Taishan Scholar Project of Shandong Province under Grant tsqn202306079, in part by Young Elite Scientist Sponsorship Program by the China Association for Science and Technology under Grant 2020QNRC001, in part by the Project for Self-Developed Innovation Team of Jinan City under Grant 2021GXRC038, in part by CAAI-Huawei MindSpore Open Fund, and in part by China Postdoctoral Science Foundation under Grant 2022M/2364.
\begin{table}
\begin{tabular}{c c|c c c c} \hline \multirow{2}{*}{Ver.} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{COD10k-Test (2026 images)} \\ \cline{3-6} & & \(S_{\alpha}\uparrow\) & \(E_{mean}\uparrow\) & \(F_{\beta}^{\alpha}\uparrow\) & \(M\downarrow\) \\ \hline No.1 & Low-frequency & 0.765 & 0.831 & 0.438 & 0.061 \\ No.2 & High-frequency & 0.848 & 0.910 & 0.747 & 0.029 \\ No.3 & High-fre+Low-fre & 0.850 & 0.913 & 0.748 & 0.029 \\ \hline \end{tabular}
\end{table}
Table 3. Quantitative results of frequency-aware features on the COD10k-Test dataset.
Figure 7. Visualization of the learned features about the high-frequency and low-frequency groups. (a) Input image. (b) GT. (c) Low-frequency features. (d) High-frequency features. (e) Features fusing high-frequency and low-frequency after octave convolution. |
2307.11428 | Bidding efficiently in Simultaneous Ascending Auctions with budget and
eligibility constraints using Simultaneous Move Monte Carlo Tree Search | For decades, Simultaneous Ascending Auction (SAA) has been the most popular
mechanism used for spectrum auctions. It has recently been employed by many
countries for the allocation of 5G licences. Although SAA presents relatively
simple rules, it induces a complex strategic game for which the optimal bidding
strategy is unknown. Considering the fact that sometimes billions of euros are
at stake in an SAA, establishing an efficient bidding strategy is crucial. In
this work, we model the auction as a $n$-player simultaneous move game with
complete information and propose the first efficient bidding algorithm that
tackles simultaneously its four main strategic issues: the $\textit{exposure
problem}$, the $\textit{own price effect}$, $\textit{budget constraints}$ and
the $\textit{eligibility management problem}$. Our solution, called
$SMS^\alpha$, is based on Simultaneous Move Monte Carlo Tree Search (SM-MCTS)
and relies on a new method for the prediction of closing prices. By introducing
a new reward function in $SMS^\alpha$, we give the possibility to bidders to
define their own level of risk-aversion. Through extensive numerical
experiments on instances of realistic size, we show that $SMS^\alpha$ largely
outperforms state-of-the-art algorithms, notably by achieving higher expected
utility while taking less risks. | Alexandre Pacaud, Aurelien Bechler, Marceau Coupechoux | 2023-07-21T08:43:02Z | http://arxiv.org/abs/2307.11428v2 | Bidding efficiently in Simultaneous Ascending Auctions with budget and eligibility constraints using Simultaneous Move Monte Carlo Tree Search
###### Abstract
For decades, Simultaneous Ascending Auction (SAA) has been the most popular mechanism used for spectrum auctions. It has recently been employed by many countries for the allocation of 5G licences. Although SAA presents relatively simple rules, it induces a complex strategical game for which the optimal bidding strategy is unknown. Considering the fact that sometimes billions of euros are at stake in an SAA, establishing an efficient bidding strategy is crucial. In this work, we model the auction as an \(n\)-player simultaneous move game with complete information and propose the first efficient bidding algorithm that tackles simultaneously its four main strategical issues: the _exposure problem_, the _own price effect_, _budget constraints_ and the _eligibility management problem_. Our solution, called \(SMS^{\alpha}\), is based on Simultaneous Move Monte Carlo Tree Search (SM-MCTS) and relies on a new method for the prediction of closing prices. By introducing scalarised rewards in \(SMS^{\alpha}\), we give the possibility to bidders to define their own level of risk-aversion. Through extensive numerical experiments on instances of realistic size, we show that \(SMS^{\alpha}\) largely outperforms state-of-the-art algorithms, notably by achieving higher expected utility while taking fewer risks.
Simultaneous Move Monte Carlo Tree Search, Ascending Auctions, Exposure, Own price effect, Risk-aversion
## I Introduction
In order to provide high quality service and develop wireless communication networks, mobile operators need to have access to a wide range of frequencies. These frequencies are obtained in the form of licences. A licence is defined by four features: its frequency band, its geographic coverage, its period of usage and its restrictions on use. Nowadays, spectrum licences are mainly assigned through auctions. _Simultaneous Ascending Auction_ (SAA), also known as _Simultaneous Multi Round Auction_ (SMRA), has been the privileged mechanism used for spectrum auctions since its introduction in 1994 by the US Federal Communications Commission (FCC) for the allocation of wireless spectrum rights. For instance, it has been used in Portugal [1], Germany [7], Italy [12] and the UK [21] to sell 5G licences. SAA is also expected to play a central role in future spectrum allocations, e.g. for 6G licences. The popularity of SAA is mainly due to the relative simplicity of its rules and the generation of substantial revenue for the regulator. Both of its creators, Paul Milgrom and Robert Wilson, received the 2020 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel mainly for their contributions to SAA. Establishing an efficient bidding strategy for SAA is crucial for mobile operators, especially considering the large amount of money involved, e.g. Deutsche Telekom spent 2.17 billion euros in the 5G German SAA. This is the aim of this work.
SAA has a dynamic multi-round auction mechanism where bidders submit their bids simultaneously on all licences each round. It offers the freedom to adjust bids throughout the auction while taking into account the latest information about the likelihood of winning different sets of licences. Hence, a great number of bidding strategies can be applied. Unfortunately, selecting the most efficient one is a difficult task. Indeed, SAA induces an \(n\)-player simultaneous move game with incomplete information and a large state space, for which no generic exact game resolution method is known [23].
In addition to the complexities tied to its general game properties, SAA presents a number of complex strategical issues. Its four main strategical issues are the _exposure problem_, the _own price effect_, _budget constraints_ and the _eligibility management problem_. The exposure problem corresponds to the situation where a bidder pursues a set of complementary licences but ends up by paying more than its valuation for the ones it actually wins. The own price effect refers to the fact that bidding on a licence inevitably increases its price and, hence, decreases the utility of all bidders willing to acquire it. On the contrary, it is in the interest of all bidders to keep prices as low as possible. Budget constraints correspond to a fix budget that caps the maximum amount that a bidder can bid during an auction and, thus, can hugely impact an auction's outcome. The eligibility management problem is introduced by activity rules which penalise bidders that do not maintain a certain level of bidding activity. At the beginning of the auction, each bidder is given a certain level of eligibility. Each round a bidder fails to satisfy the activity rule, its eligibility is reduced. As bidders are forbidden to bid on sets of licences which exceed their eligibility, managing efficiently one's eligibility during the course of an auction is crucial to obtain a favourable outcome. In this work, we propose the first efficient bidding algorithm which tackles simultaneously the four strategical issues of SAA.
### _Related works_
Most works on SAA, such as [10, 11, 19], have focused on its mechanism design, its efficiency and the revenue it
generates for the regulator. Only a few works have addressed the bidder's point of view. These studies generally consider one of the two following formats of SAA: its original format [11] and its corresponding clock format defined hereafter. In neither of these formats has an efficient bidding strategy tackling simultaneously its four main strategical issues yet been proposed. Generally, research has focused on trying to solve one of these strategical issues in specific simplified versions of these formats. Moreover, the solutions proposed can often only be applied to small instances.
As the original format of SAA is generally too complex to draw theoretical guarantees, a simplified clock format of SAA [13] with two types of bidders (_local_ and _global_) is often considered. It presents the advantage of being a tractable model where bidders have continuous and differentiable expected utilities. Standard optimisation methods can then be applied to derive an equilibrium.
In the literature, the clock format is mainly used to analyse the exposure problem. Global bidders all have super-additive value functions. Goeree et al [13] consider the case of identical licences for which they compute the optimal dropout level of each global bidder using a Bayesian framework. They extend their work to a larger class of value functions (regional complementarities) but with only two global bidders. By modifying the initial clock format of SAA with a pause system that enables jump bidding, Zheng [31] builds a continuation equilibrium which fully eliminates the exposure problem in the case of two licences and one global bidder. Using a different pause system, Brusco and Lopomo [6] study the effect of binding public budget constraints on the structure of the unique noncollusive equilibria in the case of two licences and two global bidders. They show that such constraints can be a great source of inefficiency.
Regarding the original format of SAA, Wellman et al. [30] propose an algorithm which uses probabilistic predictions of closing prices to tackle exposure. Results seemed promising but were only obtained for a specific class of super-additive value functions.
The own price effect has also been studied in the original format of SAA. In a simple example of SAA with two licences between two bidders having the same public value function, Milgrom [19] describes a collusive equilibrium. This work was then pursued by Brusco and Lopomo [5] who build a collusive equilibrium based on signalling for SAA with two licences between two bidders having super-additive value functions. Similarly to the algorithm built to tackle exposure, Wellman et al. [30] propose an algorithm to tackle the own price effect based on the probabilistic prediction of closing prices when all licences are identical and bidders have subadditive value functions. However, the obtained results were unsatisfactory as they were significantly inferior to those of a simple demand reduction algorithm.
Regarding budget constraints and the eligibility management problem, little work has been done in the original format of SAA. However, it is commonly accepted that one should gradually reduce one's eligibility to avoid being trapped in a vulnerable position if other bidders do not behave as expected [29].
In our previous work [22], we presented a bidding strategy computed by Monte Carlo Tree Search (MCTS) that we applied to a deterministic version of the original format of SAA with complete information. In this paper, we extend our work to simultaneous moves, budget constraints, activity rules, scalarised rewards and larger instances. All four MCTS phases have been modified.
### _Contributions_
In this paper, we consider the original format of SAA with complete information for which we propose the first bidding algorithm, named \(SMS^{\alpha}\), tackling simultaneously its four main strategical issues. We make the following contributions:
* We model the auction as an \(n\)-player simultaneous move game with complete information that we name SAA-c. No specific assumption is made on the bidders' value functions.
* We present an efficient bidding strategy (\(SMS^{\alpha}\)) that tackles simultaneously the _exposure problem_, the _own price effect_, _budget constraints_ and the _eligibility management problem_ in SAA-c. \(SMS^{\alpha}\) is based on a Simultaneous Move Monte Carlo Tree Search (SM-MCTS) [27]. To the best of our knowledge, it is the first algorithm that tackles the four main strategical issues of SAA.
* We introduce a hyperparameter \(\alpha\) in \(SMS^{\alpha}\) which allows a bidder to arbitrate between expected utility and risk-aversion.
* We propose a new method based on the convergence of a specific sequence for the prediction of closing prices in SAA-c. This prediction is then used to enhance the expansion and rollout phase of \(SMS^{\alpha}\).
* Through typical examples taken from the literature and extensive numerical experiments on instances of realistic size, we show that \(SMS^{\alpha}\) outperforms state-of-the-art algorithms by achieving higher expected utility and tackling better the exposure problem and the own price effect in budget and eligibility constrained environments.
The remainder of this paper is organised as follows. In Section II, we define our model SAA-c and provide its game and strategical complexities. We then introduce our performance indicators. In Section III, we present our method for the prediction of closing prices. In Section IV, we present our algorithm \(SMS^{\alpha}\). In Section V, we show on typical examples taken from the literature the empirical convergence of our method for the prediction of closing prices and that \(SMS^{\alpha}\) tackles efficiently the four main strategical issues. Then, by comparing \(SMS^{\alpha}\) to state-of-the-art algorithms, we show through extensive numerical experiments on instances of realistic size the major increase in performance of our solution.
## II Simultaneous Ascending Auction
### _Simultaneous Ascending Auction model with complete information_
Simultaneous Ascending Auction (SAA) [11, 19, 30] is one of the most commonly used mechanism designs where \(m\) indivisible goods are sold via separate and concurrent English
auctions between \(n\) players. Bidding occurs in multiple rounds. At each round, players submit their bids simultaneously. The player having submitted the highest bid on an item \(j\) becomes its temporary winner. If several players have submitted the same highest bid on item \(j\), then the temporary winner is uniformly chosen at random amongst them. The _bid price_ of item \(j\), noted \(P_{j}\), is then set to the highest bid placed on it. The new temporary winners and bid prices are revealed to all players at the end of each round. The auction closes if no new bids have been submitted during a round. The items are then sold at their current bid price to their corresponding temporary winners.
In our model, at the beginning of the auction, the bid price of each item is set to \(0\). New bids are constrained to \(P_{j}+\varepsilon\) where \(\varepsilon\) is a fixed bid increment. This reduction of the bidding space is common in the literature on SAA [13, 22, 30]. We make the classical assumption that players will not bid on items that they are currently temporarily winning [22, 30]. Hence, in our model, a winner always pays for an item at most \(\varepsilon\) above its opponents' highest bid.
Activity rules are introduced in SAA to penalise bidders which do not maintain a certain level of bidding activity. In our model, bidders are subject to the following simplified activity rule: the number of items temporarily won plus the number of new bids (also known as _eligibility_) by a bidder can never rise [13, 20]. For instance, suppose a bidder \(i\) is temporarily winning a set of items \(Y\) and bids on a set of items \(X\) at a given round. Its eligibility is defined as \(e_{i}=|Y|+|X|\) and is revealed to all bidders at the end of the round. In the next round, if bidder \(i\) is temporarily winning a set of items \(Y^{\prime}\), it can only bid on a set of items \(X^{\prime}\) of size \(|X^{\prime}|\leq e_{i}-|Y^{\prime}|\). Its eligibility is then set to \(e^{\prime}_{i}=|X^{\prime}|+|Y^{\prime}|\leq e_{i}\). At the beginning of the auction, the eligibility of each player is set to \(m\).
We assume that the value function \(v_{i}\) and budget \(b_{i}\) of each player \(i\) are common knowledge [22, 25, 26]. In the general case, players rarely have access to such knowledge in auctions. However, this hypothesis is relevant in spectrum auctions as telecommunication companies generally have a precise estimation of their opponents' value function as well as their financial resources. We assume that none of the players is allowed to bid on a set of items that it cannot pay for given its budget.
This simplified version of SAA induces an \(n\)-player simultaneous move game with complete information that we name SAA-c.
### _Budgets, Utility and Value functions_
A player \(i\) in SAA-c is defined by its budget \(b_{i}\), its value function \(v_{i}\) and its utility function \(\sigma_{i}\). Without loss of generality, \(b_{i}\) and \(v_{i}\) are chosen independently. If the current bid price vector is \(P\), a player \(i\) temporarily winning a set of items \(Y\) with current eligibility \(e_{i}\) can bid on a set of items \(X\) if and only if
\[\begin{cases}|X|+|Y|\leq e_{i}\\ \sum_{j\in X}(P_{j}+\varepsilon)\leq b_{i}-\sum_{j\in Y}P_{j}\end{cases} \tag{1}\]
At the end of the auction, the utility obtained by player \(i\) after winning the set of items \(X\) at bid price vector \(P\) is:
\[\sigma_{i}(X,P)=v_{i}(X)-\sum_{j\in X}P_{j} \tag{2}\]
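To make constraints (1) and (2) concrete, here is a minimal Python sketch; the function names (`is_legal_bid`, `utility`) and the dict-based representation of prices and value functions are our own illustration, not part of the paper.

```python
def is_legal_bid(X, Y, P, eps, budget, eligibility):
    """Constraint (1): may we bid on the set X while temporarily winning Y,
    given bid prices P (dict item -> price), the increment eps, our budget
    and our current eligibility?"""
    if len(X) + len(Y) > eligibility:
        return False
    cost_new_bids = sum(P[j] + eps for j in X)   # new bids are placed at P_j + eps
    cost_items_won = sum(P[j] for j in Y)        # items we are temporarily winning
    return cost_new_bids <= budget - cost_items_won

def utility(X, P, v):
    """Utility (2) of winning the set X at bid prices P under value function v."""
    return v(frozenset(X)) - sum(P[j] for j in X)

# A bidder for whom the two items are perfect complements worth 20 together:
v2 = lambda S: 20 if S == frozenset({1, 2}) else 0
print(is_legal_bid({1, 2}, set(), {1: 0, 2: 0}, eps=1, budget=25, eligibility=2))  # True
print(utility({1, 2}, {1: 11, 2: 11}, v2))  # -2: an exposed outcome
```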
To respect common reinforcement learning conventions, we will sometimes denote by \(R^{\pi}\) the random variable corresponding to the utility obtained by playing policy \(\pi\).
Value functions are assumed to be normalised (\(v_{i}(\emptyset)=0\)), finite and verify the free disposal condition, i.e. for any two sets of goods \(X\) and \(Y\) such that \(X\subset Y\), then \(v(X)\leq v(Y)\)[17, 19]. Two disjoint sets \(X\) and \(Y\) of goods are said to be complements if \(v(X+Y)>v(X)+v(Y)\)[30].
### _Extensive form_
The standard representation for multi-round games is a tree representation named extensive form [18]. The game tree is a finite rooted directed tree admitting two types of nodes: _decision nodes_ and _chance nodes_. At each decision node, a player has the choice between many actions represented each by a directed edge. A chance node has a fixed probabilistic distribution assigned over its outgoing edges. An information set is a set of decision nodes which are indistinguishable for the concerned player at the current position of the game [9]. This means that a player, given its current information, does not know exactly at which decision node it is playing. It only knows that it is playing at one of the decision nodes of the corresponding information set. Games where information sets are not all singletons are known as imperfect information games [24].
We represent the SAA-c game in this form with the decision nodes representing the different states of the game and the chance nodes representing the random draws of temporary winners in case of ties. At each decision node, an outgoing edge represents a set of items on which the concerned player bids if it selects this edge. Each decision node or state is defined by five features: the concerned player, the eligibility vector revealed at the end of the last round, the temporary winner of each item, the current bid price vector and the bids already submitted during the current round. The four first features are common knowledge and the last feature is hidden information for the concerned player. Therefore, all decision nodes which differ only by the last feature belong to the same information set. In Figure 1, we represent an SAA-c game between three players with their information sets and chance nodes.
### _Game and strategical complexities_
#### Iii-D1 Game complexities
To highlight the complexity of the SAA-c game, we focus on two metrics: _information set space complexity_ and _game tree complexity_[28]. We define the first as the number of different information sets which can be legally reached in the game. It acts as a lower bound of the _state space complexity_[28]. The second corresponds to the number of different paths in its extensive form. We compute both complexities for a given number of rounds \(R\), unlimited budgets and without any activity rule.
**Theorem 2.1**.: _Let \(\Gamma\) be an instance of the SAA-c game with no activity rule. Let \(n\), \(m\) and \(R\) be respectively the number of players, the number of items and the number of rounds in \(\Gamma\). Suppose that all players have unlimited budgets. The number of possible information sets in \(\Gamma\) is:_
\[n(Rn+1)^{m} \tag{3}\]
Proof.: Each information set is defined by three components: the player to bid, the temporary winner and bid price of each item. If no player has bid on an item, then it remains unsold and is handed back to the auctioneer. Otherwise, its bid price is included in \(\{\varepsilon,2\varepsilon,...,R\varepsilon\}\) and the item is allocated to one of the \(n\) players. Therefore, the number of different allocations and bid prices of an item in \(\Gamma\) is \(Rn+1\). Under the unlimited budget assumption, all items are mutually independent. Thus, the number of different allocations and bid prices for all items is \((Rn+1)^{m}\). As there are \(n\) different players who can bid, the number of possible information sets is:
\[n(Rn+1)^{m}\]
**Theorem 2.2**.: _Let \(\Gamma\) be an instance of the SAA-c game with no activity rule. Let \(n\), \(m\) and \(R\) be respectively the number of players, the number of items, and the number of rounds in \(\Gamma\). Suppose that all players have unlimited budgets. A lower bound of the game tree complexity of \(\Gamma\) is:_
\[\Omega(2^{m(n-1)R}) \tag{4}\]
Proof.: We consider \(\Gamma\) with a deterministic tie-breaking rule. This eliminates chance nodes and reduces the number of paths in the game's extensive form. Let's first compute a lower bound of the number of different branches created in \(\Gamma\) during a given round.
Suppose player \(i\) is the temporary winner of \(m_{i}\) items. Thus, during this given round, player \(i\) can bid \(2^{m-m_{i}}\) different ways as it can either bid or not bid on each of the remaining \(m-m_{i}\) items. Hence, during this round, there are \(2^{nm-\sum_{i=1}^{n}m_{i}}\) different bidding scenarios. Thus, this given round creates \(2^{nm-\sum_{i=1}^{n}m_{i}}-1\) new branches all leading to non-terminal nodes of \(\Gamma\). Moreover, as \(\sum_{i=1}^{n}m_{i}\leq m\), the number of different branches created during any round is lower bounded by \(2^{m(n-1)}-1\).
A lower bound of the game tree complexity of \(\Gamma\) can then easily be calculated by induction. Indeed, every non-terminal node of \(\Gamma\) starting a bidding round induces at least \(2^{m(n-1)}-1\) new branches during this round. Therefore, the game tree complexity of \(\Gamma\) is lower bounded by:
\[\sum_{l=0}^{R}(2^{m(n-1)}-1)^{l}=\frac{(2^{m(n-1)}-1)^{R+1}-1}{2^{m(n-1)}-2} \tag{5}\]
Thus, a lower bound of the game tree complexity of \(\Gamma\) is \(\Omega(2^{m(n-1)R})\).
_Example._ An SAA for 12 spectrum licences (5G) between 5 telecommunication companies was held in Italy in 2018 and ended after 171 rounds [12]. The number of possible information sets as well as a lower bound of the game tree complexity of the corresponding SAA-c game with no activity rule are respectively \(10^{35}\) and \(10^{2470}\).
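As a quick numerical sanity check of Theorems 2.1 and 2.2 on this example, the following lines evaluate both expressions (the script and its constants are ours; the text above quotes orders of magnitude):

```python
from math import log10

n, m, R = 5, 12, 171                      # players, items, rounds (Italian 5G SAA, 2018)

info_sets = n * (R * n + 1) ** m          # Theorem 2.1, exact integer
tree_log10 = m * (n - 1) * R * log10(2)   # log10 of the 2^{m(n-1)R} lower bound

print(f"information sets  ~ 10^{log10(info_sets):.1f}")   # ~ 10^35.9
print(f"game tree (lower) ~ 10^{tree_log10:.1f}")          # ~ 10^2470.9
```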
Adding activity rules decreases the game tree complexity as a bidder can no longer bid on a set of items which exceeds its eligibility. However, it increases the information set space complexity as a new feature (eligibility) is added to every information set.
#### Ii-B2 Strategical complexities
SAA-c game also admits a number of strategical issues. The four main ones are presented below.
* **Exposure:** It is a phenomenon which happens when a player tries to acquire a set of complementary items but ends up paying too much for the subset it actually wins at the end of the auction. Hence, the player obtains a negative utility. For instance, Table I presents a well-studied example, see e.g. [30], in a 2-item SAA-c game with a bid increment of \(1\) between two players with unlimited budgets (referred to as Example 1). Player 1 considers both items as perfect substitutes, i.e. it values both items equally and desires to acquire only one of the two, while player 2 considers them as perfect complements, i.e. each item is worthless without the other and desires to acquire both of them. If player 1 is temporarily winning no items and the bid price of the cheapest item is lower than \(11\), it should bid on it. Otherwise, it should pass. Hence, if player 2 decides to bid on both items, it will end up exposed as it will not be able to obtain both items for a total price below \(22\). Moreover, if, after a few rounds, player 2 decides to give up an item, it will still end up paying for the other item and, hence, incur a loss.

Fig. 1: Extensive form of a three-player SAA-c game with information sets and chance nodes
* **Own price effect:** Competing on an item inevitably raises its bid price and, hence, decreases the utility of every player wishing to acquire it. Thus, all players have a strong interest in maintaining the bid price of all items as low as possible. To avoid this rise, a player can concede items to its opponents hoping that they will not bid on the items it is temporarily winning in exchange. This strategy is known as _demand reduction_[3, 29]. Dividing items between players to avoid this issue is called _collusion_[5]. No communication is allowed between players. In SAA-c, players should be able to use the common knowledge of valuations and budgets to agree on the same fair split of items and thus tackle this issue without any communication.
* **Budget constraints:** Capping the maximum amount a bidder can spend during an auction can highly impact the auction's outcome. Indeed, it can prevent players from bidding on certain sets of items and be a source of exposure. Moreover, given this information, players can drastically change their bidding strategy. For instance, in the auction presented in Table I, if player 1 and 2 have respectively a fixed budget of \(8\) and \(20\), player 2 should bid on both items as this situation no longer presents any risk of exposure.
* **Eligibility management:** Managing efficiently its own eligibility is essential to ensure a favourable outcome. Bidding on a high number of items to maintain high eligibility induces the own price effect. However, reducing its eligibility to form collusions can trap a bidder in a vulnerable position if the other bidders do not behave as expected. Hence, a tradeoff must be found.
### _Performance indicators_
The natural metric used to measure the performance of a strategy is the _expected utility_. However, given the fact that a specific instance of a spectrum auction (i.e. same frequency bands, same operators, etc.) is generally only held once and an operator participates in only a few different instances, comparing strategies only on the basis of their _expected utility_ is not sufficient. Indeed, given the huge amount of money involved, potential losses due to exposure should also be taken into account. To measure this risk, we decompose the expected utility as follows:
\[\mathbb{E}(R^{\pi})=\mathbb{P}(R^{\pi}\geq 0)\,\mathbb{E}(R^{\pi}|R^{\pi}\geq 0)+\underbrace{\mathbb{P}(R^{\pi}{<}0)\,\mathbb{E}(R^{\pi}|R^{\pi}{<}0)}_{\text{Exposure}} \tag{6}\]
where \(\pi\) is a policy and \(R^{\pi}\) is a random reward obtained by playing \(\pi\) in an SAA-c game. We introduce the term \(-\mathbb{P}(R^{\pi}{<}0)\,\mathbb{E}(R^{\pi}|R^{\pi}{<}0)\) as a metric of potential exposure which should be minimised. We name it _expected exposure_ and estimate it by taking the opposite of all losses incurred by a strategy divided by the number of plays. Moreover, we define the _exposure frequency_ as \(\mathbb{P}(R^{\pi}{<}0)\). This is estimated by the number of times a strategy incurs a loss divided by the number of plays. To analyse the own price effect, we consider the _average price paid per item won_. However, by only acquiring undesired items at a reasonably low price, a bidder can obtain a low average price paid per item won. Hence, we complement this metric with the _ratio of items won_.
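In code, these estimators are straightforward averages over the realised utilities of repeated plays; a minimal sketch (function name ours):

```python
def performance_indicators(rewards):
    """Estimate expected utility, expected exposure and exposure frequency
    from a list of utilities R^pi realised over repeated SAA-c plays."""
    n = len(rewards)
    expected_utility = sum(rewards) / n
    losses = [r for r in rewards if r < 0]
    expected_exposure = -sum(losses) / n      # estimates -P(R<0) E[R | R<0]
    exposure_frequency = len(losses) / n      # estimates P(R<0)
    return expected_utility, expected_exposure, exposure_frequency

print(performance_indicators([10, 4, -6, 0, 12]))   # (4.0, 1.2, 0.2)
```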
## III Predicting closing prices
\(SMS^{\alpha}\) is based on a SM-MCTS whose expansion and rollout phases rely on the following bidding strategy and prediction of closing prices, i.e., an estimation of the price of each item at the end of the auction.
### _Constrained point-price prediction bidding_
We start by extending the definition of point-price prediction bidding (_PP_) [30] to budget and eligibility constrained environments.
**Definition III.1**.: In a SAA-c game with \(m\) objects and a current bid price vector \(P\), a point-price prediction bidder with budget \(b\), a current eligibility \(e\), an initial prediction of closing prices \(P^{init}\) and a set of temporarily won items \(Y\) computes the subset of goods
\[X^{*}=\underset{\begin{subarray}{c}X\subseteq\{1,\ldots,m\}\setminus Y\\ \sum_{j\in X\cup Y}\rho_{j}(P^{init},P,Y)\leq b\\ |X|+|Y|\leq e\end{subarray}}{\arg\max}\;\sigma(X\cup Y,\rho(P^{init},P,Y)) \tag{7}\]
breaking ties in favour of smaller subsets and lower-numbered goods. It then bids \(P_{j}+\varepsilon\) on all items \(j\) belonging to \(X^{*}\). The function \(\rho:(P^{init},P,Y)\rightarrow{\mathbb{R}_{+}}^{m}\) maps an initial prediction of closing prices, a current bid price vector and a set of items temporarily won to an estimation of closing prices. For any item \(j\), it follows the below update rule:
\[\rho_{j}(P^{init},P,Y)=\left\{\begin{array}{ll}\max(P^{init}_{j},P_{j})& \text{if }j\in Y\\ \max(P^{init}_{j},P_{j}+\varepsilon)&\text{otherwise}\end{array}\right. \tag{8}\]
A point-price prediction bidder only considers sets of items _within budget_\(b\) given its prediction of closing prices \(\rho(P^{init},P,Y)\), i.e., only sets of items \(X\) such that \(\sum_{j\in X\cup Y}\rho_{j}(P^{init},P,Y)\leq b\). Moreover, it can only bid on sets of items which does not exceed its eligibility \(e\).
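Definition III.1 can be implemented by brute-force enumeration of the feasible subsets (exponential in \(m\), but fine for small instances). The sketch below is our own illustration: prices and predictions are plain dicts, value functions map frozensets to reals, and ties are broken towards smaller subsets and lower-numbered goods as in the definition.

```python
from itertools import combinations

def rho(p_init, P, Y, eps):
    """Estimated closing prices of Eq. (8)."""
    return {j: max(p_init[j], P[j]) if j in Y else max(p_init[j], P[j] + eps)
            for j in P}

def pp_bid(items, v, p_init, P, Y, eps, budget, eligibility):
    """Constrained point-price prediction bid X* of Eq. (7).
    Returns the set of items to bid on (an empty set means passing)."""
    est = rho(p_init, P, Y, eps)
    free = sorted(set(items) - set(Y))
    best, best_key = set(), None
    for k in range(len(free) + 1):
        for X in combinations(free, k):
            X = set(X)
            if len(X) + len(Y) > eligibility:
                continue
            if sum(est[j] for j in X | set(Y)) > budget:
                continue
            surplus = v(frozenset(X | set(Y))) - sum(est[j] for j in X | set(Y))
            key = (-surplus, len(X), tuple(sorted(X)))    # tie-breaking rule
            if best_key is None or key < best_key:
                best, best_key = X, key
    return best

# A bidder who sees the two items as perfect substitutes worth 12 each and uses
# a null prediction behaves like a straightforward bidder: it bids on item 1.
v1 = lambda S: 12 if S else 0
print(pp_bid([1, 2], v1, {1: 0, 2: 0}, {1: 0, 2: 0}, Y=set(), eps=1,
             budget=100, eligibility=2))   # {1}
```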
If closing prices are correctly estimated and independent of the bidding strategy, then playing _PP_ is optimal for a player. However, in practice, closing prices are usually tightly related to a player's bidding strategy. Playing _PP_ with a null prediction of closing prices (\(P^{init}=0\)) is known as straightforward bidding (SB) [19]. The efficiency of the bidding strategy _PP_ highly depends on the accuracy of the initial prediction of closing prices \(P^{init}\). For instance, if \(P^{init}\) largely underestimates the actual closing price of each item, then, once the current bid price satisfies \(P\geq P^{init}\) component-wise, playing _PP_ with initial prediction \(P^{init}\) gives the same strategy as SB. However, if \(P^{init}\) greatly overestimates the actual closing price of each item, then the bidder might stop bidding prematurely in order to avoid exposure.

TABLE I: Example of exposure (\(\varepsilon=1\))

| | \(v(\{1\})\) | \(v(\{2\})\) | \(v(\{1,2\})\) |
|---|---|---|---|
| Player 1 | 12 | 12 | 12 |
| Player 2 | 0 | 0 | 20 |
### _Computing an initial prediction of closing prices_
Several methods exist in the literature for computing an initial prediction of closing prices \(P^{init}\) in budget constrained environments. However, they all seem to present some limitations in SAA-c. For instance, the well-known Walrasian price equilibrium [2] does not always exist when preferences exhibit complementarities, as is the case in Example 1. Standard tatonnement processes, such as the one used to compute the _expected price equilibrium_[30], return the same price vector regardless of the auction's specificities (e.g., bid increment \(\varepsilon\)). The final prediction is then completely independent of the auction mechanism of SAA-c, which is problematic. Computing an initial prediction by using only the outcomes of a single strategy profile is relevant only if bidders actually play according to this strategy profile. For instance, simulating SAA-c games where all bidders play SB and using the average closing prices as initial prediction is relevant only if the actual bidders play SB. We propose hereafter a prediction method based on the convergence of a specific sequence which aims at tackling all of these issues.
**Conjecture 3.1**.: _Let \(\Gamma\) be an instance of an SAA-c game. Let \(f_{\Gamma}(P)\) be a random variable returning the closing prices of \(\Gamma\) when all bidders play PP with initial prediction \(P\). The sequence \(p_{t+1}=\frac{1}{t+1}\operatorname{\mathbb{E}}[f_{\Gamma}(p_{t})]+(1-\frac{1} {t+1})p_{t}\) with \(p_{0}\) the null vector of prices converges to a unique element \(p^{*}\)._
The fact that \(f_{\Gamma}\) is a random variable comes from the tie-breaking rule which introduces stochasticity in \(\Gamma\). By taking its expectation \(\operatorname{\mathbb{E}}[f_{\Gamma}(p_{t})]\) at each iteration \(t\), we ensure our deterministic sequence \(p_{t}\) to always converge to the same fixed point \(p^{*}\). Hence, all players using our method share the same prediction of closing prices \(p^{*}\). In practice, we perform a Monte-Carlo estimation of \(\operatorname{\mathbb{E}}[f_{\Gamma}(p_{t})]\) by simulating many SAA-c games. In small instances, it is possible to obtain a closed-form expression of \(\operatorname{\mathbb{E}}[f_{\Gamma}(p_{t})]\) and, from that, prove the convergence of sequence \(p_{t}\).
_Example_.: Suppose that both players play _PP_ with \(P^{init}=p_{0}\) in Example 1. During the first round, player 1 bids on item 1 and player 2 bids on both items. There is \(50\%\) chance that player 1 temporarily wins item 1 and \(50\%\) chance that player 2 temporarily wins item 1. If player 1 wins item 1 during the first round, player 2 bids on item 1 during the second round while player 1 passes. In the third round, player 1 bids on item 2 while player 2 passes. In the fourth round, player 2 bids on item 2 while player 1 passes. Hence, the bid price of item 1 (respectively item 2) is odd (respectively even) if temporarily won by player 1. When the bid price \(P=(12,11)\) and both items are temporarily won by player 2, player 1 drops out of the auction as, by definition of _PP_, it prefers smaller subsets of items for the same predicted utility. If player 2 wins item 1 during the first round, the bid price of item 1 (respectively item 2) is even (respectively even) if temporarily won by player 1. The closing prices are then \(P=(11,11)\). Therefore, \(f_{\Gamma}(p_{0})\) has 50% chance of returning \((12,11)\) and 50% chance of returning \((11,11)\). Hence, \(\operatorname{\mathbb{E}}[f_{\Gamma}(p_{0})]=(11.5,11)\). By performing a similar analysis, we can show that \(\forall p\in\mathbb{R}_{+}^{2},\operatorname{\mathbb{E}}[f_{\Gamma}(p)]\in[0,11.5]^{2}\) and obtain the following closed-form expression for any \(p\in[0,11.5]^{2}\):
\[\operatorname{\mathbb{E}}[f_{\Gamma}(p)]=\left\{\begin{array}{ll}(1,0)&\text {if }p_{1}+p_{2}\geq 20\text{ and }p_{1}\leq p_{2}\\ (0,1)&\text{if }p_{1}+p_{2}\geq 20\text{ and }p_{1}>p_{2}\\ (11.5,11)&\text{if }p_{1}+p_{2}<20\text{ and }p_{1}\leq p_{2}\\ (11,11.5)&\text{if }p_{1}+p_{2}<20\text{ and }p_{1}>p_{2}\end{array}\right. \tag{9}\]
From there, it is easy to show that sequence \(p_{t}\) converges to \(p^{*}=(10,10)\) in Example 1.
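Using the closed form (9), the deterministic recursion of Conjecture 3.1 can be iterated directly. The lines below sketch that iteration for Example 1 (the Monte-Carlo estimation of \(\mathbb{E}[f_{\Gamma}(p_{t})]\) used in larger instances is replaced here by the closed form); after enough iterations the prices settle near \(p^{*}=(10,10)\).

```python
def expected_f(p):
    """Closed-form E[f_Gamma(p)] of Eq. (9) for Example 1, valid on [0, 11.5]^2."""
    p1, p2 = p
    if p1 + p2 >= 20:
        return (1.0, 0.0) if p1 <= p2 else (0.0, 1.0)
    return (11.5, 11.0) if p1 <= p2 else (11.0, 11.5)

p = (0.0, 0.0)                        # p_0
for t in range(20000):
    f = expected_f(p)
    w = 1.0 / (t + 1)                 # p_{t+1} = w * E[f(p_t)] + (1 - w) * p_t
    p = tuple(w * fj + (1 - w) * pj for fj, pj in zip(f, p))
print(tuple(round(x, 2) for x in p))  # approximately (10.0, 10.0)
```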
The general proof of the conjecture is left for future work.
Computing an initial prediction of closing prices as above has mainly three advantages compared to other methods in the literature. (1) We observe that this sequence converges in all undertaken SAA-c game instances. (2) This method takes into account the auction's mechanism through \(f_{\Gamma}\). (3) This prediction of closing price is not based only on the outcomes of a single specific strategy profile. Indeed, depending on the value of \(p_{t}\), different strategy profiles are used across iterations. At a fixed iteration \(t\), a single strategy profile is used to compute \(\operatorname{\mathbb{E}}[f_{\Gamma}(p_{t})]\) as the strategy returned by _PP_ only depends on its initial prediction \(P^{init}=p_{t}\).
## IV SM-MCTS bidding strategy
### _Brief presentation of MCTS_
Given the large state space and game tree complexities, it is practically impossible to explore the SAA-c game tree exhaustively as soon as we depart from very small instances. Thus, only a small portion of the game tree, called the search tree, can be explored. MCTS is a search technique that builds iteratively a search tree using simulations through a process named search iteration (see Figure 2). Each search iteration is divided into four steps. (1) The _selection phase_ selects a path from the root to a leaf node of the search tree. (2) The _expansion phase_ chooses one or more children to be added to the search tree from the selected leaf node according to the available actions. (3) The _simulation phase_ simulates the outcome of the game from the newly added node. (4) The _backpropagation phase_ propagates backwards the outcome of the game from the newly added node to the root in order to update the diverse statistics stored in each selected node of the search tree. This process is repeated until some predefined computational budget (time, memory, iteration constraint) is reached. Before running \(SMS^{\alpha}\), we compute our initial prediction of closing prices \(p^{*}\) as presented in Section III-B.
### _Scalarised rewards_
Maximising the expected utility while minimising the risk of exposure can be antithetical. Indeed, taking risks can either be highly beneficial or lead to exposure depending on how the other players react. To trade off these two objectives, we introduce a new scalarised reward incorporating both of them. For any strategy \(\pi\), we define:
\[R^{\pi}_{\alpha}=(1+\alpha\mathbb{1}_{R^{\pi}<0})R^{\pi} \tag{10}\]
where \(\alpha\) is a hyperparameter which controls the risk aversion of \(SMS^{\alpha}\). Note that
\[\mathbb{E}(R^{\pi}_{\alpha})=\mathbb{E}(R^{\pi})+\alpha\,\mathbb{P}(R^{\pi}{<}0)\,\mathbb{E}(R^{\pi}|R^{\pi}{<}0) \tag{11}\]
where \(\mathbb{P}(R^{\pi}{<}0)\,\mathbb{E}(R^{\pi}|R^{\pi}{<}0)\) is the term corresponding to the losses induced by exposure in Equation 6. Moreover, we define, for any price vector \(P\) and any set of items \(X\), \(\sigma^{\alpha}(X,P)=(1+\alpha\mathbb{1}_{\sigma(X,P)<0})\sigma(X,P)\), which is a modified utility taking into account both of our objectives.
The use of a linear scalarization function is a classical approach in multi-objective optimisation, multi-objective reinforcement learning [4], constrained MDP [15] or POMDP [16].
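As a small illustration (the function name is ours), the scalarised reward of Eq. (10) simply amplifies losses by a factor \(1+\alpha\):

```python
def scalarised(reward, alpha):
    """Scalarised reward of Eq. (10): negative utilities (exposure) are amplified."""
    return (1 + alpha) * reward if reward < 0 else reward

print([scalarised(r, alpha=7) for r in (5, 0, -2)])   # [5, 0, -16]
```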
### _Search tree structure_
In order to maintain the simultaneous nature of SAA-c in the selection phase of \(SMS^{\alpha}\), we use a Simultaneous Move MCTS (SM-MCTS) [27] (Figure 3). At each selection step, we select an \(n\)-tuple where each index \(i\) corresponds to the action maximising the selection index of player \(i\) given only its information set. By doing so, bids are selected simultaneously and independently. Each selection step corresponds to a complete bidding round of SAA-c. Hence, the depth of our search tree corresponds to how many rounds ahead \(SMS^{\alpha}\) can foresee. The search tree nodes are defined by the eligibility of each bidder, the temporary winner and current bid price of each item. The edges correspond to players' joint actions. Chance nodes are explicitly included in the search tree to break ties. The main advantage of SM-MCTS compared to applying MCTS on a serialised game tree, i.e. turning SAA-c into a purely sequential game, is that, to complete a bidding round of SAA-c, the number of selection steps is reduced from \(n\) to \(1\). Hence, by using SM-MCTS, the number of players \(n\) is no longer a burden for planning a bidding strategy over many rounds. Moreover, statistics are stored per information set and no longer per state, which reduces memory consumption.
### _Selection_
At each selection step, players are asked to bid on the set of items which maximises their selection index. The selection phase ends when a terminal state of the SAA-c game or a non-expanded node, i.e. a configuration of temporary winners, bid prices and eligibilities not yet added to the search tree, is reached. Our selection index is a direct application of the _Upper Confidence bound applied to Trees_ (UCT) [14] to scalarised rewards. Unlike usual applications of UCT, the size of the scalarised reward support is unknown, so we proceed to an online estimation of it. Each player \(i\) chooses to bid on the set of items \(x_{i}\) with highest score \(q_{x_{i}}\) at information set \(I_{i}\):
\[q_{x_{i}}=\frac{r^{\alpha}_{x_{i}}}{n_{x_{i}}}+\max(c^{\alpha}_{x_{i}}-a^{ \alpha}_{x_{i}},\varepsilon)\sqrt{\frac{2\log(\sum_{x^{\prime}_{i}}n_{x^{ \prime}_{i}})}{n_{x_{i}}}} \tag{12}\]
where \(r^{\alpha}_{x_{i}}\) is the sum of scalarised rewards obtained after bidding on \(x_{i}\) at \(I_{i}\), \(n_{x_{i}}\) is the number of times player \(i\) has bid on \(x_{i}\) at \(I_{i}\), \(\varepsilon\) is the bid increment, \(a^{\alpha}_{x_{i}}\) is the estimated lower bound and \(c^{\alpha}_{x_{i}}\) is the estimated upper bound of the scalarised reward support when bidding on \(x_{i}\) at \(I_{i}\). Thus, \(\max(c^{\alpha}_{x_{i}}-a^{\alpha}_{x_{i}},\varepsilon)\) acts as the size of the scalarised reward support when bidding on \(x_{i}\) at \(I_{i}\).
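A sketch of the selection index (12), with per-action statistics kept in a plain dict; this data layout and the handling of unvisited actions are our own choices, not the paper's.

```python
from math import log, sqrt

def uct_score(stats, x, eps):
    """Selection index of Eq. (12) for action x at one information set.
    stats[x] holds r (sum of scalarised rewards), n (visits), a (min), c (max)."""
    s = stats[x]
    if s["n"] == 0:
        return float("inf")                       # try unvisited actions first
    total = sum(v["n"] for v in stats.values())
    width = max(s["c"] - s["a"], eps)             # online reward-support estimate
    return s["r"] / s["n"] + width * sqrt(2 * log(total) / s["n"])

stats = {"pass":    {"r": 12.0, "n": 4, "a": 0.0, "c": 6.0},
         "bid {1}": {"r": 5.0,  "n": 1, "a": 5.0, "c": 5.0}}
print(max(stats, key=lambda x: uct_score(stats, x, eps=1)))   # 'pass' wins here
```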
### _Expansion_
Fig. 2: MCTS scheme

The high branching factor due to the exponential growth of the game tree's width with the number of items \(m\) prevents in-depth inspection of promising branches. Thus, it is necessary to reduce the action space at each information set of the search tree [24]. To do so, each time a non-expanded node is added to the search tree, we select a maximum number \(N_{act}\) of promising actions per information set. Passing its turn without bidding on any item is always included in the \(N_{act}\) selected actions. This enables \(SMS^{\alpha}\) to obtain shallow terminal nodes in its search tree which correspond to collusions between bidders and, thus, reduces the own price effect. The remaining \(N_{act}-1\) actions correspond to the moves leading to the \(N_{act}-1\) highest predicted utilities in strategy _PP_ with initial prediction \(p^{*}\). More formally, for each player \(i\) at information set \(I_{i}\) temporarily winning the set of items \(Y_{i}\) with eligibility \(e_{i}\), the action of bidding on a set of items \(X_{i}\) is selected if \(\sigma_{i}^{\alpha}(Y_{i}\cup X_{i},\rho(p^{*},P,Y_{i}))\) is one of the \(N_{act}-1\) highest values, with \(P\) the current bid price. Only sets of items \(X_{i}\) verifying \(\sum_{j\in X_{i}\cup Y_{i}}\rho_{j}(p^{*},P,Y_{i})\leq b_{i}\) and \(|X_{i}|+|Y_{i}|\leq e_{i}\) are considered. Statistics for each action are then initialised as follows (a sketch of this expansion step is given after the list):
* \(r_{x_{i}}^{\alpha}\gets 0\)
* \(n_{x_{i}}\gets 0\)
* \(a_{x_{i}}^{\alpha}\leftarrow+\infty\)
* \(c_{x_{i}}^{\alpha}\leftarrow-\infty\)
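The sketch below is our own simplified form of this step: the candidate bids, the estimated prices \(\rho(p^{*},P,Y)\) and the value function are passed in explicitly, and \(\sigma^{\alpha}\) is used to rank the candidates.

```python
def expand(candidate_bids, Y, est, v, budget, eligibility, alpha, n_act):
    """Keep at most n_act actions at a newly added information set.
    candidate_bids : iterable of frozensets of items the player may bid on
    Y              : items currently temporarily won
    est            : estimated closing prices rho(p*, P, Y), a dict item -> price
    Passing (the empty bid) is always kept; the rest are ranked by sigma^alpha."""
    scored = []
    for X in candidate_bids:
        if not X:
            continue                                  # 'pass' is added unconditionally
        if len(X) + len(Y) > eligibility:
            continue
        if sum(est[j] for j in X | set(Y)) > budget:
            continue
        u = v(frozenset(X | set(Y))) - sum(est[j] for j in X | set(Y))
        u = (1 + alpha) * u if u < 0 else u           # modified utility sigma^alpha
        scored.append((u, X))
    scored.sort(key=lambda t: -t[0])
    kept = [frozenset()] + [X for _, X in scored[: n_act - 1]]
    return {X: {"r": 0.0, "n": 0, "a": float("inf"), "c": float("-inf")} for X in kept}

v1 = lambda S: 12 if S else 0
print(expand([frozenset({1}), frozenset({2}), frozenset({1, 2})], set(),
             {1: 1, 2: 1}, v1, budget=20, eligibility=2, alpha=7, n_act=2))
```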
### _Rollout_
From the newly added node, an SAA-c game is simulated until the game ends. Players are asked to bid at each round of the rollout. The default strategy is usually to bid on a random set of items. However, it leads to absurd outcomes in this case, with very high prices, as players rarely all pass. Therefore, we propose an alternative approach. At the beginning of each rollout phase, we set \(p_{i}^{*}=p^{*}+\eta_{i}\) with \(\eta_{i}\sim U([-\varepsilon,\varepsilon]^{m})\). Each player \(i\) then plays _PP_ with initial prediction of closing prices \(p_{i}^{*}\) during the entire rollout. Noise is added to our initial prediction \(p^{*}\) to diversify players' bidding strategies and, hence, improve the quality of our sampling. At the end of the rollout, an \(n\)-tuple is returned corresponding to the scalarised utility obtained by each player.
### _Backpropagation_
The results obtained during the rollout phase are propagated backwards to update the statistics of the selected nodes. Let \(V_{i}^{\alpha}\) be the scalarised utility obtained by player \(i\) at the end of the rollout. Let \(x_{i}\) be the set of items on which player \(i\) bid at information state \(I_{i}\) for one of the selected nodes. The statistics stored for \(I_{i}\) are updated as follows (sketched after the list):
* \(r_{x_{i}}^{\alpha}\gets r_{x_{i}}^{\alpha}+V_{i}^{\alpha}\)
* \(n_{x_{i}}\gets n_{x_{i}}+1\)
* \(a_{x_{i}}^{\alpha}\leftarrow\min(a_{x_{i}}^{\alpha},V_{i}^{\alpha})\)
* \(c_{x_{i}}^{\alpha}\leftarrow\max(c_{x_{i}}^{\alpha},V_{i}^{\alpha})\)
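A corresponding update, using the same dict layout as the earlier sketches (this convention is ours):

```python
def backpropagate(stats, x, value):
    """Update the statistics of action x with the scalarised rollout utility `value`."""
    s = stats[x]
    s["r"] += value
    s["n"] += 1
    s["a"] = min(s["a"], value)
    s["c"] = max(s["c"], value)

stats = {frozenset({1}): {"r": 0.0, "n": 0, "a": float("inf"), "c": float("-inf")}}
backpropagate(stats, frozenset({1}), 8.5)
print(stats[frozenset({1})])   # {'r': 8.5, 'n': 1, 'a': 8.5, 'c': 8.5}
```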
### _Transposition table_
Transposition tables are a common search enhancement used to considerably reduce the size of the search tree and improve performance of MCTS within the same computational budget [8]. By using such tables, we prevent the expansion of redundant nodes in our search tree and share the same statistics between transposed information states. This results in a significant improvement in performance of \(SMS^{\alpha}\) for a same amount of thinking time.
To identify each information set in the search tree, our hash function is based on two functions \(h_{1}\) and \(h_{2}\). The first returns a different integer for each combination of bid prices and allocations. The second returns a different integer for each eligibility vector. Hence, our hash function assigns a unique value to each information set in the search tree. More precisely, due to computational constraints, we can only assign a unique value to every node in the search tree with a depth lower than \(R_{max}\). \(R_{max}\) is a hyperparameter corresponding to an upper bound of the maximal depth (or number of rounds) in the final search tree. An example of function \(h_{1}\) assigning a different integer to each combination of bid prices and allocations in a search tree of maximal depth \(R_{max}\) is given in Algorithm 1. It uses as inputs the bid price vector \(P^{0}\) at the root of the search tree, the bid price vector \(P\) and the temporary winner \(A_{j}\) of each item \(j\) at a given node. If \(A_{j}=0\), then item \(j\) is temporarily allocated to the auctioneer.
In practice, given the thinking time constraints in our experimental results, choosing \(R_{max}=10\) is more than sufficient to guarantee a final search tree with maximal depth lower than \(R_{max}\). Hence, our hash function acts as a perfect hash function as no type-1 error or type-2 error occurs [32].
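Algorithm 1 itself is not reproduced here. The sketch below is one plausible mixed-radix encoding consistent with the description (each item contributes a digit combining its temporary winner and the number of increments since the root); the exact algorithm used in the paper may differ.

```python
def h1(P0, P, A, eps, n_players, r_max):
    """Integer identifying a (bid prices, allocations) pair, assuming every bid
    price rose by at most r_max increments of eps since the root of the tree.
    P0, P : dicts item -> bid price at the root / at the current node
    A     : dict item -> temporary winner (0 means the auctioneer)"""
    base = (n_players + 1) * (r_max + 1)
    h = 0
    for j in sorted(P):
        steps = round((P[j] - P0[j]) / eps)          # increments on item j so far
        digit = A[j] * (r_max + 1) + steps
        h = h * base + digit
    return h

print(h1({1: 0, 2: 0}, {1: 2, 2: 1}, {1: 2, 2: 1}, eps=1, n_players=3, r_max=10))  # 1068
```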
Fig. 3: SM-MCTS tree structure with explicit chance nodes for SAA-c game with 3 players
### _Final move selection_
The final move which is returned by \(SMS^{\alpha}\) is the action which maximises the player's expected scalarised reward at the root node. More formally, \(SMS^{\alpha}\) returns \(\underset{x_{i}}{\arg\max}\,\frac{r^{\alpha}_{x_{i}}}{n_{x_{i}}}\) for player \(i\).
## V Experiments
In this section, we start by analysing the convergence rates of sequence \(p_{t}\), notably through Example 1. Then, we show that our algorithm \(SMS^{\alpha}\) largely outperforms state-of-the-art existing bidding algorithms in SAA-c, mainly by tackling own price effect and exposure more efficiently. This is first shown through typical examples taken from the literature and, then, through extensive experiments on instances of realistic size. We compare \(SMS^{\alpha}\) to the following four strategies:
* \(MS^{\lambda}\): An MCTS algorithm described in [22] which relies on two risk-aversion hyperparameters \(\lambda^{r}\) and \(\lambda^{o}\).
* EPE: A _PP_ strategy using expected price equilibrium [30] as initial prediction.
* SCPD: A distribution price prediction strategy using self-confirming price distribution [30] as initial distribution prediction.
* SB: Straightforward bidding [19].
The four strategies \(MS^{\lambda}\), EPE, SCPD and SB initially rely on the definition of _PP_ for unconstrained environments [30]. We extend them to budget and eligibility constrained environments in the same way as it is done in Definition III.1. In all experiments, none of the bidders are aware of their opponents' strategy.
Each algorithm is given \(150\) seconds of thinking time. Initial predictions of closing prices are computed offline before the auction starts and are therefore excluded from the thinking time. This step usually takes a few minutes. All experiments are run on a server consisting of Intel Xeon E5-2699 v4 2.2 GHz processors. In all upcoming experiments, the hyperparameter \(\alpha\) of \(SMS^{\alpha}\) takes the value \(7\) and the risk-aversion hyperparameters \(\lambda^{r}\) and \(\lambda^{o}\) of \(MS^{\lambda}\) both take the value \(0.025\). These hyperparameters are obtained by grid search. The maximum number of expanded actions per information set \(N_{act}\) of \(SMS^{\alpha}\) is set to \(20\).
### _Convergence of sequence \(p_{t}\)_
One of the main advantages of using our method to compute an initial prediction is the convergence of sequence \(p_{t}\). Even though this convergence has only been observed and not proven, it is possible to derive rates of convergence in small instances. For instance, in Example 1, it can be shown that \(\forall t\geq 1\), \(p_{t}\) belongs to the diamond defined by the points \((10-\frac{10}{t},10-\frac{2}{t})\), \((10-\frac{9}{t},10-\frac{10}{t})\), \((10+\frac{7}{4t},10+\frac{3}{4t})\) and \((10+\frac{4}{4t},10+\frac{7}{4t})\) which converges to \(p^{*}=(10,10)\). We represent in Figure 4 the sequence \(p_{t}^{1}\) with its corresponding lower bound and upper bound.
In larger instances, we observe similar rates of convergence. However, computing such bounds seems unrealistic as obtaining a closed-form expression of \(\mathbb{E}[f_{\Gamma}(p_{t})]\) appears intractable.
### _Test experiments_
One of the greatest advantages that MCTS methods have over other bidding algorithms is the capacity to judge pertinently in which situations adopting a demand reduction strategy is more beneficial. Indeed, through the use of its search tree, an MCTS method is capable of determining if it is more profitable to concede items to its opponents to keep prices low or to bid greedily. To highlight this feature, we propose the following experiment in a 2-item auction between two players with additive value functions. Each player values each item at \(l=10\). Player 1 has a budget \(b_{1}\geq 20\). Given that, the optimal strategy for player 2 is to bid on the cheapest item if it is not temporarily winning any item. Otherwise, it should pass. The optimal strategy for player 1 fully depends on its opponent's budget \(b_{2}\). For an infinitesimal bid increment \(\varepsilon\),
* If \(b_{2}\leq\frac{l}{2}\), player 1's optimal strategy is to play straightforwardly and it obtains an expected utility of \(2(l-b_{2})\).
* If \(b_{2}\geq\frac{l}{2}\), player 1 should adopt a demand reduction strategy and it obtains an expected utility of \(l\).
We plot in Figure 5 the expected utility \(\mathbb{E}(\sigma_{1})\) of player 1 for each strategy given player 2's budget \(b_{2}\). The three algorithms SB, EPE and SCPD always suggest that player 1 bid greedily and never propose a demand reduction strategy, even when it is highly profitable (\(b_{2}>\frac{l}{2}\)). However, both MCTS methods perfectly adopt the appropriate strategy. This experiment highlights the fact that \(SMS^{\alpha}\) selects the most profitable strategy and tackles the own price effect, at least in simple budget and eligibility constrained environments.

Fig. 4: Convergence of sequence \(p_{t}^{1}\) with its respective upper bound \(g(t)\) and lower bound \(h(t)\) in Example 1.
Furthermore, \(SMS^{\alpha}\) is capable of avoiding obvious exposure. To highlight this feature, we use the SAA-c game presented in Example 1 where player 2's budget \(b_{2}=16\). The optimal strategy for player 1 is to play straightforwardly. Similarly to the preceding experiment, the optimal strategy for player 2 fully depends on its opponent's budget \(b_{1}\).
* If \(b_{1}<8\), player 2's optimal strategy is to play straightforwardly.
* If \(b_{1}\geq 8\), player 2's optimal strategy is to drop out of the auction to avoid exposure.
We plot in Figure 6 the expected utility \(\mathbb{E}(\sigma_{2})\) of player 2 for each strategy given player 1's budget \(b_{1}\). The two algorithms SCPD and SB always suggest that player 2 bid straightforwardly, leading it to exposure when \(b_{1}\geq 8\). \(MS^{\lambda}\) never leads player 2 to exposure. However, it suggests dropping out of the auction prematurely in some situations with no risk of exposure and, hence, incurs a loss of easy profit (\(b_{1}=7\)). \(SMS^{\alpha}\) and \(EPE\) perfectly adopt the optimal strategy. This experiment highlights the fact that \(SMS^{\alpha}\) perfectly adopts the most profitable strategy and tackles exposure efficiently, at least in simple budget and eligibility constrained environments.
### _Extensive experiments_
In this section, we study instances of realistic size with \(n=4\) and \(m=11\). Each experimental result has been run on 1000 different SAA-c instances. With the exception of [22], all experimental results in the literature are obtained for specific settings of SAA, i.e., using value functions with some specific property such as superadditivity [13, 23, 30]. Hence, it is difficult to conclude on the effectiveness of a method in more generic settings. Therefore, we propose a more general approach to generate value functions by making no additional assumption on its form. Budgets are drawn randomly.
**Setting.**_Let \(\Gamma\) be an instance of SAA-c with \(n\) bidders, \(m\) items and bid increment \(\varepsilon\). Each player \(i\) has a budget \(b_{i}\sim U([b_{min},b_{max}])\) with \(U\) the uniform distribution. Its value function \(v_{i}\) is built as follows: \(v_{i}(\emptyset)=0\) and, for any set of goods \(X\),_
\[v_{i}(X)\sim U([\max_{j\in X}v_{i}(X\backslash\{j\}),V+\max_{j\in X}v_{i}(X \backslash\{j\})+v_{i}(\{j\})]) \tag{13}\]
Drawing value functions through a uniform distribution is widely used for creating auction instances [23, 30]. In our setting, the lower-bound ensures that \(v_{i}\) respects the _free disposal_[19] condition. The upper-bound caps the maximum surplus of complementarity possibly gained by adding an item \(j\) to the set of goods \(X\backslash\{j\}\) by \(V\). As valuations are always finite, any value function can be represented by our setting for a sufficiently large \(V\). For \(V=0\), only subadditive functions are considered. For \(V>0\), goods can either be complements or substitutes. In our experimental results, value functions and budgets are generated for each instance as above with \(\varepsilon=1\), \(b_{min}=10\), \(b_{max}=40\) and \(V=5\).
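The sampling of Eq. (13) can be sketched as below. Two caveats: the upper bound is self-referential for singletons, so we adopt our own convention of drawing singleton values uniformly on \([0,V]\), and we read the max as ranging over the whole sum \(v_{i}(X\backslash\{j\})+v_{i}(\{j\})\); the paper's exact conventions may differ.

```python
import random
from itertools import combinations

def random_value_function(items, V, rng=None):
    """Draw one value function in the spirit of Eq. (13): v(empty) = 0,
    singletons on [0, V] (our base-case convention), and for larger sets X
    v(X) ~ U([ max_j v(X\\{j}),  V + max_j (v(X\\{j}) + v({j})) ]),
    so free disposal holds and extra complementarity is capped by V."""
    rng = rng or random.Random(0)
    v = {frozenset(): 0.0}
    for j in items:
        v[frozenset({j})] = rng.uniform(0.0, V)
    for k in range(2, len(items) + 1):
        for X in combinations(items, k):
            X = frozenset(X)
            lo = max(v[X - {j}] for j in X)
            hi = V + max(v[X - {j}] + v[frozenset({j})] for j in X)
            v[X] = rng.uniform(lo, hi)
    return v

v = random_value_function([1, 2, 3], V=5)
print(round(v[frozenset({1, 2, 3})], 2))
```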
In the upcoming analysis, the average price paid per item won, the ratio of items won, the expected exposure and the exposure frequency are obtained by confronting a strategy \(A\) to a strategy \(B\). To facilitate our study, each measure of \(A\) against \(B\) is obtained by averaging the results obtained for the three following strategy profiles: (\(A\),\(B\),\(B\),\(B\)), (\(A\),\(A\),\(B\),\(B\)) and (\(A\),\(A\),\(A\),\(B\)). For instance, if \(A=SMS^{\alpha}\) and \(B=\) SB, the average price paid per item won by \(SMS^{\alpha}\) in these three strategy profiles is respectively: \(5.96\), \(5.46\) and \(4.62\). Hence, the average price paid per item won by \(SMS^{\alpha}\) against \(SB\) is \(5.35\).
#### V-C1 Expected Utility
To facilitate our analysis, we study the normal-form game in expected utility where each player has the choice between playing \(SMS^{\alpha}\) or another strategy \(A\). The same empirical game analysis approach was employed by Wellman et al. in [30]. More precisely, we map each strategy profile to the estimated expected utility obtained by each player in the 1000 SAA-c instances. The four resulting empirical games for each possible strategy \(A\) are given in Figure 7.

Fig. 5: Evolution of player 1’s expected utility \(\mathbb{E}(\sigma_{1})\) depending on strategy versus player 2’s budget \(b_{2}\) given that player 2 plays optimally (\(\varepsilon=0.1\)).

Fig. 6: Evolution of player 2’s expected utility \(\mathbb{E}(\sigma_{2})\) depending on strategy versus player 1’s budget \(b_{1}\) given that player 1 plays optimally in Example 1 (\(\varepsilon=1\)).
For example, in Figure 7(b), if all bidders play EPE, each bidder obtains an expected utility of \(10.8\). In the case of three EPE bidders and one \(SMS^{\alpha}\) bidder, the \(SMS^{\alpha}\) bidder obtains an expected utility of \(21.5\). Hence, if all bidders play EPE, a bidder can double its expected utility by switching to \(SMS^{\alpha}\). Therefore, deviating to \(SMS^{\alpha}\) is profitable if all bidders play EPE. This is also the case for the three other possible deviations in Figure 7(b). Hence, in the empirical game where bidders have the choice between playing \(SMS^{\alpha}\) or EPE, each bidder has an interest in playing \(SMS^{\alpha}\). We can clearly see that all deviations to \(SMS^{\alpha}\) are also strictly profitable in the three other empirical games. Hence, in each empirical game, a bidder should play \(SMS^{\alpha}\) to maximise its expected utility. Therefore, the strategy profile (\(SMS^{\alpha}\), \(SMS^{\alpha}\), \(SMS^{\alpha}\), \(SMS^{\alpha}\)) is a Nash equilibrium of the normal-form SAA-c game in expected utility with strategy set {\(SMS^{\alpha}\), \(MS^{\lambda}\), EPE, SCPD, SB}.
Moreover, the strategy profile where all bidders play \(SMS^{\alpha}\) has a significantly higher expected utility than any other strategy profile where all bidders play the same strategy. This is mainly due to the fact that \(SMS^{\alpha}\) tackles efficiently the own price effect. For instance, in Figure 7, the expected utility of the strategy profile where all bidders play \(SMS^{\alpha}\) is respectively \(1.13\), \(1.68\) and \(3.94\) times higher than the ones where all bidders play EPE, \(MS^{\lambda}\) and SCPD.
The fact that the expected utility obtained by the strategy profile where all bidders play EPE is relatively close to the one where all bidders play \(SMS^{\alpha}\) can be explained as follows. To compute their expected price equilibrium as initial prediction of closing prices, all EPE bidders in our experiments share the same initial price vector and adjustment parameter in their tatonnement process. This tatonnement process is independent of the auction's mechanism and only relies on the estimated valuations of the players. Hence, as SAA-c is a game with complete information, all EPE bidders share the same initial prediction of closing prices and can therefore split up the items between them more or less efficiently.
Not all algorithms have the ability to achieve good coordination between bidders. For instance, the strategy profile where all bidders play SB leads to a negative expected utility. Hence, in this specific case, bidders would have preferred not to participate in the auction. This highlights the fact that playing SB is a very risky strategy that mainly leads to exposure.
The high performance of \(SMS^{\alpha}\) is mostly due to the three following factors:
* its ability to judge if performing demand reduction or bidding greedily is more beneficial given each bidder's budget and eligibility.
* its ability to tackle the own price effect without putting itself in a vulnerable position because of eligibility constraints.
* its ability to avoid exposure in a budget and eligibility constrained environment.

Fig. 7: Normal-form expected utility for a SAA-c game with five strategies
#### V-C2 Own Price Effect
To analyse the own price effect, we plot in Figure 8(a) the average price paid per item won by each strategy \(A\) against every strategy \(B\) displayed on the x-axis. For instance, if \(A=SMS^{\alpha}\) and \(B=\) EPE, the average price paid per item won by \(SMS^{\alpha}\) against EPE is \(1.53\). It corresponds to the orange bar above index EPE on the x-axis. If \(A=\) EPE and \(B=SMS^{\alpha}\), then the average price paid per item won by EPE against \(SMS^{\alpha}\) is \(2.53\). It corresponds to the pink bar above index \(SMS^{\alpha}\) on the x-axis.
In Figure 8(a), we can clearly see that \(SMS^{\alpha}\) acquires items at a lower price on average than the other strategies against \(SMS^{\alpha}\), SCPD and SB. For instance, \(SMS^{\alpha}\) spends \(13.3\%\), \(17.1\%\), \(44.9\%\) and \(49.8\%\) less per item won against SCPD than \(MS^{\lambda}\), EPE, SCPD and SB respectively. Moreover, against \(MS^{\lambda}\) and EPE, only EPE spends slightly less than \(SMS^{\alpha}\) per item won.
To ensure that \(SMS^{\alpha}\) bidders do not obtain low average prices by only purchasing undesired items, we plot in Figure 8(b) the ratio of items won by playing each strategy \(A\) against every strategy \(B\) on the x-axis. For instance, if \(A=SMS^{\alpha}\) and \(B=\) EPE, the ratio of items won by \(SMS^{\alpha}\) against EPE is \(0.31\). It corresponds to the orange bar above index EPE on the x-axis in Figure 8(b). If \(A=\) EPE and \(B=SMS^{\alpha}\), the ratio of items won by EPE against \(SMS^{\alpha}\) is \(0.18\). It corresponds to the pink bar above index \(SMS^{\alpha}\) on the x-axis in Figure 8(b). We see that each \(SMS^{\alpha}\) bidder obtains on average at least one fifth of the items against every strategy except SB. Hence, \(SMS^{\alpha}\) is competitive.
Regarding strategy profiles where all bidders play the same strategy, the one corresponding to \(SMS^{\alpha}\) has an average price paid per item won \(1.70\), \(2.35\) and \(2.98\) times lower than \(MS^{\lambda}\), SCPD and SB respectively. Moreover, by looking at Figure 8(b), we can see that all items are allocated when all bidders play \(SMS^{\alpha}\). Being capable of splitting up all the items at a relatively low price explains why the expected utility of the strategy profile where all bidders play \(SMS^{\alpha}\) is significantly higher than the ones where all bidders play the same other strategy. Only obtaining items at a low price is not sufficient. For instance, when all bidders play EPE, the average price paid per item won is \(1.6\) times lower than when all bidders play \(SMS^{\alpha}\). However, only \(72\%\) of all items are allocated. Hence, this strategy profile achieves a lower expected utility than if all bidders had played \(SMS^{\alpha}\).
Moreover, the fact that the average price per item won when all bidders play EPE is relatively close to \(\varepsilon\) raises an important strategical issue. Indeed, to obtain such a low price, EPE bidders drastically reduce their eligibility during the first round without considering the fact that they might end up in a vulnerable position. Hence, an EPE bidder can easily be deceived. This explains why a bidder doubles its expected utility if it decides to play \(SMS^{\alpha}\) instead of EPE when all its opponents are playing EPE in Figure 7(b). After the first round, \(SMS^{\alpha}\) easily takes advantage of the weak position of its opponents. By gradually decreasing its eligibility, an \(SMS^{\alpha}\) bidder tackles the own price effect efficiently and avoids putting itself in vulnerable positions.
### _Exposure_
To analyse exposure, we plot in Figure 9(a) the expected exposure of each strategy \(A\) against every strategy \(B\) displayed on the x-axis. Similarly, we plot in Figure 9(b) the exposure frequency of each strategy \(A\) against every strategy \(B\) displayed on the x-axis. For instance, if \(A=SMS^{\alpha}\) and \(B=\) SB, the expected exposure and exposure frequency of \(SMS^{\alpha}\) against \(SB\) are respectively \(0.07\) and \(4.4\%\). They correspond respectively to the orange bars above index SB on the x-axis in Figure 9(a) and Figure 9(b).
Firstly, in the situation where all bidders decide to play the same strategy, \(SMS^{\alpha}\) has the remarkable property of never leading to exposure. This is not the case for the four other strategies. Secondly, \(SMS^{\alpha}\) is the only strategy which never suffers from exposure against \(MS^{\lambda}\) and EPE. Thirdly, even against SCPD and SB, \(SMS^{\alpha}\) is rarely exposed. It has the lowest expected exposure and exposure frequency. For instance, \(SMS^{\alpha}\) induces \(9.3\), \(4.5\), \(34\) and \(90\) times less expected exposure against SCPD than \(MS^{\lambda}\), EPE, SCPD and SB respectively. Moreover, regarding exposure frequency, by playing \(SMS^{\alpha}\) a bidder has \(6.6\), \(4\), \(27.6\) and \(58.1\) times less chance of ending up exposed against SCPD than \(MS^{\lambda}\), EPE, SCPD and SB respectively.

Fig. 8: Own price effect analysis for a SAA-c game with five strategies
Hence, not only does \(SMS^{\alpha}\) achieve higher expected utility than state-of-the-art algorithms but it also takes less risks.
### _Influence of \(\alpha\)_
Our strategy \(SMS^{\alpha}\) is based on a risk-aversion hyperparameter \(\alpha\). To show its impact on \(SMS^{\alpha}\)'s performance, we compare \(SMS^{\alpha}\) for the following values of \(\alpha\): \(0\), \(3\), \(7\) and \(12\).
Our first experiment is to study the impact of \(\alpha\) on the expected utility of \(SMS^{\alpha}\). We plot in Figure 10(a) the relative difference in expected utility between playing \(SMS^{0}\) and \(SMS^{\alpha}\) when all other bidders are playing \(SMS^{\alpha}\). We observe that switching from \(SMS^{0}\) to \(SMS^{\alpha}\) leads to a loss in expected utility for any value \(\alpha>0\). Moreover, this loss is an increasing function of \(\alpha\). Similar results are obtained for the other three deviations in the empirical game where a bidder has the choice between either playing \(SMS^{0}\) or \(SMS^{\alpha}\). The fact that deviating to the risk-neutral strategy \(SMS^{0}\) is always profitable and that the relative expected loss incurred by switching to \(SMS^{\alpha}\) increases with \(\alpha\) is far from surprising. Indeed, by increasing \(\alpha\), a \(SMS^{\alpha}\) bidder prefers bidding on sets of items which generate less utility but with less chance of leading to exposure.
To highlight the fact that increasing \(\alpha\) leads to less exposure, we plot the exposure frequency of \(SMS^{\alpha}\) against SB for different values of \(\alpha\) in Figure 10(b). We clearly see that the exposure frequency decreases when \(\alpha\) grows. Indeed, an \(SMS^{0}\) bidder has respectively \(1.8\), \(2.1\) and \(2.7\) times more chance of being exposed against SB than \(SMS^{3}\), \(SMS^{7}\) and \(SMS^{12}\).
Moreover, increasing \(\alpha\) also tackles the own price effect. For instance, the average price per item won when all bidders play \(SMS^{\alpha}\) is respectively \(3.11\), \(2.48\), \(2.21\) and \(2\) for \(\alpha\) equal to \(0\), \(3\), \(7\) and \(12\). This is a natural effect of risk-aversion where a bidder tends to avoid a rise in price. This mainly explains why we observe an increase in expected utility for the strategy profile where all bidders play \(SMS^{\alpha}\) when \(\alpha\) grows. For example, the expected utility when all bidders play \(SMS^{12}\) is \(1.36\) times greater than the expected utility when
Fig. 10: Impact of \(\alpha\) on \(SMS^{\alpha}\)
Fig. 9: Exposure analysis for a SAA-c game with five strategies
all bidders play \(SMS^{0}\).
By increasing \(\alpha\), \(SMS^{\alpha}\) tackles more efficiently the exposure problem and own price effect. Thus, it minimises the risk of incurring a loss if bidders do not behave as expected. However, the main drawback is that it decreases one's expected utility. The hyperparameter \(\alpha\) thus allows the bidder to arbitrate between expected utility and risk-aversion.
## VI Conclusions and Future Work
This paper introduces the first efficient bidding strategy that tackles simultaneously the _exposure problem_, the _own price effect_, _budget constraints_ and the _eligibility management problem_ in a simplified version of SAA (SAA-c). Our solution \(SMS^{\alpha}\) largely outperforms state-of-the-art algorithms on instances of realistic size in generic settings.
It is an SM-MCTS whose expansion and rollout phases rely on a new method for the prediction of closing prices. This method is based on a specific sequence that has the advantage of converging in practice in all undertaken SAA-c instances, taking into account the auction's mechanism and not relying solely on the outcomes of a single specific strategy profile. We introduce scalarised rewards in \(SMS^{\alpha}\) through a hyperparameter \(\alpha\), giving bidders the freedom to arbitrate between expected utility and risk-aversion. Increasing \(\alpha\) reduces exposure and the own price effect but decreases one's expected utility.
In this paper, we have considered an auction where valuations and budgets are common knowledge. In order to apply \(SMS^{\alpha}\) to incomplete information frameworks and, thus, deal with probability distributions, a simple approach is to compute their expectation and then consider the corresponding SAA-c game. We believe that, by updating beliefs each round and by better exploiting the probability distributions, further enhancements can be achieved. Future work should be guided in this direction.
|
2307.09267 | Distilling Coarse-to-Fine Semantic Matching Knowledge for Weakly
Supervised 3D Visual Grounding | 3D visual grounding involves finding a target object in a 3D scene that
corresponds to a given sentence query. Although many approaches have been
proposed and achieved impressive performance, they all require dense
object-sentence pair annotations in 3D point clouds, which are both
time-consuming and expensive. To address the problem that fine-grained
annotated data is difficult to obtain, we propose to leverage weakly supervised
annotations to learn the 3D visual grounding model, i.e., only coarse
scene-sentence correspondences are used to learn object-sentence links. To
accomplish this, we design a novel semantic matching model that analyzes the
semantic similarity between object proposals and sentences in a coarse-to-fine
manner. Specifically, we first extract object proposals and coarsely select the
top-K candidates based on feature and class similarity matrices. Next, we
reconstruct the masked keywords of the sentence using each candidate one by
one, and the reconstructed accuracy finely reflects the semantic similarity of
each candidate to the query. Additionally, we distill the coarse-to-fine
semantic matching knowledge into a typical two-stage 3D visual grounding model,
which reduces inference costs and improves performance by taking full advantage
of the well-studied structure of the existing architectures. We conduct
extensive experiments on ScanRefer, Nr3D, and Sr3D, which demonstrate the
effectiveness of our proposed method. | Zehan Wang, Haifeng Huang, Yang Zhao, Linjun Li, Xize Cheng, Yichen Zhu, Aoxiong Yin, Zhou Zhao | 2023-07-18T13:49:49Z | http://arxiv.org/abs/2307.09267v1 | # Distilling Coarse-to-Fine Semantic Matching Knowledge
###### Abstract
3D visual grounding involves finding a target object in a 3D scene that corresponds to a given sentence query. Although many approaches have been proposed and achieved impressive performance, they all require dense object-sentence pair annotations in 3D point clouds, which are both time-consuming and expensive. To address the problem that fine-grained annotated data is difficult to obtain, we propose to leverage weakly supervised annotations to learn the 3D visual grounding model, i.e., only coarse scene-sentence correspondences are used to learn object-sentence links. To accomplish this, we design a novel semantic matching model that analyzes the semantic similarity between object proposals and sentences in a coarse-to-fine manner. Specifically, we first extract object proposals and coarsely select the top-K candidates based on feature and class similarity matrices. Next, we reconstruct the masked keywords of the sentence using each candidate one by one, and the reconstructed accuracy finely reflects the semantic similarity of each candidate to the query. Additionally, we distill the coarse-to-fine semantic matching knowledge into a typical two-stage 3D visual grounding model, which reduces inference costs and improves performance by taking full advantage of the well-studied structure of the existing architectures. We conduct extensive experiments on ScanRefer, Nr3D, and Sr3D, which demonstrate the effectiveness of our proposed method.
## 1 Introduction
3D Visual grounding (3DVG) refers to the process of localizing an object in a scene based on a natural language sentence. The 3DVG task has recently gained attention due to its numerous applications. Despite the significant progress made in this area [3, 4, 39, 40, 17, 37], all these approaches require bounding box annotations for each sentence query, which are laborious and expensive to obtain. For example, it takes an average of 22.3 minutes to annotate a scene in the ScanNet-v2 dataset [6]. Thus, we focus on weakly supervised training for 3DVG, which only requires scene-sentence pairs for training. This problem is meaningful and realistic since obtaining scene-level labels is much easier and can be scaled effectively.
However, weakly supervised 3DVG poses two challenges. Firstly, a 3D point cloud can contain numerous objects of various categories, and a sentence query may contain multiple objects besides the target object to aid in localization. Without knowledge of the ground-truth object-sentence pair, it is difficult to learn to link the sentence to
Figure 1: (a). 3D visual grounding aims to find the object-sentence pair from the whole scene. The fully supervised setting requires all the dense ground-truth object-sentence labels for training, while the weakly supervised method only needs the coarse scene-sentence annotations. (b). Coarse-to-Fine Semantic Matching Model (bottom) analyzes the matching score of each proposal to the sentence, and the semantic matching knowledge is distilled to the two-stage 3DVG architecture (upper).
its corresponding object from the enormous number of possible object-sentence pairs. Secondly, the 3DVG task often involves multiple interfering objects in the scene with the same class as the target object, and the target object must be distinguished based on its object attributes and the relations between objects described in the given sentence. As illustrated in Figure 1 (a), there are two trash cans in the scene, and the described target object can only be identified by fully comprehending the language description.
To address both challenges simultaneously, we propose a coarse-to-fine semantic matching model to measure the similarity between object proposals and sentences. Specifically, our model generates object-sentence matching scores from scene-sentence annotation, guided by coarse-to-fine semantic similarity analysis. Firstly, we calculate the object category similarity and feature similarity between all the proposals and the sentence. Combining these two similarities, we roughly select \(K\) proposals with the highest similarity to the sentence, which can effectively filter out the proposals that do not belong to the target category. Secondly, we utilize NLTK [2] to conduct part-of-speech tagging on the sentences and randomly mask the more meaningful nouns and adjectives. The selected candidates are then used to reconstruct the masked keywords of the sentence, which can help the model fully and deeply understand the whole sentence. Since the target object and the sentence query are semantically consistent, the more the candidate and the target object overlap, the more accurate its predicted keywords will be. The object-sentence matching score of each candidate can be measured by its reconstruction loss. Finally, to reduce inference time and make full use of the structure of existing 3DVG models, we utilize knowledge distillation [15] to transfer the knowledge of the coarse-to-fine semantic matching model to a typical two-stage 3DVG model, where the distilled pseudo labels are generated by the object-sentence matching scores.
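To make the masking step concrete, the following sketch (not the authors' released code) uses NLTK's tokenizer and part-of-speech tagger to mask nouns and adjectives; the mask ratio and mask token are placeholder choices, and the NLTK resources `punkt` and `averaged_perceptron_tagger` are assumed to be installed.

```python
import random
import nltk

def mask_keywords(sentence, mask_ratio=0.5, mask_token="[MASK]"):
    """Mask a random subset of nouns/adjectives in the query sentence."""
    tokens = nltk.word_tokenize(sentence)          # needs the 'punkt' resource
    tagged = nltk.pos_tag(tokens)                  # needs 'averaged_perceptron_tagger'
    keyword_idx = [i for i, (_, tag) in enumerate(tagged)
                   if tag.startswith("NN") or tag.startswith("JJ")]
    n_mask = max(1, int(mask_ratio * len(keyword_idx))) if keyword_idx else 0
    masked_idx = set(random.sample(keyword_idx, n_mask)) if n_mask else set()
    masked = [mask_token if i in masked_idx else tok for i, tok in enumerate(tokens)]
    return masked, sorted(masked_idx)

print(mask_keywords("the blue trash can in front of the desk"))
```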
In summary, our key contributions are four-fold:
* To the best of our knowledge, this paper is the first work to address weakly supervised 3DVG, which eliminates the need for expensive and time-consuming dense object-sentence annotations and instead requires only scene-sentence level labels.
* We approach weakly supervised 3DVG as a coarse-to-fine semantic matching problem and propose a coarse-to-fine semantic matching model to analyze the similarity between each proposal and the sentence.
* We distill the knowledge of the coarse-to-fine semantic matching model into a two-stage 3DVG model, which fully leverages the well-studied network structure design, leading to improved performance and reduced inference costs.
* Experiments conducted on three wide-used datasets ScanRefer [4], Nr3D [1] and Sr3D [1] demonstrate the effectiveness of our method.
## 2 Related Work
**Supervised 3D Visual Grounding.** Grounding a sentence query in a 3D point cloud is a fundamental problem in vision-language tasks, with wide-ranging applications in fields like automatic robotics [35, 34, 25, 11] and AR/VR/metaverse [26, 9]. The ScanRefer [4] and Referit3D [1] datasets annotate dense object-sentence links on the widely-used 3D point cloud dataset ScanNet [6].
Most recent 3D visual grounding methods [3, 39, 40, 17, 16, 37] follow a two-stage pipeline. In the first stage, pretrained 3D object detectors [29, 22] generate 3D object proposals. The second stage involves matching the selected object proposals with the sentence query. Existing two-stage methods improve performance by exploring the object attributes and relations between proposals in the second stage. For example, 3DVG-Transformer [40] uses a coordinate-guided contextual aggregation module to capture relations between proposals and a multiplex attention module to distinguish the target object. TransRefer3D [13] uses an entity-aware attention module and a relation-aware attention module for fine-grained cross-modal matching. 3DJCG [3] revises a joint framework for 3D visual grounding [4] and 3D dense captioning [5] tasks, and their experiments demonstrate that extra caption-level data can improve the performance of 3D visual grounding.
In contrast to these supervised methods, our approach learns to localize target objects in 3D space using only caption-level annotations.
**Weakly Supervised Image Grounding.** The image grounding task, similar to 3DVG, aims to identify objects in an image based on a sentence, and has a wide range of applications [28, 20, 8, 38, 19, 36]. Weakly supervised image grounding, which requires only images and corresponding sentences in the training phase, has gained popularity due to the low cost of annotation [12, 31, 33, 10, 7].
Weakly supervised image grounding is typically treated as a Multiple Instance Learning (MIL) problem [18, 24], where the image is represented as a bag of regions, generated by a pre-trained image object detector. Image-sentence matching scores are calculated based on region-phrase similarity scores, and ground-truth image-sentence links are used to supervise these scores. For example, ARN [21] pairs image proposals and queries based on subject, location, and context information through adaptive grounding and collaborative reconstruction. InfoGround [12] proposes a contrastive learning objective function [14] to optimize image-sentence scores. Wang et al. [33] use a pre-trained image object detector to generate pseudo category labels for all regions, achieving region-phrase alignment by distilling
knowledge from these pseudo labels.
However, MIL-based weakly supervised image grounding methods cannot solve the weakly supervised problem in 3DVG. Firstly, the presence of numerous different objects in a single 3D scene makes it difficult to learn a stable MIL classifier. Secondly, while image grounding aims to locate objects corresponding to all phrases in the sentence, 3DVG requires the identification of a single target object, necessitating a deeper and more comprehensive understanding of the sentence's semantic information, rather than just its phrases.
## 3 Method
### Problem Formulation
In this paper, we address the problem of weakly-supervised 3DVG. The input point cloud \(\mathbf{P}=\{\mathbf{p}_{i}\}_{i=1}^{N_{\mathrm{p}}}\) contains point coordinates in 3D space, represented by \(\mathbf{p}_{i}\in\mathbb{R}^{3}\). Correspondingly, a sentence query \(\mathbf{Q}=\{\mathbf{q}_{i}\}_{i=1}^{N_{\mathrm{q}}}\) is given to describe the object of interest. The objective of our model is to predict a 3D bounding box \(\mathbf{B}=(\mathbf{c},\mathbf{r})\) that encompasses the object, where \(\mathbf{c}=(c_{x},c_{y},c_{z})\) represents the center of the box, and \(\mathbf{r}=(r_{x},r_{y},r_{z})\) represents the dimensions of the box. The number of input points and sentence length is denoted by \(N_{\mathrm{p}}\) and \(N_{\mathrm{q}}\), respectively. In the weakly-supervised setting, there are no bounding box annotations available during training.
### Overview
As depicted in Figure 2, our model utilizes a two-stage grounding pipeline. In the first stage, we employ a pre-trained 3D object detector to extract \(M_{\mathrm{p}}\) object proposals from the given point cloud. In the second stage, we propose a coarse-to-fine semantic matching process to evaluate the semantic similarity between each proposal and the sentence query. Specifically, the coarse-to-fine process comprises two steps. Firstly, we coarsely extract the top \(K\) object proposals, which are referred to as candidates, by computing the object-sentence similarity matrix between all proposals and the sentence query. Secondly, we generate a more accurate pseudo label by considering the semantic reconstruction result of each candidate-sentence pair. Further details will be explained in Section 3.3 and Section 3.4.
Moreover, for reducing the inference costs and further enhancing the performance, we propose to distill the semantic matching knowledge into a supervised 3DVG pipeline as elaborated in Section 3.5. Most advanced fully-supervised models typically operate using a "detection-and-matching" paradigm. This means that these powerful matching architectures can be used as plug-and-play modules to incorporate knowledge learned from weak supervision.
Figure 2: Overall architecture diagram of our model. The model is based on a two-stage grounding pipeline. We first extract object proposals by pre-trained object detector. Then, we propose a coarse-to-fine semantic matching process to find the matched object-query pair. Furthermore, we distill the semantic matching knowledge into an effective matching architecture to enhance the inference efficiency.
### Coarse-grained Candidate Selection
**Object-Sentence Similarity.** Although we have extracted numerous high-quality object proposals from the pre-trained 3D object detector, identifying the best-matched proposal with the sentence query is still challenging. This is because a 3D scene may contain many different classes of objects, and the semantic spaces between objects and the sentence are not aligned. To overcome this challenge, we propose calculating a similarity matrix between the objects and the sentence based on both class and feature levels.
For the class level, we can obtain the object class from the pre-trained 3D object detector and the text class from a text classifier. For simplicity, we choose to train the text classifier from scratch and the classification loss \(\mathcal{L}_{cls}\) is a simple cross-entropy loss. Considering that the object detector might be pre-trained on another dataset, the object class set and the text class set may be inconsistent. Therefore, before directly comparing the object proposals and the sentence, we need to transfer the object class prediction to the target text class. To achieve this, we propose using a class transform matrix \(\mathbf{M}^{\text{c}}\in\mathbb{R}^{N_{\text{o}}^{\text{c}}\times N_{\text{q} }^{\text{c}}}\) for class alignment. The matrix is based on the cosine similarity between the GloVe embeddings of different class names. Here, \(N_{\text{o}}^{\text{c}}\) and \(N_{\text{q}}^{\text{c}}\) denote the number of object classes and the number of words in the sentence query, respectively.
For the feature level, we align the feature representations of the objects and the sentence query using a contrastive learning approach. Specifically, we pull the positive object-query pairs in the same scene closer and push the negative pairs further apart in the semantic space. To achieve this, all the object-query pairs in the same scene are considered as positive pairs \(\mathbb{P}\), while those from different scenes are considered as negative pairs \(\mathbb{N}\). The feature matching loss for object-sentence feature alignment can be computed by
\[\mathcal{L}_{match}=-\log\left(\frac{\sum\limits_{(\mathbf{p},\mathbf{q})\in \mathbb{P}}e^{\phi(\mathbf{p},\mathbf{q})}}{\sum\limits_{(\mathbf{p}, \mathbf{q})\in\mathbb{P}}e^{\phi(\mathbf{p},\mathbf{q})}+\sum\limits_{( \mathbf{p}^{{}^{\prime}},\mathbf{q})\in\mathbb{N}}e^{\phi(\mathbf{p}^{{}^{ \prime}},\mathbf{q})}}\right), \tag{1}\]
where \(\mathbf{p}\) represents an object proposal and \(\mathbf{q}\) a sentence query. \(\phi\) is the feature similarity function, which is a dot product in our practice.
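A minimal PyTorch sketch of the feature-matching loss in Eq. (1) is given below; the batch-wise construction of positive pairs (same scene) and negative pairs (different scenes), as well as the tensor shapes, are our illustrative assumptions rather than the authors' implementation.

```python
import torch

def feature_matching_loss(prop_feat, sent_feat):
    """prop_feat: (B, M, d) proposal features per scene; sent_feat: (B, d) sentence features."""
    B = prop_feat.shape[0]
    # phi(p, q) as a dot product between every sentence (query index q) and every
    # proposal of every scene (scene index s, proposal index m): shape (B, B, M)
    scores = torch.einsum("qd,smd->qsm", sent_feat, prop_feat)
    same_scene = torch.eye(B, dtype=torch.bool, device=scores.device).unsqueeze(-1)
    pos = scores.masked_fill(~same_scene, float("-inf"))    # keep only positive pairs
    log_num = torch.logsumexp(pos.flatten(1), dim=1)        # log of the sum over positives
    log_den = torch.logsumexp(scores.flatten(1), dim=1)     # log of the sum over positives + negatives
    return (log_den - log_num).mean()                       # Eq. (1), averaged over queries
```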
We get the object-sentence similarity \(\mathbf{\hat{s}}\in\mathbb{R}^{M_{\text{p}}}\) by
\[\mathbf{\hat{s}}=\phi(\mathbf{\tilde{P}}^{\text{c}}\mathbf{M}^{\text{c}}, \mathbf{\tilde{Q}}^{\text{c}})+\phi(\mathbf{\tilde{P}},\mathbf{\tilde{Q}}), \tag{2}\]
where \(\mathbf{\tilde{P}}\in\mathbb{R}^{M_{\text{p}}\times d}\) / \(\mathbf{\tilde{Q}}\in\mathbb{R}^{N_{\text{q}}\times d}\) is the encoded object/sentence feature, and \(\mathbf{\tilde{P}}^{\text{c}}\in\mathbb{R}^{N_{\text{o}}^{\text{c}}}\) / \(\mathbf{\tilde{Q}}^{\text{c}}\in\mathbb{R}^{N_{\text{q}}^{\text{c}}}\) is the object/sentence class prediction. \(\phi\) is a similarity function (\(e.g.\), cosine similarity or dot product). \(M_{\text{p}}\) is the number of object proposals. \(d\) is the hidden dimension.
**Top-K Selection.** According to the object-sentence similarity, we coarsely select the top \(K\) candidates \(\mathbf{\tilde{C}}\in\mathbb{R}^{K\times d}\) out of the \(M_{\text{p}}\) proposals \(\mathbf{\tilde{P}}\in\mathbb{R}^{M_{\text{p}}\times d}\), which can effectively filter out proposals that are significantly different from the semantics of the sentence.
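The coarse selection step can be summarized in a few lines; the sketch below assumes the shapes introduced above and uses a plain dot product for \(\phi\), which is one of the choices mentioned after Eq. (2).

```python
import torch

def select_candidates(prop_feat, sent_feat, prop_cls, sent_cls, M_c, K=8):
    """prop_feat: (M_p, d), sent_feat: (d,), prop_cls: (M_p, N_oc),
    sent_cls: (N_qc,), M_c: (N_oc, N_qc) class-transform matrix."""
    class_sim = (prop_cls @ M_c) @ sent_cls   # map object classes to text classes, then compare
    feat_sim = prop_feat @ sent_feat          # dot-product feature similarity
    s_hat = class_sim + feat_sim              # object-sentence similarity, Eq. (2)
    top = torch.topk(s_hat, k=K)
    return top.indices, top.values            # indices and scores of the K coarse candidates
```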
### Fine-grained Semantic Matching
Given the \(K\) object candidates, we propose a semantic reconstruction module to measure fine-grained semantic similarity between the objects and the sentence query.
As depicted in Figure 2, we mask important words in the sentence query, such as the target object (_table_), its attribute (_blue_), and its relation to other objects (_in front of_) in the scene. We reconstruct the masked words with the assistance of each candidate, respectively. The candidate that provides the most useful semantic information to predict the keywords and contains the least amount of noise is expected to be the best match.
We encode the masked sentence query using a textual encoder, denoted as \(\mathbf{\tilde{Q}}^{\text{m}}\in\mathbb{R}^{N_{\text{q}}\times d}\). For the \(k\)-th candidate \(\mathbf{\tilde{c}}^{k}\in\mathbb{R}^{d}\), we obtain the cross-modal semantic representation \(\mathbf{f}^{k}=\{\mathbf{f}^{k}_{i}\}_{i=1}^{N_{\text{q}}}\in\mathbb{R}^{N_{ \text{q}}\times d}\) by a transformer decoder
\[\mathbf{f}^{k}=\mathrm{Dec}(\mathbf{\tilde{Q}}^{\text{m}},\mathbf{\tilde{c}} ^{k}). \tag{3}\]
To predict the masked words, we compute the energy distribution \(\mathbf{e}^{k}=\{\mathbf{e}^{k}_{i}\}_{i=1}^{N_{\text{q}}}\in\mathbb{R}^{N_{\text{q}}\times N_{\text{v}}}\) over the vocabulary by
\[\mathbf{e}^{k}_{i}=\mathbf{W}\mathbf{\tilde{f}}^{k}_{i}+\mathbf{b}, \tag{4}\]
where \(\mathbf{e}^{k}_{i}\in\mathbb{R}^{N_{\text{v}}}\) represents the energy distribution of the \(i\)-th predicted word, and \(N_{\text{v}}\) is the number of words in the vocabulary. \(\mathbf{W}\in\mathbb{R}^{N_{\text{v}}\times d}\) and \(\mathbf{b}\in\mathbb{R}^{N_{\text{v}}}\) are learnable parameters of a fully-connected layer.
Then, we use a reconstruction loss to train the semantic reconstruction module to effectively learn key information from the object context and predict the masked words. Specifically, the reconstruction can be computed as
\[\mathcal{L}^{k}_{recon}=-\sum\limits_{i\in N_{\text{mask}}}\log p(\mathbf{q}_ {i}|\mathbf{e}^{k}_{i}), \tag{5}\]
where \(N_{\text{mask}}\) represents positions of masked words in the query and \(\mathcal{L}^{k}_{recon}\) is the reconstruction loss for the \(k\)-th candidate \(\mathbf{\tilde{c}}^{k}\). Then the total loss for all the \(K\) candidates is \(\mathcal{L}_{recon}=\sum_{k=1}^{K}\mathcal{L}^{k}_{recon}\).
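A compact sketch of Eqs. (3)-(5) is shown below; the `nn.TransformerDecoder` stands in for the decoder \(\mathrm{Dec}\), and the batching, feature sizes, and vocabulary projection are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def reconstruction_losses(masked_query_feat, cand_feat, target_ids, mask_pos,
                          decoder, vocab_proj):
    """masked_query_feat: (N_q, d); cand_feat: (K, d); target_ids: (N_q,) word ids;
    mask_pos: LongTensor of masked positions; vocab_proj: nn.Linear(d, N_v)."""
    losses = []
    for k in range(cand_feat.shape[0]):
        # Eq. (3): cross-modal features conditioned on the k-th candidate (batch size 1)
        f_k = decoder(masked_query_feat.unsqueeze(1), cand_feat[k].view(1, 1, -1))
        logits = vocab_proj(f_k.squeeze(1))                  # Eq. (4): energies over the vocabulary
        losses.append(F.cross_entropy(logits[mask_pos], target_ids[mask_pos]))  # Eq. (5)
    return torch.stack(losses)                               # one reconstruction loss per candidate

# Example instantiation with placeholder sizes (d = 256, vocabulary of 3000 words):
dec = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=256, nhead=8), num_layers=2)
proj = nn.Linear(256, 3000)
```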
### Knowledge Distillation
As mentioned earlier, a lower reconstruction loss indicates that the object candidate provides more consistent semantic information. A direct approach for object prediction is to select the candidate with the lowest reconstruction loss, as it is likely to be the best match. However, this coarse-to-fine matching process is computationally expensive during
inference and not explicitly optimized for grounding tasks. To tackle the issues, we propose to distill the coarse-to-fine semantic matching knowledge into a supervised 3DVG pipeline. Our approach offers multiple benefits, including reduced inference costs and the ability to capitalize on more powerful 3DVG architectures and established learning objectives tailored for 3DVG tasks. By incorporating knowledge distillation, our framework can be integrated with any advanced supervised 3DVG pipeline, enhancing the flexibility and practicality of our method.
For the candidates, we calculate a reward according to their rank of \(\mathcal{L}_{recon}^{k}\). The reward decreases from one to zero, under the assumption that a lower reconstruction loss deserves a higher reward. The distilled pseudo labels \(\mathbf{d}=\{d_{1},...,d_{M_{p}}\}\) are generated by filling the rewards of the candidates into their original indices and padding the non-candidate indices with zeros, followed by a SoftMax operation. Finally, we distill the knowledge by aligning the predicted scores \(\mathbf{s}=\{s_{1},...,s_{M_{p}}\}\) to the pseudo labels, where the predicted scores are obtained from the powerful matching architecture. The distillation loss is:
\[\mathcal{L}_{distill}=-\sum_{i=1}^{M_{p}}d_{i}\log(\frac{\exp(s_{i})}{\sum_{j= 1}^{M_{p}}\exp(s_{j})}). \tag{6}\]
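The pseudo-label construction and the loss in Eq. (6) can be sketched as follows; the linear reward schedule from one to zero over the ranked candidates follows our reading of the text above, and all shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(recon_losses, candidate_idx, pred_scores):
    """recon_losses: (K,) per-candidate reconstruction losses; candidate_idx: (K,)
    indices of the candidates among the M_p proposals; pred_scores: (M_p,) scores s."""
    K, M_p = recon_losses.shape[0], pred_scores.shape[0]
    order = torch.argsort(recon_losses)            # lowest reconstruction loss first
    rewards = torch.linspace(1.0, 0.0, K)          # reward decreases from one to zero with rank
    full = torch.zeros(M_p)
    full[candidate_idx[order]] = rewards           # fill candidate slots; non-candidates stay zero
    pseudo = F.softmax(full, dim=0)                # distilled pseudo labels d
    return -(pseudo * F.log_softmax(pred_scores, dim=0)).sum()   # Eq. (6)
```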
### Training and Inference
**Multi-Task Loss.** We train the model end-to-end via a multi-task loss function, formulated by
\[\mathcal{L}_{overall}=\mathcal{L}_{distill}+\lambda_{1}\mathcal{L}_{cls}+ \lambda_{2}\mathcal{L}_{match}+\lambda_{3}\mathcal{L}_{recon} \tag{7}\]
where \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are hyper-parameters to balance four parts of the loss function.
**Inference.** Thanks to the knowledge distillation, all we need in the inference phase is the two-stage 3DVG pipeline. We get the predicted score vector \(\mathbf{s}\in\mathbb{R}^{M_{p}}\) from the matching architecture, and the index of the predicted best-match proposal is \(\operatorname*{argmax}(\mathbf{s})\). Then, we obtain the corresponding 3D bounding box of this object proposal.
## 4 Experiments
### Datasets
**ScanRefer.** The ScanRefer [4] dataset contains 51,583 descriptions of 11,046 objects from 800 ScanNet [6] scenes. On average, each scene has 64.48 sentences and 13.81 objects. The data can be divided into "Unique" and "Multiple", depending on whether there are multiple objects of the same category as the target in the scene.
**Nr3D/Sr3D.** The Nr3D/Sr3D dataset [1] is also based on the 3D scene dataset ScanNet [6]. Nr3D contains 41,503 human utterances collected by ReferItGame, and Sr3D contains 83,572 sentences automatically generated based on a "target-spatial relationship-anchor object" template. Similar to the definition of "Unique" and "Multiple" in ScanRefer, Nr3D/Sr3D can be split into "easy" and "hard" subsets. The "view-dep." and "view-indep." subsets depend on whether the description is related to the speaker's view. 1
Footnote 1: In the Nr3D/Sr3D datasets, the supervised task involves selecting the correct matching 3D box from a set of given boxes, with the instance matching accuracy serving as the evaluation metric. However, in the weakly-supervised setting, we predict the boxes from scratch and assess the IoU metrics, which cannot be directly compared to the results of supervised methods.
### Evaluation Metric.
To evaluate the performance of our method and baselines on these three datasets, we adopt the "\(R@n,IoU@m\)" metric. Specifically, this metric represents the percentage of at least one of the top-\(n\) predicted proposals having an IoU
\begin{table}
\begin{tabular}{c|c|c c|c c|c c|c c} \hline \hline & \multirow{2}{*}{Method} & \multicolumn{6}{c|}{R@3} & \multicolumn{3}{c}{R@1} \\ \cline{3-10} & & \multicolumn{2}{c|}{Unique} & \multicolumn{2}{c|}{Multiple} & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c}{Overall} \\ & & \(m\)=0.25 & \(m\)=0.5 & \(m\)=0.25 & \(m\)=0.5 & \(m\)=0.25 & \(m\)=0.5 & \(m\)=0.25 & \(m\)=0.5 \\ \hline \multirow{6}{*}{SUN} & Upper Bound & 57.07 & 35.28 & 55.30 & 35.29 & 55.65 & 35.29 & - & - \\ & Random & 15.88 & 6.99 & 7.38 & 3.28 & 9.03 & 3.96 & 3.66 & 1.37 \\ & MIL-Margin [10] & 19.94 & 10.51 & 10.18 & 3.60 & 12.07 & 4.94 & 6.80 & 2.37 \\ & MIL-NCE [12] & 19.13 & 10.95 & 7.57 & 3.56 & 9.81 & 5.00 & 5.64 & 2.69 \\ & **Ours** & **24.07** & **18.05** & **12.54** & **7.50** & **14.78** & **9.55** & **10.43** & **6.37** \\ \hline \multirow{6}{*}{SCAN} & Upper Bound & 93.82 & 77.02 & 72.61 & 58.01 & 76.72 & 61.70 & - & - \\ & Random & 21.36 & 14.25 & 10.10 & 7.15 & 12.28 & 8.53 & 4.74 & 3.32 \\ \cline{1-1} & MIL-Margin [10] & 29.54 & 22.49 & 11.48 & 8.04 & 14.99 & 10.84 & 8.16 & 5.66 \\ \cline{1-1} & MIL-NCE [12] & 48.94 & 40.76 & 17.41 & 13.73 & 23.53 & 18.97 & 18.95 & 14.06 \\ \cline{1-1} & **Ours** & **70.84** & **58.21** & **25.28** & **20.68** & **34.12** & **27.97** & **27.37** & **21.96** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison on ScanRefer. “SUN” and “SCAN” denotes that the 3D object detector is pretrained on SUN RGB-D[32] or ScanNet[6], respectively. For the “\(R@n,IoU@m\)” metric, \(n\in\{1,3\}\) and \(m\in\{0.25,0.5\}\).
greater than \(m\) when compared to the ground-truth target bounding box. In our setting, \(n\in\{1,3\}\) and \(m\in\{0.25,0.5\}\).
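For concreteness, a simple sketch of the underlying IoU test is given below, assuming axis-aligned boxes parameterized by a center \(\mathbf{c}\) and full side lengths \(\mathbf{r}\) as in Section 3.1; this is an illustration of the metric, not the official evaluation script.

```python
import numpy as np

def box3d_iou(c1, r1, c2, r2):
    """Axis-aligned 3D IoU for boxes given by center c and full side lengths r."""
    lo1, hi1 = np.asarray(c1) - np.asarray(r1) / 2, np.asarray(c1) + np.asarray(r1) / 2
    lo2, hi2 = np.asarray(c2) - np.asarray(r2) / 2, np.asarray(c2) + np.asarray(r2) / 2
    inter = np.prod(np.clip(np.minimum(hi1, hi2) - np.maximum(lo1, lo2), 0, None))
    return inter / (np.prod(r1) + np.prod(r2) - inter)

def hit_at_n(pred_boxes, gt_box, n=1, iou_thr=0.25):
    """1.0 if any of the top-n predicted boxes overlaps the ground truth by more than iou_thr."""
    return float(any(box3d_iou(c, r, *gt_box) > iou_thr for c, r in pred_boxes[:n]))
```

Averaging `hit_at_n` over all queries gives the "\(R@n,IoU@m\)" percentage reported in the tables.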
### Implementation Details.
In our practice, we use the pretrained GroupFree model [22] as our 3D object detector and distill the learned semantic matching knowledge to the matching architecture proposed in 3DJCG [3]. The input point number \(N_{\mathrm{p}}\), the proposal number \(M_{\mathrm{p}}\), and the candidate number \(K\) are set to 50000, 256 and 8, respectively. More details can be found in the supplementary material.
### Compared Methods
**Random.** We randomly select a candidate from all the proposals as the predicted result.
**MIL-Margin.** The MIL-Margin method [10] proposes a max margin loss to enforce the score between a sentence and a paired scene to be higher than non-paired scenes, and vice versa.
**MIL-NCE.** The MIL-NCE method [12] maximizes the InfoNCE lower bound on mutual information between the sentence and proposals from the paired scene, compared to non-corresponding pairs of scenes and sentences.
**Upper Bound.** The quality of the bounding boxes generated by the 3D object detector determines the upper bound performance of our model. We consider the maximum IoU between all the \(M_{\mathrm{p}}\) object proposals and the ground-truth bounding box as the upper bound.
### Quantitative Comparison
The performance results of our methods and baselines on ScanRefer and Nr3D/Sr3D are reported in Table 1 and Table 2, respectively, with the best results highlighted in **bold**. The comparison to supervised methods is presented in Table 3. Although the 3D object detector pre-trained on ScanNet implicitly utilizes ground truth boxes on ScanNet, the object-sentence annotations are still unseen, and pre-training on ScanNet is only used to obtain more accurate proposals. To fully avoid annotations in ScanNet, we also evaluate results using a detector pre-trained on SUN RGB-D [32]. Despite the degradation caused by out-of-domain data, our method still shows significant improvement over baselines. By analyzing the evaluation results, we can observe the following facts:
* Our method achieves significant improvements over the Random method on all datasets, indicating the effectiveness of the coarse-to-fine semantic matching model in analyzing the similarity between objects and sentences when true object-sentence pairs are unavailable.
* The results show that our method outperforms widely
\begin{table}
\begin{tabular}{c|c|c c|c c|c c|c c|c c} \hline \hline \multirow{3}{*}{} & \multirow{3}{*}{Method} & \multicolumn{2}{c|}{Easy} & \multicolumn{2}{c|}{Hard} & \multicolumn{2}{c|}{View-dep.} & \multicolumn{2}{c|}{View-indep.} & \multicolumn{2}{c}{Overall} \\ & & \(m\)=0.25 & \(m\)=0.5 & \(m\)=0.25 & \(m\)=0.5 & \(m\)=0.5 & \(m\)=0.25 & \(m\)=0.5 & \(m\)=0.25 & \(m\)=0.25 & \(m\)=0.5 \\ \hline \multicolumn{11}{c}{**Nr3D**} \\ \hline \multirow{4}{*}{SUN} & Upper Bound & 40.24 & 24.62 & 40.62 & 23.80 & 40.66 & 24.88 & 40.32 & 23.82 & 40.44 & 24.20 \\ & Random & 6.70 & 2.40 & 6.34 & 2.75 & 6.59 & 2.91 & 6.47 & 2.41 & 6.51 & 2.59 \\ & MIL-Margin [10] & 9.93 & 5.63 & 7.79 & 4.03 & 8.71 & 4.77 & 8.88 & 4.81 & 8,82 & 4,80 \\ & MIL-NCE [12] & 9.93 & 5.42 & 7.77 & 4.79 & 8.45 & 4.67 & 9.00 & 5.32 & 8.81 & 5.09 \\ & **Ours** & **10.93** & **6.36** & **9.83** & **6.18** & **10.77** & **6.53** & **10.13** & **6.13** & **10.36** & **6.27** \\ \hline \multirow{4}{*}{SCAN} & Upper Bound & 62.43 & 44.75 & 58.98 & 44.18 & 59.15 & 42.91 & 61.44 & 45.29 & 60.64 & 44.45 \\ & Random & 8.81 & 5.66 & 7.57 & 4.97 & 7.28 & 4.80 & 8.65 & 5.61 & 8.17 & 5.30 \\ & MIL-Margin [10] & 14.25 & 10.64 & 9.79 & 7.68 & 10.64 & 8.35 & 12.63 & 9.50 & 11.93 & 9.10 \\ & MIL-NCE [12] & 17.29 & 13.53 & 9.61 & 7.59 & 11.96 & 9.44 & 14.01 & 10.98 & 13.29 & 10.44 \\ & **Ours** & **27.29** & **21.10** & **17.98** & **14.42** & **21.60** & **16.80** & **22.91** & **18.07** & **22.45** & **17.62** \\ \hline \multicolumn{11}{c}{**Sr3D**} \\ \hline \multirow{4}{*}{SUN} & Upper Bound & 39.22 & 23.69 & 39.58 & 21.83 & 25.93 & 13.30 & 39.92 & 23.24 & 39.33 & 22.82 \\ & Random & 6.53 & 2.28 & 4.61 & 2.17 & 1.86 & 0.80 & 6.05 & 2.32 & 5.96 & 2.25 \\ & MIL-Margin [10] & 8.52 & 4.84 & 5.66 & 3.98 & 3.19 & **2.66** & 7.86 & 4.67 & 7.67 & 4.59 \\ & MIL-NCE [12] & 8.66 & 4.92 & 4.10 & 2.78 & 2.46 & 0.93 & 7.56 & 4.42 & 7.30 & 4.28 \\ & **Ours** & **10.31** & **6.60** & **8.57** & **6.23** & **4.19** & 1.86 & **10.09** & **6.69** & **9.79** & **6.49** \\ \hline \multirow{4}{*}{SCAN} & Upper Bound & 65.42 & 46.75 & 58.46 & 42.69 & 53.59 & 34.84 & 63.77 & 46.01 & 63.34 & 45.54 \\ & Random & 8.50 & 5.38 & 6.85 & 4.55 & 5.59 & 3.72 & 8.12 & 5.20 & 8.01 & 5.13 \\ \cline{1-1} & MIL-Margin [10] & 12.55 & 9.82 & 9.59 & 7.50 & 9.57 & 7.98 & 11.76 & 9.18 & 11.67 & 9.13 \\ \cline{1-1} & MIL-NCE [12] & 17.45 & 12.51 & 9.61 & 7.14 & 12.37 & 7.97 & 15.22 & 11.03 & 15.11 & 10.90 \\ \cline{1-1} & **Ours** & **29.40** & **24.87** & **21.00** & **17.47** & **20.21** & **17.15** & **27.19** & **22.90** & **26.89** & **22.66** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison on Nr3D and Sr3D dataset. “SUN” and “SCAN” denotes that the 3D object detector is pretrained on SUN RGB-D [32] or ScanNet [6], respectively. For the “\(R@n\), \(IoU@m\)” metric, \(n=3\) and \(m\in\{0.25,0.5\}\).
used MIL-based weakly supervised methods by a large margin, and even approaches the upper bound in the "Unique" subset of ScanRefer. This suggests that our proposed model can deeply exploit the alignment relationship between 3D scenes and sentences and identify the most semantically relevant object proposals.
* Our coarse-to-fine semantic matching model significantly improves performance in the challenging "Multiple" subset of ScanRefer and "Hard" subset of Nr3D/Sr3D, where there are multiple interfering objects with the same category as the target object. This problem requires a comprehensive understanding of the sentence to distinguish the described object, which our model handles efficiently with the keywords semantic reconstruction module.
* The performance improvement with the SUN RGB-D pre-trained backbone is relatively small on Nr3D and Sr3D datasets, possibly because the target objects are inherently more challenging to detect, and the pre-trained detector performs poorly due to out-of-distribution data. The low grounding upper bound and inaccurate proposals make the training phase unstable. Nevertheless, our method outperforms all baselines, and when the detector is more reliable, our semantic matching model shows much more significant advantages on Nr3D/Sr3D.
### Ablation Study
To further assess the effectiveness of each component, we conduct ablation studies on the ScanRefer dataset.
a significant performance improvement. The well-studied structure of the existing 3DVG model enhances the generalization ability of our method. As for efficiency, we observe that the distilled matching module is 3\(\times\) smaller and 4\(\times\) faster than the coarse-to-fine semantic matching model, demonstrating that the distillation operation reduces inference costs significantly. Meanwhile, we also distill the knowledge into the matching modules of different supervised methods (SAT [37], 3DVG-Transformer [40], and 3DJCG [3]); the results show that the distillation fits well to different architectures.
### Qualitative Comparison
As depicted in Figure 3, we visualize the predicted bounding boxes in the corresponding 3D scene, where the green box denotes that the method predicts the correct object (IoU \(\geq 0.5\) with the true box), and the red box indicates a wrong prediction.
In case (a), the target object _cabinet_ is the only cabinet in the simple scene. So, both MIL-based methods and our method can predict the box well. In case (b) / (c), the target object is a _trash can / chair_. MIL-based methods may be misled by the presence of another object (_bathroom counter / desk_) in the sentence. While our method can filter out the objects that do not belong to the target category, benefiting from the coarse-grained candidate selection module. In case (d), there are six _desks_ in the scene. The MIL-based methods fail to localize the correct object, even though they figure out the target object (category) is _desk_. With the fine-grained semantic matching module, our methods can better differentiate among these six _desks_ and choose the one best-matched to the sentence ("brown" and "under two monitors"). In case (e), the scene contains 32 different _chairs_. Unfortunately, both our method and MIL-based methods fail in this case. However, we consider our method's predicted result acceptable. Firstly, the sentence query's expressions, such as "near the doors" and "near the center", are ambiguous and cannot give a precise location of the target object. Secondly, our method's predicted _chair_ is also consistent with the sentence description and is close to the true _chair_.
## 5 Conclusion
In this paper, we introduce the weakly-supervised 3D visual grounding setting, using only coarse scene-sentence correspondences to learn the object-sentence links. The weak supervision removes the need for time-consuming and expensive manual annotation of accurate bounding boxes, which makes the problem more realistic but also more challenging. To tackle this, we propose a novel semantic matching method that analyzes the object-sentence semantic similarity in a coarse-to-fine manner. Moreover, we distill the semantic matching knowledge into an existing 3D visual grounding architecture, effectively reducing the inference cost and further improving performance. Extensive experiments on large-scale datasets verify the effectiveness of our method.
Figure 3: Qualitative Comparison between MIL-based methods and Ours. |
2309.01397 | Unlabelled Sensing with Priors: Algorithm and Bounds | In this study, we consider a variant of unlabelled sensing where the
measurements are sparsely permuted, and additionally, a few correspondences are
known. We present an estimator to solve for the unknown vector. We derive a
theoretical upper bound on the $\ell_2$ reconstruction error of the unknown
vector. Through numerical experiments, we demonstrate that the additional known
correspondences result in a significant improvement in the reconstruction
error. Additionally, we compare our estimator with the classical robust
regression estimator and we find that our method outperforms it on the
normalized reconstruction error metric by up to $20\%$ in the high permutation
regimes $(>30\%)$. Lastly, we showcase the practical utility of our framework
on a non-rigid motion estimation problem. We show that using a few manually
annotated points along point pairs with the key-point (SIFT-based) descriptor
pairs with unknown or incorrectly known correspondences can improve motion
estimation. | Garweet Sresth, Ajit Rajwade, Satish Mulleti | 2023-09-04T06:55:45Z | http://arxiv.org/abs/2309.01397v1 | # Unlabelled Sensing with Priors: Algorithm and Bounds
###### Abstract
In this study, we consider a variant of unlabelled sensing where the measurements are sparsely permuted, and additionally, a few correspondences are known. We present an estimator to solve for the unknown vector. We derive a theoretical upper bound on the \(\ell_{2}\) reconstruction error of the unknown vector. Through numerical experiments, we demonstrate that the additional known correspondences result in a significant improvement in the reconstruction error. Additionally, we compare our estimator with the classical robust regression estimator and we find that our method outperforms it on the normalized reconstruction error metric by up to \(20\%\) in the high permutation regimes (\(>30\%\)). Lastly, we showcase the practical utility of our framework on a non-rigid motion estimation problem. We show that using a few manually annotated point pairs along with the key-point (SIFT-based) descriptor pairs with unknown or incorrectly known correspondences can improve motion estimation.
Unlabelled sensing, sparse permutation, group testing.
## I Introduction
Estimating an unknown vector from a set of linear and noisy measurements is a well-known problem in many applications that can be solved using least squares. In this scenario, the measurements and the unknown vector are connected by a measurement matrix. Usually, it is presumed that each measurement relates to a specific row in the matrix. Nevertheless, due to vagaries in the measurement process, the correspondence between the rows and the measurements, whether in part or entirely, might be lost. The objective now becomes estimating the unknown vector from an unknown permutation of the measurements.
The aforementioned problem is known as unlabelled sensing, and it arises naturally in many engineering and biological applications. For example, in the point-matching problem [1], the objective is to determine the unknown underlying permutation given the correspondences between the two point sets. Other areas where the unlabelled sensing problem and its variations naturally arise are group testing [2, 3], record-linkage [4, 5], simultaneous
pose and correspondence determination in computer vision [6, 7], simultaneous localization and mapping (SLAM) in robotics [8], data de-anonymization in security and privacy [9, 10], and data collection in sensor networks [11].
Estimation of the unknown vector from their linear permuted and noisy measurements is, generally, an ill-posed problem unless constraints are imposed on the permutation level, signal-to-noise ratio (SNR), and the number of measurements. For example, Unnikrishnan et al. [12] showed that a \(d\) dimensional vector can be uniquely identified with high probability from its \(N\) linear permuted measurements without noise iff \(N\geq 2d\). With an additional assumption that the unknown vectors are generic, \(N>d\) measurements are shown to be sufficient in the absence of noise [13]. On the other hand, in [14], thresholds on SNR are established for reconstruction. These theoretical guarantees are independent of any algorithm.
On the algorithm front, the problem of estimation is commonly framed as a least-squares challenge or as a maximum likelihood estimation problem. The goal is to minimize the objective function for both the unidentified vector and the permutations. However, as the optimization must encompass all possible \(N!\) permutations, any algorithm becomes computationally impractical without extra limitations such as the ordering of the unknown correspondence [15], multiple measurements with the same permutation matrix [16], and low-dimensional unknown vectors, [17, 18, 19, 20, 13, 14, 15, 16, 17]. In addition, a few works exploit the sparsity of the unknown vectors to make the problem less challenging (see [22] and references therein).
The aforementioned algorithms assume the possibility of losing complete correspondence among the measurements. In contrast, [23] considers a scenario where only a few correspondences are lost. The assumption is valid in many applications. For example, in point-matching, employing high-accuracy algorithms ensures that only a few correspondences are wrongly matched between the two sets. With few lost correspondences or equivalently assuming sparse permutations, the error due to permutation can be treated as sparse outliers [23]. While the estimation of unknown vectors in the presence of these sparse outliers can be accomplished using robust regression techniques (refer to [24]), [23] suggests an alternative approach that is robust to noise and employs \(\ell_{1}\) regularization. The authors have established upper bounds on the errors associated with estimating the unidentified vector and permutations. Through simulations, they demonstrated that their algorithm can successfully recover vectors for up to \(50\%\) of the permutations.
Apart from the requirement for sparse permutations, many situations allow for obtaining a limited number of measurements with correct correspondences. For instance, in the point-matching problem, it is possible to manually annotate a small set of precise point pairs with the help of a subject matter expert. This leads to the question: How can this additional information be optimally incorporated while upholding the assumption of sparse permutations?
In this paper, we consider an adaptation of the standard unlabelled sensing problem, where we assume that the measurements are sparsely permuted, and a few measurements with correct correspondences are available. Within these settings, inspired by [23], we have formulated an \(\ell_{1}\)-regularized problem and derived an upper bound on the estimation error of the unknown vector in terms of the noise variance, dimension \(d\), number of measurements \(N\), sparsity level of the permutation matrix, and number of correct correspondences. We show how the estimation error decreases as the number of known correspondences increases. We solved the problem using an off-the-shelf solver and compared our approach with robust regression. Our method outperforms the robust regression method on the
normalized reconstruction error metric by up to \(20\%\) in the high permutation regimes \((>30\%)\). We show that a few known correspondences can significantly reduce the reconstruction error compared to a scenario without knowledge of any correct correspondence. As an application, we consider the point-matching problem. We show that using a few manually annotated point pairs results in a visually better reconstruction.
The organization of the paper is as follows. In Section 2, we formally define the measurement model and the optimization problem. Theoretical bounds are presented in Section 3, whereas Section 4 discusses the numerical analysis. In Section 5, we show results for the point-matching problem followed by conclusions.
## II Problem Formulation
Consider a set of linear measurements \(\mathbf{A}\mathbf{x}_{0}\) where \(\mathbf{x}_{0}\in\mathbb{R}^{d}\) is the unknown vector to be estimated and \(\mathbf{A}\in\mathbb{R}^{N\times d}\) with \(N\geq d\) is the sensing matrix. In general, the problem can be solved using least squares, even in the presence of noise. However, in many applications, the measurements are permuted, as discussed in the previous section. In this case, the measurements in the presence of noise are given as
\[\mathbf{y}=\mathbf{P}_{0}\mathbf{A}\mathbf{x}_{0}+\mathbf{\epsilon}, \tag{1}\]
where \(\mathbf{\epsilon}\) is the noise term and \(\mathbf{P}_{0}\) is an \(N\times N\) permutation matrix. The problem of estimating \(\mathbf{x}_{0}\) from \(\mathbf{y}\) is ill-posed for \(N=d\) even if \(\mathbf{\epsilon}=\mathbf{0}\) and \(\mathbf{A}\) is invertible. Specifically, for any arbitrary permutation matrix \(\mathbf{P}(\neq\mathbf{P}_{0})\) and the vector \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{P}^{-1}\mathbf{P}_{0}\mathbf{A}\mathbf{x}_{0}\), we have that \(\mathbf{y}=\mathbf{P}\mathbf{A}\mathbf{x}\). Hence, the solution is not unique. Hence, the condition \(N>d\) is necessary to solve the problem. However, \(N>d\) need not be sufficient, especially in the presence of noise, unless additional assumptions on \(\mathbf{P}_{0},\mathbf{A}\), and \(\mathbf{\epsilon}\). In this work, we make the following assumptions.
1. The entries of \(\mathbf{A}\) are independent and identically distributed (i.i.d.) zero mean, unit variance Gaussian random variables.
2. The entries of noise vector \(\mathbf{\epsilon}\) are i.i.d. Gaussian random variables with zero mean and variance \(\sigma^{2}\).
3. Any \(m\) correct correspondences of \(\mathbf{A}\mathbf{x}\) are given where \(m<d\).
4. For the remaining \(p=N-m>d\) measurements without correspondences, at max \(k\) entries are permuted.
The randomness assumptions (A1) and (A2) will be useful in deriving theoretical bounds on the estimation accuracy. The assumption of the correct correspondence and sparse permutations in (A3) and (A4), respectively, are inspired by practical scenarios and will be helpful in improving the estimation accuracy. The assumptions \(m<d\) and \(N-m>d\) are explained by dividing the measurements \(\mathbf{y}\) into the following disjoint sets of measurements:
\[\mathbf{y}_{1}=\mathbf{A}_{1}\mathbf{x}_{0}+\mathbf{\epsilon}_{1}, \tag{2}\] \[\mathbf{y}_{2}=\mathbf{P}_{2}\mathbf{A}_{2}\mathbf{x}_{0}+\mathbf{\epsilon}_{2}, \tag{3}\]
where \(\mathbf{y}_{1}\in\mathbb{R}^{m}\) denotes the measurements with correct correspondences and \(\mathbf{y}_{2}\in\mathbb{R}^{p}\) are the remaining measurements. Note that \(\mathbf{P}_{2}\) is a \(p\times p\) permutation matrix with \(k\) permutations.
From (2), we observe that \(\mathbf{x}_{0}\) can be estimated from \(\mathbf{y}_{1}\) alone provided that \(m\geq d\), and we would not require \(\mathbf{y}_{2}\). To avoid such a trivial scenario, we assume that \(m<d\). Next, if we consider the problem of estimating \(\mathbf{x}_{0}\) from \(\mathbf{y}_{2}\)
only, then as discussed earlier, \(p>d\) is necessary. The necessity of the assumption \(p>d\) may be questionable in the presence of correct correspondences \(\mathbf{y}_{1}\). However, we maintain \(p>d\) to include the standard sparse unlabelled sensing problem when \(m=0\)[23].
Our objective is to estimate \(\mathbf{x}_{0}\) from \(\mathbf{y}\) under assumptions (A1)-(A4). To this end, we define the permutation error vector as
\[\mathbf{z}_{0}=\mathbf{P}_{2}\mathbf{A}_{2}\mathbf{x}_{0}-\mathbf{A}_{2}\mathbf{x}_{0}. \tag{4}\]
By using \(\mathbf{z}_{0}\), we re-write \(\mathbf{y}_{2}\) as
\[\mathbf{y}_{2}=\mathbf{A}_{2}\mathbf{x}_{0}+\mathbf{z}_{0}+\mathbf{\epsilon}_{2}. \tag{5}\]
This representation captures the effect of unknown permutation as an additional unknown but signal-dependent term. By combining (2) and (5), we have that \(\mathbf{y}=\mathbf{A}\mathbf{x}_{0}+[\mathbf{0}_{m}^{\mathrm{T}}\ \mathbf{z}_{0}^{\mathrm{T}}]^{ \mathrm{T}}+\mathbf{\epsilon}\), where \(\mathbf{0}_{m}\) is an all-zero vector of length \(m\) and the superscript \(\mathrm{T}\) denotes transpose operation. Since \(\mathbf{P}_{2}\) is a \(k\)-sparse permutation matrix, we have that \(\|\mathbf{z}_{0}\|_{0}\leq k\). Further, from (5), it can be verified that \(\|\mathbf{z}_{0}\|_{\infty}\leq 2\|\mathbf{A}\mathbf{x}_{0}\|_{\infty}\). Hence, the term \([\mathbf{0}_{m}^{\mathrm{T}}\ \mathbf{z}_{0}^{\mathrm{T}}]^{\mathrm{T}}\) could be treated as a sparse outlier, and \(\mathbf{x}_{0}\) can be estimated by solving robust-regression problem as
\[\mathbf{\tilde{x}}_{\mathbf{RR}}=\operatorname*{arg\,min}_{\mathbf{x}\in \mathbb{R}^{d}}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_{1}. \tag{6}\]
However, this formulation and its solution ignore the dependency of \(\mathbf{z}_{0}\) on \(\mathbf{x}_{0}\) and its sparsity. Moreover, they do not make use of the known correspondences.
To address the aforementioned limitations of robust regression, we propose an alternative formulation as
\[\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{d},\mathbf{z}\in \mathbb{R}^{p}}\|\mathbf{y}_{1}-\mathbf{A}_{1}\mathbf{x}\|_{2}^{2}+\|\mathbf{y}_{2}-\mathbf{A}_{2 }\mathbf{x}-\mathbf{z}\|_{2}^{2}+\lambda\|\mathbf{z}\|_{1}. \tag{7}\]
where the first two terms are data-fidelity terms and \(\|\mathbf{z}\|_{1}\) is a sparsity-promoting term with a regularization parameter \(\lambda\). This problem is convex and can be solved using an off-the-shelf solver such as CVXPY [25, 26]. Let \(\mathbf{\tilde{x}}\) be the estimate of \(\mathbf{x}_{0}\) obtained by solving (7). Two natural questions then arise. How far is \(\mathbf{\tilde{x}}\) from \(\mathbf{x}_{0}\)? How does the estimation error scale with the number of correct correspondences \(m\) and other variables such as \(k\), \(p\), and \(d\)? The answers to these questions are provided in the following section, where we present theoretical bounds on the error.
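Before turning to the theory, a minimal CVXPY sketch of the estimator in (7) is given below; the function name, matrix sizes, and the value of \(\lambda\) are illustrative, not the settings used in the experiments.

```python
import cvxpy as cp

def solve_priors_estimator(y1, A1, y2, A2, lam=0.1):
    """Solve (7) for x and z given measurements with and without correspondences."""
    d, p = A1.shape[1], A2.shape[0]
    x, z = cp.Variable(d), cp.Variable(p)
    objective = (cp.sum_squares(y1 - A1 @ x)
                 + cp.sum_squares(y2 - A2 @ x - z)
                 + lam * cp.norm1(z))
    cp.Problem(cp.Minimize(objective)).solve()
    return x.value, z.value
```

The robust regression baseline in (6) can be written analogously by minimizing a single \(\ell_{1}\) data-fidelity term over \(\mathbf{x}\).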
## III Theoretical Guarantee
Consider the optimization problem in (7). An upper bound on the error \(\|\mathbf{\tilde{x}}-\mathbf{x}_{0}\|_{2}\) is presented in the following theorem:
**Theorem 1**.: _Consider the optimization problem (7) and the observation model (1) under assumptions (A1)-(A4). With \(\lambda=4(1+M)\sigma\sqrt{\frac{2\log p}{p}}\) for any \(M\geq 0\), there exist constants \(c_{1},c_{2},\varepsilon\) so that if \(k\leq c_{1}\frac{p-d}{\log\frac{p}{k}}\) and \(\alpha\log\left(p\right)<(\sqrt{N}-\sqrt{d})\) for any \(\alpha>0\), then the following inequality holds with probability at least \(1-2\exp(-c_{2}(p-d))-2p^{-M^{2}}-\exp\left(-\log^{2}\left(p\right)/2\right)-2\exp\left(-\alpha\log\left(p\right)\right)\):_
\[\|\mathbf{\tilde{x}}-\mathbf{x}_{0}\|_{2}\leq\sigma\frac{\sqrt{d+2\sqrt{d\alpha\log p+2 \alpha\log p}}}{\sqrt{m+p}-\sqrt{d}-\alpha\log p}+48(1+M)\sigma\varepsilon^{-1} \frac{(\sqrt{p}+\sqrt{d}+\log p)}{(\sqrt{m+p}-\sqrt{d}-\alpha\log p)^{2}}\frac{ p}{p-d}\sqrt{k\log p}. \tag{8}\]
The proof of the theorem follows the lines of proof in [23], and the details are discussed in the Appendix. A few insights on the upper bound in (8) are as follows.
1. Similar to the bound obtained in [23], the error term breaks into two components here as well. The first term is the error one would have incurred if the correspondences had been fully known \((k=0)\). The second term is the excess error incurred for not knowing the correspondences.
2. In the noiseless case \((\sigma=0)\), perfect reconstruction of \(\mathbf{z}\) is possible with high probability, and hence, the unknown vector \(\mathbf{x}\) can be perfectly reconstructed with high probability.
3. For a fixed number of measurements \(p\), if we get more number of known correspondences \(m\), then the error term falls off as \(\frac{1}{(\sqrt{m+p}-\sqrt{d}-\alpha\log p)}\) + \(\frac{1}{(\sqrt{m+p}-\sqrt{d}-\alpha\log p)^{2}}\).
In a nutshell, the bounds imply that the knowledge of correct correspondences helps improve the estimation accuracy, which is verified by simulation in the next section.
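As a rough numerical illustration of the last point, the snippet below evaluates only the \(m\)-dependent denominators of the two terms in (8) for fixed \(p\), \(d\), and an illustrative \(\alpha\); constants and prefactors are omitted, so only the trend with \(m\) is meaningful.

```python
import numpy as np

d, p, alpha = 100, 150, 0.1
m = np.arange(0, 81, 20)
denom = np.sqrt(m + p) - np.sqrt(d) - alpha * np.log(p)
# first-term and second-term scalings with m (larger m -> smaller factors)
print(np.column_stack([m, 1.0 / denom, 1.0 / denom ** 2]))
```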
## IV Numerical Results
To assess the proposed algorithm and compare it with robust regression, \(\mathbf{x}_{0}\) was generated randomly and was kept fixed. Entries of \(\mathbf{A}\) are sampled independently from \(\mathcal{N}(0,1)\). For a given noise level, (7) is solved using CVXPY. For an objective comparison, we compute the normalized reconstruction error \(\frac{\|\mathbf{\hat{x}}-\mathbf{x}_{0}\|_{2}}{\|\mathbf{x}_{0}\|_{2}}\), where \(\mathbf{\hat{x}}\) is an estimate of \(\mathbf{x}_{0}\) obtained by either the proposed procedure from (7) or the robust regression method from (6). For a given permutation level, \(k/p\), and noise level, the error is averaged over ten randomly generated permutation matrices and 50 independent noise realizations for each permutation matrix. The standard deviation \(\sigma\) of the noise is chosen as the specified noise percentage times the mean absolute value of the entries in the noiseless measurement vector \(\mathbf{A}\mathbf{x}_{0}\).
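A sketch of one such Monte Carlo trial is given below; drawing the \(k\)-sparse permutation by shuffling a random subset of \(k\) indices is one simple way to realize assumption (A4), and the commented estimator call refers to the CVXPY sketch from Section II, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, m, k = 100, 150, 20, 15                 # k/p = 0.1 permutation level
x0 = rng.standard_normal(d)
A = rng.standard_normal((m + p, d))
A1, A2 = A[:m], A[m:]

perm = np.arange(p)                           # identity correspondence
swap = rng.choice(p, size=k, replace=False)
perm[swap] = rng.permutation(swap)            # permute at most k entries

noiseless = A @ x0
sigma = 0.02 * np.mean(np.abs(noiseless))     # 2% noise level, as described above
y1 = A1 @ x0 + sigma * rng.standard_normal(m)
y2 = (A2 @ x0)[perm] + sigma * rng.standard_normal(p)

# x_hat, _ = solve_priors_estimator(y1, A1, y2, A2)        # sketch from Section II
# err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)    # normalized reconstruction error
```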
Our first objective is to assess the effect of the number of correct correspondences in reducing the estimation error. For this simulation we consider \(d=100\), \(k/p=0.1\) and \(2\%\) measurement noise. In Fig. 1(a), the error is plotted as a function of the number of measurements \(p\) when there are no correct correspondences (\(m=0\)). As expected, error reduces as \(p\) increases. In Fig. 1(b), we have shown errors as a function of \(m\) for \(p=110,120\), and \(140\). Comparing errors in Fig. 1(a) with those in Fig. 1(b) for a given \(p\), we note that having correct correspondences significantly reduces the error. For example, for \(p=140\), an addition of only \(m=20\) measurements with known correspondences results in a reconstruction error of \(0.04\), as compared to \(0.16\) for \(m=0\). Alternatively, we infer that for a given error threshold, \(p\) can be reduced by using a few correct correspondences. For example, the combinations \((p=170,m=0)\) and \((p=110,m=40)\) will result in \(0.05\%\) error.
Next, we compare the proposed method and its estimations with those of robust regression for \(d=100\) and \(p=150\). Errors for both the methods for \(2\%\) and \(4\%\) noise levels and different permutation levels \(k/p\) are shown in Fig. 2. Though our methods always result in lower error than robust regression, the difference in performance varies greatly with the permutation level. For instance, at \(k/p=0.1\), the gain of the proposed method compared to
Figure 1: Assessment of the effect of known correct correspondence on reconstruction error: (a) error as a function of \(p\) when \(m=0\). (b) error as a function of \(m\) for different values of \(p\). Known correspondence results in a lower error for a given \(p\).
Figure 2: A comparison of the proposed method and robust regression for \(d=100\), \(p=150\), and permutation level \(k/p\in\{0.1,0.2,0.3,0.4\}\): For low \(k/p\), both methods perform equally well, however, for higher permutation levels, the proposed method results from (7) in lower error compared to robust regression from (6).
robust regression is negligible. But it increases with \(k/p\), that is, as the amount of permutation noise increases. For example, at \(k/p=0.4\), with \(m=80\) known correspondences, the robust regression estimator gives a reconstruction error of \(0.24\), while that with our estimator is \(0.05\) for \(2\%\) noise.
Having demonstrated through simulations that the proposed technique reduces the error and requires fewer measurements, we now turn to an application.
## V Application in Image Alignment
The deformation between pairs of images in biomedical applications is often modeled as a non-rigid motion vector field, which is compactly expressed by a linear combination of some \(d\) low-frequency 2D Discrete Cosine Transform (DCT) basis vectors [27]. Consider a reference image \(I\) and a moving image \(M\) which is a motion-deformed version of \(I\), both of size \(H\times W\). We define \(\boldsymbol{u_{1}},\boldsymbol{u_{2}}\in\mathbb{R}^{HW}\) as the vectorized displacement fields from \(I\) to \(M\) in the \(X,Y\) directions respectively. Let \(\boldsymbol{U}\in\mathbb{R}^{HW\times d},d\ll HW\) denote the sub-matrix consisting of the first \(d\) columns of the 2D DCT matrix of size \(HW\) by \(HW\). Then, we express \(\boldsymbol{u_{1}}=\boldsymbol{U\theta_{1}},\boldsymbol{u_{2}}=\boldsymbol{U \theta_{2}}\), where \(\boldsymbol{\theta_{1}},\boldsymbol{\theta_{2}}\in\mathbb{R}^{d}\) are unknown 2D-DCT coefficient vectors. In some cases, we have displacement vector information at only a subset of pixels \(S\) in \(I\), as these can be obtained by salient feature point matching [28] or selected by domain experts. Then we have \(\boldsymbol{u_{1}}\big{|}_{S}=\boldsymbol{U}\big{|}_{S}\boldsymbol{\theta_{1} },\boldsymbol{u_{2}}\big{|}_{S}=\boldsymbol{U}\big{|}_{S}\boldsymbol{\theta_{ 2}}\), where \(\boldsymbol{U}\big{|}_{S}\in\mathbb{R}^{|S|\times d}\) contains the rows from \(\boldsymbol{U}\) corresponding to
Fig. 3: (a): Base image \(I\), (b): motion-deformed image \(M\) using ground truth motion, reconstructed motion-deformed images using point-pairs from (c): \(S_{1}\cup S_{2}\) (method C3), (d): only \(S_{1}\) (method C1), (e): only \(S_{2}\) (method C2). The reconstruction (c) looks visually more accurate.
the pixel locations in \(S\) and \(\boldsymbol{u_{1}}\big{|}_{S},\boldsymbol{u_{2}}\big{|}_{S}\in\mathbb{R}^{|S|}\) are sub-vectors of \(\boldsymbol{u_{1}},\boldsymbol{u_{2}}\) respectively containing only vectors from locations in \(S\). The goal is to estimate \(\boldsymbol{\theta_{1}},\boldsymbol{\theta_{2}}\) given \(S\), \(\boldsymbol{u_{1}}\big{|}_{S}\), \(\boldsymbol{u_{2}}\big{|}_{S}\). To this end, we use SIFT descriptors [28] to obtain a set \(S_{1}\) of \(p\) key-point pairs in the two images. Further, we accurately annotate a set of \(m\) corresponding point-pairs in the images \(I\) and \(M\), which we refer to as \(S_{2}\). The correspondences of the \(m\) point-pairs in \(S_{2}\) are known _accurately_, whereas the correspondences in a _small_ number of the \(p\) point-pairs in \(S_{1}\) may be incorrect due to errors in SIFT-based point matching. These errors can be modeled as sparse permutations. Note that the indices of the point-pairs in the erroneous subset of \(S_{1}\) are also unknown. In the presence of the underlying permutation noise, we can write the modified observation model as \(\boldsymbol{u_{1}}\big{|}_{S_{1}\cup S_{2}}=\boldsymbol{P_{1}}\boldsymbol{U} \big{|}_{S_{1}\cup S_{2}}\boldsymbol{\theta_{1}},\boldsymbol{u_{2}}\big{|}_{S _{1}\cup S_{2}}=\boldsymbol{P_{2}}\boldsymbol{U}\big{|}_{S_{1}\cup S_{2}} \boldsymbol{\theta_{2}}\), where \(\boldsymbol{P_{1}},\boldsymbol{P_{2}}\) are unknown permutation matrices. Note that the correspondences of \(m\) measurements are known, while the correspondences of the remaining \(p\) measurements may contains a small number of errors, and hence the aforementioned model is similar to (1). Therefore, we can use our framework from (7) to estimate \(\boldsymbol{\theta_{1}},\boldsymbol{\theta_{2}}\) given \(S_{1},S_{2}\), \(\boldsymbol{u_{1}}\big{|}_{S_{1}\cup S_{2}},\boldsymbol{u_{2}}\big{|}_{S_{1} \cup S_{2}}\). In this experiment, we set \(d=10\) and synthetically generate motion using \(\boldsymbol{u_{1}}=\boldsymbol{U}\boldsymbol{\theta_{1}},\boldsymbol{u_{2}}= \boldsymbol{U}\boldsymbol{\theta_{2}}\). We use the SIFT descriptor technique to obtain a set of \(p=179\) key-point pairs in the two images, which form the set \(S_{1}\). Further, we accurately annotate a set of \(m=8\) point-pairs, which form \(S_{2}\). Note that we have kept \(m<d\) to avoid the trivial scenario, where reconstruction can be done only using \(S_{2}\), and also because manual annotation of a larger number of point-pairs is often not feasible. We reconstruct the motion-deformed image by estimating \(\boldsymbol{\theta_{1}},\boldsymbol{\theta_{2}}\), and thus \(\boldsymbol{u_{1}},\boldsymbol{u_{2}}\), and then applying this motion to the reference image \(I\). We estimate \(\boldsymbol{\theta_{1}},\boldsymbol{\theta_{2}}\) in 3 ways via the model from (7): (C1) using only the point-pairs from \(S_{1}\), (C2) using only the manually annotated point-pairs from \(S_{2}\), and (C3) using point-pairs from \(S_{1}\cup S_{2}\). The reference image \(I\), the motion-deformed image \(M\) using ground truth motion, as well as the motion-deformed images using motion obtained via C1, C2, C3, are plotted in Fig. 5. The normalized mean squared error (NMSE) between the original motion-deformed image and the reconstructed motion-deformed image for C1, C2, C3 are respectively 0.008, 0.005 and 0.002, showing the superior performance of C3. Also, observe the overlay images of the ground-truth motion-deformed image and the
motion-deformed image using the motion estimates from methods C3, C1 and C2. These are plotted in Fig. 6. The overlay for C3 shows significantly fewer red or green edges as compared to the other two.
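To make the workflow behind methods C1-C3 concrete, the following is a minimal sketch, written for illustration rather than taken from the authors' code, of how the joint objective of Eq. (9) can be minimized by alternating an exact least-squares update in the regression vector with a soft-thresholding update in the sparse permutation-error vector. All function and variable names below are our own assumptions.

```python
# Alternating minimization of L(x, z) = ||y1 - A1 x||^2 + ||y2 - A2 x - z||^2 + lam1*||z||_1
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def joint_estimate(A1, y1, A2, y2, lam1, n_iter=200):
    """A1, y1: rows with accurately known correspondences (e.g. the set S2).
    A2, y2: rows whose correspondences may contain sparse errors (e.g. S1)."""
    d = A1.shape[1]
    x = np.zeros(d)
    z = np.zeros(A2.shape[0])
    A = np.vstack([A1, A2])
    for _ in range(n_iter):
        rhs = np.concatenate([y1, y2 - z])
        x, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # exact minimizer over x for fixed z
        z = soft_threshold(y2 - A2 @ x, lam1 / 2.0)   # exact minimizer over z for fixed x
    return x, z
```

For the motion-estimation experiment, the rows of the two design matrices would hold the DCT basis vectors evaluated at the annotated locations in \(S_{2}\) and at the SIFT key-point locations in \(S_{1}\), respectively, and the routine would be run once per motion component to obtain estimates of \(\boldsymbol{\theta_{1}}\) and \(\boldsymbol{\theta_{2}}\).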
## VI Conclusion
We proposed an algorithm to estimate the unknown vector in unlabelled sensing with sparse permutations given a small number of measurements with known correspondences. We derived a theoretical upper bound on the reconstruction error. Through simulations, we showed that a few measurements with known correspondences can significantly improve the reconstruction error, or reduce the sample complexity for the same reconstruction error as obtained without known correspondences. We found several regimes where our estimator significantly outperforms robust regression techniques while maintaining an acceptable level of reconstruction error. Lastly, we considered an application in DCT-based motion estimation. We showed that a few manually annotated point-pairs with accurate
Fig. 5: (a): Base image \(I\), (b): motion-deformed image \(M\) using ground truth motion, reconstructed motion-deformed images using point-pairs from (c): \(S_{1}\cup S_{2}\) (method C3), (d): only \(S_{1}\) (method C1), (e): only \(S_{2}\) (method C2). The reconstruction (c) looks visually more accurate.
correspondence, along with the SIFT key point-pairs (where some correspondences can be erroneous), can improve motion estimation.
## Appendix
Proof.: To prove Theorem 1, the cost function to be minimized is given as
\[L(\mathbf{x},\mathbf{z})=\|\mathbf{y_{1}}-\mathbf{A_{1}}\mathbf{x}\|_{2}^{2}+\|\mathbf{y_{2}}-\mathbf{A_{2}} \mathbf{x}-\mathbf{z}\|_{2}^{2}+\lambda_{1}\|\mathbf{z}\|_{1}. \tag{9}\]
Next, we have that
\[\begin{split}\mathbf{\tilde{x}},\mathbf{\tilde{z}}&= \operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{d},\mathbf{z}\in\mathbb{R}^{p}}L( \mathbf{x},\mathbf{z})\\ &=\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{d},\mathbf{z}\in \mathbb{R}^{p}}\|\mathbf{y_{1}}-\mathbf{A_{1}}\mathbf{x}\|_{2}^{2}+\|\mathbf{y_{2}}-\mathbf{A_{2}} \mathbf{x}-\mathbf{z}\|_{2}^{2}+\lambda_{1}\|\mathbf{z}\|_{1}.\end{split} \tag{10}\]
We perform a reparameterization \(\mathbf{e}=\mathbf{z}/\sqrt{p}\) and write the above equation as
\[\mathbf{\tilde{x}},\mathbf{\tilde{e}}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^ {d},\mathbf{e}\in\mathbb{R}^{p}}\frac{1}{p}\|\mathbf{y_{1}}-\mathbf{A_{1}}\mathbf{x}\|_{2}^{2} +\frac{1}{p}\|\mathbf{y_{2}}-\mathbf{A_{2}}\mathbf{x}-\sqrt{p}\mathbf{e}\|_{2}^{2}+\lambda\| \mathbf{e}\|_{1}, \tag{11}\]
where \(\lambda=\frac{\lambda_{1}}{\sqrt{p}}>0\). The above optimization involves minimizing over two variables. To simplify things, we split the vector \(\mathbf{y_{2}}-\mathbf{A_{2}}\mathbf{x}-\sqrt{p}\mathbf{e}\), which is a function of both \(\mathbf{x}\) and \(\mathbf{e}\), into two components by projecting it onto the column space of \(\mathbf{A_{2}}\) and onto its orthogonal complement. Given that the entries of \(\mathbf{A_{2}}\) are i.i.d. zero mean unit variance Gaussian random variables, \(\mathbf{A_{2}}\) has full column rank with probability 1. The projection matrix \(\mathbf{H}\) which projects onto the column space of \(\mathbf{A_{2}}\) is given as \(\mathbf{H}=\mathbf{A_{2}}(\mathbf{A_{2}^{T}}\mathbf{A_{2}})^{-1}\mathbf{A_{2}^{T}}\). Let \(\mathbf{H}^{\perp}\) denote a projection matrix which projects onto the orthogonal complement of the column space of \(\mathbf{A_{2}}\). Then, by using the decomposition
Fig. 6: Overlay of ground truth motion-deformed image (R channel) and motion-deformed image (G channel) using (a): method C3, (b): method C2 and (c): method C1. Observe alignment using C1, C2 is worse (many red or green edges, bordered by black boxes) than that with C3. The B channel of the overlay images is set to 0.
\[\begin{split}\mathbf{A}^{T}\mathbf{A}\tilde{\mathbf{x}}&=\mathbf{A}^{T}\mathbf{h}\\ &=\begin{bmatrix}\mathbf{A}_{1}^{T}&\mathbf{A}_{2}^{T}\end{bmatrix}\begin{bmatrix}\mathbf{y}_{1}\\ \mathbf{H}(\mathbf{y}_{2}-\sqrt{p}\tilde{\mathbf{e}})\end{bmatrix}\\ &=\mathbf{A}_{1}^{T}\mathbf{y}_{1}+\mathbf{A}_{2}^{T}\mathbf{H}(\mathbf{y}_{2}-\sqrt{p}\tilde{\mathbf{e}})\\ &=\mathbf{A}_{1}^{T}\mathbf{A}_{1}\mathbf{x}_{0}+\mathbf{A}_{1}^{T}\mathbf{\epsilon}_{1}+\mathbf{A}_{2}^{T}(\mathbf{y}_{2}-\sqrt{p}\tilde{\mathbf{e}}+\sqrt{p}\mathbf{e}_{0}-\sqrt{p}\mathbf{e}_{0})\\ &=\mathbf{A}_{1}^{T}\mathbf{A}_{1}\mathbf{x}_{0}+\mathbf{A}_{1}^{T}\mathbf{\epsilon}_{1}+\mathbf{A}_{2}^{T}(\mathbf{y}_{2}-\sqrt{p}\mathbf{e}_{0})+\sqrt{p}\mathbf{A}_{2}^{T}(\mathbf{e}_{0}-\tilde{\mathbf{e}})\\ &=\mathbf{A}_{1}^{T}\mathbf{A}_{1}\mathbf{x}_{0}+\mathbf{A}_{1}^{T}\mathbf{\epsilon}_{1}+\mathbf{A}_{2}^{T}(\mathbf{A}_{2}\mathbf{x}_{0}+\mathbf{\epsilon}_{2})+\sqrt{p}\mathbf{A}_{2}^{T}(\mathbf{e}_{0}-\tilde{\mathbf{e}})\\ &=\mathbf{A}^{T}\mathbf{A}\mathbf{x}_{0}+\mathbf{A}^{T}\mathbf{\epsilon}+\sqrt{p}\mathbf{A}_{2}^{T}(\mathbf{e}_{0}-\tilde{\mathbf{e}})\end{split} \tag{16}\]
or
\[\mathbf{A}^{T}\mathbf{A}(\tilde{\mathbf{x}}-\mathbf{x}_{0})=\mathbf{A}^{T}\mathbf{\epsilon}+\sqrt{p} \mathbf{A}_{2}^{T}(\mathbf{e}_{0}-\tilde{\mathbf{e}}) \tag{17}\]
or
\[\mathbf{\tilde{x}}-\mathbf{x}_{0}=\mathbf{A}^{\dagger}\mathbf{\epsilon}+\sqrt{p}(\mathbf{A}^{T}\bm {A})^{-1}\mathbf{A}_{2}^{T}(\mathbf{e}_{0}-\tilde{\mathbf{e}}). \tag{18}\]
We upper-bound the quantity on the right using the standard norm inequalities.
\[\|\tilde{\mathbf{x}}-\mathbf{x_{0}}\|_{2} \leq\|\mathbf{A^{\dagger}}\mathbf{\epsilon}\|_{2}+\sqrt{p}\|\mathbf{(A^{T}A)^{-1 }A_{2}^{T}}\|_{2}\|\mathbf{e_{0}}-\mathbf{\tilde{e}}\|_{2} \tag{19}\] \[\leq\|\mathbf{A^{\dagger}}\mathbf{\epsilon}\|_{2}+\sqrt{p}\frac{\|\mathbf{A_{2 }}\|_{2}}{(\sigma_{\text{min}}(\mathbf{A}))^{2}}\|\mathbf{e_{0}}-\mathbf{\tilde{e}}\|_{2}.\]
In order to obtain a bound in terms of \(m,p,d,k\) and other parameters, we use a series of concentration inequalities to bound the quantities on the RHS of (19). To this end, we use Lemmas 1-4 provided after the proof. From Lemma 1, using \(\mathbf{A_{2}}\) and \(\mathbf{A}\) in the role of \(\mathbf{X}\), for any \(t_{1},t_{2}>0\) such that \(t_{2}<\sqrt{N}-\sqrt{d}\), we have that
\[\mathbb{P}(\|\mathbf{A_{2}}\|_{2}\leq\sqrt{p}+\sqrt{d}+t_{1})\geq 1-\exp(-t_{1}^{ 2}/2), \tag{20}\]
and
\[\mathbb{P}\bigg{(}\frac{1}{\sigma_{\text{min}}(\mathbf{A})}\leq\frac{1}{\sqrt{N} -\sqrt{d}-t_{2}}\bigg{)}\geq 1-2\exp(-t_{2}^{2}/2). \tag{21}\]
From Lemma 3, using \(\mathbf{A}\) in the role of \(\mathbf{X}\) and \(\mathbf{\epsilon}\) in the role of \(\mathbf{g}\) with \(t=t_{2}\), we have
\[\mathbb{P}\bigg{(}\|\mathbf{A^{\dagger}}\mathbf{\epsilon}\|_{2}\leq\sigma\frac{\sqrt{d+2\sqrt{t_{2}d}+2t_{2}}}{\sigma_{\text{min}}(\mathbf{A})}\bigg{)}\geq 1-\exp(-t_{2}). \tag{22}\]
We use Lemma 4 to upper bound \(\|\mathbf{e_{0}}-\mathbf{\tilde{e}}\|_{2}\), and the concentration inequalities (20), (21) and (22) with \(t_{1}=\log p\) and \(t_{2}=\alpha\log p\) for some \(\alpha>0\) to bound the remaining quantities in (19). Combining these bounds concludes the proof and produces the following final bound:
\[\|\mathbf{\tilde{x}}-\mathbf{x}_{0}\|_{2}\leq\sigma\frac{\sqrt{d+2\sqrt{d\alpha\log p }+2\alpha\log p}}{\sqrt{m+p}-\sqrt{d}-\alpha\log p}+48(1+M)\sigma\varepsilon^ {-1}\frac{(\sqrt{p}+\sqrt{d}+\log p)}{(\sqrt{m+p}-\sqrt{d}-\alpha\log p)^{2} }\frac{p}{p-d}\sqrt{k\log p}. \tag{23}\]
**Lemma 1** ([29]).: _Let \(\mathbf{X}\) be an \(m\times n\) Gaussian random matrix with i.i.d. \(\mathcal{N}(0,1)\) entries. Then, for any \(t>0\), we have_
\[\mathbb{P}(\|\mathbf{X}\|_{2}\geq\sqrt{m}+\sqrt{n}+t)\leq\exp(-t^{2}/2)\]
_and_
\[\mathbb{P}(\sigma_{\text{min}}(\mathbf{X})\geq\sqrt{m}-\sqrt{n}-t)\geq 1-2\exp(-t^{ 2}/2).\]
**Lemma 2** ([30]).: _Let \(\mathbf{Z}\) be an \(m\times n\) matrix, define \(\mathbf{\Gamma}:=\mathbf{Z^{T}Z}\) and \(\mathbf{g}\sim\mathcal{N}(0,\sigma^{2}I_{n})\). Then, for any \(t>0\), we have_
\[\mathbb{P}(\|\mathbf{Z}\mathbf{g}\|_{2}^{2}>\sigma^{2}(\text{tr}(\mathbf{\Gamma})+2\sqrt{ \text{tr}(\mathbf{\Gamma}^{2})t}+2\|\mathbf{\Gamma}\|_{2}t))\leq\exp{(-t)}.\]
**Lemma 3**.: _Let \(\mathbf{X}\) be an \(m\times n\) matrix and \(\mathbf{g}\sim\mathcal{N}(0,\sigma^{2}I_{n})\). Then, for any \(t>0\), we have_
\[1-\mathbb{P}\bigg{(}\|\mathbf{X^{\dagger}}\mathbf{g}\|_{2}^{2}\leq\sigma^{2}\bigg{(}\frac{n}{\sigma_{\text{min}}^{2}(\mathbf{X})}+\frac{2\sqrt{nt}}{\sigma_{\text{min}}^{2}(\mathbf{X})}+\frac{2t}{\sigma_{\text{min}}^{2}(\mathbf{X})}\bigg{)}\bigg{)}\leq\exp(-t).\]
Proof.: We use Lemma 2 with \(\mathbf{X}^{\dagger}\) in the role of \(\mathbf{Z}\). Then, \(\mathbf{\Gamma}=\mathbf{X}(\mathbf{X}^{T}\mathbf{X})^{-2}\mathbf{X}^{T}\). We use the three results given in Appendix G in [23] to get
1. \(\operatorname{tr}(\mathbf{\Gamma})\leq\frac{n}{\sigma_{\text{min}}^{2}(\mathbf{X})}\),
2. \(\|\mathbf{\Gamma}\|_{2}=\frac{1}{\sigma_{\text{min}}^{2}(\mathbf{X})}\),
3. \(\sqrt{\operatorname{tr}(\mathbf{\Gamma}^{2})}\leq\frac{\sqrt{n}}{\sigma_{\text{min}}^{2}(\mathbf{X})}\).
which are then plugged back in Lemma 2 to obtain the statement of the current lemma.
**Lemma 4** (See Lemma 4 and Lemma 6 in [23]).: _Let \(\mathbf{A_{2}}\) be a \(p\times d\) Gaussian random matrix with i.i.d. \(\mathcal{N}(0,1)\) entries. We assume the observation model \(\mathbf{y}_{2}=\mathbf{P}_{2}\mathbf{A}_{2}\mathbf{x}_{0}+\mathbf{\epsilon}_{2}\), where \(\mathbf{x_{0}}\in\mathbb{R}^{d}\) is unknown, \(\mathbf{P_{2}}\in\mathbb{R}^{p\times p}\) is a \(k\)-sparse permutation matrix and \(\mathbf{\epsilon_{2}}\in\mathbb{R}^{p}\) is a noise vector with i.i.d. \(\mathcal{N}(0,\sigma^{2})\) entries. We define the permutation error vector as \(\mathbf{z}_{0}=\mathbf{P}_{2}\mathbf{A}_{2}\mathbf{x}_{0}-\mathbf{A}_{2}\mathbf{x}_{0}=\sqrt{p}\mathbf{e}_{0}\). Define \(\mathbf{\tilde{e}}:=\underset{\mathbf{e}\in\mathbb{R}^{p}}{\operatorname{arg\,min}}\frac{1}{p}\|\mathbf{H}^{\perp}\big{(}\mathbf{y}_{2}-\sqrt{p}\mathbf{e}\big{)}\|_{2}^{2}+\lambda\|\mathbf{e}\|_{1}\), where \(\mathbf{H}^{\perp}\) denotes a projection matrix which projects onto the orthogonal complement of the column space of \(\mathbf{A_{2}}\). If \(\lambda=4(1+M)\sigma\sqrt{\frac{2\log p}{p}}\) for any \(M\geq 0\), there exist constants \(c_{1},c_{2},\varepsilon\) so that if \(k\leq c_{1}\frac{p-d}{\log\frac{p}{p}}\), then the following inequality holds with probability at least \(1-2\exp(-c_{2}(p-d))-2p^{-M^{2}}\):_
\[\|\mathbf{\tilde{e}}-\mathbf{e_{0}}\|_{2}\leq 48(1+M)\sigma\frac{p}{p-d}\varepsilon^{-1} \sqrt{\frac{k\log p}{p}}.\]
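As a side note, the estimator \(\mathbf{\tilde{e}}\) appearing in Lemma 4 can be computed with a standard proximal-gradient (ISTA) iteration, since the smooth part of its objective has Lipschitz constant 2. The sketch below is our own illustration under that observation, not code from [23] or from this paper; all variable names are assumptions.

```python
# ISTA for  min_e (1/p)*||Hperp @ (y2 - sqrt(p)*e)||^2 + lam*||e||_1
import numpy as np

def estimate_permutation_error(A2, y2, lam, n_iter=500):
    p = A2.shape[0]
    # Projector onto the orthogonal complement of the column space of A2
    Hperp = np.eye(p) - A2 @ np.linalg.pinv(A2)
    e = np.zeros(p)
    step = 0.5                      # 1/L, with Lipschitz constant L = 2 here
    for _ in range(n_iter):
        grad = -(2.0 / np.sqrt(p)) * Hperp @ (y2 - np.sqrt(p) * e)
        v = e - step * grad
        e = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold
    return e
```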
|
2305.08700 | Synthetic $\mathbb{Z}_2$ gauge theories based on parametric excitations
of trapped ions | We present a detailed scheme for the analog quantum simulation of Z2 gauge
theories in crystals of trapped ions, which exploits a more efficient hybrid
encoding of the gauge and matter fields using the native internal and motional
degrees of freedom. We introduce a versatile toolbox based on parametric
excitations corresponding to different spin-motion-coupling schemes that induce
a tunneling of the ions vibrational excitations conditioned to their internal
qubit state. This building block, when implemented with a single trapped ion,
corresponds to a minimal Z2 gauge theory, where the qubit plays the role of the
gauge field on a synthetic link, and the vibrational excitations along
different trap axes mimic the dynamical matter fields two synthetic sites, each
carrying a Z2 charge. To evaluate their feasibility, we perform numerical
simulations of the state-dependent tunneling using realistic parameters, and
identify the leading sources of error in future experiments. We discuss how to
generalise this minimal case to more complex settings by increasing the number
of ions, moving from a single link to a Z2 plaquette, and to an entire Z2
chain. We present analytical expressions for the gauge-invariant dynamics and
the corresponding confinement, which are benchmarked using matrix product state
simulations. | O. Băzăvan, S. Saner, E. Tirrito, G. Araneda, R. Srinivas, A. Bermudez | 2023-05-15T15:01:09Z | http://arxiv.org/abs/2305.08700v2 | # Synthetic \(\mathbb{Z}_{2}\) gauge theories based on parametric excitations of trapped ions
###### Abstract
We present a detailed scheme for the implementation of \(\mathbb{Z}_{2}\) gauge theories with dynamical bosonic matter using analog quantum simulators based on crystals of trapped ions. We introduce a versatile toolbox based on a state-dependent parametric excitation, which can be implemented using different interactions that couple the ions' internal qubit states to their motion, and induces a tunneling of the vibrational excitations of the crystal mediated by the trapped-ion qubits. To evaluate the feasibility of this toolbox, we perform numerical simulations of the considered schemes using realistic experimental parameters. This building block, when implemented with a single trapped ion, corresponds to a minimal \(\mathbb{Z}_{2}\) gauge theory on a synthetic link where the qubit resides, playing the role of the gauge field. The vibrational excitations of the ion along different trap axes mimic the dynamical matter fields carrying a \(\mathbb{Z}_{2}\) charge. We discuss how to generalise this minimal case to more complex settings by increasing the number of ions. We describe various possibilities which allow us to move from a single \(\mathbb{Z}_{2}\) plaquette to full \(\mathbb{Z}_{2}\) gauge chains. We present analytical expressions for the gauge-invariant dynamics and confinement, which are benchmarked using matrix product state simulations.
###### Contents
* I **Introduction**
* II **Synthetic dimensions: lattice field theories under background gauge fields**
* II.1 Parametric tunnelling and synthetic dimensions
* II.2 Peierls ladders with trapped-ion chains
* III **Dynamical gauge fields: the \(\mathbb{Z}_{2}\) theory on a link**
* III.1 State-dependent parametric tunnelling
* III.2 One-boson sector: Rabi oscillations and matter-gauge-field correlated dynamics
* III.3 Two-boson sector: Dark states and entanglement between modes of the matter fields
* IV **Trapped-ion toolbox: phonons and qubits**
* IV.1 Scheme I: Analog scheme for the \(\mathbb{Z}_{2}\) link
* IV.1.1 Light-shift-type parametric tunneling
* IV.1.2 Molmer-Sorensen-type parametric tunneling
* IV.1.3 Implementation of the electric-field term
* IV.1.4 Experimental considerations
* IV.2 Scheme II: Pulsed scheme for the \(\mathbb{Z}_{2}\) link
* IV.2.1 Orthogonal-force parametric tunneling
* IV.2.2 Implementation of the electric-field term
* IV.2.3 Experimental considerations
* IV.3 \(\mathbb{Z}_{2}\) gauge link scheme comparison and other sources of noise
* IV.4 Comparison with neutral atoms
* V **Minimal plaquettes and synthetic dimensional reduction for a \(\mathbb{Z}_{2}\) gauge chain**
* V.1 \(\mathbb{Z}_{2}\) plaquette: Wegner-Wilson and 't Hooft loops for gauge-field entanglement
* V.2 \(\mathbb{Z}_{2}\) chain: synthetic dimensional reduction
* V.2.1 One-boson sector: Wannier-Stark localisation
* V.2.2 Two-boson sector: Wannier-Stark confinement
* V.2.3 Half-filled sector: Partial string breaking
* VI **Conclusions and Outlook**
* A Quadrupole light-shift scheme
## I Introduction
In quantum many-body physics, a major research effort has been devoted to identifying the guiding principles that explain how a rich variety of complex collective phenomena can emerge from apparently-simple microscopic models [1]. In this context, the invariance of a physical model under a specific global symmetry and its spontaneous breakdown [2] have turned into a key mechanism to understand and characterise many important phases of matter. In parallel, understanding how these global symmetries can be gauged, i.e. how they can be upgraded to local symmetries through the introduction of additional gauge fields [3], has been of paramount importance in our efforts to unveil the fundamental laws of nature [4]. Since equilibrium states can only display a non-zero expectation value for a gauge-invariant observable [5], these local symmetries cannot be spontaneously broken, and one must look for different principles and characterisation tools to understand possible phases in gauge theories. For instance, understanding the specific mechanism for the (de)confinement of quarks [6], which is a non-perturbative phenomenon that can be addressed directly on a lattice [7], could be relevant in the search for other exotic forms of matter [8].
Understanding the ordering mechanisms underlying these exotic phases is a very complicated open problem in the standard model of particle physics [9], which has been partially hindered by current limitations of numerical lattice approaches in addressing finite-density regimes and real-time effects [10; 11]. One can gain important insights by looking
at lower dimensions and simpler gauge groups, where tools that could then be ported to the more complex scenarios can be developed and benchmarked. For example, the study of gauge theories in one spatial dimension has been an important step to advance our understanding of confinement [12; 13; 14; 15; 16]. In another example, exploring models with a discrete gauge group, such as the pure \(\mathbb{Z}_{2}\) gauge theory in two spatial dimensions [17], has identified the importance of non-local order parameters, the so-called Wegner-Wilson loops [17; 7], to characterise a confinement-deconfinement phase transition. The deconfined phase of this model displays an exotic collective ordering, so-called topological order [18; 19], with a ground state that has two important features. First, it has a degeneracy that depends on topological invariants related to the homology of electric- or magnetic-like field lines on the manifold in which the model is defined. Second, it displays long-range entanglement in spite of having an energy gap which, typically, would be associated to an exponential decay of correlations and short-range entanglement.
Let us now review some aspects of these discrete-group lattice gauge theories that also make them interesting from a broader perspective. First of all, discrete gauge theories can arise in long-wavelength descriptions of condensed-matter models, e.g. high-temperature superconductivity and frustrated magnetism [20; 21]. In the Hamiltonian formulation of lattice gauge theories [22], space is discretised, and the matter and gauge fields are represented by operators defined on the sites and links of the lattice, respectively. One typically works in Weyl's temporal gauge, where there is a residual redundancy associated to the local gauge symmetries. Using Gauss' law, one restricts the dynamics to a specific super-selection sector of the full Hilbert space. On the other hand, from the perspective of effective models in condensed matter [23], other possible super-selection sectors can arise, which are described by specific distributions of effective static charges that act like a background [24]. Furthermore, one can even design the Hamiltonian in such a way that Gauss' law is not a strict constraint, but rather an energy penalty that is effective in the ground state but can be violated to create certain excitations [25]. This allows one to define both magnetic- and electric-type excitations, which have another characteristic property of topological order: mutual anyonic statistics. Understanding the robustness of the ground state under external perturbations [26; 27], as well as the creation and subsequent dynamics of these excitations, is relevant in the context of topological quantum error correction [28; 29]. Some of these perturbations [30] can actually be mapped onto the problem of \(\mathbb{Z}_{2}\) gauge fields coupled to matter [31]. It is worth mentioning that these \(\mathbb{Z}_{2}\) gauge theories with a soft Gauss' constraint can also arise in perturbative regimes of more-standard models of quantum magnetism [32], which are actively being searched for in certain solid-state materials [33; 34].
Due to all of these cross-disciplinary connections between high-energy physics, condensed matter, and quantum information, the study of discrete lattice gauge theories, including higher-dimensional \(\mathbb{Z}_{d}\) gauge fields [35] and their coupling to distinct types of matter, has seen a considerable increase of interest in recent years [36; 37; 38; 39; 40; 41; 42; 43; 44]. There has also been remarkable progress in one-dimensional [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56] and quasi-one-dimensional [57; 58; 59; 60; 61] cases. We note that, in the limit \(d\rightarrow\infty\), one can recover the physics of a continuous gauge group from these discrete gauge theories, such that these studies connect to the quantum-electrodynamics sector of the standard model in reduced spacetime dimensions. In this respect, we should also mention the studies of lattice gauge theories based on the "quantum-link" approach [62; 63; 64]. In this approach, the gauge degrees of freedom belong again to a finite-dimensional Hilbert space but, this time, one can represent directly the continuous gauge group by using link operators corresponding to \((2S+1)\)-dimensional representations of the angular momentum, i.e., spins. There has been a considerable resurgence of interest in these link models in recent years, both for Abelian and non-Abelian gauge groups [65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82].
The focus on discrete gauge theories has been further encouraged by key advances in the field of quantum simulations [83; 84; 85; 86; 87]. Starting with the pioneering proposals for the digital [88; 89; 90; 91] and analog [92; 93; 94; 95; 96; 97; 98] quantum simulation of lattice gauge theories with ultracold atoms in optical lattices, a considerable effort has been devoted to pushing this research topic in various directions (see reviews [99; 100; 101; 102; 103; 104; 105; 106]). These advances have led to the first experimental quantum simulators for lattice gauge theories [107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128], hinting at a promising future in the near term. We would like to discuss briefly the potential of this type of experiment from the perspective of quantum advantage. Quantum simulators do not suffer from the sign problem underlying the aforementioned limitations of numerical Monte Carlo approaches [10; 11], even when dealing with fermionic matter at finite densities and real-time dynamics. Therefore, they have the potential of addressing certain problems that have remained elusive for decades. In fact, solving the sign problem lies in the class of \(NP\) (nondeterministic polynomial time)-hard problems [10], such that no polynomial-time classical algorithm is likely to be found. This strengthens the potential of quantum simulators as a way to go beyond the capabilities of classical computers and demonstrate quantum advantage in a problem of practical relevance. Moreover, recent works have shown that digital quantum simulations for the dynamics of even simpler lattice field theories, such as the self-interacting scalar field, are provably among the hardest problems that can be solved efficiently with quantum hardware [129; 130; 131]. Computing the vacuum persistence amplitude, which gives access to arbitrary Feynman propagators via the generating functional, is a _BQP_ (bounded-error quantum polynomial time)-hard problem. Its solution would allow one to solve any other problem that is solvable in polynomial time by a quantum computer. Unless the _BQP_ complexity class reduces to its classical analog, computing the generating function of a quantum field theory is already a problem that cannot be addressed efficiently using classical devices, but requires instead quantum-mechanical ones [132; 133]. We note that this complexity-class collapse is very unlikely, as it would imply that any quantum computation can be efficiently simulated by a classical computer, which is not believed to be the case within our current understanding.
Let us close this general introduction by commenting further on the experimental progress of quantum simulators for
gauge theories. Experimental realisations based on the concatenation of gates, which exploit optimised encodings for the discretised fields using the underlying symmetries and Gauss' law, have already allowed for the implementation of several digital quantum simulations and variational quantum algorithms of lattice gauge theories [117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 128]. Although less flexible, the experiments for the analog quantum simulation of gauge fields [110, 110, 112, 114, 115, 116, 123, 127] are, in principle, more amenable to scaling to larger systems even in the presence of noise and experimental imperfections. This is due to the way errors accumulate. For digital simulators, the errors grow fast when enlarging the simulated system and extending the simulated time, since the number of imperfect gates that must be used increases. For the typical intensive observables of interest, however, the error of analog simulators does not increase with the simulated time and lattice volume as fast as in the case of their digital concatenated-gate counterparts. This difference has been recently assessed quantitatively for condensed-matter simulators [134], and should hold similarly for gauge theories.
In this work, we thus focus on analog quantum simulators of \(\mathbb{Z}_{2}\) lattice gauge theories coupled to dynamical matter. In previous realisations, this gauge theory has only been addressed in the single-link cold-atom experiments [110], and superconducting-qubit arrays [123]. Unfortunately, the latter is limited by additional microscopic terms that explicitly break the gauge symmetry, whereas the former has not been scaled to larger system sizes yet. In this manuscript, we present in detail a new toolbox for quantum simulations of \(\mathbb{Z}_{2}\) lattice gauge theories coupled to matter using trapped-ion systems. This toolbox includes simple building blocks that can be realised with state-of-the-art ion trap technologies, but also more scalable schemes that could be implemented upon further technological developments in the near term. We believe that the results presented here open an interesting direction to extend the experiments on \(\mathbb{Z}_{2}\) lattice gauge theories coupled to matter to more interesting and challenging scenarios.
Our results are presented as follows. In Sec. II, we describe the underlying idea of a scheme to induce background gauge fields using a parametric frequency conversion. We discuss how this scheme can be implemented using the transverse vibrational modes in a trapped-ion chain, giving rise to a synthetic Hall ladder for the bosonic modes. In Sec. III, based on the previous discussion, we show how one can generate a gauge-invariant Hamiltonian where a \(\mathbb{Z}_{2}\) gauge field mediates the tunnelling between a pair of bosonic modes. In the single-boson sector, where the tunnelling of the boson is correlated to the stretching/compressing of the electric field at the link, the dynamics corresponds to detuned Rabi oscillations. In the two-boson sector, we show that the system gives rise to bright and dark states, and that the gauge-invariant dynamics leads to mode entanglement in the matter sector. In Sec. IV, we describe various realistic schemes for the implementation of this \(\mathbb{Z}_{2}\) gauge theory on a link using a single trapped ion. We discuss a light-shift- or a Molmer-Sorensen-type scheme to implement a state-dependent parametric tunnelling between the phonons along two different trap axes, presenting a thorough numerical comparison of the ideal and realistic dynamics with current trapped-ion parameters. We also discuss a different possibility that combines two orthogonal state-dependent forces in a pulsed Trotterization to yield the desired gauge-invariant model with realistic trapped-ion parameters. Finally, we show in Sec. V how one can also realise other \(\mathbb{Z}_{2}\) models by including more ions. In particular, we show that the center-of-mass modes of a two-ion system can be used to simulate a gauge-invariant model with two links forming a plaquette. We show that, due to the superposition of the possible encircling paths, a single boson can lead to an entangled state for the \(\mathbb{Z}_{2}\) gauge fields. In this section, we also show how a string of ions with dimerised center-of-mass modes can be used to implement a \(\mathbb{Z}_{2}\) gauge theory on an entire chain. We present analytical solutions for the confinement dynamics in the single- and two-boson sectors, and also explore the phenomenon of string breaking in the half-filled sector using Matrix-Product-State simulations. We present our conclusions in Sec. VI.
## II Synthetic dimensions: Lattice field theories under background gauge fields
### Parametric tunnelling and synthetic dimensions
Using periodic resonant modulations, it is possible to design quantum simulators on a lattice that has a connectivity different from the original one. This leads to the concept of synthetic dimensions [135, 136], as recently reviewed in [137], which have been realised in various experimental platforms [138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149]. We start this section by discussing parametric excitations from this perspective. In its original context, a parametric excitation can be used to induce couplings between different modes of the electromagnetic field, leading to a well-known technique for frequency conversion and linear amplification of photons [150, 151, 152, 153, 154]. For instance, as originally discussed in [150], a small periodic modulation of the dielectric constant of a cavity can lead to different couplings between the cavity modes that can be controlled by tuning the modulation frequency to certain resonances. In the particular case of frequency conversion, this scheme can be understood in terms of a parametric tunnelling term between two synthetic lattice sites labelled by the frequencies of the two modes, which is the essence of the schemes for synthetic dimensions discussed in [135, 136]. As emphasised in [155], this simple parametric tunnelling already inherits the phase of the drive [150], such that one could design and implement [156] non-trivial schemes where this phase mimics the Aharonov-Bohm effect of charged particles moving under a static background magnetic field [157]. These ideas can also be exploited when the modes belong to distant resonators coupled via intermediate components, such as mixings [158] or tunable inductors [159]. This leads to parametric tunnelling terms where the phase can be tuned locally, leading to quantum simulators of quantum Hall-type physics [160]. Furthermore, periodic modulations of the mode frequencies with a relative phase difference can also lead to these syn
thetic background gauge fields [161; 162], as demonstrated in experiments that exploit Floquet engineering in optical lattices [163; 164; 165; 166], symmetrically-coupled resonators [167], and trapped-ion crystals [168]. We should also mention that there are other schemes for the simulations of static background gauge fields, which do not exploit periodic modulations, but instead mediate the tunnelling by an intermediate quantum-mechanical system [169; 170; 171; 172; 173; 174; 175].
Given the importance of the idea of parametric tunnelling for the quantum simulation schemes of dynamical gauge theories presented in the following sections, we now describe its details. We consider a set of bosonic/fermionic particles that can be created and annihilated by a set of operators \(a_{d}^{\dagger},a_{d}\) with the corresponding commutation/anti-commutation algebra. Here, the index \(d\in\mathcal{D}\) labels a specific degree of freedom of these particles. In the context of quantum many-body models and lattice field theories, the indexing set \(\mathcal{D}\) typically contains the positions of a microscopic lattice in which the particles can reside, as already mentioned in the introduction. In this context, the geometry of the lattice determines the kinetic energy of the microscopic Hamiltonian, which is described by a tunnelling term \(t_{dd^{\prime}}a_{d}^{\dagger}a_{d^{\prime}}\), where \(t_{dd^{\prime}}\) is the hopping matrix element between a pair of sites labeled by \(d\neq d^{\prime}\). Typically, these tunnelings decay very fast with the distance, and one only considers nearest neighbours, such that the connectivity of the lattice, i.e. the edges/links of a graph, is directly built into the tunnelling matrix. Additionally, in condensed matter, \(d\in\mathcal{D}\) can contain other internal degrees of freedom, e.g. spin of the valence electrons. In the context of synthetic dimensions, it is these extra degrees of freedom that provide us with a new means to engineer a synthetic dimension.
The idea of synthetic dimensions is that the effective connectivity of the tunnelling matrix can be externally designed by introducing additional periodic drivings. These, in fact, induce new couplings that can be interpreted as effective edges/links even when the corresponding degrees of freedom are not related to any Bravais lattice at all. A possible scheme uses a parametric tunnelling, as illustrated now with a simple example. We consider two modes \(d\in\mathcal{D}=\{1,2\}\) of energies \(\omega_{d}\) ( \(\hbar=1\) henceforth), such that the bare Hamiltonian is
\[H_{0}=\omega_{1}a_{1}^{\dagger}a_{1}+\omega_{2}a_{2}^{\dagger}a_{2}. \tag{1}\]
One now adds the following parametric excitation
\[V(t)=\Omega_{\rm d}a_{2}^{\dagger}a_{1}\cos(\phi_{\rm d}-\omega_{\rm d}t)+{ \rm H.c.}, \tag{2}\]
where \(\Omega_{\rm d}\), \(\omega_{\rm d}\), and \(\phi_{\rm d}\) are the amplitude, frequency, and phase of the drive, respectively. In the parametric regime, i.e.,
\[\omega_{\rm d}=\omega_{2}-\omega_{1},\quad|\Omega_{\rm d}|\ll 4|\omega_{2}- \omega_{1}|, \tag{3}\]
one can show that an effective tunnelling term between both modes is induced by the drive. Going to the interaction picture with respect to Eq. (1), it is straightforward to recognise that the resonance condition of Eq. (3) provides the required energy to bridge the gap between the modes and couple them. Additionally, when the driving amplitude is constrained by Eq. (3), a rotating-wave approximation shows that the mode coupling \(V_{\rm I}(t)\approx H_{\rm eff}\) becomes a time-independent effective Hamiltonian with a simple frequency-conversion term
\[H_{\rm eff}=t_{1,{\bf e}_{1}}a_{2}^{\dagger}a_{1}+{\rm H.c.},\quad{\rm with} \;\;t_{1,{\bf e}_{1}}=\frac{\Omega_{\rm d}}{2}{\rm e}^{{\rm i}\phi_{\rm d}}. \tag{4}\]
In the context of synthetic dimensions, one finds that a non-zero tunnelling has been established, which could be understood as a new connectivity link of a synthetic lattice. This tunnelling \(t_{1,{\bf e}_{1}}\) is labelled by the synthetic lattice site index \(1\) from which the particle departs, and the unit link vector \({\bf e}_{1}\) that connects it to the lattice site \(2\), into which the particle tunnels. In this simple case, accordingly, the synthetic lattice is just composed of two sites labelled by the indexes of the mode frequencies. We note that, for a single link, the complex phase of the tunnelling is trivial and has no dynamical consequences, i.e. it can be readily gauged away by a local \(U(1)\) transformation acting on the modes. However, this parametric scheme can be generalised to a larger set \(\mathcal{D}\), in which the complex phase of the effective tunnelling (4) may have non-trivial consequences. As discussed in [158; 160], one can create synthetic lattices in a way that, when the particle tunnels around a closed path \(\gamma\), it gains a non-zero phase \(\sum_{\ell\in\gamma}\phi_{\ell}=\Phi_{\rm AB}\) that simulates a synthetic Aharonov-Bohm phase. Even if the particles have a vanishing charge, their tunnelling resembles that of a charged particle in an external magnetic field via the so-called Peierls' substitution [176], which originally concerned electrons in a narrow-band material subjected to a perpendicular magnetic field [177]. The quadratic lattice models with Peierls' phases provide a playground for studying the integer quantum Hall effect and topological band theory [178]. In the following section, we will discuss this point in detail.
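As a quick numerical illustration of this two-mode building block, the following sketch (our own, with illustrative parameter values, not code from the paper) integrates the lab-frame dynamics of Eqs. (1)-(2) with QuTiP and checks that, at the resonance condition of Eq. (3), a single quantum initially in mode 1 is converted to mode 2 at the effective rate of Eq. (4).

```python
import numpy as np
from qutip import destroy, qeye, tensor, basis, sesolve

N = 4                                   # Fock truncation per mode
a1 = tensor(destroy(N), qeye(N))
a2 = tensor(qeye(N), destroy(N))

w1, w2 = 2.0, 3.0                       # bare mode frequencies (arbitrary units)
Om_d, phi_d = 0.05, 0.0                 # weak drive: |Om_d| << 4|w2 - w1|
w_d = w2 - w1                           # parametric resonance, Eq. (3)

H0 = w1 * a1.dag() * a1 + w2 * a2.dag() * a2
Hc = Om_d * (a2.dag() * a1 + a1.dag() * a2)     # drive operator of Eq. (2)
H = [H0, [Hc, "cos(phi_d - w_d * t)"]]          # lab-frame Hamiltonian H0 + V(t)

psi0 = tensor(basis(N, 1), basis(N, 0))         # one quantum in mode 1
tlist = np.linspace(0.0, 4 * np.pi / Om_d, 400)
res = sesolve(H, psi0, tlist, e_ops=[a1.dag() * a1, a2.dag() * a2],
              args={"phi_d": phi_d, "w_d": w_d})

# The population of mode 2 reaches values close to one, i.e. full frequency
# conversion at the effective tunnelling rate Om_d/2 of Eq. (4).
print(max(res.expect[1]))
```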
### Peierls ladders with trapped-ion chains
So far, we have not yet discussed in detail how the parametric term (2) can be created and controlled in a specific physical system. The parametric scheme has been implemented in arrays of superconducting circuits [160] but, to the best of our knowledge, its realisation in trapped-ion crystals has not been discussed so far. We now describe how to exploit this method to build a quantum simulator of a bosonic quantum Hall ladder using the transverse vibrations of a chain of \(N\) trapped ions in a linear Paul trap [179]. This will serve as a warm-up for the scheme of dynamical \(\mathbb{Z}_{2}\) gauge fields that is the core of our work, which will exploit similar concepts with a new twist, and will be covered in the following section.
Following [180; 181; 182], for a linear Paul trap with trap frequencies \(\omega_{z}\ll\omega_{x},\omega_{y}\), the ions form a linear chain along the \(z\)-axis (see Fig. 1). The transverse vibrations of each ion [183] around its equilibrium position are described by
\[H_{0}=\sum_{d}\omega_{d}a_{d}^{\dagger}a_{d}+\sum_{d\neq d^{\prime}}t_{dd^{\prime}}a_{d}^{\dagger}a_{d^{\prime}}. \tag{5}\]
Here, the labelling index reads \(d=(i,\alpha)\in\mathcal{D}\), and the set \(\mathcal{D}\) contains the label for the ions in the chain \(i\in\{1,\cdots,N\}\), and the label for the two possible directions of the vibrations
transverse to the chain \(\alpha\in\{x,y\}\). In addition, \(a_{d}^{\dagger},a_{d}\) are the bosonic creation-annihilation operators for the corresponding local vibrations around the equilibrium positions of the ions \(\mathbf{r}_{i}^{0}\), which we have assumed to be aligned along the null of the radio-frequency (rf) pseudo-potential of the linear Paul trap, such that excess micromotion can be neglected [184]. Additionally, the modulation frequencies to be introduced below must be much lower than the rf driving of the trap to also neglect the intrinsic quantum-mechanical micromotion [185]. As discussed in [182], the expansion of the Coulomb interaction to second order leading to Eq. (5) does not mix the \(x,y\) modes, and one finds that the tunnelling matrix elements decay with the inter-ion distance following a dipolar law
\[t_{(i,\alpha)(j,\beta)}=\frac{1}{2m\omega_{\alpha}}\frac{e^{2}}{8\pi\epsilon_{0}|\mathbf{r}_{i}^{0}-\mathbf{r}_{j}^{0}|^{3}}\delta_{\alpha,\beta}, \tag{6}\]
where \(m\) is the ion mass, \(\epsilon_{0}\) the vacuum permittivity, and \(\delta_{\alpha,\beta}\) is the Kronecker delta. The on-site energies are related to the effective trap frequencies \(\omega_{\alpha}\) of the time-averaged pseudo-potential [183] by the following expression
\[\omega_{i,\alpha}=\omega_{\alpha}-\sum_{j\neq i}\frac{1}{2m\omega_{\alpha}}\frac{e^{2}}{8\pi\epsilon_{0}|\mathbf{r}_{i}^{0}-\mathbf{r}_{j}^{0}|^{3}}. \tag{7}\]
Since the Hamiltonian of Eq. (5) has a global \(U(1)\times U(1)\) symmetry under \(a_{d}\mapsto\mathrm{e}^{\mathrm{i}\sigma}a_{d}\), \(a_{d}^{\dagger}\mapsto\mathrm{e}^{-\mathrm{i}\sigma}a_{d}^{\dagger}\), the number of transverse vibrational excitations along each axis is individually conserved. Although phonons in crystals typically refer to the excitations of the collective vibrational modes, it is customary to refer to these local vibrational modes also as phonons in the trapped-ion community, and we will follow this convention in the rest of the manuscript. The novelty with respect to the crystal phonons underlying the transverse sound waves in elastic solids [133] is that their number is conserved when \(|t_{dd^{\prime}}|\ll 2\omega_{\alpha}\)[183], so we can think of these transverse phonons as particles localised at each of the ions. Just like electrons in a solid, the phonons tend to spread over the chain due to the dipolar tunnelling of Eq. (6). We note that the dynamics of these local phonons due to the effective tight-binding Hamiltonian of Eq. (5) has been observed in various trapped-ion experiments [186; 187; 188; 189; 190; 191; 192; 193].
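For orientation, the following short script (our own illustration, not from the paper, with an assumed ion species, spacing, and trap frequency) evaluates the dipolar tunnelling rates of Eq. (6) and the Coulomb-dressed on-site frequencies of Eq. (7) for a small, equally spaced chain; for the assumed numbers the nearest-neighbour rates come out at the level of tens of kHz.

```python
import numpy as np

e = 1.602176634e-19            # elementary charge (C)
eps0 = 8.8541878128e-12        # vacuum permittivity (F/m)
m = 40 * 1.66053906660e-27     # mass of an assumed 40Ca+ ion (kg)
w_x = 2 * np.pi * 3.0e6        # assumed transverse trap frequency (rad/s)
N_ions, dz = 5, 5.0e-6         # assumed chain size and equal spacing (m)

z = dz * np.arange(N_ions)     # equilibrium positions along the chain
t = np.zeros((N_ions, N_ions)) # tunnelling matrix t_{(i,x)(j,x)} of Eq. (6)
for i in range(N_ions):
    for j in range(N_ions):
        if i != j:
            t[i, j] = e**2 / (8 * np.pi * eps0 * abs(z[i] - z[j])**3) / (2 * m * w_x)

w_onsite = w_x - t.sum(axis=1)  # Eq. (7): shifted on-site frequencies
print("nearest-neighbour tunnelling / 2pi:", t[0, 1] / (2 * np.pi) / 1e3, "kHz")
print("largest on-site shift / 2pi:", (w_x - w_onsite).max() / (2 * np.pi) / 1e3, "kHz")
```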
Let us now discuss how to exploit parametric excitations to realise synthetic phonon ladders subjected to an effective background gauge field. There have been some prior works on trapped-ion parametric drivings which, to the best of our knowledge, have only been considered in a different context. For instance, in [194], parametric modulations are used to design cooling and detection methods for the spectroscopy of a single trapped electron and proton, as well as for squeezing and linear amplification. The former requires a parametrically-modulated quadrupole potential that couples two different vibrational directions [195], whereas the latter employs a periodic modulation of the trap frequencies, and can thus be achieved by applying an additional oscillating potential to the rf trap electrodes. Parametric modulations can also be obtained optically, exploiting the cross-beam ac-Stark shift of a pair of far-detuned laser beams [196, 197]. Although the parametric modulations obtained through the electronic equipment have led to larger amplification in recent experiments [198], we will stick to optical ones in this work, as they are more flexible for the generation of synthetic gauge fields.
Our goal is to interpret the transverse vibrational directions of Fig. 1 as a new synthetic "dimension". Note, however, that the \(x\) and \(y\) directions are decoupled at this quadratic order (Eq. (6)), such that the Hamiltonian of Eq. (5) describes two decoupled dipolar chains. We now discuss how a parametric excitation of the tunnelling can be induced, and how this term can be used to derive a model with couplings between the two chains, such that the global symmetry reduces to \(U(1)\times U(1)\mapsto U(1)\), and only the total number of transverse vibrational quanta is conserved. We will see that, from this perspective, the phonons move in a synthetic two-leg ladder and, moreover, argue that they can also be subjected to an effective Peierls' phase mimicking the microscopic model for charged particles under external magnetic fields. We consider that each ion is illuminated by a global two-beam laser field, the beat note of which is far detuned from any electronic transition [196]. The ions, all prepared in the same internal state of the ground-state manifold [199], thus experience an ac-Stark shift that yields the following optical potential
\[V(t)=\sum_{n,n^{\prime}=1,2}\sum_{j=1}^{N}\Omega_{n,n^{\prime}}\mathrm{e}^{\mathrm{i}(\mathbf{k}_{\mathrm{L},n}-\mathbf{k}_{\mathrm{L},n^{\prime}})\cdot\mathbf{r}_{j}-\mathrm{i}(\omega_{\mathrm{L},n}-\omega_{\mathrm{L},n^{\prime}})t}+\text{H.c.}. \tag{8}\]
Here, \(\omega_{\mathrm{L},n}(\mathbf{k}_{\perp,n})\) is the frequency (wave-vector) of each beam \(n\in\{1,2\}\) of the global laser field, and \(\Omega_{n,n^{\prime}}=-\Omega_{\mathrm{L},n}\Omega_{\mathrm{L},n^{\prime}}^{ *}/4\Delta\) is the ac-Stark shift arising from two-photon processes. In those processes, a photon is absorbed from the \(n\)-th beam by the \(i\)-th ion with the Rabi frequency \(\Omega_{\mathrm{L},n}\) and a large detuning \(\Delta\), such that the ion is only virtually excited. Subsequently, the ion is de-excited to the same internal state by emitting a photon onto the \(n^{\prime}\)-th beam [196].
In addition to the standard ac-Stark shifts in Eq. (8), i.e. terms with \(n=n^{\prime}\) that contribute an energy shift \(\Delta E_{\rm ac}=\sum_{n}\Omega_{n,n}\), one also obtains crossed beat note terms that lead
Figure 1: **Transverse vibrational excitations:** Schematic representation of an ion chain in a linear Paul trap. In the insets, we represent the transverse local vibrational excitations of a single ion in the chain.
to periodic modulations in space (time) when the laser wave-vectors (frequencies) are not co-linear (equal). In this case, one defines the beat note wave-vector and frequency as \(\mathbf{k}_{\rm d}=\mathbf{k}_{\rm L,1}-\mathbf{k}_{\rm L,2}\), and \(\omega_{\rm d}=\omega_{\rm L,1}-\omega_{\rm L,2}\), respectively. Noting that the ion positions can be expanded in terms of the local phonon operators via \(\mathbf{r}_{i}=\mathbf{r}_{i}^{0}+\sum_{\alpha}\mathbf{e}_{\alpha}\frac{1}{\sqrt{2m \omega_{\alpha}}}(a_{i,\alpha}+a_{i,\alpha}^{\dagger})\), where \(\mathbf{e}_{\alpha}\) is the unit vector in the direction of the transverse ion vibration \(\mathbf{\alpha}\), one can substitute and expand the optical potential (8) in the so-called Lamb-Dicke regime
\[\eta_{\alpha}=\mathbf{k}_{\rm d}\cdot\mathbf{e}_{\alpha}/\sqrt{2m\omega_{\alpha}} \ll 1. \tag{9}\]
The Taylor expansion of Eq. (8) then leads to a sum of terms with all possible powers of the phonon operators. By choosing the correct beat note frequency, it is possible to select which of them brings in the leading contribution. In particular, for
\[\omega_{\rm d}=\omega_{y}-\omega_{x},\quad|\Omega_{\rm d}|\ll|\omega_{y}- \omega_{x}|, \tag{10}\]
where we assume that \(|\omega_{x}-\omega_{y}|\ll\omega_{x},\omega_{y}\), a rotating-wave approximation shows that the optical potential contains the desired parametric excitation of a tunnelling term that generalises Eq. (2) to an arbitrary number of lattice sites, namely
\[V(t)\approx\sum_{i}\Delta E_{\rm ac}+\sum_{i}\Omega_{\rm d}\cos(\phi_{i}- \omega_{\rm d}t)a_{i,y}^{\dagger}a_{i,x}+{\rm H.c.}, \tag{11}\]
where we have introduced the parameters
\[\Omega_{\rm d}=|\Omega_{1,2}|\eta_{x}\eta_{y},\quad\phi_{i}=\mathbf{k}_{\rm d}\cdot \mathbf{r}_{i}^{0}+\arg(-\Omega_{1,2}). \tag{12}\]
Note that, due to the constraints in Eq. (10), we have neglected other contributions in the Lamb-Dicke expansion.
One can readily see that, in addition to the irrelevant ac-Stark shift \(\Delta E_{\rm ac}\), we have obtained a parametric modulation like Eq. (2) that involves simultaneously all of the ions in the chain. Repeating the same arguments as in the simple two-mode case (Eq. (2)), one finds that the parametric drive can activate the tunnelling of a phonon along the new synthetic direction (see Fig. 2**(a)**), such that the tunnelling matrix of Eq. (6) becomes \(t_{dd^{\prime}}\mapsto\tilde{t}_{dd^{\prime}}\) with
\[\tilde{t}_{(i,\alpha)(j,\beta)}=t_{(i,\alpha)(j,\beta)}+\frac{\Omega_{\rm d}}{2}{\rm e}^{{\rm i}\mathbf{\epsilon}_{\alpha,\beta}\phi_{i}}\delta_{i,j}(1-\delta_{\alpha,\beta}), \tag{13}\]
Here, in addition to the Kronecker delta, we have used the fully anti-symmetric tensor defined as \(\mathbf{\epsilon}_{x,y}=-\mathbf{\epsilon}_{y,x}=1\), \(\mathbf{\epsilon}_{x,x}=\mathbf{\epsilon}_{y,y}=0\). By making the following identification
\[a_{i,x},a_{i,x}^{\dagger}\mapsto a_{i\mathbf{e}_{1}},a_{i\mathbf{e}_{1}}^{\dagger},\quad a_{i,y},a_{i,y}^{\dagger}\mapsto a_{i\mathbf{e}_{1}+\mathbf{e}_{2}},a_{i\mathbf{e}_{1}+\mathbf{e}_{2}}^{\dagger}, \tag{14}\]
we obtain, in the interaction picture, a tight-binding model for bosons in a synthetic two-leg ladder
\[H_{\rm eff}=\sum_{\mathbf{i}}\sum_{\mathbf{\ell}\in\mathcal{S}(\mathbf{i})}t_{\mathbf{i}\mathbf{\ell}}\,a^{\dagger}_{\mathbf{i}+\mathbf{\ell}}a_{\mathbf{i}}. \tag{15}\]
Here, a boson at the synthetic lattice site \(\mathbf{i}\) can tunnel horizontally or vertically to the site \(\mathbf{i}+\mathbf{\ell}\) along the synthetic links labelled by \(\mathbf{\ell}\in\mathcal{S}\{\mathbf{i}\}\). The tunnelling amplitudes read
\[\begin{split} t_{i\mathbf{e}_{1},\ell\mathbf{e}_{1}}&=\tilde{t}_{(i+\ell,x)(i,x)},\qquad t_{i\mathbf{e}_{1},\mathbf{e}_{2}}=\frac{\Omega_{\rm d}}{2}{\rm e}^{-{\rm i}\phi_{i}},\\ t_{i\mathbf{e}_{1}+\mathbf{e}_{2},\ell\mathbf{e}_{1}}&=\tilde{t}_{(i+\ell,y)(i,y)},\qquad t_{i\mathbf{e}_{1}+\mathbf{e}_{2},-\mathbf{e}_{2}}=\frac{\Omega_{\rm d}}{2}{\rm e}^{+{\rm i}\phi_{i}}.\end{split} \tag{16}\]
In comparison to the parametric tunnelling of Eq. (2), which also leads to a tunnelling strength with a complex phase (Eq. (4)), we see that the current scheme leads to a site-dependent phase as a consequence of the spatial modulation of the optical potential (8), as depicted in Fig. 2**(b)**. This inhomogeneity can be exploited, as depicted in Fig. 2**(c)**, to induce an effective Peierls' phase, such that the phonons in the synthetic ladder mimic the dynamics of electrons under a magnetic field. In fact, if the local phonon tunnels around the smallest rectangular plaquette \(t_{i\mathbf{e}_{1},\mathbf{e}_{1}}t_{(i+1)\mathbf{e}_{1},\mathbf{e}_{2}}t_{(i+1)\mathbf{e}_{1}+\mathbf{e}_{2},-\mathbf{e}_{1}}t_{i\mathbf{e}_{1}+\mathbf{e}_{2},-\mathbf{e}_{2}}\propto{\rm e}^{{\rm i}\Phi_{\rm AB}}\), it gains a net phase that can no longer be gauged away as in the simple two-mode case of Eq. (4). This phase is analogous to the Aharonov-Bohm phase [157] for electrons moving in a plane under a perpendicular magnetic field
\[\Phi_{\rm AB}=\mathbf{k}_{\rm d}\cdot(\mathbf{r}_{i+1}^{0}-\mathbf{r}_{i}^{0})=:2\pi\frac{ \Phi_{B}}{\Phi_{0}}, \tag{17}\]
where \(\Phi_{B}=\int_{\square}{\rm d}\mathbf{S}\cdot\mathbf{B}_{\rm bg}\) is the flux of an effective magnetic field \(\mathbf{B}_{\rm bg}\) across the plaquette \(\square\), and \(\Phi_{0}=h/e\) is the quantum of flux. As a consequence of this flux, which can be controlled by tilting the laser wave-vector with respect to the ion chain, one could for instance observe Aharonov-Bohm destructive interference for \(\Phi_{\rm AB}=\mathbf{\pi}\), in which a single phonon cannot tunnel two sites apart along a synthetic plaquette (see Fig. 2**(d)**). Let us note that this Aharonov-Bohm interference occurs at the level of phonons, which have a zero net charge, and thus differs from the interference of charged ions tunnelling
Figure 2: **Synthetic dimensions in trapped-ion chains:****(a)** Schematic representation of a synthetic Peierls ladder. The sites of the upper and lower legs of the ladder represent the local vibrations of the ions along the \(x\) and \(y\) transverse directions, respectively. The dipolar tunnelings (6) are represented by intra-leg links that connect distant ions. The resulting parametric tunnelings in Eq. (16) are depicted by the vertical inter-leg links, and correspond to the frequency-conversion process of **(b)**. **(c)** For a pair of ions, the effective rectangular plaquette can lead to a net Aharonov-Bohm phase \(\Phi_{\rm AB}\) (17) for a phonon that tunnels along the corresponding synthetic links. **(d)** For \(\Phi_{\rm AB}=\pi\), there can be perfect destructive interference for the phonon, which mimics the Aharonov-Bohm interference of an electron that travels around an infinitely-thin solenoid.
between two different crystalline configurations, neatly observed in experiments with a real magnetic field [200].
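The destructive interference of Fig. 2**(d)** can be verified with a few lines of code in the single-excitation subspace. The sketch below (our own illustration, with assumed tunnelling rates) builds the four-mode plaquette Hamiltonian implied by Eq. (16) for two ions and propagates a phonon that starts in one corner: for a synthetic flux \(\Phi_{\rm AB}=\pi\) the population of the opposite corner remains zero at all times, while it is finite for zero flux.

```python
import numpy as np
from scipy.linalg import expm

def plaquette_hamiltonian(t_h, t_v, phi_1, phi_2):
    """Single-excitation Hamiltonian on the basis [(1,x), (2,x), (2,y), (1,y)]."""
    H = np.zeros((4, 4), dtype=complex)
    H[1, 0] = t_h                          # horizontal tunnelling, lower leg
    H[2, 3] = t_h                          # horizontal tunnelling, upper leg
    H[3, 0] = t_v * np.exp(-1j * phi_1)    # vertical tunnelling at ion 1, Eq. (16)
    H[2, 1] = t_v * np.exp(-1j * phi_2)    # vertical tunnelling at ion 2, Eq. (16)
    return H + H.conj().T

t_h, t_v = 1.0, 0.6                        # illustrative tunnelling rates
for flux in (0.0, np.pi):                  # Phi_AB = phi_2 - phi_1, Eq. (17)
    H = plaquette_hamiltonian(t_h, t_v, 0.0, flux)
    times = np.linspace(0.0, 10.0, 200)
    p_far = [abs(expm(-1j * H * t)[2, 0])**2 for t in times]  # population at (2,y)
    print(f"flux = {flux:.2f}: max population at opposite corner = {max(p_far):.3f}")
```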
For larger ion crystals, the analogy with a homogeneous magnetic field is still valid despite the existence of dipolar tunnelings, provided that the equilibrium positions of the ions are equally spaced. The equal spacing can be achieved by designing arrays of individual traps with micro-fabricated surface-electrode traps [201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213], by introducing anharmonic confining potentials in segmented ion traps [214, 215, 216, 217, 218], or in ring traps [219, 220]. Let us note that, even if one achieved a homogeneous spacing, the longer-range nature of the tunnelings implies that the excitations can now enclose larger plaquettes, potentially changing the interference phenomena. In the pi-flux case, the next-to-nearest neighbour tunnelling leads to plaquettes with zero flux, which may challenge the existence of the perfect destructive interference of Fig. 2**(d)**. Nevertheless, these larger plaquettes are enclosed at a considerably slower pace, as the tunnelling strengths decay with the cube of the distance (6). We have numerically observed that an almost perfect Aharonov-Bohm interference can still occur when considering how a phononic excitation travels between opposite corners of the synthetic ladder [221].
Let us close this section by highlighting that the effective magnetic field underlying Eq. (17) is not a true dynamical magnetic field, but rather a fixed background field. One can indeed push the analogy to the level of the vector potential using \(\mathbf{B}_{\text{bg}}=\mathbf{\nabla}\times\mathbf{A}_{\text{bg}}\), but the \(U(1)\) gauge field \(\mathbf{A}_{\text{bg}}\) would still be a background field, the dynamics of which can only be fixed externally, and has nothing to do with Maxwell electrodynamics. In the following section, we describe how this scheme of parametric tunnelings can be generalised to get closer to this situation, and be able to explore lattice gauge theories.
## III Dynamical gauge fields: the \(\mathbb{Z}_{2}\) theory on a link
### State-dependent parametric tunnelling
Let us start by revisiting the simplest parametric setup of two modes introduced in Sec. II.1, and discuss how it can be generalised towards the simulation of the simplest discrete gauge theory: a \(\mathbb{Z}_{2}\) gauge link. At this abstract level, we consider introducing an additional quantum system composed of 2 levels, a so-called qubit in quantum information science [222], which is initially decoupled from the modes. The bare Hamiltonian of the system (Eq. (1)) now reads
\[\tilde{H}_{0}=\omega_{1}a_{1}^{\dagger}a_{1}+\omega_{2}a_{2}^{\dagger}a_{2}+ \frac{\omega_{0}}{2}\sigma_{1,\mathbf{\epsilon}_{1}}^{z}, \tag{18}\]
where we have introduced the transition frequency between the qubit levels \(\omega_{0}\), and the Pauli matrix \(\sigma_{1,\mathbf{\epsilon}_{1}}^{z}=|\uparrow_{1,\mathbf{\epsilon}_{1}}\rangle\langle \uparrow_{1,\mathbf{\epsilon}_{1}}|-|\downarrow_{1,\mathbf{\epsilon}_{1}}\rangle \langle\downarrow_{1,\mathbf{\epsilon}_{1}}|\). Here, we are using an apparently convoluted notation for the index of the qubit, which will be justified below once we interpret the effective model in the light of a synthetic lattice gauge theory.
The idea now is to consider a generalisation of Eq. (2) that includes two tones \(\tilde{V}(t)=\tilde{V}_{1}+\tilde{V}_{2}\). One of them induces a state-dependent parametric drive
\[\tilde{V}_{1}=\Omega_{\text{d}}\sigma_{1,\mathbf{\epsilon}_{1}}^{z}a_{2}^{\dagger}a _{1}\cos(\phi_{\text{d}}-\omega_{\text{d}}t)+\text{H.c.}, \tag{19}\]
which has an amplitude that depends on the state of the qubit. Additionally, the other tone drives transitions on the qubit
\[\tilde{V}_{2}=\tilde{\Omega}_{\text{d}}\sigma_{1,\mathbf{\epsilon}_{1}}^{x}\cos( \tilde{\phi}_{\text{d}}-\tilde{\omega}_{\text{d}}t)+\text{H.c.}, \tag{20}\]
where we have introduced another Pauli matrix \(\sigma_{1,\mathbf{\epsilon}_{1}}^{x}=|\uparrow_{1,\mathbf{\epsilon}_{1}}\rangle\langle \downarrow_{1,\mathbf{\epsilon}_{1}}|+|\downarrow_{1,\mathbf{\epsilon}_{1}}\rangle \langle\uparrow_{1,\mathbf{\epsilon}_{1}}|\). Considering that the frequency and strength of this additional driving are constrained by
\[\tilde{\omega}_{\text{d}}=\omega_{0},\quad|\tilde{\Omega}_{\text{d}}|\ll 4 \omega_{0}, \tag{21}\]
we can follow the exact same steps as in Sec. II.1 to show that, after setting \(\phi_{\text{d}}=\tilde{\phi}_{\text{d}}=0\), the two-tone drive leads to a time-independent effective Hamiltonian \(\tilde{V}_{I}(t)\approx\tilde{H}_{\text{eff}}\) that supersedes Eq. (4), and reads
\[H_{\text{eff}}=\left(t_{1,\mathbf{\epsilon}_{1}}a_{2}^{\dagger}\sigma_{1,\mathbf{ \epsilon}_{1}}^{z}a_{1}+\text{H.c.}\right)+h\sigma_{1,\mathbf{\epsilon}_{1}}^{x} \tag{22}\]
where we have introduced the effective couplings
\[t_{1,\mathbf{\epsilon}_{1}}=\frac{\Omega_{\text{d}}}{2},\text{ and }h=\frac{\tilde{\Omega}_{\text{d}}}{2}. \tag{23}\]
There are two important aspects to highlight. First, we have again managed to engineer a synthetic link connecting the two
Figure 3: **Synthetic \(\mathbb{Z}_{2}\) gauge links and Gauss’ law: (a)** Schematic representation of the effective Hamiltonian in Eq. (22). The two modes labelled by 1,2, which play the role of matter fields, are coupled by a synthetic tunnelling of strength \(t_{1,\mathbf{\epsilon}_{1}}\) that is mediated by a qubit that plays the role of the gauge field, and effectively sits on the synthetic link. In addition to the tunnelling, the electric-field term of strength \(h\) drives transitions in the qubit (inset). **(b)** For a single particle, Gauss’ law (25) for a distribution of background charges \(q_{1}=0\), \(q_{2}=1\), is fulfilled by the two states \(|1_{1},-1_{\mathbf{\epsilon}_{1}},0_{2}\rangle,|0_{1},+1_{\mathbf{\epsilon}_{1}},1_{2}\rangle\), characterised by the absence or presence of an electric field attached to the matter particle sitting on the leftmost or rightmost site. These electric-field states are represented by arrows parallel (anti-parallel) to the external field \(h\), and the presence (absence) of the corresponding electric-field line is represented by a thicker (shaded) golden link.
modes via the tunneling of particles. However, in contrast to the previous case (4), the qubit enters this process and mediates the tunnelling. In the spirit of synthetic dimensions, we can say that the qubit effectively sits on the synthetic link (see Fig. 3**(a)**). It is for this reason that the label used for the qubit \((1,\mathbf{e}_{1})\) refers to the link that connects the synthetic site 1 to its nearest neighbour 2 via the direction specified by the vector \(\mathbf{e}_{1}\). This notation has a clear generalisation to larger lattices and different geometries, and is common in the context of lattice gauge theories [223]. The second important aspect to remark is that the dynamics dictated by the Hamiltonian of Eq. (22), considering also the term proportional to \(h\), has a local/gauge \(\mathbb{Z}_{2}\) symmetry. This gauge symmetry is related to the previous \(U(1)\) phase rotation of the modes discussed below Eq. (7) when restricting to a \(\pi\) phase. More importantly, this \(\pi\) phase can be chosen locally. We can transform either \(a_{1},a_{1}^{\dagger}\mapsto-a_{1},-a_{1}^{\dagger}\), or \(a_{2},a_{2}^{\dagger}\mapsto-a_{2},-a_{2}^{\dagger}\) by a local \(\pi\) phase, and retain gauge invariance in the Hamiltonian by simultaneously inverting the link qubit \(\sigma_{1,\mathbf{e}_{1}}^{z}\mapsto-\sigma_{1,\mathbf{e}_{1}}^{z},\sigma_{1,\mathbf{e}_{1}}^{x}\mapsto\sigma_{1,\mathbf{e}_{1}}^{x}\). Accordingly, the qubit can be interpreted as a \(\mathbb{Z}_{2}\) gauge field introduced to gauge the global \(\mathbb{Z}_{2}\) inversion symmetry of the Hamiltonian (4), paralleling the situation with other gauge groups where gauge fields are introduced in the links of the lattice and mediate the tunnelling of matter particles [223]. In this light, we see that the analogy with gauge theories is not a mere notation choice relying on how we have decided to label the qubit and assign it to a synthetic link, but it instead rests on our scheme for gauging a symmetry: the engineered tunnelling leads to a discretised version of the covariant derivative that is required to upgrade a global symmetry into a local one.
As advanced in the introduction, in Hamiltonian approaches to lattice gauge theories [22], one works in Weyl's temporal gauge, such that there is a residual redundancy that is dealt with by imposing Gauss' law. For the distribution of background charges \(q_{1}=0,q_{2}=1\), the physical subspace of a single particle is spanned by the two gauge-invariant states \(|\mathrm{L}\rangle,|\mathrm{R}\rangle\) of Eq. (26) depicted in Fig. 3**(b)**, and the Hamiltonian (22) restricted to this subspace couples them with the tunnelling strength \(t_{1,\mathbf{e}_{1}}\) (23). The problem thus reduces to that of Rabi oscillations [225] of a driven two-level atom [226] (see Fig. 4), and has an exact solution \(\mathbf{c}(t)=\mathrm{e}^{-\mathrm{i}\Omega_{0}t\,\mathbf{n}\cdot\mathbf{\sigma}}\mathbf{c}(0)\), where \(\mathbf{c}(t)=(c_{r}(t),c_{l}(t))^{\mathrm{t}}\), and we have introduced the vector of Pauli matrices \(\mathbf{\sigma}=(\sigma^{x},\sigma^{y},\sigma^{z})\), and the following quantities
\[\Omega_{0}=\sqrt{t_{1,\mathbf{e}_{1}}^{2}+h^{2}},\quad\mathbf{n}=\frac{1}{\Omega_ {0}}(t_{1,\mathbf{e}_{1}},0,h). \tag{27}\]
Assuming that the particle occupies initially the leftmost site \(\left|\Psi_{\mathrm{phys}}(0)\right\rangle=\left|\mathrm{L}\right\rangle\), we see that the tunnelling to the right is accompanied by the build-up of an electric field line across the gauge link, which is thus attached to the dynamical \(\mathbb{Z}_{2}\) charge carried by the particle. This correlated dynamics can be observed by measuring the periodic oscillations of the following gauge-invariant observables
\[\begin{split}\overline{n}_{2}(t)&:=\langle a_{2}^{ \dagger}a_{2}(t)\rangle=\frac{t_{1,\mathbf{e}_{1}}^{2}}{\Omega_{0}^{2}}\sin^{ 2}(\Omega_{0}t),\\ \overline{n}_{1}(t)&:=\langle a_{1}^{\dagger}a_{1}( t)\rangle=1-\frac{t_{1,\mathbf{e}_{1}}^{2}}{\Omega_{0}^{2}}\sin^{2}(\Omega_{0}t), \\ \overline{s}_{x}(t)&:=\langle\sigma_{1,\mathbf{e}_{1 }}^{x}(t)\rangle=\frac{2t_{1,\mathbf{e}_{1}}^{2}}{\Omega_{0}^{2}}\sin^{2}( \Omega_{0}t)-1.\end{split} \tag{28}\]
In Fig. 5**(a)**, we compare these analytical predictions (28) for \(h=0\) to the numerical simulation for an initial state \(\left|\Psi(0)\right\rangle=\left|1_{1}\right\rangle\left|-_{1,\mathbf{e}_{1}} \right\rangle\left|0_{2}\right\rangle\). Note that, for the numerical simulation, we do not restrict the Hilbert space to the single-particle subspace, nor to the gauge-invariant basis of Eq. (26). We truncate the maximal number of Fock states in each site to \(n_{i}\leq n_{\text{max}}\), and compute the exact dynamics of the \(\mathbb{Z}_{2}\)-link Hamiltonian (22) after this truncation, checking that no appreciable changes appear when increasing \(n_{\text{max}}\). The lines depicted in this figure represent the numerical results for matter and gauge observables \(\overline{n}_{1}(t)=\langle a_{1}^{\dagger}a_{1}(t)\rangle\), \(\overline{n}_{2}(t)=\langle a_{2}^{\dagger}a_{2}(t)\rangle\) and \(\overline{s}_{x}(t)=\langle\sigma_{1,\mathbf{e}_{1}}^{x}(t)\rangle\). Fig. 5**(a)** also shows the expectation value of the sum of Gauss' generators (24) which, according to the specific distribution of external background charges \(q_{1}=0,q_{2}=1\), should vanish exactly at all times, i.e., \(\langle G_{1}(t)+G_{2}(t)\rangle/2=(\mathrm{e}^{\mathrm{i}\pi q_{1}}+\mathrm{ e}^{\mathrm{i}\pi q_{2}})/2=0\). The symbols represent the respective analytical expressions in Eq. (28). The picture shows a clear agreement of the numerical and exact solutions, confirming the validity of the picture of the correlated Rabi flopping in the matter and gauge sectors. As the boson tunnels to the right \(\left|1_{1},0_{2}\right\rangle\rightarrow\left|0_{1},1_{2}\right\rangle\), the electric field line stretches to comply with Gauss' law until, right at the exchange duration, the link qubit gets flipped \(\left|-_{1,\mathbf{e}_{1}}\right\rangle\rightarrow\left|+_{1,\mathbf{e}_{1}}\right\rangle\). This behaviour is repeated periodically as the boson tunnels back and forth, and is a direct manifestation of gauge invariance.
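The correlated matter-gauge dynamics described above is simple enough to be checked with a few lines of code. The following sketch is our own illustration, written with QuTiP in Python rather than the QuantumOptics.jl package used for the figures; the truncation, the parameter values, and the explicit form used for the Gauss generators (matter-site parity times \(\sigma^{x}\) on the link, which commutes with Eq. (22)) are assumptions made here for illustration.

```python
# Illustrative sketch (QuTiP), not the authors' QuantumOptics.jl code:
# real-time dynamics of the Z2 link, Eq. (22), starting from |L> of Eq. (26).
import numpy as np
from qutip import destroy, qeye, sigmax, sigmaz, tensor, basis, sesolve

n_max = 7                            # Fock truncation per matter site (assumption)
t_link, h = 1.0, 0.0                 # tunnelling t_{1,e1} and electric field h

a1 = tensor(destroy(n_max + 1), qeye(2), qeye(n_max + 1))   # matter site 1
sz = tensor(qeye(n_max + 1), sigmaz(), qeye(n_max + 1))     # link qubit sigma^z
sx = tensor(qeye(n_max + 1), sigmax(), qeye(n_max + 1))     # link qubit sigma^x
a2 = tensor(qeye(n_max + 1), qeye(2), destroy(n_max + 1))   # matter site 2

H = t_link * (a2.dag() * sz * a1 + a1.dag() * sz * a2) + h * sx

# Gauss generators: matter-site parity times sigma^x on the link (they commute with H)
G1 = (1j * np.pi * a1.dag() * a1).expm() * sx
G2 = sx * (1j * np.pi * a2.dag() * a2).expm()

minus = (basis(2, 0) - basis(2, 1)).unit()                       # |-> link state
psiL = tensor(basis(n_max + 1, 1), minus, basis(n_max + 1, 0))   # |L>

times = np.linspace(0.0, 2.0 * np.pi / np.sqrt(t_link**2 + h**2), 400)
res = sesolve(H, psiL, times,
              e_ops=[a1.dag() * a1, a2.dag() * a2, sx, 0.5 * (G1 + G2)])
# res.expect[0:3] reproduce Eq. (28); res.expect[3] stays at 0 (Gauss' law).
```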
From the two-level scheme in the right panel of Fig. 4, we see that a non-zero electric field \(h>0\) plays the role of a detuning in the Rabi problem [226]. Accordingly, as the electric field gets stronger, i.e. \(h\gg\left|t_{1,\mathbf{e}_{1}}\right|\), it costs more energy to create an electric field line, and the particle ceases to tunnel, i.e. the contrast of the Rabi oscillations between the L/R lev
Figure 4: **Effective two-level system for \(\mathbb{Z}_{2}\)-link tunnelling:** In the super-selection sector (25) with background charges \(q_{1}=0,q_{2}=1\), the physical subspace for a single particle is composed of two states (26) depicted in Fig. 3**(b)**. The gauge-invariant Hamiltonian (22) can then be mapped onto the problem of detuned Rabi oscillations of a two-level atom in the rotating frame, where the tunnelling plays the role of the Rabi frequency, and the electric-field term is proportional to the detuning of the Rabi drive.
Figure 5: \(\mathbb{Z}_{2}\)**-invariant tunnelling and correlated Rabi flopping:** **(a)** Dynamics of an initial state \(\left|\Psi(0)\right\rangle=\left|\mathrm{L}\right\rangle\) characterised by the gauge invariant observables \(\overline{n}_{1}(t)=\langle a_{1}^{\dagger}a_{1}(t)\rangle\), \(\overline{n}_{2}(t)=\langle a_{2}^{\dagger}a_{2}(t)\rangle\) and \(\overline{s}_{x}(t)=\langle\sigma_{1,\mathbf{e}_{1}}^{x}(t)\rangle\), as well as the averaged expectation value of the local-symmetry generators \(\langle G_{1}(t)+G_{2}(t)\rangle/2\). The symbols correspond to the numerical evaluation in the full Hilbert space, whereas the lines display the analytical predictions for \(h=0\) (28). **(b)** Dynamics for the \(\mathbb{Z}_{2}\) gauge link when the electric field \(h\) is increased. The symbols correspond to the numerical simulations, and the lines to the corresponding analytical expressions (28).
els diminishes (see Fig. 5**(b)**). It is worth comparing to the case of Peierls' phases and static/background gauge fields of Sec. II.2. There, four modes were required to define a plaquette and get an effective flux that can lead to Aharonov-Bohm destructive interference, which inhibits the tunnelling of a single boson between the corners of the synthetic plaquette (see Fig. 2**(d)**). In the case of the \(\mathbb{Z}_{2}\) gauge model on a link, only two modes and a gauge qubit are required. The tunnelling of the boson is inhibited by increasing the energy cost of stretching/compressing the accompanying electric field line. As discussed in more detail below, for larger lattices, this electric-field energy penalty is responsible for the confinement of matter particles in this \(\mathbb{Z}_{2}\) gauge theory, a characteristic feature of this type of discrete gauge theories [36-61].
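A convenient way to quantify this suppression, stated here only as a check of the statement above and following directly from Eqs. (27)-(28), is the maximal population transferred to the right site,

\[\max_{t}\,\overline{n}_{2}(t)=\frac{t_{1,\mathbf{e}_{1}}^{2}}{\Omega_{0}^{2}}=\frac{1}{1+(h/t_{1,\mathbf{e}_{1}})^{2}}\approx\left(\frac{t_{1,\mathbf{e}_{1}}}{h}\right)^{2}\quad\text{for }h\gg|t_{1,\mathbf{e}_{1}}|,\]

so the residual tunnelling probability decays quadratically with the electric-field strength, which is the trend visible in Fig. 5**(b)**.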
### Two-boson sector: Dark states and entanglement between modes of the matter fields
Let us now move to the two-particle case, and describe how the connection to well-known effects in quantum optics can be pushed further depending on the exchange statistics. A pair of fermions can only occupy the state \(|1_{1}\rangle\otimes|+_{1,\mathbf{e}_{1}}\rangle\otimes|1_{2}\rangle\), and does not display any dynamics due to the Pauli exclusion principle. On the other hand, if the particles are bosonic, the dynamics can be non-trivial and lead to interesting effects such as mode entanglement. Due to the \(U(1)\) symmetry and Gauss' law (25) for \(q_{1}=q_{2}=0\), the physical subspace is now spanned by three different states
\[\begin{split}|\mathrm{L}\rangle&=|2_{1}\rangle \otimes|-_{1,\mathbf{e}_{1}}\rangle\otimes|0_{2}\rangle\,,\\ |\mathrm{C}\rangle&=|1_{1}\rangle\otimes|+_{1, \mathbf{e}_{1}}\rangle\otimes|1_{2}\rangle\,,\\ |\mathrm{R}\rangle&=|0_{1}\rangle\otimes|-_{1, \mathbf{e}_{1}}\rangle\otimes|2_{2}\rangle\,.\end{split} \tag{29}\]
A pair of charges sitting on the same site has a vanishing net \(\mathbb{Z}_{2}\) charge \(1\oplus 1=(1+1)\,\mathrm{mod}\,2=0\), and cannot act as a source/sink of electric field. Therefore, the L and R states in Eq. (29) do not sustain any electric field. On the other hand, when the pair of \(\mathbb{Z}_{2}\) charges occupies the two different sites, Gauss' law imposes that an electric field line must be established at the link. Since creating this electric field costs energy, these three levels are then separated in energy by \(2h\), and the gauge-invariant tunnelling of the Hamiltonian in Eq. (22) leads to a \(\Lambda\)-scheme familiar from quantum optics (see Fig. 6).
As is known to occur for three-level atoms [227, 228], one can find the so-called bright \(|\mathrm{B}\rangle=(|\mathrm{L}\rangle+|\mathrm{R}\rangle)/\sqrt{2}\) and dark \(|\mathrm{D}\rangle=(|\mathrm{L}\rangle-|\mathrm{R}\rangle)/\sqrt{2}\) states, which here correspond to the symmetric and anti-symmetric superpositions of the doubly-occupied sites at the left and right sites. In general, the state of the system can be expressed as a superposition of \(|\mathrm{B}\rangle\), \(|\mathrm{D}\rangle\) and \(|\mathrm{C}\rangle\), namely \(|\Psi_{\mathrm{phys}}(t)\rangle=d(t)\,|\mathrm{D}\rangle+c_{b}(t)\,|\mathrm{B}\rangle+c_{c}(t)\,|\mathrm{C}\rangle\). However, as the dark state decouples completely from the dynamics, its amplitude evolves by acquiring a simple phase \(d(t)=\mathrm{e}^{\mathrm{i}ht}d(0)\). Conversely, the amplitudes of the remaining states mix and display periodic Rabi oscillations \(\mathbf{c}(t)=\mathrm{e}^{-\mathrm{i}\tilde{\Omega}_{0}t\,\tilde{\mathbf{n}}\cdot\mathbf{\sigma}}\mathbf{c}(0)\) where, in this case, \(\mathbf{c}(t)=(c_{b}(t),c_{c}(t))^{\mathrm{t}}\), and
\[\tilde{\Omega}_{0}=\sqrt{4t_{1,\mathbf{e}_{1}}^{2}+h^{2}},\;\;\;\tilde{\mathbf{n} }=\frac{1}{\tilde{\Omega}_{0}}(2t_{1,\mathbf{e}_{1}},0,h). \tag{30}\]
We can now discuss a different manifestation of the gauge-invariant dynamics with respect to the single-particle case (28). Let us consider the initial state to be \(|\Psi_{\mathrm{phys}}(0)\rangle=|\mathrm{C}\rangle\) with one boson at each site, and an electric-field line at the link in between. If we look at the local number of bosons, we do not observe any apparent dynamics \(\overline{n}_{1}(t):=\langle a_{1}^{\dagger}a_{1}(t)\rangle=1=\langle a_{2}^{ \dagger}a_{2}(t)\rangle=:\overline{n}_{2}(t)\). However, looking into the electric field at the link, we find periodic Rabi flopping again, i.e.
\[\overline{s}_{x}(t):=\langle\sigma_{1,\mathbf{e}_{1}}^{x}(t)\rangle=1-\frac{8t _{1,\mathbf{e}_{1}}^{2}}{\tilde{\Omega}_{0}^{2}}\sin^{2}(\tilde{\Omega}_{0}t). \tag{31}\]
Since the gauge field cannot have independent oscillations with respect to the matter particles, there must be a non-trivial dynamics within the matter sector which, nonetheless, cannot be inferred by looking at the local number of particles. In this context, it is the interplay of the superposition principle of quantum mechanics and gauge symmetry, which underlies a neat dynamical effect. This effect becomes manifest by inspecting the state after a single exchange period \(\Delta_{\mathrm{ex}}=\pi/2\tilde{\Omega}_{0}\) for \(h=0\). After this time, a boson can either tunnel to the left or to the right. In both cases, the electric field string compresses, since a doubly-occupied site amounts to a vanishing net \(\mathbb{Z}_{2}\) charge, and there is thus no sink/source of electric field. Accordingly, when the bosons tunnel along either path respecting gauge invariance, the state ends up with the same link configuration, namely \(|-\rangle_{1,\mathbf{e}_{1}}\). Then, according to the superposition principle, both paths must be added, and the state of the system at time \(t_{e}\) is given by
\[|\Psi_{\mathrm{phys}}(\Delta_{\mathrm{ex}})\rangle=\frac{1}{\sqrt{2}}\left(|2_{ 1},0_{2}\rangle+|0_{1},2_{2}\rangle\right)\otimes|-_{1,\mathbf{e}_{1}}\rangle\,. \tag{32}\]
We see that, as a consequence of the dynamics, mode entanglement [229] has been generated in the matter sector since the state cannot be written as a separable state \(|\Psi_{\mathrm{phys}}(\Delta_{\mathrm{ex}})\rangle\neq P(a_{1}^{\dagger})Q(a_{2}^{\dagger})\,|0_{1},0_{2}\rangle\otimes|-_{1,\mathbf{e}_{1}}\rangle\) for any polynomials \(P,Q\). The specific state (32) in the matter sector is a particular type of NOON state; such states have been studied in the context of metrology [230, 231, 232, 233, 234]. Note that this state cannot be distinguished from the initial state if one only looks at the local
Figure 6: \(\Lambda\)**-scheme for 2-boson \(\mathbb{Z}_{2}\)-invariant tunnelling:** In the left panel, we depict the three possible states in Eq. (29) for the distributions of the \(\mathbb{Z}_{2}\) charges and electric field. In the right panel, we depict the quantum-optical level scheme, in which the gauge-invariant tunneling couples the \(|\mathrm{L}\rangle\) and \(|\mathrm{R}\rangle\) states to the state \(|\mathrm{C}\rangle\) with one boson at each site, and an electric-field string in the link. The electric field \(h\) acts as a detuning of these transitions, leading to a \(\Lambda\)-scheme.
boson numbers \(\overline{n}_{1}(t)=\overline{n}_{2}(t)=1\). The non-trivial dynamics becomes instead manifest via the link field and the quantum-mechanical mode-mode correlations.
In Fig. 7, we present a comparison of the analytical predictions with the corresponding numerical results where, once more, we do not restrict to the basis in Eq. (29), nor to the 2-boson subspace. We initialise the system in \(\left|\Psi(0)\right\rangle=\left|1_{1}\right\rangle\left|+_{1,\mathbf{e}_{1}}\right\rangle\left|1_{2}\right\rangle\), numerically truncate the Hilbert space such that \(n_{i}\leq n_{\text{max}}\), and compute the Schrödinger dynamics for the \(\mathbb{Z}_{2}\)-link Hamiltonian (22) after this truncation. In Fig. 7**(a)**, we represent these numerical results with lines for the observables \(\overline{n}_{1}(t),\overline{n}_{2}(t),\overline{s}_{x}(t)\), as well as the average of the local-symmetry generators \(\langle G_{1}(t)+G_{2}(t)\rangle/2\). Once again, these numerical results agree perfectly with the corresponding analytical expressions (31), which are represented by the symbols. In Fig. 7**(b)**, we show the fidelity of the system state with respect to the NOON state of Eq. (32), namely \(\mathcal{F}_{\text{NOON}}(t)=\left|\left\langle\Psi_{\text{phys}}(\Delta_{\mathrm{ex}})\right|\mathrm{e}^{-\mathrm{i}H_{\text{eff}}t}\left|\Psi(0)\right\rangle\right|^{2}\), at different evolution times. We see that this fidelity tends to unity periodically, at the exchange times \(t=(2m-1)\Delta_{\mathrm{ex}}\) with \(m\in\mathbb{Z}^{+}\). Note that the timescale in the horizontal axis is the same as the one for the one-particle case in Fig. 5, but the periodic oscillations are twice as fast. This two-fold speed-up is caused by the bosonic enhancement due to the presence of two particles in the initial state, providing a \(\sqrt{2}\) factor, and the enhancement due to the bright state, which brings the additional \(\sqrt{2}\) factor. This total speed-up is the only difference if one compares the dynamics of the bosonic sector with that of a standard beam splitter leading to the Hong-Ou-Mandel interference [235]. In fact, in the trapped-ion literature, the bare tunneling terms between a pair of vibrational modes are commonly referred to as a beam splitter due to the formal analogy with the optical device that splits an incoming light mode into the transmitted and reflected modes [236].
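As a complement to Fig. 7, the following short sketch (again QuTiP in Python, our own illustration rather than the authors' code, with illustrative parameter values) reproduces the two-boson dynamics starting from \(|\mathrm{C}\rangle\) and the fidelity with the NOON state of Eq. (32).

```python
# Illustrative sketch (QuTiP): two-boson Z2-link dynamics and NOON-state fidelity.
import numpy as np
from qutip import destroy, qeye, sigmax, sigmaz, tensor, basis, sesolve, expect

n_max = 7
t_link, h = 1.0, 0.0

a1 = tensor(destroy(n_max + 1), qeye(2), qeye(n_max + 1))
sz = tensor(qeye(n_max + 1), sigmaz(), qeye(n_max + 1))
sx = tensor(qeye(n_max + 1), sigmax(), qeye(n_max + 1))
a2 = tensor(qeye(n_max + 1), qeye(2), destroy(n_max + 1))
H = t_link * (a2.dag() * sz * a1 + a1.dag() * sz * a2) + h * sx

plus = (basis(2, 0) + basis(2, 1)).unit()
minus = (basis(2, 0) - basis(2, 1)).unit()
psiC = tensor(basis(n_max + 1, 1), plus, basis(n_max + 1, 1))        # |C>, Eq. (29)
noon = (tensor(basis(n_max + 1, 2), minus, basis(n_max + 1, 0))
        + tensor(basis(n_max + 1, 0), minus, basis(n_max + 1, 2))).unit()   # Eq. (32)

Omega0 = np.sqrt(4 * t_link**2 + h**2)                               # Eq. (30)
times = np.linspace(0.0, 2.0 * np.pi / Omega0, 400)
states = sesolve(H, psiC, times).states

s_x = [expect(sx, s) for s in states]                 # Rabi flopping of Eq. (31)
F_noon = [abs(noon.overlap(s)) ** 2 for s in states]  # peaks near t = pi/(2*Omega0)
```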
## IV Trapped-ion toolbox: phonons and qubits
We will now discuss two different schemes for implementing the state-dependent parametric modulation (19) experimentally. The first scheme (I) is based on trapped-ion analog quantum simulators that generalise straightforwardly from Eq. (8). The second scheme (II) exploits recent ideas developed in the context of continuous-variable quantum computing [237].
Before describing these schemes, we first review the progress of trapped-ion-based quantum simulations for lattice gauge theories. As discussed in [238; 107; 239], certain gauge theories can be mapped exactly onto spin models that represent the fermionic matter with effective long-range interactions mediated by the gauge fields. Following these ideas, the \(U(1)\) Schwinger model of quantum electrodynamics in 1+1 dimensions has been simulated in recent trapped-ion experiments, both digitally [107; 119] and with variational quantum eigensolvers [111]. As discussed in [239; 240], there are theoretical proposals to generalise this approach to gauge theories in 2+1 dimensions. Although not considered in the specific context of trapped ions, digital quantum simulators and variational eigensolvers have also recently been considered for \(\mathbb{Z}_{2}\) gauge theories [241; 242; 243; 244]. Rather than eliminating the gauge fields as in the cases above [238; 107; 239], one could consider the opposite, and obtain effective models for the gauge fields after eliminating the matter content [245; 246].
In order to move beyond those specific models, it would be desirable to simulate matter and gauge fields on the same footing. Trapped-ion schemes for the quantum-link approach to the Schwinger model have been proposed in [247; 248]. In particular, for the specific spin-\(1/2\) representation of the link operators, the gauge-invariant tunneling becomes a three-spin interaction. This could be implemented using only the native two-spin interactions in trapped-ion experiments and by imposing an additional energetic Gauss penalty [247]. Alternatively, one may also generate three-spin couplings [248] directly by exploiting second-order sidebands that use the phonons as carriers of these interactions [249; 250; 251]. We note that there have also been other proposals [252; 253] to use the motional modes to encode the \(U(1)\) gauge field, whereas the fermionic matter is represented by spin-\(1/2\) operators. In this
case, the gauge-invariant tunneling can be achieved via other second-sideband motional couplings [252], or by combining digital and analog ingredients in a "hybrid" approach [253]. A different possibility is to use the collective motional modes to simulate bosonic matter and reserve the spins to represent the quantum link operators for the gauge fields [252]. In this way, one can simulate a quantum link model provided that all the collective vibrational modes can be individually addressed in frequency space [252], which can be complicated by frequency crowding as the number of ions increases. We note that engineering the collective-motional-mode couplings has also been recently considered in the context of continuous-variable quantum computing, boson sampling, and quantum simulation of condensed-matter models [254, 255, 256, 257]. In the following, we present a trapped-ion scheme for the quantum simulation of \(\mathbb{Z}_{2}\) gauge theories based on our previous idea of a state-dependent parametric tunneling (19), and using motional states along two different transverse directions, and a pair of electronic states to encode the particles and the gauge field.
### Scheme I: Analog scheme for the \(\mathbb{Z}_{2}\) link
#### IV.1.1 Light-shift-type parametric tunneling
In section II.2, we showed that the effective Peierls' tunneling of Eq. (13) between the two local transverse vibrations in an ion chain could be synthesised by exploiting the optical potential (8) created by a far-detuned two-beam laser field. There, we considered that all of the ions were initialised in the same ground-state level. However, depending on the nuclear spin of the ions, the ground-state manifold can contain a variety of levels \(\{|s\rangle\}\), which can be used to generalise the scheme towards the state-dependent parametric tunneling of Eq. (19). In general, provided that the two-beam field is far-detuned from any direct transition, and that its beat note is also far-detuned from any Raman transition between ground state levels, the optical light-shift potential of Eq. (8) becomes state-dependent [196], namely
\[V_{1}(t)=\sum_{n,n^{\prime}=1,2}\sum_{i,s}\Omega_{n,n^{\prime}}^{(s)}\ket{s_{i}}\bra{s_{i}}\,\mathrm{e}^{\mathrm{i}(\mathbf{k}_{\mathrm{L},n}-\mathbf{k}_{\mathrm{L},n^{\prime}})\cdot\mathbf{r}_{i}-\mathrm{i}(\omega_{\mathrm{L},n}-\omega_{\mathrm{L},n^{\prime}})t}+\mathrm{H.c.}. \tag{33}\]
Here, \(\Omega_{n,n^{\prime}}^{(s)}\) is the amplitude of the light-shift terms discussed after Eq. (8), in which the corresponding Rabi frequencies now refer to the particular ground-state level \(|s_{i}\rangle\) of the \(i\)-th ion involved in the two virtual transitions. These light shifts then depend on the specific state and the intensity and polarisation of the laser fields. As discussed in the context of state-dependent dipole forces [258, 196, 259], one can focus on a particular pair of states \(s_{1},s_{2}\) and tune the polarisation, detuning and intensity of the light, such that the corresponding amplitudes for the crossed beat note terms attain a specific differential value [258, 259]. In the present case, we consider that this amplitude is the opposite for each of the electronic states. Since they are used to encode the aforementioned qubit, we will denote them by \(s_{1},s_{2}\rightarrow\uparrow,\downarrow\) from now on, thus assuming the differential crossed terms
\[\Omega_{\mathrm{d}}:=\Omega_{1,2}^{(\uparrow)}\eta_{x}\eta_{y}=-\Omega_{1,2}^{(\downarrow)}\eta_{x}\eta_{y}. \tag{34}\]
One can now follow the same steps as in Sec. III, introducing the local transverse phonons via the position operators, and performing a Lamb-Dicke expansion assuming Eq. (9). Using the same set of constraints, i.e. Eq. (10), we find that
\[V_{1}(t)\approx\sum_{i}\Delta E_{\mathrm{ac}}\sigma_{i}^{z}+\sum_{i}\Omega_{\mathrm{d}}\sigma_{i}^{z}\cos(\phi_{i}-\omega_{\mathrm{d}}t)\,a_{i,y}^{\dagger}a_{i,x}+\mathrm{H.c.}, \tag{35}\]
where all parameters have been defined in Eq. (12). We are already close to the idealised situation. It may seem that, as in the case of the background gauge fields of Eq. (11), the scheme already works for an arbitrary number of ions.
However, even if the form of the parametric term of Eq. (35) does in fact generalise the single-link case of Eq. (19) to the entire ion chain, a simple counting argument shows that the effective model cannot achieve \(\mathbb{Z}_{2}\) gauge invariance. For a string of \(N\) ions, we have \(2N\) local motional modes along the transverse directions, which lead to the synthetic ladder of Fig. 2**(a)**. There are \(2(N-1)+N=3N-2\) links in this ladder, and each of them requires a gauge qubit to mediate the tunneling while preserving a local \(\mathbb{Z}_{2}\) symmetry. Since we only have \(N\) trapped-ion qubits at our disposal, i.e. one qubit per ion, it is not possible to build a gauge-invariant model for the synthetic ladder in a straightforward manner. Instead, we will present in Sec. V a solution to this problem by introducing a mechanism that we call synthetic dimensional reduction.
Figure 8: **Trapped-ion synthetic \(\mathbb{Z}_{2}\) gauge theory on a link:** Schematic representation of the single-ion system that can realise the \(\mathbb{Z}_{2}\) gauge theory on a synthetic link (22). On the left, we depict an ion vibrating in the transverse \(x\) direction, and the inset represents the state of the corresponding qubit in \(|-_{1}\rangle=(|\uparrow\rangle_{1}-|\downarrow_{1}\rangle)/\sqrt{2}\). On the right, we can see how, as a consequence of the trapped-ion effective Hamiltonian (36), the vibrational excitation along \(x\) is transferred into a vibrational excitation along \(y\), while simultaneously flipping the qubit into \(|+_{1}\rangle=(|\downarrow\rangle_{1}+|\uparrow_{1}\rangle)/\sqrt{2}\). This dynamics, which is fully consistent with the local gauge symmetry, can be engineered by shining a far-detuned two-beam laser field with wave-vectors associated to each frequency represented by green and blue arrows, leading to a beat note along the grey arrow that yields the desired term (36).
Prior to that, let us discuss the minimal case where gauge invariance can be satisfied in the trapped-ion experiment: a single \(\mathbb{Z}_{2}\) link. This link requires a single ion: one gauge qubit for the link, and two motional modes for the matter particles, which can be the vibrations along any of the axes. In Fig. 8, we consider the two transverse modes, and thus restrict Eq. (35) to a single ion. Following the same steps as in the derivation of Eq. (22), we move to an interaction picture and neglect rapidly rotating terms under the conditions of Eq. (10). We obtain a time-independent term that corresponds to a \(\mathbb{Z}_{2}\) gauge-invariant tunneling
\[V_{1}(t)\approx\frac{\Omega_{\rm d}}{2}\mathrm{e}^{\mathrm{i}\phi_{\rm d}}a_{ 1,y}^{\dagger}\sigma_{1}^{z}a_{1,x}+\mathrm{H.c.}, \tag{36}\]
where the microscopic parameters are
\[\Omega_{\rm d}=|\Omega_{1,2}|\eta_{\rm x}\eta_{\rm y}\quad\text{and}\quad\phi_ {\rm d}=\mathbf{k}_{\rm d}\cdot\mathbf{r}_{1}^{0}+\arg(-\Omega_{1,2}). \tag{37}\]
At this point, the driving phase \(\phi_{\rm d}\) is completely irrelevant and can be set to zero without loss of generality. Identifying the trapped-ion operators with those of the lattice gauge theory
\[\begin{split}& a_{1,x},a_{1,x}^{\dagger}\mapsto a_{1},a_{1}^{ \dagger},\\ & a_{1,y},a_{1,y}^{\dagger}\mapsto a_{2},a_{2}^{\dagger},\\ &\sigma_{1}^{x},\sigma_{1}^{z}\quad\mapsto\sigma_{1,\mathbf{\varepsilon }_{1}}^{x},\sigma_{1,\mathbf{\varepsilon}_{1}}^{z},\end{split} \tag{38}\]
we obtain a realisation of the \(\mathbb{Z}_{2}\) gauge-invariant tunneling on a link (22) using a single trapped ion, such that
\[t_{1,\mathbf{\varepsilon}_{1}}=\frac{\Omega_{\rm d}}{2}=\frac{|\Omega_{1,2}|}{2} \eta_{\rm x}\eta_{\rm y}. \tag{39}\]
As explained above, this exploits the qubit as the gauge field, and two vibrational modes to host the \(\mathbb{Z}_{2}\)-charged matter. If the above condition (34) is not satisfied, one can still obtain a state-dependent tunneling, but this would not have the desired local invariance under the above \(\mathbb{Z}_{2}\) gauge group (25). Nonetheless, such state-dependent tunneling can be interesting for other purposes in the context of hybrid discrete-continuous variable quantum information processing, as realised in recent trapped-ion experiments [260]. In the following subsection, we will present alternative schemes that do not depend on this condition for the differential light shift.
For a single \(\mathbb{Z}_{2}\) link, any two of the three motional modes can be used. In the following simulations, with the trapped-ion parameters of the considered setup, we encode the matter particles in an axial (\(z\)) and a transverse (\(x\)) mode such that we can benefit from the larger Lamb-Dicke parameter of the axial mode and higher frequency separation between the motional modes. We model numerically the possible deviations of a realistic trapped-ion implementation from the idealised expressions used in Fig. 5. To quantify these deviations, we numerically solve the trapped-ion evolution, starting in \(|{\rm L}\rangle\) (26), where we apply the interaction for a duration \(t=\Delta t_{\rm ex}\) such that, by the end of it, the squared overlap with the desired state \(|{\rm R}\rangle\) is maximised. If we consider only the idealised tunneling term, Eq. (36), the exchange duration is given by \(\Delta t_{\rm ex}=\pi/(2t_{1,\mathbf{\varepsilon}_{1}})\). In the simulation of the more realistic trapped-ion case, there are additional terms neglected in the ideal case that can change the optimal exchange duration. We thus find \(\Delta t_{\rm ex}\) by maximising the fidelity \(\mathcal{F}(t)=|\langle{\rm R}|\psi(t)\rangle|^{2}\) of achieving the desired state \(|{\rm R}\rangle\). We also calculate the expectation value of the local symmetry generators (24), \(\langle G_{1}(t)+G_{2}(t)\rangle/2\), to check if the effective gauge symmetry is fulfilled. Moreover, when introducing the electric field with magnitude \(h\) in the simulations, we plot the maximum contrast \(\mathcal{C}\) in the oscillations of \(\overline{s}_{x}(t)\) (28). Thus, we can evaluate the reduced tunneling probability caused by the energy penalty for stretching/compressing the electric field line as one increases \(h>0\).
We consider a realistic parametrically-driven trapped-ion system and compare it to the idealised gauge-invariant Hamiltonian (22). For the simulations presented below, we perform direct numerical integration of the relevant Hamiltonians using the QuantumOptics.jl package in Julia [261]. We consider the tunneling dynamics arising due to spin-dependent light shifts, described by the full Hamiltonian (33), using experimentally feasible parameters. In the simulation, we do not assume the Lamb-Dicke expansion and thus include possible off-resonant carrier excitations as well as other nonlinear terms neglected in Eq. (35). We use a single ion and two of its motional modes, which map to a \(\mathbb{Z}_{2}\) link connecting two bosonic matter sites (38). We restrict the simulation to the subspace of a two-level system together with two bosonic modes, each truncated at phonon number \(n_{\rm max}=7\). We consider a \({}^{88}\)Sr\({}^{+}\) ion confined in the setup presented in Refs. [262; 263]. The secular frequency of the axial in-phase mode can be set to \(\omega_{z}/2\pi=1.2\)MHz, while the radial secular frequency is \(\omega_{x}/2\pi=1.9\)MHz. The qubit states \(\left|\uparrow_{1}\right\rangle,\left|\downarrow_{1}\right\rangle\) can be defined by two ground state levels of the \(5S_{1/2}\) manifold shown in Fig. 9, or by the ground state \(5S_{1/2}\), \(m_{j}=-1/2\) and the metastable state \(4D_{5/2}\), \(m_{j}=-1/2\), leading to an optical qubit (quadrupole transition) discussed in the Appendix.
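To connect these numbers to the idealised dynamics, the following sketch (our own illustration in Python/QuTiP rather than the QuantumOptics.jl code used for the figures) keeps only the gauge-invariant tunnelling (36) with the strength (39), omitting the spurious terms of the full Hamiltonian (33), the pulse shaping, and any noise, and estimates the exchange duration by scanning the fidelity \(\mathcal{F}(t)=|\langle\mathrm{R}|\psi(t)\rangle|^{2}\); the mode-to-site assignment in the code is an arbitrary labelling choice.

```python
# Illustrative sketch (QuTiP): exchange duration of the ideal Z2 tunnelling (36)
# for the quoted 88Sr+ parameters; spurious terms of Eq. (33) are not included.
import numpy as np
from qutip import destroy, qeye, sigmaz, tensor, basis, sesolve

Omega12 = 2 * np.pi * 1.1e6           # differential light shift (rad/s)
eta_z, eta_x = 2 * 0.077, 2 * 0.043   # Lamb-Dicke factors quoted in the text
t1 = 0.5 * Omega12 * eta_x * eta_z    # effective tunnelling, Eq. (39)

n_max = 7
a_ax = tensor(destroy(n_max + 1), qeye(2), qeye(n_max + 1))  # axial mode -> site 1
sz = tensor(qeye(n_max + 1), sigmaz(), qeye(n_max + 1))      # gauge qubit on the link
a_tr = tensor(qeye(n_max + 1), qeye(2), destroy(n_max + 1))  # transverse mode -> site 2
H = t1 * (a_tr.dag() * sz * a_ax + a_ax.dag() * sz * a_tr)

minus = (basis(2, 0) - basis(2, 1)).unit()
psiL = tensor(basis(n_max + 1, 1), minus, basis(n_max + 1, 0))   # |L>, Eq. (26)
psiR = tensor(basis(n_max + 1, 0), minus, basis(n_max + 1, 1))   # |R>

times = np.linspace(0.0, 100e-6, 2001)                           # scan up to 100 us
states = sesolve(H, psiL, times).states
F = np.array([abs(psiR.overlap(s)) ** 2 for s in states])
dt_ex = times[np.argmax(F)]
print(f"dt_ex ~ {dt_ex * 1e6:.1f} us, 1/(4*dt_ex) ~ {1.0 / (4 * dt_ex) / 1e3:.1f} kHz")
# For these numbers this gives roughly 34 us and ~7 kHz, of the order of Fig. 11.
```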
For a ground state qubit, the light shifts are created by a far-detuned dipole-mediated Raman transition [260], whereas for
Figure 9: **Beam configuration for light-shift parametric tunneling in \({}^{88}\)Sr\({}^{+}\):** We consider the ground state qubit (cyan lines). The gauge-invariant tunneling is created via two far-detuned Raman transitions of detunings \(\Delta\) and \(\Delta+\delta\) from the auxiliary level (\(P_{3/2}\)), depicted by black arrows. These Raman transitions virtually couple the qubit states in the \(S_{1/2}\) level to an excited state in the \(P_{3/2}\) level. When the beat note of the two corresponding tones \(\delta\) is on resonance with the difference of two secular trap frequencies, we attain the desired state-dependent parametric tunneling.
an optical qubit, two off-resonant beams driving a quadrupole transition can be used. In both cases, the two beams are assumed to be counter-propagating \(\mathbf{k}_{\mathrm{L,1}}=-\mathbf{k}_{\mathrm{L,2}}=:\mathbf{k}\), such that the beat note wave-vector is \(\mathbf{k}_{\mathrm{d}}=2\mathbf{k}\). Moreover, we assume that the angle between \(\mathbf{k}_{\mathrm{d}}\) and the axial mode (\(z\)) is \(45^{\circ}\), while the angle with respect to the transverse mode (\(x\)) is \(60^{\circ}\). The two Raman beams near \(\lambda=402\) nm are detuned by \(\Delta/2\pi=10\) THz from the \(S_{1/2}\leftrightarrow P_{3/2}\) transition (see Fig. 9). The Lamb-Dicke factors (9) of the two motional modes are \(\eta_{z}=2\times 0.077\) and \(\eta_{x}=2\times 0.043\). In this system, light shifts of up to \(\Omega_{1,2}/2\pi=1.1\) MHz can be achieved. In the full Hamiltonian (33), we set the beam detunings to \(\delta=\omega_{x}-\omega_{z}\), resulting in a beat note frequency of \(\omega_{\mathrm{d}}=\delta\) in Eq. (10).
The simulated dynamics at \(\Omega_{1,2}/2\pi=1.1\) MHz is shown in Fig. 10, where the coloured lines represent the analytical predictions for the various observables in Eq. (28) using the effective tunneling strength (39). The coloured symbols stand for the full numerical simulations including non-linear terms (33) beyond the idealised Lamb-Dicke expansion (35). This figure is thus the trapped-ion analog of Fig. 5**(a)** following Scheme I. We note that, in order to find a better agreement with the idealised evolution (28), we have incorporated an adiabatic pulse shaping of the light-matter coupling that restricts the minimal duration of the real-time dynamics as discussed in the caption of Fig. 10. For this specific choice of parameters, we see that the exchange duration of the tunneling of the phonon and the stretching of the electric-field line is about several tens of \(\mu\)s, which is sufficiently fast compared to other possible sources of noise such as heating and dephasing, as discussed in more detail below. Once the viability of trapped-ion scheme I for the quantum simulation of the \(\mathbb{Z}_{2}\) gauge-invariant tunneling has been demonstrated, we can make a more detailed analysis of the errors, and also consider including the electric field term.
In Fig. 11, we present simulations of the resulting hopping duration \(\Delta t_{\mathrm{ex}}\), the fidelity error \(1-\mathcal{F}\) and the gauge-invariance operator \(\left\langle G_{1}(t)+G_{2}(t)\right\rangle/2\) as a function of applied light shift \(\Omega_{1,2}\), which can be increased by using higher laser intensities or lower Raman detunings in order to obtain a faster gauge-invariant dynamics (39). When comparing the dynamics of the full Hamiltonian to the ideal gauge-invariant tunneling (39), we observe excellent agreement with \(\Delta t_{\mathrm{ex}}=\pi/2t_{1,\mathbf{\epsilon}_{1}}\). The fidelity error and the symmetry operator \(\left\langle G_{1}(\Delta t_{\mathrm{ex}})+G_{2}(\Delta t_{\mathrm{ex}}) \right\rangle/2\) also remain very low even when including the trapped-ion additional terms such as the off-resonant carrier, which underlies the adequacy of the considered parameters for this specific scheme. We can achieve an effective tunneling coupling rate of up to \(7.1\) kHz inferred as \(1/(4\Delta t_{\mathrm{ex}})\). The desired tunneling Hamiltonian can also be synthesised for the optical qubit using two detuned \(674\)-nm beams, see Appendix A.
Given the achievable \(\Omega_{1,2}\) of current systems and the low coupling rate of the second-order process \(\propto\eta_{x}\eta_{z}\) (39), achieving large tunneling rates requires pushing the parameters to a regime that violates the \(\delta\gg|\Omega_{1,2}|\) requirement. This results in off-resonant driving of spurious interactions, most prominently direct off-resonant carrier coupling. As mentioned above, we minimise this effect by employing adiabatic amplitude pulse shaping [264] with a rise time of \(10\,\upmu\)s.
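For completeness, a minimal sketch of such an amplitude ramp is given below; the \(\sin^{2}\) edge shape and the total pulse length are assumptions made here for illustration (Ref. [264] and the experiment may use a different profile), and the envelope would simply multiply the time-dependent coupling in the simulations above.

```python
# Minimal sketch of an adiabatic amplitude envelope with 10-us rising/falling edges.
import numpy as np

def envelope(t, t_rise=10e-6, t_total=60e-6):
    """Smooth 0 -> 1 -> 0 amplitude profile with sin^2 edges (assumed shape)."""
    if t < 0.0 or t > t_total:
        return 0.0
    if t < t_rise:
        return np.sin(0.5 * np.pi * t / t_rise) ** 2
    if t > t_total - t_rise:
        return np.sin(0.5 * np.pi * (t_total - t) / t_rise) ** 2
    return 1.0

# In QuTiP it would enter as a time-dependent coefficient, e.g.
#   H_t = [[H_tunnel, lambda t: envelope(t)]]
```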
#### IV.1.2 Molmer-Sorensen-type parametric tunneling
In this section, we present an alternative scheme for synthesising the gauge-invariant tunneling that could enable a technically simpler implementation when introducing the electric field term, as we will discuss later on.
The tunneling Hamiltonian now arises from a "bichromatic" field that is no longer far-detuned from the qubit transition but, instead, has two components symmetrically detuned from the qubit frequency. For the ground state qubit addressed with a Raman configuration, the "bichromatic" field can be achieved by having one of the Raman beams at \(\omega_{\mathrm{L}}+\omega_{0}\) and the other counter-propagating Raman beam consisting of two tones at \(\omega_{\mathrm{L}}\pm\delta\), i.e. \(\mathbf{k}_{\mathrm{d}}=2\mathbf{k}\) (see Fig. 12). For the optical qubit, one can obtain a similar light-matter interaction by addressing it with a single beam consisting of two tones at \(\omega_{\mathrm{0}}\pm\delta\), i.e. \(\mathbf{k}_{\mathrm{d}}=\mathbf{k}\) (see Fig. 12). The Rabi frequency \(\Omega\) of the blue- (\(+\delta\)) and red-detuned (\(-\delta\)) tone must be the same. Instead of getting the state-dependent light-shift potential of Eq. (33), one now finds the following term in the interaction picture
\[\tilde{V}_{1}(t)=\Omega\cos(\delta t)\sum_{i}|\uparrow_{i}\rangle\left\langle\downarrow_{i}\right|\mathrm{e}^{\mathrm{i}\mathbf{k}_{\mathrm{d}}\cdot\mathbf{r}_{i}}+\mathrm{H.c.}. \tag{40}\]
As in our previous derivation, we expand the ion positions \(\mathbf{r}_{i}=\mathbf{r}_{i}^{0}+\sum_{\alpha}\mathbf{e}_{\alpha}\frac{1}{\sqrt{2m\omega_{\alpha}}}(a_{i,\alpha}+a_{i,\alpha}^{\dagger})\) in the Lamb-Dicke parameters assuming Eq. (9). By focusing again on a single trapped ion, and choosing the detuning of the beams to be resonant with \(\delta=\omega_{x}-\omega_{z}\), we reach the frequency-conversion requirement and obtain an effective tunneling that simultaneously flips the qubit
\[\tilde{V}_{1}(t)\approx\frac{\Omega_{\mathrm{d}}}{2}a_{1,z}^{\dagger}\sigma_{1 }^{x}a_{1,x}+\mathrm{H.c.},\ \ \Omega_{\mathrm{d}}=\Omega\eta_{x}\eta_{z}. \tag{41}\]
Figure 10: \(\mathbb{Z}_{2}\) **gauge link dynamics using Scheme I**: We simulate the \(\mathbb{Z}_{2}\) dynamics using the Raman-based light-shift parametric tunneling and try to replicate Fig. 5**(a)**. The markers are full numerical simulations including non-linear terms (33) beyond the desired Lamb-Dicke expansion, while the continuous lines are analytical predictions in Eq. (28) using the effective tunneling strength (39) and \(h=0\). The adiabatic pulse shaping sets the minimum pulse duration to the rising and falling edge (\(10\,\upmu\)s each).
The gauge-invariant tunneling rate (22) then reads
\[t_{1,\mathbf{e}_{1}}=\frac{\Omega_{\mathrm{d}}}{2}=\frac{|\Omega|}{2}\eta_{x}\eta_{z}, \tag{42}\]
which is analogous to the previous case of Eq. (37). We thus obtain a rotated version of the gauge-invariant tunneling in Eq. (36), the only difference being that the operators need to be transformed as \(\sigma_{1}^{z}\mapsto\sigma_{1}^{x}\), and \(a_{1,y},a_{1,y}^{\dagger}\mapsto a_{1,z},a_{1,z}^{\dagger}\), which must also be considered in the mapping to the operators of the lattice gauge theory in Eq. (38). Accordingly, the generators of the local symmetries now read
\[G_{1}=\mathrm{e}^{\mathrm{i}\pi a_{1}^{\dagger}a_{1}}\sigma_{1,\mathbf{e}_{1} }^{z},\quad G_{2}=\sigma_{1,\mathbf{e}_{1}}^{z}\mathrm{e}^{\mathrm{i}\pi a_{ 2}^{\dagger}a_{2}}. \tag{43}\]
This configuration is commonly used to drive two-qubit entangling gates according to a Molmer-Sorensen scheme [265]. The difference is that for the tunneling term, we set \(\delta\) to the frequency difference of two motional modes \(\omega_{x}-\omega_{z}\), instead of a motional mode frequency used for the entangling gate, e.g. \(\delta=\omega_{z}\). Then, it is possible to drive higher-order terms in the Lamb-Dicke expansion that do not lead to a state-dependent force but, instead, induce the spin-conditioned tunneling of Eq. (41). This is a state-dependent beam splitter interaction in the Hadamard basis \(|\pm_{1}\rangle\). We refer to this scheme as a Molmer-Sorensen(MS)-type parametric tunneling.
Once again, we numerically integrate the full dynamics for (40) and compare it to the idealized tunneling term (41). For the Raman scheme, we assume the same experimental parameters as in the previous section. For implementing this on the optical qubit we use the quadrupole 674 nm transition. The Lamb-Dicke parameters are \(\eta_{z}=0.05\) and \(\eta_{z}=0.024\) and consider Rabi-frequencies of up to \(\Omega/2\pi=1.1\,\mathrm{MHz}\). As in the light-shift scheme, we restrict ourselves to the subspace of a two-level system and two bosonic modes truncated at \(n_{\mathrm{max}}=7\) phonons. We obtain a maximum effective tunneling coupling strength \(3.22\,\mathrm{kHz}=1/(4\Delta t_{\mathrm{ex}})\) for the Raman scheme, and a slower one \(0.30\,\mathrm{kHz}=1/(4\Delta t_{\mathrm{ex}})\) for the quadrupole scheme (see Fig. 13). We note that a similar principle has been recently used to generate single-mode squeezing in reference [266], which can then be used for the quantum simulation of spin models with multi-spin interactions [248; 249; 267]. We add adiabatic pulse shaping to the simulation as it enables smooth transitioning into the interaction picture and suppresses off-resonant (non-commuting) carrier excitations. This effectively reduces the strength of the tunneling but, importantly, it retains the state dependence. The non-commuting off-resonant carrier term sets a limit on the achievable interaction magnitude; this is reflected by the global minimum in tunneling duration (first row, Fig. 13).
Figure 11: **Light-shift \(\mathbb{Z}_{2}\) tunneling with Raman couplings:** Numerical simulations of the exchange duration \(\Delta t_{\mathrm{ex}}\) (upper panel), state infidelity \(1-\mathcal{F}\) (middle panel), and Gauss’ symmetry operator \(\left(G_{1}+G_{2}\right)/2\) (lower panel) as a function of coupling strength \(\Omega_{1,2}\). The results are shown for the full Hamiltonian (33) (black) and the idealized gauge-invariant tunneling (36) (green). The green solid line in the upper panel is the expected analytic dependence of the exchange duration on \(\Omega_{1,2}\), extracted from Eq. (39). For both the full and ideal Hamiltonian, the expectation value of the symmetry operator is consistent with \(0\), as desired, down to \(10^{-3}\).
Figure 12: **Molmer-Sørensen-type parametric drive:** For the ground state qubit (cyan lines), the Raman scheme is now near-resonant with the qubit frequency \(\omega_{0}+\delta\), and we need to introduce a third tone, here depicted by a blue arrow. For the optical qubit (magenta lines), which is driven directly via the quadrupole transition, we symmetrically detune the two tones (blue and red) about the qubit resonance by \(\pm\delta\).
#### IV.1.3 Implementation of the electric-field term
To realise the full \(\mathbb{Z}_{2}\) gauge model in Eq. (22), we also need a term that drives the qubit transition (20) with Rabi frequency \(\tilde{\Omega}_{\text{d}}\), which corresponds to the electric field \(h=\tilde{\Omega}_{\text{d}}/2\) in Eq. (23). The technique to induce this additional electric-field term depends on the specific scheme. For the scheme based on the light-shift potential (Sec. IV.1.1), one needs to add a field driving the qubit transition resonantly. For an optical qubit, this term would arise from a resonant laser driving the quadrupole-allowed transition. On the other hand, if the qubit is encoded in the ground state, this term can be induced by either a resonant microwave field or a pair of Raman laser beams. In both cases, trapped-ion experiments routinely work in the regime of Eq. (21), where the value of \(\tilde{\Omega}_{\text{d}}\) can be controlled very precisely by tuning the amplitude of the laser or microwave field [268]. Note that the resonance condition in Eq. (21) must be modified to account for the ac-Stark shifts shown in Eq. (35), namely
\[\tilde{\omega}_{\text{d}}=\omega_{0}+2\Delta E_{\text{ac}},\quad|\tilde{\Omega}_{\text{d}}|\ll 4(\omega_{0}+2\Delta E_{\text{ac}}). \tag{44}\]
This leads to the desired Hamiltonian
\[V_{1}(t)\approx\left(\frac{\Omega_{\text{d}}}{2}\text{e}^{\text{i}\phi_{ \text{d}}}a_{1,z}^{\dagger}\sigma_{1}^{z}a_{1,x}+\text{H.c.}\right)+\frac{ \tilde{\Omega}_{\text{d}}}{2}\sigma_{1}^{x}, \tag{45}\]
which maps directly onto the desired \(\mathbb{Z}_{2}\) gauge model on the link in Eqs. (22)-(23) with the new term playing the role of the electric field
\[h=\frac{\tilde{\Omega}_{\text{d}}}{2}. \tag{46}\]
We now present our numerical simulations for the light-shift type scheme as a function of the electric-field strength \(h\). In Fig. 14, we present the exchange duration \(\Delta t_{\text{ex}}\) defined at maximum state fidelity \(\mathcal{F}\), the maximum contrast \(\mathcal{C}\) in
Figure 14: **Light-shift \(\mathbb{Z}_{2}\) tunneling with Raman couplings in the presence of a non-zero electric field:** we simulate the hopping duration, contrast \(\mathcal{C}\) and symmetry operator \(\langle G_{1}+G_{2}\rangle/2\) as a function of electric field strength \(h\) relative to the coupling strength \(\Omega_{1,2}\). The results are shown for the full Hamiltonian (33) (black), including the additional carrier driving (20) with a modified resonant condition (44). We also show the results for the ideal gauge-invariant Hamiltonian (45) (green). The green solid line is the analytic dependence of the exchange duration and contrast respectively on \(\Omega_{1,2}\), extracted from Eqs. (27) and the effective couplings (39) and (46). For both the full and ideal Hamiltonian the expectation value of the symmetry operator is consistent with \(0\), as desired, down to \(10^{-3}\).
Figure 13: **Mølmer-Sørensen-type \(\mathbb{Z}_{2}\) tunneling:** with Raman couplings (left column) and quadrupole couplings (right column). Simulation of the exchange duration, fidelity error, and symmetry operator \(\langle G_{1}+G_{2}\rangle/2\) as a function of the coupling strength. The results are shown for the full Hamiltonian (40) (black) and the idealized gauge-invariant tunneling (41) (green). The green solid line in the upper panel is the expected analytic dependence of the exchange duration on \(\Omega\), extracted from Eq. (42). For both the full and ideal Hamiltonian, the expectation value of the symmetry operator is consistent with \(0\), as desired, down to \(10^{-3}\).
\(\overline{s}_{x}(t)\), and the expectation value of the gauge-symmetry generators \(\left\langle G_{1}+G_{2}\right\rangle/2\), all of them as a function of the ratio between the transverse electric field \(h\) and the differential ac Stark shift amplitude \(\Omega_{1,2}\). While the noncommuting off-resonant carrier coupling (\(z\) basis) present in the full Hamiltonian reduces the effect of the transverse term (\(x\) basis) in comparison to the ideal case, the gauge invariance is preserved.
For the Molmer-Sorensen-type scheme discussed in Sec. IV.1.2, the spin conditioning of the tunneling occurs in the Hadamard basis \(\left|\pm_{1}\right\rangle\), such that the effective electric field must also be rotated with respect to Eq. (22). We can introduce this term by simply shifting the centre frequency of the bichromatic laser field (40) relative to the qubit resonance by a detuning \(\delta_{\mathrm{s}}\). In a rotating frame, this modifies Eq. (40) by introducing an additional term, namely
\[\tilde{V}_{1}(t)\approx\left(\frac{\Omega_{\mathrm{d}}}{2}a_{1,z}^{\dagger} \sigma_{1}^{\mathrm{r}}a_{1,x}+\mathrm{H.c.}\right)+\frac{\delta_{\mathrm{s}} }{2}\sigma_{1}^{z}, \tag{47}\]
which leads to the effective electric-field term
\[h=\frac{\delta_{\mathrm{s}}}{2}. \tag{48}\]
This is a considerable advantage with respect to the light-shift scheme, as no additional tones are required to implement the electric field term. The resulting exchange duration \(\Delta t_{\mathrm{ex}}\), the maximum contrast \(\mathcal{C}\) in \(\overline{s}_{x}(t)\), and the gauge-symmetry generators \(\left\langle G_{1}+G_{2}\right\rangle/2\) are found through numerical simulations of the Hamiltonian (47) and compared to the ideal gauge tunneling (22) (see Fig. 15). As before, the presence of the off-resonant carrier coupling reduces the effective magnitude of the transverse term, but the gauge invariance is preserved.
#### IV.1.4 Experimental considerations
Let us now discuss in detail how the experiment would proceed. One of the advantages of trapped ions is that they also offer a variety of high-precision techniques for state preparation and readout [268]. For a single trapped ion, it is customary to perform optical pumping to the desired internal state, say \(\left|\uparrow\right\rangle\). One can then use laser cooling in the resolved-sideband limit for both vibrational modes, and prepare them very close to the vibrational ground state. Using a blue-sideband coupling directed along a particular axis, say the \(x\)-axis, one can flip the state of the qubit and, simultaneously, create a Fock state with a single vibrational excitation in the mode. We note that the initial state of the \(\mathbb{Z}_{2}\) gauge theory would correspond to \(\left|\mathrm{L}\right\rangle=\left|1_{1}\right\rangle\otimes\left|\downarrow_{1,\mathbf{e}_{1}}\right\rangle\otimes\left|0_{2}\right\rangle\), which is the rotated version of the one discussed previously and is thus directly valid for the Molmer-Sorensen-type scheme. For the light-shift scheme, one must apply a Hadamard gate to initialise the system in \(\left|\mathrm{L}\right\rangle=\left|1_{1}\right\rangle\otimes\left|-_{1,\mathbf{e}_{1}}\right\rangle\otimes\left|0_{2}\right\rangle\), which can be accomplished by driving a specific single-qubit rotation.
One can then let the system evolve for a fixed amount of time under the effective Hamiltonian in either Eq. (45) for the light-shift scheme, or Eq. (47) for the Molmer-Sorensen-type scheme. We have shown that the effective Hamiltonians approximate the ideal Hamiltonian accurately. After this real-time evolution, the laser fields are switched off. Then, the measurement stage starts, where one tries to infer the matter-gauge field correlated dynamics in Eq. (28). In order to do that, one would take advantage of the readout techniques developed in trapped ions [268], which typically map the information of the desired observable onto the qubit. After this mapping, the qubit can be projectively measured in the \(z\)-basis via state-dependent resonance fluorescence. In order to measure the electric field operator of the Molmer-Sorensen-type scheme \(\overline{s}_{z}(t)=\langle\sigma_{1,\mathbf{e}_{1}}^{z}(t)\rangle\), one can collect the state-dependent fluorescence. For the light-shift scheme, one needs to measure \(\overline{s}_{x}(t)=\langle\sigma_{1,\mathbf{e}_{1}}^{x}(t)\rangle\), which requires an additional single-qubit rotation prior to the fluorescence measurement. On the other hand, in order to infer the phonon population \(\langle a_{2}^{\dagger}a_{2}(t)\rangle\)=\(\langle a_{1,z}^{\dagger}a_{1,z}(t)\rangle\), and observe the gauge-invariant tunneling, one would need to map the vibrational information onto the qubit first prior to the fluorescence measurement [269]. All these techniques are well developed, see for example [270].
Figure 15: **Molmer-Sørensen \(\mathbb{Z}_{2}\) tunneling in the presence of a non-zero electric field:** with Raman couplings (left column) and quadrupole couplings (right column). Simulating the exchange duration, contrast, and symmetry operator \(\left\langle G_{1}+G_{2}\right\rangle/2\) as a function of electric field strength \(h\) relative to the coupling strength \(\Omega\). The results are shown for the full Hamiltonian (40) (black), including the additional carrier driven by a simple shift of the Molmer-Sørensen detuning (48). We also show the results for the ideal gauge-invariant Hamiltonian (47) (green). The green solid line in the upper panel is the expected analytic dependence of the exchange duration on \(\Omega\), extracted from Eqs. (27) and the effective couplings (42) and (48). For both the full and ideal Hamiltonian the expectation value of the symmetry operator is consistent with 0, as desired, down to \(10^{-3}\).
### Scheme II: Pulsed scheme for the \(\mathbb{Z}_{2}\) link
#### IV.2.1 Orthogonal-force parametric tunneling
We now discuss an alternative strategy to realise the \(\mathbb{Z}_{2}\) gauge link based on digital quantum simulation and the concatenation of gates. First, we focus on a new way of engineering the gauge-invariant tunneling term using two orthogonal state-dependent forces and, then, explain how it can be used to experimentally implement the \(\mathbb{Z}_{2}\) gauge model in Eq. (22). As before, we consider the case of a single \(\mathbb{Z}_{2}\) link, i.e. one single ion and two vibrational modes. Following the scheme proposed in Ref. [237] for hybrid discrete-continuous variable approaches in trapped-ion quantum information processing, we consider two orthogonal state-dependent forces acting on the two vibrational modes with Lamb-Dicke parameters \(\eta_{z}\) and \(\eta_{x}\), respectively. We thus start from two terms like Eq. (40), each of which will be tuned to yield a different state-dependent force, i.e. \(\mathbf{k}_{\mathrm{d,1}}||\mathbf{\mathrm{e}}_{x}\), and \(\mathbf{k}_{\mathrm{d,2}}||\mathbf{\mathrm{e}}_{z}\),
\[\tilde{V}_{1}(t)=\Omega\cos(\delta t)\sum_{i}\sum_{\alpha=1,2}\left|\uparrow_{i}\right\rangle\left\langle\downarrow_{i}\right|\mathrm{e}^{\mathrm{i}(\mathbf{k}_{\mathrm{d},\alpha}\cdot\mathbf{r}_{i}+\Phi_{\mathrm{d},\alpha})}+\mathrm{H.c.}. \tag{49}\]
In the interaction picture with respect to the qubit frequency \(\omega_{0}\) and the motional frequencies \(\omega_{z}\) and \(\omega_{x}\), the interaction is described by
\[V_{1}(t)\approx\eta_{x}\Omega\sigma_{1}^{x}a_{1,x}\mathrm{e}^{-\mathrm{i} \delta t}+\eta_{z}\Omega\sigma_{1}^{y}a_{1,z}\mathrm{e}^{-\mathrm{i}\delta t}+ \mathrm{H.c.}, \tag{50}\]
where \(\eta_{\alpha}\Omega\) is the coupling strength to the respective vibrational mode, and \(\delta\) is the detuning away from the sidebands \(\omega_{0}\pm\omega_{x/z}\). These two terms can be derived using similar steps as before, differing only in the selection of the leading contribution through the appropriate choice of the laser frequencies. The interference of these two forces can lead, at second order, to an effective state-dependent tunneling. After using a Magnus expansion for the time-ordered evolution operator \(U(t)=\mathcal{T}\{\exp(-\mathrm{i}\int_{0}^{t}\mathrm{d}s\,V_{1}(s))\}\) [271], truncating at second order, \(U(t)\approx\exp\{-\mathrm{i}H_{\mathrm{eff}}t\}\), yields the following interaction
\[H_{\mathrm{eff}}=\frac{\Omega_{\mathrm{d}}}{2}a_{1,z}^{\dagger}\sigma_{1}^{z} a_{1,x}+\mathrm{H.c.},\quad\Omega_{\mathrm{d}}=\mathrm{i}\frac{\Omega^{2}}{ \delta}\eta_{x}\eta_{z}, \tag{51}\]
which maps directly onto the desired gauge-invariant tunneling of Eq. (22) with a tunneling strength of
\[t_{1,\mathbf{\mathrm{e}}_{1}}=\frac{\Omega_{\mathrm{d}}}{2}=\mathrm{i}\frac{ \Omega^{2}}{2\delta}\eta_{x}\eta_{z}. \tag{52}\]
In this derivation, we have neglected higher-order contributions in the Magnus expansion that would lead to errors \(\varepsilon=O([\eta\Omega/\delta]^{3})\), which must be kept small (we assume that \(\eta_{x}\) and \(\eta_{z}\) are of the same order of magnitude \(\eta\)). For a fixed error \(\varepsilon\) in these higher-order terms, \(\delta\) scales linearly with \(\eta\) and, consequently, so does the tunneling coupling rate. This linear dependence is in contrast to scheme I, where the coupling is quadratic in \(\eta\). Additionally, the first-order term in the Magnus expansion must be accounted for, as it leads to additional state-dependent displacements in the joint phase space of both vibrational modes. As in the case of trapped-ion entangling gates, these displacements vanish for specific evolution times corresponding to integer multiples of \(2\pi/\delta\). Hence, the tunneling term (22) can be achieved by applying the interaction for a duration that is an integer multiple of \(2\pi/\delta\).
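As a rough numerical illustration of Eqs. (51)-(52), the short sketch below evaluates the effective tunneling rate, the loop-closure time \(2\pi/\delta\), and the nominal size of the neglected Magnus terms. The parameter values are round placeholders, not the experimental ones, and the exact prefactors depend on how \(\Omega\) and the individual tones are normalised.

```python
# Minimal sketch (not the experimental analysis): effective tunneling of the
# orthogonal-force scheme, Eqs. (51)-(52), for illustrative parameter values.
import numpy as np

eta_x, eta_z = 0.05, 0.05         # Lamb-Dicke factors (assumed values)
Omega = 2 * np.pi * 0.4e6         # per-tone coupling strength in rad/s (assumed)
delta = 2 * np.pi * 100e3         # detuning from the sidebands in rad/s (assumed)

t_eff = Omega**2 * eta_x * eta_z / (2 * delta)   # |t_{1,e1}| from Eq. (52)
t_loop = 2 * np.pi / delta                       # loop-closure time of the displacements
eta = max(eta_x, eta_z)
magnus_err = (eta * Omega / delta)**3            # nominal size of neglected terms

print(f"t_eff/2pi        ~ {t_eff / (2 * np.pi):.3g} Hz")
print(f"exchange time    ~ {np.pi / (2 * t_eff) * 1e6:.3g} us   (pi / 2 t_eff)")
print(f"loop-closure dt  ~ {t_loop * 1e6:.3g} us")
print(f"Magnus error     ~ {magnus_err:.1e}")
```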
#### iv.2.2 Implementation of the electric-field term
In this scheme, the electric-field term \(h\sigma_{1}^{x}\) in Eq. (22) can be introduced through Trotterization: we split the interaction time into segments with durations that are integer multiples of \(2\pi/\delta\), and we alternate between applying the tunneling term and the external field term \(h\sigma_{1}^{x}\), which can be achieved by a simple carrier drive (see Fig. 16). In this way, the electric-field term is introduced by interleaving short pulses on resonance with the carrier with short periods of evolution under the combination of the two orthogonal state-dependent forces.
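To make the Trotterization explicit, the following toy sketch (not the laboratory pulse sequence) alternates short evolutions under the gauge-invariant tunneling and under the carrier-driven electric-field term of the single-link model of Eq. (22), using truncated bosonic modes, and compares the result with the exact evolution under the sum of both terms; the truncation and the parameter values are arbitrary choices made purely for illustration.

```python
# Toy sketch of Trotterized single-link dynamics, H = t (a2^dag sz a1 + H.c.) + h sx,
# comparing alternating tunneling/field steps with the exact evolution.
import numpy as np
from scipy.linalg import expm

nmax = 4                              # bosonic truncation per mode (assumption)
t_hop, h_field = 1.0, 0.3             # tunneling and electric field (arbitrary units)
dt, nsteps = 0.05, 40                 # Trotter step and number of steps

a = np.diag(np.sqrt(np.arange(1, nmax)), 1)        # truncated annihilation operator
I_b, I_q = np.eye(nmax), np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron3 = lambda A, B, C: np.kron(np.kron(A, B), C)  # mode 1 (x) x mode 2 (z) x link qubit

a1, a2 = kron3(a, I_b, I_q), kron3(I_b, a, I_q)
SZ, SX = kron3(I_b, I_b, sz), kron3(I_b, I_b, sx)

H_tun = t_hop * (a2.conj().T @ SZ @ a1)
H_tun = H_tun + H_tun.conj().T
H_E = h_field * SX

U_exact = expm(-1j * (H_tun + H_E) * dt * nsteps)
U_trot = np.linalg.matrix_power(expm(-1j * H_tun * dt) @ expm(-1j * H_E * dt), nsteps)

# Initial state |L>: one phonon in mode 1, none in mode 2, link qubit in |->
f1, f0 = np.zeros(nmax), np.zeros(nmax)
f1[1], f0[0] = 1.0, 1.0
minus = np.array([1.0, -1.0]) / np.sqrt(2)
psi0 = np.kron(np.kron(f1, f0), minus).astype(complex)

overlap = abs(np.vdot(U_exact @ psi0, U_trot @ psi0)) ** 2
print(f"Trotter vs exact state overlap after T = {dt * nsteps}: {overlap:.6f}")
```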
#### iv.2.3 Experimental considerations
One way to implement the two orthogonal state-dependent forces in a laser system is by having two sets of Molmer-Sorensen-style bichromatic fields, see Sec. IV.1.2. One
Figure 16: **Amplitude-shaped pulses considered in scheme II:** (a) Shaped pulse that could be used for implementing just the tunneling term. (b) Trotterized pulse sequence for implementing the tunneling term. (c) Trotterized pulse sequence for implementing the full \(\mathbb{Z}_{2}\)-link Hamiltonian.
bichromatic field is symmetrically detuned from the carrier by \(\delta_{1}=\pm(\omega_{z}+\delta)\) and the other by \(\delta_{2}=\pm(\omega_{x}+\delta)\). As described in Sec. IV.1.2, the bichromatic fields can either couple levels in the ground state via a Raman transition, or two levels of an optical qubit via a quadrupole transition. Moreover, for having two orthogonal state-dependent forces, we set the phase between the two bichromatic beams such that
\[\left(\frac{\phi_{+}+\phi_{-}}{2}\right)_{2}=\left(\frac{\phi_{+}+\phi_{-}}{2} \right)_{1}+\frac{\pi}{2}, \tag{53}\]
where \(\phi_{+}\), \(\phi_{-}\) are the phases of the blue and red detuned tones in each of the bichromatic fields, 1 and 2. Applying these four tones gives rise to the two orthogonal state-dependent forces (50) that are needed for engineering the tunneling term (51). However, it also leads to spurious carrier terms that drive off-resonant qubit rotations around axes that are orthogonal to each of the corresponding state-dependent forces. As mentioned above, a technique to mitigate the effect of the carrier term is pulse shaping [264], which is mainly needed at large coupling strengths, i.e., large \(\Omega\) in Eq. (50). We describe the amplitude shaping of the pulses and further discuss its effect in our numerical simulations below.
We consider the same experimental apparatus as in Section IV.1, and investigate the implementation of the gauge-invariant tunneling term of Eqs. (22)-(23) using the two orthogonal state-dependent forces. For this, we use the optical qubit and two motional modes with \(\eta_{z}=0.05\), \(\omega_{z}/2\pi=1.2\) MHz, and \(\eta_{x}=0.024\), \(\omega_{x}/2\pi=1.9\) MHz, respectively. We choose the detuning of the bichromatic fields from the respective vibrational mode to be \(\delta/2\pi=75\) kHz, and set \(\Omega/2\pi=0.75/\sqrt{2}\) MHz for each of the four tones. This should allow us to reach an effective tunneling coupling rate of up to \(1.3\) kHz, inferred as \(1/(4\Delta t_{\text{ex}})\). The numerically simulated dynamics, truncating the bosonic modes at phonon number \(n_{\text{max}}=5\), and for the same observables as those in Fig. 5**(a)**, are shown in Fig. 17. This figure is thus the trapped-ion analog of Fig. 5**(a)** following Scheme II. This was simulated as a series of pulses that are ramped on and off with \(3.6\) \(\mu\)s ramp durations and with the FWHM duration of \(2\pi/\delta\), as shown in Fig. 16**(b)**. The same dynamics can be achieved by having a single pulse with the FWHM duration equal to an integer multiple of \(2\pi/\delta\) (see Fig. 16**(a)**). By inspecting the figure more closely, we see that there are small deviations with respect to the idealized Hamiltonian (51) that deserve a more detailed analysis.
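A stripped-down version of such a simulation, using dimensionless placeholder parameters rather than the experimental ones and ignoring the carrier terms and the pulse shaping, can be sketched as follows: it integrates the Schrödinger equation for the two forces of Eq. (50) with truncated phonon modes and, sampling at the loop-closure times, follows the transfer of a single phonon between the two modes, which should be nearly complete around \(t\approx\pi/\Omega_{\mathrm{d}}\). Note that the force amplitudes below include a conventional factor of \(1/2\) relative to \(\eta_{\alpha}\Omega\), chosen so that the effective exchange matches \(\Omega_{\mathrm{d}}\) of Eq. (51); other conventions shift the numbers by \(O(1)\) factors.

```python
# Sketch: phonon exchange generated by two orthogonal state-dependent forces,
# cf. Eq. (50), integrated numerically with truncated phonon modes (toy parameters).
import numpy as np
from scipy.integrate import solve_ivp

nmax = 4                         # phonon truncation per mode (assumption)
delta = 1.0                      # detuning, sets the frequency unit
g = 0.08 * delta                 # eta*Omega, kept well below delta (assumption)
Fx = Fz = g / 2.0                # force amplitudes (convention: factor 1/2 included)

a = np.diag(np.sqrt(np.arange(1, nmax)), 1)
I_b, I_q = np.eye(nmax), np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
kron3 = lambda A, B, C: np.kron(np.kron(A, B), C)   # mode x  x  mode z  x  qubit

ax, az = kron3(a, I_b, I_q), kron3(I_b, a, I_q)
SX, SY = kron3(I_b, I_b, sx), kron3(I_b, I_b, sy)
A = Fx * SX @ ax + Fz * SY @ az                     # the two state-dependent forces

def rhs(t, psi):
    H = A * np.exp(-1j * delta * t)
    H = H + H.conj().T
    return -1j * (H @ psi)

# One phonon in mode x, none in mode z, qubit in a sigma_z eigenstate
f1, f0 = np.zeros(nmax), np.zeros(nmax)
f1[1], f0[0] = 1.0, 1.0
psi0 = np.kron(np.kron(f1, f0), np.array([1.0, 0.0])).astype(complex)

Omega_d = g**2 / delta                              # |Omega_d| from Eq. (51)
T = np.pi / Omega_d                                 # expected full-exchange time
t_eval = np.arange(0.0, T, 2 * np.pi / delta)       # sample at loop-closure times
sol = solve_ivp(rhs, (0.0, T), psi0, t_eval=t_eval, rtol=1e-8, atol=1e-10)

n_z = az.conj().T @ az
pop_z = [np.real(np.vdot(p, n_z @ p)) for p in sol.y.T]
print(f"mode-z phonon population at the last sampled time: {pop_z[-1]:.3f} (expect ~1)")
```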
In the top panel of Fig. 18, we vary \(\Omega\) in each one of the tones and evaluate how this affects the exchange duration \(\Delta t_{\text{ex}}\), calculated by maximising the state fidelity as in the previous section. We conduct simulations for three cases. First, we simulate the interaction in Eq. (50) that only contains the two orthogonal state-dependent forces. We then introduce the off-resonant carrier terms, which would arise from the Lamb-Dicke expansion of Eq. (40) as the leading off-resonant perturbations to the state-dependent forces (50). Finally, we also consider the amplitude pulse shaping. When the carrier terms are excluded, and the pulse is applied for a duration that is an integer multiple of \(2\pi/\delta\), the inferred tunneling duration closely follows the theory from Eq. (51), which is represented by a green solid line according to \(\Delta t_{\text{ex}}=\pi/2t_{1,\mathbf{e}_{1}}\) with the effective tunneling of Eq. (52). As \(\Omega\) is increased, the error term \(O([\eta\Omega/\delta]^{3})\) becomes significant, and small deviations start to appear. As above, we use the state fidelity \(\mathcal{F}\) (the overlap with the desired state) and \(\left\langle G_{1}+G_{2}\right\rangle/2\) (the expectation value of the symmetry generators) in order to evaluate the quality of the effective Hamiltonian with respect to Eq. (22). Introducing the carrier terms does not change the exchange duration \(\Delta t_{\text{ex}}\), \(\mathcal{F}\), or \(\left\langle G_{1}+G_{2}\right\rangle/2\) significantly. However, amplitude shaping the pulses substantially improves the quality of the tunneling, as it suppresses the contribution of the higher-order terms \(O([\eta\Omega/\delta]^{3})\). Introducing the amplitude-shaping ramp effectively decreases the area of the pulses, which translates into slightly longer exchange durations (top panel), but also leads to smaller errors (middle and bottom panels).
Having demonstrated the viability of trapped-ion Scheme II for the quantum simulation of the \(\mathbb{Z}_{2}\) gauge-invariant tunneling, we can now consider the errors that would stem from adding the electric-field term. In Fig. 19, we present our numerical results, evaluating the decrease in contrast of the Rabi oscillations between the \(\left|\text{L}\right\rangle\) and \(\left|\text{R}\right\rangle\) states, and \(\left\langle G_{1}+G_{2}\right\rangle/2\), as one increases the electric field \(h\). When the carrier terms are excluded and we do not use the amplitude-shaped pulses, the decrease in contrast follows Eq. (28), which is represented as a green solid line in the top panel. We find that, in the realistic experimental situation with carrier terms present, and using amplitude-shaped pulses, we need slightly higher values of \(h\) to achieve the same contrast. This is mainly due to the effect of the ramp.
The shortest time step that we can use in order to close the loops in phase space is \(2\pi/\delta\), which was used for the simulations above. Hence, the Trotter error is given by \(ht_{1,e_{1}}(2\pi/\delta)^{2}\), which must be negligible. In the simulation results presented above, for the highest value of \(h\), the Trotter error was \(\approx 8\times 10^{-4}\). Higher values of \(\delta\) reduce the \(O([\eta\Omega/\delta]^{3})\) error, and enable finer time steps in the time scans, as we always want to measure at instants where the loops in phase space
Figure 17: \(\mathbb{Z}_{2}\) **gauge link dynamics using scheme II**: We simulate the \(\mathbb{Z}_{2}\) dynamics using two orthogonal spin-dependent forces and try to replicate Fig. 5 (a). The markers are the numerical simulation data considering the evolution under the two state-dependent forces in Eq. (50), but also including the additional off-resonant carriers that stem from the Lamb-Dicke expansion of Eq. (49). The coloured lines are the analytical predictions in Eq. (28), considering the effective tunneling strength in Eq. (52) and \(h=0\).
that arise from first-order contributions in the Magnus expansion are closed, and the leading effect is the state-dependent tunneling. However, a higher \(\delta\) also reduces the effective tunneling rate (51). This translates into shorter pulses which, when they become comparable to the ramp length, no longer give the desired effective dynamics. Comparing scheme II to scheme I in terms of observing the matter-gauge field dynamics, the state-preparation and measurement stages are the same. The difference lies in how the effective Hamiltonian in Eq. (22) is implemented experimentally, namely by using a pulse sequence such as the one shown in Fig. 16 **(c)**.
### \(\mathbb{Z}_{2}\) gauge link scheme comparison and other sources of noise
The two schemes presented above outline different viable strategies to implement the gauge-invariant model (22) using current trapped-ion hardware. There are, however, several experimental challenges worth highlighting. It is crucial that the tunneling rates are large compared to any noise process present in the physical system. The dominant sources are qubit and motional decoherence. In the considered experimental setup [262; 263], the decoherence time of the qubit is the most stringent, with \(T_{2}\approx 2.4\,\mathrm{ms}\) for the ground-state qubit and \(T_{2}\approx 5\,\mathrm{ms}\) for the optical qubit. These numbers could be improved by several orders of magnitude by using a clock-qubit encoding. However, in this encoding, the differential dipole light-shift that leads to Eq. (36) would vanish and, hence, the method proposed in Sec. IV.1.1 would not work [272]. An alternative way of increasing the qubit coherence would be to use magnetic shielding or active magnetic
Figure 19: **Pulsed-scheme \(\mathbb{Z}_{2}\) tunneling with orthogonal forces in the presence of a non-zero electric field**: We simulate the \(\mathbb{Z}_{2}\) dynamics using two orthogonal spin-dependent forces for different magnitudes of \(h\). The value used for \(t_{1,e_{1}}\) was calculated as \(1/(4\Delta t_{\mathrm{ex}})\) at \(h=0\). We obtain dynamics similar to Fig. 5 **(b)** and infer the maximum contrast in \(\overline{s}_{x}(t)\) and the expectation value of the local symmetry at the point of maximum contrast. The markers are numerical simulations, while the continuous lines are the analytical predictions of Eq. (28) for different values of \(h\). For the Hamiltonian including the spurious carrier and adiabatic ramp, the expectation value of the symmetry operator is consistent with 0, as desired, down to \(10^{-2}\).
Figure 18: **Pulsed-scheme \(\mathbb{Z}_{2}\) tunneling with orthogonal forces:** We vary the strength of the tones in the bichromatic field \(\Omega\), and evaluate the exchange duration, the infidelity in obtaining the desired state \(|R\rangle\), and the expectation value of the local symmetry generators. We do this for three cases: excluding the carrier from the interaction and thus considering Eq. (50) (green crosses); looking at the full interaction and thus including the spurious carrier terms (black circles); and, finally, considering the full interaction while slowly ramping the pulses on and off (magenta triangles). The green solid line in the upper panel is the expected analytic dependence of the exchange duration on \(\Omega\), extracted from Eq. (52). For the Hamiltonian including the spurious carrier and adiabatic ramp the expectation value of the symmetry operator is consistent with 0, as desired, down to \(10^{-2}\).
field stabilisation.
In terms of motional coherence, the heating rate limits the coherence time of the longitudinal motional mode to ca. 14 ms. The coherence time of the transverse modes has not been properly characterised in the current system, and it might be further limited by noise in the trap rf drive. However, actively stabilising the amplitude of the rf drive has been shown to improve the coherence time [273]. We note that a similar, yet non-gauge-invariant, tunneling has been implemented in a trapped-ion experiment [260] in the context of continuous-variable quantum computing. This experiment successfully used the transverse modes and measured coherence times of 5.0(7) and 7(1) ms for \({}^{171}\)Yb\({}^{+}\) ions.
Consequently, we can conclude that the exchange timescales of the \(\mathbb{Z}_{2}\) tunneling implementations investigated in this paper using scheme I in Secs. IV.1.1 (\(\approx 40\,\mathrm{\SIUnitSymbolMicro s}\), using Raman beams) and IV.1.2 (\(\approx 100\,\mathrm{\SIUnitSymbolMicro s}\), using Raman beams), as well as scheme II in Sec. IV.2 (\(\approx 200\,\mathrm{\SIUnitSymbolMicro s}\), using quadrupole beams), are an order of magnitude faster than the qubit and motional decoherence times, and hence experimentally feasible. We note that the analog schemes are operationally simpler and do not suffer from Trotterization and loop-closure errors. However, the pulsed scheme can obtain substantially higher tunneling rates for fixed intensity and Lamb-Dicke factors, as the tunneling rate scales linearly in \(\eta\) instead of quadratically. For the considered parameters of the quadrupole transition, the exchange duration in the pulsed scheme is a factor of four shorter than in the analog scheme (IV.1.2). Hence, this method would be preferable when working with the quadrupole transition, or when the laser intensities are limited.
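As a simple sanity check of this statement, one can compare the quoted exchange durations with the quoted coherence times; the minimal sketch below uses only the numbers stated above and prints the worst-case ratio for each implementation, which is indeed of order ten or larger.

```python
# Sanity check: ratio of quoted coherence times to quoted exchange durations (in us).
exchange_us = {"scheme I (IV.1.1)": 40.0, "scheme I (IV.1.2)": 100.0, "scheme II (IV.2)": 200.0}
coherence_us = {"ground-state qubit T2": 2400.0, "optical qubit T2": 5000.0, "motional": 14000.0}

for scheme, t_ex in exchange_us.items():
    worst = min(t_coh / t_ex for t_coh in coherence_us.values())
    print(f"{scheme}: worst coherence/exchange ratio ~ {worst:.0f}")
```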
### Comparison with neutral atoms
Before closing this section, it is worth commenting on the realisation of the \(\mathbb{Z}_{2}\) gauge link in other experimental platforms. In fact, there has been a recent pioneering experiment with cold atoms [274, 110], which relies on a different scheme designed for double-well optical lattices that exploits Floquet engineering to activate a density-dependent tunneling in the presence of strong Hubbard interactions [275, 276, 277]. By playing with the ratio of the modulation and interaction strengths, the tunneling of the atoms in one electronic state (i.e. the matter field) can depend on the density distribution of atoms in a different internal state (i.e. the gauge field). In contrast, the atoms playing the role of the gauge field can tunnel freely between the minima of the double well. As realised in [274, 110], using a single atom of the gauge species per double well, one can encode the \(\mathbb{Z}_{2}\) gauge qubit in the states where the atom resides in either the left or the right well. Then, its bare tunneling directly realises the electric-field term, whereas the density-dependent tunneling of the matter atoms can be designed to simulate the gauge-invariant tunneling of the \(\mathbb{Z}_{2}\) gauge theory (22). Remarkably, the dynamics of the matter atom observed experimentally is consistent with Eq. (28), displaying periodic Rabi oscillations that get damped due to several noise sources [110]. Although the cold-atom experiments can also infer \(\langle\sigma_{1,\mathbf{e}_{1}}^{z}(t)\rangle\) via a measurement of the atomic density of the gauge species, measuring \(\langle\sigma_{1,\mathbf{e}_{1}}^{x}(t)\rangle\) amounts to measuring a bond density, which would require the equal-time Green's function. This would require inferring correlation functions between both sites, which is more challenging and was not measured in the experiment [110]. Although the reported measurements show \(\langle\sigma_{1,\mathbf{e}_{1}}^{z}(t)\rangle\approx 0\), which is consistent with the link field remaining in the electric-field basis, it would be desirable to measure the correlated Rabi flopping of the gauge link (28), which directly accounts for how the electric-field line stretches/compresses synchronously with the tunneling of the matter particle according to Gauss' law. Since in the trapped-ion case \(\langle\sigma_{1,\mathbf{e}_{1}}^{x}(t)\rangle\) can be inferred by applying a single-qubit gate and collecting the resonance fluorescence, the scheme proposed in this work could thus go beyond these limitations, and directly observe the consequences of gauge invariance in the correlated oscillations (28). Moreover, as discussed in the following section, there are also promising pathways in the trapped-ion case that could allow extending the quantum simulation beyond the single-link case.
## V Minimal plaquettes and synthetic dimensional reduction for a \(\mathbb{Z}_{2}\) gauge chain
There are various directions in which the complexity of the trapped-ion quantum simulator of \(\mathbb{Z}_{2}\) gauge fields can increase beyond the single-link limit of Eq. (22). The first non-trivial extension is to consider two matter sites joined by two gauge links, which form the smallest possible plaquette that is consistent with \(\mathbb{Z}_{2}\) gauge symmetry. We discuss, in this section, how to achieve this \(\mathbb{Z}_{2}\) plaquette simulator by using a pair of ions and exploiting their collective vibrational modes [180, 181, 182]. We then present a scheme that effectively reduces the dimension of the synthetic ladder in Fig. 2**(a)**, and allows us to scale the gauge-invariant model of Eq. (22) to a full lattice, in this case, a one-dimensional chain. We note that these extensions require additional experimental tools, and longer timescales, making the quantum simulation more challenging from the experimental perspective. Nonetheless, the proposed schemes set a clear road map that emphasises the potential of trapped-ion crystals for the simulation of interesting real-time dynamics in lattice gauge theories.
### \(\mathbb{Z}_{2}\) plaquette: Wegner-Wilson and 't Hooft loops for gauge-field entanglement
Following the same philosophy as in previous sections, we start by presenting the simplest case in detail and then gradually move to more complex scenarios. We thus consider a pair of ions, which contribute a pair of qubits and four vibrational modes, two per transverse direction. In principle, we could apply the previous scheme based on the global state-dependent parametric drive of Eq. (35). However, this would lead to a synthetic plaquette where two of the links have a tunneling that does not depend on any gauge qubit (see Fig. 20**(a)**), failing in this way to meet the requirements for
local gauge invariance. In fact, this goes back to the simple counting of synthetic lattice sites and effective gauge fields we mentioned below Eq. (35). To remedy this problem, the idea is to modify the constraints on the strength of the light-shift optical potential of Eq. (10), such that it becomes possible to address certain common vibrational modes instead of the local ones. Although we will focus on the light-shift scheme from now on, we note that similar ideas would apply to the Mølmer-Sørensen-type and orthogonal-force schemes. We will show in this section that, by addressing the collective modes, we can effectively deform the synthetic plaquette (see Fig. 20**(b)**) such that the resulting model is consistent with the local \(\mathbb{Z}_{2}\) symmetry.
The transverse collective modes of the two-ion crystal are the symmetric and anti-symmetric superpositions of the local vibrations, and are referred to as the center-of-mass (c) and zigzag (z) modes within the trapped-ion community. The creation operators for these modes are then defined by
\[\begin{split}& a_{\text{c},\alpha}=\frac{1}{\sqrt{2}}(a_{1, \alpha}+a_{2,\alpha}),\quad\omega_{\text{c},\alpha}=\omega_{\alpha},\\ & a_{\text{z},\alpha}=\frac{1}{\sqrt{2}}(a_{1,\alpha}-a_{2, \alpha}),\quad\omega_{\text{z},\alpha}<\omega_{\alpha}.\end{split} \tag{54}\]
We now substitute these equations in the expressions for the light-shift optical potential of Eq. (33), and proceed by performing the subsequent Lamb-Dicke expansion, which will now lead to a sum of terms containing all different powers of the creation-annihilation operators of these collective modes. We can now select the desired tunneling term between a single mode, say the center of mass, along the two transverse directions (see Fig. 20**(b)**). Since we can also get terms that couple the center of mass and the zigzag modes, we need to modify the constraints in Eq. (10) to be
\[\omega_{\text{d}}=\omega_{\text{c},y}-\omega_{\text{c},x},\quad|\Omega_{ \text{d}}|\ll|\omega_{\text{c},y}-\omega_{\text{c},x}|,\frac{|\omega_{\text{ c},\alpha}-\omega_{\text{z},\beta}|}{\eta_{\alpha}\eta_{\beta}}, \tag{55}\]
such that those terms become off-resonant and can be neglected. Note that these new constraints will make the gauge-invariant tunneling weaker, and the targeted dynamics slower, making the experimental realisations more challenging.
In the following, we will present the expressions for the light-shift schemes, although any of the other possibilities should be analogous. By moving to the interaction picture with respect to the full vibrational Hamiltonian (5), and neglecting the off-resonant terms by a rotating-wave approximation that rests upon Eq. (55), the leading term stemming from the aforementioned light-shift optical potential is
\[V_{I}(t)\approx\frac{\Omega_{\text{d}}}{4}a_{\text{c},y}^{\dagger}\sigma_{1}^{z}a_{\text{c},x}+\frac{\Omega_{\text{d}}}{4}a_{\text{c},y}^{\dagger}\sigma_{2}^{z}a_{\text{c},x}+\text{H.c.}. \tag{56}\]
Here, we recall that the drive strength is \(\Omega_{\text{d}}=\eta_{x}\eta_{y}\Omega_{1,2}\), as defined in Eq. (12), and we have neglected the irrelevant phase that can be gauged away in this simple two-mode setting. As depicted in Fig. 21, there are now two different gauge-invariant processes in which a center-of-mass phonon along the \(x\)-axis can tunnel into a center-of-mass phonon along the \(y\)-axis. Each of these processes flips the Hadamard state of one, and only one, of the trapped-ion qubits (see Fig. 21).
Figure 21: **Trapped-ion \(\mathbb{Z}_{2}\) tunneling in a rhomboidal plaquette:** Schematic representation of the gauge-invariant tunneling of a vibrational excitation, which is initially in the center of mass (c.o.m) mode along the \(x\) axis, and “tunnels” into the c.o.m mode along the \(y\) axis. In the upper insets, this tunneling is mediated by a spin flip in the Hadamard basis of the first ion qubit \(\ket{-}_{1}\mapsto\ket{+}_{1}\), whereas in the lower inset it involves the second ion qubit \(\ket{-}_{2}\mapsto\ket{+}_{2}\). These two paths can be interpreted as the two effective links of the synthetic rhomboidal plaquette displayed in Fig. 20.
Figure 20: **Scheme for synthetic \(\mathbb{Z}_{2}\) plaquette:** **(a)** The application of Eq. (35) to \(N=2\) ions would result in a synthetic rectangular plaquette, where the four sites correspond to the local transverse modes of the two ions. The vertical links are induced by the state-dependent parametric tunneling of Eq. (36), and thus incorporate a gauge qubit (note that in the Molmer-S
We can now modify the interpretation in terms of synthetic matter sites and \(\mathbb{Z}_{2}\) gauge qubits (38), which must now include a pair of \(\mathbb{Z}_{2}\) links, as we have two qubits dressing the tunneling
\[a_{\mathrm{c},x},a^{\dagger}_{\mathrm{c},x}\mapsto a_{1},a^{\dagger}_{1},\quad a_{\mathrm{c},y},a^{\dagger}_{\mathrm{c},y}\mapsto a_{2},a^{\dagger}_{2},\quad\sigma^{x}_{i},\sigma^{z}_{i}\mapsto\sigma^{x}_{1,\mathbf{e}_{i}},\sigma^{z}_{1,\mathbf{e}_{i}}. \tag{57}\]
As depicted in Fig. 20**(b)**, we need to introduce two links that connect the synthetic site 1 to 2, requiring two synthetic directions specified by the vectors \(\mathbf{e}_{1},\mathbf{e}_{2}\), and allowing us to interpret the model in terms of a rhomboidal plaquette. In addition to the gauge-invariant tunneling, we also apply the additional tone of Eq. (20), which drives the carrier transition on both qubits, and leads to the electric-field term. Altogether, the \(\mathbb{Z}_{2}\) gauge theory on this plaquette is
\[H_{\text{eff}}=\sum_{n=1,2}\left(t_{1,\mathbf{e}_{n}}a^{\dagger}_{2}\sigma^{z }_{1,\mathbf{e}_{n}}a_{1}+\text{H.c.}\right)+\sum_{n=1,2}h\sigma^{x}_{1, \mathbf{e}_{n}}, \tag{58}\]
where the microscopic parameters for the tunneling strength and the electric field are the same as in Eq. (22), except that the tunneling strengths are halved with respect to the previous ones, since we now work with the center-of-mass modes instead of the local vibrations, namely
\[t_{1,\mathbf{e}_{1}}=t_{1,\mathbf{e}_{2}}=\frac{\Omega_{1,2}}{4}\eta_{x}\eta_ {y},\quad h=\frac{\tilde{\Omega}_{\text{d}}}{2}. \tag{59}\]
Let us also note that, since we now have an increased connectivity, the generators of the \(\mathbb{Z}_{2}\) gauge symmetry, which in the single-link case were defined in Eq. (24), now read as follows
\[G_{1}=\mathrm{e}^{\mathrm{i}\pi a^{\dagger}_{1}a_{1}}\sigma^{x}_{1,\mathbf{e}_{1}}\sigma^{x}_{1,\mathbf{e}_{2}},\quad G_{2}=\sigma^{x}_{1,\mathbf{e}_{1}}\sigma^{x}_{1,\mathbf{e}_{2}}\mathrm{e}^{\mathrm{i}\pi a^{\dagger}_{2}a_{2}}. \tag{60}\]
As we have a pair of synthetic \(\mathbb{Z}_{2}\) links emanating from each of the two matter sites, the generators include products of the corresponding Pauli matrices. Note that these generators fulfil the same algebra as before, and define projectors onto super-selection sectors, such that the effective Hamiltonian gauge theory (58) can be block decomposed into the different sectors (25) characterised by two static charges \(q_{1},q_{2}\in\{0,1\}\). In addition to the previous effective Hamiltonian, one could also include other gauge-invariant terms, such as
\[\tilde{H}_{\text{eff}}=H_{\text{eff}}+\Delta_{1}a^{\dagger}_{1}a_{1}+\Delta_{ 2}a^{\dagger}_{2}a_{2}, \tag{61}\]
where \(\Delta_{i}\) can be controlled by a small detuning of the state-dependent parametric drive.
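As a consistency check of Eqs. (58)-(61), one can verify numerically that the plaquette Hamiltonian commutes with both generators of Eq. (60). The following sketch does so with truncated bosonic modes; the truncation and the parameter values are arbitrary choices made purely for illustration.

```python
# Check that the plaquette Hamiltonian, Eqs. (58) and (61), commutes with the
# Z2 generators of Eq. (60), using truncated bosonic modes.
import numpy as np

nmax = 3                                        # bosonic truncation (assumption)
t1, t2, h, D1, D2 = 0.7, 0.7, 0.3, 0.0, 2.5     # arbitrary parameter values
a = np.diag(np.sqrt(np.arange(1, nmax)), 1)
I_b, I_q = np.eye(nmax), np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(*factors):                               # tensor-product helper
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

# Ordering: matter mode 1  x  matter mode 2  x  link e1 qubit  x  link e2 qubit
a1, a2 = op(a, I_b, I_q, I_q), op(I_b, a, I_q, I_q)
sz1, sx1 = op(I_b, I_b, sz, I_q), op(I_b, I_b, sx, I_q)
sz2, sx2 = op(I_b, I_b, I_q, sz), op(I_b, I_b, I_q, sx)

H = t1 * a2.conj().T @ sz1 @ a1 + t2 * a2.conj().T @ sz2 @ a1
H = H + H.conj().T + h * (sx1 + sx2) + D1 * a1.conj().T @ a1 + D2 * a2.conj().T @ a2

def parity(amode):                              # e^{i pi n} for a diagonal number operator
    return np.diag(np.exp(1j * np.pi * np.diag(amode.conj().T @ amode)))

G1 = parity(a1) @ sx1 @ sx2                     # Eq. (60)
G2 = sx1 @ sx2 @ parity(a2)

for name, G in [("G1", G1), ("G2", G2)]:
    print(name, "commutator norm:", np.linalg.norm(H @ G - G @ H))
```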
For the two-ion implementation, the new difficulties with respect to our previous discussion of the single \(\mathbb{Z}_{2}\) link are the reduction in the Lamb-Dicke factor and the spectral mode crowding. The reduction in coupling strength is unavoidable. For the latter, we estimate the detuning required from a given spectator mode, assuming it has the same coupling strength \(\eta=\max(\eta_{x},\eta_{z})\) and light shift \(\Omega=\Omega_{1,2}\) as the main interaction, to achieve an error of \(\epsilon\leq 10\%\). We calculate the resulting minimal tolerable frequency detunings for the different methods in Sec. IV, and display them in Fig. 22. The additional pulse shaping (ramp durations of 10 \(\mu\)s for scheme I and 3.6 \(\mu\)s for scheme II) further suppresses the error term.
Once the scheme for the quantum simulation of the single \(\mathbb{Z}_{2}\) plaquette has been discussed, let us describe some interesting dynamical effects that arise when considering, as in the single-link case, the one-particle sector. Following our previous approach, one can exploit the global \(U(1)\) symme
Figure 23: \(\Diamond\)**-scheme for \(\mathbb{Z}_{2}\) tunneling in a plaquette:** On the left, we represent schematically the four possible gauge-invariant states in the sector with background charges \(q_{1}=1,q_{2}=0\), which correspond to Eq. (62). We see that, when the matter particle resides on the left or right site, a ’t Hooft electric field line that winds around the plaquette can be present or absent, doubling the number of possible states. On the right, we depict an effective \(\Diamond\)-scheme in quantum optics, in which the gauge-invariant tunneling induces two copies of the \(\Lambda\)-scheme of Fig. 6, which appeared for a single link and two bosons that lead to bright and dark states, and mode entanglement.
Figure 22: **Minimal required detuning of the spectator mode:** Given a coupling \(\Omega\) to a certain spectator mode, we depict the detuning required to obtain an error term of \(\epsilon=10\%\). From Sec. IV, we add the data points for scheme I: light-shift- and Mølmer-Sørensen-style interactions generated by the Raman beam configuration and by the quadrupole beam (qdp); for scheme II: the qdp-generated interaction.
try and Gauss' law to reduce the dimensionality of the subspace where the dynamics takes place. If we consider a single bosonic particle, this subspace is spanned by four states
\[\begin{split}\ket{\mathrm{L}_{1}}&=\ket{1_{1}}\otimes\ket{-_{1,\mathbf{e}_{1}}}\otimes\ket{-_{1,\mathbf{e}_{2}}}\otimes\ket{0_{2}},\\ \ket{\mathrm{L}_{2}}&=\ket{1_{1}}\otimes\ket{+_{1,\mathbf{e}_{1}}}\otimes\ket{+_{1,\mathbf{e}_{2}}}\otimes\ket{0_{2}},\\ \ket{\mathrm{R}_{1}}&=\ket{0_{1}}\otimes\ket{+_{1,\mathbf{e}_{1}}}\otimes\ket{-_{1,\mathbf{e}_{2}}}\otimes\ket{1_{2}},\\ \ket{\mathrm{R}_{2}}&=\ket{0_{1}}\otimes\ket{-_{1,\mathbf{e}_{1}}}\otimes\ket{+_{1,\mathbf{e}_{2}}}\otimes\ket{1_{2}},\end{split} \tag{62}\]
where the corresponding background charges are \(q_{1}=1,q_{2}=0\). In comparison to the single-link case, the plaquette gives us further possibilities for the stretching and compressing of the electric-field line when the matter boson tunnels back and forth (see Fig. 5). An electric-field loop around the plaquette, a so-called 't Hooft loop, does not require further sinks/sources, since the electric-field line enters and exits all sites of the plaquette. In addition, the stretched electric field can now wind along the two possible paths of the loop. This doubles the number of possible gauge-field arrangements for a fixed configuration of the matter boson in Eq. (62) and Fig. 23.
We are interested in exploring new dynamical effects, in particular the possibility of creating entanglement between the \(\mathbb{Z}_{2}\) gauge fields by the tunneling of the bosonic \(\mathbb{Z}_{2}\) charge. The dynamics due to this effective Hamiltonian can now be depicted as a four-level system in a \(\lozenge\)-scheme. Setting \(\Delta_{1}=\Delta_{2}=0\) in Eq. (61), the states with the particle on the right site have zero energy, whereas those with the particle on the left have energies \(\pm 2h\). Moreover, they are coupled by the gauge-invariant tunneling with strengths \(t_{1,\mathbf{e}_{1}},t_{1,\mathbf{e}_{2}}\) according to the \(\lozenge\)-scheme on the right panel of Fig. 23. As apparent from this figure, we can define bright \(\ket{\mathrm{B}}=(\ket{\mathrm{R}_{1}}+\ket{\mathrm{R}_{2}})/\sqrt{2}\) and dark \(\ket{\mathrm{D}}=(\ket{\mathrm{R}_{1}}-\ket{\mathrm{R}_{2}})/\sqrt{2}\) states once more, such that the effective dynamics corresponds to that of a 3-level atom. This case also has an exact solution in terms of an effective spin-1 particle that precesses under an effective magnetic field. Defining \(\ket{\Psi_{\text{phys}}(t)}=d(t)\ket{\mathrm{D}}+c_{l,1}(t)\ket{\mathrm{L}_{1}}+c_{b}(t)\ket{\mathrm{B}}+c_{l,2}(t)\ket{\mathrm{L}_{2}}\), the amplitude of the dark state remains constant, \(d(t)=d(0)\), whereas the amplitudes of the remaining states evolve as \(\mathbf{c}(t)=\mathrm{e}^{-\mathrm{i}\mathbf{B}_{0}\cdot\mathbf{S}\,t}\mathbf{c}(0)\). Here, \(\mathbf{c}(t)=(c_{l,1}(t),c_{b}(t),c_{l,2}(t))^{\mathrm{t}}\), the effective magnetic field is
\[\mathbf{B}_{0}=(2t_{1,\mathbf{e}_{1}},0,2h), \tag{63}\]
and the spin-1 operators are defined as
\[S_{z}=\ket{\mathrm{L}_{2}}\bra{\mathrm{L}_{2}}-\ket{\mathrm{L}_{1}}\bra{ \mathrm{L}_{1}},S_{x}=\tfrac{1}{\sqrt{2}}(\ket{\mathrm{L}_{2}}\bra{\mathrm{B}}+ \ket{\mathrm{B}}\bra{\mathrm{L}_{1}})+\mathrm{H.c.}. \tag{64}\]
If one now switches on the detuning \(\Delta_{2}>0=h=\Delta_{1}\), the intermediate bright state gets shifted in energy such that, for \(|t_{1,\mathbf{e}_{n}}|\ll\Delta_{2}\), the bright state will only be virtually populated when one starts from the configuration \(\ket{\Psi_{\text{phys}}(0)}=\ket{\mathrm{L}_{1}}\). In this initial state, the matter particle is on the left site, and no electric-field line winds around the plaquette (see Fig. 24). The dynamics can then be understood from a second-order process in which the particle tunnels to the right, while the electric-field line stretches, followed by a second tunneling event to the left. Due to the high energy offset \(\Delta_{2}\gg t_{1,\mathbf{e}_{1}}\), the right site is only virtually populated in the intermediate states of Fig. 24. Note that, during the subsequent tunneling from these virtual states, the particle can either follow the same link/path as in the first tunneling event, flipping the corresponding link qubit back to the original state \(\ket{\mathrm{L}_{1}}\), or, alternatively, it
Figure 24: **’t Hooft loop entanglement in the gauge qubits:** We represent a scheme for the second-order virtual tunneling of a boson, which initially occupies the left site (upper state) and aims to tunnel to the right, which is penalised energetically by the on-site energy \(\Delta_{2}\gg t_{1,\mathbf{e}_{n}}\). After the virtual tunneling and the accompanying stretching of the electric-field line (intermediate states), the second tunneling process back to the left site can either compress the string, or stretch it further until it winds around the plaquette. The superposition of these possible transitions (lower state) leads to an entangled state.
can choose to follow the other link/path, going around the plaquette and ending in the state \(\ket{\mathrm{L}_{2}}\). Since quantum mechanics requires superposing these possible histories, the state after half the exchange time, \(\Delta t_{\mathrm{ex}}/2=\pi\Delta_{2}/4t_{1,\mathbf{e}_{1}}^{2}\), becomes
\[\ket{\Psi_{\mathrm{phys}}(t_{e})}=\tfrac{1}{\sqrt{2}}(\ket{\mathrm{L}_{1}}- \mathrm{i}\ket{\mathrm{L}_{2}})=\ket{1_{1}}\otimes\ket{\Psi_{\mathrm{Bell}}^{-}} \otimes\ket{0_{2}}. \tag{65}\]
Here, the boson has returned to the initial lattice site, but the gauge fields have evolved into an entangled state that is equivalent to a Bell pair in the Hadamard basis
\[\ket{\Psi_{\mathrm{Bell}}^{-}}=\tfrac{1}{\sqrt{2}}\left(\ket{-_{1,\mathbf{e}_ {1}},-_{1,\mathbf{e}_{2}}}-\mathrm{i}\ket{+_{1,\mathbf{e}_{1}},+_{1,\mathbf{e }_{2}}}\right). \tag{66}\]
It is interesting to remark that this entangled state is the result of summing over the two tunneling histories of the charged particle, leading to a linear superposition of two different electric-field strings. In the first one, there is no electric field within the loop, as the boson tunnels back and forth along the same link. Conversely, in the second case, a 't Hooft electric-field loop winding around the plaquette has been created, since the boson has enclosed a loop around the plaquette during the virtual process. In Fig. 25, we represent the dynamics of this internal state as a function of time by showing the fidelities of the time-evolved state with \(\ket{\mathrm{L}_{1}}\), \(\ket{\mathrm{L}_{2}}\), and the target Bell state \(\ket{\Psi_{\mathrm{Bell}}^{-}}\) (66). One sees how the Bell-state fidelity approaches unity after half the exchange duration \(\Delta t_{\mathrm{ex}}/2\). Since the dynamics is induced by a second-order process, we see that the timescale is larger than that of Figs. 5 and 7.
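The second-order mechanism described above can be illustrated with a minimal four-level simulation in the basis \(\{\ket{\mathrm{L}_{1}},\ket{\mathrm{R}_{1}},\ket{\mathrm{R}_{2}},\ket{\mathrm{L}_{2}}\}\) of Eq. (62). The sketch below, with arbitrary parameter values and a coupling normalisation that may differ from the one used in the figures by \(O(1)\) prefactors, evolves the initial state \(\ket{\mathrm{L}_{1}}\) under the \(\lozenge\)-scheme couplings with a large detuning \(\Delta_{2}\) on the right-site states and monitors the overlap with the Bell-type superpositions \((\ket{\mathrm{L}_{1}}\mp\mathrm{i}\ket{\mathrm{L}_{2}})/\sqrt{2}\), which becomes maximal on the slow second-order timescale.

```python
# Four-level sketch of the diamond-scheme: virtual tunneling through the detuned
# right-site states generates a Bell-type superposition of |L1> and |L2>.
import numpy as np
from scipy.linalg import expm

t1 = t2 = 1.0            # gauge-invariant tunnelings (arbitrary units)
h = 0.0                  # electric field switched off
Delta2 = 12.0            # right-site energy penalty, Delta2 >> t

# Basis ordering: |L1>, |R1>, |R2>, |L2>
H = np.zeros((4, 4), dtype=complex)
H[1, 1] = H[2, 2] = Delta2                  # detuned right-site states
H[0, 0], H[3, 3] = -2 * h, +2 * h           # left-site states
H[1, 0] = H[0, 1] = t1                      # L1 <-> R1 via link e1
H[2, 0] = H[0, 2] = t2                      # L1 <-> R2 via link e2
H[2, 3] = H[3, 2] = t1                      # L2 <-> R2 via link e1
H[1, 3] = H[3, 1] = t2                      # L2 <-> R1 via link e2

psi0 = np.array([1, 0, 0, 0], dtype=complex)          # start in |L1>
bells = [np.array([1, 0, 0, -1j]) / np.sqrt(2),        # (|L1> - i|L2>)/sqrt(2)
         np.array([1, 0, 0, +1j]) / np.sqrt(2)]        # (|L1> + i|L2>)/sqrt(2)

times = np.linspace(0.0, np.pi * Delta2 / t1**2, 800)
fid = [max(abs(np.vdot(b, expm(-1j * H * t) @ psi0))**2 for b in bells) for t in times]

k = int(np.argmax(fid))
print(f"maximum Bell fidelity {fid[k]:.3f} at t = {times[k]:.2f} "
      f"(perturbative estimate pi*Delta2/(8 t^2) = {np.pi * Delta2 / (8 * t1**2):.2f} "
      f"for this coupling normalisation)")
```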
### \(\mathbb{Z}_{2}\) chain: synthetic dimensional reduction
In the previous subsection, we have seen that introducing more ions and playing with collective vibrational modes allows us to extend the \(\mathbb{Z}_{2}\) gauge-field toolbox towards interesting and more complex real-time phenomena. We discussed how, by working with only two ions, the spectral crowding of collective modes can still be handled, and one can selectively address the gauge-invariant tunneling between a pair of collective modes along the two different transverse directions. In this subsection, we present a scheme that exploits these collective modes with reduced crowding, together with the idea of synthetic dimensional reduction, to scale the quantum simulator of \(\mathbb{Z}_{2}\) gauge theories to chains of arbitrary size. We start the discussion by considering the generalisation of Eq. (36) to a chain of \(N\) trapped ions, namely
\[V_{1}(t)\approx\sum_{i}\frac{\Omega_{\mathrm{d}}}{2}\mathrm{e}^{\mathrm{i} \phi_{i}}a_{i,y}^{\dagger}\sigma_{i}^{z}a_{i,x}+\mathrm{H.c.}, \tag{67}\]
where we recall that the microscopic parameters were
\[\Omega_{\mathrm{d}}=|\Omega_{1,2}|\,\eta_{x}\eta_{y},\quad\phi_{i}=\mathbf{k}_{\mathrm{d}}\cdot\mathbf{r}_{i}^{0}+\arg(-\Omega_{1,2}). \tag{68}\]
If we align the laser wave-vectors such that \(\mathbf{k}_{\mathrm{d}}\cdot\mathbf{r}_{i}^{0}=0\), the driving phase becomes homogeneous, and can be gauged away without loss of generality. The Hamiltonian can still be described in terms of the tight-binding model of Eq. (5), but one must introduce a state-dependent tunneling matrix
\[\hat{t}_{(i,\mathbf{\alpha})(j,\beta)}=t_{(i,\mathbf{\alpha})(j,\beta)}\mathbb{I}_{2} +\frac{\Omega_{\mathrm{d}}}{2}\sigma_{i}^{z}\mathrm{e}^{\mathrm{i}\mathbf{\epsilon }_{\alpha,\beta}\phi_{j}}\delta_{i,j}(1-\delta_{\alpha,\beta}). \tag{69}\]
Using the same mapping to a synthetic ladder as the one in Eq. (14), we would obtain an effective model as the one depicted in Fig. 2**(a)**, where only the vertical synthetic links in yellow contain a \(\mathbb{Z}_{2}\) gauge field that mediates the tunneling. On the other hand, the horizontal tunnelings are still \(c\)-numbers, and the complete model is thus not consistent with the local gauge symmetry. This caveat generalises the case of the synthetic rectangular plaquette of Fig. 20**(a)** to the case of a full rectangular ladder: we would need \(3N-2\) gauge qubits to make the model gauge invariant, but only have \(N\) trapped-ion qubits at our disposal.
Accordingly, the horizontal links cannot be gauged with the available \(\mathbb{Z}_{2}\) fields, which are already in use to gauge the vertical ones. In order to obtain a gauge-invariant model, the idea is to make a synthetic dimensional reduction by exploiting the collective normal modes as follows. We first introduce a site-dependent shift of the effective on-site energies of Eq. (7)
\[\begin{split}&\omega_{2i-1,y}-\omega_{2i,y}=0,\quad\omega_{2i+1,y}- \omega_{2i,y}=\tilde{\Delta},\\ &\omega_{2i-1,x}-\omega_{2i,x}=\tilde{\Delta},\quad\omega_{2i+1,x }-\omega_{2i,x}=0,\end{split} \tag{70}\]
where we have introduced a parameter for these shifts, \(\tilde{\Delta}\), that fulfils \(|t_{(i,\alpha)(j,\alpha)}|\ll|\tilde{\Delta}|\). In this regime, the tunneling of phonons vibrating along the \(x\)-axis (\(y\)-axis) can only take place within dimers, i.e. pairs of neighbouring sites, composed of even-odd (odd-even) sites (see Fig. 26**(a)**). In analogy with ultracold atoms, this dimerisation could be obtained from the light shift of a tilted optical super-potential, or by working with micro-fabricated traps that allow one to design the local trap frequencies individually [201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213]. As depicted in Fig. 26**(b)**, the normal vibrational modes of the chain then break into a collection of center-of-mass and zigzag modes (54) that only have support on the alternating dimers
\[\begin{split} a_{\mathrm{c},2i-1,y}&=\tfrac{1}{\sqrt{2}}\big{(}a_{2i-1,y}+a_{2i,y}\big{)},\quad a_{\mathrm{z},2i-1,y}=\tfrac{1}{\sqrt{2}}\big{(}a_{2i-1,y}-a_{2i,y}\big{)},\\ a_{\mathrm{c},2i,x}&=\tfrac{1}{\sqrt{2}}\big{(}a_{2i,x}+a_{2i+1,x}\big{)},\quad a_{\mathrm{z},2i,x}=\tfrac{1}{\sqrt{2}}\big{(}a_{2i,x}-a_{2i+1,x}\big{)},\end{split} \tag{71}\]
where the index \(i\in\{1,\cdots N/2\}\) labels the different dimers.
The additional ingredient of the proposed scheme is a pair of light-shift optical potentials, each of which is addressed only to the ions that sit on the even or odd sites of the chain, and can be described by Eq. (35) with the corresponding restrictions on the addressed ions. As depicted in the inset of Fig. 26**(b)**, the even (odd) optical potentials are tuned to the required energy offsets \(\omega_{\mathrm{d}}=\omega_{y}-\omega_{x}\) (\(\omega_{\mathrm{d}}=\omega_{y}-\omega_{x}+\tilde{\Delta}\)). In this expression, we further assume that the even terms lie in the resolved-sideband limit of Eq. (55) and, analogously, the odd ones lie in the resolved-sideband limit taking into account the additional energy shift by \(\tilde{\Delta}\). As a result of these conditions, using the lessons learned from the previous simpler setups,
one can check that the corresponding state-dependent parametric couplings will dress the tunneling between the center-of-mass modes of neighbouring \(x\) and \(y\) dimers while minimising the tunneling between the zigzag modes. In this way, we are halving the number of degrees of freedom, a precursor of the aforementioned synthetic dimensional reduction, and the available trapped-ion qubits serve to simulate the \(\mathbb{Z}_{2}\) gauge field defined on the links of a dimension-reduced chain.
As depicted in Fig. 26 **(b)**, this allows us to derive an effective dressed tunneling
\[V_{1}(t)\approx\sum_{i=1}^{N/2}\frac{\Omega_{\rm d}}{4}\bigg{(} \mathrm{e}^{+\mathrm{i}\phi_{2i}}a_{\mathrm{c,2}i-1,y}^{\dagger} \sigma_{2i}^{z}a_{\mathrm{c,2}i,x}\] \[+\mathrm{e}^{-\mathrm{i}\phi_{2i+1}}a_{\mathrm{c,2}i,x}^{\dagger} \sigma_{2i+1}^{z}a_{\mathrm{c,2}i+1,y}\bigg{)}+\mathrm{H.c.}, \tag{72}\]
where the microscopic parameters are again those of Eq. (68). Note that the alternation in the sign of the tunneling phases \(\mathrm{e}^{+\mathrm{i}\phi_{2i}}\) and \(\mathrm{e}^{-\mathrm{i}\phi_{2i+1}}\), is caused by the fact that the light-shift potential provides (absorbs) the missing (excess) energy to activate the tunneling against the corresponding energy offset \(\tilde{\Delta}\), making the dressed tunneling resonant. In this case, in order to make this complex phase irrelevant, we should align the laser wave-vector in a direction orthogonal to the chain \(\mathbf{k}_{\rm d}\cdot\mathbf{r}_{i}^{0}=0\).
In addition to these ingredients, we would again need a second tone that drives the qubit transition (20), corresponding to a carrier term with a resonance condition that includes the ac Stark shifts (44). We can then perform the aforementioned dimensional reduction by considering the formal mapping
\[a_{\mathrm{c,2}i,x},a_{\mathrm{c,2}i,x}^{\dagger}\mapsto a_{2i},a_{2i}^{ \dagger},\;\;a_{\mathrm{c,2}i-1,y},a_{\mathrm{c,2}i-1,y}^{\dagger}\mapsto a_{ 2i-1},a_{2i-1}^{\dagger}, \tag{73}\]
for \(i\in\{1,\cdots,N/2\}\), such that the odd (even) matter sites correspond to the \(y\)-axis (\(x\)-axis) center-of-mass modes. Besides, the gauge qubits are identified via the standard mapping
\[\sigma_{i}^{x},\sigma_{i}^{z}\mapsto\sigma_{i,\mathbf{\epsilon}_{1}}^{x},\sigma_{ i,\mathbf{\epsilon}_{1}}^{z}, \tag{74}\]
where the index now covers all links \(i\in\{1,\cdots,N-1\}\).
In this way, we have reduced the synthetic two-leg ladder onto a chain (see Fig. 27**(a)**), halving the number of matter sites and reducing the number of required links. Accordingly, the available physical qubits suffice to gauge all synthetic tunnelings, obtaining an effective gauge-invariant model that generalises the single-link case (22) to the full chain
\[H_{\rm eff}=\sum_{i=1}^{N-1}\Bigl{(}\Bigl{(}t_{i,\mathbf{\epsilon}_{1}}a_{i+1}^{ \dagger}\sigma_{i,\mathbf{\epsilon}_{1}}^{z}a_{i}+\mathrm{H.c.}\Bigr{)}+h\sigma_{ i,\mathbf{\epsilon}_{1}}^{x}\Bigr{)}\,. \tag{75}\]
Here, the gauge-invariant tunneling (see Fig. 27**(b)**) has a strength that is homogeneous along the chain, and gets halved with respect to the single-link case (22), while the electric field remains the same
\[t_{i,\mathbf{\epsilon}_{1}}=\frac{\Omega_{1,2}}{4}\eta_{x}\eta_{y},\;\;\;\;h=\frac{\tilde{\Omega}_{\rm d}}{2}. \tag{76}\]
Paralleling our discussion of the single-link case, the invariance under local \(\mathbb{Z}_{2}\) transformations of this gauge theory is generated by the following operators
\[G_{i}=\mathrm{e}^{\mathrm{i}\pi a_{i}^{\dagger}a_{i}}\prod_{j\in\mathcal{L}\{i\}}\sigma_{j}^{x}, \tag{77}\]
where \(\mathcal{L}\{i\}\) is the set of links that surround a given matter site labelled by \(i\), namely \(\pm\mathbf{e}_{1}\) in the bulk (Fig. 27**(a)**), and \(+\mathbf{e}_{1}\) (\(-\mathbf{e}_{1}\)) at the leftmost (rightmost) boundary sites \(i=1\) (\(i=N\)).
Figure 26: **Synthetic dimensional reduction for a \(\mathbb{Z}_{2}\) chain:****(a)** We represent the transverse vibrational degrees of freedom of a trapped-ion chain in a frequency scheme, where the corresponding trap frequencies \(\omega_{x}<\omega_{y}\) can be resolved by external parametric drives. The introduction of the site-dependent shift of the frequencies in Eq. (70) leads to a two-site gradient, here depicted by \(\tilde{\Delta}\). For \(|t_{ij}|\ll\tilde{\Delta}\), the exchange of vibrational quanta is only allowed within alternating dimers, here depicted by green solid lines. **(b)** As a consequence of this exchange, the vibrational states inside the dimers split into center-of-mass (com) and zigzag (zz) modes, which can also be resolved in energy. As shown in the inset, we apply a pair of state-dependent parametric drives addressed to even-odd or odd-even dimers for the com modes, respectively, in order to induce the desired light-shift potential underlying the state-dependent tunneling of Eq. (72).
Having presented this method, let us discuss dynamical effects that go beyond the previous periodic oscillations in the link/plaquette limits. Let us recall that, as depicted in Fig. 5 **(b)**, the gauge-invariant tunneling gets inhibited as one increases the electric field \(h\), which we referred to as a precursor of confinement in larger lattice gauge theories. In the following subsections, we discuss how the trapped-ion quantum simulator (75) would allow for a clear manifestation of this confinement, focusing particularly on the one- and two-particle sectors. We will then move to half-filling, where minor modifications of the quantum simulator will allow us to explore whether the phenomenon of string breaking in real-time dynamics [278, 279, 50, 68, 280] also occurs in this gauge theory.
We consider a chain of \(N\) bosonic matter sites and \((N-1)\) gauge fields (see Fig. 27**(a)**), such that the full Hilbert space is \(\mathcal{H}=\mathcal{F}\otimes(\mathbb{C}^{2})^{\otimes(N-1)}\), with \(\mathcal{F}=\oplus_{n=0}^{\infty}\mathcal{F}_{n}\). Here, each subspace is \(\mathcal{F}_{n}=\text{span}\{|n_{1}\rangle\otimes|n_{2}\rangle\otimes\cdots\otimes|n_{N}\rangle:n_{1}+n_{2}+\cdots+n_{N}=n\}\). Due to the global \(U(1)\) symmetry in the matter sector, the dynamics will take place within one of these subspaces, depending on the number of bosons of the initial state. In addition, due to the gauge symmetry, the physical states are further restricted via Gauss' law, which now imposes \(N\) constraints
\[G_{i}\ket{\Psi_{\text{phys}}}=\text{e}^{\text{i}\pi q_{i}}\ket{\Psi_{\text{ phys}}},\;\forall i\in\{1,\cdots,N\}. \tag{78}\]
We recall that \(q_{i}\in\{0,1\}\) are the background \(\mathbb{Z}_{2}\) charges that specify the super-selection sector where the dynamics occurs. Gauss' law in the bulk has also been depicted in Fig. 27**(a)**. In Fig. 27**(b)**, we represent the Hilbert spaces of the bosonic matter sites and qubit links, as well as the transitions involved in a gauge-invariant tunneling.
#### v.2.1 One-boson sector: Wannier-Stark localisation
In this case, the physical subspace \(\mathcal{V}_{\text{phys}}=\text{span}\{|\text{$\sim\bullet_{i}$}\rangle:i\in\{1,\cdots,N\}\}\subset\mathcal{F}_{1}\otimes(\mathbb{C}^{2})^{\otimes(N-1)}\) is spanned by the states
\[|\text{$\sim\bullet_{i}$}\rangle=\left(\prod_{\ell<i}\sigma_{\ell,\epsilon_{1}}^{z}\right)a_{i}^{\dagger}\ket{\text{vac}}. \tag{79}\]
Here, as the lattice starts and finishes with a matter site, the vacuum \(\ket{\text{vac}}=|0_{1},-_{1,\epsilon_{1}},\cdots,0_{N-1},-_{N-1,\epsilon_{1}},0_{N}\rangle\) belongs to the super-selection sector with a single background charge at the leftmost boundary, \(q_{1}=1\), and zero elsewhere, \(q_{j}=0\), \(\forall j\neq 1\). As a result of the composite operator in Eq. (79), the basis states represent a boson at site \(i\), together with a domain-wall configuration of the gauge qubits in the Hadamard basis, \(|\text{$\sim\bullet_{i}$}\rangle=|0_{1},+_{1,\epsilon_{1}},\cdots,+_{i-1,\epsilon_{1}},1_{i},-_{i,\epsilon_{1}},\cdots,-_{N-1,\epsilon_{1}},0_{N}\rangle\). This has been depicted in Fig. 29**(a)**, where the boson at site \(i\) is connected to an electric-field line that extends towards the left boundary, connecting the static charge and the dynamical one. We thus see that the dynamics, which in principle is defined in an exponentially large Hilbert space, gets restricted to a much smaller subspace whose size only grows linearly with the length of the chain. Up to an irrelevant shift of the zero of energies, the Hamiltonian (75) can be rewritten as
\[H_{\text{eff}}=\sum_{i=1}^{N-1}\left(t_{i,\epsilon_{1}}\ket{\text{$\sim \bullet_{i+1}$}\rangle\bra{\text{$\sim\bullet_{i}$}}+\text{H.c.}\right)+2h \sum_{i=1}^{N}i\ket{\text{$\sim\bullet_{i}$}}\bra{\text{$\sim\bullet_{i}$}}, \tag{80}\]
which corresponds to a tight-binding problem where a single composite particle, built from the dynamical \(\mathbb{Z}_{2}\) charge with the attached electric string, tunnels in the background of a linear potential. This problem maps exactly to the tight-binding
Figure 27: **Gauss’ law and gauge-invariant tunneling in a bosonic \(\mathbb{Z}_{2}\) chain: (a)** Schematic representation of a \(\mathbb{Z}_{2}\) gauge theory of bosonic matter on a one-dimensional chain. The bosons sit on the lattice sites (red circles), whereas the gauge fields reside on the links and correspond to qubits. The shaded area represents Gauss’ law in the bulk. **(b)** At each matter site, we can have any integer number of bosons, whereas the gauge fields on the links have two possible states in a given basis, here the electric-field Hadamard basis. On the right panel, we represent schematically the gauge-invariant tunneling of a boson towards a neighbouring site, which is already occupied by two bosons. As depicted with the yellow arrow, this requires the gauge qubit to be flipped, \(\ket{-_{i,\epsilon_{1}}}\rightarrow\ket{+_{i,\epsilon_{1}}}\), such that the electric string grows.
Wannier-Stark ladder [281; 282; 283; 284; 285]. In contrast to the analogous problem in classical physics, where the particle would simply fall down the linear slope until reaching the bottom, the quantum particle can only oscillate around the initial position, leading to the so-called Wannier-Stark localisation. In the present context of the \(\mathbb{Z}_{2}\) gauge theory (75), these oscillations will be accompanied by the periodic stretching and compressing of the attached electric-field line, as we now discuss in detail.
Considering that the tunneling strength is homogeneous, \(t_{i,\mathbf{\epsilon}_{1}}=\Omega_{\rm d}/4\), \(\forall i\), the effective Wannier-Stark ladder can be solved exactly in the thermodynamic limit \(N\to\infty\). Shifting the zero of energy to the center of the chain, the problem can be mapped onto the dynamics of a single planar rotor [285]. Let us discuss some of the details. A particle on a circle can be described in the basis \(\{|\mathbf{\varphi}\rangle\}\) determined by its angle \(\varphi\in[0,2\pi)\). In this basis, the angular momentum \(J_{z}=-i\partial_{\varphi}\) is a Hermitian operator, and it readily follows that its spectrum is a countably infinite set \(\sigma(J_{z})=\mathbb{Z}\). Moreover, one can introduce the unitary ladder operators \(J_{\pm}={\rm e}^{\pm i{\varphi}}\), and find a representation of the \(O(2)\) rotor algebra \([J_{z},J_{\pm}]=\pm J_{\pm}\), and \([J_{+},J_{-}]=0\), which differs from the more standard \(SU(2)\) algebra of spin operators.
\[J_{z}=\sum_{i\in\mathbb{Z}}i\,|{\sim}\bullet_{i}\rangle\!\langle{\sim} \bullet_{i}|\,,\ \ \ J_{\pm}=\sum_{i\in\mathbb{Z}}|{\sim}\bullet_{i\pm 1} \rangle\!\langle{\sim}\bullet_{i}|\,, \tag{81}\]
where we see that the position of the \(\mathbb{Z}_{2}\) charge with the attached electric-field line maps onto the angular momentum of the rotor, whereas the tunnelings to the right (left) map onto the rotor ladder operators \(J_{+}\) (\(J_{-}\)).
It is then straightforward to derive the Heisenberg equations for the mean position and standard deviation of the \(\mathbb{Z}_{2}\) charged boson. For instance, considering that the boson is initially in the middle of the chain \(|\Psi_{\rm phys}(0)\rangle=|{\sim}\bullet_{N/2}\rangle\), we find that the mean is \(\langle J_{z}(t)\rangle=N/2\), while the standard deviation oscillates
\[\sigma(t)=(\langle J_{z}^{2}(t)\rangle-\langle J_{z}(t)\rangle^{2})^{1/2}=( \sqrt{2}t_{i,\mathbf{\varepsilon}_{1}}/h)\sin(ht)\,. \tag{82}\]
We thus see that the average position of the boson attached to the electric-field line remains constant. However, this is not the signature of the aforementioned Wannier-Stark localisation yet. Indeed, setting \(h=0\) yields the same result, as an initially localised particle in a tight-binding model with the same amplitude of tunneling to the left and right can only disperse around the initial position, but its average position remains static. In this limit, the above expression of the standard deviation leads to a ballistic dispersion \(\sigma(t)=(\sqrt{2}t_{1,\mathbf{\varepsilon}_{1}})t\), which differs clearly from the breathing-type oscillations that appear as soon as \(h\neq 0\). Hence, it is the change in the dispersion which provides a signature of the Wannier-Stark localisation of the \(\mathbb{Z}_{2}\) charge, which can only disperse within a localised region by periodically stretching and compressing the attached electric-field string.
To provide a more complete description of this localisation, we note that the thermodynamic problem has an exact solution in terms of the so-called Wannier-Stark eigenstates
\[|\mathbf{\varepsilon}_{m}\rangle=\sum_{i\in\mathbb{Z}}(-1)^{i-m}J_{i-m}(\gamma)|{ \sim}\bullet_{i}\rangle\,,\ \ \ \gamma=\frac{t_{i,\mathbf{\varepsilon}_{1}}}{h} \tag{83}\]
where \(J_{n}(x)\) is the Bessel function of the first kind of integer order \(n\), and the corresponding energies \(\mathbf{\varepsilon}_{m}=m(2h)\) define the aforementioned Wannier-Stark ladder for \(m\in\mathbb{Z}\). This solution can be derived by going to momentum space and using the Hansen-Bessel integral representation [284; 285] or, more directly, by looking into the discrete difference equation for the amplitudes of the eigenstates in the physical basis
\[|\mathbf{\varepsilon}_{m}\rangle=\sum_{i\in\mathbb{Z}}c_{i}\ |{\sim}\bullet_{i}\rangle\,,\ \ \ t_{i-1,\mathbf{\varepsilon}_{1}}c_{i-1}+t_{i,\mathbf{\varepsilon}_{1}}^{*}c_{i+1}+2hic_{i}=\mathbf{\varepsilon}_{m}c_{i}\,. \tag{84}\]
This equation can be rewritten in terms of the recurrence relation of Bessel functions [286], such that one can identify \(c_{i}=(-1)^{i-m}J_{i-m}(\gamma)\), and check for the consistency of normalisation \(\sum_{i\in\mathbb{Z}}|c_{i}|^{2}=\sum_{i\in\mathbb{Z}}J_{i-m}^{2}(\gamma)=1\). In light of the asymptotic scaling of the Bessel functions, which vanish rapidly for \(|i-m|\gg\gamma\), one can see that the eigenstates (83) are not delocalised over the whole lattice as occurs for \(h\to 0\) but, instead, concentrated around the \(m\)-th site, which is a more direct manifestation of the so-called Wannier-Stark localisation that parallels the definition of Anderson localisation in disordered systems [287].
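For concreteness, the following short Python snippet (an illustrative check that is not part of the original analysis; the value of \(\gamma\) and the truncation window are arbitrary choices) evaluates the amplitudes \(c_{i}=(-1)^{i-m}J_{i-m}(\gamma)\) of Eqs. (83)-(84), verifies the normalisation, and displays the rapid decay of the weight away from the \(m\)-th site.

```python
# Illustrative check of the Wannier-Stark eigenstate amplitudes in Eqs. (83)-(84).
# Assumption: gamma and the truncation window below are arbitrary, illustrative values.
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n(x)

gamma = 0.8                      # gamma = t / h
m = 0                            # ladder index of the chosen eigenstate
sites = np.arange(-30, 31)       # truncated window of lattice sites around i = m

c = (-1.0) ** (sites - m) * jv(sites - m, gamma)   # c_i = (-1)^(i-m) J_{i-m}(gamma)

print("normalisation sum |c_i|^2 =", np.sum(c ** 2))   # ~1 up to truncation error
for d in range(5):
    weight = c[sites == m + d][0] ** 2
    print(f"|c_(m+{d})|^2 = {weight:.3e}")             # rapid decay for |i-m| >> gamma
```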
With these eigenstates, one can construct the full unitary propagator of the problem. Using the Neumann-Graff addition formula of Bessel functions [288], the probability to find the boson with the attached electric-field line \(r\) sites apart is \(p_{r}(t)=|\langle{\sim}\bullet_{N/2+r}|\Psi_{\rm phys}(t)\rangle|^{2}=J_{r}^{2} \big{(}2\gamma\sin(ht)\big{)}=p_{-r}(t)\). In comparison to the Rabi oscillations of the single-link case in Fig. 5, where the boson and the gauge field oscillate in phase according to the observables of Eq. (28), we now have correlated Wannier-Stark oscillations in both the number of bosons
\[\overline{n}_{i}(t)=\langle a_{i}^{\dagger}a_{i}(t)\rangle=p_{i}(t)=J_{i-N/2}^{2 }\big{(}2\gamma\sin(ht)\big{)}, \tag{85}\]
and the position of the electric-field line attached to the boson,
Figure 28: **Wannier-Stark localisation of a single boson:** We compare the analytical prediction for \(\overline{n}_{N/2}(t)=\langle a_{N/2}^{\dagger}a_{N/2}(t)\rangle\) in Eq. (85) to the numerical results based on Matrix Product states with bond dimension \(\chi=100\) for a chain with \(N=16\) lattice sites. We set the transverse electric field to \(h=0.4t_{1,\mathbf{\varepsilon}_{1}}\) and we use the time step \(\delta t=0.05/t_{1,\mathbf{\varepsilon}_{1}}\). The analytical formula (85) is based on the mapping of the \(\mathbb{Z}_{2}\) gauge theory for a single boson (80) to the Wannier-Stark ladder in the thermodynamic limit of an infinitely-long chain.
which can be inferred from the two-point correlation function
\[\langle\sigma^{x}_{i-1,\mathbf{e}_{1}}\sigma^{x}_{i,\mathbf{e}_{1}}(t)\rangle=1-2J^{2}_{i-N/2}\big{(}2\gamma\sin(ht)\big{)}. \tag{86}\]
In order to test these predictions with a numerical method that can be easily adapted to other matter contents, let us now briefly discuss our approach based on matrix product states (MPSs) [289, 290]. MPSs are a major tool for the classical numerical simulation of 1D strongly correlated models. These methods capture the interplay of locality and entanglement by expressing an entangled many-body wave function in terms of local tensors. For static calculations, the MPS-based density matrix renormalization group (DMRG) [291] has become a common choice for obtaining the ground and a few low-lying excited states of many-body Hamiltonians, as it can reach remarkable accuracy and reliability. In the case of real-time evolution, the breakthrough came with the development of the time-evolving block decimation (TEBD) algorithm [292], but it can also be treated using a variety of methods [293]. Among these, the time-dependent variational principle (TDVP) [294], which uses a Lie-Trotter decomposition to integrate a train of tensors sequentially, is less error-prone and more accurate than other available methods. We thus select it as our method of choice for the current work.
In TDVP, the MPS ansatz \(|\psi(t)\rangle\) can be understood as a variational manifold of reduced dimensionality within the full many-body Hilbert space. The time evolution of the MPS is obtained by computing the action of the Hamiltonian \(H\) along the tangent direction to this variational manifold, which we recall is described by the MPS bond dimension \(\chi\). This approach leads to an effective Schrodinger equation for states constrained to the MPS manifold that reads as follows
\[\mathrm{i}\frac{d}{dt}|\psi(t)\rangle=P_{T_{|\psi\rangle}}H|\psi(t)\rangle \tag{87}\]
where \(P_{T_{|\psi\rangle}}\) is an orthogonal projector onto the tangent space of \(|\psi(t)\rangle\). In our work, we follow the prescription described in Ref. [295] to implement a one-site version of TDVP. Moreover, we use the time-step \(\delta t=0.05/t_{i,\mathbf{e}_{1}}\) and \(\chi=100\) for the TDVP calculations. In Fig. 28, we present a quantitative comparison of the analytical prediction for the boson number operator at the center of the chain \(\overline{n}_{N/2}(t)\) in Eq. (85) with the numerical results based on MPS. The agreement between the TDVP numerical simulations and the exact analytical solution is remarkable, and serves to benchmark the validity of our approach.
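For readers who wish to reproduce the analytical curve used in this benchmark, the snippet below (a minimal sketch; it only evaluates the closed-form prediction of Eq. (85), not the TDVP simulation itself, and the parameters simply mirror those quoted above) tabulates \(\overline{n}_{N/2}(t)=J_{0}^{2}\big{(}2\gamma\sin(ht)\big{)}\).

```python
# Closed-form Wannier-Stark prediction of Eq. (85) for the boson density at the chain center.
# Assumption: parameters mirror those quoted in the text (h = 0.4 t, so gamma = t/h = 2.5).
import numpy as np
from scipy.special import jv

t_hop = 1.0                      # gauge-invariant tunneling t_{1,e1} sets the energy unit
h = 0.4 * t_hop                  # transverse electric field
gamma = t_hop / h
times = np.linspace(0.0, 3.0 * np.pi / h, 181)

n_center = jv(0, 2.0 * gamma * np.sin(h * times)) ** 2   # n_{N/2}(t) = J_0^2(2 gamma sin(ht))

for time, n in zip(times[::30], n_center[::30]):
    print(f"t = {time:6.2f} (1/t_hop)   n_center = {n:.4f}")  # refocuses whenever sin(ht) = 0
```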
Let us now go beyond this specific expectation value, and look into other observables that give further insight in the localisation phenomenon. As depicted in the first two panels of Fig. 29 **(b)**, the boson density and the attached electric-field line remain localised around the initial position. On the left of this figure, we present two contour plots for the expectation value of the boson number operator \(\overline{n}_{i}(t)=\langle a^{\dagger}_{i}a_{i}(t)\rangle\) as a function of time and the site index of the lattice. The two plots correspond to different values of the electric field \(h=0.6t_{1,\mathbf{e}_{1}}\) and \(h=0.4t_{1,\mathbf{e}_{1}}\), and one can see how the spread of the breathing-type oscillations of the boson decreases as the value of the electric field \(h\) grows. The next column shows the corresponding contour plots of the electric field sustained by the gauge qubits \(\overline{x}^{\ast}_{i}(t)=\langle\sigma^{x}_{i,\mathbf{e}_{1}}(t)\rangle\). In these two plots, one can see how the electric-field string oscillates periodically instead of spreading ballistically, which distorts the perfect domain-wall correlations in the initial state of Eq. (86). In the third column of Fig. 29 **(b)**, we also represent the block entanglement entropy \(S(\rho_{A})=-\text{Tr}\{\rho_{A}\log\rho_{A}\}\), showing that the region where the stretching and compressing of the electric-field line takes place coincides with the region where entanglement is built up in the real-time dynamics.
Note that, after multiples of the exchange period \(\Delta t_{\text{ex}}=2\pi/h\), the boson and the domain wall on the qubits fully refocus to the initial position. The resulting state becomes a product state, as can be inferred from the vanishing entanglement entropy at those instants of time. This is different from the trend in the limit of a single link (28), where the time it takes the boson to return to the initial site depends on the ratio of the tunneling and the electric field (see Fig. 5 **(b)**). As we increase the value of the electric field \(h\), the oscillations become faster and the breathing-type dispersion is more localised. This so-called Wannier-Stark localisation is particularly transparent in the regime \(2\gamma\ll 1\), where the asymptotics of Bessel functions allows us to show that the boson remains exponentially localised around the center \(\overline{n}_{N/2+r}(t)\leq J^{2}_{r}(2\gamma)\approx\exp\{-r/\tilde{\xi}_{ \text{loc}}\}\) with \(\tilde{\xi}_{\text{loc}}=-1/2\log\gamma\). By using the maximum of the first-order Bessel function, one can also predict the short-time dynamics of the boson \(\overline{n}_{N/2\pm 1}(t)\approx J^{2}_{1}(2\gamma ht)\) to be ballistic, displaying an initial linear light-cone-like spreading. However, as time elapses, the effects of the stretching/compressing electric-field string start becoming manifest, and the dynamics is no longer ballistic but, instead, displays a breathing-type periodic behaviour. This also contrasts with the dynamics of disordered one-dimensional systems [296], where an initially-localised particle tends to a stationary exponentially-localised solution characteristic of Anderson localisation, which can also be observed with trapped ions [297]. In such an Anderson-localised system, there is no breathing-like behaviour of the kind displayed in our case. Let us now move to the two-particle sector, where one can see how the Wannier-Stark localisation develops into a specific confinement phenomenon.
#### v.2.2 Two-boson sector: Wannier-Stark confinement
In the two-boson sector, the physical subspace \(\mathcal{V}_{\text{phys}}=\text{span}\{|i\mathbf{\bullet}\mathbf{\sim}\mathbf{\bullet}_{j} \rangle:i,j\in\{1,\cdots,N\},\) and \(j\geq i\}\subset\mathcal{F}_{2}\otimes\mathbb{C}^{2(N-1)}\) can be spanned by the following states
\[|i\mathbf{\bullet}\mathbf{\sim}\mathbf{\bullet}_{j}\rangle=a^{\dagger}_{i}\left(\prod_{i \leq\ell<j}\sigma^{z}_{i,\mathbf{e}_{1}}\right)a^{\dagger}_{j}|\text{vac} \rangle\,. \tag{88}\]
These states contain a pair of bosons connected by an electric-field line (see Fig. 31 **(a)**). In analogy to the single-boson sector (84), one can expand these two-particle solutions as
\[|\mathbf{\epsilon}_{m,P}\rangle=\sum_{i=1}^{N}\sum_{j\geq i}c_{i,j}\,|\mathbf{\bullet} \mathbf{\sim}\mathbf{\bullet}_{j}\rangle=\sum_{i,j}^{\prime}c_{i,j}\,a^{\dagger}_{i} \left(\prod_{i\leq\ell<j}\sigma^{z}_{i,\mathbf{e}_{1}}\right)a^{\dagger}_{j}| \text{vac}\rangle\,, \tag{89}\]
where the matrix of coefficients is symmetric \(c_{i,j}=c_{j,i}\), and is further constrained by normalisation \(\langle\varepsilon_{m,P}\,|\varepsilon_{m,P}\rangle=1\). This type of state has been used as a Bethe-type ansatz for the two-body problem of a Bose-Hubbard model [298], where it allows one to determine the scattering and bound states [299; 300; 301]. In our case, the recurrence relation obtained after applying the \(\mathbb{Z}_{2}\) gauge-theory Hamiltonian reads
\[\begin{split} t_{i-1,\epsilon_{1}}c_{i-1,j}+t_{i,\epsilon_{1}}^ {*}c_{i+1,j}+&\,t_{j-1,\epsilon_{1}}c_{i,j-1}+t_{j,\epsilon_{1} }^{*}c_{i,j+1}\\ +&\,2h(j-i)\,c_{i,j}=\varepsilon_{m,P}c_{i,j}.\end{split} \tag{90}\]
In addition to the tunneling, which resembles a 2D tight-binding problem, we see that the potential only depends on the relative distance of the two bosons. Hence, the problem is different from the Wannier-Stark case of a pair of tight-binding charges subjected to a constant background electric field. However, as we now show, by introducing the center-of-mass, \(x_{\rm cm}=\frac{1}{2}(i+j)\), and relative, \(r=j-i\geq 0\), coordinates, the problem reduces to the Wannier-Stark ladder for a single particle in a one-dimensional chain. Noting once more that the tunneling strengths are homogeneous, one finds
\[c_{i,j}={\rm e}^{{\rm i}Px_{\rm cm}}c(r),\quad t_{P}\,c(r-1)+t_{P}^{*}\,c(r+1)+2hr\,c(r)=\varepsilon_{m,P}\,c(r), \tag{91}\]
where we have introduced the conserved total momentum \(P=(p_{i}+p_{j})\), the momentum-independent Wannier-Stark ladder energies \(\varepsilon_{m,P}=m(2h)\), and the dressed tunneling strength
\[t_{P}=2t_{1,\epsilon_{1}}\cos\left(P/2\right). \tag{92}\]
In contrast to Bose-Hubbard-type models with finite range interactions, which can lead to both scattering and bound states for a pair of bosons [300], the above recurrence equation (91) describes a relative particle that tries to tunnel against a linear potential with a dressed tunneling strength that depends on the center-of-mass momentum. Once again, by taking the thermodynamic limit, the recurrence equation corresponds to that of a Wannier-Stark ladder (84) for the relative particle. In this way, we obtain the following solutions
\[|\varepsilon_{m,P}\rangle=\sum_{i}\sum_{j\geq i}{\rm e}^{{\rm i}P(i+j)/2}(-1)^{j-i+m}J_{j-i-m}(\gamma_{P})\,|i\mathbf{\bullet}\mathbf{\sim}\mathbf{\bullet}_{j}\rangle\,,\ \ \ \gamma_{P}=\frac{t_{P}}{h}. \tag{93}\]
The amplitudes of these solutions decay exponentially fast as the relative distance \(r=j-i\) increases beyond \(m\). These solutions are a toy analog of mesons in higher-dimensional non-Abelian gauge theories. The original particles, which carry a net \(\mathbb{Z}_{2}\) charge, cannot be observed as individual excitations, just like quarks in quantum chromodynamics. They instead become confined in pairs of zero net charge, each associated with a specific quantised binding energy that depends on their degree of confinement. In the present case, these meson-like particles can move freely as a whole, i.e. with a non-zero center-of-mass momentum.
Let us now consider an initial state in which the bosons are symmetrically positioned about the center of the chain with relative distance \(r_{0}\), namely \(|\Psi_{\text{phys}}(0)\rangle=|_{(N-r_{0})/2}\mathbf{\bullet}\mathbf{\sim}\mathbf{\bullet}_{(N+r_{0})/2}\rangle\). This state has a vanishing total momentum \(P=0\), such that the center of mass will remain localised at the center of the chain while the two particles disperse and interfere. According to our previous discussion, the number of bosons (85) should now evolve according to
\[\overline{n}_{i}(t)=\left|J_{i-N/2-r_{0}/2}\big{(}2\gamma\text{sin}(ht)\big{)} +J_{i-N/2+r_{0}/2}\big{(}2\gamma\text{sin}(ht)\big{)}\right|^{2}. \tag{94}\]
In Fig. 30, we present a quantitative comparison of this analytical expression for \(\overline{n}_{N/2}(t)\) with the TDVP numerical results. As found in the single-particle sector of Fig. 28, the agreement of the numerical TDVP results with the analytical prediction in terms of the sum of Bessel functions is remarkable, and serves to benchmark the validity of our approach, which will be extended to situations beyond analytical solutions below.
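The interference of the two Wannier-Stark breathers in Eq. (94) can likewise be evaluated directly; the sketch below does so with illustrative values of \(h\) and of the initial separation \(r_{0}\) (the latter is not specified above, so it is an assumption of this example).

```python
# Two-boson density of Eq. (94): two interfering Wannier-Stark breathers centred at N/2 -+ r0/2.
# Assumptions: h = 0.3 t and N = 32 mirror the quoted benchmark; r0 is an illustrative choice.
import numpy as np
from scipy.special import jv

t_hop, N, r0 = 1.0, 32, 8
h = 0.3 * t_hop
gamma = t_hop / h
sites = np.arange(N)

for time in np.linspace(0.0, 2.0 * np.pi / h, 5):
    arg = 2.0 * gamma * np.sin(h * time)
    n_i = np.abs(jv(sites - N / 2 - r0 / 2, arg) + jv(sites - N / 2 + r0 / 2, arg)) ** 2
    # The total boson number stays ~2; the central site fills only when the breathers overlap.
    print(f"t = {time:5.2f}:  n at chain center = {n_i[N // 2]:.4f},  sum_i n_i = {n_i.sum():.3f}")
```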
Prior to that, however, we present in Fig. 31 **(b)** additional observables that highlight different aspects of the confinement. As shown in the first column, where we present a contour plot for the boson distribution \(\overline{n}_{i}(t)=\langle a_{i}^{\dagger}a_{i}\left(t\right)\rangle\) for \(h=1.2t_{1,\mathbf{e}_{1}}\) (upper row) and \(h=0.3t_{1,\mathbf{e}_{1}}\) (lower row), the center of mass of the two bosons remains at the centre of the chain as time evolves. In this dynamics, the two particles disperse developing a pair of breathing-type oscillations similar to the single-boson case discussed in Fig. 29**(b)**. As predicted by Eq. (94), for sufficiently-large electric field (upper row), the pair of bosons only perform small oscillations about the initial position, but do not interfere. For weaker electric fields (lower row), the width of the breathers is big enough such that their oscillations overlap, and we find interference effects through which the probability to find a boson at the center of the chain adds constructively for given instants of time. As a result, new breathing-type oscillations are superposed. In the center and rightmost columns of Fig. 31**(b)**, we represent the corresponding electric field \(\overline{s}_{i}^{x}(t)=\langle\sigma_{i,\mathbf{e}_{1}}^{x}(t)\rangle\), as well as the block entanglement entropy \(S(\rho_{A})\). One can clearly see that the interference also appears in the gauge degrees of freedom via the structure of the initial domain wall, i.e. electric-field line. Regarding the evolution of the block entanglement, we see that quantum correlations build up at the edges of the initial electric-field line, and then grow, defining a light-cone-like dispersion. After this expansion, inside the mutual effective light cone, entanglement can grow further and show a characteristic interference pattern that coincides with the region where the electric-field string stretches and compresses.
After the same periodic exchange durations found in the single-particle case, namely multiples of \(\Delta t_{\text{ex}}=2\pi/h\), the dispersion and interference of the bosons refocus completely, and we come back to the original situation in which the boson pair and the intermediate electric-field line have a distance of \(2r_{0}\). We thus see that, for sufficiently-large systems in any non-zero electric field \(h\), the pair of bosons do not spread ballistically but, instead, disperse up to a maximal distance and then refocus periodically. This unveils an additional aspect of the confinement discussed previously in terms of the bound-state solutions of Eq. (93). The Bessel-function envelope of these solutions, which decays rapidly with the inter-boson distance, underlies the absence of excitations with a non-zero \(\mathbb{Z}_{2}\) charge in the energy spectrum of the model, which is the ultimate smoking gun for confinement. Given the connection to the Wannier-Stark physics, we refer to this neat manifestation of confinement as Wannier-Stark confinement.
At this point, it should be pointed out that similar real-time signatures of confinement have also been explored numerically for fermionic [50; 278; 302; 303] and bosonic [280] versions of the one-dimensional Schwinger model. The dynamics in these cases is richer, as there can also be spontaneous production of particle-antiparticle pairs due to the presence of the electric field associated with the electric-field string, i.e. the Schwinger pair-production mechanism [304]. In the fermionic case [50; 278; 302; 303], after this pair production and subsequent recombination, there is a string-breaking mechanism, whereby the intermediate electric field relaxes and there is a screening effect for the outer charges, which form new particle-antiparticle pairs of zero net charge that can move freely as a bound meson. Then, the process is reversed by creating an inverted electric-field line (anti-string) in the bulk that connects an antiparticle-particle pair (anti-pair), which can then be screened again, creating new mesons
Figure 30: **Wannier-Stark confinement for two bosons:** We compare the analytical prediction for \(\overline{n}_{N/4}(t)=\langle a_{N/4}^{\dagger}a_{N/4}(t)\rangle\) in Eq. (94) to the numerical results based on Matrix Product states with bond dimension \(\chi=100\) for a chain with \(N=32\) sites. We set the transverse electric field to \(h=0.3t_{1,\mathbf{e}_{1}}\) and we use the time step \(\delta t=0.05/t_{1,\mathbf{e}_{1}}\). The analytical formula (94) is based on the mapping of the \(\mathbb{Z}_{2}\) gauge theory for a boson pair (80) to the Wannier-Stark ladder for the particle of reduced mass in the thermodynamic limit of a chain.
that travel freely and so on. In order to explore if a string breaking mechanism can occur in our model, we explore the half-filled sector for our simpler \(\mathbb{Z}_{2}\) gauge theory in the following section, and introduce the local term in Eq. (75), to account for the energy cost of the pair production.
#### v.2.3 Half-filled sector: Partial string breaking
Let us now consider a charge-density-wave distribution of the bosons. In the case of half filling, one can distribute \(N/2\) bosons to populate the odd sites of the chain, whereas the link spins all point in the opposite direction of the transverse term, such that there is no electric field in the initial state. This state can be considered as a metastable ground state
\[|\overline{\text{vac}}\rangle=|1_{1},-_{1,\mathbf{\varepsilon}_{1}},0_{2},-_{2, \mathbf{\varepsilon}_{1}},1_{3},-_{3,\mathbf{\varepsilon}_{1}},\cdots,1_{N-1},-_{N-1, \mathbf{\varepsilon}_{1}},0_{N}\rangle \tag{95}\]
of a new gauge-invariant Hamiltonian that contains an additional staggered mass term with respect to Eq. (75), namely
\[H_{\text{eff}}{=}\sum_{i=1}^{N-1}\Bigl{(}\Bigl{(}t_{i,\mathbf{\varepsilon}_{1}}a _{i+1}^{\dagger}\sigma_{i,\mathbf{\varepsilon}_{1}}^{z}a_{i}+\text{H.c.}\Bigr{)}+ h\sigma_{i,\mathbf{\varepsilon}_{1}}^{x}\Bigr{)}+\mu\sum_{i=1}^{N}(-1)^{i}a_{i}^{ \dagger}a_{i}\,. \tag{96}\]
Let us note that this additional term is a specific case of the generic detunings in the state-dependent parametric tunneling, which were already discussed around Eq. (61). In the limit of a very large "mass" \(\mu\gg t_{i,\mathbf{\varepsilon}_{1}}\), and for hard-core bosons, this state (95) is the gauge-invariant ground state of the Hamiltonian (96) in a super-selection sector in which Gauss' law (78) has a staggered distribution of static \(\mathbb{Z}_{2}\) charges
\[q_{i}=\tfrac{1}{2}(1-(-1)^{i}),\quad\forall i\in\{1,\cdots,N\}. \tag{97}\]
In the case of standard bosons, the odd sites are not restricted to single occupancy by the hardcore constraint, and the ground state becomes a highly degenerate manifold. In any case, since we are interested in real-time dynamics, we can always consider this configuration as a reference to build a specific initial state, and then study its real-time dynamics.
We can now start from this state, and consider a meson-like excitation by adding a particle-hole excitation in which a single even (odd) site is populated (emptied) by moving a single boson between two nearest-neighbour sites. To comply with Gauss' law, an electric field must be established at the link in between, leading to
\[|_{2i}\mathbf{\bullet}{\sim}\circ_{2i+1}\rangle=a_{2i}^{\dagger} \Bigl{(}\sigma_{2i,\mathbf{\varepsilon}_{1}}^{z}\Bigr{)}\,a_{2i+1}\,|\overline{ \text{vac}}\rangle\,. \tag{98}\]
In the large-mass limit, this state corresponds to an excitation with an energy of \(\epsilon^{\rm 1s}_{\rm ex}\approx 2\mu+2h\) with respect to the vacuum state (95). The half-filling and the staggered mass change the physics considerably, as the tunneling dynamics can now generate more of these meson states by pair production, even when the total number of bosons is conserved and fixed to \(N/2\). There can thus be production of particle-antiparticle pairs within this interpretation, making the lattice model (96) closer to a discretisation of a quantum field theory of gauge and matter fields. This connection can be pushed further in the staggered-fermion approach [22] to lattice gauge theories. In the following, we stick to the simpler bosonic version, and study the analog of the aforementioned string breaking.
By analogy to the two-boson state (88), one could indeed separate the particle and the hole to a couple of distant sites by creating an electric-field string in between, namely
\[|_{2i}\mathbf{\bullet}{\sim}\cdots{\sim}\circ_{2j+1}\rangle=a_{2i}^{\dagger}\left(\prod_{2i\leq\ell<2j+1}\sigma_{\ell,\mathbf{\varepsilon}_{1}}^{z}\right)a_{2j+1}\,|\overline{\text{vac}}\rangle\,. \tag{99}\]
One could then expect that, in analogy to lattice gauge theories with fermionic matter [50; 68; 278; 302; 303], mesons can be created, distorting the initial electric-field string, which can lead to the screening of charges and the emission of pairs of mesons that propagate freely. This leads to the aforementioned string breaking. We need to set the parameters in such a way that it could be energetically favourable for the above string to decay into a pair of meson-like states (98), which may then travel freely towards the edges. Since a 2-meson state has the excitation energy \(\epsilon_{\rm ex}^{2m}\approx 4\mu+4h\) in the large-mass limit, we see that \(r>\mu/h+2\) is required for the meson configuration to be energetically favourable with respect to the string state.
In Fig. 32, we consider the initial string state \(|\Psi(0)\rangle=|_{2i_{0}}\bullet\sim\circ_{2j_{0}+1}\rangle\) for \(i_{0}=32,j_{0}=48\) and a chain for \(N=80\) lattice sites, such that \(r=16\). We solve numerically for the time evolution using our MPS algorithm with \(\mu/t_{1,{\bf e}_{1}}=0.2\), and \(h/t_{1,{\bf e}_{1}}=0.2\), such that the 2-meson state can be favourable. As can be seen in Fig. 32**(a)**, for hardcore bosons, the dynamics is reminiscent of previous studies on the fermionic Schwinger model. We find that the initial pair production distorts the intermediate electric-field string, and there is some partial screening leading to 2 meson-like excitations that initially spread from the edges of the string towards the boundaries of the chain. However, as can be seen in the second panel of Fig. 32**(a)**, there is no perfect screening and no string inversion, such that these meson-like states bend and finally refocus in a breathing-type dynamics. This partial string breaking has been previously found in a bosonic Schwinger model [280]. In the present case, we believe that the lack of perfect screening is likely caused by the different nature of the electric field term in the \(\mathbb{Z}_{2}\) gauge theory with respect to the Schwinger model. In our model, the two possible electric field eigenstates, i.e. the \(\pm\) Hadamard basis, have a very different electric-field energy. On the contrary, in the Schwinger model, the electric energy is quadratic in the electric field, and would thus be the same for these two states, such that the string inversion seems energetically more plausible. This type of electric-field energy is however not possible in light of the underlying Pauli-matrix representation of our \(\mathbb{Z}_{2}\) gauge theory. It is likely this difference which is responsible for the lack of screening and string inversion in our model, and thus leads to the final refocusing.
Moving away from the artificial hardcore constraint, which is the relevant case for the trapped-ion system, we find that the evolution is now described by Fig. 32**(b)**. In the left panel, we see that the dynamics in the charge sector is very similar to the hardcore case. The main differences, however, appear when looking into the gauge-field sector. As shown in the right panel, the possibility of populating the sites with more than one boson changes the dynamics appreciably, as the mirror symmetry of the dynamics is broken by boson-enhanced tunneling. The electric-field string can get distorted by the creation of particle-hole pairs, but this distortion is asymmetric and no longer resembles the hardcore case of Fig. 32**(a)**.
## VI Conclusions and outlook
In this article, we have presented a rich toolbox for the quantum simulation of \(\mathbb{Z}_{2}\) gauge theories using trapped ions that spans several levels of complexity. In this toolbox, the matter particles are simulated by the vibrational excitations of the ions, and the gauge field corresponds to a qubit encoded in two electronic states. In general, we have shown how to exploit a state-dependent parametric tunneling, which arises from a specific laser-ion interaction, to induce the desired gauge-invariant tunneling of a \(\mathbb{Z}_{2}\) gauge theory in the ion dynamics. Furthermore, we have shown that it is possible to explore the competition of this term with a confining electric-field term, which can be readily implemented by a direct resonant driving of the qubit transition.
At the simplest-possible level, that of a \(\mathbb{Z}_{2}\) gauge theory on a single link, we have presented two quantum simulation schemes that take into account realistic numbers, and are within reach of various experiments working with a single ion. Here, by exploiting the idea of synthetic dimensions, two vibrational modes of the ion encode the matter bosons, whereas the \(\mathbb{Z}_{2}\) gauge field is represented by the ion qubit, which sits in a synthetic link and effectively mediates frequency conversion between such modes. In general, the effective gauge-invariant tunneling strength, which comes at second order in the Lamb-Dicke expansion of the laser-ion interaction, will be of the same order as the strength of the spin-spin interactions in analog trapped-ion quantum simulators of magnetism. Here, the spin-spin couplings are also reduced by a second-order process with respect to the original sideband couplings. Given the number of experiments on trapped-ion quantum simulators of magnetism that work at these timescales in spite of various experimental sources of noise, we believe that the quantum simulation of a \(\mathbb{Z}_{2}\) gauge theory on a link is within reach of trapped ions, paralleling recent advances in ultracold-atom systems [110]. We have discussed several manifestations of gauge invariance, which have neat quantum-optical counterparts, and are within reach of current trapped-ion experiments: observing the correlated dynamics of a single matter boson, and the attached electric-field string, which would go beyond the available measurement capabilities of [110] in which the electric field is encoded in a non-local bond density of the neutral atoms. Moreover, we have also explored the two-boson dynamics, unveiling interesting connections to dark states in \(\Lambda\)-systems and entanglement between the matter bosonic modes that could also be explored in the trapped-ion experiment.
Increasing in complexity, we have discussed a scheme with a two-ion crystal that can be used to simulate a \(\mathbb{Z}_{2}\) gauge theory on the simplest plaquette. Provided that the parametric tunnelings can resolve the structure of the collective vibrational modes along two directions of the crystal, we have shown that the qubits of the two ions can be arranged in the links of a circular plaquette that can be pierced by a gauge-invariant \(\mathbb{Z}_{2}\) flux known as a Wegner-Wilson loop. We remark that this resolution will require weaker parametric drives, and thus slow the dynamics of the quantum simulator, making the experiment more challenging. We, however, believe that further advances in the field can minimise noise sources below
this timescale, and allow for experimental realization of the \(\mathbb{Z}_{2}\) plaquette. For a single boson in the matter sector, we have shown that the gauge-invariant tunneling can lead to a 't Hooft loop of the electric-field variables and that this can give rise to entanglement between the gauge qubits. Once again, the gauge-invariant dynamics of the plaquette have a quantum-optical analog in terms of a double-\(\Lambda\) system.
Finally, we have shown how one can exploit the parametric excitations in the resolved-mode regime for a larger \(N\)-ion crystal. We have introduced a generic idea of synthetic dimensional reduction, by means of which, it is possible to obtain a trapped-ion quantum simulator of a \(\mathbb{Z}_{2}\) gauge theory on a full chain. This will require further developments in which the mode frequencies used to encode the matter particles can be tailored in an inhomogeneous fashion. We have shown that a single phonon in the trapped-ion chain will evolve under this \(\mathbb{Z}_{2}\) gauge theory in complete analogy to the problem of Wannier-Stark ladders, showing in this way localisation and breathing dynamics due to a periodically stretching electric-field string. By going to the two-phonon sector, we have presented quantitative expressions for the confinement of the simulated \(\mathbb{Z}_{2}\) charges, which have been benchmarked with exhaustive numerical simulations based on matrix product states (MPS). Finally, we have also explored the half-filled sector using these MPS techniques, and shown that the trapped-ion analog quantum simulator could implement a string-breaking mechanism contributing in this way to the initial progress in the digital approach [107].
Future work will include the generalisation of the presented toolbox towards the two-dimensional case. As a starting point, it would be interesting to develop schemes that allow for the quantum simulation of one-dimensional arrays of the simple \(\mathbb{Z}_{2}\) plaquettes studied in this work. This would allow us to explore the interplay of the electric confining term and the magnetic flux term, which would here reduce to a two-spin interaction that is also within reach of trapped-ion quantum simulators. This playground is simple enough such that analytical results and numerical simulations based on MPS could likely be developed. Going beyond this limiting case, it would also be interesting to explore full two-dimensional models coupled to matter, even if the Wegner-Wilson higher-weight plaquette terms cannot be realised in the experiment. There are known examples where the intertwining of the matter particles with the gauge fields can actually lead to deconfined phases in this type of models [58].
###### Acknowledgements.
A.B. thanks D. Gonzalez-Cuadra, S.J. Hands, D. Leibfried, and G. Magnifico for useful discussions. A.B. acknowledges support from the E-COST grant CA17113 for a STSM at Oxford University, where this work was initiated. A. B. acknowledges support from PGC2018-099169-B-I00 (MCIU/AEI/FEDER, UE), from the Grant IFT Centro de Excelencia Severo Ochoa CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033, and from the CSIC Research Platform on Quantum Technologies PTI-001. The project leading to this application/publication has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101114305 ("MILLENION-SGA1" EU Project). O.B., S.S., G.A. and R.S. thank D.M. Lucas, C.J. Ballance and A.C. Hughes for useful discussions and acknowledge support from the US Army Research Office (W911NF-20-1-0038) and the UK EPSRC Hub in Quantum Computing and Simulation (EP/T001062/1). G.A. acknowledges support from Wolfson College, Oxford. R.S. acknowledges support from the EPSRC Fellowship EP/W028026/1 and Balliol College. E.T. acknowledges support from the MIUR Programme FARE (MEPH), and from QUANTERA DYNAMITE PCI2022-132919.
**Author contributions.-** A.B. conceived the idea with useful discussions with G.A., O.B., S.S. and R.S. O.B. and S.S. devised the experimental schemes presented in Sec. IV with input from G.A. and R.S., and performed the corresponding numerical simulations. E.T. performed the TDVP numerical simulations of Sec. V. A.B. wrote the bulk of the manuscript with contributions from all of the authors. All authors discussed the results and conclusions presented in the manuscript.
## Appendix A Quadrupole light-shift scheme
To estimate realistic numbers for the light-shift scheme on the optical qubit, we consider a narrow-linewidth \(674\,\mathrm{nm}\) laser system [263]. The Lamb-Dicke factor changes due to the different wavelength of the two laser beams, leading to \(\eta_{z}=2\times 0.05\) and \(\eta_{x}=2\times 0.024\). We employ two \(674\)-nm beams detuned with respect to the qubit resonance by \(\delta=\omega_{x}-\omega_{z}\), where \(\Delta=2\pi\cdot 3.56\,\mathrm{MHz}\). The detuning \(\Delta\) is much smaller than for the dipole-allowed Raman transitions on the left of Fig. 33, as the \(D_{5/2}\) level is metastable and its lifetime is much longer than the timescale of interest. In Fig. 34, we present the results of our numerical simulations for the same quantities as above, but as a function of the Rabi frequency \(\Omega\) for the optical qubit. In this case, we can achieve tunneling coupling rates of up to \(0.17\,\mathrm{kHz}\) inferred as \(1/(4\Delta t_{\mathrm{ex}})\), which are much slower than in the dipole-Raman scheme. Likewise, the state infidelity is larger, showing that the realization with optical qubits will be more challenging.
A challenge with this scheme when applied to the quadrupole transition is the large resulting light shift \(\Delta E_{\mathrm{ac}}\) on the qubit transition. This spurious term is \(1/(\eta_{x}\eta_{z})\) larger than the sought-after tunneling rate. This issue can be circumvented either by combining the tunneling interaction with a spin-echo [260] or by tracking the qubit frequency shift in software and feeding forward the acquired phase. The first approach is no longer compatible when a transverse electric-field term is added (see Eq. (22)). The second approach is in principle possible, but relies on the precise calibration of the Stark shift, as the beams used to generate the transverse electric-field term must be tuned accordingly (see Eq. (44)). This is a challenging task, as the shift needs to be calibrated to a precision that goes well beyond that of the effective tunneling rate. Moreover, pulse shaping makes the calibration more difficult, as the instantaneous light shift changes over the pulse duration. In practice, this is difficult as the light-shift amplitude is \(1/\eta_{x}\eta_{z}\) larger than the tunneling interaction, and would need to be calibrated to a precision exceeding the tunneling rate. Hence, no simulations are included for the light-shift scheme utilising the quadrupole coupling.
|
2305.17620 | Probing Ring Resonator Sensor Based on Vernier Effect | The Vernier effect has seen extensive application in optical structures,
serving to augment the free spectral range (FSR). A substantial FSR is vital in
a myriad of applications including multiplexers, enabling a broad, clear band
comparable to the C-band to accommodate a maximum number of channels.
Nevertheless, a large FSR often conflicts with bending loss, as it necessitates
a smaller resonator radius, thus increase the insertion loss in the bending
portion. To facilitate FSR expansion without amplifying bending loss, we
employed cascaded and parallel racetrack resonators and ring resonators of
varying radius that demonstrate the Vernier effect. In this study, we designed,
fabricated, and tested multiple types of racetrack resonators to validate the
Vernier effect and its FSR extension capabilities. Our investigations
substantiate that the Vernier effect, based on cascaded and series-coupled
micro-ring resonator (MRR) sensors, can efficiently mitigate intra-channel
cross-talk at higher data rates. This is achieved by providing larger
input-to-through suppression, thus paving the way for future applications. | Wenwen Zhang, Hao Zhang | 2023-05-28T03:39:06Z | http://arxiv.org/abs/2305.17620v1 | # Probing Ring Resonator Sensor Based on Vernier Effect
###### Abstract
The Vernier effect has seen extensive application in optical structures, serving to augment the free spectral range (FSR). A substantial FSR is vital in a myriad of applications including multiplexers, enabling a broad, clear band comparable to the C-band to accommodate a maximum number of channels. Nevertheless, a large FSR often conflicts with bending loss, as it necessitates a smaller resonator radius, thus increasing the insertion loss in the bending portion. To facilitate FSR expansion without amplifying bending loss, we employed cascaded and parallel racetrack resonators and ring resonators of varying radii that demonstrate the Vernier effect. In this study, we designed, fabricated, and tested multiple types of racetrack resonators to validate the Vernier effect and its FSR extension capabilities. Our investigations substantiate that the Vernier effect, based on cascaded and series-coupled micro-ring resonator (MRR) sensors, can efficiently mitigate intra-channel cross-talk at higher data rates. This is achieved by providing larger input-to-through suppression, thus paving the way for future applications.
Free spectral range (FSR), ring resonator, Vernier effect.
## I Introduction
With the fast development of communication technology, the amount of information to be transmitted has increased rapidly. Photonic devices play an important role in computing systems by offering wider bandwidth and more cost-efficient interconnects. Traditional interconnects are typically made of metallic materials, which introduces high energy consumption and large latency [1]. Silicon-on-insulator (SOI) interconnection devices have opened a new field of communication [2], but they still demand a larger free spectral range (FSR). To obtain an extended FSR, a flat passband response, or increased sensor sensitivity, various cascaded and parallel structures have been designed to explore excellent spectral characteristics. Such cascaded structures are not limited to Mach-Zehnder interferometers [3] or Fabry-Perot cavities and micro-ring resonators [4]. By applying the Vernier effect to optical waveguide sensors, the sensitivity of these sensors can be further improved.
The Vernier effect based on micro-ring resonators (MRRs) has been shown to effectively increase device performance in communication equipment [5]. Adopting the Vernier effect in ring/racetrack structures can extend the FSR, which benefits dense wavelength-division multiplexing (DWDM) applications, and can increase sensor sensitivity [6]. Refractive-index sensors with enhanced sensitivity are one of the outstanding applications of micro-fibers [7]. Such a fiber sensor achieves a sensitivity enhancement of 3301.76 nm/RIU and is implemented by connecting two microfiber knot resonators in series, which exhibit the Vernier effect. A refractive-index sensor based on silicon-on-insulator with the Vernier effect and enhanced sensitivity is also realized in [8]. These sensors play a significant role in environmental monitoring, medical analysis, food detection, etc. Silicon-on-insulator racetrack resonators with extended FSR are also realized by cascaded rings that exhibit the Vernier effect [9]. Adding a straight waveguide in the middle of the ring resonators makes the coupling length more controllable. The Vernier effect in this design can broaden the FSR while not increasing bending losses. Both the through-port insertion loss and the interstitial peak suppression can be improved by the addition of contra-directional couplers. In our work, several types of racetrack resonators are designed and fabricated to validate the Vernier effect and its FSR-extending performance. Design parameters (inter-stage coupling, coupling length \(\&\) coupling strength of the straight waveguide, field transmission, etc.) are varied to find out their influence on the performance.
## II Modeling
### _Cascaded Coupled Resonator_
To suppress bending loss, the radius of racetrack resonators is usually set to a large value, which results in a small FSR. To extend the FSR, multiple racetrack resonators are connected so as to exhibit the Vernier effect. Two cascaded racetrack resonators that show the Vernier effect are displayed in Fig. 1 a).
The amplitude transmission functions [10] for the through port and the drop port are expressed by Eq. 1 and Eq. 2:
\[(\frac{E_{t1}}{E_{i1}})_{through}=\frac{-t_{1}\kappa_{1}^{2}\alpha_{1}e^{j\theta_{1}}(t_{3}\alpha_{2}e^{j\theta_{2}}-t_{2})}{1-t_{3}t_{2}\alpha_{2}e^{j\theta_{2}}-t_{2}t_{1}\alpha_{1}e^{j\theta_{1}}+t_{3}t_{1}\alpha_{1}e^{j\theta_{1}}e^{j\theta_{2}}}. \tag{1}\]
\[(\frac{E_{t1}}{E_{i1}})_{drop}=\frac{\kappa_{3}\kappa_{2}\kappa_{1}\alpha_{1} \alpha_{2}e^{j\frac{\theta_{1}}{2}}e^{j\frac{\theta_{2}}{2}}}{1-t_{3}t_{2} \alpha_{2}e^{j\theta_{2}}-t_{2}t_{1}\alpha_{1}e^{j\theta_{1}}+t_{3}t_{1}\alpha _{1}e^{j\theta_{1}}e^{j\theta_{2}}}. \tag{2}\]
where \(\kappa_{1}\), \(\kappa_{2}\), \(\kappa_{3}\) and their complex conjugates represent the corresponding coupling coefficients, and t\({}_{1}\), t\({}_{2}\), t\({}_{3}\) and their complex conjugates represent the corresponding field transmission factors. \(\alpha\) (zero loss: \(\alpha\)=1) is the total field loss coefficient. To simplify the model, we let \(\kappa_{1}\)=\(\kappa_{3}\).
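As an illustration of how Eq. (2) produces the Vernier suppression of interstitial resonances, the following Python sketch evaluates the drop-port intensity for two rings of slightly different length. The round-trip phase is taken as \(\theta_{i}=2\pi n_{eff}L_{i}/\lambda\), and all numerical values (effective index, lengths, coupling coefficients, losses) are illustrative choices rather than the fabricated parameters.

```python
# Sketch: drop-port response of the cascaded double resonator, evaluating Eq. (2) directly.
# Assumptions: round-trip phase theta_i = 2*pi*n_eff*L_i/lambda, and all numbers below
# (effective index, lengths, couplings, losses) are illustrative, not the fabricated values.
import numpy as np

n_eff = 2.4
L1 = 40 * 1.55e-6 / n_eff        # ring circumferences chosen so FSR1/FSR2 = 11/10,
L2 = 44 * 1.55e-6 / n_eff        # with a common (doubly resonant) wavelength at 1550 nm
k1 = k2 = k3 = 0.3               # coupling coefficients (k1 = k3 as in the text)
t1 = t2 = t3 = np.sqrt(1 - k1 ** 2)
a1 = a2 = 0.99                   # round-trip field loss factors (alpha = 1 means lossless)

lam = np.linspace(1.45e-6, 1.65e-6, 40000)
th1 = 2 * np.pi * n_eff * L1 / lam
th2 = 2 * np.pi * n_eff * L2 / lam

num = k1 * k2 * k3 * a1 * a2 * np.exp(1j * th1 / 2) * np.exp(1j * th2 / 2)
den = (1 - t3 * t2 * a2 * np.exp(1j * th2) - t2 * t1 * a1 * np.exp(1j * th1)
       + t3 * t1 * a1 * np.exp(1j * th1) * np.exp(1j * th2))
drop = np.abs(num / den) ** 2    # interstitial single-ring resonances are suppressed

print(f"strongest drop-port resonance near {lam[np.argmax(drop)] * 1e9:.1f} nm")
```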
Serially cascading rings/racetracks with different radii makes it possible to extend the FSR to the least common multiple of the individual FSRs. This is because the resonance condition is satisfied only when both single resonators are in resonance, which leads to the suppression of some transmission peaks of the racetrack resonators. The FSR of cascaded resonators [6] with different radii is given by Eq. 3 - Eq. 4:
\[FSR=N\times FSR_{1}=M\times FSR_{2} \tag{3}\]
\[FSR=|M-N|\frac{FSR_{1}\times FSR_{2}}{FSR_{1}-FSR_{2}} \tag{4}\]
where N and M are coprime natural numbers. The group index n\({}_{g}\) is defined in terms of the effective index n\({}_{eff}\) as:
\[n_{g}=n_{eff}-\lambda\frac{\partial n_{eff}}{\partial\lambda} \tag{5}\]
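A short numerical example of Eqs. (3)-(5) is given below; the radii and group index are illustrative values (not those of the fabricated devices), and the individual FSRs are obtained from the standard relation FSR \(\approx\lambda^{2}/(n_{g}L)\).

```python
# Numerical example of the extended FSR from the Vernier effect, Eqs. (3)-(4).
# Assumptions: radii and group index are illustrative (not the fabricated values);
# individual FSRs use the standard relation FSR ~ lambda^2 / (n_g * L).
from math import pi

lam = 1.55e-6            # center wavelength (m)
n_g = 4.2                # illustrative group index
R1, R2 = 10e-6, 11e-6    # illustrative ring radii (m), so FSR1/FSR2 = 11/10

L1, L2 = 2 * pi * R1, 2 * pi * R2
FSR1 = lam ** 2 / (n_g * L1)
FSR2 = lam ** 2 / (n_g * L2)

# Eq. (4) with |M - N| = 1 for this choice of radii.
FSR_vernier = abs(FSR1 * FSR2 / (FSR1 - FSR2))

print(f"FSR1 = {FSR1 * 1e9:.2f} nm, FSR2 = {FSR2 * 1e9:.2f} nm")
print(f"extended FSR via Vernier effect = {FSR_vernier * 1e9:.2f} nm")
```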
### _Parallel Coupled Resonator_
To realize a box-like filter response, parallel-coupled racetrack resonators were designed and analyzed in Suzuki et al. (1995). The parallel type of racetrack resonator is displayed in Fig. 1 b).
This kind of configuration can be treated as a grating. By choosing \(\Lambda\) to be an odd multiple of a quarter wavelength, the reflected light interferes constructively. Letting \(\lambda_{0}\) be the center wavelength (1550 \(nm\)), \(\Lambda\) is expressed as Eq. 6:
\[\Lambda=(2m+1)\frac{\lambda_{0}}{4n_{eff}} \tag{6}\]
The parallel-coupled racetrack resonators exhibit the Vernier effect when the FSR periods of the racetrack resonators and the grating follow Eq. 7 and Eq. 8. In this case, transmission peaks are suppressed as well, and the overall FSR is extended.
\[\Lambda=N_{Racetrack}FSR_{Racetrack}=M_{Grating}FSR_{Grating} \tag{7}\]
\[\Lambda=\frac{M_{grating}n_{Weff}}{N_{Racetrack}n_{Weff}}\pi r \tag{8}\]
## III Results and Discussion
### _Simulation results_
To determine the effective size of the cascaded racetrack resonator, a strip waveguide with 550 nm width and 220 nm height is modeled to explore the field distribution. The effective index (\(\lambda=1550nm\)) calculated by Taylor expansion is in good agreement with the simulation results [11].
To further determine the parameters of the racetrack resonator, the relationship between inter-coupling length, gap, and coupling coefficient should be explored. We use FDTD simulations to figure out how the coupling coefficients depend on these parameters. To satisfy the requirement of adequate interstitial suppression, \(M\) or \(N\) should be small enough. The length of the straight waveguide parts in the cascaded ring resonators determines the coupling coefficients and the eventual FSR. Given the unavoidable fabrication error of the lithography process, additional copies with slight variations around the appointed circuit size are submitted to analyze and predict the effect of this error (Fig. 5a).
### _Experiment results_
To limit insertion loss and ensure TE-polarization with a single mode, periodic grating couplers with the fixed-length taper
Fig. 1: Coupling scheme: (a) series pattern. (b) parallel pattern.
Fig. 2: Mode profiles for the fundamental TE and TM modes. (a) TE mode: E intensity. (b) TE mode: H intensity. (c) TM mode: E intensity. (d) TM mode: H intensity.
are utilized to couple light from the fiber array to the waveguide. Different coupling types between the waveguide and the ring are investigated. Various coupling lengths have also been tested, given the machining accuracy and fabrication error. Finally, the performance of several parallel rings and racetrack resonators has been verified. For racetrack resonators, decreasing the coupling gap will increase the coupling coefficient and decrease the insertion loss. This will also lead to the appearance of parasitic bands and a decay of the interstitial resonance suppression, as in Fig. 4a. For the parallel ring resonators, we have fabricated three types of coupling: **arc waveguide**, **bend waveguide**, and **straight waveguide**, displayed in Fig. 3. Straight coupling is the most common way of coupling, while bend coupling can provide stronger coupling under the same conditions. Arc coupling provides the weakest strength. The radius of the parallel rings decides the FSR: as Fig. 4b shows, a larger radius presents a shorter FSR, and the insertion loss is smaller as well. To investigate the influence of fabrication error on the final performance, we fabricated and measured four copies. In parallel rings, the coupling type does not affect the performance too much, as shown in Fig. 5b. For two rings, a larger radius brings a smaller FSR, as shown in Fig. 5c, where the ring with \(10\mu m\) radius generates a \(10nm\) FSR, while the ring with \(7\mu m\) radius generates a \(13nm\) FSR. When more rings are loaded between the ports, the center frequency moves toward the lower band, as shown in Fig. 5 d).
## IV Conclusion
Our work focused on the Vernier effect of cascaded and series-coupled MRR sensors with different parameters. Our simulation and experimental results show that this approach is effective for reducing intra-channel crosstalk at higher data rates by offering larger input-to-through suppression for future applications.
## Acknowledgment
We acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) Silicon Electronic-Photonic Integrated Circuits (SiEPIC) Program, the NSERC CREATE in Quantum Computing program, and the Canadian Microelectronics Corporation (CMC). Devices were fabricated at Advanced Micro Foundry (AMF) A STAR foundry in Singapore.
|
2305.18513 | SlimFit: Memory-Efficient Fine-Tuning of Transformer-based Models Using
Training Dynamics | Transformer-based models, such as BERT and ViT, have achieved
state-of-the-art results across different natural language processing (NLP) and
computer vision (CV) tasks. However, these models are extremely memory
intensive during their fine-tuning process, making them difficult to deploy on
GPUs with limited memory resources. To address this issue, we introduce a new
tool called SlimFit that reduces the memory requirements of these models by
dynamically analyzing their training dynamics and freezing less-contributory
layers during fine-tuning. The layers to freeze are chosen using a runtime
inter-layer scheduling algorithm. SlimFit adopts quantization and pruning for
particular layers to balance the load of dynamic activations and to minimize
the memory footprint of static activations, where static activations refer to
those that cannot be discarded regardless of freezing. This allows SlimFit to
freeze up to 95% of layers and reduce the overall on-device GPU memory usage of
transformer-based models such as ViT and BERT by an average of 2.2x, across
different NLP and CV benchmarks/datasets such as GLUE, SQuAD 2.0, CIFAR-10,
CIFAR-100 and ImageNet with an average degradation of 0.2% in accuracy. For
such NLP and CV tasks, SlimFit can reduce up to 3.1x the total on-device memory
usage with an accuracy degradation of only up to 0.4%. As a result, while
fine-tuning of ViT on ImageNet and BERT on SQuAD 2.0 with a batch size of 128
requires 3 and 2 32GB GPUs respectively, SlimFit enables their fine-tuning on a
single 32GB GPU without any significant accuracy degradation. | Arash Ardakani, Altan Haan, Shangyin Tan, Doru Thom Popovici, Alvin Cheung, Costin Iancu, Koushik Sen | 2023-05-29T17:50:52Z | http://arxiv.org/abs/2305.18513v1 | # SlimFit: Memory-Efficient Fine-Tuning of Transformer-based Models Using Training Dynamics
###### Abstract
Transformer-based models, such as BERT and ViT, have achieved state-of-the-art results across different natural language processing (NLP) and computer vision (CV) tasks. However, these models are extremely memory intensive during their fine-tuning process, making them difficult to deploy on GPUs with limited memory resources. To address this issue, we introduce a new tool called SlimFit that reduces the memory requirements of these models by dynamically analyzing their training dynamics and freezing less-contributory layers during fine-tuning. The layers to freeze are chosen using a runtime inter-layer scheduling algorithm. SlimFit adopts quantization and pruning for particular layers to balance the load of dynamic activations and to minimize the memory footprint of static activations, where static activations refer to those that cannot be discarded regardless of freezing. This allows SlimFit to freeze up to 95% of layers and reduce the overall on-device GPU memory usage of transformer-based models such as ViT and BERT by an average of 2.2\(\times\), across different NLP and CV benchmarks/datasets such as GLUE, SQuAD 2.0, CIFAR-10, CIFAR-100 and ImageNet with an average degradation of 0.2% in accuracy. For such NLP and CV tasks, SlimFit can reduce up to 3.1\(\times\) the total on-device memory usage with an accuracy degradation of only up to 0.4%. As a result, while fine-tuning of ViT on ImageNet and BERT on SQuAD 2.0 with a batch size of 128 requires 3 and 2 32GB GPUs respectively, SlimFit enables their fine-tuning on a single 32GB GPU without any significant accuracy degradation. The code of this paper is available at [https://github.com/arashardakani/SlimFit](https://github.com/arashardakani/SlimFit).
## 1 Introduction
Over the past few years, various transformer-based models have been developed with the adoption of the attention mechanism that weighs the importance of each part of the input data differently. Pre-training of such transformer-based models on large data has led to a significant boost in accuracy when fine-tuned on various natural language processing (NLP) and computer vision (CV) downstream tasks [1; 2]. Despite their great performance in achieving state-of-the-art (SOTA) accuracy, these models are memory intensive and require a considerably large amount of on-device GPU memory during their fine-tuning phase when compared to the conventional convolutional and recurrent neural networks [3]. The memory requirement of current transformer-based models has made them difficult to fine-tune even on powerful GPUs. With the introduction of larger transformer-based models over the past few years, the on-device GPU memory has become a major bottleneck for their fine-tuning process [3; 4; 5].
The total on-device memory usage of GPUs consists primarily of activations, parameters, gradients, optimizer states, and the CUDA context. Among these factors, activations account for most of the memory usage due to batching, which makes them several orders of magnitude larger than other factors (see Fig. 1). Therefore, activation compressed training (ACT) has emerged as the primary solution for memory-efficient fine-tuning [6; 4]. This approach first compresses activations during the forward pass and then decompresses them during the backward pass. In this way, the memory footprint can be
significantly reduced by caching the compressed activations. In ACT, quantization [7, 8, 6, 4] has been a popular choice to compress activations among other compressors such as JPEG [9] or pruning [5]. The current SOTA ACT adaptively assigns quantization bits to each layer for a given architecture [4]. While the SOTA ACT successfully reduces the memory footprint of activations, its overall on-device GPU memory reduction is not significant. For instance, the total on-device GPU memory reduction of the SOTA ACT is limited to 0.1GB despite its 6.4\(\times\) reduction in the memory of activations when fine-tuning BERT on CoLA dataset with a batch size of 32. It is worth mentioning that we refer to the memory usage reported by "nvidia-smi" as the overall on-device memory in this paper (see Appendix A for more information on memory management).
Tensor rematerialization [3, 10, 11, 12], also known as gradient checkpointing, is another prominent approach to reducing activation memory by trading computations for memory. In tensor rematerialization, only specific activations are stored during the forward pass, while the rest are recomputed in the backward pass. Of course, recomputing activations requires more operations and significantly prolongs the fine-tuning process [4]. Reduced precision training, as another approach, performs the computations of both forward and backward passes in low-precision [13, 14, 15, 16]. While these works can successfully train conventional models, few-bit model fine-tuning is not trivial. For instance, 8-bit quantization of BERT for inference results in a significant precision loss [17], which makes fine-tuning on few bits a challenging task.
Low-rank adaptation (LoRA) [18] is another key approach to reducing the overall on-device GPU memory, in which transformer-based models are fine-tuned by inserting a small number of trainable parameters into each layer while keeping the pre-trained model parameters frozen. Such an approach enables fine-tuning transformer-based models with significantly fewer trainable parameters, leading to a reduction in the memory footprint of optimizer states and gradients. Such a memory reduction becomes significant for extremely large transformer models such as GPT [19] with over a hundred billion parameters.
Different from these methods, we put forward a new approach to reducing the overall on-device memory usage by analyzing training dynamics. More precisely, we dynamically analyze the gradient contributions of layers in transformer-based models and perform parameter updates for specific layers only while the rest of layers are kept frozen. Training dynamics have been used to analyze the behavior of a model during its training/fine-tuning process [20, 21, 22]. However, our work uses training dynamics to detect and discard unimportant activations during fine-tuning by freezing their associated layers, leading to a reduction of the memory footprint. Our method is orthogonal to existing approaches including rematerialization and LoRA, which could be composed for further reductions.
Freezing layers or parameters has been studied in different domains, including transformer-based models, to preserve previously learned information during fine-tuning [23]. Freezing parameters has also been used to regularize fine-tuning (e.g., over-fitting reduction) in pre-trained models [24]. Recently, freezing has been used to accelerate fine-tuning by progressively freezing model blocks [25, 26, 27]. However, since such an approach starts the fine-tuning process without freezing for at least a few training iterations, its overall on-device memory requirement remains similar to that of training without freezing. For instance, fine-tuning ViT on ImageNet with a batch size of 128 using such a freezing approach on a single 32GB GPU results in an out-of-memory error (see Appendix B for more details).
To orchestrate effective layer-freezing decisions, we introduce a runtime inter-layer scheduling (ILS) algorithm. Our method finds and freezes a set of layers in transformer-based models that are less contributory, i.e., layers with fewer updates in their parameters, to the fine-tuning process at each iteration. While the ILS algorithm successfully detects and freezes unimportant layers, its memory reduction is not proportional to the freezing rate. The reason behind this disproportionality is twofold: the imbalanced number of activations among layers and the existence of static activations. Static activations refer to those that cannot be discarded regardless of freezing (e.g., activations of non-linear functions such as GELU). We address these two issues using quantization and pruning to even out the number of activations across all layers and to reduce the memory overhead of static activations. We use quantization and pruning for a few specific layers of transformer-based models as opposed to reduced precision training methods where all the layers are quantized. As a result, the impact of quantization and pruning on accuracy is insignificant in our work. For instance, the accuracy degradation due to quantization and pruning is only 0.1% on the MRPC dataset.
Figure 1: The breakdown of memory usage of BERT when fine-tuned on different batch sizes including 32, 64, and 128.
By combining ILS with quantization and pruning, we introduce a performance tool called SlimFit for reducing the on-device GPU memory usage of transformer-based models during fine-tuning. We demonstrate the effectiveness of SlimFit in reducing the memory footprint on popular models of BERT and ViT. We show that SlimFit can freeze up to 95% of layers and reduce the overall on-device memory usage by an average of 2.2\(\times\) when fine-tuning BERT and ViT models on different benchmarks and datasets, such as GLUE, SQuAD 2.0, CIFAR-10, CIFAR-100 and ImageNet with an average accuracy degradation of 0.2%. More precisely, SlimFit reduces the overall on-device memory usage of the fine-tuning process on GLUE from 6.1GB to 4.0GB (1.5\(\times\) reduction) with a batch size of 32, on SQuAD 2.0 from 58.5GB to 19.1GB (3.1\(\times\) reduction) with a batch size of 128, on CIFAR-10 from 7.2GB to 4.3GB (1.7\(\times\) reduction) with a batch size of 32, on CIFAR-100 from 7.2GB to 4.5GB (1.6\(\times\) reduction) with a batch size of 32, and on ImageNet from 77.4GB to 26.1GB (3.0\(\times\)) with a batch size of 128 at the cost of up to 0.4% accuracy degradation. As a result, SlimFit enables performing memory-intensive fine-tuning processes on a single 32GB GPU such as fine-tuning ViT on ImageNet with a batch size of 128 while this normally requires three 32GB GPUs.
## 2 Preliminaries
Over the past few years, pre-training of attention-based models has led to significant advances on many NLP and CV tasks with the popular BERT [1] and ViT [2] models. The pre-training process provides a good initialization point such that these models can better generalize on unseen data of downstream tasks. Therefore, these models can achieve state-of-the-art results by fine-tuning through small adjustments to their parameters. Architecturally, these models consist of an initial embedding layer, followed by repeated blocks of multi-head attention (MHA) fed into a feed-forward network (FFN) module (see Appendix C for more details). The base architectures of BERT and ViT contain over a hundred layers built up in this manner.
Despite the large number of layers, not all need to be updated during fine-tuning to achieve decent performance on downstream tasks, as shown in [28]. Notably, the authors found that freezing approximately 60% of early attention layers in BERT led to negligible performance degradation. This suggests that the fine-tuned model tends to preserve generic features learned during pre-training. Motivated by this study, we seek to analyze the training dynamics of pre-trained models and to automatically detect layers that contribute less to the fine-tuning process.
## 3 Learning the Importance of Layers
Training dynamics is an active field of research that provides insight about the behavior of pre-trained models when fine-tuning on downstream tasks. The convergence proof of optimization algorithms such as stochastic gradient descent [29] shows that the distance between the parameters and the optimal solution is reduced over training iterations and accordingly, the weight distance (or the weight update amount) between consecutive iterations decreases. Therefore, it is possible that some layers can only receive minimal changes to their parameters as we approach the end of the training process. Of course, detecting and freezing such layers, when they show minimal updates, will not affect accuracy. Since transformer-based models are pre-trained, they already show small updates during fine-tuning compared to pre-training. As such, detecting and freezing layers with minimal updates (i.e., weight distance values) will not significantly affect the fine-tuning process and accordingly the final accuracy. Based on the above observations, we consider the \(\ell_{1}\)-norm of the update received by parameters of each layer through all the fine-tuning iterations as the training dynamics in this paper. It is also worth mentioning that freezing layers has no impact on training convergence as it causes a pause in the training procedure of frozen layers as shown by our theoretical analysis in Appendix D.1.
### Training Dynamics
Let us consider a pre-trained model with a set of parameters \(\mathbf{W}\), where the parameters associated with the \(i\)th layer at iteration \(t\) are denoted as \(\mathbf{W}_{i}^{t}\in\mathbb{R}^{M\times I}\). The training dynamics of the \(i\)th layer at iteration \(t\) is defined as the \(\ell_{1}\)-norm of the distance between \(\mathbf{W}_{i}^{t-1}\) and \(\mathbf{W}_{i}^{t}\), i.e.,
\[d_{i}^{t}=\frac{1}{M\times I}\left\|\frac{W_{i}^{t}-W_{i}^{t-1}}{W_{i}^{t-1}} \right\|_{\ell_{1}}, \tag{1}\]
where \(\mathbf{d}^{t}\in\mathbb{R}_{+}^{n}\) containing all \(d_{i}\)s at iteration \(t\) is referred to as distance vector, and \(n\) denotes the total number of layers. In fact, Eq. (1) calculates the normalized change in the parameters of the \(i\)th layer.
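To make Eq. (1) concrete, the per-layer distance can be computed directly from two consecutive parameter snapshots. The following is a minimal PyTorch-style sketch, not the paper's implementation; the helper name `layer_distances` and the small `eps` guard against division by zero are our own assumptions.

```
import torch

def layer_distances(model, prev_params, eps=1e-12):
    """Normalized l1 change of each parameter tensor between consecutive iterations (Eq. (1))."""
    d = {}
    for name, w in model.named_parameters():
        w_prev = prev_params[name]
        # mean of |(W^t - W^{t-1}) / W^{t-1}| over all M x I entries
        d[name] = ((w.detach() - w_prev).abs() / (w_prev.abs() + eps)).mean().item()
    return d

# usage sketch: snapshot the parameters before the optimizer step, then compare after it
# prev_params = {n: p.detach().clone() for n, p in model.named_parameters()}
```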
### Inter-Layer Scheduling Algorithm
We use the distance values as training dynamics to analyze the fine-tuning behavior of pre-trained models. For instance, consider the distance values across all the fine-tuning iterations for the CoLA [30] and MRPC [31] datasets. Fig. 2(a) shows the distance values of the query weight matrix for the first, fifth and eleventh attention layers of BERT-base fine-tuned on the CoLA dataset, whereas Fig. 2(b) depicts those of the same layers for BERT-base fine-tuned on the MRPC dataset.
We observe the following based on the experimental results of these two datasets. First, the update amount for each layer becomes smaller over the fine-tuning iterations. Second, the update amount of each layer is task-specific and is independent of its position. Third, some layers show smaller distance values than other layers across almost all the iterations. Finally, the distance value of a layer that starts high can drop below that of a layer that starts low over the course of fine-tuning.
Given the above observations, we introduce an ILS algorithm to decide the updating priority of layers using their distance values. Fig. 3 shows the overview of the ILS algorithm. At each iteration, from the first to the last, our ILS algorithm selects the layers with large distance values to be updated and those with small distance values to be frozen. More precisely, layers are first ranked based on their distance values at each training iteration and then those with small distance values are kept frozen according to the freezing rate, which is a hyper-parameter. The intuition is that layers with small distance values are less contributory to the fine-tuning process as their parameters are not being updated much. On the other hand, the layers with large distance values are learning task-specific patterns by making more significant adjustments to their parameters. Note that freezing middle layers does not interrupt the gradient propagation to the early layers of the network, as shown through an example in Appendix D.2.
The freezing rate of the ILS algorithm can be decided based on the on-device GPU memory budget. Of course, using an extremely high freezing rate may result in a performance degradation depending on the downstream task, providing a worthwhile trade-off between accuracy and on-device GPU memory. On the other hand, while performance degradation is unlikely with a very small freezing rate, the memory reduction is insignificant as well.
Figure 2: The distance values of the query weight matrix for the first, fifth and eleventh attention layers of BERT-base fine-tuned on (a) CoLA and (b) MRPC datasets for 3 epochs.
Figure 3: The overview of the ILS algorithm. ILS freezes a certain number of layers depending on the freezing rate at every single iteration throughout the fine-tuning process for the total of \(n\) training iterations.
Since there is no prior knowledge about the distance values of each layer at the beginning of the fine-tuning process, our ILS algorithm initializes the distance vector with large random values. Depending on the freezing rate, each layer (and its distance value) is updated once during the first few iterations, until all random entries of the distance vector have been replaced by actual distance values. Afterwards, layers are kept frozen according to their actual distance values. At each iteration, only the distance values of the active layers are updated, while those of the frozen layers remain unchanged. The pseudo code of our ILS algorithm performing iterative freezing is shown in Algorithm 1.
To better understand the ILS algorithm, we illustrate the iterative freezing process using an example as shown in Fig. 4. Suppose we have an 8-layer transformer-based model and accordingly an 8-element distance vector at iteration \(t\). Considering the freezing rate of 50% for this example, 4 layers with the lowest distance values are kept frozen and the rest are updated at each iteration.
## 4 Inter-Layer Load-Balancing
So far, we have introduced our ILS algorithm that prioritizes updating particular layers while keeping the rest of the layers frozen according to their distance values. For a freezing rate of 50%, as an example, we would expect a \(2\times\) reduction in the memory footprint of activations. However, this is not the case in transformer-based models due to the imbalanced number of activations across the layers. In fact, this imbalance undermines the ability of our ILS algorithm to reduce the memory footprint during fine-tuning, as shown in Fig. 5.
Since the focus of this paper is on transformer-based models such as BERT and ViT, we analyze their architecture for imbalanced layers. Table 1 summarizes the number of activations associated to the input of layers with trainable parameters in BERT or ViT. Among all trainable layers, there is only one imbalanced layer in the attention block which contains \(4\times\) more activations than other layers.
To address the load-balancing issue in the number of activations for the aforementioned layer, we use quantization. Since the imbalance factor among layers is \(4\times\), we adopt 8-bit quantization for activations of the imbalanced layer, where 4 bits are used for the integer part and 4 bits for the fractional part. In this way, the memory cost of the activations is evened out using quantization. In our quantization scheme, we cache the activations of the imbalanced layer using 8 bits during the forward pass. In the backward pass, we convert the 8-bit activations to 32-bit floating-point format. Therefore, all the forward and backward computations are still performed using single-precision floating-point format. The conversion process between 8-bit fixed-point and 32-bit floating-point formats is provided in Appendix E.
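As an illustration of this load-balancing step, the cached activations of the imbalanced layer could be converted to and from 8-bit fixed point (4 integer and 4 fractional bits) roughly as follows. This is a sketch of the idea described above, not the authors' exact conversion routine from Appendix E.

```
import torch

def to_fixed8(x, frac_bits=4):
    """Cache activations as signed 8-bit fixed point during the forward pass."""
    scale = 2 ** frac_bits                       # 4 fractional bits
    q = torch.round(x * scale).clamp(-128, 127)  # remaining 4 bits for the integer part
    return q.to(torch.int8)

def from_fixed8(q, frac_bits=4):
    """Recover float32 activations before the backward computations."""
    return q.to(torch.float32) / (2 ** frac_bits)
```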
```
Input: model, number of iterations itr, number of layers L, freezing rate F
d = rand(L)                            # initialize distances with large random values
for i = 1 to itr do
    idx = argsort(d)[: int(L * F)]     # indices of layers with the smallest distances
    for j in idx do
        model.layer[j].requires_grad = False   # freeze the selected layers
    end for
    model.train()                      # one fine-tuning iteration on the active layers
    Update d                           # refresh the distance values of the active layers
end for
```
**Algorithm 1** The pseudo code of the ILS algorithm performing iterative freezing.
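For reference, one freezing step of Algorithm 1 could be realized in PyTorch along the following lines; `layers` (the list of trainable modules) and the distance vector `d` correspond to the notation above, and the function name `ils_freeze_step` is ours.

```
import numpy as np

def ils_freeze_step(layers, d, freezing_rate):
    """Freeze the fraction of layers with the smallest distance values for this iteration."""
    k = int(len(layers) * freezing_rate)
    frozen = set(np.argsort(d)[:k].tolist())    # least-contributory layers
    for j, layer in enumerate(layers):
        for p in layer.parameters():
            p.requires_grad = j not in frozen   # frozen layers receive no gradient update
    return frozen
```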
## 5 Dynamic and Static Activations
The type of activations in transformer-based models can be divided into two categories: dynamic and static. We refer to the activations that can be discarded by freezing their layer as dynamic activations. On the other hand, static activations cannot be discarded regardless of freezing. Among different types of layers, GELU, MatMul, Softmax and LayerNorm contain static activations, as shown in Table 2. Note that MatMul and Softmax share the same activations. For the backward computations of Softmax, its output during the forward pass is saved as its activations. On the other hand, the input to MatMul is required for its backward computations as activations. Since the output of Softmax is an input to MatMul in the forward pass, they share the same activations.
Figure 4: An example of the iterative freezing process using our ILS algorithm.
GELU and MatMul/Softmax do not have any trainable parameters and accordingly cannot be frozen. Therefore, these two layers hold on to their activations throughout the fine-tuning process. The best approach to reduce their memory cost is quantization. We use 4 and 8 bits for quantization of activations in GELU and MatMul/Softmax, respectively. Since there is no 4-bit tensor support in PyTorch, we store every two 4-bit activations as a single 8-bit activation using shift operations. Note that using such bit levels results in a negligible accuracy degradation, while further quantization of those activations incurs a significant accuracy loss.
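Since PyTorch offers no native 4-bit tensor type, two 4-bit codes can be packed into one uint8 entry with shifts, roughly as sketched below (an even number of elements is assumed for brevity; this is an illustration of the idea, not the exact implementation).

```
import torch

def pack_4bit(codes):
    """Pack pairs of 4-bit codes (integers in 0..15) into single uint8 values."""
    q = codes.to(torch.uint8).flatten()
    return (q[0::2] << 4) | q[1::2]              # high nibble | low nibble

def unpack_4bit(packed):
    """Recover the interleaved 4-bit codes from each packed byte."""
    hi, lo = (packed >> 4) & 0xF, packed & 0xF
    return torch.stack((hi, lo), dim=1).flatten()
```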
As opposed to GELU and MatMul/Softmax, LayerNorm contains trainable parameters and can be frozen by the ILS algorithm. However, its activations are still static. The forward pass of LayerNorm is computed by:
\[\widetilde{\mathbf{x}}=\frac{\mathbf{x}-\mathbb{E}(\mathbf{x})}{\sqrt{\text{ Var}(\mathbf{x})+\epsilon}}, \tag{2}\]
\[\mathbf{y}=\widetilde{\mathbf{x}}*\gamma+\beta, \tag{3}\]
where \(\gamma\) and \(\beta\) are trainable parameters. The input and output to LayerNorm are denoted by \(\mathbf{x}\in\mathbb{R}^{H}\) and \(\mathbf{y}\in\mathbb{R}^{H}\), respectively. \(\mathbb{E}(\cdot)\) and \(\text{Var}(\cdot)\) compute the average and variance, respectively. The derivative of the loss with respect to \(\gamma\) (i.e., \(\widehat{\gamma}\)) is computed by
\[\widehat{\gamma}=\widetilde{\mathbf{x}}*\widehat{\mathbf{y}}, \tag{4}\]
and with respect to \(\beta\) (i.e., \(\widehat{\beta}\)) by:
\[\widehat{\beta}=\widehat{\mathbf{y}}, \tag{5}\]
where \(\widehat{\mathbf{y}}\) denotes the derivative of the loss w.r.t. \(y\). We also need to compute the derivative of the loss with respect to \(\mathbf{x}\) (i.e., \(\widehat{\mathbf{x}}\)) as:
\[\mathbf{g}=\frac{\gamma*\widehat{\mathbf{y}}}{H*\sqrt{\text{Var}(\mathbf{x})+ \epsilon}}, \tag{6}\]
\[\widehat{\mathbf{x}}=H*\mathbf{g}-\sum_{H}\mathbf{g}-\widetilde{\mathbf{x}}* \sum_{H}(\mathbf{g}*\widetilde{\mathbf{x}}). \tag{7}\]
When LayerNorm is frozen, there is no need to compute Eq. (4). However, the activations of this layer cannot be discarded since they are still a part of the computations in Eq. (7). More precisely, the standardized version of \(\mathbf{x}\) (i.e., \(\widetilde{\mathbf{x}}\)) is required even when this layer is frozen.
The contribution of the last term in Eq. (7) (i.e., \(\sum_{H}(\mathbf{g}*\widetilde{\mathbf{x}})\)) is significant for large values of \(\widetilde{\mathbf{x}}\) only. Therefore, the small values of \(\widetilde{\mathbf{x}}\) can be discarded. Ideally, we want all the activations of this layer to be discarded when this layer is frozen. However, this will result in an accuracy degradation.
\begin{table}
\begin{tabular}{c|l|c} \hline \hline
Module & Type of Layer & \# Activations \\ \hline
MHA & Dense & \(B*T*H\) \\
MHA & Dense & \(B*T*H\) \\
FFN & LayerNorm & \(B*T*H\) \\
FFN & Dense & \(B*T*H\) \\
FFN & Dense & \(B*T*A*H\) \\
FFN & LayerNorm & \(B*T*H\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: The number of activations associated to the input of layers with trainable parameters in BERT where \(B\), \(T\), \(H\) denote the batch size, sequence length, hidden size, respectively. ViT has the same structure with different descriptions.
\begin{table}
\begin{tabular}{l|c|c||l|c|c} \hline \hline
\multicolumn{3}{c||}{MHA} & \multicolumn{3}{c}{FFN} \\ \hline
Type of Layer & \# Activations & Type of Activations & Type of Layer & \# Activations & Type of Activations \\ \hline
Dense & \(B*T*H\) & Dynamic & **LayerNorm** & \(B*T*H\) & **Static** \\
**MatMul** & \(B*T*H\) (\(2\times\)) & **Static** & Dense & \(B*T*H\) & Dynamic \\
**Softmax** & \(B*T*T\) & **Static** & **GELU** & \(B*T*A*H\) & **Static** \\
**MatMul** & \(B*T*H\), \(B*T*T\) & **Static** & Dense & \(B*T*A*H\) & Dynamic \\
Dense & \(B*T*H\) & Dynamic & **LayerNorm** & \(B*T*H\) & **Static** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The type of activations of layers in MHA and FFN of BERT and ViT.
Figure 5: An example of a model with imbalanced number of activations and its impact on the memory reduction.
As such, we prune away the small values in \(\widetilde{\mathbf{x}}\) and keep the top 10% largest values. In this way, the memory load of activations is significantly reduced. Of course, when this layer is not frozen, the backpropagation is performed without any approximation. Such a trick converts LayerNorm from a static layer to a semi-static one. It is worth mentioning that the indices of the pruned activations are also stored along with the activations. The details of the pruning procedure are provided in Appendix F.
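The sketch below illustrates how the frozen-LayerNorm backward of Eqs. (6)-(7) could be combined with the top-10% magnitude pruning of \(\widetilde{\mathbf{x}}\) described above. The tensor shapes, the keep ratio, and the assumption that the cheap per-token scalar \(1/\sqrt{\text{Var}(\mathbf{x})+\epsilon}\) is cached alongside the pruned values are our own simplifications, not the authors' exact implementation.

```
import torch

def prune_x_tilde(x_tilde, keep_ratio=0.1):
    """Keep only the top 10% largest-magnitude entries of x_tilde (values and indices)."""
    flat = x_tilde.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    return flat[idx], idx, x_tilde.shape

def frozen_layernorm_input_grad(y_grad, pruned, gamma, inv_std):
    """Eqs. (6)-(7) with a sparsified x_tilde; gamma/beta gradients (Eqs. (4)-(5)) are skipped."""
    vals, idx, shape = pruned
    x_tilde = torch.zeros(shape).flatten().index_copy(0, idx, vals).view(shape)
    H = shape[-1]
    g = gamma * y_grad * inv_std / H                          # Eq. (6), inv_std = 1/sqrt(Var+eps)
    return (H * g - g.sum(-1, keepdim=True)
            - x_tilde * (g * x_tilde).sum(-1, keepdim=True))  # Eq. (7)
```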
## 6 SlimFit
SlimFit is a performance tool that exploits our ILS algorithm along with quantization and pruning to reduce the memory footprint of activations through an iterative freezing process. The total on-device GPU memory reduction of SlimFit is a result of the memory reduction in both dynamic and static activations. Static activations contribute a fixed amount of memory whereas the memory usage of dynamic activations depends on the freezing rate. Given a high freezing rate, the memory footprint of activations and accordingly the total on-device GPU memory usage can be significantly reduced. The choice of freezing rate depends on the memory budget of the user. By increasing the freezing rate up to a certain point, there will be no performance degradation. However, using an extremely high freezing rate trades off memory for accuracy. Finding the breaking point of the method is task dependent and varies from one dataset to another.
## 7 Experimental Results
We use the base version of BERT and ViT for our experiments. We fine-tune these two models using SlimFit which is implemented on PyTorch. We evaluate BERT [1] using the GLUE benchmark [31] and SQuAD 2.0 [32]. For ViT [2], we use CIFAR-10, CIFAR-100 and ImageNet datasets [33; 34] for evaluation purposes. We discuss the memory usage of activations and the overall on-device GPU memory on the 32GB NVIDIA V100 GPU. We report the total on-device GPU memory usage using "nvidia-smi". For all the experiments in this section, we use 3 epochs for fine-tuning. The details about the CV/NLP tasks, measurements and hyper-parameter settings are provided in Appendix G.
### Accuracy Evaluation on GLUE and SQuAD 2.0
To evaluate the language understanding ability of BERT models, the GLUE benchmark is formed by a series of downstream tasks including sentiment classification (SST-2), natural language inference (RTE, QNLI, and MNLI), paraphrase detection (MRPC, QQP, and STS-B), and linguistic acceptability (CoLA). We use Spearman correlation for STS-B, Matthews correlation for CoLA, percentage accuracy for RTE, MRPC, SST-2, QQP, QNLI and MNLI\({}_{m}\), and F1 score for SQuAD 2.0. In this work, we fine-tune the BERT-base model using SlimFit on the downstream tasks of the GLUE benchmark as well as the question answering task on SQuAD 2.0. Table 3 shows the accuracy on the validation set of the aforementioned tasks and the memory usage of SlimFit compared to the baseline. The results of the baseline were obtained without freezing. We report the results associated with the highest freezing rate that can achieve a similar accuracy to that of the baseline by varying the learning rate. The experimental results on the GLUE benchmark show that up to 95% of dynamic activations can be discarded with up to 0.4% accuracy degradation, leading to an average of 1.9GB reduction in the total on-device GPU memory usage. On the other hand, while fine-tuning SQuAD 2.0 without freezing requires a minimum of two 32GB NVIDIA V100 GPUs at a batch size of 128, SlimFit enables its fine-tuning on a single 32GB NVIDIA V100 GPU, reducing the total on-device memory requirement of such a task from 58.5GB down to 19.1GB (3.1\(\times\) reduction).
Figure 6 shows the total on-device GPU memory usage of BERT when fine-tuned using SlimFit for different batch sizes at the freezing rate of 95% on the GLUE benchmark and 80% on SQuAD 2.0. According to the experimental results, SlimFit enables a reduction ranging from 1.5\(\times\) to 3.1\(\times\) in the total on-device GPU memory on NLP tasks. The reduction in the total on-device memory usage is more significant for larger batch sizes since the activations dominate the memory footprint.
\begin{table}
\begin{tabular}{c|l|c|c|c|c|c|c|c|c|c} \hline \hline
Method & Metric & MNLI\({}_{m}\) & QQP & QNLI & SST-2 & CoLA & STS-B & MRPC & RTE & SQuAD 2.0 \\ \hline
\multirow{3}{*}{BERT (Baseline)} & Accuracy & 83.4 & 90.8 & 90.5 & 92.1 & 58.9 & 89.5 & 86.4 & 70.2 & 74.0 \\
 & Memory of Activations (GB) & 3.2 & 3.2 & 3.2 & 3.2 & 3.2 & 3.2 & 3.2 & 3.2 & 55.1 \\
 & Total On-device GPU Memory (GB) & 6.1 & 6.1 & 6.1 & 6.1 & 6.1 & 6.1 & 6.1 & 6.1 & 58.5 (2 GPUs) \\ \hline
\multirow{4}{*}{SlimFit} & Accuracy & 83.3 & 90.4 & 90.4 & 92.3 & 59.6 & 89.4 & 86.3 & 70.4 & 74.0 \\
 & Freezing Rate (\%) & 80 & 80 & 95 & 95 & 90 & 85 & 91 & 90 & 80 \\
 & Memory of Activations (GB) & 0.7 & 0.7 & 0.5 & 0.5 & 0.6 & 0.7 & 0.6 & 0.6 & 10 \\
 & Total On-device GPU Memory (GB) & 4.4 & 4.4 & 4.0 & 4.0 & 4.3 & 4.3 & 4.3 & 4.3 & 19.1 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: The accuracy and memory performance of SlimFit on the GLUE benchmark and SQuAD 2.0. Batch sizes of 32 and 128 were used for the GLUE benchmark and SQuAD 2.0, respectively.
### Accuracy Evaluation on CIFAR and ImageNet
To assess the effectiveness of our method on CV tasks, we fine-tune the ViT-base model on CIFAR-10, CIFAR-100 and ImageNet datasets. We use the test set of CIFAR-10/CIFAR-100 and the validation set of ImageNet to evaluate their accuracy on ViT. Table 4 shows that SlimFit can fine-tune the ViT-base model with the freezing rate of up to 95% with up to 0.3% loss in accuracy while significantly reducing the overall on-device GPU memory usage. More specifically, SlimFit reduces the overall memory usage of the fine-tuning process on CIFAR-10 from 7.2GB to 4.3GB (1.7\(\times\) reduction) with a batch size of 32, on CIFAR-100 from 7.2GB to 4.5GB (1.6\(\times\) reduction) with a batch size of 32, and on ImageNet from 77.4GB to 26.1GB (3\(\times\) reduction) with a batch size of 128. Fig. 6 also shows the total on-device GPU memory usage of SlimFit across different batch sizes on CV tasks.
## 8 Ablation Studies
In this section, we study different aspects of SlimFit in fine-tuning of transformer-based models through a series of ablation studies. Due to limited space, we discuss the impact of quantization/pruning and total wall-clock time in Appendix H and Appendix I, respectively. For all the experiments in this section, we use a batch size of 32 and 3 epochs for fine-tuning.
### Accuracy vs Freezing Rate
In Section 3.2, we discussed that our ILS algorithm orchestrates the freezing schedule based on a simple rule: layers with the largest distance values are updated, whereas those with the lowest distance values are kept frozen for the given freezing rate. Of course, such an iterative freezing approach trades off accuracy against the freezing rate. To better show this trade-off, we measured the accuracy on the CoLA and MRPC datasets across different freezing rates, as illustrated in Fig. 7. The trade-off curve shows that our ILS algorithm can maintain accuracy at the same level as the baseline while freezing up to 95% of layers.
Besides our ILS algorithm, the freezing schedule can be decided using random or progressive freezing approaches. In the random scheduling method, frozen layers are randomly selected at each iteration. In the progressive approach, on the other hand, early layers are progressively kept frozen whereas later layers are updated throughout the fine-tuning process. Among these approaches, our ILS algorithm significantly stands out in terms of both accuracy and freezing rate, as shown in Fig. 7.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline
 & & \multicolumn{3}{c||}{Baseline} & \multicolumn{3}{c}{SlimFit} \\ \hline
Model & Metric & CIFAR-10 & CIFAR-100 & ImageNet & CIFAR-10 & CIFAR-100 & ImageNet \\ \hline
\multirow{4}{*}{ViT} & Accuracy (\%) & 98.8 & 91.2 & 83.3 & 98.5 & 91.0 & 83.3 \\
 & Freezing Rate (\%) & NA & NA & NA & 90 & 75 & 95 \\
 & Memory of Activations (GB) & 4.5 & 4.5 & 69.5 & 0.8 & 1.0 & 11.9 \\
 & Total Memory (GB) & 7.2 & 7.2 & 77.4 (3 GPUs) & 4.3 & 4.5 & 26.1 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: The top-1 accuracy and memory performance of SlimFit on CV benchmarks using a batch size of 32 for CIFAR datasets and 128 for ImageNet dataset.
Figure 6: The total on-device GPU memory usage of SlimFit compared to the baseline across different batch sizes including 32, 64 and 128 on NLP and CV datasets.
Figure 7: The trade-off curve between accuracy and freezing rate for three different iterative freezing approaches (i.e., ILS, random and progressive methods) on (a) CoLA and (b) MRPC datasets.
The reason behind its superior performance is that ILS allows more updates for layers with large distance values by keeping layers with minimal distance values frozen for a specific number of iterations. On the other hand, in the random approach, the layers are randomly selected to be updated. Therefore, layers with large distance values receive fewer updates in the random approach compared to ILS. Of course, the chance of layers with large distance values being randomly selected as active layers decreases as the freezing rate increases, which explains the accuracy gap between ILS and the random approach at freezing rates higher than 70%. In the progressive freezing approach, the early layers receive no update during the fine-tuning process, resulting in a significant accuracy degradation for large freezing rates.
### Frequency of Update Occurrence
To visualize the frequency of update occurrence for each layer, we use a heatmap as shown in Fig. 8 for both CoLA and MRPC datasets where larger counts are associated with darker colorings. As shown in the heatmap, the dense layers inside the MHA module receive more updates than other layers for both datasets. Moreover, the update patterns of these datasets are similar for small freezing rates whereas they become more task-specific for high freezing rates. In fact, the ILS algorithm prioritizes the update of some specific layers over others for high freezing rates.
## 9 Comparison With State-of-the-Art Techniques and Limitations
Next, we compare SlimFit with state-of-the-art compression methods targeting memory reduction, i.e., GACT [4] and DropIT [5]. Table 5 summarizes the comparison results in terms of accuracy, memory and latency. For fair comparison, we measure their performance under the same framework and hyper-parameters (i.e., the batch size and the number of training epochs) during fine-tuning of BERT on CoLA. The experimental results of GACT and DropIT were obtained using their official PyTorch libraries. According to the experimental results, GACT shows the lowest memory amount for activations. However, in terms of on-device GPU memory usage, SlimFit outperforms GACT. In terms of accuracy, all models show a comparable accuracy on CoLA w.r.t. the baseline. Finally, in terms of speed, SlimFit shows the fastest fine-tuning speed among existing works while it still falls short w.r.t. the baseline (see Appendix I for more details on SlimFit's computing speed). Despite the better accuracy of SlimFit on CoLA, it shows up to 0.4% degradation in accuracy across different CV/NLP tasks which is another limitation of SlimFit besides its fine-tuning speed w.r.t. the baseline.
## 10 Conclusion
In this paper, we presented a performance tool called SlimFit that reduces the memory usage of activations and accordingly the overall on-device GPU memory usage of transformer-based models through an iterative freezing of layers during fine-tuning. SlimFit adopts an inter-layer scheduling method to orchestrate the freezing schedule at each iteration. To balance the number of activations across all layers and to reduce the memory usage of static activations, SlimFit uses quantization and pruning for a few specific layers. We evaluated the performance of SlimFit across different NLP and CV tasks. We showed that SlimFit significantly reduces the on-device GPU memory usage of the fine-tuning process by up to 3.1\(\times\) when using a batch size of 128.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline
Model & Metric & Baseline & 4-bit GACT (ICML'22 [4]) & DropIT (ICLR'23 [5]) & SlimFit \\ \hline
\multirow{5}{*}{BERT} & Accuracy (Matthews Correlation) & 38.9 & 59.0 & 57.5 & **59.0** \\
 & Freezing Rate (\%) & NA & NA & NA & 90\% \\
 & Memory of Activations (GB) & 3.2 & **0.5** & 2.4 & 0.6 \\
 & Total Memory (GB) & 6.1 & 6.0 & 5.7 & **4.3** \\
 & Latency (Seconds) & **251** & 455 & 367 & 281 \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Comparison with state-of-the-art methods when fine-tuning BERT on the CoLA dataset.
Figure 8: The frequency of update occurrence for each layer as a heatmap on (a) CoLA and (b) MRPC datasets. The description of the layers corresponding to the indices is provided in Appendix J. |
2310.07685 | Moderate Deviations for the Capacity of the Random Walk range in
dimension four | In this paper, we find a natural four dimensional analog of the moderate
deviation results for the capacity of the random walk, which corresponds to
Bass, Chen and Rosen \cite{BCR} concerning the volume of the random walk range
for $d=2$. We find that the deviation statistics of the capacity of the random
walk can be related to the following constant of generalized
Gagliardo-Nirenberg inequalities, \begin{equation*} \label{eq:maxineq} \inf_{f:
\|\nabla f\|_{L^2}<\infty} \frac{\|f\|^{1/2}_{L^2} \|\nabla f\|^{1/2}_{L^2}}{
[\int_{(\mathbb{R}^4)^2} f^2(x) G(x-y) f^2(y) \text{d}x \text{d}y]^{1/4}}.
\end{equation*} | Arka Adhikari, Izumi Okada | 2023-10-11T17:35:55Z | http://arxiv.org/abs/2310.07685v3 | # Moderate deviations for the capacity of the random walk range in dimension four
###### Abstract.
In this paper, we find a natural four dimensional analog of the moderate deviation results for the capacity of the random walk, which corresponds to Bass, Chen and Rosen [6] concerning the volume of the random walk range for \(d=2\). We find that the deviation statistics of the capacity of the random walk can be related to the following constant of generalized Gagliardo-Nirenberg inequalities,
\[\inf_{f:\|\nabla f\|_{L^{2}}<\infty}\frac{\|f\|_{L^{2}}^{1/2}\|\nabla f\|_{L^{ 2}}^{1/2}}{[\int_{(\mathbb{R}^{4})^{2}}f^{2}(x)G(x-y)f^{2}(y)dxdy]^{1/4}}.\]
Key words and phrases:moderate deviation, random walk, Brownian motion, capacity 2010 Mathematics Subject Classification: 60F15,60G50 Research supported by NSF grant DMS 2102842 Research supported in part by JSPS KAKENHI Grant-in-Aid for Early-Career Scientists (No. JP20K14329) (I.O.).
## 1. Introduction
In this paper, we study the moderate deviation results for the capacity of the random walk for \(d=4\). Given an arbitrary set \(A\) in \(\mathbb{Z}^{d}\), the capacity of \(A\) is defined as follows: let \(\tau_{A}\) denote the first positive hitting time of a finite set \(A\) by a simple random walk \((\mathcal{S}_{m})_{m\geq 0}\) on \(\mathbb{Z}^{d}\) and recall that the corresponding (Newtonian) capacity is given for \(d\geq 3\), by
\[\mathrm{Cap}(A):=\sum_{x\in A}P^{x}(\tau_{A}=\infty)=\lim_{\|z\|\to\infty}\frac {P^{z}(\tau_{A}<\infty)}{G_{D}(z)}.\]
Here, \(G_{D}\) is the Green's function for the random walk on the lattice. \(\|\cdot\|\) denotes the Euclidean distance.
There has been much significant interest in studying the capacity of the range of random walk in \(d\)-dimensions. As revealed in many other works, understanding the capacity of the range of the random walk relates to questions regarding the volume of a random walk or the intersection of two random walks. This, in turn, has a multitude of applications in various fields. For instance, random walk intersection estimates appear in the study of quantum field theories [20], conformal field theories [14], and in the study of the self-avoiding walk [9]. For a more detailed discussion, one can see the references in [3].
In this direction, there are many works in the mathematical literature studying the capacity. Let \(\mathcal{S}[1,n]:=\{\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\}\). Jain and Orey [16] proved a strong law of large numbers, that is, almost surely,
\[\lim_{n\to\infty}\frac{\mathrm{Cap}(\mathcal{S}[1,n])}{n}=\alpha_{d},\quad \text{for }d\geq 3\]
for some constant \(\alpha_{d}\) depending on the dimension. If one defines Brownian capacity as,
\[\mathrm{Cap}_{B}(D):=\bigg{(}\inf\left\{\iint G(x-y)\mu(\mathrm{d}x)\mu( \mathrm{d}y):\mu(D)=1\right\}\bigg{)}^{-1},\]
and \(G\) is the Green's function for the Brownian motion, then, when \(d=3\), Chang [10] has shown that
\[\frac{\mathrm{Cap}(\mathcal{S}[1,n])}{\sqrt{n}}\stackrel{{ \mathcal{D}}}{{\Longrightarrow}}\frac{1}{3\sqrt{3}}\mathrm{Cap}_{B}(B[0,1]).\]
Here, \(B[0,1]\) is the image of the Brownian motion from time \(0\) to \(1\).
In addition, the paper [2] provides lower and upper bounds for the large deviation of the capacity of the range of a random walk in various dimensions, though without obtaining the optimal constant. The works [3, 4] also established a law of large numbers and a central limit theorem for the capacity of the range of a random walk in \(\mathbb{Z}^{4}\). As a consequence of these results, one conjectures a curious link between the behavior of the capacity in \(d\) dimensions and the self-intersection of random walks in \(d-2\) dimensions. However, as of yet, no deeper mechanism found to explain these parallels.
More recently, Dembo and the second author [13] found such a parallel when they wanted to understand the more detailed question of a law of iterated logarithms for the capacity. In four dimensions, the main result of [13] was the following. Then,
the following estimates were shown, almost surely,
\[\limsup_{n\to\infty}\frac{\mathrm{Cap}(\mathcal{S}[1,n])-\mathbb{E}[ \mathrm{Cap}(\mathcal{S}[1,n])]}{\frac{\pi^{2}}{8}\frac{n\log(\log(\log n))}{( \log n)^{2}}}=1,\] \[\liminf_{n\to\infty}\frac{\mathrm{Cap}(\mathcal{S}[1,n])-\mathbb{ E}[\mathrm{Cap}(\mathcal{S}[1,n])]}{c_{*}\frac{n\log(\log n)}{(\log n)^{2}}}=-1,\]
for some constant \(c_{*}>0\). Via subadditivity arguments, the upper tail of the law of iterated logarithms can reduce to the computation of an explicit limit.
By contrast, the constant associated with the lower tail of the large deviation is a far more delicate question. In [13], it was only shown that the \(\liminf\) exists; the value of the constant depends on quite precise large deviation statistics of the capacity. However, rather than being merely a technical question, the exact value of the constant can reveal deep connections to other fields.
Indeed, much like how Chen et al [11, 6] showed that the precise value of the large deviation constant for the intersection of random walks was related to the Gagliardo-Nirenberg inequality, we demonstrate here that the constant for the large deviation of the lower tail of the capacity of the random walk range is related to the generalized Gagliardo-Nirenberg inequality. This generalized Gagliardo-Nirenberg inequality was key in the study of the polaron and many other physical processes of interest [15, 19]. If we look at [15, Theorem 2.3], this inequality is derived from the Hardy-Littlewood-Sobolev inequality and is used to study the Hartree equation. Hence, we find a new relationship between the capacity of the random walk and the field of analysis. Furthermore, the value of the large deviation constant for the capacity of the random walk range should give great information on the corresponding large deviation statistics of the capacity of the Wiener sausage.
### Main results
In our main result, we find that the moderate deviation of \(\mathrm{Cap}(\mathcal{S}[1,n])\) for \(d=4\) is related to best constant of the generalized Gagliardo-Nirenberg inequality (see [15, (6)]). Namely, it is the smallest constant \(\tilde{\kappa}(4,2)\) such that the following inequality should hold among \(g\) with \(\|\nabla g\|_{L^{2}}<\infty\):
\[\left[\int_{(\mathbb{R}^{4})^{2}}g^{2}(x)G(x-y)g^{2}(y)\mathrm{d}x\mathrm{d}y \right]^{1/4}\leq\tilde{\kappa}(4,2)\|g\|_{L^{2}}^{1/2}\|\nabla g\|_{L^{2}}^{1 /2},\]
where \(G(x-y)=2^{-1}\pi^{-2}\|x-y\|^{-2}\) for \(d=4\).
**Theorem 1.1**.: _Assume \(b_{n}\to\infty\) and \(b_{n}=O(\log\log n)\). For \(d=4\) and \(\lambda>0\),_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\mathrm{Cap}(\mathcal{S}[ 1,n])-\mathbb{E}[\mathrm{Cap}(\mathcal{S}[1,n])]\leq-\frac{\lambda n}{(\log n )^{2}}b_{n}\right)=-I_{4}(\lambda),\]
_where_
\[I_{4}(\lambda)=\frac{2}{\pi^{4}}\tilde{\kappa}(4,2)^{-4}\lambda.\]
**Corollary 1.2**.: _For \(d=4\), almost surely,_
\[\liminf_{n\to\infty}\frac{(\log n)^{2}}{n\log\log n}\bigg{(}\mathrm{Cap}( \mathcal{S}[1,n])-\mathbb{E}[\mathrm{Cap}(\mathcal{S}[1,n])]\bigg{)}=-\frac{ \pi^{4}}{2}\tilde{\kappa}(4,2)^{4}.\]
### Strategy
As mentioned before, to find the exact value of the constant associated with the lower tail of the law of the iterated logarithms, one would need to first prove a form of the large deviation principle. To do this, one would need to have control over exponential moments of the quantity in question. Now, one can find some control over such moments in the works of [13]. However, if one exactly wants the constant, then these estimates have to be optimal. Even with the rather technical bounds of [13], there were still multiple times when one could not precisely track the exponential factor associated with the high moments. While this is perfectly fine for proving that some law of iterated logarithm holds, it is impossible to deduce anything about the value of the lower tail of the law of the iterated logarithm.
Inspired by the connection between the capacity and the self-intersection, one might try to see if there are any parallels one can draw from the proof of the large deviation principle for the self-intersection in 2-dimensions. Indeed, Bass, Chen, Kumagai and Rosen [6, 8] were able to establish an exact form for the constant associated with the large deviation principle for the self-intersection of random walks.
As observed in [6, 8], a vital tool in both these analyses is a splitting formula. The self-intersection of a random walk can be written as the sum of two self-intersections of the first and second half of the walks and the mutual intersection of the first and second half. The large deviation behavior when \(d=2\) is largely determined by this mutual intersection. For the capacity, one can perform a similar splitting with the quantity \(\chi\) like in the work [3].
For two arbitrary sets \(A\) and \(B\), \(\chi\) is defined as,
\[\begin{split}\chi(A,B):=&\sum_{y\in A}\sum_{z\in B }\mathbb{P}(R^{\prime}_{y}\cap(A\cup B)=\emptyset)G_{D}(y-z)\mathbb{P}(R^{ \prime}_{z}\cap B=\emptyset)\\ &+\sum_{y\in A}\sum_{z\in B}\mathbb{P}(R^{\prime}_{y}\cap A= \emptyset)G_{D}(y-z)\mathbb{P}(R^{\prime}_{z}\cap(A\cup B)=\emptyset),\end{split} \tag{1.1}\]
where \(R^{\prime}_{y}\) is the range of an infinite random walk after time 1 starting at the point \(y\) at time 0. To show the result, we will substitute two independent simple random walk ranges until time \(n\), \(\mathcal{S}^{1}\) and \(\mathcal{S}^{2}\) (which are also independent of \(R^{\prime}_{y}\)), into \(A\) and \(B\). The large deviation behavior should also be determined by this 'mutual capacity', \(\chi\). However, after this step, if one tries to imitate the strategy of Bass, Chen, and Rosen [6] to analyze \(\chi\), fundamental difficulties arise at the very beginning that prevent one from proceeding forward.
First of all, observe that each line of \(\chi\), due to the probability term \(\mathbb{P}(R^{\prime}_{y}\cap(A\cup B)=\emptyset)\), is asymmetric in \(A\) and \(B\). Furthermore, the same probability term couples the first and second parts of the random walk. In general, many formulas that one would like to apply to compute moments, such as the Feynman-Kac formula for lower bounds on the asymptotic moments, would first require one to separate the two halves of the random walk from each other. Usually, such a separation can be justified by applying the Cauchy-Schwartz inequality, and, as in the works of [12] for the cross term occurring when studying the moderate deviations of the range of a random walk, one will not incur too much loss by performing this procedure. This is no longer the case when one deals with an asymmetric cross-term like \(\chi\). Indeed, the key first step in trying to determine the exact constant for the moderate deviations would be to try to identify a symmetric main term contribution for \(\chi\).
The first guess that one might have would be to show that the terms \(\mathbb{P}(R^{\prime}_{y}\cap(A\cup B)=\emptyset)\) could be replaced by the expected value \((1+o(1))\frac{\pi^{2}}{8\log n}\). This replacement was performed in the papers [4, 13] in order to establish a CLT and a LIL, respectively. However, the moment estimates required to prove such results are insufficiently strong to demonstrate a large deviation principle or determine an exact constant. Indeed, the paper [2] remarked that it is possible that in the large deviation regime, it would be more effective for the random walk to reorganize itself into configurations such that \(\mathbb{P}(R^{\prime}_{y}\cap(S^{1}\cup S^{2})=\emptyset,0\not\in S^{1})\) is far away from its expected value of \((1+o(1))\frac{\pi^{2}}{8\log n}\).
Indeed, since we cannot replace these probability terms with their expectation, we have to determine the main and error terms via manipulations that preserve the structure of these probability terms. Indeed, our main term can be guessed to be of the form,
\[\sum_{y\in A}\sum_{z\in B}\mathbb{P}(R^{\prime}_{y}\cap A=\emptyset)G_{D}(y-z )\mathbb{P}(R^{\prime}_{z}\cap B=\emptyset).\]
By decomposing \(G_{D}=\tilde{G}_{D}*\tilde{G}_{D}\), the convolutional square root of \(G_{D}\), we see that we indeed have a decomposition that could split the two sets \(A\) and \(B\) from each other. Namely, the quantity above can be written as,
\[\sum_{a\in\mathbb{Z}^{4}}\sum_{y\in A}\mathbb{P}(R^{\prime}_{y}\cap A=\emptyset)\tilde{G}_{D}(y-a)\sum_{z\in B}\mathbb{P}(R^{\prime}_{z}\cap B=\emptyset)\tilde{G}_{D}(z-a).\]
This term will indeed be symmetric, and one has more tools for computing the exact value of the asymptotic moments. The full analysis of this term is given in section 5. This main term will lead to the corresponding error term,
\[\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}}\mathbb{P}(R^{ \prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{ \prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset,R^{\prime}_{x^{2}}\cap\mathcal{ S}^{1}\neq\emptyset).\]
The main observation is that this error term should approximately be of order \(\frac{n}{(\log n)^{3}}\). This is one \(\log n\) factor less than the expected order of the main term. One still needs to determine the value of high moments of this error term; however, one no longer needs to care about the exact values. Indeed, one only needs to derive an upper bound for the high moments of this error term. Section 3 will justify the splitting of \(\chi\) into its main and error terms, while Section 4 will analyze the error term. The analysis of this error term involved multiple steps; the first step was to represent the cumbersome \(\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset,R^{\prime}_{x^{2}} \cap\mathcal{S}^{1}\neq\emptyset)\) into another term that is fit for moment computation. Afterward, we had to carefully exploit a version of monotonicity for the non-intersection probability \(\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)\) that would allow us to justify the replacement of \(\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)\) with its expectation.
## 2. Proof of Theorem 1.1 and Corollary 1.2
In this section, we show our main results, that is, Theorem 1.1 and Corollary 1.2. In the proof, we write \(f(n)\lesssim g(n)\) if there exists a (deterministic) constant \(c>0\) such that \(f(n)\leq cg(n)\) for all \(n\), and \(f(n)\gtrsim g(n)\) if \(g(n)\lesssim f(n)\). \(\mathcal{S}[a,b]\) means the random walk range between time \(a\) and \(b\). Let \(\mathbb{P}^{x}\) (resp. \(\mathbb{E}^{x}\)) be the probability of the simple random walk (or the Brownian motion) starting at \(x\). We usually write \(\mathbb{P}\) (resp. \(\mathbb{E}\)) for \(\mathbb{P}^{0}\) (resp. \(\mathbb{E}^{0}\)).
### Reduction to the study of mutual capacity
In order to determine the exact moderate deviation asymptotic for \(\operatorname{Cap}(\mathcal{S}[1,n])-\mathbb{E}[\operatorname{Cap}(\mathcal{S}[1,n])]\), it suffices to derive a moderate deviation for the term \(\chi\). For two random walks \(\mathcal{S}^{1}\) and \(\mathcal{S}^{2}\), recall the cross-term in (1.1)
\[\chi(\mathcal{S}^{1},\mathcal{S}^{2}):= \sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}} \mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2}) \mathbb{P}(R^{\prime}_{x^{2}}\cap(\mathcal{S}^{1}\cup\mathcal{S}^{2})=\emptyset)\] \[+\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}} \mathbb{P}(R^{\prime}_{x^{1}}\cap(\mathcal{S}^{1}\cup\mathcal{S}^{2})=\emptyset )G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{1}=\emptyset).\]
Later, we assume that \(\mathcal{S}^{1},\mathcal{S}^{2}\) are independent random walks of duration \(n\) and \(\mathcal{S}\) is also a random walk of duration \(n\), that is, \(\mathcal{S}[1,n]\).
**Theorem 2.1**.: _Consider \(\chi=\chi(\mathcal{S}^{1},\mathcal{S}^{2})\) and let \(b_{n}=O(\log\log n)\) with \(\lim_{n\to\infty}b_{n}=\infty\). Then, for any \(\lambda>0\),_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\chi\geq\lambda\frac{nb_{n }}{(\log n)^{2}}\right)=-I_{4}(\lambda). \tag{2.1}\]
We will prove this in Section 3. For now, we give the proof of Theorem 1.1 assuming the above result.
Proof of Theorem 1.1.: _Splitting the Walk_
For simplicity in the presentation of the argument, we will perform computations when \(n\) is a multiple of a large power of \(2\). For a complete formalization of the argument, one can consider a continuous time random walk rather than a discrete time random walk as in [6, Chapter 6] to derive large deviation estimates, but the essential differences in the proofs are minimal.
First, fix a large integer \(L\); we first subdivide our random walk \(\mathcal{S}\) into \(2^{L}\) parts over various iterations. Set \(m_{l}=n/2^{l}\) and let \(\mathcal{S}^{(k),m_{l}}\) denote \(\mathcal{S}[(k-1)m_{l},km_{l}]\); namely, it is the \(k\)-th portion of the random walk once divided into \(2^{l}\) equal parts. With this notation in hand, we can define the cross-term,
\[\Lambda_{l}=\sum_{j=1}^{2^{l}-1}\chi(\mathcal{S}^{(2j-1),m_{l}},\mathcal{S}^{ (2j),m_{l}}).\]
We also have the following decomposition of \(\operatorname{Cap}(\mathcal{S})\),
\[\operatorname{Cap}(\mathcal{S})=\sum_{i=1}^{2^{L}}\operatorname{Cap}( \mathcal{S}^{(i),m_{L}})-\sum_{l=1}^{L}\Lambda_{l}+\epsilon_{L}.\]
The error \(\epsilon_{L}\) has the moment bound \(\mathbb{E}[\epsilon_{L}^{2}]=O((\log n)^{2})\) from [3, Proposition 2.3]. It is actually better to deal with a slightly modified cross-term. Consider two random walks, \(\mathcal{S}^{1},\mathcal{S}^{2}\) of the same length \(n\). Define, as in equation (5.1) which will appear in the sequel, the modified cross term:
\[TL(\mathcal{S}^{1},\mathcal{S}^{2})=\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2} \in\mathcal{S}^{2}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset )G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset).\]
The results of Theorem 3.3 show that for any \(\epsilon>0\), we have that,
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(|\chi(\mathcal{S}^{(2j-1),m_{l}},\mathcal{S}^{(2j),m_{l}})-2TL(\mathcal{S}^{(2j-1),m_{l}},\mathcal{S}^{ (2j),m_{l}})|\geq\epsilon\frac{nb_{n}}{(\log n)^{2}}\right)=-\infty.\]
Accordingly, it is natural to consider the modified term,
\[\tilde{\Lambda}_{l}:=2\sum_{j=1}^{2^{l}-1}TL(\mathcal{S}^{(2j-1),m_{l}},\mathcal{ S}^{(2j),m_{l}}).\]
Furthermore, the moment bound on \(\epsilon_{L}\) combined with Markov's inequality shows that
\[\frac{1}{b_{n}}\log\mathbb{P}\left(\epsilon_{L}\geq\epsilon\frac{n}{(\log n)^{ 2}}\right)\lesssim\frac{-\log n+\log\log n+\log\epsilon}{b_{n}}.\]
Thus,
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\epsilon_{L}\geq\epsilon \frac{n}{(\log n)^{2}}\right)=-\infty.\]
Combining these facts, we see that if we fix \(L\) and take \(n\to\infty\), we have that
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(-\mathrm{Cap} (\mathcal{S})+\mathbb{E}[\mathrm{Cap}(\mathcal{S})]\geq\lambda\frac{b_{n}n}{( \log n)^{2}}\right)\] \[=\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(-\sum_{i=1}^ {2^{L}}(\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[\mathrm{Cap}( \mathcal{S}^{(i),m_{L}})])+\sum_{l=1}^{L}(\tilde{\Lambda}_{l}-\mathbb{E}[ \tilde{\Lambda}_{l}])\geq\lambda\frac{b_{n}n}{(\log n)^{2}}\right).\]
Note that in the previous expression, we used the fact that \(\mathbb{E}[\epsilon_{L}]\) and \(\mathbb{E}[|\Lambda_{l}-\tilde{\Lambda}_{l}|]\), would not contribute to the expectations.
Our goal now is to show the following:
\[\lim_{L\to\infty}\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P} \left(\sum_{i=1}^{2^{L}}(-\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})+\mathbb{E}[ \mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])+\sum_{l=1}^{L}(\tilde{\Lambda}_{l}- \mathbb{E}[\tilde{\Lambda}_{l}])\geq\lambda\frac{b_{n}n}{(\log n)^{2}}\right)\] \[=-I_{4}(\lambda). \tag{2.2}\]
We will start with showing the upper bound of (2.2).
_Upper Bound in (2.2):_ It is manifest that \(\mathbb{E}[\tilde{\Lambda}_{l}]\) is a positive number. Thus, if we only care about obtaining upper bounds on the probability found in equation (2.2), we can drop the term \(-\mathbb{E}[\tilde{\Lambda}_{l}]\) in the computation for the upper bound. We have,
\[\mathbb{P}\left(\sum_{i=1}^{2^{L}}(-\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})+\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])+\sum_{l=1}^{L} \tilde{\Lambda}_{l}\geq\lambda\frac{b_{n}n}{(\log n)^{2}}\right)\] \[\leq \mathbb{P}\left(\sum_{i=1}^{2^{L}}(\mathbb{E}[\mathrm{Cap}( \mathcal{S}^{(i),m_{L}})]-\mathrm{Cap}(\mathcal{S}^{(i),m_{L}}))\geq\epsilon \frac{\lambda n}{(\log n)^{2}}b_{n}\right)\] \[+\sum_{l=1}^{L}\mathbb{P}\left(\tilde{\Lambda}_{l}\geq(1-\epsilon )2^{-l}\frac{\lambda n}{(\log n)^{2}}b_{n}\right). \tag{2.3}\]
By using Lemma 2.2 and [12, Theorem 1.2.2], we can derive that
\[\limsup_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\sum_{i=1}^{2^{L}}( \mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})]-\mathrm{Cap}(\mathcal{S}^{( i),m_{L}}))\geq\epsilon\frac{\lambda n}{(\log n)^{2}}b_{n}\right)\leq-2^{L}C\epsilon. \tag{2.4}\]
Now recall that \(\tilde{\Lambda}_{l}\) is a sum of i.i.d. random variables. We can apply our Theorem 3.1 along with [12, Theorem 1.2.2] to assert that
\[\limsup_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\tilde{\Lambda}_{l}\geq( 1-\epsilon)2^{-l}\frac{\lambda n}{(\log n)^{2}}b_{n}\right)\leq-I_{4}(\lambda- \epsilon). \tag{2.5}\]
If we combine equations (2.5) and (2.4) in equation (2.3), we see that,
\[\limsup_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\sum_{i=1} ^{2^{L}}(-\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})+\mathbb{E}[\mathrm{Cap}( \mathcal{S}^{(i),m_{L}})])+\sum_{l=1}^{L}\tilde{\Lambda}_{l}\geq\frac{\lambda nb _{n}}{(\log n)^{2}}\right)\] \[\leq-\min\left(2^{L}C\epsilon,I_{4}(\lambda-\epsilon)\right).\]
If we first take \(L\) to \(\infty\) and then \(\epsilon\to 0\), we derive the desired upper bound on the probability.
_Lower bound in (2.2):_
First consider the quantity \(SL_{n}\) as in equation (3.4) given by,
\[SL_{n}=\sum_{x^{1}\in\mathcal{S}}\sum_{x^{2}\in\mathcal{S}}\mathbb{P}(R^{ \prime}_{x^{1}}\cap\mathcal{S}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{ \prime}_{x^{2}}\cap\mathcal{S}=\emptyset).\]
Since
\[SL_{n}\leq \sum_{i=1}^{2^{L}}\sum_{\begin{subarray}{c}x^{1},x^{2}\in\mathcal{ S}^{(i),m_{L}}\\ \end{subarray}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{(i),m_{L}}= \emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)\] \[+ \sum_{\begin{subarray}{c}x^{1}\in\mathcal{S}^{(i),m_{L},\,x^{2} \in\mathcal{S}^{(j),m_{L}},\\ 1\leq i\neq j\leq 2^{L}\end{subarray}}}\mathbb{P}(R^{\prime}_{x^{1}}\cap \mathcal{S}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap \mathcal{S}=\emptyset)\]
and the second term in the right hand side is bounded by
\[2\sum_{l=1}^{L}\sum_{j=1}^{2^{l}-1}\sum_{\begin{subarray}{c}x^{1}\in\mathcal{ S}^{(2j-1),m_{l}},\\ x^{2}\in\mathcal{S}^{(2j),m_{l}}\end{subarray}}\mathbb{P}(R^{\prime}_{x^{1}} \cap\mathcal{S}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap \mathcal{S}=\emptyset)\leq\sum_{l=1}^{L}\tilde{\Lambda}_{l},\]
we have that,
\[\sum_{i=1}^{2^{L}}(-\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})+\mathbb{ E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])+\sum_{l=1}^{L}(\tilde{\Lambda}_{l}- \mathbb{E}[\tilde{\Lambda}_{l}])\] \[\geq SL_{n}-\mathbb{E}[SL_{n}]+\sum_{i=1}^{2^{L}}(-\mathrm{Cap}( \mathcal{S}^{(i),m_{L}})+\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])- \sum_{l=1}^{L}\mathbb{E}[\tilde{\Lambda}_{l}]\] \[-\sum_{i=1}^{2^{L}}\sum_{x^{1},x^{2}\in\mathcal{S}^{(i),m_{L}}} \mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)G_{D}(x^{1 }-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)+ \mathbb{E}[SL_{n}].\]
Since \(\sum_{l=1}^{L}\mathbb{E}[\tilde{\Lambda}_{l}]=O\left(\frac{n}{(\log n)^{2}}\right)\), this term will not contribute to the large deviation statistics at the order we are concerned with. In addition,
\[\mathbb{E}[\sum_{i=1}^{2^{L}}\sum_{x^{1},x^{2}\in\mathcal{S}^{(i),m_{L}}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)G_{ D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{(i),m_{L}}= \emptyset)]-\mathbb{E}[SL_{n}]\] \[\leq \mathbb{E}[\sum_{i=1}^{2^{L}}\sum_{x^{1},x^{2}\in\mathcal{S}^{(i),m_{L}}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)G_{ D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{(i),m_{L}}= \emptyset)]\] \[- \mathbb{E}[\sum_{i=1}^{2^{L}}\sum_{x^{1},x^{2}\in\mathcal{S}^{(i),m_{L}}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}=\emptyset)G_{D}(x^{1}-x^{ 2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}=\emptyset)]+\frac{Cn}{(\log n)^ {2}}\] \[\leq 2\mathbb{E}[\sum_{i=1}^{2^{L}}\sum_{x^{1},x^{2}\in\mathcal{S}^{( i),m_{L}}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset,R^{ \prime}_{x^{1}}\cap\mathcal{S}\neq\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{ \prime}_{x^{2}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)]\] \[+\frac{Cn}{(\log n)^{2}}\lesssim\frac{n}{(\log n)^{2}}.\]
The final inequality is very similar to the error terms we have dealt with in Section 4, so we omit its proof. We thus have that
\[\mathbb{P}\left(\sum_{i=1}^{2^{L}}(-\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})+\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])+\sum_{l=1}^{L} \tilde{\Lambda}_{l}\geq\frac{\lambda nb_{n}}{(\log n)^{2}}\right)\] \[\geq\mathbb{P}\left(SL_{n}-\mathbb{E}[SL_{n}]\geq\frac{(\lambda+ \epsilon)nb_{n}}{(\log n)^{2}}\right)-\mathbb{P}\left(\sum_{i=1}^{2^{L}}( \mathrm{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])\geq\frac{\epsilon nb_{n}}{2(\log n)^{2}}\right)\] \[-\mathbb{P}\bigg{(}\sum_{i=1}^{2^{L}}\sum_{x^{1},x^{2}\in\mathcal{ S}^{(i),m_{L}}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)G_{ D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)\] \[-\mathbb{E}[\sum_{i=1}^{2^{L}}\sum_{x^{1},x^{2}\in\mathcal{S}^{(i ),m_{L}}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{(i),m_{L}}=\emptyset)G_ {D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{(i),m_{L}}= \emptyset)]\geq\frac{\epsilon nb_{n}}{2(\log n)^{2}}\bigg{)}. \tag{2.6}\]
Now, we note that the negative quantities on the right hand side are sums of i.i.d. random variables; the summands in the last probability are also of the form \(SL_{n2^{-L}}\). By using Lemma 2.2 and the result for \(SL_{n}\) from Corollary 3.2 as well as [12, Theorem 1.2.2], we have that the probabilities in the last two lines are bounded by \(\exp[b_{n}(-2^{L}C\epsilon)]\) for some constant \(C\).
Furthermore, Corollary 3.2 also gives us that \(\lim_{n\to\infty}\frac{1}{b_{n}}\log P(SL_{n}-\mathbb{E}[SL_{n}]\geq\frac{( \lambda+\epsilon)nb_{n}}{(\log n)^{2}})=-I_{4}(\lambda+\epsilon)\). Given \(\epsilon\), if we first choose \(L\) such that \(-2^{L}C\epsilon\ll-I_{4}(\lambda+\epsilon)\), we see that,
\[\liminf_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\sum_{i=1}^{2^{L}}(-\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})+\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])+\sum_{l=1}^{L}\tilde{\Lambda}_{l}\geq\frac{\lambda nb_{n}}{(\log n)^{2}}\right)\] \[\geq-I_{4}(\lambda+\epsilon).\]
We can then take \(L\to\infty\) and then \(\epsilon\to 0\) to show equation (2.2). This completes the proof of the result.
We can quickly derive our corollary for the exact constant of the LIL for the lower tail of \(\operatorname{Cap}(\mathcal{S})-\mathbb{E}[\operatorname{Cap}(\mathcal{S})]\).
Proof of Corollary 1.2.: This will follow by carefully applying the Borel-Cantelli lemma. The large deviation estimates of Theorem 1.1 are used to derive the appropriate convergence or divergence conditions. The details are the same as those found in [12, Theorem 8.6.2].
### A priori Estimates on \(\operatorname{Cap}(\mathcal{S})\)
In this section, we prove an a priori large deviation estimate on \(\operatorname{Cap}(\mathcal{S})\). The following lemma gives a bound that is sufficient to control the second term in the second line of (2.6).
**Lemma 2.2**.: _Let \(b_{n}=\text{O}(\log\log n)\) with \(\lim_{n\to\infty}b_{n}=\infty\). There exists some constant \(C\) such that for any \(\lambda>0\),_
\[\limsup_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(|\operatorname{Cap}( \mathcal{S})-\mathbb{E}[\operatorname{Cap}(\mathcal{S})]|\geq\frac{\lambda n }{(\log n)^{2}}b_{n}\right)\leq-C\lambda. \tag{2.7}\]
Proof.: We will consider proving this when \(n\) is a power of \(2\). By changing the constant \(C\) that appears on the right hand side of (2.7), one can use our subdivision formula for \(R_{n}\) to obtain estimates for general \(n\) via a binary decomposition in terms of powers of \(2\).
Now, assume \(n\) is a power of \(2\) and let \(L=4\log(\log n)\). We can decompose \(r_{n}\) iteratively \(L\) times to see that,
\[\operatorname{Cap}(\mathcal{S})=\sum_{i=1}^{2^{L}}\operatorname{Cap}(\mathcal{ S}^{(i),m_{L}})-\sum_{l=1}^{L}\Lambda_{l}+\epsilon_{L},\]
where we use the notation from the proof of Theorem 1.1. This time, \(\epsilon_{L}\) can be shown to be of order \(O((\log n)^{10})\). (There will be at most \(1+2+4+\ldots+2^{L}=O((\log n)^{4})\) error terms of the form \(\epsilon\) in the decomposition, and each of these error terms has moment \(O((\log n)^{2})\).) By applying Chebyshev's inequality, we see that the error term \(\epsilon_{L}\) provides no change to the probability at the scale \(b_{n}\). Thus, we freely drop this error term \(\epsilon_{L}\) in what follows.
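One way to make the preceding Chebyshev-type step explicit (a first-moment Markov bound already suffices here, using that the count above gives \(\mathbb{E}[|\epsilon_{L}|]\lesssim(\log n)^{10}\)): for any fixed \(\delta>0\),
\[\frac{1}{b_{n}}\log\mathbb{P}\left(|\epsilon_{L}|\geq\delta\frac{nb_{n}}{(\log n)^{2}}\right)\leq\frac{1}{b_{n}}\log\frac{C(\log n)^{12}}{\delta\,nb_{n}}\lesssim-\frac{\log n}{b_{n}}\longrightarrow-\infty,\]
since \(b_{n}=O(\log\log n)\); this is the sense in which \(\epsilon_{L}\) provides no change to the probability at the scale \(b_{n}\).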
_Bounding Upper tails of \(\operatorname{Cap}(\mathcal{S})-\mathbb{E}[\operatorname{Cap}(\mathcal{S})]\)_
If one wants to bound the probability \(\mathbb{P}(\operatorname{Cap}(\mathcal{S})-\mathbb{E}[\operatorname{Cap}(\mathcal{S})]\geq\frac{\lambda nb_{n}}{(\log n)^{2}})\) from above, then since all the terms \(\Lambda_{l}\) are positive and \(\sum_{l=1}^{L}\mathbb{E}[\Lambda_{l}]=O(\frac{n}{(\log n)^{2}})\), it suffices to bound the probability,
\[\mathbb{P}\left(\sum_{i=1}^{2^{L}}\operatorname{Cap}(\mathcal{S}^{(i),m_{L}}) -\mathbb{E}[\operatorname{Cap}(\mathcal{S}^{(i),m_{L}})]\geq\frac{\lambda nb _{n}}{(\log n)^{2}}\right).\]
Now, the random variables \(\operatorname{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[\operatorname{Cap}(\mathcal{S}^{(i),m_{L}})]\) form a sequence of i.i.d. random variables with the property that \(\mathbb{E}\exp\left[\frac{\theta}{n}|\operatorname{Cap}(\mathcal{S})-\mathbb{E}[\operatorname{Cap}(\mathcal{S})]|\right]<\infty\). (This is due to the fact that \(\operatorname{Cap}(\mathcal{S})\leq n\).) We can apply [6, Lemma 4.4] to assert that there
is some constant \(\theta>0\) such that
\[\limsup_{n\to\infty}\mathbb{E}\left[\exp\left[\theta\frac{2^{L/2}}{n}\left|\sum_{i=1}^{2^{L}}(\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])\right|\right]\right]<\infty.\]
Since \(2^{L/2}\geq(\log n)^{2}\) by choice, this implies that,
\[\limsup_{n\to\infty}\mathbb{E}\left[\exp\left[\theta\frac{(\log n)^{2}}{n} \left|\sum_{i=1}^{2^{L}}(\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[ \mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])\right|\right]\right]<\infty.\]
By Chebyshev's inequality, this shows that there is some constant \(C\) such that,
\[\limsup_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\bigg{(}\left|\sum_{i=1}^{2^{ L}}(\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{( i),m_{L}})])\right|\geq\frac{\lambda nb_{n}}{(\log n)^{2}}\bigg{)}\leq-C\lambda. \tag{2.8}\]
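To spell out how (2.8) follows from the exponential moment bound above: by the exponential Chebyshev (Markov) inequality,
\[\mathbb{P}\bigg{(}\bigg{|}\sum_{i=1}^{2^{L}}(\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])\bigg{|}\geq\frac{\lambda nb_{n}}{(\log n)^{2}}\bigg{)}\leq e^{-\theta\lambda b_{n}}\,\mathbb{E}\left[\exp\left[\theta\frac{(\log n)^{2}}{n}\bigg{|}\sum_{i=1}^{2^{L}}(\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])\bigg{|}\right]\right],\]
and since the expectation on the right hand side is bounded in \(n\), taking \(\frac{1}{b_{n}}\log\) yields (2.8) with \(C=\theta\).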
_Upper Bounds on the lower tail of \(\mathrm{Cap}(\mathcal{S})-\mathbb{E}[\mathrm{Cap}(\mathcal{S})]\)_
Due to our control on \(\left|\sum_{i=1}^{2^{L}}(\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})-\mathbb{E}[\mathrm{Cap}(\mathcal{S}^{(i),m_{L}})])\right|\) from equation (2.8), it suffices to bound \(\mathbb{P}\left(\sum_{l=1}^{L}\Lambda_{l}\geq\frac{\lambda nb_{n}}{(\log n)^{2}}\right)\).
If we define,
\[\alpha_{l,j}:=\] \[\sum_{a=(2j-2)m_{l}+1}^{(2j-1)m_{l}}\sum_{b=(2j-1)m_{l}+1}^{(2j)m _{l}}\mathbb{P}(R^{\prime}_{\mathcal{S}_{a}}\cap\mathcal{S}^{(2j-1),m_{l}}= \emptyset)G_{D}(\mathcal{S}_{a}-\mathcal{S}_{b})\mathbb{P}(R^{\prime}_{ \mathcal{S}_{b}}\cap\mathcal{S}^{(2j),m_{l}}=\emptyset),\]
notice that we can bound,
\[\chi(\mathcal{S}^{(2j-1),m_{l}},\mathcal{S}^{(2j),m_{l}})\leq\alpha_{l,j}.\]
Finally, in order to show \(\limsup_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}(\sum_{l=1}^{L}\Lambda_{l}\geq\frac{\lambda nb_{n}}{(\log n)^{2}})\leq-C\lambda\) for some constant \(C\), it suffices to prove the exponential moment bound,
\[\limsup_{n\to\infty}\mathbb{E}\left[\exp\left[\frac{\theta(\log n)^{2}}{n} \sum_{l=1}^{L}\sum_{k=1}^{2^{l-1}}\alpha_{l,k}\right]\right]<\infty. \tag{2.9}\]
As a consequence of Lemma 5.6, there is a parameter \(\theta>0\) such that each \(\alpha_{l,k}\) has the exponential moment bound,
\[\limsup_{n\to\infty}\mathbb{E}\left[\exp\left[\theta\frac{(\log(2^{-l}n))^{2}} {2^{-l}n}\alpha_{l,k}\right]\right]<\infty.\]
Thus, we can follow the argument of [6, Theorem 5.4] from equation (5.30) onwards to prove the desired result (2.9). This completes the proof of the lemma.
## 3. Theorem 2.1: Large Deviations of the Cross Term
In this section, we provide a decomposition for \(\chi\) that will give us a proof of Theorem 2.1. Analyzing \(\chi\) directly is not tractable due to the lack of symmetry in each individual product. Recall that \(\mathcal{S}^{1},\mathcal{S}^{2}\) are independent random walks of
duration \(n\). To deal with this issue, we can write this in terms of the following difference,
\[\chi =\chi(\mathcal{S}^{1},\mathcal{S}^{2})\] \[=2\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}} \mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2}) \mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset)\] \[-\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}} \mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2}) \mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset,R^{\prime}_{x^{2}} \cap\mathcal{S}^{1}\neq\emptyset)\] \[-\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}} \mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset,R^{\prime}_{x^{1}} \cap\mathcal{S}^{2}\neq\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}} \cap\mathcal{S}^{2}=\emptyset). \tag{3.1}\]
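Although the precise form of \(\chi\) was fixed in the earlier sections, it may help the reader to record the elementary identity that drives decompositions of this type: for any point \(x\) and \(i\in\{1,2\}\),
\[\mathbb{P}(R^{\prime}_{x}\cap(\mathcal{S}^{1}\cup\mathcal{S}^{2})=\emptyset)=\mathbb{P}(R^{\prime}_{x}\cap\mathcal{S}^{i}=\emptyset)-\mathbb{P}(R^{\prime}_{x}\cap\mathcal{S}^{i}=\emptyset,\,R^{\prime}_{x}\cap\mathcal{S}^{3-i}\neq\emptyset),\]
obtained by splitting the event \(\{R^{\prime}_{x}\cap\mathcal{S}^{i}=\emptyset\}\) according to whether \(R^{\prime}_{x}\) hits the other walk. Assuming, as in the earlier sections, that \(\chi\) is built from the escape probabilities of the union \(\mathcal{S}^{1}\cup\mathcal{S}^{2}\), applying this identity to each probability factor produces the three lines of (3.1).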
Now, in order to obtain asymptotics on \(\chi\), our goal is two-fold.
1. An upper bound on \(\chi\) is found by merely considering the top line \[TL_{n}:=\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset). \tag{3.2}\] Thus, one can obtain upper bounds on the moderate deviation statistics of \(\chi\) by analyzing the moderate deviation statistics of \(TL_{n}\).
2. Obtaining lower bounds on the moderate deviation statistics of \(\chi\) needs more steps. First, one needs to show that the second line of (3.1), which we denote by \(\chi^{\prime}\), is sub-leading relative to the first line. (The analysis of the third line would be similar to that of the second.) \[\chi^{\prime}:=\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset,R^{\prime}_{x^{2}}\cap\mathcal{S}^{1}\neq\emptyset). \tag{3.3}\] Once this is established, lower bounds on the large deviation statistics of \(\chi\) will be the same as those of \(TL_{n}\). Furthermore, observe that only an upper bound on \(\chi^{\prime}\) is necessary.
We will have two intermediate goals,
**Theorem 3.1**.: _Recall \(TL_{n}\) as in equation (3.2). Fix \(b_{n}=O(\log\log n)\) with \(\lim_{n\to\infty}b_{n}=\infty\). We have that, for any \(\lambda>0\),_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(TL_{n}\geq\lambda\frac{b_{ n}n}{(\log n)^{2}}\right)=-2I_{4}(\lambda).\]
We remark that by following the same proof, we could obtain the following statement; this is analogous to our statement on \(TL_{n}\) and [12, Theorem 8.2.1], but uses the same random walk rather than two independent copies.
**Corollary 3.2**.: _Let \(b_{n}=O(\log\log n)\) with \(\lim_{n\to\infty}b_{n}=\infty\). Define \(SL_{n}\) as,_
\[SL_{n}:=\sum_{x^{1}\in\mathcal{S}}\sum_{x^{2}\in\mathcal{S}}\mathbb{P}(R^{ \prime}_{x^{1}}\cap\mathcal{S}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{ \prime}_{x^{2}}\cap\mathcal{S}=\emptyset). \tag{3.4}\]
_Then, we have that,_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(SL_{n}-\mathbb{E}[SL_{n}] \geq\lambda\frac{b_{n}n}{(\log n)^{2}}\right)=-I_{4}(\lambda).\]
**Theorem 3.3**.: _Recall \(\chi^{\prime}\) as in (3.3). Fix \(b_{n}=O(\log\log n)\) with \(\lim_{n\to\infty}b_{n}=\infty\). For any \(\epsilon>0\), we have that,_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\chi^{\prime}\geq\epsilon \frac{b_{n}n}{(\log n)^{2}}\right)=-\infty.\]
It is clear that Theorem 2.1 is a consequence of Theorems 3.1 and 3.3 along with the decomposition (3.1). The next few sections will be devoted to proving these theorems.
## 4. Controlling The Third Order Intersections: Proof of Theorem 3.3
For any set \(A\subset\mathbb{Z}^{4}\), let
\[G_{A}(a,b)=\sum_{m=0}^{\infty}\mathbb{P}^{a}(\mathcal{S}_{m}=b,\mathcal{S}(0,m )\cap A=\emptyset).\]
Here, \(G_{A}(a,b)\) is a restricted Green's function. Via a path decomposition, one can see that we have the following expression for the more complicated probability term in \(\chi^{\prime}\),
\[\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset,R^{\prime}_{x^{2}} \cap\mathcal{S}^{1}\neq\emptyset)=\sum_{x^{1}\in\mathcal{S}^{1}}G_{\mathcal{S }^{2}}(x^{2},x^{1})\mathbb{P}(R^{\prime}_{x^{1}}\cap(\mathcal{S}^{1}\cup \mathcal{S}^{2})=\emptyset). \tag{4.1}\]
The restricted Green's function \(G_{A}(a,b)\) computes the total sum of the probabilities of random walk paths from \(a\) to \(b\) that do not intersect \(A\).
The equality in (4.1) comes from a path decomposition. Namely, on the event \(R^{\prime}_{x^{2}}\cap\mathcal{S}^{1}\neq\emptyset\), the random walk \(R^{\prime}_{x^{2}}\) must intersect \(\mathcal{S}^{1}\) at some last point \(x^{1}\). After this point, the random walk starting from \(x^{1}\) must not intersect either one of \(\mathcal{S}^{1}\) or \(\mathcal{S}^{2}\).
\[\chi^{\prime}\leq\sum_{x^{1}_{1},x^{1}_{2}\in\mathcal{S}^{1}}\sum_{x^{2}\in \mathcal{S}^{2}}\mathbb{P}(R^{\prime}_{x^{1}_{1}}\cap\mathcal{S}^{1}=\emptyset )G_{D}(x^{1}_{1}-x^{2})G_{\mathcal{S}^{2}}(x^{2},x^{1}_{2})\mathbb{P}(R^{ \prime}_{x^{1}_{2}}\cap\mathcal{S}^{1}=\emptyset).\]
Though we have simplified the probability term in question, we are still not ready to analyze this due to the appearance of the term \(G_{\mathcal{S}^{2}}\). We have to introduce a more sophisticated analysis in order to deal with this term. First, we fix a parameter \(\beta<1\); its specific value will be chosen later in accordance with what is appropriate for the upper bounds to come. It is important to decompose the walk \(\mathcal{S}^{2}\) into appropriate intervals of size \(n^{\beta}\). We define,
\[\mathcal{S}^{2}_{\beta,j}:=\mathcal{S}^{2}[(j-1)n^{\beta},jn^{\beta}].\]
By adding back points when necessary, and also using the fact that if \(A\subset B\) then \(G_{B}(x,y)\leq G_{A}(x,y)\) for all points \(x\) and \(y\), we see that we have,
\[\chi^{\prime}\leq\chi^{\prime}_{\beta}:=\sum_{x^{1}_{1},x^{1}_{2} \in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}^{2}_{ \beta,j}}\mathbb{P}(R^{\prime}_{x^{1}_{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D} (x^{1}_{1}-x^{2})\] \[\times G_{\mathcal{S}^{2}_{\beta,j}}(x^{2},x^{1}_{2})\mathbb{P}(R ^{\prime}_{x^{1}_{2}}\cap\mathcal{S}^{1}=\emptyset).\]
(Note that \(\chi^{\prime}\) and \(\chi^{\prime}_{\beta}\) are random variables determined by \(\mathcal{S}^{1}\) and \(\mathcal{S}^{2}\).)
One way to express the modified Green's function \(G_{\mathcal{S}^{2}_{\beta,j}}\) is as follows. Note that the Green's function satisfies the following system of equations: for any \(x\in\mathcal{S}^{2}_{\beta,j}\) and \(m\geq 0\),
\[\mathbb{P}^{x}(\mathcal{S}_{m}=x_{2}^{1})=\sum_{i=0}^{m}\sum_{\tilde{x}\in\mathcal{S}^{2}_{\beta,j}}\mathbb{P}^{x}(\mathcal{S}_{i}=\tilde{x})\mathbb{P}^{\tilde{x}}(\mathcal{S}(0,m-i)\cap\mathcal{S}^{2}_{\beta,j}=\emptyset,\mathcal{S}_{m-i}=x_{2}^{1})\]
and hence
\[G_{D}(x-x_{2}^{1})=\sum_{\tilde{x}\in\mathcal{S}^{2}_{\beta,j}}G_{D}(x-\tilde{x})G_{\mathcal{S}^{2}_{\beta,j}}(\tilde{x},x_{2}^{1}).\]
If we define the matrix of size \(|\mathcal{S}^{2}_{\beta,j}|\)
\[[\mathcal{G}^{\mathcal{S}^{2}_{\beta,j}}]_{a,b}=G_{D}(a-b),\quad\text{ for }a,b\in\mathcal{S}^{2}_{\beta,j}, \tag{4.2}\]
we see that,
\[\begin{bmatrix}G_{\mathcal{S}^{2}_{\beta,j}}(a_{1},x_{2}^{1})\\ G_{\mathcal{S}^{2}_{\beta,j}}(a_{2},x_{2}^{1})\\ \vdots\\ G_{\mathcal{S}^{2}_{\beta,j}}(a_{|\mathcal{S}^{2}_{\beta,j}|},x_{2}^{1}) \end{bmatrix}=(\mathcal{G}^{\mathcal{S}^{2}_{\beta,j}})^{-1}\begin{bmatrix}G_{ D}(a_{1},x_{2}^{1})\\ G_{D}(a_{2},x_{2}^{1})\\ \vdots\\ G_{D}(a_{|\mathcal{S}^{2}_{\beta,j}|},x_{2}^{1})\end{bmatrix}, \tag{4.3}\]
where \(a\) varies over all the points in \(\mathcal{S}^{2}_{\beta,j}\).
The analysis of the matrix inverse will depend on the distance between the point \(x_{2}^{1}\) and the set \(\mathcal{S}^{2}_{\beta,j}\). Observe that if \(x_{2}^{1}\) were far away from the set \(\mathcal{S}^{2}_{\beta,j}\), then the terms in the vector on the right hand side of equation (4.3) would be approximately constant. Furthermore, it is also rather unlikely that \(x_{2}^{1}\) would be close to the set \(\mathcal{S}^{2}_{\beta,j}\). Following this intuition, we divide the points \(x_{2}^{1}\) into two categories,
1. In category 1, the point \(x_{2}^{1}\) is of distance at least \(\sqrt{n}^{1-\delta}\) away from all the points in \(\mathcal{S}^{2}_{\beta,j}\).
2. In category 2, the point \(x_{2}^{1}\) is within distance \(\sqrt{n}^{1-\delta}\) of some point in \(\mathcal{S}^{2}_{\beta,j}\).
If we are in the first category, we have a superior analysis, as will be illustrated by the following manipulations. Indeed, assume that the point \(x_{2}^{1}\) is of distance at least \(\sqrt{n}^{1-\delta}\) from all points in \(\mathcal{S}^{2}_{\beta,j}\). Now, let \(a\) and \(b\) be two points in \(\mathcal{S}^{2}_{\beta,j}\). Then, we must have
\[\begin{split}&\big{|}G_{D}(a-x_{2}^{1})-G_{D}(b-x_{2}^{1})\big{|} \lesssim\bigg{|}\frac{1}{\|a-x_{2}^{1}\|^{2}}-\frac{1}{\|b-x_{2}^{1}\|^{2}} \bigg{|}+\frac{1}{\|a-x_{2}^{1}\|^{4}}+\frac{1}{\|b-x_{2}^{1}\|^{4}}\\ &\lesssim\frac{\|a-b\|}{\|a-x_{2}^{1}\|^{3}}+2\left(\frac{1}{\sqrt {n}^{1-\delta}}\right)^{4}\lesssim\frac{n^{\beta}}{\left(\sqrt{n}\right)^{3- 3\delta}}.\end{split} \tag{4.4}\]
Note that here, we have applied the estimates of [18, Theorem 4.3.1]. Then we use the fact that \(\|a-b\|\) is less than \(n^{\beta}\) when both are in the neighborhood of \(\mathcal{S}^{2}_{\beta,j}\) and
our assumption that \(\|x_{2}^{1}-a\|\geq\left(\sqrt{n}\right)^{1-\delta}\). Let
\[\begin{bmatrix}E_{a_{1},x_{2}^{1}}\\ \vdots\\ E_{a_{|\mathcal{S}_{\beta,j}^{2}|},x_{2}^{1}}\end{bmatrix}:=(\mathcal{G}^{ \mathcal{S}_{\beta,j}^{2}})^{-1}\begin{bmatrix}G_{D}(a_{1}-x_{2}^{1})-G_{D}(a _{1}-x_{2}^{1})\\ \vdots\\ G_{D}(a_{|\mathcal{S}_{\beta,j}^{2}|}-x_{2}^{1})-G_{D}(a_{1}-x_{2}^{1})\end{bmatrix}.\]
In this case, we can further write the inverse formula as in equation (4.3) as,
\[\begin{bmatrix}G_{\mathcal{S}_{\beta,j}^{2}}(a_{1},x_{2}^{1})\\ \vdots\\ G_{\mathcal{S}_{\beta,j}^{2}}(a_{|\mathcal{S}_{\beta,j}^{2}|},x_{2}^{1})\end{bmatrix}=(\mathcal{G}^{\mathcal{S}_{\beta,j}^{2}})^{-1}\begin{bmatrix}G_{D}(a_{1}-x_{2}^{1})\\ \vdots\\ G_{D}(a_{|\mathcal{S}_{\beta,j}^{2}|}-x_{2}^{1})\end{bmatrix}\] \[=(\mathcal{G}^{\mathcal{S}_{\beta,j}^{2}})^{-1}\begin{bmatrix}1\\ \vdots\\ 1\end{bmatrix}\times G_{D}(a_{1}-x_{2}^{1})+(\mathcal{G}^{\mathcal{S}_{\beta,j}^{2}})^{-1}\begin{bmatrix}G_{D}(a_{1}-x_{2}^{1})-G_{D}(a_{1}-x_{2}^{1})\\ \vdots\\ G_{D}(a_{|\mathcal{S}_{\beta,j}^{2}|}-x_{2}^{1})-G_{D}(a_{1}-x_{2}^{1})\end{bmatrix}\] \[=\begin{bmatrix}\mathbb{P}(R^{\prime}_{a_{1}}\cap\mathcal{S}_{\beta,j}^{2}=\emptyset)\\ \vdots\\ \mathbb{P}(R^{\prime}_{a_{|\mathcal{S}_{\beta,j}^{2}|}}\cap\mathcal{S}_{\beta,j}^{2}=\emptyset)\end{bmatrix}\times G_{D}(a_{1}-x_{2}^{1})+\begin{bmatrix}E_{a_{1},x_{2}^{1}}\\ \vdots\\ E_{a_{|\mathcal{S}_{\beta,j}^{2}|},x_{2}^{1}}\end{bmatrix}. \tag{4.5}\]
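The final equality above uses, besides the definition of the vector \(E\), the identity (presumably via the usual last-exit decomposition) that the row sums of the inverse Green matrix of a finite set equal the corresponding escape probabilities; written out in the present notation,
\[\sum_{b\in\mathcal{S}^{2}_{\beta,j}}\big{[}(\mathcal{G}^{\mathcal{S}^{2}_{\beta,j}})^{-1}\big{]}_{a,b}=\mathbb{P}(R^{\prime}_{a}\cap\mathcal{S}^{2}_{\beta,j}=\emptyset),\qquad a\in\mathcal{S}^{2}_{\beta,j},\]
which is exactly the statement that \((\mathcal{G}^{\mathcal{S}^{2}_{\beta,j}})^{-1}\mathbf{1}\) is the vector of escape probabilities appearing in the last line of (4.5).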
Thus, we have the following representation of \(\chi^{\prime}_{\beta}\). In what follows, we let \(\mathcal{I}(y,j)\) be the indicator function of the event
\[\mathcal{I}(y,j):=\mathbbm{1}[\text{dist}(y,\mathcal{S}_{\beta,j}^{2})\geq \sqrt{n}^{1-\delta}].\]
For each set \(\mathcal{S}_{\beta,j}^{2}\), choose a point \(\tilde{x}_{j}^{2}\). This point will serve as the central point in the decomposition (4.5):
\[\begin{split}\chi^{\prime}_{\beta}=&\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}\mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{\prime}_{x_{1}^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x_{1}^{1}-x^{2})\\ &\times\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}_{\beta,j}^{2}=\emptyset)G_{D}(\tilde{x}_{j}^{2}-x_{2}^{1})\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset)\\ &+\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}\mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{\prime}_{x_{1}^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\\ &\times E_{x^{2},x_{2}^{1}}\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset)\\ &+\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}\mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{\prime}_{x_{1}^{1}}\cap\mathcal{S}^{1}=\emptyset)\\ &\times[G_{D}(x_{1}^{1}-x^{2})-G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})]E_{x^{2},x_{2}^{1}}\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset)\\ &+\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}[1-\mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)]\mathbb{P}(R^{\prime}_{x_{1}^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x_{1}^{1}-x^{2})\\ &\times G_{\mathcal{S}_{\beta,j}^{2}}(x^{2},x_{2}^{1})\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset)\\ =:& MT_{n}+\mathcal{E}_{1}+\mathcal{E}_{2}+\mathcal{E}_{3}.\end{split} \tag{4.6}\]
The analysis of \(\chi^{\prime}_{\beta}\) now reduces to the following lemmas.
**Lemma 4.1**.: _Fix \(m\in\mathbb{N}\). There exists a constant \(C_{m}\), depending only on \(m\) (and not on \(n\)), such that, if we define,_
\[\begin{split} MT_{n}:&=\sum_{x_{1}^{1},x_{2}^{1}\in \mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2} }\mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{\prime}_{x_{1} ^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x_{1}^{1}-x^{2})\\ &\qquad\qquad\qquad\qquad\times\mathbb{P}(R^{\prime}_{x^{2}}\cap \mathcal{S}_{\beta,j}^{2}=\emptyset)G_{D}(\tilde{x}_{j}^{2}-x_{2}^{1})\mathbb{ P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset),\end{split} \tag{4.7}\]
_then,_
\[\mathbb{E}[(MT_{n})^{m}]\leq C_{m}\frac{n^{m}(\log\log n)^{2m}}{(\log n)^{3m}}.\]
By Markov's inequality, we obtain the following as a consequence,
**Corollary 4.2**.: _Recall \(MT_{n}\) and fix any \(\epsilon>0\). If \(b_{n}=O(\log\log n)\) with \(\lim_{n\to\infty}b_{n}=\infty\), we see that we have,_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(MT_{n}\geq\epsilon\frac{b _{n}n}{(\log n)^{2}}\right)=-\infty.\]
Proof.: By applying Markov's inequality to \(MT_{n}\) with the moment bound of Lemma 4.1 for a fixed power \(m\), we can derive that,
\[\limsup_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(MT_{n}\geq \epsilon\frac{b_{n}n}{(\log n)^{2}}\right)\] \[\leq\limsup_{n\to\infty}-\frac{m}{b_{n}}\left[\log\epsilon-\frac {\log C_{m}}{m}+\log\log n-2\log\log\log n+\log b_{n}\right]\leq-m\frac{\log \log n}{b_{n}}.\]
The quantity above goes to \(-\infty\) as one takes \(m\) to \(\infty\).
Proof of Lemma 4.1.: Since the proof is similar to that of Claim 5.8 and to the estimate of \(\mathcal{E}_{2}\), we explain it only briefly. We estimate \(MT_{n}\) by decomposing the factor \(G_{D}(x_{1}^{1}-x^{2})\) in \(MT_{n}\) into \(G_{D}(x_{1}^{1}-x^{2})-G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\) and \(G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\). Concerning the term of \(MT_{n}\) containing \(G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\), note that by [17, Theorem 3.5.1],
\[\mathbb{P}(\mathcal{S}^{1}(0,\infty)\cap(\mathcal{S}^{2}[0,n]\cup\mathcal{S}^ {3}[0,n])=\emptyset)\lesssim(\log n)^{-1}, \tag{4.8}\]
where \(\mathcal{S}^{3}\) is a random walk independent of \(\mathcal{S}^{1}\) and \(\mathcal{S}^{2}\). If one could freely replace the probabilities of non-intersection of the random walks appearing in the expression (4.7) with \(O\left(\frac{1}{\log n}\right)\), then the bound on the term of \(MT_{n}\) containing \(G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\) would be a consequence of the following lemma and of repeating the proof of Claim 5.8 with the aid of (4.8). Concerning the term of \(MT_{n}\) containing \(G_{D}(x_{1}^{1}-x^{2})-G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\), notice that here we can make the replacement \(G_{D}(x_{1}^{1}-x^{2})\approx G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\), since \(\|x^{2}-\tilde{x}_{j}^{2}\|\leq n^{\beta}\) as both are elements of \(\mathcal{S}_{\beta,j}^{2}\), under \(\mathcal{I}(x_{1}^{1},j)\). Then we can estimate the term of \(MT_{n}\) containing \(G_{D}(x_{1}^{1}-x^{2})-G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\) by an argument similar to the estimate of \(\mathcal{E}_{2}\).
**Lemma 4.3**.: _Consider the following quantity,_
\[MT_{n}^{\prime}: =\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in \mathcal{S}^{2}}G_{D}(x_{1}^{1}-x^{2})G_{D}(x^{2}-x_{2}^{1}).\]
_There exists some constant \(C_{m}\) (not depending on \(n\)) such that_
\[\mathbb{E}[(MT_{n}^{\prime})^{m}]\leq C_{m}n^{m}(\log\log n)^{2m}.\]
Proof.: Let \(\alpha>4m\). First, we show that for any \(y_{i}\in\mathbb{Z}^{4}\) with \(\inf_{1\leq i\leq 2m}\|y_{i}\|\geq n^{1/2}(\log n)^{-\alpha}\),
\[\mathbb{E}\left[\sum_{k_{1},\ldots,k_{2m}=1}^{n}\prod_{i=1}^{2m}G_{D}(\mathcal{ S}_{k_{i}}^{1}-y_{i})\right]\leq C_{m,\alpha}(\log\log n)^{2m}. \tag{4.9}\]
Let \(A_{i}=\{\|\mathcal{S}_{k_{i-1}}^{1}-y_{i}\|\geq n^{1/2}(\log n)^{-2\alpha}\}\). Indeed,
\[\sum_{1\leq k_{1}\leq\ldots\leq k_{2m}\leq n}\prod_{i=1}^{2m}G_{D} (\mathcal{S}_{k_{i}}^{1}-y_{i})\] \[= \sum_{1\leq k_{1}\leq\ldots\leq k_{2m}\leq n}G_{D}(\mathcal{S}_{k _{2m}}^{1}-\mathcal{S}_{k_{2m-1}}^{1}+\mathcal{S}_{k_{2m-1}}^{1}-y_{2m})\prod _{i=1}^{2m-1}G_{D}(\mathcal{S}_{k_{i}}^{1}-y_{i})\] \[= \sum_{1\leq k_{1}\leq\ldots\leq k_{2m}\leq n}G_{D}(\mathcal{S}_{k _{2m}}^{1}-\mathcal{S}_{k_{2m-1}}^{1}+\mathcal{S}_{k_{2m-1}}^{1}-y_{2m}) \mathbb{1}_{A_{2m}}\prod_{i=1}^{2m-1}G_{D}(\mathcal{S}_{k_{i}}^{1}-y_{i})\] \[+ \sum_{1\leq k_{1}\leq\ldots\leq k_{2m}\leq n}G_{D}(\mathcal{S}_{k _{2m}}^{1}-\mathcal{S}_{k_{2m-1}}^{1}+\mathcal{S}_{k_{2m-1}}^{1}-y_{2m}) \mathbb{1}_{A_{2m}^{c}}\prod_{i=1}^{2m-1}G_{D}(\mathcal{S}_{k_{i}}^{1}-y_{i}).\]
Note by [17, Thm. 1.2.1] that for any \(x\in\mathbb{Z}^{4}\), \(i\geq 1\),
\[\mathbb{P}(\mathcal{S}_{i}^{1}=x)\lesssim i^{-2}\Big{[}e^{-2\|x\|^{2}/i}+(\|x \|^{2}\lor i)^{-1}\Big{]}. \tag{4.10}\]
Hence if \(\|y\|\geq n^{1/2}(\log n)^{-2\alpha}\),
\[\mathbb{E}\left[\sum_{1\leq k\leq n}G_{D}(\mathcal{S}_{k}^{1}-y)\right] \lesssim\log\log n\]
and
\[\mathbb{E}\left[\sum_{1\leq k_{2m-1}\leq n}\mathbb{1}_{A_{2m}^{c}}G_{D}( \mathcal{S}_{k_{2m-1}}^{1}-y_{2m-1})\right]\lesssim(\log n)^{-4\alpha}\times \log n.\]
In addition,
\[\max_{y_{1},\ldots,y_{2m}}\mathbb{E}\left[\sum_{k_{1},\ldots,k_{2m}=1}^{n} \prod_{i=1}^{2m}G_{D}(\mathcal{S}_{k_{i}}^{1}-y_{i})\right]\leq C_{m}(\log n) ^{2m}.\]
Then,
\[\mathbb{E}\left[\sum_{1\leq k_{1}\leq\ldots\leq k_{2m}\leq n}G_{ D}(\mathcal{S}_{k_{2m}}^{1}-\mathcal{S}_{k_{2m-1}}^{1}+\mathcal{S}_{k_{2m-1}}^{1}-y_{ 2m})\mathbb{1}_{A_{2m}}\prod_{i=1}^{2m-1}G_{D}(\mathcal{S}_{k_{i}}^{1}-y_{i})\right]\] \[\lesssim (\log\log n)\times\mathbb{E}\left[\sum_{1\leq k_{1}\leq\ldots \leq k_{2m-1}\leq n}\prod_{i=1}^{2m-1}G_{D}(\mathcal{S}_{k_{i}}^{1}-y_{i}) \right].\]
and by Lemma A.10,
\[\mathbb{E}\left[\sum_{1\leq k_{1}\leq\ldots\leq k_{2m}\leq n}G_{D}( \mathcal{S}^{1}_{k_{2m}}-\mathcal{S}^{1}_{k_{2m-1}}+\mathcal{S}^{1}_{k_{2m-1}}- y_{2m})\mathbb{1}_{\,A^{\varepsilon}_{2m}}\prod_{i=1}^{2m-1}G_{D}(\mathcal{S}^{1}_{k_ {i}}-y_{i})\right]\] \[\lesssim (\log n)\mathbb{E}\left[\sum_{1\leq k_{1}\leq\ldots\leq k_{2m-1} \leq n}\mathbb{1}_{\,A^{\varepsilon}_{2m}}\prod_{i=1}^{2m-1}G_{D}(\mathcal{S}^ {1}_{k_{i}}-y_{i})\right]\] \[\leq C_{m}(\log n)^{2m}(\log n)^{-4\alpha}.\]
Hence, we obtain (4.9). Moreover, if \(\inf\{i_{1},\ldots,i_{m}\}\geq n(\log n)^{-\alpha}\),
\[\mathbb{E}[\mathbb{1}_{\,D}]:=\mathbb{E}\left[\mathbb{1}\left[\bigcup_{i\in\{i_{1},\ldots,i_{m}\}}\left\{\|\mathcal{S}^{2}_{i}\|\leq n^{1/2}(\log n)^{-\alpha}\right\}\right]\right]\leq C_{m}(\log n)^{-4\alpha}\]
and
\[\sum_{\inf\{i_{1},\ldots,i_{m}\}\leq n(\log n)^{-\alpha}}1\leq C_{m}n^{m}(\log n )^{-\alpha}.\]
Hence,
\[\mathbb{E}[(MT^{\prime}_{n})^{m}]\leq\mathbb{E}\left[\sum_{i_{1},\ldots,i_{m}= 1}^{n}\sum_{k_{1},\ldots,k_{2m}=1}^{n}\prod_{j=1}^{2m}G_{D}(\mathcal{S}^{1}_{ k_{j}}-\mathcal{S}^{2}_{i_{\lceil j/2\rceil}})\right]\] \[\leq C_{m}n^{m}(\log n)^{m}(\log n)^{-\alpha}+\mathbb{E}\left[\sum_{ \inf\{i_{1},\ldots,i_{m}\}\geq n(\log n)^{-\alpha}}\sum_{k_{1},\ldots,k_{2m}=1 }^{n}\prod_{j=1}^{2m}G_{D}(\mathcal{S}^{1}_{k_{j}}-\mathcal{S}^{2}_{i_{\lceil j /2\rceil}})\right]\] \[\leq C_{m}n^{m}(\log n)^{m}(\log n)^{-\alpha}+\mathbb{E}\left[\sum_{ \inf\{i_{1},\ldots,i_{m}\}\geq n(\log n)^{-\alpha}}\sum_{k_{1},\ldots,k_{2m}=1 }^{n}\mathbb{1}_{\,D^{c}}\prod_{j=1}^{2m}G_{D}(\mathcal{S}^{1}_{k_{j}}- \mathcal{S}^{2}_{i_{\lceil j/2\rceil}})\right]\] \[\leq C_{m}n^{m}(\log\log n)^{2m}.\]
Therefore, we obtain the desired result.
The other terms of equation (4.6) are of much smaller order.
**Lemma 4.4**.: _Consider the second summand on the right hand side of equation (4.6). Namely, let_
\[\mathcal{E}_{1}:=\sum_{x^{1}_{1},x^{1}_{2}\in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}^{2}_{\beta,j}}\mathcal{I}(x^{1}_{2},j)\mathcal{I}(x^{1}_{1},j)\mathbb{P}(R^{\prime}_{x^{1}_{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}_{1}-\tilde{x}^{2}_{j})\] \[\qquad\qquad\qquad\qquad\times E_{x^{2},x^{1}_{2}}\mathbb{P}(R^{\prime}_{x^{1}_{2}}\cap\mathcal{S}^{1}=\emptyset). \tag{4.11}\]
_We have that,_
\[\mathbb{E}[|\mathcal{E}_{1}|]\lesssim n^{2\beta+\frac{3}{2}\delta+\frac{1}{2}}. \tag{4.12}\]
_We also have a similar estimate for the third summand on the right hand side of equation (4.6). Namely, we have that,_
\[\mathcal{E}_{2}:=\sum_{x^{1}_{1},x^{1}_{2}\in\mathcal{S}^{1}} \sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}^{2}_{\beta,j}}\mathcal{I}(x^ {1}_{2},j)\mathcal{I}(x^{1}_{1},j)\mathbb{P}(R^{\prime}_{x^{1}_{1}}\cap \mathcal{S}^{1}=\emptyset)\] \[\qquad\qquad\qquad\qquad\times[G_{D}(x^{1}_{1}-x^{2})-G_{D}(x^{1}_ {1}-\tilde{x}^{2}_{j})]E_{x^{2},x^{1}_{2}}\mathbb{P}(R^{\prime}_{x^{1}_{2}} \cap\mathcal{S}^{1}=\emptyset), \tag{4.13}\]
_will satisfy,_
\[\mathbb{E}[|\mathcal{E}_{2}|]\lesssim n^{3\beta+2\delta}. \tag{4.14}\]
Lemma 4.4 will be shown later in this section. As before, by Markov's inequality, one can derive the following.
**Corollary 4.5**.: _Recall the terms \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) from equations (4.11) and (4.13). Fix any \(\epsilon>0\) and set \(b_{n}=O(\log\log n)\) with \(\lim_{n\to\infty}b_{n}=\infty\). We have that,_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\max(|\mathcal{E}_{1}|,| \mathcal{E}_{2}|)\geq\epsilon\frac{nb_{n}}{(\log n)^{2}}\right)=-\infty. \tag{4.15}\]
Proof.: By Markov's inequality applied to (4.12) and (4.14), we have that
\[\frac{1}{b_{n}}\log\mathbb{P}\left(|\mathcal{E}_{1}|\geq\epsilon\frac{nb_{n}} {(\log n)^{2}}\right)\leq\frac{-\log\epsilon-(\frac{1}{2}-2\beta-\frac{3}{2} \delta)\log n+2\log\log n}{b_{n}},\]
and,
\[\frac{1}{b_{n}}\log\mathbb{P}\left(|\mathcal{E}_{2}|\geq\epsilon\frac{nb_{n}} {(\log n)^{2}}\right)\leq\frac{-\log\epsilon-(1-3\beta-2\delta)\log n+2\log \log n}{b_{n}}.\]
The desired conclusion (4.15) follows from taking the limit \(n\to\infty\).
Finally, the last error term, the fourth summand of (4.6), will also be of smaller order.
**Lemma 4.6**.: _Consider the last summand of (4.6),_
\[\mathcal{E}_{3}:=\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S}^{1}}\sum _{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}} [1-\mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)]\mathbb{P}(R^{\prime}_{x_ {1}^{1}}\cap\mathcal{S}^{1}=\emptyset)\] \[G_{D}(x_{1}^{1}-x^{2})G_{\mathcal{S}_{\beta,j}^{2}}(x^{2},x_{2}^ {1})\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset). \tag{4.16}\]
_We have that, for some \(\delta^{\prime}>0\),_
\[\mathbb{E}\left[\mathcal{E}_{3}\right]\lesssim n^{1-\delta^{\prime}}.\]
Proof.: By (4.10), we remark that, for any point \(a\),
\[\mathbb{E}\left[\sum_{x^{2}\in\mathcal{S}^{2}}G_{D}(a-x^{2})\right]\leq\sum_ {j=0}^{n}\sum_{i=j}^{\infty}\mathbb{P}(\mathcal{S}_{i}=a)\lesssim\sum_{j=0}^ {n}\sum_{i=j}^{\infty}i_{+}^{-2}\lesssim\log n, \tag{4.17}\]
where \(i_{+}:=1\lor i\). By symmetry,
\[\mathbb{E}[\mathcal{E}_{3}]\leq \mathbb{E}\bigg{[}\sum_{i,j,k=0}^{n}[1-\mathbb{1}\,[\|\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2}\|\geq\sqrt{n}^{1-\delta}]\mathbb{1}\,[\|\mathcal{S}_{k}^{1}-\mathcal{S}_{j}^{2}\|\geq\sqrt{n}^{1-\delta}]]\] \[\times G_{D}(\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2})G_{D}(\mathcal{S}_{j}^{2}-\mathcal{S}_{k}^{1})\bigg{]}\] \[\leq 2\mathbb{E}\bigg{[}\sum_{i,j,k=0}^{n}\mathbb{1}\,[\|\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2}\|\leq\sqrt{n}^{1-\delta}]G_{D}(\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2})G_{D}(\mathcal{S}_{j}^{2}-\mathcal{S}_{i}^{1}-(\mathcal{S}_{k}^{1}-\mathcal{S}_{i}^{1}))\bigg{]}.\]
By (4.17) and the Markov property, it is bounded by
\[C(\log n)\times\mathbb{E}\left[\sum_{i,j=0}^{n}\mathbb{1}\,[\|\mathcal{S}_{i }^{1}-\mathcal{S}_{j}^{2}\|\leq\sqrt{n}^{1-\delta}]G_{D}(\mathcal{S}_{i}^{1}- \mathcal{S}_{j}^{2})\right].\]
Now, by [13, Lemma 4.1],
\[\mathbb{E}\left[\sum_{i,j=0}^{n^{1-\delta/2}}\mathbb{1}\left[\|\mathcal{S}_{i}^{1 }-\mathcal{S}_{j}^{2}\|\leq\sqrt{n}^{1-\delta}\right]\!G_{D}(\mathcal{S}_{i}^ {1}-\mathcal{S}_{j}^{2})\right]\leq\mathbb{E}\left[\sum_{i,j=0}^{n^{1-\delta/2}}G _{D}(\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2})\right]\lesssim n^{1-\delta/2}\]
and
\[\mathbb{E}\left[\sum_{i=n^{1-\delta/2}}^{n}\sum_{j=0}^{n}\mathbb{1 }\left[\|\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2}\|\leq\sqrt{n}^{1-\delta} \right]\!G_{D}(\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2})\right]\] \[\leq n\max_{y\in\mathbb{Z}^{4}}\mathbb{E}\left[\sum_{i=n^{1-\delta/2 }}^{n}\mathbb{1}[\|\mathcal{S}_{i}^{1}-y\|\leq\sqrt{n}^{1-\delta}]\!G_{D}( \mathcal{S}_{i}^{1}-y)\right]\] \[\lesssim n\max_{y\in\mathbb{Z}^{4}}\sum_{i=n^{1-\delta/2}}^{n}\sum_{\|x -y\|\leq\sqrt{n}^{1-\delta}}\|x-y\|^{-2}\mathbb{P}(\mathcal{S}_{i}^{1}=x).\]
Then, again by (4.10), it is bounded by
\[C(\log n)n^{1-\delta/2}+Cn(\log n)\max_{y\in\mathbb{Z}^{4}}\sum_{i=n^{1-\delta /2}}^{n}\sum_{\|x-y\|\leq\sqrt{n}^{1-\delta}}\|x-y\|^{-2}i_{+}^{-2}\lesssim n^ {1-\delta/2}.\]
Therefore, by symmetry, this completes the proof.
As before, one can show the following from Markov's inequality,
**Corollary 4.7**.: _Recall the term \(\mathcal{E}_{3}\) from (4.16). Fix any \(\epsilon>0\) and set \(b_{n}=O(\log\log n)\) with \(\lim_{n\to\infty}b_{n}=\infty\). Then, we see that,_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(\mathcal{E}_{3}\geq\epsilon\frac{nb_{n}}{(\log n)^{2}}\right)=-\infty. \tag{4.18}\]
The proof is similar to that of Corollary 4.5 and will not be shown here.
Using the previous corollaries, one can now prove Theorem 3.3.
Proof of Theorem 3.3.: It is clear that,
\[\mathbb{P}\left(\chi^{\prime}\geq\epsilon\frac{nb_{n}}{(\log n)^ {2}}\right)\leq\mathbb{P}\left(\chi^{\prime}_{\beta}\geq\epsilon\frac{nb_{n}} {(\log n)^{2}}\right)\] \[\leq\mathbb{P}\left(MT_{n}\geq\frac{\epsilon}{4}\frac{b_{n}n}{( \log n)^{2}}\right)+\sum_{i=1}^{3}\mathbb{P}\left(|\mathcal{E}_{i}|\geq\frac{ \epsilon}{4}\frac{b_{n}n}{(\log n)^{2}}\right).\]
By applying \(\frac{1}{b_{n}}\log\) to both sides, we see that the conclusion of Theorem 3.3 is a consequence of Corollaries 4.2, 4.5, and 4.7.
### Proof of Lemma 4.4
The proof of this lemma requires novel techniques beyond careful computations of Green's functions, due to the presence of the error terms \(E\) arising from the matrix inversion. We present the proof here.
Proof of Lemma 4.4.: First, we deal with \(\mathcal{E}_{1}\). We have that,
\[\mathcal{E}_{1} =\sum_{x_{1}^{1}\in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}} \mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{\prime}_{x_{1}^{ 1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\] \[\times\sum_{x_{2}^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S} _{\beta,j}^{2}}E_{x^{2},x_{2}^{1}}\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap \mathcal{S}^{1}=\emptyset).\]
We now consider, under the indicator function \(\mathcal{I}(x_{2}^{1},j)\),
\[\sum_{x_{2}^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}|E_{x^{2},x_{2}^{1}}|\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset)\] \[\leq\sum_{x_{2}^{1}\in\mathcal{S}^{1}}\sqrt{|\mathcal{S}_{\beta,j}^{2}|}\sqrt{\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}|E_{x^{2},x_{2}^{1}}|^{2}}\lesssim n\sqrt{n^{2\beta}\frac{n^{2\beta}}{n^{3-3\delta}}}\lesssim n^{2\beta+3/2\delta-1/2}.\]
To get the first inequality, we used the Cauchy-Schwarz inequality on the sum over \(\mathcal{S}_{\beta,j}^{2}\). From Lemma A.11, the matrix \(\mathcal{G}^{\mathcal{S}_{\beta,j}^{2}}\) is positive definite and has minimum eigenvalue greater than \(1/2\). Thus, the inverse matrix has \(l^{2}\to l^{2}\) operator norm less than \(2\). Thus, we know that, under the indicator function \(\mathcal{I}(x_{2}^{1},j)\),
\[\sqrt{\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}|E_{x^{2},x_{2}^{1}}|^{2}}\leq 2 \sqrt{\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}|G_{D}(a_{1}-x_{2}^{1})-G_{D}(x^ {2}-x_{2}^{1})|^{2}}\lesssim\sqrt{n^{\beta}\frac{n^{2\beta}}{n^{3-3\delta}}}.\]
In the final inequality, we used the deterministic bound (4.4) to bound the differences of the Green's function in the region \(\mathcal{S}_{\beta,j}^{2}\). Thus, we have that, deterministically,
\[\bigg{|}\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S}^{1}}\sum_{j=1}^{ n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}\mathcal{I}(x_{2}^{1},j) \mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{\prime}_{x_{1}^{1}}\cap\mathcal{S}^{1}= \emptyset)G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\] \[\qquad\qquad\qquad\times E_{x^{2},x_{2}^{1}}\mathbb{P}(R^{\prime }_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset)\bigg{|}\] \[\lesssim\sum_{x_{1}^{1}\in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta }}\mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{\prime}_{x_{ 1}^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})n^{2 \beta+3/2\delta-1/2}\] \[\lesssim n^{2\beta+3/2\delta-1/2}\sum_{i=1}^{n}\sum_{j=1}^{n}G_{ D}(\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2}).\]
Therefore,
\[\mathbb{E}\bigg{[}\bigg{|}\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S} ^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}\mathcal{ I}(x_{2}^{1},j)\] \[\qquad\qquad\qquad\times\mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{ \prime}_{x_{1}^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x_{1}^{1}-\tilde{x}_{j}^ {2})E_{x^{2},x_{2}^{1}}\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}= \emptyset)\bigg{|}\bigg{]}\] \[\lesssim n^{2\beta+3/2\delta-1/2}\mathbb{E}\left[\sum_{i=1}^{n}\sum_{j=1}^ {n}G_{D}(\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2})\right]\lesssim n^{2\beta+3/2 \delta+1/2}.\]
The computation of the expectation in the last line above comes from [13, Lemma 4.1]. The value of this last line is approximately \(n^{-1/2}\) times the scale of the main order term (provided \(\beta\) and \(\delta\) are chosen relatively small).
Now, the other error term involving \(E\) can be dealt with in a similar way to \(\mathcal{E}_{1}\). To recall, the other error term is,
\[|\mathcal{E}_{2}|\leq\bigg{|}\sum_{x_{1}^{1},x_{2}^{1}\in\mathcal{S}^{1}}\sum_{j=1}^{n^{1-\beta}}\sum_{x^{2}\in\mathcal{S}_{\beta,j}^{2}}\mathcal{I}(x_{2}^{1},j)\mathcal{I}(x_{1}^{1},j)\mathbb{P}(R^{\prime}_{x_{1}^{1}}\cap\mathcal{S}^{1}=\emptyset)\] \[\times[G_{D}(x_{1}^{1}-x^{2})-G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})]E_{x^{2},x_{2}^{1}}\mathbb{P}(R^{\prime}_{x_{2}^{1}}\cap\mathcal{S}^{1}=\emptyset)\bigg{|}.\]
We first use the improved estimate,
\[|G_{D}(x_{1}^{1}-x^{2})-G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})|\lesssim G_{D}(x_{1 }^{1}-\tilde{x}_{j}^{2})\frac{n^{\beta}}{n^{1/2-\delta/2}}\]
under the indicator function \(\mathcal{I}(x_{1}^{1},j)\). We remark here that the factor of \(G_{D}(x_{1}^{1}-\tilde{x}_{j}^{2})\) is an improved error term in the case that \(\|x_{1}^{1}-\tilde{x}_{j}^{2}\|\) is relatively large. With this deterministic bound in hand, bounding this error term in \(E\) reduces to the error term we just treated.
## 5. The Leading Term of \(\chi\): Proof of Theorem 3.1
In this section, we will consider the large deviation statistics of the following quantity,
\[TL_{n}:=\sum_{x^{1}\in\mathcal{S}^{1}}\sum_{x^{2}\in\mathcal{S}^{2}}\mathbb{P }(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{ P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset).\]
We will prove Theorem 3.1 by separately proving lower and upper bounds for the asymptotic moments.
### Introduction of the Auxiliary \(TL^{\prime}_{n}\)
For technical reasons, \(TL_{n}\) is not the most convenient quantity to manipulate. Instead, we consider the following auxiliary quantity. We let \(\mathcal{S}^{k,i}\) denote the portion of the random walk \(\mathcal{S}^{k}\) between times \((i-1)\frac{n}{b_{n}}\) and \(i\frac{n}{b_{n}}\), that is, \(\mathcal{S}^{k,i}:=\mathcal{S}^{k}\left[(i-1)\frac{n}{b_{n}},i\frac{n}{b_{n}}\right]\), and define
\[TL^{\prime}_{n}:=\sum_{i,j=1}^{b_{n}}\sum_{x^{1}\in\mathcal{S}^{1,i}}\sum_{x^ {2}\in\mathcal{S}^{2,j}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1,i}= \emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2,j }=\emptyset). \tag{5.1}\]
We have the following relationship between \(TL_{n}\) and \(TL^{\prime}_{n}\).
**Proposition 5.1**.: _Let \(b_{n}\) be a sequence satisfying \(b_{n}=O(\log\log n)\) and \(\lim_{n\to\infty}b_{n}=\infty\). Fix \(\lambda\geq 0\). Then, we have that,_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(TL^{\prime}_{n}\geq \lambda\frac{nb_{n}}{(\log n)^{2}}\right)=\lim_{n\to\infty}\frac{1}{b_{n}} \log\mathbb{P}\left(TL_{n}\geq\lambda\frac{nb_{n}}{(\log n)^{2}}\right). \tag{5.2}\]
Proof.: We remark that \(TL^{\prime}_{n}\geq TL_{n}\). This immediately shows that,
\[\mathbb{P}\left(TL^{\prime}_{n}\geq\lambda\frac{nb_{n}}{(\log n)^{2}}\right) \geq\mathbb{P}\left(TL_{n}\geq\lambda\frac{nb_{n}}{(\log n)^{2}}\right).\]
To derive the opposite inequality, we first observe that \(TL^{\prime}_{n}-TL_{n}\) can be bounded from above by,
\[TL^{\prime}_{n}-TL_{n}\] \[\leq \sum_{i,j=1}^{b_{n}}\sum_{x^{1}\in\mathcal{S}^{1,i}}\sum_{x^{2}\in \mathcal{S}^{2,j}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1,i}=\emptyset, R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}\neq\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{ \prime}_{x^{2}}\cap\mathcal{S}^{2,j}=\emptyset)\] \[+\sum_{i,j=1}^{b_{n}}\sum_{x^{1}\in\mathcal{S}^{1,i}}\sum_{x^{2} \in\mathcal{S}^{2,j}}\mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset )G_{D}(x^{1}-x^{2})\mathbb{P}(R^{\prime}_{x^{2}}\cap\mathcal{S}^{2,j}=\emptyset,R^{\prime}_{x^{2}}\cap\mathcal{S}^{2}\neq\emptyset)\] \[+\sum_{i_{1}\neq i_{2},j=1}^{b_{n}}\sum_{x^{1}\in\mathcal{S}^{1,i_ {1}}\cap\mathcal{S}^{1,i_{2}}}\sum_{x^{2}\in\mathcal{S}^{2,j}}\mathbb{P}(R^{ \prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{ \prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset)\] \[+\sum_{i,j_{1}\neq j_{2}=1}^{b_{n}}\sum_{x^{2}\in\mathcal{S}^{2,j_ {1}}\cap\mathcal{S}^{2,j_{2}}}\sum_{x^{1}\in\mathcal{S}^{1,i}}\mathbb{P}(R^{ \prime}_{x^{1}}\cap\mathcal{S}^{1}=\emptyset)G_{D}(x^{1}-x^{2})\mathbb{P}(R^{ \prime}_{x^{2}}\cap\mathcal{S}^{2}=\emptyset)\] \[=: J_{1}+J_{2}+J_{3}+J_{4}. \tag{5.3}\]
For each line on the right hand side above, we will show that for \(1\leq i\leq 4\)
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(J_{i}\geq\epsilon\frac{nb _{n}}{(\log n)^{2}}\right)=-\infty. \tag{5.4}\]
The first two lines of the right hand side of (5.3) are very similar to the type of error terms we have dealt with in Section 4. One can follow the analysis of said section to show the relation (5.4) for these two lines.
The last two lines will be controlled by bounding the moments and applying Markov's inequality. We present the analysis with the term on the third line, since the term on the fourth line can be dealt with similarly. We first bound all the probability terms on the line by \(1\).
By (4.10) and (4.17), for any \(x\) and \(y\),
\[\mathbb{E}\left[\sum_{i,j=0}^{n}\mathbb{1}\left\{\mathcal{S}^{1}_ {i}+x=\mathcal{S}^{2}_{j}+y\right\}\right] =\sum_{i,j=0}^{n}\mathbb{P}(\mathcal{S}^{1}_{i+j}=y-x)\] \[\lesssim\sum_{i,j=0}^{n}(i+j)_{+}^{-2}\lesssim\log n.\]
Thus, we see that,
\[\mathbb{E}\left[J_{3}\right] \leq\mathbb{E}_{\mathcal{S}^{1}}\left[\sum_{i_{1}\neq i_{2}=1}^{ b_{n}}\sum_{x^{1}\in\mathcal{S}^{1,i_{1}}\cap\mathcal{S}^{1,i_{2}}}\mathbb{E}_{ \mathcal{S}^{2}}\left[\sum_{x^{2}\in\mathcal{S}^{2}}G_{D}(x^{1}-x^{2})\right]\right]\] \[\lesssim\mathbb{E}\left[\sum_{i_{1}\neq i_{2}=1}^{b_{n}}I_{i_{1}, i_{2}}\log n\right]\lesssim b_{n}^{2}(\log n)^{2}.\]
On the second line, \(\mathbb{E}_{\mathcal{S}^{i}}\) denotes the expectation with respect to the randomness of \(\mathcal{S}^{i}\) only, and \(I_{i_{1},i_{2}}\) is the number of points of intersection between \(\mathcal{S}^{1,i_{1}}\) and \(\mathcal{S}^{1,i_{2}}\). We remark that \(\mathcal{S}^{1,i_{1}}-\mathcal{S}^{1}_{(i_{1}-1)\frac{n}{b_{n}}}\) and \(\mathcal{S}^{1,i_{2}}-\mathcal{S}^{1}_{(i_{2}-1)\frac{n}{b_{n}}}\) are independent random walks. Then, \(\mathbb{E}[I_{i_{1},i_{2}}]\lesssim\log n\) by
a similar computation to [17, Proposition 4.3.1]. Thus, this term will not contribute to the large deviation statistics of \(TL_{n}^{\prime}\) on the scale of \(\frac{nb_{n}}{(\log n)^{2}}\).
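For completeness, one way to make the last claim quantitative (using only Markov's inequality together with the bound \(\mathbb{E}[J_{3}]\lesssim b_{n}^{2}(\log n)^{2}\) just obtained) is
\[\frac{1}{b_{n}}\log\mathbb{P}\left(J_{3}\geq\epsilon\frac{nb_{n}}{(\log n)^{2}}\right)\leq\frac{1}{b_{n}}\log\frac{Cb_{n}(\log n)^{4}}{\epsilon\,n}\lesssim-\frac{\log n}{b_{n}}\longrightarrow-\infty,\]
since \(b_{n}=O(\log\log n)\); this verifies (5.4) for \(J_{3}\), and the same argument applies to \(J_{4}\).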
The quantity \(TL_{n}^{\prime}\) is easier to deal with since we can obtain exact moment asymptotics. Namely,
**Proposition 5.2**.: _Recall \(TL_{n}^{\prime}\) from equation (5.1). Let \(b_{n}=O(\log\log n)\) and \(\lim_{n\to\infty}b_{n}=\infty\). Then, for any \(\theta>0\), we have the following exact moment asymptotics on \(TL_{n}^{\prime}\):_
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\sum_{m=0}^{\infty}\frac{1}{m!}\theta^{m} \left(\frac{\sqrt{b_{n}}\log n}{\sqrt{n}}\right)^{m}\mathbb{E}[(TL_{n}^{ \prime})^{m}]^{1/2}=\tilde{\kappa}(4,2)^{4}\frac{\pi^{4}\theta^{2}}{8}. \tag{5.5}\]
As a consequence of the previous two propositions, one can now prove Theorem 3.1.
Proof of Theorem 3.1.: By [12, Theorem 1.2.7], equation (5.5) is equivalent to showing,
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(TL_{n}^{\prime}\geq \lambda\frac{nb_{n}}{(\log n)^{2}}\right)=-\frac{4}{\pi^{4}}\tilde{\kappa}(4,2 )^{-4}\lambda.\]
Now, since by Proposition 5.1 we have that
\[\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left(TL_{n}\geq\lambda\frac{nb_ {n}}{(\log n)^{2}}\right)=\lim_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{P}\left( TL_{n}^{\prime}\geq\lambda\frac{nb_{n}}{(\log n)^{2}}\right),\]
we complete the proof of the theorem.
The remainder of this section is devoted to deriving upper and lower bounds for the quantity in equation (5.5).
### Large Deviation Upper Bounds
In this section, we establish the upper bound found in Proposition 5.2.
**Proposition 5.3**.: _Let \(b_{n}\) be a sequence satisfying \(b_{n}=O(\log\log n)\) and \(\lim_{n\to\infty}b_{n}=\infty\). Then, for any \(\theta>0\), we have,_
\[\limsup_{n\to\infty}\frac{1}{b_{n}}\log\sum_{m=0}^{\infty}\frac{1}{m!}\theta^{ m}\left(\frac{\sqrt{b_{n}}\log n}{\sqrt{n}}\right)^{m}\mathbb{E}[(TL_{n}^{ \prime})^{m}]^{1/2}\leq\tilde{\kappa}(4,2)^{4}\frac{\pi^{4}\theta^{2}}{8}.\]
The proposition above is an immediate consequence of the following lemma and claim.
**Claim 5.4**.: _There exists some constant \(C>0\) such that for all \(n,m>0\), we have that,_
\[\mathbb{E}[(TL_{n}^{\prime})^{m}]\leq C^{m}m!\left(\frac{n}{(\log n)^{2}} \right)^{m}. \tag{5.6}\]
The proof of the above claim will be postponed until later. We now present the second necessary lemma.
**Lemma 5.5**.: _For any \(\theta>0\),_
\[\limsup_{n\to\infty}\frac{1}{b_{n}}\log\sum_{m=0}^{\infty}\frac{1}{m!}\theta^{ m}\left(\frac{\sqrt{b_{n}}\log n}{\sqrt{n}}\right)^{m}\mathbb{E}[(TL_{n}^{ \prime})^{m}]^{1/2}\leq\tilde{\kappa}(4,2)^{4}\frac{\pi^{4}\theta^{2}}{8}.\]
Proof.: Let \((B^{1}_{s})_{s\geq 0}\) and \((B^{2}_{s})_{s\geq 0}\) be independent Brownian motions in dimension \(d=4\). The bound in (5.6) and [7, (2.3)] are needed to ensure that one can apply dominated convergence to the terms \(\mathbb{E}\left[\left(\frac{(\log n)^{2}}{n}TL^{\prime}_{n}\right)^{m}\right]\) when needed, and replace them with the term:
\[\left(\frac{\pi^{4}}{4}\right)^{m}\mathbb{E}\left[\left(\int_{0}^{1}\int_{0}^{ 1}G(B^{1}_{t}-B^{2}_{s})\mathrm{d}t\mathrm{d}s\right)^{m}\right].\]
The reason this can be done is that
\[\frac{(\log n)^{2}}{n}TL^{\prime}_{n}\stackrel{{\mathcal{D}}}{{ \Longrightarrow}}\frac{\pi^{4}}{4}\int_{0}^{1}\int_{0}^{1}G(B^{1}_{t}-B^{2}_{ s})\mathrm{d}t\mathrm{d}s\]
following the proof of [4, Proposition 6.1].
We can follow the proof of [12, Theorem 7.2.1] to derive the appropriate upper bound. Finally, by Remark A.12, we obtain our desired constant.
It is manifest that Proposition 5.3 is a consequence of Claim 5.4 and Lemma 5.5. We devote the rest of this subsection to deriving Claim 5.4.
#### 5.2.1. A proof of Claim 5.4
Our first remark is that the quantity \(TL^{\prime}_{n}\) is less than,
\[\begin{split} TL_{n,\alpha}:=\sum_{i,j=1}^{n}&\mathbb{ P}(R^{\prime}_{\mathcal{S}^{1}_{i}}\cap\mathcal{S}^{1}[i-n^{\alpha},i+n^{ \alpha}]\cap\mathcal{S}^{1}=\emptyset)G_{D}(\mathcal{S}^{1}_{i}-\mathcal{S}^{ 2}_{j})\\ &\times\mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j}}\cap\mathcal{S} ^{2}[j-n^{\alpha},j+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset).\end{split} \tag{5.7}\]
We will analyze the moments of \(TL_{n,\alpha}\) via a subadditivity argument along with a careful moment analysis. Our first subadditivity argument allows us to reduce our moment analysis of \(TL_{n,\alpha}\) to a slightly weaker analysis.
**Lemma 5.6**.: _If one knows that there exists some constant \(C\) such that for all \(n\) and \(m\) that_
\[\mathbb{E}[(TL_{n,\alpha})^{m}]\leq C^{m}(m!)^{2}\left(\frac{n}{(\log n)^{2}} \right)^{m}, \tag{5.8}\]
_then there is some other constant \(C^{\prime}\) such that,_
\[\mathbb{E}[(TL_{n,\alpha})^{m}]\leq(C^{\prime})^{m}(m!)\left(\frac{n}{(\log n) ^{2}}\right)^{m}. \tag{5.9}\]
Proof.: We start by using a subadditivity argument. Recall that \(G_{D}=\tilde{G}_{D}*\tilde{G}_{D}\). To match the notation of [12, Chapter 6.1], we also define
\[\mathcal{F}^{a}_{\mathcal{S}(n^{\prime},n]}=\sum_{i=n^{\prime}}^{n}\tilde{G}_ {D}(\mathcal{S}_{i}-a)\mathbb{P}(R^{\prime}_{\mathcal{S}_{i}}\cap\mathcal{S}[ i-(n-n^{\prime})^{\alpha},i+(n-n^{\prime})^{\alpha}]\cap\mathcal{S}(n^{\prime},n ]=\emptyset).\]
The main thing to observe about this function is that,
\[TL_{n,\alpha}=\sum_{a\in\mathbb{Z}^{4}}\mathcal{F}^{a}_{\mathcal{S}^{1}[1,n]} \mathcal{F}^{a}_{\mathcal{S}^{2}[1,n]}.\]
Furthermore, it is trivially true that for times \(t<s\),
\[\mathcal{F}^{a}_{\mathcal{S}[1,s]}\leq\mathcal{F}^{a}_{\mathcal{S}[1,t]}+ \mathcal{F}^{a}_{\mathcal{S}(t,s]}\]
and \(\mathcal{F}^{a}_{\mathcal{S}}\) has the translation symmetry,
\[\mathcal{F}^{a+z}_{\mathcal{S}+z}=\mathcal{F}^{a}_{\mathcal{S}}.\]
For these reasons, we can apply all the results of [12, Section 6.1]. In particular, we can apply the argument of [12, Theorem 6.2.1].
It remains to prove equation (5.8).
**Lemma 5.7**.: _Equation (5.8) holds. Namely, there is a constant such that for all \(n\) and \(m\) we have that,_
\[\mathbb{E}[(TL_{n,\alpha})^{m}]\leq C^{m}(m!)^{2}\left(\frac{n}{(\log n)^{2}} \right)^{m}.\]
Before we start proving the above lemma, we will finish the proof of Claim 5.4.
Proof of Claim 5.4.: Since \(TL_{n}^{\prime}\leq TL_{n,\alpha}\), we have by Lemmas 5.7 and 5.6 that
\[\mathbb{E}[(TL_{n}^{\prime})^{m}]\leq\mathbb{E}[(TL_{n,\alpha})^{m}]\leq C^{m}m!\left(\frac{n}{(\log n)^{2}}\right)^{m}.\]
This is exactly what was desired.
We now return to the proof of Lemma 5.7.
#### 5.2.2. The proof of Lemma 5.7
To show Lemma 5.7, we first need the following claim:
**Claim 5.8**.: _Define_
\[\overline{TL}_{n,\alpha}^{m}:=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m},\,j_{1},\ldots,j_{m}\\ |i_{a}-i_{b}|\geq n^{3\alpha}\,\forall a\neq b,\ |j_{a}-j_{b}|\geq n^{3\alpha}\,\forall a\neq b\end{subarray}}\prod_{k=1}^{m}\mathbb{P}(R^{\prime}_{\mathcal{S}^{1}_{i_{k}}}\cap\mathcal{S}^{1}[i_{k}-n^{\alpha},i_{k}+n^{\alpha}]\cap\mathcal{S}^{1}=\emptyset)\] \[\quad\times G_{D}(\mathcal{S}^{1}_{i_{k}}-\mathcal{S}^{2}_{j_{k}})\mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j_{k}}}\cap\mathcal{S}^{2}[j_{k}-n^{\alpha},j_{k}+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset).\]
_Then, there exists some constant \(B\) such that,_
\[\mathbb{E}[\overline{TL}_{n,\alpha}^{m}]\leq(m!)^{2}B^{m}\frac{n^{m}}{(\log n) ^{2m}}.\]
We will prove this claim after the proof of Lemma 5.7.
Proof of Lemma 5.7.: In what follows, the constant \(C\) may not remain the same from line to line. Since \(TL_{n,\alpha}\leq\sum_{i,j=1}^{n}G_{D}(\mathcal{S}^{1}_{i}-\mathcal{S}^{2}_{j})\), it is clear that there is some constant \(C\) such that
\[\mathbb{E}[(TL_{n,\alpha})^{m}]\leq\mathbb{E}\left[\left(\sum_{i,j=1}^{n}G_{D} (\mathcal{S}^{1}_{i}-\mathcal{S}^{2}_{j})\right)^{m}\right]\leq C^{m}m!n^{m}.\]
This latter estimate immediately follows from the large deviation statistics of \(\sum_{i,j=1}^{n}G_{D}(\mathcal{S}^{1}_{i}-\mathcal{S}^{2}_{j})\) from [13, Lemma 4.1]. Now, observe that when \(m\geq(\log n)^{2}\), one has that
\[m!\geq m^{m}e^{-m}\geq(\log n)^{2m}e^{-m}.\]
Thus, for \(m\geq(\log n)^{2}\), we have that,
\[C^{m}m!n^{m}\leq(eC)^{m}(m!)^{2}\left(\frac{n}{(\log n)^{2}}\right)^{m}.\]
It suffices to prove an upper bound for moments when \(m\leq(\log n)^{2}\).
_Bounding the moments when \(m\leq(\log n)^{2}\)_
We will show that there exists a constant \(C\) such that,
\[\mathbb{E}[(TL_{n,\alpha})^{m}]\leq(m!)^{2}C^{m}\frac{n^{m}}{(\log n)^{2m}}\]
by induction on \(m\).
Since the points \(i_{k}\) are all spaced far apart, we can exploit the fact that the probability terms \(\mathbb{P}(R^{\prime}_{\mathcal{S}^{1}_{i_{k}}}\cap\mathcal{S}^{1}[i_{k}-n^{\alpha},i_{k}+n^{\alpha}]\cap\mathcal{S}^{1}=\emptyset)\) should be rather independent of each other. We will return to the proof of the claim later. Assuming the claim, the moments of \(TL_{n,\alpha}\) can be bounded from above by
\[\mathbb{E}[(TL_{n,\alpha})^{m}] \leq\mathbb{E}[\overline{TL}^{m}_{n,\alpha}]\] \[+2m^{2}\mathbb{E}\bigg{[}\sum_{\begin{subarray}{c}i_{1},\ldots,i_ {m}\\ |i_{1}-i_{2}|\leq n^{3\alpha}\end{subarray}}\sum_{j_{1},\ldots,j_{m}}\prod_{k= 1}^{m}\mathbb{P}(R^{\prime}_{\mathcal{S}^{1}_{i_{k}}}\cap\mathcal{S}^{1}[i_{k} -n^{\alpha},i_{k}+n^{\alpha}]\cap\mathcal{S}^{1}=\emptyset)\] \[\times G_{D}(\mathcal{S}^{1}_{i_{k}}-\mathcal{S}^{2}_{j_{k}}) \mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j_{k}}}\cap\mathcal{S}^{2}[j_{k}-n^{ \alpha},j_{k}+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset)\bigg{]}. \tag{5.10}\]
Namely, if there is a term in the \(m\)th moment of \(TL_{n,\alpha}\) that is not already contained in the term \(\overline{TL}^{m}_{n,\alpha}\), there must be some pair of points \((i_{a},i_{b})\) or \((j_{a},j_{b})\) that are within distance \(n^{3\alpha}\) of each other. By symmetry, we may assume that the two points are \(i_{1}\) and \(i_{2}\). There are at most \(2m^{2}\) such choices of pairs \((i_{a},i_{b})\) or \((j_{a},j_{b})\). We will now bound the moment of the second term above.
If we could only sum over the terms \(i_{2},\ldots,i_{m}\) and \(j_{2},\ldots,j_{m}\), then this would be the \((m-1)\)th moment of \(TL_{n,\alpha}\). We could then apply induction to this quantity. The main idea is that if we fix \(i_{2}\), there are at most \(2n^{3\alpha}\) choices of \(i_{1}\). Thus, intuitively, this term should be no more than \(n^{3\alpha}\) times \(\mathbb{E}[(TL_{n,\alpha})^{m-1}]\), up to constants. The problem is to deal with the sum over \(j_{1}\).
Observe the following,
\[\mathbb{E}\bigg{[}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m}\\ |i_{1}-i_{2}|\leq n^{3\alpha}\end{subarray}}\sum_{j_{1},\ldots,j_{m}}\prod_{k=1} ^{m}\mathbb{P}(R^{\prime}_{\mathcal{S}^{1}_{i_{k}}}\cap\mathcal{S}^{1}[i_{k}-n^ {\alpha},i_{k}+n^{\alpha}]\cap\mathcal{S}^{1}=\emptyset)\] \[\times G_{D}(\mathcal{S}^{1}_{i_{k}}-\mathcal{S}^{2}_{j_{k}}) \mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j_{k}}}\cap\mathcal{S}^{2}[j_{k}-n^{ \alpha},j_{k}+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset)\bigg{]}\] \[\leq\sum_{k=2}^{m}\mathbb{E}\bigg{[}\sum_{\begin{subarray}{c}i_{1 },\ldots,i_{m}\\ |i_{1}-i_{2}|\leq n^{3\alpha}\end{subarray}}\sum_{\begin{subarray}{c}j_{1}, \ldots,j_{m}\\ |j_{1}-j_{k}|\leq n^{3\alpha}\end{subarray}}\prod_{k=1}^{m}\mathbb{P}(R^{\prime }_{\mathcal{S}^{1}_{i_{k}}}\cap\mathcal{S}^{1}[i_{k}-n^{\alpha},i_{k}+n^{ \alpha}]\cap\mathcal{S}^{1}=\emptyset)\] \[\times G_{D}(\mathcal{S}^{1}_{i_{k}}-\mathcal{S}^{2}_{j_{k}}) \mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j_{k}}}\cap\mathcal{S}^{2}[j_{k}-n^{ \alpha},j_{k}+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset)\bigg{]}\] \[+\mathbb{E}\bigg{[}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m}\\ |i_{1}-i_{2}|\leq n^{3\alpha}\end{subarray}}\sum_{\begin{subarray}{c}j_{1}, \ldots,j_{m}\\ |j_{1}-j_{k}|\geq n^{3\alpha}\forall k\end{subarray}}\prod_{k=1}^{m}\mathbb{P}( R^{\prime}_{\mathcal{S}^{1}_{i_{k}}}\cap\mathcal{S}^{1}[i_{k}-n^{\alpha},i_{k}+n^{ \alpha}]\cap\mathcal{S}^{1}=\emptyset)\] \[\times G_{D}(\mathcal{S}^{1}_{i_{k}}-\mathcal{S}^{2}_{j_{k}}) \mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j_{k}}}\cap\mathcal{S}^{2}[j_{k}-n^{ \alpha},j_{k}+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset)\bigg{]}. \tag{5.11}\]
If we bound the product
\[\mathbb{P}(R^{\prime}_{\mathcal{S}^{1}_{i_{1}}}\cap\mathcal{S}^{1}[i_{1}-n^{ \alpha},i_{1}+n^{\alpha}]\cap\mathcal{S}^{1}=\emptyset)G_{D}(\mathcal{S}^{1}_ {i_{1}}-\mathcal{S}^{2}_{j_{1}})\mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j_{1} }}\cap\mathcal{S}^{2}[j_{1}-n^{\alpha},j_{1}+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset)\]
by 1, we see that the first term on the right hand side above in equation (5.11) can indeed be bounded by \(\lesssim mn^{6\alpha}\mathbb{E}[(TL_{n,\alpha})^{m-1}]\).
To deal with the second term, we do the following. First, fix the terms \(i_{2},\ldots,i_{m}\) and \(j_{2},\ldots,j_{m}\). Without loss of generality, we can assume that we order \(j_{2}\leq j_{3}\leq\ldots\leq j_{m-1}\leq j_{m}\), and that \(j_{2}\leq j_{1}\leq j_{3}\). (We can apply similar logic regardless of the relative position of \(j_{1}\) in the ordering \(j_{2}\leq\ldots\leq j_{m}\).) Notice that upon conditioning on the values of \(\mathcal{S}^{2}_{j_{2}+n^{\alpha}}\) and \(\mathcal{S}^{2}_{j_{3}-n^{\alpha}}\), the walk \(\mathcal{S}^{2}[j_{2}+n^{\alpha},j_{3}-n^{\alpha}]\) becomes independent of the rest of the walk. We exploit this fact by using that
\[\mathbb{E}_{\mathcal{S}^{2}}\left[\sum_{j_{2}+n^{\alpha}\leq j_{1}\leq j_{3}-n^{ \alpha}}G_{D}(\mathcal{S}^{1}_{i_{1}}-\mathcal{S}^{2}_{j_{1}})\bigg{|} \mathcal{S}^{2}_{j_{2}+n^{\alpha}}=x,\mathcal{S}^{2}_{j_{3}-n^{\alpha}}=y \right]\lesssim\log n \tag{5.12}\]
for any pairs of values \(x\) and \(y\) by Lemma A.10. The expectation above is only taken over the random walk \(\mathcal{S}^{2}\). (Note, here we are bounding the probability term \(\mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j_{1}}}\cap\mathcal{S}^{2}[j_{1}-n^{ \alpha},j_{1}+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset)\) by 1 to simplify further computations.)
As a consequence, we see that we have,
\[\mathbb{E}\bigg{[}\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m}\\ |i_{1}-i_{2}|\leq n^{3\alpha}\end{subarray}}\sum_{\begin{subarray}{c}j_{1}, \ldots,j_{m}\\ |j_{1}-j_{k}|\geq n^{3\alpha}\forall k\end{subarray}}\prod_{k=1}^{m}\mathbb{P}( R^{\prime}_{\mathcal{S}^{1}_{i_{k}}}\cap\mathcal{S}^{1}[i_{k}-n^{\alpha},i_{k}+n^{ \alpha}]\cap\mathcal{S}^{1}=\emptyset)\] \[\times G_{D}(\mathcal{S}^{1}_{i_{k}}-\mathcal{S}^{2}_{j_{k}}) \mathbb{P}(R^{\prime}_{\mathcal{S}^{2}_{j_{k}}}\cap\mathcal{S}^{2}[j_{k}-n^{ \alpha},j_{k}+n^{\alpha}]\cap\mathcal{S}^{2}=\emptyset)\bigg{]}\] \[\lesssim mn^{3\alpha}(\log n)\mathbb{E}[TL_{n,\alpha}^{m-1}] \lesssim mC^{m-1}n^{3\alpha}(\log n)(m-1)!^{2}\frac{n^{m-1}}{(\log n)^{2m-2}}.\]
The factor of \(n^{3\alpha}\) comes from the possible choices of \(i_{1}\) (given its distance from \(i_{2}\)), and the factor of \(m\) comes from the fact that \(j_{1}\) can be located in between any of the \(m\) regions \([j_{i},j_{i+1}]\) in the ordering \(j_{2}\leq j_{3}\leq\ldots\leq j_{m}\). At the final step, we applied the induction hypothesis.
Returning to equation (5.11), we see that,
L.H.S. of (5.11)
\[\lesssim mn^{6\alpha}C^{m-1}(m-1)!^{2}\frac{n^{m-1}}{(\log n)^{2m-2}}+mn^{3\alpha}(\log n)(m-1)!^{2}C^{m-1}\frac{n^{m-1}}{(\log n)^{2m-2}}.\]
Substituting this back into equation (5.10), we have,
\[\mathbb{E}[(TL_{n,\alpha})^{m}]\leq(m!)^{2}B^{m}\frac{n^{m}}{(\log n)^{2m}}+Km^{3}n^{6\alpha}C^{m-1}(m-1)!^{2}\frac{n^{m-1}}{(\log n)^{2m-2}}.\]
Notice that the right hand side is less than \(C^{m}(m!)^{2}\frac{n^{m}}{(\log n)^{2m}}\) provided,
\[1\geq\left(\frac{B}{C}\right)^{m}+KC^{-1}n^{6\alpha-1}(\log n)^{4}\geq\left(\frac{B}{C}\right)^{m}+KC^{-1}mn^{6\alpha-1}(\log n)^{2}.\]
Provided \(C\) is chosen large relative to \(B\) and the universal constant \(K\), there is a value of \(C\) such that the above inequality will be satisfied for all \(n\) and \(m\leq(\log n)^{2}\). This completes the induction provided that Claim 5.8 holds.
We now complete the proof of Claim 5.8.
Proof of Claim 5.8.: Without loss of generality, we may order the times as \(i_{1}\leq i_{2}\leq i_{3}\ldots\leq i_{m}\). Our first step is to condition on the values of the random walk at specific points as \(\mathcal{S}^{1}_{i_{k}}=x^{c}_{k}\), \(\mathcal{S}^{1}_{i_{k}+n^{\alpha}}=x^{r}_{k}\), \(\mathcal{S}^{1}_{i_{k}-n^{\alpha}}=x^{l}_{k}\) and \(\mathcal{S}^{2}_{j_{k}}=y^{c}_{k}\), \(\mathcal{S}^{2}_{j_{k}+n^{\alpha}}=y^{r}_{k}\), \(\mathcal{S}^{2}_{j_{k}-n^{\alpha}}=y^{l}_{k}\). With the endpoints of the neighborhoods \(\mathcal{S}^{1}[i_{k}-n^{\alpha},i_{k}+n^{\alpha}]\) specified, the neighborhoods involved in the probability terms above become independent of each other. This is the key observation used to simplify the computations that proceed. In what follows, we let \(p_{t}(x)\) denote the probability that a SRW transitions to the point \(x\) at time \(t\).
To simplify what proceeds, we introduce the following notation,
\[NI(\mathcal{S},i,x^{c},x^{r},x^{l}):=\mathbb{E}[\mathbb{P}(R^{\prime}_{ \mathcal{S}_{i}}\cap\mathcal{S}[i-n^{\alpha},i+n^{\alpha}]\cap\mathcal{S}= \emptyset)|\mathcal{S}_{i}=x^{c},\mathcal{S}_{i+n^{\alpha}}=x^{r},\mathcal{S}_ {i-n^{\alpha}}=x^{l}]. \tag{5.13}\]
This finds the expected value of the probability that an independent random walk \(R^{\prime}_{\mathcal{S}_{i}}\) starting at \(\mathcal{S}_{i}\) does not intersect the portion of the random walk \(\mathcal{S}[i-n^{\alpha},i+n^{\alpha}]\) conditioned on the random walk being at points \(x^{c}\) at time \(i\), \(x^{r}\) at time \(i+n^{\alpha}\) and \(x^{l}\) at time \(i-n^{\alpha}\). If it is not necessary to condition \(x^{r}\) and \(x^{l}\), we will slightly abuse notation and denote this by dropping the appropriate argument on the left hand side. Let \(\Pi_{m}\) be the collection of all permutations on \(m\) points. Note that \(c\) as a superscript is used as a shorthand for 'center' while \(r\) and \(l\) are 'right' and 'left' respectively. We can write the expectation of \(\mathbb{E}[\overline{TL}^{m}_{n,\alpha}]\) as,
\[(m!)\sum_{\sigma\in\Pi_{m}}\sum_{\begin{subarray}{c}1\leq i_{1}\leq i _{2}-n^{3\alpha}\leq\ldots\\ \leq i_{m}-(m-1)n^{3\alpha}\leq n-(m-1)n^{3\alpha}\leq j_{\sigma(m)}-(m-1)n^{3 \alpha}\leq n-(m-1)n^{3\alpha}\end{subarray}}\sum_{\begin{subarray}{c}x_{k}^ {r},x_{k}^{l},x_{k}^{c},\\ y_{k}^{r},y_{k}^{r}\in\mathcal{D}^{4},\forall k\end{subarray}}\] \[p_{i_{1}}(x_{1}^{c})p_{n^{\alpha}}(x_{1}^{r}-x_{1}^{c})NI( \mathcal{S}^{1},i_{1},x_{1}^{c},x_{1}^{r})\] \[\times p_{j_{\sigma}(1)}(y_{\sigma(1)}^{c})p_{n^{\alpha}}(y_{ \sigma(1)}^{r}-y_{\sigma(1)}^{c})NI(\mathcal{S}^{2},j_{\sigma(1)},y_{\sigma(1) }^{c},y_{\sigma(1)}^{r})\] \[\prod_{k=2}^{m-1}NI(\mathcal{S}^{1},i_{k},x_{k}^{c},x_{k}^{r},x_{ k}^{l})p_{n^{\alpha}}(x_{k}^{c}-x_{k}^{r})p_{n^{\alpha}}(x_{k}^{l}-x_{k}^{c})p_{ i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\] \[\times NI(\mathcal{S}^{2},j_{\sigma(k)},y_{\sigma(k)}^{c},y_{ \sigma(k)}^{r},y_{\sigma(k)}^{l})p_{n^{\alpha}}(y_{\sigma(k)}^{c}-y_{\sigma(k )}^{r})p_{n^{\alpha}}(y_{\sigma(k)}^{l}-y_{\sigma(k)}^{c})\] \[p_{y_{\sigma(k+1)}-y_{\sigma(k)}-2n^{\alpha}}(y_{\sigma(k+1)}^{ l}-y_{\sigma(k)}^{r})\] \[NI(\mathcal{S}^{1},i_{m},x_{m}^{c},x_{m}^{l})p_{n^{\alpha}}(x_{m }^{c}-x_{m}^{l})NI(\mathcal{S}^{2},j_{\sigma(m)},y_{\sigma(m)}^{c},y_{\sigma(m )}^{r})p_{n^{\alpha}}(y_{\sigma(m)}^{c}-y_{\sigma(m)}^{l})\] \[\times\prod_{k=1}^{m}G_{D}(x_{k}^{c}-y_{k}^{c}). \tag{5.14}\]
The main observation to notice now is that if we were able to freely sum over the values \(x_{k}^{r},x_{k}^{l}\), then we would have that, by (4.8),
\[\sum_{x_{k}^{r},x_{k}^{l}}NI(\mathcal{S}^{1},i_{k},x_{k}^{c},x_{k }^{r},x_{k}^{l})p_{n^{\alpha}}(x_{k}^{c}-x_{k}^{r})p_{n^{\alpha}}(x_{k}^{l}-x_ {k}^{c})\] \[= \mathbb{E}[\mathbb{P}(R_{\mathcal{S}_{i}}^{\prime}\cap\mathcal{S }[i-n^{\alpha},i+n^{\alpha}]=\emptyset)]\lesssim\frac{1}{\log(\min\{n-i+2,i+1 \}^{\alpha})},\]
because this just computes the averaged probability that an infinite random walk does not intersect the union of two independent random walks of length \(n^{\alpha}\) starting from the origin.
The only term that prevents us from freely summing over \(x_{k}^{r}\) and \(x_{k}^{l}\) for all \(k\) is the term \(p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\). However, if we could bound this term from above by a constant times \(p_{i_{k+1}-i_{k}}(x_{k+1}^{c}-x_{k}^{c})\), then we would be able to freely sum over the variables \(x_{k}^{r}\) and \(x_{k}^{l}\) as desired. This is what we will argue now.
It is clear that \(\|x_{k}^{l}-x_{k}^{c}\|\leq n^{\alpha}\) and \(\|x_{k}^{r}-x_{k}^{c}\|\leq n^{\alpha}\). Provided that \(\|x_{k+1}^{l}-x_{k}^{r}\|\leq(i_{k+1}-i_{k})^{1/2+\epsilon}\) for some small \(\epsilon\), we can apply the local central limit theorem as in [18, Theorem 2.3.12, equation (2.46)], along with the fact that \(i_{k+1}-i_{k}\geq n^{3\alpha}\), to show that,
\[p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\leq(1+o(1))p_{i_{k+1}-i_{k} }(x_{k+1}^{c}-x_{k}^{c}).\]
Otherwise, the probability that \(\|x_{k+1}^{l}-x_{k}^{r}\|\geq(i_{k+1}-i_{k})^{1/2+\epsilon}\) is exponentially unlikely with probability at most \(\exp[-n^{6\alpha\epsilon}]\). Thus, we always have the bound,
\[p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\] \[\leq (1+o(1))p_{i_{k+1}-i_{k}}(x_{k+1}^{c}-x_{k}^{c})\] \[+\mathbb{1}[\|x_{k+1}^{l}-x_{k}^{r}\|\geq(i_{k+1}-i_{k})^{1/2+ \epsilon}]p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r}).\]
Similar statements also hold for \(j\) and \(y\).
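The exponential bound quoted for the event \(\|x_{k+1}^{l}-x_{k}^{r}\|\geq(i_{k+1}-i_{k})^{1/2+\epsilon}\) is the standard sub-Gaussian tail for the simple random walk; a sketch of the estimate being invoked:

\[\mathbb{P}\big{(}\|\mathcal{S}_{t}\|\geq t^{1/2+\epsilon}\big{)}\lesssim\exp\big{[}-c\,t^{2\epsilon}\big{]},\]

applied with \(t=i_{k+1}-i_{k}-2n^{\alpha}\geq n^{3\alpha}/2\), so that \(t^{2\epsilon}\) is at least of order \(n^{6\alpha\epsilon}\) (up to the constant \(c\) in the exponent).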
Furthermore, this term can be substituted into equation (5.14) by replacing each appearance of \(p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\) with the right hand side above. We can expand each of these products to get a sum over \(2^{4m}\) terms (in each of these terms,
\(p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\) is replaced with either \(p_{i_{k+1}-i_{k}}(x_{k+1}^{c}-x_{k}^{c})\) or \(\mathbbm{1}[\|x_{k+1}^{l}-x_{k}^{r}\|\geq(i_{k+1}-i_{k})^{1/2+\epsilon}]p_{i_{k +1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\)). There is only one of these terms in which each \(p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\) is replaced with \(p_{i_{k+1}-i_{k}}(x_{k+1}^{c}-x_{k}^{c})\).
We remark that if even one of the \(p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\) were replaced with \(\mathbbm{1}[\|x_{k+1}^{l}-x_{k}^{r}\|\geq(i_{k+1}-i_{k})^{1/2+\epsilon}]p_{i_{ k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\), then such a term would be exponentially suppressed. Indeed, we could trivially bound all the terms of the form \(G_{D}(x-y)\) and \(\mathbb{P}(R_{\mathcal{S}_{t}}^{\prime}\cap\mathcal{S}[t-n^{\alpha},t+n^{ \alpha}]\cap\mathcal{S}=\emptyset)\) by a constant. Performing a trivial summation shows that this term can be no more than \(n^{2m}\exp[-n^{6\alpha\epsilon}]\ll(m!)^{2}\frac{n^{m}}{(\log n)^{2m}}\) provided \(m\leq(\log n)^{2}\). Furthermore, there are no more than \(2^{4m}\) such terms. Thus, these terms are clearly negligible.
Now we consider the term in which all \(p_{i_{k+1}-i_{k}-2n^{\alpha}}(x_{k+1}^{l}-x_{k}^{r})\) are replaced with \((1+o(1))p_{i_{k+1}-i_{k}}(x_{k+1}^{c}-x_{k}^{c})\). In such a term, we can finally sum over \(x_{k}^{r},x_{k}^{l},y_{k}^{r},y_{k}^{l}\) for all \(k\). Such a term will be bounded by,
\[\left(\frac{C}{\log n}\right)^{2m}m!\sum_{\sigma\in\Pi_{m}}\sum_{ i_{1}\leq\ldots\leq i_{m}}\sum_{j_{1}\leq\ldots\leq j_{m}}\sum_{x_{1}^{c}, \ldots,x_{k}^{c}}\sum_{y_{1}^{c},\ldots,y_{k}^{c}}p_{i_{1}}(x_{1}^{c})p_{j_{ \sigma(1)}}(y_{\sigma(1)}^{c})\] \[\times\prod_{k=1}^{m-1}p_{i_{k+1}-i_{k}}(x_{k+1}-x_{k})p_{j_{ \sigma(k+1)}-j_{\sigma(k)}}(y_{\sigma(k+1)}-y_{\sigma(k)})\prod_{k=1}^{m}G_{D} (x_{k}^{c}-y_{k}^{c}).\]
However, the last term computes the \(m\)-th moment of \(\sum_{i=1}^{n}\sum_{j=1}^{n}G_{D}(\mathcal{S}_{i}^{1}-\mathcal{S}_{j}^{2})\). This is bounded by \(C^{m}m!n^{m}\) for some \(C>0\). Thus, we can bound the line above by
\[m!\left(\frac{Cn}{(\log n)^{2}}(1+o(1))\right)^{m}.\]
This completes the proof of the claim.
### Lower Bound for the Large Deviation of \(TL\)
In this section, our goal is to establish a lower bound for
\[\frac{1}{b_{n}}\log\sum_{m=0}^{\infty}\frac{1}{m!}\theta^{m}\left(\frac{b_{n}( \log n)^{2}}{n}\right)^{m/2}\left(\mathbb{E}[(TL_{n}^{\prime})^{m}]\right)^{1 /2}.\]
**Theorem 5.9**.: _If \(b_{n}=\text{O}(\log\log n)\) and satisfies \(\lim_{n\to\infty}b_{n}=\infty\), one has that for any \(\theta>0\),_
\[\liminf_{n\to\infty}\frac{1}{b_{n}}\log\sum_{m=0}^{\infty}\frac{1}{m!}\theta^{ m}\left(\frac{b_{n}(\log n)^{2}}{n}\right)^{m/2}\left(\mathbb{E}[(TL_{n}^{ \prime})^{m}]\right)^{1/2}\geq\tilde{\kappa}(4,2)^{4}\frac{\pi^{4}\theta^{2}}{ 8}.\]
Proof.: Recall that we let \(\mathcal{S}^{k,i}=\mathcal{S}^{k}\left[(i-1)\frac{n}{b_{n}},i\frac{n}{b_{n}}\right]\). Without loss of generality, we assume that \(b_{n}\) is odd. First, notice that
\[TL_{n}^{\prime}=\sum_{a\in\mathbb{Z}^{4}}\sum_{i=1}^{b_{n}}\sum_{x^{1}\in \mathcal{S}^{1,i}}\tilde{G}_{D}(x^{1}-a)\mathbb{P}(R_{x^{1}}^{\prime}\cap \mathcal{S}^{1,i}=\emptyset)\sum_{j=1}^{b_{n}}\sum_{x^{2}\in\mathcal{S}^{2,j}} \tilde{G}_{D}(x^{2}-a)\mathbb{P}(R_{x^{2}}^{\prime}\cap\mathcal{S}^{2,j}= \emptyset).\]
If we let
\[\mathcal{G}_{n}(a):=\mathbb{E}\left[\sum_{i=1}^{b_{n}}\sum_{x^{1}\in\mathcal{ S}^{1,i}}\tilde{G}_{D}(x^{1}-a)\mathbb{P}(R_{x^{1}}^{\prime}\cap\mathcal{S}^{1,i}= \emptyset)\right],\]
we see that,
\[\mathbb{E}[(TL^{\prime}_{n})^{m}]=\sum_{a_{1},\ldots,a_{m}}\prod_{i=1}^{m}( \mathcal{G}_{n}(a_{i}))^{2}.\]
If we now let \(f\) be any smooth function with finite support satisfying \(\int_{\mathbb{R}^{4}}f(a)^{2}\mathrm{d}a=1\) and \(C_{f}^{n}:=\sum_{a\in\mathbb{Z}^{4}}f\left(\sqrt{\frac{b_{n}}{n}}a\right)^{2}\), then by the Cauchy-Schwarz inequality we obtain that,
\[\mathbb{E}[(TL^{\prime}_{n})^{m}]^{1/2} =(C_{f}^{n})^{-m/2}\left(\sum_{a_{1},\ldots,a_{m}}\prod_{i=1}^{m}( \mathcal{G}_{n}(a_{i}))^{2}\right)^{1/2}\left(\sum_{a_{1},\ldots,a_{m}}\prod_{ i=1}^{m}f\left(\sqrt{\frac{b_{n}}{n}}a_{i}\right)^{2}\right)^{1/2}\] \[\geq(C_{f}^{n})^{-m/2}\sum_{a_{1},\ldots,a_{m}}\prod_{i=1}^{m} \mathcal{G}_{n}(a_{i})f\left(\sqrt{\frac{b_{n}}{n}}a_{i}\right).\]
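The equality in the first line above holds because the \(f\)-sum factorizes and exactly cancels the normalization; a quick check of this step:

\[\sum_{a_{1},\ldots,a_{m}\in\mathbb{Z}^{4}}\prod_{i=1}^{m}f\left(\sqrt{\frac{b_{n}}{n}}a_{i}\right)^{2}=\left(\sum_{a\in\mathbb{Z}^{4}}f\left(\sqrt{\frac{b_{n}}{n}}a\right)^{2}\right)^{m}=(C_{f}^{n})^{m},\]

so \((C_{f}^{n})^{-m/2}\big{(}\sum_{a_{1},\ldots,a_{m}}\prod_{i}f(\sqrt{b_{n}/n}\,a_{i})^{2}\big{)}^{1/2}=1\), while the inequality in the second line is the Cauchy-Schwarz inequality applied to the \(a\)-sums.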
Defining
\[\mathcal{F}_{n}^{i}:=(C_{f}^{n})^{-1/2}\sum_{x^{1}\in\mathcal{S}^{1,i}}\sum_{ a\in\mathbb{Z}^{4}}\tilde{G}_{D}(x^{1}-a)f\left(\sqrt{\frac{b_{n}}{n}}a\right) \mathbb{P}(R^{\prime}_{x^{1}}\cap\mathcal{S}^{1,i}=\emptyset),\]
we see that
\[\mathbb{E}[(TL^{\prime}_{n})^{m}]^{1/2}\geq\mathbb{E}\left[\left(\sum_{i=1}^ {b_{n}}\mathcal{F}_{n}^{i}\right)^{m}\right].\]
Thus, we see that,
\[\sum_{m=0}^{\infty}\frac{1}{m!}\theta^{m}\left(\frac{b_{n}(\log n)^{2}}{n} \right)^{m/2}\left(\mathbb{E}[(TL^{\prime}_{n})^{m}]\right)^{1/2}\geq\mathbb{ E}\left[\exp\left[\theta\frac{\sqrt{b_{n}}(\log n)}{\sqrt{n}}\sum_{i=1}^{b_{n}} \mathcal{F}_{n}^{i}\right]\right]. \tag{5.15}\]
Furthermore, \(\mathcal{F}_{n}^{i}\) is a function of only the portion \(\mathcal{S}^{1,i}\) of the random walk. Hence, notice that for any \(\epsilon>0\), we have,
\[\frac{1}{b_{n}}\log\mathbb{E}\left[\exp\left[\theta\frac{\sqrt{b _{n}}\log n}{\sqrt{n}}\sum_{i=1}^{b_{n}}\mathcal{F}_{n}^{i}\right]\right]\geq \frac{1}{b_{n}}(1+\epsilon)\log\mathbb{E}\left[\exp\left[\frac{1} {1+\epsilon}\theta\frac{\sqrt{b_{n}}\log n}{\sqrt{n}}\sum_{i=2}^{b_{n}} \mathcal{F}_{n}^{i}\right]\right]\] \[-\frac{1}{b_{n}}\epsilon\log\mathbb{E}\left[\exp\left(-\frac{1+ \epsilon}{\epsilon}\theta\mathcal{F}_{n}^{1}\right)\right].\]
Notice that, for any fixed \(\epsilon\), an upper bound on the large deviation statistics of \(\mathcal{F}_{n}^{1}\) (which can be inherited from an upper bound on the large deviation statistics of \(TL^{\prime}_{\frac{n}{b_{n}}}\) as in equation (5.15)) shows that the term on the right goes to \(0\) as \(n\to\infty\). Finally, we can take \(\epsilon\) to \(0\) to note that the term,
\[\liminf_{n\to\infty}\frac{1}{b_{n}}\log\mathbb{E}\left[\exp\left[ \theta\frac{\sqrt{b_{n}}\log n}{\sqrt{n}}\sum_{i=1}^{b_{n}}\mathcal{F}_{n}^{i} \right]\right]\] \[\geq\lim_{\epsilon\to 0}\liminf_{n\to\infty}\frac{1}{b_{n}}(1+ \epsilon)\log\mathbb{E}\left[\exp\left[\frac{1}{1+\epsilon}\theta\frac{\sqrt{b_ {n}}\log n}{\sqrt{n}}\sum_{i=2}^{b_{n}}\mathcal{F}_{n}^{i}\right]\right].\]
To find the lower bound on the term on the left hand side, it suffices to find a bound for the right hand side. We are now in a very similar situation to that of [12, Theorem 7.1.2]. The functions \(\mathcal{F}_{n}^{i}\) are not exactly in the same format. However,
one can see that \(\mathcal{F}_{n}^{i}\) takes the same role as that of the term \(\sum_{x\in\mathcal{S}^{1,i}}f(\sqrt{\frac{b_{n}}{n}}x)\). Indeed, we can define the operator,
\[\mathfrak{B}_{n}\xi(x):= \mathbb{E}\bigg{(}\exp\bigg{\{}\frac{\sqrt{b_{n}}(\log n)}{\sqrt{n}}\sum_{y-x\in\mathcal{S}[1,nb_{n}^{-1}]}(C_{f}^{n})^{-1/2}\sum_{a\in\mathbb{Z}^{4}}\tilde{G}_{D}(y-a)f\left(\sqrt{\frac{b_{n}}{n}}a\right)\] \[\times\mathbb{P}\left(R_{y-x}^{\prime}\cap\mathcal{S}[1,nb_{n}^{-1}]=\emptyset\right)\bigg{\}}\xi(x+\mathcal{S}_{nb_{n}^{-1}})\bigg{)}.\]
We define \(\xi_{n}\) as the following discretization of \(g\). Namely, \(\xi_{n}(x)=\frac{1}{C_{g}^{1/2}}g(\frac{x}{\sqrt{\det(\Gamma)\sqrt{n}}})\) where \(C_{g}:=\sum_{x\in\mathbb{Z}^{4}}g^{2}(\frac{x}{\sqrt{\det(\Gamma)\sqrt{n}}})\), where \(\Gamma=4^{-1}I\).
This operator is symmetric, and following the proof of [12, Lemma 7.1.3] we can derive the following bound. Let \(g\) be a bounded function on \(\mathbb{R}^{4}\) that is infinitely differentiable and supported on a finite box with \(\int_{\mathbb{R}^{4}}g^{2}(x)\mathrm{d}x=1\). By the Cauchy-Schwarz inequality, there exists a constant \(\delta\) depending only on \(g\) (but not on \(n\)) such that (recall that \(b_{n}-1\) is even)
\[\mathbb{E}\left[\exp\left[\theta\frac{\sqrt{b_{n}}\log n}{\sqrt{n}}\sum_{i=2} ^{b_{n}}\mathcal{F}_{n}^{i}\right]\right]\geq\delta\langle\xi_{n},\mathfrak{B }_{n}^{b_{n}-1}\xi_{n}\rangle\geq\delta\langle\xi_{n},\mathfrak{B}_{n}\xi_{n} \rangle^{b_{n}-1},\]
where \(\langle\xi_{n},\mathfrak{B}_{n}\xi_{n}\rangle\) is given by
\[\langle\xi_{n},\mathfrak{B}_{n}\xi_{n}\rangle =(1+o(1))\int_{\mathbb{R}^{4}}\mathrm{d}xg(x)\] \[\times\mathbb{E}\bigg{(}\exp\bigg{[}\theta\frac{\sqrt{b_{n}}\log n }{\sqrt{n}}\sum_{y\in\mathcal{S}[1,nb_{n}^{-1}]}(C_{f}^{n})^{-1/2}\sum_{a\in \mathbb{Z}^{4}}\tilde{G}_{D}(y+x-a)f\left(\sqrt{\frac{b_{n}}{n}}a\right)\] \[\times\mathbb{P}(R_{y}^{\prime}\cap\mathcal{S}[1,nb_{n}^{-1}]= \emptyset)\bigg{]}g\left(x+\sqrt{\frac{b_{n}}{n}}\mathcal{S}_{nb_{n}^{-1}} \right)\bigg{)}\] \[\to\int_{\mathbb{R}^{4}}\mathrm{d}xg(x)\mathbb{E}\left(\exp\left\{ \int_{0}^{1}\frac{\pi^{2}}{4}(\tilde{G}*f)(x+B(t/4))\mathrm{d}t\right\}g(x+B(1/ 4))\right) \tag{5.16}\]
as \(n\to\infty\), where \(B\) is the \(4\)-dimensional Brownian motion. Note that by Lemmas A.1 and A.2, \(\tilde{G}*f\) is a bounded continuous function and \(\tilde{G}_{D}*f\) converges uniformly to \(2\tilde{G}*f\). The invariance principle then shows the convergence above. By [12, (4.1.25)], taking the logarithm of the right-most side of (5.16), it is equal to
\[\sup_{h\in\mathcal{F}}\bigg{\{}\frac{\pi^{2}}{4}\int_{\mathbb{R}^{4}}\tilde{G} *f(\Gamma^{1/2}x)h(x)^{2}\mathrm{d}x-\frac{1}{2}\int_{\mathbb{R}^{4}}|\nabla h (x)|^{2}\mathrm{d}x\bigg{\}},\]
where \(\mathcal{F}:=\{h:\int h(x)^{2}dx=1,\int|\nabla h(x)|^{2}dx<\infty\}\). Taking the supremum over \(f\) with \(\int f(x)^{2}dx=1\), this is larger than or equal to,
\[\sup_{h\in\mathcal{F}}\bigg{\{}\frac{\pi^{2}}{4}\bigg{(}\int\int_{(\mathbb{R} ^{4})^{2}}G(x-y)h(x)^{2}h(y)^{2}\mathrm{d}x\mathrm{d}y\bigg{)}^{1/2}-\frac{1}{ 8}\int_{\mathbb{R}^{4}}|\nabla h(x)|^{2}\mathrm{d}x\bigg{\}}.\]
Therefore, by the same proof as [1, Proposition 4.1], we obtain the desired result.
Let us explain some steps in the derivation in (5.16). First, we remark that the term inside the exponential has finite expectation. Secondly, we also have the
second moment comparison estimate
\[\mathbb{E}\bigg{[}\frac{\sqrt{b_{n}}\log n}{\sqrt{n}}\sum_{y\in \mathcal{S}[1,nb_{n}^{-1}]}(C_{f}^{n})^{-1/2}\sum_{a\in\mathbb{Z}^{4}}\tilde{G}_ {D}(y+x-a)f\left(\sqrt{\frac{b_{n}}{n}}a\right)\mathbb{P}(R_{y}^{\prime}\cap \mathcal{S}[1,nb_{n}^{-1}]=\emptyset)\] \[-\frac{\pi^{2}}{8}\frac{\sqrt{b_{n}}}{\sqrt{n}}(C_{f}^{n})^{-1/2} \sum_{i=1}^{nb_{n}^{-1}}\sum_{a\in\mathbb{Z}^{4}}\tilde{G}_{D}(\mathcal{S}_{i} +x-a)f\left(\sqrt{\frac{b_{n}}{n}}a\right)\bigg{]}^{2}\to 0 \tag{5.17}\]
as \(n\to\infty\). As before, this follows from computations similar to those found in the proof of Claim 5.8 to allow us to replace the term of \(\mathbb{P}(R_{y}^{\prime}\cap S[1,nb_{n}^{-1}]=\emptyset)\) with \((1+o(1))\frac{\pi^{2}}{8\log n}\) with the aid of [4, Theorem 5.1]. Combining these observations, we see that as \(n\to\infty\)
\[\bigg{|}\int_{\mathbb{R}^{4}}\mathrm{d}xg(x)\mathbb{E}\bigg{(} \exp\bigg{[}\theta\frac{\sqrt{b_{n}}\log n}{\sqrt{n}}\sum_{y\in\mathcal{S}[1, nb_{n}^{-1}]}(C_{f}^{n})^{-1/2}\sum_{a\in\mathbb{Z}^{4}}\tilde{G}_{D}(y+x-a)f \left(\sqrt{\frac{b_{n}}{n}}a\right)\] \[\times\mathbb{P}(R_{y}^{\prime}\cap\mathcal{S}[1,nb_{n}^{-1}]= \emptyset)\bigg{]}g\left(x+\sqrt{\frac{b_{n}}{n}}\mathcal{S}_{nb_{n}^{-1}} \right)\bigg{)}\] \[-\int_{\mathbb{R}^{4}}\mathrm{d}xg(x)\mathbb{E}\bigg{(}\exp\bigg{[} \theta\frac{\pi^{2}}{8}\frac{\sqrt{b_{n}}}{\sqrt{n}}\sum_{i=1}^{nb_{n}^{-1}}(C _{f}^{n})^{-1/2}\sum_{a\in\mathbb{Z}^{4}}\tilde{G}_{D}(\mathcal{S}_{i}+x-a)f \left(\sqrt{\frac{b_{n}}{n}}a\right)\bigg{]}\] \[\times g\left(x+\sqrt{\frac{b_{n}}{n}}\mathcal{S}_{nb_{n}^{-1}} \right)\bigg{)}\bigg{|}\to 0.\]
## Appendix A Green's Function Estimates
In this section, we will establish various technical estimates necessary to show weak convergence of discrete quantities to continuum quantities.
### The property of \(\tilde{G}*f\)
In this subsection, we show that \(\tilde{G}*f\) is a bounded continuous function and establish the uniform convergence \(\tilde{G}_{D}*f\to 2\tilde{G}*f\). We assume that \(f\) is a smooth, bounded function with finite support.
**Lemma A.1**.: _There is some constant such that the following estimates hold uniformly in \(a\),_
\[|(\tilde{G}*f)(a)|\lesssim 1,\quad|(\tilde{G}*f)(a+\kappa)-(\tilde{G}*f)(a)| \lesssim\kappa.\]
Proof.: First see that,
\[(\tilde{G}*f)(a) =\int_{\|e\|\leq 1}f(a-e)\tilde{G}(e)\mathrm{d}e+\int_{\|e\|\geq 1 }f(a-e)\tilde{G}(e)\mathrm{d}e\] \[\leq\sup_{z\in\mathbb{R}^{4}}|f(z)|\int_{\|e\|\leq 1}\tilde{G}(e) \mathrm{d}e+\left[\int_{\mathbb{R}^{4}}f^{2}(a-e)\mathrm{d}e\right]^{1/2} \left[\int_{\|e\|\geq 1}\tilde{G}^{2}(e)\mathrm{d}e\right]^{1/2}.\]
By applying a similar inequality, we also have that,
\[\int_{\mathbb{R}^{4}}\|\nabla f(a-e)\|\tilde{G}(e)\mathrm{d}e\leq\int_{\|e\|\leq 1 }||\nabla f(a-e)||\tilde{G}(e)\mathrm{d}e+\int_{\|e\|\geq 1}\|\nabla f(a-e)\| \tilde{G}(e)\mathrm{d}e\]
\[\leq\sup_{z\in\mathbb{R}^{4}}\|\nabla f(z)\|\int_{\|e\|\leq 1}\tilde{G}(e) \mathrm{d}e+\left[\int_{\mathbb{R}^{4}}||\nabla f(a-e)||^{2}\mathrm{d}e\right] ^{1/2}\left[\int_{\|e\|\geq 1}\tilde{G}^{2}(e)\mathrm{d}e\right]^{1/2}\lesssim 1.\]
Thus, we see that,
\[|(\tilde{G}*f)(a+\kappa)-(\tilde{G}*f)(a)|\leq\int_{\mathbb{R}^{4 }}|f(a+\kappa-e)-f(a-e)|\tilde{G}(e)\mathrm{d}e\] \[\leq\int_{0}^{\kappa}\mathrm{d}t\int_{\mathbb{R}^{4}}\left|\left< \nabla f(a+t-e),\frac{\kappa}{||\kappa||}\right>\right|\tilde{G}(e)\mathrm{d}e\] \[\leq\int_{0}^{\kappa}\mathrm{d}t\int_{\mathbb{R}^{4}}||\nabla f (a+t-e)||\tilde{G}(e)\mathrm{d}e\lesssim\kappa\]
and we obtain the desired result.
To introduce the next lemma, we define,
\[(\tilde{G}_{D}*f)(\sqrt{n}a)=(C_{f})^{-1/2}\frac{1}{n^{2}}\sum_{z\in\frac{1}{ \sqrt{n}}\mathbb{Z}^{4}}G_{D}(\sqrt{n}(a-z))f(z),\quad a\in\frac{1}{\sqrt{n}} \mathbb{Z}^{4}\]
and
\[C_{f}=\frac{1}{n^{2}}\sum_{z\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}}f^{2}(z).\]
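Since the lattice \(\frac{1}{\sqrt{n}}\mathbb{Z}^{4}\) has cell volume \(n^{-2}\), the normalization \(C_{f}\) is simply a Riemann sum for the squared \(L^{2}\) norm of \(f\); for a continuous, compactly supported \(f\) this gives, as a sketch of the limiting behaviour,

\[C_{f}=\frac{1}{n^{2}}\sum_{z\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}}f^{2}(z)\;\xrightarrow[n\to\infty]{}\;\int_{\mathbb{R}^{4}}f^{2}(z)\,\mathrm{d}z.\]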
**Lemma A.2**.: _Uniformly in \(a\), we have that as \(n\to\infty\),_
(A.1) \[\left|2\int_{\mathbb{R}^{4}}\tilde{G}(\lfloor a\rfloor_{n}-e)f(e)\text{de}-(C _{f})^{-1/2}\frac{1}{n^{2}}\sum_{e\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}}n^{3/2} \tilde{G}_{D}(\sqrt{n}(\lceil a\rceil_{n}-e))f(e)\right|=o(1).\]
We start with a few intermediate lemmas. The first lemma allows us to reduce the domain of integration of \(\tilde{G}*f(a)\) from all of \(\mathbb{R}^{4}\), to an integration over a region of finite support.
**Lemma A.3**.: _Fix some \(\delta_{2}>\delta_{1}>0\). Let \(\chi\) be a smooth positive function supported on \([-n^{\delta_{2}},n^{\delta_{2}}]^{4}\), bounded by \(1\), and such that \(\chi\) is \(1\) on \([-n^{\delta_{1}},n^{\delta_{1}}]^{4}\). Then the following estimate holds uniformly in \(a\in\mathbb{R}^{4}\):_
(A.2) \[\left|\int_{\mathbb{R}^{4}}\tilde{G}(a-e)f(e)\text{de}-\int_{\mathbb{R}^{4}} \chi(a-e)\tilde{G}(a-e)f(e)\text{de}\right|\lesssim n^{-\delta_{1}}.\]
**Remark A.4**.: _By similar methods to Lemma A.3, we would also have a corresponding equation for \(\tilde{G}_{D}\). Namely, we would have,_
\[\left|\frac{1}{n^{2}}\sum_{e\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}} n^{3/2}\tilde{G}_{D}(\sqrt{n}a-\sqrt{n}e)f(e)\right.\] \[-\frac{1}{n^{2}}\sum_{e\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}}\chi (a-e)n^{3/2}\tilde{G}_{D}(\sqrt{n}a-\sqrt{n}e)f(e)\right|\lesssim n^{-\delta_ {1}}.\]
Proof.: By the Cauchy-Schwarz inequality, we have,
\[\left|\int_{\mathbb{R}^{4}}\tilde{G}(a-e)f(e)\mathrm{d}e-\int_{ \mathbb{R}^{4}}\chi(a-e)\tilde{G}(a-e)f(e)\mathrm{d}e\right|\] \[\leq\int_{\mathbb{R}^{4}}\tilde{G}(a-e)(1-\chi(a-e))f(e)\mathrm{d}e\] \[\leq\left[\int_{\mathbb{R}^{4}}(\tilde{G}(a-e)(1-\chi(a-e)))^{2} \mathrm{d}e\right]^{1/2}\left[\int_{\mathbb{R}^{4}}f^{2}(e)\mathrm{d}e\right]^ {1/2}\] \[\leq\left[\int_{\mathbb{R}^{4}}f^{2}(e)\mathrm{d}e\right]^{1/2} \left[\int_{\|e\|\geq n^{\delta_{1}}}\tilde{G}^{2}(e)\mathrm{d}e\right]^{1/2} \lesssim\frac{1}{n^{\delta_{1}}}.\]
It yields the desired result.
After the reduction to a region of finite support, our next lemma allows us to replace \(\tilde{G}*f\) with an appropriate discrete form closer to one found in the expression of the discrete computation.
**Lemma A.5**.: _We have the following estimates uniform in \(a\). Fix some \(\delta_{1}>\delta_{2}>0\) sufficiently small, then there is some constant such that,_
(A.3) \[\left|(\tilde{G}*f)(a)-\frac{1}{n^{2}}\sum_{\begin{subarray}{c}z\in\frac{1}{ \sqrt{n}}\mathbb{Z}^{4}\\ \|z\|\geq n^{-\delta_{1}}\\ \|\lfloor a\rfloor_{n}-z\|\leq n^{\delta_{2}}\end{subarray}}f(\lfloor a \rfloor_{n}-z)\tilde{G}(z)\right|\lesssim n^{-\delta_{1}}.\]
_Here, \(\lfloor a\rfloor_{n}\) denotes the element of the lattice \(\frac{1}{\sqrt{n}}\mathbb{Z}^{4}\) formed by considering \(\frac{1}{\sqrt{n}}(\lfloor\sqrt{n}a_{1}\rfloor,\ldots,\lfloor\sqrt{n}a_{4}\rfloor)\), where we apply the floor function to each coordinate._
_Similarly, one can show that, uniformly in \(a\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}\),_
(A.4) \[\left|\frac{1}{n^{2}}\sum_{z\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}}n^{3/2}\tilde{G}_{D}(\sqrt{n}(a-z))f(z)-\frac{1}{n^{2}}\sum_{\begin{subarray}{c}z\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}\\ \|z\|\geq n^{-\delta_{1}}\\ \|a-z\|\leq n^{\delta_{2}}\end{subarray}}f(a-z)n^{3/2}\tilde{G}_{D}(\sqrt{n}z)\right|\lesssim n^{-\delta_{1}}.\]
Proof.: We will only consider proving equation (A.3); the proof for (A.4) would be simpler. First, observe that
(A.5) \[\int_{\|z\|\leq n^{-\delta_{1}}}f(a-z)\tilde{G}(z)\mathrm{d}z\lesssim\int_{\|z \|\leq n^{-\delta_{1}}}\frac{1}{\|z\|^{3}}\mathrm{d}z\lesssim n^{-\delta_{1}}.\]
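A short polar-coordinate computation lies behind the last step; writing the volume element in \(\mathbb{R}^{4}\) as \(\omega_{3}s^{3}\mathrm{d}s\), with \(\omega_{3}=2\pi^{2}\) the surface area of the unit sphere,

\[\int_{\|z\|\leq n^{-\delta_{1}}}\frac{\mathrm{d}z}{\|z\|^{3}}=\omega_{3}\int_{0}^{n^{-\delta_{1}}}\frac{s^{3}}{s^{3}}\,\mathrm{d}s=2\pi^{2}\,n^{-\delta_{1}}.\]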
Secondly, since \(f\) has finite support, we have that, for all sufficiently large \(n\),
(A.6) \[\int_{\begin{subarray}{c}\|z\|\geq n^{-\delta_{1}}\\ \|a-z\|>n^{\delta_{2}}\end{subarray}}f(a-z)\tilde{G}(z)\mathrm{d}z=0.\]
Combining estimates (A.5) and (A.6), we can deduce that,
\[\left|(\tilde{G}*f)(a)-\int_{\begin{subarray}{c}\|z\|\geq n^{-\delta_{1}}\\ \|z-a\|\leq n^{\delta_{2}}\end{subarray}}f(a-z)\tilde{G}(z)\mathrm{d}z\right| \lesssim n^{-\delta_{1}}.\]
Now, we compute the difference between the quantity on the right hand side above, and the appropriate discretization. If we let \(\lfloor z\rfloor_{n}\) denote the point in the lattice \(\frac{1}{\sqrt{n}}\mathbb{Z}^{4}\) that is closest to \(z\), then we can observe the following,
\[|\tilde{G}(z)-\tilde{G}(\lfloor z\rfloor_{n})|\lesssim n^{\delta_{1}-1/2} \tilde{G}(z),\quad\forall\|z\|\geq n^{-\delta_{1}}.\]
This comes from the fact that the gradient of \(\|z\|^{-3}\) is \(-3\|z\|^{-5}[z_{1},\ldots,z_{4}]\), whose norm is \(3\|z\|^{-4}\lesssim\tilde{G}(z)\|z\|^{-1}\), and that \(\|z\|^{-1}\leq n^{\delta_{1}}\).
In addition, if we assume that the domain of the support of \(f\) is \(I\),
\[|f(a-z)-f(a-\lfloor z\rfloor_{n})|\lesssim n^{-1/2}\mathbb{1}[a-z\in I]\]
since \(f\) is a smooth function with a bounded derivative. Hence, applying the triangle inequality, we ultimately see that,
\[\begin{split}&\bigg{|}\int_{\|z\|\geq n^{-\delta_{1}}}\,f(a-z) \tilde{G}(z)\mathrm{d}z-\sum_{\begin{subarray}{c}z\in\frac{1}{\sqrt{n}} \mathbb{Z}^{4}\\ \|z\|\geq n^{-\delta_{1}}\\ \|a-z\|\leq n^{\delta_{2}}\end{subarray}}f(\lfloor a\rfloor_{n}-z)\tilde{G}( z)\bigg{|}\\ &\lesssim\int_{\|z\|\geq n^{-\delta_{1}}}\,|f(a-z)\tilde{G}(z)-f( \lfloor a\rfloor_{n}-\lfloor z\rfloor_{n})\tilde{G}(\lfloor z\rfloor_{n})| \mathrm{d}z\\ &\lesssim\max[n^{\delta_{1}-1/2},n^{-1/2}]\int_{a-z\in I}\tilde{G} (z)\mathrm{d}z\lesssim n^{\delta_{1}-1/2}.\end{split}\]
This completes the proof of the lemma.
As a corollary of the lemma, we have the following estimates.
**Corollary A.6**.: _First, fix some \(\epsilon\) not changing with \(n\). Additionally, fix parameters \(\delta_{1}>\delta_{2}\) sufficiently small. For \(\|a\|\leq 2n^{\delta_{2}}\), we have the following estimate,_
\[\Big{|}2\tilde{G}*f(a)-n^{3/2}(C_{f})^{-1/2}\tilde{G}_{D}*f(\lfloor\sqrt{n}a \rfloor)\Big{|}\lesssim\frac{n^{20\delta_{1}+2\delta_{2}}}{n}+n^{-\delta_{1}}.\]
Proof.: By using (A.3) and (A.4), it suffices to estimate,
\[\bigg{|}\frac{1}{n^{2}}\sum_{\begin{subarray}{c}z\in\frac{1}{\sqrt{n}} \mathbb{Z}^{4}\\ \|z\|\geq n^{-\delta_{1}}\\ \|a-z\|\leq n^{\delta_{2}}\end{subarray}}f(a-z)\,\Big{[}n^{3/2}(C_{f})^{-1/2} \tilde{G}_{D}(\sqrt{n}z)-2\tilde{G}(z)\Big{]}\,\Bigg{|}.\]
From equation (A.8), which we can apply since if \(\|a\|\leq n^{\delta_{2}}\), then \(\|z\|\leq 2n^{\delta_{2}}\) in the sum above, we can bound the quantity above as,
\[\begin{split}&\lesssim\bigg{|}\frac{1}{n^{2}}\sum_{ \begin{subarray}{c}z\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}\\ \|z\|\geq n^{-\delta_{1}}\\ \|a-z\|\leq n^{\delta_{2}}\end{subarray}}f(a-z)\frac{n^{20\delta_{1}}}{n\|z\| ^{2}}\bigg{|}\lesssim\bigg{|}\frac{1}{n^{2}}\sum_{\begin{subarray}{c}z\in \frac{1}{\sqrt{n}}\mathbb{Z}^{4}\\ \|z\|\geq n^{-\delta_{1}}\\ \|a-z\|\leq 2n^{\delta_{2}}\end{subarray}}\frac{n^{20\delta_{1}}}{n\|z\|^{2}} \bigg{|}\\ &\lesssim n^{20\delta_{1}}\int_{3n^{\delta_{2}}\geq\|z\|\geq n^{- \delta_{1}}}\frac{1}{n\|z\|^{2}}\mathrm{d}z\lesssim\frac{n^{20\delta_{1}+2 \delta_{2}}}{n}.\end{split}\]
In our application of equation (A.8), we made the choice of parameter \(\epsilon=8\delta_{1}\).
We finally have all results necessary to prove Lemma A.2.
Proof of Lemma a.2.: Fix parameters \(\delta_{1},\delta_{2},\delta_{3}>0\) sufficiently small satisfying \(\frac{1}{400}>\delta_{1}>20\delta_{2}>20\delta_{3}>0\). Recalling the function \(\chi\) from Lemma A.3, we set \(\chi\) to be a smooth function supported on the interval \([-n^{\delta_{2}},n^{\delta_{2}}]^{4}\) and equal to \(1\) on \([-n^{\delta_{3}},n^{\delta_{3}}]^{4}\). By Lemma A.3 and Remark A.4, it suffices to show that
\[\left|2\int_{\mathbb{R}^{4}}\chi(a-e)\tilde{G}(a-e)f(e)\mathrm{d}e\right.\] \[\left.-\frac{1}{n^{2}}\sum_{e\in\frac{1}{\sqrt{n}}\mathbb{Z}^{4}} \chi(\lfloor a\rfloor_{n}-e)n^{3/2}\tilde{G}_{D}(\sqrt{n}\lfloor a\rfloor_{n} -\sqrt{n}e)f(e)\right|=o(1).\]
Thus, we see that it suffices to show that,
\[\int_{\mathbb{R}^{4}}\chi(a-e)\left|2\tilde{G}(a-e)-n^{3/2}\tilde{G}_{D}(\sqrt {n}(\lfloor a\rfloor_{n}-e))\right|f(e)\mathrm{d}e=o(1).\]
By Corollary A.6, we can bound the difference of \(\tilde{G}\) and \(\tilde{G}_{D}\) in the region on which \(\chi\) is not equal to \(0\). Thus, we have,
\[\int_{\mathbb{R}^{4}}\chi(a-e)\left|2\tilde{G}(a-e)-n^{3/2}\tilde {G}_{D}(\sqrt{n}(\lfloor a\rfloor_{n}-e))\right|f(e)\mathrm{d}e\] \[\lesssim\int_{\mathbb{R}^{4}}\chi(a-e)n^{-\delta_{1}}f(e)\mathrm{ d}e\lesssim n^{-\delta_{1}}.\]
We used the fact that \(\chi\) is supported on \([-n^{\delta_{2}},n^{\delta_{2}}]\). This completes the proof of the lemma.
### Additional Green's function computations
In this subsection, we will give various useful estimates concerning Green's function.
**Lemma A.7**.: _The Green's function of the discrete random walk \(G_{D}(x)\) has a positive convolutional square root with the following form,_
\[\tilde{G}_{D}(z)=\sum_{t=0}^{\infty}\frac{(2t)!}{2^{2t}(t!)^{2}}p_{t}(z).\]
_Recall that \(p_{t}(z)\) is the transition probability that a simple random walk starting from \(0\) reaches the point \(z\) at time \(t\). There is an \(L^{1}\) function \(\tilde{\tilde{G}}_{D}(l)\) whose Fourier transform is the function \(\tilde{G}_{D}(x)\)._
Proof.: _Part 1: Derivation of the form of \(\tilde{G}_{D}\)_
Consider the Taylor expansion of \((1-x)^{-1/2}\) as,
\[\frac{1}{\sqrt{1-x}}=\sum_{k=0}^{\infty}C_{k}x^{k}.\]
We will show that \(\tilde{G}_{D}\) has to take the functional form,
\[\tilde{G}_{D}(z)=\sum_{k=0}^{\infty}C_{k}p_{k}(z).\]
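Here the \(C_{k}\) are the central binomial weights, so this functional form is consistent with the explicit expression stated in the lemma; for reference,

\[C_{k}=\binom{2k}{k}4^{-k}=\frac{(2k)!}{2^{2k}(k!)^{2}},\qquad C_{k}\sim\frac{1}{\sqrt{\pi k}}\quad\text{as }k\to\infty,\]

which is also the asymptotic used later in the proof of Lemma A.8.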
We can check this by directly computing \(\tilde{G}_{D}*\tilde{G}_{D}\). Thus, we have that, for any \(z\in\mathbb{Z}^{4}\),
\[\tilde{G}_{D}*\tilde{G}_{D}(z) =\sum_{x\in\mathbb{Z}^{4}}\sum_{k_{1},k_{2}=0}^{\infty}C_{k_{1}}C_ {k_{2}}p_{k_{1}}(x)p_{k_{2}}(z-x)\] \[=\sum_{k_{1},k_{2}=0}^{\infty}C_{k_{1}}C_{k_{2}}p_{k_{1}+k_{2}}(z)\] \[=\sum_{k=0}^{\infty}p_{k}(z)\sum_{k_{1}=0}^{k}C_{k_{1}}C_{k-k_{1} }=\sum_{k=0}^{\infty}p_{k}(z).\]
To get the last line, we used the fact that
\[\frac{1}{1-x}=\left(\frac{1}{\sqrt{1-x}}\right)^{2}=\left(\sum_{k=0}^{\infty}C _{k}x^{k}\right)^{2}=\sum_{k=0}^{\infty}x^{k}\sum_{k_{1}=0}^{k}C_{k_{1}}C_{k-k _{1}}.\]
This gives the identity that \(\sum_{k_{1}=0}^{k}C_{k_{1}}C_{k-k_{1}}=1\) by comparing coefficients of the Taylor Series. By using similar manipulations, one can show that \(\tilde{G}(x)=\int_{0}^{\infty}\frac{1}{\sqrt{\pi t}}P_{t}(x)\mathrm{d}t\), where \(P_{t}(x)\) is the probability density that a Brownian motion starting from zero would reach position \(x\) at time \(t\).
_Part 2: Derivation of the Fourier Transform_
Now, we discuss the Fourier transform of \(\tilde{G}_{D}(x)\). Consider the following function,
\[F(l_{1},\ldots,l_{4})=\frac{1}{\sqrt{1-\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i })}}.\]
We will show that,
\[\tilde{G}_{D}(a_{1},\ldots,a_{4})=\int_{(-1/2,1/2]^{4}}F(l_{1},\ldots,l_{4}) \prod_{i=1}^{4}\exp[-2\pi\mathrm{i}l_{i}a_{i}]\mathrm{d}l_{i}.\]
First of all, observe that \(F(l_{1},\ldots,l_{4})\) only has a singularity around the origin and, furthermore, around the origin, \(F\) behaves like \(\frac{1}{\sqrt{l_{1}^{2}+\ldots+l_{4}^{2}}}\). Thus, \(F\) is integrable around \(0\). If we let \(B_{\epsilon}(x)\) be the ball of radius \(\epsilon\) around \(x\), we have,
(A.7) \[\begin{split}&\int_{(-1/2,1/2]^{4}}F(l_{1},\ldots,l_{4})\prod_{i =1}^{4}\exp[-2\pi\mathrm{i}l_{i}a_{i}]\mathrm{d}l_{i}\\ &=\lim_{\epsilon\to 0}\int_{(-1/2,1/2]^{4}\setminus B_{ \epsilon}(0)}\frac{1}{\sqrt{1-\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i})}}\prod_ {i=1}^{4}\exp[-2\pi\mathrm{i}l_{i}a_{i}]\mathrm{d}l_{i}.\end{split}\]
Now, away from the singularity at \(0\), we can expand \(\frac{1}{\sqrt{1-\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i})}}\) as,
\[\sum_{k=0}C_{k}\left(\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i})\right)^{k}\]
and observe that,
\[\int_{(-1/2,1/2]^{4}}\left(\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i}) \right)^{k}\prod_{i=1}^{4}\mathrm{d}l_{i}\] \[\leq\left[\int_{(-1/2,1/2]^{4}}\left(\frac{1}{4}\sum_{i=1}^{4}\cos (2\pi l_{i})\right)^{2k}\prod_{i=1}^{4}\mathrm{d}l_{i}\right]^{1/2}=\sqrt{p_{2k }(0)},\]
where the last equality comes from direct integration. Using the asymptotics \(C_{k}\lesssim k^{-1/2}\) and \(p_{2k}(0)\lesssim k^{-2}\), we see that,
\[\sum_{k=0}^{\infty}C_{k}\int_{(-1/2,1/2]^{4}}\left|\frac{1}{4}\sum_{i=1}^{4} \cos(2\pi l_{i})\right|^{k}\prod_{i=1}^{4}\mathrm{d}l_{i}\lesssim\sum_{k=1}^{ \infty}k^{-1/2-1}<\infty.\]
This control on the absolute value of the integral allows us to freely exchange the summation of the power series, the limit as \(\epsilon\to 0\), and the integration in (A.7). Thus, we have that,
\[\lim_{\epsilon\to 0}\int_{(-1/2,1/2]^{4}\setminus B_{ \epsilon}(0)}\sum_{k=0}^{\infty}C_{k}\left(\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l _{i})\right)^{k}\prod_{i=1}^{4}\exp[-2\pi\mathrm{i}l_{i}a_{i}]\mathrm{d}l_{i}\] \[=\sum_{k=0}^{\infty}C_{k}\lim_{\epsilon\to 0}\int_{(-1/2,1/2]^{4} \setminus B_{\epsilon}(0)}\left(\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i}) \right)^{k}\prod_{i=1}^{4}\exp[-2\pi\mathrm{i}l_{i}a_{i}]\mathrm{d}l_{i}\] \[=\sum_{k=0}^{\infty}C_{k}\int_{(-1/2,1/2]^{4}}\left(\frac{1}{4} \sum_{i=1}^{4}\cos(2\pi l_{i})\right)^{k}\prod_{i=1}^{4}\exp[-2\pi\mathrm{i}l _{i}a_{i}]\mathrm{d}l_{i}=\sum_{k=0}^{\infty}C_{k}p_{k}(a_{1},\ldots,a_{4}).\]
To get the last line, observe that \(\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i})\) can be written as,
\[\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i})=\frac{1}{8}\left(\sum_{i=1}^{4}[ \exp[2\pi l_{i}]+\exp[-2\pi l_{i}]]\right).\]
The Fourier integral in the last line thus determines the coefficient of the term \(\prod_{i=1}^{4}\exp[2\pi\mathrm{i}l_{i}a_{i}]\) in the expansion of the polynomial. This is exactly the probability that a simple random walk reaches the point \((a_{1},\ldots,a_{4})\) at time \(k\).
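Equivalently, a compact way to phrase this step: the quantity \(\frac{1}{4}\sum_{i}\cos(2\pi l_{i})\) is the characteristic function of a single step of the walk, so its \(k\)-th power is the characteristic function of \(k\) steps. In symbols (with \(X_{1}\) a single step, uniform over the \(8\) nearest neighbours of the origin),

\[\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i})=\mathbb{E}\big{[}e^{2\pi\mathrm{i}\langle l,X_{1}\rangle}\big{]},\qquad\left(\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi l_{i})\right)^{k}=\sum_{x\in\mathbb{Z}^{4}}p_{k}(x)e^{2\pi\mathrm{i}\langle l,x\rangle},\]

and the Fourier integral over \((-1/2,1/2]^{4}\) extracts the coefficient \(p_{k}(a_{1},\ldots,a_{4})\).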
Though it will be more useful in the sequel, we also present the following result comparing \(\tilde{G}_{D}\) to \(\tilde{G}\).
**Lemma A.8**.: _We have the following asymptotics relating \(\tilde{G}_{D}(z)\) with \(\tilde{G}(z)\). Fix some \(\epsilon>0\) and let \(\|z\|\geq n^{-\epsilon/4}\). Then, we have the following comparison,_
(A.8) \[|(\sqrt{n})^{3}\tilde{G}_{D}(\sqrt{n}z)-2\tilde{G}(z)|\lesssim\frac{n^{5 \epsilon/2}}{n\|z\|^{2}}+n^{2}\exp[-n^{\epsilon/2}].\]
**Remark A.9**.: _The bound found in the inequality (A.8) is most effective when \(\|z\|\leq n^{\epsilon}\), which will be the regime in which we will actually apply the bound in question._
Proof.: _Part 1: Discretization of the integral form of \(\tilde{G}\)_
We will begin our computation by first finding an appropriate discretization of the integral form of \(\tilde{G}\). Recall that we can write \(\tilde{G}\) as,
\[\tilde{G}(z)=\int_{0}^{\infty}\frac{1}{\sqrt{\pi t}}P_{t}(z)\mathrm{d}t.\]
Let \(Q_{z}(t)\) be a shorthand for the function \(Q_{z}(t)=\frac{1}{\sqrt{\pi t}}P_{t}(z)\). We can estimate the difference as follows:
\[\left|\int_{0}^{\infty}Q_{z}(t)\mathrm{d}t-\frac{1}{n}\sum_{k\in\frac{1}{n}\mathbb{Z}^{+}}Q_{z}(k)\right| \leq\sum_{k\in\frac{1}{n}\mathbb{Z}^{+}}\int_{k-\frac{1}{n}}^{k}\mathrm{d}j\int_{j}^{k}|Q^{\prime}_{z}(l)|\mathrm{d}l\] (A.9) \[\leq\sum_{k\in\frac{1}{n}\mathbb{Z}^{+}}\frac{1}{n}\int_{k-\frac{1}{n}}^{k}|Q^{\prime}_{z}(l)|\mathrm{d}l\leq\frac{1}{n}\int_{0}^{\infty}|Q^{\prime}_{z}(l)|\mathrm{d}l.\]
One can explicitly compute \(|Q^{\prime}_{z}(l)|\) as \(Q^{\prime}_{z}(l)\propto\exp[-\|z\|^{2}/(2l)]\left[-5l^{-(4+3)/2}+\|z\|^{2}l^{ -(4+5)/2}\right]\). By scaling, we observe that \(\int_{0}^{\infty}|Q^{\prime}_{z}(l)|\mathrm{d}l=\frac{1}{\|z\|^{5}}\int_{0}^{ \infty}|Q^{\prime}_{e_{1}}(l)|\mathrm{d}l\), where \(e_{1}\) is the unit vector in the first dimension and the latter integral is finite. Thus, the error between \(\int_{0}^{\infty}Q_{z}(t)\mathrm{d}t\) and its discretization with lattice \(\frac{1}{n}\mathbb{Z}^{+}\) is of order \(O\left(\frac{1}{n\|z\|^{5}}\right)\), where the implicit constant does not depend on either \(\|z\|\) or \(n\).
Furthermore, we claim that we can ignore the portion of the integral of \(Q_{z}(t)\) for \(t\) between \(0\) and \(n^{-\epsilon}\) in our regime of interest. By observing the form of the derivative of \(Q_{z}(t)\), we notice that \(Q_{z}(l)\) is an increasing function as long as \(\|z\|^{2}\geq 5l\). For \(l\leq n^{-\epsilon}\) and \(\|z\|\geq n^{-\epsilon/4}\), this condition holds for all sufficiently large \(n\), so \(Q_{z}(l)\) is increasing between \(l=0\) and \(l=n^{-\epsilon}\). Thus,
(A.10) \[\int_{0}^{n^{-\epsilon}}Q_{z}(l)\mathrm{d}l\leq n^{-\epsilon}Q_{z}(n^{-\epsilon})\lesssim n^{3\epsilon/2}\exp[-n^{\epsilon/2}].\]
Combining (A.9) and (A.10), we see that,
(A.11) \[\left|\tilde{G}(z)-\frac{1}{n}\sum_{k\in\frac{1}{n}\mathbb{Z}^{+},k\geq n^{-\epsilon}}\frac{1}{\sqrt{\pi k}}P_{k}(z)\right|\lesssim\frac{1}{n\|z\|^{5}}+n^{3\epsilon/2}\exp[-n^{\epsilon/2}].\]
_Part 2: Estimates on \(\tilde{G}_{D}\)_
First, we will bound the contribution of \(n^{3/2}\sum_{k=0}^{n^{1-\epsilon}}C_{k}p_{k}(\sqrt{n}z)\). Since \(\|z\|\geq n^{-\epsilon/4}\), we have that \(\sqrt{n}\|z\|\geq n^{1/2-\epsilon/4}\). By exponential tail estimates on discrete random walks, we know that \(p_{k}(\sqrt{n}z)\lesssim\exp[-n\|z\|^{2}/k]\lesssim\exp[-n^{\epsilon/2}]\) in this range of \(k\). Thus, the contribution satisfies \(n^{3/2}\sum_{k=0}^{n^{1-\epsilon}}C_{k}p_{k}(\sqrt{n}z)\lesssim n^{2}\exp[-n^{\epsilon/2}]\). Ultimately, we see that,
(A.12) \[\left|n^{3/2}\tilde{G}_{D}(n^{1/2}z)-\frac{1}{n}\sum_{k\in\frac{1}{n} \mathbb{Z}^{+},k\geq n^{-\epsilon}}(\sqrt{n}C_{nk})(n^{2}p_{nk}(\sqrt{n}z)) \right|\lesssim n^{2}\exp[-n^{\epsilon/2}].\]
By Stirling's approximation, we have,
\[\frac{(2nk)!}{2^{2nk}((nk)!)^{2}} \leq\frac{\sqrt{2\pi(2nk)}\left(\frac{2nk}{e}\right)^{2nk}\exp[\frac{1}{12(2nk)}]}{2^{2nk}\left(\sqrt{2\pi(nk)}\left(\frac{nk}{e}\right)^{nk}\right)^{2}\exp[\frac{2}{12nk+1}]}=\frac{1}{\sqrt{\pi nk}}\left[1+\frac{O(1)}{nk}\right],\] \[\frac{(2nk)!}{2^{2nk}((nk)!)^{2}} \geq\frac{\sqrt{2\pi(2nk)}\left(\frac{2nk}{e}\right)^{2nk}\exp[\frac{1}{12(2nk)+1}]}{2^{2nk}\left(\sqrt{2\pi(nk)}\left(\frac{nk}{e}\right)^{nk}\right)^{2}\exp[\frac{2}{12nk}]}=\frac{1}{\sqrt{\pi nk}}\left[1+\frac{O(1)}{nk}\right].\]
Thus, we see that,
\[\left|\sqrt{n}C_{nk}-\frac{1}{\sqrt{\pi k}}\right|\lesssim\frac{1}{nk^{3/2}}.\]
By the local central limit theorem [18, Thm 2.1.1], we also have that,
\[|n^{2}p_{nk}(\sqrt{n}z)-n^{2}P_{nk/4}(\sqrt{n}z)|\lesssim\frac{1}{nk^{3}\|z\|^ {2}}.\]
Furthermore, by scaling, \(n^{2}P_{nk/4}(\sqrt{n}z)\) is equal to \(16P_{k}(2z)\). If we combine these estimates, we see that,
\[\left|\frac{1}{n}\sum_{k\in\frac{1}{n}\mathbb{Z}^{+},k\geq n^{- \epsilon}}(\sqrt{n}C_{nk})(n^{2}p_{nk}(\sqrt{n}z))-\frac{1}{n}\sum_{k\in\frac{ 1}{n}\mathbb{Z}^{+},k\geq n^{-\epsilon}}\frac{16}{\sqrt{\pi k}}P_{k}(2z)\right|\] \[\lesssim \frac{1}{n}\sum_{k\in\frac{1}{n}\mathbb{Z}^{+},k\geq n^{- \epsilon}}(\sqrt{n}C_{nk})|n^{2}p_{nk}(\sqrt{n}z)-16P_{k}(2z)|\] \[+ \frac{1}{n}\sum_{k\in\frac{1}{n}\mathbb{Z}^{+},k\geq n^{- \epsilon}}\left|\sqrt{n}C_{nk}-\frac{1}{\sqrt{\pi k}}\right|16P_{k}(2z)\] \[\lesssim \frac{1}{n}\sum_{k\in\frac{1}{n}\mathbb{Z}^{+},k\geq n^{- \epsilon}}\frac{1}{\sqrt{k}}\frac{1}{nk^{3}\|z\|^{2}}\lesssim\frac{n^{5 \epsilon/2}}{n\|z\|^{2}}.\]
In the last line, we used the estimates \(\sqrt{n}C_{nk}\lesssim\frac{1}{\sqrt{k}}\) and \(P_{k}(z)\lesssim\frac{1}{k\|z\|^{2}}\). This, in itself, comes from the estimate that \(\exp[-\|z\|^{2}/k]\leq k/\|z\|^{2}\). Combining this with equation (A.12) shows that,
\[\left|n^{3/2}\tilde{G}_{D}(\sqrt{n}z)-\frac{1}{n}\sum_{k\in\frac{1}{n}\mathbb{Z}^{+},k\geq n^{-\epsilon}}\frac{1}{\sqrt{\pi k}}16P_{k}(2z)\right|\lesssim\frac{n^{5\epsilon/2}}{n\|z\|^{2}}+n^{2}\exp[-n^{\epsilon/2}].\]
Finally, combining this estimate with equation (A.11) will give us the desired inequality in equation (A.8).
The following lemma gives a rough estimate on sum of the Green's function over a random walk whose beginning and end are pinned to certain points.
**Lemma A.10**.: _For any \(y\) and \(z\in\mathbb{Z}^{4}\),_
\[\mathbb{E}\left[\sum_{i=0}^{n}G_{D}(\mathcal{S}_{i}-z)\bigg{|}\mathcal{S}_{n} =y\right]\lesssim\log n.\]
Proof.: We let \(B_{\sqrt{n}}(z)\) be the ball of radius \(\sqrt{n}\) around the point \(z\). Let \(\tau_{1}\) be the random time at which the random bridge first touches a point in \(B_{\sqrt{n}}(z)\), and let \(\tau_{2}\) be the last time that the random bridge touches a point in \(B_{\sqrt{n}}(z)\).
We see that,
\[\mathbb{E}\left[\sum_{i=0}^{n}G_{D}(\mathcal{S}_{i}-z)\bigg{|} \mathcal{S}_{n}=y\right]\] \[=\mathbb{E}\bigg{[}\sum_{0\leq k_{1}\leq k_{2}\leq n}\sum_{a_{1}, a_{2}\in B_{\sqrt{n}}(z)}\mathbb{1}\left[\tau_{1}=k_{1},\tau_{2}=k_{2}, \mathcal{S}_{\tau_{1}}=a_{1},S_{\tau_{2}}=a_{2}\right]\] \[\qquad\times\mathbb{E}\left[\sum_{i=0}^{n}G_{D}(\mathcal{S}_{i}-z )\bigg{|}\tau_{1}=k_{1},\tau_{2}=k_{2},\mathcal{S}_{\tau_{1}}=a_{1},\mathcal{ S}_{\tau_{2}}=a_{2},\mathcal{S}_{n}=y\right]\bigg{|}\mathcal{S}_{n}=y\bigg{]}\] \[=\mathbb{E}\bigg{[}\sum_{0\leq k_{1}\leq k_{2}\leq n}\sum_{a_{1}, a_{2}\in B_{\sqrt{n}}(z)}\mathbb{1}\left[\tau_{1}=k_{1},\tau_{2}=k_{2}, \mathcal{S}_{\tau_{1}}=a_{1},\mathcal{S}_{\tau_{2}}=a_{2}\right]\] \[\qquad\times\mathbb{E}\left[\sum_{i=k_{1}}^{k_{2}}G_{D}(\mathcal{ S}_{i}-z)\bigg{|}\tau_{1}=k_{1},\tau_{2}=k_{2},\mathcal{S}_{\tau_{1}}=a_{1}, \mathcal{S}_{\tau_{2}}=a_{2},\mathcal{S}_{n}=y\right]\bigg{|}\mathcal{S}_{n}=y\bigg{]}\] \[+\mathbb{E}\bigg{[}\sum_{0\leq k_{1}\leq k_{2}\leq n}\sum_{a_{1}, a_{2}\in B_{\sqrt{n}}(z)}\mathbb{1}\left[\tau_{1}=k_{1},\tau_{2}=k_{2}, \mathcal{S}_{\tau_{1}}=a_{1},\mathcal{S}_{\tau_{2}}=a_{2}\right]\] \[\qquad\times\mathbb{E}\left[\sum_{i=k_{2}}^{n}G_{D}(\mathcal{S}_{ i}-z)|\tau_{1}=k_{1},\tau_{2}=k_{2},\mathcal{S}_{\tau_{1}}=a_{1},\mathcal{S}_{ \tau_{2}}=a_{2},\mathcal{S}_{n}=y\right]\bigg{|}\mathcal{S}_{n}=y\bigg{]}.\]
For the summands with \(i\leq\tau_{1}\) and \(i\geq\tau_{2}\), we can make the following observation: since the walk stays outside \(B_{\sqrt{n}}(z)\) at those times, we have \(G_{D}(\mathcal{S}_{i}-z)\lesssim\frac{1}{n}\) for \(i\leq\tau_{1}\) and \(i\geq\tau_{2}\). Thus,
\[\sum_{i=0}^{\tau_{1}}G_{D}(\mathcal{S}_{i}-z)\lesssim n\frac{1}{n}=1,\]
and
\[\sum_{i=\tau_{2}}^{n}G_{D}(\mathcal{S}_{i}-z)\lesssim 1.\]
Hence,
\[\mathbb{E}\bigg{[}\sum_{0\leq k_{1}\leq k_{2}\leq n}\sum_{a_{1},a_{2 }\in B_{\sqrt{\pi}(z)}}\mathbbm{1}\left[\tau_{1}=k_{1},\tau_{2}=k_{2},\mathcal{S }_{\tau_{1}}=a_{1},\mathcal{S}_{\tau_{2}}=a_{2}\right]\] \[\qquad\qquad\times\mathbb{E}\left[\sum_{i=0}^{k_{1}}G_{D}( \mathcal{S}_{i}-z)|\tau_{1}=k_{1},\tau_{2}=k_{2},\mathcal{S}_{\tau_{1}}=a_{1}, \mathcal{S}_{\tau_{2}}=a_{2},\mathcal{S}_{n}=y\right]\bigg{|}\mathcal{S}_{n}=y\bigg{]}\] \[\leq\mathbb{E}\bigg{[}\sum_{0\leq k_{1}\leq k_{2}\leq n}\sum_{a_{ 1},a_{2}\in B_{\sqrt{\pi}(z)}}\mathbbm{1}\left[\tau_{1}=k_{1},\tau_{2}=k_{2}, \mathcal{S}_{\tau_{1}}=a_{1},\mathcal{S}_{\tau_{2}}=a_{2}\right]\bigg{|} \mathcal{S}_{n}=y\bigg{]}\lesssim 1.\]
Now, all that is left to check is that,
\[\mathbb{E}\left[\sum_{i=k_{1}}^{k_{2}}G_{D}(\mathcal{S}_{i}-z) \bigg{|}\tau_{1}=k_{1},\tau_{2}=k_{2},\mathcal{S}_{\tau_{1}}=a_{1},\mathcal{S} _{\tau_{2}}=a_{2},\mathcal{S}_{n}=y\right]\] \[\leq\mathbb{E}\left[\sum_{i=k_{1}}^{k_{2}}G_{D}(\mathcal{S}_{i}-z )\bigg{|}\mathcal{S}_{k_{1}}=a_{1},\mathcal{S}_{k_{2}}=a_{2}\right]\lesssim \log n.\]
It suffices to find a bound on the following for general \(T\) and a random walk \(\mathcal{S}\):
\[\mathbb{E}\left[\sum_{i=1}^{T}G_{D}(\mathcal{S}_{i})\bigg{|}\mathcal{S}_{0}=x,\mathcal{S}_{T}=y\right]\lesssim\log T.\]
Recall [17, Thm. 1.2.1] yields that for some \(C\) finite and any \(\|x\|\leq i^{1/2}\),
\[\mathbb{P}(\mathcal{S}_{i}=x)\gtrsim i^{-2}.\]
Then, with (4.10), if \(\|x-y\|\leq T^{1/2}\),
\[\mathbb{P}(\mathcal{S}_{0}=x,\mathcal{S}_{T}=y)\gtrsim T^{-2}\exp(-\frac{\|x- y\|^{2}}{T})\gtrsim T^{-2}\]
and
\[\mathbb{E}\left[\sum_{i=1}^{T}G_{D}(\mathcal{S}_{i})\mathbbm{1} \left\{\mathcal{S}_{0}=x,\mathcal{S}_{T}=y\right\}\right]= \sum_{i=0}^{T}\sum_{z\in\mathbb{Z}^{4}}G_{D}(z)\mathbb{P}^{x}( \mathcal{S}_{i}=z)\mathbb{P}^{z}(\mathcal{S}_{T-i}=y)\] \[\lesssim \sum_{i=0}^{T}\sum_{z\in\mathbb{Z}^{4}}G_{D}(z)\mathbb{P}^{x}( \mathcal{S}_{i}=z)(T-i)_{+}^{-2}\] \[= \sum_{i=0}^{T}\mathbb{E}[G_{D}(\mathcal{S}_{i})](T-i)_{+}^{-2}\] \[\lesssim \sum_{i=0}^{T}i_{+}^{-1}(T-i)_{+}^{-2}\lesssim\frac{\log T}{T^{2}}.\]
Dividing by \(\mathbb{P}(\mathcal{S}_{0}=x,\mathcal{S}_{T}=y)\gtrsim T^{-2}\) converts this into the desired conditional bound of order \(\log T\). Therefore, we have the result.
**Lemma A.11**.: _Recall the matrix \(\mathcal{G}^{S_{\beta,\alpha,j}^{2}}\) from equation (4.2). This matrix \(\mathcal{G}^{S_{\beta,\alpha,j}^{2}}\) is positive definite and has minimum eigenvalue greater than \(\frac{1}{2}\)._
Proof.: We will show this proposition for any general matrix of the form,
\[[\mathcal{G}]_{i,j}=G_{D}(a_{i}-a_{j}),\]
where \(\{a_{i}\}\) is a collection of \(n\) distinct points. Note that we have the Fourier transformation,
\[G_{D}(x)=\int_{[0,1]^{4}}\frac{1}{1-\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi k_{i})} \exp[2\pi\mathrm{i}\langle k,x\rangle]\mathrm{d}k,\]
where \(\langle k,x\rangle\) is the inner product between the vector \(k\) and \(x\). Let \((v_{1},\ldots,v_{n})\) be any vector with \(l^{2}\) norm \(1\). Thus, we have,
\[\sum_{i,j}v_{i}[\mathcal{G}]_{i,j}\overline{v_{j}} =\int_{[0,1]^{4}}\frac{\left|\sum_{i=1}^{n}v_{i}\exp[2\pi\mathrm{ i}\langle k,a_{i}\rangle]\right|^{2}}{1-\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi k_{i})} \mathrm{d}k\] \[\geq\frac{1}{2}\int_{[0,1]^{4}}\left|\sum_{i=1}^{n}v_{i}\exp[2\pi \mathrm{i}\langle k,a_{i}\rangle]\right|^{2}\,\mathrm{d}k=\frac{1}{2}\|v\|^{2}.\]
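The two facts used in the last display are worth recording: the symbol in the denominator is bounded above by \(2\), and the characters \(e^{2\pi\mathrm{i}\langle k,a_{i}\rangle}\) for distinct lattice points \(a_{i}\) are orthonormal in \(L^{2}([0,1]^{4})\). Concretely,

\[1-\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi k_{i})\leq 2\quad\Longrightarrow\quad\frac{1}{1-\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi k_{i})}\geq\frac{1}{2},\qquad\int_{[0,1]^{4}}\bigg{|}\sum_{i=1}^{n}v_{i}e^{2\pi\mathrm{i}\langle k,a_{i}\rangle}\bigg{|}^{2}\mathrm{d}k=\sum_{i=1}^{n}|v_{i}|^{2}.\]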
This shows that any matrix of the form \(\mathcal{G}\) has minimum eigenvalue at least \(\frac{1}{2}\), and in particular is positive definite, which proves the lemma.
### Generalized Gagliardo-Nirenberg constant
In our previous manuscript [1], we showed that the large deviation constant associated to the quantity \(\int_{0}^{1}\int_{0}^{1}G(B_{t}^{1}-B_{s}^{2})\mathrm{d}s\mathrm{d}t\) can be expressed in terms of the optimal constant of the generalized Gagliardo-Nirenberg inequality. Namely,
**Remark A.12**.: _We have_
\[\lim_{T\to\infty}T^{-1}\log\mathbb{P}\left(\int_{0}^{1}\int_{0}^{1}G(B_{t}^{1 }-B_{s}^{2})\mathit{dtds}\geq T\right)=-\tilde{\kappa}^{-4}(4,2).\]
We remark that this large deviation constant was also obtained by Bass-Chen-Rosen in [7, (1.10)]. Their result is not presented in the same manner, since they do not identify the generalized Gagliardo-Nirenberg inequality. Some manipulations, based on Section 4 of [1] and Section 7 of [7], can demonstrate the link between these constants. We remark that in order to adapt the results of [7] to the case of the Brownian motion, one has to adjust the Fourier transform appearing in [7, equation (1.1)] by a factor of \(1/2\).
## Acknowledgment
The authors would like to thank Amir Dembo for his useful suggestions. The authors are also grateful to Makoto Nakamura for his helpful comments.
|
2301.00873 | Production cross section of heavy quarks in ep interaction at the NLO
approximation | We present the production cross section of heavy quarks \sigma^{cc},
\sigma^{bb} and {\sigma^{tt}} at the next-to-leading order in the
electron-proton interaction by using the quarks and gluon distribution
functions at the initial scale Q^{2}_{0}. To do this, we use a fitted form of
the heavy quark coefficient functions for deep-inelastic lepton-hadron
scattering to obtain the structure functions of heavy quarks. Then, we
calculate the reduced cross section of heavy quarks by using the structure
functions and subsequently present the single differential and the integrated
cross section of heavy quarks at the center-of-mass energies of 319 GeV , 1.3
TeV and 3.5 TeV in the electron-proton collision. The obtained numerical
results of the cross section of the charm and beauty quarks are compared with
the HERA data, which is a combination from the results of the H1 and ZEUS
detectors, and with the predictions from H1PDF, MSTW2008 and MSRT03.
Furthermore, we present the production cross section of top quark as a direct
prediction from our calculations. | S. Zarrin, S. Dadfar, M. Sayahi | 2023-01-02T20:55:00Z | http://arxiv.org/abs/2301.00873v2 | # Production cross section of heavy quarks in \(e^{-}p\) interaction at the NLO approximation
###### Abstract
We present the production cross section of heavy quarks \(\sigma^{c\bar{c}}\), \(\sigma^{b\bar{b}}\) and \(\sigma^{t\bar{t}}\) at the next-to-leading order in the electron-proton interaction by using the quarks and gluon distribution functions at the initial scale \(Q_{0}^{2}\). To do this, we use a fitted form of the heavy quark coefficient functions for deep-inelastic lepton-hadron scattering to obtain the structure functions of heavy quarks. Then, we calculate the reduced cross section of heavy quarks by using the structure functions and subsequently present the single differential and the integrated cross section of heavy quarks at the center-of-mass energies of \(\sqrt{s}=319~{}GeV\), \(1.3~{}TeV\) and \(3.5~{}TeV\) in the electron-proton collision. The obtained numerical results of the cross section of the charm and beauty quarks are compared with the HERA data, which is a combination from the results of the H1 and ZEUS detectors, and with the predictions from H1PDF, MSTW2008 and MSRT03. Furthermore, we present the production cross section of top quark as a direct prediction from our calculations.
pacs: 13.60.Hb, 13.85.Lg, 14.65.Dw, 14.65.Fy, 14.65.Ha
## I Introduction
The study of heavy quark production is one of the most important subjects of research at present and future colliders and a test of quantum chromodynamics (QCD). These quarks can be generated in hadron-hadron, photon-hadron, electron-positron and lepton-hadron interactions. The production of heavy quarks is studied in two different prescriptions in the framework of QCD analyses. The first framework (applicable in the region \(Q^{2}\gg m_{q}^{2}\)) is the so-called variable-flavour number scheme (VFNS) [1]. In this scheme, the heavy quark contributions are described by a parton density and the heavy quark is treated as a massless parton in the hadron. In the 'massless' scheme, the dominant contribution at the leading order (LO) approximation is due to the quark parton model (QPM) process, and at the next-to-leading order (NLO) approximation the contributions of the photon-gluon fusion (PGF) and QCD Compton processes are also considered. In the second framework, heavy quarks are treated as massive quarks and their contributions are given by fixed-order perturbation theory (FOPT)[2; 3]. In this scheme (the 'massive' scheme), the dominant LO process is the PGF and the NLO diagrams are of order \(\alpha_{s}^{2}\).
At HERA, at the LO approximation, the PGF is the dominant contribution to heavy quark production in the electron-proton interaction \(e^{-}+p\to q\bar{q}+e^{-}+X\). In this process, a heavy quark-antiquark pair is formed through the interaction of a virtual photon emitted by the incoming electron with a gluon from the proton. The HERA data show that the production of heavy quarks is sensitive to the gluon distribution (the minimum gluon momentum fraction \(x_{g}\) required in photoproduction to generate a heavy quark pair is ordered such that \(x_{g}^{tt}>x_{g}^{bb}>x_{g}^{cc}\)) and also depends on the mass of these quarks. Therefore, the calculations of the heavy quark structure functions depend on the squared energy scale \(\mu^{2}\)[4; 5; 6; 7; 8].
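As a rough kinematic guide (a standard estimate for the photon-gluon fusion threshold, stated here as an illustration rather than taken from the measurements discussed below), requiring the invariant mass of the produced pair in \(\gamma^{*}g\to q\bar{q}\) to exceed \(2m_{q}\) gives

\[\hat{s}=Q^{2}\left(\frac{x_{g}}{x}-1\right)\geq 4m_{q}^{2}\quad\Longrightarrow\quad x_{g}\geq x\left(1+\frac{4m_{q}^{2}}{Q^{2}}\right),\]

so heavier quarks require larger gluon momentum fractions, consistent with the ordering \(x_{g}^{tt}>x_{g}^{bb}>x_{g}^{cc}\) quoted above.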
The measurements of the open charm (c) cross section in DIS at HERA have mainly been exclusive measurements of \(D\) or \(D^{*}\) meson production [9; 10; 11; 12; 13]. The measurement of the open beauty (b) cross section is challenging since b events contain only a small fraction (typically \(<5\%\)) of the total cross section. The \(b\) cross section has been measured in DIS (\(Q^{2}>2GeV^{2}\)) by ZEUS [14] and in photoproduction (\(Q^{2}<1GeV^{2}\) and \(0.1<y<0.8\)) by H1 [15] and ZEUS [16], using the transverse momentum distribution of muons relative to the \(b\) jet in semi-muonic decays. Moreover, in Ref. [5], the production of c and b quarks in \(ep\) interactions has been studied with the ZEUS detector at HERA for exchanged four-momentum squared \(5<Q^{2}<1000GeV^{2}\) using an integrated luminosity of \(354pb^{-1}\). Also, measurements of the c and b contributions to the inclusive proton structure function \(F_{2}\) have recently been presented in deep inelastic scattering (DIS) at HERA, using information from the H1 vertex detector, for values of the negative square of the four-momentum of the exchanged boson \(Q^{2}>150GeV^{2}\) and of inelasticity \(0.1<y<0.7\)[17]. In this region, the inclusive c and b cross sections have been found to be \(\sigma_{c\bar{c}}=373\pm 39\pm 47~{}pb\) and \(\sigma_{b\bar{b}}=55.4\pm 8.7\pm 12.0~{}pb\), respectively, and the data show that a fraction of \(\sim 18\%\) (\(\sim 3\%\)) of DIS events contain a c (b) quark. Furthermore, inclusive c and b cross sections have been measured in \(e^{-}p\) and \(e^{+}p\) neutral current collisions at HERA in the kinematic region \(5<Q^{2}<2000GeV^{2}\) and \(0.0002<x<0.05\), where \(x\) is the Bjorken scaling variable; here the \(e^{-}p\) center-of-mass energy (CME) is \(\sqrt{s}=319GeV\), with a proton beam energy of \(E_{p}=920GeV\) and an electron beam energy of \(E_{e}=27.6GeV\)[4].
In high energy processes, the contribution of heavy quarks to the proton structure functions will be studied in projects such as the Large Hadron electron Collider (LHeC) and the Future Circular Collider electron-hadron (FCC-eh), which operate at high enough energies to observe new phenomena [18; 19; 20; 21; 22; 23]. In the LHeC project, the possibility of colliding an electron beam from a new accelerator with the existing LHC proton beam is investigated. In this project, the \(e^{-}p\) CME is planned to reach
\(\sqrt{s}=1.3TeV\)[18; 22]. Beyond the LHeC, the next-generation \(ep\) collider (the FCC-eh project) is an ideal environment to further increase the center-of-mass energy. In the proposed FCC-eh program, the distributions of heavy quarks will be examined at \(\sqrt{s}=3.5TeV\)[23].
Theoretically, inclusive heavy quark production has been presented within the VFNS scheme at the next-to-next-to-leading order (NNLO) approximation in Ref. [24]. The predictions for the c and b cross sections have been obtained from fits [25] to the HERA inclusive \(F_{2}\) data based on CCFM evolution [26]. Also, the production of heavy quarks in the fixed-flavour-number scheme (FFNS) approach has been predicted according to the LO PGF off-shell matrix elements convoluted with the CCFM \(k_{T}\)-unintegrated gluon density of the proton [25]. In Refs. [27; 28; 29; 30; 31; 32], the connection between the gluon distribution and the structure functions of heavy quarks (c and b) has been shown theoretically at small \(x\). Moreover, in Refs. [32; 33; 34] the authors present the necessary conditions for predicting the top structure function \(F_{2}^{t\bar{t}}\) with respect to the different predictions for the behavior of the gluon at low \(x\) and high \(Q^{2}\) values. Besides these predictions, various successful phenomenological methods have been presented to obtain the c and b structure functions and the ratios \(R^{c\bar{c}}\) and \(R^{b\bar{b}}\)[28; 32; 35]. These studies, together with the \(t\)-quark density, can be explored at future circular collider energies and may lead us to new physics in the future [36; 37].
At small \(x\), both the H1 and ZEUS detectors have extracted the charm and beauty components of the proton structure function from measurements of the inclusive heavy quark cross sections, after applying small corrections for the heavy quark longitudinal structure function at low and moderate inelasticity. In the region of high inelasticity, however, this function may have a significant effect on the heavy quark production cross section. The heavy quark differential cross section is written in terms of the heavy quark structure functions as:
\[\frac{d^{2}\sigma^{q\bar{q}}}{dxdQ^{2}}=\frac{2\pi\alpha^{2}}{xQ^{2}}\bigg{(}Y _{+}F_{2}^{q\bar{q}}(x,Q^{2})-y^{2}F_{L}^{q\bar{q}}(x,Q^{2})\bigg{)}\]
\[=\frac{2\pi\alpha^{2}}{xQ^{2}}Y_{+}\sigma_{red}^{q\bar{q}}(x,Q^{2}), \tag{1}\]
where \(Y_{+}=1+(1-y)^{2}\), and \(y=Q^{2}/(xs)\) is the inelasticity variable, with \(s\) and \(Q^{2}\) the squared CME and the photon virtuality, respectively. The heavy quark structure functions \(F_{2}^{q\bar{q}}(x,Q^{2})\) and \(F_{L}^{q\bar{q}}(x,Q^{2})\), expressed in terms of the gluon density, are given by:
\[F_{k,g}^{q\bar{q}}(x,Q^{2},m_{q}^{2})=xe_{H}^{2}\int_{x}^{z_{max}}H_{k,g}(z, \xi)g(\frac{x}{z},\mu^{2})\frac{dz}{z}, \tag{2}\]
where \(\mu=(Q^{2}+4m_{q}^{2})^{1/2}\) is the default common value for the factorization and renormalization scales, \(z_{max}=\frac{Q^{2}}{\mu^{2}}\) and \(\xi=\frac{Q^{2}}{m_{q}^{2}}\). In general, the heavy quark coefficient functions of \(H_{k,g}(z,\xi)\) (with \(k=2,L\)) are expanded in \(\alpha_{s}\) as follows:
\[H_{k,g}(z,\xi)=\sum_{i=1}^{\infty}\left(\frac{\alpha_{s}(\mu^{2})}{4\pi} \right)^{i}H_{k,g}^{(i)}(z,\xi),\ \ k=2,L, \tag{3}\]
where the heavy quark coefficient functions at the LO and NLO approximations, \(H_{k,g}^{(1)}\) and \(H_{k,g}^{(2)}\), are as follows:
\[H_{k,g}^{(1)}(z,\xi)=\frac{\xi}{\pi z}c_{k,g}^{(0)}(\eta,\xi), \tag{4}\]
\[H_{k,g}^{(2)}(z,\xi)=\frac{16\pi\xi}{z}\left[c_{k,g}^{(1)}(\eta,\xi)+\bar{c}_ {k,g}^{(1)}(\eta,\xi)\ln\left(\frac{\mu^{2}}{m_{q}^{2}}\right)\right], \tag{5}\]
where the coefficient functions \(c_{k,g}^{(0)}(\eta,\xi)\) have been given in Ref. [3]; the coefficients \(c_{k,g}^{(1)}\) and \(\bar{c}_{k,g}^{(1)}\) are rather lengthy, have not been published in print and are only available as computer codes [3]. In Ref. [38], the analytic form of the heavy quark coefficient functions has been presented for deep-inelastic lepton-hadron scattering in the kinematical regime \(Q^{2}\gg m_{q}^{2}\), where \(Q^{2}\) and \(m_{q}^{2}\) stand for the virtuality of the photon and the squared mass of the heavy quark, respectively.
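Before turning to our analysis, it is useful to spell out the combination measured at HERA. Comparing the two expressions in Eq. (1) gives
\[\sigma_{red}^{q\bar{q}}(x,Q^{2})=F_{2}^{q\bar{q}}(x,Q^{2})-\frac{y^{2}}{Y_{+}}F_{L}^{q\bar{q}}(x,Q^{2}),\]
so at low and moderate inelasticity, where \(y^{2}/Y_{+}\) is small (for example \(y^{2}/Y_{+}\simeq 0.006\) at \(y=0.1\)), the reduced cross section is dominated by \(F_{2}^{q\bar{q}}\), while at high \(y\) the longitudinal term becomes relevant, as noted above.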
In this paper, we obtain the heavy quark structure functions \(F_{2}^{q\bar{q}}\) and \(F_{L}^{q\bar{q}}\) at the LO and NLO approximations. These functions are obtained by using the heavy quark coefficient functions presented in Ref. [38] (in the kinematical regime \(Q^{2}\gg m_{q}^{2}\)) together with a set of control coefficients, which are obtained by using the heavy quark structure functions from HERA [5; 15; 17; 39; 40; 41; 42; 43], LHeC [44], other works such as Ref. [45] (for c and b quarks) and Ref. [31] (for the t quark), and also the CT18 distribution functions [46]. Then, we present the integrated and differential cross sections for heavy quarks in DIS and compare our numerical results with HERA data [4; 5; 14; 17] and with the results from MSTW2008 [47], MRST03 [48] and H1PDF [49].
This paper is organized as follows. In the next section, we present a brief summary of our previous work, including the structure functions of heavy quarks. In section III, we present a detailed numerical analysis of cross sections of heavy quarks. In the last section, we summarize our main conclusions and remarks.
## II Theoretical formalism
The heavy quark coefficient functions presented in Ref. [38] do not provide acceptable results in the regimes \(Q^{2}\geq m_{q}^{2}\) and \(Q^{2}\leq m_{q}^{2}\): the structure functions obtained from these coefficient functions rise too steeply there, especially for the charm quark. To control this growth, by using HERA data [5; 17; 39; 40; 41; 42; 43], the results from Refs. [44; 45] (for c and b quarks) and Ref. [31] (for the t quark), and the published CT18 initial distribution functions [46], we present a series of control coefficients that are functions of \(Q^{2}\) only. Multiplying the coefficient functions obtained in Ref. [38] by these coefficients yields acceptable results. The general form of these control coefficients is \(d-\exp(eQ^{2})\), where \(d\) and \(e\) are fixed numbers, listed in Table (1) for the heavy quarks.
To solve Eq. (2), it is necessary to obtain the gluon density function. To this end, we use the DGLAP evolution equations and the Laplace transform method, as in Refs. [50; 51; 52], relying on the fact that the Laplace transform of a convolution is simply the ordinary product of the Laplace transforms of the factors. The coupled DGLAP integral-differential equations are as follows [53]:
\[\frac{\partial F_{s}(x,Q^{2})}{\partial\ln Q^{2}}=\frac{\alpha_{s}(Q^{2})}{2 \pi}\bigg{[}P_{qq}(x,Q^{2})\otimes F_{s}(x,Q^{2})\]
\[+2n_{f}P_{qg}(x,Q^{2})\otimes G(x,Q^{2})\bigg{]}, \tag{5}\]
\[\frac{\partial G(x,Q^{2})}{\partial\ln Q^{2}}=\frac{\alpha_{s}(Q^{2})}{2\pi} \bigg{[}P_{gq}(x,Q^{2})\otimes F_{s}(x,Q^{2})\]
\[+P_{gg}(x,Q^{2})\otimes G(x,Q^{2})\bigg{]}, \tag{6}\]
where \(\alpha_{s}\) is the running strong coupling constant and \(P_{ab}(x)\)'s are the Altarelli-Parisi splitting functions. In the above equations, the symbol \(\otimes\) represents the convolution integral which is defined as \(f(x)\otimes h(x)=\int_{x}^{1}f(y)h(x/y)dy/y\). To convert Eqs. (5) and (6) into Laplace space, we insert the variables \(x=\exp(-v)\), \(y=\exp(-w)\) and \(\tau(Q^{2},Q^{2}_{0})=\frac{1}{4\pi}\int_{Q^{2}_{0}}^{Q^{2}}\alpha_{s}(Q^{ \prime^{2}})d\ln(Q^{\prime^{2}})\) into the DGLAP evolution equations. By using the Laplace transform method, one can turn the convolution equations at the LO and NLO approximations from \(v\)-space into \(r\)-space, and then solve them straightforwardly in \(r\)-space as:
\[f^{(i)}(r,Q^{2})=k^{(i)}_{ff}(r,\tau)f^{(i)}(r,Q^{2}_{0})+k^{(i)}_{fg}(r,\tau )g^{(i)}(r,Q^{2}_{0}), \tag{7}\]
\[g^{(i)}(r,Q^{2})=k^{(i)}_{gf}(r,\tau)f^{(i)}(r,Q^{2}_{0})+k^{(i)}_{gg}(r,\tau )g^{(i)}(r,Q^{2}_{0}), \tag{8}\]
with \(i=\) LO or NLO. The functions \(f(r,Q^{2}_{0})\) and \(g(r,Q^{2}_{0})\) are the singlet and gluon distribution functions at the initial scale \(\tau=0\), and \(\mathcal{L}[\hat{H}(v,\tau),v,r]=h(r,\tau)\). In Eqs. (7) and (8), the kernels \(k_{ij}(r,u)\) at the LO and NLO approximations can be found in Refs. [52; 54; 55]. Since the gluon distribution function obtained in Eq. (8) lives in the Laplace space \(r\) and its exact inversion is not possible through analytical techniques, its inverse Laplace transform must be computed numerically [54; 56]. To obtain the heavy quark structure functions in terms of the distribution functions at the initial scale, we turn Eq. (2) into the Laplace space \(r\). To this aim, the variable \(z=x/y\) and the transformation \(x\to xe^{(\ln 1/a)}\) (where \(a\) is larger than one) are used; therefore, Eq. (2) becomes:
\[F^{q\bar{q}}_{k,g}(xe^{(\ln 1/a)},Q^{2},m^{2}_{q})=e^{2}_{H}\int_{x}^{1}G(y, \mu^{2})\frac{dy}{y}\]
\[\times C_{k,g}(\frac{xe^{(\ln 1/a)}}{y},\xi),\quad k=2,L, \tag{9}\]
where \(G(y,Q^{2})=yg(y,Q^{2})\) and \(C_{k,g}(x)=xH_{k,g}(x)\). By using the variables \(x=\exp(-v)\), \(y=\exp(-w)\), one can rewrite the above equation as:
\[\hat{F}^{q\bar{q}}_{k,g}(v-\ln 1/a,Q^{2},m^{2}_{q})=e^{2}_{H}\int_{0}^{v}\hat{G} (w,\mu^{2})\]
\[\times\hat{C}_{k,g}(v-w-\ln 1/a,\xi)dw,\quad k=2,L, \tag{10}\]
Using the Laplace transform method, we can turn the above equation from \(v\)-space into \(r\)-space as follows:
\[f^{q\bar{q}}_{k,g}(r,Q^{2},m^{2}_{q})=e^{2}_{H}g(r,\mu^{2})h_{k,g}(r,\xi), \quad k=2,L, \tag{11}\]
where \(h_{k,g}(r,\xi)=\mathcal{L}[\hat{C}_{k,g}(v-\ln 1/a,\xi),v,r]\) at the LO approximation have been given in Ref. [32]. To obtain the heavy quark components \(F^{q\bar{q}}_{2}\) and \(F^{q\bar{q}}_{L}\) of the structure functions in Laplace space at the LO and NLO approximations, the gluon distribution function obtained in Eq. (8) is inserted into Eq. (11). But before that, \(Q^{2}\) must be replaced by \(\mu^{2}\). With these descriptions, one can write the structure functions as follows:
\[f^{q\bar{q}}_{k,g}(r,Q^{2},m^{2}_{q})=e^{2}_{H}\Bigg{[}j_{gf}(r,\mu^{2})f(r,Q^ {2}_{0})\]
\[+j_{gg}(r,\mu^{2})g(r,Q^{2}_{0})\Bigg{]},\quad k=2,L, \tag{12}\]
where
\[j_{gf}(r,\mu^{2})=h_{k,g}(r,\xi)k_{fg}(r,\tau(\mu^{2},Q^{2}_{0}))/a^{r},\]
\[j_{gg}(r,\mu^{2})=h_{k,g}(r,\xi)k_{gg}(r,\tau(\mu^{2},Q_{0}^{2}))/a^{r}. \tag{13}\]
Finally, using the Laplace inverse transform, we can obtain these structure functions in the usual space \(x\) as follows;
\[F_{k,g}^{q\bar{q}}(x,Q^{2},m_{q}^{2})=e_{H}^{2}\Bigg{[}J_{gf}(x,\mu^{2})\otimes F _{s}(x,Q_{0}^{2})\]
\[+J_{gg}(x,\mu^{2})\otimes G(x,Q_{0}^{2})\Bigg{]},\quad k=2,L \tag{14}\]
where \(J_{gf}(x,\mu^{2})={\cal L}^{-1}[j_{gf}(r,\mu^{2}),r,v]|_{v=\ln(1/x)}\) and \(J_{gg}(x,\mu^{2})={\cal L}^{-1}[j_{gg}(r,\mu^{2}),r,v]|_{v=\ln(1/x)}\). It should be noted that obtaining the heavy quark structure functions in Eq. (14) requires only a knowledge of the singlet \(F_{s}(x)\) and gluon \(G(x)\) distribution functions at the starting value \(Q_{0}^{2}\).
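The bookkeeping behind Eqs. (5)-(14) rests on one elementary identity. With \(x=e^{-v}\), \(y=e^{-w}\) and \(\hat{f}(w):=f(e^{-w})\), the convolution defined after Eq. (6) becomes an ordinary Laplace-type convolution,
\[f(x)\otimes h(x)=\int_{x}^{1}f(y)\,h(x/y)\,\frac{dy}{y}=\int_{0}^{v}\hat{f}(w)\,\hat{h}(v-w)\,dw,\]
whose Laplace transform with respect to \(v\) is the ordinary product \(f(r)h(r)\). This is why the coupled DGLAP equations and Eq. (2) reduce to the algebraic relations (7), (8), (11) and (12) in \(r\)-space, leaving only the final inverse transform in Eq. (14) to be evaluated numerically.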
## III Numerical results
Now, we present our numerical results for the production cross section of heavy quarks in the \(e^{-}p\) interaction at the LO and NLO approximations, obtained by using Eq. (1) and the DGLAP evolution equations. In order to present more detailed discussions of our findings, the numerical results for the structure functions and the production cross sections of heavy quarks are compared with HERA data [4; 5; 9; 14; 17; 41; 42; 43] and with the results from MSTW2008 [47], MRST03 [48] and H1PDF [49]. To extract numerical results, we use the published CT18 [46] initial starting functions \(F_{s}(x)\) and \(G(x)\). We consider the uncertainties due to the running charm, beauty and top (t) quark masses, \(m_{c}=1.29^{+0.077}_{-0.053}~{}GeV\), \(m_{b}=4.049^{+0.138}_{-0.118}~{}GeV\)[6] and \(m_{t}=173.5^{+3.9}_{-3.8}~{}GeV\)[57], where the uncertainties are obtained by adding the experimental fit, model and parameterization uncertainties in quadrature.
Figures (1) and (2) show the numerical results for the c quark structure functions (\(F_{2}^{c\bar{c}}(x,Q^{2})\) and \(F_{L}^{c\bar{c}}(x,Q^{2})\)) at \(Q^{2}=11,60,130\) and \(500~{}GeV^{2}\) and the b quark structure functions (\(F_{2}^{b\bar{b}}(x,Q^{2})\) and \(F_{L}^{b\bar{b}}(x,Q^{2})\)) at \(Q^{2}=12,60,200\) and \(650~{}GeV^{2}\). These results are presented at the LO and NLO approximations and compared with those presented by the ZEUS [41; 42; 43] and H1 [9; 13; 17] colliders and with the results from the MSTW2008 predictions [47]. In these figures, since no data on the heavy quark longitudinal structure functions have been published, we only compare \(F_{2}^{c\bar{c}}(x,Q^{2})\) and \(F_{2}^{b\bar{b}}(x,Q^{2})\) with those presented by the ZEUS and H1 colliders. As can be seen, our numerical results at the NLO approximation are closer to the experimental data than the results at the LO approximation.
In figure (3), as a comparison, we show our numerical results for the c quark reduced cross section together with ZEUS [5] data as a function of \(x\). The numerical results for this cross section are shown at the LO and NLO approximations at \(Q^{2}=6.5,12,30,80,160\) and \(600~{}GeV^{2}\). It can be concluded that at low energy scales our results are very close to the ZEUS data, showing that the presented control coefficients are suitable. We also compare the c quark reduced cross section at the NLO approximation at \(Q^{2}=5,12,60,200,650\) and \(2000GeV^{2}\) with H1 [4] data and with the results from the MSTW2008 predictions [47] in figure (4).
Figure (5) indicates our numerical results of the b quark reduced cross section at the LO and NLO approximations in \(Q^{2}=6.5,12,30,80,160\) and \(600~{}GeV^{2}\) compared with ZEUS [5] data. Moreover, in figure (6), we compare the b quark reduced cross section at the NLO approximation with H1 [4] data and with the results from the MSTW2008 predictions [47].
In figure (7), we present the numerical results of the t quark reduced cross section at the LO and NLO approximations at \(Q^{2}=6.5,12,30,80,160\) and \(600~{}GeV^{2}\). Here, we must state that the t quark longitudinal structure function at the LO and NLO approximations is very small relative to the structure function \(F_{2}^{t\bar{t}}(x,Q^{2})\) and the value of this function at the specified energies does not have a significant effect on the t quark reduced cross section.
Figure (8) presents a comparison between the reduced cross sections of the heavy quarks at \(Q^{2}=1000,5000\) and \(10000~{}GeV^{2}\). In this figure, at \(Q^{2}=1000GeV^{2}\) and minimum \(x\), the ratios \(\sigma_{red}^{b\bar{b}}/\sigma_{red}^{c\bar{c}}\) and \(\sigma_{red}^{t\bar{t}}/\sigma_{red}^{c\bar{c}}\) are approximately 0.11 and 0.0002, respectively, and at \(Q^{2}=10000GeV^{2}\) and minimum \(x\) these ratios are approximately 0.18 and 0.0035, respectively. These results show that with increasing energy the production cross section of the t quark grows faster than that of the b quark.
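For orientation, the squared-charge factor \(e_{H}^{2}\) in Eq. (2) alone would give \(\sigma_{red}^{b\bar{b}}/\sigma_{red}^{c\bar{c}}\simeq e_{b}^{2}/e_{c}^{2}=1/4\); the smaller values quoted above reflect the additional mass suppression entering Eq. (2) through \(z_{max}\) and \(\mu^{2}=Q^{2}+4m_{q}^{2}\), which weakens as \(Q^{2}\) grows relative to \(4m_{q}^{2}\), consistent with the rise of both ratios between \(Q^{2}=1000GeV^{2}\) and \(10000~{}GeV^{2}\).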
In order to assess the significance of the theoretical uncertainty at the LO and NLO approximations, we show the \(Q^{2}\) dependence of the single differential cross section of heavy quarks, \(d\sigma^{q\bar{q}}/dQ^{2}\), at the LO and NLO approximations in figure (9). In this figure, the differential cross sections of the c, b and t quarks are presented at the center-of-mass energies \(\sqrt{s}=319GeV\), \(\sqrt{s}=1.3TeV\) and \(\sqrt{s}=3.5TeV\) in the \(e^{-}p\) interaction at \(0.1<y<0.7\). We also show the single differential cross sections of the c and b quarks as a function of \(Q^{2}\) at the CME of \(\sqrt{s}=319GeV\) and \(0.02<y<0.7\) at the NLO approximation in figure (10). The data are given together with their statistical and systematic uncertainties (not including the error on the integrated luminosity). Moreover, the single differential cross sections of the c and b quarks as a function of \(x\) at the CME of \(\sqrt{s}=319GeV\) and \(0.02<y<0.7\) at the NLO approximation are presented in figure (11).
In Table (2), the integrated cross sections are compared with H1 [30] and ZEUS [14] data and with the predictions from NLO QCD. The integrated cross sections for c and b quarks have been measured as \(373\pm 39\pm 47~{}pb\) and \(55.4\pm 8.7\pm 12.0~{}pb\), respectively, by the H1 vertex detector for photon virtualities \(Q^{2}>150~{}GeV^{2}\) and inelasticity \(0.1<y<0.7\). Our numerical results for this cross section for the c quark at the \(e^{-}p\) CME of \(\sqrt{s}=319~{}GeV\), with \(Q^{2}>150~{}GeV^{2}\) and inelasticity \(0.1<y<0.7\), are \(312\pm 7~{}pb\) at the LO approximation and \(331\pm 3~{}pb\) at the NLO approximation, and for the b quark they are \(30.9\pm 1.1~{}pb\) and \(35.7\pm 1.0~{}pb\), respectively. These results are presented in Table (2) and compared with the VFNS predictions from MRST03 [48] and the H1PDF data [49]. In addition to these results, we show the integrated cross sections for c and b quarks at the \(e^{-}p\) CMEs of \(\sqrt{s}=1.3TeV\) and \(\sqrt{s}=3.5TeV\). It should be noted that the integrated cross sections for c and b quarks at \(\sqrt{s}=319GeV\) at the NLO approximation are larger than those at the LO approximation, whereas at \(\sqrt{s}=1.3TeV\) and \(\sqrt{s}=3.5TeV\) the trend is reversed. Furthermore, we obtain and present the integrated cross sections for the t quark at \(\sqrt{s}=1.3TeV\) and \(\sqrt{s}=3.5TeV\) in Table (2) as a prediction from our calculations. This cross section at \(\sqrt{s}=319GeV\), for photon virtualities \(Q^{2}>150GeV^{2}\) and inelasticity \(0.1<y<0.7\), is zero.
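The vanishing of the t quark cross section at \(\sqrt{s}=319GeV\) is purely kinematic: the invariant mass of the photon-gluon system is roughly bounded by \(W\simeq\sqrt{ys}\lesssim\sqrt{0.7}\times 319GeV\approx 267GeV\), well below the \(t\bar{t}\) threshold \(2m_{t}\approx 347GeV\), so the PGF process cannot produce a top quark pair in this region.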
All of the results clearly show that the extracted procedure provides the correct behavior of the structure functions and of the production cross sections of the heavy quarks at the LO and NLO approximations. Moreover, it should be noted that the NLO corrections are small at high \(x\), but in the low \(x\) region these corrections have a sizeable effect on the results, especially at low \(Q^{2}\). Furthermore, they often allow one to reduce the uncertainties of the predicted results, as can be seen by comparing the bands in almost all of the plots presented in the figures. It should be emphasized that here we have only obtained the cross section of heavy quark production in the photon fragmentation region, because at high energies such as \(\sqrt{s}=3.5~{}TeV\) the contribution of heavy quark pair production via gluon-gluon splitting must also be considered.
## IV Conclusion
In conclusion, we have presented the production cross sections of heavy quarks (\(\sigma^{c\bar{c}}\), \(\sigma^{b\bar{b}}\) and \(\sigma^{t\bar{t}}\)) and their single differential cross sections (\(d\sigma^{q\bar{q}}/dQ^{2}\) and \(d\sigma^{q\bar{q}}/dx\)) by utilizing the heavy quark structure functions \(F_{2}^{q\bar{q}}\) and \(F_{L}^{q\bar{q}}\) obtained from the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution equations and a suitable fit for the heavy quark coefficient functions at the NLO approximation. Indeed, we have shown that the Laplace transform method is a suitable alternative scheme for solving the DGLAP evolution equations and Eq. (2). It should be noted that the obtained equations are general and require only a knowledge of the parton distribution functions \(F_{s}(x)\) and \(G(x)\) at the starting value \(Q_{0}^{2}\). The comparisons have shown that our numerical results for the charm and beauty production cross sections agree with the H1 and ZEUS data well within errors. Also, in this paper, we have compared the production cross sections at the center-of-mass energy \(\sqrt{s}=319GeV\) and at \(0.1<y<0.7\) with the results from H1 PDF 2000, MRST03 and MSTW2008. We have also obtained the production cross sections of heavy quarks at the center-of-mass energies \(\sqrt{s}=1.3TeV\) and \(\sqrt{s}=3.5TeV\), taking into account the uncertainties due to the running charm, beauty and top quark masses. In addition, we have presented the production cross section of the top quark at the center-of-mass energies \(\sqrt{s}=1.3TeV\) and \(\sqrt{s}=3.5TeV\).
Figure (1): The charm quark structure functions \(F_{2(L)}^{c\bar{c}}\) compared with data from H1 [9; 13; 17], ZEUS [41; 42; 43], and MSTW2008 [47] at the NLO approximation.
Figure (2): The beauty quark structure functions \(F_{2(L)}^{b\bar{b}}\) compared with data from H1 [17] and MSTW2008 [47] at the NLO approximation.
Figure (3): The reduced charm quark cross section as a function of \(x\) for six different values of \(Q^{2}\) at the LO and NLO approximations compared with the ZEUS data [5]. The error bars represent the statistical, systematic (not including the error on the integrated luminosity) and extrapolation uncertainties added in quadrature. The shaded areas are the uncertainties due to the running quark mass.
Figure (4): The reduced charm quark cross section as a function of \(x\) at the NLO approximation compared with the H1 data [4] and the results from the MSTW2008 predictions [47].
Figure (5): The reduced beauty quark cross section. For more details, see the caption of figure (3).
Figure (7): The reduced top quark cross section as a function of \(x\) for six different values of \(Q^{2}\) at the LO and NLO approximations.
Figure (8): A comparison between the reduced cross section of heavy quarks at large values of \(Q^{2}\) at the NLO approximation.
Figure (9): The results of the differential cross section of the c, b and t quarks as a function of \(Q^{2}\) at the center-of-mass energies of \(\sqrt{s}=319GeV\), \(\sqrt{s}=1.3TeV\) and \(\sqrt{s}=3.5TeV\) at the LO and NLO approximations.
Figure (11): The results of the differential cross section of the c and b quarks as a function of \(x\) at the center of mass energy of \(\sqrt{s}=319GeV\) at the NLO approximation compared with ZEUS data [5].
\begin{tabular}{l l l l} \hline & \(\sqrt{s}=319GeV\) & \(\sqrt{s}=1.3TeV\) & \(\sqrt{s}=3.5TeV\) \\ \hline \(\sigma^{c\bar{c}}\) (\(pb\)) & & & \\ LO & \(312\pm 7\) & \(2305\pm 45\) & \(11774\pm 217\) \\ NLO & \(331\pm 3\) & \(2044\pm 28\) & \(7238\pm 115\) \\ H1 & \(373\pm 39\pm 47\) & \(---\) & \(---\) \\ H1PDF & 455 & \(---\) & \(---\) \\ ZEUS & 419 & \(---\) & \(---\) \\ MRST03 & 426 & \(---\) & \(---\) \\ \(\sigma^{b\bar{b}}\) (\(pb\)) & & & \\ LO & \(30.9\pm 1.1\) & \(251.8\pm 8.3\) & \(931.7\pm 28.4\) \\ NLO & \(35.7\pm 1.0\) & \(283.7\pm 6.1\) & \(917.1\pm 21.1\) \\ H1 & \(55.4\pm 8.7\pm 12.0\) & \(---\) & \(---\) \\ H1PDF & 52 & \(---\) & \(---\) \\ ZEUS & 37 & \(---\) & \(---\) \\ MRST03 & 47 & \(---\) & \(---\) \\ \(\sigma^{t\bar{t}}\) (\(fb\)) & & & \\ LO & \(---\) & \(1.33\pm 0.18\) & \(79.1\pm 6.2\) \\ NLO & \(---\) & \(1.39\pm 0.17\) & \(81.2\pm 6.5\) \\ \hline \end{tabular}
Figure (10): The results of the differential cross section of the c and b quarks as a function of \(Q^{2}\) at the center-of-mass energy of \(\sqrt{s}=319GeV\) at the LO and NLO approximations compared with ZEUS data [5].
Figure (12): The results of the differential cross section of the c and b quarks as a function of \(Q^{2}\) at the center-of-mass energy of \(\sqrt{s}=319GeV\) at the LO approximation compared with ZEUS data [5]. |
2310.16950 | Stability manifolds of Kuznetsov components of prime Fano threefolds | Let $X$ be a cubic threefold, quartic double solid or Gushel--Mukai
threefold, and $\mathcal{K}u(X)\subset \mathrm{D}^b(X)$ be its Kuznetsov
component. We show that a stability condition $\sigma$ on $\mathcal{K}u(X)$ is
Serre-invariant if and only if its homological dimension is at most $2$. As a
corollary, we prove that all Serre-invariant stability conditions on
$\mathcal{K}u(X)$ form a contractible connected component of the stability
manifold. | Changping Fan, Zhiyu Liu, Songtao Kenneth Ma | 2023-10-25T19:34:39Z | http://arxiv.org/abs/2310.16950v1 | # Stability manifolds of Kuznetsov components of prime Fano threefolds
###### Abstract.
Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold, and \(\mathcal{K}u(X)\subset\mathrm{D}^{b}(X)\) be its Kuznetsov component. We show that a stability condition \(\sigma\) on \(\mathcal{K}u(X)\) is Serre-invariant if and only if its homological dimension is at most \(2\). As a corollary, we prove that all Serre-invariant stability conditions on \(\mathcal{K}u(X)\) form a contractible connected component of the stability manifold.
Key words and phrases: Derived categories, Bridgeland stability conditions, Kuznetsov components, Fano threefolds, stability manifolds. 2010 Mathematics Subject Classification: Primary 14F05; secondary 14J45, 14D20, 14D23
###### Contents
* 1 Introduction
* 2 Kuznetsov components
* 3 Bridgeland stability conditions
* 4 Preliminary results
* 5 A criterion of Serre-invariant stability conditions
## 1. Introduction
Motivated by the concept of \(\Pi\)-stability in string theory due to Douglas, the notion of a stability condition on a triangulated category \(\mathcal{D}\) was introduced by Bridgeland in [6]. It is proved in [6] that the set \(\mathrm{Stab}(\mathcal{D})\) of all stability conditions (with respect to a fixed lattice) on \(\mathcal{D}\) has a natural structure of a complex manifold. In recent years, the geometry of \(\mathrm{Stab}(\mathcal{D})\) has played a central role in the theory of homological mirror symmetry, representation theory, symplectic geometry, and moduli of sheaves, see e.g. [16, 9, 1, 4, 3, 17, 10, 7].
One of the most fundamental examples is \(\mathcal{D}=\mathrm{D}^{b}(C)\), where \(C\) is a smooth projective curve. Due to [29] and [27], \(\mathrm{Stab}(\mathrm{D}^{b}(C))\) is a contractible manifold and has a complete description. More generally, there is a folklore conjecture that if \(\mathrm{Stab}(\mathcal{D})\) is non-empty, then it is simply connected. However, with current techniques, it still seems impossible to prove even basic properties of \(\mathrm{Stab}(\mathcal{D})\), such as connectedness. Therefore, it is often better to study a distinguished connected component of \(\mathrm{Stab}(\mathcal{D})\). For example, when \(\mathcal{D}=\mathrm{D}^{b}(X)\) for a smooth projective variety \(X\), we say a component is distinguished if it contains a geometric stability condition, with respect to which skyscraper sheaves of closed points are all stable of the same phase. Results along this direction include [8, 5, 1, 15, 26, 36, 37, 12, 11, 38], etc.
The examples above mainly focus on the situation when \(\mathcal{D}\) is the bounded derived category of a certain smooth variety or the derived category of representations of a well-described quiver or algebra. On the other hand, by a series of works [21, 22, 23], one can associate a very interesting admissible triangulated subcategory \(\mathcal{K}u(X)\subset\mathrm{D}^{b}(X)\), called the Kuznetsov component, to a prime Fano threefold \(X\).
In this paper, we study stability conditions on the Kuznetsov component \(\mathcal{K}u(X)\) when \(X\) is a cubic threefold, quartic double solid or Gushel-Mukai threefold. Our first main result, Theorem 1.1, states that all Serre-invariant stability conditions on \(\mathcal{K}u(X)\) form a contractible connected component of the stability manifold \(\mathrm{Stab}(\mathcal{K}u(X))\). Our second main result, Theorem 1.2, is a criterion for Serre-invariance: for a stability condition \(\sigma\) on \(\mathcal{K}u(X)\), the following conditions are equivalent: (1) the homological dimension of \(\sigma\) is at most \(2\); (2) the global dimension of \(\sigma\) is at most \(2\); (3) \(\sigma\) is Serre-invariant.
The implications \((3)\Rightarrow(2)\) and \((2)\Rightarrow(1)\) in Theorem 1.2 are easy. However, it is quite surprising to the authors that (1) implies (3), since (2) and (3) are preserved by the \(\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\)-action, while condition (1) is not in general.
Finally, we conjecture that Theorem 1.2 holds for any stability condition on \(\mathcal{K}u(X)\) (see Conjecture 5.10), which together with Theorem 1.1 implies \(\operatorname{Stab}(\mathcal{K}u(X))\cong\widetilde{\operatorname{GL}}^{+}(2, \mathbb{R})\cong\mathbb{C}\times\mathbb{H}\).
### Notation and conventions
* We denote the phase and slope with respect to a stability condition \(\sigma\) by \(\phi_{\sigma}\) and \(\mu_{\sigma}\), respectively. The maximal/minimal phase of the Harder-Narasimhan factors of a given object will be denoted by \(\phi_{\sigma}^{+}\) and \(\phi_{\sigma}^{-}\), respectively.
* We use \(\hom\) and \(\operatorname{ext}^{i}\) to denote the dimensions of the vector spaces \(\operatorname{Hom}\) and \(\operatorname{Ext}^{i}\). We denote \(\operatorname{RHom}(-,-)=\bigoplus_{i\in\mathbb{Z}}\operatorname{Hom}(-,-[i])[-i]\).
* All triangulated categories are assumed to be \(\mathbb{C}\)-linear of finite type, i.e. for any two objects \(E,F\) we have \(\sum_{i\in\mathbb{Z}}\operatorname{ext}^{i}(E,F)<+\infty\).
* We denote the numerical class in the numerical Grothendieck group by \([E]\) for any object \(E\). In our setting, giving a numerical class is equivalent to giving a Chern character.
### Plan of the paper
In Section 2, we introduce Kuznetsov components \(\mathcal{K}u(X)\) and recollect some basic properties. In Section 3, we recall some basic definitions and properties of Bridgeland stability conditions on \(\mathcal{K}u(X)\). Then we review the definition of global dimension and homological dimension, and prove some general preliminary results in Section 4. Finally, we prove our main results Theorem 5.7, Theorem 5.8 and Corollary 5.9 in Section 5.
### Acknowledgements
We would like to thank Chunyi Li and Alexander Perry for their careful reading and many useful suggestions. The third author would also like to thank Dima Arinkin and Andrei Caldararu for useful discussions. Part of the work was finished when the second author visited the Hausdorff Research Institute for Mathematics (HIM). He is grateful for the wonderful working environment and hospitality.
## 2. Kuznetsov components
In this paper, we consider a smooth projective threefold \(X\) with \(-K_{X}\) ample and Picard rank one, i.e. a prime Fano threefold. In the following, we are mainly concerned about three types of prime Fano threefolds:
* cubic threefolds: \(X\subset\mathbb{P}^{4}\) is a smooth cubic hypersurface,
* quartic double solids: there is a double cover \(X\to\mathbb{P}^{3}\) branched along a smooth quartic surface, and
* Gushel-Mukai threefolds: a smooth intersection \[X=\operatorname{Cone}(\operatorname{Gr}(2,5))\cap Q,\] where \(\operatorname{Cone}(\operatorname{Gr}(2,5))\subset\mathbb{P}^{10}\) is the projective cone over the Plucker embedded Grassmannian \(\operatorname{Gr}(2,5)\subset\mathbb{P}^{9}\), and \(Q\subset\mathbb{P}^{10}\) is a quadric hypersurface in a linear subspace \(\mathbb{P}^{7}\subset\mathbb{P}^{10}\).
When \(X\) is a cubic threefold or quartic double solid, we have a semiorthogonal decomposition
\[\mathrm{D}^{b}(X)=\langle\mathcal{K}u(X),\mathcal{O}_{X},\mathcal{O}_{X}(1)\rangle,\]
where \(\mathcal{O}_{X}(1)\) is the ample generator of \(\operatorname{Pic}(X)\).
When \(X\) is a Gushel-Mukai threefold, there is also a semiorthogonal decomposition given in [25]:
\[\mathrm{D}^{b}(X)=\langle\mathcal{K}u(X),\mathcal{O}_{X},\mathcal{E}_{X}^{\vee}\rangle,\]
where \(\mathcal{E}_{X}\) is the pullback of the tautological subbundle on \(\mathrm{Gr}(2,5)\) along \(X\to\mathrm{Gr}(2,5)\).
**Definition 2.1**.: Let \(X\) be a cubic threefold, quartic double solid, or Gushel-Mukai threefold. The _Kuznetsov component_ of \(X\) is the admissible triangulated subcategory \(\mathcal{K}u(X)\subset\mathrm{D}^{b}(X)\) constructed above.
Let \(\mathrm{K}(\mathcal{K}u(X))\) denote the Grothendieck group of \(\mathcal{K}u(X)\). We have the bilinear Euler form on \(\mathrm{K}(\mathcal{K}u(X))\) defined by
\[\chi([E],[F])=\sum_{i\in\mathbb{Z}}(-1)^{i}\operatorname{ext}^{i}(E,F)\]
for \([E],[F]\in\mathrm{K}(\mathcal{K}u(X))\). The _numerical Grothendieck group_ of \(\mathcal{K}u(X)\) is defined as
\[\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X)):=\mathrm{K}(\mathcal{K}u(X))/ \ker\chi.\]
Therefore, the Euler form on \(\mathrm{K}(\mathcal{K}u(X))\) can be defined on \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\) as well, which we also denote by \(\chi(-,-)\).
The Serre functor of a Kuznetsov component can be computed from the Serre functor of \(X\) and mutation functors. We define
\[\mathbf{O}:=\mathbf{L}_{\mathcal{O}_{X}}\circ(-\otimes\mathcal{O}_{X}(1)),\]
where \(\mathbf{L}_{\mathcal{O}_{X}}\) is the left mutation functor along \(\mathcal{O}_{X}\). When \(X\) is a cubic threefold or a quartic double solid, it is an autoequivalence of \(\mathcal{K}u(X)\).
* When \(X\) is a cubic threefold, we have \(S_{\mathcal{K}u(X)}\cong\mathbf{O}[1]\). In this case, \(S_{\mathcal{K}u(X)}^{3}\cong[5]\). The functor \(S_{\mathcal{K}u(X)}\) acts non-trivially on \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\).
* When \(X\) is a quartic double solid or a Gushel-Mukai threefold, we have \(S_{\mathcal{K}u(X)}^{2}\cong[4]\). In this case we define \(\tau:=S_{\mathcal{K}u(X)}[-2]\). The functor \(S_{\mathcal{K}u(X)}\) (and hence \(\tau\)) acts trivially on \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\).
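In the second case, \(\tau^{2}=S_{\mathcal{K}u(X)}^{2}[-4]\cong\operatorname{id}\), so \(\tau\) is an involutive autoequivalence of \(\mathcal{K}u(X)\).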
Finally, we give a description of \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\) and its Euler form.
When \(X\) is a cubic threefold or a quartic double solid, a computation using [23, pp.6] shows that, the rank two lattice \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\) is generated by
\[v=[\mathcal{I}_{l}],\text{ and }w=[\mathbf{O}(\mathcal{I}_{l})[1]],\]
where \(\mathcal{I}_{l}\) is the ideal sheaf of a line \(l\subset X\). The Euler form \(\chi(-,-)\) with respect to the basis \(v\) and \(w\) is given by the matrix
\[\begin{pmatrix}-1&-1\\ 0&-1\end{pmatrix} \tag{1}\]
when \(X\) is a cubic threefold, and
\[\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix} \tag{2}\]
when \(X\) is a quartic double solid.
For a Gushel-Mukai threefold \(X\), \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\) is computed in [25] and [23, pp.5]. The numerical Grothendieck group \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\) is a rank two lattice with basis vectors
\[v=[\mathcal{I}_{C}],\text{ and }w=[F],\]
where \(C\subset X\) is a conic and \(F\) is a rank two slope-stable sheaf with \(c_{1}(F)=-1,c_{2}(F)=5\) and \(c_{3}(F)=0\). The Euler form \(\chi(-,-)\) with respect to the basis is
\[\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}. \tag{3}\]
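For later use (for instance in the proof of Lemma 3.8 below), note that these pairings give, for a class \([E]=av+bw\),
\[\chi([E],[E])=\begin{cases}-(a^{2}+ab+b^{2}),&X\text{ a cubic threefold},\\ -(a^{2}+b^{2}),&X\text{ a quartic double solid or Gushel-Mukai threefold}.\end{cases}\]
In particular, the only classes with \(\chi([E],[E])\in\{-1,-2\}\) are \(\pm v,\pm w,\pm(v-w)\) in the first case and \(\pm v,\pm w,\pm v\pm w\) in the second, and all of these classes are primitive.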
## 3. Bridgeland stability conditions
In this section, we recall basic definitions of Bridgeland stability conditions on triangulated categories. Then we focus on the stability conditions on Kuznetsov components. We follow [6] and [2].
### Bridgeland stability conditions
Let \(\mathcal{D}\) be a triangulated category and \(\operatorname{K}(\mathcal{D})\) its Grothendieck group.
**Definition 3.1**.: The _heart of a bounded t-structure_ on \(\mathcal{D}\) is an abelian subcategory \(\mathcal{A}\subset\mathcal{D}\) such that the following conditions are satisfied:
1. for any \(E,F\in\mathcal{A}\) and \(n<0\), we have \(\operatorname{Hom}(E,F[n])=0\);
2. for any object \(E\in\mathcal{D}\) there exist objects \(E_{i}\in\mathcal{A}\) and maps \[0=E_{0}\stackrel{{\pi_{1}}}{{\longrightarrow}}E_{1}\stackrel{{ \pi_{2}}}{{\longrightarrow}}\cdots\stackrel{{\pi_{m}}}{{ \longrightarrow}}E_{m}=E\] such that \(\operatorname{cone}(\pi_{i})=A_{i}[k_{i}]\) where \(A_{i}\in\mathcal{A}\) and the \(k_{i}\) are integers such that \(k_{1}>k_{2}>\cdots>k_{m}\).
Fix a surjective morphism \(v\colon\operatorname{K}(\mathcal{D})\to\Lambda\) to a finite rank lattice \(\Lambda\).
**Definition 3.2**.: A _stability condition_ on \(\mathcal{D}\) is a pair \(\sigma=(\mathcal{A},Z)\) where \(\mathcal{A}\) is the heart of a bounded t-structure on \(\mathcal{D}\), and \(Z:\Lambda\to\mathbb{C}\) is a group homomorphism such that
1. the composition \(Z\circ v:\operatorname{K}(\mathcal{A})=\operatorname{K}(\mathcal{D})\to \mathbb{C}\) satisfies: for any \(E\neq 0\in\mathcal{D}\) we have \(\operatorname{Im}Z(v(E))\geq 0\) and if \(\operatorname{Im}Z(v(E))=0\) then \(\operatorname{Re}Z(v(E))<0\). From now on, we write \(Z(E)\) rather than \(Z(v(E))\).
We can define a _slope_\(\mu_{\sigma}\) for \(\sigma\) using \(Z\). For any \(E\in\mathcal{A}\), set
\[\mu_{\sigma}(E):=\begin{cases}-\frac{\operatorname{Re}Z(E)}{\operatorname{Im }Z(E)},&\operatorname{Im}Z(E)>0\\ +\infty,&\text{else}.\end{cases}\]
We say an object \(0\neq E\in\mathcal{A}\) is \(\sigma\)-(semi)stable if \(\mu_{\sigma}(F)<\mu_{\sigma}(E)\) (respectively \(\mu_{\sigma}(F)\leq\mu_{\sigma}(E)\)) for all proper subobjects \(F\subset E\).
2. Any object \(E\in\mathcal{A}\) has a Harder-Narasimhan (HN) filtration in terms of \(\sigma\)-semistability;
3. There exists a quadratic form \(Q\) on \(\Lambda\otimes\mathbb{R}\) such that \(Q|_{\ker Z}\) is negative definite, and \(Q(E)\geq 0\) for all \(\sigma\)-semistable objects \(E\in\mathcal{A}\).
**Definition 3.3**.: The _phase_ of a \(\sigma\)-semistable object \(E\in\mathcal{A}\) is
\[\phi(E):=\frac{1}{\pi}\mathrm{arg}(Z(E))\in(0,1].\]
Specially, if \(Z(E)=0\) then \(\phi(E)=1\). If \(F=E[n]\), then we define
\[\phi(F):=\phi(E)+n\]
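As a remark connecting Definition 3.2 and Definition 3.3: for \(E\in\mathcal{A}\) with \(\operatorname{Im}Z(E)>0\), writing \(Z(E)=m(E)e^{i\pi\phi(E)}\) with \(m(E)>0\) gives
\[\mu_{\sigma}(E)=-\frac{\operatorname{Re}Z(E)}{\operatorname{Im}Z(E)}=-\cot(\pi\phi(E)),\]
which is an increasing function of \(\phi(E)\) on \((0,1)\). Hence comparing slopes of subobjects as in Definition 3.2 is equivalent to comparing phases.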
A _slicing_\(\mathcal{P}\) of \(\mathcal{D}\) consists of full additive subcategories \(\mathcal{P}(\phi)\subset\mathcal{D}\) for each \(\phi\in\mathbb{R}\) satisfying
1. for \(\phi\in(0,1]\), the subcategory \(\mathcal{P}(\phi)\) is given by the zero object and all \(\sigma\)-semistable objects whose phase is \(\phi\);
2. for \(\phi+n\) with \(\phi\in(0,1]\) and \(n\in\mathbb{Z}\), we set \(\mathcal{P}(\phi+n):=\mathcal{P}(\phi)[n]\).
We will use both notations \(\sigma=(\mathcal{A},Z)\) and \(\sigma=(\mathcal{P},Z)\) for a stability condition \(\sigma\) with heart \(\mathcal{A}=\mathcal{P}((0,1])\) where \(\mathcal{P}\) is the slicing of \(\sigma\).
In this paper, we let \(\mathcal{D}\) be the Kuznetsov component and \(\Lambda\) be the numerical Grothendieck group \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\), which is \(\mathrm{K}(\mathcal{K}u(X))\) modulo the kernel of the Euler form \(\chi(-,-)\).
For a stability condition \(\sigma=(\mathcal{A},Z)\) on \(\mathcal{D}\), we define the _homological dimension_ of \(\mathcal{A}\) as the smallest integer \(\mathrm{homdim}(\mathcal{A})\) such that \(\mathrm{Hom}(A,B[n])=0\) for any \(A,B\in\mathcal{A}\) and any \(n>\mathrm{homdim}(\mathcal{A})\). The homological dimension of a stability condition is defined as the homological dimension of its heart.
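For example, if \(C\) is a smooth projective curve, the standard heart \(\operatorname{Coh}(C)\subset\mathrm{D}^{b}(C)\) has homological dimension one, while by Proposition 3.7 below the heart of any Serre-invariant stability condition on \(\mathcal{K}u(X)\) has homological dimension two.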
### The stability manifold
The set of stability conditions \(\mathrm{Stab}(\mathcal{D})\) has a natural topology induced by a generalized metric. Moreover, the following theorem of Bridgeland states that the generalized metric space \(\mathrm{Stab}(\mathcal{D})\) is a complex manifold.
**Theorem 3.4**.: (Bridgeland Deformation Theorem, [6]) _The continuous map_
\[\mathcal{Z}:\mathrm{Stab}(\mathcal{D})\to\mathrm{Hom}(\Lambda,\mathbb{C}), \quad(\mathcal{A},Z)\mapsto Z\]
_is a local homeomorphism. In particular, the generalized metric space \(\mathrm{Stab}(\mathcal{D})\) has the structure of a complex manifold of dimension \(\mathrm{rk}(\Lambda)\)._
Next, we recall two natural group actions on \(\mathrm{Stab}(\mathcal{D})\).
1. An element \(\tilde{g}=(g,G)\) in the universal covering \(\widetilde{\mathrm{GL}}^{+}(2,\mathbb{R})\) of the group \(\mathrm{GL}^{+}(2,\mathbb{R})\) consists of an increasing function \(g:\mathbb{R}\to\mathbb{R}\) such that \(g(\phi+1)=g(\phi)+1\) and a matrix \(G\in\mathrm{GL}^{+}(2,\mathbb{R})\) with \(\det(G)>0\). It acts on the right on the stability manifold by \(\sigma\cdot\tilde{g}:=(G^{-1}\circ Z,\mathcal{P}(g(\phi)))\) for any \(\sigma=(\mathcal{P},Z)\in\mathrm{Stab}(\mathcal{D})\) (see [6, Lemma 8.2]).
2. Let \(\mathrm{Aut}_{\Lambda}(\mathcal{D})\) be the group of exact autoequivalences of \(\mathcal{D}\), whose action \(\Phi_{*}\) on \(\mathrm{K}(\mathcal{D})\) is compatible with \(\upsilon\colon\mathrm{K}(\mathcal{D})\to\Lambda\). For \(\Phi\in\mathrm{Aut}_{\Lambda}(\mathcal{D})\) and \(\sigma=(\mathcal{P},Z)\in\mathrm{Stab}(\mathcal{D})\), we define a left action of the group of linear exact autoequivalences \(\mathrm{Aut}_{\Lambda}(\mathcal{D})\) by \(\Phi\cdot\sigma=(\Phi(\mathcal{P}),Z\circ\Phi_{*}^{-1})\), where \(\Phi_{*}\) is the automorphism of \(\mathrm{K}(\mathcal{D})\) induced by \(\Phi\).
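As a simple illustration of how the two actions interact, take \(\Phi=[1]\). Then \(\Phi_{*}=-\operatorname{id}\) on \(\mathrm{K}(\mathcal{D})\), so \([1]\cdot\sigma\) has central charge \(-Z\) and slicing \(\phi\mapsto\mathcal{P}(\phi+1)\), which is exactly \(\sigma\cdot\tilde{g}\) for \(\tilde{g}=(\phi\mapsto\phi+1,-\operatorname{id})\in\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\). In particular, even shifts act through the \(\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\)-action without changing \(Z\); identifications of this kind appear in Definition 3.6 below.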
### Serre-invariant stability conditions on Kuznetsov components
**Theorem 3.5** ([2, Proposition 6.8]).: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold. Then there is a family of stability conditions on \(\mathcal{K}u(X)\) with respect to the rank two lattice \(\mathrm{K}_{\mathrm{num}}(\mathcal{K}u(X))\) and natural surjection \(\mathrm{K}(\mathcal{K}u(X))\twoheadrightarrow\mathrm{K}_{\mathrm{num}}( \mathcal{K}u(X))\)._
**Definition 3.6**.: Let \(\sigma\) be a stability condition on a triangulated category \(\mathcal{D}\) with the Serre functor \(S_{\mathcal{D}}\). It is called _Serre-invariant_ if \(S_{\mathcal{D}}\cdot\sigma=\sigma\cdot\tilde{g}\) for some \(\tilde{g}\in\widetilde{\mathrm{GL}}^{+}(2,\mathbb{R})\).
By virtue of [34, Corollary 5.5] and [33, Theorem 1.1], all stability conditions constructed in Theorem 3.5 are Serre-invariant.
We recall several properties of Serre-invariant stability conditions on Kuznetsov components.
**Proposition 3.7**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold and \(\sigma\) be a Serre-invariant stability condition on \(\mathcal{K}u(X)\)._
1. _the homological dimension of the heart of_ \(\sigma\) _is two,_
2. _For any_ \(E\) _and_ \(F\) _in_ \(\mathcal{K}u(X)\) _with phases_ \(\phi_{\sigma}^{+}(E)<\phi_{\sigma}^{-}(F)\)_, we have_ \(\mathrm{Hom}(E,F[2])=0\)_._
3. _If_ \(X\) _is a cubic threefold and_ \(E\in\mathcal{K}u(X)\) _is_ \(\sigma\)_-semistable, then_ \(\mathrm{Ext}^{2}(E,E)=0\)_._
4. _If_ \(X\) _is a quartic double solid or Gushel-Mukai threefold and_ \(E\in\mathcal{K}u(X)\) _is_ \(\sigma\)_-stable, then_ \(\mathrm{ext}^{2}(E,E)\leq 1\)_._
5. \(\mathrm{ext}^{1}(E,E)\geq 2\) _for every non-zero object_ \(E\in\mathcal{K}u(X)\)_._
6. _If_ \(\mathrm{ext}^{1}(E,E)\leq 3\)_, then_ \(E\) _is_ \(\sigma\)_-stable._
Proof.: See [34, Section 5], [19, Section 4.7] and [14, Proposition 3.4].
**Lemma 3.8**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold, and \(E\in\mathcal{K}\!u(X)\) be a non-zero object with \(\operatorname{ext}^{1}(E,E)\leq 3\). Then \(E\) is stable with respect to any Serre-invariant stability condition and \([E]\in\operatorname{K_{num}}(\mathcal{K}\!u(X))\) is primitive._
Proof.: The stability of \(E\) follows from (6) of Proposition 3.7. Thus \(\hom(E,E)=1\). Moreover, by (3) and (4) of Proposition 3.7, we see \(\chi(E,E)=-1\) when \(X\) is a cubic threefold, and \(\chi(E,E)=-1\) or \(-2\) otherwise. Then by a computation using the Euler forms (1)-(3), we know that \([E]\in\operatorname{K_{num}}(\mathcal{K}u(X))\) is primitive.
**Lemma 3.9**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold. Suppose \(\sigma\) is a Serre-invariant stability on \(\mathcal{K}\!u(X)\) and \(E\) is a \(\sigma\)-semistable object._
1. _When_ \(X\) _is a cubic threefold, we have_ \[\phi_{\sigma}(E)+1\leq\phi_{\sigma}(S_{\mathcal{K}\!u(X)}(E))<\phi_{\sigma}( E)+2.\] _The first inequality is strict if_ \(E\) _is_ \(\sigma\)_-stable._
2. _When_ \(X\) _is a quartic double solid or Gushel-Mukai threefold, we have_ \[\phi_{\sigma}(S_{\mathcal{K}\!u(X)}(E))=\phi_{\sigma}(E)+2.\]
Proof.: (1) follows from [14, Proposition 3.3(a),(d)]. In the case of (2), we also have \(\phi_{\sigma}(E)+1\leq\phi_{\sigma}(S_{\mathcal{K}\!u(X)}(E))\leq\phi_{\sigma }(E)+2\). Since \([S_{\mathcal{K}\!u(X)}(E)]=[E]\), we have
\[\phi_{\sigma}(E)-\phi_{\sigma}(S_{\mathcal{K}\!u(X)}(E))\in 2\mathbb{Z}.\]
Thus we get \(\phi_{\sigma}(S_{\mathcal{K}\!u(X)}(E))=\phi_{\sigma}(E)+2\).
Finally, we recall the following uniqueness result of Serre-invariant stability conditions.
**Theorem 3.10** ([19, Theorem 4.25], [14, Theorem 3.1]).: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold. Then all Serre-invariant stability conditions on \(\mathcal{K}\!u(X)\) are in the same \(\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\)-orbit._
**Remark 3.11**.: Let \(\mathsf{K}\) be the subset of all Serre-invariant stability conditions with induced topology from \(\operatorname{Stab}(\mathcal{K}\!u(X))\). Then Theorem 3.10 implies \(\mathsf{K}\cong\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\cong\mathbb{C} \times\mathbb{H}\).
## 4. Preliminary results
In this section, we provide some preliminary results.
We start with a useful lemma. We say a non-zero element \(a\) in a lattice \(\Lambda\) is _primitive_ if it can not be written as \(a=nb\) for \(b\in\Lambda\) and \(n\in\mathbb{Z}_{>1}\).
**Lemma 4.1**.: _Let \(\sigma\) be a stability condition on \(\mathcal{D}\) with respect to the lattice \(\Lambda\) and \(v\colon K(\mathcal{D})\to\Lambda\). If a non-zero object \(E\in\mathcal{D}\) is not \(\sigma\)-stable and \(v(E)\in\Lambda\) is primitive, then we can find an exact triangle_
\[A\to E\to B\]
_such that \(A\) is \(\sigma\)-semistable, \(\phi_{\sigma}(A)\geq\phi_{\sigma}^{+}(B)\) and \(\operatorname{Hom}(A,B)=0\). Moreover, we can assume all Jordan-Holder factors of \(A\) are isomorphic to each other._
Proof.: If \(E\) is strictly \(\sigma\)-semistable, then since \(v(E)\) is primitive, the Jordan-Holder factors of \(E\) cannot all be isomorphic to each other. Then the existence of the exact triangle \(A\to E\to B\) follows from the existence of a Jordan-Holder filtration of \(E\).
If \(E\) is not \(\sigma\)-semistable, then by the existence of Harder-Narasimhan filtration, we can find an exact triangle
\[A^{\prime}\to E\to B^{\prime}\]
such that \(A^{\prime}\) is \(\sigma\)-semistable and \(\phi_{\sigma}(A^{\prime})>\phi_{\sigma}^{+}(B^{\prime})\). In particular, we see \(\operatorname{Hom}(A^{\prime},B^{\prime})=0\). If Jordan-Holder factors of \(A^{\prime}\) are isomorphic to each other, then we take \(A:=A^{\prime}\) and \(B:=B^{\prime}\). If \(A^{\prime}\) has at least two non-isomorphic Jordan-Holder factors, then as in the previous case, from the existence of Jordan-Holder filtration of \(A^{\prime}\), we can find an exact triangle \(A\to A^{\prime}\to A^{\prime\prime}\) such that \(A\) is \(\sigma\)-semistable with \(\operatorname{Hom}(A,A^{\prime\prime})=0\), and all Jordan-Holder factors of \(A\) are isomorphic to each other. We take \(B:=\operatorname{cone}(A\to E)\), where the map \(A\to E\) is the composition of \(A\to A^{\prime}\) and \(A^{\prime}\to E\). Hence we have the following commutative diagram
\[\begin{array}{ccccc}A&\longrightarrow&A^{\prime}&\longrightarrow&A^{\prime\prime}\\ \|&&\downarrow&&\downarrow\\ A&\longrightarrow&E&\longrightarrow&B\\ &&\downarrow&&\downarrow\\ &&B^{\prime}&=&B^{\prime}\end{array}\]
with all rows and columns exact triangles; in particular, the octahedral axiom gives an exact triangle \(A^{\prime\prime}\to B\to B^{\prime}\). Since \(\phi_{\sigma}(A)=\phi_{\sigma}(A^{\prime})>\phi_{\sigma}^{+}(B^{\prime})\), we obtain \(\operatorname{Hom}(A,B^{\prime})=0\). Hence from \(\operatorname{Hom}(A,A^{\prime\prime})=0\), we see \(\operatorname{Hom}(A,B)=0\).
We have the following standard spectral sequence, see e.g. [35, Lemma 2.27].
**Lemma 4.2**.: _Let \(X\) be a smooth projective variety. Suppose that there are two exact triangles in \(\operatorname{D}^{b}(X)\):_
\[A_{1}\to B_{1}\to C_{1},\quad A_{2}\to B_{2}\to C_{2}.\]
_There exist a spectral sequence which degenerates at \(E_{3}\) and converges to \(\operatorname{Ext}^{*}(C_{1},C_{2})\), with \(E_{1}\)-page_
\[E_{1}^{p,q}=\begin{cases}\operatorname{Ext}^{q}(B_{1},A_{2}),\ p=-1\\ \operatorname{Ext}^{q}(A_{1},A_{2})\oplus\operatorname{Ext}^{q}(B_{1},B_{2}), \ p=0\\ \operatorname{Ext}^{q}(A_{1},B_{2}),\ p=1\\ 0,\ p\notin[-1,1]\end{cases}\]
_Moreover, the differentials \(d_{1}^{p,q}\colon E_{1}^{p,q}\to E_{1}^{p+1,q}\) are given by compositions with the morphisms \(A_{1}\to B_{1}\) and \(A_{2}\to B_{2}\)._
The following lemma is a generalization of [1, Lemma 2.5].
**Lemma 4.3**.: (Weak Mukai Lemma) _Let \(X\) be a smooth projective variety and_
\[F\to E\to G\]
_be an exact triangle in \(\operatorname{D}^{b}(X)\) such that \(\operatorname{Hom}(F,G)=\operatorname{Hom}(G,F[2])=0\). Then we have_
\[\operatorname{ext}^{1}(F,F)+\operatorname{ext}^{1}(G,G)\leq\operatorname{ext} ^{1}(E,E)\]
Proof.: Applying the standard spectral sequence in Lemma 4.2 to the exact triangle
\[G[-1]\to F\to E,\]
i.e. take \(A_{1}=A_{2}=G[-1],B_{1}=B_{2}=F\) and \(C_{1}=C_{2}=E\), we have \(E_{1}^{-1,1}=\operatorname{Hom}(F,G)\), \(E_{1}^{0,1}=\operatorname{Ext}^{1}(G,G)\oplus\operatorname{Ext}^{1}(F,F)\) and \(E_{1}^{1,1}=\operatorname{Hom}(G,F[2])\) and \(E_{1}^{p,1}=0\) for \(p\notin\{-1,0,1\}\). Then by the assumption we get \(E_{1}^{0,1}=E_{\infty}^{0,1}\), which implies the result.
Specializing in Kuznetsov components, we have the following lemma.
**Lemma 4.4**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold. Assume that an object \(E\in\mathcal{K}u(X)\) fits into an exact triangle_
\[F\to E\to G\]
_with \(F,G\in\mathcal{K}u(X)\) non-zero and such that \(\operatorname{Hom}(F,G)=\operatorname{Hom}(G,F[2])=0\). Then we have_
\[\operatorname{ext}^{1}(F,F)<\operatorname{ext}^{1}(E,E),\text{ and } \operatorname{ext}^{1}(G,G)<\operatorname{ext}^{1}(E,E).\]
Proof.: This follows from Lemma 4.3 and Proposition 3.7.
The following spectral sequence is constructed in [28, Proposition 2.4].
**Proposition 4.5**.: _Let \(\mathcal{D}_{0}\) be the bounded derived category of an abelian category with enough injective objects, and \(\mathcal{D}\subset\mathcal{D}_{0}\) be a full triangulated subcategory of finite type. Then for any heart \(\mathcal{A}\) of \(\mathcal{D}\) and objects \(E,F\in\mathcal{D}\), there exists a spectral sequence with_
\[E_{2}^{p,q}=\bigoplus_{i\in\mathbb{Z}}\operatorname{Ext}^{p}(\mathcal{H}_{ \mathcal{A}}^{i}(E),\mathcal{H}_{\mathcal{A}}^{i+q}(F))\]
_and converges to \(\operatorname{Ext}^{*}(E,F)\)._
As a corollary, we see:
**Lemma 4.6**.: _Let \(\mathcal{D}_{0}\) be the bounded derived category of an abelian category with enough injective objects, and \(\mathcal{D}\subset\mathcal{D}_{0}\) be a full triangulated subcategory of finite type. Then for any heart \(\mathcal{A}\) of \(\mathcal{D}\) with \(\operatorname{homdim}(\mathcal{A})\leq 2\) and object \(E\in\mathcal{D}\), we have_
\[\sum_{i\in\mathbb{Z}}\operatorname{ext}^{1}(\mathcal{H}_{\mathcal{A}}^{i}(E),\mathcal{H}_{\mathcal{A}}^{i}(E))\leq\operatorname{ext}^{1}(E,E).\]
Proof.: Since \(\operatorname{homdim}(\mathcal{A})\leq 2\), we see \(E_{2}^{p,q}=0\) for \(p>2,0>p\) and any \(q\). Therefore, we see \(E_{2}^{1,q}=E_{\infty}^{1,q}\). If we take \(q=0\), then we have \(\dim E_{\infty}^{1,0}=\dim E_{2}^{1,0}\leq\operatorname{ext}^{1}(E,E)\), which proves the lemma.
In the rest of our paper, we always set \(\mathcal{D}_{0}=\operatorname{D}^{b}(\operatorname{QCoh}(X))\) and \(\mathcal{D}=\mathcal{K}u(X)\).
### The global dimension function and homological dimension
First, we recall the definition of the homological dimension of a heart and stability condition.
**Definition 4.7**.: Let \(\mathcal{A}\) be the heart of a bounded t-structure of a triangulated category \(\mathcal{D}\). For an integer \(n>0\), we say \(\mathcal{A}\) has _homological dimension at most \(n\)_ and denote by \(\operatorname{homdim}(\mathcal{A})\leq n\) if for any two non-zero objects \(E,F\in\mathcal{A}\), we have
\[\operatorname{Hom}(E,F[k])=0,\forall k>n.\]
We say a stability condition \(\sigma\) on \(\mathcal{D}\) has _homological dimension at most \(n\)_ and denote by \(\operatorname{homdim}(\sigma)\leq n\) if its heart \(\mathcal{A}\) satisfies \(\operatorname{homdim}(\mathcal{A})\leq n\).
There is another notion of dimension for stability conditions, called the global dimension, which is introduced in [18].
**Definition 4.8**.: Let \(\sigma\) be a stability condition on a triangulated category \(\mathcal{D}\) with the slicing \(\mathcal{P}\), then the _global dimension_ of \(\sigma\) is defined by
\[\operatorname{gldim}(\sigma):=\sup\{\phi_{2}-\phi_{1}\ |\ \operatorname{Hom}(E_{1},E_{2})\neq 0,\ E_{i}\in\mathcal{P}(\phi_{i}),\ i=1,2\}.\]
Therefore, we have a function
\[\operatorname{gldim}\colon\operatorname{Stab}(\mathcal{D})\to\mathbb{R}_{\geq 0}, \quad\sigma\mapsto\operatorname{gldim}(\sigma),\]
which is continuous by [18, Lemma 5.7].
In the following, we prove some basic properties of global dimension.
**Lemma 4.9**.: _Let \(\sigma\) be a stability condition on a triangulated category \(\mathcal{D}\) and \(n\) be a positive integer. Then the following are equivalent:_
1. _For any non-zero objects_ \(E\) _and_ \(F\) _with_ \(\phi_{\sigma}^{+}(E)<\phi_{\sigma}^{-}(F)\)_, we have_ \(\operatorname{Hom}(E,F[n])=0\)_._
2. _For any_ \(\sigma\)_-semistable objects_ \(E\) _and_ \(F\) _with_ \(\phi_{\sigma}(E)<\phi_{\sigma}(F)\)_, we have_ \(\operatorname{Hom}(E,F[n])=0\)_._
3. \(\operatorname{gldim}(\sigma)\leq n\)_._
Proof.: (1) \(\Rightarrow\) (2). It is clear by taking \(E\) and \(F\) to be semistable.
(2) \(\Rightarrow\) (1). Since \(\phi_{\sigma}^{+}(E)<\phi_{\sigma}^{-}(F)\), for any Harder-Narasimhan factors \(E_{i}\) and \(F_{j}\) of \(E\) and \(F\), respectively, we have \(\phi_{\sigma}(E_{i})<\phi_{\sigma}(F_{j})\), and hence \(\operatorname{Hom}(E_{i},F_{j}[n])=0\). As there is no morphism between Harder-Narasimhan factors of \(E\) and \(F[n]\), we also get \(\operatorname{Hom}(E,F[n])=0\).
(2) \(\Rightarrow\) (3). Suppose \(\operatorname{gldim}(\sigma)=n+\epsilon\) for some \(\epsilon>0\). This implies that there exist \(0<\delta\leq\epsilon\) and \(\sigma\)-semistable objects \(E\in\mathcal{P}(\phi_{\sigma}(E))\) and \(F\in\mathcal{P}(\phi_{\sigma}(F))\) such that \(\phi_{\sigma}(F)-\phi_{\sigma}(E)>n+\delta\) and \(\operatorname{Hom}(E,F)\neq 0\). But this implies \(\operatorname{Hom}(E,(F[-n])[n])\neq 0\), while \(\phi_{\sigma}(F[-n])-\phi_{\sigma}(E)=\phi_{\sigma}(F)-\phi_{\sigma}(E)-n>\delta>0\), which contradicts (2).
(3) \(\Rightarrow\) (2). Directly follows from the definition.
**Lemma 4.10**.: _Let \(\sigma=(\mathcal{A},Z)\) be a stability condition on a triangulated category \(\mathcal{D}\) and \(\operatorname{gldim}(\sigma)\leq n\) for a positive integer \(n\). Then the homological dimension of the heart \(\mathcal{A}\) of \(\sigma\) is at most \(n\), i.e. \(\operatorname{Hom}(E,F[m])=0\) for any two objects \(E,F\in\mathcal{A}\) and \(m\geq n+1\)._
Proof.: Since \(E,F\in\mathcal{A}\), we know that \(\phi^{+}(E),\phi^{-}(F)\in(0,1]\). Then the vanishing \(\operatorname{Hom}(E,F[m])=\operatorname{Hom}(E[-m+n],F[n])=0\) for \(m\geq n+1\) follows from
\[\phi^{+}(E)-m+n=\phi^{+}(E[-m+n])\leq\phi^{+}(E)-1=\phi^{+}(E[-1])\leq 0<\phi^{- }(F)\]
and Lemma 4.9.
**Lemma 4.11**.: _Let \(\sigma\) be a stability condition on a triangulated category \(\mathcal{D}\) with the slicing \(\mathcal{P}\) and \(\operatorname{gldim}(\sigma)\leq n\) for an integer \(n\). Then \(\operatorname{gldim}(\sigma\cdot\tilde{g})\leq n\) for any \(\tilde{g}\in\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\)._
Proof.: Let \(E_{1}\in\mathcal{P}(\phi_{1})\) and \(E_{2}\in\mathcal{P}(\phi_{2})\) such that \(\operatorname{Hom}(E_{1},E_{2})\neq 0\). Then \(n\geq\operatorname{gldim}(\sigma)\geq\phi_{2}-\phi_{1}\). Now \(\phi_{\sigma\cdot\tilde{g}}(E_{i})=g(\phi_{\sigma}(E_{i}))\), where \(g\colon\mathbb{R}\to\mathbb{R}\) is an increasing function with \(g(x+m)=g(x)+m\) for any \(m\in\mathbb{Z}\). Hence we get
\[\phi_{\sigma\cdot\tilde{g}}(E_{2})=g(\phi_{2})\leq g(\phi_{1}+n)=g(\phi_{1})+ n=\phi_{\sigma\cdot\tilde{g}}(E_{1})+n.\]
This implies \(\phi_{\sigma\cdot\tilde{g}}(E_{2})-\phi_{\sigma\cdot\tilde{g}}(E_{1})\leq n\), which gives \(\operatorname{gldim}(\sigma\cdot\tilde{g})\leq n\).
## 5. A criterion of Serre-invariant stability conditions
In this section, we are going to prove the criteria Theorem 5.8 and Theorem 5.7.
In the following, \(X\) will be a cubic threefold, quartic double solid, or Gushel-Mukai threefold. We begin with some lemmas.
**Lemma 5.1**.: _Let \(\sigma=(\mathcal{A},Z)\) be a stability condition on \(\mathcal{K}u(X)\) with \(\operatorname{homdim}(\mathcal{A})\leq 2\). Then for any non-zero object \(E\in\mathcal{K}u(X)\) with \(\operatorname{ext}^{1}(E,E)\leq 3\), \(E\) is in \(\mathcal{A}\) up to shift._
Proof.: Let \(N\in\mathbb{N}\) be the number of non-zero cohomology objects of \(E\) with respect to \(\mathcal{A}\). By (5) of Proposition 3.7 and Lemma 4.6, we get
\[3\geq\operatorname{ext}^{1}(E,E)\geq\sum_{i}\operatorname{ext}^{1}(\mathcal{H}^{i}_{\mathcal{A}}(E),\mathcal{H}^{i}_{\mathcal{A}}(E))\geq 2N,\]
which implies \(N=1\), and the result follows.
**Lemma 5.2**.: _Let \(\sigma=(\mathcal{A},Z)\) and \(\sigma^{\prime}=(\mathcal{A}^{\prime},Z^{\prime})\) be stability conditions on \(\mathcal{K}u(X)\) with homological dimension at most \(2\). Let \(E_{1}\) and \(E_{2}\) be two non-zero objects with \([E_{1}]=[E_{2}]\in\operatorname{K_{num}}(\mathcal{K}u(X))\) such that \(E_{1}[m_{1}]\) and \(E_{2}[m_{2}]\) are both in \(\mathcal{A}\). If \(E_{1}[m^{\prime}_{1}]\) and \(E_{2}[m^{\prime}_{2}]\) are also both in \(\mathcal{A}^{\prime}\), then \(m_{1}-m_{2}=m^{\prime}_{1}-m^{\prime}_{2}\)._
Proof.: Since \([E_{1}]=[E_{2}]\), we know that \(m_{1}-m_{2},m^{\prime}_{1}-m^{\prime}_{2}\in 2\mathbb{Z}\). Then we see that
\[\chi(E_{1},E_{2})=\chi(E_{1}[m_{1}],E_{2}[m_{2}])=\chi(E_{1}[m^{\prime}_{1}],E _{2}[m^{\prime}_{2}])<0.\]
Therefore if \(E_{1}[m_{1}]\) and \(E_{2}[m_{2}]\) are both contained in \(\mathcal{A}\), we have
\[\chi(E_{1}[m_{1}],E_{2}[m_{2}])=\hom(E_{1}[m_{1}],E_{2}[m_{2}])-\hom(E_{1}[m_{1 }],E_{2}[m_{2}+1])+\hom(E_{1}[m_{1}],E_{2}[m_{2}+2])<0.\]
This implies that \(m_{2}-m_{1}+1\) is the unique integer \(N\) satisfying
\[\hom(E_{1},E_{2}[N])>\hom(E_{1},E_{2}[n])\]
for any \(n\neq N\). On the other hand, the same argument also shows that \(m^{\prime}_{2}-m^{\prime}_{1}+1=N\), so we obtain \(m_{1}-m_{2}=m^{\prime}_{1}-m^{\prime}_{2}\).
### Proof of the main theorem
First, we prove the stability of objects with small \(\operatorname{ext}^{1}\).
**Theorem 5.3**.: _Let \(\sigma=(\mathcal{A},Z)\) be a stability condition on \(\mathcal{K}u(X)\) with \(\operatorname{homdim}(\mathcal{A})\leq 2\). Then any non-zero object \(E\in\mathcal{K}u(X)\) with either \(\operatorname{ext}^{1}(E,E)\leq 2\), or \(\operatorname{ext}^{1}(E,E)=3\) and \(\operatorname{ext}^{2}(E,E)=0\), is \(\sigma\)-stable._
Proof.: By Lemma 3.8, \(E\) is stable with respect to any Serre-invariant stability condition on \(\mathcal{K}u(X)\) and \([E]\) is primitive. Moreover, \(\operatorname{RHom}(E,E)=\mathbb{C}\oplus\mathbb{C}^{2}[-1]\) when \(X\) is a cubic threefold, and \(\operatorname{RHom}(E,E)=\mathbb{C}\oplus\mathbb{C}^{2}[-1]\) or \(\mathbb{C}\oplus\mathbb{C}^{3}[-1]\) when \(X\) is a quartic double solid or Gushel-Mukai threefold.
By virtue of Lemma 5.1, we can assume that \(E\in\mathcal{A}\) (up to shift). If \(E\) is not \(\sigma\)-stable, since \([E]\) is primitive, by Lemma 4.1 we can find an exact sequence in \(\mathcal{A}\)
\[A\to E\to B \tag{4}\]
such that \(\hom(A,B)=0\), \(\phi_{\sigma}(A)\geq\phi_{\sigma}^{+}(B)\) and \(A\) is \(\sigma\)-semistable with isomorphic Jordan-Holder factors. From the exact triangle \(B[-1]\to A\to E\) and Lemma 4.2, we have a spectral sequence which degenerates at \(E_{3}\) converging to \(\operatorname{Ext}^{*}(E,E)\) with \(E_{1}\)-page being
\[E_{1}^{p,q}=\begin{cases}\operatorname{Ext}^{q}(A,B[-1])=\operatorname{Ext}^{q -1}(A,B),\ p=-1\\ \operatorname{Ext}^{q}(A,A)\oplus\operatorname{Ext}^{q}(B,B),\ p=0\\ \operatorname{Ext}^{q}(B[-1],A)=\operatorname{Ext}^{q+1}(B,A),\ p=1\\ 0,\ p\notin[-1,1]\end{cases}\]
Since the homological dimension of \(\mathcal{A}\) is at most \(2\), \(A,B\in\mathcal{A}\) and \(\operatorname{Hom}(A,B)=0\), we have
\[E_{1}^{p,q}=\begin{array}{cccc}0&0&0&0\\ 0&\operatorname{Ext}^{2}(A,B)&0&0\\ 0&\operatorname{Ext}^{1}(A,B)&\operatorname{Ext}^{2}(A,A)\oplus\operatorname{Ext}^{2}(B,B)&0\\ 0&0&\operatorname{Ext}^{1}(A,A)\oplus\operatorname{Ext}^{1}(B,B)&\operatorname{Ext}^{2}(B,A)\\ 0&0&\operatorname{Hom}(A,A)\oplus\operatorname{Hom}(B,B)&\operatorname{Ext}^{1}(B,A)\\ \hline 0&0&0&\operatorname{Hom}(B,A)\end{array}\]
Note that the differential
\[d\colon E_{1}^{0,0}=\operatorname{Hom}(A,A)\oplus\operatorname{Hom}(B,B)\to E _{1}^{1,0}=\operatorname{Ext}^{1}(B,A)\]
maps \((\operatorname{id}_{A},0)\) and \((0,\operatorname{id}_{B})\) to the element \([E]\in\operatorname{Ext}^{1}(B,A)\) corresponding to the extension (4), so we see \(\dim\ker(d)\geq 1\). Thus from \(\operatorname{hom}(E,E)=1\), we have \(\dim E_{2}^{0,0}=\dim E_{\infty}^{0,0}=1\), hence \(E_{1}^{1,-1}=\operatorname{Hom}(B,A)=0\). Moreover, since in each case we have \(\operatorname{Ext}^{2}(E,E)=0\), we see
\[E_{1}^{-1,3}=E_{\infty}^{-1,3}=\operatorname{Ext}^{2}(A,B)=0,\]
which implies \(\chi(A,B)=-\operatorname{ext}^{1}(A,B)\leq 0\). Then the first page becomes
\[E_{1}^{p,q}=\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ 0&\operatorname{Ext}^{1}(A,B)&\operatorname{Ext}^{2}(A,A)\oplus\operatorname{Ext}^{2}(B,B)&0\\ 0&0&\operatorname{Ext}^{1}(A,A)\oplus\operatorname{Ext}^{1}(B,B)&\operatorname{Ext}^{2}(B,A)\\ 0&0&\operatorname{Hom}(A,A)\oplus\operatorname{Hom}(B,B)&\operatorname{Ext}^{1}(B,A)\\ \hline 0&0&0&0\end{array}\]
**Case 1.** First, we assume that \(\operatorname{RHom}(E,E)=\mathbb{C}\oplus\mathbb{C}^{2}[-1]\). When \(X\) is a cubic threefold, we can assume that \([E]=v,v-w\) or \(2v-w\) up to sign. In each case, a calculation using the Euler form and \(\chi(A,B)\leq 0\) implies that
\[[A]=[S_{\mathcal{K}u(X)}^{-1}(E)],\text{ and }[B]=[S_{\mathcal{K}u(X)}(E)].\]
Thus \(\chi(A,B)=-\operatorname{ext}^{1}(A,B)=0\). Therefore, we also have
\[E_{1}^{0,2}=E_{\infty}^{0,2}=\operatorname{Ext}^{2}(A,A)\oplus\operatorname{ Ext}^{2}(B,B)=0.\]
Since \([A]\) is primitive and all Jordan-Holder factors of \(A\) are isomorphic, \(A\) has only one Jordan-Holder factor with respect to \(\sigma\) and is therefore \(\sigma\)-stable. Hence \(\operatorname{hom}(A,A)=1\). Now from \(\chi(A,A)=-1\) and \(\operatorname{Ext}^{\geq 2}(A,A)=0\), we see \(\operatorname{ext}^{1}(A,A)=2\). By Lemma 3.8, \(A\) is stable with respect to any Serre-invariant stability condition. Using \(\operatorname{Hom}(A,E)=\operatorname{Hom}(E,S_{\mathcal{K}u(X)}(A))\neq 0\) and the fact that \([E]=[S_{\mathcal{K}u(X)}(A)]\) and \(E,S_{\mathcal{K}u(X)}(A)\) are stable with respect to any Serre-invariant stability condition, we get \(A\cong S_{\mathcal{K}u(X)}^{-1}(E)\). But this yields a contradiction, since \(A,E\in\mathcal{A}\) and
\[\operatorname{Hom}(A,E[-1])=\operatorname{Hom}(S_{\mathcal{K}u(X)}^{-1}(E),E[ -1])=\operatorname{Ext}^{1}(E,E)\neq 0.\]
When \(X\) is a quartic double solid or a Gushel-Mukai threefold, we can assume that \([E]=v\) or \(w\). Then a simple computation using \(\chi(A,B)\leq 0\) shows that \([A]=0\) or \([B]=0\), which implies \(A=0\) or \(B=0\), a contradiction.
**Case 2.** Now we assume that \(\operatorname{RHom}(E,E)=\mathbb{C}\oplus\mathbb{C}^{3}[-1]\). In this case, \(X\) is a quartic double solid or a Gushel-Mukai threefold and we can assume that \([E]=v-w\) or \(v+w\). Then a simple computation using \(\chi(A,B)\leq 0\) shows that \(\{[A],[B]\}=\{v,w\}\) when \([E]=v+w\), and \(\{[A],[B]\}=\{v,-w\}\) when \([E]=v-w\). Hence \(\chi(A,B)=\operatorname{ext}^{1}(A,B)=0\). Therefore, we also have
\[E_{1}^{0,2}=E_{\infty}^{0,2}=\operatorname{Ext}^{2}(A,A)\oplus\operatorname{ Ext}^{2}(B,B)=0.\]
Since \([A]\) is primitive, we know that \(A\) has only one Jordan-Holder factor, hence is \(\sigma\)-stable. Thus \(\hom(A,A)=1\), and \(\operatorname{ext}^{1}(A,A)=2\) since \(\chi(A,A)=-1\). By the previous case, \(\tau(A):=S_{\mathcal{K}u(X)}(A)[-2]\) is \(\sigma\)-stable as well. Moreover, by Lemma 3.8 and Lemma 3.9, \(\tau(A)\) is stable with respect to any Serre-invariant stability condition with the same phase as \(A\). Thus \(A,\tau(A)\in\mathcal{A}\) by Lemma 5.2. Hence \(\phi_{\sigma}(A)=\phi_{\sigma}(\tau(A))\).
If \(\operatorname{Ext}^{2}(B,A)=0\), then by Lemma 4.3 and (5) of Proposition 3.7 we get a contradiction. Thus we can assume that \(\operatorname{Ext}^{2}(B,A)=\operatorname{Hom}(A,\tau(B))=\operatorname{Hom} (\tau(A),B)\neq 0\). By \(\sigma\)-stability of \(\tau(A)\), we get \(\phi_{\sigma}(\tau(A))\leq\phi_{\sigma}^{+}(B)\). Then from \(\phi_{\sigma}(A)\geq\phi_{\sigma}^{+}(B)\) and \(\phi_{\sigma}(A)=\phi_{\sigma}(\tau(A))\), we see \(\phi_{\sigma}(A)=\phi_{\sigma}(\tau(A))=\phi_{\sigma}^{+}(B)\), hence from \(\operatorname{Hom}(\tau(A),B)\neq 0\) and the \(\sigma\)-stability of \(\tau(A)\), we have an injection \(\tau(A)\hookrightarrow B\) in \(\mathcal{A}\). Since \([B]\) is primitive, by looking at the Jordan-Holder filtration of the first Harder-Narasimhan factor of \(B\) with respect to \(\sigma\), we have an exact sequence \(A^{\prime}\to B\to B^{\prime}\) in \(\mathcal{A}\) such that all Jordan-Holder factors of \(A^{\prime}\) are isomorphic to \(\tau(A)\) and \(\operatorname{Hom}(A^{\prime},B^{\prime})=0\) as in Lemma 4.1. Applying the spectral sequence in Lemma 4.2 as above, since \(\operatorname{Ext}^{\geq 2}(B,B)=0\), we have \(\operatorname{Ext}^{2}(A^{\prime},B^{\prime})=0\), which implies \(\chi(A^{\prime},B^{\prime})\leq 0\). However, we know that \([A^{\prime}]=n[\tau(A)]=n[A]\) for an integer \(n\geq 1\), hence
\[\chi(A^{\prime},B^{\prime})=\chi(n[A],[B]-n[A])=n\chi([A],[B])-n^{2}\chi([A],[A])=-n^{2}\chi([A],[A])=n^{2}>0\]
and we get a contradiction.
As a corollary, we have:
**Corollary 5.4**.: _Let \(\sigma=(\mathcal{A},Z)\) be a stability condition on \(\mathcal{K}u(X)\) with homological dimension at most \(2\). Then the image of the central charge \(Z\) is not contained in a line, and we can find a Serre-invariant stability condition \(\sigma^{\prime}=(\mathcal{A}^{\prime},Z^{\prime})\) on \(\mathcal{K}u(X)\) with \(Z=Z^{\prime}\)._
Proof.: If \(X\) is a cubic threefold or quartic double solid, let \(l\subset X\) be a line. Then as \(\operatorname{ext}^{1}(\mathcal{I}_{l},\mathcal{I}_{l})=2\), by Theorem 5.3, \(\mathcal{I}_{l},S_{\mathcal{K}u(X)}^{-1}(\mathcal{I}_{l})\) and \(S_{\mathcal{K}u}(\mathcal{I}_{l})\) are \(\sigma\)-stable. Then the result follows from [34, Remark 4.8].
When \(X\) is a Gushel-Mukai threefold, the result follows from Theorem 5.3 and [33, Lemma 4.7].
We need two useful lemmas before proving our main theorem.
**Lemma 5.5**.: _Let \(X\) be a cubic threefold, \(\sigma=(\mathcal{A},Z)\) be a stability condition on \(\mathcal{K}u(X)\) with \(\operatorname{homdim}(\sigma)\leq 2\) and \(E\in\mathcal{K}u(X)\) be an object with \(\operatorname{ext}^{1}(E,E)=2\). Then we have_
\[\phi_{\sigma}(E)+1\leq\phi_{\sigma}(S_{\mathcal{K}u(X)}(E))<\phi_{\sigma}(E)+2. \tag{5}\]
Proof.: Up to shift, we can assume that \(E\in\mathcal{A}\). By Theorem 5.3, \(E,S_{\mathcal{K}u(X)}(E)\) and \(S_{\mathcal{K}u(X)}^{-1}(E)\) are all \(\sigma\)-stable. Since \(\operatorname{Hom}(E,E[1])=\operatorname{Hom}(E[1],S_{\mathcal{K}u(X)}(E))\neq 0\), we have
\[\phi_{\sigma}(E)+1\leq\phi_{\sigma}(S_{\mathcal{K}u(X)}(E)).\]
And from Corollary 5.4, we can take a Serre-invariant stability condition \(\sigma^{\prime}=(\mathcal{A}^{\prime},Z^{\prime})\) with \(Z=Z^{\prime}\) and \(\phi_{\sigma}(E)=\phi_{\sigma^{\prime}}(E)\). Hence \(E\in\mathcal{A}\cap\mathcal{A}^{\prime}\). Assume that \(S_{\mathcal{K}u(X)}(E)[m]\in\mathcal{A}^{\prime}\). To prove the statement, by Lemma 3.9 and \(Z=Z^{\prime}\), we only need to show \(S_{\mathcal{K}u(X)}(E)[m]\in\mathcal{A}\). To this end, assume that \(S_{\mathcal{K}u(X)}(E)[n]\in\mathcal{A}\), then since \(Z=Z^{\prime}\), we see \(n-m\in 2\mathbb{Z}\). From \(\operatorname{Hom}(E,S_{\mathcal{K}u(X)}(E))=\operatorname{Hom}(E,S_{\mathcal{ K}u(X)}(E)[n][-n])\neq 0\), we see \(0\leq-n\leq 2\). And from \(\operatorname{Hom}(E[1],S_{\mathcal{K}u(X)}(E))=\operatorname{Hom}(E,S_{ \mathcal{K}u(X)}(E)[n][-1-n])\neq 0\), we see \(0\leq-n-1\leq 2\). Therefore, we obtain \(n=-1\) or \(-2\). Since \(n-m\in 2\mathbb{Z}\), we have \(n=m\) and the result follows.
**Lemma 5.6**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold. Then there exist two objects \(D_{1},D_{2}\in\mathcal{K}u(X)\) with \([D_{1}]=v\) and \([D_{2}]=w\), such that for any stability condition \(\sigma\) on \(\mathcal{K}u(X)\) with \(\operatorname{homdim}(\sigma)\leq 2\), \(D_{1}\) and \(D_{2}\) are \(\sigma\)-stable with phases_
\[\phi_{\sigma}(D_{1})-1<\phi_{\sigma}(D_{2})<\phi_{\sigma}(D_{1}). \tag{6}\]
Proof.: When \(X\) is a cubic threefold or a quartic double solid, we set \(D_{1}=\mathcal{I}_{l}\) and \(D_{2}=\mathbf{O}(\mathcal{I}_{l})[-1]\), where \(l\subset X\) is a line in \(X\) and \(\mathbf{O}\) is the rotation functor defined in Section 2. Therefore, by [34, Lemma 5.16], we see
\[\operatorname{RHom}(D_{i},D_{i})=\mathbb{C}\oplus\mathbb{C}^{2}[-1],\ \forall i \in\{1,2\}.\]
Hence \([D_{1}]=v\) and \([D_{2}]=w\), and are both \(\sigma\)-stable from Theorem 5.3. Thus (6) follows from [34, Remark 4.8], in which \(\mathcal{J}_{l}:=\mathbf{O}^{-1}(\mathcal{I}_{l})[1]\).
Now we assume that \(X\) is a Gushel-Mukai threefold. We set \(D_{1}=\mathcal{I}_{C}\) and \(D_{2}=F\), where \(C\subset X\) is a general smooth conic and \(F\) is a general rank \(2\) non-locally free slope-stable sheaf on \(X\) with \(c_{1}(F)=-1,c_{2}(F)=5\) and \(c_{3}(F)=0\). By the smoothness of \(C\), we get \(\operatorname{RHom}(D_{1},D_{1})=\mathbb{C}\oplus\mathbb{C}^{2}[-1]\). Since \(C\) is general in the Fano surface of conics, by [19, Proposition 7.1] we have \(D_{1}=\mathcal{I}_{C}\in\mathcal{K}u(X)\). Hence from Theorem 5.3, \(D_{1}\) is \(\sigma\)-stable. And by virtue of [19, Theorem 8.1], \(F\) fits into an exact sequence \(0\to F\to\mathcal{E}_{X}\to\mathcal{O}_{l}(-1)\to 0\), where \(l\subset X\) is a line. By [19, Proposition 8.2], we see \(D_{2}=F\in\mathcal{K}u(X)\). Moreover, we can take \(l\) to be general in the Hilbert scheme of lines on \(X\) so that \(\operatorname{RHom}(D_{2},D_{2})=\mathbb{C}\oplus\mathbb{C}^{2}[-1]\) (cf. [19, Section 8]). Hence \(D_{2}\) is \(\sigma\)-stable as well.
It remains to verify (6). By [19, Lemma 6.2], \(\operatorname{Hom}(\mathcal{E}_{X},D_{1})\neq 0\). Hence from the exact sequence above defining \(F\), we see \(\operatorname{Hom}(D_{2},D_{1})=\operatorname{Hom}(S^{-1}_{\mathcal{K}u(X)}(D_ {1}),D_{2})\neq 0\). Thus we get \(\phi_{\sigma}(S^{-1}_{\mathcal{K}u(X)}(D_{1}))<\phi_{\sigma}(D_{2})<\phi_{ \sigma}(D_{1})\). Note that \(S^{-1}_{\mathcal{K}u(X)}(D_{1})\) is \(\sigma\)-stable as well, and by Lemma 3.8 and Lemma 3.9\(D_{1}\) and \(S^{-1}_{\mathcal{K}u(X)}(D_{1})[2]\) are stable with respect to any Serre-invariant stability conditions with the same phase. Thus from \([D_{1}]=[S^{-1}_{\mathcal{K}u(X)}(D_{1})[2]]\) and Lemma 5.2, we see \(\phi_{\sigma}(D_{1})=\phi_{\sigma}(S^{-1}_{\mathcal{K}u(X)}(D_{1})[2])\). Then we obtain \(\phi_{\sigma}(D_{1})-2<\phi_{\sigma}(D_{2})<\phi_{\sigma}(D_{1})\).
Now by Corollary 5.4, we can find a Serre-invariant stability condition \(\sigma^{\prime}=(\mathcal{A}^{\prime},Z^{\prime})\) such that \(Z=Z^{\prime}\) and \(\phi_{\sigma}(D_{1})=\phi_{\sigma^{\prime}}(D_{1})\). Then we get
\[\phi_{\sigma^{\prime}}(D_{1})-2<\phi_{\sigma}(D_{2})<\phi_{\sigma^{\prime}}(D_ {1}).\]
However, from \(Z=Z^{\prime}\), we see \(\phi_{\sigma}(D_{2})-\phi_{\sigma^{\prime}}(D_{2})\in 2\mathbb{Z}\), which forces \(\phi_{\sigma}(D_{2})=\phi_{\sigma^{\prime}}(D_{2})\). Then it remains to check \(\phi_{\sigma^{\prime}}(D_{1})-1<\phi_{\sigma^{\prime}}(D_{2})<\phi_{\sigma^{ \prime}}(D_{1})\). By Theorem 3.10, up to \(\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\)-action, we can take \(\sigma^{\prime}\) to be the one constructed in [2], then the result follows from a direct computation of tilt-stability.
Now we are ready to prove our main theorems. We first prove that a stability condition \(\sigma\) on \(\mathcal{K}u(X)\) with \(\operatorname{homdim}(\sigma)\leq 2\) is determined by its central charge.
**Theorem 5.7**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold, and \(\sigma_{1}=(\mathcal{A}_{1},Z_{1}),\sigma_{2}=(\mathcal{A}_{2},Z_{2})\) be a pair of stability conditions on \(\mathcal{K}u(X)\). If \(Z_{1}=Z_{2}\) and \(\operatorname{homdim}(\sigma_{i})\leq 2\) for any \(i\in\{1,2\}\), then_
\[\sigma_{1}=[2m]\cdot\sigma_{2}\]
_for an integer \(m\in\mathbb{Z}\)._
Proof.: We take two objects \(D_{1}\) and \(D_{2}\) as in Lemma 5.6. Note that \([D_{1}]=v\) and \([D_{2}]=w\) and are both \(\sigma_{1}\)-stable and \(\sigma_{2}\)-stable. Moreover, we have
\[\phi_{\sigma_{k}}(D_{1})-1<\phi_{\sigma_{k}}(D_{2})<\phi_{\sigma_{k}}(D_{1}) \tag{7}\]
for any \(k\in\{1,2\}.\) Up to shift, we can furthermore assume that \(D_{1}\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). Thus from (7), either \(D_{2}\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\) or \(D_{2}[1]\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). Moreover, since \(Z_{1}=Z_{2}\), we get \(\phi_{\sigma_{1}}(D_{i})=\phi_{\sigma_{2}}(D_{i})\) for \(i\in\{1,2\}\).
**Claim 1.**_If a non-zero object \(E\in\mathcal{A}_{i}\) satisfies \(E[n]\in\mathcal{A}_{j}\), then \(n=0\), where \(\{i,j\}=\{1,2\}\)._ Note that from \(Z_{1}=Z_{2}\), we have \(n\in 2\mathbb{Z}\).
Since the claim and the assumptions are symmetric in \(\sigma_{1}\) and \(\sigma_{2}\), in the following we assume that \(i=1\) and \(j=2\). Then we have
\[n<\phi_{\sigma_{2}}^{-}(E),\quad\phi_{\sigma_{2}}^{+}(E)\leq n+1. \tag{8}\]
Let \([E]=av+bw\) for \(a,b\in\mathbb{Z}\). First, we assume that \(X\) is a cubic threefold. Let \(F\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\) be an object with \(\operatorname{ext}^{1}(F,F)=2\). Then \(F,S_{\mathcal{K}u(X)}(F)\) and \(S_{\mathcal{K}u(X)}^{-1}(F)\) are \(\sigma_{1}\)-stable and \(\sigma_{2}\)-stable. Note that if \(\chi(F,E)<0\) or \(\chi(E,F)<0\), then from \(\operatorname{homdim}(\sigma_{1})\leq 2\), we get \(\operatorname{Hom}(F,E[1])=\operatorname{Hom}(E,S_{\mathcal{K}u(X)}(F)[-1])\neq 0\) or \(\operatorname{Hom}(E,F[1])=\operatorname{Hom}(S_{\mathcal{K}u(X)}^{-1}(F)[1],E)\neq 0\), which implies
\[\phi_{\sigma_{2}}(F)-1\leq\phi_{\sigma_{2}}^{+}(E),\quad\phi_{\sigma_{2}}^{-} (E)\leq\phi_{\sigma_{2}}(S_{\mathcal{K}u(X)}(F))-1\]
or
\[\phi_{\sigma_{2}}(S_{\mathcal{K}u(X)}^{-1}(F))+1\leq\phi_{\sigma_{2}}^{+}(E), \quad\phi_{\sigma_{2}}^{-}(E)\leq\phi_{\sigma_{2}}(F)+1.\]
But by (8) and (5), we always have \(-1<n+1\) and \(n<2\), which implies \(n=0\) since \(n\in 2\mathbb{Z}\). Therefore, in the following, we only need to find an object \(F\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\) with \(\operatorname{ext}^{1}(F,F)=2\) and \(\chi(F,E)<0\) or \(\chi(E,F)<0\).
* If \(b=0\), then \([E]=a[D_{1}]\) and \(a>0\). In this case, we have \[\chi(D_{1},E)=\chi(E,D_{1})=-a<0.\]
* Assume \(D_{2}\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). If \(a=0\), then \(b>0\). In this case, we have \[\chi(E,D_{2})=-a-b<0.\]
* Assume \(D_{2}\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). If \(a\neq 0\), \(b\neq 0\), then from \[\operatorname{Im}(Z_{1}(E))=a\cdot\operatorname{Im}(Z_{1}(D_{1}))+b\cdot \operatorname{Im}(Z_{1}(D_{2}))\geq 0,\] either \(a<0<b\) or \(0<a\). Then we have either \(\chi(D_{2},E)=-b<0\) or \(\chi(E,D_{1})=-a<0\).
* Assume \(D_{2}[1]\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). If \(a=0\), then \(b<0\). In this case, we have \[\chi(E,D_{2}[1])=a+b<0.\]
* Assume \(D_{2}[1]\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). If \(a\neq 0\), \(b\neq 0\), then from \[\operatorname{Im}(Z_{1}(E))=a\cdot\operatorname{Im}(Z_{1}(D_{1}))-b\cdot \operatorname{Im}(Z_{1}(D_{2}[1]))\geq 0,\] either \(0<a\) or \(b<0,a<0\). Then we have either \(\chi(E,D_{1})=-a<0\) or \(\chi(E,D_{2}[1])=a+b<0\).
Now we assume that \(X\) is a quartic double solid or a Gushel-Mukai threefold.
* If \(b=0\), then \(a>0\). In this case, we have \[\chi(D_{1},E)=\chi(E,D_{1})=-a<0,\] hence from \(D_{1},E\in\mathcal{A}_{1}\) we get \(\operatorname{Hom}(D_{1},E[1])\neq 0\) and \(\operatorname{Hom}(E,D_{1}[1])\neq 0\). Then we have \[\phi_{\sigma_{2}}(D_{1})-1\leq\phi_{\sigma_{2}}^{+}(E),\quad\phi_{\sigma_{2}}^{- }(E)\leq\phi_{\sigma_{2}}(D_{1})+1.\] But by (8), we get \(-1<n+1\) and \(n<2\), which implies \(n=0\) since \(n\in 2\mathbb{Z}\).
* Assume \(D_{2}\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). If \(b\neq 0\), then \(b>0\). In this case, we have \[\chi(D_{2},E)=\chi(E,D_{2})=-b<0.\] Then by the same argument above, we get \(\operatorname{Hom}(D_{2},E[1])\neq 0\) and \(\operatorname{Hom}(E,D_{2}[1])\neq 0\). Then we have \[\phi_{\sigma_{2}}(D_{2})-1\leq\phi_{\sigma_{2}}^{+}(E),\quad\phi_{\sigma_{2}}^ {-}(E)\leq\phi_{\sigma_{2}}(D_{2})+1.\] But by (8), we get \(-1<n+1\) and \(n<2\), which implies \(n=0\) since \(n\in 2\mathbb{Z}\).
* Assume \(D_{2}[1]\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\). If \(b\neq 0\), then \(b<0\). In this case, we have \[\chi(D_{2}[1],E)=\chi(E,D_{2}[1])=b<0.\] Then by the same argument above, we get \(\operatorname{Hom}(D_{2}[1],E[1])\neq 0\) and \(\operatorname{Hom}(E,D_{2}[2])\neq 0\). Then we have \[\phi_{\sigma_{2}}(D_{2}[1])-1\leq\phi_{\sigma_{2}}^{+}(E),\quad\phi_{\sigma_{2 }}^{-}(E)\leq\phi_{\sigma_{2}}(D_{2}[1])+1.\] But by (8), we get \(-1<n+1\) and \(n<2\), which implies \(n=0\) since \(n\in 2\mathbb{Z}\).
**Claim 2**.: _A non-zero object \(E\in\mathcal{K}u(X)\) is in \(\mathcal{A}_{1}\) if and only if \(E\) is in \(\mathcal{A}_{2}\)._
We prove this claim by induction on \(\operatorname{ext}^{1}(E,E)\). When \(\operatorname{ext}^{1}(E,E)\leq 2\), we have \(\operatorname{ext}^{1}(E,E)=2\) by Proposition 3.7 (5), hence \(E\) is \(\sigma_{i}\)-stable by Theorem 5.3. Then the result follows from our assumption \(\phi_{\sigma_{1}}(E)=\phi_{\sigma_{2}}(E)\). Now we assume that the claim holds for any object \(E\) with \(\operatorname{ext}^{1}(E,E)<N\) for an integer \(N>2\). We are going to prove the claim for \(\operatorname{ext}^{1}(E,E)=N\). To this end, we first assume that \(E\in\mathcal{A}_{1}\) with \(\operatorname{ext}^{1}(E,E)=N\). If \(E\) has at least two cohomology objects with respect to \(\mathcal{A}_{2}\), we let \(a,b\in\mathbb{Z}\) be the unique integers satisfying
\[\mathcal{H}_{\mathcal{A}_{2}}^{-a}(E)\neq 0,\mathcal{H}_{\mathcal{A}_{2}}^{k}(E) =0,\forall k<-a\]
and
\[\mathcal{H}_{\mathcal{A}_{2}}^{-b}(E)\neq 0,\mathcal{H}_{\mathcal{A}_{2}}^{k}(E) =0,\forall k>-b.\]
Therefore, we have two non-zero maps \(\mathcal{H}_{\mathcal{A}_{2}}^{-a}(E)[a]\to E\) and \(E\to\mathcal{H}_{\mathcal{A}_{2}}^{-b}(E)[b]\). Using (5) of Proposition 3.7 and Lemma 4.6, we see
\[\operatorname{ext}^{1}(\mathcal{H}_{\mathcal{A}_{2}}^{-a}(E),\mathcal{H}_{ \mathcal{A}_{2}}^{-a}(E))<N,\text{ and }\operatorname{ext}^{1}(\mathcal{H}_{\mathcal{A}_{2}}^{-b}(E),\mathcal{H}_{ \mathcal{A}_{2}}^{-b}(E))<N.\]
Thus by the induction hypothesis, we get \(\mathcal{H}_{\mathcal{A}_{2}}^{-a}(E),\mathcal{H}_{\mathcal{A}_{2}}^{-b}(E) \in\mathcal{A}_{1}\). But since \(E\in\mathcal{A}_{1}\), we have \(a\leq 0\) and \(b\geq 0\), which contradicts \(a>b\). This implies \(E\) is in \(\mathcal{A}_{2}\) up to a shift, but Claim 1 shows \(E\in\mathcal{A}_{2}\). When \(E\in\mathcal{A}_{2}\) with \(\operatorname{ext}^{1}(E,E)=N\), the same argument as above also shows that \(E\in\mathcal{A}_{1}\). This completes our induction argument and proves Claim 2.
Therefore, by Claim 2, we have \(\mathcal{A}_{1}=\mathcal{A}_{2}\), which together with \(Z_{1}=Z_{2}\) implies \(\sigma_{1}=\sigma_{2}\). Undoing the even shift used above to arrange \(D_{1}\in\mathcal{A}_{1}\cap\mathcal{A}_{2}\), we conclude that \(\sigma_{1}=[2m]\cdot\sigma_{2}\) for some \(m\in\mathbb{Z}\).
Since we know that the homological dimension of any Serre-invariant stability condition on \(\mathcal{K}u(X)\) is at most \(2\), we get the following criterion.
**Theorem 5.8**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold, and \(\sigma\) be a stability condition on \(\mathcal{K}u(X)\). Then the following conditions are equivalent:_
1. \(\operatorname{homdim}(\sigma)\leq 2\)_,_
2. \(\operatorname{gldim}(\sigma)\leq 2\)_, and_
3. \(\sigma\) _is Serre-invariant._
Proof.: By Proposition 3.7, we see (3) implies (2), and by Lemma 4.10 we have (2) implies (1).
By Theorem 5.3 and Corollary 5.4, if \(\operatorname{homdim}(\sigma)\leq 2\), then we can find a Serre-invariant stability condition \(\sigma^{\prime}\) such that \(Z=Z^{\prime}\). Then (1) implies (3) follows from Theorem 5.7.
### The Serre-invariant component
Let \(\mathsf{K}\subset\operatorname{Stab}(\mathcal{K}u(X))\) be the subspace of all Serre-invariant stability conditions on \(\mathcal{K}u(X)\). By Theorem 3.10, \(\mathsf{K}\) consists of a single \(\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\)-orbit, so we may identify \(\mathsf{K}\) with \(\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\). It is clear that \(\mathsf{K}\) is contractible since \(\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\) is.
**Corollary 5.9**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold. Then \(\mathsf{K}\) is a contractible connected component of \(\operatorname{Stab}(\mathcal{K}u(X))\)._
Proof.: By [34, Remark 3.10], \(\mathsf{K}\) is an open subset of a connected component of \(\operatorname{Stab}(\mathcal{K}u(X))\). Since \(\operatorname{gldim}\colon\operatorname{Stab}(\mathcal{K}u(X))\to\mathbb{R}\) is continuous by [18, Lemma 5.7], the subspace \(\operatorname{gldim}^{-1}([0,2])\subset\operatorname{Stab}(\mathcal{K}u(X))\) is closed. Moreover, by Theorem 5.8, \(\mathsf{K}=\operatorname{gldim}^{-1}([0,2])\). Since \(\mathsf{K}=\widetilde{\operatorname{GL}}^{+}(2,\mathbb{R})\cong\mathbb{C} \times\mathbb{H}\) is connected, open and closed, it is a contractible connected component.
For an arbitrary stability condition \(\sigma\) on \(\mathcal{K}u(X)\), Proposition 3.7 (5) implies \(\operatorname{homdim}(\sigma)\geq 1\). So it is natural to make the following conjecture, which together with Theorem 5.8 could describe all stability conditions on \(\mathcal{K}u(X)\):
**Conjecture 5.10**.: _Let \(X\) be a cubic threefold, quartic double solid or Gushel-Mukai threefold. Then for any stability condition \(\sigma\) on \(\mathcal{K}u(X)\), we have \(\operatorname{homdim}(\sigma)\leq 2\)._
|
2305.14329 | Zero-sum Polymatrix Markov Games: Equilibrium Collapse and Efficient
Computation of Nash Equilibria | The works of (Daskalakis et al., 2009, 2022; Jin et al., 2022; Deng et al.,
2023) indicate that computing Nash equilibria in multi-player Markov games is a
computationally hard task. This fact raises the question of whether or not
computational intractability can be circumvented if one focuses on specific
classes of Markov games. One such example is two-player zero-sum Markov games,
in which efficient ways to compute a Nash equilibrium are known. Inspired by
zero-sum polymatrix normal-form games (Cai et al., 2016), we define a class of
zero-sum multi-agent Markov games in which there are only pairwise interactions
described by a graph that changes per state. For this class of Markov games, we
show that an $\epsilon$-approximate Nash equilibrium can be found efficiently.
To do so, we generalize the techniques of (Cai et al., 2016), by showing that
the set of coarse-correlated equilibria collapses to the set of Nash
equilibria. Afterwards, it is possible to use any algorithm in the literature
that computes approximate coarse-correlated equilibria Markovian policies to
get an approximate Nash equilibrium. | Fivos Kalogiannis, Ioannis Panageas | 2023-05-23T17:56:45Z | http://arxiv.org/abs/2305.14329v2 | # Zero-sum Polymatrix Markov Games: Equilibrium Collapse and Efficient Computation of Nash Equilibria
###### Abstract
The works of (Daskalakis et al., 2009, 2022; Jin et al., 2022; Deng et al., 2023) indicate that computing Nash equilibria in multi-player Markov games is a computationally hard task. This fact raises the question of whether or not computational intractability can be circumvented if one focuses on specific classes of Markov games. One such example is two-player zero-sum Markov games, in which efficient ways to compute a Nash equilibrium are known. Inspired by zero-sum polymatrix normal-form games (Cai et al., 2016), we define a class of zero-sum multi-agent Markov games in which there are only pairwise interactions described by a graph that changes per state. For this class of Markov games, we show that an \(\epsilon\)-approximate Nash equilibrium can be found efficiently. To do so, we generalize the techniques of (Cai et al., 2016), by showing that the set of coarse-correlated equilibria collapses to the set of Nash equilibria. Afterwards, it is possible to use any algorithm in the literature that computes approximate coarse-correlated equilibria Markovian policies to get an approximate Nash equilibrium.
###### Contents
* 1 Introduction
* 1.1 Importance of zero-sum polymatrix Markov games
* 1.2 Related work
* 2 Preliminaries
* 2.1 Markov games
* 2.2 Our setting
* 3 Main results
* 3.1 Warm-up: equilibrium collapse in two-player zero-sum MG's
* 3.2 Equilibrium collapse in finite-horizon polymatrix Markov games
* 3.3 No equilibrium collapse with more than one controllers per-state
* 3.4 Equilibrium collapse in infinite-horizon polymatrix Markov games
* 4 Conclusion and open problems
* A Missing statements and proofs
* A.1 Statements for Section 3.1
* A.2 Proof of Theorem 3.2
* B Extra remarks for finite-horizon games
* B.1 Poly-time computation Nash equilibrium policies in zero-sum Polymatrix Markov games
* C Proofs for Infinite-horizon Zero-Sum Polymatrix Markov Games
* C.1 Definitions of equilibria for the infinite-horizon
* C.2 Main results for infinite-horizon games
* C.3 No equilibrium collapse with more than one controllers per-state
## 1 Introduction
Multi-agent reinforcement learning (MARL) is a discipline that is concerned with strategic interactions between agents who find themselves in a dynamically changing environment. Early aspects of MARL can be traced back to the literature on two-player zero-sum stochastic/Markov games initiated by Shapley (1953). Today, Markov games have been established as the theoretical framework for MARL (Littman, 1994). The connection between game theory and MARL has led to several recent cornerstone results in benchmark domains in AI (Bowling et al., 2015; Brown and Sandholm, 2019, 2018; Brown et al., 2020; Silver et al., 2017; Moravcik et al., 2017; Perolat et al., 2022; Vinyals et al., 2019). The majority of the aforementioned breakthroughs relied on computing _Nash equilibria_(Nash, 1951) in a scalable and often decentralized manner. Although the theory of single agent reinforcement learning (RL) has witnessed outstanding progress (_e.g._, see (Agarwal et al., 2020; Bertsekas, 2000; Jin et al., 2018; Li et al., 2021; Luo et al., 2019; Panait and Luke, 2005; Sidford et al., 2018; Sutton and Barto, 2018), and references therein), the landscape of multi-agent settings eludes a thorough understanding. In fact, guarantees for provably efficient computation of Nash equilibria remain limited to either environments in which agents strive to coordinate towards a shared goal (Chen et al., 2022; Claus and Boutilier, 1998; Ding et al., 2022; Fox et al., 2022; Leonardos et al., 2021; Maheshwari et al., 2022; Wang and Sandholm, 2002; Zhang et al., 2021) or fully competitive environments such as two-player zero-sum games (Cen et al., 2021; Condon, 1993; Daskalakis et al., 2020; Sayin et al., 2021, 2020; Wei et al., 2021), to name a few. Part of the reason for the lack of efficient algorithmic results in MARL is the fact that computing approximate Nash equilibria in (general-sum) games is computationally intractable (Daskalakis et al., 2009; Rubinstein, 2017; Chen et al., 2009; Etessami and Yannakakis, 2010) even when the games have a single state, _i.e._, normal-form two-player games.
We aim at providing a theoretical framework that captures an array of real-world applications that feature both shared and competing interests between the agents -- which admittedly correspond to a big portion of all modern applications. A recent contribution that computes NE efficiently in a setting that combines both collaboration and competition, (Kalogiannis et al., 2022), concerns adversarial team Markov games, or competition between an adversary and a group of uncoordinated agents with common rewards. Efficient algorithms for computing Nash equilibria in settings that include both cooperation and competition are far fewer and tend to impose assumptions that are restrictive and difficult to meet in most applications (Bowling, 2000; Hu and Wellman, 2003). The focus of our work is centered around the following question:
_Are there any other settings of Markov games that encompass both competition and coordination while maintaining the tractability of Nash equilibrium computation?_
Inspired by recent advances in algorithmic game theory and specifically zero-sum _polymatrix_ normal-form games (Cai et al., 2016), we focus on the problem of computing Nash equilibria in zero-sum _polymatrix Markov_ games. Informally, a polymatrix Markov game is a multi-agent Markov decision process with \(n\) agents, state-space \(\mathcal{S}\), action space \(\mathcal{A}_{k}\) for agent \(k\), a transition probability model \(\mathbb{P}\) and is characterized by a graph \(\mathcal{G}_{s}(\mathcal{V},\mathcal{E}_{s})\) which is potentially different in every state \(s\). For a fixed state \(s\), the nodes of the graph \(\mathcal{V}\) correspond to the agents, and the edges \(\mathcal{E}_{s}\) of the graph are two-player normal-form games (different per state). Every node/agent \(k\) has a fixed set of actions \(\mathcal{A}_{k}\), and chooses a strategy from this set to play in all games corresponding to adjacent edges. Given an action profile of all the players, the node's reward is the sum of its rewards in all games on the edges adjacent to it. The game is globally zero-sum if, for all strategy profiles, the rewards of all players add up to zero. Afterwards, the process transitions to a state \(s^{\prime}\) according to \(\mathbb{P}\). In a more high-level description, the agents interact over a network whose connections change at every state.
Our results. We consider a zero-sum polymatrix Markov game with the additional property that a single agent (not necessarily the same one) controls the transition at each state, _i.e._, the transition model is affected by a single agent's actions for each state \(s\). These games are known as _switching controller_ Markov games. We show that we can compute in time \(\operatorname{poly}(|\mathcal{S}|,n,\max_{i\in[n]}|\mathcal{A}_{i}|,1/\epsilon)\) an \(\epsilon\)-approximate Nash equilibrium. The proof relies on the fact that zero-sum polymatrix Markov games with a switching controller have the following important property: the marginals of a coarse-correlated equilibrium constitute a Nash equilibrium (see
Section 3.2). We refer to this phenomenon as _equilibrium collapse_. This property was already known for zero-sum polymatrix normal-form games by Cai et al. (2016), and our results generalize the aforementioned work to Markov games. As a corollary, we get that any algorithm in the literature that guarantees convergence to approximate coarse-correlated equilibrium Markovian policies--_e.g._, (Daskalakis et al., 2022)--can be used to get approximate Nash equilibria. Our contribution also unifies previous results that were otherwise only applicable to the settings of _single_ and _switching-control two-player zero-sum games_, or _zero-sum polymatrix normal-form games_. Finally, we show that the equilibrium collapsing phenomenon does not carry over if there are two or more controllers per state (see Section 3.3).
Technical overview. In order to prove our results, we rely on nonlinear programming and, in particular, nonlinear programs whose optima coincide with the Nash equilibria of a particular Markov game (Filar et al., 1991; Filar and Vrieze, 2012). Our approach is analogous to the one used by (Cai et al., 2016), which uses linear programming to prove the collapse of the set of CCE to the set of NE. Nevertheless, using linear programming duality is not possible in our case since a Markov game introduces nonlinear terms in the program. It is noteworthy that we do not need to invoke (Lagrangian) duality or an argument that relies on stationary points of a Lagrangian function. Rather, we use the structure of zero-sum polymatrix Markov games with a switching controller to relate a correlated policy to the product policy formed by its marginals in terms of the individual utilities of the game.
### Importance of zero-sum polymatrix Markov games
Strategic interactions of agents over a network is a topic of research in multiple disciplines that span computer science (Easley and Kleinberg, 2010), economics (Schweitzer et al., 2009), control theory (Tipsuwan and Chow, 2003), and biology (Szabo and Fath, 2007) to name a few.
In many environments where multiple agents interact with each other, they do so in a localized manner. That is, every agent is affected by the set of agents that belong to their immediate "neighborhood". Further, it is quite common that these agents interact independently with each one of their neighbors, meaning that the outcome of their total interactions is a sum of pairwise interactions rather than interactions that depend on joint actions. Finally, players might remain indifferent to the actions of players that are not their neighbors.
To illustrate this phenomenon, we can think of multiplayer e-games (_e.g._, CS:GO, Fortnite, League of Legends, etc.) where each player interacts through the same move only with players that are present in their vicinity and, in general, the neighbors cannot combine their actions into something that is not a mere sum of their individual actions (_i.e._, they can rarely "multiply" the effect of the individual actions). In other scenarios, such as strategic games played on social networks (_e.g._, opinion dynamics), agents clearly interact in a pairwise manner with agents that belong to their neighborhood and are somewhat oblivious to the actions of agents with whom they do not share a connection.
With the proposed model we provide the theoretical framework needed to reason about such strategic interactions over dynamically changing networks.
### Related work
From the literature on Markov games, we recognize the settings of _single controller_(Filar and Raghavan, 1984; Sayin et al., 2022; Guan et al., 2016; Qiu et al., 2021) and _switching controller_(Vrieze et al., 1983) Markov games to be among the most closely related to ours. In these settings, all agents' actions affect individual rewards, but in every state one particular player (_single controller_), or a potentially different one per state (_switching controller_), controls the transition of the environment to a new state. To the best of our knowledge, prior to our work, the only Markov games that have been examined under this assumption are either zero-sum or potential games.
Further, we manage to go beyond the dichotomy of absolute competition or absolute collaboration by generalizing zero-sum polymatrix games to their Markovian counterpart. In this sense, our work is related to previous works of Cai et al. (2016); Anagnostides et al. (2022); Ao et al. (2022) which show fast convergence to Nash equilibria in zero-sum polymatrix normal-form games for various no-regret learning algorithms including optimistic gradient descent.
## 2 Preliminaries
Notation. We define \([n]\coloneqq\{1,\cdots,n\}\). Scalars are denoted using lightface variables, while we use boldface for vectors and matrices. For simplicity in the exposition, we use \(O(\cdot)\) to suppress dependencies that are polynomial in the parameters of the game. Additionally, given a collection \(\mathbf{x}\) of policies or strategies for players \([n]\), \(\mathbf{x}_{-k}\) denotes the policies of every player excluding \(k\).
### Markov games
In its most general form, a Markov game (MG) with a finite number of \(n\) players is defined as a tuple \(\Gamma(H,\mathcal{S},\{\mathcal{A}_{k}\}_{k\in[n]},\mathbb{P},\{r_{k}\}_{k\in[ n]},\gamma,\mathbf{\rho})\). Namely,
* \(H\in\mathbb{N}_{+}\) denotes the _time horizon_, or the length of each episode,
* \(\mathcal{S}\), with cardinality \(S\coloneqq|\mathcal{S}|\), stands for the state space,
* \(\{\mathcal{A}_{k}\}_{k\in[n]}\) is the collection of every player's action space, while \(\mathcal{A}\coloneqq\mathcal{A}_{1}\times\cdots\times\mathcal{A}_{n}\) denotes the _joint action space_; further, an element of that set --a joint action-- is generally noted as \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathcal{A}\),
* \(\mathbb{P}\coloneqq\{\mathbb{P}_{h}\}_{h\in[H]}\) is the set of all _transition matrices_, with \(\mathbb{P}_{h}:\mathcal{S}\times\mathcal{A}\to\Delta(\mathcal{S})\); further, \(\mathbb{P}_{h}(\cdot|s,\mathbf{a})\) marks the probability of transitioning to every state given that the joint action \(\mathbf{a}\) is selected at time \(h\) and state \(s\) -- in infinite-horizon games \(\mathbb{P}\) does not depend on \(h\) and the index is dropped,
* \(r_{k}\coloneqq\{r_{k,h}\}\) is the collection of reward functions of player \(k\); \(r_{k,h}:\mathcal{S}\times\mathcal{A}\to[-1,1]\) yields the reward of player \(k\) at a given state and joint action at time \(h\) -- in infinite-horizon games, \(r_{k,h}\) is the same for every \(h\) and the index is dropped,
* a discount factor \(\gamma>0\), which is generally set to \(1\) when \(H<\infty\), and \(\gamma<1\) when \(H\to\infty\),
* an initial state distribution \(\mathbf{\rho}\in\Delta(\mathcal{S})\).
It is noteworthy that without placing any structure on \(\{r_{k}\}_{k\in[n]}\), an MG encompasses general interactions, with both _cooperation_ and _competition_.
Policies and value functions. We will define stationary and nonstationary Markov policies. When the horizon \(H\) is finite, a stationary policy equilibrium need not exist even for a single-agent MG, _i.e._, a Markov decision process; in this case, we seek nonstationary policies. For infinite-horizon games, it is folklore that a stationary Markov policy Nash equilibrium always exists.
We note that a policy is _Markovian_ when it depends on the present state only. A _nonstationary_ Markov policy \(\mathbf{\pi}_{k}\) for player \(k\) is defined as \(\mathbf{\pi}_{k}\coloneqq\{\mathbf{\pi}_{k,h}:\mathcal{S}\to\Delta(\mathcal{A}_{k}), \ \forall h\in[H]\}\). It is a sequence of mappings of states \(s\) to a distribution over actions \(\Delta(\mathcal{A}_{k})\) for every timestep \(h\). By \(\mathbf{\pi}_{k,h}(a|s)\) we will denote the probability of player \(k\) taking action \(a\) in timestep \(h\) and state \(s\). A Markov policy is said to be _stationary_ in the case that it outputs an identical probability distribution over actions whenever a particular state is visited regardless of the corresponding timestep \(h\).
Further, we define a nonstationary Markov _joint policy_\(\mathbf{\sigma}\coloneqq\{\mathbf{\pi}_{h},\ \forall h\in[H]\}\) to be a sequence of mappings from states to distributions over joint actions \(\Delta(\mathcal{A})\equiv\Delta(\mathcal{A}_{1}\times\cdots\times\mathcal{A} _{n})\) for all times steps \(h\) in the time horizon. In this case, the players can be said to share a common source of randomness, or that the joint policy is correlated.
A joint policy \(\mathbf{\pi}\) will be said to be a _product policy_ if there exist policies \(\mathbf{\pi}_{k}:[H]\times\mathcal{S}\to\Delta(\mathcal{A}_{k}),\ \forall k\in[n]\) such that \(\mathbf{\pi}_{h}=\mathbf{\pi}_{1,h}\times\cdots\times\mathbf{\pi}_{n,h},\ \forall h\in[H]\). Moreover, given a joint policy \(\mathbf{\pi}\) we let a joint policy \(\mathbf{\pi}_{-k}\) stand for the _marginal joint policy_ excluding player \(k\), _i.e._,
\[\pi_{-k,h}(\mathbf{a}|s)=\sum_{a^{\prime}\in\mathcal{A}_{k}}\pi_{h}(a^{\prime},\bm {a}|s),\ \forall h\in[H],\forall s\in\mathcal{S},\forall\mathbf{a}\in\mathcal{A}_{-k}.\]
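To make the marginalization above concrete, the following minimal NumPy sketch (our own illustration; the array layout, with one axis per player's action set, is an assumption and not the paper's notation) computes the per-player marginals of a correlated joint policy.

```python
import numpy as np

def marginal_policies(sigma):
    """sigma: array of shape (H, S, A_1, ..., A_n), a correlated Markov joint policy.
    Returns a list of n arrays; the k-th has shape (H, S, A_k) and contains
    pi_{k,h}(a|s) = sum over a_{-k} of sigma_h(a, a_{-k}|s)."""
    n = sigma.ndim - 2                                   # number of players
    marginals = []
    for k in range(n):
        other_axes = tuple(2 + j for j in range(n) if j != k)
        marginals.append(sigma.sum(axis=other_axes))     # sum out all other players
    return marginals

# toy check: H = 1, S = 1, two players with two actions, perfectly correlated play
sigma = np.zeros((1, 1, 2, 2))
sigma[0, 0, 0, 0] = sigma[0, 0, 1, 1] = 0.5
pi1, pi2 = marginal_policies(sigma)
print(pi1[0, 0], pi2[0, 0])   # both marginals are uniform: [0.5 0.5]
```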
By fixing a joint policy \(\mathbf{\pi}\) we can define the value function of every player \(k\) at any given state and timestep as the expected cumulative reward they collect from that point onward; in particular, starting from state \(s_{1}\) at timestep \(h=1\),
\[V_{k,1}^{\mathbf{\pi}}(s_{1})=\mathbb{E}_{\mathbf{\pi}}\left[\sum_{h=1}^{H}\gamma^{h-1}r_{k,h}(s_{h},\mathbf{a}_{h})\big{|}s_{1}\right]=\mathbf{e}_{s_{1}}^{\top}\sum_{h=1}^{H}\left(\gamma^{h-1}\prod_{\tau=1}^{h-1}\mathbb{P}_{\tau}(\mathbf{\pi}_{\tau})\right)\mathbf{r}_{k,h}(\mathbf{\pi}_{h}),\]
where the empty product (for \(h=1\)) is understood to be the identity matrix.
Depending on whether the game has a finite or an infinite horizon, this expression specializes as follows.
* In finite-horizon games (\(\gamma=1\)), the value function reads \[V_{k,1}^{\mathbf{\pi}}(s_{1})=\mathbf{e}_{s_{1}}^{\top}\sum_{h=1}^{H}\left(\prod_{\tau=1}^{h-1}\mathbb{P}_{\tau}(\mathbf{\pi}_{\tau})\right)\mathbf{r}_{k,h}(\mathbf{\pi}_{h}).\]
* In infinite-horizon games (\(\gamma<1\)), for a stationary joint policy \(\mathbf{\pi}\), \[V_{k}^{\mathbf{\pi}}(s_{1})=\mathbf{e}_{s_{1}}^{\top}\left(\mathbf{I}-\gamma\,\mathbb{P}(\mathbf{\pi})\right)^{-1}\mathbf{r}_{k}(\mathbf{\pi}).\]
Here \(\mathbb{P}_{h}(\mathbf{\pi}_{h}),\mathbb{P}(\mathbf{\pi})\) and \(\mathbf{r}_{k,h}(\mathbf{\pi}_{h}),\mathbf{r}_{k}(\mathbf{\pi})\) denote the state-to-state transition probability matrix and the expected per-state reward vector of player \(k\) under the policy \(\mathbf{\pi}_{h}\) or \(\mathbf{\pi}\), respectively. Additionally, \(\mathbf{e}_{s_{1}}\) is an all-zero vector except for a value of \(1\) in its \(s_{1}\)-th position. Also, we denote \(V_{k,h}^{\mathbf{\pi}}(\mathbf{\rho})=\sum_{s\in\mathcal{S}}\rho(s)V_{k,h}^{\mathbf{\pi}}(s)\).
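For readers who prefer code, here is a minimal sketch (our own; the flattened-joint-action layout and variable names are assumptions) of the finite-horizon value functions computed by backward induction, which agrees with the matrix-product expression above when \(\gamma=1\).

```python
import numpy as np

def value_functions(r, P, sigma):
    """Backward induction for finite-horizon (gamma = 1) value functions.
    r:     (H, n, S, A)  rewards r_{k,h}(s, a), joint actions flattened to A
    P:     (H, S, A, S)  transition kernels P_h(s' | s, a)
    sigma: (H, S, A)     a (possibly correlated) Markov joint policy
    Returns V of shape (H + 1, n, S), with V[H] = 0 as the terminal condition."""
    H, n, S, A = r.shape
    V = np.zeros((H + 1, n, S))
    for h in range(H - 1, -1, -1):
        for k in range(n):
            # Q_{k,h}(s, a) = r_{k,h}(s, a) + sum_{s'} P_h(s'|s, a) V_{k,h+1}(s')
            Q = r[h, k] + P[h] @ V[h + 1, k]           # shape (S, A)
            V[h, k] = (sigma[h] * Q).sum(axis=1)       # average over the joint policy
    return V
```

In the paper's indexing, `V[0, k, s1]` corresponds to \(V_{k,1}^{\mathbf{\pi}}(s_{1})\); the sketch is zero-indexed.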
Best-response policies. Given an arbitrary joint policy \(\mathbf{\sigma}\), we define a _best-response policy_ of a player \(k\) to be a policy \(\mathbf{\pi}_{k}^{\dagger}\coloneqq\{\mathbf{\pi}_{k,h}^{\dagger},\ \forall h\in[H]\}\) that attains \(\max_{\mathbf{\pi}_{k}^{\prime}}V_{k,1}^{\mathbf{\pi}_{k}^{\prime}\times\mathbf{\sigma}_{-k}}(s_{1})\). Additionally, we will use the notation \(V_{k,h}^{\dagger,\mathbf{\sigma}_{-k}}(s)\coloneqq\max_{\mathbf{\pi}_{k}^{\prime}}V_{k,h}^{\mathbf{\pi}_{k}^{\prime}\times\mathbf{\sigma}_{-k}}(s)\).
Equilibrium notions. Having defined what a best response is, it is then quite direct to define different notions of equilibria for Markov games.
**Definition 2.1** (CCE).: _We say that a joint (potentially correlated) policy \(\mathbf{\sigma}\in\Delta(\mathcal{A})^{H\times S}\) is an \(\epsilon\)-approximate coarse-correlated equilibrium if it holds that, for an \(\epsilon>0\),_
\[V_{k,1}^{\dagger,\mathbf{\sigma}_{-k}}(s_{1})-V_{k,1}^{\mathbf{\sigma}}(s_{1})\leq \epsilon,\ \forall k\in[n].\] (CCE)
Further, we will define a Nash equilibrium policy,
**Definition 2.2** (Ne).: _A joint, product policy \(\mathbf{\pi}\in\prod_{k\in[n]}\Delta(\mathcal{A}_{k})^{H\times S}\) is an \(\epsilon\)-approximate Nash equilibrium if it holds that, for an \(\epsilon>0\),_
\[V_{k,1}^{\dagger,\mathbf{\pi}_{-k}}(s_{1})-V_{k,1}^{\mathbf{\pi}}(s_{1})\leq\epsilon, \ \forall k\in[n].\] (NE)
It is quite evident that an approximate Nash equilibrium is also an approximate coarse-correlated equilibrium while the converse is not generally true. For infinite-horizon games the definitions are analogous.
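Definitions 2.1 and 2.2 can be checked numerically. The sketch below (our own; it uses the same kind of flattened layout as the previous snippet, with player \(k\)'s actions kept on a separate axis) computes player \(k\)'s best-response value against the opponents' marginal \(\mathbf{\sigma}_{-k}\) by backward induction; the equilibrium gap of player \(k\) is then the difference between this quantity at \((h=1,s_{1})\) and \(V_{k,1}^{\mathbf{\sigma}}(s_{1})\), and the policy is an \(\epsilon\)-CCE when all \(n\) gaps are at most \(\epsilon\).

```python
import numpy as np

def best_response_value(r_k, P, sigma_mk):
    """Best-response value of player k against the opponents' marginal policy.
    r_k:      (H, S, Ak, Amk)     rewards; the opponents' joint actions are flattened to Amk
    P:        (H, S, Ak, Amk, S)  transition kernels
    sigma_mk: (H, S, Amk)         marginal policy sigma_{-k,h}(. | s) of the opponents
    Returns V_dag of shape (H + 1, S), with V_dag[H] = 0."""
    H, S, Ak, Amk = r_k.shape
    V_dag = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        Q = r_k[h] + P[h] @ V_dag[h + 1]                  # (S, Ak, Amk)
        Q_k = np.einsum('sij,sj->si', Q, sigma_mk[h])     # average out the opponents -> (S, Ak)
        V_dag[h] = Q_k.max(axis=1)                        # player k plays a best response
    return V_dag
```

Subtracting \(V_{k,1}^{\mathbf{\sigma}}(s_{1})\) (computed, e.g., with the previous sketch under a consistent encoding of joint actions) from `best_response_value(...)[0, s1]` gives the left-hand side of (CCE) for player \(k\).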
### Our setting
We focus on the setting of zero-sum polymatrix switching-control Markov games. This setting encompasses two major assumptions related to the reward functions in every state \(\{r_{k}\}_{k\in[n]}\) and the transition kernel \(\mathbb{P}\). The first assumption imposes a zero-sum, polymatrix structure on \(\{r_{k}\}_{k\in[n]}\) for every state and directly generalizes zero-sum polymatrix games for games with multiple states.
**Assumption 1** (Zero-sum polymatrix games).: The reward functions of every player in any state \(s\) are characterized by a _zero-sum_, _polymatrix_ structure.
Polymatrix structure. For every state \(s\) there exists an undirected graph \(\mathcal{G}_{s}(\mathcal{V},\mathcal{E}_{s})\) where,
* the set of nodes \(\mathcal{V}\) coincides with the set of agents \([n]\); the \(k\)-th node is the \(k\)-th agent,
* the set of edges \(\mathcal{E}_{s}\) stands for the set of pair-wise interactions; each edge \(e=(k,j),k,j\in[n],k\neq j\) stands for a general-sum normal-form game played between players \(k,j\) and which we note as \(\left(r_{kj}(s,\cdot,\cdot),r_{jk}(s,\cdot,\cdot)\right)\) with \(r_{kj},r_{jk}:\mathcal{S}\times\mathcal{A}_{k}\times\mathcal{A}_{j}\to[-1,1]\).
Moreover, we define \(\mathrm{adj}(s,k)\coloneqq\{j\in[n]\ |\ (k,j)\in\mathcal{E}_{s}\}\subseteq[n]\) to be the set of all neighbors of an arbitrary agent \(k\) in state \(s\). The reward of agent \(k\) at state \(s\) given a joint action \(\mathbf{a}\) depends solely on interactions with their neighbors,
\[r_{k,h}(s,\mathbf{a})=\sum_{j\in\mathrm{adj}(s,k)}r_{kj,h}(s,a_{k},a_{j}),\ \forall h\in[H],\forall s\in\mathcal{S},\forall\mathbf{a}\in\mathcal{A}.\]
Further, the _zero-sum_ assumption implies that,
\[\sum_{k}r_{k,h}(s,\mathbf{a})=0,\quad\forall h\in[H],\forall s\in\mathcal{S}, \forall\mathbf{a}\in\mathcal{A}. \tag{1}\]
In the infinite-horizon setting, the subscript \(h\) can be dropped.
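A small self-contained sketch (our own toy construction, not data from the paper) of how per-state polymatrix rewards are assembled from edge games, together with a numerical check of the zero-sum property (1) at a fixed state and timestep:

```python
import numpy as np
from itertools import product

def polymatrix_reward(edge_games, k, a):
    """edge_games maps an edge (i, j) with i < j to a pair (R_ij, R_ji) of payoff
    matrices of shapes (A_i, A_j) and (A_j, A_i) at a fixed state and timestep.
    Returns player k's reward: the sum of edge payoffs over k's neighbors."""
    total = 0.0
    for (i, j), (R_ij, R_ji) in edge_games.items():
        if k == i:
            total += R_ij[a[i], a[j]]
        elif k == j:
            total += R_ji[a[j], a[i]]
    return total

# three players on a path 1 - 2 - 3 (zero-indexed 0 - 1 - 2), two actions each;
# every edge game is matching pennies, so the game is globally zero-sum
MP = np.array([[1., -1.], [-1., 1.]])
edges = {(0, 1): (MP, -MP), (1, 2): (MP, -MP)}
assert all(abs(sum(polymatrix_reward(edges, k, a) for k in range(3))) < 1e-12
           for a in product(range(2), repeat=3))   # property (1) holds for every joint action
```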
A further assumption (_switching-control_) is necessary in order to ensure the desirable property of equilibrium collapse.
**Assumption 2** (Switching-control).: In every state \(s\in\mathcal{S}\), there exists a single player (not necessarily the same), or _controller_, whose actions determine the probability of transitioning to a new state.
The function \(\mathrm{argctrl}:\mathcal{S}\to[n]\) returns the index of the player who controls the transition probability at a given state \(s\). On the other hand, the function \(\mathrm{ctrl}:\mathcal{S}\times\mathcal{A}\to\mathcal{A}_{\mathrm{argctrl}(s)}\) takes as input a joint action \(\mathbf{a}\) at a particular state \(s\) and returns the action of the controller of that state, \(a_{\mathrm{argctrl}(s)}\).
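Computationally, Assumption 2 says that the transition kernel at state \(s\) depends on the joint action only through the controller's component, i.e. \(\mathbb{P}_{h}(\cdot|s,\mathbf{a})=\mathbb{P}_{h}(\cdot|s,\mathrm{ctrl}(s,\mathbf{a}))\). A minimal sketch (our own, with hypothetical container names):

```python
def next_state_distribution(P_ctrl, argctrl, s, a):
    """P_ctrl[s] is a matrix of shape (|A_{argctrl(s)}|, |S|): the next-state
    distribution at state s depends on the joint action a only through the
    component chosen by the controller of s."""
    controller = argctrl[s]            # index of the player controlling state s
    return P_ctrl[s][a[controller]]    # a row: a probability distribution over next states
```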
**Remark 1**.: _It is direct to see that Markov games with a single controller and turn-based Markov games (Daskalakis et al., 2022) are special cases of Markov games with a switching controller._
## 3 Main results
In this section we provide the main results of this paper. We shall show the collapsing phenomenon of coarse-correlated equilibria to Nash equilibria in the case of zero-sum, single switching controller polymatrix Markov games. Before we proceed, we provide a formal definition of the notion of collapsing.
**Definition 3.1** (CCE collapse to NE).: _Let \(\mathbf{\sigma}\) be any \(\epsilon\)-CCE policy of a Markov game. Moreover, let the marginal policy \(\mathbf{\pi}^{\mathbf{\sigma}}:=(\mathbf{\pi}^{\mathbf{\sigma}}_{1},...,\mathbf{\pi}^{\mathbf{\sigma}}_{n})\) be defined as:_
\[\pi^{\mathbf{\sigma}}_{k}(a|s)=\sum_{\mathbf{a}_{-k}\in\mathcal{A}_{-k}}\sigma(a,\mathbf{a }_{-k}|s),\ \forall k,\forall s\in\mathcal{S},\forall a\in\mathcal{A}_{k}.\]
_If \(\mathbf{\pi}^{\mathbf{\sigma}}\) is an \(O(\epsilon)\)-NE for every \(\mathbf{\sigma}\), then we say that the set of approximate CCE's collapses to that of approximate NE's._
We start with the warm-up result that the set of CCE's collapses to the set of NE's for two-player zero-sum Markov games.
### Warm-up: equilibrium collapse in two-player zero-sum MG's
Since we focus on two-player zero-sum Markov games, we simplify the notation by using \(V_{h=1}(s)\coloneqq V_{2,1}(s)\)--_i.e._, player 1 is the minimizing player and player 2 is the maximizer. We show the following theorem:
**Theorem 3.1** (Collapse in two-player zero-sum MG's).: _Let \(\Gamma^{\prime}\) be a two-player zero-sum Markov game and \(\mathbf{\sigma}\) an \(\epsilon\)-approximate CCE policy of that game. Then, the marginalized product policies \(\mathbf{\pi}^{\mathbf{\sigma}}_{1},\mathbf{\pi}^{\mathbf{\sigma}}_{2}\) form a \(2\epsilon\)-approximate NE._
**Proof.** Since \(\mathbf{\sigma}\) is an \(\epsilon\)-approximate CCE joint policy, by definition it holds that for any \(\mathbf{\pi}_{1}\) and any \(\mathbf{\pi}_{2}\),
\[V^{\mathbf{\sigma}_{-2}\times\mathbf{\pi}_{2}}_{h=1}(s_{1})-\epsilon\leq V^{\mathbf{\sigma }}_{h=1}(s_{1})\leq V^{\mathbf{\pi}_{1}\times\mathbf{\sigma}_{-1}}_{h=1}(s_{1})+\epsilon.\]
Due to Claim A.1, the latter is equivalent to the following inequality,
\[V_{h=1}^{\mathbf{\pi}_{1}^{\sigma}\times\mathbf{\pi}_{2}}(s_{1})-\epsilon\leq V_{h=1}^{ \mathbf{\sigma}}(s_{1})\leq V_{h=1}^{\mathbf{\pi}_{1}\times\mathbf{\pi}_{2}^{\sigma}}(s_{1} )+\epsilon.\]
Plugging in \(\mathbf{\pi}_{1}^{\sigma}\) and \(\mathbf{\pi}_{2}^{\sigma}\) in turn, we get the inequalities:
\[\begin{cases}V_{h=1}^{\mathbf{\pi}_{1}^{\sigma}\times\mathbf{\pi}_{2}}(s_{1})-\epsilon \leq V_{h=1}^{\mathbf{\sigma}}(s_{1})\leq V_{h=1}^{\mathbf{\pi}_{1}^{\sigma}\times\bm {\pi}_{2}^{\sigma}}(s_{1})+\epsilon\\ V_{h=1}^{\mathbf{\pi}_{1}^{\sigma}\times\mathbf{\pi}_{2}^{\sigma}}(s_{1})-\epsilon \leq V_{h=1}^{\mathbf{\sigma}}(s_{1})\leq V_{h=1}^{\mathbf{\pi}_{1}\times\mathbf{\pi}_{2} ^{\sigma}}(s_{1})+\epsilon\end{cases}\]
The latter leads us to conclude that for any \(\mathbf{\pi}_{1}\) and any \(\mathbf{\pi}_{2}\),
\[V_{h=1}^{\mathbf{\pi}_{1}^{\sigma}\times\mathbf{\pi}_{2}}(s_{1})-2\epsilon\leq V_{h= 1}^{\mathbf{\pi}_{1}^{\sigma}\times\mathbf{\pi}_{2}^{\sigma}}(s_{1})\leq V_{h=1}^{\bm {\pi}_{1}\times\mathbf{\pi}_{2}^{\sigma}}(s_{1})+2\epsilon,\]
which is the definition of a \(2\epsilon\)-approximate NE in a zero-sum game.
### Equilibrium collapse in finite-horizon polymatrix Markov games
In this section, we turn to the more challenging case of polymatrix Markov games, which is the main focus of this paper. For any finite-horizon Markov game, we define \((\text{P}_{\text{NE}})\) to be the following nonlinear program with variables \(\mathbf{\pi},\mathbf{w}\):
\[\min \sum\limits_{k\in[n]}\left(w_{k,1}(s_{1})-\mathbf{e}_{s_{1}}^{\top} \sum\limits_{h=1}^{H}\left(\prod\limits_{\tau=1}^{h}\mathbb{P}_{\tau}(\mathbf{ \pi}_{\tau})\right)\mathbf{r}_{k,h}(\mathbf{\pi}_{h})\right)\] \[\text{s.t.} \ w_{k,h}(s)\geq r_{k,h}(s,a,\mathbf{\pi}_{-k,h})+\mathbb{P}_{h}(s,a, \mathbf{\pi}_{-k,h})\mathbf{w}_{k,h+1},\] \[\forall s\in\mathcal{S},\forall h\in[H],\forall k\in[n],\forall a \in\mathcal{A}_{k};\] \[w_{k,H}(s)=0,\quad\forall k\in[n],\forall s\in\mathcal{S};\] \[\mathbf{\pi}_{k,h}(s)\in\Delta(\mathcal{A}_{k}),\] \[\forall s\in\mathcal{S},\forall h\in[H],\forall k\in[n],\forall a \in\mathcal{A}_{k}.\]
Using the following theorem, we are able to use \((\text{P}_{\text{NE}})\) to argue about equilibrium collapse.
**Theorem 3.2** (NE and global optima of \((\text{P}_{\text{NE}})\)).: _If \((\mathbf{\pi}^{\star},\mathbf{w}^{\star})\) yields an \(\epsilon\)-approximate global minimum of \((\text{P}_{\text{NE}})\), then \(\mathbf{\pi}^{\star}\) is an \(n\epsilon\)-approximate NE of the zero-sum polymatrix switching controller MG, \(\Gamma\). Conversely, if \(\mathbf{\pi}^{\star}\) is an \(\epsilon\)-approximate NE of the MG \(\Gamma\) with corresponding value function vector \(\mathbf{w}^{\star}\) such that \(w^{\star}_{k,h}(s)=V_{k,h}^{\mathbf{\pi}^{\star}}(s)\forall(k,h,s)\in[n]\times[H] \times\mathcal{S}\), then \((\mathbf{\pi}^{\star},\mathbf{w}^{\star})\) attains an \(\epsilon\)-approximate global minimum of \((\text{P}_{\text{NE}})\)._
In what follows, we use \((\text{P}_{\text{NE}})\) to prove the collapse of CCE's to NE's. We observe that this program is nonlinear and in general nonconvex. Hence, duality cannot be used in the way it was used in (Cai et al., 2016) to prove equilibrium collapse. Nevertheless, we can prove that, given a CCE policy \(\mathbf{\sigma}\), the marginalized product policy \(\bigtimes_{k\in[n]}\mathbf{\pi}_{k}^{\mathbf{\sigma}}\), along with an appropriate vector \(\mathbf{w}^{\sigma}\), achieves a global minimum of the nonlinear program \((\text{P}_{\text{NE}})\). More precisely, our main result reads as the following statement.
**Theorem 3.3** (CCE collapse to NE in polymatrix MG).: _Consider a zero-sum polymatrix switching-control Markov game, i.e., a Markov game for which Assumptions 1 and 2 hold, and let \(\mathbf{\sigma}\) be an \(\epsilon\)-approximate CCE of that game. Then, the marginal product policy \(\mathbf{\pi}^{\sigma}\), with \(\mathbf{\pi}_{k,h}^{\mathbf{\sigma}}(a|s)=\sum_{\mathbf{a}_{-k}\in\mathcal{A}_{-k}}\mathbf{\sigma}_{h}(a,\mathbf{a}_{-k}|s),\ \forall k\in[n],\forall h\in[H],\forall s\in\mathcal{S}\), is an \(n\epsilon\)-approximate NE._
Proof.: Let \(\mathbf{\sigma}\) be an \(\epsilon\)-approximate CCE policy of game \(\Gamma\). Moreover, let \(\mathbf{w}_{k}^{\dagger}\) denote the best-response value vectors of each agent \(k\) against the joint policy \(\mathbf{\sigma}_{-k}\).
Now, we observe that due to Assumption 1,
\[w_{k,h}^{\dagger}(s) \geq r_{k,h}(s,a,\mathbf{\sigma}_{-k,h})+\mathbb{P}_{h}(s,a,\mathbf{ \sigma}_{-k,h})\mathbf{w}_{k,h+1}^{\dagger}\] \[=\sum_{j\in\operatorname{adj}(k)}r_{(k,j),h}(s,a,\mathbf{\pi}_{j}^{ \mathbf{\sigma}})+\mathbb{P}_{h}(s,a,\mathbf{\sigma}_{-k,h})\mathbf{w}_{k,h+1}^{\dagger}.\]
Further, due to Assumption 2,
\[\mathbb{P}_{h}(s,a,\mathbf{\sigma}_{-k,h})\mathbf{w}_{k,h+1}^{\dagger}= \mathbb{P}_{h}(s,a,\mathbf{\pi}_{\mathrm{argctrl}(s),h}^{\mathbf{\sigma}})\mathbf{w}_{k,h+1}^ {\dagger},\]
or,
\[\mathbb{P}_{h}(s,a,\mathbf{\sigma}_{-k,h})\mathbf{w}_{k,h+1}^{\dagger}= \mathbb{P}_{h}(s,a,\mathbf{\pi}^{\mathbf{\sigma}})\mathbf{w}_{k,h+1}^{\dagger}.\]
Putting these pieces together, we reach the conclusion that \((\mathbf{\pi}^{\mathbf{\sigma}},\mathbf{w}^{\dagger})\) is feasible for the nonlinear program (\(\mathrm{P}_{\mathrm{NE}}\)).
What is left is to prove that it is also an \(\epsilon\)-approximate global minimum. Indeed, if \(\sum_{k}\mathbf{w}_{k,h}^{\dagger}(s_{1}){\leq}\epsilon\) (by assumption of an \(\epsilon\)-approximate CCE), then the objective function of (\(\mathrm{P}_{\mathrm{NE}}\)) will attain an \(\epsilon\)-approximate global minimum. In turn, due to Theorem 3.2 the latter implies that \(\mathbf{\pi}^{\mathbf{\sigma}}\) is an \(n\epsilon\)-approximate NE.
We can now conclude that due to the algorithm introduced in (Daskalakis et al., 2022) for CCE computation in general-sum MG's, the next statement holds true.
**Corollary 3.1** (Computing a NE--finite-horizon).: Given a finite-horizon switching control zero-sum polymatrix Markov game, we can compute an \(\epsilon\)-approximate Nash equilibrium policy that is Markovian with probability at least \(1-\delta\) in time \(\mathrm{poly}\left(n,H,S,\max_{k}|\mathcal{A}_{k}|,\frac{1}{\epsilon},\log(1/ \delta)\right)\).
Proof.: The Corollary follows by (Daskalakis et al., 2022, Theorem 4.2).
In the next section, we discuss the necessity of the assumption of switching control using a counter-example of non-collapsing equilibria.
### No equilibrium collapse with more than one controller per state
Although Assumption 1 is sufficient for the collapse of any CCE to a NE in single-state (_i.e._, normal-form) games, we will prove that Assumption 2 is indispensable in guaranteeing such a collapse in zero-sum polymatrix Markov games. That is, if more than one player affects the transition probability from one state to another, a CCE is not guaranteed to collapse to a NE.
**Example 1**.: _We consider the following \(3\)-player Markov game that takes place for a time horizon \(H=3\). There exist three states, \(s_{1},s_{2},\) and \(s_{3}\) and the game starts at state \(s_{1}\). Player \(3\) has a single action in every state, while players \(1\) and \(2\) have two available actions \(\{a_{1},a_{2}\}\) and \(\{b_{1},b_{2}\}\) respectively in every state._
**Reward functions.** _If player \(1\) (respectively, player \(2\)) takes action \(a_{1}\) (resp., \(b_{1}\)), in either of the states \(s_{1}\) or \(s_{2}\), they get a reward equal to \(\frac{1}{20}\). In state \(s_{3}\), both players get a reward equal to \(-\frac{1}{2}\) regardless of the action they select. Player \(3\) always gets a reward that is equal to the negative sum of the reward of the other two players. This way, the zero-sum polymatrix property of the game is ensured (Assumption 1)._
**Transition probabilities.** _If players \(1\) and \(2\) select the joint action \((a_{1},b_{1})\) in state \(s_{1}\), the game will transition to state \(s_{3}\); in any other case, it will transition to state \(s_{2}\). Similarly, if in state \(s_{2}\) they take the joint action \((a_{1},b_{1})\), the game will transition to state \(s_{3}\); for any other joint action, it will transition to state \(s_{1}\). From state \(s_{3}\), the game transitions to state \(s_{1}\) or \(s_{2}\) uniformly at random._
_At this point, it is important to notice that two players control the transition probability from one state to another. In other words, Assumption 2 does not hold._
_Next, we consider the joint policy \(\mathbf{\sigma}\),_
\[\mathbf{\sigma}(s_{1})=\mathbf{\sigma}(s_{2})=\begin{array}{cc} & \begin{matrix}b_{1}&b_{2}\end{matrix}\\ \begin{matrix}a_{1}\\ a_{2}\end{matrix} & \left(\begin{matrix}0&1/2\\ 1/2&0\end{matrix}\right)\end{array}.\]
**Claim 3.1**.: The joint policy \(\mathbf{\sigma}\) that assigns probability \(\frac{1}{2}\) to the joint actions \((a_{1},b_{2})\) and \((a_{2},b_{1})\) in both states \(s_{1},s_{2}\) is a CCE and \(V_{1,1}^{\mathbf{\sigma}}(s_{1})=V_{2,1}^{\mathbf{\sigma}}(s_{1})=\frac{1}{20}\).
**Proof.** The value functions at \(s_{1}\) for \(h=1\) of players \(1\) and \(2\) read:
\[V_{1,1}^{\boldsymbol{\sigma}}(s_{1}) =\boldsymbol{e}_{s_{1}}^{\top}\left(\boldsymbol{r}_{1}(\boldsymbol {\sigma})+\mathbb{P}(\boldsymbol{\sigma})\boldsymbol{r}_{1}(\boldsymbol{\sigma })\right)\] \[=-\frac{9\sigma(a_{1},b_{1}|s_{1})}{20}+\frac{\sigma(a_{1},b_{2}| s_{1})}{20}+\frac{\left(1-\sigma(a_{1},b_{1}|s_{1})\right)\left(\sigma(a_{1},b_{1}| s_{2})+\sigma(a_{1},b_{2}|s_{2})\right)}{20},\]
and,
\[V_{2,1}^{\boldsymbol{\sigma}}(s_{1}) =\boldsymbol{e}_{s_{1}}^{\top}\left(\boldsymbol{r}_{2}(\boldsymbol {\sigma})+\mathbb{P}(\boldsymbol{\sigma})\boldsymbol{r}_{2}(\boldsymbol{\sigma })\right)\] \[=-\frac{9\sigma(a_{1},b_{1}|s_{1})}{20}+\frac{\sigma(a_{2},b_{2}| s_{1})}{20}+\frac{\left(1-\sigma(a_{1},b_{1}|s_{1})\right)\left(\sigma(a_{1},b_{1}| s_{2})+\sigma(a_{2},b_{1}|s_{2})\right)}{20}.\]
We are indifferent to the corresponding value function of player \(3\) as they only have one available action per state and hence, cannot affect their rewards. For the joint policy \(\boldsymbol{\sigma}\), the corresponding value functions of both players \(1\) and \(2\) are \(V_{1,1}^{\boldsymbol{\sigma}}(s_{1})=V_{2,1}^{\boldsymbol{\sigma}}(s_{1})= \frac{1}{20}\).
**Deviations.** We will now prove that no deviation of player \(1\) manages to accumulate a reward greater than \(\frac{1}{20}\). The same follows for player \(2\) due to symmetry.
When a player deviates unilaterally from a joint policy, they experience a single agent Markov decision process (MDP). It is well-known that MDPs always have a deterministic optimal policy. As such, it suffices to check whether \(V_{1,1}^{\boldsymbol{\pi}_{1},\boldsymbol{\sigma}_{-1}}(s_{1})\) is greater than \(\frac{1}{20}\) for any of the four possible deterministic policies:
* \(\boldsymbol{\pi}_{1}(s_{1})=\boldsymbol{\pi}_{1}(s_{2})=\begin{pmatrix}1&0\end{pmatrix}\),
* \(\boldsymbol{\pi}_{1}(s_{1})=\begin{pmatrix}1&0\end{pmatrix},\ \boldsymbol{\pi}_{1}(s_{2})=\begin{pmatrix}0&1\end{pmatrix}\),
* \(\boldsymbol{\pi}_{1}(s_{1})=\begin{pmatrix}0&1\end{pmatrix},\ \boldsymbol{\pi}_{1}(s_{2})=\begin{pmatrix}1&0\end{pmatrix}\),
* \(\boldsymbol{\pi}_{1}(s_{1})=\boldsymbol{\pi}_{1}(s_{2})=\begin{pmatrix}0&1\end{pmatrix}\).
Finally, the value function of any deviation \(\boldsymbol{\pi}_{1}^{\prime}\) writes,
\[V_{1,1}^{\boldsymbol{\pi}_{1}^{\prime}\times\boldsymbol{\sigma}_{-1}}(s_{1})= -\frac{\pi_{1}^{\prime}(a_{1}|s_{1})}{5}-\frac{\pi_{1}^{\prime}(a_{1}|s_{2}) \left(\pi_{1}^{\prime}(a_{1}|s_{1})-2\right)}{40}.\]
We can now check that for all deterministic policies \(V_{1,1}^{\boldsymbol{\pi}_{1}^{\prime}\times\boldsymbol{\sigma}_{-1}}(s_{1}) \leq\frac{1}{20}\). By symmetry, it follows that \(V_{2,1}^{\boldsymbol{\pi}_{2}^{\prime}\times\boldsymbol{\sigma}_{-2}}(s_{1}) \leq\frac{1}{20}\) and as such \(\boldsymbol{\sigma}\) is indeed a CCE.
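These checks can also be carried out numerically. The following short Python sketch (ours, not part of the paper's artifact) simply evaluates the closed-form expressions stated above for the candidate policy \(\mathbf{\sigma}\) and for the four deterministic deviations of player 1.

```python
from fractions import Fraction as F

# Player 1's value at s_1 under a correlated policy sigma (formula from the proof):
# x = sigma(a1,b1|s1), y = sigma(a1,b2|s1), p = sigma(a1,b1|s2), q = sigma(a1,b2|s2).
def V1_sigma(x, y, p, q):
    return -F(9, 20) * x + F(1, 20) * y + F(1, 20) * (1 - x) * (p + q)

print(V1_sigma(F(0), F(1, 2), F(0), F(1, 2)))   # 1/20, as claimed

# Value of a unilateral deviation pi_1' of player 1 against sigma_{-1}
# (formula from the proof): u = pi_1'(a1|s1), v = pi_1'(a1|s2).
def V1_dev(u, v):
    return -F(1, 5) * u - F(1, 40) * v * (u - 2)

for u in (F(0), F(1)):
    for v in (F(0), F(1)):
        print((u, v), V1_dev(u, v))   # every deterministic deviation yields at most 1/20
```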
_Yet, the marginalized product policy of \(\boldsymbol{\sigma}\), which we denote by \(\boldsymbol{\pi}_{1}^{\boldsymbol{\sigma}}\times\boldsymbol{\pi}_{2}^{\boldsymbol{\sigma}}\), does not constitute a NE. The
Figure 1: A graph of the state space with transition probabilities parametrized with respect to the policy of each player.
components of this policy are,_
\[\begin{cases}\mathbf{\pi}_{1}^{\mathbf{\sigma}}(s_{1})=\mathbf{\pi}_{1}^{\mathbf{\sigma}}(s_{2})=\begin{array}{c}\begin{matrix}a_{1}&a_{2}\end{matrix}\\ \begin{pmatrix}1/2&1/2\end{pmatrix}\end{array},\\[2mm] \mathbf{\pi}_{2}^{\mathbf{\sigma}}(s_{1})=\mathbf{\pi}_{2}^{\mathbf{\sigma}}(s_{2})=\begin{array}{c}\begin{matrix}b_{1}&b_{2}\end{matrix}\\ \begin{pmatrix}1/2&1/2\end{pmatrix}\end{array}.\end{cases}\]
_I.e., the product policy \(\mathbf{\pi}_{1}^{\mathbf{\sigma}}\times\mathbf{\pi}_{2}^{\mathbf{\sigma}}\) selects either of the two actions of each player in states \(s_{1},s_{2}\) independently and uniformly at random. With the following claim, it can be concluded that, in general, when more than one player controls the transition, the set of equilibria does not collapse._
**Claim 3.2**.: The product policy \(\mathbf{\pi}_{1}^{\mathbf{\sigma}}\times\mathbf{\pi}_{2}^{\mathbf{\sigma}}\) is not a NE.
Proof.: In general, the value functions of each player 1 and 2 are:
\[V_{1,1}^{\mathbf{\pi}_{1}\times\mathbf{\pi}_{2}}(s_{1})= -\frac{\pi_{1}(a_{1}|s_{1})\pi_{2}(b_{1}|s_{1})}{2}+\frac{\pi_{1} (a_{1}|s_{1})}{20}-\frac{\pi_{1}(a_{1}|s_{2})\left(\pi_{1}(a_{1}|s_{1})\pi_{2} (b_{1}|s_{1})-1\right)}{20},\]
and
\[V_{2,1}^{\mathbf{\pi}_{1}\times\mathbf{\pi}_{2}}(s_{1})= -\frac{\pi_{1}(a_{1}|s_{1})\pi_{2}(b_{1}|s_{1})}{2}+\frac{\pi_{2}(b_{1}|s_{1})}{20}-\frac{\pi_{2}(b_{1}|s_{2})\left(\pi_{1}(a_{1}|s_{1})\pi_{2}(b_{1}|s_{1})-1\right)}{20}.\]
Plugging in \(\mathbf{\pi}_{1}^{\sigma},\mathbf{\pi}_{2}^{\sigma}\) yields \(V_{1,1}^{\mathbf{\pi}_{1}^{\sigma}\times\mathbf{\pi}_{2}^{\sigma}}(s_{1})=V_{2,1}^{ \mathbf{\pi}_{1}^{\sigma}\times\mathbf{\pi}_{2}^{\sigma}}(s_{1})=-\frac{13}{160}\). But, if player 1 deviates to say \(\pi_{1}^{\prime}(s_{1})=\pi_{1}^{\prime}(s_{2})=\begin{pmatrix}0&1\end{pmatrix}\), they get a value equal to 0 which is clearly greater than \(-\frac{13}{160}\). Hence, \(\mathbf{\pi}_{1}^{\sigma}\times\mathbf{\pi}_{2}^{\sigma}\) is not a NE.
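The two numbers in this proof can be reproduced with the same kind of check as before (again, the sketch is ours and only evaluates the printed closed-form expression):

```python
from fractions import Fraction as F

# Player 1's value at s_1 under a product policy (formula from the proof):
# u1 = pi_1(a1|s1), v1 = pi_2(b1|s1), u2 = pi_1(a1|s2).
def V1_prod(u1, v1, u2):
    return -F(1, 2) * u1 * v1 + F(1, 20) * u1 - F(1, 20) * u2 * (u1 * v1 - 1)

half = F(1, 2)
print(V1_prod(half, half, half))   # -13/160: the marginalized product policy
print(V1_prod(F(0), half, F(0)))   # 0: player 1 deviating to always play a2
```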
_In conclusion, Assumption 1 does not suffice to ensure equilibrium collapse._
**Theorem 3.4**.: _There exists a zero-sum polymatrix Markov game (Assumption 2 is not satisfied) that has a CCE which does not collapse to a NE._
Proof.: The proof follows from the game of Example 1, and Claims 3.1 and 3.2.
### Equilibrium collapse in infinite-horizon polymatrix Markov games
In proving equilibrium collapse for infinite-horizon polymatrix Markov games, we use similar arguments and the following nonlinear program with variables \(\mathbf{\pi},\mathbf{w}\),
\[(\text{P}_{\text{NE}})\qquad\begin{array}{rl}\min & \sum_{k\in[n]}\mathbf{\rho}^{\top}\left(\mathbf{w}_{k}-\left(\mathbf{I}-\gamma\,\mathbb{P}(\mathbf{\pi})\right)^{-1}\mathbf{r}_{k}(\mathbf{\pi})\right)\\ \text{s.t.} & w_{k}(s)\geq r_{k}(s,a,\mathbf{\pi}_{-k})+\gamma\,\mathbb{P}(s,a,\mathbf{\pi}_{-k})\mathbf{w}_{k},\quad\forall s\in\mathcal{S},\forall k\in[n],\forall a\in\mathcal{A}_{k};\\ & \mathbf{\pi}_{k}(s)\in\Delta(\mathcal{A}_{k}),\quad\forall s\in\mathcal{S},\forall k\in[n].\end{array}\]
**Computational implications.** Equilibrium collapse in infinite-horizon MG's allows us to use the CCE computation technique found in (Daskalakis et al., 2022) in order to compute an \(\epsilon\)-approximate NE. Namely, given an accuracy threshold \(\epsilon\), we truncate the infinite-horizon game to its _effective horizon_ \(H\coloneqq\frac{\log(1/\epsilon)}{1-\gamma}\). Then, we define reward functions that depend on the time-step \(h\), _i.e._, \(r_{k,h}=\gamma^{h-1}r_{k}\). Finally,
**Corollary 3.2**.: (Computing a NE--infinite-horizon) Given an infinite-horizon switching control zero-sum polymatrix game \(\Gamma\), it is possible to compute an \(\epsilon\)-approximate Nash equilibrium policy that is Markovian and nonstationary with probability at least \(1-\delta\) in time \(\operatorname{poly}\left(n,H,S,\max_{k}|\mathcal{A}_{k}|,\frac{1}{\epsilon},\log(1/\delta)\right)\).
**Proof.** The Corollary follows by (Daskalakis et al., 2022, Section 6.1 and Theorem 4.2).
## 4 Conclusion and open problems
In this paper, we unified switching-control Markov games and zero-sum polymatrix normal-form games. We highlighted how numerous applications can be modelled using this framework, and we focused on the phenomenon of equilibrium collapse from the set of coarse-correlated equilibria to that of Nash equilibria. This property holds implications for computing approximate Nash equilibria in switching control zero-sum polymatrix Markov games; it ensures that this can be done efficiently.
**Open problems.** In light of the proposed problem and our results, there are multiple open questions:
* Is it possible to use a policy optimization algorithm similar to those of (Erez et al., 2022; Zhang et al., 2022) in order to converge to an approximate Nash equilibrium? We note that the question can be settled in one of two ways; _either_ extend the current result of equilibrium collapse to policies that are non-Markovian _or_ guarantee convergence to Markovian policies. The notion of _regret_ in (Erez et al., 2022) gives rise to the computation of a CCE that is a non-Markovian policy in the sense that the policy at every timestep depends on the policy sampled from the history of no-regret play and not only the given state.
* Are there efficient algorithms for computing approximate Nash equilibria in zero-sum polymatrix Markov games if Assumption 2 does not hold? The fact that the set of CCE's does not collapse to the set of NE's when Assumption 2 is violated (Theorem 3.4) does not exclude the possibility of having efficient algorithms.
* We conjecture that a convergence rate of \(O(\frac{1}{T})\) to a NE is possible, _i.e._, there exists an algorithm with running time \(O(1/\epsilon)\) that computes an \(\epsilon\)-approximate NE.
* Are there more classes of Markov games in which computing Nash equilibria is computationally tractable? |
2306.15567 | A Three-Way Knot: Privacy, Fairness, and Predictive Performance Dynamics | As the frontier of machine learning applications moves further into human
interaction, multiple concerns arise regarding automated decision-making. Two
of the most critical issues are fairness and data privacy. On the one hand, one
must guarantee that automated decisions are not biased against certain groups,
especially those unprotected or marginalized. On the other hand, one must
ensure that the use of personal information fully abides by privacy regulations
and that user identities are kept safe. The balance between privacy, fairness,
and predictive performance is complex. However, despite their potential
societal impact, we still demonstrate a poor understanding of the dynamics
between these optimization vectors. In this paper, we study this three-way
tension and how the optimization of each vector impacts others, aiming to
inform the future development of safe applications. In light of claims that
predictive performance and fairness can be jointly optimized, we find this is
only possible at the expense of data privacy. Overall, experimental results
show that one of the vectors will be penalized regardless of which of the three
we optimize. Nonetheless, we find promising avenues for future work in joint
optimization solutions, where smaller trade-offs are observed between the three
vectors. | Tânia Carvalho, Nuno Moniz, Luís Antunes | 2023-06-27T15:46:22Z | http://arxiv.org/abs/2306.15567v1 | # A Three-Way Knot: Privacy, Fairness, and Predictive Performance Dynamics
###### Abstract
As the frontier of machine learning applications moves further into human interaction, multiple concerns arise regarding automated decision-making. Two of the most critical issues are fairness and data privacy. On the one hand, one must guarantee that automated decisions are not biased against certain groups, especially those unprotected or marginalized. On the other hand, one must ensure that the use of personal information fully abides by privacy regulations and that user identities are kept safe. The balance between privacy, fairness, and predictive performance is complex. However, despite their potential societal impact, we still demonstrate a poor understanding of the dynamics between these optimization vectors. In this paper, we study this three-way tension and how the optimization of each vector impacts others, aiming to inform the future development of safe applications. In light of claims that predictive performance and fairness can be jointly optimized, we find this is only possible at the expense of data privacy. Overall, experimental results show that one of the vectors will be penalized regardless of which of the three we optimize. Nonetheless, we find promising avenues for future work in joint optimization solutions, where smaller trade-offs are observed between the three vectors.
Keywords: Synthetic Data · Privacy · Fairness · Predictive Performance
## 1 Introduction
Growing privacy concerns have led to several approaches aiming to preserve the confidentiality of individuals' information. Among the most prevalent approaches to privacy preservation is the process of data synthesis, which mimics the original data while maintaining its global properties [8]. The creation of synthetic data offers a promising avenue, as it generates a protected version of the original data that can be publicly available. Usually, approaches for data synthetization include sampling methods or deep learning-based models [19]. However, despite significant progress in recent years, the challenge of synthetic data generation methods in preserving the confidentiality of personal data and generating unbiased and accurate machine learning models remains an ongoing area of research
and development. The interplay between privacy, fairness, and predictive performance in synthetic data generation is a fundamental issue that requires attention to facilitate the responsible utilization of data in machine learning applications.
This paper explores the dynamics between preserving privacy and improving fairness and predictive performance in machine learning models. First, to address privacy concerns, we apply privacy-preserving techniques for secure data publication, particularly data synthetization methods, where each synthetic data variant is evaluated concerning its re-identification risk. Then, in the evaluation process for fairness and predictive performance, models are trained and optimized for each synthetic data variant using fairness-agnostic (standard machine learning algorithms) and fairness-aware algorithms. Our main goal is to discover the dynamics of optimizing each vector. The experiments conducted in this study use some of the most popular data sets in FAccT4 research [23, 11].
Footnote 4: Acronym for Fairness, Accountability, and Transparency.
The main conclusions of this work indicate that _1)_ solutions that achieve a balance between predictive performance and fairness are only possible at the expense of data privacy, and _2)_ generally, optimizing any of the vectors will impact at least another one, but _3)_ three-way optimization demonstrates promise for future research.
The remainder of the paper is organized as follows. Section 2 includes some preliminaries on privacy and fairness in machine learning and overviews of related work on this topic. The experimental study is described in Section 3, including a description of data, methods, and results. Section 4 discusses such results. Conclusions are provided in Section 5.
## 2 Background
Existing literature outlines the key concepts of identifying and measuring privacy and algorithmic fairness in machine learning applications [10, 24]. In the subsequent sections, we provide concise definitions of relevant background knowledge.
### Privacy
Releasing or sharing data about individuals often implies de-identifying personal information for privacy preservation [8, 30]. Conventional de-identification approaches involve applying privacy-preserving techniques such as generalization or suppression to reduce data granularity or introduce noise to the data causing distortion. These transformations are usually applied to a set of quasi-identifiers, i.e., attributes that, when combined, generate a unique signature that may lead to re-identification (e.g., date of birth, gender, profession, and ethnic group), as well as sensitive attributes like religion and sexual orientation which are highly critical. In the case of synthetic data generation, de-identification is generally performed for all attributes and instances to capture the overall characteristics, creating a new data set through generative models [19].
Even in a de-identified data set, it is crucial to evaluate the privacy risks as it is challenging to know who the intruder is or what information he may possess. The privacy measures depend on the types of disclosure [8]. Identity disclosure is one of the most critical for data privacy. \(k\)-anonymity [28] is the most popular measure indicating how many individuals share the same information concerning a set of quasi-identifiers, defined according to assumptions on an intruder's background knowledge. A record is unique when \(k=1\), meaning an intruder can single it out. Additionally, linking records between different data sets is an approach that allows measuring the probability of re-identification through different data sets. Record linkage [18] is also widely used but focuses on the ability to link records, usually between de-identified data and the original.
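As an illustration, the \(k\) value of each record can be computed directly from the quasi-identifiers; the pandas sketch below is ours and uses placeholder columns.

```python
import pandas as pd

df = pd.DataFrame({
    "age":    [34, 34, 51, 51, 51, 29],
    "gender": ["F", "F", "M", "M", "M", "F"],
    "race":   ["white", "white", "non-white", "non-white", "non-white", "white"],
})
quasi_identifiers = ["age", "gender", "race"]

# k for each record = size of its equivalence class over the quasi-identifiers.
df["k"] = df.groupby(quasi_identifiers)["age"].transform("size")
single_outs = df[df["k"] == 1]   # records an intruder could single out (k = 1)
print(df)
print(f"{len(single_outs)} single-out record(s)")
```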
### Fairness
Diverse approaches to handling fairness address different parts of the model life-cycle. Several methods to enhance fairness have been proposed in the literature, commonly classified into three categories: pre-processing, in-processing, and post-processing. We focus on in-processing methods which involve modifying the machine learning models during training to remove discrimination by incorporating changes into the objective function or imposing a constraint. Adversarial debiasing [17] and exponentiated gradient [1] are prevalent algorithms.
In classification tasks, the most commonly used measures of group fairness include demographic parity [16] and equalized odds [20]. Demographic parity, also known as statistical parity [16], compares the difference in predicted outcome \(\hat{Y}\) between any two groups, \(|P[\hat{Y}=1|S=1]-P[\hat{Y}=1|S\neq 1]|\leq\epsilon\). Better fairness is achieved with a lower demographic parity value, indicating more similar acceptance rates. A limitation of this measure is that a highly accurate classifier may be unfair if the proportions of actual positive outcomes vary significantly between groups. Therefore, the equalized odds measure was proposed to overcome such limitation [20]. This measure computes the difference between the false positive rates \(|P[\hat{Y}=1|S=1,Y=0]-P[\hat{Y}=1|S\neq 1,Y=0]|\leq\epsilon\), and the difference between the true positive rates of two groups \(|P[\hat{Y}=1|S=1,Y=1]-P[\hat{Y}=1|S\neq 1,Y=1]|\leq\epsilon\), where smaller differences between groups indicate better fairness.
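For concreteness, both group-fairness measures can be computed from binary predictions as in the short sketch below (ours; it assumes a binary protected attribute and, for equalized odds, reports the larger of the TPR and FPR gaps, as in common toolkits).

```python
import numpy as np

def demographic_parity_diff(y_pred, s):
    """|P[Y_hat = 1 | S = 1] - P[Y_hat = 1 | S != 1]|."""
    return abs(y_pred[s == 1].mean() - y_pred[s != 1].mean())

def equalized_odds_diff(y_true, y_pred, s):
    """Larger of the absolute TPR and FPR differences between the two groups."""
    gaps = []
    for y in (1, 0):   # y = 1 gives the TPR gap, y = 0 the FPR gap
        rate_1 = y_pred[(s == 1) & (y_true == y)].mean()
        rate_0 = y_pred[(s != 1) & (y_true == y)].mean()
        gaps.append(abs(rate_1 - rate_0))
    return max(gaps)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
s      = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(demographic_parity_diff(y_pred, s), equalized_odds_diff(y_true, y_pred, s))
```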
### Related Work
The increasing interest in synthetic data generation has led to studies on how this type of data protects the individual's privacy and reflects the inherent bias and predictive performance in machine learning applications.
Bhanot et al. [5] proved the presence of unfairness in generated synthetic data sets and introduced two fairness metrics for time series, emphasizing the importance of evaluating fairness at each evaluation step in the synthetic data generation. Additionally, Chang and Shokri [12] have shown that fair algorithms tend to memorize data from the under-represented subgroups, increasing the
model's information leakage about unprivileged groups. Their experiments evaluate how and why fair models leak information on synthetic train data.
Machine learning models' efficiency and fairness have also been investigated using synthetic data generated by differentially private GANs. The experiments conducted by Cheng et al. [14] show that integrating differential privacy does not give rise to discrimination during data generation in subsequent classification models. Still, it unfairly amplifies the influence of majority subgroups. Also, the authors demonstrate that differential privacy reduces the quality of the images generated from the GANs and, consequently, the utility in downstream tasks. Recently, Bullwinkel et al. [7] analyzed the interplay between loss of privacy and fairness in the context of models trained on differentially private synthetic data. The experiments focused on binary classification, showing that a notable proportion of the synthesizers studied deteriorated fairness.
The potential of synthetic data in providing privacy-preserving solutions for several data-related challenges and their important role in striving for fairness in machine learning applications prompted our experiments to center around synthetic data generation. Although there are exciting and promising approaches for synthetic data generation incorporating differential privacy, the current state of software is still in its early stages, and only DP-CGANS [29] is a viable option for our experiments. However, this tool is considerably time-consuming, and due to this limitation, we do not account for differentially private synthetic data.
Privacy-protected data sets have not yet been analyzed, considering the three vectors of privacy, fairness, and predictive performance. Especially, conclusions about the impact of maximizing each of the vectors remain unclear. Moreover, we focus on the risk of re-identification in privacy-protected data sets rather than membership attacks on predictive models (e.g. [12]) - identity disclosure can cause severe consequences for individuals and organizations.
## 3 Experimental Study
In this section, we provide a thorough experimental study focused on the impact of optimization processes for privacy, fairness, and predictive performance in machine learning. We aim to answer the following research questions: what are the impacts associated with optimizing a specific vector (**RQ1**), what are the impacts of prioritizing the remaining vectors (optimization paths) (**RQ2**), and is there a solution capable of providing a balance between the three vectors (**RQ3**)? We describe our experimental methodology in the following sections, briefly describing the data used, methods, and evaluation procedures, followed by the experimental results.
### Data
In this section, we provide an overview of the commonly used data sets for fairness-aware machine learning [23, 11]. A general description of the main characteristics of these data sets is provided in Table 1. The number of attributes
and instances were obtained after cleaning, such as missing data removal. The selection of the protected attributes and quasi-identifiers adheres to previous literature. As fairness measures require protected attributes in a binary form, categorical attributes are grouped; for instance, race={caucasian, africanamerican, hispanic, other} is transformed to race={white, non-white}, and continuous attributes are discretized, e.g., age={<25, >=25}. Such discretization is also defined in the literature and is determined based on privileged and unprivileged groups.
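A minimal sketch of such a binarization (ours; the group boundaries follow the example above):

```python
import pandas as pd

df = pd.DataFrame({"race": ["caucasian", "africanamerican", "hispanic", "other"],
                   "age": [22, 37, 41, 19]})

# Group the categorical protected attribute into privileged / unprivileged values.
df["race_bin"] = df["race"].apply(lambda r: "white" if r == "caucasian" else "non-white")
# Discretize the continuous protected attribute at the threshold used in the literature.
df["age_bin"] = pd.cut(df["age"], bins=[0, 24, 200], labels=["<25", ">=25"])
print(df)
```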
### Methods
In this section, we describe the _i)_ methods used in generating privacy-preserving data variants; _ii)_ the learning algorithms and respective hyper-parametrization optimization details employed to generate models, which include standard machine learning (fairness-agnostic) and fairness-aware algorithms; followed by _iii)_ evaluation metrics used and _iv)_ the overall experimental methodology.
#### 3.2.1 Synthetic Data Variants
The synthetic data variants are obtained using two different approaches, PrivateSMOTE and deep learning-based solutions. PrivateSMOTE [9] generates synthetic cases for highest-risk instances (i.e., single-out) based on randomly weighted interpolation of nearest neighbors. We apply PrivateSMOTE with \(ratio\in\{1,2,3\}\), \(knn\in\{1,3,5\}\) and \(\epsilon\in\{0.1,0.3,0.5\}\), where \(\epsilon\) is the amount of added noise. On the other hand, deep learning-based solutions rely on generative models. For comparison purposes, we only synthesize the single-out instances using conditional sampling. Such instances and all attributes are replaced with new cases. We leverage the Python SDV package [25] to create different deep-learning variants for this aim. The experiments include Copula GAN, TVAE, and CTGAN with the following parameters: \(epochs\in\{100,200\}\), \(batch\_size\in\{50,100\}\) and \(embedding\_dim\in\{12,64\}\). Each set of parameters produces a different synthetic data variant.
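The deep learning-based grid can be enumerated as in the sketch below (ours); the actual fitting and conditional re-sampling of the single-out instances is left as a comment because the call depends on the installed SDV version.

```python
from itertools import product

param_grid = {
    "epochs": [100, 200],
    "batch_size": [50, 100],
    "embedding_dim": [12, 64],
}
models = ["CopulaGAN", "TVAE", "CTGAN"]

variants = []
for model_name in models:
    for epochs, batch_size, embedding_dim in product(*param_grid.values()):
        params = {"epochs": epochs, "batch_size": batch_size,
                  "embedding_dim": embedding_dim}
        # Here one would fit the chosen SDV synthesizer on the training split and
        # conditionally sample replacements for the single-out instances only.
        variants.append((model_name, params))

print(len(variants), "deep learning-based parameter combinations")   # 3 models x 8 grids = 24
```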
#### 3.2.2 Learning Algorithms
There are two types of algorithms used in our experimental evaluation: standard machine learning algorithms (fairness-agnostic) and fairness-aware algorithms. Concerning the former, we leverage three classification algorithms through the _Scikit-learn_ [26] toolkit: Random Forest [21], XGBoost [13], and Logistic Regression [2]. Final models for each algorithm are chosen via grid search, based on a 2*5-fold cross-validation estimation of evaluation scores. For fairness mitigation, we use FairMask [27] and the exponentiated gradient method from Fairlearn [6]. Table 2 summarizes this information.
| Data set | # Instances | # Attributes | Domain | Quasi-identifiers | Protected attributes |
| --- | --- | --- | --- | --- | --- |
| _Adult_ | 48,842 | 15 | Finance | Education, age, gender, race, occupation, native country | Gender, race, age |
| _German Credit_ | 1,000 | 22 | Finance | Purpose, years of employment, age, years of residence, job, gender, foreign worker | Gender, age |
| _Bank marketing_ | 45,211 | 17 | Finance | Age, job, marital, education, housing | Age, marital |
| _Credit card clients_ | 30,000 | 24 | Finance | Gender, education, marriage, age | Gender, marriage, education |
| _COMPAS_ | 6,172 | 34 | Criminology | Gender, age, race, recidivism | Gender, race |
| _Heart disease_ | 1,025 | 14 | Healthcare | Gender, age, heart rate, chest pain | Gender |
| _Ricci_ | 118 | 6 | Social | Position, race, combined score | Race |

Table 1: General description of the used data sets in the experimental study.
**Evaluation.** All synthetic data variants are evaluated in terms of re-identification risk, fairness, and predictive performance.
To assess the potential of re-identification, we use the _Python Record Linkage Toolkit_[15] to compare each variant with the original considering a specified set of quasi-identifiers. In this study, we focus on exact matches, where all values for the quasi-identifiers match, resulting in a 100% likelihood of re-identification. Such comparisons are carried out in the sets of single-out instances. In the learning phase, we use equalized odds difference for fairness evaluation and Accuracy for predictive performance concerning the testing data.
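The exact-match comparison can equivalently be expressed as a join on the quasi-identifiers; the sketch below (ours) returns the share of original single-out records that reappear verbatim in a protected variant.

```python
import pandas as pd

def exact_match_risk(original_single_outs, protected, quasi_identifiers):
    """Share of original single-out records matched exactly on all quasi-identifiers."""
    matches = original_single_outs.merge(
        protected[quasi_identifiers].drop_duplicates(),
        on=quasi_identifiers, how="inner")
    return len(matches) / max(len(original_single_outs), 1)

orig = pd.DataFrame({"age": [29, 51], "gender": ["F", "M"], "job": ["nurse", "clerk"]})
prot = pd.DataFrame({"age": [29, 40], "gender": ["F", "M"], "job": ["nurse", "clerk"]})
print(exact_match_risk(orig, prot, ["age", "gender", "job"]))   # 0.5
```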
#### 3.2.3 Experimental Methodology
For conciseness, our experimental methodology is illustrated in Figure 1. The experimental study begins by splitting each original data into training and test sets corresponding to 80% and 20%, respectively. Then, we generate several synthetic data variants using the training data set for privacy constraints, in which re-identification risk is evaluated by comparing each synthetic data variant to the original data. Then, models are generated using both fairness-agnostic and fairness-aware algorithms. After the training phase, out-of-sample predictive performance and fairness of the models are measured.
### Experimental Results
The following set of results refers to the probability of each optimized vector winning or losing when compared to the remaining vectors. We construct optimization paths, as demonstrated in Figure 2, to analyze the relevance of prioritizing a specific vector. To calculate the probabilities, we select the best models estimated via cross-validation and evaluate them out-of-sample. Each solution, i.e., privacy-protected data variant outcome, is compared to a baseline. For visual purposes, "A@", "FM@" and "FL@" refer to the fairness-agnostic and fairness-aware algorithms, namely, Agnostic, FairMask, and Fairlearn.
| Algorithm | Parameters |
| --- | --- |
| Random Forest | n_estimators ∈ {100, 250, 500}, max_depth ∈ {4, 7, 10} |

Table 2: Learning algorithms and respective hyper-parameter grids.
\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l}\text{l} \text{l
**Predictive performance vector.** The left image in Figure 3 shows the probabilities of each solution's fairness winning and losing against the baseline as well as the probabilities of privacy winning and losing, while the right image shows the reverse path. Such a baseline corresponds to the best solution in terms of predictive performance for each data set. Note that the probabilities refer to the percentage of cases in the total number of privacy-protected data variants for each fairness-agnostic and fairness-aware algorithm. Results show that models optimized towards predictive performance demonstrate a balanced probability of winning or losing w.r.t. fairness. Also, when the models outperform the baseline, the solutions tend to be more private; however, the opposite is not necessarily true: the reverse path shows that, except for PrivateSMOTE, all synthetic approaches outperform the baseline in terms of privacy. Therefore, optimizing predictive performance results in losses for privacy, but it is possible to attain fairer models to a certain extent. Additionally, we observe that Fairlearn leads to fairer and more private solutions.
**Fairness vector.** Concerning this vector, Figure 4 illustrates the probabilities of each solution's predictive performance winning or losing compared to the baseline (best solution in terms of equalized odds) with the respective probabilities of privacy winning, and vice-versa. A notable outcome is that several solutions outperform the baseline in terms of predictive performance while presenting the same probability of privacy wins. In the reverse path, the majority of solutions present a lower re-identification risk compared to the baseline. Besides, the models for such solutions present a probability equal to or higher than 50% of improving predictive performance.
Figure 2: Optimization paths for each vector.
in terms of predictive performance with the same probability of privacy wins. In the reverse path, the majority of solutions present a lower re-identification risk compared to the baseline. Besides, the models for such solutions present a probability equal to or higher than 50% of improving predictive performance.
**Privacy vector.** Lastly, Figure 5 illustrates the total wins and losses of each solution's predictive performance compared to the baseline (best solution in terms of re-identification risk) along with the respective probabilities of fairness winning, and vice-versa. In this scenario, the baseline has a probability greater than 50% of outperforming the remaining solutions w.r.t. predictive performance. On the other hand, when we prioritize fairness, the baseline tends to lose.
**All vectors optimized.** In the previous set of results, we showed the impacts of optimizing a single vector. However, an optimal solution should maintain a balance across all the vectors. Therefore, we aim to analyze to what extent it is possible to obtain such a balance. Figure 6 provides a comparison reporting the statistical tests using the Bayes Sign Test [3, 4] with a ROPE interval of [-1%, 1%] to
Figure 4: Fairness optimization paths. Total wins/losses comparing each solution to the baseline (best solution in terms of equalized odds) for predictive performance along with the respective wins for privacy (left) and vice-versa (right).
Figure 3: Predictive performance optimization paths. Total wins/losses comparing each solution to the baseline (best solution in terms of Accuracy) for fairness along with the respective wins for privacy (left) and vice-versa (right).
evaluate the statistical significance concerning the percentage difference for each vector optimization. This percentage is defined as \(\frac{R_{a}-R_{b}}{R_{b}}*100\) where \(R_{a}\) is the solution under comparison and \(R_{b}\) is the baseline. ROPE (Region of Practical Equivalence) [22] is used to specify the probability of the difference of values being inside a specific interval as having practically no effect. If the percentage difference is within the specified range, they are of practical equivalence (draw), and if the percentage difference is less than -1%, \(b\) outperforms solution \(a\) (lose). Such a baseline corresponds to the best solutions for each vector while the solutions under comparison correspond to the ones with the best average rank across the three vectors for each data set.
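The per-comparison bookkeeping behind Figure 6 reduces to this percentage difference plus the ROPE check; a minimal sketch (ours, assuming higher metric values are better) is:

```python
def percentage_difference(r_a, r_b):
    """100 * (R_a - R_b) / R_b, comparing solution a against baseline b."""
    return 100.0 * (r_a - r_b) / r_b

def rope_outcome(r_a, r_b, rope=1.0):
    """'draw' inside [-rope, rope]%; 'lose' if a is worse than b by more than rope%."""
    diff = percentage_difference(r_a, r_b)
    if abs(diff) <= rope:
        return "draw"
    return "win" if diff > rope else "lose"

print(rope_outcome(0.807, 0.810))   # draw: within the region of practical equivalence
print(rope_outcome(0.700, 0.810))   # lose
```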
Figure 6 shows the comparisons for each synthetic approach between the optimal solutions for each vector and the solutions with the best-averaged rank across all vectors. Concerning predictive performance, the average rank solutions' models are, for the most part, capable of providing practical equivalence to the optimal solutions of this vector with a probability higher than 50%. Although the privacy vector presents higher losses for some solutions, A@CopulaGAN and A@CTGAN stand out with a probability of drawing to the baseline greater than 80%. However, the models for these solutions are less fair. Additionally, such an outcome shows that it may be possible to obtain a balance between the three vectors, through TVAE-based solutions.
## 4 Discussion
Given the results of the experimental evaluation presented above, the conclusions imply that developing safe machine learning applications, i.e., ones that protect individual data privacy and prevent disparate treatment with a negative impact on protected groups, may face even greater challenges than currently construed. This leads to questions that require attention in future work:
* **(RQ1)** A notable finding for future work is the relationship between the different optimization vectors. Figures 3 and 4 show that many of the solutions tested
Figure 5: Privacy optimization paths. Total wins/losses comparing each solution to the baseline (best solution in terms of re-identification risk) for predictive performance, and respective wins for fairness (left) and vice-versa (right).
that do not improve their respective optimization vector still exhibit a considerable ability to reduce re-identification risk when privacy is the priority in the optimization path. Also, we observe in Figures 3 and 5 that optimizing predictive performance or privacy shows a similar impact on fairness.
* **(RQ2)** Although it is unlikely to obtain a solution with practically no losses, the optimization paths allow us to find which vector should be prioritized to prevent higher losses. When optimizing privacy (Figure 5), and prioritizing predictive performance, we observe that the majority of the solutions maintain the ability to produce accurate models. Such an outcome shows that it is possible to obtain private solutions with minimal impact on predictive performance but this comes at the expense of fairness.
* **(RQ3)** Nevertheless, finding a solution that balances the three vectors is crucial. However, as shown in Figure 6, it is, in general, very improbable to achieve a good balance between all vectors. Despite our experiments demonstrating that TVAE-based solutions are, to a certain degree, capable of obtaining such a balance, that does not happen for the remaining approaches.
## 5 Conclusion
This paper thoroughly analyzes the dynamics between privacy, fairness, and predictive performance by assessing the impact of optimizing a single vector on the remaining vectors. We generate multiple privacy-protected data variants from the original data using synthetization methods and evaluate each variant
Figure 6: Proportion of probability for each candidate solution drawing or losing significantly against the solution with the best-averaged rank between the three vectors, according to the Bayes Sign Test.
in terms of privacy w.r.t re-identification risk but also fairness and predictive performance for both fairness-agnostic and fairness-aware algorithms.
The main conclusions indicate that in single vector optimization, the remaining vectors will suffer from losses. Nevertheless, optimizing privacy and prioritizing predictive performance allows for obtaining private solutions while maintaining the predictive performance intact. However, it is difficult to navigate a balance between the three vectors. These results highlight the importance of further developments in discriminatory bias when the goal is to release or share personal information. For future work, we plan to analyze the effects of data preparation on fairness as the presence of inherent biases in a data set may pose challenges in achieving fairer models [31]. The Python code and data necessary to replicate the results shown in this paper are available at _[https://tinyurl.com/yku3s7du_](https://tinyurl.com/yku3s7du_).
|
2310.17496 | Tackling Interference Induced by Data Training Loops in A/B Tests: A
Weighted Training Approach | In modern recommendation systems, the standard pipeline involves training
machine learning models on historical data to predict user behaviors and
improve recommendations continuously. However, these data training loops can
introduce interference in A/B tests, where data generated by control and
treatment algorithms, potentially with different distributions, are combined.
To address these challenges, we introduce a novel approach called weighted
training. This approach entails training a model to predict the probability of
each data point appearing in either the treatment or control data and
subsequently applying weighted losses during model training. We demonstrate
that this approach achieves the least variance among all estimators that do not
cause shifts in the training distributions. Through simulation studies, we
demonstrate the lower bias and variance of our approach compared to other
methods. | Nian Si | 2023-10-26T15:52:34Z | http://arxiv.org/abs/2310.17496v5 | # Tackling Interference Induced by Data Training Loops in A/B Tests: A Weighted Training Approach
###### Abstract
The standard data-driven pipeline in contemporary recommendation systems involves a continuous cycle in which companies collect historical data, train subsequently improved machine learning models to predict user behavior, and provide improved recommendations. The user's response, which depends on the recommendation produced in this cycle, will become future training data, and so on. However, these data training-recommendation cycles can introduce interference in A/B tests, where data generated by control and treatment algorithms, potentially with different distributions, are aggregated together. To address these challenges, we introduce a novel approach called weighted training. This approach entails training a model to predict the probability of each data point appearing in either the treatment or control data and subsequently applying weighted losses during model training. We demonstrate that this approach achieves the least variance among all estimators without causing shifts in the training distributions. Through simulation studies, we demonstrate the lower bias and variance of our approach compared to other methods.
## 1 Introduction
Experimentation (A/B tests) has emerged as the gold standard for evaluating feature and algorithmic updates in online platforms; see comprehensive guidance in Kohavi et al. (2020). Instances of the use of A/B tests abound and are wide-ranging, from testing new pricing strategies in e-commerce, evaluating bidding strategies in online advertising, and updating and fine-tuning ranking algorithms in video-sharing platforms, just to name a few.
In such online platforms, recommendation systems are also in place to enhance user experience by displaying relevant products and engaging videos. The standard pipeline in recommendation systems operates as follows (as illustrated in Figure 1):
1. Using historical data, the system trains various machine-learning models to predict users' behaviors, such as their interest in recommended items and their willingness to purchase certain products.
2. When a user request is received, the system identifies relevant items and ranks them based on the training scores generated by the machine learning models.
3. Then, the system recommends items to users based on ranking.
4. Users interact with the recommended items and take actions, including leaving comments below videos and making specific purchases.
5. The system records these user actions and feeds them back into the machine learning models, facilitating continuous model training.
This pipeline ensures that the recommendation system continuously adjusts and enhances its suggestions, taking into account user interactions and feedback. However, it also generates a feedback loop, a phenomenon discussed in both Jadidinejad et al. (2020) and Chaney et al. (2018). As we will demonstrate later, this feedback loop causes interference in A/B tests.
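A stylized version of this loop (train on the log, rank with the current model, record the user's response, retrain) can be written in a few lines; the sketch below is a toy illustration of ours, not the platform's actual system.

```python
import random

random.seed(0)
true_rate = {"short_video": 0.8, "long_video": 0.6}                  # toy ground truth
log = [("short_video", 1), ("long_video", 1), ("long_video", 0)]     # historical data

def train(log):
    """Step 1: estimate each item's engagement rate from the logged data."""
    est = {}
    for item in true_rate:
        outcomes = [y for i, y in log if i == item]
        est[item] = sum(outcomes) / len(outcomes) if outcomes else 0.5
    return est

for request in range(1000):
    model = train(log)                                   # Step 5 feeds Step 1
    item = max(model, key=model.get)                     # Steps 2-3: rank and recommend
    outcome = int(random.random() < true_rate[item])     # Step 4: user action
    log.append((item, outcome))                          # Step 5: log for retraining

print(train(log))   # estimates are driven by the data the loop itself generated
```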
Interference, in the context of experimental design, means the violation of the Stable Unit Treatment Value Assumption (SUTVA) (Imbens and Rubin, 2015). According to SUTVA, the outcome for a given unit should solely depend on its treatment assignment and its own characteristics, and it should remain unaffected by the treatment assignments of other units. However, when data
Figure 1: A standard pipeline in recommendation system
training loops are present, prior data generated under specific treatment assignments can lead to distinct model predictions. These predictions, in turn, can influence the outcomes observed for subsequent units, thereby violating the assumptions of SUTVA.
More specifically, let's consider a user-side experiment testing two distinct ranking algorithms. In this scenario, we split the traffic in such a way that control users are subjected to control algorithms, and test users are subjected to test algorithms. Both control and test algorithms generate data that may follow different distributions. These data sets are then combined and fed back into the machine learning models. This experimental procedure is visually represented in Figure 2.
However, it's essential to recognize that this pooled distribution is distinct from both the control data and the treatment data distributions. It is widely acknowledged that variations in training distributions can lead to significantly different predictions. To further illustrate this issue, let's consider the following example.
**Example 1** (Experimenting parameters of fusion formulas).: Imagine a video-sharing platform with two distinct machine learning models that predict finishing rates (FR) and stay durations (SD), respectively. The platform's ranking algorithms rank videos using a linear fusion formula:
\[\alpha_{1}\text{FR}+\alpha_{2}\text{SD}.\]
In an A/B test, we aim to compare different parameter values \(\{\alpha_{1},\alpha_{2}\}\). Let us consider a scenario where the platform hosts two types of videos: short videos, which typically have high finishing rates and low stay durations, and long videos, which exhibit the opposite characteristics. If the treatment algorithm assigns a higher \(\alpha_{2}\) to stay durations than the control algorithm, it will recommend more long videos in the treatment group. As a result, in the A/B tests, there will be a higher proportion of long videos in the pooled distribution. This can lead to different estimates of finishing rates and stay durations by the machine learning models, subsequently altering the recommendation outcomes produced by both the control and treatment algorithms.
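The mechanism in Example 1 is easy to see in a toy simulation (ours; the numbers are made up): with a larger weight \(\alpha_{2}\) on stay duration, the treatment ranker surfaces more long videos, so the pooled training data over-represents them relative to what either arm alone would have produced.

```python
import random

random.seed(1)

def recommend(alpha1, alpha2, candidates):
    # Rank by the linear fusion formula alpha1 * FR + alpha2 * SD; return the top video.
    return max(candidates, key=lambda v: alpha1 * v["FR"] + alpha2 * v["SD"])

def sample_candidates():
    short = {"type": "short", "FR": random.uniform(0.7, 0.9), "SD": random.uniform(5, 15)}
    long_ = {"type": "long",  "FR": random.uniform(0.2, 0.4), "SD": random.uniform(30, 60)}
    return [short, long_]

control   = [recommend(1.0, 0.01, sample_candidates())["type"] for _ in range(1000)]
treatment = [recommend(1.0, 0.05, sample_candidates())["type"] for _ in range(1000)]
pooled = control + treatment

for name, shown in [("control", control), ("treatment", treatment), ("pooled", pooled)]:
    print(name, "share of long videos:", shown.count("long") / len(shown))
```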
This interference caused by data training loops closely relates to the concept of "symbiosis bias" recently introduced in Holtz et al. (2023). In their paper, they propose cluster randomized designs
Figure 2: An A/B testing procedure
and data-diverted designs. Through simulations, they demonstrate that these designs can effectively reduce biases compared to the naive approach.
In this paper, we introduce a weighted training approach. The concept revolves around recognizing that a control data point may also appear in the treatment data with a different probability. To harness this insight, we create a new model that predicts the probability of each data point appearing in either the treatment or control data. Subsequently, we train the machine learning models using losses that are weighted based on these predicted probabilities. By doing so, we demonstrate that if the weights are accurately learned, there will be no shifts in the training distributions, while making the most efficient use of available data.
The rest of the paper is organized as follows: Section 2 discusses related literature on interference in A/B tests. Section 3 introduces a potential outcome framework modeling interference caused by data training loops. Section 4 presents our weighted training approach along with theoretical justification. Section 5 showcases extensive simulation studies to demonstrate the performance of our proposed approach. Finally, we conclude with future works in Section 6.
## 2 Related Literature
### Interference in Experiments
The existence of interference is well-known in the literature. Empirical studies (Blake and Coey, 2014; Holtz et al., 2020; Fradkin, 2015) validate that the bias caused by the interference could be as large as the treatment effect itself. In the following, we review the literature on various types of interference in A/B tests.
**Interference in two-sided marketplaces.** In two-sided marketplaces, A/B tests are subject to interference due to competition and spillover effects. Johari et al. (2022) and Li et al. (2022) analyze biases in both user-side and supply-side experiments using stylized models. Additionally, Bright et al. (2022) consider a matching mechanism based on linear programming and propose debiased estimators via shadow prices. To mitigate bias, Johari et al. (2022) and Bajari et al. (2021) introduce two-sided randomizations, which are also known as multiple randomization designs. Bipartite experiments are also introduced in Eckles et al. (2017), Pouget-Abadie et al. (2019), Harshaw et al. (2023), where the treatments are assigned in one group of units and the metrics are measured in another group of units. Cluster experiments can also be applied in marketplaces, as shown in Holtz et al. (2020), Holtz and Aral (2020). Building on an equilibrium model, Wager and Xu (2021) propose a local experimentation approach capable of accurately estimating small changes in system parameters. Additionally, this idea has been extended by Munro et al. (2021), who combined it with Bernoulli experiments to estimate treatment effects of a binary intervention. For supply-side (seller-side) experiments, Ha-Thuc et al. (2020) and Nandy et al. (2021) put forth a counterfactual interleaving framework widely implemented in the industry and Wang and Ba (2023) enhance the design with a novel tie-breaking rule to guarantee consistency and monotonicity. In the context of advertising experiments, Liu et al. (2021) propose a budget-split design and Si et al. (2022) use a weighted local
linear regression estimation in situations where the budget is not perfectly balanced between the treatment and control groups.
**Interference induced by feedback loops.** Feedback loops commonly exist in complex systems. For instance, in the context of our earlier discussion in the Introduction, data obtained from recommendations is fed back into the underlying machine learning models. In online advertising platforms, the ads shown previously can impact the subsequent ads' recommendations and bidding prices, primarily due to budget constraints. However, there is relatively limited literature that delves into experimental design dealing with interference caused by feedback loops. To the best of our knowledge, Goli et al. (2023) represent the first attempt to address such interference, offering a bias-correction approach that utilizes past A/B tests. In the context of search ranking systems, Musgrave et al. (2023) suggest the use of query-randomized experiments to mitigate feature spillover effects. Additionally, for testing bandit learning algorithms, Guo et al. (2023) propose a two-stage experimental design to estimate lower and upper bounds on the treatment effects. Recently, Zhu et al. (2023) specifically study the challenges of the counterfactual interleaving design (Ha-Thuc et al., 2020; Nandy et al., 2021) under interference induced by feedback loops. Furthermore, as mentioned earlier, Holtz et al. (2023) explore similar issues to ours, which they refer to as "Symbiosis Bias."
**Markovian interference.** When a treatment can influence underlying states, subsequently affecting outcomes in the following periods, we refer to these experiments as being biased by Markovian interference. A classic example is experimentation with different matching or pricing algorithms in ride-sharing platforms. Farias et al. (2022) proposes a difference-in-Q estimator for simple Bernoulli experiments, and its performance is further validated through a simulation study with Douyin (Farias et al., 2023). Moreover, leveraging Markov decision processes, optimal switchback designs have been analyzed in depth by Glynn et al. (2020) and Hu and Wager (2022). In the specific context of queuing, Li et al. (2023) have conducted a study on switchback experiments and local perturbation experiments. They have discovered that achieving higher efficiency is possible by carefully selecting estimators based on the structural information of the model.
**Temporal interference.** Temporal interference arises when there are carry-over effects. Extensive investigations have been conducted on switchback experiments (Bojinov et al., 2023; Hu and Wager, 2022; Xiong et al., 2023). Besides switchback experiments, other designs (Basse et al., 2023; Xiong et al., 2019) have also been proposed and proven to be optimal in various contexts. In cases involving both spatial and temporal interference, the new designs proposed in Ni et al. (2023) combine both switchback experiments and clustering experiments.
**Network interference.** Network interference is frequently observed in social networks, where a treatment unit's actions may have spillover effects on their friends or neighbors. A substantial body of research has looked into experimental design and causal inference under network interference, with notable contributions from scholars such as Hudgens and Halloran (2008), Gui et al. (2015), and Li and Wager (2022), among others. Specifically, the design of graph cluster experiments, which involves partitioning the graph into nearly disjoint clusters, has been extensively investigated. This research area has seen contributions from many researchers (Aronow and Samii, 2017; Candogan et al., 2023; Ugander et al., 2013; Ugander and Yin, 2023). For a comprehensive review of various approaches to address network interference, we refer readers to Section 3 of Yu et al. (2022).
In addition to the papers mentioned above, interference has also been studied in other specialized settings. For instance, Chawla et al. (2016); Basse et al. (2016) and Liao and Kroer (2023) focus on experimental design in auctions. Han et al. (2023) employ roll-outs, a technique commonly implemented in experiments, to detect interference. Additionally, Boyarsky et al. (2023) demonstrate that roll-outs can also help estimation under stronger assumptions.
### Feedback Loops in Recommendation Systems
As modern platforms increasingly employ complex recommendation systems, issues arising from feedback loops are becoming more pronounced. Researchers such as Chaney et al. (2018); Mansoury et al. (2020), and Krauth et al. (2022) have investigated problems related to the amplification of homogeneity and popularity biases due to feedback loops. Additionally, Yang et al. (2023) and Khenissi (2022) have noted that these feedback loops can lead to fairness concerns. The concept of user feedback loops and methods for debiasing them are discussed in Pan et al. (2021), while Jadidinejad et al. (2020) consider how these feedback loops affect underlying models. In our work, we specifically focus on data training feedback loops and propose valid methods to address their impact on A/B tests.
## 3 A Framework of A/B Tests Interfered by Data Training Loops
In this section, we construct a potential outcomes model (Imbens and Rubin, 2015) for A/B tests that incorporate the training procedures. Through our model, we will demonstrate the presence of interference induced by data training loops in A/B tests.
We focus on user-side experiments, where users are randomly assigned to the treatment group with probability \(p\) and to the control group with probability \(1-p\).
Suppose there are \(d\) features associated with each user-item pair, and the system needs to predict \(m\) different types of user behaviors (e.g., finishing rates, stay durations). We represent the feature space as \(\mathcal{X}\), which is a subset of \(\mathbb{R}^{d}\), and the outcome space as \(\mathcal{Y}\), which is a subset of \(\mathbb{R}^{m}\). In modern large-scale recommendation systems, \(d\) can be as large as billions, and \(m\) can encompass hundreds of different behaviors. We define a model class \(\mathcal{M}=\{M_{\theta},\theta\in\Theta\}\), which includes various models \(M_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\). These models are responsible for predicting user behaviors based on user-item features. In this representation, we consolidate the prediction of \(m\) distinct user behaviors into a single model, which yields an \(m\)-dimensional output for the sake of simplicity and convenience. In subsequent discussions, we will omit the subscript \(\theta\) for ease of notation.
At time \(t\), the training model \(M_{t}\) is trained from the previous model \(M_{t-1}\) with additional data
from time \(t-1\), denoted as \(\mathcal{D}_{t-1}\). This training process can be written as
\[M_{t}=\digamma(M_{t-1},\mathcal{D}_{t-1})\,,\]
where \(\digamma\) denotes a training algorithm, e.g. stochastic gradient descent (SGD) or Adam (Kingma and Ba, 2014).
Further, at time \(t\), we suppose that \(n_{t}\) new users have arrived. For the \(i\)-th user, \(i=1,2,\ldots,n_{t}\), the system recommends an item with a feature vector \(X_{i,t}=X_{i,t}\left(M_{t},Z_{i,t}\right)\in\mathbb{R}^{d}\), where \(Z_{i,t}\in\{0,1\}\) denotes the treatment assignment. Subsequently, the potential outcome for this user is given as \(Y_{i,t}=Y_{i,t}\left(X_{i,t}\right)\in\mathcal{Y}\), which represents the user's behaviors. Note that \(Y_{i,t}\) is independent of \(Z_{i,t}\) and \(M_{t}\), given the feature vector \(X_{i,t}\). This assumption is grounded in the typical behavior of recommendation systems, where the primary influence on users' behaviors stems from the modification of recommended items. Thus, \(Y_{i,t}\) is not directly dependent on the treatment assignment \(Z_{i,t}\) or the model state \(M_{t}\) once the features \(X_{i,t}\) are accounted for. We remark that our approach can be readily extended to cases where the treatment variable \(Z\) directly affects the outcome \(Y\), as we shall see in Lemma 1. Due to the data training loops, the data collected at time \(t\) is incorporated into the training dataset as follows:
\[\mathcal{D}_{t}=\left\{\left(X_{1,t},Y_{1,t}\right),\left(X_{2,t},Y_{2,t} \right),\ldots,\left(X_{n_{t},t},Y_{n_{t},t}\right)\right\}.\]
We plot the causal graph (Pearl, 2000) in Figure 3 to illustrate the dependence in the data training loops.
It's important to note that \(\mathcal{D}_{t}\) consists of recommendation data, which may differ from the control and treatment data. Consequently, when applying the training algorithm \(\digamma\), the model at the next time step, \(M_{t+1}\), will differ from the model trained solely on control or treatment data. This, in turn, impacts the recommendations \(X_{\cdot,t+1}\) at the subsequent period. Therefore, it becomes evident that these A/B tests are susceptible to interference caused by data training loops.
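To make the loop concrete, the following minimal sketch illustrates one period \(t\) of a naive A/B test: both arms share the same model, which is then updated on the pooled data \(\mathcal{D}_t\), so one user's assignment influences the model, and hence the recommendations, seen by all users in the next period. The helper names `recommend`, `observe`, and `train_step` are hypothetical stand-ins for the platform's recommendation, outcome, and training operators.

```python
import numpy as np

def one_period_naive(model, users, p, recommend, observe, train_step):
    """One period t of a naive A/B test with a shared model.

    `recommend(model, user, z)` returns the chosen item's feature vector X,
    `observe(x)` returns the outcome Y, and `train_step(model, data)` plays the
    role of the training operator F (e.g., one SGD pass); all three are assumed
    to be supplied by the platform.
    """
    data = []
    for user in users:
        z = np.random.binomial(1, p)        # treatment assignment Z_{i,t}
        x = recommend(model, user, z)       # X_{i,t}(M_t, Z_{i,t})
        y = observe(x)                      # Y_{i,t}(X_{i,t})
        data.append((x, y, z))
    # Data training loop: the *pooled* data D_t updates the single shared model,
    # so M_{t+1} differs from a model trained only on control or treatment data.
    model = train_step(model, [(x, y) for x, y, z in data])
    return model, data
```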
Our objective is to estimate the global treatment effect (GTE), which is defined as the difference between the metrics observed under the global treatment and the global control regimes. The global treatment regime is defined as having all \(Z_{i,t}\) equal to one, while the global control regime is defined as having all \(Z_{i,t}\) equal to zero.
Figure 3: Dependence of different objects in the data training loops, where we omit the subscript \(i\) for simplicity
In mathematical terms, we represent this as follows: within the global treatment regime, the procedure is outlined as:
\[X_{i,t}^{\text{GT}} = X_{i,t}\left(M_{t}^{\text{GT}},1\right),Y_{i,t}^{\text{GT}}=Y_{i, t}\left(X_{i,t}^{\text{GT}}\right),\] \[\mathcal{D}_{t}^{\text{GT}} = \left\{\left(X_{1,t}^{\text{GT}},Y_{1,t}^{\text{GT}}\right), \left(X_{2,t}^{\text{GT}},Y_{2,t}^{\text{GT}}\right),\ldots,\left(X_{n_{t},t}^ {\text{GT}},Y_{n_{t},t}^{\text{GT}}\right)\right\}\] \[M_{t}^{\text{GT}} = \digamma\left(M_{t-1}^{\text{GT}},\mathcal{D}_{t-1}^{\text{GT}} \right),\;\;\text{for}\;t=1,\ldots,T;\]
Similarly, within the global control regime, we have:
\[X_{i,t}^{\text{GC}} = X_{i,t}\left(M_{t}^{\text{GC}},0\right),Y_{i,t}^{\text{GC}}=Y_{i,t}\left(X_{i,t}^{\text{GC}}\right),\] \[\mathcal{D}_{t}^{\text{GC}} = \left\{\left(X_{1,t}^{\text{GC}},Y_{1,t}^{\text{GC}}\right), \left(X_{2,t}^{\text{GC}},Y_{2,t}^{\text{GC}}\right),\ldots,\left(X_{n_{t},t}^ {\text{GC}},Y_{n_{t},t}^{\text{GC}}\right)\right\},\] \[M_{t}^{\text{GC}} = \digamma\left(M_{t-1}^{\text{GC}},\mathcal{D}_{t-1}^{\text{GC}} \right),\;\;\text{for}\;t=1,\ldots,T,\]
Here, we assume \(\mathcal{D}_{0}^{\text{GC}}=\mathcal{D}_{0}^{\text{GT}}\) and \(M_{0}^{\text{GC}}=M_{0}^{\text{GT}}\). The \(m\)-dimensional GTE is defined as
\[\text{GTE}=\mathbb{E}\left[\frac{1}{\sum_{t=1}^{T}n_{t}}\sum_{t=1}^{T}\sum_{i =1}^{n_{t}}\left(Y_{i,t}^{\text{GT}}-Y_{i,t}^{\text{GC}}\right)\right].\]
In the naive A/B tests, the estimator is
\[\frac{1}{\sharp\left\{Z_{i,t}=1\right\}}\sum_{Z_{i,t}=1}Y_{i,t}\left(X_{i,t} \left(M_{t},1\right)\right)-\frac{1}{\sharp\left\{Z_{i,t}=0\right\}}\sum_{Z_{i,t}=0}Y_{i,t}\left(X_{i,t}\left(M_{t},0\right)\right), \tag{1}\]
where \(\sharp\left\{Z_{i,t}=1\right\}\) and \(\sharp\left\{Z_{i,t}=0\right\}\) are the numbers of users in the treatment and control groups, respectively. Because of the interference induced by data training loops, this estimator can be biased for the GTE.
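For reference, the naive estimator (1) is simply a difference of group means over the logged outcomes; a minimal numpy sketch, assuming the logged outcomes and assignments are stacked into arrays, is:

```python
import numpy as np

def naive_estimator(Y, Z):
    """Difference-in-means estimator (1).

    Y : array of shape (n, m) with observed outcomes (e.g., FR and SD),
    Z : array of shape (n,) with treatment assignments in {0, 1}.
    Returns the m-dimensional naive treatment-effect estimate.
    """
    Y, Z = np.asarray(Y, dtype=float), np.asarray(Z)
    return Y[Z == 1].mean(axis=0) - Y[Z == 0].mean(axis=0)
```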
## 4 A Weighted Training Approach
Based on the potential outcome model established in Section 3, it becomes apparent that interference arises due to shifts in the training distributions. In this section, we will introduce an approach that assigns weights to the original data distributions obtained from the A/B tests. We will demonstrate that these weighted distributions have the capability to recover the data distributions for the control group and the treatment group.
In abstract terms, working in a probability space \(\left(\Omega,\mathcal{F},P\right)\), let \(D=\left(X,Y\right)\) be a random variable representing data in \(\mathcal{X}\times\mathcal{Y}\). Specifically, let \(D_{C}=\left(X_{C},Y_{C}\right)\) and \(D_{T}=\left(X_{T},Y_{T}\right)\) be the random variables representing control data and treatment data, respectively. We use \(\mathcal{D}_{C},\mathcal{D}_{T}\) to denote the distributions of the control data and treatment data, respectively. Therefore, by using \(\mathcal{L}(\cdot)\) to denote the law (distribution) of a random variable, we have
\[\mathcal{D}=\mathcal{L}\left(D\right),\mathcal{D}_{C}=\mathcal{L}\left(D_{C} \right)\;\text{and}\;\mathcal{D}_{T}=\mathcal{L}\left(D_{T}\right).\]
Let the treatment assignment \(Z\) also be constructed in the same probability space. Importantly, \(Z\) is independent of \(\left\{D_{C},D_{T}\right\}\), i.e.,
\[Z\bot\left\{D_{C},D_{T}\right\},\]
which is the unconfoundedness assumption in causal inference (Rosenbaum and Rubin, 1983). The random variable \(D_{E}=\left\{X_{E},Y_{E}\right\}\) represents the data obtained from the experiment and can be expressed as follows:
\[D_{E}=D_{T}Z+D_{C}\left(1-Z\right),\]
where \(P(Z=1)=p\) represents the probability of treatment assignment. Consequently, the distribution of the experimental data can be described as:
\[\mathcal{D}_{E}=p\mathcal{D}_{T}+(1-p)\mathcal{D}_{C},\]
due to the independence of \(Z\) and \(\left\{D_{C},D_{T}\right\}.\)
To emphasize the model's dependence on the training distribution \(\mathcal{D}\), we represent it as \(M(\mathcal{D})\). Our objective is to shift the distribution of experimental data \(\mathcal{D}_{E}\) towards that of the control data \(\mathcal{D}_{C}\) and the treatment data \(\mathcal{D}_{T}\) to mitigate bias. To achieve this, we introduce a weighting function \(W(\cdot):\Omega\rightarrow\mathbb{R}_{+}\), with the property that \(\mathbb{E}[W]=1\). We denote the resulting weighted distribution as \(W\mathcal{D}\), i.e.,
\[W\mathcal{D}(A)= \mathbb{E}\left[WI\left\{D\in A\right\}\right]\text{ for any measurable set }A\text{ in }\mathcal{X}\times\mathcal{Y}.\]
It is easy to check that \(W\mathcal{D}\) is also a probability distribution on \(\mathcal{X}\times\mathcal{Y}\), since \(W(\cdot)\) is non-negative and \(\mathbb{E}[W]=1\).
Our first result, presented below, demonstrates that by selecting the weight function as \(\mathbb{E}\left[Z|X_{E}\right]/p\) or \(\left(1-\mathbb{E}\left[Z|X_{E}\right]\right)/(1-p)\), we can effectively recover the treatment and control data distributions, respectively.
**Lemma 1**.: _The weighted functions_
\[W_{T}(X_{E},Y_{E},Z)=\frac{\mathbb{E}\left[Z|X_{E}\right]}{p}\text{ and }W_{C}(X_{E},Y_{E},Z)=\frac{1-\mathbb{E}\left[Z|X_{E}\right]}{1-p} \tag{2}\]
_satisfy_
\[W_{T}\mathcal{D}_{E}\overset{d}{=}\mathcal{D}_{T}\text{ and }W_{C}\mathcal{D}_{E} \overset{d}{=}\mathcal{D}_{C},\]
_where \(\overset{d}{=}\) means equal in distribution, and \(W_{C}(X_{E},Y_{E},Z)\) and \(W_{T}(X_{E},Y_{E},Z)\) should be understood as \(W_{C}(X_{E}(\omega),Y_{E}(\omega),Z(\omega))\) and \(W_{T}(X_{E}(\omega),Y_{E}(\omega),Z(\omega)),\) for any \(\omega\in\Omega.\)_
**Remark:** In cases where the treatment variable \(Z\) is able to directly affect the outcome \(Y_{E}\), the adjustment can be made by substituting the conditional expectation \(\mathbb{E}\left[Z|X_{E}\right]\) with \(\mathbb{E}\left[Z|X_{E},Y_{E}\right]\).
The proof of Lemma 1 is presented in Appendix A. Lemma 1 shows that we are able to reconstruct the treatment and control data distributions from the A/B testing data distribution, provided that
we can estimate \(\mathbb{E}\left[Z|X_{E}\right]\) with sufficient accuracy.
Since the quantity \(\mathbb{E}\left[Z|X_{E}\right]\) is typically unknown beforehand, it becomes necessary to estimate it from the available data. To achieve this, we construct an additional machine learning model denoted as \(G_{\theta_{W}}\). This model is trained using the data \(\{X_{E},Z\}\) obtained from the experiments, treating it as a classification problem. Subsequently, the predictions generated by \(G_{\theta_{W}}\) are utilized as weights (after proper normalization) to form weighted losses for the original machine learning models. This method is implemented in Algorithm 1.
```
0: The probability of treatment assignment: \(p\); a model class for the weight prediction: \(\mathcal{G}=\{G_{\theta_{W}}:\mathbb{R}^{d}\rightarrow\{0,1\},\theta_{W}\in \Theta_{W}\}\); the machine learning model class: \(\mathcal{M}=\{M_{\theta}:\mathcal{X}\rightarrow\mathcal{Y},\theta\in\Theta\}\); loss functions: \(\ell(M(X),Y)\) (could be \(m\)-dimensional).
1: Initialize two models, the treatment model \(M_{\theta_{T}}\) and the control model \(M_{\theta_{C}}\), both of which are set to the current production model.
2: for \(t\gets 1\) to the end of the experiment do
3: for \(i\gets 1\) to \(n_{t}\) do
4: User \(i\) arrives. The platform randomly assigns user \(i\) to the treatment group with probability \(p\).
5: When a user is assigned to the treatment group, the platform recommends an item based on the treatment algorithm and model, and vice versa.
6: Collect data \((X_{i,t},Y_{i,t},Z_{i,t})\).
7: end for
8: Compute weights: \[W_{T,i,t}=\frac{G_{\theta_{W}}(X_{i,t})}{p}\text{ and }W_{C,i,t}=\frac{1-G_{ \theta_{W}}(X_{i,t})}{1-p}\text{, for }i=1,2,\ldots,n_{t}.\]
9: Update the treatment model \(M_{\theta_{T}}\) by minimizing the weighted loss \[\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}W_{T,i,t}\ell(M_{\theta_{T}}(X_{i,t}),Y_{i,t}).\]
10: Update the control model \(M_{\theta_{C}}\) by minimizing the weighted loss \[\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}W_{C,i,t}\ell(M_{\theta_{C}}(X_{i,t}),Y_{i,t}).\]
11: Update the model \(G_{\theta_{W}}\) using data \(\{(X_{i,t},Z_{i,t}),i=1,\ldots,n_{t}\}\).
12: end for
return the estimator (1).
```
**Algorithm 1** A weighted training approach for A/B tests
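A compact, runnable sketch of the per-period updates (steps 8-11 of Algorithm 1) is given below. The scikit-learn estimators and the scalar outcome are purely illustrative stand-ins; in practice \(M_{\theta_T}\), \(M_{\theta_C}\), and \(G_{\theta_W}\) would be large neural models and the loss would be \(m\)-dimensional.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor, SGDClassifier

# Illustrative stand-ins for M_{theta_T}, M_{theta_C}, and G_{theta_W};
# a real system would use large neural recommenders instead.
model_T = SGDRegressor(learning_rate="constant", eta0=0.1)   # treatment model
model_C = SGDRegressor(learning_rate="constant", eta0=0.1)   # control model
prop = SGDClassifier(loss="log_loss")                        # weighting model, estimates E[Z | X_E]

def weighted_update(X, Y, Z, p, burn_in=False):
    """Steps 8-11 of Algorithm 1 for one period, with a scalar outcome Y.

    During an initial burn-in (the paper waits 200 periods before using the
    weighting model), we fall back to data-splitting weights Z/p and (1-Z)/(1-p).
    """
    X, Y, Z = np.asarray(X), np.asarray(Y), np.asarray(Z)
    if burn_in:
        w_T, w_C = Z / p, (1 - Z) / (1 - p)
    else:
        e_hat = prop.predict_proba(X)[:, 1]                  # estimated E[Z | X_{i,t}]
        w_T, w_C = e_hat / p, (1 - e_hat) / (1 - p)          # W_{T,i,t}, W_{C,i,t}
    model_T.partial_fit(X, Y, sample_weight=w_T)             # step 9: weighted treatment update
    model_C.partial_fit(X, Y, sample_weight=w_C)             # step 10: weighted control update
    prop.partial_fit(X, Z, classes=np.array([0, 1]))         # step 11: refresh G_{theta_W}
```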
We remark that while \(\mathbb{E}\left[Z|X_{E}\right]\) might be complex, there is no need for precise estimation in practical applications. In fact, simple models like two-layer neural networks perform well, as demonstrated in our numerical results.
From the proof of Lemma 1, one may note that the simple weight function \(\tilde{W}=Z\) also satisfies
\[\tilde{W}\mathcal{D}_{E}\overset{d}{=}\mathcal{D}_{T}.\]
Indeed, using \(Z_{i,t}\) instead of training a model \(G_{\theta_{W}}\) in Algorithm 1 results in a data splitting approach, also known as a data-diverted experiment, as discussed in Holtz et al. (2023). In such experiments, each model is updated exclusively using data generated by users exposed to the corresponding algorithm. However, this approach lacks data efficiency, as it utilizes only a fraction of the data, namely \(p\) for the treatment model and \(1-p\) for the control model.
For instance, in cases where the control data distribution is identical to the treatment data distribution, our approach can leverage all available data for training both control and treatment models. This is because \(\frac{\mathbb{E}\left[Z\middle|X_{E}\right]}{p}=\frac{1-\mathbb{E}\left[Z \middle|X_{E}\right]}{1-p}=1\) in this case.
Intuitively, in the finite sample regime with \(n\) samples, the variance of the estimator should be proportional to \(\frac{1}{n^{2}}\sum_{i=1}^{n}\left(W_{i}/p\right)^{2}.\) In the following, we demonstrate that our approach achieves the lowest variance, defined in this manner, among all weights that do not cause shifts in the training distributions.
**Theorem 1**.: \(W_{T}(X_{E}(\omega),Y_{E}(\omega),Z(\omega))=\mathbb{E}\left[Z\middle|X_{E} \right]/p\) _attains the minimum of the following optimization problem_
\[\min_{W(\cdot):\Omega\rightarrow\mathbb{R}_{+}}\left\{\mathbb{E}\left[W^{2} \right]:W\mathcal{D}_{E}\overset{d}{=}\mathcal{D}_{T}\right\}. \tag{3}\]
_Similarly, \(W_{C}(X_{E}(\omega),Y_{E}(\omega),Z(\omega))=\left(1-\mathbb{E}\left[Z\middle| X_{E}\right]\right)/\left(1-p\right)\) attains the minimum of the following optimization problem_
\[\min_{W(\cdot):\Omega\rightarrow\mathbb{R}_{+}}\left\{\mathbb{E}\left[W^{2} \right]:W\mathcal{D}_{E}\overset{d}{=}\mathcal{D}_{C}\right\}. \tag{4}\]
Theorem 1 implies that our proposed weights, \(\frac{\mathbb{E}\left[Z\middle|X_{E}\right]}{p}\) and \(\frac{1-\mathbb{E}\left[Z\middle|X_{E}\right]}{1-p}\), achieve maximum data efficiency while adhering to the constraint of no training distributional shifts. The proof of Theorem 1 is provided in Appendix A.
## 5 Numerical Results
In this section, we present simulation results. In subsection 5.1, we specify the simulation setup and the implementation details. In subsection 5.2, we simulate A/B tests to demonstrate the lower bias and variance of our approach compared to other methods. In subsection 5.3, we simulate A/A tests to compare type I errors of different methods. Additional experiments and results can be found in Appendix B.
### Simulation Setups
We conducted a simulation inspired by Example 1 in the Introduction. In this simulation, we consider two types of videos: long and short, and the recommendation system relies on two metrics: finishing rates (FR) and stay durations (SD). Users arrive sequentially, and for each user, there are a total of \(N=100\) candidate videos available. These videos are divided into two equal groups, with half of them being long videos and the other half being short videos. The platform selects one video from this pool to show to each user. Furthermore, we assume that the features for user-video pairs are 10-dimensional, following independent uniform distributions in the range [0,1]. Additionally, we assume the following linear models:
\[\mathrm{FR}_{\mathrm{short}} = \beta_{\mathrm{FR,short}}^{\top}X-2.5,\] \[\mathrm{FR}_{\mathrm{long}} = \beta_{\mathrm{FR,long}}^{\top}X-2.5,\] \[\mathrm{SD}_{\mathrm{short}} \sim \exp\left(\beta_{\mathrm{SD,short}}^{\top}X\right),\] \[\mathrm{SD}_{\mathrm{long}} \sim \exp\left(\beta_{\mathrm{SD,long}}^{\top}X\right)\]
where \(\exp\left(\cdot\right)\) means an exponential distribution and
\[\beta_{\mathrm{FR,short}} = 0.9\times[0,0.1,0.2,\ldots,0.9],\] \[\beta_{\mathrm{FR,long}} = 0.6\times[0,0.1,0.2,\ldots,0.9],\] \[\beta_{\mathrm{SD,short}} = [1,0.9,0.8,\ldots,0.1],\] \[\beta_{\mathrm{SD,long}} = 1.5\times[1,0.9,0.8,\ldots,0.1].\]
The user's decision to finish watching a video or not follows a Bernoulli distribution with a probability equal to the finishing rate. By setting the parameters in this manner, we ensure that short videos generally have high finishing rates and short stay durations, while long videos exhibit the opposite characteristics.
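The data-generating process above can be coded directly. In the sketch below, the coefficient vectors follow the definitions given above; since the raw value \(\beta^{\top}X-2.5\) need not lie in \([0,1]\), we treat it as a logit and map it through a sigmoid before the Bernoulli draw, and we take \(\beta^{\top}X\) as the mean of the exponential stay duration. Both choices are our assumptions, and the exact conventions of the simulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# True coefficient vectors from the simulation setup.
beta_FR_short = 0.9 * np.arange(0, 1.0, 0.1)    # 0.9 * [0, 0.1, ..., 0.9]
beta_FR_long  = 0.6 * np.arange(0, 1.0, 0.1)
beta_SD_short = np.arange(1.0, 0.0, -0.1)       # [1, 0.9, ..., 0.1]
beta_SD_long  = 1.5 * np.arange(1.0, 0.0, -0.1)

def true_outcomes(x, is_long):
    """Draw (finished, stay_duration) for one user-video feature vector x in [0,1]^10.

    Assumptions: beta^T x - 2.5 is treated as a logit for the finishing event,
    and beta^T x is the mean (scale) of the exponential stay duration.
    """
    b_fr = beta_FR_long if is_long else beta_FR_short
    b_sd = beta_SD_long if is_long else beta_SD_short
    p_finish = 1.0 / (1.0 + np.exp(-(b_fr @ x - 2.5)))
    finished = rng.binomial(1, p_finish)
    stay = rng.exponential(scale=b_sd @ x)
    return finished, stay
```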
The machine learning models employ logistic regression for predicting finishing rates and linear regression for predicting stay durations. The feature set consists of 10 user-video pair features, along with an indicator variable that specifies whether the video is long or short. It's important to note that there is a model misspecification present, as the true parameters for long and short videos are different. In our machine learning models, we assume these parameters to be equal, but we introduce an additional parameter corresponding to the video length indicator for an adjustment.
We employ Stochastic Gradient Descent (SGD) to train both machine learning models, with a batch size of \(B=n_{1}=n_{2}=\ldots=n_{T}=128\) for all time steps. The learning rate is set to 0.1. Throughout all simulations, we maintain a fixed value of \(T=10000\). Consequently, the total number of users involved in the experiments amounts to 1,280,000.
The platform recommends the video that yields the highest value among the 100 candidate videos based on the following formula:
\[\alpha\widehat{\mathrm{FR}}+\widehat{\mathrm{SD}},\]
where \(\widehat{\mathrm{FR}}\) and \(\widehat{\mathrm{SD}}\) represent the predictions generated by the machine learning models. The A/B tests are designed to assess the difference between two distinct \(\alpha\) values. We focus on three metrics,
FR, SD, and the proportion of short videos on the platform.
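The selection step itself amounts to an argmax over the candidate pool; a short sketch, where `predict_fr` and `predict_sd` are hypothetical callables standing in for the trained logistic and linear models, is:

```python
import numpy as np

def recommend(features, is_long, alpha, predict_fr, predict_sd):
    """Pick the candidate maximizing the linear fusion score alpha * FR_hat + SD_hat.

    features : (N, d) array for the N = 100 candidate videos of one user;
    is_long  : (N,) boolean array flagging long videos;
    predict_fr / predict_sd : the platform's current ML models (hypothetical callables).
    """
    scores = alpha * predict_fr(features) + predict_sd(features)
    best = int(np.argmax(scores))
    return best, bool(is_long[best])
```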
We compare our approach to three other methods: data pooling, snapshot, and data splitting methods.
* **Data pooling:** This is the standard naive approach, where machine learning models are trained on the combined control and treatment data.
* **Snapshot:** In this method, the machine learning models are never retrained during the A/B tests. Predictions are solely based on the models' initial snapshot at the beginning of the experiments.
* **Data splitting:** Also known as data-diverted, as discussed in Holtz et al. (2023), each model is exclusively trained on the data obtained from its respective algorithm.
While Holtz et al. (2023) also explore cluster randomized experiments, it's worth noting that in our specific context, determining how to cluster users presents challenges. Consequently, we do not make direct comparisons with cluster randomized experiments. As we discussed in Section 4, the data splitting method may encounter several challenges:
* **High variance.** Since machine learning models can only see a portion of the data, the lack of data efficiency may lead to high variance in model estimators, resulting in increased variance in the experimental metrics.
* **External validity.** In our simulation, the data splitting method is equivalent to reducing the batch size. It is well-known that batch size plays a crucial role in machine learning, and different batch sizes can yield fundamentally different performances. Therefore, treatment effect estimates in scenarios with small batch sizes may not accurately predict treatment effects in scenarios with large batch sizes, compromising external validity.
* **Experimentation costs.** In today's online platforms, thousands of experiments run each day. Consequently, experimentation costs cannot be overlooked, even though each experiment only runs for a relatively short period. Reducing the data size can compromise the performance of the machine learning model, potentially leading to suboptimal recommendations and increased experimentation costs.
In our approach, we employ a fully connected network with two hidden layers and ReLU activations to train the weighting model \(G_{\theta_{W}}\). Each hidden layer comprises 64 neurons, and we utilize the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001. Our training process for the weighting model commences after the initial 200 periods. During these initial 200 periods, the control and treatment machine learning models are trained as in the data splitting method.
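For completeness, a PyTorch sketch of this weighting model is shown below; the input dimension of 11 assumes the 10 user-video features plus the video-length indicator.

```python
import torch
import torch.nn as nn

# Two hidden layers of 64 ReLU units predicting E[Z | X]; trained with Adam, lr = 0.001.
weight_model = nn.Sequential(
    nn.Linear(11, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(weight_model.parameters(), lr=1e-3)
bce = nn.BCELoss()

def update_weight_model(X, Z):
    """One gradient step of G_{theta_W} on a batch of (features, assignment) pairs."""
    X = torch.as_tensor(X, dtype=torch.float32)
    Z = torch.as_tensor(Z, dtype=torch.float32).unsqueeze(1)
    optimizer.zero_grad()
    loss = bce(weight_model(X), Z)
    loss.backward()
    optimizer.step()
    return float(loss)
```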
Subsequently, we will conduct the A/B tests 100 times and create violin plots to visualize the estimated treatment effects.
### A/B Tests
We first examine the comparison between the control parameter \(\alpha_{C}=10\) and the treatment parameter \(\alpha_{T}=9\) with a treatment assignment probability of \(p=1/2\). The results are depicted in Figure 4.
In the figure, the black dotted line represents the true global treatment effects (GTE), which have been computed through simulation. We present various estimators obtained from 100 independent A/B tests along with their respective mean, lower, and upper bounds. Specifically, we provide results for treatment effects, global treatment, and global control regimes in the first, second, and third rows, respectively. Additionally, we report results for the proportion of short videos, SD, and FR in the first, second, and third columns, respectively.
Figure 4: A/B testing results for \(\alpha_{C}=10\), \(\alpha_{T}=9\), and \(p=1/2\)
Additionally, we provide information on the bias and standard errors of treatment effect estimators obtained using various methods in Table 1. In each metric, the first column represents the bias in comparison to the true global treatment effect (GTE). The second column displays the standard deviation calculated from the results of the 100 A/B tests. Lastly, the third column showcases the standard error estimates obtained through two-sample t-tests in a single A/B test i.e.,
\[\text{SE}=\sqrt{\frac{\text{Var}(Y|Z=1)}{\sharp\left\{Z_{i,t}=1\right\}}+\frac{ \text{Var}(Y|Z=0)}{\sharp\left\{Z_{i,t}=0\right\}}}.\]
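In code, this per-experiment standard error is just (a numpy sketch for a single scalar metric):

```python
import numpy as np

def two_sample_se(Y, Z):
    """Standard error of the difference in means from a single A/B test."""
    y1, y0 = Y[Z == 1], Y[Z == 0]
    return np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
```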
From Figure 4 and Table 1, it is evident that our approach consistently demonstrates the lowest bias across all metrics compared to other approaches. The data splitting method also manages to achieve relatively low biases but exhibits significantly higher variance. Furthermore, it's worth noting that the true variance of the data splitting estimator is considerably larger than the standard error estimated from a two-sample t-test. Consequently, this could potentially lead to confidence intervals that underestimate the true level of variability.
In Table 2, we have calculated the experimentation costs. For treatment users, we computed the average treatment values based on the treatment linear fusion formula, while for control users, we averaged the control values based on the control linear fusion formula. It is apparent that our approach is only slightly worse than the global treatment/control regime, whereas the data splitting method yields the lowest values, indicating that it incurs the highest experimentation costs.
We proceed to conduct simulations for \(p=0.2\), with the same treatment and control parameters, i.e., \(\alpha_{C}=10\) and \(\alpha_{T}=9\).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Proportion of short videos} & \multicolumn{3}{c}{Stay durations} & \multicolumn{3}{c}{Finishing rates} \\ \hline & Bias & STD & SE & Bias & STD & SE & Bias & STD & SE \\ Weighted & 0.002 & 0.004 & 0.001 & -0.005 & 0.012 & 0.008 & 0.000 & 0.001 & 0.001 \\ Data splitting & -0.003 & 0.015 & 0.001 & 0.008 & 0.042 & 0.008 & -0.001 & 0.004 & 0.001 \\ Data pooling & 0.018 & 0.002 & 0.001 & -0.059 & 0.009 & 0.008 & 0.006 & 0.001 & 0.001 \\ Snapshot & 0.011 & 0.001 & 0.001 & -0.047 & 0.008 & 0.009 & 0.004 & 0.001 & 0.001 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Bias, standard deviation, and standard error estimated from the experiment for the metrics in the case that \(\alpha_{C}=10\), \(\alpha_{T}=9\), and \(p=1/2\)
\begin{table}
\begin{tabular}{l c c} \hline \hline & Treatment values & Control values \\ \hline Global & 9.8827 \(\pm\) 0.0006 & 9.3523 \(\pm\) 0.0006 \\ Weighted & 9.8816 \(\pm\) 0.0009 & 9.3521 \(\pm\) 0.0008 \\ Data splitting & 9.8710 \(\pm\) 0.0008 & 9.3431 \(\pm\) 0.0008 \\ Data pooling & 9.8861 \(\pm\) 0.0008 & 9.3551 \(\pm\) 0.0009 \\ Snapshot & 9.8876 \(\pm\) 0.0009 & 9.3692 \(\pm\) 0.0008 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimentation values in the case that \(\alpha_{C}=10\), \(\alpha_{T}=9\), and \(p=1/2\)
This scenario is arguably more relevant, as A/B tests in practice often begin with smaller treatment proportions. The results are presented in Figure 5, and detailed bias, variance, and cost findings can be found in Tables 3 and 4.
Once again, our approach demonstrates the lowest bias and reasonable variance. However, it's important to note that in this case with \(p=0.2\), the data splitting method exhibits higher bias and variance compared to the simulation with \(p=1/2\).
### A/A Tests
In this section, we have conducted simulations for A/A tests, specifically choosing parameters such as \(\alpha_{C}=\alpha_{T}=10\) with a treatment assignment probability of \(p=1/2\). Since the treatment and control groups share an identical parameter, the global treatment effects should ideally be zero. In Figure 6, we present visualizations of treatment effect estimations for four methods. Notably, the weighted training, data pooling, and snapshot methods exhibit similar performance. Table 5 offers details on the average estimations and type I errors obtained from various methods, gathered from 100 independent runs of the A/A tests, with a confidence level set at 0.95.
It's noteworthy that our approach exhibits a slightly larger type I error than the target of 0.05 for the metrics stay durations (SD) and finishing rates (FR), and it demonstrates a worse type I error for the metric proportion of short videos. We attribute this behavior to the sensitivity of the proportion of short videos metric to the starting period of the experiment, which may be more feedback-loop dependent.
\begin{table}
\begin{tabular}{l l l} \hline & Treatment values & Control values \\ \hline Global & 9.8823 \(\pm\) 0.0006 & 9.3515 \(\pm\) 0.0005 \\ Weighted & 9.8757 \(\pm\) 0.0013 & 9.3517 \(\pm\) 0.0006 \\ Data splitting & 9.8347 \(\pm\) 0.0015 & 9.3492 \(\pm\) 0.0007 \\ Data pooling & 9.8877 \(\pm\) 0.0013 & 9.3538 \(\pm\) 0.0006 \\ Snapshot & 9.8949 \(\pm\) 0.0015 & 9.3611 \(\pm\) 0.0006 \\ \hline \end{tabular}
\end{table}
Table 4: Experimentation values in the case that \(\alpha_{C}=10\), \(\alpha_{T}=9\), and \(p=0.2\)
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline & \multicolumn{3}{c}{Proportion of short videos} & \multicolumn{3}{c}{Stay durations} & \multicolumn{3}{c}{Finishing rates} \\ \hline & Bias & STD & SE & Bias & STD & SE & Bias & STD & SE \\ Weighted & 0.002 & 0.004 & 0.001 & -0.008 & 0.014 & 0.011 & 0.000 & 0.002 & 0.001 \\ Data splitting & -0.019 & 0.020 & 0.001 & 0.021 & 0.052 & 0.011 & -0.007 & 0.006 & 0.001 \\ Data pooling & 0.017 & 0.002 & 0.001 & -0.056 & 0.012 & 0.011 & 0.006 & 0.001 & 0.001 \\ Snapshot & 0.013 & 0.001 & 0.001 & -0.046 & 0.011 & 0.011 & 0.005 & 0.001 & 0.001 \\ \hline \end{tabular}
\end{table}
Table 3: Bias, standard deviation, and standard error estimated from the experiment for the metrics in the case that \(\alpha_{C}=10\), \(\alpha_{T}=9\), and \(p=0.2\)
Figure 6: A/A testing results for \(\alpha_{C}=\alpha_{T}=10\) and \(p=1/2\)
On the contrary, the data splitting method yields much higher Type I errors, suggesting that new inference methods should be developed to address this issue.
## 6 Concluding Remarks
In this paper, we have introduced a weighted training approach designed to address the interference problem caused by data training loops. Our approach has demonstrated the capability to achieve low bias and reasonable variance. For future research, we have identified several intriguing directions:
1. **Single model training:** In our current approach, we still require training two separate models, which can be computationally expensive, especially when dealing with large machine learning models. It would be interesting to explore whether it's possible to train a single model and implement adjustments to mitigate bias effectively. This could lead to more efficient and practical solutions.
2. **Variance estimation and new inference methods:** Although our approach has shown promise in reducing bias, the variance remains larger than the standard error estimated from the two-sample t-test in some cases. As a result, there is a need for more robust methods for estimating variance and developing new inference techniques that can account for the specific challenges in interference induced by data training loops in A/B tests.
Exploring these directions could further enhance our understanding of this type of interference and lead to more effective and efficient solutions for mitigating its biases.
## Acknowledgement
We would like to thank Jose Blanchet, Ramesh Johari, Shuangning Li, Zikun Ye, and Xinyu Yue for helpful discussions.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Proportion of short videos & \multicolumn{2}{c}{Stay durations} & \multicolumn{2}{c}{Finishing rates} \\ \hline & Estimation & Type I error & Estimation & Type I error & Estimation & Type I error \\ Weighted & -0.0003 & 0.45 & 0.0008 & 0.09 & -0.0001 & 0.11 \\ Data splitting & -0.0017 & 0.94 & 0.0039 & 0.65 & -0.0003 & 0.60 \\ Data pooling & -0.0001 & 0.04 & 0.0011 & 0.07 & -0.0001 & 0.07 \\ Snapshot & 0.0001 & 0.06 & -0.0015 & 0.06 & 0.0000 & 0.06 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The average estimations and type I error for the A/A test with \(\alpha_{C}=\alpha_{T}=10\) and \(p=1/2\) |
2306.05774 | The Fate of the Interstellar Medium in Early-type Galaxies. II.
Observational Evidence for Morphological Quenching | The mechanism by which galaxies stop forming stars and get rid of their
interstellar medium (ISM) remains elusive. Here, we study a sample of more than
two thousand elliptical galaxies in which dust emission has been detected. This
is the largest sample of such galaxies ever analysed. We infer the timescale
for removal of dust in these galaxies and investigate its dependency on
physical and environmental properties. We obtain a dust removal timescale in
elliptical galaxies of $\tau$ = 2.26 $\pm$ 0.18 Gyr, corresponding to a
half-life time of 1.57 $\pm$ 0.12 Gyr. This timescale does not depend on
environment, stellar mass or redshift. We observe a departure of dusty
elliptical galaxies from the star formation rate vs. dust mass relation. This
is caused by the star-formation rates declining faster than the dust masses and
indicates that there exists an internal mechanism, which affects star
formation, but leaves the ISM intact. Morphological quenching together with
ionisation or outflows caused by older stellar populations (supernova type Ia
or planetary nebulae) are consistent with these observations. | Aleksandra Leśniewska, Michał Jerzy Michałowski, Christa Gall, Jens Hjorth, Jakub Nadolny, Oleh Ryzhov, Martin Solar | 2023-06-09T09:27:48Z | http://arxiv.org/abs/2306.05774v1 | The Fate of the Interstellar Medium in Early-type Galaxies. II. Observational Evidence for Morphological Quenching1
###### Abstract
The mechanism by which galaxies stop forming stars and get rid of their interstellar medium (ISM) remains elusive. Here, we study a sample of more than two thousand elliptical galaxies in which dust emission has been detected. This is the largest sample of such galaxies ever analysed. We infer the timescale for removal of dust in these galaxies and investigate its dependency on physical and environmental properties. We obtain a dust removal timescale in elliptical galaxies of \(\tau=2.26\pm 0.18\) Gyr, corresponding to a half-life time of 1.57 \(\pm\) 0.12 Gyr. This timescale does not depend on environment, stellar mass or redshift. We observe a departure of dusty elliptical galaxies from the star formation rate vs. dust mass relation. This is caused by the star-formation rates declining faster than the dust masses and indicates that there exists an internal mechanism, which affects star formation, but leaves the ISM intact. Morphological quenching together with ionisation or outflows caused by older stellar populations (supernova type Ia or planetary nebulae) are consistent with these observations.
early-type galaxies (429), elliptical galaxies (456), galaxy ages (576), galaxy evolution (594), galaxy quenching (2040), interstellar medium (847), dust destruction (2268)
Footnote †: journal: ApJ
## 1 Introduction
Dust influences the evolution of galaxies by acting as a catalyst of molecule formation and providing shielding from interstellar radiation. Its emission can also be used as a diagnostic for interstellar medium (ISM) properties (Scoville et al., 2016). There are several processes that can contribute to dust removal from galaxies. Dust can be incorporated into newly formed stars (astration; Gall & Hjorth, 2018), or destroyed by active galactic nucleus (AGN) feedback (Fabian, 2012). Supernovae (SNe) may destroy newly-formed and pre-existing dust by forward and reverse shock waves (Temim et al., 2015; Bianchi & Schneider, 2007; Cherchneff & Dwek, 2010; Gall et al., 2011; Lakicevic et al., 2015). Dust can also be destroyed by planetary nebulae, due to heating of gas by shocks from colliding planetary nebulae (Conroy et al., 2015). Galactic outflows contribute to dust removal and can be very effective due to radiation pressure-driven dusty flows (Bianchi & Ferrara, 2005). Hot gas (\(\sim 10^{5}\) K) present in some regions of the ISM can also cause erosion of dust particles. The smallest grains are the most vulnerable to this mechanism (Bocchio et al., 2012).
Over the past decades, many theoretical works have been developed to model the formation, evolution and destruction of dust in galaxies. Among the first research dealing with dust evolution is Dwek & Scalo (1980), who emphasized the importance of SNe. Barlow (1978) studied sputtering of dust grains in H ii regions, intercloud medium, cloud-cloud collisions shock waves, and SN remnants, concluding that the latter dominates this process. Gall et al. (2011) developed a numerical model of galactic chemical evolution and studied the effect of galaxy properties on the evolution of dust. Dust destruction was described in the model as being caused by
SN shocks. The tested properties of dust evolution depend very strongly on the initial mass function. Slavin et al. (2015) focused on dust destruction by SNe, which resulted in a dust removal timescale of 2-3 Gyr.
Recent studies of high-redshift (\(z\sim\) 1.6-3.3) lensed quiescent galaxies have shown that their dust-to-stellar mass ratios are of order \(10^{-4}\)(Whitaker et al., 2021). Similarly, Blanquez-Sese et al. (2023) showed that high-redshift galaxies are characterized by an order of magnitude higher gas fractions than what is detected in the local universe.
In order to separate the processes of dust formation and removal, it is an advantage to study galaxies with little dust formation, but with detectable ISM. Therefore, dusty early-type galaxies (ETG; ellipticals and lenticulars) form a suitable sample for such endeavour. The dust emission of only several dozen of such galaxies has been analysed (Smith et al., 2012; Rowlands et al., 2012; Agius et al., 2013, 2015; di Serego Alighieri et al., 2013; Hjorth et al., 2014; Dariush et al., 2016; Michalowski et al., 2019; Magdis et al., 2021). Hjorth et al. (2014) showed that dusty early-type galaxies do not follow the relation between the star formation rates (SFRs) and dust masses (da Cunha et al., 2010) and discussed formation or quenching scenarios. (Michalowski et al., 2019, submitted 2023) revealed an exponential decline of the dust-to-stellar and gas-to-stellar mass ratios with galaxy age and measured the timescale of this process to be 2.5 \(\pm\) 0.4 Gyr. To date, this is the only measurement of the dust removal timescale in dusty early-type galaxies and is based on a sample of 61 galaxies.
Dusty elliptical galaxies are quite rare. Hence, far-infrared/submillimeter surveys need to cover a large area to detect a high number of galaxies to build a significant sample. The ESA _Herschel Space Observatory_ (henceforth _Herschel_, Pilbratt et al., 2010) has provided deep infrared observations of hundreds of square degrees of the sky. Its large field of view, \(4^{\prime}\times 8^{\prime}\), and sensitivity has led to the detection of dust in millions of galaxies.
One of the major cosmological and galaxy evolution observation projects, Galaxy And Mass Assembly (GAMA; Driver et al., 2011, 2016; Baldry et al., 2018; Smith et al., 2011), brings together the latest generation of instruments and surveys, such as the Anglo-Australian Telescope (AAT), Sloan Digital Sky Survey (SDSS), and _Herschel_. These datasets were combined in a database of several hundred thousand galaxies, with a magnitude limit in the \(r\) band of 19.8 mag. Such an extensive catalog not only allows the examination of the relationships between individual quantities, but also gives the possibility of additional sampling into bins of various parameters.
Footnote 1: [http://www.gama-survey.org](http://www.gama-survey.org)
In this paper we study a large sample of more than two thousand elliptical galaxies in which dust was detected. The sample size allows us to investigate dust evolution as a function of various galaxy properties. We focus on relationships between their physical and environmental parameters. The objective of this paper is to distinguish the mechanisms contributing to the removal of dust in elliptical galaxies and investigate its dependency on physical and environmental properties.
We use a cosmological model with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}=0.7\), and \(\Omega_{m}=0.3\). We assume the Chabrier (2003) initial mass function.
## 2 Data and Sample
### Gamma Catalog
_Herschel_ covered an area of 161.6 deg\({}^{2}\) of the GAMA fields and provided information on dust emission at 250, 350, and 500 \(\mu\)m. The GAMA catalog for these fields contains properties of 120,114 galaxies based on modeling of spectral energy distributions with the Multi-wavelength Analysis of Galaxy Physical Properties (MAGPHYS; da Cunha et al., 2008). This includes dust masses, stellar masses, star formation rates, and luminosity-weighted stellar ages. The values of these parameters were obtained by the GAMA project and are presented in their MagPhys catalogue2. We also obtained a wide range of parameters related to photometry, a single-Sersic fit to SDSS 2D surface brightness distribution (Kelvin et al., 2012) and local environment of galaxies such as surface galaxy density (\(\Sigma\)) calculated based on the distance to the fifth nearest neighbour within a velocity difference of \(\pm 1000\) km s\({}^{-1}\)(Brough et al., 2013).
Footnote 2: [http://www.gama-survey.org/dr3/data/cat/MagPhys/](http://www.gama-survey.org/dr3/data/cat/MagPhys/)
### Sample
We used the r-band Sersic index (Sersic, 1963), \(n\), to select elliptical galaxies by requiring that \(n>4\). This resulted in 22,571 galaxies.
From this set of galaxies we selected dusty ellipticals with a minimum signal-to-noise ratio at the _Herschel_ SPIRE (Griffin et al., 2010) 250 \(\mu\)m filter of 3. This step resulted in 2,956 galaxies, so 13% of elliptical galaxies are detected by _Herschel_. This is higher than the detection rate of 5.5% obtained by Rowlands et al. (2012) for similar galaxies, who required a higher significance of 5\(\sigma\) at 250 \(\mu\)m.
Rowlands et al. (2012) visually classified galaxies to the early-type category at redshifts \(0.01<z<0.32\). We selected galaxies in the same redshift range. At higher redshifts the morphological classification is uncertain (de Albernaz Ferreira & Ferrari, 2018) and the sample could contain compact high star-forming (not elliptical) galaxies. Our final selection, including the redshift cut, resulted in 2050 galaxies. Our selection roughly corresponds to a flux-limited sample above 20.7 mJy at the SPIRE 250 \(\mu\)m, although adopting that limit would result in 17% of galaxies having a signal-to-noise ratio less than 3. Selection of galaxies based on SPIRE 250 \(\mu\)m flux \(>\) 20.7 mJy does not affect our results.
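For reproducibility, the selection can be expressed as a simple catalogue query; the pandas sketch below uses hypothetical column names (`sersic_n_r`, `flux_250`, `flux_err_250`, `z`) standing in for the corresponding GAMA catalogue entries.

```python
import pandas as pd

def select_dusty_ellipticals(cat: pd.DataFrame) -> pd.DataFrame:
    """Apply the selection used in this work to a GAMA-like catalogue.

    Column names are hypothetical placeholders for the r-band Sersic index,
    the SPIRE 250 um flux and its uncertainty, and the spectroscopic redshift.
    """
    snr_250 = cat["flux_250"] / cat["flux_err_250"]
    mask = (cat["sersic_n_r"] > 4) & (snr_250 > 3) & cat["z"].between(0.01, 0.32)
    return cat[mask]
```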
The uncertainties of the physical properties are the following, measured separately for MS and below-MS subsamples: 0.12-0.14 dex for stellar age, 0.1-0.3 dex for SFR, 0.15-0.22 dex for M\({}_{dust}\), 0.1 dex for M\({}_{stellar}\), where the higher values correspond to the galaxies below the main sequence.
Rowlands et al. (2012) estimated that 2% of dusty early-type galaxies in their sample are likely chance projections of a dust-free galaxy and a background dusty galaxy. Our selection criteria are similar: we used the updated GAMA archive (DR3), Sersic index \(>\) 4 instead of visual classification, and the same redshift range, so we expect a similar fraction, which does not affect our analysis. The main difference is the area over which the galaxies were selected, resulting in a much larger sample of 2050 objects as compared to the 44 galaxies studied in Rowlands et al. (2012).
## 3 Results
### Main Sequence
We divided the selected galaxies into two groups: galaxies within and below the main sequence (MS) of star forming galaxies. Fig. 1 (top) presents a comparison of our galaxies with a redshift-dependent MS as measured by Speagle et al. (2014, eq. 28). We adopted a measured MS width of 0.2 dex (Speagle et al., 2014), independent of redshift. Any galaxy below the MS by more than 0.2 dex is assigned to the 'below-MS' group in this paper. Our sample covers the redshift range uniformly with a sensitivity \(<\) 100 times below the MS at all redshifts. This resulted in 722 MS dusty elliptical galaxies and 1 328 below-MS galaxies.
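A sketch of this classification step is given below; it assumes the commonly quoted parametrization of the Speagle et al. (2014, eq. 28) main sequence, \(\log{\rm SFR}=(0.84-0.026\,t)\log M_{*}-(6.51-0.11\,t)\) with \(t\) the age of the Universe in Gyr, together with the cosmology adopted in this work.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # cosmology adopted in this work

def below_ms(log_mstar, log_sfr, z, width=0.2):
    """Flag galaxies more than `width` dex below the star-forming main sequence.

    Assumes the Speagle et al. (2014, eq. 28) parametrization, with t the age
    of the Universe in Gyr at the galaxy redshift.
    """
    t = cosmo.age(z).value                                        # Gyr
    log_sfr_ms = (0.84 - 0.026 * t) * log_mstar - (6.51 - 0.11 * t)
    return log_sfr < log_sfr_ms - width
```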
We tested the validity of the Speagle MS for our data using late-type galaxies from GAMA, which have been selected based on Sersic index \(<\) 2.5, 0.1 \(<\) z \(<\) 0.15, and S/N \(>\) 3 at S250. We find an agreement between the Speagle MS and the MS estimated using the selected LTGs, in particular in the stellar mass range covered by our ETG sample.
### Dust Removal Timescale
Figure 1 presents dust-to-stellar mass ratio as a function of luminosity-weighted stellar age (middle panel). There is an evident decline in the mass ratio as galaxies evolve over time. Fitting an exponential function to this plane, as in Michalowski et al. (2019), allows us to evaluate the timescale of the dust mass removal for different galaxy properties:
\[\frac{M_{dust}}{M_{*}}=A\cdot e^{-age/\tau}, \tag{1}\]
where \(A\) is the normalisation constant and \(\tau\) is the dust removal timescale. We obtained a dust removal timescale for all elliptical galaxies of \(\tau=2.26\pm 0.18\) Gyr with the corresponding half-life time of \(1.57\pm 0.12\) Gyr. The values of the dust removal timescale, half-life time and the normalisation constant are presented in Table 1. To our knowledge, this is the first determination of the dust removal timescale for such a large sample and for different galaxy properties.
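The fit of Eq. (1) and the conversion to a half-life time can be reproduced with standard tools; a scipy sketch with an illustrative initial guess is shown below. Note that \(t_{1/2}=\tau\ln 2\), so \(\tau=2.26\) Gyr corresponds to 1.57 Gyr.

```python
import numpy as np
from scipy.optimize import curve_fit

def dust_decline(age_gyr, A, tau):
    """Eq. (1): dust-to-stellar mass ratio as an exponential of stellar age."""
    return A * np.exp(-age_gyr / tau)

def fit_dust_removal(age_gyr, dust_to_stellar):
    (A, tau), cov = curve_fit(dust_decline, age_gyr, dust_to_stellar, p0=(1e-3, 2.0))
    half_life = tau * np.log(2)   # tau = 2.26 Gyr gives a half-life of 1.57 Gyr
    return A, tau, half_life
```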
We also fit the exponential function separately to galaxies on and below the MS. The elliptical galaxies below the MS (red line) follow the fit obtained by Michalowski et al. (2019; lime green line), whereas the elliptical galaxies on the MS (blue line) are characterized by a faster dust mass decline. The results of our fitting are given in Table 1.
One of the basic parameters which is useful for subdivision into smaller bins is stellar mass, because galaxies of different masses may evolve differently. The three top panels in Fig. 2 show the dust-to-stellar mass ratio as a function of age for three stellar mass bins between \(10<\log(M_{\rm stellar}/M_{\odot})<11.5\), with a 0.5 dex width. The fits for these stellar mass bins are consistent with each other within the error bars (Table 1). Therefore, we conclude that the dust mass decline with time does not depend on stellar mass in the analysed range.
The most massive group with \(11.5<\log(M_{\rm stellar}/M_{\odot})<12.2\), does not contain MS galaxies, and includes only galaxies with high ages and low dust-to-stellar mass ratios. It is not possible to fit an exponential function to the galaxies in this group because the dynamical range of both properties is too small. However, these galaxies are still consistent with the fitted dust removal function obtained for galaxies at lower masses.
Other galaxies in the close proximity of elliptical galaxies can affect their ISM. Therefore, we studied the role of the galaxy environment. The GAMA catalog provides surface galaxy density, \(\Sigma\), in the G15 field for galaxies at \(z<0.18\)(Brough et al., 2013). There are 384 of our dusty ETGs satisfying these criteria and for 373 of them (97%) \(\Sigma\) has been measured. The dust decline
as a function of age in bins of \(\Sigma\) is presented in Fig. 2 (middle row). It is evident that the decline in dust mass is independent of the galaxy environment. We reached the same conclusion when we analysed the effect of environment in narrower ranges of stellar mass.
Our sample spans a redshift range 0.01-0.32, corresponding to 3.6 Gyr of the evolution of the Universe. Fig. 2 (bottom) shows that the dust removal does not depend on redshift, as galaxies follow the same dust removal trend at each redshift bin.
### Dust Masses vs Star Formation Rates
Figure 1 (bottom) presents the SFR-\(M_{\rm dust}\) relation for our 2050 dusty elliptical galaxies. It is evident that our MS elliptical galaxies follow the da Cunha et al. (2010) relation (black line). Hence for MS elliptical galaxies the decrease in SFR is accompanied by a similar decrease in the dust mass, so they stay on the relation. However, as first shown by Hjorth et al. (2014), elliptical galaxies below the MS are found above the da Cunha et al. (2010) relation with higher dust masses than what their SFRs imply.
### Central Surface Luminosity
From the GAMA light profile catalog we used the values of the central surface brightness and converted them to central surface luminosities (luminosity per kpc\({}^{2}\)). We find that the decrease of dust mass with the age of the elliptical galaxies does not depend on the central surface luminosity.
### Quenching
To study the evolution of dusty elliptical galaxies, we divided our sample into eight bins of stellar age. Figure 3 (top) presents SFR vs. stellar mass with the addition of the median values in age bins, separately for the MS and below-MS elliptical galaxies. These medians are presented in Table 2 in the Appendix. The medians of SFRs and stellar masses of MS elliptical galaxies are (as expected) close to the MS. For elliptical galaxies below the MS, with increasing age the medians move away from the MS toward lower SFRs. The youngest below-MS elliptical galaxies are \(\sim 0.6\) dex below the MS and the oldest have more than 10 times lower SFRs than the youngest.
Figure 3 (bottom) presents dust mass vs. SFR with the medians in age bins for the MS and below-MS elliptical galaxies (Table 2). The medians for MS ellipticals are located close to each other and to the da Cunha et al. (2010) relation (black line), with no clear evolution. Elliptical galaxies below the MS have higher dust masses for their SFRs than what the da Cunha et al. (2010) relation implies. We fitted a power-law function to the medians of the galaxies below the MS (red line), resulting in \(\log(M_{dust})=(0.55^{+0.10}_{-0.11})\cdot\log({\rm SFR})+(7.893^{+0.030}_{-0.031})\). This is shallower than the slope of the da Cunha et al. (2010) relation of 1.1.
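The quoted power law is a straight-line fit in log-log space to the binned medians; a minimal numpy sketch is:

```python
import numpy as np

def fit_powerlaw(sfr_medians, mdust_medians):
    """Fit log10(M_dust) = a * log10(SFR) + b to the binned medians."""
    a, b = np.polyfit(np.log10(sfr_medians), np.log10(mdust_medians), deg=1)
    return a, b   # this work reports a = 0.55, b = 7.893 for the below-MS medians
```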
From Fig. 3 (bottom) it is evident that the elliptical galaxies below the MS move away from the da Cunha et al. (2010) relation as they are getting older. The youngest of the below-MS galaxies have SFRs around 1 \(M_{\odot}\) yr\({}^{-1}\) and dust masses around \(10^{7.9}\)\(M_{\odot}\) (0.8 dex above the relation). With increasing age, their SFRs decrease faster than their dust masses. This results in the oldest galaxies having SFR around 0.1 \(M_{\odot}\) yr\({}^{-1}\) (a factor of 10 decrease) and dust mass of \(10^{7.3}\)\(M_{\odot}\) (a factor of 4 decrease), placing them 1.3 dex above the relation.
### Sample Evaluation
To ensure that our selection is robust, we studied a subsample of galaxies with at least two detections among 5 _Herschel_ bands (S/N \(>\) 3). This resulted in 1430 galaxies. The exponential curve fitting gives the same results as the original sample within the error limits of these parameters. This shows that increasing the number of required band detections does not change our results and conclusions.
In order to check the correctness of the stellar ages calculated by the GAMA project, we analyzed average spectral energy distribution (SED) for eight stellar age bins defined in previous section. There is a clear correlation between the bin age and the relative normalised (in the near-infrared) flux. The oldest bin shows lower flux at the blue part of the SED, while the youngest bin shows the most prominent blue part of the SED that corresponds to the young stellar population of massive and hot OB stars. Normalisation in the near-infrared (equivalent to a stellar mass normalisation, as considered above), gives a clear luminosity decrease in the far-infrared with increasing age, equivalent to the dust-to-stellar decrease.
The GAMA project database also contains information about the D4000 break (Cardiel et al., 1998; Balogh et al., 1999). The strength of this break as a function of luminosity-weighted age for the below-MS galaxies from our sample shows that older galaxies have higher D4000, consistent with the determined age of the galaxies. The Spearman's rank correlation is 0.47 and the probability of the null hypothesis of no correlation is \(10^{-70}\).
## 4 Discussion
Our key result is the confirmation of the exponential decrease of the dust mass with age using an unprecedentedly large sample. We also found that SFRs of dusty ellipticals below the MS decline faster with age than their dust masses, and that the dust mass decline is independent of stellar mass, environment, redshift and central surface luminosity.
Figure 1: (Top) SFR as a function of stellar mass. Color coding distinguishes MS galaxies (blue stars) and galaxies below the MS (red circles). The star formation main sequence at \(z=0.32\), 0.18, and 0.01 (black lines) based on Speagle et al. (2014) are shown. The numbers of selected MS and below-MS galaxies are presented in the legend. (Middle) Dust-to-stellar mass ratio as a function of stellar age. The exponential fits are for galaxies within the MS (blue line), galaxies below the MS (red line), all galaxies (black line), that obtained by Michalowski et al. (2019) (lime green dashed line), and by Nadolny et al. (in prep.) with the Millenium Simulation (yellow dashed line) within the age range 9.0– 10.1 Gyr. (Bottom) Dust mass as a function of SFR with the da Cunha et al. (2010) relation (black line). Median errorbars for the MS and below-MS galaxies are shown as blue and red crosses, respectively.
Figure 2: Dust-to-stellar mass ratio as a function of stellar age and other galaxy properties. The MS galaxies are marked as blue stars and galaxies below the MS are marked as red open circles. The numbers of selected MS and below-MS galaxies in each panel are shown. The exponential fits are for all 2050 studied galaxies (black line) and for galaxies plotted on individual panel (violet line). The division into three stellar mass bins (top row), galaxy surface density based on the distance to the 5th nearest neighbour (middle row), and redshift (bottom row) are shown.
ellipticals below the MS decline faster with age than their dust masses and the dust mass decline is independent of stellar mass, environment, redshift and central surface luminosity. As suggested by Hjorth et al. (2014) and Michalowski et al. (2019, submitted 2023), morphological quenching is a potential mechanism for departing from the da Cunha et al. (2010) relation. This is consistent with our findings. The process may be responsible for the gravitational stability that stops the collapse of gas clouds, resulting in a slower rate of star formation. At the same time, the process does not change the amount of gas, which means that the dust mass observed in these galaxies does not decrease proportionally with the SFR. Other processes must be responsible for the decline of the dust masses, e.g., the destruction of dust by feedback from older stellar populations (see Michalowski et al. submitted). This includes SNe Type Ia (Li et al., 2020) or planetary nebulae (Conroy et al., 2015).
AGN feedback is also one of the potential mechanisms of ISM removal (Fabian, 2012). Recent studies suggest that quenching is connected with integrated AGN feedback over the lifetime of a galaxy, which is correlated with the supermassive black hole mass, not the instantaneous AGN luminosity (Bluck et al., 2020, 2022, 2023; Piotrowska et al., 2022). This mass is correlated with the bulge mass (Magorrian et al., 1998; Haring & Rix, 2004), which can be approximated by the galaxy central surface luminosity. We did not detect any dependence of
Figure 3: SFR as a function of stellar mass (top) and dust mass as a function of SFR (bottom). Color coding distinguishes MS galaxies (blue stars) and galaxies below the MS (red circles). The star formation main sequence at \(z\) = 0.18 based on Speagle et al. (2014) and the da Cunha et al. (2010) relation are shown (black lines). The median values of SFR, stellar age, and dust mass for eight galaxy age ranges are marked as filled crosses for the MS galaxies, and as filled circles for galaxies below the MS. In addition to the color-coding, the size of the symbol increases with age. The red line shows a power-law fit to the median values of the galaxies below the MS in a form \(\log(M_{dust})=(0.55^{+0.10}_{-0.11})\cdot\log(\mathrm{SFR})+(7.893^{+0.030}_{-0.031})\).
the dust decline on this parameter (Section 3.4), which suggests that integrated AGN feedback is not a dominating mechanism of the dust removal. This is because if the integrated feedback was responsible for the dust removal in our galaxies then galaxies with higher central surface luminosities (more massive black holes and therefore stronger feedback) would exhibit a faster ISM decline. This finding is consistent with our study of the Baldwin, Phillips, & Terlevich (1981, BPT) diagram which shows that only up to 15% of galaxies in our sample host AGNs, which means that they cannot have any significant effect on reducing the dust amount in these galaxies (Ryzhov et al. in prep.).
We did not find any redshift dependency or environmental influence on dust removal, which is inconsistent with external mechanisms of dust removal. The dust removal also does not depend on the stellar mass [in the explored range of \(\log(M_{\rm stellar}/M_{\odot})=10\)-11.5], so the process linearly scales with mass (a bigger galaxy has proportionally more dust and proportionally more efficient dust removal).
We note that the lack of the below-MS elliptical galaxies at or even below the SFR-\(M_{\rm dust}\) relation is not due to a detection limit at \(M_{\rm dust}\). It is \(10^{5.2}\,M_{\odot}\) at \(z=0.05\) and \(10^{6.7}\,M_{\odot}\) at \(z=0.3\)(Michalowski et al., 2019), so if such galaxies existed, they would be detected.
## 5 Conclusions
We analysed ISM and stellar properties of 2 050 dusty elliptical galaxies, which has never been done before on such a large sample. Our findings support morphological quenching as a mechanism behind their SFR decline, as proposed by Hjorth et al. (2014). This is because the galaxies below the MS do not follow the da Cunha et al. (2010) SFR-\(M_{\rm dust}\) relation, having higher dust masses for a given SFR. We also found that they evolve away from this relation as they age, with SFRs decreasing faster than dust masses.
We obtained a dust removal timescale for dusty elliptical galaxies of 2.26 \(\pm\) 0.18 Gyr, which is consistent with the value of 2.5 \(\pm\) 0.4 Gyr found by Michalowski et al. (2019). The dust mass decline does not depend on stellar mass, implying a linear scaling of this effect with galaxy mass. Moreover there is no dependence of the decrease in dust mass on the galaxy environment or redshift, so the dust mass decline is of an internal nature. The independence of the dust decline on the central surface luminosity (a proxy for integrated black hole activity) suggests that AGN feedback is not responsible for the ISM decline.
A.L., M.J.M., J.N., and M.S. acknowledge the support of the National Science Centre, Poland through the SONATA BIS grant 2018/30/E/ST9/00208. This research was funded in whole or in part by National Science Centre, Poland (grant number: 2021/41/N/ST9/02662). For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission. A.L. and C.G. acknowledge the support of the Leon Rosenfeld Foundation. A.L. acknowledges the support of Adam Mickiewicz University in Poznan, Poland via program Uniwersytet Jutra II (POWR.03.05.00-00-Z303/18). O.R. acknowledges the support of the National Science Centre, Poland through the grant 2022/01/4/ST9/00037. This work is supported by a VILLUM FONDEN Investigator grant (project number 16599) and a Young Investigator Grant (project number 25501).
GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalog is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is [http://www.gama-survey.org/](http://www.gama-survey.org/).
|
2303.07965 | Experimental determination of ruthenium L-shell fluorescence yields and
Coster-Kronig transition probabilities | The L-shell fluorescence yields and the Coster-Kronig factors of ruthenium
(and the corresponding uncertainty) were determined for the first time
experimentally by applying radiometrically calibrated instrumentation of the
Physikalisch-Technische Bundesanstalt. The resulting fluorescence yields
($\omega_{L_3}=0.0459(20)$, $\omega_{L_2}=0.0415(26)$,
$\omega_{L_1}=0.0109(9)$) and the Coster-Kronig factors ($f_{23}=0.177(32)$,
$f_{13}=0.528(90)$, $f_{12}=0.173(73)$) agree reasonable well with parts of the
data from the literature. | Nils Wauschkuhn, Katja Frenzel, Burkhard Beckhoff, Philipp Hönicke | 2023-03-14T15:14:03Z | http://arxiv.org/abs/2303.07965v1 | Experimental determination of ruthenium L-shell fluorescence yields and Coster-Kronig transition probabilities
###### Abstract
The L-shell fluorescence yields and the Coster-Kronig factors of ruthenium (and the corresponding uncertainty) were determined for the first time experimentally by applying radiometrically calibrated instrumentation of the Physikalisch-Technische Bundesanstalt. The resulting fluorescence yields (\(\omega_{L_{3}}=0.0459(20)\), \(\omega_{L_{2}}=0.0415(26)\), \(\omega_{L_{1}}=0.0109(9)\)) and the Coster-Kronig factors (\(f_{23}=0.177(32)\), \(f_{13}=0.528(90)\), \(f_{12}=0.173(73)\)) agree reasonable well with parts of the data from the literature.
ruthenium \(\cdot\) fluorescence yield \(\cdot\) Coster-Kronig \(\cdot\) fundamental parameter \(\cdot\) photon-in/photon-out experiment \(\cdot\) XRF
## 1 Introduction
Ruthenium is a versatile and widely used chemical element playing a crucial role in important areas of science and technology. Several applications in the area of semiconductor fabrication or catalysis can be identified, where ruthenium is essential. For extreme ultraviolet lithography masks [1, 2] or as interconnect metal [3, 4, 5] either ruthenium or ruthenium-containing materials are very relevant. In catalysis, ruthenium-based catalysts provide remarkable properties in several different applications [6]. In addition to this, ruthenium is also of relevance for emerging applications for energy storage [7, 8] and medicine [9, 10].
However, if X-ray fluorescence (XRF) based techniques are to be used for determining the ruthenium content in such materials, one quickly finds that the knowledge of the relevant atomic fundamental parameter (FP) data for ruthenium is very limited: for the L-shell FP data in particular, namely the L-subshell fluorescence yields and Coster-Kronig (CK) factors, no experimentally determined data seem to exist so far. Available data in the literature are either purely theoretical or, perhaps even less favorable, only interpolated employing adjacent chemical elements. As these FPs quantitatively describe the process of X-ray fluorescence generation, they are very crucial for most quantification approaches in XRF. Thus, they have a direct influence on the accuracy of the XRF quantification results.
As this is a highly inadequate situation, we applied the PTB's reference-free X-ray spectrometry toolset in order to experimentally determine the fluorescence yields and the Coster-Kronig factors of the \(L\)-subshells of ruthenium for the first time. Based on transmission and fluorescence experiments on thin film samples, such FP data can be derived as already demonstrated for a wide range of chemical elements [11, 12, 13, 14, 15].
## 2 Experimental procedure
For an experimental determination of L-shell fluorescence yields and Coster-Kronig transition probabilities, both fluorescence and transmission experiments with a selective excitation of the three L-subshells on either a free-standing thin foil or a thin coating on a carrier foil are required [11; 12; 13; 15]. In the present work, these experiments were conducted at the four-crystal monochromator (FCM) beamline [16] of BESSY II using an in-house developed vacuum chamber [17]. This chamber is equipped with a silicon drift detector (SDD) whose detection efficiency is radiometrically calibrated and whose response functions were determined experimentally [18]. The employed sample was a highly homogeneous 150 nm ruthenium deposition on a 500 nm Si\({}_{3}\)N\({}_{4}\) membrane. To be able to isolate the ruthenium contribution from the total sample transmission, a blank membrane of nominally identical thickness was also used. Any potential moderate variation in the Si\({}_{3}\)N\({}_{4}\) membrane thickness is only a second-order contribution to the uncertainties. Both samples were positioned in the chamber's center by using an x-y-scanning stage. The angle between the incoming beam and the sample as well as the angle between the sample surface and the detector was set to 45\({}^{\circ}\).
The transmission measurements were conducted in an energy range around the Ru-L absorption edges, between 2.1 keV and 4 keV. Furthermore, X-ray fluorescence measurements were performed in the incident-energy domain between 2.8 keV and 3.4 keV. The established methodology [11; 19; 12; 20] to derive the relevant L-shell FPs from this experimental dataset is described in the following.
According to the Sherman equation [21], the measured count rate of fluorescence photons of a one-elemental foil irradiated under 45\({}^{\circ}\), i.e. the detected fluorescence photon flux \(\Phi_{i}^{d}(E_{0})\), is determined by the fluorescence production cross section \(\sigma_{Li}\) of the considered shell, the incident photon flux \(\Phi_{0}(E_{0})\), the detection efficiency of the SDD, the mass deposition of that element, the attenuation correction factor \(M_{i,E_{0}}\) and the solid angle of detection \(\Omega\) of the SDD.
The self-attenuation correction factor takes into account the attenuation of the incident radiation and of the fluorescence radiation on its way through the sample. The corresponding sample-specific attenuation correction factor \(M_{i,E_{0}}\) is determined by transmission experiments taking advantage of the fact, that the knowledge of the ruthenium deposition thickness \(d\) and its density \(\rho\) is not needed since they appear only in a product with the mass absorption coefficient \(\mu_{S}\) or with the subshell photoionization cross section \(\tau_{S}\). The product \(\mu_{S}\rho d\) is derived from the transmittance data using the Lambert-Beer law.
For incident energies \(E_{0}\) between the \(L_{3}\) edge and the \(L_{2}\) edge, the fluorescence production factor for the \(L_{3}\)-subshell is
\[\sigma_{L3}(E_{0})\rho d=\omega_{L3}\tau_{L3}(E_{0})\rho d=\frac{\Phi_{i}^{d }(E_{0})M_{i,E_{0}}}{\Phi_{0}(E_{0})\frac{\Omega}{4\pi}}, \tag{1}\]
where \(\omega_{L3}\) is the ruthenium L\({}_{3}\) fluorescence yield which should be determined. The sample-specific attenuation correction factor \(M_{i,E_{0}}\) is defined as
\[M_{i,E_{0}}=\frac{(\frac{\mu_{S}(E_{0})\rho d}{\sin\theta_{in}}+\frac{\mu_{S} (E_{i})\rho d}{\sin\theta_{out}})}{(1-\exp[-(\frac{\mu_{S}(E_{0})\rho d}{\sin \theta_{in}}+\frac{\mu_{S}(E_{i})\rho d}{\sin\theta_{out}})])}. \tag{2}\]
Here, \(\theta_{in}\) is the angle between the incident beam and the sample surface, \(\theta_{out}\) is the angle between the sample surface and the SDD detector.
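For illustration, Eq. (2) translates directly into a small helper function; the sketch below assumes that the products \(\mu_{S}(E_{0})\rho d\) and \(\mu_{S}(E_{i})\rho d\) have already been extracted from the transmission data, and the example numbers are placeholders.

```python
# Sketch of the self-attenuation correction factor M_{i,E0} of Eq. (2).
# mu_rho_d_in and mu_rho_d_out stand for mu_S(E0)*rho*d and mu_S(Ei)*rho*d.
import numpy as np

def attenuation_correction(mu_rho_d_in, mu_rho_d_out, theta_in_deg=45.0, theta_out_deg=45.0):
    """Correction for attenuation of the incident and the fluorescence radiation."""
    x = (mu_rho_d_in / np.sin(np.radians(theta_in_deg))
         + mu_rho_d_out / np.sin(np.radians(theta_out_deg)))
    return x / (1.0 - np.exp(-x))

print(attenuation_correction(0.08, 0.12))   # ~1.15 for a weakly absorbing film (placeholder values)
```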
Due to the so-called Coster-Kronig effect, the effective photoionization cross section \(\tau_{\mathrm{eff},Li}(E_{0})\) for L\({}_{3}\) and L\({}_{2}\) is a linear combination involving the more tightly bound subshells, since for photon energies above the excitation energy of the next subshell, holes created in \(L_{2}\) can decay into \(L_{3}\) by ejecting outer electrons. As a result, more holes than the directly created ones exist in \(L_{3}\). The CK factor \(f_{23}\) gives the probability for this to happen, and similar transitions can occur between the \(L_{1}\) and the \(L_{2}\) and \(L_{3}\) shells. So for an incident photon energy above the \(L_{1}\) threshold, the fluorescence production factors are defined as:
\[\tau_{\mathrm{eff},L_{3}}(E_{0}) =\tau_{L3}(E_{0})+f_{23}\tau_{L2}(E_{0})+[f_{13}+f_{12}f_{23}]\tau _{L1}(E_{0}) \tag{3}\] \[\tau_{\mathrm{eff},L_{2}}(E_{0}) =\tau_{L2}(E_{0})+f_{12}\tau_{L1}(E_{0})\] (4) \[\tau_{\mathrm{eff},L_{1}}(E_{0}) =\tau_{L1}(E_{0}) \tag{5}\]
Here, the \(\tau_{L_{i}}(E_{0})\) are the photoionization cross sections of the respective \(L_{i}\) subshell[19], and \(f_{ij}\) are the Coster-Kronig factors. For incident energies below the subsequent subshell, the corresponding subshell photoionization cross section
is zero (\(\tau_{Li}(E_{0})=0\) for \(E_{0}<E_{Li}\)). Therefore, the fluorescence yields are determined for energies \(E_{0}\) above the excitation energy of the considered and below the subsequent subshell.
All relevant observables are accessible from the experimental data as \(\mu_{S}\rho d\) are determined for the relevant energies by measuring the transmission of the ruthenium coating. \(\Phi_{i}^{d}(E_{0})\) is determined by spectral deconvolution of the recorded SDD spectra considering the relevant fluorescence lines and relevant background contributions such as bremsstrahlung. \(\Phi_{0}(E_{0})\) and \(\Omega\) are known because of PTB's calibrated instrumentation [22].
Taking into account the theoretical ratio of scattering to ionization cross sections, which one can take from databases[23], the sample-specific total photoionization cross section \(\tau_{S}\rho d\) can be derived. To isolate the subshell contributions of the different \(\tau_{Li}\), Ebel polynomials [23] for each \(L_{i}\) contribution as well as a total cross section for lower bound shells are scaled into the data (see figure 1). For this scaling process, only the datapoints slightly above each absorption edge are used to minimize the effect of the fine structure.
With these determined \(\tau_{Li}\rho d\), the equations for the fluorescence production cross sections can be solved for \(\omega_{Li}\). By replacing \(\tau_{Li}\) by the effective photoionization cross section according to equations 3-5, eqn 1 can also be applied for energies above the next subshell. Therefore, to determine \(f_{23}\), energies between \(E_{L2}\) and \(E_{L1}\) were considered, see figure 2: with the already determined \(\omega_{L3}\), the modified version of eqn 1 can be solved for \(f_{23}\). \(f_{12}\) is determined in the same way but applied to the fluorescence of the L\({}_{2}\) shell for \(E_{0}>E_{L1}\), with the already determined \(\omega_{L2}\). With these determined \(f_{23}\) and \(f_{12}\), \(f_{13}\) can be determined from the fluorescence of the L\({}_{3}\) shell for energies above \(E_{L1}\).
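Schematically, this sequential determination can be sketched as follows; the helper functions assume that the measured fluorescence production factors \(\sigma_{Li}\rho d\) (right-hand side of Eq. 1) and the scaled subshell products \(\tau_{Li}\rho d\) are already available on the respective energy grids, and all numbers are made-up placeholders of roughly the right order of magnitude, not the measured data.

```python
# Sketch of the sequential determination of omega_L3 and f_23 (Eqs. 1 and 3).
# All input arrays are hypothetical placeholders, not the measured data.
import numpy as np

def fluorescence_yield(sigma_rho_d, tau_rho_d):
    """omega_Li from Eq. (1) in the energy range where only that subshell is excited."""
    return np.mean(sigma_rho_d / tau_rho_d)

def ck_factor_f23(sigma_L3_rho_d, tau_L3_rho_d, tau_L2_rho_d, omega_L3):
    """f_23 from L3 fluorescence for E_L2 < E0 < E_L1, i.e. Eq. (3) without the L1 term."""
    return np.mean((sigma_L3_rho_d / omega_L3 - tau_L3_rho_d) / tau_L2_rho_d)

omega_L3 = fluorescence_yield(np.array([4.6e-4, 4.5e-4]), np.array([1.00e-2, 0.98e-2]))
f23 = ck_factor_f23(np.array([6.0e-4]), np.array([0.90e-2]), np.array([2.0e-2]), omega_L3)
print(f"omega_L3 ~ {omega_L3:.3f}, f_23 ~ {f23:.2f}")
```

The same pattern, applied to the L\({}_{2}\) fluorescence above \(E_{L1}\) and to the L\({}_{3}\) fluorescence above \(E_{L1}\), yields \(f_{12}\) and \(f_{13}\), respectively.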
## 3 Results
The determined fluorescence yields are \(\omega_{L_{3}}=0.0459(20)\), \(\omega_{L_{2}}=0.0415(26)\) and \(\omega_{L_{1}}=0.0109(9)\). The resulting Coster-Kronig factors are \(f_{23}=0.177(32)\), \(f_{13}=0.528(91)\) and \(f_{12}=0.173(73)\). These values are compared with values from the literature in table 1 and figures 3 and 4. The respective uncertainties were calculated via error propagation. The main contributions to the total uncertainty budget of the fluorescence yields arise from the spectral deconvolution (\(\sim\)2 %) and from the photoionization cross sections (\(\sim\)2 %). The uncertainty budget is calculated by applying the reference-free XRF approach for the FP determination, discussed in more detail in [24].
The X-raylib [25] and Krause[26] values of \(\omega_{L3}\) and \(\omega_{L1}\) are slightly outside of the error domain of the values determined in this work. The agreement with respect to the theoretically calculated data of Puri[27] and McGuire[28] is better in the case of \(\omega_{L3}\) but even worse for \(\omega_{L1}\). The data of Perkins[29] as well as the data by Xu[30] behaves very similarly. For \(\omega_{L2}\), all available data from the literature agrees well with the result obtained here. With respect to the Coster-Kronig factors, the tabulated data in X-raylib and the Krause compilation is in good agreement with our results.
Figure 1: \(\tau_{S}(E_{0})\rho d\) determined for the ruthenium thin film: separation of the contributions of lower bound shells (yellow), \(L_{3}\) (green), \(L_{2}\) (red) and \(L_{1}\) (orange)
However, the results are on or slightly outside the boundary of our uncertainty budget for all three CK values. The data by McGuire and Puri is outside of our results considering their uncertainty budget.
## 4 Conclusion
The Coster-Kronig factors and the fluorescence yields of ruthenium are determined experimentally by applying PTB's radiometrically calibrated instrumentation. The values determined are in reasonably good agreement with the values from the literature, although some literature values are slightly outside the uncertainty ranges of this work. The magnitude of the determined uncertainties of this work is much lower than the estimated uncertainties of Krause [26] in the case of the fluorescence yield values. With respect to the Coster-Kronig factors, similar uncertainties were achieved here. In summary, this uncertainty reduction will positively influence the total uncertainties of fundamental parameter-based quantitative X-ray fluorescence experiments.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline & Ru \(\omega_{L3}\) & Ru \(\omega_{L2}\) & Ru \(\omega_{L1}\) \\ \hline this work (XRF) & 0.0459(20) & 0.0415(26) & 0.0109(9) \\ X-raylib [25] (comp.) & 0.043 & 0.040 & 0.012 \\ Krause [26] (comp.) & 0.043(9) & 0.040(10) & 0.012(4) \\ Perkins et. al. [29] (comp.) & 0.045231 & 0.043368 & 0.0084138 \\ McGuire [28] (theory) & 0.0450 & 0.0418 & 0.00774 \\ Puri et. al. [27] (theory) & 0.045 & 0.043 & 0.0083 \\ Xu et. al. [30] (comp.) & & & 0.015 \\ \hline \hline & Ru \(f_{23}\) & Ru \(f_{13}\) & Ru \(f_{12}\) \\ \hline this work (XRF) & 0.177(32) & 0.528(90) & 0.173(73) \\ X-raylib [25] (comp.) & 0.144 & 0.61 & 0.10 \\ Krause [26] (comp.) & 0.148(30) & 0.61(7) & 0.10(2) \\ McGuire [28] (theory) & 0.136 & 0.779 & 0.057 \\ Puri et. al. [27] (theory) & 0.140 & 0.766 & 0.057 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of the experimentally determined Ru-L-subshell fluorescence yields and Coster-Kronig factors with the X-raylib database [25, version 4.0.0] and other values (values from compilations (comp.) and theoretic values) from the literature.
Figure 2: Experimental determination of the Ru-L\({}_{3}\) (left image) and Ru-L\({}_{2}\) (right image) fluorescence yields: They are determined by averaging over all considered energies where only the respective shell is excited (below \(L_{2}\) for \(\omega_{L3}\) and below \(L_{1}\) for \(\omega_{L2}\)). Using these \(\omega_{L3}\) and \(\omega_{L2}\), the Coster-Kronig factors are determined in such a way that the average in the higher energy domains matches the \(\omega_{Li}\) value.
Figure 3: Comparison of the experimentally determined Ru-L-subshell fluorescence yields with values from the literature.
As stated already in previous works of our group [14, 24, 31, 15], the X-raylib database is a reliable reference also in the case of the Ru-L shell fundamental parameters.
Figure 4: Comparison of the experimentally determined Coster-Kronig factor \(f_{23}\) with values from the literature.
## Conflict of interest
There are no conflicts to declare.
## Acknowledgments
This project has received funding from the ECSEL Joint Undertaking (JU) IT2 under grant agreement No 875999. The JU receives support from the European Union's Horizon 2020 research and innovation programme and the Netherlands, Belgium, Germany, France, Austria, Hungary, the United Kingdom, Romania and Israel.
|
2306.03305 | Gravitational waves during Higgs inflation from complex geometrical
scalar-tensor theory of gravity | In this paper we investigate tensor fluctuations of the metric at the end of
a Higgs inflationary period in the context of a recently introduced complex
geometrical scalar-tensor theory of gravity. In our model the Higgs field has a
geometrical origin and the affine connection is determined by the Palatini's
principle. Additionally, we consider an extra contribution to the
tensor-fluctuations equation coming from the vacuum term in the energy momentum
tensor associated to the Higgs field. The Higgs potential is rescaled by the
non-canonicity function of the kinetic term of the field which is modified by
the symmetry group of the background geometry. We obtain a nearly scale
invariant spectrum and a scalar to tensor ratio in agreement with PLANCK 2018
cosmological results. | José Edgar Madriz Aguilar, A. Bernal, F. Aceves de la Cruz, J. A. Licea | 2023-06-05T23:13:57Z | http://arxiv.org/abs/2306.03305v2 | # Gravitational waves during Higgs inflation from complex geometrical scalar-tensor theory of gravity
###### Abstract
In this paper we investigate tensor fluctuations of the metric at the end of a Higgs inflationary period in the context of a recently introduced complex geometrical scalar-tensor theory of gravity. In our model the Higgs field has a geometrical origin and the affine connection is determined by the Palatini's principle. Additionally, we consider an extra contribution to the tensor-fluctuations equation coming from the vacuum term in the energy momentum tensor associated to the Higgs field. The Higgs potential is rescaled by the non-canonicity function of the kinetic term of the field which is modified by the symmetry group of the background geometry. We obtain a nearly scale invariant spectrum and a scalar to tensor ratio in agreement with PLANCK 2018 cosmological results.
pacs: 04.50. Kd, 04.20.Jb, 02.40k, 98.80k, 98.80.Jk, 04.30w
Weyl-Integrable geometry, Higgs scalar field, geometrical scalar-tensor gravity, inflation, gravitational waves.
## I Introduction
Geometrical scalar-tensor theories of gravity are an approach to scalar-tensor theories of gravity that arises as an attempt to obtain an action invariant under the group of symmetries of the background geometry. Specifically, when a Palatini variational principle is adopted for a scalar-tensor theory, the resulting background geometry is non-Riemannian and, as a consequence, the group of symmetries that leaves the non-metricity condition invariant is bigger than the diffeomorphism group, so the original action does not transform as a scalar under this extended group. Thus, in this geometrical approach a new action is proposed in order to be a scalar under the group of symmetries of the background geometry [1]. A previous approach in which the Palatini principle has been incorporated in scalar-tensor theories of gravity can be found for example in [2; 3]. To achieve the invariance of the action under the new group of geometrical symmetries, a gauge vector field is incorporated in the covariant derivative, which in certain scenarios can be identified with the electromagnetic potential. Several topics have been investigated in the framework of this approach, for example Higgs inflation, the formation of the seeds of cosmic magnetic fields, and dark energy scenarios [1; 4; 5]. Moreover, this geometrical approach can also be extended to a new formulation of \(f(R)\) theories obtained by breaking the Weyl gauge symmetry imposed by the background geometry. Cosmological backreaction consequences and CMB imprints of the new contributions have also been investigated [6].
Inflationary models can be considered as a solution to the problems of Big Bang cosmology, and their main predictions have been verified by the acoustic peaks of the CMB primordial temperature anisotropies [7]. An important prediction of inflationary models is the relic background of gravitational waves (GW). Formally, these GW are described as tensor perturbations of the metric generated by the primordial density perturbations during inflation. The fact that LIGO reported the detection of gravitational waves sourced by astrophysical objects has motivated the search for primordial GW coming from inflation. Several experiments for the detection of GW are planned, for
example, the Laser Interferometer Space Antenna (LISA) [8; 9] and the DECI-hertz Interferometer Gravitational Wave Observatory (DECIGO) [10; 11]. Among the zoo of inflationary models, Higgs inflationary models have attracted the interest of cosmologists because the Higgs is the only scalar particle that has been detected. Minimally coupled Higgs inflationary models, in the context of general relativity, have the problem of reproducing the amplitude of density perturbations consistently with observational data, which in general depends on the quartic coupling of the Higgs potential. One way to sort out this problem is the proposal of non-minimally coupled models. However, those models are not free of problems, for example the tree-level unitarity problem that appears when radiative corrections in the standard effective Higgs potential are regarded [12]. The unitarity limit during inflation gives a different energy scale than the one at which the electroweak vacuum tree-level unitarity is violated, which motivates an ultraviolet extension of the model that can lead to problems with the amplitude of primordial density perturbations [13; 14].
One characteristic of non-minimal coupling models of Higgs inflation is that they work in two frames: the Jordan and the Einstein frames. The passage from the Jordan to the Einstein frame is implemented by means of a conformal transformation of the metric of the form \(\bar{g}_{\alpha\beta}=\Omega(h)g_{\alpha\beta}\), with \(h\) being the physical Higgs field and \(g_{\alpha\beta}\) the metric in the Jordan frame. The background geometry is assumed Riemannian and therefore it holds that \(\nabla_{\mu}g_{\alpha\beta}=0\), which is the metricity condition for this kind of geometry. However, due to the conformal transformation of the metric, it is not difficult to verify that in the Einstein frame \(\nabla_{\mu}\bar{g}_{\alpha\beta}\neq 0\). So in this frame the background geometry is no longer Riemannian. This fact has not been considered in the majority of non-minimally coupled Higgs inflationary models, where, in spite of performing the conformal transformation of the metric, the Riemannian geometry is still taken as the background geometry in the Einstein frame. This is, however, a very important issue in geometrical scalar-tensor theories of gravity. The appearance of a gauge vector field that can play the role of an electromagnetic potential and the energy rescaling of the Higgs potential suggested by the symmetry group of the background geometry are examples of consequences of taking into account the change of the background geometry when passing from one frame to another.
In this paper, in the framework of complex geometrical scalar-tensor theories of gravity, we study primordial gravitational waves generated during a Higgs inflationary stage. To achieve our goal the paper is organized as follows. Section I is left for a brief introduction. Section II is devoted to the construction of the invariant action of the model in the context of geometrical scalar-tensor theories of gravity. In section III we derive the field equations of the particular Higgs inflation model. In section IV we study the tensor fluctuations of the metric in order to obtain the power spectrum and the scalar to tensor ratio for primordial gravitational waves at the end of inflation. Finally, section V is left for some conclusions.
## II The action in the complex geometrical scalar-tensor theory
Let us start with a traditional complex scalar-tensor theory, whose action can be written as [1; 4]
\[\mathcal{S}=\int d^{4}x\,\sqrt{-g}\,e^{-(\Phi+\Phi^{\dagger})}\left[\frac{M_{p}^{2}}{2}R+\Omega(\Phi+\Phi^{\dagger})\,\Phi^{,\mu}\Phi^{\dagger}_{,\mu}-V(\Phi+\Phi^{\dagger})\right], \tag{1}\]
where \(g\) denotes the determinant of the metric, \(R\) is the Ricci scalar curvature, \(\Omega(\Phi+\Phi^{\dagger})\) is a well-behaved differentiable function, the dagger \(\dagger\) denotes the transposed complex conjugate, \(V(\Phi+\Phi^{\dagger})\) is the scalar potential and \(M_{p}=(8\pi G)^{-1/2}\) is the reduced Planck mass. Adopting the Palatini variational principle, the background geometry associated to (1) is one of the Weyl-Integrable type characterized by the compatibility condition \(\nabla_{\alpha}g_{\mu\nu}=(\Phi+\Phi^{\dagger})_{,\alpha}\,g_{\mu\nu}\), with \(\nabla_{\sigma}\) denoting the Weyl covariant derivative. The geometrical symmetry group that leaves this condition invariant is the Weyl group of transformations
\[\bar{g}_{\lambda\sigma} = e^{f+f^{\dagger}}g_{\lambda\sigma}, \tag{2}\] \[\bar{\Phi} = \Phi+f, \tag{3}\]
being \(f(x^{\gamma})\) a well behaved complex function of the space-time coordinates. As the action (1) is not an invariant under the Weyl group, an invariant action results to be [1; 4]
\[\mathcal{S}_{inv}=\int\,d^{4}x\,\sqrt{-g}\,e^{-(\Phi+\Phi^{\dagger})}\left[ \frac{M_{p}^{2}}{2}R+\Omega(\Phi+\Phi^{\dagger})\,\Phi^{\cdot\mu}\Phi^{ \dagger}_{;\mu}-e^{-(\Phi+\Phi^{\dagger})}V(\Phi+\Phi^{\dagger})-\frac{1}{4}e^ {(\Phi+\Phi^{\dagger})}H_{\mu\nu}H^{\mu\nu}\right], \tag{4}\]
where the gauge covariant derivative \(\Phi_{:\alpha}=\nabla_{\alpha}\Phi+i\epsilon B_{\alpha}\Phi\) was introduced, with \(B_{\alpha}\) being a gauge field well-defined on the space-time manifold, \(\epsilon\) a coupling constant and \(H_{\mu\nu}=(\Phi B_{\nu})_{,\mu}-(\Phi B_{\mu})_{,\nu}\) a field strength tensor. This
is the action of a geometrical scalar-tensor theory of gravity that is different from the traditional action (1). The invariance of (4) is achieved only when the next transformations are valid
\[\bar{\Phi}\bar{B}_{\lambda} = \Phi B_{\lambda}+i\epsilon^{-1}f_{,\lambda}, \tag{5}\] \[\bar{\Phi}^{\dagger}\bar{B}_{\lambda} = \Phi^{\dagger}B_{\lambda}-i\epsilon^{-1}f_{,\lambda}^{\dagger},\] (6) \[\bar{\Omega}(\bar{\Phi}+\bar{\Phi}^{\dagger}) \equiv \Omega(\bar{\Phi}+\bar{\Phi}^{\dagger}-f-f^{\dagger})=\Omega( \Phi+\Phi^{\dagger}),\] (7) \[\bar{V}(\bar{\Phi}+\bar{\Phi}^{\dagger}) \equiv V(\bar{\Phi}+\bar{\Phi}^{\dagger}-f-f^{\dagger})=V(\Phi+\Phi^{ \dagger}). \tag{8}\]
In terms of the Weyl invariant metric: \(\gamma_{\alpha\beta}=e^{-(\Phi+\Phi^{\dagger})}g_{\alpha\beta}\) and the new fields
\[\varphi = \sqrt{\xi}\,e^{-\Phi}, \tag{9}\] \[A_{\mu} = B_{\mu}\ln\left(\frac{\varphi}{\sqrt{\xi}}\right), \tag{10}\]
the action (4) can be put in the form [1; 4]
\[{\cal S}_{inv}=\int d^{4}x\,\sqrt{-\gamma}\left[\frac{M_{p}^{2}}{2}{\cal R}+ \frac{1}{2}\omega(\varphi\varphi^{\dagger})D^{\mu}\varphi(D_{\mu}\varphi)^{ \dagger}-\hat{V}(\varphi\varphi^{\dagger})-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} \right], \tag{11}\]
where \(D_{\lambda}\varphi={}^{(R)}\nabla_{\lambda}\varphi+i\epsilon A_{\lambda}\varphi\) is an effective Riemannian gauge covariant derivative, \(F_{\alpha\beta}=A_{\beta,\alpha}-A_{\alpha,\beta}=-H_{\alpha\beta}\) is the Faraday tensor, \({\cal R}\) is the Riemannian Ricci scalar and where the next relations are valid
\[\Phi_{;\sigma} = -\frac{1}{\varphi}D_{\sigma}\varphi, \tag{12}\] \[\omega(\varphi\varphi^{\dagger}) = \frac{2\,\Omega\left[\ln(\varphi\varphi^{\dagger}/\xi)\right]}{ \varphi\varphi^{\dagger}},\] (13) \[\hat{V}(\varphi\varphi^{\dagger}) = V\left(\ln\frac{\varphi\varphi^{\dagger}}{\xi}\right), \tag{14}\]
with \(\xi\) being a constant parameter introduced so that the field \(\varphi\) has the correct physical units.
## III The field equations of the Higgs inflationary model
In order to propose a Higgs inflationary model we start considering the Higgs potential
\[V(\Phi\Phi^{\dagger})=\frac{\lambda}{4}\left(\Phi\Phi^{\dagger}-\sigma^{2} \right)^{2}, \tag{15}\]
with \(\lambda=0.129\) and the vacuum expectation value \(\sigma=246\,GeV\)[15; 16]. In terms of the field \(\varphi\) the expression (15) reads
\[V(\varphi\varphi^{\dagger})=\frac{\lambda}{4}\left(\frac{\varphi\varphi^{ \dagger}}{\xi}-\sigma^{2}\right)^{2}. \tag{16}\]
On the other hand, the action (11) has a Riemannian background geometry and thus the fields \(\varphi\) and \(A_{\mu}\) respect the gauge transformations
\[\bar{\varphi} =\varphi\,e^{i\epsilon\theta(x)}, \tag{17}\] \[\bar{A}_{\nu} =A_{\nu}-\theta_{,\mu}, \tag{18}\]
where \(\theta(x)\) is a well-behaved gauge function. In this manner, breaking the symmetry by taking \(\varphi=\varphi^{\dagger}\) and with the gauge choice \(\theta_{,\nu}=A_{\nu}\), the action (11) acquires the form
\[{\cal S}=\int d^{4}x\sqrt{-\gamma}\,\left[\frac{M_{p}^{2}}{2}{\cal R}+\frac{ 1}{2}\omega_{eff}({\cal H}){\cal H}^{,\mu}{\cal H}_{,\mu}-V_{eff}({\cal H}) \right], \tag{19}\]
where \({\cal H}\) is the Higgs field, which obeys \(\varphi(x^{\alpha})=\sqrt{\xi}\,\sigma+{\cal H}(x^{\alpha})\), and \(V_{eff}({\cal H})=V[\sqrt{\xi}\,\sigma+{\cal H}(x^{\alpha})]\). Unitarizing the kinetic term in (19) we arrive at
\[{\cal S}=\int d^{4}x\sqrt{-\gamma}\,\left[\frac{M_{p}^{2}}{2}{\cal R}+\frac{1}{2 }\phi^{,\mu}\phi_{,\mu}-U(\phi)\right], \tag{20}\]
where
\[\phi(x^{\alpha}) = \int\sqrt{\omega_{eff}({\cal H})}\,d{\cal H}, \tag{21}\] \[U(\phi) = V_{eff}({\cal H}(\phi))=\frac{\lambda}{4}\left[\frac{(\sqrt{\xi} \,\sigma+{\cal H}(\phi))^{2}}{\xi}-\sigma^{2}\right]^{2}. \tag{22}\]
The field equations resulting from (20) are then
\[{\cal R}_{\mu\nu}-\frac{1}{2}{\cal R}\,\gamma_{\mu\nu} = M_{p}^{-2}T_{\mu\nu}, \tag{23}\] \[\Box\phi+U^{\prime}(\phi) = 0, \tag{24}\]
where the energy-momentum tensor for the scalar field \(\phi\) is given by
\[T_{\mu\nu}=\phi_{,\mu}\phi_{,\nu}-\frac{1}{2}\gamma_{\mu\nu}\left(\phi^{, \alpha}\phi_{,\alpha}-2U(\phi)\right), \tag{25}\]
and \(\Box\) denotes the d'Alembertian operator.
Now, in order to allow the potential (22) to exhibit a plateau for large enough field values, suitable to describe a period of inflation, we consider the ansatz
\[\omega_{eff}({\cal H})=\frac{1}{\left[1-\beta^{2}(\sqrt{\xi}\,\sigma+{\cal H})^{4}\right]^{5/2}}, \tag{26}\]
with \(\beta\) being a constant parameter with \(M_{p}^{-2}\) units. By using (21), the equation (26) implies a relation between the inflaton field \(\phi\) and the Higgs field \({\cal H}\) of the form
\[\phi=\frac{\sqrt{\xi}\,\sigma+{\cal H}}{[1-\beta^{2}(\sqrt{\xi}\sigma+{\cal H} )^{4}]^{1/4}}. \tag{27}\]
Thus, the inflationary potential (22) reads
\[U(\phi)=\frac{\lambda}{4\xi^{2}}\left(\frac{\phi^{4}}{1+\beta^{2}\phi^{4}} \right). \tag{28}\]
A similar potential is obtained for example in [17]. Once inflation starts it is verified that \(\beta^{2}\phi^{4}\ll 1\) and hence the potential can be approximated by \(U(\phi)\simeq(\lambda/4\xi^{2})\phi^{4}\).
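The algebra connecting Eqs. (26)-(28) can be cross-checked symbolically. The sketch below is an independent verification, not part of the original derivation: it confirms that \((d\phi/d{\cal H})^{2}=\omega_{eff}\), so that (27) is indeed the integral in (21), and that \(\phi^{4}/(1+\beta^{2}\phi^{4})=(\sqrt{\xi}\,\sigma+{\cal H})^{4}\), so that (28) coincides with (22) once the electroweak term \(\sigma^{2}\) is negligible compared with \((\sqrt{\xi}\,\sigma+{\cal H})^{2}/\xi\), as is the case at inflationary field values.

```python
# Symbolic cross-check of Eqs. (26)-(28); u stands for sqrt(xi)*sigma + H.
import sympy as sp

u, beta = sp.symbols('u beta', positive=True)
phi = u / (1 - beta**2 * u**4)**sp.Rational(1, 4)         # Eq. (27)
omega_eff = (1 - beta**2 * u**4)**sp.Rational(-5, 2)      # Eq. (26)

# phi^4/(1 + beta^2 phi^4) = u^4 identically:
phi4 = sp.expand(phi**4)
print(sp.simplify(phi4 / (1 + beta**2 * phi4) - u**4))    # -> 0

# (dphi/du)^2 = omega_eff, checked numerically at a point with beta^2*u^4 < 1:
residual = (sp.diff(phi, u)**2 - omega_eff).subs({u: 0.3, beta: 0.7})
print(abs(float(residual)) < 1e-12)                       # -> True
```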
## IV Tensor fluctuations of the metric
The background of gravitational waves generated at the end of inflation is due to sourceless tensor fluctuations of the metric. However, the energy-momentum tensor (25) can be formally decomposed into a pressureless matter part and a vacuum part [18]. The vacuum component is given by
\[T_{\mu\nu}^{(vac)}=\left(U(\phi)-\frac{1}{2}\phi^{,\alpha}\phi_{,\alpha} \right)\gamma_{\mu\nu}. \tag{29}\]
The perturbed line element has the form
\[ds^{2}=dt^{2}-a^{2}(t)\left(\delta_{ij}+h_{ij}\right)dx^{i}dx^{j}\quad, \tag{30}\]
where the tensor fluctuations of the metric are described by \(h_{ij}(x^{\alpha})\), which satisfies \(tr(h_{ij})=0\) and \(h^{ij}_{\ \,,j}=0\). Hence, it follows from (23) that tensor fluctuations of the metric obey the dynamical equation
\[\delta{\cal R}_{\mu\nu}-\frac{1}{2}{\cal R}^{(b)}\delta\gamma_{\mu\nu}-\frac{1} {2}\delta{\cal R}\,\gamma_{\mu\nu}^{(b)}=M_{p}^{-2}\,T_{\mu\nu}^{(vac)}(\phi_{b }), \tag{31}\]
where we have employed a semiclassical approximation for the inflaton field that reads
\[\phi(x^{\lambda})=\phi_{b}(t)+\delta\phi(x^{\lambda}), \tag{32}\]
where the expectation values satisfy \(<\phi>=\phi_{b}\) and \(<\dot{\delta\phi}>=0\), with \(\phi_{b}(t)\) being the background inflaton field defined on cosmological scales and \(\delta\phi\) describing the quantum fluctuations of the inflaton on small scales. Thus \(\gamma_{\mu\nu}^{(b)}\) is the background metric, \({\cal R}^{(b)}\) accounts for the Ricci scalar evaluated on the background metric and \(\delta{\cal R}\) represents the fluctuations of the Ricci scalar generated by the perturbed metric in (30).
Thus, with the use of (30), the energy density of the vacuum part (29) of the scalar field results in
\[\rho_{vac}=-\left[\frac{1}{2}\dot{\phi}_{b}^{2}-U(\phi_{b})\right]. \tag{33}\]
In order to obtain a positive \(\rho_{vac}\), necessarily \(U(\phi_{b})>(1/2)\dot{\phi}_{b}^{2}\), so the slow-roll condition on the inflaton field must hold. The Ricci scalar has no first-order contributions from the tensor metric fluctuations, and its background value is given by
\[{\cal R}^{(b)}=-6\left(\dot{H}+2H^{2}\right), \tag{34}\]
with \(H(t)\) being the Hubble parameter. Thus, in the traceless-transverse (TT) gauge and in the slow-roll regime, it follows from (31) that the dynamics of the tensor modes is given by the linearized equations
\[\delta{\cal R}_{ij}-\frac{1}{2}{\cal R}^{(b)}\,\delta\gamma_{ij}=M_{p}^{-2}\, U(\phi_{b})\,\delta\gamma_{ij}. \tag{35}\]
With the help of (30) the expression (35) reduces to
\[\ddot{h}_{j}^{i}+3H\dot{h}_{j}^{i}-\frac{1}{a^{2}}\nabla^{2}h_{j}^{i}-2(2\dot{ H}+3H^{2})h_{j}^{i}+\frac{2}{M_{p}^{2}}U(\phi_{b})h_{j}^{i}=0. \tag{36}\]
On the other hand, it follows from (23) and (24) that the background dynamics is given by
\[3H^{2}=M_{p}^{-2}\,U(\phi_{b}), \tag{37}\] \[\ddot{\phi}_{b}+3H\dot{\phi}_{b}+U^{\prime}(\phi_{b})=0. \tag{38}\]
By using (28) for \(\beta^{2}\phi_{b}^{4}\ll 1\) in (37) we obtain a scale factor of the form [1]
\[a(t)=a_{e}\exp\left[\frac{\phi_{e}^{2}}{8M_{p}^{2}}\left(1-\exp\left(4M_{p} \sqrt{\frac{\lambda}{3\xi^{2}}}(t_{e}-t)\right)\right)\right], \tag{39}\]
which at the end of inflation becomes
\[a(t)\simeq a_{e}\exp\left(-\frac{\phi_{e}^{2}}{2M_{p}}\sqrt{\frac{\lambda}{3 \xi^{2}}}t_{e}\right)\exp\left(\frac{\phi_{e}^{2}}{2M_{p}}\sqrt{\frac{\lambda }{3\xi^{2}}}t\right), \tag{40}\]
where \(t_{e}\) denotes the time at the end of inflation, \(a_{e}=a(t_{e})\) and \(\phi_{e}=\phi_{b}(t_{e})\).
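As a cross-check of the slow-roll background behind Eqs. (39)-(40), the sketch below assumes the inflaton trajectory \(\phi_{b}(t)=\phi_{e}\exp\left[2M_{p}\sqrt{\lambda/(3\xi^{2})}\,(t_{e}-t)\right]\), which is not written explicitly in the text but follows from Eqs. (37)-(38) in the quartic regime, and verifies symbolically that together with Eq. (39) it satisfies the Friedmann constraint (37) and the slow-roll scalar field equation.

```python
# Symbolic check that Eq. (39), with the assumed slow-roll trajectory phi_b(t),
# satisfies 3H^2 = U/M_p^2 and 3H*phidot + U'(phi) = 0 for U = lambda*phi^4/(4*xi^2).
import sympy as sp

t, te, phie, Mp, lam, xi = sp.symbols('t t_e phi_e M_p lambda xi', positive=True)
s = sp.sqrt(lam / (3 * xi**2))

phi_b = phie * sp.exp(2 * Mp * s * (te - t))                          # assumed trajectory
ln_a = phie**2 / (8 * Mp**2) * (1 - sp.exp(4 * Mp * s * (te - t)))    # Eq. (39), up to ln(a_e)
H = sp.diff(ln_a, t)

phi = sp.symbols('phi', positive=True)
U = lam / (4 * xi**2) * phi**4
print(sp.simplify(3 * H**2 - U.subs(phi, phi_b) / Mp**2))                          # -> 0
print(sp.simplify(3 * H * sp.diff(phi_b, t) + sp.diff(U, phi).subs(phi, phi_b)))   # -> 0
```

Following the canonical quantization procedure we implement the Fourier expansion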
\[h_{j}^{i}(t,\bar{r})=\frac{e^{-\frac{3}{2}\int H(t)dt}}{(2\pi)^{3/2}}\,\int d ^{3}k\sum_{\alpha=+,\times}{}^{(\alpha)}e_{j}^{i}\left[a_{k}^{(\alpha)}e^{i \bar{k}\cdot\bar{r}}\xi_{k}(t)+a_{k}^{(\alpha)\,\dagger}e^{-i\bar{k}\cdot\bar{ r}}\xi_{k}^{*}(t)\right] \tag{41}\]
with the creation \(a_{k}^{(\alpha)\,\dagger}\) and annihilation \(a_{k}^{(\alpha)}\) operators obeying the algebra
\[\left[a_{k}^{(\alpha)},a_{k^{\prime}}^{(\alpha^{\prime})\,\dagger} \right]=\gamma^{\alpha\alpha^{\prime}}\delta^{(3)}\left(\bar{k}-\bar{k}^{\prime} \right), \tag{42}\] \[\left[a_{k}^{(\alpha)},a_{k^{\prime}}^{(\alpha^{\prime})}\right]= \left[a_{k}^{(\alpha)\,\dagger},a_{k^{\prime}}^{(\alpha^{\prime})\,\dagger} \right]=0, \tag{43}\]
and where the polarization tensor \(e_{ij}\) satisfies the properties
\[{}^{(\alpha)}e_{ij} = {}^{(\alpha)}e_{ji},\quad k^{i}\,{}^{(\alpha)}e_{ij}=0, \tag{44}\] \[{}^{(\alpha)}e_{ii} = 0,\quad{}^{(\alpha)}e_{ij}(-\bar{k})={}^{(\alpha)}e_{ij}^{*}(\bar{k}). \tag{45}\]
Now, following the canonical quantization procedure we impose the commutation relation
\[\left[h_{j}^{i}(t,\bar{r}),\Pi_{i}^{j}(t,\bar{r}^{\prime})\right]=i\delta^{(3)}\left(\bar{r}-\bar{r}^{\prime}\right), \tag{46}\]
where \(\Pi_{ij}=\partial L/\partial\dot{h}^{ij}\) is the canonical conjugate momentum. The lagrangian for gravitational tensor modes has the form
\[L=\frac{M_{p}^{2}a^{3}}{8}\left[\dot{h}_{ij}^{2}-\frac{1}{a^{2}}h_{ij,l}h^{ij,l}+\left(2(2\dot{H}+3H^{2})-\frac{2}{M_{p}^{2}}U(\phi_{b})\right)h_{ij}h^{ij}\right]. \tag{47}\]
Thus, using the conjugate momentum obtained from (47), the relation (46) reduces to
\[\left[h_{j}^{i}(t,\bar{r}),\dot{h}_{j}^{i}(t,\bar{r}^{\prime}) \right]=\frac{4i}{a^{3}M_{p}^{2}}\delta^{(3)}(\bar{r}-\bar{r}^{\prime}). \tag{48}\]
Now, inserting (41) in (48) we obtain
\[\xi_{k}\dot{\xi}_{k}^{*}-\xi_{k}^{*}\dot{\xi}_{k}=\frac{4i}{M_{p}^{2}\tilde{a}_{e}^{3}}, \tag{49}\]
which is the normalization condition for the modes. With the help of (36) and (41) the modes at the end of inflation are governed by the dynamical equation
\[\ddot{\xi_{k}}+\left[\frac{k^{2}}{a_{e}^{2}e^{-2H_{e}t_{e}}}e^{-2H_{e}t}-\frac {33}{4}H_{e}^{2}+\frac{2}{M_{p}^{2}}U_{e}\right]\xi_{k}=0, \tag{50}\]
where
\[U_{e}=\frac{\lambda}{4\xi^{2}}\left(\frac{\phi_{e}^{4}}{1+\beta^{2}\phi_{e}^{4}}\right), \tag{51}\]
and where we have employed (40), with
\[H_{e}=\frac{\phi_{e}^{2}}{2M_{p}}\sqrt{\frac{\lambda}{3\xi^{2}}}\,. \tag{52}\]
By means of (49) and considering the Bunch-Davies vacuum, the normalized solution of (50) is given by
\[\xi_{k}(t)=\frac{1}{M_{p}}\sqrt{\frac{\pi}{\tilde{a_{e}}^{3}H_{e}}}\mathcal{H} _{\nu}^{(2)}\left[Z(t)\right], \tag{53}\]
where \(\mathcal{H}_{\nu}^{(2)}[Z(t)]\) denotes the second kind Hankel function and
\[\nu = \frac{1}{H_{e}}\sqrt{\frac{33}{4}H_{e}^{2}-\frac{2U_{e}}{M_{p}^{2 }}}, \tag{54}\] \[Z(t) = \frac{k}{\tilde{a_{e}}H_{e}}e^{-H_{e}t}, \tag{55}\]
with \(\tilde{a}_{e}=a_{e}\exp\left(-H_{e}t_{e}\right)\).
In this manner, the amplitude of gravitational waves defined by \(<h^{2}>_{IR}=<0|h^{i}_{j}h^{j}_{i}|0>\), on the IR-sector, i.e. on cosmological scales, is given by
\[\langle h^{2}\rangle_{IR}=\frac{e^{-\int 3Hdt}}{2\pi^{2}}\int_{0}^{\epsilon k_{H}} \frac{dk}{k}k^{3}\left(\xi_{k}\xi^{*}_{k}\right)\Big{|}_{IR}, \tag{56}\]
where \(\epsilon=k^{IR}_{max}/k_{p}\ll 1\) is a dimensionless parameter, with \(k^{IR}_{max}=k_{H}(t_{r})\) being the wave number associated with the Hubble radius at the time \(t_{r}\) when the modes re-enter the horizon near the end of inflation, and \(k_{p}\) the Planckian wave number. For example, \(\epsilon\) varies from \(10^{-5}\) to \(10^{-8}\) for a typical Hubble parameter during inflation of the order \(H\simeq 0.5\times 10^{-9}\,M_{p}\), which corresponds to a number of e-foldings \(N\simeq 63\).
Now, on cosmological scales and at the end of the inflationary period we can employ the IR asymptotic approximation formula
\[\mathcal{H}^{(2)}_{\nu}[Z]\simeq\frac{i}{\pi}\Gamma(\nu)\left(\frac{Z}{2}\right)^{-\nu}. \tag{57}\]
Hence it follows from (53) to (56) that
\[\langle h^{2}\rangle_{IR}=\frac{2^{2\nu}}{\pi^{3}}\frac{\Gamma^{2}(\nu)}{M_{p }^{2}}\frac{H_{e}^{2}}{(\tilde{a}_{e}H_{e})^{3-2\nu}}e^{(2\nu-3)H_{e}t}\int_{0 }^{\epsilon k_{H}}\frac{dk}{k}k^{3-2\nu}, \tag{58}\]
where according to the modes equation (50) the wave number associated to the horizon is given by
\[k_{H}=\tilde{a_{e}}\sqrt{\frac{11}{2}\dot{H}+\frac{33}{4}H_{e}^{2}+\frac{2}{ M_{p}^{2}}U_{e}}. \tag{59}\]
In this manner we obtain a power spectrum derived from (58) of the form
\[P_{h}(k)=\frac{2^{2\nu+2}}{\pi}\frac{\Gamma^{2}(\nu)}{M_{p}^{2}}\left(\frac{H _{e}}{2\pi}\right)^{2}e^{(2\nu-3)H_{e}t}\left(\frac{k}{\tilde{a}_{e}H_{e}} \right)^{3-2\nu}. \tag{60}\]
It is not difficult to verify that a nearly scale-invariant power spectrum of the Harrison-Zeldovich type can be achieved from (54) and (60) when \(U_{e}\simeq 3M_{p}^{2}H_{e}^{2}\). In this particular case the formula (60) reduces to
\[P_{h}(k)|_{\nu\simeq 3/2}\simeq\frac{2^{3}}{M_{p}^{2}}\left(\frac{H_{e}}{2\pi} \right)^{2}. \tag{61}\]
The spectral index is given by
\[n_{s}=4-\frac{2}{H_{e}}\sqrt{\frac{33}{4}H_{e}^{2}-\frac{2U_{e}}{M_{p}^{2}}}. \tag{62}\]
The Planck 2018 results indicate that the limits for the spectral index are \(n_{s}=0.9649\pm 0.0042\) [19]. In figure [1a] we show the behavior of the spectral index \(n_{s}\) versus \(U_{e}\). It can be seen that the observational values for \(n_{s}\) are obtained for \(U_{e}\in\left[7.4258\times 10^{-19},7.4417\times 10^{-19}\right]M_{p}^{4}\).
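This interval can be verified directly from Eq. (62). The sketch below assumes \(H_{e}=0.5\times 10^{-9}\,M_{p}\), the typical value quoted in the previous section, since the value used for figure [1a] is not stated explicitly.

```python
# Numerical check of Eq. (62), n_s = 4 - (2/H_e)*sqrt(33*H_e^2/4 - 2*U_e/M_p^2),
# in units with M_p = 1 and with the assumed value H_e = 0.5e-9 M_p.
import numpy as np

H_e = 0.5e-9
U_e = np.array([7.4258e-19, 7.4417e-19])            # interval endpoints quoted in the text

n_s = 4.0 - (2.0 / H_e) * np.sqrt(33.0 / 4.0 * H_e**2 - 2.0 * U_e)
print(n_s)   # ~[0.961, 0.969], matching n_s = 0.9649 +/- 0.0042
```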
On the other hand, as it was shown in [1], the power spectrum associated to the quantum fluctuations of the inflaton is given by
\[P_{\delta\phi}(k)=\frac{2^{2\nu-1}}{\pi}\Gamma^{2}(\nu)\left(\frac{H_{e}}{2\pi }\right)^{2}e^{(2\nu-3)H_{e}t_{e}}\left(\frac{k}{\tilde{a}_{e}H_{e}}\right)^{3 -2\nu}. \tag{63}\]
Thus the power spectrum for curvature perturbations \(P_{\mathcal{R}}(k)=\frac{1}{2\epsilon}\frac{P_{\delta\phi}}{M_{p}^{2}}\) results
\[P_{\mathcal{R}}(k)=\frac{2^{2\nu}}{4\pi\epsilon}\frac{\Gamma^{2}(\nu)}{M_{p}^{ 2}}\left(\frac{He}{2\pi}\right)^{2}e^{(2\nu-3)H_{e}t}\left(\frac{k}{\tilde{a}_ {e}H_{e}}\right)^{3-2\nu}, \tag{64}\]
where we have employed the slow-roll parameter \(\epsilon=\frac{M_{p}^{2}}{2}\left(\frac{U^{\prime}}{U}\right)^{2}\). Hence the scalar to tensor ratio \(r=\frac{P_{h}}{P_{\mathcal{R}}}\) in terms of the background inflaton field has the form
\[r=\frac{128M_{p}^{2}}{\phi_{b}^{2}(1+\beta^{2}\phi_{b}^{4})^{2}}. \tag{65}\]
According to the Planck 2018 results, \(r<0.056\) [19]. In figure [1b] we show a plot of \(r\) vs \(\phi_{b}\) for different values of \(\beta\). In general, depending on the \(\beta\) parameter, the observational range of values for \(r\) is achieved for an interval of values of \(\phi_{b}\). For example, when \(\beta=40\,M_{p}^{-2}\) the scalar to tensor ratio results in \(r=0.04\) for \(\phi_{b}=0.511\,M_{p}\). Thus for this value of \(\beta\) the observational range \(r<0.056\) is achieved when \(\phi_{b}>0.4945\,M_{p}\). Hence, as shown in figure [1b], for increasing values of \(\beta\) the observational range for \(r\) is reached for decreasing values of \(\phi_{b}\).
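These numbers follow directly from Eq. (65). The sketch below, in units with \(M_{p}=1\), evaluates \(r\) at \(\phi_{b}=0.511\) for \(\beta=40\) and solves for the field value at which \(r=0.056\).

```python
# Check of Eq. (65): r = 128*M_p^2/(phi_b^2*(1 + beta^2*phi_b^4)^2), with M_p = 1.
import numpy as np
from scipy.optimize import brentq

def tensor_ratio(phi_b, beta=40.0):
    return 128.0 / (phi_b**2 * (1.0 + beta**2 * phi_b**4)**2)

print(round(tensor_ratio(0.511), 3))                               # ~0.040
phi_threshold = brentq(lambda p: tensor_ratio(p) - 0.056, 0.3, 1.0)
print(round(phi_threshold, 4))                                     # ~0.4945, so r < 0.056 for larger phi_b
```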
## V Conclusions
In this paper we have investigated the background relic of gravitational waves generated during a Higgs inflationary stage in the context of a geometrical scalar-tensor theory of gravity. In this model the Higgs scalar field has a geometrical origin because the background geometry is of the Weyl-integrable type, where the Weyl scalar field is related to the Higgs field. The background geometry is assigned by the Palatini variational principle. The physical field equations for the inflaton Higgs scalar field are obtained by recasting the original action of the theory in terms of the so-called invariant action.
The primordial gravitational waves are described by tensor fluctuations of the metric. In general, this kind of fluctuation is considered sourceless. One important difference with respect to other approaches is that in our model we have decomposed the energy-momentum tensor into a pressureless matter component plus a vacuum component, and hence the vacuum part has been taken into account in the formulation of the dynamical equation that governs the tensor fluctuations of the metric.
As a consequence of taking into account the symmetry group of the Weyl-Integrable background geometry of the original action, the standard Higgs potential is rescaled by means of a function that makes the kinetic term
non-canonical, which is determined by two factors: the Weyl group of symmetries and the requirement that such a function must create a sufficiently flat plateau in the effective potential to achieve an initial Hubble parameter of the order \(H_{0}\simeq 10^{11}-10^{12}\) GeV, which allows for enough inflation in agreement with PLANCK data [19; 20].
Additionally, the slow-roll conditions that are typically imposed are obtained here through the requirement that the energy density of the background inflaton field be positive. Hence, the rescaled potential ends up depending on the parameter \(\beta\) (see eq.(28)). We obtain a nearly scale-invariant power spectrum for gravitational waves, in agreement with PLANCK data for a range of values of the effective potential at the end of inflation \(U_{e}\) given by \([7.4258\times 10^{-19},7.4417\times 10^{-19}]M_{p}^{4}\). The amplitude of the spectrum is proportional to \((H/2\pi)^{2}\) with \(H\) evaluated at the end of inflation. The scalar to tensor ratio \(r\) turns out to depend on the \(\beta\) parameter. Thus, for increasing values of \(\beta\), the observational values of \(r\) (\(r<0.056\) according to PLANCK data) are reached for decreasing values of \(\phi_{b}\) (see figure [1b]). For example, for \(\beta=40\,M_{p}^{-2}\) we obtain \(r=0.04\).
## Acknowledgements
J. E. Madriz-Aguilar, A. Bernal, F. Aceves and J. A. Licea acknowledge CONACYT Mexico and Centro Universitario de Ciencias Exactas e Ingenierias of Guadalajara University for financial support.
|
2307.14550 | Irreversible evolution, obstacles in fitness landscapes and persistent
drug resistance | We use fitness graphs, or directed cube graphs, for analyzing evolutionary
reversibility. The main application is antimicrobial drug resistance.
Reversible drug resistance has been observed both clinically and
experimentally. If drug resistance depends on a single point mutation, then a
possible scenario is that the mutation reverts back to the wild-type codon
after the drug has been discontinued, so that susceptibility is fully restored.
In general, a drug pause does not automatically imply fast elimination of drug
resistance. Also if drug resistance is reversible, the threshold concentration
for reverse evolution may be lower than for forward evolution. For a
theoretical understanding of evolutionary reversibility, including threshold
asymmetries, it is necessary to analyze obstacles in fitness landscapes. We
compare local and global obstacles, obstacles for forward and reverse
evolution, and conjecture that favorable landscapes for forward evolution
correlate with evolution being reversible. Both suboptimal peaks and plateaus
are analyzed with some observations on the impact of redundancy and
dimensionality. Our findings are compared with laboratory studies on
irreversible malarial drug resistance. | Kristina Crona | 2023-07-27T00:17:48Z | http://arxiv.org/abs/2307.14550v1 | # Irreversible evolution, obstacles in fitness landscapes and persistent drug resistance
###### Abstract.
We use fitness graphs, or directed cube graphs, for analyzing evolutionary reversibility. The main application is antimicrobial drug resistance. Reversible drug resistance has been observed both clinically and experimentally. If drug resistance depends on a single point mutation, then a possible scenario is that the mutation reverts back to the wild-type codon after the drug has been discontinued, so that susceptibility is fully restored. In general, a drug pause does not automatically imply fast elimination of drug resistance. Also if drug resistance is reversible, the threshold concentration for reverse evolution may be lower than for forward evolution. For a theoretical understanding of evolutionary reversibility, including threshold asymmetries, it is necessary to analyze obstacles in fitness landscapes. We compare local and global obstacles, obstacles for forward and reverse evolution, and conjecture that favorable landscapes for forward evolution correlate with evolution being reversible. Both suboptimal peaks and plateaus are analyzed with some observations on the impact of redundancy and dimensionality. Our findings are compared with laboratory studies on irreversible malarial drug resistance.
## 1. Introduction
Penicillin was introduced on a large scale in 1940 and the spread of penicillin resistance was documented already in 1942 (Lobanovska and Pilla, 2017). The development of antimicrobial drug resistance is an evolutionary process, and so is the reverse adaptation back to the drug-free environment. Reverse evolution that restores the original genotype has been observed for HIV patients (Castro et al., 2020; Yang et al., 2015), and in experiments for other pathogens (Bjorkman et al., 2000). However, expectations of fast reversal of antimicrobial resistance after a drug pause have not always been realized. Failures to restore susceptibility include nation-wide long-term programs (Sundqvist et al., 2010; Enne et al., 2001).
Costly drug resistance is not likely to persist in a drug-free environment. If the original wild-type is available then regrowth may restore susceptibility (no evolution is necessary). In addition to extinction and reversion, a possible fate for a resistant genotype is that new mutations accumulate, sometimes referred to as compensatory mutations. Such mutations decrease the cost of resistance in the drug free environment, and there is usually an impact on susceptibility as well.
Evolution is described as genotypically irreversible if the population cannot adapt back to the original genotype, and phenotypically irreversible if it cannot adapt back to the original phenotype. Because of genetic redundancy in the sense that different sequences code for the same phenotype, evolution can be phenotypically reversible
even if it is genotypically irreversible (Kaltenback, 2015). An extensive laboratory study on costly drug resistance for 12 different antibiotics showed partly successful adaptation to the drug free environment through compensatory mutations (Dunai et al., 2019). However, neither the original genotype, nor the phenotype, was restored for any of the drugs, and in most cases the new genotypes had clearly lower fitness than the original wild-type. In contrast, for some experiments of similar type the original genotype was restored in the majority of the trials (Bjorkman et. al, 2000), or at least in some proportion of the trials (Maisnier-Patin et al., 2002; Nagaev et al., 2001). If the original genotype cannot be restored in experiments, the reason could be that evolution is genotypically irreversible. Another possible explanation is that an abundance of available compensatory mutations make genotypic reversion unlikely. For more background, reversal of drug and pesticide resistance is reviewed in Allen et al. (2017).
Here, the main topic is genotypically irreversible evolution and obstacles that cause irreversibility. For a thorough analysis it is useful to consider fitness landscapes. In brief, the fitness of a genotype is a measure of its expected contribution to the next generation. A fitness landscape assigns a fitness value \(w_{g}\), i.e., a non-negative number, to each genotype \(g\). Fitness can be thought of as a height coordinate in the landscape. The level of drug resistance approximates fitness for a pathogen under drug exposure.
Throughout the paper we consider fitness landscapes for biallelic \(L\)-locus systems. For instance, if \(L=2\) the genotypes are represented as \(00\), \(10\), \(01\) and \(11\), where \(00\) denotes the wild-type. According to conventional assumptions, the evolutionary process for a population can be represented as a walk in the landscape where each step increases the height, i.e., the process consists of a sequence of single point mutations \(0\mapsto 1\) or \(1\mapsto 0\) such that each mutation increases fitness. Unless otherwise stated, no two genotypes have the same fitness. A genotype \(g\) is defined as a peak if all its mutational neighbors (genotypes that differ from \(g\) at a single locus) have lower fitness than \(g\).
For an overview of evolutionary potential it is convenient to use fitness graphs (Figure 1). A fitness graph is a directed \(L\)-cube graph such that each edge is directed toward the genotype of higher fitness. A path in the graph that respects the arrows is referred to as an accessible evolutionary path.
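Fitness graphs and accessible paths are also straightforward to explore computationally. The following Python sketch is for illustration only (the fitness values are made up and not taken from any study); it builds the directed cube graph implicitly, lists peaks, and tests whether an accessible path exists between two genotypes.

```python
from itertools import product

def neighbors(g):
    """All genotypes that differ from g at exactly one locus."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]

def peaks(w):
    """Genotypes whose mutational neighbors all have strictly lower fitness."""
    return [g for g in w if all(w[n] < w[g] for n in neighbors(g))]

def accessible(w, start, goal):
    """Is there a path from start to goal along strictly fitness-increasing steps?"""
    stack, seen = [start], {start}
    while stack:
        g = stack.pop()
        if g == goal:
            return True
        for n in neighbors(g):
            if n not in seen and w[n] > w[g]:
                seen.add(n)
                stack.append(n)
    return False

# Illustrative two-locus landscape with reciprocal sign epistasis (cf. Figure 2B).
w = {(0, 0): 1.0, (1, 0): 0.4, (0, 1): 0.3, (1, 1): 0.8}
print(peaks(w))                        # [(0, 0), (1, 1)]: two peaks
print(accessible(w, (0, 0), (1, 1)))   # False: forward evolution is blocked
print(accessible(w, (1, 1), (0, 0)))   # False: reverse evolution is blocked
```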
Both forward evolution from the wild-type \(00\) to \(11\) and reverse evolution from \(11\) to \(00\) are straightforward for the fitness graphs in Figure 1 (the peaks are marked red). Mutations can accumulate in any order in the new environment, and the same is true back in the original environment. Graph 2A is less favorable than 1A, since only one trajectory is accessible from \(00\) to \(11\), and 2B has no accessible trajectory from \(00\) to \(11\). For general \(L\), the most favorable fitness graph is similar to Figure 1. Informally, the landscape is represented by an "all arrows up" graph.
The obstacles displayed in Figure 2 depend on epistasis, or gene interactions. The three fitness graphs have sign epistasis, i.e., the sign of the effect of a mutation, whether positive or negative, depends on background. The graph 2B, characterized by two
peaks, is said to have reciprocal sign epistasis. For graphs 2B and 2C, every other genotype is a peak, and the same construction works for any \(L\) (Haldane, 1931). Graphs with 50\% peak density are called Haldane graphs (see also Crona et al. (2023)).
Note that in the absence of sign epistasis the fitness graph can always be described as an all arrows up graph. For more background, sign epistasis was introduced in Weinreich et al. (2005), and early work on sign epistasis, fitness graphs and related rank order based concepts includes Poelwijk et al. (2007); De Visser et al. (2009); Poelwijk et al. (2011); Crona et al. (2013), see also Crona (2014). A main topic concerns the relation between local properties (such as reciprocal sign epistasis) and global properties (such as peaks in the global fitness landscapes), with recent progress in Riehl et al. (2022); Saona et al. (2022).
All arrows up graphs and Haldane graphs are, in a sense, opposite extremes. For the sake of completeness, in addition to the all arrows up graph there is a second - perhaps
Figure 1. The fitness graph for the new environment (A) is favorable since mutations can accumulate in any order, i.e., both trajectories from the wild-type \(00\) to \(11\) are accessible. Reverse evolution from \(11\) to \(00\) is also straightforward (B).
Figure 2. Three graphs with sign epistasis. Graph B and all two-locus subsystems of C have reciprocal sign epistasis. Both B and C are Haldane graphs, i.e., they represent fitness landscapes such that 50 percent of the genotypes are peaks.
more exotic - type of fitness landscape that implies straightforward evolution. If one assumes that fitness is capped at some number \(M\) in a landscape, and that all genotypes have several neighbors with fitness \(M\), then adaptation is not difficult. From any starting point there exists a single point mutation that results in maximal fitness. A slightly more general concept is convenient. We define a fitness landscape as a _hop-to-top landscape_ if fitness is capped at some number \(M\), and if for (almost) all genotypes there is a short accessible path to a genotype of fitness at least \(M(1-\epsilon)\), where \(\epsilon\) is some small number.
For instance, a hop-to-top landscape can be constructed by drawing fitness values from a uniform distribution between \(0\) and \(M\) and assigning them randomly to genotypes. For \(L=10,000\) a genotype is then expected to have 100 neighbors with fitness at least \(0.99M\). Under the same assumptions except that \(L\) is slightly smaller, one still gets a hop-to-top landscape, whereas a sufficiently low \(L\)-value results in an unfavorable landscape (both claims are easy to verify). It has been proposed that similar constructions (for large \(L\)) are relevant for speciation (Gavrilets, 1997); see also the results section.
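The expected number of high-fitness neighbors quoted above is easy to check by simulation; the snippet below is only an illustration of that back-of-the-envelope calculation.

```python
import random

M, L, trials = 1.0, 10_000, 200
# Fitness of each of the L neighbors is drawn uniformly on (0, M); count how many
# reach at least 0.99 * M, and average over independent trials.
counts = [sum(random.uniform(0, M) >= 0.99 * M for _ in range(L)) for _ in range(trials)]
print(sum(counts) / trials)   # close to the expected value L * 0.01 = 100
```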
Haldane graphs, all arrows up graphs and hop-to-top landscapes are theoretical constructions that can be used as a starting point for discussing obstacles in fitness landscapes and evolutionary reversibility. However, few empirical fitness landscapes have proved to belong to the extremes. Obstacles have to be considered in more general settings.
From a fitness landscape for antimicrobial drug resistance one can determine if evolution is reversible. Whether or not reversion is plausible depends on other factors as well, including population size and mutation frequency (Pennings et al., 2022; Maisnier-Patin et al., 2002). Such factors will not be discussed here. Neither will we discuss more elaborate methods for restoring the original wild-type that depend on sequences of drugs (Mira et al., 2015; Goulart et al., 2013; Tran and Yang, 2017).
## 2. Results
A case of irreversible malarial drug resistance was identified in the study of Ogbunugafor and Hartl (2016). Several drug concentrations were considered in the study. Figure 3 shows the fitness graph for the highest drug concentration, and Figure 4 for the drug-free environment. The genotype 1111 is the global peak and 0000 has the lowest fitness in Figure 3, whereas 0000 is the global peak in Figure 4. As is clear from Figure 4, there is no accessible path from \(1111\) to \(0000\). Consequently, evolution is irreversible. (It is of course theoretically possible that a longer accessible path from \(1111\) to \(0000\) exists that includes new mutations in addition to the reversions.) We will return to the example repeatedly throughout the paper.
Sections 2.1 and 2.2 analyze suboptimal peaks and plateaus in fitness landscapes, Section 2.3 discusses reversibility, and Section 2.4 the interplay between reversibility and fluctuating drug concentrations.
### 2.1. Suboptimal peaks
The simplest example of a suboptimal peak arises if a double mutant with higher fitness than the wild-type combines two detrimental single mutations (Figure 2B). For general \(L\), any two-locus subsystem with reciprocal sign epistasis constitutes a global obstacle if it is independent of background. For instance, assume
Figure 4. The genotype 0000 is the global peak, whereas 1111 has low fitness in the drug-free environment. Evolution is irreversible since there is no accessible path from the genotype \(1111\) to \(0000\).
Figure 3. The 16 genotypes represent all combinations of four mutations that individually increase malarial drug resistance. The genotype 1111 is the global peak and the wild-type 0000 has the lowest fitness in the drug environment. There are several accessible paths from 0000 to 1111.
that
\[w_{00s}>w_{11s}>w_{10s},w_{01s}\quad*\]
for all \(s\) of length \(L-2\). Then it is clear that some genotype of the form \(g=11\tilde{s}\) is a suboptimal peak. (Indeed, if \(11\tilde{s}\) has maximal fitness among all genotypes of the form \(11s\), then \(11\tilde{s}\) is a peak with lower fitness than \(00\tilde{s}\).)
A variant of the same theme (bad+bad=good) is that the combined effect of replacing two blocks (sets) of loci is positive, whereas the replacement of each block alone is negative. Such a system is sometimes referred to as a lock-key system (De Vos et al., 2015). Similar to the condition \(*\), local obstacles constitute global obstacles if independent of background.
**Definition 2.1**.: For a block of length \(L^{\prime}<L\), assume that \(\prec\) is an order of the genotypes in the \(L^{\prime}\)-locus subsystem. The block is _rank order preserving_ if the following condition holds:
\[w_{gs}>w_{g^{\prime}s}\text{ if }g\succ g^{\prime},\]
for \(g,g^{\prime}\) in the \(L^{\prime}\)-locus subsystem.
The following observation is immediate (and analogous to the implications of \(*\)).
**Observation 2.2**.: For each peak in a rank order preserving block there is a corresponding peak in the global \(L\)-locus system.
Figure 5 illustrates Observation 2.2. The two loci in the middle constitute a rank order preserving block. The four two-locus subsystems obtained by fixing the outer loci, each containing genotypes of the form
\[*00\star,*10\star,*01\star,*11\star\]
have reciprocal sign epistasis (marked by blue arrows). Observation 2.2 implies that the two peaks \(*00\star,*11\star\) in the rank order preserving block correspond to two peaks in the global system. The peaks in the global system are \(0000\) and \(0110\).
The key property of rank order preserving blocks holds in a more general setting.
**Definition 2.3**.: For a block of length \(L^{\prime}<L\), assume that \(\prec\) is an order of the genotypes in the \(L^{\prime}\)-locus subsystem. A block of length \(L^{\prime}<L\) is a _graph preserving block_ if for any two mutational neighbors \(g\) and \(g^{\prime}\) in the \(L^{\prime}\)-locus subsystem,
\[w_{gs}>w_{g^{\prime}s}\text{ if }g\succ g^{\prime}\]
Note that the condition implies that the fitness graphs for the \(L^{\prime}\)-locus subsystems defined by the graph preserving block are independent of background (see the four marked subgraphs in Figure 5).
**Observation 2.4**.: For each peak in a graph preserving subsystem, there is a corresponding peak in the global \(L\)-locus system.
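Definition 2.3 can be checked directly by enumeration for small systems. The sketch below is purely illustrative (written for small \(L\) and complete landscapes); it tests whether a given block is graph preserving by verifying that every within-block comparison of mutational neighbors has the same direction on all backgrounds. A rank order preserving block can be tested in the same way by comparing all pairs of block genotypes rather than only mutational neighbors.

```python
from itertools import product

def is_graph_preserving(w, L, block):
    """w maps length-L bit tuples to fitness; block is a list of locus indices."""
    rest = [i for i in range(L) if i not in block]
    for g in product((0, 1), repeat=len(block)):
        for j in range(len(block)):                      # within-block neighbor g'
            gp = g[:j] + (1 - g[j],) + g[j + 1:]
            signs = set()
            for s in product((0, 1), repeat=len(rest)):  # every background
                full_g, full_gp = [0] * L, [0] * L
                for k, i in enumerate(block):
                    full_g[i], full_gp[i] = g[k], gp[k]
                for k, i in enumerate(rest):
                    full_g[i] = full_gp[i] = s[k]
                signs.add(w[tuple(full_g)] > w[tuple(full_gp)])
            if len(signs) > 1:       # the comparison flips with the background
                return False
    return True
```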
A closer look at the study of malarial drug resistance for the drug-free environment (Figure 4) reveals a pattern that is very similar to the graph preserving block shown in Figure 5. For an easy comparison, Figure 6 is a copy of Figure 4 with the relevant
arrows marked blue. The arrows agree with Figure 5 except for a single arrow marked red. The red arrow leads directly to a suboptimal peak (the arrow could otherwise have served as an escape). It is "almost true" that a graph preserving block prevents reverse evolution.
As demonstrated, the existence of a single graph preserving block with suboptimal peaks implies that there are suboptimal peaks in the global fitness landscape. The following schematic example illustrates the impact of multiple rank order preserving blocks.
**Example 2.5**.: Assume that the loci in an \(L\)-locus system can be partitioned into blocks consisting of two loci, where for each block (using informal notation) \(w_{11}>w_{00}>w_{10}>w_{01}\). For \(L=6\) there are eight peaks:
\[000000,110000,001100,000011,111100,110011,001111,111111\]
Evolution from \(000000\) to \(111111\) requires passing three obstacles, i.e., moving from \(00\) to \(11\) for each one of the three blocks. Analogously, for \(L=40\) there are about a million peaks, about a trillion genotypes, and 20 obstacles. In general, the peak density \(2^{-L/2}\) decreases with \(L\). However, it is fair to say that the landscapes are equally (un)favorable for all \(L\) since the number of obstacles (\(L/2\)) is proportional to \(L\).
**Observation 2.6**.: If the peak density decreases with \(L\) for a class of fitness landscapes, it does not follow that the landscapes become more favorable as \(L\) increases.
Figure 5. The subsystems determined by the central block of length 2, marked with blue arrows, have reciprocal sign epistasis on all backgrounds. The obstacle prevents evolution from 1111 to the global peak 0000. All arrows except the blue ones point toward 0000.
**Observation 2.7**.: If the \(L\) sequence (the genome) can be partitioned into graph preserving blocks, then the number of peaks equals the product of the number of peaks in each block.
Proof.: Let \(b_{1},\ldots,b_{r}\) be the blocks and assume that \(b_{i}\) has \(n_{i}\) peaks. Let \(g=g_{1}\ldots g_{r}\) be a genotype such that \(g_{i}\in b_{i}\). Then \(g\) is a peak if and only if each \(g_{i}\) is a peak in the block \(b_{i}\). Consequently, there are in total \(n_{1}\times\cdots\times n_{r}\) peaks in the global fitness landscape.
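Observation 2.7 is easy to confirm by brute force for small systems. In the illustrative sketch below (fitness values are random, not empirical), global fitness is the sum of independent block contributions, so the blocks are rank order preserving; the number of global peaks then equals the product of the per-block peak counts.

```python
import random
from itertools import product

def neighbors(g):
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]

def count_peaks(w):
    return sum(all(w[n] < w[g] for n in neighbors(g)) for g in w)

random.seed(1)
block_len, n_blocks = 3, 3
blocks = [{g: random.random() for g in product((0, 1), repeat=block_len)}
          for _ in range(n_blocks)]

# Global fitness = sum of the block contributions.
w = {g: sum(b[g[i * block_len:(i + 1) * block_len]] for i, b in enumerate(blocks))
     for g in product((0, 1), repeat=block_len * n_blocks)}

per_block = [count_peaks(b) for b in blocks]
product_of_blocks = 1
for n in per_block:
    product_of_blocks *= n
print(per_block, product_of_blocks, count_peaks(w))   # the last two numbers agree
```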
Consider the category of fitness landscapes such that the genome can be partitioned into rank order preserving blocks. The block landscapes introduced in Perelson and Macken (1995) belong to the category. Specifically, the landscapes are defined so that each block contributes independently to fitness, and fitness values within blocks are assigned randomly. Observation 2.7 for block landscapes was stated in Schmiegelt and Krug (2014). Landscapes in the category are similar in that obstacles in each block have a global impact (in contrast to for instance hop-to-top landscapes where subsystems with reciprocal sign epistasis have no relevance). For landscapes in the category, the problem of finding the global peak in the \(L\)-locus system is equivalent to the combined problem of finding the optimal sequence for each block (as in Example 2.5). It follows that adding blocks, all else equal, does not make the fitness landscapes more favorable.
Fitness landscapes such that the \(L\)-sequence can be partitioned into graph preserving blocks differ substantially from the rank order preserving case. The reason
Figure 6. The fitness graph agrees with Figure 4. Similar to Figure 5 all arrows right below genotypes of the form \(*11\)\(*\) (marked blue) point up, with one exception (marked red). The red arrow leads directly to a suboptimal peak.
is that the optimal sequence for a particular block may depend on background. An example is the following 4-locus system.
**Example 2.8**.: Assume that the loci in a \(4\)-locus system can be partitioned into blocks consisting of two loci, where \(00\) and \(11\) have higher fitness than the intermediates \(10\) and \(01\) in each block (again using informal notation). The peaks are
\[0000,1100,0011,1111,\]
Moreover, assume that
\[w_{1111}>w_{0000}>w_{1100}>w_{0011}\]
Evolution from \(0000\) to \(1111\) is difficult since both \(0000\mapsto 1100\) and \(0000\mapsto 0011\) decrease fitness.
It is instructive to compare \(*\) and the other rank order conditions discussed here with conventional models of fitness landscapes. The condition \(*\) obviously does not hold in the absence of sign epistasis, in particular not for additive fitness landscapes. The condition \(*\) is also incompatible with hop-to-top landscapes constructed by a random fitness assignment (see the introduction). The reason is that for a random fitness landscape \(w\), the inequality \(w_{00s}>w_{10s}\) cannot hold for all \(s\) if \(L\) is large.
An empirical study (Greenbury et al., 2022) uses assumptions similar to the hop-to-top landscapes for describing worst-case scenarios for adaptation, i.e., fitness is randomly assigned from a uniform distribution between \(0\) and \(1\), with the difference that fitness is assigned to phenotypes rather than genotypes. Note that \(*\) cannot hold for a random fitness landscape \(w\) as described. The reason is that genotypes of the form \(00s\) correspond to many phenotypes, and similarly for genotypes \(10s\). Consequently, there are both \(s^{\prime}\) such that \(w_{10s^{\prime}}>w_{00s^{\prime}}\) and \(s^{\prime\prime}\) such that \(w_{10s^{\prime\prime}}<w_{00s^{\prime\prime}}\). In other words, the assumptions on \(w\) are not compatible with \(*\) or similar rank order conditions. It follows that random assumptions do not describe worst-case scenarios for fitness landscapes in settings where rank order preserving blocks are important.
### 2.2. Suboptimal plateaus
Some fitness landscapes have a high degree of redundancy. Consequently, it is of interest to consider landscapes where mutational neighbors are allowed to have the same fitness. Fitness graphs for such landscapes can be drawn similarly to standard fitness graphs, except that some arrows would be replaced by edges.
**Definition 2.9**.: A genotype \(g\) belongs to a suboptimal plateau in a fitness landscape if
1. All neighbors have the same or lower fitness than \(g\), and
2. at least one genotype in the fitness landscape has higher fitness than \(g\).
For fitness landscapes with a high degree of redundancy, an evolving population may be unable to reach a genotype of high fitness because of suboptimal plateaus. The following example shows the impact of plateaus.
**Example 2.10**.: Assume that an 8-locus system consists of two blocks of 4 loci. The first block is in state \(0\) for the following eight sequences:
\[0000,1000,0001,1100,0110,1001,1110,1101,\]
and in state \(1\) for the remaining eight sequences
\[0100,0010,1010,0101,0011,1011,0111,1111.\]
Notice that for any sequence, a single mutation can change the state of the block (from 0 to 1, or from 1 to 0). Assume that the second block has similar properties. Then the \(8\)-locus system has four states \(00,10,01,11\) determined by the state of each block. If the fitness of a genotype in the 8-locus system is determined by its state, then the landscape is analogous to a biallelic 2-locus system.
For instance, if \(w_{11}>w_{00}>w_{10}>w_{01}\), where \(w_{ij}\) denotes the fitness for the state \(ij\), then the 64 genotypes that represent the state \(00\) constitute a suboptimal plateau. Neutral mutations are available, but no sequence of neutral mutations results in a genotype from which beneficial mutations are possible.
By using a similar construction, one can obtain a landscape with arbitrary redundancy from a fitness landscape \(w\) without redundancy. Specifically, assume that \(w\) is defined on \(s\) loci and that no two genotypes have the same fitness. Construct \(s\) blocks of \(r\) loci, such that 50 percent of the sequences in each block have state 0 and 50 percent state 1, and such that for any sequence a single mutation can change the state. (In the previous example \(r=4\) and \(s=2\). The construction is not more difficult for larger \(r\)-values.) If \(r_{1},\ldots,r_{s}\) represent the states for a genotype in the \(L=rs\)-locus system, then one assigns the fitness \(w_{r_{1}\ldots r_{s}}\) to the genotype. The observation below follows.
**Observation 2.11**.: For every fitness landscape with no redundancy, one can construct a landscape with an arbitrarily high degree of redundancy, such that each suboptimal peak in the first landscape corresponds to a suboptimal plateau in the second landscape.
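Example 2.10 can be reproduced directly; the snippet below is an illustration only (the integer fitness values merely encode the rank order \(w_{11}>w_{00}>w_{10}>w_{01}\)). It lists the two 4-locus blocks, assigns fitness by state, and confirms that the 64 genotypes of state \(00\) have no strictly fitter neighbor.

```python
from itertools import product

# Block sequences with state 0 (the remaining eight 4-bit sequences have state 1).
STATE_0 = {(0, 0, 0, 0), (1, 0, 0, 0), (0, 0, 0, 1), (1, 1, 0, 0),
           (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 1, 0), (1, 1, 0, 1)}

def block_state(b):
    return 0 if b in STATE_0 else 1

state_fitness = {(1, 1): 4, (0, 0): 3, (1, 0): 2, (0, 1): 1}   # w11 > w00 > w10 > w01

def fitness(g):
    return state_fitness[(block_state(g[:4]), block_state(g[4:]))]

def neighbors(g):
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]

plateau = [g for g in product((0, 1), repeat=8)
           if fitness(g) == 3 and all(fitness(n) <= 3 for n in neighbors(g))]
print(len(plateau))   # 64: every state-00 genotype lies on the suboptimal plateau
```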
### 2.3. Irreversible evolution
If the optimal genotypes for an organism differ between two environments A and B, it is interesting to analyze forward and reverse evolution. The motivating example is costly drug resistance. This section does not include empirical examples, but rather an analysis of small systems from a theoretical point of view.
If the adaptation to a new environment depends on a single point mutation, then evolution is reversible. However, the case \(L=2\) is already more interesting. Assume that \(00\) is optimal in the original environment and \(11\) in the new environment, so that in a limited sense there is a trade-off between optimal fitness in the two environments. Then there are (in principle) two fitness graphs that allow for forward evolution, described by the graphs 1A (all arrows up) and 2A (exactly one arrow down).
Fitness graph 1A for forward evolution: Both paths \(00\mapsto 10\mapsto 11\) and \(00\mapsto 01\mapsto 11\) are accessible, i.e., the mutations are beneficial independent of background. In this case, it seems plausible that reverse mutations would be beneficial in the original environment (see also the conjecture below).
Fitness graph 2A for forward evolution: By assumption, exactly one path is accessible, described as \(00\mapsto 10\mapsto 11\). The mutation \(0\mapsto 1\) at the right locus is only beneficial if the left locus is mutated, i.e., there is no independent advantage for the mutation in the new environment. Consequently, the advantage may have nothing to do with the new environment but rather constitute an adjustment because of the left substitution. If that is the case, it seems likely that \(11\) has higher fitness than \(10\)_also in the original environment_, which would imply irreversible evolution (the fitness graph would agree with Figure 2B).
Based on the discussion, one can try to relate forward and reverse evolution for \(L=2\). The assumptions are that 11 has highest fitness in the new environment, 00 in the original environment, and that forward evolution is possible (the corresponding fitness graph agrees with 1A or 2A).
**Conjecture 1**.: _Under the assumption stated, there is a correlation between that fitness graph 1A represents forward evolution and that evolution is reversible._
The conjecture concerns a (possible) statistical correlation, not a general rule. Figures 3 and 4 show six two-locus subsystems that include \(0000\). The conjecture applies to the two systems defined by the double mutants \(1010\) and \(0011\), respectively. For both systems forward evolution agrees with Figure 1A, and evolution is reversible.
### 2.4. Irreversible evolution and fluctuating drug concentrations
We continue with assumptions very similar to Section 2.3, except that we consider different drug concentrations. Specifically, \(L=2\), and if the drug concentration \(C\geq C_{T}\) for some threshold concentration \(C_{T}\), then \(w_{10}>w_{00}\); otherwise \(w_{00}>w_{10}\). For simplicity, we also assume that \(w_{11}>w_{10}\) if \(C\geq C_{T}\), so that the path \(00\mapsto 10\mapsto 11\) is accessible as soon as \(C\geq C_{T}\). The genotype \(01\) has low fitness in all environments (similar to the second case in Section 2.3). The relevant graphs are shown in Figure 7, where 7A represents forward evolution and the two alternatives for reverse evolution are 7B1 and 7B2.
Figure 7. The fitness graph for forward evolution (A) has one accessible trajectory \(00\mapsto 10\mapsto 11\). The genotype \(01\) has low fitness in all environments. Evolution is irreversible if the fitness graph for the original environment agrees with \(B1\), and reversible if the graph agrees with \(B2\).
By working with very precise assumptions, one can clearly see some of the mechanisms at play. The prospects for reverse evolution fall naturally into three cases, described by Tables 1-3. In brief, the possible outcomes are:
1. evolution can be reversed, and the threshold for reverse evolution is the same as for forward evolution (\(C_{T}\)).
2. evolution is irreversible.
3. evolution can be reversed, but the threshold for reverse evolution (\(\hat{C_{T}}\)) is lower than for forward evolution (\(C_{T}\)).
In practical terms, the asymmetry in case (iii) means that resistance can be maintained at drug levels lower than what is necessary for resistance development. However, reverse evolution is possible in a drug-free environment.
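The three outcomes can be made concrete with a tiny helper; the numeric fitness values below are illustrative and only meant to reproduce the rank orders of Table 2. Given the fitness of \(00\), \(10\) and \(11\) in a concentration band, with \(01\) assumed lowest everywhere, the functions list the peaks and test whether the reverse path \(11\mapsto 10\mapsto 00\) is accessible.

```python
def peaks_2locus(w00, w10, w11):
    """Peaks among 00, 10, 11 when 01 has the lowest fitness everywhere."""
    p = []
    if w00 > w10:
        p.append("00")
    if w10 > w00 and w10 > w11:
        p.append("10")
    if w11 > w10:
        p.append("11")
    return p

def reverse_accessible(w00, w10, w11):
    """Is the reverse path 11 -> 10 -> 00 strictly fitness-increasing?"""
    return w10 > w11 and w00 > w10

# Rank orders of Table 2 (irreversible case): 10 stays below 11 in every band.
bands = {"C >= C_T": (0.2, 0.6, 1.0),
         "C_hat < C < C_T": (0.8, 0.4, 1.0),
         "C <= C_hat": (1.0, 0.4, 0.8)}
for band, (w00, w10, w11) in bands.items():
    print(band, peaks_2locus(w00, w10, w11), reverse_accessible(w00, w10, w11))
# Reverse evolution is never accessible, matching outcome (ii).
```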
The very last case we consider for \(L=2\) (Table 4) falls outside the main topic of the paper because there is no trade-off between fitness in the different environments (and consequently no reason to expect reverse evolution). Rather, the case sorts under cost-free drug resistance. Similar to Tables 1-3, the wild-type \(00\) has maximal fitness in
\begin{table}
\begin{tabular}{l|c|l} Concentration \(C\) & Rank order & Peaks \\ \(C\geq C_{T}\) & \(w_{11}>w_{10}>w_{00}\) & 11 \\ \hline \(C<C_{T}\) & \(w_{00}>w_{10}>w_{11}\) & 00 \\ \hline \end{tabular}
\end{table}
Table 1. Evolution is reversible. Regardless of drug concentration, there is one peak in the fitness landscape. The fitness graph agrees with B2 for all \(C<C_{T}\).
\begin{table}
\begin{tabular}{l|c|l} Concentration \(C\) & Rank order & Peaks \\ \(C\geq C_{T}\) & \(w_{11}>w_{10}>w_{00}\) & 11 \\ \hline \(\hat{C_{T}}<C<C_{T}\) & \(w_{11}>w_{00}>w_{10}\) & 00 and 11 \\ \hline \(C\leq\hat{C_{T}}\) & \(w_{00}>w_{11}>w_{10}\) & 00 and 11 \\ \hline \end{tabular}
\end{table}
Table 2. Evolution is irreversible. There are two peaks for all concentrations below the threshold \(C_{T}\). The rank order of \(11\) and \(00\) changes at some threshold concentration \(\hat{C_{T}}<C_{T}\), but the fitness of \(10\) remains low.
\begin{table}
\begin{tabular}{l|c|l} Concentration \(C\) & Rank order & Peaks \\ \(C\geq C_{T}\) & \(w_{11}>w_{10}>w_{00}\) & 11 \\ \hline \(\hat{C_{T}}<C<C_{T}\) & \(w_{11}>w_{00}>w_{10}\) & 00 and 11 \\ \hline \(C\leq\hat{C_{T}}\) & \(w_{00}>w_{10}>w_{11}\) & 00 \\ \hline \end{tabular}
\end{table}
Table 3. Evolution is reversible. However, the situation is less favorable than the case described by Table 1, since there is a threshold asymmetry. The fitness graph agrees with \(B1\) for \(\hat{C_{T}}<C<C_{T}\) and with \(B2\) for \(C\leq\hat{C_{T}}\), i.e., the threshold concentration for development of resistance is higher than the threshold for its reversion.
the drug-free environment, \(11\) for high concentrations, whereas \(01\) has low fitness in all environments. However, in contrast to the previous tables, \(11\) has maximal fitness in all environments.
Under the given assumption, suppose that a population is exposed to low drug concentrations for an extended period of time (\(0<C<C_{T}\)). Then \(00\) is a suboptimal peak. By assumption, the population cannot reach the global peak \(11\) for low concentrations, unless it is first exposed to high concentrations (\(C\geq C_{T}\)). In other words, the system has "memory" of a sort, since exposure to high drug concentrations causes a permanent change (and increased fitness for any \(C>0\)).
Given the variation in behavior already for \(L=2\), it is reasonable to expect interesting dynamics for larger systems, see Das et al. (2022, 2020). Returning to the malaria study, 10 different drug concentrations were considered. Figure 8 summarizes information for all 10 concentrations. Each arrow that points up for all 10 concentrations is marked red. The other arrows are black. The graph shows that evolution from \(1111\) to \(0000\) is at least theoretically possible under fluctuating drug concentrations.
## 3. Discussion
Persistent drug resistance is a multifaceted problem. Even if resistance is costly, a drug pause does not necessarily restore susceptibility. A complete analysis requires consideration of both evolutionary processes, including reverse mutations and accumulations of new compensatory mutations, and of non-evolutionary mechanisms such as the potential for regrowth of the former wild-type and properties of replacement drugs. One of the fundamental questions is whether evolution is reversible in principle (regardless of whether reversions are plausible or not). The question of genetic reversibility is immediately related to obstacles in fitness landscapes.
For analyzing local and global obstacles we introduced some new concepts based on rank orders. A rank order preserving block is a subset of loci (sometimes referred to as a module) with the property that the rank order of genotypes that agree at all loci outside of the block does not depend on background. A weaker condition is a graph preserving block, where the rank order of mutational neighbors that agree at all loci outside of the block does not depend on background. The condition implies that the fitness graphs are similar regardless of background (Figure 5). The existence of a rank order preserving
\begin{table}
\begin{tabular}{l|c|c} Concentration \(C\) & Rank order & Peaks \\ \(C\geq C_{T}\) & \(w_{11}>w_{10}>w_{00}\) & 11 \\ \hline \(0<C<C_{T}\) & \(w_{11}>w_{00}>w_{10}\) & 00, 11 \\ \hline \(C=0\) & \(w_{11}=w_{00}>w_{10}\) & 00, 11 \\ \hline \end{tabular}
\end{table}
Table 4: Similar to Tables 1–3, the wild-type \(00\) has maximal fitness in the drug-free environment, \(11\) for high concentrations, whereas \(01\) has low fitness in all environments. However, in contrast to the previous tables, there is no longer a trade-off between environments, since \(11\) has maximal fitness in all environments.
block with suboptimal peaks implies that there are also suboptimal peaks in the global fitness landscape, and likewise for graph preserving blocks. For a study of irreversible malarial drug resistance (Ogbunugafor and Hartl, 2016), we identified a double peaked graph preserving block (modulo a single deviating genotype).
If the \(L\)-sequence (the genome) can be partitioned into rank order preserving blocks, the result can be considered a generalization of block landscapes (Perelson and Macken, 1995). All else equal, adding more blocks does not make the fitness landscape more favorable.
In general, rank order induced (or signed) interactions (Crona, 2020; Crona et al., 2020, 2017), including signed versions of higher order epistasis and circuits (introduced to biology in Beerenwinkel et al. (2007)), have been used for analyzing accessibility and obstacles in fitness landscapes, as well as for detecting interactions from incomplete data. Rank order and graph preserving blocks provide similar insights, and obviously all the signed concepts are analogous to sign epistasis (Weinreich et al., 2005) in that they capture order implications and are blind to magnitude differences that do not affect rank orders.
We considered the relation between fitness landscapes for forward and reverse evolution. For \(L=2\) we conjecture that absence of sign epistasis for forward evolution
Figure 8. The graph summarizes information for ten different concentrations of the drug, including the drug-free environment. The red arrows indicate fitness differences that are consistent for all ten concentrations of the drug. In particular, \(1110\) has higher fitness than the double mutants \(1100,1010,0110\), regardless of concentration. Each black arrow indicates that fitness increases for at least one concentration of the drug. The graph shows that fluctuating concentrations could restore the wild-type \(0000\).
correlates with reversible evolution. More generally, one can ask if favorable landscapes for forward evolution correlate with reversibility. Results in Das et al. (2022, 2020) are compatible with such a claim, but more empirical studies would be necessary for a conclusion.
For landscapes defined by different drug concentrations, it is of interest to compare concentration thresholds for forward and reverse evolution. The dynamics for \(L=2\) are already interesting. We demonstrated that successful adaptation to low drug concentrations may require a history of adaptation to high drug concentrations. We have argued that threshold asymmetries are plausible. The impact of fluctuating drug concentrations was analyzed for the study on irreversible malarial drug resistance mentioned above. Reversion to the original wild-type was at least theoretically possible, which illustrates that fluctuating concentrations within the range of two extremes (here high drug concentration and the drug-free environment) can result in qualitatively different outcomes as compared to switches between the extremes.
We have pointed out that graph and rank order preserving blocks are incompatible with some standard constructions of fitness landscapes that use random fitness (see the discussion about hop-to-top landscapes), and that neither redundancy nor a peak density that decreases with \(L\) implies that fitness landscapes are favorable for large \(L\). It appears that no simple summary statistic can predict whether or not a fitness landscape is favorable. A natural category of rugged landscapes with good peak accessibility is identified in Das et al. (2020), which is another indication that ruggedness alone does not reveal the character of a fitness landscape.
However, evolutionary reversibility is a potential indicator of fundamental properties of fitness landscapes. Whether or not the wild-type can be restored and properties of new genotypes that result from compensatory mutations carry information about peak constellations, accessibility and constraints in the landscape. Sufficiently complete and precise empirical studies of resistance and its reversal could contribute to a better understanding of microbial evolution, in particular of microbial fitness landscapes.
|
2303.09790 | Reliable Multimodality Eye Disease Screening via Mixture of Student's t
Distributions | Multimodality eye disease screening is crucial in ophthalmology as it
integrates information from diverse sources to complement their respective
performances. However, the existing methods are weak in assessing the
reliability of each unimodality, and directly fusing an unreliable modality may
cause screening errors. To address this issue, we introduce a novel
multimodality evidential fusion pipeline for eye disease screening, EyeMoSt,
which provides a measure of confidence for unimodality and elegantly integrates
the multimodality information from a multi-distribution fusion perspective.
Specifically, our model estimates both local uncertainty for unimodality and
global uncertainty for the fusion modality to produce reliable classification
results. More importantly, the proposed mixture of Student's $t$ distributions
adaptively integrates different modalities to endow the model with heavy-tailed
properties, increasing robustness and reliability. Our experimental findings on
both public and in-house datasets show that our model is more reliable than
current methods. Additionally, EyeMost has the potential ability to serve as a
data quality discriminator, enabling reliable decision-making for multimodality
eye disease screening. | Ke Zou, Tian Lin, Xuedong Yuan, Haoyu Chen, Xiaojing Shen, Meng Wang, Huazhu Fu | 2023-03-17T06:18:16Z | http://arxiv.org/abs/2303.09790v4 | # Reliable Multimodality Eye Disease Screening via Mixture of Student's t Distributions
###### Abstract
Multimodality eye disease screening is crucial in ophthalmology as it integrates information from diverse sources to complement their respective performances. However, the existing methods are weak in assessing the reliability of each unimodality, and directly fusing an unreliable modality may cause screening errors. To address this issue, we introduce a novel multimodality evidential fusion pipeline for eye disease screening, **EyeMoS\(t\)**, which provides a measure of confidence for unimodality and elegantly integrates the multimodality information from a multi-distribution fusion perspective. Specifically, our model estimates both local uncertainty for unimodality and global uncertainty for the fusion modality to produce reliable classification results. More importantly, the proposed mixture of Student's \(t\) distributions adaptively integrates different modalities to endow the model with heavy-tailed properties, increasing robustness and reliability. Our experimental findings on both public and in-house datasets show that our model is more reliable than current methods. Additionally, EyeMoS\(t\) has the potential to serve as a data quality discriminator, enabling reliable decision-making for multimodality eye disease screening.
Keywords: Multimodality, uncertainty estimation, eye disease.
## 1 Introduction
Retinal fundus images and Optical Coherence Tomography (OCT) are common 2D and 3D imaging techniques used for eye disease screening. Multimodality learning usually provides more complementary information than unimodality learning [3, 4, 31]. This motivates researchers to integrate multiple modalities to improve the performance of eye disease screening. Current multimodality learning methods can be roughly classified into early, intermediate, and late fusion,
depending on the fusion stage [2]. For multimodality ophthalmic image learning, recent works have mainly focused on the early fusion [10, 15, 23] and intermediate fusion stages [3, 4, 16, 27]. Early fusion-based approaches integrate multiple modalities directly at the data level, usually by concatenating the raw or preprocessed multimodality data. Hua _et al._[10] combined preprocessed fundus images and wide-field swept-source optical coherence tomography angiography at the early stage and then extracted representational features for diabetic retinopathy recognition. Intermediate fusion strategies allow multiple modalities to be fused at different intermediate layers of the neural networks. He _et al._[9] extracted different modality features with convolutional block attention module [28] and modality-specific attention mechanisms, then concatenated them to realize the multimodality fusion for retinal image classification. However, few studies have explored multimodality eye disease screening at the late fusion stage. Furthermore, the above methods do not adequately assess the reliability of each unimodality, and may directly fuse an unreliable modality with others. This could lead to screening errors and be challenging for real-world clinical safety deployment. To achieve this goal, we propose a reliable framework for the multimodality eye disease screening, which provides a confidence (uncertainty) measure for each unimodality and adaptively fuses multimodality predictions in principle.
Uncertainty estimation is an effective way to provide a measure of reliability for ambiguous network predictions. The current uncertainty estimation methods mainly include Bayesian neural networks, deep ensemble methods, and deterministic-based methods. Bayesian neural networks [18, 21, 22] learn the distribution of network weights by treating them as random variables. However, these methods suffer from convergence difficulties and a large computational cost. The dropout method has alleviated this issue to a certain extent [12]. Another way to estimate uncertainty is to learn an ensemble of deep networks [14]. Recently, to alleviate computational complexity and overconfidence [25], deterministic-based methods [17, 19, 25, 26] have been proposed to directly output uncertainty in a single forward pass through the network. For multimodal uncertainty estimation, the Trusted Multi-view Classification (TMC) [8] is a representative method that proposes a new paradigm of multi-view learning by dynamically integrating different views at the evidence level. However, TMC has a limited ability to detect Out-Of-Distribution (OOD) samples [11]. This is because TMC is particularly weak at modeling epistemic uncertainty for each single view [12]. Additionally, the fusion rule in TMC fails to account for conflicting views, making it unsuitable for safety-critical deployment [30]. To address these limitations, we propose EyeMoS\(t\), a novel evidential fusion method that models both aleatoric and epistemic uncertainty in unimodality, while efficiently integrating different modalities from a multi-distribution fusion perspective.
In this work, **we propose a novel multimodality eye disease screening method, called EyeMoS\(t\),** that conducts Fundus and OCT modality fusion in a reliable manner. Our EyeMoS\(t\) places Normal-inverse Gamma (NIG) prior distributions over the pre-trained neural networks to directly learn both aleatoric
and epistemic uncertainty for unimodality. Moreover, **our EyeMoS\(t\) introduces the Mixture of Student's \(t\) (MoS\(t\)) distributions**, which provides robust classification results with global uncertainty. More importantly, MoS\(t\) endows the model with robustness through awareness of heavy-tailed properties. **We conduct extensive experiments on two datasets for different eye diseases** (_e.g._, glaucoma grading, age-related macular degeneration, and polypoid choroidal vasculopathy) to verify the reliability and robustness of the proposed method. **We will release all codes for reproduction after acceptance.**
## 2 Method
In this section, we introduce the overall framework of our EyeMoS\(t\), which efficiently estimates the aleatoric and epistemic uncertainty for unimodality and adaptively integrates Fundus and OCT modalities in principle. As shown in Fig. 1 (a), we first employ the 2D/3D neural network encoders to capture different modality features. Then, we place multi-evidential heads after the trained networks to model the parameters of higher-order NIG distributions for unimodality. To merge these predicted distributions, we convert the NIG distributions into Student's \(t\) (S\(t\)) distributions. In particular, the Mixture of Student's \(t\) (MoS\(t\)) distributions is introduced to integrate the distributions of different modalities in principle. Finally, we elaborate on the training pipeline for the model evidence acquisition.
Given a multimodality eye dataset \(\mathcal{D}=\left\{\left\{\mathbf{x}_{m}^{i}\right\}_{m=1}^{M}\right\}\) and the corresponding label \(y^{i}\), the intuitive goal is to learn a function that can classify different categories. Fundus and OCT are common imaging modalities for eye disease
Figure 1: Reliable multimodality Eye Disease Screening pipeline. (a) Overall framework of EyeMoSt. (b) Student’s \(t\) Distributions with different degrees of freedom. (c) The overall learning process of EyeMoSt.
screening. Therefore, here M=2, \(\mathbf{x}_{1}^{i}\) and \(\mathbf{x}_{2}^{i}\) represent Fundus and OCT input modality data, respectively. We first train 2D encoder \(\Theta\) of Res2Net [7] and 3D encoder \(\Phi\) of MedicalNet [5] to identify the feature-level informativeness, which can be defined as \(\Theta\left(\mathbf{x}_{1}^{i}\right)\) and \(\Phi\left(\mathbf{x}_{2}^{i}\right)\), respectively.
### Uncertainty estimation for unimodality
We extend the deep evidential regression model [1] to multimodality evidential classification for eye disease screening. To model the uncertainty for the Fundus or OCT modality, we assume that the observed label \(y^{i}\) is drawn from a Gaussian \(\mathcal{N}\left(y^{i}|\mu,\sigma^{2}\right)\), whose mean and variance are governed by an evidential prior, the NIG distribution:
\[\text{NIG}\left(\mu,\sigma^{2}|\mathbf{p}_{m}\right)=\mathcal{N}\left(\mu| \gamma_{m},\frac{\sigma^{2}}{\delta_{m}}\right)\Gamma^{-1}\left(\sigma^{2}| \alpha_{m},\beta_{m}\right), \tag{1}\]
where \(\Gamma^{-1}\) is an inverse-gamma distribution, \(\gamma_{m}\in\mathbb{R},\delta_{m}>0,\alpha_{m}>1,\beta_{m}>0\). Specifically, the multi-evidential heads will be placed after the encoders \(\Theta\) and \(\Phi\) (as shown in Fig. 1 (a)), which outputs the prior NIG parameters \(\mathbf{p}_{m}=(\gamma_{m},\delta_{m},\alpha_{m},\beta_{m})\). As a result, the aleatoric (AL) and epistemic (EP) uncertainty can be estimated by the \(\mathbb{E}\left[\sigma^{2}\right]\) and the \(\text{Var}\left[\mu\right]\), respectively, as:
\[\text{AL}=\text{E}\left[\sigma^{2}\right]=\frac{\beta_{m}}{\alpha_{m}-1}, \qquad\text{EP}=\text{Var}\left[\mu\right]=\frac{\beta_{m}}{\delta_{m}\left( \alpha_{m}-1\right)}. \tag{2}\]
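For concreteness, a minimal PyTorch-style sketch of such a multi-evidential head is given below. It is an illustration rather than the released implementation: the layer size and the softplus parameterization used to enforce \(\delta_{m}>0\), \(\alpha_{m}>1\), \(\beta_{m}>0\) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NIGHead(nn.Module):
    """Maps an encoder feature vector to NIG parameters (gamma, delta, alpha, beta)."""
    def __init__(self, in_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, 4)

    def forward(self, feat):
        gamma, d, a, b = self.fc(feat).chunk(4, dim=-1)
        delta = F.softplus(d) + 1e-6          # delta > 0
        alpha = F.softplus(a) + 1.0 + 1e-6    # alpha > 1
        beta = F.softplus(b) + 1e-6           # beta > 0
        aleatoric = beta / (alpha - 1.0)                # E[sigma^2], Eq. (2)
        epistemic = beta / (delta * (alpha - 1.0))      # Var[mu], Eq. (2)
        return gamma, delta, alpha, beta, aleatoric, epistemic
```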
Then, given the evidence distribution parameter \(\mathbf{p}_{m}\), the marginal likelihood is calculated by marginalizing the likelihood parameter:
\[p\left(y^{i}|x_{{}_{m}}^{i},\mathbf{p}_{m}\right)=\int_{\mu}\int_{\sigma^{2}}p \left(y^{i}|x_{{}_{m}}^{i},\mu,\sigma^{2}\right)\text{NIG}\left(\mu,\sigma^{2} |\mathbf{p}_{m}\right)\text{d}\mu\text{d}\sigma^{2}. \tag{3}\]
Combining the NIG prior with the Gaussian likelihood of each unimodality [1], this integral has an analytical solution and yields an \(\text{S}t\) prediction distribution:
\[p\left(y^{i}|x_{{}_{m}}^{i},\mathbf{p}_{m}\right) =\frac{\Gamma\left(\alpha_{m}+\frac{1}{2}\right)}{\Gamma\left( \alpha_{m}\right)}\sqrt{\frac{\delta_{m}}{2\pi\beta_{m}\left(1+\delta_{m} \right)}}\Bigg{(}1+\frac{\delta_{m}\big{(}y^{i}-\gamma_{m}\big{)}^{2}}{2\beta_ {m}\left(1+\delta_{m}\right)}\Bigg{)}^{-\left(\alpha_{m}+\frac{1}{2}\right)}\] \[=St\left(y^{i};\gamma_{m},o_{m},2\alpha_{m}\right), \tag{4}\]
with \(o_{m}=\frac{\beta_{m}\left(1+\delta_{m}\right)}{\delta_{m}\alpha_{m}}\). The complete derivations of Eq. 4 are available in **Supplementary S1.1**. Thus, the two modality distributions are transformed into Student's \(t\) distributions \(S\text{t}\left(y^{i};u_{m},\Sigma_{m},v_{m}\right)=S\text{t}\left(y^{i}; \gamma_{m},o_{m},2\alpha_{m}\right)\), with \(u_{m}\in\mathbb{R},\Sigma_{m}>0,v_{m}>2\).
### Mixture of Student's \(t\) Distributions (MoSt)
Then, we focus on fusing multiple S\(t\) Distributions from different modalities. How to rationally integrate multiple S\(t\)s into a unified S\(t\) is the key issue. To this end, the joint modality of distribution can be denoted as:
\[S\mathrm{t}\left(y^{i};u_{F},\Sigma_{F},v_{F}\right)=S\mathrm{t}\left(y^{i}; \left[\begin{array}{c}u_{1}^{i}\\ u_{2}^{i}\end{array}\right]\right,\left[\begin{array}{c}\Sigma_{11}\ \Sigma_{12}\\ \Sigma_{{}_{12}}^{\mathrm{T}}\ \Sigma_{22}\end{array}\right],\left[\begin{array}{c}v_{1}^{i}\\ v_{2}^{i}\end{array}\right]\right). \tag{5}\]
In order to preserve the closed S\(t\) distribution form and the heavy-tailed properties of the fusion modality, the updated parameters are given following [24]. In simple terms, we first adjust the degrees of freedom of the two distributions to be consistent. As shown in Fig. 1 (b), smaller values of the degrees of freedom (DOF) \(v\) correspond to heavier tails. Therefore, we construct the decision value \(\tau_{m}=v_{m}\) to approximate the parameters of the fused distribution. We assume that the mixture of S\(t\) distributions is still approximately an S\(t\) distribution after fusion. Assuming that \(\tau_{1}\) is smaller than \(\tau_{2}\), the fused S\(t\) distribution \(S\mathrm{t}\left(y^{i};u_{{}_{F}},\Sigma_{{}_{F}},v_{{}_{F}}\right)\) is updated as:
\[v_{F}\!\!=\!\!v_{1},\qquad u_{F}=u_{1},\qquad\frac{v_{2}}{v_{2}-2}\Sigma_{2}= \frac{v_{1}}{v_{1}-2}\Sigma_{F}. \tag{6}\]
More intuitively, the above formula identifies the modality with the stronger heavy-tailed attribute. That is, according to the perceived heavy-tailed attribute of each modality, the most robust modality is selected as the fusion modality. Finally, the prediction and uncertainty of the fusion modality are given by:
\[\hat{y}^{i}=\mathbb{E}_{p\left(x_{F}^{i},\mathbf{p}_{F}\right)}\left[y^{i} \right]=u_{F},\quad\hat{U}_{F}=\mathbb{E}\left[\sigma_{F}^{2}\right]=\Sigma_{F }\frac{v_{F}}{v_{F}-2}. \tag{7}\]
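A compact sketch of the conversion in Eq. (4) and the fusion rule of Eqs. (6)-(7) for two scalar predictions is given below; it is only an illustration of the rule, keeping the parameters of the heavier-tailed modality and matching the scale by moments.

```python
def nig_to_student_t(gamma, delta, alpha, beta):
    """NIG parameters -> Student's t parameters (location, scale, dof), Eq. (4)."""
    u = gamma
    sigma = beta * (1.0 + delta) / (delta * alpha)   # o_m in Eq. (4)
    v = 2.0 * alpha
    return u, sigma, v

def fuse_two_modalities(t1, t2):
    """Fuse two Student's t predictions following Eqs. (6)-(7)."""
    (u1, s1, v1), (u2, s2, v2) = t1, t2
    if v2 < v1:                                   # keep the heavier-tailed modality
        (u1, s1, v1), (u2, s2, v2) = (u2, s2, v2), (u1, s1, v1)
    v_f, u_f = v1, u1
    sigma_f = (v2 / (v2 - 2.0)) * s2 * (v1 - 2.0) / v1   # moment matching, Eq. (6)
    uncertainty = sigma_f * v_f / (v_f - 2.0)            # Eq. (7)
    return u_f, uncertainty, (u_f, sigma_f, v_f)
```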
### Learning the evidential distributions
Under the evidential learning framework, we expect more evidence to be collected for each modality, thus, the proposed model is expected to maximize the likelihood function of the model evidence. Equivalently, the model is expected to minimize the negative log-likelihood function, which can be expressed as:
\[\mathcal{L}_{m}^{NLL}= \log\frac{\Gamma\left(\alpha_{m}\right)\sqrt{\frac{\pi}{\delta_{m }}}}{\Gamma\left(\alpha_{m}+\frac{1}{2}\right)}-\alpha_{m}\log\left(2\beta_{m} \left(1+\delta_{m}\right)\right)\] \[+\left(\alpha_{m}+\frac{1}{2}\right)\log\left(\left(y^{i}-\gamma _{m}\right)^{2}\delta_{m}+2\beta_{m}\left(1+\delta_{m}\right)\right). \tag{8}\]
Then, to fit the classification tasks, we introduce the cross entropy term \(\mathcal{L}_{m}^{CE}\):
\[\mathcal{L}_{m}^{NIG}=\mathcal{L}_{m}^{NLL}+\lambda\mathcal{L}_{m}^{CE}, \tag{9}\]
where \(\lambda\) is the balance factor and set to be \(0.5\). Similarly, for the fusion modality, we first maximize the likelihood function of the model evidence as follows:
\[\mathcal{L}_{F}^{NLL}=\log\Sigma_{F}+\log\frac{\Gamma\left(\frac{v_{F}}{2}\right) }{\Gamma\left(\frac{v_{F}+1}{2}\right)}+\log\sqrt{v_{F}\pi}+\frac{\left(v_{F}+ 1\right)}{2}\log\left(1+\frac{\left(y^{i}-u_{F}\right)^{2}}{v_{F}\Sigma_{F}} \right), \tag{10}\]
Complete derivations of Eq. 8 are available in **Supplementary S1.2**. Then, to achieve better classification performance, the cross entropy term \(\mathcal{L}_{F}^{CE}\) is also introduced into Eq. 10 as below:
\[\mathcal{L}_{F}^{St}=\mathcal{L}_{F}^{NLL}+\lambda\mathcal{L}_{F}^{CE}, \tag{11}\]
Overall, the evidential learning objective for multimodality screening is:
\[\mathcal{L}_{all}=\sum_{m=1}^{M}\mathcal{L}_{m}^{NIG}+\mathcal{L}_{F}^{St}. \tag{12}\]
In this paper, we mainly consider the fusion of two modalities, \(M=2\).
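As a reference, a sketch of the unimodal NIG negative log-likelihood of Eq. (8) for PyTorch tensors is shown below; it is an illustration only, and the cross-entropy terms of Eqs. (9) and (11) would be added with the balance factor \(\lambda=0.5\) as stated in the text.

```python
import math
import torch

def nig_nll(y, gamma, delta, alpha, beta):
    """Negative log-likelihood of the NIG evidential model, Eq. (8)."""
    two_b = 2.0 * beta * (1.0 + delta)
    return (torch.lgamma(alpha) - torch.lgamma(alpha + 0.5)
            + 0.5 * torch.log(math.pi / delta)
            - alpha * torch.log(two_b)
            + (alpha + 0.5) * torch.log((y - gamma) ** 2 * delta + two_b))
```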
## 3 Experiments
**Datasets:** In this paper, we verify the effectiveness of EyeMoS\(t\) on two datasets. For glaucoma recognition, we validate the proposed method on the GAMMA [29] dataset. It contains 100 paired cases with a three-level glaucoma grading, divided into a training set and a test set of 80 and 20 cases, respectively. We conduct five-fold cross-validation on it to avoid performance improvements caused by accidental factors. Then, we test our method on an in-house collected dataset, which includes age-related macular degeneration (AMD) and polypoid choroidal vasculopathy (PCV) diseases. The cases are divided into training, validation, and test sets with 465, 69, and 70 cases, respectively. More details of the dataset can be found in **Supplementary S2**. Both datasets include paired cases of Fundus (2D) and OCT (3D). 7
Footnote 7: The ethical approval of this dataset was obtained from the Ethical Committee.
**Training Details:** Our proposed method is implemented in PyTorch and trained on an NVIDIA GeForce RTX 3090. Adam optimization [13] is employed to optimize the overall parameters with an initial learning rate of \(0.0001\). The maximum number of epochs is \(100\). The data augmentation techniques for the GAMMA dataset are similar to [3], including random grayscaling, random color jitter, and random horizontal flipping. All inputs are uniformly resized to \(256\times 256\) and \(128\times 256\times 128\) for the Fundus and OCT modalities. The batch size is \(16\). More experiments on the selection of the parameter \(\lambda\) are reported in **Supplementary S2**.
**Compared Methods & Metrics:** We compare the following six methods. For different fusion-stage strategies, **a) B-EF**, a baseline of the early fusion strategy [10], **b) B-IF**, a baseline of the typical intermediate fusion method, **c) \(M^{2}\)LC**[28], an intermediate fusion method, and the late fusion method **d) TMC**[8] are used as comparisons. In B-EF, the modalities are first integrated at the data level and then passed through the same MedicalNet [5]. B-IF first extracts features with the encoders (the same as ours) and then concatenates the output features for the final prediction. For the uncertainty quantification methods, **e) MCDO** (Monte Carlo Dropout) employs test-time dropout as an approximation of a Bayesian neural network [6], and **f) DE** (Deep Ensemble) quantifies the uncertainties by ensembling multiple models [14]. We adopt the accuracy (ACC) and Kappa metrics for intuitive comparison of the different methods. In particular, the expected calibration error (ECE) [20] is used to compare the calibration of the uncertainty algorithms.
**Comparison and Analysis:** We report the results of our algorithm and the compared methods on the GAMMA and in-house datasets in Tab. 3. First, we compare these methods on clean multimodality eye data. Our method obtains competitive results in terms of ACC and Kappa. Then, to verify the robustness of our model, we added Gaussian noise to the Fundus or OCT modality (\(\sigma=0.1/0.3\)) on the two datasets. Compared with other methods, our EyeMoS\(t\) maintains classification accuracy under a noisy OCT modality, while remaining comparable under a noisy Fundus modality. More generally, we added different levels of Gaussian noise to the Fundus or OCT modality, as shown in Fig. 2, and the same conclusion can be drawn. This is attributed to the perceived long tail in the data when fused. The visual comparisons of different noises applied to the Fundus/OCT modality on the in-house dataset can be found in **Supplementary S2**. To further quantify the reliability of uncertainty estimation, we compared different algorithms using the ECE indicator. As shown in Tab. 3 and Fig. 2, our proposed algorithm performs better on both clean data and data with a single contaminated modality. The inference times of the uncertainty-based methods on two modalities on the in-house dataset are 5.01s (MCDO), 8.28s (DE), 3.98s (TMC), and 3.22s (Ours). It can be concluded that the running time of EyeMoS\(t\) is lower than that of the other methods. In brief, we conclude that our proposed model is more robust and reliable than the above methods.
**Understanding uncertainty for unimodality/multimodality eye data:**
To make progress towards the multimodality ophthalmic clinical application of uncertainty estimation, we conducted unimodality and multimodality uncertainty analyses for eye data. First, we add Gaussian noise with varying variances to a single modality (Fundus or OCT) in the GAMMA and in-house datasets to simulate OOD data. The original samples without noise are denoted as in-distribution (ID) data. Fig. 3 (a) shows a strong relationship between uncertainty and OOD data: the uncertainty of unimodality images increases with the noise level. Here, uncertainty acts as a tool for measuring the reliability of unimodality eye data. Second, we analyze the uncertainty density of the unimodality and fusion modality before and after adding Gaussian noise. As shown in Fig. 3 (b), take, as an example, adding noise with \(\sigma=0.1\) to the Fundus modality on the GAMMA dataset. Before the noise is added, the uncertainty distributions of the unimodality and fusion modality are relatively concentrated. After adding noise, the uncertainty distribution of the fusion modality is closer to that of the modality without noise. Hence, EyeMoS\(t\) can serve as a tool for identifying the reliable modality in ophthalmic multimodality data fusion. To this end, our algorithm can be used as an out-of-distribution detector and data quality discriminator to inform reliable and robust decisions for multimodality eye disease screening.
## 4 Conclusion
In this paper, we propose EyeMoS\(t\) for reliable and robust screening of eye diseases using evidential multimodality fusion. Our EyeMoS\(t\) produces the uncertainty for each unimodality and then adaptively fuses different modalities from a distribution perspective. The NIG evidence priors are employed to model the distribution of the encoder observations, which enables the backbones to directly learn aleatoric and epistemic uncertainty. We then derive an analytical solution for the Student's \(t\) distributions of the NIG evidence priors on the
Figure 2: Accuracy and ECE performance of different algorithms in contaminated single modality with different levels of noise on GAMMA and in-house datasets.
Gaussian likelihood function. Furthermore, we propose the MoS\(t\) distributions to adaptively integrate different modalities in principle, which endows the model with heavy-tailed properties and makes it more robust and reliable for eye disease screening. Extensive experiments show that the robustness and reliability of our method in classification and uncertainty estimation on the GAMMA and in-house datasets are competitive with previous methods. Overall, our approach has the potential to serve as a multimodality eye data discriminator for trustworthy medical AI decision-making.
|
2303.05182 | Inclusive and diffractive dijet photoproduction at the Electron-Ion
Collider in NLO QCD | In the framework of collinear factorization and next-to-leading order (NLO)
perturbative QCD, we make predictions for inclusive and diffractive dijet
photoproduction in electron-proton and electron-nucleus scattering in the EIC
kinematics. We establish kinematic ranges in the ${\bar p}_T$, ${\bar \eta}$,
$x_A^{\rm obs}$ and $x_{\gamma}^{\rm obs}$ variables, quantify sensitivity to
small-$x$ nuclear PDFs, and analyze various scenarios of factorization breaking
in the case of diffractive scattering. | V. Guzey, M. Klasen | 2023-03-09T11:21:56Z | http://arxiv.org/abs/2303.05182v2 | # Ms-Tp-23-08
###### Abstract
In the framework of collinear factorization and next-to-leading order (NLO) perturbative QCD, we make predictions for inclusive and diffractive dijet photoproduction in electron-proton and electron-nucleus scattering in the EIC kinematics. We establish kinematic ranges in the \(\bar{p}_{T}\), \(\bar{\eta}\), \(x_{A}^{\rm obs}\) and \(x_{\gamma}^{\rm obs}\) variables, quantify sensitivity to small-\(x\) nuclear PDFs, and analyze various scenarios of factorization breaking in the case of diffractive scattering.
## 1 Introduction
All currently available information on jet photoproduction on hadrons comes from electron (positron)-proton scattering at the Hadron-Electron Ring Accelerator (HERA); for reviews, see [1, 2, 3]. Provided that the jet transverse momenta \(p_{T}\) are sufficiently large, this process allows one to probe the microscopic quark-gluon structure of the proton and the real photon in quantum chromodynamics (QCD) as well as the strong interaction dynamics in the regime of perturbative QCD (pQCD). The predictions of next-to-leading order (NLO) pQCD provide a good description of the dijet photoproduction cross section measured at HERA as a function of various jet observables in a wide range of \(p_{T}\)[4, 5, 6, 7]. This serves as an important test of the QCD factorization and universality of parton distribution functions (PDFs).
A related important incentive to study photoproduction of jets is that the cross section of this process has enhanced sensitivity to the gluon distribution. As a result, QCD analyses of the combined data on the dijet cross section and the total cross section of lepton-proton deep inelastic scattering (DIS) provide additional constraints on the gluon PDF of the proton, see, e.g. [8]. Similarly, the combination with the available data on the \(F_{2}^{\gamma}(x,Q^{2})\) photon structure function measured in electron-positron annihilation enables one to better constrain the gluon PDF of the real photon [9]. Also, in the case of diffractive dijet photoproduction, one can use this process to analyze the poorly understood mechanism of the QCD factorization breaking in diffractive scattering observed experimentally [10, 11, 12].
It is expected that studies of photoproduction of jets will be continued at the future Electron-Ion Collider (EIC) in the U.S. [13] and the Large Hadron Electron Collider (LHeC) [14] and/or a Future Circular Collider (FCC) [15] at CERN. It will allow one not only to measure this process in a kinematic region complementary to that covered by HERA and with much higher precision, but will also give access for the first time to novel nuclear diffractive PDFs in the case of nuclear beams.
Note that first results on inclusive dijet photoproduction on heavy nuclei have recently been obtained by ATLAS [16] by analyzing lead-lead ultra-peripheral collisions (UPCs) at the Large Hadron Collider (LHC). It was shown in [17] that NLO pQCD provides a good description of these data.
## 2 Inclusive dijet photoproduction in \(eA\) scattering at EIC
As we explained in the Introduction, photoproduction of jets provides complementary information on the partonic structure of hadrons and the photon in QCD. In particular, the process of inclusive dijet photoproduction in lepton-nucleus (\(eA\)) scattering, \(e+A\to e^{\prime}+2\,{\rm jets}+X\), is expected to yield new constraints on nuclear PDFs. Typical leading order (LO) Feynman graphs for this process are shown in Fig. 1: graphs (\(a\)) and (\(b\)) represent the so-called direct-photon and the resolved-photon contributions, respectively. In graph (\(a\)), the photon enters the hard process of the photon-gluon fusion directly as an elementary particle. In contrast, in graph (\(b\)), the photon participates in hard scattering by means of its partonic content, which is hence revealed (resolved) in this process.
In the framework of collinear factorization and NLO pQCD, the \(e+A\to e^{\prime}+2\,{\rm jets}+X\) cross section can be written as the following convolution [18, 19]
\[d\sigma(e+A \to e^{\prime}+2\,{\rm jets}+X)=\sum_{a,b}\int dy\int dx_{\gamma} \int dx_{A}f_{\gamma/e}(y) \tag{1}\] \[\times f_{a/\gamma}(x_{\gamma},\mu^{2})f_{b/A}(x_{A},\mu^{2})d\hat{ \sigma}(ab\to{\rm jets})\,,\]
where \(f_{\gamma/e}(y)\) is the photon flux of the electron with \(y\) being the momentum fraction carried by the photon; \(f_{a/\gamma}(x_{\gamma},\mu^{2})\) are the photon PDFs in the resolved-photon case, which depend on the parton-in-photon momentum fraction \(x_{\gamma}\) and the scale \(\mu\); \(f_{b/A}(x_{A},\mu^{2})\) are nuclear PDFs depending on the parton momentum fraction \(x_{A}\) and the scale \(\mu\); \(d\hat{\sigma}(ab\to\mathrm{jets})\) is the cross section of hard scattering of partons \(a\) and \(b\) into jets. In the direct-photon case, parton \(a\) corresponds to the photon, leading to \(f_{\gamma/\gamma}(x_{\gamma},\mu^{2})=\delta(1-x_{\gamma})\) at LO. In Eq. (1), all involved hard scales have been set to be equal. In our analysis, we identify them with the mean transverse momentum of the two jets, \(\mu=\bar{p}_{T}=(p_{T,1}+p_{T,2})/2\). Note that while the separation between the direct and resolved photons is not unique beyond LO, it is still useful since the direct-photon contribution peaks in the \(x_{\gamma}\to 1\) limit.
Our predictions [20] for the dijet photoproduction in \(eA\) scattering at the EIC are based on the numerical implementation of Eq. (1) combined with the anti-\(k_{T}\) jet clustering algorithm with at most 2 partons in a jet, which was developed in [21, 22, 23]. While the parton momentum fractions \(x_{A}\) and \(x_{\gamma}\) are not directly measurable, they can be approximated using the following hadron-level estimates based on the jet transverse momenta \(p_{T,1}\) and \(p_{T,2}\) and the jet (pseudo)rapidities \(\eta_{1}\) and \(\eta_{2}\),
\[x_{A}^{\mathrm{obs}} = \frac{p_{T,1}e^{\eta_{1}}+p_{T,2}e^{\eta_{2}}}{2E_{A}}\,,\] \[x_{\gamma}^{\mathrm{obs}} = \frac{p_{T,1}e^{-\eta_{1}}+p_{T,2}e^{-\eta_{2}}}{2yE_{e}}\,, \tag{2}\]
where \(E_{A}\) and \(E_{e}\) are the energies of the nucleus and electron beams, respectively. For definiteness, we take \(E_{A}=100\) GeV per nucleon and \(E_{e}=21\)
Figure 1: Typical LO direct-photon (left) and resolved-photon (right) contributions to dijet photoproduction in \(eA\) scattering. The involved momentum fractions \(y\), \(x_{A}\), and \(x_{\gamma}\) are shown in parenthesis.
GeV corresponding to \(\sqrt{s}=92\) GeV [13]. For final-state jets, we assume generic conditions based on the HERA experience: the leading jet has \(p_{T,1}>5\) GeV and the subleading jets carry \(p_{T,i\neq 1}>4.5\) GeV; all jets have \(\eta_{1,2}<4\); the jet cone parameter is \(R=0.4\). Finally, we use the GRV HO photon PDFs [24] and the nCTEQ15 nuclear PDFs [25].
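A short sketch of the hadron-level estimators in Eq. (2) is given below, using the beam energies quoted above (\(E_{A}=100\) GeV per nucleon, \(E_{e}=21\) GeV). The jet kinematics in the example call are made-up illustrative values, not simulation output.

```python
import math

E_A, E_e = 100.0, 21.0   # beam energies in GeV

def x_obs(pT1, eta1, pT2, eta2, y):
    """Return (x_A^obs, x_gamma^obs) for a dijet with jet (pT, eta) pairs
    and photon momentum fraction y, following Eq. (2)."""
    x_A = (pT1 * math.exp(eta1) + pT2 * math.exp(eta2)) / (2.0 * E_A)
    x_gamma = (pT1 * math.exp(-eta1) + pT2 * math.exp(-eta2)) / (2.0 * y * E_e)
    return x_A, x_gamma

# leading jet pT = 6 GeV, subleading 5 GeV, central rapidities, y = 0.5
print(x_obs(6.0, 0.5, 5.0, -0.2, 0.5))
```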
The resulting distributions in the dijet average transverse momentum \(\bar{p}_{T}=(p_{T,1}+p_{T,2})/2\), the dijet average rapidity \(\bar{\eta}=(\eta_{1}+\eta_{2})/2\), and the observed nucleus and photon momentum fractions, \(x_{A}^{\rm obs}\) and \(x_{\gamma}^{\rm obs}\), are shown in Fig. 2. One can see from the figure that at the EIC, the kinematic reach in these variables is \(5<\bar{p}_{T}<20\) GeV, \(-1<\bar{\eta}<2\), \(0.01<x_{A}^{\rm obs}<1\), and \(0.03<x_{\gamma}^{\rm obs}<1\).
Going from the EIC to the LHeC and further to the FCC, the collision energy increases, which dramatically expands the kinematic coverage. In particular, it was shown in [20] that dijet photoproduction in \(eA\) scattering can be probed there at \(5<\bar{p}_{T}<60\) GeV, \(-2<\bar{\eta}<4\)
Figure 2: NLO pQCD predictions for the \(e+A\to e^{\prime}+2\,{\rm jets}+X\) dijet photoproduction cross section in \(eA\) scattering at the EIC as a function of the average dijet transverse momentum \(\bar{p}_{T}\), the average rapidity \(\bar{\eta}\), and the momentum fractions \(x_{A}^{\rm obs}\) and \(x_{\gamma}^{\rm obs}\).
\(10^{-5}-10^{-4}<x_{A}^{\rm obs}<1\), and \(10^{-3}<x_{\gamma}^{\rm obs}<1\).
While the nucleus momentum fraction \(x_{A}^{\rm obs}\) at the EIC has a modest kinematic reach in the small-\(x\) region, the dijet cross section is nevertheless sensitive to nuclear modifications of PDFs: the ratio of the cross sections on the nucleus and the proton as a function of \(x_{A}^{\rm obs}\) exhibits a \(10-20\%\) suppression (nuclear shadowing) at small \(x_{A}^{\rm obs}\) followed by a \(10-20\%\) enhancement at \(x_{A}^{\rm obs}\sim 0.1\) (nuclear antishadowing), which are characteristic for the gluon nuclear PDFs. Note, however, that the magnitude of the observed effects is compatible with sizable uncertainties of the nuclear PDFs. A similar behavior is also obtained when we use the EPPS16 nPDFs [26] as input for our calculations.
## 3 Diffractive dijet photoproduction in lepton-proton and lepton-nucleus scattering at EIC
One of the major HERA physics results is the unexpected observation that diffraction makes up approximately \(10-15\%\) of the total electron-proton (\(ep\)) DIS cross section [2, 3]. Similarly to the case of inclusive scattering, one can define diffractive PDFs in the framework of collinear QCD factorization [27], extract them from the HERA data on the proton diffractive structure functions [28, 29], and test their universality in diffractive dijet and open charm production in DIS [30, 31]. At the same time, it was found that NLO pQCD overestimates the measured cross section of diffractive dijet photoproduction by approximately a factor of 2 [10, 11, 12], which indicates breaking of the QCD factorization. Its mechanism remains unknown: the theory and the data can be made consistent by introducing either the global suppression factor of \(R_{\rm glob}=0.5\), or the suppression factor of \(R_{\rm dir}=0.34\) for the resolved-photon contribution only, or the \(x_{\gamma}\)-dependent suppression factor interpolating between these two scenarios [32].
Diffractive dijet photoproduction corresponds to the situation, when one requires that the target hadron (proton, nucleus) in Fig. 1 stays intact or dissociates into a low-mass excitation. In the proton target case, the \(e+p\to e^{\prime}+2\,{\rm jets}+X^{\prime}+Y\) cross section of diffractive dijet photoproduction in NLO pQCD reads [compare to Eq. (1)]
\[d\sigma(e+p \to e^{\prime}+2\,{\rm jets}+X^{\prime}+Y)=\sum_{a,b}\int dy\int dx_{ \gamma}\int dt\int dx_{I\!\!P}\int dz_{I\!\!P}f_{\gamma/e}(y) \tag{3}\] \[\times f_{a/\gamma}(x_{\gamma},\mu^{2})f_{b/p}^{D(4)}(z_{I\!\!P},\mu^{2 },x_{I\!\!P},t)d\hat{\sigma}(ab\to{\rm jets})\,,\]
where \(f_{b/p}^{D(4)}(z_{I\!\!P},\mu^{2},x_{I\!\!P},t)\) is the so-called diffractive PDF of the proton. It is a conditional probability to find parton \(b\) with the momentum fraction
\(z_{I\!\!P}\) with respect to the diffractive exchange carrying the momentum fraction \(x_{I\!\!P}\) (often called the Pomeron) provided that the final-state proton (or its low-mass excitation \(Y\)) receives the momentum transfer squared \(t\). To further illustrate this concept, it is convenient to assume the so-called Regge factorization for diffractive PDFs, where they are given as the product of the Pomeron flux \(f_{I\!\!P/p}(x_{I\!\!P},t)\) and the PDFs of the Pomeron \(f_{b/I\!\!P}(z_{I\!\!P},\mu^{2})\),
\[f_{b/p}^{D(4)}(z_{I\!\!P},\mu^{2},x_{I\!\!P},t)=f_{I\!\!P/p}(x_{I\!\!P},t)f_{b/I\!\!P}(z_{I\!\!P},\mu^{2})+f_{I\!\!R/p}(x_{I\!\!P},t)f_{b/I\!\!R}(z_{I\!\!P},\mu^{2})\,. \tag{4}\]
In Eq. (4), the second term gives the sub-leading Reggeon contribution, which becomes important only for large \(x_{I\!\!P}>0.03\)[28].
Using the numerical implementation of Eq. (3) discussed above, we make predictions for diffractive dijet photoproduction in \(ep\) scattering at the EIC [33]. In addition to the generic cuts and the energy configuration (\(E_{p}=100\) GeV, \(E_{e}=21\) GeV) discussed in Sec. 2, we take \(|t|<1\) GeV\({}^{2}\), \(M_{Y}<1.6\) GeV, and \(x_{I\!\!P}\leq 0.03\) and use H1 2006 Fit B for proton diffractive PDFs [28].
An example of our predictions is presented in Fig. 3 showing the distributions in the dijet average transverse momentum \(\bar{p}_{T}\) (left) and the photon momentum fraction \(x_{\gamma}^{\rm obs}\) (right). The red solid curves give the full result, where we use only the Pomeron contribution in Eq. (4), the blue dashed curves show the contribution of the gluon diffractive PDF, and the green dotted curves are the direct-photon contribution. One can see from the figure that the coverage in both \(\bar{p}_{T}\) and \(x_{\gamma}^{\rm obs}\) is rather limited. In the accessible range of \(x_{\gamma}^{\rm obs}>0.5\), the cross section is dominated by the contributions of direct photons and point-like quark-antiquark pairs, which makes it difficult
Figure 3: NLO pQCD predictions for the \(e+p\to e^{\prime}+2\,{\rm jets}+X^{\prime}+Y\) cross section of diffractive dijet photoproduction in \(ep\) scattering at the EIC as a function of the average dijet transverse momentum \(\bar{p}_{T}\) and the photon momentum fraction \(x_{\gamma}^{\rm obs}\).
to study the mechanism of factorization breaking mentioned above. Also, the cross section probes large values of \(x_{I\!\!P}\) and \(z_{I\!\!P}\), which results in the dominance of the gluon diffractive PDF.
To extend the kinematic coverage, we repeated our analysis using a larger range in \(x_{I\!\!P}\) up to \(x_{I\!\!P}<0.1\). The results for the \(\bar{p}_{T}\) and \(x_{\gamma}^{\rm obs}\) distributions are presented in Fig. 4. The red solid and blue dashed curves correspond to the Pomeron and Reggeon contributions, see Eq. (4); the green dotted curves give the direct-photon contribution. A comparison to Fig. 3 demonstrates that the use of the \(x_{I\!\!P}<0.1\) range extends the coverage up to \(\bar{p}_{T}<14\) GeV and down to \(x_{\gamma}^{\rm obs}>0.1\). In addition, it brings about the sub-leading Reggeon trajectory, which now contributes at the level of \(10-35\%\) for \(x_{I\!\!P}>0.06\).
We discussed above that NLO pQCD predictions for diffractive dijet photoproduction should in general be supplemented by a factor accounting for the QCD factorization breaking. Since its mechanism involves an interplay of the direct-photon and resolved-photon contributions, the most sensitive observable is the \(x_{\gamma}^{\rm obs}\) distribution. To disentangle competing scenarios of the factorization breaking, one needs a sufficiently large range in \(x_{\gamma}^{\rm obs}\), which in turn requires the highest proton beam energy and high precision, since the cross section falls by two orders of magnitude. Our analysis [33] demonstrated that the assumed pattern of factorization breaking affects mostly the normalization of the \(\bar{p}_{T}\) distribution (and other kinematic distributions) and only rather moderately the shape of the \(x_{\gamma}^{\rm obs}\) distribution.
To better differentiate among different schemes of factorization breaking, one can study diffractive dijet photoproduction in electron-nucleus (\(eA\)) scattering at the EIC, \(e+A\to e^{\prime}+2\,{\rm jets}+X^{\prime}+A\), where nuclei play the role of "filters" for different components of the photon wave function in
Figure 4: Same as Fig. 3, but now with an extended range in \(x_{I\!\!P}<0.1\). The sub-leading Reggeon contribution is shown by the blue dashed lines.
photon-nucleus scattering. In addition, it will allow one to probe the novel nuclear diffractive PDFs.
At small values of \(x_{I\!\!P}\) relevant for diffraction, nuclear diffractive PDFs are expected to be suppressed compared to their free proton counterparts due to nuclear shadowing. In the leading twist approach [34], \(t\)-integrated nuclear diffractive PDFs \(f_{i/A}^{D(3)}(z_{I\!\!P},\mu^{2},x_{I\!\!P})\) are obtained by summing the diagrams corresponding to coherent diffractive scattering on 1, 2, \(\ldots\), \(A\) nucleons of the nuclear target,
\[f_{i/A}^{D(3)}(z_{I\!\!P},\mu^{2},x_{I\!\!P}) = 16\pi f_{i/p}^{D(4)}(z_{I\!\!P},\mu^{2},x_{I\!\!P},t=0) \tag{5}\] \[\times \int d^{2}\vec{b}\left|\frac{1-e^{-\frac{1}{2}(1-i\eta)\sigma_{\rm soft}^{i}(x,\mu^{2})T_{A}(b)}}{(1-i\eta)\sigma_{\rm soft}^{i}(x,\mu^{2})}\right|^{2}\,.\]
Here \(T_{A}(b)=\int dz\rho_{A}(b,z)\) is the nuclear optical density, where \(\rho_{A}(b,z)\) is the nuclear density and \(\vec{b}\) is the transverse position of the interacting nucleon; \(\sigma_{\rm soft}^{i}(x,\mu^{2})\) is the effective soft cross section controlling the strength of the interaction with the target nucleons and \(\eta=0.15\) is the ratio of the real to imaginary parts of the corresponding scattering amplitude. One can see from Eq. (5) that nuclear shadowing explicitly violates the Regge factorization for nuclear diffractive PDFs [compare to the proton case in Eq. (4)].
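For readers who want to experiment with Eq. (5), the impact-parameter integral can be evaluated numerically once a nuclear density and an effective soft cross section are chosen. The sketch below assumes a Woods-Saxon density for Au-197 and an illustrative value of \(\sigma^{i}_{\rm soft}\); these inputs are placeholders, not the ones used for the predictions in this work.

```python
import numpy as np
from scipy.integrate import quad

A, R_A, a_WS = 197, 6.38, 0.535          # Woods-Saxon parameters for Au-197 (fm), assumed
eta = 0.15                               # ratio of real to imaginary parts, as in the text
sigma_soft = 40 * 0.1                    # assumed 40 mb converted to fm^2

# Normalize the Woods-Saxon density so that it integrates to A nucleons
rho0 = A / quad(lambda r: 4 * np.pi * r**2 / (1 + np.exp((r - R_A) / a_WS)), 0, 30)[0]

def T_A(b):
    """Nuclear optical density T_A(b) = int dz rho_A(b, z), in nucleons/fm^2."""
    return 2 * quad(lambda z: rho0 / (1 + np.exp((np.hypot(b, z) - R_A) / a_WS)), 0, 30)[0]

def shadowing_integral():
    """The b-integral in Eq. (5); it multiplies 16*pi*f_{i/p}^{D(4)}(t=0)."""
    def integrand(b):
        amp = 1 - np.exp(-0.5 * (1 - 1j * eta) * sigma_soft * T_A(b))
        amp /= (1 - 1j * eta) * sigma_soft
        return 2 * np.pi * b * abs(amp) ** 2

    return quad(integrand, 0, 3 * R_A)[0]

print(shadowing_integral())
```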
In practice, to estimate yields and kinematic distributions, one can use the numerical observation that the effect of nuclear shadowing in Eq. (5) in most of the kinematics weakly depends on the parton flavor \(i\), the momentum fractions \(z_{I\!\!P}\) and \(x_{I\!\!P}\), and scale \(\mu\). In this case, the nuclear diffractive PDFs are given by the following simple expression,
\[f_{i/A}^{D(3)}(z_{I\!\!P},\mu^{2},x_{I\!\!P})=AR(x,A)f_{i/p}^{D(3)}(z_{I\!\!P},\mu^{2},x_{I\!\!P})\,, \tag{6}\]
where \(A\) is the nucleus atomic mass number and \(R(x,A)\approx 0.65\) is a weak function of \(x\) and \(A\) calculated using Eq. (5). Replacing proton diffractive PDFs by nuclear diffractive PDFs in Eq. (3), one can readily make predictions for the \(e+A\to e^{\prime}+2\,{\rm jets}+X^{\prime}+A\) cross section of coherent dijet photoproduction on nuclei in the EIC kinematics.
Figure 5 shows the \(x_{\gamma}^{\rm obs}\) distribution for the gold nucleus (Au-197) and contrasts two scenarios of the QCD factorization breaking in diffraction: the red solid curve corresponds to the global suppression factor of \(R_{\rm glob}=0.5\) as in the proton case and the blue dashed curve is obtained by applying the \(R_{\rm res}=0.04\) suppression factor to the resolved-photon contribution. One can see from the figure that the two scenarios lead to sufficiently different predictions for \(x_{\gamma}^{\rm obs}<0.5\).
## 4 Conclusions
Photoproduction of dijets is a standard tool of QCD. Its theory is well-established in NLO pQCD, whose predictions compare very well to HERA data. Inclusive and diffractive dijet photoproduction at the EIC are complementary to the respective DIS measurements and can help constrain the usual and diffractive PDFs of the proton and of nuclei. In addition, diffractive dijet photoproduction at the EIC may shed some light on the outstanding problem of factorization breaking. This requires a wide coverage in \(x_{\gamma}^{\rm obs}\), which is provided by the highest proton beam energy and a large range in \(x_{I\!\!P}\), and will benefit from the use of nuclear beams.
## Acknowledgements
The research of V.G. was funded by the Academy of Finland project 330448, the Center of Excellence in Quark Matter of the Academy of Finland (projects 346325 and 346326), and the European Research Council project ERC-2018-ADG-835105 YoctoLHC. The work of M.K. was also funded by the DFG through the Research Training Group 2149 "Strong and Weak Interactions - from Hadrons to Dark Matter" and the SFB 1225 "Isoquant", project-id 273811115.
Figure 5: NLO pQCD predictions for the \(e+A\to e^{\prime}+2\,{\rm jets}+X^{\prime}+A\) cross section of coherent diffractive dijet photoproduction on Au-197 at the EIC as a function of the photon momentum fraction \(x_{\gamma}^{\rm obs}\). The red solid and blue dashed curves correspond to the two assumed schemes of factorization breaking, see text for detail. |
2305.16516 | Ordinal Sums of Numbers | In this paper we consider ordinal sums of combinatorial games where each
summand is a number, not necessarily in canonical form. In doing so we give
formulas for the value of an ordinal sum of numbers where the literal form of
the base has certain properties. These formulas include a closed form of the
value of any ordinal sum of numbers where the base is in canonical form. Our
work employs a recent result of Clow which gives a criteria for an ordinal sum
G : K = H : K when G and H do not have the same literal form, as well as
expanding this theory with the introduction of new notation, a novel ruleset,
Teetering Towers, and a novel construction of the canonical forms of numbers in
Teetering Towers. In doing so, we resolve the problem of determining the value
of an ordinal sum of numbers in all but a few cases appearing in Conway's On
Numbers and Games; thus generalizing a number of existing results and
techniques including Berlekamp's sign rule, van Roode's signed binary number
method, and recent work by Carvalho, Huggan, Nowakowski, and Pereira dos
Santos. We conclude with a list of open problems related to our results. | Alexander Clow, Neil McKay | 2023-05-25T22:49:10Z | http://arxiv.org/abs/2305.16516v1 | # Ordinal Sums of Numbers
###### Abstract
In this paper we consider ordinal sums of combinatorial games where each summand is a number, not necessarily in canonical form. In doing so we give formulas for the value of an ordinal sum of numbers where the literal form of the base has certain properties. These formulas include a closed form of the value of any ordinal sum of numbers where the base is in canonical form. Our work employs a recent result of Clow which gives a criteria for an ordinal sum \(G:K=H:K\) when \(G\) and \(H\) do not have the same literal form, as well as expanding this theory with the introduction of new notation, a novel ruleset Teetering Towers, and a novel construction of the canonical forms of numbers in Teetering Towers. In doing so, we resolve the problem of determining the value of an ordinal sum of numbers in all but a few cases appearing in Conway's _On Numbers and Games_; thus generalizing a number of existing results and techniques including Berlekamp's sign rule, van Roode's signed binary number method, and recent work by Carvalho, Huggan, Nowakowski, and Pereira dos Santos. We conclude with a list of open problems related to our results.
## 1 Introduction
A _combinatorial game_ \(G\) is a two-player game of no chance and perfect information. The players of a combinatorial game are referred to as Left (given female pronouns) and Right (given male pronouns). A game \(G\) is often written \(G\cong\{L(G)|R(G)\}\). Here \(\cong\) denotes that two games are identical, \(L(G)\) is the set of games (options) that Left can move to if she moves first, and \(R(G)\) is the set of games that Right can move to if he moves first. We do not insist on it being either player's turn a priori; this allows a range of algebraic structures to emerge in our analysis of games.
Though combinatorial games in general allow for infinite sequences of moves or returning to a previous position, the games we consider in this paper have neither of these properties; that is, play will end after a finite number of turns regardless of the decisions made by either player. Games with this property are called short or finite. In particular we consider Normal Play games; games where a player loses when they are unable to make a move on their turn.
In this paper we consider two binary operations on games. The more significant of these structures is the abelian group (under normal play) \(\mathbb{G}=(\mathcal{G}/=,+)\) where \(\mathcal{G}=\{\text{all combinatorial games}\}\) and \(+\) is the disjoint sum of two games defined by
\[G+H\cong\{G+L(H),L(G)+H|H+R(G),R(H)+G\}\]
where \(G\cong\{L(G)|R(G)\}\) and \(H\cong\{L(H)|R(H)\}\) and addition of a single game to a set of games is performed pointwise in the expected manner. Inverses are given by \(-G\cong\{-R(G)|-L(G)\}\) (roles of Left and Right are
switched) and \(G\leq H\) if Left wins moving second in \(H-G\). Thus, \(\mathcal{G}/=\) is the set of all games considered modulo equality under this partial order. The class a game belongs to in \(\mathcal{G}/=\) is called its value and has a unique simplest representative called the canonical form. For more on canonical forms see [11].
The group \(\mathbb{G}\) is both natural to consider and significant for the following reasons. The binary operation \(+\) can be intuitively thought of as placing two game boards between the players and allowing players to move on exactly one board per turn. This situation arises naturally in a variety of games where, after some sequence of moves, the game decomposes into several smaller games, where moving in one subgame does not affect the others. In such situations the winner of the sum is determined by the values of the summands. Because of this, value is the primary invariant analysed when studying combinatorial games.
This paper primarily considers another significant binary operation on games, the ordinal sum. Given games \(G\cong\{L(G)|R(G)\}\) and \(H\cong\{L(H)|R(H)\}\) the ordinal sum of \(G\) and \(H\), denoted \(G\!:\!H\) is defined by,
\[G\!:\!H\cong\{L(G),G\!:\!L(H)|R(G),G\!:\!R(H)\}.\]
Intuitively, on their turn either player can move in \(G\) or in \(H\), but should they move in \(G\), then neither player can move in \(H\) for the remainder of play. Ordinal sums are an ideal model for situations where one or both players have the opportunity to delay making critical moves but in such a way that once a critical move is made the ability to delay is no longer available. As the ordinal sum is nonabelian we call the left summand the base and the right summand the subordinate. This is done largely to prevent confusion with the names of each player.
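As a concrete aid to these definitions, a game in literal form can be represented as a pair of option sets, and the disjoint and ordinal sums can then be written down directly from the definitions above. The following is a minimal sketch (not part of the paper's machinery) that builds literal forms without any simplification.

```python
# A game is a pair (Left options, Right options), each a tuple of games.
ZERO = ((), ())                 # { | }
ONE = ((ZERO,), ())             # {0 | }
NEG_ONE = ((), (ZERO,))         # { | 0}

def disjoint_sum(G, H):
    """G + H = { L(G)+H, G+L(H) | R(G)+H, G+R(H) }, as literal forms."""
    (GL, GR), (HL, HR) = G, H
    left = tuple(disjoint_sum(GLi, H) for GLi in GL) + tuple(disjoint_sum(G, HLi) for HLi in HL)
    right = tuple(disjoint_sum(GRi, H) for GRi in GR) + tuple(disjoint_sum(G, HRi) for HRi in HR)
    return (left, right)

def ordinal_sum(G, K):
    """G:K = { L(G), G:L(K) | R(G), G:R(K) }; a move in the base G discards K."""
    (GL, GR), (KL, KR) = G, K
    left = GL + tuple(ordinal_sum(G, KLi) for KLi in KL)
    right = GR + tuple(ordinal_sum(G, KRi) for KRi in KR)
    return (left, right)

two = disjoint_sum(ONE, ONE)        # literal form of 1 + 1
half = ordinal_sum(ONE, NEG_ONE)    # 1:-1, the literal form {0 | 1} of 1/2
assert half == ((ZERO,), (ONE,))
```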
A key tool when analysing ordinal sums is the Colon Principle, which states that if \(G\leq H\), then \(K\!:\!G\leq K\!:\!H\) for all \(K\). Of course this implies that if \(G=H\), then \(K\!:\!G=K\!:\!H\). Unfortunately the claim, if \(G=H\), then \(G\!:\!K=H\!:\!K\), is false. Thus, some knowledge of the literal form, and not just the value, of the base is required. This is particularly significant because in many contexts it is sufficient to consider games in canonical form. The failure of this claim is a major roadblock to studying ordinal sums. As a result ordinal sums have proved very challenging to study in generality. The exceptions to this rule are impartial games, for which there is an exact method to calculate the value of an ordinal sum of two impartial games, given in [4], and numbers, for which some partial results are known.
To see an example of an ordinal sum appearing in an actual game we introduce the well-known ruleset Blue-Red Hackenbush. Blue-Red Hackenbush is played on a rooted graph \(G=(V,E)\) with a 2-coloured (blue and red) edge set. On her turn Left can choose a blue edge \(e\) and delete it from the graph. Additionally, every edge this might disconnect from the root vertex is also deleted. Similarly, on his turn Right can delete red edges. See Figure 1.
Notice that in Figure 1 once \(e_{0}\) is removed by Left or \(e_{1}\) is removed by Right every edge above \(e_{0}\) is also deleted. Thus, \(G\cong H\!:\!K\). Blue-Red Hackenbush is a classic example of a special class of games called numbers. Numbers are games of the form \(G=\{L(G)|R(G)\}\) such that for all \(G^{L}\in L(G)\) and \(G^{R}\in R(G)\), \(G^{L}<G^{R}\). Numbers are significant as they form one of the most well-behaved classes of games. One of
Figure 1: Games of Blue-Red Hackenbush.
the main reasons for this is hinted at by their name, as the values of the games we call numbers are exactly the surreal numbers defined by Conway [7]. In particular, numbers with a finite birthday are exactly the subgroup of the surreals which is isomorphic to the dyadic rationals, denoted \(\mathbb{D}=\{\frac{a}{2^{p}}:a,p\in\mathbb{Z}\}\). In this way we label the values of short numbers by their corresponding element in \(\mathbb{D}\). For example \(\{|\}=0\), \(\{0|\}=1\), and \(\{0|1\}=\frac{1}{2}\).
We think about constructing all values of short numbers as short games in the following way. First, for \(x\geq 0\) if \(x\) is an integer, then \(x+1=\{x|\}\). Second, if \(x\) is not an integer, then \(x=\frac{a}{2^{p}}=\{\frac{a}{2^{p}}-\frac{1}{2^{p}}|\frac{a}{2^{p}}+\frac{1}{2^{p}}\}\)[11]. Supposing \(a\) is odd, \(\frac{a}{2^{p}}-\frac{1}{2^{p}}\) and \(\frac{a}{2^{p}}+\frac{1}{2^{p}}\) can both be expressed with a denominator \(2^{q}\) where \(q<p\). Thus, we can construct new numbers from those we have already generated in a similar manner to how reals are constructed using Dedekind cuts. The difference here is that rather than beginning with rationals to form reals, we take a list of numbers we have generated, then generate the next number by deciding where the new number fits in the total ordering (i.e., which of the existing numbers it is bigger than and which it is smaller than). To construct negative numbers we negate positive numbers. For more on short numbers see Section 2.2. For a broader study of numbers see [6, 7, 8, 9, 11].
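Since the "simplest number in an interval" rule recurs throughout what follows (it is stated below as the Simplicity Theorem), a small sketch of it may help; inputs are assumed to be exact rationals, e.g. `Fraction` objects.

```python
import math
from fractions import Fraction

def simplest_between(a, b):
    """The simplest dyadic rational strictly between a and b (a < b)."""
    a, b = Fraction(a), Fraction(b)
    assert a < b
    if a < 0 < b:
        return Fraction(0)
    if b <= 0:                                  # mirror the non-negative case
        return -simplest_between(-b, -a)
    n = math.floor(a) + 1                       # least integer greater than a
    if n < b:
        return Fraction(n)                      # smallest (hence simplest) integer in (a, b)
    lo, hi = Fraction(math.floor(a)), Fraction(math.floor(a) + 1)
    while True:                                 # dyadic bisection until we land inside (a, b)
        mid = (lo + hi) / 2
        if a < mid < b:
            return mid
        lo, hi = (mid, hi) if mid <= a else (lo, mid)

print(simplest_between(Fraction(1, 4), Fraction(3, 8)))   # 5/16
print(simplest_between(Fraction(1, 2), 5))                # 1
```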
Importantly for our purposes, in [7] Conway constructs the canonical forms of surreals as sign sequences, which correspond to stalks (paths where the rooted vertex is a leaf) in Blue-Red Hackenbush. As we have already seen these are themselves repeated ordinal sums of integers in canonical form. For a detailed description of this see [6]. Determining the values of such positions is well-studied. In particular we highlight Berlekamp's Sign Rule and van Roode's signed binary number method, which are both described in [1] (see pages 134 and 135). Unfortunately neither of these methods leads to closed expressions. Of particular interest to our work is van Roode's signed binary number method, as much of our work in Section 4 can be looked at as a generalization of this method to positions where the base of the sum is not necessarily in canonical form. Ironically, through this generalization we arrive at a closed form for the case where the base of an ordinal sum of numbers is in canonical form (which is exactly what van Roode's method describes). Along with these more classical methods, the problem of determining the value of an ordinal sum of numbers was recently considered by Carvalho et al. [3], who give a generalization of van Roode's method for some ordinal sums where the base is an integer in which only one player has an option.
As a means to examine more complicated ordinal sums of numbers we introduce a novel ruleset which we call Teetering Towers. We introduce a notation for repeated ordinal sums. Let \(\bigodot_{i=1}^{n}G_{i}=G_{1}{:}G_{2}{:}\cdots{:}G_{n}\) for any collection of games \(G_{1},\ldots,G_{n}\). Using this notation we say a _tower_ \(T\) is a game of the form \(T=\bigodot_{i=1}^{n}(b_{i}+r_{i})\) where for all \(i\), \(b_{i}\) is a non-negative integer in canonical form and \(r_{i}\) is a non-positive integer in canonical form. Each \(b_{i}+r_{i}\) is called a story of \(T\). A position in Teetering Towers is a game of the form \(\sum_{j=1}^{m}\bigodot_{i=1}^{n}(b_{j,i}+r_{j,i})\) where \(\sum\) is used in the standard way and \(+\) is the disjunctive sum. For an example of a game of Teetering Towers see Figure 2.
Notice that any position in Teetering Towers where each story is monochromatic (i.e., \(b_{i}=0\) or \(r_{i}=0\)) is exactly a Blue-Red Hackenbush stalk. For example, see Figure 3. Hence, forms (and thus also values)
Figure 2: A game of Teetering Towers. On her turn Left can choose a blue brick and remove it from the tower. Similarly, on his turn Right removes red bricks. When a brick is removed every story above the one from which the brick was removed from is also removed.
from Hackenbush strings arise in Teetering Towers. There are forms in Teetering Towers that do not occur in Hackenbush; we show this explicitly in Section 3. This relationship is deliberate as it allows for games (numbers) to be constructed which are far from being in canonical form, while also allowing for well understood positions to appear. Given known criteria, such as those that appear in [2], it is easy to determine that every position in Teetering Towers is a number.
The paper is structured as follows. In Section 2 we establish ideas and notation that are key to the rest of the paper: we cover equivalence modulo domination as introduced in [5] and its implications for ordinal sums, some important background on numbers, and novel notation which is convenient when considering numbers modulo domination. In Section 3 we demonstrate how the contents of Section 2 can be applied to the ruleset Teetering Towers. In doing so we construct a tower equivalent modulo domination to the canonical form of any number \(x\) as an ordinal sum, but unlike in Conway's Hackenbush stalk/sign sequence model [6, 7], each of our summands has the same sign as \(x\) or is equal to \(0\) (see Theorem 14). Finally, in Section 4 we focus on determining the value of an ordinal sum of numbers for various literal forms of the base of the sum. The primary results are the quite general Theorem 17, which gives a powerful formula for the ordinal sum of a number (base) and an integer (subordinate), as well as Theorem 20, which is a closed-form expression for the value of an ordinal sum of a balanced number with radius \(\frac{-1}{2^{p}}\) (base) and any number as the subordinate. Using Theorem 20 we give a formula for the ordinal sum of any numbers where the base is in canonical form, see Table 3.
## 2 Preliminaries
### Equivalence Modulo Domination
The main tool that enables our analysis of ordinal sums is equivalence modulo domination. We say \(G\) and \(H\) are _equivalent modulo domination_, and we write \(G\triangleq H\), if for every first-player move in \(G+(-H)\) there is a winning response by the second player in the other summand. Although quite natural, to the knowledge of the authors the concept of equivalence modulo domination first appears in the recent thesis of Clow [5].
By definition, \(G\triangleq H\) implies \(G=H\), as \(G\triangleq H\) implies that for every move by the first player there is a winning response by the second player in \(G-H\). Also by definition, for every incentive in \(G\) for the first player, there is an incentive that is equally appealing for the second player in \(-H\) (and vice versa). We use \(\triangleq\) as the notation for equivalence modulo domination due to this second fact, as incentives are often denoted by \(\Delta\) in the literature [11].
Notice that \(G=H\) does not necessarily imply that \(G\triangleq H\). For example, in the difference \(\{-1|1\}-0=0\) the only winning response for the second player is in the same summand as the first player. Thus, \(\{-1|1\}=0\) but \(\{-1|1\}\not\triangleq 0\). Hence, equivalence modulo domination is a refinement of equality. This is natural given two games are equal if and only if they have the same canonical form, where canonical forms are achieved by removing dominated and reversible options. Meanwhile, equivalence modulo domination can be thought of as removing dominated options but not reversible options. For a formal proof of this see Lemma 1.
Figure 3: A game of Teetering Towers that is equivalent to a Hackenbush position.
**Lemma 1**.: _If \(H\) is derived from \(G\) by removing a dominated option, then \(G\triangleq H\)._
Proof.: Let \(H\) be derived from \(G\) by removing a dominated option, say \(G^{\prime}\). Consider \(G-H\). If the first player moves in \(G\) to \(G^{\prime}\), the opponent responds in \(H\) to the option that dominated \(G^{\prime}\). Otherwise, the second player can always respond in the other summand to the same option.
**Theorem 2** (Clow [5]).: _If \(G\triangleq H\), then for all games \(K\), \(G:K\triangleq H:K\) and \(K:G\triangleq K:H\)._
Proof.: Let \(G\triangleq H\). Consider \(G:K-H:K\). Suppose without loss of generality Left moves first in \(G:K\). If her move was in the subordinate, Right mirrors in \(-H:K\), whereas if her move was in the base, then Right responds in \(-H\) as if playing the sum \(G-H\). As \(G\triangleq H\), there exist such moves that are winning. Thus, playing \(G:K-H:K\), if Left (Right) moves first on either summand, then Right (Left) has a winning response on the other. Hence, \(G:K\triangleq H:K\) as required. Observe that \(K:G\triangleq K:H\) follows by a similar mirroring strategy.
**Corollary 3**.: _If \(G\triangleq H\), then for all games \(K\), \(G:K=H:K\)._
The converses of Theorem 2 and Corollary 3 are false, as we see in the following example: \(\{0|3\}=1=\{\frac{1}{2}|3\}\) and \(\{0|3\}\not\triangleq\{\frac{1}{2}|3\}\), yet \(\{0|3\}:1\triangleq\{1|3\}\triangleq\{\frac{1}{2}|3\}:1\). Also of note is that Theorem 2 implies Corollary 4, a known result which deals with canonical forms in the base of an ordinal sum.
**Corollary 4** ([7, 10]).: _If \(G\) has no reversible options and \(K\) is the canonical form of \(G\), then for all \(H\), \(G:H=K:H\)._
Proof.: If \(G\) has no reversible options then either \(G\) is its own canonical form (\(G\cong K\)) at which point the statement is trivial or \(G\) has some set of dominated options, whose removal would result in \(K\). Thus, Lemma 1 implies \(G\triangleq K\). The result follows immediately.
### Fundamental Results About Numbers
In this subsection we cover some facts about numbers in relation to ordinal sums. We encourage readers to spend some time with this short section as many of the facts presented here are critical to the rest of the paper.
First, recall that \(G\) is a number if \(G^{L}<G<G^{R}\) for all \(G^{L}\in L(G)\) and \(G^{R}\in R(G)\). We note the following proposition which is well known in the literature.
**Proposition 5** (Simplicity Theorem).: _If \(G=\{a|b\}\) where \(a<b\) are numbers, then \(G=x\) where \(x\) is the unique number with the smallest birthday on the open interval \((a,b)\)._
A proof is omitted here, but can be found in [11]. Next, consider Proposition 6.
**Proposition 6**.: _Let \(G\) be a number whose options are numbers and \(H\) be any game. If \(L(H)\neq\emptyset\), then \(G^{L}\) is strictly dominated in \(G:H\)._
Proof.: We claim for any \(H^{L}\in L(H)\), \(G^{L}-G:H^{L}<0\). Consider the difference \(G^{L}-G:H^{L}\). Right wins moving first by moving \(-G:H^{L}\) to \(-G^{L}\). If Left moves first, then she must play in \(G^{L}\) or \(-G:H^{L}\). If Left moves \(G^{L}\) to some \(G^{LL}\), then \(G^{LL}<G^{L}\) as \(G^{L}\) is a number, implying that Right moving \(-G:H^{L}\) to \(-G^{L}\) is a winning response for Right. Similarly, if Left moves in the subordinate of \(-G:H^{L}\), then Right wins by moving in the base to \(-G^{L}\). Finally, if Left moves \(-G:H^{L}\) to \(-G^{R}\), then this is losing as \(G\) is a number implies \(G^{L}<G<G^{R}\), hence \(G^{L}-G^{R}<0\).
**Corollary 7**.: _If \(G\) is number whose options are numbers, then \(G:1\triangleq\{G|R(G)\}\). Similarly \(G:-1\triangleq\{L(G)|G\}\)._
**Proposition 8**.: _If the options of \(G\) are numbers and both players have at least one option, then \(G\triangleq\{a|b\}\) for some numbers \(a\) and \(b\) in canonical form._
Proof.: This follows from Lemma 1 and that numbers are totally ordered (comparable).
### Balls
In this section we define a special class of games we call balls due to their similarity to intervals in \(\mathbb{R}\). We care about these because all numbers whose options are numbers can be reduced to a ball under equivalence modulo domination, and special balls are well-behaved summands in an ordinal sum.
We begin by noticing that many games with two options, such as non-integer numbers in canonical form, have the property that the options have a common difference with some game between them. That is, if \(\frac{a}{2^{p}}\), where \(a=2b+1\), is in canonical form, then \(\frac{a}{2^{p}}\cong\{\frac{b}{2^{p-1}}|\frac{b+1}{2^{p-1}}\}\). With this in mind, we say a game \(\{x|y\}\) where \(x\) and \(y\) are in canonical form is a _ball_ of radius \(\Delta\) centered at \(m\) if \(x=m+\Delta\) and \(y=m-\Delta\) where \(m\) and \(\Delta\) are games. We denote such a game by \(\operatorname{B}\left(m,\Delta\right)\) where \(m\) is the _midpoint_ and \(\Delta\) is the _radius_. Thus, the canonical form of \(\frac{a}{2^{p}}\) is \(\operatorname{B}\left(\frac{a}{2^{p}},-\frac{1}{2^{p}}\right)\).
In this paper, we only consider \(\Delta\) and \(m\) and \(\operatorname{B}\left(m,\Delta\right)\) to be numbers. This implies \(\Delta<0\). Unless otherwise stated assume that every game given using the notation \(\operatorname{B}\left(m,\Delta\right)\) is a number.
A ball \(G\cong\{m+\Delta|m-\Delta\}\) is _balanced_ if \(G+G=m+m\). Equivalently, a ball \(G\cong\{a|b\}\) is _balanced_ if \(G+G=a+b\). We say a game (not necessarily a ball) is balanced if it is equivalent modulo domination to a balanced ball. Examples of balanced balls are shown in Table 2. That is, a game is balanced when its value is the mean of its best options. Rather than giving the definition in this informal way, we insist on the former definition in service of giving a definition that remains well defined when \(m,\Delta\) are general games (where division is not defined).
**Lemma 9**.: _If \(m\) and \(\operatorname{B}\left(m,\Delta\right)\) are numbers and \(\operatorname{B}\left(m,\Delta\right)\) is balanced, then \(\operatorname{B}\left(m,\Delta\right)=m\)._
Proof.: As the game is balanced, from the definition \(\operatorname{B}\left(m,\Delta\right)+\operatorname{B}\left(m,\Delta\right)=m +m\). As this implies \(\operatorname{B}\left(m,\Delta\right)-m=-(\operatorname{B}\left(m,\Delta \right)-m)\) which is a number, it must be that \(\operatorname{B}\left(m,\Delta\right)-m=0\). Hence \(\operatorname{B}\left(m,\Delta\right)=m\) as required.
**Proposition 10**.: _If \(\operatorname{B}\left(m,\Delta\right)\) is balanced and \(m=\frac{a}{2^{p}}\neq 0\) where \(a\) is odd or \(p=0\), then \(-\frac{1}{2^{p}}\leq\Delta<0\)._
Proof.: Recall that \(\Delta<0\) is assumed. So if the statement is false for some balanced \(\operatorname{B}\left(m,\Delta\right)\), then \(\Delta<-\frac{1}{2^{p}}\). Recall that \(\operatorname{B}\left(m,\Delta\right)\cong\{m+\Delta|m-\Delta\}=x\) where \(x\) is the number with the smallest birthday on the interval \((m+\Delta,m-\Delta)\). Thus, if \(\Delta<-\frac{1}{2^{p}}\), then \(\frac{a-1}{2^{p}},\frac{a+1}{2^{p}}\in(m+\Delta,m-\Delta)\), at least one of which has a smaller birthday than \(m=\frac{a}{2^{p}}\). Then \(\operatorname{B}\left(m,\Delta\right)\neq m\), contradicting the fact that \(\operatorname{B}\left(m,\Delta\right)\) is balanced implies \(\operatorname{B}\left(m,\Delta\right)=m\) by Lemma 9. Given this contradiction, it must be the case that \(-\frac{1}{2^{p}}\leq\Delta<0\) as required.
\begin{table}
\begin{tabular}{l l l} \multicolumn{2}{l}{Literal form, \(G\)} & \multicolumn{1}{l}{Description} \\ \hline \(\{1|3\ast\}\) & \(G=2\) & a number \\ \(\{1|3,4\}\) & \(G^{L}<G^{R}\) & a number whose options are in canonical form \\ \(\{1|5\}\) & \(1=3-2\), \(5=3+2\) & an interval number \\ \(\{1|3\}\) & \(1=\{1|3\}-1\), \(3=\{1|3\}+1\) & a balanced interval number \\ \(\{1|\}\) & \(\{1|\}\cong 2\) & a number in canonical form \\ \end{tabular}
\end{table}
Table 1: Some forms of the value \(2\)
\begin{table}
\begin{tabular}{l l l} Interval game & Literal form & Value \\ \hline \(\operatorname{B}\left(\frac{3}{4},-\frac{1}{4}\right)\) & \(\{\frac{1}{2}|1\}\) & \(\frac{3}{4}\) \\ \(\operatorname{B}\left(2,-1\right)\) & \(\{1|3\}\) & \(2\) \\ \(\operatorname{B}\left(0,-1\right)\) & \(\{-1|1\}\) & \(0\) \\ \(\operatorname{B}\left(0,1\right)\) & \(\{1|-1\}\) & \(\pm 1\) \\ \end{tabular}
\end{table}
Table 2: Some balanced balls.
To formalize the language of this proposition, we say that the _canonical radius_ of a number is \(-1\) if the game is an integer and the radius of its canonical form otherwise. Then the canonical radius of \(m=\frac{a}{2^{p}}\) is \(-\frac{1}{2^{p}}\). This notation is adopted as Proposition 10 has shown that the radius of a balanced number is bounded below by its canonical radius.
## 3 Numbers as Towers
Forms, and thus also values, from Hackenbush strings arise in Teetering Towers. However, there are forms in Teetering Towers that do not occur in Hackenbush. An example is Figure 2. Theorem 14 constructs a family of examples.
In this section we describe the form that games in Teetering Towers can take, in particular the forms of a single tower. As recovering the form of a sum of games whose options are well understood is not difficult, this is sufficient to give a general description of the forms of a position in Teetering Towers.
**Lemma 11**.: _Let \(n\) be any non-zero integer. Then \(n-n\triangleq 1-1\cong\{-1|1\}\)._
Proof.: Consider the difference \(n-n\). Then,
\[n-n\cong\{n-1|\}+\{|-n+1\}\cong\{n-1-n|n-n+1\}\triangleq\{-1|1\}\cong 1-1.\]
This is the desired result.
Due to this, the game \(1-1\) is surprisingly significant for Teetering Towers, particularly those of value \(0\). Thus, we denote the game \(1-1\) by the special name \(\underline{0}\). By the Colon Principle it is easy to see that towers of value \(0\) are exactly those where each story has value \(0\).
**Lemma 12**.: _Let \(n,m\) be non-zero integers such that \(n>m\). If \(n-m=k\), then \(n-m\triangleq\{k-1|k+1\}\triangleq k+\underline{0}\)._
Proof.: The proof is similar to the proof of Lemma 11,
\[n-m\cong\{n-1|\}+\{|-m+1\}\cong\{n-1-m|n-m+1\}\triangleq\{k-1|k+1\}\triangleq \{k-1|\}+\{-1|1\}\cong k+\underline{0}.\]
Implying the desired equality.
For convenience we adopt the notation \(\underline{k}\cong k+\underline{0}\). Notice that every story of a teetering tower is an integer \(k\) in canonical form or is equivalent modulo domination to \(\underline{k}\) for some \(k\). Given this it is useful to consider the following ordinal sum.
**Lemma 13**.: _If \(\operatorname{B}\left(m,-\frac{1}{2^{p}}\right)\) is balanced, then_
\[\operatorname{B}\left(m,-\frac{1}{2^{p}}\right):\underline{0}\triangleq \operatorname{B}\left(m,-\frac{1}{2^{p+1}}\right).\]
_which is also balanced._
Proof.: We compute the given sum directly,
\[\operatorname{B}\left(m,-\frac{1}{2^{p}}\right):\underline{0} \triangleq\{\operatorname{B}\left(m,-\frac{1}{2^{p}}\right):-1| \operatorname{B}\left(m,-\frac{1}{2^{p}}\right):1\}\] \[\triangleq\{\{m-\frac{1}{2^{p}}|m\}|\{m|m+\frac{1}{2^{p}}\}\}\] \[\triangleq\{\operatorname{B}\left(m-\frac{1}{2^{p+1}},-\frac{1}{ 2^{p+1}}\right)|\operatorname{B}\left(m+\frac{1}{2^{p+1}},-\frac{1}{2^{p+1}} \right)\}\] \[\triangleq\{m-\frac{1}{2^{p+1}}|m+\frac{1}{2^{p+1}}\}\triangleq \operatorname{B}\left(m,-\frac{1}{2^{p+1}}\right)\]
as required. Observe that \(\mathrm{B}\left(m,-\frac{1}{2^{p+1}}\right)=m\) as \(m\) is the simplest number on the interval \((m-\frac{1}{2^{p}},m+\frac{1}{2^{p}})\) by assumption that \(\mathrm{B}\left(m,-\frac{1}{2^{p}}\right)\) is balanced, which implies that \(m\) is the simplest number on \((m-\frac{1}{2^{p+1}},m+\frac{1}{2^{p+1}})\) as well. Now \(\mathrm{B}\left(m,-\frac{1}{2^{p+1}}\right)=m\) implies \(\mathrm{B}\left(m,-\frac{1}{2^{p+1}}\right)\) is balanced by definition.
Notice that if \(\mathrm{B}\left(m,-\frac{1}{2^{p}}\right)\) is not balanced, then we cannot suppose \(\mathrm{B}\left(m,-\frac{1}{2^{p}}\right):-1=m-\frac{1}{2^{p+1}}\) or \(\mathrm{B}\left(m,-\frac{1}{2^{p}}\right):1=m+\frac{1}{2^{p+1}}\). As a result the assumption of balancedness in Lemma 13 cannot be relaxed.
In Hackenbush all numbers appear as stalks and this construction relies on edges of both colors. Thus, there must be summands which are both positive and negative to form each number as a Hackenbush stalk. In a tower, each non-zero story has a player with more of their color. We can build a tower of any given value where each story has at least as many bricks for the same player.
**Theorem 14**.: _Let \(x\in\mathbb{D}\) where \(x\geq 0\), then there exists a tower \(T=x\) where for all stories \(S\) of \(T\), \(S\geq 0\)._
Proof.: We prove this by induction on the exponent in the denominator of the value, but we are showing that there is a tower that is equivalent modulo domination to a ball centered at the value with its canonical radius. When \(x\geq 0\) is an integer take \(T\cong\underline{x}\triangleq\mathrm{B}\left(x,-1\right)=x\). Suppose that \(x=\frac{a}{2^{p}}\) is a smallest counterexample; if \(y=\frac{b}{2^{q}}\) where \(q<p\), then there is a tower \(T_{y}\) which is equivalent modulo domination to \(\mathrm{B}\left(y,r\right)\), where \(r\) is the canonical radius of \(y\), such that every story of \(T_{y}\) is non-negative.
As \(x\) is a smallest counterexample, \(x\) is not an integer. The canonical form of \(x\) is \(\{x-2^{-p}|x+2^{-p}\}\cong\mathrm{B}\left(x,-\frac{1}{2^{p}}\right)\). Notice that \(z=x-2^{-p}=\frac{a-1}{2^{p}}=\frac{c}{2^{p-l}}\) for some odd number \(c=\lfloor\frac{a}{2^{l}}\rfloor\) if \(p-l>0\), and \(z=\lfloor x\rfloor\) otherwise. By the minimality of \(x\), there is a tower \(T_{z}\) equivalent modulo domination to \(\mathrm{B}\left(z,r\right)\), where \(r=-\frac{1}{2^{p-l}}\) is the canonical radius of \(z\), such that every story of \(T_{z}\) is non-negative.
We claim \(T\cong T_{z}:(\bigodot_{i=1}^{l-1}\underline{0}):1=x\). Observe that by induction and Lemma 13,
\[T\triangleq\mathrm{B}\left(z,r\right):(\bigodot_{i=1}^{l-1} \underline{0}):1\triangleq\mathrm{B}\left(z,\frac{r}{2^{l-1}}\right):1\triangleq \mathrm{B}\left(z,-\frac{1}{2^{p-1}}\right):1\cong\{z-\frac{1}{2^{p-1}}|z+ \frac{1}{2^{p-1}}\}:1\] \[\triangleq\{z|z+\frac{1}{2^{p-1}}\}\triangleq\{x-\frac{1}{2^{p}}| x-\frac{1}{2^{p}}+\frac{1}{2^{p-1}}\}\triangleq\{x-\frac{1}{2^{p}}|x+\frac{1}{2^{p}} \}\cong\mathrm{B}\left(x,-\frac{1}{2^{p}}\right)=x\]
This completes the proof.
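The recursive construction in this proof is easy to carry out mechanically. The sketch below (an illustration, not part of the paper) takes a dyadic rational \(x\geq 0\) and returns the stories of a tower equal to \(x\), from the bottom story up, encoding each story \(b_{i}+r_{i}\) as a pair of (blue, red) brick counts.

```python
from fractions import Fraction

def tower_for(x):
    x = Fraction(x)
    assert x >= 0 and (x.denominator & (x.denominator - 1)) == 0   # dyadic rational
    if x.denominator == 1:                      # integer k: single story k-underline = (k+1) - 1
        return [(int(x) + 1, 1)]
    p = x.denominator.bit_length() - 1          # x = a / 2^p with a odd
    z = x - Fraction(1, 2 ** p)                 # Left option of the canonical form of x
    l = p - (z.denominator.bit_length() - 1)    # how much simpler z is than x
    # T_z, then (l-1) copies of 0-underline, then a single blue brick on top
    return tower_for(z) + [(1, 1)] * (l - 1) + [(1, 0)]

print(tower_for(Fraction(1, 8)))     # [(1, 1), (1, 1), (1, 1), (1, 0)]
print(tower_for(Fraction(13, 16)))   # [(1, 1), (1, 0), (1, 0), (1, 1), (1, 0)]
```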
**Corollary 15**.: _If \(x\in\mathbb{D}\) is a non-integer in canonical form, then there exists a tower \(T\triangleq x\) where for all stories \(S\) of \(T\), \(S\cong\underline{0}\) or \(S\cong 1\) or \(S\cong\underline{k}\) (\(k>0\)). Here \(S\cong\underline{k}\) only if \(S\) is the bottom story of the tower \(T\)._
Aside from being quite a nice aesthetic result, Corollary 15 has some practical applications. In particular, analysing play in proofs involving ordinal sums (of the form \(x:y\)) can be tedious if \(y\) is expressed as a long Hackenbush string. This is because it is often convenient to consider \(a:b\cong x:y\), where \(a=x:c\) and \(y=c:b\), rather than \(x\) and \(y\) directly, and when \(y\) is a Hackenbush string it can often be the case that the sign of \(b\) switches erratically as play progresses. This will not be the case if \(y\) is a tower in the form of Corollary 15, as each story (except perhaps the bottom story) has value \(0\) or \(1\) and whenever Right moves in a story of value \(0\), the value of that story becomes \(1\).
## 4 Values of Ordinal Sums
Unlike Section 3 this section is not concerned with a particular rule set. Rather we are interested in determining the values of ordinal sums independent of the rule set where they arise. A natural first case to consider is ordinal sums equal to \(0\). Notice that as numbers are totally ordered the Colon Principle implies that an ordinal sum of numbers is equal to \(0\) if and only if each summand is equal to \(0\). Hence, we will focus primarily on non-zero ordinal sums as sums equal to \(0\) are easy to characterize. But before moving on we note the following example of an ordinal sum with value \(0\) as it provides an important insight about balanced games and is also a good exercise in computing ordinal sums for the reader;
\[\{-1/2|1\}:\{-1|1/2\}\triangleq\{\{-1/2|1\}:-1|\{-1/2|1\}:\frac{1}{2}\}\] \[\triangleq\{\{-\frac{1}{2}|0\}|\{-1/2|1\}:\{0|1\}\}\] \[\triangleq\{-\frac{1}{4}|\{\{-1/2|1\}|\{-1/2|1\}:1\}\}\] \[\triangleq\{-\frac{1}{4}|\{0|\{0|1\}\}\}\] \[\triangleq\{-\frac{1}{4}|\{0|\frac{1}{2}\}\}\] \[\triangleq\{-\frac{1}{4}|\frac{1}{4}\}\cong\mathrm{B}\left(0,-\frac{1}{4}\right)\]
Observe that neither summand is balanced but the sum is balanced. In general we can also say that the ordinal sum of two balanced numbers need not be balanced. For example \(\frac{1}{2}:\underline{1}\) is not balanced. We now proceed to the primary result of this section.
**Lemma 16**.: _Let \(G\cong\{a|b\}\) such that \(a,b,G\) are numbers and \(b-G\leq 1\). If \(p>0\) is the least integer such that \(\frac{1}{2^{p}}<b-G\), then_
\[G:1=G+\frac{1}{2^{p}}.\]
_Similarly whenever \(G-a\leq 1\), \(G:-1=G-\frac{1}{2^{q}}\), where \(q>0\) is the least integer satisfying \(\frac{1}{2^{q}}<G-a\)._
Proof.: Let \(G=\{a|b\}\) such that \(a,b,G\) are numbers. Recall that \(G:1=x\) where \(x\in(G,b)\) is the simplest number on the interval from \(G\) to \(b\). As \(b-G\leq 1\) and \(b-G\) is not an infinitesimal there is a \(p>0\) such that \(\frac{1}{2^{p}}<b-G\). Then by the least integer principle there is a least integer \(p>0\) such that \(\frac{1}{2^{p}}<b-G\). Then \(b-G\leq\frac{1}{2^{p-1}}\leq r\leq 1\) where \(r\) is the canonical radius of \(G\).
Note that \(G:1\triangleq\{G|G+(b-G)\}=\{G|G+\frac{1}{2^{p}}\}\) (see Corollary 7). We claim that \(\{G|G+\frac{1}{2^{p-1}}\}=G+\frac{1}{2^{p}}\). Consider the difference,
\[\{G|G+\frac{1}{2^{p-1}}\}-G-\frac{1}{2^{p}}.\]
If Right moves first they may move \(\{G|G+\frac{1}{2^{p-1}}\}\) to \(G+\frac{1}{2^{p-1}}\), \(-G\) to \(-a\) or \(-\frac{1}{2^{p}}\) to \(0\). All of these moves are obviously losing. Similarly if Left moves first, then they may move \(\{G|G+\frac{1}{2^{p-1}}\}\) to \(G\) or \(-G\) to \(-b\) or \(-\frac{1}{2^{p}}\) to \(-\frac{1}{2^{p-1}}\). These are also losing. So \(\{G|G+\frac{1}{2^{p-1}}\}-G-\frac{1}{2^{p}}=0\) implying \(\{G|G+\frac{1}{2^{p-1}}\}=G+\frac{1}{2^{p}}\). Equivalently \(G+\frac{1}{2^{p}}\) is the simplest number on the interval \((G,G+\frac{1}{2^{p-1}})\). As \((G,b)\subseteq(G,G+\frac{1}{2^{p-1}})\), \(x=G+\frac{1}{2^{p}}\) as required. The case of \(G:-1\) follows by a nearly identical argument.
Figure 4: A tower with value \(\frac{1}{8}\) and a tower with value \(\frac{13}{16}\) as in Corollary 15.
If we relax our assumption that \(b-G\leq 1\) then Lemma 16 is false. As an example consider \(G=\{0|4\}=1\) and the sum
\[G:1=2\neq 3=G+\frac{1}{2^{(-1)}}.\]
The important thing for our purposes is that games such as this do not often appear in familiar rulesets and are somewhat artificial when you consider numbers as being constructed from ordinal sums.
**Theorem 17** (General Ordinal Sum Theorem).: _Let \(k>0\) be an integer and \(G\cong\{a|b\}\) be numbers. If \(b-G\leq 1\) and \(\alpha_{i}>0\) is the least integer such that \(2^{-\alpha_{i}}<b-G-(\sum_{j=1}^{i-1}2^{-\alpha_{j}})\), then_
\[G:k=G+\sum_{i=1}^{k}\frac{1}{2^{\alpha_{i}}}.\]
_Similarly if \(G-a\leq 1\) and \(\beta_{i}\) is the least integer such that \(2^{-\beta_{i}}<G-a-(\sum_{j=1}^{i-1}2^{-\beta_{j}})\), then_
\[G:-k=G-\sum_{i=1}^{k}\frac{1}{2^{\beta_{i}}}.\]
Proof.: We proceed by induction on \(k\). If \(k=1\), then the result follows by Lemma 16. Suppose then that \(k>1\) and for all \(1\leq t<k\), \(G:t=G+\sum_{i=1}^{t}2^{-\alpha_{i}}\) and \(G:-t=G-\sum_{i=1}^{t}2^{-\beta_{i}}\). Notice that, \(\alpha_{k}\) is the least integer such that \(2^{-\alpha_{k}}<b-G:(k-1)=b-G-\sum_{i=1}^{k-1}2^{-\alpha_{i}}\). Thus, Lemma 16 implies
\[G:k\cong G:(k-1):1\triangleq\{G:(k-2)|b\}:1\] \[=G:(k-1)+\frac{1}{2^{\alpha_{k}}}=G+\sum_{i=1}^{k}\frac{1}{2^{ \alpha_{i}}}\]
as required. Similarly, \(\beta_{k}\) is the least integer such that \(2^{-\beta_{k}}<(G:-(k-1))-a=G-a-\sum_{i=1}^{k-1}2^{-\beta_{i}}\). Thus, Lemma 16 implies
\[G:-k\cong G:-(k-1):-1\triangleq\{a|G:-(k-2)\}:-1\] \[=(G:-(k-1))-\frac{1}{2^{\beta_{k}}}=G-\sum_{i=1}^{k}\frac{1}{2^{\beta_{i}}}.\]
This concludes the proof.
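The proof of Theorem 17 translates directly into a greedy computation: each unit of the subordinate integer adds the largest power of \(\frac{1}{2}\) that still fits strictly inside the remaining gap towards the Right option. A small sketch (illustrative, with dyadic inputs assumed) follows.

```python
from fractions import Fraction

def ordinal_sum_base_with_integer(G, b, k):
    """Value of G:k for an integer k > 0, where G = {a|b} is a number with b - G <= 1."""
    value, gap = Fraction(G), Fraction(b) - Fraction(G)
    assert 0 < gap <= 1 and k > 0
    for _ in range(k):
        alpha = 1
        while Fraction(1, 2 ** alpha) >= gap:   # least alpha_i with 2^(-alpha_i) < gap
            alpha += 1
        value += Fraction(1, 2 ** alpha)
        gap -= Fraction(1, 2 ** alpha)
    return value

# {0|1} : 2 = 7/8, in agreement with Corollary 18 below (p = 1, k = 2)
print(ordinal_sum_base_with_integer(Fraction(1, 2), 1, 2))
```

The symmetric case \(G:-k\) is obtained in the same way, working towards the Left option \(a\) instead of \(b\).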
**Corollary 18**.: _If \(G\cong\{a|b\}\) such that \(a,b,G\) are numbers and \(b-G=\frac{1}{2^{p}}\) for some \(p\), then_
\[G:k=G+\frac{1}{2^{p}}-\frac{1}{2^{p+k}}.\]
_where \(k>0\) is an integer._
Proof.: Observe that as \(b-G=\frac{1}{2^{p}}\), for all \(i\geq 1\), \(\alpha_{i}=p+i\), that is \(2^{-(p+i)}<b-G-(\sum_{j=1}^{i-1}2^{-\alpha_{j}})=\frac{1}{2^{p}}-(\sum_{j=1}^{i-1}2^{-(p+j)})\). Then by Theorem 17, \(G:k=G+\sum_{i=1}^{k}\frac{1}{2^{p+i}}\). Recalling the well-known identity \(\sum_{i=1}^{m}\frac{1}{2^{i}}=1-\frac{1}{2^{m}}\), notice that,
\[G :k=G+\sum_{i=1}^{k}\frac{1}{2^{p+i}}=G+(\sum_{i=1}^{p+k}\frac{1}{2^{ i}})-(\sum_{j=1}^{p}\frac{1}{2^{j}})\] \[=G+(1-\frac{1}{2^{p+k}})-(1-\frac{1}{2^{p}})=G+\frac{1}{2^{p}}- \frac{1}{2^{p+k}}\]
as required.
**Corollary 19**.: _If \(G\cong\{a|b\}\) such that \(a,b,G\) are numbers and \(G-a=\frac{1}{2^{p}}\) for some \(p\), then_
\[G:-k=G-\frac{1}{2^{p}}+\frac{1}{2^{p+k}}.\]
_where \(k>0\) is an integer._
Proof.: Proof follows by an almost identical argument to Corollary 18.
Recall that any non-integer \(\frac{a}{2^{p}}\) in canonical form is identically \(\mathrm{B}\left(\frac{a}{2^{p}},-\frac{1}{2^{p}}\right)\). Thus, the value of any ordinal sum of the form \(\frac{a}{2^{p}}:k\) where \(k\) is an integer is given by Corollary 18 or Corollary 19. With slightly more work we can also show other identities where \(k\) is not an integer.
For example we now show the following identity \(\frac{1}{2^{p}}:\frac{1}{2^{q}}=\frac{1}{2^{p}}+\frac{1}{2^{p+q+1}}\) given in [1] page 240 as a demonstration of Lemma 16 and Corollary 19. Observe that \(\frac{1}{2^{p}}\cong\mathrm{B}\left(\frac{1}{2^{p}},-\frac{1}{2^{p}}\right)\) and \(\frac{1}{2^{q}}\cong 1:-q\). Then applying Lemma 16 and Theorem 17,
\[\frac{1}{2^{p}} :\frac{1}{2^{q}}\cong\mathrm{B}\left(\frac{1}{2^{p}},-\frac{1}{2 ^{p}}\right):1:-q\] \[\triangleq\mathrm{B}\left(\frac{1}{2^{p}}+\frac{1}{2^{p+1}},- \frac{1}{2^{p+1}}\right):-q\] \[=\frac{1}{2^{p}}+\frac{1}{2^{p+1}}-\frac{1}{2^{p+1}}+\frac{1}{2^ {p+q+1}}\] \[=\frac{1}{2^{p}}+\frac{1}{2^{p+q+1}}.\]
It is easy to see that the same argument implies the more general statement that \(\frac{a}{2^{p}}:\frac{1}{2^{q}}=\frac{a}{2^{p}}+\frac{1}{2^{p+q+1}}\) whenever \(a\) is odd. In the same vein we can show other identities. For example, we claim that \(\frac{a}{2^{p}}:(1-\frac{1}{2^{q}})=\frac{a}{2^{p}}+\frac{1}{2^{p+1}}-\frac{1} {2^{p+q+1}}\) where \(a\) is odd. Proceeding as before, \(\frac{a}{2^{p}}\cong\mathrm{B}\left(\frac{a}{2^{p}},-\frac{1}{2^{p}}\right)\) and \(1-\frac{1}{2^{q}}=1:-1:(q-1)\). Then,
\[\frac{a}{2^{p}} :(1-\frac{1}{2^{q}})\cong\mathrm{B}\left(\frac{a}{2^{p}},-\frac{1 }{2^{p}}\right):1:-1:(q-1)\] \[\triangleq\mathrm{B}\left(\frac{a}{2^{p}}+\frac{1}{2^{p+1}}- \frac{1}{2^{p+2}},-\frac{1}{2^{p+2}}\right):(q-1)\] \[=\frac{a}{2^{p}}+\frac{1}{2^{p+1}}-\frac{1}{2^{p+2}}+\frac{1}{2^ {p+2}}-\frac{1}{2^{p+2+(q-1)}}\] \[=\frac{a}{2^{p}}+\frac{1}{2^{p+1}}-\frac{1}{2^{p+q+1}},\]
which is the desired result. With these examples as intuition we now prove a much more general identity that applies to any ordinal sum of numbers where the base is balanced. Note this implies that the following applies whenever the base of the ordinal sum is a non-integer in canonical form.
**Theorem 20** (Balanced Ordinal Sum Theorem).: _If \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right)\) is balanced and \(n\) is a number, then for all integers \(m\geq 0\) and odd integers \(0<a<2^{q}\) or integer \(a=0\), such that \(m+a2^{-q}>0\);_
\[\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):(m+\frac{a}{2^{q}})=n+\frac{1}{2^{p} }-\frac{1}{2^{p+m}}+\frac{a}{2^{p+m+q+1}}.\]
Proof.: We consider the case \(m=0\) and the case \(m>0\) distinctly.
Case 1: \(m=0\). If \(a=0\), the result is trivial. Suppose then that \(a\neq 0\). We proceed by induction on \(q\geq 0\). If \(q=0\), then the result follows directly from Corollary 18 or Corollary 19. Suppose then that \(q>0\). Then, \(\frac{a}{2^{q}}=\mathrm{B}\left(\frac{a}{2^{q}},-\frac{1}{2^{q}}\right)\). Thus, the Colon Principle implies that
\[\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):\frac{a}{2^{q}}=\mathrm{B}\left(n,- \frac{1}{2^{p}}\right):\mathrm{B}\left(\frac{a}{2^{q}},-\frac{1}{2^{q}}\right).\]
Observe that \(\frac{a}{2^{q}}-\frac{1}{2^{q}}=\frac{b}{2^{q-1}}\) where \(b=\frac{a-1}{2}\) and \(\frac{a}{2^{q}}+\frac{1}{2^{q}}=\frac{c}{2^{q-1}}\) where \(c=\frac{a+1}{2}\). Then by induction,
\[\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):\frac{b}{2^{q-1}}=n+\frac{b}{2^{p+q}}\]
and
\[\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):\frac{c}{2^{q-1}}=n+\frac{c}{2^{p+q}}.\]
Hence, \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):\frac{a}{2^{q}}\in(n+\frac{b}{2^{p+ q}},n+\frac{c}{2^{p+q}})\). Given our choices of \(b\) and \(c\) it should be clear that the simplest number of this interval is in fact \(n+\frac{a}{2^{p+q+1}}\). Thus, \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):\frac{a}{2^{q}}=n+\frac{a}{2^{p+q+1}}\) as required.
Case 2: \(m>0\). Observe that \(m+\frac{a}{2^{q}}=m:\frac{a}{2^{q}}\). Thus, by the Colon Principle,
\[\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):(m+\frac{a}{2^{q}})=\mathrm{B}\left( n,-\frac{1}{2^{p}}\right):m:\frac{a}{2^{q}}.\]
We consider first, \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):m\). By Corollary 18, \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):m=n+\frac{1}{2^{p}}-\frac{1}{2^{p+m}}\). Notice that \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):(m-1)\) is the greatest Left option of \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):m\) and again by Corollary 18, \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):(m-1)=n+\frac{1}{2^{p}}-\frac{1}{2^{ p+m-1}}\). Of course the best Right option of \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):m\) is \(n+\frac{1}{2^{p}}\) as Right has gained no new options in the subordinate. Thus, \(\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):m\triangleq\mathrm{B}\left(n+\frac{ 1}{2^{p}}-\frac{1}{2^{p+m}},-\frac{1}{2^{p+m}}\right)\) which is balanced. Then,
\[\mathrm{B}\left(n,-\frac{1}{2^{p}}\right):m:\frac{a}{2^{q}} \triangleq\mathrm{B}\left(n+\frac{1}{2^{p}}-\frac{1}{2^{p+m}},- \frac{1}{2^{p+m}}\right):\frac{a}{2^{q}}\] \[=n+\frac{1}{2^{p}}-\frac{1}{2^{p+m}}+\frac{a}{2^{p+m+q+1}}\]
by case.1. This concludes the proof.
Recalling that every non-integer \(x\) in canonical form is balanced, Theorem 20 implies a formula for the value of an ordinal sum of numbers where the base is in canonical form. This formula must be broken into several cases, depending on the signs of the base and subordinate; however, all cases are easy to verify given what we have already shown. For a list of these formulas see Table 3. Note that when reproducing or verifying Table 3 it is useful to recall that \(-(G:H)=(-G):(-H)\) for the cases where the base and subordinate have different signs, and Theorem 17 for the cases where the base is an integer.
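As a quick computational illustration (not part of the original development), the following Python sketch evaluates the first and third rows of Table 3 with exact rational arithmetic and checks them against the \(\frac{1}{2^{p}}:\frac{1}{2^{q}}\) identity shown above; the function names are, of course, only illustrative.

```python
from fractions import Fraction

def integer_base(n, m, b, q):
    # Row 1 of Table 3: n : (m + b/2^q) = n + m + b/2^q.
    return Fraction(n) + m + Fraction(b, 2**q)

def canonical_base(n, a, p, m, b, q):
    # Row 3 of Table 3: (n + a/2^p) : (m + b/2^q)
    #   = n + (a+1)/2^p - 1/2^(p+m) + b/2^(p+m+q+1).
    return (Fraction(n) + Fraction(a + 1, 2**p)
            - Fraction(1, 2**(p + m)) + Fraction(b, 2**(p + m + q + 1)))

# Check against the identity 1/2^p : 1/2^q = 1/2^p + 1/2^(p+q+1).
p, q = 2, 3
lhs = canonical_base(0, 1, p, 0, 1, q)
rhs = Fraction(1, 2**p) + Fraction(1, 2**(p + q + 1))
assert lhs == rhs                       # both equal 17/64
print(lhs, integer_base(3, 1, 1, 2))    # 17/64 and 3:(1 + 1/4) = 17/4
```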
## 5 Conclusion
In this paper we have studied ordinal sums of numbers both in general and in a particular well-behaved ruleset, which we call Teetering Towers. This has developed the theory of ordinal sums through presenting known, but not yet published, results in Theorem 2, as well as introducing novel pieces of notation such as balls \(\mathrm{B}\left(m,\Delta\right)\) and the notion of a game being balanced. These contributions are significant given that every number is equivalent modulo domination to a ball (Proposition 8) and that Theorem 2 implies this is sufficient to reduce every ordinal sum of numbers to an ordinal sum of balls.
Along with analysing Teetering Towers we have also given formulas for the value of ordinal sums of numbers where the subordinate is an integer, see Theorem 17. We have also improved this for special cases, see Corollary 18 and Corollary 19. In particular, for numbers that are balanced with radius \(\frac{-1}{2^{p}}\), which includes all canonical forms of non-integers, we have used these results to give a closed formula for an ordinal sum with any subordinate that is a number (Theorem 20), thereby resolving, for numbers, the problem of determining the value of an ordinal sum where the base is in canonical form, originally proposed by Conway in [7] (see pages 192-195).
We have also constructed the canonical form of every number as a tower in Teetering Towers (see Theorem 14) where no story (summand) has a different sign than the tower itself. Beyond this we have classified the structure modulo domination of a story in Teetering Towers through Lemma 11 and Lemma 12. From these, the structure of a tower modulo domination can be verified in time linear in the number of stories by repeatedly applying Theorem 17. Given that it is simple to analyze the form and value of a disjunctive sum of games where the summands are numbers, we have outlined an easy way to determine the winner and value of a Teetering Towers game (which provides insight into winning strategies).
We conclude the paper with a list of open questions relating to this work:
1. Describe the forms equivalent modulo domination and values of Blue-Green-Red Teetering Towers, that is, the games of the form \(\sum_{j=1}^{m}\bigcap_{i=1}^{n}(b_{i}+g_{i}+r_{i})\) where \(b_{i}\) is a non-negative integer, \(r_{i}\) is a non-positive integer, and \(g_{i}\) is a nimber;
2. Extend the theory of numbers modulo domination to numbers born on day \(\omega\) and later;
3. Determine if there is an extension of our results to surreal numbers;
4. Extend Theorem 14 to numbers born on day \(\omega\) and later;
5. Investigate ordinal sums of numbers of the form \(G\cong\{a|b\}\) where \(G-a\) or \(b-G\) is strictly greater than \(1\);
6. Using Theorem 2 analyze ordinal sums of a class of games that are not numbers or impartial games;
7. Give criteria for balls \(\mathrm{B}\left(m,\Delta\right)\) to be balanced when \(m,\Delta\) and/or \(\mathrm{B}\left(m,\Delta\right)\) are not numbers;
8. Letting \(\mathrm{B}\left(m,\Delta_{1},\ldots,\Delta_{k}\right)\cong\{m+\{\Delta_{i}\}|m -\{\Delta_{i}\}\}\) be a _generalized ball_, prove or disprove that for all normal play games \(G\), there exists a \(\mathrm{B}\left(m,\Delta_{1},\ldots,\Delta_{k}\right)\) such that \(G=\mathrm{B}\left(m,\Delta_{1},\ldots,\Delta_{k}\right)\).
\begin{table}
\begin{tabular}{|c|c|} \hline Sum & Value \\ \hline \(n:(m+\frac{b}{2^{q}})\) & \(n+m+\frac{b}{2^{q}}\) \\ \hline \(n:-(m+\frac{b}{2^{q}})\) & \(n-1+\frac{1}{2^{m}}-\frac{b}{2^{m+q+1}}\) \\ \hline \((n+\frac{a}{2^{p}}):(m+\frac{b}{2^{q}})\) & \(n+\frac{a+1}{2^{p}}-\frac{1}{2^{p+m}}+\frac{b}{2^{p+m+q+1}}\) \\ \hline \((n+\frac{a}{2^{p}}):-(m+\frac{b}{2^{q}})\) & \(n+\frac{a-1}{2^{p}}+\frac{1}{2^{p+m}}-\frac{b}{2^{p+m+q+1}}\) \\ \hline \end{tabular}
\end{table}
Table 3: Ordinal sums of numbers where the base is in canonical form. Here \(n,m,a,b\geq 0\) are integers with \(0<a<2^{p}\) and \(0\leq b<2^{q}\), where \(a\) is odd and \(b\) is odd or zero.
## Acknowledgements
We would like to acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) for support through the Canadian Graduate Scholarship - Master's program.
|
2307.07262 | MorphPiece : A Linguistic Tokenizer for Large Language Models | Tokenization is a critical part of modern NLP pipelines. However,
contemporary tokenizers for Large Language Models are based on statistical
analysis of text corpora, without much consideration to the linguistic
features. I propose a linguistically motivated tokenization scheme, MorphPiece,
which is based partly on morphological segmentation of the underlying text. A
GPT-style causal language model trained on this tokenizer (called MorphGPT)
shows comparable or superior performance on a variety of supervised and
unsupervised NLP tasks, compared to the OpenAI GPT-2 model. Specifically I
evaluated MorphGPT on language modeling tasks, zero-shot performance on GLUE
Benchmark with various prompt templates, massive text embedding benchmark
(MTEB) for supervised and unsupervised performance, and lastly with another
morphological tokenization scheme (FLOTA, Hoffmann et al., 2022) and find that
the model trained on MorphPiece outperforms GPT-2 on most evaluations, at times
with considerable margin, despite being trained for about half the training
iterations. | Haris Jabbar | 2023-07-14T10:35:04Z | http://arxiv.org/abs/2307.07262v2 | # MorphPiece : Moving away from
###### Abstract
Tokenization is a critical part of modern NLP pipelines. However, contemporary tokenizers for Large Language Models are based on statistical analysis of text corpora, without much consideration of linguistic features. We propose a linguistically motivated tokenization scheme, MorphPiece, which is based partly on morphological segmentation of the underlying text. A GPT-style causal language model trained on this tokenizer (called MorphGPT) shows superior convergence compared to the same architecture trained on a standard BPE tokenizer. Specifically, we obtain language modeling performance comparable to a 6 times larger model. Additionally, we evaluate MorphGPT on a variety of NLP tasks in supervised and unsupervised settings and find superior performance across the board, compared to the GPT-2 model.
## 1 Introduction
One significant aspect of modern Large Language Models (LLMs) is their massive size in terms of memory footprint and training resources. For instance, GPT-2 (Radford et al., 2019), a well-known language model, took the equivalent of 9.2 days on 512 V-100 GPUs for training (Li et al., 2022). Its elder cousin, GPT-3, needed the equivalent of 14.8 days on 10,000 V-100 GPUs (Patterson et al., 2021). However, such infrastructure requirements are beyond the financial means of most researchers, and training these models has a substantial \(CO_{2}\) footprint (Patterson et al., 2021). Moreover, inference on larger models is also slower and more expensive. Therefore, any technique that can reduce these requirements would make LLMs more affordable, ubiquitous and eco-friendly. In this paper, we demonstrate that a tokenization method that incorporates linguistic knowledge can help in this direction.
Most contemporary tokenizers use statistical information from text corpora to build vocabularies. We propose to move away from this purely statistical nature of tokenization schemes and inject language specific inductive bias at the tokenization stage.
We propose to achieve that by introducing a deterministic morphological segmentation stage and combine it with statistical BPE algorithm. The input text is first tokenized with morphological segmentation and then passed through a BPE algorithm. We also introduce a reverse tokenizer that combines the tokens from these two sources to output sentences.
Modern NLP pipelines involve segmenting text into discrete units which are represented with learnable high dimensional vectors. This segmentation, called tokenization, forms the basis of most transformer (and many pre-transformer e.g LSTM, RNN (Pennington et al., 2014; Mikolov et al., 2013; Bojanowski et al., 2016; Peters et al., 2018) based architectures. Many tokenization algorithms have been explored over the past few years, ranging from characters to words and an intermediate form
Figure 1: Perplexity scores for GPT-2 (Base) architecture trained on BPE tokenization vs MorphPiece tokenization. Evaluated on dev set of OpenWebText (Gokaslan and Cohen, 2019)
that is called sub-word tokenization. The most commonly used tokenizers (such as BPE (Sennrich et al., 2015), WordPiece (Schuster and Nakajima, 2012), Unigram (Kudo, 2018), etc.) follow the subword tokenization paradigm, which relies on the statistical properties of the corpus to construct the tokenization scheme and ignores the linguistic knowledge embedded in the language.
It has been shown (Hofmann et al., 2021, 2020) that morphologically informed vocabularies lead to better generalization capabilities of language models. In this work, we build on that insight and propose a tokenization approach that relies partly on the morphological construction of words to break them down into sub-words. The intuition is that sub-words so constructed will be more natural than a split on statistical properties, and hence might lead to more efficient models. For instance, "paratrooper" would be segmented as ('para#', 'troop', '#er') in our tokenizer, which aligns more closely with the linguistic parts of the word compared to the BPE and Wordpiece tokenizers that split it into ('par', 'atro', 'oper') and ('para', '##tro', '##oper'), respectively. To validate our approach, we train a GPT-like architecture with our proposed tokenizer and compare it to a pre-trained GPT-2 model that uses BPE tokenization. The results demonstrate that our tokenizer leads to superior convergence and improved performance across a wide range of NLP tasks.
We call our tokenizer MorphPiece1 and Table 1 gives some examples that highlight the manner in which MorphPiece splits words compared to BPE and Wordpiece. A few aspects are apparent here:
Footnote 1: There is a similarly named R library (morpheppiece) : [https://github.com/macmillancontentscience/morphepiece](https://github.com/macmillancontentscience/morphepiece)
1. MorphPiece segmentation splits up the words into linguistically aligned affixes which have a semantic meaning. This is not the case with statistical tokenizers.
2. MorphPiece modifies the spellings for some words without which this alignment would not be possible (e.g. batting is tokenized as ['bat','ing'], instead of ['batt','ing'])
3. Such splitting into affixes opens up potential analyses of suffixes and prefixes that aren't possible with statistical tokenizers. For example, the negation prefixes like 'de', 'un', and 'dis' are clearly segmented from the stem, which is not the case with BPE/Wordpiece
Going forward, we first give an overview of related work in Section 2 and then present MorphPiece in Section 3 with details of how to construct the tokenizer. In Section 4 we carry out a few statistical comparisons of MorphPiece with the WordPiece and BPE tokenizers. Then we present a GPT-like model trained on this tokenizer and discuss at length the results under various evaluation metrics in Section 5. This is followed by a detokenization algorithm (Section 7) which combines the tokens into sentences. Finally we conclude by giving a few insights and the way forward in Section 8.
Our primary contributions are as follows :
1. We propose a linguistically motivated tokenizer that results in a more efficient language model, with superior performance across a wide variety of NLP tasks, compared to models trained on BPE.
2. We pre-train a GPT-like architecture on this tokenizer.
3. We also devise an algorithm for tokenization of tokens into sentences.
4. We will open-source the code and various checkpoints of the model trained on MorphPiece, upon publication of the paper.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Word**} & **BPE** & **Wordpiece** & **MorphPiece** \\ & **tokens** & **tokens** & **tokens** \\ \hline batting & 'bat', 'ting' & 'batting' & 'bat', '\#ing' \\ disengage & 'dis', 'eng', 'age' & 'di', '\#\#sen', '\#\#ga', '\#\#ge' & 'dis\#', 'en\#', 'gage' \\ archeologists & 'ar', 'che', 'ologists' & 'arch', '\#\#eo', '\#\#logists' & 'archae', '\#logy', '\#ist', '\#s' \\ decompress & 'dec', 'omp', 'ress' & 'deco', '\#\#mp', '\#\#ress' & 'de', 'compress' \\ photographers & 'phot', 'ographers' & 'photographers' & 'photo', '\#graph', '\#er', '\#s' \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of tokens produced by different tokenization schemes
## 2 Related Work
There is an ample body of research on building morphological tokenizers using supervised, unsupervised, or manual curation methods. Morfessor (Creutz et al., 2005) and its variants (Smit et al., 2014; Gronroos et al., 2020) are the most well known. In the SIGMORPHON 2022 Shared Task on Morpheme Segmentation (Batsuren et al., 2022), there were 13 submissions to build morpheme segmentation at the word and sentence level. This challenge itself built on the Morpho-Challenge series (Kurimo et al., 2010).
Use of morphological segmentation for tokenization has been explored extensively in the context of Neural Machine Translation with mixed results (Pan et al., 2020; Zhou, 2018; Domingo et al., 2018; Machacek et al., 2018; Saleva and Lignos, 2021; Banerjee and Bhattacharyya, 2018; Ataman and Federico, 2018). However, use of morphological analysis in Language Modeling, especially on transformer based architectures, is rather limited. (Bostrom and Durrett, 2020) compare BPE and Unigram tokenization for morphological alignment and find that Unigram is more aligned to morphological splits, and leads to better or similar performance in downstream tasks. Similarly (Hofmann et al., 2021, 2020) showed that a morphologically informed vocabulary improves performance of LLMs. Subsequently (Hofmann et al., 2022) proposed a statistical tokenization improvement method (FLOTA) that tries to align the tokenization with morphological segmentation and show that this improves performance on a specific task. Our work is different from theirs in several important ways. First, they use the statistically built vocabulary of BERT/BPE/Unigram. Second, they apply their method only during the fine-tuning stage. Third, they don't have separate morphological and statistical modes of tokenization. Fourth, they evaluate on only one task. Finally, our model outperforms FLOTA by a huge margin.
## 3 MorphPiece
In this section we present MorphPiece, an English-language tokenization scheme that combines Byte Pair Encoding (BPE) with morpheme-based segmentation for a more linguistically aligned tokenization mechanism. The tokenization scheme is shown in Figure 2. First, the text is normalized and pre-tokenized as per the standard BPE tokenization (Sennrich et al., 2015). In the case of English, these pretokens are obtained by a regex-based splitting of sentences.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Number of Morphemes** & **MorphyNet** & **Trimmed** \\ \hline
2 & 136715 & 67169 \\
3 & 143990 & 48264 \\
4 & 54129 & 16670 \\
5 & 10001 & 2589 \\
6 & 1236 & 217 \\
7 & 208 & 24 \\
8+ & 61 & 10 \\ \hline Total & 346,340 & 134,943 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Frequency of Morpheme Segmentations from MorphyNet (Batsuren et al., 2021), before and after trimming
Figure 2: MorphPiece tokenization Scheme : After standard BPE pre-tokenization, for each pre-token, we tokenize with MorphTable if the token exists in the table; if not, we apply standard BPE with custom trained vocabulary
These pretokens are then passed through a look-up table of words (called MorphTable), to see if a morpheme based segmentation is available. If a segmentation is found, the pretoken is replaced with the corresponding morphemes; if not, the tokens are split according to the BPE tokenization scheme with a custom trained vocabulary.
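To make the control flow concrete, here is a minimal sketch of this two-stage lookup (the table entries, the regex pre-tokenizer, and `fallback_bpe` are placeholder stand-ins, not the actual MorphPiece implementation):

```python
import re

# Tiny, hypothetical stand-in for the real 134k-entry MorphTable.
MORPH_TABLE = {
    "batting": ["bat", "#ing"],
    "disengage": ["dis#", "en#", "gage"],
}

def morphpiece_tokenize(text, fallback_bpe, morph_table=MORPH_TABLE):
    """Pre-tokenize, then use MorphTable where a segmentation exists,
    otherwise fall back to the custom-trained BPE."""
    tokens = []
    for pretoken in re.findall(r"\w+|[^\w\s]", text):   # placeholder pre-tokenizer
        if pretoken in morph_table:
            tokens.extend(morph_table[pretoken])         # morpheme-based split
        else:
            tokens.extend(fallback_bpe(pretoken))        # statistical BPE split
    return tokens

# Trivial fallback that keeps unknown pretokens whole, just for illustration:
print(morphpiece_tokenize("he was batting", lambda w: [w]))
# ['he', 'was', 'bat', '#ing']
```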
### MorphTable
MorphTable is a simple dictionary with keys being words from the English language and values being their respective morphological segmentations. To construct MorphTable, we use MorphyNet (Batsuren et al., 2021), which is a database of derivational and inflectional morphology of 15 languages. We construct a look-up table of 346,340 English words which have been segmented into morphemes from the database. Table 2 shows the frequency count of these segmentations. The extremely high numbers of morphemes come from chemical compounds (e.g. dichlorodiphenyltrichloroethane). For the purpose of our tokenizer, we created a vocabulary from the set of unique affixes and stems from MorphTable after dropping the entities with fewer than 5 occurrences. This trimmed-down version had 18,304 tokens and reduced the table size to 134,943 entries.
### MorphPiece Vocabulary
The MorphPiece vocabulary has two sources. First is the MorphTable described above. All the affixes and stems from this table are added to the vocabulary. The second component is the trainable BPE vocabulary. In the spirit of fair comparison, we aimed for the same vocabulary size as that of GPT-2, i.e., 50,257 tokens. Accounting for the vocabulary from MorphTable (18,304 tokens), we trained a BPE tokenizer to build a vocabulary size of 32,000. We used OpenWebText (Gokaslan and Cohen, 2019) as the training corpus. Before training this tokenizer, we removed all words that had a segmentation available in the MorphTable: the idea being that since those words will be processed by the MorphTable and not by the BPE algorithm, the BPE tokenizer should be trained only on the text that will be tokenized by it. After merging the two vocabularies and accounting for a few common tokens, we have a final vocabulary size of 50,006.
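Sketched with the HuggingFace `tokenizers` library, the two-part vocabulary could be assembled roughly as follows (the file name, the filtering step, and the tiny `morph_vocab` are illustrative; the original training script may differ in details):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import ByteLevel
from tokenizers.trainers import BpeTrainer

# 1) Affixes and stems contributed by the trimmed MorphTable (~18,304 in practice).
morph_vocab = {"bat", "#ing", "dis#", "en#", "gage"}

# 2) BPE trained only on text whose words are NOT covered by the MorphTable,
#    targeting ~32,000 tokens so the merged vocabulary lands near 50k.
bpe = Tokenizer(BPE())
bpe.pre_tokenizer = ByteLevel()
trainer = BpeTrainer(vocab_size=32000, special_tokens=["<|endoftext|>"])
bpe.train(files=["openwebtext_without_morphtable_words.txt"], trainer=trainer)

# 3) Final MorphPiece vocabulary: union of the two sources.
full_vocab = morph_vocab | set(bpe.get_vocab())
print(len(full_vocab))
```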
## 4 Statistical Analyses of MorphPiece
In this part, we compare the proposed MorphPiece tokenizer with BPE and WordPiece on various tokenization statistics. Specifically, we evaluate the three tokenizers across fertility (Rust et al., 2020) and coverage. Fertility is defined as the average number of subwords that a tokenizer splits a word into. So the tokenization ('para', '##tro', '##oper') of the word 'paratrooper' has a fertility of 3. When averaged over a large corpus, fertility is a measure of how aggressively a tokenizer splits the words. Coverage, on the other hand, tells us which part (the MorphTable or the integral BPE tokenizer) of MorphPiece handled a particular pretoken. We evaluate coverage across various token lengths and fertility across various sentence lengths. Combined together they indicate how different (or similar) MorphPiece is, compared to WordPiece or BPE, at the word and sentence level.
For both evaluations, we use GPT2-Output-Dataset, released by OpenAI (OpenAI, 2020), which has 250,000 English sentences.
### Fertility
To measure fertility, we tokenize the dataset with the three tokenization schemes and additionally with a whitespace splitter. We use whitespace tokenization as a proxy for the number of words in a sentence. Subsequently we plot the average number of tokens produced by the three tokenizers for various sentence lengths and the result is shown in Table 3. We can see that while BPE and WordPiece produce similar sentence lengths, MorphPiece produces about 17% longer sentences. To reconfirm that the trend is not influenced by the dataset statistics, we did the same analysis for the first million sentences (Appendix A) of the bookcorpus dataset.
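The fertility numbers reported here can be computed along the following lines (a sketch; `tokenize` stands for any of the three tokenizers and the tiny corpus is only for illustration):

```python
def fertility(sentences, tokenize):
    """Average number of subword tokens per whitespace-separated word."""
    n_words = sum(len(s.split()) for s in sentences)
    n_tokens = sum(len(tokenize(s)) for s in sentences)
    return n_tokens / n_words

corpus = ["the paratrooper was batting well"]
print(fertility(corpus, str.split))   # 1.0 for the whitespace baseline
print(fertility(corpus, lambda s: s.replace("batting", "bat #ing").split()))  # 1.2
```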
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Tokenizer**} &
\begin{tabular}{c} **Average** \\ **Length** \\ \end{tabular} & **Fertility** \\ \hline
**Whitespace** & 526.69 & N/A \\
**BPE** & 585.30 & 1.111 \\
**WordPiece** & 576.48 & 1.095 \\
**MorphPiece** & 685.78 & 1.302 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Fertility comparison of MorphPiece with WordPiece and BPE on OpenAI Testset
### MorphTable Coverage
MorphTable was constructed from MorphyNet (Batsuren et al., 2021), which is a crowd-sourced collection of morpheme segmentations. In this section we evaluate the coverage of MorphTable by analyzing the words in a corpus that are tokenized by MorphTable versus those tokenized by BPE. We tokenize bookcorpus (Zhu et al., 2015) with MorphPiece and list the most frequent words that are _not_ tokenized by MorphTable. Table 4 shows the 10 most common words in this category. As can be seen, none of these words has a morphological segmentation, indicating that the MorphTable has good coverage of words that do have a morphological segmentation. (Please refer to Appendix B for the list of top 50 tokens)
Another way to analyze coverage is across various word lengths. MorphPiece has essentially two internal tokenization schemes: the MorphTable and the internal-BPE. Within internal-BPE, there are again two modes of tokenizations: pretokens that are available as complete tokens within the internal BPE vocabulary and the pretokens that are split further ('BPESplit' in Figure 3). We want to compare the number of tokens that are split by these three mechanisms across various token lengths. From the figure, we can see that as the token length increases, the proportion of tokens found in BPE vocabulary decreases. This is consistent with BPE algorithm. Moreover, MorphPiece splits words from 4 to about 20 characters. Smaller and larger tokens are handled by the BPE tokenization.
## 5 Evaluation on a Language Model
A concrete test of any new tokenization scheme is from the performance of a language model trained on that scheme, on various NLP tasks. Towards that end, we train a GPT-2 architecture with MorphPiece and compare it with a GPT-2 model pretrained with BPE. We call our model MorphGPT. It is pertinent to note that other than the tokenization scheme, MorphGPT has no architectural difference from GPT-2 (Base).
### Evaluation setup
GPT-2 was trained on custom built corpus called WebText. Since that corpus is not available publicly, we used its open source clone, called the OpenWebText (Gokaslan and Cohen, 2019). Additionally, we used HuggingFace's implementation of GPT-2 (Wolf et al., 2019) with Pytorch-Lightning (Falcon and The PyTorch Lightning team, 2019) as the training framework on Nvidia A-100 GPUs.
To establish a baseline, we pretrained GPT-2 architecture twice, once each on BPE and on MorphPiece tokenizer for 55,000 steps with exactly the same hyper-parameters. As can be seen in Figure 1, MorphGPT-50k shows a clear advantage over GPT-2Base50k. Please refer to Section 8 for some possible explanation into this performance gain. Additionally, we evaluated both models on various Language Modeling tasks and found that MorphGPT-50k outperforms GPT-2Base50k by huge margins (Table 5).
Having confirmed performance gains using MorphPiece on language modeling task, we continue training the MorphGPT-Base50k model for a total of 200k steps using the same hyper-parameters and compare its performance with the GPT-2 (Base), available on HuggingFace hub as 'gpt2' checkpoint.
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Rank**} & \multirow{2}{*}{**Token**} & **Relative** \\ & & **Frequency** \\ \hline
**1** & **“** & 0.215 \\
**2** & **’t** & 0.095 \\
**3** & **you** & 0.017 \\
**4** & **they** & 0.013 \\
**5** & **that** & 0.009 \\
**6** & **there** & 0.009 \\
**7** & **when** & 0.007 \\
**8** & **then** & 0.007 \\
**9** & **this** & 0.007 \\
**10** & **not** & 0.005 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Top 10 most frequent tokens in Bookcorpus, which are not split by MorphPiece; showing MorphPiece has good coverage of words that have morpheme segmentations. Appendix B has list of top 50 tokens
Figure 3: Number of words tokenized by BPE and MorphPiece in Bookcorpus, across various word lengths.
For training hyperparameters, we use a batch size of 512 and a one-cycle learning rate scheduler (Smith and Topin, 2017) with a maximum learning rate of \(1e^{-3}\). We used a warmup of 2000 steps, and cosine decay to a final learning rate of \(1e^{-5}\). For the optimizer, we used Adam (Kingma and Ba, 2015) with betas 0.9 and 0.995 and eps of \(1e^{-8}\).
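A plain-PyTorch sketch of an optimizer and schedule matching the description above (the exact scheduler implementation used for MorphGPT is not specified here, so this warmup-plus-cosine variant is an assumption):

```python
import math
import torch

def build_optimizer_and_scheduler(model, total_steps=200_000, warmup_steps=2_000,
                                  max_lr=1e-3, final_lr=1e-5):
    # Adam with the betas and eps quoted above.
    optimizer = torch.optim.Adam(model.parameters(), lr=max_lr,
                                 betas=(0.9, 0.995), eps=1e-8)

    def lr_lambda(step):
        # Linear warmup to max_lr, then cosine decay towards final_lr.
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        progress = min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
        return (final_lr + (max_lr - final_lr) * cosine) / max_lr

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```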
## 6 Evaluations
We evaluate our model on a number of NLP tasks as described in the following sections. For the tasks more closely related to language modeling, we compare MorphGPT at checkpoints of 50k, 100k, 150k and 200k iterations with fully trained GPT-2 (Base/Large) models. With one training step, the model sees about 0.5 million tokens. Here, we see MorphGPT perform comparably to a 6 times larger GPT-2 (Large) model (Table 5). For other NLP tasks, we use MorphGPT at 200k iterations and find that it outperforms the comparable GPT-2 (Base) model, usually with a wide margin. This reconfirms the finding from (Hofmann et al., 2020, 2021) that compared to a statistical tokenizer, a morphologically inspired tokenizer produces better word representations.
We evaluate MorphPiece on a wide variety of tasks. Specifically we conduct evaluations on language modeling tasks (perplexities on various datasets and LAMBADA); supervised learning tasks (on GLUE benchmark); unsupervised learning (Information Retrieval, Paraphrase Identification, Re-ranking) and zero shot prompt-based evaluations on GLUE. In the first three categories, MorphPiece shows much superior performance across the board. In the last category, it shows performance comparable to GPT-2. Finally, we compare MorphGPT to a similarly themed tokenization scheme called FLOTA (Hofmann et al., 2022) and find that our method performs extremely well in this comparison as well.
### Language Modeling
We evaluate MorphGPT and GPT-2 (Base/Large) on Penn Tree Bank (Marcus et al., 1993), OpenAI-250k (OpenAI, 2020) and LAMBADA datasets (Paperno et al., 2016). As can be seen in Table 5, MorphGPT models show much better perplexity numbers over fully trained GPT-2 models, despite being trained for a fraction of iterations. In particular, even with only 50k steps, MorphGPT achieves better perplexity than GPT-2 (Base) across all three datasets; and reaches performance of GPT-2 (Large) with 200k steps.
**Lambada** In the LAMBADA dataset (Paperno et al., 2016), the task is to predict the last word of a paragraph, and it is designed in a way that local context is not enough and one requires the whole paragraph to predict the correct answer. This task is known to be particularly hard for models to do well on (Brown et al., 2020). MorphGPT surpasses GPT-2 accuracy by almost 10% with only 50k steps and almost reaches the accuracy of the six times larger GPT-2 (Large) model (Table 5).
### GLUE Benchmark
GLUE (Wang et al., 2018) is a standard NLU benchmark. We finetuned both GPT-2 and MorphGPT on the tasks included in this benchmark and the results are shown in Table 6. It can be seen that, with the exception of SST, in all the tasks where MorphGPT is better than GPT-2, the difference is quite big. On the contrary, the tasks where GPT-2 is better, the difference is much smaller and could be attributed to inherent noise in evaluations.
### Sequence Embedding
To test the performance of MorphGPT with unsupervised training, we evaluate it on four different tasks involving sequence embeddings from various domains and tasks. We used the tasks, datasets and code from (Wang et al., 2021) for these evaluations.
**Re-Ranking** We evaluate this task on datasets from two different domains. The first domain is a collection of technical posts from AskUbuntu (Lei et al., 2016), where the models are required to re-rank 20 candidate questions according to similarity with a given post. The second dataset is a subset of a benchmark about scientific papers (Cohan et al., 2020). Following (Wang et al., 2021), we use the subsets _Cite_, _Co-Cite_, _Co-Read_, and _Co-Review_. For all these tasks, the models are required to identify and rank up to 5 relevant papers from a list of 30 candidate papers.
**Information Retrieval** For this task, we use CQADupStack (Hoogeveen et al., 2015), where the models are required to retrieve duplicate questions from a collection of forum posts across 12 domains in Stack Exchange.
**Paraphrase Identification** For this task we use TwitterPara, where the models are required to determine if a pair of tweets are paraphrases of each other, against manually annotated gold labels.
We evaluate all these tasks at the sentence level. To construct sentence embeddings, we take the average across tokens of the last hidden state of MorphGPT and GPT-2, before the softmax. Aggregated results are shown in Table 7. It can be seen that MorphGPT performs better than GPT-2 across all tasks, often with considerable performance improvement. For more detailed results in sub-domains of the respective datasets and tasks, please see Appendix C.
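The pooling described above can be sketched with HuggingFace Transformers as follows (the 'gpt2' checkpoint is used as a stand-in for MorphGPT, and masking out padding tokens is one reasonable choice that may differ from the original evaluation code):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in for a MorphGPT checkpoint
model = AutoModel.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token            # GPT-2 has no pad token by default

def sentence_embedding(sentences):
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state    # (B, T, H), before the LM head
    mask = batch["attention_mask"].unsqueeze(-1)      # ignore padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

emb = sentence_embedding(["how do I mount a USB drive?", "mounting a flash drive"])
print(torch.cosine_similarity(emb[0], emb[1], dim=0))
```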
### Zero Shot Evaluations
Here we use LM Evaluation Harness Gao et al. (2021), to evaluate MorphGPT and GPT-2 on GLUE tasks with default prompts. We use no in-context learning and only evaluate the tasks in zero-shot settings. The results (Table 8) show that MorphGPT performs comparable to GPT-2. It is pertinent to mention here that prompt-based evaluations are susceptible to high variance Koksal et al. (2023) depending on the wording of prompts.
### Flota
Finally, we present a comparison baseline. Few Longest Token Approximation (FLOTA) (Hofmann et al., 2022) is a tokenization improvement method which uses the vocabulary of a standard BPE tokenizer but tries to preserve the morphological structure of words during tokenization. It achieves that with a segmentation procedure that recursively finds the largest segment of a word and splits on that. So, for example, the word 'undesirable' would be split as ('und', 'es', 'irable') by BPE; but with FLOTA, it will be split as ('un','desirable'), which is closer to the exact morphological segmentation of ('un','desire','able') used by MorphPiece. The authors show that the FLOTA scheme preserves morphological structure to a large extent and that such a mechanism improves upon the vanilla GPT-2 model. MorphPiece is different from FLOTA in a few important ways (please see Section 2 for details); however, since this technique is closest to our work, we look at it in detail.
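For reference, the greedy core of FLOTA can be sketched as follows (a simplification of the published method, which additionally caps the number of pieces and handles the tokenizer's special prefixes; the toy vocabulary is illustrative):

```python
def flota_segment(word, vocab, max_pieces=3):
    """Recursively keep the longest vocabulary substring of the word."""
    if not word or max_pieces == 0:
        return []
    best = None
    # search substrings from longest to shortest until one is in the vocabulary
    for length in range(len(word), 0, -1):
        for start in range(len(word) - length + 1):
            piece = word[start:start + length]
            if piece in vocab:
                best = (start, piece)
                break
        if best:
            break
    if best is None:
        return []
    start, piece = best
    return (flota_segment(word[:start], vocab, max_pieces - 1)
            + [piece]
            + flota_segment(word[start + len(piece):], vocab, max_pieces - 1))

print(flota_segment("undesirable", {"un", "desirable", "und", "es", "irable"}))
# ['un', 'desirable']
```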
FLOTA was evaluated on a classification task of a custom dataset consisting of titles from computer science, maths and physics domains of ArXiv. A small (2000 samples) and a large (20,000 samples)
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Task** & **GPT-2** & **MorphGPT** & **Difference** \\ \hline
**RTE** & 0.6318 & **0.7004** & **10.86\%** \\
**SST** & 0.9163 & **0.9209** & **0.50\%** \\
**QQP** & **0.8981** & 0.8913 & -0.76\% \\
**NNLI** & 0.3662 & **0.4648** & **26.93\%** \\
**MRPC** & 0.7402 & **0.8015** & **8.28\%** \\
**COLA** & 0.2574 & **0.4542** & **76.46\%** \\
**QNLI** & **0.8772** & 0.8766 & -0.07\% \\
**MNLI** & **0.8216** & 0.8167 & -0.60\% \\ \hline
**AVG** & 68.86 & **74.08** & 7.58 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Evaluation of GLUE benchmark on GPT-2 (Base) and MorphGPT, after finetuning for 3 epochs on respective task. Metric is Accuracy for all, except for COLA, which is Matthew Corr Coeff.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Dataset** & **Metric** & **GPT-2** & **MorphGPT** & **MorphGPT** \\ \hline
**AskUbuntu** & MAP & 0.426 & **0.461** \\ & MRR & 0.546 & **0.574** \\
**TwitterPara** & AP & 0.358 & **0.532** \\ & CorrS & 0.178 & **0.308** \\
**SciDocs** & MAP & 0.293 & **0.373** \\
**COADupStack** & MAP & 0.025 & **0.045** \\ & NDCG & 0.025 & **0.047** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance comparison of three different tasks comprising of Information Retrieval (CQADupStack), Re-Ranking (AskUbuntu and SciDocs) and Paraphrase Identification (TwitterPara) in unsupervised evaluation.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Dataset** & **Metric** & **GPT-2** & **GPT-2** & **MorphGPT** & **MorphGPT-2** \\ & **50k** & **Base** & **50k** & **100k** & **150k** & **200k** & **Large** \\ \hline
**PennTreeBank** & ppl & 79.31 & 61.58 & 43.2 & 39.85 & 38.74 & 38.25 & 37.94 \\
**OpenAI-250K** & ppl & 30.0 & 25.58 & 18.74 & 17.89 & 17.47 & 17.26 & 16.74 \\
**Lambda** & ppl & 74.97 & 55.78 & 47.11 & 45.38 & 43.25 & 42.83 & 37.21 \\
**Lambda** & acc & 0.44 & 0.468 & 0.556 & 0.567 & 0.584 & 0.586 & 0.593 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance comparison of GPT-2 (50k/Base/Large) with MorphGPT checkpoints at 50k, 100k, 150k and 200k iterations, on perplexity scores of various datasets and LAMBADA task.
dataset was constructed for each of the three areas. The models were finetuned for 20 epochs and evaluated on the dev/test splits. The results (Table 9) show a marked improvement over FLOTA technique. While GPT-2+FLOTA shows an improvement of 5% on dev set (7% on test set) over vanilla GPT-2, MorphGPT shows improvements of 27% on dev set (54% on test set). Additionally the authors of FLOTA injected noise during evaluation to test the robustness of their scheme (Table 10). Here also, MorphGPT shows marked improvements over vanilla GPT-2 (40 % in ArXiv-Large and 77 % on ArXiv-Small).
## 7 Detokenization
We define detokenization as the process of combining individual tokens, produced by a model trained with MorphPiece (e.g MorphGPT), to form a sentence. While detokenization is straightforward for BPE and other statistical tokenizers, that's not the case for MorphPiece. This is primarily due to the fact that in MorphPiece, tokens come from one of two different sources: MorphTable or internal-BPE. During detokenization, we need to not only ascertain which token comes from which source, but also, how to combine together the morphemes back to English words. We give details of both steps separately in the sections below.
### Classification of Tokens
In the first stage, we use the surface forms to classify all tokens as either 'morph' or 'bpe', signifying the source they come from. Additionally we annotate the 'morph' tokens as either prefix, suffix, stem or hash (for compound words). MorphPiece tokens have four different surface forms. (a) The prefixes and suffixes have a '#' sign at the end or beginning of the token, respectively. (b) The compound words are separated by a '#' token. (c) The tokens split by BPE that begin with a space have a 'Ġ' symbol. (d) The BPE splits and the stems from MorphTable have no special symbol in them. Classification of tokens with surface forms of the first three types is straightforward. For the tokens that have no special symbol, we have a heuristically driven algorithm that marks them as either 'morph/stem' or 'bpe'.
### Reverse MorphTable
Once all tokens are classified as above, the 'bpe' tokens are combined together following standard BPE algorithm, which essentially involves just concatenating them together and use byte pair decod
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline \multicolumn{4}{c}{**ArXiv-L (N)**} & \multicolumn{2}{c}{**ArXiv-S (N)**} \\ \hline
**Model** & **Dev** & **Test** & **Dev** & **Test** \\ \hline
**GPT-2** & 0.418 & 0.406 & 0.245 & 0.277 \\
**+FLOTA** & 0.46 & 0.445 & 0.25 & 0.266 \\
**MorphGPT** & **0.586** & **0.568** & **0.462** & **0.463** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Comparison of MorphGPT with GPT-2 and GPT-2+FLOTA (with noise)
Figure 4: Detokenization mechanism from morphemes to English words. Black lines show word continuation; red dashed lines show word boundary, and missing connections imply the transition is not valid. Hash denotes compound-words. Stem->Stem is a special heuristic case
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
**Task** & **RTE** & **SST** & **QQP** & **WNLI** & **MRPC** & **COLA** & **QNLI** & **MNLI** \\ \hline
**Metric** & ACC & ACC & ACC & ACC & ACC & MCC & ACC & ACC \\
**GPT-2** & **0.5307** & 0.5401 & 0.3732 & 0.4225 & **0.5662** & **0.012** & 0.5017 & **0.3372** \\
**MorphGPT** & 0.491 & **0.6812** & **0.4134** & **0.5493** & 0.3211 & -0.065 & 0.501 & 0.3259 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Zero-shot prompt-based evaluation using LM Evaluation Harness Gao et al. (2021)
However, for the tokens marked 'morph', the procedure is more involved. First we need to find the tokens that are morpheme constituents of the same word (i.e., find word boundaries) and then use a reverse MorphTable to recover those words. Finding word boundaries is further complicated by various cases like compound words, multiple affixes, etc. To cover the various cases of these surface forms, we have developed a heuristic algorithm (Figure 4) that gives us word continuations and word boundaries between different tokens. This algorithm defines the sequences of surface forms that form a valid segmentation of a word, by looking at consecutive token labels from Section 7.1. Once the word boundaries are found, the reverse MorphTable is then used to convert each segmentation to an English word.
### Illustrative Example
Let's assume a model trained on MorphPiece outputs the tokens shown in Figure 5. In the first step, the tokens will get classified as ['bpe', 'bpe', 'prefix', 'stem', 'suffix', 'stem', 'suffix'], with an additional label of 'morph' on all tokens except 'bpe'. Since merging of tokens labelled 'bpe' is straightforward, we focus on those marked 'morph'. Now we follow the arrows from Figure 4, with the solid black lines showing word continuation and red dashed lines showing word boundaries. From here we get the word boundaries as ['in#', 'vestigate', '#ing'] and ['diligent', '#ly']. Finally we look up the words in the reverse MorphTable to get 'investigating' and 'diligently'.
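A much-simplified sketch of these two steps, token labelling followed by the reverse-table lookup (the label rules and the reverse-table entries are illustrative only; the real algorithm also handles 'bpe' tokens, compound-word hashes, and the Ġ marker):

```python
REVERSE_MORPH_TABLE = {
    ("in#", "vestigate", "#ing"): "investigating",   # hypothetical entries
    ("diligent", "#ly"): "diligently",
}

def label(token):
    if token.endswith("#"):
        return "prefix"
    if token.startswith("#"):
        return "suffix"
    return "stem"   # the real classifier also distinguishes 'bpe' tokens heuristically

def detokenize_morph(tokens):
    words, current = [], []
    for tok in tokens:
        kind = label(tok)
        # a new word starts at a prefix or stem that does not continue the previous word
        if current and kind != "suffix" and label(current[-1]) != "prefix":
            words.append(REVERSE_MORPH_TABLE.get(tuple(current), "".join(current)))
            current = []
        current.append(tok)
    if current:
        words.append(REVERSE_MORPH_TABLE.get(tuple(current), "".join(current)))
    return " ".join(words)

print(detokenize_morph(["in#", "vestigate", "#ing", "diligent", "#ly"]))
# investigating diligently
```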
## 8 Discussion
We believe that the performance improvement from using MorphPiece comes from the relative ease to perform well on the language modeling task of predicting the next token. This is because the tokens in MorphPiece have less 'noise' compared to BPE. From Table 1, it can be seen that MorphPiece tokens have (a) more meaningful segmentation in the form of morphemes, and (b) less spelling idiosyncrasies e.g batting is split as ('bat','ing') instead of ('bat','ting') or ('batt','ing'), both of which have tokens that are not aligned with the actual words 'bat' and 'ing'. On the other hand, a model trained on BPE has to tackle with both problems; which makes it relatively difficult to perform well on language modeling task.
A related aspect is that of representational efficiency. Contemporary tokenizers use sub-word tokenization to achieve a balance between representational power and model size. MorphPiece can be seen as a mechanism in the same direction, but using linguistic properties instead of statistical information.
## Conclusion and Way Forward
We have presented a linguistically motivated tokenization scheme that is more efficient in training large language models and outperforms models trained on BPE on a wide variety of tasks. We hope that this new paradigm of using linguistic inductive bias will lay the foundations of a new generation of tokenization schemes and models, that move away from purely statistical language representation.
## Acknowledgements
The author would like to extend sincerest gratitude to Prof Nafise Sadat Moosavi for her comments and input to the manuscript. Additionally, this work was supported in part by ERC-Grant 740516: NonSequeToR and a BMBF grant. The author also acknowledges the compute resources provided by Leibniz-RechenZentrum (LRZ), Garching and Lichtenberg HochLeistungsRechenzentrum (HLR), Darmstadt for training the models and running the experiments.
Figure 5: Example of detokenization. First we mark the types of affixes or ’bpe’. Orange color tokens come from MorphPiece and teal colored come from BPE. Then we follow the arrows from Figure 4 to find word boundaries, which are looked up in reverse-MorphTable to find words. |
2305.09436 | Searching for the open flavor tetraquark $T^{++}_{c\bar{s}0}(2900)$ in
the process $B^+\to K^+ D^+ D^-$ | Inspired by recent observations of $T_{c\bar{s}0}(2900)^0$ in the $D_s^+
\pi^-$ invariant mass distribution of $B^0 \to \bar{D}^0 D_s^+ \pi^-$ decay and
$T_{c\bar{s}0}(2900)^{++}$ in the $D_s^+ \pi^+$ invariant mass distribution of
$B^+ \to D^- D_s^+ \pi^+$ decay, we investigate the $T_{c\bar{s}0}(2900)^{++}$
contribution to the $B^+ \to K^+ D^+ D^-$ decay in a molecular scenario, where
we consider $T_{c\bar{s}0}(2900)^{++}$ as a $D^{\ast +} K^{\ast+}$ molecular
state. Our estimations indicate that the fit fraction of
$T_{c\bar{s}0}(2900)^{++}$ in the $B^+ \to K^+ D^+ D^-$ is about $12.5\%$, and
its signal is visible in the $D^+ K^+$ invariant mass distribution. With the
involvement of $T_{c\bar{s}0}(2900)^{++}$, the fit fractions of
$\chi_{c0}(3915)$ and $\chi_{c2}(3930)$ may be much different with the ones
obtained by the present amplitude analysis [Phys. Rev. D \textbf{102}, 112003
(2020)], which may shed light on the long standing puzzle of $\chi_{c0}(3915)$
as the conventional charmonium. | Man-Yu Duan, En Wang, Dian-Yong Chen | 2023-05-16T13:48:44Z | http://arxiv.org/abs/2305.09436v1 | Searching for the open flavor tetraquark \(T_{c30}^{++}(2900)\) in the process \(B^{+}\to K^{+}D^{+}D^{-}\)
###### Abstract
Inspired by recent observations of \(T_{c\bar{s}0}(2900)^{0}\) in the \(D_{s}^{+}\pi^{-}\) invariant mass distribution of the \(B^{0}\to\bar{D}^{0}D_{s}^{+}\pi^{-}\) decay and \(T_{c\bar{s}0}(2900)^{++}\) in the \(D_{s}^{+}\pi^{+}\) invariant mass distribution of the \(B^{+}\to D^{-}D_{s}^{+}\pi^{+}\) decay, we investigate the \(T_{c\bar{s}0}(2900)^{++}\) contribution to the \(B^{+}\to K^{+}D^{+}D^{-}\) decay in a molecular scenario, where we consider \(T_{c\bar{s}0}(2900)^{++}\) as a \(D^{*+}K^{*+}\) molecular state. Our estimations indicate that the fit fraction of \(T_{c\bar{s}0}(2900)^{++}\) in the \(B^{+}\to K^{+}D^{+}D^{-}\) is about 12.5%, and its signal is visible in the \(D^{+}K^{+}\) invariant mass distribution. With the involvement of \(T_{c\bar{s}0}(2900)^{++}\), the fit fractions of \(\chi_{c0}(3915)\) and \(\chi_{c2}(3930)\) may be much different from the ones obtained by the present amplitude analysis [Phys. Rev. D **102**, 112003 (2020)], which may shed light on the long standing puzzle of \(\chi_{c0}(3915)\) as the conventional charmonium.
## I Introduction
The \(B\) meson decay process is the most productive and important platform of searching for the QCD exotic states. Two typical types of exotic candidates could be observed in this process. One is the charmonium-like state observed in the invariant mass distributions of a charmonium plus one or more light meson, such as the first observed charmonium-like state, \(X(3872)\), which was first observed in the \(\pi^{+}\pi^{-}J/\psi\) invariant mass distribution of the process \(B^{\pm}\to K^{\pm}\pi^{\mp}J/\psi\) by the Belle Collaboration in the year of 2003 [1], and then confirmed by the BaBar [2; 3; 4; 5; 6; 7; 8; 9; 10; 11], CDF [12; 13; 14; 15], D0 [16], CMS [17; 18; 19; 20; 21; 22], and LHCb [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36] in the \(B\) decay process, as well as the BESIII [37; 38; 39; 40] Collaboration in the electron-positron annihilation process. Besides the charmonium-like states, another type of exotic candidates observed in the \(B\) decay processes is the open-charm states with strangeness observed in the invariant mass spectra of a charmed meson and a (anti-)kaon meson or \(D_{s}\pi\), such as \(D_{s0}^{*}(2317)\) and \(D_{s1}(2460)\), which were first observed by BaBar [41] and CLEO [42] Collaborations, respectively.
In the year of 2020, the LHCb Collaboration performed the amplitude analysis of the process \(B^{+}\to D^{-}D^{*}K^{+}\)[43; 44], and two new structures with spin-0 (named \(X_{0}(2900)\)) and spin-1 (named \(X_{1}(2900)\)), were reported in the \(D^{-}K^{+}\) invariant mass distribution. The masses and widths of these two states are measured to be [43; 44]
\[m_{X_{0}(2900)} = (2866\pm 7\pm 2)\;\mathrm{MeV}\;,\] \[\Gamma_{X_{0}(2900)} = (57\pm 12\pm 4)\;\mathrm{MeV}\;,\] \[m_{X_{1}(2900)} = (2904\pm 5\pm 1)\;\mathrm{MeV}\;,\] \[\Gamma_{X_{1}(2900)} = (110\pm 11\pm 4)\;\mathrm{MeV}\;, \tag{1}\]
respectively.
It is interesting to notice that both \(X_{0}(2900)\) and \(X_{1}(2900)\) are fully open-flavor states and their minimal quark components are \(\bar{c}\bar{s}ud\), which indicates that \(X_{0}(2900)\) and \(X_{1}(2900)\) could be good candidates of tetraquark states [45; 46; 47; 48; 49; 50; 51; 52]. In addition, the observed masses of \(X_{0}(2900)\) and \(X_{1}(2900)\) are close to the threshold of \(D^{*}\bar{K}^{*}\), then the \(D^{*}\bar{K}^{*}\) molecular interpretations have been proposed [53; 54; 55; 56; 57; 58; 59; 60; 61; 62].
Recently, the LHCb Collaboration reported two new tetraquark states \(T_{c30}(2900)^{0}\) and \(T_{c30}(2900)^{++}\) in the \(D_{s}^{+}\pi^{-}\) and \(D_{s}^{+}\pi^{+}\) mass distributions of the \(B^{0}\to\bar{D}^{0}D_{s}^{+}\pi^{-}\) and \(B^{+}\to D^{-}D_{s}^{+}\pi^{+}\), respectively [63; 64]. The masses and widths of the \(T_{c30}(2900)^{0}\) and \(T_{c30}(2900)^{++}\) are measured to be [63; 64]
\[m_{T_{c30}(2900)^{0}} = (2892\pm 14\pm 15)\;\mathrm{MeV}\;,\] \[\Gamma_{T_{c30}(2900)^{0}} = (119\pm 26\pm 12)\;\mathrm{MeV}\;,\] \[m_{T_{c30}(2900)^{++}} = (2921\pm 17\pm 19)\;\mathrm{MeV}\;,\] \[\Gamma_{T_{c30}(2900)^{++}} = (137\pm 32\pm 14)\;\mathrm{MeV}\;. \tag{2}\]
The resonance parameters of these two states are consistent with each other, which indicates that they are two of isospin triplet. When taking the isospin relationship into consideration, the mass and width of \(T_{c30}(2900)\) are fitted to be [63; 64],
\[m_{T_{c30}(2900)} = (2908\pm 11\pm 20)\;\mathrm{MeV}\;,\] \[\Gamma_{T_{c30}(2900)} = (136\pm 23\pm 11)\;\mathrm{MeV}\;. \tag{3}\]
In addition, the amplitude analysis indicates the quantum numbers of \(T_{c30}\) are \(J^{P}=0^{+}\).
From the observed processes, one can find the minimal quark components of \(T_{c30}(2900)^{0}\) and \(T_{c30}(2900)^{++}\) are \(c\bar{s}ud\) and \(c\bar{s}d\bar{u}\), respectively, which indicates that both \(T_{c30}(2900)^{0}\) and \(T_{c30}(2900)^{++}\) are also fully open flavor tetraquark states, and in addition, \(T_{c30}(2900)^{++}\) is the first observed doubly charged tetraquark state. These particular properties have stimulated theorists' great interests. In the framework of the QCD sum rules, the authors in Ref. [65; 66; 67; 68; 69] assigned \(T_{c30}(2900)\) as the scalar \(c\bar{s}q\bar{q}\) tetraquark state. In addition, the observed mass of \(T_{c30}(2900)\) is close to the threshold of \(D^{*}K^{*}\). Together with \(D_{s0}^{*}(2317)\) close to the \(DK\) threshold and \(D_{s1}(2460)\) close to the \(D^{*}K\) threshold, the observation of \(T_{c30}(2900)\) enrich the exotic candidate near the threshold of a charmed meson and a strange meson. Similar to the case of \(D_{s0}^{*}(2317)\) and \(D_{s1}(2460)\), \(T_{c30}(2900)\) has also been proposed to be \(D^{*}K^{*}\) molecular state with isospin \(I=1\). By means of the QCD two-point sum rule method, the mass and
decay width could be reproduced in the \(D^{*}K^{*}\) molecular scenario [70]. In the one-boson-exchange model, the authors in Ref. [71] found that the masses of \(D^{*}_{s0}(2317)\), \(D_{s1}(2460)\) and \(T_{c30}(2900)\) could be reproduced. In an effective Lagrangian approach, the decay properties of \(T_{c30}(2900)\) were also investigated in Ref. [72]. Besides the resonance interpretations, the \(T_{c30}(2900)\) was interpreted as the threshold effect from the interaction of the \(D^{*}K^{*}\) and \(D^{*}_{s}\rho\) channels [73] or the triangle singularity [74].
On the experimental side, searching for more decay modes of \(T_{c30}(2900)\) can help us to reveal its internal structure. In the \(B^{+}\to D^{-}D^{+}K^{+}\) process where the tetraquark states \(X_{0}(2900)\) and \(X_{1}(2900)\) were observed, the LHCb Collaboration also presented the \(D^{+}K^{+}\) invariant mass distribution [43; 44]. From the measured data, one finds that the \(D^{+}K^{+}\) invariant mass distribution cannot be well described in the vicinity of 2.9 GeV1, which indicates that there could be some contributions from additional resonances. To further analyse the resonance contributions to the \(B^{+}\to D^{-}D^{+}K^{+}\) process, we find,
Footnote 1: More detail can be found in Fig.10-(c) of Ref. [44]
* Besides the resonance parameters of \(T_{c30}(2900)\), the LHCb Collaboration also reported the fit fraction of the \(T_{c30}(2900)^{++}\) component in the \(B^{+}\to D^{-}D^{+}_{s}\pi^{+}\), which is \((1.96\pm 0.87\pm 0.88)\%\)[63; 64]. In other words, the cascaded decay process \(B^{+}\to D^{-}T_{c30}(2900)^{++}\to D^{-}D^{+}_{s}\pi^{+}\) is sizable.
* In the \(D^{*}K^{*}\) molecular scenario, the decay properties of the \(T_{c30}(2900)^{0}\) were investigated in Ref. [72]. Our estimations indicate that the \(T_{c30}(2900)^{0}\) dominantly decays into \(D^{0}K^{0}\), and accordingly \(T_{c30}(2900)^{++}\) should dominantly decay into \(D^{+}K^{+}\) on account of the isospin symmetry.
Based on the above experimental measurements and theoretical estimations, one can anticipate that the tetraquark state \(T_{c30}(2900)^{++}\) should have a non-negligible contribution to the process \(B^{+}\to D^{-}(K^{+}D^{+})\).
In addition, the involvement of \(T_{c30}(2900)^{++}\) in the process \(B^{+}\to K^{+}D^{+}D^{-}\) may also shed light on another long standing puzzle for \(\chi_{c0}(3930)\) as conventional charmonium [75; 76; 77; 78]. The measurements from the BaBar Collaboration indicated that the branching fraction of \(B^{+}\to K^{+}\chi_{c0}(3930)\to K^{+}J/\psi\omega\) is \((3.0^{+0.7+0.5}_{-0.6-0.3})\times 10^{-5}\)[79], while the branching fraction of \(B^{+}\to K^{+}\chi_{c0}(3930)\to K^{+}D^{+}D^{-}\) is reported to be \((8.1\pm 3.3)\times 10^{-6}\)[44]. Thus, one can conclude that the branching fraction for \(\chi_{c0}(3930)\to J/\psi\omega\) is several times larger than the one of \(\chi_{c0}(3930)\to D^{+}D^{-}\), which is inconsistent with the expectations of the conventional charmonium assignment of \(\chi_{c0}(3930)\).
If carefully checking the \(D^{+}K^{+}\) invariant mass distribution of \(B^{+}\to K^{+}D^{+}D^{-}\) in Ref. [44], one can find that the charmonium \(\chi_{c2}(3930)\) has a significant contribution to the structure near 2.9 GeV in the \(D^{+}K^{+}\) invariant mass distribution. While both the \(\chi_{c0}(3930)\) and \(\chi_{c2}(3930)\) are responsible for the peak in the vicinity of 3.93 GeV in the \(D^{+}D^{-}\) mass spectrum of \(B^{+}\to K^{+}D^{+}D^{-}\), the involvement of \(T_{c30}(2900)^{++}\) in the \(B^{+}\to K^{+}D^{+}D^{-}\) may lead to rather different fit fractions of \(\chi_{c0}(3930)\) and \(\chi_{c2}(3930)\) than the present ones. Thus, in the present work, we investigate the possible contribution of \(T_{c30}(2900)^{++}\) in the process \(B^{+}\to D^{-}D^{+}K^{+}\) in the framework of the molecular scenario, where \(T_{c30}(2900)^{++}\) is considered as a \(D^{*+}K^{*+}\) molecular state.
This paper is organized as follows. After the introduction, we will show the formalism used in Sec. II. Our calculated results and related discussions will be presented in Sec. III, and Sec. IV will devote to a short summary.
## II Formalism
In the molecular scenario, the \(T_{c30}(2900)^{++}\) is considered as a molecular composed of \(D^{**}K^{**}\), which is,
\[\left|T_{c30}(2900)^{++}\right.\rangle=\left|D^{**}K^{**}\right.\rangle. \tag{4}\]
Thus, the primary reaction that could produce \(T_{c30}(2900)^{++}\) is \(B^{+}\to D^{-}D^{**}K^{**}\). As shown in Fig. 1, this reaction proceeds via the \(W^{+}\) internal emission, where the \(\bar{b}\) quark transits into \(\bar{c}\) quark by emitting a \(W^{+}\) boson, while the \(W^{+}\) boson couples to the \(c\bar{s}\) quarks pair. The \(\bar{s}\) quark and the \(u\) quark from the initial \(B^{+}\) meson form a \(K^{**}\) meson, while the rest \(c\bar{c}\) and \(d\bar{d}\) created from vacuum hadronize into \(D^{-}\) and \(D^{**}\) mesons. In the hadron level, one can construct the \(S\) wave component of the transition amplitude by matching the angular momentum of \(B^{+}\) meson [80; 81], which is,
\[-it_{1}=-iC_{1}\epsilon(D^{**})\cdot\epsilon(K^{**}), \tag{5}\]
where the \(\epsilon(D^{**})\) and \(\epsilon(K^{**})\) are the polarization vectors of the \(D^{**}\) and \(K^{**}\), respectively. \(C_{1}\) is an unknown coupling constant, which will be discussed later. Then the \(D^{**}\) and \(K^{**}\) couple to the molecular \(T_{c30}(2900)^{++}\) with \(I(J^{P})=1(0^{+})\) as presented in Fig. 2-(a). As indicated in Ref. [82], the spin of the \(D^{**}K^{**}\) system could be projected into different angular
Figure 1: Diagrammatic decay at the quark level for the \(B^{+}\to D^{-}D^{**}K^{**}\) reaction.
momenta; for example, the vertex for \(R_{J}\to D^{*+}K^{*+}\) with \(J=0,1,2\) can be constructed as,
\[\mathcal{V}^{(0)} = \frac{1}{3}\epsilon_{l}(D^{*+})\epsilon_{l}(K^{*+})\delta_{ij},\] \[\mathcal{V}^{(1)} = \frac{1}{2}\left[\epsilon_{i}(D^{*+})\epsilon_{j}(K^{*+})-\epsilon_{j}(D^{*+})\epsilon_{i}(K^{*+})\right],\] \[\mathcal{V}^{(2)} = \frac{1}{2}\left[\epsilon_{i}(D^{*+})\epsilon_{j}(K^{*+})+\epsilon_{j}(D^{*+})\epsilon_{i}(K^{*+})\right] \tag{6}\] \[-\frac{1}{3}\epsilon_{l}(D^{*+})\epsilon_{l}(K^{*+})\delta_{ij}.\]
The experimental analysis indicated that the angular momentum of \(T_{c\bar{s}0}(2900)\) is 0. Thus, one can obtain the transition amplitude of \(B^{+}\to D^{-}T_{c\bar{s}0}(2900)^{++}\) corresponding to Fig. 2-(a), which is,
\[-it_{2a} = -iC_{1}\epsilon_{\alpha}(D^{*+})\epsilon_{\beta}(K^{*+})\delta^{\alpha\beta}G_{D^{*}K^{*}}(M_{\rm inv}(D^{*}K^{*}))\times\frac{1}{3}\epsilon_{l}^{*}(D^{*+})\epsilon_{l}^{*}(K^{*+})\delta_{ij}\,g_{T_{c\bar{s}0}^{++}D^{*}K^{*}}\] \[= -iC_{1}\delta_{ij}\,G_{D^{*}K^{*}}(M_{\rm inv}(D^{*}K^{*}))\,g_{T_{c\bar{s}0}^{++}D^{*}K^{*}}, \tag{7}\]
where \(\sum\limits_{pol}\epsilon_{i}(R)\epsilon_{j}^{*}(R)=\delta_{ij}\), \(R=D^{*+}\) or \(K^{*+}\), and the sum over the same indices of the Kronecker delta function is equal to 3, i.e., \(\sum_{ij}|\delta_{ij}|^{2}=3\). \(G_{D^{*}K^{*}}(M_{\rm inv})\) is the loop function of the two mesons \(D^{*}\) and \(K^{*}\), which will be discussed later.
Similarly, one can obtain the transition amplitude of \(B^{+}\to D^{-}T_{c\bar{s}0}(2900)^{++}\to D^{-}D^{+}K^{+}\) corresponding to Fig. 2-(b), which is,
\[-it_{2b} = -iC_{1}\delta_{ij}G_{D^{*}K^{*}}(M_{\rm inv}(D^{*}K^{*}))\times\frac{g_{T_{c\bar{s}0}^{++}D^{*}K^{*}}\,g_{T_{c\bar{s}0}^{++}DK}}{M_{\rm inv}^{2}(D^{*}K^{*})-m_{T_{c\bar{s}0}^{++}}^{2}+im_{T_{c\bar{s}0}^{++}}\Gamma_{T_{c\bar{s}0}^{++}}}, \tag{8}\]
and then the square of the transition amplitude is,
\[\sum\left|t_{2b}\right|^{2} = 3C_{1}^{2}\left|G_{D^{*}K^{*}}(M_{\rm inv}(D^{*}K^{*}))\right|^{2}\times\frac{\left|g_{T_{c\bar{s}0}^{++}D^{*}K^{*}}\right|^{2}\left|g_{T_{c\bar{s}0}^{++}DK}\right|^{2}}{\left[M_{\rm inv}^{2}(D^{*}K^{*})-m_{T_{c\bar{s}0}^{++}}^{2}\right]^{2}+m_{T_{c\bar{s}0}^{++}}^{2}\Gamma_{T_{c\bar{s}0}^{++}}^{2}}, \tag{9}\]
with \(M_{\rm inv}^{2}(D^{*}K^{*})=(P_{D^{*}}+P_{K^{*}})^{2}\), and the two-meson loop function is given by,
\[G=i\int\frac{d^{4}q}{(2\pi)^{4}}\frac{1}{q^{2}-m_{1}^{2}+i\epsilon}\frac{1}{( q-P)^{2}-m_{2}^{2}+i\epsilon}\, \tag{10}\]
with \(m_{1}\) and \(m_{2}\) the masses of the two mesons involved in the loop. \(q\) is the four-momentum of the meson in the centre of mass frame, and \(P\) is the total four-momentum of the meson-meson system. In the present work, we use the dimensional regularization method as indicated in Refs. [83; 84; 85], and in this scheme, the two-meson loop function \(G\) can be expressed as,
\[G = \frac{1}{16\pi^{2}}\left[\alpha+\log\frac{m_{1}^{2}}{\mu^{2}}+ \frac{m_{2}^{2}-m_{1}^{2}+s}{2s}\log\frac{m_{2}^{2}}{m_{1}^{2}}\right. \tag{11}\] \[+\frac{|\vec{q}|}{\sqrt{s}}\left(\log\frac{s-m_{2}^{2}+m_{1}^{2}+2 |\vec{q}|\sqrt{s}}{-s+m_{2}^{2}-m_{1}^{2}+2|\vec{q}|\sqrt{s}}\right.\] \[+\left.\left.\log\frac{s+m_{2}^{2}-m_{1}^{2}+2|\vec{q}|\sqrt{s}} {-s-m_{2}^{2}+m_{1}^{2}+2|\vec{q}|\sqrt{s}}\right)\right],\]
where \(s=P^{2}=M_{\rm inv}^{2}(D^{*}K^{*})\), and \(\vec{q}\) is the three-momentum of the meson in the centre of mass frame, which reads,
\[|\vec{q}|=\frac{\sqrt{\left[s-(m_{1}+m_{2})^{2}\right]\left[s-(m_{1}-m_{2})^{2 }\right]}}{2\sqrt{s}}, \tag{12}\]
here we take \(\mu=1500\) MeV and \(\alpha=-1.474\), which are the same as those in the study of the \(D^{*}\bar{K}^{*}\) interaction [80; 81].
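As an aid to readers who wish to reproduce the line shape, the loop function of Eqs. (10)-(12) can be evaluated numerically in a few lines. The following Python sketch is our own illustration (not code from Refs. [83; 84; 85]); it implements Eq. (11) with the quoted \(\mu=1500\) MeV and \(\alpha=-1.474\), and the \(D^{*+}\) and \(K^{*+}\) masses entered below are approximate values inserted only for the example call.

```python
import numpy as np

def G_loop(s, m1, m2, mu=1.5, alpha=-1.474):
    """Two-meson loop function of Eq. (11), dimensional regularization.
    s: squared invariant mass (GeV^2); m1, m2: meson masses (GeV)."""
    s = s + 1e-12j                      # small +i0 to select the physical branch
    q = np.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / (2 * np.sqrt(s))
    log1 = np.log((s - m2**2 + m1**2 + 2*q*np.sqrt(s)) /
                  (-s + m2**2 - m1**2 + 2*q*np.sqrt(s)))
    log2 = np.log((s + m2**2 - m1**2 + 2*q*np.sqrt(s)) /
                  (-s - m2**2 + m1**2 + 2*q*np.sqrt(s)))
    return (alpha + np.log(m1**2 / mu**2)
            + (m2**2 - m1**2 + s) / (2 * s) * np.log(m2**2 / m1**2)
            + q / np.sqrt(s) * (log1 + log2)) / (16 * np.pi**2)

# Example: G_{D*K*} evaluated near the D*+K*+ threshold (approximate masses, GeV)
m_Dstar, m_Kstar = 2.010, 0.892
print(G_loop(2.900**2, m_Dstar, m_Kstar))
```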
Besides the two-meson loop function, two coupling constants, \(g_{T_{c\bar{s}0}^{++}D^{*}K^{*}}\) and \(g_{T_{c\bar{s}0}^{++}DK}\), are unknown. As for \(g_{T_{c\bar{s}0}^{++}D^{*}K^{*}}\), it refers to the coupling between \(T_{c\bar{s}0}(2900)^{++}\) and its components \(D^{*+}K^{*+}\), which can be related to the binding energy by [86; 87; 88],
\[g_{T_{c\bar{s}0}^{++}D^{*}K^{*}}^{2}=16\pi(m_{D^{*}}+m_{K^{*}})^{2}\tilde{\lambda}^{2}\sqrt{\frac{2\Delta E}{\mu}}, \tag{13}\]
where \(\tilde{\lambda}^{2}=1\) gives the probability of finding the molecular component in the physical state, \(\Delta E=m_{D^{*}}+m_{K^{*}}-m_{T_{c\bar{s}0}^{++}}\) denotes
Figure 2: A sketch diagram of the rescattering of \(D^{*+}K^{*+}\) to give the resonance \(T_{c\bar{s}0}(2900)^{++}\) (diagram (a)), and the further decay of \(T_{c\bar{s}0}(2900)^{++}\) to \(D^{+}K^{+}\) (diagram (b)).
the binding energy, and \(\mu=m_{D^{*}}m_{K^{*}}/(m_{D^{*}}+m_{K^{*}})\) is the reduced mass.
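As a quick numerical illustration of Eq. (13) (our own sketch, not part of the original text), one can insert \(\tilde{\lambda}^{2}=1\), the central resonance mass of 2885 MeV used later, and approximate \(D^{*+}\) and \(K^{*+}\) masses:

```python
import numpy as np

# Coupling of Tcs0(2900)++ to its D*+K*+ components via Eq. (13); masses in MeV.
m_Dstar, m_Kstar, m_T = 2010.0, 892.0, 2885.0   # approximate values
dE = m_Dstar + m_Kstar - m_T                    # binding energy
mu = m_Dstar * m_Kstar / (m_Dstar + m_Kstar)    # reduced mass
g2_TDsKs = 16 * np.pi * (m_Dstar + m_Kstar)**2 * 1.0 * np.sqrt(2 * dE / mu)
print(np.sqrt(g2_TDsKs), "MeV")                 # g_{T,D*K*}, roughly 10 GeV
```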
As for \(g_{T_{c\bar{s}0}^{++}DK}\), we obtain its value from the corresponding partial width of \(T_{c\bar{s}0}(2900)^{++}\to D^{+}K^{+}\). With an effective Lagrangian approach, the partial width of \(T_{c\bar{s}0}(2900)^{++}\to D^{+}K^{+}\) can be written as,
\[\Gamma_{T_{c\bar{s}0}^{++}} = \frac{1}{8\pi}\frac{1}{m_{T_{c\bar{s}0}^{++}}^{2}}|g_{T_{c\bar{s}0}^{++}DK}|^{2}|\vec{q}_{K^{+}}|, \tag{14}\]
with
\[|\vec{q}_{K^{+}}| = \frac{\lambda^{1/2}(m_{T_{c\bar{s}0}^{++}}^{2},m_{D^{+}}^{2},m_{K^{+}}^{2})}{2m_{T_{c\bar{s}0}^{++}}}, \tag{15}\]
to be the momentum of \(K^{+}\) in the \(T_{c\bar{s}0}(2900)^{++}\) rest frame, and \(\lambda(x,y,z)=x^{2}+y^{2}+z^{2}-2xy-2yz-2xz\) is the Källén function. In Ref. [72], our estimations indicated that the \(T_{c\bar{s}0}(2900)^{++}\) dominantly decays into \(DK\), and the partial width of the \(DK\) channel was estimated to be \((52.6\sim 101.7)\) MeV in the considered parameter range. In the present work, we take the partial width of \(T_{c\bar{s}0}(2900)^{++}\to D^{+}K^{+}\) to be 80 MeV to estimate the coupling constant \(g_{T_{c\bar{s}0}^{++}DK}\).
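Inverting Eqs. (14)-(15) for the assumed 80 MeV partial width gives the corresponding coupling; the short sketch below (again our own illustration, with approximate \(D^{+}\) and \(K^{+}\) masses) makes the extraction explicit.

```python
import numpy as np

def kallen(x, y, z):
    return x*x + y*y + z*z - 2*x*y - 2*y*z - 2*x*z

# g_{T,DK} from Gamma(T -> D+K+) = 80 MeV, Eqs. (14)-(15); masses in MeV.
m_T, m_D, m_K, width_DK = 2885.0, 1869.7, 493.7, 80.0
qK = np.sqrt(kallen(m_T**2, m_D**2, m_K**2)) / (2 * m_T)
g2_TDK = width_DK * 8 * np.pi * m_T**2 / qK
print(np.sqrt(g2_TDK), "MeV")                   # roughly 5 GeV
```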
With the above preparation, one can obtain the \(D^{+}K^{+}\) invariant mass distribution, which is,
\[\frac{d\Gamma}{dM_{\rm inv}(D^{+}K^{+})}=\frac{1}{(2\pi)^{3}}\frac{1}{4m_{B^{+}}^{2}}p_{D^{-}}\tilde{p}_{K^{+}}\sum|t_{2b}|^{2}, \tag{16}\]
with
\[p_{D^{-}} = \frac{\lambda^{1/2}\left(m_{B^{+}}^{2},m_{D^{-}}^{2},M_{\rm inv}^{2}(D^{+}K^{+})\right)}{2m_{B^{+}}},\] \[\tilde{p}_{K^{+}} = \frac{\lambda^{1/2}\left(M_{\rm inv}^{2}(D^{+}K^{+}),m_{D^{+}}^{2},m_{K^{+}}^{2}\right)}{2M_{\rm inv}(D^{+}K^{+})}. \tag{17}\]
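Combining Eqs. (9), (16) and (17), the signal line shape can be generated numerically up to the overall constant \(C_{1}^{2}\) (set to 1 below). The sketch reuses kallen() and G_loop() from the snippets above together with the rough couplings obtained there; all hadron masses are approximate, so it only illustrates the structure of the calculation rather than reproducing Fig. 3.

```python
import numpy as np

# dGamma/dM_inv(D+K+) for the Tcs0(2900)++ signal, up to C1^2 (set to 1 here).
# Assumes kallen() and G_loop() defined in the sketches above; masses in MeV.
m_B, m_Dm = 5279.3, 1869.7                      # B+ and D- masses (approximate)
m_T, Gam_T = 2885.0, 136.0                      # resonance parameters used in the text
g_TDsKs, g_TDK = 9.9e3, 4.8e3                   # rough couplings from the sketches above

def dGamma_dM(M):
    G = G_loop((M / 1000.0)**2, 2.010, 0.892)   # the loop function works in GeV^2
    t2 = 3 * abs(G)**2 * g_TDsKs**2 * g_TDK**2 / ((M**2 - m_T**2)**2 + (m_T * Gam_T)**2)
    pD = np.sqrt(kallen(m_B**2, m_Dm**2, M**2)) / (2 * m_B)
    pK = np.sqrt(kallen(M**2, 1869.7**2, 493.7**2)) / (2 * M)
    return pD * pK * t2 / ((2 * np.pi)**3 * 4 * m_B**2)

masses = np.linspace(2400.0, 3200.0, 200)       # scan of M_inv(D+K+) in MeV
spectrum = [dGamma_dM(M) for M in masses]
```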
In addition, we would like to compare the above mass distribution with that of the background for the reaction \(B^{+}\to D^{-}D^{+}K^{+}\). By analogy to Eq. (5), we can obtain the transition matrix for \(B^{+}\to D^{-}D^{+}K^{+}\), which is
\[-i\!t_{3}=-i\!C_{3}. \tag{18}\]
where \(C_{3}\) is the coupling constant, which will be discussed in the following section. With the above transition matrix, we can give the background distribution for the \(B^{+}\to D^{-}D^{+}K^{+}\) reaction, which is
\[\frac{d\Gamma_{\rm bac}}{dM_{\rm inv}(D^{+}K^{+})}=C_{3}^{2}\frac{1}{(2\pi)^{3 }}\frac{1}{4m_{B^{*}}^{2}}p_{D^{*}}\tilde{p}_{K^{*}}. \tag{19}\]
## III Numerical results and discussions
To calculate the \(D^{+}K^{+}\) invariant mass distribution of \(B^{+}\to K^{+}D^{+}D^{-}\) as presented in Eq. (16), the coupling constant \(C_{1}\) is needed. However, the experimental measurement of \(B^{+}\to K^{*+}D^{*+}D^{-}\) is not available to date. Similar to \(B^{+}\to K^{+}D^{+}D^{-}\), the process \(B^{+}\to K^{*+}D^{*+}D^{-}\) should also occur via the \(W^{+}\) internal emission mechanism. One can obtain the diagrammatic decay at the quark level for \(B^{+}\to K^{+}D^{+}D^{-}\) by replacing \(K^{*+}\) and \(D^{*+}\) in Fig. 1 with \(K^{+}\) and \(D^{+}\), which indicates some similarities between the processes \(B^{+}\to K^{+}D^{+}D^{-}\) and \(B^{+}\to K^{*+}D^{*+}D^{-}\). However, there are also some differences between these two processes. As indicated in the amplitude analysis of \(B^{+}\to K^{+}D^{+}D^{-}\) in Ref. [44], the typical resonance contributions to this process are \(B^{+}\to K^{+}(c\bar{c})\to K^{+}D^{+}D^{-}\), where the charmonia include \(\psi(3770)\), \(\chi_{c0}(3930)\), \(\chi_{c2}(3930)\), \(\psi(4040)\), \(\psi(4160)\), and \(\psi(4415)\). In \(B^{+}\to K^{*+}D^{*+}D^{-}\), these charmonia contributions should be suppressed by phase space. In addition to the charmonia contributions, the LHCb Collaboration also observed the signals of \(X_{0}(2900)\) and \(X_{1}(2900)\) in the \(D^{-}K^{+}\) invariant mass spectra; these contributions are also absent in the \(B^{+}\to K^{*+}D^{*+}D^{-}\) process.
Besides the resonance contributions, the amplitude analysis also indicates a sizable nonresonant contribution, which should be the same for both \(B^{+}\to K^{+}D^{+}D^{-}\) and \(B^{+}\to K^{*+}D^{*+}D^{-}\). Thus, in the present work, we first estimate the background distribution of \(B^{+}\to K^{+}D^{+}D^{-}\) with the branching fraction of the nonresonant contribution from the LHCb analysis, which is \((5.3\pm 1.8)\times 10^{-5}\)[89]. From Eq. (19), the coupling constant \(C_{3}\) can then be determined. Considering the similarity between \(B^{+}\to K^{+}D^{+}D^{-}\) and \(B^{+}\to K^{*+}D^{*+}D^{-}\), we take \(C_{1}=C_{3}\) to roughly estimate the \(D^{+}K^{+}\) invariant mass distribution resulting from \(T_{c\bar{s}0}(2900)^{++}\).
With the above formalism, we have calculated the \(D^{+}K^{+}\) invariant mass distribution by assuming that the values of \(C_{1}\) and \(C_{3}\) are the same, as presented in Fig. 3. To further compare with the experimental measurements, we normalized the background contribution estimated by Eq. (19) to the LHCb experimental nonresonant contribution in Fig. 3, where the magenta-dash-dotted and blue-dotted curves are the nonresonant contribution determined by the LHCb amplitude analysis and our estimated background, respectively. The red-solid curve is the resonant contribution from \(T_{c\bar{s}0}(2900)^{++}\), which is obtained with the resonance parameters \(m_{T_{c\bar{s}0}^{++}}=2885\) MeV and \(\Gamma_{T_{c\bar{s}0}^{++}}=136\) MeV, while the blue band corresponds to the uncertainty of the \(T_{c\bar{s}0}(2900)^{++}\) width. From Fig. 3, one can
find that the \(D^{+}K^{+}\) invariant mass distribution around 2.9 GeV cannot be well described by the LHCb fit [44], which indicates that there should be an additional resonance. Our results show that the \(T_{c\bar{s}0}(2900)^{++}\) plays an important role in this region; thus we suggest that the contribution from the \(T_{c\bar{s}0}(2900)^{++}\) should be considered in future amplitude analyses.
Furthermore, we can integrate the signal and background distributions over the whole \(M_{\rm inv}(D^{+}K^{+})\) range; their ratio is given by,
\[\frac{\int\frac{d\Gamma}{dM_{\rm inv}(D^{+}K^{+})}\,dM_{\rm inv}(D^{+}K^{+})}{\int\frac{d\Gamma_{\rm bac}}{dM_{\rm inv}(D^{+}K^{+})}\,dM_{\rm inv}(D^{+}K^{+})}\simeq 0.52. \tag{20}\]
With the nonresonant fit fraction obtained by the amplitude analysis, we can roughly estimate the fit fraction of \(T_{c\bar{s}0}(2900)^{++}\) to be about 12.5%, which is greater than those of \(\chi_{c0}(3930)\) and \(\chi_{c2}(3930)\). Thus, the involvement of \(T_{c\bar{s}0}(2900)^{++}\) will certainly influence the fit fractions of \(\chi_{c0}(3930)\) and \(\chi_{c2}(3930)\).
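To make the arithmetic behind this estimate explicit: because the signal was normalized relative to the nonresonant contribution through \(C_{1}=C_{3}\), its fit fraction is simply the ratio of Eq. (20) multiplied by the nonresonant fit fraction \(f_{\rm NR}\),

\[f_{T_{c\bar{s}0}^{++}}\simeq 0.52\times f_{\rm NR},\]

so the quoted 12.5% corresponds to a nonresonant fit fraction of roughly 24%; this back-of-the-envelope relation is our reading of the numbers quoted above, not an additional input.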
## IV Summary
Recently, the LHCb Collaboration reported their amplitude analysis of the decays \(B^{0}\to\bar{D}^{0}D_{s}^{+}\pi^{-}\) and \(B^{+}\to D^{-}D_{s}^{+}\pi^{+}\), where two tetraquark states \(T_{c\bar{s}0}(2900)^{0}\) and \(T_{c\bar{s}0}(2900)^{++}\) were reported in the \(D_{s}\pi\) invariant mass distributions. The resonance parameters of these two resonances indicate that they are two members of an isospin triplet. Similar to \(T_{c\bar{s}0}(2900)\), the LHCb Collaboration reported another two tetraquark candidates \(X_{0,1}(2900)\) in the \(D^{-}K^{+}\) invariant mass distribution of the \(B^{+}\to D^{-}D^{+}K^{+}\) reaction in 2020 [43; 44]. In the \(D^{+}K^{+}\) invariant mass distribution of the \(B^{+}\to D^{-}D^{+}K^{+}\) reaction, we find that the experimental data around 2.9 GeV cannot be well described, which indicates that there should be an additional resonance. Inspired by the recent observation of the \(T_{c\bar{s}0}(2900)\)[63; 64] and its decay properties, we find that \(T_{c\bar{s}0}(2900)^{++}\) is likely to contribute to the \(D^{+}K^{+}\) invariant mass distribution. Thus, in the present work we study the role of \(T_{c\bar{s}0}(2900)^{++}\) in the \(D^{+}K^{+}\) invariant mass distribution of the process \(B^{+}\to D^{-}D^{+}K^{+}\).
In the present work, we estimate the \(T_{c\bar{s}0}(2900)^{++}\) contribution to the process \(B^{+}\to D^{-}D^{+}K^{+}\) in a molecular scenario, where we have considered \(T_{c\bar{s}0}(2900)^{++}\) as a \(D^{*+}K^{*+}\) molecular state. However, due to the lack of experimental information on \(B^{+}\to D^{-}D^{*+}K^{*+}\), we have made the assumption that the coupling constant for \(B^{+}\to D^{-}D^{*+}K^{*+}\) is the same as the one for the nonresonant contribution in \(B^{+}\to D^{-}D^{+}K^{+}\). Based on this assumption, our estimation indicates that the contribution from \(T_{c\bar{s}0}(2900)^{++}\) is significant in the process \(B^{+}\to D^{-}D^{+}K^{+}\), and the \(T_{c\bar{s}0}(2900)^{++}\) signal in the \(D^{+}K^{+}\) invariant mass distribution is visible. In addition, the fit fraction of \(B^{+}\to D^{-}T_{c\bar{s}0}(2900)^{++}\to K^{+}D^{+}D^{-}\) is roughly estimated to be 12.5%, which could be tested by further experimental analysis by the LHCb Collaboration.
Before ending this work, it is worth mentioning that the branching fractions of the \(B^{0}\to D^{-}D^{0}K^{+}\) and \(B^{0}\to D^{-}D^{+}K^{0}\) decays are \((1.07\pm 0.07\pm 0.09)\times 10^{-3}\) and \((0.75\pm 0.12\pm 0.12)\times 10^{-3}\), respectively [89]. In the \(D^{0}K^{+}\) invariant mass distributions of these processes, there should be a signal of \(T_{c\bar{s}0}(2900)^{+}\), which may be accessible to the LHCb Collaboration.
## Acknowledgement
This work is supported by the National Natural Science Foundation of China under Grant Nos. 11775050, 12175037, and 12192263. This work is also supported by the Natural Science Foundation of Henan under Grant Nos. 222300420554 and 232300421140, the Project of Youth Backbone Teachers of Colleges and Universities of Henan Province (2020GGJS017), the Youth Talent Support Project of Henan (2021HYTP002), and the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology, No.N.LK2021-08.
|
2305.06705 | Bounds on positive operator-valued measure based coherence of
superposition | Quantum coherence is a fundamental feature of quantum physics and plays a
significant role in quantum information processing. By generalizing the
resource theory of coherence from von Neumann measurements to positive
operator-valued measures (POVMs), POVM-based coherence measures have been
proposed with respect to the relative entropy of coherence, the $l_1$ norm of
coherence, the robustness of coherence and the Tsallis relative entropy of
coherence. We derive analytically the lower and upper bounds on these
POVM-based coherence of an arbitrary given superposed pure state in terms of
the POVM-based coherence of the states in superposition. Our results can be
used to estimate range of quantum coherence of superposed states. Detailed
examples are presented to verify our analytical bounds. | Meng-Li Guo, Jin-Min Liang, Bo Li, Shao-Ming Fei, Zhi-Xi Wang | 2023-05-11T10:31:04Z | http://arxiv.org/abs/2305.06705v1 | # Bounds on positive operator-valued measure based coherence of superposition
###### Abstract
Quantum coherence is a fundamental feature of quantum physics and plays a significant role in quantum information processing. By generalizing the resource theory of coherence from von Neumann measurements to positive operator-valued measures (POVMs), POVM-based coherence measures have been proposed with respect to the relative entropy of coherence, the \(l_{1}\) norm of coherence, the robustness of coherence and the Tsallis relative entropy of coherence. We derive analytically the lower and upper bounds on these POVM-based coherence of an arbitrary given superposed pure state in terms of the POVM-based coherence of the states in superposition. Our results can be used to estimate range of quantum coherence of superposed states. Detailed examples are presented to verify our analytical bounds.
## I Introduction
Originated from the superposition principle of quantum mechanics, quantum coherence and entanglement are important quantum resources in quantum information processing and quantum computation [1, 2, 3]. However, in general the entanglement of a superposed pure state cannot be simply expressed as a linear summation of the entanglement of the individual states in the superposition. Linden _et al_. first investigated the relations between the entanglement of a superposed pure state and the entanglement of the states in the superposition [4] and Gour _et al_. provided the upper and lower bounds on superposed entanglement based on the Von Neumann entropy of the reduced states [5] and the entanglement measure concurrence [6, 7].
Baumgratz, Cramer and Plenio [8] first proposed the resource theory of coherence, established a rigorous coherence quantification framework (BCP framework), and identified computable measures of coherence. Building on the BCP framework, the study of coherence has witnessed fruitful theoretical and experimental progress [9, 10, 11, 12, 13, 14, 15]. For a quantum system associated with a \(d\)-dimensional Hilbert space \(H\), the BCP framework takes into account the coherence defined by an orthogonal basis \(\{|j\rangle\}_{j=1}^{d}\), which we call standard coherence. A standard orthogonal basis \(\{|j\rangle\}_{j=1}^{d}\) corresponds to a rank-\(1\) projective measurement \(\{|j\rangle\langle j|\}_{j=1}^{d}\). Recently, Bischof, Kampermann and Bruss [16, 17] generalized the conventional framework of coherence to the case of general positive operator-valued measures (POVMs), by replacing the projective measurements with POVMs.
Recently, several POVM-based coherence measures have been proposed [16; 17; 18], such as the relative entropy of POVM-based coherence \(C_{r}(\rho,E)\), the \(l_{1}\)-norm of POVM-based coherence \(C_{l_{1}}(\rho,E)\), the robustness of POVM-based coherence \(C_{rob}(\rho,E)\), and the POVM-based coherence based on the Tsallis relative entropy \(C_{T,\lambda}(\rho,E)\). Moreover, the maximum relative entropy of coherence for quantum channels has been introduced in [20].
Similar to the case of quantum entanglement, the superposition of two incoherent states may give rise to a maximally coherent state, for instance, \(|\Omega_{1}\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\) in the computational basis \(\{|0\rangle,|1\rangle\}\). Conversely, the superposition of two maximally coherent states may produce an incoherent state, for example, \(|\Omega_{2}\rangle=\frac{1}{\sqrt{2}}(|+\rangle+|-\rangle)\), where \(|\pm\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm|1\rangle)\). In [21; 22; 23; 24] the coherence of superpositions of two mutually orthogonal states has been discussed.
In this work, we investigate the POVM-based coherence of superposition states. More specifically, given a superposition state,
\[|\Omega\rangle=\sum_{k=1}^{n}\alpha_{k}|\phi_{k}\rangle, \tag{1}\]
we explore the relationship between the POVM-based coherence of the superposition state \(|\Omega\rangle\) and the POVM-based coherence of the states \(|\phi_{k}\rangle\) in the superposition. We focus on four POVM-based coherence measures \(C_{r}(\rho,E)\), \(C_{l_{1}}(\rho,E)\), \(C_{rob}(\rho,E)\), and \(C_{T,\lambda}(\rho,E)\) and present the upper and lower bounds of the inequalities satisfied by these coherence measures. We illustrate our results by detailed single and two-qubit examples.
## II POVM-based coherence of superposition states
In order to quantitatively describe the coherence resources contained in a given quantum state, a variety of coherence measures has been introduced from different perspectives. Before giving our main results, we first recall the definitions of \(C_{r}(\rho,E)\), \(C_{l_{1}}(\rho,E)\), \(C_{rob}(\rho,E)\) and \(C_{T,\lambda}(\rho,E)\) with respect to a positive operator-valued measure given by \(d\) measurement operators \(E=\{E_{j}\geq 0\}_{j=1}^{d}\), \(\sum_{j}E_{j}=I\) with \(I\) the identity operator.
### The POVM-based coherence measures
The relative entropy coherence measure is a commonly used and well-defined coherence measure [8], which is tightly related to the optimal distillation rate in standard coherent distillation
[11] and the minimum amount of noise for complete decoherence [9; 25]. Given a POVM \(E\), the relative entropy POVM-based coherence is defined by [16; 17]
\[C_{r}(\rho,E)=\sum_{j=1}^{d}S(\sqrt{E_{j}}\rho\sqrt{E_{j}})-S(\rho),\]
where \(S(X)=-\text{Tr}(X\log X)\) is the von Neumann entropy for positive semidefinite matrix \(X\). For a pure state \(|\phi\rangle\), \(C_{r}(\rho,E)\) can be expressed as
\[C_{r}(\phi,E)=\sum_{j=1}^{d}S(\sqrt{E_{j}}|\phi\rangle\langle\phi|\sqrt{E_{j}}). \tag{2}\]
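As a concrete illustration (not taken from the original references), Eq. (2) can be evaluated directly from the eigenvalues of the sub-normalized blocks \(\sqrt{E_{j}}|\phi\rangle\langle\phi|\sqrt{E_{j}}\). The minimal Python sketch below assumes the POVM elements are supplied as positive matrices summing to the identity and uses base-2 logarithms for the entropy.

```python
import numpy as np

def psd_sqrt(E):
    """Square root of a positive semidefinite matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(E)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def entropy(X):
    """S(X) = -Tr X log X for a positive semidefinite (possibly sub-normalized) X."""
    w = np.linalg.eigvalsh(X)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def C_r_pure(phi, E_list):
    """Relative-entropy POVM coherence of a pure state |phi>, Eq. (2)."""
    rho = np.outer(phi, phi.conj())
    return sum(entropy(psd_sqrt(E) @ rho @ psd_sqrt(E)) for E in E_list)
```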
The \(l_{1}\)-norm POVM-based coherence \(C_{l_{1}}(\rho,E)\) of a density matrix \(\rho\) is defined by [17; 18],
\[C_{l_{1}}(\rho,E)=\sum_{i\neq j=1}^{d}||\sqrt{E_{i}}\rho\sqrt{E_{j}}||_{\text{ tr}},\]
with \(||X||_{\text{tr}}=\text{Tr}\sqrt{X^{\dagger}X}\) the trace norm of matrix \(X\). For a pure state \(|\phi\rangle\), the \(l_{1}\)-norm POVM-based coherence is given by
\[C_{l_{1}}(\phi,E)=\sum_{i\neq j=1}^{d}\text{Tr}|\sqrt{E_{i}}|\phi\rangle \langle\phi|\sqrt{E_{j}}|. \tag{3}\]
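A similar sketch evaluates Eq. (3); the trace norm of each off-diagonal block is the sum of its singular values. It assumes numpy and psd_sqrt from the previous snippet.

```python
def C_l1_pure(phi, E_list):
    """l1-norm POVM coherence of a pure state |phi>, Eq. (3).
    Requires numpy as np and psd_sqrt from the previous sketch."""
    rho = np.outer(phi, phi.conj())
    roots = [psd_sqrt(E) for E in E_list]
    total = 0.0
    for i, Ai in enumerate(roots):
        for j, Aj in enumerate(roots):
            if i != j:
                block = Ai @ rho @ Aj
                total += np.sum(np.linalg.svd(block, compute_uv=False))  # trace norm
    return total
```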
The robustness of coherence is closely related to coherence witnesses and can be used to quantify the advantage of quantum states in phase discrimination tasks. The robustness of POVM-based coherence \(C_{rob}(\rho,E)\) can be expressed as [17],
\[C_{rob}(\rho,E)=\min_{\tau\in S^{\prime}}\left\{s\geq 0:s\tau_{i,j}=-\sqrt{E_{ i}}\rho\sqrt{E_{j}},\forall i\neq j\right\}.\]
where \(\tau=\sum_{i,j}\tau_{i,j}\otimes|i\rangle\langle j|\) belongs to the set \(S^{\prime}\), with \(\tau\) of the form \(\tau=\sum_{i,j}|\phi_{i}\rangle\langle\phi_{j}|\otimes|i\rangle\langle j|+\sum_{i}(I-|\phi_{i}\rangle\langle\phi_{i}|)\otimes|i\rangle\langle i|\). Note that for pure states \(|\phi\rangle\), the robustness of POVM-based coherence \(C_{rob}(\rho,E)\) and the \(l_{1}\)-norm POVM-based coherence satisfy the following relation,
\[C_{rob}(\phi,E)=C_{l_{1}}(\phi,E).\]
The Tsallis relative entropy of POVM-based coherence \(C_{T,\lambda}(\rho,E)\) introduced in [18] is given by
\[C_{T,\lambda}(\rho,E)=\frac{1}{\lambda-1}\Big{\{}\sum_{j=1}^{d}\text{Tr}[( \sqrt{E_{j}}\rho^{\lambda}\sqrt{E_{j}})^{1/\lambda}]-1\Big{\}}, \tag{4}\]
for \(\lambda\in(0,1)\cup(1,2]\).
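For pure states Eq. (4) simplifies, since \(\rho^{\lambda}=\rho\); only the outer \(1/\lambda\) power acts nontrivially on the eigenvalues of \(\sqrt{E_{j}}\rho\sqrt{E_{j}}\). A short sketch (again reusing psd_sqrt from above) is:

```python
def C_T_pure(phi, E_list, lam):
    """Tsallis relative-entropy POVM coherence of a pure state, Eq. (4).
    Requires numpy as np and psd_sqrt from the earlier sketch."""
    rho = np.outer(phi, phi.conj())            # rho^lam = rho for a pure state
    acc = 0.0
    for E in E_list:
        A = psd_sqrt(E)
        w = np.linalg.eigvalsh(A @ rho @ A)
        acc += np.sum(np.clip(w, 0, None) ** (1.0 / lam))
    return (acc - 1.0) / (lam - 1.0)
```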
For further use, we extend the above definition of the POVM-based coherence of a pure state to the coherence of two pure states, by analog to the case of quantum entanglement [7]. Let \(|\phi\rangle\) and \(|\psi\rangle\) be two pure states. We define \(C_{l_{1}}(\phi,\psi)\) and \(C_{T,\lambda}(\phi,\psi)\) as the POVM coherence of \(|\phi\rangle\) and \(|\psi\rangle\) by
\[C_{l_{1}}(\phi,\psi,E)=\sum_{i\neq j=1}^{d}\text{Tr}|\sqrt{E_{i}}|\phi\rangle \langle\psi|\sqrt{E_{j}}| \tag{5}\]
and
\[C_{T,\lambda}(\phi,\psi,E)=\frac{1}{\lambda-1}\Big{\{}\sum_{j=1}^{d}\text{Tr} [(\sqrt{E_{j}}(|\phi\rangle\langle\psi|)^{\lambda}\sqrt{E_{j}})^{1/\lambda}]-1 \Big{\}}.\]
Notice that according to the above definitions, we have \(C(\phi,\phi,E)=C(\phi,E)\).
### The upper and lower bounds of POVM-based coherence measures
In the following, we use the relative entropy POVM-based coherence to investigate the relationship between the coherence of two arbitrary pure states and the POVM-based coherence of their superposed state.
Before we present our two main results (Theorem 1), we briefly review the properties of the von-Neumann entropy \(S(\rho)\) in [4; 19] and provide a slight improvement of it. The authors in [4] have used two properties of \(S(\rho)\):
\[|\alpha_{1}|^{2}S(\rho)+|\alpha_{2}|^{2}S(\sigma)\leq S(|\alpha_{1}|^{2}\rho+| \alpha_{2}|^{2}\sigma) \tag{6}\]
and
\[S(|\alpha_{1}|^{2}\rho+|\alpha_{2}|^{2}\sigma)\leq|\alpha_{1}|^{2}S(\rho)+| \alpha_{2}|^{2}S(\sigma)+h_{2}(|\alpha_{1}|^{2}).\]
where \(h_{2}(x)=-x\log x-(1-x)\log(1-x)\) and \(|\alpha_{1}|^{2}+|\alpha_{2}|^{2}=1\). Now, consider \(|\Omega\rangle=\alpha_{1}|\phi\rangle+\alpha_{2}|\psi\rangle\), we obtain
\[S(\sqrt{E_{j}}|\Omega\rangle\langle\Omega|\sqrt{E_{j}})\leq| \alpha_{1}|^{2}S(\sqrt{E_{j}}|\phi\rangle\langle\phi|\sqrt{E_{j}})+|\alpha_{2 }|^{2}S(\sqrt{E_{j}}|\psi\rangle\langle\psi|\sqrt{E_{j}})+h_{2}(|\alpha_{1}|^ {2}).\]
Using Eq. (2), we get,
\[C_{r}(\Omega,E)\leq|\alpha_{1}|^{2}C_{r}(\phi,E)+|\alpha_{2}|^{2}C_{r}(\psi,E )+h_{2}(|\alpha_{1}|^{2}), \tag{7}\]
Based on the above results, we have the following conclusion.
**Theorem 1**: _Given two different pure states \(|\phi\rangle\) and \(|\psi\rangle\), denote \(|\Omega\rangle=\alpha|\phi\rangle+\beta|\psi\rangle\), where \(\alpha\) and \(\beta\) are complex numbers. The relative entropy POVM-based coherence \(C_{r}(\Omega^{{}^{\prime}},E)\) of the state \(|\Omega^{{}^{\prime}}\rangle=\frac{|\Omega\rangle}{\|\Omega\|}\), with \(\|\Omega\|=\sqrt{\langle\Omega|\Omega\rangle}\) the normalization constant, has an upper bound_
\[p_{1}[\mu C_{r}(\phi,E)+(1-\mu)C_{r}(\psi,E)+h_{2}(\mu)], \tag{8}\]
_and a lower bound \(L=\max\{L_{1},L_{2},0\}\) with_
\[L_{1}=p_{2}C_{r}(\phi,E)-\frac{1-\nu}{\nu}C_{r}(\psi,E)-\frac{1} {\nu}h_{2}(\nu),\] \[L_{2}=p_{3}C_{r}(\psi,E)-\frac{1-\xi}{\xi}C_{r}(\phi,E)-\frac{1} {\xi}h_{2}(\xi),\]
_where \(h_{2}(x)=-x\log x-(1-x)\log(1-x)\),_
\[p_{1}=\frac{(1-\mu)|\alpha|^{2}+\mu|\beta|^{2}}{\mu(1-\mu)|| \Omega||^{2}},\ \ \mu=\frac{|\alpha|^{2}}{\cos^{2}\theta},\] \[p_{2}=\frac{(1-\nu)|\alpha|^{2}}{(1-\nu)||\Omega||^{2}+\nu|\beta |^{2}},\ \ \nu=\frac{\sin^{2}\theta||\Omega||^{2}}{\sin^{2}\theta||\Omega||^{2}+|\beta|^{ 2}\cos^{2}\theta},\] \[p_{3}=\frac{(1-\xi)|\beta|^{2}}{(1-\xi)||\Omega||^{2}+\xi|\alpha |^{2}},\ \ \xi=\frac{\sin^{2}\theta||\Omega||^{2}}{\sin^{2}\theta||\Omega||^{2}+|\alpha|^ {2}\cos^{2}\theta}\]
_for \(0<\mu,\nu,\xi<1\) and \(\frac{|\alpha|^{2}}{\cos^{2}\theta}+\frac{|\beta|^{2}}{\sin^{2}\theta}=1\)._
Proof We prove the theorem by introducing an auxiliary system. First we prove the upper bound (8). Consider the following bipartite state in systems \(A\) and \(B\),
\[|\chi\rangle^{AB}=\sqrt{\mu}|\phi\rangle^{A}|0\rangle^{B}+\sqrt{1-\mu}|\psi \rangle^{A}|1\rangle^{B}.\]
According to (7), the relative entropy POVM-based coherence of \(|\chi\rangle\) can be expressed as
\[C_{r}(\chi,E)\leq\mu C_{r}(\phi,E)+(1-\mu)C_{r}(\psi,E)+h_{2}(\mu).\]
Measuring the ancillary system \(B\) with Kraus operators
\[K_{1}=|0\rangle(\cos\theta e^{i\omega_{1}}\langle 0|+\sin \theta e^{i\omega_{2}}\langle 1|),\] \[K_{2}=|1\rangle(-\sin\theta e^{-i\omega_{2}}\langle 0|+\cos \theta e^{-i\omega_{1}}\langle 1|),\]
one gets the collapsed state
\[|\chi_{1}\rangle|0\rangle=\left(\sqrt{\frac{\mu}{p}}\cos\theta e^{i\omega_{1}} |\phi\rangle+\sqrt{\frac{1-\mu}{p}}\sin\theta e^{i\omega_{2}}|\psi\rangle \right)|0\rangle,\]
with probability
\[p=\|\sqrt{\mu}\cos\theta e^{i\omega_{1}}|\phi\rangle+\sqrt{1-\mu}\sin\theta e^{i \omega_{2}}|\psi\rangle\|^{2}, \tag{9}\]
and the collapsed state
\[|\chi_{2}\rangle|1\rangle=\left(\sqrt{\frac{1-\mu}{1-p}}\cos\theta e^{-i\omega _{1}}|\psi\rangle-\sqrt{\frac{\mu}{1-p}}\sin\theta e^{i\omega_{2}}|\phi\rangle \right)|1\rangle,\]
with probability \(1-p\). As the measurement can be seen as an incoherent operation, we obtain the following inequality,
\[pC_{r}(\chi_{1},E)+(1-p)C_{r}(\chi_{2},E)\leq\mu C_{r}(\phi,E)+(1-\mu)C_{r}( \psi,E)+h_{2}(\mu).\]
Since \(C_{r}(\chi_{2},E)\geq 0\), we have
\[C_{r}(\chi_{1},E)\leq\frac{1}{p}[\mu C_{r}(\phi,E)+(1-\mu)C_{r}(\psi,E)+h_{2}( \mu)]. \tag{10}\]
Now we set \(|\chi_{1}\rangle=|\Omega^{{}^{\prime}}\rangle\). We get
\[\frac{\alpha}{||\Omega||}=\sqrt{\frac{\mu}{p}}\cos\theta e^{i\omega_{1}},\ \ \frac{\beta}{||\Omega||}=\sqrt{\frac{1-\mu}{p}}\sin\theta e^{i\omega_{2}}.\]
It is straightforward to derive that
\[p=\frac{\|\Omega\|^{2}\mu(1-\mu)}{(1-\mu)|\alpha|^{2}+\mu|\beta|^{2}}. \tag{11}\]
Substituting (11) into (10), we have
\[C_{r}(\Omega^{{}^{\prime}},E)\leq p_{1}\left[\mu C_{r}(\phi,E)+(1-\mu)C_{r}( \psi,E)+h_{2}(\mu)\right],\]
with \(0<\mu<1\), \(\mu=\frac{|\alpha|^{2}}{\cos^{2}\theta}\) and \(p_{1}=\frac{(1-\mu)|\alpha|^{2}+\mu|\beta|^{2}}{\mu(1-\mu)\|\Omega\|^{2}}\).
To prove the lower bound, we can consider the following bipartite states,
\[|\chi^{{}^{\prime}}\rangle^{AB} =\sqrt{\nu}|\Omega^{{}^{\prime}}\rangle^{A}|0\rangle^{B}+\sqrt{1- \nu}|\psi\rangle^{A}|1\rangle^{B},\] \[|\chi^{{}^{\prime\prime}}\rangle^{AB} =\sqrt{\xi}|\Omega^{{}^{\prime}}\rangle^{A}|0\rangle^{B}+\sqrt{1- \xi}|\phi\rangle^{A}|1\rangle^{B},\]
Similar to the proof of the upper bound (8), one obtains easily the lower bound \(L\). \(\Box\)
Concerning the \(l_{1}\)-norm of the POVM-based coherence measure, we have the following conclusion.
**Theorem 2**: _Given \(|\Omega\rangle=\sum_{k=1}^{n}\alpha_{k}|\phi_{k}\rangle\) with \(\alpha_{k}\) complex numbers, the POVM-based coherence of the superposed sate \(|\Omega^{{}^{\prime}}\rangle=\frac{|\Omega\rangle}{||\Omega||}\) has an upper bound_
\[\|\Omega\|^{-2}\Bigg{(}\sum_{k=1}^{n}|\alpha_{k}|^{2}C_{l_{1}}(\phi_{k},E)+M \sum_{k\neq k^{\prime}=1}^{n}|\alpha_{k}\alpha_{k^{\prime}}|\Bigg{)}, \tag{12}\]
_and a lower bound_
\[\|\Omega\|^{-2}\Bigg{(}\sum_{k=1}^{n}|\alpha_{k}|^{2}C_{l_{1}}(\phi_{k},E)-M \sum_{k\neq k^{\prime}=1}^{n}|\alpha_{k}\alpha_{k^{\prime}}|\Bigg{)}, \tag{13}\]
_where \(M=(d-1)\sum_{i=1}^{d}\left\|\sqrt{E_{i}}|\phi_{k}\rangle\langle\phi_{k^{\prime }}|\right\|_{\text{tr}}\)._
Proof From the definition of the \(l_{1}\)-norm POVM-based coherence (3), we have
\[\|\Omega\|^{2}C_{l_{1}}(\Omega^{{}^{\prime}},E) =\sum_{i\neq j=1}^{d}\text{Tr}\Big{|}\sqrt{E_{i}}\sum_{kk^{ \prime}=1}^{n}\alpha_{k}\alpha_{k^{\prime}}|\phi_{k}\rangle\langle\phi_{k^{ \prime}}|\sqrt{E_{j}}\Big{|}\] \[\leq\sum_{k=1}^{n}|\alpha_{k}|^{2}\sum_{i\neq j=1}^{d}\text{Tr} \Big{|}\sqrt{E_{i}}|\phi_{k}\rangle\langle\phi_{k}|\sqrt{E_{j}}\Bigg{|}+\sum_{ k\neq k^{\prime}=1}^{n}|\alpha_{k}\alpha_{k^{\prime}}|\sum_{i\neq j=1}^{d} \text{Tr}\Big{|}\sqrt{E_{i}}|\phi_{k}\rangle\langle\phi_{k^{\prime}}|\sqrt{E_ {j}}\Big{|}\] \[=\sum_{k=1}^{n}|\alpha_{k}|^{2}C_{l_{1}}(\phi_{k},E)+\sum_{k\neq k ^{\prime}=1}^{n}|\alpha_{k}\alpha_{k^{\prime}}|C_{l_{1}}(\phi_{k},\phi_{k^{ \prime}},E)\] \[\leq\sum_{k=1}^{n}|\alpha_{k}|^{2}C_{l_{1}}(\phi_{k},E)+\sum_{k \neq k^{\prime}=1}^{n}|\alpha_{k}\alpha_{k^{\prime}}|M,\]
where the first inequality is due to \(|a+b|\leq|a|+|b|\), the second inequality is based on the Theorem 2 in Ref. [26] with \(M=(d-1)\sum_{i=1}^{d}\left\|\sqrt{E_{i}}|\phi_{k}\rangle\langle\phi_{k^{\prime }}|\right\|_{\text{tr}}\).
Next, by using the inverse triangle inequality, \(|a+b|\geq||a|-|b||\geq|a|-|b|\), similar to the proof of the upper bound we can easily get (13). \(\Box\)
The robustness of coherence [17; 27] plays an important role in the characterization of quantum states in phase discrimination. For pure states, the robustness of POVM-based coherence is equivalent to the \(l_{1}\)-norm of POVM-based coherence. Therefore, Theorem 2 also gives rise to the bounds for the robustness of POVM-based coherence.
Concerning the POVM-based coherence of Tsallis relative entropy \(C_{T,\lambda}(\rho,E)\), we present the following lemma.
**Lemma 1**: _For \(\lambda\in(0,1)\cup(1,2]\), we have_
\[C_{T,\lambda}(\rho,E)\leq-\ln_{\lambda}\Bigg{(}\frac{1}{[d\sum_{j=1}^{d}\text {Tr}(\sqrt{E_{j}}\rho^{2}\sqrt{E_{j}})]^{1/\lambda}}\Bigg{)},\]
_where \(\ln_{\lambda}x=\frac{x^{1-\lambda}-1}{1-\lambda}\)._
**Proof.** Due to (4), for \(\lambda\in(0,1)\cup(1,2]\) according to [28; 29], we obtain
\[C_{T,\lambda}(\rho,E)\leq\frac{[d^{\lambda-1}\sum_{j=1}^{d}\mbox{ Tr}(\sqrt{E_{j}}\rho^{\lambda}\sqrt{E_{j}})]^{1/\lambda}-1}{\lambda-1}. \tag{14}\]
Applying the Jensen's inequality, we have
\[\frac{[\sum_{j=1}^{d}\mbox{Tr}(\sqrt{E_{j}}\rho^{\lambda}\sqrt{E_{j}})]^{1/ \lambda}}{\lambda-1}\leq\frac{[(\sum_{j=1}^{d}\mbox{Tr}(\sqrt{E_{j}}\rho^{2} \sqrt{E_{j}}))^{1/\lambda}]^{\lambda-1}}{\lambda-1}. \tag{15}\]
Combining (15) and (14), we complete the proof. \(\Box\)
Based on the above results, we have the following conclusion.
**Theorem 3**: _Given a state \(|\Omega\rangle=\sum_{k=1}^{n}\alpha_{k}|\phi_{k}\rangle\) with \(\alpha_{k}\) the complex numbers. The POVM-based coherence of the normalized superposed state \(|\Omega^{{}^{\prime}}\rangle=\frac{|\Omega\rangle}{\|\Omega\|}\) has an upper bound_
\[\|\Omega\|^{-2}\Big{(}\sum_{k=1}^{n}|\alpha_{k}|^{2}C_{T,\lambda}(\phi_{k},E) -\sum_{k\neq k^{{}^{\prime}}=1}^{n}|\alpha_{k}\alpha_{k^{{}^{\prime}}}|\ln_{ \lambda}X\Big{)}+\frac{1}{\|\Omega\|^{2}(\lambda-1)}\Big{(}\sum_{k=1}^{n}| \alpha_{k}|^{2}+\sum_{k\neq k^{{}^{\prime}}=1}^{n}|\alpha_{k}\alpha_{k^{{}^{ \prime}}}|-\|\Omega\|^{2}\Big{)}\]
_for \(\lambda\in(0,1)\cup(1,2]\), where_
\[X=1/\Big{(}d\sum_{j=1}^{d}\mbox{Tr}[\sqrt{E_{j}}(|\phi_{k}\rangle\langle\phi_ {k^{{}^{\prime}}}|)^{2}\sqrt{E_{j}}]\Big{)}^{\frac{1}{\lambda}}, \tag{16}\]
_and a lower bound \(L=\max\{L_{1},0\}\) for \(\lambda\in(1,2]\), where_
\[\|\Omega\|^{2}L_{1}=\sum_{k=1}^{n}|\alpha_{k}|^{2}C_{T,\lambda}(\phi_{k},E)+N \sum_{k\neq k^{{}^{\prime}}=1}^{n}|\alpha_{k}\alpha_{k^{{}^{\prime}}}|+\frac{ 1}{(\lambda-1)}\Big{(}\sum_{k=1}^{n}|\alpha_{k}|^{2}+\sum_{k\neq k^{{}^{ \prime}}=1}^{n}|\alpha_{k}\alpha_{k^{{}^{\prime}}}|-\|\Omega\|^{2}\Big{)}\]
_and_
\[N=\frac{1}{\lambda-1}\Big{\{}\sum_{j=1}^{d}\mbox{Tr}[(\sqrt{E_{j}})^{\frac{1}{ \lambda}}|\phi_{k}\rangle\langle\phi_{k^{{}^{\prime}}}|(\sqrt{E_{j}})^{\frac{ 1}{\lambda}}]-1\Big{\}}.\]
Proof Let \(\{|i\rangle\}_{i=1}^{m}\) be a set of basis vectors such that \(|\phi_{k}\rangle=\sum_{i=1}^{m}b_{i}^{k}|i\rangle\). We have \(|\Omega\rangle=\sum_{k=1}^{n}\sum_{i}^{m}\alpha_{k}b_{i}^{k}|i\rangle\) and
\[\|\Omega\|^{2}C_{T,\lambda}(\Omega^{{}^{\prime}},E)= \frac{1}{\lambda-1}\Bigg{\{}\sum_{j=1}^{d}\text{Tr}\Bigg{[}\sqrt{E_ {j}}\Big{[}\sum_{kk^{\prime}=1}^{n}\sum_{ii^{\prime}=1}^{m}\alpha_{k}b_{i}^{k} \alpha_{k^{\prime}}b_{i^{\prime}}^{k^{\prime}}|i\rangle\langle i^{{}^{\prime}} \Big{]}\Big{]}^{\lambda}\sqrt{E_{j}}\Bigg{]}^{\frac{1}{\lambda}}-\|\Omega\|^{2}\Bigg{\}}\] \[= \frac{1}{\lambda-1}\Bigg{\{}\sum_{k=1}^{n}|\alpha_{k}|^{2}\sum_{j =1}^{d}\text{Tr}\Bigg{[}\sqrt{E_{j}}\Big{[}\sum_{ii^{\prime}=1}^{m}|b_{i}^{k} b_{i^{\prime}}^{k}|\cdot|i\rangle\langle i^{{}^{\prime}}|\Big{]}^{\lambda} \sqrt{E_{j}}\Bigg{]}^{\frac{1}{\lambda}}-\|\Omega\|^{2}\] \[+\sum_{k\neq k^{\prime}=1}^{n}|\alpha_{k}\alpha_{k}^{{}^{\prime} }|\sum_{j=1}^{d}\text{Tr}\Bigg{[}\sqrt{E_{j}}\Big{[}\sum_{ii^{\prime}=1}^{m}|b _{i}^{k}b_{i^{\prime}}^{k^{\prime}}|\cdot|i\rangle\langle i^{{}^{\prime}}| \Big{]}^{\lambda}\sqrt{E_{j}}\Bigg{]}^{\frac{1}{\lambda}}\Bigg{\}}\] \[= \sum_{k=1}^{n}|\alpha_{k}|^{2}C_{T,\lambda}(\phi_{k},E)+\sum_{k \neq k^{\prime}=1}^{n}|\alpha_{k}\alpha_{k^{\prime}}|C_{T,\lambda}(\phi_{k}, \phi_{k^{\prime}},E)\] \[+\frac{1}{\lambda-1}\Big{(}\sum_{k=1}^{n}|\alpha_{k}|^{2}+\sum_{ k\neq k^{\prime}=1}^{n}|\alpha_{k}\alpha_{k^{\prime}}|-\|\Omega\|^{2}\Big{)}.\]
Based on the Lemma 1, we obtain the upper bound in theorem.
Next, by using the Araki-Lieb-Thirring Inequality [30], \(\text{Tr}(A^{r}B^{r}A^{r})^{q}\geq\text{Tr}(ABA)^{rq}\) for \(r\geq 1\) and \(q\geq 0\), we have
\[C_{T,\lambda}(\phi_{k},\phi_{k^{\prime}},E) =\frac{1}{\lambda-1}\Bigg{\{}\sum_{j=1}^{d}\text{Tr}\Big{[}(E_{j} ^{\frac{1}{2\lambda}})^{\lambda}(|\phi_{k}\rangle\langle\phi_{k}^{{}^{\prime} }|)^{\lambda}(E_{j}^{\frac{1}{2\lambda}})^{\lambda}\Big{]}^{1/\lambda}-1\Bigg{\}}\] \[\geq\frac{1}{\lambda-1}\Bigg{\{}\sum_{j=1}^{d}\text{Tr}\Big{[}( \sqrt{E_{j}})^{1/\lambda}|\phi_{k}\rangle\langle\phi_{k}^{{}^{\prime}}|(\sqrt {E_{j}})^{1/\lambda}\Big{]}-1\Bigg{\}},\]
where \(\lambda\in(1,2]\). Then, we obtain the lower bound in theorem. \(\Box\)
## III Numerical results
In this section, we demonstrate our results numerically by investigating different superposition states. We consider a single-qubit case for the \(l_{1}\) coherence measure and a two-qubit case for the relative entropy and the Tsallis relative entropy coherence measures. In both cases, we plot the relations between the exact value, the upper bound, and the lower bound. Crucially, a POVM of an \(n\)-qubit system is expressed as \(4^{n}\) linearly independent positive operators \(\{E_{i}=A_{i}A_{i}^{\dagger}\}_{i=0}^{4^{n}-1}\), which can be obtained according to the Naimark theorem [31; 32]. Specifically, each measurement operator \(A_{i}\), embedded in a larger unitary, can be realized via a projective measurement in the standard computational basis of a larger Hilbert space.
Consider a POVM with respect to a single-qubit system \(\{E_{i}=A_{i}A_{i}^{\dagger}\}_{i=0}^{3}\), which can be realized by performing a two-qubit unitary \(U=\sum_{ijkl=0}^{3}u_{ij}^{kl}|ij\rangle\langle kl|\) on the qubit system and an ancillary system. Implementing \(U\) on an initial state \(\rho_{a}\otimes|0\rangle\langle 0|_{b}\), one measures the two qubits system in the standard computational basis \(\{|m\rangle\}_{m=0}^{3}\). Denote \((q_{a},q_{b})\) the measurement outcome with \(q_{a},q_{b}\in\{0,1\}\). The corresponding probability of each outcome is given by
\[P_{(q_{a},q_{b})} = \langle q_{a}q_{b}|U(\rho_{a}\otimes|0\rangle\langle 0|_{b})U^{ \dagger}|q_{a}q_{b}\rangle \tag{17}\] \[= \sum_{kl=0}^{3}u_{q_{a}q_{b}}^{kl}\langle kl|(\rho_{a}\otimes|0 \rangle\langle 0|_{b})\sum_{kl=0}^{3}(u_{q_{a}q_{b}}^{kl})^{*}|kl\rangle\] \[= \sum_{k=0}^{3}u_{q_{a}q_{b}}^{k0}\langle k|\rho_{a}\sum_{k=0}^{3 }(u_{q_{a}q_{b}}^{k0})^{*}|k\rangle\] \[= \text{Tr}[|E_{q_{a}q_{b}}\rangle\langle E_{q_{a}q_{b}}|\rho_{a}],\]
where \(|E_{(q_{a}q_{b})}\rangle=\sum_{k=0}^{3}u_{q_{a}q_{b}}^{k0}|k\rangle\). Hence, the POVM elements are given by \(E_{i}=E_{(q_{a}q_{b})}=|E_{(q_{a}q_{b})}\rangle\langle E_{(q_{a}q_{b})}|\) for \(i=0,\cdots,3\), where \(i\) has the binary representation \(i=2q_{a}+q_{b}\). In particular, let us consider the following two-qubit unitary \(U\),
\[U(\mathbf{\theta})=\text{CNOT}\cdot\Big{[}R_{y}(\theta_{1})\otimes R_{y}(\theta_{2 })\Big{]}, \tag{18}\]
parameterized by \(\mathbf{\theta}=(\theta_{1},\theta_{2})\), where \(R_{y}(\theta_{j})=e^{-\imath\theta_{j}\sigma_{y}/2}\), \(\imath^{2}=-1\), and CNOT is the CNOT-gate. Setting \(\mathbf{\theta}=(0.301723,0.011681)\) we have a POVM given by
\[|E_{0}\rangle = (u_{00}^{00},u_{00}^{10})^{\dagger},\ \ |E_{1}\rangle=(u_{01}^{00},u_{01}^{10})^{\dagger},\] \[|E_{2}\rangle = (u_{10}^{00},u_{10}^{10})^{\dagger},\ \ |E_{3}\rangle=(u_{11}^{00},u_{11}^{10})^{\dagger}.\]
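To make the Naimark construction concrete, the following Python sketch (our own illustration, not code from the paper) builds \(U(\mathbf{\theta})\) of Eq. (18), reads off the vectors \(|E_{(q_{a}q_{b})}\rangle\) from the \(\langle q_{a}q_{b}|U|k,0\rangle\) entries, and checks that the four rank-one elements sum to the single-qubit identity.

```python
import numpy as np

def Ry(t):
    """Single-qubit rotation exp(-i t sigma_y / 2)."""
    return np.array([[np.cos(t/2), -np.sin(t/2)],
                     [np.sin(t/2),  np.cos(t/2)]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def povm_from_U(theta1, theta2):
    """POVM realized by U = CNOT (Ry(theta1) x Ry(theta2)) followed by a
    computational-basis measurement of system and ancilla, Eqs. (17)-(18)."""
    U = CNOT @ np.kron(Ry(theta1), Ry(theta2))
    E_list = []
    for m in range(4):                       # outcome (qa, qb) with m = 2*qa + qb
        v = np.array([U[m, 0], U[m, 2]])     # <m|U|k,0> for k = 0, 1
        E_list.append(np.outer(v, v.conj()))
    return E_list

E_list = povm_from_U(0.301723, 0.011681)
print(np.round(sum(E_list), 6))              # should print the 2x2 identity
```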
Given two \(1\)-qubit states \(|\phi\rangle=e^{-\imath\sigma_{y}0.432/2}|0\rangle\) and \(|\psi\rangle=e^{-\imath\sigma_{y}0.618/2}|0\rangle\), we calculate the exact value, the upper bound, and the lower bound of the \(l_{1}\) coherence of the superposition state
\[|\Omega\rangle=\alpha|\phi\rangle+\beta|\psi\rangle,\ \ |\Omega^{{}^{\prime}} \rangle=\frac{|\Omega\rangle}{\|\Omega\|}, \tag{19}\]
in which the coefficients \(\alpha,\beta\) are random scalars drawn from the uniform distribution in the interval \((0,1)\). We run our procedure \(10\) times by randomly choosing the coefficients \(\alpha,\beta\). Fig. (1b) shows the upper and lower bounds of \(C_{l_{1}}(|\Omega^{{}^{\prime}}\rangle\langle\Omega^{{}^{\prime}}|,E)\).
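A minimal numerical check of the Theorem 2 bounds for this single-qubit example could look as follows. It assumes the helper functions from the earlier sketches (psd_sqrt, C_l1_pure, povm_from_U), draws \(\alpha,\beta\) uniformly in \((0,1)\) as in the text, and evaluates the cross term \(M\) separately for the two orderings of \(|\phi\rangle\) and \(|\psi\rangle\); it is one possible reading of the bound, intended only as a sanity check rather than a reproduction of Fig. 1.

```python
# Requires numpy as np plus psd_sqrt, C_l1_pure and povm_from_U from the sketches above.
rng = np.random.default_rng(0)
E_list = povm_from_U(0.301723, 0.011681)
d = len(E_list)
phi = np.array([np.cos(0.432/2), np.sin(0.432/2)])      # e^{-i sigma_y 0.432/2}|0>
psi = np.array([np.cos(0.618/2), np.sin(0.618/2)])      # e^{-i sigma_y 0.618/2}|0>

def M_cross(a, b):
    """(d-1) * sum_i || sqrt(E_i) |a><b| ||_tr, as in Theorem 2."""
    return (d - 1) * sum(np.sum(np.linalg.svd(psd_sqrt(E) @ np.outer(a, b.conj()),
                                              compute_uv=False)) for E in E_list)

alpha, beta = rng.uniform(0, 1, 2)
Omega = alpha * phi + beta * psi
norm2 = Omega @ Omega
exact = C_l1_pure(Omega / np.sqrt(norm2), E_list)
diag = alpha**2 * C_l1_pure(phi, E_list) + beta**2 * C_l1_pure(psi, E_list)
cross = alpha * beta * (M_cross(phi, psi) + M_cross(psi, phi))
print((diag - cross) / norm2, exact, (diag + cross) / norm2)   # lower, exact, upper
```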
For two-qubit case, consider POVMs with \(16\) measurement operators, \(\{E_{i}=A_{i}A_{i}^{\dagger}\}_{i=0}^{15}\), which can be realized by performing a unitary \(V=\sum_{\mathbf{i}\mathbf{j}=0}^{3}v_{i_{1}i_{2}i_{3}i_{4}}^{j_{1}j_{2}i_{3}j_{4}}|\bm {i}\rangle\langle\mathbf{j}|\) on a four-qubit system, where the index \(\mathbf{i}\) has binary representation \(\mathbf{i}=i_{1}i_{2}i_{3}i_{4}\). Similar to the single qubit case, it is easily
verified that each POVM has a form \(E_{i}=|E_{(q_{a}q_{b}q_{c}q_{d})}\rangle\langle E_{(q_{a}q_{b}q_{c}q_{d})}|\) where \((q_{a}q_{b}q_{c}q_{d})\) denote the measurement outcomes, \(i=2^{3}q_{a}+2^{2}q_{b}+2q_{c}+q_{d}\), and
\[|E_{(q_{a}q_{b}q_{c}q_{d})}\rangle=\sum_{j_{1}j_{2}=0}^{1}v_{q_{a}q_{b}q_{c}q_ {d}}^{j_{1}j_{2}00}|j_{1}j_{2}\rangle. \tag{20}\]
Let us consider
\[V(\mathbf{\gamma})=\prod_{i,j=0}^{2}\text{CNOT}_{i,i+1}\cdot\Big{[}R_{y}(\gamma_{1 })\otimes R_{y}(\gamma_{2})\otimes R_{y}(\gamma_{3})\otimes R_{y}(\gamma_{4}) \Big{]}, \tag{21}\]
given by the parameter vector \(\mathbf{\gamma}=(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4})=(0.30173,0.01168,0.5 3991,0.09537,0.14651)\).
Given two \(2\)-qubit states
\[|\phi_{1}\rangle=U(\mathbf{\theta}_{1})|00\rangle,\;|\psi_{1}\rangle=U(\mathbf{ \theta}_{2})|00\rangle,\;|\psi_{2}\rangle=U(\mathbf{\theta}_{3})|00\rangle,\]
where parameters \(\mathbf{\theta}_{1}=(0.4827,0.3760)\), \(\mathbf{\theta}_{2}=(0.9394,0.2212)\) and \(\mathbf{\theta}_{3}=(0.1557,0.8190)\). We calculate the exact values, upper bounds, and lower bounds of relative entropy coherence of the superposition state
\[|\Omega_{1}\rangle=\alpha|\phi_{1}\rangle+\beta|\psi_{1}\rangle,\;\;|\Omega_{1 }^{{}^{\prime}}\rangle=\frac{|\Omega_{1}\rangle}{\|\Omega_{1}\|}, \tag{22}\]
and Tsallis relative entropy coherence of the superposition state
\[|\Omega_{2}\rangle=\alpha|\phi_{1}\rangle+\beta|\psi_{2}\rangle,\;\;|\Omega_{2 }^{{}^{\prime}}\rangle=\frac{|\Omega_{2}\rangle}{\|\Omega_{2}\|}, \tag{23}\]
Figure 1: (a) The relative entropy of POVM-based coherence. (b) The \(l_{1}\) of POVM-based coherence.
in which the coefficients \(\alpha,\beta\) are random scalars drawn from the uniform distribution in the interval \((0,1)\). We run our procedure \(10\) times by randomly choosing the coefficients \(\alpha,\beta\). Fig. (1a) shows the bounds of the relative entropy of POVM-based coherence. Only the upper bound of the Tsallis relative entropy is shown in Fig. (2a) for \(\lambda=0.3\). Fig. (2b) plots the upper and lower bounds of the Tsallis relative entropy of POVM coherence for the parameter \(\lambda=1.5\). Clearly, the numerical results are in good agreement with our theoretical analysis.
## IV Conclusion
The superposition principle of quantum states is fundamental in quantum mechanics. Generally, quantum objects exhibit both the wave and particle nature. In particular, when the POVM \(E\) is taken to be the von Neumann projective measurement, the coherence \(C_{l_{1}}(\rho,E)\) stands for the wave property of quantum objects. In this case the coherence \(C_{l_{1}}(\rho,E)\) and the distinguishability of which-path information (a measure of particle-property) in multi-path interference satisfy a trade-off relation [33], which has been verified in a generalized multi-path delayed-choice experiment on a large-scale quantum nanophotonic chip [34].
Based on the POVM-based coherence measures \(C_{r}(\rho,E)\), \(C_{l_{1}}(\rho,E)\), \(C_{rob}(\rho,E)\) and \(C_{T,\lambda}(\rho,E)\), we have studied the coherence of superpositions of arbitrary-dimensional pure states. We have explored the relationship between the POVM-based coherence of a superposed state and the POVM-based coherence of the states in the superposition, and derived analytically the lower and upper bounds for these POVM-based coherence measures.
Figure 2: The Tsallis relative entropy of POVM coherence.
To illustrate our results, we have presented detailed examples related to the POVM-based \(l_{1}\)-norm coherence for the one-qubit case and the POVM-based relative entropy coherence for the two-qubit case. Our results can be used to estimate the range of quantum coherence of superposed states, and may stimulate investigations on the superposition of coherence for quantum channels [20].
This work is supported by the National Natural Science Foundation of China (NSFC) under Grants 12075159, 12171044 and 12175147; Beijing Natural Science Foundation (Grant No. Z190005); the Academician Innovation Platform of Hainan Province; Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology (No. SIQSE202001).
|
2303.13938 | A Comparative Study of National Cyber Security Strategies of ten nations | This study compares the National Cybersecurity Strategies (NCSSs) of publicly
available documents of ten nations across Europe (United Kingdom, France,
Lithuania, Estonia, Spain, and Norway), Asia-Pacific (Singapore and Australia),
and the American region (the United States of America and Canada). The study
observed that there is not a unified understanding of the term "Cybersecurity";
however, a common trajectory of the NCSSs shows that the fight against
cybercrime is a joint effort among various stakeholders, hence the need for
strong international cooperation. Using a comparative structure and an NCSS
framework, the research finds similarities in protecting critical assets,
commitment to research and development, and improved national and international
collaboration. The study finds that the lack of a unified underlying
cybersecurity framework leads to a disparity in the structure and contents of
the strategies. The strengths and weaknesses of the NCSSs from the research can
benefit countries planning to develop or update their cybersecurity strategies.
The study gives recommendations that strategy developers can consider when
developing an NCSS. | Adejoke T. Odebade, Elhadj Benkhelifa | 2023-03-24T11:51:49Z | http://arxiv.org/abs/2303.13938v1 | # A Comparative Study of National Cyber Security Strategies of ten nations
###### Abstract
This study compares the National Cybersecurity Strategies (NCSSs) of publicly available documents of ten nations across Europe (United Kingdom, France, Lithuania, Estonia, Spain, and Norway), Asia-Pacific (Singapore and Australia), and the American region (the United States of America and Canada). The study observed that there is not a unified understanding of the term "Cybersecurity"; however, a common trajectory of the NCSSs shows that the fight against cybercrime is a joint effort among various stakeholders, hence the need for strong international cooperation. Using a comparative structure and an NCSS framework, the research finds similarities in protecting critical assets, commitment to research and development, and improved national and international collaboration. The study finds that the lack of a unified underlying cybersecurity framework leads to a disparity in the structure and contents of the strategies. The strengths and weaknesses of the NCSSs from the research can benefit countries planning to develop or update their cybersecurity strategies. The study gives recommendations that strategy developers can consider when developing an NCSS.
Cybersecurity, Cybersecurity Strategies, Comparative Study, NCSS
Footnote †: journal: Journal of Computer Science
## 1 Introduction
Cyberspace provides a tremendous opportunity for growth and development (ENISA, 2012). Its effective usage, especially in the Internet of Things, big data, and cloud computing, greatly influences national competitiveness (Min et al., 2015). However, it comes with challenges, such as cyber threats and attacks. Various countries over the past decade have taken steps to address the challenges of cyber threats by developing cybersecurity strategies, enacting cybersecurity laws, and ensuring safeguarding measures to protect customer data (Dedeke & Masterson, 2019). A national-level strategy is crucial to securing cyberspace to ensure prosperity in the digital world (Teoh & Mahmood, 2017). It gives an extensive plan of how an organisation intends to achieve its aim and objectives and make the best use of its unique qualities (Lepori et al., 2013). "Priorities for national cybersecurity strategies will vary country by country. In some countries, the focus may be on protecting critical infrastructure risk, while other countries may focus on protecting intellectual property, and still, others may focus on improving the cybersecurity awareness of newly connected citizens" (Goodwin & Nicholas, p. 2013). These statements show that countries develop their national cybersecurity strategy based on their understanding and perception of cybersecurity. What constitutes a significant risk for one country may not apply to another country. Therefore, governments will put together strategies to best protect their economy and citizens.
Structuring a National Cyber Security Strategy (NCSS) can be in areas such as investment in research and development, awareness and training, collaboration and information sharing, and partnership within government organisations, depending on the need and perception of the nation (Min et al., 2015). It is fundamental to protecting cyberspace from malicious attackers and providing a safe environment for the digital economy to thrive (Teoh & Mahmood, 2017; Shafqat & Masood, 2016); therefore, effective sustenance of cybersecurity is achievable through an enforceable national strategy (Ghernouti-Helie, 2010). Consequently, a national strategy's goals, objectives, scope, and priorities must be defined to foster partnerships among stakeholders and communicate a nation's objectives to other countries and stakeholders (Luijf et al., 2013; Sabillon et al., 2016). In developing an effective cybersecurity strategy, considerations of both national and international needs are paramount because cybercrime is a national as well as a global issue, necessitating the need for international collaboration and the development of national and international strategies to combat cybercrime (Goodwin & Nicholas, p. 2013; Ghernouti-Helie, 2010). NCSS should be current, adequate, and suitable to avoid putting ICT (Information and Communications Technology) and the lives of citizens at risk (Mori & Goto, 2018).
The paper is organised as follows. Following the introduction, we present related work in the domain, followed by the method and approach. Next, we present the results from the review and analysis, after which we present the discussion. The next section contains the framework mapping of the NCSS, and the last two sections explain the conclusion and recommendations. |
2307.08151 | Ehrhart quasi-polynomials and parallel translations | Given a rational polytope $P \subset \mathbb R^d$, the numerical function
counting lattice points in the integral dilations of $P$ is known to become a
quasi-polynomial, called the Ehrhart quasi-polynomial $\mathrm{ehr}_P$ of $P$.
In this paper we study the following problem: Given a rational $d$-polytope $P
\subset \mathbb R^d$, is there a nice way to know Ehrhart quasi-polynomials of
translated polytopes $P+ \mathbf v$ for all $\mathbf v \in \mathbb Q^d$? We
provide a way to compute such Ehrhart quasi-polynomials using a certain toric
arrangement and lattice point counting functions of translated cones of $P$.
This method allows us to visualize how constituent polynomials of
$\mathrm{ehr}_{P+\mathbf v}$ change in the torus $\mathbb R^d/\mathbb Z^d$. We
also prove that information of $\mathrm{ehr}_{P+\mathbf v}$ for all $\mathbf v
\in \mathbb Q^d$ determines the rational $d$-polytope $P \subset \mathbb R^d$
up to translations by integer vectors, and characterize all rational
$d$-polytopes $P \subset \mathbb R^d$ such that $\mathrm{ehr}_{P+\mathbf v}$ is
symmetric for all $\mathbf v \in \mathbb Q^d$. | Akihiro Higashitani, Satoshi Murai, Masahiko Yoshinaga | 2023-07-16T21:06:19Z | http://arxiv.org/abs/2307.08151v1 | # Ehrhart quasi-polynomials and parallel translations
###### Abstract.
Given a rational polytope \(P\subset\mathbb{R}^{d}\), the numerical function counting lattice points in the integral dilations of \(P\) is known to become a quasi-polynomial, called the Ehrhart quasi-polynomial \(\mathrm{ehr}_{P}\) of \(P\). In this paper we study the following problem: Given a rational \(d\)-polytope \(P\subset\mathbb{R}^{d}\), is there a nice way to know Ehrhart quasi-polynomials of translated polytopes \(P+\mathbf{v}\) for all \(\mathbf{v}\in\mathbb{Q}^{d}\)? We provide a way to compute such Ehrhart quasi-polynomials using a certain toric arrangement and lattice point counting functions of translated cones of \(P\). This method allows us to visualize how constituent polynomials of \(\mathrm{ehr}_{P+\mathbf{v}}\) change in the torus \(\mathbb{R}^{d}/\mathbb{Z}^{d}\). We also prove that information of \(\mathrm{ehr}_{P+\mathbf{v}}\) for all \(\mathbf{v}\in\mathbb{Q}^{d}\) determines the rational \(d\)-polytope \(P\subset\mathbb{R}^{d}\) up to translations by integer vectors, and characterize all rational \(d\)-polytopes \(P\subset\mathbb{R}^{d}\) such that \(\mathrm{ehr}_{P+\mathbf{v}}\) is symmetric for all \(\mathbf{v}\in\mathbb{Q}^{d}\).
## 1. Introduction
Enumerations of lattice points in a convex polytope is a classical important theme relating to algebra, combinatorics and geometry of convex polytopes. A fundamental result on this subject is Ehrhart's result telling that, for any rational polytope \(P\subset\mathbb{R}^{d}\), the function \(\mathbb{Z}_{\geq 0}\ni t\mapsto\#(tP\cap\mathbb{Z}^{d})\) becomes a quasi-polynomial in \(t\), where \(tP\) is the \(t\)th dilation of \(P\) and \(\#X\) denotes the cardinality of a finite set \(X\). This function is called the **Ehrhart quasi-polynomial** of \(P\) and we denote it by \(\mathrm{ehr}_{P}\). Let \(P+\mathbf{v}=\{\mathbf{x}+\mathbf{v}\mid\mathbf{x}\in P\}\) be the convex polytope obtained from a convex polytope \(P\) by the parallel translation by a vector \(\mathbf{v}\in\mathbb{R}^{d}\). The purpose of this paper is to develop a way to understand behaviors of \(\mathrm{ehr}_{P+\mathbf{v}}\) when \(\mathbf{v}\) runs over all vectors in \(\mathbb{Q}^{d}\), where \(P\) is a fixed rational polytope.
One motivation of studying this problem is special behaviors of \(\mathrm{ehr}_{P+\mathbf{v}}\) when we choose \(\mathbf{v}\in\mathbb{Q}^{d}\) somewhat randomly. Let us give an example to explain this. Let \(T\subset\mathbb{R}^{2}\) be the trapezoid whose vertices are \((0,0),(1,0),(2,1)\) and \((0,1)\). The Ehrhart quasi-polynomial of \(T+(\frac{17}{100},\frac{52}{100})\) becomes the following quasi-polynomial having minimum period 100:
\[\mathrm{ehr}_{T+(\frac{17}{100},\frac{52}{100})}(t)=\begin{cases}\frac{3}{2}t^{2}+\frac{5}{2}t+1&(\ t\equiv 0\ ),\\ \frac{3}{2}t^{2}+\frac{3}{2}t&(\ t\equiv 25,50,75\ ),\\ \frac{3}{2}t^{2}-\frac{1}{2}t&\left(\begin{array}{c}t\equiv 1,3,6,7,9,12,13,15,18,19,21,23,24,26,30,\\ 32,36,38,42,43,44,46,48,49,53,55,59,61,63,65,\\ 66,67,69,71,72,73,78,83,84,86,89,90,92,95,96,98\end{array}\right),\\ \frac{3}{2}t^{2}+\frac{1}{2}t&(\ \text{otherwise}\ ),\end{cases}\]
where "\(t\equiv a\)" means "\(t\equiv a\) (mod \(100\))". This quasi-polynomial has several special properties. For example, one can see
* (\(\alpha\)) It has a fairly large minimum period \(100\), but it consists of only 4 polynomials.
* (\(\beta\)) The polynomials \(\frac{3}{2}t^{2}\pm\frac{1}{2}t\) appear quite often compared with the other two polynomials.
* (\(\gamma\)) The polynomial \(\frac{3}{2}t^{2}-\frac{1}{2}t\) appears when \(t\equiv 1,3,6,7,\dots\), while the polynomial \(\frac{3}{2}t^{2}+\frac{1}{2}t=\frac{3}{2}(-t)^{2}-\frac{1}{2}(-t)\) appears when \(t=\dots,93,94,97,99\). There seems to be a kind of reciprocity about the appearance of these two polynomials.
Our first goal is to explain why we get these phenomena using a certain generalization of an Ehrhart quasi-polynomial which was considered by McMullen [17] and is called a **translated lattice points enumerator** in [20].
### First result
We introduce a few notation to state our results. A function \(f:\mathbb{Z}\to\mathbb{R}\) is said to be a **quasi-polynomial** if there is a natural number \(q\) and polynomials \(f_{0},f_{1},\dots,f_{q-1}\) such that
\[f(t)=f_{i}(t)\text{ for all }t\in\mathbb{Z}\text{ with }t\equiv i\text{ (mod }q).\]
A number \(q\) is called a **period** of \(f\) and the polynomial \(f_{k}\) is called the \(k\)th **constituent** of \(f\). For convention, we define the \(k\)th constituent \(f_{k}\) of \(f\) for any \(k\in\mathbb{Z}\) by setting \(f_{k}=f_{k^{\prime}}\) with \(k^{\prime}\equiv k\) (mod \(q\)). For example, if \(f\) has period \(3\), then the \(7\)th constituent equals the \(1\)st constituent \(f_{1}\) and the \((-1)\)th constituent equals the \(2\)nd constituent \(f_{2}\). We note that this definition does not depend on a choice of a period. We will say that a function \(L\) from \(\mathbb{Z}_{\geq 0}\) (or \(\mathbb{Z}_{>0}\)) to \(\mathbb{R}\) is a quasi-polynomial if there is a quasi-polynomial \(f:\mathbb{Z}\to\mathbb{R}\) such that \(L(t)=f(t)\) for all \(t\in\mathbb{Z}_{\geq 0}\) (or \(\mathbb{Z}_{>0}\)), and in that case we regard \(L\) as a function from \(\mathbb{Z}\) to \(\mathbb{R}\) by identifying \(L\) and \(f\).
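To keep the indexing convention \(f_{k}=f_{k'}\) for \(k\equiv k'\) (mod \(q\)) concrete, here is a minimal Python sketch (ours, not part of the paper; the class name and the example are our own choices) that stores a quasi-polynomial by the coefficient tuples of its constituents and evaluates it, illustrated on the segment \([0,\frac{1}{2}]\subset\mathbb{R}\), whose Ehrhart quasi-polynomial \(\lfloor t/2\rfloor+1\) has period \(2\).

```python
from fractions import Fraction as F

class QuasiPolynomial:
    """A quasi-polynomial given by the coefficient tuples of its constituents f_0, ..., f_{q-1};
    constituent(k) follows the convention f_k = f_{k mod q} for every integer k."""

    def __init__(self, constituents):
        self.constituents = [tuple(map(F, c)) for c in constituents]

    def constituent(self, k):
        return self.constituents[k % len(self.constituents)]

    def __call__(self, t):
        # evaluate the constituent selected by the residue of t
        return sum(c * t**i for i, c in enumerate(self.constituent(t)))

# Ehrhart quasi-polynomial of the segment [0, 1/2]: #(t*[0,1/2] ∩ Z) = floor(t/2) + 1
ehr_half_segment = QuasiPolynomial([(1, F(1, 2)), (F(1, 2), F(1, 2))])

assert [ehr_half_segment(t) for t in range(6)] == [1, 1, 2, 2, 3, 3]
# the (-1)th constituent equals the 1st constituent, as in the convention above
assert ehr_half_segment.constituent(-1) == ehr_half_segment.constituent(1)
```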
For a convex set \(X\subset\mathbb{R}^{d}\) and a vector \(\boldsymbol{v}\in\mathbb{R}^{d}\), we define the function \(\operatorname{TL}_{X,\boldsymbol{v}}:\mathbb{Z}_{\geq 0}\to\mathbb{R}\) by
\[\operatorname{TL}_{X,\boldsymbol{v}}(t)=\#\big{(}(tX+\boldsymbol{v})\cap \mathbb{Z}^{d}\big{)}\]
and call it the **translated lattice points enumerator** of \(X\) w.r.t. \(\boldsymbol{v}\). Clearly \(\operatorname{TL}_{P,\boldsymbol{0}}\) is nothing but the Ehrhart quasi-polynomial of \(P\). Generalizing Ehrhart's results, McMullen [17, SS4] proved that, if \(P\) is a rational polytope such that \(qP\) is integral then \(\operatorname{TL}_{P,\boldsymbol{v}}\) is a quasi-polynomial with period \(q\), and showed that there is a reciprocity between \(\operatorname{TL}_{\operatorname{int}(P),\boldsymbol{v}}\) and \(\operatorname{TL}_{P,-\boldsymbol{v}}\), where \(\operatorname{int}(P)\) is the interior of \(P\). As we will see soon in Section 2, for a rational polytope \(P\subset\mathbb{R}^{d}\) and \(\boldsymbol{v}\in\mathbb{Q}^{d}\), it follows from the above result of McMullen that
\[\text{the $k$th constituent of }\operatorname{ehr}_{P+\boldsymbol{v}}=\text{the $k$th constituent of } \operatorname{TL}_{P,\boldsymbol{kv}} \tag{1.1}\]
for all \(k\in\mathbb{Z}\). This equation (1.1) tells that knowing \(\operatorname{ehr}_{P+\boldsymbol{v}}\) for all \(\boldsymbol{v}\in\mathbb{Q}^{d}\) is essentially equivalent to knowing \(\operatorname{TL}_{P,\boldsymbol{v}}\) for all \(\boldsymbol{v}\in\mathbb{Q}^{d}\). Our first goal is to explain that the latter information can be described as a finite information although \(\operatorname{ehr}_{P+\boldsymbol{v}}\) could have arbitrary large minimum period.
To do this, we first discuss when \(\operatorname{TL}_{P,\boldsymbol{u}}\) and \(\operatorname{TL}_{P,\boldsymbol{v}}\) equal for different \(\boldsymbol{u},\boldsymbol{v}\in\mathbb{R}^{d}\) using toric arrangements. For \(\boldsymbol{a}=(a_{1},\dots,a_{d})\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\), let \(H_{\boldsymbol{a},b}\) be the hyperplane of \(\mathbb{R}^{d}\) defined by the equation \(a_{1}x_{1}+\dots+a_{d}x_{d}=b\). Let \(P\) be a rational convex \(d\)-polytope having \(m\) facets \(F_{1},\dots,F_{m}\) such that each \(F_{k}\) lies in the hyperplane \(H_{\boldsymbol{a}_{k},b_{k}}\) with \(\boldsymbol{a}_{k}\in\mathbb{Z}^{d}\), \(b_{k}\in\mathbb{Z}\) and \(\gcd(\boldsymbol{a}_{k},b_{k})=1\). We consider the arrangement of hyperplanes
\[\mathcal{A}_{P}=\bigcup_{i=1}^{m}\{H_{\boldsymbol{a}_{i},k}\mid k\in\mathbb{ Z}\}\]
and let \(\Delta_{P}\) be the open polyhedral decomposition of \(\mathbb{R}^{d}\) determined by \(\mathcal{A}_{P}\). Both \(\mathcal{A}_{P}\) and \(\Delta_{P}\) are closed under translations by integer vectors, so by the natural projection \(\mathbb{R}^{d}\to\mathbb{R}^{d}/\mathbb{Z}^{d}\) they induce an arrangement of finite hyperplanes on the torus \(\mathbb{R}^{d}/\mathbb{Z}^{d}\) and a finite open cell decomposition \(\Delta_{P}/\mathbb{Z}^{d}\) of \(\mathbb{R}^{d}/\mathbb{Z}^{d}\). Let \([\boldsymbol{v}]\in\mathbb{R}^{d}/\mathbb{Z}^{d}\) denote the natural projection of \(\boldsymbol{v}\in\mathbb{R}^{d}\) to \(\mathbb{R}^{d}/\mathbb{Z}^{d}\).
**Theorem 1.1**.: _With the notation as above, for \(\boldsymbol{u},\boldsymbol{v}\in\mathbb{R}^{d}\), if \([\boldsymbol{u}]\) and \([\boldsymbol{v}]\) belong to the same open cell of \(\Delta_{P}/\mathbb{Z}^{d}\) then_
\[\operatorname{TL}_{P,\boldsymbol{u}}(t)=\operatorname{TL}_{P,\boldsymbol{v}}(t) \text{\ \ for all }t\in\mathbb{Z}_{\geq 0}.\]
For an open cell \(C\in\Delta_{P}/\mathbb{Z}^{d}\), define a quasi-polynomial \(\mathrm{TL}_{P,C}\) by
\[\mathrm{TL}_{P,C}=\mathrm{TL}_{P,\boldsymbol{v}}\quad\text{with }[\boldsymbol{v}] \in C,\]
which is well-defined by Theorem 1.1. Then (1.1) tells that the \(k\)th constituent of \(\mathrm{ehr}_{P+\boldsymbol{v}}\) is the polynomial which appears as the \(k\)th constituent of \(\mathrm{TL}_{P,C}\) with \([k\boldsymbol{v}]\in C\). This provides us a way to compute \(\mathrm{ehr}_{P+\boldsymbol{v}}\) for any \(\boldsymbol{v}\in\mathbb{Q}^{d}\) from translated lattice points enumerators \(\mathrm{TL}_{P,C}\).
Let us compute \(\mathrm{ehr}_{T+(\frac{17}{100},\frac{52}{100})}(t)\) using this idea, where \(T\) is the trapezoid whose vertices are \((0,0),(1,0),(2,1)\) and \((0,1)\). Figure 1 shows the arrangement \(\mathcal{A}_{T}\) and the cell complex \(\Delta_{T}/\mathbb{Z}^{2}\). The complex \(\Delta_{T}/\mathbb{Z}^{2}\) has two \(2\)-dimensional cells \(F_{1},F_{2}\), three \(1\)-dimensional cells \(E_{1},E_{2},E_{3}\) and one \(0\)-dimensional cell \(V_{1}\) shown in Figure 1. Since \(T\) is a lattice polygon, each \(\mathrm{TL}_{P,C}\) is a polynomial by McMullen's result, and here are list of \(\mathrm{TL}_{T,C}(t)\):
\[\begin{split}\mathrm{TL}_{T,F_{1}}(t)&=\tfrac{3}{2}t^{2}-\tfrac{1}{2}t,\\ \mathrm{TL}_{T,F_{2}}(t)&=\mathrm{TL}_{T,E_{1}}(t)=\mathrm{TL}_{T,E_{2}}(t)=\tfrac{3}{2}t^{2}+\tfrac{1}{2}t,\\ \mathrm{TL}_{T,E_{3}}(t)&=\tfrac{3}{2}t^{2}+\tfrac{3}{2}t,\\ \mathrm{TL}_{T,V_{1}}(t)&=\tfrac{3}{2}t^{2}+\tfrac{5}{2}t+1.\end{split} \tag{1.2}\]
Also, for \(k=0,1,2,\ldots,99\), one has
\[\left[k\left(\frac{17}{100},\frac{52}{100}\right)\right]\in\begin{cases}V_{1}&(\;k\equiv 0\;),\\ E_{3}&(\;k\equiv 25,50,75\;),\\ E_{2}&(\;k\equiv 20,40,60,80\;),\\ F_{1}&\left(\;\begin{array}{c}k\equiv 1,3,6,7,9,12,13,15,18,19,21,23,24,26,30,\\ 32,36,38,42,43,44,46,48,49,53,55,59,61,63,65,\\ 66,67,69,71,72,73,78,83,84,86,89,90,92,95,96,98\end{array}\right),\\ F_{2}&\left(\;\begin{array}{c}k\equiv 2,4,5,8,10,11,14,16,17,22,27,28,29,31,33,\\ 34,35,37,39,41,45,47,51,52,54,56,57,58,62,64,\\ 68,70,74,76,77,79,81,82,85,87,88,91,93,94,97,99\end{array}\right).\end{cases} \tag{1.3}\]
Since (1.1) tells that the \(k\)th constituent of \(\mathrm{ehr}_{P+\boldsymbol{v}}\) equals the \(k\)th constituent of \(\mathrm{TL}_{P,k\boldsymbol{v}}\), which equals \(\mathrm{TL}_{P,C}\) with \([k\boldsymbol{v}]\in C\in\Delta_{P}/\mathbb{Z}^{d}\), the equations (1.2) and (1.3) recover the formula of \(\mathrm{ehr}_{T+(\frac{17}{100},\frac{52}{100})}(t)\) given at the beginning of this section.
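This bookkeeping can be checked by brute force. The following Python sketch (ours, not from the paper; all function names are our own) counts the lattice points of \(tT+t\boldsymbol{v}\) row by row and compares the result with the value of \(\mathrm{TL}_{T,C}\) for the cell \(C\in\Delta_{T}/\mathbb{Z}^{2}\) containing \([t\boldsymbol{v}]\), located via the fractional parts of the coordinates of \(t\boldsymbol{v}\).

```python
import math
from fractions import Fraction as F

def ehr_T_shift(t, vx, vy):
    """#(lattice points of tT + t(vx,vy)) for T = {0 <= y <= 1, x >= 0, x <= y + 1}."""
    sx, sy = t * vx, t * vy
    total, y = 0, math.ceil(sy)
    while y <= t + sy:
        # integers x with sx <= x <= (y - sy) + t + sx
        total += math.floor((y - sy) + t + sx) - math.ceil(sx) + 1
        y += 1
    return total

def tl_from_cell(t, vx, vy):
    """Locate [t(vx,vy)] in Delta_T/Z^2 and return TL_{T,C}(t) for that cell C."""
    fx, fy = (t * vx) % 1, (t * vy) % 1
    fd = (t * (vy - vx)) % 1              # fractional part of -x + y, detecting the lines -x + y in Z
    if fx == 0 and fy == 0:
        return F(3, 2)*t*t + F(5, 2)*t + 1    # vertex V_1
    if fy == 0:
        return F(3, 2)*t*t + F(3, 2)*t        # horizontal edge E_3
    if fx == 0 or fd == 0:
        return F(3, 2)*t*t + F(1, 2)*t        # edges E_1, E_2
    if fy > fx:
        return F(3, 2)*t*t - F(1, 2)*t        # 2-cell F_1
    return F(3, 2)*t*t + F(1, 2)*t            # 2-cell F_2

vx, vy = F(17, 100), F(52, 100)
for t in range(301):
    assert ehr_T_shift(t, vx, vy) == tl_from_cell(t, vx, vy)
print("piecewise description of ehr_{T+(17/100,52/100)} verified for t = 0,...,300")
```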
As we will see, the proof of Theorem 1.1 is somewhat straightforward, and the way of computing \(\mathrm{ehr}_{P+\boldsymbol{v}}(t)\) from \(\mathrm{TL}_{P,C}(t)\) explained above may be considered as a kind of an observation rather than a new result. But we think that this is a useful observation. For example, this way allows us to visualize how the constituents of \(\mathrm{ehr}_{P+\boldsymbol{v}}\) change by plotting the points \([k\boldsymbol{v}]\) on \(\mathbb{R}^{d}/\mathbb{Z}^{d}\). Also,
we can see why properties (\(\alpha\)), (\(\beta\)) and (\(\gamma\)) occur from this observation. For the property \((\alpha)\), we only see 4 polynomials in \(\operatorname{ehr}_{T+(\frac{17}{100},\frac{52}{100})}\) because we have only 4 types of translated lattice point enumerators. More generally, it can be shown that, if we fix a rational polytope \(P\), then we can only have a finite number of polynomials as constituents of \(\operatorname{ehr}_{P+\mathbf{v}}\) (Corollary 3.10). For the property (\(\beta\)), the polynomials \(\frac{3}{2}t^{2}\pm\frac{1}{2}t\) appear many times simply because they are polynomials assigned to maximal dimensional cells of \(\Delta_{T}/\mathbb{Z}^{2}\) (indeed, if we choose \(\mathbf{v}\) randomly, then \([k\mathbf{v}]\) is likely to belong to a maximal dimensional cell). Finally, we will see in Section 5 that the property (\(\gamma\)) can be figured out from the reciprocity of \(\mathrm{TL}_{P,\mathbf{v}}\) (see Corollary 5.4).
### Second result

Recently, real-valued extensions of Ehrhart functions, namely the function \(\operatorname{ehr}_{P}^{\mathbb{R}}:\mathbb{R}_{\geq 0}\to\mathbb{Z}_{\geq 0}\) given by \(\operatorname{ehr}_{P}^{\mathbb{R}}(t)=\#(tP\cap\mathbb{Z}^{d})\) for all \(t\in\mathbb{R}_{\geq 0}\), have attracted interest [3, 4, 12, 13, 14]. One surprising result on this topic is the following result of Royer [13, 14] proving that \(\operatorname{ehr}_{P+\mathbf{v}}^{\mathbb{R}}\) for all \(\mathbf{v}\in\mathbb{Z}^{d}\) determines the polytope \(P\).
**Theorem 1.2** (Royer).: _Let \(P\) and \(Q\) be rational polytopes in \(\mathbb{R}^{d}\). If \(\operatorname{ehr}_{P+\mathbf{v}}^{\mathbb{R}}(t)=\operatorname{ehr}_{Q+\mathbf{v}}^{ \mathbb{R}}(t)\) for all \(\mathbf{v}\in\mathbb{Z}^{d}\) and \(t\in\mathbb{R}_{\geq 0}\), then \(P=Q\)._
Our second result is somewhat analogous to this result of Royer. We prove that \(\operatorname{ehr}_{P+\mathbf{v}}\) for all \(\mathbf{v}\in\mathbb{Q}^{d}\) determines the polytope \(P\) up to translations by integer vectors.
**Theorem 1.3**.: _Let \(P\) and \(Q\) be rational \(d\)-polytopes in \(\mathbb{R}^{d}\). If \(\operatorname{ehr}_{P+\mathbf{v}}(t)=\operatorname{ehr}_{Q+\mathbf{v}}(t)\) for all \(\mathbf{v}\in\mathbb{Q}^{d}\) and \(t\in\mathbb{Z}_{\geq 0}\), then \(P=Q+\mathbf{u}\) for some \(\mathbf{u}\in\mathbb{Z}^{d}\)._
### Third result
The original motivation of this study actually comes from an attempt to generalize results of de Vries and the third author in [20], who found a connection between symmetries on constituents of \(\operatorname{ehr}_{P+\mathbf{v}}\) and geometric symmetries of \(P\). Indeed, the following result is one of the main results in [20]. We say that a quasi-polynomial \(f\) is **symmetric** if the \(k\)th constituent of \(f\) equals the \((-k)\)th constituent of \(f\) for all \(k\in\mathbb{Z}\). Also, a convex polytope \(P\subset\mathbb{R}^{d}\) is said to be **centrally symmetric** if \(P=-P+\mathbf{u}\) for some \(\mathbf{u}\in\mathbb{R}^{d}\).
**Theorem 1.4** (de Vries-Yoshinaga).: _Let \(P\subset\mathbb{R}^{d}\) be a lattice \(d\)-polytope. The following conditions are equivalent._
1. \(\operatorname{ehr}_{P+\mathbf{v}}\) _is symmetric for any_ \(\mathbf{v}\in\mathbb{Q}^{d}\)_._
2. \(P\) _is centrally symmetric._
As posed in [20, Problem 6.7], it is natural to ask if there is a generalization of this result for rational polytopes. Theorem 1.4 actually proves that, if a rational polytope \(P\) satisfies the property (1) of the above theorem, then \(P\) must be centrally symmetric (see Corollary 7.4). Hence, to answer this question, we can assume \(P=-P\). We generalize Theorem 1.4 in the following form.
**Theorem 1.5**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope with \(P=-P\). The following conditions are equivalent._
1. \(\operatorname{ehr}_{P+\mathbf{v}}\) _is symmetric for all_ \(\mathbf{v}\in\mathbb{Q}^{d}\)_._
2. \(2P\) _is integral._
### Organization of the paper
This paper is organized as follows: We first quickly review basic known properties of Ehrhart quasi-polynomials and translated lattice points enumerators in Section 2. In Section 3, we study translated lattice points enumerators using arrangement \(\mathcal{A}_{P}\) and prove Theorem 1.1. Then, after seeing two examples in Section 4, we discuss a reciprocity of translated lattice points enumerators on maximal cells of \(\Delta_{P}/\mathbb{Z}^{d}\) in Section 5. In Section 6, we prove that
translated lattice point enumerators determine the polytope \(P\) up to translations by integer vectors. In Section 7, we study translated lattice points enumerators of polytopes with some symmetry, in particular, prove Theorem 1.5. In Section 8, we discuss a connection to commutative algebra, more precisely, we discuss a connection between translated lattice points enumerators and conic divisorial ideals in Ehrhart rings. We list a few problems which we cannot solve in the final section, Section 9.
### Acknowledgements
We thank Katharina Jochemko for letting us know McMullen's work in [17] and thank Matthias Beck for letting us know the work of Royer in [13, 14]. The first author is partially supported by KAKENHI 20K03513 and 21KK0043, the second author is partially supported by KAKENHI 21K0319 and 21H00975, and the third author is partially supported by KAKENHI 18H01115 and 23H00081.
## 2. Ehrhart quasi-polynomials and translated lattice point enumerators
In this section, we recall basic results on Ehrhart quasi-polynomials and explain a connection between Ehrhart quasi-polynomials of translated polytopes and translated lattice point enumerators.
### Ehrhart quasi-polynomial
We quickly recall Ehrhart's theorems. We refer the readers to [5, 21] for basics on convex polytopes. A **convex polytope**\(P\) in \(\mathbb{R}^{d}\) is a convex hull of finite points in \(\mathbb{R}^{d}\). The **dimension** of a polytope \(P\) is the dimension of its affine hull. A \(k\)-dimensional convex polytope will be simply called a \(k\)**-polytope** in this paper. A convex polytope \(P\) is said to be **integral** (resp. **rational**) if all the vertices of \(P\) are lattice points (resp. rational points). The **denominator** of a rational polytope \(P\) is the smallest integer \(k>0\) such that \(kP\) is integral. The following result is a fundamental result in Ehrhart theory. See [5, Theorems 3.23 and 4.1].
**Theorem 2.1** (Ehrhart).: _Let \(P\subset\mathbb{R}^{d}\) be a rational polytope and \(q\) the denominator of \(P\). Then the function \(\operatorname{ehr}_{P}:\mathbb{Z}_{\geq 0}\to\mathbb{R}\) defined by_
\[\operatorname{ehr}_{P}(t)=\#(tP\cap\mathbb{Z}^{d})\]
_is a quasi-polynomial with period \(q\)._
As we noted in Introduction, we regard \(\operatorname{ehr}_{P}\) as a function from \(\mathbb{Z}\) to \(\mathbb{R}\) by identifying it with the corresponding quasi-polynomial \(f:\mathbb{Z}\to\mathbb{R}\) that coincides with \(\operatorname{ehr}_{P}\) on \(\mathbb{Z}_{\geq 0}\). Thus, if \(q\) is a period of \(f\), then for a positive integer \(t>0\) we set \(\operatorname{ehr}_{P}(-t)=f_{k}(-t)\), where \(f_{k}\) is the \(k\)th constituent of \(f\) with \(-t\equiv k\) (mod \(q\)). The quasi-polynomial \(\operatorname{ehr}_{P}\) is called the **Ehrhart quasi-polynomial** of \(P\).
The following reciprocity result is another important result in Ehrhart theory.
**Theorem 2.2** (Ehrhart reciprocity).: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. Then_
\[\#\big{(}\operatorname{int}(tP)\cap\mathbb{Z}^{d}\big{)}=(-1)^{d}\operatorname {ehr}_{P}(-t)\ \ \text{for}\ \ t\in\mathbb{Z}_{>0}.\]
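As a quick illustration (ours, not from the source): for the trapezoid \(T\) of the Introduction one has \(\operatorname{ehr}_{T}(t)=\frac{3}{2}t^{2}+\frac{5}{2}t+1\), so the reciprocity at \(t=2\) predicts

\[\#\big(\operatorname{int}(2T)\cap\mathbb{Z}^{2}\big)=(-1)^{2}\operatorname{ehr}_{T}(-2)=\tfrac{3}{2}\cdot 4-\tfrac{5}{2}\cdot 2+1=2,\]

matching the two interior lattice points \((1,1)\) and \((2,1)\) of \(2T\).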
### Translated lattice points enumerator
Recall that, for a convex set \(X\subset\mathbb{R}^{d}\) and \(\boldsymbol{v}\in\mathbb{R}^{d}\), the **translated lattice points enumerator** of \(X\) w.r.t. \(\boldsymbol{v}\) is the function \(\operatorname{TL}_{X,\boldsymbol{v}}:\mathbb{Z}_{\geq 0}\to\mathbb{R}\) defined by
\[\operatorname{TL}_{X,\boldsymbol{v}}(t)=\#\big{(}(tX+\boldsymbol{v})\cap \mathbb{Z}^{d}\big{)}\ \ \text{for}\ t\in\mathbb{Z}_{\geq 0}.\]
McMullen [17, §4] proved the following generalization of Ehrhart's results.
**Theorem 2.3** (McMullen).: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope and \(q\) the denominator of \(P\). Then_
1. _For any_ \(\mathbf{v}\in\mathbb{R}^{d}\)_, the function_ \(\mathrm{TL}_{P,\mathbf{v}}\) _is a quasi-polynomial with period_ \(q\)_._
2. _For any_ \(\mathbf{v}\in\mathbb{R}^{d}\)_, one has_ \[\mathrm{TL}_{\mathrm{int}(P),\mathbf{v}}(t)=(-1)^{d}\mathrm{TL}_{P,-\mathbf{v}}(-t)\ \ \text{ for }t\in\mathbb{Z}_{>0}.\]
We remark that \(\mathbf{v}\) is not necessarily a rational point in the above theorem and that the function \(\mathrm{TL}_{P,\mathbf{v}}\) becomes a polynomial when \(P\) is integral.
The following connection between Ehrhart quasi-polynomials of translated polytopes and translated lattice points enumerators, which essentially appeared in [20, Corollary 3.4], is fundamental in the rest of this paper.
**Lemma 2.4**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope and \(\mathbf{v}\in\mathbb{Q}^{d}\). For any integer \(k\in\mathbb{Z}\), one has_
\[\text{the $k$th constituent of $\mathrm{ehr}_{P+\mathbf{v}}=$ the $k$th constituent of $\mathrm{TL}_{P,k\mathbf{v}}$}\]
Proof.: We may assume \(k\geq 0\). Let \(\rho\) and \(\rho^{\prime}\) be positive integers such that \(\rho P\) is integral and \(\rho^{\prime}\mathbf{v}\in\mathbb{Z}^{d}\). Let \(q\) be a common multiple of \(\rho\) and \(\rho^{\prime}\). Then \(q\) is a common period of quasi-polynomials \(\mathrm{ehr}_{P+\mathbf{v}}\) and \(\mathrm{TL}_{P,\mathbf{v}}\). For every integer \(t\geq 0\) with \(t\equiv k\) (mod \(q\)) we have
\[\mathrm{ehr}_{P+\mathbf{v}}(t)=\#\big{(}(tP+t\mathbf{v})\cap\mathbb{Z}^{d}\big{)}=\# \big{(}(tP+k\mathbf{v})\cap\mathbb{Z}^{d}\big{)}=\mathrm{TL}_{P,k\mathbf{v}}(t),\]
where the second equality follows from \((t-k)\mathbf{v}\in\mathbb{Z}^{d}\). Since both \(\mathrm{ehr}_{P+\mathbf{v}}\) and \(\mathrm{TL}_{P,\mathbf{v}}\) are quasi-polynomials with a period \(q\), the above equation proves the desired property.
_Remark 2.5_.: Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope and \(\mathbf{v}\in\mathbb{R}^{d}\). Like usual Ehrhart quasi-polynomials, each constituent of \(\mathrm{TL}_{P,\mathbf{v}}\) is a polynomial of degree \(d\) whose leading coefficient equals the volume of \(P\). Indeed, if \(f_{k}\) is the \(k\)th constituent of \(\mathrm{TL}_{P,\mathbf{v}}\) and \(q\) is a period of \(\mathrm{TL}_{P,\mathbf{v}}\), then \(\lim_{t\to\infty}\frac{f_{k}(qt+k)}{(qt+k)^{d}}=\lim_{t\to\infty}\frac{\#(( qt+k)P\cap\mathbb{Z}^{d})}{(qt+k)^{d}}\) is the volume of \(P\). Since \(f_{k}\) is a polynomial, this means that \(f_{k}\) has degree \(d\) and the coefficient of \(t^{d}\) in \(f_{k}\) equals the volume of \(P\).
## 3. Translated lattice points enumerators and toric arrangements
In this section, we study when \(\mathrm{TL}_{P,\mathbf{v}}\) equals \(\mathrm{TL}_{P,\mathbf{u}}\) for different \(\mathbf{v},\mathbf{u}\in\mathbb{R}^{d}\) using toric arrangements. Our goal is to prove Theorem 1.1. To study this problem, it is convenient to regard a translated lattice points enumerator as a generating function of a translated cone. Let \(P\subset\mathbb{R}^{d}\) be a convex polytope and let \(\mathcal{C}_{P}\) be the cone generated by \(\{(\mathbf{x},1)\mid\mathbf{x}\in P\}\). Then the translated lattice points enumerator \(\mathrm{TL}_{P,\mathbf{v}}\) can be identified with a generating function of a translated cone \(\mathcal{C}_{P}+(\mathbf{v},0)\) because of the equality
\[\sum_{(a_{1},\ldots,a_{d+1})\in(\mathcal{C}_{P}+(\mathbf{v},0))\cap\mathbb{Z}^{d+1 }}z^{a_{d+1}}=\sum_{t=0}^{\infty}\big{(}\mathrm{TL}_{P,\mathbf{v}}(t)\big{)}z^{t}.\]
This equation holds since the lattice points of \(\mathcal{C}_{P}+(\mathbf{v},0)\) lying at height \(a_{d+1}=t\) are exactly the lattice points of \(tP+\mathbf{v}\). This in particular tells that if \(\mathcal{C}_{P}+(\mathbf{u},0)\) and \(\mathcal{C}_{P}+(\mathbf{v},0)\) have the same lattice points, then we have \(\mathrm{TL}_{P,\mathbf{u}}=\mathrm{TL}_{P,\mathbf{v}}\). To prove Theorem 1.1, we mainly study when \(\mathcal{C}_{P}+(\mathbf{u},0)\) and \(\mathcal{C}_{P}+(\mathbf{v},0)\) have the same lattice points.
We note that such a study is not very new. Indeed, lattice points in the translated cone \(\mathcal{C}_{P}+\mathbf{v}\) are closely related to conic divisorial ideals of Ehrhart rings studied in [7, 9], and Bruns [7] explains for which \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{d+1}\) the lattice points in \(\mathcal{C}_{P}+\mathbf{u}\) equal those in \(\mathcal{C}_{P}+\mathbf{v}\). We will explain this connection to commutative algebra later in Section 8.
### Regions associated with hyperplane arrangements
We first introduce some notation on arrangements of hyperplanes. For \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\), we write \((\mathbf{x},\mathbf{y})\) for the standard inner product. Also, for \(\mathbf{a}\in\mathbb{R}^{d}\setminus\{\mathbf{0}\}\) and \(b\in\mathbb{R}\), we write
\[H^{\geq}_{\mathbf{a},b}=\{\mathbf{x}\in\mathbb{R}^{d}\mid(\mathbf{a},\mathbf{x})\geq b\}\ \ \ \text{and}\ \ \ H^{>}_{\mathbf{a},b}=\{\mathbf{x}\in\mathbb{R}^{d}\mid(\mathbf{a},\mathbf{x})>b\}\]
for the closed and the open half-space defined by the linear inequalities \((\mathbf{a},\mathbf{x})\geq b\) and \((\mathbf{a},\mathbf{x})>b\), respectively, and write
\[H_{\mathbf{a},b}=\{\mathbf{x}\in\mathbb{R}^{d}\mid(\mathbf{a},\mathbf{x})=b\}\]
for the hyperplane defined by the linear equation \((\mathbf{a},\mathbf{x})=b\). In the case where \(\mathbf{a}\) can be chosen from \(\mathbb{Z}^{d}\) and \(b\) is from \(\mathbb{Z}\), we call the hyperplane \(H_{\mathbf{a},b}\)_rational_. Let \(N=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\}\) be a set of elements in \(\mathbb{Z}^{d}\setminus\{\mathbf{0}\}\). Define the arrangement of hyperplanes
\[\mathcal{A}_{N}=\{H_{\mathbf{a},k}\mid\mathbf{a}\in N,\ k\in\mathbb{Z}\}.\]
See Figures 2 and 3. From now on, we fix an order \(\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\) of elements of \(N\). We define the map \(\varphi_{(\mathbf{a}_{1},\ldots,\mathbf{a}_{m})}:\mathbb{R}^{d}\to\mathbb{R}^{m}\) by
\[\varphi_{(\mathbf{a}_{1},\ldots,\mathbf{a}_{m})}(\mathbf{x})=\big{(}(\mathbf{a}_{1},\mathbf{x}),( \mathbf{a}_{2},\mathbf{x}),\ldots,(\mathbf{a}_{m},\mathbf{x})\big{)}.\]
For \(x\in\mathbb{R}\), we write \(\lfloor x\rfloor=\max\{a\in\mathbb{Z}\mid a\leq x\}\) and \(\lceil x\rceil=\min\{a\in\mathbb{Z}\mid a\geq x\}\). Also, given an integer sequence \(\mathbf{c}=(c_{1},\ldots,c_{m})\in\mathbb{Z}^{m}\), we define
\[U^{N}_{\mathbf{c}}=\{\mathbf{x}\in\mathbb{R}^{d}\mid\lceil\varphi_{(\mathbf{a}_{1},\ldots,\mathbf{a}_{m})}(\mathbf{x})\rceil=\mathbf{c}\}=\{\mathbf{x}\in\mathbb{R}^{d}\mid c_{i}-1<( \mathbf{a}_{i},\mathbf{x})\leq c_{i}\ \text{for}\ i=1,2,\ldots,m\}\]
where \(\lceil(x_{1},\ldots,x_{m})\rceil=(\lceil x_{1}\rceil,\ldots,\lceil x_{m}\rceil)\). We call \(U^{N}_{\mathbf{c}}\) an **upper region** of \(N\). Note that \(U^{N}_{\mathbf{c}}\) could be empty. Also we have the partition
\[\mathbb{R}^{d}=\bigsqcup_{\mathbf{c}\in\mathbb{Z}^{m}}U^{N}_{\mathbf{c}}\]
where \(\bigsqcup\) denotes a disjoint union. We write \(\Lambda_{N}\) for the set of all upper regions of \(N\). The set \(\Lambda_{N}\) is stable under translations by integer vectors, so \(\mathbb{Z}^{d}\) acts on it. Indeed, since \(\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\) are integer vectors, for any \(\mathbf{v}\in\mathbb{Z}^{d}\), we have
\[U^{N}_{\mathbf{c}}+\mathbf{v}=U^{N}_{\mathbf{c}+\varphi_{N}(\mathbf{v})}.\]
We write \(\Lambda_{N}/\mathbb{Z}^{d}\) for the quotient of these sets by this \(\mathbb{Z}^{d}\)-action defined by translations by integer vectors. This set can be considered as a partition of the \(d\)-torus \(\mathbb{R}^{d}/\mathbb{Z}^{d}\).
**Example 3.1**.: Let \(N=\{(1,0),(-1,2)\}\). Then the set \(\Lambda_{N}/\mathbb{Z}^{2}\) consists of two elements with the following representatives:
\[R_{1}=U^{N}_{(1,1)}=\{(x,y)\in\mathbb{R}^{2}\mid 0<x\leq 1,0<-x+2y \leq 1\},\] \[R_{2}=U^{N}_{(1,0)}=\{(x,y)\in\mathbb{R}^{2}\mid 0<x\leq 1,-1<-x+2y \leq 0\}.\]
See Figure 2 for the visualization of \(\mathcal{A}_{N}\) and \(\Lambda_{N}/\mathbb{Z}^{2}\).
### Upper regions and lattice points in translated cones
We now explain how these regions relate to lattice points in translated cones. We first recall two basic facts on lattice points. The following lemma is an easy consequence of the Euclidean algorithm.
**Lemma 3.2**.: _Let \(\mathbf{a}\in\mathbb{Z}^{d}\setminus\{\mathbf{0}\}\), \(b\in\mathbb{R}\) and \(g=\gcd(\mathbf{a})\). A linear equation \((\mathbf{a},\mathbf{x})=b\) has an integral solution if and only if \(b\in g\mathbb{Z}\)._
We also need the following statement which easily follows from Lemma 3.2.
**Lemma 3.3**.: _Let \(H\subset\mathbb{R}^{d}\) be a hyperplane and let \(\boldsymbol{v}\in\mathbb{R}^{d}\) be a point such that \(H+\boldsymbol{v}\neq H\). There is an \(\varepsilon>0\) such that \(H+s\boldsymbol{v}\) contains no lattice points for any \(0<s\leq\varepsilon\)._
**Lemma 3.4**.: _Let \(H\subset\mathbb{R}^{d}\) be a rational hyperplane. Any \((d-1)\)-dimensional convex cone in \(H\) contains a lattice point._
Proof.: Let \(H=H_{\boldsymbol{a},b}\) for some \(\boldsymbol{a}\in\mathbb{Z}^{d}\) and \(b\in\mathbb{Z}\). Without loss of generality, we may assume \(b=0\). Since any \(d\)-dimensional convex cone in \(\mathbb{R}^{d}\) contains a lattice point, the lemma follows from the fact that \(H\cap\mathbb{Z}^{d}\cong\mathbb{Z}^{d-1}\) as \(\mathbb{Z}\)-modules.
Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. By the fundamental theorem on convex cones, the cone \(\mathcal{C}_{P}\) has the unique presentation
\[\mathcal{C}_{P}=H^{\geq}_{\boldsymbol{a}_{1},0}\cap\cdots\cap H^{\geq}_{ \boldsymbol{a}_{m},0} \tag{3.1}\]
such that
1. each \(\boldsymbol{a}_{i}\) is primitive (that is, \(\gcd(\boldsymbol{a}_{i})=1\)), and
2. the presentation is irredundant, that is, each \(H^{\geq}_{\boldsymbol{a}_{i},0}\) cannot be omitted from the presentation.
Note that the second condition tells that \(\mathcal{C}_{P}\cap H_{\boldsymbol{a}_{i},0}\) is a facet of \(\mathcal{C}_{P}\). The vectors \(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{m}\) in (3.1) are called (inner) **normal vectors** of \(\mathcal{C}_{P}\) and we write \(\widetilde{N}(P)=\{\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{m}\}\) for the set of all normal vectors of \(\mathcal{C}_{P}\). The next statement was given in [7].
**Proposition 3.5** (Bruns).: _Let \(P\subset\mathbb{R}^{d}\) be a convex polytope, \(\widetilde{N}(P)=\{\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{m}\}\) and \(\boldsymbol{n}=(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{m})\). Let \(\boldsymbol{u},\boldsymbol{v}\in\mathbb{R}^{d+1}\). The following conditions are equivalent._
1. \((\mathcal{C}_{P}+\boldsymbol{u})\cap\mathbb{Z}^{d+1}=(\mathcal{C}_{P}+ \boldsymbol{v})\cap\mathbb{Z}^{d+1}\)_._
2. \(\lceil\varphi_{\boldsymbol{n}}(\boldsymbol{u})\rceil=\lceil\varphi_{ \boldsymbol{n}}(\boldsymbol{v})\rceil\)_, that is,_ \(\boldsymbol{u}\) _and_ \(\boldsymbol{v}\) _belong to the same upper region of_ \(\Lambda_{\widetilde{N}(P)}\)_._
Proof.: Let \(\lceil\varphi_{\boldsymbol{n}}(\boldsymbol{u})\rceil=(c_{1},\ldots,c_{m})\) and let \(\lceil\varphi_{\boldsymbol{n}}(\boldsymbol{v})\rceil=(d_{1},\ldots,d_{m})\). Since
\[\mathcal{C}_{P}+\boldsymbol{u}=H^{\geq}_{\boldsymbol{a}_{1},(\boldsymbol{a}_ {1},\boldsymbol{u})}\cap\cdots\cap H^{\geq}_{\boldsymbol{a}_{m},(\boldsymbol {a}_{m},\boldsymbol{u})}\]
and since Lemma 3.2 tells
\[H^{\geq}_{\boldsymbol{a},b}\cap\mathbb{Z}^{d+1}=H^{\geq}_{\boldsymbol{a},[b] }\cap\mathbb{Z}^{d+1}\ \ \text{for any}\ \boldsymbol{a}\in\mathbb{Z}^{d},\ b\in\mathbb{R},\]
we have
\[(\mathcal{C}_{P}+\boldsymbol{u})\cap\mathbb{Z}^{d+1}=\left(H^{\geq}_{\boldsymbol{a}_{1},c_{1}}\cap\cdots\cap H^{\geq}_{\boldsymbol{a}_{m},c_{m}}\right)\cap\mathbb{Z}^{d+1} \tag{3.2}\]
and
\[(\mathcal{C}_{P}+\boldsymbol{v})\cap\mathbb{Z}^{d+1}=\left(H^{\geq}_{ \boldsymbol{a}_{1},d_{1}}\cap\cdots\cap H^{\geq}_{\boldsymbol{a}_{m},d_{m}} \right)\cap\mathbb{Z}^{d+1}.\]
These prove (2) \(\Rightarrow\) (1).
We prove (1) \(\Rightarrow\) (2) by contraposition. Suppose \(\lceil\varphi_{\boldsymbol{n}}(\boldsymbol{u})\rceil\neq\lceil\varphi_{\boldsymbol{n}}(\boldsymbol{v})\rceil\); without loss of generality we may assume \(c_{1}<d_{1}\), and we prove \((\mathcal{C}_{P}+\boldsymbol{u})\cap\mathbb{Z}^{d+1}\neq(\mathcal{C}_{P}+\boldsymbol{v})\cap\mathbb{Z}^{d+1}\). In this case \(F=(\mathcal{C}_{P}+\boldsymbol{u})\cap H_{\boldsymbol{a}_{1},c_{1}}\) contains a \((d-1)\)-dimensional cone in \(H_{\boldsymbol{a}_{1},c_{1}}\), so it contains a lattice point by Lemma 3.4. On the other hand, since \(c_{1}<d_{1}\) we have \(\mathbb{Z}^{d+1}\cap(\mathcal{C}_{P}+\boldsymbol{v})\cap H_{\boldsymbol{a}_{1},c_{1}}=\varnothing\). These prove \((\mathcal{C}_{P}+\boldsymbol{u})\cap\mathbb{Z}^{d+1}\neq(\mathcal{C}_{P}+\boldsymbol{v})\cap\mathbb{Z}^{d+1}\).
_Remark 3.6_.: If \(N=\widetilde{N}(P)\) for some rational \(d\)-polytope \(P\subset\mathbb{R}^{d}\), then the set \(\Lambda_{N}\) has a special property that every element of \(\Lambda_{N}\) has dimension \(d+1\). Indeed, if \(R\in\Lambda_{N}\) and \(\mathbf{p}\in R\) then we have \(\mathbf{p}-\mathbf{v}\in R\) for all \(\mathbf{v}\in\operatorname{int}(\mathcal{C}_{P})\) that is sufficiently close to the origin, which tells that \(R\) has dimension \(d+1\). As we see in Example 3.9, this property does not hold for a general \(N\).
We have studied lattice points in translated cones \(\mathcal{C}_{P}+\mathbf{v}\), but we are actually interested in a special case when \(\mathbf{v}=(\mathbf{v}^{\prime},0)\) since this is the case which is related to translated lattice points enumerators. Below we describe when \(\mathcal{C}_{P}+(\mathbf{u},0)\) and \(\mathcal{C}_{P}+(\mathbf{v},0)\) have the same lattice points. Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. By the fundamental theorem on convex polytopes, there is the unique presentation
\[P=H^{\geq}_{\mathbf{a}_{1},b_{1}}\cap\cdots\cap H^{\geq}_{\mathbf{a}_{m},b_{m}}\]
such that
1. each \((\mathbf{a}_{i},b_{i})\in\mathbb{Z}^{d+1}\) is primitive, and
2. the presentation is irredundant.
The vectors \(\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\) are called **normal vectors** of \(P\). We define
\[N(P)=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\}.\]
We note that
\[\widetilde{N}(P)=\{(\mathbf{a}_{1},-b_{1}),\ldots,(\mathbf{a}_{m},-b_{m})\}\]
since if \(H\subset\mathbb{R}^{d}\) is a hyperplane defined by \(a_{1}x_{1}+\cdots+a_{d}x_{d}=b\) then the cone \(\mathcal{C}_{H}\) is contained in the hyperplane defined by \(a_{1}x_{1}+\cdots+a_{d}x_{d}=bx_{d+1}\). We write
\[\mathcal{A}_{P}=\mathcal{A}_{N(P)}\ \ \text{ and }\ \ \Lambda_{P}=\Lambda_{N(P)}.\]
The following statement is essentially a consequence of Proposition 3.5.
**Corollary 3.7**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope and let \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{d}\). The following conditions are equivalent._
1. \((\mathcal{C}_{P}+(\mathbf{u},0))\cap\mathbb{Z}^{d+1}=(\mathcal{C}_{P}+(\mathbf{v},0)) \cap\mathbb{Z}^{d+1}\)_._
2. \((\mathbf{u},0)\) _and_ \((\mathbf{v},0)\) _belong to the same upper region in_ \(\Lambda_{\widetilde{N}(P)}\)_._
3. \(\mathbf{u}\) _and_ \(\mathbf{v}\) _belong to the same upper region in_ \(\Lambda_{P}\)__
Proof.: The equivalence between (1) and (2) is Proposition 3.5. Let \(\mathbf{n}\) and \(\tilde{\mathbf{n}}\) be the sequence of normal vectors of \(P\) and \(\mathcal{C}_{P}\) respectively. The equivalence between (2) and (3) follows from the fact that \(\varphi_{\mathbf{n}}(\mathbf{a})=\varphi_{\tilde{\mathbf{n}}}(\mathbf{a},0)\) for any \(\mathbf{a}\in\mathbb{R}^{d}\).
We now discuss a consequence of Corollary 3.7 to translated lattice point enumerators and Ehrhart quasi-polynomials. Recall that \([\mathbf{v}]\) denotes the image of \(\mathbf{v}\in\mathbb{R}^{d}\) by the natural projection \(\mathbb{R}^{d}\to\mathbb{R}^{d}/\mathbb{Z}^{d}\).
**Theorem 3.8**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope and let \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{d}\)._
1. _If_ \([\mathbf{u}]\) _and_ \([\mathbf{v}]\) _belong to the same region in_ \(\Lambda_{P}/\mathbb{Z}^{d}\)_, then_ \(\operatorname{TL}_{P,\mathbf{u}}(t)=\operatorname{TL}_{P,\mathbf{v}}(t)\) _for all_ \(t\in\mathbb{Z}_{\geq 0}\)_._
2. _The set_ \(\{\operatorname{TL}_{P,\mathbf{w}}\mid\mathbf{w}\in\mathbb{R}^{d}\}\) _is a finite set._
Proof.: (1) Corollary 3.7 tells that if \([\mathbf{u}]\) and \([\mathbf{v}]\) belong to the same region in \(\Lambda_{P}/\mathbb{Z}^{d}\), then
\[\big{(}\mathcal{C}_{P}+(\mathbf{u},0)\big{)}\cap\mathbb{Z}^{d+1}=\big{(}\mathcal{C}_ {P}+(\mathbf{v},0)\big{)}\cap\mathbb{Z}^{d+1}+(\mathbf{w},0),\]
where \(\mathbf{w}\in\mathbb{Z}^{d}\) is the vector such that \(\mathbf{u}\) and \(\mathbf{v}+\mathbf{w}\) belong to the same region of \(\Lambda_{P}\). This clearly implies \(\mathrm{TL}_{P,\mathbf{u}}(t)=\mathrm{TL}_{P,\mathbf{v}}(t)\) for all \(t\in\mathbb{Z}_{\geq 0}\).
(2) Since \(\{R\in\Lambda_{P}\mid R\cap[0,1)^{d}\neq\varnothing\}\) is finite, \(\Lambda_{P}/\mathbb{Z}^{d}\) is a finite set. This fact and (1) prove the desired statement.
**Example 3.9**.: Consider the trapezoid \(T\) in Introduction. The set of normal vectors of \(T\) is \(N(T)=\{(1,0),(0,1),(0,-1),(-1,1)\}\). Then the set \(\Lambda_{T}/\mathbb{Z}^{2}\) consists of 4 elements with the following representatives:
\[V=U_{(0,0,0,0)}=\{(x,y)\in\mathbb{R}^{2}\mid-1<x\leq 0,\;-1<y\leq 0,\;-1<-y\leq 0,\;-1<-x+y\leq 0\},\] \[E=U_{(1,0,0,0)}=\{(x,y)\in\mathbb{R}^{2}\mid 0<x\leq 1,\;-1<y\leq 0,\;-1<-y\leq 0,\;-1<-x+y\leq 0\},\] \[R_{1}=U_{(1,1,0,0)}=\{(x,y)\in\mathbb{R}^{2}\mid 0<x\leq 1,\;0<y\leq 1,\;-1<-y\leq 0,\;-1<-x+y\leq 0\},\] \[R_{2}=U_{(1,1,0,1)}=\{(x,y)\in\mathbb{R}^{2}\mid 0<x\leq 1,\;0<y\leq 1,\;-1<-y\leq 0,\;0<-x+y\leq 1\}.\]
See Figure 3 for the visualization of \(\Lambda_{T}/\mathbb{Z}^{2}\) in the torus \(\mathbb{R}^{2}/\mathbb{Z}^{2}\). Note that \(V\) is a one point set. Theorem 3.8 tells that \(\mathrm{TL}_{T,\mathbf{v}}\) only depends on the region in \(\Lambda_{T}/\mathbb{Z}^{2}\) where \([\mathbf{v}]\) belongs, and the table below is a list of the polynomials \(\mathrm{TL}_{T,C}(t)\) in each region \(C\in\Lambda_{T}/\mathbb{Z}^{2}\).

\begin{tabular}{|c|c|} \hline region & polynomial \(\mathrm{TL}_{T,C}(t)\) \\ \hline \(V\) & \(\frac{3}{2}t^{2}+\frac{5}{2}t+1\) \\ \hline \(E\) & \(\frac{3}{2}t^{2}+\frac{3}{2}t\) \\ \hline \(R_{1}\) & \(\frac{3}{2}t^{2}+\frac{1}{2}t\) \\ \hline \(R_{2}\) & \(\frac{3}{2}t^{2}-\frac{1}{2}t\) \\ \hline \end{tabular}
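The labels \(\lceil\varphi_{\boldsymbol{n}}(\boldsymbol{v})\rceil\) from Proposition 3.5 and Corollary 3.7 are straightforward to compute. The Python sketch below (ours; the function name and the sample shift vectors are our own choices) computes the upper-region label of a shift vector for the trapezoid \(T\); shifts with equal labels have equal translated lattice points enumerators.

```python
from math import ceil
from fractions import Fraction as F

def upper_region_label(normals, v):
    """Return the ceiling vector that labels the upper region of v (Proposition 3.5 / Corollary 3.7)."""
    return tuple(ceil(sum(F(a) * x for a, x in zip(n, v))) for n in normals)

N_T = [(1, 0), (0, 1), (0, -1), (-1, 1)]            # normal vectors of the trapezoid T

# two shifts in the region R_1 = U_{(1,1,0,0)} get the same label ...
assert upper_region_label(N_T, (F(7, 10), F(3, 10))) == (1, 1, 0, 0)
assert upper_region_label(N_T, (F(9, 10), F(1, 10))) == (1, 1, 0, 0)
# ... while a shift in R_2 = U_{(1,1,0,1)} gets a different one
assert upper_region_label(N_T, (F(3, 10), F(7, 10))) == (1, 1, 0, 1)
```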
For a quasi-polynomial \(f\), let \(\mathrm{Const}(f)\) be the set of constituents of \(f\). Since the \(k\)th constituent of \(\mathrm{ehr}_{P+\mathbf{v}}\) is the \(k\)th constituent of \(\mathrm{TL}_{P,k\mathbf{v}}\), the second statement of the above theorem gives the following finiteness result for constituents of Ehrhart quasi-polynomials of translated polytopes.
**Corollary 3.10**.: _If \(P\subset\mathbb{R}^{d}\) is a rational \(d\)-polytope, then_
\[\#\big{(}\bigcup_{\mathbf{v}\in\mathbb{Q}^{d}}\mathrm{Const}\big{(}\,\mathrm{ehr }_{P+\mathbf{v}}\,\big{)}\big{)}<\infty.\]
### Polyhedral decompositions associated with hyperplane arrangements
Theorem 3.8 is slightly different from Theorem 1.1 in Introduction (indeed the partition \(\Lambda_{T}/\mathbb{Z}^{2}\) has 4 elements while \(\Delta_{T}/\mathbb{Z}^{2}\) has 6 cells), but it can be considered as a refined version of Theorem 1.1. We explain this in the rest of this section.
Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope and \(N(P)=\{\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{m}\}\). The arrangement \(\mathcal{A}_{P}\) gives a natural polyhedral decomposition of \(\mathbb{R}^{d}\) whose maximal open cells are connected components of \(\mathbb{R}^{d}\setminus\left(\bigcup_{H\in\mathcal{A}_{P}}H\right)\). We write \(\Delta_{P}\) for this polyhedral complex. Note that this \(\Delta_{P}\) is the same as the one defined in Introduction. Since any half line \(\boldsymbol{v}+\{k\boldsymbol{w}\mid k\in\mathbb{R}_{\geq 0}\}\), where \(\boldsymbol{v}\in\mathbb{R}^{d}\) and \(\boldsymbol{0}\neq\boldsymbol{w}\in\mathbb{R}^{d}\), must hit one of \(H_{\boldsymbol{a}_{i},k}\in\mathcal{A}_{P}\), each connected component of \(\mathbb{R}^{d}\setminus\left(\bigcup_{H\in\mathcal{A}_{P}}H\right)\) is a bounded set, so \(\Delta_{P}\) is actually a polytopal complex. By the definition of \(\mathcal{A}_{P}\), each open cell \(A\) of \(\Delta_{P}\) can be written in the form
\[A=A_{1}\cap A_{2}\cap\cdots\cap A_{m}\]
such that each \(A_{i}\) is either \(H_{\boldsymbol{a}_{i},k}\) or \(\{\boldsymbol{x}\in\mathbb{R}^{d}\mid k<(\boldsymbol{a}_{i},\boldsymbol{x})<k +1\}\). This means that each region in \(\Lambda_{P}\) can be written as a disjoint union of open cells in \(\Delta_{P}\), in particular, each element in \(\Lambda_{P}/\mathbb{Z}^{d}\) can be written as a disjoint union of elements in \(\Delta_{P}/\mathbb{Z}^{d}\). This proves Theorem 1.1.
**Example 3.11**.: Consider the trapezoid \(T\) in Introduction. As one can see from Figures 1 and 3, \(\Lambda_{T}/\mathbb{Z}^{2}\) consists of 4 elements \(V,E,R_{1},R_{2}\) and \(\Delta_{T}/\mathbb{Z}^{2}\) consists of \(6\) elements \(V_{1},E_{1},E_{2},E_{3},F_{1},F_{2}\). We have
\[V=V_{1},\;E=E_{3},\;R_{1}=E_{1}\cup E_{2}\cup F_{2},\;R_{2}=F_{1}.\]
While \(\Lambda_{P}\) and \(\Delta_{P}\) are different in general, there is a nice case that we have \(\Lambda_{P}=\Delta_{P}\). If the set of normal vectors of \(P\) is the set of the form \(\{\pm\boldsymbol{a}_{1},\ldots,\pm\boldsymbol{a}_{l}\}\) then each region \(R\in\Lambda_{P}\) must be a region of the form
\[R=\bigcap_{i=1}^{l}\big{\{}\boldsymbol{x}\in\mathbb{R}^{d}\mid c_{i}-1<(\boldsymbol{a}_{i},\boldsymbol{x})\leq c_{i}\;\;\text{and}\;c_{i}^{\prime}-1<(-\boldsymbol{a}_{i},\boldsymbol{x})\leq c_{i}^{\prime}\big{\}}.\]
Each non-empty set appearing in the intersection on the right-hand side equals either \(H_{\boldsymbol{a}_{i},c_{i}}\) or \(\{\boldsymbol{x}\in\mathbb{R}^{d}\mid c_{i}-1<(\boldsymbol{a}_{i},\boldsymbol{x})<c_{i}\}\), so we have \(\Lambda_{P}=\Delta_{P}\) in that case. To summarize, we get the following statement.
**Proposition 3.12**.: _If \(P\subset\mathbb{R}^{d}\) is a \(d\)-polytope with \(N(P)=-N(P)\) then \(\Lambda_{P}=\Delta_{P}\)._
A typical example of a polytope \(P\) satisfying \(N(P)=-N(P)\) is a centrally symmetric polytope \(P\) with \(P=-P\) (or more generally, a polytope \(P\) with \(P=-P+\boldsymbol{a}\) for some \(\boldsymbol{a}\in\mathbb{Z}^{d}\)).
_Remark 3.13_.: Each element of \(\Delta_{P}/\mathbb{Z}^{d}\) is an open ball, so \(\Delta_{P}/\mathbb{Z}^{d}\) is indeed a CW complex. To see that each element of \(\Delta_{P}/\mathbb{Z}^{d}\) is a ball, it suffices to check that for each \(C\in\Delta_{P}\) the restriction of \(\mathbb{R}^{d}\to\mathbb{R}^{d}/\mathbb{Z}^{d}\) to \(C\) is injective. This injectivity follows from Corollary 3.7 since, if we have \(\boldsymbol{u}=\boldsymbol{v}+\boldsymbol{a}\) for some \(\boldsymbol{u},\boldsymbol{v}\in C\) and \(\boldsymbol{0}\neq\boldsymbol{a}\in\mathbb{Z}^{d}\), then \(\mathcal{C}_{P}+(\boldsymbol{v},0)\) and \(\mathcal{C}_{P}+(\boldsymbol{u},0)=\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)}+(\boldsymbol{a},0)\) must have different sets of integer points.
## 4. Some examples
Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. We see in the previous section that \(\mathrm{TL}_{P,\boldsymbol{v}}\) only depends on the region \(C\) in \(\Delta_{P}/\mathbb{Z}^{d}\) (or \(\Lambda_{P}/\mathbb{Z}^{d}\)) with \([\boldsymbol{v}]\in C\), so for \(C\in\Delta_{P}/\mathbb{Z}^{d}\) (or \(C\in\Lambda_{P}/\mathbb{Z}^{d}\)) we write \(\mathrm{TL}_{P,C}=\mathrm{TL}_{P,\boldsymbol{v}}\) with \([\boldsymbol{v}]\in C\). In this section, we give a few examples of the computations of \(\mathrm{ehr}_{P+\boldsymbol{v}}\) using translated lattice points enumerators.
**Example 4.1**.: Consider the lattice parallelogram \(P\) with vertices \((0,0),(1,0),(1,3),(2,3)\). Then \(N(P)=\{(0,1),(0,-1),(3,-1),(-3,1)\}\) and \(\Delta_{P}/\mathbb{Z}^{2}\)\((=\Lambda_{P}/\mathbb{Z}^{2})\) consists of three vertices \(V_{1},V_{2},V_{3}\), \(6\) edges \(E_{1},E_{2},\ldots,E_{6}\) and three \(2\)-dimensional open cells \(R_{1},R_{2},R_{3}\) shown in Figure 4. Since \(P\) is a lattice polytope, translated lattice points enumerators of \(P\) are actually polynomials. The table below is a list of the polynomials \(\mathrm{TL}_{P,C}(t)\).
Now we compute \(\operatorname{ehr}_{P+\mathbf{v}}(t)\) when \(\mathbf{v}=(\frac{1}{2},\frac{1}{3})\) using this information. One can compute the constituents of \(\operatorname{ehr}_{P+\mathbf{v}}\) visually by drawing a line of direction \(\mathbf{v}\) in \(\mathbb{R}^{2}/\mathbb{Z}^{2}\) and plotting the points \([t\mathbf{v}]\) there. First, by drawing points \([t\mathbf{v}]\) on \(\mathbb{R}^{2}/\mathbb{Z}^{2}\), one can see
\[[t\mathbf{v}]\in\begin{cases}V_{1},&(t\equiv 0\;(\operatorname{mod}6)),\\ R_{3},&(t\equiv 1\;(\operatorname{mod}6)),\\ R_{1},&(t\equiv 2,4\;(\operatorname{mod}6)),\\ E_{5},&(t\equiv 3\;(\operatorname{mod}6)),\\ R_{2},&(t\equiv 5\;(\operatorname{mod}6)).\end{cases}\]
See Figure 4. Lemma 2.4 tells that the \(k\)th constituent of \(\operatorname{ehr}_{P+\mathbf{v}}\) is nothing but the \(k\)th constituent of \(\operatorname{TL}_{P,C}\) with \([k\mathbf{v}]\in C\). Hence we get
\[\operatorname{ehr}_{P+\mathbf{v}}(t)=\begin{cases}\operatorname{TL}_{P,V_{1}}(t)=3 t^{2}+2t+1,&(t\equiv 0\;(\operatorname{mod}6)),\\ \operatorname{TL}_{P,R_{3}}(t)=3t^{2},&(t\equiv 1\;(\operatorname{mod}6)),\\ \operatorname{TL}_{P,R_{1}}(t)=3t^{2},&(t\equiv 2,4\;(\operatorname{mod}6)),\\ \operatorname{TL}_{P,E_{5}}(t)=3t^{2}+t,&(t\equiv 3\;(\operatorname{mod}6)),\\ \operatorname{TL}_{P,R_{2}}(t)=3t^{2},&(t\equiv 5\;(\operatorname{mod}6)). \end{cases}\]
We remark that a parallelogram is a special case of a zonotope, and a nice combinatorial formula for the Ehrhart quasi-polynomial of a translated integral zonotope is given in [2, Proposition 3.1].
**Example 4.2**.: We give a more complicated example. Consider the rhombus \(Q\subset\mathbb{R}^{2}\) having vertices \((\pm 1,0)\) and \((0,\pm\frac{1}{2})\). Then the cell complex \(\Delta_{Q}/\mathbb{Z}^{2}\) consists of four vertices, eight edges and four \(2\)-dimensional cells. See Figure 5.
Since \(2Q\) is integral, \(\operatorname{TL}_{Q,C}\) is a quasi-polynomial having period \(2\) for each \(C\in\Delta_{Q}/\mathbb{Z}^{2}\). For a quasi-polynomial \(f\) with period \(2\), we write \(f=(f_{0},f_{1})\), where \(f_{k}\) is the \(k\)th constituent of \(f\). Below is the table of translated lattice points enumerators of \(Q\).
Consider \(\mathbf{u}=(\frac{1}{8},\frac{1}{8})\) and \(\mathbf{w}=(\frac{1}{3},\frac{1}{3})\). Then
\[[t\mathbf{u}]\in\begin{cases}V_{1},&(t\equiv 0\;(\mathrm{mod}\;8)),\\ F_{3},&(t\equiv 1,2\;(\mathrm{mod}\;8)),\\ F_{4},&(t\equiv 3,4,5\;(\mathrm{mod}\;8)),\\ F_{2},&(t\equiv 6,7\;(\mathrm{mod}\;8)),\end{cases}\quad\text{and}\quad[t\mathbf{w}]\in \begin{cases}V_{1},&(t\equiv 0\;(\mathrm{mod}\;3)),\\ E_{3},&(t\equiv 1\;(\mathrm{mod}\;3)),\\ E_{6},&(t\equiv 2\;(\mathrm{mod}\;3)).\end{cases}\]
Using that the \(k\)th constituent of \(\mathrm{ehr}_{Q+\mathbf{u}}\) equals the \(k\)th constituent of \(\mathrm{TL}_{Q,k\mathbf{u}}\), it follows that
\[\mathrm{ehr}_{Q+\mathbf{u}}(t)=\begin{cases}t^{2}+t+1,&(t\equiv 0\;(\mathrm{mod}\;8)), \\ t^{2},&(t\equiv 1,2,4,6,7\;(\mathrm{mod}\;8)),\\ t^{2}-1,&(t\equiv 3,5\;(\mathrm{mod}\;8)),\end{cases}\]
and
\[\mathrm{ehr}_{Q+\mathbf{w}}(t)=\begin{cases}t^{2}+t+1,&(t\equiv 0,3\;( \mathrm{mod}\;6)),\\ t^{2}+\frac{1}{2}t-\frac{1}{2},&(t\equiv 1,5\;(\mathrm{mod}\;6)),\\ t^{2}+\frac{1}{2}t,&(t\equiv 2,4\;(\mathrm{mod}\;6)).\end{cases}\]
One can also see from the second example that the minimum period of \(\mathrm{ehr}_{Q+\mathbf{w}}\) is not necessarily the denominator of \(\mathbf{w}\).
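Example 4.2 can likewise be verified by direct counting. The following sketch (ours, not part of the paper; function names are our own) enumerates the lattice points of \(tQ+t\boldsymbol{u}\) row by row, using the description \(Q=\{|x+2y|\leq 1,\,|x-2y|\leq 1\}\), and checks the displayed formula for \(\operatorname{ehr}_{Q+\boldsymbol{u}}\) with \(\boldsymbol{u}=(\frac{1}{8},\frac{1}{8})\).

```python
import math
from fractions import Fraction as F

def ehr_Q_shift(t, vx, vy):
    """#(lattice points of tQ + t(vx,vy)) for Q = {|x+2y| <= 1, |x-2y| <= 1}."""
    sx, sy = t * vx, t * vy
    total, y = 0, math.ceil(sy - F(t, 2))
    while y <= sy + F(t, 2):
        d = y - sy
        lo, hi = sx - t + 2*abs(d), sx + t - 2*abs(d)   # admissible x in this row
        if hi >= lo:
            total += math.floor(hi) - math.ceil(lo) + 1
        y += 1
    return total

def expected(t):                    # the formula displayed above
    r = t % 8
    if r == 0:
        return t*t + t + 1
    if r in (3, 5):
        return t*t - 1
    return t*t

u = (F(1, 8), F(1, 8))
for t in range(201):
    assert ehr_Q_shift(t, *u) == expected(t)
print("formula for ehr_{Q+u} verified for t = 0,...,200")
```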
## 5. Reciprocity in maximal regions
In this section, we explain that the quasi-polynomials \(\mathrm{TL}_{P,C}\) for maximal dimensional cells \(C\in\Delta_{P}/\mathbb{Z}^{d}\) have a reciprocity which comes from the reciprocity in Theorem 2.3.
Figure 5. Cell complex associated with \(Q\)
### Reciprocity

Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. The reciprocity in Theorem 2.3 says that, for any \(\boldsymbol{v}\in\mathbb{R}^{d}\), one has
\[\mathrm{TL}_{\mathrm{int}(P),\boldsymbol{v}}(t)=(-1)^{d}\mathrm{TL}_{P,- \boldsymbol{v}}(-t)\quad\text{for }t\in\mathbb{Z}_{>0}. \tag{5.1}\]
Since \(\mathcal{A}_{P}\) is centrally symmetric, that is \(-\mathcal{A}_{P}=\mathcal{A}_{P}\), we have \(R\in\Delta_{P}\) if and only if \(-R\in\Delta_{P}\). For each \(C\in\Delta_{P}/\mathbb{Z}^{d}\) with a representative \(R\in\Delta_{P}\), we write \(-C\) for the element of \(\Delta_{P}/\mathbb{Z}^{d}\) corresponding to the cell \(-R\). By Theorem 1.1 and (5.1) we have \(\mathrm{TL}_{\mathrm{int}(P),\boldsymbol{v}}(t)=\mathrm{TL}_{\mathrm{int}(P), \boldsymbol{u}}(t)\) when \([\boldsymbol{u}]\) and \([\boldsymbol{v}]\) belong to the same cell in \(\Delta_{P}/\mathbb{Z}^{d}\). Thus, for each \(C\in\Delta_{P}/\mathbb{Z}^{d}\), we write \(\mathrm{TL}_{\mathrm{int}(P),C}(t)=\mathrm{TL}_{\mathrm{int}(P),\boldsymbol{v }}(t)\) with \([\boldsymbol{v}]\in C\). Using this notation, (5.1) can be written in the following form.
**Proposition 5.1**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. For any \(C\in\Delta_{P}/\mathbb{Z}^{d}\), one has_
\[\mathrm{TL}_{\mathrm{int}(P),C}(t)=(-1)^{d}\mathrm{TL}_{P,-C}(-t)\ \ \text{ for }t\in\mathbb{Z}_{>0}.\]
Note that the above equation tells that \(\mathrm{TL}_{\mathrm{int}(P),C}\) is a quasi-polynomial on \(\mathbb{Z}_{>0}\).
**Example 5.2**.: Consider the trapezoid \(T\) in Introduction. Then we have
\[F_{1}=-F_{2},\ E_{1}=-E_{1},\ E_{2}=-E_{2},E_{3}=-E_{3},V_{1}=-V_{1}.\]
The following tables are lists of polynomials \(\mathrm{TL}_{T,C}(t)\) and \(\mathrm{TL}_{\mathrm{int}(T),C}(t)\).
\begin{tabular}{|c|c|} \hline cell & polynomial \(\mathrm{TL}_{T,C}(t)\) \\ \hline \(V_{1}\) & \(\frac{3}{2}t^{2}+\frac{5}{2}t+1\) \\ \hline \(E_{1},E_{2}\) & \(\frac{3}{2}t^{2}+\frac{1}{2}t\) \\ \hline \(E_{3}\) & \(\frac{3}{2}t^{2}+\frac{3}{2}t\) \\ \hline \(F_{1}\) & \(\frac{3}{2}t^{2}-\frac{1}{2}t\) \\ \hline \(F_{2}\) & \(\frac{3}{2}t^{2}+\frac{1}{2}t\) \\ \hline \end{tabular}

\begin{tabular}{|c|c|} \hline cell & polynomial \(\mathrm{TL}_{\mathrm{int}(T),C}(t)\) \\ \hline \(V_{1}\) & \(\frac{3}{2}t^{2}-\frac{5}{2}t+1\) \\ \hline \(E_{1},E_{2}\) & \(\frac{3}{2}t^{2}-\frac{1}{2}t\) \\ \hline \(E_{3}\) & \(\frac{3}{2}t^{2}-\frac{3}{2}t\) \\ \hline \(F_{1}\) & \(\frac{3}{2}t^{2}-\frac{1}{2}t\) \\ \hline \(F_{2}\) & \(\frac{3}{2}t^{2}+\frac{1}{2}t\) \\ \hline \end{tabular}
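These values can be double-checked numerically. The sketch below (ours; the representative shift vectors are our own choices) counts closed and interior lattice points of \(tT+\boldsymbol{v}\) and confirms the tables above together with Proposition 5.1, namely that \(\mathrm{TL}_{\mathrm{int}(T),C}(t)\) agrees with \((-1)^{2}\mathrm{TL}_{T,-C}(-t)\) for \(C=V_{1},E_{3},F_{1},F_{2}\).

```python
import math
from fractions import Fraction as F

def tl_T(t, v, interior=False):
    """#(lattice points of tT + v), or of its interior, for T = {0 <= y <= 1, x >= 0, x <= y + 1}."""
    sx, sy = v
    ok = (lambda a, b: a < b) if interior else (lambda a, b: a <= b)
    total = 0
    for y in range(math.floor(sy), math.ceil(t + sy) + 1):
        for x in range(math.floor(sx), math.ceil(2*t + sx) + 1):
            if ok(sy, y) and ok(y, t + sy) and ok(sx, x) and ok(x - sx, (y - sy) + t):
                total += 1
    return total

reps = {                      # representatives [v] in the cells of Delta_T/Z^2 (chosen by us)
    "V1": (F(0), F(0)),
    "E3": (F(1, 2), F(0)),
    "F1": (F(3, 10), F(7, 10)),
    "F2": (F(7, 10), F(3, 10)),
}
closed = {                    # TL_{T,C}(t), first table
    "V1": lambda t: F(3, 2)*t*t + F(5, 2)*t + 1,
    "E3": lambda t: F(3, 2)*t*t + F(3, 2)*t,
    "F1": lambda t: F(3, 2)*t*t - F(1, 2)*t,
    "F2": lambda t: F(3, 2)*t*t + F(1, 2)*t,
}
minus = {"V1": "V1", "E3": "E3", "F1": "F2", "F2": "F1"}   # C -> -C

for t in range(1, 25):
    for c, v in reps.items():
        assert tl_T(t, v) == closed[c](t)                          # first table
        assert tl_T(t, v, interior=True) == closed[minus[c]](-t)   # Proposition 5.1 / second table
print("tables and reciprocity verified for t = 1,...,24")
```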
### Maximal cells
Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope with the unique irredundant presentation
\[P=H^{\geq}_{\boldsymbol{a}_{1},b_{1}}\cap\cdots\cap H^{\geq}_{\boldsymbol{a} _{m},b_{m}}. \tag{5.2}\]
We write \(F_{i}=P\cap H_{\boldsymbol{a}_{i},b_{i}}\) for the facet of \(P\) which lies in the hyperplane \(H_{\boldsymbol{a}_{i},b_{i}}\). The next lemma follows from Lemmas 3.2 and 3.4.
**Lemma 5.3**.: _With the same notation as above, for \(\boldsymbol{v}\in\mathbb{R}^{d}\), the cone \(\mathcal{C}_{F_{i}}+(\boldsymbol{v},0)\) contains a lattice point if and only if \(\boldsymbol{v}\in H_{\boldsymbol{a}_{i},k}\) for some \(k\in\mathbb{Z}\)._
The lemma tells that the cone \(\mathcal{C}_{P}+(\boldsymbol{v},0)\) has no lattice points in its boundary if \(\boldsymbol{v}\in\mathbb{R}^{d}\setminus\bigcup_{H\in\mathcal{A}_{P}}H\), equivalently, if \([\boldsymbol{v}]\) belongs to a \(d\)-dimensional cell of \(\Delta_{P}/\mathbb{Z}^{d}\). Hence we have
**Corollary 5.4**.: _If \(P\subset\mathbb{R}^{d}\) is a rational \(d\)-polytope and \(C\) is a \(d\)-dimensional cell of \(\Delta_{P}/\mathbb{Z}^{d}\), then_
\[\mathrm{TL}_{P,C}(t)=(-1)^{d}\mathrm{TL}_{P,-C}(-t)\ \ \text{ for all }t\in\mathbb{Z}_{\geq 0}.\]
Proof.: Let \(\boldsymbol{v}\in\mathbb{R}^{d}\) such that \([\boldsymbol{v}]\in C\). Then \(\boldsymbol{v}\not\in H\) for any \(H\in\mathcal{A}_{P}\), which implies that the cone \(\mathcal{C}_{P}+(\boldsymbol{v},0)\) has no lattice points in its boundary. Thus by Proposition 5.1 we have
\[\mathrm{TL}_{P,C}(t)=\mathrm{TL}_{\mathrm{int}(P),C}(t)=(-1)^{d}\mathrm{TL}_{P,- C}(-t)\quad\text{for }t\in\mathbb{Z}_{>0}.\]
Since \(\mathrm{TL}_{P,C}\) and \(\mathrm{TL}_{P,-C}\) are quasi-polynomials on \(\mathbb{Z}_{\geq 0}\), this implies the desired equality.
The above reciprocity has a special meaning for centrally symmetric polytopes. Looking at the quasi-polynomials \(\mathrm{TL}_{Q,F_{i}}\) in Example 4.2, one may notice that each constituent is a polynomial in \(t^{2}\). In other words, the linear term \(t\) vanishes. We explain the reason for this. We first recall the following obvious fact.
**Lemma 5.5**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational polytope with \(-P=P+\mathbf{u}\) for some \(\mathbf{u}\in\mathbb{Z}^{d}\). Then \(\mathrm{TL}_{P,\mathbf{v}}(t)=\mathrm{TL}_{P,-\mathbf{v}}(t)\) for any \(\mathbf{v}\in\mathbb{R}^{d}\) and \(t\in\mathbb{Z}_{\geq 0}\)._
**Theorem 5.6**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope with \(-P=P+\mathbf{u}\) for some \(\mathbf{u}\in\mathbb{Z}^{d}\) and let \(C\in\Delta_{P}/\mathbb{Z}^{d}\) be a \(d\)-dimensional cell. Let \(f(t)\) be the \(k\)th constituent of \(\mathrm{TL}_{P,C}\) and let \(g(t)\) be the \((-k)\)th constituent of \(\mathrm{TL}_{P,C}\). Then_
\[f(t)=(-1)^{d}g(-t).\]
Proof.: Corollary 5.4 and Lemma 5.5 tell
\[\mathrm{TL}_{P,C}(t)=(-1)^{d}\mathrm{TL}_{P,-C}(-t)=(-1)^{d}\mathrm{TL}_{P,C}( -t)\ \ \text{ for all }t\in\mathbb{Z}_{\geq 0}.\]
By considering the \(k\)th constituent in the above equality, we get the desired assertion.
If a polynomial \(f(t)\) of degree \(d\) satisfies \(f(t)=(-1)^{d}f(-t)\), then it must be a polynomial in \(t^{2}\) when \(d\) is even and \(t\) times a polynomial in \(t^{2}\) when \(d\) is odd. Hence we get the following corollary, which explain a reason why we get polynomials in \(t^{2}\) in Example 4.2.
**Corollary 5.7**.: _With the same notation as in Theorem 5.6,_
1. _the_ \(0\)_th constituent of_ \(\mathrm{TL}_{P,C}(t)\) _is either a polynomial in_ \(\mathbb{Q}[t^{2}]\) _or_ \(t\mathbb{Q}[t^{2}]\)_;_
2. _if_ \(2P\) _is integral, then the_ \(1\)_st constituent of_ \(\mathrm{TL}_{P,C}(t)\) _is either a polynomial in_ \(\mathbb{Q}[t^{2}]\) _or_ \(t\mathbb{Q}[t^{2}]\)_._
Note that when \(2P\) is integral the quasi polynomial \(\mathrm{TL}_{P,C}\) has period 2, so its \(1\)st constituent equals its \((-1)\)th constituent.
## 6. Translated lattice points enumerators determine polytopes
If is clear that if \(P=Q+\mathbf{w}\) for some integer vector \(\mathbf{w}\), then \(\mathrm{TL}_{P,\mathbf{v}}=\mathrm{TL}_{Q,\mathbf{v}}\) for all vectors \(\mathbf{v}\). The goal of this section is to prove the converse of this simple fact, which is equivalent to Theorem 1.3 in Introduction by Lemma 2.41.
Footnote 1: The condition “\(\mathrm{TL}_{P,\mathbf{v}}=\mathrm{TL}_{Q,\mathbf{v}}\) for all \(\mathbf{v}\in\mathbb{Q}^{d}\)” is equivalent to the condition “\(\mathrm{TL}_{P,\mathbf{v}}=\mathrm{TL}_{Q,\mathbf{v}}\) for all \(\mathbf{v}\in\mathbb{R}^{d}\)”.
**Theorem 6.1**.: _Let \(P\) and \(Q\) be rational \(d\)-polytopes in \(\mathbb{R}^{d}\). If \(\mathrm{TL}_{P,\mathbf{v}}(t)=\mathrm{TL}_{Q,\mathbf{v}}(t)\) for all \(\mathbf{v}\in\mathbb{R}^{d}\) and \(t\in\mathbb{Z}_{\geq 0}\) then \(P=Q+\mathbf{w}\) for some \(\mathbf{w}\in\mathbb{Z}^{d}\)._
To simplify notation, we use the notation
\[\Gamma_{P}=\big{\{}\big{(}\mathbf{v},\mathrm{TL}_{P,\mathbf{v}}(t)\big{)}\in\mathbb{R} ^{d}\times\mathcal{QP}\mid\mathbf{v}\in\mathbb{R}^{d}\big{\}},\]
where \(\mathcal{QP}\) is the set of all quasi-polynomials in \(t\). Thus, what we want to prove is that \(\Gamma_{P}=\Gamma_{Q}\) implies \(P=Q+\mathbf{w}\) for some \(\mathbf{w}\in\mathbb{Z}^{d}\).
To prove the theorem, we first recall Minkowski's theorem, which tells that normal vectors and volumes of facets determine a polytope. Let \(P\subset\mathbb{R}^{d}\) be a \(d\)-polytope with irredundant presentation \(P=\bigcap_{i=1}^{m}H^{\geq}_{\mathbf{a}_{i},b_{i}}\), where \(\|\mathbf{a}_{i}\|=1\), and let \(F_{i}=P\cap H_{\mathbf{a}_{i},b_{i}}\) be the facet of \(P\) which lies in the hyperplane \(H_{\mathbf{a}_{i},b_{i}}\). We write
\[\mathcal{M}(P)=\big{\{}\big{(}\mathbf{a}_{1},\mathrm{vol}(F_{1})\big{)},\ldots,(\bm {a}_{m},\mathrm{vol}(F_{m})\big{)}\big{\}},\]
where \(\operatorname{vol}(F_{i})\) is the relative volume of \(F_{i}\). The following result is known as Minkowski's theorem (see [1, SS6.3 Theorem 1]).
**Theorem 6.2** (Minkowski).: _If \(P\) and \(Q\) are \(d\)-polytopes in \(\mathbb{R}^{d}\) with \(\mathcal{M}(P)=\mathcal{M}(Q)\), then \(Q=P+\boldsymbol{v}\) for some \(\boldsymbol{v}\in\mathbb{R}^{d}\)._
To apply Minkowski's theorem in our situation, we will show that we can know volumes of facets of a polytope from translated lattice points enumerator on codimension \(1\) cells of \(\Delta_{P}/\mathbb{Z}^{d}\). We say that a point \(\boldsymbol{v}\in H_{\boldsymbol{a},k}\in\mathcal{A}_{P}\) is **generic** in \(\mathcal{A}_{P}\) if \(\boldsymbol{v}\not\in H\) for any \(H\in\mathcal{A}_{P}\) with \(H\neq H_{\boldsymbol{a},k}\). Note that \(\boldsymbol{v}\in H_{\boldsymbol{a},k}\) is generic if and only if it is contained in a \((d-1)\)-dimensional cell of \(\Delta_{P}\).
**Lemma 6.3**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope, \(\boldsymbol{a}\in N(P)\), and let \(F\) be a facet of \(P\) corresponding to the normal vector \(\boldsymbol{a}\). If \(\boldsymbol{v}\in H_{\boldsymbol{a},k}\) is generic in \(\mathcal{A}_{P}\), then for a sufficiently small number \(\varepsilon>0\), one has_
1. \(\operatorname{TL}_{P,\boldsymbol{v}}-\operatorname{TL}_{P,\boldsymbol{v}+ \varepsilon\boldsymbol{a}}=\operatorname{TL}_{F,\boldsymbol{v}}\neq 0\)_._
2. \(\operatorname{TL}_{P,\boldsymbol{v}}-\operatorname{TL}_{P,\boldsymbol{v}- \varepsilon\boldsymbol{a}}=0\) _if there are no_ \(k\in\mathbb{R}_{>0}\) _such that_ \(-k\boldsymbol{a}\in N(P)\)_._
Proof.: The fact that \(\operatorname{TL}_{F,\boldsymbol{v}}(t)\neq 0\) for some \(t\) follows from Lemma 5.3. We first assume that \(F\) is the only facet of \(P\) that is orthogonal to \(\boldsymbol{a}\) and prove (1) and (2). In this case, by the assumption and Lemma 5.3, \(\mathcal{C}_{F}+(\boldsymbol{v},0)\) is the only facet of \(\mathcal{C}_{P}+(\boldsymbol{v},0)\) that contains lattice points, so
\[\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)}\cap\mathbb{Z}^{d+1}=\big{(} \mathrm{int}\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)}\cup\big{(} \mathcal{C}_{F}+(\boldsymbol{v},0)\big{)}\big{)}\cap\mathbb{Z}^{d+1}.\]
Let \(H_{\boldsymbol{b},0}\subset\mathbb{R}^{d+1}\) be the hyperplane that contains \(\mathcal{C}_{F}\). Then, since \(\mathcal{C}_{F}\cap(H_{\boldsymbol{b},0}^{\geq}+s(\boldsymbol{a},0))=\varnothing\) and \(\mathcal{C}_{F}\subset H_{\boldsymbol{b},0}^{\geq}-s(\boldsymbol{a},0)\) for any \(s>0\), by Lemma 3.3 there is an \(\varepsilon>0\) such that
\[\big{(}\mathcal{C}_{P}+(\boldsymbol{v}+\varepsilon\boldsymbol{a},0)\big{)} \cap\mathbb{Z}^{d+1}=\big{(}\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)} \cap\mathbb{Z}^{d+1}\big{)}\setminus\big{(}\big{(}\mathcal{C}_{F}+( \boldsymbol{v},0)\big{)}\cap\mathbb{Z}^{d+1}\big{)}\]
and
\[\big{(}\mathcal{C}_{P}+(\boldsymbol{v}-\varepsilon\boldsymbol{a},0)\big{)} \cap\mathbb{Z}^{d+1}=\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)}\cap \mathbb{Z}^{d+1}.\]
These prove (1) and (2).
Second, we assume that there is a facet \(G\neq F\) of \(P\) that is orthogonal to \(\boldsymbol{a}\). This condition is equivalent to the condition that there is \(k\in\mathbb{R}_{>0}\) such that \(-k\boldsymbol{a}\in N(P)\). Also, the normal vector corresponding to the facet \(G\) must be equal to \(-k\boldsymbol{a}\) and
\[\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)}\cap\mathbb{Z}^{d+1}=\big{(} \mathrm{int}\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)}\cup\big{(} \mathcal{C}_{F}+(\boldsymbol{v},0)\big{)}\cup\big{(}\mathcal{C}_{G}+( \boldsymbol{v},0)\big{)}\big{)}\cap\mathbb{Z}^{d+1}.\]
In this case \(\mathcal{C}_{F}\cap(H_{\boldsymbol{b},0}^{\geq}+s(\boldsymbol{a},0))=\varnothing\) and \(\mathcal{C}_{G}\subset H_{\boldsymbol{b},0}^{\geq}+s(\boldsymbol{a},0)\) for a sufficiently small \(s>0\), so again by Lemma 3.3 there is an \(\varepsilon>0\) such that
\[\big{(}\mathcal{C}_{P}+(\boldsymbol{v}+\varepsilon\boldsymbol{a},0)\big{)} \cap\mathbb{Z}^{d+1}=\big{(}\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)} \cap\mathbb{Z}^{d+1}\big{)}\setminus\big{(}\big{(}\mathcal{C}_{F}+(\boldsymbol {v},0)\big{)}\cap\mathbb{Z}^{d+1}\big{)},\]
proving (1).
**Lemma 6.4**.: _If \(P\) and \(Q\) are rational \(d\)-polytopes in \(\mathbb{R}^{d}\) with \(\Gamma_{P}=\Gamma_{Q}\), then \(\mathcal{M}(P)=\mathcal{M}(Q)\)._
Proof.: What we must prove is that the set \(\Gamma_{P}\) determines the directions of inner normal vectors of \(P\) as well as volumes of the facets of \(P\).
By Lemma 6.3(1), \(\boldsymbol{v}\in\mathbb{R}^{d}\setminus\bigcup_{H\in\mathcal{A}_{P}}H\) if and only if there is an open ball \(B\ni\boldsymbol{v}\) such that \(\operatorname{TL}_{P,\boldsymbol{v}}=\operatorname{TL}_{P,\boldsymbol{u}}\) for all \(\boldsymbol{u}\in B\). This tells that the set \(\Gamma_{P}\) determines \(\mathcal{A}_{P}\), and the definition of \(\mathcal{A}_{P}\) tells that \(\mathcal{A}_{P}\) determines the set \(\overline{N}=\{\pm(\boldsymbol{a}/\|\boldsymbol{a}\|)\mid\boldsymbol{a}\in N (P)\}\). For each \(\boldsymbol{a}\in\overline{N}\), Lemma 6.3 also tells \(k\boldsymbol{a}\in N(P)\) for some \(k>0\) if and only if, for a generic \(\boldsymbol{v}\in H_{\boldsymbol{a},0}\in\mathcal{A}_{P}\), we have \(\operatorname{TL}_{P,\boldsymbol{v}}\neq\operatorname{TL}_{P,\boldsymbol{v}+ \varepsilon\boldsymbol{a}}\) for a sufficiently small number \(\varepsilon>0\). Hence the set \(\Gamma_{P}\) determines \(\{(\boldsymbol{a}/\|\boldsymbol{a}\|)\mid\boldsymbol{a}\in N(P)\}\).
It remains to prove that \(\Gamma_{P}\) determines the volumes of facets of \(P\). Let \(F\) be a facet of \(P\) and let \(\boldsymbol{a}\in N(P)\) be the normal vector associated with the facet \(F\). For any \(\boldsymbol{v}\in\mathbb{R}^{d}\), let \(\mathrm{TL}^{0}_{P,\boldsymbol{v}}(t)\) denote the \(0\)th constituent of \(\mathrm{TL}_{P,\boldsymbol{v}}\), which must be a degree \(d\) polynomial whose leading coefficient it the normalized volume of \(P\). If we take a generic point \(\boldsymbol{v}\in H_{\boldsymbol{a},0}\) in \(\mathcal{A}_{P}\), then by Lemma 6.3 we have
\[\lim_{t\to\infty}\tfrac{1}{t^{d-1}}\mathrm{TL}^{0}_{F,\boldsymbol{v}}(t)=\lim_ {t\to\infty}\tfrac{1}{t^{d-1}}\big{(}\mathrm{TL}^{0}_{P,\boldsymbol{v}}(t)- \mathrm{TL}^{0}_{P,\boldsymbol{v}+\varepsilon\boldsymbol{a}}(t)\big{)},\]
where \(\varepsilon\) is a sufficiently small integer. Since \(\mathrm{TL}^{0}_{P,\boldsymbol{v}}(t)-\mathrm{TL}^{0}_{P,\boldsymbol{v}+ \varepsilon\boldsymbol{a}}(t)\) is a polynomial of degree \(\leq d-1\), this limit exists and must be equal to the relative volume of \(F\) since \(\mathrm{TL}_{F,\boldsymbol{v}}\) can be considered as a translated lattice points enumerator in the Euclidian space \(H_{\boldsymbol{a},0}\cong\mathbb{R}^{d-1}\) with the lattice \(H_{\boldsymbol{a},0}\cap\mathbb{Z}^{d}\cong\mathbb{Z}^{d-1}\). Thus volumes of facets of \(P\) are determined by \(\Gamma_{P}\).
Let \(\pi_{i}:\mathbb{R}^{d}\to\mathbb{R}^{d-1}\) be the projection given by
\[\pi_{i}(x_{1},\ldots,x_{d})=(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{d}).\]
We next show that translated lattice points enumerators of \(\pi_{i}(P)\) can be determined from those of \(P\). Let \(P\subset\mathbb{R}^{d}\) be a \(d\)-polytope. We define
\[\partial_{i}^{-}P=\{\boldsymbol{x}\in P\mid\boldsymbol{x}\not\in P+ \varepsilon\mathbf{e}_{i}\text{ for any }\varepsilon>0\},\]
where \(\mathbf{e}_{1},\ldots,\mathbf{e}_{d}\) are the standard vectors of \(\mathbb{R}^{d}\). Intuitively, \(\partial_{i}^{-}P\) is the set of points in \(P\) which is visible from \(-\infty\mathbf{e}_{i}\). Indeed, \(\partial_{i}^{-}P\) has the following description: Let \(\mathrm{Facets}(P)\) be the set of facets of \(P\), and, for each \(F\in\mathrm{Facets}(P)\), let \(\boldsymbol{a}_{F}\in N(P)\) be the inner normal vector associated with \(F\). An alternative description of \(\partial_{i}^{-}P\) is
\[\partial_{i}^{-}P=\bigcup_{F\in\mathrm{Facets}(P),\ (\boldsymbol{a}_{F}, \mathbf{e}_{i})>0}F. \tag{6.1}\]
See Figure 6.
**Lemma 6.5**.: _With the same notation as above, for any \(\boldsymbol{v}\in\mathbb{R}^{d}\), there is an \(\varepsilon_{i,\boldsymbol{v}}>0\) such that_
\[(tP+\boldsymbol{v})\cap\mathbb{Z}^{d}=\Big{(}\big{(}t(\partial_{i}^{-}P)+ \boldsymbol{v}\big{)}\bigsqcup\big{(}tP+\boldsymbol{v}+\varepsilon_{i, \boldsymbol{v}}\mathbf{e}_{i}\big{)}\Big{)}\cap\mathbb{Z}^{d}\ \ \text{for all }t\in\mathbb{Z}_{\geq 0}.\]
Proof.: By Lemma 3.3 there is an \(\varepsilon>0\) such that
\[\big{(}\mathcal{C}_{P}+(\boldsymbol{v},0)\big{)}\cap\mathbb{Z}^{ d+1}\] \[=\Big{(}\big{(}\mathcal{C}_{P}+(\boldsymbol{v}+\varepsilon \mathbf{e}_{i},0)\big{)}\bigsqcup\big{\{}\boldsymbol{x}+(\boldsymbol{v},0)\in \mathcal{C}_{P}+(\boldsymbol{v},0)\mid\boldsymbol{x}\not\in\mathcal{C}_{P}+s( \mathbf{e}_{i},0)\ \ \text{for any }s>0\big{\}}\Big{)}\cap\mathbb{Z}^{d+1}.\]
Cutting the above equation by the hyperplane \(x_{d+1}=t\), we get the desired equality.
We define \(\operatorname{TL}_{P,\mathbf{v}}^{(-i)}(t)\) by
\[\operatorname{TL}_{P,\mathbf{v}}^{(-i)}(t)=\#\big{(}\big{(}t(\partial_{i}^{-}P)+\bm {v}\big{)}\cap\mathbb{Z}^{d}\big{)}.\]
Lemma 6.5 tells that
\[\operatorname{TL}_{P,\mathbf{v}}^{(-i)}(t)=\operatorname{TL}_{P,\mathbf{v}}(t)- \operatorname{TL}_{P,\mathbf{v}+\varepsilon_{i,\mathbf{v}}\mathbf{e}_{i}}(t),\]
where \(\varepsilon_{i,\mathbf{v}}\) is a number given in Lemma 6.5. We note that the function \(\operatorname{TL}_{P,\mathbf{v}}^{(-i)}\) is zero for almost all \(\mathbf{v}\in\mathbb{R}^{d}\). Indeed, we have the following statement.
**Lemma 6.6**.: _With the same notation as above, \(\operatorname{TL}_{P,\mathbf{v}}^{(-i)}\) is not a zero function only when there is \(\mathbf{a}\in N(P)\) and \(k\in\mathbb{Z}\) such that \(\mathbf{v}\in H_{\mathbf{a},k}\) and \((\mathbf{a},\mathbf{e}_{i})>0\)._
Proof.: We have \(\operatorname{TL}_{P,\mathbf{v}}^{(-i)}(t)\neq 0\) for all \(t\in\mathbb{Z}_{\geq 0}\) only when
\[\big{(}\mathcal{C}_{\partial_{i}^{-}P}+(\mathbf{v},0)\big{)}\cap\mathbb{Z}^{d+1} \neq\varnothing.\]
By (6.1) and Lemma 5.3 this condition is equivalent to \(\mathbf{v}\not\in H_{\mathbf{a},k}\) for some \(\mathbf{a}\in N(P)\) and \(k\in\mathbb{Z}\) with \((\mathbf{a},\mathbf{e}_{i})>0\).
The next proposition shows that translated lattice points enumerators of \(\pi_{i}(P)\) can be determined from those of \(P\).
**Proposition 6.7**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. For any \(\mathbf{v}\in\mathbb{R}^{d}\) and \(t\in\mathbb{Z}_{\geq 0}\), one has_
\[\operatorname{TL}_{\pi_{i}(P),\pi_{i}(\mathbf{v})}(t)=\sum_{0\leq s<1,\; \operatorname{TL}_{P,\mathbf{v}+s\mathbf{e}_{i}}^{(-i)}(t)\neq 0}\operatorname{TL}_{P, \mathbf{v}+s\mathbf{e}_{i}}^{(-i)}(t).\]
We note that the RHS in the proposition is a finite sum by Lemma 6.6 since the segment \(\{\mathbf{v}+s\mathbf{e}_{i}\mid 0\leq s<1\}\) meets only finitely many hyperplanes in \(\mathcal{A}_{P}\). See Figure 7 for a visualization of the proposition.
Proof.: We may assume \(i=d\). Fix \(t\in\mathbb{Z}_{\geq 0}\) and a lattice point \(\mathbf{u}=(u_{1},\dots,u_{d-1})\in\pi_{d}(tP+\mathbf{v})\). It suffices to prove that there is a unique integer \(r\in\mathbb{Z}\) such that \((\mathbf{u},r)\in\bigcup_{0\leq s<1}(t(\partial_{d}^{-}P)+\mathbf{v}+s\mathbf{e}_{d})\).
(Existence) By the assumption, there is an \(\alpha\in\mathbb{R}\) such that
\[(\mathbf{u},\alpha)\in t(\partial_{d}^{-}P)+\mathbf{v}.\]
Figure 7. Lattice points in the projection.
Then, \(r=\lceil\alpha\rceil\) satisfies the desired condition since \((\boldsymbol{u},\lceil\alpha\rceil)\) is contained in \(t(\partial_{d}^{-}P)+\boldsymbol{v}+(\lceil\alpha\rceil-\alpha)\mbox{\rm e}_{d}\).
(Uniqueness) The uniqueness of \(r\) follows from the fact that, for any \((\boldsymbol{u},\alpha),(\boldsymbol{u},\alpha^{\prime})\) which are contained in \(\bigcup_{0\leq s<1}(t(\partial_{n}^{-}P)+\boldsymbol{v}+s\mbox{\rm e}_{d})\), we have \(|\alpha-\alpha^{\prime}|<1\).
We will also use the following variation of Proposition 6.7. For a \(d\)-polytope \(P\), let
\[\partial_{i}^{+}P=\{\boldsymbol{x}\in P\mid\boldsymbol{x}\not\in P-\varepsilon \mbox{\rm e}_{i}\mbox{ for any }\varepsilon>0\}\]
and
\[\operatorname{TL}_{P,\boldsymbol{v}}^{(+i)}(t)=\#\big{(}\big{(}t(\partial_{i} ^{+}P)+\boldsymbol{v}\big{)}\cap\mathbb{Z}^{d}\big{)}\ \ \mbox{for }t\in\mathbb{Z}_{\geq 0}.\]
The next statement can be proved by the same argument given in the proof of Proposition 6.7.
**Proposition 6.8**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. For any \(\boldsymbol{v}\in\mathbb{R}^{d}\) and \(t\in\mathbb{Z}_{\geq 0}\), one has_
\[\operatorname{TL}_{\pi_{i}(P),\pi_{i}(\boldsymbol{v})}(t)=\sum_{0\leq s<1, \ \operatorname{TL}_{P,\boldsymbol{v}-s\mbox{\rm e}_{i}}^{(+)}(t)\neq 0} \operatorname{TL}_{P,\boldsymbol{v}-s\mbox{\rm e}_{i}}^{(-i)}(t).\]
We now prove Theorem 6.1.
Proof of Theorem 6.1.: We use induction on \(d\). Suppose \(d=1\) and \(\Gamma_{P}=\Gamma_{Q}\). Then \(P\) and \(Q\) are line segments in \(\mathbb{R}\) such that \(P+a\) and \(Q+a\) contains the same number of integers for any \(a\in\mathbb{R}\). It is easy to see that this condition implies \(P=Q+v\) for some \(v\in\mathbb{Z}\).
Assume \(d>1\) and \(\Gamma_{P}=\Gamma_{Q}\). By Lemma 6.4, we already know \(Q=P+\boldsymbol{v}\) for some \(\boldsymbol{v}\in\mathbb{R}^{d}\). By Proposition 6.7 and the assumption \(\Gamma_{P}=\Gamma_{Q}\), we have \(\Gamma_{\pi_{1}(P)}=\Gamma_{\pi_{1}(Q)}\) and \(\Gamma_{\pi_{2}(P)}=\Gamma_{\pi_{2}(Q)}\). Since \(Q=P+\boldsymbol{v}\), the induction hypothesis tells that \(\pi_{1}(\boldsymbol{v}),\pi_{2}(\boldsymbol{v})\in\mathbb{Z}^{d-1}\) which guarantees \(\boldsymbol{v}\in\mathbb{Z}^{d}\).
## 7. Group symmetry
In the previous section, we see that translated lattice points enumerators determine polytopes up to translations by integer vectors. In this last section, we study translated lattice points enumerators of polytopes having some symmetries, in particular, we prove Theorem 1.5 in Introduction.
Let \(\operatorname{GL}_{d}(\mathbb{Z})\) be the subgroup of the general linear group \(\operatorname{GL}_{d}(\mathbb{R})\) consisting of all elements \(g\in\operatorname{GL}_{d}(\mathbb{R})\) with \(g(\mathbb{Z}^{d})=\mathbb{Z}^{d}\). If we identify each element of \(\operatorname{GL}_{d}(\mathbb{R})\) with \(d\times d\) non-singular matrix in a standard way, then \(\operatorname{GL}_{d}(\mathbb{Z})\) may be considered as the set of unimodular matrices. For a rational \(d\)-polytope \(P\subset\mathbb{R}^{d}\), we define
\[\operatorname{Aut}_{\mathbb{Z}}(P)=\{g\in\operatorname{GL}_{d}(\mathbb{Z}) \mid g(P)=P+\boldsymbol{v}\mbox{ for some }\boldsymbol{v}\in\mathbb{Z}^{d}\}\]
and
\[\operatorname{Aut}_{\mathbb{Z}}(\Gamma_{P})=\{g\in\operatorname{GL}_{d}( \mathbb{Z})\mid\operatorname{TL}_{P,g(\boldsymbol{v})}=\operatorname{TL}_{P, \boldsymbol{v}}\mbox{ for all }\boldsymbol{v}\in\mathbb{R}^{d}\}.\]
**Proposition 7.1**.: _For a rational \(d\)-polytope \(P\subset\mathbb{R}^{d}\), one has \(\operatorname{Aut}_{\mathbb{Z}}(\Gamma_{P})=\operatorname{Aut}_{\mathbb{Z}}(P)\)._
Proof.: We first prove "\(\subset\)". Let \(g\in\operatorname{Aut}_{\mathbb{Z}}(\Gamma_{P})\). Then, for any \(\boldsymbol{v}\in\mathbb{R}^{d}\), we have
\[\operatorname{TL}_{P,\boldsymbol{v}}(t)=\operatorname{TL}_{P,g(\boldsymbol{v}) }(t)=\#\big{(}\big{(}tP+g(\boldsymbol{v})\big{)}\cap\mathbb{Z}^{d}\big{)}= \#\big{(}(tg^{-1}(P)+\boldsymbol{v})\cap\mathbb{Z}^{d}\big{)}=\operatorname{ TL}_{g^{-1}(P),\boldsymbol{v}}(t)\]
for all \(t\in\mathbb{Z}_{\geq 0}\). Thus we have \(\Gamma_{P}=\Gamma_{g^{-1}(P)}\) so \(P=g^{-1}(P)+\boldsymbol{v}\) for some \(\boldsymbol{v}\in\mathbb{Z}^{d}\) by Theorem 6.1. Then \(g\in\operatorname{Aut}_{\mathbb{Z}}(P)\) since \(P=g(P)-g(\boldsymbol{v})\) and \(g(\boldsymbol{v})\in\mathbb{Z}^{d}\).
We next prove "\(\supset\)". Let \(g\in\operatorname{Aut}_{\mathbb{Z}}(P)\). Then for any \(\boldsymbol{v}\in\mathbb{R}^{d}\), we have
\[\#\big{(}\big{(}tP+g(\boldsymbol{v})\big{)}\cap\mathbb{Z}^{d}\big{)}=\#\big{(} \big{(}tg(P)+g(\boldsymbol{v})\big{)}\cap\mathbb{Z}^{d}\big{)}=\#\big{(}(tP+ \boldsymbol{v})\cap\mathbb{Z}^{d}\big{)}\]
for any \(t\in\mathbb{Z}_{\geq 0}\), where the last equality follows from the fact that \(g\in\operatorname{GL}_{d}(\mathbb{Z})\). This implies \(\operatorname{TL}_{P,g(\boldsymbol{v})}=\operatorname{TL}_{P,\boldsymbol{v}}\) for all \(\boldsymbol{v}\in\mathbb{R}^{d}\).
**Example 7.2**.: Consider the rhombus \(Q\) in Example 4.2. From the list of translated lattice points enumerators in the example, one can see that they are equal on \(E_{1},E_{2},E_{7}\) and \(E_{8}\). This can be explained using the symmetry. Let \(\rho_{1},\rho_{2}\in\operatorname{GL}_{2}(\mathbb{Z})\) be a reflection by the \(x\)-axis and the \(y\)-axis, respectively. Then \(\rho_{1},\rho_{2}\) do not change \(Q\) so they are elements of \(\operatorname{Aut}_{\mathbb{Z}}(Q)\). We have
\[\rho_{1}(E_{1})=E_{7},\ \rho_{1}(E_{2})=E_{8},\text{ and }\rho_{2}(E_{1})=E_{2},\]
which tell that translated lattice points enumerators are equal on \(E_{1},E_{2},E_{7}\) and \(E_{8}\).
We now focus on centrally symmetric polytopes. Recall that a quasi-polynomial \(f\) is said to be symmetric if its \(k\)th constituent equals its \((-k)\)th constituent for all \(k\in\mathbb{Z}\).
We first prove the following criterion for the symmetry of Ehrhart quasi-polynomials of \(P+\mathbf{v}\).
**Lemma 7.3**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. The following conditions are equivalent._
* \(\operatorname{ehr}_{P+\mathbf{v}}\) _is symmetric for any_ \(\mathbf{v}\in\mathbb{Q}^{d}\)_._
* _For any_ \(\mathbf{v}\in\mathbb{Q}^{d}\) _and any_ \(k\in\mathbb{Z}_{\geq 0}\)_, one has_ \[\text{the $k$th constituent of $\operatorname{TL}_{P,\mathbf{v}}=$ the $(-k)$th constituent of $\operatorname{TL}_{P,-\mathbf{v}}$}.\]
Proof.: We first prove "(i) \(\Rightarrow\) (ii)". Fix \(\mathbf{v}\in\mathbb{Q}^{d}\) and \(k\in\mathbb{Z}_{\geq 0}\). Then
\[k\text{th constituent of $\operatorname{TL}_{P,\mathbf{v}}$}\] \[=k\text{th constituent of $\operatorname{ehr}_{P+\frac{1}{k}\mathbf{v}}$}\] (by Lemma 2.4 ) \[=(-k)\text{th constituent of $\operatorname{ehr}_{P+\frac{1}{k}\mathbf{v}}$}\] (by (i)) \[=(-k)\text{th constituent of $\operatorname{TL}_{P,-\mathbf{v}}$}\] (by Lemma 2.4 ),
as desired.
The proof for "(ii) \(\Rightarrow\) (i)" is similar. Indeed, we have
\[k\text{th constituent of $\operatorname{ehr}_{P+\mathbf{v}}$}\] \[=k\text{th constituent of $\operatorname{TL}_{P,k\mathbf{v}}$}\] (by Lemma 2.4 ) \[=(-k)\text{th constituent of $\operatorname{TL}_{P,-k\mathbf{v}}$}\] (by (ii)) \[=(-k)\text{th constituent of $\operatorname{ehr}_{P+\mathbf{v}}$}\] (by Lemma 2.4 ),
as desired.
Recall that a polytope \(P\subset\mathbb{R}^{d}\) is said to be centrally symmetric if \(-P=P+\mathbf{v}\) for some \(\mathbf{v}\in\mathbb{R}^{d}\).
**Corollary 7.4**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. If \(\operatorname{ehr}_{P+\mathbf{v}}\) is symmetric for any \(\mathbf{v}\in\mathbb{Q}^{d}\), then_
* \(P\) _is centrally symmetric._
* \(\operatorname{ehr}_{\pi_{i}(P)+\mathbf{u}}\) _is symmetric for any_ \(\mathbf{u}\in\mathbb{Q}^{d-1}\) _and_ \(i\in\{1,2,\ldots,d\}\)_._
Proof.: (i) Let \(q\) be a positive integer such that \(qP\) is integral. It suffices to prove that \(qP\) is centrally symmetric. Since \(\operatorname{ehr}_{qP+q\mathbf{v}}(s)=\operatorname{ehr}_{P+\mathbf{v}}(qs)\) for any \(s\in\mathbb{Z}_{\geq 0}\), the \(k\)th constituent of \(\operatorname{ehr}_{qP+q\mathbf{v}}\) is obtained from the \(qk\)th constituent of \(\operatorname{ehr}_{P+\mathbf{v}}\) by substituting \(t\) with \(\frac{t}{q}\) (as polynomials in \(t\)). This fact and the assumption tell that \(\operatorname{ehr}_{qP+q\mathbf{v}}\) is symmetric for any \(\mathbf{v}\in\mathbb{Q}^{d}\). Since \(qP\) is a lattice polytope, it follows from Theorem 1.4 that \(qP\) is centrally symmetric.
(ii) For any \(\mathbf{v}\in\mathbb{Q}^{d}\), we have
\[k\text{th constituent of }\mathrm{TL}_{\pi_{i}(P),\pi_{i}(\mathbf{v})}\] \[=k\text{th constituent of }\sum_{0\leq s<1}\mathrm{TL}_{P,\mathbf{v}+s \mathbf{e}_{i}}^{(-i)} (\text{by Proposition~{}\ref{prop:2})}\] \[=(-k)\text{th constituent of }\sum_{0\leq s<1}\mathrm{TL}_{P,-\mathbf{v}-s \mathbf{e}_{i}}^{(+i)} (\text{by Lemma~{}\ref{prop:2})}\] \[=(-k)\text{th constituent of }\mathrm{TL}_{\pi_{i}(P),-\pi_{i}(\mathbf{v})} (\text{by Proposition~{}\ref{prop:2})}.\]
This tells that \(\pi_{i}(P)\) satisfies the condition (ii) of Lemma 7.3.
We now come to the goal of this section. Let \(P\) be a centrally symmetric polytope with \(-P=P+\mathbf{u}\). Then \(\frac{1}{2}\mathbf{u}\) is a center of \(P\), and for any vertex \(\mathbf{v}=\frac{1}{2}\mathbf{u}+\mathbf{v}^{\prime}\) of \(P\), the point \(\frac{1}{2}\mathbf{u}-\mathbf{v}^{\prime}\) is also a vertex of \(P\) by the central symmetry. We write this vertex \(\frac{1}{2}\mathbf{u}-\mathbf{v}^{\prime}\) as \(\mathbf{v}^{*}\).
**Theorem 7.5**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. The following conditions are equivalent._
1. \(\mathrm{ehr}_{P+\mathbf{v}}\) _is symmetric for any_ \(\mathbf{v}\in\mathbb{Q}^{d}\)_._
2. \(P\) _is centrally symmetric and_ \(\mathbf{v}-\mathbf{v}^{*}\in\mathbb{Z}^{d}\) _for any vertex_ \(\mathbf{v}\) _of_ \(P\)_._
To prove the theorem, we recall the following basic fact on \(\mathbb{Z}\)-modules.
**Lemma 7.6**.: _For any \(d\)-dimensional cone \(X\subset\mathbb{R}^{d}\) with apex \(\mathbf{0}\), there is a \(\mathbb{Z}\)-basis of \(\mathbb{Z}^{d}\) which are contained in \(\mathrm{int}(X)\)._
Proof.: Take any integer vector \(\mathbf{a}\in\mathrm{int}(X)\) with \(\gcd(\mathbf{a})=1\).
First, we claim that there are \(\mathbf{a}_{1},\ldots,\mathbf{a}_{d-1}\in\mathbb{Z}^{d}\) such that \(\mathbf{a},\mathbf{a}_{1},\ldots,\mathbf{a}_{d-1}\) is a \(\mathbb{Z}\)-basis of \(\mathbb{Z}^{d}\). In fact, by the assumption, the \(\mathbb{Z}\)-module \(\mathbb{Z}^{d}/(\mathbb{Z}\mathbf{a})\) is a free \(\mathbb{Z}\)-module of rank \(d-1\) since it is torsionfree. If we choose \(\mathbf{a}_{1},\ldots,\mathbf{a}_{d-1}\in\mathbb{Z}^{d}\) so that they form a \(\mathbb{Z}\)-basis for \(\mathbb{Z}^{d}/(\mathbb{Z}\mathbf{a})\), the sequence \(\mathbf{a},\mathbf{a}_{1},\ldots,\mathbf{a}_{d-1}\) becomes a \(\mathbb{Z}\)-basis of \(\mathbb{Z}^{d}\).
Now, we may check that we can choose \(\mathbf{a}_{1},\ldots,\mathbf{a}_{d-1}\) from \(\mathrm{int}(X)\). For each \(\mathbf{a}_{i}\), since \(\mathbf{a}\) is in the interior of \(X\), by taking a sufficiently large integer \(k_{i}\), the point \(\mathbf{a}_{i}+k_{i}\mathbf{a}\) is contained in \(\mathrm{int}(X)\). Then \(\mathbf{a},\mathbf{a}_{1}+k_{1}\mathbf{a},\ldots,\mathbf{a}_{d-1}+k_{d-1}\mathbf{a}\) is a desired \(\mathbb{Z}\)-basis.
Proof of Theorem 7.5.: ((ii) \(\Rightarrow\) (i)) By taking an appropriate translation, we may assume \(P=-P\). Then \(\mathbf{v}^{*}=-\mathbf{v}\) for any vertex \(\mathbf{v}\) of \(P\), so the condition (ii) tells that \(2P\) is integral. In particular, every quasi-polynomial \(\mathrm{TL}_{P,\mathbf{v}}\) has period \(2\). We prove that \(P\) satisfies the condition (ii) of Lemma 7.3.
Let \(\mathbf{v}\in\mathbb{Q}^{d}\) and \(k\in\{0,1\}\). Since \(P=-P\) we have
\[\text{the $k$th constituent of }\mathrm{TL}_{P,\mathbf{v}}=\text{the $k$th constituent of }\mathrm{TL}_{P,-\mathbf{v}}.\]
However, since \(\mathrm{TL}_{P,-\mathbf{v}}\) has period \(2\), the RHS in the above equation equals the \((-k)\)th constituent of \(\mathrm{TL}_{P,-\mathbf{v}}\).
((i) \(\Rightarrow\) (ii)) We have already seen that (i) implies that \(P\) is centrally symmetric in Corollary 7.4. We prove the second condition of (ii) by induction on \(d\). Suppose \(d=1\). Then we may assume
\[P=[0,x+\tfrac{p}{q}]\]
for some \(x,p,q\in\mathbb{Z}_{\geq 0}\) with \(0\leq p<q\). Then we have
\[\text{the first constituent of }\mathrm{ehr}_{P}=\mathrm{vol}(P)t-\tfrac{p}{q}+1\]
and
\[\text{the $(q-1)$th constituent of }\mathrm{ehr}_{P}=\mathrm{vol}(P)t-(q-1) \tfrac{p}{q}+\lfloor(q-1)\tfrac{p}{q}\rfloor+1.\]
Then the condition (i) tells \(p-2p/q=\lfloor p(q-1)/q\rfloor\), but it implies \(2p/q\in\mathbb{Z}\). Hence \(2P\) is integral which guarantees the condition (ii).
Suppose \(d>1\). Let \(\mathbf{v}\in\mathbb{Q}^{d}\) be a vertex of \(P\). Consider the normal cone at the vertex \(\mathbf{v}\)
\[X=\{\mathbf{a}\in\mathbb{R}^{d}\mid\max\{(\mathbf{a},\mathbf{x})\mid\mathbf{x}\in P\}=(\mathbf{a}, \mathbf{v})\}.\]
This is a \(d\)-dimensional cone with apex \(\mathbf{0}\). By Lemma 7.6, there is a \(\mathbb{Z}^{d}\)-basis \(\mathbf{e}^{\prime}_{1},\ldots,\mathbf{e}^{\prime}_{d}\) which are contained in \(\operatorname{int}(X)\). Consider the linear transformation \(g\in\operatorname{GL}_{n}(\mathbb{Z})\) which changes the hyperplane \(\{\mathbf{x}\in\mathbb{R}^{d}\mid(\mathbf{x},\mathbf{e}^{\prime}_{i})=0\}\) to \(\{\mathbf{x}=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\mid x_{i}=0\}\). Since \(g(\mathbb{Z}^{d})=\mathbb{Z}^{d}\), we have \(\operatorname{ehr}_{g(P)+\mathbf{u}}(t)=\operatorname{ehr}_{P+g^{-1}(\mathbf{u})}(t)\) for any \(\mathbf{u}\in\mathbb{Q}^{d}\), so \(g(P)\) also satisfies the condition (i). Let \(g(\mathbf{v})=(y_{1},\ldots,y_{d})\). By the choice of \(\mathbf{e}^{\prime}_{1},\ldots,\mathbf{e}^{\prime}_{d}\), we have
\[g(P)\cap\{(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\mid x_{i}=y_{i}\}=\{g(\mathbf{v})\} \quad\text{for any $1\leq i\leq d$}.\]
This tells that \(\pi_{j}(g(\mathbf{v}))\) is a vertex of \(\pi_{j}(g(P))\) for \(j=1,2,\ldots,d\) and the same hold for \(\pi_{j}(g(\mathbf{v}^{*}))\) by the central symmetry. For each \(j=1,2,\ldots,d\), Lemma 7.4 tells that \(\pi_{j}(g(P))\) satisfies the condition (i), so we have that \(\pi_{j}(g(\mathbf{v})-g(\mathbf{v}^{*}))\in\mathbb{Z}^{d-1}\) by the induction hypothesis. But then we must have \(g(\mathbf{v}-\mathbf{v}^{*})\in\mathbb{Z}^{d}\) and therefore \(\mathbf{v}-\mathbf{v}^{*}\in\mathbb{Z}^{d}\).
If \(P=-P\), then the condition (ii) of Theorem 7.5 is equivalent to saying that \(2P\) is integral. So, Theorem 7.5 is essentially equivalent to Theorem 1.5 in Introduction.
## 8. A connection to commutative algebra
In this section, we briefly explain a connection between translated lattice points enumerators and conic divisorial ideals of Ehrhart rings in commutative algebra. In particular, we explain that Theorem 2.3 can be proved algebraically using the duality of Cohen-Macaulay modules.
### Conic divisorial ideals
Let \(S=\mathbb{F}[x_{1}^{\pm},\ldots,x_{d+1}^{\pm}]\) be the Laurent polynomial ring over a field \(\mathbb{F}\). We will consider the grading of \(S\) defined by \(\deg(x_{1})=\cdots=\deg(x_{d})=0\) and \(\deg(x_{d+1})=1\). For \(\mathbf{a}=(a_{1},\ldots,a_{d+1})\in\mathbb{Z}^{d+1}\), we write
\[x^{\mathbf{a}}=x_{1}^{a_{1}}\cdots x_{d+1}^{a_{d+1}}.\]
Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. The **Ehrhart ring**\(\mathbb{F}[P]\) of \(P\) (over \(\mathbb{F}\)) is the monoid algebra generated by the monomials \(x^{\mathbf{a}}\) such that \(\mathbf{a}\) is in the monoid \(\mathcal{C}_{P}\cap\mathbb{Z}^{d+1}\). As vector spaces, we can write
\[\mathbb{F}[P]=\operatorname{span}_{\mathbb{F}}\{x^{\mathbf{a}}\mid\mathbf{a}\in \mathcal{C}_{P}\cap\mathbb{Z}^{d+1}\}. \tag{8.1}\]
For a finitely generated graded \(\mathbb{F}[P]\)-module \(M\), its **Hilbert function** is the function defined by \(\operatorname{hilb}(M,k)=\dim_{\mathbb{F}}M_{k}\) for \(k\in\mathbb{Z}\), where \(M_{k}\) is the degree \(k\) component of \(M\), and the **Hilbert series** of \(M\) is the formal power series \(\operatorname{Hilb}(M,z)=\sum_{k\in\mathbb{Z}}\operatorname{hilb}(M,k)z^{k}\). Ehrhart rings are closely related to Ehrhart quasi-polynomials. Indeed, from (8.1), we can see that the Hilbert function of \(\mathbb{F}[P]\) is nothing but the Ehrhart quasi-polynomial of \(P\).
For any \(\mathbf{v}\in\mathbb{R}^{d+1}\), the vector space
\[I_{\mathbf{v}}=\operatorname{span}_{\mathbb{F}}\{x^{\mathbf{a}}\mid\mathbf{a}\in( \mathcal{C}_{P}+\mathbf{v})\cap\mathbb{Z}^{d+1}\}\subset S\]
becomes a finitely generated graded \(\mathbb{F}[P]\)-module. The modules \(I_{\mathbf{v}}\) are called **conic divisorial ideals** of \(\mathbb{F}[P]\). We note that different vectors in \(\mathbb{R}^{d+1}\) could give the same conic divisorial ideal, more precisely, we have \(I_{\mathbf{v}}=I_{\mathbf{u}}\) if and only if the cones \(\mathcal{C}_{P}+\mathbf{v}\) and \(\mathcal{C}_{P}+\mathbf{u}\) have the same lattice points.
Let us call a conic divisorial ideal \(I\)**standard** if \(I=I_{(\boldsymbol{v},0)}\) for some \(\boldsymbol{v}\in\mathbb{R}^{d}\). Hilbert functions of standard conic divisorial ideals are nothing but translated lattice points enumerators. Indeed, for any \(\boldsymbol{v}\in\mathbb{R}^{d}\), we have
\[\dim_{\mathbb{F}}(I_{(\boldsymbol{v},0)})_{t}=\#\{\boldsymbol{a}\!=\!(a_{1}, \ldots,a_{d+1})\in(C_{P}+(\boldsymbol{v},0))\cap\mathbb{Z}^{d+1}|\ a_{d+1}\!=\!t \}=\#\big{(}(tP+\boldsymbol{v})\cap\mathbb{Z}^{d}\big{)}. \tag{8.2}\]
We will not explain algebraic backgrounds on (conic) divisorial ideals of Ehrhart rings since it is not relevant to the theme of this paper. But in the rest of this section we briefly explain how algebraic properties of conic divisorial ideals can be used to consider properties of translated lattice points enumerators. For more detailed information on conic divisorial ideals, see [9] and [10, SS4.7].
### Hilbert series of conic divisorial ideals and an algebraic proof of Theorem 2.3
We need some basic tools on commutative algebra such as the Cohen-Macaulay property and canonical modules. We refer the readers to [8, SS3 and SS4] for basics on commutative algebra.
We introduce one more notation. For \(\boldsymbol{v}\in\mathbb{R}^{d+1}\), we define
\[I_{\boldsymbol{v}}^{\circ}=\operatorname{span}_{\mathbb{F}}\big{\{}x^{ \boldsymbol{a}}\mid\boldsymbol{a}\in\big{(}\mathrm{int}(\mathcal{C}_{P})+ \boldsymbol{v}\big{)}\cap\mathbb{Z}^{d+1}\big{\}}. \tag{8.3}\]
The space \(I_{\boldsymbol{v}}^{\circ}\) is also a conic divisorial ideal. Indeed, if \(\boldsymbol{w}\in\mathrm{int}(\mathcal{C}_{P})\) is a vector which is sufficiently close to the origin, then we have
\[\big{(}\mathrm{int}(\mathcal{C}_{P})+\boldsymbol{v}\big{)}\cap\mathbb{Z}^{d+1 }=\big{(}\mathcal{C}_{P}+\boldsymbol{v}+\boldsymbol{w}\big{)}\cap\mathbb{Z}^{d +1},\]
which tells \(I_{\boldsymbol{v}}^{\circ}=I_{\boldsymbol{v}+\boldsymbol{w}}\). The following facts are known. See [10, Corollary 3.3 and Remark 4.4(b)].
* \(I_{\boldsymbol{v}}\) is a \((d+1)\)-dimensional Cohen-Macaulay module.
* \(I_{\boldsymbol{v}}^{\circ}\) is the canonical module of \(I_{-\boldsymbol{v}}\), more precisely, we have \[\mathrm{Hom}_{\mathbb{F}[P]}(I_{\boldsymbol{v}},\omega)\cong\operatorname{ span}_{\mathbb{F}}\{x^{\boldsymbol{a}}\mid\boldsymbol{a}\in(\mathrm{int}(\mathcal{C}_{P})- \boldsymbol{v})\cap\mathbb{Z}^{d+1}\}=I_{-\boldsymbol{v}}^{\circ},\] where \(\omega=\operatorname{span}_{\mathbb{F}}\{x^{\boldsymbol{a}}\mid\boldsymbol{a }\in\mathrm{int}(\mathcal{C}_{P})\cap\mathbb{Z}^{d+1}\}\) is the graded canonical module of \(\mathbb{F}[P]\).
These properties give the following consequences on Hilbert series of conic divisorial ideals.
**Proposition 8.1**.: _Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope and \(q\) the denominator of \(P\). Let \(\boldsymbol{v}=(v_{1},\ldots,v_{d+1})\in\mathbb{R}^{d+1}\) and \(\alpha=\lceil v_{d+1}\rceil\)._
1. \(\mathrm{Hilb}(I_{\boldsymbol{v}}^{\circ},z)=(-1)^{d+1}\,\mathrm{Hilb}(I_{- \boldsymbol{v}},z^{-1})\)_._
2. \(\mathrm{Hilb}(I_{\boldsymbol{v}},z)=\frac{z^{\alpha}}{(1-z^{q})^{d+1}}Q(z)\) _for some polynomial_ \(Q(z)\in\mathbb{Z}_{\geq 0}[z]\) _of degree_ \(<q(d+1)\)_._
Proof.: The equality (1) is the well-known formula of the Hilbert series of a canonical module. See [8, Theorem 4.45]. We prove (2). Consider the subring
\[A=\operatorname{span}_{\mathbb{F}}\big{\{}x^{\boldsymbol{a}}\mid x^{\boldsymbol {a}}\in\mathcal{C}_{P}\ \text{and}\ \deg(x^{\boldsymbol{a}})\in q\mathbb{Z}\big{\}}\subset\mathbb{F}[P].\]
Since \(qP\) is integral, \(\mathbb{F}[qP]\) is a semi-standard graded \(\mathbb{F}\)-algebra, that is, \(\mathbb{F}[qP]\) is a finitely generated as a module over a standard graded \(\mathbb{F}\)-algebra \(\mathbb{F}[x^{\boldsymbol{a}}x_{d+1}\mid x^{\boldsymbol{a}}\in P\cap\mathbb{ Z}^{d}]\) (see [19, Theorem 9.3.6](d)). Then, since \(A\cong\mathbb{F}[qP]\), where the degree \(k\) part of \(\mathbb{F}[qP]\) corresponds to the degree \(qk\) part of \(A\), any finitely generated \(A\)-module \(M\) of Krull dimension \(m\) has the Hilbert series of the form \(Q(z)/(1-z^{q})^{m}\) for some polynomial \(Q(z)\), and if \(M\) is Cohen-Macaulay then \(Q(z)\in\mathbb{Z}_{\geq 0}[z]\) ([8, Corollaries 4.8 and 4.10]).
Since \(\mathbb{F}[P]\) is a finitely generated \(A\)-module, \(I_{\boldsymbol{v}}\) is a finitely generated Cohen-Macaulay \(A\)-module of Krull dimension \(d+1\). Thus there is a polynomial \(Q(z)\in\mathbb{Z}_{\geq 0}[z]\) such that
\[\mathrm{Hilb}(I_{\boldsymbol{v}},z)=\frac{1}{(1-z^{q})^{d+1}}Q(z).\]
Since \((I_{\boldsymbol{v}})_{k}=0\) for \(k<\alpha=\lceil v_{d+1}\rceil\) by the definition of \(I_{\boldsymbol{v}}\), the polynomial \(Q(z)\) must be of the form
\[Q(z)=c_{0}t^{\alpha}+c_{1}t^{\alpha+1}+\cdots+c_{m}t^{\alpha+m}\]
for some \(m\geq 0\), where \(c_{0},\ldots,c_{m}\in\mathbb{Z}_{\geq 0}\) and \(c_{m}\neq 0\), so it follows that
\[\operatorname{Hilb}(I_{\boldsymbol{v}},z)=\frac{z^{\alpha}}{(1-z^{q})^{d+1}}(c _{0}+c_{1}z+\cdots+c_{m}z^{m}).\]
Now it remains to prove \(m<q(d+1)\). By statement (1), we have
\[\operatorname{Hilb}(I_{-\boldsymbol{v}}^{\circ},z) =(-1)^{d+1}\frac{z^{-\alpha}}{(1-z^{-q})^{d+1}}(c_{0}+c_{1}z^{-1 }+\cdots+c_{m}z^{-m})\] \[=\frac{z^{q(d+1)-\alpha-m}}{(1-z^{q})^{d+1}}(c_{m}+c_{m}z+\cdots+ c_{0}z^{m}).\]
This tells
\[-\alpha<\min\{k\in\mathbb{Z}\mid(I_{-\boldsymbol{v}}^{\circ})_{k}\neq 0\}=q(d+ 1)-\alpha-m\]
proving the desired inequality \(m<q(d+1)\).
The statements in Proposition 8.1 are known to imply the quasi-polynomiality and reciprocity of translated lattice points enumerators in Theorem 2.3. Recall that \(\operatorname{TL}_{P,\boldsymbol{v}}\) coincides with the Hilbert function of \(I_{(\boldsymbol{v},0)}\). Proposition 8.1(2) tells that the Hilbert series of \(I_{(\boldsymbol{v},0)}\) can be written in the form \(\frac{1}{(1-z^{q})^{d+1}}Q(z)\) for some polynomial \(Q(z)\) of degree \(<q(d+1)\), which is known to imply that \(\operatorname{hilb}(I_{(\boldsymbol{v},0)},t)\) is a quasi-polynomial with period \(q\). See e.g., [5, SS3.8] or [18, SS4]. Also, Proposition 8.1(1) is essentially equivalent to the reciprocity in Theorem 2.3(2). See [5, SS4.3].
Finally, we note that the proposition gives some restriction to the possible values of \(\operatorname{TL}_{P,\boldsymbol{v}}\). If \(P\subset\mathbb{R}^{d}\) is a lattice \(d\)-polytope and \(\boldsymbol{v}\in\mathbb{R}^{d}\), then the proposition tells
\[\operatorname{Hilb}(I_{(\boldsymbol{v},0)},z)=\frac{1}{(1-z)^{d+1}}(h_{0}+h_{ 1}z+\cdots+h_{d}z^{d})\]
for some \(h_{0},h_{1},\ldots,h_{d}\in\mathbb{Z}_{\geq 0}\). These \(h\)-numbers must satisfy the following conditions
* \(h_{0}=1\) if \(\boldsymbol{v}\in\mathbb{Z}^{d}\) and \(h_{0}=0\) if \(\boldsymbol{v}\not\in\mathbb{Z}^{d}\);
* \(h_{0}+\cdots+h_{d}=d!\mathrm{vol}(P)\).
The first condition follows from \(h_{0}=\dim_{\mathbb{F}}(I_{(\boldsymbol{v},0)})_{0}\), and the second condition follows since \(\frac{1}{d!}(h_{0}+\cdots+h_{d})\) is the top degree coefficient of the polynomial \(\operatorname{hilb}(I_{(\boldsymbol{v},0)},t)\). Below we give a simple application of this. Consider a lattice polygon \(P\subset\mathbb{R}^{2}\) whose volume is \(\frac{3}{2}\). Then the possible values of \(h_{0}+h_{1}z+h_{2}z^{2}\) are
\[1+z+z^{2},\ 1+2z,\ 1+2z^{2},\ 3z,\ 2z+z^{2},\ z+2z^{2},\ 3z^{2}.\]
If \(f(t)\) is a polynomial \(\sum_{t=0}^{\infty}f(t)z^{t}=\frac{1}{(1-z)^{3}}(h_{0}+h_{1}z+h_{2}z^{2})\), then \(f(t)=h_{0}\binom{t+2}{2}+h_{1}\binom{t+1}{2}+h_{2}\binom{t}{2}\). So a translated lattice points enumerator of an integral polygon with volume \(\frac{3}{2}\) must be one of the following polynomials
\[\tfrac{3}{2}t^{2}+\tfrac{3}{2}t+1,\ \tfrac{3}{2}t^{2}+\tfrac{5}{2}t+1,\ \tfrac{3}{2}t^{2}+ \tfrac{1}{2}t+1,\ \tfrac{3}{2}t^{2}+\tfrac{3}{2}t,\ \tfrac{3}{2}t^{2}+\tfrac{1}{2}t,\ \tfrac{3}{2}t^{2}-\tfrac{1}{2}t,\ \tfrac{3}{2}t^{2}-\tfrac{3}{2}t.\]
Four of them appear as translated lattice points enumerators of the trapezoid in Introduction. See (1.1).
## 9. Problems
In this last section, we list a few problems which we cannot answer.
**Gcd property and zonotopes.** A quasi-polynomial \(f\) with period \(q\) is said to have the **gcd property** if its \(k\)th constituent only depends on the gcd of \(k\) and \(q\) for all \(k\in\mathbb{Z}\). We note that if \(f\) has the gcd property then \(f\) must be symmetric. It was proved in [20] that, for a lattice \(d\)-polytope \(P\subset\mathbb{R}^{d}\), \(\operatorname{ehr}_{P+\boldsymbol{v}}\) has the gcd property for all \(\boldsymbol{v}\in\mathbb{Q}^{d}\) if and only if \(P\) is a zonotope. Considering the statement in Theorem 1.5, one may ask if a similar statement holds for zonotopes \(P\) such that \(2P\) is integral, but this is not the case. Indeed, the rhombus \(Q\) in Example 4.2 is a zonotope and \(2Q\) is integral but the computation given in the example tells that \(\operatorname{ehr}_{Q+(\frac{1}{8},\frac{1}{8})}\) does not satisfy the gcd property. We repeat the following question asked in [20, Problem 6.7(2)].
**Problem 9.1**.: Let \(P\subset\mathbb{R}^{d}\) be a rational \(d\)-polytope. Is it true that, if \(\operatorname{ehr}_{P+\boldsymbol{v}}\) has the gcd property for all \(\boldsymbol{v}\in\mathbb{Q}^{d}\), then \(P=Q+\boldsymbol{u}\) for some integral zonotope \(Q\) and some \(\boldsymbol{u}\in\mathbb{Q}^{d}\)?
To consider this problem we can assume that \(P\) is a zonotope by the argument similar to the proof of Corollary 7.4(i) and \(2P\) is integral by Theorem 1.5.
### Period collapse.
Recall that the denominator of a rational polytope \(P\) is always a period of \(\operatorname{ehr}_{P}\). If the minimum period of \(\operatorname{ehr}_{P}\) is not equal to the denominator of \(P\), we say that period collapse occurs to \(P\). A period collapse is a major subject in the study of Ehrhart quasi-polynomials (see e.g., [6, 11, 15, 16]). We ask the following vague question: Is a way of computing \(\operatorname{ehr}_{P+\boldsymbol{v}}\) from \(\operatorname{TL}_{P,C}\) applicable to produce polytopes giving period collapse? For translations of a lattice polytope, a period collapse cannot occur. Indeed, if \(P\) is a lattice polytope, then the minimum period of \(\operatorname{ehr}_{P+\boldsymbol{v}}\) must be the smallest integer \(k\) such that \(k\boldsymbol{v}\) is integral since \(\operatorname{TL}_{P,\boldsymbol{v}}(0)\neq 0\) only when \(\boldsymbol{v}\) is integral.
|
2305.08728 | Order parameter dynamics in Mn$_3$Sn driven by DC and pulsed spin-orbit
torques | We numerically investigate and develop analytic models for both the DC and
pulsed spin-orbit-torque (SOT)-driven response of order parameter in
single-domain Mn$_3$Sn, which is a metallic antiferromagnet with an anti-chiral
120$^\circ$ spin structure. We show that DC currents above a critical threshold
can excite oscillatory dynamics of the order parameter in the gigahertz to
terahertz frequency spectrum. Detailed models of the oscillation frequency
versus input current are developed and found to be in excellent agreement with
the numerical simulations of the dynamics. In the case of pulsed excitation,
the magnetization can be switched from one stable state to any of the other
five stable states in the Kagome plane by tuning the duration or the amplitude
of the current pulse. Precise functional forms of the final switched state
versus the input current are derived, offering crucial insights into the
switching dynamics of Mn$_3$Sn. The readout of the magnetic state can be
carried out via either the anomalous Hall effect, or the recently demonstrated
tunneling magnetoresistance in an all-Mn$_3$Sn junction. We also discuss
possible disturbance of the magnetic order due to heating that may occur if the
sample is subject to large currents. Operating the device in pulsed mode or
using low DC currents reduces the peak temperature rise in the sample due to
Joule heating. Our predictive modeling and simulation results can be used by
both theorists and experimentalists to explore the interplay of SOT and the
order dynamics in Mn$_3$Sn, and to further benchmark the device performance. | Ankit Shukla, Siyuan Qian, Shaloo Rakheja | 2023-05-15T15:42:32Z | http://arxiv.org/abs/2305.08728v2 | # Order parameter dynamics in Mn\({}_{3}\)Sn driven by DC and pulsed spin-orbit torques
###### Abstract
We numerically investigate and develop analytic models for both the DC and pulsed spin-orbit-torque (SOT)-driven response of order parameter in single-domain Mn\({}_{3}\)Sn, which is a metallic antiferromagnet with an anti-chiral 120\({}^{\circ}\) spin structure. We show that DC currents above a critical threshold can excite oscillatory dynamics of the order parameter in the gigahertz to terahertz frequency spectrum. Detailed models of the oscillation frequency versus input current are developed and found to be in excellent agreement with the numerical simulations of the dynamics. In the case of pulsed excitation, the magnetization can be switched from one stable state to any of the other five stable states in the Kagome plane by tuning the duration or the amplitude of the current pulse. Precise functional forms of the final switched state versus the input current are derived, offering crucial insights into the switching dynamics of Mn\({}_{3}\)Sn. The readout of the magnetic state can be carried out via either the anomalous Hall effect, or the recently demonstrated tunneling magnetoresistance in an all-Mn\({}_{3}\)Sn junction. We also discuss possible disturbance of the magnetic order due to heating that may occur if the sample is subject to large currents. Operating the device in pulsed mode or using low DC currents reduces the peak temperature rise in the sample due to Joule heating. Our predictive modeling and simulation results can be used by both theorists and experimentalists to explore the interplay of SOT and the order dynamics in Mn\({}_{3}\)Sn, and to further benchmark the device performance.
## I Introduction
Antiferromagnets (AFMs) are a class of magnetically ordered materials that exhibit negligible net magnetization, owing to the unique arrangement of strongly exchange-coupled spins on the atoms of their unit cells. As a result, AFMs produce negligible stray fields, are robust to external magnetic field perturbations, and their precession frequency, set by the geometric mean of exchange and anisotropy energies, is in the terahertz (THz) regime.[1; 2; 3] The past decade has witnessed a rapid rise in theoretical and experimental research focused on the fundamental understanding and applications of AFM materials as active spintronic device elements.[4; 5; 6; 7; 8; 3; 4; 5; 6; 7] There exists a broad range of AFM materials including insulators, metals, and semiconductors with unique properties that could be exploited to realize magnonic devices,[9] high-frequency signal generators and detectors,[5; 10; 11; 12; 13; 14; 15] and non-volatile memory.[16; 17; 18] For example, insulators like NiO[19; 20] and MnF\({}_{2}\)[21] are well studied and have the potential to carry charge-less spin waves or magnons. Insulating AFM Cr\({}_{2}\)O\({}_{3}\) shows magnetoelectricity below its Neel temperature of 307 K, which was exploited to demonstrate voltage-controlled exchange-bias memory and fully electrically controlled memory devices.[22] On the other hand, metallic AFMs have been mostly used as sources of exchange bias in spin valves and tunnel junction-based devices.[23; 24] More recently, there has been significant research activity in non-collinear, chiral metallic AFMs of the form Mn\({}_{3}\)X with a triangular spin structure and several intriguing magneto-transport characteristics such as a large spin Hall effect (SHE),[25] anomalous Hall effect (AHE),[26; 27; 28] and ferromagnet-like spin-polarized currents.[29]
Negative chirality materials, such as Mn\({}_{3}\)Sn, Mn\({}_{3}\)Ge, Mn\({}_{3}\)Ga, perhaps best represent the promise of non-collinear metallic AFMs with a potential for ferromagnet-like spintronic devices in which the order parameter could be fully electrically controlled and read-out.[30; 31; 32] In a recent experiment,[33] conducted in a bilayer of heavy metal and Mn\({}_{3}\)Sn, a characteristic fluctuation of the Hall resistance was measured in response to a DC current in the heavy metal. This observation could be explained in terms of the rotation of the chiral spin structure of Mn\({}_{3}\)Sn driven by spin-orbit torque (SOT). Very recently, tunneling magnetoresistance (TMR) of approximately 2% at room temperature in an all antiferromagnetic tunnel junction consisting of Mn\({}_{3}\)Sn/MgO/Mn\({}_{3}\)Sn was experimentally measured.[34] The TMR in Mn\({}_{3}\)Sn originates from the time reversal symmetry breaking and the momentum-dependent spin splitting of bands in the crystal. Pal et al. and Krishnaswamy et al. independently showed that Mn\({}_{3}\)Sn layers thicker than the spin diffusion length could be switched by seeded SOTs.[35; 36] Here, the SOT sets the spin texture of the AFM in a thin layer at the interface, which acts as the seed for the subsequent setting of the domain configuration of the entire layer. The seeded SOT also requires bringing the temperature of the AFM above its ordering temperature and then cooling it in the presence of the SOT generated in a proximal heavy metal layer. These recent works highlight the tremendous potential of Mn\({}_{3}\)Sn and other negative chirality AFMs to explore and develop spintronic device concepts.
In this paper, we discuss the energy landscape of a thin film of Mn\({}_{3}\)Sn in the mono-domain limit and deduce the weak six-fold magnetic anisotropy of the film via perturbation and numerical solutions (Section III). Consequences of the six-fold anisotropy on the equilibrium states, the origin of the weak ferromagnetic moment, and SOT-induced non-equilibrium dynamics are carefully modeled in Sections III and IV. The analytic model of the threshold spin current to drive the system into steady-state oscillations is validated |
2306.09166 | Observations of the suspected RR Lyr stars NSV 14172 and NSV 14264 | NSV 14264 and NSV 14172 are suspected to be variable stars of RR Lyr type
(Brun, 1964). They were observed during three nights in October 2018 with a
25cm diameter telescope. These observations completed by ASAS-SN survey data
bring to the conclusion that these two stars are not RR Lyraes but constant
stars in the limit of the precision of the present photometry. The analysis of
GAIA data allows to say that NSV 14264 is a main sequence dwarf similar to the
Sun but that NSV 14172 is a yellow giant star located in the HR diagram at the
limit between RR Lyraes and CW cepheids; however, it does not pulsate with
significant amplitude. | Jean-François Le Borgne | 2023-06-15T14:43:21Z | http://arxiv.org/abs/2306.09166v1 | ###### Abstract
###### Abstract
NSV 14264 and NSV 14172 are suspected to be variable stars of RR Lyr type (Brun, 1964). They were observed during three nights in October 2018 with a 25cm diameter telescope. These observations completed by ASAS-SN survey data bring to the conclusion that these two stars are not RR Lyraes but constant stars in the limit of the precision of the present photometry. The analysis of GAIA data allows to say that NSV 14264 is a main sequence dwarf similar to the Sun but that NSV 14172 is a yellow giant star located in the HR diagram at the limit between RR Lyraes and CW cepheids; however, it does not pulsate with significant amplitude.
**GEOS RR 62 GEOS CIRCULAR ON RR LYRAE** **June 15, 2023**
**OBSERVATIONS OF THE SUSPECTED RR LYR**
**STARS NSV 14172 AND NSV 14264**
J.F. Le Borgne1,2,3
Footnote 1: [http://tdc-www.harvard.edu/wcstools/](http://tdc-www.harvard.edu/wcstools/)
\({}^{1}\) GEOS (Groupe Europeen d'Observations Stellaires), [http://geos.upv.es](http://geos.upv.es)
\({}^{2}\) IRAP; OMP; Universite de Toulouse; 14, avenue Edouard Belin, F-31400 Toulouse, France
\({}^{3}\) LAM, Laboratoire d'Astrophysique de Marseille, 38 Rue Frederic Joliot Curie, F-13013 Marseille, France
## 1 Introduction
NSV 14172 and NSV 14264 were introduced in the suspected variable catalog (Samus et al., 2017) after the publication by A. Brun of a list of variable star candidates (Brun, 1964). These 2 stars were suspected to be of RR Lyr type and have numbers 49 and 59 respectively in Brun (1964). The observations were done by R. Weber between August 1959 and December 1962 using a photographic camera. The range of photographic magnitudes are 12.5 to 13.6 and 12.2 to 13.4 respectively. It is surprising that such bright stars are still suspected and that no publication reporting on their real status is available. In the present paper we report observations of these two stars made on October 3, 4 and 5, 2018 in order to clarify their photometric properties.
## 2 Observations
The observations were made in J.F. Le Borgne's private observatory (EsO) in Escalquens (Occitania, EU) using a 10 inches diameter newton f/4 telescope (Skywatcher) equipped with a CCD camera (Apogee Alta F9000, KAF-09000) and optical aberration corrector giving a field of 2\({}^{\circ}\times\)2\({}^{\circ}\). A Johnson R filter was used with an exposure time of one minute.
Usual dark and flat-field corrections were done with the use of the software _IRAF_ running on a fedora linux system. Astrometry was performed using _imwcs_ part of _WCSTools_ package1 and photometry using _SExtractor_ program (Bertin &Arnouts, 1996).
NSV 14264 was observed on October 3 and 4, 2018 and NSV 14172 on October 5, 2018 (table 1). Each night, images were continuously acquired during more than 10 hours. The number of images obtained are between 439 and 457 per night. The comparison and check stars for each star are given in table 2 were data from UCAC4 catalog (Zacharias et al., 2013) are given.
The magnitudes of the studied stars are obtained by adding the UCAC4 SDSS r magnitude of comparison star to instrumental magnitude differences.
## 3 Light curves
In figure 1 the light curves of the two stars are drawn for the three nights. All three graphs have a full scale of 0.4 magnitudes in ordinate. The light curves show no significant variation for the three nights and the two stars. Since all time series cover about 10 hours (duration of the order of magnitude of the period of RR Lyr stars), it can be said that these stars cannot be RR Lyraes. Table 1 gives the mean magnitudes per night and standard deviations. The standard deviations are of the order of 0.01 magnitude. Table 3 gives the mean magnitude and standard deviations for the whole run.
In order to confirm these findings, we analyzed the data from the survey ASAS-SN (Shappee et al., 2014) of both stars. This survey takes only a couple of measurements per night but the data cover several years. Two filters are available, V and g. V filter (2014-2018) and g filter (2018-2023) have only 7 months of in common from April to November 2018. The light curves in V and g are plotted in figure 2 (as in figure 1, the ordinate full scale is 0.4 magnitudes). The mean values and standard deviations are shown in table 3. The ASAS-SN data confirm that these stars have no significant light variation within a range of a couple of hundredths of magnitude (\(3\sigma\sim 0.07\) mag.) for each filter and over the time interval of four years. The data from ASAS-SN used for the present statistics were cleaned for internal error greater than 0.02 magnitudes in g and 0.03 in V for NSV 14172, 0.03 in g and V for NSV 14264. Only data from camera "bC" were used for NSV 14172 and NSV 14264 for
\begin{table}
\begin{tabular}{l l l l l l l} \hline Date & JD & duration & star & Mean magnitude & standard deviation & Number of measurements \\ \hline October 3 & 2458395 & 10h10mn & NSV 14264 & 11.459 & 0.011 & 439 \\ October 4 & 2458396 & 10h32mn & NSV 14264 & 11.460 & 0.010 & 457 \\ October 5 & 2458397 & 10h03mn & NSV 14172 & 12.425 & 0.013 & 445 \\ \hline \end{tabular}
\end{table}
Table 1: Nightly information about observations.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline & UCAC4 & ra(J2000) & dec(J2000) & B & V & r & B-V \\ \hline NSV 14172 & 674-114332 & 22:28:42.9422 & +44:39:42.240 & 13.411 & 12.692 & 12.477 & 0.719 \\ comp. star & 674-114331 & 22:28:42.6767 & +44:45:36.732 & 13.252 & 12.678 & 12.527 & 0.574 \\ check star & 674-114345 & 22:28:50.5750 & +44:38:58.040 & 13.413 & 12.857 & 12.700 & 0.556 \\ \hline NSV 14264 & 685-122818 & 22:39:04.7963 & +46:49:57.021 & 12.949 & 11.983 & 11.700 & 0.966 \\ comp. star & 685-122869 & 22:39:20.9711 & +46:48:07.358 & 11.464 & 11.362 & 11.473 & 0.111 \\ check star & 685-122744 & 22:38:43.9073 & +46:54:55.862 & 11.759 & 11.458 & 11.461 & 0.003 \\ \hline \end{tabular}
\end{table}
Table 2: Coordinates (equinox J2000.0) and UCAC4 photometry of NSV 14172, NSV 14264 and their comparison and check stars.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & UCAC4 & filter & number of meas. & Mean & Standard \\ & & (observatory) & (of nights) & magnitude & deviation \\ \hline NSV 14172 & 674-114332 & R (EsO) & 445 (1) & 12.425 & 0.013 \\ check star & 674-114345 & R (EsO) & 445 (1) & 12.703 & 0.014 \\ NSV 14172 & 674-114332 & V (ASAS-SN) & 627 (226) & 12.593 & 0.019 \\ NSV 14172 & 674-114332 & g (ASAS-SN) & 792 (252) & 12.928 & 0.026 \\ \hline NSV 14264 & 685-122818 & R (EsO) & 896 (2) & 11.459 & 0.011 \\ check star & 685-122744 & R (EsO) & 896 (2) & 11.407 & 0.013 \\ NSV 14264 & 685-122818 & V (ASAS-SN) & 628 (226) & 11.966 & 0.013 \\ NSV 14264 & 685-122818 & g (ASAS-SN) & 814 (260 ) & 12.430 & 0.015 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean magnitudes of NSV 14172, NSV 14264 and their check stars.
Figure 1: R light curves of NSV 14264 and NSV 14172 on October 3, 4 and 5, 2018. Time scales are heliocentric julian days.
Only data from camera "bC" were used for NSV 14172 and NSV 14264 in filter g; data from camera "bs", though contemporary to camera "bC", were more dispersed. In V, only data from camera "bC" were used for both stars.
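A minimal sketch of this cleaning step, assuming the ASAS-SN light curves have been loaded into a pandas DataFrame (the column names `mag_err` and `camera` are ours, not necessarily those of the actual ASAS-SN export):

```python
import pandas as pd

def clean_asassn(df: pd.DataFrame, max_err: float, cameras=("bC",)) -> pd.DataFrame:
    """Keep only points with a small internal error and taken with the selected camera(s)."""
    return df[(df["mag_err"] <= max_err) & (df["camera"].isin(cameras))]

# Thresholds quoted in the text, e.g. for NSV 14172 in g:
# nsv14172_g = clean_asassn(pd.read_csv("nsv14172_g.csv"), max_err=0.02)
```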
## 4 Characterizing UCAC4 674-114332 and UCAC4 685-122818
Table 4 summarizes the values of the physical parameters of NSV 14172 and NSV 14264 as deduced from GAIA observations, DR2 and DR3, published by the Gaia Coll. (2018, 2022) and subsequent papers. It appears that the two stars are very different. NSV 14172 is a dwarf quite similar to the Sun, with an effective temperature of 5400 - 5900 K, a stellar radius of about 1.1 R\({}_{\odot}\) and a mass of 1 M\({}_{\odot}\), slightly less luminous than the Sun (0.8-0.9 L\({}_{\odot}\)). NSV 14264, on the other hand, is a yellow giant star 24 times more luminous than the Sun, with a radius of 7.38 or 7.818 R\({}_{\odot}\). Its effective temperature is about 4900 K. This difference of class, since their magnitudes are quite close to each other, is reflected in their distances: 320 pc for NSV 14172 and 1228-1274 pc for NSV 14264, in accordance with the measured parallaxes of 3.1063 mas and 0.7504 mas from Gaia DR3. The estimated gravities \(log(g)\) are also characteristic of their respective classes.
The G absolute magnitudes published by Anders et al. (2022) are 4.48 for NSV 14172 (Gaia Coll. (2022) gives 4.75) and 1.02 for NSV 14264. These values, with a BP-RP value of 0.9427, place NSV 14172 on the main sequence as expected, while the BP-RP value of 1.2217 for NSV 14264 places it among the RR Lyraes, as can be seen in figure 3 of Gaia Coll. (2019). The published effective temperature is, however, cooler than typical temperatures of RR Lyraes and more typical of Cepheids. Moreover, NSV 14264 does not pulsate with the expected amplitude.
UCAC4 685-122818 (NSV 14264) was investigated by Cantat-Gaudin et al. (2018) as possibly belonging to the open cluster ASCC 124, also known as Alessi 37 (Kharchenko et al., 2005). They estimated the probability of stars belonging to clusters from GAIA DR1 data. The result was that UCAC4 685-122818 does not belong to ASCC 124 / Alessi 37.
Figure 2: V and g light curves of NSV 14172 and NSV 14264 from ASAS-SN. Time scales are heliocentric julian days.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Parameter** & **NSV 14172** & **NSV 14264** & **Source** \\ plx (mas) & 3.1063\(\pm\)0.0132 & 0.7504\(\pm\)0.0128 & Gaia Coll. (2022) \\ dist. (pc) & 310.577 (307.7-313.5) & 1228.878 & Bailer-Jones et al. (2018) \\ & 310.5780\(\pm\)2.883 & 1274.9399\(\pm\)47.820 & Stassun et al. (2019) \\ & 320.136 (318.6-321.5) & 1271.854 & Bailer-Jones et al. (2021) \\ & 320.7386 & & Gaia Coll. (2022) \\ & 320.462 & 1283.0 & Anders et al. (2022) \\ T\({}_{eff}\) (K) & 5374.33 & 4937.93 & Gaia Coll. (2018) \\ & 5377.6 & & Gaia Coll. (2022) \\ & 5735\(\pm\)154 & 5155\(\pm\)175 & Bai et al. (2019) \\ & 5907.41 & 4843.02 & Anders et al. (2022) \\ log(g) (cm/s2) & 4.3143 & & Gaia Coll. (2022) \\ & 4.4087 & 2.7795 & Anders et al. (2022) \\ \([Fe/H]\) & -0.7598 & & Gaia Coll. (2022) \\ & -0.0396 & -0.1154 & Anders et al. (2022) \\ Radius (R\({}_{\odot}\)) & 1.05 & 7.38 & Gaia Coll. (2018) \\ & 1.1332 & & Gaia Coll. (2022) \\ & 1.018 & 7.818 & Stassun et al. (2019) \\ Luminosity (L\({}_{\odot}\)) & 0.826 & 29.159 & Gaia Coll. (2018) \\ & 0.9659 & & Gaia Coll. (2022) \\ Mass (M\({}_{\odot}\)) & 0.901 & & Gaia Coll. (2022) \\ & 0.996 & & Stassun et al. (2019) \\ & 0.999 & 1.376 & Anders et al. (2022) \\ class & dwarf & giant & Stassun et al. (2019) \\ G Abs. mag. & 4.7556 & & Gaia Coll. (2022) \\ & 4.4833 & 1.0264 & Anders et al. (2022) \\ mag G & 12.3951\(\pm\)0.0002 & 11.7418\(\pm\)0.0001 & Gaia Coll. (2022) \\ mag BP & 12.7822\(\pm\)0.0005 & 12.2755\(\pm\)0.0004 & ” \\ mag RP & 11.8394\(\pm\)0.0003 & 11.0539\(\pm\)0.0003 & ” \\ BP-RP & 0.9427 & 1.2217 & ” \\ \hline \hline \end{tabular}
\end{table}
Table 4: Physical parameters and photometry (rounded to 0.0001 mag) of NSV 14172 and NSV 14264 deduced from GAIA data.
## 5 Checking for misidentification
It is quite common that variable stars are misidentified in discovery papers, giving wrong coordinates. Brun (1964) provides finding charts: we have compared these charts with the Digital Sky Survey (DSS) as provided by the European Southern Observatory 2. We chose to make the comparison with the blue DSS images, expecting a better similarity with the wavelength response of the photographic plates. The comparison is not straightforward, but we can say that the identifications of Brun-49 with UCAC4 674-114332 and of Brun-59 with UCAC4 685-122818 are good with reasonable confidence.
Footnote 2: [http://archive.eso.org/dss/dss/](http://archive.eso.org/dss/dss/)
Since the misidentification could have occurred when the finding charts were drawn, we have examined the light curves of the stars in the fields observed at EsO. We have drawn the light curves of about a thousand stars in the fields of UCAC4 674-114332 and of UCAC4 685-122818. None of them shows light variations characteristic of RR Lyraes. The selection criteria were that the stars be brighter than 15 and fainter than 12 in the V filter, with B-V less than 0.9 in the UCAC4 catalog (Zacharias et al., 2013).
## 6 Conclusion
We have examined the possible light variation of two RR Lyrae candidates included in the list of Brun (1964). Neither of them was found to vary, as shown by our own observations and by the ASAS-SN archive data. From GAIA data we can say that NSV 14172 is a bona fide one-solar-mass main sequence star, while NSV 14264 is a yellow giant. However, the luminosity and effective temperature of NSV 14264 are typical of a star located in the instability strip of pulsating stars.
## 7 Acknowledgements
The present study makes use of the following facilities:
- The GEOS database of RR Lyr stars hosted at Institut de Recherche en Astrophysique et Planetologie, Toulouse, France [http://rr-lyr.irap.omp.eu/dbrr/](http://rr-lyr.irap.omp.eu/dbrr/)
- The SIMBAD database, operated at CDS, Strasbourg, France (Wenger et al., 2000), the VizieR catalogue access tool, also at CDS (the original description of the VizieR service was published in A&A, Supp. 143, 23), [http://vizier.u-strasbg.fr/viz-bin/VizieR](http://vizier.u-strasbg.fr/viz-bin/VizieR), and the "Aladin sky atlas" developed at CDS, Strasbourg Observatory, France (Bonnarel et al., 2000; Boch & Fernique, 2014)
- The International Variable Star Index (VSX) database, operated at AAVSO, Cambridge, Massachusetts, USA
- Data products from the AAVSO Photometric All Sky Survey (APASS). Funded by the Robert Martin Ayers Sciences Fund and the National Science Foundation.
|
2305.10167 | Pragmatic Reasoning in Structured Signaling Games | In this work we introduce a structured signaling game, an extension of the
classical signaling game with a similarity structure between meanings in the
context, along with a variant of the Rational Speech Act (RSA) framework which
we call structured-RSA (sRSA) for pragmatic reasoning in structured domains. We
explore the behavior of the sRSA in the domain of color and show that pragmatic
agents using sRSA on top of semantic representations, derived from the World
Color Survey, attain efficiency very close to the information theoretic limit
after only 1 or 2 levels of recursion. We also explore the interaction between
pragmatic reasoning and learning in multi-agent reinforcement learning
framework. Our results illustrate that artificial agents using sRSA develop
communication closer to the information theoretic frontier compared to agents
using RSA and just reinforcement learning. We also find that the ambiguity of
the semantic representation increases as the pragmatic agents are allowed to
perform deeper reasoning about each other during learning. | Emil Carlsson, Devdatt Dubhashi | 2023-05-17T12:43:29Z | http://arxiv.org/abs/2305.10167v1 | # Pragmatic Reasoning in Structured Signaling Games
###### Abstract
In this work we introduce a structured signaling game, an extension of the classical signaling game with a similarity structure between meanings in the context, along with a variant of the Rational Speech Act (RSA) framework which we call structured-RSA (sRSA) for pragmatic reasoning in structured domains. We explore the behavior of the sRSA in the domain of color and show that pragmatic agents using sRSA on top of semantic representations, derived from the World Color Survey, attain efficiency very close to the information theoretic limit after only 1 or 2 levels of recursion. We also explore the interaction between pragmatic reasoning and learning in multi-agent reinforcement learning framework. Our results illustrate that artificial agents using sRSA develop communication closer to the information theoretic frontier compared to agents using RSA and just reinforcement learning. We also find that the ambiguity of the semantic representation increases as the pragmatic agents are allowed to perform deeper reasoning about each other during learning.
**Keywords: efficient communication; multi-agent reinforcement learning; pragmatic reasoning**
## Introduction
The Rational Speech Act (RSA) framework (Frank and Goodman, 2012; Goodman and Frank, 2016) has emerged as a leading probabilistic model of pragmatic communication formalizing the Gricean view on pragmatics (Grice, 1975). In RSA models, each agent reasons about the other agent's belief, in a game-theoretic fashion, in order to infer the context dependent meaning of an utterance. Models of this type have been used to make accurate predictions about human behavior over a wide range of different and complex tasks (Goodman and Frank, 2016).
It was recently shown by Peloquin et al. (2020) that efficient language use and structure emerge as pragmatic agents interact with each other in a signaling game. In their framework the efficiency was measured as the expected cross-entropy between the speaker and listener distributions.
However, in certain settings, the meaning space may have special structure which needs to be exploited to develop efficient communication. A good example is the domain of colors where it is possible to quantify the similarity between different colors. Hence, in a context where agents are talking about different colors an error might be quantified differently depending on whether the listener confused the color the speaker was referring to with a very similar color or with a completely different color. This is something that is not captured by a purely entropy-based efficiency measure.
Here we take a new approach to the basic question addressed in Peloquin et al. (2020) about how efficient communication arises via the interaction of pragmatic agents. First, to take structure into account, we introduce a notion of a _structured signaling game_, an extension of the standard signaling game, commonly used in work regarding pragmatic reasoning. For this type of signaling game we introduce an extension of the standard RSA which we call _structured-RSA_ (sRSA) where an agent accounts for the structure in the meaning space during the reasoning process. We explore the differences between RSA and sRSA in the color domain, a domain commonly used in cognitive science to explore various linguistic phenomena (Regier et al., 2015; Gibson et al., 2017). Second, we quantify the efficiency of the resulting communication schemes using the information theoretic notions of efficiency from Zaslavsky et al. (2018) and the well-formedness measure from Regier et al. (2007).
We first investigate the use of human representations such as the color naming systems found in the World Color Survey (Cook et al., 2005) as a basis for reasoning by pragmatic agents. We show that efficiency of communication increases much more when agents reason using sRSA compared to agents using RSA and base policies. The most striking result is that sRSA agents initialized with human representations only need a recursion depth of 1 or 2 in order to come very close to the optimal frontier.
Next, we consider computational learning agents interacting with each other in a multi-agent reinforcement learning framework similar to those considered in Kageback et al. (2020); Chaabouni et al. (2021); Carlsson et al. (2021); Ohmer et al. (2022). Our results in this learning framework suggest that pragmatic agents equipped with sRSA learn more efficient color naming systems compared to agents using RSA or pure reinforcement learning. We also find that ambiguity arises to a greater extent in the semantic representation as the computational agents are allowed to perform deeper reasoning about each other. Even though the ambiguity increases, the computational agents using sRSA still develop efficient and accurate communication. In contrast to previous works (Monroe et al., 2017; Kageback et al., 2020; Chaabouni et al., 2021; Hu et al., 2021), which only account for the structure of the color space in the non-contextual meaning function, our approach explicitly accounts for structure in the RSA recursion.
The work of Zaslavsky et al. (2021) is also related to our work. They use the fact that the softmax operator maximizes a trade-off between utility and entropy (Fudenberg and Levine, 1998) to argue that the RSA recursion can be viewed as an alternating maximization of a least-effort objective. They ground the recursion in Rate-Distortion theory and derive a new update of the sender based on the mutual information between meaning and utterance. In contrast to their work, our sRSA is based on the standard RSA recursion, with the difference that our utility function leverages the pair-wise similarity, or distortion, between meanings in the context.
## Structured Signaling Games and sRSA
In our signaling game, two agents, one sender and one listener, observe a context of \(n\) meanings \(\mathcal{C}=\{m_{i}\}\) where each \(m_{i}\) lies in some meaning space \(\mathcal{M}\). The goal of the sender is to describe one of the meanings to the listener. In the standard setup of a signaling game, the agents share a semantic representation, or meaning function, \(\mathcal{L}(m,w)\), which describes how well the utterance \(w\) describes the object \(m\). In our structured version we also assume that the agents share a similarity matrix \(Z\) where element \(Z_{ij}\) describes how similar meanings \(m_{i}\) and \(m_{j}\) are. We assume \(Z_{ij}\in[0,1]\) with \(Z_{ii}=1\). An example of a structured signaling game in the domain of colors is presented in Figure 1.
### Similarity-Sensitive Utility and sRSA
Following Degen et al. (2020), we consider agents equipped with a continuous meaning function, or semantic representation, \(\mathcal{L}(m,w)\in[0,1]\) which describes how well a meaning \(m\) can be mapped to an utterance \(w\). On top of the meaning function, our agents use the RSA in order to reason about each other's behavior given the context \(C\). Given a literal listener proportional to the meaning function, \(L_{0}(m|w)\propto\mathcal{L}(m,w)\), the following recursion is applied in the RSA
\[S_{t}(w|m,C)\propto e^{\alpha U_{t}(m,w,C)} \tag{1}\] \[L_{t}(m|w,C)\propto S_{t}(w|m,C)p(m|C) \tag{2}\]
where \(U_{t}(w,m,C)\) is the expected utility of conveying message \(w\) given the meaning \(m\) in the context \(C\), and \(p(m|C)\) is the prior probability of \(m\) given \(C\). In RSA the utility of the sender is usually based on reducing the epistemic uncertainty the listener carries about the true meaning, and is taken to be the negative surprisal of the listener \(U_{t}(w,m,C)=\log L_{t-1}(m|w,C)\). We will denote an agent using RSA at a recursion depth of \(t\) with parameter \(\alpha\) as \(RSA(t,\alpha)\).
**Similarity-Sensitive Surprisal.** Leinster (2021) recently introduced extensions of entropy and other information theoretic concepts in the context of structured domains, where one has a matrix of similarities \(Z\). Inspired by this, we define the _similarity-sensitive surprisal_ of a listener, \(L\), as
\[I^{Z}(m,w,C)=-\log\sum_{m^{\prime}}Z_{mm^{\prime}}L(m^{\prime}|w,C). \tag{3}\]
Here \(Z(m,m^{\prime})\) is the similarity between the two meanings \(m\) and \(m^{\prime}\). This measure captures the desirable property that a listener shouldn't be as surprised if a speaker used the same word for two similar colors compared to if the speaker used the same word for two very different colors.
Defining the utility as \(U(m,w)=-I^{Z}(m,w)\), we arrive at a structured version of RSA (sRSA) with a similarity-sensitive sender. Note that this utility yields a sender proportional to the power \(\alpha\) of the expected similarity
\[S_{t}(w|m,C)\propto(\sum_{m^{\prime}\in C}Z_{mm^{\prime}}L_{t-1}(m^{\prime}| w))^{\alpha}. \tag{4}\]
In the next section and in Figure 1 we give a simple example in the color domain to illustrate the difference between RSA and sRSA.
Figure 1: An example of a structured signaling game in the color domain.
In the special case where \(Z\) is the identity matrix, i.e. where meanings in the context share no similarity, (3) reduces to the standard surprisal and the sender in (4) reduces to the standard RSA sender. We will denote an agent using sRSA at a recursion depth of \(t\) with parameter \(\alpha\) as \(sRSA(t,\alpha)\).
In general, given a distortion measure on the meaning space \(d:\mathcal{M}\times\mathcal{M}\rightarrow\mathbb{R}^{+}\), we can construct a natural similarity measure as \(Z_{mm^{\prime}}:=e^{-\beta d(m,m^{\prime})}\), \(\beta>0\).
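As a concrete illustration of the recursion in Eqs. (1)-(4), a minimal NumPy sketch is given below (our own illustrative code, not the implementation used for the experiments; passing the identity matrix as `Z` recovers the standard RSA sender):

```python
import numpy as np

def srsa(meaning_fn, Z, alpha=5.0, depth=2, prior=None):
    """sRSA recursion of Eqs. (1)-(4); depth >= 1 is assumed.
    meaning_fn: (n_meanings, n_words) array of literal meanings L(m, w) in [0, 1].
    Z: (n_meanings, n_meanings) similarity matrix; the identity recovers standard RSA.
    Returns the pragmatic sender S(w|m) and listener L(m|w) as arrays."""
    meaning_fn = np.clip(meaning_fn, 1e-12, None)
    n_m = meaning_fn.shape[0]
    prior = np.full(n_m, 1.0 / n_m) if prior is None else np.asarray(prior, dtype=float)
    listener = meaning_fn / meaning_fn.sum(axis=0, keepdims=True)   # L0(m|w) proportional to L(m, w)
    for _ in range(depth):
        expected_sim = Z @ listener                  # sum_m' Z[m, m'] L(m'|w)
        sender = expected_sim ** alpha               # Eq. (4)
        sender /= sender.sum(axis=1, keepdims=True)
        listener = sender * prior[:, None]           # Eq. (2)
        listener /= listener.sum(axis=0, keepdims=True)
    return sender, listener
```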
## Color Domain: Efficiency and Well-formedness
We will use colors as our testbed for pragmatic reasoning in structured signaling games. The seminal work of Zaslavsky et al. (2018) showed that color naming systems in the World Color Survey (WCS) (Cook et al., 2005) optimize an information-theoretic trade-off between complexity and accuracy of the meaning function. Following Zaslavsky et al. (2018) we will take the complexity of a color naming system as the mutual information between word and meaning
\[\text{Complexity}=I(M;W)\]
and the accuracy as
\[\text{Accuracy}=I(W;U).\]
As in Zaslavsky et al. (2018) we assume a meaning \(m\) to be a distribution over color chips proportional to an isotropic Gaussian, \(m(u)\propto e^{-\frac{1}{64}\left\lVert x_{m}-x_{u}\right\rVert^{2}}\), where \(x_{m}\) is the CIELAB vector corresponding to color chip \(m\).
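For completeness, a short sketch of how these two quantities can be evaluated for a given naming distribution (encoder) \(q(w|m)\) is shown below (our own illustrative code; `m_u` holds the Gaussian meaning distributions defined above, one normalized row per meaning):

```python
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits, given the marginal p(x) and the rows p(y|x) of a conditional."""
    joint = p_x[:, None] * p_y_given_x
    p_y = joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_x[:, None] * p_y[None, :])[nz])))

def complexity_and_accuracy(q_w_given_m, p_m, m_u):
    """Complexity I(M;W) and accuracy I(W;U) for a naming distribution q(w|m),
    a need distribution p_m, and meaning distributions m_u (rows sum to one)."""
    complexity = mutual_information(p_m, q_w_given_m)
    joint_mw = p_m[:, None] * q_w_given_m
    p_w = joint_mw.sum(axis=0)
    p_m_given_w = joint_mw / np.where(p_w > 0, p_w, 1.0)[None, :]
    p_u_given_w = p_m_given_w.T @ m_u        # decoder-side distribution over color chips
    accuracy = mutual_information(p_w, p_u_given_w)
    return complexity, accuracy
```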
Regier et al. (2007) also showed that human color naming reflects optimal partitions of the color space w.r.t. a measure of _well-formedness_. The well-formedness criterion was based on the following measure of perceptual similarity between colors
\[\text{sim}(m,m^{\prime})=e^{-0.001\left\lVert x_{m}-x_{m^{\prime}}\right\rVert ^{2}} \tag{5}\]
This similarity measure will be used in our sRSA model in the downstream analysis.
**sRSA vs RSA.** Figure 1 gives a simple example of a structured signaling game where the context consists of 6 different colors.
Figure 2: Results for applying pragmatic reasoning on-top of the color naming data in WCS. Here depth of recursion indicates the depth of the final sender in the recursion. We use \(\alpha=5\) in the recursions. The black square indicates the position of the base agent of the language Karajá.
The meaning function mapping color to word is based on the naming data found in the World Color Survey for the language Culina, and is shown in Figure 1(a). The similarity matrix, which describes how similar two colors are w.r.t. the similarity measure defined in (5), is shown in Figure 1(b). We use \(RSA(t,\alpha)\) to denote the result of applying depth \(t\) RSA and \(RSA(\infty,\alpha)\) to denote the limit as \(t\rightarrow\infty\), and similarly for sRSA. Figure 1(c) and Figure 1(d) show the limit points for RSA and sRSA (with \(\alpha=5\)). Since RSA minimizes only the surprisal of the listener and does not account for the similarity structure, we observe that the lighter blue color and the green color are mapped to the same word. Unlike RSA, the sRSA takes the similarity matrix into account and converges to a solution where the first 3 colors can be uniquely determined, while the last 3, all variants of blue, are mapped to the same word.
### Human Representations
The WCS data consist of naming data from 110 languages, with an average of 25 speakers for each language. Since the WCS data contain data from speakers, we believe it is more appropriate to consider a slightly different version of the RSA recursion, where the agents start reasoning from a literal sender proportional to the naming data from WCS1. For a language \(l\) in the WCS study and corresponding naming data \(D^{l}(w,m)\) we consider the following recursion
Footnote 1: As in Regier et al. (2015), we only consider major color terms. We say that a color term is major if it is the mode category for at least 10 chips in the Munsell Chart.
\[S^{l}_{0}(w,m,C) \propto D^{l}(w,m)\] \[L^{l}_{t}(m|w,C) \propto S^{l}_{t-1}(w|m,C)p(m|C)\] \[S^{l}_{t}(w|m) \propto e^{U_{t}(w,m,C)}.\]
We consider a structured signaling game with the context, \(\mathcal{C}\), being the entire Munsell chart. Hence, a sender is given a certain color chip from the Munsell chart and should describe this to the listener, which then produces a distribution over the color chips in the chart. The context we consider here is much larger compared to the ones considered in, for example, Monroe et al. (2017). The reason is that we are interested in larger contexts where the number of meanings is much larger than the number of utterances and exact communication is impossible. We will consider a uniform need distribution over the chart and leave it for future work to study skewed priors like the one used in Zaslavsky et al. (2018). As a baseline we will consider the base agents from the recursion, i.e. a sender proportional to the naming data and the corresponding Bayesian listener. The information-theoretic frontier is computed using the Blahut-Arimoto algorithm with the annealing scheme outlined in Zaslavsky et al. (2018) and a uniform prior. The well-formedness frontier is computed using the Correlation Clustering approach described in Kageback et al. (2020).
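For orientation, the fixed-point update at the heart of such a frontier computation can be sketched as follows (a single, un-annealed iterative-IB step; this is our own schematic code and does not reproduce the annealing procedure actually used in Zaslavsky et al. (2018)):

```python
import numpy as np

def ib_update(q_w_given_m, p_m, m_u, beta):
    """One self-consistent update for the objective I(M;W) - beta * I(W;U).
    Assumes every entry of m_u is strictly positive (true for the Gaussian meanings above)."""
    p_w = (p_m[:, None] * q_w_given_m).sum(axis=0)
    p_m_given_w = (p_m[:, None] * q_w_given_m) / np.where(p_w > 0, p_w, 1.0)[None, :]
    m_hat = p_m_given_w.T @ m_u                              # decoder m_hat_w(u)
    # KL[m(u) || m_hat_w(u)] for every (meaning, word) pair
    neg_H = (m_u * np.log(m_u)).sum(axis=1, keepdims=True)   # (n_meanings, 1)
    kl = neg_H - m_u @ np.log(np.maximum(m_hat, 1e-300)).T   # (n_meanings, n_words)
    new_q = p_w[None, :] * np.exp(-beta * kl)
    return new_q / new_q.sum(axis=1, keepdims=True)
```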
In Figure 2(a) we compare the efficiency of the base agents to the efficiency of the pragmatic agents after performing one recursion in the respective reasoning model. We observe that _pragmatic reasoning leads to more complex and accurate behavior for both RSA and sRSA_ compared to the base agents. However, we also observe that the RSA agents have not moved closer to the optimal frontier, while the sRSA agents are very close to the frontier _after only one recursion_. Interestingly, when the recursions are allowed to go to the limit, Figure 2(c), the RSA agents seem to move away from the optimal frontier while the sRSA converges to naming distributions very close to the optimal frontier.
Further, Figure 2(b) illustrates the well-formedness of the agents after one recursion. The pragmatic agents greatly improve the well-formedness of the base agents _after only one recursion_. As observed for efficiency as well, we see that sRSA, which takes the structure into account, improves the well-formedness to a greater extent. In the limit, see Figure 2(d), the sRSA agents converge to optimal naming distributions w.r.t. the well-formedness criterion.
Many studies, including the recent one in Frank et al. (2021), have reported that humans rarely use more than 1 or 2 levels of recursion in signaling games. It is therefore intriguing that the sRSA needs only 1 or 2 recursions to reach the information-theoretic frontier. We believe this is something worth exploring further in the future.
Figure 4: Karajá, Brazil. The sRSA model refines and smooths the colormap in only one recursion. In the limit, we observe that the sRSA approaches the true optimal agent w.r.t. well-formedness (CC Agent). Each color term is colored with the average color mapped to the term.
Figure 3: Trajectories of RSA and sRSA for Karajá.
An outlier, when it comes to both efficiency and well-formedness, is the base agent of the language Karajá, highlighted by the black square in Figures 2(a) and 2(b). In Figure 3 we illustrate the efficiency and well-formedness of the corresponding RSA and sRSA agents as we increase the recursion depth. Interestingly, applying a few steps of sRSA, see Figure 3, yields a near-optimal agent, both in terms of well-formedness and of efficiency. This suggests that even though the naming distribution of Karajá is not efficient and well-formed in itself, it serves as a good initialization for a pragmatic and rational agent - but for an agent that takes domain structure into account. Without taking the structure into account, the RSA agent does not lead to more efficient behavior; instead it seems to move away from the optimal frontier.
In Figure 4, we see the corresponding mode-maps for the different RSA versions at depth 1 and in the limit. We clearly see that taking the structure into account in the reasoning process produces agents that have very smooth mode-maps already at depth 1, see Figure 4(e). Here we also see that the standard RSA objective, see Figures 4(c) and 4(d), fails to produce smooth mode-maps since it does not account for the structure of the domain space. Worth highlighting is that the sRSA, Figure 4(f), seems to converge to a mode-map very close to the optimal mode-map w.r.t. the well-formedness measure, see Figure 4(b). This is perhaps expected since the sRSA utility considers perceptual similarity.
### Artificial Agents
In our multi-agent reinforcement learning framework, two agents play a structured signaling game about colors. At the beginning of each game, one agent is randomly assigned to be the speaker agent and the other one acts as the listener. Each agent keeps its own parameterization of the meaning function, \(\mathcal{L}_{\theta}\) and \(\mathcal{L}_{\phi}\), using neural networks with parameters \(\theta\) and \(\phi\), respectively. Given a context, both agents apply either RSA or sRSA on the meaning function for \(t\) iterations to get their corresponding policies \(S_{t,\theta}(w|m,\mathcal{C})\) and \(L_{t,\phi}(m|w,\mathcal{C})\). The speaker agent then samples an utterance given the target according to \(S_{t,\theta}(w|m,\mathcal{C})\), and upon receiving the utterance, the listener samples a guess according to the distribution \(L_{t,\phi}(m|w,\mathcal{C})\). A binary reward is given to both agents depending on whether the listener produced a correct guess, and both agents update their respective meaning functions using the REINFORCE objective (Williams, 1992), which for the sender agent corresponds to taking the gradient of \(r\log S_{t,\theta}(w|m,\mathcal{C})\) and for the listener the gradient of \(r\log L_{t,\phi}(m|w,\mathcal{C})\). A similar computational setup was recently considered in Ohmer et al. (2022).
We take each neural network to have one hidden layer of 25 neurons, with ReLU activation for the hidden layer and sigmoid activation in the output layer. We train the agents on contexts consisting of 5 colors sampled from the Munsell chart and represented as vectors in CIELAB space. We vary the depth of the agent from 0 to 5, where depth 0 indicates a sender interacting with a literal listener, and we set \(\alpha=5\). During the evaluation, the context given to the agents is the entire Munsell chart, as was done for human representations. Each configuration of agents is averaged over 100 different random seeds. We update the neural networks using standard stochastic gradient descent, with the learning rate set to 0.001. The agents were trained for 10 000 updates using a batch size of 100. We compare the results to a pure reinforcement learning baseline (RL) with a meaning function of the same size as that of the pragmatic agents, but with linear activation in the output layer.
Figure 5: In the following plots, depth indicates the level of the final listener in the recursion, and the error bars correspond to the width of the 95% confidence interval. We observe that, as the depth of recursion increases, the accuracy and complexity of the agent differs more compared to the accuracy and complexity of the corresponding meaning function. Noteworthy is that the complexity and accuracy of the sRSA agents increase with recursion depth, while the complexity and accuracy of the corresponding meaning functions decrease. Hence, as the reasoning depth increases, the ambiguity of the learned meaning function increases. The efficiency and accuracy of the agents and meaning functions should be the same at depth 0 and 1 since both correspond to the sender \(S_{1}(w|m)\).
The RL sender performs a softmax operation over words given a color, and the RL listener performs a softmax operation over colors given a word. This color game is similar to the ones considered in Kageback et al. (2020); Chaabouni et al. (2021), with the difference that the sender observes the context in our setup.
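To make the setup concrete, the sketch below shows one such interaction in PyTorch (layer sizes, \(\alpha\) and the binary reward follow the text; the function names, the way the sRSA recursion is embedded, and all remaining choices are our own illustrative assumptions):

```python
import torch
import torch.nn as nn

class MeaningNet(nn.Module):
    """Meaning function L(m, w): one hidden layer of 25 ReLU units and sigmoid outputs
    over the vocabulary (sizes follow the text; all names here are our own)."""
    def __init__(self, n_words, color_dim=3, hidden=25):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(color_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_words), nn.Sigmoid())

    def forward(self, colors):              # colors: (n_context, 3) CIELAB coordinates
        return self.net(colors)             # (n_context, n_words) meaning values in [0, 1]

def srsa_policies(meaning, Z, alpha=5.0, depth=2):
    """Differentiable sRSA recursion (depth >= 1), uniform prior over the context.
    Z is an (n_context, n_context) torch tensor of pairwise similarities."""
    listener = meaning / meaning.sum(dim=0, keepdim=True)            # L0(m|w)
    for _ in range(depth):
        sender = (Z @ listener).clamp_min(1e-12) ** alpha            # Eq. (4)
        sender = sender / sender.sum(dim=1, keepdim=True)
        listener = sender / sender.sum(dim=0, keepdim=True)          # Eq. (2)
    return sender, listener

def reinforce_loss(speaker_net, listener_net, context, target_idx, Z):
    """One game: sample a word and a guess, return the negative REINFORCE objective."""
    S, _ = srsa_policies(speaker_net(context), Z)
    _, L = srsa_policies(listener_net(context), Z)
    w = torch.multinomial(S[target_idx].detach(), 1).item()
    guess = torch.multinomial(L[:, w].detach(), 1).item()
    reward = 1.0 if guess == target_idx else 0.0
    # r * log pi(own action); minimised with SGD (learning rate 0.001 in the text).
    return -reward * (torch.log(S[target_idx, w]) + torch.log(L[guess, w]))
```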
In Figure 6, we observe the efficiency of the agents when performing 2 recursions. The RSA agents develop less efficient communication compared to the sRSA agents and the RL baseline. The sRSA agents develop communication closer to the optimal frontier compared to the RL and RSA agents, illustrating that pragmatic agents with appropriate utility functions develop efficient communication. It is worth highlighting that the RSA and RL agents account for the structure of the color space in their non-contextual meaning functions, i.e. in their neural networks. The results in Figure 6 thus suggest that the efficiency of the sRSA agents cannot be mimicked by just a graded, or fuzzy, meaning function, but is due to explicitly accounting for the structure in the recursion. We also note that the non-pragmatic RL baseline learns color naming systems which are more efficient than the pragmatic RSA agents, and that these systems are also close to the information-theoretic frontier (the efficiency of RL agents w.r.t. this objective was first reported in Chaabouni et al. (2021)).
In Figure 5 we see how the complexity and accuracy of the agents and of the meaning function change as the agents are allowed to perform deeper reasoning during learning. As the recursion depth increases, the sRSA agents develop more complex and accurate behavior, while ambiguity emerges to a higher extent in the corresponding meaning functions, see Figure 5(b). Hence, the sRSA agents are able to use ambiguity as a tool to reach greater communicative efficiency. This is consistent with the observations in Peloquin et al. (2020) and the claims in Piantadosi et al. (2012) that ambiguity is associated with efficient communication. The ambiguity of the meaning function increases with recursion depth also for the RSA agents, as can be seen in Figure 5(a). However, for the RSA agents we also observe that the accuracy and complexity of the agent decrease after a few recursions, which seems to indicate that a small number of recursions is better for developing accurate behavior compared to a higher recursion depth when using RSA.
## 5 Conclusions
In this work we have explored pragmatic reasoning in a structured signaling game in the color domain. We explored human representations from the World Color Survey, as well as representations learned by artificial agents using reinforcement learning that incorporate pragmatic reasoning. We have seen that, in both cases, incorporating the domain structure in the reasoning process greatly improves the efficiency in the standard information-theoretic sense, compared to using the standard RSA recursion.
We believe that an interesting future direction is to extend the idea of a structured signaling game and sRSA to more complex environments. An example is a scenario where meanings constitute several different features, and not just one, as considered here. Another interesting future direction, pointed out by one of the reviewers, is to explore scenarios where agents do not share the exact same notion of similarity.
## 6 Acknowledgements
We thank Terry Regier and the reviewers for providing valuable input on this work. We also want to thank Fredrik D. Johansson, Emilio Jorge and Niklas Akerblom for providing valuable comments on a previous draft of this paper.
This work was supported by funding from Chalmers AI Research Center (CHAIR) and the computations in this work were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC).
|
2306.06255 | Early Malware Detection and Next-Action Prediction | In this paper, we propose a framework for early-stage malware detection and
mitigation by leveraging natural language processing (NLP) techniques and
machine learning algorithms. Our primary contribution is presenting an approach
for predicting the upcoming actions of malware by treating application
programming interface (API) call sequences as natural language inputs and
employing text classification methods, specifically a Bi-LSTM neural network,
to predict the next API call. This enables proactive threat identification and
mitigation, demonstrating the effectiveness of applying NLP principles to API
call sequences. The Bi-LSTM model is evaluated using two datasets. Additionally,
by modeling consecutive API calls as 2-gram and
3-gram strings, we extract new features to be further processed using a
Bagging-XGBoost algorithm, effectively predicting malware presence at its early
stages. The accuracy of the proposed framework is evaluated by simulations. | Zahra Jamadi, Amir G. Aghdam | 2023-06-09T20:57:27Z | http://arxiv.org/abs/2306.06255v1 | # Early Malware Detection and Next-Action Prediction
###### Abstract
In this paper, we propose a framework for early-stage malware detection and mitigation by leveraging natural language processing (NLP) techniques and machine learning algorithms. Our primary contribution is presenting an approach for predicting the upcoming actions of malware by treating application programming interface (API) call sequences as natural language inputs and employing text classification methods, specifically a Bi-LSTM neural network, to predict the next API call. This enables proactive threat identification and mitigation, demonstrating the effectiveness of applying NLP principles to API call sequences. The Bi-LSTM model is evaluated using two datasets. Additionally, by modeling consecutive API calls as 2-gram and 3-gram strings, we extract new features to be further processed using a Bagging-XGBoost algorithm, effectively predicting malware presence at its early stages. The accuracy of the proposed framework is evaluated by simulations.
malware, early detection, early mitigation, Bi-LSTM, NLP
## I Introduction
Malware is a term describing a malicious program that is installed on a platform such as a personal computer, harming the user by damaging the system, stealing information, or holding the system hostage for blackmail purposes [1]. The number of reported cyberattacks continues to increase, and new malware is constantly produced by attackers. According to the 2021 SonicWall Cyber Threat Report, internet of things (IoT) malware attack volume in the first six months of 2021 increased by 59% compared to the previous year [2]. Additionally, the AV-TEST Institute indicates that every day, 450,000 new malicious programs (malware) and potentially unwanted applications (PUA) are registered [3].
As new families of malicious threats emerge and variants of malware continue to develop, conventional signature-based methods for detecting malware often fall short in identifying a large number of threats due to their reliance on known patterns [4]. Fortunately, learning-based methods are able to detect malware more efficiently since they can recognize and learn the patterns of previously unseen cyber-attacks [4]. Early malware detection and mitigation is an important task, especially for the types of malware that are costly to recover from [5]. It can save resources, minimize damage and protect sensitive information. One way to detect the malware at its early stage is to continuously monitor the application programming interface (API) calls made by the malware during its run-time and analyze them dynamically [6]. After early detection, it is desired to block the attack using a proper prediction strategy before it affects the other parts of the system.
A sequence of API calls can be modeled as a natural language processing (NLP) task because of their similarities, e.g., following a specific syntax and grammar, and the importance of context in understanding the meaning of a request [7]. Several studies have been conducted to detect malware by leveraging NLP techniques. Sundarkumar et al. [8] used text-mining and topic-mining techniques on API call sequences for malware detection. Li et al. [8] built a joint representation of API calls to depict software behaviors and then implemented a Bi-LSTM model to learn the relationship between API calls in a sequence and performed malware detection. Liu et al. [9] employed several deep learning-based methods for malware detection based on API calls extracted from the Cuckoo sandbox.
In this study, API call sequences are modeled as a natural language construct, and principles of NLP are utilized for early malware detection and next-step prediction. We first propose a framework for detecting malware at its early stage. For this purpose, sequences of API calls are modeled as 2-gram and 3-gram strings and used as new features. The Bagging-XGBoost algorithm [10] is then used for malware detection and feature importance identification. In the second part of the work, we predict the malware's upcoming action(s). A bidirectional long short-term memory (Bi-LSTM) neural network, a common method in text classification tasks, is used to predict the next API call(s). The proposed method helps proactively detect and block the attack before it can cause any significant damage.
The rest of the paper is organized as follows. In Section II, we describe the datasets and the learning-based approaches used for malware detection and API call prediction. Section III presents the obtained results and effectiveness of the proposed framework in detecting malware and predicting API calls. Finally, in Section IV, the contributions are summarized and potential directions for future research in this area are suggested.
## II Proposed Methodology
In this section, we first introduce the two datasets used in this study and will then describe the method used to detect the malware at its early stage and predict its upcoming action(s). The proposed framework in this study is represented in Figure 1.
### _Datasets_
The first dataset used in this study contains 42,797 malware and 1,079 goodware API call sequences [11]. Each sequence contains the first 100 non-repeated consecutive API calls associated with the parent process, with each API call assigned a numerical integer value ranging from 0 to 306. The dataset's large number of malware samples provides a diverse range of malicious behaviors for the models to learn from, while the inclusion of goodware sequences allows the models to differentiate between benign and malicious behavior. This dataset is used for malware detection.
The second dataset comprises 7,107 malware samples from various families, with each API call in a sequence assigned a numerical integer value ranging from 0 to 341 [12]. This dataset's diversity of malware families enables a comprehensive analysis of the proposed method's effectiveness across different types of malware, while the large number of malware samples ensures that the models are trained on a broad range of malicious behaviors, enabling them to detect and predict various threats.
### _Early Malware Detection_
We use the first dataset [11] for detecting the malware at its early stage. Due to the importance of early malware detection, API calls for this dataset are only extracted from the parent process, which is primarily responsible for initiating other processes. Furthermore, since the aforementioned dataset is not quantitatively balanced in terms of goodware and malware samples, the random oversampling method is used to increase the number of goodware and balance the dataset for both train and test sets. However, our objective is to also identify malicious activities as the best indicators of the existence of malware. To this end, sequences of API calls, modeled as 2-gram and 3-gram strings of consecutive API calls, are tokenized and used as new features. The most important features are then identified.
The extreme gradient boosting (XGBoost) algorithm [13] is used for the binary classification and, subsequently, the early detection of the malware. XGBoost is a popular and powerful machine learning algorithm for classification, regression, and ranking tasks. It is an ensemble method that combines the predictions of multiple decision trees to make final predictions [14]. We then build an XGBoost bagging ensemble to improve the accuracy and robustness of the malware detection task. The XGBoost classifier is also used to rank the importance of 2-grams and 3-grams of API calls in terms of their prediction capabilities. Three XGBoost classifiers with learning rates of \(0.01,0.05,0.1\), maximum depths of \(4,3,5\), and numbers of estimators equal to \(100,200,300\), respectively, are used to perform the feature extraction and detection tasks. Hyperparameters for each classifier were selected using grid search.
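A minimal scikit-learn/XGBoost sketch of this ensemble is shown below (our own illustrative code; the paper does not state how the three classifiers' outputs are combined, so soft voting is an assumption, and `X_train`/`y_train` stand for the tokenized 2-/3-gram features and binary labels):

```python
from xgboost import XGBClassifier
from sklearn.ensemble import VotingClassifier

# Three XGBoost base learners with the hyperparameters quoted in the text.
configs = [(0.01, 4, 100), (0.05, 3, 200), (0.1, 5, 300)]
estimators = [(f"xgb{i}", XGBClassifier(learning_rate=lr, max_depth=depth, n_estimators=n))
              for i, (lr, depth, n) in enumerate(configs)]

ensemble = VotingClassifier(estimators=estimators, voting="soft")
# ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)

# Ranking the 2-/3-gram features by importance with a single fitted booster:
# importances = estimators[0][1].fit(X_train, y_train).feature_importances_
```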
### _Next Action Prediction_
This subsection presents the main contribution of this work. To the best of the authors' knowledge, no prior research has been reported on predicting upcoming malware actions by predicting the next APIs. We address this problem by modeling the sequence of API calls as a natural language input. We then feed this sequence to the Bi-LSTM model to predict the next APIs one by one, which are indicative of the malware's next actions. By predicting the next steps of the attack, proper mitigation techniques can be used to prevent the malware from affecting the other parts of the system.
Bi-LSTM is a type of recurrent neural network capable of capturing the dependencies between the elements of sequential data. Since it processes the input sequence in both forward and backward directions, it can efficiently identify the words before or after another word [15]. To ensure that the developed model is able to predict the next API calls of a given sequence efficiently, Bi-LSTM model is tested on both datasets. The N-gram method is applied to both datasets to convert the API calls sequence into a feature structure which helps the Bi-LSTM model learn the relationship of the API calls with each other [16].
To build N-gram features for a sequence of API calls, we consider prefixes of \(n\) consecutive API calls, where \(n\) ranges from \(2\) to the length of the sequence minus \(1\), and use the API call immediately following each prefix as the label. For example, to generate the first data point, we take the first two API calls in the sequence as the input and use the third API call as the label.
An example of N-gram features for a subsequence of one of the malware samples containing \(7\) API calls is given in Table I.
N-grams can capture the patterns in the sequence of API calls, indicating certain behaviors or outcomes. We can then use these N-gram features to train the Bi-LSTM model for predicting the next API call in a given sequence. The predicted API call is then appended to the input sequence and fed back to the Bi-LSTM network. In this way, we are able to predict multiple API calls, which indicate the future actions of a malware.
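A minimal sketch of this feature construction (our own illustrative code, assuming each sample is already a list of integer-encoded API calls; the padding length is an arbitrary choice):

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

def make_ngram_pairs(sequence, max_len):
    """Build (prefix, next-call) training pairs from one API-call sequence:
    prefixes of length 2 .. len(sequence)-1, labeled with the call that follows."""
    inputs, labels = [], []
    for n in range(2, len(sequence)):
        inputs.append(sequence[:n])
        labels.append(sequence[n])
    # Left-pad the variable-length prefixes so they can be batched together.
    return pad_sequences(inputs, maxlen=max_len, padding="pre"), labels

# Example with arbitrary integer codes:
# X, y = make_ngram_pairs([12, 74, 58, 15, 74, 58, 15], max_len=6)
```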
Fig. 1: Proposed framework for a) malware detection and b) next-action prediction
The proposed Bi-LSTM neural network has four layers. The first one is an embedding layer that converts the input sequence into dense vectors capturing its semantic and contextual information. The second one is a Dropout layer with a \(30\%\) rate to prevent the model from overfitting. The third one is a Bi-LSTM layer with a size of 150 neurons. Finally, the last one is a dense layer, which completes the classification task. The Adam optimization algorithm with a learning rate of \(0.01\) is employed. The cost function used in this network is categorical cross-entropy, which is widely used in text classification problems. The aforementioned configuration and hyperparameter values were obtained after several experiments to achieve the highest level of performance.
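A Keras sketch matching this description (illustrative only: the vocabulary size is a placeholder, and the embedding dimension is not quoted in the text, so its value here is an assumption):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Dropout, Bidirectional, LSTM, Dense
from tensorflow.keras.optimizers import Adam

def build_next_api_model(vocab_size=342, embed_dim=64):
    """Embedding -> Dropout(0.3) -> Bi-LSTM(150) -> softmax over the API vocabulary.
    Labels are assumed one-hot encoded, matching the categorical cross-entropy loss."""
    model = Sequential([
        Embedding(input_dim=vocab_size, output_dim=embed_dim),   # embed_dim is our assumption
        Dropout(0.3),
        Bidirectional(LSTM(150)),
        Dense(vocab_size, activation="softmax"),
    ])
    model.compile(optimizer=Adam(learning_rate=0.01),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```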
## III Experimental Results
The proposed approach is able to predict the upcoming actions of a malware by predicting the next API calls that are going to be made by the malware one at a time. Figures 2 and 3 represent the performance of the Bi-LSTM model in predicting the next \(10\) API calls for \(2\) given input sequences. Input sequences are chosen randomly from the test sets belonging to the first [11] and second dataset [12]. In both figures, the sequence presented on the top is the predicted sequence, while the one presented below represents the ground truth.
In order to evaluate the prediction ability of the Bi-LSTM network, the ROC score was calculated for each API call label. The ROC score is a common evaluation metric for classifiers: the performance of a classifier can be evaluated by measuring the area under the ROC curve, which is a plot of the true positive rate (TPR) against the false positive rate (FPR). By calculating the ROC score for each label, we were able to identify the API calls for which the Bi-LSTM network was struggling to make predictions. We then realized that, for both datasets, these APIs are the ones that are not commonly called during the run-time of the samples, and the model struggled to accurately predict these rare API calls when presented with new, unseen data samples. Table III presents API functions that are rarely used in the two datasets but are present in both.
For early detection of malware, we generate new features containing two and three consecutive API calls extracted from the parent process. We then extract the 10 most important features identified by the XGBoost classifier and compare their occurrence in malware and benign samples. Table IV indicates the most important features and their occurrence in goodware and malware samples. Class \(0\) and class \(1\) correspond to benign and malware samples, respectively. The features that appear more often in malware samples are then investigated to identify any potential malicious activities that occur during the run-time of malware. Table V presents these features ordered by their importance.
The behavior corresponding to the first API call sequence specified in Table V is to load a malicious Dynamic Link Library (DLL) into memory, retrieve the address of a specific function within the DLL, and obtain a certificate from the system certificate store. The second suspicious API call sequence appears when the malware locates a specific function within a loaded module, creates a new file on the system, and manipulates the file pointer to write data to a specific location within the file. The third sequence corresponds to extracting system information, allocating memory in the virtual address space, and loading a DLL into memory. The fourth corresponds to manipulating the Windows Registry, which is a hierarchical database storing configuration settings and other system information. The last one corresponds to locating and duplicating a handle to an object, such as a file, registry key, or process, in order to gain access to it and perform some malicious actions.
Bagging-XGBoost is then used to classify an ongoing process as malware or goodware. Table VI reports the classification performance of the XGBoost model. The results show that the proposed malware detector framework is able to detect the malware at its early stage with high accuracy. The precision score of \(92.70\%\) shows that the model has a false alarm rate of \(7.3\%\), which is indicative of the good performance of our early malware detector.
## IV Conclusions and Future Work
This paper presents a framework for early-stage malware detection and mitigation by treating API call sequences as natural language inputs and employing text classification methods, specifically a Bi-LSTM neural network, to predict the next API call. This study demonstrates that Bi-LSTM, a neural network architecture commonly used in NLP tasks, is an effective method for predicting API calls due to the similarities between the sequence of API calls and natural language structure [6]. The model is able to predict the next action of the malware by predicting the next API calls that are most likely to occur, one at a time, allowing early mitigation. Additionally, by modeling consecutive API calls as 2-gram and 3-gram strings, we extract new features to be further processed using a Bagging-XGBoost algorithm. This enables us to identify the sequences of API calls and their corresponding activities that may suggest the presence of malware.
For future work, one can investigate alternative NLP techniques, such as transformers and attention mechanisms, to enhance the malware detection and prediction capabilities of the framework. Moreover, evaluating the real-time performance of the proposed framework for online malware detection and mitigation could provide insights into its potential for practical deployment in real-world cybersecurity scenarios. Finally, this study only explored the prediction of API calls one step at a time. One direction could involve extending the framework to multistep-ahead prediction of API calls.
|
2307.11400 | The role of the pion in the lineshape of the $X(3872)$ | We determine the contribution of long-range pion interactions to the
$X(3872)$ dynamics, assuming it is a loosely bound $D^0 \bar{D}^{*0}$ molecule.
Our result is based on the distorted wave Born approximation in
non-relativistic quantum mechanics. Despite their long-range nature, we find
that pion interactions cannot produce a large and negative effective range.
Nonetheless, they introduce imaginary parts. In particular, they contribute to
the total decay width of the $X(3872)$ with a term associated with, but not
precisely corresponding to, the $D^*$ width. Our approach can also be applied
to the recently discovered $T_{cc}^+$ states. | Angelo Esposito, Davide Germani, Alfredo Glioti, Antonio D. Polosa, Riccardo Rattazzi, Michele Tarquini | 2023-07-21T07:38:26Z | http://arxiv.org/abs/2307.11400v2 | # The role of the pion in the lineshape of the \(X(3872)\)
###### Abstract
We determine the contribution of long-range pion interactions to the \(X(3872)\) dynamics, assuming it is a loosely bound \(D^{0}\bar{D}^{*0}\) molecule. Our result is based on the distorted wave Born approximation in non-relativistic quantum mechanics. Despite their long-range nature, we find that pion interactions cannot produce a large and negative effective range. Nonetheless, they introduce imaginary parts. In particular, they contribute to the total decay width of the \(X(3872)\) a term associated with, but not precisely corresponding to, the \(D^{*}\) width. Our approach can also be applied to the recently discovered \(T_{cc}^{+}\) state.
## I Introduction
The \(X(3872)\) was the first heavy-light four-quark resonance to be observed. Yet, after almost two decades, its nature is still up for debate [1; 2; 3; 4]. What makes this state particularly interesting, and challenging, is the presence of (at least) two parametric coincidences, or _fine tunings_.
The first tuning is given by the extreme closeness of the mass of the \(X(3872)\) to the \(D^{0}\bar{D}^{*0}\) threshold. Indeed, as of today, we only have an upper bound on the distance \(B\) from threshold: \(B\lesssim 120\) keV [5]. There are two different perspectives on this fact, corresponding to two competing explanations for the nature of the \(X(3872)\). On the one hand, the \(X(3872)\) could be a genuine tetraquark [6; 7], i.e. a compact four-quark hadron held together by gluon mediated forces, hence present in the spectrum of QCD at distances of the order of a fermi. In this case, given the very different color configuration of the two-meson state and of the tetraquark, one would naively expect the natural scale of \(B\) to be roughly \(\Lambda_{\rm QCD}\). In the face of that expectation, the observed value of \(B\) would correspond to a remarkable \(1/10^{3}\) fine tuning. However, one observes that all tetraquarks are systematically found within \(10-20\) MeV of their corresponding two-meson threshold. There could thus exist a genuine explanation within QCD [8], possibly based on the \(1/N\) expansion, for this systematic property. That would still single out the \(X(3872)\) as a \(1/10^{2}\) outlier in fine tuning space.
On the other hand, the \(X(3872)\) could be a very extended and loosely bound \(D^{0}\bar{D}^{*0}\) state1[9; 3; 10], a hadronic molecule appearing in the QCD spectrum at distances larger than the fermi, very much like the deuteron. In this case, the closeness to threshold would arise from a tuning of the short distance \(D^{0}\bar{D}^{*0}\) interaction, resulting in a large scattering length, \(a\) (see, e.g., [11]), and in a correspondingly small binding energy \(B\sim 1/a^{2}m_{D}\). To match the observed value of \(B\) one would need \(1/a\sim 13\) MeV, corresponding to a mild \(1/10\) tuning with respect to the naivest expectation \(1/a\sim\Lambda_{\rm QCD}\).
Footnote 1: As \(X(3872)\) is \(C\)-even, we would actually be dealing with the superposition \(D^{0}\bar{D}^{*0}+\bar{D}^{0}D^{*0}\). Throughout our discussion the proper \(C\) quantum number is always understood, even when not properly indicated.
The second tuning, instead, is the fact that the mass of the \(\bar{D}^{*0}\) is almost exactly equal to the sum of the masses of the \(D^{0}\) and the neutral pion: \(m_{D^{*}}-m_{D}-m_{\pi}\simeq 7\) MeV [12]. In particular, this implies that, in this channel, the pion mediates long-range interactions, and that it cannot be integrated out in an effective theory below the QCD scale.
A promising way of discriminating between the different options for the \(X(3872)\) is offered by the study of its lineshape. In particular, the value of the effective range, \(r_{0}\), is a good discriminator [13; 14; 15; 16; 17; 18; 19; 20; 21], as already pointed out by Weinberg in the 60's, when asking the same question for the deuteron [22]. On the one hand, for a molecular state, one expects \(r_{0}\) to be controlled by the very size of the bound mesons, that is \(r_{0}\sim 1\) fm \(\sim 1/m_{\pi}\).2 This expectation also abides by the modern effective field theory perspective: unlike the scattering length \(a\), the effective radius \(r_{0}\) is associated with an _irrelevant operator_ and cannot be tuned larger than the cut-off scale [23]. On the other hand, one finds, by explicit computation, that the presence of an interacting compact object close to the two meson threshold produces a negative \(r_{0}\) with an absolute value larger than \(1/m_{\pi}\). Again this is in agreement with the EFT perspective:
the only way to enhance an irrelevant operator is to lower the cut-off scale, which in this case is controlled by the separation of the tetraquark from the two meson threshold.
Now, as it was pointed out in [13], a recent LHCb analysis [24] suggests the second situation for \(r_{0}\) (but see also [14]). However, because of the accidental tuning we mentioned above, the pion-mediated interaction is also characterized by a large scale. One is thus led to wonder if that could play a role in producing a large and negative effective range.
In this work, we address this question using a non-relativistic quantum mechanical treatment based on the distorted wave Born approximation. An expression for the effective range for the \(X(3872)\), including the effect of pions, already appeared in [25], where a non-relativistic effective theory for the \(\bar{D}^{*0}\), the \(D^{0}\) and the pion was used. The approach we use here considerably simplifies the problem, bypassing a lengthy diagramatics. Our results, to the best of our understanding, are in quantitative agreement with Ref. [25], modulo possibly subdominant terms, and modulo the sign of imaginary parts. Comparing to the most recent experimental data we find that, despite their long-range nature, pion-mediated effects are too weak to generate the large effective radius suggested by observations. However, our result introduces a new structural feature: a complex effective range. Moreover, we find a pion-induced long-range contribution to the decay width of a molecular \(X(3872)\) that does not simply reduce to the decay width of the \(D^{*}\to D\pi\) process. This contribution should be added to those associated with other (short distance) channels, like for instance \(J/\psi\omega\) or \(J/\psi\rho\).
We stress that our analysis can be applied with many similarities to the recently discovered doubly-charmed \(T_{cc}^{+}\) state [26; 27], which is manifestly exotic and shares several features with the \(X(3872)\). In particular, it was pointed out that the effective range of the \(T_{cc}^{+}\) is negative and much larger than \(1/m_{\pi}\)[19]. This again would appear to speak strongly in favor of a compact tetraquark nature, even though this conclusion was challenged in [28].
## II The effective \(DD^{*}\) Hamiltonian
The LHCb collaboration has recently performed a high-statistics analysis of the \(B\to KX\to KJ/\psi\rho\) process, for events where the invariant mass of the \(J/\psi\rho\) pair is close to that of the \(X(3872)\)[24]. Plausibly assuming that the \(B\to KD^{0}\bar{D}^{*0}\) and the \(D^{0}\bar{D}^{*0}\to J/\psi\rho\) vertices originate at short distance, and hence are approximately pointlike, the \(B\to KJ/\psi\rho\) amplitude turns out to be proportional to that for \(D^{0}\bar{D}^{*0}\to D^{0}\bar{D}^{*0}\)[29], and can thus be used to extract the \(X(3872)\) lineshape.
As already mentioned, since \(m_{D^{*}}>m_{D}+m_{\pi}\), the \(D^{*0}\to D^{0}\pi^{0}\) decay is kinematically allowed. However, due to an accidental fine tuning, the available phase space is small and all the particles are endowed with a small velocity. We can therefore describe the relevant dynamics within non-relativistic quantum mechanics, with two components of the Hilbert space: one describing \(D^{0}\bar{D}^{*0}\) (and \(\bar{D}^{0}D^{*0}\)) and the other describing \(D^{0}\bar{D}^{0}\pi^{0}\). We can represent the Hamiltonian on this two-component Hilbert space in block form as
\[H=\begin{pmatrix}H_{DD^{*}}&H_{I}^{\dagger}\\ H_{I}&H_{DD\pi}\end{pmatrix}\,, \tag{1}\]
where, in an obvious notation, \(H_{DD^{*}}\) and \(H_{DD\pi}\) describe evolution within the two sectors in the absence of pion interactions, while \(H_{I}\) contains the \(D^{0}\bar{D}^{*0}\pi\) interaction vertex that couples the two sectors. Forward time evolution, and the \(S\)-matrix, can be computed using the retarded Green function \(\mathcal{G}_{+}(E)=(E-H+i\epsilon)^{-1}\). As we are only interested in the evolution in the \(D^{0}\bar{D}^{*0}\) subspace we only need the Green function reduced to the \(D^{0}\bar{D}^{*0}\) block \(\mathcal{G}_{+}(E)|_{D^{0}\bar{D}^{*0}}\). The latter can be conveniently expressed by _integrating out_ the \(D^{0}\bar{D}^{0}\pi^{0}\) component as \(\mathcal{G}_{+}(E)|_{D^{0}\bar{D}^{*0}}=(E-H_{\rm eff}(E)+i\epsilon)^{-1}\) with the effective Hamiltonian \(H_{\rm eff}(E)\) given by [e.g., 30],
\[H_{\rm eff}(E)=H_{DD^{*}}+H_{I}^{\dagger}\frac{1}{E-H_{DD\pi}+i\epsilon}H_{I}\,. \tag{2}\]
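The reduction in Eq. (2) is a purely linear-algebraic statement: the upper-left block of the full retarded resolvent equals the resolvent of the Schur complement. As a sanity check, it can be verified numerically on finite-dimensional toy matrices; the sketch below uses random Hermitian blocks as stand-ins for \(H_{DD^{*}}\) and \(H_{DD\pi}\) and a random coupling block for \(H_{I}\) (the dimensions and values are illustrative assumptions, not the actual operators).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

nA, nB = 4, 6                                   # toy dimensions of the two sectors
A = random_hermitian(nA)                        # stand-in for H_{DD*}
C = random_hermitian(nB)                        # stand-in for H_{DDpi}
B = rng.normal(size=(nB, nA)) + 1j * rng.normal(size=(nB, nA))   # stand-in for H_I

H = np.block([[A, B.conj().T], [B, C]])
E, eps = 0.7, 1e-6
G = np.linalg.inv((E + 1j * eps) * np.eye(nA + nB) - H)          # full retarded resolvent

# effective Hamiltonian obtained by integrating out the lower sector, as in Eq. (2)
H_eff = A + B.conj().T @ np.linalg.inv((E + 1j * eps) * np.eye(nB) - C) @ B
G_eff = np.linalg.inv((E + 1j * eps) * np.eye(nA) - H_eff)

print(np.allclose(G[:nA, :nA], G_eff))   # True: the upper block of G equals (E - H_eff)^{-1}
```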
The \(H_{DD^{*}}\) Hamiltonian includes the kinetic terms3 and a pointlike interaction controlled by a bare coupling, \(\lambda_{0}\),
Footnote 3: As already mentioned, we omit the \(\bar{D}^{0}D^{*0}\) part, which has precisely the same form.
\[H_{DD^{*}}=\frac{\mathbf{p}_{D^{*}}^{2}}{2m_{D^{*}}}+\frac{\mathbf{p}_{D}^{2}}{2m_{D}} -\lambda_{0}\delta^{(3)}(\mathbf{r})\,, \tag{3}\]
with \(\mathbf{r}\) the relative \(D^{0}\bar{D}^{*0}\) position.4 The coupling \(\lambda_{0}\) is assumed to bind the \(D^{0}\bar{D}^{*0}\) with a large scattering length. Since there is no indication of any such critically large scattering length in the \(D^{0}\bar{D}^{0}\pi^{0}\) system, we model it as
non-interacting:
\[H_{DD\pi}=-\delta+\frac{\mathbf{p}_{D,1}^{2}}{2m_{D}}+\frac{\mathbf{p}_{D,2}^{2}}{2m_{D}}+ \frac{\mathbf{p}_{\pi}^{2}}{2m_{\pi}}\,. \tag{4}\]
Here \(\delta\equiv m_{D^{*}}-m_{D}-m_{\pi}\), and the constant term is due to the fact that, in our notation, all energies are measured with respect to \(m_{D^{*}}+m_{D}\), and therefore the \(D^{0}\bar{D}^{0}\pi^{0}\) system has a slightly negative mass.
As far as the interaction Hamiltonian is concerned, instead, it is simpler to write directly its matrix element between states of definite momentum and polarization. At the lowest order in the small pion momentum, the Galilei invariant matrix element can be written as [31; 32; 25; 33],
\[\langle\bar{D}(\mathbf{k}_{1})D(\mathbf{k}_{2})\pi(\mathbf{q})|H_{I}|\bar{D}_{\lambda}^{*} (\mathbf{p}_{1})D(\mathbf{p}_{2})\rangle=\frac{ig}{2\sqrt{\bar{m}_{\pi}}f_{\pi}m_{D^{* }}}\left(m_{D}\mathbf{p}_{1}-m_{D^{*}}\mathbf{k}_{1}\right)\!\cdot\!\varepsilon_{ \lambda}\left(2\pi\right)^{6}\delta^{(3)}(\mathbf{p}_{1}-\mathbf{k}_{1}-\mathbf{q})\delta ^{(3)}(\mathbf{p}_{2}-\mathbf{k}_{2})\,, \tag{5}\]
where \(\varepsilon_{\lambda}\) is the polarization vector of the \(\bar{D}^{*0}\). The same matrix element applies to the charge-conjugated states. It should be stressed that the absence of time-dependent phase factors in Eq. (5) reflects a specific choice of the non-relativistic fields. In the present case, this choice implies that the masses appearing in the kinetic terms in Eqs. (3) and (4), but not in the quantity \(\delta\), satisfy \(m_{D^{*}}-m_{D}-m_{\pi}=0\). Henceforth, we shall everywhere set \(m_{D^{*}}=m_{D}+m_{\pi}\), except when \(\delta\) is considered.
The coupling in Eq. (5) is related to the \(D^{*0}\to D^{0}\pi^{0}\) decay width by
\[\Gamma_{*}=\frac{g^{2}\mu^{3}}{12\pi f_{\pi}^{2}}\,, \tag{6}\]
where we defined \(\mu\equiv\sqrt{2m_{\pi}\delta}\simeq 43\) MeV. Since no experimental value is available for the width of the neutral \(D^{*}\)-meson, \(g\) can be extracted from the width of the charged one, which gives \(g^{2}\simeq 0.34\)[33]. Notice that this result compares well with the expectation from _naive dimensional analysis_[34; 35], which would roughly give \(g^{2}\sim(4\pi f_{\pi})^{2}/N_{c}m_{D}^{2}\sim 0.2\).
Now, the term induced by pion exchange (the second) in the effective Hamiltonian in Eq. (2) consists of two contributions: a \(\bar{D}^{*0}D^{0}\to\bar{D}^{*0}D^{0}\) transition and a \(\bar{D}^{*0}D^{0}\to D^{*0}\bar{D}^{0}\) one. These correspond to the two diagrams shown in Figure 1, and discussed in [32; 25; 33] in a quantum field theory setup. A detailed computation of these contributions is reported in Appendix A. Working in position space, and at leading order in an expansion in \((m_{D^{*}}-m_{D})/m_{D}\sim m_{\pi}/m_{D}\ll 1\), the parts of the two transitions that purely contribute to the \(S\)-wave read respectively
\[\langle\bar{D}_{\lambda}^{*}(\mathbf{x}_{1})D(\mathbf{x}_{2})|H_{I}\frac {1}{E-H_{DD\pi}+i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\mathbf{y}_{1})D( \mathbf{y}_{2})\rangle \simeq -i\frac{\Gamma_{*}}{2}\delta^{(3)}(\mathbf{x}_{1}-\mathbf{y}_{1})\delta^{ (3)}(\mathbf{x}_{2}-\mathbf{y}_{2})\delta_{\lambda\lambda^{\prime}}\,, \tag{7a}\] \[\langle D_{\lambda}^{*}(\mathbf{x}_{1})\bar{D}(\mathbf{x}_{2})|H_{I} \frac{1}{E-H_{DD\pi}+i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\mathbf{y}_{1})D (\mathbf{y}_{2})\rangle \simeq -\bigg{[}\alpha\frac{e^{i\mu r}}{r}+\frac{g^{2}}{6f_{\pi}^{2}} \delta^{(3)}(\mathbf{r})\bigg{]}\delta^{(3)}(\mathbf{x}_{1}-\mathbf{y}_{2})\delta^{(3)}( \mathbf{x}_{2}-\mathbf{y}_{1})\delta_{\lambda\lambda^{\prime}}\,, \tag{7b}\]
where \(\mathbf{r}=\mathbf{x}_{1}-\mathbf{x}_{2}\) is the relative \(D^{*}D\) position, and we defined \(\alpha\equiv g^{2}\mu^{2}/(24\pi f_{\pi}^{2})\simeq 5\times 10^{-4}\). The quantity \(\alpha\) measures the weakness of the pion-induced interaction. For instance, Eq. (6) reads \(\Gamma_{*}=2\alpha\mu\), indicating that the \(D^{*}\) is actually much narrower than what is implied by the small phase space. The smallness of \(\alpha\) simply follows from the derivative nature of pion interactions. Finally, notice that the above leading order matrix elements are independent of \(E\): \(H_{\rm eff}(E)\equiv H_{\rm eff}\).
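As a quick numerical sanity check of the scales just quoted, Eq. (6) and the definition of \(\alpha\) can be evaluated directly. The sketch below assumes \(g^{2}\simeq 0.34\), \(\mu\simeq 43\) MeV, and a heavy-meson convention \(f_{\pi}\simeq 132\) MeV for the pion decay constant (the numerical value of \(f_{\pi}\) is our assumption, as it is not spelled out in the text); with these inputs one recovers \(\alpha\approx 5\times 10^{-4}\) and a \(D^{*0}\to D^{0}\pi^{0}\) width of a few tens of keV.

```python
import math

g2  = 0.34     # coupling squared, extracted from the charged D* width
mu  = 43.0     # MeV, mu = sqrt(2 m_pi delta)
fpi = 132.0    # MeV, assumed pion decay constant convention

alpha      = g2 * mu**2 / (24 * math.pi * fpi**2)   # strength of the pion-induced potential
gamma_star = g2 * mu**3 / (12 * math.pi * fpi**2)   # Eq. (6)

print(f"alpha      ~ {alpha:.1e}")                     # ~ 5e-4
print(f"Gamma_*    ~ {gamma_star * 1e3:.0f} keV")      # a few tens of keV
print(f"2 alpha mu ~ {2 * alpha * mu * 1e3:.0f} keV")  # consistency: Gamma_* = 2 alpha mu
```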
Decoupling the motion of the center-of-mass coordinate, the Schrödinger equation for the relative \(D^{0}\bar{D}^{*0}\) separation \(\mathbf{r}\) then reads
\[H_{\rm eff}\,\psi(r)\equiv\left[-\frac{\nabla^{2}}{2\mu_{r}}-\left(\lambda_{0} +\frac{4\pi\alpha}{\mu^{2}}\right)\delta^{(3)}(\mathbf{r})-i\frac{\Gamma_{*}}{2}- \alpha\frac{e^{i\mu r}}{r}\right]\psi(r)=E\,\psi(r)\,, \tag{8}\]
Figure 1: Diagrammatic representation of the two transition matrix elements in Eqs. (7).
with \(\mu_{r}\) the reduced \(D^{0}\bar{D}^{*0}\) mass. The wave function of the \(X(3872)\) corresponds to the \(C=+1\) combination of the two charge-conjugated states, \(\psi=\frac{1}{\sqrt{2}}\left(\psi_{\bar{D}^{*}D}+\psi_{D^{*}\bar{D}}\right)\). A detailed derivation of the results above can be found, again, in Appendix A.
A comment is in order. The effective potential in Eq. (8) is complex and has an infinite range. This reflects the fact that since the \(D^{*0}\to D^{0}\pi^{0}\) decay is allowed, the intermediate pion in the scattering term can be real, and hence propagate at arbitrarily large distances. Since the reduced \(D^{0}\bar{D}^{*0}\) subsystem does not include on-shell pions, the potential is non-Hermitian and unitarity is not manifest. Because of that, as we show in the next sections, there is an additional correction to the total width of the \(X(3872)\) and the effective range turns out to be complex.
## III Effects of soft pions on the lineshape of the \(X(3872)\)
We now discuss the consequences of Eq. (8) for the lineshape of the \(X(3872)\). We perform our study in two steps. First, we consider the scattering problem for positive real energies close to the (complex) \(D^{0}\bar{D}^{*0}\) threshold. Second, we study the (unstable) bound state.
### Scattering states: the effective range
The solution of Eq. (8), and the corresponding scattering amplitudes, can in principle be found numerically, in complete analogy with [36], but accounting for the complex terms in the Hamiltonian. However, exploiting the smallness of \(\alpha\), one can study the problem analytically by working in perturbation theory.
To organize the discussion it is convenient to label the four components of \(H_{\rm eff}\) in Eq. (8) according to \(H_{\rm eff}=H_{0}-i\,\Gamma_{*}/2+V_{s}+V_{w}\), with,
\[H_{0}=\frac{p^{2}}{2\mu_{r}}\,,\qquad V_{s}(\mathbf{r})=-\left(\lambda_{0}+\frac{ 4\pi\alpha}{\mu^{2}}\right)\delta^{(3)}(\mathbf{r})\,,\qquad V_{w}(\mathbf{r})=-\, \alpha\frac{e^{i\mu r}}{r}\,. \tag{9}\]
As the contribution of pion exchange to \(V_{s}\) is just a redefinition of the bare coupling \(\lambda_{0}\), we can absorb it in \(\lambda_{0}\) and forget about it. The genuine effects of pion exchange are then just the width \(\Gamma_{*}\) and \(V_{w}\). Let us then first consider the limit in which these contributions are neglected, and where the scattering amplitude is purely determined by \(H_{0}+V_{s}\) alone. Scattering states correspond to the \(E>0\) continuum. Indicating by \(k\equiv\sqrt{2\mu_{r}E}\) the relative \(D^{0}\bar{D}^{*0}\) momentum, and using the Lippmann-Schwinger formalism, the amplitude reads
\[f_{s}(k)=-\frac{\mu_{r}}{2\pi}\langle\psi_{0,k}|\left[V_{s}+V_{s}\frac{1}{E-H _{0}-V_{s}+i\epsilon}V_{s}\right]|\psi_{0,k}\rangle=\frac{1}{-1/a_{s}-ik}\,, \tag{10}\]
with \(\psi_{0,k}(r)=(\sin kr)/kr\) the free \(S\)-wave solution and with \(a_{s}\) the physical scattering length that results from the series of insertions of \(V_{s}\), after renormalization [36; 11].5 Consider now the effects of \(\Gamma_{*}\) and \(V_{w}\). As \(V_{w}\) is sufficiently localized in space, for sufficiently small \(\alpha\), its effect on the amplitude will certainly be treatable as a perturbation. \(\Gamma_{*}\), instead, is not spatially localized: since it does not vanish in the limit of very separated particles, its presence affects the very definition of the asymptotic states and must thus be treated exactly. This is, however, easily done. By Eq. (8), the presence of \(\Gamma_{*}\) simply implies that the asymptotic kinetic energy, \(k^{2}/2\mu_{r}\), now equals \(E+i\Gamma_{*}/2\), so that the on-shell condition is now that \(E+i\Gamma_{*}/2\) is real and positive. Using \(k\equiv\sqrt{2\mu_{r}(E+i\Gamma_{*}/2)}\), Eq. (10) then simply becomes,
Footnote 5: The Born level term in Eq. (10) gives \(a_{s}=\mu_{r}\lambda_{0}/2\pi\), but every single additional insertion of \(V_{s}\) is linearly UV divergent.
\[f_{s}(k)=-\frac{\mu_{r}}{2\pi}\langle\psi_{0,k}|\left[V_{s}+V_{s}\frac{1}{E+i \frac{\Gamma_{*}}{2}-H_{0}-V_{s}+i\epsilon}V_{s}\right]|\psi_{0,k}\rangle= \frac{1}{-1/a_{s}-ik}\,. \tag{11}\]
In terms of the redefined \(k\) the amplitude is the same as before: rather intuitively all that matters is the kinetic energy \(k^{2}/2\mu_{r}\) of the \(D^{0}\bar{D}^{*0}\) system. Notice that the amplitude analytically continued to \(E\to 0\) features a square root singularity in \(\Gamma_{*}\), consistent with the need for a full resummation of the \(\Gamma_{*}\) insertions [32].
Consider now \(V_{w}\). At lowest order in \(\alpha\), its effects are captured by the so-called distorted wave Born approximation [37], resulting in a correction to the amplitude \(f_{DD^{*}}(k)\) given by (again \(k=\sqrt{2\mu_{r}(E+i\Gamma_{*}/2)}\) for the rest of this
section),
\[f_{DD^{*}}(k)=f_{s}(k)-\frac{\mu_{r}}{2\pi}\langle\psi_{s,k}^{-}|V_{w}|\psi_{s,k} ^{+}\rangle+\mathcal{O}\left(V_{w}^{2}\right)\,, \tag{12}\]
where \(|\psi_{s,k}^{\pm}\rangle\) are the asymptotic states for the Hamiltonian \(H_{0}+V_{s}-i\Gamma_{*}/2\) [e.g., 36],
\[|\psi_{s,k}^{\pm}\rangle =\left[1+\frac{1}{\frac{k^{2}}{2\mu_{r}}-H_{0}-V_{s}\pm i\epsilon}V_{s}\right]|\psi_{0,k}\rangle\,, \tag{13a}\] \[\psi_{s,k}^{\pm}(r) =\frac{\sin(kr)}{kr}+\frac{1}{-1/a_{s}\mp ik}\frac{e^{\pm ikr}}{r}=e^{\pm i\delta_{s}}\frac{\sin(kr+\delta_{s})}{kr}\,, \tag{13b}\]
where \(\delta_{s}\) is the \(S\)-wave scattering phase due to \(V_{s}\): \(\tan\delta_{s}=-a_{s}k\).
With this at hand, the leading order correction to the scattering amplitude in Eq. (12) is controlled by
\[\langle\psi_{s,k}^{-}|V_{w}|\psi_{s,k}^{+}\rangle=-\,\alpha\,e^{2i\delta_{s}} \int d^{3}r\frac{e^{i\mu r}}{r}\frac{\sin^{2}(kr+\delta_{s})}{(kr)^{2}}\,. \tag{14}\]
The amplitude integral features a logarithmic UV divergence as a consequence of the \(\delta\)-function potential that shapes the \(\psi_{s,k}^{\pm}\). Indeed, turning off \(V_{s}\), one would have \(\delta_{s}=0\) and no UV divergence. This UV divergence is associated with short-distance physics, which we simply parameterize by introducing a cutoff at \(r=\eta\), where we expect \(\eta\sim 1\) fm. At leading order as \(\eta\mu\to 0\), we then get
\[\langle\psi_{s,k}^{-}|V_{w}|\psi_{s,k}^{+}\rangle= \,-\,\frac{\pi\alpha}{k^{2}}\bigg{\{}\left(e^{2i\delta_{s}}-1 \right)^{2}\left(\gamma_{E}-i\frac{\pi}{2}+\log\eta\mu\right)+\log\left(1- \frac{2k}{\mu}\right)+e^{4i\delta_{s}}\log\left(1+\frac{2k}{\mu}\right) \bigg{\}}+\mathcal{O}(\eta\mu)\,, \tag{15}\]
where \(\gamma_{E}\) is the Euler-Mascheroni constant. Notice that, besides the UV divergence controlled by \(\eta\), this expression features the standard IR divergence at \(\mu\to 0\) associated with Coulombic interactions, to which our potential reduces in this limit. Moreover, one can check that the result is regular at \(k\to 0\), corresponding to the validity of perturbation theory in \(\alpha\) even at this point.
For small relative momenta, the inverse amplitude can be expanded as follows,
\[f_{DD^{*}}^{-1}=-\frac{1}{a_{R}}-ik+\frac{2\alpha\mu_{r}}{\mu^{2}}\left(\frac {2}{a_{R}^{2}\mu^{2}}-1+\frac{8i}{3a_{R}\mu}\right)k^{2}+\mathcal{O}\big{(}k^{ 4}\big{)}\,, \tag{16}\]
where the \(\mathcal{O}\big{(}k^{2}\big{)}\) term determines the effective range. The UV divergence has been fixed by requiring the \(\mathcal{O}\big{(}k^{0}\big{)}\) term to match the physical scattering length, \(a_{R}\). In particular, the relation between \(a_{s}\) and \(a_{R}\) is,
\[\frac{a_{s}}{a_{R}}=1-\frac{2\alpha\mu_{r}}{\mu}\bigg{[}\frac{1}{a_{R}\mu}+ \gamma_{E}\mu a_{R}+2i+\mu a_{R}\left(\log(\eta\mu)-i\frac{\pi}{2}\right) \bigg{]}\,. \tag{17}\]
Since the \(D^{0}\bar{D}^{*0}\to D^{0}\bar{D}^{*0}\) scattering cannot be accessed experimentally, the physical scattering length must be obtained from the distance of the \(X(3872)\) from the \(D^{0}\bar{D}^{*0}\) threshold. At lowest order in \(\alpha\) this is \(a_{R}=1/\sqrt{2\mu_{r}B}\) (see Eq. (21) below). Considering the current experimental bound, which for a molecular state implies \(0\ \mathrm{keV}<B\lesssim 100\ \mathrm{keV}\)[38; 24], one finds the following constraints for the real and imaginary parts of the effective range induced by pion exchange:
\[-0.20\ \mathrm{fm}\lesssim\ \mathrm{Re}\,r_{0}\lesssim-0.16\ \mathrm{fm}\,,\qquad 0 \ \mathrm{fm}\lesssim\ \mathrm{Im}\,r_{0}\lesssim 0.17\ \mathrm{fm}\,. \tag{18}\]
As one can see, the pion is too weakly coupled to generate a large and negative effective range in a purely molecular scenario.6 Nonetheless, \(r_{0}\) can now have an imaginary part, even if just a small one.
Footnote 6: Recall that by “large” one normally means an effective range larger than its natural expected value of roughly \(1/m_{\pi}\simeq 1.4\) fm.
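The window in Eq. (18) follows from evaluating the \(\mathcal{O}(k^{2})\) coefficient of Eq. (16) with \(a_{R}=1/\sqrt{2\mu_{r}B}\) over the allowed binding energies. Below is a minimal numerical sketch; the \(D\)-meson masses are PDG values, \(\hbar c\simeq 197.3\) MeV fm is used for unit conversion, and \(\alpha\simeq 5\times 10^{-4}\), \(\mu\simeq 43\) MeV are the rounded values quoted above, so the output only approximately reproduces Eq. (18).

```python
import numpy as np

hbarc = 197.327                      # MeV fm
m_D, m_Dstar = 1864.84, 2006.85      # MeV, PDG values for D0 and D*0
mu_r  = m_D * m_Dstar / (m_D + m_Dstar)   # reduced D0 D*0bar mass
alpha = 5e-4
mu    = 43.0                         # MeV

def effective_range_fm(B_keV):
    """Effective range from the O(k^2) term of Eq. (16), with a_R = 1/sqrt(2 mu_r B)."""
    a_R = 1.0 / np.sqrt(2 * mu_r * B_keV * 1e-3)    # MeV^-1
    x = a_R * mu
    r0 = (4 * alpha * mu_r / mu**2) * (2 / x**2 - 1 + 8j / (3 * x))
    return r0 * hbarc                               # convert MeV^-1 -> fm

for B in (1.0, 10.0, 100.0):         # keV, spanning the allowed window
    r0 = effective_range_fm(B)
    print(f"B = {B:6.1f} keV:  Re r0 = {r0.real:+.3f} fm,  Im r0 = {r0.imag:+.3f} fm")
```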
Let us now briefly comment on the comparison between our results and those found in [25], where the same problem was studied. Our Eq. (16) reproduces, at leading order in \(m_{\pi}/m_{D}\ll 1\), almost exactly the expression for the effective range in [25], except that (1) we find the opposite sign for the imaginary part, and (2) the term that does not depend on the scattering length does not seem to match. The authors of [25] report a seemingly unspecified constant, labeled \(F_{2}\), whose interpretation is unclear to us.
### Bound state: pole of the \(X(3872)\)
The unperturbed Hamiltonian \(H_{0}+V_{s}\), besides a continuum of scattering states with positive energy, features, for \(a_{s}>0\), one bound state with wave function [36] given by
\[\psi_{X}(r)=\frac{1}{\sqrt{2\pi a_{s}}}\frac{e^{-r/a_{s}}}{r}\,, \tag{19}\]
and with energy \(E_{X}\equiv-B=-\left(2a_{s}^{2}\mu_{r}\right)^{-1}\). In the molecular hypothesis, this bound state is the very \(X(3872)\). The pion-mediated interaction now causes a shift in the energy of the molecular \(X(3872)\). The effect of the constant width is readily seen, as before, by moving \(\Gamma_{*}/2\) to the right-hand side in Eq. (8), so that \(-B=-\left(2a_{s}^{2}\mu_{r}\right)^{-1}\) now equals \(E+i\Gamma_{*}/2\), that is \(E_{X}=-\left(2a_{s}^{2}\mu_{r}\right)^{-1}-i\Gamma_{*}/2\). The effect of \(V_{w}\) can instead be computed in perturbation theory and, at first order, it gives a shift
\[\Delta E_{X}=\langle\psi_{X}|V_{w}|\psi_{X}\rangle=-\frac{\alpha}{2\pi a_{s}}\int_{r\geq\eta}d^{3}r\frac{e^{-\frac{2r}{a_{s}}+i\mu r}}{r^{3}}\,. \tag{20}\]
The real part of this expression features the same UV divergence encountered in the previous section. Regulating the UV divergence by introducing the same cut-off length, \(\eta\), we can express the final result in terms of the renormalized \(a_{R}\) in Eq. (17) and of the other physical parameters as
\[E_{X}=-\frac{1}{2\mu_{r}a_{R}^{2}}-i\frac{\Gamma_{*}}{2}-\frac{2\alpha}{a_{R} ^{3}\mu^{2}}\bigg{[}1+2ia_{R}\mu-a_{R}^{2}\mu^{2}\log\left(1+\frac{2i}{a_{R} \mu}\right)\bigg{]}\,. \tag{21}\]
As one can see, the term in the square bracket has an imaginary part, introducing a correction to the total width of the \(X(3872)\): \(\Gamma_{X}=\Gamma_{*}+\Delta\Gamma_{X}\). This can be interpreted as the effect of the binding on the \(\bar{D}^{*0}\to D\pi\) decay. Indeed, \(\Delta\Gamma_{X}\) vanishes in the unbound case, \(a_{R}\to\infty\). Given the experimental constraints on the distance of the \(X(3872)\) from threshold, one deduces
\[0~{}\text{keV}\lesssim\Delta\Gamma_{X}\lesssim 1.9~{}\text{keV}\,. \tag{22}\]
Notice that the imaginary part of the term in the square bracket in Eq. (21) vanishes as \(1/a_{R}\mu\) in the limit \(a_{R}\mu\to\infty\). This further suppresses the maximal value of \(\Delta\Gamma_{X}\), given that the allowed minimum of \(a_{R}\mu\) sits at around \(3\).
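The bound in Eq. (22) can be checked in the same spirit, by taking minus twice the imaginary part of the last term of Eq. (21). A short numerical sketch with the same assumed inputs as above:

```python
import numpy as np

m_D, m_Dstar = 1864.84, 2006.85            # MeV, PDG values
mu_r  = m_D * m_Dstar / (m_D + m_Dstar)    # reduced mass, MeV
alpha = 5e-4
mu    = 43.0                               # MeV

def delta_gamma_keV(B_keV):
    """Pion-induced correction to the X(3872) width from the bracket in Eq. (21)."""
    a_R = 1.0 / np.sqrt(2 * mu_r * B_keV * 1e-3)    # MeV^-1
    x = a_R * mu
    bracket = 1 + 2j * x - x**2 * np.log(1 + 2j / x)
    shift = -(2 * alpha / (a_R**3 * mu**2)) * bracket   # last term of Eq. (21), in MeV
    return -2 * shift.imag * 1e3                        # width correction in keV

for B in (1.0, 10.0, 100.0):
    print(f"B = {B:6.1f} keV:  Delta Gamma_X ~ {delta_gamma_keV(B):.2f} keV")
```

Consistently with Eq. (22), the correction grows with the binding energy, reaching roughly 2 keV at the upper end of the allowed window with these rounded inputs, and it vanishes as the state becomes unbound.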
## IV Conclusions
According to basic effective field theory, as synthesized in the so-called Weinberg criterion [22], the effective range of the \(X(3872)\), in the hypothesis of a \(D^{0}\bar{D}^{*0}\) molecular state, should be \(\mathcal{O}(1~{}\text{fm})\), while, in the hypothesis of a compact hadron, it could be negative, with a magnitude substantially larger than \(\mathcal{O}(1~{}\text{fm})\). The accidental vicinity of the \(\bar{D}^{*0}\) to the \(D\pi\) threshold, however, introduces a new effective mass scale \(\mu\ll 1/\text{fm}\) in the dynamics of the system. Even though the strength of the pion interactions also matters, the smallness of \(\mu\) instills the suspicion that contributions \(\propto 1/\mu\) to the effective range could affect the above clear-cut picture, also jeopardizing the validity of the results obtained in [13] and [19]. In this study, we showed by explicit computation that this is not the case. Our study confirms results previously obtained in the literature [32; 25], but with a technically much simpler approach based on non-relativistic scattering theory and on the exact solution of three-dimensional \(\delta\)-function potentials in quantum mechanics. One essential aspect of our computation is the occurrence of a complex long-range potential from pion exchange, which induces shifts to both the real and imaginary parts of all the parameters describing the \(X(3872)\) lineshape.
Broadly speaking, the deduction of the effective range of exotic hadrons like the \(X(3872)\) or the \(T_{cc}^{+}\) from experimental data is a very relevant matter, as it could provide a clear, model-independent determination of their nature. While data are already available, we believe that the current theoretical description of the lineshape of the \(X(3872)\) lacks a systematic treatment that includes all possible effects (contact interactions, charged thresholds, tetraquark contributions, and so on) in a way that can be readily interpreted from a physical viewpoint. We consider the present study a first step in that direction, which we plan to pursue in future work.
###### Acknowledgements.
R.R. is partially supported by the Swiss National Science Foundation under contract 200020-213104 and through the National Center of Competence in Research SwissMAP. A.E. and A.D.P. thank Hans Werner Hammer for an informative discussion. A.D.P. wishes to thank Adam Szczepaniak and Kevin Ingles for useful clarifications.
## Appendix A Effective Hamiltonian in position space
We here show in more detail how the effective Hamiltonian in Eq. (8) is obtained. We start by computing the transition matrix elements in Eqs. (7). Consider first Eq. (7a), decomposed in momentum space
\[\begin{split}\langle\bar{D}_{\lambda}^{*}(\mathbf{x}_{1})D(\mathbf{x}_{2}) |H_{I}\frac{1}{E-H_{DD\pi}+i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\mathbf{ y}_{1})D(\mathbf{y}_{2})\rangle=\int\frac{d\mathbf{p}_{1}}{(2\pi)^{3}}\frac{d\mathbf{p}_{2}}{(2 \pi)^{3}}\int\frac{d\mathbf{k}_{1}}{(2\pi)^{3}}\frac{d\mathbf{k}_{2}}{(2\pi)^{3}}\\ \times e^{i(\mathbf{p}_{1}\cdot\mathbf{y}_{1}+\mathbf{p}_{2}\cdot\mathbf{y}_{2}- \mathbf{k}_{1}\cdot\mathbf{x}_{1}-\mathbf{k}_{2}\cdot\mathbf{x}_{2})}\langle\bar{D}_{\lambda} ^{*}(\mathbf{k}_{1})D(\mathbf{k}_{2})|H_{I}\frac{1}{E-H_{DD\pi}+i\epsilon}H_{I}|\bar{ D}_{\lambda^{\prime}}^{*}(\mathbf{p}_{1})D(\mathbf{p}_{2})\rangle\,.\end{split} \tag{10}\]
The momentum space matrix element can be evaluated using the completeness relations for the \(\bar{D}^{0}D^{0}\pi^{0}\) system,
\[\begin{split}\langle\bar{D}_{\lambda}^{*}(\mathbf{k}_{1})D(\mathbf{k}_{2} )|H_{I}\frac{1}{E-H_{DD\pi}+i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\bm {p}_{1})D(\mathbf{p}_{2})\rangle=\int\frac{d\mathbf{q}_{1}}{(2\pi)^{3}}\frac{d\mathbf{q}_{ 2}}{(2\pi)^{3}}\frac{d\mathbf{q}_{3}}{(2\pi)^{3}}\\ \times\langle\bar{D}_{\lambda}^{*}(\mathbf{k}_{1})D(\mathbf{k}_{2})|H_{I} \frac{1}{E-H_{DD\pi}+i\epsilon}|\bar{D}(\mathbf{q}_{1})D(\mathbf{q}_{2})\pi(\mathbf{q}_{3}) \rangle\langle\bar{D}(\mathbf{q}_{1})D(\mathbf{q}_{2})\pi(\mathbf{q}_{3})|H_{I}|\bar{D}_{ \lambda^{\prime}}^{*}(\mathbf{p}_{1})D(\mathbf{p}_{2})\rangle\\ \simeq\bigg{(}\frac{g}{2\sqrt{m_{\pi}}f_{\pi}}\bigg{)}^{2}(2\pi)^ {3}\delta^{(3)}(\mathbf{k}_{1}-\mathbf{p}_{1})\delta^{(3)}(\mathbf{k}_{2}-\mathbf{p}_{2})\int d \mathbf{q}\frac{\mathbf{q}\cdot\mathbf{\varepsilon}_{\lambda}\mathbf{q}\cdot\mathbf{\varepsilon}_{ \lambda^{\prime}}^{*}}{E+\delta-\frac{(\mathbf{p}_{1}-\mathbf{q})^{2}}{2m_{D}}-\frac{ \mathbf{p}_{2}^{2}}{2m_{\pi}}-\frac{\mathbf{q}^{2}}{2m_{\pi}}+i\epsilon}\\ \simeq\frac{4\pi^{3}g^{2}}{f_{\pi}^{2}}\delta^{(3)}(\mathbf{k}_{1}- \mathbf{p}_{1})\delta^{(3)}(\mathbf{k}_{2}-\mathbf{p}_{2})\int d\mathbf{q}\,\frac{\mathbf{q}\cdot \mathbf{\varepsilon}_{\lambda}\mathbf{q}\cdot\mathbf{\varepsilon}_{\lambda^{\prime}}^{*}}{ \mu^{2}-q^{2}+i\epsilon}\,,\end{split} \tag{11}\]
where we expanded at lowest order in \(m_{\pi}/m_{D}\ll 1\), keeping in mind that the energy \(E\) will be of the order of the \(D^{0}\bar{D}^{*0}\) kinetic energy, \(E\sim p^{2}/m_{D}\). Moreover, we keep \(\mu=\sqrt{2m_{\pi}\delta}\) fixed, since this is the quantity that critically controls the coordinate dependence of the pion-induced complex potential. Indeed, \(\mu\) acts as the momentum cutoff of the effective \(D^{0}\bar{D}^{*0}\) theory with the pion integrated out, and the limit \(\mu\to 0\) drastically affects the qualitative behavior of the \(D^{0}\bar{D}^{*0}\) system.
The last integral in Eq. (11) is UV divergent, and we choose to regularize it with a cutoff \(\Lambda\). Using also the orthonormality of the polarization vectors, \(\mathbf{\varepsilon}_{\lambda}\cdot\mathbf{\varepsilon}_{\lambda^{\prime}}^{*}=\delta _{\lambda\lambda^{\prime}}\), we obtain,
\[\begin{split}\langle\bar{D}_{\lambda}^{*}(\mathbf{k}_{1})D(\mathbf{k}_{2} )|H_{I}\frac{1}{E-H_{DD\pi}+i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\bm {p}_{1})D(\mathbf{p}_{2})\rangle\simeq&-\bigg{[}\frac{16\pi^{4}g^{2} \Lambda^{3}}{9f_{\pi}^{2}}+\frac{16\pi^{4}g^{2}\Lambda\mu^{2}}{3f_{\pi}^{2}}+( 2\pi)^{6}i\frac{\Gamma_{*}}{2}\bigg{]}\\ &\times\delta^{(3)}(\mathbf{k}_{1}-\mathbf{p}_{1})\delta^{(3)}(\mathbf{k}_{2} -\mathbf{p}_{2})\delta_{\lambda\lambda^{\prime}}\,.\end{split} \tag{12}\]
The real part is UV divergent and can be absorbed into the physical value of \(\delta\), which, with an abuse of notation, we will keep labelling by the same symbol. By plugging the above result into Eq. (10), we obtain the transition matrix element in position space,
\[\langle\bar{D}_{\lambda}^{*}(\mathbf{x}_{1})D(\mathbf{x}_{2})|H_{I}\frac{1}{E-H_{DD\pi} +i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\mathbf{y}_{1})D(\mathbf{y}_{2}) \rangle\simeq-i\frac{\Gamma_{*}}{2}\delta^{(3)}(\mathbf{x}_{1}-\mathbf{y}_{1})\delta^{ (3)}(\mathbf{x}_{2}-\mathbf{y}_{2})\delta_{\lambda\lambda^{\prime}}\,, \tag{13}\]
where we omitted the UV divergent terms.
With similar manipulations, we can find the matrix element for the transition \(\bar{D}^{*0}D^{0}\to D^{*0}\bar{D}^{0}\) in Eq. (7b). In position space, it reads (see also [39]),
\[\langle D_{\lambda}^{*}(\mathbf{x}_{1})\bar{D}(\mathbf{x}_{2})|H_{I}\frac{1}{E-H_{DD\pi}+i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\mathbf{y}_{1})D(\mathbf{y}_{2})\rangle\simeq-\frac{g^{2}}{2f_{\pi}^{2}}\delta^{(3)}(\mathbf{x}_{1}-\mathbf{y}_{2})\delta^{(3)}(\mathbf{x}_{2}-\mathbf{y}_{1})\!\int\!\frac{d\mathbf{q}}{(2\pi)^{3}}e^{i\mathbf{q}\cdot\mathbf{r}}\frac{\mathbf{\varepsilon}_{\lambda}\cdot\mathbf{q}\,\mathbf{\varepsilon}_{\lambda^{\prime}}^{*}\cdot\mathbf{q}}{q^{2}-\mu^{2}-i\epsilon}\\ =\frac{g^{2}}{8\pi f_{\pi}^{2}}e^{i\mu r}\varepsilon_{\lambda}^{i}\varepsilon_{\lambda^{\prime}}^{*j}\bigg{[}\hat{r}_{i}\hat{r}_{j}\left(\frac{3}{r^{3}}-\frac{3i\mu}{r^{2}}-\frac{\mu^{2}}{r}\right)-\delta_{ij}\left(\frac{1}{r^{3}}-\frac{i\mu}{r^{2}}\right)-\frac{4\pi}{3}\delta_{ij}\delta^{(3)}(\mathbf{r})\bigg{]}\delta^{(3)}(\mathbf{x}_{1}-\mathbf{y}_{2})\delta^{(3)}(\mathbf{x}_{2}-\mathbf{y}_{1})\,. \tag{14}\]
At low enough energy, only the \(S\)-wave channel is relevant. To project the above matrix element to \(S\)-wave, we average over the \(\hat{\mathbf{r}}\) direction, effectively setting \(\hat{r}_{i}\hat{r}_{j}\to\frac{1}{3}\delta_{ij}\). This returns,
\[\langle D_{\lambda}^{*}(\mathbf{x}_{1})\bar{D}(\mathbf{x}_{2})|H_{I}\frac{1}{E-H_{DD\pi}+ i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\mathbf{y}_{1})D(\mathbf{y}_{2})\rangle\simeq-\bigg{[} \alpha\frac{e^{i\mu r}}{r}+\frac{g^{2}}{6f_{\pi}^{2}}\delta^{(3)}(\mathbf{r}) \bigg{]}\delta^{(3)}(\mathbf{x}_{1}-\mathbf{y}_{2})\delta^{(3)}(\mathbf{x}_{2}-\mathbf{y}_{1}) \delta_{\lambda\lambda^{\prime}}\,. \tag{15}\]
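The \(S\)-wave projection used in the last step relies on the identity \(\langle\hat{r}_{i}\hat{r}_{j}\rangle=\tfrac{1}{3}\delta_{ij}\), where the average is over the direction of \(\hat{\mathbf{r}}\). A short symbolic check of this angular average (a generic identity, independent of the specific matrix element) is the following sketch:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
rhat = sp.Matrix([sp.sin(theta) * sp.cos(phi),
                  sp.sin(theta) * sp.sin(phi),
                  sp.cos(theta)])

# solid-angle average of rhat_i * rhat_j
avg = sp.zeros(3, 3)
for i in range(3):
    for j in range(3):
        avg[i, j] = sp.integrate(
            sp.integrate(rhat[i] * rhat[j] * sp.sin(theta), (theta, 0, sp.pi)),
            (phi, 0, 2 * sp.pi)) / (4 * sp.pi)

print(avg)   # Matrix([[1/3, 0, 0], [0, 1/3, 0], [0, 0, 1/3]])
```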
With the position-space transition matrix elements (13) and (15) at hand, the Schrödinger equations for the \(\bar{D}^{*0}D^{0}\) and the \(D^{*0}\bar{D}^{0}\) components become a system of two coupled equations. Having already decoupled the center-of-mass motion, they read
\[-\bigg{[}\frac{\nabla^{2}}{2\mu_{r}}+i\frac{\Gamma_{*}}{2}+\lambda_{0}\delta^{(3)}(\mathbf{r})\bigg{]}\psi_{\bar{D}^{*}\!D}(r)-\bigg{[}\alpha\frac{e^{i\mu r}}{r}+\frac{g^{2}}{6f_{\pi}^{2}}\delta^{(3)}(\mathbf{r})\bigg{]}\psi_{D^{*}\!\bar{D}}(r) =E\,\psi_{\bar{D}^{*}\!D}(r)\,, \tag{12a}\] \[-\bigg{[}\alpha\frac{e^{i\mu r}}{r}+\frac{g^{2}}{6f_{\pi}^{2}}\delta^{(3)}(\mathbf{r})\bigg{]}\psi_{\bar{D}^{*}\!D}(r)-\bigg{[}\frac{\nabla^{2}}{2\mu_{r}}+i\frac{\Gamma_{*}}{2}+\lambda_{0}\delta^{(3)}(\mathbf{r})\bigg{]}\psi_{D^{*}\!\bar{D}}(r) =E\,\psi_{D^{*}\!\bar{D}}(r)\,. \tag{12b}\]
Since the \(X(3872)\) is a \(C=+1\) state, its wave function as a meson molecule is \(\psi=\frac{1}{\sqrt{2}}\left(\psi_{\bar{D}^{*}\!D}+\psi_{D^{*}\!\bar{D}}\right)\). The corresponding Schrödinger equation is obtained by adding together the two equations above and, indeed, reduces to Eq. (8).
Finally, it is interesting to show what the exchange amplitude computed above looks like away from the \(m_{D^{*}}\simeq m_{D}\) limit. Recalling that, in our notation, \(m_{D^{*}}-m_{D}=m_{\pi}\), as well as Eq. (5), one can perform very similar (albeit more tedious) manipulations and get to,
\[\langle D_{\lambda}^{*}(\mathbf{x}_{1})\bar{D}(\mathbf{x}_{2})|H_{I}\frac {1}{E-H_{DD\pi}+i\epsilon}H_{I}|\bar{D}_{\lambda^{\prime}}^{*}(\mathbf{y}_{1})D( \mathbf{y}_{2})\rangle=\frac{g^{2}}{4m_{\pi}f_{\pi}^{2}m_{D^{*}}^{2}}\int\frac{d \mathbf{p}_{1}}{(2\pi)^{3}}\frac{d\mathbf{p}_{2}}{(2\pi)^{3}}\frac{d\mathbf{k}_{1}}{(2\pi)^ {3}}\frac{d\mathbf{k}_{2}}{(2\pi)^{3}} \tag{13}\] \[\qquad\qquad\qquad\times e^{i(\mathbf{p}_{1}\cdot\mathbf{y}_{1}+\mathbf{p}_{2 }\cdot\mathbf{y}_{2}-\mathbf{k}_{1}\cdot\mathbf{x}_{1}-\mathbf{k}_{2}\cdot\mathbf{x}_{2})}(2\pi)^{ 3}\delta^{(3)}(\mathbf{p}_{1}+\mathbf{p}_{2}-\mathbf{k}_{1}-\mathbf{k}_{2})\frac{(m_{D}\mathbf{k}_ {1}-m_{D}\cdot\mathbf{p}_{2})\cdot\mathbf{e}_{\lambda}^{*}\left(m_{D}\mathbf{p}_{1}-m_{D} \cdot\mathbf{k}_{2}\right)\cdot\mathbf{e}_{\lambda^{\prime}}}{E+\delta-\frac{k_{1}^{2} }{2m_{D}}-\frac{(\mathbf{k}_{1}-\mathbf{p}_{2})^{2}}{2m_{\pi}}-i\epsilon}\,.\]
When expanding in \(m_{\pi}/m_{D}\ll 1\), one can easily check that the corrections have the same structure as Eq. (10), with two gradients, acting either on the potential \(e^{i\mu r}/r\) or on the \(\delta\)-functions, with an overall factor of \(m_{\pi}/m_{D}\). Except for short distance contributions which we are renormalizing away, both these terms scale exactly as Eq. (11) but with an additional \(m_{\pi}/m_{D}\) suppression, which makes them negligible.
|
2305.17608 | Reward Collapse in Aligning Large Language Models | The extraordinary capabilities of large language models (LLMs) such as
ChatGPT and GPT-4 are in part unleashed by aligning them with reward models
that are trained on human preferences, which are often represented as rankings
of responses to prompts. In this paper, we document the phenomenon of
\textit{reward collapse}, an empirical observation where the prevailing
ranking-based approach results in an \textit{identical} reward distribution
\textit{regardless} of the prompts during the terminal phase of training. This
outcome is undesirable as open-ended prompts like ``write a short story about
your best friend'' should yield a continuous range of rewards for their
completions, while specific prompts like ``what is the capital of New Zealand''
should generate either high or low rewards. Our theoretical investigation
reveals that reward collapse is primarily due to the insufficiency of the
ranking-based objective function to incorporate prompt-related information
during optimization. This insight allows us to derive closed-form expressions
for the reward distribution associated with a set of utility functions in an
asymptotic regime. To overcome reward collapse, we introduce a prompt-aware
optimization scheme that provably admits a prompt-dependent reward distribution
within the interpolating regime. Our experimental results suggest that our
proposed prompt-aware utility functions significantly alleviate reward collapse
during the training of reward models. | Ziang Song, Tianle Cai, Jason D. Lee, Weijie J. Su | 2023-05-28T02:12:00Z | http://arxiv.org/abs/2305.17608v1 | # Reward Collapse in Aligning Large Language Models
###### Abstract
The extraordinary capabilities of large language models (LLMs) such as ChatGPT and GPT-4 are in part unleashed by aligning them with reward models that are trained on human preferences, which are often represented as rankings of responses to prompts. In this paper, we document the phenomenon of _reward collapse_, an empirical observation where the prevailing ranking-based approach results in an _identical_ reward distribution _regardless_ of the prompts during the terminal phase of training. This outcome is undesirable as open-ended prompts like "write a short story about your best friend" should yield a continuous range of rewards for their completions, while specific prompts like "what is the capital of New Zealand" should generate either high or low rewards. Our theoretical investigation reveals that reward collapse is primarily due to the insufficiency of the ranking-based objective function to incorporate prompt-related information during optimization. This insight allows us to derive closed-form expressions for the reward distribution associated with a set of utility functions in an asymptotic regime. To overcome reward collapse, we introduce a prompt-aware optimization scheme that provably admits a prompt-dependent reward distribution within the interpolating regime. Our experimental results suggest that our proposed prompt-aware utility functions significantly alleviate reward collapse during the training of reward models.
## 1 Introduction
A cornerstone of the recent remarkable advancements in the capabilities of large language models (LLMs) like ChatGPT and GPT-4 is the integration of human feedback [25, 24]. The approach to leveraging human feedback often begins with the training of a reward model that encapsulates human preferences, values, and ethical considerations [8, 14, 2, 29, 10]. This is followed by the fine-tuning of the LLMs using reinforcement learning, guided by the reward model. This process, often referred to as reinforcement learning from human feedback (RLHF), has proven effective in aligning LLMs with human intent, substantially enriching the quality of human interaction.
However, developing an effective reward model based on human preferences is challenging [4, 19, 27]. A notable difficulty arises when a human labeler struggles to give a quantitative score to a response/completion for a specific prompt. Instead, it is much easier for humans to make pairwise comparisons between completions in terms of their quality, which is indeed employed in the development of InstructGPT [25]. Specifically, a human labeler is presented with several completions
generated by the LLMs for the same prompt and arranges the responses from the highest to lowest perceived quality.1 A neural network is then trained to obtain a reward model that assigns rewards to the responses in an attempt to align as closely as possible with human preferences in the form of _rankings_.
Footnote 1: Through private communication, [25] required human labelers to utilize a drag-and-drop interface to construct _consistent_ rankings from pairwise comparisons.
Despite some benefits, such as eliminating calibration issues, rankings fall short in reflecting the varied reward distributions of different prompts. This is because ranking one completion higher than another does not indicate how _much_ better the former is than the latter. This concern is especially pertinent in RLHF as some prompts are open-ended or, in other words, dependent on the users' backgrounds, allowing the reward distribution to span a continuous range. Conversely, some prompts are closed-ended, resulting in responses that should be scored either high or low, thus generating a roughly two-point mass distribution for the reward. The reward model may struggle to aid LLMs in accurately calibrating uncertainty without accounting for the nuances of different prompts.2
Footnote 2: For instance, we suspect that this is partly accountable for the poor calibration of GPT-4 after RLHF (see page 12 of [24]), although we are unable to verify due to the black-box nature of GPT-4 as well as insufficient computational resources.
As our first main contribution, this paper documents a surprising phenomenon through a series of experiments, demonstrating that training a reward model on preference rankings could result in the _same_ reward distribution regardless of the prompts. We call this phenomenon _reward collapse_,
Figure 1: Illustration of reward collapse, with rewards assigned to eight responses, arranged from least to most preferred. One type of prompt is open-ended, which should result in a roughly uniform distribution of rewards, while the other is closed-ended, which should yield either high or low rewards (polarized). However, as evidenced in the first three plots, when a common utility function is employed (see Eq. 1 in Section 2), the two types of prompts result in a strikingly similar reward distribution. Conversely, when a prompt-aware utility is applied, as seen in the fourth plot, the two types of prompts exhibit distinct reward distributions. Further details are elaborated in Section 3. All of our code is publicly available at [https://github.com/ctllllll/reward_collapse](https://github.com/ctllllll/reward_collapse).
which occurs during the terminal phase of training [26]. Intriguingly, our theoretical analysis first predicted this phenomenon _before_ it was confirmed experimentally. Indeed, we show that the collapsed reward distribution can be numerically deduced from a simple optimization program or, even more simply, admits a closed-form expression. As demonstrated in Figure 1, our prediction of reward collapse is in excellent agreement with the empirical results.
Reward collapse is clearly undesirable, as it overlooks the subtle differences among various prompts, potentially leading to the miscalibration of human preference during the training of LLMs via reinforcement learning with the reward model. A rudimentary strategy to bypass this issue is to early stop the training of the reward model [25], which, however, is somewhat arbitrary and can make it challenging to determine the stopping point.
As our second main contribution, we introduce a principled approach to alleviating reward collapse, leveraging insights derived from the same optimization program that was instrumental in predicting this phenomenon. In essence, we propose to use distinct utility functions depending on prompts in training the reward model, such that the resulting reward distribution can be either widely dispersed or tightly concentrated, contingent on whether the prompt is open-ended or closed-ended. A notable advantage of this prompt-aware strategy is that our analysis is analytical, enabling full control over the shape of the reward distribution as required. As depicted in the right-most panel of Figure 1 and more results in Section 3, our experiments show that reward collapse can be substantially mitigated using this prompt-aware methodology.
## 2 What Is Reward Collapse and How to Mitigate It?
### Reward Collapse
Denote by \(R(\texttt{prom},\texttt{compl})\) a reward model. Without loss of generality, we assume \(0\leq R(\texttt{prom},\texttt{compl})\leq 1\). For a given prompt \(\texttt{prom}\) and \(n\) completions that are i.i.d. draws from an LLM, a human labeler ranks the \(n\) responses from the most preferred to the least preferred, and the ranking is denoted as \(\pi_{\texttt{prom}}\). The reward model is expected to score each completion that is consistent with the human-provided ranking \(\pi_{\texttt{prom}}\) as much as possible. To this end, we train a neural network that maximizes the following overall utility:
\[\sum_{(\texttt{prom},\texttt{compl}_{w},\texttt{compl}_{l})\in\Pi}U\left(R_{ \theta}(\texttt{prom},\texttt{compl}_{w})-R_{\theta}(\texttt{prom},\texttt{ compl}_{l})\right), \tag{1}\]
where \(U\) is an (increasing) utility function, \(\theta\) denotes the weights of the reward neural network, \(\Pi\) is the ranking dataset, and \(\texttt{compl}_{w}\) is a completion preferred over \(\texttt{compl}_{l}\) in the ranking \(\pi_{\texttt{prom}}\). In InstructGPT [25], \(U\) is set to \(U_{\sigma}(x)=\log\texttt{sigmoid}(x/\sigma)\equiv\log\frac{\mathrm{e}^{x/\sigma}}{\mathrm{e}^{x/\sigma}+1}\), which is an increasing concave function. While maximizing Eq. 1, the reward model learns not only to align with the human-provided ranking but also to distinguish the rewards as much as possible.
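For concreteness, a single term of the objective in Eq. 1 with the log-sigmoid utility is, up to a sign, the familiar pairwise ranking loss used to train reward models. The snippet below is a minimal PyTorch-style sketch; the batching and the scalar rewards fed into it are illustrative placeholders, not the actual InstructGPT training code.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(rewards_chosen, rewards_rejected, sigma=1.0):
    """Negative of Eq. 1 over a batch of (preferred, less preferred) completion pairs.

    rewards_chosen / rewards_rejected hold R_theta(prom, compl_w) and
    R_theta(prom, compl_l) for ranked pairs of completions of the same prompt.
    """
    return -F.logsigmoid((rewards_chosen - rewards_rejected) / sigma).mean()

# illustrative usage with made-up reward values
r_w = torch.tensor([0.9, 0.4, 0.7])
r_l = torch.tensor([0.2, 0.5, 0.1])
print(pairwise_ranking_loss(r_w, r_l))
```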
To gain insights into how the rewards depend on \(U\), note that the above is equivalent to
\[\max\sum_{\texttt{prom}}\ \ \sum_{(\texttt{compl}_{w},\texttt{compl}_{l})\in \pi_{\texttt{prom}}}U\left(R_{\theta}(\texttt{prom},\texttt{compl}_{w})-R_{ \theta}(\texttt{prom},\texttt{compl}_{l})\right).\]
Next, assume that the neural network parameterized by \(\theta\) is sufficiently overparameterized such that
\[\sum_{(\texttt{compl}_{w},\texttt{compl}_{l})\in\pi_{\texttt{prom}}}U\left(R_ {\theta}(\texttt{prom},\texttt{compl}_{w})-R_{\theta}(\texttt{prom},\texttt{ compl}_{l})\right)\]
is _exactly_ maximized. This is precisely the same as maximizing \(\sum_{1\leq i<j\leq n}U\left(r_{\pi_{\texttt{prom}}(i)}-r_{\pi_{\texttt{prom}}(j)}\right)\) over \(0\leq r_{1},\ldots,r_{n}\leq 1\). However, the solution to this optimization program is _independent_ of the prompt and, indeed, is the same as the solution to
\[\max_{0\leq r_{1},\ldots,r_{n}\leq 1}\sum_{1\leq i<j\leq n}U\left(r_{i}-r_{j}\right) \tag{2}\]
up to a permutation. That is, the empirical distribution of the rewards is independent of the prompt itself in the interpolating regime, thereby leading to reward collapse.
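The collapse profile itself can be obtained directly from the program in Eq. 2. The following is a minimal numerical sketch for \(n=8\) completions and the log-sigmoid utility introduced above (the value of \(\sigma\) is an arbitrary choice for illustration); since the objective contains no prompt information, the profile it returns is the one that every prompt collapses to.

```python
import numpy as np
from scipy.optimize import minimize

def log_sigmoid(x):
    # log(1 / (1 + exp(-x))) in a numerically stable form
    return -np.logaddexp(0.0, -x)

def collapsed_rewards(n=8, sigma=0.2):
    """Maximize sum_{i<j} U(r_i - r_j) over r in [0, 1]^n for U(x) = log sigmoid(x / sigma)."""
    idx_i, idx_j = np.triu_indices(n, k=1)

    def neg_objective(r):
        return -np.sum(log_sigmoid((r[idx_i] - r[idx_j]) / sigma))

    r_init = np.linspace(0.9, 0.1, n)                 # sorted initial guess
    res = minimize(neg_objective, r_init, bounds=[(0.0, 1.0)] * n)
    return np.sort(res.x)[::-1]

print(np.round(collapsed_rewards(), 3))               # prompt-independent reward profile
```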
### Prompt-Aware Optimization
To avoid having the same reward distribution across prompts, one simple strategy is early stopping. While reward collapse can be avoided this way, early stopping might make the model neglect other important features. A more principled approach is to change the objective: our proposal is to let the utility function \(U\) depend on the prompt. That is, we now consider training a neural network that maximizes
\[\sum_{(\texttt{prom},\texttt{compl}_{w},\texttt{compl}_{l})\in\Pi}U_{\texttt{ prom}}\left(R_{\theta}(\texttt{prom},\texttt{compl}_{w})-R_{\theta}(\texttt{prom}, \texttt{compl}_{l})\right). \tag{3}\]
In general, the choice of \(U_{\texttt{prom}}\) should reflect the open-endedness of the prompt \(\texttt{prom}\). An important feature is that if \(U_{\texttt{prom}}\) is concave, this problem becomes a convex optimization problem (Lemma 4.1). Given the high flexibility in choosing \(U_{\texttt{prom}}\), it is generally recommended to let practitioners choose these functions to meet their needs. Nonetheless, below we introduce a few families of such functions.
For a strictly increasing utility function \(U\), it can be easily demonstrated that the maximum can only be attained when \(r_{1}\geq\cdots\geq r_{n}\) (see Lemma B.1 in the Appendix). As a result, we only need to consider the problem
\[\max_{0\leq r_{n}\leq\ldots\leq r_{1}\leq 1}\sum_{1\leq i<j\leq n}U\left(r_{i}-r _{j}\right). \tag{4}\]
**Class 1.** Let \(U_{\gamma}(x)=x^{\gamma},x\in[0,1]\), for some \(0<\gamma<1\). This utility function encourages the reward distribution to take values either near \(0\) or \(1\) as \(\gamma\) approaches \(1\). Some plots showing the empirical distribution of solutions to (2) are given in Figure 2(a) and (b).
**Class 2.** Let \(U_{\gamma}(x)=-x^{-\gamma},x\in(0,1]\), for \(0<\gamma\leq 1\), and \(U_{0}(x)=\log x\) for \(\gamma=0\). We also let \(U_{\gamma}(0)=-\infty\) for \(0\leq\gamma\leq 1\). In this case, the reward distribution of Eq. 2 becomes more even as \(\gamma\) increases from \(0\) to \(1\). Some plots are shown in Figure 2(c) and (d).
**Class 3.** Let \(U_{\sigma}(x)=\log\texttt{sigmoid}(x/\sigma)\) for \(\sigma>0\). The reward distribution becomes more spread between \(0\) and \(1\) as \(\sigma\) becomes smaller. Some plots are shown in Figure 2(e) and (f).
### Asymptotics
In general, we can explicitly evaluate the reward distribution for any \(n\) by solving the optimization program (4). Nevertheless, it is helpful to get a handle on the empirical distribution of the solution
to this optimization program in the limit \(n\to\infty\). The next result gives a closed-form expression of the reward distribution in the case of a large number of completions.
**Theorem 1**.: _Let \(U_{\gamma}(x)=x^{\gamma}\) for some \(0<\gamma<1\). Then the reward distribution of (4) converges to the Beta distribution \(\mathrm{Beta}\left(\frac{1-\gamma}{2},\frac{1-\gamma}{2}\right)\) as \(n\to\infty\), which has probability density proportional to \(x^{-\frac{1+\gamma}{2}}(1-x)^{-\frac{1+\gamma}{2}}\) on \((0,1)\)._
The proof of Theorem 1 is deferred to Section 4.
**Theorem 2**.: _Let \(U_{\gamma}(x)=-x^{-\gamma}\) for \(0\leq\gamma\leq 1\) (as a convention, take \(U_{0}(x)=\log x\)). Then the reward distribution of (4) converges in distribution to \(\mathrm{Beta}(\frac{1+\gamma}{2},\frac{1+\gamma}{2})\)._
The proof of Theorem 2 can be found in [21, 17]. In the limit \(\gamma\to 1\) in Theorem 2, the Beta distribution tends to \(\mathrm{Beta}(1,1)\), which is the uniform distribution on \([0,1]\). This is indeed an example of the one-dimensional Thomson problem [5], which asks the configuration of \(n\) electrons constrained to a line that repel each other with a force given by Coulomb's law. This problem was first considered by Maxwell. Indeed, [21, 11, 1] prove that the reward distribution will converge to the uniform distribution for \(U_{\gamma}(x)=-x^{-\gamma}\) with \(\gamma\geq 1\).
For the above two classes, the limiting distribution does not admit a probability mass. However, probability mass can emerge in the case of a scaled log-sigmoid function.
**Theorem 3**.: _If \(U\) is strictly increasing and concave, and the derivative of the utility function satisfies \(U^{\prime}(0)<\infty,U^{\prime}(1)>0\), then the reward distribution of (4) converges in distribution to a probability
Figure 2: Reward distributions for different utility functions.
measure \(\mu^{*}\) that satisfies_
\[\mu^{*}(\{0\})=\mu^{*}(\{1\})\geq\tfrac{1}{\kappa+1},\]
_where \(\kappa=U^{\prime}(0)/U^{\prime}(1)\)._
In general, the reward distribution can be characterized from a variational perspective. This gives the following theorem.
**Theorem 4**.: _If \(U\) is bounded, strongly concave, and increasing, then there exists a probability measure \(\mu^{*}\) such that the reward distribution of (2) converges in distribution to \(\mu^{*}\), which is uniquely determined by the following two properties:_
1. \(\mu^{*}\) _maximizes_ \[\mathbb{E}_{X,X^{\prime}\overset{\text{iid}}{\sim}\mu}U(|X-X^{\prime}|)\] _over all probability measures_ \(\mu\) _on_ \([0,1]\)_, and_
2. _it is symmetric with respect to_ \(\frac{1}{2}\) _in the sense that, for any measurable set_ \(A\in[0,1]\) _and_ \(1-A=\{x:1-x\in A\}\)_,_ \(\mu^{*}(A)=\mu^{*}(1-A)\)_._
## 3 Experiments
In this section, we conduct experiments to investigate the phenomenon of reward collapse in a controlled setting and demonstrate that prompt-aware training can prevent reward collapse.
### Experimental Setup
The open-source datasets currently available for RLHF are rather limited. Most of these datasets [22, 3] typically include only a handful of candidate responses (usually a single pair) for each corresponding prompt question. Moreover, the ranking signals in those datasets are usually noisy, either because they are sourced from the Internet [9] or because of the inherent subjectivity of the ranking process.
In order to conduct a carefully controlled experiment, we curated our own dataset, focusing on a single, simplified feature--the length of the response, measured in terms of word count--as the ground truth reward. A subset of questions was selected from the LongForm dataset [16], a question-answer dataset characterized by its lengthy answers. To simulate scenarios with open-ended and concrete problems, we truncated the original answer according to two distinct length distributions, thereby generating eight responses for each prompt: the first distribution is nearly uniform, ranging from 10 to 80 words, while the second is a polarized distribution with response lengths primarily clustered around either 30 or 60 words. Each question was randomly assigned as either open-ended or concrete. Additionally, the phrases "Write the answer in an open-ended way" and "Write either a short answer or a long answer" were added to the open-ended and concrete questions, respectively, to distinguish the question type. Following this process, we constructed a dataset comprising 8192 training questions and 16 test questions.
In our experiments, we focus on the following \(U\) functions: \(x\), \(\log x\), \(-1/x\), \(\log\texttt{sigmoid}(x)\), and the prompt-aware \(U\), which adaptively selects \(U\) from \(x\) and \(-1/x\). Given that the \(U\) function operates on \(x\) in the range \([-1,1]\), we adjust some \(U\) functions with suitable continuous extensions or scaling. We then train a DeBERTa V3 [12] as the reward model. The training details can be found in Appendix A.
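Schematically, the prompt-aware objective of Eq. 3 only requires choosing the utility per example according to the prompt type. The sketch below illustrates this selection for the two utilities used here; the clamping of the reward gap is a crude stand-in for the continuous extension mentioned above, and the flag and tensors are hypothetical placeholders rather than the actual training code.

```python
import torch

def utility(gap, open_ended):
    # U(x) = -1/x for open-ended prompts (spreads rewards out),
    # U(x) = x for concrete prompts (polarizes rewards).
    # The clamp keeps -1/x finite; the continuous extension in the text is more careful.
    return -1.0 / gap.clamp(min=1e-2) if open_ended else gap

def prompt_aware_loss(reward_gaps, open_ended):
    """Negative of Eq. 3 restricted to the ranked pairs of a single prompt.

    reward_gaps: tensor of R(prom, compl_w) - R(prom, compl_l) over all ranked pairs.
    open_ended:  bool flag describing the prompt type.
    """
    return -utility(reward_gaps, open_ended).mean()

gaps = torch.tensor([0.3, 0.1, 0.5])      # made-up reward gaps for one prompt
print(prompt_aware_loss(gaps, open_ended=True),
      prompt_aware_loss(gaps, open_ended=False))
```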
### Experimental Results
**Fixed loss function leads to reward collapse.** As depicted in Figure 4, reward distributions corresponding to different prompts gradually converge towards a single, prompt-independent distribution throughout the training process. Specifically, in the context of Figure 4, where the \(U\) function is the LogSigmoid, the reward distribution exhibits positive probability mass at reward scores of 0 and 1 (illustrated by the flat segments corresponding to the first two and last two scores). This observation validates the prediction encapsulated in Theorem 3. Examining other \(U\) functions, Figures 1 and 3 collectively indicate the occurrence of reward collapse on both training and test datasets. Specifically, employing \(x\) as the \(U\) function results in a polarized reward distribution, whereas utilizing \(-1/x\) as the \(U\) function yields a uniform reward distribution.
**Prompt-aware training avoids reward collapse.** Figures 1 and 3 show the reward distribution at the end of training with varying utility functions. The results, along with Figure 4, reveal that using a prompt-aware \(U\) function effectively prevents reward collapse across both training and test datasets. This strategy yields a more uniform reward distribution for open-ended prompts while promoting a more polarized reward distribution for concrete prompts.
## 4 Proofs
In this section, we present the proofs of the theorems in Section 2. For ease of presentation, we start by proving Theorem 4. Let
\[S(r_{1},\cdots,r_{n}):=\sum_{1\leq i<j\leq n}U(r_{i}-r_{j})\ \ \text{and}\ \hat{\mathbf{r}}\equiv(\hat{r}_{1},\ldots,\hat{r}_{n}):=\arg\max_{0\leq r_{1 },\cdots,r_{n}\leq 1}S(r_{1},\cdots,r_{n}).\]
In addition, for any vector \((u_{1},\cdots,u_{n})\in\mathbb{R}^{n}\), we employ boldface notation \(\mathbf{u}\) to represent the entire vector. This allows us to write \(S(\mathbf{r})\).
Figure 3: **Reward collapse on the test set. This figure follows the same setting as Figure 1 while the evaluation is on the test set. As we can see from the figure, the reward distributions have similar collapse phenomenons on the test set, and using prompt-aware loss can mitigate the collapse.**
### Proof of Theorem 4
First, when \(U\) is concave and strictly increasing, \(\hat{\mathbf{r}}\) exhibits the following properties:
**Lemma 4.1**.: _If \(U\) is strictly concave and strictly increasing, the function \(S(\mathbf{r})\) is concave. Therefore, the optimization problem uniquely determines \(\hat{\mathbf{r}}_{n}\). Additionally, the following properties hold: (1) \(\hat{r}_{1}\geq\cdots\geq\hat{r}_{n}\), and (2) \(1-\hat{r}_{i}=\hat{r}_{n-i+1}\) for any \(1\leq i\leq n\)._
The proof of Lemma 4.1 is straightforward and is provided in Appendix B.1. Upon further examination of the function \(S(\mathbf{r})\), we discover that if \(U\) is strongly concave with parameter \(\mu>0\), then \(S\) also exhibits a form of strong concavity, except in the direction \((1,1,\cdots,1)\). This property is formulated in the following lemma.
**Lemma 4.2**.: _If \(U\) is strongly concave with parameter \(\mu>0\), and we consider another vector \(\mathbf{u}=(u_{1},\ldots,u_{n})\) where \(u_{1}\geq\cdots\geq u_{n}\), the following inequality holds:_
\[S(\mathbf{u})-S(\hat{\mathbf{r}})\leq-\frac{n\mu}{2}\|\operatorname{Proj}_{V_ {n}}(\mathbf{u}-\hat{\mathbf{r}})\|^{2}.\]
_Here, \(V_{n}\subset\mathbb{R}^{n}\) denotes the subspace orthogonal to \((1,1,\cdots,1)\), and \(\|\cdot\|\) represents the Euclidean norm._
The proof of this lemma can be found in Appendix B.2. Our next lemma quantifies the difference between two symmetric probability measures.
**Lemma 4.3**.: _For two different symmetric probability measures \(\mu_{1}\) and \(\mu_{2}\) on \([0,1]\), let \(r_{i}^{(j)}=\frac{1}{2}\inf\{t:\mu_{j}([0,t])\geq\frac{n-i}{n-1}\}+\frac{1}{2}\sup\{t:\mu_{j}([0,t))<\frac{n-i}{n-1}\}\), \(i=1,2,\cdots,n;j=1,2\). Then there exists a positive constant \(c_{0}\) such that_
\[\|\operatorname{Proj}_{V_{n}}(\mathbf{r}^{(1)}-\mathbf{r}^{(2)})\|^{2}\geq c_ {0}n,\]
_for all \(n\)._
Figure 4: **(Left) Reward collapse when using \(\log\)sigmoid as utility function**[25]. The reward distribution of different prompts gradually converges into a single distribution during training. **(Right) Prompt-aware training avoids reward collapse.** When using the prompt-aware loss function, the reward distributions of the two different prompts can be gradually separated during training.
The proof of this lemma is also provided in Appendix B.3. Now, we are ready to prove the uniqueness part of Theorem 4. Due to the length constraint, we will present it as a separate lemma and defer the proof to Appendix B.4. In summary, we use Lemma 4.2 and Lemma 4.3 to demonstrate that for two distinct symmetric measures, their distance is sufficiently large such that at least one of them is not optimal.
**Lemma 4.4**.: _If \(\mu_{1}\) and \(\mu_{2}\) are two symmetric probability measure which both maximize_
\[\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\mu}U(|X-X^{\prime}|)\]
_over all probability measures \(\mu\) on \([0,1].\) Then we have \(\mu_{1}=\mu_{2}.\)_
Now we are ready to prove the convergence part of Theorem 4.
Proof of Theorem 4.: Let \(\hat{\mathbb{P}}_{n}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{\hat{r}_{i}}\) denote the empirical distribution of \(\hat{\mathbf{r}}_{n}\). Note that \(\{\hat{\mathbb{P}}_{n}\}\) are probability measures defined on \([0,1]\), so they are tight. By Prohorov's theorem, there exists a sub-sequence \(\{k(n)\}_{n\geq 1}\) such that \(\hat{\mathbb{P}}_{k(n)}\overset{d}{\rightarrow}\hat{\mu}\). Let \(X_{n},X_{n}^{\prime}\overset{iid}{\sim}\hat{\mathbb{P}}_{k(n)}\) and \(\hat{X},\hat{X}^{\prime}\overset{iid}{\sim}\hat{\mu}\). By the continuous mapping theorem, we also have \(|X_{n}-X_{n}^{\prime}|\overset{d}{\rightarrow}|\hat{X}-\hat{X}^{\prime}|.\) Moreover, because \(U\) is bounded and continuous, the Portmanteau theorem gives
\[\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\mathbb{P}_{k(n)}}\,U(|X-X^{\prime} |)\rightarrow\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\hat{\mu}}\,U(|X-X^{ \prime}|).\]
Let \(\mu\) be another probability measure on \([0,1]\). Let \(\hat{\mathbb{Q}}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{q_{n,i}}\) be such that \(\hat{\mathbb{Q}}_{n}\overset{d}{\rightarrow}\mu\). By the same argument as before, we also have \(\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\hat{\mathbb{Q}}_{k(n)}}\,U(|X-X^{\prime}|)\rightarrow\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\mu}\,U(|X-X^{\prime}|).\) Then, by the optimality of \(\hat{\mathbf{r}}_{n}\),
\[\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\hat{\mu}}\,U(|X-X^{ \prime}|) =\lim_{n\rightarrow\infty}\mathbb{E}_{X,X^{\prime}\overset{iid}{ \sim}\mathbb{P}_{k(n)}}\,U(|X-X^{\prime}|)\] \[\geq\lim_{n\rightarrow\infty}\mathbb{E}_{X,X^{\prime}\overset{iid }{\sim}\hat{\mathbb{Q}}_{k(n)}}\,U(|X-X^{\prime}|)=\,\,\,\mathbb{E}_{X,X^{ \prime}\overset{iid}{\sim}\mu}\,U(|X-X^{\prime}|).\]
This means \(\hat{\mu}\) maximizes \(\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\mu}\,U(|X-X^{\prime}|)\) over all probability measures \(\mu\) on \([0,1]\). From Lemma 4.1, we know that \(1-\hat{r}_{i}=\hat{r}_{n-i+1}\), so \(\hat{\mu}\) is symmetric. If there is another sub-sequence \(m(n)\) such that \(\hat{\mathbb{P}}_{m(n)}\overset{d}{\rightarrow}\hat{\nu}\), then by the same argument as before, \(\hat{\nu}\) is also optimal and symmetric. From Lemma 4.4, \(\hat{\mu}=\hat{\nu}\). Thus for every converging sub-sequence of \(\{\hat{\mathbb{P}}_{n}\}\), the limit distribution must be the same. By the tightness of \(\{\hat{\mathbb{P}}_{n}\}\), we have \(\hat{\mathbb{P}}_{n}\overset{d}{\rightarrow}\mu^{*}\).
### Proof of Theorem 1
For the utility function \(U_{\gamma}(x)=x^{\gamma}\), having established Theorem 4, our objective is to identify a symmetric probability measure \(\mu^{*}\) that maximizes \(\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\mu}U_{\gamma}(|X-X^{\prime}|)\). By employing the variational principle, we can derive a condition that is necessary for optimality. Notably, this condition also suffices for optimality.
**Lemma 4.5**.: _Let \(U_{\gamma}(x)=x^{\gamma}\) for some \(\gamma\in(0,1)\). A probability measure \(\mu\) on \([0,1]\) will maximize \(\mathbb{E}_{X,X^{\prime}\overset{iid}{\sim}\mu}\,U_{\gamma}(|X-X^{\prime}|)\) if it satisfies the condition that \(\mathbb{E}_{X\sim\mu}\,U_{\gamma}(|X-c|)\) is independent of \(c\in[0,1]\)._
The proof of Lemma 4.5 is provided in Appendix C.1. Therefore, proving Theorem 1 is reduced to verifying the condition stated in Lemma 4.5. This verification process is tedious and will be deferred to Appendix C.2 for brevity.
### Proof of Theorem 3
Theorem 3 can be intuitively understood as follows: If the function \(U\) satisfies \(U^{\prime}(0)<\infty\) and \(U^{\prime}(1)>0\), we can show, by analyzing the first-order optimality condition, that a positive fraction of the entries of \(\hat{\mathbf{r}}\) is equal to \(1\).
Proof of Theorem 3.: The derivative of \(-\sum_{i<j}U(r_{i}-r_{j})\) with respect to \(r_{k}\) is given by
\[-\left.\frac{\partial\sum_{i<j}U(r_{i}-r_{j})}{\partial r_{k}}\right|_{\hat{r} _{1},\cdots,\hat{r}_{n}}=\sum_{i=1}^{k-1}U^{\prime}(\hat{r}_{i}-\hat{r}_{k})- \sum_{j=k+1}^{n}U^{\prime}(\hat{r}_{k}-\hat{r}_{j})\leq\ (k-1)U^{\prime}(0)-(n-k)U^{\prime}(1).\]
The inequality follows from the concavity of \(U\), which makes \(U^{\prime}\) non-increasing. If \(k\leq n/(\kappa+1)\), we have \((k-1)U^{\prime}(0)-(n-k)U^{\prime}(1)\leq 0\). Hence, \(\hat{r}_{k}=1\); otherwise, we could increase \(\hat{r}_{k}\) to make \(\sum_{i<j}U(\hat{r}_{i}-\hat{r}_{j})\) larger. As a result, \(\hat{r}_{1}=\cdots=\hat{r}_{[n/(\kappa+1)]}=1.\) This gives \(\hat{\mathbb{P}}_{n}(\{1\})\geq[\frac{n}{\kappa+1}]/n\). By Theorem 4, we know that there exists a limiting distribution \(\mu^{*}\) such that \(\hat{\mathbb{P}}_{n}\stackrel{d}{\rightarrow}\mu^{*}\) and \(\mu^{*}(\{1\})\geq 1/(\kappa+1)\). Due to the symmetry proved in Lemma 4.1, we also have \(\mu^{*}(\{0\})\geq 1/(\kappa+1)\).
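The endpoint mass predicted by this argument can be checked numerically. The snippet below is an illustrative sketch, not code from the paper: it maximizes \(\sum_{i<j}U(r_{i}-r_{j})\) over \([0,1]^{n}\) for the example utility \(U(x)=\log(1+x)\), reading \(\kappa\) as \(U^{\prime}(0)/U^{\prime}(1)=2\) (our reading of the bound above), so roughly at least one third of the coordinates are expected at each endpoint.

```python
import numpy as np
from scipy.optimize import minimize

n = 60
U = lambda x: np.log1p(np.maximum(x, -1 + 1e-9))   # U(x) = log(1 + x), clipped for numerical safety
i, j = np.triu_indices(n, k=1)                      # all index pairs with i < j

def neg_objective(r):
    # maximize sum_{i<j} U(r_i - r_j)  <=>  minimize its negative
    return -np.sum(U(r[i] - r[j]))

r0 = np.linspace(1.0, 0.0, n)                       # ordered initialization r_1 >= ... >= r_n
res = minimize(neg_objective, r0, bounds=[(0.0, 1.0)] * n, method="L-BFGS-B")
r_hat = np.sort(res.x)[::-1]

frac_at_one = np.mean(r_hat > 1 - 1e-3)
frac_at_zero = np.mean(r_hat < 1e-3)
# expected to be at least about 1/3 each if kappa = U'(0)/U'(1) = 2
print(f"mass near 1: {frac_at_one:.2f}, mass near 0: {frac_at_zero:.2f}")
```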
## 5 Extension to Pairwise Comparisons
Our Prompt-Aware approach can be generalized to accommodate other settings, such as instances where only pairwise preference data is accessible. Pairwise preference data may include loops, similar to the rock-paper-scissors scenario, and can be produced from a probabilistic model. Consequently, the data might simultaneously indicate a preference of A over B and a preference of B over A. Pairwise preference data is extensively utilized in RLHF [8, 14, 29, 25, 28].
We explore the well-known Bradley-Terry-Luce (BTL) model [7, 20], which assumes the existence of scores \(\{\theta_{i}\}_{1\leq i\leq n}\) for \(n\) items such that the preference between item \(i\) and item \(j\) is given by \(\mathbb{P}(i\text{ is preferred over }j)=\texttt{sigmoid}(\theta_{i}-\theta_{j})\), where \(\texttt{sigmoid}\) denotes the sigmoid function \(\texttt{sigmoid}(x)=1/(1+\exp(-x))\). This probabilistic model effectively captures the relative preferences between items, based on the disparity in their underlying scores.
To illustrate our framework, we consider the following expected version problem:
\[\max_{0\leq r_{1},\cdots,r_{n}\leq 1}S(r_{1},\cdots,r_{n}),\text{ where }S(r_{1},\cdots,r_{n})=\sum_{1\leq i,j\leq n}U(r_{i}-r_{j})\texttt{sigmoid}( \theta_{i}-\theta_{j}).\]
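As a concrete illustration of this objective, the following is a minimal sketch (not from the paper) that evaluates \(S(\mathbf{r})\) under the BTL preference model; the linear utility \(U(x)=x\) and the \(\theta\) values are chosen purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def S(r, theta, U):
    """Pairwise objective S(r) = sum_{i,j} U(r_i - r_j) * sigmoid(theta_i - theta_j)."""
    dr = r[:, None] - r[None, :]              # r_i - r_j for all pairs
    dtheta = theta[:, None] - theta[None, :]  # theta_i - theta_j for all pairs
    return np.sum(U(dr) * sigmoid(dtheta))

# toy example: latent BTL scores and one candidate reward assignment
theta = np.linspace(-1.0, 1.0, 5)
r = sigmoid(theta)
print(S(r, theta, U=lambda x: x))             # U(x) = x used only as an example utility
```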
The function \(S(\mathbf{r})\) is similar to a family of log-likelihood functions considered in [23]. We presume that \(U\) is increasing and concave. Then, similar to Lemma 4.1, \(S\) is also concave in \((r_{1},\cdots,r_{n})\). Let \(\hat{\mathbf{r}}=(\hat{r}_{1},\ldots,\hat{r}_{n})\) be the vector that maximizes \(S(\mathbf{r})=\sum_{1\leq i,j\leq n}U(r_{i}-r_{j})\texttt{sigmoid}(\theta_{i} -\theta_{j})\). We present the following consistency result for \(\hat{\mathbf{r}}\):
**Theorem 5**.: _Assume that \(U\) is increasing and \(\mu\)-strongly concave for \(\mu>0\). Write \(\theta_{\max}=\max_{1\leq i\leq n}|\theta_{i}|\). Then, \(\hat{\mathbf{r}}\) keeps the order of \(\{\theta_{i}\}_{1\leq i\leq n}\) and satisfies_
\[|\hat{r}_{i}-\hat{r}_{j}|\leq 2\sqrt{U(1)(1+\mathrm{e}^{\theta_{\max}})| \theta_{i}-\theta_{j}|/\mu}.\]
The proof of these results can be found in Appendix D. Theorem 5 ensures that for any increasing and strongly concave utility function \(U\), \(\hat{\mathbf{r}}\) is a reliable estimate of \(\{\theta_{i}\}_{1\leq i\leq n}\), in the sense that \(\hat{r}_{i}\) and \(\hat{r}_{j}\) are close if \(\theta_{i}\) and \(\theta_{j}\) are close.
Even though we may not be able to determine the precise limiting distribution of \(\hat{\mathbf{r}}_{n}\) in this extended setting, we can still extract insights from our previous analysis in Section 2. As previously observed, selecting \(U(x)=x\) tends to polarize the reward distribution, while selecting \(U(x)=-1/x\) yields a more uniform reward distribution. This phenomenon is also evident in this setting, as observed in the results presented in Figure 5. More details are given in Appendix D.
Based on these findings, we can conclude that in this extended setting, we can also employ a prompt-aware utility function \(U\) to mitigate reward collapse and achieve the desired reward distribution by carefully selecting the form of \(U\). This provides us with flexibility in shaping the reward distribution according to our specific requirements.
## 6 Discussion
In this paper, we have introduced an empirical phenomenon known as reward collapse that arises during reward model training for aligning LLMs using human preference rankings. This phenomenon results in the same reward distribution regardless of the prompt type. The occurrence of reward collapse stems from neural network interpolation during the final training phase. To mitigate reward collapse, we propose utility functions that consider the nature of prompts and an analytical framework that evaluates reward distribution, yielding closed-form reward expressions. Synthetic experiments substantiate our findings, presenting a method superior to early stopping to tackle reward collapse.
While our experiments provide valuable insights, it is important to acknowledge their limitations, primarily due to constrained computational resources available to us. Given abundant resources, future research can explore the use of a more diverse range of prompts, varying in terms of their open-endedness. Additionally, it would be interesting to investigate the extent to which the trained reward model enhances the capabilities of large language models, such as their ability to self-calibrate uncertainty [18, 15]. Theoretical investigations could focus on finding increasing, concave functions that precisely match a given discrete reward distribution. On the practical side, developing a method to choose a utility function based on prompts, perhaps using a parameter such as \(\gamma\) in Section 2.2, poses an intriguing avenue for further exploration. Furthermore, exploring the potential benefits of truncated ranking by requiring human labelers to provide partial rankings of acceptable completions and ignore unacceptable completions could offer valuable insights into improving the training of reward models.
Figure 5: Reward distribution with different choices of \(\{\theta_{i}\}_{1\leq i\leq n}\) when \(n=20\).
## Acknowledgments
We are grateful to Banghua Zhu for helpful discussions. We also thank Long Ouyang and Jan Leike for clarifications on [25]. This work was supported in part by NSF through CAREER DMS-1847415 and CCF1934876, Analytics at Wharton, and Wharton AI and Analytics for Business.
|
2303.02835 | Traffic Scene Parsing through the TSP6K Dataset | Traffic scene perception in computer vision is a critically important task to
achieve intelligent cities. To date, most existing datasets focus on autonomous
driving scenes. We observe that the models trained on those driving datasets
often yield unsatisfactory results on traffic monitoring scenes. However,
little effort has been put into improving the traffic monitoring scene
understanding, mainly due to the lack of specific datasets. To fill this gap,
we introduce a specialized traffic monitoring dataset, termed TSP6K, containing
images from the traffic monitoring scenario, with high-quality pixel-level and
instance-level annotations. The TSP6K dataset captures more crowded traffic
scenes with several times more traffic participants than the existing driving
scenes. We perform a detailed analysis of the dataset and comprehensively
evaluate previous popular scene parsing methods, instance segmentation methods
and unsupervised domain adaption methods. Furthermore, considering the vast
difference in instance sizes, we propose a detail refining decoder for scene
parsing, which recovers the details of different semantic regions in traffic
scenes owing to the proposed TSP6K dataset. Experiments show its effectiveness
in parsing the traffic monitoring scenes. Code and dataset are available at
https://github.com/PengtaoJiang/TSP6K. | Peng-Tao Jiang, Yuqi Yang, Yang Cao, Qibin Hou, Ming-Ming Cheng, Chunhua Shen | 2023-03-06T02:05:14Z | http://arxiv.org/abs/2303.02835v2 | # Traffic Scene Parsing through the TSP6K Dataset
###### Abstract
Traffic scene parsing is one of the most important tasks to achieve intelligent cities. So far, little effort has been spent on constructing datasets specifically for the task of traffic scene parsing. To fill this gap, here we introduce the TSP6K dataset, containing 6,000 urban traffic images and spanning hundreds of street scenes under various weather conditions. In contrast to most previous traffic scene datasets collected from a driving platform, the images in our dataset are captured from a shooting platform hanging high above the street. Such traffic images can capture more crowded street scenes with several times more traffic participants than the driving scenes. Each image in the TSP6K dataset is provided with high-quality pixel-level and instance-level annotations. We perform a detailed analysis of the dataset and comprehensively evaluate the state-of-the-art scene parsing methods. Considering the vast difference in instance sizes, we propose a detail refining decoder, which recovers the details of different semantic regions in traffic scenes. Experiments have shown its effectiveness in parsing high-hanging traffic scenes. Code and dataset will be made publicly available.
## 1 Introduction
As a classic and important computer vision task, the scene parsing task aims to segment the semantic objects and stuff from the given images. Nowadays, the emergence of large-scale scene understanding datasets, such as ADE20K [61] and COCO-Stuff [2], has greatly promoted the development of scene parsing algorithms [34, 58]. Many application scenarios, such as robot navigation [13, 26] and medical diagnosis [37], benefit from these scene parsing algorithms [30, 34].
As an important case of scene parsing, traffic scene parsing focuses on understanding urban street scenes, where the most frequently appeared instances are humans and vehicles. To date, there are already a few large-scale publicly available street scene datasets, such as KITTI [18], Cityscapes [12], and BDD100K [48]. A characteristic of these datasets is that they are mostly collected from a driving platform, such as a driving car, as shown in Fig. 1 and hence are more suitable for the autonomous driving scenario. Benefiting from these finely-annotated datasets, the segmentation performance of the recent scene parsing approaches [8, 21, 29, 39, 54, 62] is also considerably improved.
The images from the aforementioned datasets are collected from driving vehicles. However, little attention has been paid to the high-hanging traffic scenes. This kind of traffic scene is captured by a shooting platform hanging high above the street, which can offer a rich vein of information on traffic flow [27, 35]. We summarize the differences between the high-hanging and the driving traffic scenes as follows. **(i) Broader view:** The urban road shooting platform hangs at a high location (4.5-6 meters) on the street, which is more than twice as high as the driving platform. The high-hanging platform sees more street content than the driving one, as shown in Fig. 1, especially at the crossing. This makes the collected images from the high-hanging platform more challenging than those from existing
Figure 1: Comparison of the ways to capture scenes between Cityscapes [12] and our TSP6K. Cityscapes collects the traffic images captured from the driving platform, such as a driving car. In contrast, the TSP6K dataset collects traffic images from the urban road high-hanging platform, which captures more crowded scenes with a broader view.
datasets. **(ii) Road indications:** High-hanging scenes provide additional important categories that only occasionally appear in driving scenes, such as zebra crossings and driving indicators. These road indications are important for analyzing traffic conditions.
To facilitate the research on the high-hanging traffic scenes, we collect many traffic images from the urban road shooting platform. To keep the diversity of our dataset, we collect high-hanging images from hundreds of traffic scenes under different weather conditions and at different times of day. In total, we collect 6,000 traffic images and ask annotators to finely annotate them with semantic-level and instance-level labels. In Fig. 2, we have shown four traffic images under different weather conditions and the corresponding semantic-level and instance-level labels.
Based on the proposed TSP6K dataset, we evaluate many classic scene parsing approaches on the proposed benchmark and summarize several valuable tips for high-hanging scene parsing. Besides, we propose a detail refining decoder for high-hanging scene parsing. The detail refining decoder utilizes the encoder-decoder structure and refines the high-resolution features by a region refining module. The region refining module utilizes the self-attention mechanism and computes the attention between the pixels and each region token. The attention is further used to refine the pixel relationships in different semantic regions. The proposed method has achieved 75.4% mIoU score and 58.1% iIoU score on the TSP6K validation set. To verify the effectiveness of each component in the detail refining decoder, we also conduct a series of ablation experiments.
In summary, the contributions of this paper are summarized as follows:
* We propose a new dataset, termed TSP6K, which collects traffic images spanning various scenes from the urban road high-hanging platform. We provide pixel-level annotations of fine semantic labels and instance labels.
* Based on the TSP6K dataset, we evaluate many recent state-of-the-art methods and analyze their performance. We summarize a few tips that benefit high-hanging street scene parsing.
* To improve street scene parsing, we propose a detail-refining decoder, which learns several region tokens and computes the attention maps between the tokens and the high-resolution features. The attention is further used to refine the details of different semantic regions. Experiments validate the effectiveness of the proposed decoder.
## 2 Related Work
### Scene Parsing Datasets
Scene parsing datasets with full pixel-wise annotations are utilized for training and evaluating the scene parsing algorithms. As an early one, the PASCAL VOC dataset [16] was proposed in a challenge, which aims to parse the objects of 20 carefully selected classes in each image. Later, the community proposed more complex datasets with many more classes, such as COCO [32] and ADE20K [61]. The scenes in the above datasets span a wide range. Besides, different from them, some datasets focus on particular scenes, such as the traffic scenes. There exist many traffic scene parsing datasets [24, 36, 51, 52], such as KITTI [18], Cityscapes [12], and BDD100K [48]. These traffic parsing datasets annotate the most frequent classes in the traffic scenes, such as traffic signs and riders. With the help of these finely-annotated traffic datasets, approaches based on neural networks have achieved great success in parsing traffic scenes. Furthermore, some datasets [14, 38] focus on the night driving scenes.
Despite the success of the above datasets, we find that the traffic scenes in these datasets are all from the driving platform. The models trained on these datasets often do not perform well when parsing the traffic scenes obtained from the urban road high-hanging platform. The high-hanging platform usually has a larger view than the driving platform, which captures much more crowded scenes. Our dataset, which focuses on high-hanging scenes, is a supplement to current traffic datasets.
Figure 2: Examples are randomly picked from the TSP6K dataset. Each image is associated with its corresponding semantic label and instance label. We have masked the vehicle plates for privacy protection. More examples can be found in the supplemental material.
### Scene Parsing Approaches
Convolutional neural networks have facilitated the development of scene parsing approaches. Long et al. [34] first proposed a fully convolutional network (FCN) that generates dense predictions for scene parsing. Later, some approaches, such as the popular DeepLab [4, 5] and PSPNet [58], have benefited from large receptive fields and multi-scale features, improving the performance by a large margin. Besides, some approaches, including SegNet and DeepLabv3+ [1, 7, 8, 30], utilize the encoder-decoder structure to refine low-resolution coarse predictions with the details of high-resolution features.
Recently, researchers found that attention mechanisms modeling long-range dependencies among pixels can improve scene parsing networks. Some approaches, such as PSANet [59] and OCNet [50], directly apply the attention mechanism to the backbone features to model long-range context dependencies, which greatly improves the segmentation performance. However, self-attention brings heavy computational costs. This motivated researchers to reduce the cost of self-attention by introducing strip pooling or criss-cross attention [23, 25].
With the successful introduction of Transformers into image recognition [15], researchers have attempted to apply Transformers to the segmentation task [9, 10, 45, 60, 39]. A typical example is SegFormer [45], which improves upon previous CNN-based models by a large margin. Furthermore, another research line [43, 46, 57, 55, 47] explores real-time scene parsing algorithms, attending to both effectiveness and efficiency.
## 3 Dataset and Analysis
### Data Collection
One significant aspect of researching the high-hanging traffic scenes is data. Once we construct a dataset for the high-hanging traffic scenes, researchers in the community can develop new algorithms based on the novel data characteristics. To facilitate the research, we collect a large number of traffic images from the high-hanging shooting platform. To ensure the generalization of the scene parsing algorithms, we collect the traffic images from more than 600 scenes. As crossings and pedestrian crossings are an essential part of traffic scenes, where congestion and accidents often occur, we ensure that the majority of the collected scenes contain crossings. Besides, considering the weather diversity, we select the traffic images under various weather conditions. As a result, we finally select 6,000 traffic images.
### Data Annotation
After collecting data, we start to annotate the traffic images. The complete annotated classes are shown at the top of Fig. 3. Specifically, we annotate 21 classes, where most of the classes are the same as the class definition in Cityscapes [12]. We remove the unseen class 'train' in our dataset and add three new classes. As the indications on the road are vital for understanding the high-hanging traffic scenes, we ask the annotators to label three indication classes, namely crosswalks, driving indications, and lanes. Similar to the annotation policy of Cityscapes [12], the traffic images are also annotated from back to front. To keep the quality of the labels, the annotators are asked to double-check the accuracy of the labels.
### Data Split
The dataset is divided into three splits for training, validation, and test according to the ratio of 5:2:3. Images collected from different scenes are randomly split into different sets. In total, there are 2,999, 1,207, and 1,794 images for the train, validation, and test sets, respectively.
### Data Analysis
The proposed dataset possesses complicated traffic scenarios. We compare the TSP6K dataset with previous traffic datasets regarding the scene type, instance density, scale variance of instances, weather diversity, and spatial resolution. In Tab. 1 and Tab. 2, we have listed the comparison among different traffic datasets. The characteristics of our TSP6K dataset can be summarized as follows:
**High-hanging scenes:** To the best of our knowledge, all previous popular traffic datasets focus on the driving scenes. The images of these datasets are collected from the driving platform. Different from them, we address the complicated traffic scenes captured from the urban road high-hanging platform. Thus, our dataset is more useful for traffic flow analysis.
**High instance density:** One of the most important characteristics is that TSP6K has many traffic instances (i.e., traffic participants), including humans and vehicles. Since the majority of the traffic scenes are shot at crossings, the
Figure 3: Class and scene information of the TSP6K dataset.
instance density on the road is much larger than in the driving scenes. As shown in Tab. 2, there are 10.7 humans and 31.3 vehicles in each traffic image on average. In Fig. 4(a), it can be seen that the driving datasets have few images containing more than 50 instances. In contrast, our TSP6K dataset has a large number of images containing more than 50 instances, accounting for about 30% of the images in the training and validation sets. Besides, Fig. 3 shows the average number of instances per image in each scene.
**Large scale variance of instances:** For the high-hanging scenes, the scale difference between instances in the near and far field is very large, as shown in Fig. 4(b). TSP6K contains more small traffic instances than Cityscapes and is more crowded. The high-hanging platform usually has a much broader view than the driving platform and can thus capture much more content in the distance. This high scale variance reflects real traffic scenarios.
**Weather diversity:** In contrast to KITTI [18], Cityscapes [12], and BDD100K [48], which mainly collect traffic images under good weather conditions, we collect traffic images under diverse weather conditions, as shown in Tab. 1. The weather conditions can be mainly divided into four groups: sunny\(\&\)cloudy day, rain, fog, and snow. In total, the proportions of the sunny\(\&\)cloudy day, rain, fog, and snow are 60%, 22%, 8%, and 10%, respectively. The different weather conditions help ensure the generalization of the segmentation models. Besides, our dataset also considers the visibility of the traffic images. The visibility of the traffic images is limited under poor illumination and weather conditions. In our dataset, the ratios of the low visibility and the high visibility are 15.8% and 84.2%, respectively.
**High resolution:** The mean spatial resolution of the captured images is \(2942\times 1989\), which is much larger than KITTI, Cityscapes and BDD100K. The high-resolution images also ensure the clearness of the street contents, especially the small traffic participants.
## 4 Scene Parsing Benchmark
The scene parsing benchmark aims to evaluate the performance of previous popular scene parsing methods. We run all the scene parsing methods using a popular codebase, mmsegmentation [11]. All the models are trained for 160,000 iterations on a node with 8 NVIDIA A100 GPUs. More training settings can be found in the supplemental material. We utilize the mIoU [34] metric to evaluate the performance of the scene parsing methods. As mentioned in [12], the mIoU metric is biased toward object instances with large sizes. However, the high-hanging traffic scenes are full of small traffic participants. To better evaluate the in
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Type & Datasets & Class & Weather & Visibility & Image Resolution & Avg TP & TP \(>\) 50 & TP \(>\) 75 & TP \(>\) 100 \\ \hline \hline \multirow{4}{*}{} & KITTI [18] & 19 & Good & High & 1,241\(\times\)375 & 4.9 & 0 & 0 & 0 \\ & Cityscapes [12] & 19 & Good & High & 2,048\(\times\)1,024 & 18.8 & 54 & 10 & 4 \\ & Mapillary [36] & 65 & Diverse & Low \& High & 3,436\(\times\)2,486 & 12.3 & 102 & 15 & 3 \\ & BDD100K [48] & 40 & Good & Low \& High & 1,280\(\times\)720 & 12.8 & 5 & 0 & 0 \\ \hline High-Hanging & TSP6K (ours) & 21 & Diverse & Low \& High & 2,942\(\times\)1,989 & 42.0 & 1,227 & 367 & 73 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison among different traffic scene parsing datasets. **Avg TP** denotes the number of the average traffic participants in each image. **TP \(>\) 50** denotes the number of images that contains more than 50 traffic participants. As the instance labels of the test sets in other datasets are not available, we all count the traffic participants in the train and validation sets. We can see that TSP6K dataset contains much more traffic images containing more than 50 traffic participants when compared with other datasets.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Datasets} & \#Humans & \#Vehicles & \multirow{2}{*}{\#H./images} & \multirow{2}{*}{\#V./images} \\ & \([10^{3}]\) & \([10^{3}]\) & & \\ \hline KITTI [18] & 6.1 & 30.3 & 0.8 & 4.1 \\ Cityscapes [12] & 24.4 & 41.0 & 7.0 & 11.8 \\ TSP6K (ours) & 64.0 & 188.2 & 10.7 & 31.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of traffic participants in traffic images.
Figure 4: Data analysis of the TSP6K dataset. (a) The distribution of the number of instances in each image. (b) The distribution of the instance sizes. (c) The number of instances for each category.
stances of the traffic participants, we utilize the iIoU metric over all classes containing instances, following [12].
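For reference, a minimal sketch of the class-wise IoU and mIoU computation from a confusion matrix is given below. It is illustrative only and is not the exact evaluation code of the benchmark; the iIoU additionally weights pixels by instance size following the Cityscapes protocol [12], which is omitted here.

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    # rows = ground-truth class, columns = predicted class
    mask = gt != ignore_index
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(conf):
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)  # ignore absent classes
    return iou, np.nanmean(iou)

# usage on dummy predictions for a 21-class problem
gt = np.random.randint(0, 21, size=(4, 64, 64))
pred = np.random.randint(0, 21, size=(4, 64, 64))
conf = confusion_matrix(pred.ravel(), gt.ravel(), num_classes=21)
per_class_iou, miou = mean_iou(conf)
print(f"mIoU = {miou:.3f}")
```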
Although the instance segmentation labels are also provided, the main purpose of this paper is the high-hanging scene parsing. We have also evaluated several classic instance segmentation methods. The evaluation results can be found in the supplemental material.
### Performance Analysis
The evaluation results of different methods can be found in Tab. 3. The scene parsing methods can be roughly divided into four groups: methods using pyramid feature fusion, methods using an encoder-decoder structure, methods using the self-attention mechanism, and methods based on transformer structures.
**Pyramid feature fusion** is a very useful strategy in scene parsing. Among the evaluated methods, PSPNet [58], UperNet [44], DeepLabv3 [6], and DeepLabv3+ [7] all utilize multi-scale features. PSPNet and DeepLabv3 purely utilize the multi-scale features. (Here, FCN [34] only utilizes the last output backbone features.) It can be seen that purely using the multi-scale features brings few gains to the segmentation networks when parsing the high-hanging scenes, as can be seen by comparing PSPNet to FCN. We attribute this to the fact that TSP6K contains a large number of small objects, such as bicycles and riders. The extremely downsampled features in the pyramid module may be harmful to parsing such objects; accordingly, the iIoU score also decreases when comparing PSPNet or DeepLabv3 to FCN.
**Encoder-decoder structure** utilizes the high-resolution low-level features to refine the details of segmentation maps. UperNet [44] and DeepLabv3+ [7] apply the encoder-decoder structure to the segmentation network. Compared with DeepLabv3 [6], DeepLabv3+ [7], which utilizes the high-resolution features, further improves the segmentation results by more than 0.6% in mIoU and 1% in iIoU on both sets. We observe that the encoder-decoder structure is very useful for small object segmentation, where the iIoU score is improved by a large margin.
**Self-attention mechanism** is widely used in scene parsing methods; it models the long-range pixel dependence based on the backbone features to refine the final segmentation results. EncNet [53], DANet [17], EMANet [28], and CCNet [25] all utilize different kinds of self-attention mechanisms. EncNet shows no performance gain over FCN. In contrast, DANet, EMANet, and CCNet, which are based on the spatial self-attention mechanism, all obtain superior performance to FCN. DANet outperforms FCN by more than 0.6% in mIoU and nearly 1% in iIoU on both sets. We attribute this to the fact that EncNet utilizes the channel-wise self-attention mechanism to build the global context, which cannot preserve local details well, especially for the high-hanging scenes that contain traffic participants of very different sizes.
**Transformer structure** has been successfully applied to computer vision tasks [15, 3] and often achieves better recognition results than convolutional neural network structures. The typical transformer structure stacks
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Publication} & \multirow{2}{*}{Backbone} & \multirow{2}{*}{Parameters} & \multirow{2}{*}{GFlops} & \multicolumn{2}{c|}{Validation} & \multicolumn{2}{c}{Test} \\ \cline{6-9} & & & & & mIoU (\%) & iIoU (\%) & mIoU (\%) & iIoU (\%) \\ \hline FCN [34] & CVPR’15 & R50 & 49.5M & 454.1 & 71.5 & 55.2 & 72.5 & 55.1 \\ PSPNet [58] & CVPR’16 & R50 & 49.0M & 409.8 & 71.7 & 54.8 & 72.6 & 54.8 \\ DeepLabv3 [6] & ArXiv’17 & R50 & 68.1M & 619.3 & 72.4 & 55.0 & 73.3 & 55.0 \\ UperNet [44] & ECCV’18 & R50 & 66.4M & 541.0 & 72.4 & 55.2 & 73.1 & 55.0 \\ DeepLabv3+ [7] & ECCV’18 & R50 & 43.6M & 404.8 & 73.1 & 56.1 & 73.9 & 56.3 \\ PSANet [59] & ECCV’18 & R50 & 59.1M & 459.2 & 71.3 & 54.5 & 72.6 & 54.8 \\ EMANet [28] & ICCV’19 & R50 & 42.1M & 386.8 & 72.0 & 55.5 & 72.9 & 55.5 \\ EncNet [53] & CVPR’18 & R50 & 35.9M & 323.3 & 71.4 & 54.8 & 72.7 & 55.0 \\ DANet [17] & CVPR’19 & R50 & 49.9M & 457.3 & 72.3 & 56.0 & 73.1 & 56.1 \\ CCNet [25] & ICCV’19 & R50 & 49.8M & 460.2 & 72.0 & 55.3 & 73.1 & 55.3 \\ KNet-UperNet [56] & NeurIPS’21 & R50 & 62.2M & 417.4 & 72.6 & 56.8 & 73.7 & 56.5 \\ OCRNet [49] & ECCV’20 & HR-w18 & 12.1M & 215.3 & 73.2 & 55.3 & 73.7 & 55.1 \\ SETR [60] & CVPR’21 & ViT-Large & 310.7M & 478.3 & 70.5 & 44.9 & 70.7 & 45.0 \\ SegFormer [45] & NeurIPS’21 & MIT-B2 & 24.7M & 72.0 & 72.9 & 54.6 & 73.8 & 54.9 \\ SegFormer [45] & NeurIPS’21 & MIT-B5 & 82.0M & 120.8 & 74.5 & 56.7 & 74.8 & 56.7 \\ Swin-UperNet [33] & ICCV’21 & Swin-Base & 121.3M & 1184.6 & 74.9 & 57.4 & 75.6 & 57.2 \\ SegNeXt [20] & NeurIPS’22 & MSCAN-Base & 27.6M & 80.2 & 74.6 & 57.3 & 75.4 & 57.2 \\ SegNeXt [20] & NeurIPS’22 & MSCAN-Large & 48.9M & 258.6 & 74.8 & 57.7 & 75.6 & 57.6 \\ SegNeXt + DRD & – & MSCAN-Base & 46.1M & 361.1 & 75.4 & 58.1 & 75.7 & 57.9 \\ SegNeXt + DRD & – & MSCAN-Large & 117.4M & 1136.4 & 76.1 & 59.0 & 76.5 & 58.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation results of previous scene parsing approaches on the TSP6K validation and test sets.
several encoder blocks that first apply residual self-attention, followed by a feedforward neural network. SETR [60], SegFormer [45], Swin-UperNet [33], and SegNeXt [20] all utilize the transformer structure as the backbone for scene parsing. Among them, SETR achieves much worse parsing results, while the other transformer structures obtain superior results to the convolutional backbone. Although SegFormer-B2 [45] achieves an mIoU score higher than FCN's by about 1.3%, its iIoU score is even lower than FCN's. UperNet [44] with a Swin [33] backbone performs much better than with a ResNet50 [22] backbone in terms of both the mIoU and iIoU metrics. Besides, compared with Swin-UperNet, SegNeXt obtains similar performance with only about 20% of the parameters and 7% of the GFlops. We observe that the SegNeXt-Large model achieves smaller gains than the SegNeXt-Base model, as shown in Tab. 3. In this paper, we design a more powerful decoder that can improve both the SegNeXt-Base and SegNeXt-Large models.
In summary, the encoder-decoder structure, spatial self-attention mechanism, and transformer structure are very useful strategies for improving high-hanging scene parsing. In the following, based on these strategies, we design a more powerful decoder to generate accurate high-hanging scene parsing results.
## 5 Method
As analyzed in Sec. 3, high-hanging scenes usually capture much more traffic content than the driving scenes, and the scale and shape variances of different semantic regions are much larger. Besides, small things and stuff account for a large proportion of the scene. These factors make accurately parsing the high-hanging scenes difficult. To adapt to the high-hanging scenes, we propose a detail refining decoder. The design principles of our decoder are two-fold.
First, as the output feature maps of the backbone have a small resolution, building decoders based on the low-resolution features usually generates coarse parsing results and hence largely affects the small object parsing. As verified in some previous works [31, 40, 42], the low-level high-resolution features are helpful for segmenting small objects. Thus, we utilize the encoder-decoder structure to fuse the low-resolution and high-resolution features to improve the small object parsing.
Second, as analyzed in Sec. 4, self-attention is an efficient way to encode spatial information for the high-hanging scene parsing. However, directly applying the self-attention mechanism to encode high-resolution features will consume huge computation resources, especially when processing high-resolution traffic scene images. Inspired by [9] that learns representations for each segment region, we propose to introduce several region tokens and build pairwise correlations between each region token and each patch token from the high-resolution features.
### Overall Pipeline
We construct the scene parsing network for high-hanging scenes based on the valuable tips summarized in Sec. 4. First, we adopt the powerful encoder presented in SegNeXt [20] as our encoder, which achieves good results with low computational costs on our TSP6K dataset. Then, we build a detail refining decoder (DRD) upon the encoder to generate high-hanging scene parsing results. The pipeline of the detail refining decoder is shown in Fig. 5, which contains two parts: For the first part, we follow the decoder design of DeepLabv3+ [7] to generate fine-level feature maps. Note that we do not use the \(\times 4\) downsampling features from the second stage but the \(\times 8\) ones from the third stage as suggested by [20]. The ASPP module is added upon the encoder directly. The second part is the region refining module, which is described in the following subsection.
### Region Refining Module
The region refining module is proposed to refine different semantic regions in the traffic image. Formally, let \(F\in\mathbb{R}^{HW\times C}\) denote the flattened features from the first part of the decoder, where \(H\), \(W\), and \(C\) denote the height, width, and number of channels, respectively. Let \(R\in\mathbb{R}^{N\times C}\) denote \(N\) learnable region tokens, each of which is a \(C\)-dimensional vector. The flattened features
Figure 5: Pipeline of the detail refining decoder. Our decoder contains two parts. The first part is similar to the decoder presented in DeeplabV3+ [7]. Differently, we use the feature maps from the third stage (\(\times 8\) downsampling compared to the input) to fuse the feature maps from ASPP. The second part is the proposed region refining module.
and the learnable region tokens \(R\) are separately sent into three linear layers to generate the query, key, and value as follows:
\[R_{Q},F_{K},F_{V}=f_{Q}(R),f_{K}(F),f_{V}(F), \tag{1}\]
where \(f_{Q}\), \(f_{K}\), and \(f_{V}\) are linear layers and \(R_{Q}\in\mathbb{R}^{N\times C}\), \(F_{K}\in\mathbb{R}^{HW\times C}\), \(F_{V}\in\mathbb{R}^{HW\times C}\). We compute the multi-head cross-attention between \(F\) and \(R\) as follows:
\[R_{E}=\mathrm{Softmax}\left(\frac{R_{Q}F_{K}^{T}}{\sqrt{C}}\right)F_{V}+R, \tag{2}\]
where \(R_{E}\in\mathbb{R}^{N\times C}\) is the resulting region embeddings. The region embeddings are then sent into a feed-forward network, which is formulated as:
\[R_{O}=\mathrm{FFN}(R_{E})+R_{E}, \tag{3}\]
where \(R_{O}\) is the output of the feed-forward network. Here, following [41], only the region tokens \(R_{E}\) are sent to the feed-forward block for efficiency.
Next, \(R_{O}\) and \(F\) are delivered to two linear layers to generate a group of new query and key as follows:
\[R_{Q1},F_{K1}=f_{Q1}(R_{O}),f_{K1}(F). \tag{4}\]
We perform the matrix multiplication between \(R_{Q1}\) and \(F_{K1}\) to produce attention maps by
\[A=\mathrm{Softmax}\left(\frac{R_{Q1}F_{K1}^{T}}{\sqrt{C}}\right), \tag{5}\]
where \(A\in\mathbb{R}^{N\times HW}\) denotes \(N\) attention maps and each attention map is associated with a semantic region. When we attain the region attention maps, we combine \(A\) and \(F\in\mathbb{R}^{HW\times C}\) via broadcast multiplications, which can be written as follows:
\[S_{i,j,k}=A_{i,j}\cdot F_{j,k}, \tag{6}\]
where \(S\in\mathbb{R}^{N\times HW\times C}\) is the output. Finally, \(S\) is permuted and reshaped and sent into a class-wise convolutional layer to generate the final segmentation maps. In the following, we will show the effectiveness of the proposed segmentation network in the experiment section.
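To make the data flow of Eqs. (1)-(6) concrete, a minimal PyTorch sketch of the region refining module is given below. It is an illustrative reimplementation based only on the equations above and is not the authors' released code: single-head attention is used instead of multi-head for brevity, the feed-forward hidden size is an assumption, and the final class-wise convolution is approximated by a plain \(1\times 1\) convolution.

```python
import torch
import torch.nn as nn

class RegionRefiningModule(nn.Module):
    """Single-head sketch of Eqs. (1)-(6); the paper uses multi-head cross-attention."""
    def __init__(self, channels, num_tokens=10, num_classes=21):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, channels))   # R: N x C learnable region tokens
        self.f_q = nn.Linear(channels, channels)    # f_Q
        self.f_k = nn.Linear(channels, channels)    # f_K
        self.f_v = nn.Linear(channels, channels)    # f_V
        self.ffn = nn.Sequential(nn.Linear(channels, channels), nn.GELU(),
                                 nn.Linear(channels, channels))         # FFN (hidden size assumed)
        self.f_q1 = nn.Linear(channels, channels)   # f_Q1
        self.f_k1 = nn.Linear(channels, channels)   # f_K1
        self.classifier = nn.Conv2d(num_tokens * channels, num_classes, kernel_size=1)

    def forward(self, feat):                        # feat: B x C x H x W from the first decoder part
        b, c, h, w = feat.shape
        f = feat.flatten(2).transpose(1, 2)         # flattened features F: B x HW x C
        r = self.tokens.unsqueeze(0).expand(b, -1, -1)                  # B x N x C
        q, k, v = self.f_q(r), self.f_k(f), self.f_v(f)                 # Eq. (1)
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        r_e = attn @ v + r                                              # Eq. (2)
        r_o = self.ffn(r_e) + r_e                                       # Eq. (3)
        q1, k1 = self.f_q1(r_o), self.f_k1(f)                           # Eq. (4)
        a = torch.softmax(q1 @ k1.transpose(1, 2) / c ** 0.5, dim=-1)   # Eq. (5): B x N x HW
        s = a.unsqueeze(-1) * f.unsqueeze(1)                            # Eq. (6): B x N x HW x C
        s = s.permute(0, 1, 3, 2).reshape(b, -1, h, w)                  # B x (N*C) x H x W
        return self.classifier(s)                                       # segmentation logits

# usage on dummy decoder features
module = RegionRefiningModule(channels=256)
logits = module(torch.randn(2, 256, 32, 32))        # -> 2 x 21 x 32 x 32
```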
## 6 Experiments
To verify the effectiveness of the proposed detail refining decoder, we conduct several ablation experiments on the number of region tokens and attention heads. Besides, we also compare our method with the previous state-of-the-art methods on the proposed dataset. Experiment details can be found in our supplementary materials.
### Ablation Study
**The number of region tokens and heads.** First, we study the impact of the number of tokens and heads on the performance. As shown in Tab. 4, using 10 region tokens instead of 1 region token brings a 0.4% mIoU and 1.2% iIoU improvement. This fact demonstrates that the number of region tokens largely affects the parsing of traffic participants, especially for small objects. When further increasing the number of region tokens, we observe little performance gain, which indicates that 10 tokens are enough for semantic region refining. Besides, we also attempt to increase the number of attention heads. It can be seen that adding more heads brings no performance gain.
For readers to better understand the region tokens, we have visualized the attention maps of different tokens, as shown in Fig. 6. It can be seen that different tokens are responsible for different semantic regions.
**Region tokens _vs._ Class Tokens.** In the design of the detail refining decoder, we utilize the region tokens to refine a specific semantic region. Here, one may raise a question: "How would the performance go when we utilize class to
\begin{table}
\begin{tabular}{c|c c|c|c} \hline \hline Settings & \#Tokens & Attention Heads & mIoU\({}_{val}\) & iIoU\({}_{val}\) \\ \hline
1 & 1 & 12 & 75.0 & 56.9 \\
2 & 10 & 12 & 75.4 & 58.1\({}_{(+1.2)}\) \\
3 & 20 & 12 & **75.4** & 58.3\({}_{(+1.4)}\) \\
4 & 20 & 24 & 75.4 & 58.2\({}_{(+1.3)}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on the selection of the number of tokens and attention heads.
Figure 6: Visualizations of the attention map corresponding to each token. We randomly select several tokens for visualization. One can see that the visualizations associated with different region tokens focus on different semantic regions. These region tokens can help our method better process the region details.
kens, as done in the original Transformers, instead of the region tokens?" We perform an experiment that learns 21 class tokens, each of which corresponds to a class. The final concatenated features are sent to a depth-wise convolutional layer with 21 groups. When using the class tokens, we obtain 75.3% mIoU and 57.1% iIoU on the validation set. Compared with using class tokens, the decoder with 10 region tokens obtains 58.1% iIoU, which works better than using class tokens. Besides, when the number of classes in the dataset is large, the class tokens will consume high computational costs. In contrast, using the region tokens is more flexible in that there is no need to adjust the number of region tokens when the number of classes rises.
**The importance of the encoder-decoder structure.** In Sec. 4.1, we have analyzed that the encoder-decoder structure is important for small object parsing. Thus, we apply the encoder-decoder structure to our segmentation network. Without the encoder-decoder structure, i.e., when we directly connect the region refining module to the encoder, the mIoU score decreases by 0.2% and the iIoU score decreases by 0.8%. This experiment indicates that the high-resolution low-level features can benefit the parsing of the traffic participants. Thus, the encoder-decoder structure is important for high-hanging scene parsing.
### Comparisons with SOTA
After performing a sanity check for the detail refining decoder, we compare the result of the proposed method with other methods on the TSP6K dataset. Tab. 3 lists the performance of different methods. It can be seen that our method outperforms all previous methods and achieves the best results in terms of both metrics. Besides, the original Hamburger decoder [19] cannot further improve the performance of the SegNeXt-Large model. However, when replacing the Hamburger decoder with the proposed detail refining decoder, the performance is largely improved, outperforming the Hamburger decoder by 1.3% in mIoU and 1.3% in iIoU on the validation set. The evaluation results demonstrate the effectiveness of the proposed decoder in parsing the high-hanging scenes. Furthermore, we provide some qualitative results in Fig. 7 for visual comparison. We can see that our method is more suitable for processing the region details than CCNet and SegNeXt.
## 7 Limitations
In this paper, based on the TSP6K dataset, we have evaluated a series of previous popular scene parsing methods. However, we do not explore the scene panoptic segmentation and instance segmentation though instance annotations are also provided. We hope the proposed dataset can encourage the community to develop more powerful scene parsing methods, instance segmentation methods, and panoptic scene parsing methods for high-hanging scenes.
## 8 Conclusions
In this paper, we have constructed the TSP6K dataset, focusing on the high-hanging traffic scenes. We have provided each traffic image with a semantic and instance label. Based on the finely annotated TSP6K dataset, we have evaluated the previous popular scene parsing methods and summarized some useful tips. To improve the performance of the high-hanging scene parsing, we design a detail refining decoder, which utilizes the high-resolution features from the encoder-decoder structure and refines different semantic regions based on the self-attention mechanism. The detail refining decoder learns several region tokens and computes attention maps for different semantic regions. The attention maps are used to refine the pixel affinity in different semantic regions. Experiments have shown the effectiveness of the proposed detail refining decoder.
Figure 7: Visualization of the scene parsing results from different methods. One can see that our method can well process the region details. When taking the bottom scene as an example, our method can generate a more accurate mask for the arrow while other methods fail. Zoom in for the best view. |
2307.01613 | S-Nav: Semantic-Geometric Planning for Mobile Robots | Path planning is a basic capability of autonomous mobile robots. Former
approaches in path planning exploit only the given geometric information from
the environment without leveraging the inherent semantics within the
environment. The recently presented S-Graphs constructs 3D situational graphs
incorporating geometric, semantic, and relational aspects between the elements
to improve the overall scene understanding and the localization of the robot.
But these works do not exploit the underlying semantic graphs for improving the
path planning for mobile robots. To that aim, in this paper, we present S-Nav a
novel semantic-geometric path planner for mobile robots. It leverages S-Graphs
to enable fast and robust hierarchical high-level planning in complex indoor
environments. The hierarchical architecture of S-Nav adds a novel semantic
search on top of a traditional geometric planner as well as precise map
reconstruction from S-Graphs to improve planning speed, robustness, and path
quality. We demonstrate improved results of S-Nav in a synthetic environment. | Paul Kremer, Hriday Bavle, Jose Luis Sanchez-Lopez, Holger Voos | 2023-07-04T09:56:57Z | http://arxiv.org/abs/2307.01613v1 | # S-Nav: Semantic-Geometric Planning for Mobile Robots
###### Abstract
Path planning is a basic capability of autonomous mobile robots. Former approaches in path planning exploit only the given geometric information from the environment without leveraging the inherent semantics within the environment. The recently presented _S-Graphs_ constructs 3D situational graphs incorporating geometric, semantic, and relational aspects between the elements to improve the overall scene understanding and the localization of the robot. But these works do not exploit the underlying semantic graphs for improving the path planning for mobile robots. To that aim, in this paper, we present _S-Nav_ a novel semantic-geometric path planner for mobile robots. It leverages _S-Graphs_ to enable fast and robust hierarchical high-level planning in complex indoor environments. The hierarchical architecture of _S-Nav_ adds a novel semantic search on top of a traditional geometric planner as well as precise map reconstruction from _S-Graphs_ to improve planning speed, robustness, and path quality. We demonstrate improved results of _S-Nav_ in a synthetic environment.
## License
For the purpose of Open Access, the author has applied a CC-BY-4.0 public copyright license to any Author Accepted Manuscript version arising from this submission.
## 1 Introduction
Mobile robots have gained a lot of traction in recent years and have seen widespread use in different industries such as construction and mining, where they are used for autonomous inspection tasks. To date, they are mostly teleoperated or operated semi-autonomously under the supervision of a human operator. Fully autonomous operation could thus significantly reduce costs; however, several technical challenges such as perception, navigation, mapping, and localization currently stand in the way of this mode of operation. Mobile robots should not only create meaningful maps of the environment while localizing within it but also be able to exploit these maps to perform fast and efficient planning.
Traditionally, mobile robots build a geometric map [4, 5] of their environment using simultaneous localization and mapping (SLAM) techniques in combination with their onboard sensors [7] (e.g., LiDAR). Recently, we presented _S-Graphs_, a novel graph-based semantic SLAM that combines traditional geometric SLAM with scene graphs [1, 2]. _S-Graphs_ extracts the topological-relational information of the environment such as wall surfaces, rooms, and doorways including the topological connections between those semantic entities, enabling the robot to reason about its environment in a way humans would. _S-Graphs_ showed promising results in terms of precise robot localization and high-level hierarchical map generation over a variety of datasets. However, this scene knowledge is not yet leveraged for performing more intelligent and faster path planning for mobile robots.
To bridge this gap, we leverage the metric, semantic, and relational information in _S-Graphs_ for the purpose of path planning. We propose a novel hierarchical planner called _S-Nav_, which leverages the semantic layer to improve planning on the geometric layer. First, we perform a semantic graph search on a sparse undirected graph of semantic elements, such as rooms and doorways, generated from _S-Graphs_. The undirected global semantic graph is then divided into local subproblems, which can be solved in parallel and pose a set of simpler problems to the underlying geometric planner. The main contributions of this work are:
* Novel hierarchical planner called _S-Nav_ utilizing geometric, semantic, and relational information for faster planning.
* Semantic planner for faster global plans.
* Semantic subproblem solver further simplifies the global plan into local subproblems for the underlying geometric planner.
A brief summary of _S-Graphs_ is given in section 2.1. _S-Nav_, the novel semantic-geometric planner, is introduced in section 3. The main blocks are the _Semantic Planner_ (section 3.1), the _Subproblem Solver_ (section 3.2), and the _Geometric Planner_ (section 3.4). The evaluation and results are presented in section 4. This work is concluded in section 5.
## 2 System Overview
The complete system architecture is shown in fig. 1. _S-Nav_ builds on top of _S-Graphs_ and utilizes it as its main data source. _S-Nav_ itself is composed of the _Semantic Planner_, the _Subproblem Solver_, and the _Geometric Planner_. A path query is first handled by the _Semantic Planner_, whose output serves as a rough initial guess that cascades into the _Geometric Planner_ via the _Subproblem Solver_.
### Situational Graphs (_S-Graphs_)
_S-Graphs_ is an optimizable graph structure built using online measurements such as LiDAR data or markers [1, 2, 6].
The graph structure consists of five layers that are summarized as:
**Keyframes Layer**: Composed of the robot's poses \({}^{M}x_{R_{i}}\in SE(3)\) constrained by the robot's odometry measurements.
**Walls Layer**: Each room is composed of four planes extracted from onboard sensor measurements. They are constrained using pose-plane constraints.
**Room Layer**: A room is formed by its four planes constrained by a cost function consisting of the room center \({}^{M}\mathbf{p}_{i}\in\mathbb{R}^{2}\) and \(w_{i}\), the distance between the opposite planar pairs.
**Floor Layer**: A floor is a collection of rooms optimized analogously to rooms by extracting the largest distance between the opposite planar pairs.
**Doorway Layer**: A doorway marks the physical, traversable connection between two rooms defined by a center point \({}^{M}\mathbf{d}_{i}\in\mathbb{R}^{2}\) and a width \(r_{i}\), and is constrained by the physical distance between the two rooms it connects.
_S-Graphs_ serves as the main source of information for _S-Nav_. Therefore, this work makes extensive use of the room and doorway layers. The presented architecture can, however, easily be expanded to include multiple floors and other semantic entities (e.g., objects).
## 3 S-Nav
_S-Nav_ is our novel hierarchical semantic-geometric planning solution that combines _S-Graphs_ with an informed geometric planner. Our solution provides the following benefits over traditional, purely geometric planners:
Figure 1: Overview of _S-Nav_. A _query_ (e.g., ‘go from current position to the kitchen’) is first handled by the _Semantic Planner_ which provides an initial high-level semantic-geometric path based on the graph structure obtained from _S-Graphs_. Next, in the _Subproblem Solver_, the semantic-geometric path is subdivided into smaller, easier problems that are individually solved by the _Geometric Planner_. The individual paths between the subproblems are then reassembled into the final high-level path that is passed to the robot.
* The geometric search can greatly profit from a rough initial guess provided by the semantic layer by constraining the areas the planner can visit and by providing subgoals toward the final goal.
* A query in natural form, e.g., 'go from here to the kitchen', can easily be mapped to a semantic-geometric problem.
* Handling forbidden areas such as closed doors or rooms that should not be traversed is trivial on the semantic layer, whereas it would require map changes on the geometric layer. Similarly, if a doorway is detected as untraversable, replanning is very fast as the doorway's node can easily be disconnected from the graph.
The structure of _S-Nav_ is depicted in fig. 1 whilst the different layers are visualized in fig. 2. Its main parts are formed by the _Semantic Planner_ that cascades into the _Geometric Planner_ via the _Subproblem Solver_ (SPS). The final path is then passed to the robot, potentially via additional layers such as local planning, trajectory generation, and motion control.
### Semantic Search
The scene graph structure of _S-Graphs_ encodes a high-level representation of the environment the robot is operating in. Herein, this scene graph is converted into an undirected graph connecting the semantic elements of the scene. The connections (edges) between the elements have an associated cost, i.e., for the room-to-doorway connections, a cost
\[c_{dr}=\|^{M}\mathbf{p}_{i}-^{M}\mathbf{d}_{i}\|^{2}+p_{d} \tag{1}\]
is assigned, consisting of the squared distance between the center point of the room and the associated doorway, plus a fixed penalty \(p_{d}\) for doorway crossing. The fixed penalty can be used to prefer a slightly longer path with fewer (potentially closed) doorways.
Generally, the graph is sparse, featuring only a small number of nodes and edges. As such, a shortest-path search using, e.g., A* is virtually free compared to a full search on the geometric layer.
For a given _query_ \(\mathbf{p}_{s}\rightarrow\mathbf{p}_{g}\) (read: from \(\mathbf{p}_{s}\) to \(\mathbf{p}_{g}\)), the semantic planner provides a _solution_ of the type
\[\mathbf{p}_{i}\rightarrow\mathbf{d}_{i}\rightarrow\mathbf{d}_{i+1} \rightarrow\cdots\rightarrow\mathbf{d}_{k+n}\rightarrow\mathbf{p}_{g} \tag{2}\]
**via:**\(F_{R}=R_{s}\cup R_{i}\cup\cdots\cup R_{i+n}\cup R_{g}\), \(F_{R}\subseteq S\subset\mathbb{R}^{3}\)
where \(\mathbf{p}_{s},\mathbf{p}_{g}\) and \(R_{s},R_{g}\) are the start and goal positions and rooms, \(\mathbf{d}_{i}\) are the doorway center points to traverse along the route, \(R_{(i)}\) are the rooms, i.e., the free space the robot has to pass through to reach its destination, \(S\) is the state space limited by the bounding box of the map, and \(F_{R}\) is the reduced free space obtained from the semantic planner that is passed to the geometric planner. By restricting the geometric planner to \(F_{R}\), its sampler can be much smarter about the placement of the samples.
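As an illustration of this layer, the following is a minimal sketch (not the authors' implementation) that builds the room-doorway graph with the edge cost of eq. (1) using `networkx` and extracts the shortest semantic route; the room centers, doorway centers, and the penalty value are made-up example data.

```python
import networkx as nx
import numpy as np

p_d = 0.5                                         # fixed doorway-crossing penalty (assumed value)
rooms = {"R1": (0.0, 0.0), "R2": (4.0, 0.0), "R3": (8.0, 0.0)}       # room centers (example data)
doorways = {"D12": ((2.0, 0.0), ("R1", "R2")),                       # doorway center and the rooms it connects
            "D23": ((6.0, 0.0), ("R2", "R3"))}

G = nx.Graph()
for d, (center, (ra, rb)) in doorways.items():
    for room in (ra, rb):                          # room-to-doorway edge cost, eq. (1)
        cost = np.sum((np.array(rooms[room]) - np.array(center)) ** 2) + p_d
        G.add_edge(room, d, weight=cost)

route = nx.shortest_path(G, source="R1", target="R3", weight="weight")
print(route)                                       # e.g. ['R1', 'D12', 'R2', 'D23', 'R3']
```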
### Semantic Search with Subproblems
The global problem in eq. (2) can further be simplified into a set of local subproblems that can be solved in parallel and, individually, pose a simpler problem to the geometric planner:
\[\begin{split} 1:\mathbf{p}_{s}&\rightarrow\mathbf{d}_{i},\quad\textbf{via:}\ R_{s}\\ 2:\mathbf{d}_{k}&\rightarrow\mathbf{d}_{k+1},\quad \textbf{via:}\ R_{i+1}\\ 3:\mathbf{d}_{k+1}&\rightarrow\mathbf{d}_{k+2}, \quad\textbf{via:}\ R_{i+2}\\ &\cdots\\ n:\mathbf{d}_{k+n}&\rightarrow\mathbf{p}_{g},\quad \textbf{via:}\ R_{g}\end{split} \tag{3}\]
Figure 2: The semantic-graph layer is an undirected topological graph of rooms and doorways extracted from features segmented by _S-Graphs._ The results of this first layer cascade into the geometric layer, which is formed by OMPL and Voxblox. The contour layer reduces the valid state space of the geometric planner to make informed decisions on where to sample. The semantic path is highlighted in green together with the room contours that have to be traversed along the path.
Therefore, akin to informed geometric planners (e.g., informed RRT* [3]), herein the semantic planner adds an additional layer of information that the geometric planner can profit from to find a solution faster. The subproblems are solved by the _Subproblem Solver_ in conjunction with the geometric planner. The resulting individual path segments are joined to a final, global path. If the resulting path requires updating, e.g., due to a blocked path, the _Subproblem Solver_ can efficiently reevaluate the changed or newly created subproblems.
### Global Map
Instead of relying on raw sensor readings, _S-Nav_ features a global map reconstruction module that builds an accurate global map from _S-Graphs_ data, which is either generated on the fly by the robot or provided to the robot if the environment is already fully mapped. The global map is kept relatively simple, i.e., not featuring obstacles, as this problem is more effectively handled by the reactive planner on a local map.
_S-Graphs_ provides the planes associated with each room, including the doorways that mark the connection between two rooms. In the first step, for each room, the vertical planes (walls) are converted to a closed 2D contour which encompasses the free space within a room. Room contours serve two purposes: **First**, to restrict the geometric planner's sampler to sample only in areas that effectively contribute to the final path. **Second**, to build an optimistic, clutter-free (yet accurate), signed distance field representation of the physical environment that forms the basis for the geometric planner.
Doorways of a certain width and located at a certain point are added between the two closest walls of the two associated rooms. Just like contours, they are also part of the signed distance field generation process.
### Geometric Search
The geometric search within _S-Nav_ features state-of-the-art geometric planners provided by the _Open Motion Planning Library_ (OMPL). Within _S-Nav_ we use sampling-based planners (e.g., PRM, RRT, IRRT*) that create random (sometimes heuristic-guided) samples within the valid bounds of the state space. A priori, for any given problem, the whole global map has to be considered; therefore, a large number of samples is required to find the optimal path. Constraining the sampler to sample within the rooms that have to be visited along the semantic path greatly enhances the convergence rate of the planner, as no samples are wastefully created in areas that are of no interest. Furthermore, using the SPS, which decomposes the global problem into a set of local problems, effectively exploits the rapid convergence of certain planning algorithms (e.g., IRRT*) for the resulting simpler type of problem. The problem is illustrated in fig. 3.
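As a rough illustration of this contour-restricted sampling, the sketch below rejection-samples 2D states inside the room contours along the semantic path using shapely; the contours and their coordinates are hypothetical, and the actual integration with OMPL's state samplers is not shown.

```python
# Sketch of a contour-constrained sampler (illustrative, not the OMPL integration).
import random
from shapely.geometry import Point, Polygon

# Hypothetical room contours (derived from the S-Graphs wall planes) along the path.
rooms_on_path = [
    Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),
    Polygon([(4, 0), (8, 0), (8, 3), (4, 3)]),
]

def sample_state(rng=random):
    """Rejection-sample a 2D state that lies inside one of the rooms to traverse."""
    minx = min(r.bounds[0] for r in rooms_on_path)
    miny = min(r.bounds[1] for r in rooms_on_path)
    maxx = max(r.bounds[2] for r in rooms_on_path)
    maxy = max(r.bounds[3] for r in rooms_on_path)
    while True:
        candidate = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if any(room.contains(candidate) for room in rooms_on_path):
            return candidate

samples = [sample_state() for _ in range(100)]
```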
## 4 Evaluation
### Methodology
A synthetic map (\(17\,\mathrm{m}\times 15\,\mathrm{m}\)) with 8 rooms and 10 doorways was created and passed to the recently presented _iS-Graphs_[6], an _S-Graphs_ extension that supports architectural (BIM) data. Within this environment, the three cases (IRRT*, IRRT*+S-Graphs, and IRRT*+S-Graphs+SPS) shown in fig. 3 were benchmarked by performing 1000 queries for each. As the OMPL termination criterion, a timeout of \(0.1\,\mathrm{s}\) was specified. The timeout is equally divided over all subproblems for the test series involving the
Figure 3: a) The naïve approach using IRRT* creates an excessive amount of samples and yet ends up with a suboptimal path. b) Restricting IRRT* to sample within the rooms that are part of the optimal path greatly enhances the solution with fewer samples. c) Restricting IRRT* to sample within the rooms parts of the solution and decomposing the overall problem in subproblems yields the best result (least amount of samples) as it effectively exploits the rapid convergence of IRRT* for the resulting simpler problems.
SPS. Recorded were the number of samples created within the allocated time as well as the final path length. The measurements were performed on a workstation equipped with an Intel Core i9-11950H.
### Results and Discussion
The results are given in fig. 4. It is clear that IRRT* alone delivers the most inconsistent results with the widest spread. On average, it also had the least number of samples generated. Restricting the sampled regions with the _S-Graphs_ knowledge significantly improved the consistency of the results. Further, using _S-Graphs_ and the SPS in combination with IRRT* consistently yielded the shortest path, and was also able to generate significantly more samples. The higher number of samples is caused by the comparatively cheaper state and motion validity checks based on the contours rather than the signed distance field alone.
## 5 Conclusion
Leveraging the geometric-semantic knowledge contained in _S-Graphs_ for planning can greatly enhance the performance of the underlying geometric planner. Herein, we presented _S-Nav_, a novel semantic-geometric planner with a hierarchical architecture that was shown to significantly improve planning speed and the consistency of the generated paths within a given timeframe. Furthermore, we showed that decomposing the global problem into a set of local problems can be used to effectively leverage the rapid convergence of (informed) sampling-based planners.
|
2305.14301 | A Laplacian Pyramid Based Generative H&E Stain Augmentation Network | Hematoxylin and Eosin (H&E) staining is a widely used sample preparation
procedure for enhancing the saturation of tissue sections and the contrast
between nuclei and cytoplasm in histology images for medical diagnostics.
However, various factors, such as the differences in the reagents used, result
in high variability in the colors of the stains actually recorded. This
variability poses a challenge in achieving generalization for machine-learning
based computer-aided diagnostic tools. To desensitize the learned models to
stain variations, we propose the Generative Stain Augmentation Network (G-SAN)
-- a GAN-based framework that augments a collection of cell images with
simulated yet realistic stain variations. At its core, G-SAN uses a novel and
highly computationally efficient Laplacian Pyramid (LP) based generator
architecture, that is capable of disentangling stain from cell morphology.
Through the task of patch classification and nucleus segmentation, we show that
using G-SAN-augmented training data provides on average 15.7% improvement in F1
score and 7.3% improvement in panoptic quality, respectively. Our code is
available at https://github.com/lifangda01/GSAN-Demo. | Fangda Li, Zhiqiang Hu, Wen Chen, Avinash Kak | 2023-05-23T17:43:18Z | http://arxiv.org/abs/2305.14301v2 | # A Laplacian Pyramid Based Generative H&E Stain Augmentation Network
###### Abstract
Hematoxylin and Eosin (H&E) staining is a widely used sample preparation procedure for enhancing the saturation of tissue sections and the contrast between nuclei and cytoplasm in histology images for medical diagnostics. However, various factors, such as the differences in the reagents used, result in high variability in the colors of the stains actually recorded. This variability poses a challenge in achieving generalization for machine-learning based computer-aided diagnostic tools. To desensitize the learned models to stain variations, we propose the Generative Stain Augmentation Network (G-SAN) - a GAN-based framework that augments a collection of cell images with simulated yet realistic stain variations. At its core, G-SAN uses a novel and highly computationally efficient Laplacian Pyramid (LP) based generator architecture, that is capable of disentangling stain from cell morphology. Through the task of patch classification and nucleus segmentation, we show that using G-SAN-augmented training data provides on average 15.7% improvement in F1 score and 7.3% improvement in panoptic quality, respectively. Our code is available at [https://github.com/lifangda01/GSAN-Demo](https://github.com/lifangda01/GSAN-Demo).
Generative Adversarial Networks, Hematoxylin and Eosin, Histology, Laplacian Pyramid, Stain Augmentation.
## 1 Introduction
Histology refers to the study of tissues and their structures through microscopic anatomy and is widely used in medical diagnosis, especially oncology. Due to the fact that most cells are colorless and transparent in a bright field, tissue samples must go through a routine staining process before observation under a microscope. The gold standard for staining uses a combination of two dyes - Hematoxylin and Eosin (H&E) - mainly owing to their relatively high color consistency and ease of application. The former, hematoxylin, binds strongly to the DNA and RNA in the nuclei and paints them purplish blue, whereas the latter, eosin, binds to the proteins commonly found in the cytoplasmic and extracellular regions and paints them pink.
Despite its wide adoption, the detailed process of H&E staining is not standardized across laboratories. Depending on a host of factors, such as the differences in the reagents used, specific operating procedures and properties of the imaging instruments, etc., the final appearance of H&E staining can vary significantly from slide to slide. The patches shown in Fig. 1 visually demonstrate typical examples of this phenomenon. While this high variability in the H&E-staining effects has been a well-known challenge for pathologists, it has also emerged as an issue in the context of computational pathology.
One of the biggest challenges for the machine learning algorithms for computational pathology is the paucity of the groundtruthed training data - a paucity that is exacerbated by the variability in the stains. Consider, for example, the data requirements of the algorithms for nucleus segmentation. The training data for such algorithms is scarce for two reasons: (1) it requires some domain expertise to discern the boundaries of the nuclei and the cytoplasm regions; and (2) the tediousness of manual annotation of the cell images. And, given the data that is currently available, what reduces its effectiveness is the variability in the stains which results in overfitting and poor generalization of the machine-learning models, especially if there exist potentially unseen stains at test time.
Obviously, in order to make the most of the data that is available, what we need are strategies for desensitizing the learned models to the variability in the stains. Previous attempts at such model desensitization have consisted of what has come to be known as _stain normalization_. Stain normalization alters the stain color on a pixel-by-pixel basis so that the color profile in the normalized image corresponds to a pre-specified template. Such normalization is applied during both training and testing. That is, models are trained and tested only on stain-normalized images. Earlier methods for stain
Figure 1: The high variability of H&E-staining effects. The patches were extracted from different breast tissue sections that were separately stained.
normalization are stain-matrix based [1, 2, 3, 4] and the more recent approaches leverage Convolutional Neural Networks (CNNs) [5, 6, 7, 8, 9, 10, 11, 12, 13].
While stain normalization as described above is effective in reducing the stain variability, it has three significant drawbacks: (1) The extra image preprocessing steps needed at test time for stain normalization result in additional computational overhead, especially given the very large image sizes involved in histological studies. (2) The normalization process may involve the computationally expensive step of Sparse Non-negative Matrix Factorization (SNMF) [3, 4]. And (3) From the standpoint of what is needed for creating models with greater generalization power, a model trained on stain-normalized images is likely to lack intrinsic versatility against stain variations, which puts the model at a higher risk of overfitting to the data. As a result, more recently, researchers have begun pursuing _stain augmentation_ in place of stain normalization for the same benefits but with the expectation of creating more generalizable models.
With stain augmentation, one seeks to augment the training data with all practically possible stain variations so that a learned model possesses maximal generalizability with regard to stains. The effectiveness of using stain-augmented training images has been demonstrated for patch-based classification where, on the average, it led to a 40% improvement in AUC [5]. These authors used channel-wise color perturbation for stain augmentation. Its idea is straightforward: One first maps the input image to an alternative color space (_e.g._ HSV or HED using a predefined stain-matrix), then injects both multiplicative and additive random noise independently into each of the channels before reprojecting them back to RGB. This simple jittering-based operation is computationally efficient and was shown to be effective by the experimental results in [14, 15, 16, 17]. However, one major drawback of such a simple approach is that it is prone to generating unrealistically stained images, as illustrated in Fig. 2. Consequently, using HED-jittering as the only stain augmentation might not fully address the domain gap between the training and testing data, according to [16].
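For reference, a minimal sketch of such channel-wise HED jittering is given below, using scikit-image's fixed H&E-DAB stain matrix for the color-space projection; the jitter ranges are illustrative and not the values used in the cited works.

```python
# Sketch of channel-wise HED jitter for stain augmentation (illustrative ranges).
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def hed_jitter(rgb, alpha_range=0.05, beta_range=0.01, rng=None):
    """Multiplicative (alpha) and additive (beta) noise drawn independently per
    H, E and DAB channel. rgb: HxWx3 float image with values in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    hed = rgb2hed(rgb)                                   # project to HED space
    alpha = rng.uniform(1 - alpha_range, 1 + alpha_range, size=(1, 1, 3))
    beta = rng.uniform(-beta_range, beta_range, size=(1, 1, 3))
    return np.clip(hed2rgb(hed * alpha + beta), 0.0, 1.0)  # reproject to RGB

# augmented = hed_jitter(image_float_rgb)
```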
On account of the above-mentioned shortcoming of the channel-wise color perturbation approach, the focus of the ongoing research in stain augmentation has shifted to using GAN-based image-to-image translation frameworks. Such a framework can be used to provide either training-time stain augmentations as in the DRIT++ based HistAuGAN [15], the StarGAN-based framework in [18], and the StarGANV2-based framework in [19], or test-time augmentations (TTAs) as in the StarGANV2-based framework in [16]. With its impressive data modeling capabilities, a GAN-based framework can effectively learn the distribution of the realistic stains in a high-dimensional space and subsequently create new instances of cell images with synthesized yet realistic stains obtained by sampling the learned distribution.
Despite their success, there are two main drawbacks to the existing GAN-based stain transfer or stain augmentation approaches. First, the aforementioned frameworks all group training images by their laboratory IDs and use the IDs as domain labels for training [15, 16, 18, 19]. While such information is necessary for training multi-domain GAN frameworks, dependency on domain labels can result in frameworks that are less generalized. This is reflected by the fact that requiring domain-related information (_e.g._ laboratory and organ of origin) limits the availability of training data. In contrast, we assume that all possible H&E stain appearances are from a single domain. Together they form a single distribution and that the distribution can be sufficiently modeled by a unit Gaussian in a high-dimensional latent space. This independence of domain information helps G-SAN achieve better generalizability since without any domain information needed, a more diverse set of images, in terms of both tissue morphology and stain, can be used in training.
The second drawback is in regard to the computational efficiency. When used during the training or testing of a downstream task-specific model, it is important for any image augmentation algorithm to be computationally efficient. This is especially the case in histology applications where tissue slides can have very large sizes. Existing approaches that are based on general-purpose GAN architectures for performing stain transfer are not optimized in terms of speed.
To address the two aforementioned limitations, we propose a GAN-based stain augmentation framework that utilizes a novel generator architecture for stain transfer and the concepts of disentanglement learning. Our proposed generator architecture is based on the Laplacian Pyramid (LP) representation of images for ensuring that the stain transfers are structure preserving. More specifically, G-SAN uses the computationally heavier convolutional modules only on the low-resolution residual images of the LP, where the differences between stains are the most significant. As for the higher-resolution band-pass images of the LP, which capture mostly high-frequency spatial details rather than stain appearances, they are only fine-tuned by light-weight convolutional modules to both retain the structural details and to improve computational efficiency.
The G-SAN framework uses the principles of content-style disentanglement to learn to extract two independent representations from an input image: the cell morphology as content and the stain profile as style. Subsequently, by combining stain representations either extracted from other images or sampled stochastically, with the morphology representation from an input cell image, G-SAN can virtually re-stain the input image
Figure 2: Jittering based augmentations created from the two original images in the left column. As depicted in the second row, this approach is prone to generating unrealistic stain appearances.
without altering the underlying cell structures.
We trained G-SAN in an entirely unsupervised manner, in contrast to previous works that used domain labels. As we demonstrate in this paper, using H&E-stained histology images collected from a diverse set of sources for training gives G-SAN the generalization abilities with regard to both the stain appearance and the cell morphology. The quantitative validation of our approach consists of demonstrating the effectiveness of the stain augmentations produced by G-SAN through two common downstream tasks: tumor classification and nuclei segmentation. For the former, the stain augmentations must help the model overcome the large domain gaps that exist between the training and testing data. And for the latter, the stain augmentations must be structure-preserving since any undesired modification to the underlying cell morphology would be highly punishing. By using our stain augmentation method, we show that the trained task-specific networks are more robust towards stain variations compared to using the current state-of-the-art in stain augmentation.
## 2 Related Literature
### GAN-Based Stain Transfer
Recent advances in GANs (Generative Adversarial Networks) have inspired several GAN-based approaches to H&E stain-related image-to-image translation. Using conditional generators, there now exist frameworks [5, 6, 8, 12, 13, 20] that can transform images from one or multiple stain domains into a given target stain domain. Additionally, the success of CycleGAN [21] in achieving unsupervised domain transfer has led to the development of frameworks that use cycle consistency for achieving one-to-one and many-to-one stain normalization [7, 9, 10, 11, 18]. Going beyond stain normalization, frameworks that are capable of performing stain transfer among multiple stain domains have also been proposed. Examples include the DRIT++ based HistAuGAN [15], the StarGAN-based framework in [18] and the StarGANV2-based frameworks in [16, 19]. Our work is most similar to these frameworks on multi-domain stain transfer. However, instead of defining multiple distinct stain domains commonly based on their laboratory of origin, we treat the complete set of realistic stain appearances as if coming from a single domain.
### CNNs with Laplacian Pyramid
One of our important contributions in this work is the use of the Laplacian Pyramid for a highly computationally efficient yet structure-preserving CNN architecture designed specifically for H&E stain transfer. The method of Laplacian Pyramid decomposes an input image into a set of band-pass images, spaced an octave apart, plus a low-frequency residual. The popularity of this approach can be gauged by the fact that it has recently been incorporated in deep learning frameworks for various applications such as image generation [22], image style transfer [23], image super-resolution [24], etc. The hierarchical nature of the LP representation lends itself well to creating solutions that require adaptation to image details at different levels in the scale space. Our LP-based generator architecture is partially inspired by the LPTN framework proposed in [25]. More specifically, we have adopted from that work the idea of fine-tuning only the structure-rich high-resolution band-pass images with light-weight modules. This helps our framework preserve the spatial details in the images and, at the same time, achieve highly competitive computational efficiency.
### Learning Disentangled Representations
We approach the modeling of the stain variability through learning to extract the following disentangled representations from an input histological image: a morphology-independent stain vector and the underlying structural representation. Our framework's learning to extract such disentangled representations is inspired by the multi-domain image-to-image translation frameworks such as those reported in [26, 27]. Generally, these frameworks assume that an image can be decomposed into a domain-invariant representation and a domain-dependent representation. By enforcing the constraint that the former representation can be shared across domains, certain properties can be kept consistent through both inter- and intra-domain mappings, such as the structure of the objects. Along similar lines, we disentangle the cell morphology, which is the stain-invariant representation in our case, from the stain representation, so that the cell structure in the images is kept consistent during stain transfer.
We summarize the stain information in the affine parameters of the learned features in the normalization layers of the generator. Consequently, by manipulating the normalization parameters through Adaptive Instance Normalization (AdaIN) [28], we can effectively modify the stain appearance in the synthesis. We train this normalization-based style transfer architecture with several disentanglement-promoting learning criteria, such as the cycle-consistency loss [21], which encourages the reversibility of the learned disentangled representations, and the latent reconstruction loss [29] that ensures the reproducibility of the disentanglement. Subsequently, by combining arbitrary stains with the morphology representation from a given input cell image, G-SAN can generate an augmented version of the image with a simulated yet realistic looking stain. To the best of our knowledge, G-SAN is the first CNN framework that achieves stain transfer between arbitrary H&E stains.
## 3 The Proposed G-SAN Framework
In this section, we start with an overview of the concept of Laplacian Pyramid (LP). This is followed by a detailed explanation of our multi-pathway G-SAN generator architecture, which is optimized for high-resolution structure-preserving stain transfer. We describe the necessary design elements in our model that lead to the disentanglement of morphology and stain. Then, we demonstrate how the G-SAN architecture can leverage the multi-scale nature of LP in both training and inference. Lastly, we present the complete training procedure of our framework along with the losses used.
### The Laplacian Pyramid
The Laplacian Pyramid is a multi-scale image representation that consists of a set of band-pass images, spaced an octave
apart, and a low-resolution residual image. The set of band-pass images contains spatial details at consecutive frequency intervals, while the residual image is a Gaussian-blurred and downsampled version of the original input image.
To formally define the Laplacian Pyramid (LP), let \(K\) denote the max image level in the LP, \(g(\cdot)\) the function that convolves an image with a Gaussian kernel, and \(f_{\downarrow 2}(\cdot)\) / \(f_{\uparrow 2}(\cdot)\) the image downsampling / upsampling by 2 function, respectively. Then the Gaussian Pyramid (GP) of an input image \(\mathbf{I}\) can be written as \(G(\mathbf{I})=[\mathbf{I}_{0},\mathbf{I}_{1},...,\mathbf{I}_{K}]\), where \(\mathbf{I}_{0}\) is the input image itself and \(\mathbf{I}_{k+1}=f_{\downarrow 2}(g(\mathbf{I}_{k}))\). On the other hand, the LP of an image comprises two parts: a set of band-pass images at level 0 to \(K-1\), and a residual image at level \(K\). To explain, with the definition of GP, we can first write the band-pass image of the LP at level \(k=0,...,K-1\) as the difference between the GP image at level \(k\) and the upsampled version of the GP image at level \(k+1\):
\[\mathbf{h}_{k}=\mathbf{I}_{k}-f_{\uparrow 2}(\mathbf{I}_{k+1}). \tag{1}\]
Subsequently, at the \(K\)th level of the LP is the low-resolution residual image, taken directly from the GP at level \(K\): \(L_{K}(\mathbf{I})=\mathbf{I}_{K}\). Finally, we can now denote the complete LP representation as \(L(\mathbf{I})=[\mathbf{h}_{0},...,\mathbf{h}_{K-1},\mathbf{I}_{K}]\) (examples shown in Fig. 3). It is important to note that the LP decomposition of an image is lossless and fully reversible using the following backward recurrence:
\[\mathbf{I}_{k}=\mathbf{h}_{k}+f_{\uparrow 2}(\mathbf{I}_{k+1}), \tag{2}\]
where \(\mathbf{I}_{0}\) is the original input image.
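For concreteness, a minimal sketch of this decomposition and the backward recurrence, using OpenCV's Gaussian pyramid operators as stand-ins for \(g(\cdot)\), \(f_{\downarrow 2}(\cdot)\) and \(f_{\uparrow 2}(\cdot)\), could look as follows (illustrative only, not the G-SAN data pipeline):

```python
# Sketch of LP construction (eq. (1)) and collapse (eq. (2)) with OpenCV.
import cv2
import numpy as np

def build_lp(image, K=3):
    """Return the LP [h_0, ..., h_{K-1}, I_K] of an HxWxC image."""
    current = image.astype(np.float32)
    pyramid = []
    for _ in range(K):
        down = cv2.pyrDown(current)                                   # I_{k+1}
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)                                  # h_k = I_k - up(I_{k+1})
        current = down
    pyramid.append(current)                                           # residual I_K
    return pyramid

def collapse_lp(pyramid):
    """Backward recurrence of eq. (2): I_k = h_k + up(I_{k+1})."""
    image = pyramid[-1]
    for h in reversed(pyramid[:-1]):
        image = h + cv2.pyrUp(image, dstsize=(h.shape[1], h.shape[0]))
    return image
```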
The hierarchical separation of the high-frequency spatial details from the low-frequency residual image by the LP lends itself well to the task of stain transfer. Based on the observation that the stain difference between any two given input images is most prominent between the residual images \(\mathbf{I}_{K}\), as shown in Fig. 3, G-SAN adopts an adaptive strategy that depends on the level in the LP pyramid. More specifically, in G-SAN, heavy convolutional modules are only allocated for translating the low-resolution residual images. While for the higher-resolution band-pass images, G-SAN uses light-weight convolutional modules only to fine-tune the images. In this manner, G-SAN preserves rich spatial details in the images. As a result, the computational burden related to the processing of the higher-resolution constituents of the images is greatly reduced while conforming to the structure-preserving needs required for stain transfer.
### The G-SAN Architecture
The network architecture of G-SAN for image-to-image translation is shown in Fig. 4. The input to G-SAN is the LP representation of the input image and, correspondingly, the output of G-SAN is also an LP representation from which the output image can be reconstructed. The generator architecture can be broken down into three pathways: residual, style, and band-pass (BP). By optimizing each pathway to produce a component of the output LP representation, we are able to achieve structure-preserving stain transfer with great computational efficiency.
Starting with the residual pathway, shown in blue, it is implemented as an encoder-generator pair and it works in conjunction with the style mapping pathway, shown in gray, that is implemented as an autoencoder. Let \(\mathbf{I}^{in}\) and \(\mathbf{I}^{out}\) denote the input image and the output stain-transferred image, respectively. The residual pathway, whose parameters are presented in Tab. I, is responsible for producing the stain-transferred low-resolution residual image \(\mathbf{I}_{K}^{out}\). First, the encoder \(E_{K}\) encodes \(\mathbf{I}_{K}^{in}\), the input LP image at level \(K\), into a deep encoding \(\mathbf{z}_{K}^{in}\). Subsequently, the stain vector of the input image \(\mathbf{z}_{s}^{in}\) is extracted by the style encoder \(S_{E}\) from the deep encoding \(\mathbf{z}_{K}^{in}\). To achieve stain transfer, the target low-level deep encoding \(\mathbf{z}_{K}^{out}\) is produced by applying AdaIN on \(\mathbf{z}_{K}^{in}\), with the AdaIN parameters \((\text{mean},\text{std})=(\alpha_{K},\beta_{K})\) supplied by the style decoder \(S_{D}\), shown in gray at the bottom of Fig. 4. Finally, the output image \(\mathbf{I}_{K}^{out}\) is generated from \(\mathbf{z}_{K}^{out}\) by the low-level generator \(G_{K}\).
The task of the BP pathways is to adjust the input band-pass images for stain transfer at levels \(k=0\) to \(K-1\). At level \(k\), the input to the encoder \(E_{k}\) is \(\mathbf{h}_{k}^{in}\), the input LP image at level \(k\). Similar to what is done in the residual pathway, the input is mapped to a deep encoding and subsequently transformed using AdaIN, where the target normalization
Fig. 3: The Laplacian Pyramid representations with \(K=3\) of the same cell morphology with two different stains, in (a) and (b), and their RGB histograms. \(D_{\text{cos}}\) measures the cosine distance between the histograms of corresponding LP representations of the two images. While the color difference is the most prominent between the low-resolution residual images \(\mathbf{I}_{3}\), it is also evident among the high-frequency band-pass images \(\mathbf{h}_{k=0,1,2}\) albeit decreasingly as the resolution increases from right to left. Note that in the figure, the \(\mathbf{I}_{3}\) and \(\mathbf{h}_{k=1,2}\) images have been up-sized to fit the display grid. Please zoom in to get a better sense of the structures retained in the band-pass images \(h_{k=0,1,2}\).
parameters \((\alpha_{k},\beta_{k})\) are supplied by the style decoder \(S_{D}\). The resulting target deep encoding is then mapped to the target LP representation \(\mathbf{h}_{k}^{out}\). Compared to the low-level pathway, which consists of computationally heavy residual blocks, the BP pathways are implemented with light-weight convolutional modules using decreasing numbers of filters as resolution increases as shown in Tab. 1.
It is important to note that we scale both the input and output of the BP pathway at level \(k\) with _non-learnable_ scalars, \(\rho_{k}\) and \(\sigma_{k}\), respectively. This is necessary due to the fact that, since the band-pass images capture only the high-frequency details, they generally have zero mean and significantly smaller standard deviations than the residual image.
Therefore, by applying the scale factors, we benefit the learning of the band-pass pathways by ensuring that the dynamic range of the input image to \(E_{k}\) and the output image from \(G_{k}\) is close to \((-1,1)\), similar to what it would be for the residual images. In our implementation, we choose the value of \(\sigma_{k}\) to be the precalculated absolute max value of \(\mathbf{h}_{k}\) averaged from all training images and set \(\rho_{k}=1/\sigma_{k}\). Additionally, we found that making the scaling factors non-learnable can further stabilize the initial phase of training, where the quality of the generated BP images can be particularly sensitive.
Lastly, once we have obtained all the stain-transferred band-pass images and the residual image, the target image can be produced by applying the backward recurrence in Eq. (2).
\begin{table}
\begin{tabular}{l l l} \hline \hline & Encoder \(E\) & Generator \(G\) \\ \hline \multirow{2}{*}{Level \(k=0,...,K-1\)} & conv2D(\(3,k\times 16\)), LeakyReLU & LeakyReLU, conv2D(\(k\times 32,k\times 16\)) \\ & conv2D(\(k\times 16,k\times 32\)) & LeakyReLU, conv2D(\(k\times 16,3\)) \\ \hline \multirow{4}{*}{Level \(K\)} & conv2D(\(3,16\)), LeakyReLU & LeakyReLU, ResBlock(\(256,128\), LayerNorm, LeakyReLU) \\ & conv2D(\(16,64\)), LeakyReLU & ResBlock(\(128,64\), LayerNorm, LeakyReLU) \\ & ResBlock(\(64,128\), LayerNorm, LeakyReLU) & conv2D(\(64,16\)), LeakyReLU \\ & ResBlock(\(128,256\), LayerNorm, LeakyReLU) & conv2D(\(16,3\)) \\ & conv2D(\(256,256\)) & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Convolutional layer specifications of the G-SAN generator. All conv2D modules use kernel_size=3.
Figure 4: The G-SAN architecture for \(K=3\). For any value of \(K\), the architecture consists of three different pathways: residual, style, and band-pass (BP), each depicted with a different color in the figure. The residual pathway, shown in blue, produces the style-transferred low-resolution residual image at the output. The style pathway, shown at the bottom in gray is a Style Mapping Network (SMN) that is responsible for encoding and decoding the stain information. Finally, the multiple BP pathways independently produce the band-pass images at increasingly higher resolutions in the output LP pyramid. By allocating the computation-intensive operations only to the residual pathway and using only light-weight convolutional modules in the BP pathways, G-SAN avoids heavy convolutions at higher resolutions. Note that in the SMN, both the encoder and the decoder are implemented only with MLP layers, and the random resampling of latent stain vectors occurs only in the identity reconstruction mode during training.
### Disentangling Morphology from Stain
To enable structure-preserving style transfers between arbitrary stains, the stain representation must first be fully disentangled from the underlying morphology representation. With LP representations, while the stain information is the most prevalent in the low-res residual image \(\mathbf{I}_{K}\), it is also evident albeit more weakly in the band-pass images \(\mathbf{h}_{k}\). As mentioned previously in Sec. III-A, this phenomenon is clearly visible in the histograms plotted in Fig. 3. Therefore, it is necessary to achieve morphology-stain disentanglement in all levels of the LP representation, which has not been carried out in previous LP-based image-to-image translation networks, _e.g._[23, 25].
In G-SAN, we assume that the stain information can be fully captured by the channel normalization parameters of the convolutional features. Therefore, we use instance normalization (IN) as the model bias that removes any stain-related information from the deep encodings in the pathways and the resulting normalized encodings represent only the morphology. Subsequently, by applying the AdaIN parameters \((\alpha,\beta)\) to the purely morphological encoding, we can transfer the target stain to the encoding. In G-SAN, the set of \((\alpha_{k},\beta_{k})\) parameters for a target stain is provided by the style decoder \(S_{D}\) in the Style Mapping Network.
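A minimal PyTorch sketch of this IN-then-AdaIN mechanism, with illustrative tensor shapes rather than the exact G-SAN modules, is given below:

```python
# Sketch of instance normalization followed by AdaIN re-staining (illustrative).
import torch

def adain(content_feat, alpha, beta, eps=1e-5):
    """Strip the stain statistics via instance normalization, then re-stain the
    morphology-only encoding with the target (mean, std) = (alpha, beta)."""
    n, c = content_feat.shape[:2]
    flat = content_feat.reshape(n, c, -1)
    mean = flat.mean(dim=2).reshape(n, c, 1, 1)
    std = flat.std(dim=2).reshape(n, c, 1, 1) + eps
    normalized = (content_feat - mean) / std            # stain-free morphology encoding
    return normalized * beta.reshape(n, c, 1, 1) + alpha.reshape(n, c, 1, 1)

# z_out = adain(z_in, alpha_K, beta_K)   # alpha_K, beta_K: (N, C) tensors from S_D
```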
### Handling Multiple Resolutions
The LP-based image representation is recursive in the sense that the LP representation \(L(\mathbf{I}_{k})\) of the image \(\mathbf{I}_{k}\) at level \(k\) can be decomposed into a band-pass image \(\mathbf{h}_{k}\) and the LP representation \(L(\mathbf{I}_{k+1})\) of the image one level below. Owing to that recursive nature, a single stain transfer network trained to process the LP representations in the highest resolution can be readily used for input images with lower resolutions. This makes our framework particularly versatile since the pathology images are often recorded at different resolutions for different tasks. For example, for nucleus segmentation the images are often used at \(40\times\) magnification level and for tissue phenotyping at \(20\times\). If we train the LP-based generator to produce images at \(40\times\), the same network can be readily used for \(20\times\) images just by ignoring the BP pathway at \(k=0\) and using instead the output image reconstructed at \(k=1\). Along the same lines, \(10\times\) images can be processed and reconstructed at \(k=2\) using the G-SAN generator trained with images at \(40\times\). What that implies is that, with no additional training and no extra architectural elements, our LP-based model can be considered to be generalized across a range of image resolutions.
During the training of G-SAN, we leverage the concept of deep supervision and calculate the image reconstruction loss at each LP level. Similarly, we also employ a multi-resolution discriminator that consists of identical purely convolutional networks at each level to encourage output images at all levels to be realistic. The next subsection presents further details regarding these aspects of G-SAN.
### The Training Procedure and the Losses
For brevity (but without compromising essential details), the presentation in this section is in terms of relatively high-level abstractions. We will therefore ignore the specific architectural details related to the Laplacian Pyramid. Given the network components - \(E\) as the encoder, \(G\) as the generator, \(S\) as the SMN and \(D\) as the discriminator - the encoding process for an input image \(\mathbf{I}^{in}\) can be written as:
\[\mathbf{z}^{in}=E(\mathbf{I}^{in})\ \ \text{and}\ \ \mathbf{z}^{in}_{s}=S_{E}(\mathbf{z}^{in}). \tag{3}\]
The generative process, on the other hand, can happen in one of the two modes: **Mode A** - the identity reconstruction mode; and **Mode B** - the cyclic reconstruction mode (Fig. 5). In Mode A, the identity reconstruction \(\tilde{\mathbf{I}}^{in}\) can be written as:
\[\tilde{\mathbf{z}}^{in}=\text{AdaIN}(\mathbf{z}^{in},S_{D}(\tilde{\mathbf{z}}^{in}_{s}))\ \ \text{and}\ \ \tilde{\mathbf{I}}^{in}=G(\tilde{\mathbf{z}}^{in}), \tag{4}\]
where \(\tilde{\mathbf{z}}^{in}_{s}\) is a resampled version of \(\mathbf{z}^{in}_{s}\) obtained through the reparameterization trick for VAE (Variational Autoencoder). The losses calculated in the identity reconstruction mode are as follows:
**Identity Reconstruction Loss** ensures that the learned encodings \(\mathbf{z}\) and \(\mathbf{z}_{s}\) are representative enough to recover the original input image. This image reconstruction loss is a weighted sum of losses at all levels of the image output:
\[\mathcal{L}_{id}(\mathbf{I}^{in},\tilde{\mathbf{I}}^{in})=\mathbb{E}_{\mathbf{I}^{in}} \left[\sum_{k}m_{k}\left\|\mathbf{I}^{in}_{k}-\tilde{\mathbf{I}}^{in}_{k}\right\|_{1}\right]. \tag{5}\]
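A sketch of this deep-supervised, multi-resolution L1 loss (the same form is reused for eq. (9)) might look as follows; the per-level weights \(m_{k}\) are placeholders:

```python
# Sketch of the weighted multi-level L1 reconstruction loss (eqs. (5) and (9)).
import torch

def multilevel_l1(targets, reconstructions, level_weights):
    """targets / reconstructions: lists of tensors, one per pyramid level,
    in the same order as level_weights."""
    loss = 0.0
    for m_k, t_k, r_k in zip(level_weights, targets, reconstructions):
        loss = loss + m_k * (t_k - r_k).abs().mean()
    return loss

# L_id = multilevel_l1(pyramid_of_input, pyramid_of_identity_recon, [1.0, 1.0, 1.0, 1.0])
```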
**VAE Loss** encourages the latent stain vectors from the images actually recorded to conform to a prior Gaussian distribution to facilitate stochastic sampling at test time. It is calculated through the KL-divergence:
\[\mathcal{L}_{vae}(\mathbf{z}^{in}_{s})=\mathbb{E}_{\mathbf{z}^{in}_{s}}\left[D_{\text {KL}}(\mathbf{z}^{in}_{s}||N(0,1))\right], \tag{6}\]
where \(D_{\text{KL}}(p\|q)=\int p(z)\log\frac{p(z)}{q(z)}\,\text{d}z\).
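Assuming the style encoder outputs a mean and log-variance for each stain vector (the standard VAE parameterization), the reparameterized sampling and the KL term of eq. (6) can be sketched as:

```python
# Sketch of the reparameterization trick and the KL term of eq. (6).
import torch

def reparameterize(mu, logvar):
    """Draw z_s = mu + sigma * eps with eps ~ N(0, I) (differentiable sampling)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def kl_to_unit_gaussian(mu, logvar):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch."""
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()

# z_s_resampled = reparameterize(mu_s, logvar_s)
# L_vae = kl_to_unit_gaussian(mu_s, logvar_s)
```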
In Mode B, the random augmentation \(\mathbf{I}^{out}\) and the cyclic reconstruction \(\hat{\mathbf{I}}^{\tilde{in}}\) are given as:
\[\mathbf{I}^{out}=G(\mathbf{z}^{out})=G(\text{AdaIN}(\mathbf{z}^{in},S_{D}(\mathbf{z}^{r}_{s}))), \tag{7}\]
\[\text{and}\ \ \hat{\mathbf{I}}^{in}=G(\text{AdaIN}(\mathbf{z}^{out},S_{D}(\mathbf{z}^{in}_{s}))), \tag{8}\]
where \(\mathbf{z}^{r}_{s}\) denotes a randomly sampled stain vector. The relevant losses are:
**Cross-Cycle Consistency Loss** constrains the cross-cycle-reconstructed version to be consistent with the original input image in multiple resolutions:
\[\mathcal{L}_{cc}(\mathbf{I}^{in},\hat{\mathbf{I}}^{in})=\mathbb{E}_{\mathbf{I}^{in}}\left[\sum_{k}m_{k}\left\|\mathbf{I}^{in}_{k}-\hat{\mathbf{I}}^{in}_{k}\right\|_{1}\right]. \tag{9}\]
**Structure-Preserving Loss** is an adaptation of the perceptual loss introduced in [30] - the instance normalization function is applied on each set of features extracted by \(\phi(\cdot)\) at level \(i\):
\[\mathcal{L}_{sp}(\mathbf{I}^{in},\mathbf{I}^{out})=\mathbb{E}_{\mathbf{I}^{in}}\left[\sum_{ i}^{N}\frac{1}{w_{i}h_{i}d_{i}}\left\|\text{IN}(\phi_{i}(\mathbf{I}^{in}))-\text{IN}( \phi_{i}(\mathbf{I}^{out}))\right\|_{F}^{2}\right], \tag{10}\]
where \(\|\cdot\|_{F}\) denotes the Frobenius norm, and \(w\), \(h\) and \(d\) represent the width, height and depth of the feature space. As shown in [31], applying instance normalization makes the loss more domain-invariant. This is particularly important in our case since it penalizes undesirable alterations to cell morphology by stain transformation.
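A sketch of this instance-normalized perceptual loss, assuming a VGG16 feature extractor for \(\phi(\cdot)\) (the specific layer selection here is an assumption, and inputs are taken to be ImageNet-normalized), is:

```python
# Sketch of the structure-preserving loss of eq. (10) with a VGG16 backbone
# (layer choice is an illustrative assumption, not taken from the paper).
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

_vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

_LAYERS = {3, 8, 15, 22}  # relu1_2, relu2_2, relu3_3, relu4_3

def structure_preserving_loss(x, y):
    """Compare instance-normalized VGG features of the input image and its
    stain-augmented version. x, y: ImageNet-normalized (N, 3, H, W) tensors."""
    loss, fx, fy = 0.0, x, y
    for idx, layer in enumerate(_vgg):
        fx, fy = layer(fx), layer(fy)
        if idx in _LAYERS:
            loss = loss + F.mse_loss(F.instance_norm(fx), F.instance_norm(fy))
        if idx == max(_LAYERS):
            break
    return loss
```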
**Latent Regression Loss** helps prevent mode collapse by encouraging a reversible mapping between the stain latent space and the image space:
\[\mathcal{L}_{lr}(\mathbf{z}_{s}^{r},\mathbf{z}_{s}^{out})=\mathbb{E}_{\mathbf{z}_{s}^{r} \sim N(0,1)}\left[\left\|\mathbf{z}_{s}^{r}-\mathbf{z}_{s}^{out}\right\|_{1}\right]. \tag{11}\]
**Mode Seeking Loss** encourages the randomly generated samples to be more diverse by minimizing the following ratio:
\[\mathcal{L}_{ms}(\mathbf{z}_{s}^{r_{1}},\mathbf{z}_{s}^{r_{2}})=\mathbb{E}_{\mathbf{z}_{s} ^{r_{1}},\mathbf{z}_{s}^{r_{2}}\sim N(0,1)}\left[\frac{\left\|\mathbf{z}_{s}^{r_{1}}- \mathbf{z}_{s}^{r_{2}}\right\|_{1}}{\left\|\mathbf{I}^{r1}-\mathbf{I}^{r2}\right\|_{1}+ \epsilon}\right], \tag{12}\]
where \(\epsilon\) is a small stabilizing constant.
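The mode-seeking term of eq. (12) is straightforward to sketch for a pair of images generated from two randomly drawn stain vectors (variable names are illustrative):

```python
# Sketch of the mode-seeking loss of eq. (12).
import torch

def mode_seeking_loss(z_r1, z_r2, img_r1, img_r2, eps=1e-5):
    """Ratio of latent distance to image distance; minimizing it pushes the two
    generated images apart for distinct random stain vectors."""
    latent_dist = (z_r1 - z_r2).abs().mean()
    image_dist = (img_r1 - img_r2).abs().mean()
    return latent_dist / (image_dist + eps)
```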
**Adversarial Loss** encourages the randomly stained images \(\mathbf{I}^{out}\) to be indistinguishable from the set of cell images actually recorded, in terms of both stain and morphology in multiple resolutions. The loss takes the form of least squares [32]:
\[\begin{split}\mathcal{L}_{adv}(E,G,D)&=\frac{1}{2} \mathbb{E}_{\mathbf{I}^{out}}\left[\sum_{k}D_{k}(\mathbf{I}_{k}^{out})^{2}\right]\\ &+\frac{1}{2}\mathbb{E}_{\mathbf{I}^{in}}\left[\sum_{k}\left(1-D_{k} (\mathbf{I}_{k}^{in})\right)^{2}\right].\end{split} \tag{13}\]
Finally, the combined min-max optimization objective for G-SAN from the two modes, Mode A and Mode B, can be written as:
\[\begin{split} E^{*},G^{*}=&\arg\underset{E,G}{ \text{min}}\mathcal{L}_{adv}+\lambda_{id}\mathcal{L}_{id}+\lambda_{vae} \mathcal{L}_{vae}\\ &+\lambda_{cc}\mathcal{L}_{cc}+\lambda_{sp}\mathcal{L}_{sp}+ \lambda_{lr}\mathcal{L}_{lr}+\lambda_{ms}\mathcal{L}_{ms},\end{split} \tag{14}\]
where the \(\lambda\)s are tunable hyperparameters.
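Assembling the generator-side objective of eq. (14) then amounts to a weighted sum of the individual terms; the sketch below uses the \(\lambda\) values reported in Sec. IV and assumes the individual losses have been computed as in eqs. (5)-(13):

```python
# Sketch of the combined generator objective of eq. (14).
# Lambda weights as reported in the experimental setup (Sec. IV).
LAMBDAS = {"id": 1.0, "vae": 0.01, "cc": 10.0, "sp": 0.5, "lr": 10.0, "ms": 0.02}

def generator_objective(losses):
    """losses: dict with keys 'adv', 'id', 'vae', 'cc', 'sp', 'lr', 'ms',
    each a scalar tensor computed as in the equations above."""
    total = losses["adv"]
    for name, weight in LAMBDAS.items():
        total = total + weight * losses[name]
    return total
```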
## 4 Experimental Results
The training dataset for G-SAN consists of patches extracted from 573 WSIs downloaded from the TCGA program [35]. The selection of WSIs is carefully curated to maximize the diversity in terms of both the H&E stain appearance and cell morphology. More specifically, with each WSI representing a unique pair of (tissue site, laboratory ID), there are 33 tissue sites from around 200 laboratories included in our training data1. In total, we extracted 348k patches of size \(512\times 512\) at \(40\times\) magnification. We trained G-SAN for 60k iterations using the ADAM optimizer with a linear-decay learning-rate scheduler with the initial learning rate set to \(1e^{-4}\). Training took about 9 hours with an AMD 5800X 8-core CPU with 32G RAM and a Nvidia RTX3090 GPU with 24G memory. The hyperparameters in Eq. (14) are set as \(\lambda_{id}=1\), \(\lambda_{vae}=0.01\), \(\lambda_{cc}=10\), \(\lambda_{sp}=0.5\), \(\lambda_{lr}=10\), and \(\lambda_{ms}=0.02\). See Sec. V-B for how we arrived at these values for the hyperparameters.
Footnote 1: A comprehensive superset of the WSI origins can be found at [36].
In the rest of this section, we first provide a qualitative analysis of G-SAN augmentations, followed by quantitative analyses through two common downstream tasks: patch classification at \(20\times\) magnification and nucleus segmentation at \(40\times\). All experimental results were obtained with a single G-SAN model where \(K=3\).
We denote this model as G-SAN\({}_{K=3}\) and it is used for both downstream tasks in our quantitative analysis. The notation
Figure 5: This figure presents an overview of the cyclic reconstruction mode (Mode B) of the training procedure for G-SAN. In the forward direction, given an input image \(\mathbf{I}^{in}\), the encoding process produces a deep encoding \(\mathbf{z}^{in}\) along with its stain encoding \(\mathbf{z}_{s}^{in}\). Subsequently, the generative process combines \(\mathbf{z}^{in}\) with a noise stain encoding \(\mathbf{z}_{s}^{r}\) via AdaIN to produce a stain-augmented version of the input image, \(\mathbf{I}^{out}\). And in the reverse direction, the deep code \(\mathbf{z}^{out}\) is first extracted from \(\mathbf{I}^{out}\), then combined with the original stain encoding \(\mathbf{z}_{s}^{in}\) via AdaIN, and finally passed to \(G\) to produce the cyclic reconstruction \(\hat{\mathbf{I}}^{in}\).
"G-SAN\({}_{K=3}\) @\(k=0\)" indicates that the image inputs and outputs of G-SAN are given and taken at pyramid level \(k=0\) (_i.e._ at \(40\times\) magnification), while \(k=1\) corresponds to \(20\times\). Furthermore, we provide a timing analysis comparing several commonly used stain transfer and stain augmentation tools to G-SAN. Lastly, we offer insights into some of the design choices in G-SAN through ablation studies.
### Qualitative Analysis
In rows (1) and (2) of Fig. 6, we first showcase the G-SAN-augmented results - note how G-SAN is able to augment cell images that are diverse in both cell morphology and stain colors. In row (3), we performed linear interpolations between two stain encodings extracted from two stain-reference images and combined the interpolated stain codes with the morphology code extracted from a morphology-reference image. The fact that applying the interpolated stains resulted in smooth changes in the images shown in the last row illustrates that the latent space is generally smooth, which is a desirable property if it is to lend itself to stochastic sampling. Subsequently in Fig. 7, we showcase the multi-resolution stain-augmented outputs by G-SAN, along with the generated band-pass images. Especially note how realistic the generated band-pass images are when compared to those from the LPs of real images in Fig. 3. Lastly, to visually demonstrate the range of stain appearances covered by the latent space, Fig. 8 is a scatter plot of the most dominant colors from the cell images that are produced by G-SAN.
### Downstream Task I: Patch Classification
For the first quantitative assessment, we choose the downstream task of patch classification of breast cancer metastases using the CAMELYON17 dataset [37]. We used the semantically labeled subset, comprising 50 WSIs of histological lymph node sections with metastasis regions labeled at pixel level. It is important to note that the WSIs were made at 5 different medical centers with 10 WSIs per center. On account of the differences in the staining procedures used and also the differences in the imaging equipment across the 5 medical centers, there exist significant stain variations among the resulting images. Example patches demonstrating the varying stains are shown in Fig. 9. We preprocessed the tissue regions in the WSIs with patches at \(20\times\) magnification level, resulting in a total of 210k non-overlapping patches of size \(256\times 256\). We followed the same practice as described in [15] for label assignment: if the tumor masked region exceeds 1% in a patch, the patch is labeled positive.
In our 5-fold cross-validated experiment, we perform training and validation of our classification network only on patches from a single medical center in each fold. This is to simulate
Figure 6: Row (1): images from [33]; Row (2): input images augmented by G-SAN; Row (3): interpolation results by mixing the morphology from image (1c) with the stains obtained through linearly interpolating between the stain vectors from image (1a) and (1e).
the practical scenario in which the available labeled training data is scarce and has limited stain variation. Patches from the other four centers are therefore out-of-domain in terms of the stain and used as testing data. Additionally, note that positive and negative patches are drawn with equal probabilities during training and validation. The results obtained with the different stain augmentation approaches are shown in Fig. 10. In addition to the simple _HED Jitter_ augmentations, we also compare G-SAN to the state-of-the-art in non-learning based stain augmentation frameworks, such as HERandAugment [14] and RandStainNA [17]. For both HistAuGAN [15]2 and G-SAN, the stain vectors were randomly drawn from a normal distribution. In our dataloader, stain augmentation was applied to every image loaded for training. Stain augmentation was also applied to the images loaded for validation to prevent statistically biased evaluations of our models due to the limited stain appearances in the validation data. Additionally, we believe that a stain augmentation method is worthy of merit only if it can also diversify the validation stain distribution such that the validation score better correlates with the true generalizability of a model.
Footnote 2: For HistAuGAN, we used the pretrained weights provided by its authors on patches at \(40\times\) from the five domains of the CAMELYON17 dataset. For stain augmentation, we used a randomly interpolated domain as the target domain for each image.
From the results in Fig. 10, we can first confirm the domain gaps among the images taken at different medical centers, as the scores by the baseline method (_i.e._ without stain augmentation) vary greatly across the folds. Such domain gaps can be effectively reduced by applying stain augmentations. Additionally, among the stain augmentation methods, it can be observed that augmentations by G-SAN are the most effective, as they provide the greatest boosts in both the overall F1 score (15.7%) and the overall Average Precision Score (12.1%) compared to the baseline. Given that the second best performer, HERandAugment [14], produces unrealistic stain appearances by design, the superior performance by G-SAN just shows that augmenting cell images beyond the distribution of naturally occurring stain appearances may not be the best strategy. Additionally, the poor performance by HistAuGAN could be attributed to its inflexibility towards multi-resolution, given that it was trained on images at \(40\times\) magnification. Last but not least, it is worth mentioning that, as it cannot be avoided, sometimes the stain distribution of the unaltered training data can overlap better with the test stain distribution. However, in most cases as shown in our experiments, using any form of stain augmentation will provide a boost in performance.
### Downstream Task II: Nucleus Segmentation
We have also evaluated the performance improvements made possible by the augmentations generated by G-SAN on the downstream task of nuclear instance segmentation. Nuclear instance segmentation is challenging due to high morphological and textural heterogeneity of the nuclei as well as their small sizes. What that implies is that any stain augmentation framework must be highly structure preserving in order to be useful. In our experiments with nuclear segmentation, we used a straightforward gradient-flow based CNN model inspired by
Fig. 7: Dissecting the G-SAN augmented images. For the stain-augmented version of an input image \(\mathbf{I}_{k=0}^{in}\) at \(40\times\), G-SAN produces both the Gaussian Pyramid (GP), \(G(\mathbf{I}^{out})=[\mathbf{I}_{1}^{out},\mathbf{I}_{2}^{out},\mathbf{I}_{3}^{out}]\), as well as the Laplacian Pyramid (LP), \(L(\mathbf{I}^{out})=[\mathbf{h}_{0}^{out},\mathbf{h}_{1}^{out},\mathbf{h}_{2}^{out},\mathbf{I}_{3}^ {out}]\) that is used to construct the GP. Note that in the figure, the \(\mathbf{I}_{k=2,3}^{out}\) and \(\mathbf{h}_{k=0,2}^{out}\) images have been resized to fit the display grid. Please zoom in to see the structures in the reduced-size images.
[42], [43]. To quantitatively measure the instance segmentation quality, we use the Panoptic Quality (PQ) as defined in [42], the Average Precision (AP) in [43] as well as the Aggregated Jaccard Index (AJI) in [33].
In light of the limited quantity of the available nucleus groundtruth, we evaluated nucleus segmentations with 5-fold cross-validation as explained in what follows. In total, we curated 556 images at \(40\times\) magnification with nucleus annotations from six publicly available datasets as tabulated in Tab. II. Since each dataset covers a different set of organs, and the cell morphology varies considerably across organs, we cannot train a model on a single dataset and expect it to generalize well to the others. As a result, we grouped images from all the dataset together and divided them into 5 folds. Images from one fold are used for training and validation, while images from the other four folds are used for testing. Given the scarcity of nucleus annotations, our cross-validation setup simulates the realistic scenario where the quantity of available labeled data for training and validation is on the same level as in most of the publicly accessible datasets as listed in Tab. II. Moreover, complimentary to what was the case for the CAMELYON17 dataset we used for patch classification, each fold here represents a wide range of organs and covers a diverse set of stain appearances. With this cross-validation setup, we hope to demonstrate that G-SAN can benefit the training of generalized models for nucleus segmentation across organs, which is in the interest of researchers [43].
From the test scores plotted in Fig. 11, we can again observe that G-SAN offers the largest average improvement over the baseline (_i.e._ without stain augmentation) in terms of all three metrics: 7.3% in PQ, 7.2% in AP and 8.5% in AJI. Regarding the performance of HistAuGAN, while a cursory examination of the stain augmentations generated by the network may cause one to think that they are of high quality, the reality is that the augmentations are not structure-preserving and therefore the algorithm comes up short from the standpoint of producing good segmentations. This shortcoming of HistAuGAN could be attributed to the significant heterogeneity in tissue morphology across organs, coupled with the fact that it was exclusively trained on breast cancer images from the CAMELYON17 dataset [15].
### Timing Analysis
In Tab. III, we tabulate the average time per image needed for stain augmentation for a range of image sizes. We compare the run times of G-SAN against CPU-based implementations of the SOTA in stain separation (_i.e._ Macenko [2] and Vahadane [3]), as well as the competing stain augmentation methods used previously in the downstream tasks. With the stain separation methods, while we recognize that their efficiency can be optimized with prior knowledge of the data, we do not consider any application-specific or data-specific factors in our timing measurements for the sake of simplicity, especially given that the availability of such information is not guaranteed in practice. The experiments were conducted on the same machine with an AMD 5800X 8-core CPU and a Nvidia RTX3090 GPU. The run times are averaged over 1000 iterations. Compared to all other stain transfer and stain augmentation methods, G-SAN is more scalable with increasing image dimensions. Given input images of size \(2048^{2}\), performing stain transfer using G-SAN at level 0 only requires up to 44% of time needed by the fastest CPU-based
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & Tissue Site & Image Size & Quantity \\ \hline MoNuSeg [33] & Kidney, Lung, Colon, Breast, Bladder, Prostate, Brain & \(1000\times 1000\) & 44 \\ CPM [38] & Lung, Head and Neck, Brain & \([439,1032]\times[392,888]\) & 79 \\ CryoNuSeg [39] & Adrenal Gland, Larynx, Lymph Node, Mediastinum, Pancreas, Pleura, Skin, Testis, Thymus, Thyroid Gland & \(512\times 512\) & 30 \\ MoNuSAC [40] & Lung, Prostate, Kidney, Breast & \([35,2162]\times[33,2500]\) & 294 \\ TNBC [41] & Breast, Brain & \(512\times 512\) & 68 \\ CoNSeP [42] & Colon & \(1000\times 1000\) & 41 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Full details on the datasets used in our nucleus segmentation experiment.
Fig. 8: A scatter plot of the most dominant colors in the cell images produced by G-SAN. Through the stochastic sampling of a normal distribution in the stain latent space as learned by the SMN, a diverse yet realistic distribution of stain appearances can be achieved with regard to both hue and lightness. Note that the nuclear and the non-nuclear regions were separated using ground-truth masks and their most dominant colors were extracted using the median-cut algorithm reported in [34]. The axes correspond to the non-nuclear colors. Only a subset of the nuclear points is shown for a less cluttered visualization.
Fig. 9: Example patches from the five medical centers in the CAMELYON17 dataset.
multi-threaded stain separation or stain augmentation method.
## 5 Discussion
### Ablation Studies on the G-SAN Architecture
In this section, we conduct additional ablation studies on some of the most important design choices in G-SAN. We used the same nucleus segmentation experimental setup as in Sec. IV-C and the results are tabulated in Tab. IV. Regarding the choice of \(K\), we specifically chose \(K=3\) for our final model because as one can observe in Fig. 7, the residual image \(\mathbf{I}_{k=3}\) (_i.e._ at \(5\times\) if \(\mathbf{I}_{k=0}\) is at \(40\times\)) is the lowest resolution where the network can still accurately extract the H&E stain information. For any \(k>3\), the nuclei become indistinct from the other morphological structures and therefore it is challenging to extract the correct Hematoxylin representation. A direct consequence of this inability to extract correct stain representations is inadequate stain-morphology disentanglement. In Tab. IV, the relatively poor performances of G-SAN\({}_{K=4,5}\) illustrate this effect.
Additionally, we conducted experiments on G-SAN\({}_{K=3}\) without scaling factors at the BP pathways, and with learnable scaling factors. The results presented in Tab. IV demonstrate the importance of our proposed approach to BP scaling for competitive performance. Our experiments showed that proper scaling of BP inputs and outputs can help prevent the appearance of visual artifacts in generated BP images, particularly during the initial stages of training.
### Determining the \(\lambda\) Hyperparameters
This section outlines the reasoning behind selecting the \(\lambda\) hyperparameters for G-SAN training. The central idea here is to prioritize the loss terms based on their significance in achieving stain-morphology disentanglement. To this end, we
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Average Score} & PQ & AP & AJI \\ \hline Base (No Stain Aug.) & 0.4553 & 0.4696 & 0.4325 \\ G-SAN\({}_{K=3}\) @ \(k=0\) & **0.4885** & **0.5034** & **0.4693** \\ G-SAN\({}_{K=4}\) @ \(k=0\) & 0.4812 & 0.4914 & 0.4615 \\ G-SAN\({}_{K=5}\) @ \(k=0\) & 0.4737 & 0.4853 & 0.4565 \\ G-SAN w/ learnable scaling & 0.4834 & 0.4934 & 0.4642 \\ G-SAN w/o BP scaling & 0.4812 & 0.4942 & 0.4606 \\ \hline \hline \end{tabular}
\end{table}
TABLE IV: Ablation studies on several design choices in G-SAN using the nucleus segmentation experiment.
Figure 11: Panoptic Quality (PQ), Average Precision (AP), and Aggregated Jaccard Index (AJI) scores of the 5-fold nucleus segmentation experiment. The images used were collected from the following publicly available datasets: MoNuSeg [33], CPM15, CPM17 [38], CryoNuSeg [39], MoNuSAC [40], TNBC [41], and CoNSeP [42]. More details about each dataset can be found in Tab. II. For the G-SAN results shown, the input images and the outputs produced are for the pyramid level \(k=0\) (_i.e._ at \(40\times\) magnification).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{ Image Size} & \(256^{2}\) & \(512^{2}\) & \(1024^{2}\) & \(2048^{2}\) \\ \hline Macenko @ StainTools [44] & 0.0199 & 0.0726 & 0.2754 & 1.1154 \\ Vahadane @ StainTools & 1.0191 & 1.0634 & 1.2243 & 1.9868 \\ Macenko @ Torchstain [45] & 0.0076 & 0.0279 & 0.1063 & 0.5391 \\ HED jitter [5] & 0.00371 & 0.0141 & 0.0612 & 0.2664 \\ HERandAugment [14] & 0.0090 & 0.0329 & 0.1279 & 0.5269 \\ RandStainNA [17] & **0.0024** & 0.0117 & 0.0433 & 0.1845 \\ HistAugGAN [15] & 0.0171 & 0.0727 & 0.2946 & 1.2045 \\ G-SAN\({}_{K=3}\) @ \(k=1\) & 0.0060 & 0.01137 & 0.0420\({}^{\dagger}\) & 0.1664\({}^{\dagger}\) \\ G-SAN\({}_{K=3}\) @ \(k=0\) & 0.0049 & **0.0060** & **0.0209** & **0.0811** \\ \hline \hline \end{tabular}
\end{table}
TABLE III: Seconds needed per image for stain transfer or stain augmentation using different methods. The best and the second best timings are denoted with **bold** fonts and \(\dagger\), respectively.
Figure 10: F1 scores and Average Precision Scores (APS) of the tumor class for our 5-fold cross-validated patch classification experiment on the CAMELYON17 dataset. For the G-SAN results shown, the input images and the outputs produced are for the pyramid level \(k=1\) (_i.e._ at \(20\times\) magnification).
assign the highest value to \(\lambda_{cc}\) since minimizing \(\mathcal{L}_{cc}\) is critical for ensuring that the stain profile and the morphology can be disentangled and put back together through the cyclic reconstruction process without any loss of information. Similarly, to avoid the trivial solution where all the useful information is solely encoded in the morphology representation, we assign a large value to \(\lambda_{lr}\) as well. Giving the network the ability to recover the random stain vector \(\mathbf{z}_{s}^{r}\) that was used to produce the augmented output \(\mathbf{I}^{out}\) ensures that \(\mathbf{z}_{s}^{r}\) meaningfully contributes to the synthesized image. The effects of ablating \(\mathcal{L}_{lr}\) are visually presented in Fig. 12.
The remaining loss terms in G-SAN training serve primarily to regulate the process and are thus assigned less weight. For instance, \(\mathcal{L}_{sp}\) ensures that the structural information is preserved halfway through the cyclic reconstruction process. However, overly emphasizing this term can limit the stain diversity in the augmented images. Similarly, \(\mathcal{L}_{id}\) and \(\mathcal{L}_{vae}\) are vital to SMN's formulation as a VAE. Still, they are not as crucial in achieving stain-morphology disentanglement and are therefore given less weight than \(\mathcal{L}_{cc}\) and \(\mathcal{L}_{lr}\).
Finally, using the same nucleus segmentation experimental setup, Tab. V quantitatively illustrates the effects of the various loss terms discussed above. All losses meaningfully contribute to the performance of G-SAN.
### Novelty Comparing to Fan et al.
In this section, we discuss the fundamental differences between our G-SAN and the work by Fan _et al._[20], which also utilizes the LP representation for fast stain transfer. Most importantly, their architecture, which is almost identical to [25], is not designed for stain-morphology disentanglement and therefore is not capable of transferring to an arbitrary stain. Furthermore, to highlight some specific yet significant design differences: first, we choose not to employ the progressive upsampling pathways, which we observed to generate undesired artifacts in the LP images in our experiments. Second, we deliberately avoid the utilization of the "skip-connections" from the input BP image to the pixel-wise multiplication operator that are used in [20]. The reason for this choice is to ensure the removal of any stain-related information from the input BP image before applying a new style, as the presence of such connections would lead to the leakage of the original image's stain into the generated image, hindering adequate stain-morphology disentanglement.
## VI Conclusions
In this paper, we introduced G-SAN as a domain-independent approach to stain augmentation for H&E-stained histological images. By disentangling the morphological and the stain-related representations, G-SAN is capable of augmenting an input cell image with random yet realistic stains. Additionally, by targeting the structure-preserving nature of stain transfer with a Laplacian Pyramid based architecture, the proposed G-SAN generator is highly competitive in terms of computational efficiency. Through the downstream tasks of patch classification and nucleus segmentation, we demonstrated quantitatively that the quality of G-SAN-augmented images is superior to the images produced by the existing stain augmentation approaches.
|
2310.16978 | The Significance of Machine Learning in Clinical Disease Diagnosis: A
Review | The global need for effective disease diagnosis remains substantial, given
the complexities of various disease mechanisms and diverse patient symptoms. To
tackle these challenges, researchers, physicians, and patients are turning to
machine learning (ML), an artificial intelligence (AI) discipline, to develop
solutions. By leveraging sophisticated ML and AI methods, healthcare
stakeholders gain enhanced diagnostic and treatment capabilities. However,
there is a scarcity of research focused on ML algorithms for enhancing the
accuracy and computational efficiency. This research investigates the capacity
of machine learning algorithms to improve the transmission of heart rate data
in time series healthcare metrics, concentrating particularly on optimizing
accuracy and efficiency. By exploring various ML algorithms used in healthcare
applications, the review presents the latest trends and approaches in ML-based
disease diagnosis (MLBDD). The factors under consideration include the
algorithm utilized, the types of diseases targeted, the data types employed,
the applications, and the evaluation metrics. This review aims to shed light on
the prospects of ML in healthcare, particularly in disease diagnosis. By
analyzing the current literature, the study provides insights into
state-of-the-art methodologies and their performance metrics. | S M Atikur Rahman, Sifat Ibtisum, Ehsan Bazgir, Tumpa Barai | 2023-10-25T20:28:22Z | http://arxiv.org/abs/2310.16978v1 | # The Significance of Machine Learning in Clinical Disease Diagnosis: A Review
###### Abstract
The global need for effective disease diagnosis remains substantial, given the complexities of various disease mechanisms and diverse patient symptoms. To tackle these challenges, researchers, physicians, and patients are turning to machine learning (ML), an artificial intelligence (AI) discipline, to develop solutions. By leveraging sophisticated ML and AI methods, healthcare stakeholders gain enhanced diagnostic and treatment capabilities. However, there is a scarcity of research focused on ML algorithms for enhancing the accuracy and computational efficiency. This research investigates the capacity of machine learning algorithms to improve the transmission of heart rate data in time series healthcare metrics, concentrating particularly on optimizing accuracy and efficiency. By exploring various ML algorithms used in healthcare applications, the review presents the latest trends and approaches in ML-based disease diagnosis (MLBDD). The factors under consideration include the algorithm utilized, the types of diseases targeted, the data types employed, the applications, and the evaluation metrics. This review aims to shed light on the prospects of ML in healthcare, particularly in disease diagnosis. By analyzing the current literature, the study provides insights into state-of-the-art methodologies and their performance metrics.
Machine learning (ML), IoMT, healthcare; supervised learning, chronic kidney disease (CKD), convolutional neural networks, adaptive boosting (AdaBoost), COVID-19, deep learning (DL). +
Footnote †: journal: Computer Applications (0975 - 8887)
## 1 Introduction
In the medical field, artificial intelligence (AI) plays a crucial role in developing algorithms and techniques to aid in disease diagnosis. Medical diagnosis entails determining the illness or conditions that account for an individual's symptoms and indicators, usually relying on their medical background and physical assessment. However, this process can be challenging as many symptoms are ambiguous and require expertise from trained health professionals. This becomes particularly problematic in countries like Bangladesh and India, where there is a scarcity of healthcare professionals, making it difficult to provide proper diagnostic procedures for a large population of patients. Additionally, medical tests required for diagnosis can be expensive and unaffordable for low-income individuals [1-3].
Due to human error, overdiagnosis can occur, leading to unnecessary treatment and negatively impacting both the patient's health and the economy. Reports suggest that a significant number of people experience at least one diagnostic mistake during their lifetime. Several factors contribute to misdiagnosis, including the lack of noticeable symptoms, the presence of rare diseases, and diseases being mistakenly omitted from consideration [4, 5]. ML has found widespread applications in various fields, from cutting-edge technology to healthcare, including disease diagnosis. Its popularity is growing, and it is becoming increasingly utilized in healthcare to improve diagnostic accuracy and safety.
ML serves as a robust mechanism enabling machines to learn autonomously, eliminating the requirement for explicit programming. It harnesses sophisticated algorithms and statistical methods to analyze data and formulate predictions, departing from traditional rule-based systems. The accuracy of machine learning predictions heavily depends on the quality and relevance of the dataset used. Its applications span various industries, including finance, retail, and healthcare [6, 7], where it presents significant opportunities for disease diagnosis and treatment.
One of the notable features of machine learning is its continuous improvement in data prediction and classification. As more data is gathered, the prediction models become more adept at making accurate decisions. In the healthcare sector, patient datasets stored in electronic healthcare records can be leveraged to extract relevant information using ML techniques [8, 9, 10]. These algorithms aid in disease diagnosis by analyzing data and predicting the underlying causes of illnesses based on disease-causing variables extracted from electronic health records [11]. Compared to traditional biostatistical approaches, machine learning has gained popularity for tasks like classification, prediction, and clustering involving complex healthcare data. It has demonstrated exceptional results in various medical tasks, such as identifying body organs from medical images [12], classifying interstitial lung diseases [13], reconstructing medical images [14, 15], and segmenting brain tumors [15]. Overall, the use of ML in healthcare has shown great promise in advancing disease analysis, diagnosis, and treatment, showcasing its potential to transform the field by leveraging vast amounts of data for accurate and efficient healthcare solutions [16-21].
## 2 AI in healthcare and medicine
The utilization of AI and related technologies is becoming more widespread in both the business sector and society. This trend is now extending to the healthcare domain. These technologies have the capacity to revolutionize various facets of patient care, as well as administrative procedures within provider, payer, and pharmaceutical entities. Machine learning is a statistical methodology for fitting models to data and acquiring knowledge through the process of training models with data. Machine learning is widely recognized as a prevalent manifestation of artificial intelligence. In the field of healthcare, precision medicine is a widely employed application of traditional machine learning. It involves the prediction of treatment outcomes for patients by considering a range of patient features and the contextual factors surrounding the therapy.
Supervised learning is a fundamental requirement for the bulk of machine learning and precision medicine applications; it requires a training dataset in which the outcome variable, such as the onset of disease, is already known.
The neural network, a sophisticated variant of machine learning, has been a prominent technology in healthcare research for several decades. Its origins can be traced back to the 1960s. Neural networks have been effectively employed in categorization tasks, such as predicting the likelihood of a patient developing a specific disease. The framework adopts a perspective that analyses problems by considering the inputs, outputs, and weights of variables, sometimes referred to as "features," which establish the associations between inputs and outcomes. The most intricate manifestations of machine learning encompass deep learning, which pertains to neural network models characterized by numerous tiers of features or variables that facilitate the prediction of events. The quicker processing capabilities of contemporary graphics processing units and cloud infrastructures have enabled the discovery of latent features inside these models, perhaps amounting to thousands. One prevalent utilization of deep learning in the healthcare field involves the identification and classification of possibly malignant lesions in radiographic images. The utilization of deep learning techniques is becoming more prevalent in the field of radiomics, which involves the identification of diagnostically significant characteristics in imaging data that are beyond the capabilities of human visual perception [22]. Radiomics and deep learning are frequently utilized in the domain of oncology-focused image analysis. The amalgamation of these technologies exhibits potential for enhanced diagnostic precision compared to the preceding iteration of automated image analysis tools, commonly referred to as computer-aided detection (CAD) [23, 24].
Over the years, intelligent healthcare systems have commonly relied on centralized artificial intelligence (AI) capabilities situated in either the cloud or the data center to facilitate the learning and analysis of health data. The current centralized solution in modern healthcare networks is inefficient in terms of communication latency and lacks high network scalability due to the growing volumes of health data and the proliferation of IoMT-driven devices. Moreover, the dependence on a centralized server or third-party entity for data learning gives rise to significant privacy concerns, such as the potential leakage of user information and the risk of data breaches. This assertion holds special validity within the realm of e-healthcare, as health-related data is characterized by its high sensitivity and privacy, hence necessitating adherence to health standards. Furthermore, it is anticipated that in forthcoming healthcare systems, a centralized AI architecture may become less appropriate due to the decentralized nature of health data, which is dispersed across a vast IoMT network. Hence, it is imperative to adopt distributed artificial intelligence (AI) methodologies in order to facilitate the development of scalable and privacy-conscious intelligent healthcare applications at the network edge. In the present scenario, federated learning (FL) has emerged as a viable method for achieving cost-effective smart healthcare applications while enhancing privacy protection [25, 26]. From a conceptual standpoint, FL is an AI methodology that facilitates the development of AI models of superior quality. This is achieved by combining and averaging local updates obtained from numerous health data clients, such as Internet of Medical Things (IoMT) devices [27, 28]. Notably, FL accomplishes this without necessitating direct access to the individual data stored locally. This measure has the potential to hinder the disclosure of sensitive user information and user preferences, thereby reducing the dangers associated with privacy leakage. In addition, the utilization of FL in the healthcare domain allows for the aggregation of substantial computational and dataset resources from various health data clients, thereby enhancing the quality of AI model training, particularly in terms of accuracy. This improvement may not be attainable through the implementation of centralized AI approaches that rely on smaller datasets and have limited computational capabilities [29, 30].
## 3 ML for different disease diagnosis
In recent years, the proliferation of accessible hardware and cloud computing resources has ushered in a significant increase in the application of Machine Learning (ML) across various facets of human life. These applications span domains as diverse as personalized social media recommendations and the automation of industrial processes. Among these evolving domains, the healthcare sector stands out as an industry progressively adapting to the potential of ML. The implementation of ML algorithms within healthcare holds tremendous promise due to the substantial data volume amassed for each individual patient. This reservoir of data empowers ML algorithms to proactively chart comprehensive treatment plans for patients, contributing to cost reduction and an enhanced overall patient experience. This positions ML as a largely untapped asset within the healthcare industry. The sector grapples with an abundance of unstructured data, including patient records, historical treatment methods, and familial medical histories. By analyzing these data repositories, ML algorithms assist healthcare professionals in predicting forthcoming health issues, thus effectively capitalizing on patients' historical data.
The rapid progression of ML technology has catalyzed the paradigm shift towards information-centric healthcare administration and delivery. Contemporary healthcare enhancement strategies, characterized by a multidisciplinary approach, in conjunction with refined imaging and genetics-informed personalized therapeutic models, hinge on the underpinning of ML-powered information systems. As such, Machine Learning is substantiating its role as an indispensable asset poised to drive significant advancements within the healthcare domain.
Various ML approaches have gained significant attention from both academics and practitioners in disease diagnosis. This section provides an overview of focusing on the application of ML models in diagnosing various types of diseases. Notably, the global relevance of COVID-19 has led to numerous studies
focusing on its detection using ML since 2020, which also received priority in our investigation. We briefly discuss severe diseases like heart disease, kidney disease, breast cancer, and dementia.
### Dementia Classification
Alzheimer's Disease (AD) constitutes the most prevalent form of dementia necessitating extensive medical attention. AD is a chronic brain disorder with neurobiological origins that gradually leads to the demise of brain cells. This progression results in impairments to memory and cognitive abilities, eventually leading to an inability to perform basic tasks. Dementia linked to Alzheimer's manifests in various stages:
(a) Mild Cognitive Impairment: Often marked by memory lapses as individuals age, it can also evolve into dementia for some.
(b) Mild Dementia: Individuals with mild dementia experience cognitive difficulties that impact their daily routines. Symptoms include memory loss, confusion, personality changes, disorientation, and struggles with routine tasks.
(c) Moderate Dementia: This stage involves increased complexity in daily life, requiring additional care and support. Symptoms mirror mild dementia but are more pronounced. Patients may exhibit personality shifts, paranoia, and sleep disturbances.
(d) Severe Dementia: Symptoms worsen in this phase, with communication impairments and a need for constant care. Basic functions like bladder control and maintaining head position become challenging. Even simple actions, such as sitting in a chair, become unmanageable.
Efforts are underway to detect AD early, aiming to slow the abnormal brain degeneration, lower healthcare costs, and enhance treatment outcomes. The utilization of ML techniques has demonstrated considerable promise in the categorization of dementia, a multifaceted neurological condition that impacts cognitive abilities. By utilizing sophisticated algorithms and computational methodologies, ML models possess the capacity to examine a wide range of data sources and assist in the timely identification, prediction, and tailored therapeutic strategies for individuals affected by dementia. Several researchers have already deployed various ML models to classify dementia patients. Table 1 summarizes some ML models deployed in dementia diagnosis.
### Heart Disease Detection
Machine learning (ML) approaches have been extensively used by researchers and practitioners to identify cardiac disease [33, 34]. For instance, a neurofuzzy-integrated system was developed in [33] for detecting heart disease that achieved an accuracy of approximately 89%. Yet, the study's primary limitation is the absence of a clear account of how the suggested technique performs across diverse scenarios such as multiclass classification, large-scale data analysis, and imbalanced class distributions. Furthermore, there is a notable omission of discussion regarding the model's trustworthiness and interpretability, a factor progressively vital in medical domains to enhance comprehensibility for non-medical individuals. In [35], researchers introduced a deep CNN to detect irregular cardiac sounds. They optimized the loss function to enhance sensitivity and specificity on the training dataset. This model underwent testing in the 2016 PhysioNet computing competition, yielding a final prediction specificity of 95% and sensitivity of 73% [35].
Furthermore, deep learning (DL) algorithms have garnered attention in cardiac disease detection. In [36], a DL-based technique was developed for diagnosing cardiotocographic fetal health based on multiclass morphologic patterns. This model aimed to categorize patterns in individuals with pregnancy complications. Initial computational results displayed an accuracy of 88.02%, precision of 85.01%, and F-score of 85% [36]. Overfitting was addressed using various dropout strategies, leading to an increased training time, which they noted as a trade-off for achieving heightened accuracy. Liu et al. (2012) employed Support Vector Machine (SVM) to create predictive systems for cardiac arrest within 72 hours [37]. In a Cleveland dataset study, Shah et al. (2020) compared SVM, Random Forest (RF), Ordinal Regression, Logistic Regression (LR), and Naive Bayes (NB) for heart disease detection, with SVM yielding 95% accuracy [38]. Besides SVM and CNN, other algorithms like ensemble learning [39], k-Nearest Neighbors (kNN) [40], Decision Trees (DT) [41], Linear Discriminant Analysis (LDA) [42], and Bayesian Networks (BN) [43] were also employed in heart disease prediction. However, recent studies highlight Generative Adversarial Network (GAN) superiority for both balanced and imbalanced datasets. Researchers have introduced GAN-based models [44, 45, 46]. Wang et al. (2021) introduced CAB, a GAN-based approach addressing imbalance-related issues, achieving 99.79% accuracy in arrhythmia patients [44]. Rath et al. (2021) combined Long Short-Term Memory (LSTM) with GAN, accurately detecting heart disease patients from the MIT-BIH dataset with up to 99.4% accuracy [47].
These recent developments in GAN-based approaches showcase their potential in improving the accuracy and performance of machine learning models for cardiac disease diagnosis. The integration of GANs with other machine learning techniques holds promise for addressing imbalance-related challenges and achieving high accuracy in predicting heart diseases. Further research in this area is expected to enhance the capabilities of ML models to detect heart disease and contribute to more effective healthcare diagnostics.
Despite the wide adoption of ML applications in heart disease diagnosis, there is a lack of research addressing the challenges related to unbalanced data in multiclass classification. Additionally, most models lack sufficient explainability in their final predictions, which hampers their understandability and trustworthiness. Further research is needed to address these issues and improve the transparency and robustness of ML-based cardiac disease detection systems.
### Kidney Disease Detection
Chronic Kidney Disease (CKD) refers to a condition wherein the kidneys experience damage, leading to an impaired blood filtration process. The kidneys' primary function involves
\begin{table}
\begin{tabular}{|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline Ref. & Dataset & Model & Accuracy (\%) & Specificity (\%) & Recall (\%) \\ \hline
[31] & OASIS (373 samples, 10 variables) & XGB & 85.61 & 81.40 & 77.20 \\ \hline
[32] & 169 Samples, 14 variables & RF & 92 & 88 & 88 \\ \hline
[33] & OASIS & RF & 89.29 & - & 89 \\ \cline{3-6} & & XGB & 89.39 & - & 89 \\ \cline{3-6} & & GB & 91.02 & - & 91 \\ \cline{3-6} & & Voting 1 (Soft) & 91.17 & - & 91 \\ \hline \end{tabular}
\end{table}
Table 1: ML in Dementia Diagnosis
extracting excess water and waste from the blood to generate urine. In cases of CKD, the kidneys fail to effectively eliminate waste, resulting in its accumulation within the body. This ailment earns its "chronic" status due to the gradual and extended nature of the damage it inflicts over time. CKD stands as a prevalent global health concern, potentially giving rise to various health complications. The origins of CKD are diverse, encompassing factors like diabetes, elevated blood pressure, and heart disease.
Firstly, in [48], the authors conducted their research on clinical and blood biochemical measurements from 551 patients with proteinuria. Several ML models, including RF, extreme gradient boosting (XGB), LR, elastic net (ElasNet), lasso and ridge regression, k-NN, SVM, and artificial neural network (ANN), were compared for CKD risk prediction. The models ElasNet, lasso, ridge, and LR showed superior predictive performance, achieving a mean AUC and precision above 87% and 80%, respectively. LR ranked first, attaining an AUC of 87.3%, with a recall and specificity of 83% and 82%, respectively. ElasNet achieved the highest recall (0.85), while extreme gradient boosting (XGB) demonstrated the highest specificity (0.83). In a separate investigation [49], researchers employed SVM, AdaBoost, LDA, and gradient boosting (GBoost) algorithms to formulate accurate models for CKD prediction, utilizing a dataset from the UCI machine learning repository. The gradient boosting classifier attained the highest accuracy of 99.80%. In [50], authors concentrated on the CKD dataset, employing LR, Decision Tree (DT), and k-NN algorithms to develop three distinct CKD prediction models. LR exhibited the highest accuracy at 97%, outperforming DT (96.25%) and k-NN (71.25%). Similarly, another study [51] evaluated Naive Bayes (NB), RF, and LR models for CKD risk prediction, achieving respective accuracies of 93.9%, 98.88%, and 94.76% on the dataset. Furthermore, in [52], a system for CKD risk prediction was proposed using data from 455 patients and a real-time dataset. RF and ANN were trained and tested with 10-fold cross-validation, achieving accuracies of 97.12% and 94.5%, respectively. ANN and RF were also deployed on a CKD dataset of 455 instances with 25 features in [53]. The most significant features were selected using the Chi-Square test. The accuracy obtained by RF and ANN was 97.12% and 94.5%, respectively. A machine learning-based model was created in [54] with the aim of predicting chronic kidney disease (CKD) in patients. The model's performance was evaluated on two sets of data: one containing all attributes and another containing only selected features. Within the realm of feature selection methods, three common approaches are often employed: Wrapper, Filter, and Embedded. These methods serve the purpose of identifying and selecting the most crucial features for a given task or problem. The model was trained using various machine learning classifiers, including Artificial Neural Networks (ANN), C5.0, Logistic Regression (LR), Linear Support Vector Machine (LSVM), K-Nearest Neighbors (KNN), and Random Forest (RF). Based on the experimental findings, it was observed that the LSVM algorithm attained the maximum level of accuracy, reaching 98.86%, when applied with the SMOTE technique and all features included. SMOTE is widely regarded as an effective method for addressing class imbalance in datasets. The utilization of SMOTE in conjunction with feature selection by LASSO regression yielded superior outcomes compared to the LASSO regression model without the implementation of SMOTE [54].
In their study, Xiao et al. [55] utilized a dataset of 551 patients and implemented nine distinct machine learning algorithms. These algorithms included XGBoost, logistic regression, lasso regression, support vector machine, random forest, ridge regression, neural network, Elastic Net, and KNN. The researchers conducted an evaluation of many performance metrics, including accuracy, ROC curve, precision, and recall. The results indicated that the linear model exhibited the highest level of accuracy. Sujata Drall et al. [56] worked with the UCI-provided CKD dataset containing 400 instances and 25 attributes. First, the data was pre-processed, then missing data was identified and replaced with zero, and the dataset was transformed before model application. After pre-processing, the authors employed an attribute-selection algorithm and identified the five most significant features, followed by the classification algorithms Naive Bayes and KNN. Among the obtained results, KNN was the most accurate. Furthermore, a research study [57] employed classifiers such as extra-trees (ET), AdaBoost, KNN, GBoost, XGB, DT, Gaussian Naive Bayes (NB), and RF. Among them, KNN and Extra Trees classifiers (ET) showed the best performance with accuracies of 99% and 98%, respectively. The highest precision of 99.11% was achieved using ET and KNN.
In addition, ANN-based regression analysis for managing sparse medical datasets was proposed in [58]. To improve upon the pre-existing radial basis function (RBF) input-doubling technique, they incorporated new variables into the output signal calculation algorithm. Similarly, in [59], a new input doubling method based on the classical iterative RBF neural network was designed. The highest accuracy of the proposed method was validated by experimenting with a small medical dataset, using Mean Absolute Error and Root Mean Squared Error. In [60], an innovative method for data augmentation with enhanced disease categorization was implemented that was based on generative adversarial networks (GAN). Experiments were conducted on the NIH chest X-ray image dataset, and the test accuracy of CNN model was 0.03%. However, the online GAN-augmented CNN model showed improved performance, achieving a test accuracy of 65.3%. In [61], a methodology based on supervised learning was presented, focusing on developing efficient models for predicting the risk of chronic kidney disease (CKD) occurrence. The study mainly concentrated on probabilistic, tree-based, and ensemble learning-based models. Several algorithms were evaluated, including SVM, Logistic Regression (LR), Stochastic Gradient Descent (SGD), Artificial Neural Network (ANN), and k-NN.
### Breast Cancer Detection
Breast cancer is the leading cancer in females worldwide, caused by abnormal growth of cells in the breast. Various techniques, including breast screening or mammography, have been introduced for accurate diagnosis. Mammography uses X-rays to examine breast tissue, but early detection of small cancer cells remains challenging. Machine learning, deep learning, and bio-inspired computing techniques have been applied in medical prognoses, but none has consistently provided accurate results. Mammography requires doctors to analyze a large volume of imaging data, reducing accuracy and leading to time-consuming procedures with potential for misdiagnosis. As medical research advances, new systems are being developed for improved breast cancer detection. In Table 2, A denotes Accuracy, P denotes Precision, SP denotes Specificity, and SE denotes Sensitivity. Table 2 summarizes the performance of some ML models in breast cancer classification.
## 4 Challenges & Future Directions
ML-based applications have been widely employed in illness detection; nevertheless, the implementation of these applications in healthcare as practical tools presents several problems for researchers and practitioners. Even though numerous hospitals and healthcare institutions have collected extensive patient data, the availability of real-world data for worldwide research purposes is limited due to the constraints imposed by data privacy regulations. Often, clinical data is subject to noise or missing values, resulting in a significant time investment required to render such data trainable. The problem of adversarial attack is a significant challenge within the context of illness datasets. The utilization of machine learning models in the development of illness diagnosis models carries the potential for significant harm in the event of misclassification pertaining to a specific disease. For example, the misdiagnosis of a patient with stomach cancer as a non-cancer patient can have significant consequences. One of the primary issues associated with the machine learning (ML) model pertains to its tendency to frequently misidentify a region as diseased, hence leading to erroneous outcomes. The majority of machine learning models, including logistic regression (LR), exhibited high levels of performance when trained on labelled data. Nevertheless, the performance of comparable algorithms experienced a notable decrease when exposed to the unlabeled data. However, it should be noted that certain widely-used algorithms, such as K-means clustering, SVM, and K-Nearest Neighbors (KNN), may experience a decline in performance when applied to multidimensional data.
The issues discussed in the preceding part may provide valuable insights for future scholars and practitioners, guiding their future endeavors. The utilization of generative adversarial networks (GANs) has gained significant prominence within the realm of deep learning. By employing this methodology, it becomes feasible to produce artificial data that bears a striking resemblance to authentic data. Hence, the utilization of GANs could potentially serve as a viable solution for addressing challenges related to limited availability of data. The progression of contemporary technology has facilitated the acquisition of data with high resolutions and multiple attributes. Although the conventional ML approach may not yield satisfactory results when applied to such high-dimensional data, employing a mix of many ML models could prove to be a viable solution for effectively managing data with many dimensions.
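A minimal sketch of GAN-based synthetic data generation for a tabular health dataset is given below; the architecture, dimensions, and training length are illustrative assumptions rather than a recommended configuration.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for a small tabular health dataset.
n_features, latent_dim, batch_size = 20, 16, 64

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features))
D = nn.Sequential(nn.Linear(n_features, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_data = torch.randn(1000, n_features)          # stand-in for real patient records

for step in range(200):
    idx = torch.randint(0, real_data.size(0), (batch_size,))
    real = real_data[idx]

    # Discriminator step: distinguish real from generated samples.
    fake = G(torch.randn(batch_size, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(batch_size, 1)) + \
             bce(D(fake), torch.zeros(batch_size, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the discriminator labels as real.
    g_loss = bce(D(G(torch.randn(batch_size, latent_dim))), torch.ones(batch_size, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Synthetic records that could be appended to a scarce real dataset.
synthetic = G(torch.randn(500, latent_dim)).detach()
```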
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Ref. & Dataset & Models & A (\%) & P (\%) & SP (\%) & SE (\%) \\ \hline
[62] & WBC & SVM & - & - & 92.68\% & 94.44\% \\ \cline{3-6} & & LR & - & - & 90.48\% & 94.37\% \\ \cline{3-6} & & DT & - & - & 92.31\% & 91.89\% \\ \cline{3-6} & & RF & - & - & 94.59\% & 90.79\% \\ \cline{3-6} & & DNN & - & - & 91.11\% & 98.53\% \\ \hline
[63] & WBC & NB & 93\% & 90\% & - & 90\% \\ \cline{3-6} & & LR & 97\% & 100\% & - & 92\% \\ \hline
[64] & WBC & SVM & 97.14\% & 95.65\% & 92.3\% & 100\% \\ \cline{3-6} & & KNN & 97.14\% & 97.82\% & 95.83\% & 97.82\% \\ \cline{3-6} & & RF & 95.71\% & 97.77\% & 95.83\% & 95.68\% \\ \cline{3-6} & & LR & 95.71\% & 97.82\% & 95.65\% & 95.74\% \\ \hline
[65] & WDBC (569) & KNN & 96\% & 93\% (B), 100\% (M) & - & 100\%, 89\% \\ \cline{3-6} & & SVM & 95\% & 97\%, 92\% & - & 94\%, 96\% \\ \cline{3-6} & & DT & 97\% & 97\%, 98\% & - & 99\%, 96\% \\ \cline{3-6} & & NB & 90\% & 92\%, 88\% & - & 97\%, 89\% \\ \cline{3-6} & & LR & 96\% & 97\%, 96\% & - & 97\%, 96\% \\ \hline
[66] & WBC (699) & MLP & 95.44\% & 95.4\% & - & 95.4\% \\ \cline{3-6} & & Voted Perceptron & 90.98\% & 89.9\% & - & 88.2\% \\ \hline
[67] & WBC (699) & KNN & 97.51\% & - & - & - \\ \cline{3-6} & & NB & 96.19\% & - & - & - \\ \hline \end{tabular}
\end{table}
Table 2: State-of-the-art approaches for applying ML in Breast Cancer Classification
## 5 Conclusion
Machine learning has the potential to bring about numerous technological revolutions in the healthcare industry. It can enhance diagnostic accuracy, facilitate the discovery of trends and patterns in patient data, streamline administrative processes, and make possible patient-specific treatment plans. Both supervised and unsupervised learning have their advantages and disadvantages in the medical field. The task at hand, the amount of available data, and the resources at one's disposal will all dictate the style of learning employed. Machine learning will become increasingly important in healthcare as data volumes increase. Further investigation into the constraints discussed in the paper's final two sections would be very welcome. Future MLBDD research could also center on issues such as optimizing big data sets that include numerical, categorical, and image data, as well as multiclass classification with highly imbalanced data and largely missing data, and the explanation and interpretation of multiclass data classification utilizing XAI.
|
2307.15666 | Classifying core collapse supernova remnants by their morphology as
shaped by the last exploding jets | Under the assumption that jets explode all core collapse supernovae (CCSNe) I
classify 14 CCSN remnants (CCSNRs) into five groups according to their
morphology as shaped by jets, and attribute the classes to the specific angular
momentum of the pre-collapse core. Point-symmetry (1 CCSNR): According to the
jittering jets explosion mechanism (JJEM) when the pre-collapse core rotates
very slowly the newly born neutron star (NS) launches tens of jet-pairs in all
directions. The last several jet-pairs might leave an imprint of several pairs
of ears, i.e., a point-symmetric morphology. One pair of ears (8 CCSNRs): More
rapidly rotating cores might force the last pair of jets to be long-lived and
shape one pair of jet-inflated ears that dominate the morphology. S-shaped (1
CCSNR): The accretion disk might precess, leading to an S-shaped morphology.
Barrel-shaped (3 CCSNRs): Even more rapidly rotating pre-collapse cores might
result in a final energetic pair of jets that clear the region along the axis
of the pre-collapse core rotation and form a barrel-shaped morphology.
Elongated (1 CCSNR): Very rapidly rotating pre-collapse core force all jets to
be along the same axis such that the jets are inefficient in expelling mass
from the equatorial plane and the long-lasting accretion process turns the NS
into a black hole (BH). The two new results of this study are the
classification of CCSNRs into five classes based on jet-shaped morphological
features, and the attribution of the morphological classes mainly to the
pre-collapse core rotation in the frame of the JJEM. | Noam Soker | 2023-07-28T16:56:11Z | http://arxiv.org/abs/2307.15666v3 | Classifying core collapse supernova remnants by their morphology as shaped by the last exploding jets
###### Abstract
Under the assumption that jets explode all core collapse supernovae (CCSNe) I classify 14 CCSN remnants (CCSNRs) into five groups according to their morphology as shaped by jets, and attribute the classes to the specific angular momentum of the pre-collapse core. _Point-symmetry_ (1 CCSNR): According to the jittering jets explosion mechanism (JJEM) when the pre-collapse core rotates very slowly the newly born neutron star (NS) launches tens of jet-pairs in all directions. The last several jet-pairs might leave an imprint of several pairs of 'ears', i.e., a point-symmetric morphology. _One pair of ears_ (8 CCSNRs): More rapidly rotating cores might force the last pair of jets to be long-lived and shape one pair of jet-inflated ears that dominate the morphology. _S-shaped_ (1 CCSNR): The accretion disk might precess, leading to an S-shaped morphology. _Barrel-shaped_ (3 CCSNRs): Even more rapidly rotating pre-collapse cores might result in a final energetic pair of jets that clear the region along the axis of the pre-collapse core rotation and form a barrel-shaped morphology. _Elongated_ (1 CCSNR): Very rapidly rotating pre-collapse core force all jets to be along the same axis such that the jets are inefficient in expelling mass from the equatorial plane and the long-lasting accretion process turns the NS into a black hole (BH). The two new results of this study are the classification of CCSNRs into five classes based on jet-shaped morphological features, and the attribution of the morphological classes mainly to the pre-collapse core rotation in the frame of the JJEM.
stars: massive - stars: neutron - black holes - supernovae: general - stars: jets - ISM: supernova remnants
## 1 Introduction
There is no consensus on the explosion mechanism of core collapse supernovae (CCSNe). There are two competing theoretical explosion mechanisms that are based on the gravitational energy that the formation process of the newly born neutron star (NS) or black hole (BH) releases as the core of the CCSN progenitor collapses. These mechanisms are the delayed neutrino explosion mechanism (Bethe & Wilson, 1985, followed by hundreds of studies since then, e.g., Heger et al., 2003; Janka, 2012; Nordhaus et al., 2012; Muller et al., 2019; Burrows & Vartanyan, 2021; Fujibayashi et al., 2021; Fryer, Olejak, & Belczynski, 2022; Boccioli et al., 2022; Nakamura, Takiwaki, & Kotake, 2022; Olejak et al., 2022), and the jittering jets explosion mechanism (JJEM; Soker, 2010, with limited number of studies that followed Papish & Soker, 2011; Gilkis & Soker, 2015; Quataert et al., 2019; Soker, 2020; Shishkin & Soker, 2021; Antoni & Quataert, 2022; Soker, 2022a; Antoni & Quataert, 2023; Soker, 2023).
According to the JJEM, intermittent accretion disks (or belts; e.g., Schreier & Soker, 2016) with stochastically varying angular momentum axes launch pairs of jets that explode the star. Pre-collapse stochastic core convection motion (e.g., Soker, 2010; Papish & Soker, 2014; Gilkis & Soker, 2015; Soker, 2019; Shishkin & Soker, 2022; Soker, 2022a,b; in some cases envelope convection motion can supply these seed perturbations, e.g., Quataert et al., 2019; Antoni & Quataert, 2022, 2023) serves as the seed angular momentum perturbations. Instabilities between the newly born NS and the stalled shock at \(\simeq 100\ \mathrm{km}\) from the NS amplify these seed perturbations to sufficiently large specific angular momentum fluctuations as to form the intermittent accretion disks (e.g., Shishkin & Soker, 2021). In the case of core rotation, the stochastic angular momentum variations are around the angular momentum axis of the pre-collapse core (e.g., Soker, 2023).
There are some fundamental differences between the JJEM and many papers that study jet-driven explosions that operate only for rapidly rotating pre-collapse cores and therefore the jets that the newly born NS or BH launch have a fixed axis (e.g., Khokhlov et al., 1999; Aloy et al., 2000; MacFadyen, Woosley, & Heger, 2001; Maeda et al., 2012; Lopez-Camara et al., 2013; Bromberg & Tchekhovskoy, 2016; Nishimura et al., 2017; Wang, Wang, & Dai, 2019; Grimmett et al., 2021; Perley et al., 2021; Gottlieb et al., 2022; Obergaulinger & Reichert, 2023; Urrutia, De Colle, & Lopez-Camara, 2023). These differences are as follows (e.g., Soker, 2022c). (1) As explained above, the JJEM operates even when the pre-collapse core does not rotate. (2) The JJEM asserts that jets explode most, and possibly all, CCSNe. (3) This implies that there are no failed CCSNe in the frame of the JJEM. All massive stars explode, even when a BH is formed. (4) The JJEM operates in a jet negative feedback mechanism. Namely, when the jets manage to explode the star, accretion stops (with some delay time). This accounts for explosion energies that are several times the binding energy of the ejected mass.
There might be \(\approx\mathrm{few}-30\) jet-launching episodes during the entire explosion process with the following properties (Papish & Soker, 2014a). The jets' launching velocities are \(\simeq 10^{5}\mathrm{\ km\ s^{-1}}\) (neutrino observations limit the jets in most cases to be non-relativistic, e.g. Guetta et al., 2020). The explosion time might be \(\simeq 1-10\mathrm{\ s}\), where each individual jet-launching episode lasts for \(\simeq 0.01-0.1\mathrm{\ sec}\), except probably the last jet-launching episode, which might in some cases be much longer, as I propose in this study. The two jets in each jet-launching episode carry a mass of \(\approx 10^{-3}M_{\odot}\). During the explosion process the newly born NS accretes a mass of \(\approx 0.1M_{\odot}\) through intermittent accretion disks, i.e., each accretion disk of an episode has a mass of \(\approx 10^{-2}M_{\odot}\). These properties can vary a lot from one CCSN to another because they depend on the convection motion in the pre-collapse core, its angular momentum, and the binding energy of the ejecta.
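As a rough order-of-magnitude check based only on the numbers quoted above (an illustrative estimate, not a result taken from the cited papers), the kinetic energy carried by one pair of jets is
\[
E_{\rm 2j}\simeq\frac{1}{2}\,m_{\rm 2j}\,v_{\rm j}^{2}\approx\frac{1}{2}\,\left(10^{-3}M_{\odot}\right)\left(10^{5}\ {\rm km\ s^{-1}}\right)^{2}\approx 10^{50}\ {\rm erg},
\]
so that \(\approx\mathrm{few}-30\) launching episodes sum to \(\sim{\rm few}\times 10^{50}-3\times 10^{51}\ {\rm erg}\), of the order of typical CCSN explosion energies.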
As far as the basic outcomes of the explosions are concerned, e.g., nucleosynthesis and lightcurves, the JJEM is similar to the neutrino-driven mechanism. The JJEM also includes heating by neutrinos as a boosting process (Soker, 2022b). The differences include the morphology of the ejecta and the fact that the JJEM can also explain very energetic CCSNe. This study deals with the morphology that the late jets imprint on the ejecta. Early jets are choked inside the core, deposit their energy in the core, and explode it. Instabilities in the JJEM develop similarly, but not identically, to those in the neutrino-driven explosion mechanism (for the latter see, e.g., Wongwathanarat, Muller, & Janka, 2015; Wongwathanarat et al., 2017; Burrows & Vartanyan, 2021; Vartanyan, Coleman, & Burrows, 2022). The jets are expected to introduce a point-symmetrical morphological component to the instabilities and mixing of isotopes. By point-symmetry I refer to a structure where to each structural feature there is a counterpart on the other side of the center. Because of the highly-non-spherical explosion process the counter structural feature can have a different small-scale structure, can have a different brightness, and be at a different distance from the center. The best example is the supernova remnant (SNR) 0540-69.3 that I study in section 2.1.2 and which possesses point-symmetry in its inner regions (Soker, 2022a).
In this study, however, I focus on late jets, namely, jets that the newly born NS or BH launch after the earlier jets exploded the core. I examine the morphological features that such jets imprint on the outer regions of the ejecta as observed in CCSN remnants (CCSNRs). In section 2 I classify 14 SNRs into five classes. In section 3 I suggest that the main, but not sole, property that determines the class of a SNR is the pre-collapse core angular momentum. This proposed explanation, and actually this entire paper, is largely motivated by my recent proposed explanation for the NS to BH mass gap in the frame of the JJEM (Soker, 2023). I summarize this study in section 4.
## 2 Classification of SNRs
I classify 14 CCSNRs into five classes. Many other CCSNR morphologies are too 'messy' and do not allow classification into one of these classes, e.g., VRO 42.05.01 (G166.0+4.3; for an image see, e.g., Xiao et al., 2022). I describe each class in a separate subsection and in the same order as the classes appear in Table 1. The first row of Table 1 lists the five classes and the lower rows list the CCSNRs in each class. The second row refers to my suggestion as to the main (but not sole) effect that determines the morphological properties of the last jets to be launched in the explosion process according to the JJEM (section 3). I assume that the main shaping of the morphology is by jets and not by other processes, such as the magnetic field of the interstellar medium (e.g., Wu & Zhang, 2019; Velazquez et al., 2023). The variable \(j_{\mathrm{p}}\) is the pre-collapse average specific angular momentum of the core material that the newly born NS accretes as it launches jets; 'p' stands for pre-collapse rotation, which has a fixed direction. The variable \(j_{\mathrm{f}}\) is the amplitude of the fluctuations in the specific angular momentum of the material that the NS accretes due to the velocity fluctuations of the pre-collapse convective zone. The amplitude is measured after instabilities amplify the perturbations. The direction of this angular momentum component varies stochastically; 'f' stands for fluctuating directions.
### Point-symmetry
Point-symmetric morphological features in CCSNRs are clear predictions of the JJEM. Therefore, the two CCSNRs that I study in this section strongly support the JJEM.
#### 2.1.1 The Vela SNR
The best example of a SNR that contains point-symmetric morphological features is the SNR Vela that I present in Fig. 1. This is a ROSAT X-ray image (Aschenbach et al. 1995) that is based on figure 1 from Sapienza et al. (2021). The white AG-line is from their figure and was already drawn by Garcia et al. (2017). The labelling of the clumps is also from Sapienza et al. (2021), where clumps A-F were identified by Aschenbach et al. (1995). The high Si abundance of clump A (Katsuda & Tsunemi 2006) and of clumps G and K (Garcia et al. 2017) indicates that, as in Cassiopeia A (section 2.2), these clumps originate from deep inside the core of the progenitor. Sapienza et al. (2021) convincingly argue that clumps K and G are indeed counter to clump A, and represent jet-like structure from the explosion process. Katsuda & Tsunemi (2005) analyze clump D and find it to be overabundant in ONeMg, which suggests that its origin is from near the center of the remnant, as also suggested by Sankrit, Blair, & Raymond (2003). Grichener & Soker (2017) identify ears D and E as the only ears in SNR Vela, and estimate that the combined energy of the jets that inflated ears D and E is only \(\approx 1\%\) of the Vela explosion energy. This is the lowest value among the eight SNRs with ears that they analyze.
I added to Fig. 1 the thick-yellow DE-line and the FJ-line, each connecting two previously identified clumps. I here claim that each of the clump pairs AG, DE, and FJ was inflated by one late jet-launching episode during the explosion of Vela. Furthermore, I speculate that the jet that ejected clump B had a counter jet. However, because of the lower-density ejecta in the counter-jet-B direction (south-west), this clump moved to larger distances than any other clump, and it is below the detection limit. I mark this assumption by a red-orange arrow on the right edge of the figure, and connect it with a dashed-black line to clump B. In the case of clump I, which I take also to have been formed by a jet, I suggest that the counter-clump(s) is immersed in the large white area in the north. I mark it with a black 'X'. Indeed, Miceli, Bocchino, & Reale (2008) identified several shrapnels in that region. Miceli, Bocchino, & Reale (2008) find that some of these shrapnels have enhanced Ne and Mg abundances, implying they are ejecta from inner stellar zones. In the JJEM the different compositions of different clumps (shrapnels) suggest that the jets interacted
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Point- & One pair of ears & S-shaped & Barrel-shaped & Elongated \\ Symmetry & & & & \\ \hline \(j_{\rm p}\lesssim 0.01j_{\rm f}\) & \(0.01j_{\rm f}\lesssim j_{\rm p}\lesssim 0.1j_{\rm f}\) & \(0.01j_{\rm f}\lesssim j_{\rm p}\lesssim 0.1j_{\rm f}\) & \(j_{\rm p}\approx 0.1j_{\rm f}-0.3j_{\rm f}\) & \(j_{\rm p}\gtrsim j_{\rm f}\) \\ \hline Vela\({}^{[1]}\) & 0540-69.3\({}^{[2]}\); Cassiopeia A\({}^{[3]}\); 3C58\({}^{[3]}\); & W44\({}^{[6]}\) & RCW 103\({}^{[7]}\) & W50\({}^{[10]}\) \\ (0540-69.3)\({}^{\#}\) & S147\({}^{[3]}\); G290.1-0.8\({}^{[4]}\); & & G292.0+1.8\({}^{[8]}\) & \\ & N49B\({}^{[5]}\); Puppis A\({}^{[5]}\); Crab Nebula\({}^{[5]}\) & & G309.2-00.6\({}^{[9]}\) & \\ \hline \end{tabular}
\end{table}
Table 1: The classification of CCSNRs into five classes according to the last jets to be launched in the explosion. The second row lists the relation between the pre-collapse average specific angular momentum of the core \(j_{\rm p}\), and the magnitude of the stochastic fluctuations in the specific angular momentum of the mass that the newly born NS or BH accretes, \(j_{\rm f}\) (see section 3). Comments: # The inner structure of SNR 0540-69.3 is point symmetric. However, in this study I focus on the last jets to be launched, and therefore I include this SNR in the one-pair class (Fig. 2). Small numbers inside square brackets are the figures where I present the CCSNRs.
Figure 1: ROSAT X-ray image of SNR Vela (Aschenbach et al. 1995), based on figure 1 from Sapienza et al. (2021). The white _AG-line_ and the labelling of the clumps are from their figure (clumps A-F are from Aschenbach et al. 1995). I added the thick-yellow DE-line and the FJ-line. I also added two dashed-black lines that connect clumps to my assumed counter jets.
with different layers of the core. The final composition depends on the exact time the jet was launched and how deep it penetrated through inner layers of the core.
Overall, in the frame of the JJEM I identify five late jet-launching episodes. There might be more, but with clumps that are projected onto the main ejecta of the SNR and therefore are not identified as fast-moving clumps. If the energy of these jets is similar to the energy of the jets that inflated ears D and E, as Grichener & Soker (2017) estimated, then the total energy of the late jets is \(\approx 5\%\) of the explosion energy of Vela. This energy is close to the energy of late jets of CCSNRs that have only one late jet-launching episode (section 2.2).
#### 2.1.2 SNR 0540-69.3
Another SNR with a point-symmetric morphological component is SNR 0540-69.3. I analyzed its point-symmetric morphology (Soker, 2022) as revealed by the detailed observations of Larsson et al. (2021). I present this SNR in Fig. 2. Five panels are VLT/MUSE velocity maps that Larsson et al. (2021) present and which reveal the point-symmetric structure in that plane. This plane is along the line of sight and through the center of the SNR, more or less along the yellow double-headed arrow in the lower-middle panel of Fig. 2. This panel is an HST observation from Morse et al. (2006).
There are four pairs of opposite clumps in the velocity maps that compose the point-symmetric structure of SNR 0540-69.3. Unlike the case of SNR Vela, where the clumps are at the outskirts of the SNR, in SNR 0540-69.3 the point-symmetric clumps appear in the center of the ejecta (as is evident from their relatively low expansion velocity). I argued in Soker (2022) that two to four pairs of jittering jets shaped the inner ejecta in this plane. Here I add another possible pair of clumps, as the P5 lines in the lower panels indicate. The clump Hf appears in both the [Fe II] map (lower-left panel) and the H\(\alpha\) map (lower-right panel) at about the same place. The much fainter counter-clump Hn is not exactly at the same place in the two velocity maps. So I draw two lines: the dashed-orange line represents the pair in the [Fe II] map and the dotted-orange line represents the pair in the H\(\alpha\) velocity map. Overall, I here identify five pairs that form the point-symmetric structure in the velocity maps.
The lower-middle panel presents a hollowed central region (a faint strip) that connects two ears, the south-west one being much longer. The yellow double-headed arrow in the lower-middle panel is along this hollowed region. As the yellow double-headed arrow is more or less the direction of the slit that Larsson et al. (2021) use for the velocity maps, the pair of ears, which is part of the point-symmetric structure, is in the same plane as the five pairs of clumps that the velocity maps reveal. In Soker (2022) I pointed out that the similarity of the point-symmetric structure of SNR 0540-69.3 with some planetary nebulae, e.g., He2-138 (PN G320.1-09.6; image in Sahai & Trauger, 1998) and M1-37 (PN G002.6-03.4; image in Sahai, 2000), strongly suggests shaping by jets.
The SNR 0540-69.3 can be classified as point-symmetric with a hollowed-cylinder (barrel-like) structure (more details in Soker, 2022). Without the detailed analysis by Larsson et al. (2021), and based only on the HST observations by Morse et al. (2006), this SNR would have been classified as having one pair of ears. However, while in the SNR Vela the point-symmetric structure is in the outer parts of the ejecta, the velocity maps of SNR 0540-69.3 reveal a point-symmetric structure in the inner parts of the ejecta. It seems that this inner structure was shaped by the jets that exploded the star. Namely, in addition to instabilities in the explosion process (section 1), jets also shape the inner ejecta. The jets can play a role in mixing elements in the ejecta of core collapse supernovae.
However, as far as late jets are concerned, I classify SNR 0540-69.3 in the one-pair of ears morphological class.
### One pair of ears
CCSNRs with one pair of ears that dominates their morphology form the largest class. An ear is defined as a protrusion from the main ejecta (nebula) that is fainter than the general nebula, and has a cross section that monotonically decreases from its base on the main nebula to its tip. In most cases the two ears in a pair are not equal to each other in size and intensity, nor in their distance from the center. The asymmetry is another manifestation of the asymmetrical explosion process of CCSNe that involves instabilities as well as large-scale asymmetries. Another prominent manifestation of the asymmetrical explosion is the NS natal kick (which I do not study here).
Grichener & Soker (2017) and Bear, Grichener, & Soker (2017) studied many of these CCSNRs and estimated the extra energy of the jets that inflated the pair of bubbles. These studies find that the extra energy varies between different CCSNRs, from \(\simeq 1\%\) to \(\simeq 30\%\) of the total explosion energy. I here examine only the morphology. In Figs. 3 - 5 I present seven images, mostly from Grichener & Soker (2017), who marked the base and middle of the ears with double-headed arrows.
One of the best examples of the one-pair class is S147, which I also present in Fig. 3 (for a recent study of this SNR see, e.g., Ren et al., 2018). The two other SNRs in Fig. 3 and the one in Fig. 4 have one ear much larger than the other.
Fig. 5 presents three SNRs with ears that do not protrude much from the main ejecta (nebula).
### S-shaped morphology
This class includes only the SNR W44, which I present in Fig. 6, taken from the Chandra gallery with lines from Grichener & Soker (2017). The S-shaped morphology is most likely due to precession of the jets around a fixed axis. The two ears are symmetric neither with respect to the pulsar nor with respect to the main shell.
The morphology of W44 is of one pair of ears arranged in an S-shape. It could as well belong to the one-pair class. However, the very likely cause of an S-shape is jet precession. Namely, the accretion disk that launched the last jets precessed while launching them. This suggests, in the frame of the JJEM, a non-negligible pre-collapse core rotation, as I discuss in section 3.
### Barrel-shaped SNRs
A barrel-shaped morphology refers to a general axisymmetrical structure with a central region along the symmetry axis that is much fainter than the sides. The two ends on the symmetry axis are trimmed. Its hollowed structure appears in observations as two opposite bright arcs with a faint (hollowed) region between them. The best example of a barrel-shaped SNR is RCW 103, which I present in Fig. 7. I take this X-ray image (Rea et al., 2016) from Bear, Grichener, & Soker (2017), who proposed the shaping of RCW 103 by two jets at the final phase of the explosion.
Figure 2: Five panels of two-dimensional velocity maps of SNR 0540-69.3 based on figure 4 by Larsson et al. (2021). The velocities are along a slit that is more or less along the dashed yellow line in the lower-middle panel: \(v_{\rm slit}\) is the velocity along the slit (positive to the northeast), while \(v_{\rm z}\) is the velocity along the line of sight. The lower-middle panel is an HST image from Morse et al. (2006) to which I added the yellow double-headed arrow. The four dashed-red lines in the five panels that connect opposite clumps are from Soker (2022a), where more details can be found. Clumps A to F are marked by Larsson et al. (2021) and clumps Gn and Gs by Soker (2022a). I here added the dashed-orange and dotted-orange lines in the two lower velocity maps to indicate another pair, clump Hf and its counter clump Hn. The pulsar is at \(v_{\rm slit}=0\) in these panels.
They based the jet-shaping model on the morphological similarities of RCW 103 with several barrel-shaped planetary nebulae that are observed to be shaped by jets. The unequal structure of the two arcs, which are the projection of the barrel structure on the plane of the sky, can result from a density gradient in the interstellar medium (e.g., Lu et al. 2021) or from an asymmetrical explosion.
The case of SNR G292.0+1.8 is subtle as it shows both a barrel-shaped morphology and two opposite ears. In Fig. 8 I present an image from Bear, Grichener, & Soker (2017) where more details can be found. The visible images of H\(\alpha\) (upper-right panel) and [O III] (lower-left panel) show the barrel-shaped morphology. Bear, Grichener, & Soker (2017) indicate the symmetry axis of the barrel-shaped morphology by the double-headed pink line in the H\(\alpha\) image. The X-ray images, on the other hand, present two very small opposite ears that Bear, Grichener, & Soker (2017) mark and analyze. Because the two opposite arcs in the H\(\alpha\) image present a much more prominent barrel-shaped morphology than the two small ears, I classify it as a barrel-shaped SNR.
SNR G309.2-00.6, which I present in Fig. 9 with marks from Grichener & Soker (2017), also presents a complicated case. It has two prominent ears, as marked on the figure. However, in addition there is a hollowed zone along the symmetry axis (yellow line). The sides of the symmetry axis present two opposite arcs on the outskirts of the ejecta, which complicate the morphology. I classify it as a barrel-shaped SNR. No NS was found in this SNR, but its morphology and location in the Galaxy strongly suggest a CCSN origin (Gaensler et al. 1998). If, as I argue in section 3, the progenitor core was rapidly rotating, it might have collapsed to a BH (see also section 2.5).
Yu & Fang (2018) showed by hydrodynamical simulations that jets with a total energy of \(\simeq 10-15\%\) of the explosion energy can shape the morphology type of SNR G309.2-00.6.
The CCSNR G156.2+5.7 presents an interesting morphology. Its radio emission, with the polarization structure (magnetic fields), shows a clear barrel-shaped morphology, as the thorough observation and analysis by Xu et al. (2007) reveal. However, its H\(\alpha\) (e.g., Gerardy & Fesen
Figure 4: Radio continuum image at 1384 MHz of SNR G290.1\(-\)0.8 that morphologically belongs to SNRs in Fig. 3. From Reynoso et al. (2006) to which I added the identification of ears.
Figure 3: Images of three SNRs where one pair of ears dominate the outer morphology, and where at least one ear is large and prominent. Upper three images: The identification of the ears and the double-headed arrow marks of the base of an ear at the main ejecta and of the center of an ear are from Grichener & Soker (2017). The sources of the images are as follows. _Cassiopeia A:_ An X-ray image taken from the Chandra gallery (based on Hwang et al. 2004). _S147:_ An H\(\alpha\) image from Gvaramadze (2006) who reproduced an image from Drew et al. (2005). _3C58:_ ACIS/Chandra image from the Chandra Gallery based on Slane et al (2004); colors represent energy bands.
2007) and X-ray (e.g., Pannuti & Allen 2004) images do not possess a barrel-shaped morphology (see comparison of images by Xu et al. 2007). It is a relatively old CCSNR, a few tens of thousands of years old (Katsuda et al. 2016). Therefore, most likely the interaction with the interstellar medium played a major role in shaping its present morphology. For these reasons I do not classify it in this study.
### Elongated SNRs
The fifth class is that of an elongated morphology, to which only SNR W50 belongs. However, there are large uncertainties because of the shaping by the jets that its central binary system SS 433 launches and that are not related to the exploding jets. Specifically, the BH component of the binary system launches these jets. In Fig. 10 I present its LOFAR
Figure 5: Images of three SNRs with one pair of ears that do not protrude much from the main ejecta. Sources of marks in the two lower panels are from Grichener & Soker (2017). The sources of the images are as follows. _N49B:_ An X-ray image from the Chandra gallery based on Park et al. (2003). _Puppis A:_ The radio continuum emission at 1.4 GHz; published by Reynoso & Walsh (2015) and reproduced by Reynoso et al. (2017). _Crab Nebula:_ A composite image of X-ray (Blue; Seward et al. 2006), Optical (Red-Yellow; Hester 2008) and Infrared (Purple; NASA/JPL-Caltech/Univ).
Figure 6: A composite image of SNR W44 taken from the Chandra gallery with marks from Grichener & Soker (2017). The cyan color represents X-ray (based on Shelton et al 2004). The red, blue and green represent infrared emission (based on NASA/JPL-Caltech). This SNR has a prominent S-shaped morphology.
Figure 7: An X-ray image of RCW 103 in three energy bands (low=red, medium=green, highest=blue) combined with an optical image from the Digitized Sky Survey (image taken from the Chandra website based on Rea et al. 2016). The yellow arrows mark the original directions of the already dead jets as Bear, Grichener, & Soker (2017) proposed.
image that I take from Broderick et al. (2018) and its VLA radio continuum map from Dubner et al. (1998). I added to these two figures only what I identify as the boundaries between each ear and the main nebula, marked by 'kink' and 'discontinuity'. Note that in two places the LOFAR image reveals a kink between the surface of the main nebula and the surface of the western ear, while the VLA image also shows a discontinuity between the two surfaces. These images show that although the two ears of W50 are connected to the main nebula with small variations between the main nebula and the ears, there is still a clear boundary between the nebula and the ears.
Ohmura et al. (2021) argue that the continuous jets from SS 433 formed the entire W50 nebula. The shocked material of the jets and of the interstellar medium (ISM) into which the jets propagate, i.e., the cocoons, formed the main nebula (the central part). The fronts of the jets form the ears. In their scenario SS 433 has been launching the jets for the last \(\simeq 10^{5}\;\mathrm{yr}\). The problem I find with their model is that their morphology does not reproduce the clear boundaries between the main nebula and the two ears because the jets produce both the main nebula and the ears. Specifically, they do not reproduce the 'kinks' and the 'discontinuities' that I mark on Fig. 10. Goodall, Alouani-Bibi, & Blundell (2011), on the other hand, do consider the W50 main nebula to be an SNR. They conduct hydrodynamical simulations where they launch the jets of the BH in SS 433 into a spherical supernova remnant. They obtain clear ears with clear boundaries from the main nebula. The problem I find with the images that Goodall, Alouani-Bibi, & Blundell (2011) obtain is that the ears largely differ from the main nebula, much more than observed.
The hydrodynamical simulation results of Ohmura et al. (2021), that the ears are basically part of the main nebula, more than observed in W50, and of Goodall, Alouani-Bibi, & Blundell (2011), that the ears differ from the main nebula to a much larger degree than observed in W50, bring me to suggest an intermediate scenario. I take these results to imply that the ears were created during the jet-driven explosion process of W50 and were further shaped by the later jets that the system SS 433 has been launching.
Figure 8: Images of the CCSNR G292.0+1.8 in various wavelengths with marks from Bear, Grichener, & Soker (2017). In each image there is a line that connects the two opposite ears that Bear, Grichener, & Soker (2017) define and analyze. On the H\(\alpha\) image they also define the symmetry axis of the barrel-shaped morphology by the double-headed pink line. _Upper left panel:_ A composite X-ray image (Park et al. 2007) from the Chandra gallery where different lines represent different energy bands (for another X-ray image see Yang et al. 2014). _Upper right panel:_ Zero velocity H\(\alpha\) image taken from Ghavamian et al. (2005), which clearly reveals the barrel-shaped morphology. _Lower left panel:_ An optical ([O III]) image taken from Winkler & Long (2006) and reproduced by Ghavamian et al. (2012). _Lower right panel:_ A Chandra \(0.3-8.0\;\mathrm{keV}\) X-ray image based on Park et al. (2007) and reproduced by Ghavamian et al. (2012).
Figure 9: A radio image of SNR G309.2-00.6 from the site of the School of Physics, The university of Sydney (posted as production from Gaensler et al. 1998). Marks are from Grichener & Soker (2017). In the background is the emission nebula RCW 80.
In section 3 I discuss the theoretical motivation to introduce the elongated class of SNRs.
## 3 The possible role of core rotation
In the JJEM there are two sources of the angular momentum of the mass that the newly born NS accretes. This is true also in cases where the NS collapses to a BH. The first angular momentum source is the pre-collapse stochastic convection motion in the collapsing core that introduces angular momentum fluctuations with varying magnitudes and directions. The angular momentum fluctuations due to the core convective motion are amplified by instabilities in the zone between the newly born NS and the stalled shock at \(\simeq 100\,{\rm km}\) from the NS (section 1). The other angular momentum source is the pre-collapse core rotation. It introduces an angular momentum component with a fixed direction. Its magnitude slowly increases with time as material from outer layers of the core is accreted.
In Soker (2023) I built a toy model to study the effects of these two angular momentum components on the direction of the jets. I used that toy model to offer an explanation of the \(\simeq 2.5-5M_{\odot}\) mass gap between NSs and BHs in the frame of the JJEM. I assumed in that toy model that all specific angular momentum fluctuations of the random angular momentum component, after amplification by post-shock instabilities, have the same magnitude \(j_{\rm f}\) and have stochastically varying directions. I took the typical range of values to be \(j_{\rm f}\simeq 2\times 10^{16}\,{\rm cm}^{2}\,{\rm s}^{-1}-5\times 10^{16}\,{\rm cm}^{2}\,{\rm s}^{-1}\). The pre-collapse core rotation introduces a fixed-direction specific angular momentum component of magnitude \(j_{\rm p}\). I found with the above toy model that when the core is slowly rotating, \(j_{\rm p}\lesssim 0.5j_{\rm f}\), the jets are launched in all directions. According to the JJEM, in this case the jet feedback mechanism is efficient and the jets explode the core early on, leaving a NS remnant (e.g., Shishkin & Soker 2022). When the pre-collapse core is rapidly rotating with \(j_{\rm p}\gtrsim j_{\rm f}\) the NS does not launch jets in the equatorial plane of the pre-collapse rotating core (the plane perpendicular to \(\overrightarrow{j_{\rm p}}\)) and its vicinity. The jets do not expel mass efficiently from the equatorial plane and accretion proceeds to form a BH. The BH might launch relativistic jets. Such jets might lead to new processes in the supernova that do not occur when a NS is formed, e.g., neutrino emission as in choked gamma-ray bursts (as calculated by, e.g., Sahu & Zhang 2010; He et al. 2018; Fasano et al. 2021; Guetta et al. 2023).
The case with \(j_{\rm p}\gtrsim j_{\rm f}\), therefore, both maintains a more or less fixed-axis direction of the jets and leaves a BH remnant. The fixed-axis jets form an elongated structure. This is the theoretical motivation behind the morphological class of elongated nebulae (section 2.5), and for classifying W50, which has a BH in its central binary system, in this class. As discussed in section 2.5, in the case of W50 the jets that the binary system SS 433 has been launching further shaped the ears.
In Soker (2023) I studied only the mass gap between NSs and BHs. I did not study the different cases of \(j_{\rm p}\lesssim j_{\rm f}\) that leave a NS remnant. I now do that in relation to the first four classes in Table 1.
When the pre-collapse core rotation plays no role, namely \(j_{\rm p}\ll 0.1j_{\rm f}\), the jets fully jitter at all jet-launching phases. Here I crudely estimate this range as \(j_{\rm p}\lesssim 0.01j_{\rm f}\). The exact value should be determined in the future by highly-demanding three-dimensional hydrodynamical simulations. In these cases the end period of the mass
Figure 10: Upper panel: A LOFAR 140-MHz high-band continuum map of SNR W50 from Broderick et al. (2018). Colour scale runs from \(-40\) mJy/beam to \(80\) mJy/beam. Most marks are on the original image from Broderick et al. (2018). I added the marks of ‘kink’ for the projected boundaries between the nebula and the ears. Lower panel: The SNR W50 in radio continuum at 1465 MHz as observed with the VLA (from Dubner et al. 1998). I added the marks of ‘kink’ and ‘discontinuity’ in the projected boundaries between the main nebula and the ears.
accretion process onto the newly born NS can be composed of several short jet-launching episodes, each lasting \(\approx 0.01\) s, that leave a point-symmetric structure in the outer regions of the ejecta. This is the case of SNR Vela (Fig. 1; section 2.1).
When the pre-collapse core rotation is somewhat larger, it might act to increase the probability that the jets' axis is close to the angular momentum axis of the pre-collapsing core, i.e., along \(\overrightarrow{j_{\rm p}}\). This might cause the last jet-launching episode to be somewhat longer and to form one dominant pair of opposite ears. The last jet-launching episode lasts for a relatively long time because of the following consideration. An accretion disk without a fresh supply of material lives for about the viscous timescale of the disk. This can be tens to hundreds of times the orbital period of the material. During the explosion process in the JJEM, newly accreted matter has a different angular momentum direction than the existing disk and can destroy the disk. Namely, the freshly accreted material terminates the jets and starts a new jet-launching episode. The last accretion episode in the JJEM has no fresh supply of material. The accretion disk can live for the viscous time scale. For a NS of mass \(M_{\rm NS}=1.4M_{\odot}\) and an accretion disk at \(r=30\,{\rm km}\) the orbital period of the material is \(0.0024\,{\rm s}\). The viscous time scale might be \(\approx 0.1-1\) s. This is a relatively long time (as a regular jet-launching episode lasts for \(\approx 0.01-0.1\) s) during which the outer core expands, and the final material of these last jets shapes the ears in the expanding core and envelope. I therefore suggest that for the range of \(j_{\rm p}\approx 0.01j_{\rm f}-0.1j_{\rm f}\) (admittedly this range is a crude estimate), the last jets form a prominent pair of ears, e.g., the one-pair morphology. The final accretion disk might precess due to perturbations by accreted parcels of material, leading to an S-shaped morphology.
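These two numbers follow from a simple Keplerian estimate; a minimal numerical sketch, in which the "tens to hundreds of orbits" viscous multiplier is an assumed range, is:

```python
# Minimal sketch: Keplerian orbital period of disk material at r = 30 km
# around a 1.4 M_sun NS, and the viscous lifetime taken (as an assumption)
# to be tens to hundreds of orbital periods.
import math

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33      # solar mass [g]

M_ns = 1.4 * M_sun    # NS mass used in the text
r = 3.0e6             # 30 km in cm

P_orb = 2.0 * math.pi * math.sqrt(r**3 / (G * M_ns))
print(f"orbital period   ~ {P_orb:.4f} s")                          # ~0.0024 s
print(f"viscous lifetime ~ {10 * P_orb:.2f}-{400 * P_orb:.2f} s")   # brackets the quoted 0.1-1 s
```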
When the pre-collapse core angular momentum is larger, but not so large as to form a BH, the last jet-launching episode might be longer and more powerful. The jets can clear the central zone around the core angular momentum axis and form a barrel-like morphology. I crudely take this range to be \(j_{\rm p}\approx 0.1j_{\rm f}-0.3j_{\rm f}\).
These ranges are crude estimates within the frame of the toy model. The situation is more complicated as the specific angular momentum fluctuations do not have a constant magnitude as the toy model assumes.
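As a compact summary of these crude ranges, the schematic helper below (the numerical boundaries are the rough estimates of Table 1, not sharp values) maps a given ratio \(j_{\rm p}/j_{\rm f}\) to the expected morphological class.

```python
# Schematic encoding of the crude j_p/j_f ranges of Table 1.
def morphology_class(jp_over_jf: float) -> str:
    """Return the CCSNR morphology class expected for a given ratio j_p/j_f."""
    if jp_over_jf < 0.01:
        return "point symmetry (fully jittering jets, e.g. Vela)"
    if jp_over_jf < 0.1:
        return "one pair of ears, or S-shaped if the last disk precesses"
    if jp_over_jf < 0.3:
        return "barrel-shaped (e.g. RCW 103)"
    if jp_over_jf < 1.0:
        return "not classified in Table 1"
    return "elongated (fixed-axis jets, BH remnant, e.g. W50)"

for ratio in (0.003, 0.05, 0.2, 2.0):
    print(f"j_p/j_f = {ratio:5.3f}: {morphology_class(ratio)}")
```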
I note that the final angular momentum of the NS does not relate monotonically to the pre-collapse core rotation. The reason is that in the JJEM the jets of each jet-launching episode carry most of the angular momentum of the accretion disk that launches the jets. In the case of a rapid pre-collapse rotation there might be one long-lived jet-launching episode with a fixed jets' axis. However, in that case the magnetic fields in the NS and in the accretion disk might very efficiently slow down the NS by coupling the NS to outer disk radii where the angular velocity is much slower. Furthermore, after accretion ceases, rapidly rotating NSs substantially slow down by blowing winds (e.g., Prasanna et al., 2022) in the propeller mechanism (e.g., Ott et al., 2006). Therefore, in most, but not all, cases the JJEM expects a spin period of tens of milliseconds shortly after the explosion (e.g., Gofman & Soker, 2020).
The main point to take from this section is that in the frame of the JJEM the pre-collapse core rotation, or more specifically the ratio \(j_{\rm p}/j_{\rm f}\), is the main parameter that determines the outer large-scale morphology of CCSNRs. Other factors are the non-linear instabilities that occur during the explosion, the possible presence of a binary companion, circumstellar material into which the ejecta expand (e.g., Velazquez et al., 2023), the energy of the explosion and the ejecta mass, and the interstellar medium (in particular with a strong magnetic field, e.g., Wu & Zhang, 2019; Velazquez et al., 2023).
## 4 Summary
I classified 14 CCSNRs into five classes according to morphological features that late jets in the explosion process might form (Table 1). According to the JJEM, after the early jets explode the core, the late jets that interact with the already expanding star might leave imprints on the outer and inner regions of the ejecta (e.g., Grichener & Soker, 2017; Bear, Grichener, & Soker, 2017).
nary system that launches jets (Fig. 10). I argued in section 2.5 that both the exploding jets and the jets that the BH in the binary system launches have shaped the ears of W50. This class occurs when \(j_{\rm p}\gtrsim j_{\rm f}\) and the jets maintain a more or less constant axis. The jets are inefficient in expelling mass from the equatorial plane and the long-lasting accretion process turns the NS into a BH.
Although I take the ratio \(j_{\rm p}/j_{\rm f}\) to be the main factor that determines the CCSNR morphology, it is definitely not the only one. Other processes might occur, in particular large-scale instabilities during the explosion process. Then there are the possible effects of a binary companion, of circumstellar material into which the ejecta expand, and of the interstellar medium. Owing to these, it is expected that opposite structural features, like opposite ears and arcs, will not be equal to each other.
Although the morphologies of all 14 CCSNRs have been analyzed in the past (see figure captions), this study reports two new results. The first is the classification of CCSNRs into five classes based on jet-shaped morphological features. The second new result is the attribution of the morphological classes to the degree of pre-collapse core rotation as the main (but not sole) factor that determines the morphology class of a CCSNR.
I note that by the same physics by which the jets shape CCSNRs, they can account for non-zero polarization in CCSNe, e.g., as Nagao et al. (2023) reported recently. They find that the explosion asphericity is proportional to the explosion energy and note that jets might account for that. I add here that the JJEM can naturally account for this finding. I take their results to support the JJEM.
Overall, this study adds some support to the argument that jets, in particular jittering jets (the JJEM), explode most, or even all, CCSNe. The complicated nature of the explosion process and the highly-demanding numerical simulations that are required to simulate the JJEM, force progress to be made in small steps.
## Acknowledgments
I thank Aldana Grichener and Dima Shishkin for helpful discussions and comments. I thank an anonymous referee for helpful comments. This research was supported by a grant from the Israel Science Foundation (769/20).
|
2303.07795 | Synthesizing and multiplexing autonomous quantum coherences | Quantum coherence is a crucial prerequisite for quantum technologies.
Therefore, the robust generation, as autonomous as possible, of quantum
coherence remains the essential problem for developing this field. We consider
a method of synthesizing and multiplexing quantum coherence from spin systems
without any direct drives only coupled to bosonic baths. The previous studies
in this field have demonstrated that a back-action of the bath to the spin
subsystem is important to generate it, however, it simultaneously gives
significant limits to the generated coherence. We propose a viable approach
with the bosonic bath that allows overcoming these limits by avoiding the
destructive effect of the back-action processes. Using this approach, we
suggest an advanced synthesis of the quantum coherence non-perturbatively in
the spin-boson coupling parameters of multiple bosonic baths to increase and
multiplex it for upcoming proof-of-principle experiments. | Artur Slobodeniuk, Tomáš Novotný, Radim Filip | 2023-03-14T11:11:18Z | http://arxiv.org/abs/2303.07795v3 | # Synthesizing and multiplexing autonomous quantum coherences
###### Abstract
Quantum coherence is a crucial prerequisite for quantum technologies. Therefore, the robust generation, as autonomous as possible, of quantum coherence remains an essential problem for developing this field. We consider a method of synthesizing and multiplexing quantum coherence from spin systems without any direct drives, coupled only to bosonic baths. Previous studies in this field have demonstrated that a back-action of the bath on the spin subsystem is important to generate it; however, it simultaneously places significant limits on the generated coherence. We propose a viable approach with the bosonic bath that allows overcoming these limits by avoiding the destructive effect of the back-action processes. Using this approach, we suggest an advanced synthesis of the quantum coherence, non-perturbative in the spin-boson coupling parameters of multiple bosonic baths, to increase and multiplex it for upcoming proof-of-principle experiments.
## 1 Introduction
Quantum coherence [1] is a significant and diverse subject of modern quantum physics, phase estimation and thermodynamics [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] and a crucial resource of contemporary quantum technology, specifically, quantum metrology [18, 19], quantum communication [20, 21], quantum simulators [22, 23, 24], energy harvesting [25, 26], quantum thermodynamics [27, 28, 29], and quantum computing [30, 31]. A classical, external and strong coherent drive typically generates such quantum coherence as a superposition of energy states. Recently, it has been proposed that there might be a more autonomous alternative: quantum coherence arising from the coupling between a basic system, such as a two-level system, and a thermal bath [32]. That proposal used a composite interaction between the two-level system and the thermal bath; in one direction, the incoherent energy of the system pushed the bath coherently, while simultaneously the system could receive that coherence back from the bath. Both interactions must be present to obtain quantum coherence in a single two-level system without any external drive, just from a coherent interaction with a bath. It is therefore conceptually different from the coherence of a pair of two-level systems coupled to thermal baths [33]. It triggered further analysis [34, 35, 36, 37, 38]; however, it is still not a fully explored phenomenon, without a direct experimental test. To make it more broadly feasible, other system-bath topologies generating and detecting more autonomous coherence have to be found.
Here, we present two crucial steps toward such experimental verifications, considering many separate two-level systems to push the bath coherently, many to receive quantum coherence in parallel, and also more baths assisting the process in parallel. Advantageously, we split the single two-level system used in Ref. [32] into two separate (drive and output) ones. Using these allowed topologies, which provide autonomous quantum coherences without a back-action, we propose and study an autonomous synthesization of coherence from many systems, multiplexing it to many systems and employing different baths jointly to generate the coherences. From the detailed analysis of these cases, we prove a significant result: many systems and baths can be used in parallel to obtain and broadcast autonomous quantum coherences in the experiments.
The paper is organized as follows. In Sec. 2, we propose a general method for the calculation of spin coherences in systems which contain a bosonic bath interacting with many spins and verify this method against the previously obtained results for a single spin in Refs. [36, 37, 38]. In Sec. 3, we apply the new method to the case of two spins, input and output, interacting separately with the bosonic bath. In Sec. 4, we extend the previous system to \(M\) input spins and analyze the expression for the coherence of a single output spin as a function of \(M\). Then, in Sec. 5 we examine the case of \(M\) input and two output spins and consider the correlation effects between the output spins. In Sec. 6, we explore the case of two bosonic baths coupled separately to \(M\) and \(N\) input spins, while the output spin is coupled to both baths. We develop a systematic scheme and calculate the coherence of the output spin in a general form, and then discuss the generalization of this problem to a larger number of baths and output spins. Finally, in Sec. 7, we further generalize our method of calculation by substituting the output spin with an oscillator. In Sec. 8, we summarize our results and discuss the advantages and limits of the proposed mechanisms for the generation of coherence in spin-bath systems. Technical details are presented in 4 Appendices.
## 2 Multi-spin interaction with thermal bath
First, we modify the model considered in Refs. [36, 37, 38] to open more possibilities for synthesization and multiplexing by separating a single two-level system into driving and receiving two-level systems.
We consider the Hamiltonian of the system \(H=H_{B}+H_{S}+H_{SB}=H_{0}+H_{SB}\). Here
\[H_{B}=\sum_{k}\Omega_{k}b_{k}^{\dagger}b_{k}, \tag{1}\]
is the Hamiltonian of bosonic excitations with the spectrum \(\Omega_{k}>0\). The operators \(b_{k},b_{k}^{\dagger}\) satisfy the canonical commutation relations \([b_{k},b_{q}^{\dagger}]=\delta_{kq}\), where \(\delta_{kq}\) is the Kronecker symbol. Furthermore,
\[H_{S}=\sum_{j=1}^{M+1}\frac{\omega_{j}}{2}\sigma_{j}^{z}, \tag{2}\]
with \(\omega_{j}>0\) is the Hamiltonian of \(M+1\) spins, \(M\) driving the bath to get coherence there and the \((M+1)\)th receiving the coherence. Finally,
\[H_{SB}=\sum_{j=1}^{M+1}\mathbf{\sigma}_{j}\cdot\mathbf{n}_{j}\sum_{k}\lambda_{k}(b_{k }^{\dagger}+b_{k}), \tag{3}\]
describes the interaction of the spins with the bosonic system. Here, we have introduced the notation \(\mathbf{\sigma}\cdot\mathbf{n}\equiv\sigma^{x}n^{x}+\sigma^{y}n^{y}+\sigma^{z}n^{z}\), with the Pauli matrices \(\sigma^{x},\sigma^{y},\sigma^{z}\) and vector of the coupling strength parameters \(\mathbf{n}=(n^{x},n^{y},n^{z})\).
We are interested in the reduced density matrix of the spin system, which can be obtained from the full canonical density matrix \(\rho\equiv e^{-\beta H}/Z\) by tracing out over the bosonic degrees of freedom
\[\rho_{S}=Z^{-1}\text{Tr}_{B}\Big{[}e^{-\beta H}\Big{]}, \tag{4}\]
where \(Z\) is the partition function of the full system
\[Z=\text{Tr}_{S}\Big{[}\text{Tr}_{B}\Big{[}e^{-\beta H}\Big{]}\Big{]}. \tag{5}\]
We evaluate the reduced density matrix using the following method. First we present the operator exponent in the form
\[e^{-\beta H}=e^{-\beta H_{0}}\Big{(}e^{\beta H_{0}}e^{-\beta H}\Big{)}=e^{- \beta H_{0}}U(\beta). \tag{6}\]
The operator \(U(\tau)\) satisfies the differential equation in the domain \(\tau\in[0,\beta]\) with the initial condition \(U(0)=1\)
\[\frac{\partial U(\tau)}{\partial\tau}=-\widetilde{H}_{SB}(\tau)U(\tau), \tag{7}\]
where \(\widetilde{H}_{SB}(\tau)\equiv e^{\tau H_{0}}H_{SB}e^{-\tau H_{0}}\). The solution of the equation can be presented in the form of the chronologically ordered (in the imaginary Matsubara time \(\tau\)) exponent
\[U(\beta)= T_{\tau}\Big{\{}e^{-\int_{0}^{\beta}d\tau\widetilde{H}_{SB}( \tau)}\Big{\}}=1-\int_{0}^{\beta}d\tau_{1}\widetilde{H}_{SB}(\tau_{1})+\] \[+ \int_{0}^{\beta}d\tau_{1}\widetilde{H}_{SB}(\tau_{1})\int_{0}^{ \tau_{1}}d\tau_{2}\widetilde{H}_{SB}(\tau_{2})+\ldots. \tag{8}\]
Using this result we rewrite the reduced density matrix as
\[\rho_{S}=\frac{e^{-\beta H_{S}}}{Z_{S}}\frac{\left\langle T_{\tau}\Big{\{}e^{ -\int_{0}^{\beta}d\tau\widetilde{H}_{SB}(\tau)}\Big{\}}\right\rangle_{B}}{ \left\langle\left\langle T_{\tau}\Big{\{}e^{-\int_{0}^{\beta}d\tau\widetilde{ H}_{SB}(\tau)}\Big{\}}\right\rangle_{B}\right\rangle_{S}}. \tag{9}\]
Here the averaging procedures over the spin (\(S\)) and bosonic (\(B\)) degrees of freedom read
\[\langle\star\rangle_{S}\equiv Z_{S}^{-1}\text{Tr}_{S}\Big{[}e^{-\beta H_{S}}\star\Big{]}, \tag{10}\] \[\langle\star\rangle_{B}\equiv Z_{B}^{-1}\text{Tr}_{B}\Big{[}e^{-\beta H_{B}}\star\Big{]}. \tag{11}\]
with
\[Z_{S}\equiv\text{Tr}_{S}\Big{[}e^{-\beta H_{S}}\Big{]},\quad Z_{B}\equiv\text {Tr}_{B}\Big{[}e^{-\beta H_{B}}\Big{]}, \tag{12}\]
being the partition functions of the free spin (\(Z_{S}\)) and boson (\(Z_{B}\)) subsystems, respectively. Note that the numerator of Eq. (9) can be presented as \(e^{-H_{eff}}\); the denominator then transforms into \(\text{Tr}_{S}[e^{-H_{eff}}]\), see details in Ref. [39]. However, in the current study we follow another route and evaluate the \(T_{\tau}\)-ordered exponent perturbatively.
The reduced density operator \(\rho_{S}\) depends on the \(M+1\) spin degrees of freedom. Our idea is based on the observation that the bosonic bath effectively couples the spin degrees
of freedom. Part of these coupling terms is responsible for the generation of a non-zero value of the \(\sigma_{M+1}^{x}\)-operator (coherence), which can be calculated as
\[\langle\sigma_{M+1}^{x}\rangle\equiv\mathrm{Tr}_{S}[\rho_{S}\sigma_{M+1}^{x}]. \tag{13}\]
The aforementioned spin-spin interaction terms correspond to the leading terms of the perturbation series for the reduced density matrix \(\rho_{S}\) in the spin-boson coupling parameters and can be deduced from the following expression in the weak-coupling regime (see derivation in Appendix A)
\[\Big{\langle}T_{\tau}\Big{\{}e^{-\int_{0}^{\beta}d\tau\widetilde{ \mu}_{SB}(\tau)}\Big{\}}\Big{\rangle}_{B}\approx 1+\int_{0}^{\infty}d\xi\,\mathcal{I}( \xi)\times\] \[\qquad\times\int_{0}^{\beta}d\tau\int_{0}^{\tau}d\tau^{\prime} \phi(\xi,\tau-\tau^{\prime})F(\tau)F(\tau^{\prime}). \tag{14}\]
Here, we have introduced the bosonic spectral density function
\[\mathcal{I}(\xi)\equiv\sum_{k}\lambda_{k}^{2}\delta(\xi-\Omega_{k}), \tag{15}\]
the function
\[\phi(\xi,\tau-\tau^{\prime})=\frac{e^{(\tau-\tau^{\prime})\xi}}{e^{\beta\xi}- 1}+\frac{e^{-(\tau-\tau^{\prime})\xi}}{1-e^{-\beta\xi}}, \tag{16}\]
and \(\tau\)-dependent multi-spin operator
\[F(\tau)=e^{\tau H_{S}}\Big{[}\sum_{j=1}^{M+1}\mathbf{\sigma}_{j}\cdot\mathbf{n}_{j} \Big{]}e^{-\tau H_{S}}. \tag{17}\]
For a comparison, we first reconsider the case with the following single-spin Hamiltonian [38, 37, 36]
\[H_{BS}=[f_{1}\sigma_{1}^{z}+f_{2}\sigma_{1}^{x}]\sum_{k}\lambda_{k}(b_{k}+b_{ k}^{\dagger}). \tag{18}\]
Figure 1: Generation of the spin coherence in two different schemes of the spin-bath interaction. (a) Self-induced spin coherence method from Refs. [38, 37, 36]. The single spin “polarizes” the bosonic subsystem with spectral density function \(\mathcal{I}(\xi)\) at temperature \(T\) via the \(f_{1}\sigma_{1}^{z}\) term. The polarized bosonic bath generates indirectly the coherence in the _same_ spin \(\langle\sigma_{1}^{x}\rangle\) via the \(f_{2}\sigma_{1}^{x}\) term, i.e., as the _back reaction_ to the spin system from the bosonic bath. (b) Externally induced spin coherence method. The first (input) spin polarizes the bosonic bath via the \(f_{1}\sigma_{1}^{z}\) term. The polarized bosonic bath generates the coherence in the _output_ spin \(\langle\sigma_{2}^{x}\rangle\) via the \(f_{2}\sigma_{2}^{x}\) term, i.e., as the _transfer of spin coherence through the bath_ to the second spin system.
It corresponds to the situation of one spin coupled to the bosonic thermal bath simultaneously via the \(\sigma^{z}\) and \(\sigma^{x}\) spin operators, i.e., \(\mathbf{n}=(f_{2},0,f_{1})\). The corresponding system is depicted in Fig. 1 a).
Using the result of Appendix A, we can write the leading spin-spin correlation term of the reduced spin density operator in this case
\[F(\tau)F(\tau^{\prime})= \,[f_{1}\sigma_{1}^{z}+f_{2}\sigma_{1}^{x}(\tau)][f_{1}\sigma_{1} ^{z}+f_{2}\sigma_{1}^{x}(\tau^{\prime})]=\] \[= f_{1}^{2}+f_{2}^{2}\cosh(\omega_{1}(\tau-\tau^{\prime}))+f_{2}^ {2}\sinh(\omega_{1}(\tau-\tau^{\prime}))\sigma_{1}^{z}+\] \[+ f_{1}f_{2}[\sinh(\omega_{1}\tau^{\prime})-\sinh(\omega_{1}\tau )]\sigma_{1}^{x}+if_{1}f_{2}[\cosh(\omega_{1}\tau^{\prime})-\cosh(\omega_{1} \tau)]\sigma_{1}^{y}, \tag{19}\]
In the first line of Eq. (19) we introduced the \(\tau\)-dependent operators
\[\sigma^{j}(\tau)\equiv e^{\tau\frac{\omega}{2}\sigma^{z}}\sigma^{j}e^{-\tau \frac{\omega}{2}\sigma^{z}}, \tag{20}\]
with \(j=x,y,z\). Using the algebra of Pauli matrices \(\sigma_{j}\sigma_{k}=\delta_{jk}\mathcal{I}+i\varepsilon_{jkl}\sigma_{l}\), where \(\mathcal{I}\) is a \(2\times 2\) unit matrix and \(\varepsilon_{jkl}\) is the Levi-Civita symbol, we evaluated \(\sigma^{x}(\tau)=\cosh(\omega\tau)\sigma^{x}+i\sinh(\omega\tau)\sigma^{y}\), \(\sigma^{y}(\tau)=\cosh(\omega\tau)\sigma^{y}-i\sinh(\omega\tau)\sigma^{x}\), \(\sigma^{z}(\tau)=\sigma^{z}\), and then calculated the coherence \(\langle\sigma_{1}^{x}\rangle\) in the leading order of the perturbation theory
\[\langle\sigma_{1}^{x}\rangle\equiv \mathrm{Tr}_{S}\Big{[}\rho_{S}\sigma_{1}^{x}\Big{]}\approx\int_{ 0}^{\infty}d\xi\,\mathcal{I}(\xi)\int_{0}^{\beta}d\tau\int_{0}^{\tau}d\tau^{ \prime}\phi(\xi,\tau-\tau^{\prime})\langle F(\tau)F(\tau^{\prime})\sigma_{1}^ {x}\rangle_{S}=\] \[= -4f_{1}f_{2}\tanh\Big{(}\frac{\beta\omega_{1}}{2}\Big{)}\int_{0} ^{\infty}d\xi\,\mathcal{I}(\xi)\frac{\xi\coth\Big{(}\frac{\beta\xi}{2}\Big{)} -\omega_{1}\coth\Big{(}\frac{\beta\omega_{1}}{2}\Big{)}}{\xi\left(\xi^{2}- \omega_{1}^{2}\right)}. \tag{21}\]
This answer coincides with the previously obtained one [36, 37, 38]. Note that the complex structure of the integrand is a result of the dynamical back reaction of the bosonic bath onto the spin system. This is a consequence of the special coupling of the spin system to the bosonic bath, containing both the \(\sigma^{x}\) and \(\sigma^{z}\) coupling terms. It allows self-induced coherence through the bath by the spin itself but also creates dynamical terms that limit the amount of coherence. In order to eliminate these dynamical terms we introduce two groups of spin systems, where the spins of the first (second) group interact with the bosonic bath only via the \(\sigma^{z}\) (\(\sigma^{x}\)) coupling term. In this configuration the first group of spins influences the bosonic system as driving spins and then the affected bosonic bath generates the nonzero coherence in the second group of output spins. The simplest case of such systems is considered in the next section.
## 3 New two-spin method \((M=1)\)
For further development and comparison, we propose the basic case \(\mathbf{n}_{1}=(0,0,f_{1})\), \(\mathbf{n}_{2}=(f_{2},0,0)\), with \(f_{1},f_{2}\in\mathds{R}\). The corresponding system is depicted in Fig. 1 b). For this case the spin-spin correlation term reads
\[F(\tau)F(\tau^{\prime})= [f_{1}\sigma_{1}^{z}+f_{2}\sigma_{2}^{x}(\tau)][f_{1}\sigma_{1}^{z }+f_{2}\sigma_{2}^{x}(\tau^{\prime})]=\] \[= f_{1}^{2}+f_{2}^{2}\cosh(\omega_{2}(\tau-\tau^{\prime}))+f_{2}^ {2}\sinh(\omega_{2}(\tau-\tau^{\prime}))\sigma_{2}^{z}+\] \[+ f_{1}f_{2}\Big{(}[\cosh(\omega_{2}\tau^{\prime})+\cosh(\omega_{ 2}\tau)]\sigma_{1}^{z}\sigma_{2}^{x}+i[\sinh(\omega_{2}\tau^{\prime})+\sinh( \omega_{2}\tau)]\sigma_{1}^{z}\sigma_{2}^{y}\Big{)}. \tag{22}\]
Using this result we obtain
\[\langle\sigma_{2}^{x}\rangle\equiv\mathrm{Tr}_{S}\Big{[}\rho_{S}\sigma_{2}^{x} \Big{]}\approx-4f_{1}f_{2}\tanh\Big{(}\frac{\beta\omega_{1}}{2}\Big{)}\frac{ \tanh\Big{(}\frac{\beta\omega_{2}}{2}\Big{)}}{\omega_{2}}\Omega, \tag{23}\]
where we have introduced the quantity \(\Omega\equiv\int_{0}^{\infty}d\xi\,\mathcal{I}(\xi)/\xi\), which has the meaning of the reorganization energy of the bosonic bath. Note that this result coincides with the mean-field (static) result for the original case in the low-temperature limit [38].
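As a consistency check of Eq. (23), the double imaginary-time integral can be evaluated numerically; a minimal sketch, with arbitrary test values for \(\beta\), \(\omega_{2}\) and a single bath-mode frequency \(\xi\), confirms that it reduces to \(4\tanh(\beta\omega_{2}/2)/(\omega_{2}\xi)\), which upon integration over the spectral density gives the reorganization-energy form above.

```python
# Numerical check of the double imaginary-time integral behind Eq. (23)
# for a single bath mode of frequency xi; the test values are arbitrary.
import numpy as np
from scipy import integrate

beta, w2, xi = 2.0, 0.7, 1.3

def phi(s):
    # kernel of Eq. (16)
    return np.exp(s * xi) / (np.exp(beta * xi) - 1.0) + np.exp(-s * xi) / (1.0 - np.exp(-beta * xi))

def spin_factor(tau, taup):
    # <F(tau)F(tau') sigma_2^x>_S divided by -f1*f2*tanh(beta*w1/2), cf. Eq. (22)
    t2 = np.tanh(beta * w2 / 2.0)
    return (np.cosh(w2 * taup) + np.cosh(w2 * tau)
            - t2 * (np.sinh(w2 * taup) + np.sinh(w2 * tau)))

val, _ = integrate.dblquad(lambda taup, tau: phi(tau - taup) * spin_factor(tau, taup),
                           0.0, beta, 0.0, lambda tau: tau)
print(val, 4.0 * np.tanh(beta * w2 / 2.0) / (w2 * xi))  # the two numbers coincide
```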
Let us compare the results of the original self-induced method [38, 36], where a single spin was both the driving and the output system, and of the new method splitting these roles between separate spins. To do so, we first set \(\omega_{2}=\omega_{1}=\omega\) in the new method to address the resonant case and rewrite both expressions in the following form
\[\langle\sigma_{1}^{x}\rangle= -4f_{1}f_{2}\tanh\left(\frac{\beta\omega}{2}\right)\int_{0}^{ \infty}d\xi\,\frac{\mathcal{I}(\xi)}{\xi}\frac{\xi\coth\left(\frac{\beta\xi}{ 2}\right)-\omega\coth\left(\frac{\beta\omega}{2}\right)}{(\xi^{2}-\omega^{2})}=\] \[=-\int_{0}^{\infty}d\xi\,\frac{\mathcal{I}(\xi)}{\xi}\mathcal{F} _{1}(\beta,\omega,\xi), \tag{24}\] \[\langle\sigma_{2}^{x}\rangle= -4f_{1}f_{2}\tanh\left(\frac{\beta\omega}{2}\right)\int_{0}^{ \infty}d\xi\,\frac{\mathcal{I}(\xi)}{\xi}\frac{\tanh\left(\frac{\beta\omega} {2}\right)}{\omega}=-\int_{0}^{\infty}d\xi\,\frac{\mathcal{I}(\xi)}{\xi} \mathcal{F}_{2}(\beta,\omega,\xi). \tag{25}\]
Taking into account the positivity \(\mathcal{I}(\xi)>0\) and the fact that \(\mathcal{F}_{2}(\beta,\omega,\xi)>\mathcal{F}_{1}(\beta,\omega,\xi)>0\) for any value of \(\beta\) and \(\omega\), we obtain that \(|\langle\sigma_{2}^{x}\rangle|>|\langle\sigma_{1}^{x}\rangle|\) for the resonant case. This demonstrates that the new method of generating the coherence using two spins with distributed roles is more effective than the previously proposed method [36], where a single spin had to play a double role, both coherently displacing the bath by its thermal population and receiving that coherence back.
To assess the effectiveness of the coherence generation with the new method for a pair of spins, we consider the general case of different frequencies \(\omega_{1}\neq\omega_{2}\) with the canonical spectral density function
\[\mathcal{I}(\xi)=\lambda\frac{\xi^{s}}{\omega_{c}^{s-1}}e^{-\xi/ \omega_{c}}. \tag{26}\]
The dimensionless parameter \(\lambda\) describes the strength of the spectral density function, while \(\omega_{c}\) represents the energy cut-off [40, 32, 41]. Note that for this spectral density the coherence \(\langle\sigma_{2}^{x}\rangle\) (23) can be calculated analytically
\[\langle\sigma_{2}^{x}\rangle= -4f_{1}f_{2}\tanh\left(\frac{\beta\omega_{1}}{2}\right)\frac{ \tanh\left(\frac{\beta\omega_{2}}{2}\right)}{\omega_{2}}\lambda\omega_{c} \Gamma(s), \tag{27}\]
where \(\Gamma(s)\) is the Gamma function. The normalized coherence \(\langle\sigma_{2}^{x}\rangle/(-4f_{1}f_{2}\lambda)\) for different values of the parameter \(s=0.5;1;2\) as a function of the dimensionless parameters \(\beta\omega_{1}\) and \(\omega_{2}/\omega_{1}\) is presented in Fig. 2. The coherence grows linearly with the strength \(\lambda\) of the spectral density function and with its cut-off energy \(\omega_{c}\). As a function of the temperature \(T=1/\beta\), the coherence is maximized in the \(T\to 0\) limit
\[\langle\sigma_{2}^{x}\rangle|_{T=0}= -4f_{1}f_{2}\lambda\frac{\omega_{c}}{\omega_{2}}\Gamma(s). \tag{28}\]
Finally, for a fixed temperature \(T=1/\beta\), the generated coherence can be increased by increasing the frequency of the first spin \(\omega_{1}\) and decreasing the frequency of the second spin \(\omega_{2}\). However, the limit \(\omega_{2}\to 0\) cannot be taken since it would violate the conditions of the perturbation theory. Namely, the aforementioned perturbative analysis is applicable as long as the condition \(\omega_{2}\gg 4f_{1}f_{2}\Omega\) is satisfied, see details in Appendix D. The generalized non-perturbative analysis of the case with arbitrary \(\omega_{2}\) is presented later in Sec. 6.
Note that \(\langle\sigma_{1}^{x}\rangle\) and \(\langle\sigma_{2}^{x}\rangle\) are non-monotonic functions of the parameter \(s\). Therefore it is not obvious for which parameters \(\omega_{2}\), \(\beta\) and \(s\) the coherence \(\langle\sigma_{2}^{x}\rangle\) is larger than the
coherence \(\langle\sigma_{1}^{x}\rangle\). In order to clarify this issue we calculate the ratio \(\langle\sigma_{2}^{x}\rangle/\langle\sigma_{1}^{x}\rangle\) for the dimensionless parameters \(\omega_{2}/\omega_{1}\) and \(\beta\omega_{1}\)
\[\frac{\langle\sigma_{2}^{x}\rangle}{\langle\sigma_{1}^{x}\rangle}=\Big{(}\frac{\omega_{c}}{\omega_{1}}\Big{)}^{s}\frac{\Gamma(s)\tanh\Big{(}\frac{\beta\omega_{2}}{2}\Big{)}}{(\omega_{2}/\omega_{1})}\left[\int_{0}^{\infty}dx\,x^{s-1}e^{-\omega_{1}x/\omega_{c}}\frac{x\coth\Big{(}\frac{\beta\omega_{1}x}{2}\Big{)}-\coth\Big{(}\frac{\beta\omega_{1}}{2}\Big{)}}{x^{2}-1}\right]^{-1}, \tag{29}\]
and for three values of the parameter \(s=0.5;1;2\). The corresponding plots for the case \(\omega_{1}/\omega_{c}=0.1\) are presented in Fig. 3. The plots demonstrate that the new method of generation becomes effective for a small ratio \(\omega_{2}/\omega_{1}\). This can be understood from the expression for the coherence of the second spin, Eq. (23)
\[\langle\sigma_{2}^{x}\rangle=-4f_{1}f_{2}\tanh\Big{(}\frac{\beta\omega_{1}}{2 }\Big{)}\frac{\tanh\Big{(}\frac{\beta\omega_{2}}{2}\Big{)}}{\omega_{2}}\Omega. \tag{30}\]
As one can see, a larger \(\omega_{1}\) corresponds to a larger value of \(0<\tanh(\beta\omega_{1}/2)<1\). On the other hand, the factor \(\tanh(\beta\omega_{2}/2)/\omega_{2}\) is a decreasing function of \(\omega_{2}\). Therefore, larger values of this factor are reached at small values of \(\omega_{2}\).
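A short numerical sketch, with illustrative parameter values chosen inside the weak-coupling regime, makes this comparison concrete by evaluating Eq. (21) via quadrature and Eq. (27) in closed form for the spectral density of Eq. (26).

```python
# Sketch comparing the self-induced coherence, Eq. (21), with the two-spin
# coherence, Eq. (27), for the power-law spectral density of Eq. (26).
# All parameter values are illustrative.
import numpy as np
from scipy import integrate
from scipy.special import gamma

lam, s, w_c = 0.01, 1.0, 10.0     # spectral-density strength, exponent, cut-off
f1 = f2 = 0.1                     # weak spin-bath couplings
beta, w1, w2 = 5.0, 1.0, 0.2      # inverse temperature, input and output frequencies

def I_spec(xi):
    return lam * xi**s / w_c**(s - 1) * np.exp(-xi / w_c)

def integrand(xi):
    # integrand of Eq. (21); the apparent pole at xi = w1 is removable
    return I_spec(xi) * (xi / np.tanh(beta * xi / 2) - w1 / np.tanh(beta * w1 / 2)) / (xi * (xi**2 - w1**2))

val = sum(integrate.quad(integrand, a, b, limit=200)[0]
          for a, b in [(0.0, w1), (w1, 40.0 * w_c)])
coh_self = -4 * f1 * f2 * np.tanh(beta * w1 / 2) * val

coh_two = -4 * f1 * f2 * np.tanh(beta * w1 / 2) * np.tanh(beta * w2 / 2) / w2 * lam * w_c * gamma(s)

print(f"self-induced |<sigma_1^x>| (Eq. 21) = {abs(coh_self):.3e}")
print(f"two-spin     |<sigma_2^x>| (Eq. 27) = {abs(coh_two):.3e}")
```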
## 4 Synthesizing coherence from \(M\) spins through a single bath
With the new method at hand, we can now address the first general problem: whether the coherence can be synthesized from many driving spins, in general with different couplings and frequencies.
Let us consider the case \(\mathbf{n}_{j}=(0,0,f_{1})\), with \(j=1,2,\ldots M\), \(\mathbf{n}_{M+1}=(f_{2},0,0)\). Then we have
\[F(\tau)F(\tau^{\prime})= f_{1}f_{2}\sum_{j=1}^{M}[\cosh(\omega_{M+1}\tau^{\prime})+ \cosh(\omega_{M+1}\tau)]\sigma_{j}^{z}\sigma_{M+1}^{x}+\] \[+ if_{1}f_{2}\sum_{j=1}^{M}[\sinh(\omega_{M+1}\tau^{\prime})+\sinh (\omega_{M+1}\tau)]\sigma_{j}^{z}\sigma_{M+1}^{y}+\] \[+ f_{1}^{2}\sum_{j,l=1}^{M}\sigma_{j}^{z}\sigma_{l}^{z}+f_{2}^{2} \cosh(\omega_{M+1}(\tau-\tau^{\prime}))+f_{2}^{2}\sinh(\omega_{M+1}(\tau-\tau ^{\prime}))\sigma_{M+1}^{z}. \tag{31}\]
Using this result we calculate \(\langle\sigma_{M+1}^{x}\rangle=\text{Tr}_{S}[\rho_{S}\sigma_{M+1}^{x}]\)
\[\langle\sigma_{M+1}^{x}\rangle\approx-4f_{1}f_{2}\Big{[}\sum_{j=1}^{M}\tanh \Big{(}\frac{\beta\omega_{j}}{2}\Big{)}\Big{]}\frac{\tanh\Big{(}\frac{\beta \omega_{M+1}}{2}\Big{)}}{\omega_{M+1}}\Omega. \tag{32}\]
Comparing this result with the previous case (23), one concludes that each of the \(M\) spins contributes cumulatively to the \((M+1)\)th spin coherence, \(\langle\sigma_{M+1}^{x}\rangle\propto\sum_{j=1}^{M}\langle\sigma_{j}^{z}\rangle_{S}\). This means that the bath can accumulate the contributions of identical driving spins and use them to make the output spin equally coherent even if the coupling product \(f_{1}f_{2}\) in Eq. (32) decreases \(M\) times.
This result can be generalized for the case of \(M\) spins with different couplings to the bath system, i.e., for the case
\[H_{SB}=\Big{[}\sum_{j=1}^{M}\mathbf{\sigma}_{j}\cdot\mathbf{n}_{j}+\mathbf{\sigma}_{M+1}\cdot\mathbf{n}_{M+1}\Big{]}\sum_{k}\lambda_{k}(b_{k}^{\dagger}+b_{k}), \tag{33}\]
with different \(\mathbf{n}_{j}=(0,0,f_{1}^{(j)})\), for \(j=1,2,\ldots M\) and \(\mathbf{n}_{M+1}=(f_{2},0,0)\). Repeating the previous calculations for this case we obtain
\[\langle\sigma_{M+1}^{x}\rangle\approx-4f_{2}\Big{[}\sum_{j=1}^{M}f_{1}^{(j)} \tanh\Big{(}\frac{\beta\omega_{j}}{2}\Big{)}\Big{]}\frac{\tanh\Big{(}\frac{ \beta\omega_{M+1}}{2}\Big{)}}{\omega_{M+1}}\Omega. \tag{34}\]
It is convenient to introduce the density function of the coupling parameters
\[f_{1}(\omega)\equiv\sum_{j=1}^{M}\delta(\omega-\omega_{j})f_{1}^{(j)}, \tag{35}\]
and rewrite the expression for the coherence in the form
\[\langle\sigma_{M+1}^{x}\rangle\approx-4f_{2}\Big{[}\int_{-\infty}^{\infty}\! \!d\omega f_{1}(\omega)\tanh\Big{(}\frac{\beta\omega}{2}\Big{)}\Big{]}\frac{ \tanh\Big{(}\frac{\beta\omega_{M+1}}{2}\Big{)}}{\omega_{M+1}}\Omega. \tag{36}\]
This expression easily demonstrates that the main contribution to the generated coherence comes from the domain \(\omega\gg 2/\beta\), where \(\tanh(\beta\omega/2)\approx 1\). Therefore one can use the simplified formula for the coherence
\[\langle\sigma_{M+1}^{x}\rangle\approx-4f_{2}\Big{[}\int_{2/\beta}^{\infty}d \omega f_{1}(\omega)\Big{]}\frac{\tanh\Big{(}\frac{\beta\omega_{M+1}}{2}\Big{)} }{\omega_{M+1}}\Omega. \tag{37}\]
Advantageously, even a broad distribution of coupling parameters \(f_{1}(\omega)\) in the frequency domain can be sufficient to induce nearly \(M\) times higher coherence if the function \(f_{1}(\omega)\) is localized in the region well above \(2/\beta\).
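A minimal sketch of Eq. (36), assuming a Gaussian coupling density \(f_{1}(\omega)\) centred well above \(2/\beta\) and illustrative parameter values, is:

```python
# Sketch of the synthesized coherence of Eq. (36) for an assumed Gaussian
# coupling density f_1(omega); all numerical values are illustrative.
import numpy as np
from scipy import integrate

beta, w_out = 5.0, 0.2          # inverse temperature and output-spin frequency
f2, Omega = 0.1, 0.1            # output coupling and bath reorganization energy
M, f1_each = 20, 0.05           # number of driving spins and coupling per spin

def f1_density(w, w0=2.0, width=0.5):
    # Gaussian density normalized so that it integrates to M * f1_each
    return M * f1_each * np.exp(-(w - w0)**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))

weight, _ = integrate.quad(lambda w: f1_density(w) * np.tanh(beta * w / 2), 0.0, 20.0)
coherence = -4 * f2 * weight * np.tanh(beta * w_out / 2) / w_out * Omega
# approaches the equal-frequency result of Eq. (32), because tanh(beta*w/2) ~ 1
# over the whole assumed distribution
print(f"synthesized coherence ~ {coherence:.3e}")
```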
Note that the calculation of the average value of \(\sigma_{M+1}^{z}\) is more complicated, since it needs to take into account the denominator terms in the reduced density operator \(\rho_{S}\) from Eq. (9). Such a derivation is presented in Appendix B.
## 5 Coherence multiplexing to two spins (\(M+2\) case)
Now we are in a position to analyse another problem concerning the detection of autonomous coherences, when more spins can be used as output systems. If they all independently receive the same coherence from a single bath, we can observe it more easily without the need to repeat the experiment in time. For the analysis of coherence multiplexing, we extend the previous study to the case of \((M+2)\) spins coupled to the bosonic bath, see Fig. 4. The Hamiltonians of the spin system and the spin-boson coupling term are
\[H_{S}=\sum_{j=1}^{M}\frac{\omega_{j}}{2}\sigma_{j}^{z}+\sum_{p=1}^{2}\frac{ \omega_{M+p}}{2}\sigma_{M+p}^{z} \tag{38}\]
and
\[H_{SB}=\Big{[}\sum_{j=1}^{M}\mathbf{\sigma}_{j}\cdot\mathbf{n}_{j}+\sum_{p=1}^{2}\mathbf{ \sigma}_{M+p}\cdot\mathbf{n}_{M+p}\Big{]}\sum_{k}\lambda_{k}(b_{k}^{\dagger}+b_{k}), \tag{39}\]
respectively. We again consider the special case \(\mathbf{n}_{j}=(0,0,f_{1})\), with \(j=1,2,\ldots M\), \(\mathbf{n}_{M+1}=\mathbf{n}_{M+2}=(f_{2},0,0)\). Repeating the averaging procedure over the bosonic degrees of freedom, see Appendix A, we obtain the following spin-spin correlation term
\[F(\tau)F(\tau^{\prime})= f_{1}f_{2}\sum_{j=1}^{M}\sigma_{j}^{z}\sum_{p=1}^{2}[\cosh( \omega_{M+p}\tau^{\prime})+\cosh(\omega_{M+p}\tau)]\sigma_{M+p}^{x}+\] \[+ if_{1}f_{2}\sum_{j=1}^{M}\sigma_{j}^{z}\sum_{p=1}^{2}[\sinh( \omega_{M+p}\tau^{\prime})+\sinh(\omega_{M+p}\tau)]\sigma_{M+p}^{y}+\] \[+ f_{1}^{2}\sum_{j,l=1}^{M}\sigma_{j}^{z}\sigma_{l}^{z}+f_{2}^{2} \sum_{p=1}^{2}\Big{[}\cosh(\omega_{M+p}(\tau-\tau^{\prime}))+\sinh(\omega_{M+ p}(\tau-\tau^{\prime}))\sigma_{M+p}^{z}\Big{]}+\] \[+ f_{2}^{2}\sigma_{M+1}^{x}\sigma_{M+2}^{x}[\cosh(\omega_{M+1}\tau )\cosh(\omega_{M+2}\tau^{\prime})+\cosh(\omega_{M+2}\tau)\cosh(\omega_{M+1} \tau^{\prime})]-\] \[- f_{2}^{2}\sigma_{M+1}^{y}\sigma_{M+2}^{y}[\sinh(\omega_{M+1} \tau)\sinh(\omega_{M+2}\tau^{\prime})+\sinh(\omega_{M+2}\tau)\sinh(\omega_{M+ 1}\tau^{\prime})]+\] \[+ if_{2}^{2}\sigma_{M+1}^{x}\sigma_{M+2}^{y}[\cosh(\omega_{M+1}\tau )\sinh(\omega_{M+2}\tau^{\prime})+\sinh(\omega_{M+2}\tau)\cosh(\omega_{M+1} \tau^{\prime})]+\] \[+ if_{2}^{2}\sigma_{M+1}^{y}\sigma_{M+2}^{x}[\sinh(\omega_{M+1} \tau)\cosh(\omega_{M+2}\tau^{\prime})+\cosh(\omega_{M+2}\tau)\sinh(\omega_{M+1} \tau^{\prime})]. \tag{40}\]
Figure 4: Synthesizing and multiplexing of autonomous coherence. The first group of \(M\) driving spins polarizes the bosonic bath via the \(\sum_{j=1}^{M}f_{1}\sigma_{j}^{z}\) term. The polarized bosonic bath synthetically generates the coherence in the second group of the output spins via the \(f_{2}\sigma_{M+p}^{x}\) terms, where index \(p=1,2,\dots\) marks the target spins which get the coherence from the bath. For the coherence multiplexing, the figure represents the case of two output spins. The generated coherences of these spins \(\langle\sigma_{M+1}^{x}\rangle=\langle\sigma_{M+2}^{x}\rangle\) are \(M\) times larger than the coherence \(\langle\sigma_{2}^{x}\rangle\) (colored in red) generated in the direct two-spins scheme, see Fig. 1(b) and Eq. (23).
Consequently,
\[\langle F(\tau)F(\tau^{\prime})\sigma^{x}_{M+p}\rangle_{S}=-f_{1}f_{2 }\sum_{j=1}^{M}\tanh\Big{(}\frac{\beta\omega_{j}}{2}\Big{)}\times\] \[\quad\times\Big{\{}[\cosh(\omega_{M+p}\tau^{\prime})+\cosh(\omega _{M+p}\tau)]-[\sinh(\omega_{M+p}\tau^{\prime})+\sinh(\omega_{M+p}\tau)]\tanh \Big{(}\frac{\beta\omega_{M+p}}{2}\Big{)}\Big{\}}. \tag{41}\]
After substituting this result into the expression for the coherence of the \((M+p)\)th spin, one can observe that it has the same structure as the coherence obtained in Sec. 4 for the \((M+1)\)th spin. Therefore, the level of the coherence of the \((M+1)\)th spin does not decrease the level of the coherence of the \((M+2)\)th spin and vice versa. However, the pair of output spins can still be correlated, which would reduce their usefulness as independent resources. In order to check the correlation between these spins we calculate the correlation parameter
\[\sigma\equiv\sqrt{\langle\sigma^{x}_{M+1}\sigma^{x}_{M+2}\rangle-\langle \sigma^{x}_{M+1}\rangle\langle\sigma^{x}_{M+2}\rangle}. \tag{42}\]
In the leading order in coupling parameters \(f_{1},f_{2}\) we get the following expression (see intermediate results of the calculation in Appendix C)
\[\sigma^{2}\approx 4f_{2}^{2}\int_{0}^{\infty}d\xi\,\frac{\mathcal{I}(\xi)} {\xi}\mathcal{G}(\xi,\omega_{M+1},\omega_{M+2}), \tag{43}\]
where
\[\mathcal{G}(\xi,x,y)= \frac{xy\xi\tanh\Big{(}\frac{\beta x}{2}\Big{)}\tanh\Big{(}\frac {\beta y}{2}\Big{)}}{(\xi^{2}-x^{2})\left(\xi^{2}-y^{2}\right)}\coth\Big{(} \frac{\beta\xi}{2}\Big{)}+\frac{xy\xi^{2}}{(x^{2}-y^{2})}\Big{[}\frac{\tanh \Big{(}\frac{\beta x}{2}\Big{)}}{y(\xi^{2}-y^{2})}-\frac{\tanh\Big{(}\frac{ \beta y}{2}\Big{)}}{x(\xi^{2}-x^{2})}\Big{]}. \tag{44}\]
One can observe that the integrand \(\mathcal{G}(\xi,x,y)\) is a positive regular function on the domain \(\xi\in[0,\infty)\). It is a symmetric function of the parameters \(x,y\) and its values belong to the interval \(\mathcal{G}(\xi,x,y)\in[\mathcal{G}_{min}(x,y),\mathcal{G}_{max}(x,y)]\) with
\[\mathcal{G}_{min}(x,y)= \frac{2\tanh\Big{(}\frac{\beta x}{2}\Big{)}\tanh\Big{(}\frac{ \beta y}{2}\Big{)}}{\beta xy}, \tag{45}\] \[\mathcal{G}_{max}(x,y)= \frac{x\tanh\Big{(}\frac{\beta x}{2}\Big{)}-y\tanh\Big{(}\frac{ \beta y}{2}\Big{)}}{x^{2}-y^{2}}. \tag{46}\]
Therefore we have the following bounds on the correlation value
\[\mathcal{G}_{min}(\omega_{M+1},\omega_{M+2})\leq\frac{\sigma^{2}}{4f_{2}^{2} \Omega}\leq\mathcal{G}_{max}(\omega_{M+1},\omega_{M+2}). \tag{47}\]
In the high-temperature limit \(\mathcal{G}_{min}(\omega_{M+1},\omega_{M+2}),\mathcal{G}_{max}(\omega_{M+1},\omega_{M+2})\rightarrow\beta/2\). Therefore, in this case we obtain
\[\sigma^{2}=2\beta f_{2}^{2}\Omega. \tag{48}\]
Note that this result has a universal character, i.e., it does not depend on the spin parameters \(\omega_{M+1},\omega_{M+2}\). In the low-temperature limit \(\beta\omega_{M+1},\beta\omega_{M+2}\gg 1\), \(\sigma^{2}\) retains a non-vanishing value
\[\sigma^{2}=\frac{4f_{2}^{2}}{\omega_{M+1}+\omega_{M+2}}\int_{0}^{\infty}d\xi \,\mathcal{I}(\xi)\frac{\xi+\omega_{M+1}+\omega_{M+2}}{(\xi+\omega_{M+1})(\xi+ \omega_{M+2})}. \tag{49}\]
Note that the non-zero correlation between the two output spins appears in the leading order of perturbation theory and can be interpreted as the reaction of the second spin to the first one (and vice versa), using the states of the bosonic bath as intermediate virtual states. Indeed, the leading term of the correlation contains only the coupling parameter \(f_{2}\), but not the coupling parameter \(f_{1}\), so a non-zero correlation persists even at \(f_{1}=0\).
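As a quick numerical sanity check of Eqs. (43)-(47), the following minimal Python sketch (our own illustration; the Ohmic spectral density \(\mathcal{I}(\xi)=\lambda\,\xi\,e^{-\xi/\omega_{c}}\) and all parameter values are assumptions made purely for the example) evaluates the normalized correlation \(\sigma^{2}/4f_{2}^{2}\Omega\) and compares it with the bounds \(\mathcal{G}_{min}\) and \(\mathcal{G}_{max}\).

```python
# Numerical sanity check of Eqs. (43)-(47): the normalized correlation
# sigma^2/(4 f2^2 Omega) should lie between G_min and G_max of Eqs. (45)-(46).
# Illustrative assumption (ours): Ohmic spectral density I(xi) = lam * xi * exp(-xi/wc).
import numpy as np

beta, w1, w2 = 1.5, 1.0, 2.3       # inverse temperature and output-spin frequencies
lam, wc = 0.1, 5.0                 # assumed bath parameters

def spectral_density(xi):
    return lam * xi * np.exp(-xi / wc)

def G(xi, x, y):
    # integrand of Eq. (44)
    t1 = (x * y * xi * np.tanh(beta * x / 2) * np.tanh(beta * y / 2)
          / ((xi**2 - x**2) * (xi**2 - y**2))) / np.tanh(beta * xi / 2)
    t2 = (x * y * xi**2 / (x**2 - y**2)
          * (np.tanh(beta * x / 2) / (y * (xi**2 - y**2))
             - np.tanh(beta * y / 2) / (x * (xi**2 - x**2))))
    return t1 + t2

def trapz(f, x):
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x) / 2))

xi = np.linspace(1e-4, 60.0, 200001)            # grid avoiding the removable points xi = x, y
Omega = trapz(spectral_density(xi) / xi, xi)    # reorganization energy
corr = trapz(spectral_density(xi) / xi * G(xi, w1, w2), xi) / Omega   # sigma^2/(4 f2^2 Omega)

G_min = 2 * np.tanh(beta * w1 / 2) * np.tanh(beta * w2 / 2) / (beta * w1 * w2)
G_max = (w1 * np.tanh(beta * w1 / 2) - w2 * np.tanh(beta * w2 / 2)) / (w1**2 - w2**2)

print(G_min, "<=", corr, "<=", G_max)           # the bound of Eq. (47)
```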
In order to understand the effectiveness of such a scheme, we estimate the signal-to-noise ratio \(|\langle\sigma_{M+1}^{x}\rangle|/\sigma\). As a typical example, we consider the case of input and output spins with equal frequencies, namely \(\omega_{j}=\omega_{1}\) for \(j=1,2,\ldots,M\) and \(\omega_{M+1}=\omega_{M+2}=\omega\)
\[\frac{|\langle\sigma_{M+1}^{x}\rangle|}{\sigma}=\frac{2f_{1}\Omega M\tanh \left(\frac{\beta\omega_{1}}{2}\right)\tanh\left(\frac{\beta\omega}{2}\right) }{\omega\sqrt{\int_{0}^{\infty}d\xi\frac{\mathcal{I}(\xi)}{\xi}\mathcal{G}( \xi,\omega,\omega)}}. \tag{50}\]
Using the special case of the power-law (generalized Ohmic) spectral density function (26) we obtain the following expression
\[\frac{\langle\sigma_{M+1}^{x}\rangle}{\sigma}=\eta\frac{\left(\frac{\omega_{c }}{\omega_{1}}\right)^{\frac{s+1}{2}}\Gamma(s)\tanh\left(\frac{b}{2}\right) \tanh\left(\frac{bw}{2}\right)}{\sqrt{w}\sqrt{\int_{0}^{\infty}dxx^{s}e^{-x \frac{\omega_{1}}{\omega_{c}}}g(x)}}. \tag{51}\]
Here we have introduced the dimensionless parameter \(\eta=4f_{1}M\sqrt{\lambda}\), which depends neither on the input and output spin frequencies \(\omega_{1},\omega\) nor on the inverse temperature \(\beta\),
\[g(x)= \frac{\tanh^{2}\left(\frac{bw}{2}\right)\left[bwx\left(w^{2}-x^{2 }\right)+4w^{3}\coth\left(\frac{bw}{2}\right)\right]}{\left(w^{2}-x^{2} \right)^{2}}+\] \[+ \frac{bwx\left(x^{2}-w^{2}\right)+\left(2x^{3}-6w^{2}x\right) \tanh\left(\frac{bw}{2}\right)}{\left(w^{2}-x^{2}\right)^{2}}, \tag{52}\]
and the shorthand notations \(w=\omega/\omega_{1}\) and \(b=\beta\omega_{1}\) are used for brevity.
Using the obtained formula, we plot in Fig. 5 the normalized signal-to-noise ratio \(\langle\sigma_{M+1}^{x}\rangle/\eta\sigma\) as a function of the dimensionless parameters \(\beta\omega_{1}\) and \(\omega/\omega_{1}\) for different values of \(s=0.5,1,2\).
## 6 Synthesizing coherence from two independent baths (\(M+n+1\) case)
Now, a last intriguing question remains for our new method: do we really need a single bath that is coherently manipulated by \(M\) driving spins? Can we not instead use two different and independent baths, coherently pushed by \(M\) and \(N\) spins respectively, if both transfer coherence through these baths to the same output spin?
We consider an extended version of the previous system, see Fig. 6. We introduce two reservoirs of bosons, with spectra \(\Omega_{k},\Upsilon_{k}>0\) and creation (annihilation) operators \(b_{k}^{\dagger},d_{k}^{\dagger}\) (\(b_{k},d_{k}\)), respectively. The first (second) reservoir is coupled to the \(M\) (\(N\)) spins via the \(f_{1}\sigma_{j}^{z}\) (\(g_{1}\sigma_{j}^{z}\)) term. Both reservoirs are coupled to the \((M+N+1)\)th spin via the \(f_{2}\sigma^{x}\) and \(g_{2}\sigma^{x}\) terms. The Hamiltonian of the system
\[H=H_{S}+H_{B}+H_{SB} \tag{53}\]
consists of the spin Hamiltonian
\[H_{S}= \sum_{j=1}^{M}\frac{\omega_{1}}{2}\sigma_{j}^{z}+\sum_{j=M+1}^{M+N} \frac{\omega_{2}}{2}\sigma_{j}^{z}+\frac{\omega}{2}\sigma^{z}, \tag{54}\]
the bosonic Hamiltonian
\[H_{B}= \sum_{k}\Omega_{k}b_{k}^{\dagger}b_{k}+\sum_{k}\Upsilon_{k}d_{k}^{ \dagger}d_{k}, \tag{55}\]
and the spin-bath interaction term
\[H_{SB}= f_{1}\sum_{j=1}^{M}\sigma_{j}^{z}\sum_{k}\lambda_{k}(b_{k}^{ \dagger}+b_{k})+g_{1}\sum_{j=M+1}^{M+N}\sigma_{j}^{z}\sum_{k}\kappa_{k}(d_{k}^ {\dagger}+d_{k})+\] \[+ \sigma^{x}\sum_{k}\Big{[}f_{2}\lambda_{k}(b_{k}^{\dagger}+b_{k}) +g_{2}\kappa_{k}(d_{k}^{\dagger}+d_{k})\Big{]}. \tag{56}\]
The first two terms of \(H_{SB}\) describe the interaction of the \(M\) input spins with the left bath and of the additional \(N\) input spins with the right bath, respectively, while the last term describes the coupling of the remaining \((M+N+1)\)th spin to both baths. The first and second baths are characterized by the spectral density functions
\[\mathcal{I}_{1}(\xi)=\sum_{k}\lambda_{k}^{2}\delta(\xi-\Omega_{k}),\quad \mathcal{I}_{2}(\xi)=\sum_{k}\kappa_{k}^{2}\delta(\xi-\Upsilon_{k}), \tag{57}\]
the reorganization energies
\[\Omega=\int_{0}^{\infty}d\xi\,\mathcal{I}_{1}(\xi)/\xi,\quad\Upsilon=\int_{0}^{\infty}d\xi\,\mathcal{I}_{2}(\xi)/\xi. \tag{58}\]
We are interested in the evaluation of the average
\[\langle\sigma^{x}\rangle=\frac{1}{Z}\mathrm{Tr}\Big{[}e^{-\beta H}\sigma^{x} \Big{]}, \tag{59}\]
where \(Z\equiv\mathrm{Tr}[e^{-\beta H}]\). The coherence \(\langle\sigma^{x}\rangle\) reads (details of the calculation are presented in Appendix D)
\[\langle\sigma^{x}\rangle= \frac{Z_{B}}{Z}\sum_{m=0}^{M}\sum_{n=0}^{N}\binom{M}{m}\binom{N}{n} \exp\Big{\{}-\beta[\frac{\omega_{1}}{2}(M-2m)+\frac{\omega_{2}}{2}(N-2n)]\Big{\}}\times\] \[\times \exp\Big{\{}\beta[\Omega f_{1}^{2}(M-2m)^{2}+\Upsilon g_{1}^{2}(N -2n)^{2}]\Big{\}}\text{Tr}_{S}\Big{[}\exp\Big{\{}-\beta\frac{R_{mn}}{2}\sigma ^{z}\Big{\}}\times\] \[\times T_{\tau}\exp\Big{\{}f_{2}^{2}\int_{0}^{\infty}d\xi\,\mathcal{ I}_{1}(\xi)\int_{0}^{\beta}d\tau\int_{0}^{\beta}d\tau^{\prime}G(\xi,\tau-\tau^{ \prime})\Sigma^{x}(\tau)\Sigma^{x}(\tau^{\prime})\Big{\}}\times\] \[\times T_{\tau}\exp\Big{\{}g_{2}^{2}\int_{0}^{\infty}d\xi\, \mathcal{I}_{2}(\xi)\int_{0}^{\beta}d\tau\int_{0}^{\beta}d\tau^{\prime}G(\xi, \tau-\tau^{\prime})\Sigma^{x}(\tau)\Sigma^{x}(\tau^{\prime})\Big{\}}\times\] \[\times(\cos\theta_{mn}\sigma^{x}-\sin\theta_{mn}\sigma^{z})\Big{]}. \tag{60}\]
Here we introduced the partition function of the bosonic system \(Z_{B}\), and the notations \(R_{mn}=(\omega^{2}+\omega_{mn}^{2})^{1/2}\), with \(\omega_{mn}=4f_{1}f_{2}\Omega(M-2m)+4g_{1}g_{2}\Upsilon(N-2n)\), \(\sin\theta_{mn}=\omega_{mn}/R_{mn}\) and \(\cos\theta_{mn}=\omega/R_{mn}\). Furthermore, \(\Sigma^{x}(\tau)=\cosh(R_{mn}\tau)[\cos\theta_{mn}\sigma^{x}+i\cos\theta_{mn} \sigma^{y}]-\sin\theta_{mn}\sigma^{z}\), and \(G(\xi,\tau-\tau^{\prime})\) is defined in Eq. (81). Finally, the partition function \(Z\) of the whole system is
\[Z= Z_{B}\sum_{m=0}^{M}\sum_{n=0}^{N}\binom{M}{m}\binom{N}{n}\exp \Big{\{}-\beta[\frac{\omega_{1}}{2}(M-2m)+\frac{\omega_{2}}{2}(N-2n)]\Big{\}}\times\] \[\times T_{\tau}\exp\Big{\{}f_{2}^{2}\int_{0}^{\infty}d\xi\, \mathcal{I}_{1}(\xi)\int_{0}^{\beta}d\tau\int_{0}^{\beta}d\tau^{\prime}G(\xi, \tau-\tau^{\prime})\Sigma^{x}(\tau)\Sigma^{x}(\tau^{\prime})\Big{\}}\times\] \[\times T_{\tau}\exp\Big{\{}g_{2}^{2}\int_{0}^{\infty}d\xi\, \mathcal{I}_{2}(\xi)\int_{0}^{\beta}d\tau\int_{0}^{\beta}d\tau^{\prime}G(\xi, \tau-\tau^{\prime})\Sigma^{x}(\tau)\Sigma^{x}(\tau^{\prime})\Big{\}}\Big{]}. \tag{61}\]
Figure 6: Autonomous quantum coherence from several independent baths. The group of first \(M\) driving spins polarizes the first bosonic bath via \(\sum_{j=1}^{M}f_{1}\sigma_{j}^{z}\) term, the group of another \(N\) driving spins polarizes the second bosonic bath via \(\sum_{j=M+1}^{M+N}g_{1}\sigma_{j}^{z}\) term. The polarized baths simultaneously generate the coherence \(\langle\sigma^{x}\rangle\) of the \((M+N+1)\)th output spin via \(f_{2}\sigma^{x}\) and \(g_{2}\sigma^{x}\) terms. The coherence of this spin in the leading order in the coupling parameters \(f_{1},f_{2},g_{1},g_{2}\) is \((M+N)\) times larger than the coherence \(\langle\sigma_{2}^{x}\rangle\) (colored in red) generated in the direct two-spins scheme, see Fig. 1(b) and Eq. (23).
Note that the perturbative regime corresponds to the situation \(\omega\gg[\omega_{mn}]_{\max}=4f_{1}f_{2}\Omega M+4g_{1}g_{2}\Upsilon N\). In this limit \(R_{mn}\approx\omega\), \(\sin\theta_{mn}\approx[4f_{1}f_{2}\Omega(M-2m)+4g_{1}g_{2}\Upsilon(N-2n)]/\omega\), \(\cos\theta_{mn}\approx 1\).
Let us check this formula for the previously considered case of \(M+1\) spins, where \(4f_{1}f_{2}\Omega M\ll\omega\) and \(g_{2}=0\). In this case the partition function takes the form
\[Z= Z_{B}\sum_{m=0}^{M}\sum_{n=0}^{N}{M\choose m}{N\choose n}\exp \Big{\{}-\beta\big{[}\frac{\omega_{1}}{2}(M-2m)+\frac{\omega_{2}}{2}(N-2n) \big{]}\Big{\}}\times\] \[\times T_{\tau}\exp\Big{\{}f_{2}^{2}\int_{0}^{\infty}d\xi\,{\cal I}_{1}( \xi)\int_{0}^{\beta}d\tau\int_{0}^{\beta}d\tau^{\prime}G(\xi,\tau-\tau^{\prime })\Sigma^{x}(\tau)\Sigma^{x}(\tau^{\prime})\Big{\}}\Big{]}, \tag{62}\]
where we introduced \(R_{m}=\sqrt{\omega^{2}+[4f_{1}f_{2}\Omega(M-2m)]^{2}}\). We keep only the leading term of the \(T_{\tau}\) exponent, i.e., replace it by 1, use the approximation \(R_{m}\approx\omega\), and remove the small terms \(\propto f_{1}^{2}\Omega,g_{1}^{2}\Upsilon\) in the exponent. We then get
\[Z\approx Z_{B}\sum_{m=0}^{M}\sum_{n=0}^{N}{M\choose m}{N\choose n}\exp \Big{\{}-\beta\big{[}\frac{\omega_{1}}{2}(M-2m)+\frac{\omega_{2}}{2}(N-2n) \big{]}\Big{\}}{\rm Tr}_{S}\Big{[}\exp\Big{\{}-\beta\frac{\omega}{2}\sigma^{z} \Big{\}}\Big{]}\] \[= Z_{B}\Big{[}2\cosh\Big{(}\frac{\beta\omega}{2}\Big{)}\Big{]} \Big{[}2\cosh\Big{(}\frac{\beta\omega_{1}}{2}\Big{)}\Big{]}^{M}\Big{[}2\cosh \Big{(}\frac{\beta\omega_{2}}{2}\Big{)}\Big{]}^{N}, \tag{63}\]
which is nothing but the partition function of the non-interacting spin and boson subsystems. Using the same approximation and \(\sin\theta_{m}=4f_{1}f_{2}\Omega(M-2m)/R_{m}\approx 4f_{1}f_{2}\Omega(M-2m)/\omega\), we obtain the following expression for the coherence in the system
\[\langle\sigma^{x}\rangle\approx \frac{Z_{B}}{Z}\sum_{m=0}^{M}\sum_{n=0}^{N}{M\choose m}{N\choose n }\exp\Big{\{}-\beta\big{[}\frac{\omega_{1}}{2}(M-2m)+\frac{\omega_{2}}{2}(N-2 n)\big{]}\Big{\}}\times\] \[\times \frac{\partial}{\partial\omega_{1}}\sum_{m=0}^{M}\sum_{n=0}^{N}{ M\choose m}{N\choose n}\exp\Big{\{}-\beta\big{[}\frac{\omega_{1}}{2}(M-2m)+ \frac{\omega_{2}}{2}(N-2n)\big{]}\Big{\}}=\] \[= -\frac{4f_{1}f_{2}\Omega}{\omega}\tanh\Big{(}\frac{\beta\omega}{ 2}\Big{)}M\tanh\Big{(}\frac{\beta\omega_{1}}{2}\Big{)}, \tag{64}\]
which coincides with the previously obtained result (32). Considering the general case with \(g_{2}\neq 0\) in the same approximation we get
\[\langle\sigma^{x}\rangle\approx-4\frac{\tanh\Big{(}\frac{\beta\omega}{2} \Big{)}}{\omega}\Big{[}f_{1}f_{2}\Omega M\tanh\Big{(}\frac{\beta\omega_{1}}{2} \Big{)}+g_{1}g_{2}\Upsilon N\tanh\Big{(}\frac{\beta\omega_{2}}{2}\Big{)}\Big{]}. \tag{65}\]
Therefore, the impact of several baths on the coherence of the output spin has an additive character. This means that independent baths can constructively combine to generate coherence of the output spins.
Note the several advantages of the results (60), (61), and (115): i) they can easily be generalized to an arbitrary number of baths; ii) they are non-perturbative in the coupling parameters \(f_{1},f_{2},g_{1},g_{2}\); iii) they allow one to obtain answers beyond the weak-coupling limit \(4f_{1}f_{2}\Omega M+4g_{1}g_{2}\Upsilon N\ll\omega\), \(2f_{1}^{2}\Omega M\ll\omega_{1}\), \(2g_{1}g_{2}\Upsilon N\ll\omega_{2}\). However, the evaluation of the coherence beyond the weak-coupling limit requires either more sophisticated analytical methods or direct numerical calculations.
## 7 Generalization for the oscillator coherences
To extend the experimental possibilities, we finally replace the output spin system of the previous case by an oscillator. The simplest model Hamiltonian of the corresponding system, analogous to that of Sec. 2, is \(H=H_{A}+H_{S}+H_{B}+H_{int}\), where
\[H_{A}=E\Big{(}a^{\dagger}a+\frac{1}{2}\Big{)}, \tag{66}\]
is the oscillator Hamiltonian,
\[H_{B}=\sum_{k}\Omega_{k}b_{k}^{\dagger}b_{k}, \tag{67}\]
is the Hamiltonian of the bosonic excitations of the bath,
\[H_{S}=\frac{\omega}{2}\sigma^{z}, \tag{68}\]
is the spin Hamiltonian and finally the term
\[H_{int}=\Big{[}f\sigma^{z}+g(a^{\dagger}+a)\Big{]}\sum_{k}\lambda_{k}(b_{k}^{ \dagger}+b_{k}), \tag{69}\]
describes an interaction of the spin and the oscillator with the bosonic bath.
Applying the general formula for this case we obtain
\[\rho_{A}=\frac{e^{-\beta H_{A}}}{Z_{A}}\frac{\left\langle T_{\tau}\exp\Big{\{} \int_{0}^{\infty}d\xi\,{\cal I}(\xi)\int_{0}^{\beta}d\tau\int_{0}^{\beta}d\tau ^{\prime}G(\xi,\tau-\tau^{\prime})F(\tau)F(\tau^{\prime})\Big{\}}\right\rangle _{S}}{\left\langle\left\langle T_{\tau}\exp\Big{\{}\int_{0}^{\infty}d\xi\,{ \cal I}(\xi)\int_{0}^{\beta}d\tau\int_{0}^{\beta}d\tau^{\prime}G(\xi,\tau- \tau^{\prime})F(\tau)F(\tau^{\prime})\Big{\}}\right\rangle_{S}\right\rangle_{A}}, \tag{70}\]
where we have introduced
\[F(\tau)=f\sigma^{z}+g(a^{\dagger}e^{\tau E}+ae^{-\tau E}) \tag{71}\]
The reduced density matrix of the oscillator can be calculated as a power series in the parameters \(f,g\). We calculate the leading order of the \(T_{\tau}\)-exponent in the numerator of Eq. (70)
\[\left\langle T_{\tau}\exp\{\dots\}\right\rangle_{S}\approx 1+\int_{0}^{\infty}d\xi\,{\cal I}(\xi)\Big{\{}\frac{f^{2} \beta}{\xi}-\frac{2fg}{E\xi}\tanh\Big{(}\frac{\beta\omega}{2}\Big{)}\Big{[}a^ {\dagger}(e^{\beta E}-1)+a(1-e^{-\beta E})\Big{]}+\] \[+g^{2}\Big{[}\frac{E\coth\Big{(}\frac{\beta\xi}{2}\Big{)}-\xi \coth\Big{(}\frac{\beta E}{2}\Big{)}}{2E\left(E^{2}-\xi^{2}\right)}\Big{\{}(a ^{\dagger})^{2}\left(e^{\beta E}-1\right)^{2}+a^{2}\left(1-e^{-\beta E}\right) ^{2}\Big{\}}+\] \[+\Big{(}\frac{\beta(\xi-E)+e^{\beta(E-\xi)}-1}{\left(1-e^{-\beta \xi}\right)(E-\xi)^{2}}+\frac{-\beta(\xi+E)+e^{\beta(\xi+E)}-1}{\left(e^{\beta \xi}-1\right)(\xi+E)^{2}}\Big{)}a^{\dagger}a+\] \[+\Big{(}\frac{\beta(E-\xi)+e^{\beta(\xi-E)}-1}{\left(e^{\beta\xi }-1\right)(E-\xi)^{2}}+\frac{\beta(\xi+E)+e^{-\beta(\xi+E)}-1}{\left(1-e^{- \beta\xi}\right)(\xi+E)^{2}}\Big{)}aa^{\dagger}\Big{]}\Big{\}}. \tag{72}\]
The denominator of the reduced density matrix then reads
\[\left\langle T_{\tau}\exp\{\dots\}\right\rangle_{A,S}\approx 1+\int_{0}^{\infty}d\xi\,{ \cal I}(\xi)\Big{\{}\frac{f^{2}\beta}{\xi}+g^{2}\frac{\beta\left[E\coth\Big{(} \frac{\beta\xi}{2}\Big{)}-\xi\coth\Big{(}\frac{\beta E}{2}\Big{)}\right]}{E^ {2}-\xi^{2}}\Big{\}} \tag{73}\]
Let us use the obtained result for \(\rho_{A}\) to calculate the average value of the dimensionless coordinate operator \(x\equiv(a^{\dagger}+a)/\sqrt{2}\)
\[\langle x\rangle\approx-\frac{4fg}{\sqrt{2}E}\Omega\tanh\Big{(}\frac{\beta\omega }{2}\Big{)}. \tag{74}\]
Note that a non-zero value of the \(x\) operator exists only if both couplings \(g\) and \(f\) are non-zero. This result can be understood in terms of a two-step model. First, the spin polarizes the bosonic bath via the \(f\sigma^{z}\) term and produces the non-zero value \(\langle b_{k}+b_{k}^{\dagger}\rangle\propto f\) in the leading order of perturbation theory. Then the polarized bath generates the non-zero coordinate shift \(\langle a^{\dagger}+a\rangle\propto g\langle b_{k}+b_{k}^{\dagger}\rangle\propto gf\) via the \(g(a^{\dagger}+a)\) term. The logic of the method thus parallels the previous case of the output spin. The square of the coordinate operator takes the form
\[\langle x^{2}\rangle\approx \frac{1}{2}\coth\Big{(}\frac{\beta E}{2}\Big{)}+2Eg^{2}\int_{0}^{ \infty}\!\!d\xi\mathcal{I}(\xi)\frac{E\coth\Big{(}\frac{\beta\xi}{2}\Big{)}- \xi\coth\Big{(}\frac{\beta E}{2}\Big{)}}{\left(E^{2}-\xi^{2}\right)^{2}}-\] \[- \frac{g^{2}}{2E}\frac{\beta E+\sinh(\beta E)}{\sinh^{2}(\beta E /2)}\int_{0}^{\infty}\!\!d\xi\frac{\mathcal{I}(\xi)\xi}{(E^{2}-\xi^{2})}. \tag{75}\]
As a result, for the variance \(\sigma^{2}\equiv\langle(x-\langle x\rangle)^{2}\rangle\) we have \(\sigma^{2}\approx\langle x^{2}\rangle\) in the leading order. Note that the spin subsystem does not affect \(\sigma^{2}\) in the leading order of perturbation theory in the coupling parameters \(f,g\).
Using the latter results one can estimate the signal-to-noise ratio \(|\langle x\rangle|/\sigma\), which, in particular, has a simple form in the low-temperature limit \(\beta(=1/T)\to\infty\)
\[\frac{\big{|}\langle x\rangle\big{|}}{\sigma}\bigg{|}_{T\to 0}\approx 4fg\frac{ \Omega}{E}. \tag{76}\]
Hence, the signal-to-noise ratio in such a system can be increased by increasing the ratio of the reorganization energy of the bosonic bath \(\Omega\) to the oscillator's energy \(E\).
## 8 Summary
To conclude, using a new and more efficient method than in the previous case, we proved that autonomous coherences can be synthesised from many independent driving spins through separate low-temperature baths and multiplexed to many output spins. This is a crucial step towards their experimental investigation on a much more diverse class of experimental platforms and towards the final verification of such intriguing phenomena. The driving coupling is present as pure dephasing, for example, in quantum dots. Our analysis shows that another output spin system embedded in the same environment can exhibit such autonomous coherence in principle, although it cannot generate it. Moreover, more driving spins are advantageous, and the same bath can supply many spins with the same autonomous coherence. Additionally, the driving spins are not required to generate coherence through the same bath; many baths coupled to the output spin will be equally good.
Our approach qualitatively goes beyond the first theoretical proposals suggesting the existence of these new autonomous spin coherences in Refs. [32, 36, 38]. The groundbreaking idea there is limited by the double role played by the spin in that method and, therefore, by unavoidable disruptive back-action effects. We removed that limitation and opened a road towards much broader investigations. This new approach and its extensions are critical for observing autonomous quantum coherences experimentally and exploiting them in diverse quantum technology applications.
Acknowledgments
R.F. acknowledges the grant 22-27431S of the Czech Science Foundation. A.S. acknowledges the grant LTAUSA19099 of the Czech Ministry of Education, Youth and Sports and the grant of the Czech Science Foundation (project GA23-06369S).
|
2307.12109 | The fate of Galilean relativity in minimal-length theories | A number of arguments at the interplay of general relativity and quantum
theory suggest an operational limit to spatial resolution, conventionally
modelled as a generalized uncertainty principle (GUP). Recently, it has been
demonstrated that the dynamics postulated as a part of these models are only
loosely related to the existence of the minimal-length scale. In this paper, we
intend to make a more informed choice on the Hamiltonian by demanding, among
other properties, that the model be invariant under (possibly) deformed
Galilean transformations in one dimension. In this vein, we study a
two-particle system with general interaction potential under the condition that
the composition as well as the action of Galilean boosts on wave numbers be
deformed so as to comply with the cut-off. We find that the customary
GUP-Hamiltonian does not allow for invariance under (any kind of) generalised
Galilean transformations. Those Hamiltonians which allow for a deformed
relativity principle have to be related to the ordinary Galilean ones by virtue
of a momentum-space diffeomorphism, i.e. a canonical transformation. Far from
being trivial, the resulting dynamics is deformed, as we show at the example of
the harmonic interaction. | Pasquale Bosso, Giuseppe Fabiano, Domenico Frattulillo, Fabian Wagner | 2023-07-22T15:38:09Z | http://arxiv.org/abs/2307.12109v1 | # The fate of Galilean relativity in minimal-length theories
###### Abstract
A number of arguments at the interplay of general relativity and quantum theory suggest an operational limit to spatial resolution, conventionally modelled as a generalized uncertainty principle (GUP). Recently, it has been demonstrated that the dynamics postulated as a part of these models are only loosely related to the existence of the minimal-length scale. In this paper, we intend to make a more informed choice on the Hamiltonian by demanding, among other properties, that the model be invariant under (possibly) deformed Galilean transformations in one dimension. In this vein, we study a two-particle system with general interaction potential under the condition that the composition as well as the action of Galilean boosts on wave numbers be deformed so as to comply with the cut-off. We find that the customary GUP-Hamiltonian does not allow for invariance under (any kind of) generalised Galilean transformations. Those Hamiltonians which allow for a deformed relativity principle have to be related to the ordinary Galilean ones by virtue of a momentum-space diffeomorphism, _i. e._ a canonical transformation. Far from being trivial, the resulting dynamics is deformed, as we show at the example of the harmonic interaction.
More than a hundred years after its conception [1], a consistent formulation of a quantum theory of gravity remains elusive (see [2] for a recent review). The main reason for this slow progress lies in the scarcity of experimental input. However, recent advances in precision measurements [3] as well as control over quantum phenomena [4; 5] have raised hopes that this may change in the near future, leading to the advent of quantum gravity phenomenology [6; 7; 8].
One of the main lines of research in quantum gravity phenomenology consists of minimal-length models. As a matter of fact, arguments heuristically combining general relativity and quantum theory suggest the appearance of some kind of minimal-length scale [9; 10; 11; 12; 13; 14; 15; 16; 17]. For example, this happens in scattering processes with high center-of-mass energy at impact parameters small enough to create black holes, making it impossible to resolve smaller distances [9; 14; 17]. This intuition is corroborated by circumstantial evidence from explicit approaches to quantum gravity such as string theory [18; 19; 20; 21; 22], loop quantum gravity [23; 24], asymptotic safety [25; 26], causal dynamical triangulations [27; 28; 29] and Horava-Lifshitz gravity [30; 31] (for an extensive review of those motivations see chapter 3 of [32]).
In the context of nonrelativistic single-particle quantum mechanics it is customary to introduce the minimal-length scale by deforming the Heisenberg algebra leading to a generalized uncertainty principle (GUP) [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47] (see [32; 48] and section 3 of [49] for recent reviews, and [50] for some critical reflections on the state of the field). Consequently, the minimal-length scale enters these models by virtue of the Robertson-Schrodinger relation [51; 52], _i. e._ as a fundamental limit to localisation. In one dimension, a general parity-invariant modified canonical commutator reads
\[[\hat{x},\hat{p}]=if(|\hat{p}|), \tag{1}\]
with the position \(\hat{x}\) and the momentum \(\hat{p}.\) Depending on the function \(f(|\hat{p}|),\) the Robertson-Schrodinger relation [51; 52]
\[\Delta x\geq\frac{|\left\langle f(|\hat{p}|)\right\rangle|}{2\Delta p} \tag{2}\]
may imply a global minimum to the standard deviation of the position operator. This is the case, for instance, for the foundational model introduced by Kempf, Mangano, and Mann [33]
\[f=1+\ell^{2}\hat{p}^{2}, \tag{3}\]
with the length scale \(\ell\), expected to be of the order of the Planck length. Equation (2) then implies
\[\Delta x\geq\ell. \tag{4}\]
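To make the minimisation behind Eq. (4) explicit, the following short symbolic sketch (our own illustration; the variable names are ours and \(\hbar=1\)) minimises the right-hand side of Eq. (2) for the choice (3), using \(\langle\hat{p}^{2}\rangle=\Delta p^{2}+\langle\hat{p}\rangle^{2}\).

```python
# Symbolic check of Eqs. (2)-(4) for the choice f = 1 + l^2 p^2:
# minimising the uncertainty bound over Delta_p yields Delta x >= l.
import sympy as sp

l, dp, p0 = sp.symbols('l Delta_p p_0', positive=True)

# <f(|p|)> = 1 + l^2 <p^2> with <p^2> = Delta_p^2 + <p>^2
bound = (1 + l**2 * (dp**2 + p0**2)) / (2 * dp)

dp_star = sp.solve(sp.diff(bound, dp), dp)          # stationary point: sqrt(1 + l**2*p_0**2)/l
min_bound = sp.simplify(bound.subs(dp, dp_star[0]))

print(dp_star)
print(min_bound)    # l*sqrt(l**2*p_0**2 + 1)  >=  l, saturated at <p> = 0
```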
In short, we choose a function \(f\) such that the underlying model exhibits a minimal length. Here, rather than having built a model constructively on the basis of the existence of a minimal length, _i. e._ from the bottom up, we have started by proposing a model, and subsequently shown that it exhibits a minimal length. Top-down approaches of this kind can be instructive when there is an intuition on the choice of model. Unfortunately, in minimal-length quantum mechanics this is not the case. This raises the questions: what is the essence of the minimal length? and which role shall the function \(f\) play from a physical point of view?
Recent developments have marked a step towards solving this puzzle [53; 54; 55]. If there is to be a minimal length, the kinematics of the theory has to satisfy specific conditions: given a position operator \(\hat{x}\), we can define its wave-number conjugate \(\hat{k}\) such that1
Footnote 1: Throughout this paper, we will differentiate the terms “wave number” and “momentum” standing for the generally distinct operators \(\hat{k}\) and \(\hat{p}\), respectively. Similarly, the terms “wave-number representation” and “wave-number space” refer to the basis carved out by the eigenstates of \(\hat{k}\).
\[[\hat{x},\hat{k}]=i. \tag{5}\]
If the standard deviation of the position operator exhibits a global minimum, the spectrum of the operator \(\hat{k}\) is necessarily bounded as
\[\text{spec}(\hat{k})=\{k:k\in[-\pi/2\ell,\pi/2\ell]\}. \tag{6}\]
The constant \(\ell\) quantifies the minimal length in the sense that the underlying model obeys Eq. (4).
In formulating this necessary and sufficient condition for the existence of a minimal length, it has not been necessary to refer to the momentum \(\hat{p}\). Thus, at first sight it appears that the choice of physical momentum is arbitrary and, most importantly, largely independent of the existence of a minimal length. While this arbitrariness is irrelevant at the kinematic level, it becomes problematic once a Hamiltonian is defined as
\[\hat{H}=\frac{\hat{p}^{2}}{2m}+V(\hat{x}), \tag{7}\]
_i. e._ in terms of the momentum \(\hat{p}\), as is commonplace in the literature [56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76]. This particular Hamiltonian is not only not implied by the existence of the minimal length; on the face of it, both are entirely unrelated. This observation raises two questions [50; 53]: if it is not required for the existence of a minimal length, why should we introduce a notion of momentum \(\hat{p}\) distinct from the wave number \(\hat{k}\) in the first place? and how could we make a more informed guess on the minimal-length deformed Hamiltonian? As we will show below, an answer to the second of these questions entails an answer to the first.
In relativistic theories with an invariant length scale, the choice of physical momentum and its underlying composition law has been addressed in multiple studies [77; 78]. In particular, nonlinearities in the addition law for physical momenta has been the center of the much debated soccer-ball problem [79; 80; 81], according to which small Planck-scale modifications may give rise to drastic macroscopic effects - in contrast with everyday observations. As discussed in [82], an unambiguous definition of total physical momentum is only viable when interactions between particles are involved.
In this paper, we derive a unique class of interacting Hamiltonians for two-particle systems in one spatial dimension based on a number of elementary axioms. Most importantly, we demand that the space of wave numbers be bounded as in Eq. (6) and that there not be a preferred point, nor a preferred frame. In other words, we assume the system at hand to be invariant under generalised Galilean transformations, while implying a minimal length of the kind provided in Eq. (4).
In order for wave-number space to be bounded, the addition of wave numbers must cease being linear. Furthermore, the action of boosts on the wave number (in the ordinary theory a simple translation) has to saturate when approaching the bound. Otherwise this bound could be overshot, for example, in scattering processes or by considering the system from the point of view of a strongly boosted observer. We find that generalised Galilean invariance of the Hamiltonian, under rather mild assumptions, tightly constrains this composition law, forcing it to be commutative as well as associative. In other words, there must be an operator \(\hat{p}=p(\hat{k})\), which adds up linearly just as momenta in
the ordinary theory do, and is, therefore, unbounded. It is this function which, as a matter of convenience rather than necessity, provides a notion of momentum akin to the one implicitly employed in the literature. Consequently, the deformed commutator in Eq. (1) becomes the inverse Jacobian \(f=\mathrm{d}\hat{p}/\mathrm{d}\hat{k}.\) The transformation \(\hat{k}\rightarrow\hat{p}\) can be turned into a canonical transformation by also scaling the position with its Jacobian to obtain an operator \(\hat{X},\) conjugate to \(\hat{p}.\) Therefore, the Heisenberg equations of motion are left untouched.
As regards the Hamiltonian, we find that the kinetic term indeed provides a nonrelativistic modified dispersion relation of the kind displayed in Eq. (7). Interaction potentials, however, have to be modified, thus becoming a function of the operator \(\hat{X}\) instead of the position \(\hat{x}.\) As the phase-space coordinates \((X,p)\) are canonical, the Hamiltonian is thus canonically related to the ordinary quantum mechanical one.
In short, the only minimal-length model invariant under any deformed version of Galilean relativity is a diffeomorphism away from the undeformed theory. As a corollary, conventional GUP models do not allow for a principle of relativity. That the deformed theory can be mapped into the undeformed one reflects the fact that a one-dimensional wave-number space cannot harbour curvature. Indeed, it bears similarity to special relativity in 1+1-dimensions. In contrast to the higher-dimensional case, the latter theory does not possess a curved velocity space. Therefore, it can be mapped into Galilean relativity [83].
Even though this implies that the spectrum of the modified Hamiltonian is undeformed, the ensuing dynamics is by no means trivial, just as special relativity in 1+1 dimensions is not. It is the position \(\hat{x}\) that the physical interpretation of the model hinges on because, as was famously laid out in [84], all quantum mechanical measurements come down to position measurements. As we show, boosts, _i. e._ now nonlinear changes in the velocity, change the positions of particles dependent on the boost parameter and their momentum. In other words, we find an effect akin to length contraction in special relativity, just that it generally increases distances at large momentum and for fast-moving observers. To highlight this fact, we consider the Kempf-Mangano-Mann model [33] as an explicit example, thus showing that distances increase quadratically with the momentum. Furthermore, we explain how the model recovers ordinary Galilean relativity for coarse-grained measurements.
The paper is organised as follows. First, in section I we introduce the notation, deriving the Hamiltonian governing one-dimensional Galilean relativity. We turn to deformations of Galilean relativity in Sec. II. The results are exemplified by the Kempf-Mangano-Mann model in Sec. III. Finally, we summarise our results and conclude in Sec IV.
## I Galilean Relativity
Before investigating deformed models, it is instructive to see how the dynamical constraints play out in Galilean relativity. Here, we intend to describe the dynamics of a system of two interacting particles \(A\) and \(B\) which are governed by an interacting Hamiltonian
\[\hat{H}=\hat{H}_{0,AB}+\hat{V}, \tag{8}\]
with the sum of the ordinary free-particle Hamiltonians \(\hat{H}_{0,AB}\) as well as the potential \(\hat{V}.\) While the kinetic term \(\hat{H}_{0,AB}\) is fixed for arbitrary systems, the potential is left open. Representing the kind of interaction that is to be considered, it generally depends on a function of the positions.
In one dimension, the Bargmann algebra is spanned by the generators of boosts \(\hat{G}_{I}\) (here the index \(I\) can take the values \(A\) and \(B\)), translations \(\hat{k}_{I},\) and free-particle time translations \(\hat{H}_{0,I}\) such that
\[[\hat{k}_{I},\hat{H}_{0,I}]=0, [\hat{G}_{I},\hat{k}_{I}]=iM_{I}, [\hat{G}_{I},\hat{H}_{0,I}]=i\hat{k}_{I}, \tag{9}\]
with the masses of the respective particles \(M_{I}.\) While the first of these commutators indicates \(\hat{H}_{0,I}=\hat{H}_{0,I}(\hat{k}_{I}),\) the other two essentially imply that
\[\hat{H}_{0,AB}=\frac{\hat{k}_{A}^{2}}{2M_{A}}+\frac{\hat{k}_{B}^{2}}{2M_{B}}. \tag{10}\]
Furthermore, considering the fact that the position \(\hat{x}_{I}\) is the conjugate variable to the wave number, we can make use of the Jacobi identity involving \(\hat{x}_{I},\)\(\hat{k}_{I}\) and \(\hat{G}_{I}\) to identify the Galilean boost generator with the position as
\[\hat{G}_{I}=M_{I}\hat{x}_{I}. \tag{11}\]
Below, we will be interested in the time-evolved version of the boost generator which can be represented as
\[\hat{G}_{t,I}=e^{i\hat{H}_{0}t}\hat{G}_{I}e^{-i\hat{H}_{0}t}=M_{I}\hat{x}_{I}+ \hat{k}_{I}t, \tag{12}\]
Yet, there is more to the Bargmann algebra than this representation.
To impose a relativity principle on the dynamical structure spelled out above, we require that the change of the Hamiltonian under the transformations generated by \(\hat{k}_{AB}=\hat{k}_{A}+\hat{k}_{B}\) and \(\hat{G}_{t,AB}=\hat{G}_{t,A}+\hat{G}_{t,B}\) be at most a total derivative. As a remnant of the \(O(d)\)-symmetry which forms part of Galilean invariance, since \(O(1)\simeq Z_{2}\), we further demand that the Hamiltonian not change under parity transformations, _i. e._\(\hat{x}_{I}\rightarrow-\hat{x}_{I}\), \(\hat{k}_{I}\rightarrow-\hat{k}_{I}\). Note here that the translation and boost generators acting on two particles at once are just linear combinations of the ones acting on single particles. Therefore, the algebra in Eq. (9) trivially extends to multi-particle states. The linearity is lost when Galilean relativity is deformed, which will make this extension far less obvious. The parity transformation, in any case, acts simultaneously on all positions and wave numbers and, being discrete, does not have a generator which could form a part of a Lie algebra.
The kinetic term, being made up only of momenta, is trivially translation invariant. A boost, in turn, shifts it by no more than a total derivative if and only if the free-particle Hamiltonian satisfies Eq. (10). Moreover, Eq. (10) is parity invariant.
We consider potential functions depending on the coordinates \(\hat{x}_{A},\hat{x}_{B}\) through a distance function \(d(\hat{x}_{A},\hat{x}_{B})\) and present a simple argument to constrain its form, whose steps we will also employ in the deformed case. In this vein, Galilean invariance requires that the operator \(\hat{d}\) be left unchanged under both boosts and translations, _i. e._\([\hat{d},\hat{k}_{AB}]=[\hat{d},\hat{G}_{t,AB}]=0\), which implies
\[\hat{d}=\hat{d}\left(\hat{x}_{A}-\hat{x}_{B}\right). \tag{13}\]
Finally, parity invariance renders the sign of \(\hat{x}_{A}-\hat{x}_{B}\) meaningless, so that the dependence is actually on \(|\hat{x}_{A}-\hat{x}_{B}|\). Therefore, the full Hamiltonian reads
\[\hat{H}=\frac{\hat{k}_{A}^{2}}{2M_{A}}+\frac{\hat{k}_{B}^{2}}{2M_{B}}+V(|\hat{ x}_{A}-\hat{x}_{B}|). \tag{14}\]
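The invariance properties of Eq. (14) can be illustrated with a short classical-analogue sketch (our own illustration, with operators replaced by phase-space variables): a rigid translation leaves the Hamiltonian unchanged, while a boost \(\hat{k}_{I}\rightarrow\hat{k}_{I}+M_{I}u\) shifts it only by a position-independent function of the conserved total wave number.

```python
# Classical-analogue check of the symmetries of Eq. (14):
# translations leave H invariant, boosts change it by a term that is
# independent of the positions (and hence of the relative dynamics).
import sympy as sp

xA, xB, kA, kB, MA, MB, u, a = sp.symbols('x_A x_B k_A k_B M_A M_B u a', real=True)
V = sp.Function('V')

H = kA**2 / (2 * MA) + kB**2 / (2 * MB) + V(sp.Abs(xA - xB))

# Translation x_I -> x_I + a
dH_translation = sp.simplify(H.subs([(xA, xA + a), (xB, xB + a)]) - H)
print(dH_translation)                      # 0

# Galilean boost k_I -> k_I + M_I u (positions shift as x_I -> x_I + u t,
# which drops out of the difference x_A - x_B)
dH_boost = sp.expand(H.subs([(kA, kA + MA * u), (kB, kB + MB * u)]) - H)
print(dH_boost)    # -> u*k_A + u*k_B + M_A*u**2/2 + M_B*u**2/2, independent of x_A, x_B
```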
In Galilean relativity, this derivation is, for the most part, straightforward. In the subsequent section, we will see that its deformed variant harbours some slight complications.
## II Axiomatic approach to deformed Galilean relativity
We aim to establish a coherent theory in one spatial dimension that encompasses both single- and multi-particle dynamics while incorporating a fundamental minimal length. As mentioned in the introduction, this minimal length implies a bound to the allowed eigenvalues of the wave number \(\hat{k}\), conjugate to the position \(\hat{x}.\) Therefore, the wave number necessarily satisfies a deformed composition law of the form
\[\hat{k}_{A}\oplus\hat{k}_{B}=F(\hat{k}_{A},\hat{k}_{B}). \tag{15}\]
We make no further assumptions on the function \(F\) other than that it recover the usual linear composition law in the limit of vanishing minimal length, _i. e._\(\lim_{\ell\to 0}F=\hat{k}_{A}+\hat{k}_{B}.\)
We start our argument by stating that the time evolution of the particles in question is to be generated by a Hamiltonian \(\hat{H}\) which is given by the sum of a kinetic term \(\hat{H}_{0,AB}\) and a potential \(\hat{V}.\) For the resulting dynamics to be consistent, we impose the following requirements.
* The model allows for a notion of (possibly) deformed Galilean relativity. The laws of physics should be the same for every inertial observer connected by symmetry transformations, _i. e._ translations, boosts, and parity transformations, which reduce to their standard expression in the limit of vanishing minimal length. In other words, we introduce the translation and boost generators \(\hat{k}_{I}\) and \(\hat{G}_{I}\), respectively, whose action changes the Hamiltonian at most by a total derivative. Also, we demand that the generator of time evolution has to be invariant under the standard parity transformation which acts according to \(\hat{x}_{I}\rightarrow-\hat{x}_{I}\), \(\hat{k}_{I}\rightarrow-\hat{k}_{I}\), \(\hat{G}_{I}\rightarrow-\hat{G}_{I}\).
* The model satisfies Newton's first law, _i. e._ in the absence of external fields and for vanishing potential, the time evolution of the generator of translations \(\hat{k}_{I}\) is trivial. This is the case if \([\hat{k}_{I},\hat{H}_{0,I}]=0\), or \(\hat{H}_{0,I}=\hat{H}_{0,I}(\hat{k}_{I}).\) In other words, the wave number is to be a conserved charge of free-particle motion.
* The model allows for free-particles at all energy scales, _i. e._ in multi-particle systems, \(\hat{H}_{0,AB}\) equals a simple sum of the kinetic terms of the involved single particles \(\hat{H}_{0,I}\). Corrections to the addition law of the kinetic term would imply that highly energetic particles are necessarily interacting nonlocally, thereby, for instance, making it impossible to consider closed systems.
In the following, we explore the implications of these axioms on the dynamics of interacting two-particle systems and the composition law provided in Eq. (15). First, we introduce the deformed Bargmann algebra on the level of single particles to subsequently consider interactions.
### Single particle
In Galilean relativity, boosts translate in wave-number space. However, if this very space is bounded, it is not possible to translate indefinitely. In other words, the existence of a bound in wave-number space is incompatible with the action of the standard boost generator on \(\hat{k}_{I}\). Consequently, we are forced to consider a deformation of the commutator between the boost generator \(\hat{G}_{I}\) and the wave number \(\hat{k}_{I}\), which, as deformations in minimal-length models scale with the wave number, assumes the form
\[[\hat{G}_{I},\hat{k}_{I}]=iM_{I}g(|\hat{k}_{I}|) \tag{16}\]
where \(g\) is a dimensionless function, tending to 1 in the limit of vanishing minimal length and saturating towards the bound of wave-number space in order for it to not be exceeded by highly-boosted observers. It can only depend on the wave number in terms of its absolute value by virtue of parity invariance.
Next, we derive the commutator between the boost operator and the kinetic term of the Hamiltonian. Taking into account Newton's first law, \([\hat{k}_{I},\hat{H}_{0,I}]=0\), we conclude that the Hamiltonian is a function of the wave number, _i. e._\(\hat{H}_{0,I}=\hat{H}_{0,I}(\hat{k}_{I})\). As a result, we obtain
\[[\hat{G}_{I},\hat{H}_{0,I}(\hat{k}_{I})]=iM_{I}\hat{H}^{\prime}_{0,I}(\hat{k}_ {I})g(|\hat{k}_{I}|). \tag{17}\]
To complete the single-particle description, we again employ the Jacobi identity involving \(\hat{x}_{I}\), \(\hat{k}_{I}\) and \(\hat{G}_{I}\) to represent the boost generator with the position operator as
\[\hat{G}_{I}=\frac{M_{I}\{\hat{x}_{I},g(|\hat{k}_{I}|)\}}{2}\equiv\frac{\{\hat{ \hat{G}}_{I},g(|\hat{k}_{I}|)\}}{2}\,,\qquad\hat{x}_{I}=\frac{\{\hat{G}_{I},g^{ -1}(|\hat{k}_{I}|)\}}{2M_{I}} \tag{18}\]
The anticommutator \(\{,\}\) is required to preserve Hermiticity with respect to the trivial measure in wave-number space. Furthermore, we introduced the operator \(\hat{\hat{G}}_{I}\), the standard Galilean boost, obtained from \(\hat{G}_{I}\) as \(\ell\to 0\), _i. e._\(\hat{\hat{G}}_{I}=M_{I}\hat{x}_{I}\). The generator of time-dependent boosts, in turn, can be represented as
\[\hat{G}_{t,I}=M_{I}\left[\frac{\{\hat{x}_{I},g\}}{2}+\hat{H}^{\prime}_{0,I}gt \right]. \tag{19}\]
In contrast to standard quantum mechanics, in our deformed Galilean framework the relation between the boost generator \(\hat{G}_{I}\) and the coordinate \(\hat{x}_{I}\) is modified nonlinearly. This has remarkable consequences for the construction of deformed relativistic dynamics for multi-particle systems.
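The saturation required of \(g\) can be visualised with a simple numerical sketch (our own illustration; the specific choice \(g(k)=\cos^{2}(\ell k)\), for which boosts shift \(p=\tan(\ell k)/\ell\) linearly, is assumed only for the example and is not fixed by the discussion above): integrating the flow \(\mathrm{d}k/\mathrm{d}u=Mg(k)\) shows that the wave number approaches but never exceeds the bound \(\pi/2\ell\), however large the boost parameter \(u\).

```python
# Illustration of the saturation discussed around Eq. (16): if boosts act by
# dk/du = M g(|k|) with a g that vanishes at the cutoff, the wave number can
# never leave the interval (-pi/2l, pi/2l). Assumed example: g(k) = cos^2(l k).
import numpy as np

l, M = 1.0, 1.0
k_max = np.pi / (2 * l)

def g(k):
    return np.cos(l * k) ** 2

# integrate dk/du = M g(k) with a simple RK4 stepper up to u = 20
k, du = 0.0, 1e-3
for step in range(20000):
    k1 = M * g(k)
    k2 = M * g(k + du * k1 / 2)
    k3 = M * g(k + du * k2 / 2)
    k4 = M * g(k + du * k3)
    k += du * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(k, "<", k_max)          # k approaches but never exceeds the bound
print(np.tan(l * k) / l)      # while p = tan(l k)/l grows ~ M * u = 20
```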
### Interactions between particles
In Galilean relativity, the extension of kinematics from one to many particles is immediate due to the linearity of the algebra. Yet, this ceases to be the case for nonlinear generalisations thereof as in Eq. (16). In the present subsection, we study the extension of the deformed algebra to multi-particle states, using a system of two particles \(A\) and \(B\) as a proxy.
For this choice to be compatible with the composition of two boosts, the commutator in (16) has to be reproduced in the multi-particle case, namely
\[[\hat{G}_{A}\oplus\hat{G}_{B},\hat{k}_{A}\oplus\hat{k}_{B}]=i(M_{A}+M_{B})g(| \hat{k}_{A}\oplus\hat{k}_{B}|) \tag{20}\]
In general, the composition of boosts is a function of boosts, wave numbers and masses. However, contributions to the addition law which are nonlinear in \(\hat{G}_{I}\), by virtue of dimensional analysis, have to be balanced by inverse powers of \(\ell M_{I}\) (\(M_{I}\) here can be any linear combination of the two masses). As these corrections have to disappear in the limit \(\ell\to 0\), they could only contain inverse powers of the operators \(\hat{G}_{I}\), rendering them nonanalytic. Furthermore, these
inverse powers would render it impossible to obtain a boost-independent right-hand side in Eq. (20). We conclude that the composition of boosts has to be linear in the boosts. Therefore, the most general ansatz reads
\[\hat{G}_{A}\oplus\hat{G}_{B}=a_{1}(\hat{k}_{A},\hat{k}_{B})\hat{G}_{A}+a_{2}( \hat{k}_{A},\hat{k}_{B})\hat{G}_{B} \tag{21}\]
for the two dimensionless functions \(a_{1},a_{2}\) which reduce to \(1\) in the limit \(\ell\to 0\). Requiring that (20) holds, the boost composition is uniquely fixed, yielding
\[\hat{G}_{A}\oplus\hat{G}_{B}=\frac{1}{2}\left\{\frac{1}{2}\left(M_{A}\left\{ \hat{x}_{A},(\hat{\partial}^{A}F)^{-1}\right\}+M_{B}\left\{\hat{x}_{B},(\hat{ \partial}^{B}F)^{-1}\right\}\right),g(|\hat{k}_{A}\oplus\hat{k}_{B}|)\right\}, \tag{22}\]
with the derivatives in wave-number space \(\hat{\partial}_{I}=\partial/\partial\hat{k}_{I}\). Equipped with the composition law for the relevant symmetry generators of our deformed Galilean framework, we are ready to lay down the foundations to construct relativistic dynamics in the multi-particle case. Following our axiomatic approach, specifically Newton's first law, relativistic invariance demands that the commutator of the Hamiltonian with the combined boost generator \(\hat{G}_{A}\oplus\hat{G}_{B}\) produce at most a total derivative.
Inspired by standard Galilean relativity, we propose that a potential \(\hat{V}\), which commutes with the total boost and the total wave number, be a function of a generalised notion of distance \(\hat{d}\), which, in principle, is a function of all phase space variables,2 namely
Footnote 2: In contrast to the Galilean case, we cannot assume the potential to be a function of the position only because, on the basis of this assumption, it could not be rendered invariant under generalised Galilean transformations, while at the same time being compatible with the existence of a minimal length.
\[\hat{V}=V[\hat{d}(\hat{x}_{A},\hat{k}_{A},\hat{x}_{B},\hat{k}_{B})]. \tag{23}\]
We require that \(\hat{d}\) is parity invariant and that in the limit of vanishing minimal length it becomes
\[\lim_{\ell\to 0}\hat{d}=|\hat{x}_{A}-\hat{x}_{B}|. \tag{24}\]
According to the axioms laid out above, the operator \(\hat{d}\) has to be invariant under translations, _i. e._
\[U^{\dagger}_{\hat{k}_{A}\oplus\hat{k}_{B}}(a)\hat{d}(\hat{x}_{A},\hat{k}_{A}, \hat{x}_{B},\hat{k}_{B})U_{\hat{k}_{A}\oplus\hat{k}_{B}}(a)\stackrel{{!}}{{=}}d(\hat{x}_{A},\hat{k}_{A},\hat{x}_{B},\hat{k}_{B}); \tag{25}\]
and under time-dependent boosts, _i. e._
\[U^{t,\dagger}_{\hat{G}_{A}\oplus\hat{G}_{B}}(u)\hat{d}(\hat{x}_{A},\hat{k}_{A },\hat{x}_{B},\hat{k}_{B})U^{t}_{\hat{G}_{A}\oplus\hat{G}_{B}}(u)\stackrel{{!}}{{=}}d(\hat{x}_{A},\hat{k}_{A},\hat{x}_{B},\hat{k}_{B}), \tag{26}\]
with the time-evolved finite boost transformation
\[U^{t}_{\hat{G}_{A}\oplus\hat{G}_{B}}(u)=U_{\hat{H}_{0,AB}}(t)U_{\hat{G}_{A} \oplus\hat{G}_{B}}(u)U^{\dagger}_{\hat{H}_{0,AB}}(t). \tag{27}\]
For infinitesimal transformations, these conditions imply
\[\left[\hat{G}_{A}\oplus\hat{G}_{B},\hat{d}\right]=0, \left[\hat{k}_{A}\oplus\hat{k}_{B},\hat{d}\right]=0 \left[\left[\hat{G}_{A}\oplus\hat{G}_{B},\hat{H}_{0,AB}\right],\hat{d}\right] =0, \tag{28}\]
where the last equality is obtained by applying the Jacobi-identity involving the operators \(\hat{G}_{A}\oplus\hat{G}_{B}\), \(\hat{H}_{0,AB}\) and \(\hat{d}\).
What could the form of the function \(\hat{d}\) be? As it generalises the distance function, we require it to have two properties: it should be homogeneous and linear in the coordinates \(\hat{x}_{A},\hat{x}_{B}\), given that we are working in one spatial dimension. Thus, up to a constant, the most general translation-invariant ansatz reads
\[\hat{d}=\left|\frac{1}{2}\left\{\frac{1}{2}\left(\left\{\hat{x}_{A},(\hat{\partial}^{A}F)^{-1}\right\}\right),h_{A}(\hat{k}_{A},\hat{k}_{B})\right\}-\frac{1}{2}\left\{\frac{1}{2}\left(\left\{\hat{x}_{B},(\hat{\partial}^{B}F)^{-1}\right\}\right),h_{B}(\hat{k}_{A},\hat{k}_{B})\right\}\right|, \tag{29}\]
for two dimensionless functions \(h_{A},h_{B}\) which reduce to \(1\) in the undeformed case. The particular parameterisation employed is useful in the calculations that follow. Indeed, imposing that \(\hat{d}\) is invariant under the deformed total translation \(\hat{k}_{A}\oplus\hat{k}_{B}\), we obtain \(h_{A}(\hat{k}_{A},\hat{k}_{B})=h_{B}(\hat{k}_{A},\hat{k}_{B})\coloneqq h(\hat{k }_{A},\hat{k}_{B})\). Here, we introduce the shorthand notation
\[\hat{\hat{d}}= \frac{1}{2}\left(\left\{\hat{x}_{A},(\hat{\partial}^{A}F)^{-1}\right\}-\left\{\hat{x}_{B},(\hat{\partial}^{B}F)^{-1}\right\}\right), \tag{30}\] \[\hat{G}_{A}\oplus\hat{G}_{B}= \frac{1}{2}\left(M_{A}\left\{\hat{x}_{A},(\hat{\partial}^{A}F)^{-1}\right\}+M_{B}\left\{\hat{x}_{B},(\hat{\partial}^{B}F)^{-1}\right\}\right). \tag{31}\]
Then, the ansatz for \(\hat{d}\) simplifies to
\[\hat{d}=\left|\frac{1}{2}\left\{\hat{\hat{d}},h(\hat{k}_{A},\hat{k}_{B})\right\}\right|, \tag{32}\]
By virtue of Eq. (28), the operator \(\hat{d}\) has to satisfy \(\left[\hat{G}_{A}\oplus\hat{G}_{B},\hat{d}\right]=0\), which becomes equivalent to
\[[\hat{G}_{A}\oplus\hat{G}_{B},\hat{d}]= \frac{1}{2}\left\{\frac{1}{2}\{\hat{d},\left[\hat{G}_{A}\oplus \hat{G}_{B},h\right]\}+\frac{1}{2}\left\{h,\left[\hat{G}_{A}\oplus\hat{G}_{B}, \hat{d}\right]\right\}\right\}, \tag{33}\] \[= ig\sum_{I=A,B}M_{I}\left(\frac{1}{2}\left\{\hat{d},\frac{\hat{\partial}^{I}h}{\hat{\partial}^{I}F}\right\}\right)-\frac{ig}{2}(M_{A}+M_{B})\frac{ \hat{\partial}^{A}\hat{\partial}^{B}F}{\hat{\partial}^{B}F\hat{\partial}^{A}F }\left\{\hat{d},h\right\},\] (34) \[\stackrel{{!}}{{=}} 0. \tag{35}\]
where we have used the fact that \(\hat{d}\) commutes with any function of the total translation generator \(\hat{k}_{A}\oplus\hat{k}_{B}\). This condition can be simplified to read
\[\sum_{I=A,B}M_{I}\frac{\hat{\partial}^{I}h}{\hat{\partial}^{I}F}-(M_{A}+M_{B}) \frac{\hat{\partial}^{A}\hat{\partial}^{B}F}{\hat{\partial}^{B}F\hat{\partial} ^{A}F}h=0. \tag{36}\]
As there is no independent mass scale in the theory, the only dependence of the function \(h\) on the particle masses can be through the ratio \(M_{A}/M_{B}.\) Then, the first term of Eq. (36) can only have the same mass-dependent prefactor as the second one if the function \(h\) depends on the wave numbers solely through their composition \(\hat{k}_{A}\oplus\hat{k}_{B}\). Furthermore, in order for parity invariance to continue to hold, the distance function has to depend on the absolute value of parity-variable quantities. Therefore \(h\) has to be an even function of the wave-number composition, and the operator \(\hat{d}\) finally becomes
\[\hat{d}=\left|h(\hat{k}_{A}\oplus\hat{k}_{B})\,\hat{\hat{d}}\right|, \tag{37}\]
where we removed the symmetric ordering because every function of the translation generator commutes with the generalised coordinate difference. Furthermore, by Eq. (36) the function \(h\) satisfies the differential equation
\[h^{\prime}(\hat{k}_{A}\oplus\hat{k}_{B})=\frac{\hat{\partial}^{A}\hat{\partial }^{B}F}{\hat{\partial}^{B}F\hat{\partial}^{A}F}h(\hat{k}_{A}\oplus\hat{k}_{B}). \tag{38}\]
The implications of this condition are twofold. On the one hand, it constrains the space of allowed wave-number compositions \(F\). On the other hand, given such a wave-number composition \(F,\) it determines the function \(h\).
First, apart from the factor \(\hat{\partial}^{A}\hat{\partial}^{B}F/\hat{\partial}^{A}F\hat{\partial}^{B}F,\) all relevant quantities in Eq. (38) depend on the wave-number composition. Thus, for the two terms appearing in Eq. (38) to cancel out, the underlying function \(F\) has to satisfy the constraint
\[\frac{\hat{\partial}^{A}\hat{\partial}^{B}F}{\hat{\partial}^{A}F\hat{\partial} ^{B}F}=\tilde{F}(k_{A}\oplus k_{B}) \tag{39}\]
for some function \(\tilde{F}.\) As we demonstrate in Appendix A, this condition forces the composition of wave numbers to be both commutative and associative. As a result, the wave numbers can be mapped to a set of momenta \(\hat{p}_{I}=p(\hat{k}_{I})\) whose composition is linear, for some function \(p.\) In other words, there are operators \(\hat{p}_{I}\) such that
\[p(\hat{k}_{A}\oplus\hat{k}_{B})=p(\hat{k}_{A})+p(\hat{k}_{B})=\hat{p}_{A}+\hat{ p}_{B},\qquad\Longleftrightarrow\qquad F=p^{-1}\circ(p(\hat{k}_{A})+p(\hat{k}_{B})). \tag{40}\]
Here \(p^{-1}\) stands for the inverse function which we denote \(p^{-1}=k(p)\). Using this definition of momentum, we then obtain the deformed Heisenberg algebra
\[[\hat{x}_{I},\hat{p}_{I}]=i\frac{\mathrm{d}\hat{p}_{I}}{\mathrm{d}\hat{k}_{I}} \equiv if(\hat{p}_{I}), \tag{41}\]
In short, enforcing the relativity principle suggests the use of the momentum \(\hat{p}\) which provides us with a GUP of the form given in Eq. (1).
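For concreteness, Eqs. (40) and (41) can be checked numerically for one admissible choice of the map \(p(\hat{k})\). The sketch below (our own illustration) assumes \(p(k)=\tan(\ell k)/\ell\), which is compatible with the bound (6); the induced composition is then commutative and associative, never exceeds \(\pi/2\ell\), and yields \(f=\mathrm{d}p/\mathrm{d}k=1+\ell^{2}p^{2}\), i.e. precisely the model of Eq. (3).

```python
# Numerical illustration of Eqs. (40)-(41): with the (assumed) choice
# p(k) = tan(l k)/l the composition F = p^{-1}(p(k_A) + p(k_B)) is commutative,
# associative, stays below pi/(2 l), and reproduces f = dp/dk = 1 + l^2 p^2.
import numpy as np

l = 1.0
p = lambda k: np.tan(l * k) / l            # assumed map k -> p
p_inv = lambda q: np.arctan(l * q) / l     # its inverse

def compose(kA, kB):                       # Eq. (40): F(k_A, k_B)
    return p_inv(p(kA) + p(kB))

rng = np.random.default_rng(0)
ks = rng.uniform(-np.pi / (2 * l) + 1e-3, np.pi / (2 * l) - 1e-3, size=(1000, 3))
kA, kB, kC = ks.T

print(np.max(np.abs(compose(kA, kB))) < np.pi / (2 * l))                        # bounded
print(np.allclose(compose(kA, kB), compose(kB, kA)))                            # commutative
print(np.allclose(compose(compose(kA, kB), kC), compose(kA, compose(kB, kC))))  # associative

# f = dp/dk evaluated by finite differences vs. 1 + l^2 p^2
k = np.linspace(-1.4, 1.4, 2001)
f_numeric = np.gradient(p(k), k, edge_order=2)
print(np.allclose(f_numeric, 1 + l**2 * p(k)**2, rtol=1e-3))                    # Eq. (41) with Eq. (3)
```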
Second, in terms of the newly introduced momentum \(\hat{p}\), the differential equation (38) can be solved explicitly to yield
\[h(k)=\frac{\mathrm{d}k}{\mathrm{d}p}(k)=\frac{1}{f\circ p(k)}. \tag{42}\]
As, by parity, the function \(h\) is even in its argument, so too must be the function \(f,\)_i. e._\(f(\hat{p}_{I})=f(|\hat{p}_{I}|).\) For reasons of notational simplicity we will henceforth omit this absolute-value sign. By virtue of Eq. (42), the function \(\hat{d}\) assumes the form
\[\hat{d}=\left|\frac{\hat{\hat{d}}}{f\circ p(\hat{k}_{A}\oplus\hat{k}_{B})} \right|. \tag{43}\]
We now turn to invariance under time-dependent boosts, _i. e._ the last equality in Eq. (28), which constrains the kinetic part of the Hamiltonian. For convenience, we here recall the condition to be satisfied, _i. e._
\[\left[\hat{d},\left[\hat{G}_{A}\oplus\hat{G}_{B},\hat{H}_{0,AB}\right]\right]=0. \tag{44}\]
Given that the boost composition is linear in the coordinates, the commutator between boost and kinetic part of the Hamiltonian will be a function of the wave-numbers \(\hat{k}_{A},\hat{k}_{B}\). Yet, the only possible combination of wave numbers that commutes with \(\hat{d}\) is the wave-number composition given by the function \(F(\hat{k}_{A},\hat{k}_{B}),\) so that we obtain
\[\frac{\left[\hat{G}_{A}\oplus\hat{G}_{B},\hat{H}_{0}\right]}{g(|\hat{k}_{A}\oplus\hat{k}_{B}|)}=\left(M_{A}\frac{\hat{\partial}^{A}\hat{H}_{0}}{\hat{\partial}^{A}F}+M_{B}\frac{\hat{\partial}^{B}\hat{H}_{0}}{\hat{\partial}^{B}F}\right)=\mathcal{F}(F), \tag{45}\]
for some function \(\mathcal{F}.\) To solve this equation, we again shift to the momenta \(\hat{p}_{I},\) yielding
\[\frac{M_{A}\frac{\partial\hat{H}_{0}}{\partial\hat{p}_{A}}+M_{B}\frac{ \partial\hat{H}_{0}}{\partial\hat{p}_{B}}}{(f^{-1})^{\prime}\circ(\hat{p}_{A} +\hat{p}_{B})}=\mathcal{F}\circ f^{-1}\circ(\hat{p}_{A}+\hat{p}_{B}). \tag{46}\]
In other words, the kinetic term of the Hamiltonian satisfies
\[M_{A}\frac{\partial\hat{H}_{0}}{\partial\hat{p}_{A}}+M_{B}\frac{\partial\hat {H}_{0}}{\partial\hat{p}_{B}}=\tilde{\mathcal{F}}(\hat{p}_{A}+\hat{p}_{B}) \tag{47}\]
for some function \(\tilde{\mathcal{F}}.\) From our postulates we recall that the kinetic term for a system of two particles consists of the sum of two independent kinetic terms, _i. e._
\[\hat{H}_{0,AB}=\hat{H}_{0,A}+\hat{H}_{0,B}. \tag{48}\]
Hence, the only possible solution to Eq. (47) is
\[\hat{H}_{0}=\frac{\hat{p}_{A}^{2}}{2M_{A}}+\frac{\hat{p}_{B}^{2}}{2M_{B}}. \tag{49}\]
Finally, we can write down a two-particle Hamiltonian, which is invariant under the deformed Galilean transformations. In all generality, this Hamiltonian reads
\[\hat{H}=\frac{p^{2}(\hat{k}_{A})}{2M_{A}}+\frac{p^{2}(\hat{k}_{B})}{2M_{B}}+V \left(\left|\frac{\hat{\hat{d}}}{f\circ p(\hat{k}_{A}\oplus\hat{k}_{B})}\right|\right) \tag{50}\]
where \(V\) can be any well-behaved function of the distance \(\hat{d}\). As we have found the class of Hamiltonians which are consistent with both the existence of a minimal length and a relativity principle, we can now compare this result to the ansatz towards minimal-length models which is customary in the field.
### Shortcomings of the conventional approach
In this subsection, we specialise Eq. (50) to a single-particle scenario subject to a potential. This potential approximates an interaction with a classical probe, _i. e._ an object of a mass so large compared with that of the dynamical particle that backreaction effects can be neglected. In this kind of situation the external source provides a preferred frame, in which it is at rest at the origin. Specifically, let \(M_{B}\rightarrow\infty\), while \(\hat{k}_{B},\hat{x}_{B}\to 0\). In this limit, since particle \(B\) is considered to be fixed, Eq. (50) reduces to
\[\hat{H}=\frac{\hat{p}(\hat{k})^{2}}{2M}+V\left(\left|\frac{\left\{\frac{1}{f(p (\hat{k}))},\hat{x}\right\}}{2}\right|\right). \tag{51}\]
where we removed the subscript \(A\) because there is only one particle left. An example for this kind of procedure is the treatment of the hydrogen atom, where the dynamics of the proton are neglected. The resulting dynamics is clearly different from (7), which is the conventional Hamiltonian employed in the context of the GUP [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76]. Indeed, the (generally sparse) applications of the GUP to multi-particle dynamics in the literature [85; 86] adhere to interaction potentials dependent on linear coordinate differences. Thus, the underlying Hamiltonian comes down to the apparently straight-forward generalisation of Eq. (7), _i. e._
\[\hat{H}=\frac{p^{2}(\hat{k}_{A})}{2M_{A}}+\frac{p^{2}(\hat{k}_{B})}{2M_{B}}+V \left(|\hat{x}_{A}-\hat{x}_{B}|\right). \tag{52}\]
Comparison with Eq. (50) demonstrates that the conventional GUP-deformed Hamiltonian does not comply with any relativity principle deriving from the algebra given in Eq. (16).
Note, though, that here we only consider potentials that originate in particle interactions. External potentials, _i. e._ solutions to originally (deformed) Galilean invariant field equations, can generally break symmetries. For instance, every curved geometry derived from Einstein's field equations breaks global Lorentz invariance. In the context of elementary quantum mechanical systems, we find this behaviour, for example, when considering the Landau levels of a charged particle subject to a constant magnetic field, thus breaking the \(O(d)\) sector, _i. e._ in this case parity symmetry. At present, a consistent description of field dynamics in the presence of a minimal length is lacking. Therefore, the single-particle potentials induced by external fields cannot be clearly determined at this stage.
In [87], authored by one of the present authors, it has been demonstrated that one-dimensional minimal-length theories are incompatible with Galilean invariance. Here we have generalised this statement: one-dimensional minimal-length theories of customary type (where the potential \(V(\hat{x})\) is employed to approximately describe particle interactions) do not allow for any kind of relativity principle, be it ordinary or deformed. We stress that the entire argument behind this reasoning applies on the level of operators, and thus does not resort to any classical notions which could possibly become problematic in the context of the GUP (for more information see [88], for a different view see [50]).
In a nutshell, deformed models that adhere to a relativity principle introduce a departure from the conventional approach. How, then, do they relate to ordinary quantum theory? This question forms the basis of the subsequent subsection.
### Map to undeformed quantum mechanics
The momenta \(\hat{p}_{I}\) are defined in such a way that their composition for multi-particle systems is linear. Furthermore, the kinetic term expressed in terms of those momenta appears undeformed. This raises the question: what happens to the model when expressed in terms of the conjugate variables to the momenta \(\hat{p}_{I}\)? In that vein, we introduce the operators \(\hat{X}_{I}\) such that
\[[\hat{X}_{I},\hat{p}_{J}]=i\delta_{IJ}. \tag{53}\]
As both pairs \((\hat{x}_{I},\hat{k}_{I})\) and \((\hat{X}_{I},\hat{p}_{I})\) are canonical, going from one to the other amounts to a canonical transformation.
Moreover, Eq. (53) has the solution \(\hat{X}_{I}=\frac{1}{2}\left\{f^{-1}(\hat{p}_{I}),\hat{x}_{I}\right\}.\) In order to understand the implications of this fact, let us, for the moment, consider classical differential geometry with the slight twist that we use wave-number space as the base manifold of the cotangent bundle. Then, positions \(x_{I}\) are one-forms \(x_{I}\mathrm{d}k_{I}\) (Einstein's sum convention is not applied here). Therefore, a diffeomorphism in momentum space has to be of the form
\[k_{I}\to p_{I}=p(k_{I})\qquad x_{I}\to X_{I}=\frac{\mathrm{d}k}{ \mathrm{d}p}(k_{I})x_{I}. \tag{54}\]
The transformation
\[\hat{k}_{I}\rightarrow\hat{p}_{I}=p(\hat{k}_{I})\qquad\hat{x}_{I}\rightarrow\hat{X}_{I}=\frac{1}{2}\left\{f^{-1}\circ p(|\hat{k}_{I}|),\hat{x}_{I}\right\}=\frac{1}{2}\left\{\frac{\mathrm{d}k}{\mathrm{d}p}(\hat{k}_{I}),\hat{x}_{I}\right\} \tag{55}\]
is just the Weyl-symmetric quantisation of Eq. (54). In other words, the descriptions in terms of the pairs \((\hat{x}_{I},\hat{k}_{I})\) and \((\hat{X}_{I},\hat{p}_{I})\) are related by a momentum-space diffeomorphism.
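As a direct consistency check (added here for convenience; it only uses the deformed single-particle commutator \([\hat{x}_{I},\hat{p}_{I}]=if(\hat{p}_{I})\), which for the Kempf-Mangano-Mann model discussed below is Eq. (75)), the operator \(\hat{X}_{I}=\frac{1}{2}\{1/f(\hat{p}_{I}),\hat{x}_{I}\}\) is indeed canonically conjugate to \(\hat{p}_{I}\):
\[
[\hat{X}_{I},\hat{p}_{I}]=\frac{1}{2}\left[\left\{\frac{1}{f(\hat{p}_{I})},\hat{x}_{I}\right\},\hat{p}_{I}\right]=\frac{1}{2}\left\{\frac{1}{f(\hat{p}_{I})},[\hat{x}_{I},\hat{p}_{I}]\right\}=\frac{i}{2}\left\{\frac{1}{f(\hat{p}_{I})},f(\hat{p}_{I})\right\}=i,
\]
where the second equality uses \([1/f(\hat{p}_{I}),\hat{p}_{I}]=0.\)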
That the sets of operators \((\hat{x}_{I},\hat{k}_{I})\) and \((\hat{X}_{I},\hat{p}_{I})\) are related by a canonical transformation is well-known in the field of GUPs. Both correspond to different representations of the underlying deformed algebra [76]. It remains to be shown how this transformation changes the model Hamiltonian provided in Eq. (50).
Re-expressing the distance function \(\hat{d}\) in terms of the conjugate pair \((\hat{p}_{I},\hat{X}_{I})\), we find
\[\hat{d}=\frac{1}{2}\left|\left\{\frac{1}{f(\hat{p}_{A})},\hat{x}_{A}\right\}- \left\{\frac{1}{f(\hat{p}_{B})},\hat{x}_{B}\right\}\right|\equiv\left|\hat{X} _{A}-\hat{X}_{B}\right|. \tag{56}\]
Consequently, the Hamiltonian can be written as
\[\hat{H}=\frac{\hat{p}_{A}^{2}}{2M_{A}}+\frac{\hat{p}_{B}^{2}}{2M_{B}}+V\left( \left|\hat{X}_{A}-\hat{X}_{B}\right|\right), \tag{57}\]
which by Eq. (53) is equivalent to the Hamiltonian of ordinary Galilean quantum theory with the twist that the operators \(\hat{X}_{I}\) do not stand for positions. Thus, in one dimension, the minimal length forces us to reinterpret the dynamical variables, while the underlying algebra stays the same, _i. e._
\[[\hat{p}_{I},\hat{H}_{0,I}]=0,\qquad[\hat{G}_{I},\hat{p}_{I}]=iM_{I}g(\hat{p} )f(\hat{p}),\qquad[\hat{G}_{I},\hat{H}_{0,I}]=i\hat{p}_{I}g(\hat{p})f(\hat{p}). \tag{58}\]
In other words, the only minimal-length deformed dynamics in one spatial dimension that is compatible with a relativity principle, parity invariance and a trivial composition of kinetic terms has to be related to the ordinary formalism by a diffeomorphism in momentum space, _i. e._ a canonical transformation.
This is not to say that the theory is trivial. As we demonstrate in Sec. III with the example of coupled harmonic oscillators, while the spectrum of the Hamiltonian is equal to that of ordinary quantum mechanics, the modification to the interpretation of the theory is dramatic. As we will make use of boost transformations to relate position measurements of different inertial observers in our example, we first study the properties of deformed boosts.
### Deformed boosts
In the previous subsections, we have formulated a consistent dynamical framework for models involving a minimal length. Each model depends on the choice of two functions, \(F(\hat{k}_{A},\hat{k}_{B})\) and \(g(|\hat{k}|)\). Before moving on to an example involving actual dynamics for a system of two particles, we briefly pause to study the consequences of deformed symmetries on the kinematics of single and two-particle systems. From now on, we will focus on a specific subclass of models, for which, upon choosing the deformed sum of wave numbers (namely the function \(F\)), we constructively derive the function \(g\), guided by the fact that the commutator between boost and momenta should saturate when the eigenvalues of the wave-number \(\hat{k}\) approach the cut-off.
The main idea consists in regarding the sum of a finite wave number \(\hat{k}_{A}\) and an infinitesimal wave number \(\hat{k}_{B}\) as an infinitesimal boost transformation with parameter \(\hat{k}_{B}/M_{A}\) acting on the wave number \(\hat{k}_{A}\), _i. e._
\[F(\hat{k}_{A},\hat{k}_{B})\simeq\hat{k}_{A}+\left(\frac{\hat{k}_{B}}{M_{A}} \right)M_{A}\,\hat{\partial}^{B}F(\hat{k}_{A},0)=\hat{k}_{A}-i\left(\frac{ \hat{k}_{B}}{M_{A}}\right)\left[\hat{G}_{A},\hat{k}_{A}\right] \tag{59}\]
From the above, we extract the commutator between boost and wave number
\[\left[\hat{G}_{A},\hat{k}_{A}\right]= iM_{A}\hat{\partial}^{B}F(\hat{k}_{A},0) \tag{60}\] \[= \frac{iM_{A}}{f(\hat{p}_{A})}. \tag{61}\]
Since the function \(F\) asymptotes to the maximal wave number \(\pi/2\ell\), its first derivatives go to zero at the boundary. This guarantees that the right-hand side of Eq. (60) vanishes in that limit, which further constrains \(\lim_{\hat{p}\rightarrow\infty}f(\hat{p})\rightarrow\infty.\) As all prevailing minimal-length models imply a monotonically increasing function \(f,\) this demand is rather weak.
Thus, following the outlined procedure to obtain the generator of boosts \(\hat{G}_{I}\), in general, we find
\[\hat{G}_{I}=M_{I}\hat{X}_{I}\iff g(|\hat{k}_{I}|)=\frac{1}{f\circ p(|\hat{k}_{I}|)}. \tag{62}\]
With this specific choice for \(g\), the deformed boost sum in (22) is entirely specified by \(F\), yielding
\[\hat{G}_{A}\oplus\hat{G}_{B}=\frac{\left\{\frac{1}{f(\hat{p}_{A})},M_{A}\hat{ x}_{A}\right\}+\left\{\frac{1}{f(\hat{p}_{B})},M_{B}\hat{x}_{B}\right\}}{2}=M_{A} \hat{X}_{A}+M_{B}\hat{X}_{B}. \tag{63}\]
As boosts add up linearly and by Eq. (62), the operator \(\hat{X}_{I}(\hat{x}_{I},\hat{p}_{I})\) is invariant under boosts, _i. e._
\[\hat{X}_{I}(\hat{x}^{\prime}_{I},\hat{p}^{\prime}_{I})=\hat{X}_{I}(\hat{x}_{I},\hat{p}_{I}), \tag{64}\]
where the primes indicate the boosted quantities. In other words, at equal time a Galilean boost changes the position of any of the two particles as
\[\hat{x}^{\prime}_{I}=U^{\dagger}_{G_{A}\oplus G_{B}}(v)\hat{x}_{I}U_{G_{A} \oplus G_{B}}(v)=\frac{1}{2}\left\{\frac{f(\hat{p}_{I}+M_{I}v)}{f(\hat{p}_{I}) },\hat{x}_{I}\right\}. \tag{65}\]
Recall that by virtue of parity-invariance the function \(f\) can only depend on the momentum in terms of its absolute value. Thus, in the boosted frame we can write it as
\[f(|\hat{p}_{I}+M_{I}v|)=f\left(\sqrt{2M_{I}\hat{H}^{\prime}_{0,I}}\right), \tag{66}\]
where \(\hat{H}^{\prime}_{0,I}\) denotes the kinetic-energy operator in the boosted frame. In the classical regime,3 we thus obtain
Footnote 3: Throughout this paper we understand the classical limit as \(\hbar\to 0\), while \(\ell/\hbar\) stays constant, a viewpoint which is inherent to the literature on relative locality [89], and has recently been advocated in the context of the GUP by two of the present authors [50]. Otherwise, the classical limit of the GUP is either ill-defined or trivial [88].
\[\langle\hat{x}^{\prime}_{I}\rangle=\frac{f(\langle|\hat{p}_{I}+M_{I}v|\rangle)}{f(\langle|\hat{p}_{I}|\rangle)}\,\langle\hat{x}_{I}\rangle+\mathcal{O}(\hbar)=\frac{f(\sqrt{2M_{I}E^{\prime}_{\rm kin}})}{f(\sqrt{2M_{I}E_{\rm kin}})}\,\langle\hat{x}_{I}\rangle+\mathcal{O}(\hbar), \tag{67}\]
with the classical kinetic energy \(E_{\rm kin}.\) If, for example, the original description was in the rest frame of particle \(I\), _i. e._\(\langle\hat{p}_{I}\rangle=0\), the boosted position of the particle will be at
\[\langle\hat{x}^{\prime}_{I}\rangle=f(M_{I}v)\,\langle\hat{x}_{I}\rangle+ \mathcal{O}(\hbar). \tag{68}\]
In other words, similarly to special relativity the distance of a particle to the origin changes as a function of the boost parameter \(v.\) The difference lies in the fact that the change additionally depends on the original position of the described particle. Thus, for every observer the origin is a preferred point (inasmuch as every object in motion recedes from it). Note that this property transforms covariantly under translations. Therefore, every observer sees local events unmodified, while distant events change depending on the relative distance and momentum.
Ordinarily, we understand boosts as translations in the space of velocities. In the deformed case, the velocity of a free particle (an observer) reads
\[\dot{\hat{x}}_{I}=-i[\hat{x}_{I},\hat{H}_{0,AB}]=f(\hat{p}_{I})\frac{\hat{p}_{I }}{M_{I}}. \tag{69}\]
Therefore, an equal-time boost by \(v\) acts on the velocity of a particle as
\[\dot{\hat{x}}^{\prime}_{I}=f(\hat{p}_{I}+M_{I}v)\left(\frac{\dot{\hat{x}}_{I}} {f(\hat{p}_{I})}+v\right). \tag{70}\]
As the function \(f\) for conventional minimal-length models is monotonically increasing, this amounts to an additional, possibly nonlinear push if the unboosted momentum is large. In contrast to ordinary Galilean relativity, this push modifies the velocity of distinct particles in different ways, as can be inferred from the appearance of their masses and momenta. The relativity principle, however, is unchanged: observers at different speeds experience the same physics.
### Deformed translations and relative locality
We move on to study the effect of total translations on a two-particle system. Let us recall that according to the axioms laid out in Sec. II, those total translations are generated by the operator \(\hat{k}_{A}\oplus\hat{k}_{B}\). As usual, let \(\hat{x}_{I}\) denote the position operator of the two particles. By acting with a finite translation on the position operators, we obtain
\[\hat{x}^{\prime}_{I}=U^{\dagger}_{\hat{k}_{A}\oplus\hat{k}_{B}}(a)\hat{x}_{I}U_{\hat{k}_{A}\oplus\hat{k}_{B}}(a)=\hat{x}_{I}+a\,\hat{\partial}_{I}F(\hat{k}_{A},\hat{k}_{B})=\hat{x}_{I}+\frac{f(\hat{p}_{I})}{f(\hat{p}_{A}+\hat{p}_{B})}a, \tag{71}\]
with the translation parameter \(a.\) On the classical level, this implies that
\[\langle\hat{x}^{\prime}_{I}\rangle=\langle\hat{x}_{I}\rangle+\frac{f(\langle \hat{p}_{I}\rangle)}{f(\langle\hat{p}_{A}\rangle+\langle\hat{p}_{B}\rangle)}a+ \mathcal{O}(\hbar). \tag{72}\]
Consider now these two particles undergoing an elastic collision such that the Heisenberg equations satisfy \(\dot{\hat{p}}_{I}(t)\propto\delta(|\hat{X}_{A}-\hat{X}_{B}|)\), simulating a classical scattering process. If their expected positions are coincident with the observer's, _i. e._\(\langle\hat{x}_{A}\rangle=\langle\hat{x}_{B}\rangle=0\), at least barring quantum corrections, we find that
\[\langle\hat{X}_{I}\rangle=\frac{\langle\hat{x}_{I}\rangle}{f(\langle\hat{p}_{I }\rangle)}+\mathcal{O}(\hbar)=\mathcal{O}(\hbar). \tag{73}\]
Thus, if both particles are local to the observer, at lowest order in \(\hbar\) the scattering process is indeed taking place locally.
However, if the particles' momenta differ, their positions are not coincident for the translated observer who expects
\[\langle\hat{x}^{\prime}_{I}\rangle=\frac{f(\langle\hat{p}_{I}\rangle)}{f( \langle\hat{p}_{A}\rangle+\langle\hat{p}_{B}\rangle)}a+\mathcal{O}(\hbar). \tag{74}\]
In other words, to the translated observer, the particles appear to interact nonlocally if their momenta differ in absolute value. Whether the interaction is local therefore depends on the observer. This is an instance of relative locality [89; 90]. Note, however, that quantum corrections can generally change this conclusion.
We thus conclude our investigation on the consequences of general deformations of the Bargmann algebra. To further highlight the implications of this modification, it is instructive to study a specific example, which we do in the subsequent section.
## III Case study: Kempf-Mangano-Mann model
The classic minimal-length model, which continues to be in customary use, goes back to Kempf, Mangano and Mann [33]. As provided in Eq. (3), it posits a second-order correction to the commutator between the position and the momentum operators, _i. e._
\[[\hat{x}_{I},\hat{p}_{I}]=i\left(1+\ell^{2}\hat{p}_{I}^{2}\right), \tag{75}\]
where \(\ell\) again plays the role of minimal length. The wave-number conjugate to the position \(\hat{x}\) introduced here is related to the momentum as
\[\hat{p}_{I}=\frac{\tan(\ell\hat{k}_{I})}{\ell}. \tag{76}\]
Assuming that the momenta of the particles in question are composed linearly, the wave numbers have to obey the deformed addition law
\[F(\hat{k}_{A},\hat{k}_{B})=\hat{k}_{A}\oplus\hat{k}_{B}=\frac{1}{\ell}\arctan \left(\tan(\ell\hat{k}_{A})+\tan(\ell\hat{k}_{B})\right). \tag{77}\]
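The following short numerical sanity check (not part of the original analysis; it uses NumPy and simply sets \(\ell=1\)) illustrates these statements: the map \(p(k)=\tan(\ell k)/\ell\) reproduces the commutator of Eq. (75) through \(\mathrm{d}p/\mathrm{d}k=1+\ell^{2}p^{2}\), and the composition law of Eq. (77) is commutative, associative and bounded by the wave-number cut-off \(\pi/2\ell\).

```python
# Illustrative check of the Kempf-Mangano-Mann relations with ell = 1.
import numpy as np

ell = 1.0
p = lambda k: np.tan(ell * k) / ell
F = lambda ka, kb: np.arctan(np.tan(ell * ka) + np.tan(ell * kb)) / ell

kA, kB, kC = 0.3, 0.7, 1.1          # sample wave numbers below pi/(2*ell)

# dp/dk = sec^2(ell*k) = 1 + ell^2 p^2, i.e. [x, p] = i(1 + ell^2 p^2)
dk = 1e-6
print(np.isclose((p(kA + dk) - p(kA)) / dk, 1 + (ell * p(kA)) ** 2))

# the deformed sum is commutative and associative ...
print(np.isclose(F(kA, kB), F(kB, kA)))
print(np.isclose(F(kA, F(kB, kC)), F(F(kA, kB), kC)))

# ... and never exceeds the cut-off pi/(2*ell)
print(F(1.5, 1.5) < np.pi / (2 * ell))
```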
Following the argument of Sec. II.5, the commutator between boost and wave number reads
\[[\hat{G}_{I},\hat{k}_{I}]=iM_{I}\cos^{2}(\ell\hat{k}_{I}). \tag{78}\]
As required, the action of boosts on wave numbers saturates at the boundary of wave-number space such that it cannot be overshot.
The conjugate variables to the momentum operator, from which the operator \(\hat{d}\) is constructed via Eq. (56), read
\[\hat{X}_{I}=\frac{1}{2}\left\{\frac{1}{1+\ell^{2}\hat{p}_{I}^{2}},\hat{x}_{I} \right\}=\frac{1}{2}\left\{\cos^{2}(\hat{k}_{I}\ell),\hat{x}_{I}\right\}=\frac {\hat{G}_{I}}{M_{I}}. \tag{79}\]
Consequently, a boost by a velocity \(v\) acts on the semiclassical position of a particle at rest as
\[\left\langle\hat{x}_{I}^{\prime}\right\rangle=\left[1+\left(\ell M_{I}v\right) ^{2}\right]\left\langle\hat{x}_{I}\right\rangle+\mathcal{O}(\hbar). \tag{80}\]
In other words, the distance from the origin increases with large boosts. Having all required operators in place, we can study the modification to the ordinary Galilean theory by evaluating the expectation value of \(\hat{d}\) in typical states of interest.
### Generalised Gaussian states
In general minimal-length models, there is no physical position representation because the eigenstates of the position operator, which are infinitely peaked, are not contained in the physical Hilbert space; the latter requires a minimal position uncertainty (_i. e._ Eq. (4)). Instead, it is possible to construct a quasi-position representation [33] from so-called minimal-uncertainty states. These constitute a generalisation of Gaussian states, defined such that they saturate the Robertson-Schrodinger relation [51; 52] of \(\hat{x}_{I}\) and \(\hat{p}_{I}\), _i. e._ the GUP [33]. Such a minimal-uncertainty state at the average positions \(\left\langle\hat{x}_{I}\right\rangle\) and with vanishing expected momenta reads [42]
\[\psi_{\left\langle x\right\rangle_{I}}(k_{I})=\frac{\ell}{2\sqrt{\pi}}\prod_{I =A,B}\sqrt{\frac{\Gamma(1+a_{I})}{\Gamma(\frac{1}{2}+a_{I})}}\cos(\ell k_{I}) ^{a_{I}}e^{-ik_{I}\left\langle x_{I}\right\rangle},\qquad\text{with }a_{I}= \frac{1+\ell^{2}\Delta p_{I}^{2}}{2\ell^{2}\Delta p_{I}^{2}}, \tag{81}\]
where we introduced the Euler Gamma-function \(\Gamma(x).\) The quasi-position representation, made up of states of largest possible localisation (_i. e._ saturating Eq. (4)), is then obtained for \(\Delta p=\ell^{-1}.\)
Given such a state, the expectation value of the operator \(\hat{X}_{I}\) becomes
\[\left\langle\hat{X}_{I}\right\rangle=\frac{1+2\ell^{2}\Delta p_{I}^{2}}{1+3 \ell^{2}\Delta p_{I}^{2}}\left\langle\hat{x}_{I}\right\rangle. \tag{82}\]
In other words, while the expectation values of \(\hat{x}_{I}\) and \(\hat{X}_{I}\) coincide in the limit \(\ell\Delta p_{I}\to 0,\) with increasing momentum uncertainty \(\left\langle\hat{X}_{I}\right\rangle\) decreases to finally equal \(2\left\langle\hat{x}_{I}\right\rangle/3\) in the limit \(\ell\Delta p_{I}\rightarrow\infty.\) For states comprising the quasi-position representation, we obtain
\[\left\langle\hat{X}_{I}\right\rangle=\frac{3}{4}\left\langle\hat{x}_{I}\right\rangle, \tag{83}\]
which is independent of the minimal length. In other words, strongly localised states imply macroscopic differences to observables. This was to be expected because this amount of localisation requires momentum uncertainties at the scale set by the minimal length, _i. e._ exactly \(\Delta p_{I}=\ell^{-1}.\)
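As a quick numerical illustration (added for convenience), the ratio implied by Eq. (82), \(r(s)=(1+2s^{2})/(1+3s^{2})\) with \(s=\ell\Delta p_{I}\), indeed interpolates between the three regimes just described:

```python
# r(s) = <X_I>/<x_I> from Eq. (82), with s = ell * Delta p_I
r = lambda s: (1 + 2 * s**2) / (1 + 3 * s**2)
for s in (1e-4, 1.0, 1e4):
    print(s, r(s))   # approaches 1, equals 3/4 (Eq. (83)), approaches 2/3
```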
Most importantly, the expectation value of the operator \(\hat{d}^{2},\) the argument of the potential in Eq. (57), becomes approximately
\[\left\langle\hat{d}^{2}\right\rangle=\left(\left\langle x_{A}\right\rangle- \left\langle x_{B}\right\rangle\right)^{2}+\frac{1}{4\Delta p_{A}^{2}}+\frac{1 }{4\Delta p_{B}^{2}}-2\ell^{2}(\left\langle x_{A}\right\rangle-\left\langle x _{B}\right\rangle)\left(\Delta p_{A}^{2}\left\langle x_{A}\right\rangle- \Delta p_{B}^{2}\left\langle x_{B}\right\rangle\right)+\mathcal{O}(\ell^{4}). \tag{84}\]
Consequently, in the limit of vanishing minimal length the expectation value of the argument of the potential reduces to the expected squared distance between two Gaussian states. For the constituent states of the quasi-position representation, however, we obtain exactly
\[\left\langle\hat{d}^{2}\right\rangle=\ell^{2}+\frac{5\left(\left\langle\hat{x }_{A}\right\rangle^{2}+\left\langle\hat{x}_{B}\right\rangle^{2}\right)-9 \left\langle\hat{x}_{A}\right\rangle\left\langle\hat{x}_{B}\right\rangle}{8}, \tag{85}\]
which, independently of the value of \(\ell,\) yields macroscopic changes to the value of the generalised distance. Thus, for all intents and purposes, no particle has ever been detected in a quasi-position eigenstate.4
Footnote 4: By analogy with Lorentzian-relativistic quantum mechanics this can be understood as an argument in favour of using positive operator-valued measures [91] to model measurements instead of simple projections on eigenstates.
To gain an intuition on the consequences of the modifications analysed in the present section, it is instructive to consider an explicit example. Therefore, in the following, we analyse the coupled harmonic oscillator.
### Coupled harmonic oscillator
We have seen that, in one dimension, the dynamics of every system obeying a deformed version of Galilean relativity can be mapped into ordinary quantum mechanics by virtue of a canonical transformation. In other words, we may implement a minimal length by describing the kinematics in terms of the canonical pair \((\hat{x},\hat{k}),\) where the spectrum of \(\hat{k}\) is bounded. This representation is momentum-diffeomorphically related to the canonically conjugate operators \((\hat{X},\hat{p})\) satisfying ordinary Galilean relativistic dynamics. Nevertheless, the resulting model is by no means trivial. In this section, we explore some of the consequences of this construction with the help of a simple yet illustrative example - the coupled harmonic oscillator.
As we have demonstrated in Sec. II.3, the single-particle Hamiltonian obeying a deformed version of Galilean relativity is given by Eq. (51). Expressed in terms of the pair \((\hat{p},\hat{X}),\) it thus reads
\[\hat{H}=\frac{\hat{p}^{2}}{2M}+V(\hat{X}). \tag{86}\]
Hence, the energy eigenspectrum is generally undeformed. However, the dynamics is nontrivial precisely because the equations of motion for the position \(\hat{x}\) are nontrivial.
As a specific system, consider two particles of equal mass \(M\) connected by a spring. The resulting Hamiltonian reads
\[\hat{H}=\frac{\hat{p}_{A}^{2}+\hat{p}_{B}^{2}}{2M}+\frac{M\omega^{2}}{4}\left( \hat{X}_{A}-\hat{X}_{B}\right)^{2}, \tag{87}\]
with the oscillation frequency \(\omega.\) The dynamical equations can be decoupled by dividing the motion in \(X\)-space into a center-of-mass contribution and a relative part such that
\[\hat{X}_{\text{com}}=\frac{\hat{X}_{A}+\hat{X}_{B}}{2},\qquad\qquad\hat{p}_{ \text{com}}=\hat{p}_{A}+\hat{p}_{B},\qquad\qquad\hat{X}_{\text{rel}}=\frac{ \hat{X}_{A}-\hat{X}_{B}}{2},\qquad\qquad\hat{p}_{\text{rel}}=\hat{p}_{A}-\hat{ p}_{B}, \tag{88}\]
which is a canonical transformation. As a result, the Hamiltonian becomes
\[\hat{H}=\frac{\hat{p}_{\text{com}}^{2}+\hat{p}_{\text{rel}}^{2}}{2M_{\text{ tot}}}+\frac{1}{2}M_{\text{tot}}\omega^{2}\hat{X}_{\text{rel}}^{2}, \tag{89}\]
with the total mass \(M_{\text{tot}}=2M.\) Thus, the dynamics comes down to a simple harmonic oscillator in \(X\)-space. Moreover, \(\hat{p}_{\text{com}}\) and consequently \(\hat{k}_{\text{com}}=k(\hat{p}_{\text{com}})\) are constants of motion, as required by Newton's first law.
We are working in the Heisenberg picture such that states stay constant while operators evolve in time according to the Heisenberg equation. For the pairs \((\hat{X}_{I},\hat{p}_{I})\) we thus obtain
\[\hat{X}_{A}(t)=\hat{X}_{\text{com}}(0)+\frac{\hat{p}_{\text{com}}t}{M_{\text{tot}}}+\hat{X}_{\text{rel}}(0)\cos(\omega t)-\frac{\hat{p}_{\text{rel}}(0)}{M_{\text{tot}}\omega}\sin(\omega t)=2\frac{\hat{p}_{\text{com}}t}{M}-\hat{X}_{B}(t), \tag{90}\] \[\hat{p}_{A}(t)=\frac{1}{2}\left[\hat{p}_{\text{com}}-M_{\text{tot}}\omega\hat{X}_{\text{rel}}(0)\sin(\omega t)-\hat{p}_{\text{rel}}(0)\cos(\omega t)\right]=\hat{p}_{\text{com}}-\hat{p}_{B}(t), \tag{91}\]
with the initial relative-position operator \(\hat{X}_{\text{rel}}(0),\) the initial center-of-mass position \(\hat{X}_{\text{com}}(0)\) and the initial relative momentum \(\hat{p}_{\text{rel}}(0)\). The evolution of the position operators can then be inferred as
\[\hat{x}_{I}(t)=\hat{X}_{I}(t)+\frac{1}{2}\left\{\ell^{2}\hat{p}_{I}(t)^{2}, \hat{X}_{I}(t)\right\}. \tag{92}\]
Thus, we can express the time-evolution of the position operators in terms of the operators \(\hat{X}_{\text{rel}}(0),\)\(\hat{X}_{\text{com}}(0),\)\(\hat{p}_{\text{rel}}(0)\) and \(\hat{p}_{\text{com}}.\) Furthermore, we can apply a deformed Galilean boost (with boost-parameter \(v\)) to the system by shifting
\[\hat{X}_{\text{rel}}\rightarrow\hat{X}_{\text{rel}},\qquad\qquad\qquad\qquad \qquad\qquad\hat{p}_{\text{com}}\rightarrow\hat{p}_{\text{com}}+M_{\text{ tot}}v. \tag{93}\]
In order to study the evolution of the expected position a typical system exhibits, we consider the generalised Gaussian states defined in Eq. (81). In the limit of \(\Delta p_{I}\ell\to 0\) these are coherent states, thus closely mimicking classical evolution. As given in Eq. (81) the generalised Gaussian states have vanishing expected momentum for both particles. Thus, initially, the center-of-mass momentum of the system vanishes in the unboosted frame \(v=0.\)
There are four dimensionless parameters that indicate strongly deformed evolution when they are at least of order one, namely \(M\omega\ell^{2}\) (\(\sqrt{M\omega}\) constitutes the relevant momentum scale of the oscillator), \(\Delta p_{A}(0)\ell\) and \(\Delta p_{B}(0)\ell\) (the precision to which momentum/position is known initially), and \(Mv\ell\) (the strength of the boost).
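The role of the momentum-scale parameter \(M\omega\ell^{2}\) can already be seen in the following minimal classical-limit sketch (expectation values only; the parameter values are illustrative and are not those used for the figures). The normal modes evolve as an ordinary harmonic oscillator in \(X\)-space, cf. Eq. (89), and the deformation enters only through the map back to the physical positions, Eq. (92); the localisation effects involving \(\Delta p_{I}\ell\) are genuinely quantum and are not captured by this sketch.

```python
# Classical-limit sketch of the coupled oscillator for the KMM deformation
# f(p) = 1 + (ell*p)^2; hbar -> 0, so operators are replaced by c-numbers.
import numpy as np

ell, M, omega = 1.0, 1.0, 1.0       # illustrative choices
Mtot = 2.0 * M
v = 0.0                             # boost parameter; set v != 0 for a boosted frame

# particles initially at rest in the unboosted frame, so X_I(0) = x_I(0) there;
# X_I(0) is boost-invariant (Eq. (64)), while the boost shifts p_com (Eq. (93))
xA0, xB0 = 1.0, -1.0
Xcom0, Xrel0 = (xA0 + xB0) / 2.0, (xA0 - xB0) / 2.0
pcom, prel0 = Mtot * v, 0.0

t = np.linspace(0.0, 4.0 * np.pi / omega, 400)

Xcom = Xcom0 + pcom * t / Mtot      # free centre-of-mass motion
Xrel = Xrel0 * np.cos(omega * t) + prel0 / (Mtot * omega) * np.sin(omega * t)
prel = prel0 * np.cos(omega * t) - Mtot * omega * Xrel0 * np.sin(omega * t)

XA, XB = Xcom + Xrel, Xcom - Xrel
pA, pB = (pcom + prel) / 2.0, (pcom - prel) / 2.0

# physical positions, Eq. (92) in the classical limit: x_I = (1 + ell^2 p_I^2) X_I
xA = (1.0 + (ell * pA) ** 2) * XA
xB = (1.0 + (ell * pB) ** 2) * XB
print(xA[:3], xB[:3])
```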
The ensuing evolution of the expected position is displayed for combinations of the first three parameters in the unboosted case in Fig. 1. In this vein, Fig. 1 a demonstrates that the evolution is basically undeformed if the relevant parameters are small. An increase in the system-characteristic momentum scale \(\sqrt{M\omega}\) induces higher modes of oscillation, overtones of fractional period with respect to \(\bar{\omega},\) which leads to the two particles sometimes scattering off each other, while at other times simply passing by, clearly very unusual behaviour (see Figs. 1 b and d). Strong positional localisation, in turn, shifts the phase and frequency of the oscillator while at the same time leading to a constant increase in the separation of the particles (see Figs. 1 c and d). Note that while those latter plots appear to indicate an instability, the energy along the evolution is constant, as expected.
As seen from a boosted observer, the evolution is depicted in Fig. 2. In this case the boost is chosen to be of the order \(vt\sim\left\langle\hat{x}_{I}\right\rangle(0)\) such that the boost evolution does not overpower the harmonic dynamics. As time measured in periods in the plots, _i. e._\(\bar{\omega}t,\) is of order 1, this comes down to the relation \(Mv/M\bar{\omega}\sim\ell\sim\left\langle\hat{x}_{I}\right\rangle(0).\) Thus, generally we have \(\ell Mv\sim\ell^{2}M\bar{\omega},\) as can be seen in the plots. If, then, the boost and system momentum scale, as well as the localisation in position space, are small, we recover the ordinary boosted harmonic oscillator (c. f. Fig. 2 a). In contrast, at large boosts and system momentum scales, a situation displayed in Fig. 2, both particles start oscillating in phase at a much larger distance from the origin, _i. e._ generally \(\left\langle\hat{x}_{I}\right\rangle(t)\gg\left\langle\hat{x}_{I}\right\rangle(0).\) As demonstrated in Fig. 2 c, strong localisation in and of itself does not imply significant changes in the boosted with respect to the unboosted case (c. f. Fig. 1 c). It is the combination of strong localisation and large masses (Fig. 2 d) which is of special interest because it essentially recovers the classical dynamics. The interesting point here lies in the fact that those oscillatory peaks pointing towards the observer (the origin) are softened, while those directed away from the observer are sharpened. This property illustrates the deformation of distances experienced by a moving observer. According to Eq. (65), objects in relative motion with respect to the observer at the origin appear more distant depending on their kinetic energy. This effect is stronger for objects which are farther away.
To summarise, while there are no corrections to the spectrum of the Hamiltonian, the deformation induced by a canonical transformation applied to the ordinary Galilean-invariant Hamiltonian does lead to physical changes, because it is the position operator that carries the physical interpretation. If the physical position is given by \(\hat{x}_{I}\) instead of \(\hat{X}_{I},\)_i. e._ in the presence of a minimal length, the ensuing modifications to the theory are nontrivial.
## IV Conclusion
A quantum mechanical model with a minimal length requires a cut-off in the eigenspectrum of the wave-number conjugate to the position operator. This implies, on the one hand, that wave numbers cannot add up linearly and, on the other hand, that boosts have to act nontrivially on wave numbers. In other words, Galilean relativity has to be either explicitly broken or, at least, deformed in some way.
In this work we have explicitly demonstrated that the only dynamics invariant under deformed Galilean transformations in one dimension is canonically related to ordinary Galilean evolution. In other words, given the position \(\hat{x},\) and its conjugate \(\hat{k},\) we can find another canonical pair \((\hat{X},\hat{p})\) in terms of which the Hamiltonian does not appear deformed. The transition from the set \((\hat{x},\hat{k})\) to the set \((\hat{p},\hat{X})\) is a diffeomorphism in momentum space defined such that the momenta \(\hat{p}(\hat{k})\) compose linearly. In other words, expressed in terms of \(\hat{p},\) the law of conservation of momentum is undeformed.
Customary minimal-length models, subsumed under the term GUP, purport the existence of a preferred notion of momentum in terms of which the kinetic part of the Hamiltonian is quadratic. Here, introducing the momentum \(\hat{p}=p(\hat{k}),\) we have corroborated this assertion, turning the existence of a linearly adding momentum \(\hat{p}\) into a necessary condition for the existence of a relativity principle. Indeed, the resulting free-particle Hamiltonian is quadratic in \(\hat{p}.\) However, contrary to conventional models, deformed Galilean relativity requires the interaction potential between two particles to depend on a generalisation of the respective distance, _i. e._ to be deformed. Therefore, the prevailing GUP models cannot accommodate a relativity principle.
Semiclassically speaking, the deformation of the boost operator implies that the position of a particle in motion with respect to the observer is modified as a function of its kinetic energy in the observer's frame. For conventional types of models, this change amounts to an increase in distances and apparently elongates extended objects in motion
in a way reminiscent of the Lorentz contraction. However, here it is not the relative velocity that is compared to the speed of light but the mass and the kinetic energy that are compared to the minimal-length scale.
That the resulting deformed Galilean-invariant dynamics is canonically related to the ordinary Galilean-relativistic one does not imply that the model is trivial. Indeed, an analogous statement could be made about special relativity in \(1+1\) dimensions. While the spectrum of the Hamiltonian is unmodified with respect to the ordinary one, the dynamics of the position operator is clearly deformed. For instance, the study of elastic collisions suggests a revision of the principle of absolute locality in favour of relative locality (for more information see e. g. [89; 90]). We have further demonstrated the nontriviality of the dynamics with the instructive example of two particles interacting through a harmonic potential. In particular, a boosted observer does indeed find relative-locality-like effects as displayed in Fig. 2 d.
The apparent triviality of the model is rooted in the fact that a one-dimensional wave-number space cannot be curved. Similarly and in contrast to its higher-dimensional counterparts, the space of velocities in 1+1-dimensional special relativity is flat. By analogy with special and doubly special relativity, we expect this to change in higher dimensions when coordinates cease to commute. Indeed, it has been shown that the curvature of momentum space is proportional to the noncommutativity of the coordinates [46; 89; 92; 93]. Furthermore, the finding that the existence of a minimal length requires a cut-off in wave-number space generalises to noncommutative geometries [53]. Therefore, it would be interesting to extend the present results to that case. We hope to report back on this matter in the future.
## Appendix A Proof of associative and commutative composition law
Equation (39) constrains the composition laws compatible with any version of deformed Galilean invariance. This appendix is dedicated to analysing this constraint. First, we rewrite Eq. (39) in terms of \(F\) as
\[\frac{F^{(1,1)}}{F^{(1,0)}F^{(0,1)}}=\tilde{F}(F), \tag{104}\]
where the superscripts correspond to the number of derivatives with respect to the first and second entries of the function \(F(\hat{k}_{A},\hat{k}_{B})\), respectively. This equation is generally satisfied by a composition law of the kind
\[F(\hat{k}_{A},\hat{k}_{B})=p^{-1}\left(p(\hat{k}_{A})+p(\hat{k}_{B})\right) \tag{105}\]
for some function \(p(k).\) Composition laws of this kind are trivially associative and commutative.
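A short symbolic check (added here; it uses sympy together with the Kempf-Mangano-Mann choice \(p(k)=\tan(k)\) of Sec. III at \(\ell=1\), purely as an example) confirms this: for a composition law of the above form, the combination \(F^{(1,1)}/(F^{(1,0)}F^{(0,1)})\) takes the same value for different pairs \((\hat{k}_{A},\hat{k}_{B})\) that compose to the same \(F\), as required by Eq. (104).

```python
# Check that F = arctan(tan(kA) + tan(kB)) satisfies Eq. (104): the ratio
# F^{(1,1)}/(F^{(1,0)} F^{(0,1)}) depends on (kA, kB) only through F itself.
import sympy as sp

a, b = sp.symbols('a b', real=True)
F = sp.atan(sp.tan(a) + sp.tan(b))
ratio = sp.diff(F, a, b) / (sp.diff(F, a) * sp.diff(F, b))

F0 = sp.Rational(7, 10)                          # fix the composed wave number
for kA in (sp.Rational(1, 5), sp.Rational(1, 2)):
    kB = sp.atan(sp.tan(F0) - sp.tan(kA))        # chosen so that F(kA, kB) = F0
    print(ratio.subs({a: kA, b: kB}).evalf())    # same value for both pairs
print((-2 * sp.tan(F0)).evalf())                 # namely -2*tan(F0) for p = tan
```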
However, for Eq. (104) to imply a commutative and associative composition law, it is necessary that Eq. (105) constitutes its unique solution. Here, we demonstrate this by a perturbative analysis to infinite order in \(\ell,\)_i. e._ under the assumption that the functions \(F,\)\(\tilde{F}\) and \(p\) are analytic. In this case the functions \(\tilde{F}\) and \(p\), being dependent on one variable, have one free coefficient at every order in \(\ell.\) The composition law \(F\), in turn, is a general function of two momenta, thus requiring \(n\) different coefficients at order \(n.\) Both Eqs. (104) and (105) provide \(n\) constraints at order \(n.\) Thus, both introduce one additional coefficient while providing the same number of constraints on the composition law. As a result, the composition law in both cases has one unconstrained coefficient at every order in \(\ell.\) In a nutshell, we have found a solution, Eq. (105), of Eq. (104) which does not further constrain the composition law. Thus, we have determined its general solution.
To illustrate how this comes about, we find the said constraints to fourth order in \(\ell.\) The wave-number composition
can be expanded as
\[F(\hat{k}_{A},\hat{k}_{B})=\hat{k}_{A}+\hat{k}_{B}+\sum_{n,m=1}^{\infty}F_{nm}\ell^{n+m-1}\hat{k}_{A}^{m}\hat{k}_{B}^{n}. \tag{106}\]
Furthermore, bearing in mind that it has to have dimensions of length, we may express the function \(\tilde{F}(\hat{k})\) as
\[\tilde{F}(\hat{k})=\ell\sum_{n=0}^{\infty}\tilde{F}_{n}(\ell\hat{k})^{n}. \tag{107}\]
As a result, we can expand Eq. (104) in powers of \(\ell\) and compare the coefficients of powers of \(\hat{k}_{A},\,\hat{k}_{B}\) to obtain constraints on a given composition law and determine the corresponding \(\tilde{F}_{n}.\) As the present appendix is centred around the composition law, we only display the former. To fourth order in \(\ell,\) they read
\[F_{1,2}= F_{2,1},\qquad F_{1,3}=F_{3,1},\qquad F_{2,2}=\frac{3F_{3,1}}{2}+F_{1,1}F_{2,1}, \tag{108}\] \[F_{1,4}= F_{4,1},\qquad F_{2,3}=F_{3,2}=\frac{1}{2}\left(2F_{2,1}^{2}+3F_{1,1}F_{3,1}\right)+2F_{4,1}. \tag{109}\]
Indeed, there is one free coefficient at every order (_i. e._\(F_{1,1},\,F_{2,1},\,F_{3,1}\) and \(F_{4,1}\)). Thus, at that order the composition law becomes
\[F= \hat{k}_{A}+\hat{k}_{B}+F_{1,1}\hat{k}_{A}\hat{k}_{B}\ell+F_{2,1}\hat{k}_{A}\hat{k}_{B}\left(\hat{k}_{A}+\hat{k}_{B}\right)\ell^{2}+\frac{1}{2}\hat{k}_{A}\hat{k}_{B}\left[F_{3,1}\left(3\hat{k}_{A}\hat{k}_{B}+2\hat{k}_{A}^{2}+2\hat{k}_{B}^{2}\right)+2F_{1,1}F_{2,1}\hat{k}_{A}\hat{k}_{B}\right]\ell^{3}\] \[+\frac{1}{2}\hat{k}_{A}\hat{k}_{B}\left(\hat{k}_{A}+\hat{k}_{B}\right)\left[2F_{2,1}^{2}\hat{k}_{A}\hat{k}_{B}+3F_{1,1}F_{3,1}\hat{k}_{A}\hat{k}_{B}+2F_{4,1}\left(\hat{k}_{A}\hat{k}_{B}+\hat{k}_{A}^{2}+\hat{k}_{B}^{2}\right)\right]\ell^{4}. \tag{110}\]
This function is clearly invariant under the exchange \(A\leftrightarrow B,\)_i. e._ the composition law is commutative. Furthermore, it can be explicitly shown that \(F(\hat{k}_{A},F(\hat{k}_{B},\hat{k}_{C}))=F(F(\hat{k}_{A},\hat{k}_{B}),\hat{ k}_{C}),\) which amounts to associativity. Indeed, both Eqs. (104) and (105) imply the same composition law in the same parameterisation.
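As an explicit illustration (added here; sympy, with the Kempf-Mangano-Mann composition law of Eq. (77) at \(\ell=1\)), the Taylor coefficients of a concrete composition law can be extracted and checked against the constraints (108)-(109):

```python
# Expand F = arctan(tan(kA) + tan(kB)) as in Eq. (106) and verify Eqs. (108)-(109).
import sympy as sp

kA, kB, eps = sp.symbols('kA kB epsilon')
F = sp.atan(sp.tan(eps * kA) + sp.tan(eps * kB))
ser = sp.series(F, eps, 0, 6).removeO().expand()

def coeff(m, n):
    # coefficient of kA^m * kB^n, i.e. F_{n,m} in the notation of Eq. (106)
    return ser.coeff(eps, m + n).coeff(kA, m).coeff(kB, n)

F11, F21, F31, F41 = coeff(1, 1), coeff(1, 2), coeff(1, 3), coeff(1, 4)
print(F11, F21, F31, F41)                        # 0, -1, 0, 1/3
print(coeff(2, 2), 3 * F31 / 2 + F11 * F21)      # Eq. (108): both vanish here
print(coeff(3, 2), (2 * F21**2 + 3 * F11 * F31) / 2 + 2 * F41)   # Eq. (109): both 5/3
```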
|
2306.04442 | Dark Matter Constraints from Isomeric $^{\bf 178m}$Hf | We describe a first measurement of the radiation from a $^{\bf 178m}$Hf
sample to search for dark matter. The $\gamma$ flux from this sample, possessed
by Los Alamos National Laboratory nuclear chemistry, was measured with a Ge
detector at a distance of 4 ft due to its high activity. We search for
$\gamma$s that cannot arise from the radioactive decay of $^{\bf 178m}$Hf, but
might arise from the production of a nuclear state due to the inelastic
scattering with dark matter. The limits obtained on this $\gamma$ flux are then
translated into constraints on the parameter space of inelastic dark matter.
Finally, we describe the potential reach of future studies with $^{\bf
178m}$Hf. | D. S. M. Alves, S. R. Elliott, R. Massarczyk, S. J. Meijer, H. Ramani | 2023-06-07T14:02:53Z | http://arxiv.org/abs/2306.04442v1 | # Dark Matter Constraints from Isomeric \({}^{178m}\)Hf
###### Abstract
We describe a first measurement of the radiation from a \({}^{178m}\)Hf sample to search for dark matter. The \(\gamma\) flux from this sample, possessed by Los Alamos National Laboratory nuclear chemistry, was measured with a Ge detector at a distance of 4 ft due to its high activity. We search for \(\gamma\)s that cannot arise from the radioactive decay of \({}^{178m}\)Hf, but might arise from the production of a nuclear state due to the inelastic scattering with dark matter. The limits obtained on this \(\gamma\) flux are then translated into constraints on the parameter space of inelastic dark matter. Finally, we describe the potential reach of future studies with \({}^{178m}\)Hf.
There is irrefutable evidence for the existence of dark matter arising purely from gravitational interactions. Understanding its particle nature is one of the burning questions of 21\({}^{\rm st}\) century particle physics. Dark matter candidates at the weak scale arise naturally in theories beyond the Standard Model, such as supersymmetry. Furthermore, weak scale massive particles with weak scale cross-sections--the so-called "weakly interacting massive particles," or WIMPS--are produced with the correct relic abundance when freezing out from thermal equilibrium in the early universe (WIMP miracle). Direct, indirect, and collider searches for WIMPs have reported repeated null results, setting stringent limits on their model space. A review of the present status of the search for dark matter can be found in [1]. The derived limits are very restrictive and the lack of an observation has motivated dark matter considerations beyond the classic WIMP description.
Among other alternative dark matter models are inelastic dark matter (iDM) [2; 3; 4; 5] and strongly interacting dark matter (SIDM) [6; 7; 8]. While these models retain the salient features of thermal freeze-out, constraints on their parameter spaces are much less stringent because the threshold energy required for a detectable dark matter-nucleus scattering event is often unavailable--in the case of iDM, due to the large inelastic splitting; and in the case of SIDM, due to the loss of dark matter kinetic energy from its interactions with the overburden rock above deep underground experiments. More specifically, in iDM models, dark matter-nucleus elastic scattering is suppressed, and the dominant scattering process requires an internal transition to an excited dark matter state. If the transition energy is greater than the available kinetic energy of the dark matter-nucleus system, this process is completely shut-off, severely reducing the sensitivity to the iDM parameter space of present experiments focused on WIMPs. In the case of SIDM models, the large dark matter nuclear cross section causes dark matter particles to thermalize through interactions with the Earth, resulting in a velocity too low to produce a measurable interaction by the time they reach the detector's location deep underground. In particular, large scattering cross sections (much higher than electroweak-strength) are still viable for significant swaths of parameter space of both SIDM and iDM models. While this is difficult to obtain with perturbative models of new physics at the TeV scale, composite dark matter models from a strongly interacting dark sector can naturally accommodate these properties [9; 10; 11; 12; 13].
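For orientation, the following back-of-the-envelope kinematics (added here; the xenon target, the 1 TeV dark matter mass, and the velocity assumptions are illustrative inputs, not results from this work) shows why ground-state targets lose sensitivity to iDM once the splitting exceeds a few hundred keV: upscattering \(\chi N\rightarrow\chi^{*}N\) requires \(v\geq\sqrt{2\,\delta M_{\chi}/\mu}\), with \(\mu\) the dark matter-nucleus reduced mass.

```python
# Largest inelastic splitting kinematically accessible on a ground-state nucleus.
import numpy as np

c_kms = 2.998e5                        # speed of light in km/s
M_chi = 1000.0                         # dark matter mass in GeV (illustrative benchmark)
m_Xe  = 0.9315 * 131                   # approximate xenon nucleus mass in GeV
mu    = M_chi * m_Xe / (M_chi + m_Xe)  # reduced mass in GeV

v_max = (600.0 + 230.0) / c_kms        # assumed maximal DM speed in the lab frame (units of c)
delta_max_keV = 0.5 * mu * v_max**2 * 1e6
print(delta_max_keV)                   # roughly 4e2 keV
```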
In this letter, we consider a recent proposal to use nuclear metastable states as exothermic-reaction targets for dark matter searches [14]. The isomer \({}^{178m}\)Hf was suggested as a potential target, and its large reservoir of energy available for transfer to a dark matter particle makes it an intriguing candidate. Unfortunately, due to the relatively short half-life (31 y) of this manmade isotope, a large number of target atoms also means significant radioactivity. In this article, we describe a first study of the \(\gamma\) spectrum of \({}^{178m}\)Hf to derive first limits on dark matter interactions with this isotope.
The \({}^{178m}\)Hf sample used in this study was fabricated by the Los Alamos National Laboratory (LANL) nuclear chemistry division to produce \({}^{172}\)Hf for a \({}^{172}\)Hf/\({}^{172}\)Lu medical generator [15]. Later, this material was used to study the possibility of energy storage in nuclear isomers, specifically \({}^{178m}\)Hf. Reports of triggered isomer decay in \({}^{178m}\)Hf led to a further study [16] finding no evidence for the effect. The Hafnium sample used for that experiment, and for this measurement, was extracted from a Ta target at the LAMPF accelerator at LANL [17]. The sample studied here is the _second set_ as described in [16] and is shown in Fig. 1. It is now relatively old and Hf isotopes other than 178m have decayed away, leaving a rather pure sample.
Due to the sample's high activity (20-25 mR/hr on contact), the detector was placed at a distance of 1.2 m (4 ft) in order to minimize dead time from pile-up rejection. The detector was an ORTEC® Detective-X Ge detector [18]. The spectrum, shown in Fig. 2, indicates some natural room background lines from the U/Th/K decay chains along with the known \({}^{178m}\)Hf lines. A list of the identified lines is given in Table 1. Note that above 600 keV, the spectrum is dominated by the natural background. Below 600 keV, the spectrum is dominated by
\({}^{178m}\)Hf emission.
The metastable state at 2446 keV, with a half-life of 31 yr, has a large spin \(J^{\pi}=16^{+}\). Table 2 lists a number of Hf states which are prevented from being populated by \(\gamma\) transitions originating from the \(16^{+}\) metastable state due to the large spin change, \(\Delta J\). An interaction with a heavy and slow dark matter particle, on the other hand, could overcome the \(\Delta J\) hindrance and catalyze transitions from the \(16^{+}\) metastable state to those (otherwise unpopulated) lower spin states, which in turn could decay via \(\gamma\) emission. Figure 3 depicts the \({}^{178m}\)Hf level diagram showing the radioactive decay pathways in contrast to the dark matter-induced pathways for a couple of the key transitions.
When a dark matter particle scatters off the isomeric nuclear state, the metastable nuclear energy can be tapped to excite the dark matter to an elevated internal energy state. The primary advantage of \({}^{178m}\)Hf in our study is its large metastable state energy, which allows us to probe, for the first time in a direct detection experiment, dark matter mass splittings as high as \(\delta M_{\chi}\sim\mathcal{O}(\mathrm{MeV})\). In particular, dark matter-induced transitions of the isomeric state to the lower energy states would be sensitive to the largest dark matter mass splittings. Unfortunately, most of the low-lying \({}^{178}\)Hf states are also populated through the SM radioactive decay of \({}^{178m}\)Hf, and therefore suffer from large backgrounds and result in a significant reduction in sensitivity to an iDM signal. Therefore, we focus on dark matter-induced transitions to higher energy states which are not populated by the SM decay pathways and therefore emit \(\gamma\)s in regions with reduced backgrounds; these \(\gamma\)s have energies above \(\sim\)600 keV. In Table 2, we only list the excited \({}^{178}\)Hf states and associated decay \(\gamma\)s determined to be the most sensitive given these constraints1. These states balance the need for high dark matter-induced transition energy against the backgrounds at the \(\gamma\) energy.
Footnote 1: We note that \(\gamma\)s from the dark matter-induced states at 1731.1 keV and at 1781.3 keV are also ignored because they fall in high background regions.
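For reference, the nuclear energy released when dark matter de-excites the \(16^{+}\) isomer (at 2446.1 keV) down to each candidate state of Table 2 can be tabulated directly from the level energies; it sets the scale of the largest dark matter mass splitting each transition can probe, up to the small kinetic energy of the incoming particle. The following is a simple arithmetic sketch using only the Table 2 inputs.

```python
# Energy released in each candidate dark matter-induced transition (keV).
E_isomer = 2446.1
E_states = {1: 1635.6, 2: 1636.7, 3: 1640.5, 4: 1648.8, 5: 1651.5, 6: 1654.3,
            7: 1691.1, 8: 1697.5, 9: 1747.1, 10: 1772.1, 11: 1788.6}
for j, E in E_states.items():
    print(j, round(E_isomer - E, 1))   # between ~657.5 and ~810.5 keV
```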
For the candidate \(\gamma\)s listed in Table 2, we obtain the number of observed and expected background counts within an optimized window centered on the \(\gamma\) energy. The expected background count (B) is obtained by performing a side-band fit of the data, assuming a linear background distribution. The width of the optimal window is deduced by maximizing \(\mathcal{A}_{\gamma}/\sqrt{B}\), where \(\mathcal{A}_{\gamma}\) is the signal acceptance within the chosen window, assuming it follows a gaussian distribution. For a continuum spectrum, the resulting signal acceptance within the optimal window is given by \(\mathcal{A}_{\gamma}\simeq 0.84\); however, since we collected a binned spectrum, the signal acceptance for each \(\gamma\) received a small correction to account for the bin edges. There were no observable peaks at any of the smoking-gun energies in Table 2.
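The window optimisation just described can be reproduced with a few lines (a sketch, assuming a Gaussian peak of width \(\sigma\) on a locally flat background, so that the expected background \(B\) scales linearly with the window width; binning corrections are ignored). The optimum lands near a half-width of \(1.4\sigma\), giving the continuum acceptance \(\mathcal{A}_{\gamma}\simeq 0.84\) quoted above.

```python
# Maximise A_gamma / sqrt(B) over the window half-width w (in units of sigma).
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize_scalar

def neg_figure_of_merit(u):                     # u = w / sigma
    acceptance = erf(u / np.sqrt(2.0))          # Gaussian signal fraction inside +/- u*sigma
    return -acceptance / np.sqrt(u)             # flat background => B proportional to u

res = minimize_scalar(neg_figure_of_merit, bounds=(0.1, 5.0), method='bounded')
print(res.x, erf(res.x / np.sqrt(2.0)))         # ~1.4, ~0.84
```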
To establish our notation, consider the process in which a dark matter particle upscatters off the isomeric nuclear state \({}^{178m}\)Hf, causing the nucleus to transition to a lower energy level \({}^{178}\)Hf\({}_{j}\),
\[\chi+{}^{178m}\mathrm{Hf}\;\rightarrow\;\chi^{*}+{}^{178}\mathrm{Hf}_{j}. \tag{1}\]
We will denote the inelastic cross section for this process by \(\sigma^{(j)}_{inel}\). The produced state \({}^{178}\)Hf\({}_{j}\) can then de-excite via emission of a \(\gamma\) of energy \(E^{(j)}_{\gamma}\) with a branching ratio
\begin{table}
\begin{tabular}{c c||c c} \hline \(E_{\gamma}\) [keV] & Origin & \(E_{\gamma}\) [keV] & Origin \\ \hline \hline
55.3 & \({}^{178m}\)Hf x-rays & 725.9 & \({}^{212}\)Bi \\
62.7 & \({}^{178m}\)Hf x-ray & 768.4 & \({}^{214}\)Bi \\
68.6 & Au x-ray det. mat. & 794.9 & \({}^{228}\)Ac \\
77.9 & Au x-ray det. mat. & 860.6 & \({}^{208}\)Tl \\
88.6 & \({}^{178m}\)Hf & 910.5 & \({}^{228}\)Ac \\
93.2 & \({}^{178m}\)Hf & 968.2 & \({}^{228}\)Ac \\
213.4 & \({}^{178m}\)Hf & 1093.5 & \({}^{208}\)Tl sum \\
216.7 & \({}^{178m}\)Hf & 1120.6 & \({}^{214}\)Bi \\
237.4 & \({}^{178m}\)Hf & 1238.4 & \({}^{214}\)Bi \\
257.6 & \({}^{178m}\)Hf & 1242.0 & \({}^{174}\)Lu\({}^{a}\) \\
277.3 & \({}^{178m}\)Hf & 1377.7 & \({}^{214}\)Bi \\
296.8 & \({}^{178m}\)Hf & 1460.2 & \({}^{40}\)K \\
325.6 & \({}^{178m}\)Hf & 1729.6 & \({}^{214}\)Bi \\
426.4 & \({}^{178m}\)Hf & 1764.8 & \({}^{214}\)Bi \\
454.0 & \({}^{178m}\)Hf & 1847.4 & \({}^{214}\)Bi \\
495.0 & \({}^{178m}\)Hf & 2103.6 & \({}^{208}\)Tl esc. peak \\
511.0 & annihilation & 2117.8 & \({}^{214}\)Bi \\
535.0 & \({}^{178m}\)Hf & 2204.1 & \({}^{214}\)Bi \\
574.2 & \({}^{178m}\)Hf & 2448.4 & \({}^{214}\)Bi \\
583.5 & \({}^{208}\)Tl & 2614.5 & \({}^{208}\)Tl \\
608.9 & \({}^{214}\)Bi & & \\ \hline \end{tabular}
\end{table}
Table 1: The stronger lines observed in the spectrum.
given by \(b_{\gamma}^{(j)}\). The expected signal count for the process described above, \(S_{\gamma}^{(j)}\), is given by
\[S_{\gamma}^{(j)}=N_{T}\Delta t\times(\sigma_{inel}^{(j)}\,\Phi_{\chi})\times(b_{ \gamma}^{(j)}{\cal A}_{\gamma}^{(j)}\epsilon_{\gamma}^{(j)}), \tag{2}\]
where \(N_{T}\) is the number of target \({}^{178m}\)Hf nuclei; \(\Delta t=974.2\) s is the live time; \(\Phi_{\chi}\) is the dark matter flux; and \({\cal A}_{\gamma}^{(j)}\) and \(\epsilon_{\gamma}^{(j)}\) are, respectively, the signal acceptance within the region of interest and detection efficiency for the \(\gamma\) emitted in the decay of \({}^{178}\)Hf\({}_{j}\).
The number of target atoms \(N_{T}\) can be deduced from the SM activity of the Hf sample. In particular, the SM decay chain of \({}^{178m}\)Hf produces a 495-keV \(\gamma\) line with a probability \(p_{495}=0.736\pm 0.014\)[20]. The number of 495-keV \(\gamma\) counts observed during our live time, \(S_{495}=96,808\pm 395\), relates to \(N_{T}\) via:
\[N_{T}=\frac{\tau_{\rm isomer}}{\Delta t}\,\frac{S_{495}}{p_{495}\,\epsilon_{495 }}, \tag{3}\]
where \(\tau_{\rm isomer}=1.41\times 10^{9}\) s is the lifetime of \({}^{178m}\)Hf and \(\epsilon_{495}\) is the detection efficiency for the 495 keV \(\gamma\) line.
Combining (2) and (3), we can express the dark matter event rate (i.e., number of scattering events in (1) per unit time per target nucleus) as
\[\sigma_{inel}^{(j)}\,\Phi_{\chi} = \tau_{\rm isomer}^{-1}\,\frac{S_{\gamma}^{(j)}}{S_{495}}\,\frac{p_ {495}\,\epsilon_{495}}{b_{\gamma}^{(j)}{\cal A}_{\gamma}^{(j)}\,\epsilon_{ \gamma}^{(j)}}. \tag{4}\]
The corresponding half-life for this dark matter-induced
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline label & state energy & state & \(\gamma\) energy & \(\gamma\) branching & acceptance & rel. eff. & background & observed & \(T_{1/2}^{(j)}\) \\ \(j\) & \(E_{j}\)(keV) & \(J^{\pi}\), \(K\) & \(E_{\gamma}^{(j)}\)(keV) & ratio \(b_{\gamma}^{(j)}\) & \({\cal A}_{\gamma}^{(j)}\) & \(\epsilon_{\gamma}^{(j)}/\epsilon_{495}\) & counts & counts & (\(10^{5}\) yrs) \\ \hline \hline
1 & 1635.6 & \(4^{+}\), 0 & 1542.2 & \(0.97\pm 0.04\) & 0.83 & 0.41 & \(20.52\pm 1.29\) & 17 & \(>1.56\) \\
2 & 1636.7 & \(5^{-}\), 5 & 1330.0 & \(0.57\pm 0.03\) & 0.86 & 0.44 & \(44.23\pm 1.98\) & 32 & \(>1.12\) \\
3 & 1640.5 & \(5^{+}\), 4 & 1333.8 & \(0.55\pm 0.02\) & 0.86 & 0.44 & \(40.93\pm 1.87\) & 40 & \(>0.52\) \\
4 & 1648.8 & \(6^{-}\), 2 & 1016.6 & \(0.60\pm 0.04\) & 0.84 & 0.47 & \(75.97\pm 2.52\) & 82 & \(>0.30\) \\
5 & 1651.5 & \(5^{-}\), 1 & 1344.9 & \(0.70\pm 0.02\) & 0.86 & 0.44 & \(38.27\pm 1.81\) & 41 & \(>0.51\) \\
6 & 1654.3 & \(4^{+}\), 0 & 1348.0 & \(0.68\pm 0.21\) & 0.86 & 0.44 & \(36.99\pm 1.81\) & 34 & \(>0.69\) \\
7 & 1691.1 & \(6^{+}\), 2 & 1059.0 & \(0.55\pm 0.02\) & 0.83 & 0.47 & \(74.65\pm 2.45\) & 50 & \(>1.21\) \\
8 & 1697.5 & \(9^{-}\), 8 & 333.4 & \(1.00\pm 0.00\) & 0.83 & 1.24 & \(6434.7\pm 29.8\) & 6506 & \(>0.14\) \\
9 & 1747.1 & \(4^{-}\), 2 & 1440.6 & \(0.12\pm 0.03\) & 0.84 & 0.44 & \(33.21\pm 0.84\) & 31 & \(>0.13\) \\
10 & 1772.1 & \(0^{+}\), 0 & 1678.8 & \(1.00\pm 0.11\) & 0.81 & 0.25 & \(15.55\pm 0.81\) & 11 & \(>1.79\) \\
11 & 1788.6 & \(6^{+}\), 4 & 1156.3 & \(0.45\pm 0.04\) & 0.81 & 0.47 & \(63.84\pm 0.81\) & 67 & \(>0.26\) \\ \hline \end{tabular}
\end{table}
Table 2: The input parameters and resulting 90% C.L. limits on the half-life \(T_{1/2}^{(j)}\) of the dark matter-induced transition shown in Eq. (1). The smoking-gun signal for this process is the \(\gamma\) emitted in the decay of the dark matter-induced state \({}^{178}\)Hf\({}_{j}\) (4th column). The relative detector efficiencies normalized to that for the 495-keV \(\gamma\), \(\epsilon_{\gamma}^{(j)}/\epsilon_{495}\), were obtained from [18] and are assumed to carry uncorrelated uncertainties of \(\pm 20\%\). The isomeric state of \({}^{178m}\)Hf at 2446.1 keV has \(J^{\pi}=16^{+}\) and \(K=16\).
Figure 2: The observed spectrum from the \({}^{178m}\)Hf sample for a live time of 974.2 s. The inset shows the spectrum surrounding 1330 keV. A dark matter-induced \(\gamma\) with this energy proved to be the most sensitive test.
transition is simply related to (4) via
\[T_{1/2}^{(j)}=\frac{\log 2}{\sigma_{inel}^{(j)}\,\Phi_{\chi}}. \tag{5}\]
By performing a profiled log-likelihood fit of the signal strength for each of the 11 \(\gamma\) lines considered, we obtained 90% confidence level (C.L.) limits on the half-lives \(T_{1/2}^{(j)}\), given in Table 2.
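To make the chain of Eqs. (3)-(5) concrete, the following counting-style sketch converts an assumed 90% C.L. upper limit on signal counts for the 1330-keV line (the \(j=2\) entries of Table 2) into a half-life limit. The count limit used below is a placeholder chosen only for illustration; the limits quoted in Table 2 come from the profiled log-likelihood fit.

```python
# Half-life limit from Eqs. (3)-(5) for an assumed upper limit on signal counts.
import numpy as np

tau_isomer = 1.41e9 / 3.156e7        # isomer lifetime converted to years
S_495, p_495 = 96808, 0.736          # observed 495-keV counts and emission probability
b, A, eps_rel = 0.57, 0.86, 0.44     # j = 2 row of Table 2 (eps_rel = eps_gamma/eps_495)

S_up = 8.0                           # hypothetical 90% C.L. limit on 1330-keV signal counts

T_half = np.log(2) * tau_isomer * (S_495 / S_up) * (b * A * eps_rel) / p_495
print(f"T_1/2 > {T_half:.2e} yr")    # of order 1e5 yr, the scale of Table 2
```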
The inelastic cross section for the process in (1), \(\sigma_{inel}^{(j)}\), can be related to the model-dependent dark matter-nucleon cross section \(\sigma_{n}\) using the formalism of [14]. We used this relation to translate our limits into constraints on the parameter space of inelastic dark matter (iDM), namely, \(\sigma_{n}\) versus the iDM mass splitting \(\delta M_{\chi}\) for a benchmark iDM mass of \(M_{\chi}=1\) TeV. Note that the transition with the most strongly constrained half-life does not necessarily provide the strongest constraint on \(\sigma_{n}\). That is because the nuclear form factor, which suppresses the transition rate in (1), depends not only on the momentum transfer \(q\) and change in angular momentum \(\Delta J\), but also on \(K\)-selection rules. Specifically, each nuclear state has a \(K\)-quantum number given by the projection of its angular momentum on its symmetry axis (see Table 2), and transitions with \(\Delta K\) greater than the multipolarity of the emitted radiation suffer from an additional suppression, the so-called "\(K\)-hindrance". Among the 11 transitions considered, \(j=2\) provides the strongest constraint on \(\sigma_{n}\), since it has the second smallest \(\Delta K(=11)\), and the observed counts for its associated \(\gamma\) line of 1330 keV showed a \(\sim\)2\(\sigma\) deficit relative to the background expectation. We can contrast the constraining power of \(j=2\) with that of other transitions. For example, while \(j=8\) has the smallest \(\Delta K(=8)\), its associated 333.4 keV \(\gamma\) line lies in a region with substantial backgrounds (\(\sim 2\) orders of magnitude larger than the backgrounds for the other lines), which significantly weakens its dark matter constraining power. As another example, transitions \(j=1\) and \(j=10\) have the most strongly constrained half-lives, but also the largest \(\Delta K(=16)\), which suppresses their rate for \(\delta M_{\chi}\gtrsim 600\) keV. For further technical details, we refer the reader to [14].
Finally, we performed a joint log-likelihood fit of the signal strength for all the 11 \(\gamma\)s combined. Our results are shown in Fig. 4.
For \(\delta M_{\chi}\gtrsim 640\) keV, our experimental limit on \(\sigma_{n}\) is the best to date, although dark matter models with such large cross sections would necessarily come from composite dynamics and might face model-building challenges. The strongest competing constraint in the large mass-splitting region \(\delta M_{\chi}\gtrsim 400\) keV comes from searches
Figure 3: The level diagram of \({}^{178m}\)Hf. To the left (and in black) are the transitions that occur from the SM radioactive decay of the isotope. To the right (and in blue), we illustrate two dark matter-induced transitions of significance: (i) the half-life of the \(16^{+}\to 0^{+}\) transition (associated with the 1679 keV \(\gamma\)) was the most strongly constrained by this work; (ii) the \(16^{+}\to 5^{-}\) transition (associated with the 1330 keV \(\gamma\)) provided the strongest constraint on the dark matter-nucleon cross section, \(\sigma_{n}\), among the 11 \(\gamma\)s considered. This figure was adapted from Fig. 1 of [17].
Figure 4: The 90% C.L. exclusion limits on the parameter space of inelastic dark matter, assuming a standard halo model with local dark matter density \(\rho_{\chi}=0.3\,\text{GeV/cm}^{3}\) and galactic escape velocity \(v_{\text{esc}}=600\) km/s. The black curve shows the combined limit from all 11 \(\gamma\)s, and the shaded gray regions show previous existing limits [21; 22; 23; 24; 25; 26; 27]. The 11 colored curves show the limits from each individual \(\gamma\) line in Table 2. Following the \(j\)-label convention of Table 2, and fixing \(\delta M_{\chi}=700\) keV, the order of the 11 colored curves, from stronger to weaker exclusion, is: \(j=2\), 3, 7, 1, 8, 5, 6, 4, 11, 10, 9.
using the tantalum metastable state \({}^{180m}\)Ta, which has a very long half-life and is naturally abundant, albeit with a low isotopic fraction. This has enabled experiments with samples containing a large number of \({}^{180m}\)Ta nuclei, resulting in interesting limits on SIDM and iDM models [22; 28; 29], as well as ongoing experiments with significantly improved reach [30]. Still, the comparatively lower metastable energy of \({}^{180m}\)Ta (76.3 keV, in contrast to 2446 keV for \({}^{178m}\)Hf) limits its sensitivity to \(\delta M_{\chi}\lesssim 600\) keV. Other existing experimental results are described in [21], with specific limits derived from data for PbWO\({}_{4}\)[23], CaWO\({}_{4}\)[24], CRESST-II [25], PICO-60 [26], and XENONnT [27].
A number of improvements and options for future measurements are possible. For a repeat of the measurements presented here, a longer run time and the use of a shielded detector could improve sensitivity by a factor of \(\sim\)10. A high-efficiency Ge-detector array similar to AGATA [31]--with a large solid angle acceptance and detectors distant enough from the source so as to have a manageable rate--could further improve sensitivity by an additional factor of \(\sim\)100.
The Hf measurements reported here were performed at a surface site, and therefore were not sensitive to viable parameter space in SIDM. By deploying the Hf sample and a detector underground and repeating these measurements, perhaps at multiple depths, one could probe the effects of a dark matter traffic jam [14] in models of SIDM (both elastic and inelastic).
Ideally, one would like a very large sample of the Hf isomer. For this Hf sample, 1 kg of Ta was irradiated for 60 days with an 800 MeV, 350 \(\mu\)A proton beam. The Ta target was used as a dedicated beam stop from which the Hf was extracted by radiochemistry [15]. If feasible, processing additional targets could produce a large quantity of \({}^{178m}\)Hf; however, the cost of the required radiochemistry would have to be weighed against the science reach. Furthermore, practical difficulties associated with the high radioactivity of the sample would need to be overcome, for example with a fast, highly efficient detection system, in order to fully exploit the science scope of such measurements.
An alternative experimental setup could be arranged to search for the decay of the upscattered dark matter particle. Specifically, dark matter particles could inelastically scatter off \({}^{178m}\)Hf to an excited state, and decay promptly or with \(\mathcal{O}\)(m) displacements via emission of a monochromatic \(\gamma\), which might be observed in a nearby detector (see, e.g., [32].) For this geometry, a large sample of \({}^{178m}\)Hf (such as beam stops that have been irradiated for extended periods) could be located some distance from a detector. A Ge detector could be sited nearby to search for such an anomalous \(\gamma\). While a number of old tungsten and tantalum-cladded tungsten beam stops are stored on location behind shielding, our early assessment is that none has enough \({}^{178m}\)Hf for a useful measurement. A dedicated production would be required.
###### Acknowledgements.
We thank Evelyn Bond and Athena Marie Marenco for assisting with access to the sample. We gratefully acknowledge support from the U.S. Department of Energy Office of Science, and from the Los Alamos National Laboratory's Directed Research and Development (LDRD) Program for this work.
|
2308.12954 | Homotopy lifting maps on Hochschild cohomology and connections to
deformation of algebras using reduction systems | We describe the Gerstenhaber bracket structure on Hochschild cohomology of
Koszul quiver algebras in terms of homotopy lifting maps. There is a projective
bimodule resolution of Koszul quiver algebras that admits a comultiplicative
structure. Introducing new scalars, we describe homotopy lifting maps
associated to Hochschild cocycles using the comultiplicative structure. We show
that the scalars can be described by some recurrence relations and we give
several examples where these scalars appear in the literature. In particular,
for a member of a family of quiver algebras, we describe Hochschild 2-cocycles
and their associated homotopy lifting maps and determine the Maurer-Cartan
elements of the quiver algebra in two ways: (i) by the use of homotopy lifting
maps and (ii) by the use of a combinatorial star product that arises from the
deformation of algebras using reduction systems. | Tolulope Oke | 2023-08-24T17:46:27Z | http://arxiv.org/abs/2308.12954v1 | Homotopy lifting maps on Hochschild cohomology and connections to deformation of algebras using reduction systems
###### Abstract.
We describe the Gerstenhaber bracket structure on Hochschild cohomology of Koszul quiver algebras in terms of homotopy lifting maps. There is a projective bimodule resolution of Koszul quiver algebras that admits a comultiplicative structure. Introducing new scalars, we describe homotopy lifting maps associated to Hochschild cocycles using the comultiplicative structure. We show that the scalars can be described by some recurrence relations and we give several examples where these scalars appear in the literature. In particular, for a member of a family of quiver algebras, we describe Hochschild \(2\)-cocycles and their associated homotopy lifting maps and determine the Maurer-Cartan elements of the quiver algebra in two ways: (i) by the use of homotopy lifting maps and (ii) by the use of a combinatorial star product that arises from the deformation of algebras using reduction systems.
2020 Mathematics Subject Classification: Primary 16E40; 16S37; 16S80; 16S15.

The author thanks Severin Barmeier for useful discussions and for assisting with the calculations in Section 6.
## 1. Introduction
The Hochschild cohomology \(\operatorname{HH}^{*}(\Lambda)\) of an associative algebra \(\Lambda\) possesses a multiplicative map called the cup product making it into a graded commutative ring. The ring structure of the Hochschild cohomology of certain path algebras was determined using quiver techniques. For instance, if a path algebra is Koszul, its resolution possesses a comultiplicative structure and the cup product structure on its Hochschild cohomology can be presented using this comultiplicative structure. This cup product was described in [6].
In addition to the cup product, the Hochschild cohomology ring carries the Gerstenhaber bracket, which makes \(\operatorname{HH}^{*}(\Lambda)\) into a graded Lie/Gerstenhaber algebra. The bracket plays an important role in the theory of deformation of algebras. The theory of deformation of algebras employs techniques in algebraic and noncommutative geometry to describe variations of the associative
multiplicative structure on any algebra. In a recent article, S. Barmeier and Z. Wang [1] have introduced a technique for finding families of deformation of quiver algebras with relations using _reduction_ systems. Reduction systems were introduced by Bergman [2] in the late seventies. In particular, for an algebra \(\Lambda=kQ/I\) where \(Q\) is a finite quiver, there is associated a reduction system \(R\) useful in determining a projective bimodule resolution of \(\Lambda\) reminiscent of the Bardzell resolution [4]. It was shown in [1] that there is an equivalence of formal deformations between (i) deformations of the associative algebra \(\Lambda\), (ii) deformations of the reduction system \(R\) and (iii) deformations of the relations in \(I\).
The Gerstenhaber bracket can be difficult to compute in general settings. Several works have been carried out to interpret the bracket as well as to make computations of the bracket accessible for a large class of algebras, for instance in [9, 7]. In [16], Y. Volkov introduced a method in which the bracket is defined in terms of a homotopy lifting map. This method works for any arbitrary projective bimodule resolution of the algebra. In earlier work [13], we presented a general formula for homotopy lifting maps associated to cocycles on Hochschild cohomology of Koszul path algebras.
The resolution introduced in [6] had scalars \(c_{p,j}(n,i,r)\) appearing in the definition of the differentials on the resolution. These scalars made it possible to give a closed formula for the cup product structure on Hochschild cohomology. In Section 3, we present new scalars \(b_{m,r}(m-n+1,s)\) associated to homotopy lifting maps on Hochschild cocycles using the scalars \(c_{p,j}(n,i,r)\) of the comultiplicative relations. We show that the scalars \(b_{m,r}(m-n+1,s)\) can be described using some recurrence relations and present the Gerstenhaber bracket structure using these scalars. We give several examples where these scalars appear in the literature in Section 4. In Section 5, we introduce a family of quiver algebras that has been extensively studied in [12, 13]. For the algebra \(A_{1}\) from the family, we find Hochschild 2-cocycles and their associated homotopy lifting maps. We show that \(\mathrm{HH}^{2}(A_{1})\) is generated as a vector space by five Maurer-Cartan elements.
Relevant results about Hochschild cohomology, the Gerstenhaber bracket, quiver algebras and deformation of algebras using reduction systems are recalled in the preliminaries. The deformation of an algebra involves altering the associative multiplicative structure on the algebra. The candidates for determining how the multiplicative structure is altered can be described using a combinatorial star product \((\star)\) [1], and this product can be used to describe Maurer-Cartan elements. In Section 6, we describe the Maurer-Cartan elements of the algebra \(A_{1}\) using \((\star)\), showing that \(\operatorname{HH}^{2}(A_{1})\) is a five-dimensional \(k\)-vector space.
## 2. Preliminaries
**The Hochschild cohomology** of an associative \(k\)-algebra \(\Lambda\) was originally defined using the following projective resolution known as the bar
resolution.
\[\mathbb{B}_{\bullet}:\qquad\cdots\to\Lambda^{\otimes(n+2)}\xrightarrow{\delta_{n}} \Lambda^{\otimes(n+1)}\xrightarrow{\delta_{n-1}}\cdots\xrightarrow{\delta_{2}} \Lambda^{\otimes 3}\xrightarrow{\delta_{1}}\Lambda^{\otimes 2}\;(\;\xrightarrow{\mu}\Lambda) \tag{2.1}\]
where \(\mu\) is multiplication and the differentials \(\delta_{n}\) are given by
\[\delta_{n}(a_{0}\otimes a_{1}\otimes\cdots\otimes a_{n+1})=\sum_{i=0}^{n}(-1) ^{i}a_{0}\otimes\cdots\otimes a_{i}a_{i+1}\otimes\cdots\otimes a_{n+1} \tag{2.2}\]
for all \(a_{0},a_{1},\dots,a_{n+1}\in\Lambda\). This resolution consists of \(\Lambda\)-bimodules or left modules over the enveloping algebra \(\Lambda^{e}=\Lambda\otimes\Lambda^{op}\), where \(\Lambda^{op}\) is the opposite algebra. The resolution is sometimes written \(\mathbb{B}_{\bullet}\xrightarrow{\mu}\Lambda\) with \(\mu\) referred to as the augmentation map. Let \(M\) be a finitely generated left \(\Lambda^{e}\)-module, the Hochschild cohomology of \(\Lambda\) with coefficients in \(M\) denoted \(\operatorname{HH}^{*}(\Lambda,M)\) is obtained by applying the functor \(\operatorname{Hom}_{\Lambda^{e}}(-,M)\) to the complex \(\mathbb{B}_{\bullet}\), and then taking the cohomology of the resulting cochain complex. That is
\[\operatorname{HH}^{*}(\Lambda,M):=\bigoplus_{n\geq 0}\operatorname{HH}^{n}( \Lambda,M)=\bigoplus_{n\geq 0}\operatorname{H}^{n}(\operatorname{Hom}_{ \Lambda^{e}}(\mathbb{B}_{n},M)).\]
If we let \(M=\Lambda\), we then define \(\operatorname{HH}^{*}(\Lambda):=\operatorname{HH}^{*}(\Lambda,\Lambda)\) to be the Hochschild cohomology of \(\Lambda\). An element \(\chi\in\operatorname{Hom}_{\Lambda^{e}}(\mathbb{B}_{m},\Lambda)\) is a cocycle if \((\delta^{*}(\chi))(\cdot):=\chi\delta(\cdot)=0.\) There is an isomorphism of the abelian groups \(\operatorname{Hom}_{\Lambda^{e}}(\mathbb{B}_{m},\Lambda)\cong\operatorname{ Hom}_{k}(\Lambda^{\otimes m},\Lambda)\), so we can also view \(\chi\) as an element of \(\operatorname{Hom}_{k}(\Lambda^{\otimes m},\Lambda)\).
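As a quick check of the formula (2.2) in low degrees (a worked instance added here for illustration), associativity of \(\Lambda\) gives \(\delta_{1}\delta_{2}=0\):

\[\delta_{1}\delta_{2}(a_{0}\otimes a_{1}\otimes a_{2}\otimes a_{3})=\delta_{1}\big{(}a_{0}a_{1}\otimes a_{2}\otimes a_{3}-a_{0}\otimes a_{1}a_{2}\otimes a_{3}+a_{0}\otimes a_{1}\otimes a_{2}a_{3}\big{)}\]
\[=(a_{0}a_{1}a_{2}\otimes a_{3}-a_{0}a_{1}\otimes a_{2}a_{3})-(a_{0}a_{1}a_{2}\otimes a_{3}-a_{0}\otimes a_{1}a_{2}a_{3})+(a_{0}a_{1}\otimes a_{2}a_{3}-a_{0}\otimes a_{1}a_{2}a_{3})=0,\]

and the same cancellation pattern works in every degree.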
**The Gerstenhaber bracket** of two cocycles \(\chi\in\operatorname{Hom}_{k}(\Lambda^{\otimes m},\Lambda)\) and \(\theta\in\operatorname{Hom}_{k}(\Lambda^{\otimes n},\Lambda)\) at the chain level is given by
\[[\chi,\theta]=\chi\circ\theta-(-1)^{(m-1)(n-1)}\theta\circ\chi \tag{2.3}\]
where \(\chi\circ\theta=\sum_{j=1}^{m}(-1)^{(n-1)(j-1)}\chi\circ_{j}\theta\) with
\[(\chi\circ_{j}\theta)(a_{1}\otimes\cdots\otimes a_{m+n-1})\\ =\chi(a_{1}\otimes\cdots\otimes a_{j-1}\otimes\theta(a_{j}\otimes \cdots\otimes a_{j+n-1})\otimes a_{j+n}\otimes\cdots\otimes a_{m+n-1}).\]
This induces a well defined map \([\cdot\;,\cdot]:\operatorname{HH}^{m}(\Lambda)\times\operatorname{HH}^{n}( \Lambda)\to\operatorname{HH}^{m+n-1}(\Lambda)\) on cohomology.
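For instance, for two \(2\)-cocycles \(\chi,\theta\in\operatorname{Hom}_{k}(\Lambda^{\otimes 2},\Lambda)\) (so \(m=n=2\) and \((-1)^{(m-1)(n-1)}=-1\)), unwinding the definition gives the \(3\)-cochain

\[[\chi,\theta](a\otimes b\otimes c)=\chi(\theta(a\otimes b)\otimes c)-\chi(a\otimes\theta(b\otimes c))+\theta(\chi(a\otimes b)\otimes c)-\theta(a\otimes\chi(b\otimes c)),\]

which is the form of the bracket that appears in the Maurer-Cartan equation used later.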
**Gerstenhaber bracket using homotopy lifting:** We present an equivalent definition of the Gerstenhaber bracket, introduced by Y. Volkov in [16] and reformulated with a sign change by S. Witherspoon; it is restated in Theorem 2.4 below. We assume that \(A\) is an algebra over the field \(k\) and take \(\mathbb{P}\xrightarrow{\mu_{\mathbb{P}}}A\) to be a projective resolution of \(A\) as an \(A^{e}\)-module with differential \(d^{\mathbb{P}}\) and augmentation map \(\mu_{\mathbb{P}}\). We take \(\mathbf{d}\) to be the differential on the Hom complex \(\operatorname{Hom}_{A^{e}}(\mathbb{P},\mathbb{P})\) defined for any degree \(n\) map \(g:\mathbb{P}\to\mathbb{P}[-n]\) as
\[\mathbf{d}(g):=d^{\mathbb{P}}g-(-1)^{n}gd^{\mathbb{P}}\]
where \(\mathbb{P}[-n]\) is a shift in homological dimension with \((\mathbb{P}[-n])_{m}=\mathbb{P}_{m-n}\). In the following definition, the notation \(\sim\) is used for two cocycles that are cohomologous, that is, they differ by a coboundary.
**Definition 2.1**.: Let \(\Delta_{\mathbb{P}}:\mathbb{P}\to\mathbb{P}\otimes_{A}\mathbb{P}\) be a chain map lifting the identity map on \(A\cong A\otimes_{A}A\) and suppose that \(\eta\in\operatorname{Hom}_{A^{e}}(\mathbb{P}_{n},A)\) is a cocycle. A module homomorphism \(\psi_{\eta}:\mathbb{P}\to\mathbb{P}[1-n]\) is called a **homotopy lifting** map of \(\eta\) with respect to \(\Delta_{\mathbb{P}}\) if
\[\mathbf{d}(\psi_{\eta})=(\eta\otimes 1_{\mathbb{P}}-1_{\mathbb{P}}\otimes\eta)\Delta_{\mathbb{P}}\qquad\text{ and }\] \[\mu_{\mathbb{P}}\psi_{\eta}\sim\;(-1)^{n-1}\eta\psi \tag{2.4}\]
for some \(\psi:\mathbb{P}\to\mathbb{P}[1]\) for which \(\mathbf{d}(\psi)=(\mu_{\mathbb{P}}\otimes 1_{\mathbb{P}}-1_{\mathbb{P}}\otimes \mu_{\mathbb{P}})\Delta_{\mathbb{P}}.\)
**Example 2.2**.: Let us consider a homotopy lifting formula for a cocycle \(\beta\) using the bar resolution \(\mathbb{B}\). Suppose that \(\beta\in\operatorname{Hom}_{A^{e}}(\mathbb{B}_{n},A)\cong\operatorname{Hom} _{k}(A^{\otimes n},A)\), then one way to define a homotopy lifting map \(\psi_{\beta}:\mathbb{B}\to\mathbb{B}[1-n]\) for the cocycle \(\beta\) is the following:
\[\psi_{\beta}(1\otimes a_{1}\otimes\cdots\otimes a_{m+n-1}\otimes 1)\] \[=\sum_{i=1}^{m}(-1)^{(m-1)(i-1)}1\otimes a_{1}\otimes\cdots\otimes a_{i-1}\otimes\beta(a_{i}\otimes\cdots\otimes a_{i+n-1})\otimes a_{i+n}\otimes\cdots\otimes a_{m+n-1}\otimes 1.\]
We compute an example in which \(\beta\in\operatorname{Hom}_{k}(\Lambda^{\otimes 2},\Lambda)\). In degree \(3\), \(\psi_{\beta}:\mathbb{B}_{3}\to\mathbb{B}_{2}\). Using the differentials on the bar resolution given in Equation (2.2) and the diagonal map \(\Delta_{\mathbb{B}}\) later given in Equation (2.8), we have
\[\delta\psi_{\beta}(1\otimes a_{1}\otimes a_{2}\otimes a_{3} \otimes 1)=\beta(a_{1}\otimes a_{2})\otimes a_{3}\otimes 1-1\otimes\beta(a_{1} \otimes a_{2})a_{3}\otimes 1\] \[+1\otimes\beta(a_{1}\otimes a_{2})\otimes a_{3}-a_{1}\otimes \beta(a_{2}\otimes a_{3})\otimes 1\] \[+1\otimes a_{1}\beta(a_{2}\otimes a_{3})\otimes 1-1\otimes a_{1} \otimes\beta(a_{2}\otimes a_{3})\qquad\text{and}\] \[\psi_{\beta}\delta(1\otimes a_{1}\otimes a_{2}\otimes a_{3} \otimes 1)=a_{1}\otimes\beta(a_{2}\otimes a_{3})\otimes 1-1\otimes\beta(a_{1}a_{2} \otimes a_{3})\otimes 1\] \[+1\otimes\beta(a_{1}\otimes a_{2}a_{3})\otimes 1-1\otimes\beta(a_{1} \otimes a_{2})\otimes a_{3}.\]
Therefore \((\delta\psi_{\beta}+\psi_{\beta}\delta)(1\otimes a_{1}\otimes a_{2}\otimes a_ {3}\otimes 1)=\beta(a_{1}\otimes a_{2})\otimes a_{3}\otimes 1-1\otimes a_{1}\otimes\beta (a_{2}\otimes a_{3})\). On the other hand,
\[(\beta\otimes 1-1\otimes\beta)\Delta_{\mathbb{B}}(1\otimes a_{1} \otimes a_{2}\otimes a_{3}\otimes 1)\] \[=(\beta\otimes 1-1\otimes\beta)\Big{(}(1\otimes 1)\otimes_{ \Lambda}(1\otimes a_{1}\otimes a_{2}\otimes a_{3}\otimes 1)+\] \[(1\otimes a_{1}\otimes 1)\otimes_{\Lambda}(1\otimes a_{2} \otimes a_{3}\otimes 1)+(1\otimes a_{1}\otimes a_{2}\otimes 1)\otimes_{ \Lambda}(1\otimes a_{3}\otimes 1)\] \[+(1\otimes a_{1}\otimes a_{2}\otimes a_{3}\otimes 1)\otimes_{ \Lambda}(1\otimes 1)\Big{)}\] \[=\beta(a_{1}\otimes a_{2})\otimes a_{3}\otimes 1-1\otimes a_{1} \otimes\beta(a_{2}\otimes a_{3}).\]
So we see that Equation (2.4) holds in degree \(3\) i.e.
\[\delta\psi_{\beta}-(-1)^{2-1}\psi_{\beta}\delta=(\beta\otimes 1-1\otimes\beta) \Delta_{\mathbb{B}}.\]
**Remark 2.3**.: Suppose that \(\mathbb{K}\) is the Koszul resolution, then it is a differential graded coalgebra i.e. \((\Delta_{\mathbb{K}}\otimes 1_{\mathbb{K}})\Delta_{\mathbb{K}}=(1_{\mathbb{K}} \otimes\Delta_{\mathbb{K}})\Delta_{\mathbb{K}}\) and \((d\otimes 1+1\otimes d)\Delta_{\mathbb{K}}=\Delta_{\mathbb{K}}d\). Furthermore, the augmentation map \(\mu:\mathbb{K}\to\Lambda\) makes \((\mu\otimes 1_{\mathbb{K}})\Delta_{\mathbb{K}}-(1_{\mathbb{K}}\otimes\mu)\Delta_{ \mathbb{K}}=0.\) We can therefore set \(\psi=0\) in the second
part of Equation (2.4), so that we have \(\mu\psi_{\eta}\sim 0.\) Next, we set \(\psi_{\eta}(\mathbb{K}_{n-1})=0\) and the second relation of Equation (2.4) is satisfied. To check whether a map is a homotopy lifting map, it is therefore sufficient to verify the first equation in (2.4) when the resolution is Koszul.
The following is a theorem of Y. Volkov which is equivalent to the definition of the bracket presented earlier in Equation (2.3).
**Theorem 2.4**.: _[_16_, Theorem 4]_ _Let \((\mathbb{P},\mu_{\mathbb{P}})\) be an \(A^{e}\)-projective resolution of the algebra \(A\), and let \(\Delta_{\mathbb{P}}:\mathbb{P}\to\mathbb{P}\otimes_{A}\mathbb{P}\) be a diagonal map. Let \(\eta:\mathbb{P}_{n}\to A\) and \(\theta:\mathbb{P}_{m}\to A\) be cocycles representing two classes. Suppose that \(\psi_{\eta}\) and \(\psi_{\theta}\) are homotopy liftings for \(\eta\) and \(\theta\) respectively. Then the Gerstenhaber bracket of the classes of \(\eta\) and \(\theta\) can be represented by the class of the element_
\[[\eta,\theta]_{\Delta_{\mathbb{P}}}=\eta\psi_{\theta}-(-1)^{(m-1)(n-1)}\theta \psi_{\eta}.\]
**Quiver algebras:** A quiver is a directed graph with the allowance of loops and multiple arrows. A quiver \(Q\) is sometimes denoted as a quadruple \((Q_{0},Q_{1},o,t)\) where \(Q_{0}\) is the set of vertices in \(Q\), \(Q_{1}\) is the set of arrows in \(Q\), and \(o,t:Q_{1}\longrightarrow Q_{0}\) are maps which assign to each arrow \(a\in Q_{1}\), its origin vertex \(o(a)\) and terminal vertex \(t(a)\) in \(Q_{0}\). A path in \(Q\) is a sequence of arrows \(a=a_{1}a_{2}\cdots a_{n-1}a_{n}\) such that the terminal vertex of \(a_{i}\) is the same as the origin vertex of \(a_{i+1}\), using the convention of concatenating paths from left to right. The quiver algebra or path algebra \(kQ\) is defined as a vector space having all paths in \(Q\) as a basis. Vertices are regarded as paths of length \(0\), an arrow is a path of length \(1\), and so on. We take multiplication on \(kQ\) as concatenation of paths. Two paths \(a\) and \(b\) satisfy \(ab=0\) if \(t(a)\neq o(b)\). This multiplication defines an associative algebra over \(k\). By taking \(kQ_{i}\) to be the \(k\)-vector subspace of \(kQ\) with paths of length \(i\) as basis, \(kQ=\bigoplus_{i\geq 0}kQ_{i}\) can be viewed as an \(\mathbb{N}\)-graded vector space. Two paths are parallel if they have the same origin and terminal vertex. A relation on a quiver \(Q\) is a linear combination of parallel paths in \(Q\). A quiver together with a set of relations is called a quiver with relations. Letting \(I\) be an ideal of the path algebra \(kQ\), we denote by \((Q,I)\) the quiver \(Q\) with relations \(I\). The quotient \(\Lambda=kQ/I\) is called the quiver algebra associated with \((Q,I)\). Suppose that \(\Lambda\) is graded by positive integers and is Koszul, the degree \(0\) component \(\Lambda_{0}\) is isomorphic to \(k\) or copies of \(k\) and \(\Lambda_{0}\) has a linear graded projective resolution \(\mathbb{L}\) as a right \(\Lambda\)-module [15, 8].
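As a small computational aid (a minimal sketch added for illustration, not part of the original text), multiplication in \(kQ\) with the convention \(ab=0\) when \(t(a)\neq o(b)\) can be modelled directly on the path basis; arrows are assumed to be given as (name, origin, target) triples, and vertices (paths of length \(0\)) are omitted for brevity.

```python
from collections import defaultdict

# An arrow is a (name, origin, target) triple; a path of length >= 1 is a tuple of
# arrows with matching endpoints; an element of kQ is a dict {path: scalar}.

def compose(p, q):
    """Concatenate two paths, or return None when t(p) != o(q)."""
    return p + q if p[-1][2] == q[0][1] else None

def multiply(x, y):
    """Multiplication in kQ: bilinear extension of concatenation of basis paths."""
    result = defaultdict(int)
    for p, cp in x.items():
        for q, cq in y.items():
            pq = compose(p, q)
            if pq is not None:
                result[pq] += cp * cq
    return dict(result)

# Example: arrows a: 1 -> 2 and b: 2 -> 3.
a, b = ("a", 1, 2), ("b", 2, 3)
print(multiply({(a,): 1}, {(b,): 1}))  # the path ab, with coefficient 1
print(multiply({(b,): 1}, {(a,): 1}))  # {} since t(b) = 3 differs from o(a) = 1
```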
An algorithmic approach to finding such a minimal projective resolution \(\mathbb{L}\) of \(\Lambda_{0}\) was given in [5]. The modules \(\mathbb{L}_{n}\) are right \(\Lambda\)-modules for each \(n\). There is a "comultiplicative structure" on \(\mathbb{L}\) and this structure was used to find a minimal projective resolution \(\mathbb{K}\to\Lambda\) of modules over the enveloping algebra of \(\Lambda\) in [6]. A non-zero element \(x\in kQ\) is called uniform if it is a linear combination of paths each having the same origin vertex and the same terminal vertex: in other words, \(x=\sum_{j}c_{j}w_{j}\) with scalars \(c_{j}\neq 0\) for all \(j\), where the paths \(w_{j}\) are of equal length and have the same origin vertex and the same terminal vertex. For \(R=kQ\), it was shown in [5] that there
are integers \(t_{n}\) and uniform elements \(\{f_{i}^{n}\}_{i=0}^{t_{n}}\) such that the right projective resolution \(\mathbb{L}\to\Lambda_{0}\) is obtained from a filtration of \(R\). This filtration is given by the following nested family of right ideals:
\[\cdots\subseteq\bigoplus_{i=0}^{t_{n}}f_{i}^{n}R\subseteq\bigoplus_{i=0}^{t_{n -1}}f_{i}^{n-1}R\subseteq\cdots\subseteq\bigoplus_{i=0}^{t_{1}}f_{i}^{1}R \subseteq\bigoplus_{i=0}^{t_{0}}f_{i}^{0}R=R\]
where for each \(n\), \(\mathbb{L}_{n}=\bigoplus_{i=0}^{t_{n}}f_{i}^{n}R/\bigoplus_{i=0}^{t_{n}}f_{i}^ {n}I\) and the differentials on \(\mathbb{L}\) are induced by the inclusions \(\bigoplus_{i=0}^{t_{n}}f_{i}^{n}R\subseteq\bigoplus_{i=0}^{t_{n-1}}f_{i}^{n-1}R\). Furthermore, it was shown in [5] that with some choice of scalars, the \(\{f_{i}^{n}\}_{i=0}^{t_{n}}\) satisfying the comultiplicative equation of (2.5) make \(\mathbb{L}\) minimal. In other words, for \(0\leq i\leq t_{n}\), there are scalars \(c_{pq}(n,i,r)\) such that
\[f_{i}^{n}=\sum_{p=0}^{t_{r}}\sum_{q=0}^{t_{n-r}}c_{pq}(n,i,r)f_{p}^{r}f_{q}^{n- r} \tag{2.5}\]
holds and \(\mathbb{L}\) is a minimal resolution. To construct the above comultiplicative equation, for example, we can take \(\{f_{i}^{0}\}_{i=0}^{t_{0}}\) to be the set of vertices, \(\{f_{i}^{1}\}_{i=0}^{t_{1}}\) to be the set of arrows, \(\{f_{i}^{2}\}_{i=0}^{t_{2}}\) to be the set of uniform relations generating the ideal \(I\), and define \(\{f_{i}^{n}\}_{i=0}^{t_{n}}\) \((n\geq 3)\) recursively, that is, in terms of \(f_{i}^{n-1}\) and \(f_{j}^{1}\). We presented the comultiplicative structure of a family of quiver algebras in [12] and used the homotopy lifting technique to show that for some members of the family, the Hochschild cohomology ring modulo the weak Gerstenhaber ideal generated by homogeneous nilpotent elements is not finitely generated.
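In the special case of a quadratic _monomial_ algebra, this recursion has a particularly simple description: one convenient choice is to let the \(f_{i}^{n}\) be the paths of length \(n\) all of whose consecutive arrow pairs are relations. The following is a minimal sketch of that special case (an illustration under this extra monomial assumption, not a construction taken from this paper).

```python
def chain_paths(arrows, rel_pairs, n_max):
    """Quadratic monomial case: candidate elements f_i^n (n >= 1) are the paths
    a_1...a_n in which every consecutive pair (a_j, a_{j+1}) is a monomial relation.
    arrows: iterable of arrow names; rel_pairs: set of (name, name) relations."""
    paths = {1: [(a,) for a in arrows]}
    for n in range(2, n_max + 1):
        paths[n] = [p + (a,) for p in paths[n - 1]
                    for a in arrows if (p[-1], a) in rel_pairs]
    return paths

# Example: one loop x with relation x^2 gives a single f^n = x...x in each degree.
print(chain_paths(["x"], {("x", "x")}, 4))
```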
The resolution \(\mathbb{L}\) and the comultiplicative structure (2.5) were used to construct a minimal projective resolution \(\mathbb{K}\to\Lambda\) of modules over the enveloping algebra \(\Lambda^{e}=\Lambda\otimes\Lambda^{op}\) on which we now define Hochschild cohomology. This minimal projective resolution \(\mathbb{K}\) of \(\Lambda^{e}\)-modules associated to \(\Lambda\) was given in [6] and now restated with slight notational changes below.
**Theorem 2.5**.: _[_6_, Theorem 2.1]_ _Let \(\Lambda=kQ/I\) be a Koszul algebra, and let \(\{f_{i}^{n}\}_{i=0}^{t_{n}}\) define a minimal resolution of \(\Lambda_{0}\) as a right \(\Lambda\)-module. A minimal projective resolution \((\mathbb{K},d)\) of \(\Lambda\) over \(\Lambda^{e}\) is given by_
\[\mathbb{K}_{n}=\bigoplus_{i=0}^{t_{n}}\Lambda o(f_{i}^{n})\otimes_{k}t(f_{i}^ {n})\Lambda\]
_for \(n\geq 0\), where the differential \(d_{n}:\mathbb{K}_{n}\to\mathbb{K}_{n-1}\), applied to the basis element \(\varepsilon_{i}^{n}=(0,\ldots,0,o(f_{i}^{n})\otimes_{k}t(f_{i}^{n}),0,\ldots,0)\), \(0\leq i\leq t_{n}\), with \(o(f_{i}^{n})\otimes_{k}t(f_{i}^{n})\) in the \(i\)-th position, is given by_
\[d_{n}(\varepsilon_{i}^{n})=\sum_{j=0}^{t_{n-1}}\Big{(}\sum_{p=0}^{t_{1}}c_{p,j }(n,i,1)f_{p}^{1}\varepsilon_{j}^{n-1}+(-1)^{n}\sum_{q=0}^{t_{1}}c_{j,q}(n,i,n- 1)\varepsilon_{j}^{n-1}f_{q}^{1}\Big{)} \tag{2.6}\]
_and \(d_{0}:\mathbb{K}_{0}\to\Lambda\) is the multiplication map. In particular, \(\Lambda\) is a linear module over \(\Lambda^{e}\)._
We note that for each \(n\), \(\{\varepsilon_{i}^{n}\}_{i=0}^{t_{n}}\) is a basis of \(\mathbb{K}_{n}\) as a \(\Lambda^{e}\)-module. The scalars \(c_{p,j}(n,i,r)\) are those appearing in (2.5) and \(f_{*}^{1}:=\overline{f_{*}^{1}}\) is the residue class of \(f_{*}^{1}\) in \(\bigoplus_{i=0}^{t_{1}}f_{i}^{1}R/\bigoplus_{i=0}^{t_{1}}f_{i}^{1}I\). Using the comultiplicative structure of Equation (2.5), a cup product formula on the Hochschild cohomology of Koszul quiver algebras was presented in [3] using the resolution \(\mathbb{K}\).
We recall the definition of the reduced bar resolution of algebras defined by quivers and relations. If \(\Lambda_{0}\) is isomorphic to \(m\) copies of \(k\), take \(\{e_{1},e_{2},\ldots,e_{m}\}\) to be a complete set of primitive orthogonal central idempotents of \(\Lambda\). In this case \(\Lambda\) is not necessarily an algebra over \(\Lambda_{0}\). If \(\Lambda_{0}\) is isomorphic to \(k\), then \(\Lambda\) is an algebra over \(\Lambda_{0}\). For convenience, we use the same notation \(\mathbb{B}\) for both the bar resolution and the reduced bar resolution. The reduced bar resolution \((\mathbb{B},\delta)\) has \(\mathbb{B}_{n}:=\Lambda^{\otimes_{\Lambda_{0}}(n+2)}\), the \((n+2)\)-fold tensor product of \(\Lambda\) over \(\Lambda_{0}\), and uses the same differential as the usual bar resolution presented in Equation (2.2). The resolution \(\mathbb{K}\) can be embedded naturally into the reduced bar resolution \(\mathbb{B}\). There is a map \(\iota:\mathbb{K}\to\mathbb{B}\) defined by \(\iota(\varepsilon_{r}^{n})=1\otimes\widetilde{f_{r}^{n}}\otimes 1\) such that \(\delta\iota=\iota d\), where
\[\widetilde{f_{j}^{n}}=\sum c_{j_{1}j_{2}\cdots j_{n}}f_{j_{1}}^{1}\otimes f_{ j_{2}}^{1}\otimes\cdots\otimes f_{j_{n}}^{1}\quad\text{ if }\quad f_{j}^{n}=\sum c_{j_{1}j_{2}\cdots j_{n}}f_{j_{1}}^{1}f_{j_{2}}^{1}\cdots f_{j_{ n}}^{1} \tag{2.7}\]
for some scalar \(c_{j_{1}j_{2}\cdots j_{n}}\). It was shown in [3, Proposition 2.1] that \(\iota\) is indeed an embedding. By taking \(\Delta_{\mathbb{B}}:\mathbb{B}\to\mathbb{B}\otimes_{\Lambda}\mathbb{B}\) to be the following comultiplicative map (or diagonal map) on the bar resolution,
\[\Delta_{\mathbb{B}}(a_{0}\otimes\cdots\otimes a_{n+1})=\sum_{i=0}^{n}(a_{0} \otimes\cdots\otimes a_{i}\otimes 1)\otimes_{\Lambda}(1\otimes a_{i+1} \otimes\cdots\otimes a_{n+1}), \tag{2.8}\]
it was also shown in [3, Proposition 2.2] that the diagonal map \(\Delta_{\mathbb{K}}:\mathbb{K}\to\mathbb{K}\otimes_{\Lambda}\mathbb{K}\) on the complex \(\mathbb{K}\) has the following form.
\[\Delta_{\mathbb{K}}(\varepsilon_{r}^{n})=\sum_{v=0}^{n}\sum_{p=0}^{t_{v}}\sum_{ q=0}^{t_{n-v}}c_{p,q}(n,r,v)\varepsilon_{p}^{v}\otimes_{\Lambda}\varepsilon_{q}^{ n-v}. \tag{2.9}\]
The compatibility of \(\Delta_{\mathbb{K}},\Delta_{\mathbb{B}}\) and \(\iota\) means that \((\iota\otimes\iota)\Delta_{\mathbb{K}}=\Delta_{\mathbb{B}}\iota\) where \((\iota\otimes\iota)(\mathbb{K}\otimes_{\Lambda}\mathbb{K})=\iota(\mathbb{K}) \otimes_{\Lambda}\iota(\mathbb{K})\subseteq\mathbb{B}\otimes_{\Lambda}\mathbb{B}\).
**Deformation of algebras using reduction systems:** The theory of deformation of algebras has a much wider scope than the contents of this article. There have been survey articles covering several aspects of algebraic deformation theory, from deformations arising from noncommutative geometry to formal, infinitesimal and graded deformations. References were made to such articles in [15, chapter 5]. Although many results are known for deformations of commutative algebras, little is known about deformations of quiver and path algebras. Let \(\Lambda=kQ/I\) with \(Q\) a finite quiver. There is associated to \(\Lambda\) a reduction system \(R=\{(s,\varphi_{s})\}\), given by Definition 2.6, and the Gerstenhaber bracket equips Hochschild cohomology \(\operatorname{HH}^{*}(\Lambda)\) with a DG Lie algebra structure which controls the theory of deformation of \(R\).
Using a combinatorial star product, it was shown in [1] that deformations of \(\Lambda\) are equivalent to deformations of the reduction system \(R\), which are also equivalent to deformations of the relations in \(I\). There is a map \(\varphi:kQ\to kIrr_{S}\) defined by \(\varphi(s)=\varphi_{s}\), sending a path in the quiver algebra to an irreducible path. Let \(\pi:kQ\to\Lambda\) be the projection map. The combinatorial star product on \(\Lambda\) is defined on irreducible paths \(u,v\in kQ\): the product \(\pi(u)\star\pi(v)\) is the image in \(\Lambda\) of the path \(uv\in kQ\) after performing right-most _reductions_.
Suppose that \((\Lambda_{\tau},\mu_{\tau})\) is a formal deformation (Definition 2.10) of the associative multiplication on \(\Lambda\), one way to describe the deformed multiplication \(\mu_{\tau}\) is by obtaining a necessary and sufficient condition for the associativity of the combinatorial star product. This condition is given by Equation (2.13). There is a projective bimodule resolution \(\mathfrak{p}(Q,R)\) arising from a reduction system \(R\) and the combinatorial star product can be used to describe Maurer-Cartan elements in \(\mathfrak{p}(Q,R)\otimes\mathfrak{m}\), where \((N,\mathfrak{m})\) is a complete local Noetherian \(k\)-algebra. In Section 6, we use the combinatorial star product to determine Maurer-Cartan elements of \(\operatorname{HH}^{2}(A_{1})\), thus determining a family of deformations of the algebra \(A_{1}\). In the future, it will be interesting to find meaningful ways to describe Maurer-Cartan elements that were obtained using the star product (as in Example 6.1) in terms of those obtained by the homotopy lifting technique (as in Example 5.2) and vice versa. We now give a result from [2] on reduction systems.
**Definition 2.6**.: Let \(\Lambda=kQ/I\) be a path algebra with finite quiver \(Q\). A reduction system \(R\) for \(kQ\) is a set of pairs
\[R=\{(s,\varphi_{s})\mid s\in S,\varphi_{s}\in kQ\}\]
where
* \(S\) is a subset of paths of length greater than or equal to \(2\) such that \(s\) is not a subpath of \(s^{\prime}\) when \(s\neq s^{\prime}\in S\)
* \(s\) and \(\varphi_{s}\) are parallel paths
* for each \((s,\varphi_{s})\in R\), \(\varphi_{s}\) is irreducible or a linear combination of irreducible paths.
We say a path is _irreducible_ if it does not contain elements in \(S\) as a subpath.
**Definition 2.7**.: Given a two-sided ideal \(I\) of \(kQ\), we say that a reduction system \(R\) satisfies the diamond condition (\(\diamond\)) for \(I\) if
* \(I\) is equal to the two-sided ideal generated by the set \(\{s-\varphi_{s}\}_{(s,\varphi_{s})\in R}\) and
* every path is _reduction unique_.
We call a reduction system \(R\) finite if \(R\) is a finite set.
**Definition 2.8**.: Let \(R\) be a reduction system for \(kQ\) and \(p,q,r\) be paths of length at least \(1\). A path \(pqr\) of length at least \(3\) is an _overlap ambiguity
of \(R\) if \(pq,qr\in S\). The set of all paths having one overlap is also the set of all \(1\)-ambiguities and is denoted \(S_{3}\).
We now state the diamond lemma as provided in [2].
**Theorem 2.9**.: _[_2_, Thm 1.2]_ _Let \(R=\{(s,\varphi_{s})\}\) be a reduction system for \(kQ\) and let \(S=\{s\mid(s,\varphi_{s})\in R\}\). Denote by \(I=\langle s-\varphi_{s}\rangle_{s\in S}\) the corresponding two-sided ideal of \(kQ\), and let \(\Lambda=kQ/I\). If \(R\) is reduction finite, the following are equivalent:_
* _all overlap ambiguities of_ \(R\) _are resolvable_
* \(R\) _is reduction unique, that is_ \(R\) _satisfies_ \((\diamond)\) _for_ \(I\)__
* _the image of the set of irreducible paths under the projection_ \(\pi:kQ\to\Lambda\) _forms a_ \(k\)_-basis for_ \(\Lambda\)_._
It is known that for any two-sided ideal \(I\) of the path algebra \(kQ\), there is always a choice of a reduction system \(R\) satisfying the diamond condition \((\diamond)\) of Definition 2.7. This is a result of S. Chouhy and A. Solotar [4, Prop 2.7, Thm 4.1, Thm 4.2]. Furthermore, there is a projective bimodule resolution \(\mathfrak{p}(Q,R)\) associated to the algebra \(\Lambda=kQ/I\) useful in extracting information about Hochschild cohomology.
Let \(B\) be a \(k\)-algebra and let \(\tau\) be an indeterminate. The ring \(B[[\tau]]\) is the ring of formal power series in \(\tau\) with coefficients in \(B\). The ring \(B[[\tau]]\) is a \(k[[\tau]]\)-module if we identify \(k\) with the subalgebra \(k\cdot 1\) of \(B\). The multiplication in \(B\) is usually denoted by concatenation while the multiplication in \(B[[\tau]]\) is given as
\[(\sum_{i\geq 0}a_{i}\tau^{i})(\sum_{j\geq 0}b_{j}\tau^{j})=\sum_{m\geq 0}(\sum_{ i+j=m}a_{i}b_{j}\tau^{m})\]
We are interested in new associative structures on \(B\). Each such associative structure provides a deformation \(B_{\tau}\) of the algebra \(B\).
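To make the order-by-order bookkeeping in \(B[[\tau]]\) concrete, here is a small sketch (an illustration added here; plain numbers stand in for elements of \(B\)) of the truncated product of two power series.

```python
def series_multiply(a, b, order):
    """Multiply two truncated power series in B[[tau]], each given as a list of
    coefficients [a_0, a_1, ...], keeping terms up to tau**order."""
    out = [0] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                out[i + j] += ai * bj
    return out

# (1 + tau)(1 - tau + tau^2) = 1 + tau^3, so truncated at order 2 this is [1, 0, 0].
print(series_multiply([1, 1], [1, -1, 1], 2))
```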
**Definition 2.10**.: A _formal_ deformation \((B_{\tau},\mu_{\tau})\) of \(B\) (also called a deformation of \(B\) over \(k[[\tau]]\)) is an associative bilinear multiplication \(\mu_{\tau}:B[[\tau]]\otimes B[[\tau]]\to B[[\tau]]\) such that in the quotient by the ideal \((\tau)\), the multiplication \(\mu_{\tau}(b_{1},b_{2})\) coincides with the multiplication in \(B\) for all \(b_{1},b_{2}\in B[[\tau]]\).
The multiplication \(\mu_{\tau}\) above is determined by products of pairs of elements of \(B\), so that for every \(a,b\in B\)
\[\mu_{\tau}(a,b)=ab+\mu_{1}(a,b)\tau+\mu_{2}(a,b)\tau^{2}+\mu_{3}(a,b)\tau^{3}+\cdots \tag{2.10}\]
where \(ab\) is the usual multiplication in \(B\) and \(\mu_{i}:B\otimes_{k}B\to B\) are bilinear maps. If we denote the usual multiplication in \(B\) by \(\mu\), we may denote a deformation of \((B,\mu)\) by \((B_{\tau},\mu_{\tau})\) where
\[\mu_{\tau}=\mu+\mu_{1}\tau+\mu_{2}\tau^{2}+\mu_{3}\tau^{3}+\cdots \tag{2.11}\]
_Remark 2.11_.: The _first order term_ i.e. the bilinear map \(\mu_{1}\) is called an _infinitesimal deformation_ if it is a Hochschild \(2\)-cocycle. Furthermore, if
\(\mu_{1}\) is an infinitesimal deformation, it defines a deformation of \(B\) over \(k[\tau]/(\tau^{2})\) and satisfies
\[\mu_{1}(ab,c)+\mu_{1}(a,b)c=\mu_{1}(a,bc)+a\mu_{1}(b,c).\]
This equation is derived from the associativity of the bilinear map \(\mu_{\tau}\).
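This cocycle condition can be checked mechanically once \(B\) and \(\mu_{1}\) are given by structure constants in a fixed basis. The sketch below (an illustration with a hypothetical example, not code from the paper) tests the condition on all triples of basis elements.

```python
import numpy as np

def is_2cocycle(m, mu1):
    """m[i, j, k]: structure constants of B (e_i e_j = sum_k m[i, j, k] e_k);
    mu1[i, j, k]: the bilinear map mu_1 in the same basis.  Returns True iff
    mu_1(ab, c) + mu_1(a, b)c = mu_1(a, bc) + a mu_1(b, c) on all basis triples."""
    lhs = np.einsum('ijp,pkl->ijkl', m, mu1) + np.einsum('ijp,pkl->ijkl', mu1, m)
    rhs = np.einsum('jkp,ipl->ijkl', m, mu1) + np.einsum('jkp,ipl->ijkl', mu1, m)
    return np.allclose(lhs, rhs)

# Hypothetical example: B = k[x]/(x^2) with basis (1, x) and mu_1(x, x) = 1,
# the infinitesimal of the deformation with x*x = tau.
m = np.zeros((2, 2, 2)); m[0, 0, 0] = m[0, 1, 1] = m[1, 0, 1] = 1.0
mu1 = np.zeros((2, 2, 2)); mu1[1, 1, 0] = 1.0
print(is_2cocycle(m, mu1))  # True
```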
While formal deformations are deformations over \(k[[\tau]]\), algebraic deformations are deformations over \(k[\tau]\). The idea of making a formal deformation into an algebraic deformation is called the _algebraization of formal deformations_ and was examined in detail with respect to reduction systems by S. Barmeier and Z. Wang in [1]. One of their main results is the following theorem.
**Theorem 2.12**.: _[_1_, Thm 7.1]_ _Given any finite quiver \(Q\) and any two-sided ideal of relations \(I\), let \(\Lambda=kQ/I\) be the quotient algebra and let \(R\) be any reduction system satisfying the diamond condition \((\diamond)\) for \(I\). There is an equivalence of formal deformation problems between_
* _deformations of the associative algebra structure on_ \(\Lambda\)__
* _deformations of the reduction system_ \(R\)__
* _deformations of the relations_ \(I\)__
This equivalence can be made explicit by considering combinatorial star products which produce a deformation of \(\Lambda\) from a deformation of the reduction system \(R\). In [1, Section 5], it was established that there are comparison morphisms \(F_{\bullet},G_{\bullet}\) between the bar resolution \(\mathbb{B}\) and the resolution \(\mathfrak{p}(Q,R)\),
\[\mathfrak{p}_{\bullet}\stackrel{{ F_{\bullet}}}{{\longrightarrow}} \mathbb{B}_{\bullet}\stackrel{{ G_{\bullet}}}{{\longrightarrow}} \mathfrak{p}_{\bullet}\]
coming from reduction systems and the combinatorial star product can be defined in terms of these morphisms.
Given a reduction system \(R=\{(s,\varphi_{s})\}\) for the algebra \(B=kQ/I\) determined by \(S\) as in Definition 2.6, we view \(\varphi\in\operatorname{Hom}(kS,kIrr_{S})\) with \(\varphi(s)=\varphi_{s}\). There is an isomorphism \(B\cong kIrr_{S}\) so that \(\operatorname{Hom}(kS,kIrr_{S})\cong\operatorname{Hom}(kS,B)\) and a Hochschild 2-cochain in \(\operatorname{Hom}(B\otimes_{k}B,B)\) may be viewed as an element \(\varphi\in\operatorname{Hom}(kS,B)\). Taking \(\mathfrak{m}=(\tau)\) as the maximal ideal of \(k[[\tau]]\), it was shown that the map \(\tilde{\varphi}\in\operatorname{Hom}(kS,B)\otimes\mathfrak{m}\) given by
\[\widetilde{\varphi}=\widetilde{\varphi}_{1}\tau+\widetilde{\varphi}_{2}\tau^{2}+\widetilde{\varphi}_{3}\tau^{3}+\cdots \tag{2.12}\]
is a candidate for a deformation of \(R\) determined by \(\varphi\). Moreover, the deformation of \(R\) given by \(\varphi+\widetilde{\varphi}\) is a deformation of the algebra \(B\) if and only if \(\widetilde{\varphi}\) is a Maurer-Cartan element of the \(L_{\infty}\) algebra structure on the resolution \(\mathfrak{p}(Q,R)\otimes\mathfrak{m}\). More precisely, if \(uvw\in S_{3}\) are overlaps such that \(uv,vw\in S\), then \(\widetilde{\varphi}\) satisfies the Maurer-Cartan equation if and only if
\[(\pi(u)\star\pi(v))\star\pi(w)=\pi(u)\star(\pi(v)\star\pi(w)). \tag{2.13}\]
We recall that for irreducible paths \(u,v\), the product \(\pi(u)\star\pi(v)\) is the image in \(\Lambda\), under the map \(\pi:kQ\to\Lambda\), of the path \(uv\in kQ\) after performing right-most reductions using the reduction system. Indeed, the combinatorial star product \(\star\) defines an associative structure on the algebra \((B_{\tau},\mu_{\tau})\) and we can also write \(a\star b=ab+\mu_{1}(a,b)\).
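As an illustration of the reduction step itself (a sketch added here, not the implementation used in [1] or in Section 6), right-most reductions can be carried out mechanically when paths are encoded as tuples of arrow names and the reduction system is a dictionary \(s\mapsto\varphi_{s}\).

```python
def rightmost_step(path, R):
    """One right-most reduction of a basis path (a tuple of arrow names).
    R maps a subpath s (tuple) to phi_s, a dict {replacement path: coefficient}.
    Returns a linear combination as a dict, or None if the path is irreducible."""
    best = None
    for s in R:
        k = len(s)
        for start in range(len(path) - k, -1, -1):
            if path[start:start + k] == s:
                if best is None or start > best[0]:
                    best = (start, s)
                break
    if best is None:
        return None
    start, s = best
    out = {}
    for rep, c in R[s].items():
        new = path[:start] + rep + path[start + len(s):]
        out[new] = out.get(new, 0) + c
    return out

def reduce_element(elem, R):
    """Reduce a linear combination {path: coefficient} until every path is
    irreducible; termination is assumed (the system is reduction finite)."""
    todo, done = dict(elem), {}
    while todo:
        p, c = todo.popitem()
        step = rightmost_step(p, R)
        if step is None:
            done[p] = done.get(p, 0) + c
        else:
            for q, d in step.items():
                todo[q] = todo.get(q, 0) + c * d
    return {p: c for p, c in done.items() if c != 0}

# Example: S = {xx} with phi_{xx} = 0 reproduces multiplication in k[x]/(x^2).
R = {("x", "x"): {}}
print(reduce_element({("x", "x", "x"): 1}, R))  # {}  (x^3 reduces to 0)
```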
In Section 5, we explicitly find Maurer-Cartan elements of \(\operatorname{HH}^{2}(A_{1})\) by first solving for Hochschild \(2\)-cocycles \(\mu_{1}\) of Equation (2.11) and then showing that they satisfy the equation \(d^{\star}\mu+\frac{1}{2}[\mu,\mu]=0\), using homotopy lifting maps to define the bracket. The combinatorial star product solves the Maurer-Cartan equation by construction. We check in Section 6 that for the same algebra \(A_{1}\) and a chosen reduction system \(R\), the maps \(\tilde{\varphi}\) of Equation (2.12) describing the combinatorial star product solve the Maurer-Cartan equation given previously in terms of the homotopy lifting maps in Section 5.
## 3. Main Results
In what follows, we consider a finite quiver \(Q\) and a quiver algebra \(\Lambda=kQ/I\) that is Koszul i.e. \(I\) is an admissible ideal generated by paths of length \(2\). We assume the quiver \(Q\) has arrows labelled \(f_{1}^{1},f_{2}^{1},\ldots,f_{t_{1}}^{1}\) for some integer \(t_{1}\). We suppose further that for each \(n\), there are uniform elements \(f_{1}^{n},f_{2}^{n},\ldots,f_{t_{n}}^{n}\), for some integer \(t_{n}\) defining a minimal projective resolution \(\mathbb{K}\) of \(\Lambda\) as given by Theorem (2.5). Let \(\eta:\mathbb{K}_{n}\to\Lambda\) be a Hochschild cocycle such that for some index \(i\), \(\eta(\varepsilon_{i}^{n})=f_{w}^{1}\) (resp. \(\eta(\varepsilon_{i}^{n})=f_{w}^{1}f_{w^{\prime}}^{1}\)) with \(0\leq w,w^{\prime}\leq t_{1}\) and \(\eta(\varepsilon_{j}^{n})=0\) for all \(i\neq j\). We also write \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1})^{(i)}&0&\cdots&0\end{pmatrix}\) for this type of cocycle (resp. \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1}f_{w^{\prime}}^{1})^{(i)}&0&\cdots&0 \end{pmatrix}\)). Let \(\Delta_{\mathbb{K}}:\mathbb{K}\to\mathbb{K}\otimes_{\Lambda}\mathbb{K}\) be the diagonal map. Results from [16, 15] establish that there exist maps \(\psi_{\eta}:\mathbb{K}\to\mathbb{K}[1-n]\) such that
\[d\psi_{\eta}-(-1)^{1-n}\psi_{\eta}d=(\eta\otimes 1-1\otimes\eta)\Delta_{ \mathbb{K}} \tag{3.1}\]
for Koszul algebras. These maps are called _homotopy lifting_ maps for \(\eta\). How would we define such a map explicitly in terms of the basis elements \(\varepsilon_{r}^{n}\)? Can we give a closed formula or expression for the Gerstenhaber bracket using explicitly described versions of these maps? These are among the questions we address in this section.
In order to distinguish the index \(n\), which is the degree of the cocycle \(\eta\), we will index the resolution \(\mathbb{K}\) by \(m\) so that each \(\mathbb{K}_{m}\) is free and generated by \(\{\varepsilon_{r}^{m}\}_{r=0}^{t_{m}}\). For an \(n\)-cocycle, the map \(\psi_{\eta}:\mathbb{K}_{\bullet}\to\mathbb{K}_{\bullet}\) associated to \(\eta\) is of degree \(1-n\), so that for a fixed \(m\), \(\psi_{\eta}:\mathbb{K}_{m}\to\mathbb{K}_{m-n+1}\). Suppose that \(\mathbb{K}_{m-n+1}\) is generated by \(\{\varepsilon_{r^{\prime}}^{m-n+1}\}_{r^{\prime}=0}^{t_{m-n+1}}\); fundamental results from linear algebra mean that such a map is a \(t_{m-n+1}\times t_{m}\) matrix when the modules are considered as left \(\Lambda^{e}\)-modules. In particular, for \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1})^{(i)}&0&\cdots&0\end{pmatrix},\) such a map is defined on \(\varepsilon_{r}^{m}\) by
\[\psi_{\eta}(\varepsilon_{r}^{m})=\sum_{j=0}^{t_{m-n+1}}\lambda_{j}(m,r) \varepsilon_{j}^{m-n+1} \tag{3.2}\]
and for \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1}f_{w^{\prime}}^{1})^{(i)}&0&\cdots&0 \end{pmatrix}\), such a map is defined on \(\varepsilon_{r}^{m}\) by
\[\psi_{\eta}(\varepsilon_{r}^{m})=\sum_{j=0}^{t_{m-n+1}}\lambda_{j}(m,r)f_{w}^{1 }\varepsilon_{j}^{m-n+1}+\lambda_{j}^{\prime}(m,r)\varepsilon_{j}^{m-n+1}f_{w ^{\prime}}^{1} \tag{3.3}\]
where \(\lambda_{j}(m,r),\lambda_{j}^{\prime}(m,r)\in\Lambda^{e}\) in general. For details about these maps, see [13]. We now restrict Equation (3.2) to the special case where for some \(j=r^{\prime}\), \(\lambda_{j}(m,r)=b_{m,r}(m-n+1,r^{\prime})\) is a scalar and \(\lambda_{j}(m,r)=0\) for all \(j\neq r^{\prime}\), that is
\[\psi_{\eta}(\varepsilon_{r}^{m})=b_{m,r}(m-n+1,r^{\prime})\varepsilon_{r^{ \prime}}^{m-n+1} \tag{3.4}\]
and restrict Equation (3.3) to the special case where all \(\lambda_{j}(m,r),\lambda_{j}^{{}^{\prime}}(m,r)\) are zero except for some indices \(s\) and \(s^{\prime}\) with \(\lambda_{j}(m,r)=\lambda_{m,r}(m-n+1,s)\neq 0\) and \(\lambda_{j}^{{}^{\prime}}(m,r)=\lambda_{m,r}(m-n+1,s^{\prime})\neq 0\). That is,
\[\psi_{\eta}(\varepsilon_{r}^{m})=\lambda_{m,r}(m-n+1,s)f_{w}^{1}\varepsilon_{s }^{m-n+1}+\lambda_{m,r}(m-n+1,s^{\prime})\varepsilon_{s^{\prime}}^{m-n+1}f_{w ^{\prime}}^{1}. \tag{3.5}\]
It was shown in [13] that Equation (3.1) holds under certain conditions on the scalars \(b_{m,r}(m-n+1,r^{\prime})\), and therefore the special maps given by Equations (3.4) are indeed homotopy lifting maps for the associated cocycles. A similar argument holds for the map given by Equation (3.5).
Our motivation for defining the maps the way they were defined comes from several examples that were computed. We note in particular that the scalars \(b_{m,r}(m-n+1,r^{\prime})\) satisfy some recurrence relations given by Equation (3.6). We observe that if \(\psi_{\eta}(\varepsilon_{\bar{r}}^{m-1})=b_{m-1,\bar{r}}(m-n,r^{\prime\prime})\varepsilon_{r^{\prime\prime}}^{m-n}\), we can obtain the scalars \(b_{m,r}(m-n+1,r^{\prime})\) in terms of the scalars \(b_{m-1,\bar{r}}(m-n,r^{\prime\prime})\) and the scalars \(c_{pq}(n,i,r)\) coming from the comultiplicative structure on \(\mathbb{K}\). The following diagram is not commutative but gives a picture of this idea:
\(\bullet\) We can obtain \(\psi_{\bar{\eta}}|_{\mathbb{K}_{m+1}}\) from \(\psi_{\bar{\eta}}|_{\mathbb{K}_{m}}\) for every \(m\) using the scalars \(b_{m,r}(m-n+1,r^{*})\) and \(c_{pq}(n,i,r)\). From Remark 2.3, the scalars \(b_{n-1,r^{*}}(0,r^{**})=0\) for all \(r^{*},r^{**}\) since \(\psi_{\bar{\eta}}|_{\mathbb{K}_{n-1}}\) is the zero map.
For a finite quiver \(Q\), let \(\Lambda=kQ/I\) be a quiver algebra that is Koszul and let \(\mathbb{K}_{m}\) be the projective bimodule resolution of \(\Lambda\) with basis \(\{\varepsilon_{r}^{m}\}_{r=0}^{t_{m}}\) as given by Theorem 2.5. For the specific modules \(\mathbb{K}_{m-n+1}\),\(\mathbb{K}_{m-n}\), and \(\mathbb{K}_{m-1}\), let the basis elements be \(\{\varepsilon_{r^{\prime}}^{m-n+1}\}_{r^{\prime}=0}^{t_{m-n+1}}\), \(\{\varepsilon_{r^{\prime\prime}}^{m-n}\}_{r^{\prime\prime}=0}^{t_{m-n}}\), and \(\{\varepsilon_{\bar{r}}^{m-1}\}_{\bar{r}=0}^{t_{m-1}}\) respectively. For the indices \(r,\bar{r},r^{\prime\prime},r^{\prime}\), let the scalars \(b_{m,r}(*,**)\) be such
that the following recurrence relations hold;
\[(i)\;b_{m,r}(m-n+1,r^{\prime})c_{rr^{\prime\prime}}(m-n+1,r^{\prime},1)\] \[=(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{r\bar{r}}(m,r,1 )+c_{ir^{\prime\prime}}(m,r,n),\] \[b_{m,r}(m-n+1,r^{\prime})c_{r^{\prime\prime}r}(m-n+1,r^{\prime},m-n)\] \[=(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}r}(m,r,m -1)-(-1)^{n(m-n)}c_{r^{\prime\prime}i}(m,r,m-n),\] and for every pair of indices \[(p,q)\neq(r,r^{\prime}),(p,q)\neq(r^{\prime\prime},r),\] \[(ii)\;b_{m,r}(m-n+1,r^{\prime})c_{pq}(m-n+1,r^{\prime},*)\] \[=(-1)^{m-1}b_{m-1,\bar{\ast}}(m-n,\ast^{\prime\prime})c_{pq}(m,r, *). \tag{3.6}\]
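When the comultiplicative scalar multiplying \(b_{m,r}(m-n+1,r^{\prime})\) in the first relation of (3.6)(i) is nonzero, that relation already determines the new scalar from the previous one. The following sketch (illustrative only; the index bookkeeping and the nonvanishing of the denominator are assumptions left to the caller) transcribes this step.

```python
def next_b(m, n, r, r_prime, r_bar, r_pp, i, b_prev, c):
    """Solve the first recurrence in (3.6)(i) for b_{m,r}(m-n+1, r'):
         b_{m,r}(m-n+1,r') c_{r r''}(m-n+1, r', 1)
           = (-1)**(m-1) * b_{m-1,rbar}(m-n, r'') * c_{r rbar}(m, r, 1) + c_{i r''}(m, r, n).
    c(p, q, N, idx, v) is assumed to return the scalar c_{p,q}(N, idx, v),
    and b_prev is b_{m-1, rbar}(m-n, r'')."""
    rhs = (-1) ** (m - 1) * b_prev * c(r, r_bar, m, r, 1) + c(i, r_pp, m, r, n)
    return rhs / c(r, r_pp, m - n + 1, r_prime, 1)
```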
We start with the following Lemma.
**Lemma 3.1**.: _Let \(Q\) be a finite quiver and \(\Lambda=kQ/I\) a quiver algebra that is Koszul. Suppose that \(\eta:\mathbb{K}_{n}\to\Lambda\) is a cocycle such that \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1})^{(i)}&0&\cdots&0\end{pmatrix}\), \(0\leq w\leq t_{1}\). If \(\psi_{\eta}:\mathbb{K}\to\mathbb{K}[1-n]\) is defined as \(\psi_{\eta}(\varepsilon_{r}^{m})=b_{m,r}(m-n+1,r^{\prime})\varepsilon_{r^{\prime}}^{m-n+1}\) and the recurrence relations (3.6) hold, then \(d\psi_{\eta}-(-1)^{1-n}\psi_{\eta}d=(\eta\otimes 1-1\otimes\eta)\Delta_{\mathbb{K}}\)._
Proof.: We prove this result in the following way; for the free modules \(\mathbb{K}_{m-1},\mathbb{K}_{m},\mathbb{K}_{m-n},\mathbb{K}_{m-n+1}\), we define the special case of the map \(\psi_{\eta}\) given by Equation (3.4). We then use the left and right hand side of (3.1) to derive the recurrence relations. This is equivalent to saying that under these conditions, equation (3.1) holds provided the recurrence relations of (3.6) hold.
Let us suppose we have a quiver \(Q\) generated by two arrows \(\{f_{r}^{1},f_{s}^{1}\}\) and each \(\mathbb{K}_{n}\) is free of rank \(2\). For each \(m\) let \(\{\varepsilon_{\bar{r}}^{m-1},\varepsilon_{\bar{s}}^{m-1}\}\), \(\{\varepsilon_{r}^{m},\varepsilon_{s}^{m}\}\), \(\{\varepsilon_{r^{\prime}}^{m-n+1},\varepsilon_{s^{\prime}}^{m-n+1}\}\), and \(\{\varepsilon_{r^{\prime\prime}}^{m-n},\varepsilon_{s^{\prime\prime}}^{m-n}\}\) be a basis for \(\mathbb{K}_{m-1}\), \(\mathbb{K}_{m}\), \(\mathbb{K}_{m-n+1}\), and \(\mathbb{K}_{m-n}\) respectively. A possible example of this scenario is given in Example (4.3). The differential given by (2.6) on \(\varepsilon_{r}^{m}\) for this special case for instance, is given by
\[d(\varepsilon_{r}^{m})=c_{r\bar{r}}(m,r,1)f_{r}^{1}\varepsilon_{ \bar{r}}^{m-1}+c_{\bar{r}r}(m,r,m-1)\varepsilon_{\bar{r}}^{m-1}f_{r}^{1}\] \[+c_{r\bar{s}}(m,r,1)f_{r}^{1}\varepsilon_{\bar{s}}^{m-1}+c_{\bar{ r}s}(m,r,m-1)\varepsilon_{\bar{r}}^{m-1}f_{s}^{1}+c_{s\bar{r}}(m,r,1)f_{s}^{1} \varepsilon_{\bar{r}}^{m-1}\] \[+c_{\bar{s}r}(m,r,m-1)\varepsilon_{\bar{s}}^{m-1}f_{r}^{1}+c_{s \bar{s}}(m,r,1)f_{s}^{1}\varepsilon_{\bar{s}}^{m-1}+c_{\bar{s}s}(m,r,m-1) \varepsilon_{\bar{s}}^{m-1}f_{s}^{1}\]
and a similar formula can be written for \(d(\varepsilon_{s}^{m})\).
Let us recall that \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1})^{(i)}&0&\cdots&0\end{pmatrix}\) means that \(\eta(\varepsilon_{i}^{n})=f_{w}^{1}\) with \(w=r\) or \(w=s\) and \(\eta(\varepsilon_{j}^{n})=0\) for all \(j\neq i\). From the hypothesis, we define \(\psi_{\eta}:\mathbb{K}_{m}\to\mathbb{K}_{m-n+1}\) by \(\psi_{\eta}(\varepsilon_{r}^{m})=b_{m,r}(m-n+1,r^{\prime})\varepsilon_{r^{\prime}}^{m-n+1}\), and \(\psi_{\eta}(\varepsilon_{s}^{m})=b_{m,s}(m-n+1,s^{\prime})\varepsilon_{s^{\prime}}^{m-n+1}\), and \(\psi_{\eta}:\mathbb{K}_{m-1}\to\mathbb{K}_{m-n}\) is defined by \(\psi_{\eta}(\varepsilon_{\bar{r}}^{m-1})=b_{m-1,\bar{r}}(m-n,r^{\prime\prime})\varepsilon_{r^{\prime\prime}}^{m-n}\), and \(\psi_{\eta}(\varepsilon_{\bar{s}}^{m-1})=b_{m-1,\bar{s}}(m-n,s^{\prime\prime})\varepsilon_{s^{\prime\prime}}^{m-n}\).
Using Equation(3.1), the expression \((d\psi_{\eta}-(-1)^{m-1}\psi_{\eta}d)(\varepsilon_{r}^{m})\) becomes \(d(b_{m,r}(m-n+1,r^{\prime})\varepsilon_{r^{\prime}}^{m-n+1})-(-1)^{m-1}\psi_{ \eta}d(\varepsilon_{r}^{m})\) and is equal to
\[b_{m,r}(m-n+1,r^{\prime})\Big{(}c_{rr^{\prime\prime}}(m-n+1,r^{\prime},1)f_{r}^{1}\varepsilon_{r^{\prime\prime}}^{m-n}+c_{r^{\prime\prime}r}(m-n+1,r^{\prime},m-n)\varepsilon_{r^{\prime\prime}}^{m-n}f_{r}^{1}\] \[+c_{rs^{\prime\prime}}(m-n+1,r^{\prime},1)f_{r}^{1}\varepsilon_{s^{\prime\prime}}^{m-n}+c_{r^{\prime\prime}s}(m-n+1,r^{\prime},m-n)\varepsilon_{r^{\prime\prime}}^{m-n}f_{s}^{1}\] \[+c_{sr^{\prime\prime}}(m-n+1,r^{\prime},1)f_{s}^{1}\varepsilon_{r^{\prime\prime}}^{m-n}+c_{s^{\prime\prime}r}(m-n+1,r^{\prime},m-n)\varepsilon_{s^{\prime\prime}}^{m-n}f_{r}^{1}\] \[+c_{ss^{\prime\prime}}(m-n+1,r^{\prime},1)f_{s}^{1}\varepsilon_{s^{\prime\prime}}^{m-n}+c_{s^{\prime\prime}s}(m-n+1,r^{\prime},m-n)\varepsilon_{s^{\prime\prime}}^{m-n}f_{s}^{1}\Big{)}\] \[-(-1)^{m-1}\Big{(}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{r\bar{r}}(m,r,1)f_{r}^{1}\varepsilon_{r^{\prime\prime}}^{m-n}+b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}r}(m,r,m-1)\varepsilon_{r^{\prime\prime}}^{m-n}f_{r}^{1}\] \[+b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{r\bar{s}}(m,r,1)f_{r}^{1}\varepsilon_{s^{\prime\prime}}^{m-n}+b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}s}(m,r,m-1)\varepsilon_{r^{\prime\prime}}^{m-n}f_{s}^{1}\] \[+b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{s\bar{r}}(m,r,1)f_{s}^{1}\varepsilon_{r^{\prime\prime}}^{m-n}+b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{\bar{s}r}(m,r,m-1)\varepsilon_{s^{\prime\prime}}^{m-n}f_{r}^{1}\] \[+b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{s\bar{s}}(m,r,1)f_{s}^{1}\varepsilon_{s^{\prime\prime}}^{m-n}+b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{\bar{s}s}(m,r,m-1)\varepsilon_{s^{\prime\prime}}^{m-n}f_{s}^{1}\Big{)}.\]
On the other hand, the diagonal map is given by
\(\Delta_{\mathbb{K}}(\varepsilon_{r}^{m})=\sum_{x+y=r}\sum_{u+v=m}c_{x,y}(m,r,u)\varepsilon_{x}^{u}\otimes_{\Lambda}\varepsilon_{y}^{v}.\) We obtain a non-zero term in the expansion of \((\eta\otimes 1-1\otimes\eta)\Delta_{\mathbb{K}}(\varepsilon_{r}^{m})\) whenever \(x=i\) or \(y=i\). This means that
\[(\eta\otimes 1-1\otimes\eta)\sum_{x+y=r}\sum_{u+v=m}c_{x,y}(m,r,u )\varepsilon_{x}^{u}\otimes_{\Lambda}\varepsilon_{y}^{v}\] \[=(\eta\otimes 1)(c_{i,y}(m,r,n)\varepsilon_{i}^{n}\otimes_{\Lambda} \varepsilon_{y}^{m-n})-(1\otimes\eta)(c_{x,i}(m,r,m-n)\varepsilon_{x}^{m-n} \otimes_{\Lambda}\varepsilon_{i}^{n})\] \[=c_{i,y}(m,r,n)f_{w}^{1}\varepsilon_{y}^{m-n}-(-1)^{n(m-n)}c_{ xi}(m,r,m-n)\varepsilon_{x}^{m-n}f_{w}^{1},\]
for \(\{x,y\}=\{r^{\prime\prime},s^{\prime\prime}\}\) with \(i+y=r,x+i=r\) and some arrow \(f_{w}^{1}\). After collecting common terms, the expression \((d\psi_{\eta}-(-1)^{m-1}\psi_{\eta}d)(\varepsilon_{r}^{m})\) which is
the left hand side of Equation (3.1) becomes
\[\Big{(}b_{m,r}(m-n+1,r^{\prime})c_{rr^{\prime\prime}}(m-n+1,r^{ \prime},1)\] \[-(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{r\bar{r}}(m,r,1) \Big{)}f_{r}^{1}\varepsilon_{r^{\prime\prime}}^{m-n}\] \[+\Big{(}b_{m,r}(m-n+1,r^{\prime})c_{r^{\prime\prime}}r(m-n+1,r^{ \prime},m-n)\] \[-(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}r}(m,r,m- 1)\Big{)}\varepsilon_{r^{\prime\prime}}^{m-n}f_{r}^{1}\] \[+\Big{(}b_{m,r}(m-n+1,r^{\prime})c_{rs^{\prime\prime}}(m-n+1,r^{ \prime},1)\] \[-(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{r\bar{s}}(m,r,1 )\Big{)}f_{r}^{1}\varepsilon_{s^{\prime\prime}}^{m-n}\] \[+\Big{(}b_{m,r}(m-n+1,r^{\prime})c_{r^{\prime\prime}s}(m-n+1,r^{ \prime},m-n)\] \[-(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}s}(m,r, m-1)\Big{)}\varepsilon_{r^{\prime\prime}}^{m-n}f_{s}^{1}\] \[+\Big{(}b_{m,r}(m-n+1,r^{\prime})c_{sr^{\prime\prime}}(m-n+1,r^{ \prime},1)\] \[-(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{s}r}(m,r, 1)\Big{)}f_{s}^{1}\varepsilon_{r^{\prime\prime}}^{m-n}\] \[+\Big{(}b_{m,r}(m-n+1,r^{\prime})c_{s^{\prime\prime}r}(m-n+1,r^{ \prime},m-n)\] \[-(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{\bar{s}r}(m,r, m-1)\Big{)}\varepsilon_{s^{\prime\prime}}^{m-n}f_{r}^{1}\] \[+\Big{(}b_{m,r}(m-n+1,r^{\prime})c_{ss^{\prime\prime}}(m-n+1,r^{ \prime},1)\] \[-(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{s\bar{s}}(m,r, 1)\Big{)}f_{s}^{1}\varepsilon_{s^{\prime\prime}}^{m-n}\] \[+\Big{(}b_{m,r}(m-n+1,r^{\prime})c_{s^{\prime\prime}s}(m-n+1,r^{ \prime},m-n)\] \[-(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{\bar{s}s}(m,r, m-1)\Big{)}\varepsilon_{s^{\prime\prime}}^{m-n}f_{s}^{1}.\]
The expression \((\eta\otimes 1-1\otimes\eta)\Delta_{\mathbb{K}}(\varepsilon_{r}^{m})\) which is the right hand side of Equation (3.1) still remains \(c_{i,y}(m,r,n)f_{w}^{1}\varepsilon_{y}^{m-n}-(-1)^{n(m-n)}c_{xi}(m,r,m-n) \varepsilon_{x}^{m-n}f_{w}^{1}\). We observe the following about indices \(w,x,y\). The index \(w\) is either \(r\) or \(s\), the index \(y\) is either \(r^{\prime\prime}\) or \(s^{\prime\prime}\) and the index \(x\) is either \(r^{\prime\prime}\) or \(s^{\prime\prime}\). We notice that the additional constraint that \(i+y=r\) and \(i+x=r\) implies that whenever \(y=r^{\prime\prime}\) we must have \(x=r^{\prime\prime}\) and whenever \(y=s^{\prime\prime}\), we must have \(x=s^{\prime\prime}\). We therefore have the following four cases:
**Case I:** Whenever \(w=r\), \(y=x=r^{\prime\prime}\), \(i+r^{\prime\prime}=r\), we have the following set
of _recurrence_ relations on the scalars \(b_{m,r}(m-n+1,r^{\prime})\),
\[b_{m,r}(m-n+1,r^{\prime})c_{rr^{\prime\prime}}(m-n+1,r^{\prime},1)\] \[-(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{r\bar{r}}(m,r,1)=c_{ir^{\prime\prime}}(m,r,n)\] \[b_{m,r}(m-n+1,r^{\prime})c_{r^{\prime\prime}r}(m-n+1,r^{\prime},m-n)\] \[-(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}r}(m,r,m-1)\] \[=-(-1)^{n(m-n)}c_{r^{\prime\prime}i}(m,r,m-n)\] \[b_{m,r}(m-n+1,r^{\prime})c_{rs^{\prime\prime}}(m-n+1,r^{\prime},1)\] \[-(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{r\bar{s}}(m,r,1)=0\]
\[b_{m,r}(m-n+1,r^{\prime})c_{r^{\prime\prime}s}(m-n+1,r^{\prime},m-n)\] \[-(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}s}(m,r,m-1)=0\] \[b_{m,r}(m-n+1,r^{\prime})c_{sr^{\prime\prime}}(m-n+1,r^{\prime},1)\] \[-(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{s\bar{r}}(m,r,1)=0\] \[b_{m,r}(m-n+1,r^{\prime})c_{s^{\prime\prime}r}(m-n+1,r^{\prime},m-n)\] \[-(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{\bar{s}r}(m,r,m-1)=0\] \[b_{m,r}(m-n+1,r^{\prime})c_{ss^{\prime\prime}}(m-n+1,r^{\prime},1)\] \[-(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{s\bar{s}}(m,r,1)=0\] \[b_{m,r}(m-n+1,r^{\prime})c_{s^{\prime\prime}s}(m-n+1,r^{\prime},m-n)\] \[-(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{\bar{s}s}(m,r,m-1)=0.\]
We note that all the equations of Case (I) above can be expressed more succinctly to mean that whenever \(w=r\), \(i+r^{\prime\prime}=r\) and for all \(s\neq r\)
\[b_{m,r}(m-n+1,r^{\prime})c_{rr^{\prime\prime}}(m-n+1,r^{\prime}, 1)\\ =(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{r\bar{r}}(m,r, 1)+c_{ir^{\prime\prime}}(m,r,n),\\ b_{m,r}(m-n+1,r^{\prime})c_{r^{\prime\prime}r}(m-n+1,r^{\prime},m -n)\\ =(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}r}(m,r, m-1)\\ -(-1)^{n(m-n)}c_{r^{\prime\prime}i}(m,r,m-n),\]
and for every pair of indices \((p,q)\) such that \((p,q)\neq(r,r^{\prime\prime})\) and \((p,q)\neq(r^{\prime\prime},r)\), \(b_{m,r}(m-n+1,r^{\prime})c_{pq}(m-n+1,r^{\prime},*)=(-1)^{m-1}b_{m-1,\bar{s}}(m-n,*^{\prime\prime})c_{pq}(m,r,*)\).
**Case II:** Whenever \(w=s\), \(y=x=r^{\prime\prime}\), \(i+r^{\prime\prime}=r\), we have the following
set of _recurrence_ relations on the scalars \(b_{m,r}(m-n+1,r^{\prime})\),
\[b_{m,r}(m-n+1,r^{\prime})c_{sr^{\prime\prime}}(m-n+1,r^{\prime},1)\\ =(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{r\bar{r}}(m,r,1 )+c_{ir^{\prime\prime}}(m,r,n),\\ b_{m,r}(m-n+1,r^{\prime})c_{r^{\prime\prime}s}(m-n+1,r^{\prime},m-n) \\ =(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}r}(m,r,m-1)-(-1)^{n( m-n)}c_{r^{\prime\prime}i}(m,r,m-n),\]
and for every pair of indices \((p,q)\neq(s,r^{\prime\prime}),(p,q)\neq(r^{\prime\prime},s)\),
\(b_{m,r}(m-n+1,r^{\prime})c_{pq}(m-n+1,r^{\prime},*)=(-1)^{m-1}b_{m-1,\bar{s}}( m-n,*^{\prime\prime})c_{pq}(m,r,*)\).
**Case III:** Whenever \(w=r\), \(y=x=s^{\prime\prime}\), \(i+s^{\prime\prime}=r\), we have the following set of _recurrence_ relations on the scalars \(b_{m,r}(m-n+1,r^{\prime})\),
\[b_{m,r}(m-n+1,r^{\prime})c_{rs^{\prime\prime}}(m-n+1,r^{\prime},1)\\ =(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{r\bar{s}}(m,r, 1)+c_{is^{\prime\prime}}(m,r,n),\\ b_{m,r}(m-n+1,r^{\prime})c_{s^{\prime\prime}r}(m-n+1,r^{\prime},m-n)\\ =(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{\bar{s}r}(m,r,m-1 )-(-1)^{n(m-n)}c_{s^{\prime\prime}i}(m,r,m-n),\]
and for every pair of indices \((p,q)\neq(r,s^{\prime\prime}),(p,q)\neq(s^{\prime\prime},r)\)
\(b_{m,r}(m-n+1,r^{\prime})c_{pq}(m-n+1,r^{\prime},*)=(-1)^{m-1}b_{m-1,\bar{s}}( m-n,*^{\prime\prime})c_{pq}(m,r,*)\).
**Case IV:** Whenever \(w=s\), \(y=x=s^{\prime\prime}\), \(i+s^{\prime\prime}=r\), we have the following set of _recurrence_ relations on the scalars \(b_{m,r}(m-n+1,r^{\prime})\),
\[b_{m,r}(m-n+1,r^{\prime})c_{ss^{\prime\prime}}(m-n+1,r^{\prime},1)\\ =(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{s\bar{s}}(m,r, 1)+c_{is^{\prime\prime}}(m,r,n),\\ b_{m,r}(m-n+1,r^{\prime})c_{s^{\prime\prime}s}(m-n+1,r^{\prime},m-n) \\ =(-1)^{m-1}b_{m-1,\bar{s}}(m-n,s^{\prime\prime})c_{\bar{s}s}(m,r,m-1 )-(-1)^{n(m-n)}c_{s^{\prime\prime}i}(m,r,m-n),\\ \text{and for every indices }(p,q)\neq(s,s^{\prime\prime}),(p,q)\neq(s^{\prime \prime},s)\\ b_{m,r}(m-n+1,r^{\prime})c_{pq}(m-n+1,r^{\prime},*)=(-1)^{m-1}b_{m-1, \bar{s}}(m-n,*^{\prime\prime})c_{pq}(m,r,*).\]
More generally, if \(\mathbb{K}_{m}\) is generated by \(\{\varepsilon_{r}^{m}\}_{r=1}^{t_{m}}\), \(\mathbb{K}_{m-1}\) generated by \(\{\varepsilon_{\bar{r}}^{m-1}\}_{\bar{r}=1}^{t_{m-1}}\), \(\mathbb{K}_{m-n}\) by \(\{\varepsilon_{r^{\prime\prime}}^{m-n}\}_{r^{\prime\prime}=1}^{t_{m-n}}\), and \(\mathbb{K}_{m-n+1}\) by \(\{\varepsilon_{r^{\prime}}^{m-n+1}\}_{r^{\prime}=1}^{t_{m-n+1}}\), the following relations hold for all \(r,r^{\prime},r^{\prime\prime}\) and \(\bar{r}\)
\[(i)\ b_{m,r}(m-n+1,r^{\prime})c_{rr^{\prime\prime}}(m-n+1,r^{\prime},1)\\ =(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{r\bar{r}}(m,r, 1)+c_{ir^{\prime\prime}}(m,r,n),\\ b_{m,r}(m-n+1,r^{\prime})c_{r^{\prime\prime}r}(m-n+1,r^{\prime},m-n)\\ =(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{\bar{r}r}(m,r,m-1 )-(-1)^{n(m-n)}c_{r^{\prime\prime}i}(m,r,m-n),\]
and for every pair of indices \((p,q)\neq(r,r^{\prime\prime})\), \((p,q)\neq(r^{\prime\prime},r)\),
(ii) \(b_{m,r}(m-n+1,r^{\prime})c_{pq}(m-n+1,r^{\prime},*)=(-1)^{m-1}b_{m-1,\bar{s}}( m-n,*^{\prime\prime})c_{pq}(m,r,*)\).
**Theorem 3.2**.: _Let \(Q\) be a finite quiver and \(\Lambda=kQ/I\) a quiver algebra that is Koszul. Suppose that \(\eta:\mathbb{K}_{n}\to\Lambda\) is a cocycle such that \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1})^{(i)}&0&\cdots&0\end{pmatrix}\), \(0\leq w\leq t_{1}\) and \(f_{w}^{1}\) is a path of length 1. There are scalars \(\lambda_{m,r}(m-n+1,r^{\prime})\) such that the map \(\psi_{\eta}:\mathbb{K}\to\mathbb{K}[1-n]\) associated to \(\eta\) and defined in degree \(m\) by_
\[\psi_{\eta}(\varepsilon_{r}^{m})=\lambda_{m,r}(m-n+1,r^{\prime})\varepsilon_{ r^{\prime}}^{m-n+1}\]
_is a homotopy lifting map for \(\eta\)._
Proof.: Let \(\lambda_{m,r}(m-n+1,r^{\prime})=b_{m,r}(m-n+1,r^{\prime})\) be the scalars satisfying the recurrence relations of (3.6). By Lemma 3.1, the equation \(d\psi_{\eta}-(-1)^{1-n}\psi_{\eta}d=(\eta\otimes 1-1\otimes\eta)\Delta_{ \mathbb{K}}\) holds, so \(\psi_{\eta}\) is a homotopy lifting map.
**Theorem 3.3**.: _Let \(Q\) be a finite quiver and let \(\Lambda=kQ/I\) be a quiver algebra that is Koszul. Suppose that \(\eta:\mathbb{K}_{n}\to\Lambda\) is a cocycle such that \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1}f_{w^{\prime}}^{1})^{(i)}&0&\cdots&0 \end{pmatrix}\) for some \(0\leq w,w^{\prime}\leq t_{1}\) where \(f_{w}^{1}\) and \(f_{w^{\prime}}^{1}\) are paths of length 1. Then there exist scalars \(\lambda_{m,r}(m-n+1,s)\) and \(\lambda_{m,r}(m-n+1,s^{\prime})\) such that \(\psi_{\eta}:\mathbb{K}_{m}\to\mathbb{K}_{m-n+1}\) defined by_
\[\psi_{\eta}(\varepsilon_{r}^{m})=\lambda_{m,r}(m-n+1,s)f_{w}^{1}\varepsilon_{ s}^{m-n+1}+\lambda_{m,r}(m-n+1,s^{\prime})\varepsilon_{s^{\prime}}^{m-n+1}f_{w^{ \prime}}^{1}\]
_for all \(\varepsilon_{r}^{m}\) is a homotopy lifting map for \(\eta\)._
Proof.: Similar to Lemma 3.1, we can write recurrence relations on the scalars given in Equation (3.5) so that \(d\psi_{\eta}-(-1)^{1-n}\psi_{\eta}d=(\eta\otimes 1-1\otimes\eta)\Delta_{\mathbb{K}}\) holds true. See [14, Lemma 5.20] and [14, Theorem 5.23] for details.
### A special case of Theorem 3.2
We consider a special case in which each \(\Lambda^{e}\)-module \(\mathbb{K}_{n}\) is generated by one element. This case arises, for example, from a quiver with one arrow (a loop) on a vertex \(e_{1}\). We also give a concrete example in Example (4.1). Let \(I=(x^{n})\) be an ideal of the path algebra \(kQ\). The quiver algebra of interest here is Morita equivalent to the truncated polynomial ring \(A=k[x]/(x^{n})\). This is the case in which \(f_{r}^{n}=x^{n}\) with \(r=0\) for all \(n\) and \(\varepsilon_{r}^{n}=1\otimes\overline{f_{r}^{n}}\otimes 1\). From the Preliminaries (2), there are scalars \(c_{p,q}(m,r,u)\) for which the diagonal map is given by
\[\Delta_{\mathbb{K}}(\varepsilon_{r}^{m})=\sum_{u+v=m}c_{i,j}(m,r,u)\varepsilon_ {i}^{u}\otimes_{\Lambda}\varepsilon_{j}^{v}, \tag{3.7}\]
with \(i=j=r=0\). Also with \(p=r=0\), the differential takes the form
\[d(\varepsilon_{r}^{m})=c_{p,r}(m,r,1)f_{p}^{1}\varepsilon_{r}^{m-1}+(-1)^{m}c_ {r,p}(m,r,m-1)\varepsilon_{r}^{m-1}f_{p}^{1}.\]
Let \(\chi:\mathbb{K}_{n}\to A\) be an \(n\)-cocycle defined by \(\chi(\varepsilon_{r}^{n})=f_{r}^{1}\). According to Theorem (3.2), a homotopy lifting map for \(\chi\) can be given by
\[\psi_{\chi_{m}}(\varepsilon_{r}^{m})=b_{m,r}(m-n+1,r)\varepsilon_{r}^{m-n+1}, \qquad r=0.\]
We can determine \(b_{m,r}(m-n+1,r)\) from the previous scalar \(b_{m-1,r}(m-n,r)\). In other words, the conditions (i) and (ii) of Equation (3.6) constitute a _recurrence relation_. We know from Definition (2.1) that homotopy lifting maps satisfy
\[(d\psi_{\chi}-(-1)^{m-1}\psi_{\chi}d)(\varepsilon_{r}^{m})=(\chi \otimes 1-1\otimes\chi)\Delta_{\mathbb{K}}(\varepsilon_{r}^{m}),\qquad\text{so then}\] \[d(b_{m,r}(m-n+1,r)\varepsilon_{r}^{m-n+1})-(-1)^{m-1}\psi_{\chi}\] \[\qquad\Big{(}c_{p,r}(m,r,1)f_{p}^{1}\varepsilon_{r}^{m-1}+(-1)^{m }c_{r,p}(m,r,m-1)\varepsilon_{r}^{m-1}f_{p}^{1}\Big{)}\] \[=(\chi\otimes 1-1\otimes\chi)\sum_{u+v=m}c_{i,j}(m,r,u) \varepsilon_{i}^{u}\otimes_{\Lambda}\varepsilon_{j}^{v}.\]
The modules \(\mathbb{K}\) are generated by one element, so we get
\[b_{m,r}(m-n+1,r)c_{p,r}(m-n+1,r,1)f_{p}^{1}\varepsilon_{r}^{m-n }+(-1)^{m-n+1}b_{m,r}(m-n+1,r)\] \[c_{r,p}(m-n+1,r,m-n)\varepsilon_{r}^{m-n}f_{p}^{1}-(-1)^{m-1}b_{ m-1,r}(m-n,r)c_{p,r}(m,r,1)f_{p}^{1}\varepsilon_{r}^{m-n}\] \[\qquad\qquad\qquad\qquad\qquad+b_{m-1,r}(m-n,r)c_{r,p}(m,r,m-1) \varepsilon_{r}^{m-n}f_{p}^{1}\] \[=c_{r,j}(m,r,n)f_{r}^{1}\varepsilon_{j}^{m-n}+(-1)^{n(m-n)}c_{i,r }(m,r,m-n)\varepsilon_{i}^{m-n}f_{r}^{1}\]
We would obtain the following expressions for the above equality to hold,
\[b_{m,r}(m-n+1,r)c_{p,r}(m-n+1,r,1)-(-1)^{m-1}b_{m-1,r}(m-n,r)c_{p,r}(m,r,1)\] \[=c_{p,r}(m,r,n)\quad\text{and}\] \[(-1)^{m-n+1}b_{m,r}(m-n+1,r)c_{r,p}(m-n+1,r,m-n)\] \[+b_{m-1,r}(m-n,r)c_{r,p}(m,r,m-1)=(-1)^{n(m-n)}c_{r,p}(m,r,m-n).\]
The scalars \(c_{p,r}(m-n+1,r,*)\) come from the differentials on the resolution \(\mathbb{K}\), so they are not equal to \(0\) for all \(r\). In case \(c_{p,r}(m-n+1,r,*)\neq 0\) for all \(r\), the first equality in the last expression yields
\[b_{m,r}(m-n+1,r)=\frac{(-1)^{m-1}b_{m-1,r}(m-n,r)c_{p,r}(m,r,1)+c_{p,r}(m,r,n) }{c_{p,r}(m-n+1,r,1)} \tag{3.8}\]
while the second one yields
\[b_{m,r}(m-n+1,r)\\ =\frac{b_{m-1,r}(m-n,r)c_{r,p}(m,r,m-1)+(-1)^{n(m-n)+1}c_{r,p}(m,r,m-n)}{(-1)^{m-n}c_{r,p}(m-n+1,r,m-n)}. \tag{3.9}\]
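The relations (3.8) and (3.9) are genuinely recursive: once the structure constants \(c_{p,r}\) of the differential and the diagonal map are known, each scalar \(b_{m,r}(m-n+1,r)\) follows from the previous one. The short sketch below is only a direct transcription of Eq. (3.8) into code and is not part of the original argument; the callable `c` and the seed value `b_start` are assumptions that have to be read off from the resolution \(\mathbb{K}\) in any concrete example.

```python
def b_scalars(c, n, b_start, m_start, m_max):
    """Iterate the recurrence (3.8) for the scalars b_{m,r}(m-n+1,r), with r fixed.

    c(m, u)        -- structure constant c_{p,r}(m, r, u), supplied by the user
    n              -- homological degree of the cocycle
    b_start        -- seed value b_{m_start,r}(m_start-n+1,r)
    m_start, m_max -- range of homological degrees to cover
    """
    b = {m_start: b_start}
    for m in range(m_start + 1, m_max + 1):
        numerator = (-1) ** (m - 1) * b[m - 1] * c(m, 1) + c(m, n)
        b[m] = numerator / c(m - n + 1, 1)   # assumes c(m-n+1, 1) is nonzero, cf. (3.8)
    return b
```

The companion relation (3.9) can be transcribed in exactly the same way, using the constants \(c_{r,p}(m,r,m-1)\) and \(c_{r,p}(m,r,m-n)\) instead.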
We now present the Gerstenhaber bracket structure on Hochschild cohomology using these scalars.
**Theorem 3.4**.: _Let \(\Lambda=kQ/I\) be a quiver algebra that is Koszul. Denote by \(\{f_{r}^{m}\}_{r=0}^{t_{m}}\) elements of \(kQ\) defining a minimal projective resolution of \(\Lambda_{0}\) as a right \(\Lambda\)-module. Let \(\mathbb{K}\) be the projective bimodule resolution of \(\Lambda\) with \(\mathbb{K}_{m}\) having basis \(\{\varepsilon_{r}^{m}\}_{r=0}^{t_{m}}.\) Assume that \(\eta:\mathbb{K}_{n}\to\Lambda\) and \(\theta:\mathbb{K}_{m}\to\Lambda\) represent elements in \(\operatorname{HH}^{*}(\Lambda)\) and are given by \(\eta(\varepsilon_{i}^{n})=\lambda_{i}\) for \(i=0,1,\ldots,t_{n}\) and \(\theta(\varepsilon_{j}^{m})=\beta_{j}\) for \(j=0,1,\ldots,t_{m},\) with each \(\lambda_{i}\) and \(\beta_{j}\) paths of length of 1._
_Then the class of the bracket \([\eta,\theta]:\mathbb{K}_{n+m-1}\to\Lambda\) can be expressed on the \(r\)-th basis element \(\varepsilon_{r}^{m+n-1}\) as_
\[[\eta,\theta](\varepsilon_{r}^{m+n-1})=\sum_{i=0}^{t_{n}}\sum_{j=0}^{t_{m}}b_{m- n+1,r}(n,i)\lambda_{i}-(-1)^{(m-1)(n-1)}(b_{m-n+1,r}(m,j)\beta_{j}\]
_for some scalars \(b_{m-n+1,r}(n,i)\) and \(b_{m-n+1,r}(m,j)\) associated with homotopy lifting maps \(\psi_{\theta^{(j)}}\) and \(\psi_{\eta^{(i)}}\) respectively._
Proof.: This is the same as [13, Theorem 3.15] and is proved therein.
## 4. Some computations and examples
In this section, we give examples in which the scalars \(b_{m,r}(m-n+1,*)\) are obtained from \(b_{m-1,r}(m-n,**)\) using the recurrence relations of Theorem (3.2), Equations (3.8) and (3.9). In most of the examples, we also describe the scalars \(c_{p,q}(m,r,n)\), which enter the recurrence relations.
Example 4.1. Let's consider the following quiver
\[Q:=\qquad\text{a single vertex }e_{1}\text{ carrying one loop }x.\]
Let \(A=kQ/I\) be the corresponding quiver algebra of the special case above, with \(\mathbb{K}_{m}\) generated by the single element \(\varepsilon_{0}^{m}\) in each degree. Consider the \(1\)-cocycle \(\eta:\mathbb{K}_{1}\to A\) defined by \(\eta(\varepsilon_{0}^{1})=x\). We verify that the map \(\psi_{\eta}:\mathbb{K}_{m}\to\mathbb{K}_{m}\) defined by \(\psi_{\eta}(\varepsilon_{0}^{m})=m\varepsilon_{0}^{m}\) is a homotopy lifting map for \(\eta\), that is
\[(d\psi_{\eta}-(-1)^{0}\psi_{\eta}d)(\varepsilon_{0}^{m})=d(m\varepsilon _{0}^{m})-\psi_{\eta}(x\varepsilon_{0}^{m-1}-(-1)^{m-1}\varepsilon_{0}^{m-1}x)\] \[=mx\varepsilon_{0}^{m-1}-(-1)^{m-1}m\varepsilon_{0}^{m-1}x-(m-1)x \varepsilon_{0}^{m-1}+(-1)^{m-1}(m-1)\varepsilon_{0}^{m-1}x\] \[=x\varepsilon_{0}^{m-1}-(-1)^{m-1}\varepsilon_{0}^{m-1}x\qquad \text{ is equal to}\] \[(\eta\otimes 1-1\otimes\eta)\Delta_{\mathbb{K}}(\varepsilon_{0}^{m})= (\eta\otimes 1-1\otimes\eta)\sum_{i+j=m}\varepsilon_{0}^{i}\otimes\varepsilon_{0} ^{j}\] \[=\eta\otimes 1(\varepsilon_{0}^{1}\otimes\varepsilon_{0}^{m-1})-(- 1)^{m-1}1\otimes\eta(\varepsilon_{0}^{m-1}\otimes\varepsilon_{0}^{1})\] \[=x\varepsilon_{0}^{m-1}-(-1)^{m-1}\varepsilon_{0}^{m-1}x, \tag{4.1}\]
where the Koszul sign convention has been employed in the expansion of \((1\otimes\eta)(\varepsilon_{0}^{m-1}\otimes\varepsilon_{0}^{1})=(-1)^{degree (\eta)\cdot(m-1)}\varepsilon_{0}^{m-1}\eta(\varepsilon_{0}^{1})\). We note that by the general definition given in Theorem 3.2, the map \(\psi_{\eta}:\mathbb{K}_{m-1}\to\mathbb{K}_{m-1}\) defined by \(\psi_{\eta}(\varepsilon_{0}^{m-1})=(m-1)\varepsilon_{0}^{m-1}\) implies that \(b_{m-1,0}(m-1,0)=m-1\). The map \(\eta\) is a \(1\)-cocycle so \(n=1\), \(r=p=0\).
We can use the expression of (3.9) to verify that
\[b_{m,r}(m-n+1,r)\] \[=\frac{b_{m-1,r}(m-n,r)c_{r,p}(m,r,m-1)+(-1)^{n(m-n)+1}c_{r,p}(m,r,m-n)}{(-1)^{m-n}c_{r,p}(m-n+1,r,m-n)}\] \[b_{m,0}(m,0) =\frac{b_{m-1,0}(m-1,0)c_{0,0}(m,0,m-1)+(-1)^{m}c_{0,0}(m,0,m-1)} {(-1)^{m-1}c_{0,0}(m,0,m-1)}\] \[=\frac{m-1+(-1)^{m}}{1}=m,\qquad\text{when $m$ is even},\]
and the expression of (3.8) to verify that
\[b_{m,0}(m,0) =\frac{(-1)^{m-1}b_{m-1,0}(m-1,0)c_{0,0}(m,0,1)+c_{0,0}(m,0,1)}{c _{0,0}(m,0,1)}\] \[=\frac{m-1+1}{1}=m,\qquad\text{when $m$ is odd}.\]
Similarly, it is a straightforward calculation (same calculations as (4.1)) to verify that the map \(\psi_{\chi}:\mathbb{K}_{m}\to\mathbb{K}_{m-1}\) defined by
\[\psi_{\chi}(\varepsilon_{0}^{m})=b_{m,0}(m-1,0)\varepsilon_{0}^{m-1}=\begin{cases} \varepsilon_{0}^{m-1},&\text{when m is even}\\ 0,&\text{when m is odd}\end{cases}\]
is a homotopy lifting map for \(\chi\). In this case \(b_{m,0}(m-1,0)=1\) when \(m\) is even and \(0\) when \(m\) is odd. But we can also use the expression of (3.8) to verify that when \(m\) is even,
\[b_{m+1,0}(m,0)=\frac{(-1)^{m-1}b_{m,0}(m-1,0)c_{0,0}(m,0,1)+c_{0,0}(m,0,2)}{c_ {0,0}(m-n+1,0,1)}=-1+1=0,\]
and when \(m\) is odd,
\[b_{m+1,0}(m,0)=\frac{(-1)^{m-1}b_{m,0}(m-1,0)c_{0,0}(m,0,1)+c_{0,0}(m,0,2)}{c_{0,0 }(m-n+1,0,1)}=0+1=1.\]
**Example 4.2**.: The following example of a homotopy lifting map was first given in [10, Example 4.7.2]. We will now verify that the recurrence relations also hold. Let \(k\) be a field and \(A=k[x]/(x^{3})\). Consider the following projective bimodule resolution of \(A\):
\[\mathbb{P}_{\bullet}:\qquad\cdots\to A\otimes A\stackrel{{ \cdot u}}{{\longrightarrow}}A\otimes A\stackrel{{\cdot u}}{{ \longrightarrow}}\cdots\stackrel{{\cdot v}}{{\longrightarrow}}A \otimes A\stackrel{{\cdot u}}{{\longrightarrow}}A\otimes A\;( \stackrel{{\mu}}{{\rightarrow}}A)\]
where \(u=x\otimes 1-1\otimes x\) and \(v=x^{2}\otimes 1+x\otimes x+1\otimes x^{2}\). We consider the following elements \(e_{m}:=1\otimes 1\), \(r=0\) for all \(m\) in the \(m\)-th module \(P_{m}:=A\otimes A\). A diagonal map \(\Delta_{\mathbb{P}}:\mathbb{P}\rightarrow\mathbb{P}\otimes_{A}\mathbb{P}\) for this resolution is given by
\[\Delta_{\mathbb{P}}(e_{m})=\sum_{j+l=m}(-1)^{j}e_{j}\otimes e_{l}.\]
By comparing \(\Delta_{\mathbb{P}}(e_{m})\) with Equation (2.9), the scalars \(c_{rr}(m,r,j)=(-1)^{j}\) for all \(m,r\). Consider the Hochschild \(1\)-cocycle \(\alpha:\mathbb{P}_{1}\to A\) defined by \(\alpha(e_{1})=x\) and \(\alpha(e_{m})=0\) for all \(m\neq 1\). With a slight change in notation, it was shown in [10, Example 4.7.2] that the following \(\psi_{\alpha}:\mathbb{P}_{2m}\rightarrow\mathbb{P}_{2m}\) defined by \(\psi_{\alpha}(e_{2m})=-3m\cdot e_{2m}\) is a homotopy lifting map for \(\alpha\). We note that the map \(\psi_{\alpha}\) was regarded as an \(A_{\infty}\)-coderivation in [10]. It can be also verified that \(\psi_{\alpha}\) is a homotopy lifting map. We can use the recurrence relations of Equation (3.9) to obtain \(b_{2m+1,r}(2m+1,r)\) from \(b_{2m,r}(2m,r)=-3m\). That is
\[b_{2m+1,r}(2m+1,r) =\frac{b_{2m,r}(2m,r)c_{r,r}(2m,r,2m)+(-1)^{2m+1}c_{r,r}(2m+1,r,2 m)}{(-1)^{2m}c_{r,r}(2m+1,r,2m)}\] \[=\frac{-3m(-1)^{2m}+(-1)^{2m+1}(-1)^{2m}}{(-1)^{2m}(-1)^{2m-1+1}} =\frac{-3m-1}{1},\]
so it follows that \(\psi_{\alpha}:\mathbb{P}_{2m+1}\rightarrow\mathbb{P}_{2m+1}\) is defined by \(\psi_{\alpha}(e_{2m+1})=(-3m-1)e_{2m+1}\).
**Example 4.3**.: Let \(k\) be a field of characteristic different from \(2\). Consider the quiver algebra \(A=kQ/I\) (also examined in [3, Example 5]) defined using the following finite quiver:
with one vertex and two arrows \(x,y.\) We denote by \(e_{1}\) the idempotent associated with the only vertex. Let \(I\), an ideal of the path algebra \(kQ\), be defined by \(I=\langle x^{2},xy+yx\rangle.\) Since \(\{x^{2},xy+yx\}\) is a quadratic Gröbner basis for
the ideal generated by relations under the length lexicographical order with \(x>y>1\), the algebra is Koszul.
In order to define a comultiplicative structure, we take \(t_{0}=0,t_{n}=1\) for all \(n\), \(f_{0}^{0}=e_{1},f_{1}^{0}=0,f_{0}^{1}=x,f_{1}^{1}=y,f_{0}^{2}=x^{2},f_{1}^{2}= xy+yx,f_{0}^{3}=x^{3},f_{1}^{3}=x^{2}y+xyx+yx^{2}\), and in general \(f_{0}^{n}=x^{n},f_{1}^{n}=\sum_{i+j=n-1}x^{i}yx^{j}\). We also see that \(f_{0}^{n}=f_{0}^{r}f_{0}^{n-r}\) and \(f_{1}^{n}=f_{0}^{r}f_{1}^{n-r}+f_{1}^{r}f_{0}^{n-r}\) so \(c_{00}(n,0,r)=c_{01}(n,1,r)=c_{10}(n,1,r)=1\) and all other \(c_{pq}(n,i,r)=0\). With the above stated, we can construct the resolution \(\mathbb{K}\) for the algebra \(A\). A calculation shows that
\[d_{1}(\varepsilon_{0}^{1}) =x\varepsilon_{0}^{0}-\varepsilon_{0}^{0}x, d_{1}(\varepsilon_{1}^{1}) =y\varepsilon_{0}^{0}-\varepsilon_{0}^{0}y\] \[d_{2}(\varepsilon_{0}^{2}) =x\varepsilon_{0}^{1}+\varepsilon_{0}^{1}x, d_{2}(\varepsilon_{1}^{2}) =y\varepsilon_{0}^{1}+\varepsilon_{0}^{1}y+x\varepsilon_{1}^{1}+ \varepsilon_{1}^{1}x.\]
Consider the following map \(\theta:\mathbb{K}_{1}\to A\) defined by \(\theta=(0\;\;y)\). With the following calculations \(\theta d_{2}(\varepsilon_{0}^{2})=\theta(x\varepsilon_{0}^{1}+\varepsilon_{0}^ {1}x)=0\) and \(\theta d_{2}(\varepsilon_{1}^{2})=\theta(y\varepsilon_{0}^{1}+\varepsilon_{0} ^{1}y+x\varepsilon_{1}^{1}+\varepsilon_{1}^{1}x)=0+xy+yx=0\), \(\theta\) is a cocycle. The comultiplicative map \(\Delta:\mathbb{K}\to\mathbb{K}\otimes_{A}\mathbb{K}\) on \(\varepsilon_{0}^{1}\), \(\varepsilon_{1}^{1},\varepsilon_{0}^{2},\varepsilon_{1}^{2}\) is given by
\[\Delta(\varepsilon_{0}^{1}) =c_{00}(1,0,0)\varepsilon_{0}^{0}\otimes\varepsilon_{0}^{1}+c_{00 }(1,0,1)\varepsilon_{0}^{1}\otimes\varepsilon_{0}^{0}=\varepsilon_{0}^{0} \otimes\varepsilon_{0}^{1}+\varepsilon_{0}^{1}\otimes\varepsilon_{0}^{0},\] \[\Delta(\varepsilon_{1}^{1}) =\varepsilon_{0}^{0}\otimes\varepsilon_{1}^{1}+\varepsilon_{1}^ {1}\otimes\varepsilon_{0}^{0},\] \[\Delta(\varepsilon_{0}^{2}) =\varepsilon_{0}^{0}\otimes\varepsilon_{0}^{2}+\varepsilon_{0}^ {1}\otimes\varepsilon_{0}^{1}+\varepsilon_{0}^{2}\otimes\varepsilon_{0}^{0},\] \[\Delta(\varepsilon_{1}^{2}) =\varepsilon_{0}^{0}\otimes\varepsilon_{1}^{2}+\varepsilon_{0}^ {1}\otimes\varepsilon_{1}^{1}+\varepsilon_{1}^{1}\otimes\varepsilon_{0}^{1}+ \varepsilon_{1}^{2}\otimes\varepsilon_{0}^{0}.\]
From Theorems (3.2), it can be verified by direct calculations using Equation (4.1) that the first, second and third degrees of the homotopy lifting maps \(\psi_{\theta}\) associated \(\theta\) is the following:
\[\psi_{\theta_{0}}(\varepsilon_{i}^{0})=0,\qquad\psi_{\theta_{1}}(\varepsilon_ {0}^{1})=0,\qquad\psi_{\theta_{1}}(\varepsilon_{1}^{1})=\varepsilon_{1}^{1} \qquad\quad\psi_{\theta_{2}}(\varepsilon_{0}^{2})=0.\]
The scalars \(b_{1,1}(1,1)=1\) and for other \((m,r)\neq(1,1),(2,1)\), \(b_{m,r}(m,r)=0\). Since \(\theta\) is a \(1\)-cocycle, \(n=1\). Also, \(\theta=(0\;y)\) and when compared with \(\eta=\begin{pmatrix}0&\cdots&0&(f_{w}^{1})^{(i)}&0&\cdots&0\end{pmatrix}\) as given in Theorem (3.2), \(f_{w}^{1}=y\) and \(i=1\). To obtain \(b_{2,1}(2,1)\) from \(b_{1,r}(1,s)\) for some \(s\), we take \(m=2,r=1\), so that \(m-n=1\). Since \(t_{m-n}=t_{1}\), we would have \(r^{\prime\prime}=0\) or \(r^{\prime\prime}=1\). From the statement of the theorem, we must have \(i+r^{\prime\prime}=r\), so \(r^{\prime\prime}=0\) and \(c_{r,r^{\prime\prime}}(m-n+1,r^{\prime},1)=c_{10}(2,r^{\prime},1)=1\). It then follows from the first recurrence relations in Theorem (3.2) that
\[b_{m,r}(m-n+1,r^{\prime}) =\frac{(-1)^{m-1}b_{m-1,\bar{r}}(m-n,r^{\prime\prime})c_{r,\bar{ r}}(m,r,1)+c_{i,r^{\prime\prime}}(m,r,n)}{c_{r,r^{\prime\prime}}(m-n+1,r^{ \prime},1)}.\] \[b_{2,1}(2,1) =\frac{-b_{1,\bar{r}}(1,0)c_{1,\bar{r}}(2,1,1)+c_{1,0}(2,1,1)}{c_ {1,0}(2,1,1)}=\frac{0+1}{1}=1,\]
so \(\psi_{\theta_{2}}(\varepsilon_{1}^{2})=b_{2,1}(2,1)\varepsilon_{1}^{2}= \varepsilon_{1}^{2}\).
## 5. Finding Maurer-Cartan elements
In this section, we find the Maurer-Cartan elements of a quiver algebra. We first recall the definition of a Maurer-Cartan element.
**Definition 5.1**.: A Hochschild 2-cocycle \(\eta\) is said to satisfy the Maurer-Cartan equation if
\[d(\eta)+\frac{1}{2}[\eta,\eta]=0. \tag{5.1}\]
Applying the definition of the bracket using homotopy lifting, we obtain the following version of the Maurer-Cartan equation for the resolution \(\mathbb{K}\). \(d_{3}^{*}(\eta)+\frac{1}{2}(\eta\psi_{\eta}+\eta\psi_{\eta})=d_{3}^{*}(\eta)+ \eta\psi_{\eta}=0.\)
We begin with the following finite quiver:
with two vertices and three arrows \(a,b,c.\) We denote by \(e_{1}\) and \(e_{2}\) the idempotents associated with vertices \(1\) and \(2\). Let \(kQ\) be the path algebra associated with \(Q\) and take for each \(q\in k\), \(I_{q}\subseteq kQ\) to be an admissible ideal of \(kQ\) generated by \(I_{q}=\langle a^{2},b^{2},ab-qba,ac\rangle\) so that
\[\{A_{q}=kQ/I_{q}\}_{q\in k}\]
This family of quiver algebras has been well studied in [12, 13] and [14]. We simply recall the main tools needed to find Maurer-Cartan elements. To define a set of generators for the resolution \(\mathbb{K}\) we start by letting \(kQ_{0}\) be the ideal of \(kQ\) generated by the vertices of \(Q\) with basis \(f_{0}^{0}=e_{1},f_{1}^{0}=e_{2}\). Next, set \(kQ_{1}\) to be the ideal generated by paths with basis \(f_{0}^{1}=a,f_{1}^{1}=b\) and \(f_{2}^{1}=c.\) Set \(f_{j}^{2}\), \(j=0,1,2,3\), to be the set of paths of length \(2\) that generate the ideal \(I\), that is \(f_{0}^{2}=a^{2},f_{1}^{2}=ab-qba,f_{2}^{2}=b^{2},f_{3}^{2}=ac\), and define a comultiplicative equation on the paths of length \(n>2\) in the following way.
\[\begin{cases}f_{0}^{n}=a^{n},\\ f_{s}^{n}=f_{s-1}^{n-1}b+(-q)^{s}f_{s}^{n-1}a,\ \ \ (0<s<n),\\ f_{n}^{n}=b^{n},\\ f_{n+1}^{n}=a^{(n-1)}c,\end{cases}\]
The resolution \(\mathbb{K}\to A_{q}\) has basis elements \(\{\varepsilon_{i}^{n}\}_{i=0}^{t_{n}}\) such that for each \(i\), we have \(\varepsilon_{i}^{n}=(0,\ldots,0,o(f_{i}^{n})\otimes_{k}t(f_{i}^{n}),0,\ldots,0)\). The differentials on \(\mathbb{K}_{n}\) are given explicitly for this family by
\[d_{1}(\varepsilon_{2}^{1}) =c\varepsilon_{1}^{0}-\varepsilon_{0}^{0}c\] \[d_{n}(\varepsilon_{r}^{n}) =(1-\partial_{n,r})[a\varepsilon_{r}^{n-1}+(-1)^{n-r}q^{r}\varepsilon_{r}^{n-1}a]\] \[+(1-\partial_{r,0})[(-q)^{n-r}b\varepsilon_{r-1}^{n-1}+(-1)^{n}\varepsilon_{r-1}^{n-1}b],\ \ \text{for}\ \ r\leq n\] \[d_{n}(\varepsilon_{n+1}^{n}) =a\varepsilon_{n}^{n-1}+(-1)^{n}\varepsilon_{0}^{n-1}c,\ \ \text{when}\ \ n\geq 2,\]
where \(\partial_{r,s}=1\) when \(r=s\) and \(0\) when \(r\neq s\).
Calculations from [12] show that for this family, the comultiplicative map can be expressed in the following way
\[\Delta_{\mathbb{K}}(\varepsilon_{s}^{n})=\begin{cases}\sum_{r=0}^{n} \varepsilon_{0}^{r}\otimes\varepsilon_{0}^{n-r},&s=0\\ \sum_{w=0}^{n}\sum_{j=max\{0,s+w-n\}}^{min\{w,s\}}(-q)^{j(n-s+j-w)} \varepsilon_{j}^{w}\otimes\varepsilon_{s-j}^{n-w},&0<s<n\\ \sum_{t=0}^{n}\varepsilon_{t}^{t}\otimes\varepsilon_{n-t}^{n-t},&s=n\\ \varepsilon_{0}^{0}\otimes\varepsilon_{n+1}^{n}+\Big{[}\sum_{t=0}^{n} \varepsilon_{0}^{t}\otimes\varepsilon_{n-t+1}^{n-t}\Big{]}+\varepsilon_{n+1}^ {n}\otimes\varepsilon_{0}^{0},&s=n+1.\end{cases}\]
**Example 5.2**.: Let \(A_{1}=kQ/I_{1}\) be a member of the family where \(I=I_{1}=\langle a^{2},b^{2},ab-ba,ac\rangle\). We now find Hochschild 2-cocycles that satisfy the Maurer-Cartan equation (5.1). Suppose that the \(A_{1}^{e}\)-module homomorphism \(\eta:\mathbb{K}_{2}\to A_{1}\) defined by
\(\eta=\begin{pmatrix}\lambda_{0}&\lambda_{1}&\lambda_{2}&\lambda_{3}\end{pmatrix}\) is a cocycle, that is \(d^{*}\eta=0\), with \(\lambda_{i}\in A_{1}\) for all \(i\). Since \(d^{*}\eta:\mathbb{K}_{3}\to A_{1}\), we obtain using \(d^{*}\eta(\varepsilon_{i}^{3})=\eta d(\varepsilon_{i}^{3})\),
\[\eta\Big{(}\begin{cases}a\varepsilon_{0}^{2}-\varepsilon_{0}^{2}a\\ a\varepsilon_{1}^{2}+\varepsilon_{1}^{2}a+b\varepsilon_{0}^{2}-\varepsilon_{0}^{2}b\\ a\varepsilon_{2}^{2}-\varepsilon_{2}^{2}a-b\varepsilon_{1}^{2}-\varepsilon_{1}^{2}b\\ b\varepsilon_{2}^{2}-\varepsilon_{2}^{2}b\\ a\varepsilon_{3}^{2}-\varepsilon_{0}^{2}c\end{cases}\Big{)}=\begin{cases}a\lambda_{0}-\lambda_{0}a&\text{if }i=0\\ a\lambda_{1}+q\lambda_{1}a+q^{2}b\lambda_{0}-\lambda_{0}b&\text{if }i=1\\ a\lambda_{2}-q^{2}\lambda_{2}a-qb\lambda_{1}-\lambda_{1}b&\text{if }i=2\\ b\lambda_{2}-\lambda_{2}b&\text{if }i=3\\ a\lambda_{3}-\lambda_{0}c&\text{if }i=4\end{cases}\]
which will be equated to \(\begin{pmatrix}0&0&0&0&0\end{pmatrix}\) and solved. We solve this system of equations with the following in mind. There is an isomorphism of \(A_{1}^{e}\)-modules \(\operatorname{Hom}_{A_{1}^{e}}(A_{1}o(f_{i}^{n})\otimes_{k}t(f_{i}^{n})A_{1},A _{1})\simeq o(f_{i}^{n})A_{1}\)\(t(f_{i}^{n})\) ensuring that
\[o(f_{i}^{2})\lambda_{i}t(f_{i}^{2}) =o(f_{i}^{2})\eta(\varepsilon_{i}^{2})t(f_{i}^{2})=o(f_{i}^{2})\eta(o(f_{i}^{2})\otimes_{k}t(f_{i}^{2}))t(f_{i}^{2})\] \[=\eta(o(f_{i}^{2})^{2}\otimes_{k}t(f_{i}^{2})^{2})=\eta(o(f_{i}^{2})\otimes_{k}t(f_{i}^{2}))=\lambda_{i}.\]
This means that for \(i=0,1,2\) each \(\lambda_{i}\) should satisfy \(e_{1}\lambda_{i}e_{1}=\lambda_{i}\), since the origin and terminal vertices of \(f_{0}^{2},f_{1}^{2},f_{2}^{2}\) are both \(e_{1}\), while \(\lambda_{3}\) should satisfy \(e_{1}\lambda_{3}e_{2}=\lambda_{3}\). We obtain 9 solutions presented in Table 1.
Now suppose that there is some \(\phi:\mathbb{K}_{1}\to A_{1}\) such that \(\phi d_{2}(\varepsilon_{i}^{2})=\eta(\varepsilon_{i}^{2})\), \(i=0,1,2,3\). If \(\phi=\begin{pmatrix}0&\frac{1}{2}a&0\end{pmatrix}\), we get \(\eta=\begin{pmatrix}0&0&ab&0\end{pmatrix}\), so \(\eta=\begin{pmatrix}0&0&ab&0\end{pmatrix}\in\mathrm{Im}(d_{2}^{*})\). If \(\phi\) is equal to \(\begin{pmatrix}0&\frac{1}{2}e_{1}&0\end{pmatrix}\), \(\begin{pmatrix}e_{1}&0&0\end{pmatrix}\) or \(\begin{pmatrix}b&0&0\end{pmatrix}\), we obtain the following for \(\eta\): \(\begin{pmatrix}0&0&b&0\end{pmatrix}\), \(\begin{pmatrix}0&0&0&c\end{pmatrix}\) and \(\begin{pmatrix}0&0&0&bc\end{pmatrix}\) respectively. Therefore \(\mathrm{HH}^{2}(A_{1})=\frac{\mathrm{Ker}\,d_{3}^{*}}{\mathrm{Im}\,d_{2}^{*}}\) is generated as a \(k\)-vector space by \(\langle\widetilde{\eta},\bar{\eta},\chi,\bar{\chi},\sigma\rangle\) where \(\widetilde{\eta}=\begin{pmatrix}a&0&0&0\end{pmatrix}\), \(\bar{\eta}=\begin{pmatrix}ab&0&0&0\end{pmatrix}\), \(\bar{\chi}=\begin{pmatrix}0&ab&0&0\end{pmatrix}\), \(\chi=\begin{pmatrix}0&0&a&0\end{pmatrix}\) and \(\sigma=\begin{pmatrix}0&0&e_{1}&0\end{pmatrix}\).
Given in Table 2 are the first, second and third degree homotopy lifting maps associated to each the above elements of \(\mathrm{HH}^{2}(A_{1})\). It can be easily verified using the homotopy lifting equation in Definition (2.4) that these indeed are homotopy lifting maps.
The following Lemma follows immediately.
**Lemma 5.3**.: _Let \(A_{1}=kQ/I_{1}\) be a member of the family of quiver algebras where \(I_{1}=\langle a^{2},b^{2},ab-ba,ac\rangle\). The Hochschild 2-cocycles \(\eta=\begin{pmatrix}a&0&0&0\end{pmatrix}\), \(\chi=\begin{pmatrix}0&0&a&0\end{pmatrix}\), \(\bar{\eta}=\begin{pmatrix}ab&0&0&0\end{pmatrix}\), \(\bar{\chi}=\begin{pmatrix}0&ab&0&0\end{pmatrix}\) and \(\sigma=\begin{pmatrix}0&0&e_{1}&0\end{pmatrix}\) are Maurer-Cartan elements._
Proof.: Let \(\gamma\) be any of those elements of \(\mathrm{HH}^{2}(A_{1})\). We make use of Equation (5.1). Since they are all cocycles, \(d_{3}^{*}(\gamma)=0\). Also observe that \(\gamma\psi_{\gamma_{3}}(\varepsilon_{i}^{3})=0\) for all \(\gamma\in\mathrm{HH}^{2}(A_{1})\), therefore \(d_{3}^{*}(\gamma)+\gamma\psi_{\gamma}=0\).
## 6 Deformation of algebras using reduction system
Let \(A_{1}=kQ/I_{1}\) be a member of the family of quiver algebras introduced in Section 5. We now show, using the combinatorial star product of Equation (2.13), that \(\mathrm{HH}^{2}(A_{1})\) has a 5-parameter family of elements satisfying the Maurer-Cartan equation.
**Example 6.1**.: Recall that for \(A_{1}=kQ/I\), \(I=\langle a^{2},b^{2},ab-ba,ac\rangle\). If we use the set \(\{(a^{2},0),(b^{2},0),(ab,ba),(ac,0)\}\) as the reduction system, this system is reduction finite and reduction unique. All the one overlaps given by \(S_{3}\) resolve to 0 uniquely. The reduction system
\[R=\{(a^{2},0),(b^{2},0),(ab,ba),(ac,0)\}\]
satisfies the diamond condition \((\diamond)\) where \(\varphi(a^{2})=0,\varphi(b^{2})=0,\varphi(ab)=ba\) and \(\varphi(ac)=0\). The set \(S\) and the set \(\mathrm{Irr}_{S}\) of irreducible paths in the algebra are given respectively by \(S=\{a^{2},b^{2},ab,ac\}\) and \(\mathrm{Irr}_{S}=\{e_{1},e_{2},a,b,c,ba,bc\}\), so \(dim(A_{1})=7\). The paths \(a^{2}\) and \(ab\) overlap at \(a\) so \((aa)(ab)=a^{2}b\in S_{3}\). The set of one-overlaps is given as
\[S_{3}=\{a^{3},b^{3},a^{2}b,ab^{2},a^{2}c\}.\]
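The counting above can also be checked mechanically. The following sketch (not part of the original text) enumerates the irreducible monomials with respect to the reduction system \(R\); it assumes the orientation \(a,b:1\to 1\), \(c:1\to 2\) and left-to-right composition of paths, which is one consistent way to realize the quiver.

```python
from itertools import product

ARROWS = {"a": ("1", "1"), "b": ("1", "1"), "c": ("1", "2")}   # assumed orientation

def composable(word):
    # consecutive arrows must match head to tail (left-to-right composition)
    return all(ARROWS[x][1] == ARROWS[y][0] for x, y in zip(word, word[1:]))

def reduce_path(word):
    # apply R = {a^2 -> 0, b^2 -> 0, ab -> ba, ac -> 0} until nothing changes
    while True:
        if "aa" in word or "bb" in word or "ac" in word:
            return None                   # the path is zero in A_1
        if "ab" in word:
            word = word.replace("ab", "ba", 1)
            continue
        return word

irreducible = {"e1", "e2"}                # the two trivial paths at the vertices
for length in range(1, 4):                # nothing new appears beyond length 2
    for word in map("".join, product("abc", repeat=length)):
        if composable(word) and reduce_path(word) == word:
            irreducible.add(word)

print(sorted(irreducible), len(irreducible))
# expected: ['a', 'b', 'ba', 'bc', 'c', 'e1', 'e2'] 7
```

Running it reproduces \(\mathrm{Irr}_{S}=\{e_{1},e_{2},a,b,c,ba,bc\}\) and \(dim(A_{1})=7\), in agreement with the text.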
Notice that in the quiver \(Q\), the paths \(a^{2},b^{2},ab\in S\) are all parallel to the irreducible paths \(e_{1}=e,a,b,ba\) and the path \(ac\in S\) is parallel to \(c\) and \(bc\). Any element \(\widetilde{\varphi}:kS\to A_{1}\cong k\mathrm{Irr}_{S}\) viewed as \(\widetilde{\varphi}\in\mathrm{Hom}(kS,A_{1})\otimes(\tau)\) has
the following general form
\[\widetilde{\varphi}(a^{2}) =(\lambda_{e}+\lambda_{a}a+\lambda_{b}b+\lambda_{ba}ba)\tau\] \[\widetilde{\varphi}(b^{2}) =(\mu_{e}+\mu_{a}a+\mu_{b}b+\mu_{ba}ba)\tau\] \[\widetilde{\varphi}(ab) =(\nu_{e}+\nu_{a}a+\nu_{b}b+\nu_{ba}ba)\tau\] \[\widetilde{\varphi}(ac) =(w_{c}c+w_{bc}bc)\tau\]
\begin{table}
\begin{tabular}{|c|} \hline For \(\eta=\begin{pmatrix}a&0&0&0\end{pmatrix},\) we get \\ \hline \(\psi_{\eta_{1}}(\varepsilon_{i}^{1})=0,\ i=0,1,2,\) \\ \(\psi_{\eta_{2}}(\varepsilon_{i}^{2})=\begin{cases}\varepsilon_{0}^{1},&\text{ if }i=0\\ 0,&\text{ if }i=1,2,3\end{cases},\quad\psi_{\eta_{3}}(\varepsilon_{i}^{3})= \begin{cases}0,&\text{ if }i=0\\ \varepsilon_{1}^{2},&\text{ if }i=1\\ 0,&\text{ if }i=2\\ 0,&\text{ if }i=3\\ 0,&\text{ if }i=4\end{cases}\) \\ \hline \end{tabular} For \(\chi=\begin{pmatrix}0&0&a&0\end{pmatrix},\) we get \\ \hline \(\psi_{\chi_{1}}(\varepsilon_{i}^{1})=0,\ i=0,1,2,\) \\ \(\psi_{\chi_{2}}(\varepsilon_{i}^{2})=\begin{cases}0,&\text{ if }i=0,1,3\\ \varepsilon_{0}^{1},&\text{ if }i=2\end{cases},\quad\psi_{\chi_{3}}(\varepsilon_{i}^{3})= \begin{cases}0,&\text{ if }i=0\\ 0,&\text{ if }i=1\\ 0,&\text{ if }i=2\\ 0,&\text{ if }i=4\end{cases}\) \\ \hline \end{tabular} For \(\bar{\eta}=\begin{pmatrix}ab&0&0&0\end{pmatrix},\) we get \\ \hline \(\psi_{\bar{\eta}_{2}}(\varepsilon_{i}^{2})=\begin{cases}a\varepsilon_{1}^{1}+ \varepsilon_{0}^{1}b&\text{ if }i=0\\ 0,&\text{ if }i=1\\ 0,&\text{ if }i=3\end{cases},\quad\psi_{\bar{\eta}_{3}}(\varepsilon_{i}^{3})= \begin{cases}-a\varepsilon_{1}^{2},&\text{ if }i=0\\ 0,&\text{ if }i=1\\ 0,&\text{ if }i=2\\ 0,&\text{ if }i=3\\ b\varepsilon_{3}^{2}+\varepsilon_{1}^{2}c,&\text{ if }i=4\end{cases}\) \\ \hline \end{tabular} For \(\bar{\chi}=\begin{pmatrix}0&ab&0&0\end{pmatrix},\) we get \\ \hline \(\psi_{\bar{\chi}_{1}}(\varepsilon_{i}^{1})=0,\ i=0,1,2,\) \\ \(\psi_{\bar{\chi}_{2}}(\varepsilon_{i}^{2})=\begin{cases}0&\text{ if }i=0\\ a\varepsilon_{1}^{1}+\varepsilon_{0}^{1}b,&\text{ if }i=1\\ 0&\text{ if }i=2\\ 0&\text{ if }i=3\end{cases},\quad\psi_{\bar{\chi}_{3}}(\varepsilon_{i}^{3})= \begin{cases}0,&\text{ if }i=0\\ a\varepsilon_{1}^{2}-2\varepsilon_{0}^{2}b,&\text{ if }i=1\\ \varepsilon_{1}^{2}b,&\text{ if }i=2\\ 0,&\text{ if }i=3\\ 0,&\text{ if }i=4\end{cases}\) \\ \hline \end{tabular} For \(\sigma=\begin{pmatrix}0&0&e_{1}&0\end{pmatrix},\) we get \\ \hline \(\psi_{\sigma_{1}}(\varepsilon_{i}^{1})=0,\ i=0,1,2,\) \\ \(\psi_{\sigma_{2}}(\varepsilon_{i}^{2})=0,\ i=0,1,2,3\) \\ \(\psi_{\sigma_{3}}(\varepsilon_{i}^{3})=0,\ i=0,1,2,3,4\) \\ \hline \end{tabular}
\end{table}
Table 2. Homotopy lifting maps associated to some cocycles in degrees 1,2,3
for scalars \(\lambda_{e},\lambda_{a},\cdots,w_{c},w_{bc}\in k\). By [1, Corollary 7.37], \(\widetilde{\varphi}\) is a Maurer-Cartan element if and only if for each \(uvw\in S_{3}\) with \(uv,vw\in S\), Equation (2.13) holds. That is
\[(\pi(u)\star\pi(v))\star\pi(w)=\pi(u)\star(\pi(v)\star\pi(w))(mod\ \tau^{2})\]
since we are considering first order deformations. We now check conditions on the scalars for the associativity of the star product. This product defined for example for \(a,b\in A_{1}\) is given by
\[a\star b=\varphi(ab)+\widetilde{\varphi}(ab)\tau.\]
We check for all elements of \(S_{3}\). For instance, the calculations involved in using \(a^{3}\) to check that \(a\star(a\star a)=(a\star a)\star a\) are the following.
\[a\star(a\star a) =a\star(\varphi(a^{2})+\widetilde{\varphi}(a^{2})\tau)=a\star( \lambda_{e}+\lambda_{a}a+\lambda_{b}b+\lambda_{ba}ba)\tau\] \[=(\lambda_{e}\varphi(a)+\lambda_{a}\varphi(a^{2})+\lambda_{b} \varphi(ab)+\lambda_{ba}\varphi(aba))\tau\] \[+[\lambda_{e}\widetilde{\varphi}(a)+\lambda_{a}\widetilde{ \varphi}(a^{2})+\lambda_{b}\widetilde{\varphi}(ab)+\lambda_{ba}\widetilde{ \varphi}(aba)]\tau^{2}\] \[=(\lambda_{e}a+\lambda_{b}ba)\tau\]
and it is equal to
\[(a\star a)\star a =(\varphi(a^{2})+\widetilde{\varphi}(a^{2})\tau)\star a=( \lambda_{e}+\lambda_{a}a+\lambda_{b}b+\lambda_{ba}ba)\tau\star a\] \[=(\lambda_{e}\varphi(a)+\lambda_{a}\varphi(a^{2})+\lambda_{b} \varphi(ba)+\lambda_{ba}\varphi(ba^{2}))\tau\] \[+[\lambda_{e}\widetilde{\varphi}(a)+\lambda_{a}\widetilde{ \varphi}(a^{2})+\lambda_{b}\widetilde{\varphi}(ba)+\lambda_{ba}\widetilde{ \varphi}(ba^{2}))\tau^{2}\] \[=(\lambda_{e}a+\lambda_{b}ba)\tau.\]
This imposes no constraint, since it only gives \(\lambda_{e}=\lambda_{e}\) and \(\lambda_{b}=\lambda_{b}\). For \(a^{2}b=a\star(a\star b)=(a\star a)\star b\), we obtain \(a\star(a\star b)=(\nu_{e}a+\nu_{b}ba)\tau\) and \((a\star a)\star b=(\lambda_{e}b+\lambda_{a}ba)\tau\), so we get \(\nu_{e}=\lambda_{e}=0\) and \(\nu_{b}=\lambda_{a}\). Equivalent calculations for \(ab^{2}\) and \(a^{2}c\) yield \(\mu_{e}=\nu_{e}=0\), \(\mu_{b}=\nu_{a}\), and \(\lambda_{e}=\lambda_{b}=0\). We can now rewrite
\[\widetilde{\varphi}(a^{2}) =(\lambda_{a}a+\lambda_{ba}ba)\tau\] \[\widetilde{\varphi}(b^{2}) =(\mu_{a}a+\mu_{b}b+\mu_{ba}ba)\tau\] \[\widetilde{\varphi}(ab) =(\mu_{b}a+\lambda_{a}b+\nu_{ba}ba)\tau\] \[\widetilde{\varphi}(ac) =(w_{c}c+w_{bc}bc)\tau\]
so the Maurer-Cartan elements of \(\operatorname{HH}^{2}(A_{1})\) are parametrized by
\[\widetilde{\varphi}=(\lambda_{a},\lambda_{ba},\mu_{a},\mu_{b},\mu_{ba},\nu_{ ba},w_{c},w_{bc})\in k^{8}.\]
Our next goal is to show that three of these parameters can be eliminated by a coboundary so that \(\widetilde{\varphi}\in k^{5}\) and thus \(dim(\operatorname{HH}^{2}(A_{1}))=5\). Let \(\widetilde{\varphi^{\prime}}\) be defined by
\[\widetilde{\varphi^{\prime}}(a^{2}) =(\lambda^{\prime}_{a}a+\lambda^{\prime}_{ba}ba)\tau, \widetilde{\varphi^{\prime}}(b^{2}) =(\mu^{\prime}_{a}a+\mu^{\prime}_{b}b+\mu^{\prime}_{ba}ba)\tau\] \[\widetilde{\varphi^{\prime}}(ab) =(\mu^{\prime}_{b}a+\lambda^{\prime}_{a}b+\nu^{\prime}_{ba}ba)\tau, \widetilde{\varphi^{\prime}}(ac) =(w^{\prime}_{c}c+w^{\prime}_{bc}bc)\tau\]
From [1, Corollary 7.44], two cocycles \(\widetilde{\varphi}\) and \(\widetilde{\varphi}^{\prime}\) are cohomologous, i.e. satisfy \(\widetilde{\varphi}^{\prime}-\widetilde{\varphi}=\langle\Theta\rangle\), \(\Theta\in\mathrm{Hom}(kQ_{1},k\mathrm{Irr}_{S})\), if
\[T(\varphi(s))+\widetilde{\varphi}^{\prime}(s)=T(s_{1})\star\cdots\star T(s_{m} )(mod\;\tau^{2})\]
for some \(T:k\mathrm{Irr}_{S}[\tau]/(\tau^{2})\rightarrow\mathrm{Irr}_{S}[\tau]/(\tau^{2})\) defined by \(T(x)=x+\Theta(x)\tau\) with \(s=s_{1}s_{2}\cdots s_{m}\) a path of length \(m\). Any \(\Theta\in\mathrm{Hom}(kQ_{1},k\mathrm{Irr}_{S})\) has a general form
\[\Theta(a) =\alpha_{e}+\alpha_{a}a+\alpha_{b}b+\alpha_{ba}ba\] \[\Theta(b) =\beta_{e}+\beta_{a}a+\beta_{b}b+\beta_{ba}ba\] \[\Theta(c) =\gamma_{c}c+\gamma_{bc}bc,\]
where \((a\star\Theta(a))=\alpha_{e}a+\alpha_{a}\varphi(a^{2})+\alpha_{b}\varphi(ab)+ \alpha_{ba}\varphi(aba))+(\alpha_{e}\widetilde{\varphi}(a)+\alpha_{a} \widetilde{\varphi}(a^{2})+\alpha_{b}\widetilde{\varphi}(ab)+\alpha_{ba} \widetilde{\varphi}(aba))\tau.\) Whenever \(s=a^{2}\), then \(T(\varphi(a^{2}))+\widetilde{\varphi}^{\prime}(a^{2})=T(a)\star T(a)\) yields the following:
\[T(\varphi(a^{2}))+\widetilde{\varphi}^{\prime}(a^{2})=(\lambda_{a}^{\prime}a +\lambda_{ba}^{\prime}ba)\tau. \tag{6.1}\]
\[T(a)\star T(a)=(a+\Theta(a)\tau)\star(a+\Theta(a)\tau)\] \[=a\star a+(a\star\Theta(a))\tau+(\Theta(a)\star a)\tau+(\Theta(a )\star\Theta(a))\tau^{2}\] \[=(\lambda_{a}a+\lambda_{ba}ba)\tau+(\alpha_{e}a+\alpha_{b}ba)\tau +(\alpha_{e}a+\alpha_{b}ba)\tau+0\] \[=(\lambda_{a}a+\lambda_{ba}ba+2\alpha_{e}a+2\alpha_{b}ba)\tau\]
and comparing with Equation (6.1) we arrive at \(\lambda_{a}^{\prime}-\lambda_{a}=2\alpha_{e}\) and \(\lambda_{ba}^{\prime}-\lambda_{ba}=2\alpha_{b}\). With other similar equivalent calculations on \(s\) being \(b^{2},ab,ac\), we get
\[(a^{2}): \lambda_{a}^{\prime}-\lambda_{a}=2\alpha_{e} \lambda_{ba}^{\prime}-\lambda_{ba}=2\alpha_{b}\] \[(b^{2}): \mu_{a}^{\prime}-\mu_{a}=0 \mu_{b}^{\prime}-\mu_{b}=2\beta_{e} \mu_{ba}^{\prime}-\mu_{ba}=2\beta_{a}\] \[(ab): \mu_{b}^{\prime}-\mu_{b}=\beta_{e} \lambda_{a}^{\prime}-\lambda_{a}=\alpha_{e} \nu_{ba}^{\prime}-\nu_{ba}=\alpha_{a}+\beta_{b}\] \[(ac): w_{c}^{\prime}-w_{c}=\alpha_{e} w_{bc}^{\prime}-w_{bc}=\alpha_{b}\]
This implies that three variables in the parametric definition of \(\widetilde{\varphi}\) can be eliminated or simply \(\widetilde{\varphi}=(\lambda_{a},\lambda_{ba},\mu_{a},\mu_{b},\mu_{ba},\nu_{ ba},w_{c},w_{bc})\in k^{8}\) is cohomologous to \(\widetilde{\varphi}=(\lambda_{a},\lambda_{ba},\mu_{a},\mu_{b},0,\nu_{ba},0,0) \in k^{8}.\) Therefore \(\widetilde{\varphi}\) is in \(k^{5}\) or equivalently the dimension of \(\mathrm{HH}^{2}(A_{1})\) is equal to \(5\).
|
2308.03681 | Analytic density of states of two-dimensional Chern insulator | We present analytic expressions for the density of states and its consistent
derivation for the two-dimensional Qi-Wu-Zhang (QWZ) Hamiltonian, a generic
model for the Chern topological insulators of class A. This density of states
is expressed in terms of elliptical integrals. We discuss and plot special
cases of the dispersion relations and the corresponding densities of states.
Spectral moments are also presented. The exact formulae ought to be useful in
determining physical properties of the non-interacting Chern insulators and
within the dynamical mean-field theory for interacting fermions with the QWZ
Hamiltonian in the non-interacting limit. | Vera Uzunova, Krzysztof Byczuk | 2023-08-07T15:56:20Z | http://arxiv.org/abs/2308.03681v2 | # Analytic density of states of two-dimensional Chern insulator
###### Abstract
We present analytic expressions for the density of states and its consistent derivation for the two-dimensional Qi-Wu-Zhang (QWZ) Hamiltonian, a generic model for the Chern topological insulators of class A. This density of states is expressed in terms of elliptical integrals. We discuss and plot special cases of the dispersion relations and the corresponding densities of states. Spectral moments are also presented. The exact formulae ought to be useful in determining physical properties of the non-interacting Chern insulators and within the dynamical mean-field theory for interacting fermions with the QWZ Hamiltonian in the non-interacting limit.
## I Introduction
A topological insulator (TI) is a common name for the novel class of systems with non-trivial topological properties [1; 2]. Historically, the first example of TI was a two-dimensional electron gas in a strong magnetic field where the integer quantum Hall effect was observed [3]. After predicting and later discovering of other examples of TIs [4] the subject becomes in a main stream of condensed matter physics [5], of cold atoms in optical lattices [6], of photonics [7], and even of electric engineering [8].
One of a possible path to investigate TIs is to study tight binding models defined on particular lattices. Among various interesting examples are either the Su-Schrieffer-Heeger model [9] and the Rice-Mele model [10] in one-dimension or the Haldane model [11] and the Qi-Wu-Zhang (QWZ) model in two dimensions [12]. The latter one is defined on a square lattice and the corresponding two dimensional Brillouin zone whereas the former one is formulated on a hexagonal lattice.
In particular, the QWZ model is a well-known system in studying physics of fermions such as bulk and edge properties, different topological states, thermodynamics, transport, and many others [13]. This model is also used as a non-interacting part of the many-body Hamiltonian where the two-particle interaction is included. Its further generalization to arbitrary dimensions and even to the limit of infinite dimension proved that topological insulators are possible in interacting systems as well [14].
In spite of such a broad interest in the QWZ model its density of states (DOS) is not yet determined analytically. Although the DOS by itself is not sufficient to provide topological classification of a system, it is a basic and very important quantity necessary to investigate thermodynamics, thermodynamic phases, response of the system on different probes, and many other quantities. To fill in this gap, in this article we derive analytical formulae of the DOS in terms of elliptic integrals. We discuss details of the derivation and basic properties of the DOS when the relevant model parameter is varied.
The DOS, denoted here by \(\rho(\Omega)\), counts the number of states in a vicinity of a particular value of energy \(\Omega\), i.e. \(dN=\rho(\Omega)d\Omega\). It can be obtained from the Green's function. Analytic derivation of the DOS, even for two-dimensional (2D) systems, is typically a challange and thus there are very few known analytical results. One of the first example was obtained in 1953 by Hobson and Nierenberg [15]. It is an analytical expression for the DOS of graphene with the nearest-neighbor hopping and represented by the complete elliptical integrals. The consequent derivation of this result is presented in [16]. The DOS for some others 2D latices are also obtained analytically [17]. In particular, it can be done for square, triangular, honeycomb, Kagome, and Lieb lattices. In this paper the DOS is analytically derived for the QWZ Hamiltonian modeling a Chern insulator on the square lattice.
Our presentation is organized as follows: In section II we define the QWZ model and discuss the dispersion relations, in Section III we introduce the DOS and present its analytic derivation for the QWZ Hamiltonian, Section IV is devoted to the discussion of the DOS in different parameter regimes, in Section V we show some additional features in the DOS, the spectral moments are discussed in Section VI, and we close our presentation with Section VII, where we offer our conclusions and outlooks. In Appendix A we provide mathematical definitions of the elliptic integrals and in the Appendix B we give selected details on calculating the spectral moments.
## II QWZ model Hamiltonian in two dimensions
A generic form of the two-band Hamiltonian for a 2D noninteracting system in the momentum space can be
written as
\[\hat{H}=\sum_{\mathbf{k}}\hat{H}_{\mathbf{k}}=\sum_{\mathbf{k}}\mathbf{h}(\mathbf{ k})\cdot\hat{\mathbf{\sigma}}, \tag{1}\]
where \(\mathbf{k}=(k_{x},k_{y})\)\((-\pi/a_{L}<k_{x}\), \(k_{y}\leq\pi/a_{L})\) is a 2D wave vector in the first Brillouin zone corresponding to the 2D square lattice with the lattice constant \(a_{L}\), \(\mathbf{h}(\mathbf{k})\) is a vector with three components being given functions of \(\mathbf{k}\), and \(\hat{\mathbf{\sigma}}\) is the vector with components represented by the three Pauli matrices \(\hat{\sigma}_{x}\), \(\hat{\sigma}_{y}\), and \(\hat{\sigma}_{z}\).
The Hamiltonian (1) describes a two level system, corresponding to either the two orbitals or the spin 1/2 degrees of freedom. The vector \(\mathbf{h}(\mathbf{k})\) is interpreted as a Zeeman like magnetic field with some (perhaps non-trivial) dependence on the wave vector \(\mathbf{k}\). This model breaks the time reversal symmetry and belongs to the class A in the ten fold way classification scheme [18]. The Hamiltonian (1) can be easily diagonalized, giving a two band energy spectrum \(\epsilon_{\pm}(\mathbf{k})=\pm h(\mathbf{k})\), where \(h(\mathbf{k})=|\mathbf{h}(\mathbf{k})|\) is the length of the vector \(\mathbf{h}(\mathbf{k})\).
In the following we consider a particular parametrization where the length of \(\mathbf{h}(\mathbf{k})\) is given by \(h(\mathbf{k})^{2}=m^{2}+2t^{2}+2t^{2}\cos(k_{x}a_{L})\cos(k_{y}a_{L})+2mt[\cos (k_{x}a_{L})+\cos(k_{y}a_{L})]\). It corresponds to the following representation of the vector \(\mathbf{h}(\mathbf{k})\):
\[h_{x}(\mathbf{k}) = t\sin(k_{x}a_{L}),\] \[h_{y}(\mathbf{k}) = t\sin(k_{y}a_{L}),\] \[h_{z}(\mathbf{k}) = m+t\cos(k_{x}a_{L})+t\cos(k_{y}a_{L}), \tag{2}\]
where \(t\) is the hopping amplitude. In the momentum space this vector \(\mathbf{h}(\mathbf{k})\) has a Skyrmion configuration for \(0<|m|/t<2\)[13]. In other words, the system is a topological insulator with the finite Chern number \(\pm 1\) and conducting surface states at the half-filling. Hamiltonian (1) can be interpreted as a tight-binding model of a magnetic semiconductor with the Rashba-type spin-orbit interaction and a uniform magnetization along the \(z\)-axis [13]. In the following we take \(t=1\), which sets the energy unit. We also use \(a_{L}=1\) for the length unit.
For \(m=0\), \(0.5\), \(1\), \(2\), and \(2.5\) the corresponding eigenvalues (energy bands) of the Hamiltonian (1) are plotted in Figs. 1-5, respectively. We see that at \(m=0\) (Fig. 1) the band gap closes at the \(X\) (\(Y\)) special points of the square Brillouin zone, while at \(m=\pm 2\) (Fig. 4) it closes at the \(M\) (\(\Gamma\)) point, and characteristic Dirac cones are formed. For \(m\neq 0\), \(\pm 2\) the bands are separated by the gap, as seen in Figs. 2, 3, and 5.
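The dispersion relations and the gap closings quoted above are easy to reproduce numerically. The following minimal script (not part of the paper) evaluates \(\epsilon_{\pm}(\mathbf{k})=\pm|\mathbf{h}(\mathbf{k})|\) from Eq. (2) on a dense \(k\)-grid and prints the direct band gap for several values of \(m\); the grid size and the list of \(m\) values are arbitrary choices, and \(t=a_{L}=1\) as in the text.

```python
import numpy as np

def band_energy(kx, ky, m, t=1.0):
    """Upper band eps_+(k) = |h(k)| of the QWZ model, Eq. (2); the lower band is -eps_+(k)."""
    hx = t * np.sin(kx)
    hy = t * np.sin(ky)
    hz = m + t * np.cos(kx) + t * np.cos(ky)
    return np.sqrt(hx**2 + hy**2 + hz**2)

k = np.linspace(-np.pi, np.pi, 401)            # grid containing the Gamma, X, Y and M points
KX, KY = np.meshgrid(k, k)
for m in (0.0, 0.5, 1.0, 2.0, 2.5):
    gap = 2.0 * band_energy(KX, KY, m).min()   # direct gap between the two bands
    print(f"m = {m:4.1f}   gap = {gap:.3f}")
# the gap vanishes only for m = 0 (Dirac cones at X and Y) and m = +/-2 (at M or Gamma)
```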
The DOS is obtained from the imaginary part of the retarded Green's function
\[\rho(\Omega)=-\frac{1}{\pi}\Im[\operatorname{Tr}\mathbf{G}(\Omega)]. \tag{7}\]
The diagonal matrix elements of the Green's function \(\mathbf{G}(\Omega)\) are
\[G_{\pm}(\Omega)=\frac{1}{N_{L}}\sum_{\mathbf{k}}\frac{1}{\Omega-\epsilon_{\pm} (\mathbf{k})+\imath 0^{+}}, \tag{8}\]
and for a while the small imaginary part \(\imath 0^{+}\) will not be written explicitly. Summation over \(\mathbf{k}\) can be replaced by the continuous integral in the first Brillouin zone \(\sum_{\mathbf{k}}\rightarrow\frac{L^{2}}{(2\pi)^{2}}\int_{BZ}dk_{x}dk_{y}\), where \(L\) is the length of the system and \(L^{2}=N_{L}a_{L}^{2}\), with \(a_{L}=1\). Then the trace of the Green's function has the form
\[\operatorname{Tr}\mathbf{G}(\Omega)=\frac{\Omega}{2\pi^{2}}\int_{-\pi}^{\pi} \int_{-\pi}^{\pi}\frac{dk_{x}dk_{y}}{\Omega^{2}-h(\mathbf{k})^{2}}. \tag{9}\]
Integrations with respect to \(k_{x}\) and \(k_{y}\) are symmetric. We first perform the integration of the function \((\Omega^{2}-h(k_{x},k_{y})^{2})^{-1}\) with respect to \(k_{x}\). It is convenient
Figure 3: Dispersion relations \(\epsilon_{\pm}(\mathbf{k})\) of the QWZ model at: (a) \(m=1\), and (b) \(m=-1\).
Figure 2: Dispersion relations \(\epsilon_{\pm}(\mathbf{k})\) of the QWZ model at: (a) \(m=0.5\), and (b) \(m=-0.5\).
to denote
\[a=\Omega^{2}-(m^{2}+2+2m\cos k_{y}),\] \[b=-2(m+\cos k_{y}), \tag{10}\]
and then to use the identity (2.558.4) in [20]
\[\int\frac{dk_{x}}{a+b\cos k_{x}}=\frac{2}{\sqrt{a^{2}-b^{2}}}\arctan\left[\frac{a-b}{\sqrt{a^{2}-b^{2}}}\tan\left(\frac{k_{x}}{2}\right)\right], \tag{11}\]
where \(a^{2}-b^{2}>0\). For the case \(a^{2}-b^{2}\leq 0\) the result of the integration (11) can be formally rewritten as
\[\int_{-\pi}^{\pi}\frac{dk_{x}}{\Omega^{2}-h^{2}}=\frac{-2\pi\imath}{\sqrt{b^{2}-a^{2}}}\operatorname{csgn}\left[\frac{\imath(b-a)}{\sqrt{b^{2}-a^{2}}}\right], \tag{12}\]
where it is taken into account that \(\arctan(\pm\tan\pi/2)=\pm\pi/2\). The complex signum function, abbreviated as csgn, is equal to the sign (function sgn) of the real part of the argument, and sign of the imaginary part if the real part is zero. We analytically continue this result by
taking \(\Omega\to\Omega+\imath 0^{+}\), where we add an infinitesimally small imaginary part as required in the retarded Green's function. Then \(a=(\Omega+\imath 0^{+})^{2}+...\) acquires a small imaginary part \(2\imath\Omega 0^{+}\) defining the value of csgn. Taking the limit of infinitesimal \(0^{+}\) we arrive at the following expression for the trace of the Green's function
\[\operatorname{Tr}\mathbf{G}(\Omega)=\frac{2\Omega}{\pi}\int_{0}^{\pi}dk_{y}\begin{cases}\frac{\operatorname{sgn}a}{\sqrt{a^{2}-b^{2}}}&\text{for}\;\;a^{2}>b^{2},\\ \frac{-\imath\operatorname{sgn}\Omega}{\sqrt{b^{2}-a^{2}}}&\text{for}\;\;a^{2}\leq b^{2},\end{cases} \tag{13}\]
where we used the fact that the integrand is an even function of \(k_{y}\). Here the values \(a\) and \(b\) are the functions of the variables \(k_{y}\) and \(\Omega\) and the parameter \(m\) as determined by Eqs. (10).
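As a quick sanity check of the angular integral used above (not part of the paper), one can compare the definite form implied by Eqs. (11)-(12) with numerical quadrature in the regime \(a^{2}>b^{2}\); the test values of \(a\) and \(b\) below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.3, -1.1                                     # arbitrary test values with a > |b|
numeric, _ = quad(lambda k: 1.0 / (a + b * np.cos(k)), -np.pi, np.pi)
closed_form = 2.0 * np.pi / np.sqrt(a * a - b * b)   # for a < -|b| the sign flips, cf. sgn(a) in Eq. (13)
print(numeric, closed_form)
```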
The DOS is proportional to the imaginary part of \(\operatorname{Tr}G(\Omega)\), therefore it is nonzero only in the region of \(\Omega\) where
\[a^{2}-b^{2}\leq 0. \tag{14}\]
Outside the region (14) \(\operatorname{Tr}G\) is real and the DOS is equal to zero, i.e., \(\rho(\Omega)=0\).
The next step in the evaluation of Eq. (13) is to perform the integration over \(k_{y}\). It is convenient to substitute \(y=\cos k_{y}\) and \(dk_{y}=-dy(1-y^{2})^{-1/2}\). The boundaries of the integration are \(-1\leq y\leq 1\), where changing the order of the integration boundaries results in an additional minus sign. The integration over \(y\) is performed differently depending on the value of the parameter \(m\). Therefore, in what follows we consider three cases, \(|m|=1\), \(|m|>1\), and \(|m|<1\), separately.
#### ii.2.1 Case \(|m|=1\)
Calculations are simpler in the special case of \(|m|=1\). The function in the denominator of the integral (13) can be written explicitly as \(a^{2}-b^{2}=(\Omega^{2}-1)(\Omega^{2}-5-4my)\). It is linear in \(y\) and changes the sign only once at the point \(y=y_{0}\operatorname{sgn}m\), where \(y_{0}=(\Omega^{2}-5)/4\). Solving the condition (14) with respect to \(\Omega\) and using the fact that \(|y|\leq 1\) we obtain that DOS is nonzero only for \(1\leq|\Omega|\leq 3\).
The condition (14) considered with respect to \(y\) determines the boundaries of integration in Eq. (13). For \(m=1\) it is satisfied for \(y_{0}\leq y\leq 1\). Then the imaginary part of Eq. (13) in terms of the variable \(y\) gives the DOS in the implicit form
\[\rho(\Omega)=\frac{|\Omega|}{\pi^{2}\sqrt{\Omega^{2}-1}}\int_{y_{0}}^{1}\frac {dy}{\sqrt{(1-y^{2})(y-y_{0})}}, \tag{15}\]
where the definition (7) is used. A similar expression appears for \(m=-1\): the boundaries of integration are \(-1\leq y\leq-y_{0}\) and the denominator in the integrand is \(\sqrt{(1-y^{2})(-y-y_{0})}\). The corresponding DOS can be transformed into the result Eq. (15) by replacing the integration variable \(y\to-y\). So Eq. (15) is valid for both cases \(m=\pm 1\).
The integral over \(y\) can be calculated in terms of the complete elliptical integral of the first kind \(K(x)\) using the identity (3.131.5) from [20]
\[\int_{u_{2}}^{u_{3}}\frac{dy}{\sqrt{(u_{3}-y)(y-u_{2})(y-u_{1})}}\] \[\quad=\frac{2}{\sqrt{u_{3}-u_{1}}}K\left[\sqrt{\frac{(u_{3}-u_{2} )}{(u_{3}-u_{1})}}\right], \tag{16}\]
where \(u_{3}>u_{2}>u_{1}\). In Eq. (15) these parameters are \(u_{1}=-1\), \(u_{2}=y_{0}\), \(u_{3}=1\) and the result of integration is equal to \(\sqrt{2}K\left[\sqrt{(1-y_{0})/2}\right]\). Then the DOS for \(|m|=1\) is
\[\rho_{|m|=1}(\Omega)=\begin{cases}\frac{\sqrt{2}}{\pi^{2}}\frac{|\Omega|}{ \sqrt{1}^{2}-1}K\left[\sqrt{\frac{9-\Omega^{2}}{8}}\right],&\text{if}\;\;1 \leq\Omega^{2}\leq 9,\\ 0,&\text{otherwise}.\end{cases} \tag{17}\]
For convenience, in the Appendix A we provide the definitions of the elliptic integrals.
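For \(|m|=1\) the closed form (17) is straightforward to evaluate with standard libraries. The sketch below (not part of the paper) does so and compares it with a Lorentzian-broadened estimate obtained directly from Eqs. (7)-(8) on a finite \(k\)-grid. It assumes that the argument of \(K\) in Eq. (17) is the modulus \(k\), so that SciPy's `ellipk`, which takes the parameter \(k^{2}\), is called as `ellipk(x**2)`; the grid size and the broadening are arbitrary choices.

```python
import numpy as np
from scipy.special import ellipk                    # ellipk(s) = K(k) with s = k**2

def dos_m1_analytic(w):
    """Eq. (17): DOS of the QWZ model for |m| = 1 and t = 1 (w is a scalar energy)."""
    if not 1.0 <= w * w <= 9.0:
        return 0.0
    return np.sqrt(2.0) / np.pi**2 * abs(w) / np.sqrt(w * w - 1.0) * ellipk((9.0 - w * w) / 8.0)

def dos_m1_numeric(w, nk=600, eta=0.01):
    """Broadened estimate of the same DOS from Eqs. (7)-(8), with 0+ replaced by eta."""
    k = (np.arange(nk) + 0.5) * 2.0 * np.pi / nk - np.pi
    KX, KY = np.meshgrid(k, k)
    h = np.sqrt(np.sin(KX)**2 + np.sin(KY)**2 + (1.0 + np.cos(KX) + np.cos(KY))**2)
    tr_g = 1.0 / (w - h + 1j * eta) + 1.0 / (w + h + 1j * eta)
    return -np.imag(tr_g).mean() / np.pi

for w in (1.5, 2.0, 2.5):
    print(w, dos_m1_analytic(w), dos_m1_numeric(w))
```

The two estimates should approach each other as the grid is refined and the broadening is reduced, except very close to \(|\Omega|=1\), where the DOS of Eq. (17) diverges.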
#### ii.2.2 Case \(|m|>1\)
To calculate the DOS for all other values of the parameter \(m\) we need first to analyze the function in denominator of Eq. (13). For \(|m|\neq 1\) we can write \(a^{2}-b^{2}=4(m^{2}-1)(y-y_{1})(y-y_{2})\). Here the left and right zeros of the function are denoted as \(y_{1}=\min(\tilde{y}_{1},\tilde{y}_{2})\) and \(y_{2}=\max(\tilde{y}_{1},\tilde{y}_{2})\), respectively, where
\[\tilde{y}_{1} =\frac{\Omega^{2}-1-(m+1)^{2}}{2(m+1)},\] \[\tilde{y}_{2} =\frac{\Omega^{2}-1-(m-1)^{2}}{2(m-1)}. \tag{18}\]
So we have to set \(y_{1}=\tilde{y}_{1}\), \(y_{2}=\tilde{y}_{2}\) if the following condition is met
\[\frac{\Omega^{2}+m^{2}-2}{m^{2}-1}>0, \tag{19}\]
and to set \(y_{1}=\tilde{y}_{2}\), \(y_{2}=\tilde{y}_{1}\), otherwise.
Let us consider the case \(|m|>1\); then the factor \(m^{2}-1\) is positive. Rewriting Eq. (13) in terms of the variable \(y\) and substituting it in Eq. (7), we obtain the nonzero DOS in the implicit form
\[\rho(\Omega)=\frac{|\Omega|}{\pi^{2}\sqrt{m^{2}-1}}\int\frac{dy}{\sqrt{(1-y^{2 })(y_{1}-y)(y-y_{2})}}. \tag{20}\]
Here the boundaries of integration are determined by Eq. (14), which reads \(y_{1}\leq y\leq y_{2}\), together with \(|y|\leq 1\). There are two cases in which these conditions can be satisfied by the integration variable \(y\):
\[|y_{1}|\leq 1,|y_{2}|>1,\] \[|y_{1}|>1,|y_{2}|\leq 1. \tag{21}\]
The first one leads to the integration region \(y_{1}\leq y\leq 1\), and the second one to \(-1\leq y\leq y_{2}\). To perform the integration we use the identity (3.147.5) in [20], namely
\[\int_{u_{2}}^{u_{3}}\frac{dy}{\sqrt{(u_{4}-y)(u_{3}-y)(y-u_{2})(y-u_ {1})}}\] \[=\frac{2}{\sqrt{(u_{4}-u_{2})(u_{3}-u_{1})}}K\left[\sqrt{\frac{(u_ {3}-u_{2})(u_{4}-u_{1})}{(u_{4}-u_{2})(u_{3}-u_{1})}}\right], \tag{22}\]
where \(u_{4}>u_{3}>u_{2}>u_{1}\). The first line of Eq. (21) corresponds to \(u_{1}=-1\), \(u_{2}=y_{1}\), \(u_{3}=1\) and \(u_{4}=y_{2}\) and the second line corresponds to \(u_{1}=y_{1}\), \(u_{2}=-1\), \(u_{3}=y_{2}\) and \(u_{4}=1\). The results of the integration in both cases are identical
\[\rho(\Omega)=\frac{\sqrt{2}}{\pi^{2}}\frac{|\Omega|}{\sqrt{m^{2}-1}}\frac{1}{ \sqrt{y_{2}-y_{1}}}K\left[\sqrt{\frac{(1-y_{1})(1+y_{2})}{2(y_{2}-y_{1})}} \right]. \tag{23}\]
The last step of this evaluation is to express the obtained formula in terms of \(m\) and \(\Omega\). As follows from Eq. (19) for all \(\Omega^{2}>2-m^{2}\) we can set \(y_{1}=\tilde{y}_{1}\), \(y_{2}=\tilde{y}_{2}\), where \(\tilde{y}_{1}\), \(\tilde{y}_{2}\) are given by Eq. (18). Then the solution of the inequalities (21) is given by the condition \((|m|-2)^{2}<\Omega^{2}<(|m|+2)^{2}\). After making sure that \(2-m^{2}<(|m|-2)^{2}\) we substitute the same \(y_{1}\), \(y_{2}\) into Eq. (23) to obtain
\[\rho_{\rm I}(\Omega)=\frac{\sqrt{2}}{\pi^{2}}\frac{|\Omega|}{ \sqrt{\Omega^{2}+m^{2}-2}}.\] \[\cdot K\left[\sqrt{\frac{[\Omega^{2}-(m-2)^{2}][(m+2)^{2}-\Omega ^{2}]}{8(\Omega^{2}+m^{2}-2)}}\right], \tag{24}\]
where we denote this result by \(\rho_{\rm I}(\Omega)\).
The final expression for the DOS for the case \(|m|>1\) can be presented as
\[\rho_{|m|>1}(\Omega)=\begin{cases}\rho_{\rm I}(\Omega),&\text{ if }\;(|m|-2)^{2}\leq \Omega^{2}\leq(|m|+2)^{2},\\ 0,&\text{ otherwise.}\end{cases} \tag{25}\]
Note that this expression also covers the case \(m^{2}=1\): for \(|m|=1\), Eq. (25) coincides exactly with Eq. (17).
#### iii.1.3 Case \(|m|<1\)
We repeat all the reasoning of the previous subsection for the case \(|m|<1\). In this case the factor \(|m|-1\) is negative, and the nonzero DOS obtained from Eqs. (13) and (7) in terms of the variable \(y\) is of the form
\[\rho(\Omega)=\frac{|\Omega|}{\pi^{2}\sqrt{1-m^{2}}}\int\frac{dy}{\sqrt{(1-y^{ 2})(y-y_{1})(y-y_{2})}}, \tag{26}\]
where \(|y|\leq 1\) by the definition. The boundaries of the integration are determined by Eq. (14), that gives \(y\leq y_{1}\) or \(y\geq y_{2}\). There are three different possibilities for \(y\) to satisfy these conditions,
\[|y_{1}|\leq 1,|y_{2}|>1,\] \[|y_{1}|>1,|y_{2}|\leq 1,\] \[|y_{1}|<1,|y_{2}|<1. \tag{27}\]
We are considering each of them.
To perform integration in Eq. (26) we use the identities (3.147.3) and (3.147.7) in [20] respectively, that for the case of complete elliptical integrals have the same right hand side, namely
\[\int_{u_{1}}^{u_{2}}\frac{dy}{\sqrt{(u_{4}-y)(u_{3}-y)(u_{2}-y)(y-u_{1})}}= \tag{28a}\] \[=\int_{u_{3}}^{u_{4}}\frac{dy}{\sqrt{(u_{4}-y)(y-u_{3})(y-u_{2})(y-u_{1})}}=\] (28b) \[=\frac{2}{\sqrt{(u_{4}-u_{2})(u_{3}-u_{1})}}K\left[\sqrt{\frac{(u_ {4}-u_{3})(u_{2}-u_{1})}{(u_{4}-u_{2})(u_{3}-u_{1})}}\right],\]
where \(u_{4}>u_{3}>u_{2}>u_{1}\).
The first line of Eq. (27) gives the boundaries of integration as \(-1\leq y\leq y_{1}\) in the integral (26). The integration can be performed by using Eq. (28a) with \(u_{1}=-1\), \(u_{2}=y_{1}\), \(u_{3}=1\) and \(u_{4}=y_{2}\). The second line of Eq. (27) gives the boundaries of integration as \(y_{2}\leq y\leq 1\) in the integral (26). The integration can be performed by using Eq. (28b) with \(u_{1}=y_{1}\), \(u_{2}=-1\), \(u_{3}=y_{2}\) and \(u_{4}=1\). The results in these two cases are identical
\[\rho(\Omega)=\frac{\sqrt{2}}{\pi^{2}}\frac{|\Omega|}{\sqrt{1-m^{2}}}\frac{1} {\sqrt{y_{2}-y_{1}}}K\left[\sqrt{\frac{(1+y_{1})(y_{2}-1)}{2(y_{2}-y_{1})}} \right]. \tag{29}\]
The third line of Eq. (27) splits integral (26) into the sum of two integrals with boundaries \(-1\leq y\leq y_{1}\) and \(y_{2}\leq y\leq 1\). The first one is performed by using Eq. (28a) and the second one is performed by using Eq. (28b), where we put \(u_{1}=-1\), \(u_{2}=y_{1}\), \(u_{3}=y_{2}\) and \(u_{4}=1\) for both integrals. The results of the integrations are identical and the summation reduces to multiplication by 2. The final expression reads
\[\rho(\Omega)=\frac{4}{\pi^{2}}\frac{|\Omega|}{\sqrt{1-m^{2}}}\frac{1}{\sqrt{(1-y_{1})(1+y_{2})}}.\] \[\cdot K\left[\sqrt{\frac{(1+y_{1})(1-y_{2})}{(1-y_{1})(1+y_{2})}}\right]. \tag{30}\]
As the last step we express the formulas (29) and (30) in terms of \(m\) and \(\Omega\). Eq. (29) corresponds to the first two lines of Eq. (27). As follows from Eq. (19), for all \(\Omega^{2}>2-m^{2}\) we can set \(y_{1}=\tilde{y}_{2}\), \(y_{2}=\tilde{y}_{1}\), where \(\tilde{y}_{1}\), \(\tilde{y}_{2}\) are given by Eq. (18). Note that the first two lines of
Eq. (27) are equivalent to Eq. (21) and are symmetrical with respect to permutation of \(y_{1}\) and \(y_{2}\). So we can repeat the same reasoning as for the case \(|m|>1\). The solution of this condition with respect to \(\Omega^{2}\) gives the same region \((|m|-2)^{2}<\Omega^{2}<(|m|+2)^{2}\) and substitution of \(y_{1}\) and \(y_{2}\) into Eq. (29) results in the same DOS \(\rho_{\rm I}(\Omega)\), given by Eq. (25).
To evaluate Eq. (30), corresponding to the third line of Eq. (27), we notice that this line is symmetric with respect to permutation of \(y_{1}\) and \(y_{2}\). Rewriting it in terms of \(\Omega^{2}\) we obtain the region \((|m|-2)^{2}>\Omega^{2}>m^{2}\) in which Eq. (30) is valid. This region is split into two parts by Eq. (19). In particular, for \((|m|-2)^{2}>\Omega^{2}>2-m^{2}\) we set \(y_{1}=\tilde{y}_{2}\), \(y_{2}=\tilde{y}_{1}\), and substitute them into Eq. (30) to obtain
\[\rho_{\rm II}(\Omega)=\frac{8}{\pi^{2}}\frac{|\Omega|}{|\Omega^{2}-m^{2}|}\,K\left[\sqrt{\frac{[\Omega^{2}-(m-2)^{2}][\Omega^{2}-(m+2)^{2}]}{(\Omega^{2}-m^{2})^{2}}}\right], \tag{31}\]
where we denote this result by \(\rho_{\rm II}(\Omega)\). In the region \(2-m^{2}>\Omega^{2}>m^{2}\) we set \(y_{1}=\tilde{y}_{1}\), \(y_{2}=\tilde{y}_{2}\), and substitution into Eq. (30) gives
\[\rho_{\rm III}(\Omega)=\frac{8}{\pi^{2}}\frac{|\Omega|}{\sqrt{[\Omega^{2}-(m-2)^{2}][\Omega^{2}-(m+2)^{2}]}}\,K\left[\sqrt{\frac{(\Omega^{2}-m^{2})^{2}}{[\Omega^{2}-(m-2)^{2}][\Omega^{2}-(m+2)^{2}]}}\right], \tag{32}\]
where we denote this result by \(\rho_{\rm III}(\Omega)\). The final expression for the DOS for the case \(|m|<1\) can be presented as
\[\rho_{|m|<1}(\Omega)=\begin{cases}\rho_{\rm I}(\Omega),&\text{if }\;(|m|-2)^{2}< \Omega^{2}\leq(|m|+2)^{2},\\ \rho_{\rm II}(\Omega),&\text{if }\;2-m^{2}\leq\Omega^{2}\leq(|m|-2)^{2},\\ \rho_{\rm III}(\Omega),&\text{if }\;m^{2}\leq\Omega^{2}<2-m^{2},\\ 0,&\text{otherwise}.\end{cases} \tag{33}\]
Note that the DOS for all values of the parameter \(m\) does not depend on the sign of \(m\) and is therefore symmetric with respect to \(m\) and \(-m\), in contrast to the dispersion relations.
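The piecewise expressions (25) and (33) can be checked against the normalization to two bands. The sketch below is a minimal illustration in Python; it assumes SciPy's convention that `ellipk(m)` takes the parameter \(m=q^{2}\), so that \(K[\sqrt{X}]\) in Eqs. (24), (31) and (32) corresponds to `ellipk(X)` in the code.

```python
# A minimal numerical check of the closed-form DOS.
import numpy as np
from scipy.special import ellipk
from scipy.integrate import quad

def rho_I(w, m):                                   # Eq. (24)
    x = w * w
    arg = (x - (m - 2)**2) * ((m + 2)**2 - x) / (8 * (x + m**2 - 2))
    return np.sqrt(2) / np.pi**2 * abs(w) / np.sqrt(x + m**2 - 2) * ellipk(arg)

def rho_II(w, m):                                  # Eq. (31)
    x = w * w
    arg = (x - (m - 2)**2) * (x - (m + 2)**2) / (x - m**2)**2
    return 8 / np.pi**2 * abs(w) / abs(x - m**2) * ellipk(arg)

def rho_III(w, m):                                 # Eq. (32)
    x = w * w
    num = (x - (m - 2)**2) * (x - (m + 2)**2)
    return 8 / np.pi**2 * abs(w) / np.sqrt(num) * ellipk((x - m**2)**2 / num)

def rho(w, m):                                     # Eqs. (25) and (33)
    x, am = w * w, abs(m)
    if am >= 1:
        return rho_I(w, m) if (am - 2)**2 <= x <= (am + 2)**2 else 0.0
    if (am - 2)**2 < x <= (am + 2)**2:
        return rho_I(w, m)
    if 2 - m**2 <= x <= (am - 2)**2:
        return rho_II(w, m)
    if m**2 <= x < 2 - m**2:
        return rho_III(w, m)
    return 0.0

for m in [0.5, 1.0, 1.5, 2.5]:
    # break points: gap edge, section boundary, and the peak Omega_infty of Eq. (34)
    w_inf = np.sqrt(2 - m**2) if abs(m) < 1 else abs(m)
    pts = sorted({abs(m), abs(abs(m) - 2), w_inf})
    half, _ = quad(rho, 0, abs(m) + 2, args=(m,), points=pts, limit=200)
    print(f"m = {m}: total spectral weight = {2 * half:.4f} (expected 2)")
```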
## IV Summary of analytical results and plots of total DOS
In this Section we present plots of the total DOS for different \(m\), corresponding to the dispersion relations shown in Section II, and provide a consistent analysis of them. The plots are given in Figs. 6-10, which appear in the same order as the dispersion relations in Figs. 1-5. In these DOS plots the ranges of the vertical axis are different, but all DOS are normalized to two, corresponding to the two bands. We see that the shapes of the DOS for the QWZ model are much richer, with more additional features, than, for example, for the hexagonal lattice with nearest-neighbor hopping. Apart from the symmetry between \(m\) and \(-m\) mentioned above, the total DOS is symmetric with respect to \(\Omega\) and \(-\Omega\). This is clearly visible in the analytic formulae for the DOS, Eqs. (17, 25, 33), where \(\Omega\) enters only as \(\Omega^{2}\) or \(|\Omega|\).
The typical shape of the DOS for \(0<|m|<1\) is shown in Fig. 7. We can see that each nonvanishing part of the plot has three sections; in each section the function is described by \(\rho_{\rm I}\), \(\rho_{\rm II}\), or \(\rho_{\rm III}\), cf. Eq. (33). Specifically, in the plot in Fig. 7 for \(|m|=0.5\), these sections are separated by the points \(|\Omega|=\sqrt{1.75}\approx 1.32\) and \(|\Omega|=1.5\). At the energies \(\pm\sqrt{2-m^{2}}\), separating two neighboring sections, the DOS has infinite peaks.
Another characteristic feature of the system with \(0<|m|<1\) is the opening of a band gap of width \(2|m|\). It can be seen in the corresponding dispersion relations, cf. Fig. 2, and in the plots of the DOS, where \(\rho(\Omega)=0\) in the range \(-|m|<\Omega<|m|\). At half-filling such a system is a topological insulator [12; 13].
The gap is closed for \(m=0\), as seen in Fig. 6, where the DOS has a pseudogap at \(\Omega=0\) (the DOS vanishes at a single point). It corresponds to the formation of Dirac cones at the \(X=(\pm\pi,0)\) and \(Y=(0,\pm\pi)\) points of the Brillouin zone, which is easy to see in the plots of the dispersion relations, cf. Fig. 1. At half-filling such a system is a semi-metal.
The special case \(|m|=1\), given by Eq. (17), is shown in Fig. 8 and the corresponding dispersion relations are in Fig. 3. Flat parts in the dispersion relations, i.e. lines along which \(\nabla_{\bf k}\epsilon_{\pm}({\bf k})=0\), give rise to the appearance of sharp peaks in the DOS. Despite the fact that these flat parts are different for \(m=1\) and \(m=-1\), the shape of the DOS is the same. In this \(|m|=1\) case the system is a topological insulator at half-filling [12; 13].

Figure 6: Total density of states of the QWZ model at \(m=0\).
The typical shape of the DOS for \(|m|>1\) (excluding \(|m|=2\)) is shown in Fig. 10. The nonzero part of the DOS is described by a single elliptic integral, cf. Eq. (25). It has two symmetric infinite peaks at the energies \(\pm|m|\). The system has a band gap of width \(2||m|-2|\) in the dispersion relation, shown in Fig. 5. In the plot of the DOS the gap corresponds to \(\rho(\Omega)=0\) in the range \(-||m|-2|<\Omega<||m|-2|\). At half-filling the system with \(|m|<2\) is a topological insulator and the system with \(|m|>2\) is a trivial insulator [12; 13].
For \(|m|=2\) the band gap closes and a pseudogap appears, as shown in the plot of the DOS in Fig. 9. The formation of Dirac cones can be seen in the corresponding dispersion relations in Fig. 4. For \(m=2\) the gap closes at the \(M=(\pm\pi,\pm\pi)\) and \((\pm\pi,\mp\pi)\) points of the Brillouin zone, and for \(m=-2\) it closes at the \(\Gamma=(0,0)\) point. At half-filling such a system is a semi-metal.
## V Subtle features seen in total DOS
In this Section, we discuss additional subtle features and general trends that can be observed in the DOS of the considered QWZ model.
### Additional finite peaks
As discussed earlier, the total DOS is symmetric with respect to \(m\) and \(-m\), as well as with respect to \(\Omega\) and \(-\Omega\). It has two infinite peaks located at \(\pm\Omega_{\infty}\), where the elliptic integral \(K(1)=\infty\), i.e.,
\[\Omega_{\infty}=\begin{cases}\sqrt{2-m^{2}},&\text{if}\;\;|m|<1,\\ |m|,&\text{if}\;\;|m|\geq 1.\end{cases} \tag{34}\]
We find that for values of \(|m|\) slightly larger than 1, two additional finite peaks appear at the edges of the band gap, \(\pm\Omega_{L}\), as shown in Fig. 11 for the case \(|m|=1.15\). At larger \(|m|\) these peaks disappear.
### Widths of the band gap and the bands
Let \(\Delta\) be the width of the band gap. Its value for arbitrary \(m\) (in units with \(t=1\)) is given by the simple expression
\[\Delta=2\begin{cases}|m|,&\text{if}\;\;|m|<1,\\ ||m|-2|,&\text{if}\;\;|m|\geq 1.\end{cases} \tag{35}\]
Edges of the band gap are located at energies \(\pm\Omega_{L}\), where \(\Omega_{L}=\Delta/2\).
On the other hand, the upper and lower bounds of the energy spectrum (the dispersion relations \(\epsilon_{\pm}(\mathbf{k})\)) are at \(\pm\Omega_{R}\), where \(\Omega_{R}=|m|+2\). The width of each of the two bands \(\epsilon_{\pm}(\mathbf{k})\) is defined as \(W=\Omega_{R}-\Omega_{L}\), since \(\Omega_{R}>\Omega_{L}\). Its value is explicitly given by
\[W=2\begin{cases}1,&\text{if}\;\;|m|<1,\\ |m|,&\text{if}\;\;1\leq|m|\leq 2,\\ 2,&\text{if}\;\;|m|>2.\end{cases} \tag{36}\]
The dependence of the gap width \(\Delta\) on \(m\) is shown in the Inset of Fig. 12. It is seen that when \(\Delta<2\) a gap of the same width is opened for six different values of the parameter \(m\); at half-filling four of these values of \(m\) correspond to topological insulators and the other two to trivial insulators.
To discuss this further we choose the case with the width \(\Delta=1\), which is possible for \(|m|=0.5\), \(1.5\), and \(2.5\). The DOS for \(|m|=1.5\) and \(2.5\) are shown in Fig. 13, whereas the DOS for \(|m|=0.5\) is in Fig. 7.

Figure 11: Total density of states of the QWZ model at \(m=\pm 1.15\).

Figure 12: The value of the DOS at the edges of a band gap, i.e. at \(\Omega=\Omega_{L}\), vs. the parameter \(m\). The Inset shows a dependence of the gap width vs. \(m\). The topological and trivial insulators are indicated in the figures.

Interestingly, it can be seen that the DOS at the edges of the band gap, i.e., \(\rho(\Omega=\pm\Omega_{L})\), for the topological insulator (\(|m|=0.5\) and 1.5) is larger than the one for the trivial case \(|m|=2.5\). The DOS at the edges of the band gap can be obtained analytically using the property \(K(0)=\pi/2\) of the elliptic integral, namely
\[\rho(\Omega_{L})=\frac{1}{2\pi}\begin{cases}2|m|/\sqrt{1-m^{2}},&\text{if}\;\;| m|<1,\\ ||m|-2|/(|m|-1),&\text{if}\;\;|m|\geq 1.\end{cases} \tag{37}\]
The dependence of \(\rho(\Omega_{L})\) as a function of \(m\) is shown in Fig. 12. For \(|m|>2\) it is a finite smooth function. When \(|m|<2\) the function is singular and diverges at \(m=\pm 1\).
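A short script illustrates Eqs. (35)-(37) and the comparison made above for the six values of \(m\) that share the same gap width; the numerical values are meant only as an illustration of the formulas.

```python
# Gap width, band width, and DOS at the gap edge, following Eqs. (34)-(37).
import numpy as np

def gap_width(m):                      # Eq. (35)
    return 2 * abs(m) if abs(m) < 1 else 2 * abs(abs(m) - 2)

def band_width(m):                     # Eq. (36)
    am = abs(m)
    if am < 1:
        return 2.0
    return 2 * am if am <= 2 else 4.0

def dos_at_gap_edge(m):                # Eq. (37)
    am = abs(m)
    if am < 1:
        return (1 / (2 * np.pi)) * 2 * am / np.sqrt(1 - m**2)
    return (1 / (2 * np.pi)) * abs(am - 2) / (am - 1)

# The six values of m that open a gap of width Delta = 1 (cf. the Inset of Fig. 12):
for m in [0.5, 1.5, 2.5, -0.5, -1.5, -2.5]:
    print(f"m = {m:+.1f}: Delta = {gap_width(m):.2f}, W = {band_width(m):.2f}, "
          f"rho(Omega_L) = {dos_at_gap_edge(m):.4f}")
```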
## VI Spectral moments of total DOS
In this Section we present results on the spectral moments of the total DOS. The general formula for the spectral moment of the order \(n\) reads
\[M_{n}=\int\Omega^{n}\rho(\Omega)d\Omega. \tag{38}\]
Integrals of the form (38) can be calculated analytically on the basis of our analytic results from Section III. Some of the calculations are shown in Appendix B to demonstrate details of the integration technique.
The moment of zeroth order is the normalization integral, \(M_{0}=2\). All moments of odd order are zero because the DOS is an even function of \(\Omega\).
The moments of even order \(n\) are found to be polynomials in the parameter \(m\) of the same order:
\[M_{2}=2(m^{2}+2),\] \[M_{4}=2(m^{4}+8m^{2}+5),\] \[M_{6}=2(m^{6}+18m^{4}+51m^{2}+14),\] \[M_{8}=2(m^{8}+32m^{6}+210m^{4}+284m^{2}+42). \tag{39}\]
This sequence can be continued if needed. Note, that all moments have a factor of 2, which corresponds to two symmetrical bands.
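The moments (39) can be cross-checked by a direct Brillouin-zone average. The sketch below assumes the standard QWZ dispersion \(\epsilon_{\pm}(\mathbf{k})=\pm\sqrt{\sin^{2}k_{x}+\sin^{2}k_{y}+(m+\cos k_{x}+\cos k_{y})^{2}}\) with \(t=1\) (not restated in this Section); since both bands contribute equally to even moments, \(M_{n}=2\langle\epsilon_{+}^{n}\rangle_{\rm BZ}\).

```python
# Numerical check of the spectral moments in Eq. (39), assuming the standard
# QWZ dispersion with t = 1.
import numpy as np

def moments(m, n_max=8, grid=801):
    k = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    eps2 = np.sin(kx)**2 + np.sin(ky)**2 + (m + np.cos(kx) + np.cos(ky))**2
    # Both bands contribute equally to even moments; odd moments vanish.
    return {n: 2 * np.mean(eps2 ** (n // 2)) for n in range(2, n_max + 1, 2)}

m = 0.7
num = moments(m)
exact = {2: 2 * (m**2 + 2),
         4: 2 * (m**4 + 8 * m**2 + 5),
         6: 2 * (m**6 + 18 * m**4 + 51 * m**2 + 14),
         8: 2 * (m**8 + 32 * m**6 + 210 * m**4 + 284 * m**2 + 42)}
for n in num:
    print(f"M_{n}: grid = {num[n]:.6f}, Eq.(39) = {exact[n]:.6f}")
```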
## VII Conclusions and outlooks
In this paper we derived the analytic formulae of the DOS of the QWZ Hamiltonian, a generic model for topological insulators in two dimensions. The results are expressed in terms of the complete elliptic integrals. Analytic expressions for the DOS are rare in general. Our results extend the class of models where the analytic DOS is known.
We discussed in detail the plots of the DOS and compared them with the dispersion relations for the same values of the parameter \(m\). Some additional finite peaks in the DOS were identified. We provided explicit formulae for the gap width and for the width of the bands in the QWZ model. We also found that for the same gap width the topological system has a larger DOS at the gap edge than the trivial one; apparently, in the topological case more spectral weight is redistributed close to the band gap. Finally, we obtained expressions for the spectral moments of the QWZ model. They are polynomials in the parameter \(m\), which controls the topology, of the same order as the order of the corresponding moment.
The analytic DOS should be useful in determining the thermodynamics or response of the QWZ Chern topological insulator. It should also simplify dynamical mean-field theory studies of the QWZ model when a Hubbard-type interaction is added to the Hamiltonian.
We hope that by using similar methods other DOS can be obtained in analytic form, for example for the Haldane model on a hexagonal lattice.
## VIII Acknowledgment
We acknowledge the financial support of the _Excellence Initiative - Research University_ (IDUB) via the grant under the program New Ideas - Ukraine. K.B. also acknowledges the support of the Deutsche Forschungsgemeinschaft under the _Transregional Collaborative Research Center_ TRR360.
Figure 13: Comparison of the two DOS of the QWZ model, which are characterized by the same gap. Total density of states at \(m=\pm 1.5\) (black dashed line) at half-filling corresponds to topological insulator. Total density of states at \(m=\pm 2.5\) (green solid line) at half-filling corresponds to trivial insulator. Enhancement of the DOS for topological case is seen.
## Appendix A Full elliptic integrals of the first and second kind
Expressions for the elliptic integrals, which we are using, are the following
\[K(q)=\int_{0}^{\pi/2}\frac{d\beta}{\sqrt{1-q^{2}\sin^{2}\beta}}, \tag{A1}\]
\[E(q)=\int_{0}^{\pi/2}d\beta\sqrt{1-q^{2}\sin^{2}\beta}, \tag{A2}\]
where the argument \(q\) is called by mathematicians the modulus of the elliptic integral, and by \(q^{\prime}\) we denote the complementary modulus, i.e., \(q^{\prime 2}=1-q^{2}\).
## Appendix B Calculation of spectral moments in details
Here we show how to calculate some of the integrals (38). Though the spectral moment of any order can be easily calculated numerically, the analytical derivation is of special value and methodological interest.
We start with the simple case of \(|m|=1\). We express \(q^{2}=(9-\Omega^{2})/8\) and \(q^{\prime 2}=(\Omega^{2}-1)/8\). For all values of \(\Omega^{2}\) from the region where \(\rho(\Omega)\) is nonzero the modulus satisfies \(0\leq q\leq 1\). In these terms the integral normalizing the DOS takes the form
\[\frac{M_{0}}{2}=\frac{4}{\pi^{2}}\int_{0}^{1}K(q)dq^{\prime}. \tag{40}\]
Using the identity (6.141.2) in [20], namely
\[\int_{0}^{1}K(q^{\prime})dq=\frac{\pi^{2}}{4}, \tag{41}\]
we obtain the correct value of the normalization.
The second moment takes the form
\[\frac{M_{2}}{2}=\frac{4}{\pi^{2}}\int_{0}^{1}(9-8q^{2})K(q)dq^{\prime}. \tag{42}\]
It can be rewritten as \(M_{2}/2=M_{0}/2+32J/\pi^{2}\), where
\[J=\int_{0}^{1}q^{\prime 2}K(q)dq^{\prime}=\int_{0}^{1}q^{\prime}K(q)\,q\,dq. \tag{43}\]
Here we used \(q^{\prime}dq^{\prime}=-qdq\). The integral \(J\) can be determined by parts using the following substitution \(dv=K(q)qdq\) and \(u=q^{\prime}\). It allows us to use the indefinite integral (5.112.3) in [20], namely
\[v=\int K(q)qdq=E(q)-q^{\prime 2}K(q), \tag{44}\]
where \(E(q)\) is the elliptic integral of the second kind. Integration by parts gives \(J=\int_{0}^{1}E(q)dq^{\prime}-J\), where it is used that \(E(0)=K(0)=\pi/2\) and \(E(1)=1\). Using the identity (6.148.2) in [20]
\[\int_{0}^{1}E(q^{\prime})dq=\frac{\pi^{2}}{8} \tag{45}\]
we finally obtain \(J=\pi^{2}/16\) and \(M_{2}=6\).
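These steps can be verified numerically; a minimal sketch, assuming SciPy's parameter convention `ellipk(q**2)` and using the integration variable \(t=q^{\prime}\), is

```python
# Numerical check of the elliptic-integral identities used for |m| = 1, Eqs. (40)-(45).
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe

I_K = quad(lambda t: ellipk(1 - t**2), 0, 1)[0]            # Eq. (41): pi^2/4
I_E = quad(lambda t: ellipe(1 - t**2), 0, 1)[0]            # Eq. (45): pi^2/8
J   = quad(lambda t: t**2 * ellipk(1 - t**2), 0, 1)[0]     # Eq. (43): pi^2/16
M2  = 2 * (4 / np.pi**2) * quad(
        lambda t: (9 - 8 * (1 - t**2)) * ellipk(1 - t**2), 0, 1)[0]  # Eq. (42)

print(I_K, np.pi**2 / 4)
print(I_E, np.pi**2 / 8)
print(J,   np.pi**2 / 16)
print(M2)   # should print 6 within quadrature accuracy
```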
The moment of order \(n\) is expressed as a combination of integrals of the product of an elliptic integral and an even power of the modulus,
\[M_{n}=\sum_{i=0}^{n/2}C_{i}\int_{0}^{1}q^{2i}K(q)dq^{\prime}. \tag{46}\]
Here \(C_{i}\) are real numbers; compare with Eq. (42). The same procedure with integration by parts can therefore be used to obtain the moment of arbitrary order.
For \(|m|>1\) the DOS has infinite peaks at \(\pm\Omega_{\infty}\). The integral over \(\Omega\) in Eq. (38) is split into two: from \(\Omega_{L}\) to \(\Omega_{\infty}\) and from \(\Omega_{\infty}\) to \(\Omega_{R}\). In these regions the complementary modulus \(q^{\prime}\) takes values from 1 to 0, and from 0 to 1, respectively. Denoting \(x=\Omega^{2}\) we can present the moment of order \(n\) in the form
\[M_{n}=\frac{\sqrt{2}}{\pi^{2}}\int_{0}^{1}dq^{\prime}K(q)\left[\frac{x_{2}^{n/2}}{\sqrt{x_{2}+m^{2}-2}}\frac{dx_{2}}{dq^{\prime}}-\frac{x_{1}^{n/2}}{\sqrt{x_{1}+m^{2}-2}}\frac{dx_{1}}{dq^{\prime}}\right]. \tag{47}\]
The substitution \(x=x_{1,2}(q)\) is not a single-valued function: \(x_{1,2}=m^{2}+4q^{\prime 2}\mp 4q^{\prime}\sqrt{m^{2}-q^{2}}\), where the upper sign "\(-\)" corresponds to the first region and the lower sign "\(+\)" to the second one. The expression in the square brackets is transformed algebraically into a polynomial function \(\sum_{i=0}^{n/2}C_{i}(m^{2})q^{2i}\); for \(n=0\) it is equal to \(4\sqrt{2}\) and for \(n=2\) it is equal to \(4\sqrt{2}(m^{2}+8q^{\prime 2})\). The moments are then of the form (46), where the coefficients \(C_{i}=C_{i}(m^{2})\) are functions of \(m^{2}\). The result of the calculations leads to the expressions (39).
For the case \(|m|<1\) two regions of integration need to be considered separately: \(|m|\leq\Omega\leq 2-|m|\) and \(2-|m|\leq\Omega\leq 2+|m|\). In each of these regions the substitution \(x=x(q)\) is not single-valued, so each integral splits into two, similarly to the case \(|m|>1\).
In the first region we obtain the following: For \(|m|\leq\Omega\leq\sqrt{2-m^{2}}\) the substitution is \(x_{1}=m^{2}-4q(q-\sqrt{1-m^{2}q^{\prime 2}})/q^{\prime 2}\). The limits of integration \(x_{1}(q^{\prime}=1)=m^{2}\) and \(\lim_{q^{\prime}\to 0}x_{1}=2-m^{2}\). For \(\sqrt{2-m^{2}}\leq\Omega\leq 2-|m|\) the substitution is \(x_{2}=m^{2}+4(1-\sqrt{q^{2}+m^{2}q^{\prime 2}})/q^{\prime 2}\) with the limits of integration \(x_{2}(q^{\prime}=1)=(2-|m|)^{2}\) and \(\lim_{q^{\prime}\to 0}x_{2}=2-m^{2}\).
In the second region we have \(x_{3,4}=m^{2}+4q^{\prime 2}\mp 4q^{\prime}\sqrt{m^{2}-q^{2}}\) valid for \(2-|m|\leq\Omega\leq\Omega_{M}\) and \(\Omega_{M}\leq\Omega\leq 2+|m|\), respectively. Here \(\Omega_{M}\) is defined by the condition \(dq/d\Omega=0\) and \(q^{2}=\frac{[\Omega^{2}-(m-2)^{2}][(m+2)^{2}-\Omega^{2}]}{8(\Omega^{2}+m^{2}-2)}\). It can be proven that \(q(\Omega_{M})=|m|\).
Thus the formula for \(M_{n}\) takes the form
\[M_{n}=\frac{8}{\pi^{2}}\int_{0}^{1}dq^{\prime}K(q)\left[\frac{x_{2}^{n/2}}{x_{2}-m^{2}}\frac{dx_{2}}{dq^{\prime}}-\frac{x_{1}^{n/2}}{\sqrt{(x_{1}-(m-2)^{2})(x_{1}-(m+2)^{2})}}\frac{dx_{1}}{dq^{\prime}}\right]+\frac{\sqrt{2}}{\pi^{2}}\int_{0}^{|m|}dqK(q)\left[\frac{x_{3}^{n/2}}{\sqrt{x_{3}+m^{2}-2}}\frac{dx_{3}}{dq}-\frac{x_{4}^{n/2}}{\sqrt{x_{4}+m^{2}-2}}\frac{dx_{4}}{dq}\right]. \tag{100}\]
It should be noted that these integrals are difficult to calculate analytically. For example, for \(M_{0}\) the substitution \(x=x(q)\) in Eq. (100) results in
\[(M_{0}-1)\frac{\pi^{2}}{8}=\int_{0}^{|m|}dqK(q)\frac{q}{\sqrt{m^{2}-q^{2}}}+\int_{0}^{1}dq\frac{K(q)}{q^{\prime 2}}\left[\frac{q}{\sqrt{m^{2}+q^{2}(1-m^{2})}}-\frac{q}{\sqrt{(1-m^{2})+m^{2}q^{2}}}\right], \tag{101}\]
where we used the identity (6.144) in [20]
\[\int_{0}^{1}K(q)\frac{1}{1+q}dq=\frac{\pi^{2}}{8}. \tag{102}\]
Knowing the fact that \(M_{0}=2\), the expression (101) turns into an interesting relation between integrals containing \(K(q)\).
In order to determine these integrals, the integrands can be expanded into an infinite series in \(q^{2}\), followed by term-by-term integration. However, the easiest way to verify Eq. (39) for the case \(|m|<1\) is numerical integration.
|
2304.05125 | Prior Entanglement Exponentially Improves One-Server Quantum Private
Information Retrieval for Quantum Messages | Quantum private information retrieval (QPIR) for quantum messages is a
quantum communication task, in which a user retrieves one of the multiple
quantum states from the server without revealing which state is retrieved. In
the one-server setting, we find an exponential gap in the communication
complexities between the presence and absence of prior entanglement in this
problem with the one-server setting. To achieve this aim, as the first step, we
prove that the trivial solution of downloading all messages is optimal under
QPIR for quantum messages, which is a similar result to that of classical PIR
but different from QPIR for classical messages. As the second step, we propose
an efficient one-server one-round QPIR protocol with prior entanglement by
constructing a reduction from a QPIR protocol for classical messages to a QPIR
protocol for quantum messages in the presence of prior entanglement. | Seunghoan Song, Francois Le Gall, Masahito Hayashi | 2023-04-11T10:34:53Z | http://arxiv.org/abs/2304.05125v1 | Prior Entanglement Exponentially Improves One-Server Quantum Private Information Retrieval for Quantum Messages
###### Abstract
Quantum private information retrieval (QPIR) for quantum messages is a quantum communication task, in which a user retrieves one of the multiple quantum states from the server without revealing which state is retrieved. In the one-server setting, we find an exponential gap in the communication complexities between the presence and absence of prior entanglement in this problem with the one-server setting. To achieve this aim, as the first step, we prove that the trivial solution of downloading all messages is optimal under QPIR for quantum messages, which is a similar result to that of classical PIR but different from QPIR for classical messages. As the second step, we propose an efficient one-server one-round QPIR protocol with prior entanglement by constructing a reduction from a QPIR protocol for classical messages to a QPIR protocol for quantum messages in the presence of prior entanglement.
## I Introduction
### Private information retrieval (PIR)
Entanglement is a valuable resource for quantum information processing, enabling various tasks including quantum teleportation [1] and dense coding, also known as entanglement-assisted communication [2]. Although entanglement-assisted communication enhances the speed of both conventional and secret communication, the improvement is limited to a constant factor [3; 4]. For the further development of entanglement-assisted communication, we need to find settings in which entanglement yields a significant improvement.
For this aim, we focus on private information retrieval (PIR), a task in which a user retrieves a message from a server without revealing which message has been retrieved, when the server possesses multiple messages. Many papers [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] studied quantum PIR (QPIR), i.e., PIR using quantum states, when the intended messages are given as classical messages. This problem setting is abbreviated as C-QPIR. On the other hand, since various types of quantum information processing require the transmission of quantum states, i.e., quantum messages [20; 21; 22; 23; 24], it is necessary to develop QPIR for quantum messages, which is abbreviated as Q-QPIR, although no preceding paper has studied this topic.
In this paper, to enhance quantum information technology, we study private information retrieval for quantum messages with one server, and present an exponential speedup through the use of prior entanglement as a significant improvement. Although there have been mainly two approaches: PIR with computational assumptions [25; 26] and PIR with multiple servers [27; 28; 29], recent attention has focused on information-theoretic aspects of PIR [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. In this paper, we solely consider one-server QPIR without computational assumptions.
### QPIR for classical messages
PIR has also been studied when quantum communication is allowed between the user and the server [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. These papers consider the case when the total number of bits in the messages is \(\mathsf{m}\). For the secrecy in C-QPIR, we often focus on the potential information leakage in all rounds, which is called the _all-round criterion_ in this paper and has been studied under several security models. One is the _honest-server model_, in which we discuss the user's secrecy only when the server is honest, i.e., the server does not deviate from the protocol. The other is the _specious-server model_, in which we discuss the user's secrecy even when the server deviates from the protocol, as long as its dishonest operations are not revealed to the user; such a server is called a _specious adversary_. The secrecy under the specious-server model has a stronger requirement than the secrecy under the honest-server model. Interestingly, under the honest-server model, Le Gall [9] proposed a C-QPIR protocol with communication complexity \(O(\sqrt{\mathsf{m}})\) in the all-round criterion, and Kerenidis et al. [10] improved this result to \(O(\operatorname{poly}\log\mathsf{m})\) in another criterion, where the communication complexity in the quantum case is the total number of communicated qubits. However, when the specious-server model is adopted and the possible input states are extended to arbitrary superposition states, Baumeler and Broadbent [8] proved that the communication complexity is at least \(\mathsf{\Theta}(\mathsf{m})\), i.e., the trivial solution of downloading all messages is optimal also for this case. Even when prior entanglement is allowed between the user and the server, the communica
tion complexity is also lower bounded by \(\Theta(\mathsf{m})\) under the specious-server model with the above extended possible input states [11]. Therefore, the advantage of prior entanglement is limited under the specious-server model with the above extended possible input states. In contrast, prior entanglement might potentially have polynomial improvement under the honest-server model, but it is still unclear how much prior entanglement improves communication complexity under the honest-server model.
When the server truly follows the protocol, the information obtained by the server is limited to the server's final state. Hence, the information leakage in the server's final state can be considered as another criterion, which is called the _final-state criterion_. While the final-state criterion under the honest-server model is too weak a setting, it is reasonable to consider the final-state criterion under the specious-server model, which is essentially equivalent to the cheat-sensitive setting studied in [45].
### Our contributions
In this paper, for Q-QPIR protocols and the total number \(\mathsf{m}\) of qubits, we show that the communication complexity is at least \(\Theta(\mathsf{m})\), i.e., the trivial solution of downloading all messages is optimal for one-server Q-QPIR even in the final-state criterion and even with the honest-server model if prior entanglement is not allowed between the server and the user. This fact shows that prior entanglement between the server and the user is necessary for further improvement under the one-server model even for Q-QPIR under the honest-server model, the weakest secrecy requirement. To overcome this problem, we propose a one-server Q-QPIR protocol with prior entanglement between the server and the user, which achieves the communication complexity \(O\left(\log\mathsf{m}\right)\). That is, prior entanglement has exponential improvement for Q-QPIR under the honest-server model.
### Organization of this paper
The remainder of the paper is organized as follows. Section II gives the definitions of several concepts and the outline of our results including the comparison with existing results. Section III is the technical preliminaries of the paper. Section IV presents our results for C-QPIR protocols with communication complexity \(O(\log\mathsf{m})\). Section V derives the lower bound of the communication complexity for Q-QPIR in the final-state criterion under the honest-server model when prior entanglement is not shared. Section VI proposes an efficient Q-QPIR protocol with prior entanglement under various settings. Section VII is the conclusion of the paper.
## II Definitions and outline of our results
### Definitions of various concepts
To briefly explain our results, we prepare the definitions of various concepts to cover C-QPIR protocols and Q-QPIR protocols in a common framework.
#### ii.1.1 Correctness, complexity, and unitary-type
To discuss the properties of our QPIR protocols, we prepare several concepts. First, we define the set \(\mathcal{S}\) of possible quantum states as a subset of the set \(\mathcal{S}(\mathcal{H}_{d})\) of states on \(\mathbb{C}^{d}\). A QPIR protocol is called a QPIR protocol with \(\mathbb{C}^{d}\) over the set \(\mathcal{S}\) when it works with \(\mathcal{S}\) as the set of possible quantum states. For example, when \(\mathcal{S}\) is the set \(\mathcal{C}\) of orthogonal pure states \(\{|j\rangle\}_{j=0}^{d-1}\), a QPIR protocol is the C-QPIR protocol discussed in [8]. In contrast, when \(\mathcal{S}\) is the set \(\mathcal{Q}\) of all pure states on the system \(\mathbb{C}^{d}\), a QPIR protocol is a Q-QPIR protocol. When we do not specify the set \(\mathcal{S}\), we assume that it is given as in one of the above cases. We denote the number of messages by \(\mathsf{f}\). A QPIR protocol \(\Phi\) has two types of inputs. The first input is composed of \(\mathsf{f}\) messages, whose systems are written as \(\mathcal{H}_{1},\ldots,\mathcal{H}_{\mathsf{f}}\). Their state is written as \(\mathsf{f}\) states \((\rho_{1},\ldots,\rho_{\mathsf{f}})\in\mathcal{S}^{\mathsf{f}}\). The second input is the choice of the label of the message intended by the user, which is written as the random variable \(K\). The quantum system to describe the variable \(K\) is denoted by \(\mathcal{K}\). We denote the remaining initial user's and server's systems by \(\mathcal{R}_{u}\) and \(\mathcal{R}_{s}\), respectively. The output of the protocol is a state \(\rho_{out}\) on \(\mathcal{H}_{d}\).
A QPIR protocol \(\Phi\) has bilateral communication. The communication from the user to the servers is the upload communication, and the communication from the servers to the users is the download communication. The communication complexity is composed of the upload complexity and the download complexity. The upload complexity is the sum of the communication sizes of all
Figure 1: One-server QPIR protocol with quantum messages. At round \(i\), the user uploads a query \(\mathbf{Q}^{(i)}\) and downloads an answer \(A^{(i)}\).
upload communications, and the download complexity is the sum of the communication sizes of all download communications. The sum of the upload and download complexity is called the communication complexity. We adopt the communication complexity as the optimality criterion under various security conditions.
A QPIR protocol \(\Phi\) is called a deterministic protocol when the following two conditions hold. The upload complexity and the download complexity are determined only by the protocol \(\Phi\). When the user and the servers are honest, the output is determined only by \((\rho_{1},\ldots,\rho_{\mathsf{f}})\) and \(K\). When \(\Phi\) is a deterministic protocol, we denote the output state by \(\Phi_{out}(\rho_{1},\ldots,\rho_{\mathsf{f}},K)=\rho_{out}\). The upload complexity, the download complexity, and the communication complexity are denoted by \(UC(\Phi)\), \(DC(\Phi)\), and \(CC(\Phi)\), respectively. Hence, the communication complexity \(CC(\Phi)\) is calculated as \(UC(\Phi)+DC(\Phi)\). A protocol \(\Phi\) is called correct when the protocol is a deterministic protocol and the relation \(\Phi_{out}(\rho_{1},\ldots,\rho_{\mathsf{f}},k)=\rho_{k}\) holds for any elements \(k\in[\mathsf{f}]\) and \((\rho_{1},\ldots,\rho_{\mathsf{f}})\in\mathcal{S}^{\mathsf{f}}\).
Another important class of QPIR protocols is unitary-type protocols. When a QPIR protocol \(\Phi\) satisfies the following conditions, it is called _unitary-type_.
* The initial states \(\rho_{\mathcal{R}_{s}}\) on \(\mathcal{R}_{s}\) and \(\rho_{\mathcal{R}_{u}}\) on \(\mathcal{R}_{u}\) are pure.
* At each round, both the user and the server apply only unitary operations to the systems under their control.
* A measurement is done only when the user reads out the message as the outcome of the protocol.
The reference [11] refers to the above property as measurement-free due to the third condition while it assumes the first and second conditions implicitly. Since the first and second conditions are more essential, we call it unitary-type.
#### ii.1.2 Secrecy
In this paper, we address only the secrecy of the user's choice. There are two security criteria. One is the final-state criterion, in which it is required that the server's final state does not depend on the user's choice \(K\). The other is the all-round criterion, in which it is required that the server's state in any round does not depend on the user's choice \(K\). When we consider the secrecy, we may extend the set of possible inputs to \(\tilde{\mathcal{S}}\) that includes the set \(\mathcal{S}\). For example, in the case of C-QPIR, the set \(\mathcal{S}\) is given as the set \(\mathcal{C}\). Then, we can choose \(\tilde{\mathcal{S}}\) as the set \(\mathcal{C}\) or \(\mathcal{Q}\). The case with \(\tilde{\mathcal{S}}=\mathcal{C}\) is called the classical input case, and the case with \(\tilde{\mathcal{S}}=\mathcal{Q}\) is called the superposition input case. Instead, in the case of Q-QPIR, the set \(\mathcal{S}\) is given as the set \(\mathcal{Q}\). Hence, the set \(\tilde{\mathcal{S}}\) is chosen as the same set \(\mathcal{Q}\).
Even when we fix the security criterion and the sets \(\mathcal{S}\) and \(\tilde{\mathcal{S}}\), there still exist three models for the secrecy for a QPIR protocol \(\Phi\). The first one is the honest-server model, which assumes that the servers are honest. We say that a QPIR protocol \(\Phi\) satisfies the secrecy in the final-state criterion under the honest-server model with input states \(\tilde{\mathcal{S}}\) when the following condition holds. When the user and the servers are honest, the server has no information for \(K\) in the final state, i.e., the relation
\[\rho_{S,F}(\rho_{1},\ldots,\rho_{\mathsf{f}},k)=\rho_{S,F}(\rho_{1},\ldots, \rho_{\mathsf{f}},k^{\prime}) \tag{1}\]
holds for any \(k,k^{\prime}\in[\mathsf{f}]\) and \((\rho_{1},\ldots,\rho_{\mathsf{f}})\in\tilde{\mathcal{S}}^{\mathsf{f}}\), where \(\rho_{S,F}(\rho_{1},\ldots,\rho_{\mathsf{f}},K)\) is the final state on the server depending on the variable \(K\). In the condition (1), the states \(\rho_{k}\) are chosen from \(\tilde{\mathcal{S}}\), not from \(\mathcal{S}\). We say that a QPIR protocol \(\Phi\) satisfies the secrecy in the all-round criterion under the honest-server model with input states \(\tilde{\mathcal{S}}\) when the following condition holds: the server has no information about \(K\) in all rounds, i.e., the relation
\[\rho_{S,j}(\rho_{1},\ldots,\rho_{\mathsf{f}},k)=\rho_{S,j}(\rho_{1},\ldots, \rho_{\mathsf{f}},k^{\prime}) \tag{2}\]
holds for any \(k,k^{\prime}\in[\mathsf{f}]\) and \((\rho_{1},\ldots,\rho_{\mathsf{f}})\in\tilde{\mathcal{S}}^{\mathsf{f}}\), where \(\rho_{S,j}(\rho_{1},\ldots,\rho_{\mathsf{f}},K)\) is the state on the server depending on the variable \(K\) when the server receives the query in the \(j\)-th round. The meaning of the secrecy in the all-round criterion under the honest-server model is the following. Assume that the user and the server are honest. Even when the server stops the protocol at the \(j\)-th round, for any \(j\), the server cannot obtain any information about \(K\).
The second model is called the specious-server model introduced in [46]. When the server applies other operations that deviate from the original protocol, such an operation is called an attack. An attack of the server is called a specious attack when the attack satisfies the following conditions. The server sends the answer at the time specified by the protocol, but the contents of the answer do not follow the protocol. Also, the server does not access the information under the control of the user. In addition, the attack is not revealed to the user under the condition that the user is honest, i.e., there exists the server's operation \(\mathcal{F}_{S,j}\) such that the relation
\[(\mathcal{F}_{S,j}\otimes\iota)\tilde{\rho}_{j}(\rho_{1},\ldots,\rho_{\mathsf{ f}},k)=\rho_{j}(\rho_{1},\ldots,\rho_{\mathsf{f}},k) \tag{3}\]
holds for any \(k\in[\mathsf{f}]\) and \((\rho_{1},\ldots,\rho_{\mathsf{f}})\in\tilde{\mathcal{S}}^{\mathsf{f}}\), where \(\rho_{j}(\rho_{1},\ldots,\rho_{\mathsf{f}},K)\) (\(\tilde{\rho}_{j}(\rho_{1},\ldots,\rho_{\mathsf{f}},K)\)) is the state on the whole system, depending on the variable \(K\), when the user receives the answer in the \(j\)-th round under the assumption that the user is honest and the server is honest (the server makes the attack). Notice that the definition of a specious attack depends on the choice of the set \(\tilde{\mathcal{S}}\). The meaning of (3) is the following. When the user decides to stop the protocol to check whether the server follows the protocol after the user receives the answer in the \(j\)-th round, the user asks the server to submit the evidence that the server follows the protocol. Then, the server sends his system after applying the operation \(\mathcal{F}_{S,j}\). When \(\tilde{\mathcal{S}}\) is chosen to be the set \(\mathcal{Q}\) of pure states, a specious attack coincides with a \(0\)-specious adversary in
the sense of [11, Definition 2.4] because it is sufficient to check the case with even \(t\) in [11, Definition 2.4]. Also, when \(\tilde{\mathcal{S}}\) is chosen to be the set \(\mathcal{C}\), the secrecy in the all-round criterion under the specious server model coincides with the anchored \(0\)-privacy under \(0\)-specious servers [11].
We say that a QPIR protocol \(\Phi\) satisfies the secrecy in the final-state criterion (the all-round criterion) under the specious-server model with input states \(\tilde{\mathcal{S}}\) when the following condition holds. When a server performs a specious attack and the user is honest, the server obtains no information about the user's request \(K\) in all rounds, i.e., the condition (1) (the condition (2)) holds. In fact, the secrecy condition in the final-state criterion is weaker than the secrecy condition in the all-round criterion even under the specious-server model. The secrecy condition in the final-state criterion under the specious-server model is essentially equivalent to the cheat-sensitive secrecy condition considered in [45].
The third model is called the dishonest-server model. We say that a QPIR protocol \(\Phi\) satisfies the secrecy under the dishonest-server model when the following condition holds. When the server applies an attack and the user is honest, the server obtains no information of the user's request \(K\), i.e., the condition (1) holds. In the dishonest-server model, the server is allowed to make any attack under the following condition. The server sends the answer at the time specified by the protocol, but the contents of the answer do not follow the protocol. Also, the server does not access the information under the control of the user. Thus, the server can send any information on each round under this condition. Hence, the ability of the attack does not depend on the set \(\tilde{\mathcal{S}}\). Also, the server can store the state received in any round. Hence, the server can obtain the same information in the final state as the information in the \(j\)-th round.
Further, when the protocol has only one round and we adopt the all-round criterion, there is no difference among the honest-server model, the specious-server model, and the dishonest-server model because all information obtained by the server is reduced to the state on the server when the server received the query in the first round. As a result, the information obtained by the server does not depend on the server's operation, i.e., the server's attack model.
**Remark 1**.: In the papers [8, 11], the security against specious adversaries means the secrecy in the all-round criterion under the specious-server model with input states \(\mathcal{Q}\) for C-QPIR in our definition. Instead, in the paper [11], the anchored specious security means the secrecy in the all-round criterion under the specious-server model with input states \(\mathcal{C}\) for C-QPIR in our definition. The papers [8, 11] did not consider the final-state criterion.
### Outline of results and comparison
#### ii.2.1 Optimality of trivial solution for one-server Q-QPIR
First, we discuss our result for one-server Q-QPIR for the honest-server model without prior entanglement, and its relation to existing results. The result by the reference [8] is summarized as follows. The C-QPIR protocol discussed in [8] is considered as a QPIR protocol over the set \(\mathcal{C}\). The reference [8] showed that the trivial protocol over the set \(\mathcal{C}\) is optimal in the all-round criterion under the specious-server model with input states \(\mathcal{Q}\), i.e., when the secrecy in the all-round criterion is imposed under the specious-server model with input states \(\mathcal{Q}\). Since the set \(\mathcal{C}=\{\lvert j\rangle\}_{j=0}^{d-1}\) is included in the set \(\mathcal{Q}\), a Q-QPIR protocol over the set \(\mathcal{Q}\) works as a QPIR protocol over the set \(\mathcal{C}\). Hence, the result by [8] implies the optimality of the trivial protocol over the set \(\mathcal{Q}\) in the all-round criterion under the specious-server model. In addition, such an impossibility result was extended to the case with prior entanglement by the paper [11].
However, the secrecy in the all-round criterion under the specious-server model is a stronger condition than the secrecy in the final-state criterion under the honest-server model, because the secrecy in the all-round criterion is a stronger condition than the secrecy in the final-state criterion, and the specious-server model allows the server a larger choice of operations than the honest-server model.
To seek further possibility for C-QPIR protocols, in Sections IV.1 and IV.2, inspired by the idea presented in [45], we propose more efficient one-round C-QPIR protocols in the final-state criterion under the honest-server and specious-server models with input states \(\mathcal{C}\) or \(\mathcal{Q}\) whose communication complexities are at most \(4\log\mathfrak{m}\). In addition, the reference [9] proposed a C-QPIR protocol in the all-round criterion under the honest one-server model that has communication complexity \(O(\sqrt{\mathfrak{m}})\). The reference [10] also proposed a C-QPIR protocol with communication complexity \(O(\operatorname{poly}\log\mathfrak{m})\) without prior entanglement and a C-QPIR protocol with communication complexity \(O(\operatorname{log}\mathfrak{m})\) with prior entanglement. In Section IV.3, we show that these two protocols satisfy the secrecy in the all-round criterion under the honest-server model with input states \(\mathcal{C}\). In addition, using a conversion result [11], we show that these two protocols satisfy the secrecy in the all-round criterion under the specious-server model with input states \(\mathcal{C}\).
Hence, we cannot exclude the possibility of more efficient one-server Q-QPIR protocols than the trivial solution in the final-state criterion or under the honest one-server model. Furthermore, while the trivial solution is optimal under the honest-server model of classical PIR [47], its optimality proof uses the communication transcript between the server and the user, which is based on classical communication. Unfortunately, we cannot apply the same technique under the honest one-server model of Q-QPIR because, due to the no-cloning theorem, quantum states cannot be copied. Therefore, we have a
question of whether there exists a Q-QPIR protocol over pure states that satisfies the secrecy in the final-state criterion under the honest-server model, and improves the communication complexity over the trivial protocol.
As its solution, we show that the trivial solution is optimal for one-server Q-QPIR in the final-state criterion under the honest-server model. In Tables 1 and 2, we summarize the comparison of our results with previous results for the one-server case. In our proof, entropic inequalities are the key instruments. Since the pair of the final-state criterion and the honest-server model is the weakest attack model, this result implies that the trivial solution is also optimal for any attack model.
#### ii.2.2 One-server Q-QPIR protocol with prior entanglement
However, the above discussion assumes that there is no prior entanglement shared between the server and the user. Hence, secondly, with prior entanglement between the user and the server, we prove that there exists an efficient Q-QPIR protocol under the honest-server model or in the final-state criterion. To be precise, we propose a method to construct a Q-QPIR protocol of communication complexity \(O\left(f(\mathsf{m})\right)\) with prior entanglement from a C-QPIR protocol of communication complexity \(O\left(f(\mathsf{m})\right)\) with prior entanglement. This method is based on the combination of C-QPIR and quantum teleportation [1]. The proposed Q-QPIR protocol inherits the security of the C-QPIR protocol. With this property, we show three
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{security criterion} & \multirow{2}{*}{server} & \multicolumn{5}{c|}{Optimal communication complexity} \\ \cline{3-6} & & \multicolumn{3}{c|}{without PE} & \multicolumn{3}{c|}{with PE} \\ \cline{3-6} & & \multicolumn{3}{c|}{one-round} & \multicolumn{1}{c|}{multi-round} & \multicolumn{1}{c|}{one-round} & \multicolumn{1}{c|}{multi-round} \\ \hline \multirow{2}{*}{final-state} & \multirow{2}{*}{honest} & \(\mathsf{\Theta}(\mathsf{m})\) [Theorem 1] & \(\mathsf{\Theta}(\mathsf{m})\) [Theorem 1] & \(O(\log\mathsf{m})\)* [Corollary 2] & \(O(\log\mathsf{m})\)* [Corollary 2] \\ \cline{3-6} & & \multirow{2}{*}{specious} & \(\mathsf{\Theta}(\mathsf{m})\) [Theorem 1] & \(\mathsf{\Theta}(\mathsf{m})\) [Theorem 1] & \(O(\log\mathsf{m})\)* [Corollary 2] & \(O(\log\mathsf{m})\)* [Corollary 2] \\ \cline{3-6} & & & \(\mathsf{\Theta}(\mathsf{m})\) [Theorem 1] & \(\mathsf{\Theta}(\mathsf{m})\) [Theorem 1] & \(O(\log\mathsf{m})\)* [Corollary 2] & \(O(\log\mathsf{m})\)* [Corollary 2] \\ \hline \multirow{2}{*}{all-round} & \multirow{2}{*}{honest} & \multirow{2}{*}{\(\mathsf{\Theta}(\mathsf{m})\)} & \(\mathsf{\Theta}(\mathsf{m})\) [Theorem 1] & \(\mathsf{\Theta}(\mathsf{m})\) & \(O(\log\mathsf{m})\)* [Corollary 3] \\ \cline{3-6} & & & \(\mathsf{\Theta}(\mathsf{m})\) implied by [8] & \multirow{2}{*}{implied by [11]} & \(\mathsf{\Theta}(\mathsf{m})\) implied by [11] \\ \cline{3-6} & & & \(\mathsf{\Theta}(\mathsf{m})\) implied by [8] & & \(\mathsf{\Theta}(\mathsf{m})\) implied by [11] \\ \hline \end{tabular} This table employs the same notations as Table 1.
\end{table}
Table 2: Optimal communication complexity of one-server Q-QPIR
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{security criterion} & \multirow{2}{*}{input} & \multirow{2}{*}{server} & \multicolumn{5}{c|}{Optimal communication complexity} \\ \cline{3-6} & & & \multicolumn{2}{c|}{without PE} & \multicolumn{2}{c|}{with PE} \\ \cline{3-6} & & & one-round & multi-round & one-round & multi-round \\ \hline \multirow{4}{*}{final-} & \multirow{2}{*}{classical} & \multirow{2}{*}{nonest} & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* \\ & & & [Section IV.1] & [Section IV.1] & [Section IV.1] & [10]+[Lemma 2] \\ \cline{3-6} & & \multirow{2}{*}{specious} & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* \\ & & & [Section IV.2] & [Section IV.2] & [Section IV.2] & [10]+[Corollary 1] \\ \cline{3-6} & & & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* & \(O(\log\mathsf{m})\)* \\ \cline{3-6} & & & [Section IV.1] & [Section IV.1] & [Section IV.1] & [Section IV.1] \\ \hline \multirow{4}{*}{state} & \multirow{2}{*}{superposition} & \multirow{2}{*}{\(\mathsf{\Theta}(\mathsf{m})\)} & \(\mathsf{\eta}\) & \(?\) & \(?\) & \(?\) \\ \cline{3-6} & & & \(\mathsf{\Theta}(\mathsf{m})\) [8] & & \(\mathsf{\Theta}(\mathsf{m})\) [8] & & \(\mathsf{\Theta}(\mathsf{m})\) [11] \\ \cline{3-6} & & & & \(\mathsf{\Theta}(\mathsf{m})\) [8] & & \(\mathsf{\Theta}(\mathsf{m})\) [11] \\ \hline \multirow{4}{*}{round} & \multirow{2}{*}{superposition} & \multirow{2}{*}{\(\mathsf{\Theta}(\mathsf{m})\)} & \(\mathsf{\eta}\) & \(?\) & \(?\) & \(?\) \\ \cline{3-6} & & & & \(O(\log\mathsf{m})\) [8] & & \(O(\log\mathsf{m})\)* \\ \cline{3-6} & & & & \(\mathsf{\Theta}(\mathsf{m})\) [8] & & \(\mathsf{\Theta}(\mathsf{m})\) [11] \\ \hline \end{tabular}
\end{table}
Table 1: Optimal communication complexity of one-server C-QPIR
types of Q-QPIR protocols of communication complexity \(O(\log\mathsf{m})\) with prior entanglement. The first satisfies the secrecy in the final-state criterion under the honest-server model. The second satisfies the secrecy in the final-state criterion under the specious-server model. The third satisfies the secrecy in the all-round criterion under the honest-server model. Combining this result with the above result, we find that prior entanglement realizes an exponential speedup for one-server Q-QPIR in the final-state criterion or under the honest-server model. Therefore, the obtained results are summarized in Tables 1 and 2 in terms of the communication complexity as a function of \(\mathsf{m}\).
## III Preliminaries
We define \([a:b]=\{a,a+1,\ldots,b\}\) and \([a]=\{1,\ldots,a\}\). The dimension of a quantum system \(X\) is denoted by \(|X|\). The von Neumann entropy is defined as \(H(X)=H(\rho_{X})=-\operatorname{Tr}\rho_{X}\log\rho_{X}\), where \(\rho_{X}\) is the state on the quantum system \(X\).
**Proposition 1**.: _The von Neumann entropy satisfies the following properties. \((a)\)\(H(X)=H(Y)\) if the state on \(X\otimes Y\) is a pure state. \((b)\) The inequality \(H(XY)\leq H(X)+H(Y)\) holds, and the equality holds for product states on \(X\otimes Y\). \((c)\) Entropy does not change by unitary operations. \((d)\)\(H(XY)+H(X)\geq H(Y)\). \((e)\)\(H(\sum_{s}p_{s}\rho_{s})=\sum_{s}p_{s}(H(\rho_{s})-\log p_{s})\) if \(\operatorname{Tr}\rho_{s}\rho_{t}=0\) for any \(s\neq t\)._
The property \((d)\) is proved as follows. Since other properties can be easily shown, we omit their proofs. For example, see the book [48, Sections 3.1 and 8.1]. Let \(Z\) be the reference system in which the state on \(XYZ\) is pure. Then, \(H(XY)+H(X)=H(Z)+H(X)\geq H(XZ)=H(Y)\). Throughout the paper, we use the symbols \((a)\), \((b)\), \((c)\), \((d)\), \((e)\) to denote which property is used, e.g., \(\overset{(a)}{=}\) means that the equality holds from the property \((a)\).
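Properties \((b)\) and \((d)\) can also be checked numerically on random states; the following sketch is only a sanity check of the inequalities used in the proof.

```python
# A quick numerical sanity check of properties (b) and (d) of Proposition 1
# on random two-qubit states (qubit X tensor qubit Y).
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def random_state(dim, seed):
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

for seed in range(5):
    rho_xy = random_state(4, seed).reshape(2, 2, 2, 2)
    rho_x = np.einsum('ikjk->ij', rho_xy)          # partial trace over Y
    rho_y = np.einsum('kikj->ij', rho_xy)          # partial trace over X
    Hxy, Hx, Hy = entropy(rho_xy.reshape(4, 4)), entropy(rho_x), entropy(rho_y)
    assert Hxy <= Hx + Hy + 1e-9                   # property (b), subadditivity
    assert Hy <= Hxy + Hx + 1e-9                   # property (d)
print("properties (b) and (d) hold on all sampled states")
```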
Next, for a TP-CP map \(\Gamma\) from the system \(\mathcal{H}_{X}\) to the system \(\mathcal{H}_{Y}\) and a state \(\rho\) on \(\mathcal{H}_{X}\), we define the transmission information \(I(\rho,\Gamma)\). We choose a purification \(|\psi\rangle\) of \(\rho\) with the environment \(\mathcal{H}_{Z}\). Then, the transmission information \(I(\rho,\Gamma)\) is defined as
\[I(\rho,\Gamma)\coloneqq H(\rho)+H(\Gamma(\rho))-H((\iota_{Z}\otimes\Gamma)(| \psi\rangle\langle\psi|)), \tag{4}\]
where \(\iota_{Z}\) is the identity operation on \(\mathcal{H}_{Z}\). When \(\Gamma\) is the identity operator,
\[I(\rho,\Gamma)=2H(\rho). \tag{5}\]
Throughout this paper, \(\mathbb{C}^{d}\) expresses the \(d\)-dimensional Hilbert space spanned by the orthogonal basis \(\{|s\rangle\}_{s=0}^{d-1}\). For a \(d_{1}\times d_{2}\) matrix
\[\mathsf{M}=\sum_{s=0}^{d_{1}-1}\sum_{t=0}^{d_{2}-1}m_{st}|s\rangle\langle t| \in\mathbb{C}^{d_{1}\times d_{2}}, \tag{6}\]
we define
\[|\mathsf{M}\rangle\!\rangle=\frac{1}{\sqrt{d}}\sum_{s=0}^{d_{1}-1}\sum_{t=0}^{ d_{2}-1}m_{st}|s\rangle|t\rangle\in\mathbb{C}^{d_{1}}\otimes\mathbb{C}^{d_{2}}. \tag{7}\]
For \(\mathsf{A}\in\mathbb{C}^{d_{1}\times d_{2}}\), \(\mathsf{B}\in\mathbb{C}^{d_{1}\times d_{1}}\), and \(\mathsf{C}\in\mathbb{C}^{d_{2}\times d_{2}}\), we have the relation
\[(\mathsf{B}\otimes\mathsf{C}^{\top})|\mathsf{A}\rangle\!\rangle=|\mathsf{BAC}\rangle\!\rangle. \tag{8}\]
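A small numerical check of the identity (8) is sketched below; the prefactor of Eq. (7) is taken here as \(1/\sqrt{d_{1}}\), which is an assumption on the intended normalization, but it cancels on both sides of the identity in any case.

```python
# numpy check of the vectorization identity (8): (B (x) C^T)|A>> = |BAC>>.
import numpy as np

def dket(M):
    """|M>> for a d1 x d2 matrix M, following Eq. (7) (row-major ordering |s>|t>)."""
    d1, d2 = M.shape
    return M.reshape(d1 * d2) / np.sqrt(d1)

rng = np.random.default_rng(0)
d1, d2 = 3, 4
A = rng.normal(size=(d1, d2)) + 1j * rng.normal(size=(d1, d2))
B = rng.normal(size=(d1, d1)) + 1j * rng.normal(size=(d1, d1))
C = rng.normal(size=(d2, d2)) + 1j * rng.normal(size=(d2, d2))

lhs = np.kron(B, C.T) @ dket(A)
rhs = dket(B @ A @ C)
print(np.allclose(lhs, rhs))                  # True
```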
We call a \(d\)-dimensional system \(\mathbb{C}^{d}\) a _qudit_. Define generalized Pauli matrices and the maximally entangled state on qudits as
\[\mathsf{X}_{d}=\sum_{s=0}^{d-1}|s+1\rangle\langle s|, \tag{9}\]
\[\mathsf{Z}_{d}=\sum_{s=0}^{d-1}\omega^{s}|s\rangle\langle s|, \tag{10}\]
\[|\mathsf{I}_{d}\rangle\!\rangle=\frac{1}{\sqrt{d}}\sum_{s=0}^{d-1}|s,s\rangle, \tag{11}\]
where \(\omega=\exp(2\pi\iota/d)\) and \(\iota=\sqrt{-1}\). We define the generalized Bell measurements
\[\mathbf{M}_{\mathsf{X}\mathsf{Z},d}=\{|\mathsf{X}^{a}\mathsf{Z}^{b}\rangle\!\rangle\mid a,b\in[0:d-1]\}. \tag{12}\]
If there is no confusion, we denote \(\mathsf{X}_{d},\mathsf{Z}_{d},\mathsf{I}_{d},\mathbf{M}_{\mathsf{X}\mathsf{Z},d}\) by \(\mathsf{X},\mathsf{Z},\mathsf{I},\mathbf{M}_{\mathsf{X}\mathsf{Z}}\). Let \(A,A^{\prime},B,B^{\prime}\) be qudits. If the state on \(A\otimes A^{\prime}\otimes B\otimes B^{\prime}\) is \(|\mathsf{A}\rangle\!\rangle\otimes|\mathsf{B}\rangle\!\rangle\) and the measurement \(\mathbf{M}_{\mathsf{X}\mathsf{Z}}\) is performed on \(A^{\prime}\otimes B^{\prime}\) with outcome \((a,b)\in[0:d-1]^{2}\), the resultant state is
\[|\mathsf{AX}^{a}\mathsf{Z}^{-b}\mathsf{B}^{\top}\rangle\!\rangle\in A\otimes B. \tag{13}\]
We also define the dual basis
\[|u_{j}\rangle\coloneqq\sum_{k=0}^{d-1}\frac{1}{\sqrt{d}}e^{\frac{2\pi\iota kj}{d}}|k\rangle. \tag{14}\]
## IV Protocols for C-QPIR
### One-round C-QPIR of the final-state criterion under honest-server model
This section presents a protocol that satisfies the secrecy in the final-state criterion under the honest-server model with the input states \(\mathcal{C}\). We assume that the \(\ell\)-th message \(X_{\ell}\) is an element of \(\mathbb{Z}_{d_{\ell}}\) for \(\ell\in[\mathsf{f}]\). We define \(d\) as the maximum \(\max_{\ell\in[\mathsf{f}]}d_{\ell}\).
**Protocol 1**.: The following protocol is denoted by \(\Phi_{\mathsf{f},d}\).
**0): Preparation**: The server prepares \(\mathsf{f}+1\) quantum systems \(\mathcal{H}_{0},\mathcal{H}_{1},\ldots,\mathcal{H}_{\mathsf{f}}\), where \(\mathcal{H}_{0}\) is spanned by \(\{|j\rangle\}_{j=0}^{d-1}\), and \(\mathcal{H}_{\ell}\) is spanned by \(\{|j\rangle\}_{j=0}^{d_{\ell}-1}\). When the \(\ell\)-th message is \(X_{\ell}\), the state on the quantum system \(\mathcal{H}_{\ell}\) is set to be \(|X_{\ell}\rangle\). Also, the state on the quantum system \(\mathcal{H}_{0}\) is set to be \(|0\rangle\). The user prepares the system \(\mathcal{K}\) spanned by \(\{|\ell\rangle\}_{\ell=1}^{\mathsf{f}}\).
**1): Query**: The user sets the state on the system \(\mathcal{K}\) to be \(|K\rangle\). The user sends the system \(\mathcal{K}\) to the server.
**2): Answer**: The server applies the measurement based on the computational basis \(\{|j\rangle\}\) on the systems \(\mathcal{H}_{1},\ldots,\mathcal{H}_{\mathsf{f}}\) with the projective state reduction. The server applies the controlled unitary \(U:=\sum_{\ell=1}^{\mathsf{f}}|\ell\rangle\langle\ell|\otimes U_{\ell}\) on \(\mathcal{K}\otimes\mathcal{H}_{0}\otimes\mathcal{H}_{1}\otimes\cdots\otimes\mathcal{H}_{\mathsf{f}}\), where \(U_{\ell}\) acts only on \(\mathcal{H}_{0}\otimes\mathcal{H}_{\ell}\) and is defined as
\[U_{\ell}:=\sum_{j^{\prime}=0}^{d-1}\sum_{j=0}^{d_{\ell}-1}|j+j^{\prime}\rangle\langle j^{\prime}|\otimes|j\rangle\langle j|, \tag{15}\]
where the addition \(j+j^{\prime}\) is taken modulo \(d\). The server sends the system \(\mathcal{K}\otimes\mathcal{H}_{0}\) to the user.
**3): Reconstruction**: The user measures \(\mathcal{H}_{0}\), and obtains the message \(X_{K}\).
The correctness of Protocol 1 is trivial. Its upload and download complexities are \(\mathit{UC}(\Phi_{\mathsf{f},d})=\log\mathsf{f}\) and \(\mathit{DC}(\Phi_{\mathsf{f},d})=\log\mathsf{f}+\log d\). The communication complexity is \(\mathit{CC}(\Phi_{\mathsf{f},d})=2\log\mathsf{f}+\log d\). When \(d\) is fixed, \(\mathit{CC}(\Phi_{\mathsf{f},d})=2\log\mathsf{m}+o(\mathsf{m})\).
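To illustrate the scaling, the following sketch compares the cost \(2\log\mathsf{f}+\log d\) of Protocol 1 with the trivial protocol that downloads all \(\mathsf{m}=\mathsf{f}\log d\) bits; the parameter choices are hypothetical.

```python
# Communication cost of Protocol 1 versus the trivial download-everything
# protocol, in qubits, for f messages of log2(d) bits each (m = f log2 d).
import math

for f, d in [(2**10, 2), (2**20, 2), (2**20, 2**8)]:
    m = f * math.log2(d)
    protocol1 = 2 * math.log2(f) + math.log2(d)
    print(f"f = 2^{int(math.log2(f))}, d = {d}: m = {int(m)} bits, "
          f"trivial = {int(m)} qubits, Protocol 1 = {protocol1:.0f} qubits")
```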
As shown in the following, Protocol 1 satisfies the secrecy in the final-state criterion under the honest-server model with the input states \(\mathcal{C}\). We assume that the server and the user are honest. Since the server follows the protocol, the server has only the \(\mathsf{f}\) systems \(\mathcal{H}_{1},\ldots,\mathcal{H}_{\mathsf{f}}\). The final state on the composite system \(\mathcal{H}_{1}\otimes\ldots\otimes\mathcal{H}_{\mathsf{f}}\) is \(|X_{1}\rangle\cdots|X_{\mathsf{f}}\rangle\), which does not depend on the user's choice \(K\). Hence, the above secrecy holds.
In addition, Protocol 1 satisfies the secrecy in the final-state criterion under the honest-server model even with the input states \(\mathcal{Q}\), as follows. Even when the initial states on \(\mathcal{H}_{1},\ldots,\mathcal{H}_{\mathsf{f}}\) are prepared as general quantum states, due to the measurement in Step 2), the states on \(\mathcal{H}_{1},\ldots,\mathcal{H}_{\mathsf{f}}\) become convex mixtures of the states \(\{|j\rangle\langle j|\}\). Hence, the final state on the composite system \(\mathcal{H}_{1}\otimes\ldots\otimes\mathcal{H}_{\mathsf{f}}\) is the same as the state after the measurement, which does not depend on the user's choice \(K\). Hence, the above secrecy holds.
However, when the server skips the measurement in Step 2) and the input states are chosen as \(\mathcal{Q}\), the secrecy does not hold, as follows. Assume that the server sets the initial state on each \(\mathcal{H}_{\ell}\) to be \(\sum_{j=0}^{d_{\ell}-1}\frac{1}{\sqrt{d_{\ell}}}|j\rangle\). Also, we assume that the server and the user follow Steps 1), 2), 3). Then, the final state on \(\mathcal{H}_{K}\otimes\mathcal{H}_{0}\) is \(\sum_{j=0}^{d_{K}-1}\frac{1}{\sqrt{d_{K}}}|j\rangle|j\rangle\). That is, the final state on \(\mathcal{H}_{K}\) is the completely mixed state. In contrast, the final state on \(\mathcal{H}_{\ell}\) is the same as the initial state for \(\ell\neq K\). Hence, the secrecy condition (1) does not hold.
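This attack is easy to verify numerically. The following sketch checks the claim for a single message register of dimension \(d\) (0-based indices; names are ours): when the measurement is skipped and \(\mathcal{H}_{K}\) starts in the uniform superposition, the controlled addition correlates \(\mathcal{H}_{0}\) with \(\mathcal{H}_{K}\), so the reduced final state on \(\mathcal{H}_{K}\) is completely mixed, whereas an untouched register \(\mathcal{H}_{\ell}\) with \(\ell\neq K\) would stay in its pure initial state.

```python
import numpy as np

d = 4
h0 = np.eye(d)[0]                  # |0> on H_0
hk = np.ones(d) / np.sqrt(d)       # uniform superposition on H_K (measurement skipped)

# U_K : |j'>|j> -> |j' + j mod d>|j>  on H_0 (x) H_K.
U = np.zeros((d * d, d * d))
for jp in range(d):
    for j in range(d):
        U[((jp + j) % d) * d + j, jp * d + j] = 1.0

state = (U @ np.kron(h0, hk)).reshape(d, d)   # amplitudes indexed by (H_0, H_K)
rho_K = state.T @ state.conj()                # partial trace over H_0
print(np.allclose(rho_K, np.eye(d) / d))      # True: H_K is left maximally mixed
```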
Also, Protocol 1 does not have the secrecy with the input states \(\mathcal{C}\) under the specious-server model, as follows. A specious server is allowed to make a measurement if the measurement does not destroy the quantum state. Since the state on the composite system \(\mathcal{K}\otimes\mathcal{H}_{0}\otimes\mathcal{H}_{1}\otimes\cdots\otimes \mathcal{H}_{\mathsf{f}}\) is a computational-basis state, it is not destroyed by the basis measurement. Hence, the server can obtain the user's choice \(K\) without state demolition. This fact shows that the specious-server model is needed in order to forbid such an insecure protocol. However, as shown in Section V, even under the honest-server model, a protocol similar to Protocol 1 does not work when the messages are given as quantum states.
### One-round C-QPIR of the final-state criterion under specious-server model
Protocol 1 presented in the previous subsection does not work under the specious-server model. To resolve this problem, this section presents a protocol that satisfies the secrecy in the final-state criterion under the specious-server model with the input states \(\mathcal{C}\). We assume that each message \(X_{\ell}\) is an element of \(\mathbb{Z}_{d_{\ell}}\). We define \(d\) as the maximum \(\max_{\ell}d_{\ell}\).
**Protocol 2**.: The following protocol is denoted by \(\Phi_{\mathsf{f},d}\).
**0): Preparation**: The server prepares \(\mathsf{f}+2\) quantum systems \(\mathcal{H}^{\prime}_{0},\mathcal{H}^{\prime}_{1},\mathcal{H}_{1},\ldots, \mathcal{H}_{\mathsf{f}}\), where \(\mathcal{H}^{\prime}_{0}\) and \(\mathcal{H}^{\prime}_{1}\) are spanned by \(\{|j\rangle\}_{j=0}^{d-1}\), and \(\mathcal{H}_{\ell}\) is spanned by \(\{|j\rangle\}_{j=0}^{d-1}\). When the \(\ell\)-th message is \(X_{\ell}\), the state on the quantum system \(\mathcal{H}_{\ell}\) is set to be \(|X_{\ell}\rangle\). Also, the states on the quantum systems \(\mathcal{H}^{\prime}_{0}\) and \(\mathcal{H}^{\prime}_{1}\) are set to be \(|0\rangle\). The user prepares the systems \(\mathcal{K}_{0},\mathcal{K}_{1}\) spanned by \(\{|\ell\rangle\}_{\ell=1}^{\mathsf{f}}\).
**1): Query**: The user generates the binary random variable \(A\) and the variable \(B\in[\mathsf{f}]\) subject to the uniform distribution. The user sets the state on the system \(\mathcal{K}_{A}\) to be \(|K\rangle\), and the state on the system \(\mathcal{K}_{A\oplus 1}\) to be \(\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle\), where \(\omega_{\mathsf{f}}:=e^{2\pi i/\mathsf{f}}\). The user sends the systems \(\mathcal{K}_{0},\mathcal{K}_{1}\) to the server.
**2): Answer**: The server applies the controlled unitary \(U:=\sum_{\ell=1}^{\mathsf{f}}|\ell\rangle\langle\ell|\otimes U_{\ell}\) on \(\mathcal{K}_{0}\otimes\mathcal{H}^{\prime}_{0}\otimes\mathcal{H}_{1}\otimes \cdots\otimes\mathcal{H}_{\mathsf{f}}\), where \(U_{\ell}\) acts only on \(\mathcal{H}^{\prime}_{0}\otimes\mathcal{H}_{\ell}\) (respectively \(\mathcal{H}^{\prime}_{1}\otimes\mathcal{H}_{\ell}\) in the second application below) and is defined as
\[U_{\ell}:=\sum_{j^{\prime}=0}^{d-1}\sum_{j=0}^{d-1}|j+j^{\prime}\rangle\langle j^{\prime}|\otimes|j\rangle\langle j|. \tag{16}\]
Then, the server applies the controlled unitary \(U\) on \(\mathcal{K}_{1}\otimes\mathcal{H}^{\prime}_{1}\otimes\mathcal{H}_{1}\otimes \cdots\otimes\mathcal{H}_{\mathsf{f}}\). The server sends the systems \(\mathcal{K}_{0}\otimes\mathcal{H}^{\prime}_{0}\), \(\mathcal{K}_{1}\otimes\mathcal{H}^{\prime}_{1}\) to the user.
**3): Reconstruction**: The user measures \(\mathcal{H}^{\prime}_{A}\), and obtains the message \(X_{\mathit{K}}\).
The correctness of Protocol 2 is trivial. Its upload and download complexities are \(\mathit{UC}(\Phi_{\mathsf{f},d})=2\log\mathsf{f}\) and \(\mathit{DC}(\Phi_{\mathsf{f},d})=2\log\mathsf{f}+2\log d\). The communication complexity is \(\mathit{CC}(\Phi_{\mathsf{f},d})=4\log\mathsf{f}+2\log d\). When \(d\) is fixed, \(\mathit{CC}(\Phi_{\mathsf{f},d})=4\log\mathsf{m}+o(\mathsf{m})\).
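The sketch below illustrates the user's query preparation in Step 1) of Protocol 2. It writes the phase-randomized superposition as \(\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle\) with \(\omega_{\mathsf{f}}=e^{2\pi i/\mathsf{f}}\); this reading of the decoy state, together with the 0-based indexing and the variable names, is our assumption for illustration only.

```python
import numpy as np

f, K = 4, 2                                 # number of messages; user's (0-based) choice
rng = np.random.default_rng(0)
A = int(rng.integers(0, 2))                 # which register carries the real query
B = int(rng.integers(0, f))                 # random phase exponent
omega = np.exp(2j * np.pi / f)

real_query = np.zeros(f, dtype=complex)
real_query[K] = 1.0
decoy = omega ** (B * np.arange(f)) / np.sqrt(f)   # phase-randomized uniform superposition

K0, K1 = (real_query, decoy) if A == 0 else (decoy, real_query)
# Both registers are sent to the server; without knowing A and B the server cannot
# tell which register encodes K, and the relevant state family is non-commutative,
# so a specious server cannot measure it without disturbance.
```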
As shown below, Protocol 2 satisfies the secrecy in the final-state criterion under the specious-server model with the input states \(\mathcal{C}\).
Assume that the server and the user follow the protocol. Then, the resultant state in the server's system
\(\mathcal{H}_{1}\otimes\ldots\otimes\mathcal{H}_{\mathsf{f}}\) is the product state \(|X_{1}\rangle\ldots|X_{\mathsf{f}}\rangle\). The resultant state in \(\mathcal{K}_{A}\otimes\mathcal{H}_{A}^{\prime}\) is \(|K\rangle|X_{K}\rangle\). The resultant state in \(\mathcal{K}_{A\oplus 1}\otimes\mathcal{H}_{A\oplus 1}^{\prime}\) is \(\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle|X_{\ell}\rangle\).
Hence, when \(A=0\), the specious server needs to generate the state \(|K\rangle|X_{K}\rangle\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle|X_{\ell}\rangle\) from the state \(|K\rangle\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle\). Also, when \(A=1\), the specious server needs to generate the state \(\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle|X_{\ell}\rangle|K\rangle|X_{K}\rangle\) from the state \(\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle|K\rangle\).
Since the resultant states \(|K\rangle|X_{K}\rangle\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle|X_{\ell}\rangle\) and \(\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle|X_{\ell}\rangle|K\rangle|X_{K}\rangle\) are unitarily equivalent to the states \(|K\rangle\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle\) and \(\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{B\ell}|\ell\rangle|K\rangle\), it is sufficient to discuss whether the server can get certain information from the state family \(\mathcal{F}:=\big(|k\rangle\frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{b\ell}|\ell\rangle,\ \frac{1}{\sqrt{\mathsf{f}}}\sum_{\ell=1}^{\mathsf{f}}\omega_{\mathsf{f}}^{b\ell}|\ell\rangle|k\rangle\big)_{k,b=1}^{\mathsf{f}}\) without disturbance. However, the Koashi-Imoto theory [49; 50; 51; 52] forbids the server from making any measurement when the states need to be recovered, because the state family \(\mathcal{F}\) is composed of non-commutative states. Therefore, when the server keeps the condition for the specious server, the server cannot obtain any information about \(K\).
However, it is not clear whether adding the measurement in Step 2) guarantees that the protocol satisfies the secrecy in the final-state criterion under the specious-server model with the input states \(\mathcal{Q}\).
### C-QPIR in all-round criterion
In this section we discuss the secrecy in the all-round criterion of the C-QPIR protocol with communication complexity \(O(\operatorname{poly}\log\mathsf{m})\) under the fixed message size \(d=2\) from [10, Section 5], which does not use any prior entanglement, and the C-QPIR protocol with communication complexity \(O(\log\mathsf{m})\) under the fixed message size \(d=2\) from [10, Section 6], which uses \(\Theta(\mathsf{m})\) ebits of prior entanglement. Although these protocols fix the message size \(d\) to be \(2\), they can be considered as protocols whose message sizes are fixed to an arbitrary \(d\) by treating \(\lceil\log_{2}d\rceil\) messages as one message. We first show the secrecy of the protocol from [10, Section 5] under the honest server model.
**Lemma 1**.: _The protocol from [10, Section 5] is unitary-type and satisfies the secrecy in the all-round criterion under the honest server model when the set \(\tilde{\mathcal{S}}\) of possible inputs is \(\mathcal{C}\)._
Proof.: The protocol from [10, Section 5] works for the case \(d=2\). The server's input is thus \((a_{1},\ldots,a_{\mathsf{f}})\) for \(a_{1},\ldots,a_{\mathsf{f}}\in\{0,1\}\). The user's input is an index \(K\in\{1,\ldots,\mathsf{f}\}\).
The main idea is to simulate a classical multi-server PIR protocol with \(s=O(\log\mathsf{m})\) servers that has total communication complexity \(O(\operatorname{poly\,log}\mathsf{m})\). Such protocols are known to exist (see, e.g., [47]) and can be described generically as follows. The user picks a uniform random variable \(G\) from \(\{1,\ldots,\mathsf{g}\}\), computes an \(s\)-tuple of queries \(\{q_{1}(G,K),\ldots,q_{s}(G,K)\}\) from \((G,K)\) by using a function \(q_{t}\), and asks query \(q_{t}(G,K)\) to the \(t\)-th server. Here, for each \(t\in\{1,\ldots,s\}\), the function \(q_{t}\) satisfies the condition that the distribution of query \(q_{t}(G,K)\) is independent of \(K\). Each server \(t\) then sends its answer \(\mathsf{ans}_{t}(q_{t}(G,K))\) to the user, who recovers \(a_{K}\) from \(\{\mathsf{ans}_{1}(q_{1}(G,K)),\ldots,\mathsf{ans}_{s}(q_{s}(G,K))\}\).
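For concreteness, here is the classic two-server XOR scheme written in this generic form. It is not the polylog-communication protocol of [47]; it only makes the roles of the randomness \(G\), the queries \(q_{t}\), and the answers \(\mathsf{ans}_{t}\) concrete, and it has the key property that each query's marginal distribution is independent of \(K\).

```python
import numpy as np

rng = np.random.default_rng(1)
f = 8
a = rng.integers(0, 2, size=f)   # database bits a_1..a_f (0-indexed here)
K = 5                            # desired index

G = rng.integers(0, 2, size=f)   # randomness: a uniformly random subset of [f]
q1 = G                           # query to server 1
q2 = G.copy()
q2[K] ^= 1                       # query to server 2: same subset with position K flipped

ans1 = int(a[q1 == 1].sum() % 2) # each server answers the XOR of its queried bits
ans2 = int(a[q2 == 1].sum() % 2)

print("recovered bit:", ans1 ^ ans2, "expected:", int(a[K]))
# Marginally, q1 and q2 are each a uniform random subset, independent of K.
```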
The protocol from [10, Section 5] simulates this protocol using only one server. The protocol uses \(2s+1\) quantum registers denoted \(Q,Q_{1},\ldots,Q_{s},Ans_{1},\ldots,Ans_{s}\). For each \(t\in\{1,\ldots,s\}\), let us define the following quantum state:
\[|\Phi_{t}\rangle\] \[= \frac{1}{\sqrt{\mathsf{g}}}\sum_{g}|q_{1}(g,K),\cdots,q_{s}(g,K) \rangle_{Q}\,|q_{1}(g,K)\rangle_{Q_{1}}\cdots|q_{s}(g,K)\rangle_{Q_{s}}\] \[\otimes|\mathsf{ans}_{1}(q_{1}(g,K))\rangle_{Ans_{1}}\cdots| \mathsf{ans}_{t-1}(q_{t-1}(g,K))\rangle_{Ans_{t-1}}\] \[\otimes|0\rangle_{Ans_{t}}\cdots|0\rangle_{Ans_{s}}\,.\]
Note that we have in particular
\[|\Phi_{1}\rangle\] \[= \frac{1}{\sqrt{\mathsf{g}}}\sum_{g}|q_{1}(g,K),\cdots,q_{s}(g,K) \rangle_{Q}\,|q_{1}(g,K)\rangle_{Q_{1}}\cdots|q_{s}(g,K)\rangle_{Q_{s}}\] \[\otimes|0\rangle_{Ans_{1}}\cdots|0\rangle_{Ans_{s}}\,.\]
The protocol consists of the following interaction between the user and the server (some details of the manipulations of the states are omitted since they are irrelevant to the secrecy proof):
1. The user prepares the state \(|\Phi_{1}\rangle\).
2. The user and the server iterate the following for \(t=1\) to \(s\): 2.1. The user sends Registers \(Q_{t},Ans_{t}\) to the server; 2.2. The server applies a controlled unitary, where the controlling system is \(Q_{t}\) and the controlled system is \(Ans_{t}\). Then, the server sends back Registers \(Q_{t},Ans_{t}\) to the user.
3. The user measures the joint system composed of Registers \(Q,Q_{1},\ldots,Q_{s},Ans_{1},\ldots,Ans_{s}\) to obtain the outcome \(a_{K}\) after certain unitary operations.
Since this protocol is unitary-type, the remaining task is to show the secrecy of this protocol in the all-round criterion under the honest server model when the set \(\tilde{\mathcal{S}}\) of possible inputs is \(\mathcal{C}\). Observe that at each iteration there is only a message sent to the server, at Step 2.1. We thus only need to show that for each \(t\), this message does not reveal any information about \(K\). The state of the whole system at the end of Step 2.1 of the \(t\)-th iteration
is \(\left|\Phi_{t}\right\rangle\). The state of the server, obtained by tracing out all registers except \(Q_{t},Ans_{t}\) of \(\left|\Phi_{t}\right\rangle\left\langle\Phi_{t}\right|\) is
\[\frac{1}{\mathfrak{g}}\sum_{g}\left|q_{t}(g,K)\right\rangle_{Q_{t}}\left|0 \right\rangle_{Ans_{t}}\left\langle q_{t}(g,K)\right|_{Q_{t}}\left\langle 0 \right|_{Ans_{t}}. \tag{17}\]
Since the distribution of query \(q_{t}(G,K)\) is independent of \(K\), we conclude that the whole state of the server at the end of Step 2.1 is independent of \(K\), for each \(t\).
Next, we show the secrecy of the protocol from [10, Section 6] under the honest server model (see also Appendix B in [11]).
**Lemma 2**.: _The protocol from [10, Section 6] is unitary-type and satisfies the secrecy in the all-round criterion under the honest server model when the set \(\bar{\mathcal{S}}\) of possible inputs is \(\mathcal{C}\)._
Proof.: The protocol from [10, Section 6] works for the case \(d=2\) and \(\mathfrak{f}=2^{\mathfrak{h}}\), for \(\mathfrak{h}\geq 1\). The server's input is thus \((a_{1},\ldots,a_{\mathfrak{f}})\) for \(a_{1},\ldots,a_{\mathfrak{f}}\in\{0,1\}\). The user's input is an index \(K\in\{1,\ldots,\mathfrak{f}\}\).
The protocol uses \(2\mathfrak{h}+2\) quantum registers denoted \(R_{1},\ldots,R_{\mathfrak{h}},R^{\prime}_{1},\ldots,R^{\prime}_{\mathfrak{h}},Q_{0},Q_{1}\). For each \(p\in\{1,\ldots,\mathfrak{h}\}\), let us define the following quantum state over the two registers \(R_{p},R_{p}^{\prime}\):
\[\left|\Phi_{p}\right\rangle=\frac{1}{\sqrt{2^{2^{\mathfrak{h}-p}}}}\sum_{z\in\{0,1\}^{2^{\mathfrak{h}-p}}}\left|z\right\rangle_{R_{p}}\left|z\right\rangle_{R^{\prime}_{p}}.\]
For any binary string \(z\in\{0,1\}^{s}\) with \(s\) even, we denote \(z[0]\) the first half of \(z\), and \(z[1]\) the second half of \(z\). For any binary strings \(z,z^{\prime}\in\{0,1\}^{s}\), we write \(z\oplus z^{\prime}\in\{0,1\}^{s}\) the string obtained by taking the bitwise parity of \(z\) and \(z^{\prime}\).
The protocol from [10, Section 6] assumes that the server and the user initially share the state
\[\left|\Phi_{1}\right\rangle\otimes\cdots\otimes\left|\Phi_{\mathfrak{h}}\right\rangle\otimes\left|0\right\rangle_{Q_{0}}\left|0\right\rangle_{Q_{1}},\]
where \(R_{1},\ldots,R_{\mathfrak{h}},Q_{0},Q_{1}\) are owned by the server and \(R_{1}^{\prime},\ldots,R_{\mathfrak{h}}^{\prime}\) are owned by the user. The protocol consists of the following interaction between the user and the server (some details of the manipulations of the states are omitted since they are irrelevant to the secrecy proof):
1. For \(p\) from \(1\) to \(\mathfrak{h}\) the server and the user do the following: 1.1. The server applies a unitary \(V_{p}\) (defined in [10, Eq. (27)]) on Registers \(R_{p-1},R_{p}\), \(Q_{0},Q_{1}\) and then sends Registers \(Q_{0},Q_{1}\) to the user; 1.2. If the \(p\)-th bit of its input \(K\) is \(0\), the user applies the Pauli gate \(Z\) on Register \(Q_{0}\). If the \(p\)-th bit of \(K\) is \(1\), the user applies \(Z\) on Register \(Q_{1}\). The user then sends back Registers \(Q_{0},Q_{1}\) to the server. 1.3. The server applies again the unitary \(V_{p}\) on Registers \(R_{p-1},R_{p}\), \(Q_{0},Q_{1}\), and then applies a Hadamard transform on each qubit in Register \(R_{p}\). 1.4. The user applies a Hadamard transform on each qubit in Register \(R_{p}^{\prime}\).
2. The server sends Register \(R_{\mathfrak{h}}\) to the user. The user measures the joint system composed of Registers \(R_{1}^{\prime},\ldots,R_{\mathfrak{h}}^{\prime}\) and Register \(R_{\mathfrak{h}}\), and performs some classical post-processing on the outcome to obtain \(a_{K}\)
Since this protocol is unitary-type, the remaining task is to show the secrecy of this protocol in the all-round criterion under the honest server model when the set \(\bar{\mathcal{S}}\) of possible inputs is \(\mathcal{C}\). Since the initial state does not depend on \(K\), it is sufficient to show that the whole state on Register \(R_{1},\ldots,R_{\mathfrak{h}},Q_{0},Q_{1}\) at the end of Step 1.2 of the \(p\)-th round is independent of \(K\).
Lemma 2 in [10] shows that the state of the whole system at the end of Step 1.3 is
\[\left|\Psi_{p}\right\rangle\otimes\bigotimes_{j=p+1}^{\mathfrak{h}}\left|\Phi_{j}\right\rangle_{(R_{j},R_{j}^{\prime})}\otimes\left|0\right\rangle_{Q_{0}}\left|0\right\rangle_{Q_{1}}\]
with
\[\left|\Psi_{p}\right\rangle=\frac{1}{\sqrt{2^{2^{\mathfrak{h}-1}}\cdots 2^{2^{\mathfrak{h}-p}}}}\sum_{y^{1},\ldots,y^{p}}\bigotimes_{j=1}^{p}\left|y^{j}\right\rangle_{R_{j}}\left|y^{j-1}\left[i_{j}\right]\oplus y^{j}\right\rangle_{R_{j}^{\prime}},\]
where the sum is over all strings \(y^{1}\in\{0,1\}^{2^{\mathfrak{h}-1}},\ldots,y^{p}\in\{0,1\}^{2^{\mathfrak{h}-p}}\) and we use the convention that \(y^{0}\) is the server's input \((a_{1},\ldots,a_{\mathfrak{f}})\).1 Here the server owns Registers \(R_{1},\ldots,R_{\mathfrak{h}},Q_{0},Q_{1}\) while the user owns Registers \(R_{1}^{\prime},\ldots,R_{\mathfrak{h}}^{\prime}\). Observing that tracing out Registers \(R_{1}^{\prime},\ldots,R_{p}^{\prime}\) from \(\left|\Psi_{p}\right\rangle\left\langle\Psi_{p}\right|\) gives the state
Footnote 1: Observe that \(y^{j-1}\) is a binary string of length \(2^{\mathfrak{h}-(j-1)}\), and then \(y^{j-1}\left[i_{j}\right]\) is a binary string of length \(2^{\mathfrak{h}-(j-1)-1}=2^{\mathfrak{h}-j}\). The term \(y^{j-1}\left[i_{j}\right]\oplus y^{j}\) in the definition of \(\left|\Psi_{p}\right\rangle\) is thus well defined.
\[\frac{1}{2^{2^{\mathfrak{h}-1}}\cdots 2^{2^{\mathfrak{h}-p}}}\sum_{y^{1},\ldots,y^{p}}\left|y^{1}\right\rangle_{R_{1}}\cdots\left|y^{p}\right\rangle_{R_{p}}\left\langle y^{1}\right|_{R_{1}}\cdots\left\langle y^{p}\right|_{R_{p}},\]
which is independent of \(K\), we find that the whole state on Register \(R_{1},\ldots,R_{\mathfrak{h}},Q_{0},Q_{1}\) at the end of Step 1.3 of the \(p\)-th round is independent of \(K\), for each \(p\). Since the unitaries applied in Step 1.3 by the server are independent of \(K\), we conclude that the whole state on Register \(R_{1},\ldots,R_{\mathfrak{h}},Q_{0},Q_{1}\) at the end of Step 1.2 of the \(p\)-th round is independent of \(K\).
Finally, we discuss the secrecy under the specious server model. We will rely on the following theorem from [11] for unitary-type QPIR protocols.
**Proposition 2** (Theorem 3.2 in [11]).: _When a unitary-type QPIR protocol satisfies the secrecy in the all-round criterion under the honest server model with the set \(\tilde{\mathcal{S}}=\mathcal{C}\), it satisfies the secrecy in the all-round criterion under the specious server model with the same set \(\tilde{\mathcal{S}}=\mathcal{C}\)._
Therefore, we obtain the following corollary of Lemmas 1 and 2.
**Corollary 1**.: _The protocols from [10, Section 5] and [10, Section 6] satisfy the secrecy in the all-round criterion under the specious server model when the set \(\tilde{\mathcal{S}}\) of possible inputs is \(\mathcal{C}\)._
Therefore, when the message size \(d\) is fixed to a constant, there exists a C-QPIR protocol with communication complexity \(O(\operatorname{poly}\log\mathfrak{m})\) (\(O(\log\mathfrak{m})\)) and without any prior entanglement (with prior entanglement) that satisfies the secrecy in the all-round criterion under the specious server model when the set \(\tilde{\mathcal{S}}\) of possible inputs is \(\mathcal{C}\).
## V Optimality of trivial protocol in final-state criterion for Q-QPIR under honest server model
In this section, we prove that the trivial solution of downloading all messages is optimal for Q-QPIR. In particular, in this section, unlike the references [8, 11], we show the optimality in the final-state criterion under the honest-server model. Since our setting is discussed under the honest-server model, the secrecy in the final-state criterion is required only when the server follows the determined state preparation process and the determined quantum operations. In the formal description of our protocols, we consider that the user and the server apply CPTP maps, but we describe the CPTP maps by the equivalent representation with unitary maps and local quantum memories.
To be precise, we define the \(\mathsf{r}\)-round Q-QPIR protocol as follows. A 2-round protocol is depicted in Figure 2, and the symbols are summarized in Table III. The message states are given as \(\mathsf{f}\) arbitrary states \(\rho_{[\mathsf{f}]}:=\rho_{1}\otimes\cdots\otimes\rho_{\mathsf{f}}\) on \(S^{(0)}=X_{1}\otimes\cdots\otimes X_{\mathsf{f}}\), where each \(\rho_{\ell}\) is purified in \(X_{\ell}\otimes R_{\ell}\). We use the notation \(R_{[\mathsf{f}]}:=R_{1}\otimes\cdots\otimes R_{\mathsf{f}}\). The server contains the system \(S^{(0)}\). The user chooses the index of the targeted message \(K\in[\mathsf{f}]\), i.e., \(\rho_{k}\) is the targeted quantum state when \(K=k\). When \(K=k\), the user prepares the initial state as \(|k\rangle\otimes|0\rangle\in A^{(0)}\otimes T^{(0)}\). Although we consider the model in which the user and the server apply CPTP maps, we describe it by the equivalent representation with the unitary maps and the local quantum memories. A Q-QPIR protocol \(\Phi\) is described by unitary maps \(\mathcal{D}^{(0)},\ldots,\mathcal{D}^{(r)},\mathcal{E}^{(1)},\ldots,\mathcal{ E}^{(r)}\) in the following steps.
1. **Query**: For all \(i\in[\mathsf{r}]\), the user applies a unitary map \(\mathcal{D}^{(i-1)}\) from \(A^{(i-1)}\otimes T^{(i-1)}\) to \(Q^{(i)}\otimes T^{(i)}\), and sends \(Q^{(i)}\) to the server. Here, \(T^{(i)}\) are the user's local quantum systems for describing the CPTP maps applied by the user.
2. **Answer**: For all \(i\in[\mathsf{r}]\), the server applies a unitary map \(\mathcal{E}^{(i)}\) from \(Q^{(i)}\otimes S^{(i-1)}\) to \(A^{(i)}\otimes S^{(i)}\) and sends \(A^{(i)}\) to the user. Here, \(S^{(i)}\) are the server's local quantum systems for describing the CPTP maps applied by the server.
3. **Reconstruction**: The user applies \(\mathcal{D}^{(r)}\) from \(A^{(r)}\otimes T^{(r)}\) to \(Y\otimes E\), and outputs the state on \(Y\) as the protocol output.
The input-output relation \(\Lambda_{\Phi}\) of the protocol \(\Phi\) is written with a CPTP \(\Gamma_{\Phi,k}\) from \(S^{(0)}\) to \(Y\) as
\[\Lambda_{\Phi}(k,\rho_{1},\ldots,\rho_{\mathsf{f}})=\Gamma_{\Phi, k}(\rho_{[\mathsf{f}]})\] \[=\operatorname{Tr}_{S^{(r)},E}\mathcal{D}*\mathcal{E}(\rho_{[\mathsf{f}]}\otimes\mathcal{D}^{(0)}(|k\rangle\langle k|\otimes|0\rangle\langle 0|)),\]
where \(\mathcal{D}*\mathcal{E}=(\mathcal{D}^{(r)}\circ\mathcal{E}^{(r)})\circ\cdots \circ(\mathcal{D}^{(1)}\circ\mathcal{E}^{(1)})\). The QPIR protocol \(\Phi\) should satisfy the following conditions.
* **Correctness**: When \(|\psi_{k}\rangle\langle\psi_{k}|\) denotes a purification of \(\rho_{k}\) with the reference system \(R_{k}\), the correctness is \[\Gamma_{\Phi,k}\otimes\operatorname{id}_{R_{k}}(\rho_{[\mathsf{f}]\setminus \{k\}}\otimes|\psi_{k}\rangle\langle\psi_{k}|)=|\psi_{k}\rangle\langle\psi_{k}|\] (18) for any \(K=k\) and any state \(\rho_{[\mathsf{f}]}\).
* **Secrecy**: When the final state on \(S^{(r)}\otimes R_{[\mathsf{f}]}\) with the target index \(K=k\) is denoted by \(\rho_{S^{(r)}R_{[\mathsf{f}]}}^{k}\), the secrecy is \[\rho_{S^{(r)}R_{[\mathsf{f}]}}^{k}=\rho_{S^{(r)}R_{[\mathsf{f}]}}^{k^{\prime}}\] (19) for any \(k,k^{\prime}\).
The communication complexity of the one-server multi-round Q-QPIR is written as \(\operatorname{CC}(\Phi)=\sum_{i=1}^{\mathsf{r}}\big(\log|Q^{(i)}|+\log|A^{(i)}|\big)\).
**Theorem 1**.: _For any multi-round Q-QPIR protocol \(\Phi\), the communication complexity \(\operatorname{CC}(\Phi)\) is lower bounded by \(\sum_{\ell=1}^{\mathsf{f}}\log|X_{\ell}|\), where \(X_{\ell}\) is the system of the \(\ell\)-th message \(\rho_{\ell}\)._
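For comparison, the trivial protocol mentioned above, in which the server simply transmits every message register and the user keeps the \(K\)-th one, needs no upload and attains this bound with equality,

\[\operatorname{CC}(\Phi_{\mathrm{trivial}})=\sum_{\ell=1}^{\mathsf{f}}\log|X_{\ell}|=\mathsf{m},\]

so Theorem 1 states that, without prior entanglement, downloading all messages cannot be beaten.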
For the proof of Theorem 1, we prepare the following lemmas.
**Lemma 3**.: \(H(A^{(i)})+H(Q^{(i+1)})\geq H(T^{(i+1)})-H(T^{(i)})\)
\begin{table}
\begin{tabular}{|c|c|} \hline Symbol & Definition \\ \hline \(\mathfrak{m}\) & Total size of messages (states) \\ \hline \(\mathsf{f}\) & Number of messages (states) \\ \hline \(\mathsf{r}\) & Number of rounds in multi-round models \\ \hline \end{tabular}
\end{table}
Table III: Definition of symbols
Proof.: Lemma 3 is shown by the relation
\[H(A^{(i)})+H(T^{(i)})+H(Q^{(i+1)})\] \[\stackrel{{(b)}}{{\geq}}H(A^{(i)}T^{(i)})+H(Q^{(i+1)})\] \[\stackrel{{(c)}}{{=}}H(Q^{(i+1)}T^{(i+1)})+H(Q^{(i+1)})\] \[\stackrel{{(d)}}{{\geq}}H(T^{(i+1)}).\]
Here, \((b)\), \((c)\), and \((d)\) express the respective properties presented in Proposition 1.
**Lemma 4**.: _The relation \(H(R_{[\mathsf{f}]}S^{(r)})\geq\sum_{\ell=1}^{\mathsf{f}}H(R_{\ell})\) holds._
Proof.: Given the user's input \(k\), Correctness (18) guarantees that the final state on \(R_{k}\otimes Y\) is a pure state, and therefore, \(R_{k}\) is independent of any system except for \(Y\). Thus, \(R_{k}\) is independent of \(R_{[\mathsf{f}]\setminus\{k\}}S^{(r)}\). The secrecy condition (19) guarantees that the final state on \(R_{[\mathsf{f}]}\otimes S^{(r)}\) does not depend on \(k\). Hence, \(R_{1},\ldots,R_{\mathsf{f}}\), and \(S^{(r)}\) are independent of each other. Therefore, we have
\[H(R_{[\mathsf{f}]}S^{(r)})=H(S^{(r)})+\sum_{\ell=1}^{\mathsf{f}}H(R_{\ell})\geq\sum_{\ell=1}^{\mathsf{f}}H(R_{\ell}). \tag{20}\]
Proof of Theorem 1.: We choose the initial state on \(R_{\ell}\otimes X_{\ell}\) to be the maximally entangled state for \(\ell=1,\ldots,\mathsf{f}\). From Lemmas 3 and 4, we derive the following inequalities:
\[\operatorname{CC}(\Phi)\geq\sum_{i=1}^{\mathsf{r}}\left(H(A^{(i)})+H(Q^{(i)})\right)\] \[=H(A^{(r)})+H(Q^{(1)})+\sum_{i=1}^{\mathsf{r}-1}\left(H(A^{(i)})+H(Q^{(i+1)})\right)\] \[\geq H(A^{(r)})+H(Q^{(1)})+H(T^{(r)})-H(T^{(1)}) \tag{21}\] \[=H(A^{(r)})+H(T^{(r)}) \tag{22}\] \[\stackrel{{(b)}}{{\geq}}H(A^{(r)}T^{(r)})\stackrel{{(a)}}{{=}}H(R_{[\mathsf{f}]}S^{(r)})\] \[\geq\sum_{\ell=1}^{\mathsf{f}}H(R_{\ell})=\sum_{\ell=1}^{\mathsf{f}}\log|X_{\ell}|, \tag{23}\]
where \((a)\) and \((b)\) express the respective properties presented in Proposition 1. In addition, (21) is obtained by applying Lemma 3 for all \(i=1,\ldots,\mathsf{r}-1\). The step (22) follows from \(H(Q^{(1)})=H(T^{(1)})\), which holds due to the property \((a)\) in Proposition 1 because the state on \(Q^{(1)}T^{(1)}\) is a pure state, as the state on \(A^{(0)}T^{(0)}\) is a pure state. The step (23) follows from Lemma 4.
## VI Q-QPIR protocol with prior entanglement under honest-server model
In the previous section, we proved that the trivial solution is optimal even in the final-state criterion under the honest one-server model of Q-QPIR. In this section, we construct a Q-QPIR protocol whose communication complexity is lower than that of the trivial solution, under various secrecy models, when we allow shared entanglement between the user and the server.
Let \(\mathsf{m}=\sum_{\ell=1}^{\mathsf{f}}\log|X_{\ell}|\) be the size of all messages. To measure the amount of the prior entanglement, we count sharing one copy of \(|\mathsf{I}_{2}\rangle\!\rangle=(1/\sqrt{2})(|00\rangle+|11\rangle)\) as an _ebit_.
Figure 2: 2-round QPIR protocol.
Accordingly, we count sharing the state \(|\mathsf{I}_{d}\rangle\!\rangle\in\mathbb{C}^{d}\otimes\mathbb{C}^{d}\) as \(\log d\) ebits.
**Theorem 2**.: _Suppose there exists a C-QPIR protocol under a certain secrecy model with communication complexity \(f(d_{1},\ldots,d_{\mathsf{f}})\) when \(\mathsf{g}(d_{1},\ldots,d_{\mathsf{f}})\) ebits of prior entanglement are shared between the user and the server. Then, there exists a Q-QPIR protocol under the same secrecy model with communication complexity \(f(d_{1}^{2},\ldots,d_{\mathsf{f}}^{2})\) when \(\mathsf{m}+\mathsf{g}(d_{1},\ldots,d_{\mathsf{f}})\) ebits of prior entanglement are shared between the user and the server._
The protocol satisfying Theorem 2 is a simple combination of quantum teleportation [1] and any C-QPIR protocol. For the description of the protocol, we use the generalized Pauli operators and the maximally entangled state for \(d\)-dimensional systems defined in (11). Hence, the type of guaranteed secrecy in the original C-QPIR protocol is inherited by the converted QPIR protocol. We construct the Q-QPIR protocol satisfying Theorem 2 as follows.
**Protocol 3**.: Let \(\Phi_{\mathrm{cl}}\) be a C-QPIR protocol and \(d_{1},\ldots,d_{\mathsf{f}}\) be the size of the \(\mathsf{f}\) classical messages. From this protocol, we construct a Q-QPIR protocol as follows.
Let \(X_{1},\ldots,X_{\mathsf{f}}\) be the quantum systems with dimensions \(d_{1},\ldots,d_{\mathsf{f}}\), respectively, and \(\rho_{1},\ldots,\rho_{\mathsf{f}}\) be the quantum message states on systems \(X_{1},\ldots,X_{\mathsf{f}}\). The user and the server share the maximally entangled states \(|\mathsf{I}_{d_{\ell}}\rangle\!\rangle\), defined in (11), on \(Y_{\ell}\otimes Y_{\ell}^{\prime}\) for all \(\ell\in[\mathsf{f}]\), where \(Y_{[\mathsf{f}]}\) and \(Y_{[\mathsf{f}]}^{\prime}\) are possessed by the user and the server, respectively.
The user and the server perform the following steps.
1. **Preparation**: For all \(\ell\in[\mathsf{f}]\), the server performs the generalized Bell measurement \(\mathbf{M}_{\mathsf{X}\mathsf{Z},d_{\ell}}\), defined in (12), on \(X_{\ell}\otimes Y_{\ell}^{\prime}\), where the measurement outcome is written as \(m_{\ell}=(a_{\ell},b_{\ell})\in[0:d_{\ell}-1]^{2}\).
2. **Use of C-QPIR protocol**: The user and the server perform the C-QPIR protocol \(\Phi_{\mathrm{cl}}\) to retrieve \(m_{k}=(a_{k},b_{k})\).
3. **Reconstruction**: The user recovers the \(k\)-th message \(\rho_{k}\) by applying \(\mathsf{X}_{d_{k}}^{a_{k}}\mathsf{Z}_{d_{k}}^{b_{k}}\) on \(Y_{k}\).
The correctness of the protocol is guaranteed by the correctness of the teleportation protocol and the C-QPIR protocol \(\Phi_{\mathrm{cl}}\). When the \(\ell\)-th message state is prepared as \(\rho_{\ell}\) and its purification \(|\phi_{\ell}\rangle\) is denoted with the reference system \(R_{\ell}\), after Step 1, the state on \(R_{\ell}\otimes Y_{\ell}\) is
\[(\mathsf{I}\otimes\mathsf{X}_{d_{\ell}}^{a_{\ell}}\mathsf{Z}_{d_{\ell}}^{-b_{\ell}})|\phi_{\ell}\rangle \tag{24}\]
for all \(\ell\in[\mathsf{f}]\). Thus after Step 3, the targeted state \(|\phi_{k}\rangle\) is recovered in \(R_{k}\otimes Y_{k}\).
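The qudit-teleportation step underlying Protocol 3 can be checked numerically. The sketch below uses one consistent convention for the generalized Bell basis and the Pauli correction, which may differ from the conventions fixed in (11) and (12) by signs or operator ordering; it only verifies that, for every measurement outcome \((a,b)\), applying \(\mathsf{X}^{a}\mathsf{Z}^{b}\) on the user's half recovers the message state.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)            # X|j> = |j+1 mod d>
Z = np.diag(omega ** np.arange(d))           # Z|j> = omega^j |j>
mp = np.linalg.matrix_power

phi_plus = np.eye(d).reshape(d * d) / np.sqrt(d)   # |I_d>> = (1/sqrt(d)) sum_j |j>|j>

rng = np.random.default_rng(0)
msg = rng.normal(size=d) + 1j * rng.normal(size=d)
msg /= np.linalg.norm(msg)                   # message state on the server's system X_l

# Global pure state on X_l (server) (x) Y (user) (x) Y' (server).
state = np.kron(msg, phi_plus).reshape(d, d, d)

def bell(a, b):
    # Generalized Bell basis on X_l (x) Y': |B_ab> = (X^a Z^b (x) I)|I_d>>.
    return (np.kron(mp(X, a) @ mp(Z, b), np.eye(d)) @ phi_plus).reshape(d, d)

for a in range(d):
    for b in range(d):
        B = bell(a, b)
        user = np.einsum('mj,myj->y', B.conj(), state)   # project X_l, Y' onto |B_ab>
        prob = np.linalg.norm(user) ** 2                 # each outcome has probability 1/d^2
        corrected = mp(X, a) @ mp(Z, b) @ (user / np.linalg.norm(user))
        assert np.isclose(prob, 1 / d ** 2)
        assert np.isclose(abs(np.vdot(msg, corrected)) ** 2, 1.0)
print("all d^2 Bell outcomes are corrected back to the message state by X^a Z^b")
```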
To analyze the secrecy of Protocol 3, note that only Step 2 has the communication between the user and the server. Thus the secrecy of Protocol 3 is guaranteed by the secrecy of the underlying protocol \(\Phi_{\mathrm{cl}}\).
Protocol 1 (Protocol 2) is a one-round C-QPIR protocol in the final-state criterion under the honest-server model (the specious-server model) with input states \(\mathcal{C}\) with communication complexity \(2\log\mathsf{f}+\log d\) (\(4\log\mathsf{f}+2\log d\)). Therefore, the combination of Protocols 1 and 3 and the combination of Protocols 2 and 3 yield the following corollary.
**Corollary 2**.: _There exists a Q-QPIR protocol with communication complexity \(2\log\mathsf{f}+\log d^{2}=2\log\mathsf{f}d\) (\(4\log\mathsf{f}+2\log d^{2}=4\log\mathsf{f}d\)) and prior entanglement of \(\mathsf{m}\) ebits that satisfies the secrecy in the final-state criterion under the honest-server model (the specious-server model). When \(d\) is a constant, the communication complexity is \(2\log\mathsf{m}+o(\mathsf{m})\) (\(4\log\mathsf{m}+o(\mathsf{m})\))._
Proof.: The case under the honest-server model is trivial. Hence, we show the desired statement under the specious-server model.
Assume that the server makes a specious attack. The user's state at the end of Step 2) of Protocol 3 is the pair of the entanglement half \(\sigma_{1}\) and the state \(\sigma_{2}\) transmitted at Step 2) of Protocol 2. Due to the specious condition, the state \(\sigma_{1}\) needs to be one of the states \(\{\mathsf{X}^{a}\mathsf{Z}^{b}\rho_{K}(\mathsf{X}^{a}\mathsf{Z}^{b})^{\dagger}\}_{(a,b)\in[0:d-1]^{2}}\) with equal probability. That is, using the random variable \((a,b)\in[0:d-1]^{2}\) under the uniform distribution, the state \(\sigma_{1}\) is written as \(\mathsf{X}^{a}\mathsf{Z}^{b}\rho_{K}(\mathsf{X}^{a}\mathsf{Z}^{b})^{\dagger}\). Hence, the state \(\sigma_{2}\) needs to be decided according to the random variable \((a,b)\) in the same way as in the honest case. That is, the state \(\sigma_{2}\) satisfies the condition for the state transmitted by a specious server of Protocol 2 at Step 2). Since Protocol 2 satisfies the secrecy under the final-state criterion under the specious-server model with input states \(\mathcal{C}\), the specious server obtains no information in the final state. That is, the combined Q-QPIR protocol with prior entanglement satisfies the secrecy under the final-state criterion under the specious-server model.
Combining Theorem 2 and Corollary 1, we obtain the following corollary.
**Corollary 3**.: _There exists a Q-QPIR protocol with communication complexity \(O(\log\mathsf{m})\) and prior entanglement of \(\Theta(\mathsf{m})\) ebits that satisfies the secrecy in the all-round criterion under the honest-server model when the message size \(d\) is fixed to a constant._
One property of Protocol 3 is that all other states in the server are destroyed at Step 1. This is a disadvantage for the server but an advantage for the user since the user can retrieve other states \(\rho_{\ell}\) if the user could retrieve classical information \(m_{\ell}\in[0:d_{\ell}-1]^{2}\) corresponding to the state \(\rho_{\ell}\).
## VII Conclusion
We have shown an exponential gap for the communication complexity of one-server Q-QPIR in the final-state criterion or under the honest-server model between the existence and the non-existence of prior entanglement.
For this aim, as the first step, we have proposed an efficient one-server one-round C-QPIR protocol in the final-state criterion. Also, we have shown that the protocols proposed in [10] satisfy the secrecy in the all-round criterion under the honest server model. Then, as the second step, we have proved that the trivial solution of downloading all messages is optimal even in the final-state criterion for honest one-server Q-QPIR, which is a similar result to that of classical PIR but different from C-QPIR. As the third step, we have developed a conversion from any C-QPIR protocol to a Q-QPIR protocol, which yields an efficient Q-QPIR protocol with prior entanglement from a C-QPIR protocol. The proposed protocols exhibit an exponential improvement over the Q-QPIR's trivial solution.
In fact, Protocols 1 and 2 work as one-server one-round C-QPIR protocols in the final-state criterion under the honest-server model or the specious-server model. However, Theorem 1 shows that no analogue of Protocol 1 or 2 works for a Q-QPIR protocol under similar settings without prior entanglement. This impossibility is caused by the no-cloning property of the quantum system, i.e., the property that the noiseless channel has no information leakage to the third party, because the proof of Theorem 1 relies on the fact that noiseless quantum communication ensures that the entropy of the final state on the third party is equal to the entropy of the final state on the composite system comprising the output system and the reference system. This impossibility is one of the reasons for our obtained exponential gap.
The above exponential gap has been established under three problem settings. The first and the second are the final-state criterion under the honest-server model and under the specious-server model. The third is the all-round criterion under the honest-server model. In contrast, the other problem settings do not admit such an exponential improvement by using prior entanglement. This exponential improvement is much larger than the improvement achieved through the use of dense coding [2], and it can be considered as a useful application of prior entanglement. It is an interesting open problem to find similar exponential improvements by using prior entanglement.
## Acknowledgement
SS was supported by JSPS Grant-in-Aid for JSPS Fellows No. JP20J11484. FLG was partially supported by JSPS KAKENHI grants Nos. JP20H04139 and JP21H04879. MH was supported in part by the National Natural Science Foundation of China (Grants No. 62171212) and Guangdong Provincial Key Laboratory (Grant No. 2019B121203002).
|
2302.08957 | Like a Good Nearest Neighbor: Practical Content Moderation and Text
Classification | Few-shot text classification systems have impressive capabilities but are
infeasible to deploy and use reliably due to their dependence on prompting and
billion-parameter language models. SetFit (Tunstall et al., 2022) is a recent,
practical approach that fine-tunes a Sentence Transformer under a contrastive
learning paradigm and achieves similar results to more unwieldy systems.
Inexpensive text classification is important for addressing the problem of
domain drift in all classification tasks, and especially in detecting harmful
content, which plagues social media platforms. Here, we propose Like a Good
Nearest Neighbor (LaGoNN), a modification to SetFit that introduces no
learnable parameters but alters input text with information from its nearest
neighbor, for example, the label and text, in the training data, making novel
data appear similar to an instance on which the model was optimized. LaGoNN is
effective at flagging undesirable content and text classification, and improves
the performance of SetFit. To demonstrate the value of LaGoNN, we conduct a
thorough study of text classification systems in the context of content
moderation under four label distributions, and in general and multilingual
classification settings. | Luke Bates, Iryna Gurevych | 2023-02-17T15:43:29Z | http://arxiv.org/abs/2302.08957v3 | # Like a Good Nearest Neighbor:
###### Abstract
Modern text classification systems have impressive capabilities but are infeasible to deploy and use reliably due to their dependence on prompting and billion-parameter language models. SetFit (Tunstall et al., 2022) is a recent, practical approach that fine-tunes a Sentence Transformer under a contrastive learning paradigm and achieves similar results to more unwieldy systems. Text classification is important for addressing the problem of domain drift in detecting harmful content, which plagues all social media platforms. Here, we propose Like a Good Nearest Neighbor (LAGONN), an inexpensive modification to SetFit that requires no additional parameters or hyperparameters but modifies input with information about its nearest neighbor, for example, the label and text, in the training data, making novel data appear similar to an instance on which the model was optimized. LaGoNN is effective at the task of detecting harmful content and generally improves SetFit's performance. To demonstrate LaGoNN's value, we conduct a thorough study of text classification systems in the context of content moderation under four label distributions.1
Footnote 1: Code and data: [https://github.com/UKPLab/lagonn](https://github.com/UKPLab/lagonn)
## 1 Introduction
Text classification is the most important tool for NLP practitioners, and there has been substantial progress in advancing the state-of-the-art, especially with the advent of large, pretrained language models (PLM) (Devlin et al., 2019). Modern research focuses on in-context learning (Brown et al., 2020), pattern exploiting training (Schick and Schutze, 2021, 2022), or parameter efficient fine-tuning (Liu et al., 2022). State-of-the-art methods have achieved impressive results on the SuperGLUE (Wang et al., 2019) and RAFT (Alex et al., 2021) few-shot benchmarks. However, they are difficult to use because of their reliance on billion-parameter PLMs and prompt engineering. Constructing prompts is not trivial and may require domain expertise.
One exception to these cumbersome systems is SetFit. SetFit does not rely on prompting or billion-parameter PLMs, and instead fine-tunes a pretrained Sentence Transformer (ST) (Reimers and Gurevych, 2019) under a contrastive learning paradigm. SetFit has comparable performance to more unwieldy systems while being one to two orders of magnitude faster to train and run inference.
An important application of text classification is aiding or automating content moderation, which is the task of determining the appropriateness of user-generated content on the Internet (Roberts, 2017). From fake news to toxic comments to hate speech, it is difficult to browse social media without being exposed to potentially dangerous posts that may have an effect on our ability to reason (Ecker et al., 2022). Misinformation spreads at alarming
Figure 1: We embed training data, retrieve the text, gold label, and distance for each instance from its second nearest neighbor (\(k\)=2) and modify the original text with this information. Then we embed the modified training data and train a classifier. During inference, the NN from the training data is selected (\(k\)=1), the original text is modified with the text, gold label, and distance from the NN, and the classifier is called.
rates (Vosoughi et al., 2018), and an ML system should be able to quickly aid human moderators. While there is work in NLP with this goal (Markov et al., 2022; Shido et al., 2022; Ye et al., 2023), a general, practical and open-sourced method that is effective across multiple domains remains an open challenge. Novel fake news topics or racial slurs emerge and change constantly. Retraining of ML-based systems is required to adapt to this concept drift, but this is expensive, not only in terms of computation, but also in terms of the human effort needed to collect and label data.
SetFit's performance, speed, and low cost would make it ideal for effective content moderation; however, this type of text classification poses a challenge for even state-of-the-art approaches. For example, detecting hate speech on Twitter (Basile et al., 2019), a subtask on the RAFT few-shot benchmark, appears to be the most difficult dataset; at time of writing, it is the only task where the human baseline has not been surpassed, yet SetFit is among the top ten most performant systems.2
Footnote 2: [https://huggingface.co/spaces/ought/raft-leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard) (see “Tweet Eval Hate”).
Here, we propose a modification to SetFit, called Like a Good Nearest Neighbor (LaGoNN). LaGoNN introduces no parameters or hyperparameters and instead modifies input text by retrieving information about the nearest neighbor (NN) seen during optimization (see Figure 1). Specifically, we append the label, distance, and text of the NN in the training data to a new instance and encode this modified version with an ST. By making input data appear more similar to instances seen during training, we inexpensively exploit the ST's pretrained or fine-tuned knowledge when considering a novel example. Our method can also be applied to the linear probing of an ST, requiring no expensive fine-tuning of the large embedding model. Finally, we propose a simple alteration to the SetFit training procedure, where we fine-tune the ST on a subset of the training data. This results in a more efficient and performant text classifier that can be used with LaGoNN. We summarize our contributions as follows:
1. We propose LaGoNN, an inexpensive modification to SetFit- or ST-based text classification.
2. We suggest an alternative training procedure to the standard fine-tuning of SetFit, that can be used with or without LaGoNN, and results in a cheaper system with similar performance to the more expensive SetFit.
3. We perform an extensive study of LaGoNN, SetFit, and standard transformer fine-tuning in the context of content moderation under different label distributions.
## 2 Related Work
There is not much work on using sentence embeddings as features for classification despite the pioneering work being roughly five years old (Perone et al., 2018). STs are pretrained with the objective of maximizing the distance between semantically distinct text and minimizing the distance between text that is semantically similar in feature space. They are composed of a Siamese and triplet architecture that encodes text into dense vectors which can be used as features for ML. STs were first used to encode text for classification by Piao (2021), however, the authors relied on pretrained representations.
SetFit uses a contrastive learning paradigm from computer vision (Koch et al., 2015) to fine-tune STs. The embedding model is fine-tuned with a distance-based loss function, like cosine similarity, such that examples belonging to different labels are separated in feature space. This approach can relatively easily and quickly train a strong, few-shot text classifier, transforming the ST from a sentence encoder to a topic encoder.
Most related to LaGoNN is work done by Xu et al. (2021), who showed that retrieving and concatenating text from training data and external sources, such as ConceptNet (Speer et al., 2017) and the Wikitionary3 definition, can be viewed as a type of external attention that does not modify the architecture of the Transformer in question answering. Liu et al. (2022) used PLMs, including STs, and \(k\)-NN lookup to prepend examples that are similar to a GPT-3 query sample to aid in prompt engineering for in-context learning. Wang et al. (2022) demonstrated that prepending and appending training data can benefit PLMs in the tasks of summarization, language modelling, machine translation, and question answering, using BM25 as their retrieval model for speed (Manning et al., 2008; Robertson and Zaragoza, 2009).
Footnote 3: [https://www.wiktionary.org/](https://www.wiktionary.org/)
We alter the SetFit training procedure by using fewer examples to adapt the embedding model for
many-shot learning. LaGoNN decorates input text with its nearest neighbor's gold label, Euclidean distance, and text from the training data to exploit the ST's optimized representations. Compared to retrieval-based methods, LaGoNN uses the same model for both retrieval and encoding, which can be fine-tuned via SetFit. We only retrieve information from the training data for text classification.
## 3 Like a Good Nearest Neighbor
Xu et al. (2021) formulate a type of external attention, where textual information is retrieved from multiple sources and added to text input to give the model stronger reasoning ability without altering the internal architecture. Inspired by this approach, LaGoNN exploits pretrained and fine-tuned knowledge through external attention, but the information we retrieve comes only from data used during optimization. We consider an embedding function, \(f\), that is called on both training and test data, \(f(X_{train})\) and \(f(X_{test})\). Considering its success and speed on realistic, few-shot data and our goal of practical content moderation, we choose an ST that can be fine-tuned with SetFit as our
\begin{table}
\begin{tabular}{c c}
**Training Data** & **Test Data** \\ “I love this.” [positive 0.0] (0) & “So good!” [?] (?) \\ “This is great!” [positive 0.5] (0) & “Just terribel” [?] (?) \\ “I hate this.” [negative 0.7] (1) & “Never again.” [?] (?) \\ “This is awful!” [negative 1.2] (1) & “This rocks!” [?] (?) \\ \end{tabular}
\end{table}
Table 1: Toy training and test data and different LaGoNN configurations considering the first training example. Train and Test Modified are altered instances that are input into the final embedding model for training and inference, respectively. The input format is “original text [SEP] [NN gold label distance] NN instance text”. Input text is in quotation marks, the NN’s gold label and distance from the training data are in square brackets, and the integer label is in parenthesis. We present real examples of LaGoNN BOTH modified text in Appendix A.4.
Figure 2: LaGoNN LABEL uses an ST to encode training data, performs NN lookup, appends the second NN’s (\(k\)=2) gold label and distance, and optionally SetFit to fine-tune the embedding model. We then embed this new instance and train a classifier. During inference, we use the embedding model to modify the test data with its NN’s gold label and distance from the training data (\(k\)=1), compute the final representation, and call the classifier. Input text is in quotation marks, the NN’s gold label and distance are in brackets, and the integer label is in parenthesis.
embedding function.
**Encoding training data and nearest neighbors** LaGoNN first uses a pretrained Sentence Transformer to embed training text in feature space, \(f(X_{train})\). We perform NN lookup with scikit-learn (Buitinck et al., 2013) on the resulting embeddings and query the second closest NN (\(k\)=2). We do not use the first NN because it is the example itself.
**Nearest neighbor information** We extract text from the second nearest neighbor and use it to decorate the original example. We experimented with different text that LaGoNN could use. The first configuration we consider is the gold label and Euclidean distance of the NN, which we call LABEL. We then considered the gold label, distance, and the text of the NN, which we refer to as TEXT. Finally, we tried the same format as TEXT but for all possible labels, which we call BOTH (see Table 1 and Figure 2).4 Information from the second NN is appended to the text following a separator token to indicate this instance is composed of multiple sequences. While the BOTH and TEXT configurations are arguably the most interesting, we find LABEL to result in the most performant version of LaGoNN, and this is the version about which we report results.
Footnote 4: LaGoNN requires a mapping from the label to the text the label represents, for example, \(0\) – positive and \(1\) – negative.
**Training** LaGoNN encodes the modified training data and optionally fine-tunes the embedding model via SetFit, \(f(X_{trainmod})\). After fine-tuning, we train a classifier \(CLF(f(X_{trainmod}))\), like logistic regression.
**Inference** LaGoNN uses information from the nearest neighbor in the training data to modify input text. We compute the embeddings on the test data, \(f(X_{test})\), and query the NN lookup, selecting the NN (\(k\)=1) in the training data and extracting information from the training text. LaGoNN then decorates the input instance with information from the NN in the training data. Finally, we encode the modified data with the embedding model and call the classifier, \(CLF(f(X_{testmod}))\).
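A minimal end-to-end sketch of this pipeline (LABEL configuration) is shown below. It assumes the paraphrase-mpnet-base-v2 checkpoint used in the experiments (Section 4.2), uses the "[SEP]" template from Table 1 as we read it, and omits the optional SetFit fine-tuning of the encoder, so with the frozen pretrained encoder it corresponds most closely to the LaGoNN\({}_{cheap}\) variant; all variable names are ours.

```python
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LogisticRegression

# Toy training data in the spirit of Table 1.
train_texts = ["I love this.", "This is great!", "I hate this.", "This is awful!"]
train_labels = [0, 0, 1, 1]
label_names = {0: "positive", 1: "negative"}

encoder = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")

def decorate(text, nbr_label, nbr_dist):
    # LABEL configuration: append the neighbor's gold label and Euclidean distance.
    return f"{text} [SEP] [{label_names[nbr_label]} {nbr_dist:.2f}]"

# Training: embed, query the *second* NN (k=2; the first NN is the instance itself),
# decorate, then fit the classifier on the re-encoded, modified text.
train_emb = encoder.encode(train_texts)
nn = NearestNeighbors(n_neighbors=2, metric="euclidean").fit(train_emb)
dists, idxs = nn.kneighbors(train_emb)
train_mod = [decorate(t, train_labels[i], d)
             for t, i, d in zip(train_texts, idxs[:, 1], dists[:, 1])]
# SetFit fine-tuning of `encoder` on train_mod would go here (omitted).
clf = LogisticRegression().fit(encoder.encode(train_mod), train_labels)

# Inference: decorate each test instance with its nearest *training* neighbor (k=1).
def predict(texts):
    d, i = nn.kneighbors(encoder.encode(texts), n_neighbors=1)
    mod = [decorate(t, train_labels[j[0]], dj[0]) for t, j, dj in zip(texts, i, d)]
    return clf.predict(encoder.encode(mod))

print(predict(["So good!", "Never again."]))
```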
**Intuition** As \(f\) is the same function, we hypothesize that LaGoNN's modifications will make a novel instance more semantically similar to its NNs in the training data. The resulting representation should be more akin to an instance on which the embedding model and classifier were optimized. Our method also leverages both distance-based NN lookup and probabilistic algorithms (logistic regression) for its final prediction.
## 4 Experiments
### Data and label distributions
In our experiments, we study LaGoNN's performance on four binary and one ternary classification dataset related to the task of content moderation. Each dataset is composed of a training, validation, and test split.
Here, we provide a summary of the five datasets we studied. LIAR was created from Politifact5 for fake news detection and is composed of the data fields _context_, _speaker_, and _statement_, which are labeled with varying levels of truthfulness (Wang, 2017). We used a collapsed version of this dataset where a statement can only be true or false. We did not use _speaker_, but did use _context_ and _statement_, separated by a separator token. Quora Insincere Questions6 is composed of neutral and toxic questions, where the author is not asking in good faith. Hate Speech Offensive7 has three labels and is composed of tweets that can contain either neutral text, offensive language, or hate speech (Davidson et al., 2017). Amazon Counterfactual8 contains sentences from product reviews, and the labels can be "factual" or "counterfactual" (O'Neill et al., 2021). "Counterfactual" indicates that the customer said something that cannot be true. Finally, Toxic Conversations9 is a dataset of comments where the author wrote a comment with unintended bias10 (see Table 2).
Footnote 5: [https://www.politifact.com/](https://www.politifact.com/)
Footnote 6: [https://www.kaggle.com/c/quora-insincere-questions-classification](https://www.kaggle.com/c/quora-insincere-questions-classification)
Footnote 7: [https://huggingface.co/datasets/hate_speech_offensive](https://huggingface.co/datasets/hate_speech_offensive)
Footnote 8: [https://huggingface.co/datasets/SetFit/amazon_counterfactual_en](https://huggingface.co/datasets/SetFit/amazon_counterfactual_en)
Footnote 9: [https://huggingface.co/datasets/SetFit/toxic_conversations](https://huggingface.co/datasets/SetFit/toxic_conversations)
Footnote 10: [https://huggingface.co/datasets/SetFit/toxic_conversations](https://huggingface.co/datasets/SetFit/toxic_conversations)
We study our system by simulating growing training data over ten discrete steps sampled under four different label distributions: extreme, imbalanced, moderate, and balanced (see Table 3). On each step we add \(100\) examples (100 on the first, 200 on the second, etc.) from the training split sampled under one of the four ratios.11 On each
step, we train our method with the sampled data and evaluate on the test split. Considering growing training data has two benefits: 1) We can simulate a streaming data scenario, where new data is labeled and added for training and 2) We can investigate each method's sensitivity to the number of training examples. We sampled over five seeds, reporting the mean and standard deviation.
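The sampling scheme, as we read it, can be sketched as follows; whether examples are re-drawn or accumulated across steps and how fractional counts are rounded are our assumptions, and all names are hypothetical.

```python
import numpy as np

def sample_step(pool_texts, pool_labels, step, ratio, rng):
    """Draw 100*step training examples whose label proportions follow `ratio` (Table 3)."""
    n = 100 * step
    pool_labels = np.asarray(pool_labels)
    chosen = []
    for label, frac in ratio.items():
        candidates = np.flatnonzero(pool_labels == label)
        chosen.extend(rng.choice(candidates, size=int(round(n * frac)), replace=False))
    return [pool_texts[i] for i in chosen], pool_labels[chosen].tolist()

imbalanced_binary = {0: 0.90, 1: 0.10}   # neutral vs. undesirable, cf. Table 3
rng = np.random.default_rng(0)
# texts, labels = sample_step(train_pool_texts, train_pool_labels,
#                             step=3, ratio=imbalanced_binary, rng=rng)
```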
### Baselines
We compare LaGoNN against standard fine-tuning, linear probing of a Sentence Transformer, and two versions of SetFit, detailed below.
**RoBERTa** RoBERTa-base is a pretrained language model (Liu et al., 2019) that we fine-tuned with the transformers library (Wolf et al., 2020). We select two versions of RoBERTa-base: an expensive version, where we perform standard fine-tuning on each step (RoBERTa\({}_{full}\)) and a cheaper version, where we freeze the model body after step one and update the classification head on subsequent steps (RoBERTa\({}_{freeze}\)). We set the learning rate to \(1e^{-5}\), train for a maximum of 70 epochs, and use early stopping, selecting the best model after training. We consider RoBERTa\({}_{full}\) an upper bound as it has the most trainable parameters and requires the most time to train of all our methods.
**Linear probe** We perform linear probing of a pretrained Sentence Transformer by fitting logistic regression with default hyperparameters on the training embeddings on each step. We choose this baseline because LaGoNN can be applied as a modification in this scenario. We select MPNET (Song et al., 2020) as the ST, for SetFit, and for LaGoNN.12 We refer to this method as Probe.
Footnote 12: [https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
**Logistic regression** Here, we perform standard fine-tuning with SetFit on the first step, and then on subsequent steps, freeze the embedding model and retrain only the classification head. We choose this baseline as LaGoNN also uses logistic regression as its final classifier and refer to this method as Log Reg.
**\(k\)-nearest neighbors** Similar to the above baseline, we fine-tune the embedding model via SetFit, but swap out the classification head for a \(k\)NN classifier, where \(k=3\). We select this baseline as LaGoNN also relies on an NN lookup. \(k=3\) was chosen during our development stage as it yielded the strongest performance. We refer to this method as \(k\)NN.
**SetFit** For this baseline we perform standard fine-tuning with SetFit on each step. On the first step, this method is equivalent to Log Reg.
**LaGoNN cheap** This method modifies the training and test data via LaGoNN before fitting a logistic regression classifier. Even without adapting the embedding model, as the training data grow, modifications made to the test data may change. We refit the classification head on each step and refer to this method as LaGoNN\({}_{cheap}\), which is comparable to Probe.
**LaGoNN** On the first step, we use LaGoNN to modify our training data and then perform standard fine-tuning with SetFit. On subsequent steps, we freeze the embedding model and use it to modify our data. We fit logistic regression on each step and refer to this method as LaGoNN. It is comparable to Log Reg.
**LaGoNN expensive** This version is identical to LaGoNN, except we fine-tune the embedding model on each step. We refer to this method as LaGoNN\({}_{exp}\) and it is comparable to SetFit. On the first step, this method is equivalent to LaGoNN.
## 5 Results
Table 4 and Figure 3 show our results. In the cases of the extreme and imbalanced regimes, Set
\begin{table}
\begin{tabular}{c|c}
**Dataset (and Detection Task)** & **Number of Labels** \\ \hline LIAR (Fake News) & 2 \\ Insincere Questions (Toxicity) & 2 \\ Hate Speech Offensive & 3 \\ Amazon Counterfactual (English) & 2 \\ Toxic Conversations & 2 \\ \end{tabular}
\end{table}
Table 2: Summary of datasets and number of labels. We provide the type of task in parenthesis in unclear cases.
\begin{table}
\begin{tabular}{c|c|c}
**Regime** & **Binary** & **Ternary** \\ \hline Extreme & 0: 98\% 1: 2\% & 0: 95\%, 1: 2\%, 2: 3\% \\ Imbalanced & 0: 90\% 1: 10\% & 0: 80\%, 1: 5\%, 2: 15\% \\ Moderate & 0: 75\% 1: 25\% & 0: 65\%, 1: 10\%, 2: 25\% \\ Balanced & 0: 50\% 1: 50\% & 0: 33\%, 1: 33\%, 2: 33\% \\ \end{tabular}
\end{table}
Table 3: Label distributions for sampling training data. 0 represents neutral while 1 and 2 represent different types of undesirable text.
In the cases of the extreme and imbalanced regimes, SetFit's performance steadily increases with the number of training examples. As the label distribution shifts to the balanced regime, however, SetFit's performance quickly saturates or even degrades as the number of training examples grows. LaGoNN, RoBERTa\({}_{full}\), and Log Reg, the other fine-tuned PLM classifiers, do not exhibit this behavior. LaGoNN\({}_{exp}\), being based on SetFit, exhibits a similar trend, but the performance degradation is mitigated; on the \(10^{th}\) step of Amazon Counterfactual in Table 4, SetFit's performance decreased by 9.7, while LaGoNN\({}_{exp}\) only fell by 3.7.
LaGoNN and LaGoNN\({}_{exp}\) generally outperform Log Reg and SetFit, respectively, often resulting in a more stable model, as reflected in the standard deviation. We find that LaGoNN and LaGoNN\({}_{exp}\) exhibit stronger predictive power with fewer examples than RoBERTa\({}_{full}\) despite having fewer trainable parameters. For example, on the first step of Insincere Questions under the extreme setting, LaGoNN's performance is more than 10 points higher.
\begin{table}
\begin{tabular}{l c c c c|c c c c} \multicolumn{1}{l}{**Method**} & \multicolumn{3}{c|}{**InsincereQs**} & \multicolumn{3}{c}{**AmazonCF**} \\ _Extreme_ & \(1^{st}\) & \(5^{th}\) & \(10^{th}\) & Average & \(1^{st}\) & \(5^{th}\) & \(10^{th}\) & Average \\ \hline RoBERTa\({}_{full}\) & \(19.9_{8.4}\) & \(30.9_{7.9}\) & \(42.0_{7.4}\) & \(33.5_{6.7}\) & \(21.8_{6.6}\) & \(63.9_{10.2}\) & \(72.3_{3.0}\) & \(59.6_{16.8}\) \\ SetFit & \(24.1_{6.3}\) & \(29.2_{6.7}\) & \(36.7_{7.3}\) & \(31.7_{3.4}\) & \(22.3_{8.8}\) & \(64.2_{3.3}\) & \(68.6_{4.6}\) & \(56.8_{14.9}\) \\ LaGoNN\({}_{exp}\) & \(\mathbf{30.7_{8.9}}\) & \(37.6_{6.1}\) & \(39.0_{6.1}\) & \(36.1_{2.3}\) & \(\mathbf{26.1_{17.5}}\) & \(\mathbf{68.4_{4.4}}\) & \(\mathbf{74.9_{2.9}}\) & \(\mathbf{63.2}_{16.7}\) \\ \hline RoBERTa\({}_{freeze}\) & \(19.9_{8.4}\) & \(34.1_{5.4}\) & \(37.9_{5.9}\) & \(32.5_{5.5}\) & \(21.8_{6.6}\) & \(41.0_{12.7}\) & \(51.3_{10.7}\) & \(40.6_{8.9}\) \\ \(k\)NN & \(6.8_{0.42}\) & \(15.9_{3.4}\) & \(16.9_{4.3}\) & \(14.4_{3.0}\) & \(10.3_{0.2}\) & \(15.3_{4.2}\) & \(18.4_{3.7}\) & \(15.6_{2.4}\) \\ Log Reg & \(24.1_{6.3}\) & \(31.7_{4.9}\) & \(36.1_{5.4}\) & \(31.8_{3.6}\) & \(22.3_{8.8}\) & \(32.4_{11.5}\) & \(42.3_{8.8}\) & \(34.5_{9.9}\) \\ LaGoNN & \(\mathbf{30.7_{8.9}}\) & \(39.3_{4.9}\) & \(41.2_{4.7}\) & \(38.4_{3.0}\) & \(\mathbf{26.1_{17.5}}\) & \(31.1_{19.4}\) & \(33.0_{19.1}\) & \(30.9_{2.3}\) \\ \hline Probe & \(24.3_{8.4}\) & \(39.8_{5.6}\) & \(44.8_{4.2}\) & \(38.3_{6.2}\) & \(24.2_{9.0}\) & \(46.3_{4.4}\) & \(54.6_{2.0}\) & \(45.1_{10.3}\) \\ LaGoNN\({}_{cheap}\) & \(23.6_{7.8}\) & \(\mathbf{40.7_{5.9}}\) & \(\mathbf{45.3_{4.4}}\) & \(\mathbf{38.6_{6.6}}\) & \(20.1_{6.9}\) & \(38.3_{4.9}\) & \(47.8_{3.4}\) & \(38.2_{9.5}\) \\ \hline _Balanced_ & & & & & & & & \\ RoBERTa\({}_{full}\) & \(47.1_{4.2}\) & \(52.1_{3.6}\) & \(55.7_{2.6}\) & \(52.5_{2.9}\) & \(73.6_{2.1}\) & \(78.6_{3.9}\) & \(\mathbf{82.4_{1.1}}\) & \(78.9_{2.2}\) \\ SetFit & \(43.5_{4.2}\) & \(47.1_{4.6}\) & \(48.5_{3.9}\) & \(48.0_{1.7}\) & \(73.8_{4.4}\) & \(69.8_{4.0}\) & \(64.1_{4.6}\) & \(69.6_{3.6}\) \\ LaGoNN\({}_{exp}\) & \(42.8_{5.3}\) & \(47.6_{2.9}\) & \(47.0_{1.7}\) & \(46.2_{2.0}\) & \(\mathbf{76.0_{3.0}}\) & \(73.4_{2.6}\) & \(72.3_{2.9}\) & \(72.5_{3.4}\) \\ \hline RoBERTa\({}_{freeze}\) & \(47.1_{4.2}\) & \(52.1_{0.4}\) & \(53.3_{1.7}\) & \(51.5_{2.1}\) & \(73.6_{2.1}\) & \(76.8_{1.6}\) & \(77.9_{1.0}\) & \(76.5_{1.3}\) \\ \(k\)NN & \(22.3_{2.3}\) & \(30.2_{2.3}\) & \(30.9_{1.8}\) & \(29.5_{2.5}\) & \(41.7_{3.4}\) & \(57.9_{3.3}\) & \(58.3_{3.3}\) & \(56.8_{5.1}\) \\ Log Reg & \(43.5_{4.2}\) & \(53.8_{2.2}\) & \(55.5_{1.6}\) & \(52.8_{3.5}\) & \(73.8_{4.4}\) & \(79.2_{1.9}\) & \(80.1_{1.0}\) & \(78.6_{1.8}\) \\ LaGoNN & \(42.8_{5.3}\) & \(54.1_{2.9}\) & \(56.3_{1.3}\) & \(53.4_{3.7}\) & \(\mathbf{76.0_{3.0}}\) & \(\mathbf{80.1_{2.0}}\) & \(81.4_{1.1}\) & \(\mathbf{79.8_{1.4}}\) \\ \hline Probe & \(47.5_{1.6}\) & \(52.4_{1.7}\) & \(55.3_{1.1}\) & \(52.2_{2.5}\) & \(52.4_{3.4}\) & \(64.7_{2.5}\) & \(67.5_{0.4}\) & \(63.4_{4.4}\) \\ LaGoNN\({}_{cheap}\) & \(\mathbf{49.3_{2.6}}\) & \(\mathbf{54.4_{1.4}}\) & \(\mathbf{57.6_{0.7}}\) & \(\mathbf{54.2_{2.7}}\) & \(48.1_{3.4}\) & \(62.0_{2.0}\) & \(65.3_{0.8}\) & \(60.5_{5.0}\) \\ \hline \end{tabular}
\end{table}
Table 4: Average performance (average precision \(\times\) 100) on Insincere Questions and Amazon Counterfactual. The first, fifth, and tenth step are followed by the average over all ten steps. The average gives insight into the overall strongest performer by aggregating all steps. We group methods with a comparable number of trainable parameters together. The extreme label distribution results are followed by balanced results. We provide additional results in Appendix A.2.
Figure 3: Average performance in the imbalanced and balanced sampling regimes relative to comparable methods. We include RoBERTa\({}_{full}\) results for reference. The metric is macro-F1 for Hate Speech Offensive, average precision elsewhere.
LaGoNN\({}_{cheap}\) outperforms all other methods on the Insincere Questions dataset for all balance regimes, despite being the third fastest (see Table 5) and having the second fewest trainable parameters. We attribute this result to the fact that this dataset is composed of questions from Quora13 and our ST backbone was pretrained on similar data. This intuition is supported by Probe, the cheapest method, which despite having the fewest trainable parameters, shows comparable performance.
Footnote 13: [https://www.quora.com/](https://www.quora.com/)
### SetFit for efficient many-shot learning
Respectively comparing SetFit to Log Reg and LaGoNN\({}_{exp}\) to LaGoNN suggests that fine-tuning the ST embedding model on moderate or balanced data hurts model performance as the number of training samples grows. We therefore hypothesize that randomly sampling a subset of training data to fine-tune the encoder, freezing, embedding the remaining data, and training the classifier will result in a stronger model.
To test our hypothesis, we add two models to our experimental setup: SetFit\({}_{lite}\) and LaGoNN\({}_{lite}\). SetFit\({}_{lite}\) and LaGoNN\({}_{lite}\) are respectively equivalent to SetFit and LaGoNN\({}_{exp}\), except after the fourth step (400 samples), we freeze the encoder and only retrain the classifier on subsequent steps, similar to Log Reg and LaGoNN.
Figure 4 shows our results with these two new models. As expected, in the cases of extreme and imbalanced distributions, LaGoNN\({}_{exp}\), SetFit, and RoBERTa\({}_{full}\) are the strongest performers on Toxic Conversations. We note very different results for both LaGoNN\({}_{lite}\) and SetFit\({}_{lite}\) compared to LaGoNN\({}_{exp}\) and SetFit on Toxic Conversations and Amazon Counterfactual under the moderate and balanced label distributions. As their expensive counterparts start to plateau or degrade on the fourth step, the predictive power of these two new models dramatically increases, showing improved or comparable performance to RoBERTa\({}_{full}\), despite being optimized on less data; for example, LaGoNN\({}_{lite}\) reaches an average precision of approximately 55 after being optimized on only 500 examples. RoBERTa\({}_{full}\) does not exhibit similar performance until the tenth step. Finally, we point out that LaGoNN-based methods generally provide a performance boost for SetFit-based classification.
### LaGoNN's computational expense
LaGoNN is more computationally expensive than Sentence Transformer- or SetFit-based text classification. LaGoNN introduces additional inference with the encoder, NN-lookup, and string modification. As the computational complexity of transformers increases with sequence length [23], additional expense is created when LaGoNN appends textual information before inference with the ST.
Figure 4: Average performance for all sampling regimes on Toxic Conversations and the moderate and balanced regimes for Amazon Counterfactual and Hate Speech Offensive. More expensive models, such as LaGoNN\({}_{exp}\), SetFit, and RoBERTa\({}_{full}\) perform best when the label distribution is imbalanced. As the distribution becomes more balanced, however, inexpensive models, such as LaGoNN\({}_{lite}\) or SetFit\({}_{lite}\), show similar or improved performance. The metric is macro-F1 for Hate Speech Offensive, average precision elsewhere. We provide additional results in Appendix A.3.
In Table 5, we provide a speed comparison between Probe, Log Reg, SetFit, and LaGoNN classification computed on the same hardware. On average, LaGoNN introduced 24.2 additional seconds of computation compared to its respective counterpart.
## 6 Discussion
Modern research has achieved impressive results on a variety of text classification tasks and with limited training data. SetFit is one such example and can be used practically, but based on our results, the task of text classification for content moderation presents a challenge even for state-of-the-art approaches. It is imperative that we develop reliable methods that can be feasibly and quickly applied. These methods should be as inexpensive as possible such that we can re-tune them for novel forms of hate speech, toxicity, and fake news.
Our results suggest that LaGoNN\({}_{exp}\) or SetFit, relatively expensive techniques, can detect harmful content when dealing with imbalanced label distributions, as is common with realistic datasets. This finding is intuitive from the perspective that less common instances are more difficult to learn and require more effort. The exception to this would be our examination of Insincere Questions, where LaGoNN\({}_{cheap}\) excelled. This highlights the fact that we can inexpensively extract pretrained knowledge if PLMs are chosen with care.
Standard fine-tuning with SetFit does not help performance on more balanced datasets that are not few-shot. SetFit was developed for few-shot learning, but we have observed that it should not be applied "out of the box" to balanced, non-few-shot data. This can be detrimental to performance and has a direct effect on our approach. However, we have observed that LaGoNN can stabilize SetFit's predictions and reduce its performance drop. Figures 3 and 4 show that when the label distribution is moderate or balanced (see Table 3), SetFit plateaus, yet less expensive systems, such as LaGoNN, continue to learn. We believe this is due to SetFit's fine-tuning objective, which optimizes a Sentence Transformer using cosine similarity loss to separate examples belonging to different labels in feature space by assuming independence between labels. This may be too strong an assumption as we optimize with more examples, which is counter-intuitive for data-hungry transformers. RoBERTa\({}_{full}\), optimized with cross-entropy loss, generally showed improved performance as we added training data.
When dealing with balanced data, it is sufficient to fine-tune the Sentence Transformer via SetFit with 50 to 100 examples per label, while 150 to 200 instances appear to be sufficient when the training data are moderately balanced. The encoder can then be frozen and all available data embedded to train a classifier. This improves performance and is more efficient than full-model fine-tuning. LaGoNN is directly applicable to this case, boosting the performance of SetFit\({}_{lite}\) without introducing trainable parameters. In this setup, all models fine-tuned on Hate Speech Offensive exhibited similar, upward-trending learning curves, but we note the speed of LaGoNN relative to RoBERTa\({}_{full}\) or SetFit (see Figure 4 and Table 5).
## 7 Conclusion
We have proposed LaGoNN, a simple and inexpensive modification to Sentence Transformer- or SetFit-based text classification. LaGoNN does not introduce any trainable parameters or new hyperparameters, but typically improves SetFit's performance. To demonstrate the merit of LaGoNN, we examined text classification systems in the context of content moderation under four label distributions on five datasets and with growing training data. To our knowledge, this is the first work to examine SetFit in this way. When the training labels are imbalanced, expensive systems, such as LaGoNN\({}_{exp}\), are performant. However, when the distribution is balanced, standard fine-tuning with SetFit can actually hurt model performance. We have therefore proposed an alternative fine-tuning procedure with which LaGoNN can be easily utilized, resulting in a powerful but inexpensive system capable of detecting harmful content.
\begin{table}
\begin{tabular}{c|c}
**Method** & **Time in seconds** \\ \hline
Probe & 22.9 \\
LaGoNN\({}_{cheap}\) & 44.2 \\
Log Reg & 42.9 \\
LaGoNN & 63.4 \\
SetFit & 207.3 \\
LaGoNN\({}_{exp}\) & 238.0 \\ \hline
RoBERTa\({}_{full}\) & 446.9 \\
\end{tabular}
\end{table}
Table 5: Speed comparison between LaGoNN and comparable methods. Time includes training each method on \(1,000\) examples and performing inference on \(51,000\) examples.
## 8 Acknowledgments
We would like to thank Derek Hommel and Nils Reimers for sharing inspiring discussions with us. We would also like to extend our gratitude to Tom Aarsen, Max Glockner, Yongxin Huang, Timour Igamberdiev, Sukannya Purkayastha, and Kexin Wang for their invaluable feedback on an early draft of our manuscript. This work was funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Science and the Arts (HMWK) within the projects "The Third Wave of Artificial Intelligence - 3AI", hessian.AI, and within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
## 9 Limitations
In the current work, we have only considered text data, but social media content can of course consist of text, images, and videos. As LaGoNN depends only on an embedding model, an obvious extension to our approach would be examining the modifications we suggest, but on multimodal data. This is an interesting direction that we leave for future research. We have also only considered English data, but harmful content can appear in any language. The SetFit authors demonstrated that it is performant on multilingual data, the only necessary modification being the underlying pretrained ST. We therefore suspect that LaGoNN would behave similarly on non-English data, but this is not something we have tested ourselves. In order to examine our system's performance under different label-balance distributions, we restricted ourselves to binary and ternary text classification tasks, and LaGoNN therefore remains untested when there are more than three labels. We did not study our method when there are fewer than 100 examples, and investigating LaGoNN in a few-shot learning setting is a fascinating topic for future study.
## 10 Ethics Statement
It is our sincere goal that our work contributes to the social good in multiple ways. We first hope to have furthered research on text classification that can be feasibly applied to combat undesirable content on the Internet, such as misinformation, which could potentially cause someone harm. To this end, we have tried to describe our approach as accurately as possible and released our code, such that our work is transparent and can be easily reproduced and expanded upon. We hope that we have also created a useful and efficient system that reduces the need to expend energy in the form of expensive computation. For example, LaGoNN does not rely on billion-parameter language models that demand thousand-dollar GPUs to use. LaGoNN makes use of GPUs no more than SetFit, despite being more computationally expensive. We have additionally proposed a simple method to make SetFit, an already relatively inexpensive method, even more efficient.
|
2303.16698 | Probabilistic inverse optimal control for non-linear partially
observable systems disentangles perceptual uncertainty and behavioral costs | Inverse optimal control can be used to characterize behavior in sequential
decision-making tasks. Most existing work, however, is limited to fully
observable or linear systems, or requires the action signals to be known. Here,
we introduce a probabilistic approach to inverse optimal control for partially
observable stochastic non-linear systems with unobserved action signals, which
unifies previous approaches to inverse optimal control with maximum causal
entropy formulations. Using an explicit model of the noise characteristics of
the sensory and motor systems of the agent in conjunction with local
linearization techniques, we derive an approximate likelihood function for the
model parameters, which can be computed within a single forward pass. We
present quantitative evaluations on stochastic and partially observable
versions of two classic control tasks and two human behavioral tasks.
Importantly, we show that our method can disentangle perceptual factors and
behavioral costs despite the fact that epistemic and pragmatic actions are
intertwined in sequential decision-making under uncertainty, such as in active
sensing and active learning. The proposed method has broad applicability,
ranging from imitation learning to sensorimotor neuroscience. | Dominik Straub, Matthias Schultheis, Heinz Koeppl, Constantin A. Rothkopf | 2023-03-29T13:51:06Z | http://arxiv.org/abs/2303.16698v2 | # Probabilistic inverse optimal control with local linearization
###### Abstract
Inverse optimal control methods can be used to characterize behavior in sequential decision-making tasks. Most existing work, however, requires the control signals to be known, or is limited to fully-observable or linear systems. This paper introduces a probabilistic approach to inverse optimal control for stochastic non-linear systems with missing control signals and partial observability that unifies existing approaches. By using an explicit model of the noise characteristics of the sensory and control systems of the agent in conjunction with local linearization techniques, we derive an approximate likelihood for the model parameters, which can be computed within a single forward pass. We evaluate our proposed method on stochastic and partially observable version of classic control tasks, a navigation task, and a manual reaching task. The proposed method has broad applicability, ranging from imitation learning to sensorimotor neuroscience.
Centre for Cognitive Science, Technical University of Darmstadt, Darmstadt, Germany
## 1 Introduction
Inverse optimal control (IOC) is the problem of inferring an agent's cost function, and possibly other properties of their internal model, from observed behavior. While IOC has been a fundamental task in artificial intelligence, optimal control, and machine learning, particularly reinforcement learning and robotics, it has widespread applicability in several scientific fields including behavioral economics, psychology, and neuroscience. For example, in cognitive science and sensorimotor neuroscience optimal control models have been able to explain key properties of behavior, such as speed-accuracy trade-offs (Harris and Wolpert, 1998) or the minimum intervention principle (Todorov and Jordan, 2002). But, while researchers usually build an optimal control model and compare its predictions to behavior, certain parameters of the agent's internal processes are typically unknown. For example, an agent might experience intrinsic costs of behavior such as effort that are different between individuals. Inferring these parameters from observed behavior can help to understand the agent's goals, internal tradeoffs, cognitive processes and predict their behavior under novel conditions. Applying IOC in these sensorimotor control domains poses several challenges that make most previous methods not viable.
First, most IOC methods assume the agent's action signals to be known. This assumption, while convenient in simulations or robotics applications, where the control signals may be easily quantified, does not hold in many other real-world applications. In transfer learning or behavioral experiments, the action signals are internal quantities of an animal or human, e.g., neural activity or muscle activations, and are therefore not straightforwardly observable. Thus, it is worthwhile to consider the scenario where a researcher has observations of the system's state only, i.e., measurements of the animal's behavior.
Second, with few exceptions (e.g. Chen and Ziebart, 2015; Kwon et al., 2020), IOC methods do not account for partial observability from the agent's perspective and model the variability of the agent using a maximum causal entropy formulation (MCE; Ziebart et al., 2010). However, many real-world control problems involve sensory uncertainty, which makes the state of the world partially observable and therefore contributes to the agent's stochasticity. As an example, in sensorimotor neuroscience the noise or uncertainty in the sensory system can be well described quantitatively so that accurate observation models can be formulated, which are helpful to understand the variability of behavior (Wolpert and Ghahramani, 2000).
Third, many IOC methods are based on matching feature expectations of the cost function between the model and observed data (e.g. Ziebart et al., 2010), and are thus not easily adapted to infer parameters of other parts of the model. The cost function is often not the only quantity of interest in a behavioral experiment,
where researchers might be interested in also inferring the noise characteristics of the motor system or other properties of the agent's internal model (e.g. Golub et al., 2013).
Fourth, in many real-world scenarios, the problem is not well modeled with linear dynamics and Gaussian noise, which would allow applying linear quadratic Gaussian (LQG) control (Anderson and Moore, 1990). First, the dynamics of the system may be non-linear. A common example comes from robotics and motor control, where joint angles in a kinematic chain need to be controlled and the physical system's dynamics involve inertia, centripetal, and Coriolis forces, as well as friction and torque in the joints. Second, the stochasticity of the system may not be well captured by normal distributions. A prime example is biological sensorimotor control, where the system is not only non-linear but both the sensory and action noise distributions are additionally signal dependent, i.e., the variability of sensory and control signals scales with their respective means. While iterative methods for solving the optimal control problem exist (Todorov and Li, 2005), here we consider the corresponding inverse problem.
To address these issues, we adopt a probabilistic perspective of the IOC problem. We distinguish between the control problem faced by the agent and the inference problem the researcher has to solve. From the agent's perspective, the problem consists of acting in a partially observable Markov decision process (POMDP), for which the probabilistic graphical model is shown in Figure 1, left. We consider the setting of continuous states and actions, stochastic non-linear dynamics, partial observations, and finite horizon. For this setting, there are efficient approximately optimal solutions to the estimation and control problem, for which we give an overview in Section 2. The researcher, on the other hand, is interested in inferring properties of the agent's model and cost function. The IOC problem from their perspective can also be formulated using a probabilistic graphical model (Figure 1, right), in which the state of the system is observed, while quantities internal to the agent are latent variables.
Here, we unify MCE models, which are agnostic regarding the probabilistic structure causing the observed stochasticity of the agent's policy, with IOC methods, which involve an explicit observation model. We allow for both: We employ an explicit observation model, but also allow the agent to have additional stochasticity through an MCE policy. We provide a solution to the IOC problem in this setting by approximate filtering of the agent's state estimate via local linearization, which allows marginalizing over these latent variables and deriving an approximate likelihood function for observed trajectories given parameters (Section 3). This function can be efficiently evaluated as it consists of a single forward pass. An estimate of the optimal parameters can then be determined using a gradient-based optimizer, maximizing the approximate likelihood. We evaluate our proposed method on two classical control tasks, pendulum and cartpole, as well as a navigation task and a manual reaching task (Section 4).
### Related work
Inferring costs or utilities from behavior has been of interest for a long time in several scientific fields, such as behavioral economics, psychology, and neuroscience (Mosteller and Nogee, 1951; Kahneman and Tversky, 1979; Kording and Wolpert, 2004). More specific to the problem formulation adopted here, estimating objective functions in the field of control was first investigated by Kalman (1964) in the context of deterministic linear systems with quadratic costs. More recent formulations were developed first for discrete state and action spaces under the term inverse reinforcement learning (IRL; Ng et al., 2000; Abbeel and Ng, 2004), including formulations allowing for stochasticity in action selection (Rothkopf and Dimitrakakis, 2011). In this line, the maximum entropy (ME; Ziebart et al., 2008) and MCE formulation (Ziebart et al., 2010) gave rise to a whole string of new methods, such as addressing the IOC problem for non-linear continuous systems via linearization (e.g. Levine and Koltun, 2012) or importance sampling (Boularias et al., 2011) for the case of deterministic dynamics and full observability.
IOC methods for stochastic systems have been developed that considered the setting of affine control dynamics (Aghasadeghi and Bretl, 2011; Li et al., 2011).
Figure 1: **Left:** Decision network from the agent’s perspective (following the notational convention used in Kochenderfer et al., 2022). At each time step \(t\), the agent receives a partial or noisy observation \(\mathbf{y}_{t}\) of the actual state \(\mathbf{x}_{t}\). The agent performs an action \(\mathbf{u}_{t}\) and incurs a cost \(c_{t}\). **Right:** Probabilistic graphical model from the researcher’s perspective, who observes a trajectory \(\mathbf{x}_{1:T}\) from an agent. Quantities that are internal to the agent, i.e., their partial observations \(\mathbf{y}_{t}\), their internal beliefs \(\mathbf{b}_{t}\) and the action signals \(\mathbf{u}_{t}\) are not directly observed.
Arbitrary non-linear stochastic dynamics in the infinite horizon setting have been approached using model-free deep MCE IRL formulations (Finn et al., 2016; Garg et al., 2021). The latter approaches, however, yield no interpretable representation as the reward function is represented as a neural network. The partially observable setting for IOC has previously been addressed in the case of deterministic dynamics for the discrete state-action space (Choi and Kim, 2011) and continuous state, discrete action space (Silva et al., 2019). Schmitt et al. (2016) addressed systems with linear dynamics and continuous controls for a linear switching observation model. Other work has considered partial observability from the researcher's perspective, e.g., through occlusions (Kitani et al., 2012; Bogert et al., 2016). There are some IOC methods which are applicable to partially observable and stochastic systems: Linear-quadratic-Gaussian systems have been regarded by Schultheis et al. (2021), while the work of Chen and Ziebart (2015) can be used to estimate cost functions that depend on the state only. Non-linear dynamics in the infinite-horizon setting have been approached by Kwon et al. (2020) by training a policy network as a function of the whole parameter space. This work, however, also assumes the action signals to be given and a stationary policy.
Applications where IOC methods have been used to estimate cost functions range from human locomotion (Mombaur et al., 2010) over spatial navigation (Rothkopf and Ballard, 2013), table tennis (Muelling et al., 2014), to attention switching (Schmitt et al., 2017), and target tracking (Straub and Rothkopf, 2022). Other work has been aimed at inferring other properties of control tasks, e.g. learning the dynamics model (Golub et al., 2013), learning rules (Ashwood et al., 2020), or discount functions (Schultheis et al., 2022). Several subfields of robotics including imitation and apprenticeship learning (Taylor and Stone, 2009) as well as transfer learning (Osa et al., 2018) have also employed IOC.
## 2 Background
Before we introduce our probabilistic approach to inverse optimal control, we give an overview of the control and filtering problems faced by the agent and algorithms that can be used to solve it. For a summary of our notation in this paper, see Appendix A.
### Partially observable Markov decision processes
We consider a special case of partially observable Markov decision processes Astrom (1965); Kaelbling et al. (1998), a discrete-time stochastic non-linear dynamical system (Figure 1, left) with states \(\mathbf{x}_{t}\in\mathbb{R}^{n}\) following the dynamics equation \(\mathbf{x}_{t+1}=f(\mathbf{x}_{t},\mathbf{u}_{t},\mathbf{v}_{t})\), where \(f:\mathbb{R}^{n}\times\mathbb{R}^{u}\times\mathbb{R}^{v}\to\mathbb{R}^{n}\) is the dynamics function, \(\mathbf{u}_{t}\in\mathbb{R}^{u}\) are the controls and \(\mathbf{v}_{t}\sim\mathcal{N}(0,I)\) is \(v\)-dimensional Gaussian noise. We assume that the agent has only partial observations \(\mathbf{y}_{t}\in\mathbb{R}^{m}\) following \(\mathbf{y}_{t}=h(\mathbf{x}_{t},\mathbf{w}_{t})\), with \(h:\mathbb{R}^{n}\times\mathbb{R}^{w}\to\mathbb{R}^{m}\) the stochastic observation function and \(\mathbf{w}_{t}\sim\mathcal{N}(0,I)\) is \(w\)-dimensional Gaussian noise. While \(\mathbf{v}_{t}\) and \(\mathbf{w}_{t}\) are defined as standard normal random variables, the system can incorporate general control- and state-dependent noises through non-linear transformations within the dynamics function \(f\) and observation function \(h\). The agent's goal is to minimize the expected cost over a time horizon \(T\in\mathbb{N}\), defined by \(J=\mathbb{E}[c_{T}(\mathbf{x}_{T})+\sum_{t=1}^{T-1}c_{t}(\mathbf{x}_{t},\mathbf{u}_{t})]\), consisting of a final state cost \(c_{T}(\mathbf{x}_{T})\) and a cost at each time step \(c_{t}(\mathbf{x}_{t},\mathbf{u}_{t})\).
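As a concrete illustration of this generative model, the sketch below simulates one trajectory of a toy one-dimensional system with signal-dependent noise entering through \(f\) and \(h\); the specific functions, noise scales, and the zero control signal are illustrative choices, not part of the tasks studied here.

```python
# Minimal simulation of the POMDP structure above: x_{t+1} = f(x_t, u_t, v_t), y_t = h(x_t, w_t),
# with standard-normal v_t, w_t transformed inside f and h into signal-dependent noise.
# The concrete f, h, noise scales, and zero controls are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def f(x, u, v):
    # toy non-linear dynamics with control-dependent noise
    return x + 0.1 * (u - np.sin(x)) + 0.05 * (1.0 + np.abs(u)) * v

def h(x, w):
    # noisy non-linear observation with state-dependent noise
    return np.sin(x) + 0.1 * (1.0 + np.abs(x)) * w

T = 20
x = np.zeros(T + 1)
y = np.zeros(T)
for t in range(T):
    u_t = 0.0                                   # placeholder control signal
    y[t] = h(x[t], rng.standard_normal())
    x[t + 1] = f(x[t], u_t, rng.standard_normal())

print(x[:5], y[:5])
```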
### Iterative linear quadratic Gaussian
The fully-observable control problem analogous to Section 2.1, where we assume that the agent acts directly on the state, can be solved approximately using the method of iterative linear quadratic Gaussian (iLQG; Todorov and Li, 2005). This method iteratively linearizes the dynamics and employs a quadratic approximation of the costs around a nominal trajectory, \(\{\bar{\mathbf{x}}_{i},\bar{\mathbf{u}}_{i}\}_{i=1,\ldots,T}\), with \(\bar{\mathbf{x}}_{i}\in\mathbb{R}^{n},\bar{\mathbf{u}}_{i}\in\mathbb{R}^{u}\), and computes the optimal linear control law, \(\mathbf{u}_{t}=\pi_{t}(\mathbf{x}_{t})=L_{t}(\mathbf{x}_{t}-\bar{\mathbf{x}}_{t})+\mathbf{m}_{t}+\bar{\mathbf{u}}_{t}\), for the approximated system. The quantities \(L_{t}\) and \(\mathbf{m}_{t}\) are the control gain and offset, respectively, and are determined through a backward pass for the current reference trajectory. In the following iteration, the determined optimal control law is used to generate a new reference trajectory and the process is repeated until the controller converges.
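For the locally linearized dynamics and a quadratic cost, each iLQG iteration reduces at its core to a time-varying LQR backward pass; the sketch below shows that recursion with toy matrices, omitting the cost expansion around the nominal trajectory, the offset terms, regularization, and line search.

```python
# Time-varying LQR backward pass, the core recursion inside each iLQG iteration
# (linearization of f, quadratic cost expansion, offsets, and line search are omitted).
# Dynamics x_{t+1} ~ A x_t + B u_t, cost x'Qx + u'Ru, terminal cost x'Q_T x; all values are toy choices.
import numpy as np

T = 50
A = np.array([[1.0, 0.1], [0.0, 1.0]])      # illustrative linearized dynamics
B = np.array([[0.0], [0.1]])
Q = np.diag([1.0, 0.1])                      # state cost
R = np.array([[0.01]])                       # control cost
Q_T = np.diag([10.0, 1.0])                   # terminal cost

P = Q_T
gains = [None] * T
for t in reversed(range(T)):
    BtP = B.T @ P
    L = -np.linalg.solve(R + BtP @ B, BtP @ A)   # feedback gain L_t
    P = Q + A.T @ P @ (A + B @ L)                # Riccati recursion
    gains[t] = L

# forward rollout with the resulting control law u_t = L_t x_t
x = np.array([1.0, 0.0])
for t in range(T):
    u = gains[t] @ x
    x = A @ x + B @ u
print("final state:", x)
```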
### MCE reinforcement learning
The MCE reinforcement learning problem is to minimize the expected cost as in Section 2.2, while maximizing the conditional entropy of the applied stochastic policy \(\Pi_{t}(\mathbf{u}_{t}\mid\mathbf{x}_{t})\), i.e., to minimize \(\mathbb{E}[J(\mathbf{x}_{1:T},\mathbf{u}_{1:T})-\sum_{t=1}^{T-1}H(\Pi_{t}(\mathbf{u}_{t}\mid\mathbf{x}_{t}))]\). This formulation has been used to formulate reinforcement learning as a probabilistic inference problem (Kappen et al., 2012; Toussaint, 2009; Levine, 2018) and for inverse reinforcement learning (IRL) to model the stochasticity of the agent (e.g.,
Ziebart et al., 2008, 2010). The objective of IRL is formulated as maximizing the likelihood of given states and actions \(\{\mathbf{x}_{t},\mathbf{u}_{t}\}_{t=1,\ldots,N}\), induced by the maximum entropy policy \(\Pi_{t}(\mathbf{u}_{t}\,|\,\mathbf{x}_{t})\).
It can be shown that the resulting optimal policy is given by the distribution \(\Pi_{t}(\mathbf{u}_{t}\,|\,\mathbf{x}_{t})=\exp(Q_{t}(\mathbf{x}_{t},\mathbf{u}_{t})-V_{t}(\mathbf{x}_{t}))\), where \(Q_{t}\) is the soft Q-function at time \(t\), given by \(Q_{t}(\mathbf{x}_{t},\mathbf{u}_{t})=-c_{t}(\mathbf{x}_{t},\mathbf{u}_{t})-\mathbb{E}[V_{t+1}(\mathbf{x}_{t+1})]\), and \(V_{t}\) the normalization, i.e., \(V_{t}(\mathbf{x}_{t})=\log\int_{\mathbf{u}_{t}}\exp(Q_{t}(\mathbf{x}_{t},\mathbf{u}_{t}))\,\mathrm{d}\mathbf{u}_{t}\) (Gleave and Toyer, 2022). For general dynamics and reward functions, it is hard to compute the soft Q-function exactly. Approximate solutions have been derived using linearization (Levine and Koltun, 2012) or importance sampling (Boularias et al., 2011). For the case of linear dynamics and quadratic reward, the optimal policy is given by a Gaussian distribution \(\Pi_{t}(\mathbf{u}_{t}\,|\,\mathbf{x}_{t})=\mathcal{N}(\mathbf{u}_{t};L_{t}\mathbf{x}_{t},-L_{t})\), where \(L_{t}\) is the controller gain of the LQG controller (Levine and Koltun, 2013). This formulation can be extended to non-linear systems by using the control law in conjunction with the iLQG method (Section 2.2).
### Extended Kalman filter
Given the system defined in Section 2.1, the optimal filtering problem is to compute a belief distribution of the current state given past observations, i.e., \(p(\mathbf{x}_{t}\,|\,\mathbf{y}_{1:t-1})\). For linear-Gaussian systems, the solution is given in closed form and known as the Kalman filter (Kalman, 1960). In case of non-linear systems as in Section 2.1, a Gaussian approximation to the optimal belief can be computed using the extended Kalman filter via \(\mathbf{b}_{t+1}=f(\mathbf{b}_{t},\mathbf{u}_{t},0)+K_{t}(\mathbf{y}_{t}-h(\mathbf{b} _{t},0))\), where \(\mathbf{b}_{t}\in\mathbb{R}^{n}\) denotes the mean of the Gaussian belief \(p(\mathbf{x}_{t}\,|\mathbf{y}_{1},\ldots,\mathbf{y}_{t-1})\). The matrix \(K_{t}\) denotes the Kalman gain for time \(t\) and is computed by applying the Kalman filter to the system locally-linearized around the nominal trajectory obtained by the approximate optimal control law of iLQG (Section 2.2).
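A compact sketch of one extended Kalman filter step is given below, written in the standard predict/update form with numerical Jacobians and additive noise (a simplification of the general signal-dependent noise model above); the dynamics, observation function, and covariances are illustrative.

```python
# One extended Kalman filter step for x_{t+1} = f(x_t, u_t) + v_t, y_t = h(x_t) + w_t
# with additive Gaussian noise. Jacobians are taken numerically; f, h, V, W are toy choices.
import numpy as np

def jacobian(func, x, eps=1e-6):
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def f(x, u): return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (u - np.sin(x[0]))])
def h(x):    return np.array([np.sin(x[0])])

V = 0.01 * np.eye(2)      # process noise covariance
W = 0.05 * np.eye(1)      # observation noise covariance

def ekf_step(b, P, u, y):
    # predict
    Fx = jacobian(lambda x: f(x, u), b)
    b_pred, P_pred = f(b, u), Fx @ P @ Fx.T + V
    # update
    Hx = jacobian(h, b_pred)
    K = P_pred @ Hx.T @ np.linalg.inv(Hx @ P_pred @ Hx.T + W)   # Kalman gain
    return b_pred + K @ (y - h(b_pred)), (np.eye(2) - K @ Hx) @ P_pred

b, P = np.zeros(2), np.eye(2)
b, P = ekf_step(b, P, u=0.5, y=np.array([0.2]))
print(b, P)
```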
## 3 Probabilistic IOC
We consider an agent acting in a partially observable Markov decision process as introduced in Section 2.1. We assume that the agent acts at time \(t\) based on their belief \(\mathbf{b}_{t}\) about the state of the system \(\mathbf{x}_{t}\), which evolves according to \(\mathbf{b}_{t+1}=\beta_{t}(\mathbf{b}_{t},\mathbf{u}_{t},\mathbf{y}_{t})\). While the belief of the agent is defined commonly as a distribution over the true state, here we model \(\mathbf{b}_{t}\) as a finite-dimensional summary statistics of the distribution, i.e., \(\mathbf{b}_{t}\in\mathbb{R}^{b}\). The function \(\beta_{t}:\mathbb{R}^{b}\times\mathbb{R}^{u}\times\mathbb{R}^{m}\to\mathbb{R}^ {b}\) is called belief dynamics. We further assume that the agent follows a time-dependent policy \(\pi_{t}:\mathbb{R}^{b}\times\mathbb{R}^{j}\to\mathbb{R}^{u}\), i.e., \(\mathbf{u}_{t}=\pi_{t}(\mathbf{b}_{t},\mathbf{\xi}_{t})\), which can be stochastic with \(\mathbf{\xi}_{t}\sim\mathcal{N}(0,I)\). Note that both the belief dynamics and the policy can be time-dependent.
In the inverse optimal control problem, the goal is to estimate parameters \(\mathbf{\theta}\in\mathbb{R}^{p}\) of the agent's optimal control problem given the model and trajectory data. These parameters can include properties of the agent's cost function, the sensory and control systems of the agent, or the system's dynamics. We follow a probabilistic approach to inverse optimal control, i.e., we consider the likelihood function
\[p(\mathbf{x}_{1:T}\,|\,\mathbf{\theta})=p(\mathbf{x}_{1}\,|\,\mathbf{\theta})\prod_{t=1}^{T-1}p(\mathbf{x}_{t+1}\,|\,\mathbf{x}_{1:t},\mathbf{\theta}), \tag{1}\]
describing the probability of the observed trajectory data \(\mathbf{x}_{1:T}:=\{\mathbf{x}_{t}\}_{t=1,\ldots,T}\) given the parameters. For a set of trajectories we assume them to be independent given the parameters so that the likelihood factorizes into single trajectory likelihoods of the form in Equation (1). In this equation, generally, each state \(\mathbf{x}_{t+1}\) depends on all previous states \(\mathbf{x}_{1},\ldots,\mathbf{x}_{t}\), because the agent's internal noisy observations and control signals are not accessible to the researcher (Figure 1, right). Therefore, the Markov property does not hold from the researcher's perspective, rendering computation of the likelihood function intractable. To deal with this problem, we employ two key insights: First, the joint dynamical system of the states and the agent's belief is Markovian (Van Den Berg et al., 2011). Second, by keeping track of the distribution over the agent's belief, i.e., by performing belief tracking (Schultheis et al., 2021), we can iteratively compute the individual factors of the likelihood function in Equation (1).
We first introduce a general formulation of the IOC likelihood involving marginalization over the agent's internal beliefs in Section 3.1. Then, we show how to make the computations tractable by local linearization in Section 3.2. In Section 3.3, we provide details for suitable linearization points, which enables us to evaluate the approximate likelihood within a single forward pass.
### Likelihood formulation
We start by defining a joint dynamical system of states and beliefs (Van Den Berg et al., 2011) in which each depends only on the state and belief at the previous time step and the noises. For that, we insert the policy into the dynamics and the policy and observation function into the belief dynamics, yielding the equation
\[\begin{bmatrix}\mathbf{x}_{t+1}\\ \mathbf{b}_{t+1}\end{bmatrix} =\begin{bmatrix}f(\mathbf{x}_{t},\pi_{t}(\mathbf{b}_{t},\mathbf{\xi}_{t}),\mathbf{v}_{t})\\ \beta_{t}(\mathbf{b}_{t},\pi_{t}(\mathbf{b}_{t},\mathbf{\xi}_{t}),h(\mathbf{x}_{t},\mathbf{ w}_{t}))\end{bmatrix} \tag{2}\] \[=:g(\mathbf{x}_{t},\mathbf{b}_{t},\mathbf{v}_{t},\mathbf{w}_{t},\mathbf{\xi}_{t}). \tag{3}\]
For given values of \(\mathbf{x}_{t}\) and \(\mathbf{b}_{t}\), this equation defines the distribution \(p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\mid\mathbf{x}_{t},\mathbf{b}_{t})\), as \(\mathbf{v}_{t},\mathbf{w}_{t},\mathbf{\xi}_{t}\) are independent of \(\mathbf{x}_{t+1}\) and \(\mathbf{b}_{t+1}\). In Section 3.2 we will introduce an approximation via linearization, which leads to a closed-form expression for \(p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\mid\mathbf{x}_{t},\mathbf{b}_{t})\).
One can use this Markovian joint dynamical system to compute the likelihood factors for each time step (Schultheis et al., 2021). To this end, we first rewrite the individual likelihood terms \(p(\mathbf{x}_{t+1}\!\mid\!\mathbf{x}_{1:t})\) of Equation (1) by marginalizing over the agent's belief at each time step, i.e.,
\[p(\mathbf{x}_{t+1}\!\mid\!\mathbf{x}_{1:t})=\int p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\! \mid\!\mathbf{x}_{1:t})\,\mathrm{d}\mathbf{b}_{t+1}. \tag{4}\]
As the belief is an internal quantity of the agent and thus not observable to the researcher, we keep track of its distribution, \(p(\mathbf{b}_{t}\!\mid\!\mathbf{x}_{1:t})\). For this, we rewrite
\[p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{1:t})=\int p(\mathbf{ x}_{t+1},\mathbf{b}_{t+1},\mathbf{b}_{t}\mid\!\mathbf{x}_{1:t})\,\mathrm{d} \mathbf{b}_{t}\\ =\int p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{t},\mathbf{b} _{t})\,p(\mathbf{b}_{t}\!\mid\!\mathbf{x}_{1:t})\,\mathrm{d}\mathbf{b}_{t}, \tag{5}\]
where we have exploited the fact that the joint dynamical system of states and beliefs is Markovian. The distribution \(p(\mathbf{b}_{t}\!\mid\!\mathbf{x}_{1:t})\) acts as a summary of the past states and can be computed by conditioning on the current state, i.e.,
\[p(\mathbf{b}_{t}\!\mid\!\mathbf{x}_{1:t})=\frac{p(\mathbf{x}_{t},\mathbf{b}_{t}\!\mid \!\mathbf{x}_{1:t-1})}{p(\mathbf{x}_{t}\!\mid\!\mathbf{x}_{1:t-1})}. \tag{6}\]
After determining \(p(\mathbf{b}_{t}\!\mid\!\mathbf{x}_{1:t})\), we can propagate it through the joint dynamical system to arrive at the distribution \(p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{1:t})\). To obtain the belief distribution of the following time step, \(p(\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{1:t+1})\), we condition on the observed state \(\mathbf{x}_{t+1}\). To obtain the likelihood contribution, on the other hand, we marginalize out the \(\mathbf{b}_{t+1}\).
To summarize, starting with an initialization \(p(\mathbf{b}_{0})\), we can compute the individual terms \(p(\mathbf{x}_{t+1}\!\mid\!\mathbf{x}_{1:t})\) of the likelihood by executing Algorithm 1.
```
Input: Parameters \(\mathbf{\theta}\), Data \(\mathbf{x}_{1:T}\), Model \(f,h\)
Output: Approximate likelihood of parameters \(p(\mathbf{x}_{1:T}\!\mid\!\mathbf{\theta})\)
1: Determine the policy \(\pi\) using iLQG
2: Determine the belief dynamics \(\beta\) using the EKF
3: for \(t\) in \(\{1,\dots,T-1\}\) do
4:   Compute \(p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{1:t})\) using Equation (5)
5:   Update \(p(\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{1:t+1})\) using Equation (6)
6:   Obtain \(p(\mathbf{x}_{t+1}\!\mid\!\mathbf{x}_{1:t})\) using Equation (4)
7: end for
```
**Algorithm 1** Approximate likelihood computation
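Under the Gaussian approximation introduced in the next subsection, steps 4 to 6 of Algorithm 1 amount to marginalizing and conditioning a joint Gaussian over \((\mathbf{x}_{t+1},\mathbf{b}_{t+1})\); the sketch below shows these operations with made-up numbers standing in for the output of Equation (7).

```python
# Steps 4-6 of Algorithm 1 under a Gaussian approximation: marginalizing the joint over
# (x_{t+1}, b_{t+1}) gives the likelihood factor p(x_{t+1} | x_{1:t}), and conditioning on the
# observed x_{t+1} gives the updated belief distribution p(b_{t+1} | x_{1:t+1}).
# The joint mean and covariance below are made-up numbers standing in for Equation (7).
import numpy as np
from scipy.stats import multivariate_normal

n = 2  # state dimension; the belief mean has the same dimension here
mu = np.array([0.5, -0.2, 0.45, -0.25])            # joint mean over (x_{t+1}, b_{t+1})
Sigma = np.array([[0.10, 0.02, 0.08, 0.01],
                  [0.02, 0.12, 0.01, 0.09],
                  [0.08, 0.01, 0.11, 0.02],
                  [0.01, 0.09, 0.02, 0.13]])

x_next = np.array([0.6, -0.1])                      # observed state at t+1

# partition the joint into x- and b-blocks
mu_x, mu_b = mu[:n], mu[n:]
S_xx, S_xb = Sigma[:n, :n], Sigma[:n, n:]
S_bx, S_bb = Sigma[n:, :n], Sigma[n:, n:]

# Equation (4): likelihood factor = marginal over x_{t+1}
log_lik_t = multivariate_normal(mu_x, S_xx).logpdf(x_next)

# Equation (6): condition the belief block on the observed x_{t+1}
gain = S_bx @ np.linalg.inv(S_xx)
mu_b_post = mu_b + gain @ (x_next - mu_x)
S_bb_post = S_bb - gain @ S_xb

print(log_lik_t, mu_b_post, S_bb_post)
```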
### Tractable likelihood via linearization
While the marginalization and propagating operations listed in the previous section can be done in closed form for linear-Gaussian systems, this is no longer feasible for non-linear systems. Therefore, we follow the approach of local linearization used in iLQG (Section 2.2) and the extended Kalman filter (Section 2.4). For the belief statistics, we consider the mean of the agent's belief, i.e., \(\mathbf{b}_{t}=\mathbb{E}[\mathbf{x}_{t}\!\mid\!\mathbf{y}_{1},\dots,\mathbf{y}_{t-1}]\) and initialize the distribution for the first time step as a Gaussian, \(p(\mathbf{b}_{1})=\mathcal{N}(\mu_{1}^{(b)},\Sigma_{1}^{(b)})\). We then approximate the distribution \(p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{t},\mathbf{b}_{t})\) as a Gaussian by applying a first-order Taylor expansion of \(g\).
In order to obtain a closed-form expression for \(g\), which we can linearize, we model the agent's belief dynamics using the extended Kalman filter (Section 2.4) and its policy using iLQG (Section 2.2), as in the partially observable version of iLQG (Li and Todorov, 2007). This choice leads to an affine control law and belief dynamics given \(\mathbf{b}_{t}\), making linearization of \(p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{t},\mathbf{b}_{t})\) straightforward. To allow for additional stochasticity in the agent's policy, we follow the common formulation of maximum causal entropy (MCE) reinforcement learning (Section 2.3). For the linearized dynamics, the MCE policy is - as for the fully-observable case (Section 2.3) - given by a Gaussian distribution, so that \(\pi_{t}(\mathbf{b}_{t},\mathbf{\xi}_{t})=L_{t}(\mathbf{b}_{t}-\bar{\mathbf{x}}_{t})+\mathbf{m}_{t}+\bar{\mathbf{u}}_{t}-\tilde{L}_{t}\mathbf{\xi}_{t}\), with \(\tilde{L}_{t}\) the Cholesky decomposition of \(L_{t}\), and can be marginalized out in closed form.
The approximations we have introduced allow us to solve the integral in Equation (5) in closed form by applying standard equations for linear transformations of Gaussians, resulting in
\[p(\mathbf{x}_{t+1},\mathbf{b}_{t+1}\!\mid\!\mathbf{x}_{1:t})\approx\mathcal{N}\left(\mu_ {t},\Sigma_{t}\right) \tag{7}\]
with
\[\mu_{t} =g(\mathbf{x}_{t},\mu_{t}^{(b)},0,0,0),\] \[\Sigma_{t} =J_{\mathbf{b}}\Sigma_{t}^{(b)}J_{\mathbf{b}}^{T}+J_{\mathbf{v}}J_{\bm {v}}^{T}+J_{\mathbf{w}}J_{\mathbf{w}}^{T}+J_{\mathbf{\xi}}J_{\mathbf{\xi}}^{T},\]
where \(J_{\bullet}\) denotes the Jacobian of \(g\) w.r.t. \(\bullet\), evaluated at \((\mathbf{x}_{t},\mu_{t}^{(b)},0,0,0)\). Under this Gaussian approximation, both remaining operations of Algorithm 1 can also be performed in closed form. A more detailed derivation and representation of these formulas can be found in Appendix B. If the agent has full observations of the system's state, the inverse optimal control problem is simplified significantly. The derivations for this special case are shown in Appendix C. Details about the implementation are provided in Appendix D.
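The propagation in Equation (7) can be written compactly with numerical Jacobians; the joint dynamics \(g\) below is a toy scalar stand-in for the iLQG/EKF-derived one, so all coefficients, the belief variance, and the linearization point are illustrative.

```python
# Gaussian propagation of Equation (7): linearize the joint dynamics g around
# (x_t, mean of b_t, zero noise) and push the belief uncertainty and the three noise
# sources through the Jacobians. The toy g below stands in for the iLQG/EKF-derived one.
import numpy as np

def g(x, b, v, w, xi):
    # toy joint dynamics: scalar state, scalar belief mean, feedback control on the belief
    u = -0.8 * b + 0.1 * xi                        # stochastic (MCE-style) policy
    x_next = x + 0.1 * u + 0.05 * v                # state dynamics
    y = x + 0.1 * w                                # noisy observation
    b_next = b + 0.1 * u + 0.3 * (y - b)           # EKF-style belief update
    return np.array([x_next, b_next])

def jac(func, z0, i, eps=1e-6):
    # numerical Jacobian of func w.r.t. its i-th argument, evaluated at z0 (tuple of scalars)
    base = np.array(func(*z0))
    z = list(z0); z[i] = z0[i] + eps
    return ((np.array(func(*z)) - base) / eps).reshape(-1, 1)

x_t, mu_b, Sigma_b = 0.4, 0.35, np.array([[0.02]])
z0 = (x_t, mu_b, 0.0, 0.0, 0.0)

mu_joint = g(*z0)                                  # mean of p(x_{t+1}, b_{t+1} | x_{1:t})
J_b, J_v, J_w, J_xi = [jac(g, z0, i) for i in (1, 2, 3, 4)]
Sigma_joint = J_b @ Sigma_b @ J_b.T + J_v @ J_v.T + J_w @ J_w.T + J_xi @ J_xi.T

print(mu_joint, Sigma_joint)
```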
### Data-based linearization
The described approach to evaluate the likelihood requires solving the optimal filtering and control problem for a given set of parameters. When iteratively maximizing the likelihood, we would have to solve both problems in every iteration, making the approach computationally expensive. We can make the method more efficient by using the insight that in the IOC problem, we are given a trajectory \(\mathbf{x}_{1:T}\). Instead of starting with a randomly initialized nominal trajectory and iterating between computation of the locally optimal control law and linearizing again, we can simply linearize the dynamics once around the given trajectory and keep this linearization fixed. We then need to perform only one backward-pass to compute an approximately optimal control law given the current parameters, and a forward pass to compute an approximately optimal filter. This, in particular, allows efficient computation of the gradient of the likelihood function. As we assume the actions to be unobservable, but they are needed for the linearization, we compute estimates of the actions by minimizing the squared difference of the noiseless state estimates and the actual states. Note that these estimated actions are only used for the linearization, but are not used as observed actions in the IOC likelihood itself (see Appendix E).
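A small sketch of this action-estimation step is shown below: for each observed transition, the control is chosen to minimize the squared difference between the noiseless prediction \(f(\mathbf{x}_{t},\mathbf{u},0)\) and the observed next state; the dynamics and the short trajectory are illustrative.

```python
# Estimating the unobserved controls used only for linearization: for each observed transition,
# pick u_t minimizing || f(x_t, u, 0) - x_{t+1} ||^2. Dynamics f and the data are illustrative.
import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # noiseless toy dynamics (two states, one control)
    return np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * (u[0] - np.sin(x[0]))])

x_traj = np.array([[0.0, 0.0], [0.0, 0.05], [0.005, 0.12]])   # observed states (T = 3)

u_hat = []
for t in range(len(x_traj) - 1):
    obj = lambda u, t=t: np.sum((f(x_traj[t], u) - x_traj[t + 1]) ** 2)
    u_hat.append(minimize(obj, x0=np.zeros(1)).x)
print(np.array(u_hat))
```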
## 4 Experiments
We evaluated our proposed method on simulated data of two classic control tasks (Pendulum and CartPole) and two behavioral human tasks (reaching and navigation). To evaluate the accuracy of the parameter estimates obtained by our method and to compare it against a baseline, we computed absolute relative errors per parameter \(|(\theta-\hat{\theta})/\theta|\). This makes averages across parameters on different scales more interpretable compared to other metrics like root mean squared errors. For each task, we simulated 100 sets of parameters from a uniform distribution in logarithmic space. For each set of parameters, we simulated 50 trajectories. We then maximized the log likelihood using gradient-based optimization with automatic differentiation (L-BFGS algorithm; Zhu et al., 1997). See Appendix G for a summary of the hyperparameters of our experiments.
All tasks we consider have four free parameters: cost of actions \(c_{a}\), cost of velocity at the final time step \(c_{v}\), motor noise \(\sigma_{m}\), and observation noise \(\sigma_{o}\). In the fully observable case, we leave out the observation noise parameter and only infer the three remaining parameters. For concrete definitions of the parameters in each specific task, see Appendix F.
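The evaluation metric and the fitting loop are straightforward to state in code; in the sketch below a placeholder quadratic objective stands in for the approximate negative log-likelihood of Algorithm 1, and the parameter ranges are illustrative.

```python
# Evaluation and fitting loop used in the experiments: sample parameters in log space,
# maximize the (approximate) log likelihood with L-BFGS, and report absolute relative errors.
# The quadratic `neg_log_lik` is only a placeholder for the likelihood of Algorithm 1.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
theta_true = np.exp(rng.uniform(np.log(0.1), np.log(10.0), size=4))   # c_a, c_v, sigma_m, sigma_o

def neg_log_lik(log_theta):
    # placeholder objective with its minimum at the true parameters (in log space)
    return np.sum((log_theta - np.log(theta_true)) ** 2)

res = minimize(neg_log_lik, x0=np.zeros(4), method="L-BFGS-B")
theta_hat = np.exp(res.x)

rel_err = np.abs((theta_true - theta_hat) / theta_true)   # |(theta - theta_hat) / theta| per parameter
print(theta_hat, rel_err)
```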
Figure 2: **IOC likelihood for the non-linear reaching task.****(a)** Simulated reaching trajectories for eight targets. Increasing the cost of actions and the motor noise has an effect on the trajectories, since perfectly reaching the target becomes less important to the agent and variability increases. **(b)** IOC log likelihood for two of the model parameters, action costs \(c_{a}\) and motor noise \(\sigma_{m}\). The likelihood has its maximum (pink cross) close to the ground truth parameter values (black dot). **(c)** Simulated trajectories using the MLEs from (b). The simulations are visually indistinguishable from the ground truth data.
### Baseline method
For a comparison to previously proposed methods, we applied a baseline method based on the maximum causal entropy (MCE) approach (Ziebart et al., 2010), as these formulations have been successfully used for IOC in non-linear stochastic systems. Since, to the best of our knowledge, no past method can be directly applied in the setting we consider (non-linear and stochastic dynamics, unknown controls, partial observability, finite horizon), we choose a straightforward implementation of the MCE formulation for this case. The tasks we consider can be well solved by linearizing the dynamics locally, so an accurate approximation of the optimal MCE controller is given by the optimal MCE controller for the linearized dynamics (see Section 2.3). An estimate of the parameters is then obtained by maximizing the likelihood of the approximate maximum entropy policy for the given set of states and controls. To apply this baseline to the setting where control signals are missing, we use the estimates of the controls determined as in our proposed method for the data-based linearization (Section 3.3). As past IOC methods, with a few exceptions limited to specific tasks, do not have an explicit model of partial observability, we follow the usual formulation of the policy acting directly on the states. To show that this approach constitutes a suitable baseline, in Appendix H.3, we provide results for the case where the true control signals are known and there is no partial observability.
### Reaching task
We evaluate the method on a manual reaching task with a non-linear biomechanical model of a two-link arm. The agent's goal is to move its arm towards a target in the horizontal plane by controlling its two joints. For a more detailed description, see Appendix F.2. Note that the cost function is non-linear in states because the positions are a non-linear function of the joint angles that comprise the state of the system. We use a fully observable version of the task (Todorov and Li, 2005) and a version in which the agent receives noisy observations (Li and Todorov, 2007). This model has been applied to reaching movements in the sensorimotor neuroscience literature (e.g., Nagengast et al., 2009; Knill et al., 2011). Figure 1(a) shows simulations from the model using iLQG with two different parameter settings. We evaluated the likelihood function for a grid of two of the model parameters (Figure 1(b)) to illustrate that it has well-defined maxima close to the true parameter values. In this example, simulated data using the maximum likelihood estimates look indistinguishable from the ground truth data (Figure 1(c)).
In Figure 3 we present maximum likelihood parameter estimates and true values for repeated runs with different random parameter settings. One can observe that the parameter estimates of our method closely align with the true parameter values, showing that our method can successfully recover the parameters from data. The baseline method, in contrast, shows considerably worse performance, in particular for estimating noises. Estimates for the fully observable case are provided in Appendix H.2. To quantify the accuracy of the maximum likelihood estimates, we computed the absolute relative errors. The results are shown separately for the fully observable and partially observable cases in Figure 4. The median absolute relative errors of our method were 0.11, while they were 0.93 for the baseline. The influence of missing control signals and of the lacking explicit observation model in the baseline can be observed by comparing the results to the fully-observable case and the case of given control signals in Appendix H.2 and Appendix H.3.
### Navigation task
In the navigation task, we consider an agent navigating to a target under non-linear dynamics while receiving noisy observations from a non-linear observation model. To reach the target, the agent can control the angular velocity of their heading direction and the acceleration with which they move forward. The agent observes noisy versions of the distance to the target and the target's bearing angle. We provide more details about the experiment in Appendix F.3.
Maximum likelihood parameter estimates for the navigation task are shown for the partially observable case in Figure S4 and for the fully observable case in Figure S8. As for the reaching task, our method provides parameter estimates close to the true ones, while the estimates of the baseline deviate for a large number of trials. Median absolute relative errors of our method were 0.31, while they were 1.99 for the baseline (Figure 4).
### Classic control tasks
Lastly, we evaluate our method on two classic control tasks (Pendulum and Cart Pole) based on the implementations in the gym library (Brockman et al., 2016). Because these tasks are neither stochastic nor partially observable in their standard formulations, we introduce noise on the dynamics and turn them into partially-observed problems by defining a stochastic observation function (see Appendix F.1). In Appendix H we show the parameter estimates for the Pendulum (Figure S2) and for the Cart Pole (Figure S3) for the partially ob
servable case, while Figure S6 and Figure S7 show the fully observable case, respectively. One can observe that the results match the ones of the reaching and navigation task, showing that our method provides accurate estimates of the parameters. Median absolute relative errors of our method were 0.12 and 0.41, while for the baseline they were 2.21 and 3.82 (Figure 4).
## 5 Conclusion
In this paper, we introduced a new method for inverse optimal control for systems with stochastic dynamics, partial observability, and missing control signals. We followed a probabilistic formulation of the problem, where the goal is formulated as maximizing the likelihood of the observed states given the parameters. As the exact evaluation of the likelihood for a general non-linear model is intractable, we developed an efficient approximation of the likelihood by linearizing the system locally around the given trajectories, as in popular approaches such as the extended Kalman filter or iLQG. By maintaining a Gaussian distribution that tracks the agent's state estimate, the proposed method is able to evaluate an approximate likelihood in closed form within a single forward pass.
Besides offering an efficient way to evaluate the likelihood, our proposed formulation is able to incorporate multiple sources of the stochasticity of the agent through an explicit model of the partial observability and by modelling control via a maximum causal entropy (MCE) policy. Our method thereby reconciles the theory of past MCE IOC algorithms (e.g., Ziebart et al., 2010) and approaches where the agent's stochasticity stems from an explicit stochastic observation model (Schultheis et al., 2021).
We have applied our method to two stochastic variations of classical control tasks, the pendulum and cart pole, and to two human behavioral tasks, a reaching and navigation tasks. In the comparison to a MCE baseline, for which missing control signals need to be estimated, we have found our method to achieve lower estimation errors across all evaluated tasks. Further, it successfully inferred noise parameters of the system, which was not possible with the baseline.
The limitations of our method are mainly due to the linearization of the dynamical system and the Gaussian approximations involved in the belief tracking formulation of the likelihood function. In more complex scenarios with belief distributions that are not well approximated by a Gaussian, e.g., multimodal beliefs, the method is likely to produce inaccurate results. This problem could be addressed by replacing the closed-form Gaussian approximation of the belief by particle-based methods (Doucet et al., 2001). Further, we focused on tasks which could be solved well by applying controllers based on linearization and Gaussian approximation (iLQG and EKF), motivated by their popularity in applications in cognitive science and neuroscience. High-dimensional problems that cannot be solved forward using iLQG, in contrast, are probably not directly solvable using our proposed method. While, in principle, our method is also applicable using other forward control methods that compute differentiable policies, it is unclear whether linearizing these policies leads to accurate approximate likelihoods and parameter estimates.
A further limitation of our method is that it requires parametric models of the dynamics and noise structure.
Figure 3: **Maximum likelihood estimates for reaching task** True parameter values plotted against the maximum likelihood parameter estimates for the partially observable reaching task. Top row: our method, bottom row: MCE baseline. The columns contain the four different model parameters (action cost \(c_{a}\), velocity cost \(c_{v}\), motor noise \(\sigma_{m}\), observation noise \(\sigma_{o}\)).
While single missing parameters can be determined using our method, in the case of completely unknown dynamics a model-free approach to IOC would be more suitable.
Lastly, while we have shown that inference of few parameters is feasible, the results probably do not scale to a large number of parameters. One reason for this is that optimization in a high-dimensional non-linear space becomes difficult, and one can potentially get stuck in local minima. This problem could be relieved by using more advanced optimization methods. A further, more fundamental, concern with a large number of parameters is that parameters are likely to become not unambiguously identifiable and there is no unique solution. However, in many scientific fields, knowledge about the structure and parametric models describing the agent's uncertainty and internal model are available or measurable, allowing our method to be used successfully. Moreover, our probabilistic approach with a closed-form likelihood opens up the possibility of using Bayesian methods to investigate the identifiability of model parameters (Acerbi et al., 2014).
Our proposed method provides a tool for researchers interested in modeling sequential behavior, e.g., in sensorimotor domains, allowing to infer an agent's subjective costs and internal uncertainties. This will enable answering novel scientific questions about how these quantities are affected by different experimental conditions, deviate from intended task goals and provided task instructions, or how they vary between individuals. This is particularly relevant to a computational understanding of naturalistic behavior (Krakauer et al., 2017; Cisek and Pastor-Bernier, 2014; Miller et al., 2022), for which subjective utilities are mostly unknown.
## Acknowledgements
The authors gratefully acknowledge the computing time provided to them on the high-performance computer Lichtenberg at the NHR Centers NHR4CES at TU Darmstadt, and financial support by the project "Whitebox" funded by the Priority Program LOEWE of the Hessian Ministry of Higher Education, Science, Research and Art.
|
2301.11956 | On the Connection Between MPNN and Graph Transformer | Graph Transformer (GT) recently has emerged as a new paradigm of graph
learning algorithms, outperforming the previously popular Message Passing
Neural Network (MPNN) on multiple benchmarks. Previous work (Kim et al., 2022)
shows that with proper position embedding, GT can approximate MPNN arbitrarily
well, implying that GT is at least as powerful as MPNN. In this paper, we study
the inverse connection and show that MPNN with virtual node (VN), a commonly
used heuristic with little theoretical understanding, is powerful enough to
arbitrarily approximate the self-attention layer of GT.
In particular, we first show that if we consider one type of linear
transformer, the so-called Performer/Linear Transformer (Choromanski et al.,
2020; Katharopoulos et al., 2020), then MPNN + VN with only O(1) depth and O(1)
width can approximate a self-attention layer in Performer/Linear Transformer.
Next, via a connection between MPNN + VN and DeepSets, we prove the MPNN + VN
with O(n^d) width and O(1) depth can approximate the self-attention layer
arbitrarily well, where d is the input feature dimension. Lastly, under some
assumptions, we provide an explicit construction of MPNN + VN with O(1) width
and O(n) depth approximating the self-attention layer in GT arbitrarily well.
On the empirical side, we demonstrate that 1) MPNN + VN is a surprisingly
strong baseline, outperforming GT on the recently proposed Long Range Graph
Benchmark (LRGB) dataset, 2) our MPNN + VN improves over early implementation
on a wide range of OGB datasets and 3) MPNN + VN outperforms Linear Transformer
and MPNN on the climate modeling task. | Chen Cai, Truong Son Hy, Rose Yu, Yusu Wang | 2023-01-27T19:15:31Z | http://arxiv.org/abs/2301.11956v4 | # On the Connection Between MPNN and Graph Transformer
###### Abstract
Graph Transformer (GT) recently has emerged as a new paradigm of graph learning algorithms, outperforming the previously popular Message Passing Neural Network (MPNN) on multiple benchmarks. Previous work (Kim et al., 2022) shows that with proper position embedding, GT can approximate MPNN arbitrarily well, implying that GT is at least as powerful as MPNN. In this paper, we study the inverse connection and show that MPNN with virtual node (VN), a commonly used heuristic with little theoretical understanding, is powerful enough to arbitrarily approximate the self-attention layer of GT.
In particular, we first show that if we consider one type of linear transformer, the so-called Performer/Linear Transformer (Choromanski et al., 2020; Katharopoulos et al., 2020), then MPNN + VN with only \(\mathcal{O}(1)\) depth and \(\mathcal{O}(1)\) width can approximate a self-attention layer in Performer/Linear Transformer. Next, via a connection between MPNN + VN and DeepSets, we prove the MPNN + VN with \(\mathcal{O}(n^{d})\) width and \(\mathcal{O}(1)\) depth can approximate the self-attention layer arbitrarily well, where \(d\) is the input feature dimension. Lastly, under some assumptions, we provide an explicit construction of MPNN + VN with \(\mathcal{O}(1)\) width and \(\mathcal{O}(n)\) depth approximating the self-attention layer in GT arbitrarily well. On the empirical side, we demonstrate that 1) MPNN + VN is a surprisingly strong baseline, outperforming GT on the recently proposed Long Range Graph Benchmark (LRGB) dataset, 2) our MPNN + VN improves over early implementation on a wide range of OGB datasets and 3) MPNN + VN outperforms Linear Transformer and MPNN on the climate modeling task.
## 1 Introduction
MPNN (Message Passing Neural Network) (Gilmer et al., 2017) has been the leading architecture for processing graph-structured data. Recently, transformers in natural language processing (Vaswani et al., 2017; Kalyan et al., 2021) and vision (d'Ascoli et al., 2021; Han et al., 2022) have extended their success to the domain of graphs. There have been several pieces of work (Ying et al., 2021; Wu et al., 2021; Kreuzer et al., 2021; Rampasek et al., 2022; Kim et al., 2022) showing that with careful position embedding (Lim et al., 2022), graph transformers (GT) can achieve compelling empirical performances on large-scale datasets and start to challenge the dominance of MPNN.
MPNN imposes a sparsity pattern on the computation graph and therefore enjoys linear complexity. It however suffers from well-known over-smoothing (Li et al., 2018; Oono & Suzuki, 2019; Cai & Wang, 2020) and over-squashing (Alon & Yahav, 2020; Topping et al., 2021) issues, limiting its usage on long-range modeling tasks where the label of one node depends on features of nodes far away. GT relies purely on position embedding to encode the graph structure and uses vanilla transformers on top. 1 It models all pairwise interactions directly in one layer, making it computationally more expensive. Compared to MPNN, GT shows promising results on tasks where modeling long-range interaction is the key, but the quadratic complexity of self-attention in GT
limits its usage to graphs of medium size. Scaling up GT to large graphs remains an active research area (Wu et al., 2022).

Figure 1: MPNN + VN and Graph Transformers.
Theoretically, it has been shown that graph transformers can be powerful graph learners (Kim et al., 2022), i.e., graph transformers with appropriate choice of token embeddings have the capacity of approximating linear permutation equivariant basis, and therefore can approximate 2-IGN (Invariant Graph Network), a powerful architecture that is at least as expressive as MPNN (Maron et al., 2018). This raises an important question that _whether GT is strictly more powerful than MPNN_. Can we approximate GT with MPNN?
One common intuition of the advantage of GT over MPNN is its ability to model long-range interaction more effectively. However, from the MPNN side, one can resort to a simple trick to escape locality constraints for effective long-range modeling: the use of an additional _virtual node (VN)_ that connects to all input graph nodes. On a high level, MPNN + VN augments the existing graph with one virtual node, which acts like global memory for every node exchanging messages with other nodes. Empirically this simple trick has been observed to improve the MPNN and has been widely adopted (Gilmer et al., 2017; Hu et al., 2020, 2021) since the early beginning of MPNN (Gilmer et al., 2017; Battaglia et al., 2018). However, there is very little theoretical study of MPNN + VN (Hwang et al., 2022).
In this work, we study the theoretical property of MPNN + VN, and its connection to GT. We systematically study the representation power of MPNN + VN, both for certain approximate self-attention and for the full self-attention layer, and provide a depth-width trade-off, summarized in Table 1. In particular,
* With \(\mathcal{O}(1)\) depth and \(\mathcal{O}(1)\) width, MPNN + VN can approximate one self-attention layer of Performer (Choromanski et al., 2020) and Linear Transformer (Katharopoulos et al., 2020), a type of linear transformers (Tay et al., 2020).
* Via a link between MPNN + VN and DeepSets (Zaheer et al., 2017), we prove MPNN + VN with \(\mathcal{O}(1)\) depth and \(\mathcal{O}(n^{d})\) width (\(d\) is the input feature dimension) is permutation equivariant universal, implying it can approximate the self-attention layer and even full transformers.
* Under certain assumptions on node features, we prove an explicit construction of \(\mathcal{O}(n)\) depth \(\mathcal{O}(1)\) width MPNN + VN approximating 1 self-attention layer arbitrarily well on graphs of size \(n\). Unfortunately, the assumptions on node features are rather strong, and whether we can alleviate them will be an interesting future direction to explore.
* Empirically, we show 1) that MPNN + VN works surprisingly well on the recently proposed LRGB (long-range graph benchmarks) datasets (Dwivedi et al., 2022), which arguably require long-range interaction reasoning to achieve strong performance 2) our implementation of MPNN + VN is able to further improve the early implementation of MPNN + VN on OGB datasets and 3) MPNN + VN outperforms Linear Transformer (Katharopoulos et al., 2020) and MPNN on the climate modeling task.
## 2 Related Work
**Virtual node in MPNN.** The virtual node augments the graph with an additional node to facilitate the information exchange among all pairs of nodes. It is a heuristic proposed in (Gilmer et al., 2017) and has been observed to improve the performance in different tasks (Hu et al., 2021, 2020). Surprisingly, its theoretical properties have received little study. To the best of our knowledge, only a recent paper (Hwang et al., 2022) analyzed the role of the virtual node in the link prediction setting in terms of 1) expressiveness of the learned link representation and 2) the potential impact on under-reaching and over-smoothing.
**Graph transformer.** Because of the great successes of Transformers in natural language processing (NLP) (Vaswani et al., 2017; Wolf et al., 2020) and recently in computer vision (Dosovitskiy et al., 2020; d'Ascoli et al., 2021; Liu et al., 2021), there is great interest in extending transformers for graphs. One common belief of advantage of graph transformer over MPNN is its capacity in capturing long-range interactions while alleviating over-smoothing (Li et al., 2018; Oono and Suzuki, 2019; Cai and Wang, 2020) and over-squashing in MPNN (Alon and Yahav, 2020; Topping et al., 2021).
Fully-connected Graph transformer (Dwivedi and Bresson, 2020) was introduced with eigenvectors of the graph Laplacian as the node positional encoding (PE). Various follow-up works proposed different ways of PE to improve GT, ranging from an invariant aggregation of the Laplacian's eigenvectors in SAN (Kreuzer et al., 2021) and pair-wise graph distances in Graphormer (Ying et al., 2021) to relative PE derived from diffusion kernels in GraphiT (Mialon et al., 2021) and, recently, Sign and Basis Net (Lim et al., 2022) with a principled way of handling sign and basis invariance. Other lines of research in GT include combining MPNN and GT (Wu et al., 2021; Rampasek et al., 2022), encoding the substructures (Chen et al., 2022), and efficient graph transformers for large graphs (Wu et al., 2022).
## 3 Preliminaries
We denote \(\mathbf{X}\in\mathbb{R}^{n\times d}\) the concatenation of graph node features and positional encodings, where node \(i\) has feature \(\mathbf{x}_{i}\in\mathbb{R}^{d}\). When necessary, we use \(\mathbf{x}_{j}^{(l)}\) to denote the node \(j\)'s feature at depth \(l\). Let \(\mathcal{M}\) be the space of multisets of vectors in \(\mathbb{R}^{d}\). We use \(\mathcal{X}\subseteq\mathbb{R}^{n\times d}\) to denote the space of node features and the \(\mathcal{X}_{i}\) be the projection of \(\mathcal{X}\) on \(i\)-th coordinate. \(\|\cdot\|\) denotes the 2-norm. \([\mathbf{x},\mathbf{y},\mathbf{z}]\) denotes the concatenation of \(\mathbf{x},\mathbf{y},\mathbf{z}\). \([n]\) stands for the set \(\{1,2,...,n\}\).
**Definition 3.1** (attention).: We denote key and query matrix as \(\mathbf{W}_{K},\mathbf{W}_{Q}\in\mathbb{R}^{d\times d^{\prime}}\), and value matrix as \(\mathbf{W}_{V}\in\mathbb{R}^{d\times d}\)2. Attention score between two vectors \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{d\times 1}\) is defined as \(\alpha(\mathbf{u},\mathbf{v})=\text{softmax}(\mathbf{u}^{T}\mathbf{W}_{Q}(\mathbf{W}_{K})^{T}\mathbf{ v})\). We denote \(\mathcal{A}\) as the space of attention \(\alpha\) for different \(\mathbf{W}_{Q},\mathbf{W}_{K},\mathbf{W}_{V}\). We also define unnormalized attention score \(\alpha^{\prime}(\cdot,\cdot)\) to be \(\alpha^{\prime}(\mathbf{u},\mathbf{v})=\mathbf{u}^{T}\mathbf{W}_{Q}(\mathbf{W}_{K})^{T}\mathbf{v}\). Self attention layer is a matrix function \(\mathbf{L}:\mathbb{R}^{n\times d}\rightarrow\mathbb{R}^{n\times d}\) of the following form: \(\mathbf{L}(\mathbf{X})=\text{softmax}(\mathbf{X}\mathbf{W}_{Q}(\mathbf{X}\mathbf{W}_{K})^{T})\mathbf{X} \mathbf{W}_{V}\).
Footnote 2: For simplicity, we assume the output dimension of self-attention is the same as the input dimension. All theoretical results can be extended to the case where the output dimension is different from \(d\).
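For concreteness, the following is a minimal NumPy sketch of the self-attention layer \(\mathbf{L}\) from Definition 3.1; the shapes follow the definition, while the random toy inputs are only for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_layer(X, W_Q, W_K, W_V):
    """L(X) = softmax(X W_Q (X W_K)^T) X W_V for X of shape (n, d)."""
    scores = (X @ W_Q) @ (X @ W_K).T       # (n, n) unnormalized attention scores alpha'
    return softmax(scores) @ (X @ W_V)     # (n, d)

# toy usage
rng = np.random.default_rng(0)
n, d, d_prime = 5, 4, 3
X = rng.normal(size=(n, d))
W_Q, W_K = rng.normal(size=(d, d_prime)), rng.normal(size=(d, d_prime))
W_V = rng.normal(size=(d, d))
out = self_attention_layer(X, W_Q, W_K, W_V)
```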
### MPNN Layer
**Definition 3.2** (MPNN layer (Gilmer et al., 2017)).: An MPNN layer on a graph \(G\) with node features \(\mathbf{x}^{(k)}\) at \(k\)-th layer and edge features \(\mathbf{e}\) is of the following form
\[\mathbf{x}_{i}^{(k)}=\gamma^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\tau_{j\in\mathcal{N}(i )}\phi^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j}^{(k-1)},\mathbf{e}_{j,i}\right)\right)\]
Here \(\gamma:\mathbb{R}^{d}\times\mathbb{R}^{d^{\prime}}\rightarrow\mathbb{R}^{d}\) is update function, \(\phi:\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R}^{d_{e}}\rightarrow \mathbb{R}^{d^{\prime}}\) is message function where \(d_{e}\) is the edge feature dimension, \(\tau:\mathcal{M}\rightarrow\mathbb{R}^{d}\) is permutation invariant aggregation function and \(\mathcal{N}(i)\) is the neighbors of node \(i\) in \(G\). Update/message/aggregation functions are usually parametrized by neural networks. For graphs of different types of edges and nodes, one can further extend MPNN to the heterogeneous setting. We use \(1,...,n\) to index graph nodes and vn to denote the virtual node.
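As a concrete (and deliberately simple) instance of Definition 3.2, the sketch below uses linear update and message functions, sum aggregation, and no edge features; these specific choices and the adjacency-list representation are our own illustrative assumptions, not the paper's.

```python
import numpy as np

def mpnn_layer(X, neighbors, W_self, W_msg):
    """x_i' = W_self x_i + sum_{j in N(i)} W_msg x_j
    (gamma = sum of two linear maps, phi = linear map of the neighbor feature, tau = sum)."""
    n, d = X.shape
    out = np.zeros_like(X)
    for i in range(n):
        agg = np.zeros(d)
        for j in neighbors[i]:
            agg += W_msg @ X[j]          # message from neighbor j, aggregated by summation
        out[i] = W_self @ X[i] + agg     # node update
    return out

# toy usage on a path graph 0 - 1 - 2
X = np.eye(3)
neighbors = [[1], [0, 2], [1]]
W = np.eye(3)
H = mpnn_layer(X, neighbors, W_self=W, W_msg=W)
```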
**Definition 3.3** (heterogeneous MPNN + VN layer).: The heterogeneous MPNN + VN layer operates on two types of nodes: 1) virtual node and 2) graph nodes, denoted as vn and gn, and three types of edges: 1) vn-gn edge and 2) gn-gn edges and 3) gn-vn edges. It has the following form
\[\mathbf{x}_{\text{vn}}^{(k)} =\gamma_{\text{vn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\tau_{j\in[n]} \phi_{\text{vn-gn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j}^{(k-1)},\mathbf{e}_{ j,i}\right)\right) \tag{1}\]
for the virtual node, and
\[\mathbf{x}_{i}^{(k)}=\gamma_{\text{gn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\tau_{j\in\mathcal{N}_{1}(i)}\phi_{\text{gn-vn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j}^{(k-1)},\mathbf{e}_{j,i}\right),\tau_{j\in\mathcal{N}_{2}(i)}\phi_{\text{gn-gn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j}^{(k-1)},\mathbf{e}_{j,i}\right)\right) \tag{2}\]
for graph node. Here \(\mathcal{N}_{1}(i)\) for graph node \(i\) is the virtual node and \(\mathcal{N}_{2}(i)\) is the set of neighboring graph nodes.
Our proof of approximating the self-attention layer \(\mathbf{L}\) with MPNN layers does not use the graph topology. Next, we introduce a simplified heterogeneous MPNN + VN layer, which will be used in the proof. It is easy to see that setting \(\phi_{\text{gn-gn}}^{(k)}\) to be 0 in Definition 3.3 recovers the simplified heterogeneous MPNN + VN layer.
**Definition 3.4** (simplified heterogeneous MPNN + VN layer).: A simplified heterogeneous MPNN + VN layer is the same as a heterogeneous MPNN + VN layer in Definition 3.3 except that we set \(\phi_{\text{gn-gn}}\) to be 0. I.e., we have
\[\mathbf{x}_{\text{vn}}^{(k)}=\gamma_{\text{vn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\tau_{ j\in[n]}\phi_{\text{vn-gn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j}^{(k-1)}, \mathbf{e}_{j,i}\right)\right)\]
for the virtual node, and
\[\mathbf{x}_{i}^{(k)}=\gamma_{\text{gn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\tau_{j\in \mathcal{N}_{1}(i)}\phi_{\text{gn-vn}}^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j} ^{(k-1)},\mathbf{e}_{j,i}\right)\right)\]
for graph nodes.
Intuitively, adding the virtual node (VN) to MPNN makes it easy to compute certain quantities, for example, the mean of node features (which is hard for standard MPNN unless the depth is proportional to the diameter of the graph). Using VN thus makes it easy to implement for example the mean subtraction, which helps reduce over-smoothing and improves the performance of GNN. (Yang et al., 2020; Zhao & Akoglu, 2019)
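To make this intuition concrete, here is a small sketch (ours, not from the paper) of one round of the simplified heterogeneous MPNN + VN in which the virtual node aggregates the mean of the graph-node features and every graph node then subtracts it.

```python
import numpy as np

def mpnn_vn_mean_subtraction(X):
    """One VN round: the virtual node uses tau = mean with phi_vn-gn = identity,
    and the graph-node update is gamma_gn(x_i, x_vn) = x_i - x_vn."""
    x_vn = X.mean(axis=0)   # virtual node collects global information in one step
    return X - x_vn         # each graph node reads the same broadcast VN feature

X = np.random.default_rng(1).normal(size=(6, 3))
X_centered = mpnn_vn_mean_subtraction(X)
assert np.allclose(X_centered.mean(axis=0), 0.0)
```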
| | Depth | Width | Self-Attention | Note |
| --- | --- | --- | --- | --- |
| Theorem 4.1 | \(\mathcal{O}(1)\) | \(\mathcal{O}(1)\) | Approximate | Approximate self attention in Performer (Choromanski et al., 2020) |
| Theorem 5.5 | \(\mathcal{O}(1)\) | \(\mathcal{O}(n^{d})\) | Full | Leverage the universality of equivariant DeepSets |
| Theorem 6.3 | \(\mathcal{O}(n)\) | \(\mathcal{O}(1)\) | Full | Explicit construction, strong assumption on \(\mathcal{X}\) |
| Proposition B.10 | \(\mathcal{O}(n)\) | \(\mathcal{O}(1)\) | Full | Explicit construction, more relaxed (but still strong) assumption on \(\mathcal{X}\) |

Table 1: Summary of approximation result of MPNN + VN on self-attention layer. \(n\) is the number of nodes and \(d\) is the feature dimension of node features. The dependency on \(d\) is hidden.
### Assumptions
We have two mild assumptions on feature space \(\mathcal{X}\subset\mathbb{R}^{n\times d}\) and the regularity of target function \(\mathbf{L}\).
**AS1.**\(\forall i\in[n],\mathbf{x}_{i}\in\mathcal{X}_{i},\|\mathbf{x}_{i}\|<C_{1}\). This implies \(\mathcal{X}\) is compact.
**AS2.**\(\|\mathbf{W}_{Q}\|<C_{2},\|\mathbf{W}_{K}\|<C_{2},\|\mathbf{W}_{V}\|<C_{2}\) for the target layer \(\mathbf{L}\). Combined with AS1 on \(\mathcal{X}\), this means \(\alpha^{\prime}(\mathbf{x}_{i},\mathbf{x}_{j})\) is both upper and lower bounded, which further implies that \(\sum_{j}e^{\alpha^{\prime}(\mathbf{x}_{i},\mathbf{x}_{j})}\) is both upper and lower bounded.
## 4 \(\mathcal{O}(1)\)-depth \(\mathcal{O}(1)\)-width MPNN + VN for unbiased approximation of attention
The standard self-attention takes \(\mathcal{O}(n^{2})\) computational time and is therefore not scalable to large graphs. Reducing the computational complexity of self-attention in Transformers is an active research area (Tay et al., 2020). In this section, we consider self-attention in a specific type of efficient transformer, namely Performer (Choromanski et al., 2020) and Linear Transformer (Katharopoulos et al., 2020).
One full self-attention layer \(\mathbf{L}\) is of the following form
\[\mathbf{x}_{i}^{(l+1)}=\sum_{j=1}^{n}\frac{\kappa\left(\mathbf{W}_{Q}^{(l)}\mathbf{x}_{i} ^{(l)},\mathbf{W}_{K}^{(l)}\mathbf{x}_{j}^{(l)}\right)}{\sum_{k=1}^{n}\kappa\left(\mathbf{ W}_{Q}^{(l)}\mathbf{x}_{i}^{(l)},\mathbf{W}_{K}^{(l)}\mathbf{x}_{k}^{(l)}\right)}\cdot \left(\mathbf{W}_{V}^{(l)}\mathbf{x}_{j}^{(l)}\right) \tag{3}\]
where \(\kappa:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is the softmax kernel \(\kappa(\mathbf{x},\mathbf{y}):=\exp(\mathbf{x}^{T}\mathbf{y})\). The kernel function can be approximated via \(\kappa(\mathbf{x},\mathbf{y})=\langle\Phi(\mathbf{x}),\Phi(\mathbf{y})\rangle_{\mathcal{V}} \approx\phi(\mathbf{x})^{T}\phi(\mathbf{y})\) where the first equation is by Mercer's theorem and \(\phi(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) is a low-dimensional feature map with random transformation. For Performer (Choromanski et al., 2020), the choice of \(\phi\) is taken as \(\phi(\mathbf{x})=\frac{\exp\left(\frac{-|\mathbf{x}|^{2}}{2}\right)}{\sqrt{m}}\left[ \exp\left(\mathbf{w}_{1}^{T}\mathbf{x}\right),\cdots,\exp\left(\mathbf{w}_{m}^{T}\mathbf{x} \right)\right]\) where \(\mathbf{w}_{k}\sim\mathcal{N}\left(0,I_{d}\right)\) is i.i.d sampled random variable. For Linear Transformer (Katharopoulos et al., 2020), \(\phi(\mathbf{x})=\mathrm{elu}(\mathbf{x})+1\).
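Both feature maps are only a few lines of code; the sketch below (ours) follows the formulas above, with the \(m\) random projections \(\mathbf{w}_{k}\) drawn i.i.d. from a standard normal. In expectation, \(\phi(\mathbf{q})^{T}\phi(\mathbf{k})\) for the Performer map recovers the softmax kernel \(\exp(\mathbf{q}^{T}\mathbf{k})\).

```python
import numpy as np

def phi_performer(x, W):
    """Performer random features; W has shape (m, d) with rows w_k ~ N(0, I_d)."""
    m = W.shape[0]
    return np.exp(-x @ x / 2.0) / np.sqrt(m) * np.exp(W @ x)

def phi_linear_transformer(x):
    """Linear Transformer feature map elu(x) + 1 (entrywise, always positive)."""
    return np.where(x > 0, x, np.exp(x) - 1.0) + 1.0

# toy check of the kernel approximation (accuracy improves with larger m)
rng = np.random.default_rng(0)
d, m = 4, 4096
W = rng.normal(size=(m, d))
q, k = rng.normal(size=d), rng.normal(size=d)
approx = phi_performer(q, W) @ phi_performer(k, W)   # unbiased estimate of exp(q^T k)
exact = np.exp(q @ k)
```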
By switching \(\kappa(\mathbf{x},\mathbf{y})\) to be \(\phi(\mathbf{x})^{T}\phi(\mathbf{y})\), and denote \(\mathbf{q}_{i}=\mathbf{W}_{Q}^{(l)}\mathbf{x}_{i}^{(l)},\mathbf{k}_{i}=\mathbf{W}_{K}^{(l)}\mathbf{x} _{i}^{(l)}\) and \(\mathbf{v}_{i}=\mathbf{W}_{V}^{(l)}\mathbf{x}_{i}^{(l)}\), the approximated version of Equation (3) by Performer and Linear Transformer becomes
\[\mathbf{x}_{i}^{(l+1)} =\sum_{j=1}^{n}\frac{\phi\left(\mathbf{q}_{i}\right)^{T}\phi\left(\bm {k}_{j}\right)}{\sum_{k=1}^{n}\phi\left(\mathbf{q}_{i}\right)^{T}\phi\left(\mathbf{k}_ {k}\right)}\cdot\mathbf{v}_{j} \tag{4}\] \[=\frac{\left(\phi\left(\mathbf{q}_{i}\right)^{T}\sum_{j=1}^{n}\phi \left(\mathbf{k}_{j}\right)\otimes\mathbf{v}_{j}\right)^{T}}{\phi\left(\mathbf{q}_{i} \right)^{T}\sum_{k=1}^{n}\phi\left(\mathbf{k}_{k}\right)}.\]
where we use the matrix multiplication association rule to derive the second equality.
The key advantage of Equation (4) is that \(\sum_{j=1}^{n}\phi\left(\mathbf{k}_{j}\right)\) and \(\sum_{j=1}^{n}\phi(\mathbf{k}_{j})\otimes\mathbf{v}_{j}\) can be approximated by the virtual node, and shared for all graph nodes, using only \(\mathcal{O}(1)\) layers of MPNNs. We denote the self-attention layer of this form in Equation (4) as \(\mathbf{L}_{\text{Performer}}\). Linear Transformer differs from Performer by choosing a different feature map, \(\phi(\mathbf{x})=\mathrm{elu}(\mathbf{x})+1\), in its self-attention layer \(\mathbf{L}_{\text{Linear-Transformer}}\).
In particular, the VN will approximate \(\sum_{j=1}^{n}\phi\left(\mathbf{k}_{j}\right)\) and \(\sum_{j=1}^{n}\phi\left(\mathbf{k}_{j}\right)\otimes\mathbf{v}_{j}\), and represent it as its feature. Both \(\phi\left(\mathbf{k}_{j}\right)\) and \(\phi\left(\mathbf{k}_{j}\right)\otimes\mathbf{v}_{j}\) can be approximated arbitrarily well by an MLP with constant width (constant in \(n\) but can be exponential in \(d\)) and depth. Note that \(\phi(\mathbf{k}_{j})\otimes\mathbf{v}_{j}\in\mathbb{R}^{dm}\) but can be reshaped to 1 dimensional feature vector.
More specifically, the initial feature for the virtual node is \(\mathbf{1}_{(d+1)m}\), where \(d\) is the dimension of node features and \(m\) is the number of random projections \(\omega_{i}\). Message function + aggregation function for virtual node \(\tau\phi_{\text{vn-gn}}:\mathbb{R}^{(d+1)m}\times\mathcal{M}\rightarrow\mathbb{R }^{(d+1)m}\) is
\[\tau_{j\in[n]}\phi_{\text{vn-gn}}^{(k)}(\cdot,\{\mathbf{x}_{i}\}_{i})=[ \sum_{j=1}^{n}\phi\left(\mathbf{k}_{j}\right), \tag{5}\] \[\texttt{ReshapeTo1D}(\sum_{j=1}^{n}\phi\left(\mathbf{k}_{j}\right) \otimes\mathbf{v}_{j})]\]
where \(\texttt{ReshapeTo1D}(\cdot)\) flattens a 2D matrix to a 1D vector in raster order. This function can be arbitrarily approximated by MLP. Note that \(\phi(\mathbf{k}_{j})\otimes\mathbf{v}_{j}\in\mathbb{R}^{dm}\) but can be reshaped to a 1-dimensional feature vector. Note that the virtual node's feature dimension is \((d+1)m\) (where recall \(m\) is the dimension of the feature map \(\phi\) used in the Linear Transformer/Performer), which is larger than the dimension of the graph node \(d\). This is consistent with the early intuition that the virtual node might be overloaded when passing information among nodes. The update function for the virtual node \(\gamma_{\text{vn}}:\mathbb{R}^{(d+1)m}\times\mathbb{R}^{(d+1)m}\rightarrow\mathbb{R}^{(d+1)m}\) simply copies the second argument, which can be exactly implemented by an MLP.
VN then sends its message back to all other nodes, where each graph node \(i\) applies the update function \(\gamma_{\text{gn}}:\mathbb{R}^{(d+1)m}\times\mathbb{R}^{d}\rightarrow\mathbb{R }^{d}\) of the form
\[\gamma_{\text{gn}}(\mathbf{x}_{i},[\sum_{j=1}^{n}\phi\left(\mathbf{k}_{j} \right),\texttt{ReshapeTo1D}(\sum_{j=1}^{n}\phi\left(\mathbf{k}_{j}\right)\otimes \mathbf{v}_{j})]) \tag{6}\] \[=\frac{\left(\phi\left(\mathbf{q}_{i}\right)\sum_{j=1}^{n}\phi\left(\bm {k}_{j}\right)\otimes\mathbf{v}_{j}\right)^{T}}{\phi\left(\mathbf{q}_{i}\right)^{T} \sum_{k=1}^{n}\phi\left(\mathbf{k}_{k}\right)}\]
to update the graph node feature.
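The construction above can be spelled out in a few lines; the sketch below (ours, using the Linear Transformer feature map for brevity) first computes Equation (4) directly and then re-derives it as one MPNN + VN round: the virtual node stores \([\sum_{j}\phi(\mathbf{k}_{j}),\texttt{ReshapeTo1D}(\sum_{j}\phi(\mathbf{k}_{j})\otimes\mathbf{v}_{j})]\) as in Equation (5), and every graph node then applies the update of Equation (6). In the theorem these two functions are only approximated by MLPs; here they are computed exactly to show that the two formulations agree.

```python
import numpy as np

def phi(x):
    return np.where(x > 0, x, np.exp(x) - 1.0) + 1.0   # elu(x) + 1, applied entrywise

def linear_attention_direct(Q, K, V):
    """Equation (4) computed directly for all nodes."""
    PQ, PK = phi(Q), phi(K)                    # (n, m); here m = d
    S = PK.T @ V                               # sum_j phi(k_j) (outer) v_j, shape (m, d)
    z = PK.sum(axis=0)                         # sum_j phi(k_j), shape (m,)
    return (PQ @ S) / (PQ @ z)[:, None]

def linear_attention_via_vn(Q, K, V):
    """Same quantity phrased as one MPNN + VN round (Equations (5) and (6))."""
    m, d = phi(K[0]).shape[0], V.shape[1]
    # step 1: message/aggregation to the virtual node (Equation (5))
    z = sum(phi(k) for k in K)
    S = sum(np.outer(phi(k), v) for k, v in zip(K, V))
    vn_feature = np.concatenate([z, S.reshape(-1)])    # flattened VN feature
    # step 2: every graph node reads the same broadcast VN feature (Equation (6))
    z_b, S_b = vn_feature[:m], vn_feature[m:].reshape(m, d)
    return np.stack([(phi(q) @ S_b) / (phi(q) @ z_b) for q in Q])

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(7, 4)) for _ in range(3))
assert np.allclose(linear_attention_direct(Q, K, V), linear_attention_via_vn(Q, K, V))
```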
As the update function \(\gamma_{\text{gn}}\) cannot be computed exactly by an MLP, what is left is to show that the error induced by using MLPs to approximate \(\tau\phi_{\text{vn-gn}}\) and \(\gamma_{\text{gn}}\) in Equation (5) and Equation (6) can be made arbitrarily small.
**Theorem 4.1**.: _Under AS1 and AS2, MPNN + VN of \(\mathcal{O}(1)\) width and \(\mathcal{O}(1)\) depth can approximate \(\mathbf{L}_{\text{Performer}}\) and \(\mathbf{L}_{\text{Linear-Transformer}}\) arbitrarily well._
Proof.: We first prove the case of \(\mathbf{L}_{\text{Performer}}\). We can decompose our target function as the composition of \(\tau_{j\in[n]}\phi_{\text{vn-gn}}^{(k)}\), \(\gamma_{\text{gn}}\) and \(\phi\). By the uniform continuity of the functions, it suffices to show that 1) we can approximate \(\phi\), 2) we can approximate operations in \(\gamma_{\text{gn}}\) and \(\tau\phi_{\text{vn-gn}}\) arbitrarily well on the compact domain, and 3) the denominator \(\phi\left(\mathbf{q}_{i}\right)^{T}\sum_{k=1}^{n}\phi\left(\mathbf{k}_{k}\right)\) is uniformly lower bounded by a positive number for any node features in \(\mathcal{X}\).
For 1), each component of \(\phi\) is continuous and all inputs \(\mathbf{k}_{j},\mathbf{q}_{j}\) lie in the compact domain so \(\phi\) can be approximated arbitrarily well by MLP with \(\mathcal{O}(1)\) width and \(\mathcal{O}(1)\) depth (Cybenko, 1989).
For 2), we need to approximate the operations in \(\gamma_{\text{gn}}\) and \(\tau\phi_{\text{vn-gn}}\), i.e., approximate multiplication and vector-scalar division arbitrarily well. As all those operations are continuous, it boils down to showing that all operands lie in a compact domain. By assumptions AS1 and AS2 on \(\mathbf{W}_{Q},\mathbf{W}_{K},\mathbf{W}_{V}\) and the input features \(\mathcal{X}\), we know that \(\mathbf{q}_{i},\mathbf{k}_{i},\mathbf{v}_{i}\) lie in a compact domain for all graph nodes \(i\). As \(\phi\) is continuous, this implies that \(\phi(\mathbf{q}_{i})\) and \(\sum_{j=1}^{n}\phi(\mathbf{k}_{j})\otimes\mathbf{v}_{j}\) lie in a compact domain (\(n\) is fixed), and therefore the numerator lies in a compact domain. Lastly, since none of the operations involve \(n\), the depth and width are constant in \(n\).
For 3), it is easy to see that \(\phi\left(\mathbf{q}_{i}\right)^{T}\sum_{k=1}^{n}\phi\left(\mathbf{k}_{k}\right)\) is always positive. We just need to show that the denominator is bounded from below by a positive constant. For Performer, \(\phi(\mathbf{x})=\frac{\exp\left(\frac{-\|\mathbf{x}\|_{2}^{2}}{2}\right)}{\sqrt{m}}\left[\exp\left(\mathbf{w}_{1}^{T}\mathbf{x}\right),\cdots,\exp\left(\mathbf{w}_{m}^{T}\mathbf{x}\right)\right]\) where \(\mathbf{w}_{k}\sim\mathcal{N}\left(0,I_{d}\right)\). As the norm of every input \(\mathbf{x}\) to \(\phi\) is upper bounded by AS1, \(\exp(\frac{-\|\mathbf{x}\|_{2}^{2}}{2})\) is lower bounded. As \(m\) is fixed, we know that \(|\mathbf{w}_{i}^{T}\mathbf{x}|\leq\|\mathbf{w}_{i}\|\|\mathbf{x}\|\), which implies that \(\mathbf{w}_{i}^{T}\mathbf{x}\) is lower bounded by \(-\|\mathbf{w}_{i}\|\|\mathbf{x}\|\), which further implies that \(\exp(\mathbf{w}_{i}^{T}\mathbf{x})\) is lower bounded. This means that \(\phi\left(\mathbf{q}_{i}\right)^{T}\sum_{k=1}^{n}\phi\left(\mathbf{k}_{k}\right)\) is lower bounded.
For Linear Transformer, the proof is essentially the same as above. We only need to show that \(\phi(\mathbf{x})=\mathrm{elu}(\mathbf{x})+1\) is continuous and positive, which is indeed the case.
Besides Performers, there are many other different ways of obtaining linear complexity. In Appendix C.2, we discuss the limitation of MPNN + VN on approximating other types of efficient transformers such as Linformer (Wang et al., 2020) and Sparse Transformer (Child et al., 2019).
## 5 \(\mathcal{O}(1)\) depth \(\mathcal{O}(n^{d})\) width MPNN + VN
We have shown that the MPNN + VN can approximate self-attention in Performer and Linear Transformer using only \(\mathcal{O}(1)\) depth and \(\mathcal{O}(1)\) width. One may naturally wonder whether MPNN + VN can approximate the self-attention layer in the _full_ transformer. In this section, we show that MPNN + VN with \(O(1)\) depth (number of layers), but with \(\mathcal{O}(n^{d})\) width, can approximate 1 self-attention layer (and full transformer) arbitrarily well.
The main observation is that MPNN + VN is able to exactly simulate (not just approximate) equivariant DeepSets (Zaheer et al., 2017), which is proved to be universal in approximating any permutation invariant/equivariant maps (Zaheer et al., 2017; Segol and Lipman, 2019). Since the self-attention layer is permutation equivariant, this implies that MPNN + VN can approximate the self-attention layer (and full transformer) with \(\mathcal{O}(1)\) depth and \(\mathcal{O}(n^{d})\) width following a result on DeepSets from Segol and Lipman (2019).
We first introduce the permutation equivariant map, equivariant DeepSets, and permutation equivariant universality.
**Definition 5.1** (permutation equivariant map).: A map \(\mathbf{F}:\mathbb{R}^{n\times k}\rightarrow\mathbb{R}^{n\times l}\) satisfying \(\mathbf{F}(\sigma\cdot\mathbf{X})=\sigma\cdot\mathbf{F}(\mathbf{X})\) for all \(\sigma\in S_{n}\) and \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is called permutation equivariant.
**Definition 5.2** (equivariant DeepSets of Zaheer et al. (2017)).: Equivariant DeepSets has the following form \(\mathbf{F}(\mathbf{X})=\mathbf{L}_{\text{in}}^{\text{ds}}\circ\nu\circ\cdots\circ\nu\circ \mathbf{L}_{1}^{\text{ds}}(\mathbf{X})\), where \(\mathbf{L}_{i}^{\text{ds}}\) is a linear permutation equivariant layer and \(\nu\) is a nonlinear layer such as ReLU. The linear permutation equivariant layer in DeepSets has the following form \(\mathbf{L}_{i}^{\text{ds}}(\mathbf{X})=\mathbf{X}\mathbf{A}+\frac{1}{n}\mathbf{1}\mathbf{1}^{T}\mathbf{X} \mathbf{B}+\mathbf{1}\mathbf{c}^{T}\), where \(\mathbf{A},\mathbf{B}\in\mathbb{R}^{d_{i}\times d_{i+1}}\), \(\mathbf{c}\in\mathbb{R}^{d_{i+1}}\) is the weights and bias in layer \(i\), and \(\nu\) is ReLU.
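The linear permutation equivariant layer of Definition 5.2 takes only a few lines; the sketch below (ours, with random weights purely for illustration) also checks the equivariance property \(\mathbf{L}^{\text{ds}}(\sigma\cdot\mathbf{X})=\sigma\cdot\mathbf{L}^{\text{ds}}(\mathbf{X})\).

```python
import numpy as np

def deepsets_linear_layer(X, A, B, c):
    """L^ds(X) = X A + (1/n) 1 1^T X B + 1 c^T  for X of shape (n, d_i)."""
    return X @ A + np.outer(np.ones(X.shape[0]), X.mean(axis=0) @ B) + c

rng = np.random.default_rng(0)
n, d_in, d_out = 6, 3, 5
X = rng.normal(size=(n, d_in))
A, B = rng.normal(size=(d_in, d_out)), rng.normal(size=(d_in, d_out))
c = rng.normal(size=d_out)
perm = rng.permutation(n)
# permutation equivariance: permuting the rows before or after the layer is the same
assert np.allclose(deepsets_linear_layer(X[perm], A, B, c),
                   deepsets_linear_layer(X, A, B, c)[perm])
```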
**Definition 5.3** (permutation equivariant universality).: Given a compact domain \(\mathcal{X}\) of \(\mathbb{R}^{n\times d_{\text{in}}}\), permutation equivariant universality of a model \(\mathbf{F}:\mathbb{R}^{n\times d_{\text{in}}}\rightarrow\mathbb{R}^{n\times d_{\text {out}}}\) means that for every permutation equivariant continuous function \(\mathbf{H}:\mathbb{R}^{n\times d_{\text{in}}}\rightarrow\mathbb{R}^{n\times d_{ \text{out}}}\) defined over \(\mathcal{X}\), and any \(\epsilon>0\), there exists a choice of \(m\) (i.e., network depth), \(d_{i}\) (i.e., network width at layer \(i\)) and the trainable parameters of \(\mathbf{F}\) so that \(\|\mathbf{H}(\mathbf{X})-\mathbf{F}(\mathbf{X})\|_{\infty}<\epsilon\) for all \(\mathbf{X}\in\mathcal{X}\).
The universality of equivariant DeepSets is stated as follows.
**Theorem 5.4** (Segol and Lipman (2019)).: _DeepSets with a constant number of layers is universal. Using ReLU activation, the width \(\omega:=\max_{i}d_{i}\) (\(d_{i}\) is the width of the \(i\)-th layer of DeepSets) required for a universal permutation equivariant network satisfies \(\omega\leq d_{\text{out}}+d_{\text{in}}+\binom{n+d_{\text{in}}}{d_{\text{in}}}=\mathcal{O}(n^{d_{\text{in}}})\)._
We are now ready to state our main theorem.
**Theorem 5.5**.: _MPNN + VN can simulate (not just approximate) equivariant DeepSets: \(\mathbb{R}^{n\times d}\rightarrow\mathbb{R}^{n\times d}\). The depth and width of MPNN + VN needed to simulate DeepSets are within a constant factor of the depth and width of DeepSets. This implies that MPNN + VN of \(\mathcal{O}(1)\) depth and \(\mathcal{O}(n^{d})\) width is permutation equivariant universal, and can approximate the self-attention layer and transformers arbitrarily well._
Proof.: Equivariant DeepSets has the following form \(\mathbf{F}(\mathbf{X})=\mathbf{L}_{\text{in}}^{\text{ds}}\circ\nu\circ\cdots\circ\nu\circ \mathbf{L}_{1}^{\text{ds}}(\mathbf{X})\), where \(\mathbf{L}_{i}^{\text{ds}}\) is the
linear permutation equivariant layer and \(\nu\) is an entrywise nonlinear activation layer. Recall that the linear equivariant layer has the form \(\mathbf{L}_{i}^{\text{ds}}(\mathbf{X})=\mathbf{X}\mathbf{A}+\frac{1}{n}\mathbf{1}\mathbf{1}^{T}\mathbf{X} \mathbf{B}+\mathbf{1}\mathbf{c}^{T}\). As one can use the same nonlinear entrywise activation layer \(\nu\) in MPNN + VN, it suffices to prove that MPNN + VN can compute linear permutation equivariant layer \(\mathbf{L}^{\text{ds}}\). Now we show that 2 layers of MPNN + VN can exactly simulate any given linear permutation equivariant layer \(\mathbf{L}^{\text{ds}}\).
Specifically, at layer 0, we initialized the node features as follows: The VN node feature is set to 0, while the node feature for the \(i\)-th graph node is set up as \(\mathbf{x}_{i}\in\mathbb{R}^{d}\).
At layer 1: the VN feature is \(\frac{1}{n}\mathbf{1}^{T}\mathbf{X}\), the average of the node features, and the collection of features over the \(n\) graph nodes is \(\mathbf{X}\mathbf{A}\). We only need to transform the graph node features by a linear transformation and set the VN feature to the average of the graph node features from the last iteration. Both can be exactly implemented with the simplified heterogeneous MPNN + VN of Definition 3.4.
At layer 2: VN node feature is set to be 0, and the graph node feature is \(\mathbf{X}\mathbf{A}+\frac{1}{n}\mathbf{1}\mathbf{1}^{T}\mathbf{X}\mathbf{B}+\mathbf{1}\mathbf{c}^{T}\). Here we only need to perform the matrix multiplication of the VN feature with \(\mathbf{B}\), as well as add a bias \(\mathbf{c}\). This can be done by implementing a linear function for \(\gamma_{\text{gn}}\).
It is easy to see the width required for MPNN + VN to simulate DeepSets is constant. Thus, one can use 2 layers of MPNN + VN to compute linear permutation equivariant layer \(\mathbf{L}_{i}^{\text{ds}}\), which implies that MPNN + VN can simulate 1 layer of DeepSets exactly with constant depth and constant width (independent of \(n\)). Then by the universality of DeepSets, stated in Theorem 5.4, we conclude that MPNN + VN is also permutation equivariant universal, which implies that the constant layer of MPNN + VN with \(\mathcal{O}(n^{d})\) width is able to approximate any continuous equivariant maps. As the self-attention layer \(\mathbf{L}\) and full transformer are both continuous and equivariant, they can be approximated by MPNN + VN arbitrarily well.
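The two rounds in this proof can be written out explicitly; the following sketch (ours) mirrors the construction and verifies that the result equals the DeepSets linear layer \(\mathbf{X}\mathbf{A}+\frac{1}{n}\mathbf{1}\mathbf{1}^{T}\mathbf{X}\mathbf{B}+\mathbf{1}\mathbf{c}^{T}\).

```python
import numpy as np

def mpnn_vn_two_rounds(X, A, B, c):
    """Simulate one DeepSets linear equivariant layer with two MPNN + VN rounds."""
    # round 1: the VN aggregates the mean node feature; graph nodes apply X -> X A
    vn = X.mean(axis=0)                  # (1/n) 1^T X
    H = X @ A
    # round 2: each graph node adds the broadcast VN feature times B, plus the bias c
    return H + vn @ B + c

rng = np.random.default_rng(0)
n = 6
X = rng.normal(size=(n, 3))
A, B = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
c = rng.normal(size=4)
target = X @ A + np.outer(np.ones(n), X.mean(axis=0) @ B) + c
assert np.allclose(mpnn_vn_two_rounds(X, A, B, c), target)
```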
Thanks to the connection between MPNN + VN with DeepSets, there is no extra assumption on \(\mathcal{X}\) except for being compact. The drawback on the other hand is that the upper bound on the computational complexity needed to approximate the self-attention with wide MPNN + VN is worse than directly computing self-attention when \(d>2\).
## 6 \(\mathcal{O}(n)\) depth \(\mathcal{O}(1)\) width MPNN + VN
The previous section shows that we can approximate a full attention layer in Transformer using MPNN with \(\mathcal{O}(1)\) depth but \(\mathcal{O}(n^{d})\) width where \(n\) is the number of nodes and \(d\) is the dimension of node features. In practice, it is not desirable to have the width depend on the graph size.
In this section, we hope to study MPNN + VNs with \(\mathcal{O}(1)\) width and their ability to approximate a self-attention layer in the Transformer. However, this appears to be much more challenging. Our result in this section only shows that for a rather restrictive family of input graphs (see Assumption 3 below), we can approximate a full self-attention layer of transformer with an MPNN + VN of \(\mathcal{O}(1)\) width and \(\mathcal{O}(n)\) depth. We leave the question of MPNN + VN's ability in approximate transformers for more general families of graphs for future investigation.
We first introduce the notion of \((\mathbf{V},\delta)\) separable node features. This is needed to ensure that VN can approximately select one node feature to process at each iteration with attention \(\alpha_{\text{vn}}\), the self-attention in the virtual node.
**Definition 6.1** (\((\mathbf{V},\delta)\) separable by \(\bar{\alpha}\)).: Given a graph \(G\) of size \(n\) and a fixed \(\mathbf{V}\in\mathbb{R}^{n\times d}=[\mathbf{v}_{1},...,\mathbf{v}_{n}]\) and \(\bar{\alpha}\in\mathcal{A}\), we say node feature \(\mathbf{X}\in\mathbb{R}^{n\times d}\) of \(G\) is \((\mathbf{V},\delta)\) separable by some \(\bar{\alpha}\) if the following holds. For any node feature \(\mathbf{x}_{i}\), there exist weights \(\mathbf{W}_{K}^{\bar{\alpha}},\mathbf{W}_{Q}^{\bar{\alpha}}\) in attention score \(\bar{\alpha}\) such that \(\bar{\alpha}(\mathbf{x}_{i},\mathbf{v}_{i})>\max_{j\neq i}\bar{\alpha}(\bar{\mathbf{x}}_{j },\mathbf{v}_{i})+\delta\). We say set \(\mathcal{X}\) is \((\mathbf{V},\delta)\) separable by \(\bar{\alpha}\) if every element \(\mathbf{X}\in\mathcal{X}\) is \((\mathbf{V},\delta)\) separable by \(\bar{\alpha}\).
The use of \((\mathbf{V},\delta)\) separability is to approximate hard selection function arbitrarily well, which is stated below and proved in Appendix B.1.
**Lemma 6.2** (approximate hard selection).: _Given \(\mathbf{X}\) is \((\mathbf{V},\delta)\) separable by \(\bar{\alpha}\) for some fixed \(\mathbf{V}\in\mathbb{R}^{n\times d}\), \(\bar{\alpha}\in\mathcal{A}\)
| **Model** | **# Params.** | **Peptides-func: Test AP before VN** | **Test AP after VN** \(\uparrow\) | **Peptides-struct: Test MAE before VN** | **Test MAE after VN** \(\downarrow\) |
| --- | --- | --- | --- | --- | --- |
| GCN | 508k | 0.5930±0.0023 | 0.6623±0.0038 | 0.3496±0.0013 | **0.2488±0.0021** |
| GINE | 476k | 0.5498±0.0079 | 0.6346±0.0071 | 0.3547±0.0045 | 0.2584±0.0011 |
| GatedGCN | 509k | 0.5864±0.0077 | 0.6635±0.0024 | 0.3420±0.0013 | 0.2523±0.0016 |
| GatedGCN+RWSE | 506k | 0.6069±0.0035 | **0.6685±0.0062** | 0.3357±0.0006 | 0.2529±0.0009 |
| Transformer+LapPE | 488k | 0.6326±0.0126 | – | 0.2529±0.0016 | – |
| SAN+LapPE | 493k | 0.6384±0.0121 | – | 0.2683±0.0043 | – |
| SAN+RWSE | 500k | 0.6439±0.0075 | – | 0.2545±0.0012 | – |

Table 2: Baselines for Peptides-func (graph classification) and Peptides-struct (graph regression). The performance metric is Average Precision (AP) for classification and MAE for regression. **Bold**: Best score.
and \(\delta>0\), the following holds. For any \(\epsilon>0\) and \(i\in[n]\), there exists a set of attention weights \(\mathbf{W}_{i,Q},\mathbf{W}_{i,K}\) in the \(i\)-th layer of MPNN + VN such that \(\alpha_{\text{vn}}(\mathbf{x}_{i},\mathbf{v}_{i})>1-\epsilon\) for any \(\mathbf{x}_{i}\in\mathcal{X}_{i}\). In other words, we can approximate a hard selection function \(f_{i}(\mathbf{x}_{1},...,\mathbf{x}_{n})=\mathbf{x}_{i}\) arbitrarily well on \(\mathcal{X}\) by setting \(\alpha_{\text{vn}}=\bar{\alpha}\)._
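A small numerical sketch (ours) of the mechanism behind Lemma 6.2: when one node's unnormalized score beats the others by a margin, rescaling the attention weights (which plays the role of choosing \(\mathbf{W}_{i,Q},\mathbf{W}_{i,K}\) in the lemma) makes the softmax weight on that node approach 1, so the attention output approaches a hard selection of its feature.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def vn_soft_select(X, v, W_Q, W_K, scale=1.0):
    """Attention of a fixed query v over graph-node keys x_1..x_n; returns the
    attention weights and the soft selection sum_j alpha_j x_j."""
    logits = scale * (X @ W_Q) @ (W_K.T @ v)   # scaled unnormalized scores alpha'
    alpha = softmax(logits)
    return alpha, alpha @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
W_Q = W_K = np.eye(3)
v = X[2]                       # a toy query correlated with node 2
for scale in (1.0, 10.0, 100.0):
    alpha, sel = vn_soft_select(X, v, W_Q, W_K, scale)
    # as scale grows, alpha concentrates on the highest-scoring node and
    # sel converges to that node's feature vector
```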
With the notation set up, we now state an extra assumption needed for the deep MPNN + VN case and the main theorem.
**AS3.**\(\mathcal{X}\) is \((\mathbf{V},\delta)\) separable by \(\bar{\alpha}\) for some fixed \(\mathbf{V}\in\mathbb{R}^{n\times d}\), \(\bar{\alpha}\in\mathcal{A}\) and \(\delta>0\).
**Theorem 6.3**.: _Assume AS 1-3 hold for the compact set \(\mathcal{X}\) and \(\mathbf{L}\). Given any graph \(G\) of size \(n\) with node features \(\mathbf{X}\in\mathcal{X}\), and a self-attention layer \(\mathbf{L}\) on \(G\) (fix \(\mathbf{W}_{K},\mathbf{W}_{Q},\mathbf{W}_{V}\) in \(\alpha\)), there exists a \(\mathcal{O}(n)\) layer of heterogeneous MPNN + VN with the specific aggregate/update/message function that can approximate \(\mathbf{L}\) on \(\mathcal{X}\) arbitrarily well._
The proof is presented in the Appendix B. On the high level, we can design an MPNN + VN where the \(i\)-th layer will select \(\tilde{\mathbf{x}}_{i}\), an approximation of \(\mathbf{x}_{i}\) via attention mechanism, enabled by Lemma 6.2, and send \(\tilde{\mathbf{x}}_{i}\) to the virtual node. Virtual node will then pass the \(\tilde{\mathbf{x}}_{i}\) to all graph nodes and computes the approximation of \(e^{\alpha(\mathbf{x}_{i},\mathbf{x}_{j})},\forall j\in[n]\). Repeat such procedures \(n\) times for all graph nodes, and finally, use the last layer for attention normalization. A slight relaxation of AS3 is also provided in the appendix.
## 7 Experiments
### MPNN + VN for LRGB Datasets
We experiment with MPNN + VN on the Long Range Graph Benchmark (LRGB) datasets. The original paper (Dwivedi et al., 2022) observes that GT outperforms MPNN on 4 out of 5 datasets; in particular, GT shows a significant improvement over all MPNNs on Peptides-func and Peptides-struct. To test the effectiveness of the virtual node, we take the original code and modify the graph topology by adding a virtual node, keeping the hyperparameters of all models unchanged.
Results are in Table 2. Interestingly, such a simple change can boost MPNN + VN by a large margin on Peptides-func and Peptides-struct. Notably, with the addition of VN, GatedGCN + RWSE (random-walk structural encoding) after augmented by VN **outperforms all transformers** on Peptides-func, and GCN outperforms transformers on Peptides-struct.
### Stronger MPNN + VN Implementation
Next, by leveraging the modularized implementation from GraphGPS (Rampasek et al., 2022), we implemented a version of MPNN + VN with and without extra positional embedding. Our goal is not to achieve SOTA but instead to push the limit of MPNN + VN and better understand the source of the performance gain for GT. In particular, we replace the GlobalAttention module in GraphGPS with DeepSets, which is equivalent to one specific version of MPNN + VN. We tested this specific version of MPNN + VN on 4 OGB datasets, both with and without the use of positional embedding. The results are reported in Table 3. Interestingly, even without the extra positional embedding, our MPNN + VN is able to further improve over the previous GCN + VN and GIN + VN implementations. The improvement on **ogbg-ppa** is particularly impressive, rising from 0.7037 to 0.8055. Furthermore, it is important to note that while MPNN + VN does not necessarily outperform GraphGPS, a state-of-the-art architecture combining MPNN, position/structure encoding, and Transformer, the difference is quite small and is achieved by a much simpler MPNN + VN architecture.
We also test MPNN + VN on the large-scale molecule dataset PCQM4Mv2, which has 529,434 molecule graphs. We followed (Rampasek et al., 2022) and used the original validation set as the test set, while leaving out a random 150K molecules for our validation set. As we can see from Table 4, MPNN + VN + NoPE performs significantly better than the early MPNN + VN implementations, GIN + VN and GCN + VN.
| **Model** | **ogbg-molhiv** AUROC \(\uparrow\) | **ogbg-molpcba** Avg. Precision \(\uparrow\) | **ogbg-ppa** Accuracy \(\uparrow\) | **ogbg-code2** F1 score \(\uparrow\) |
| --- | --- | --- | --- | --- |
| GCN | 0.7606 ± 0.0097 | 0.2020 ± 0.0024 | 0.6839 ± 0.0084 | 0.1507 ± 0.0018 |
| GCN+virtual node | 0.7599 ± 0.0119 | 0.2424 ± 0.0034 | 0.6857 ± 0.0061 | 0.1595 ± 0.0018 |
| GIN | 0.7558 ± 0.0140 | 0.2266 ± 0.0028 | 0.6892 ± 0.0100 | 0.1495 ± 0.0023 |
| GIN+virtual node | 0.7707 ± 0.0149 | 0.2703 ± 0.0023 | 0.7037 ± 0.0107 | 0.1581 ± 0.0026 |
| SAN | 0.7785 ± 0.2470 | 0.2765 ± 0.0042 | – | – |
| GraphTrans (GCN-Virtual) | – | 0.2761 ± 0.0029 | – | 0.1830 ± 0.0024 |
| K-Subtree SAT | – | – | 0.7522 ± 0.0056 | 0.1937 ± 0.0028 |
| GPS | 0.7880 ± 0.0101 | 0.2907 ± 0.0028 | 0.8015 ± 0.0033 | 0.1894 ± 0.0024 |
| MPNN + VN + NoPE | 0.7676 ± 0.0172 | 0.2823 ± 0.0026 | 0.8055 ± 0.0038 | 0.1727 ± 0.0017 |
| MPNN + VN + PE | 0.7687 ± 0.0136 | 0.2848 ± 0.0026 | 0.8027 ± 0.0026 | 0.1719 ± 0.0013 |

Table 3: Test performance in graph-level OGB benchmarks (Hu et al., 2020). Shown is the mean ± s.d. of 10 runs.
The performance gap with GPS, on the other hand, is rather small: 0.0938 (GPS) vs. 0.0942 (MPNN + VN + PE) for the small model, and 0.0858 (GPS) vs. 0.0867 (MPNN + VN + PE) for the medium model.
### Forecasting Sea Surface Temperature
In this experiment, we apply our MPNN + VN model to forecast sea surface temperature (SST). We are particularly interested in the empirical comparison between MPNN + VN and Linear Transformer (Katharopoulos et al., 2020) as according to Section 4, MPNN + VN theoretically can approximate Linear Transformer.
In particular, from the DOISST data proposed by (Huang et al., 2021), we construct a dataset of daily SST in the Pacific Ocean from 1982 to 2021, in the region of longitudes from \(180.125^{\circ}\)E to \(269.875^{\circ}\)E and latitudes from \(-14.875^{\circ}\)N to \(14.875^{\circ}\)N. Following the procedure from (de Berenac et al., 2018; de Berenac et al., 2019) and Wang et al. (2022), we divide the region into 11 batches of equal size with 30 longitudes and 30 latitudes at 0.5\({}^{\circ}\) resolution, each of which can be represented as a graph of 900 nodes. The tasks are to predict the next 4 weeks, 2 weeks and 1 week of SST at each location, given 6 weeks of historical data. We train on data from years 1982-2018, validate on data from 2019 and test on data from 2020-2021. The number of training, validation, and testing examples is roughly 150K, 3K, and 7K. See details of dataset construction, model architectures, and training scheme in Appendix D.4.
We compare our model to other baselines including TF-Net (Wang et al., 2020), a SOTA method for spatiotemporal forecasting, Linear Transformer (Katharopoulos et al., 2020; Wang et al., 2020) with Laplacian positional encoding (LapPE), and Multilayer Perceptron (MLP). We use Mean Square Error (MSE) as the metric and report the errors on the test set, shown in Table 5. We observe that the virtual node (VN) alone improves upon MPNN by \(3.8\%\), \(6.6\%\) and \(4.5\%\) in the 4-, 2- and 1-week settings, respectively. Furthermore, aligned with our theory in Section 4, MPNN + VN indeed achieves results comparable to Linear Transformer and outperforms it by margins of \(0.4\%\), \(2.8\%\) and \(4.3\%\) in the 4-, 2- and 1-week settings, respectively.
## 8 Concluding Remarks
In this paper, we study the expressive power of MPNN + VN under the lens of GT. If we target the self-attention layer in Performer and Linear Transformer, one only needs \(\mathcal{O}(1)\)-depth \(\mathcal{O}(1)\) width for arbitrary approximation error. For self-attention in full transformer, we prove that heterogeneous MPNN + VN of either \(\mathcal{O}(1)\) depth \(\mathcal{O}(n^{d})\) width or \(\mathcal{O}(n)\) depth \(\mathcal{O}(1)\) width (under some assumptions) can approximate 1 self-attention layer arbitrarily well. Compared to early results (Kim et al., 2022) showing GT can approximate MPNN, our theoretical result draws the connection from the inverse direction.
On the empirical side, we demonstrate that MPNN + VN remains a surprisingly strong baseline. Despite recent efforts, we still lack good benchmark datasets where GT can outperform MPNN by a large margin. Understanding the inductive bias of MPNN and GT remains challenging. For example, can we mathematically characterize tasks that require effective long-range interaction modeling, and provide a theoretical justification for using GT over MPNN (or vice versa) for certain classes of functions on the space of graphs? We believe making progress towards answering such questions is an important future direction for the graph learning community.
| **Model** | **Test-dev MAE** \(\downarrow\) | **Validation MAE** \(\downarrow\) | **Training MAE** | **# Param.** |
| --- | --- | --- | --- | --- |
| GCN | 0.1398 | 0.1379 | n/a | 2.0M |
| GCN-virtual | 0.1152 | 0.1153 | n/a | 4.9M |
| GIN | 0.1218 | 0.1195 | n/a | 3.8M |
| GIN-virtual | 0.1084 | 0.1083 | n/a | 6.7M |
| GRPE (Park et al., 2022) | 0.0898 | 0.0890 | n/a | 46.2M |
| EGT (Hussain et al., 2022) | 0.0872 | 0.0869 | n/a | 89.3M |
| Graphormer (Shi et al., 2022) | n/a | 0.0864 | 0.0348 | 48.3M |
| GPS-small | n/a | 0.0938 | 0.0653 | 6.2M |
| GPS-medium | n/a | 0.0858 | 0.0726 | 19.4M |
| MPNN + VN + PE (small) | n/a | 0.0942 | 0.0617 | 5.2M |
| MPNN + VN + PE (medium) | n/a | 0.0867 | 0.0703 | 16.4M |
| MPNN + VN + NoPE (small) | n/a | 0.0967 | 0.0576 | 5.2M |
| MPNN + VN + NoPE (medium) | n/a | 0.0889 | 0.0693 | 16.4M |

Table 4: Evaluation on the PCQM4Mv2 (Hu et al., 2021) dataset. For GPS evaluation, we treated the _validation_ set of the dataset as a test set, since the _test-dev_ set labels are private.
| **Model** | **4 weeks** | **2 weeks** | **1 week** |
| --- | --- | --- | --- |
| MLP | 0.3302 | 0.2710 | 0.2121 |
| TF-Net | 0.2833 | **0.2036** | **0.1462** |
| Linear Transformer + LapPE | 0.2818 | 0.2191 | 0.1610 |
| MPNN | 0.2917 | 0.2281 | 0.1613 |
| MPNN + VN | **0.2806** | 0.2130 | 0.1540 |

Table 5: Results of SST prediction.
## Acknowledgement
This work was supported in part by the U.S. Department Of Energy, Office of Science, U. S. Army Research Office under Grant W911NF-20-1-0334, Google Faculty Award, Amazon Research Award, and NSF Grants #2134274, #2107256, #2134178, CCF-2217033, and CCF-2112665.
|
2310.09680 | Improved Contextual Recognition In Automatic Speech Recognition Systems
By Semantic Lattice Rescoring | Automatic Speech Recognition (ASR) has witnessed a profound research
interest. Recent breakthroughs have given ASR systems different prospects such
as faithfully transcribing spoken language, which is a pivotal advancement in
building conversational agents. However, there is still an imminent challenge
of accurately discerning context-dependent words and phrases. In this work, we
propose a novel approach for enhancing contextual recognition within ASR
systems via semantic lattice processing leveraging the power of deep learning
models in accurately delivering spot-on transcriptions across a wide variety of
vocabularies and speaking styles. Our solution consists of using Hidden Markov
Models and Gaussian Mixture Models (HMM-GMM) along with Deep Neural Networks
(DNN) models integrating both language and acoustic modeling for better
accuracy. We infused our network with the use of a transformer-based model to
properly rescore the word lattice achieving remarkable capabilities with a
palpable reduction in Word Error Rate (WER). We demonstrate the effectiveness
of our proposed framework on the LibriSpeech dataset with empirical analyses. | Ankitha Sudarshan, Vinay Samuel, Parth Patwa, Ibtihel Amara, Aman Chadha | 2023-10-14T23:16:05Z | http://arxiv.org/abs/2310.09680v4 | Improved contextual recognition in automatic speech recognition systems by semantic lattice rescoring
###### Abstract
Automatic Speech Recognition (ASR) has witnessed a profound research interest. Recent breakthroughs have given ASR systems different prospects such as faithfully transcribing spoken language, which is a pivotal advancement in building conversational agents. However, there is still an imminent challenge of accurately discerning context-dependent words and phrases. In this work, we propose a novel approach for enhancing contextual recognition within ASR systems via semantic lattice processing leveraging the power of deep learning models in accurately delivering spot-on transcriptions across a wide variety of vocabularies and speaking styles. Our solution consists of using Hidden Markov Models and Gaussian Mixture Models (HMM-GMM) along with Deep Neural Networks (DNN) models integrating both language and acoustic modeling for better accuracy. We infused our network with the use of a transformer-based model to properly rescore the word lattice achieving remarkable capabilities with a palpable reduction in Word Error Rate (WER). We demonstrate the effectiveness of our proposed framework on the LibriSpeech dataset with empirical analyses.
Ankitha Sudarshan\({}^{1}\), Vinay Samuel\({}^{2}\), Parth Patwa\({}^{3}\), Ibtihel Amara\({}^{4}\), Aman Chadha\({}^{5,6}\)†

\({}^{1}\)Purdue University \({}^{2}\)Carnegie Mellon University \({}^{3}\)University of California Los Angeles \({}^{4}\)McGill University \({}^{5}\)Stanford University \({}^{6}\)Amazon AI

\({}^{1}\)sudarsh0@purdue.edu \({}^{5,6}\)hi@aman.ai

Index Terms: speech recognition, lattice re-scoring, contextual speech recognition, word lattices
Footnote †: Work does not relate to position at Amazon.
## 1 Introduction
Recognizing spoken language accurately and efficiently is a complex task due to variability in the source of speech such as pronunciation, dialects, vocabulary, accents, articulation, etc.
Semantic interpretation of speech is crucial in ASR systems. Consider the following example: _"I am going to a bank to deposit a check"_. Without context, the word bank could refer to either a financial institution or the edge of a river.
To bridge this contextual gap in ASR systems, semantic lattice processing is a key component for better recognition under situational context. This technique utilizes a lattice structure to represent the relationships between words and phrases in a sentence. The lattice is created by analyzing the audio input and identifying possible word and phrase combinations together with their associated probabilities. This information is then used to build a graph-like structure, where each node represents a word or phrase and the edges represent the relationships between them [1].
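As an illustration (ours, not tied to the paper's implementation), a word lattice can be stored as a directed acyclic graph whose arcs carry a word hypothesis and a score; competing hypotheses simply share start and end nodes.

```python
from collections import defaultdict

# toy lattice: arcs are (start_node, end_node, word, acoustic_log_score)
arcs = [
    (0, 1, "i", -0.1),
    (1, 2, "am", -0.2),
    (2, 3, "going", -0.3),
    (3, 4, "to", -0.1),
    (4, 5, "a", -0.1),
    (5, 6, "bank", -0.9),
    (5, 6, "tank", -1.1),   # competing hypothesis over the same time span
]

graph = defaultdict(list)
for src, dst, word, score in arcs:
    graph[src].append((dst, word, score))

def all_paths(node, end, prefix=(), total=0.0):
    """Enumerate (word_sequence, summed acoustic log-score) over the DAG."""
    if node == end:
        yield prefix, total
        return
    for dst, word, score in graph[node]:
        yield from all_paths(dst, end, prefix + (word,), total + score)

hypotheses = list(all_paths(0, 6))   # two competing word sequences with their scores
```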
In this study, our primary emphasis centers on lattice rescoring, a technique designed to efficiently re-evaluate the likelihood of potential speech hypotheses. While numerous lattice re-scoring techniques have been documented in the literature [2, 3], our research introduces a novel approach tailored to bolster contextual information within ASR systems.
Our key contributions are as follows:
1) Lattice re-scoring, which refines recognition results through the integration of language model probabilities from different language models, enhancing transcription accuracy and overall system performance.
2) Employing a Transformer architecture for our neural language model, enhancing lattice re-scoring with state-of-the-art contextual modeling.
3) Achieving a 1.36% reduction in word error rate when compared to state-of-the-art models sharing a similar architectural framework.
## 2 Related Work
Voice assistants use a variety of end-to-end (E2E) ASR techniques, including attention-based encoder-decoder (AED) [4, 5], recurrent neural network transducer (RNN-T) [6], and connectionist temporal classification (CTC) [6]. During training, the E2E model simultaneously optimizes the whole recognition pipeline and produces the word sequence output directly. One problem with this method, though, is that it has trouble identifying terms like songs or human names that don't often appear in the training set.
ASR systems' contextual recognition is crucial, especially for voice assistants, which must identify names of contacts, musicians in a user's music collection, and other entities. Shallow fusion [7, 8], attention-based deep context [9, 10, 1], and trie-based deep biasing [11, 12] are the current contextual
biasing techniques used for various E2E ASR models.
According to [1, 10], injecting phoneme information or training with challenging negative examples can help reduce confusion between similar words. On-the-fly re-scoring is the most popular method for introducing contextual bias into ASR. In [13], this technique was first used with hybrid ASR models. It entails composing a weighted finite state transducer (WFST) that represents the ASR model with a separate WFST representation of the bias terms, which allows the weights on the bias terms to be changed "on the fly" at inference time. For E2E models, the bias terms are still assembled into a WFST representation, but the ASR model composes with that WFST at each decoding time-step, so that the likelihood of the current hypothesis under the ASR model is combined with a score from the bias WFST.
A separate re-scoring module cannot adequately handle the errors introduced by the upstream acoustic network. Some architectures, such as [9, 14], can integrate contextual information into the acoustic model to improve the output posterior distribution for the relevant contextual words and to fit the subsequent external LM well. However, as the size of the possible contextual word list grows in real applications, the accuracy and latency of the system degrade rapidly due to dispersed attention scores and heavy attention computation. Moreover, in practice, it is hard to obtain compact and accurate contextual information in advance. Consider music search: we may face a large contextual word list (thousands of entries) containing popular songs. In this case, E2E contextual biasing cannot work well due to the large size and low quality of the contextual word list, leading to performance degradation [15].
## 3 Methodology
### Background
For ASR systems, the initial decoding process generates a lattice--a directed acyclic graph (DAG) representing a spectrum of potential word hypotheses and their associated scores. The fundamental purpose of lattice re-scoring is to elevate the accuracy of these ASR hypotheses by post-processing and re-ranking them within the lattice. We explore the mathematical and algorithmic aspects of lattice re-scoring, showcasing its role in enhancing ASR accuracy. We use a custom Transformer model with positional encoding, multiple Transformer encoder layers (controlled by \(n\) layers), an input embedding layer, and an output linear layer for sequence prediction. Let \(A(a)\) represent the acoustic score for arc \(a\) in the lattice. This score is typically based on the acoustic model and represents how well the audio aligns with the word associated with the arc. Let \(P(w_{1},w_{2},\ldots,w_{N})\) denote the language model probability for the word sequence \((w_{1},w_{2},\ldots,w_{N})\) based on the Transformer model. This probability reflects the likelihood of the entire word sequence occurring in the context of the language. Given a path \(P\) through the lattice, the word sequence probability \(P(P)\) can be computed as the product of acoustic scores and language model probabilities for the words along the path: \(P(P)=\prod_{a}(A(a)\cdot P(w))\), for all arcs \(a\) and words \(w\) in the path. The re-scored transcription corresponds to the path \(P^{*}\) through the lattice that maximizes this joint probability: \(P^{*}=\arg\max_{P}P(P)\).
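As a hedged illustration of the \(P^{*}=\arg\max_{P}P(P)\) step, the sketch below enumerates lattice paths recursively and scores each one as the product \(A(a)\cdot P(w)\) along its arcs, working in log space for numerical stability. It assumes the small `Lattice`/`Arc` classes sketched earlier and stands in for whatever search the actual decoder performs.

```python
import math

def best_path(lattice, lm_logprob):
    """Return (best_log_score, best_word_sequence) over all lattice paths.

    `lm_logprob(words)` is any callable returning a log language-model score
    for a word sequence; here it stands in for the Transformer LM.
    """
    def search(node, words, log_acoustic):
        if node == lattice.end:
            return log_acoustic + lm_logprob(words), list(words)
        best = (-math.inf, None)
        for arc in lattice.outgoing(node):
            cand = search(arc.dst, words + [arc.word],
                          log_acoustic + math.log(arc.score))
            if cand[0] > best[0]:
                best = cand
        return best

    return search(lattice.start, [], 0.0)

# Example with a trivial stand-in "LM" that mildly prefers shorter sequences.
score, words = best_path(lat, lambda ws: -0.1 * len(ws))
```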
### Lattice Re-scoring
We provide in Figure 1 and Figure 2, respectively, our overall framework and the proposed lattice re-scoring strategy. We use the DNN-refined predictions for creating word lattices as an intermediate representation of the ASR output. This decoding process generates lattice-like outputs that contain phone-level alignments and associated scores and eventual word alignments.
Figure 1: **Global overview of our framework.** Our framework includes audio input, DNN acoustic model, lattice creation, language model integration, alignment, transformer re-scoring, and transcript generation.

Figure 2: **Lattice re-scoring strategy.** It involves alignment, n-gram language model integration, and Transformer-based re-scoring to enhance contextual features in the final transcript.

Each path in the lattice has a score based on the ASR system's confidence. However, we can improve the transcription by reevaluating these paths using a neural LM, which captures the likelihood of different word sequences based on linguistic context. We use our custom-trained transformer to perform re-scoring. A Transformer-based re-scoring approach, as opposed to traditional n-gram methods, introduces novelty by leveraging advanced neural network architectures that are designed to handle sequences more effectively and capture complex language patterns.
This transformation is done by computing the conditional probability of the word sequence in each path given the language model and combining it with the original acoustic likelihood score. The result is a new lattice where paths have modified scores that reflect both acoustic and language model information, enhancing transcription accuracy.
The scores from the neural LM are
converted to log-likelihoods to be combined with the original lattice scores. Once we have the lattice paths re-scored using the neural LM and have converted the scores to log-likelihoods, we combine these scores with the original lattice scores. This step helps integrate the language model probabilities into the lattice.
By combining these scores, the lattice paths that were previously assigned lower scores by the ASR system but have higher probabilities according to the LM are promoted, resulting in a better transcription of the input audio than the original input.
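A minimal sketch of this log-space score combination and re-ranking is given below; the `lm_scale` interpolation weight and the toy hypotheses are illustrative assumptions rather than values from our system.

```python
import math

def rescore_paths(paths, lm_scale=0.5):
    """Combine original lattice scores with neural-LM log-likelihoods.

    `paths` is a list of dicts with keys:
      'words'         : the hypothesized word sequence,
      'acoustic_logp' : original lattice (acoustic + n-gram) log-score,
      'lm_prob'       : probability of the sequence under the neural LM.
    Returns the paths re-ranked by the combined score.
    """
    for p in paths:
        p['lm_logp'] = math.log(max(p['lm_prob'], 1e-30))  # convert to log-likelihood
        p['combined'] = p['acoustic_logp'] + lm_scale * p['lm_logp']
    return sorted(paths, key=lambda p: p['combined'], reverse=True)

hyps = [
    {'words': ['a', 'banc', 'to', 'deposit'], 'acoustic_logp': -4.1, 'lm_prob': 1e-6},
    {'words': ['a', 'bank', 'to', 'deposit'], 'acoustic_logp': -4.3, 'lm_prob': 1e-3},
]
best = rescore_paths(hyps)[0]   # the LM promotes the contextually correct path
```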
A prominent mathematical formula in the lattice creation process in ASR is related to the computation of the overall likelihood or score of a path through the lattice. This score is typically calculated as the sum of individual acoustic and language model scores along the path.
\[\text{Path Score}=\sum_{i=1}^{N}\Big(\log P(\text{word}_{i}\mid\text{word}_{i-1})+\log P(\text{acoustic features}_{i}\mid\text{word}_{i})\Big) \tag{1}\]
where, \(N\) represents the number of words in the path, \(\text{word}_{i}\) is the \(i^{th}\) word in the path, \(P(\text{word}_{i}|\text{word}_{i-1})\) is the conditional probability of transitioning from \(\text{word}_{i-1}\) to \(\text{word}_{i}\) using the language model, and \(P(\text{acoustic features}_{i}|\text{word}_{i})\) is the probability of observing acoustic features at position \(i\) given \(\text{word}_{i}\) using the acoustic model.
## 4 Experimental Details
### Data Corpus and Preprocessing
We use the LibriSpeech dataset [16], which consists of approximately 1000 hours of read English speech with a sampling rate of 16 kHz. This dataset provides a substantial and high-quality source of audio data, ensuring the robustness and generalizability of our proposed method. For data preprocessing, we utilize the Kaldi toolkit. We prepared the data in the Kaldi format, organizing it into training, validation, and test sets. The format consists of two main components: the archive file (.ark) and the corresponding index file (.scp). The .ark file contains binary data and is typically organized as a sequence of key-value pairs, where the key is a unique identifier (usually a string) associated with the data, and the value is the binary data itself.
Example: <ark>
<key1> <binarytoken1> <data1>
<key2> <binarytoken2> <data2> </ark>
This involved creating text transcriptions and corresponding acoustic features for each segment of the audio data. The .scp file provides a mapping between the keys in the archive file and their corresponding file positions. Each line contains a key followed by the offset (in bytes) of the corresponding entry in the .ark file.
Example: <key1> <offset1>
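As an illustration only, the sketch below parses an index file laid out as described above (one key and one byte offset per line) and seeks to the corresponding record in the archive; the file names are placeholders, and in practice Kaldi's own tooling handles this.

```python
def read_scp(scp_path):
    """Parse an index file with lines of the form '<key> <offset>'."""
    table = {}
    with open(scp_path, "r", encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                key, offset = parts[0], int(parts[1])
                table[key] = offset
    return table

def read_record(ark_path, offset, nbytes):
    """Read `nbytes` of raw binary data starting at `offset` in the archive."""
    with open(ark_path, "rb") as f:
        f.seek(offset)
        return f.read(nbytes)

# index = read_scp("data/train.scp")       # hypothetical paths
# blob  = read_record("data/train.ark", index["key1"], 4096)
```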
### Acoustic Model
Post preprocessing, we implemented the GMM-HMM acoustic framework. The GMM-HMM model was trained using the extracted acoustic features. The model provided posterior probabilities over subword units (e.g., phonemes or context-dependent acoustic states) for each frame of audio. These probabilities were represented as a likelihood matrix.
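A compact sketch of the kind of per-frame posterior computation described here is shown below, using a single diagonal-covariance Gaussian per acoustic state as a one-component stand-in for the full GMM; the features, state count, and parameters are made up for illustration and do not correspond to the actual trained model.

```python
import numpy as np

def gmm_frame_posteriors(features, means, variances, weights):
    """Posterior probability of each acoustic state for each frame.

    features : (T, D) array of acoustic features.
    means, variances : (S, D) per-state diagonal Gaussian parameters.
    weights  : (S,) state priors.
    Returns a (T, S) likelihood matrix normalized over states.
    """
    T, D = features.shape
    S = means.shape[0]
    loglik = np.zeros((T, S))
    for s in range(S):
        diff = features - means[s]
        loglik[:, s] = (np.log(weights[s])
                        - 0.5 * np.sum(np.log(2 * np.pi * variances[s]))
                        - 0.5 * np.sum(diff ** 2 / variances[s], axis=1))
    loglik -= loglik.max(axis=1, keepdims=True)      # stabilize before exponentiating
    post = np.exp(loglik)
    return post / post.sum(axis=1, keepdims=True)

# Toy example: 5 frames of 13-dim features, 3 acoustic states.
rng = np.random.default_rng(0)
posts = gmm_frame_posteriors(rng.normal(size=(5, 13)),
                             rng.normal(size=(3, 13)),
                             np.ones((3, 13)),
                             np.array([0.3, 0.3, 0.4]))
```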
### Language Model: Deep Neural Network (DNN)
Following the GMM-HMM stage, we incorporate a DNN to refine and improve the predictions made by the GMM-HMM model.
The DNN model generates enhanced posterior probabilities over subword units. This DNN-refined output provides more accurate representations of the spoken audio, building upon the GMM-HMM predictions.
We use a neural LM - a custom transformer trained on the same LibriSpeech dataset - for the rescoring task. We then compose the lattice FSTs with the language model FST. The FST created using Kaldi's tools represents the language model, lexicon, and any other components of the speech recognition system. We then compile the HCLG FST (Hidden Markov Model, Context, Lexicon, Grammar) using the trained acoustic and language models, lexicon, and other necessary components.
These word-level alignments are then converted into Finite State Transducers (FSTs). These FSTs provide a structured representation that allows for efficient manipulation of word-level alignments. To this end, we generate four types of lattices:
Type 1: DNN based lattice (with phone alignments followed by word alignments).
Type 2: GMM based lattice (with phone alignments followed by word alignments).
Type 3: DNN-based lattice (with direct word alignments).
Type 4: GMM-based lattice (with direct word alignments).
### Transformer Model For Lattice Re-scoring
We trained a custom six-layer transformer model with a 512-dimensional hidden embedding on an NVIDIA H100 GPU using the LibriSpeech [16] dataset. We trained the model for 4 epochs with a learning rate of 0.1 and a dropout rate of 0.1.
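A minimal PyTorch sketch of a language model with the ingredients described in Section 3.1 (input embedding, sinusoidal positional encoding, a stack of Transformer encoder layers, and a linear output head) is given below; the hyperparameters mirror the six-layer, 512-dimensional setup, but the class names and the vocabulary size are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):                       # x: (batch, seq, d_model)
        return x + self.pe[: x.size(1)]

class LatticeRescorerLM(nn.Module):
    def __init__(self, vocab_size, d_model=512, n_layers=6, n_heads=8, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = PositionalEncoding(d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=2048,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq) of word ids
        h = self.pos(self.embed(tokens))
        return self.out(self.encoder(h))        # per-position vocabulary logits

model = LatticeRescorerLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 16)))   # shape (2, 16, 10000)
```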
## 5 Results and Discussion
We conducted experiments on the four types of lattices mentioned in Section 4.3 and observed that their performance in the pipeline was identical. Hence we decided to make a comprehensive analysis on Lattice Type 1 which is representative of the other types. The below tables represent results for Lattice Type 1 on the test sets 'test-clean' and 'test-other'. |
2306.14651 | Nonequilibrium steady states in coupled asymmetric and symmetric
exclusion processes | We propose and study a one-dimensional (1D) model consisting of two lanes
with open boundaries. One of the lanes executes diffusive and the other lane
driven unidirectional or asymmetric exclusion dynamics, which are mutually
coupled through particle exchanges in the bulk. We elucidate the generic
nonuniform steady states in this model. We show that in a parameter regime,
where hopping along the TASEP lane, diffusion along the SEP lane and the
exchange of particles between the TASEP and SEP lanes compete, the SEP
diffusivity $D$ appears as a tuning parameter for both the SEP and TASEP
densities for a given exchange rate in the nonequilibrium steady states of this
model. Indeed, $D$ can be tuned to achieve phase coexistence in the asymmetric
exclusion dynamics together with spatially smoothly varying density in the
diffusive dynamics in the steady state. We obtain phase diagrams of the model
by using mean field theories, and corroborate and complement the results by
stochastic Monte Carlo simulations. This model reduces to an isolated open
totally asymmetric exclusion process (TASEP) and an open TASEP with bulk
particle nonconserving Langmuir kinetics (LK), respectively, in the limits of
vanishing and diverging particle diffusivity in the lane executing diffusive
dynamics. Thus this model works as an overarching general model, connecting
both pure TASEPs and TASEPs with LK in different asymptotic limits. We further
define phases in the SEP and obtain phase diagrams, and show their
correspondence with the TASEP phases. In addition to its significance as a 1D
driven, diffusive model, this model also serves as a simple reduced model for
cell biological transport by molecular motors undergoing diffusive and directed
motion inside eukaryotic cells. | Atri Goswami, Utsa Dey, Sudip Mukherjee | 2023-06-26T12:40:16Z | http://arxiv.org/abs/2306.14651v2 | # Nonequilibrium steady states in coupled asymmetric and symmetric exclusion processes
###### Abstract
We propose and study a one-dimensional (1D) model consisting of two lanes with open boundaries. One of the lanes executes diffusive and the other lane driven unidirectional or asymmetric exclusion dynamics, which are mutually coupled through particle exchanges in the bulk. We elucidate the generic nonuniform steady states in this model. We show that the nonequilibrium steady states of this model can be controlled by the ratio of the diffusive and directed motion time-scales, which can be tuned to achieve phase coexistence in the asymmetric exclusion dynamics and spatially smoothly varying density in the diffusive dynamics in the steady state. We obtain phase diagrams of the model by using mean field theories, and corroborate and complement the results by stochastic Monte Carlo simulations. This model reduces to an isolated open totally asymmetric exclusion process (TASEP) and an open TASEP with bulk particle nonconserving Langmuir kinetics (LK), respectively, in the limits of vanishing and diverging particle diffusivity in the lane executing diffusive dynamics. Thus this model works as an overarching general model, connecting both pure TASEPs and TASEPs with LK in different asymptotic limits. We further define phases in the SEP and obtain phase diagrams, and show their correspondence with the TASEP phases. In addition to its significance as a 1D driven, diffusive model, this model also serves as a simple reduced model for cell biological transport by molecular motors undergoing diffusive and directed motion inside eukaryotic cells.
## I Introduction
Natural systems driven by an external field or containing collections of self-propelled particles form prominent examples of nonequilibrium systems that often evolve into stationary states carrying steady currents. The presence of steady currents distinguishes these systems from their counterparts in thermal equilibrium. Understanding the general physical principles behind such nonequilibrium transport has been the subject of intense research recently. One is often particularly interested in nonequilibrium transport in the context of simple one-dimensional (1D) model systems. In order to elucidate the nature of such nonequilibrium steady states and in the absence of a general theoretical framework, it is useful to study purpose-built simple models. To this end, a variety of driven lattice gas models have been introduced and studied extensively [1].
The Totally Asymmetric Simple Exclusion Process (TASEP) with open boundaries is one of the simplest 1D nonequilibrium models, which displays boundary induced phase transitions. It was originally proposed by MacDonald _et al_[2] to model the transport of ribosomes along messenger RNA strands in cell biological context. Subsequently, it was reinvented as a paradigmatic nonequilibrium model [3]. TASEP consists of a one-dimensional (1D) lattice, along which particles can stochastically hop from left to right, at rate unity provided the target site is vacant. The interaction between the particles is through a hard core exclusion potential. The latter ensures only single occupancy per lattice site, which implies exclusion. In TASEP, particles can enter the system from the left boundary and exit through the right with certain prescribed entry (\(\alpha\)) and exit (\(\beta\)) rates. The phases of TASEP can be tuned by \(\alpha\leq 1\) and \(\beta\leq 1\), and are generically uniform in space, except for a special case when the two rates are equal and less than \(1/2\). Three distinct phases can be identified in the phase diagram of TASEP constructed in the space spanned by the control parameters \(\alpha\) and \(\beta\). These are the High density (HD) phase, Low density (LD) phase and the Maximal current (MC) phase. TASEP is one of the very few nonequilibrium models which can be exactly solved and has emerged as a simple basis to study the principles and phenomenologies of 1D transport [4; 5; 6; 7].
The Symmetric Exclusion Process (SEP) is a simple realisation of the 1D _equilibrium diffusion process_ in which particles can move, in contrast to TASEP, in either direction (left or right) symmetrically, subject to exclusion. Also unlike TASEP, the entry and exit of particles can occur at both the ends of the lattice. In the steady state, the spatial dependence of the density profile is always a straight line, either fully flat for equal biases or an inclined line in case of unequal biases [8].
More recently, TASEP has been generalised in a variety of ways, all of which reveal many new interesting macroscopic phenomena. These usually involve the presence of additional microscopic processes competing with the basic hopping process of TASEP, or the presence of conservation laws. A prominent example is a model introduced in Ref. [9], which has competing 1D nonequilibrium transport (TASEP) and equilibrium on-off ex-
changes with a surrounding reservoir (known as Langmuir kinetics (LK)). In LK, one studies the attachment-detachment kinetics of particles on a lattice coupled to a bulk reservoir. As a physical motivation, this provides the simplest description of binding and unbinding kinetics of enzymes to some substrate. In LK dynamics, the particles get adsorbed at an empty site or detached from an occupied one with some given rates. As in TASEP, the only interaction between the particles is the hard-core repulsion due to particle exclusion, leading to maximum occupancy one per site even in the presence of LK. The LK and TASEP are two of the simplest paradigmatic equilibrium and nonequilibrium models, which clearly contrast equilibrium and non-equilibrium dynamics, distinguishing the corresponding stationary states. For instance, LK maintains detailed balance, and evolves into an equilibrium steady state in the long time limit. In contrast, a TASEP naturally breaks the detailed balance condition due to continuous flow of particles, and the resulting stationary state is a non-equilibrium state that carries a finite current. Such non-equilibrium steady states are known to be quite sensitive to changes in the boundary conditions. In contrast, equilibrium steady states are very robust to such changes and dominated by the bulk dynamics. TASEP is a boundary condition dependent process - in the TASEP new particles can enter or leave the system only at the system boundaries, whereas in the bulk there are no sources or sinks. In contrast in LK particles can enter or leave the system at any site. As shown in Ref. [9], a combination of the two can produce nonuniform steady state densities in the TASEP when the typical _exchange rate_ of a particle moving along the TASEP lane is comparable with the entry-exit rates in the filament, which can be achieved by introducing system size-dependent LK exchange rates between the bulk and the TASEP lane. When the two rates are comparable, the resulting steady state density profiles can show coexistence phases and domain walls, in contrast to the density profiles in isolated TASEP and Langmuir kinetic processes.
Diffusive processes are ubiquitous in nature, e.g., in cell biological contexts. How diffusive and driven processes may combine to influence mass transport is a fundamentally important question in cell biology. Notable previous studies on this topic include the work on the coupled dynamics of diffusive (unbound) motors in the cell cytoplasm and motors driven along microtubules (bound motors) in tube-like cylindrical compartments (representing the cytoplasm), containing one filament along the axis (representing a microtubule) with motors being injected and extracted at the boundaries [10]. This model reveals a phase behavior similar to that of 1D TASEP. Later, an extension of the above model was studied in Ref. [11]. These are however three dimensional models, which are relatively difficult to analyse, either analytically or by computer simulations. Moreover, the competition between the time scales of diffusive and directed dynamics has also not been studied in these works. A 1D _closed_ model consisting of two equal segments with one segment executing driven dynamics and the other diffusive was studied in Ref. [12]. Interestingly, unlike an open TASEP, this model shows a single _localised domain wall_ (LDW) instead of a delocalised domain wall (DDW) found in an open TASEP. This is attributed to the overall particle number conservation in the model. Very recently, it was shown that in the limit of a large diffusive segment, an LDW in this model can get delocalised due to fluctuation effects [13].
Our motivation here is to systematically investigate the interplay between the diffusive, driven and particle-exchange time-scales in 1D, subject to exclusion. We do this by generalising 1D nonequilibrium transport by coupling TASEP with SEP via particle exchange that is reminiscent of LK. We also study effects of space-dependent exchanges on the steady states. We expect the steady states of a coupled SEP-TASEP model will be quite different from the features of the decoupled systems, i.e., of an isolated TASEP and an isolated SEP. As we shall see, for our coupled system in the steady state we find phase co-existences in TASEP and spatially non-uniform (but smooth) density profiles in SEP, depending upon relative time scales of SEP and TASEP dynamics. This is totally in contrast to the well-known spatially uniform densities in steady states of isolated TASEP and SEP. Although effects of combining driven and diffusive transport have been studied earlier [12; 14], a systematic understanding of the effects of the competition between different time scales is clearly lacking. Lastly, space dependence of the parameters which define the local dynamics are expected to play an important role in controlling the nature of the steady states of driven systems, see, e.g., Refs. [15; 16] and references therein for detailed studies on the steady states in periodic and open TASEPs with smoothly space-dependent hopping rates. Indeed, spatially smoothly varying rates of particle exchanges between the TASEP and SEP lanes naturally extend the studies reported in Refs. [15; 16].
The principal results in this work are as follows:
(i) For a finite diffusivity in the SEP channel and equal attachment-detachment rates between the TASEP and SEP channels, both the TASEP and SEP steady state density profiles acquire complex space dependence. At the location of a discontinuity in the TASEP density, the SEP density has a strong space dependence.
(ii) For a diverging SEP diffusivity, the TASEP density profiles are identical to those in the well-known model of an open TASEP with LK. In the same limit, the SEP density profiles become flat with a value of \(1/2\).
(iii) For a vanishing SEP diffusivity, the TASEP density profiles reduce to those of an open isolated TASEP, whereas the SEP density profiles in the bulk strictly follow the TASEP density profiles.
(iv) The TASEP and SEP phase diagrams are shown to have a strong correspondence with each other.
(v) As the SEP diffusivity is reduced, a domain wall in the TASEP channel gets gradually delocalised.
Apart from its significance in nonequilibrium statistical mechanics, our model in addition has a biological inspiration as well: it may be viewed as a simple model for interacting bound and unbound molecular motors inside eukaryotic cells. Molecular motors inside eukaryotic cells transport cargo and are responsible for almost all kinds of cellular motility [17]. These are inherently nonequilibrium processes, sustained by the energy released in the hydrolysis of ATP (Adenosine triphosphate), producing ADP (Adenosine diphosphate) and inorganic phosphate. The molecular motors typically hop unidirectionally along the microtubules in cells. Examples of such molecular motors are the processive motors belonging to the kinesin family. However, due to fluctuations of both thermal and non-thermal origin, in course of their unidirectional hopping motion, these molecular motors can stochastically detach off to the surrounding cytoplasm. In the cytoplasm, these molecular motors diffuse around until they again get themselves attached to a filament. The cytoplasm thus effectively acts as a reservoir for these (unbound or detached) molecular motors. On the whole, thus, the bound molecular motors hop unidirectionally along the filaments, whereas the unbound molecular motors in the cytoplasm undergo diffusive motion, and these two can stochastically exchange molecular motors between them. We here construct and study a simple one dimensional model that reproduces these collective behaviours of transport in a cell.
The rest of this article is organised as follows. In Sec. II, we introduce our model. Next, in Sec. III we set up the mean field theory for our model. Then in Sec. IV we briefly discuss the stochastic simulations performed. Next, in Sec. V we extensively present and analyse the steady state densities and the associated phase diagrams in both the TASEP and SEP channels. In this Section, we study the nonequilibrium steady states with constant attachment detachment rates, together with a rate of symmetric hopping or diffusion that remains finite relative to the attachment-detachment rates of the LK dynamics, or the unidirectional hopping rates along the TASEP lane. Results from both MFT and MCS studies are presented. In Sec. VI, we illustrate gradual delocalisation of the domain walls as the diffusivity is reduced. In Sec. VII, we summarise and discuss our results. In Appendix, we present our mean-field results in another limit of the model, _viz._, space-dependent attachment-detachment rates together with a diffusive time scale that diverges relative to the typical particle residence time due to the LK dynamics.
## II Model
In this work, we investigate the nature of the non-equilibrium steady states of a coupled two-lane model consisting of two equally sized lanes with \(L\) lattice sites each, whose dynamics is governed by a totally asymmetric exclusion and a symmetric exclusion processes, respectively; see Fig. 1. The sites in each lane are labelled by the index \(i=1,..,L\). The dynamical update rules of this model consist of the following steps.
(a) Particles on the top lane (TASEP) may enter at the left end (\(i=1\)) at rate \(q\alpha\), stochastically hop from \(i=1\) to \(L\) unidirectionally subject to exclusion at rate \(q\), and leave the lane at the right end (\(i=L\)) at rate \(q\beta\)[16]. (Note that in a conventional study on pure open TASEP, usually \(q\) is set to unity.)
(b) On the bottom lane (SEP) particles hop with equal rate \(D\) in either direction subject to exclusion, and also may leave or enter this lane at the right (\(i=L\)) and left (\(i=1\)) end at rate \(1/2\).
(c) Particles hopping on these parallel tracks may detach from one lane and attach to the other, with generally site-dependent but equal attachment and detachment rates \(\omega_{i}\).
All these hopping and exchange processes in both the lanes are allowed under the strict constraint of an _on-site exclusion principle_, which forbids double-occupancy of particles in any of the lattice sites.
Time scales: The different time scales for our model are listed below:
\(\bullet\)\(\tau_{\rm TASEP}=q\) : Time-scale of the directed dynamics of the particles on the TASEP lane. This sets the time-scale for our model.
\(\bullet\)\(\tau_{\rm SEP}=D\) : Time-scale of the diffusive dynamics of the particles on the SEP lane.
\(\bullet\)\(\tau_{i}^{\times}=\omega_{i}^{-1}\) : Time-scale of the lane exchange mechanism which couples the filament to the surrounding reservoir.
Symmetries: This model admits the _particle-hole symmetry_, which will prove helpful in constructing and understanding the phase diagrams for the filament and the reservoir lanes. We note that the absence of a particle at any site on the two lanes can be interpreted as the presence of a vacancy or a hole at that position. A particle hopping from the left site to the empty lattice site to its right in the bulk may be considered as a hole hopping from the right to the left lattice site. Likewise, the entry of a particle from the left end of the lattice can be considered as an exit of a hole and vice-versa. Similarly for the particle exchange dynamics between the TASEP and SEP lanes, movement of a particle from (to) TASEP to (from) SEP may be viewed as a hole moving to (from) TASEP from (to) SEP. In fact, formally the model remains invariant under the transformation of all particles into holes, with a discrete transformation of sites \(i\leftrightarrow L-i\) and all pairs of the entry-exit rates, e.g., \(\alpha\leftrightarrow\beta\). These define the _particle-hole symmetry_ in this model. As a consequence of this symmetry, the phase diagram in the \(\alpha-\beta\) plane can be split into two complementary regions by the \(\alpha=\beta\) line. As a result, it suffices to understand the phase diagram for only one of the two regions. The phase behaviour of the system in the remaining region can be constructed and analysed by using the particle-hole symmetry.

Figure 1: Illustration of the two-lane model. We label the upper lane as the TASEP lane and the lower one as the SEP lane. Particles on the upper lane follow TASEP dynamics with hopping rate \(q\) in the bulk subject to exclusion and entry and exit rates \(q\alpha\) and \(q\beta\), respectively. Particles on the lower lane obey SEP dynamics with hopping rate \(D\) and possess entry rates \(1/2\), and also exit rates \(1/2\), at the left and right end, respectively. The local space-dependent exchange rate between the lanes is denoted by \(\omega_{i}\) and can depend on the site \(i\).
## III Mean-field theory
The microscopic dynamics of the model is prescribed by the rate equations for every site in the SEP and TASEP lanes, as discussed in Sec. II above. These equations are _not_ closed. The MFT approximation entails neglecting correlation effects and replacing the average of a product of densities by the product of the average densities in the steady state [18]. Although this is an uncontrolled approximation, it has been found to work with a high degree of accuracy in the original TASEP problem and its many variants subsequently (see, e.g., Refs. [9; 19; 20] as representative examples); we use the MFT here as a guideline in our analysis below.
The dynamical equations of motion for \(\rho_{i}\) and \(c_{i}\), the TASEP and SEP densities at site \(i\) in the TASEP and SEP lane respectively, are
\[\partial_{t}\rho_{i}=q\rho_{i-1}(1-\rho_{i})-q\rho_{i}(1-\rho_{i+1})+\omega_{i}[c_{i}(1-\rho_{i})-\rho_{i}(1-c_{i})], \tag{1}\]
\[\partial_{t}c_{i}=D[(c_{i-1}+c_{i+1})(1-c_{i})-c_{i}(2-c_{i-1}-c_{i+1})]-\omega_{i}[c_{i}(1-\rho_{i})-\rho_{i}(1-c_{i})]. \tag{2}\]
It is easy to see that the MFT equations (1) and (2) are invariant under the particle-hole symmetry defined above.
To proceed further, we first take the continuum approximation: we take \(L\to\infty\), which makes \(x\) a continuous variable between \(0\) and \(1\): \(x\in[0,1]\). Without any loss of generality, we assume unit geometric length for the whole lattice (both TASEP and SEP), and define a lattice constant \(\varepsilon=1/L\) that approaches zero as \(L\to\infty\). Thus, in the thermodynamic limit, \(\varepsilon\) serves as a small parameter. Further, we define \(\rho(x)=\langle\rho_{i}\rangle\) and \(c(x)=\langle c_{i}\rangle\) as the steady state densities of TASEP and SEP respectively at \(x\). In the steady state, we expand the different terms on rhs of (1) and (2) in a Taylor series in powers of \(\varepsilon\) to obtain
\[\frac{\partial\rho}{\partial t} = \omega(x)(c-\rho)+\frac{q}{L}(2\rho-1)\partial_{x}\rho+\frac{q}{2 L^{2}}\partial_{x}^{2}\rho, \tag{3}\] \[\frac{\partial c}{\partial t} = \omega(x)(\rho-c)+\frac{D}{L^{2}}\partial_{x}^{2}c. \tag{4}\]
Here, we have retained terms up to \(\mathcal{O}(\varepsilon^{2})\equiv\mathcal{O}(1/L^{2})\) in the Taylor expansions above, discarding all higher order terms. We note the different \(L\)-dependence of the terms in (3) and (4). In order to make the nonconserving Langmuir kinetics terms compete with the hopping terms in (3) and diffusion in (4), we define \(\omega\equiv\Omega/L^{2}\), where \(\Omega\sim\mathcal{O}(1)\)[9], and set \(q=1/L\). Thus with \(q=1/L\), particles enter and exit the TASEP channel at effective rates \(\alpha/L\) and \(\beta/L\), and hop along the TASEP channel at rate \(1/L\). With these parameter rescalings, we obtain the steady state in the thermodynamic limit \(L\to\infty\)
\[\Omega(x)(c-\rho)+(2\rho-1)\partial_{x}\rho=0, \tag{5}\] \[D\partial_{x}^{2}c+\Omega(x)(\rho-c)=0. \tag{6}\]
Equations (5) and (6) are the starting point of the MFT for this model, which we discuss next.
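Although the profiles below are obtained analytically in limiting cases, the steady states described by Eqs. (5) and (6) can also be reached numerically by relaxing the time-dependent continuum equations (3) and (4), with the rescalings \(\omega=\Omega/L^{2}\), \(q=1/L\) absorbed. The sketch below is one possible explicit relaxation scheme (ours, with illustrative parameter values, a small added viscosity for stability, and boundary conditions \(\rho(0)=\alpha\), \(\rho(1)=1-\beta\), \(c(0)=c(1)=1/2\)); it is not the procedure used in our analysis.

```python
import numpy as np

def relax_mft(alpha, beta, Omega, D, N=400, dt=2e-5, n_steps=500_000, eps=5e-4):
    """Relax rho(x), c(x) toward the steady state of Eqs. (5)-(6).

    `eps` regularizes the hyperbolic TASEP term; dt must be small enough for
    this explicit scheme to stay stable, and n_steps long enough to converge.
    """
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]
    rho = np.full(N, 0.5)
    c = np.full(N, 0.5)
    rho[0], rho[-1] = alpha, 1.0 - beta           # TASEP boundary conditions
    for _ in range(n_steps):
        drho = np.gradient(rho, dx)
        d2rho = np.gradient(drho, dx)
        d2c = np.gradient(np.gradient(c, dx), dx)
        rho[1:-1] += dt * (Omega * (c - rho) + (2 * rho - 1) * drho + eps * d2rho)[1:-1]
        c[1:-1] += dt * (Omega * (rho - c) + D * d2c)[1:-1]
        c[0], c[-1] = 0.5, 0.5                    # SEP boundary conditions
    return x, rho, c

# x, rho, c = relax_mft(alpha=0.2, beta=0.6, Omega=0.3, D=1.0)
```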
We note that the full MFT equations (5) and (6) have the following well-known limits. Indeed, there are two limits in which the MFT equations (5) and (6) reduce to two well-known models whose MFT solutions are already known. These two limits are characterised by the limiting values of \(\Omega/D\). Consider now \(\Omega(x)\) as constant, \(\Omega(x)=\Omega\) at all \(x\) together with
(i) \(D\to\infty\), i.e., \(\Omega/D\), for a given \(\Omega\sim\mathcal{O}(1)\), vanishes. In this limit from (6), assuming \(c(0)=1/2=c(1)\), \(c(x)=1/2\) everywhere. Substituting this in (5), we get
\[\frac{\Omega}{2}(1-2\rho)+(2\rho-1)\partial_{x}\rho=0. \tag{7}\]
This is identical to the MFT equation for \(\rho(x)\) in the LK-TASEP problem with an equal attachment-detachment rate of value \(\Omega/2\)[21]. Physically, as \(D\) diverges, the diffusive dynamics in the SEP lane becomes extremely fast, effectively rendering attachment-detachment events with the SEP lane insignificant relative to the in-lane diffusion, over TASEP hopping and attachment-detachment time-scales. This means the average steady state density in the SEP lane is \(1/2\), independent of the precise values of the attachment-detachment rates, as we found above. This in turn means that the attachment-detachment events at rate \(\Omega\) to/from the TASEP will take place with a background SEP density of \(1/2\), unaffected by the TASEP dynamics, as we see from (7).
(ii) \(D\to 0\), i.e., \(\Omega/D\) diverges for a given \(\Omega\sim\mathcal{O}(1)\). In this limit from (6), \(c(x)=\rho(x)\) is the solution in the bulk. Substituting this in (5), we get
\[(2\rho-1)\partial_{x}\rho=0, \tag{8}\]
which is nothing but the MFT equation for the steady state density in the standard TASEP problem, giving the
LD, HD and MC phases [18]. Actually, for vanishingly small \(D\), the only dynamics in SEP are the attachment-detachment events, which have the effect of locally decreasing the differences in the densities in the TASEP and SEP lanes. Indeed, when \(D\to 0\), different sites of the SEP lane virtually decouple from each other, and only exchange particles with the corresponding site in the TASEP lane having a density \(\rho(x)\). In the steady state, \(c(x)=\rho(x)\) is then achieved, with no further time evolution of the SEP density.
We thus find that for a fixed \(\Omega\), a key parameter which controls the shape of the density profiles on both the TASEP and SEP lanes is the magnitude of the effective diffusion constant \(D\). If diffusion of the SEP lane is very slow, \(D\to 0\), we find from Eq. (6) that the density on that reservoir lane becomes effectively slaved to the density on the filament, \(c(x)=\rho(x)\). Hence, in this limit, the filament dynamics is independent of the reservoir and simply given by that of the TASEP. In contrast, in the opposite limit where \(D\) is large, i.e., diffusion on the reservoir lane is fast, \(c(x)\) becomes independent of \(\rho(x)\) and shows a flat profile, e.g. \(c(x)=1/2\) with \(c(x=0)=c(x=1)=1/2\), as the boundary conditions. In this case the reservoir lane simply acts as a reservoir with a constant particle density similar to the TASEP-LK model with an attachment rate \(\Omega_{A}=\Omega/2\) and a detachment rate \(\Omega_{D}=\Omega/2\), respectively [9]. Notice that independent of \(\Omega(x)\) and \(D\), \(\rho(x)=1/2=c(x)\) remain solutions of the MF equations (3) and (4).
As an aside, we also note that solving the full MFT equations (5) and (6) with space-dependent \(\Omega(x)\) and an arbitrary \(D\) is analytically challenging and also not particularly insightful. Instead, we solve (5) and (6) in the following cases: (i) \(\Omega/D\) finite but small with \(\Omega\) being independent of \(x\), (ii) \(\Omega/D\) finite but large with \(\Omega\) being independent of \(x\). We also briefly consider \(\Omega(x)\) to be space varying, but assume \(D\) diverges, \(\Omega/D\) vanishes at all \(x\). The MFT equation for \(\rho(x)\) now becomes
\[\Omega(x)(\frac{1}{2}-\rho)+(2\rho-1)\partial_{x}\rho=0. \tag{9}\]
This is the MFT equation for the LK-TASEP problem but with an equal, space varying attachment-detachment rate. This is discussed in Appendix.
## IV MCS simulations
The TASEP and SEP lanes of the model consist of \(L\) sites each, labelled by an index \(i\) with \(i\in[1,L]\). Let \(\rho_{i}(t)\), which is either 0 or 1, be the occupation at site \(i\) of the TASEP channel, and \(c_{i}(t)\), which is again either 0 or 1, be the occupation at site \(i\) of the SEP channel at time \(t\). We perform MCS studies of the model subject to the update rules (a)-(c) described above in Sec. II by using a random sequential updating scheme. The particles enter the system through the left most site (\(i=1\)) in the TASEP channel at a fixed rate \(q\alpha\), subject to exclusion, i.e., if \(\rho_{1}=0\). After hopping through the system from \(i=1\) to \(L\) at rate \(q\), subject to exclusion, the particles exit the system from \(i=L\) at a fixed rate \(q\beta\). Here, \(\alpha\) and \(\beta\) are the two simulation parameters, which are varied to produce different steady states. We have chosen \(q=1/L\) in our MCS studies. In the SEP channel, particles can enter at rate \(1/2\) if the site \(i=1\) or \(i=L\) of the SEP channel is vacant, or if it is filled, a particle either leaves the system through the left or right end respectively at rate \(1/2\), or hops to the site \(i=2\) or \(i=L-1\) at rate \(D\), if the target site is empty. In general, in the bulk of the SEP channel, a particle can hop to its left or right site with equal probability at rate \(D\), provided the target site is empty. We use \(D\leq 1\). Lastly, we allow exchange of particles between the SEP and TASEP channels at any site \(i\), subject to exclusion, at rate \(\omega\). After reaching the steady states, the density profiles are calculated and temporal averages are performed. This produces time-averaged, space-dependent density profiles, given by \(\langle\rho_{i}(t)\rangle\), and \(\langle c_{i}(t)\rangle\); here \(\langle...\rangle\) implies temporal averages over steady states. The simulations have been performed with \(L=1000\) up to \(10^{9}\) Monte-Carlo steps. Lastly, all the measurements are made in the steady states, which are reached by the system after spending certain transient times.
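For concreteness, a bare-bones sketch of one random-sequential update sweep implementing rules (a)-(c) is given below; it is a schematic reimplementation with illustrative rates (boundary bulk hops in the SEP are omitted for brevity), not the production code used for the results reported here.

```python
import numpy as np

def mc_sweep(rho, c, q, alpha, beta, D, omega, rng):
    """One random-sequential Monte Carlo sweep of the two-lane model.

    rho, c : occupation arrays (0/1) for the TASEP and SEP lanes.
    """
    L = len(rho)
    for _ in range(L):
        i = rng.integers(L)
        # --- TASEP lane: entry, bulk hop, exit ---
        if i == 0 and rho[0] == 0 and rng.random() < q * alpha:
            rho[0] = 1
        elif i == L - 1 and rho[L - 1] == 1 and rng.random() < q * beta:
            rho[L - 1] = 0
        elif i < L - 1 and rho[i] == 1 and rho[i + 1] == 0 and rng.random() < q:
            rho[i], rho[i + 1] = 0, 1
        # --- SEP lane: boundary exchange with reservoirs, symmetric bulk hops ---
        j = rng.integers(L)
        if j in (0, L - 1) and rng.random() < 0.5:
            c[j] = 1 - c[j]                       # enter if empty, leave if filled
        elif 0 < j < L - 1 and c[j] == 1 and rng.random() < D:
            k = j + 1 if rng.random() < 0.5 else j - 1
            if c[k] == 0:
                c[j], c[k] = 0, 1
        # --- lane exchange at rate omega, subject to exclusion ---
        m = rng.integers(L)
        if rng.random() < omega and rho[m] != c[m]:
            rho[m], c[m] = c[m], rho[m]
    return rho, c

rng = np.random.default_rng(1)
L = 1000
rho, c = np.zeros(L, dtype=int), np.zeros(L, dtype=int)
# for _ in range(10**5): mc_sweep(rho, c, 1.0 / L, 0.3, 0.7, 1.0, 0.3 / L**2, rng)
```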
## V Steady state densities
In the previous Section, we have discussed that for a fixed \(\Omega\) the diffusion constant \(D\) determines the steady state density profiles in both the TASEP and SEP lanes. For intermediate values of \(D\) the density profiles on both lanes deviate from the known asymptotic results. With increasing the magnitude of \(D\) one can study the crossover from TASEP to TASEP-LK dynamics, and it will be interesting to see how the density profiles and the ensuing phase diagram change as \(D\) is varied. Before we proceed to solve the MFT equations, we note the following general result. The microscopic dynamical rules set up in Sec. II above clearly maintain overall particle conservation (i.e., in the combined TASEP and SEP) locally in the bulk of the system, since particle entry or exit events (considering the overall system) take place only at the boundaries, although individually the TASEP and SEP lanes do not conserve particles in the bulk locally due to the particle exchanges between them. This fact is clearly reflected in the MFT equations (1) and (2), or in their continuum analogues (3) and (4), which can be combined to show that the sum \(\rho(x,t)+c(x,t)\) is a conserved density. Indeed, the MFT equations (5) and (6) can be combined to produce a conservation law given by
\[(2\rho(x)-1)^{2}+4D\partial_{x}c=J=1-4j_{\rm tot} \tag{10}\]
where \(j_{\rm tot}\) is the total current through the SEP and TASEP channels combined (see also later). Equation (10) reveals the following: Since the steady state
TASEP density \(\rho(x)\) can have at most a finite discontinuity (e.g., at the location of a domain wall), so will \(\partial_{x}c\) at the same location to maintain (10). Now since \(\partial_{x}c\) can have at most a finite discontinuity, steady state SEP density \(c(x)\) must be continuous everywhere (but not necessarily spatially uniform). Nonetheless, at the location of a discontinuity in \(\rho(x)\), \(c(x)\) should have a strong space dependence, as opposed to where \(\rho(x)\) itself is continuous. We shall see below that our actual MFT solutions for \(\rho(x)\) and \(c(x)\) bear these features.
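For completeness, the following short derivation (ours) makes the origin of Eq. (10) explicit. Adding Eqs. (5) and (6), the exchange terms cancel and
\[(2\rho-1)\partial_{x}\rho+D\partial_{x}^{2}c=\partial_{x}\Big{[}\tfrac{1}{4}(2\rho-1)^{2}+D\partial_{x}c\Big{]}=0,\]
so a single integration over \(x\), together with \((2\rho-1)^{2}=1-4\rho(1-\rho)\), gives
\[(2\rho-1)^{2}+4D\partial_{x}c=J=1-4\big{[}\rho(1-\rho)-D\partial_{x}c\big{]}=1-4j_{\rm tot},\]
identifying \(j_{\rm tot}=\rho(1-\rho)-D\partial_{x}c\) as the total (TASEP plus SEP) current.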
We solve the MFT equations (6) and (5) perturbatively, assuming \(\Omega/D\) to be large or small. Given the above discussions, interesting features are expected when \(\Omega/D\) is finite (can be large or small or just \(\mathcal{O}(1)\)). We then expect \(c(x)\) to be neither \(1/2\), nor equal to \(\rho(x)\) in bulk. Likewise, \(\rho(x)\) is expected to be neither one of the LK-TASEP solutions, or standard TASEP solutions in the bulk.
### MFT for large \(D\)
We first consider "large but finite \(D\)", i.e., small but non-zero \(\Omega/D\) for a given \(\Omega\sim\mathcal{O}(1)\). In this limit, we solve the MFT equations by perturbatively expanding around the limiting solutions \(\rho(x)=\rho_{\rm LK}(x)\) and \(c(x)=1/2\). For large but finite \(D\), we expect small modifications to \(\rho(x)=\rho_{\rm LK}(x)\) and \(c(x)=1/2\). We thus expect phases in the TASEP lane similar to those reported in Ref. [9] to emerge. Furthermore, the exchange of particles between the TASEP and SEP lanes should have the physical effects of _reducing_ locally the difference in the densities in the TASEP and SEP lanes. This means whenever \(\rho(x)>(<)1/2\), we expect \(c(x)>(<)1/2\). This in turn suggests that the steady state density in the SEP lane should be excess (deficit) relative to \(1/2\), the steady state density of an isolated SEP with equal entry and exit rates. This picture is consistent with the form of the MF equation (6) with \(\Omega(x)\) being assumed to be a constant. Since \(\partial_{x}^{2}c(x)\) that gives the local curvature of \(c(x)\) is less than zero for \(\rho(x)>c(x)\), expected in the HD phase, \(c(x)\) should resemble, loosely speaking, an inverted "U", whereas for \(\rho(x)<c(x)\), expected in the LD phase, \(c(x)\) should resemble, loosely speaking, an upward "U". These considerations suggest that the SEP channel can have an average density more, less or equal to \(1/2\). We call these _excess, deficit_ and _neutral_ phases of SEP. We will see below that these expectations are borne by our MFT solutions.
To proceed with our MFT solutions valid for \(\Omega/D\ll 1\), we write
\[\rho(x) =\rho_{\rm LK}(x)+\delta\rho(x), \tag{11}\] \[c(x) =\frac{1}{2}+\delta c(x). \tag{12}\]
Here, \(\rho_{\rm LK}(x)\) is the well-known solution of the LK-TASEP problem and satisfies
\[(2\rho_{\rm LK}-1)(\partial_{x}\rho_{\rm LK}-\frac{\Omega}{2})=0, \tag{13}\]
giving
\[\rho_{\rm LK}(x)=\frac{1}{2}\ \ \ {\rm or}\ \ \ \rho_{\rm LK}(x)=\frac{\Omega}{2}x+ \rho_{0}. \tag{14}\]
Here, \(\rho_{0}\) is a constant of integration, which may be evaluated by using the boundary conditions. We set
\[\rho(0) =\alpha=\rho_{\rm LK}(0), \tag{15}\] \[\rho(1) =1-\beta=\rho_{\rm LK}(1). \tag{16}\]
Furthermore, \(\delta\rho(x)\) and \(\delta c(x)\) are assumed to be "small" deviations from \(\rho_{\rm LK}(x)\) and \(c(x)=1/2\). In particular, \(\delta c(x)\) satisfies
\[\partial_{x}^{2}\delta c(x)+\frac{\Omega}{D}\left[\rho_{\rm LK}+\delta\rho- \frac{1}{2}-\delta c\right]=0. \tag{17}\]
We know that in the limit \(\Omega/D\to 0\), \(c(x)\to 1/2\) and hence \(\delta c(x)\to 0\). We can thus write
\[\delta c(x)=f(\frac{\Omega}{D}), \tag{18}\]
with \(f(0)=0\). This suggests that to the lowest order in \(\Omega/D\), \(\delta c(x)\) should scale with \(\Omega/D\). This further implies that \(\delta\rho(x)\), which vanishes as \(\delta c\to 0\), should also scale with \(\Omega/D\) to the lowest order in \(\Omega/D\). Therefore, to the lowest order in \(\Omega/D\), \(\delta c(x)\) follows
\[\partial_{x}^{2}\delta c(x)+\frac{\Omega}{D}[\rho_{\rm LK}(x)-\frac{1}{2}]=0, \tag{19}\]
where \(\rho_{\rm LK}(x)\) is given by (14). Since \(c(x)=1/2\) at \(x=0,1\), we must have \(\delta c(x)=0\) at \(x=0,1\). If we choose \(\rho_{\rm LK}(x)=1/2\), we get \(\delta c(x)=0\) trivially, giving \(c(x)=1/2\) in the bulk. This is not surprising, since \(\rho(x)=1/2=c(x)\) is a solution in the bulk. Non-trivial solution for \(\delta c(x)\) is obtained if we set \(\rho_{\rm LK}(x)=(\Omega/2)\,x+\rho_{0}\). Substituting this in (19) and integrating with respect to \(x\) twice, we obtain
\[\delta c(x)=-\frac{\Omega}{D}\!\left[\frac{\Omega x^{3}}{12}+(\rho_{0}-\frac{ 1}{2})\frac{x^{2}}{2}\right]+\overline{c}_{1}x+\overline{c}_{2}. \tag{20}\]
Constants \(\overline{c}_{1},\,\overline{c}_{2}\) are the two constants of integration, which may be evaluated by using the boundary conditions. At \(x=0\), \(c=1/2\) giving \(\delta c(0)=0\). This gives \(\overline{c}_{2}=0\). We further have at \(x=1\), \(c=1/2\), giving \(\delta c(1)=0\). From this condition we obtain
\[\overline{c}_{1}=\frac{\Omega}{D}\left[\frac{\Omega}{12}+\frac{1}{2}(\rho_{0}- 1/2)\right] \tag{21}\]
giving
\[\delta c(x)=\frac{\Omega}{D}\left[\frac{\Omega}{12}(x-x^{3})+\frac{1}{2}(\rho_ {0}-\frac{1}{2})(x-x^{2})\right]. \tag{22}\]
Notice that \(\delta c(x)\) and hence \(c(x)\) depend explicitly on the boundary conditions on \(\rho(x)\) through \(\rho_{0}\). The full solution of the steady state SEP density \(c(x)\) is given by
\[c(x)=\frac{1}{2}+\delta c(x) \tag{23}\] \[=\frac{1}{2}+\frac{\Omega}{D}\left[\frac{\Omega}{12}(x-x^{3})+ \frac{1}{2}(\rho_{0}-\frac{1}{2})(x-x^{2})\right].\]
Clearly, \(c(x)=1/2\) at \(x=0,1\). Since \(\rho_{0}\) is the boundary condition on \(\rho(x)\) either at the left or at the right end, depending on whether we are considering the LD or HD phase of the TASEP, there is a distinct solution for \(c(x)\) corresponding to each of \(\rho_{\rm LD}(x)\) and \(\rho_{\rm HD}(x)\), the steady state density profiles in the LD and HD phases, respectively; see below.
We now solve for \(\rho(x)\). We start from Eq. (5), which may be written as
\[(2\rho(x)-1)[\partial_{x}\rho(x)-\hat{\Omega}]=-\Omega c(x)+\frac{\Omega}{2}, \tag{24}\]
where \(\hat{\Omega}\equiv\Omega/2\). Now write \(\rho(x)=\rho_{\rm LK}(x)+\delta\rho(x)\), where \(\rho_{\rm LK}(x)\) satisfies (14). Then \(\delta\rho(x)\) satisfies the following equation.
\[(2\rho_{\rm LK}-1)\partial_{x}\delta\rho=-\Omega\frac{\Omega}{D}\left[\frac{ \Omega}{12}(x-x^{3})+\frac{1}{2}(\rho_{0}-\frac{1}{2})(x-x^{2})\right] \tag{25}\]
to the lowest order in \(\Omega/D\). Equation (25) can be solved by standard methods, which are straightforward but lengthy. We give the solution below.
\[\delta\rho(x)=-\Omega\frac{\Omega}{D}\left[\frac{k_{1}x^{3}}{3}+\frac{k_{2}x^{2}}{2}+k_{3}x\right]+k^{\prime}\frac{\Omega}{D}\ln\left|x+\frac{2\rho_{0}-1}{\Omega}\right|+k_{0}. \tag{26}\]
Clearly, \(\delta\rho(x)\) depends linearly on \(\Omega/D\), and vanishes, as it should, when \(\Omega/D\) vanishes. Here, \(k_{1},k_{2},k_{3}\) are constants given by
\[k_{1}=-\frac{1}{12}, \tag{27}\]
\[k_{2}=-\frac{2\rho_{0}-1}{6\Omega}, \tag{28}\]
\[k_{3}=\frac{1}{\Omega}\left[\frac{\Omega}{12}+\frac{(2\rho_{0}-1)}{4}+\frac{(2\rho_{0}-1)^{2}}{6\Omega}\right], \tag{29}\]
\[k^{\prime}=(2\rho_{0}-1)k_{3}. \tag{30}\]
Unsurprisingly, \(\delta\rho(x)\) depends on \(\rho_{0}\), which in turn is fixed by the boundary condition on \(\rho_{\rm LK}(x)\). We first focus on the LD phase. The constant of integration \(k_{0}\) can be obtained by using the boundary conditions \(\delta\rho(x)=0\) at \(x=0\). Now, using the boundary condition at \(x=0\), \(\rho(0)=\alpha=\rho_{\rm LK}(0)\) (which means \(\delta\rho(x)=0\) at \(x=0\), as we have used), we get \(\rho_{0}=\alpha\). Then using (11) we obtain
\[\rho_{\rm LD}(x)=\alpha+\frac{\Omega x}{2}-\Omega\frac{\Omega}{D}\left[\frac{k_{1}x^{3}}{3}+\frac{k_{2}x^{2}}{2}+k_{3}x\right]+\frac{\Omega k^{\prime}}{D}\ln\left|x+\frac{2\alpha-1}{\Omega}\right|-\frac{\Omega k^{\prime}}{D}\ln\left|\frac{2\alpha-1}{\Omega}\right|. \tag{31}\]
As discussed above, corresponding to \(\rho_{\rm LD}(x)\) as given in (31), the SEP density is given by \(c_{-}(x)\), where
\[c_{-}(x)=\frac{1}{2}+\frac{\Omega}{D}\left[\frac{\Omega}{12}(x-x^{3})+\frac{1 }{2}(\alpha-\frac{1}{2})(x-x^{2})\right]. \tag{32}\]
Likewise, we can obtain \(\rho_{\rm HD}(x)\) by using the boundary condition at \(x=1\), \(\rho(1)=1-\beta=\rho_{\rm LK}(1),\,\delta\rho(1)=0\). We get
\[\rho_{\rm HD}(x)=1-\beta+\frac{\Omega}{2}(x-1)-\Omega\frac{\Omega}{D}\left[\frac{k_{1}}{3}(x^{3}-1)+\frac{k_{2}}{2}(x^{2}-1)+k_{3}(x-1)\right]+\frac{\Omega k^{\prime}}{D}\ln\left|x-1+\frac{1-2\beta}{\Omega}\right|-\frac{\Omega k^{\prime}}{D}\ln\left|\frac{1-2\beta}{\Omega}\right|. \tag{33}\]
Then corresponding to \(\rho_{\rm HD}(x)\) as given in (33), the SEP density is \(c_{+}(x)\) given by
\[c_{+}(x)=\frac{1}{2}+\frac{\Omega}{D}\left[\frac{\Omega}{12}(x-x^{3})+\frac{1 }{2}(1-\beta-\frac{\Omega}{2}-\frac{1}{2})(x-x^{2})\right]. \tag{34}\]
Notice that in addition to the explicitly \(x\)-dependent solutions \(\rho_{\rm LD}(x)\) and \(\rho_{\rm HD}(x)\) above, the MFT equations (5) and (6) also admit spatially uniform solutions \(\rho=1/2\), \(c=1/2\); \(\rho=1/2\) obviously corresponds to the MC phase of the TASEP. With these solutions for the steady state densities \(\rho(x)\) and \(c(x)\) phase diagrams for both the TASEP and SEP lanes can be constructed in the \(\alpha-\beta\) plane. Since for large \(D\), i.e., for small \(\Omega/D\), we only expect small modifications of \(\rho(x)\) from \(\rho_{\rm LK}(x)\) and of \(c(x)\) from \(1/2\) in the bulk, we expect the TASEP phase diagram to be close to the one obtained in the LK-TASEP problem [9], albeit with an equal attachment-detachment rate \(\Omega^{\prime}\equiv\Omega/2\).
In our MCS studies with \(D=1.0\) and \(\Omega=0.3\) (\(\Omega/D=0.3<1\)), we find the so-called "pure phases" of TASEP (albeit generally space-dependent), _viz._, LD, HD and MC phases and also detect the "mixed phases", e.g., LD-MC, HD-MC, LD-MC-HD and LD-HD phases. In these mixed phases, part of \(\rho(x)\) is in one of the phases, and the remaining part is in another phase. The SEP density profiles may be characterised as follows. We define an average SEP density \(\overline{c}\) via
\[\overline{c}\equiv\int_{0}^{1}c(x)dx. \tag{35}\]
Clearly, \(\overline{c}>,<\) or \(=1/2\) would imply excess, deficit and neutral phases. Furthermore, as we have discussed above, \(c(x)\) tends to follow \(\rho(x)\) in the bulk, although partially for non-zero \(\Omega/D\). This implies, as our MCS studies on the SEP density profile reveal, \(c(x)-1/2\) can cross zero in the bulk either once, or none at all, or remain zero over a finite extent of \(x\).
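A simple numerical classification of the SEP phases along these lines, given any measured profile \(c(x)\) on a grid, might look like the sketch below; the tolerance is an arbitrary illustrative choice, and the example evaluates the large-\(D\) MFT profile \(c_{-}(x)\) of Eq. (32).

```python
import numpy as np

def sep_phase(x, c, tol=1e-3):
    """Classify the SEP profile via the order parameter O_c = cbar - 1/2, Eq. (35)."""
    order_param = np.trapz(c, x) - 0.5
    if order_param > tol:
        return "excess"
    if order_param < -tol:
        return "deficit"
    return "neutral"

# Example with alpha = 0.2, Omega = 0.3, D = 1 in Eq. (32):
x = np.linspace(0.0, 1.0, 1001)
c_minus = 0.5 + (0.3 / 1.0) * ((0.3 / 12) * (x - x**3) + 0.5 * (0.2 - 0.5) * (x - x**2))
phase = sep_phase(x, c_minus)        # "deficit", since alpha < 1/2 here
```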
We present an MFT phase diagram of the TASEP lane in Fig. 2 and an MFT phase diagram of the SEP lane in Fig. 3, both for \(D=1.0\) and \(\Omega=0.3\). We
discuss how to obtain the phases and the corresponding phase boundaries between the phases in the MFT. Notice that both \(\rho_{\rm LD}(x)\) and \(\rho_{\rm HD}(x)\) are growing solutions of \(x\). However, unlike in Ref. [9] they do not grow linearly with \(x\); there are (small) nonlinear modifications to the linear growth profiles, which are due to the (large but) finite diffusivity in the SEP channel. Although there is no strict particle conservation in the bulk of the TASEP lane due to the attachment-detachment events, particle current is conserved _locally_, as locally the rate of particle nonconserving attachment-detachment events actually vanish in the thermodynamic limit [9]. As in the LK-TASEP model, steady state current here in the TASEP lane is space-dependent but continuous. Corresponding to the steady state densities \(\rho_{\rm LD}(x),\rho_{\rm HD}(x)\) and \(1/2\), we define currents
\[j_{\rm LD}(x)=\rho_{\rm LD}(x)(1-\rho_{\rm LD}(x)), \tag{36}\]
\[j_{\rm HD}(x)=\rho_{\rm HD}(x)(1-\rho_{\rm HD}(x)), \tag{37}\]
\[j_{\rm MC}(x)=\frac{1}{4}. \tag{38}\]
Using the above expressions of the currents and their continuity across various phase boundaries [9], we determine the location of the phase boundaries in the \(\alpha-\beta\) plane. We set \(j_{\rm LD}(x_{\alpha})=1/4\), equivalently \(\rho_{\rm LD}(x_{\alpha})=1/2\), where \(x_{\alpha}\) is the coordinate separating the LD phase from the MC phase, i.e., \(\rho(x<x_{\alpha})=\rho_{\rm LD}(x<x_{\alpha})<1/2\). Similarly, we set \(j_{\rm HD}(x_{\beta})=1/4\), equivalently \(\rho_{\rm HD}(x_{\beta})=1/2\), where \(x_{\beta}\) is the coordinate separating the HD phase from the MC phase, i.e., \(\rho(x>x_{\beta})=\rho_{\rm HD}(x>x_{\beta})>1/2\). Depending upon the relative positions of \(x_{\alpha}\) and \(x_{\beta}\), various different density profiles emerge that we list briefly. (i) \(x_{\beta}>x_{\alpha}\geq 1\) means the LD phase, (ii) \(x_{\beta}>1\), \(0<x_{\alpha}<1\) means the mixed LD-MC phase, with \(\rho(x)<1/2\) for \(0\leq x<x_{\alpha}\) and \(\rho(x)=1/2\) for \(x_{\alpha}<x<1\), (iii) \(0<x_{\alpha}<x_{\beta}<1\) gives a three-phase coexistence with \(\rho(x)<1/2\) for \(0\leq x<x_{\alpha}\), \(\rho(x)=1/2\) for \(x_{\alpha}<x<x_{\beta}\) and \(\rho(x)>1/2\) for \(x_{\beta}<x<1\). Further, \(x_{\alpha}<0\), \(0<x_{\beta}<1\) gives the HD-MC phase. It is also possible to have \(x_{\alpha}>x_{\beta}\), whence one actually has the LD-HD phase with a domain wall at \(x_{w}\).The position \(x_{w}\) may be obtained from the condition \(\rho_{\rm LD}(x_{w})+\rho_{\rm HD}(x_{w})=1\). Since this condition gives a unique solution for \(x_{w}\), the domain wall in question is a _localised domain wall_ or LDW located at \(x_{w}\) with \(0<x_{w}<1\).
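Since the full profiles (31) and (33) preclude a closed-form solution for \(x_{w}\), the condition \(\rho_{\rm LD}(x_{w})+\rho_{\rm HD}(x_{w})=1\) can be solved numerically. The sketch below does this by bisection using, for simplicity, only the leading-order (linear-in-\(x\)) parts of the profiles, so it illustrates the procedure rather than reproducing the contour-plot construction used here.

```python
def domain_wall_position(alpha, beta, Omega, tol=1e-10):
    """Bisection for x_w with rho_LD(x_w) + rho_HD(x_w) = 1 (leading order in Omega/D)."""
    rho_ld = lambda x: alpha + 0.5 * Omega * x               # Eq. (31), leading-order part
    rho_hd = lambda x: 1.0 - beta + 0.5 * Omega * (x - 1.0)  # Eq. (33), leading-order part
    f = lambda x: rho_ld(x) + rho_hd(x) - 1.0
    lo, hi = 0.0, 1.0
    if f(lo) * f(hi) > 0:
        return None                        # no domain wall inside the lattice
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) * f(hi) <= 0 else (lo, mid)
    return 0.5 * (lo + hi)

x_w = domain_wall_position(alpha=0.2, beta=0.3, Omega=0.3)   # ~0.833 for these rates
```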
The various phase boundaries in the \(\alpha-\beta\) plane may be obtained in terms of the conditions on the densities as follows: setting (i) \(\rho_{\rm LD}(x_{\alpha}=1)=1/2\) gives the phase boundary between the LD and LD-MC phases, (ii) \(\rho_{\rm HD}(x_{\beta}=0)=1/2\), gives the phase boundary between the HD and HD-MC phases, (iii) \(\rho_{\rm LD}(x_{\alpha}=0)=1/2\) gives the phase boundary between the LD-MC and MC phases, (iv) \(\rho_{\rm HD}(x_{\beta}=1)=1/2\) gives the phase boundary between the HD-MC and MC phases, (v) \(\rho_{\rm LD}(x_{w}=1)+\rho_{\rm HD}(x_{w}=1)=1\) gives the boundary between the LD and LD-HD phases (since this condition means that the domain wall is just at the right boundary \(x=1\)), (vi) \(\rho_{\rm LD}(x_{w}=0)+\rho_{\rm HD}(x_{w}=0)=1\) gives the boundary between the LD-HD and HD phases (since this condition means that the domain wall is just at the left boundary \(x=0\)), (vii) \(\rho_{\rm LD}(x_{w}=x_{\alpha}=x_{\beta})+\rho_{\rm HD}(x_{w}=x_{\alpha}=x_{ \beta})=1\) together with \(\rho_{\rm LD}(x_{\alpha}=x_{\beta})=\rho_{\rm HD}(x_{\alpha}=x_{\beta})=1/2\) gives the boundary between the LD-HD and LD-HD-MC phases. These two conditions ensure that on the associated phase boundary, the size of the MC phase given by \(x_{\beta}-x_{\alpha}\) just vanishes, indicating the threshold of the three-phase coexistence. The above-listed conditions are formally equivalent to solving for \(x_{\alpha},x_{\beta},x_{w}\) and set conditions on them, as done in Ref. [9]. However, in Ref. [9] due to the linear dependence of the solutions of \(\rho(x)\) on \(x\), it was possible to explicitly solve for \(x_{\alpha},x_{\beta},x_{w}\). The far more complex \(x\)-dependence of \(\rho_{\rm LD}(x)\) and \(\rho_{\rm HD}(x)\) rules out explicitly solving for \(x_{\alpha},x_{\beta},x_{w}\). Instead, we obtain the phase boundaries by means of drawing contour plots in the \(\alpha\)-\(\beta\) plane for given values of \(D\) and \(\Omega\) corresponding to the conditions on the TASEP densities listed above; see the phase diagram in Fig. 2. Notice that notwithstanding the far more complex \(x\)-dependence of \(\rho_{\rm LD}(x)\) and \(\rho_{\rm HD}(x)\), the phase diagram in Fig. 2 have straight lines parallel to either the \(\alpha\)- or \(\beta\)-axis as the phase boundaries between the LD and LD-MC phases, LD-MC and MC phases, HD and HD-MC phases, and MC and HD-MC phases. This is because \(\rho_{\rm LD}(x)\) and \(\rho_{\rm HD}(x)\) are independent, respectively, of \(\beta\) and \(\alpha\). This explains these phase boundaries. It is also clear that the phase diagram is invariant under the particle-hole symmetry. This may be seen by exchanging the \(\alpha\) and \(\beta\)-axes and redrawing the phase boundaries. The resulting phase diagram is identical to that in Fig. 2.
An MFT phase diagram of the SEP lane for \(\Omega=0.3,D=1\) is shown in Fig. 3. Phase space regions with excess, deficit and neutral particle numbers in SEP are shown, which are characterised by the mean SEP density \(\overline{c}\) [see Eq. (35) above]. Due to the tendency of the SEP density \(c(x)\) to follow the TASEP density \(\rho(x)\) in the steady state, the pure LD, HD and MC phases in
Figure 2: Mean-field TASEP phase diagram in the \(\alpha-\beta\) plane with \(D=1\), \(\Omega=0.3\).
the TASEP lane correspond, respectively, to deficit, excess and neutral particle numbers in the SEP. Furthermore, the quantity \(c(x)-1/2\) has a definite sign in the bulk in those phase space regions of the SEP which correspond to the pure LD and HD phases in the TASEP lane. There are, however, regions in the SEP phase diagram, corresponding to the LD-HD phases in the TASEP lane, where \(c(x)-1/2\) crosses zero once in the bulk. In the remaining regions of the SEP phase diagram, \(c(x)-1/2\) does not cross zero, but remains at zero in the whole or part of the bulk. These regions correspond to the pure MC phase, or the mixed LD-MC or HD-MC phases in the TASEP lane. The average steady state SEP density when the TASEP is in its LD-MC (HD-MC) phase is less (more) than \(1/2\), implying that the SEP is in its deficit (excess) particle phase. When the TASEP is in its LD-HD phase with an LDW at \(x=x_{w}\), the quantity \(c(x)-1/2<0\) for \(0\leq x<x_{w}\) and \(c(x)-1/2>0\) for \(x_{w}<x\leq 1\). When \(x_{w}=1/2\), the LDW is located at the mid-point of the TASEP channel, which happens on the line \(\alpha=\beta\). Specifically on this line, \(c(x)-1/2<0\) for \(0\leq x<1/2\) and \(c(x)-1/2>0\) for \(1/2<x\leq 1\), giving \(\alpha=\beta\) as the phase boundary between the deficit and excess particle phases of the SEP, when the TASEP is in its LD-HD phase. When the TASEP is in its LD-MC-HD phase, \(c(x)-1/2<0\) for \(0\leq x<x_{\alpha}\), \(c(x)-1/2=0\) for \(x_{\alpha}<x<x_{\beta}\) and \(c(x)-1/2>0\) for \(x_{\beta}<x\leq 1\). Furthermore, the symmetry of the model about the line \(\alpha=\beta\) (which has its origin in the particle-hole symmetry of the model; see the discussions above) ensures that \(\alpha=\beta\) continues to be the boundary between the deficit and excess particle phases of the SEP. This line terminates at the multicritical point \(\alpha=1/2=\beta\), whence it meets the neutral phase. These discussions suggest that \({\cal O}_{c}\equiv\overline{c}-1/2\) may be used as an order parameter to distinguish the different phases in the SEP. Indeed, in the neutral phase \({\cal O}_{c}=0\), in the deficit phase \({\cal O}_{c}<0\) and in the excess phase \({\cal O}_{c}>0\). All three SEP phase boundaries are second order in nature and meet at the multicritical point (1/2,1/2).
We verify the accuracy of the above MFT by comparing it with the numerical results on the steady state density profiles obtained from our extensive MCS studies with \(D=1\), \(\Omega=0.3\). See Fig. 4 for representative plots of \(\rho(x)\) as a function of \(x\) in different phases of the TASEP with \(D=1\), \(\Omega=0.3\). Analytical MFT and numerical MCS results are superposed. For MFT, we have used \(\rho(x)=\rho_{\rm LD}(x)\) [Eq. (31)] in the LD phase of the TASEP, \(\rho(x)=\rho_{\rm HD}(x)\) [Eq. (33)] in the HD phase, and \(\rho(x)=1/2\) in the MC phase. We have presented our results on the corresponding SEP density profile \(c(x)\) in Fig. 5. Both analytical MFT and numerical MCS results are shown, and reasonably good agreement is found between them. For the MFT solution of the SEP density, we have used \(c(x)=c_{-}(x)\) [Eq. (32)] for \(c(x)<1/2\), corresponding to the TASEP in its LD phase, and \(c(x)=c_{+}(x)\) [Eq. (34)] for \(c(x)>1/2\), corresponding to the TASEP in its HD phase. Notice that the quantitative agreement of the MFT solutions for the SEP density \(c(x)\) with the corresponding MCS results is good when the TASEP is in its LD or HD phases, but less so when the TASEP is in its LD-HD phase; see Fig. 5. We believe this is due to the fact that near the location of an LDW in the TASEP, \(c(x)\) has a strong space dependence, suggesting the importance of retaining higher order corrections in the MFT solutions of \(c(x)\). Nevertheless, qualitative agreement between the MFT and MCS solutions for \(c(x)\) can be seen even in this case.
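For readers who wish to reproduce such density profiles, the following is a minimal sketch of the kind of Monte Carlo update involved. It only encodes the moves described in the text (biased hops with exclusion in the TASEP lane, unbiased hops with exclusion in the SEP lane, inter-lane exchange subject to exclusion, and boundary injection/extraction); the precise rate normalisation, the \(\omega=\Omega/L^{2}\)-type rescalings, and the equilibration and averaging protocol used for the data in Figs. 4 and 5 are not reproduced here, so all rates below are schematic parameters.

```python
import numpy as np

def sweep(tasep, sep, alpha, beta, D, omega, gamma, rng):
    """One schematic random-sequential sweep of the coupled TASEP-SEP model.
    tasep, sep: 0/1 occupation arrays of equal length L; all rates are used as
    bare acceptance probabilities (schematic normalisation)."""
    L = len(tasep)
    for _ in range(2 * L):
        i = rng.integers(L)
        move = rng.integers(4)
        if move == 0:                      # TASEP lane: entry, exit, or biased hop to the right
            if i == 0 and tasep[0] == 0 and rng.random() < alpha:
                tasep[0] = 1
            elif i == L - 1 and tasep[-1] == 1 and rng.random() < beta:
                tasep[-1] = 0
            elif i < L - 1 and tasep[i] == 1 and tasep[i + 1] == 0:
                tasep[i], tasep[i + 1] = 0, 1
        elif move == 1:                    # SEP lane: unbiased hop, subject to exclusion
            d = 1 if rng.random() < 0.5 else -1
            if 0 <= i + d < L and sep[i] == 1 and sep[i + d] == 0 and rng.random() < D:
                sep[i], sep[i + d] = 0, 1
        elif move == 2:                    # SEP boundaries: equal entry and exit rates
            j = 0 if rng.random() < 0.5 else L - 1
            if rng.random() < gamma:
                sep[j] = 1 - sep[j]        # inject if empty, extract if occupied
        else:                              # inter-lane exchange, subject to exclusion
            if rng.random() < omega and tasep[i] != sep[i]:
                tasep[i], sep[i] = sep[i], tasep[i]
```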
### MFT for small D
We now consider the solutions of the MFT equations (5) and (6) for small values of \(D\) with a fixed \(\Omega\), i.e., \(\Omega/D\gg 1\). As discussed above, for \(\Omega/D\to\infty\), \(\rho(x)\) reduces to \(\rho_{T}(x)\) and \(c(x)=\rho(x)\) in the bulk, where \(\rho_{T}\) is the bulk steady state density in an open isolated TASEP. This solution for \(c\), however, does not match the boundary conditions except when \(\rho=1/2=\rho_{\rm MC}\). For all other steady state solutions for \(\rho(x)\), the boundary conditions are imposed through _two_ boundary layers close to the two boundaries at \(x=0,1\), which ensure \(c(0)=1/2=c(1)\). These boundary layers are analogous to the boundary layers observed in the MCS studies on the steady state density profiles in an open TASEP. For \(\Omega/D\) large but finite, we expect small modifications to this picture.
To find the steady state densities in both the SEP and TASEP channels, we proceed as follows. We have already noted that the exchange of particles (subject to exclusion) between the TASEP and SEP channels maintains the overall particle number conservation in the combined bulk of the two lanes. This gives rise to the conservation law (10) above, which is a quadratic equation in \(\rho(x)\) in terms of \(J\) and other quantities. Now by using (10) and assuming small \(D\), we write the two solutions of \(\rho(x)\) in
terms of \(J\) and \(c(x)\) as
\[\rho_{\pm}(x) = \frac{1}{2}\left[1\pm\sqrt{J-4D\partial_{x}c(x)}\right] \tag{39}\] \[\approx \frac{1}{2}(1\pm\sqrt{J})\mp\frac{D}{\sqrt{J}}\partial_{x}c(x).\]
Clearly, \(\rho_{+}(x)>1/2\) and \(\rho_{-}(x)<1/2\) in the bulk of the TASEP. We now use (39) to eliminate \(\rho\) in (6) to obtain a single closed equation for \(c(x)\):
\[D\partial_{x}^{2}c-A_{\pm}\partial_{x}c-\Omega[c(x)-B_{\pm}]=0, \tag{40}\]
where using \(\rho_{+}\) (\(\rho_{-}\)) for \(\rho\) gives (40) with
Figure 4: Steady state density \(\rho(x)\) in the TASEP lane in the LD (top), LD-HD (middle) and LD-MC (bottom) phases with \(D=1,\,\Omega=0.3\). MFT (blue line) and MCS (red points) results are shown. For MFT, we have used \(\rho(x)=\rho_{\rm LD}(x)\) [Eq. (31)] in the LD phase of the TASEP, \(\rho(x)=\rho_{\rm HD}(x)\) [Eq. (33)] in the HD phase and \(\rho(x)=1/2\) in the MC phase (see text).
Figure 5: Steady state density \(c(x)\) in the SEP lane, when the TASEP lane is in its LD (top), HD (middle) and LD-HD (bottom) phases with \(D=1,\,\Omega=0.3\). MFT (blue line) and MCS (red points) results are shown. For MFT, we have used \(c(x)=c_{-}(x)\) [Eq. (32)] for \(c(x)<1/2\), corresponding to the TASEP in its LD phase, and \(c(x)=c_{+}(x)\) [Eq. (34)] for \(c(x)>1/2\), corresponding to the TASEP in its HD phase (see text).
\(A_{+}\), \(B_{+}\) (\(A_{-}\), \(B_{-}\)), which depend on \(J\). Here,
\[A_{\pm}=\pm\frac{\Omega D}{\sqrt{J_{\pm}}},\quad B_{\pm}=\frac{1}{2}\left(1\pm\sqrt{J_{\pm}}\right). \tag{41}\]
What is \(J\) here? We note that in the limit of \(D\to 0\), \(\rho(x)\rightarrow\rho_{T}\), where \(\rho_{T}\) is the MFT solution of the steady state density in an open TASEP. Thus in that limit, \(J=J_{-}=(2\alpha-1)^{2}\), if \(\rho_{T}=\rho_{\rm LD}\), whereas \(J=J_{+}=(2\beta-1)^{2}\) if \(\rho_{T}=\rho_{\rm HD}=1-\beta\). These considerations give, as expected,
\[\rho_{-}(x)=\alpha=\rho_{\rm LD}, \tag{42}\] \[\rho_{+}(x)=1-\beta=\rho_{\rm HD}, \tag{43}\]
when \(D\to 0\), coinciding with the MFT solutions for an open TASEP.
Solving (40) we get two solutions for \(c(x)\):
\[c_{\pm}(x)=B_{\pm}+U_{1}^{\pm}\exp(\lambda_{1}^{\pm}x)+U_{2}^{\pm}\exp(\lambda_{2}^{\pm}x), \tag{44}\]
where
\[\lambda_{1}^{+} = \frac{1}{2D}\bigg{[}A_{+}+\sqrt{A_{+}^{2}+4D\Omega}\bigg{]}, \tag{45}\] \[\lambda_{2}^{+} = \frac{1}{2D}\bigg{[}A_{+}-\sqrt{A_{+}^{2}+4D\Omega}\bigg{]} \tag{46}\]
corresponding to \(\rho=\rho_{+}\), and
\[\lambda_{1}^{-} = \frac{1}{2D}\bigg{[}A_{-}+\sqrt{A_{-}^{2}+4D\Omega}\bigg{]}, \tag{47}\] \[\lambda_{2}^{-} = \frac{1}{2D}\bigg{[}A_{-}-\sqrt{A_{-}^{2}+4D\Omega}\bigg{]} \tag{48}\]
for \(\rho=\rho_{-}\). Here, \(U_{1}^{\pm},U_{2}^{\pm}\) are two sets of constants of integration, to be evaluated by using the two boundary conditions on \(c\), _viz._, \(c(0)=1/2=c(1)\). As for \(A_{\pm}\), \(B_{\pm}\), the use of \(\rho_{+}(x)\) (\(\rho_{-}(x)\)) as the solution for \(\rho(x)\) corresponds to the set \(U_{1}^{+}\), \(U_{2}^{+}\) (\(U_{1}^{-}\), \(U_{2}^{-}\)). We find
\[U_{1}^{\pm} = \frac{1-\exp(\lambda_{2}^{\pm})}{\exp(\lambda_{1}^{\pm})-\exp( \lambda_{2}^{\pm})}\bigg{(}\frac{1}{2}-B_{\pm}\bigg{)}, \tag{49}\] \[U_{2}^{\pm} = \frac{\exp(\lambda_{1}^{\pm})-1}{\exp(\lambda_{1}^{\pm})-\exp( \lambda_{2}^{\pm})}\bigg{(}\frac{1}{2}-B_{\pm}\bigg{)}. \tag{50}\]
Evaluation of these constants allows us to find \(c_{-}(x)\) and \(c_{+}(x)\), which in turn yield \(\rho_{\rm LD}(x)\) and \(\rho_{\rm HD}(x)\), respectively.
For finite but small \(D\), we expect a weak space dependence of \(\rho_{-}(x)\) and \(\rho_{+}(x)\), i.e., they deviate only weakly from the constant solutions \(\alpha\) and \(1-\beta\), respectively, in the bulk. Identifying \(\rho_{-}(x)<1/2\) with \(\rho_{\rm LD}(x)\) and \(\rho_{+}(x)>1/2\) with \(\rho_{\rm HD}(x)\), we then find
\[\rho_{\rm LD}(x) = \frac{1}{2}\bigg{[}1-\sqrt{J_{\rm LD}-4D\partial_{x}c_{-}(x)} \bigg{]}<\frac{1}{2}, \tag{51}\] \[\rho_{\rm HD}(x) = \frac{1}{2}\bigg{[}1+\sqrt{J_{\rm HD}-4D\partial_{x}c_{+}(x)} \bigg{]}>\frac{1}{2} \tag{52}\]
for small \(D\). In general, \(J_{\rm LD}\) and \(J_{\rm HD}\) should now include current contributions from the SEP channel; see Eq. (10) above. When the TASEP lane is in its LD (HD) phase, its bulk solution and the associated current are controlled by the left (right) boundary condition. We then identify
\[J_{\rm LD} = (2\alpha-1)^{2}+4D\partial_{x}c_{-}(x)|_{x=0}, \tag{53}\] \[J_{\rm HD} = (2\beta-1)^{2}+4D\partial_{x}c_{+}(x)|_{x=1}. \tag{54}\]
Here, \(c_{-}(x)\) and \(c_{+}(x)\) are the two solutions of (40), obtained by using \(\rho(x)=\rho_{-}(x)\) and \(\rho(x)=\rho_{+}(x)\) respectively.
Equations (44), (51) and (52) provide the MFT solutions for \(c(x)\) and \(\rho(x)\) valid in the limit of small \(D\). Notice that \(J_{\rm LD/HD}\) appears in \(\rho_{\rm LD/HD}(x)\). Thus knowledge of \(J_{\rm LD/HD}\) is necessary to evaluate \(\rho_{\rm LD/HD}(x)\). Now, \(J_{\rm LD/HD}\) depends upon \(c(x)\) through its spatial derivative \(\partial_{x}c(x)\) obtained at \(x=0,1\). On the other hand, enumeration of \(c(x)\) requires \(J_{\pm}\), because of the dependence of the constants \(A_{\pm},B_{\pm}\) etc on it. For simplicity while evaluating the currents, we approximate \(J_{\pm}\) by dropping the contributions from the SEP current to it, rendering it calculable from the TASEP boundary conditions only. Thus, in this approximation \(J_{-}=(2\alpha-1)^{2},\,J_{+}=(2\beta-1)^{2}\). This is a reasonable approximation, since for small \(D\), for which the current analysis holds, \(c(x)\) is largely slaved to \(\rho(x)\) in the bulk, making it rather weakly space-dependent, which in turn means the SEP current should be much smaller than the TASEP current. Lastly, notice that \(\rho(x)=1/2=c(x)\) continue to be steady state solutions for both \(c(x)\) and \(\rho(x)\) in the bulk, even when \(D\) does not vanish. Now \(\rho(x)=1/2\) implies MC phase in TASEP, which means when the TASEP is in its MC phase, the SEP and TASEP densities should overlap in the bulk for any finite \(D\).
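The small-\(D\) solutions above are easy to evaluate numerically. The sketch below assembles \(c_{\mp}(x)\) and \(\rho_{\rm LD/HD}(x)\) directly from Eqs. (41)-(54), using the approximation \(J_{-}=(2\alpha-1)^{2}\), \(J_{+}=(2\beta-1)^{2}\) discussed above; for very small \(D\) the exponentials become stiff and the boundary-layer structure should be handled with more care.

```python
import numpy as np

def small_D_branch(J, sign, D, Omega, x):
    """One branch of the small-D MFT: sign = -1 gives (c_-, rho_LD), sign = +1 gives (c_+, rho_HD)."""
    A = sign * Omega * D / np.sqrt(J)                                  # Eq. (41)
    B = 0.5 * (1.0 + sign * np.sqrt(J))
    disc = np.sqrt(A * A + 4.0 * D * Omega)
    lam1, lam2 = (A + disc) / (2.0 * D), (A - disc) / (2.0 * D)        # Eqs. (45)-(48)
    pref = 0.5 - B
    denom = np.exp(lam1) - np.exp(lam2)
    U1 = (1.0 - np.exp(lam2)) / denom * pref                           # Eq. (49)
    U2 = (np.exp(lam1) - 1.0) / denom * pref                           # Eq. (50)
    c = B + U1 * np.exp(lam1 * x) + U2 * np.exp(lam2 * x)              # Eq. (44)
    dc = U1 * lam1 * np.exp(lam1 * x) + U2 * lam2 * np.exp(lam2 * x)   # dc/dx
    J_eff = J + 4.0 * D * (dc[0] if sign < 0 else dc[-1])              # Eqs. (53)/(54)
    rho = 0.5 * (1.0 + sign * np.sqrt(J_eff - 4.0 * D * dc))           # Eqs. (51)/(52)
    return c, rho

x = np.linspace(0.0, 1.0, 501)
alpha, beta, D, Omega = 0.2, 0.6, 0.01, 0.3
c_minus, rho_LD = small_D_branch((2 * alpha - 1) ** 2, -1.0, D, Omega, x)
c_plus,  rho_HD = small_D_branch((2 * beta - 1) ** 2, +1.0, D, Omega, x)
```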
Having found the steady state solutions of \(\rho(x)\) and \(c(x)\) for small \(D\), we now obtain the phase diagrams for both the TASEP and SEP lanes. Since for small \(D\), \(\rho(x)\) varies weakly with \(x\), we expect a TASEP phase diagram similar to that of an open TASEP, with most of the phase diagram being covered by regions with pure LD, HD or MC phases. However, due to the expected weak space dependence of \(\rho(x)\) (as opposed to constant \(\rho\) in pure open TASEPs), mixed phases other than the pure phases should also appear, albeit over smaller ranges of \(\alpha,\beta\), which should go to zero as \(D\to 0\). We present an MFT phase diagram of the TASEP lane in Fig. 6 for \(D=0.01\) and \(\Omega=0.3\) (thus \(\Omega/D=30\gg 1\)). The principles used to obtain the TASEP phase diagram are the same as those discussed in the large-\(D\) case above. We use the continuity of the currents (36)-(38) across the phase boundaries. As with \(D\gg 1\), the rather complex \(x\)-dependence of \(\rho(x)\) precludes explicit enumeration of \(x_{\alpha},x_{\beta},x_{w}\). Instead, we use the conditions on the densities as listed in the previous Section, and then use contour plots in the \(\alpha-\beta\) plane for fixed \(D\) and \(\Omega\) to obtain the phase boundaries.
The corresponding phase diagram of the SEP channel with \(D=0.01\) is the same as that given in Fig. 3, with three second order phase boundaries meeting at a multicritical point located at \(\alpha=1/2=\beta\). In fact, the SEP phase diagram is the same for any finite value of \(D\). This may be understood as follows. From the particle-hole symmetry of the model, the LD phase regions (covering both the pure LD phase and the LD parts of the mixed phases) must have the same area as the HD phase regions in the \(\alpha-\beta\) plane of a TASEP phase diagram. Furthermore, these two regions are also _symmetrically_ located on the two sides of the line \(\alpha=\beta\). According to the logic outlined above, these two regions correspond to the deficit and excess particle regions in a SEP phase diagram, which are also symmetrically located on the two sides of the line \(\alpha=\beta\). The remaining region in a SEP phase diagram is the neutral region. Since these arguments hold for any finite \(D\), the SEP phase diagram remains unchanged when \(D\) varies. There is, however, one thing that changes, _viz._, the amount of "excess" or "deficit" particles in the corresponding phase regions of the TASEP. As the SEP gets increasingly slaved to the TASEP when \(D\) is progressively reduced, for the same \(\alpha,\beta\) the degree of "excess" or "deficit" rises (assuming the TASEP is not in its MC phase), reaching its maximum for \(D\to 0\). In the opposite extreme limit, when \(D\to\infty\), \(c(x)\to 1/2\) for all \(\alpha,\beta\), meaning the whole SEP phase diagram now has only the neutral phase.
Plots of the steady state TASEP density \(\rho(x)\) and SEP density \(c(x)\) versus \(x\) for \(D=0.01\) and \(\Omega=0.3\) are shown in Fig. 7 and Fig. 8, respectively. Both analytical MFT and numerical MCS results are shown. For the MFT solutions of the TASEP, we have used \(\rho(x)=\rho_{\rm LD}(x)\) [Eq. (51)] in the LD phase and \(\rho(x)=\rho_{\rm HD}(x)\) [Eq. (52)] in the HD phase of the TASEP. In addition, the solution \(\rho(x)=1/2\) corresponds to the MC phase of the TASEP. For the MFT solutions of the SEP, we have used \(c(x)=c_{-}(x)<1/2\) and \(c(x)=c_{+}(x)>1/2\) as defined in Eq. (44) above.
We again note a lower degree of quantitative agreement between the MFT and MCS solutions of \(c(x)\), when the TASEP is in its LD-HD phase, relative to when it is in its LD or HD phases, in which case the agreement is good. As for our large-\(D\) MFT solutions, we attribute this to the stronger space dependence of \(c(x)\) near the location of an LDW in the TASEP, a feature not adequately captured by our MFT for \(c(x)\). Nonetheless, the MFT and MCS solutions for \(c(x)\) agree qualitatively.
Figure 7: Steady state density \(\rho(x)\) in the TASEP lane in the LD (top), LD-HD (middle) and HD (bottom) phases with \(D=0.01\), \(\Omega=0.3\). MFT (blue line) and MCS (red points) results are shown. For MFT solutions of TASEP, we have used \(\rho(x)=\rho_{\rm LD}(x)\) [Eq. (51)] in the LD phase and \(\rho(x)=\rho_{\rm HD}(x)\) [Eq. (52)] in the HD phase of the TASEP (see text).
Figure 6: Mean-field TASEP phase diagrams with \(D=0.01\), \(\Omega=0.3\).
### Comparison of the MFTs
When \(D\) is neither too small nor too large, neither of the approximations leading to the MFT solutions is expected to work well. Nonetheless, _both_ MFTs should act as guidelines to understand the MCS results. In Fig. 9, we have shown the MFT phase diagrams for \(D=0.05\), \(\Omega=0.3\), obtained in the large-\(D\) approximation (solid red lines) and the small-\(D\) approximation (broken black lines). While the two approximations clearly differ quantitatively, the topology of the TASEP phase diagrams remains the same, independent of the approximations used to obtain the MFT. This clearly lends credence to the physical pictures that emerge from this work.
We have further presented our results on \(\rho(x)\) when the TASEP is in its LD and HD phases. Numerical results from our MCS simulations and the two MFT predictions (in the large-\(D\) and small-\(D\) approximations) are plotted together. We find that the MFT with large-\(D\) approximation underestimates \(\rho_{\rm LD}(x)\) and overestimates \(\rho_{\rm HD}(x)\) with respect to the corresponding MCS results. The trend from the MFT with small-\(D\) approximation is just the opposite. See Fig. 10 for plots of the MCS results on \(\rho(x)\) together with the corresponding MFT predictions using MFTs with both large-\(D\) and small-\(D\) approximations. The results for the SEP density profiles \(c(x)\) are shown in Fig. 11.
## VI Delocalisation of domain walls for \(D\to 0\)
We have found that in the limit of \(D\to\infty\) this model reduces to the LK-TASEP model [9], whereas in the limit \(D\to 0\) it reduces to an isolated open TASEP. A smooth crossover from LK-TASEP behaviour to an open TASEP is expected as \(D\) is reduced. This can be seen from the phase diagrams given above for various values of \(D\). As \(D\) is reduced, the two- and three-phase coexistence regions shrink and are expected to vanish for \(D\to 0\), i.e., in the limit of a pure TASEP. Indeed,
Figure 8: Steady state density \(c(x)\) in the SEP lane, when the TASEP lane is in its LD (top), LD-HD (middle) and HD (bottom) phases with \(D=0.01\), \(\Omega=0.3\). MFT (blue line) and MCS (red points) results are shown. For MFT solutions of SEP, we have used \(c(x)=c_{-}(x)<1/2\) and \(c(x)=c_{+}(x)>1/2\) as defined in Eq. (44)] (see text).
Figure 9: Mean-field TASEP phase diagrams with \(D=0.05\), \(\Omega=0.3\). MFT solutions for \(\rho(x)\) with large \(D\) approximation and small \(D\) approximation are used to get the corresponding phase diagrams. Phase diagram with solid red (broken black) phase boundaries is obtained using large \(D\) (small \(D\)) approximation (see text).
the two-phase coexistence region should shrink to the limit \(\alpha=\beta\) and the three-phase coexistence to a point \(\alpha=\beta=1/2\) in the isolated, open TASEP limit with \(D\to 0\). Our MFT is consistent with these physically motivated expectations: In Fig. 12, mean-field TASEP phase diagrams for various values of \(D\) ranging from 0.1 to 0.000001 are shown. These phase diagrams are drawn with the MFT valid for small \(D\) (or, equivalently, \(\Omega/D\gg 1\)). It is evident that as \(D\) is progressively reduced, the two- and three-phase coexistence regions increasingly shrink, eventually practically vanishing for \(D=0.000001\), for which the resulting phase diagram is virtually indistinguishable from that of an isolated open TASEP.
The TASEP density profile in the two-phase coexistence region for any \(D>0\) (including the LK-TASEP limit of \(D\to\infty\)) is a pinned or static domain wall, i.e., an LDW. This pinning of the domain walls is attributed to the spatially nonuniform TASEP densities in the steady states. However, in the limit \(D\to 0\), it must be a DDW, existing on the line \(\alpha=\beta\) for \(0<\alpha=\beta<1/2\), as it is for an isolated open TASEP. While a fully delocalised domain wall is possible only for \(D\to 0\), we observe signatures of gradual delocalisation as \(D\) is reduced. To see this, we obtain the TASEP density profiles for \(\alpha=\beta=0.1\) with \(\Omega=0.3\) for system size \(L=1000\) and various values of \(D\). We find that as \(D\) is reduced, the long-time averaged profile of the domain wall becomes an increasingly inclined line, signifying larger fluctuations in its position. We visually measure the extent of the domain wall position fluctuations, or the "width" \(W\), which is roughly the projection of the inclined line of the domain wall on the \(x\)-axis (see the inset in Fig. 13), and plot it against \(D\). See Fig. 13 for plots of the domain walls with \(D=1,0.005,0.001\), and Fig. 14 for a semilog plot of \(W\) versus \(D\). While our study is purely phenomenological, it does indicate increasing delocalisation as \(D\) is reduced. Note that this effect cannot be captured within MFT, as MFT by construction neglects all fluctuations. The approaches developed in Ref. [26] to study fluctuations systematically, going beyond MFT, may be helpful in this study.
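A simple way to quantify the visually measured width \(W\) from the time-averaged profiles is sketched below; the plateau estimates and the threshold \(\delta\) are illustrative choices, not the exact procedure used for Fig. 14.

```python
import numpy as np

def domain_wall_width(x, rho_avg, delta=0.05):
    """Estimate the width W as the x-extent over which the time-averaged density
    climbs from (rho_LD + delta) to (rho_HD - delta)."""
    rho_LD = rho_avg[: len(rho_avg) // 10].mean()     # plateau value near the left boundary
    rho_HD = rho_avg[-(len(rho_avg) // 10):].mean()   # plateau value near the right boundary
    inside = np.where((rho_avg > rho_LD + delta) & (rho_avg < rho_HD - delta))[0]
    return (x[inside[-1]] - x[inside[0]]) if inside.size else 0.0
```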
Figure 11: Plots of SEP steady state density \(c(x)\) versus \(x\), when the TASEP is in its (top) LD phase and (bottom) HD phase with \(D=0.05\), \(\Omega=0.3\). Broken blue lines represent \(c_{\rm MFT}\) obtained in the small-\(D\) approximation, whereas the solid green lines represent \(c_{\rm MFT}\) obtained in the large-\(D\) approximation; red points represent the corresponding MCS results.
Figure 10: Plots of steady state density \(\rho(x)\) versus \(x\). (top) LD and (bottom) HD phases with \(D=0.05\), \(\Omega=0.3\). Broken blue lines represent \(\rho_{\rm MFT}\) obtained in the small-\(D\) approximation, whereas the solid green lines represent \(\rho_{\rm MFT}\) obtained in the large-\(D\) approximation; red points represent the corresponding MCS results.
## VII Summary and Outlook
In summary, we have proposed and analysed an open one-dimensional system with two lanes, one a one-dimensional lattice executing TASEP dynamics and the other undergoing diffusive or SEP kinetics, representing a reservoir, which are coupled by exchange of particles subject to exclusion. This diffusion is unbiased, that is, a particle can hop to its right or left with equal probability, subject to exclusion. We show that the ratio of the effective exchange rate \(\Omega\) to the diffusion coefficient \(D\) (or, for a fixed \(\Omega\), \(D\) itself) acts as the tuning parameter: by varying it, our system goes from diffusion-dominated behaviour for large \(D\) to TASEP-dominated behaviour in the limit of small \(D\). We show that for a fixed non-zero \(\Omega\), with \(D\to 0\) the SEP density is slaved to the TASEP density and the resulting steady-state phase diagram of the TASEP lane is the same as that of an isolated open TASEP. In the opposite extreme limit of fast diffusion, i.e., \(D\to\infty\), the density profile of the diffusive lane is spatially constant with a value \(1/2\), whereas that in the driven lane is identical to that of a TASEP with Langmuir Kinetics in the bulk. For intermediate values of \(D\), our model has nonuniform density profiles in both the TASEP and the SEP in the steady states, with rather complex position-dependence. These nontrivial space dependences are entirely due to the coupling between the SEP and TASEP kinetics, for without the coupling both the SEP and the TASEP generally yield flat density profiles (except for the delocalised domain wall in an open TASEP). For intermediate values of \(D\), the MFT equations cannot be solved exactly. This has led us to solve the MFT equations for small and large \(\Omega/D\) separately, and obtain two sets of solutions, one of them giving modifications of the TASEP and SEP densities for small \(\Omega/D\) and the other for large \(\Omega/D\), in perturbative approaches. We find that the MFT solutions agree reasonably with the results of the MCS studies for small and large \(\Omega/D\). However, unsurprisingly, when \(\Omega/D\) is intermediate neither set of solutions agrees quantitatively with the numerical results. We have also numerically explored how a domain wall in the TASEP lane, which is obviously localised for any finite \(D\) but must be fully delocalised when \(D\to 0\), actually gradually delocalises as \(D\) is reduced. Such an effect cannot be studied within the MFT, as it neglects all fluctuations. It would be interesting to theoretically study this delocalisation by going beyond MFT descriptions and considering fluctuations. We have further discussed phase diagrams of the SEP in the plane of the control parameters of the TASEP, _viz._, \(\alpha,\beta\). We have
Figure 14: Semilog plot of \(W\) as a function of \(D\) for \(\Omega=0.3,\alpha=0.1=\beta\). Clearly, \(W\) rises as \(D\) is reduced, indicating gradual delocalisation of the domain wall (see text).
Figure 12: Mean-field TASEP phase diagrams in the \(\alpha-\beta\) plane for various values of \(D\) ranging from \(0.1\) to \(0.000001\) are shown. These phase diagrams are drawn with the MFT valid for small \(D\) (or, equivalently, \(\Omega/D\gg 1\)). Clearly for \(D=0.000001\), the phase diagram is virtually indistinguishable from its counterpart for an isolated open TASEP.
Figure 13: Gradual delocalisation of the domain wall due to increasing fluctuations as \(D\) is reduced for a fixed system size \(L=1000\) and \(\alpha=\beta=0.1\). MCS results for \(D=1.0,0.005,0.001\) are shown. (Inset) A pictorial definition of the DW width \(W\) (for \(D=0.001\)) is shown. Clearly, as \(D\) is reduced, \(W\) is increased for a fixed system size \(L\), indicating gradual delocalisation of the domain wall.
argued that the phase diagram of the SEP is identical for any finite \(D\) (including the limiting case of \(D\to 0\)): it has just three phases, _viz._ deficit, excess and neutral particle phases (measured with respect to the mean SEP density \(\overline{c}=1/2\) in an isolated SEP with unbiased entry and exit rates). We have shown that these phases have a direct correspondence with the phases of the TASEP, a principal result from this work.
The take-home message from our studies here is that the mutual coupling between driven and diffusive dynamics can be tuned such that not only does the TASEP lane pick up non-trivial position dependence in its steady state densities, but the diffusive lane can also maintain nonuniform steady states. This means a reservoir, which is modeled by a SEP here, can sustain spatial gradients in its densities when it exchanges particles with a driven channel in the bulk. An interesting future study would be to impose particle number conservation at a global level, i.e., in the combined system of a TASEP and a SEP. This would be an example of a system with finite resources having an internal dynamics in the reservoir [13; 22].
We have restricted ourselves to studying a 1D model here. As we mentioned earlier, our model, in addition to its importance as a 1D nonequilibrium model with coupled driven and diffusive dynamics, also serves as a minimal model for a molecular motor-microtubule assembly inside eukaryotic cells. The SEP here models the diffusion of the molecular motors in the bulk, whereas the TASEP represents unidirectional steady motion in a force field, overdamped by viscous drag. Since we have modeled the (three-dimensional) reservoir by a SEP, a 1D model, this raises an important phenomenological point: there are significant dynamical differences between the two due to single-file diffusion in 1D, for which the mean square displacement grows only as \(\sqrt{t}\) in time [23] for unbiased diffusion. This questions the applicability of our results to a realistic three-dimensional system. Nonetheless, it is known that an infinitesimal bias restores the usual linear growth in \(t\) [24; 25], while for such a small bias our results on the steady states of the SEP should remain practically the same as here. This clearly allows using our 1D results to draw physical insight about the corresponding three-dimensional situations. Nonetheless, it would definitely be interesting, from both theoretical and phenomenological standpoints, to extend our model to higher dimensions and study it there. This should give a better handle on modeling intra-cellular transport more realistically. In this study, we have considered an unbiased SEP, i.e., a SEP with equal entry and exit rates. In a more general situation, the entry and exit rates could be different, which can result in a biased SEP with an inclined, line-shaped density profile. It would be interesting to couple such a biased SEP with a TASEP via lane exchanges and investigate the resulting steady states. Our study here is restricted to equal-sized TASEP and SEP lanes. The introduction of unequal lengths is expected to give additional complex behaviour. We hope our studies here will provide impetus to address these questions theoretically in the future.
## VIII Acknowledgement
S.M. thanks SERB (DST), India for partial financial support through the CRG scheme [file: CRG/2021/001875].
## Appendix A Space-dependent exchange
We now consider the effects of site-dependent exchange rates \(\omega_{i}\). We continue to assume equal attachment and detachment rates. As before, we use scaled attachment-detachment rates defined by \(\omega_{i}=\Omega_{i}/L^{2}\) and TASEP hopping rate \(1/L\), together with \(\alpha/L\), \(\beta/L\) as the entry and exit rates in the TASEP, to ensure competition with the diffusion in the SEP. This results in the MF equations (5) and (6). We further consider the asymptotic limit of fast diffusion given by \(D\to\infty\). In that limit, the SEP density is independent of the TASEP density, with \(c(x)=1/2\) everywhere. Using \(c(x)=1/2\), Eq. (5) then reduces to
\[(1-2\rho)\left[\frac{\partial\rho}{\partial x}-\frac{\Omega(x)}{2}\right]=0. \tag{10}\]
Equation (10) has two solutions: \(\rho(x)=1/2\), which gives the MC phase here, and \(\rho(x)=\int dx\,\Omega(x)/2+\tilde{C}\), where \(\tilde{C}\) is a constant of integration. By using the boundary conditions \(\rho(0)=\alpha\) or \(\rho(1)=1-\beta\), we can evaluate \(\tilde{C}\). Using \(\rho(0)=\alpha\), we get
\[\tilde{C}\equiv\tilde{C}_{\alpha}=\alpha-\left[\int dx\,\Omega(x)/2\right]_{x =0}. \tag{11}\]
Similarly, using \(\rho(1)=1-\beta\), we get
\[\tilde{C}\equiv\tilde{C}_{\beta}=1-\beta-\left[\int dx\,\Omega(x)/2\right]_{x =1}. \tag{12}\]
Thus using (11) and (12) we get two solutions
\[\rho_{\alpha}(x) = \int dx\,\Omega(x)/2+\tilde{C}_{\alpha}, \tag{13}\] \[\rho_{\beta}(x) = \int dx\,\Omega(x)/2+\tilde{C}_{\beta}. \tag{14}\]
These solutions generalise the well-known space-dependent solutions \(\rho_{\rm LK}(x)\) as mentioned earlier. Instead of linear \(x\)-dependence, we now obtain general, nonlinear \(x\)-dependent solutions. As in the original LK-TASEP model, these solutions may meet each other or with the other solution \(\rho=1/2\) in the bulk of the system giving phase coexistence of different phases in the steady states. Following the logic outlined in Ref. [9], the steady state
density profiles and the ensuing phase diagram may be calculated by equating the steady state currents. It is easy to see that the resulting phase diagram will generally have the same topology as in the LK-TASEP model, although the precise locations of the phase boundaries in the \(\alpha-\beta\) plane should depend on the specific form of \(\Omega(x)\). This reveals a degree of robustness of the phase diagrams of the LK-TASEP model, pointing to a universality in their topology.
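For a concrete \(\Omega(x)\), the two branches in Eqs. (13)-(14) can be evaluated with a simple quadrature, as in the sketch below; the particular \(\Omega(x)\) used there is an illustrative choice only.

```python
import numpy as np

def rho_branches(Omega_of_x, alpha, beta, n=1001):
    """Evaluate rho_alpha(x) and rho_beta(x) of Eqs. (13)-(14) by numerical quadrature."""
    x = np.linspace(0.0, 1.0, n)
    Om = Omega_of_x(x)
    # antiderivative of Omega(x)/2 (trapezoid rule), fixed so that it vanishes at x = 0
    prim = np.concatenate(([0.0], np.cumsum(0.25 * (Om[1:] + Om[:-1]) * np.diff(x))))
    rho_a = alpha + prim                       # branch fixed by rho(0) = alpha, Eq. (11)
    rho_b = (1.0 - beta) + (prim - prim[-1])   # branch fixed by rho(1) = 1 - beta, Eq. (12)
    return x, rho_a, rho_b

# illustrative, linearly varying exchange rate Omega(x) = 0.3 (1 + x)
x, rho_a, rho_b = rho_branches(lambda u: 0.3 * (1.0 + u), alpha=0.2, beta=0.3)
```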
|
2303.11691 | A versatile classification tool for galactic activity using optical and
infrared colors | We use the Random Forest (RF) algorithm to develop a tool for automated
activity classification of galaxies into 5 different classes: Star-forming
(SF), AGN, LINER, Composite, and Passive. We train the algorithm on a
combination of mid-IR (WISE) and optical photometric data while the true labels
(activity classes) are based on emission line ratios. Our classifier is built
to be redshift-agnostic and it is applicable to objects up to z $\sim$0.1. It
reaches a completeness $>$80 % for SF and Passive galaxies, and $\sim$60 % for
AGN. Applying it to an all-sky galaxy catalog (HECATE) reveals a large
population of low-luminosity AGNs outside the AGN locus in the standard mid-IR
diagnostics. | Elias Kyritsis, Charalampos Daoutis, Andreas Zezas, Konstantinos Kouroumpatzakis | 2023-03-21T09:21:22Z | http://arxiv.org/abs/2303.11691v1 | # A versatile classification tool for galactic activity using optical and infrared colors
###### Abstract
We use the Random Forest (RF) algorithm to develop a tool for automated activity classification of galaxies into 5 different classes: Star-forming (SF), AGN, LINER, Composite, and Passive. We train the algorithm on a combination of mid-IR (WISE) and optical photometric data while the true labels (activity classes) are based on emission line ratios. Our classifier is built to be redshift-agnostic and it is applicable to objects up to z \(\sim\)0.1. It reaches a completeness \(>\)80% for SF and Passive galaxies, and \(\sim\)60% for AGN. Applying it to an all-sky galaxy catalog (HECATE) reveals a large population of low-luminosity AGNs outside the AGN locus in the standard mid-IR diagnostics.
activity diagnostics - star-formation - machine learning - AGN
## 1 Introduction
Activity classification of galaxies is of great importance for many fields of extragalactic Astrophysics, such as understanding galaxy evolution (Kewley et al., 2019) and/or AGN demographics. Traditionally, this is done using characteristic emission-line ratios which discriminate galaxies into different classes depending on the source of ionization (e.g. Kewley et al., 2019). However, the need for spectroscopic data hampers the applicability of these diagnostics to very large datasets since spectroscopic observations are available for a subset of the objects with photometric data. In addition, these diagnostics cannot be used on galaxies without emission lines rendering them inapplicable to passive galaxies. While alternative diagnostics based on mid-IR colors (Mateos et al., 2012; Assef et al., 2013, hereafter M12 and A13) are successfully used for identifying luminous AGNs, they are not as reliable in the local universe.
To address these limitations, we develop a new activity diagnostic by combining the RF machine learning algorithm (Louppe, 2014) with multi-wavelength photometric data.
## 2 Classification scheme and data
### Photometric data
Galaxies have different spectral shapes depending on their source of ionization. Previous works have shown that these differences are stronger in the UV, optical, and mid-IR bands. In order to maximize the available sample, we opted to use for training our algorithm mid-IR and optical photometric data from the AllWISE Source Catalog (Wright et al., 2010)
and the SDSS DR16 (Brinchmann et al. 2004), respectively. To avoid the need for redshift measurements, we use colors rather than luminosities. In order to mitigate aperture effects, we use a mid-IR hybrid-photometric scheme that combines custom-aperture photometry for nearby (extended) sources and fixed-aperture photometry for more distant point-like sources. The optical data consist of \(g-r\) colors, based on SDSS DR16 fiber-photometry. In order to reduce the noise in our training data set, we selected only galaxies with a signal-to-noise ratio S/N \(>\) 5 in the optical bands g, r and the mid-IR bands W1 and W2. For the W3 band we required S/N \(>\) 3. Our final optimal feature scheme comprises the colors W1-W2, W2-W3, and \(g-r\); their distribution per activity class is presented in Fig. 1.
### Classification scheme
We adopt a 5-class classification scheme that discriminates galaxies into different activity classes: SF, AGN, LINER, Composite, and Passive. In order to construct the training sample, we use spectroscopic information from the SDSS-MPA-JHU (Brinchmann et al. 2004) catalog by selecting only the galaxies that show strong emission lines (S/N \(>\) 5). The emission-line galaxies are classified based on the 4-dimensional data-driven classification algorithm of Stampoulis et al. (2019). To define a sample of Passive galaxies without emission lines, we selected objects with good spectra (continuum S/N\({}_{cont}\)\(>\) 3) and absent emission lines (S/N\({}_{line}\)\(<\) 3). Our final sample includes 40954 galaxies spanning a redshift range of \(z\) = 0.02 - 0.08. Table 1 shows the composition of our final sample per activity class. For the training of the RF algorithm we considered 50% of the full set (20477/40954), and the remaining 50% for the test set. Given the strong imbalance between the different classes in the sample, we used a stratified split in order to ensure that both the training and the test sets have the same proportions of each class.
\begin{table}
\begin{tabular}{l c c} \hline \hline Class & Number of objects & Percentage (\%) \\ \hline Star-forming & 35878 & 87.6 \\ Seyfert & 1337 & 3.3 \\ LINER & 1322 & 3.2 \\ Composite & 1673 & 4.1 \\ Passive & 744 & 1.8 \\ \hline \end{tabular}
\end{table}
Table 1: The composition of the training sample per galaxy activity class.
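A minimal sketch of the training setup described above is given below; the data frame `df`, its column names, and the Random Forest hyperparameters are assumptions for illustration and do not reproduce the exact configuration used in this work.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# df is assumed to hold the cross-matched WISE/SDSS photometry and the spectroscopic labels
features = ["W1-W2", "W2-W3", "g-r"]                 # the three colors used as features
X, y = df[features].values, df["activity_class"].values

# 50/50 stratified split, preserving the class proportions of Table 1
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)  # hyperparameters illustrative
clf.fit(X_train, y_train)

# row-normalised confusion matrix ~ completeness per class (cf. Fig. 2)
cm = confusion_matrix(y_test, clf.predict(X_test), normalize="true")
print(cm)
```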
Figure 2: The confusion matrix of our classifier. The completeness for the SF and passive galaxies is very high ( \(>\)80%) while for the other 3 classes it is lower as expected given the strong mixing between them in the feature distribution.
Figure 1: Distribution of our training sample in the feature space. The 5 classes of our classification scheme are well separated with a higher mixing between the Composite and AGN activity classes.
## 3 Results
The confusion matrix (Fig. 2) shows that our classifier reaches maximum completeness of \(\sim\)82% for SF and Passive galaxies and \(\sim\)56% for AGN. This performance is expected if we consider the feature distribution of our training sample where the 5 classes are reasonably separated with higher mixing between the composite and AGN galaxies (Fig. 1). Furthermore, these high scores indicate the robustness and reliability of our classifier when it is applied to unseen data (i.e. test set).
We apply our new diagnostic to the HECATE nearby galaxy catalog (D\(\leq\)200 Mpc) (Kovlakas et al., 2021), and we compare our classifications with the mid-IR diagnostics of M12 and A13. Our new classifier reveals a large population of AGN outside their locus as defined in the other mid-IR diagnostics (green points below the dashed line in Fig. 3). In particular, in a sample of 1227 spectroscopically classified AGN we find that our method recovers \(\sim 36\%\) of the initial sample, while the M12 and A13 methods recover \(\sim 5\%\) and \(\sim 6\%\), respectively. Thus our new diagnostic increases the completeness of AGN identified with mid-IR colors, since the other methods are more sensitive to luminous AGN, omitting a significant fraction of lower luminosity AGN. The reason for the success of our method is that the inclusion of the optical color allows the classifier to identify more AGN and also cases of starbursts with extreme mid-IR colors that mimic obscured AGN galaxies (blue points at the top right of Fig. 3).
|
2305.15779 | Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models | Text-to-image diffusion models can generate diverse, high-fidelity images
based on user-provided text prompts. Recent research has extended these models
to support text-guided image editing. While text guidance is an intuitive
editing interface for users, it often fails to ensure the precise concept
conveyed by users. To address this issue, we propose Custom-Edit, in which we
(i) customize a diffusion model with a few reference images and then (ii)
perform text-guided editing. Our key discovery is that customizing only
language-relevant parameters with augmented prompts improves reference
similarity significantly while maintaining source similarity. Moreover, we
provide our recipe for each customization and editing process. We compare
popular customization methods and validate our findings on two editing methods
using various datasets. | Jooyoung Choi, Yunjey Choi, Yunji Kim, Junho Kim, Sungroh Yoon | 2023-05-25T06:46:28Z | http://arxiv.org/abs/2305.15779v1 | # Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models
###### Abstract
Text-to-image diffusion models can generate diverse, high-fidelity images based on user-provided text prompts. Recent research has extended these models to support text-guided image editing. While text guidance is an intuitive editing interface for users, it often fails to ensure the precise concept conveyed by users. To address this issue, we propose Custom-Edit, in which we (i) customize a diffusion model with a few reference images and then (ii) perform text-guided editing. Our key discovery is that customizing only language-relevant parameters with augmented prompts improves reference similarity significantly while maintaining source similarity. Moreover, we provide our recipe for each customization and editing process. We compare popular customization methods and validate our findings on two editing methods using various datasets.
Footnote †: \(*\) Corresponding Authors
## 1 Introduction
Recent work on deep generative models has led to rapid advancements in image editing. Text-to-image models [19, 22] trained on large-scale databases [23] allow intuitive editing [7, 15] of images in various domains. Then, to what extent can these models support precise editing instructions? Can a unique concept of the user, especially one not encountered during large-scale training, be utilized for editing? Editing with a prompt acquired from a well-performing captioning model [13] fails to capture the appearance of the reference, as shown in Fig. 1.
We propose _Custom-Edit_, a two-step approach that involves (i) customizing the model [6, 12, 21] using a few reference images and then (ii) utilizing effective text-guided editing methods [7, 15, 16] to edit images. While prior customization studies [6, 12, 21] deal with the random generation of images (noise\(\rightarrow\)image), our work focuses on image editing (image\(\rightarrow\)image). As demonstrated in Fig. 1, customization improves faithfulness to the reference's appearance by a large margin. This paper shows that customizing only language-relevant parameters with augmented prompts significantly enhances the quality of edited images. Moreover, we present our design choices for each customization and editing process and discuss the _source-reference trade-off_ in Custom-Edit.
## 2 Diffusion Models
Throughout the paper, we use Stable Diffusion [19], an open-source text-to-image model. The diffusion model [5, 8, 24, 26] is trained in the latent space of a VAE [11], which downsamples images for computation efficiency. The model is trained to reconstruct the clean latent representation \(x_{0}\) from a perturbed representation \(x_{t}\) given the text condition \(c\), which is embedded with the CLIP text encoder [18]. The diffusion model is trained with the following objective:
\[\sum_{t=1}^{T}\mathbb{E}_{x_{0},\epsilon}[||\epsilon-\epsilon_{\theta}(x_{t},t,c)||^{2}], \tag{1}\]
where \(\epsilon\) is an added noise, \(t\) is a time step indicating a perturbed noise level, and \(\epsilon_{\theta}\) is a diffusion model with a U-Net [20] architecture with attention blocks [27]. During training, the text embeddings are projected to the keys and
Figure 1: Our _Custom-Edit_ allows high-fidelity text-guided editing, given a few references. Edited images with BLIP2 [13] captions show the limitation of textual guidance in capturing the fine-grained appearance of the reference.
values of cross-attention layers, and the text encoder is kept frozen to preserve its _language understanding capability_. Imagen [22] and eDiffi [1] have shown that leveraging the rich language understanding of large language models by freezing them is the key to boosting performance.
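A schematic PyTorch version of the objective in Eq. (1) is shown below. The `unet` and `scheduler` interfaces follow a typical latent-diffusion implementation and are placeholders rather than an exact API reference; `x0` is the VAE latent and `text_emb` the CLIP text embedding.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(unet, scheduler, x0, text_emb, num_timesteps=1000):
    """Eq. (1): predict the noise added to the latent x0 at a random timestep t,
    conditioned on the text embedding; the text encoder itself stays frozen."""
    noise = torch.randn_like(x0)
    t = torch.randint(0, num_timesteps, (x0.shape[0],), device=x0.device)
    x_t = scheduler.add_noise(x0, noise, t)                      # forward (noising) process
    noise_pred = unet(x_t, t, encoder_hidden_states=text_emb).sample
    return F.mse_loss(noise_pred, noise)                         # || eps - eps_theta(x_t, t, c) ||^2
```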
## 3 Custom-Edit
Our goal is to edit images with complex visual instructions given as reference images (Fig. 1). Therefore, we propose a two-step approach that (i) customizes the model on given references (Sec. 3.1) and (ii) edits images with textual prompts (Sec. 3.2). Our method is presented in Fig. 2.
### Customization
**Trainable Parameters.** We optimize only the keys and values of cross-attention and the '[rare token]', following Custom-Diffusion [12]. As we discuss in Sec. 4, our results indicate that training these _language-relevant_ parameters is crucial for successfully transferring reference concepts to source images. Furthermore, training only these parameters requires less storage than Dreambooth [21].
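As a rough illustration of this parameter selection, the sketch below assumes a diffusers-style UNet in which cross-attention blocks are conventionally named `attn2` with key/value projections `to_k`/`to_v`; the module names and the handling of the rare-token embedding are assumptions for illustration, not the authors' exact code.

```python
import torch

trainable = []
for name, param in unet.named_parameters():          # unet: assumed pretrained UNet module
    if "attn2.to_k" in name or "attn2.to_v" in name:  # keys/values of cross-attention only
        param.requires_grad_(True)
        trainable.append(param)
    else:
        param.requires_grad_(False)

# the embedding vector of the rare token 'V*' is optimised as well
trainable.append(rare_token_embedding)                # assumed to be an nn.Parameter
optimizer = torch.optim.AdamW(trainable, lr=1e-5)     # learning rate illustrative
```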
**Augmented Prompts.** We fine-tune the abovementioned parameters by minimizing Eq. (1). We improve Custom-Diffusion for editing by augmenting the text input as '[rare token] [_modifier_] [class noun]' (e.g., 'V* _patterned_ teapot'). We find that '[modifier]' encourages the model to focus on learning the appearance of the reference.
**Datasets.** To keep the language understanding while fine-tuning on the reference, we additionally minimize prior preservation loss [21] over diverse images belonging to the same class as the reference. Thus, we use CLIP-retrieval [3] to retrieve 200 images and their captions from the LAION dataset [23] using the text query 'photo of a [modifier] [class noun]'.
### Text-Guided Image Editing
**Prompt-to-Prompt.** We use Prompt-to-Prompt [7] (P2P), a recently introduced editing framework that edits images by only modifying source prompts. P2P proposes attention injection to preserve the structure of a source image. For each denoising step \(t\), let us denote the attention maps of the source and edited image as \(M_{t}\) and \(M_{t}^{*}\), respectively. P2P then injects a new attention map \(Edit(M_{t},{M_{t}}^{*},t)\) into the model \(\epsilon_{\theta}\). \(Edit\) is an attention map editing operation, including _prompt refinement_ and _word swap_. Additionally, P2P enables local editing with an automatically computed mask. P2P computes the averages of the cross-attention maps \(\bar{M}_{t,w}\) and \(\bar{M}_{t,w}^{*}\) related to the word \(w\) and thresholds them to produce the binary mask \(B(\bar{M}_{t})\cup B(\bar{M}_{t}^{*})\). Before editing with P2P, we utilize Null-Text Inversion [16] to boost source preservation. Refer to Sec. C for a more detailed description.
**Operation Choice.** Due to the limited number of reference images, the customized words favor only a limited variety of structures. This inspired us to propose the following recipe. First, we use _prompt refinement_ for the Edit function. _Word swap_ fails when the customized words do not prefer the swapped attention map. Second, we use mask \(B(\bar{M}_{t})\) rather than \(B(\bar{M}_{t})\cup B(\bar{M}_{t}^{*})\), as the customized words are likely to generate incorrect masks.
**Source-Reference Trade-Off.** A key challenge in image editing is balancing the edited image's source and reference similarities. We refer to \(\tau/T\) as _strength_, where P2P injects self-attention from \(t=T\) to \(t=\tau\). In P2P, we observed that a critical factor in controlling the trade-off is the injection
Figure 2: Our Custom-Edit consists of two processes: the customization process and the editing process. **(a) Customization.** We customize a diffusion model by optimizing only language-relevant parameters (i.e., custom embedding V* and attention weights) on a given set of reference images. We also apply the prior preservation loss to alleviate the language drift. **(b) Editing.** We then transform the source image to the output using the customized word. We leverage the P2P and Null-text inversion methods [7, 16] for this process.
Figure 3: **Custom-Edit results.** Our method transfers the reference’s appearance to the source image with unprecedented fidelity. The structures of the source are well preserved. We obtain source prompts using BLIP2 [13]. Except for the pencil drawing example, we use local editing of P2P with automatically generated masks.
of self-attention rather than cross-attention. Higher strength denotes higher source similarity at the expense of reference similarity. In Sec. 4, we also show results with SDEdit [15], which diffuses the image from \(t=0\) to \(t=\tau\) and denoises it back. As opposed to P2P, higher strength in SDEdit means higher reference similarity.
## 4 Experiment
In this section, we aim to validate each process of Custom-Edit. Specifically, we assess our design choices for customization by using Textual Inversion [6] and Dreambooth [21] in the customization process. We compare their source-reference trade-offs in the editing process. In addition to P2P, we use SDEdit [15] in our experiments.
**Baselines.** Textual Inversion learns a new text embedding V*, initialized with a class noun (e.g., 'pot'), by minimizing Eq. (1) for the input prompt 'V*'. Dreambooth fine-tunes the diffusion model while the text encoder is frozen. Eq. (1) is minimized over a few images given for the input prompt '[rare token] [class noun]' (e.g., 'ktn teapot'). SDEdit is the simplest editing method, which diffuses and then denoises the image.
**Datasets.** We use eight references in our experiments, including two pets, five objects, and one artwork. For each reference, we used five source images on average.
**Metrics.** We measure the source and reference similarities with CLIP ViT-B/32 [18]. We use strengths [0.2, 0.4, 0.6, 0.8] for P2P and [0.5, 0.6, 0.7, 0.8] for SDEdit results. We generated two P2P samples with cross-attention injection strengths [0.2, 0.6], and three SDEdit samples for each strength and source image from different random seeds.
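A sketch of how such CLIP ViT-B/32 similarities can be computed is shown below, using the Hugging Face `transformers` interface; treating both scores as image-image cosine similarities is our assumption about the exact definition of the metric.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_image_similarity(img_a, img_b):
    """Cosine similarity between CLIP image embeddings of two PIL images."""
    inputs = processor(images=[img_a, img_b], return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return (feats[0] @ feats[1]).item()

# source similarity: edited vs. source image; reference similarity: edited vs. reference image
source_sim = clip_image_similarity(edited, source)        # edited, source, reference: PIL images
reference_sim = clip_image_similarity(edited, reference)
```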
**Inference Details.** We employ a guidance scale of 7.5 and 50 inference steps. We acquire all source prompts using BLIP2 [13]. More details are available in Sec. B.
### Qualitative Results
Fig. 3 illustrates the selected results. Custom-Edit transfers the reference's detailed appearance to the source while preserving the overall structure. For example, Custom-Edit generates a horizontally elongated V* wooden pot from the wine bottle (first row). In the second row, Custom-Edit generates a V* tortoise plushy wearing a hat with the texture of its shell. The blue jay in the third row became a V* ceramic bird with perfectly preserved macarons. In the last row, the V* cat is sitting in a pose that does not exist in the reference set. We show qualitative comparisons in Sec. A.1.
### Quantitative Results
Fig. 4 shows average trade-off curves on P2P and SDEdit. Our improved Custom-Diffusion yields the best trade-off, while Textual Inversion shows similar source similarity but lower reference similarity. Dreambooth has higher source similarity but lower reference similarity, suggesting that it is ineffective in modifying images. SDEdit results also show a similar tendency, supporting our claim that customizing language-relevant parameters is effective for editing. Note that SDEdit shows lower source similarity than P2P, indicating the superiority of P2P and our operation choices in text-guided editing.
## 5 Discussion
We propose Custom-Edit, which allows fine-grained editing with textual prompts. We present our design choices for each process, which can benefit future customization and editing work. Additionally, we discuss the trade-off between source and reference in diffusion-based editing.
Although Custom-Edit shows various successful editing results, there are some failure cases, as presented in Sec. A.3. Custom-Edit sometimes edits undesired regions or fails to edit complex backgrounds. We hypothesize that this is due to the inaccurate attention maps of Stable Diffusion [7, 16] and the limited controllability of the text input. Potential solutions are to apply Custom-Edit on text-to-image models with larger text encoders [1, 22] or extended controllability [14, 28].
Figure 4: **Source-Reference Trade-Off. Custom-Diffusion shows the best trade-off, indicating the effectiveness of training only language-relevant parameters. We exhibit qualitative comparisons and samples with various strengths in Sec. A.2.**
**Acknowledgements:** This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korea government (Ministry of Science and ICT, MSIT) (2022R1A3B1077720), Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (2021-0-01343: AI Graduate School Program, SNU), and the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2023.
|
2306.04324 | GCT-TTE: Graph Convolutional Transformer for Travel Time Estimation | This paper introduces a new transformer-based model for the problem of travel
time estimation. The key feature of the proposed GCT-TTE architecture is the
utilization of different data modalities capturing different properties of an
input path. Along with the extensive study regarding the model configuration,
we implemented and evaluated a sufficient number of actual baselines for
path-aware and path-blind settings. The conducted computational experiments
have confirmed the viability of our pipeline, which outperformed
state-of-the-art models on both considered datasets. Additionally, GCT-TTE was
deployed as a web service accessible for further experiments with user-defined
routes. | Vladimir Mashurov, Vaagn Chopurian, Vadim Porvatov, Arseny Ivanov, Natalia Semenova | 2023-06-07T10:44:13Z | http://arxiv.org/abs/2306.04324v2 | # GCT-TTE: Graph Convolutional Transformer for Travel Time Estimation
###### Abstract
This paper introduces a new transformer-based model for the problem of travel time estimation. The key feature of the proposed GCT-TTE architecture is the utilization of different data modalities capturing different properties of an input path. Along with the extensive study regarding the model configuration, we implemented and evaluated a sufficient number of actual baselines for path-aware and path-blind settings. The conducted computational experiments have confirmed the viability of our pipeline, which outperformed state-of-the-art models on both considered datasets. Additionally, GCT-TTE was deployed as a web service accessible for further experiments with user-defined routes.
machine learning, graph convolutional networks, transformers, geospatial data, travel time estimation
## Introduction
Travel time estimation (TTE) is an actively developing branch of computational logistics that considers the prediction of potential time expenditures for specific types of trips Jenelius and Koutsopoulos (2013); Wu et al. (2020). With the recent growth of urban environment complexity, such algorithms have come into high demand both in commercial services and in general traffic management Xuegang et al. (2010).
Despite the applied significance of travel time estimation, it remains a challenging task in the case of ground vehicles. The majority of the currently established algorithms Wang et al. (2021); Derrow-Pinion et al. (2021) tend to utilize specific data modalities in order to capture the complex spatio-temporal dependencies influencing the traffic flow. With the recent success of multimodal approaches in the adjacent areas of travel demand prediction Chu et al. (2020) and journey planning He et al. (2022), fusing features from different sources is expected to be the next step towards better performance in TTE.
In this paper, we explore the predictive capabilities of TTE algorithms with different temporal encoders and propose a new transformer-based model, GCT-TTE. The main contributions of this study are the following:
1. In order to perform the experiments with the image modality, we extended the graph-based datasets for Abakan and Omsk Porvatov et al. (2022) with cartographic images in accordance with the provided trajectories.
Currently, the extended datasets are the only publicly available option for experiments with multimodal TTE algorithms.
2. In order to boost further research in the TTE area, we reimplemented and published the considered baselines in a unified format as well as corresponding weights and data preprocessing code. This contribution will enable the community to enhance evaluation quality in the future, as most of the TTE methods lack official implementations.
3. We proposed the GCT-TTE neural network for travel time estimation and extensively studied its generalization ability under various conditions. The obtained results allow us to conclude that our pipeline achieves better performance than the baselines in terms of several metrics.
4. The conducted experiments explicitly indicate that the performance of transformer-based models is less prone to degrade as the road network scales. This property remains crucial from an industrial perspective, as classic recurrent models undergo considerably larger performance drops.
5. For demonstration purposes, we deployed the inference of the GCT-TTE model as a web application accessible for manual experiments.
The web application is available at [http://gctte.online](http://gctte.online) and the code is published in the GitHub repository of the project1.
Footnote 1: [https://github.com/Eighonet/GCT-TTE](https://github.com/Eighonet/GCT-TTE)
## Related work
Travel time estimation methods can be divided into two main types of approaches corresponding to the _path-blind_ and _path-aware estimation_, Table 1. The path-blind estimation refers to algorithms relying only on data about the start and end points of a route Wang et al. (2019). The path-aware models utilize intermediate positions of a moving object represented in the form of GPS sequences Wang et al. (2014), map patches Fu and Lee (2019), or a road subgraph Wang et al. (2021). Despite the certain computational complexity increase, such approaches provide significantly better results which justify the attention paid to them in the recent studies Zhang et al. (2018), Derrow-Pinion et al. (2021), Sun et al. (2021).
One of the earliest path-aware models was the WDR architecture Wang et al. (2018) which mostly inherited the concept of joint learning from recommender systems Cheng et al. (2016). In further studies, this approach was extended regarding the usage of different data modalities. In particular, the DeepIST Fu and Lee (2019) model utilizes rectangular fragments of a general reference map corresponding to elements of a route GPS sequence. Extracted images are fed into a convolutional neural network (CNN) that captures spatial patterns of depicted infrastructure. These feature representations are further concatenated into the matrix processed by the LSTM-based temporal layer Hochreiter et al. (1997).
In contrast with the other approaches, DeepTTE Wang et al. (2018) is designed to operate directly on GPS coordinates via geospatial convolutions paired with a recurrent neural network. The first part of this pipeline transforms raw GPS sequences into a series of feature maps capturing the local spatial correlation between consecutive coordinates. The final block learns the temporal relations of obtained feature maps and produces predictions for the entire route along with its separate segments.
The concept of modality fusing was first introduced in TTE as a part of the DeepI2T Lan et al. (2019) model. This architecture utilizes LINE Tang et al. (2015) to produce grid embeddings and 3-layer CNN with pooling for image
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Path-blind models} \\ \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Modality} \\ \cline{2-4} & Graph & Images & GPS \\ \hline AVG & - & - & - \\ LR & - & - & - \\ MURAT & + & - & - \\ DeepI2T & + & + & - \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Path-aware models} \\ \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Modality} \\ \cline{2-4} & Graph & Images & GPS \\ \hline WDR & + & - & - \\ DeepIST & - & + & - \\ DeepTTE & - & - & + \\ DeepI2T & + & + & - \\ \hline \end{tabular}
\end{table}
Table 1: Demonstration of utilized modalities in path-blind and path-aware models
representations. As well as DeepTTE, DeepI2T includes a segment-based prediction component implemented in the form of residual blocks on top of the Bi-LSTM encoder.
In addition to extensively studied recurrent TTE methods, it is also important to mention recently emerged transformer models Liu et al. (2022); Semenova et al. (2022). Despite the limited comparison with classic LSTM-based methods, they have already demonstrated promising prediction quality, preserving the potential for further major improvements Shen et al. (2022); Fan et al. (2021). As most of the transformer models lack a comprehensive evaluation, we intend to explore GCT-TTE performance with respect to a sufficient number of state-of-the-art solutions to reveal its capabilities explicitly.
## Preliminaries
In this section, we introduce the main concepts required to operate with the proposed model.
**Route**. A route \(r\) is defined as the set \(\{c^{r},a^{r},t^{r}\}\), where \(c^{r}\) is the sequence of GPS coordinates of a moving object, \(a^{r}\) is the vector of temporal and weather data, and \(t^{r}\) is the travel time.
As the _image modality_\(p^{r}\) of a route \(r\), we utilize geographical map patches corresponding to each coordinate \(c_{i}^{r}\in c^{r}\). Each image has a fixed size \(W\times H\times 3\) across all of the GPS sequences in a specific dataset.
**Road network**. Road network is represented in the form of graph \(G=\{V,E,X\}\), where \(V=\{v_{1},\;...\;,v_{n}\}\) is the set of nodes corresponding to the segments of city roads, \(E=\{(v_{i},v_{j})\;|\;v_{i}\to v_{j}\}\) is the set of edges between connected nodes \(v_{i},v_{j}\in V\), \(X:n\times m\rightarrow\mathbf{R}\) is a feature matrix of nodes.
Description of a route \(r\) can be further extended by the _graph modality_\(g^{r}=\{v_{k}\;|\;k=argmin_{j}\,\rho(c_{i}^{r},v_{j})\}_{i=1}^{|c^{r}|}\), where \(\rho(c_{i}^{r},v_{j})\) is the minimum Euclidean distance between coordinates associated with \(v_{j}\) and \(c_{i}^{r}\).
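As an illustration of how the graph modality can be obtained in practice, the following sketch (our own illustration, not the authors' released code; the coordinate arrays and variable names are assumptions) assigns each GPS point of a route to the nearest road-segment node by Euclidean distance.

```python
import numpy as np

def graph_modality(route_coords, node_coords):
    """Map each GPS point of a route to its nearest road-graph node.

    route_coords : (T, 2) array of coordinates of the trajectory c^r.
    node_coords  : (N, 2) array of coordinates associated with nodes v_1..v_N.
    Returns the sequence of node indices g^r (one index per trajectory point).
    """
    g = []
    for c in route_coords:
        # Euclidean distance from this GPS point to every road-segment node
        d = np.linalg.norm(node_coords - c, axis=1)
        g.append(int(np.argmin(d)))
    return g

# toy usage
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
route = np.array([[0.1, 0.1], [0.9, 0.2], [1.1, 0.8]])
print(graph_modality(route, nodes))  # -> [0, 1, 2]
```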
**Travel time estimation**. For each entry \(r\), it is required to estimate the travel time \(t^{r}\) using the elements of feature description \(\{c^{r},p^{r},g^{r},a^{r}\}\).
## Data
We explored the predictive performance of the algorithm on two real-world datasets collected during the period from December 1, 2020 to December 31, 2020 in Abakan (112.4 square kilometers) and Omsk (577.9 square kilometers). Each dataset consists of a road graph and associated routes, Table 2. In the preprocessing stage, we excluded trips that lasted less than 30 seconds along with the ones that took more than 50 minutes.
In order to supply the image-based models with the relevant input data, we extended the road graphs with the map patches parsed via the Open Street Map API2. Depending on the requirements of the considered learning model, image datasets had to be constructed either from fixed grid partitions or centered around the elements of GPS sequences. In the first case, a geographical map of a city was divided into equal disjoint patches, which were further mapped to the GPS data in accordance with the presence of coordinates in a specific partition. The trajectory-based approach to dataset construction does not require the disjoint property of images and relies on the extraction of patches centered at the specified coordinate. The obtained grid-based image dataset consists of \(96\,101\) instances for Abakan and \(838\,865\) for Omsk, while the trajectory-based dataset has \(544\,502\) and \(3\,376\,294\) images, respectively.
Footnote 2: [https://www.openstreetmap.org](https://www.openstreetmap.org)
One of the crucial features of the considered datasets is the absence of traffic flow properties. The availability of such data is directly related to specialized tracking systems (based on loop detectors or observation cameras), which are not present in the majority of cities. In order to make GCT-TTE suitable for the greatest number of urban environments, we decided not to limit the study to this rarely accessible data.
## Method
In this section, we provide an extensive description of the GCT-TTE main components: pointwise and sequence representation blocks, Figure 1.
#### Patches encoder
In order to extract features from the image modality, we utilized the RegNetY Radosavovic et al. (2020) architecture from the SEER model family. The key component of this architecture is the convolutional recurrent neural network (ConvRNN) which controls the spatio-temporal information flow between building blocks of the neural network.
Each RegNetY block consists of three operators. The initial convolution layer of \(t\)'th block processes the input tensor \(X_{1}^{t}\) and returns the feature map \(X_{2}^{t}\). Next, the obtained representation \(X_{2}^{t}\) is fed to ConvRNN:
\[H^{t}=\tanh(\mathrm{C_{x}}(X_{2}^{t})+\mathrm{C_{h}}(H^{t-1})+b_{h}), \tag{1}\]
where \(H^{t-1}\) is the hidden state of the previous RegNetY block, \(b_{h}\) is a bias tensor, \(\mathrm{C_{x}}\) and \(\mathrm{C_{h}}\) correspond to convolutional layers. In the following stage, \(X_{2}^{t}\) and \(H^{t}\) are utilized as input of the last convolution layer, which is further extended by residual connection.
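A minimal PyTorch sketch of the update in Eq. (1) may help make the recurrence concrete; the channel counts, kernel size, and class name below are illustrative assumptions rather than the exact RegNetY/SEER configuration.

```python
import torch
import torch.nn as nn

class ConvRNNCell(nn.Module):
    """Sketch of the ConvRNN update of Eq. (1):
    H^t = tanh(C_x(X_2^t) + C_h(H^{t-1}) + b_h)."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv_x = nn.Conv2d(in_channels, hidden_channels, kernel_size, padding=pad, bias=False)
        self.conv_h = nn.Conv2d(hidden_channels, hidden_channels, kernel_size, padding=pad, bias=False)
        self.bias = nn.Parameter(torch.zeros(1, hidden_channels, 1, 1))  # b_h

    def forward(self, x, h_prev):
        # x: feature map X_2^t of the current block; h_prev: hidden state H^{t-1}
        return torch.tanh(self.conv_x(x) + self.conv_h(h_prev) + self.bias)

# usage: one block's feature map and the previous block's hidden state
cell = ConvRNNCell(in_channels=64, hidden_channels=64)
x2 = torch.randn(2, 64, 32, 32)
h_prev = torch.zeros(2, 64, 32, 32)
h = cell(x2, h_prev)  # -> (2, 64, 32, 32)
```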
As the SEER models are capable of producing robust features that are well-suited for out-of-distribution generalization Goyal et al. (2022), we pre-trained RegNetY with the following autoencoder loss:
\[\mathcal{L}(W\times RegNet(X),\,f(X))\to 0, \tag{2}\]
where \(\mathcal{L}\) is the binary cross-entropy function, \(f\) is an image flattening operator, and \(W\) is the projection matrix of learning parameters that maps model output to the flattened image.
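The pre-training objective of Eq. (2) can be sketched as follows. The sigmoid applied to the projected features is our assumption, added only so that the binary cross-entropy is well defined on \([0,1]\)-valued targets, and `encoder` stands for any image backbone returning a feature vector.

```python
import torch
import torch.nn as nn

def autoencoder_loss(encoder, W, images):
    """Sketch of Eq. (2): BCE between the projected encoder output and the
    flattened input image f(X). images: (B, 3, H, W), values scaled to [0, 1]."""
    feats = encoder(images)               # (B, d) RegNet-style features
    recon = torch.sigmoid(feats @ W)      # W projects features to H*W*3 values
    target = images.flatten(start_dim=1)  # f(X): flatten each image
    return nn.functional.binary_cross_entropy(recon, target)

# toy usage with a stand-in encoder
enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 32))
W = torch.randn(32, 3 * 8 * 8, requires_grad=True)
x = torch.rand(4, 3, 8, 8)
print(autoencoder_loss(enc, W, x).item())
```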
#### Auxiliary encoder
Along with the map patches and graph elements, we apply additional features \(a^{r}\) corresponding to the temporal and weather data (e.g., trip hour, type of day, precipitation). The GCT-TTE model processes this part of the input with the help of a trivial linear layer:
\[A^{r}=Wa^{r}, \tag{3}\]
where \(W\) is a matrix of learning parameters.
#### Graph encoder
The graph data is handled with the help of the graph convolutional layers Kipf and Welling (2016) defined as follows:
\[h_{u}^{(k)}=\mathrm{ReLU}\left(W^{(k)}\underset{v\in\mathcal{N}(u)}{\mathrm{AGG}}\left(\frac{h_{v}^{(k-1)}}{||\mathcal{N}_{uv}||}\right)\right), \tag{4}\]
where \(h_{u}^{(k)}\) is a \(k\)-hop embedding of \(u\in V\), \(h_{u}^{(0)}=x_{u}\), \(W^{(k)}\) is a matrix of learning parameters of the \(k\)'th convolutional layer, \(\mathcal{N}(u)\) is the set of neighbour nodes of \(u\), \(\mathrm{AGG}_{v\in\mathcal{N}(u)}\) is a sum aggregation function, and \(||\mathcal{N}_{uv}||=\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}\).
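A plain NumPy sketch of one layer of Eq. (4) is given below; this is our illustration, and the dense adjacency matrix and undirected-degree handling are simplifications of the actual road graph.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One symmetrically normalized graph convolution, Eq. (4).

    H : (N, d_in)     node embeddings h^{(k-1)}
    A : (N, N)        adjacency matrix (1 if nodes are connected)
    W : (d_in, d_out) learnable weights W^{(k)}
    """
    deg = A.sum(axis=1)                 # |N(u)| for every node
    norm = np.sqrt(np.outer(deg, deg))  # ||N_uv|| = sqrt(|N(u)| |N(v)|)
    norm[norm == 0] = 1.0               # guard against isolated nodes
    agg = (A / norm) @ H                # sum over neighbours of h_v / ||N_uv||
    return np.maximum(agg @ W, 0.0)     # ReLU(W . aggregation)

# toy usage on a 3-node graph
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = np.random.rand(3, 4)
W = np.random.rand(4, 2)
print(gcn_layer(H, A, W).shape)  # -> (3, 2)
```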
To accelerate the convergence of the GCT-TTE model, we pre-trained the weights of the graph convolutions by the Deep Graph InfoMax algorithm Velickovic et al. (2019). This approach optimizes the loss function that allows learning the difference between initial and corrupted embeddings of nodes:
\[\mathcal{L}=\frac{1}{N+M}\Big{(}\sum_{i=1}^{N}E_{\mathcal{G}}\Big{[}log(D(h_{u },h_{\mathcal{G}}))\Big{]}+\sum_{j=1}^{M}\,E_{\mathcal{G}}\left[log(1-D(\tilde {h}_{u},h_{\mathcal{G}}))\right]\Big{)}, \tag{5}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Road network} \\ \hline Property \(\backslash\) City & Abakan & Omsk \\ \hline Nodes & \(65\,524\) & \(231\,688\) \\ Edges & \(340\,012\) & \(1\,149\,492\) \\ Clustering & 0.5278 & 0.53 \\ Usage median & 12 & 8 \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Trips} \\ \hline Property \(\backslash\) City & Abakan & Omsk \\ \hline Trips number & \(121\,557\) & \(767\,343\) \\ Coverage & 53.3\% & 49.5\% \\ Average time & 427 sec & 608 sec \\ Average length & 3604 m & 4216 m \\ \hline \end{tabular}
\end{table}
Table 2: Description of the Abakan and Omsk datasets.
where \(h_{u}\) is an embedding of node \(u\) based on the initial graph \(\mathcal{G}\), \(\tilde{h}_{u}\) is an embedding of a node \(u\) from the corrupted version \(\tilde{\mathcal{G}}\) of the graph \(\mathcal{G}\), \(D\) corresponds to the discriminator function.
The final output of the pointwise block constitutes a concatenation of the weighted representations and auxiliary data for each route \(r\) with \(k\) segments:
\[P_{r}=\mathrm{CONCAT}(\alpha\cdot H^{r},(1-\alpha)\cdot I^{r},\ \beta\cdot A^{r}), \tag{6}\]
where \(H^{r}\) is the matrix of size \(k\times e_{g}\) of graph-based segment embeddings, \(I^{r}\) is the matrix of size \(k\times e_{i}\) obtained from a flattened RegNet output, and \(\alpha\), \((1-\alpha)\), and \(\beta\) correspond to the weight coefficients of the specific modalities.
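The fusion of Eq. (6) amounts to a weighted concatenation per segment; the sketch below is illustrative, and the tiling of the auxiliary vector over the \(k\) segments is our assumption.

```python
import torch

def pointwise_block(H, I, A, alpha=0.9, beta=1.0):
    """Weighted fusion of Eq. (6) for a route with k segments.

    H : (k, e_g) graph-based segment embeddings
    I : (k, e_i) flattened image embeddings
    A : (e_a,)   auxiliary (temporal/weather) embedding for the route
    """
    k = H.shape[0]
    A_tiled = A.unsqueeze(0).expand(k, -1)   # repeat the route-level vector per segment
    return torch.cat([alpha * H, (1 - alpha) * I, beta * A_tiled], dim=1)

# toy usage
H = torch.randn(5, 192); I = torch.randn(5, 128); A = torch.randn(16)
print(pointwise_block(H, I, A).shape)  # -> torch.Size([5, 336])
```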
#### Sequence representation block
To extract sequential features, the output of the pointwise representation block is fed to a transformer encoder Vaswani et al. (2017). The encoder consists of two attention layers with a residual connection followed by a normalization operator. The multi-head attention coefficients are defined as follows:
\[\alpha_{i,j}^{(h)}=\mathrm{softmax}_{w_{j}}\left(\frac{\langle W_{h,q}^{T}x_{ i},W_{h,k}^{T}x_{j}\rangle}{\sqrt{d_{k}}}\right), \tag{7}\]
where \(x_{i},x_{j}\in P_{r}\), \(h\) is an attention head, \(d_{k}\) is a scale coefficient, \(W_{h,q}^{T}\) and \(W_{h,k}^{T}\) are query and key weight matrices, \(w_{j}\) is a vector of softmax learning parameters. The output of the attention layer will be:
\[u_{i}=\mathrm{LayerNorm}\left(x_{i}+\sum_{h=1}^{H}W_{c,h}^{T}\sum_{j=1}^{n} \alpha_{i,j}^{(h)}W_{h,v}^{T}x_{j}\right), \tag{8}\]
where \(W_{h,v}^{T}\) is the value weight matrix and \(H\) is the number of attention heads.
The final part of the sequence representation block corresponds to the flattening operator and several linear layers with the ReLU activation, which predict the travel time of a route.
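For reference, a single attention layer of Eqs. (7)-(8) corresponds to standard multi-head self-attention with a residual connection and layer normalization; the following sketch is illustrative, with the model width and number of heads chosen arbitrarily rather than taken from the final configuration.

```python
import torch
import torch.nn as nn

class EncoderLayerSketch(nn.Module):
    """One attention layer implementing Eqs. (7)-(8): multi-head scaled
    dot-product attention followed by a residual connection and LayerNorm."""

    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, P):
        # P: (batch, k, d_model) output of the pointwise representation block
        attn_out, _ = self.attn(P, P, P)   # attention coefficients and value mixing
        return self.norm(P + attn_out)     # u_i = LayerNorm(x_i + attended sum)

# toy usage
layer = EncoderLayerSketch()
P = torch.randn(2, 20, 256)
print(layer(P).shape)  # -> torch.Size([2, 20, 256])
```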
### Results
In this section, we reveal the parameter dependencies of the model and compare the results of the considered baselines.
Figure 1: Demonstration of the GCT-TTE pipeline.
### Experimental setup
The experiments were conducted on 16 Tesla V100 GPUs. For GCT-TTE training, the Adam optimizer Kingma and Ba (2014) was chosen with a learning rate of \(5\cdot 10^{-5}\) and a batch size of 16. For better convergence, we apply a learning rate scheduler with a patience of 10 epochs and a scaling factor of 0.1. The training time for the final configuration of the GCT-TTE model is 6 hours in the case of Abakan and 30 hours for Omsk.
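The optimization setup described above can be sketched as follows; the stand-in model and the synthetic batch are placeholders used only to make the snippet runnable.

```python
import torch
import torch.nn as nn

# `model` is a trivial stand-in for the assembled GCT-TTE network.
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.1, patience=10)      # scale LR by 0.1 after 10 stale epochs

for epoch in range(3):                        # illustrative training loop
    x, y = torch.randn(16, 10), torch.randn(16, 1)   # one batch of size 16
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())               # scheduler monitors the tracked loss
```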
The established values of quality metrics were obtained from the 5-fold cross-validation procedure. As the measures of the model performance, we utilize mean absolute error (MAE), rooted mean squared error (RMSE), and 10\(\%\) satisfaction rate (SR). Additionally, we compute mean absolute percentage error (MAPE) as it is frequently used in related studies.
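For clarity, the evaluation metrics can be computed as in the sketch below; the satisfaction-rate formula (share of trips with relative error within 10%) is a common convention and is our assumption of the exact definition used.

```python
import numpy as np

def tte_metrics(y_true, y_pred, sr_tol=0.10):
    """MAE, RMSE, MAPE and the 10% satisfaction rate (SR, in percent)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err) / y_true)
    sr = 100.0 * np.mean(np.abs(err) / y_true <= sr_tol)
    return mae, rmse, mape, sr

# toy usage with travel times in seconds
print(tte_metrics([400, 600, 300], [420, 580, 360]))
```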
### Models comparison and evaluation
The results of the path-blind evaluation are depicted in Table 3. Neighbor average (AVG) and linear regression (LR) demonstrated the worst results among the trivial baselines, while gradient boosted decision trees (GBDT) clearly outperformed more complex models in the case of the largest city. The MURAT model achieved the best score for Abakan in terms of MAE and RMSE, while GCT-TTE has the minimum MAPE among all of the considered architectures.
The demonstrated variability of metric values makes identifying the best model a rather hard task in the path-blind setting. The simplest models are still capable of competing with such architectures as MURAT, which was expected to perform tangibly better on both considered datasets. The results of GCT-TTE can be partially explained by its structure, as it was not initially designed for path-blind evaluation.
As can be seen in Table 4, the proposed solution outperformed the baselines in terms of the RMSE value, which confirms the robustness of GCT-TTE against large errors. The comparison of MAE and RMSE for the considered methods has shown a minimal gap between these metrics in the case of GCT-TTE for both cities, signifying the efficiency of the technique with respect to dataset size. Overall, the results have confirmed that GCT-TTE is a more reliable approach than the LSTM-based models: while MAPE remains approximately the same across top-performing architectures, GCT-TTE achieves significantly better MAE and RMSE values. The conducted computational experiments also indicated that DeepI2T and WDR have intrinsic problems with convergence, while GCT-TTE demonstrates smoother training dynamics.
### Performance analysis
In the case of both datasets, dependencies between the travelled distance and obtained MAE on the corresponding trips reveal similar dynamics: as the path length increases, the error rate continues to grow, Figure 2(b, d). The prediction variance is inversely proportional to the number of routes in a particular length interval except for the small percentage of the shortest routes. The main difference between the MAE curves is reflected in the higher magnitudes of performance fluctuations in Abakan compared to Omsk.
The temporal dynamics of GCT-TTE errors exhibit rich nonlinear properties during a 24-hour period. The shape of the error curves demonstrates that our model tends to accumulate a majority of errors in the period between 16:00 and 18:00, Figure 2(a, c). This time interval corresponds to the end of the working day, which has a crucial impact on the traffic flow foreseeability.
Despite the mentioned performance outlier, the general behaviour of temporal dependencies allows concluding that GCT-TTE successfully captures the factors influencing the target value in the daytime. With the growing sparsity of data during night hours, it is still capable of producing relevant predictions for Omsk. In the case of Abakan, the GCT-TTE performance drop can be associated with a substantial reduction in intercity trips number (which emerged to be an easier target for the model).
### Sensitivity analysis
In order to achieve better prediction quality, we extensively studied the dependencies between GCT-TTE parameters and model performance in terms of the MAE metric. The best value for the modality coefficient \(\alpha\) was 0.9, which reflects the significant contribution of graph data towards error reduction. For the final model, we utilized 2 graph convolutional layers with hidden size 192, Figure 3(a, b). A lack of aggregation depth can significantly reduce the performance of GCT-TTE, while an excessive number of layers has a less pronounced negative impact on MAE. A similar situation can be observed in the case of the hidden size, whose effect approaches a plateau after reaching a certain threshold value.
Along with the graph convolutions, we explored the configuration of the sequence representation part of GCT-TTE. Since the transformer block remains its main component, the computational experiments were focused on the influence of encoder depth on quality metrics, Figure 3(c). As it can be derived from the U-shaped dependency, the best number of attention layers is 3.
## Demonstration
In order to provide access to the inference of GCT-TTE, we deployed a demonstration application3 in a website format, Figure 4. The application's interface consists of a user guide, navigation buttons, an erase button, and a comparison button. A potential user can construct and evaluate an arbitrary route by clicking on the map at the desired start and end points: the system's response will contain the shortest path and the corresponding value of the estimated time of arrival.
Footnote 3: [http://gctte.online](http://gctte.online)
For additional evaluation of the considered baselines, a limited number of predefined trajectories with known ground truth can also be requested. In this case, the response will contain three random trajectories from the datasets with the associated predictions of the WDR, DeepI2T, and GCT-TTE models along with the real travel time.
## Conclusion
In this paper, we introduced a multimodal transformer architecture for travel time estimation and performed an extensive comparison with the other existing approaches. Obtained results allow us to conclude that the transformer-based models can be efficiently utilized as sequence encoders in the path-aware setting. Our experiments with different data modalities revealed the superior importance of graphs compared to map patches. Such an outcome can be explained by the inheritance of main features between modalities where graph data represents the same properties more explicitly. In
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{Abakan} & \multicolumn{4}{c|}{Omsk} \\ \hline _Baseline\textbackslash{}Metric_ & MAE & RMSE & MAPE & SR & MAE & RMSE & MAPE & SR \\ \hline DeepIST & 153.88 & 241.29 & 0.3905 & 18.08 & 256.50 & 415.16 & 0.6361 & 14.39 \\ \hline DeepTTE & 111.03 & 174.56 & 0.2165 & 31.45 & 179.07 & 296.98 & **0.1898** & 34.03 \\ \hline GridLSTM & 100.27 & 206.91 & 0.2202 & 30.74 & 135.74 & 257.18 & 0.2120 & 31.21 \\ \hline Deepl2T & 97.99 & 201.33 & **0.2128** & 31.34 & 136.66 & 260.90 & 0.2124 & 31.23 \\ \hline WDR & 97.22 & 190.09 & 0.2162 & **31.98** & 131.57 & 269.00 & 0.2039 & 33.34 \\ \hline \hline GCT-TTE & **92.26** & **147.89** & 0.2262 & 30.46 & **107.97** & **169.15** & 0.1961 & **35.17** \\ \hline \end{tabular}
\end{table}
Table 4: Path-aware models comparison
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{Abakan} & \multicolumn{4}{c|}{Omsk} \\ \hline _Baseline \textbackslash{}Metric_ & MAE & RMSE & MAPE & SR & MAE & RMSE & MAPE & SR \\ \hline AVG & 322.77 & 477.61 & 0.761 & 0.018 & 439.05 & 628.75 & 0.741 & 0.012 \\ \hline LR & 262.33 & 456.63 & 1.169 & 9.527 & 416.81 & 593.01 & 1.399 & 7.187 \\ \hline GBDT & 245.77 & 433.91 & 1.106 & 10.28 & **209.99** & **372.11** & 0.656 & **17.72** \\ \hline \hline MURAT & **182.97** & **282.15** & 0.685 & 10.77 & 285.72 & 444.74 & 0.856 & 9.997 \\ \hline \hline GCT-TTE & 221.71 & 337.59 & **0.505** & **11.12** & 376.74 & 590.93 & **0.5486** & 8.99 \\ \hline \end{tabular}
\end{table}
Table 3: Path-blind models comparison
further studies, we intend to focus on the design of a more expressive image encoder as well as consider the task of path-blind travel time estimation, which currently remains challenging for the GCT-TTE model.
Figure 3: Parametric dependencies of GCT-TTE performance for Abakan: number of graph convolutions (a), hidden size of graph convolutions (b), and number of transformer encoder layers (c).
Figure 2: Spatial and temporal dependencies across the different groups of test entries for Abakan (a, b) and Omsk (c, d): blue and red lines depict mean and median values of MAE, borders of filled area correspond to Q1 and Q3 quartiles of a MAE distribution.
#### Declarations
Ethics approval and consent to participate
Not applicable.
### Consent for publication
Not applicable.
#### Availability of data and materials
Considered models and datasets are available in the project's GitHub repository.
### Competing interests
The authors declare that they have no competing interests.
## Funding
Not applicable.
## Authors contributions
V.M., V.C., and A.I.: Software, Data curation, Validation, Visualization; V.P.: Software, Visualization, Conceptualization, Methodology, Writing (original draft); N.S.: Conceptualization, Methodology, Supervision, Writing (review & editing).
Figure 4: An interface of the demonstrational application.
## Acknowledgements
The authors are grateful to Vladislav Zamkovy for the help with application deployment.
|
2303.01482 | Modulation instability gain and localized waves by modified
Frenkel-Kontorova model of higher order nonlinearity | In this paper, modulation instability and nonlinear supratransmission are
investigated in a one-dimensional chain of atoms using cubic-quartic
nonlinearity coefficients. As a result, we establish the discrete nonlinear
evolution equation by using the multi-scale scheme. To calculate the modulation
instability gain, we use the linearizing scheme. Particular attention is given
to the impact of the higher nonlinear term on the modulation instability.
Following that, full numerical integration was performed to identify modulated
wave patterns, as well as the appearance of a rogue wave. Through the nonlinear
supratransmission phenomenon, one end of the discrete model is driven into the
forbidden bandgap. As a result, for driving amplitudes above the
supratransmission threshold, the solitonic bright soliton and modulated wave
patterns are satisfied. An important behavior is observed in the transient
range of time of propagation when the bright solitonic wave turns into a
chaotic solitonic wave. These results corroborate our analytical investigations
on the modulation instability and show that the one-dimensional chain of atoms
is a fruitful medium to generate long-lived modulated waves. | Alphonse Houwe, Souleymanou Abbagari, Lanre Akinyemi, Serge Yamigno Doka, Kofane Timoleon Crepin | 2023-02-25T17:29:35Z | http://arxiv.org/abs/2303.01482v1 | Modulation instability gain and localized waves by modified Frenkel-Kontorova model of higher order nonlinearity
###### Abstract
In this paper, modulation instability and nonlinear supratransmission are investigated in a one-dimensional chain of atoms using cubic-quartic nonlinearity coefficients. As a result, we establish the discrete nonlinear evolution equation by using the multi-scale scheme. To calculate the modulation instability gain, we use the linearizing scheme. Particular attention is given to the impact of the higher nonlinear term on the modulation instability. Following that, full numerical integration was performed to identify modulated wave patterns, as well as the appearance of a rogue wave. Through the nonlinear supratransmission phenomenon, one end of the discrete model is driven into the forbidden bandgap. As a result, for driving amplitudes above the supratransmission threshold, the solitonic bright soliton and modulated wave patterns are satisfied. An important behavior is observed in the transient range of time of propagation when the bright solitonic wave turns into a chaotic solitonic wave. These results corroborate our analytical investigations on the modulation instability and show that the one-dimensional chain of atoms is a fruitful medium to generate long-lived modulated waves.
**Keywords:** Modified Frenkel-Kontorova model; Modulation instability; Modulated waves patterns; Rogue waves
## 1 Introduction
In recent years, investigation of the localized waves in nonlinear systems has grown. A wide class of nonlinear evolution equations have been employed in different fields such as optical fibers, Bose-Einstein condensates, optomechanical lattices, molecular chains, fluid mechanics, and ferromagnetic structures [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. More often, the phenomenon that exhibits the
behavior of the excited localized modes is the modulation instability (MI). MI is characterized by a rapidly growing plane wave amplitude subject to small perturbations where nonlinear and dispersion terms interplay [2, 6, 7, 13, 15, 17, 23]. Usually, MI is manifested by the unstable or stable zones generated by the perturbed wave vector or nonlinear strength, which leads to the formation of modulated wave (MW) patterns. The three types of localized excitations that can be obtained are bright solitons, rogue waves (RWs), and breathers. For example, Conrad and co-workers have recently pointed out the propagation of the RWs of A and B types in a nonlocal medium where a nonlinear saturation parameter is used [4]. The authors have shown that when the MI is developed, the MW patterns emerge. In [5], the authors have exhibited the localized modes in a nonlinear oscillator lattice through the MI. It is obvious that the MI is an appropriate mechanism for the generation of localized waves. If most of the models used before were in the continuum limit, it is evident today that the discrete MI of the continuous wave (CW) in the discrete nonlinear evolution equation (DNLEE) has gained a lot of interest. Houwe et al. have used the DNLEE, which describes the wave propagating in the ferromagnetic chains, to develop the MI under the effects of the nearest-neighbor coupling. The discrete MI has been the subject of theoretical and experimental research. The work of Mounouna et al. [7] is a well-known example of the MI growth rate, where the effects of the nonlinear cubic-quartic coupling of the modified Frenkel-Kontorova model were shown on the gain profile and the unstable zones that emerged during the long time of plane wave propagation.
If the MI is an important process for developing localized waves, nonlinear supratransmission also remains a powerful tool where energy can flow in the forbidden bandgap (FG). This phenomenon has been developed by F. Geniet and co-workers by using the Klein Gordon equation. They have shown that when the amplitude of the driving is considered above the threshold for supratransmission, energy can flow in the FG [11]. Khomeriki et al. used the same procedure to derive the static breather solution, which synchronizes and adjusts to the Fermi-Pasta-Ulam model [12]. Beside this, several other studies have been developed to show that nonlinear strength can also favor the formation of the MWs patterns in the FG [13, 22, 25].
Very recently, the supratransmission has been exhibited by driving one end of the chain where the on-site potential of a cubic form has been used [10]. It has been called the quartic nonlinear interaction potential. Nonlinear supratransmission has been used in other applications, such as three-wave sharp interaction and low-light pulses when a two-level medium produces solitary waves [20]. In the present study, we point out the MWs, RWs, and diverse other localized waves under the effects of the cubic and quartic nonlinear interaction potentials. Thereafter, we subject one end of the chain to an external periodic force to demonstrate the supratransmission phenomenon. It emerges that, above the threshold, supratransmission of a localized bright soliton is fulfilled. We equally observed that when the driven amplitude (DA) is strong enough, the transient regime manifests itself by the escape of the bright soliton to chaos-like motion.
The rest of the paper is sketched as follows: In Sect. 2, we present the proposed model and thereafter use the standard multi-scale scheme to derive the DNLEE. Sect. 3 gives the linear stability of the MI. An expression of the MI growth rate is deduced from the dispersion relation and used to show the unstable and stable zones. In particular, we focused on the dispersion coefficient and the impact of the cubic and quadratic nonlinear interactions. Sect. 4 uses the full numerical simulation to corroborate analytical predictions, MW patterns, and RWs. On the other hand, one end of the chain is driven by an external periodic force. In FG, the formation of excited localized modes and bright soliton is observed in equal measure. Sect. 5 is assigned to concluding the work.
## 2 Analytical model
Motivated by the work of Mounouna et al. [7], we consider in this work a chain of coupled atoms subjugated to a deformable external periodic potential where the total Hamiltonian is written as:
\[\mathbf{H}=\Gamma\sum_{n}\Bigg{[}\frac{1}{2}\left(\frac{d\theta_{n}}{dt}\right) ^{2}+\left(\frac{1}{2}G_{2}(\theta_{n}-\theta_{n-1})^{2}+\frac{1}{3}G_{3}( \theta_{n}-\theta_{n-1})^{3}+\frac{1}{4}G_{4}(\theta_{n}-\theta_{n-1})^{4} \right)+\omega_{0}^{2}\frac{\tan^{2}\left(\frac{\theta_{n}}{2}\right)}{\left( \sigma^{2}+\tan^{2}\left(\frac{\theta_{n}}{2}\right)\right)}\Bigg{]}, \tag{1}\]
where \(\Gamma\) denotes the energy scale, \(\theta_{n}\) the dimensionless movements of particles and \(\omega_{g}\) the angular frequency. The potential interaction coefficients are \(G_{j}(j=2,3,4)\) and the equation of the motion reads [7]:
\[\begin{split}\frac{d^{2}\theta_{n}}{dt^{2}}=G_{2}(\theta_{n+1}-2 \theta_{n}+\theta_{n-1})+G_{3}\left[(\theta_{n+1}-\theta_{n})^{2}-(\theta_{n}- \theta_{n-1})^{2}\right]+G_{4}\left[(\theta_{n+1}-\theta_{n})^{3}-(\theta_{n}- \theta_{n-1})^{3}\right]\\ -\omega_{g}^{2}(\theta_{n}+\alpha\theta_{n}^{2}+\beta\theta_{n}^ {3}).\end{split} \tag{2}\]
The frequency parameter is \(\omega_{g}\), the nonlinearities in the potential's shape are \(\alpha\) and \(\beta\). Eq. (2) is the discrete equation that describes the movement of the chains of particles in the presence of the nonlinear coupling terms, and it takes its origin from the modified Frenkel-Kontorova model. In [8], the authors have considered the model with \(G_{3}=0\) and \(G_{4}=0\) to exhibit the effects of the nonlinearity coefficients and the substrate's deformability on MI. The model was recently used to investigate the interaction between cubic-quartic nonlinearities and the substrate's deformable parameter on MI growth rates [7]. The authors have shown that the influence of the quartic nonlinearity has extended the MI bands and that the amplitude of the plane wave has risen exponentially. It is important to underline that Eq. (2) can take the form of the Klein Gordon equation when cubic-quartic interactions and coupling are omitted. In what follows, we aim to highlight the effect of the nonlinearity coefficients on the MI growth rates. Thereafter, we establish the threshold amplitude (TA) expression, from which we will consider the DA to drive one end of the model in the forbidden frequency band gap. A fascinating matter that has been studied in several nonlinear systems is driven at one end of the model. But, to our knowledge, this subject was not formulated in [7]. To do so, we consider the standard multi-scale analytical paths as follows:
\[\theta_{n}(t)=\epsilon\left(\psi_{n}(\epsilon^{2},t)e^{i\phi_{n}}+cc\right)+ \epsilon^{2}\Theta_{n}(\epsilon^{2},t)+\epsilon\left(\Gamma_{n}(\epsilon^{2}, t)e^{2i\phi_{n}}+cc\right). \tag{3}\]
Here, Eq. (3) is the slowly varying (in time and space) MW solution, which propagates at a carrier frequency \(\omega\) and wave vector \(k\), \(\epsilon\) is a small parameter, and the phase is \(\phi_{n}=kn-\omega t.\) While \(\psi_{n}\) and \(\Gamma_{n}\) are complex functions, with cc denoting their complex conjugates, \(\Theta_{n}\) is a real function. We assume that \(G_{2}\sim\epsilon^{2},\ G_{3}\sim 1,\ G_{4}\sim 1,\ \alpha\sim 1\) and \(\beta\sim 1\)[7, 10]. Inserting Eq. (3) into Eq. (2) and gathering terms of order \(\epsilon,\ \epsilon^{2}\) and \(\epsilon^{3}\) together with \(e^{2i\phi_{n}}\), we get the DNLEE:
\[\Theta_{n}=2\alpha|\psi_{n}|^{2},\ \ \Gamma_{n}=-\frac{\alpha\omega_{g}^{2} \psi_{n}^{2}}{4\omega^{2}-\omega_{g}^{2}}, \tag{4}\]
and
\[\begin{split}-2i\omega\dot{\psi}_{n}+(\omega_{g}^{2}-\omega^{2} )\psi_{n}-G_{2}(\psi_{n+1}e^{ik}-2\psi_{n}+\psi_{n-1}e^{-ik})+3G_{3}\left(\psi _{n-1}^{*}e^{ik}-\psi_{n}^{*}\right)\left(\psi_{n-1}e^{-ik}-\psi_{n}\right)^{2 }\\ -3G_{4}\left[\left(\psi_{n-1}^{*}e^{ik}-\psi_{n}^{*}\right) \left(\psi_{n-1}e^{-ik}-\psi_{n}\right)^{2}+\left(\psi_{n+1}^{*}e^{-ik}-\psi_{ n}^{*}\right)\left(\psi_{n+1}e^{ik}-\psi_{n}\right)^{2}\right]\\ +\omega_{g}^{2}(3\beta-4\alpha^{2}+\frac{2\omega_{g}^{2}\alpha^{2 }}{4\omega^{2}-\omega_{g}^{2}})|\psi_{n}|^{2}\psi_{n}=0.\end{split} \tag{5}\]
Thus, Eq. (5) is the DNLEE with cubic-quartic interaction potential. As we have mentioned above, the upcoming section will discuss the MI growth rates.
## 3 Modulation instability
MI is the phenomenon where nonlinearity and dispersion interplay. Some works have been reported in [2, 6, 13, 15, 19]. In investigations of MI, the DNLEEs are the models most widely involved. For example, in [2], the authors used the well-known discrete nonlinear Schrodinger equation with cubic-quintic nonlinearity to analyze the MI growth rates under the effects of the nonlinear term. Tanemura et al. investigated the modulationally unstable or stable modes in loss-dispersion optical fibers. More recently, the effects of the nearest-neighbor coupling have been studied [13]. Without doubt, MI is the process where small perturbations are inserted into the CWs. One can also notice that the analytical investigation of the MI growth rate cannot say enough about the growing amplitude of the plane wave. As a result, numerical analysis is the most powerful tool for observing MW patterns over long periods of propagation. In what follows, we insert small perturbations into the CW to establish the linearized equation. Afterwards, we establish the MI gain, where the effects of the nonlinear terms are highlighted. To confirm our analytical investigation, we use numerical simulation to monitor the exponential growth rates of the plane wave. For this, we consider a perturbed plane-wave solution of Eq. (5) of the form:
\[\psi_{n}=(F_{0}+F_{n})\,e^{i(kn-\varpi t)}, \tag{6}\]
where \(F_{0}\) is the initial amplitude, \(k\) and \(\varpi\) are respectively the wave vector and angular frequency. Inserting Eq. (6) into Eq. (5), gives
\[i\frac{\partial}{\partial t}F_{n}+\Lambda_{1}F_{n+1}+\Lambda_{2}F_{n-1}+ \Lambda_{3}F_{n}+\Lambda_{4}F_{n+1}^{*}+\Lambda_{5}F_{n-1}^{*}+\Lambda_{6}F_{n }^{*}=0. \tag{7}\]
The parameters \(\Lambda_{j}(j=1,...,6)\) are in the Appendix. We consider the solution of Eq. (7) as follow:
\[F_{n}=f_{1}\cos(Qn+\Omega t)+if_{2}\sin(Qn+\Omega t), \tag{8}\]
where \(Q\) and \(\Omega\) are respectively the perturbed wave vector and angular frequency of the MI growth rate. Using Eq. (8) into Eq. (7), we obtain the matrix in the form
\[\left(\begin{array}{cc}i\Omega-(N_{1}-N_{2}+N_{4}-N_{5})\sin(Q)&i((N_{1}+N_ {2}-N_{4}-N_{5})\cos(Q)+N_{3}-N_{6})\\ (N_{1}+N_{2}+N_{4}+N_{5})\cos(Q)+N_{3}+N_{6}&\Omega+i(N_{1}-N_{2}-N_{4}+N_{5}) \sin(Q)\end{array}\right)\left(\begin{array}{c}f_{1}\\ f_{2}\end{array}\right)=\left(\begin{array}{c}0\\ 0\end{array}\right), \tag{9}\]
and Eq. (9) can vanish only for
\[\Omega^{2}+i(X_{1}-X_{2})\Omega+X_{1}X_{2}+\Delta=0, \tag{10}\]
with
\[X_{1}= (N_{1}-N_{2}+N_{4}-N_{5})\sin(Q), \tag{11}\] \[X_{2}= i(N_{1}-N_{2}-N_{4}+N_{5})\sin(Q),\] \[\Delta= i((N_{1}+N_{2}-N_{4}-N_{5})\cos(Q)+N_{3}-N_{6})((N_{1}+N_{2}+N_{ 4}+N_{5})\cos(Q)+N_{3}+N_{6}).\]
It is worth mentioning that the MI occurs when the frequency of the modulation is complex with a non-zero imaginary part. So, the corresponding MI growth rate takes the form of
\[Gain=|\Im(\Omega_{max})|. \tag{12}\]
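Given the coefficients \(X_{1}\), \(X_{2}\) and \(\Delta\) evaluated for a particular pair \((Q,k)\), the growth rate of Eq. (12) follows from the roots of the quadratic dispersion relation (10); a minimal numerical sketch is given below, with made-up coefficient values used for illustration only.

```python
import numpy as np

def mi_gain(X1, X2, Delta):
    """Solve Eq. (10), Omega^2 + i*(X1 - X2)*Omega + X1*X2 + Delta = 0, and
    return the MI growth rate |Im(Omega)| of Eq. (12). X1, X2 and Delta are
    assumed to be precomputed from the N_j coefficients for a given (Q, k)."""
    roots = np.roots([1.0, 1j * (X1 - X2), X1 * X2 + Delta])
    return np.max(np.abs(roots.imag))

# toy evaluation with illustrative (not physical) coefficients
print(mi_gain(X1=0.3, X2=-0.2, Delta=-0.5 + 0.1j))
```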
In what follows, we highlight the impact of the parameters of the cubic-quartic interaction potential on the MI. For this reason, the value of the cubic coupling is kept fixed along with that of \(G_{2}\). In Figure 1, we have depicted the variation of the MI growth rates under the effects of the quartic interaction potential. In Figure 1 a-b, we show the formation of the unstable zones (bright zones) for \(G_{4}=-0.01\). One can see that in panel (a), very slight stable zones emerge, while in panel (b), two side lobes appear with a large MI band. The highest amplitude of the plane waves is about 0.6. In Figure 1 c-d, we have increased the quartic parameter negatively to \(G_{4}=-0.1\). In contrast to panels (a-b), the unstable zones decrease so that the stable modes expand (see panel (c)), and additional bands of the MI emerge. We equally observe that the amplitude of the plane wave increases to 2.6 and three side lobes emerge in panel (d). It follows that when the quartic nonlinearity strength increases negatively, the amplitude of the perturbed plane wave and the stable modes increase together. In Figure 1 e-f, we increased the negative nonlinear interaction strength once more, to \(G_{4}=-0.5\). We observe the same scenario as in panels (c-d). The amplitude of the plane wave has increased strongly to reach 8.9 in panel (f), and the stable modes increase. We also notice that the amplitudes of the additional bands increase. Once more, we exhibit the same behavior in panels (g-h) for \(G_{4}=-1\). From this analysis, it emerges that increasing the quartic nonlinearity term negatively reduces the unstable modes and increases the amplitude of the perturbed plane wave, which is a good argument for examining the evolution of the MI growth rates numerically.
Following the same procedure as in Figure 1, we have depicted the unstable modes of the MI in Figure 2 for \(G_{4}=0.01\), 0.1, 0.5, 1, 1.5, and 2.5 in terms of \((Q,\,k)\). For \(G_{4}=0.01\) we show unstable zones in panel (a), indicated by the bright zones, and three symmetric side lobes in panel (b). When we increase the nonlinear term to \(0.1\), the unstable MI areas grow while additional bands emerge and shrink the MI bands along the \(k\)-axis (see panels (c) and (d), respectively). One can observe that the amplitude of the plane wave has remained identical. To confirm the analytical predictions reported in [11], namely that the quartic nonlinear term induces unstable modes, we have increased its value to \(0.5\). Six peaks of unstable modes emerge, with small unstable lobes in the middle and a large enough amplitude of the plane wave in panels (e-f). Besides, in panels (g) and (h), respectively, we have depicted the same behavior, but the peak amplitude of the plane wave has increased strongly enough to reach \(200\). Without doubt, positive values of the higher nonlinear term can generate instability in a chain of atoms and could probably induce the MW patterns during long periods of propagation.
Since it is obvious that the quartic nonlinear term induces unstable or stable modes depending on its sign, we next turn to the effects of the cubic nonlinear strength. As a result, we consider \(G_{4}=0\). In Figure 3 a-d, we have depicted the variation of the unstable MI for \(G_{3}=-0.1\) (panels (a-b)) and \(G_{3}=0.5\) (panels (c-d)). One can observe that for negative values of the cubic term, the unstable mode is manifested by a set of two symmetrical lobes. By increasing this value to \(G_{3}=0.5\), the MI's stable modes increase along with the plane wave's amplitude. Another important effect of the cubic nonlinear term is observed through the unstable and stable modes in the atomic structure. Our analytical investigation has confirmed the previous predictions made by Nguetcho and co-workers. Last but not least, we used the bottom panel of Figure 3 to demonstrate the manifestation of the MI growth rate under the effects of the dispersion term \(G_{2}\). For \(G_{2}=-0.1\) and \(G_{2}=0.1\), respectively, the maximum amplitude of the plane wave takes the same value. Meanwhile, for \(G_{2}=-0.5\) and \(G_{2}=0.5\), one can observe that the plane wave acquires a larger amplitude and the MI bands enlarge.
## 4 Numerical investigation
In the previous section, we underlined the interplay between dispersion and high-order nonlinear terms in the structure of the one-dimensional chain of atoms. We have shown that the cubic and quartic interactions can generate unstable or stable modes as well as wave patterns. In this section, we use the numerical integration of Eq. (5).
### Modulated waves patterns
In truth, the linear stability investigation can only say so much about the long-term propagation of the CWs. To address this concern, we use the numerical integration of Eq. (5). An initial condition of the form
\[\psi_{n}(t=0)=\psi_{0}\left(1+\xi\cos(Qn)\right)\cos(kn) \tag{13}\]
has been used with \(\psi_{0}=1\) and \(\xi=0.001\) to develop the MI growth rates. In what follows, attention is paid to the effects of the higher-order interaction coefficients on the development of the MI. The important aspect of this investigation is the fact that both cubic and quartic nonlinear terms of the nearest neighbor are used during the long period of propagation of the MWs. On the other hand, the novelty of the present study lies in the inclusion of the higher-order nonlinearity that is manifested by the \(G_{4}\) parameter. The effects of the \(G_{4}\) nonlinear term have been shown to influence the development of MI by increasing or reducing the unstable domains as well as the gain profiles [7]. Here, the effects of the higher nonlinear coupling coefficient are exhibited in Figure 4, where panel (a) shows the propagation of the trains of waves for \(G_{4}=0.001\). By increasing the value of the higher nonlinear strength to \(G_{4}=0.01\) in panel (b), one may observe the formation of a train of pulses with a different shape, despite the fact that the maximum amplitude of the plane wave remains constant as in panel (a). For more clarity, we have shown the same objects in panel (c) for a large enough value of the \(G_{4}\) parameter. Observing panel (d) closely, one notices a similarity with RWs, where a 2D train depicted against the background displays an Akhmediev breather despite its small amplitude. Our results appear to be in accordance with the analytical predictions, where the unstable MI is developed for positive values of the nonlinear term. Moreover, the long-time propagation has opened new features in the dynamics of a one-dimensional chain made of atoms, harmonically coupled to their nearest neighbors and subjected to an external on-site potential [7]. It is equally worth mentioning that in the continuum limit, Eq. (5) at \((k=0)\) and \((k=\pi)\) can turn into the nonlinear Schrodinger equation, which admits super RWs and Peregrine solitons. Besides, in Figure 5 we have shown, in accordance with our analytical investigation, that wave patterns are generated for negative values of the nonlinear parameter. Thus, for \(G_{4}=-0.01,-0.1,\)\(-0.5,\) and \(-0.75,\) we have depicted the RWs, which reinforce the argument that our structure can support Akhmediev breathers; these are related to the formation of the MI and have been revealed in different studies as tools for regulating structures where nonlinear and dispersion terms interplay [9].
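A schematic of the time integration behind these simulations is sketched below: Eq. (5) is stepped with a fourth-order Runge-Kutta scheme starting from the initial condition (13). The lattice size, time step, wave numbers, and the use of periodic boundaries are our assumptions, not the authors' exact numerical setup.

```python
import numpy as np

# Parameters (values follow the text where given; N, dt, k and Q are assumptions)
G2, G3, G4 = 0.01, 0.0, 0.1
alpha, beta, wg, w = 1.5, -1.0 / 6.0, 0.24, 0.42
k, Q, psi0, xi = 0.5, 0.2, 1.0, 0.001
N, dt, steps = 200, 0.01, 5000

# Effective cubic coefficient of Eq. (5)
NL = wg**2 * (3*beta - 4*alpha**2 + 2*wg**2*alpha**2 / (4*w**2 - wg**2))

def rhs(psi):
    """Right-hand side of Eq. (5) solved for d(psi_n)/dt, periodic boundaries."""
    p_p, p_m = np.roll(psi, -1), np.roll(psi, 1)      # psi_{n+1}, psi_{n-1}
    d_m = p_m * np.exp(-1j*k) - psi                   # psi_{n-1} e^{-ik} - psi_n
    d_p = p_p * np.exp(1j*k) - psi                    # psi_{n+1} e^{ik}  - psi_n
    rest = ((wg**2 - w**2) * psi
            - G2 * (p_p*np.exp(1j*k) - 2*psi + p_m*np.exp(-1j*k))
            + 3*G3 * np.conj(d_m) * d_m**2
            - 3*G4 * (np.conj(d_m)*d_m**2 + np.conj(d_p)*d_p**2)
            + NL * np.abs(psi)**2 * psi)
    return rest / (2j * w)                            # from -2i*w*psi_dot + rest = 0

n = np.arange(N)
psi = psi0 * (1 + xi*np.cos(Q*n)) * np.cos(k*n) * (1 + 0j)   # initial condition (13)
for _ in range(steps):                                        # classical RK4 stepping
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5*dt*k1)
    k3 = rhs(psi + 0.5*dt*k2)
    k4 = rhs(psi + dt*k3)
    psi = psi + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
print(np.max(np.abs(psi)**2))   # peak intensity |psi_n|^2 after the run
```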
However, considering now the cubic nonlinear term gives the possibility of generating a train of pulses comprising varied modes. In Figure 6, more precisely in panels (a,b), where we have fixed \(G_{3}=-0.01\) and \(-0.5\), one observes that the MWs adopt different behaviors. Following the same procedure as in panels (a,b), we point out in panels (c,d), for \(G_{3}=0.01\) and \(0.5\), that MW patterns emerge. It follows that the cubic coupling term can develop several modes when particle interactions take place in the structure. So, in the next section we focus on the MW bright soliton. Most of the studies carried out on the effects of the higher-order interaction have used only a continuum limit, whereas our outcomes have been obtained by using a DNLEE. The obtained results show the robustness of this mechanism, which reveals MWs with particular properties.
### Nonlinear supratransmission
In this section, we aim to submit the left and right sides of Eq. (5) to an external periodic force, which differs from the regular mechanism of supratransmission, where only one end of the structure is driven in the FG. To this end, we insert \(\psi_{n}\approx e^{i(kn-\omega t)}\) into Eq. (2), and the linear dispersion relation is \(\omega=\sqrt{4G_{2}\sin^{2}(\frac{k}{2})+\omega_{g}^{2}}\). The lower and upper frequencies are respectively \(\omega_{0}=\omega_{g}\) and \(\omega_{max}=\sqrt{4G_{2}+\omega_{g}^{2}}.\) At the center (\(k=0\)) and at the limit (\(k=\pi\)) of the first Brillouin zone, the DNLEE Eq. (5) reads respectively
\[\begin{split}-2i\omega\dot{\psi}_{n}&+(\omega_{g}^ {2}-\omega^{2})\psi_{n}-G_{2}(\psi_{n+1}-2\psi_{n}+\psi_{n-1})+3G_{3}\left| \psi_{n-1}-\psi_{n}\right|^{2}(\psi_{n-1}-\psi)\\ &-3G_{4}\left[\left|\psi_{n-1}-\psi_{n}\right|^{2}(\psi_{n-1}- \psi_{n})+\left|\psi_{n+1}-\psi_{n}\right|^{2}(\psi_{n+1}-\psi_{n})\right]\\ &\hskip 142.26378pt+\omega_{g}^{2}(3\beta-4\alpha^{2}+\frac{2 \omega_{g}^{2}\alpha^{2}}{4\omega^{2}-\omega_{g}^{2}})|\psi_{n}|^{2}\psi_{n}=0.\end{split} \tag{14}\]
and
\[\begin{split}-2i\omega\dot{\psi}_{n}&+(\omega_{g}^ {2}-\omega^{2})\psi_{n}+G_{2}(\psi_{n+1}+2\psi_{n}+\psi_{n-1})-3G_{3}\left| \psi_{n-1}+\psi_{n}\right|^{2}(\psi_{n-1}+\psi_{n})\\ &+3G_{4}\left[\left|\psi_{n-1}+\psi_{n}\right|^{2}(\psi_{n-1}+ \psi_{n})+\left|\psi_{n+1}+\psi_{n}\right|^{2}(\psi_{n+1}+\psi_{n})\right]\\ &\hskip 142.26378pt+\omega_{g}^{2}(3\beta-4\alpha^{2}+\frac{2 \omega_{g}^{2}\alpha^{2}}{4\omega^{2}-\omega_{g}^{2}})|\psi_{n}|^{2}\psi_{n}=0.\end{split} \tag{15}\]
In the continuum limit the equations read at the lower FG
\[-2i\omega\dot{\psi}+(\omega_{g}^{2}-\omega^{2})\psi-G_{2}\frac{\partial^{2} \psi}{\partial x^{2}}+\omega_{g}^{2}(3\beta-4\alpha^{2}+\frac{2\omega_{g}^{2} \alpha^{2}}{4\omega^{2}-\omega_{g}^{2}})|\psi|^{2}\psi=0. \tag{16}\]
and the upper FG:
\[-2i\omega\dot{\psi}+(\omega_{g}^{2}-\omega^{2}+4G_{2})\psi+G_{2}\frac{ \partial^{2}\psi}{\partial x^{2}}+\left[-24G_{3}+48G_{4}+\omega_{g}^{2}\left( 3\beta-4\alpha^{2}+\frac{2\omega_{g}^{2}\alpha^{2}}{4\omega^{2}-\omega_{g}^{2 }}\right)\right]|\psi|^{2}\psi=0. \tag{17}\]
The static breather solutions of Eqs. (16) and (17) that synchronize and adjust to the driving in the end are respectively:
\[\begin{split}&\psi_{1}(x,t)=A_{1}e^{-i(\omega-\omega_{0})t} \operatorname{sech}\left(\sqrt{\frac{\omega^{2}+2\omega\left(\omega-\omega_{0} \right)-\omega_{g}{}^{2}}{G_{2}}}(x-x_{0})\right),\\ & A_{1}=\sqrt{-\frac{8\,\omega^{4}+16\omega^{3}\left(\omega-\omega _{0}\right)-10\omega^{2}\omega_{g}^{2}-4\,\omega\omega_{g}^{2}\left(\omega- \omega_{0}\right)+2\,\omega_{g}^{4}}{\omega_{g}^{2}\left(\frac{82\omega^{2}}{9} -\frac{19\omega_{g}^{2}}{6}\right)}},\end{split} \tag{18}\]
\[\psi_{2}(x,t)=A_{2}e^{-i\left(\omega-\omega_{max}\right)t}\operatorname{sech} \left(\sqrt{\frac{\omega^{2}+2\,\omega\,\left(\omega-\omega_{max}\right)-\omega _{g}^{2}-4G_{2}}{G_{2}}}(x-x_{0})\right), \tag{19}\]
\[A_{2}=\sqrt{-\frac{8\,\omega^{4}+16\,\omega^{3}\left(\omega-\omega_{max} \right)-10\,\omega^{2}\omega_{g}^{2}-4\,\omega\,\omega_{g}^{2}\left(\omega- \omega_{max}\right)+2\,\omega_{g}^{4}-32\,\omega^{2}G_{2}+8\,G_{2}\omega_{g}^ {2}}{16\,\omega^{2}\alpha^{2}\omega_{g}^{2}-6\,\alpha^{2}\omega_{g}^{4}-12\, \omega^{2}\beta\omega_{g}^{2}+3\beta\omega_{g}^{4}+96\,\omega^{2}G_{3}-192\, \omega^{2}G_{4}-24G_{3}\omega_{g}^{2}+48G_{4}\omega_{g}^{2}}}.\]
From there we derive the threshold boundary of the supratransmission in the lower and upper FGs respectively
\[A_{th_{1}}=2\sqrt{-\frac{8\,\omega^{4}+16\omega^{3}\left(\omega-\omega_{0} \right)-10\omega^{2}\omega_{g}^{2}-4\,\omega\omega_{g}^{2}\left(\omega-\omega _{0}\right)+2\,\omega_{g}^{4}}{\omega_{g}^{2}\left(\frac{82\omega^{2}}{9}- \frac{19\omega_{g}^{2}}{6}\right)}}, \tag{20}\]
and
\[A_{th_{2}}=2\sqrt{-\frac{8\,\omega^{4}+16\,\omega^{3}\left(\omega-\omega_{max}\right)-10\,\omega^{2}\omega_{g}^{2}-4\,\omega\,\omega_{g}^{2}\left(\omega-\omega_{max}\right)+2\,\omega_{g}^{4}-32\,\omega^{2}G_{2}+8\,G_{2}\omega_{g}^{2}}{16\,\omega^{2}\alpha^{2}\omega_{g}^{2}-6\,\alpha^{2}\omega_{g}^{4}-12\,\omega^{2}\beta\omega_{g}^{2}+3\,\beta\omega_{g}^{4}+96\,\omega^{2}G_{3}-192\,\omega^{2}G_{4}-24\,G_{3}\omega_{g}^{2}+48\,G_{4}\omega_{g}^{2}}}. \tag{21}\]
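The thresholds (20) and (21) are explicit functions of the driving frequency and can be evaluated directly; the parameter values in the sketch below are illustrative assumptions and are not meant to reproduce the exact figures quoted later in this section.

```python
import numpy as np

# Illustrative parameter values (assumptions, chosen close to those of the figures)
G2, G3, G4 = 0.01, 0.0, 0.1
alpha, beta, wg = 1.5, -1.0/6.0, 0.24
w0, wmax = wg, np.sqrt(4*G2 + wg**2)      # lower and upper band-edge frequencies

def A_th1(w):
    """Lower-gap supratransmission threshold, Eq. (20)."""
    num = (8*w**4 + 16*w**3*(w - w0) - 10*w**2*wg**2
           - 4*w*wg**2*(w - w0) + 2*wg**4)
    den = wg**2 * (82*w**2/9 - 19*wg**2/6)
    return 2*np.sqrt(-num/den)

def A_th2(w):
    """Upper-gap supratransmission threshold, Eq. (21)."""
    num = (8*w**4 + 16*w**3*(w - wmax) - 10*w**2*wg**2
           - 4*w*wg**2*(w - wmax) + 2*wg**4 - 32*w**2*G2 + 8*G2*wg**2)
    den = (16*w**2*alpha**2*wg**2 - 6*alpha**2*wg**4 - 12*w**2*beta*wg**2
           + 3*beta*wg**4 + 96*w**2*G3 - 192*w**2*G4
           - 24*G3*wg**2 + 48*G4*wg**2)
    return 2*np.sqrt(-num/den)

print(A_th1(0.2), A_th2(1.0))   # driving below the lower gap / above the upper gap
```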
For numerical simulation, we assume the boundary condition in the form of
\[\psi_{0}=A_{d}\cos(\omega t), \tag{22}\]
to drive Eq. (14). \(A_{d}\) is the DA and \(\omega\) the driving frequency (DF) with \(0<\omega<\omega_{0}\). In Figure 7 a, we show the propagation of the local energy for a DF belonging to the lower FG, \(\omega=0.25\), and a DA of \(A_{d_{1}}=1.8\). For the specific cell indices \(n=20\) and \(n=100\), panels \((\mathrm{b},\mathrm{c})\) show the spatiotemporal evolution of the train of traveling waves for the left boundary. Increasing the DA to \(1.85\), one can observe the propagation of the excited localized modes in the structure in panel (d). From the above, it emerges that the Frenkel-Kontorova model with cubic-quartic nonlinear on-site potential is open to the nonlinear supratransmission phenomenon in the lower FG. We notice that a train of traveling waves occurs for a DA above the supratransmission threshold in the propagation-time range \(t\in[200,500]\). On the other hand, the energy goes to zero in the time range \(t\in[0,200]\), producing a transition phase in the structure.
Concerning the upper FG, we use the numerical integration of Eq. (15). In Figure 8 a-b, we have depicted the propagation of the localized waves in the upper FG for the DF \(\omega=1\). For a DA \(A_{d_{2}}=0.8>A_{th_{2}}=0.78\), in panel (a) we show the evolution of the boundary driving of the coupled atoms with the higher-order nonlinear term. In the bottom panel (c), the localized bright soliton is obtained in the propagation-time range \(t\in[200,400]\). However, when we increase the DA value to \(0.85\), one can observe that the energy flows into the upper FG in panel (b). For a specific range of propagation time, we show that the bright soliton turns into chaos-like motion in the structure in panel (d). This behavior corroborates our analytical prediction on the MI growth rates when a strong value of the higher-order nonlinear term is used. On the other hand, the propagation of the localized modes occurred in the upper FG despite the fact that the DA is considered below the supratransmission threshold of the lower FG.
Figure 1: Illustration of the MI growth rate with the variation of the quartic interaction potential. (a-b) \(G_{4}=-0.01\), (c-d) \(G_{4}=-0.1\), (e-f) \(G_{4}=-0.5\), and (g-h) \(G_{4}=-1\). The parameters used are \(G_{2}=0.01,\,G_{3}=0,\,\alpha=1.5,\,\beta=-\frac{1}{6},\,\omega_{g}=0.24,\, \omega=0.42\), and \(F_{0}=1\).
Figure 2: Variation of the MI growth rate under the positive value of the quartic nonlinearity strength. (a-b) \(G_{4}=0.01\), (c-d) \(G_{4}=0.1\), (e) \(G_{4}=0.5\), (f) \(G_{4}=1\), (g) \(G_{4}=1.5\), and (h) \(G_{4}=2.5\). The parameters used are \(G_{2}=0.01\), \(G_{3}=0\), \(\alpha=1.5\), \(\beta=-\frac{1}{6}\), \(\omega_{g}=0.24\), \(\omega=0.42\), and \(F_{0}=1\).
Figure 3: Top panel (a-d): MI growth rate with the variation of the cubic nonlinearity interaction, (a-b) \(G_{3}=-0.1\) and (c-d) \(G_{3}=0.5\). Bottom panel: illustration of the MI growth rate with the variation of the dispersion term. The parameters used are respectively \(G_{4}=0\), \(G_{3}=0.01\), \(\alpha=1.5\), \(\beta=-\frac{1}{6}\), \(\omega_{g}=0.24\), \(\omega=0.42\), and \(F_{0}=1\).
Figure 4: Numerical simulation of the intensity \(|\psi_{n}|^{2}\) with the variation of the quartic interaction coupling (\(G_{4}\)). (a) \(G_{4}=0.001\), (b) \(G_{4}=0.01\), (c) \(G_{4}=0.1\), and (d) \(G_{4}=0.5\). The parameters used are respectively \(G_{2}=0.01\), \(G_{3}=0\), \(\alpha=1.5\), \(\beta=-\frac{1}{6}\), and \(\omega_{g}=0.24\).
Figure 6: Numerical simulation of the intensity \(|\psi_{n}|^{2}\) with the variation of the cubic interaction coupling (\(G_{3}\)). (a) \(G_{3}=-0.01\), (b) \(G_{3}=-0.5\), (c) \(G_{3}=0.01\), and (d) \(G_{3}=0.5\). The parameters used are respectively \(G_{2}=0.01\), \(G_{4}=0\), \(\alpha=1.5\), \(\beta=-\frac{1}{6}\), and \(\omega_{g}=0.24\).
## 5 Conclusion
In this study, we investigated the variation of the modulation instability and the behavior of waves propagating in the forbidden bandgap. We used a one-dimensional chain of atoms harmonically coupled to their nearest neighbors. A standard multi-scale method was used to derive the discrete nonlinear evolution equation. From the linear stability analysis, the modulation instability gain is obtained, and the impact of the cubic-quartic nonlinearity on the modulation instability leads to unstable zones as well as modulated wave patterns for certain values of the higher nonlinear term. Numerical simulation of the derived discrete nonlinear evolution equation gives birth to rogue waves and diverse types of modulated waves. We derived static breather solutions that synchronize and adjust to the driving at the center and at the limit of the first Brillouin zone. Thereafter, we subjected one end of the discrete model to an external periodic drive. The generation of modulated waves and bright solitons is observed for driving amplitudes above the supratransmission threshold. When the driving amplitude is increased sufficiently, the bright soliton turns into chaos-like motion in the transient range of propagation time. These results shed light on the fact that, at higher orders of complexity, the modified Frenkel-Kontorova model with cubic-quartic nonlinear coupling coefficients could be used to generate rogue waves, long-lived modulated wave patterns, and chaos-like motions that are very useful for data codification.
## Appendix
\(\Lambda_{1}=A_{2}a_{1}b_{1}+2\,{A_{5}{F_{0}}^{2}}a_{1}b_{1}\left(a_{1}b_{1}-1 \right)\left(a_{-1}b_{-1}-1\right),\)
\(\Lambda_{2}=A_{2}a_{-1}b_{-1}+2\,{A_{5}a_{-1}b_{-1}{F_{0}}^{2}}\left(a_{1}b_{1 }-1\right)\left(a_{-1}b_{-1}-1\right)\left(A_{7}+1\right),\)
\(\Lambda_{3}=\left(-a_{-1}b_{-1}-a_{1}b_{1}\right){A_{2}}-2\left(a_{1}b_{1}-1 \right)\left(a_{-1}b_{-1}-1\right)A_{5}{A_{7}}{F_{0}}^{2}-\left(a_{1}b_{1}-1 \right)\left(a_{-1}b_{-1}-1\right)\left(a_{-1}b_{-1}+a_{1}b_{1}+2\right){A_{5 }{F_{0}}^{2}}-{A_{7}}{F_{0}}^{2}\left(a_{1}b_{1}-1\right)\left(a_{-1}b_{-1}-1 \right)^{2}+{A_{3}}{F_{0}}^{2},\)
\(\Lambda_{4}=A_{5}a_{-1}b_{-1}{F_{0}}^{2}\left(a_{1}b_{1}-1\right)^{2},\)
\(\Lambda_{5}=A_{5}{F_{0}}^{2}a_{1}b_{1}\left(a_{-1}b_{-1}-1\right)^{2}\left(A_{7 }+1\right),\)
\(\Lambda_{6}=A_{3}F_{0}^{2}+A_{5}\left(A_{7}\left(-F_{0}^{2}a_{-1}^{2}b_{-1}^{2}+2\,F_{0}^{2}a_{-1}b_{-1}-F_{0}^{2}\right)-F_{0}^{2}\left(a_{-1}^{2}b_{-1}^{2}+a_{1}^{2}b_{1}^{2}-2\,a_{-1}b_{-1}-2\,a_{1}b_{1}+2\right)\right),\)
\(N_{1}=A_{2}a_{1}b_{1}+2\,A_{5}{F_{0}}^{\ 2}a_{1}b_{1}\left(a_{1}b_{1}-1\right) \left(a_{-1}b_{-1}-1\right),\)
\(N_{2}=A_{2}a_{-1}b_{-1}+2\,A_{5}a_{-1}b_{-1}{F_{0}}^{\ 2}\left(a_{1}b_{1}-1 \right)\left(a_{-1}b_{-1}-1\right)\left(A_{7}+1\right).\)
\(N_{3}=\left(-a_{-1}b_{-1}-a_{1}b_{1}\right)A_{2}-2\left(a_{1}b_{1}-1\right) \left(a_{-1}b_{-1}-1\right)A_{5}A_{7}{F_{0}}^{\ 2}-\left(a_{1}b_{1}-1\right) \left(a_{-1}b_{-1}-1\right)\left(a_{-1}b_{-1}+a_{1}b_{1}+2\right)A_{5}{F_{0}} ^{\ 2}-\left(a_{-1}b_{-1}-1\right)^{2}\left(a_{1}b_{1}-1\right)A_{7}{F_{0}}^{\ 2}+A_{3}{F_{0}}^{\ 2},\)
\(N_{4}=A_{5}{F_{0}}^{\ 2}a_{-1}b_{-1}\left(a_{1}b_{1}-1\right)^{2},\)
\(N_{5}=A_{5}{F_{0}}^{\ 2}a_{1}b_{1}\left(a_{-1}b_{-1}-1\right)^{2}(A_{7}+1),\)
\(N_{6}=A_{3}F_{0}^{2}+A_{5}\left(A_{7}\left(-F_{0}^{2}a_{-1}^{2}b_{-1}^{2}+2\,F_{0}^{2}a_{-1}b_{-1}-F_{0}^{2}\right)-F_{0}^{2}\left(a_{-1}^{2}b_{-1}^{2}+a_{1}^{2}b_{1}^{2}-2\,a_{-1}b_{-1}-2\,a_{1}b_{1}+2\right)\right),\)
\(A_{1}=\frac{\omega^{2}-\omega_{s}^{2}}{2\omega}\), \(A_{2}=\frac{C_{2}}{2\omega}\), \(A_{3}=-\frac{1}{2}\frac{\omega^{2}\left(3\beta-4\,\sigma^{2}+\frac{3\omega_{s }^{2}-\omega_{s}^{2}}{2\omega-\omega_{s}^{2}}\right)}{\omega}\), \(A_{5}=\frac{3}{2}\frac{C_{4}}{\omega}\), \(A_{7}=-\frac{3}{2}\frac{C_{3}}{\omega}\),
\(a_{1}=\cos(k)+i\sin(k);\ a_{-1}=\cos(k)-i\sin(k),\)
\(b_{1}=\cos(Q)+i\sin(Q);\ b_{-1}=\cos(Q)-i\sin(Q).\)
|
2307.13571 | PT$\mathrm{L}^{p}$: Partial Transport $\mathrm{L}^{p}$ Distances | Optimal transport and its related problems, including optimal partial
transport, have proven to be valuable tools in machine learning for computing
meaningful distances between probability or positive measures. This success has
led to a growing interest in defining transport-based distances that allow for
comparing signed measures and, more generally, multi-channeled signals.
Transport $\mathrm{L}^{p}$ distances are notable extensions of the optimal
transport framework to signed and possibly multi-channeled signals. In this
paper, we introduce partial transport $\mathrm{L}^{p}$ distances as a new
family of metrics for comparing generic signals, benefiting from the robustness
of partial transport distances. We provide theoretical background such as the
existence of optimal plans and the behavior of the distance in various limits.
Furthermore, we introduce the sliced variation of these distances, which allows
for rapid comparison of generic signals. Finally, we demonstrate the
application of the proposed distances in signal class separability and nearest
neighbor classification. | Xinran Liu, Yikun Bai, Huy Tran, Zhanqi Zhu, Matthew Thorpe, Soheil Kolouri | 2023-07-25T15:23:15Z | http://arxiv.org/abs/2307.13571v1 | # PT\(\mathrm{L}^{p}\): Partial Transport \(\mathrm{L}^{p}\) Distances
###### Abstract
Optimal transport and its related problems, including optimal partial transport, have proven to be valuable tools in machine learning for computing meaningful distances between probability or positive measures. This success has led to a growing interest in defining transport-based distances that allow for comparing signed measures and, more generally, multi-channeled signals. Transport \(\mathrm{L}^{p}\) distances are notable extensions of the optimal transport framework to signed and possibly multi-channeled signals. In this paper, we introduce partial transport \(\mathrm{L}^{p}\) distances as a new family of metrics for comparing generic signals, benefiting from the robustness of partial transport distances. We provide theoretical background such as the existence of optimal plans and the behavior of the distance in various limits. Furthermore, we introduce the sliced variation of these distances, which allows for rapid comparison of generic signals. Finally, we demonstrate the application of the proposed distances in signal class separability and nearest neighbor classification.
## 1 Introduction
At the heart of Machine Learning (ML) lies the ability to measure similarities or differences between signals existing in different domains, such as temporal, spatial, spatiotemporal grids, or even graphs in a broader sense. The effectiveness of any ML model depends significantly on the discriminatory power of the metrics it employs. Several criteria are desired when quantifying dissimilarities among diverse multivariate signals, including: 1) the ability to compare signals with varying lengths, 2) adherence to the inherent structure and geometry of the signals' domain, 3) being invariant to local deformation and symmetries, 4) computational efficiency, and 5) differentiability. In recent literature, significant efforts have been dedicated to addressing these challenges. Prominent examples include
the Dynamic Time Warping (DTW) [1] technique and its numerous extensions [2, 3, 4, 5, 6], as well as more recent methods based on optimal transport principles [7, 8, 9, 10].
**Dynamic Time Warping (DTW).** DTW is a technique for comparing and aligning time series signals that may vary in lengths or exhibit temporal distortions. To compare two signals, DTW computes the minimal-cost alignment between the signals [1], enforcing the chronological order. The alignment problem in DTW is solved via dynamic programming (DP) using Bellman's recursion, with quadratic cost in lengths of the signals. A large body of work has studied extensions of the DTW approach. For instance, Ten Holt et al. [3] extend DTW to multivariate time series. Salvador and Chan [4] propose FastDTW, a linear time approximation of DTW with reasonable accuracy. To achieve robustness, Keogh and Pazzani [2] propose derivative DTW (DDTW), calculating the minimum-cost alignment based on derivatives of input signals, while Jeong et al. [5] consider the relative importance of alignments and propose weighted DTW (WDTW) providing robustness against outliers. Other notable extensions include Canonical Time Warping [11] and generalized time warping [12], which enable the application of DTW to multi-modal sequences whose instances may have different dimensions. More recently, Cuturi & Blondel [6] provide a differentiable variant of DTW, softDTW, allowing its seamless integration into end-to-end learning pipelines.
**Optimal Transport.** Optimal transport (OT) has gained recognition as a powerful tool for quantifying dissimilarities between probability measures, finding broad applications in data science, statistics, machine learning, signal processing, and computer vision [13, 14]. The dissimilarity metrics derived from OT theory define a robust geometric framework for comparing probability measures, exhibiting desirable properties such as a weak Riemannian structure [15], the concept of barycenters [16], and parameterized geodesics [17]. However, it is important to note that OT has limitations when it comes to comparing general multi-channel signals. OT is specifically applicable to non-negative measures with equal total mass, restricting its use to signals that meet specific criteria: 1) single-channel representation, 2) non-negativity, and 3) integration to a common constant, such as unity for probability measures. In cases where signals do not fulfill these criteria, normalization or alternative methods are required for meaningful comparison using OT.
**Unbalanced and Optimal Partial Transport.** Comparing non-negative measures with varying total amounts of mass is a common requirement in physical-world applications. In such scenarios, it is necessary to find partial correspondences or overlaps between two non-negative measures and compare them based on their respective corresponding and non-corresponding parts. Recent research has thus focused on extensions of the OT problem that enable the comparison of non-negative measures with unequal mass. The Hellinger-Kantorovich distance [18, 19], optimal partial transport (OPT) problem [20, 21, 22], Kantorovich-Rubinstein norm [23, 24] and unnormalized optimal transport [25, 26] are some of the variants that fall under the category of "unbalanced optimal transport" [18, 19]. These methods provide effective solutions for comparing non-negative measures in scenarios where the total amount of mass varies. It is important to note that although the unbalanced optimal transport methods have advanced the capabilities of comparing non-negative measures with unequal mass, they still cannot be used to compare multi-channel signals or signals with negative values.
**Transport-Based Comparison of Generic Signals.** Recent studies have proposed extensions of the Optimal Transport (OT) framework to compare multi-channel signals that may include negative values, while still harnessing the benefits of OT. For example, Su & Hua [8] introduced the Order-preserving Wasserstein distance, which computes the OT problem between elements of sequences while ensuring temporal consistency through regularization of the transportation plan. A more rigorous treatment of the problem was proposed in [7] that led to the so-called Transportation \(\mathrm{L}^{p}\) (\(\mathrm{TL}^{p}\)) distances. In short, to compare two signals \(f\) and \(g\), \(\mathrm{TL}^{p}\) uses the OT distance between their corresponding measures, e.g., the Lebesgue measure, raised onto the graphs of the signals (See Section 3). Later, Zhang et al. [10] utilized a similar approach to \(\mathrm{TL}^{p}\) while adding entropy regularization [27] and introduced Time Adaptive OT (TAOT). Lastly, in Spatio-Temporal Alignments, Janati et al. [9] combine OT with softDTW. They utilized regularized OT to capture spatial differences between time samples and employed softDTW for temporal alignment costs.
**Contributions.** In this paper, we tackle the problem of comparing multi-channel signals using transport-based methods and present a new family of metrics, denoted as \(\mathrm{P}\mathrm{TL}^{p}\), based on the optimal partial transport framework. Our approach is motivated by the realization that while \(\mathrm{TL}^{p}\) distances allow for the comparison of general signals, they require complete correspondences between input
signals, which limits their applicability to real-world signals that often exhibit partial correspondences. Our specific contributions are: 1) introducing a new family of metrics based on optimal partial transport for comparing multi-channel signals, 2) providing theoretical results on existence of the partial transport plan in the proposed metric, as well as the behavior of the distance in various limits, 3) providing the sliced variation of the proposed metric with significant computational benefits, and 4) demonstrating the robust performance of the proposed metric on nearest neighbor classification in comparison with various recent baselines.
**General Notations.** We provide an extensive list of our notations in the supplementary material. Here we provide a small subset used in the development of our proposed framework. We use \(\mathbb{R}_{+}\) for the set of positive real numbers, \(\mathbb{R}^{d}\) to denote the d-dimensional Euclidean space, and \(\mathbb{S}^{d-1}\subset\mathbb{R}^{d}\) to denote the unit hyper-sphere. Given \(\Omega\subseteq\mathbb{R}^{d},p\geq 1\), we use \(\mathcal{P}(\Omega)\) to denote the set of Borel probability measures and \(\mathcal{P}_{p}(\Omega)\) to denote the set of probability measures with finite \(p\)'th moment defined on a metric space \((\Omega,d)\). We use \(\mathcal{M}_{+}(\Omega)\) to denote the set of all positive Radon measures defined on \(\Omega\). For \(\mu\in\mathcal{P}_{p}(\Omega)\), we define \(\mathrm{L}^{p}(\mu;\mathbb{R}^{k}):=\{f:\Omega\to\mathbb{R}^{k}\mid\int_{\Omega}\|f(x)\|^{p}\,\mathrm{d}\mu(x)<\infty\}\) to denote a Banach space with the usual norm. For \(f:\Omega\to\hat{\Omega}\) and measure \(\mu\) in \(\mathcal{M}_{+}(\Omega)\) we use \(f_{\#}\mu\) to denote the pushforward of measure \(\mu\) through \(f\), which is formally defined as \(f_{\#}\mu(A)=\mu(f^{-1}(A))\) for all \(A\subseteq\hat{\Omega}\).
## 2 Background - Optimal (Partial) Transport and Their Sliced Variations
**Optimal Transport.** The OT problem in the Kantorovich formulation [28] is defined for two probability measures \(\mu\) and \(\nu\) in \(\mathcal{P}(\Omega)\), and a lower semi-continuous cost function \(c:\Omega^{2}\to\mathbb{R}_{+}\) by:
\[\mathrm{OT}_{c}(\mu,\nu):=\inf_{\gamma\in\Pi(\mu,\nu)}\int_{\Omega^{2}}c(x,y) \,\mathrm{d}\gamma(x,y), \tag{1}\]
Here, \(\Pi(\mu,\nu)\) is the set of all joint probability measures whose marginals are \(\mu\) and \(\nu\). We represent this by \(\pi_{1\#}\gamma=\mu\) and \(\pi_{2\#}\gamma=\nu\), where \(\pi_{1}\) and \(\pi_{2}\) are the canonical projection maps. If \(c(x,y)\) is the \(p\)-th power of a metric, then the \(p\)-th root of the resulting optimal value is known as the \(p\)-Wasserstein distance. This distance is a metric on \(\mathcal{P}_{p}(\Omega)\). We omit the subscript \(c\) when the cost is the default \(\|\cdot\|^{p}\). Please see the appendix for more details.
**Optimal Partial Transport.** The problem of Optimal Partial Transport (OPT) extends the concept of mass transportation to include mass destruction at the source and mass creation at the target, with corresponding penalties for such actions. More precisely, let \(\mu,\nu\in\mathcal{M}_{+}(\Omega)\), where \(\mathcal{M}_{+}(\Omega)\) is set of positive Radon measures defined on \(\Omega\). Let \(\lambda\geq 0\) denote the penalty for mass creation or destruction. Then the OPT problem is defined as:
\[\mathrm{OPT}_{\lambda,c}(\mu,\nu):=\inf_{\gamma\in\Pi_{\leq}(\mu,\nu)}\int_{ \Omega^{2}}c(x,y)\,\mathrm{d}\gamma(x,y)+\lambda(\|\mu\|_{\mathrm{TV}}+\|\nu \|_{\mathrm{TV}}-2\|\gamma\|_{\mathrm{TV}}) \tag{2}\]
where
\[\Pi_{\leq}(\mu,\nu):=\{\gamma\in\mathcal{M}_{+}(\Omega^{2}):\pi_{1\#}\gamma \leq\mu,\pi_{2\#}\gamma\leq\nu\},\]
\(\pi_{1\#}\gamma\leq\mu\) indicates that \(\pi_{1\#}\gamma\) is _dominated by_\(\mu\), i.e., for any Borel set \(A\subseteq\Omega\), \(\pi_{1\#}\gamma(A)\leq\mu(A)\), analogously for \(\pi_{2\#}\gamma\leq\nu\). The cost function \(c:\Omega^{2}\to\mathbb{R}\) is lower semi-continuous (generally, it is nonnegative), and \(\|\mu\|_{\mathrm{TV}}\) is the total variation (and the total mass) of \(\mu\), analogously for \(\|\nu\|_{\mathrm{TV}},\|\gamma\|_{\mathrm{TV}}\). When the transportation cost \(c(x,y)\) is a metric, \(\mathrm{OPT}_{\lambda,c}(\cdot,\cdot)\) defines a metric on \(\mathcal{M}_{+}(\Omega)\) (see [29, Proposition 2.10], [30, Proposition 5], [26, Section 2.1] and [31, Theorem 4]). For simplicity of notation, we drop the \(c\) in the subscript of \(\mathrm{OT}\) and \(\mathrm{OPT}\).
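For concreteness, the discrete form of Eq. (2) can be solved directly as a small linear program. The sketch below is only a reference implementation, not an efficient solver; it assumes one-dimensional supports, the squared-distance cost, and illustrative masses and penalty \(\lambda\), and enforces the inequality marginal constraints of \(\Pi_{\leq}(\mu,\nu)\) with SciPy's LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_opt(x, y, a, b, lam, p=2):
    """Optimal partial transport (Eq. 2) between sum_i a_i delta_{x_i} and
    sum_j b_j delta_{y_j} with cost |x - y|^p and mass penalty lam."""
    M, N = len(x), len(y)
    C = np.abs(x[:, None] - y[None, :]) ** p            # transport cost c(x_i, y_j)
    # Objective: sum_ij (C_ij - 2*lam) gamma_ij, plus the constant lam*(|mu| + |nu|).
    c_vec = (C - 2.0 * lam).ravel()
    # Inequality marginals: row sums <= a_i, column sums <= b_j, gamma >= 0.
    A_rows = np.kron(np.eye(M), np.ones((1, N)))         # (M, M*N) row-sum operator
    A_cols = np.kron(np.ones((1, M)), np.eye(N))         # (N, M*N) column-sum operator
    A_ub = np.vstack([A_rows, A_cols])
    b_ub = np.concatenate([a, b])
    res = linprog(c_vec, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    gamma = res.x.reshape(M, N)
    return res.fun + lam * (a.sum() + b.sum()), gamma

rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1.0, 6), rng.normal(2.0, 1.0, 8)  # supports of mu and nu
a, b = np.full(6, 1.0), np.full(8, 1.0)                  # unit masses (unequal totals)
cost, plan = discrete_opt(x, y, a, b, lam=1.0)
print(cost, plan.sum())                                  # total cost and transported mass
```

Pairs whose transport cost exceeds \(2\lambda\) are left untransported at optimality, which is the defining behavior of the partial problem.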
**Sliced Transport.** For one-dimensional measures, i.e., when \(\Omega\subseteq\mathbb{R}\), both OT and OPT problems have efficient solvers. In particular, the OT problem has a closed-form solution, and for discrete measures with \(M\) and \(N\geq M\) particles, it can be solved in \(\mathcal{O}(N\log(N))\). Moreover, a quadratic algorithm, \(\mathcal{O}(MN)\), was recently proposed in [32] for the one-dimensional OPT problem. To extend the computational benefits of one-dimensional OT and OPT problems to d-dimensional measures, recent works utilize the idea of slicing, which is rooted in the Cramer-Wold theorem [33] and the Radon Transform from the integral geometry [34, 35]. For \(\theta\in\mathbb{S}^{d-1}\), a one-dimensional slice of measure \(\mu\in\mathcal{M}_{+}(\Omega)\) can be obtained via \(\langle\theta,\cdot\rangle_{\#}\mu\) where \(\langle\cdot,\cdot\rangle:\Omega^{2}\to\mathbb{R}\) denotes the inner product.
Then for \(\mu,\nu\in\mathcal{P}_{p}(\Omega)\) we can define the Sliced-OT (SOT) as:
\[\mathrm{SOT}(\mu,\nu):=\int_{\mathbb{S}^{d-1}}\mathrm{OT}(\langle\theta,\cdot \rangle_{\#}\mu,\langle\theta,\cdot\rangle_{\#}\nu)\,\mathrm{d}\sigma(\theta), \tag{3}\]
where \(\sigma\in\mathcal{P}(\mathbb{S}^{d-1})\) is a probability measure such that \(\mathrm{supp}(\sigma)=\mathbb{S}^{d-1}\), e.g., the uniform distribution on the unit hyper-sphere. Similarly, for \(\mu,\nu\in\mathcal{M}_{+}(\Omega)\), Sliced-OPT (SOPT) [32] can be defined as:
\[\mathrm{SOPT}_{\lambda}(\mu,\nu):=\int_{\mathbb{S}^{d-1}}\mathrm{OPT}_{ \lambda(\theta)}(\langle\theta,\cdot\rangle_{\#}\mu,\langle\theta,\cdot\rangle_ {\#}\nu)\,\mathrm{d}\sigma(\theta), \tag{4}\]
where \(\lambda\in\mathrm{L}^{1}(\sigma;\mathbb{R}_{+})\) is generally a projection dependent function.
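As an illustration of Eq. (3), the following minimal sketch estimates the sliced OT distance between two empirical measures with the same number of uniformly weighted samples; in that case each one-dimensional problem reduces to sorting the projections. The sample clouds, number of slices, and \(p=2\) are illustrative choices.

```python
import numpy as np

def sliced_ot(X, Y, p=2, n_slices=200, seed=0):
    """Monte Carlo estimate of Eq. (3) for two empirical measures with equal
    numbers of uniformly weighted samples X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    total = 0.0
    for _ in range(n_slices):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)                    # uniform direction on S^{d-1}
        xp, yp = np.sort(X @ theta), np.sort(Y @ theta)   # 1D projections of each measure
        total += np.mean(np.abs(xp - yp) ** p)            # closed-form 1D OT cost
    return total / n_slices

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 3))
Y = rng.normal(0.5, 1.0, size=(500, 3))
print(sliced_ot(X, Y))
```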
## 3 Partial Transport for Multi-Channel Signals
In the previous section, we discussed the suitability of OT and OPT problems (and similarly, SOT and SOPT problems) for comparing measures \(\mu\) and \(\nu\) in \(\mathcal{P}_{p}(\Omega)\) or \(\mathcal{M}_{+}(\Omega)\), respectively. In this section, we begin by defining a transport-based distance for multi-channel signals defined on a general class of measures, following the work of Thorpe et al. [7] on Transport \(\mathrm{L}^{p}\) distances. We then motivate the need for partial transportation when comparing such multi-channel signals and introduce our Partial-Transport \(\mathrm{L}^{p}\), \(\mathrm{PTL}^{p}\), distance.
**Transport \(\mathrm{L}^{p}\) Distances.** Following [7], a multi-channel signal with \(k\) channels can be defined as the pair \((f,\mu)\) for \(\mu\in\mathcal{P}_{p}(\Omega)\) and \(f\in\mathrm{L}^{p}(\mu;\mathbb{R}^{k})\). We denote the set of all such signals as
\[\mathcal{Q}_{p}(\Omega;\mathbb{R}^{k}):=\{(f,\mu)|\mu\in\mathcal{P}_{p}( \Omega),f\in\mathrm{L}^{p}(\mu;\mathbb{R}^{k})\}.\]
We refer to this set as the transport \(\mathrm{L}^{p}\) space. The \(\mathrm{TL}^{p}_{\beta}\) distance between two such \(k\)-channel signals \((f,\mu)\) and \((g,\nu)\) in \(\mathcal{Q}_{p}(\Omega;\mathbb{R}^{k})\) is defined as:
\[\mathrm{TL}^{p}_{\beta}((f,\mu),(g,\nu))=\inf_{\gamma\in\Pi(\mu,\nu)}\int_{ \Omega^{2}}\left(\frac{1}{\beta}\|x-y\|^{p}+\|f(x)-g(y)\|^{p}\right)\mathrm{d} \gamma(x,y). \tag{5}\]
For any \(p\in[1,\infty)\) and \(\beta>0\), the \(\mathrm{TL}^{p}_{\beta}\) distance defines a proper metric on \(\mathcal{Q}_{p}(\Omega;\mathbb{R}^{k})\), and \((\mathcal{Q}_{p}(\Omega;\mathbb{R}^{k}),\mathrm{TL}^{p}_{\beta})\) is a metric space. Intuitively, the \(\mathrm{TL}^{p}_{\beta}\) measures the OT between measures \(\mu\) and \(\nu\) raised onto the graphs of \(f\) and \(g\). Hence, \(\mathrm{TL}^{p}_{\beta}\) solves an OT problem in the \((d+k)\)-dimensional
Figure 1: Illustrating the fundamental idea of \(\mathrm{TL}^{p}\) distances. On the left, signals \(f\) and \(g\) are depicted along with their associated measures \(\mu\) and \(\nu\). In the middle, the measures \(\mu\) and \(\nu\) are lifted to the graphs of \(f\) and \(g\), respectively. On the right, the optimal transport plan is visualized, accompanied by the corresponding transportation cost.
space. Figure 1 shows the core concept behind \(\mathrm{TL}^{p}\) distances. Notably, the \(\mathrm{TL}^{p}_{\beta}\) distance satisfies the following properties:
\[\lim_{\beta\to 0}\mathrm{TL}^{p}_{\beta}((f,\mu),(g,\nu)) =\begin{cases}\|f-g\|^{p}_{\mathrm{L}^{p}(\mu)}&\text{if }\mu=\nu\\ \infty&\text{elsewhere}\end{cases} \tag{6}\] \[\lim_{\beta\to+\infty}\mathrm{TL}^{p}_{\beta}((f,\mu),(g,\nu)) =\mathrm{OT}(f_{\#}\mu,g_{\#}\nu) \tag{7}\]
Hence, the \(\mathrm{TL}^{p}_{\beta}\) distance interpolates between the \(\mathrm{L}^{p}\) distance between \(f,g\) and the p-Wasserstein distance between \(f_{\#}\mu\) and \(g_{\#}\nu\).
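For two discrete, equal-length signals carrying uniform probability masses, the minimization in Eq. (5) reduces to an assignment problem over the lifted cost matrix, since the optimal plan can then be taken to be a permutation. The sketch below uses SciPy's Hungarian solver for this special single-channel case; the signals, grid, and \(\beta\) are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def tlp_distance(t1, f1, t2, f2, beta=1.0, p=2):
    """TL^p (Eq. 5) between equal-length, single-channel signals (f1 on grid t1)
    and (f2 on grid t2), each carrying uniform mass 1/n per sample."""
    n = len(t1)
    cost = (np.abs(t1[:, None] - t2[None, :]) ** p) / beta \
         + np.abs(f1[:, None] - f2[None, :]) ** p
    rows, cols = linear_sum_assignment(cost)              # optimal 1-1 transport plan
    return cost[rows, cols].sum() / n                     # each matched pair carries mass 1/n

t = np.linspace(0.0, 1.0, 200)
f = np.exp(-0.5 * ((t - 0.4) / 0.05) ** 2)                # a Gaussian bump
g = np.exp(-0.5 * ((t - 0.6) / 0.05) ** 2)                # the same bump, shifted in time
print(tlp_distance(t, f, t, g, beta=1.0))
```

Varying \(\beta\) in this sketch interpolates between the \(\mathrm{L}^{p}\)-like and Wasserstein-like regimes described by Eqs. (6) and (7).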
**Partial Transport \(\mathrm{L}^{p}\) Distances.** In many real-world scenarios, it is natural for two signals to only partially match each other. Figure 2 illustrates this phenomenon. However, because \(\mathrm{TL}^{p}\) distances are rooted in OT, they may sacrifice true correspondences in order to achieve a complete match between the two signals (as seen in Figure 2). To address this issue, we propose extending the definition of \(\mathrm{TL}^{p}\) distances to partial transport, allowing for partial matching for signal comparison.
To do so, we first expand the definition of \(k\)-channel signals so that they are defined on positive measures rather than probability measures. Specifically, we define a signal as the pair \((f,\mu)\) where \(\mu\in\mathcal{M}_{+}(\Omega)\) and \(f\in\mathrm{L}^{p}(\mu;\mathbb{R}^{k})\). We denote the set of all such signals as \(\mathcal{Q}^{+}_{p}(\Omega;\mathbb{R}^{k})\), that is,
\[\mathcal{Q}^{+}_{p}(\Omega;\mathbb{R}^{k}):=\{(f,\mu):\mu\in\mathcal{M}_{+}( \Omega),f\in\mathrm{L}^{p}(\mu;\mathbb{R}^{k})\}.\]
We now propose our Partial Transport \(\mathrm{L}^{p}\) (\(\mathrm{PTL}^{p}\)) distance between two signals \((f,\mu)\) and \((g,\nu)\) in \(\mathcal{Q}^{+}_{p}(\Omega;\mathbb{R}^{k})\) as:
\[\mathrm{PTL}^{p}_{\beta,\lambda}((f,\mu),(g,\nu))=\inf_{\gamma\in\Pi_{\leq}(\mu,\nu)}\int_{\Omega^{2}}\left(\frac{1}{\beta}\|x-y\|^{p}+\|f(x)-g(y)\|^{p}\right)\mathrm{d}\gamma(x,y)+\lambda\left(\|\mu\|_{\mathrm{TV}}+\|\nu\|_{\mathrm{TV}}-2\|\gamma\|_{\mathrm{TV}}\right). \tag{8}\]

For empirical signals \((f,1_{M})\) and \((g,1_{N})\) supported on point sets \(\{x_{i}\}_{i=1}^{M}\) and \(\{y_{j}\}_{j=1}^{N}\) with unit masses, the corresponding empirical problem reads

\[\mathrm{PTL}^{p}_{\beta,\lambda}((f,1_{M}),(g,1_{N}))=\min_{\gamma\in\Pi_{\leq}(1_{M},1_{N})}\sum_{i,j}\gamma_{ij}\left(\frac{1}{\beta}\|x_{i}-y_{j}\|^{p}+\|f(x_{i})-g(y_{j})\|^{p}\right)+\lambda\left(M+N-2\sum_{i,j}\gamma_{ij}\right). \tag{9}\]
**Theorem 3.1**.: _For any \(p\geq 1\) and \(\lambda,\beta>0\), there exists a minimizer for the \(\mathrm{PTL}^{p}\) problem (8). Furthermore, for the empirical \(\mathrm{PTL}^{p}\) problem (9), there exists a minimizer \(\gamma\in\Pi_{\leq}(1_{M},1_{N})\) that is induced by a 1-1 mapping. That is, the optimal \(\gamma\) satisfies \(\gamma_{ij}\in\{0,1\}\) for each \(i,j\), and each row and column of \(\gamma\) contains at most one nonzero element._
**Theorem 3.2**.: \((\mathcal{Q}_{+}(\Omega;\mathbb{R}^{k}),\mathrm{PTL}^{p}_{\beta,\lambda})\) _defines a metric space._
We refer to Section C in the appendix for the proofs of the above theorems and a detailed discussion of the \(\mathrm{PTL}^{p}\) space \(\mathcal{Q}_{+}(\Omega;\mathbb{R}^{k})\).
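Because Theorem 3.1 guarantees a minimizer of the empirical problem (9) that is a 0/1 partial matching, the empirical \(\mathrm{PTL}^{p}\) value can, for small instances, be computed as a standard assignment on an augmented cost matrix in which dummy rows and columns absorb created and destroyed mass at cost \(\lambda\). The sketch below assumes single-channel signals and uses a large finite constant in place of forbidden pairings; it is meant as an illustration rather than the solver used in the experiments.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ptlp_empirical(t1, f1, t2, f2, beta=1.0, lam=1.0, p=2, big=1e9):
    """Empirical PTL^p (Eq. 9): matched pairs pay the lifted cost, and every
    unmatched point pays lam. Solved as an (M+N)x(M+N) assignment with dummies."""
    M, N = len(t1), len(t2)
    C = (np.abs(t1[:, None] - t2[None, :]) ** p) / beta \
      + np.abs(f1[:, None] - f2[None, :]) ** p
    A = np.full((M + N, M + N), big)                       # big = forbidden pairing
    A[:M, :N] = C                                          # transport x_i -> y_j
    A[np.arange(M), N + np.arange(M)] = lam                # destroy mass at x_i
    A[M + np.arange(N), np.arange(N)] = lam                # create mass at y_j
    A[M:, N:] = 0.0                                        # unused dummies pair at no cost
    rows, cols = linear_sum_assignment(A)
    return A[rows, cols].sum()

t1 = np.linspace(0.0, 1.0, 80); t2 = np.linspace(0.0, 1.0, 100)
f1 = np.sin(2 * np.pi * t1); f2 = np.sin(2 * np.pi * t2) + 0.3 * (t2 > 0.8)
print(ptlp_empirical(t1, f1, t2, f2, beta=1.0, lam=0.5))
```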
Similar to the \(\mathrm{TL}^{p}\) distance, we can also extend the definition for \(\beta=0\) and \(\beta=\infty\) by the following theorem:
**Theorem 3.3**.: _If \(\lambda>0\), we have_
\[\lim_{\beta\to 0}\mathrm{PTL}^{p}_{\beta,\lambda}((f,\mu),(g, \nu)) =\|f-g\|^{p}_{\mathrm{L}^{p}(\mu\wedge\nu),2\lambda}+\lambda(\|\mu-\nu\|_{ \mathrm{TV}}) \tag{10}\] \[\lim_{\beta\to\infty}\mathrm{PTL}^{p}_{\beta,\lambda}((f,\mu),(g,\nu)) =\mathrm{OPT}_{\lambda}(f_{\#}\mu,g_{\#}\nu), \tag{11}\]
_where \(\mu\wedge\nu\) is the minimum of measure \(\mu,\nu\),_
\[\|f-g\|^{p}_{\mathrm{L}^{p}(\mu\wedge\nu),2\lambda}:=\int_{\Omega}\|f-g\|^{p} \wedge 2\lambda\,\mathrm{d}(\mu\wedge\nu).\]
_and \(\|\mu-\nu\|_{\mathrm{TV}}\) is the total variation of the signed measure \(\mu-\nu\)._
See Section A in the appendix for the details of the notation and Section D for the proof. Note that if we take \(\lambda\to\infty\), we recover (6) and (7) from the above limits. We also note that \(\lambda\to 0\) is not an interesting case, as it implies zero cost for the creation and destruction of mass, leading to an optimal \(\gamma\) of all zeros, i.e., \(\mathrm{PTL}^{p}_{\beta,0}((f,\mu),(g,\nu))=0\) for all \((f,\mu),(g,\nu)\in\mathcal{Q}^{+}_{p}(\Omega;\mathbb{R}^{k})\).
**Sliced Extensions of \(\mathrm{TL}^{p}\) and \(\mathrm{PTL}^{p}\).** Using the connection between the \(\mathrm{TL}^{p}\) distance and the OT distance [7], Eq. (5) can be rewritten as
\[\mathrm{TL}^{p}_{\beta}((f,\mu),(g,\nu))=\mathrm{OT}(\hat{\mu},\hat{\nu}) \tag{12}\]
where \(\hat{\mu}=(T_{\beta,f,p})_{\#}\mu\) is a push-forward measure of \(\mu\) by \(T_{\beta,f,p}(x)=\left[\begin{array}{c}x\beta^{-\frac{1}{p}}\\ f(x)\end{array}\right]\), and similarly \(\hat{\nu}=(T_{\beta,g,p})_{\#}\nu\). Eq. (12) allows us to apply SOT method to the \(\mathrm{TL}^{p}\) distance, and have the sliced-TLP distance as follows:
\[\mathrm{STL}^{p}_{\beta}((f,\mu),(g,\nu))=\int_{\mathbb{S}^{d+k-1}}\mathrm{OT} (\theta_{\#}\hat{\mu},\theta_{\#}\hat{\nu})d\sigma(\theta) \tag{13}\]
where \(\sigma(\theta)\) is a probability measure with non-zero density on \(\mathbb{S}^{d+k-1}\), for instance the uniform measure on the unit sphere. Similarly, by leveraging SOPT and the relation between \(\mathrm{PTL}^{p}\) and OPT (see proposition C.3), we can define Sliced \(\mathrm{PTL}^{p}\) as
\[\mathrm{SPTL}^{p}_{\beta,\lambda}((f,\mu),(g,\nu))=\int_{\mathbb{S}^{d+k-1}} \mathrm{OPT}_{\lambda(\theta)}(\theta_{\#}\hat{\mu},\theta_{\#}\hat{\nu})d \sigma(\theta) \tag{14}\]
where \(\lambda\) can be defined as an \(L^{1}(\sigma,\mathbb{R}_{++})\) function of \(\theta\). Note that \(\mathrm{STL}^{p}_{\beta}\) and \(\mathrm{SPTL}^{p}_{\beta,\lambda}\) are metrics on \(\mathcal{Q}(\Omega;\mathbb{R}^{k})\) and \(\mathcal{Q}_{+}(\Omega;\mathbb{R}^{k})\), respectively.
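The following minimal sketch evaluates \(\mathrm{STL}^{p}_{\beta}\) in Eq. (13) for two equal-length, single-channel signals with uniform masses: the signals are lifted by \(T_{\beta,f,p}\), and each slice is solved in closed form by sorting the projections. Extending it to \(\mathrm{SPTL}^{p}\) in Eq. (14) would only require replacing the per-slice solver with a one-dimensional OPT routine; all parameters here are illustrative.

```python
import numpy as np

def lift(t, f, beta, p=2):
    """Raise a signal onto its graph: T_{beta,f,p}(x) = [x * beta**(-1/p), f(x)]."""
    return np.column_stack([t * beta ** (-1.0 / p), f])

def sliced_tlp(t1, f1, t2, f2, beta=1.0, p=2, n_slices=200, seed=0):
    """Monte Carlo estimate of STL^p (Eq. 13) for equal-length 1-channel signals."""
    rng = np.random.default_rng(seed)
    U, V = lift(t1, f1, beta, p), lift(t2, f2, beta, p)    # lifted points in R^{d+k} = R^2
    total = 0.0
    for _ in range(n_slices):
        theta = rng.normal(size=U.shape[1])
        theta /= np.linalg.norm(theta)                     # direction on S^{d+k-1}
        up, vp = np.sort(U @ theta), np.sort(V @ theta)    # 1D slices of the lifted measures
        total += np.mean(np.abs(up - vp) ** p)             # closed-form 1D OT cost
    return total / n_slices

t = np.linspace(0.0, 1.0, 300)
f = np.exp(-0.5 * ((t - 0.3) / 0.03) ** 2)
g = np.exp(-0.5 * ((t - 0.7) / 0.03) ** 2)
print(sliced_tlp(t, f, t, g, beta=0.1))
```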
Equipped with the newly proposed distances, we now demonstrate their performance in separability and nearest neighbor classification.
## 4 Experiments
### Separability
A valid distance should be able to separate a mixture of different classes of signals. In this experiment, we illustrate the separability of the \(\mathrm{PTL}^{p}\) distance on two classes of signals.
**Synthetic Data**
We generate the following two classes of signals on the domain \([0,1]\):
\[\mathbf{S}_{0} =\{f(t)\mid f(t)=\varphi(t|x,\sigma_{0});\] \[\quad\quad\quad x=0.98z+0.01,z\sim\mathrm{Unif}[0,1]\}\] \[\mathbf{S}_{1} =\{g(t)\mid g(t)=\varphi(t|x+0.001,\sigma_{1})-\varphi(t|x-0.001, \sigma_{1});\] \[\quad\quad\quad x=0.98z+0.01,z\sim\mathrm{Unif}[0,1]\}\]
where \(\varphi\) denotes a Gaussian probability density function scaled within \([0,1]\), \(\sigma_{0}=0.01\) and \(\sigma_{1}=\frac{0.01}{\sqrt{2}}\); time \(t\sim\mathrm{Unif}[0,1]\). In short, \(\mathbf{S}_{0}\) is the class of signals with one positive Gaussian bump, whereas \(\mathbf{S}_{1}\) denotes the class of signals with both a positive and a negative Gaussian bump. To further test the robustness, we add random blip noise \(\epsilon(t)\) to each signal in a second separability experiment:
\[\epsilon(t)=\alpha\varphi(t|x,\sigma_{e}=0.001\sqrt{5})+0.1\epsilon_{0}\]
where \(\alpha\) is randomly chosen from \(\{-0.5,0.5\}\); \(x=0.98z+0.01,z\sim\mathrm{Unif}[0,1]\); and \(\epsilon_{0}\) is standard Gaussian noise. \(\epsilon(t)\) can be viewed as a small positive or negative bump with superimposed Gaussian noise.
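A minimal sketch of this data-generation procedure is given below, interpreting "scaled within \([0,1]\)" as peak-normalizing each Gaussian bump; the grid resolution and the number of signals per class are illustrative choices not specified above.

```python
import numpy as np

def bump(t, x, sigma):
    """Gaussian profile centered at x, rescaled so its peak value is 1."""
    g = np.exp(-0.5 * ((t - x) / sigma) ** 2)
    return g / g.max()

def make_signal(t, label, rng, sigma0=0.01):
    x = 0.98 * rng.uniform() + 0.01
    if label == 0:                                         # class S_0: one positive bump
        return bump(t, x, sigma0)
    s1 = sigma0 / np.sqrt(2)                               # class S_1: +/- bump pair
    return bump(t, x + 0.001, s1) - bump(t, x - 0.001, s1)

def blip_noise(t, rng):
    x = 0.98 * rng.uniform() + 0.01
    alpha = rng.choice([-0.5, 0.5])
    return alpha * bump(t, x, 0.001 * np.sqrt(5)) + 0.1 * rng.normal(size=t.size)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = [make_signal(t, lbl, rng) for lbl in (0, 1) for _ in range(50)]
noisy = [s + blip_noise(t, rng) for s in clean]
```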
**Results**
Figure 3 shows the 2D Multi-Dimensional Scaling (MDS) embeddings calculated from the precomputed pairwise \(\mathrm{L}^{p}\), \(\mathrm{TL}^{p}\) and \(\mathrm{PTL}^{p}\) distance matrices. We observe that \(\mathrm{PTL}^{p}\) not only achieves high performance in separating the two classes, but also exhibits robustness to noise. When adding blips, \(\mathrm{TL}^{p}\) tends to mistake the noise for the main trend and cluster signals based on the noise.
### 1 Nearest Neighbor Classification
**Experiment setup**
To demonstrate the effectiveness of our proposed \(\mathrm{PTL}^{p}\) metric and its sliced variant \(\mathrm{SPTL}^{p}\), we test these methods on the task of 1 Nearest Neighbor (1NN) classification, along with other baselines.
Figure 3: Visualizing manifold learning results for two classes of signals. For the original signals (top row), both \(\mathrm{TL}^{p}\) and \(\mathrm{PTL}^{p}\) separate the two classes well, but \(\mathrm{L}^{p}\) fails. However, for the noisy signals (bottom row), only \(\mathrm{PTL}^{p}\) shows a clear decision boundary.
Given a test signal, we seek the nearest training signal with respect to each metric/divergence, and predict the test label as that of the found nearest neighbor.
**Dataset**
We use three modified UCR datasets of varying lengths from [36]: Suffix, Prefix, and Subsequence. The Suffix dataset is generated by simulating scenarios in which sensors are activated at different times and thus may miss some observations at the start, recording only suffix time series. Similarly, the Prefix dataset generator imitates sensors that stop non-deterministically and produces only prefix time series. The Subsequence dataset contains time series with variations in both the starting and stopping times, i.e., the sensor may capture only subsequences.
**Baselines**
The \(\mathrm{L}^{p}\) distance between signals is known for its simplicity and efficiency, but it assumes signals on a fixed temporal grid. The OT-based similarity metrics, namely the \(p\)-Wasserstein distance (\(\mathrm{OT}\)) and \(\mathrm{TL}^{p}\), treat the signals or the graphs of the signals as probability measures and solve the optimization problem of transporting one probability measure to the other in the most cost-efficient way. Moreover, \(\mathrm{STL}^{p}\) is included in the baselines as a fast approximation of \(\mathrm{TL}^{p}\).
Unlike the \(\mathrm{L}^{p}\) metric, Dynamic Time Warping (DTW) [1] applies an elastic (non-linear) warping to temporal sequences and finds an optimal matching between the warped time series. DTW is more robust to time distortions by its pathological alignment. An \((N,M)\)-warping path is a sequence \(p=(p_{1},p_{2},\cdots,p_{L})\) with \(p_{l}=(n_{l},m_{l})\in[1:N]\times[1:M]\), which defines an alignment between two sequences of length \(N\) and \(M\) that satisfies monotonicity, continuity and boundary conditions [37]. Given a pair of temporal sequences \(f=\{f_{i}\}_{i=0}^{N}\) and \(g=\{g_{j}\}_{j=0}^{M}\) on the domain \(\Omega\), DTW is calculated as
\[\mathrm{DTW}(f,g)=\min_{p}\{c_{p}(f,g)\mid p\text{ is an }(N,M)\text{-warping path}\}, \tag{15}\]
where \(c_{p}(f,g)=\sum_{(i,j)\in p}c(f_{i},g_{j})\) and \(c(f_{i},g_{j})\) is the cost of moving from \(f_{i}\) to \(g_{j}\). We also include variants of DTW, namely WDTW, DDTW, and Soft-DTW (SDTW) as baselines. For SDTW, we consider two cases for the smoothing parameter \(\gamma=0.01\) and \(\gamma=1\).
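For reference, the following minimal sketch evaluates Eq. (15) by the standard dynamic-programming recursion with a squared-difference local cost (an illustrative choice); in the experiments, the tslearn solvers are used instead.

```python
import numpy as np

def dtw(f, g, cost=lambda a, b: (a - b) ** 2):
    """Dynamic-programming evaluation of Eq. (15): minimal-cost warping path
    subject to the boundary, monotonicity, and continuity conditions."""
    N, M = len(f), len(g)
    D = np.full((N + 1, M + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            c = cost(f[i - 1], g[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[N, M]

t = np.linspace(0.0, 1.0, 100)
f = np.sin(2 * np.pi * t)
g = np.sin(2 * np.pi * t ** 1.3)                           # a time-warped copy of f
print(dtw(f, g))
```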
**Grid search for optimal \(\beta\) and \(\lambda\)**
To find the optimal \(\beta\) and \(\lambda\) for \(\mathrm{PTL}^{p}_{\beta,\lambda}\), we perform a grid search based on 5-fold cross-validation, implemented with the scikit-learn built-in GridSearchCV tool. The search range for \(\beta\) is set to \(\{10^{-3},10^{-2},10^{-1},1,10,100,10^{3},10^{4}\}\), and \(\lambda\) is chosen from a set of 10 evenly spaced values from \(0.1\) to the radius of the raised distribution on the graph of each signal.
In \(\mathrm{SPTL}^{p}_{\beta,\lambda}\), we also need to specify the slices, i.e., the directions \(\theta\) for the one-dimensional projections. We obtain the optimal \(\beta\) from \(\mathrm{PTL}^{p}_{\beta,\lambda}\). As the amount of mass that should be transported may vary across slices, we adopt the strategy of searching for the best \(\lambda\) for the most informative slice and then setting the \(\lambda\)'s accordingly for the other slices. We set \(\theta_{0}\) to be the first principal component of all signals. Note that \(\theta_{0}\) vanishes at the dimensions corresponding to \(x\beta^{-\frac{1}{p}}\), but concentrates on \(f(x)\) in \(T_{\beta,f,p}(x)=\left[x\beta^{-\frac{1}{p}};f(x)\right]\) (refer to Eq. (12) and Eq. (13)). Similarly, we implement a grid search for the best \(\lambda_{\theta_{0}}\) corresponding to \(\theta_{0}\). Given \(\theta_{0}\) and \(\lambda_{\theta_{0}}\), for a specific slice \(\theta\), \(\lambda_{\theta}=\langle\theta,\theta_{0}\rangle\lambda_{\theta_{0}}\), where \(\langle\cdot,\cdot\rangle\) denotes the inner product.
**Results**
Table 1 presents the results of nearest neighbor classification using different metrics/divergences on three subsets of the modified UCR dataset: Prefix, Subsequence, and Suffix. The table indicates that no single metric/divergence exhibits a significant advantage over the others on any single dataset. However, \(\mathrm{SPTL}^{p}\) achieves the best performance on two out of three datasets and performs nearly as well as the top performers on the remaining dataset, resulting in an overall win. It is worth noting that although the improvement margins are small, the computational advantage of \(\mathrm{SPTL}^{p}\) and \(\mathrm{STL}^{p}\) over the other competitors (see Table 2) makes them more favorable choices in terms of efficiency.
### Computation efficiency using Sliced \(\mathrm{PTL}^{p}\)
We summarize the time complexities of all methods considered in Table 2.
In implementation, DTW-based methods are solved by a dynamic programming algorithm. For DTW and soft-DTW, we use the solvers from tslearn, which are accelerated by numba. \(\mathrm{TL}^{p}\) and \(\mathrm{PTL}^{p}\) are solved by the linear programming solvers in PythonOT, whose time complexity is cubic with respect to the length of the signals in the worst case, and quadratic in practice when the measures are
empirical. \(\mathrm{STL}^{p}\) and \(\mathrm{SPTL}^{p}\) can be accelerated by numba. For \(\mathrm{STL}^{p}\) and \(\mathrm{SPTL}^{p}\), we set the number of projections to 50. Note that the computation of \(\mathrm{STL}^{p}\) and \(\mathrm{SPTL}^{p}\) can be further accelerated by parallel computation with respect to the slices.
## 5 Conclusion
In this paper, we propose the partial transport \(\mathrm{L}^{p}\) (\(\mathrm{PTL}^{p}\)) distance as a similarity measure for generic signals. We have shown that \(\mathrm{PTL}^{p}\) defines a metric and admits an optimal (partial) transport plan. We further characterize the behavior of \(\mathrm{PTL}^{p}_{\beta,\lambda}\) as \(\beta\) approaches various limits. We extend \(\mathrm{PTL}^{p}\) to the sliced partial transport \(\mathrm{L}^{p}\) (\(\mathrm{SPTL}^{p}\)) distance, which is more computationally efficient. In the experimental section, we have demonstrated that the proposed metric is superior to other baselines in separability and shown promising results on 1 nearest neighbor classification.
\begin{table}
\begin{tabular}{l|l} \hline Method & Worst-case Complexity \\ \hline \(\mathrm{PTL}^{p}\) & \(\mathcal{O}(N^{3}(d+k))\) \\ \(\mathrm{SPTL}^{p}\) & \(\mathcal{O}(LN((d+k)+N+\log(N)))\) \\ \(\mathrm{TL}^{p}\) & \(\mathcal{O}(N^{3}(d+k))\) \\ \(\mathrm{STL}^{p}\) & \(\mathcal{O}(LN((d+k)+\log(N)))\) \\ OT & \(\mathcal{O}(N^{3}k)\) \\ *DTW & \(\mathcal{O}(N^{2}k)\) \\ \(\mathrm{L}^{p}\) & \(\mathcal{O}(Nk)\) \\ \hline \end{tabular}
\end{table}
Table 2: Worst case time complexities for our proposed methods and baselines. Here \(N\) denotes the length of the signals, \(d\) and \(k\) are the signal dimension and number of channels respectively. \(L\) is the number of slices for sliced methods. Note that DTW and its variants used in this paper share the same complexity, which is denoted by *DTW in the table. |
2307.05313 | Programmable and arbitrary-trajectory ultrafast flying focus pulses | "Flying focus" techniques produce laser pulses with dynamic focal points that
travels distances much greater than a Rayleigh length. The implementation of
these techniques in laser-based applications requires the design of optical
configurations that can both extend the focal range and structure the radial
group delay. This article describes a method for designing optical
configurations that produce ultrashort flying focus pulses with
arbitrary-trajectory focal points. The method is illustrated by several
examples that employ an axiparabola for extending the focal range and either a
reflective echelon or a deformable mirror-spatial light modulator pair for
structuring the radial group delay. The latter configuration enables rapid
exploration and optimization of flying foci, which could be ideal for
experiments. | M. V. Ambat, J. L. Shaw, J. J. Pigeon, K. G. Miller, T. T. Simpson, D. H. Froula, J. P. Palastro | 2023-07-11T15:00:07Z | http://arxiv.org/abs/2307.05313v1 | # Programmable and arbitrary-trajectory ultrafast flying focus pulses
###### Abstract
"Flying focus" techniques produce laser pulses with dynamic focal points that travels distances much greater than a Rayleigh length. The implementation of these techniques in laser-based applications requires the design of optical configurations that can both extend the focal range and structure the radial group delay. This article describes a method for designing optical configurations that produce ultrashort flying focus pulses with arbitrary-trajectory focal points. The method is illustrated by several examples that employ an axiparabola for extending the focal range and either a reflective echelon or a deformable mirror-spatial light modulator pair for structuring the radial group delay. The latter configuration enables rapid exploration and optimization of flying foci, which could be ideal for experiments.
## 1 Introduction
The intensity peak of a flying focus pulse can travel at any velocity, independent of the group velocity, over distances much longer than a Rayleigh range [1, 2, 3, 4, 5]. These properties offer a new approach to optimizing the wide range of laser-based applications that require velocity matching or extended interaction lengths. For instance, recent experiments have used a flying focus to create long, contiguous plasma channels [6, 7] and to synchronize the pump and probe pulses in soft x-ray lasers [8]. The potential uses of flying focus pulses extend beyond these demonstrations to enhancing laser wakefield acceleration [3, 9, 10], nonlinear Thomson scattering [11], or THz generation [12] and to facilitating observations of fundamental processes, such as radiation reaction [13] and Compton scattering [14]. The ultimate success of these applications relies on the design of practical, and preferably adaptive, optical configurations for preparing flying focus pulses.
The first experimental realization of a flying focus used a highly chromatic diffractive optic to focus a chirped laser pulse [2]. The diffractive optic focuses each wavelength of the pulse to a different longitudinal location, while the chirp controls the arrival time of each wavelength at its focus. The resulting intensity peak traverses the focal range, i.e., the distance between the focal points of the minimum and maximum wavelengths, with a constant velocity that can be adjusted by changing the chirp. More complex spectral phases allow for more complex focal trajectories [1, 15]. Despite its tunability, this "chromatic flying focus" has several limitations. First, because the extended focal range is produced by a static diffractive optic, it cannot be modified from shot to shot. Second and more importantly, the bandwidth of the pulse is spread across the focal region. This precludes the formation of an ultrashort (<100 fs) intensity peak, which is a requirement for many applications.
The need for ultrashort intensity peaks has motivated the development of flying focus techniques that preserve the entire bandwidth of the laser pulse at every location within the focal range [3, 5, 9]. In contrast to the chromatic flying focus, which uses radial group delay to extend the focal range, these "ultrafast flying focus" schemes employ separate optics to independently extend the focal range and structure the radial group delay. As an example, a recent demonstration of an ultrafast, constant-velocity flying focus [5] used the geometric aberration of an axiparabola [16, 17, 18] to focus different annuli in the near field to different longitudinal locations in the far field and the
radial group delay imparted by an echelon [3] to control the relative timing of the annuli. Despite the success of these experiments, the configuration relies on the use of a static echelon designed for a specific focal trajectory. An alternative configuration that replaces the echelon with adaptive optics, such as a deformable mirror-spatial light modulator pair [19, 20], would allow for on-shot programmability of the radial group delay and, as a result, the focal trajectory.
This work describes a method for designing optical configurations that produce ultrashort flying focus pulses with arbitrary focal trajectories at velocities close to the speed of light (Section II). The general method is independent of the optical configuration but is illustrated for specific examples of an axiparabola combined with either an echelon or a deformable mirror-spatial light modulator pair (Section III). The method is applied to create flying focus pulses exhibiting constant velocity, constant acceleration, and oscillating focal trajectories (Section IV). In each case, the intensity peak of the flying focus maintains an ultrashort duration as it traverses the extended focal range. The flexibility afforded by this method and the deformable mirror-spatial light modulator pair (DM-SLM) enable rapid and automated control over the focal trajectory, which can facilitate the use of the ultrafast flying focus in laser-based applications.
## 2 The focal trajectory of an ultrafast flying focus
Figure 1 compares the trajectories of focal points produced by a focusing optic alone (a) and a focusing optic used in combination with optics that structure the radial group delay (b) and (c).
Figure 1: The effect of optics on the focal trajectory. (a) A laser pulse with a flat pulse front (red) and flat phase front (grey) is focused by an optic that extends the focal range \(L\) (blue). The trajectory of the focus is completely determined by the focal geometry. (b) and (c) The pulse front, or radial group delay \(\tau_{D}(r)\), is structured by a preliminary optic (purple). The structure of the pulse front can be used to create a constant-velocity focus (b), an oscillating focus (c), or otherwise dynamic trajectories.
In Fig. 1(a), a laser pulse with a flat phase front and a flat pulse front is incident at \(z=0\) on a focusing optic with a surface defined by the sag function \(s_{f}(r)\). The focusing optic extends the range of high intensity by using geometric aberration to focus different radial locations \(r\) in the near field to different longitudinal locations in the far field \(z=f(r)\). The resulting focal point travels a distance \(L=\max(f)-\min(f)\) along a trajectory that is fully determined by the sag function. In Figs. 1(b) and (c), additional optics are used to structure the pulse front, or radial group delay \(\tau_{D}(r)\), before focusing. Structuring the delay provides control over the trajectory of the focus and can produce a constant-velocity (b), oscillating (c), or otherwise dynamic focal point.
Each optical element in Fig. 1 applies a spatio-spectral phase to the laser pulse. The phase imparted by the entire optical assembly \(\phi(\omega,r)\) can be written as the sum of contributions from the focusing optic and the elements that structure the radial group delay (RGD). In the paraxial approximation (see Appendix A),
\[\phi(\omega,r)=-\frac{2\omega}{c}s_{f}(r)+\phi_{D}(\omega,r). \tag{1}\]
The first term provides the initial phase front curvature required to focus each radius to the location \(z=f(r)\). With \(f(r)\) specified, the sag function \(s_{f}(r)\) can be found by solving
\[\frac{ds_{f}}{dr}=\frac{r}{2f(r)}. \tag{2}\]
The second term in Eq. (1) modifies the relative timing of the near-field radii,
\[\tau_{D}(r)=\frac{\partial\phi_{D}(\omega,r)}{\partial\omega}. \tag{3}\]
To preserve the desired focusing, the elements that structure the RGD cannot significantly distort the phase fronts. The constraint \(\partial_{r}\phi_{D}(\omega,r)|_{\omega=\omega_{0}}=0\) ensures that \(\phi_{D}\) only modifies the RGD and, equivalently, that the central frequency of the laser pulse \(\omega_{0}\) focuses to the locations described by \(f(r)\).
For applications, one would like to specify a focal trajectory, i.e., the time-dependent velocity of the focus \(v_{f}(t)\), and use this trajectory to determine the required \(\tau_{D}(r)\). To calculate the required \(\tau_{D}(r)\), first note that each near-field radius of the laser pulse can arrive at its focal location \(z=f(r)\) at a different time. The focal time \(t_{f}(r)\) for each radius has contributions from the structured RGD and the focal geometry:
\[t_{f}(r)\approx\tau_{D}(r)+\frac{1}{c}\left[f(r)+\frac{r^{2}}{2f(r)}-2s_{f}(r )\right]. \tag{4}\]
The variation in the focal time and location with radius results in a moving focal point with a velocity
\[\tilde{v}_{f}(r)=\frac{df}{dr}\left(\frac{dt_{f}}{dr}\right)^{-1}\approx c \left[1+\frac{r^{2}}{2f^{2}(r)}-c\left(\frac{df}{dr}\right)^{-1}\frac{d\tau_{ D}(r)}{dr}\right]. \tag{5}\]
Equation (5) demonstrates that the structured RGD can be used to control the trajectory of the focus independently of the focal geometry. If \(\tau_{D}(r)=0\), \(\tilde{v}_{f}(r)=c\left[1+r^{2}/2f^{2}(r)\right]\), which is dictated solely by \(f(r)\). Rearranging Eq. (5) provides a differential equation for the \(\tau_{D}(r)\) needed to produce a specified trajectory \(v_{f}(t)\):
\[c\frac{d\tau_{D}}{dr}=\left[1-\frac{v_{f}\big{(}t_{f}(r)\big{)}}{c}+\frac{r^{ 2}}{2f^{2}(r)}\right]\frac{df}{dr}, \tag{6}\]
where \(v_{f}(t_{f}(r))=\tilde{v}_{f}(r)\) depends on \(\tau_{D}\) through Eq. (4) and a one-to-one mapping between near-field radius and time has been assumed. The solutions to Eqs. (2) and (6) form the basis for designing the optical elements necessary to create an ultrafast flying focus.
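As an illustration, the design equations can be integrated numerically. The sketch below steps Eq. (6) outward in \(r\), updating the focal time through Eq. (4) at each step, for the axiparabola profile \(f(r)=f_{0}+L(r/R)^{2}\) introduced in Section 3.1 and a constant, slightly subluminal focal velocity; the grid resolution and the example trajectory are illustrative choices.

```python
import numpy as np

c = 2.998e8                                                # speed of light [m/s]
f0, R, L = 0.50, 0.05, 0.01                                # axiparabola of Sec. 3.1 [m]

def f_ax(r):  return f0 + L * (r / R) ** 2                 # focal length, Eq. (8)
def df_ax(r): return 2.0 * L * r / R ** 2
def s_ax(r):  return (R ** 2 / (4 * L)) * np.log(1 + (L / f0) * (r / R) ** 2)  # sag, Eq. (9)

def design_tau_D(v_of_t, n=4000):
    """Forward integration of Eq. (6); t_f(r) from Eq. (4) is updated along the way."""
    r = np.linspace(0.0, R, n)
    tau = np.zeros(n)
    for i in range(1, n):
        ri, dr = r[i - 1], r[i] - r[i - 1]
        t_f = tau[i - 1] + (f_ax(ri) + ri ** 2 / (2 * f_ax(ri)) - 2 * s_ax(ri)) / c
        dtau = (1.0 - v_of_t(t_f) / c + ri ** 2 / (2 * f_ax(ri) ** 2)) * df_ax(ri) / c
        tau[i] = tau[i - 1] + dtau * dr
    return r, tau

# constant, slightly subluminal focal velocity as an example trajectory
r, tau = design_tau_D(lambda t: 0.999 * c)
print(tau[-1] * 1e15, "fs of delay at the edge of the optic")
```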
In order to preserve the ultrashort duration of the intensity peak at every point within the focal range, the focal velocity must be close to the speed of light, \(v_{f}(t)\approx c\). Even if a \(\phi_{D}\) satisfies the constraint \(\partial_{r}\phi_{D}|_{\omega=\omega_{0}}=0\) and maintains the focal locations of the central frequency, it will modify the focal locations of every other frequency. This spreads the frequency content of the laser pulse across the focal region, which reduces the bandwidth available at each location and places a lower bound on the minimum duration. Noting that the transverse wavenumber is the radial derivative of the phase and using similar triangles, one can show that the RGD modifies the focal locations by a distance \(\Delta f(\omega,r)\approx-cf^{2}(\partial_{r}\phi_{D})/(r\omega)\). This longitudinal chromatism will have a negligible effect on the duration of the intensity peak when \(\Delta f\) is much smaller than the focal range \(L\), i.e., when
\[\frac{\Delta\omega}{\omega_{0}}\frac{f^{2}}{rL}\left|\frac{df}{dr}\left(1- \frac{v_{f}}{c}\right)\right|\ll 1, \tag{7}\]
where \(\Delta\omega\) is the bandwidth of the laser pulse and Eq. (6) has been used with a simple form of \(\phi_{D}(\omega,r)=(\omega-\omega_{0})\tau_{D}(r)\).
## 3 Optical elements to create an ultrafast flying focus
### Optics to extend the focal range
The optics that extend the focal range use geometric aberration to focus different radial locations \(r\) in the near field to different longitudinal locations in the far field \(z=f(r)\). In principle, this can be accomplished using refractive optics like lenses. However, for broadband, ultrashort pulses, the B-integral, group velocity dispersion, and higher-order dispersion of these optics can broaden or distort the temporal profile. In addition, the damage threshold of refractive optics typically prohibits their use as final focusing elements for high-intensity pulses. Thus, reflective optics are often preferable for extending the focal range of high-intensity, ultrashort flying focus pulses.
One such optic, the axiparabola [16, 17], produces a near-constant, on-axis intensity maximum over the entire focal range, making it ideal for many applications. The focal length as a function of near-field radius \(f(r)\) is designed so that a flattop transverse intensity profile incident on the optic results in a uniform on-axis intensity maximum in the far field. Specifically,
\[f(r) =f_{0}+L\left(\frac{r}{R}\right)^{2}, \tag{8}\] \[s_{f}(r) =\frac{R^{2}}{4L}\ln\left[1+\frac{L}{f_{0}}\left(\frac{r}{R} \right)^{2}\right], \tag{9}\]
where \(f_{0}\) is the nominal focal length, \(R\) is the maximum radius of the axiparabola, and \(L\) determines the length of the focal range. Expanding Eq. (9) in powers of \(q\equiv L/f_{0}\) shows that the axiparabola is primarily a parabolic mirror \(\mathcal{O}(q^{0})\) with spherical aberration \(\mathcal{O}(q^{1})\). For \(L>0\) (\(<0\)), rays incident at larger radii are focused farther from (closer to) the optic than rays incident at smaller radii. With this choice of \(f(r)\), Eq. (7) simplifies to \(2(\Delta\omega/\omega_{0})(f_{0}/R)^{2}|1-v_{f}/c|\ll 1\), which is independent of \(L\).
Figure 2 displays the results of propagation simulations (see Appendix B) for a laser pulse focused by an axiparabola with \(f_{0}=50\) cm, \(R=5\) cm, and \(L=1\) cm. The laser pulse had a central wavelength \(\lambda_{0}=2\pi c/\omega_{0}=920\) nm and \(\Delta\lambda=78\) nm of bandwidth in a Gaussian power spectrum, corresponding to a 27 fs full-width at half-maximum (FWHM) duration. The transverse profile was initialized as a flattop with a 5 cm radius that filled the aperture of the axiparabola. The maximum on-axis intensity is nearly uniform over the entire focal range \(L\), which is \(\sim\)340\(\times\) longer than the Rayleigh range of the full-aperture focal spot \(Z_{R}=\lambda_{0}f_{0}^{2}/\pi R^{2}\)
[Fig. 2(b)]. The modulations in the on-axis intensity result from diffraction of the spherically aberrated phase fronts (see Appendix C). The near-uniform on-axis intensity comes at the cost of a spot size \(w\) that narrows over the focal range [Fig. 2(c)]. More specifically, the effective \(f/\#\) at the beginning of the focal range is larger than that at the end, such that within the focal region
\[w(z)\approx\frac{\lambda_{0}f_{0}}{\pi R}\left|\frac{L}{z-f_{0}}\right|^{1/2}. \tag{10}\]
The ring-like structures visible in the fluence [Fig. 2(c)] are the natural diffraction pattern created by the axiparabola.
Figure 2(d) illustrates the focal trajectory produced by the axiparabola. Here, the on-axis intensity is plotted as a function of propagation distance \(z-f_{0}\) and the moving frame coordinate \(\xi=t-z/c\). In these coordinates, a vertical line indicates a signal travelling at the vacuum speed of light. The intensity peak accelerates from its initial focal point at \(z-f_{0}=0\) and \(\xi=0\) to its final focal point at \(z-f_{0}=L\) and \(\xi\approx-75\) fs, following a trajectory consistent with \(\tilde{v}_{f}(r)=c\left[1+r^{2}/2f^{2}(r)\right]\). The pulse maintains its ultrashort duration over the entire focal range as shown by the white lineouts taken at the start (right) and end (left) of the focal region.
### Optics to structure the radial group delay
The trajectory of the focus can be programmed by structuring the radial group delay of the laser pulse. Ideal, achromatic focusing optics impart the exact amount of RGD needed to ensure that all frequency components within a pulse arrive at their focus at the same time. More generally, optics can impart unwanted RGD, resulting in asynchronous focusing and a reduction in the maximum focused intensity. For instance, with refractive optics, the combination of group velocity dispersion and the radially dependent thickness of the optic produce unfavorable RGD [21]. Below, optical elements are discussed that can impart favorable RGD, thereby enabling control over the trajectory of the focal point and the peak laser intensity.
The recently proposed and demonstrated radial echelon provides a reflective approach to structuring the radial group delay [3, 5]. The mirrored surface of the echelon consists of concentric rings with variable widths determined by the desired RGD and depths \(d\) equal to a half-integer multiple of the central wavelength \(d=(\ell/2)\lambda_{0}=\pi\ell c/\omega_{0}\), where \(\ell\) is a positive integer. For a
Figure 2: The focal properties of an axiparabola alone. (a) The sag function of an axiparabola with \(f_{0}=50\) cm, \(R=5\) cm, and \(L=1\) cm. The axiparabola focuses a laser pulse with a central wavelength of \(\lambda_{0}=920\) nm and a \(27\) fs FWHM duration. (b) The maximum on-axis intensity of the pulse as a function of distance from the nominal focal point \(z=f_{0}\). (c) The fluence profile. (d) The focal trajectory as a function of propagation distance and moving frame coordinate \(\xi=t-z/c\). The peak intensity travels at a superluminal velocity and accelerates. The white lineouts show the temporal profile of the pulse at the beginning (right) and end (left) of the focal region.
given \(\tau_{D}(r)\) and \(\ell=1\), the phase imparted by the echelon is given by
\[\phi_{D}^{\rm ech}(\omega,r)=-\frac{2\omega}{c}\left\{\frac{1}{4}\lambda_{0} \left[{\rm ceil}\left(\frac{c\tau_{D}(r)}{\lambda_{0}}\right)+{\rm floor}\left( \frac{c\tau_{D}(r)}{\lambda_{0}}\right)\right]\right\}. \tag{11}\]
By discretizing the continuous delay \(c\tau_{D}(r)\) in steps of the central wavelength, the echelon satisfies the constraint \(\partial_{r}\phi_{D}^{\rm ech}(\omega,r)|_{\omega=\omega_{0}}=0\) and thus does not affect the focusing of the frequency component \(\omega_{0}\). Said differently, the phase fronts of the central wavelength maintain their transverse coherence upon reflection from the echelon. For any other wavelength, the echelon introduces a shear in the phase front between each ring. This shear smooths out as higher-spatial orders diffract, leaving the desired radial group delay. The widths of the echelon rings can also lead to diffractive losses. These losses are negligible when \(\Delta R\gg\lambda_{0}f_{0}/2R\), which is easily satisfied for a large range of designs. Importantly, for \(v_{f}(t)\approx c\), the combined axiparabola-echelon system preserves an ultrashort pulse duration.
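A minimal sketch of the echelon discretization in Eq. (11) (for \(\ell=1\)) is given below; the radial delay profile used here is an illustrative placeholder for a \(\tau_{D}(r)\) obtained from the design procedure of Section 2.

```python
import numpy as np

lam0 = 920e-9                                              # central wavelength [m]
c = 2.998e8

def echelon_depth(tau_D):
    """Half-wavelength-stepped surface from Eq. (11) with l = 1:
    s_ech(r) = (lam0/4) * [ceil(c*tau_D/lam0) + floor(c*tau_D/lam0)]."""
    x = c * tau_D / lam0
    return 0.25 * lam0 * (np.ceil(x) + np.floor(x))

r = np.linspace(0.0, 0.05, 2000)
tau_D = 40e-15 * (r / 0.05) ** 2                           # illustrative delay profile
s_ech = echelon_depth(tau_D)
phase_ech = -2.0 * (2.0 * np.pi / lam0) * s_ech            # imparted phase at omega_0
n_rings = np.count_nonzero(np.diff(s_ech))                 # number of echelon steps
print(n_rings, s_ech.max() / lam0)
```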
Despite its advantage as a reflective optic with a higher damage threshold, each echelon is a static optical element that can only impart a single, pre-designed RGD. Adaptive optics, such as deformable mirrors and spatial light modulators, offer dynamic programmability of the radial group delay and, as a result, the focal trajectory. A deformable mirror (DM) consists of pistons or piezoelectric segments that shape a flexible, reflective membrane [22, 23]. A DM can be programmed to apply the continuous phase
\[\Phi_{\rm dm}(\omega,r)=-\frac{2\omega}{c}s_{\rm dm}(r)=\omega\tau_{D}(r), \tag{12}\]
where \(s_{\rm dm}(r)=-c\tau_{D}(r)/2\) is the sag function of the membrane. However, the phase \(\Phi_{\rm dm}(\omega,r)\) does not satisfy the constraint \(\partial_{r}\Phi_{\rm dm}(\omega,r)|_{\omega=\omega_{0}}=0\). Thus a second optical element must be introduced to eliminate the phase distortion at the central frequency.
A spatial light modulator (SLM) can partially correct the phase front distortion at the central frequency [20]. An SLM consists of a pixelated, two-dimensional array of liquid crystals that possess electrical and optical anisotropy. The voltage delivered to each pixel can be adjusted to change the optical path length of an incident laser pulse as a function of transverse location [24, 25]. By appropriately programming the SLM voltages, the phase front of the central frequency can be
Figure 3: (a) The radial group delays, i.e., the \(\tau_{D}(r)\), required to produce constant-velocity focal trajectories with \(v_{f}=1.001c\), (blue, solid), \(v_{f}=c\) (green, dashed), and \(v_{f}=0.999c\) (red, dotted) with the axiparabola described in Fig. 2. (b) The echelon profile for \(v_{f}=c\). (c) The deformable mirror sag function (green) and spatial light modulator phase (black) for \(v_{f}=c\).
flattened to an extent allowed by the discreteness of the pixels. Specifically, for the DM phase in Eq. (12),
\[\Phi_{\rm slm}(\omega,r)=-\frac{\omega}{c}\lambda_{0}{\rm mod}\left[\frac{c\tau_{D }(r_{p})}{\lambda_{0}},1\right], \tag{13}\]
where \(r_{p}=\frac{1}{2}[{\rm floor}(\frac{r}{p})+{\rm ceil}(\frac{r}{p})]p\) and \(p\) is the SLM pixel size. The total phase of the DM-SLM pair is then
\[\phi_{D}^{\rm dm\text{-}{\rm slm}}(\omega,r)=\Phi_{\rm dm}(\omega,r)+\Phi_{\rm slm }(\omega,r). \tag{14}\]
In the limit of infinitesimal pixels, \(p\to 0\) and \(\phi_{D}^{\rm dm\text{-}{\rm slm}}(\omega,r)\rightarrow\phi_{D}^{\rm ech}( \omega,r)\). Note that Eq. (13) was discretized into radial zones; for Cartesian zones, one can instead use \(\tau_{D}(x_{p},y_{p})\).
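To make the mapping from a target delay \(\tau_{D}(r)\) to the optic phases concrete, the sketch below evaluates Eqs. (11)-(14) for an assumed quadratic delay profile and verifies that the imprinted radial group delay, obtained by differentiating each phase with respect to frequency, tracks \(\tau_{D}(r)\) to within roughly one optical cycle. The delay profile, grid, and pixel size are assumptions made for this example only; they are not the parameters of the designs in Fig. 3.

```python
import numpy as np

# Sketch: evaluate the echelon phase, Eq. (11), and the DM-SLM phase, Eqs. (12)-(14),
# for an assumed radial delay tau_D(r), then recover the imprinted group delay
# d(phi)/d(omega) by a finite difference. All profile and grid choices are illustrative.
c = 2.998e8                      # speed of light [m/s]
lam0 = 920e-9                    # central wavelength [m]
w0 = 2.0 * np.pi * c / lam0      # central angular frequency [rad/s]
R = 0.05                         # aperture radius [m]
p = 50e-6                        # SLM pixel size [m]

def tau_D(r):
    # Illustrative delay profile: zero on axis, 0.5 ps at the aperture edge.
    return 0.5e-12 * (r / R) ** 2

def phi_echelon(w, r):
    x = c * tau_D(r) / lam0
    return -(2.0 * w / c) * 0.25 * lam0 * (np.ceil(x) + np.floor(x))           # Eq. (11)

def phi_dm_slm(w, r):
    r_p = 0.5 * (np.floor(r / p) + np.ceil(r / p)) * p                         # pixel-center radius
    return w * tau_D(r) - (w / c) * lam0 * np.mod(c * tau_D(r_p) / lam0, 1.0)  # Eqs. (12)-(14)

r = np.linspace(0.0, R, 4001)
dw = 1.0e-4 * w0
for label, phi in (("echelon", phi_echelon), ("DM-SLM ", phi_dm_slm)):
    group_delay = (phi(w0 + dw, r) - phi(w0 - dw, r)) / (2.0 * dw)
    # Up to the overall sign set by the time convention, the imprinted group delay
    # should follow tau_D(r) to within about one optical cycle (~3 fs at 920 nm).
    err = np.max(np.abs(np.abs(group_delay) - tau_D(r)))
    print(f"{label}: max | |d(phi)/d(omega)| - tau_D | = {err * 1e15:.1f} fs")
```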
Figures 3 and 4 illustrate how these optics modify the electric field profile of a laser pulse in the near field to produce a constant-velocity focus. Figure 3(a) shows the \(\tau_{D}(r)\) required for subluminal (\(v_{f}<c\)), luminal (\(v_{f}=c\)), and superluminal (\(v_{f}>c\)) focal velocities when using the axiparabola described in Fig. 2. Because the axiparabola naturally produces a superluminal and accelerating focus, the subluminal (superluminal) velocity requires a larger (smaller) delay than the luminal velocity at larger radii. The echelon and DM-SLM designs for \(v_{f}=c\) are displayed in Figs. 3(b) and (c). In this configuration, the incident laser pulse propagates from right to left, so that the center of the pulse encounters the optics first. Figure 4 shows the effect that each optic
Figure 4: Modification to the electric field in the near field for \(v_{f}=c\). (a) The input field has flat phase fronts, a flat pulse front, and an ultrashort duration (\(\sim 27\) fs). (b) The echelon imparts the desired radial group delay to the pulse while maintaining the flat phase fronts. (c) The DM imparts the desired radial group delay to the pulse. However, as shown in the inset, the phase fronts are now curved with respect to the propagation direction. (d) The SLM corrects the undesired phase front curvature. The inset shows that the phase fronts are now globally flat, but retain a residual tilt within each pixel. Each inset is a \(500~{}\mu\)m \(\times~{}15\) fs window, and the SLM had a \(p=50~{}\mu\)m pixel size. The pulse propagates from left to right.
has on the electric field profile. After the echelon [Fig. 4(b)], the field has flat phase fronts and a radially dependent delay consistent with \(\tau_{D}(r)\). After the DM [Fig. 4(c)], the field has the correct delay, but also has curved phase fronts. The SLM undoes this curvature [Fig. 4(d)]. The combined DM-SLM system reproduces the field profile created by the echelon to within the resolution limits of the SLM.
A DM-SLM pair with sufficiently small pixels can create a flying focus that is virtually indistinguishable from a flying focus created by an echelon [Fig. 5]. While an echelon flattens the phase fronts globally and locally, an SLM can only flatten the phase fronts globally. Within each pixel, the phase fronts remain curved [Fig. 4(d) inset]. As a result, the constraint \(\partial_{r}\phi_{D}^{\text{dm-slm}}(\omega,r)|_{\omega=\omega_{0}}=0\) is only approximately satisfied. When the SLM pixel size is too large, the local curvature of the phase fronts affects the structure of the flying focus pulse in the far field. The inequality \(\max(\partial_{r}\phi_{D}^{\text{dm-slm}})p\ll 1\) provides a rough condition for the SLM pixel size required to reproduce the flying focus created with an echelon. Failing to meet this condition in the near field results in a decreased intensity at corresponding locations in the far field [cf. Figs. 5(b) and (c)]. As the pixel size is reduced, the intensity profile converges to the profile produced using an echelon [cf. Figs. 5(a) and (d)].
## 4 Examples of ultrashort flying focus trajectories
This section presents examples that demonstrate the flexibility and far-field properties of the ultrafast flying focus. The examples, i.e., constant-velocity, accelerating, and oscillating focal trajectories, are motivated by applications in plasma physics and nonlinear optics. The propagation of pulses that exhibit these trajectories was simulated in the near and far fields using a combination of the Fresnel diffraction integral and the modified paraxial wave equation (see Appendix B for details) [15, 26]. In all cases, an axiparabola with \(f_{0}=50\) cm, \(R=5\) cm, and \(L=1\) cm, a deformable mirror with a 5 cm radius, and a spatial light modulator with a pixel size of \(p=50\)\(\mu\)m were used to extend the focal range and structure the RGD. The parameters were chosen based on the capabilities of current technology.
### Constant-velocity focal trajectories
A constant-velocity flying focus can enhance applications that rely on velocity matching over long distances, such as laser wakefield acceleration [3, 9, 10, 27], THz generation [12], and photon acceleration [28, 29]. Figure 6 shows the on-axis intensity for the (a) superluminal, (b) luminal, and (c) subluminal velocities described in Fig. 3. In each case, the intensity peak travels along
Figure 5: The maximum on-axis intensity of flying focus pulses with \(v_{f}=c\) created using (a) an echelon or a DM-SLM pair with an SLM pixel size of (b) \(p=200\)\(\mu\)m, (c) \(p=100\)\(\mu\)m, and (d) \(p=50\)\(\mu\)m.
the designed constant-velocity trajectory. The images also reveal that the combination of the DM-SLM and axiparabola produce features similar to those of the axiparabola alone. Namely, the on-axis intensity is modulated, and the ultrashort pulse duration is preserved over the entire focal region [cf. Fig. 2].
### Exotic focal trajectories
An accelerating focus can be used to control the trapping and acceleration of electrons in a laser wakefield accelerator. Initializing the intensity peak, and therefore the wakefield, with a subluminal velocity would facilitate the trapping of background plasma electrons in the plasma wave [3, 30]. After sufficient trapping has occurred, the intensity peak can be accelerated to a luminal or superluminal velocity. This change in velocity has the dual benefit of preventing electrons from outrunning the accelerating phase of the wakefield, i.e., dephasing, and of improving the quality of the electron bunch by eliminating unwanted trapping [31].
Figure 7 illustrates an ultrafast flying focus that accelerates from an initial subluminal velocity to a superluminal velocity over the focal range. The design trajectory was specified as
\[v_{f}(t)=v_{0}+\Delta v\left(\frac{ct-f_{0}}{L}\right), \tag{15}\]
with an initial velocity \(v_{0}=0.99c\) and a velocity increment \(\Delta v=0.02c\). Over the first half of the focal range, the on-axis intensity falls back in a frame moving at the vacuum speed of light [Fig. 7(a)]. At the half-way point the velocity has increased to \(c\), and thereafter the intensity peak advances in the speed of light frame. Interestingly, the radial group delay required for this trajectory [Figs. 7(b) and (c)] smooths the intensity modulations that were observed with both the axiparabola alone and with the DM-SLM constant-velocity trajectories [cf. Figs. 2 and 6].
A pulse with an oscillating focal point could provide a novel method for quasi-phase-matching nonlinear optical processes, a wiggler for generating radiation from relativistic electrons, or an additional degree of freedom for accessing new parametric resonances in direct laser acceleration [32]. An example of such a focus is shown in Fig. 8. In this case, the design focal
Figure 6: Ultrafast flying foci with constant velocities. The maximum on-axis intensity of the pulse as a function of distance from the nominal focal point \(z=f_{0}\) for (a) \(v_{f}=1.001c\), (b) \(v_{f}=c\), and (c) \(v_{f}=0.999c\).
trajectory was specified as
\[v_{f}(t)=v_{0}+\Delta v\sin\left(\frac{2\pi N(ct-f_{0})}{L}\right), \tag{16}\]
with a nominal velocity \(v_{0}=c\), an oscillation magnitude \(\Delta v=0.002c\), and \(N=3\) periods. As shown in Fig. 8(a), the on-axis intensity peak oscillates between the expected velocities. While the pulse maintains its ultrashort duration, the maximum value of the intensity exhibits modulations, as it did in the case of the axiparabola alone. In general, the oscillation period of the velocity should be much greater than the Rayleigh range of the full-aperture focal spot, so that the intensity modulations do not obscure the velocity oscillations, i.e., \(N\ll\pi R^{2}L/\lambda_{0}f_{0}^{2}\).
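As a quick check of this criterion for the parameters used here (\(f_{0}=50\) cm, \(R=5\) cm, \(L=1\) cm, \(\lambda_{0}=920\) nm), the few lines below evaluate the bound; this is our own back-of-the-envelope arithmetic, not an output of the propagation simulations.

```python
import numpy as np

# Arithmetic check of N << pi R^2 L / (lambda_0 f_0^2) for the quoted optic parameters.
lam0, f0, R, L, N = 920e-9, 0.50, 0.05, 0.01, 3
Z_R = lam0 * f0**2 / (np.pi * R**2)              # Rayleigh range of the full-aperture spot
N_max = np.pi * R**2 * L / (lam0 * f0**2)        # = L / Z_R
print(f"Z_R ~ {Z_R * 1e6:.0f} um, bound on N ~ {N_max:.0f}, chosen N = {N}")
```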
## 5 Conclusions and outlook
This work has described a method for structuring ultrashort laser pulses with dynamic focal points. The moving focal point, or "flying focus," can follow a near-arbitrary trajectory over distances much greater than a Rayleigh range, while maintaining an ultrashort duration. The method employs separate optics to extend the focal range and structure the radial group delay (RGD). This overcomes a disadvantage of previous flying focus techniques, which place a lower bound on the duration of the moving intensity peak. Two specific optical configurations were considered: an axiparabola, which uses geometric aberration to extend the focal range, combined with either an echelon or a deformable mirror-spatial light modulator (DM-SLM) pair to structure the RGD. While an echelon can apply the exact RGD required for a particular focal trajectory, it is a static optic that cannot be modified on a shot-to-shot basis. The DM-SLM pair, on the other hand, has constraints imposed by the resolution of the SLM, but allows for dynamic programmability and optimization of the focal trajectory. This capability could enable rapid exploration of exotic flying foci that benefit laser-based applications in plasma physics and nonlinear optics.
Figure 7: An ultrafast flying focus that accelerates from an initial subluminal velocity to a superluminal velocity over the focal range. (a) The maximum on-axis intensity of the pulse as a function of distance from the nominal focal point \(z=f_{0}\). (b) The radial group delay, i.e., the \(\tau_{D}(r)\), required to produce this trajectory. (c) The corresponding deformable mirror sag function (green) and spatial light modulator phase (black). The pulse propagates from right to left.
## Appendix A Focal trajectory produced by an extended focal range optic
Consider a laser pulse with an initially flat phase front and flat pulse front propagating in the negative \(\hat{\mathbf{z}}\)-direction. Assuming cylindrical symmetry, the rays composing the phase and pulse front can be identified by their radial distance \(r=(x^{2}+y^{2})^{1/2}\) from the propagation axis and their frequency \(\omega\). The rays travel parallel to the axis and are incident on a reflective optic defined by the sag function \(s_{f}(r)\). At the point of reflection, each ray acquires a transverse wavenumber \(k_{r}(\omega,r)=(\omega/c)\sin[2\theta(r)]\), where \(\theta(r)=\arccos[\hat{\mathbf{z}}\cdot\hat{\mathbf{n}}(r)]\) defines the angle between the \(+\hat{\mathbf{z}}\)-direction and the normal vector to the surface of the optic \(\hat{\mathbf{n}}(r)=[D(r)\hat{\mathbf{r}}-\hat{\mathbf{z}}]/\sqrt{1+D^{2}(r)}\) with \(D(r)\equiv ds_{f}/dr\). After some algebra, one finds
\[k_{r}(\omega,r)=-\frac{2\omega}{c}\frac{D(r)}{1+D^{2}(r)}. \tag{17}\]
The perpendicular wavenumber is simply the radial derivative of the phase, such that
\[\phi_{f}(\omega,r)=-\frac{2\omega}{c}\int\,\frac{D(r)}{1+D^{2}(r)}dr. \tag{18}\]
In the paraxial approximation, Eq. (18) simplifies to \(\phi_{f}(\omega,r)=-2\omega s_{f}(r)/c\), which is the first term on the right-hand side of Eq. (1).
The trajectory of the rays as they travel to the far field can be found by integrating the ray equations \(\dot{\mathbf{x}}^{\prime}=c^{2}\mathbf{k}/\omega\), where the overdot denotes a total time derivative and the prime denotes the instantaneous location of the ray. The radial and longitudinal locations of the rays evolve according to
\[r^{\prime}(t) =r+\frac{ck_{r}(\omega,r)}{\omega}[ct+s_{f}(r)] \tag{19}\] \[z^{\prime}(t) =s_{f}(r)+\frac{ck_{z}(\omega,r)}{\omega}[ct+s_{f}(r)], \tag{20}\]
where \(ct\geq-s_{f}(r)\), \(t=0\) corresponds to the time at which the ray with \(r=0\) reflects from the optic, and \(k_{z}(\omega,r)=[\omega^{2}/c^{2}-k_{r}^{2}(\omega,r)]^{1/2}\). The focal time \(t_{f}(r)\) and location \(f(r)\) of each ray
Figure 8: An ultrafast flying focus that oscillates between subluminal and superluminal velocities. (a) The maximum on-axis intensity of the pulse as a function of distance from the nominal focal point \(z=f_{0}\). (b) The radial group delay, i.e., the \(\tau_{D}(r)\), required to produce this trajectory. (c) The corresponding deformable mirror sag function (green) and spatial light modulator phase (black). The pulse propagates from right to left.
are defined as the values of \(t\) and \(z^{\prime}\) where \(r^{\prime}=0\). Solving for the value of \(t\) where Eq. (19) equals zero and using this in Eq. (20) yields
\[ct_{f}(r)=-s_{f}(r)+\frac{1+D^{2}(r)}{2D(r)}r \tag{21}\]
\[f(r)=s_{f}(r)+\frac{1-D^{2}(r)}{2D(r)}r, \tag{22}\]
where Eq. (17) has been used. The focal time and location are both independent of frequency.
The focal location depends implicitly on the focal time through their shared dependence on \(r\). This dependence results in a focal point that moves in time. The velocity of the focal point \(\tilde{v}_{f}(r)\) is given by
\[\frac{\tilde{v}_{f}(r)}{c}=\frac{df}{dr}\left(\frac{dct_{f}}{dr}\right)^{-1}= \frac{1+D^{2}(r)}{1-D^{2}(r)}, \tag{23}\]
which is constrained by the focal geometry \(D(r)\) and is always superluminal (\(D^{2}\) is positive definite).
When each ray is delayed by a time \(\tau_{D}(r)\) before reflecting from the optic, the focal time \(t_{f}(r)\to t_{f}(r)+\tau_{D}(r)\), and Eq. (23) can be rewritten as a differential equation for the delay needed to produce a specified focal trajectory \(v_{f}(t)\):
\[\frac{d\tau_{D}}{dr}=\frac{1}{c}\left[\frac{c}{v_{f}\big{(}t_{f}(r)\big{)}}-\left(\frac{1-D^{2}(r)}{1+D^{2}(r)}\right)\right]\frac{df}{dr}, \tag{24}\]
where \(v_{f}\big{(}t_{f}(r)\big{)}=\tilde{v}_{f}(r)\). The paraxial limits of these equations are presented in the main text for simplicity.
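To illustrate how Eqs. (21)-(24) are applied, the sketch below computes the focal trajectory for an axiparabola-like sag and integrates the delay equation for a constant target velocity \(v_{f}=c\). The sag is the approximate parabolic-plus-spherical form implied later by Eq. (28), and the grid and target velocity are example choices rather than the exact configuration simulated in Appendix B.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Sketch: focal trajectory of an extended-focal-range optic from Eqs. (21)-(23) and the
# delay tau_D(r) that enforces a constant focal velocity via Eq. (24). The sag below is
# the approximate axiparabola form implied by Eq. (28); all numbers are example choices.
c = 2.998e8
f0, R, L = 0.50, 0.05, 0.01                      # nominal focal length, radius, focal range [m]
v_target = 1.0 * c                               # desired constant focal velocity

r = np.linspace(1e-4, R, 20000)
s_f = r**2 / (4 * f0) * (1 - L * r**2 / (2 * f0 * R**2))   # sag consistent with Eq. (28)
D = np.gradient(s_f, r)                                    # D(r) = ds_f/dr

ct_f = -s_f + (1 + D**2) / (2 * D) * r           # Eq. (21): focal time of the ray from r
f_r = s_f + (1 - D**2) / (2 * D) * r             # Eq. (22): focal position of that ray
v_nat = c * (1 + D**2) / (1 - D**2)              # Eq. (23): focal velocity without a delay

df_dr = np.gradient(f_r, r)
dtau_dr = (c / v_target - (1 - D**2) / (1 + D**2)) * df_dr / c   # Eq. (24)
tau_D = cumulative_trapezoid(dtau_dr, r, initial=0.0)

print(f"focal region: {f_r[0]:.3f} m to {f_r[-1]:.3f} m")
print(f"undelayed focal velocity at the aperture edge: {v_nat[-1] / c:.4f} c")
print(f"edge delay required for v_f = c: {tau_D[-1] * 1e15:.0f} fs")
```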
## Appendix B Simulation details
The evolution of the flying focus pulse was simulated in two steps. The first step used the frequency-domain Fresnel integral to propagate the laser pulse from the flying focus optical configuration to the far field. The second step used the modified paraxial wave equation to propagate the pulse through the far field [15, 26]. The results shown in the figures were obtained from this second step.
To solve for the evolution of the flying focus pulse, the transverse electric field was written as a carrier modulating an envelope: \(\mathrm{E}(\xi,r,z)=\frac{1}{2}e^{-i\omega_{0}\xi}E(\xi,r,z)+\mathrm{c.c.}\), where \(\xi=t-z/c\) is the moving frame coordinate. The carrier frequency \(\omega_{0}\) was chosen so that the central wavelength \(\lambda_{0}=2\pi c/\omega_{0}=920\) nm. The envelope \(E\) was initialized just before the optical configuration in the frequency domain with the profile
\[\tilde{E}_{0}(\delta\omega,r)=\tilde{E}_{i}\Theta(R-r)\exp{\left(-\frac{1}{4}\tau^{2}\delta\omega^{2}\right)}, \tag{25}\]
where \(\sim\) denotes a frequency domain field, \(\delta\omega=\omega-\omega_{0}\), \(\Theta\) is the Heaviside function, \(\tilde{E}_{i}\) is the initial amplitude, \(R=5\) cm, and \(\tau=23\) fs, corresponding to a full width at half maximum duration and bandwidth of \(27\) fs and \(\Delta\lambda=78\) nm, respectively.
The phase imparted by the optical configuration, i.e., an axiparabola combined with either an echelon or a deformable mirror-spatial light modulator pair, was applied to the initial envelope. Just after the optical configuration at \(z=0\), the envelope can be expressed as \(\tilde{E}_{0}(\delta\omega,r)e^{i\phi(\omega,r)}\), where \(\phi(\omega,r)\) is the phase applied by the optical configuration [Eq. (1)]. The envelope was propagated in vacuum from \(z=0\) to the far-field location \(z=z_{i}\) using the frequency-domain Fresnel integral:
\[\tilde{E}(\delta\omega,r,z=z_{i})=\frac{\omega}{icz_{i}}\int J_{0}\left(\frac {\omega rr^{\prime}}{cz_{i}}\right)\exp{\left[\frac{i\omega(r^{2}+r^{\prime 2})}{ 2cz_{i}}+i\phi(\omega,r^{\prime})\right]}\tilde{E}_{0}(\delta\omega,r^{\prime}) r^{\prime}dr^{\prime}, \tag{26}\]
where \(J_{0}\) is the zeroth-order Bessel function of the first kind. The electric field from the Fresnel integral \(\tilde{E}(\omega,r,z=z_{i})\) provided the initial condition for the modified paraxial wave equation [26]:
\[[2(i\omega_{0}-\partial_{\xi})\partial_{z}+c\nabla_{\perp}^{2}]E(r,z,\xi)=0. \tag{27}\]
The mixed space-time derivative in Eq. (27) ensures that effects such as radial group delay and angular dispersion are modeled correctly--a requirement for accurately modeling an ultrafast flying focus. Note that Eqs. (26) and (27) are fully consistent with one another: Eq. (26) is the integral solution to Eq. (27). The use of the Fresnel integral decouples the radial grids in the near field and far field, reducing computational expense compared to using Eq. (27) over the entire domain, especially when considering smaller \(f/\#\)'s [15].
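For reference, a bare-bones numerical version of the first propagation step, Eq. (26), is sketched below. The trapezoid quadrature, grid resolution, and the use of the approximate axiparabola phase of Eq. (28) are simplifications made for illustration; they do not reproduce the full solver described in this appendix.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import trapezoid

# Sketch of Eq. (26): propagate one frequency component of the near-field envelope to
# the far field with a Hankel-type Fresnel integral (simple trapezoid quadrature).
c = 2.998e8

def fresnel_step(E0, phi, omega, rp, r_out, z):
    """E0, phi: near-field envelope and optic phase sampled on the radii rp [m]."""
    kernel = j0(np.outer(r_out, rp) * omega / (c * z))
    phase = omega * np.add.outer(r_out**2, rp**2) / (2 * c * z) + phi
    integrand = kernel * np.exp(1j * phase) * E0 * rp
    return (omega / (1j * c * z)) * trapezoid(integrand, rp, axis=1)

# Example: central frequency focused by the approximate axiparabola phase of Eq. (28).
lam0 = 920e-9
w = 2 * np.pi * c / lam0
f0, R, L = 0.50, 0.05, 0.01
rp = np.linspace(0.0, R, 20001)
phi_axi = -w * rp**2 / (2 * c * f0) * (1 - L * rp**2 / (2 * f0 * R**2))
for z in (f0, f0 + L / 2, f0 + L):
    E_axis = fresnel_step(np.ones_like(rp), phi_axi, w, rp, np.array([0.0]), z)
    print(f"z = {z:.3f} m : |E_on-axis| = {abs(E_axis[0]):.3e} (arb. units)")
```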
The simulation parameters were motivated by the MTW-OPAL laser system at the Laboratory for Laser Energetics [33], where future ultrafast flying focus experiments are being planned. The longitudinal step size \(\Delta z=2.83\)\(\mu\)m, temporal resolution \(\Delta\xi=0.74\) fs, and radial resolution \(\Delta r=0.60\)\(\mu\)m, were chosen to resolve the Rayleigh range, transform-limited pulse duration, and spot size, respectively.
## Appendix C On-axis intensity modulation from an axiparabola
The Fresnel diffraction integral can be used to derive an approximate expression for the far-field, on-axis intensity profile of a laser pulse focused by an axiparabola. The expression reveals that the on-axis intensity modulations result from the spherical aberration imparted by the axiparabola and provides a condition for mitigating these modulations. The derivation begins by substituting Eq. (25) into Eq. (26) and approximating the axiparabola phase as
\[\phi(\omega,r^{\prime})=-\frac{\omega{r^{\prime}}^{2}}{2cf_{0}}\left(1-\frac{ L}{2f_{0}}\frac{{r^{\prime}}^{2}}{R^{2}}\right), \tag{28}\]
which includes the parabolic and spherical contributions and is accurate to second order in \(L/f_{0}\). Evaluating Eq. (26) on-axis, i.e., at \(r=0\), provides
\[\tilde{E}(\delta\omega,0,z)=\frac{\omega}{icz}\int_{0}^{R}\exp\left[\frac{i \omega{r^{\prime}}^{2}}{2c}\left(\frac{1}{z}-\frac{1}{f_{0}}\right)+\frac{i \omega{Lr^{\prime}}^{4}}{4cf_{0}^{2}R^{2}}\right]\tilde{E}_{0}(\delta\omega){ r^{\prime}}{dr^{\prime}}, \tag{29}\]
where \(\tilde{E}_{0}(\delta\omega)=\tilde{E}_{i}\exp\left(-\frac{1}{4}\tau^{2}\delta \omega^{2}\right)\). Upon integrating, one finds
\[\frac{|\tilde{E}(\delta\omega,0,z)|^{2}}{|\tilde{E}_{0}(\delta\omega)|^{2}} \approx\frac{\pi\omega R^{2}}{4cL}\left|\text{erfi}\left[\left(\frac{i\omega R ^{2}}{4cLf_{0}^{2}}\right)^{1/2}(f_{0}-z)\right]-\text{erfi}\left[\left(\frac {i\omega R^{2}}{4cLf_{0}^{2}}\right)^{1/2}(f_{0}+L-z)\right]\right|^{2}, \tag{30}\]
where erfi is the imaginary error function and \(z\approx f_{0}\) has been assumed. Equation (30) oscillates with a period that varies throughout the focal region. The scale length apparent in Eq. (30) provides a rough estimate for the modulation period: \(L_{M}\sim(4Lf_{0}^{2}\lambda_{0}/R^{2})^{1/2}\). The modulations can be mitigated when \(L\gg L_{M}\) or \(L\gg 4\pi Z_{R}\), where \(Z_{R}=\lambda_{0}f_{0}^{2}/\pi R^{2}\) is the Rayleigh range of the full-aperture focal spot.
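The scale of these modulations can be checked directly from the quoted optic parameters; the few lines below evaluate \(L_{M}\) and \(4\pi Z_{R}\) for the axiparabola of Fig. 2 (our own arithmetic, not a result of the simulations).

```python
import numpy as np

# Arithmetic for the modulation scale implied by Eq. (30) and the Rayleigh range of the
# full-aperture focal spot, using the axiparabola parameters of Fig. 2.
lam0, f0, R, L = 920e-9, 0.50, 0.05, 0.01
L_M = np.sqrt(4 * L * f0**2 * lam0 / R**2)
Z_R = lam0 * f0**2 / (np.pi * R**2)
print(f"L_M ~ {L_M * 1e3:.1f} mm, 4*pi*Z_R ~ {4 * np.pi * Z_R * 1e3:.2f} mm, L = {L * 1e3:.0f} mm")
```

For these parameters \(L_{M}\) is roughly 2 mm, so the \(L=1\) cm focal range exceeds the modulation scale by only a factor of a few, consistent with the residual modulations visible in Fig. 2(b).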
Funding.U.S. Department of Energy Office of Fusion Energy Award Number DE-SC00215057, U.S. Department of Energy National Nuclear Security Administration Award Number DE-NA0003856. The authors would like to thank D. Ramsey, J. Bromage, C. Dorrer, S.-W. Bahk, C. Jeon, B. Webb, and I. Begishev for productive discussions.
This material is based upon work supported by the Department of Energy Office of Fusion Energy under Award Number DE-SC00215057 and by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856.
2306.12546 | Highly depleted alkali metals in Jupiter's deep atmosphere | Water and ammonia vapors are known to be the major sources of spectral
absorption at pressure levels observed by the microwave radiometer (MWR) on
Juno. However, the brightness temperatures and limb darkening observed by the
MWR at its longest wavelength channel of 50 cm (600 MHz) in the first 9
perijove passes indicate the existence of an additional source of opacity in
the deep atmosphere of Jupiter (pressures beyond 100 bar). The absorption
properties of ammonia and water vapor, and their relative abundances in
Jupiter's atmosphere do not provide sufficient opacity in deep atmosphere to
explain the 600 MHz channel observation. Here we show that free electrons due
to the ionization of alkali metals, i.e. sodium, and potassium, with sub-solar
metallicity [M/H] (log based 10 relative concentration to solar) in the range
of [M/H] = -2 to [M/H] = -5 can provide the missing source of opacity in the
deep atmosphere. If the alkali metals are not the source of additional opacity
in the MWR data, then their metallicity at 1000 bars can only be even lower.
The upper bound of -2 on the metallicity of the alkali metals contrasts with
the other heavy elements -- C, N, S, Ar, Kr, and Xe -- which are all enriched
relative to their solar abundances having a metallicity of approximately +0.5. | Ananyo Bhattacharya, Cheng Li, Sushil K. Atreya, Paul G. Steffes, Steven M. Levin, Scott J. Bolton, Tristan Guillot, Pranika Gupta, Andrew P. Ingersoll, Jonathan I. Lunine, Glenn S. Orton, Fabiano A. Oyafuso, J. Hunter Waite, Amadeo Belloti, Michael H. Wong | 2023-06-21T20:20:24Z | http://arxiv.org/abs/2306.12546v1 | # Highly depleted alkali metals in Jupiter's deep Atmosphere
###### Abstract
Water and ammonia vapors are known to be the major sources of spectral absorption at pressure levels observed by the microwave radiometer (MWR) on Juno. However, the brightness temperatures and limb darkening observed by the MWR at its longest wavelength channel of 50 cm (600 MHz) in the first 9 perijove passes indicate the existence of an additional source of opacity in the deep atmosphere of Jupiter (pressures beyond 100 bar). The absorption properties of ammonia and water vapor, and their relative abundances in Jupiter's atmosphere do not provide sufficient opacity in the deep atmosphere to explain the 600 MHz channel observation. Here we show that free electrons due to the ionization of alkali metals, i.e. sodium and potassium, with sub-solar metallicity, [M/H] (log base 10 relative concentration to solar) in the range of [M/H] = -2 to [M/H] = -5 can provide the missing source of opacity in the deep atmosphere. If the alkali metals are not the source of additional opacity in the MWR data, then their metallicity at 1000 bars can only be even lower. This upper bound of -2 on the metallicity of the alkali metals contrasts with the other heavy elements - C, N, S, Ar, Kr, and Xe - which are all enriched relative to their solar abundances having a metallicity of approximately +0.5.
Solar System (1528) - Chemical abundances (224) - Jupiter (873) - Extrasolar gaseous giant planets (509)
## 1 Introduction
The alkali metals sodium and potassium have been previously detected in the atmospheres of hot Jupiters and a super-Neptune together with lithium [Chen et al. (2018)] in the latter. The detections show a large range of abundances from highly substellar to super-stellar values [Welbanks et al. (2019), Demory et al. (2011)]. Alkali metal abundances are important in understanding the formation of hot Jupiters and represent a bridge between the refractory and volatile elements, which in molecular form seed the growth of planets. Obtaining the abundance of alkali metals in Jupiter can potentially serve as a first constraint on the ratio of rocky to icy material in the interior of the solar system's largest planet when combined with the elemental and molecular abundances provided by the Galileo Probe Mass Spectrometer (GPMS) [Atreya et al. (1999), Wong et al. (2004), Atreya et al. (2019)] and Juno constraints on water [Li et al. (2020)]. Here we derive observationally based abundances of alkali metals in
Jupiter's atmosphere to determine whether they are enriched relative to solar like the other heavy elements or depleted.
To obtain these abundances requires knowing the deep structure of Jupiter's atmosphere. The shallower part of Jupiter's atmosphere has been previously investigated at microwave frequencies by the Very Large Array (VLA) telescope [de Pater and Dunn (2003), de Pater et al. (2019)]. VLA probes Jupiter at frequencies in the range of 74 MHz to 50 GHz [de Pater et al. (2019)]. However, confusion from Jupiter's powerful synchrotron radiation does not allow VLA to observe Jupiter's atmosphere below 5 GHz [de Pater and Dunn (2003)], limiting its reach to less than 5 bars and leaving the deep atmosphere of Jupiter inaccessible to ground-based microwave and radio frequency observatories. The orbit of Juno and the spin of the spacecraft allow observations at low frequencies, i.e. 0.6 GHz and 1.2 GHz, by keeping the energetic electron belts around Jupiter out of the field of view. Access to greater depths allows for the investigation of bulk elemental abundances of N and O in Jupiter [Janssen et al. (2017), Bolton et al. (2017), Steffes et al. (2017)].
The Microwave Radiometer (MWR) instrument onboard the Juno orbiter is a passive radiometer that is designed to measure the internal heat emitted by Jupiter's atmosphere at six different frequencies ranging from 0.6 GHz to 22 GHz [Janssen et al. (2017)]. The brightness temperature measured by MWR at these frequencies sounds different levels of Jupiter's atmosphere corresponding to pressures from 0.3 bar to 250 bar [Janssen et al. (2017)]. In addition, the highly inclined polar orbit and rotation of the Juno spacecraft aided in the high spatial resolution necessary for probing Jupiter's atmosphere at various latitudes [Bolton et al. (2017)].
Previous analysis of the MWR data at the 0.6 GHz channel found an unanticipated limb-darkening signal, which cannot be explained by nominal absorbers such as ammonia and water [Li et al. (2020)]. Based on investigation of thermodynamic models of Jupiter's deep atmosphere between 50 bar and 1 kbar [Fegley Jr and Lodders (1994), Weidenschilling and Lewis (1973)], we conjecture that the free electrons from thermally ionized alkali metals may provide the missing opacity. Alkali metals are expected to undergo condensation to form clouds in the deep atmosphere [Visscher et al. (2006), Morley et al. (2012)]. Na\({}_{2}\)S and KCl are the first chemical species to condense in the above pressure range and thereby act as a sink for atomic sodium and potassium [Fegley Jr and Lodders (1994)]. Furthermore, high-temperature environments cause alkali metals to undergo ionization due to their low ionization energies [Bagenal et al. (2007)]. Density and temperature govern the electron densities according to the Saha ionization equation (Eq. 2). Electrons generated from alkali metal ionization act as a source of absorption at microwave frequencies that could affect the brightness temperatures at the 0.6 GHz frequency channel. Therefore, the objective of this study is to determine the alkali metal abundance in the deep atmosphere of Jupiter.
To facilitate comparison of our results on alkali metals with those for extrasolar planets, we express the abundances of elements other than hydrogen and helium using astronomical terminology, e.g., metallicity. The metallicity ([M/H]) of an element is the logarithm of the ratio of its abundance in a system to the stellar (or solar, for the solar system) abundance. Generally, the metallicity of a star is defined in terms of the ratio of the number of Fe atoms to the number of hydrogen atoms. Here we define the metallicity in terms of the alkali metal abundance in Jupiter relative to that of the Sun, e.g., for potassium, _[K/H]_ = log\({}_{10}\)(_N\({}_{K}\)/N\({}_{H}\))\({}_{Jupiter}\) - log\({}_{10}\)(_N\({}_{K}\)/N\({}_{H}\))\({}_{Sun}\). For the giant planets, iron and silicon are not measurable, emphasizing the importance of proxy indicators such as the alkali metals along with the other elements measured by the Galileo probe.
## 2 Methods
Brightness temperatures from 9 perijoves, i.e. PJ 1, 3-9, and 12, are considered in this article. Variations in brightness temperature are observed across planetocentric latitudes from pole to pole in the 0.6 and 1.2 GHz channels. These variations can be attributed to sources in both the atmosphere and the space environment. The most important sources of the observed variability are (i) changes in atmospheric structure and composition, (ii) Jupiter's synchrotron radiation in the microwave band, and (iii) variation in the acceleration due to gravity arising from the non-spherical shape of Jupiter. The latter two sources, i.e. synchrotron emission and gravity, need to be taken into account for proper interpretation of the MWR observations, which aids in isolating the true variability in
Jupiter's deep atmosphere.
The contribution of Jupiter's gravity can be corrected by taking into account the non-spherical shape of the planet. Brightness temperatures are corrected using a gravity correction factor, defined as the ratio of the theoretical _T\({}_{b}\)_ at a given latitude, computed with the local acceleration due to gravity, to that at the equator. This transformation references the Juno observations at each latitude to equatorial gravity, effectively removing the variation in _T\({}_{b}\)_ caused by the change in Jupiter's gravity from the equator to the poles.
Energetic electrons in Jupiter's space environment produce synchrotron radiation [de Pater & Dunn (2003), Levin et al. (2001), Santos-Costa et al. (2017)]. The signature of this emission is observed in the MWR data across all perijoves and leads to anomalous changes in _T\({}_{b}\)_. Data at extremely high latitudes are polluted by synchrotron emission and are therefore not used for investigating Jupiter's deep atmosphere; we only consider the MWR data between -60 and 60 deg. latitude. The correction for synchrotron radiation and other sources of anomalous _T\({}_{b}\)_ is made by filtering the data at 0.6 and 1.2 GHz for each perijove. The filtering is carried out by sorting the deviations of _T\({}_{b}\)_ from the smallest value of _T\({}_{b}\)_ within a group of observations and removing values that exceed a filter cutoff temperature of the order of 2 K.
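A minimal sketch of this filtering step is given below; the grouping of points into latitude bins and the exact cutoff value are assumptions on our part, standing in for the actual grouping used in the pipeline.

```python
import numpy as np

# Sketch of the synchrotron filter: within each group of observations, subtract the
# minimum brightness temperature and discard points whose excess over that minimum
# exceeds a cutoff (~2 K). Bin width and cutoff are illustrative assumptions.
def filter_synchrotron(lat_deg, tb_K, bin_width_deg=1.0, cutoff_K=2.0):
    lat_deg = np.asarray(lat_deg)
    tb_K = np.asarray(tb_K)
    keep = np.zeros(lat_deg.size, dtype=bool)
    bins = np.floor(lat_deg / bin_width_deg)
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        keep[idx] = (tb_K[idx] - tb_K[idx].min()) <= cutoff_K
    return keep

# usage: mask = filter_synchrotron(latitudes, tb_gravity_corrected)
#        tb_clean = tb_gravity_corrected[mask]
```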
## 3 Results
### Sources of Microwave Opacity
The weighting function of Jupiter's atmospheric absorption and emission at a given microwave frequency determines the contribution of each region of the atmosphere to the brightness temperature observed at that frequency. The peak of the weighting function gives the range of pressure levels probed by the measurement. The weighting function depends on the microwave opacity of the atmosphere, and the brightness temperature follows from weighting the physical temperature profile with it (Eq. 1). Here, _T\({}_{b}\)_ is the brightness temperature, _W(p)_ is the weighting function as a function of pressure, and _T(p)_ is the physical temperature profile of the atmosphere.
\[T_{b}=\int_{-\infty}^{\infty}W(p)\,T(p)\,d\ln p \tag{1}\]
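As a concrete illustration of Eq. (1), the short sketch below evaluates the integral numerically; the weighting function and temperature profile used here are toy placeholders chosen only so the example runs, not the profiles from our forward model.

```python
import numpy as np
from scipy.integrate import trapezoid

# Sketch of Eq. (1): weight the physical temperature profile T(p) by W(p) and
# integrate over ln(p). The profiles below are toy placeholders, not model output.
def brightness_temperature(p_bar, W, T_K):
    lnp = np.log(p_bar)
    W = W / trapezoid(W, lnp)                    # normalize the weighting function
    return trapezoid(W * T_K, lnp)

p = np.logspace(-1, 3.5, 500)                    # 0.1 bar to ~3 kbar
T = 166.1 * p ** 0.29                            # rough adiabat-like placeholder, T = 166.1 K at 1 bar
W = np.exp(-0.5 * np.log(p / 100.0) ** 2)        # toy weighting function peaked near 100 bar
print(f"T_b ~ {brightness_temperature(p, W, T):.0f} K (toy inputs)")
```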
Fig. 1 shows the relative weighting functions, i.e. the weighting function divided by its maximum value, at 0.6 GHz and 1.2 GHz with and without alkali metals. In the absence of alkali metals, the relative weighting functions peak at 100 bar and 30 bar, respectively [Janssen et al. (2017)]. At 0.6 GHz, the relative weighting function extends to the deeper atmosphere below the 100 bar level, and therefore the _T\({}_{b}\)_ derived using this channel is sensitive to the sources of microwave opacity present in the deep atmosphere at \(p\) greater than 100 bar. The relative weighting function at the 0.6 GHz channel shows a broad shape with a second maximum at kbar pressure levels, which is attributed to the increase in the mass absorption coefficient of water vapor with pressure. The mass absorption coefficient of ammonia decreases after a maximum near 1000 bar, and eventually water vapor dominates the opacity in the deep atmosphere. Moreover, the inclusion of free electrons as sources of opacity due to alkali metal ionization causes a decrease in the value of the relative weighting function at 0.6 GHz around 100 bar, and a global maximum in the relative weighting function emerges at \(\sim\) 1 kbar pressure (magenta line). The shift of the global maximum can be attributed to the increase in opacity from free electrons with pressure, as the ionization fraction of alkali metals increases with temperature under thermal equilibrium conditions [Saha (1920)] (described later in this section). Inclusion of lower amounts of alkali metals ([M/H] = -5) leads to a peak at deeper levels (Fig. 1). However, as the metallicity is increased to solar, the maximum drifts toward lower pressures, around the 1 kbar level. This can be attributed to the fact that a higher abundance of alkali metals can produce a larger number of electrons at relatively lower pressures (magenta line), whereas a low abundance of alkali metals in Jupiter would need to reach higher pressure (\(>\) 1 kbar) to produce equivalent opacity (blue line). Thus the abundance of alkali metals directly affects the shape of the weighting function.
The main sources of microwave opacity at 0.6 GHz and 1.2 GHz are ammonia, water vapor, free electrons, and collision-induced absorption by hydrogen and helium. Hydrogen-hydrogen and hydrogen-helium collisions are the dominant sources of collision-induced absorption processes in Jupiter. Their magnitude is well constrained due to the invariance of hydrogen and helium abundances in Jupiter's deep atmosphere. The microwave absorption behavior of
water and ammonia vapor has been investigated by laboratory experiments that show the pressure and temperature dependence of mass absorption coefficients (Devaraj et al. (2014), Karpowicz & Steffes (2011), Bellotti et al. (2016)). In addition, hydrogen, methane, and water vapor contribute to line broadening in the ammonia vapor absorption. The models based on laboratory experiments show significant divergent behavior when extrapolated to pressures greater than 50 bar and 550 K (Bellotti et al. (2016)). In order to obtain a robust estimate of the range of absorption coefficients at higher temperatures, we test a grid model describing a power scaling relationship with temperature based on the Hanley et al. (2009) model of ammonia absorption. For water vapor absorption at microwave frequencies, the laboratory models show divergence by orders of magnitude. However, recent laboratory measurements (Steffes et al. (2023)) at high pressure show that water vapor absorption can be explained by the Bellotti et al. (2016) model. Therefore, Bellotti et al. (2016) model is chosen to compute the water vapor opacity which incorporates water opacity measurements at high temperatures above 500 K.
Free electrons in the atmosphere can act as a source of opacity at microwave wavelengths through the process of free-free absorption in which electrons absorb photons during collisions with other ions and electrons. Electrons can be generated by the ionization of various elemental and molecular species in the atmosphere. Due to their low ionization energies, alkali metals i.e. Na, K are expected to be the major sources of free electrons in the atmosphere (Heays et al. (2017)). In Jupiter's atmosphere, the pressure and temperatures corresponding to the transition between the alkali metals and their compounds are calculated using an equilibrium cloud condensation model (ECCM) (Atreya
Figure 1: Relative weighting functions at 0.6 GHz (black) and 1.2 GHz (gray) for a Jupiter adiabat considering the Hanley model (Hanley et al. (2009)) for NH\({}_{3}\) absorption. The functions peak at 100 bar and 30 bar at 0.6 GHz and 1.2 GHz respectively without the inclusion of alkali metals. The inclusion of alkali metals (orange, magenta and blue) decreases the relative weighting function at \(\sim\) 100 bar and produces a second peak that is observed at \(\sim\) 1 kbar pressure due to the opacity contributed by free electrons from alkali metal ionization. As the metallicity of alkali metals increase, the global maximum of weighting function shifts towards lower pressure.
et al. (1999), Weidenschilling & Lewis (1973)] for Jupiter's adiabat with saturation vapor pressures of Na\({}_{2}\)S and KCl [Visscher et al. (2006), Morley et al. (2012)]. The condensation of alkali metals at solar abundance [Figure 2] takes place at 352 bar for KCl and 796 bar for Na\({}_{2}\)S, with corresponding temperatures of 967 K and 1234 K, respectively, assuming thermodynamic equilibrium. The condensation of Na\({}_{2}\)S at deeper levels, and a higher solar abundance of Na compared to K [Asplund et al. (2009)] will cause Na\({}_{2}\)S clouds to be significantly more massive than KCl clouds. Thermochemical equilibrium models indicate formation of metal hydrides and hydroxides in gas phase, however they are much lower in abundance [Fegley Jr & Lodders (1994)] as compared to the condensates, thereby they will not act as the primary sink of alkali metals in Jupiter. Condensation of the alkali metal compounds occurs when the partial pressure of a compound exceeds its saturation vapor pressure. If condensation occurs, it causes depletion in the alkali metal abundances at altitudes above the condensation level.
At high pressures, 100 bar and beyond, alkali metals undergo ionization to form a cold plasma, and the electrons generated in the process act as an additional source of opacity at microwave frequencies. The number density of free electrons due to the ionization of alkali metal atoms in the gas phase is calculated using the Saha ionization equation [Saha (1920)] (Eq. 2), assuming Jupiter's atmosphere to be in a state of thermal equilibrium. The ionization equation itself assumes a single-component gas-phase system; we therefore add the electron densities from the ionization of sodium and potassium to determine the total number density of free electrons. Here, _N\({}_{e}\)_ is the electron density, \(N\) is the number density of the ionizing species, \(\epsilon\) is the ionization energy, \(\lambda\) is the thermal de Broglie wavelength, _g\({}_{0}\)_ and _g\({}_{1}\)_ are the statistical weights, _k\({}_{B}\)_ is the Boltzmann
Figure 2: Condensation curves of NH\({}_{3}\), H\({}_{2}\)O, H\({}_{2}\)S and alkali metals Na\({}_{2}\)S and KCl at 1X solar abundance. Our calculations are based on the equilibrium cloud condensation model [Atreya et al. (1999)], and saturation vapor pressure corresponding to Na\({}_{2}\)S and KCl [Visscher et al. (2006), Morley et al. (2012)]. The cloud bases are at the levels where the condensation curves cross the adiabat considering T\({}_{1bar}\) = 166.1 K [Seiff et al. (1998)].
constant, _m\({}_{e}\)_ is mass of the electron and \(h\) is Planck's constant.
\[\frac{N_{e}^{2}}{N-N_{e}}=\frac{2}{\lambda^{3}}\frac{g_{1}}{g_{0}}e^{-\epsilon/k _{B}T} \tag{2}\]
\[\lambda=\sqrt{\frac{h^{2}}{2\pi m_{e}k_{B}T}} \tag{3}\]
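For orientation, the sketch below solves Eqs. (2) and (3) for a single alkali species at a few metallicities. The temperature, total number density, and statistical-weight ratio are illustrative assumptions; the solar mixing ratios are those quoted later in the text [Asplund et al. (2009)].

```python
import numpy as np

# Sketch: free-electron density from the Saha equation, Eqs. (2)-(3), for one alkali
# species. T, the total number density, and g1/g0 = 1 are illustrative assumptions.
kB, h, me, eV = 1.380649e-23, 6.62607015e-34, 9.1093837e-31, 1.602176634e-19

def saha_electron_density(n_alkali, T, ionization_eV, g_ratio=1.0):
    lam = np.sqrt(h**2 / (2 * np.pi * me * kB * T))            # Eq. (3)
    A = (2.0 / lam**3) * g_ratio * np.exp(-ionization_eV * eV / (kB * T))
    return 0.5 * (-A + np.sqrt(A**2 + 4.0 * A * n_alkali))     # positive root of Eq. (2)

T = 1300.0                    # K, comparable to the Na2S condensation level in Fig. 2
n_tot = 5.6e27                # m^-3, roughly P/(k_B T) at ~1 kbar and 1300 K
for name, x_solar, chi_eV in [("Na", 3.46e-6, 5.139), ("K", 2.14e-7, 4.341)]:
    for MH in (0, -2, -5):
        ne = saha_electron_density(x_solar * 10.0**MH * n_tot, T, chi_eV)
        print(f"{name}, [M/H] = {MH:2d}: N_e ~ {ne:.2e} m^-3")
```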
The brightness temperatures correspond to electromagnetic radiation traveling from the interior of Jupiter radially outward through the atmospheric layers. Thus, transmission through the deep atmosphere is similar to transmission through a cold plasma medium. The refractive index of microwaves propagating through a cold plasma can be described by the Appleton-Hartree equation [Helliwell (2014)]. The formulation is applicable to a low-temperature plasma in both the presence and absence of a magnetic field. At 100-1000 bar pressure levels, the contribution of the magnetic field is insignificant in the Appleton-Hartree formulation [Helliwell (2014)]. Therefore, a simplified version of the Appleton-Hartree equation (Eq. 4) is used to calculate the complex refractive index of the deep atmosphere from the electron number density obtained with the Saha ionization equation. For an unmagnetized cold plasma medium, i.e. Jupiter's deep atmosphere, the Appleton-Hartree equation simplifies to:
\[n^{2}=1-\frac{X}{1-iZ} \tag{4}\]
\[\alpha=\frac{2\pi}{\lambda_{ch}Q} \tag{5}\]
Here, \(X=\frac{\omega_{0}^{2}}{\omega^{2}}\), \(Z=\frac{\nu}{\omega}\), \(\omega_{0}\) is the electron plasma frequency, \(\omega\) is the angular frequency of the microwave radiation, \(\omega_{h}\) is the electron gyrofrequency (which drops out in the unmagnetized limit of Eq. 4), \(\nu\) is the electron-neutral collision frequency, \(\lambda_{ch}\) is the wavelength of a given MWR channel, \(n\) is the refractive index, \(\alpha\) is the extinction coefficient, and \(Q\) is the quality factor, i.e. the ratio of the squares of the real and imaginary parts of the refractive index.
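A direct transcription of Eqs. (4) and (5) into a few lines of code is given below; the electron density and collision frequency are illustrative inputs (the former could be taken from the Saha estimate above), and the printed number carries no significance beyond showing how the extinction coefficient is evaluated.

```python
import numpy as np

# Sketch of Eqs. (4)-(5): complex refractive index of the unmagnetized cold plasma and
# the resulting extinction coefficient at an MWR channel. N_e and nu are illustrative.
eps0, e, me, c = 8.8541878128e-12, 1.602176634e-19, 9.1093837e-31, 2.998e8

def extinction_coefficient(N_e, nu, freq_Hz):
    w = 2.0 * np.pi * freq_Hz
    wp2 = N_e * e**2 / (eps0 * me)              # electron plasma frequency squared
    X, Z = wp2 / w**2, nu / w
    n = np.sqrt(1.0 - X / (1.0 - 1j * Z))       # Eq. (4)
    Q = n.real**2 / n.imag**2                   # quality factor as defined in the text
    return 2.0 * np.pi / ((c / freq_Hz) * Q)    # Eq. (5), with lambda_ch = c / frequency

alpha = extinction_coefficient(N_e=1.0e15, nu=1.0e12, freq_Hz=0.6e9)
print(f"alpha ~ {alpha:.2e} m^-1 at 0.6 GHz (illustrative inputs)")
```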
### Radiative Transfer Modeling
In order to draw a comparison between the MWR observations and theoretical knowledge of Jupiter's atmosphere, a benchmark model for the ideal Jupiter atmosphere is constructed using a moist hydrostatic adiabat following the ideal gas law [Li et al. (2018), Li et al. (2018)]. The specific heat of hydrogen is estimated from the mixing ratio of ortho and para hydrogen assuming thermal equilibrium between the ortho and para states. Moreover, the temperature profile of Jupiter's atmosphere is constructed for two cases of reference temperatures: (i) \(T\) = 166.1 K at the 1-bar pressure level from the Galileo probe [Seiff et al. (1998)] and (ii) \(T\) = 168.8 K at the 1-bar pressure level based on the reanalysis of the Voyager radio occultation experiment at Jupiter [Gupta et al. (2022)]. Ammonia and water are treated as condensible vapors in the moist adiabat, and cloud condensation is imposed by capping their partial pressures at the corresponding saturation vapor pressures. In the deep atmosphere of Jupiter, water and ammonia are not expected to form clouds; however, alkali metals are expected to undergo condensation. Therefore, a similar approach is applied to the alkali metals to estimate the gas-phase concentration available for ionization.
Spectral radiance is proportional to the physical temperature of the atmosphere in the Rayleigh-Jeans limit. For microwave frequencies, we compute the brightness temperature (_T\({}_{b}\)_) from the physical temperature using Eq. (1). The opacity of Jupiter's atmosphere is the sum of the opacities from the individual sources discussed in the previous section, i.e. ammonia, water, free electrons, and collision-induced absorption. The abundances of ammonia and water vapor are assumed to be 2.7 and 5 times the solar abundance [Li et al. (2020), Li et al. (2017)]. Because there is no a priori information on the alkali metal abundance in Jupiter, we compare two cases, one without alkali metals (baseline) and another with alkali metals (treatment), in order to provide a comparison between our current
knowledge of Jupiter and MWR data.
The spatial resolution of the MWR data also provides the limb darkening coefficient at six microwave frequencies. Limb darkening (_L\({}_{d}\)_) is defined as the percent change in _T\({}_{b}\)_ at a given viewing angle relative to _T\({}_{b}\)_ when looking vertically down toward the planet center, i.e. the nadir. For our simulations, we compute the limb darkening at a 45-degree angle from the nadir. The MWR channels at 0.6 GHz and 1.2 GHz are chosen to provide a comparison between theory and observations at higher pressures, using _T\({}_{b}\)_ and _L\({}_{d}\)_ as the observables. The benchmark case of the ideal Jupiter atmosphere is compared with MWR observations as a function of latitude between -40 and 40 degrees planetocentric latitude. Data from higher latitudes are neglected due to the presence of signatures from synchrotron radiation that are inseparable from the atmospheric contribution.
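In this notation, and adopting the convention that limb darkening is quoted as a positive percentage, the quantity compared in our simulations can be written as

\[L_{d}=100\times\frac{T_{b}(0^{\circ})-T_{b}(45^{\circ})}{T_{b}(0^{\circ})},\]

where \(0^{\circ}\) denotes the nadir view and \(45^{\circ}\) the off-nadir angle quoted above.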
A latitudinal variation in brightness temperatures is observed at both 0.6 and 1.2 GHz (Figure 3, panels (a) and (c)). The small- scale variations in _T\({}_{b}\)_ and _L\({}_{d}\)_ in all the panels can be attributed to variations in the atmospheric temperature structure and composition. It is important to note that the baseline case (without alkali metals) corresponds to two different temperature profiles of Jupiter's atmosphere for two different _T\({}_{1bar}\)_. There is an agreement between the baseline case and observations at 1.2 GHz in the equatorial region (panel (c)). On the other hand,
Figure 3: Limb darkening and brightness temperature MWR observations compared with simulation results at 0.6 GHz and 1.2 GHz corresponding to Jovian adiabats at (i) _T\({}_{1bar}\)_ = 166.1 K and (ii) _T\({}_{1bar}\)_ = 168.8 K, (a) _T\({}_{b}\)_ vs. latitude at 0.6 GHz, (b) _L\({}_{d}\)_ vs. latitude at 0.6 GHz, (c) _T\({}_{b}\)_ vs. latitude at 1.2 GHz, (d) _L\({}_{d}\)_ vs. latitude at 1.2 GHz.
brightness temperatures at 0.6 GHz are lower than the baseline case by 40-60 K at all latitudes (panel (a)), indicating the possibility of an additional source of opacity. Such a source is also supported by a depressed _L\({}_{d}\)_ observed by MWR; it is 4 percent less than the _L\({}_{d}\)_ magnitude of the ideal Jupiter atmosphere across all latitudes (panel (b)). The mismatch between the baseline and observations at 0.6 GHz is much greater than the uncertainty in the measurements and the variations in _T\({}_{b}\)_ and _L\({}_{d}\)_. Since the brightness temperatures correspond to different pressure regions in the atmosphere, the anomalous observations at 0.6 GHz must be attributed to the presence of an additional opacity source in the deep atmosphere or to a different opacity source that absorbs more effectively at 0.6 GHz than at 1.2 GHz. We test four confounding factors: (1) the distribution of ammonia, (2) the ammonia opacity at temperatures exceeding the range of laboratory measurements, (3) the opacity of water at high temperatures, and (4) the contribution of alkali metals. The theoretical brightness temperatures and limb darkening at 0.6 GHz and 1.2 GHz are shown in Fig. 3.
The latitudinal distribution of brightness temperatures and limb darkening from the forward model indicates a decrease in limb darkening from the equator to the pole at 0.6 GHz, opposite to the variation of limb darkening at 1.2 GHz across latitudes. This effect can be attributed to the free electrons in the deep atmosphere, as inferred from the shift of the contribution functions toward higher pressures in the presence of alkali metals (Fig. 1). Alkali metals greatly affect the absorption behavior at 0.6 GHz, and this effect dominates over that of gravity on the limb darkening.
### Ammonia, Water and Alkali Metals
Brightness temperature variations with latitude and the spectral inversion of brightness temperatures show a non-uniform distribution of ammonia vapor in Jupiter's deep atmosphere [Li et al. (2017), Ingersoll et al. (2017)]. This non-uniform distribution of ammonia could therefore contribute to variations in the microwave opacity of the deep atmosphere. To estimate the effect of ammonia concentration variations, we perturb the ammonia profile in the model with a scaling factor, as described in Eq. (6).
\[q_{NH_{3}}(P)=q_{NH_{3},0}(P)-(q_{NH_{3},0}(P)-q_{NH_{3},MWR}(P))s \tag{6}\]
Here, _q\({}_{NH_{3}}\)_ is the ammonia mass mixing ratio at a given pressure \(P\), and _q\({}_{NH_{3},0}(P)\)_ is the homogeneous ammonia mixing ratio, which is set to 2.7 times the solar abundance of NH\({}_{3}\) (\(\sim\) 360 ppm) [Li et al. (2017)] from the deep atmosphere up to the NH\({}_{3}\) vapor saturation point. Above the saturation point, the mixing ratio follows the NH\({}_{3}\) saturation vapor pressure curve. _q\({}_{NH_{3},MWR}(P)\)_ is the mixing ratio retrieved from the MWR inversion. We use a scaling factor to vary the ammonia mixing ratio between the homogeneous case and the MWR-derived profiles. The scaling factor \(s\) ranges from 0 to 1.5, where 0 is the homogeneous case. Increasing \(s\) to 1 changes the ammonia profile to the MWR inversion case for the equator and mid-latitude regions. We extend the scaling factor to 1.5 in order to take into account the low ammonia mixing ratio observed in the North Equatorial Belt (NEB) of Jupiter [Li et al. (2017)].
NH\({}_{3}\) opacity measurements are currently not available at the high temperatures (above \(\sim\) 550 K) corresponding to Jupiter's deep atmosphere, and the magnitude of the NH\({}_{3}\) absorption decreases at high pressures. Therefore, we apply a scaling factor to the NH\({}_{3}\) absorption coefficient to provide an estimate of the opacity at high temperatures. The mass absorption coefficient of ammonia is estimated by multiplying the absorption coefficient of Hanley et al. (2009) by the temperature-scaling law of Eq. (7). In this equation, \(\alpha\) is the absorption coefficient of NH\({}_{3}\), \(h\) is the opacity factor, \(T\) is temperature, and _T\({}_{c}\)_ is a reference temperature equal to 750 K. The NH\({}_{3}\) opacity models show that the absorption coefficient peaks at 750 K and decreases at temperatures beyond 750 K. In the simulations, the scaling factor multiplies the NH\({}_{3}\) opacity at temperatures higher than _T\({}_{c}\)_. The power-law index \(h\) is varied from 1 to 5 keeping the ammonia concentration constant, i.e., 2.7 times solar abundance. We also keep the water vapor constant at 5 times solar abundance, as the laboratory measurements demonstrate that water vapor absorption does not show a significant increase with pressure and is relatively transparent compared to the previous model of microwave absorption [Steffes et al. (2023)].
\[\alpha(NH_{3})\sim\left(\frac{T_{c}}{T}\right)^{h} \tag{7}\]
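The two ammonia perturbations can be summarized compactly; the sketch below implements Eq. (6) and Eq. (7), with placeholder arrays standing in for the homogeneous and MWR-retrieved mixing ratios and for the Hanley et al. (2009) absorption coefficient.

```python
import numpy as np

# Sketch of the ammonia perturbations: the profile scaling of Eq. (6) and the power-law
# temperature scaling of the opacity in Eq. (7) applied above T_c = 750 K. The profiles
# below are placeholders, not the actual retrieved or laboratory quantities.
def scaled_nh3_profile(q_hom, q_mwr, s):
    """Eq. (6): s = 0 returns the homogeneous profile, s = 1 the MWR retrieval."""
    return q_hom - (q_hom - q_mwr) * s

def scaled_nh3_opacity(alpha_nh3, T, h, T_c=750.0):
    """Eq. (7): multiply the absorption coefficient by (T_c/T)^h wherever T > T_c."""
    return alpha_nh3 * np.where(T > T_c, (T_c / T) ** h, 1.0)

p = np.logspace(0, 3, 100)                                  # pressure grid [bar]
q_hom = np.full_like(p, 360e-6)                             # 2.7x solar, ~360 ppm
q_mwr = q_hom * (1 - 0.4 * np.exp(-np.log10(p) ** 2))       # toy stand-in for the retrieval
q_scaled = scaled_nh3_profile(q_hom, q_mwr, s=1.0)
```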
Changing the ammonia profile and introducing the additional temperature-dependent scaling factor produce brightness temperatures and limb darkening that diverge from the MWR data at 0.6 GHz, as shown in Figure 4a. The difference between _T\({}_{b}\)_ from the model and the observations is in the range of 50-200 K at 0.6 GHz. Reducing the ammonia concentration causes a monotonic increase in _T\({}_{b}\)_ and a decrease in _L\({}_{d}\)_. Further, reducing the ammonia opacity shows a similar trend in _T\({}_{b}\)_, while a saturation in _L\({}_{d}\)_ is expected at a power law factor of 5. Changing the ammonia profile and ammonia opacity has a similar effect on _T\({}_{b}\)_ and _L\({}_{d}\)_ at 1.2 GHz. However, overall, the variation in the MWR observations at 1.2 GHz can be explained by these two factors and does not require the inclusion of alkali metals. The 1.2 GHz observations correspond to \(\sim\) 20 bar (Fig. 1), well above the cloud base of the alkali metals and at relatively low pressure levels. Therefore, the contribution of free electrons to the opacity is expected to be smaller because of the lower temperatures, and the opacity contribution of ammonia vapor dominates at 1.2 GHz. However, a comparison of the MWR observations at both frequencies clearly implies that the variation in ammonia vapor opacity cannot solely explain the anomalous observations at the 0.6 GHz channel.
Figs. 4c and 4d examine the overall effect of alkali metals and ammonia vapor on the _T\({}_{b}\)_ and _L\({}_{d}\)_ at 0.6 GHz and 1.2 GHz. We vary the alkali metal metallicities over a range from 0 (the solar abundance of Na and K according to Asplund et al. (2009)) to -7 for each condition of NH\({}_{3}\) profile scaling and NH\({}_{3}\) opacity scaling. The volume mixing ratios of Na and K corresponding to their abundances in the solar photosphere [Asplund et al. (2009)] are 3.46 x 10\({}^{-6}\) (_[Na/H]_ = -5.76) and 2.14 x 10\({}^{-7}\) (_[K/H]_ = -6.97), respectively. We thus simulate a wide range of ammonia opacity conditions for a given alkali metal abundance (colored dots). Both the NH\({}_{3}\) profile and opacity scaling cause a change in _T\({}_{b}\)_ and _L\({}_{d}\)_, as shown by the annotation in the figure. The variation in _T\({}_{b}\)_ and _L\({}_{d}\)_ is similar to the pattern in Fig. 4a. NH\({}_{3}\) profile scaling causes a decrease in _L\({}_{d}\)_, while the scaling in NH\({}_{3}\) vapor opacity causes _L\({}_{d}\)_ to increase at 0.6 GHz. For each case of metallicity, we then perform a scaling in ammonia vapor and ammonia opacity as described previously in this section. This provides us with a matrix of _T\({}_{b}\)_ and _L\({}_{d}\)_ that takes into account all possible sources of opacity, i.e., collision-induced absorption, ammonia, water vapor, and free electrons from alkali metals. The free electron opacity is calculated from the Appleton-Hartree equation explained in the previous section.
When we compare the new model result with the MWR observations (Fig. 4b), we find that the model matches the observations at 0.6 GHz for free electrons corresponding to alkali metal metallicities in the range of -2 to -5 (chocolate colored patches), i.e. 10\({}^{-2}\) to 10\({}^{-5}\) times the solar abundance. There is an agreement between the model and observations at 1.2 GHz for the same range of metallicities. The addition of free electrons from alkali metals dominates the effect of gravity (Fig. 5), and we expect the limb darkening to decrease from the equator to the poles assuming uniform mixing ratios of water and ammonia vapor. This serves as a baseline for understanding the sole effect of free electrons on the latitudinal variation of microwave radiation from Jupiter's deep atmosphere.
## 4 Discussions
We infer the metallicity of the alkali metals in Jupiter to be much lower than the solar value. A possible indication of low metallicity of the alkali metals in a hot Jupiter exoplanet was first proposed by Demory et al. (2011) as one plausible explanation for the high albedo of Kepler-7b. They derived an alkali metal abundance 10-100 times lower than the solar value. Since then, the abundance of alkali metals has been derived for several other giant exoplanets, with abundances ranging from \(\sim\) 100 times below solar to \(\sim\) 100 times above solar, although the uncertainties are large. Observations of two hot Jupiters or Saturns with clear or mostly clear atmospheres have recently been made. The alkali metal abundance for one such hot Jupiter (HAT-P-1b) is found to be sub-solar [Chen et al. (2022)], while it was found to be solar to greatly super-solar for the other (WASP-96b) [Nikolov et al. (2022)]. Considering the relatively small sample size of hot Jupiters with clear atmospheres, it is premature to make a meaningful comparison between their alkali metal metallicity and the metallicity in Jupiter presented in this paper. On the other hand, it is instructive to compare the abundance of alkali metals in Jupiter from this work with the abundance of the other heavy elements. While the opacity contribution from alkali metals suggests that Na and K are strongly depleted relative to solar at the level probed by MWR at 0.6 GHz, all other heavy elements are enriched by a factor of approximately three to five, with nitrogen highly variable but enriched and the water abundance still uncertain [Atreya et al. (2019), Li
et al. (2020), Li et al. (2017), Mahaffy et al. (2000)]. The comparison to other heavy element measurements from the Galileo probe corresponds to much lower pressures, i.e. \(<\) 22 bars. The estimation of alkali metal metallicity from MWR implies a lower metallicity at much higher pressures. The results (Fig. 4b) provide an important constraint on the alkali metal abundance at the pressures to which the 0.6 GHz channel is sensitive. An alkali metal metallicity of [M/H] = -1 provides too much opacity, while too low an abundance or the absence of alkali metals does not provide sufficient opacity to match the MWR
Figure 4: Comparison is drawn between the Juno MWR observations and the results of the radiative transfer model for _T\({}_{b}\)_ and _L\({}_{d}\)_ at 0.6 GHz and 1.2 GHz, keeping the water abundance constant \(\sim\) 5 times solar abundance. (a, b) Jupiter’s atmosphere in the absence of alkali metals with only variations in the NH\({}_{3}\) vapor profile and the NH\({}_{3}\) opacity, (c, d) Jupiter’s atmosphere in the presence of alkali metals with variations in the NH\({}_{3}\) vapor profile and the NH\({}_{3}\) opacity. The NH\({}_{3}\) profile of Jupiter’s atmosphere is varied using a scale from 0 to 1.5 to take into account the contribution of non-uniform distribution of NH\({}_{3}\) vapor observed by MWR [Li et al. (2017)]. NH\({}_{3}\) opacity at temperatures above 750 K undergoes power law scaling as a function of atmospheric temperature (Eq. 7). In the absence of alkali metals, the changes in NH\({}_{3}\) vapor profile and the scaling in NH\({}_{3}\) vapor opacity deviate significantly from Juno MWR observations at 0.6 GHz. However, in the presence of alkali metals of low metallicity, i.e., in the range of -2 to -5, there is an agreement between model results and MWR observations. Observations at 1.2 GHz can be explained by variations in the NH\({}_{3}\) vapor profile and the NH\({}_{3}\) opacity independent of opacity contributions from alkali metals.
The low abundance of alkali metals indicated by the MWR observations could be attributed to either of the following scenarios. (i) Initially enriched alkali metals, consistent with the other heavy elements in the atmosphere, are depleted by chemical reactions with other constituents deep in the atmosphere, resulting in a low abundance of Na and K at the \(\sim\) 1 kilobar level that provides just enough free electrons to explain the MWR data at 0.6 GHz. Fegley and Lodders [Fegley Jr & Lodders (1994)] predict, for example, the formation of gas-phase species of Na and K in the atmosphere, i.e. NaCl, NaOH, and KOH. Should there be chemical mechanisms that selectively deplete K in the atmosphere, leaving Na as the most significant contributor to free electrons in the deep atmosphere, the metallicity of Na would be expected to be in the range of 0 to -2, i.e. solar to highly sub-solar abundance (Appendix B). (ii) Unconventional planet formation processes, whereby Jupiter did not accrete a solar complement of alkali metals, or the alkali metals are not well mixed at greater depths. If the depletion of alkali metals at \(\sim\) 1 kbar inferred in this paper is
Figure 5: Latitudinal variation of brightness temperature and limb darkening of Jupiter’s atmosphere at 0.6 GHz and 1.2 GHz at [M/H] = -3.
representative of their bulk abundance, it could be indicative of the depletion of all rock-forming elements, with significant implications for the formation and evolution of Jupiter. Our conclusion of depletion is based on the data of the 0.6 GHz channel, whose weighting function peaks at the 1 kilobar level with the inclusion of alkali metals. Thus, we are confident about the result only at this level. Alkali metals could well be more abundant deeper in the atmosphere and could have been depleted by some as yet unknown mechanism before reaching the 1 kilobar level, though the degree of depletion would have to be very large. Barshay & Lewis (1978) considered one such possibility, in which silicates were proposed as a means of sequestering gas-phase alkali metals. However, a later study by Fegley Jr & Lodders (1994) found it to be an ineffective mechanism. Further modeling and laboratory studies are needed to cover the full parameter space of the combined thermochemistry of alkali metals and rock-cloud-forming species at the very high temperature and pressure conditions of the deep atmosphere of Jupiter, together with any dynamical effects, before firm conclusions can be drawn about the depletion of alkali metals in bulk Jupiter below the level to which the MWR data of this paper are sensitive.
The new constraints on the abundance of the alkalis are linked to their low ionization potential and the fact that the electrons they provide directly affect the opacities at 0.6 and 1.2 GHz (see Eq. 4). When present, however, the alkalis are also strong absorbers at visible wavelengths (e.g., Burrows et al. (2000)) and therefore directly affect the planetary radiative flux. The low abundances that we derive imply that a radiative zone may be present in Jupiter [Guillot et al. (1994), Guillot et al. (2004)]. Interestingly, this could at the same time explain the relatively low abundance of CO observed in Jupiter's atmosphere compared to expectations for a fully convective deep atmosphere [Cavalie et al. (2023)].
## 5 Software and Third Party Data Repository Citations
The software for the radiative transfer package will be available at the Zenodo archive ([https://doi.org/10.5281/zenodo.7893914](https://doi.org/10.5281/zenodo.7893914)), and the MWR data used in this work, together with associated files for data visualization, are available at the Zenodo archive ([https://doi.org/10.5281/zenodo.7893817](https://doi.org/10.5281/zenodo.7893817)). They can also be made available upon request.
High-performance Atmospheric Radiation Package (HARP) [Li et al. (2018b), Bhattacharya et al. (2023)]
## Appendix A: Electron Density and Conductivity
The electron density of Jupiter's atmosphere is governed by two fundamental processes: (i) condensation of the alkali metals into Na\({}_{2}\)S and KCl, and (ii) ionization of the alkali metals in thermal equilibrium. Fig. 2 shows the pressure levels corresponding to the cloud bases of Na\({}_{2}\)S and KCl based on their saturation vapor pressures. Cloud condensation reduces the amount of alkali metals available in the gas phase to act as a source of free electrons, and restricts the abundances of Na and K to those set by their respective saturation vapor pressures. In the cloud region, the electron density is controlled by the saturation vapor pressures of the alkali metals, whereas below the cloud base the electron densities are governed by the metallicity of the alkali metals. Condensation therefore controls the electron density, and hence the conductivity, at low pressure levels. Condensation-limited ionization is observed at low pressures (below 1 kbar) irrespective of the alkali metal abundance, as the electron density lines converge (Fig. A.1 (a)). Figs. A.1 (a) and (b) show a kink in the electron density and the corresponding conductivity at the cloud base for different alkali metal abundances. However, condensation does not play a significant role in governing the electron densities at the \(\sim 1\) kbar pressure level corresponding to the global maximum of the weighting function at 0.6 GHz (Figure 1).
The electron density of the deep atmosphere at the metallicities inferred here is much lower than it would be with alkali metals at solar abundance, and we take it to be the true representation of the electron density of the deep atmosphere. At greater pressures, hydrogen behaves as a semiconductor and becomes the major contributor to the electron density [Liu et al. (2008)]. The electrical conductivity of the atmosphere is calculated using Drude's equation, which provides an estimate of the conductivity due to the free electrons supplied by alkali metal ionization.
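To make the two steps above concrete, the following is a minimal illustrative sketch (not the code used in this work) of how an electron density and a Drude conductivity can be estimated. It assumes a single-species Saha ionization equilibrium for Na with statistical-weight ratio \(g_{i}/g_{a}=1/2\), and a constant electron-neutral collision frequency in the Drude formula \(\sigma=n_{e}e^{2}/(m_{e}\nu_{\rm coll})\); the temperature, number densities and collision frequency below are placeholders rather than values from this paper.

```python
import numpy as np

# Physical constants (SI units)
k_B = 1.380649e-23     # Boltzmann constant [J/K]
m_e = 9.1093837e-31    # electron mass [kg]
h_P = 6.62607015e-34   # Planck constant [J s]
q_e = 1.602176634e-19  # elementary charge [C]

def saha_electron_density(n_Na, T, chi_eV=5.139, g_ratio=0.5):
    """Electron density [m^-3] from a single-species Saha equilibrium for Na.

    n_Na    : total (neutral + ionized) Na number density [m^-3]
    T       : temperature [K]
    chi_eV  : ionization potential of Na [eV]
    g_ratio : ratio of ion to neutral statistical weights (1/2 for Na)
    Solves n_e^2 / (n_Na - n_e) = S(T), i.e. n_e^2 + S n_e - S n_Na = 0.
    """
    S = 2.0 * g_ratio * (2.0 * np.pi * m_e * k_B * T / h_P**2) ** 1.5 \
        * np.exp(-chi_eV * q_e / (k_B * T))
    return 0.5 * (-S + np.sqrt(S**2 + 4.0 * S * n_Na))

def drude_conductivity(n_e, nu_coll):
    """Drude DC conductivity sigma = n_e e^2 / (m_e nu_coll) [S/m]."""
    return n_e * q_e**2 / (m_e * nu_coll)

# Placeholder conditions, loosely representative of the ~1 kbar level (assumed)
T = 1300.0                 # K
n_tot = 1.0e27             # m^-3, total gas number density (assumed)
n_Na = 1.0e-7 * n_tot      # strongly sub-solar Na mixing ratio (assumed)

n_e = saha_electron_density(n_Na, T)
sigma = drude_conductivity(n_e, nu_coll=1.0e12)   # assumed collision frequency
print(f"n_e ~ {n_e:.3e} m^-3, sigma ~ {sigma:.3e} S/m")
```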
## Appendix B: Selective depletion of alkali metals
Even though Na\({}_{2}\)S condenses at a deeper level than KCl, the cloud condensation is governed by atmospheric temperature and does not reflect the chemical reactivity of the alkali metals. K is more electropositive than Na and is therefore expected to be more reactive than Na.
Figure A.1: (a) Electron density of Jupiter’s deep atmosphere at the solar abundance and [M/H] = -3 and -4; (b) electrical conductivity of Jupiter’s deep atmosphere at the solar abundance and [M/H] = -3 and -4.
It is therefore possible that a chemical mechanism could selectively deplete K into other compounds, leaving Na as the only source of free electrons in Jupiter. Under such conditions, we find that the Na metallicity should be in the range of 0 to -3 to match the MWR observations. The higher required Na metallicity can be attributed to two factors: (i) the low ionization energy of K, and (ii) the fact that Na\({}_{2}\)S condenses well below KCl (Figure 2). A larger amount of Na is therefore required to produce enough free electrons to match the MWR brightness temperatures and limb darkening.
Eliminating K from the atmosphere isolates the elemental abundance of Na required to match the MWR observations. The results of the forward model in Fig. B.1 indicate the possible Na metallicities under different assumptions about the ammonia vapor concentration profile and microwave opacity. The Na metallicity is expected to lie in the range 0 to -3, i.e. solar to highly sub-solar abundance. Thus, the required Na metallicity is higher than when both Na and K are considered to be sources of free electrons.
## Appendix C: Jovian adiabats and comparison of MWR with high temperature adiabat
Fig. 2 shows that the brightness temperatures at 600 MHz from the two adiabats differ by approximately 15 K. The relative weighting function for the adiabats is that of an ideal Jupiter atmosphere without the opacity due to free electrons from alkali metals; it peaks at \(\sim\) 100 bar. The difference in the physical temperature of the atmosphere between the two adiabats reaches \(\sim\) 10-15 K at the 100 bar level (Fig. C.1). The weighting function at 600 MHz also extends below 100 bar, which could explain the difference in brightness temperatures. An interesting observation is that the difference in adiabat temperatures increases with atmospheric pressure. This increase can be attributed to the temperature-dependent specific heat of the atmospheric constituents.
Interior models of Jupiter generally use a high temperature in the range of 170-180 K at the outer boundary (the 1 bar pressure level) [Gupta et al. (2022), Miguel et al. (2022)]. These temperatures are about 10-15 K higher than the measurements from the Galileo probe (166.1 K) [Seiff et al. (1998)] and the Voyager radio occultation reanalysis (168.8 K) [Gupta et al. (2022)].
Figure B.1: Comparison of the limb darkening and brightness temperature from MWR observations and forward model results at 600 MHz and 1.2 GHz, for metallicities ranging from 0 to -7 and for different ammonia vapor concentration profiles and opacities. The figure shows the sole effect of free electrons due to the ionization of Na, without any contribution from K.
A simulation of brightness temperatures and limb darkening at 0.6 GHz and 1.2 GHz is carried out for all cases of alkali metal metallicity, ammonia concentration and opacity variation, assuming _T\({}_{1bar}\)_ = 175 K. It can be clearly seen in Fig. C.2 that the higher temperature at 1 bar does not match the entire range of MWR observations at either frequency. Alternative possibilities are the presence of a non-adiabatic gradient or a radiative layer in Jupiter's deep atmosphere, which could account for a higher temperature at the 1 bar level. However, the mismatch with MWR at 1.2 GHz poses a serious challenge to this assumption. The current measurements of the temperature at the 1 bar level come from a limited number of radio occultation experiments. Radio science experiments from the equator to the poles are needed to estimate the true variability of the temperature at 1 bar.
|
2308.04850 | Higher Cheeger ratios of features in Laplace-Beltrami eigenfunctions | This paper investigates links between the eigenvalues and eigenfunctions of
the Laplace-Beltrami operator, and the higher Cheeger constants of smooth
Riemannian manifolds, possibly weighted and/or with boundary. The higher
Cheeger constants give a loose description of the major geometric features of a
manifold. We give a constructive upper bound on the higher Cheeger constants,
in terms of the eigenvalue of any eigenfunction with the corresponding number
of nodal domains. Specifically, we show that for each such eigenfunction, a
positive-measure collection of its superlevel sets have their Cheeger ratios
bounded above in terms of the corresponding eigenvalue.
Some manifolds have their major features entwined across several
eigenfunctions, and no single eigenfunction contains all the major features. In
this case, there may exist carefully chosen linear combinations of the
eigenfunctions, each with large values on a single feature, and small values
elsewhere. We can then apply a soft-thresholding operator to these linear
combinations to obtain new functions, each supported on a single feature. We
show that the Cheeger ratios of the level sets of these functions also give an
upper bound on the Laplace-Beltrami eigenvalues. We extend these level set
results to nonautonomous dynamical systems, and show that the dynamic Laplacian
eigenfunctions reveal sets with small dynamic Cheeger ratios. | Gary Froyland, Christopher P. Rock | 2023-08-09T10:26:23Z | http://arxiv.org/abs/2308.04850v1 | # Higher Cheeger ratios of features in Laplace-Beltrami eigenfunctions
###### Abstract
This paper investigates links between the eigenvalues and eigenfunctions of the Laplace-Beltrami operator, and the higher Cheeger constants of smooth Riemannian manifolds, possibly weighted and/or with boundary. The higher Cheeger constants give a loose description of the major geometric features of a manifold. We give a constructive upper bound on the higher Cheeger constants, in terms of the eigenvalue of any eigenfunction with the corresponding number of nodal domains. Specifically, we show that for each such eigenfunction, a positive-measure collection of its superlevel sets have their Cheeger ratios bounded above in terms of the corresponding eigenvalue.
Some manifolds have their major features entwined across several eigenfunctions, and no single eigenfunction contains all the major features. In this case, there may exist carefully chosen linear combinations of the eigenfunctions, each with large values on a single feature, and small values elsewhere. We can then apply a soft-thresholding operator to these linear combinations to obtain new functions, each supported on a single feature. We show that the Cheeger ratios of the level sets of these functions also give an upper bound on the Laplace-Beltrami eigenvalues. We extend these level set results to nonautonomous dynamical systems, and show that the dynamic Laplacian eigenfunctions reveal sets with small dynamic Cheeger ratios.
## 1 Introduction
The classical static _Cheeger problem_ is an optimisation problem in Riemannian geometry, which has been studied extensively in relation to the eigenvalues of the Laplace-Beltrami operator [17, 43, 54, 9]. Given an \(n\)-dimensional Riemannian manifold \((M,g)\) with volume measure \(V\) and induced \(n-1\)-dimensional Hausdorff measure \(V_{n-1}\), the _Neumann Cheeger ratio_ of a set \(A\subset M\) with suitably smooth boundary is the ratio \(\mathcal{J}_{N}(A):=\frac{V_{n-1}(\partial A\cap\operatorname{int}M)}{V(A)}\). The Neumann Cheeger problem consists of finding a set that minimises \(\mathcal{J}_{N}(A)\) over sets \(A\subset M\) satisfying \(V(A)\leq\frac{V(M)}{2}\). The resulting minimal ratio is known as the _Neumann Cheeger constant for \(M\)_. For compact \(n\)-dimensional submanifolds \(M\subset\mathbb{R}^{n}\), a Neumann Cheeger ratio minimiser is a set \(A\subset M\) which is separated from \(M\backslash\overline{A}\) by an optimal 'bottleneck'. We give an example in Figure 1(a). The _Dirichlet Cheeger ratio_ of a set \(A\subset M\) with suitably smooth boundary is the ratio \(\mathcal{J}_{D}(A):=\frac{V_{n-1}(\partial A)}{V(A)}\), and the Dirichlet Cheeger problem consists of finding a set that minimises \(\mathcal{J}_{D}(A)\) over subsets \(A\subset M\). The resulting minimal ratio is known as the _Dirichlet Cheeger constant for \(M\)_. A Dirichlet Cheeger ratio minimiser is a region with an optimal balance between large volume and small boundary. For \(n\)-dimensional \(M\subset\mathbb{R}^{n}\) endowed with the Euclidean metric and \(A\subset M\), \(\mathcal{J}_{D}(A)\) decreases by a factor of \(s\) when we dilate \(A\) by a factor of \(s\) in each dimension, so minimisers for \(\mathcal{J}_{D}(A)\) always contact \(\partial M\) ([57, Theorem 3.5]). We give an example in Figure 1(b).
The Cheeger problem can be extended to seek _collections_ of subsets, each of which have small Cheeger ratios. Given a collection of \(k\) disjoint sets \(A_{1},\dots,A_{k}\subset M\), the Neumann and Dirichlet Cheeger ratios of \(\{A_{1},\dots,A_{k}\}\) are given by \(\mathcal{J}_{N}(\{A_{1},\dots,A_{k}\}):=\max_{1\leq i\leq k}\mathcal{J}_{N}(A_ {i})\) and \(\mathcal{J}_{D}(\{A_{1},\dots,A_{k}\}):=\max_{1\leq i\leq k}\mathcal{J}_{D}(A_ {i})\), respectively, i.e. the Cheeger ratio of a collection of disjoint subsets of \(M\) is the maximum Cheeger ratio among the subsets. For each \(k\geq 1\), the \(k\)_th Neumann_ or _Dirichlet Cheeger problem_ consists of finding a
collection of \(k\) disjoint sets \(\{A_{1},\ldots,A_{k}\}\) which minimises \(\mathcal{J}_{N}(\{A_{1},\ldots,A_{k}\})\) or \(\mathcal{J}_{D}(\{A_{1},\ldots,A_{k}\})\). The first Dirichlet Cheeger problem is exactly the classical Dirichlet Cheeger problem, while the second Neumann Cheeger problem corresponds to the classical Neumann Cheeger problem. The \(k\)th Cheeger problems for larger \(k\) are called the _higher Cheeger problems_, and the infima are called the _higher Cheeger constants_.
Exact minimisers for the Cheeger problem have only been computed for a few sets or classes of sets (see e.g. [5, 13, 46, 47]). In particular, [47, Theorem 1.4] obtains an expression for the Cheeger-minimising set of any subset of \(\mathbb{R}^{2}\) without a 'neck'. We are instead interested in using the Cheeger problem to identify necks, and the approach of [47] does not extend to sets with necks (see e.g. [47, Figs 1-2]). There are some algorithms for solving Cheeger problems numerically (see e.g. [10, 11, 12, 14, 42]), but these algorithms apply only to the classical Cheeger problems, not the versions with \(k\geq 2\) (in the Dirichlet case) or \(k\geq 3\) (in the Neumann case). These algorithms have not been studied on Riemannian manifolds other than full-dimensional subsets of \(\mathbb{R}^{n}\). Understanding the connectivity of more general Riemannian manifolds is important in settings such as manifold learning (e.g. [18, 36]), where one studies the geometry of a low-dimensional submanifold embedded in some high-dimensional Euclidean space. The second Dirichlet Cheeger problem is studied in [4], where the authors solve this problem for one specific subset of \(\mathbb{R}^{2}\) (an annulus).
Approximate minima and minimisers for the higher Cheeger problem, and upper bounds on the higher Cheeger constants, can be found using the eigenfunctions and eigenvalues of the (possibly _weighted_) _Laplace-Beltrami operator_. Miclo [53] and others have given upper bounds on the \(k\)th Cheeger constant on boundaryless manifolds, up to a non-explicit factor depending cubically on \(k\). Miclo improves this dependence on \(k\) to sub-logarithmic, by using (for example) the \(2k\)th eigenvalue to bound the \(k\)th Cheeger constant. We prove an alternative upper bound on the \(k\)th Cheeger constant (Theorem 3.7), extending a result from the graph setting [22, Theorem 5], in terms of the eigenvalue of any eigenfunction with \(k\) or more nodal domains, up to a small constant factor independent of \(k\). Thus, we can obtain a much tighter upper bound on the \(k\)th Cheeger constant whenever the appropriate eigenfunction has sufficiently many nodal domains. Our bound also applies to manifolds with nonempty boundary, under Neumann or Dirichlet boundary conditions. Moreover, our bound is constructive - we show that any (possibly weighted) Laplace-Beltrami eigenfunction has superlevel sets within each nodal domain whose Cheeger ratios are also bounded above. A similar approach is used in the graph setting in e.g. [40, sec 1.1], to obtain a \(2\)-partition of a graph with a low conductance from the first nontrivial graph Laplacian eigenvalue. Our approach is primarily useful in situations where Laplacian eigenfunctions on a manifold are calculated or approximated explicitly.
An important question in the study of nonautonomous dynamical systems is how to divide the phase space into regions which interact minimally with each other. In purely deterministic dynamics, any two disjoint regions have no interaction with each other, so we instead consider regions whose boundaries remain small, relative to their size, as they evolve with the deterministic dynamics. The ratio of a region's time-averaged boundary size to its overall size is called its _dynamic Cheeger ratio_. Sets with small dynamic Cheeger ratio are called _coherent sets_, and the infimal dynamic Cheeger ratio is called the _dynamic Cheeger constant_[25, 27]. We can obtain an upper bound on the dynamic Cheeger constants using the eigenvalues
Figure 1: Neumann and Dirichlet Cheeger minimisers for \(M\subset\mathbb{R}^{2}\) equipped with the Euclidean metric.
of an operator, which acts on the domain of the dynamical system, called the _dynamic Laplacian_. We show that \(k\) disjoint coherent sets with quality guarantees - upper bounds on their dynamic Cheeger ratios - can be obtained from any eigenfunction with \(k\) nodal domains (Theorem 3.19).
The remainder of this article is structured as follows. In section 2, we provide some basic definitions and define the higher Cheeger constants. In subsections 3.1-3.2, we summarise prior upper bounds on the Cheeger constants in terms of Laplace-Beltrami eigenvalues. We also state our own constructive upper bounds, which depend on properties of the eigenfunctions (Theorem 3.7 and Proposition 3.8). In subsection 3.4, we generalise these results to the dynamic setting. Lastly, in section 4, we give some examples comparing our bounds to bounds from the literature.
## 2 Preliminaries
### Higher Cheeger constants
Let \((M,g)\) be a smooth Riemannian manifold, possibly with nonempty boundary, i.e. a second-countable Hausdorff space where each point of \(M\) has a neighbourhood diffeomorphic to a relatively open subset of \(\{x\in\mathbb{R}^{n}:x_{n}\geq 0\}\). Except where otherwise noted, we assume all Riemannian manifolds are \(n\)-dimensional (\(n\geq 2\)), \(C^{\infty}\), compact and connected, and have smooth boundary if they have a nonempty boundary. Let \(V\) and \(\mathrm{d}V\) denote the volume measure and volume form on \(M\) induced by \(g\). Let \((M,g,\mu)\) be a _weighted manifold_, i.e. a Riemannian manifold \((M,g)\) equipped with a measure \(\mu\) satisfying \(\mathrm{d}\mu=e^{\phi}\,\mathrm{d}V\) for some \(\phi\in C^{\infty}(M)\). Note that we can treat any Riemannian manifold as a weighted manifold by taking \(\mu=V\), so all our results for weighted manifolds extend directly to unweighted manifolds (i.e. manifolds where \(\phi=0\) everywhere). On each \(n-1\)-dimensional submanifold \(\Sigma\subset M\), let \(V_{n-1}\) and \(\mathrm{d}V_{n-1}\) denote the \(n-1\)-dimensional Riemannian volume measure and volume form on \(\Sigma\), and let \(\mu_{n-1}\) be the measure satisfying \(\mathrm{d}\mu_{n-1}:=e^{\phi}\,\mathrm{d}V_{n-1}\).
For a set \(A\subset M\), we let \(\partial^{M}A\) denote the relative topological boundary of \(A\) in \(M\), i.e. the set of points \(p\in M\) such that every neighbourhood of \(p\) contains both points in \(A\) and points in \(M\backslash A\). For example, if \(M:=\{(x,y)\in\mathbb{R}^{2}:x^{2}+y^{2}\leq 1\}\) and \(A:=\{(x,y)\in M:y>0\}\), then \(\partial^{M}A\) consists of the interval \(\{(x,0):-1\leq x\leq 1\}\) but not the semicircle \(\{(x,y)\in M:x^{2}+y^{2}=1\}\). We define the Neumann and Dirichlet Cheeger constants as follows.
**Definition 2.1**.: Let \(\mathscr{P}_{N}(M)\) denote the collection of nonempty, relatively open subsets \(A\subset M\) such that \(\partial^{M}A\) is a codimension-\(1\), \(C^{\infty}\) submanifold of \(M\) with boundary \(\partial(\partial^{M}A)=\partial^{M}A\cap\partial M\). Let \(\mathscr{P}_{D}(M)\) denote the collection of nonempty, relatively open subsets \(A\subset M\) such that \(\overline{A}\cap\partial M=\emptyset\), and \(\partial A\) is a codimension-\(1\), \(C^{\infty}\) submanifold of \(M\). Then for \(k\geq 1\), a _Neumann_, resp. _Dirichlet \(k\)-packing_ is a set \(\mathcal{A}_{k}:=\{A_{1},\ldots,A_{k}\}\) such that each \(A_{i}\in\mathscr{P}_{N}(M)\), resp. \(A_{i}\in\mathscr{P}_{D}(M)\), and the \(A_{i}\) are pairwise disjoint. Let \(\mathscr{P}_{k,N}(M)\), resp. \(\mathscr{P}_{k,D}(M)\) denote the set of Neumann, resp. Dirichlet \(k\)-packings for \(M\).
**Definition 2.2** (Higher Cheeger constants).: For \(k\geq 1\), the _Neumann Cheeger ratio_ of a Neumann \(k\)-packing \(\{A_{1},\ldots,A_{k}\}\in\mathscr{P}_{k,N}(M)\) is
\[\mathcal{J}_{N}(\{A_{1},\ldots,A_{k}\}):=\max_{1\leq i\leq k}\frac{\mu_{n-1}( \partial^{M}A_{i})}{\mu(A_{i})}. \tag{1}\]
The _Dirichlet Cheeger ratio_ of a Dirichlet \(k\)-packing \(\{A_{1},\ldots,A_{k}\}\in\mathscr{P}_{k,D}(M)\) is
\[\mathcal{J}_{D}(\{A_{1},\ldots,A_{k}\}):=\max_{1\leq i\leq k}\frac{\mu_{n-1}( \partial A_{i})}{\mu(A_{i})}. \tag{2}\]
The _\(k\)th Neumann_ and _Dirichlet Cheeger constants_ of \(M\) are
\[h_{k,N} :=\inf_{\{A_{1},\ldots,A_{k}\}\in\mathscr{P}_{k,N}(M)}\mathcal{J} _{N}(\{A_{1},\ldots,A_{k}\}) \tag{3}\] \[h_{k,D} :=\inf_{\{A_{1},\ldots,A_{k}\}\in\mathscr{P}_{k,D}(M)}\mathcal{J} _{D}(\{A_{1},\ldots,A_{k}\}). \tag{4}\]
We will sometimes write \(\mathcal{J}_{N}(A)\) and \(\mathcal{J}_{D}(A)\) instead of \(\mathcal{J}_{N}(\{A\})\) and \(\mathcal{J}_{D}(\{A\})\) for convenience. By this definition, we always have \(h_{1,N}=0\), aligning with our notation where \(\lambda_{1,N}=0\). In the special case \(\partial M=\emptyset\), we write \(\mathcal{J}_{\emptyset}\) and \(h_{k,\emptyset}\), respectively for \(\mathcal{J}_{N}\) and \(h_{k,N}\), respectively, and refer to \(\mathcal{J}_{\emptyset}\) and \(h_{k,\emptyset}\) as the _boundaryless Cheeger ratio_ and _constant_.
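For a concrete illustration of Definition 2.2 (a simple worked computation, not needed for the results that follow), let \(M:=[0,L]\times[0,1]\subset\mathbb{R}^{2}\) with \(L\geq 1\), equipped with the Euclidean metric and \(\mu=\operatorname{Leb}\). The relatively open left half \(A_{1}:=[0,L/2)\times[0,1]\) has \(\partial^{M}A_{1}=\{L/2\}\times[0,1]\), so

\[\mathcal{J}_{N}(A_{1})=\frac{\mu_{1}(\partial^{M}A_{1})}{\mu(A_{1})}=\frac{1}{L/2}=\frac{2}{L},\]

and pairing \(A_{1}\) with the right half \(A_{2}:=(L/2,L]\times[0,1]\) gives a Neumann \(2\)-packing with the same ratio, so \(h_{2,N}\leq 2/L\). For the Dirichlet constant, any open disc \(B_{r}(p)\) whose closure lies in \(\operatorname{int}M\) satisfies \(\mathcal{J}_{D}(B_{r}(p))=2\pi r/(\pi r^{2})=2/r\), so \(h_{1,D}\leq 2/r\) for every admissible \(r<1/2\), and hence \(h_{1,D}\leq 4\).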
Our Dirichlet Cheeger constants generalise Cheeger's original constant for manifolds with boundary [17], while our Neumann Cheeger constants generalise the boundaryless Cheeger constant of [17], the Neumann Cheeger constant of [8], and the \(k\)th boundaryless Cheeger constants [53] for \(k\geq 1\). Our \(h_{k,\emptyset}\) is exactly that defined in [53, p.326]: Miclo requires that the \(A_{i}\) are connected and that each connected component of \(M\backslash A_{i}\) contains some \(A_{j}\) for \(j\neq i\), but Miclo notes that this does not change the value of \(h_{k,\emptyset}\).
Cheeger [17] and Buser [8] (see also [60, p.499]) consider \(h_{k,\emptyset}\) and \(h_{k,N}\) for \(k=2\) only, and they require that \(\{A_{1},A_{2}\}\) are a \(2\)-partition for \(M\) (up to sets of measure zero) with \(\partial^{M}A_{1}=\partial^{M}A_{2}\), instead of allowing \(2\)-packings of \(M\). This does not affect the value of \(h_{2,N}\). To see this, choose any \(\{A_{1},A_{2}\}\in\mathscr{P}_{2,N}(M)\) with \(\mu_{n-1}(\partial^{M}A_{1})\leq\mu_{n-1}(\partial^{M}A_{2})\), and define the \(2\)-packing \(\{\bar{A}_{1},\bar{A}_{2}\}\) by \(\bar{A}_{1}:=\overline{A_{1}}\backslash\partial^{M}\overline{A_{1}}\), \(\bar{A}_{2}:=M\backslash\overline{A_{1}}\). Then \(\partial^{M}\bar{A}_{1}=\partial^{M}\bar{A}_{2}\), and \(\{\bar{A}_{1},\bar{A}_{2}\}\) is a \(2\)-partition for \(M\). The fact \(\partial^{M}\bar{A}_{1}\subset\partial^{M}A_{1}\) implies \(\{\bar{A}_{1},\bar{A}_{2}\}\in\mathscr{P}_{2,N}(M)\), and since \(\mu_{n-1}(\partial^{M}\bar{A}_{2})\leq\mu_{n-1}(\partial^{M}A_{1})\leq\mu_{n-1}(\partial^{M}A_{2})\) and \(\mu(\bar{A}_{2})\geq\mu(A_{2})\), we have \(\mathcal{J}_{N}(\{\bar{A}_{1},\bar{A}_{2}\})\leq\mathcal{J}_{N}(\{A_{1},A_{2}\})\).
Our Cheeger constants are defined slightly differently from those in [4, 24], who take the infimum over arbitrary packings of \(M\) and use _perimeter_ instead of Hausdorff measure. Bobkov and Parini's Cheeger constant is equal to \(h_{k,D}\) for unweighted full-dimensional submanifolds of \(\mathbb{R}^{n}\) ([4, Proposition 3.6] and e.g. [1, Proposition 3.62]), while \(h_{2,N}\) gives an upper bound on de Ponti and Mondino's Cheeger constant on unweighted Riemannian manifolds by [59, Proposition 2.37] ([24] defines perimeter differently to [24], but they are equal on unweighted manifolds by e.g. [59, remark on Definition 2.33 and Theorems 2.38-2.39]). Yau [60, p.499] also defines a variant of \(h_{2,N}\) which does not require each \(\partial A_{i}\) to be smooth.
### Eigenvalues of the weighted Laplace-Beltrami operator
Let \(W^{1,2}(M;\mu)\) denote the Sobolev space of \(L^{2}(M;\mu)\) functions \(f\) with \(L^{2}(M;\mu)\)-integrable weak derivatives \(\nabla f\), and let \(W^{1,2}_{0}(M;\mu)\) denote the completion in the Sobolev norm \(\|\cdot\|_{W^{1,2}(M;\mu)}^{2}:=\|\cdot\|_{L^{2}(M;\mu)}^{2}+\|\nabla\cdot\|_ {L^{2}(M;\mu)}^{2}\) of the set of \(C^{\infty}(M)\) functions with compact support in \(\operatorname{int}M\) (see e.g. [15, pp.14-15]).
For any \(C^{1}\) vector field \(X\) on \(M\), let \(\operatorname{div}X\) denote the _divergence_ of \(X\) with respect to \(\operatorname{d}V\) (defined in e.g. [33, p.96] or [16, Prop. III.7.1 and proof]). Writing the Radon-Nikodym derivative of \(\mu\) as \(\operatorname{d}\mu=e^{\phi}\operatorname{d}V\), let \(\operatorname{div}_{\mu}X\) denote the _weighted divergence_\(\operatorname{div}_{\mu}X:=e^{-\phi}\operatorname{div}(e^{\phi}X)\) (see e.g. [33, p.96]). Then the _weighted Laplace-Beltrami operator_\(\Delta_{\mu}\) is defined for \(f\in C^{2}(M)\) by
\[\Delta_{\mu}f:=\operatorname{div}_{\mu}\nabla f=e^{-\phi}\operatorname{div}(e^{ \phi}\nabla f). \tag{5}\]
We consider the _Neumann_ and _Dirichlet eigenproblems_ for \(\Delta_{\mu}\). The _Neumann eigenproblem_ is as follows: find \(u\in C^{\infty}(M)\) and \(\lambda\in\mathbb{R}\), such that
\[\Delta_{\mu}u=\lambda u, \tag{6}\]
subject to the _Neumann boundary condition_ (if \(\partial M\neq\emptyset\))
\[\frac{\partial u}{\partial\mathbf{n}}=0\quad\text{on }\partial M, \tag{7}\]
where \(\mathbf{n}\) denotes the outward unit normal to \(\partial M\). Solutions \(u\) and \(\lambda\) are called _eigenfunctions_ and _eigenvalues_ of \(\Delta_{\mu}\). There is an orthogonal Schauder basis for \(L^{2}(M;\mu)\) consisting of eigenfunctions of (6) satisfying (7) (see e.g. [41, Theorem 4.3.1] or [3, ch. III, Theorem 18]). The corresponding eigenvalues form a non-positive decreasing sequence accumulating only at \(-\infty\) (see e.g. [41, Theorem 4.3.1] or [37, Theorems 11.5.1-11.5.2]). We denote the eigenvalues as \(0=\lambda_{1,N}>\lambda_{2,N}\geq\lambda_{3,N}\geq\ldots\), or as \(0=\lambda_{1,\emptyset}>\lambda_{2,\emptyset}\geq\lambda_{3,\emptyset}\geq\ldots\) in the special case \(\partial M=\emptyset\). The eigenvalue ordering induces an ordering on the corresponding eigenfunctions, so we will occasionally write the basis of eigenfunctions as \(u_{1},u_{2},\ldots\).
The _Dirichlet eigenproblem_ consists of finding \(u\in C^{\infty}(M)\) and \(\lambda\in\mathbb{R}\) which solves (6), subject to the _Dirichlet boundary condition_,
\[u=0\quad\text{on }\partial M. \tag{8}\]
We assume \(\partial M\neq\emptyset\) when we consider Dirichlet boundary conditions. There is also an orthogonal Schauder basis for \(L^{2}(M;\mu)\) of eigenfunctions of (6) satisfying (8). In this case, the eigenvalues form a strictly negative decreasing sequence accumulating only at \(-\infty\), and we denote them \(0>\lambda_{1,D}>\lambda_{2,D}\geq\lambda_{3,D}\geq\ldots\).
The eigenvalues of \(\Delta_{\mu}\) have the following variational characterisation (the proof of [15, p.16] extends directly to the weighted case).
**Theorem 2.3**.: _Let \((M,g,\mu)\) be a weighted manifold, and let \(u_{1},u_{2},\ldots\) denote a complete orthogonal basis of Neumann (resp. Dirichlet) eigenfunctions of \(\Delta_{\mu}\) corresponding to \(\lambda_{1,N},\lambda_{2,N},\ldots\) (resp. \(\lambda_{1,D},\lambda_{2,D},\ldots\)). Then for each \(k\geq 1\), we have_
\[\lambda_{k,N}=-\inf_{\begin{subarray}{c}f\in W^{1,2}(M;\mu)\\ \int_{M}u_{i}f\,\mathrm{d}\mu=0,\forall i\in\{1,\ldots,k-1\}\end{subarray}}\frac{\||\nabla f|\|_{L^{2}(M;\mu)}^{2}}{\|f\|_{L^{2}(M;\mu)}^{2}}, \tag{9}\]
_resp._
\[\lambda_{k,D}=-\inf_{\begin{subarray}{c}f\in W^{1,2}_{0}(M;\mu)\\ \int_{M}u_{i}f\,\mathrm{d}\mu=0,\forall i\in\{1,\ldots,k-1\}\end{subarray}}\frac{\||\nabla f|\|_{L^{2}(M;\mu)}^{2}}{\|f\|_{L^{2}(M;\mu)}^{2}}, \tag{10}\]
_with equality if and only if \(f\) is a Neumann (resp. Dirichlet) eigenfunction of \(\Delta_{\mu}\) with eigenvalue \(\lambda_{k,N}\) (resp. \(\lambda_{k,D}\))._
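As a quick numerical illustration of this variational characterisation (a sketch that is not part of the paper's results, using a one-dimensional interval purely for intuition, outside the \(n\geq 2\) setting assumed above), the following code discretises the unweighted Dirichlet Laplacian on \([0,\pi]\) with second-order finite differences; the grid size and discretisation are assumptions of the sketch. The computed eigenvalues approximate \(-1,-4,-9,\ldots\), and the Rayleigh quotient of the \(k\)th discrete eigenfunction approximately reproduces \(-\lambda_{k,D}\).

```python
import numpy as np

# Finite-difference Dirichlet Laplacian on (0, pi): an illustrative sketch only.
N = 500                               # interior grid points (assumed resolution)
dx = np.pi / (N + 1)

main = -2.0 * np.ones(N) / dx**2
off = np.ones(N - 1) / dx**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # negative definite

evals, evecs = np.linalg.eigh(L)                 # ascending (most negative first)
evals, evecs = evals[::-1], evecs[:, ::-1]       # reorder: lambda_1 > lambda_2 > ...
print("first Dirichlet eigenvalues:", np.round(evals[:4], 4))   # ~ -1, -4, -9, -16

# Rayleigh quotient |||grad u|||^2 / ||u||^2 of the third eigenfunction ~ -lambda_3
u = evecs[:, 2]
grad = np.gradient(u, dx)
rayleigh = np.sum(grad**2) / np.sum(u**2)
print("Rayleigh quotient:", round(rayleigh, 4), " vs -lambda_3:", round(-evals[2], 4))
```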
A _nodal domain_ of a function \(f\in C^{0}(M)\) is a maximal connected component of \(M\) where \(f\) is positive or negative. The number of nodal domains in the \(k\)th eigenfunction of \(\Delta_{\mu}\) under Dirichlet or Neumann boundary conditions is bounded above by \(k\). Courant [20, p.452] proves this bound assuming each nodal domain has piecewise smooth boundary. Chavel [15, pp.19-23] gives a proof in the boundaryless and Dirichlet cases which avoids the piecewise smooth boundary requirement via an approximation argument. Using a more general version of Green's formula [35, Proposition 5.8 and remark after Proposition 5.10], we prove Courant's nodal domain theorem in the Neumann case without the piecewise-smooth boundary assumption, since we could not readily find this in the literature.
**Theorem 2.4** (Courant's nodal domain theorem).: _Let \((M,g,\mu)\) be a weighted manifold. Then the \(k\)th Neumann or Dirichlet eigenfunction \(u_{k}\) of \(\Delta_{\mu}\) on \(M\) has at most \(k\) nodal domains._
Proof.: We prove only the Neumann case; the proof in [15, pp.19-23] for the Dirichlet case extends immediately to weighted manifolds. Suppose, for contradiction, that \(u_{k}\) has more than \(k\) nodal domains, and let \(G_{1},\ldots,G_{k},G_{k+1},\ldots\) denote them. For each \(j=1,\ldots,k\), define \(\psi_{j}\in W^{1,2}(M;\mu)\) by
\[\psi_{j}:=\begin{cases}u_{k}|_{G_{j}},&\text{on }G_{j},\\ 0,&\text{elsewhere}.\end{cases}\]
Using Chavel's approximation argument [15, pp.21-22] and the version of Green's formula in [35, Proposition 5.8 and remark after Proposition 5.10], as in (23)-(25) below, for each \(j\) we have \(\frac{\||\nabla\psi_{j}||_{L^{2}(G_{j};\mu)}^{2}}{\|\psi_{j}\|_{L^{2}(G_{j};\mu )}^{2}}=-\lambda_{k,N}\). One can select constants \(\alpha_{1},\ldots,\alpha_{k}\in\mathbb{R}\), not all zero, such that
\[f:=\sum_{j=1}^{k}\alpha_{j}\psi_{j}\]
satisfies
\[\int_{M}u_{i}f\,\mathrm{d}\mu=0,\]
for each \(i=1,\ldots,k-1\) (see e.g. [15, p.17]). Noting that the \(\psi_{j}\) are disjointly supported, we have
\[\frac{\||\nabla f|\|_{L^{2}(M;\mu)}^{2}}{\|f\|_{L^{2}(M;\mu)}^{2}}=\frac{\sum_{j=1}^{k}\alpha_{j}^{2}\||\nabla\psi_{j}|\|_{L^{2}(M;\mu)}^{2}}{\sum_{j=1}^{k}\alpha_{j}^{2}\|\psi_{j}\|_{L^{2}(M;\mu)}^{2}}=\frac{-\lambda_{k,N}\sum_{j=1}^{k}\alpha_{j}^{2}\|\psi_{j}\|_{L^{2}(M;\mu)}^{2}}{\sum_{j=1}^{k}\alpha_{j}^{2}\|\psi_{j}\|_{L^{2}(M;\mu)}^{2}}=-\lambda_{k,N}.\]
Thus, Theorem 2.3 implies \(f\) is an eigenfunction of \(\Delta_{\mu}\) with eigenvalue \(\lambda_{k,N}\) vanishing identically on \(G_{k+1}\). But then Aronszajn's unique continuation principle [2] implies that \(f\) vanishes identically on \(M\), which is a contradiction.
## 3 Classical and higher Cheeger inequalities
### Cheeger inequalities for the first nonzero eigenvalue
The classical Cheeger inequalities provide an explicit bound away from \(0\) for \(\lambda_{1,D}\) or \(\lambda_{2,N}\), in terms of \(h_{1,D}\) or \(h_{2,N}\). Cheeger [17] proves the boundaryless and Dirichlet cases, while Maz'ya [51] (summarised in English in e.g. [32, Sec. 6]) independently proves a slightly stronger result some years prior. Yau [60, Sec. 5, Corollary 1], and later Buser [8, Theorem 1.6], prove the Neumann case. The Cheeger inequality can also be extended to metric measure spaces (including weighted manifolds). De Ponti and Mondino [24, Theorem 3.6] and Funano [29, Lemma 7.1] give variants of the Cheeger inequality for metric spaces (including weighted manifolds), with a Rayleigh quotient in place of an eigenvalue.
Several other upper bounds on eigenvalues of \(\Delta\) exist, which do not depend on the Cheeger constant (see for example [34] and references therein). Fewer bounds exist on the Cheeger constant: Ledoux [43, Theorem 5.3] and Milman [54, Theorem 1.5] have obtained bounds on the Cheeger constant in terms of concentration inequalities, while Dai et al [21, Theorem 1.4] have obtained an upper bound on the Cheeger constant on convex manifolds in terms of the manifold's dimension, Ricci curvature and diameter.
**Theorem 3.1** (Cheeger's inequality).:
* _[_17_]__: Let_ \((M,g)\) _be an unweighted, boundaryless, compact smooth Riemannian manifold. Then_ \[\lambda_{2,\emptyset}\leq-\frac{1}{4}h_{2,\emptyset}^{2}.\] (11)
* _[_17, 51_]_ _(see also_ _[_32_, Sec. 6]__): Let_ \((M,g)\) _be an unweighted, connected, compact smooth Riemannian manifold with nonempty, smooth boundary. Then_ \[\lambda_{1,D}\leq-\frac{1}{4}h_{1,D}^{2}.\] (12)
* _[_60_, Sec. 5, Corollary 1],_ _[_8_, Theorem 1.6]__: Let_ \((M,g)\) _be an unweighted, compact smooth Riemannian manifold with nonempty, smooth boundary. Then_ \[\lambda_{2,N}\leq-\frac{1}{4}h_{2,N}^{2}.\] (13)
These results extend directly to weighted manifolds, and even to more general metric measure spaces (see e.g. [24, Theorem 3.6]).
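As a quick sanity check of (11) on a concrete manifold (a standard computation, included only for illustration), let \(M\) be the flat torus \(\mathbb{R}^{2}/(2\pi\mathbb{Z})^{2}\) with \(\mu=\operatorname{Leb}\). The eigenfunctions \(e^{i(jx+ky)}\) have eigenvalues \(-(j^{2}+k^{2})\), so \(\lambda_{2,\emptyset}=-1\) and (11) gives \(h_{2,\emptyset}\leq 2\sqrt{-\lambda_{2,\emptyset}}=2\). An explicit \(2\)-packing does much better: writing \(S^{1}:=\mathbb{R}/2\pi\mathbb{Z}\), the open cylinders \(A_{1}:=(0,\pi)\times S^{1}\) and \(A_{2}:=(\pi,2\pi)\times S^{1}\) each have \(\mu_{1}(\partial A_{i})=4\pi\) and \(\mu(A_{i})=2\pi^{2}\), so \(h_{2,\emptyset}\leq\mathcal{J}_{\emptyset}(\{A_{1},A_{2}\})=2/\pi\). The eigenvalue bound is thus far from tight in this example.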
We prove that some of the superlevel sets within any nodal domain of any eigenfunction of \(\Delta\) have an upper bound on their Cheeger ratio, in terms of the corresponding eigenvalue (Theorem 3.2). This yields
a constructive version of Theorem 3.1 (Corollary 3.3), and also allows us to prove a constructive higher Cheeger inequality (Theorem 3.7).
For any nodal domain \(G\) of a function \(f\in C^{0}(M)\), we let \(\operatorname{range}(f^{2}|_{G}):=\{s^{2}:s\in f(G)\}\), and for any \(s\in\operatorname{range}(f^{2}|_{G})\), we define the \(s\)_-superlevel set_ of \(f^{2}\) on \(G\) as
\[G_{s}:=\{p\in G:f(p)^{2}>s\}. \tag{14}\]
**Theorem 3.2**.: _Let \((M,g,\mu)\) be an \(n\)-dimensional weighted manifold. Let \(u\) be some nonconstant Neumann, resp. Dirichlet, eigenfunction of \(\Delta_{\mu}\), with eigenvalue \(\lambda\). Let \(G\subset M\) be any nodal domain of \(u\). Then the set_
\[S_{G}:=\bigg{\{}s\in\operatorname{range}(u^{2}|_{G}):G_{s}\in \mathscr{P}_{N}(M),\lambda\leq-\frac{1}{4}\mathcal{J}_{N}(G_{s})^{2}\bigg{\}}, \tag{15}\]
_resp._
\[S_{G}:=\bigg{\{}s\in\operatorname{range}(u^{2}|_{G}):G_{s}\in \mathscr{P}_{D}(M),\lambda\leq-\frac{1}{4}\mathcal{J}_{D}(G_{s})^{2}\bigg{\}}, \tag{16}\]
_has positive Lebesgue measure satisfying the lower bound (27)._
Proof.: We prove only the Neumann case; the Dirichlet case follows similarly. Firstly, we use the coarea formula to find an expression (20) for the weighted average (19) of \(\mathcal{J}_{N}(G_{s})\). Secondly, we use a Rayleigh quotient argument to bound \(\lambda_{2,N}\) in terms of this weighted average (equation (26)). Lastly, we obtain our lower bound on the measure of \(S_{G}\).
The coarea formula (see e.g. [7, 13.4.2]) implies
\[\int_{G}|\nabla(u^{2})|\,\mathrm{d}\mu=\int_{\operatorname{range}(u^{2}|_{G}) }\mu_{n-1}(\{p\in G:u^{2}(p)=s\})\,\mathrm{d}s. \tag{17}\]
It follows immediately from Sard's theorem (e.g. [44, Theorem 6.10]), [55, Theorem 6.2.8] and the reasoning for [55, Lemma 6.2.7] that \(G_{s}\in\mathscr{P}_{N}(M)\) and \(\partial^{M}G_{s}=\{p\in G:u(p)^{2}=s\}\) for almost every \(s\in\operatorname{range}(u^{2}|_{G})\). For such \(s\), we have \(\mu_{n-1}(\{p\in G:u(p)^{2}=s\})=\mu_{n-1}(\partial^{M}G_{s})=\mathcal{J}_{N}(G _{s})\mu(G_{s})\), by the definition (1). Hence, we have
\[\int_{G}|\nabla(u^{2})|\,\mathrm{d}\mu=\int_{\operatorname{range}(u^{2}|_{G}) }\mathcal{J}_{N}(G_{s})\mu(G_{s})\,\mathrm{d}s. \tag{18}\]
Define
\[\bar{h}:=\frac{1}{\|u\|_{L^{2}(G;\mu)}^{2}}\int_{\operatorname{range}(u^{2}|_{G })}\mathcal{J}_{N}(G_{s})\mu(G_{s})\,\mathrm{d}s, \tag{19}\]
then \(\bar{h}\) is the weighted average of \(\mathcal{J}_{N}(G_{s})\) over \(\operatorname{range}(u^{2}|_{G})\), according to the probability measure \(\mathbb{P}\) on \(\operatorname{range}(u^{2}|_{G})\) given by \(\mathbb{P}(L):=\int_{L}\frac{\mu(G_{s})}{\|u\|_{L^{2}(G;\mu)}^{2}}\,\mathrm{d}s\). Then (18) and (19) yield
\[\frac{\int_{G}|\nabla(u^{2})|\,\mathrm{d}\mu}{\|u\|_{L^{2}(G;\mu)}^{2}}=\bar{h}. \tag{20}\]
Now, the Cauchy-Schwarz inequality implies
\[2\||\nabla u|\|_{L^{2}(G;\mu)}\|u\|_{L^{2}(G;\mu)}\geq 2\int_{G}u|\nabla u|\, \mathrm{d}\mu=\int_{G}|\nabla(u^{2})|\,\mathrm{d}\mu. \tag{21}\]
Using (21) and (20), we obtain
\[\frac{\||\nabla u|\|_{L^{2}(G;\mu)}^{2}}{\|u\|_{L^{2}(G;\mu)}^{2}}\geq\frac{ \big{(}\int_{G}|\nabla(u^{2})|\,\mathrm{d}\mu\big{)}^{2}}{4\|u\|_{L^{2}(G;\mu) }^{4}}=\frac{1}{4}\bar{h}^{2}. \tag{22}\]
We can write \(\||\nabla u||_{L^{2}(G;\mu)}^{2}\) as
\[\||\nabla u||_{L^{2}(G;\mu)}^{2}=\int_{G}(\nabla u\cdot\nabla u)\, \mathrm{d}\mu=\int_{G}\nabla u\cdot(e^{\phi}\nabla u)\,\mathrm{d}V. \tag{23}\]
Applying Green's formula (e.g. [35, Proposition 5.8 and remark after Proposition 5.10]) to \(e^{\phi}u\nabla u\) on \(G\) via a short approximation argument1, recalling (5) and noting that \(u=0\) on \(\partial^{M}G\) and \(\frac{\partial u}{\partial\mathbf{n}}=0\) on \(\partial G\cap\partial M\) (where \(\mathbf{n}\) denotes the outward normal of \(M\)), we obtain
Footnote 1: We apply Green’s formula via an approximation argument, similarly to e.g. [15, pp21–22]. We showed above that \(G_{s}\in\mathscr{P}_{N}(M)\) for almost every \(s\in\operatorname{range}(u^{2}|_{G})\), but it does not follow that \(G\in\mathscr{P}_{N}(M)\), or that \(G\) has locally finite perimeter. Instead, choose some sequence \(s_{1},s_{2},\ldots\in\operatorname{range}(u^{2}|_{G})\) converging to \(0\), such that \(G_{s_{j}}\in\mathscr{P}_{N}(M)\) for each \(j\). Then taking \(u_{j}:=u-s_{j}\) and applying Green’s formula to \(u_{j}e^{\phi}\nabla u_{j}\) on \(G_{s_{j}}\), and recalling (5), yields \(\int_{G_{s_{j}}}\nabla u_{j}\cdot(e^{\phi}\nabla u_{j})\,\mathrm{d}V=-\int_{G _{s_{j}}}u_{j}\cdot\Delta_{\mu}u_{j}\,\mathrm{d}\mu+\int_{(\partial M\,G_{s_{ j}})\cup(\partial M\cap G_{s_{j}})}u_{j}\,\frac{\partial u_{j}}{\partial \mathbf{n}}\,\mathrm{d}\mu_{n-1}\), where \(\mathbf{n}\) is an outward unit normal to \(\partial M\) or \(\partial^{M}G_{s_{j}}\). But \(u_{j}=0\) on \(\partial^{M}G_{s_{j}}\) and \(\frac{\partial u_{j}}{\partial\mathbf{n}}=0\) on \(\partial M\cap G_{s_{j}}\), so the second integral disappears, and taking \(j\to\infty\), we obtain (24).
\[\int_{G}\nabla u\cdot(e^{\phi}\nabla u)\,\mathrm{d}V=-\int_{G}u \cdot\Delta_{\mu}u\,\mathrm{d}\mu+0. \tag{24}\]
Since \(u\cdot\Delta_{\mu}u=\lambda u^{2}\), we have
\[-\int_{G}u\cdot\Delta_{\mu}u\,\mathrm{d}\mu=-\lambda\|u\|_{L^{ 2}(G;\mu)}^{2}. \tag{25}\]
Hence (23)-(25) and (22) imply
\[\lambda=-\frac{\||\nabla u\|_{L^{2}(G;\mu)}^{2}}{\|u\|_{L^{2}(G; \mu)}^{2}}\leq-\frac{1}{4}\bar{h}^{2}. \tag{26}\]
But \(\bar{h}\) is a weighted average over \(s\in\operatorname{range}(u^{2}|_{G})\) of \(\mathcal{J}_{N}(G_{s})\), so the set \(S_{G}^{\prime}:=\{s\in\operatorname{range}(u^{2}|_{G}):\mathcal{J}_{N}(G_{s}) \leq\bar{h}\}\) has positive measure. By (26) and the definition (15), we have \(S_{G}^{\prime}\subseteq S_{G}\), so \(S_{G}\) must also have positive measure.
We can put a lower bound on the measure of \(S_{G}\), as follows. Let \(\mathrm{h}(s):=\mathcal{J}_{N}(G_{s})\). Then we have
\[\int_{S_{G}^{\prime}}(\bar{h}-\mathrm{h}(s))\frac{\mu(G_{s})}{ \|u\|_{L^{2}(G;\mu)}^{2}}\,\mathrm{d}s=\int_{S_{G}^{\prime}}(\bar{h}-\mathrm{h }(s))\,\mathrm{d}\mathbb{P}(s)=\frac{\|\bar{h}-\mathrm{h}\|_{L^{1}( \operatorname{range}(u^{2}|_{G});\mathbb{P})}}{2},\]
and
\[\int_{S_{G}^{\prime}}(\bar{h}-\mathrm{h}(s))\frac{\mu(G_{s})}{ \|u\|_{L^{2}(G;\mu)}^{2}}\,\mathrm{d}s \leq\int_{S_{G}^{\prime}}\frac{\mu(G_{s})}{\|u\|_{L^{2}(G;\mu)}^{ 2}}\,\mathrm{d}s\left(\bar{h}-\inf_{s\in\operatorname{range}(u^{2}|_{G})} \mathrm{h}(s)\right)\] \[\leq\operatorname{Leb}(S_{G}^{\prime})\frac{\mu(G)}{\|u\|_{L^{2}( G;\mu)}^{2}}\left(\bar{h}-\inf_{s\in\operatorname{range}(u^{2}|_{G})}\mathrm{h}(s)\right)\] \[\leq\operatorname{Leb}(S_{G})\frac{\mu(G)}{\|u\|_{L^{2}(G;\mu)}^{ 2}}\left(\bar{h}-\inf_{s\in\operatorname{range}(u^{2}|_{G})}\mathrm{h}(s) \right).\]
Thus, we have
\[\operatorname{Leb}(S_{G})\geq\frac{\|\bar{h}-\mathrm{h}\|_{L^{1}( \operatorname{range}(u^{2}|_{G});\mathbb{P})}\|u\|_{L^{2}(G;\mu)}^{2}}{2(\bar{h}- \inf_{s\in\operatorname{range}(u^{2}|_{G})}\mathrm{h}(s))\mu(G)}. \tag{27}\]
A similar result holds in the Dirichlet case, replacing \(\mathcal{J}_{N}\) with \(\mathcal{J}_{D}\) in the definition of \(\bar{h},\mathrm{h},\mathbb{P}\), and noting that \(\overline{G_{s}}\cap\partial M=\emptyset\) for all \(s\neq 0\).
**Corollary 3.3**.: _Let \((M,g,\mu)\) be a weighted manifold. For each Neumann eigenfunction \(u\) corresponding to \(\lambda_{2,N}\), there is a nodal domain \(G\) of \(u\) such that the set \(S_{G}\) defined in (15) has positive measure, and for each \(s\in S_{G}\), defining \(G_{s}\) as in (14), the 2-packing \(\{G_{s},M\backslash\overline{G_{s}}\}\) satisfies_
\[\lambda_{2,N}\leq-\frac{1}{4}\mathcal{J}_{N}(\{G_{s},M\backslash\overline{G_ {s}}\})^{2}. \tag{28}\]
_If \(\partial M\neq\emptyset\), there is a unique Dirichlet eigenfunction \(u\) corresponding to \(\lambda_{1,D}\) (up to scaling), and this \(u\) has only a single nodal domain \(G=M\backslash\partial M\). The set \(S_{G}\) defined in (16) has positive measure, and for each \(s\in S_{G}\), the set \(G_{s}\) defined in (14) satisfies_
\[\lambda_{1,D}\leq-\frac{1}{4}\mathcal{J}_{D}(G_{s})^{2}. \tag{29}\]
Proof.: By Theorem 2.4 and e.g. [41, Propositions 4.5.8-4.5.9], the eigenfunction corresponding to \(\lambda_{1,D}\) has one nodal domain \(G=M\backslash\partial M\), while each eigenfunction corresponding to \(\lambda_{2,N}\) has two nodal domains, and Theorem 3.2 immediately yields (29). In the Neumann case, let \(G\) denote whichever nodal domain of \(u\) satisfies \(\mu(G)\leq\mu(M\backslash\overline{G})\). Then for each \(s\in S_{G}\), the 2-packing \(\{G_{s},M\backslash\overline{G_{s}}\}\) satisfies \(\mathcal{J}_{N}(\{G_{s},M\backslash\overline{G_{s}}\})=\mathcal{J}_{N}(G_{s})\), and Theorem 3.2 yields (28).
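For a concrete illustration of Corollary 3.3 (a simple worked computation, not needed for what follows), take \(M:=[0,\pi]\times[0,1]\) with the Euclidean metric and \(\mu=\operatorname{Leb}\). Then \(\lambda_{2,N}=-1\), with eigenfunction \(u(x,y)=\cos x\), whose two nodal domains are \(\{\cos x>0\}\) and \(\{\cos x<0\}\). Taking \(G:=\{(x,y)\in M:\cos x>0\}\), the superlevel sets are \(G_{s}=\{(x,y)\in M:x<\arccos\sqrt{s}\}\), with

\[\mathcal{J}_{N}(G_{s})=\frac{1}{\arccos\sqrt{s}},\qquad s\in(0,1),\]

so the condition \(\lambda_{2,N}\leq-\frac{1}{4}\mathcal{J}_{N}(G_{s})^{2}\) holds precisely when \(\arccos\sqrt{s}\geq\frac{1}{2}\). Hence \(S_{G}=(0,\cos^{2}(1/2)]\approx(0,0.77]\), which indeed has positive Lebesgue measure, and each \(\{G_{s},M\backslash\overline{G_{s}}\}\) with \(s\in S_{G}\) is a Neumann \(2\)-packing satisfying (28).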
### Higher Cheeger inequalities
On boundaryless manifolds, Miclo [53] and Funano [29] have proven Cheeger inequalities for \(h_{k,\emptyset}\) for all \(k\geq 3\). Both papers make use of higher Cheeger inequalities for the graph Laplacian on finite graphs [45], following a procedure outlined by Miclo in [52, Conjecture 13]. Miclo states these results for unweighted manifolds, but notes that they also apply to weighted manifolds with \(C^{\infty}\) measures [53, p.327].
**Theorem 3.4** ([53, Theorem 7]).: _There is a universal constant \(\hat{\eta}>0\) such that, for any boundaryless weighted manifold \((M,g,\mu)\) and for all \(k\geq 1\),_
\[\lambda_{k,\emptyset}\leq-\frac{\hat{\eta}}{k^{6}}h_{k,\emptyset}^{2}. \tag{30}\]
**Theorem 3.5** ([53, Theorem 13]).: _There is a universal constant \(\eta\) such that, for any boundaryless weighted manifold \((M,g,\mu)\) and for all \(k\geq 1\),_
\[\lambda_{2k,\emptyset}\leq-\frac{\eta}{\log(k+1)}h_{k,\emptyset}^{2}. \tag{31}\]
The factor of 2 in the \(\lambda_{2k,\emptyset}\) in the previous theorem is arbitrary. Indeed, one can obtain the following from Miclo's proof of the previous theorem: there is a universal constant \(\tilde{\eta}\) such that, for any boundaryless weighted manifold \((M,g,\mu)\) and for all \(k\geq 1\) and \(0<\delta<1\),
\[\lambda_{k,\emptyset}\leq-\frac{\tilde{\eta}\delta^{6}}{\log(k+1)}h_{\lceil(1 -\delta)k\rceil,\emptyset}^{2}. \tag{32}\]
In particular, taking \(\delta=\frac{1}{2}\), we have
\[\lambda_{2k-1,\emptyset}\leq-\frac{\tilde{\eta}}{64\log(k+1)}h_{k,\emptyset} ^{2}. \tag{33}\]
We are not aware of a closed-form expression for the constants in (30)-(32).
Parini [56, Theorem 5.4] notes that the classical proof of the \(k=2\) Neumann Cheeger inequality (13) extends to the \(k=2\) Dirichlet case. Parini states his inequality for eigenfunctions of the \(p\)-Laplacian for \(1<p<\infty\) on subsets of \(\mathbb{R}^{n}\) with Lipschitz boundary, but the same argument applies on weighted manifolds.
**Theorem 3.6**.: _Let \((M,g,\mu)\) be a weighted manifold. Then_
\[\lambda_{2,D}\leq-\frac{1}{4}h_{2,D}^{2}. \tag{34}\]
Parini's approach does not generalise directly to higher \(k\), since the eigenfunctions corresponding to \(\lambda_{k,N}\) or \(\lambda_{k,D}\) can sometimes have very few nodal domains. Indeed, for any boundaryless \(n\geq 3\)-dimensional manifold \(M\) and any \(k\geq 1\), there is a metric \(g\) on \(M\) such that the second eigenspace is \(k\)-dimensional [19, p.254], and hence \(\lambda_{k+1,\emptyset}=\lambda_{2,\emptyset}\).
Madafiglio's (unpublished) Honours thesis [50] provides a generalisation of Theorem 3.6. Madafiglio observes that if some eigenfunction with eigenvalue \(\lambda_{k,D}\) has \(r_{k}\geq 2\) nodal domains, then \(\lambda_{k,D}\) gives an upper bound on \(h_{r_{k},D}\). The Neumann case follows by similar reasoning. Using Theorem 3.2, we can obtain a constructive version of Madafiglio's result.
**Theorem 3.7** (Higher Cheeger inequality).: _Let \((M,g,\mu)\) be a weighted manifold. For each \(k\geq 1\), let \(r_{k}\) denote the number of nodal domains in any Neumann (resp. Dirichlet) eigenfunction \(u\) of \(\Delta_{\mu}\) with eigenvalue \(\lambda\geq\lambda_{k,N}\) (resp. \(\lambda\geq\lambda_{k,D}\))._
1. _We have (with (36) due to [50])_ \[\lambda_{k,N}\leq-\frac{1}{4}h_{r_{k},N}^{2},\] (35) \[\lambda_{k,D}\leq-\frac{1}{4}h_{r_{k},D}^{2}.\] (36)
2. _Let_ \(u\) _be any Neumann (resp. Dirichlet) eigenfunction of_ \(\Delta_{\mu}\) _with_ \(r_{k}\) _nodal domains, and let_ \(G^{1},\ldots,G^{r_{k}}\subset M\) _denote the nodal domains of_ \(u\)_. For each_ \(i\) _and each_ \(s\in\operatorname{range}(u^{2}|_{G^{i}})\)_, let_ \(G^{i}_{s}\) _denote the_ \(s\)_-superlevel set of_ \(u^{2}\) _on_ \(G^{i}\)_, and define_ \(S_{G^{i}}\) _as in (_15_) or (_16_). Then_ \(S_{G^{1}}\times\ldots\times S_{G^{r_{k}}}\) _has positive Lebesgue measure, and for each_ \(\{s_{1},\ldots,s_{r_{k}}\}\in S_{G^{1}}\times\ldots\times S_{G^{r_{k}}}\)_, the collection_ \(\mathcal{A}_{r_{k}}:=\{G^{1}_{s_{1}},\ldots,G^{r_{k}}_{s_{r_{k}}}\}\) _is a Neumann (resp. Dirichlet)_ \(r_{k}\)_-packing of_ \(M\) _satisfying_ \(\lambda_{k,N}\leq-\frac{1}{4}\mathcal{J}_{N}(\mathcal{A}_{r_{k}})^{2}\) _(resp._ \(\lambda_{k,D}\leq-\frac{1}{4}\mathcal{J}_{D}(\mathcal{A}_{r_{k}})^{2}\)_)._
Proof.: The sets \(G^{1}_{s_{1}},\ldots,G^{r_{k}}_{s_{r_{k}}}\) for each \(\{s_{1},\ldots,s_{r_{k}}\}\in S_{G^{1}}\times\ldots\times S_{G^{r_{k}}}\) are pairwise disjoint, since \(G^{1},\ldots,G^{r_{k}}\) are pairwise disjoint, and each \(G^{i}_{s_{i}}\in\mathscr{P}_{N}(M)\) (resp. \(G^{i}_{s_{i}}\in\mathscr{P}_{D}(M)\)) by the definitions (15)-(16). Moreover, by Theorem 3.2, each \(S_{G^{i}}\) has positive Lebesgue measure, so \(S_{G^{1}}\times\ldots\times S_{G^{r_{k}}}\) does too. Hence \(\mathcal{A}_{r_{k}}:=\{G^{1}_{s_{1}},\ldots,G^{r_{k}}_{s_{r_{k}}}\}\) is a Neumann \(r_{k}\)-packing for \(M\) satisfying \(\lambda\leq-\frac{1}{4}\mathcal{J}_{N}(\mathcal{A}_{r_{k}})^{2}\) (resp. a Dirichlet \(r_{k}\)-packing for \(M\) satisfying \(\lambda\leq-\frac{1}{4}\mathcal{J}_{D}(\mathcal{A}_{r_{k}})^{2}\)), and (35) (resp. (36)) follows immediately.
We can rewrite part 1 of Theorem 3.7 as follows: for \(k\geq 1\), let \(\tilde{r}_{k}\) be the index of a Neumann (resp. Dirichlet) eigenfunction of \(\Delta_{\mu}\) with \(\geq k\) nodal domains, when the eigenfunctions are ordered by decreasing eigenvalue. Then
\[\lambda_{\tilde{r}_{k},N}\leq-\frac{1}{4}h_{k,N}^{2} \tag{37}\]
and
\[\lambda_{\tilde{r}_{k},D}\leq-\frac{1}{4}h_{k,D}^{2}, \tag{38}\]
respectively. We can rewrite equations (75)-(76) similarly.
Theorem 3.7 is intended for situations where an eigenfunction of \(\Delta_{\mu}\) has been calculated explicitly, so that the number of nodal domains can be identified. In these cases, Theorem 3.7 has the twin advantages that it applies to manifolds with boundary, and that the constant in (35) is explicit and small. This allows relatively tight bounds on \(h_{k,N}\) or \(h_{k,D}\) to be computed even for large \(k\), particularly when \(\tilde{r}_{k}\) is close to \(k\).
### Creating more nodal domains using linear combinations of eigenfunctions
Theorem 3.7 only allows us to obtain one feature from each of the nodal domains of a single eigenfunction of \(\Delta_{\mu}\). Sometimes, there are \(l\leq k\) features of interest which appear spread among the first \(k\) eigenfunctions, but no single eigenfunction has all \(l\) features appearing in separate nodal domains. One may be able to extract these \(l\) features and obtain a corresponding bound on \(h_{l,N}\) or \(h_{l,D}\), by applying an operator known as _soft thresholding_ to certain linear combinations of the first \(k\) eigenfunctions. Soft thresholding with parameter \(a>0\) is the map \(\tau_{a}:C^{0}(M)\to C^{0}(M)\), \(\tau_{a}(f)(p):=\operatorname{sign}(f(p))\max\{|f(p)|-a,0\}\). Soft thresholding does not increase \(W^{1,2}\)-norm, and is support-decreasing, in the sense that if \(f^{-1}(0)\not\in\{\emptyset,M\}\), then \(\operatorname{supp}(\tau_{a}(f))\subsetneq\operatorname{supp}(f)\). For some manifolds, there are parameters \(\alpha:=\{\alpha_{ij}:1\leq i\leq l,1\leq j\leq k\}\) for which the \(l\) linear combinations \(f_{i;\alpha}:=\sum_{j=1}^{k}\alpha_{ij}u_{j}\), \(i=1,\ldots,l\) of the first \(k\) (Neumann or Dirichlet) eigenfunctions of \(\Delta_{\mu}\) are \(L^{2}\)-close to a collection of \(l\) functions with pairwise disjoint supports [23, Theorem 19]. When the eigenfunctions can be computed or approximated explicitly, the parameters \(\alpha\) can be chosen using an algorithm such as _sparse eigenbasis approximation_[28], as discussed after the proof of Proposition 3.8. Each \(f_{i;\alpha}\) has support covering all of \(M\), as a consequence of the unique continuation theorem [2]2, but the thresholded functions \(\tau_{a}(f_{1,\alpha}),\ldots,\tau_{a}(f_{l,\alpha})\) may have pairwise disjoint supports. Increasing \(a\) decreases the supports of \(\tau_{a}(f_{1,\alpha}),\ldots,\tau_{a}(f_{l,\alpha})\), so one chooses \(a\) as small as required to achieve pairwise disjoint supports for \(\tau_{a}(f_{1,\alpha}),\ldots,\tau_{a}(f_{l,\alpha})\). In Proposition 3.8 below, we give upper bounds on \(h_{l,N}\) or \(h_{l,D}\), and prove that some of the level sets of \(\tau_{a}(f_{1,\alpha}),\ldots,\tau_{a}(f_{l,\alpha})\) yield Cheeger \(l\)-packings whose Cheeger ratios are bounded above, in terms of \(\lambda_{k,N}\) or \(\lambda_{k,D}\) and the proportion of mass lost in the thresholding step. We illustrate Proposition 3.8 in example 1.
Footnote 2: The function \(f_{i,\alpha}\) satisfies \(|\Delta_{\mu}f_{i,\alpha}|\leq|\lambda_{k,N}||f_{i,\alpha}|\) or \(|\Delta_{\mu}f_{i,\alpha}|\leq|\lambda_{k,D}||f_{i,\alpha}|\), so the main theorem of [2] implies that \(f_{i,\alpha}\) cannot be zero in an open neighbourhood unless it is zero everywhere.
**Proposition 3.8**.: _For any weighted manifold \((M,g,\mu)\), let \(u_{1},\ldots,u_{k}\) denote the first \(k\) Neumann, resp. Dirichlet, eigenfunctions of \(\Delta_{\mu}\) on \(M\) for \(k\geq 1\). For any \(1\leq l\leq k\) and any \(\alpha\in\mathbb{R}^{l\times k}\), define \(f_{1,\alpha},\ldots,f_{l,\alpha}\) by \(f_{i,\alpha}:=\sum_{j=1}^{k}\alpha_{ij}u_{j}\). Suppose that for some \(a>0\), the functions \(\tau_{a}(f_{1,\alpha}),\ldots,\tau_{a}(f_{l,\alpha})\) are nonzero and have pairwise disjoint supports. Then each \(\tau_{a}(f_{i,\alpha})\) has a nodal domain \(\tilde{G}^{i}\) such that letting \(\tilde{G}^{i}_{s}\) for \(s\in\operatorname{range}(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}})\) denote the \(s\)-superlevel set of \(\tau_{a}(f_{i,\alpha})^{2}\) on \(\tilde{G}_{i}\), the set_
\[\tilde{S}_{\tilde{G}^{i}}:=\Big{\{}s\in\operatorname{range}(\tau_{a}(f_{i, \alpha})^{2}|_{\tilde{G}^{i}}):\tilde{G}^{i}_{s}\in\mathscr{P}_{N}(M),\frac{ \||\nabla\tau_{a}(f_{i,\alpha})|\|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}{\|\tau_{a}( f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}\geq\frac{1}{4}\mathcal{J}_{N}( \tilde{G}^{i}_{s})^{2}\Big{\}}, \tag{39}\]
_resp._
\[\tilde{S}_{\tilde{G}^{i}}:=\Big{\{}s\in\operatorname{range}(\tau_{a}(f_{i, \alpha})^{2}|_{\tilde{G}^{i}}):\tilde{G}^{i}_{s}\in\mathscr{P}_{D}(M),\frac{ \||\nabla\tau_{a}(f_{i,\alpha})|\|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}{\|\tau_{a}( f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}\geq\frac{1}{4}\mathcal{J}_{D}( \tilde{G}^{i}_{s})^{2}\Big{\}}, \tag{40}\]
_has positive measure and satisfies (45). Moreover, for each \(\{s_{1},\ldots,s_{l}\}\in\tilde{S}_{\tilde{G}^{1}}\times\ldots\times\tilde{S}_{ \tilde{G}^{l}}\), the collection \(\mathcal{A}_{l}:=\{\tilde{G}^{1}_{s_{1}},\ldots,\tilde{G}^{l}_{s_{l}}\}\) is a Neumann \(l\)-packing for \(M\) satisfying_
\[\lambda_{k,N}\leq-\frac{1}{4}\mathcal{J}_{N}(\mathcal{A}_{l})^{2}\max_{1\leq j \leq l}\frac{\|\tau_{a}(f_{j,\alpha})\|^{2}_{L^{2}(M;\mu)}}{\|f_{j,\alpha}\|^{2} _{L^{2}(M;\mu)}}\leq-\frac{1}{4}h^{2}_{l,N}\max_{1\leq j\leq l}\frac{\|\tau_{a }(f_{j,\alpha})\|^{2}_{L^{2}(M;\mu)}}{\|f_{j,\alpha}\|^{2}_{L^{2}(M;\mu)}}, \tag{41}\]
_resp. a Dirichlet \(l\)-packing for \(M\) satisfying_
\[\lambda_{k,D}\leq-\frac{1}{4}\mathcal{J}_{D}(\mathcal{A}_{l})^{2}\max_{1\leq j \leq l}\frac{\|\tau_{a}(f_{j,\alpha})\|^{2}_{L^{2}(M;\mu)}}{\|f_{j,\alpha}\|^{2} _{L^{2}(M;\mu)}}\leq-\frac{1}{4}h^{2}_{l,D}\max_{1\leq j\leq l}\frac{\|\tau_{a }(f_{j,\alpha})\|^{2}_{L^{2}(M;\mu)}}{\|f_{j,\alpha}\|^{2}_{L^{2}(M;\mu)}}. \tag{42}\]
**Example 1**.: Let \((M,g,\mu)\) denote the interval \([0,\pi]\) equipped with Euclidean distance and Lebesgue measure Leb, and let \(u_{1},u_{2},u_{3}\) denote the first three Dirichlet eigenfunctions of \(\Delta\) on \([0,\pi]\) (shown in Figure 2(a)). Using sparse eigenbasis approximation [28, Algorithm 3.1], we take \(\alpha:=\left(\begin{smallmatrix}0.77&0&-0.64\\ 0.45&-0.71&0.54\\ 0.45&0.71&0.54\\ \end{smallmatrix}\right)\). Then the linear combinations \(f_{i,\alpha}:=\sum_{j=1}^{3}\alpha_{ij}u_{j}\), \(i=1,2,3\), of \(u_{1},u_{2},u_{3}\) (shown in Figure 2(b)) are \(L^{2}\)-close to disjointly supported functions. Applying soft thresholding \(\tau_{a}\) with \(a:=0.84\) yields pairwise disjointly supported functions \(\tau_{a}(f_{1,\alpha}),\ldots,\tau_{a}(f_{3,\alpha})\) (shown in Figure 2(c)).
Each \(\tau_{a}(f_{i,\alpha})\) has a single nodal domain \(\tilde{G}^{i}\), and the corresponding positive-measure intervals \(\tilde{S}_{\tilde{G}^{i}}\) are given by \(\tilde{S}_{\tilde{G}^{1}}\approx(0,0.51]\), \(\tilde{S}_{\tilde{G}^{2}}=\tilde{S}_{\tilde{G}^{3}}\approx(0,0.55]\) (to two decimal places). We show some of the sets \(\tilde{G}^{i}_{s_{i}}\) for \(s_{i}\in\tilde{S}_{\tilde{G}^{i}}\), \(i=1,2,3\), in Figure 2(d). Proposition 3.8 guarantees that each \(\tilde{S}_{\tilde{G}^{i}}\) has positive measure, and that each \(\mathcal{A}_{3}:=\{\tilde{G}^{1}_{s_{1}},\tilde{G}^{2}_{s_{2}},\tilde{G}^{3}_{s_{3}}\}\) for \(\{s_{1},s_{2},s_{3}\}\in\tilde{S}_{\tilde{G}^{1}}\times\tilde{S}_{\tilde{G}^{2}}\times\tilde{S}_{\tilde{G}^{3}}\) satisfies \(\mathcal{J}_{D}(\mathcal{A}_{3})\leq 2\sqrt{-\lambda_{3,D}\frac{\|f_{3,\alpha}\|_{L^{2}([0,\pi];\mathrm{Leb})}}{\|\tau_{a}(f_{3,\alpha})\|_{L^{2}([0,\pi];\mathrm{Leb})}}}\approx 7.3\). Some choices for \(\{s_{1},s_{2},s_{3}\}\) give rise to packings \(\mathcal{A}_{3}\) with Cheeger ratios significantly smaller than this upper bound. Note that for this example, \(u_{3}\) already has \(3\) nodal domains, so we could use Theorem 3.7 to obtain a \(3\)-packing instead.
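The construction in Example 1 is easy to reproduce numerically. The sketch below is illustrative only and is not part of the proof: it assumes the normalisation \(u_{j}(x)=\sqrt{2/\pi}\sin(jx)\) for the Dirichlet eigenfunctions and the soft-thresholding convention \(\tau_{a}(f)=\operatorname{sign}(f)\max(|f|-a,0)\); the quantitative values quoted above (such as the endpoints \(0.51\) and \(0.55\)) depend on such conventions.

```python
import numpy as np

x = np.linspace(0.0, np.pi, 20001)
u = np.array([np.sqrt(2 / np.pi) * np.sin((j + 1) * x) for j in range(3)])
alpha = np.array([[0.77,  0.00, -0.64],
                  [0.45, -0.71,  0.54],
                  [0.45,  0.71,  0.54]])
f = alpha @ u                                      # f[i] = sum_j alpha[i, j] * u[j]
a = 0.84
tau = np.sign(f) * np.maximum(np.abs(f) - a, 0.0)  # soft thresholding tau_a

supports = np.abs(f) > a                           # supp(tau_a(f_i)) = {|f_i| > a}
for i in range(3):
    for j in range(i + 1, 3):
        assert not np.any(supports[i] & supports[j]), "supports overlap"
for i in range(3):
    lo, hi = x[supports[i]][[0, -1]]
    print(f"supp tau_a(f_{i+1}) is approximately [{lo:.2f}, {hi:.2f}]")
```

Under these assumptions the three thresholded functions are supported in disjoint subintervals near the left end, the middle and the right end of \([0,\pi]\), in line with Figure 2(c).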
Proof.: We consider only the Neumann case; the proof of the Dirichlet case is similar. For each \(1\leq i\leq l\), let \(\tilde{G}^{i}:=\arg\min_{\tilde{G}}\frac{\||\nabla\tau_{a}(f_{i,\alpha})|\|^{2}_{L^{2}(\tilde{G};\mu)}}{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G};\mu)}}\), where the minimum is taken over nodal domains \(\tilde{G}\) of \(\tau_{a}(f_{i,\alpha})\). The level sets of \(\tau_{a}(f_{i,\alpha})\), other than \((\tau_{a}(f_{i,\alpha}))^{-1}(0)\), are level sets of \(f_{i,\alpha}\in C^{\infty}(M)\), so \(\tilde{G}^{i}_{s}\in\mathscr{P}_{N}(M)\) for almost every \(s\in\operatorname{range}(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}})\) by the reasoning after (17). By applying the reasoning from Theorem 3.2 ((17)-(22) and after (26)) to \(\tau_{a}(f_{i,\alpha})\) on \(\tilde{G}^{i}\), it follows immediately that \(\tilde{S}_{\tilde{G}^{i}}\) has positive measure satisfying (45) below,
and that \(\{\tilde{G}^{1}_{s_{1}},\ldots,\tilde{G}^{l}_{s_{l}}\}\in\mathscr{P}_{l,N}(M)\) for each \(\{s_{1},\ldots,s_{l}\}\in\tilde{S}_{\tilde{G}^{1}}\times\ldots\times\tilde{S}_{ \tilde{G}^{l}}\).
We now proceed to prove (41). Choose any \(i\in\{1,\ldots,l\}\). Note that for each nodal domain \(\tilde{G}\) of \(\tau_{a}(f_{i,\alpha})\), we have \(\||\nabla\tau_{a}(f_{i,\alpha})||^{2}_{L^{2}(\tilde{G};\mu)}\geq\|\tau_{a}(f_ {i,\alpha})\|^{2}_{L^{2}(\tilde{G};\mu)}\frac{\||\nabla\tau_{a}(f_{i,\alpha}) \|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}( \tilde{G}^{i};\mu)}}\). Hence
\[\frac{\||\nabla\tau_{a}(f_{i,\alpha})||^{2}_{L^{2}(M;\mu)}}{\|\tau_{a}(f_{i, \alpha})\|^{2}_{L^{2}(M;\mu)}}=\frac{\sum_{\tilde{G}}\||\nabla\tau_{a}(f_{i, \alpha})\|^{2}_{L^{2}(\tilde{G};\mu)}}{\sum_{\tilde{G}}\|\tau_{a}(f_{i,\alpha })\|^{2}_{L^{2}(\tilde{G};\mu)}}\geq\frac{\||\nabla\tau_{a}(f_{i,\alpha})||^{2} _{L^{2}(\tilde{G}^{i};\mu)}}{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{ i};\mu)}}. \tag{43}\]
Recalling that \(f_{i,\alpha}=\sum_{j=1}^{k}\alpha_{ij}u_{j}\), we have that \(\lambda_{k,N}\leq-\frac{\||\nabla f_{i,\alpha}\|^{2}_{L^{2}(M;\mu)}}{\|f_{i, \alpha}\|^{2}_{L^{2}(M;\mu)}}\) (by e.g. [41, first equation of Proposition 4.5.4], which extends directly to the weighted case). Hence, since \(\||\nabla\tau_{a}(f_{i,\alpha})|\|_{L^{2}(M)}\leq\||\nabla f_{i,\alpha}|\|_{L ^{2}(M)}\), (43) implies
\[\lambda_{k,N}\leq-\frac{\||\nabla f_{i,\alpha}|\|^{2}_{L^{2}(M; \mu)}}{\|f_{i,\alpha}\|^{2}_{L^{2}(M;\mu)}}\leq -\frac{\||\nabla\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(M;\mu)}}{\|\tau_{a}(f _{i,\alpha})\|^{2}_{L^{2}(M;\mu)}}\frac{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}( M;\mu)}}{\|f_{i,\alpha}\|^{2}_{L^{2}(M;\mu)}}\] \[\leq -\frac{\||\nabla\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i} ;\mu)}}{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}\frac{\|\tau_ {a}(f_{i,\alpha})\|^{2}_{L^{2}(M;\mu)}}{\|f_{i,\alpha}\|^{2}_{L^{2}(M;\mu)}}. \tag{44}\]
Hence, by the definition of \(\tilde{S}_{\tilde{G}^{i}}\) in the proposition statement, for each \(s_{i}\in\tilde{S}_{\tilde{G}^{i}}\), we have
\[\lambda_{k,N}\leq-\frac{1}{4}\mathcal{J}_{N}(\tilde{G}^{i}_{s_{i}})^{2}\frac{ \|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(M;\mu)}}{\|f_{i,\alpha}\|^{2}_{L^{2}(M; \mu)}}.\]
Applying this reasoning for each \(i\in\{1,\ldots,l\}\) and recalling that \(\tau_{a}(f_{1,\alpha}),\ldots,\tau_{a}(f_{l,\alpha})\) have pairwise disjoint supports yields (41).
Lastly, we state our lower bound on the measure of each \(\tilde{S}_{\tilde{G}^{i}}\). Similarly to Theorem 3.2, we define \(\overline{\tilde{h}_{i}}:=\frac{1}{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}\int_{\operatorname{range}\left(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}}\right)}\mathcal{J}_{N}(\tilde{G}^{i}_{s})\mu(\tilde{G}^{i}_{s})\,\mathrm{d}s\) and \(\tilde{\mathrm{h}}_{i}(s):=\mathcal{J}_{N}(\tilde{G}^{i}_{s})\), and we define the probability measure \(\tilde{\mathbb{P}}_{i}\) on \(\operatorname{range}(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}})\subset\mathbb{R}\) by \(\tilde{\mathbb{P}}_{i}(L):=\int_{L}\frac{\mu(\tilde{G}^{i}_{s})}{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}\,\mathrm{d}s\). Then the reasoning for (27) implies
\[\operatorname{Leb}(\tilde{S}_{\tilde{G}^{i}})\geq\frac{\left\|\overline{\tilde{h }_{i}}-\tilde{\mathrm{h}}_{i}\right\|_{L^{1}\left(\operatorname{range}\left( \tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}}\right);\tilde{\mathbb{P}}_{i}\right)} \|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu)}}{2\Big{(}\overline{ \tilde{h}_{i}}-\inf_{s\in\operatorname{range}\left(\tau_{a}(f_{i,\alpha})^{2}|_ {\tilde{G}^{i}}\right)}\tilde{\mathrm{h}}_{i}(s)\Big{)}\mu(\tilde{G}^{i})}. \tag{45}\]
A similar result applies in the Dirichlet case, replacing \(\mathcal{J}_{N}\) with \(\mathcal{J}_{D}\) in the definitions of \(\overline{\tilde{h}_{i}},\tilde{\mathrm{h}}_{i},\tilde{\mathbb{P}}_{i}\).
In numerical calculations, the \(\alpha_{ij}\) in Proposition 3.8 can be readily computed using the sparse eigenbasis approximation algorithm of [28, Algorithm 3.1]. The orthogonal matrix \(R\) produced by that algorithm can be used as the matrix \(\alpha\). The resulting \(f_{1,\alpha},\ldots,f_{k,\alpha}\) form an orthogonal basis for \(\operatorname{span}\{u_{1},\ldots,u_{k}\}\), such that for some fixed \(a>0\), each \(\tau_{a}(f_{i,\alpha})\) (defined before Proposition 3.8) is _sparse_, i.e. each \(\operatorname{supp}(\tau_{a}(f_{i,\alpha}))\) is small. Using a larger \(a^{\prime}\geq a\) will create further support reductions.
### The dynamic Cheeger inequalities
#### 3.4.1 Preliminaries on higher dynamic Cheeger constants and dynamic Laplacian eigenvalues
We can generalise Theorems 3.4-3.7 into the setting of non-autonomous, advective dynamical systems. Many fluidic and geophysical flows can be modeled using purely advective dynamics. Such flows can be represented as a collection of time-indexed diffeomorphisms acting on an initial-time manifold, where each
diffeomorphism sends a point in the initial-time manifold to its position at the corresponding future time. These diffeomorphisms are physically meaningful, because they describe the fluid motion and evolve subsets of the initial-time manifold according to this motion.
The global behaviour of many fluidic and geophysical flows can be understood by separating the phase space (the physical space containing the fluid) into _coherent sets_[25], i.e. regions that are "as dynamically disconnected as possible" [26]. One approach in purely advective, finite-time nonautonomous systems is to identify subsets of the phase space whose boundary measures remain small over time, relative to the measures of those subsets. These volume ratios are known as _dynamic Cheeger ratios_[25, 26, 27], and sets which locally minimise this ratio are known as _coherent sets_. The infima of these ratios are known as the _dynamic Cheeger constants_[25, 26, 27]. The dynamic Cheeger constants generalise the (static) Cheeger constants of Definition 2.2.
Calculating a dynamic Cheeger constant exactly is generally impractical. Instead, approximate coherent sets can be obtained from the eigenfunctions of a specific weighted Laplace-Beltrami operator called the _dynamic Laplacian_. There are existing upper bounds on the first non-zero dynamic Cheeger constant in terms of the first non-zero eigenvalue of the dynamic Laplacian [25, 26, 27].
In practice, the higher eigenfunctions of the dynamic Laplacian reveal additional coherent sets (see e.g. [28]). Below, we introduce higher dynamic Cheeger constants, analogous to the (static) higher Cheeger constants of Definition 2.2, to quantify these additional coherent sets. We show that the higher dynamic Cheeger constants are bounded above by the eigenvalues of \(\Delta^{d}\) (Theorems 3.17, 3.18 and 3.19), and in particular that the eigenfunctions of \(\Delta^{d}\) reveal coherent sets whose dynamic Cheeger ratios are bounded above (Theorems 3.19 and 3.20).
**Definition 3.9**.: A _dynamical system_\(\mathcal{T}:=(\mathrm{T},\{(M_{t},g_{t})\}_{t\in\mathrm{T}},\{\Phi^{(t)}\}_{t \in\mathrm{T}})\) or \(\mathcal{T}:=(\mathrm{T},\{(M_{t},g_{t},\mu_{t})\}_{t\in\mathrm{T}},\)\(\{\Phi^{(t)}\}_{t\in\mathrm{T}})\) consists of the following:
* A time index set \(\mathrm{T}:=\{0,1,\ldots,t_{\max}\}\).
* A time-indexed family of Riemannian manifolds \(\{(M_{t},g_{t})\}_{t\in\mathrm{T}}\) or weighted manifolds \(\{(M_{t},g_{t},\mu_{t})\}_{t\in\mathrm{T}}\), where in the unweighted case, for \(t\in\mathrm{T}\) we take \(\mu_{t}\) to denote Riemannian volume on \(M_{t}\).
* A time-indexed family of \(C^{\infty}\) diffeomorphisms \(\{\Phi^{(t)}\}_{t\in\mathrm{T}}\), which are _measure-preserving_ in the sense \(\mu_{t}=\mu_{0}\circ(\Phi^{(t)})^{-1}\) (we call such \(\Phi^{(t)}\)_volume-preserving_ if each \(\mu_{t}\) is Riemannian volume).
We use the following notation. Since \(\Phi^{(t)}\) for \(t\in\mathrm{T}\) is a measure-preserving diffeomorphism, the _push-forward_\(\Phi^{(t)}_{*}:C^{\infty}(M_{0})\to C^{\infty}(M_{t})\) is given by \(\Phi^{(t)}_{*}f:=f\circ(\Phi^{(t)})^{-1}\), and the _pullback_\((\Phi^{(t)})^{*}:C^{\infty}(M_{t})\to C^{\infty}(M_{0})\) is given by \((\Phi^{(t)})^{*}f:=f\circ\Phi^{(t)}\). We also define the pullback Riemannian metric \((\Phi^{(t)})^{*}g_{t}\) given by \((\Phi^{(t)})^{*}g_{t}:=g_{t}(\mathrm{d}\Phi^{(t)}\,\cdot\,\mathrm{d}\Phi^{(t) }\,\cdot\,)\), where \(\mathrm{d}\Phi^{(t)}\) is the differential of \(\Phi^{(t)}\) (see e.g. [44, p.55]). For \(t\in\mathrm{T}\), we let \((\mu_{t})_{n-1}\) denote the \(n-1\)-dimensional Hausdorff measure on \(M_{t}\) constructed from \(\mu_{t}\) and \(g_{t}\). For \(s,s+t\in\mathrm{T}\), we write \(\Phi^{(t)}_{s}:=\Phi^{(s+t)}\circ(\Phi^{(s)})^{-1}\).
We define the higher dynamic Cheeger constants as follows.
**Definition 3.10** (Higher dynamic Cheeger constants).: Consider a dynamical system \(\mathcal{T}\). For \(k\geq 1\), the _dynamic Neumann Cheeger ratio_ of a Neumann \(k\)-packing \(\{A_{1},\ldots,A_{k}\}\in\mathscr{P}_{k,N}(M_{0})\) is
\[\mathcal{J}^{d}_{N}(\{A_{1},\ldots,A_{k}\}):=\max_{1\leq i\leq k}\frac{\sum_{t= 0}^{t_{\max}}(\mu_{t})_{n-1}(\Phi^{(t)}(\partial^{M_{0}}A_{i}))}{|\mathrm{T} |\mu_{0}(A_{i})}. \tag{46}\]
The _dynamic Dirichlet Cheeger ratio_ of a Dirichlet \(k\)-packing \(\{A_{1},\ldots,A_{k}\}\in\mathscr{P}_{k,D}(M_{0})\) is
\[\mathcal{J}^{d}_{D}(\{A_{1},\ldots,A_{k}\}):=\max_{1\leq i\leq k}\frac{\sum_{t= 0}^{t_{\max}}(\mu_{t})_{n-1}(\Phi^{(t)}(\partial A_{i}))}{|\mathrm{T}|\mu_{0} (A_{i})}. \tag{47}\]
The \(k\)_th dynamic Neumann_ and _dynamic Dirichlet Cheeger constants_ for \(\mathcal{T}\) are
\[h^{d}_{k,N} :=\inf_{\{A_{1},\ldots,A_{k}\}\in\mathscr{P}_{k,N}(M_{0})}\mathcal{J}^{d}_{N}(\{A_{1},\ldots,A_{k}\}) \tag{48}\] \[h^{d}_{k,D} :=\inf_{\{A_{1},\ldots,A_{k}\}\in\mathscr{P}_{k,D}(M_{0})}\mathcal{J}^{d}_{D}(\{A_{1},\ldots,A_{k}\}). \tag{49}\]
For \(A\in\mathscr{P}_{N}(M_{0})\), resp. \(A\in\mathscr{P}_{D}(M_{0})\), we will occasionally write \(\mathcal{J}_{N}^{d}(A)\) instead of \(\mathcal{J}_{N}^{d}(\{A\})\), resp. \(\mathcal{J}_{D}^{d}(A)\) instead of \(\mathcal{J}_{D}^{d}(\{A\})\), for convenience.
The Neumann dynamic Cheeger constant \(h_{2,N}^{d}\) was originally defined requiring \(A_{1}\) and \(A_{2}\) to partition \(M_{0}\)[25], whereas (48) only requires them to form a packing of \(M_{0}\). This does not change the value of \(h_{2,N}^{d}\), by the reasoning after definition 2.2. Note that since the \(\Phi^{(t)}\) are measure-preserving, we have \(|\mathrm{T}|\mu_{0}(A_{i})=\sum_{t=0}^{t_{\mathrm{max}}}\mu_{t}(\Phi^{(t)}(A_ {i}))\), i.e. the denominators in (46)-(47) are \(|\mathrm{T}|\) times the time averages of the measures of the \(A_{i}\).
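As a simple illustration of (46) (ours, not taken from [25, 26, 27]), suppose every \((M_{t},g_{t},\mu_{t})\) is the flat cylinder \(2\pi\mathbb{S}^{1}\times[0,\pi]\) with Lebesgue measure, and each \(\Phi^{(t)}(x,y):=(x+s_{t}y\;(\mathrm{mod}\;2\pi),y)\) is a shear with slope \(s_{t}\). For the vertical strip \(A:=(0,w)\times[0,\pi]\), the relative boundary \(\partial^{M_{0}}A\) consists of two vertical segments of length \(\pi\), and \(\Phi^{(t)}\) maps each of them to a segment of length \(\pi\sqrt{1+s_{t}^{2}}\), so
\[\mathcal{J}_{N}^{d}(\{A\})=\frac{\sum_{t=0}^{t_{\max}}2\pi\sqrt{1+s_{t}^{2}}}{|\mathrm{T}|\,w\pi}=\frac{2}{w}\cdot\frac{1}{|\mathrm{T}|}\sum_{t=0}^{t_{\max}}\sqrt{1+s_{t}^{2}},\]
which grows with the amount of shearing, whereas a horizontal strip \(2\pi\mathbb{S}^{1}\times(y_{0},y_{1})\) with \(0<y_{0}<y_{1}<\pi\) has dynamic Cheeger ratio \(2/(y_{1}-y_{0})\) for every choice of slopes.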
When considering dynamical systems, we let \(\Delta_{g_{t},\mu_{t}}\) denote the weighted Laplace-Beltrami operator on \((M_{t},g_{t},\mu_{t})\). The dynamic Laplacian [25, 27] is
\[\Delta^{d}:=\frac{1}{|\mathrm{T}|}\sum_{t=0}^{t_{\mathrm{max}}}(\Phi^{(t)})^{ *}\Delta_{g_{t},\mu_{t}}\Phi_{*}^{(t)}. \tag{50}\]
We consider Dirichlet and dynamic Neumann eigenproblems for \(\Delta^{d}\). The dynamic Neumann eigenproblem is to find \(u\in C^{\infty}(M_{0})\) and \(\lambda\in\mathbb{R}\), such that
\[\Delta^{d}u=\lambda u, \tag{51}\]
subject to the _dynamic Neumann boundary condition_ (if \(\partial M_{0}\neq\emptyset\))
\[\frac{1}{|\mathrm{T}|}\sum_{t=0}^{t_{\mathrm{max}}}\frac{\partial}{\partial \mathbf{n}_{t}}\left((\Phi^{(t)})_{*}u\right)=0\quad\text{on }\partial M_{0}, \tag{52}\]
where \(\mathbf{n}_{t}\) denotes an outward unit normal vector to \(\partial M_{t}\)[25, Theorem 4.1][27, Theorem 4.4]. Dynamic Neumann boundary conditions are the natural boundary condition as discussed in [25, pp.9-10] and [27, p.16]. There is an orthogonal Schauder basis for \(L^{2}(M_{0},\mu_{0})\) consisting of eigenfunctions for (51) satisfying (52) [27, Theorem 4.4]. The corresponding eigenvalues form a non-positive decreasing sequence accumulating only at \(-\infty\), and we denote them \(0=\lambda_{1,N}^{d}>\lambda_{2,N}^{d}\geq\lambda_{3,N}^{d}\geq\ldots\).
The Dirichlet eigenproblem is to find \(u\in C^{\infty}(M_{0})\) and \(\lambda\in\mathbb{R}\) satisfying (51), subject to
\[u=0\quad\text{on }\partial M_{0}. \tag{53}\]
By standard variational arguments as in e.g. [41, Theorem 4.3.1] and elliptic regularity theorems as in [30, Theorem 8.14], there is an orthogonal Schauder basis for \(L^{2}(M_{0},\mu_{0})\) of \(C^{\infty}(M_{0})\) eigenfunctions for (51) satisfying (53). The corresponding eigenvalues form a negative decreasing sequence accumulating only at \(-\infty\), and we denote them \(0>\lambda_{1,D}^{d}>\lambda_{2,D}^{d}\geq\lambda_{3,D}^{d}\geq\ldots\).
We have the following variational formula for the eigenvalues, in the dynamic Neumann setting [27].
**Proposition 3.11**.: _Let \(\mathcal{T}\) be a dynamical system, and let \(u_{1}^{d},u_{2}^{d},\ldots\) denote a complete orthogonal basis of dynamic Neumann eigenfunctions of \(\Delta^{d}\) corresponding to \(\lambda_{1,N}^{d},\lambda_{2,N}^{d},\ldots\). Then for each \(k\geq 1\), we have_
\[\lambda_{k,N}^{d}=-\inf_{\begin{subarray}{c}f\in W^{1,2}(M_{0})\\ \int_{M_{0}}u_{i}^{d}f\,\mathrm{d}\mu_{0}=0,\forall i\in\{1,\ldots,k-1\}\end{subarray}}\frac{\sum_{t=0}^{t_{\max}}\||\nabla_{g_{t}}\Phi_{*}^{(t)}f|\|_{L^{2}(M_{t};\mu_{t})}^{2}}{|\mathrm{T}|\|f\|_{L^{2}(M_{0};\mu_{0})}^{2}}, \tag{54}\]
_and the infimum is attained when \(f\) is a dynamic Neumann eigenfunction of \(\Delta^{d}\) with eigenvalue \(\lambda_{k,N}^{d}\)._
Extending the reasoning in e.g. [15, pp.16-17] to the dynamic case yields that the infimum in (54) is attained if and only if \(f\) is a dynamic Neumann eigenfunction of \(\Delta^{d}\) with eigenvalue \(\lambda_{k,N}^{d}\). This proposition also extends directly to the Dirichlet case, by similar arguments. Let \(u_{1}^{d},u_{2}^{d},\ldots\) denote a complete orthogonal
basis of Dirichlet eigenfunctions of \(\Delta^{d}\) corresponding to \(\lambda_{1,D}^{d},\lambda_{2,D}^{d},\ldots\). Then for each \(k\geq 1\), we have
\[\lambda_{k,D}^{d}=-\inf_{\begin{subarray}{c}f\in W_{0}^{1,2}(M_{0})\\ \int_{M_{0}}u_{i}^{d}f\,\mathrm{d}\mu_{0}=0,\forall i\in\{1,\ldots,k-1\}\end{subarray}}\frac{\sum_{t=0}^{t_{\max}}\||\nabla_{g_{t}}\Phi_{*}^{(t)}f|\|_{L^{2}(M_{t};\mu_{t})}^{2}}{|\mathrm{T}|\|f\|_{L^{2}(M_{0};\mu_{0})}^{2}}, \tag{55}\]
and the infimum is attained if and only if \(f\) is a Dirichlet eigenfunction of \(\Delta^{d}\) with eigenvalue \(\lambda_{k,D}^{d}\). Since \(\Delta^{d}\) is an elliptic operator, Courant's nodal domain theorem (Theorem 2.4) extends to the eigenfunctions of \(\Delta^{d}\).
**Corollary 3.12** (to Theorem 2.4).: _For any dynamical system \(\mathcal{T}\), the \(k\)th dynamic Neumann (resp. Dirichlet) eigenfunction \(u_{k}\) of \(\Delta^{d}\) has at most \(k\) nodal domains._
Proof.: The proof is the same as that for Theorem 2.4, replacing \(M\), \(\mu\) and \(\lambda_{k,N}\) with \(M_{0}\), \(\mu_{0}\) and \(\lambda_{k,N}^{d}\), replacing the Rayleigh quotients as in Theorem 2.3 with dynamic Rayleigh quotients as in Proposition 3.11, and replacing (23)-(25) with (66) and the reasoning used to obtain (68).
The operator \(\Delta^{d}\) can be expressed as the weighted Laplace-Beltrami operator \(\Delta_{\bar{g},\mu_{0}}\) on \((M_{0},\bar{g},\mu_{0})\), where \(\bar{g}\) (called the _geometry of mixing metric_[38]) is the 'harmonic mean'3 of the pullbacks \((\Phi^{(t)})^{*}g_{t}\) of the metrics \(g_{t}\) to the initial-time manifold \(M_{0}\)[38]. Note that even if each \(\mu_{t}\) is Riemannian volume on \((M_{t},g_{t})\), \(\mu_{0}\) is not necessarily Riemannian volume on \((M_{0},\bar{g})\)[38, section 4.1.3].
Footnote 3: \(\bar{g}\) is defined via the _inverse metric_. The inverse metric of a Riemannian metric \(g\) on \(M_{0}\) is given by \(g^{-1}:T^{*}M_{0}\times T^{*}M_{0}\to\mathbb{R}\), \(g^{-1}(\eta,\omega):=g(\eta^{\sharp},\omega^{\sharp})\), where \(\sharp\) denotes raising an index (see e.g. [44, p.342]). Then \(\bar{g}\) is the unique metric on \(M_{0}\) for which \(\bar{g}^{-1}(\eta,\omega)=\frac{1}{|\mathrm{T}|}\sum_{t=0}^{t_{\max}}((\Phi^{(t)})^{*}g_{t})^{-1}(\eta,\omega)\).
**Proposition 3.13** ([38, pp.1864, 1875]).: _In any dynamical system, \(\Delta^{d}\) is the weighted Laplace-Beltrami operator for the Riemannian manifold \((M_{0},\bar{g},\mu_{0})\), i.e._
\[\Delta^{d}=\Delta_{\bar{g},\mu_{0}}. \tag{56}\]
For any dynamical system \(\mathcal{T}\), let \(\nabla_{g_{t}}\) and \(\nabla_{\bar{g}}\) denote the gradient operator for the time-\(t\) manifold \((M_{t},g_{t},\mu_{t})\) and the geometry of mixing manifold \((M_{0},\bar{g},\mu_{0})\), respectively. It follows immediately from the definition of \(\bar{g}\) that \(|\nabla_{\bar{g}}f|^{2}=\frac{1}{|\mathrm{T}|}\sum_{t=0}^{t_{\max}}\bigl|\nabla_{g_{t}}\Phi_{*}^{(t)}f\bigr|^{2}\) for \(f\in W^{1,2}(M_{0})\). The Neumann boundary condition for the geometry of mixing manifold is the same as the dynamic Neumann boundary condition [38, p.1864]. For \(A\in\mathscr{P}_{N}(M_{0})\) or \(A\in\mathscr{P}_{D}(M_{0})\), respectively, we denote the (Neumann or Dirichlet) Cheeger ratio of \(A\) on the geometry of mixing manifold by \(\mathcal{J}_{N}(A;\bar{g},\mu_{0})\) or \(\mathcal{J}_{D}(A;\bar{g},\mu_{0})\), respectively. Then \(\mathcal{J}_{N}(\cdot;\bar{g},\mu_{0})\) and \(\mathcal{J}_{D}(\cdot;\bar{g},\mu_{0})\) give upper bounds on the dynamic Cheeger ratios and dynamic Cheeger constants [39, Proposition 4.3]:
\[\mathcal{J}_{N}^{d}(A) \leq\mathcal{J}_{N}(A;\bar{g},\mu_{0}),\quad\forall A\in\mathscr{P}_{N}(M_{0}) \tag{57}\] \[\mathcal{J}_{D}^{d}(A) \leq\mathcal{J}_{D}(A;\bar{g},\mu_{0}),\quad\forall A\in\mathscr{P}_{D}(M_{0}). \tag{58}\]
The bounds in Theorem 3.1 have been extended to the dynamic setting.
**Theorem 3.14** (Dynamic Cheeger inequality [25, 26, 27]).:
* _[_25_, Theorem 3.2]__,_ _[_27_, Theorem 4.5]__: For any dynamical system, we have_ \[\lambda_{2,N}^{d}\leq-\frac{1}{4}(h_{2,N}^{d})^{2}.\] (59)
* _[_26_, Theorem 2]_ _For any dynamical system such that each_ \((M_{t},g_{t},\mu_{t})\) _is an_ \(n\)_-dimensional,_ \(C^{\infty}\) _submanifold of_ \(\mathbb{R}^{n}\) _equipped with the Euclidean metric and Lebesgue measure, we have_ \[\lambda_{1,D}^{d}\leq-\frac{1}{4}(h_{1,D}^{d})^{2}.\] (60)
Combining the approach from [27] and [26], equation (59) extends to dynamical systems on arbitrary weighted Riemannian manifolds as in Definition 3.9.
Similarly to the static case, we can give constructive versions of the dynamic Cheeger inequality (Theorem 3.15 and Corollary 3.16). Specifically, we show that within any nodal domain of an eigenfunction \(u\) of \(\Delta^{d}\), a positive-measure collection of superlevel sets of \(u\) have their dynamic Cheeger ratio bounded above by the corresponding eigenvalue (Theorem 3.15). This immediately yields a constructive version of Theorem 3.14 (Corollary 3.16).
**Theorem 3.15**.: _Let \(\mathcal{T}\) be a dynamical system, and let \(u\) be some Neumann, resp. Dirichlet, eigenfunction of \(\Delta^{d}\) with eigenvalue \(\lambda\). Let \(G\subset M_{0}\) be any nodal domain of \(u\). Then, defining_
\[G_{s}:=\{p\in G:u(p)^{2}>s\}, \tag{61}\]
_the set_
\[S_{G}:=\bigg{\{}s\in\operatorname{range}(u^{2}|_{G}):G_{s}\in \mathscr{P}_{N}(M_{0}),\lambda\leq-\frac{1}{4}\mathcal{J}_{N}^{d}(G_{s})^{2} \bigg{\}}, \tag{62}\]
_resp._
\[S_{G}:=\bigg{\{}s\in\operatorname{range}(u^{2}|_{G}):G_{s}\in \mathscr{P}_{D}(M_{0}),\lambda\leq-\frac{1}{4}\mathcal{J}_{D}^{d}(G_{s})^{2} \bigg{\}}, \tag{63}\]
_has positive Lebesgue measure satisfying the lower bound (70)._
Proof.: The proof proceeds as for Theorem 3.2. For each \(t\in\mathrm{T}\), define \(\phi_{t}\in C^{\infty}(M_{t})\) via \(\mathrm{d}\mu_{t}=e^{\phi_{t}}\,\mathrm{d}V\), and observe that \(\operatorname{range}((\Phi_{*}^{(t)}u)^{2}|_{\Phi^{(t)}(G)})=\operatorname{ range}(u^{2}|_{G})\) and that for each \(s\in\operatorname{range}(u^{2}|_{G})\), \(\Phi^{(t)}(G_{s})\) is the superlevel set of \(\Phi_{*}^{(t)}u\) on \(\Phi^{(t)}(G)\). Replacing \((M,g,\mu)\), \(\phi\), \(G\) and \(u\), respectively, with \((M_{t},g_{t},\mu_{t})\), \(\phi_{t}\), \(\Phi^{(t)}(G)\) and \(\Phi_{*}^{(t)}u\), respectively, in each of (19), (22) and (23)-(24) yields
\[\bar{h}:=\frac{\int_{\operatorname{range}(u^{2}|_{G})}\mathcal{J}_{N}(\Phi^{( t)}(G_{s}))\mu_{t}(\Phi^{(t)}(G_{s}))\,\mathrm{d}s}{\|\Phi_{*}^{(t)}u\|_{L^{2}( \Phi^{(t)}(G);\mu_{t})}^{2}}, \tag{64}\]
\[\frac{\||\nabla_{g_{t}}\Phi_{*}^{(t)}u|\|_{L^{2}(\Phi^{(t)}(G);\mu_{t})}^{2}}{\|\Phi_{*}^{(t)}u\|_{L^{2}(\Phi^{(t)}(G);\mu_{t})}^{2}} \geq\frac{1}{4}\bar{h}^{2}, \tag{65}\] \[\||\nabla_{g_{t}}\Phi_{*}^{(t)}u|\|_{L^{2}(\Phi^{(t)}(G);\mu_{t})}^{2} =\int_{\Phi^{(t)}(G)}\nabla_{g_{t}}\Phi_{*}^{(t)}u\cdot(e^{\phi_{t}}\nabla_{g_{t}}\Phi_{*}^{(t)}u)\,\mathrm{d}V\] \[=-\int_{\Phi^{(t)}(G)}\Phi_{*}^{(t)}u\cdot(\Delta_{g_{t},\mu_{t}}\circ\Phi_{*}^{(t)})u\,\mathrm{d}\mu_{t}+0. \tag{66}\]
Multiplying (65) by \(\|\Phi_{*}^{(t)}u\|_{L^{2}(\Phi^{(t)}(G);\mu_{t})}^{2}\), replacing \(\||\nabla_{g_{t}}\Phi_{*}^{(t)}u\|_{L^{2}(\Phi^{(t)}(G);\mu_{t})}^{2}\) with the right-hand side of (66), and then replacing \(\bar{h}\) with its definition (64), yields
\[-\int_{\Phi^{(t)}(G)}\Phi_{*}^{(t)}u\cdot\big{(}\Delta_{g_{t},\mu_ {t}}\circ\Phi_{*}^{(t)}\big{)}u\,\mathrm{d}\mu_{t} \geq\frac{1}{4}\bar{h}^{2}\|\Phi_{*}^{(t)}u\|_{L^{2}(\Phi^{(t)}(G );\mu_{t})}^{2}\] \[=\frac{\left(\int_{\operatorname{range}(u^{2}|_{G})}\mathcal{J}_{N }(\Phi^{(t)}(G_{s}))\mu_{t}(\Phi^{(t)}(G_{s}))\,\mathrm{d}s\right)^{2}}{4\|\Phi_ {*}^{(t)}u\|_{L^{2}(\Phi^{(t)}(G);\mu_{t})}^{2}}.\]
Since \(\Phi^{(t)}\) is measure-preserving, this is equivalent to
\[-\int_{G}u\cdot\big{(}(\Phi^{(t)})^{*}\circ\Delta_{g_{t},\mu_{t}}\circ\Phi^{(t)}_{ *}\big{)}u\,\mathrm{d}\mu_{0}\geq\frac{\Big{(}\int_{\mathrm{range}(u^{2}|_{G})} \mathcal{J}_{N}(\Phi^{(t)}(G_{s}))\mu_{0}(G_{s})\,\mathrm{d}s\Big{)}^{2}}{4\|u \|_{L^{2}(G;\mu_{0})}^{2}}. \tag{67}\]
Now, definition (50) and our choice of \(u\) imply \(\frac{1}{|T|}\sum_{t=0}^{t_{\mathrm{max}}}\big{(}(\Phi^{(t)})^{*}\circ\Delta_{ g_{*},\mu_{t}}\circ\Phi^{(t)}_{*}\big{)}u=\Delta^{d}u=\lambda u\), so summing (67) over \(t\) and dividing by \(-|\mathrm{T}|\|u\|_{L^{2}(G;\mu_{0})}^{2}\) yields
\[\lambda\leq-\frac{1}{4|\mathrm{T}|\|u\|_{L^{2}(G;\mu_{0})}^{4}}\sum_{t=0}^{t_ {\mathrm{max}}}\biggl{(}\int_{\mathrm{range}(u^{2}|_{G})}\mathcal{J}_{N}( \Phi^{(t)}(G_{s}))\mu_{0}(G_{s})\,\mathrm{d}s\biggr{)}^{2}. \tag{68}\]
Using the relation \(-\sum_{t=0}^{t_{\max}}x_{t}^{2}\leq-\frac{1}{|\mathrm{T}|}\Bigl(\sum_{t=0}^{t_{\max}}x_{t}\Bigr)^{2}\) for \(x\in\mathbb{R}^{|\mathrm{T}|}\), this bound becomes
\[\lambda\leq-\frac{1}{4|\mathrm{T}|^{2}\|u\|_{L^{2}(G;\mu_{0})}^{4}}\biggl{(} \sum_{t=0}^{t_{\mathrm{max}}}\int_{\mathrm{range}(u^{2}|_{G})}\mathcal{J}_{N} (\Phi^{(t)}(G_{s}))\mu_{0}(G_{s})\,\mathrm{d}s\biggr{)}^{2}=-\frac{1}{4}(\bar{ h}_{G}^{d})^{2}, \tag{69}\]
where \(\bar{h}_{G}^{d}:=\frac{1}{\|u\|_{L^{2}(G;\mu_{0})}^{2}}\int_{\mathrm{range}(u^ {2}|_{G})}\mathcal{J}_{N}^{d}(G_{s})\mu_{0}(G_{s})\,\mathrm{d}s\). Thus, by the reasoning after (26), the set \(S_{G}\) defined in (62) has positive measure.
We can bound its measure as follows. Define the probability measure \(\mathbb{P}\) on \(\mathrm{range}(u^{2}|_{G})\) by \(\mathbb{P}(L):=\int_{L}\frac{\mu_{0}(G_{s})}{\|u\|_{L^{2}(G;\mu_{0})}^{2}} \,\mathrm{d}s\), and let \(\mathrm{h}^{d}(s):=\mathcal{J}_{N}^{d}(G_{s})\). Then the reasoning for (27) implies
\[\mathrm{Leb}(S_{G})\geq\frac{\|\bar{h}_{G}^{d}-\mathrm{h}^{d}\|_{L^{1}(\mathrm{range}(u^{2}|_{G});\mathbb{P})}\|u\|_{L^{2}(G;\mu_{0})}^{2}}{2(\bar{h}_{G}^{d}-\inf_{s\in\mathrm{range}(u^{2}|_{G})}\mathrm{h}^{d}(s))\mu_{0}(G)}. \tag{70}\]
**Corollary 3.16**.: _For any dynamical system \(\mathcal{T}\), and for any dynamic Neumann eigenfunction \(u\) of \(\Delta^{d}\) corresponding to \(\lambda^{d}_{2,N}\), there is a nodal domain \(G\) of \(u\) such that the set \(S_{G}\) defined in (62) has positive measure, and for \(s\in S_{G}\), defining \(G_{s}\) as in (61), the 2-packing \(\{G_{s},M_{0}\backslash\overline{G_{s}}\}\) satisfies_
\[\lambda^{d}_{2,N}\leq-\frac{1}{4}\mathcal{J}_{N}^{d}(\{G_{s},M_{0}\backslash\overline{G_{s}}\})^{2}. \tag{71}\]
_If \(\partial M_{0}\neq\emptyset\), the leading Dirichlet eigenvalue \(\lambda^{d}_{1,D}\) of \(\Delta^{d}\) is simple, and the corresponding eigenfunction \(u\) has only a single nodal domain \(G=M_{0}\backslash\partial M_{0}\). The set \(S_{G}\) defined in (63) has positive measure, and for \(s\in S_{G}\), the set \(G_{s}\) defined in (61) satisfies_
\[\lambda^{d}_{1,D}\leq-\frac{1}{4}\mathcal{J}_{D}^{d}(G_{s})^{2}. \tag{72}\]
Proof.: In the Dirichlet case, we mostly follow the proof of [41, Proposition 4.5.8]. Corollary 3.12 ensures that any Dirichlet eigenfunction \(u\) of \(\Delta^{d}\) corresponding to \(\lambda^{d}_{1,D}\) has only one nodal domain, so the maximum principle (e.g. applying [58, Chapter 2, Theorem 5] in local coordinates) implies that \(u\) is strictly positive or strictly negative on \(M_{0}\backslash\partial M_{0}\). Hence there cannot be two orthogonal Dirichlet eigenfunctions of \(\Delta^{d}\) corresponding to \(\lambda^{d}_{1,D}\), i.e. \(\lambda^{d}_{1,D}\) is a simple eigenvalue of \(\Delta^{d}\), and (72) follows from Theorem 3.15. In the Neumann case, Corollary 3.12 yields that any dynamic Neumann eigenfunction \(u\) of \(\Delta^{d}\) corresponding to \(\lambda^{d}_{2,N}\) has at most two nodal domains. Since the constant function \(\mathbf{1}\) is a dynamic Neumann eigenfunction of \(\Delta^{d}\) orthogonal to \(u\), \(u\) has exactly two nodal domains \(G_{1},G_{2}\). One choice of \(G\in\{G_{1},G_{2}\}\) satisfies \(\mu_{0}(G)\leq\mu_{0}(M_{0}\backslash\overline{G})\), and (71) follows from Theorem 3.15.
#### 3.4.2 Higher dynamic Cheeger inequalities
We can extend the higher Cheeger inequalities of Section 3.2 directly to the dynamic setting. Our proofs of Theorem 3.7 and Proposition 3.8 carry over directly (Theorem 3.19 and Proposition 3.20). To extend Theorems 3.4 and 3.5, we can avoid some technicalities by applying those theorems on the geometry of mixing manifold \((M_{0},\bar{g},\mu_{0})\), and then applying (57).
**Theorem 3.17**.: _There is a universal constant \(\hat{\eta}\) such that for any dynamical system where \(M_{0}\) is boundary-less, for all \(k\geq 1\) we have_
\[\lambda_{k,\emptyset}^{d}\leq-\frac{\hat{\eta}}{k^{6}}(h_{k,\emptyset}^{d})^{2}. \tag{73}\]
Proof.: By Proposition 3.13, \(\lambda_{k,\emptyset}^{d}\) is the \(k\)th eigenvalue of \(\Delta_{\bar{g},\mu_{0}}\). Applying Theorem 3.4 to bound the \(k\)th Cheeger constant \(h_{k,\emptyset}\) on the geometry of mixing manifold yields \(\lambda_{k,\emptyset}^{d}\leq-\frac{\hat{\eta}}{k^{6}}h_{k,\emptyset}^{2}\). Then (57) and the definitions (3) and (48) imply \(-h_{k,\emptyset}\leq-h_{k,\emptyset}^{d}\), and (73) follows.
**Theorem 3.18**.: _There is a universal constant \(\eta\) such that for any dynamical system \(\mathcal{T}\) where \(M_{0}\) is boundaryless, for all \(k\geq 1\) we have_
\[\lambda_{2k,\emptyset}^{d}\leq-\frac{\eta}{\log(k+1)}(h_{k,\emptyset}^{d})^{2}. \tag{74}\]
Proof.: By Proposition 3.13, \(\lambda_{2k,\emptyset}^{d}\) is the \(2k\)th eigenvalue of \(\Delta_{\bar{g},\mu_{0}}\). Applying Theorem 3.5 to bound the \(k\)th Cheeger constant \(h_{k,\emptyset}\) on the geometry of mixing manifold yields \(\lambda_{2k,\emptyset}^{d}\leq-\frac{\eta}{\log(k+1)}h_{k,\emptyset}^{2}\). Then (57) and the definitions (3) and (48) imply \(-h_{k,\emptyset}\leq-h_{k,\emptyset}^{d}\), and (74) follows.
Our constructive, nodal domain-based higher Cheeger inequality, Theorem 3.7, generalises directly to the dynamic case.
**Theorem 3.19** (Higher dynamic Cheeger inequality).: _Let \(\mathcal{T}\) be a dynamical system. For each \(k\geq 1\), let \(r_{k}\) be the maximal number of nodal domains in any dynamic Neumann (resp. Dirichlet) eigenfunction \(u\) of \(\Delta^{d}\) with eigenvalue \(\lambda\geq\lambda_{k,N}^{d}\) (resp. \(\lambda\geq\lambda_{k,D}^{d}\))._
1. _We have_ \[\lambda_{k,N}^{d} \leq-\frac{1}{4}(h_{r_{k},N}^{d})^{2},\] (75) \[\lambda_{k,D}^{d} \leq-\frac{1}{4}(h_{r_{k},D}^{d})^{2}.\] (76)
2. _Let_ \(u\) _be an eigenfunction with eigenvalue_ \(\lambda\geq\lambda_{k,N}^{d}\) _(resp._ \(\lambda\geq\lambda_{k,D}^{d}\)_) and with_ \(r_{k}\) _nodal domains. Let_ \(G^{1},\ldots,G^{r_{k}}\subset M\) _denote the nodal domains of_ \(u\)_, and for each_ \(i\) _and each_ \(s\in\operatorname{range}(u^{2}|_{G^{i}})\)_, let_ \(G_{s}^{i}\) _denote the_ \(s\)_-superlevel set of_ \(u^{2}\) _on_ \(G^{i}\)_. For each_ \(i\)_, define_ \(S_{G^{i}}\) _as in (_62_) or (_63_). Then each_ \(S_{G^{i}}\) _has positive Lebesgue measure satisfying (_70_), and for each_ \(\{s_{1},\ldots,s_{r_{k}}\}\in S_{G^{1}}\times\ldots\times S_{G^{r_{k}}}\)_, the collection_ \(\mathcal{A}_{r_{k}}:=\{G_{s_{1}}^{1},\ldots,G_{s_{r_{k}}}^{r_{k}}\}\) _is a Neumann (resp. Dirichlet)_ \(r_{k}\)_-packing of_ \(M_{0}\) _satisfying_ \(\lambda_{k,N}^{d}\leq-\frac{1}{4}\mathcal{J}_{N}^{d}(\mathcal{A}_{r_{k}})^{2}\) _(resp._ \(\lambda_{k,D}^{d}\leq-\frac{1}{4}\mathcal{J}_{D}^{d}(\mathcal{A}_{r_{k}})^{2}\)_)._
Proof.: This theorem follows from Theorem 3.15, by the reasoning in the proof of Theorem 3.7.
We can also extend Proposition 3.8 to the dynamic setting, to obtain bounds on \(h_{l,N}^{d}\) or \(h_{l,D}^{d}\) for \(r_{k}\leq l\leq k\) in terms of thresholded functions obtained from linear combinations of the first \(k\) eigenfunctions of \(\Delta^{d}\).
**Proposition 3.20**.: _For any dynamical system \(\mathcal{T}\), let \(u_{1},\ldots,u_{k}\) denote the first \(k\) dynamic Neumann, resp. Dirichlet, eigenfunctions of \(\Delta^{d}\) for \(k\geq 1\). For any \(1\leq l\leq k\) and any \(\alpha\in\mathbb{R}^{l\times k}\), define \(f_{1,\alpha},\ldots,f_{l,\alpha}\) by \(f_{i,\alpha}:=\sum_{j=1}^{k}\alpha_{ij}u_{j}\). Suppose that for some \(a>0\), the functions \(\tau_{a}(f_{1,\alpha}),\ldots,\tau_{a}(f_{l,\alpha})\) are nonzero and have pairwise disjoint supports. Then each \(\tau_{a}(f_{i,\alpha})\) has a nodal domain \(\tilde{G}^{i}\) such that letting \(\tilde{G}^{i}_{s}\) for \(s\in\operatorname{range}(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}})\) denote the \(s\)-superlevel set of \(\tau_{a}(f_{i,\alpha})^{2}\) on \(\tilde{G}^{i}\), the set_
\[\tilde{S}_{\tilde{G}^{i}}:=\Big\{s\in\operatorname{range}(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}}):\tilde{G}^{i}_{s}\in\mathscr{P}_{N}(M_{0}),\ \frac{\sum_{t=0}^{t_{\max}}\||\nabla_{g_{t}}\Phi^{(t)}_{*}\tau_{a}(f_{i,\alpha})|\|^{2}_{L^{2}(\Phi^{(t)}(\tilde{G}^{i});\mu_{t})}}{|\mathrm{T}|\,\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu_{0})}}\geq\frac{1}{4}\mathcal{J}^{d}_{N}(\tilde{G}^{i}_{s})^{2}\Big\}, \tag{77}\]
_resp._
\[\tilde{S}_{\tilde{G}^{i}}:=\Big\{s\in\operatorname{range}(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}}):\tilde{G}^{i}_{s}\in\mathscr{P}_{D}(M_{0}),\ \frac{\sum_{t=0}^{t_{\max}}\||\nabla_{g_{t}}\Phi^{(t)}_{*}\tau_{a}(f_{i,\alpha})|\|^{2}_{L^{2}(\Phi^{(t)}(\tilde{G}^{i});\mu_{t})}}{|\mathrm{T}|\,\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu_{0})}}\geq\frac{1}{4}\mathcal{J}^{d}_{D}(\tilde{G}^{i}_{s})^{2}\Big\}, \tag{78}\]
_has positive measure and satisfies (81). Moreover, for each \(\{s_{1},\ldots,s_{l}\}\in\tilde{S}_{\bar{G}^{1}}\times\ldots\times\tilde{S}_{ \bar{G}^{l}}\), the collection \(\mathcal{A}_{l}:=\{\tilde{G}^{1}_{s_{1}},\ldots,\tilde{G}^{l}_{s_{l}}\}\) is a Neumann \(l\)-packing for \(M_{0}\) satisfying_
\[\lambda^{d}_{k,N}\leq-\frac{1}{4}\mathcal{J}^{d}_{N}(\mathcal{A}_{l})^{2}\max_ {1\leq j\leq l}\frac{\|\tau_{a}(f_{j,\alpha})\|^{2}_{L^{2}(M_{0};\mu_{0})}}{ \|f_{j,\alpha}\|^{2}_{L^{2}(M_{0};\mu_{0})}}\leq-\frac{1}{4}(h^{d}_{l,N})^{2} \max_{1\leq j\leq l}\frac{\|\tau_{a}(f_{j,\alpha})\|^{2}_{L^{2}(M_{0};\mu_{0}) }}{\|f_{j,\alpha}\|^{2}_{L^{2}(M_{0};\mu_{0})}}, \tag{79}\]
_resp. a Dirichlet \(l\)-packing for \(M_{0}\) satisfying_
\[\lambda^{d}_{k,D}\leq-\frac{1}{4}\mathcal{J}^{d}_{D}(\mathcal{A}_{l})^{2}\max_ {1\leq j\leq l}\frac{\|\tau_{a}(f_{j,\alpha})\|^{2}_{L^{2}(M_{0};\mu_{0})}}{ \|f_{j,\alpha}\|^{2}_{L^{2}(M_{0};\mu_{0})}}\leq-\frac{1}{4}(h^{d}_{l,D})^{2} \max_{1\leq j\leq l}\frac{\|\tau_{a}(f_{j,\alpha})\|^{2}_{L^{2}(M_{0};\mu_{0} )}}{\|f_{j,\alpha}\|^{2}_{L^{2}(M_{0};\mu_{0})}}. \tag{80}\]
Proof.: This result follows by the reasoning for Proposition 3.8 and Theorem 3.15. As in those proofs, we consider only the Neumann case. For each \(1\leq i\leq l\), we select \(\tilde{G}^{i}\) by \(\tilde{G}^{i}:=\operatorname*{arg\,min}_{\tilde{G}}\frac{\sum_{t=0}^{t_{\max}}\||\nabla_{g_{t}}\Phi^{(t)}_{*}\tau_{a}(f_{i,\alpha})|\|^{2}_{L^{2}(\Phi^{(t)}(\tilde{G});\mu_{t})}}{|\mathrm{T}|\,\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G};\mu_{0})}},\) where the minimum is taken over nodal domains \(\tilde{G}\) of \(\tau_{a}(f_{i,\alpha})\). Then the reasoning for Theorem 3.2, modified as in the proofs of Proposition 3.8 and Theorem 3.15, implies that \(\tilde{S}_{\tilde{G}^{i}}\) has positive measure. The reasoning for (44) extends directly to the dynamic setting, and (79) follows as in the proof of Proposition 3.8.
Now, define \(\overline{\tilde{h}^{d}_{i}}:=\frac{1}{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu_{0})}}\int_{\operatorname{range}\left(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}}\right)}\mathcal{J}^{d}_{N}(\tilde{G}^{i}_{s})\mu_{0}(\tilde{G}^{i}_{s})\,\mathrm{d}s\) and \(\tilde{\mathrm{h}}^{d}_{i}(s):=\mathcal{J}^{d}_{N}(\tilde{G}^{i}_{s})\), and define the probability measure \(\tilde{\mathbb{P}}_{i}\) on \(\operatorname{range}(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}})\) by \(\tilde{\mathbb{P}}_{i}(L):=\int_{L}\frac{\mu_{0}(\tilde{G}^{i}_{s})}{\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu_{0})}}\,\mathrm{d}s\). Then the reasoning for (70) implies
\[\operatorname{Leb}(\tilde{S}_{\tilde{G}^{i}})\geq\frac{\|\overline{\tilde{h}^{d}_{i}}-\tilde{\mathrm{h}}^{d}_{i}\|_{L^{1}\left(\operatorname{range}\left(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}}\right);\tilde{\mathbb{P}}_{i}\right)}\|\tau_{a}(f_{i,\alpha})\|^{2}_{L^{2}(\tilde{G}^{i};\mu_{0})}}{2\Big(\overline{\tilde{h}^{d}_{i}}-\inf_{s\in\operatorname{range}\left(\tau_{a}(f_{i,\alpha})^{2}|_{\tilde{G}^{i}}\right)}\tilde{\mathrm{h}}^{d}_{i}(s)\Big)\mu_{0}(\tilde{G}^{i})}. \tag{81}\]
A similar bound holds in the Dirichlet case, replacing \(\mathcal{J}^{d}_{N}\) with \(\mathcal{J}^{d}_{D}\) in the definitions of \(\overline{\tilde{h}^{d}_{i}},\tilde{\mathrm{h}}^{d}_{i},\tilde{\mathbb{P}}_{i}\).
## 4 Examples
We apply our higher Cheeger inequality (Theorem 3.7) to compare the Laplace-Beltrami eigenvalues to the higher Cheeger constants, on three manifolds: a torus (example 4.1), a cylinder using Neumann boundary
conditions (example 4.2) and a 3-ball using Dirichlet boundary conditions (example 4.3). Our Theorem 3.7 applies to manifolds with or without boundary, whenever we know the number of nodal domains in some eigenfunctions on those manifolds, i.e. to each of examples 4.1-4.3. Miclo's existing higher Cheeger inequalities (Theorems 3.4 and 3.5) apply only to manifolds without boundary, i.e. to example 4.1. For that example, we obtain an asymptotically stronger bound on \(h_{k,\emptyset}\) using our Theorem 3.7 than using Miclo's Theorems 3.4 and 3.5. Using our higher dynamic Cheeger inequality (Theorem 3.19), we also compare the dynamic Laplacian eigenvalues to the dynamic Cheeger constants for one dynamical system, a cylinder with linear shear (example 4.4).
### Cheeger constants on a torus
Our first example is a flat torus \(\mathbb{T}^{2}:=2\pi\mathbb{S}^{1}\times 2\pi\mathbb{S}^{1}\), endowed with two-dimensional Lebesgue measure. Then \(\Delta\) has an orthogonal Hilbert basis of eigenfunctions on \(L^{2}(\mathbb{T}^{2},\mathrm{Leb})\), consisting of all functions of the form
\[u_{k_{1},k_{2},\zeta_{1},\zeta_{2}}(x,y):=\cos(k_{1}(x+\zeta_{1}))\cos(k_{2}(y +\zeta_{2})), \tag{82}\]
for \(k_{1},k_{2}=0,1,2,\ldots\) and \(\zeta_{1},\zeta_{2}\in\{0,\frac{\pi}{2}\}\), where we require \(\zeta_{1}=0\) if \(k_{1}=0\) and \(\zeta_{2}=0\) if \(k_{2}=0\) to ensure an orthogonal basis. Each eigenfunction \(u_{k_{1},k_{2},\zeta_{1},\zeta_{2}}\) has corresponding eigenvalue \(\lambda_{k_{1},k_{2},\zeta_{1},\zeta_{2}}=-k_{1}^{2}-k_{2}^{2}\), and we can globally order these eigenfunctions in order of decreasing eigenvalue (resolving ties arbitrarily).
To apply Theorem 3.7, we need to estimate the maximal number \(r_{k}\) of nodal domains of an eigenfunction with eigenvalue greater than or equal to the \(k\)th eigenvalue \(\lambda_{k,\emptyset}\). Each eigenfunction \(u_{k_{1},k_{2},\zeta_{1},\zeta_{2}}\) has \(\max\{4k_{1}k_{2},2k_{1},2k_{2},1\}\) nodal domains, by (82). It can be shown that for each \(k_{1}\geq 1\) and \(\zeta_{1},\zeta_{2}\in\{0,\frac{\pi}{2}\}\), any eigenfunction whose eigenvalue is greater than or equal to \(\lambda_{k_{1},k_{1},\zeta_{1},\zeta_{2}}\) has at most \(4k_{1}^{2}\) nodal domains. In this sense, the eigenfunctions \(u_{k_{1},k_{1},\zeta_{1},\zeta_{2}}\) maximise the number of nodal domains of an eigenfunction under an eigenvalue constraint. Thus, noting that \(\lambda_{6,\emptyset}=\lambda_{1,1,\zeta_{1},\zeta_{2}}\), we can obtain a lower bound on \(r_{k}\) for any \(k\geq 6\) by finding the largest \(k_{1}\) such that \(\lambda_{k,\emptyset}\leq\lambda_{k_{1},k_{1},\zeta_{1},\zeta_{2}}\) for some \(\zeta_{1},\zeta_{2}\in\{0,\frac{\pi}{2}\}\), and noting that \(r_{k}\) is bounded below by the number of nodal domains of \(u_{k_{1},k_{1},\zeta_{1},\zeta_{2}}\). To estimate this \(k_{1}\) in terms of \(k\), we note that \(\lambda_{k,\emptyset}\geq\lambda_{k_{1}+1,k_{1}+1,\zeta_{1},\zeta_{2}}=-2(k_{1}+1)^{2}\). Now, each integer pair in \(\mathcal{I}:=\{(i_{1},i_{2})\in\mathbb{Z}^{2}:i_{1},i_{2}\geq 1,-i_{1}^{2}-i_{2}^{2}\geq-2(k_{1}+1)^{2}\}\) corresponds to a unit-area square contained entirely in the nonnegative quadrant \(Q\) of the disk \(\{-x^{2}-y^{2}\geq-2(k_{1}+1)^{2}\}\). The quadrant \(Q\) has area \(\frac{\pi}{2}(k_{1}+1)^{2}\), so we have \(|\mathcal{I}|\leq\frac{\pi}{2}(k_{1}+1)^{2}\). Each integer pair in \(\mathcal{I}\) corresponds to 4 linearly independent eigenfunctions of the form (82) with different choices of \(\zeta_{1},\zeta_{2}\in\{0,\frac{\pi}{2}\}\), leading to at most \(2\pi(k_{1}+1)^{2}\) eigenvalues, counted with multiplicity, greater than or equal to \(\lambda_{k_{1}+1,k_{1}+1,\zeta_{1},\zeta_{2}}\).
There are also \(2\lfloor\sqrt{2}(k_{1}+1)\rfloor+1\) integer pairs in \(\mathcal{I}^{\prime}:=\{(i_{1},i_{2})\in\mathbb{Z}^{2}:i_{1},i_{2}\geq 0,i_{1}i_{2}=0,-i_{1}^{2}-i_{2}^{2}\geq-2(k_{1}+1)^{2}\}\). Each such integer pair with \(i_{1}\geq 1\) or \(i_{2}\geq 1\) corresponds to 2 linearly independent eigenfunctions of the form (82) with different choices of \(\zeta_{1}\in\{0,\frac{\pi}{2}\}\) or \(\zeta_{2}\in\{0,\frac{\pi}{2}\}\) respectively, while the pair \((0,0)\) corresponds to only 1 eigenfunction. This leads to at most an additional \(4\lfloor\sqrt{2}(k_{1}+1)\rfloor+1\) eigenvalues greater than or equal to \(\lambda_{k_{1}+1,k_{1}+1,\zeta_{1},\zeta_{2}}\). In total, we have at most \(2\pi(k_{1}+1)^{2}+4\sqrt{2}(k_{1}+1)+1\) eigenvalues greater than or equal to \(\lambda_{k_{1}+1,k_{1}+1,\zeta_{1},\zeta_{2}}\). The ordering of the eigenvalues \(\lambda_{i,\emptyset}\) implies there are at least \(k\) eigenvalues greater than or equal to \(\lambda_{k_{1}+1,k_{1}+1,\zeta_{1},\zeta_{2}}\), so \(k\leq 2\pi(k_{1}+1)^{2}+4\sqrt{2}(k_{1}+1)+1\). Applying the quadratic formula and noting \(\sqrt{\pi k+4-\pi}\geq\sqrt{\pi k}\) yields the bound \(k_{1}\geq\sqrt{\frac{k}{2\pi}}-1-\frac{\sqrt{2}}{\pi}\). Now, \(u_{k_{1},k_{1},\zeta_{1},\zeta_{2}}\) has \(4k_{1}^{2}\) nodal domains, so this bound on \(k_{1}\) and the fact \(k_{1}\geq 1\) imply \(r_{k}\geq 4k_{1}^{2}\geq\max\{\frac{2k}{\pi}-4.7\sqrt{k}+8.4,4\}\). Thus, Theorem 3.7 implies
\[\lambda_{k,\emptyset}\leq-\frac{1}{4}h_{r_{k},\emptyset}^{2}\leq-\frac{1}{4}h_{ \max\{\lceil\frac{2k}{\pi}-4.7\sqrt{k}+8.4\rceil,4\},\emptyset}^{2}. \tag{83}\]
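The counting argument behind (83) can be checked numerically for moderate \(k\). The following sketch is a sanity check only, not part of the proof: it enumerates the torus eigenvalues \(-(i_{1}^{2}+i_{2}^{2})\) with the multiplicities coming from (82), and verifies the final lower bound on \(4k_{1}^{2}\); the frequency cut-off \(N=60\) and the tested range \(k<3000\) are arbitrary choices.

```python
import math

N = 60                                   # frequencies 0..N; ample for k < 3000
eigs = []
for i1 in range(N + 1):
    for i2 in range(N + 1):
        mult = (2 if i1 > 0 else 1) * (2 if i2 > 0 else 1)   # choices of zeta_1, zeta_2
        eigs += [-(i1 ** 2 + i2 ** 2)] * mult
eigs.sort(reverse=True)                  # lambda_1 = 0 >= lambda_2 >= ...

for k in range(6, 3000):
    lam_k = eigs[k - 1]
    k1 = math.isqrt((-lam_k) // 2)       # largest k1 with 2*k1^2 <= -lambda_k
    claimed = max(2 * k / math.pi - 4.7 * math.sqrt(k) + 8.4, 4.0)
    assert 4 * k1 * k1 >= claimed - 1e-9, (k, lam_k, k1)
print("r_k >= max{2k/pi - 4.7*sqrt(k) + 8.4, 4} verified for 6 <= k < 3000")
```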
To compare (83) to the bounds from Miclo's Theorems 3.4 and (31), we rewrite the outer inequality of (83) as a bound on \(h_{l,\emptyset}\) for \(l\geq 1\), and use Weyl's law. Let \(k^{*}(l):=\lceil\frac{\pi l}{2}+9.3\sqrt{l+0.3}+14.2\rceil\), then we can rearrange (83) to obtain
\[h_{l,\emptyset}\leq 2\sqrt{-\lambda_{k^{*}(l),\emptyset}}. \tag{84}\]
Now, \(\mathbb{T}^{2}\) has area \(4\pi^{2}\), so Weyl's law (see e.g. [41, p.118]) gives \(\#\{j:\lambda_{j,\emptyset}\geq\lambda\}=\pi|\lambda|+O(\sqrt{|\lambda|})\) as \(\lambda\to-\infty\); inverting, it follows that
\[\lambda_{k,\emptyset}=-\frac{k}{\pi}+O(\sqrt{k}). \tag{85}\]
This allows us to compare our bound (84) with the bounds obtained from Miclo's Theorems 3.4 and 3.5.
* Substituting (85) and the definition of \(k^{*}(l)\) into our bound (84), we obtain that as \(l\to\infty\), \[h_{l,\emptyset}\leq 2\sqrt{\frac{l}{2}+O(\sqrt{l})}=2\sqrt{\frac{l}{2}}+O(1).\] (86)
* Substituting (85) into Miclo's Theorem 3.4 [53, Theorem 7], the reasoning from (86) implies that as \(l\to\infty\), \[h_{l,\emptyset}\leq l^{3}\sqrt{-\frac{\lambda_{l,\emptyset}}{\hat{\eta}}}=l^{ 3}\sqrt{\frac{l}{\pi\hat{\eta}}}+O(l^{3}).\] (87) This is clearly asymptotically weaker than (86).
* Substituting (85) into Miclo's Theorem 3.5 [53, Theorem 13], the reasoning from (86) implies that as \(l\to\infty\), \[h_{l,\emptyset}\leq\sqrt{-\frac{\log(2l+1)\lambda_{2l,\emptyset}}{\eta}}=\sqrt{\frac{2l\log(2l+1)}{\pi\eta}}+O(\sqrt{\log(2l+1)}).\] (88) This is also asymptotically weaker than (86).
### Cheeger constants of a cylinder
Next, we consider a cylinder \(\mathcal{C}:=2\pi\mathbb{S}^{1}\times[0,\pi]\), endowed with two-dimensional Lebesgue measure. Then \(\mathcal{C}\) is a semiconvex subset of the torus \(\mathbb{T}^{2}\) from example 4.1, but \(\mathcal{C}\) is not a convex subset of any manifold since some pairs of points in \(\mathcal{C}\) are connected by two minimal geodesics contained in \(\mathcal{C}\). Under Neumann boundary conditions, \(\Delta\) has an orthogonal Hilbert basis of eigenfunctions on \(L^{2}(\mathcal{C},\mathrm{Leb})\), consisting of all functions of the form
\[u_{k_{1},k_{2},\zeta}(x,y):=\cos(k_{1}(x+\zeta))\cos(k_{2}y), \tag{89}\]
for \(k_{1},k_{2}=0,1,2,\ldots\) and \(\zeta\in\{0,\frac{\pi}{2}\}\), where we require \(\zeta=0\) whenever \(k_{1}=0\) to ensure an orthogonal basis. Each eigenfunction \(u_{k_{1},k_{2},\zeta}\) has corresponding eigenvalue \(\lambda_{k_{1},k_{2},\zeta}=-k_{1}^{2}-k_{2}^{2}\). To apply Theorem 3.7, we again need a lower bound for \(r_{k}\). First, we show that for each \(k_{1}\geq 1\), eigenfunctions of the form \(u_{k_{1},k_{1},\zeta}\) have the maximal number of nodal domains, among eigenfunctions of the form (89) for which \(\lambda_{i_{1},i_{2},\zeta}\geq-2k_{1}^{2}\). Each \(u_{i_{1},i_{2},\zeta}\) has \((i_{2}+1)\max\{2i_{1},1\}\) nodal domains by (89), so maximising the number of nodal domains in \(u_{i_{1},i_{2},\zeta}\) subject to \(\lambda_{i_{1},i_{2},\zeta}\,(=-i_{1}^{2}-i_{2}^{2})\geq-2k_{1}^{2}\) is equivalent to solving \(\max\{2i_{1}(i_{2}+1):(i_{1},i_{2})\in\mathbb{Z}_{\geq 0}^{2},i_{1}^{2}+i_{2}^{2}\leq 2k_{1}^{2}\}\). This can be solved via the relaxation \(\max\{2x(y+1):(x,y)\in([0,k_{1}]\cup[k_{1}+1,\infty))\times\mathbb{R}_{\geq 0},x^{2}+y^{2}\leq 2k_{1}^{2}\}\). Rearranging the constraint \(x^{2}+y^{2}\leq 2k_{1}^{2}\) and maximising \(y\) gives us \(y=\sqrt{2k_{1}^{2}-x^{2}}\). Substituting this into \(2x(y+1)\) gives us \(2x(\sqrt{2k_{1}^{2}-x^{2}}+1)\), which is strictly increasing for \(0\leq x\leq k_{1}\) and strictly decreasing for \(k_{1}+1\leq x\leq\sqrt{2}k_{1}\). Thus, since the objective is larger at \((x,y)=(k_{1},k_{1})\) than at \((x,y)=(k_{1}+1,\sqrt{k_{1}^{2}-2k_{1}-1})\), the maximum is uniquely attained at \((x,y)=(k_{1},k_{1})\). Hence the eigenfunctions \(u_{k_{1},k_{1},\zeta}\) for \(\zeta\in\{0,\frac{\pi}{2}\}\) maximise the number of nodal domains, among eigenfunctions \(u_{i_{1},i_{2},\zeta}\) of the form (89) satisfying \(\lambda_{i_{1},i_{2},\zeta}\geq-2k_{1}^{2}\).
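The integer maximisation above is also easy to confirm by brute force for small \(k_{1}\); the following sketch is purely illustrative, and the range \(k_{1}\leq 29\) is an arbitrary choice.

```python
# Check that max{2*i1*(i2+1) : i1, i2 >= 0, i1^2 + i2^2 <= 2*k1^2} is attained
# uniquely at (i1, i2) = (k1, k1), with value 2*k1*(k1+1).
for k1 in range(1, 30):
    best = max(
        (2 * i1 * (i2 + 1), i1, i2)
        for i1 in range(2 * k1 + 1)
        for i2 in range(2 * k1 + 1)
        if i1 * i1 + i2 * i2 <= 2 * k1 * k1
    )
    assert best == (2 * k1 * (k1 + 1), k1, k1), (k1, best)
print("maximiser (k1, k1) confirmed for k1 = 1, ..., 29")
```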
Now, we bound \(r_{k}\) for each \(k\geq 5\) by finding the largest \(k_{1}\) such that \(\lambda_{k,N}\leq\lambda_{k_{1},k_{1},\zeta}\) for \(\zeta\in\{0,\frac{\pi}{2}\}\), noting that \(\lambda_{5,N}=\lambda_{1,1,\zeta}\). For this \(k_{1}\), we have \(\lambda_{k,N}\geq\lambda_{k_{1}+1,k_{1}+1,\zeta}=-2(k_{1}+1)^{2}\). Each integer pair in the set
\(\mathcal{I}\) from the previous example corresponds to two linearly independent eigenfunctions of the form (89), leading to at most \(\lfloor\pi(k_{1}+1)^{2}\rfloor\) eigenvalues \(\geq\lambda_{k_{1}+1,k_{1}+1,\zeta}\). There are also \(\lfloor\sqrt{2}(k_{1}+1)\rfloor\) nonnegative integer pairs in \(\mathcal{I}^{\prime}\) from the previous example with \(i_{1}>0\), each corresponding to \(2\) linearly independent eigenfunctions, and \(\lfloor\sqrt{2}(k_{1}+1)\rfloor+1\) such pairs with \(i_{1}=0\), each corresponding to only \(1\) linearly independent eigenfunction. These lead to at most an additional \(3\sqrt{2}(k_{1}+1)+1\) eigenvalues \(\geq\lambda_{k_{1}+1,k_{1}+1,\zeta}\). Thus, there are at most \(\pi(k_{1}+1)^{2}+3\sqrt{2}(k_{1}+1)+1\) eigenvalues \(\geq\lambda_{k_{1}+1,k_{1}+1,\zeta}\). Again, the ordering of the \(\lambda_{i,N}\) implies there are at least \(k\) eigenvalues \(\geq\lambda_{k_{1}+1,k_{1}+1,\zeta}\), so \(k\leq\pi(k_{1}+1)^{2}+3\lfloor\sqrt{2}(k_{1}+1)\rfloor+1\). Then the quadratic formula and the fact \(\sqrt{4\pi k+18-4\pi}\geq\sqrt{4\pi k}\) yield \(k_{1}\geq\sqrt{\frac{k}{\pi}}-1-\frac{3}{\sqrt{2}\pi}\). Now, \(u_{k_{1},k_{1},\zeta}\) has \(2k_{1}(k_{1}+1)\) nodal domains, so this bound on \(k_{1}\) and the fact \(k_{1}\geq 1\) imply \(r_{k}\geq 2k_{1}(k_{1}+1)\geq\max\{\frac{2k}{\pi}-2.7\sqrt{k}+2.2,4\}\). Thus, Theorem 3.7 implies that for \(k\geq 5\),
\[\lambda_{k,N}\leq-\frac{1}{4}h_{r_{k},N}^{2}\leq-\frac{1}{4}h_{\max\bigl{\{} \bigl{[}\frac{2k}{\pi}-2.7\sqrt{k}+2.2\bigr{]},4\bigr{\}},N}^{2}. \tag{90}\]
Note that we cannot apply Miclo's Theorems 3.4 or 3.5 to \(\mathcal{C}\), because \(\mathcal{C}\) has nonempty boundary.
### Cheeger constants on a 3-ball
Next, we consider the 3-ball \(\mathbb{B}:=\{\mathbf{x}\in\mathbb{R}^{3}:|\mathbf{x}|\leq 1\}\), equipped with 3-dimensional Lebesgue measure. We work in spherical coordinates \((r,\theta,\phi)\), where \(\theta\) is the polar angle and \(\phi\) is the azimuthal angle. Then \(\Delta\), under Dirichlet boundary conditions, has an orthogonal Hilbert basis of eigenfunctions on \(L^{2}(\mathbb{B},\mathrm{Leb})\), consisting of all functions of the form
\[u_{k_{1},k_{2},k_{3},\zeta}:=S_{k_{2}}(\alpha_{k_{1},k_{2}}r)P_{k_{2}}^{k_{3}} (\cos\theta)\cos(k_{3}(\phi+\zeta)) \tag{91}\]
for \(k_{1}=1,2,\ldots\); \(k_{2}=0,1,\ldots\); \(k_{3}=0,\ldots,k_{2}\); \(\zeta\in\{0,\frac{\pi}{2}\}\), where we require \(\zeta=0\) when \(k_{3}=0\) to ensure an orthonormal basis. The function \(S_{k_{2}}:\mathbb{R}_{+}\to\mathbb{R}\) is the \(k_{2}\)th _spherical Bessel function of the first kind_, \(\alpha_{k_{1},k_{2}}\) is the \(k_{1}\)th positive zero of \(S_{k_{2}}\), and \(P_{k_{2}}^{k_{3}}\) is the \(k_{2}\)th _associated Legendre polynomial of \(k_{3}\)th order_ (see e.g. [31, sec. 3.3] and [20, secs V.8 and VII.5]). The eigenfunction \(u_{k_{1},k_{2},k_{3},\zeta}\) has eigenvalue \(\lambda_{k_{1},k_{2},k_{3},\zeta}=-\alpha_{k_{1},k_{2}}^{2}\). The values \(\alpha_{k_{1},k_{2}}\) satisfy the bounds (simplified from [6, equations (1), (2), (5)])
\[\pi k_{1}+k_{2}-3.75<\alpha_{k_{1},k_{2}}<\pi k_{1}+\frac{\pi}{2}k_{2}+0.03- \frac{(k_{2}+\frac{1}{2})^{2}}{2\bigl{(}\pi k_{1}+\frac{\pi}{2}k_{2}+0.03\bigr{)}}. \tag{92}\]
To apply our Theorem 3.7, we first obtain a lower bound on \(r_{k}\). The function \(P_{k_{2}}^{k_{3}}(\cos\theta)\cos(k_{3}(\phi+\zeta))\) has \((k_{2}-k_{3}+1)\max\{2k_{3},1\}\) nodal domains (see e.g. [49, p.302]), while the function \(S_{k_{2}}(\alpha_{k_{1},k_{2}}r)\) has \(k_{1}\) nodal domains since \(\alpha_{k_{1},k_{2}}\) is the \(k_{1}\)th positive zero of \(S_{k_{2}}\). Thus, the eigenfunction \(u_{k_{1},k_{2},k_{3},\zeta}\) has \(k_{1}(k_{2}-k_{3}+1)\max\{2k_{3},1\}\) nodal domains. In particular, \(u_{k_{1},4k_{1}-1,2k_{1},\zeta}\) for \(k_{1}=1,2,\ldots\), \(\zeta\in\{0,\frac{\pi}{2}\}\), has \(8k_{1}^{3}\) nodal domains, i.e. it is a simple eigenfunction with a relatively high number of nodal domains for its eigenvalue. It can be shown using the second inequality in (92) that with \(c:=3\pi-\frac{8}{3\pi}\),
\[\lambda_{k_{1},4k_{1}-1,2k_{1},\zeta}=-\alpha_{k_{1},4k_{1}-1}^{2}\geq-(ck_{1}- 1.46)^{2}. \tag{93}\]
Thus, for each \(k\geq 18\), we can obtain a lower bound on \(r_{k}\) by finding the largest \(k_{1}\) such that
\[-(ck_{1}-1.46)^{2}\geq\lambda_{k,D}, \tag{94}\]
since we can confirm numerically that \(\lambda_{17,D}\geq-(c-1.46)^{2}\geq\lambda_{18,D}\). For this \(k_{1}\), we have \(\lambda_{k,D}\geq-(c(k_{1}+1)-1.46)^{2}\). By the first inequality in equation (92), we have \(\lambda_{i_{1},i_{2},i_{3},\zeta}\geq-(c(k_{1}+1)-1.46)^{2}\) only for \((i_{1},i_{2},i_{3},\zeta)\in\mathcal{I}:=\{(i_{1},i_{2},i_{3},\zeta):\pi i_{1}+i_{2 }-3.75\leq c(k_{1}+1)-1.46\}\). There are \(2i_{2}+1\) tuples \((i_{1},i_{2},i_{3},\zeta)\in\mathcal{I}\) for each pair \(i_{1},i_{2}\) such that \(\pi i_{1}+i_{2}\leq c(k_{1}+1)+2.29\). Using the formula for sums of squares, and writing
\(a:=c(k_{1}+1)+2.29\) for clarity, the cardinality of \(\mathcal{I}\) is bounded by
\[|\mathcal{I}| =\sum_{i_{1}=1}^{\left\lfloor\frac{a}{\pi}\right\rfloor}\sum_{i_{2} =0}^{\left\lfloor a-\pi i_{1}\right\rfloor}(2i_{2}+1)=\sum_{i_{1}=1}^{\left\lfloor \frac{a}{\pi}\right\rfloor}(\lfloor a-\pi i_{1}\rfloor+1)^{2}=\sum_{i_{1}=1}^{ \left\lfloor\frac{a}{\pi}\right\rfloor}\Bigl{(}\left\lfloor a-\pi\Bigl{(} \left\lfloor\frac{a}{\pi}\right\rfloor+1-i_{1}\Bigr{)}\right\rfloor+1\Bigr{)}^ {2}\] \[\leq\sum_{i_{1}=1}^{\left\lfloor\frac{a}{\pi}\right\rfloor}(\lfloor \pi i_{1}\rfloor+1)^{2}\leq\frac{a^{3}}{3\pi}+\biggl{(}\frac{1}{2}+\frac{1}{ \pi}\biggr{)}a^{2}+\biggl{(}1+\frac{\pi}{6}+\frac{1}{\pi}\biggr{)}a\leq\biggl{(} \frac{c}{\sqrt[3]{3\pi}}k_{1}+6.4\biggr{)}^{3}. \tag{95}\]
Every tuple in \(\mathcal{I}\) corresponds to at most one eigenvalue \(\lambda_{i_{1},i_{2},i_{3},\zeta}\) satisfying \(\lambda_{i_{1},i_{2},i_{3},\zeta}\geq-(c(k_{1}+1)-1.46)^{2}\), so there are at most \(\Bigl{(}\frac{c}{\sqrt[3]{3\pi}}k_{1}+6.4\Bigr{)}^{3}\) such eigenvalues. Hence \(k\leq\Bigl{(}\frac{c}{\sqrt[3]{3\pi}}k_{1}+6.4\Bigr{)}^{3}\), so
\[k_{1}\geq\max\Biggl{\{}\frac{\sqrt[3]{3\pi}}{c}(\sqrt[3]{k}-6.4),1\Biggr{\}}. \tag{96}\]
Now, equations (93) and (94) imply \(\lambda_{k,D}\leq\lambda_{k_{1},4k_{1}-1,2k_{1},\zeta}\), for \(\zeta\in\{0,\frac{\pi}{2}\}\). Thus, since \(u_{k_{1},4k_{1}-1,2k_{1}}\) has \(8k_{1}^{3}\) nodal domains, (96) and the fact \(k_{1}\geq 1\) imply \(r_{k}\geq 8k_{1}^{3}\geq\max\Bigl{\{}\frac{24\pi}{c^{3}}(\sqrt[3]{k}-6.4)^{3},8 \Bigr{\}}\geq\max\{0.119(\sqrt[3]{k}-6.4)^{3},8\}\). Hence Theorem 3.7 implies
\[\lambda_{k,D}\leq-\frac{1}{4}h_{r_{k},D}^{2}\leq-\frac{1}{4}h_{\max\{\lceil 0.119(\sqrt[3]{k}-6.4)^{3}\rceil,8\},D}^{2}. \tag{97}\]
As in the previous example, one cannot apply Miclo's Theorem 3.4 or 3.5 in this case, because \(\mathbb{B}\) has non-empty boundary.
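The bound (93), which drives the estimate above, can be sanity-checked numerically. The sketch below is illustrative only: it locates the zeros \(\alpha_{k_{1},4k_{1}-1}\) with SciPy's `spherical_jn` by scanning for sign changes (the scanning range and the values \(k_{1}\leq 5\) are arbitrary choices) and compares them to \(ck_{1}-1.46\).

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def spherical_bessel_zero(n, k):
    """Return the k-th positive zero of the spherical Bessel function j_n."""
    f = lambda z: spherical_jn(n, z)
    xs = np.linspace(1e-6, 60.0, 60001)
    vals = f(xs)
    zeros = []
    for a, b, va, vb in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
        if va * vb < 0:
            zeros.append(brentq(f, a, b))
        if len(zeros) >= k:
            return zeros[k - 1]
    raise RuntimeError("increase the scanning range")

c = 3 * np.pi - 8 / (3 * np.pi)
for k1 in range(1, 6):
    alpha = spherical_bessel_zero(4 * k1 - 1, k1)
    assert alpha <= c * k1 - 1.46, (k1, alpha)
    print(f"k1={k1}: alpha = {alpha:.4f} <= c*k1 - 1.46 = {c * k1 - 1.46:.4f}")
```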
### Dynamic Cheeger constant on a cylinder with linear shear
Finally, we consider a linear shear on the cylinder \(\mathcal{C}:=2\pi\mathbb{S}^{1}\times[0,\pi]\), similarly to [25, example 6.1]. We consider a dynamical system \(\mathcal{T}\) as in definition 3.9. We let \(\mathrm{T}:=\{0,1,\ldots,t_{\max}\}\) for some even \(t_{\max}\geq 2\), and for each \(t\), we let \(M_{t}:=\mathcal{C}\), and we define \(g_{t}\) as the Euclidean metric and \(\mu_{t}\) as two-dimensional Lebesgue measure. For some \(b>0\), we define each \(\Phi^{(t)}:\mathcal{C}\rightarrow\mathcal{C}\) by
\[\Phi^{(t)}(x,y):=\biggl{(}x+b\frac{t}{t_{\max}}y\;\;(\mathrm{mod}\;2\pi),y \biggr{)}. \tag{98}\]
The maps \(\Phi^{(t)}\) represent a linear shear in the \(x\)-coordinate on the cylinder. The functions
\[u_{k_{1},k_{2},\zeta}^{d}(x,y):=\cos\biggl{(}k_{1}\biggl{(}x+\zeta-\frac{b}{2} y\biggr{)}\biggr{)}\cos(k_{2}y), \tag{99}\]
for \(k_{1},k_{2}=0,1,2,\ldots\), and \(\zeta\in\{0,\frac{\pi}{2}\}\), taking \(\zeta=0\) whenever \(k_{1}=0\), are a complete basis of eigenfunctions for \(\Delta^{d}\) under dynamic Neumann boundary conditions. This follows since for each \(t\in\mathrm{T}\), writing \(\tilde{x}_{t}:=x+\zeta+b\Bigl{(}\frac{t}{t_{\max}}-\frac{1}{2}\Bigr{)}y\) for brevity, we have \(\Phi_{*}^{(t)}u_{k_{1},k_{2},\zeta}^{d}(x,y)=\cos(k_{1}\tilde{x}_{t})\cos(k_{2 }y)\), so
\[\Delta\Phi_{*}^{(t)}u_{k_{1},k_{2},\zeta}^{d}(x,y)\] \[=-\frac{\partial}{\partial x}[k_{1}\sin(k_{1}\tilde{x}_{t})\cos( k_{2}y)]-\frac{\partial}{\partial y}\Bigl{[}k_{1}b\Bigl{(}\frac{t}{t_{\max}}- \frac{1}{2}\Bigr{)}\sin(k_{1}\tilde{x}_{t})\cos(k_{2}y)+k_{2}\cos(k_{1} \tilde{x}_{t})\sin(k_{2}y)\Bigr{]}\] \[=-\biggl{(}k_{1}^{2}\biggl{(}1+b^{2}\Bigl{(}\frac{t}{t_{\max}}- \frac{1}{2}\Bigr{)}^{2}\biggr{)}+k_{2}^{2}\biggr{)}\Phi_{*}^{(t)}u_{k_{1},k_{2}, \zeta}^{d}(x,y)+2k_{1}k_{2}b\Bigl{(}\frac{t}{t_{\max}}-\frac{1}{2}\Bigr{)} \sin(k_{1}\tilde{x}_{t})\sin(k_{2}y).\]
Then, since \(\sum_{t=0}^{t_{\max}}\Bigl(\frac{t}{t_{\max}}-\frac{1}{2}\Bigr)=0\) and \(\sum_{t=0}^{t_{\max}}\Bigl(\frac{t}{t_{\max}}-\frac{1}{2}\Bigr)^{2}=\frac{(t_{\max}+1)(t_{\max}+2)}{12t_{\max}}=\frac{|\mathrm{T}|(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}\), we have
\[\Delta^{d}u_{k_{1},k_{2},\zeta}^{d}(x,y)=-\Big(k_{1}^{2}\Bigl(1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}\Bigr)+k_{2}^{2}\Big)u_{k_{1},k_{2},\zeta}^{d}(x,y),\]
i.e. each \(u_{k_{1},k_{2},\zeta}^{d}\) is an eigenfunction with eigenvalue \(\lambda_{k_{1},k_{2},\zeta}^{d}:=-k_{1}^{2}\bigl(1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}\bigr)-k_{2}^{2}\). These eigenfunctions form a complete orthogonal Hilbert basis for \(L^{2}(\mathcal{C},\mathrm{Leb})\), since for \(t^{*}=\frac{t_{\max}}{2}\), the \(L^{2}\)-isometry \(\Phi_{*}^{(t^{*})}:L^{2}(\mathcal{C})\to L^{2}(\mathcal{C})\) sends the functions (99) to the complete orthogonal Hilbert basis (89).
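The two identities used in this computation can be checked symbolically. The sketch below (using sympy, purely as a sanity check of the displayed calculation) verifies the pointwise Laplacian identity for a sheared cosine, with \(c\) standing for \(b\bigl(\frac{t}{t_{\max}}-\frac{1}{2}\bigr)\) and the constant phase \(\zeta\) absorbed into \(x\), and the two sums over \(t\).

```python
import sympy as sp

# Pointwise identity: Laplacian of cos(k1*(x + c*y)) * cos(k2*y).
x, y, c, k1, k2 = sp.symbols('x y c k1 k2', real=True)
u = sp.cos(k1 * (x + c * y)) * sp.cos(k2 * y)
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)
rhs = (-(k1**2 * (1 + c**2) + k2**2) * u
       + 2 * k1 * k2 * c * sp.sin(k1 * (x + c * y)) * sp.sin(k2 * y))
assert sp.simplify(lap - rhs) == 0

# Sums of the shear slopes over t = 0, ..., t_max.
t = sp.symbols('t', integer=True)
tmax = sp.symbols('t_max', positive=True, integer=True)
s1 = sp.summation(t / tmax - sp.Rational(1, 2), (t, 0, tmax))
s2 = sp.summation((t / tmax - sp.Rational(1, 2))**2, (t, 0, tmax))
assert sp.simplify(s1) == 0
assert sp.simplify(s2 - (tmax + 1) * (tmax + 2) / (12 * tmax)) == 0
print("identities verified")
```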
To apply Theorem 3.19, we need a lower bound for \(r_{k}\) for each sufficiently large \(k\). We consider \(k\geq\pi pq+\sqrt{2}(p+2q)+1\), where \(p\geq q\geq 1\) are integers for which \(1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}=\frac{p^{2}}{q^{2}}\). Then each eigenvalue \(\lambda_{k_{1},k_{2},\zeta}^{d}\) can be written
\[\lambda_{k_{1},k_{2},\zeta}^{d}=-\frac{p^{2}}{q^{2}}k_{1}^{2}-k_{2}^{2}. \tag{100}\]
We obtain our bound on \(r_{k}\) in the following steps. First, we show that for \(k_{1}\in\{q,2q,\ldots\}\), the eigenfunctions \(u_{k_{1},\frac{p}{q}k_{1},0}^{d}\) and \(u_{k_{1},\frac{p}{q}k_{1},\frac{\pi}{2}}^{d}\) have the maximum number of nodal domains, among eigenfunctions of the form (99) with eigenvalue \(\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\). Second, for each \(k_{1}\in\{q,2q,\ldots\}\), we obtain an upper bound for
\[\mathcal{E}(k_{1}):=\#\left\{\lambda_{i_{1},i_{2},\zeta}^{d}: \lambda_{i_{1},i_{2},\zeta}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\right\}, \tag{101}\]
the number of eigenvalues \(\lambda_{i_{1},i_{2},\zeta}^{d}\) (with multiplicity) satisfying \(\lambda_{i_{1},i_{2},\zeta}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\), and hence put an upper bound on the position of \(\lambda_{k_{1},\frac{p}{q}k_{1},0}^{d}\) in the eigenvalue ordering. Third, we use this bound to show that for each \(k\geq\pi pq+\sqrt{2}(p+2q)+1=(\pi q^{2}+\sqrt{2}q)\sqrt{1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}}+2\sqrt{2}q+1\), there is some \(k_{1}\in\{q,2q,\ldots\}\) such that \(\lambda_{k_{1},\frac{p}{q}k_{1},0}^{d}\geq\lambda_{k,N}^{d}\), and also to bound the largest such \(k_{1}\) from below. Finally, for this \(k\) and \(k_{1}\), we use the number of nodal domains in \(u_{k_{1},\frac{p}{q}k_{1},0}^{d}\) to give a lower bound on \(r_{k}\), and hence we use Theorem 3.19 to bound \(\lambda_{k,N}^{d}\) in terms of \(h_{r_{k},N}^{d}\).
_Step 1:_ We begin by proving that \(u_{k_{1},\frac{p}{q}k_{1},0}^{d}\) and \(u_{k_{1},\frac{p}{q}k_{1},\frac{\pi}{2}}^{d}\) have the maximal number of nodal domains among eigenfunctions \(u_{i_{1},i_{2},\zeta}^{d}\) of the form (99) for which \(\lambda_{i_{1},i_{2},\zeta}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\). Each eigenfunction \(u_{i_{1},i_{2},\zeta}^{d}\) has \(\max\{2i_{1},1\}(i_{2}+1)\) nodal domains by (99) (since \(\cos\bigl{(}i_{1}\bigl{(}x+\zeta-\frac{b}{2}y\bigr{)}\bigr{)}\) has \(\max\{2i_{1},1\}\) nodal domains and \(\cos(i_{2}y)\) has \(i_{2}+1\) nodal domains). Thus, by (100), maximising the number of nodal domains in \(u_{i_{1},i_{2},\zeta}^{d}\) subject to \(\lambda_{i_{1},i_{2},\zeta}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\) is equivalent to solving \(\max\{2i_{1}(i_{2}+1):(i_{1},i_{2})\in\mathbb{Z}_{>0}^{2},-\frac{p^{2}}{q^{2}}i_{1}^{2}-i_{2}^{2}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\}\). By a similar relaxation argument to section 4.2, this is uniquely maximised by \((i_{1},i_{2})=(k_{1},\frac{p}{q}k_{1})\). Hence eigenfunctions \(u_{k_{1},\frac{p}{q}k_{1},\zeta}^{d}\) for \(\zeta\in\{0,\frac{\pi}{2}\}\) maximise the number of nodal domains, among eigenfunctions \(u_{i_{1},i_{2},\zeta}^{d}\) of the form (99) satisfying \(\lambda_{i_{1},i_{2},\zeta}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\).
_Step 2:_ Choose any \(k_{1}=q,2q,\ldots\). We can bound \(\mathcal{E}(k_{1})\) (defined in (101)) by considering three cases: eigenvalues \(\lambda_{i_{1},i_{2},\zeta}^{d}\) with \(i_{1},i_{2}\geq 1\), eigenvalues \(\lambda_{i_{1},0,\zeta}^{d}\) with \(i_{1}\geq 1\), and eigenvalues \(\lambda_{0,i_{2},0}^{d}\) for \(i_{2}\geq 0\).
The set \(\{\lambda_{i_{1},i_{2},\zeta}:\lambda_{i_{1},i_{2},\zeta}\geq-2\frac{p^{2}}{q^{2 }}k_{1}^{2},i_{1},i_{2}\geq 1\}\) is in bijection with the set \(\{(i_{1},i_{2},\zeta):\zeta\in\{0,\frac{\pi}{2}\},(i_{1},i_{2})\in\mathbb{Z}_{>0 },-\frac{p^{2}}{q^{2}}i_{1}^{2}-i_{2}^{2}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\}\), by (100). These tuples \((i_{1},i_{2})\) are in bijection with the grid points \((i_{1},i_{2})\) in the positive quadrant \(Q_{pq}\) of the ellipse \(\frac{x^{2}}{2k_{1}^{2}}+\frac{q^{2}y^{2}}{2p^{2}k_{1}^{2}}\leq 1\). The quadrant \(Q_{pq}\) has area \(\frac{\pi p}{2q}k_{1}^{2}\), and each grid
point \((i_{1},i_{2})\in Q_{pq}\) with \(i_{1},i_{2}\geq 1\) is associated with a unit area in \(Q_{pq}\). Therefore, there are at most \(\frac{\pi p}{2q}k_{1}^{2}\) grid points \((i_{1},i_{2})\), so there are at most \(\frac{\pi p}{q}k_{1}^{2}\) tuples \((i_{1},i_{2},\zeta)\), and hence at most \(\frac{\pi p}{q}k_{1}^{2}\) eigenvalues \(\lambda_{i_{1},i_{2},\zeta}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\) with \(i_{1},i_{2}\geq 1\).
By (100), the eigenvalues \(\lambda_{i_{1},0,\zeta}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\) with \(i_{1}\geq 1\) are in bijection with the tuples \((i_{1},\zeta)\) with \(i_{1}\in\mathbb{Z}\cap[1,\sqrt{2}k_{1}]\) and \(\zeta\in\{0,\frac{\pi}{2}\}\), so there are \(2\lfloor\sqrt{2}k_{1}\rfloor\) such eigenvalues. Similarly, the eigenvalues \(\lambda_{0,i_{2},0}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\) are in bijection with the integers \(i_{2}\in\mathbb{Z}\cap[0,\sqrt{2}\frac{p}{q}k_{1}]\), so there are \(\lfloor\sqrt{2}\frac{p}{q}k_{1}\rfloor+1\) such eigenvalues. Combining these three cases, the number \(\mathcal{E}(k_{1})\) of eigenvalues \(\lambda_{i_{1},i_{2},\zeta}^{d}\geq-2\frac{p^{2}}{q^{2}}k_{1}^{2}\), counted with multiplicity, is bounded above by
\[\mathcal{E}(k_{1})\leq\frac{\pi p}{q}k_{1}^{2}+2\lfloor\sqrt{2}k_{1}\rfloor+ \left\lfloor\frac{\sqrt{2}p}{q}k_{1}\right\rfloor+1. \tag{102}\]
_Step 3:_ Equations (100)-(101) imply there are no more than \(\mathcal{E}(q)\) eigenvalues \(\geq\lambda_{q,p,0}^{d}\), so (102) implies there are no more than \(\pi pq+\sqrt{2}(p+2q)+1\) such eigenvalues. Hence for each \(k\geq\pi pq+\sqrt{2}(p+2q)+1\), we have \(\lambda_{q,p,0}^{d}\geq\lambda_{k,N}^{d}\), i.e. for \(k_{1}=q\) we have \(\lambda_{k_{1},\frac{p}{q}k_{1},0}^{d}\geq\lambda_{k,N}^{d}\). Define
\[k_{1}:=\max\Bigl{\{}\tilde{k}_{1}\in\{q,2q,\ldots\}:\lambda_{\tilde{k}_{1},\frac{p}{q}\tilde{k}_{1},0}^{d}\geq\lambda_{k,N}^{d}\Bigr{\}}, \tag{103}\]
then each multiple \(\tilde{k}_{1}\) of \(q\) greater than \(k_{1}\) satisfies \(\lambda_{k,N}^{d}\geq\lambda_{\tilde{k}_{1},\frac{p}{q}\tilde{k}_{1},0}^{d}\). In particular, by (100), we have \(\lambda_{k,N}^{d}\geq\lambda_{k_{1}+q,\frac{p}{q}(k_{1}+q),0}^{d}=-2\frac{p^{2}}{q^{2}}(k_{1}+q)^{2}\). Therefore, since \(\lambda_{k,N}^{d}\) is the \(k\)th-smallest eigenvalue in absolute value, (101) implies \(k\leq\mathcal{E}(k_{1}+q)\). Then (102) yields \(k\leq\frac{\pi p}{q}(k_{1}+q)^{2}+2\sqrt{2}(k_{1}+q)+\frac{\sqrt{2}p}{q}(k_{1}+q)+1\). Applying the quadratic formula yields \(k_{1}\geq\sqrt{\frac{qk}{\pi p}-\frac{q}{\pi p}+\frac{1}{2\pi^{2}}(\frac{2q}{p}+1)^{2}}-(\frac{1}{\sqrt{2}\pi}+\frac{\sqrt{2}q}{\pi p}+q)\). Noting that \(\frac{1}{2\pi^{2}}(\frac{2q}{p}+1)^{2}-\frac{q}{\pi p}>\frac{1}{2\pi^{2}}(\frac{2q}{p}-1)^{2}>0\), we obtain
\[k_{1}\geq\sqrt{\frac{qk}{\pi p}}-\Biggl{(}\frac{1}{\sqrt{2}\pi}+\frac{\sqrt{2} q}{\pi p}+q\Biggr{)}. \tag{104}\]
_Step 4:_ Choose \(k\) and \(k_{1}\) as in step 3, so that \(\lambda_{k_{1},\frac{p}{q}k_{1},0}^{d}\geq\lambda_{k,N}^{d}\) by (103). Then the number of nodal domains in \(u_{k_{1},\frac{p}{q}k_{1},0}^{d}\) gives a lower bound on \(r_{k}\). This eigenfunction has \(2k_{1}(\frac{p}{q}k_{1}+1)\) nodal domains by the reasoning in step 1, so \(r_{k}\geq 2k_{1}(\frac{p}{q}k_{1}+1)\). Substituting (104) into this expression gives \(r_{k}\geq 2\Bigl{(}\sqrt{\frac{qk}{\pi p}}-\Bigl{(}\frac{1}{\sqrt{2}\pi}+\frac{\sqrt{2}q}{\pi p}+q\Bigr{)}\Bigr{)}\Bigl{(}\sqrt{\frac{pk}{\pi q}}-\frac{p}{q}\Bigl{(}\frac{1}{\sqrt{2}\pi}+\frac{\sqrt{2}q}{\pi p}+q\Bigr{)}+1\Bigr{)}\). Expanding and noting that \(p\geq q\geq 1\) so \(2\sqrt{\frac{qk}{\pi p}}\Bigl{(}1-\frac{2\sqrt{2}}{\pi}\Bigr{)}>0\), \(\frac{2\sqrt{2}p}{\pi}+\frac{4\sqrt{2}q}{\pi}>2q+\frac{\sqrt{2}}{\pi}\) and \(\frac{p}{\pi^{2}q}+\frac{4q}{\pi^{2}p}+\frac{4}{\pi^{2}}>\frac{2\sqrt{2}q}{\pi p}\), we obtain \(r_{k}\geq\frac{2k}{\pi}-2.8\sqrt{pqk}+2pq\). Then the definition of \(p\) and \(q\) before (100) implies \(r_{k}\geq\frac{2k}{\pi}-2.8q\sqrt[4]{1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}}\sqrt{k}+2q^{2}\sqrt{1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}}\). Substituting \(k_{1}\geq q\) into \(r_{k}\geq 2k_{1}(\frac{p}{q}k_{1}+1)\) instead, we additionally obtain \(r_{k}\geq 2q(p+1)=2q^{2}\sqrt{1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}}+2q\). Hence, rewriting the definition of \(k\) from step 3 using the definition of \(p\) and \(q\) before (100), Theorem 3.19 implies that for each \(k\geq(\pi q^{2}+\sqrt{2}q)\sqrt{1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}}+2\sqrt{2}q+1\), we have
\[\lambda_{k,N}^{d}\leq-\frac{1}{4}\bigl{(}h_{r_{k},N}^{d}\bigr{)}^{2}\leq-\frac{1}{4}\Biggl{(}h^{d}_{\max\bigl{\{}\bigl{\lceil}\frac{2k}{\pi}-2.8q\sqrt[4]{1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}}\sqrt{k}+2q^{2}\sqrt{1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}}\bigr{\rceil},\,2q^{2}\sqrt{1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}}+2q\bigr{\}},\,N}\Biggr{)}^{2}. \tag{105}\]
Asymptotically for large \(k\), this bound becomes \(\lambda_{k,N}^{d}\leq-\frac{1}{4}(h_{\frac{2k}{\pi}-O(\sqrt{k}),N}^{d})^{2}\), irrespective of the shear strength \(b\) and number of time steps \(|\mathrm{T}|\). Pre-asymptotically for intermediate-sized \(k\), this bound links \(\lambda_{k,N}^{d}\) to \(h_{j,N}^{d}\) for
progressively smaller \(j\) as the shear strength increases. This is because the domain behaves like a cylindrical domain with progressively more mismatched sides, so that gridlike packings of \(\mathcal{C}\) with the optimal aspect ratio for each packing element are rarer. We cannot apply Theorems 3.17 or 3.18, our dynamic versions of Theorems 3.4 and 3.5, because \(\mathcal{C}\) has non-empty boundary.
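As a concrete numerical illustration of this pre-asymptotic effect (this worked instance is added here and is not taken from the original text): take \(t_{\max}=2\), so \(|\mathrm{T}|=3\), and \(b=3\sqrt{2}\). Then \(1+\frac{b^{2}(|\mathrm{T}|+1)}{12(|\mathrm{T}|-1)}=1+\frac{18\cdot 4}{24}=4\), so we may take \(p=2\) and \(q=1\), and the bound applies for every \(k\geq\pi pq+\sqrt{2}(p+2q)+1\approx 12.94\), i.e. \(k\geq 13\). For \(k=100\), the lower bound on \(r_{k}\) from step 4 evaluates to
\[r_{100}\geq\max\Bigl{\{}\Bigl{\lceil}\frac{200}{\pi}-2.8\sqrt{2}\cdot 10+4\Bigr{\rceil},\,2\cdot 2+2\Bigr{\}}=\max\{29,6\}=29,\]
so (105) bounds \(\lambda_{100,N}^{d}\) above by \(-\frac{1}{4}(h_{29,N}^{d})^{2}\). This index is noticeably smaller than the asymptotic value \(\lfloor\frac{2k}{\pi}\rfloor=63\), quantifying how the shear weakens the pre-asymptotic link between \(\lambda_{k,N}^{d}\) and the higher Cheeger constants.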
## 5 Summary
The sequence of the \(k\)th (Neumann or Dirichlet) Cheeger constants for a weighted Riemannian manifold (Definition 2.2) and the corresponding \(k\)-packings with small Cheeger ratio (Definition 2.1) together give a global geometric description of weighted Riemannian manifolds. There are no existing algorithms for computing \(k\)-packings for \(k\geq 2\) with small Cheeger ratio on arbitrary Riemannian manifolds. We proposed some methods for obtaining upper bounds on the Cheeger constants, and for finding packings with quality guarantees, i.e. upper bounds on their Cheeger ratios (Theorem 3.7 and Proposition 3.8). We showed that for any Neumann or Dirichlet eigenfunction, its eigenvalue gives an upper bound on the Cheeger constant corresponding to the number of nodal domains in the eigenfunction (Theorem 3.7). Moreover, we showed that positive-measure collections of the superlevel sets within each nodal domain give rise to packings whose Cheeger ratios are bounded above in terms of the eigenvalue (Proposition 3.8). This bound is straightforward to compute, but it only produces \(k\)-packings from eigenfunctions with \(k\) nodal domains. Sometimes, it is possible to combine geometric information from several eigenfunctions to obtain more features than the number of nodal domains in any single eigenfunction. One obtains disjointly supported functions, each supported on a single feature, by taking linear combinations of eigenfunctions and applying soft thresholding. The sparse eigenbasis approximation (SEBA) algorithm [28] can be used to find suitable linear combinations. We showed that if the separation into disjointly supported sparse functions is successful, then positive-measure collections of the resulting superlevel sets yield packings with an upper bound on their Cheeger ratios (Proposition 3.8). This bound depends only on the largest eigenvalue (in absolute value) and the effectiveness of the separation (i.e. the fraction of the \(L^{2}\) mass of the linear combinations that is preserved by the thresholding operation).
Coherent sets in nonautonomous dynamical systems are sets with small dynamic Cheeger ratio (Definition 3.10). We showed that positive-measure collections of the superlevel sets within each nodal domain of a dynamic Laplacian eigenfunction yield packings consisting of coherent sets, i.e. packings whose dynamic Cheeger ratios are bounded above (Theorem 3.19). Also, as in the static case, it is sometimes possible to obtain more coherent sets than the number of nodal domains in any single eigenfunction, by taking linear combinations of the first \(k\) eigenfunctions and applying soft thresholding. We showed (Proposition 3.20) that positive-measure collections of the resulting superlevel sets have their dynamic Cheeger ratios bounded above in terms of the largest eigenvalue (in absolute value), and the effectiveness of the separation (fraction of \(L^{2}\) mass preserved by soft thresholding).
|
2308.15785 | Collaborative, Code-Proximal Dynamic Software Visualization within Code
Editors | Software visualizations are usually realized as standalone and isolated tools
that use embedded code viewers within the visualization. In the context of
program comprehension, only few approaches integrate visualizations into code
editors, such as integrated development environments. This is surprising since
professional developers consider reading source code as one of the most
important ways to understand software, therefore spend a lot of time with code
editors. In this paper, we introduce the design and proof-of-concept
implementation for a software visualization approach that can be embedded into
code editors. Our contribution differs from related work in that we use dynamic
analysis of a software system's runtime behavior. Additionally, we incorporate
distributed tracing. This enables developers to understand how, for example,
the currently handled source code behaves as a fully deployed, distributed
software system. Our visualization approach enhances common remote pair
programming tools and is collaboratively usable by employing shared code
cities. As a result, user interactions are synchronized between code editor and
visualization, as well as broadcasted to collaborators. To the best of our
knowledge, this is the first approach that combines code editors with
collaboratively usable code cities. Therefore, we conducted a user study to
collect first-time feedback regarding the perceived usefulness and perceived
usability of our approach. We additionally collected logging information to
provide more data regarding time spent in code cities that are embedded in code
editors. Seven teams with two students each participated in that study. The
results show that the majority of participants find our approach useful and
would employ it for their own use. We provide each participant's video
recording, raw results, and all steps to reproduce our experiment as
supplementary package. | Alexander Krause-Glau, Wilhelm Hasselbring | 2023-08-30T06:35:40Z | http://arxiv.org/abs/2308.15785v1 | # Collaborative, Code-Proximal
Dynamic Software Visualization within Code Editors
###### Abstract
Software visualizations are usually realized as standalone and isolated tools that use embedded code viewers within the visualization. In the context of program comprehension, only few approaches integrate visualizations into code editors, such as integrated development environments. This is surprising since professional developers consider reading source code as one of the most important ways to understand software, therefore spend a lot of time with code editors.
In this paper, we introduce the design and proof-of-concept implementation for a software visualization approach that can be embedded into code editors. Our contribution differs from related work in that we use dynamic analysis of a software system's runtime behavior. Additionally, we incorporate distributed tracing. This enables developers to understand how, for example, the currently handled source code behaves as a fully deployed, distributed software system. Our visualization approach enhances common remote pair programming tools and is collaboratively usable by employing shared code cities. As a result, user interactions are synchronized between code editor and visualization, as well as broadcasted to collaborators.
To the best of our knowledge, this is the first approach that combines code editors with collaboratively usable code cities. Therefore, we conducted a user study to collect first-time feedback regarding the perceived usefulness and perceived usability of our approach. We additionally collected logging information to provide more data regarding time spent in code cities that are embedded in code editors. Seven teams with two students each participated in that study. The results show that the majority of participants find our approach useful and would employ it for their own use. We provide each participant's video recording, raw results, and all steps to reproduce our experiment as supplementary package. Furthermore, a live demo of our tool is available online.1 We invite other researchers to extend our open-source software.2 Video URL: [https://youtu.be/3qZVSehnEug](https://youtu.be/3qZVSehnEug)
Footnote 1: [https://code.explorviz.dev](https://code.explorviz.dev)
software visualization, dynamic analysis, program comprehension, pair programming, integrated development environments
## I Introduction
Source code comprehension is still the primary method to come to an understanding of a software system's behavior [1]. This is not unexpected, because developers are trained to recognize recurring patterns and resulting behavior in source code. They might even spend most of their development time in integrated development environments (IDEs) [2]. However, navigation in IDEs leads to a redundant but unavoidable overhead [3], and in terms of software visualization (SV), developers are concerned about the context switch caused by standalone SV tools [4]. As a result, code proximity is a necessary property for SV [5, 6] to succeed in its intended area, i.e., professional software development. Code proximity means the ability of the visualization tool to provide easy and fast access to the original, underlying source code [7]. In this context, past research approaches have embedded SV into code editors and IDEs (both from now on referred to as _code editor_) to link source code with its visualization [8, 9, 10, 11].
In this paper, we introduce our collaboratively usable SV approach that can be embedded in code editors. In comparison to related approaches, we use dynamic analysis as source for rendering three-dimensional code cities [12, 13]. The SV is linked directly to the source code that is under development within the code editor and vice versa. Therefore, we directly connect runtime behavior with the related program elements, for example, Java methods. User interactions are synchronized between code editor and visualization, as well as broadcasted to collaborators. As proof of concept, we implemented a Visual Studio Code3 (VS Code) extension that realizes our design. We conducted a first-time user study to collect feedback regarding the perceived usefulness and perceived usability of our approach. Furthermore, we collected logging information to provide more data regarding usage statistics of SV that are embedded into code editors. In this study, seven teams with two students each collaboratively used our approach in an onboarding-related scenario. Overall, the results show a highly rated usefulness.
Footnote 3: [https://code.visualstudio.com](https://code.visualstudio.com)
The remainder of this paper is structured as follows. Section II presents the architectural overview and proof of concept implementation for our approach. We proceed by
introducing the envisioned usage scenarios for our approach in Section III. Afterwards, Section IV explains our experimental setup. Section V presents and discusses the results of our study. Then, Section VI introduces related work. Finally, we conclude this paper and present future work in Section VII.
## II Approach
In this section, we present the architectural design and proof of concept implementation for this research work. For that, we build upon our previously published approach named _Software Visualization as a Service_ (SVaaS), i.e., providing an online-accessible and on-demand service for collaborative program comprehension using SV. Due to space constraints, we refer readers to [14] for a description of the basic concepts of our approach.
### _Architectural Design_
Figure 1 shows (a simplified overview of) our approach's architectural design. It is technology independent with the exception of a browser-based SV component. As shown, it is divided into four stages (blue-striped areas). Figure 1-A and Figure 1-B depict the monitoring and analysis stages, respectively. These are the foundation of our SVaaS concept. The analysis pipeline, for example, can be horizontally scaled out to handle varying loads of concurrent users, thereby positively influencing the effectiveness of the overall tool [15]. Although data acquisition, analysis, and cloud technologies are important aspects of our concept, a detailed explanation is beyond the scope of this paper. Therefore, we refer readers to [14] for details and focus on the remaining two stages.
The Webserver (Figure 1-C) serves the static files that comprise the web-based SV, i.e., CSS, JavaScript, and HTML. Furthermore, it acts as reverse proxy for clients to connect to the backend services, e.g., to obtain data to be visualized. Users now have two options to link the SV with their code editor:
* For the first option, they can use the standalone SV that runs inside of their web browser and connects to an extension in their code editor (Figure 1-D). The latter acts as gateway between code editor and SV. This is similar to the 'classic' approach for code viewers embedded into SVs and relates to many other works (see Section VI). Interactions that should be linked between code editor and SV, e.g., 'open a class file in the code editor when the related visualization entity was clicked', are synchronized by the Code Editor Service (Figure 1-E).
* For the second option, users can install an extension in their code editor that already includes the Frontend (Figure 1-F). In this case, we do not need an external service to synchronize interaction events, but use a built-in communication mechanism between code editor and its extension. Therefore, we reduce the context switch overhead that occurs when switching from SV to code editor and vice versa [4]. Another advantage of the second option is that it can also be installed in cloud-based code editors that run in browsers. This can be beneficial in some use cases, e.g., onboarding of new developers, as shown in Section III.
Fig. 1: (Simplified) Architectural design of our approach.
Regardless of the selected option, users can collaboratively use the SV. To achieve this, the Collaboration Service (Figure 1-G) broadcasts events, e.g., 'user X opened package Y', to all clients of the same session except the one that triggered the event [16]. The clients then apply the received events to their SV, thereby synchronizing their states.
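To make this broadcast pattern concrete, the following minimal sketch shows how such a relay could be implemented with Node.js and the `ws` package. The event shape, session handling, and port are illustrative assumptions and do not reproduce the actual ExplorViz Collaboration Service.

```typescript
import { WebSocketServer, WebSocket } from 'ws';

// Assumed event shape: which session the sender belongs to, what happened,
// and the event-specific payload (e.g., the id of the opened package).
interface SessionEvent {
  sessionId: string;
  type: string; // e.g., 'component-opened', 'popup-shared', 'ping'
  payload: unknown;
}

const sessions = new Map<string, Set<WebSocket>>();
const wss = new WebSocketServer({ port: 4444 });

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    const event = JSON.parse(raw.toString()) as SessionEvent;

    // Lazily register the sender for its session.
    const clients = sessions.get(event.sessionId) ?? new Set<WebSocket>();
    clients.add(socket);
    sessions.set(event.sessionId, clients);

    // Forward the event to every other client of the same session,
    // i.e., to all clients except the one that triggered the event.
    for (const client of clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify(event));
      }
    }
  });

  socket.on('close', () => {
    for (const clients of sessions.values()) clients.delete(socket);
  });
});
```

Each receiving client would then apply the forwarded event to its own code city, which yields the state synchronization described above.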
### _Proof of Concept Implementation_
We have prototyped our approach within our SV tool ExplorViz.4 Our tool's development commenced in 2012 [17] and focused on several aspects throughout time, such as development concerns [18, 19] and extended reality [16, 20]. More recently, we use ExplorViz to research collaborative software visualization in the context of program comprehension [16, 21]. ExplorViz currently uses dynamic analysis as source for the visualization. Our depicted SV is configured to visualize the aggregated runtime behavior of ten seconds [14].
Footnote 4: [https://explorviz.dev](https://explorviz.dev)
Figure 2 shows a screenshot of our prototype implementation. We developed a VS Code extension that realizes the previously mentioned design. It can be used as a gateway to link the external SV to the code editor or provide the embedded SV instead. Due to space constraints, we focus on the latter and refer readers to our supplementary video. The extension uses an HTML iFrame to embed the web-based Frontend, and thus the SV, in VS Code (see Figure 1-F on the previous page). The embedded SV can be switched on or off via the ExplorViz logo button (Figure 2-A). It is automatically placed in a new editor group next to the source code (Figure 2-B). Users can select one of their (currently or previously) analyzed software systems (as shown in the supplementary video) and open the related SV. The latter is provided as three-dimensional code cities using Three.js5 for rendering (Figure 2-C). The embedded Frontend uses cross-origin communication based on the JavaScript Window object to interact with VS Code. Therefore, we do not need an external service that synchronizes the interaction events, as is the case when using the external Frontend or as shown in related works (see Section VI). Every ten seconds, the Frontend triggers an SV update. For that, it obtains the latest runtime data for the selected software system from the analysis pipeline and updates the visualization if required. Furthermore, the Frontend sends new data to VS Code, which then highlights Java classes and methods that have been used in the aggregated runtime behavior. This is shown by the gutter icons and code lenses in Figure 2-D. Users can click on a code lens to focus the related entity in the SV, e.g., a high-rise building visualizing a Java class. Vice versa, pressing for example on a communication line will cause the file to open and focus on the related method in VS Code. In terms of working together, users can join or host a collaborative session from within the embedded Frontend and use the collaborative features of the SV, e.g., pinging or shared popups (Figure 2-E), to interact with each other (please see [16] for more details). Furthermore, a collaborative session also enables remote pair programming. For VS Code, developers can, for example, use Microsoft's LiveShare extension. LiveShare has great features and usability, but uses Microsoft servers that might not be available in the future or cannot be used due to compliance concerns. For the sake of our evaluation's reproducibility, we therefore decided against using an available product such as Microsoft's LiveShare, but developed our own solution (for the user study). This can be seen in Figure 2-F, where the live text selection of another user is depicted (as yellow background of OwnerRepository). These text selection events are synchronized by an implementation of the external Code Editor Service (Figure 1-E) using WebSockets for almost real-time communication.
Footnote 5: [https://threejs.org](https://threejs.org)
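To make the round trip between the editor and the embedded visualization more tangible, the following sketch shows how such an extension could be wired up with the public VS Code webview and CodeLens APIs. It is a simplified, hypothetical implementation: the frontend URL, the command identifier, the message shapes, and the lens placement are invented for illustration and do not reproduce the actual ExplorViz extension code.

```typescript
import * as vscode from 'vscode';

// Assumed shape of messages received from the embedded frontend.
interface VizMessage { kind: string; file?: string; line?: number; }

export function activate(context: vscode.ExtensionContext) {
  // Webview panel that hosts the web-based SV next to the source code.
  const panel = vscode.window.createWebviewPanel(
    'explorviz', 'ExplorViz', vscode.ViewColumn.Beside, { enableScripts: true });

  // The webview embeds the frontend in an iframe; the inline script relays
  // window messages between the iframe and the extension host.
  panel.webview.html = `<!DOCTYPE html><html><body style="margin:0">
    <iframe id="sv" src="https://frontend.example/visualization"
            style="width:100%;height:100vh;border:0"></iframe>
    <script>
      const vscodeApi = acquireVsCodeApi();
      const frame = document.getElementById('sv');
      window.addEventListener('message', (e) => {
        if (e.source === frame.contentWindow) {
          vscodeApi.postMessage(e.data);                // frontend -> extension
        } else {
          frame.contentWindow.postMessage(e.data, '*'); // extension -> frontend
        }
      });
    </script></body></html>`;

  // Visualization -> editor: open and reveal the clicked program element.
  panel.webview.onDidReceiveMessage(async (msg: VizMessage) => {
    if (msg.kind === 'openInEditor' && msg.file) {
      const doc = await vscode.workspace.openTextDocument(msg.file);
      const pos = new vscode.Position((msg.line ?? 1) - 1, 0);
      await vscode.window.showTextDocument(doc, { selection: new vscode.Range(pos, pos) });
    }
  });

  // Editor -> visualization: a code lens whose command asks the frontend to
  // focus the corresponding city entity (here simply one lens per Java file).
  context.subscriptions.push(
    vscode.commands.registerCommand('explorviz.focusEntity', (id: string) =>
      panel.webview.postMessage({ kind: 'focusEntity', id })),
    vscode.languages.registerCodeLensProvider({ language: 'java' }, {
      provideCodeLenses(doc) {
        const range = new vscode.Range(0, 0, 0, 0);
        return [new vscode.CodeLens(range, {
          title: 'Show in ExplorViz',
          command: 'explorviz.focusEntity',
          arguments: [doc.fileName],
        })];
      },
    }),
  );
}
```

With this kind of wiring, a click in the code city can reveal the related method in the editor, and a code lens click can focus the related building, without any external synchronization service in between.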
## III Envisioned Usage Scenarios
Besides using advanced (web) technologies, our approach can be differentiated from related work by the use of dynamic analysis and collaborative SV features. Therefore, we now introduce envisioned usage scenarios that may follow from our approach and related future works.
Fig. 2: Proof of concept implementation – The editor of VS Code displays a Java class. The ExplorViz extension visualizes the associated runtime behavior and adds visual functions to the editor to directly link source code and visualization.
### Scenario 1 (SC1): Facilitate the Onboarding Process
In professional software development, companies utilize different techniques for the onboarding process of new developers. Peer support, product overview, and simple tasks are perceived as useful in that context [22], while finding documentation and technical issues, e.g., setting up a development environment, impede the onboarding process, especially for remote work [23]. We envision a scenario where cloud-based code editors with embedded SVs are prepared to guide new developers step-by-step through a software system's behavior. Users click on a use case of the analyzed (distributed) target system and understand its unfolding via SV. Furthermore, increasingly large portions of the source code (e.g., depending on experience) are directly linked to SV entities. This allows developers to understand which portion of the source code acts in which use cases. The approach can then be used for task-oriented onboarding, where developers also face small tasks to comprehend the software [22, 24]. At any time, users can invite other developers for collaborative comprehension or their mentor and ask for help. Next to voice communication, participants use collaborative features such as synchronized text selection and shared information popups to interact and exchange [16].
### Scenario 2 (SC2): Highlight Changes During Code Reviews
Feature requests and resulting change-based code reviews are commonly used in professional software development [25]. However, reviewers tend to give vacuous feedback and generally report on review tools' limitations when used in complex scenarios [26]. In this context, we see another potential usage scenario for our approach that we outline in the following. A team member is supposed to review source code changes of a colleague. To do this, he or she can click on a link inside of the pull request that opens a prepared, cloud-based code editor with an embedded SV of the new program behavior (due to the source code change). Source code changes are color-coded in the IDE. For understanding the program behavior, it is possible to switch between old and new program behavior in the SV by pressing a button. The colleague who issued the pull request can be invited to the session such that the changes can also be discussed together.
### Scenario 3 (SC3): Integrate Runtime Information into Development Activities
Staging environments are used to test software systems in a production-like environment. We envision code editors informing selected developers about performance problems of a software system installed (e.g., in the staging area). A developer can click on this notification to open the embedded SV. The visualization depicts the runtime behavior which includes the performance problem. It also highlights the entity that introduces the problem, e.g., a method call that took too long to finish. Based on this, developers get runtime information displayed in their code editor and can analyze affected code lines.
## IV Experiment Design and Demographics
Effectiveness is one of the most common properties used to evaluate SV approaches. In that context, Merino et al. [27] present a systematic literature review of SV evaluation. Their work analyzes the literature body of full papers that were published in the SOFTVIS/VISSOFT conferences, resulting in the examination of 181 papers. The authors focus on evaluations that validate the effectiveness of their presented approach. It is mentioned that multiple evaluations omit other variables that can contribute to or generally influence the effectiveness [28], such as recollection and emotions. We share this opinion and argue that we must first evaluate properties such as perceived usefulness, perceived usability, or feature requests to potentially refine a new, exploratory approach. Only afterwards, we should evaluate effectiveness and efficiency with a sufficiently large number of participants in controlled experiments [29]. As a result, we decided to conduct an exploratory user-study first. We designed an experiment in which participants use and evaluate our approach in a task-oriented onboarding process, i.e., in a scenario similar to SC1 (see Section III). In the future, we will also evaluate our approach in other scenarios by using a similar experiment. In this paper however, we developed the experiment with a focus on SC1 due to the approach's prototype implementation, the exploratory nature of the study, and the duration of a single experiment run. As a result, our research questions (RQ) are not concerned about effectiveness or efficiency. Instead, we focus on several aspects to gather qualitative feedback and quantitative results, such as time spent in the embedded SV, to gain first insights into the use of our approach:
* **RQ1**: How do subjects use the embedded SV and code editor during task solving?
* **RQ2**: Is the code editor perceived as more useful than the embedded SV?
* **RQ3**: Do subjects recognize the usefulness of collaborative SV features for specific tasks?
* **RQ4**: What is the general perception of the usefulness and usability of the approach?
* **RQ5**: Is the approach perceived as useful in the envisioned usage scenarios?
We again emphasize that the findings of this contribution should be seen as first insights and indicators for refinements rather than statistically grounded results. However, by answering the research questions, we can derive the following main **contributions** of our evaluation:
* Further insights regarding the perceived usefulness of software cities to comprehend runtime behavior.
* First quantitative and qualitative results regarding the perceived usefulness, perceived usability, and usage time for collaborative, code-proximal software cities.
* A supplementary package containing the evaluation's raw results, screen recordings of all participants, and detailed instructions as well as software packages for reproduction [30].
In the following, we present the participants' demographics and our experiment's procedure.
### _Participants_
We invited students of Kiel University who attend the Bachelor's or Master's program in computer science to participate in our user study [31]. The participation was voluntary. All participants could sign up for random group assignment or participate with a fellow student. Each group had the chance to win two out of ten 100 € gift cards for an e-commerce shop [32].
_Distribution._ The conducted user study included seven groups with two students each. The number of participants is therefore slightly larger than the median participant count (thirteen) in the related literature body [27], but too small to be effectively used in a controlled experiment [33, 29]. With the exception of one group, all other participants within their group knew each other. Five students attend the Master's program; the remaining students are undergraduates in computer science. All participants reported that they intend to become professional software developers.
_Experiences._ Figure 3 shows participants' reported experiences with software development based on work experiences. The two students who indicated they had no experience are in the undergraduate program, and one of them also indicated (as the only person) that the decision to become a software engineer is not final. The remaining twelve participants have either gained experience while working as student employees or in private software development. Three participants are additionally involved in open source development. Figure 4 shows the results for various experiment-related aspects that were asked about. All participants stated that they have knowledgeable or even better experience in VS Code. Three persons rate their web development and software architecture experiences at beginner level. One of the participants with no software engineering work experience reported having no experience in software architecture. Overall, the distribution of experiences matches the courses of study, since SVs are often treated as seminar paper topics in the master's program, for example. However, we probably also see overestimation, such as the persons who stated to be at expert level for VS Code and web development, as well as half of the participants stating to have at least knowledgeable experience in SV. In this context, half of the participants have used ExplorViz at least once in the past. The participants of three groups each have different experiences with ExplorViz.
### _Target System and Task_
ExplorViz' SV visualizes a software system's runtime behavior. However, it is not limited to application tracing of monolithic software systems, but also supports distributed tracing,6 e.g., network requests between applications that use distributed architectures. Since distributed software systems are pretty common nowadays, we incorporated this fact in our experiment. To achieve that, we used the distributed version of the Spring PetClinic7 as target system for the experiment. As done in the past [16], we recorded traces during the execution of use cases within the PetClinic. For the experiment, these were then provided as so-called snapshots, i.e., aggregated runtime behavior, to the Frontend, resulting in a structural, 'static' SV of dynamic runtime behavior. We decided against using multiple snapshots so as not to overwhelm new users with the amount of features. However, this can be seen in the supplementary video of this work. The participants explored the target system by means of its source code as well as embedded SV and were asked to solve two tasks.
Footnote 6: [https://opentelemetry.io](https://opentelemetry.io)
Footnote 7: [https://github.com/spring-petclinic/spring-petclinic-microservices](https://github.com/spring-petclinic/spring-petclinic-microservices)
Table I depicts the program comprehension tasks that all participants had to solve during the experiment. We did not use metric analysis tasks such as 'find the class with the
Fig. 4: Participants’ reported experiences for different aspects.
Fig. 3: Participants’ reported experiences with software development based on work experiences (multi-choice).
highest instance count'. Instead, the chosen tasks instructed the participants to structurally comprehend the software and find analogies based on the depicted runtime behavior. Therefore, the tasks of the experiment refer to a scenario as presented in SC1, i.e., a guided, task-oriented introduction for onboarding. With the focus on SC1, we intend to investigate both the non-collaborative and collaborative onboarding process. Therefore, T1 had to be solved alone and served as an introduction to both the target system and the use of our approach. T2 introduced the collaborative features, e.g., shared SV and synchronized text selection events, and asked the participants to work together.
### _Procedure_
In the following, we present the experiment's procedure. For additional information, we refer readers to the second prepared video8 that demonstrates an exemplary experiment run. Overall, the experiment is divided into pre-questionnaire, mid-questionnaires, i.e., questions that had to be solved after each task completion, and post-questionnaire.
Footnote 8: [https://youtu.be/wdkcDDPXeQQ](https://youtu.be/wdkcDDPXeQQ)
The user study took place at Kiel University and included one instructor who also co-authored this paper. The instructor designed and implemented the approach as well as conducted the user study. Although our approach can be used remotely, we decided to have the study take place in one locality, so that the instructor could intervene if necessary. In each experimental run, the participants were first informed about the data that would be recorded and used for publication. After signing a consent form, the instructor gave a brief introduction to VS Code and the embedded SV. It was mentioned that all introduced features were additionally described on a cheat sheet, which was placed on the table in front of the subjects. Afterwards, the participants were told to openly ask questions if they had a problem. Furthermore, they were told that they could pause or abort the experiment at any time. They drew their login token for the survey tool LimeSurvey9 and started with the pre-questionnaire. Then T1 was introduced and all participants were redirected to browser-based VS Code instances by clicking a button inside of the LimeSurvey form. Each VS Code instance was specifically prepared for a given task and ready to use. It did not require any setup, so that the participants could completely focus on the task itself. They began by reading a markdown file that introduced the target system, controls, and the task itself. After answering T1 in LimeSurvey, all participants gave their feedback on the approach they had just used. T2 was introduced in the same way as T1. However, here participants were instructed to test the collaborative features first and then work together on solving T2. Again, the subjects gave their feedback and concluded with the post-questionnaire. During each experiment run, the instructor made a note of noticeable mentions stated by the participants.
Footnote 9: [https://www.limesurvey.org](https://www.limesurvey.org)
## V Results & Discussion
Our mid-questionnaires and post-questionnaire contained statements for which participants had to indicate their level of (dis)agreement on a 5-point Likert scale. The questionnaires also included free reply fields to leave a comment on any experiment-related matter. Additionally, the instructor made a note of observations such as rational usages of specific features as well as noticeable emotions [27] and mentions of the participants. In the following, we present and use the results of our conducted user study to revisit our posed research questions. Furthermore, we discuss the threats to validity of our evaluation. Although we use the term SV in this paper, we do not want it to be understood as a generalization of our results. We again emphasize that the results and their interpretation are restricted to our particular prototype using collaborative code cities and our experiment. Therefore, the findings should be seen as first insights and indicators for refinements rather than statistically grounded results.
Fig. 5: Total time spent & perceived difficulty per task.
TABLE I: Program comprehension tasks that participants had to solve.

| ID | Category | Question |
| --- | --- | --- |
| T1 | Structural Understanding | What do you think is the reason that the 'Owner' class is instantiated multiple times, but the other classes in the relevant program flow are instantiated only once? |
| T2 | Software Insight | Name all Java classes that are involved in a program flow to show the visit screen with the new 'select veterinarian' feature. |
### Task evaluation
We measured an overall task correctness of 90 %. The related time spent solving the tasks is depicted in Figure 5. The average time spent on T1 is 19 minutes for both the mean and the median. The fastest participant correctly solved T1 in seven minutes. This person was already familiar with ExplorViz. For T2, we see 29 minutes for the mean and 24 minutes for the median. Both tasks were without time limit, hence the outlier group for T2. Figure 5 also depicts the participants' perceived task difficulty. T1 and T2 were found to be difficult by four participants, with T1 also found to be very difficult by one person. Due to the overall distribution, we conclude that the tasks were neither too easy nor too difficult.
_RQ1: How do subjects use the embedded SV and code editor during task solving?_
To the best of our knowledge, this work presents a novel approach that combines code editors with remote pair programming techniques and embedded, collaborative code cities. Therefore, we first intend to understand how the participants in our study use the approach with free choice of the tool, i.e., embedded SV and code editor, as well as with tasks related to SC1. In that context, Figure 6 depicts the time spent using each tool per task. For measurement, a VS Code event was used to capture the time at which participants clicked on the code editor or the ExplorViz extension, and therefore switched their focused context. We would like to mention that it was technically (due to VS Code's limitations for extensions) only possible to measure the time spent between context switches. Thus, if a participant did not change the context but, for example, only used the SV, then our measurements indicate a time spent of one minute for the SV. This is the case for the fastest participant for T1 mentioned above, who actively interacted only with the SV during this task (as confirmed by the video recording). The average time spent using the SV for T1 is seven minutes, and nine minutes for VS Code (both mean and median). During this task, participants comprehended the source code for the first time and probably spent more time reading it. It is therefore surprising that the time difference for the first task is already quite small. The reason for this is that code cities can facilitate the understanding of structures and are therefore suitable for the task of obtaining an overview [34, 35]. This was also explicitly mentioned by three participants in the free text fields. For T2, the average time spent using the SV is fifteen minutes, and eight minutes for VS Code. The (almost) double amount of time spent using the SV results from the two outliers. For this task, however, the median time spent using the SV is thirteen minutes, and eight minutes for VS Code. We suppose that this comes from the shared software cities and the ability to highlight objects in question. The instructor's notes mention the frequent use of shared popups within two groups. The video recordings confirm that these groups often use the popups as a basis for discussion. Also, participants often use the ping feature of our tool to highlight certain details for their collaborator. Therefore, they spent more time using the SV. However, collaboration is not the only reason for that. T2 explicitly requires understanding and extending a program flow. The SV provides a visual overview of the software system's structure and, in our case, also of a runtime behavior snapshot (see Section IV-B). As a result, it is far easier and more obvious to use this available visualization and, for example, trace imaginary method calls with the mouse cursor (especially when combined with collaborative features).
Figure 6 also presents the number of context switches for each task. We observe that for T1 the number of switches between SV and code editor is much more distributed among the participants than for T2. Again, the reason for that is presumably the collaboration in T2. Most of the time, the participants work together and therefore change their tool when initiated by the other collaborator. For both T1 and T2, the median of context switches is around forty, indicating that the number of context switches is independent of our tasks and collaboration.
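The paper states only that a VS Code event was used for this measurement; one plausible way to log such context switches with the public VS Code extension API is sketched below. The log file location and entry format are assumptions made for illustration.

```typescript
import * as vscode from 'vscode';
import * as fs from 'fs';

// Appends a timestamped entry whenever the focus switches between a text
// editor and the ExplorViz webview panel (illustrative sketch only).
export function registerContextSwitchLogging(
  context: vscode.ExtensionContext,
  vizPanel: vscode.WebviewPanel,
) {
  const logFile = '/tmp/context-switches.jsonl'; // assumed location
  const log = (target: 'editor' | 'visualization') =>
    fs.appendFileSync(logFile, JSON.stringify({ target, timestamp: Date.now() }) + '\n');

  context.subscriptions.push(
    // Fires when a (different) text editor becomes active.
    vscode.window.onDidChangeActiveTextEditor((editor) => {
      if (editor) log('editor');
    }),
    // Fires when the webview panel gains or loses visibility/focus.
    vizPanel.onDidChangeViewState((e) => {
      if (e.webviewPanel.active) log('visualization');
    }),
  );
}
```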
Since our approach incorporates the runtime behavior of the target system, we also wanted to know how participants perceived the usefulness of the two tools to comprehend the posed program flow of T1. In this context, Figure 7 shows that the SV was perceived as more useful than the code editor. One participant mentioned that the communication lines are one of the most beneficial properties of the SV. In ExplorViz, the communication lines incorporate runtime information such as the method call's frequency in the visualized snapshot. This information is important to comprehend runtime behavior. Additionally, the SV already maps the runtime information that the users would otherwise have to find and understand on their own.
We conclude that the participants used the SV as a supplement to the code editor for specific comprehension tasks.
Fig. 6: Time spent per tool & number of context switches performed per task.
_RQ2_: _Is the code editor perceived as more useful than the embedded SV?_
Traditionally, understanding a software system's behavior is primarily achieved by comprehending the source code [1]. For this experiment, the results related to RQ1 show that our approach was, for example, used by the participants to gain an overview of the target system. This is a common and suitable use case for SV, as shown in the past [34]. However, professional developers question the need for SV [15, 36]. In our opinion, one of the reasons for that is the lack of properties such as code proximity [5, 6] and the SV tool's setup [4]. In that context, we now examine how participants rate the usefulness of our approach.
Figure 7 depicts the results of the mid-questionnaires regarding the perceived usefulness of the tools for a task. For T1, overall 71 % agree with the posed statement 'SV helped with the task'. The usefulness of the code editor was slightly (one person difference) more agreed to. However, for the SV the number of participants who neither agree nor disagree is higher and those who disagree is lower. Regarding T2, we see that overall 86 % agree with the posed statement 'SV helped with the task'. In comparison, the code editor's usefulness was slightly (one person difference) less agreed to.
We conclude that the participants perceive code editor and SV as approximately equally useful (in the context of the task solving).
_RQ3_: _Do subjects recognize the usefulness of collaborative SV features for specific tasks?_
With RQ3, we expand the results of our previous work [16] regarding the perceived usefulness of collaborative code cities. In this context, we asked all participants to state their level of agreement with two statements posed.
Figure 8 presents the related results. We see that 43 % of the participants agree and another 43 % strongly agree with the posed statement 'Collaborative SV features helped with the task'. The one person who disagrees with the statement mentioned that the collaborative SV features did not help in his case, since there was barely any input from the other participant. However, he agrees that the communication would be a big help in pair-programming-supported development in the real world. Presumably due to the low contribution of his collaborator, the same person also disagrees with the second statement, which refers to the perceived usefulness of voice communication. Nevertheless, all of the remaining thirteen participants strongly agree that voice communication was helpful in the task. This is consistent with our previous findings indicating that voice communication is one of the most useful collaborative tools in SV [16].
We conclude that the majority of participants perceive the collaborative SV features as useful in the given task.
_RQ4_: _What is the general perception of the usefulness and usability of the approach?_
The post-questionnaire was designed to capture participants' overall perceptions of the approach's usefulness and usability. By answering RQ1, we have seen that the participants indeed use the SV as supplement during the comprehension task. For RQ2, we concluded that participants perceived code editor and SV to be about equally useful in the context of a real-world task. Finally, Figure 9 shows:
Fig. 8: Mid-questionnaire - T2 - Collaboration
Fig. 7: Mid-questionnaires - Perceived usefulness for tasks
All participants agree or strongly agree that the SV's code proximity is generally useful.
Collaboration is obviously dependent on many factors, e.g., mutual perception of collaborators or motivation. In our context, we have seen this for RQ3 and in previously published results [16]. The participants rate the collaborative SV features slightly differently when they are evaluated independently of a task. Figure 9 shows a shift in the distribution of approval ratings. The one person who previously disagreed with the usefulness of the collaborative features now neither agrees nor disagrees. This fits his previous comments. Compared to the perceived usefulness for T2, the overall perceived usefulness of collaborative SV features shows less strong agreement. As a matter of fact, we could not find a reason why two participants downgraded their level of agreement to 'agree'. However, the overall approval rate remains the same.
We conclude that the majority of subjects perceive the collaborative SV features as useful.
Although this evaluation is overall more concerned with the perceived usefulness of embedded SV, identified usability problems can help to identify desirable refinements. In this context, Figure 9 also presents the participants' perceived usability of our approach. The results show that 86 % of the participants find the used combination of embedded SV and code editor usable. There are some desirable improvements that are mentioned via text response, e.g., better performance. However, the biggest usability problem was the unintended minimization of the embedded SV. The reason for that is that VS Code opens files that have been clicked in the package explorer in the currently focused editor group. This behavior can be disabled by locking an editor group. However, at the current time of writing, the lock mechanism cannot be triggered from within a VS Code extension. Figure 9 also shows that another 86 % would use this approach for private purposes such as collaborative program comprehension with fellow students.
We conclude that the majority of participants find our approach usable.
_RQ5: Is the approach perceived as useful in the envisioned usage scenarios?_
Our pilot study found that a single experiment run would take about an hour to complete. In order not to discourage potential participants due to the time to be spent, we decided to ignore the other usage scenarios and only use tasks in the experiment based on SC1. Nevertheless, the post-questionnaire was also used to capture participants' perceived usefulness in applying the approach in the remaining, envisioned scenarios. In this case, they were described in text and subjects were asked to state their agreement on a 5-point Likert scale. Figure 10 depicts the related results. The complete scenario descriptions are available in the supplementary package of this paper [30], but essentially summarize the envisioned usage scenarios in Section III. The participants rated SC1 with the highest overall agreement and strong agreement, respectively. The experiment's tasks and their introduction originate from SC1. SC2 has the highest amount of neutrality and disagreement. One person that answered with neither agreement nor disagreement mentioned that code changes are usually reviewed before deploying them. Since our approach only shows runtime behavior, he is not sure how changes will be visualized for the code review. This detail was in fact omitted in the textual description of SC2. We believe that this uncertainty is the reason for the highest amount of neutrality and disagreement for SC2. However, the majority consensus was positive for all scenarios.
We conclude that the majority of subjects find the application of our approach useful in the posed scenarios.
Fig. 10: Post-questionnaire – Perceived usefulness of the approach when applied in a described scenario. (see Section III)
Fig. 9: Post-questionnaire - Perceived usefulness and usability
### _Threats to Validity_
_Remote pair programming solution._ As mentioned in Section II-B, we decided to implement our own remote pair programming approach, so that the reproducibility of our evaluation is not dependent on the availability of external services. However, this custom implementation lacks useful features compared to full-fledged solutions for remote pair programming. For example, one participant mentioned that he was unable to draw the attention of the collaborator to a specific code part. Although our study did not aim to evaluate whether one tool is better than the other, this custom implementation may have influenced the perceived usefulness or usability of the SV or code editor. In contrast, Figure 7 shows that the participants find the SV to be more suitable to understand dynamic program flows. With that being said, we conclude that more empirical research is required in this context.
_Experiment duration._ The average time spent on the user study was about one hour, both median and mean. It follows that the attention span of the participants and thus the results might have been influenced. To mitigate this, we told participants during the introduction that breaks could be taken at any time and participation could be aborted. Moreover, T2 was solved collaboratively and therefore presumably relieved the experimental situation.
_Target system._ The prepared target system contains 26 application logic-related Java files that are distributed among four Maven subprojects. As a result, the small project size may have influenced the perceived usability of the SV, as also mentioned by one participant. We agree, but also emphasize that we did not intend to evaluate usability based on the scalability of the visualization, but on the overall concept. Overall, this evaluation is more concerned about the perceived usefulness of SV incorporating distributed tracing for the onboarding process. In addition, we argue that a real-world application of the onboarding scenario with SV should guide new developers through the software system's behavior with increasingly large portions of the code base.
_Participants._ The use of students in experiments is a valid simplification that is often said to possibly compromise external validity [31]. In our case, the participants' experiences might have influenced their perception regarding the usefulness of the SV as well as their time spent using the SV. In this context, professional developers can benefit from their experience, e.g. with the Spring framework, and can understand the source code faster. As a result, we will repeat the experiment with professional developers.
## VI Related Work
Code proximity is often not clearly emphasized in SV publications, but follows from the mentions of code viewers in the text itself. Therefore, there are numerous research approaches that use embedded code viewers within SV such as code cities [34]. This also applies to more recent and often collaboratively usable virtual reality approaches [37, 38, 39, 40, 41]. Other publications present different use cases for embedded SV in code editors, such as dependency browsing [42] or sketching [43, 44]. Few approaches enable developers to modify source code via embedded code editors [45]. Due to space limitations, we cannot address and discuss all related approaches [10, 8, 11], but focus below on what we consider to be the most comparable work.
In 2015, Balogh et al. presented a refined version of their tool CodeMetropolis [9]. Their approach uses static analysis of a software system's source code and visualizes the result as a 3D code city. The related rendering is achieved using a modded version of the video game Minecraft. Thanks to the multiplayer mode of Minecraft, the code city can also be explored collaboratively. Overall, CodeMetropolis and ExplorViz share the aspects of collaboration and code editor integration. However, these features are implemented differently in each case. For example, in CodeMetropolis users navigate through the same instance of a given code city using the first-person perspective. They can see the avatars of collaborators and interact based on Minecraft's limitations. In ExplorViz, the collaboration is achieved using collaborative SV features, e.g., shared popups. Regarding the code editor integration, both CodeMetropolis and ExplorViz provide an extension that can be installed in Eclipse and VS Code, respectively. In this context, both extensions provide a comparable set of features, e.g., opening a Java class in the SV. However, our extension is also able to embed the SV in the actual code editor, whereas the Metropolis approach can only be used as an external SV that links to the code editor (see Section II-A).
## VII Conclusions & Future Work
In this paper, we presented the architectural design of our approach for collaborative, code-proximal dynamic software cities within code editors. The main idea is to link collaborative SVs directly to the source code that is under development within a code editor and vice versa. We have prototyped this approach within our SV tool ExplorViz. The result is a VS Code extension that either embeds three-dimensional software cities in the code editor or acts as a gateway between the code editor and an external SV. Therefore, we directly link runtime behavior with the related program elements, for example, Java methods. Users can collaboratively explore the SV from within their code editor using synchronized software cities and collaborative SV features, e.g., shared popups. In addition to the implementation, we sketched three envisioned usage scenarios.
We conducted an initial user study to collect first-time feedback regarding the perceived usefulness and perceived usability of our approach. The results show that the majority of participants generally perceive the approach as useful and usable. In this context, participants rated the code editor and the SV as equally useful in solving the given program comprehension tasks. The measured time spent in each tool, i.e., SV and code editor, indicates that the participants indeed use the SV as a supplementary tool.
In the future, we will implement useful features and refinements. Additionally, we plan to repeat the experiment with professional developers.
## Acknowledgment
The authors would like to thank Malte Hansen and Lennart Ideler for their contributions with implementing and evaluating some of the features presented in this paper. |
2305.14467 | FLAIR #2: textural and temporal information for semantic segmentation
from multi-source optical imagery | The FLAIR #2 dataset hereby presented includes two very distinct types of
data, which are exploited for a semantic segmentation task aimed at mapping
land cover. The data fusion workflow proposes the exploitation of the fine
spatial and textural information of very high spatial resolution (VHR)
mono-temporal aerial imagery and the temporal and spectral richness of high
spatial resolution (HR) time series of Copernicus Sentinel-2 satellite images.
The French National Institute of Geographical and Forest Information (IGN), in
response to the growing availability of high-quality Earth Observation (EO)
data, is actively exploring innovative strategies to integrate these data with
heterogeneous characteristics. IGN is therefore offering this dataset to
promote innovation and improve our knowledge of our territories. | Anatol Garioud, Apolline De Wit, Marc Poupée, Marion Valette, Sébastien Giordano, Boris Wattrelos | 2023-05-23T18:47:19Z | http://arxiv.org/abs/2305.14467v1 | # FLAIR: French Land cover from Aerospace ImageRy.
###### Abstract
According to a report by the Food and Agriculture Organization of the United Nations (FAO) in 2015 [1], a significant portion of the world's soil resources are in a condition that can be classified as fair, poor, or very poor. This degradation of soils, coupled with the loss of biodiversity, has far-reaching implications for the state of ecosystems and their long-term sustainability. Soils play a vital role in providing a range of ecosystem services. They serve as natural habitats for numerous plant and animal species, act as a crucial carbon sink by absorbing CO\({}_{2}\) (to the extent that they are the largest carbon sink, surpassing the atmosphere and all vegetation and animals on Earth's surface), filter rainwater, support food production, and function as the planet's largest water reservoir. The degradation of soils and biodiversity can be attributed in large part to the process of land artificialization, with urban sprawl being a significant contributing factor. This growing phenomenon has raised concerns among public authorities, who recognize the importance of monitoring the state of territories. Artificialization is defined as the long-term deterioration of the ecological functions of soil, including its biological, hydrological, climatic, and agronomic functions, resulting from its occupation or use [2].
The French National Institute of Geographical and Forest Information (IGN) [3], in response to the growing availability of high-quality Earth Observation (EO) data, is actively exploring innovative strategies to integrate these data with heterogeneous characteristics. As part of their initiatives, the institute employs artificial intelligence (AI) tools to monitor land cover across the territory of France and provides reliable and up-to-date geographical reference datasets.
The FLAIR #1 dataset, which focused on aerial imagery for semantic segmentation, was released to facilitate research in the field. Building upon this dataset, the FLAIR #2 dataset extends the capabilities by incorporating a new input modality, namely Sentinel-2 satellite image time series, and introduces a new test dataset. Both FLAIR #1 and #2 datasets are part of the resources currently explored or exploited by IGN to produce the French national land cover map reference _Occupation du sol a grande echelle_ (OCS-GE).
The growing importance of EO in the monitoring and understanding of Earth's physical processes, and the diversity of data now publicly available, naturally favours multi-modal approaches that take advantage of the distinct strengths of this data pool. Remote sensing data have several main characteristics that are of crucial importance depending on the intended purpose. Spatial, temporal and spectral resolutions will influence the choice of data and their importance in a process. The complexity of integrating these different data tends to promote the use of machine learning for their exploitation.
This FLAIR #2 challenge organized by IGN proposes the development of multi-resolution, multi-sensor and multi-temporal aerospace data fusion methods, exploiting deep learning computer vision techniques.
The FLAIR #2 dataset hereby presented includes two very distinct types of data, which are exploited for a semantic segmentation task aimed at mapping land cover. The data fusion workflow proposes the exploitation of the fine spatial and textural information of very high spatial resolution (VHR) mono-temporal aerial imagery and the temporal and spectral richness of high spatial resolution (HR) time series of Copernicus Sentinel-2 [4] satellite images, one of the most prominent EO missions. Although less spatially detailed, the information contained in satellite time series can be helpful in improving the inter-class distinction by analyzing their temporal profile and different responses in parts of the electromagnetic (EM) spectrum.
**Spatial and temporal domains definition**
**Spatial domains and divisions**: as for the FLAIR #1 dataset, a spatial domain is equivalent to a French 'departement', which is a French sub-regional administrative division. While the spatial domains can be geographically close, heavy pre-processing of the radiometry of aerial images independently per 'departement' creates important differences (see [5]). Each domain has a varying number of areas subdivided into patches of the same size across the dataset.
While these areas were initially defined to contain sufficient spatial context by taking into account aerial imagery, the strong difference in spatial resolution with satellite data means that they consist of few Sentinel-2 pixels. Therefore, in order to also provide a minimum of context from the satellite data, a buffer was applied to create _super-areas_. This allows every patch of the dataset to be associated with a _super-patch_ of Sentinel-2 data of sufficient size through a larger footprint. Figure 1 illustrates the different spatial units of the dataset.
**Temporal domains**: they are twofold: on the one hand, the date of acquisition of the aerial imagery (which varies in terms of year, month and day), and on the other hand, the satellite acquisitions, varying in terms of month and day.
**Dataset extent**: The dataset includes 50 spatial domains (Figure 2) representing the different landscapes and climates of metropolitan France. The train dataset constitutes 4/5 of the spatial domains (40) while the remaining 1/5 of the domains (10) are kept for testing. This test dataset introduces new domains compared to the FLAIR #1 test dataset. Some domains are in common, but the areas within those domains are distinct. The FLAIR #2 dataset covers approximately 817 km\({}^{2}\) of the French metropolitan territory.
For details about aerial images (ORTHO HR®) and associated elevation data, as well as pre-processing, refer to the FLAIR #1 datapaper [5].
Technical details about Sentinel-2 can be found in [4]. The images were downloaded from the Sinergise API [6] as Level-2A products (L2A) which are atmospherically corrected using the Sen2Cor algorithm [7]. L2A products provide Bottom-Of-the-Atmosphere (BOA) reflectances, corresponding to a percentage of the energy the surface reflects. L2A products also deliver pixel-based cloud (CLD) and snow (SNW) masks at 20 m spatial resolution. Sentinel-2 images are typically provided as 110\(\times\)110 km (with 10 km overlap) squared ortho-images in UTM/WGS84 projection. However, in order to limit the size of the data and due to the wide extent of the dataset, only the super-areas were downloaded. Concerning Sentinel-2 pre-processing, the 20 m spatial resolution bands are first resampled during data retrieval to 10 m by the nearest interpolation method. The same approach is adopted for the cloud and snow masks. Due to the relative orbits of Sentinel-2, some images contain nodata pixels (reflectances at 0). As all Sentinel-2 images acquired during the aerial image acquisition year are gathered, all dates containing such nodata pixels were removed. It must be remarked that the length of the time series and the acquisition dates thus vary for each super-area. Table II provides information about the number of dates included in the filtered Sentinel-2 time series for the train and test datasets. On average, each area is covered by 55 satellite acquisition dates over the course of a year.
Note that cloudy dates are not removed from the time series. Instead, the masks are provided and can be used to filter the cloudy dates if needed. The resulting Sentinel-2 time series are subsequently reprojected into the Lambert-93 projection (EPSG:2154), which is that of the aerial imagery.
**Data description, naming conventions and usage**
The FLAIR #2 dataset is composed of 77,762 aerial imagery patches, each 512\(\times\)512 pixels, along with corresponding annotations, resulting in a total of over 20 billion pixels. The patches correspond to 916 areas distributed across 50 domains and cover approximately 817 km\({}^{2}\). The area sizes and the number of patches per area vary but are always a multiple of 512 pixels at a resolution of 0.20 meters. Additionally, the dataset includes 55,244 satellite super-area acquisitions that have a buffer of 5 aerial patches (512 m) surrounding each aerial area. Description of the data is provided below:
* The **aerial input patches (IMG)** consist of 5 channels, similar to the FLAIR #1 dataset. These channels include blue, green, red, near-infrared, and elevation bands, all encoded as 8-bit unsigned integer datatype. The aerial patches are named as _IMG_ID_, with a unique identifier (ID) across the dataset assigned to each patch. A file named _flair_aerial_metadata_json_ contains metadata for each of the aerial patches. This JSON file provides detailed information such as the date and time of acquisition, the geographical location of the patch centroid (x, y), the mean altitude of the patch (z), and the type of camera used. For more in-depth descriptions of these metadata attributes, please refer to the documentation provided in [5].
* The **Sentinel-2 satellite data (SEN)** of each super-area comprise _data_, _masks_ and _products_ files, plus a _JSON_ file to match aerial and satellite imagery:
  * the super-area reflectance time series is stored in the _SEN2_xxxx_data.npy_ files. These files contain 4D NumPy arrays with a shape of \(T\times C\times H\times W\), where \(T\) represents the acquisition dates (which can vary for each file), \(C\) represents the 10 spectral bands of Sentinel-2, and \(H\) and \(W\) denote the height and width dimensions of the data, respectively. The data is stored as uint16 datatype, which differs from the acquisition datatype mentioned in the Sentinel Hub reference provided [6]. It is important to note that the data in these files is provided without any additional processing or modifications.
  * the super-area cloud and snow masks are stored in the _SEN2_xxxx_masks.npy_ files. These files have a similar shape as the data files, with a 4D array format of \(T\times C\times H\times W\). However, they consist of only two channels, representing the snow masks and cloud masks, respectively, in that order. The values in the masks range from 0 to 100 and indicate the probability of cloud or snow occurrence for each pixel. A value of 100 indicates a high probability.
  * the names of the Sentinel-2 time series products are listed in the _SEN2_xxxx_products.txt_ file. This file provides additional information for each acquisition, including the Sentinel-2 platform (S2A or S2B), the acquisition date (which corresponds to the first date mentioned in the product name), the acquisition time, and the orbit number and tile name associated with the product. These details help identify and differentiate the specific products within the Sentinel-2 time series dataset.
| Sentinel-2 time series (1 year) | min | max | mean | total |
| :-- | :--: | :--: | :--: | :--: |
| train dataset | 20 | 100 | 55 | 757 |
| test dataset | 20 | 114 | 55 | 193 |

TABLE II: Number of acquisitions (dates) per super-area in the Sentinel-2 time series of one year (corresponding to the year of aerial imagery acquisition).
Additionally, _flair-2_centroids_sp_to_patch_json_ file is provided alongside the data. This file plays a role in dynamically cropping the satellite super-areas into super-patches during the data loading process. The JSON file uses the aerial patch name (_e.g._, IMG_077413) as the key and provides a list of two indexes (_e.g._, [13, 25]) that represent the data-coordinates of the aerial patch centroids. Using these coordinates and a specified number of pixels (referred to as _sat_superpatch_size_), super-patches are extracted from the satellite data. For the experiments, the default _sat_superpatch_size_ is set to 40, resulting in super-patches with a spatial size of 40*40 pixels. This size corresponds approximately to two aerial patches on each side of the centroid.
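As an illustration, the following Python sketch shows how a Sentinel-2 super-patch could be cropped around an aerial patch centroid using these files; the file names, the [row, column] interpretation of the centroid coordinates and the boundary handling are illustrative assumptions, not the released FLAIR #2 code.

```python
import json
import numpy as np

# Load one super-area time series (T x C x H x W) and the centroid index file.
sen_data = np.load("SEN2_D077_2021-Z9_AF_data.npy")
centroids = json.load(open("flair-2_centroids_sp_to_patch.json"))

def extract_superpatch(data: np.ndarray, centroid, size: int = 40) -> np.ndarray:
    """Crop a size x size Sentinel-2 super-patch around an aerial patch centroid."""
    row, col = centroid                      # assumed [row, col] order
    half = size // 2
    r0, c0 = max(row - half, 0), max(col - half, 0)
    return data[:, :, r0:r0 + size, c0:c0 + size]

superpatch = extract_superpatch(sen_data, centroids["IMG_077413"])
print(superpatch.shape)                      # e.g. (T, 10, 40, 40)
```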
The pattern \(\mathbf{xxxx}\) in the file names corresponds to the format domain_year-areanumber_arealandcoverletters (_e.g._, D077_2021-Z9_AF). The _arealandcoverletters_ represent the two broad types of land cover present in the area. For more detailed information about the specific land cover types, please refer to [5].
* The **annotation patches (MSK)** consist of a single channel with values ranging from 1 to 19, encoded as an 8-bit unsigned integer datatype. These files are named as _MSK_ID_, where ID corresponds to the same identifier used for the aerial imagery patches. It is important to note that annotations are limited to the boundaries of aerial imagery areas and do not extend to satellite super-areas. In addition, annotations derived from aerial imagery correspond to the specific date the images were captured. However, certain evolving classes may not accurately reflect the current state of the features as observed in Sentinel imagery. For instance, the banks of a watercourse, delineated based on aerial imagery, may undergo changes over time, spanning a year. These changes can result from various factors such as natural processes or human activities, causing the banks to shift or erode. Consequently, the annotations based on older aerial imagery may not capture these temporal variations.

Fig. 4: Example of input and supervision data: true color composition, near-infrared color composition, elevation band, Sentinel-2 true color composition super-patch and supervision masks. The data from the first three columns are retrieved from the IMG files, the super-patch from the SEN numpy files, while the last column corresponds to the MSK files.
Figure 4 gives an example of aerial patches, the corresponding extracted super-patch (with the aerial patch footprint outlined) and annotation patches. The interest of the extended spatial information provided by the Sentinel-2 super-patches is particularly visible in the last two rows of Figure 4. Indeed, a location on a beach or on a lake is difficult to determine from the aerial image alone and could easily be confused with the sea, for example in the last row.
The current test dataset has a different sampling than FLAIR #1. The use of satellite time series to inject temporal information is especially relevant for natural surfaces with _e.g._ a seasonal variation. Therefore, the classes of forests (coniferous and deciduous), agricultural land and herbaceous cover were favored, accounting for 72.98% of the test dataset.
## Benchmark architecture
**Network definition**: to capture both spatial and temporal information from very high resolution aerial images and high-resolution satellite images, we propose a two-branch architecture called **U-T&T**, for _Textural_ and _Temporal_ information. The model enables the fusion of learned time series-related information with the low-level representations of mono-date learned information. The U-T&T model combines two commonly used architectures:
* **U-Net (spatial/texture branch)**: to handle the aerial imagery patches, a U-Net architecture [8] is adopted. The encoder uses a ResNet34 backbone [9] pre-trained on the ImageNet dataset [10]. The U-Net branch has \(\approx\) 24.4 M parameters. It closely resembles the model described in the FLAIR #1 datapaper [5], ensuring consistency and comparability with prior work.
* **U-TAE (spatio-temporal branch)**: a U-TAE [11] architecture focuses on extracting and incorporating both spatial and temporal information from the Sentinel-2 time series data. This architecture is based on U-Net but incorporates a Temporal self-Attention Encoder (TAE) component taking as input the lowest resolution features of the convolutional encoder to generate a set of attention masks that capture the temporal dependencies within the time series data. These attention masks are then applied at all resolutions during the decoding process, enabling the model to capture spatio-temporal patterns in the data.
Fig. 5: Class distribution of the train dataset (_top_) and test dataset (_bottom_).
| **Class** | **MSK** | **Pixels** | **%** |
| :-- | :--: | --: | --: |
| building | 1 | 1,453,245,093 | 7.13 |
| pervious surface | 2 | 1,495,168,513 | 7.33 |
| impervious surface | 3 | 2,467,133,374 | 12.1 |
| bare soil | 4 | 629,187,886 | 3.09 |
| water | 5 | 922,004,548 | 4.52 |
| coniferous | 6 | 873,397,479 | 4.28 |
| deciduous | 7 | 3,531,567,944 | 17.32 |
| brushwood | 8 | 1,284,640,813 | 6.3 |
| vineyard | 9 | 612,965,642 | 3.01 |
| herbaceous vegetation | 10 | 3,717,682,095 | 18.24 |
| agricultural land | 11 | 2,541,274,397 | 12.47 |
| plowed land | 12 | 703,518,642 | 3.45 |
| other | > 13 | 153,055,302 | 0.75 |

TABLE III: Semantic classes of the main nomenclature of the FLAIR #2 dataset and their corresponding MSK values, frequency in pixels and percentage among the entire dataset.
Figure 6 provides an overview of the proposed method, which combines the U-TAE and U-Net architectures. The main idea behind this approach is to incorporate features learned by the U-TAE branch, which considers the temporal dimension and a wider spatial context, into the U-Net branch, which focuses on aerial imagery. However, a key constraint is the significant difference in spatial resolution between the satellite and aerial data. With the satellite imagery having a spatial resolution 50 times lower than the aerial imagery (10 m versus 0.2 m), early and late fusion strategies (_i.e._, fusion at input or prediction levels) are not viable due to the large size disparity. To address this, a _Fusion Module_ is introduced, depicted in Figure 7, which enables mid-stage fusion of features from both branches:
* **Fusion Module**: the fusion module takes as input the U-TAE embedding (last feature maps of the U-TAE decoder, shown in blue in Figure 6) and is applied to each stage of the U-Net branch. Within the _Fusion Module_, two sub-modules have different purposes and focus on distinct aspects:
* _Cropped_: this sub-module aims at incorporating information from the U-TAE super-patch embedding into the spatial extent of the aerial patches. The U-TAE embedding is first cropped to match the extent of the aerial patch. This cropped embedding is then fed to a single convolution layer, which produces a new channel dimension that aligns with the channel size of the U-Net encoder feature maps. The output of this convolutional layer is then passed through an interpolation layer that uses bilinear resampling, ensuring that the spatial dimensions match those of the U-Net feature maps.
* _Collapsed_: this sub-module is designed to preserve spatial information from the extended super-patch, which will be integrated into the U-Net feature maps. Initially, the spatial dimensions of the U-TAE embedding are collapsed into a single value per channel, typically by taking the mean. The resulting vector is then fed into a shallow Multi-Layer Perceptron (MLP) consisting of three linear layers with dropout regularization and Rectified Linear Unit (ReLU) activation. The output size of the MLP is adjusted to match the channel size of the U-Net encoder feature maps. Subsequently, each value in the obtained vector is duplicated across the spatial dimensions of the corresponding U-Net encoder feature maps.

Fig. 6: _Texture and Time_ extraction network including two branches: i) a U-TAE network applied to the Sentinel-2 super-patch time series and ii) a U-Net network applied to the mono-date aerial imagery patch. The features yielded by the last decoder layer of the U-TAE branch are used as embeddings added to the features of the U-Net branch, integrating temporal information from the time series and spatial information from the extended super-patch. The light-blue fusion type modules are enabled or not, varying according to the fusion method.

Fig. 7: Fusion module taking as input the last U-TAE embeddings. This module is applied to each stage of the U-Net encoder feature maps. _out_ corresponds to the channel size of the U-Net encoder feature map and \(H\) and \(W\) to the corresponding spatial dimensions.
Both the _cropped_ and _collapsed_ sub-modules produce a mask of size _out\(\times H\times W\)_, where _out_, \(H\), and \(W\) correspond to the targeted feature map dimensions of the U-Net model. These masks, generated separately, are initially added together to integrate spatio-temporal information from the Sentinel-2 satellite time series. The resulting combined mask is added to the feature maps of the U-Net model. This integration step allows the spatio-temporal information captured by the _cropped_ and _collapsed_ sub-modules from the Sentinel-2 satellite time series to be incorporated into the U-Net's feature representation.
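For illustration, a minimal PyTorch sketch of this fusion step is given below; channel sizes, the MLP width and the cropping interface are placeholders chosen for readability and do not reproduce the exact released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionModule(nn.Module):
    """Turns the U-TAE embedding into an additive mask for one U-Net encoder stage."""

    def __init__(self, utae_channels: int, unet_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(utae_channels, unet_channels, kernel_size=1)   # cropped branch
        self.mlp = nn.Sequential(                                             # collapsed branch
            nn.Linear(utae_channels, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(64, unet_channels),
        )

    def forward(self, utae_emb, unet_feat, crop_box):
        b, _, h, w = unet_feat.shape
        r0, r1, c0, c1 = crop_box                                     # aerial-patch footprint
        cropped = self.conv(utae_emb[:, :, r0:r1, c0:c1])             # align channel dimension
        cropped = F.interpolate(cropped, size=(h, w), mode="bilinear", align_corners=False)
        collapsed = self.mlp(utae_emb.mean(dim=(2, 3)))               # (B, unet_channels)
        collapsed = collapsed[:, :, None, None].expand(b, -1, h, w)   # duplicate spatially
        return unet_feat + cropped + collapsed                        # additive fusion

fuse = FusionModule(utae_channels=32, unet_channels=64)
out = fuse(torch.randn(2, 32, 40, 40), torch.randn(2, 64, 128, 128), (16, 24, 16, 24))
```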
**Network supervision**: a single \(\mathcal{L}_{T\&T}\) loss is used to monitor the training, which is the sum of two auxiliary losses \(\mathcal{L}_{sat}\) and \(\mathcal{L}_{aerial}\), obtained respectively from the U-TAE and U-Net branches. The two branches use a categorical Cross Entropy (CE) cost-function, suitable for multi-class supervised classification tasks:

\[\mathcal{L}_{CE}=-\sum_{i=1}^{n}t_{i}\log(p_{i})\,,\]

\[\mathcal{L}_{T\&T}=\mathcal{L}_{CE\ aerial}+\mathcal{L}_{CE\ sat}\]
where \(t_{i}\) is the MSK label and \(p_{i}\) the Softmax probability of the \(i^{th}\) class.
The MSK files in the FLAIR #2 dataset are provided at a spatial resolution of 0.2 m. The output of the U-TAE branch corresponds to a super-patch, which lacks annotations for most of its parts. To address this, the U-TAE outputs are initially cropped to match the extent of the corresponding aerial patch. Subsequently, they are interpolated to fit the spatial dimensions of the MSK files (512\(\times\)512 pixels). This interpolation ensures compatibility before calculating the \(\mathcal{L}_{sat}\) loss.
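A possible sketch of this supervision in PyTorch is shown below; the class count, the crop coordinates and the use of `ignore_index` for the _other_ class are illustrative assumptions rather than the exact released code.

```python
import torch
import torch.nn.functional as F

def tt_loss(aerial_logits, utae_logits, msk, crop_box, ignore_index=12):
    """L_T&T = L_CE(aerial) + L_CE(sat), with the U-TAE output cropped then upsampled.

    aerial_logits: (B, 13, 512, 512) U-Net logits on the aerial patch.
    utae_logits:   (B, 13, h, w) U-TAE logits on the super-patch.
    msk:           (B, 512, 512) class indices (0-based, 'other' mapped to ignore_index).
    """
    r0, r1, c0, c1 = crop_box                                  # aerial footprint in the super-patch
    sat = utae_logits[:, :, r0:r1, c0:c1]
    sat = F.interpolate(sat, size=msk.shape[-2:], mode="bilinear", align_corners=False)
    loss_aerial = F.cross_entropy(aerial_logits, msk, ignore_index=ignore_index)
    loss_sat = F.cross_entropy(sat, msk, ignore_index=ignore_index)
    return loss_aerial + loss_sat
```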
**Benchmark metric**
The evaluation methodology for the semantic segmentation task follows the approach used in the FLAIR #1 challenge [5]. Initially, confusion matrices are calculated per patch, and then aggregated across the test dataset to create a single confusion matrix. To assess the performance of each semantic class, the Intersection over Union (IoU) metric, also known as the Jaccard Index, is computed. The IoU is calculated using the formula:
\[IoU=\frac{|U\cap V|}{|U\cup V|}=\frac{TP}{TP+FP+FN}\]
where \(U\) and \(V\) denote, respectively, the predicted and reference sets of pixels for a given class, TP the true positives, FP the false positives and FN the false negatives.
The mean Intersection over Union (**mIoU**) is then determined by taking the average of the per-class IoU values. However, since the _other_ class is not well-defined and is equivalent to void, it is excluded from the IoU calculations. Consequently, the mIoU is computed as the average of the IoUs from the remaining 12 classes.
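The metric computation can be sketched as follows with NumPy; the row/column orientation of the confusion matrix and the 0-based index of the excluded _other_ class are assumptions made for the example.

```python
import numpy as np

def miou(confusion: np.ndarray, excluded=(12,)) -> float:
    """Mean IoU from an aggregated confusion matrix (rows: reference, columns: prediction)."""
    tp = np.diag(confusion).astype(float)
    fp = confusion.sum(axis=0) - tp
    fn = confusion.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)          # per-class IoU = TP / (TP + FP + FN)
    kept = [c for c in range(confusion.shape[0]) if c not in excluded]
    return float(iou[kept].mean())                  # average over the 12 retained classes
```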
**Benchmark framework and settings**
The baselines are calculated using the efficient _PyTorch Lightning_ framework [12]. For the implementation of the U-Net model, the _segmentation-models-pytorch_ library [13] is exploited, while the U-TAE network is obtained from [11]. The U-TAE parameters are kept at their default values (as provided in the GitHub implementation), except for the encoder and decoder widths.
For the training process, the train dataset consists of 40 domains, out of which 32 are used for training the model, while the remaining 8 domains are used for validation. The optimization technique employed is stochastic gradient descent (SGD) with a learning rate of 0.001. A learning-rate reduction strategy is implemented with a patience value of 10, allowing for adaptive adjustments to the learning rate during training. The maximum number of epochs is set to 100, but to prevent overfitting and save computational resources, an early stopping method is utilized with a patience of 30 epochs. A batch size of 10 is used for the baselines.
To ensure reproducibility and consistent results, all randomness is controlled by fixing the seed using the _seed_everything_ function from the PyTorch Lightning library, with the seed value set to 2022. Twelve NVIDIA Tesla V100 GPUs with 32 GB memory each, located on a High-Performance Computing (HPC) cluster, are used to speed up experiments. The distributed data parallel (ddp) strategy is employed to leverage these computational resources efficiently, allowing for parallel training across multiple GPUs.
In the context of the U-TAE and U-Net models, both of which utilize CE loss, per class weighting is employed. When assigning weights to the classes, the _other_ class is explicitly set to 0, indicating that it does not contribute to the loss calculation. The remaining classes are assigned a weight of 1. However, in the case of the U-TAE model, the _plowed land_ class is also assigned a weight of 0 for the U-TAE CE loss. This decision is made because the _plowed land_ class is specifically designed for mono-temporal data. The inclusion of time series data introduces ambiguity with agricultural land, and therefore, setting the weight of the _plowed land_ class to 0 helps to mitigate this confusion.
In addition to these general hyperparameters, there are several other parameters and strategies that have been or could be explored further:
* the **size of super-patches** refers to the dimensions, in terms of pixels, of the patches that are cropped from the super-areas. Different sizes can be tested, allowing for experimentation with smaller or larger super-patch sizes. However, it is important to note that there is a limit of 110 pixels for edge patches. The choice of super-patch size has an impact on the spatial context provided to both the U-TAE and U-Net branches through the _collapsed_ fusion sub-module. _Baselines:_ the number 40 has been empirically determined and set as the baseline for this specific parameter.
* with the exception of the _other_ and _plowed land_ classes, no specific distinction or weighting has been applied during training between the classes and the network branches. However, it is possible to introduce **per-class weights** for both the \(\mathcal{L}_{sat}\) and \(\mathcal{L}_{aerial}\) losses. These weights can be determined based on expert knowledge to encourage specialization of one branch or the other on certain classes. Another approach is to apply weights during the summation of both losses to obtain \(\mathcal{L}_{T\&T}\). _Baselines:_ the _other_ class is assigned a weight of 0 for both branches, and the _plowed land_ class is assigned a weight of 0 for the U-TAE branch. The remaining classes are assigned a weight of 1. Additionally, no weights are applied during the summation of the \(\mathcal{L}_{sat}\) and \(\mathcal{L}_{aerial}\) losses.
* to prevent overfitting of the U-TAE branch and enhance the learned aerial features, we incorporate a **modality dropout mechanism**. This involves generating a random single value for each batch. If the generated value exceeds a specified threshold, provided as an input parameter, the U-TAE modality is dropped out, and only the U-Net branch is used for that particular batch. _Baselines:_ considering the coarse spatial resolution of Sentinel-2 data, we set the modality dropout threshold relatively high, at a value of 0.5. This ensures that a significant portion of the batches will exclusively utilize the U-Net branch, thereby emphasizing the importance of the aerial imagery.
* to address the potential impact of cloud or snow in the Sentinel-2 time series, two strategies are implemented using the provided mask files. The first strategy, called **filter clouds**, involves examining the probability of cloud occurrence in the masks. If the number of pixels above a certain probability threshold exceeds a specified percentage of all pixels in the image, that particular date is excluded from the training process. This helps to mitigate the influence of cloudy or snowy images on the training data. The second strategy, known as **monthly average**, is specifically implemented to alleviate potential challenges faced by the U-TAE branch due to a large number of dates in the time series. In this strategy, a monthly average is computed using cloudless dates. If no cloudless dates are available for a specific month, fewer than 12 images may be used as input to the U-TAE branch. Both strategies are sketched in the code example following this list. _Baselines:_ a probability threshold of 0.5 is employed for filtering clouds or snow in the masks. Additionally, to be considered for exclusion, the clouds or snow must cover at least 60% of the super-patch.
* similar to the FLAIR #1 approach, **metadata associated with each aerial patch** are integrated into the model. These metadata are encoded using positional encoding or one-hot encoding techniques (see [5]). The encoded metadata are then passed through a MLP before being added to each U-Net encoder feature map. _Baselines:_ a positional encoding of size 32 is used specifically for encoding the geographical location information.
* **data augmentation techniques** usually prevent overfitting and help generalization capabilities of a network. Simple geometric transformations are applied during the training process. These transformations include vertical and horizontal flips as well as random rotations of 0, 90, 180, and 270 degrees. This approach aligns with the methodology used in the FLAIR #1 challenge. _Baselines:_ a data augmentation probability of 0.5 is used.
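A hedged NumPy sketch of the date filtering and monthly averaging strategies follows; array layouts match the dataset description above, acquisition dates are assumed to be available as datetime64 values parsed from the products file, and the 0.5 probability threshold is expressed as 50 because the masks range from 0 to 100.

```python
import numpy as np

def filter_cloudy_dates(data, masks, dates, prob=50, max_cover=0.6):
    """Drop acquisitions whose cloud/snow masks flag more than max_cover of the super-area.

    data:  (T, 10, H, W) reflectances; masks: (T, 2, H, W) probabilities in [0, 100].
    """
    cover = (masks >= prob).any(axis=1).mean(axis=(1, 2))   # flagged fraction per date
    keep = cover <= max_cover
    return data[keep], dates[keep]

def monthly_average(data, dates):
    """Average the remaining acquisitions per calendar month (possibly fewer than 12)."""
    months = dates.astype("datetime64[M]")
    return np.stack([data[months == m].mean(axis=0) for m in np.unique(months)])
```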
## Benchmark results
Firstly, an evaluation is conducted on a U-Net model that incorporates only aerial imagery, resembling the approach used in the FLAIR #1 challenge. The evaluation involves assessing the model's performance using the code provided in the GitHub repository (accessible at [14]). Following this, the results obtained from applying the two-branch U-T&T model are reported. Additionally, various parameters and strategies mentioned earlier are tested.
The models used in the evaluation were trained using a consistent train/validation/test split and the parameters previously specified. The training dataset consisted of 61,712 aerial imagery patches, and for the U-T&T approach, an additional 41,029 (unfiltered) Sentinel-2 acquisitions are included. During the inference phase, the models were applied to 16,050 patches of aerial imagery and 10,215 (unfiltered) satellite acquisitions from the test dataset. The reported results represent the average mIoU scores obtained from five separate runs of each model configuration. Additionally, the standard deviation of the mIoU scores across the five runs is provided, indicating the degree of variability in the performance of the models.
The results obtained from the different experiments are presented in Table IV. When using only aerial imagery and a U-Net model, the highest mIoU score of 0.5517 is achieved by integrating aerial metadata and employing data augmentation techniques. In the case of jointly utilizing aerial and satellite imagery with the U-T&T model, the baseline model yields a slightly better mIoU score compared to the aerial-only baseline (0.5490 versus 0.5467), but it also exhibits a higher standard deviation in the results.
Table IV also includes the results obtained when implementing additional strategies individually, as described in the Benchmark framework and settings section. It is observed that using modality dropout leads to a decrease in the mIoU score. Integrating aerial metadata into the U-Net branch only marginally improves the results. However, for the remaining three strategies, namely filtering the dates using cloud and snow masks, performing a monthly average of Sentinel-2 acquisitions, and applying data augmentation, the mIoU scores improve. By combining these three strategies, a mIoU score of 0.5623 is achieved, corresponding to a 2.85% increase compared to the U-Net baseline.
The per-class IoU scores for three models are provided in Table V. The three models considered are the U-Net baseline, the U-T&T baseline, and the U-T&T model with dates filtering of Sentinel-2, monthly average, and data augmentation. These models were selected based on achieving the highest mIoU scores among the five runs. Among the 12 classes, the U-Net baseline outperforms the other models by having a higher IoU score only for the _plowed land_ class, with a marginal improvement of 0.02 points compared to the U-T&T best model. On the other hand, the U-T&T baseline model performs better in predicting the _water_ and _brushwood_ classes, but the differences in IoU scores are quite close to the other models. For the remaining nine classes, the U-T&T best model surpasses the other models, exhibiting notable improvements in classes such as _buildings_, _impervious surfaces_, _bare soil_, _coniferous_, and _vineyards_. These improvements highlight the effectiveness of the U-T&T model with the integrated strategies of dates filtering, monthly average, and data augmentation.
| | **INPUT** | **FILT.** | **AVG M.** | **MDR** | **MTD** | **AUG** | **PARA.** | **EP.** | **mIoU** |
| :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| **U-Net** | aerial | - | - | - | ✗ | ✗ | 24.4 | 62 | 0.5467 ± 0.0009 |
| +_MTD_ | aerial | - | - | - | ✓ | ✗ | 24.4 | 59 | 0.5473 ± 0.0017 |
| +_MTD +AUG_ | aerial | - | - | - | ✓ | ✓ | 24.4 | 52 | 0.5517 ± 0.0013 |
| **U-T&T** | aerial+sat | ✗ | ✗ | ✗ | ✗ | ✗ | 27.3 | 9 | 0.5490 ± 0.0072 |
| +_FILT_ | aerial+sat | ✓ | ✗ | ✗ | ✗ | ✗ | 27.3 | 11 | 0.5517 ± 0.0135 |
| +_AVG M_ | aerial+sat | ✗ | ✓ | ✗ | ✗ | ✗ | 27.3 | 10 | 0.5504 ± 0.0067 |
| +_MDR_ | aerial+sat | ✗ | ✗ | ✓ | ✗ | ✗ | 27.3 | 27 | 0.5354 ± 0.0104 |
| +_MTD_ | aerial+sat | ✗ | ✗ | ✗ | ✓ | ✗ | 27.3 | 7 | 0.5494 ± 0.0064 |
| +_AUG_ | aerial+sat | ✗ | ✗ | ✗ | ✗ | ✓ | 27.3 | 22 | 0.5554 ± 0.0146 |
| +_FILT +AVG M +MDR +MTD +AUG_ | aerial+sat | ✓ | ✓ | ✓ | ✓ | ✓ | 27.3 | 36 | 0.5523 ± 0.0016 |

TABLE IV: Baseline results of ResNet34/U-Net architecture with aerial imagery only and U-T&T with aerial and satellite imagery on the FLAIR #2 test set. Results are averages of 5 runs of each configuration. **FILT**: filter Sentinel-2 acquisitions with masks (clouds & snow); **AVG M**: monthly average of all Sentinel-2 acquisitions; **MDR**: modality dropout of the U-TAE branch; **MTD**: metadata for aerial imagery added; **AUG**: geometric data augmentation for aerial imagery; **PARA.**: number of parameters of the network; **EP.**: best validation loss epoch.

Figure 8 illustrates the confusion matrix of the best U-T&T model. This confusion matrix is derived by combining all individual confusion matrices per patch and is normalized by rows. The analysis of the confusion matrix shows that the best U-T&T model achieves accurate predictions with minimal confusion in the majority of classes. However, when it comes to natural areas such as _bare soil_ and _brushwood_, although there is improvement due to the use of Sentinel-2 time series data, a certain level of uncertainty remains. These classes exhibit some confusion with semantically similar classes, indicating the challenge of accurately distinguishing them.
Figure 9 showcases an example that illustrates the results of both the U-Net baseline and U-T&T baseline models in relation to the aerial imagery and the corresponding annotations.
The experiments conducted in this study were performed using HPC/AI resources provided by GENCI-IDRIS (Grant 2022-A0131013803). This work was supported by the European Union through the project "Copernicus / FPCUP," as well as by the French Space Agency (CNES) and Connect by CNES. The authors would like to acknowledge the valuable support and resources provided by these organizations.
## Data access
The dataset and code used in this study will be made available after the completion of the FLAIR #2 challenge at the following website: [https://ignf.github.io/FLAIR/](https://ignf.github.io/FLAIR/).
|
2308.05596 | You Only Prompt Once: On the Capabilities of Prompt Learning on Large
Language Models to Tackle Toxic Content | The spread of toxic content online is an important problem that has adverse
effects on user experience online and in our society at large. Motivated by the
importance and impact of the problem, research focuses on developing solutions
to detect toxic content, usually leveraging machine learning (ML) models
trained on human-annotated datasets. While these efforts are important, these
models usually do not generalize well and they can not cope with new trends
(e.g., the emergence of new toxic terms). Currently, we are witnessing a shift
in the approach to tackling societal issues online, particularly leveraging
large language models (LLMs) like GPT-3 or T5 that are trained on vast corpora
and have strong generalizability. In this work, we investigate how we can use
LLMs and prompt learning to tackle the problem of toxic content, particularly
focusing on three tasks; 1) Toxicity Classification, 2) Toxic Span Detection,
and 3) Detoxification. We perform an extensive evaluation over five model
architectures and eight datasets demonstrating that LLMs with prompt learning
can achieve similar or even better performance compared to models trained on
these specific tasks. We find that prompt learning achieves around 10\%
improvement in the toxicity classification task compared to the baselines,
while for the toxic span detection task we find better performance to the best
baseline (0.643 vs. 0.640 in terms of $F_1$-score). Finally, for the
detoxification task, we find that prompt learning can successfully reduce the
average toxicity score (from 0.775 to 0.213) while preserving semantic meaning. | Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang | 2023-08-10T14:14:13Z | http://arxiv.org/abs/2308.05596v1 | You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content
###### Abstract
The spread of toxic content online is an important problem that has adverse effects on user experience online and in our society at large. Motivated by the importance and impact of the problem, research focuses on developing solutions to detect toxic content, usually leveraging machine learning (ML) models trained on human-annotated datasets. While these efforts are important, these models usually do not generalize well and they can not cope with new trends (e.g., the emergence of new toxic terms). Currently, we are witnessing a shift in the approach to tackling societal issues online, particularly leveraging large language models (LLMs) like GPT-3 or T5 that are trained on vast corpora and have strong generalizability. In this work, we investigate how we can use LLMs and prompt learning to tackle the problem of toxic content, particularly focusing on three tasks; 1) Toxicity Classification, 2) Toxic Span Detection, and 3) Detoxification. We perform an extensive evaluation over five model architectures and eight datasets demonstrating that LLMs with prompt learning can achieve similar or even better performance compared to models trained on these specific tasks. We find that prompt learning achieves around 10% improvement in the toxicity classification task compared to the baselines, while for the toxic span detection task we find better performance to the best baseline (0.643 vs. 0.640 in terms of \(F_{1}\)-score). Finally, for the detoxification task, we find that prompt learning can successfully reduce the average toxicity score (from 0.775 to 0.213) while preserving semantic meaning.1
Footnote 1: Our code is available at [https://github.com/xinlieibe/toxic-prompt](https://github.com/xinlieibe/toxic-prompt).
**Disclaimer. This paper contains uncensored toxic content that might be offensive or disturbing to the readers.**
## 1 Introduction
In online platforms, toxic content can be defined as rude, disrespectful, or unreasonable content that may result in users leaving the conversation [6]. It has been a long-standing problem affecting our society [53, 10, 14, 5]. To tackle this problem, researchers and companies leverage large-scale labeled datasets to train powerful machine learning (ML) models for toxicity detection and mitigation [61, 63, 4, 10, 36, 66].
One major obstacle in the development of accurate and generalizable toxic content classifiers is the lack of a comprehensive labeled dataset that contains different types of toxic content. This is mainly because the data collection and labeling process for the creation of such datasets is costly, which hinders the development of effective methods for detecting toxic content. Also, previous work [5, 61] has shown that the toxicity detection model trained on one dataset is less effective when applied to other datasets. Moreover, due to the fast evolution of language (new phrases, words, style, etc.), it is crucial to develop a toxicity detection mechanism that can quickly adapt to different circumstances.
With the success of pre-trained language models (LMs), a dominant way to adapt the model to downstream tasks is fine-tuning, where the whole model or part of the model is optimized to better fit the downstream tasks. Recently, large language models (LLMs) like GPT-3 [7] and T5 [44] have shown promising performance in downstream tasks without updating the model's parameters at all, by directly querying the model using natural language, an emerging paradigm called _prompt learning_. With the help of prompt learning, the LLM can generate an output that aims to solve a specific task, all with a natural language task instruction (e.g., using a prompt: "Translate it from English to French" for machine translation) and a few samples as the task input. Besides the handcrafted fixed prompts, recent work [28, 30] shows that prompt tuning is an efficient way to achieve more promising performance on various tasks with restricted computational resources, limited datasets, and bounded time. Concretely, instead of fine-tuning the LLM, prompt tuning freezes the LLM and only optimizes the prompt (e.g., the way that the prompt is written) in such a way that the LLM's performance is optimized for the specific task at hand. Given that prompt learning is a promising way to use LLMs for various tasks, here we aim to use prompt learning to tackle the problem of toxic content and assess how prompt learning-based approaches compare to state-of-the-art methods of tackling toxic content.
**Our Work.** In this work, we conduct the first systematic analysis focusing on how prompt learning can help tackle the problem of toxic content. Concretely, we focus on three tasks, i.e., toxicity classification, toxic span detection, and detoxification (see Table 1 for examples of these tasks). Specifically, for the first task (toxicity classification), given a sentence, we first map its label into the word "Yes" or "No" and fine-tune the prompt to better guide the LLM to conduct the task. For the second task (toxic span detection), with prompt tuning, given a sentence with toxic spans, we aim to first generate the sentence without the toxic spans, then subtract the generated sentence from the original sentence to obtain the spans. Finally, for the third task (detoxification), we tune the prompt to rephrase the toxic sentence into a non-toxic version while preserving the semantic meaning.
Extensive evaluation of eight datasets and five model architectures shows that prompt tuning has comparable or even better performance than the baselines. For instance, for the toxicity classification task, prompt tuning gains more than 10% \(F_{1}\)-score improvement on average (see Table 3). For the toxic span detection task, our method achieves 0.643 \(F_{1}\)-score, which is better than the best result provided by SPAN-BERT (0.640), but with much less training time. Regarding the detoxification task, we find that our method can successfully detoxify the text (e.g., the average toxicity score drops from 0.775 to 0.213 on ParaDetox) while preserving the semantic information to a large extent. In general, one major advantage of prompt tuning is that it can adapt to different tasks with fewer training samples/steps. For online services such as social media, these improvements and cost reductions are significant (given billions of posts per day). This also fits the purpose of green AI [3, 49] for making AI research more environmentally friendly and inclusive.
In summary, we make the following contributions:
* To the best of our knowledge, we perform the first systematic evaluation using prompt tuning to tackle the problem of toxic content.
* We leverage prompt tuning to solve the three most representative tasks in this domain, i.e., toxicity classification, toxic span detection, and detoxification.
* Extensive evaluations show that our prompt tuning methods can achieve comparable or even better performance than the SOTA methods. Also, we observe that prompt tuning has promising performance on fast adaptation to different tasks, i.e., with fewer training samples/epochs.
**Implications.** Our work has important implications for various stakeholders involved in understanding and mitigating online abuse, hate, and harassment. First, we make our code and annotated dataset available, enabling social media operators to implement solutions to detect and moderate toxic content. Our approach is superior to previous efforts when considering the annotated data requirements, the performance, the time cost, and the robustness/transferability of the proposed solution. Additionally, our work can be used to build explainable toxic detection/moderation tools, given our method's outstanding performance on the toxic span detection and detoxification tasks. Third, we argue that our work can assist and motivate the research community in leveraging the prompt tuning approach for solving other emerging socio-technical issues, such as the spread of misinformation online. Overall, our work is an important step towards understanding the power and generalizability of LLM in solving hard tasks (e.g., online toxicity), which is an important and timely issue, given the extensive popularity of LLM and chatbots powered by LLM (e.g., ChatGPT).
**Ethical Considerations.** We emphasize that in this work we work exclusively with publicly available datasets focusing on toxicity classification, toxic span detection, and detoxification tasks. Also, we use publicly available large language models to assess their performance on these tasks and how our work compares to previous efforts. We acknowledge that since we model all three tasks as generation tasks, the model may generate toxic content, however, we took the following steps to minimize harm: 1) we do not share the generated content with people or online users; and 2) all annotations required for our work were done by the authors of this study. Finally, in this work, we show that using prompt-tuning, large language models can detoxify content with acceptable performance. At the same time, however, adversaries might use large language models and prompt tuning to do the opposite task (i.e., toxifying content). We believe that this potential abuse is outside of the scope of this work. Yet, it highlights the need for the implementation and use of appropriate safeguards (e.g., similar to Stable Diffusion's Safety Filter2), to ensure that large language models and prompt tuning can not be used for malicious purposes (e.g., generation and dissemination of toxic content).
Footnote 2: [https://stability.ai/blog/stable-diffusion-public-release](https://stability.ai/blog/stable-diffusion-public-release).
## 2 Preliminary
**Prompt Learning.** With the advance of pre-trained LLMs such as GPT-2/3, the previous "pre-train, fine-tune" procedure is replaced by the "pre-train, prompt, and predict" paradigm [31]. Concretely, given a downstream task, fine-tuning requires the training objective to be specified beforehand and the model needs to be updated. In contrast, prompt learning [7] uses a _prompt_ that contains the task-specific description and text examples in a natural language way as the input to the model. In this way, the downstream task can be formulated as a [MASK] language modeling problem (i.e., predict masked text pieces based on the context) and does not need to update the parameters in the underlying model. Prompt learning is especially suitable for few-shot downstream tasks when limited training examples are available and fine-tuning the pre-trained model is costly. In general, prompt learning can be broadly grouped into two categories - manual prompt and learnable prompt (soft prompt).

Table 1: Examples of the tasks considered in this work.

| **Toxicity Classification** | **Answer** |
| :-- | :-- |
| your reading comprehension is more fucked up than a football bat. | Toxic |
| **Toxic Span Detection** | **Answer** |
| keep hiring imbeciles like this jerk and you will end up with a no firearms for rent-a-cops bill next session. | keep hiring **imbeciles** like this **jerk** and you will end up with a no firearms for rent-a-cops bill next session. |
**Manual Prompt.** The natural way to create prompts is to manually design intuitive textual templates based on human/domain knowledge [7]. For example, if the task is to classify the sentiment of a movie review "Absolutely terrible writing and dragged-out unnecessary dialogue", we can append a prompt "The review is" to the content and get "Absolutely terrible writing and dragged-out unnecessary dialogue. The review is [MASK]". We expect the language model to _generate_ "horrible" rather than "great" to replace [MASK]. Manual prompts have been proven to solve various tasks with decent accuracy [31]. However, handcrafted prompts need to be customized based on the downstream tasks, inevitably introducing artificial bias and leading to sub-optimal results.
**Learnable Prompt.** In contrast to manual prompts, learnable prompt methods automatically learn the prompt from a larger search space of candidate prompts to better fit the downstream tasks. Prefix tuning [30] is one of the most promising techniques for prompt tuning. Concretely, it adds a prefix (i.e., a sequence of continuous task-specific vectors) before the input, which can be considered as a set of "virtual tokens". Given the downstream task, the prefix will be optimized while the parameters \(\theta\) of the LM are frozen. This is extremely efficient compared to fine-tuning the whole model since, for different downstream tasks, only different prefixes instead of different models need to be updated. Formally, the prefix matrix \(M_{\phi}\) parameterized by \(\phi\) can be updated via the following log-likelihood objective:
\[\max_{\phi}\log P(\mathbf{y}|\mathbf{x};\theta;\phi)=\max_{\phi}\sum_{i}\log P(y_{i}|h_{<i};\theta;\phi) \tag{1}\]
where \(h_{<i}=[h_{<i}^{(1)},\cdots;h_{<i}^{(n)}]\) is a function of the trainable parameters at time step \(i\). It is directly copied from \(M_{\phi}\) if the time step is within the prefix (\(h_{i}\) is \(M_{\phi}[i]\)), otherwise it is computed with the LM. Similarly, Lester et al. [28] propose a more efficient method that adds several tunable tokens as the prefix and optimizes the embeddings of those tunable tokens directly. It has fewer tunable parameters as it does not involve additional tunable parameters in each network layer. Note that the learnable prompt (prefix matrix) is the embedding of a set of "virtual words" which can be optimized. The embeddings have mathematical meanings but cannot be mapped into real words.
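To make the idea concrete, the following PyTorch/Transformers sketch prepends a small matrix of trainable "virtual token" embeddings to the input of a frozen T5 model and optimizes only that matrix; the model name, prompt length and initialization are illustrative choices, not those used in this paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
tokenizer = AutoTokenizer.from_pretrained("t5-base")

for p in model.parameters():          # freeze the language model
    p.requires_grad = False

n_virtual = 20
emb_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(0.02 * torch.randn(n_virtual, emb_dim))

def forward_with_prompt(input_ids, labels):
    tok_emb = model.get_input_embeddings()(input_ids)             # (B, L, D)
    prefix = soft_prompt.unsqueeze(0).expand(tok_emb.size(0), -1, -1)
    inputs_embeds = torch.cat([prefix, tok_emb], dim=1)           # prepend virtual tokens
    return model(inputs_embeds=inputs_embeds, labels=labels)

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)             # only the prompt is updated
```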
## 3 Tasks
In this work, we consider three tasks that are related to toxicity: 1) toxicity classification (detect whether the text is toxic), 2) toxic span detection (detect which parts of the text are toxic), and 3) detoxification (eliminate toxicity in the text while preserving its semantics). The three tasks handle toxicity at different levels: toxicity classification only detects whether the whole text is toxic or not; toxic span detection aims to detect the exact character offsets of the spans that make the text toxic; and detoxification's goal is to eliminate the toxic content from the text while preserving its semantic meaning.
### Task1: Toxicity Classification
**Goal.** We frame this task as a binary classification task, where the input is a piece of text and the output is _whether the given text is toxic or not_. An example of toxicity classification is shown in Table 1.
**Existing Methods.** Existing toxicity classification methods usually leverage a labeled dataset (a text is annotated as toxic or not) to train classifiers or fine-tune an LM. Early efforts widely use feature engineering (e.g., dictionaries, bag-of-words, etc.) to extract features from text and detect toxic language or phrases [12]. With the advance of deep neural networks (DNNs), recent efforts have been focusing on training toxicity classification models based on recurrent neural networks (RNNs) [38], convolutional neural networks (CNNs) [15], and transformers (e.g., BERT) [1]. The very latest trend of toxicity classification is using LLMs that are pre-trained on large unlabeled corpora and then fine-tuning them to tailor them for the toxicity classification task [64]. The drawback of these methods is that they require a large annotated corpus to train or fine-tune an LM and their detection effectiveness is limited by either the size of the labeled dataset or the time to fine-tune the pre-trained LMs.
**Our Method.** Given the language model parameterized by \(\theta\), a set of texts \(\{\mathbf{x}|\mathbf{x}\in X\}\) and the corresponding labels \(\{\mathbf{y}|\mathbf{y}\in Y\}\), we aim to learn the prefix matrix \(M_{\phi}\) so that the prompt consisting of \(M_{\phi}\) (parameterized by \(\phi\)) and \(\mathbf{x}\) can successfully retrieve the label \(\mathbf{y}\) from the language model \(\theta\). Our optimization goal is summarized in Equation 2.
\[\phi^{*} = \underset{\phi}{\text{arg min}}\quad\mathcal{L}(f(X,\phi,\theta),Y) \tag{2}\]
where \(\mathcal{L}\) is our loss function (e.g., binary cross-entropy loss) and \(f\) is our toxicity classification model. It is important to note that our model does not fine-tune the language model parameterized by \(\theta\).
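One common way to realize \(f\) and \(\mathcal{L}\) in Equation 2 is a verbalizer that scores label words at the answer position of the frozen LM; the snippet below sketches that loss computation. The verbalizer mechanism and the placeholder token ids are our assumptions, not necessarily the exact setup used here.

```python
import torch
import torch.nn.functional as F

# Suppose the frozen LM, fed the text with the tuned prompt prepended, returns
# vocabulary logits at the answer position (toy random numbers stand in here).
batch, vocab_size = 4, 1000
answer_logits = torch.randn(batch, vocab_size)

# Verbalizer: map the two classes to label-word ids (placeholder ids; a real run
# would look up e.g. "toxic" / "non-toxic" in the tokenizer).
NONTOXIC_ID, TOXIC_ID = 842, 137
class_logits = answer_logits[:, [NONTOXIC_ID, TOXIC_ID]]     # shape (batch, 2)

labels = torch.tensor([1, 0, 1, 0])                          # 1 = toxic
loss = F.cross_entropy(class_logits, labels)                 # the L of Eq. (2)
print(float(loss))
```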
### Task2: Toxic Span Detection
**Goal.** The toxic span detection aims to identify the specific spans (i.e., the character offsets) that make the text toxic. For instance, in the example shown in Table 1, the toxic span detection task should return two spans - one for "imbeciles" (starting at 13 and ending at 21) and one for "jerk" (starting at 33 and ending at 36). It is another important task as it can assist users in better understanding how the toxicity is reflected in the text (e.g., the highlighted toxic span can assist annotators to support their decisions). Formally, given an
input text \(t\), our goal is to determine the exact toxic spans \(\{S^{t}\}\) in the text.
**Existing Methods.** Toxic span detection can be seen as a case of attribution or rationale extraction [39]. Most previous work [12, 18, 22] frames this task as a sequence labeling task. Concretely, given the labeled toxic span corpus, an LM can be trained to label each word as toxic or not. Once the model is trained, given a text, it outputs a toxicity prediction label for each word. Existing methods widely use transformers (e.g., BERT+CRF [12], SPAN-BERT [22]) or recurrent neural networks (e.g., BiLSTM [18]) to attain this goal. Some research has also experimented with custom losses [59] and data augmentation [55] to boost the performance of toxic span detection.
**Our Method.** Our method is fundamentally different from the existing methods. Instead of considering toxic span detection as a sequence labeling task, we treat it directly as a generation task. Concretely, the input of our model is the original text that contains the toxic content. We aim to leverage the prompt and the (frozen) LLM to generate text without the toxic span while keeping the rest the same as the input text. Note that, with the prompt, the LLM does not attempt to replace the toxic span in the generated text; rather, it generates a (usually incomplete) text that does not contain any toxic spans. Then, to detect the toxic span, we run a mapping algorithm to "subtract" the generated text from the input text and consider the remainder as the toxic spans (i.e., character-level offsets). Our optimization goal, given the input \(T=\{t\}\) and \(\tilde{T}=\{t\setminus\{S^{t}\}\}\), is summarized in Equation 3.
\[\phi^{*} = \underset{\phi}{\text{arg min}}\quad\mathcal{L}(\tilde{T},f(T, \phi,\theta)) \tag{3}\]
It learns \(M_{\phi}\) (parameterized by \(\phi\)) that nudges the large language model \(\theta\) to remove only the toxic spans \(\{S^{t}\}\) from \(T\).
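The paper does not spell out the mapping algorithm, so the following difflib-based alignment is only one plausible way to perform the "subtraction" and recover character-level offsets; the example strings are invented.

```python
import difflib

def toxic_spans(original: str, generated: str) -> list[int]:
    """Character offsets of `original` that do not survive into `generated`."""
    matcher = difflib.SequenceMatcher(a=original, b=generated, autojunk=False)
    offsets = []
    for tag, a0, a1, _, _ in matcher.get_opcodes():
        if tag in ("delete", "replace"):   # kept in the input, dropped by the LM
            offsets.extend(range(a0, a1))
    return offsets

original = "You are all imbeciles, and a jerk too"
generated = "You are all , and a too"
print(toxic_spans(original, generated))
```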
### Task3: Text Detoxification
**Goal.** Text detoxification, as its name suggests, aims to eliminate toxicity from text and generate a detoxified version of the text while preserving the semantic meaning. Different from the previous tasks that only focus on the detection of toxicity (e.g., toxicity classification and toxic span identification), text detoxification addresses the toxic content by proactively rewriting it. An example of toxicity detoxification is shown in Table 1. Formally, for this task, the input is a toxic text \(t\) and our goal is to generate a detoxified version of the text \(\hat{t}\).
**Existing Methods.** Text detoxification can be viewed as a style transfer task. That is, toxicity can be treated as the style of a text. The style transfer methods are applied to rewrite the text with similar semantic meaning without the toxicity style. In previous work [32, 37], both supervised and unsupervised methods are proposed to solve this task in a style transfer manner. Logacheva et al. [32] propose DetoxBART, which fine-tunes the Transformer-based generation model BART [29] on the ParaDetox dataset. Such fine-tuning process makes DetoxBART yield the best performance in terms of detoxification and semantic preservation. The other end-to-end approaches include DualRL [34], Deep Latent Sequence Model (DLSM) [17], Stable Style Transformer (SST) [27], Style Transfer as Paraphrase (STRAP) [24], Paraphrasing GeDi (ParaGeDi) [9], etc.
**Our Method.** The detoxification task is also a generation task. Given the paired dataset (i.e., the toxic text \(T\) and the paraphrased non-toxic counterpart \(\hat{T}\)), our goal is to learn the prompt \(M_{\phi}\) that can better transfer the input (toxic) text into the output (non-toxic) text while preserving the semantics. The optimization goal is similar to Equation 3; the only difference is that the label changes from \(\tilde{T}\) to \(\hat{T}\), where the former is the texts without toxic spans (incomplete texts) and the latter is the detoxified texts (complete texts).
## 4 Datasets and Models
### Datasets
In this paper, we consider eight datasets for the evaluation of the three tasks. Note that, in Task 1 (toxicity classification), for each dataset, we generate a balanced version of it by randomly choosing the same number of samples from the larger category to match the smaller category. We follow the train/test partition of a dataset if they have already been provided. Otherwise, we randomly sample 80% of a dataset as the training dataset and the rest 20% as the testing dataset. Table 2 reports some basic statistics about each dataset. We describe each dataset below.
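Before the per-dataset descriptions, the balancing and splitting step just described (applied only when a dataset does not ship with an official split) might look like the following sketch; the function name, seed, and toy samples are ours.

```python
import random

def balance_and_split(samples, seed=0, train_frac=0.8):
    """Downsample the majority class, shuffle, and split train/test.

    `samples` is a list of (text, label) pairs with label in {0, 1};
    the seed and fraction are illustrative defaults.
    """
    rng = random.Random(seed)
    pos = [s for s in samples if s[1] == 1]
    neg = [s for s in samples if s[1] == 0]
    n = min(len(pos), len(neg))
    balanced = rng.sample(pos, n) + rng.sample(neg, n)
    rng.shuffle(balanced)
    cut = int(train_frac * len(balanced))
    return balanced[:cut], balanced[cut:]

train, test = balance_and_split([("a", 1), ("b", 0), ("c", 0), ("d", 1), ("e", 0)])
```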
**HateXplain [35].** It is a benchmark dataset collected from Twitter and Gab for explainable hate speech detection. The dataset is annotated by Amazon Mechanical Turk (MTurk) workers with three labels: hate, offensive, or normal. For our work, we consider both hate and offensive posts as toxic and the rest as non-toxic.
**USElectionHate20 [16].** This dataset is collected from Twitter by selecting tweets that contain election hashtags or politicians' names. The authors manually label a subset of tweets with different stances as well as whether the tweet is hateful/offensive. We consider hateful/offensive tweets as toxic and the rest as non-toxic.
**HateCheck [45].** HateCheck contains a suite of functional tests for hate speech detection models. Each post is labeled by different annotators and we consider the majority votes as the final label of this post.
**SBIC [46].** The Social Bias Inference Corpus (SBIC) is collected from Reddit, Twitter, and fringe Web communities
| **Dataset** | **Task** | **# Train** | **# Test** |
| --- | --- | --- | --- |
| **HateXplain** [35] | 1 | 12,578 | 3,050 |
| **USElectionHate20** [16] \(*\) | 1 | 586 | 118 |
| **HateCheck** [45] | 1 | 1,998 | 484 |
| **SBIC** [46] \(*\) | 1 | 93,346 | 11,000 |
| **MHS** [23] | 1 | 22,700 | 5,762 |
| **ToxicSpan** [39] \(*\) | 2 | 7,888 | 1,991 |
| **Parallel** [11] | 3 | 886 | 222 |
| **ParaDetox** [32] | 3 | 9,551 | 2,388 |

Table 2: Overview of datasets. Note that \(*\) means the dataset provides the train/test partition.
such as Gab, Stormfront, and banned subreddits. The dataset is labeled by MTurk workers. We leverage the v2 version of it for our study and we consider posts labeled offensive as toxic posts and the rest as non-toxic posts.
**MHS [23].** The Measuring Hate Speech (MHS) dataset is collected from comments on social media like YouTube, Twitter, and Reddit. The corpus is labeled by MTurk workers from the US. We consider comments with hate speech score \(\geq\) 0 as toxic and all others as non-toxic.
**ToxicSpan [39].** The ToxicSpan dataset contains \(\sim\)10k English texts filtered from Civil Comments [6] and was formally introduced as SemEval-2021 Task 5 [39]. Each text is reviewed by three to seven raters. Each rater is asked to identify the spans "that constitute anything that is rude, disrespectful or unreasonable that would make someone want to leave a conversation" [37]. The lengths of the highlighted spans were decided by the raters.
**Parallel [11].** The Parallel dataset contains 2,279 pairs of (toxic sentence, detoxified sentence). There are 1,108 unique toxic sentences after removing duplicates. Note that for each toxic sentence, the dataset might offer multiple detoxified versions. We only select the first detoxified version to construct the pair.
**ParaDetox [32].** ParaDetox contains 11,939 toxic sentences and 19,766 paraphrased sentences (detoxified sentences). Similar to the Parallel dataset, each toxic sentence might have multiple detoxified versions. We only pick the first detoxified version to construct the pair. The ParaDetox dataset constructed by us has 11,939 pairs in total.
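The "keep the first detoxified version per toxic sentence" rule used for Parallel and ParaDetox can be sketched as follows; the example rows are invented.

```python
def build_pairs(rows):
    """One (toxic, detoxified) pair per unique toxic sentence.

    `rows` yields (toxic_sentence, detoxified_sentence); when a toxic sentence
    has several detoxified versions, only the first one encountered is kept.
    """
    pairs = {}
    for toxic, detox in rows:
        pairs.setdefault(toxic, detox)
    return list(pairs.items())

rows = [("u r stupid", "you are not smart"),
        ("u r stupid", "that was not smart"),   # duplicate toxic sentence, dropped
        ("shut up idiot", "please stop talking")]
print(build_pairs(rows))
```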
**Remarks.** All the datasets are annotated by human annotators. However, the definition of toxicity might vary across different datasets. For instance, USElectionHate20 targets hateful tweets against politicians, while SBIC focuses on offensive posts from different Web communities. This may bring challenges for toxicity classifiers such as the Perspective API [4]. On the other hand, our approach diminishes this issue, given that we use a learnable prompt that is tailored for each dataset, effectively capturing the dataset's definition of toxicity through the lens of its positive and negative samples.
### Models
In this paper, we consider prompt tuning over two families of LLM including GPT2 [43] and T5 [44]. Concretely, we use GPT2-medium, GPT2-large, T5-small, T5-base, and T5-large in our experiments. In Task 1 (Toxicity Classification), the learning rate is set to 0.3, we set the total optimization steps to 2,000 with Adafactor [50] optimizer and the linear learning rate scheduler with 100 warm-up steps. For all models, the effective batch size is set to 32 (batch size of 4/8 with gradient accumulation steps of 8/4 for GPT2-L/Others). We follow the prompt tuning method proposed by Lester et al. [28] in Task 1. In Task 2 (Toxic Span Detection) and Task 3 (Detoxification), we set the training epoch to 5, the initial learning rate to 5e-5, and the optimizer of AdamW [33] with the linear learning rate scheduler. Different from Task 1 (Toxicity Classification), we follow the prompt tuning method proposed by Li and Liang [30] instead as it can achieve better performance in Task 2 and Task 3. We hypothesize that Lester et al. [28] initializes the prompt with embeddings that enumerate the output classes, which makes the method more suitable for the classification task. In contrast, the prompt tuning method proposed by Li and Liang [30] has more tunable parameters than the one proposed by Lester et al. [28]. This method learns transformer activations that are fixed across examples at every network layer, allowing subsequent tokens to attend to this prefix. As such, Li and Liang [30] is a better fit for Task 2 (Toxic Span Detection) and Task 3 (Detoxification).
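The effective batch size of 32 via gradient accumulation and the linear warm-up schedule can be illustrated with the generic PyTorch loop below. AdamW and the tiny linear module stand in for the Adafactor optimizer and the prompt parameters, and the particular combination of numbers is illustrative rather than an exact reproduction of our settings.

```python
import torch

model = torch.nn.Linear(10, 2)                 # stand-in for the tunable prompt parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

total_steps, warmup_steps, accum_steps = 2000, 100, 8   # effective batch = 4 x 8 = 32

def linear_schedule(step):
    """Linear warm-up followed by linear decay, as a multiplier on the base lr."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, linear_schedule)

step = 0
while step < total_steps:
    for _ in range(accum_steps):               # gradients accumulate over micro-batches
        x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
        loss = torch.nn.functional.cross_entropy(model(x), y) / accum_steps
        loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
    step += 1
```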
## 5 Task 1: Toxicity Classification
### Experimental Setup
**Baselines.** Regarding the baselines for Task 1, we consider Google's Perspective API [4] (Perspective), BERT-base trained on toxicity classification corpus [1] (ToxicBERT), and RoBERTa-base trained on toxicity classification corpus [1] (UnRoBERTa). For each baseline, given a text, it provides a toxicity score ranging from 0 to 1. We consider the text with a score larger than 0.5 as toxic otherwise non-toxic. The results with the best threshold (rather than 0.5) are shown in Table 15 in Appendix. Note that for Perspective API, on each dataset, we select the perspective score (e.g., Severe Toxicity) that achieves the best classification result, and report the corresponding performance.
**Datasets.** We use five datasets - HateXplain, USElectionHate20, HateCheck, SBIC, and MHS - to evaluate the baselines and our models. Note that we observe redundant samples on HateXplain, USElectionHate20, and SBIC.v2. However, they are less than 1% and have almost no influence on the final performance based on our initial evaluation.
**Metrics.** We consider accuracy, precision, recall, and \(F_{1}\)-score as the evaluation metrics, which are standard metrics for evaluating the performance of classifiers. Note that we only report the \(F_{1}\)-score in the main paper and put the precision, recall, and accuracy results in Appendix A.
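For reference, the four metrics can be computed with scikit-learn as in the sketch below (toy labels).

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 0]          # 1 = toxic
y_pred = [1, 0, 0, 1, 1, 0]
accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
print(accuracy, precision, recall, f1)
```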
### Results
**Overall Performance.** We first show the \(F_{1}\)-score of toxicity classification in Table 3. The accuracy, precision, and recall are shown in Table 16, Table 17, and Table 18 in the Appendix. We find that, in general, prompt tuning outperforms baselines across different datasets. For instance, on HateXplain, prompt tuning with GPT2-L achieves 0.731 \(F_{1}\)-score, while the best baseline (Perspective) only achieves 0.703 \(F_{1}\)-score. The statistical test shows that prompt tuning indeed outperforms the best baseline (see Table 14 in Appendix). This indicates that prompt tuning can indeed unleash the power of LLM to perform the toxicity classification task. Also, we observe that a larger LM usually provides a more promising performance on the task, e.g., GPT2-L usually outperforms GPT2-M and T5-B/L is better than T5-S in general. For instance, on HateCheck, GPT2-L achieves 0.892 \(F_{1}\)-score while GPT2-M only has 0.758 \(F_{1}\)-score. This implies that the larger capacity of
LLM would better guide the prompt tuning to achieve better performance.
**Comparison to Manual Prompt.** As previous work [48, 8] shows that manual prompts can achieve decent performance in the toxicity classification task, we investigate how the performance of prompt tuning compares to manual prompts. We use the manual prompt below, following Schick et al. [48], for our comparison study. Note that we also investigate using the descriptions of 6 different scores in Perspective API as the manual prompt and the results show similar trends.
[INPUT] Question: Does the above text contain rude, disrespectful, or unreasonable language? Answer: [MASK]

Here, [INPUT] is the placeholder for the text to be classified, "Question: Does the above text contain rude, disrespectful, or unreasonable language? Answer:" is our manual prompt, and [MASK] is the classification output by the LLM. The performance is shown in Table 4. We observe that the \(F_{1}\)-score of the manual prompt is substantially lower than that of the prompt tuning approach (see Table 3). For instance, for the average results with T5-S, prompt tuning achieves 0.766 \(F_{1}\)-score while the manual prompt only reaches 0.165. These results highlight the effectiveness and performance gains when using prompt tuning instead of manual prompts.
We can observe that, in some cases, the prompt can successfully transfer to another dataset. For instance, the prompts trained on USElectionHate20 can achieve 0.650 \(F_{1}\)-score on HateXplain and 0.733 \(F_{1}\)-score on MHS, which are about 5% lower than the baselines (0.703 and 0.790 \(F_{1}\)-score on HateXplain and MHS, respectively, according to Table 3). However, the performance is less satisfying in some other cases where the \(F_{1}\)-score is below 0.500. We also notice that the prompt trained on the MHS dataset can better transfer to other datasets. For instance, after training on MHS, the \(F_{1}\)-score is 0.694 on HateXplain and 0.581 on USElectionHate20, which is comparable to or even better than the \(F_{1}\)-score provided by the Perspective API (0.703 and 0.506). This can be credited to the fact that MHS covers various kinds of toxicity including insult, humiliation, violence, hate speech, etc. By tuning on such diversely distributed data, the learned prompt is more general and can better transfer to other datasets. On the other hand, prompts learned from datasets like HateXplain are less effective when transferred to other datasets. We suspect this is because these datasets have a relatively narrow definition of toxicity. In general, the prompt learned from a more diverse dataset with different types of toxicities may have a better generalization ability to other datasets. Meanwhile, as we have shown before (see Table 5), the prompts can better fit different downstream datasets with the help of only a small fraction of labeled samples, which further demonstrates the efficacy of prompt learning.
**Comparison with Fine-tuning.** Here we take T5-S on USElectionHate20 as an example. We observe that prompt tuning reaches 0.712 accuracy within 6 minutes, while the best accuracy (evaluated every 200 steps) for fine-tuning the whole model is only 0.619 within 100 minutes. This is because the LLM is trained with a large corpus and can generate informative representations of the inputs. Prompt tuning can guide the model to better leverage these representations for the downstream tasks with a small number of parameters, and can thus adapt faster to new tasks than fine-tuning, especially with fewer training samples.
**Robustness.** Since misspellings already appear in the training data, we observe that prompt tuning can adapt to testing posts with misspellings. For example, among 100 randomly selected toxic posts from HateCheck, there exist misspelled words like "tr4sh," "4sholes," "Fukc," and "crippl3," and prompt tuning with T5-S can correctly identify them (98% accuracy). We further perturb these 100 evaluation posts by randomly repeating one character of each toxic word several times or adding extra spaces inside the toxic word, e.g., "sluttuttts" and "w h o r e." Note that we leverage such perturbations since we also observe them in the toxic texts and such perturbations are also considered by previous work [19]. We observe that, without further prompt tuning, the evaluation accuracy on these 100 modified posts is still 97%, i.e., almost unchanged. This implies that prompt tuning is robust to adversarial perturbation.
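A rough sketch of the character-repetition and spacing perturbations described above is shown below; the exact perturbation procedure used in our evaluation may differ.

```python
import random

def perturb_word(word: str, rng: random.Random) -> str:
    """Repeat one character several times, or space the word out."""
    if rng.random() < 0.5:
        i = rng.randrange(len(word))
        return word[:i] + word[i] * rng.randint(2, 4) + word[i + 1:]
    return " ".join(word)                 # e.g. "trash" -> "t r a s h"

rng = random.Random(0)
print(perturb_word("trash", rng), "|", perturb_word("idiot", rng))
```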
**Error Analysis.** Although prompt tuning outperforms other baselines in most cases, wrongly predicted texts still exist (20 in total). We take the USElectionHate20 dataset (with T5-B) as a case study to analyze the wrongly predicted cases. As shown in Table 7, the main reason that causes the wrong prediction is the wrong label, e.g., in the example, we observe some toxicity against Trump, but the text is labeled as non-toxic. Also, we observe that some variations of the slur words and toxic hashtags may cause wrong predictions. Last, prompt tuning is less effective against some texts with implicit toxic content.
**Takeaways.** Our results show that prompt tuning outperforms baselines in the toxicity classification task with sufficient labeled data. Also, the detection performance is still promising with fewer training steps/samples. Another observation is that directly transferring the prompt trained on one dataset into another dataset might be less effective as the two datasets might share different types of toxicity. However, this can be addressed by adding only a small number of labeled samples from the distribution of the testing dataset. Our results suggest that prompt tuning can also serve as an alternative tool to assist the annotation process, especially for
| **Training Dataset** | **HateXplain** | **USElectionHate20** | **HateCheck** | **SBIC** | **MHS** |
| --- | --- | --- | --- | --- | --- |
| **HateXplain** | - | 0.488 | 0.373 | 0.419 | 0.688 |
| **USElectionHate20** | 0.650 | - | 0.472 | 0.485 | 0.733 |
| **HateCheck** | 0.543 | 0.297 | - | 0.534 | 0.579 |
| **SBIC.v2** | 0.638 | 0.404 | 0.646 | - | 0.655 |
| **MHS** | 0.694 | 0.581 | 0.610 | 0.518 | - |

Table 6: \(F_{1}\)-score of Task 1 (Toxicity Classification) when the training dataset (rows) is different from the transfer dataset (columns).
Figure 1: \(F_{1}\)-score of Task 1 with different training steps.
the newly emerging toxicity.
## 6 Task 2: Toxic Span Detection
### Experimental Setup
As we observed from Task 1 (Toxicity Classification), T5 models and GPT2 models share similar performance. In the following evaluation, we mainly leverage T5 models as our pre-trained LLMs.
**Baselines.** We consider three baselines, i.e., BiLSTM [18], BERT [12], and SPAN-BERT [22]. Concretely, we follow the default hyper-parameters setting of Pavlopoulos et al. [37]. We train/fine-tune the models for 100 epochs on the training partition of the ToxicSpan dataset and evaluate it on its test partition.
**Datasets.** We use the ToxicSpan dataset to evaluate the baselines and our models.
**Metrics.** We follow previous work [37] and leverage the \(F_{1}\)-score as the main evaluation metric. Note that the \(F_{1}\)-score in Task 2 is different from Task 1. Concretely, for the \(i\)-th sample, we consider its ground truth span (i.e., the character offsets) as \(S^{i}_{g}\) and the predicted span as \(S^{i}_{p}\). The sample-level precision \(P^{i}\), recall \(R^{i}\), and \(F_{1}\)-score \(F^{i}_{1}\) are defined as follows:
\[P^{i}(S^{i}_{g},S^{i}_{p})=\frac{|S^{i}_{g}\cap S^{i}_{p}|}{|S^{i}_{p}|} \tag{4}\]
\[R^{i}(S^{i}_{g},S^{i}_{p})=\frac{|S^{i}_{g}\cap S^{i}_{p}|}{|S^{i}_{g}|} \tag{5}\]
\[F^{i}_{1}(S^{i}_{g},S^{i}_{p})=\frac{2\cdot P^{i}(S^{i}_{g},S^{i}_{p})\cdot R^{i}(S^{i}_{g},S^{i}_{p})}{P^{i}(S^{i}_{g},S^{i}_{p})+R^{i}(S^{i}_{g},S^{i}_{p})} \tag{6}\]
Note that if the ground truth span \(S^{i}_{g}\) and the predicted span \(S^{i}_{p}\) are both empty, we consider \(F^{i}_{1}(S^{i}_{g},S^{i}_{p})=1\) (\(F^{i}_{1}(S^{i}_{g},S^{i}_{p})=0\) if one of them is empty). Then, we average the \(F_{1}\)-score for all samples to obtain a single \(F_{1}\)-score.
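Eqs. (4)-(6) and the empty-span convention translate directly into code; the sample offsets below are invented.

```python
def span_f1(gold: set, pred: set) -> float:
    """Character-offset F1 for one sample, following Eqs. (4)-(6)."""
    if not gold and not pred:
        return 1.0                     # both spans empty: counted as a perfect match
    if not gold or not pred:
        return 0.0                     # exactly one span empty
    overlap = len(gold & pred)
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

samples = [({13, 14, 15, 16}, {13, 14}), (set(), set()), ({3, 4}, {7})]
print(sum(span_f1(g, p) for g, p in samples) / len(samples))   # dataset-level average
```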
**Prompt Transferability.** To test out-of-distribution transfer, we take toxic posts from the Parallel dataset (used in Task 3) and form a new testing dataset. Given the prompt trained with T5-L on ToxicSpan, we observe that our method can correctly identify the toxic spans on 85% of posts. We then dive deeper into the failed cases and find that most of them belong to Categories 1 and 8 as shown in Table 9. In general, this case study demonstrates that prompt tuning can indeed transfer to out-of-distribution data.
**Comparison with Fine-tuning.** For Task 2, we also compare the performance of prompt tuning with fine-tuning. Taking T5-L model as an example, we observe that, with the same training epochs, prompt tuning yields slightly better performance (0.643 \(F_{1}\)-score) than fine-tuning (0.628 \(F_{1}\)-score) and costs less time. This indicates that prompt tuning can unleash the power of LLM with only limited effort.
**Robustness.** Following the perturbation strategy in Task 1, we perturb 100 randomly selected posts from the ToxicSpan dataset and compare the performance with the original posts. We observe that prompt tuning reports the same toxic span for 57 perturbed posts. For 38 perturbed posts, prompt tuning fails to detect the toxic spans or detects only part of them. For the remaining 5 perturbed posts, prompt tuning can obtain even better toxic spans than their original version. Compared to Task 1, prompt tuning is less robust in Task 2. This can be credited to the lack of perturbed toxic spans in the training dataset, which may be mitigated by introducing perturbation during the training phase as well.
**Error Analysis.** We conduct a case study regarding the wrongly detected spans. Concretely, we randomly select 100 test samples with wrongly predicted spans and manually verify the possible reasons. Then, we categorize the reasons into 9 categories (see Table 9). Note that each test sample is manually verified by three annotators and put into a category with full agreement. We find that a substantial percentage of wrong span predictions in categories 2, 3, 4, and 5 (47%) are caused by problematic ground truth labels. For instance, in category 2, the ground truth span contains both toxic and non-toxic text. Note that the ground truth inconsistency is caused by the fact that the lengths of the toxic spans were decided by the raters [39]. The ToxicSpan dataset accepts a character offset only if at least two raters have included it in their spans. Category 2 actually covers the corner cases relating to such human errors/bias when building the ToxicSpan dataset. Nevertheless, our method successfully detects the real toxic span "cowards" from this example. Also, in category 3, the toxic span is not labeled by the ground truth. However, it is accurately detected by our method. We also observe that prompt tuning may fail to identify some ambiguous toxic spans such as the "embarrassment" example shown in category 4 (Table 9). A more interesting case (category 5) shows that our method can dig out the missing toxic span from the text. For instance, the ground truth span only contains "stupid", while our method discovers "idiots" as well. This case demonstrates the potential of prompt tuning to become an effective tool to improve the annotation quality of toxic spans. We also notice that the cases in categories 1, 6, 7, 8, and 9 (53%) are caused (or partially caused) by our method. For category 1, we observe that our method repeats the original sentence without any change. We then dive deeper into those samples and find that they are mainly short sentences or contain less toxic spans, which may lead the prompt to become less sensitive to these cases. For category 6, we observe that our method successfully generates the sentence without toxic spans, but the mapping algorithm fails to produce exactly the same span boundaries as the ground truth span, e.g., prompt tuning includes the quotation marks in the toxic span as well, since they serve as an emphasis of the toxic expression. In category 9, we observe that our method overlooks the ground truth span, but surprisingly detects a new span like the "crap" example. Those wrong cases show that toxic span detection from the view of prompt tuning is not perfect, but prompt tuning shows its great potential in facilitating and correcting the toxic span detection process. For instance, it can serve as an assistant tool for better annotation quality.
**Takeaways.** We observe that prompt tuning can achieve comparable performance with the best conventional method, i.e., SPAN-BERT, but with much less time cost. Also, the performance is relatively stable even with fewer training epochs. This further demonstrates the potential of leveraging prompt tuning to tackle the toxic span detection tasks and provides evidence for better span labeling. We also show that prompt tuning, in some cases, can identify additional toxic spans not labeled by the ground truth (i.e., human annotators).
## 7 Task 3: Detoxification
Different from previous tasks that only focus on toxicity detection, this task aims to detoxify the given text while preserving the corresponding semantic meaning.
### Experimental Setup
**Baselines.** We use the vanilla version of BART [29] and the DetoxBART [32] as the baselines. Note that the DetoxBART is also trained on the ParaDetox dataset for 10,000 epochs according to Logacheva et al. [32].
**Datasets.** We use Parallel and ParaDetox datasets to evaluate the performance of baselines and prompt tuning.
**Metrics.** To quantify the quality of the detoxification, we consider two aspects, i.e., the detoxification effectiveness and the utility of the generated sentences. For detoxification effectiveness, we leverage the Perspective API to quantify the toxicity level change since it offers the best performance among all baselines and is robust on different datasets. Specifically, we first measure the average toxicity score change and then quantify the percentage of texts that have a high toxicity score (over 0.7 or 0.9), following the guidelines of Perspective API.3 Note that we use \(\mathbf{T_{avg}}\), \(\mathbf{T_{0.7}}\), and \(\mathbf{T_{0.9}}\) to denote the average toxicity score of texts, the ratio of texts that have a toxicity score over 0.7, and the ratio of texts that have a toxicity score over 0.9, respectively. Regarding the utility, we consider five different metrics. We first consider BLEU score as the utility evaluation metric, which is also widely
used in previous work [32, 54]. Then we quantify the semantic preservation by comparing the text embedding similarity between the original text and the detoxified text. Concretely, we consider two types of embedding following [32], i.e., contextual string embeddings [25] from flairNLP [2], which is denoted as SIM (F), and SIMILE proposed by Wieting et al. [60], which is denoted as SIM (W). Besides, we also use the token-level perplexity [43] to measure the fluency of the text, where lower perplexity denotes better fluency.
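The three toxicity-level statistics defined above can be computed from per-text Perspective-style scores as in this small sketch; the scores shown are toy values.

```python
def toxicity_stats(scores):
    """T_avg, T_0.7 and T_0.9 from a list of Perspective-style scores in [0, 1]."""
    n = len(scores)
    t_avg = sum(scores) / n
    t_07 = sum(s > 0.7 for s in scores) / n
    t_09 = sum(s > 0.9 for s in scores) / n
    return t_avg, t_07, t_09

print(toxicity_stats([0.95, 0.40, 0.72, 0.10]))   # toy scores
```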
### Results
The detoxification performance on different datasets is shown in Table 10. We observe that DetoxBART performs slightly better in detoxifying the text than prompt tuning. For instance, on ParaDetox, DetoxBART reduces the \(\mathbf{T_{avg}}\), \(\mathbf{T_{0.7}}\), and \(\mathbf{T_{0.9}}\) to 0.180, 0.013, and 0, respectively, while prompt tuning on T5-L can reduce them to 0.213, 0.037, and 0.003, respectively. This means that DetoxBART has better detoxification effectiveness than prompt tuning. However, we also observe that the quality of the text generated with prompt tuning is better than that of DetoxBART. For instance, on ParaDetox, compared to DetoxBART, PT (T5-B) has a higher BLEU score, SIM (W), and SIM (F), while attaining a lower TokenPPL. This indicates the text generated by prompt tuning has better fluency and can better preserve the semantic meaning of the original text. In general, we consider both DetoxBART and prompt tuning as successful methods as they can largely reduce the toxicity level while preserving the semantic meaning and fluency of the original text.
**Different Epochs.** We then investigate how the training epochs affect the detoxification effectiveness and the model's utility regarding semantic preservation. The results are shown in Figure 4 and Figure 3, respectively. From Figure 4, we have three observations. First, we find that more training epochs lead to better detoxification performance. For instance, on Parallel, prompt tuning on T5-L can reduce the
Table 9: Categorization of the wrongly detected toxic spans in Task 2, listing for each of the nine categories the reason, a text example, and the percentage of cases.
\(\mathbf{T_{avg}}\) to 0.616 with 1 epoch, while decreasing it to 0.397 with 5 epochs. Second, prompt tuning on larger models leads to better detoxification performance, e.g., T5-L performs the best while T5-S performs the worst. This is expected as a larger model can represent the data in a more informative way, thus better guiding the prompt tuning in the direction of detoxification. Third, on a larger dataset such as ParaDetox, prompt tuning already achieves good detoxification performance in an early epoch, e.g., the first or second epoch. Our results further exemplify the effectiveness of prompt tuning as the time cost is much lower than that of training a dedicated detoxification model like DetoxBART.
Regarding utility, we find that the utility is relatively stable for different models in different epochs. This indicates that those LLMs have good generation ability in general.
**Prompt Transferability.** We then take ParaDetox as the training dataset and Parallel as the testing dataset to investigate the generalizability power of prompt tuning. With T5-B trained on ParaDetox, the \(T_{avg}\), \(T_{0.7}\), and \(T_{0.9}\) on Parallel drop to 0.251, 0.027, and 0.000, respectively, which are even better than the original results shown in Table 10 (0.408, 0.256, and 0.032). One possible reason is that ParaDetox contains a larger number of training data, which better guides the prompt for the detoxification tasks and makes it more transferable to other datasets like Parallel.
**Comparison with Fine-tuning.** For Task 3, we take the T5-L model on Parallel as a case study. We observe that, prompt tuning can reduce the toxicity of posts to a larger extent, e.g., the **Tavg** of prompt tuning is 0.396, while the value is 0.437 for fine-tuning. On the other hand, we find that fine-tuning can generate more fluent sentences, e.g., the BLEU score is 0.795 for fine-tuning, while only 0.754 for prompt tuning. In general, prompt tuning can still be considered as a lightweight plugin to adapt LLMs to new tasks.
**Robustness.** We again follow the perturbation strategy in Task 1 to perturb 100 randomly selected posts from the Parallel dataset. We observe that, for the original version of these 100 posts, prompt tuning (with T5-L) can reduce the \(T_{avg}\), \(T_{0.7}\), and \(T_{0.9}\) from 0.725, 0.590, and 0.130 to 0.357, 0.120, and 0.010, respectively, while the values are 0.402, 0.180, and 0.020 for the perturbed 100 posts, which is close to detoxify the original version. This indicates that prompt tuning is relatively robust in Task 3.
**Case Study.** We then dive deeper into the generated text of the ParaDetox dataset and check them manually. We consider both successful cases (C1 and C2) and failed cases (W1-W5). Table 11 shows the examples of these cases. In most cases, prompt tuning is powerful in reducing the toxicity level of the sentence while preserving its semantic meaning. For example, in C1, our method achieves similar detoxification performance (toxicity score decreases from 0.827 to around 0.163). Also, our method preserves the semantic meaning properly. In C2, we observe that our method can even detoxify the sentence better than the ground truth.
Among the 2,388 text samples, we observe that there are 88 detoxification samples (3.68%) that still have a high toxicity score, i.e., larger than 0.7. We manually check those samples and find that they can be categorized into 5 different wrong categories (W1-W5). For W1 (6/88), we observe that the sentence is hard to detoxify, and the ground truth sentence is identical to the original sentence. For W2 (52/88), prompt tuning just directly repeats the original sentence without any modification. For W3 (27/88), we observe that prompt tuning indeed preserves the semantic meaning and reduces the toxicity level. We acknowledge that for some implicit toxic content, as shown in the example, it might be harder for the prompt model to detect and eliminate it perfectly. For W4 (1/88), prompt tuning actually provides better semantic preservation compared to the ground truth. For W5 (1/88), we observe that prompt tuning just considers "i jus clicked tht nasty shit" as toxic parts and directly removes them. During the labeling, we notice that there indeed exists a trade-off between detoxification and semantic preservation. However, in most cases, prompt tuning can do well on both aspects (see also Table 10). It indicates that prompt tuning can be a good tool for assisting the detoxification task, e.g., providing possible solutions for the annotators to make their decision. Currently, our prompt tuning is based on paired datasets, which is similar to machine translation. However, such datasets are usually small. One promising
\begin{table}
\begin{tabular}{l l|c c c|c c c c} \hline \hline
**Dataset** & **Method** & **Tavg**\(\downarrow\) & **T0.7**\(\downarrow\) & **T0.9**\(\downarrow\) & **BLEU**\(\uparrow\) & **SIM (W)**\(\uparrow\) & **SIM (F)**\(\uparrow\) & **TokenPPL**\(\downarrow\) \\ \hline \multirow{8}{*}{**Parallel**} & None & 0.755 & 0.676 & 0.135 & 1.000 & 1.000 & 1.000 & 227.834 \\ & GroundTruth & 0.178 & 0.009 & 0.000 & 0.491 & 0.757 & 0.669 & 550.725 \\ & BART & 0.754 & 0.676 & 0.135 & 0.999 & 0.999 & 0.998 & 227.904 \\ & DetoxBART & 0.242 & 0.036 & 0.000 & 0.708 & 0.879 & 0.843 & 236.654 \\ & PT (T5-S) & 0.573 & 0.463 & 0.077 & 0.835 & 0.927 & 0.939 & 326.696 \\ & PT (T5-B) & 0.408 & 0.256 & 0.032 & 0.770 & 0.898 & 0.909 & 301.597 \\ & PT (T5-L) & 0.396 & 0.329 & 0.031 & 0.754 & 0.881 & 0.889 & 284.861 \\ \hline \multirow{8}{*}{**ParaDetox**} & None & 0.775 & 0.778 & 0.134 & 1.000 & 1.000 & 1.000 & 330.829 \\ & GroundTruth & 0.166 & 0.000 & 0.000 & 0.633 & 0.828 & 0.778 & 393.800 \\ \cline{1-1} & BART & 0.774 & 0.777 & 0.133 & 0.999 & 0.999 & 0.998 & 331.250 \\ \cline{1-1} & DetoxBART & 0.180 & 0.013 & 0.000 & 0.688 & 0.862 & 0.832 & 438.242 \\ \cline{1-1} & PT (T5-S) & 0.253 & 0.081 & 0.007 & 0.760 & 0.910 & 0.905 & 593.442 \\ \cline{1-1} & PT (T5-B) & 0.224 & 0.051 & 0.005 & 0.754 & 0.920 & 0.897 & 499.851 \\ \cline{1-1} & PT (T5-L) & 0.213 & 0.037 & 0.003 & 0.743 & 0.916 & 0.886 & 404.565 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Performance of Task 3. The arrow denotes which direction is for better results.
direction that we aim to explore in our future work is to combine the paired dataset with the unpaired dataset (i.e., it only contains sets of toxic and non-toxic contents but without the pairs) to jointly fine-tune the prompt.
**Takeaways.** We empirically show that prompt tuning can reduce the toxicity level to a large extent and better preserve the semantic meanings. An interesting observation is that the semantic meaning of the original sentence can be properly preserved even with fewer training epochs due to the strong representation ability of the LLM. However, with fewer epochs, the detoxification performance might be less satisfying as the process of toxic to non-toxic contents is more difficult than previous tasks and needs more learning steps to better guide the prompt tuning. The effective detoxification and semantic preserving abilities make prompt tuning a strong competitor to conventional methods in the detoxification task.
## 8 Related Work
**Prompt Learning.** Prompt learning is a new paradigm in natural language processing (NLP) [31]. It allows users to directly specify the task they want in natural language for the pre-trained language model to interpret and complete. This paradigm paves the way for using a single LLM as the _universal solver_ for various understanding and generation tasks, such as text classification [47], machine translation [44], semantic parsing [52], question answering [20], etc. To unleash the full potential, research on prompt learning has been investigating automatically inducing discrete/continuous prompts [57, 30], multi-prompt learning [42, 20], prompt training and fine-tuning strategies [13, 41], the transferability of prompts [40], etc. Our work is built on top of prompt learning. We conduct the first systematic hateful language study from the prompt tuning perspective.
**Toxicity Classification.** The problem of toxic online content is a longstanding and challenging [5] problem affecting our society. Motivated by the impact that the problem has on both the online and offline world, the research community and the industry devoted substantial resources to developing models to detect toxic content. One of the most used tools for assessing toxicity online is Perspective API [4], a set of machine learning models trained on a human-annotated dataset, released by Google. The Perspective API, given a piece of text, provides a set of scores that correspond to how likely the text is toxic, attacking specific identities, sexually explicit, etc. At the same time, Google released its annotated dataset, which enabled other researchers to develop more models aiming to tackle the problem. One such example is Detoxify [1], which leverages the power of transformer models to detect toxicity in text, across multiple languages.
Davidson et al. [10] highlight that there is a distinction between offensive language and hate speech. Also, the authors release HateSonar, a machine learning model, that identifies whether a piece of text contains offensive language or hate speech. As previous research notes [61], however, the HateSonar classifier performs poorly compared to the Perspective API, when tested on comments left on news articles. Zimmerman et al. [66] highlight that by leveraging deep learning ensembles, we can improve the performance of previous models in detecting hate speech on Twitter. Other work focuses on identifying the targets of toxic content [53, 14], or on identifying specific forms of toxic content such as Antisemitism [62, 36], Islamophobia [58], and Sinophobia [56, 65].
Figure 4: Detoxification effectiveness of Task 3 with different training epochs.
Figure 3: Utility of Task 3 with different training epochs.
All of the above-mentioned efforts in detecting toxic content are based on fine-tuning existing models or developing dedicated classifiers focusing on the specific task of detecting toxic content. Recently, the pre-train and prompt paradigm is becoming increasingly popular, hence the research community started investigating how prompt learning can be leveraged to tackle the problem of toxic content online. In particular, Chiu et al. [8] use OpenAI's GPT-3 language model to investigate the performance of prompt learning in the task of detecting racist or sexist content. They find that by using a pre-defined prompt and a few-shot learning setting, they can identify racist or sexist content with an accuracy of up to 85%, highlighting that prompt learning can play a role in identifying toxic content. Similarly, Schick et al. [48] find that language models can identify toxic content and whether the generated text contains undesirable biases, all using prompt learning techniques. Also, they propose a de-biasing method, which helps the language model generate less biased content. Overall, both works [8, 48] highlight that large language models and prompt learning can detect toxic content with a decent performance. While this previous work is essential, it is limited in the sense that it focuses only on the toxicity classification task and, more importantly, relies on manual pre-defined prompts. In contrast, our work provides a comprehensive evaluation of how large language models and prompt learning can assist in tackling the problem of toxic content by considering multiple tasks (toxicity classification, toxic span detection, and detoxification). Also, we show that by using prompt tuning techniques, instead of pre-defined prompts, we can substantially increase the performance of the language models in the three tasks.
**Toxic Span Detection.** Toxic span detection [39] aims to identify the specific span that makes the sentence toxic. Pavlopoulos et al. [37] treat this task as a sequence labeling task to annotate the suspicious span in the sentence. Three models including BiLSTM [18], BERT [12], and SPAN-BERT [22] are considered. We instead formalize this task as a generation task and show that prompt tuning can achieve comparable performance to SPAN-BERT but with much less computational time.
**Detoxification.** Detoxification aims to reduce the toxicity level of the sentence while preserving the semantic meaning to the largest extent. It is similar to neural style transfer [21]. Laugier et al. [26] propose a self-supervised method named CAE-T5 to learn the transformation from toxic to civil from the unpaired corpus. Logacheva et al. [32] develop DetoxBART which fine-tunes the BART model on the ParaDetox dataset to achieve better performance. Our work is substantially different from their work as we do not need to fine-tune the model but just the prompt, which is less costly. We notice that conventional methods like DetoxBART can achieve better detoxification performance while prompt tuning can better preserve semantic information.
## 9 Conclusion
In this paper, we performed the first extensive evaluation of using prompt learning with tunable prompts (prompt tuning) to tackle the problem of toxic content. Particularly, we focused on three tasks (toxicity classification, toxic span detection, detoxification) and assessed the performance of prompt tuning and how it compares to state-of-the-art baselines in these tasks. Among other things, we find that prompt tuning can achieve comparable or even better performance compared to the baselines. As shown by our evaluation, integrating prompt tuning into toxic content research can better help to improve the dataset quality and the model utility as the toxicity label (Task 1), predicted toxic span (Task 2), and
\begin{table}
\begin{tabular}{c l l l} \hline \hline
**Category** & **Original** & **Ground Truth** & **Generated** \\ \hline C1 & if assad wasn ’t such a dick then involvement would not be necessary. [0.827] & if assad wasn’t such that bad involvement would not be necessary. [0.163] & if assad wasn ’t such a bad person then involvement would not be necessary. [0.159] \\ \hline C2 & your reading comprehension is more fucked up than a football bat. [0.835] & your reading comprehension is worse than a football bat. [0.575] & your reading comprehension is not good [0.355] \\ \hline W1 & if you hold a knife against the cops you deserve to be shot! [0.827] & if you hold a knife against the cops you deserve to be shot! [0.827] & if you hold a knife against the cops you deserve to be shot! [0.774] \\ \hline W2 & did they bring along a signedx-glossy of boy wonder and castro as a token of their stupidity? & did they bring along a signedx-glossy of boy wonder and castro as a token [0.133] & did they bring along a signedx-glossy of boy wonder and castro as a token of their stupidity? [0.754] \\ \hline W3 & someone should give this idid a slap on the head. [0.939] & someone should give him a lesson. [0.257] & someone should give him a slap on the head. [0.721] \\ \hline W4 & i hate guys with a bitch mouth sit the fuck down nigea [0.985] & I hate bad guys [0.440] & i hate guys with a bad mouth sit down [0.721] \\ \hline W5 & wt!!
detoxified sentence (Task 3) can be used to assist the labeling procedure.
**Limitations.** Naturally, our work has some limitations. First, we use GPT2 and T5 as the LLMs to demonstrate the efficacy of prompt tuning. Our evaluation has demonstrated that prompt tuning can perform well even with these LLMs, and larger models generally perform better (see Table 3). While we acknowledge that conducting experiments with larger models with billions of parameters would be appealing, our hardware capabilities limit such endeavors. Also, we use Perspective API as an indicator to quantify the toxicity level (e.g., on Task 3), which is likely to yield some false positives/false negatives. Nevertheless, detecting toxic content is an open research challenge and the Perspective API is also leveraged by previous work [48, 51], indicating that it is a good proxy for assessing toxic content. Despite these limitations, we believe that our research can pave new ways for the study of toxic content, as researchers with limited computing resources can utilize currently available pre-trained large language models to perform important toxicity-related tasks with acceptable performance.
**Acknowledgment.** We thank the anonymous reviewers and our shepherd for their invaluable comments and feedback that helped us improve our manuscript. This work is partially funded by the European Health and Digital Executive Agency (HADEA) within the project "Understanding the individual host response against Hepatitis D Virus to develop a personalized approach for the management of hepatitis D" (D-Solve) (grant agreement number 101057917).
|
2306.13880 | Causality and stability analysis for the minimal causal spin
hydrodynamics | We perform the linear analysis of causality and stability for a minimal
extended spin hydrodynamics up to second order of the gradient expansion. The
first order spin hydrodynamics, with a rank-3 spin tensor being antisymmetric
for only the last two indices, are proved to be acausal and unstable. We then
consider the minimal causal spin hydrodynamics up to second order of the
gradient expansion. We derive the necessary causality and stability conditions
for this minimal causal spin hydrodynamics. Interestingly, the satisfaction of
the stability conditions relies on the equations of state for the spin density
and chemical potentials. Moreover, different with the conventional relativistic
dissipative hydrodynamics, the stability of the theory seems to be broken at
the finite wave-vector when the stability conditions are fulfilled at small and
large wave-vector limits. It implies that the behavior in small and large
wave-vector limits may be insufficient to determine the stability conditions
for spin hydrodynamics in linear mode analysis. | Xin-Qing Xie, Dong-Lin Wang, Chen Yang, Shi Pu | 2023-06-24T07:06:44Z | http://arxiv.org/abs/2306.13880v3 | # Causality and stability analysis for the minimal causal spin hydrodynamics
###### Abstract
We perform the linear analysis of causality and stability for a minimal extended canonical spin hydrodynamics up to second order of the gradient expansion. The first order canonical spin hydrodynamics are proved to be acausal and unstable. To remove the unstable and acausal modes, we then formulate the minimal causal spin hydrodynamics up to second order of the gradient expansion. We derive that causality and stability conditions for this minimal causal spin hydrodynamics. Interestingly, the satisfaction of the stability condition relies on the equations of state for the spin density and chemical potentials. Moreover, different with the conventional relativistic dissipative hydrodynamics, the stability of the theory seems to be broken at the finite wave-vector when the stability conditions are fulfilled at small and large wave-vector limits. It implies that the linear stability conditions are necessary but may be insufficient.
## I Introduction
Relativistic heavy ion collisions provide a novel platform to study the spin physics. In non-central relativistic heavy-ion collisions, the quark-gluon plasma (QGP) with large angular momentum perpendicular to the reaction plane is created. Because of the total angular momentum conservation, the averaged spin of final particles produced from QGP are polarized along the direction of the initial orbital angular momentum [1; 2; 3], as known as the global polarization. The measurements of the global polarization for \(\Lambda,\overline{\Lambda}\), and other hyperons [4; 5; 6; 7; 8; 9; 10] can be understood well by various phenomenological models [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. The experimental data also indicates that the QGP generated in non-central relativistic heavy-ion collisions is the most vortical fluid ever observed [4]. STAR [6; 27] and ALICE Collaboration [28] also measured the local polarization of \(\Lambda\) and \(\overline{\Lambda}\) along the beam and out-of-plane directions. Interestingly, the sign of local polarization in theoretical calculations is opposite to that of experimental data [15; 29; 30; 31; 23; 16]. To resolve the disagreement, a great deal of effort has been taken in feed-down effects [32; 33], hadronic interactions [34; 35], relativistic spin hydrodynamics [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73], statistical models [74; 75; 29], quantum kinetic theory [76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113], effective theories [114; 115; 116], and other phenomenological models [117; 118; 119; 20; 210; 211; 23; 23; 212; 213; 214; 215; 216; 217; 218]. Although there are much important progress [116; 120; 121; 122; 123; 124; 125; 126; 127; 128], the local polarization has not been fully understood. Another important phenomenon related to spin, called the spin alignment of vector mesons proposed by Refs. [1; 2; 3], has drawn a lot of attentions. The spin alignment is characterized by the deviation of \(\rho_{00}\) from \(1/3\), where \(\rho_{00}\) is the \(00\) component of the spin density matrix of vector mesons [129]. A non-vanishing \(\rho_{00}-1/3\) indicates a net spin alignment of vector mesons. The experimental results [130; 131; 132; 133; 134; 135] show that the magnitude of the spin alignment of vector meson is much larger than that caused by vorticity and other conventional effects [2; 136; 137; 138; 139; 20; 214; 215; 216; 217; 218; 219; 22; 230; 231; 232; 233; 234; 235; 236; 237; 238; 239; 240; 241; 242; 243; 244; 244; 245; 246; 247; 248; 25
to construct spin hydrodynamics, such as entropy current analysis [45; 57; 58; 59; 67; 73; 48; 50; 49; 51], quantum kinetic theory [49; 51; 55; 56; 63; 65; 66; 69; 85; 93; 100; 110; 145], holographic duality [52; 53], and effective Lagrangian method [36; 37].
In spite of these substantial efforts, the arbitrariness due to pseudo-gauge transformations in spin hydrodynamics is not fully understood. Through pseudo-gauge transformations [146; 147], one can obtain new forms of the energy momentum tensor and spin tensor without affecting the conservation laws. Although such transformations have no impact on the total conserved charges, they do change the values of locally defined quantities, e.g., the energy momentum tensor and spin tensor [145; 146; 147; 141]. Thus, different pseudo-gauge transformations give rise to different frameworks of spin hydrodynamics, e.g., the canonical [45; 59], Belinfante [48], Hilgevoord-Wouthuysen (HW) [148; 65], and de Groot-van Leeuwen-van Weert (GLW) [43; 149] forms. Which framework is suitable for understanding the experimental data has led to intense discussions [150; 151; 152; 48; 153; 154; 41].
In the canonical spin hydrodynamics, the canonical spin operator fulfills the SO(3) algebra of angular momentum [153], thereby establishing an inherent connection to the conventional spin in quantum mechanics. Moreover, the interchange between orbital angular momentum and spin in the canonical spin hydrodynamics is transparent.
So far, the canonical spin hydrodynamics in the first order of the gradient expansion has been established. Before simulating the canonical spin hydrodynamics, it is necessary to investigate the theory's causality and stability, as is done in conventional hydrodynamics. In fact, the first order conventional relativistic hydrodynamics in the Landau frame is always acausal and unstable; see, e.g., the discussions in Refs. [154; 155; 156; 157]. Therefore, the question arises whether the first order spin hydrodynamics can be causal or stable. Several studies conclude that the canonical spin hydrodynamics up to the first order in the gradient expansion may be acausal and unstable in the linear modes analysis [70; 71]. In an early study [45], the authors modified the constitutive relations for the antisymmetric part of the energy momentum tensor through the equations of motion for the fluid, and the stability conditions of this first order theory in the rest frame of the fluid seem to be satisfied in the linear modes analysis. Later, Ref. [71] showed that this first order theory may be acausal, while Ref. [70] found that the stability condition (which corresponds to Eq. (42) in this work) may not be satisfied.
In this work, we systematically investigate the causality and stability of the canonical spin hydrodynamics in the linear modes analysis. Our findings indicate that the canonical spin hydrodynamics up to the first order in the gradient expansion is acausal and unstable even when using the replacement mentioned in Ref. [45]. The acausal and unstable modes can usually be removed by extending the theory up to the second order in the gradient expansion. Therefore, we follow the method outlined in conventional hydrodynamics [156; 157; 158; 159; 160] to formulate the minimal causal spin hydrodynamics. It is sufficient to check whether causality and stability can be recovered up to the second order in the gradient expansion [156; 157; 158; 159; 160]. We then analyze the causality and stability of this minimal extended theory.
The paper is organized as follows. We first review the first order canonical spin hydrodynamics in Sec. II and show that it is acausal and unstable in Sec. III. In Sec. IV, we formulate the minimal causal spin hydrodynamics following the method outlined in conventional hydrodynamics. In Sec. V, we analyze the causality and stability of the minimal causal spin hydrodynamics in the rest frame and comment on the results in moving frames. We summarize this work in Sec. VI.
Throughout this work, we adopt the metric \(g_{\mu\nu}=\text{diag}\{+,-,-,-\}\) and \(\Delta_{\mu\nu}=g_{\mu\nu}-u_{\mu}u_{\nu}\). For a rank-2 tensor \(A^{\mu\nu}\), we introduce the short hand notations \(A^{(\mu\nu)}\equiv(A^{\mu\nu}+A^{\nu\mu})/2\), \(A^{[\mu\nu]}\equiv(A^{\mu\nu}-A^{\nu\mu})/2\), and \(A^{<\mu\nu>}\equiv\frac{1}{2}[\Delta^{\mu\alpha}\Delta^{\nu\beta}+\Delta^{\mu \beta}\Delta^{\nu\alpha}]A_{\alpha\beta}-\frac{1}{3}\Delta^{\mu\nu}(\Delta^{ \alpha\beta}A_{\alpha\beta})\).
## II First order canonical spin hydrodynamics
In this section, let us briefly review the first order relativistic spin hydrodynamics. In canonical spin hydrodynamics, we have the conservation equations for energy, momentum, total angular momentum, and particle number, i.e., [45; 48; 50; 58; 59; 67; 161]
\[\partial_{\mu}\Theta^{\mu\nu}=0,\quad\partial_{\lambda}J^{\lambda\mu\nu}=0, \quad\partial_{\mu}j^{\mu}=0, \tag{1}\]
where \(\Theta^{\mu\nu}\) is the energy momentum tensor, \(J^{\lambda\mu\nu}\) is the total angular momentum current, and \(j^{\mu}\) is the current for particle number. Different from conventional relativistic hydrodynamics, the total angular momentum conservation equation in Eq. (1) plays a crucial role in describing the evolution of spin. The total angular momentum current in the canonical form can be written as [45; 48]
\[J^{\lambda\mu\nu} = x^{\mu}\Theta^{\lambda\nu}-x^{\nu}\Theta^{\lambda\mu}+\Sigma^{ \lambda\mu\nu}, \tag{2}\]
where the first two terms correspond to the conventional orbital angular momentum, and \(\Sigma^{\lambda\mu\nu}\) is usually called the canonical rank-3 spin tensor. Using Eq. (2), the conservation equation \(\partial_{\lambda}J^{\lambda\mu\nu}=0\) can be rewritten as the spin evolution equation,
\[\partial_{\lambda}\Sigma^{\lambda\mu\nu}\:=\:-2\Theta^{[\mu\nu]}. \tag{3}\]
Eq.(3) implies that the anti-symmetric part of energy momentum tensor \(\Theta^{[\mu\nu]}\) is the source for spin, and the spin can be viewed as a conserved quantity if and only if \(\Theta^{[\mu\nu]}=0\).
After introducing the spin degrees of freedom, the thermodynamic relations in spin hydrodynamics are modified as [45; 48; 50; 58; 59; 67; 161]
\[e+p = Ts+\mu n+\omega_{\mu\nu}S^{\mu\nu}, \tag{4}\] \[de = Tds+\mu dn+\omega_{\mu\nu}dS^{\mu\nu}. \tag{5}\]
where \(e,p,T,s,n,\mu,\omega_{\mu\nu}\), and \(S^{\mu\nu}\) denote energy density, pressure, temperature, entropy density, particle number density, chemical potential, spin chemical potential, and spin density. The spin density is defined as
\[S^{\mu\nu}\equiv u_{\lambda}\Sigma^{\lambda\mu\nu} \tag{6}\]
with the fluid velocity \(u^{\mu}\). In analogy to the relationship between \(\mu\) and \(n\), we introduce the anti-symmetric spin chemical potential \(\omega_{\mu\nu}\) as the conjugate of \(S^{\mu\nu}\).
In general, the energy momentum tensor and particle current can be decomposed as
\[\Theta^{\mu\nu} = eu^{\mu}u^{\nu}-(p+\Pi)\Delta^{\mu\nu}+2h^{(\mu}u^{\nu)}+\pi^{ \mu\nu}+2q^{[\mu}u^{\nu]}+\phi^{\mu\nu}, \tag{7}\] \[j^{\mu} = nu^{\mu}+\nu^{\mu}, \tag{8}\]
where \(h^{\mu},\nu^{\mu}\), \(\Pi\), and \(\pi^{\mu\nu}\) stand for heat current, particle diffusion, bulk viscous pressure, and shear stress tensor, respectively, and the antisymmetric parts \(2q^{[\mu}u^{\nu]}\) and \(\phi^{\mu\nu}\) are related to the spin effects [45; 48]. As for the rank-3 spin tensor \(\Sigma^{\lambda\mu\nu}\), we have,
\[\Sigma^{\lambda\mu\nu}\:=\:u^{\lambda}S^{\mu\nu}+\Sigma^{\lambda\mu\nu}_{(1)}, \tag{9}\]
where the spin density \(S^{\mu\nu}\) defined in Eq.(6) has six independent degrees of freedom [45; 48].
In this work, we follow the power counting scheme of Refs. [48; 62; 64],
\[S^{\mu\nu}\sim O(1),\ \omega_{\mu\nu}\sim O(\partial),\ \Sigma^{\lambda\mu\nu}_{(1)} \sim O(\partial). \tag{10}\]
The spin density \(S^{\mu\nu}\) is chosen as the leading order in the gradient expansion. This corresponds to the case in which most of the particles in the system are polarized, i.e., the order of \(S^{\mu\nu}\) is taken to be the same as that of the number density \(n\). In contrast, in Refs. [45; 59] the authors have chosen a different power counting scheme, \(S^{\mu\nu}\sim O(\partial)\), \(\omega_{\mu\nu}\sim O(\partial)\), \(\Sigma^{\lambda\mu\nu}_{(1)}\sim O(\partial^{2})\).
Following [45; 48], it is straightforward to get the entropy production rate,
\[\partial_{\mu}\mathcal{S}^{\mu}_{\rm can} = (h^{\mu}-\frac{e+p}{n}\nu^{\mu})\left(\partial_{\mu}\frac{1}{T}+ \frac{1}{T}Du_{\mu}\right)+\frac{1}{T}\pi^{\mu\nu}\partial_{\mu}u_{\nu}-\frac{ 1}{T}\Pi(\partial\cdot u) \tag{11}\] \[+\frac{1}{T}\phi^{\mu\nu}(\partial_{\mu}u_{\nu}+2\omega_{\mu\nu} )+\frac{q^{\mu}}{T}\left(T\partial_{\mu}\frac{1}{T}-Du_{\mu}+4\omega_{\mu\nu} u^{\nu}\right)+O(\partial^{3}),\]
where \(\mathcal{S}^{\mu}_{\rm can}\) is the entropy density current. The second law of thermodynamics \(\partial_{\mu}\mathcal{S}^{\mu}_{\rm can}\geq 0\) gives the first order constitutive relations [45; 48],
\[h^{\mu}-\frac{e+p}{n}\nu^{\mu} = \kappa\Delta^{\mu\nu}\left[\frac{1}{T}\partial_{\nu}T-(u\cdot \partial)u_{\nu}\right], \tag{12}\] \[\pi^{\mu\nu} = 2\eta\partial^{<\mu}u^{\nu>},\] (13) \[\Pi = -\zeta\partial_{\mu}u^{\mu},\] (14) \[q^{\mu} = \lambda\Delta^{\mu\nu}\left[\frac{1}{T}\partial_{\nu}T+(u\cdot \partial)u_{\nu}-4\omega_{\nu\alpha}u^{\alpha}\right],\] (15) \[\phi^{\mu\nu} = 2\gamma_{s}\Delta^{\mu\rho}\Delta^{\nu\sigma}(\partial_{[\rho}u_ {\sigma]}+2\omega_{\rho\sigma}), \tag{16}\]
where the heat conductivity coefficient \(\kappa\), shear viscosity coefficient \(\eta\), and bulk viscosity \(\zeta\) also exist in conventional hydrodynamics, while \(\lambda\) and \(\gamma_{s}\) are new coefficients corresponding to the interchange of spin and orbital angular momentum. The entropy principle also requires that the transport coefficients
\[\kappa,\eta,\zeta,\lambda,\gamma_{s}>0, \tag{17}\]
are positive. Note that, as pointed out by Refs. [67; 162], some cross terms between the different dissipative currents may also exist due to the Onsager relation, but here we neglect them for simplicity.
Before ending this section, we would like to comment on the heat flow \(h^{\mu}\). Interestingly, when we set \(\nu^{\mu}=0\) and \(n=0\), we find that one cannot fix the expression for the heat current \(h^{\mu}\) in the first order of the gradient expansion. By using \(\Delta_{\nu\alpha}\partial_{\mu}\Theta^{\mu\nu}=0\) and Eqs. (4, 5), we find that \((\partial_{\mu}\frac{1}{T}+\frac{1}{T}Du_{\mu})\sim O(\partial^{2})\) when \(\nu^{\mu}=0\) and \(n=0\). In that case, the term \(h^{\mu}(\partial_{\mu}\frac{1}{T}+\frac{1}{T}Du_{\mu})\sim O(\partial^{3})\) is neglected in the entropy production rate (11), i.e., we cannot determine the expression of \(h^{\mu}\) from the entropy principle. A similar behavior was also observed in conventional hydrodynamics [163; 164].
## III Unstable and Acausal Modes in the First Order Spin Hydrodynamics
In this section, we analyze the causality and stability of the first order spin hydrodynamics. It is well known that conventional relativistic hydrodynamics in the Landau frame up to the first order in the gradient expansion is always acausal; see, e.g., Refs. [154; 155] as early pioneering works.
In the linear modes analysis, one considers perturbations \(\delta X\) of the hydrodynamical quantities \(X\) at equilibrium. By assuming \(\delta X\sim\delta\tilde{X}e^{i\omega t-ikx}\) with \(\delta\tilde{X}\) being constant in space-time, one can solve the dispersion relation \(\omega=\omega(k)\) from the conservation equations. In conventional hydrodynamics, the causality condition is usually given by [165; 166; 167; 157; 163; 168; 169; 170; 164]
\[\lim_{k\rightarrow\infty}\left|\text{Re}\ \frac{\omega}{k}\right|\leq 1, \tag{18}\]
where the condition (18) can also be written as \(\lim_{k\rightarrow\infty}\left|\text{Re}\ \frac{\partial\omega}{\partial k}\right|\leq 1\) in some literature [166; 167; 157]. However, the above condition is insufficient to guarantee causality. We need the additional condition [168]
\[\lim_{k\rightarrow\infty}\left|\frac{\omega}{k}\right|\ \text{is bounded}. \tag{19}\]
As pointed out in the early pioneering work [168], an unbounded \(\lim_{k\rightarrow\infty}\left|\frac{\omega}{k}\right|\) gives an infinite propagating speed of the perturbation, even if \(\omega\) is purely imaginary. One simple example is the non-relativistic diffusion equation, \(\partial_{t}n-D_{n}\partial_{x}^{2}n=0\) with \(D_{n}\) being the diffusion constant. It is easy to check that its dispersion relation gives \(\omega=iD_{n}k^{2}\), which satisfies condition (18) but does not obey condition (19). Therefore, a perturbation in the non-relativistic diffusion equation has an unlimited propagating speed, i.e., for any compactly supported initial value \(n(t_{0},x)\), the solution \(n(t_{0}+\Delta t,x)\) at \(x\rightarrow\infty\) is still influenced [169]. We emphasize that the conditions (18, 19) are necessary but not sufficient to guarantee that the theory is causal [170; 171]. One example is the transverse perturbation of an Eckart fluid with shear viscous tensor, whose dispersion relation satisfies the conditions (18, 19), but whose propagating velocity can exceed the speed of light (see Eqs. (47) and (48) in Ref. [155] for the perturbation equations and the propagating velocity).
Stability requires that the imaginary part of \(\omega=\omega(k)\) be positive, i.e.
\[\text{Im }\omega(k)>0. \tag{20}\]
Note that the case \(\text{Im }\omega=0\) corresponds to neutral equilibrium, which means the equilibrium state is not unique. In this work, we will not consider such special cases, and we only use the condition (20) to study the stability of spin hydrodynamics, as in Ref. [70].
It is necessary to study the causality and stability of the relativistic spin hydrodynamics in the first order. To see whether the first order spin hydrodynamics can be causal or not, we perform the linear modes analysis, i.e., we consider small perturbations on top of a static equilibrium. Following Refs. [154; 155], the static equilibrium background is assumed to be an irrotational global equilibrium state. We label quantities with the subscript \((0)\) as those at the global equilibrium state, while we use "\(\delta X\)" to denote the small perturbation of the quantity \(X\); e.g., \(e_{(0)}\) and \(\delta e\) stand for the energy density at the global equilibrium and the small perturbation of the energy density, respectively.
From now on, unless specified otherwise, we adopt the Landau frame, and neglect the conserved charge current \(j^{\mu}\).
We now consider the small perturbations on top of static equilibrium. Not all of the perturbations are independent of each other, and we can choose
\[\delta e,\ \delta u^{i},\ \delta S^{\mu\nu}, \tag{21}\]
as independent variables.
The variation of pressure \(\delta p\) and spin chemical potential \(\delta\omega^{\mu\nu}\) can be expressed as functions of \(\delta e\) and \(\delta S^{\mu\nu}\) through
\[\delta p=c_{s}^{2}\delta e,\quad\delta\omega^{0i}=\chi_{b}\delta S^{0i}+\chi_{ e}^{0i}\delta e,\quad\delta\omega^{ij}=\chi_{s}\delta S^{ij}+\chi_{e}^{ij} \delta e, \tag{22}\]
where the speed of sound \(c_{s}\), and \(\chi_{b}\), \(\chi_{s}\), \(\chi_{e}^{\mu\nu}\) are in general functions of thermodynamic variables. For simplicity, we take \(c_{s}\), \(\chi_{b}\), \(\chi_{s}\), \(\chi_{e}^{\mu\nu}\) as constants in the linear modes analysis. Note that \(\chi_{e}^{\mu\nu}\) comes from the anisotropy of the system. Under the assumption of an irrotational global equilibrium, from Eq. (15) the spin chemical potential vanishes, \(\omega_{(0)}^{\mu\nu}=0\). For
simplicity, we further choose \(S^{\mu\nu}_{(0)}=0\). The variation of the temperature \(\delta T\) can be obtained by the thermodynamics relations, with the help of Eqs.(4,5),
\[\delta T=\frac{T_{(0)}}{e_{(0)}+p_{(0)}}\left[\delta p-T_{(0)}S^{\mu\nu}_{(0)} \delta\left(\frac{\omega_{\mu\nu}}{T}\right)\right]=\frac{T_{(0)}c_{s}^{2} \delta e}{e_{(0)}+p_{(0)}}. \tag{23}\]
Next, we consider the variation of the conservation equations \(\partial_{\mu}\delta\Theta^{\mu\nu}=0\) and \(\partial_{\lambda}\delta J^{\lambda\mu\nu}=0\), where the perturbations \(\delta\Theta^{\mu\nu}\) and \(\delta J^{\lambda\mu\nu}\) can be derived from the constitutive relations in Eqs.(2,7,12-16). It is straightforward to obtain the linearized equations for the independent perturbations \(\delta e,\delta\vartheta^{i},\delta S^{\mu\nu}\),
\[0 = (\partial_{0}+\frac{1}{2}\lambda^{\prime}c_{s}^{2}\partial_{i} \partial^{i}+4\lambda\chi_{e}^{0i}\partial_{i})\delta e+(\partial_{i}+\frac{ 1}{2}\lambda^{\prime}\partial_{i}\partial_{0})\delta\vartheta^{i}+D_{b} \partial_{i}\delta S^{0i}, \tag{24}\] \[0 = (4\gamma_{s}\chi_{e}^{ij}\partial_{i}-c_{s}^{2}\partial^{j}- \frac{1}{2}c_{s}^{2}\lambda^{\prime}\partial_{0}\partial^{j}-4\lambda\chi_{e}^ {0j}\partial_{0})\delta e+(\gamma_{\parallel}-\gamma_{\perp}-\gamma^{\prime}) \partial^{j}\partial_{i}\delta\vartheta^{i}\] (25) \[+[\partial_{0}-\frac{1}{2}\lambda^{\prime}\partial_{0}\partial_{0 }+(\gamma_{\perp}+\gamma^{\prime})\partial^{i}\partial_{i}]\delta\vartheta^{j }-D_{b}\partial_{0}\delta S^{0j}+D_{s}\partial_{i}\delta S^{ij},\] \[0 = (\lambda^{\prime}c_{s}^{2}\partial^{i}+8\lambda\chi_{e}^{0i}) \delta e+\lambda^{\prime}\partial_{0}\delta\vartheta^{i}+(2D_{b}-\partial_{0 })\delta S^{0i},\] (26) \[0 = 8\gamma_{s}\chi_{e}^{ij}\delta e+2\gamma^{\prime}\partial^{i} \delta\vartheta^{j}-2\gamma^{\prime}\partial^{j}\delta\vartheta^{i}+(2D_{s}+ \partial_{0})\delta S^{ij}. \tag{27}\]
Here we introduce the following shorthand notations,
\[D_{s} \equiv 4\gamma_{s}\chi_{s},\quad D_{b}\equiv 4\lambda\chi_{b},\quad \delta\vartheta^{i}\equiv(e_{(0)}+p_{(0)})\delta u^{i},\quad\lambda^{\prime} \equiv\frac{2\lambda}{e_{(0)}+p_{(0)}},\] \[\gamma^{\prime} \equiv \frac{\gamma_{s}}{e_{(0)}+p_{(0)}},\quad\gamma_{\perp}\equiv \frac{\eta}{e_{(0)}+p_{(0)}},\quad\gamma_{\parallel}\equiv\frac{\frac{4}{3} \eta+\zeta}{e_{(0)}+p_{(0)}}. \tag{28}\]
In the linear modes analysis, the perturbations are assumed along the \(x\) direction only,
\[\delta e=\delta\tilde{e}e^{i\omega t-ikx},\ \delta\vartheta^{i}=\delta \tilde{\vartheta}^{i}e^{i\omega t-ikx},\ \delta S^{\mu\nu}=\delta\tilde{S}^{\mu\nu}e^{i\omega t-ikx}, \tag{29}\]
where \(\delta\tilde{e}\), \(\delta\tilde{\vartheta}^{i}\), and \(\delta\tilde{S}^{\mu\nu}\) are independent of space and time.
Inserting the perturbations in Eq. (29) into Eqs.(24-27) yields,
\[{\cal M}_{1}\delta\tilde{X}_{1}\:=\:0, \tag{30}\]
where
\[\delta\tilde{X}_{1}\equiv(\delta\tilde{e},\delta\tilde{\vartheta}^{x},\delta \tilde{S}^{0x},\delta\tilde{\vartheta}^{y},\delta\tilde{S}^{0y},\delta\tilde{ S}^{xy},\delta\tilde{\vartheta}^{z},\delta\tilde{S}^{0z},\delta\tilde{S}^{xz}, \delta\tilde{S}^{yz})^{\rm T}, \tag{31}\]
and
\[{\cal M}_{1}\equiv\left(\begin{array}{cccc}M_{1}&0&0&0\\ A_{1}&M_{2}&0&0\\ A_{2}&0&M_{2}&0\\ A_{3}&0&0&M_{3}\end{array}\right), \tag{32}\]
with
\[M_{1} \equiv \left(\begin{array}{ccc}i\omega+\frac{1}{2}\lambda^{\prime}c_{s}^ {2}k^{2}-4ik\lambda\chi_{e}^{0x}&\frac{1}{2}\lambda^{\prime}k\omega-ik&-ikD_{b} \\ \frac{1}{2}\lambda^{\prime}c_{s}^{2}k\omega-ikc_{s}^{2}-4i\omega\lambda\chi_{e }^{0x}&\gamma_{\parallel}k^{2}+i\omega+\frac{1}{2}\lambda^{\prime}\omega^{2}&- i\omega D_{b}\\ ik\lambda^{\prime}c_{s}^{2}+8\lambda\chi_{e}^{0x}&i\omega\lambda^{\prime}&2D_{b}-i \omega\end{array}\right), \tag{33}\] \[M_{2} \equiv \left(\begin{array}{ccc}k^{2}(\gamma_{\perp}+\gamma^{\prime})+ i\omega+\frac{1}{2}\lambda^{\prime}\omega^{2}&-i\omega D_{b}&-ikD_{s}\\ i\omega\lambda^{\prime}&2D_{b}-i\omega&0\\ 2ik\gamma^{\prime}&0&2D_{s}+i\omega\end{array}\right),\] (34) \[M_{3} \equiv 2D_{s}+i\omega. \tag{35}\]
The off-diagonal blocks \(A_{1}\), \(A_{2}\), \(A_{3}\) in the matrix \(\mathcal{M}_{1}\), whose expressions are shown in Appendix A, are irrelevant to the following discussion. The existence of non-trivial solutions of Eq. (30) requires
\[0=\det\mathcal{M}_{1}=\det M_{1}\cdot(\det M_{2})^{2}\cdot\det M_{3}. \tag{36}\]
From Eqs.(33-35), we find that Eq.(36) is a polynomial equation for two variables \(\omega\) and \(k\). Solving this equation gives the dispersion relations \(\omega=\omega(k)\).
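As an illustrative cross-check, the transverse block \(M_{2}\) in Eq. (34) is small enough to be handled symbolically. The following SymPy sketch (the variable names are placeholders for \(D_{b}\), \(D_{s}\), \(\lambda^{\prime}\), \(\gamma^{\prime}\), \(\gamma_{\perp}\)) builds \(M_{2}\), sets \(k=0\), and solves \(\det M_{2}=0\), reproducing the leading terms of the modes (39-41).

```python
import sympy as sp

w, k = sp.symbols('omega k')
Db, Ds, lamp, gperp, gprime = sp.symbols('D_b D_s lambda_p gamma_perp gamma_prime')

# Transverse block M_2 of Eq. (34)
M2 = sp.Matrix([
    [k**2*(gperp + gprime) + sp.I*w + sp.Rational(1, 2)*lamp*w**2, -sp.I*w*Db, -sp.I*k*Ds],
    [sp.I*w*lamp, 2*Db - sp.I*w, 0],
    [2*sp.I*k*gprime, 0, 2*Ds + sp.I*w],
])

# k -> 0 limit: expect omega = 0 (shear), omega = 2*I*D_s, and the two roots of Eq. (39)
print(sp.solve(M2.det().subs(k, 0), w))
```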
The \(\det M_{3}=0\) gives a non-hydrodynamic mode,
\[\omega = 2iD_{s}, \tag{37}\]
which corresponds to the spin relaxation [45; 59]. The stability condition (20) requires that \(D_{s}>0\).
The dispersion relations solved from \(\det M_{1}=0\) and \(\det M_{2}=0\) are lengthy and complicated, so here we only discuss them in the small \(k\) and large \(k\) limits to analyze stability and causality. In the \(k\to 0\) limit, the dispersion relations are
\[\omega = \pm c_{s}k+\frac{i}{2}(\gamma_{\parallel}\mp 4c_{s}\lambda\chi_{e }^{0x}D_{b}^{-1})k^{2}+O(k^{3}), \tag{38}\] \[\omega = (-i\pm\sqrt{4D_{b}\lambda^{\prime}-1})\lambda^{\prime-1}+O(k),\] (39) \[\omega = i\gamma_{\perp}k^{2}+O(k^{3}),\] (40) \[\omega = 2iD_{s}+O(k^{2}). \tag{41}\]
where the dispersion relations (38-39) and (39-41) are solved from \(\det M_{1}=0\) and \(\det M_{2}=0\), respectively. The modes in Eq. (38) and Eq. (40) correspond to the sound and shear modes in conventional hydrodynamics [154; 156; 157; 166], respectively. The stability condition
(20) for the dispersion relation in Eqs.(38-41) gives,
\[D_{s}>0,\ \lambda^{\prime}<0,\ D_{b}<-4c_{s}\lambda\gamma_{\parallel}^{-1}| \chi_{e}^{0x}|\leq 0. \tag{42}\]
However, the conditions (42) contradict the entropy principle in Eq. (17), i.e., \(\lambda^{\prime}=2\lambda/(e_{(0)}+p_{(0)})>0\) as defined in Eq. (28) with \(\lambda>0\) and \(e_{(0)}+p_{(0)}>0\).
In the \(k\rightarrow\infty\) limit, the dispersion relations become,
\[\omega = -4iD_{b}\gamma_{\parallel}^{-1}\lambda^{\prime-1}k^{-2}+O(k^{-3}), \tag{43}\] \[\omega = -ic_{s}^{2/3}\gamma_{\parallel}^{1/3}k^{4/3}+O(k),\] (44) \[\omega = (-1)^{1/6}c_{s}^{2/3}\gamma_{\parallel}^{1/3}k^{4/3}+O(k),\] (45) \[\omega = (-1)^{5/6}c_{s}^{2/3}\gamma_{\parallel}^{1/3}k^{4/3}+O(k),\] (46) \[\omega = -2iD_{b}+O(k^{-1}),\] (47) \[\omega = 2iD_{s}\gamma_{\perp}(\gamma^{\prime}+\gamma_{\perp})^{-1}+O(k^ {-1}),\] (48) \[\omega = \pm ik\sqrt{2\lambda^{\prime-1}(\gamma^{\prime}+\gamma_{\perp}) }+O(k^{0}), \tag{49}\]
where the first four modes come from \(\det M_{1}=0\), and the others are derived from \(\det M_{2}=0\). Obviously, Eq. (49) contains an unstable mode.
On the other hand, we also find that \(|\omega/k|\) in Eqs. (44-46) is unbounded, which violates the causality condition (19). We also note that Ref. [71] has analyzed the causality of the first order spin hydrodynamics in the small \(k\) limit.
We find that the first order spin hydrodynamics is acausal and unstable, similar to the conventional relativistic hydrodynamics in the Landau frame.
Before ending this section, we comment on the condition (42). We notice that the dispersion relations in Refs. [70; 71; 45] are different from ours in Eqs. (37-49). Let us explain what happens here. The energy momentum conservation equation \(\Delta_{\mu\alpha}\partial_{\nu}\Theta^{\mu\nu}=0\) gives the acceleration equation for the fluid velocity,
\[(u\cdot\partial)u^{\mu}=\frac{1}{T}\Delta^{\mu\nu}\partial_{\nu}T+O(\partial^ {2}). \tag{50}\]
In Refs. [70; 71; 45], the authors have replaced \((u\cdot\partial)u^{\mu}\) in \(q^{\mu}\) in Eq. (15) by Eq. (50) and obtained another expression for \(q^{\mu}\),
\[q^{\mu}=\lambda\left(\frac{2\Delta^{\mu\nu}\partial_{\nu}p}{e+p}-4\omega^{\mu \nu}u_{\nu}\right)+O(\partial^{2}). \tag{51}\]
Although \(q^{\mu}\) in Eq. (51) (also in Refs. [70; 71; 45]) is equivalent to our \(q^{\mu}\) in Eq. (15) up to the first order in the gradient expansion, we emphasize that these two \(q^{\mu}\) correspond to different hydrodynamic frames and lead to different hydrodynamic equations (see also Refs. [163; 172] for a general discussion of such replacements in relativistic hydrodynamics). Different from our Eqs. (43-49), the dispersion relations computed with the \(q^{\mu}\) in Eq. (51) are stable and satisfy the causality condition (18) in the rest frame under certain conditions. However, they do not obey the causality condition (19), and the whole theory becomes acausal; e.g., one mode in Refs. [70; 71; 45],
\[\omega=i(\gamma^{\prime}+\gamma_{\perp})k^{2}\text{ as }k\rightarrow\infty, \tag{52}\]
which breaks the causality condition (19).
We now conclude that the first order spin hydrodynamics around the static equilibrium state is unstable and acausal in the rest frame. We therefore do not need to discuss the stability and causality of the first order spin hydrodynamics in moving frames.
## IV Minimal causal spin hydrodynamics
In the previous section, we have shown that the first order spin hydrodynamics in the Landau frame is acausal and unstable. Since an acausal and unstable theory is not physical, we need to consider the spin hydrodynamics at second order in the gradient expansion. In this section, we follow the idea of the minimal causal extension in conventional hydrodynamics and implement it for spin hydrodynamics.
Up to now, there are two ways to establish causal hydrodynamics. The first way is to add second order corrections to the dissipative terms, as in the Muller-Israel-Stewart (MIS) theory [158; 159] or other related second order hydrodynamics. The MIS theory is a well-known causal conventional hydrodynamic theory up to \(O(\partial^{2})\) in the gradient expansion. Here, we consider a relativistic dissipative hydrodynamics with the bulk viscous pressure \(\Pi\) only as an example to explain why the MIS theory can be causal. The entropy current in the MIS theory is assumed to be [173; 159; 174]
\[\mathcal{S}^{\mu}=su^{\mu}-\frac{\mu}{T}\nu^{\mu}+\frac{1}{T}h^{\mu}-\frac{1} {2T}\beta_{0}u^{\mu}\Pi^{2}+..., \tag{53}\]
where the coefficient \(\beta_{0}>0\) and the ellipsis stands for other possible \(O(\partial^{2})\) terms. Then
the second law of thermodynamics \(\partial_{\mu}\mathcal{S}^{\mu}\geq 0\) leads to,
\[\tau_{\Pi}\frac{d}{d\tau}\Pi+\Pi=-\zeta\partial_{\mu}u^{\mu}+..., \tag{54}\]
where \(d/d\tau\equiv u^{\mu}\partial_{\mu}\), and \(\tau_{\Pi}=\zeta\beta_{0}>0\) is defined as the relaxation time for the bulk viscous pressure. If \(\tau_{\Pi}\to 0\), the hydrodynamic equations reduce to parabolic equations and become acausal. With a finite \(\tau_{\Pi}\), the hydrodynamic equations are hyperbolic and can be causal under certain conditions [156; 157; 175; 176]. In the linear modes analysis, the dispersion relations from Eq. (54) satisfy the causality conditions (18, 19) when the relaxation time \(\tau_{\Pi}\) is sufficiently large. The second order constitutive equations for the shear viscous tensor \(\pi^{\mu\nu}\), heat flow \(h^{\mu}\), and particle diffusion \(\nu^{\mu}\) can be obtained in a similar way; they are evolution equations that incorporate the respective relaxation times [159; 173; 174]. Apart from the MIS theory, many other second order causal conventional hydrodynamic theories, e.g., the Baier-Romatschke-Son-Starinets-Stephanov (BRSSS) theory [177] and the Denicol-Niemi-Molnar-Rischke (DNMR) theory [178], have been established. All of them contain terms proportional to the relaxation times and can be causal and stable under certain conditions [177; 179; 180]. Following these discussions, the key to recovering the causality of the theory is to introduce terms proportional to the relaxation times.
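As an illustration of this point, one can compare the parabolic diffusion equation with its telegraph-type extension \(\tau\,\partial_{t}^{2}n+\partial_{t}n-D\,\partial_{x}^{2}n=0\); the values of \(D\) and \(\tau\) below are arbitrary. The relaxation term bounds the asymptotic propagation speed by \(\sqrt{D/\tau}\), so the conditions (18, 19) can be satisfied once \(\tau\geq D\) (in units \(c=1\)).

```python
import numpy as np

D, tau = 0.3, 0.5                    # arbitrary illustrative values
k = np.logspace(0, 4, 5)

# parabolic (first order): i*omega + D*k^2 = 0  ->  omega = i D k^2
w_first = 1j * D * k**2
# hyperbolic (with relaxation): -tau*omega^2 + i*omega + D*k^2 = 0
w_relax = (1j + np.sqrt(4 * tau * D * k**2 - 1 + 0j)) / (2 * tau)

print("parabolic  |omega/k|    :", np.abs(w_first / k))       # grows without bound, violates (19)
print("relaxation |Re omega/k| :", np.abs(w_relax.real / k))  # saturates at sqrt(D/tau)
print("asymptotic speed sqrt(D/tau) =", np.sqrt(D / tau))
```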
Different from the above second order theories, the Bemfica-Disconzi-Noronha-Kovtun (BDNK) theory [163; 164; 165; 181; 182; 183] is a first order hydrodynamic theory in general (fluid) frames. Roughly speaking, one can choose certain preferred frames such that the causality and stability conditions are satisfied. Unfortunately, the commonly used Landau and Eckart frames are not the preferred fluid frames in the BDNK theory. Therefore, we will not discuss the spin hydrodynamics in the BDNK approach in this work. We also notice that a recent study in Ref. [184] discusses causal spin hydrodynamics at first order, similar to the BDNK theory.
In this work, we follow the basic idea of the MIS, BRSSS, and DNMR theories to construct a simplified causal spin hydrodynamics. Instead of considering the complete second order spin hydrodynamics, we only analyze the so-called "minimal" extended second order spin hydrodynamics. Here, the word "minimal" means that we concentrate on the essential second order terms in the gradient expansion needed to obtain a causal theory and neglect the other terms, which do not contribute to the dispersion relations in the linear modes analysis. As mentioned below Eq. (54), the key to obtaining a causal theory is to add terms proportional to the relaxation times, similar to \(\tau_{\Pi}d\Pi/d\tau\), on the left hand side of Eq. (54). Following this idea,
the constitutive equations (12-16) in the minimal extended causal spin hydrodynamics can be rewritten as,
\[\tau_{q}\Delta^{\mu\nu}\frac{d}{d\tau}q_{\nu}+q^{\mu} = \lambda(T^{-1}\Delta^{\mu\alpha}\partial_{\alpha}T+Du^{\mu}-4\omega ^{\mu\nu}u_{\nu}), \tag{55}\] \[\tau_{\phi}\Delta^{\mu\alpha}\Delta^{\nu\beta}\frac{d}{d\tau} \phi_{\alpha\beta}+\phi^{\mu\nu} = 2\gamma_{s}\Delta^{\mu\alpha}\Delta^{\nu\beta}(\partial_{[\alpha }u_{\beta]}+2\omega_{\alpha\beta}),\] (56) \[\tau_{\pi}\Delta^{\alpha<\mu}\Delta^{\nu>\beta}\frac{d}{d\tau} \pi_{\alpha\beta}+\pi^{\mu\nu} = 2\eta\partial^{<\mu}u^{\nu>},\] (57) \[\tau_{\Pi}\frac{d}{d\tau}\Pi+\Pi = -\zeta\partial_{\mu}u^{\mu}, \tag{58}\]
where \(\tau_{q},\tau_{\phi},\tau_{\pi}\) and \(\tau_{\Pi}\) are positive relaxation times for \(q^{\mu},\phi^{\mu\nu},\pi^{\mu\nu},\Pi\), respectively. Eqs. (57, 58) are the same as those in conventional hydrodynamics\({}^{1}\) [156; 157]. Recently, a second order spin hydrodynamics similar to the MIS theory has been introduced in Ref. [73]. Our minimal causal spin hydrodynamics can be regarded as a simplified version of it.
Footnote 1: Another kind of minimal causal theory is discussed in Refs. [156; 160], in which the extended dissipative terms cannot be determined from the entropy principle \(\partial_{\mu}\mathcal{S}^{\mu}\geq 0\).
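As a simple illustration of the relaxation-type equations (55-58), consider the bulk channel (58) for a homogeneous fluid element with a prescribed constant expansion rate \(\theta=\partial_{\mu}u^{\mu}\); the numerical values in the sketch below are arbitrary. The bulk pressure \(\Pi\) then relaxes exponentially, on the timescale \(\tau_{\Pi}\), toward its first order Navier-Stokes value \(-\zeta\theta\).

```python
zeta, tau_Pi, theta = 0.4, 0.6, 1.0      # arbitrary illustrative values
dt, steps = 1.0e-3, 5000

Pi = 0.0                                  # start away from the Navier-Stokes value
for _ in range(steps):
    # Eq. (58) for a homogeneous element: tau_Pi * dPi/dt + Pi = -zeta * theta
    Pi += dt * (-(Pi + zeta * theta) / tau_Pi)

print("Pi at t =", steps * dt, ":", Pi)
print("Navier-Stokes value -zeta*theta:", -zeta * theta)
```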
We also notice that in Ref. [60], the authors have proposed the same expressions for \(q^{\mu}\) and \(\phi^{\mu\nu}\) as presented in Eqs. (55, 56) for the minimal causal spin hydrodynamics.
Finally, we comment on the relaxation times \(\tau_{q}\) and \(\tau_{\phi}\). Different from the total particle number or total energy of the fluid, the total spin or polarization is not a conserved quantity, i.e., the spin density should and will decay with time. Therefore, the two modes described by Eqs. (39, 41) or Eqs. (47, 48) can also be interpreted as relaxation modes for the spin density. Similarly, we can interpret \(\tau_{q}\) and \(\tau_{\phi}\) as the relaxation times for the sources that induce spin generation.
## V Causality and stability analysis for minimal causal spin hydrodynamics
In this section, we analyze the causality and stability of the minimal causal spin hydrodynamics. We use notations similar to those in Sec. III, i.e., for a physical quantity \(X\), we use \(X_{(0)}\) and \(\delta X\) to denote \(X\) at the global equilibrium state and its small perturbation, respectively. We adopt the independent perturbations as
\[\delta e,\ \delta u^{i},\ \delta S^{\mu\nu},\ \delta\Pi,\ \delta\pi^{ij}, \tag{59}\]
where \(\delta\pi^{i}_{\ i}=0\) and \(\delta\pi^{ij}=\delta\pi^{ji}\).
We first start from the spin hydrodynamics in the rest frame, i.e., \(u^{\mu}_{(0)}=(1,0)\). The conservation equations \(\partial_{\mu}\delta\Theta^{\mu\nu}=0\) and \(\partial_{\lambda}\delta J^{\lambda\mu\nu}=0\) with the constitutive equations (55 - 58) read,
\[0 = (\lambda^{\prime}c_{s}^{2}\partial^{i}+8\lambda\chi_{e}^{0i}) \delta e+\lambda^{\prime}\partial_{0}\delta\vartheta^{i}+(2D_{b}-\tau_{q} \partial_{0}\partial_{0}-\partial_{0})\delta S^{0i}, \tag{60}\] \[0 = 8\gamma_{s}\chi_{e}^{ij}\delta e+2\gamma^{\prime}(\partial^{i} \delta\vartheta^{j}-\partial^{j}\delta\vartheta^{i})+(\tau_{\phi}\partial_{0} \partial_{0}+\partial_{0}+2D_{s})\delta S^{ij},\] (61) \[0 = \tau_{\pi}\partial_{0}\delta\pi^{ij}+\delta\pi^{ij}-\gamma_{ \perp}(\partial^{i}\delta\vartheta^{j}+\partial^{j}\delta\vartheta^{i}-\frac {2}{3}g^{ij}\partial_{k}\delta\vartheta^{k}),\] (62) \[0 = \tau_{\Pi}\partial_{0}\delta\Pi+\delta\Pi+(\gamma_{\parallel}- \frac{4}{3}\gamma_{\perp})\partial_{i}\delta\vartheta^{i},\] (63) \[0 = \partial_{0}\delta e+\partial_{i}\delta\vartheta^{i}+\frac{1}{2} \partial_{0}\partial_{i}\delta S^{0i},\] (64) \[0 = -c_{s}^{2}\partial^{j}\delta e+\partial_{0}\delta\vartheta^{j}- \partial^{j}\delta\Pi+\partial_{i}\delta\pi^{ij}-\frac{1}{2}\partial_{0} \partial_{0}\delta S^{0j}-\frac{1}{2}\partial_{0}\partial_{i}\delta S^{ij}, \tag{65}\]
where \(\chi_{b},\chi_{e}^{\mu\nu},\chi_{s},D_{s},D_{b},\delta\vartheta^{i},\lambda^{ \prime},\gamma^{\prime},\gamma_{\perp},\gamma_{\parallel}\) are defined in Eqs.(22,28) and we have used the spin evolution equation (3) to replace \(\delta q^{i}\) and \(\delta\phi^{ij}\) by \(\delta S^{\mu\nu}\),
\[\delta q^{i}=\frac{1}{2}\partial_{0}\delta S^{0i},\quad\delta\phi^{ij}=-\frac{ 1}{2}\partial_{0}\delta S^{ij}. \tag{66}\]
### Zero modes in rest frame
Following the conventional hydrodynamics, we first consider a fluid with the dissipative terms \(q^{\mu}\) and \(\phi^{\mu\nu}\) only for simplicity, i.e., we drop Eqs. (62, 63) and take \(\delta\Pi=0\) and \(\delta\pi^{ij}=0\) in Eqs. (60, 61, 64, 65). We consider the plane wave solutions in Eq. (29) and derive
\[\mathcal{M}^{\prime}_{2}\delta\tilde{X}^{\prime}_{2} = 0, \tag{67}\]
where \(\delta\tilde{X}^{\prime}_{2}\) and \(\mathcal{M}^{\prime}_{2}\) are given by
\[\delta\tilde{X}^{\prime}_{2} \equiv (\delta\tilde{e},\delta\tilde{\vartheta}^{x},\delta\tilde{S}^{0x },\delta\tilde{\vartheta}^{y},\delta\tilde{S}^{0y},\delta\tilde{S}^{xy}, \delta\tilde{\vartheta}^{z},\delta\tilde{S}^{0z},\delta\tilde{S}^{xz},\delta \tilde{S}^{yz})^{\rm T}, \tag{68}\]
and
\[\mathcal{M}^{\prime}_{2} \equiv \left(\begin{array}{cccc}M^{\prime}_{4}&0&0&0\\ A^{\prime}_{4}&M^{\prime}_{5}&0&0\\ A^{\prime}_{5}&0&M^{\prime}_{5}&0\\ A^{\prime}_{6}&0&0&M^{\prime}_{6}\end{array}\right), \tag{69}\]
with
\[M_{4}^{\prime} = \left(\begin{array}{ccc}i\omega&-ik&\frac{1}{2}\omega k\\ -ikc_{s}^{2}&i\omega&\frac{1}{2}\omega^{2}\\ \lambda^{\prime}c_{s}^{2}ik+8\lambda\chi_{e}^{0x}&\lambda^{\prime}i\omega&2D_{b }+\tau_{q}\omega^{2}-i\omega\end{array}\right), \tag{70}\] \[M_{5}^{\prime} = \left(\begin{array}{ccc}i\omega&\frac{1}{2}\omega^{2}&-\frac{1} {2}\omega k\\ \lambda^{\prime}i\omega&2D_{b}+\tau_{q}\omega^{2}-i\omega&0\\ 2\gamma^{\prime}ik&0&-\tau_{\phi}\omega^{2}+i\omega+2D_{s}\end{array}\right),\] (71) \[M_{6}^{\prime} = -\tau_{\phi}\omega^{2}+i\omega+2D_{s}. \tag{72}\]
The off-diagonal matrices \(A_{4,5,6}^{\prime}\) are given in Appendix A.
The dispersion relations \(\omega=\omega(k)\) are derived from
\[\det{\cal M}_{2}^{\prime}=\det M_{4}^{\prime}\cdot(\det M_{5}^{ \prime})^{2}\cdot\det M_{6}^{\prime}=0. \tag{73}\]
We find that there exists a zero mode coming from the equation \(\det M_{5}^{\prime}=0\). We will discuss the zero modes at the end of this subsection. Now, let us focus on the nonzero modes. The \(\det M_{6}^{\prime}=0\) gives two non-hydrodynamic modes
\[\omega=\frac{1}{2\tau_{\phi}}(i\pm\sqrt{8D_{s}\tau_{\phi}-1}). \tag{74}\]
From \(\det M_{4}^{\prime}=0\) and \(\det M_{5}^{\prime}=0\), we obtain the dispersion relation in small \(k\) limit,
\[\omega = \pm c_{s}k\mp 2ic_{s}\lambda\chi_{e}^{0x}D_{b}^{-1}k^{2}+O(k^{3}), \tag{75}\] \[\omega = \left[i\pm\sqrt{-4D_{b}(2\tau_{q}-\lambda^{\prime})-1}\right](2 \tau_{q}-\lambda^{\prime})^{-1}+O(k),\] (76) \[\omega = \frac{1}{2\tau_{\phi}}(i\pm\sqrt{8D_{s}\tau_{\phi}-1})+O(k), \tag{77}\]
and, in large \(k\) limit,
\[\omega = \pm k\sqrt{\frac{c_{s}^{2}(3\lambda^{\prime}+2\tau_{q})}{2\tau_{ q}-\lambda^{\prime}}}+\frac{4i\lambda^{\prime}}{(2\tau_{q}-\lambda^{\prime})(2 \tau_{q}+3\lambda^{\prime})}\mp\frac{8\lambda\chi_{e}^{0x}}{c_{s}\sqrt{\left( \lambda^{\prime}-2\tau_{q}\right)(3\lambda^{\prime}+2\tau_{q})}}+O(k^{-1}), \tag{78}\] \[\omega = \frac{i\pm\sqrt{-1-4D_{b}(2\tau_{q}+3\lambda^{\prime})}}{2\tau_{ q}+3\lambda^{\prime}}+O(k^{-1}),\] (79) \[\omega = \pm\sqrt{\frac{2\gamma^{\prime}\tau_{q}}{(2\tau_{q}-\lambda^{ \prime})\tau_{\phi}}}k+i\frac{[\tau_{q}(2\tau_{q}-\lambda^{\prime})+\lambda^{ \prime}\tau_{\phi}]}{2\tau_{q}\tau_{\phi}(2\tau_{q}-\lambda^{\prime})}+O(k^{-1}),\] (80) \[\omega = \frac{i\pm\sqrt{-1-8D_{b}\tau_{q}}}{2\tau_{q}}+O(k^{-1}). \tag{81}\]
The causality conditions (18, 19) require
\[0\leq\frac{c_{s}^{2}(3\lambda^{\prime}+2\tau_{q})}{2\tau_{q}-\lambda^{\prime}} \leq 1,\ 0\leq\frac{2\gamma^{\prime}\tau_{q}}{(2\tau_{q}-\lambda^{\prime})\tau_{ \phi}}\leq 1, \tag{82}\]
which implies that the relaxation times \(\tau_{q},\tau_{\phi}\) cannot be arbitrarily small. This is consistent with the discussion in Sec. IV.
The stability condition (20) leads to
\[\tau_{q}>\lambda^{\prime}/2,\ D_{s}>0,\ D_{b}<0,\ \chi_{e}^{0x}=0, \tag{83}\]
where \(\chi_{e}^{0x}=0\) comes from the stability of the sound mode (75). Although the conditions in Eq. (83) are derived from the small \(k\) and large \(k\) limits only, we can implement the Routh-Hurwitz criterion [163; 164; 165; 182; 186; 187] to prove that the conditions (83) are sufficient and necessary for stability, i.e., if (83) are satisfied, then \(\text{Im}\ \omega>0\) for all \(k\). Details of the proof can be found in Appendix B.
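For completeness, the Routh-Hurwitz test itself is straightforward to implement numerically once the dispersion polynomial is brought to a real-coefficient form, e.g., via the substitution \(\omega=-i\Gamma\), under which \(\mathrm{Im}\,\omega>0\) for all modes corresponds to all roots \(\Gamma\) lying in the left half plane. The generic helper below is only an illustrative sketch and is not a substitute for the algebraic proof in Appendix B.

```python
import numpy as np

def hurwitz_stable(coeffs):
    """Routh-Hurwitz test: all roots of a0*x^n + ... + an (real coefficients)
    lie in the left half plane iff all leading principal minors of the
    Hurwitz matrix are positive (assuming a0 > 0)."""
    a = np.asarray(coeffs, dtype=float)
    if a[0] < 0:
        a = -a                          # normalize the leading coefficient
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            idx = 2 * (j + 1) - (i + 1)  # Hurwitz matrix entry H_ij = a_{2j-i}
            if 0 <= idx <= n:
                H[i, j] = a[idx]
    minors = [np.linalg.det(H[:m, :m]) for m in range(1, n + 1)]
    return all(d > 0 for d in minors)

# toy checks: (x+1)(x+2) is stable, x^2 - x + 2 is not
print(hurwitz_stable([1, 3, 2]), hurwitz_stable([1, -1, 2]))
```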
Finally, let us comment on the zero modes. The zero modes, which give \(\omega=0\), come from Eq. (65) with vanishing \(\delta\Pi,\delta\pi^{ij}\). Generally, zero modes in the linear modes analysis do not mean that the perturbations do not decay with time; rather, they indicate that nonlinear modes exist in Eq. (65) with vanishing \(\delta\Pi,\delta\pi^{ij}\). To continue our analysis, we need to include non-vanishing \(\delta\Pi,\delta\pi^{ij}\).
### Causality analysis in the rest frame
Next, we substitute the plane wave solutions Eq.(29) and
\[\delta\Pi=\delta\tilde{\Pi}e^{i\omega t-ikx},\ \delta\pi^{ij}=\delta\tilde{\pi} ^{ij}e^{i\omega t-ikx}, \tag{84}\]
with \(\delta\tilde{\Pi}\) and \(\delta\tilde{\pi}^{ij}\) being constants, into Eqs. (60-65), and obtain the matrix equation
\[\mathcal{M}_{2}\delta\tilde{X}_{2}\:=\:0, \tag{85}\]
where \(\delta\tilde{X}_{2}\) and \(\mathcal{M}_{2}\) are given by
\[\delta\tilde{X}_{2} \equiv (\delta\tilde{e},\delta\tilde{\vartheta}^{x},\delta\tilde{S}^{0x },\delta\tilde{\Pi},\delta\tilde{\pi}^{xx},\delta\tilde{\vartheta}^{y},\delta \tilde{S}^{0y},\delta\tilde{S}^{xy},\delta\tilde{\pi}^{xy} \tag{86}\] \[,\delta\tilde{\vartheta}^{z},\delta\tilde{S}^{0z},\delta\tilde{ S}^{xz},\delta\tilde{\pi}^{xz},\delta\tilde{S}^{yz},\delta\tilde{\pi}^{yy}, \delta\tilde{\pi}^{yz})^{\text{T}}.\]
\[\mathcal{M}_{2} = \left(\begin{array}{cccc}M_{4}&0&0&0\\ A_{4}&M_{5}&0&0\\ A_{5}&0&M_{5}&0\\ A_{6}&0&0&M_{6}\end{array}\right), \tag{87}\]
with
\[M_{4} = \left(\begin{array}{cccc}i\omega&-ik&\frac{1}{2}\omega k&0&0\\ -ikc_{s}^{2}&i\omega&\frac{1}{2}\omega^{2}&-ik&-ik\\ ik\lambda^{\prime}c_{s}^{2}+8\lambda\chi_{e}^{0x}&i\omega\lambda^{\prime}&2D_{ b}+\tau_{q}\omega^{2}-i\omega&0&0\\ 0&-ik(\gamma_{\parallel}-\frac{4}{3}\gamma_{\perp})&0&i\omega\tau_{\Pi}+1&0 \\ 0&-\frac{4}{3}ik\gamma_{\perp}&0&0&i\omega\tau_{\Pi}+1\end{array}\right), \tag{88}\] \[M_{5} = \left(\begin{array}{cccc}2ik\gamma^{\prime}&0&-\tau_{\phi} \omega^{2}+i\omega+2D_{s}&0\\ i\omega&\frac{1}{2}\omega^{2}&-\frac{1}{2}\omega k&-ik\\ i\omega\lambda^{\prime}&2D_{b}+\tau_{q}\omega^{2}-i\omega&0&0\\ -ik\gamma_{\perp}&0&0&i\omega\tau_{\Pi}+1\end{array}\right),\] (89) \[M_{6} = \left(\begin{array}{cccc}-\tau_{\phi}\omega^{2}+i\omega+2D_{s} &0&0\\ 0&i\omega\tau_{\Pi}+1&0\\ 0&0&i\omega\tau_{\Pi}+1\end{array}\right). \tag{90}\]
The submatrices \(A_{4,5,6}\) in Eq.(87) are shown in Appendix A. If there exist nonzero plane wave solutions, we have
\[0=\det\mathcal{M}_{2}=\det M_{4}\cdot(\det M_{5})^{2}\cdot\det M_{6}. \tag{91}\]
We observe that the zero modes in Eq. (65) disappear, which indicates that the current analysis is consistent with the assumption of linear response. The dispersion relations \(\omega=\omega(k)\) are the solutions of the polynomial equation (91).
The \(\det M_{6}=0\) gives,
\[\omega = \frac{i}{\tau_{\pi}}, \tag{92}\] \[\omega = \frac{1}{2\tau_{\phi}}(i\pm\sqrt{8D_{s}\tau_{\phi}-1}), \tag{93}\]
which are non-propagating modes or non-hydrodynamic modes.
In \(k\to 0\) limit, the \(\det M_{4}=0\) and \(\det M_{5}=0\) give
\[\omega = \frac{i}{\tau_{\pi}}+O(k), \tag{94}\] \[\omega = \frac{i}{\tau_{\Pi}}+O(k),\] (95) \[\omega = \pm c_{s}k+\frac{i}{2}(\gamma_{\parallel}\mp 4c_{s}\lambda\chi_{e }^{0x}D_{b}^{-1})k^{2}+O(k^{3}),\] (96) \[\omega = \left[i\pm\sqrt{-4D_{b}(2\tau_{q}-\lambda^{\prime})-1}\right](2 \tau_{q}-\lambda^{\prime})^{-1}+O(k),\] (97) \[\omega = i\gamma_{\perp}k^{2}+O(k^{3}),\] (98) \[\omega = \frac{1}{2\tau_{\phi}}(i\pm\sqrt{8D_{s}\tau_{\phi}-1})+O(k), \tag{99}\]
where Eq.(94) and Eq.(97) are doubly degenerate. In large \(k\) limit, we have
\[\omega = -4iD_{b}\gamma_{\parallel}^{-1}\lambda^{\prime-1}k^{-2}+O(k^{-3}), \tag{100}\] \[\omega = \frac{3i\gamma_{\parallel}}{\tau_{\pi}(3\gamma_{\parallel}-4 \gamma_{\perp})+4\gamma_{\perp}\tau_{\Pi}}+O(k^{-1}),\] (101) \[\omega = c_{1}k+i\frac{c_{2}}{c_{3}}+O(k^{-1}),\] (102) \[\omega = \pm\sqrt{\frac{2\tau_{q}(\gamma^{\prime}\tau_{\pi}+\gamma_{\perp }\tau_{\phi})}{(2\tau_{q}-\lambda^{\prime})\tau_{\pi}\tau_{\phi}}}k+ic_{4}+O(k ^{-1}),\] (103) \[\omega = \frac{i\pm\sqrt{-1-8D_{b}\tau_{q}}}{2\tau_{q}}+O(k^{-1}),\] (104) \[\omega = \frac{i(\gamma^{\prime}+\gamma_{\perp})\pm c_{5}}{2(\gamma^{ \prime}\tau_{\pi}+\gamma_{\perp}\tau_{\phi})}+O(k^{-1}), \tag{105}\]
where \(c_{1}\) is
\[c_{1} = \sqrt{\frac{b_{1}^{1/2}\pm(b_{1}-b_{2})^{1/2}}{6(2\tau_{q}- \lambda^{\prime})\tau_{\pi}\tau_{\Pi}}},\mbox{or}\ -\sqrt{\frac{b_{1}^{1/2}\pm(b_{1}-b_{2})^{1/2}}{6(2\tau_{q}- \lambda^{\prime})\tau_{\pi}\tau_{\Pi}}}, \tag{106}\] \[b_{1} \equiv \{8\gamma_{\perp}\tau_{q}\tau_{\Pi}+\tau_{\pi}[2\tau_{q}(3\gamma _{\parallel}-4\gamma_{\perp})+3\tau_{\Pi}c_{s}^{2}(3\lambda^{\prime}+2\tau_{ q})]\}^{2},\] (107) \[b_{2} \equiv 12c_{s}^{2}\lambda^{\prime}(2\tau_{q}-\lambda^{\prime})\tau_{ \pi}\tau_{\Pi}[\tau_{\pi}(3\gamma_{\parallel}-4\gamma_{\perp})+4\gamma_{\perp }\tau_{\Pi}], \tag{108}\]
and
\[c_{2} = -3c_{1}^{4}[2\tau_{\pi}\tau_{\Pi}+(2\tau_{q}-\lambda^{\prime})( \tau_{\pi}+\tau_{\Pi})]+48c_{1}^{3}\lambda\chi_{e}^{0x}\tau_{\pi}\tau_{\Pi}-3c _{s}^{2}\gamma_{\parallel}\lambda^{\prime} \tag{109}\] \[+c_{1}^{2}\{6\gamma_{\parallel}\tau_{q}+(6\gamma_{\parallel}-8 \gamma_{\perp})\tau_{\pi}+8\gamma_{\perp}\tau_{\Pi}+3c_{s}^{2}[2\tau_{\pi} \tau_{\Pi}+(3\lambda^{\prime}+2\tau_{q})(\tau_{\pi}+\tau_{\Pi})]\}\] \[-8c_{1}\lambda\chi_{e}^{0x}[(3\gamma_{\parallel}-4\gamma_{\perp}) \tau_{\pi}+4\gamma_{\perp}\tau_{\Pi}],\] \[c_{3} = -2c_{s}^{2}\lambda^{\prime}[(3\gamma_{\parallel}-4\gamma_{\perp })\tau_{\pi}+4\gamma_{\perp}\tau_{\Pi}]-18c_{1}^{4}(2\tau_{q}-\lambda^{ \prime})\tau_{\pi}\tau_{\Pi}\]
\[+4c_{1}^{2}[3c_{s}^{2}(3\lambda^{\prime}+2\tau_{q})\tau_{\pi}\tau_{ \Pi}+2(3\gamma_{\parallel}-4\gamma_{\perp})\tau_{q}\tau_{\pi}+8\gamma_{\perp} \tau_{q}\tau_{\Pi}], \tag{110}\] \[c_{4} = \frac{\gamma_{\perp}[\tau_{q}(2\tau_{q}-\lambda^{\prime})+\lambda ^{\prime}\tau_{\pi}]\tau_{\phi}^{2}+\gamma^{\prime}\tau_{\pi}^{2}[\tau_{q}(2 \tau_{q}-\lambda^{\prime})+\lambda^{\prime}\tau_{\phi}]}{2(2\tau_{q}-\lambda^ {\prime})\tau_{q}\tau_{\pi}\tau_{\phi}(\gamma^{\prime}\tau_{\pi}+\gamma_{\perp} \tau_{\phi})},\] (111) \[c_{5} = \sqrt{8D_{s}\gamma_{\perp}(\gamma^{\prime}\tau_{\pi}+\gamma_{ \perp}\tau_{\phi})-(\gamma^{\prime}+\gamma_{\perp})^{2}}. \tag{112}\]
The \(\det M_{4}=0\) gives Eqs.(94-97) and Eqs.(100-102), while \(\det M_{5}=0\) gives Eqs.(94,97-99) and Eqs.(103-105).
Now, let us analyze the causality conditions. From Eqs. (100-105), we find that all modes in the minimal causal spin hydrodynamics correspond to a finite propagation speed, since \(|\omega/k|\) is bounded as \(k\to+\infty\). Imposing Eq. (18) on the propagating modes in Eqs. (102-103), causality requires
\[0\leq\frac{b_{1}^{1/2}\pm(b_{1}-b_{2})^{1/2}}{6(2\tau_{q}-\lambda^{\prime}) \tau_{\pi}\tau_{\Pi}}\leq 1\ \text{and}\ 0\leq\frac{2\tau_{q}(\gamma^{\prime}\tau_{\pi}+\gamma_{\perp}\tau_{\phi})}{ (2\tau_{q}-\lambda^{\prime})\tau_{\pi}\tau_{\phi}}\leq 1, \tag{113}\]
which implies that the relaxation times \(\tau_{q},\tau_{\pi},\tau_{\Pi},\tau_{\phi}\) cannot be arbitrarily small, consistent with the discussion in Sec. IV. We also notice that the causality conditions (113) reduce to Eq. (82) in the smooth limit \(\tau_{\pi},\tau_{\Pi},\gamma_{\perp},\gamma_{\parallel}\to 0\).
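As a concrete numerical illustration, the two combinations entering Eq. (113) can be evaluated directly; the snippet below uses the parameter set introduced later in Eq. (119), in units \(\tau_{\Pi}=1\), and confirms that both ratios lie between 0 and 1.

```python
import math

# Parameter set of Eq. (119), in units tau_Pi = 1
cs2, tq, lamp = 1.0 / 3.0, 10.0, 0.5
tpi, tPi, tphi = 4.0, 1.0, 2.0
gpar, gperp, gprime = 0.7, 0.5, 1.0

b1 = (8*gperp*tq*tPi + tpi*(2*tq*(3*gpar - 4*gperp) + 3*tPi*cs2*(3*lamp + 2*tq)))**2
b2 = 12*cs2*lamp*(2*tq - lamp)*tpi*tPi*(tpi*(3*gpar - 4*gperp) + 4*gperp*tPi)

den = 6*(2*tq - lamp)*tpi*tPi
ratio_plus  = (math.sqrt(b1) + math.sqrt(b1 - b2)) / den   # first combination of Eq. (113), "+" branch
ratio_minus = (math.sqrt(b1) - math.sqrt(b1 - b2)) / den   # "-" branch
ratio_trans = 2*tq*(gprime*tpi + gperp*tphi) / ((2*tq - lamp)*tpi*tphi)

print(ratio_plus, ratio_minus, ratio_trans)                # approx 0.57, 0.003, 0.64
print(all(0 <= r <= 1 for r in (ratio_plus, ratio_minus, ratio_trans)))
```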
### Non-trivial stability conditions in rest frame
The requirement of stability is non-trivial. Inserting Eq.(20) into Eqs.(92-105) yields,
\[\tau_{q} > \lambda^{\prime}/2, \tag{114}\] \[D_{s} > 0,\quad D_{b}<-4c_{s}\lambda\gamma_{\parallel}^{-1}|\chi_{e}^{0x }|\leq 0,\] (115) \[b_{1} > b_{2}>0,\quad\frac{c_{2}}{c_{3}}>0. \tag{116}\]
The stability condition \(\lambda^{\prime}<0\) in Eq. (42) for the first order spin hydrodynamics becomes \(\lambda^{\prime}<2\tau_{q}\) in Eq. (114). When the relaxation time \(\tau_{q}\) is sufficiently large, the inequality \(\lambda^{\prime}<2\tau_{q}\) is satisfied, and the previously unstable modes are removed. We also notice that the conditions (114, 115) agree with Eq. (83) except for \(\chi_{e}^{0x}=0\); this strong constraint is relaxed in the present case.
The satisfaction of the stability condition (115) relies on the specific equation of state governing \(S^{\mu\nu}\) and \(\omega^{\mu\nu}\). In Ref.[70], it was found that the stability condition (115) cannot be satisfied if \(\delta S^{\mu\nu}\sim T^{2}\delta\omega^{\mu\nu}\)[62; 64]. In more general cases, we can have
\[u_{\mu}\delta\omega^{\mu\nu} = \chi_{1}u_{\mu}\delta S^{\mu\nu}, \tag{117}\]
\[\Delta^{\mu\alpha}\Delta^{\nu\beta}\delta\omega_{\alpha\beta} = (\chi_{1}+\chi_{2})\Delta^{\mu\alpha}\Delta^{\nu\beta}\delta S_{ \alpha\beta}, \tag{118}\]
where \(\chi_{1,2}\) are susceptibilities corresponding to \(S^{0i}\) and \(S^{ij}\) in the rest frame. In this case, according to the definitions in Eqs. (22, 28), the stability condition (115) is satisfied if \(\chi_{2}>-\chi_{1}>0\). Details can be found in Appendix C. Note that the parameters \(\chi_{1}\) and \(\chi_{2}\) strongly depend on the equation of state for \(S^{\mu\nu}\) and \(\omega^{\mu\nu}\). To determine the equation of state, we need microscopic theories, which we leave for future studies.
Another remarkable observation concerning the stability conditions is that there exist unstable modes at finite \(k\). Eqs. (114, 115, 116) are the stability conditions in the small \(k\) and large \(k\) limits only; we still need to study \(\mathrm{Im}\,\omega\) in the finite \(k\) region. One analytic method, the Routh-Hurwitz criterion [163; 164; 165; 182; 186; 187], is usually implemented to study the sign of \(\mathrm{Im}\,\omega\) at finite \(k\). Unfortunately, \(\det\mathcal{M}_{2}\) cannot be reduced to a form to which the Routh-Hurwitz criterion applies; thus, we analyze the behavior of \(\mathrm{Im}\,\omega\) numerically instead. For finite \(k\), we find that \(\mathrm{Im}\,\omega\) can be negative even if all the conditions (114, 115, 116) are satisfied. In Fig. 1, we present an example showing that \(\mathrm{Im}\,\omega\) can be negative for finite \(k\). We choose the parameters as
\[c_{s}=\frac{1}{\sqrt{3}}, \lambda\chi_{e}^{0x}=\frac{1}{8},\ \tau_{\pi}=4\tau_{\Pi},\ \tau_{\phi}=2\tau_{\Pi},\ \tau_{q}=10\tau_{\Pi},\ \lambda^{\prime}=\frac{1}{2}\tau_{\Pi},\] \[\gamma_{\parallel}=\frac{7}{10}\tau_{\Pi}, \gamma_{\perp}=\frac{1}{2}\tau_{\Pi},\ \gamma^{\prime}=\tau_{\Pi},\ D_{s}=\frac{1}{2\tau_{\Pi}},\ D_{b}=-\frac{1}{2 \tau_{\Pi}}. \tag{119}\]
It is straightforward to verify that the parameters in Eq. (119) satisfy the stability and causality constraints (18, 19, 20). We pick up two modes derived from \(\det M_{4}=0\). We observe that \(\mathrm{Im}\,\omega\) is positive in both the small and large \(k\) limits, while it becomes negative near \(k\tau_{\Pi}\sim 0.5\) and \(k\tau_{\Pi}\sim 10.0\), i.e., the modes are unstable in the finite \(k\) region.
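This finite-\(k\) behavior can be probed with a short numerical scan. The sketch below (in units \(\tau_{\Pi}=1\); the variable names are placeholders) builds \(M_{4}\) of Eq. (88) with the parameters of Eq. (119), extracts \(\det M_{4}\) as a polynomial in \(\omega\) at fixed \(k\), and reports the smallest imaginary part among its roots; a negative value signals an unstable mode at that \(k\).

```python
import numpy as np
import sympy as sp

w = sp.symbols('omega')

# Parameters of Eq. (119), in units tau_Pi = 1
cs2  = sp.Rational(1, 3)          # c_s^2
lchi = sp.Rational(1, 8)          # lambda * chi_e^{0x}
tPi, tq, lamp = 1, 10, sp.Rational(1, 2)
gpar, gperp = sp.Rational(7, 10), sp.Rational(1, 2)
Db = -sp.Rational(1, 2)

def min_im_omega(kval):
    k = sp.nsimplify(kval)
    # Longitudinal block M_4 of Eq. (88)
    M4 = sp.Matrix([
        [sp.I*w, -sp.I*k, w*k/2, 0, 0],
        [-sp.I*k*cs2, sp.I*w, w**2/2, -sp.I*k, -sp.I*k],
        [sp.I*k*lamp*cs2 + 8*lchi, sp.I*w*lamp, 2*Db + tq*w**2 - sp.I*w, 0, 0],
        [0, -sp.I*k*(gpar - sp.Rational(4, 3)*gperp), 0, sp.I*w*tPi + 1, 0],
        [0, -sp.Rational(4, 3)*sp.I*k*gperp, 0, 0, sp.I*w*tPi + 1],
    ])
    coeffs = sp.Poly(sp.expand(M4.det()), w).all_coeffs()
    roots = np.roots([complex(c) for c in coeffs])
    return float(roots.imag.min())

for kval in (0.1, 0.5, 2.0, 10.0, 50.0):
    # a negative value means some mode has Im(omega) < 0, i.e. an instability at this k
    print(kval, min_im_omega(kval))
```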
Let us comment on the unstable modes at finite \(k\). The unstable modes in the minimal causal spin hydrodynamics are significantly different from those in conventional hydrodynamics. As discussed in Refs. [165; 166; 167; 168; 156; 157], in conventional hydrodynamics the stability conditions obtained in the \(k\to 0\) and \(k\to+\infty\) limits are sufficient to ensure stability at any real \(k\). However, this appears to fail in the minimal causal spin hydrodynamics, which implies that the conditions (114, 115, 116) are necessary but may not be sufficient. Finally, it is still unclear whether the unstable modes at finite \(k\) indicate that the fluid actually becomes unstable.
### Causality and stability analysis for extended \(q^{\mu}\) and \(\phi^{\mu\nu}\)
We notice that when \(q^{\mu}\) and \(\phi^{\mu\nu}\) are coupled in the second order constitutive equations, the dispersion relations will be modified. Therefore, we extend \(q^{\mu}\) and \(\phi^{\mu\nu}\) in Eqs. (55-56) as follows,
\[\tau_{q}\Delta^{\mu\nu}\frac{d}{d\tau}q_{\nu}+q^{\mu} = \lambda\left(T^{-1}\Delta^{\mu\nu}\partial_{\nu}T+u^{\nu}\partial _{\nu}u^{\mu}-4u_{\nu}\omega^{\mu\nu}\right)+g_{1}\Delta^{\mu\nu}\partial^{ \rho}\phi_{\nu\rho}, \tag{120}\] \[\tau_{\phi}\Delta^{\mu\alpha}\Delta^{\nu\beta}\frac{d}{d\tau} \phi_{\alpha\beta}+\phi^{\mu\nu} = 2\gamma_{s}\left(2\Delta^{\mu\alpha}\Delta^{\nu\beta}\omega_{ \alpha\beta}+\partial_{\perp}^{[\mu}u^{\nu]}\right)+g_{2}\Delta^{\mu\alpha} \Delta^{\nu\beta}\partial_{[\alpha}q_{\beta]}, \tag{121}\]
where \(g_{1,2}\) are new transport coefficients describing the coupling between \(q^{\mu}\) and \(\phi^{\mu\nu}\). Following the same method, Eqs. (60, 61) become,
\[0 = (\lambda^{\prime}c_{s}^{2}\partial^{i}+8\lambda\chi_{e}^{0i}) \delta e+\lambda^{\prime}\partial_{0}\delta\vartheta^{i}+(2D_{b}-\tau_{q} \partial_{0}\partial_{0}-\partial_{0})\delta S^{0i}-g_{1}\partial_{j}\partial _{0}\delta S^{ij}, \tag{122}\] \[0 = 8\gamma_{s}\chi_{e}^{ij}\delta e+2\gamma^{\prime}(\partial^{i} \delta\vartheta^{j}-\partial^{j}\delta\vartheta^{i})+(\tau_{\phi}\partial_{0} \partial_{0}+\partial_{0}+2D_{s})\delta S^{ij}\] (123) \[+\frac{1}{2}g_{2}\partial^{i}\partial_{0}\delta S^{0j}-\frac{1}{ 2}g_{2}\partial^{j}\partial_{0}\delta S^{0i}.\]
Figure 1: We plot the imaginary parts of \(\omega\tau_{\Pi}\) as a function of \(k\tau_{\Pi}\) for three modes derived from \(\det M_{4}=0\). The parameters are chosen as in Eq. (119), which satisfy the causality and stability conditions Eqs. (18, 19, 20). The solid, dashed, and dotted lines stand for the three unstable modes.
First, we consider \(q^{\mu}\) and \(\phi^{\mu\nu}\) only and neglect the other dissipative terms for simplicity. In this case, \(M_{5}^{\prime}\) in Eq. (71) reads
\[M_{5}^{\prime} = \left(\begin{array}{ccc}i\omega&\frac{1}{2}\omega^{2}&-\frac{1}{ 2}\omega k\\ \lambda^{\prime}i\omega&2D_{b}+\tau_{q}\omega^{2}-i\omega&g_{1}\omega k\\ 2\gamma^{\prime}ik&-\frac{1}{4}g_{2}\omega k&-\tau_{\phi}\omega^{2}+i\omega+2D _{s}\end{array}\right), \tag{124}\]
while the matrix \(M_{4}^{\prime}\) is the same as before. The dispersion relations in Eqs. (80, 81) become
\[\omega = \pm\sqrt{\frac{m}{4(2\tau_{q}-\lambda^{\prime})\tau_{\phi}}}k+ \frac{1}{2}i\left(\frac{2}{2\tau_{q}-\lambda^{\prime}}+\frac{1}{\tau_{\phi}}- \frac{8\gamma^{\prime}}{m}\right)+\mathcal{O}(k^{-1}), \tag{125}\] \[\omega = \frac{4\gamma^{\prime}(i\pm\sqrt{-1-D_{b}m\gamma^{\prime-1}})}{ m}+\mathcal{O}(k^{-1}), \tag{126}\]
where
\[m = 2g_{1}g_{2}+8g_{1}\gamma^{\prime}+g_{2}\lambda^{\prime}+8\gamma^{ \prime}\tau_{q}. \tag{127}\]
We notice that the zero modes mentioned in Sec. V.1 cannot be removed by introducing the coupling between \(q^{\mu}\) and \(\phi^{\mu\nu}\).
Imposing Eq. (18) on the propagating modes in Eqs. (102-103), the causality conditions in Eq. (82) become
\[0\leq\frac{c_{s}^{2}(3\lambda^{\prime}+2\tau_{q})}{2\tau_{q}- \lambda^{\prime}}\leq 1,\ 0\leq\frac{m}{4(2\tau_{q}-\lambda^{\prime})\tau_{\phi}}\leq 1. \tag{128}\]
Inserting Eq.(20) into these new dispersion relations, Eq.(83) gives
\[\tau_{q}>\lambda^{\prime}/2,\ D_{s}>0,\ D_{b}<0,\ \chi_{e}^{0x}=0,\quad m>8 \gamma^{\prime}\left(\frac{2}{2\tau_{q}-\lambda^{\prime}}+\frac{1}{\tau_{ \phi}}\right)^{-1}. \tag{129}\]
Similarly, we can still implement the Routh-Hurwitz criterion to verify that the stability conditions are sufficient and necessary in this case. Details are shown in Appendix D.
Next, we consider all dissipative terms. Eq. (89) becomes
\[M_{5} = \left(\begin{array}{ccc}2ik\gamma^{\prime}&-\frac{1}{4}g_{2} \omega k&-\tau_{\phi}\omega^{2}+i\omega+2D_{s}&0\\ i\omega&\frac{1}{2}\omega^{2}&-\frac{1}{2}\omega k&-ik\\ i\omega\lambda^{\prime}&2D_{b}+\tau_{q}\omega^{2}-i\omega&g_{1}\omega k&0\\ -ik\gamma_{\perp}&0&0&i\omega\tau_{\Pi}+1\end{array}\right), \tag{130}\]
while \(M_{4}\) in Eq. (88) is unaffected. Eqs. (103-105) become,
\[\omega = \pm\sqrt{\frac{f+f^{\prime}}{8(2\tau_{q}-\lambda^{\prime})\tau_{ \pi}\tau_{\phi}}}k+i\frac{f+f^{\prime}}{4(2\tau_{q}-\lambda^{\prime})\tau_{\pi }\tau_{\phi}}c_{6}+\mathcal{O}(k^{-1}), \tag{131}\] \[\omega = \pm\sqrt{\frac{f-f^{\prime}}{8(2\tau_{q}-\lambda^{\prime})\tau_{ \pi}\tau_{\phi}}}k+i\frac{f-f^{\prime}}{4(2\tau_{q}-\lambda^{\prime})\tau_{ \pi}\tau_{\phi}}c_{7}+\mathcal{O}(k^{-1}),\] (132) \[\omega = \pm 4\sqrt{\frac{-D_{b}D_{s}}{g_{1}g_{2}}}k^{-1}+4i\frac{[D_{s} \gamma_{\perp}-D_{b}(\gamma_{\perp}+\gamma^{\prime})]}{g_{1}g_{2}\gamma_{ \perp}}k^{-2}+\mathcal{O}(k^{-3}), \tag{133}\]
where \(m\) is given by Eq. (127) and
\[f = m\tau_{\pi}+8\gamma_{\perp}\tau_{q}\tau_{\phi}, \tag{134}\] \[f^{\prime} = \{-32g_{1}g_{2}\gamma_{\perp}(2\tau_{q}-\lambda^{\prime})\tau_{ \pi}\tau_{\phi}+(m\tau_{\pi}+8\gamma_{\perp}\tau_{q}\tau_{\phi})^{2}\}^{1/2},\] (135) \[d = 4g_{1}^{2}(g_{2}+4\gamma^{\prime})^{2}\tau_{\pi}^{2}+[g_{2} \lambda^{\prime}\tau_{\pi}+8\tau_{q}(\gamma^{\prime}\tau_{\pi}+\gamma_{\perp} \tau_{\phi})]^{2}\] (136) \[+4g_{1}\tau_{\pi}[g_{2}^{2}\lambda^{\prime}\tau_{\pi}+4g_{2} \gamma^{\prime}(\lambda^{\prime}+2\tau_{q})\tau_{\pi}+8g_{2}\gamma_{\perp}( \lambda^{\prime}-\tau_{q})\tau_{\phi}+32\gamma^{\prime}\tau_{q}(\gamma^{ \prime}\tau_{\pi}+\gamma_{\perp}\tau_{\phi})],\] \[c_{6} = -\frac{1}{(f^{\prime 2}+fd^{1/2})}\left[m\tau_{\pi}(2\tau_{q}- \lambda^{\prime})(\tau_{\phi}-\tau_{\pi})\right.\] (137) \[+8\gamma_{\perp}\tau_{q}\tau_{\phi}(\tau_{q}-\tau_{\pi})(\lambda ^{\prime}-2\tau_{\phi})+16\gamma^{\prime}\tau_{\pi}^{2}\tau_{\phi}(2\tau_{q}- \lambda^{\prime})\] \[\left.-f^{\prime}(2\tau_{q}-\lambda^{\prime})(\tau_{\pi}+\tau_{ \phi})+2\tau_{\pi}\tau_{\phi}(-m\tau_{\pi}-8\gamma_{\perp}\lambda^{\prime} \tau_{\phi}+8\gamma^{\prime}\tau_{q}^{2}-f^{\prime})\right],\] \[c_{7} = -\frac{c_{72}}{c_{71}},\] (138) \[c_{71} = -4g_{1}^{2}(g_{2}+4\gamma^{\prime})^{2}\tau_{\pi}^{2}+[g_{2} \lambda^{\prime}\tau_{\pi}+8\tau_{q}(\gamma^{\prime}\tau_{\pi}+\gamma_{\perp} \tau_{\phi})](-g_{2}\lambda^{\prime}\tau_{\pi}-8\gamma^{\prime}\tau_{q}\tau_ {\pi}\] (139) \[-8\gamma_{\perp}\tau_{q}\tau_{\phi}+d^{1/2})-2g_{1}\tau_{\pi}\{2g _{2}^{2}\lambda^{\prime}\tau_{\pi}+g_{2}[8\gamma^{\prime}(\lambda^{\prime}+2 \tau_{q})\tau_{\pi}+16\gamma_{\perp}\lambda^{\prime}\tau_{\phi}\] \[-16\gamma_{\perp}\tau_{q}\tau_{\phi}-d^{1/2}]-4\gamma^{\prime}(16 \gamma^{\prime}\tau_{q}\tau_{\pi}+16\gamma_{\perp}\tau_{q}\tau_{\phi}-d^{1/2})\},\] \[c_{72} = -f^{\prime}(2\tau_{q}-\lambda^{\prime})(\tau_{\pi}+\tau_{\phi})+ m\tau_{\pi}(2\tau_{q}-\lambda^{\prime})(\tau_{\pi}-\tau_{\phi})-16\gamma^{ \prime}\tau_{\pi}^{2}\tau_{\phi}(2\tau_{q}-\lambda^{\prime})\] (140) \[-8\gamma_{\perp}\tau_{q}\tau_{\phi}(2\tau_{q}-\lambda^{\prime})( \tau_{\pi}-\tau_{\phi})+\tau_{\pi}\tau_{\phi}(-2f^{\prime}+2m\tau_{\pi}-16 \gamma_{\perp}\tau_{q}\tau_{\phi}+16\gamma_{\perp}\lambda^{\prime}\tau_{ \phi}).\]
From these new dispersion relations, we obtain causality conditions,
\[0\leq\frac{b_{1}^{1/2}\pm(b_{1}-b_{2})^{1/2}}{6(2\tau_{q}-\lambda^{\prime}) \tau_{\pi}\tau_{\Pi}}\leq 1\ \mbox{and}\ 0\leq\frac{f+f^{\prime}}{8(2\tau_{q}-\lambda^{\prime})\tau_{\pi}\tau_{\phi}} \leq 1, \tag{141}\]
which reproduce Eq. (113) when \(g_{1},g_{2}\to 0\).
The stability conditions in Eq. (83) become,
\[\tau_{q}-\frac{\lambda^{\prime}}{2} > 0, \tag{142}\] \[D_{s}>0,\quad-4c_{s}\lambda\gamma_{||}^{-1}|\chi_{e}^{0x}|-D_{b} > 0, \tag{143}\]
\[b_{1}>b_{2}>0,\quad\frac{c_{2}}{c_{3}} > 0, \tag{144}\] \[g_{1}g_{2}>0,\quad f>0,\quad f^{\prime} > 0, \tag{145}\] \[\mathrm{Re}\,c_{6}>0,\quad\mathrm{Re}\,c_{7} > 0. \tag{146}\]
Unfortunately, we find that the extended \(q^{\mu}\) and \(\phi^{\mu\nu}\) cannot remove the unstable modes at finite \(k\) coming from \(\mathrm{det}M_{4}=0\). We choose parameters satisfying the causality conditions (141) and the stability conditions (142 - 146), and consider the influence of \(g_{1},g_{2}\) on the dispersion relations. For simplicity, we choose the same parameters as in Eq. (119) with \((g_{1}/\tau_{\Pi},g_{2}/\tau_{\Pi})=(0.0,0.0),(2.0,0.1),(6.0,0.1),(6.0,0.05)\). We find that one mode from \(\mathrm{det}\,M_{5}=0\) becomes unstable at finite \(k\) for \((g_{1}/\tau_{\Pi},g_{2}/\tau_{\Pi})=(6.0,0.1),(6.0,0.05)\), as shown in Fig. 2.
As a brief summary, the extended \(q^{\mu}\) and \(\phi^{\mu\nu}\) can modify the causality and stability
conditions, but cannot remove the zero modes when we turn off other dissipative effects. The unstable modes at finite \(k\) also cannot be cured by the extended \(q^{\mu}\) and \(\phi^{\mu\nu}\).
### Causality and stability in moving frames
Let us briefly discuss the causality and stability of the minimal causal spin hydrodynamics in moving frames.
For the causality in moving frames, we refer to the studies in Ref. [163]. The authors of Ref. [163] have studied the dispersion relations in the large \(k\) limit in moving frames and demonstrated that the system is causal in moving frames if it is causal in the rest frame. Thus, the minimal causal spin hydrodynamics is causal in moving frames when the causality condition (113) in the rest frame is satisfied.
For the stability, it has also been proved that if a causal theory is unstable in the rest frame, then it is also unstable in moving frames (also see Theorem 2 of Ref.[188]). We now apply this theorem to the minimal causal spin hydrodynamics. If the equation of state gives \(\delta\omega^{\mu\nu}=\chi_{1}\delta S^{\mu\nu}\) with constant \(\chi_{1}\), the minimal causal spin hydrodynamics will be unstable in moving frames since it has unstable modes in the rest frame. For more general cases, the stability of the theory in both moving frames and the rest frame depends on the equation of state for \(S^{\mu\nu}\) and \(\omega^{\mu\nu}\).
In summary, the minimal causal spin hydrodynamics is causal in any reference frame when Eq.(113) is fulfilled. Hence, we have solved the problem of acausality by introducing the minimal causal spin hydrodynamics. However, the stability of minimal causal spin hydrodynamics remains unclear. Our findings indicate that the validity of the stability condition (115) is highly contingent upon the equation of state governing spin density and spin chemical potential. Moreover, we also find that the stability conditions (114, 115, 116) obtained at \(k\to 0\) and \(k\rightarrow+\infty\) are necessary but not sufficient.
## VI Conclusion
In this work, we investigate the causality and stability of canonical spin hydrodynamics in the linear modes analysis.
In linear modes analysis, we consider perturbations to the spin hydrodynamics near the
static equilibrium. We obtain the dispersion relations \(\omega=\omega(k)\) and analyze all possible modes. The results show that the stability condition (42) cannot be fulfilled. Moreover, the value of \(|\omega/k|\) in Eqs. (44-46) is unbounded, which violates the causality condition (19). In Refs.[70; 71; 45], the expression of \(q^{\mu}\) is modified by using the equation of motion for the fluid. We emphasize that the first order spin hydrodynamics in Refs.[70; 71; 45] is still acausal, since one mode shown in Eq. (52) breaks the causality condition (19). We conclude that the canonical spin hydrodynamics at first order in the gradient expansion is acausal and unstable.
We then follow the basic idea in MIS, BRSSS, and DNMR theories to construct minimal causal spin hydrodynamics. The constitutive equations (12-16) in a minimal extended causal spin hydrodynamics are replaced by Eqs. (55-58). One can view it as a natural extension of the first order spin hydrodynamics or as a simplified version of the complete second order spin hydrodynamics [73]. We investigate the causality and stability of this minimal causal spin hydrodynamics. We first analyze the causality and stability for dissipative fluids with \(q^{\mu}\) and \(\phi^{\mu\nu}\) only and find that zero modes appear in the linear modes analysis, which indicates that nonlinear modes exist. Therefore, we also consider dissipative spin fluids with the shear viscous tensor and bulk viscous pressure.
For causality, we find that the modes with infinite speed disappear and all modes are causal in the rest frame if the conditions in Eq.(113) are fulfilled. Following the statement in Ref. [163], we comment that the minimal causal spin hydrodynamics is causal in any reference frame when the conditions (113) are fulfilled.
For the stability, although we obtain the stability conditions in Eqs.(114, 115, 116) from the constraints in the \(k\to 0\) and \(k\rightarrow+\infty\) limits, the stability of the theory in both moving frames and the rest frame remains unclear. Two kinds of problems can lead to instabilities. The first one is related to the stability condition (115). Interestingly, we prove that the coefficients \(D_{s},D_{b}\) do not obey the stability condition (115) if the equation of state \(S^{\mu\nu}\sim T^{2}\omega^{\mu\nu}\) is adopted. In more general cases, the fulfillment of the stability condition (115) hinges on the specific equations of state. One has to assess the condition (115) on a case-by-case basis. Surprisingly, in contrast to conventional hydrodynamics, we find that the stability condition (20) is violated at finite \(k\), as shown in Fig. 1. This implies that the conditions (114, 115, 116) are necessary but may not be sufficient.
We also considered the extended \(q^{\mu}\) and \(\phi^{\mu\nu}\), in which the \(q^{\mu}\) and \(\phi^{\mu\nu}\) are coupled in the
second order constitutive equations. The causality and stability conditions are modified in this case. However, in dissipative fluids with \(q^{\mu}\) and \(\phi^{\mu\nu}\) only, the zero modes cannot be removed, and the unstable modes at finite wavelength remain.
We conclude that the canonical spin hydrodynamics at first order in the gradient expansion is always acausal and unstable. The minimal causal extension of spin hydrodynamics makes the theory causal. However, the linear stability of the minimal causal spin hydrodynamics remains unclear. Studies beyond the linear modes analysis may provide a better and clearer answer to the problem of stability.
###### Acknowledgements.
We thank Francesco Becattini, Matteo Buzzegoli, Asaad Daher, Xu-Guang Huang, Jin Hu, Masoud Shokri and David Wagner for helpful discussion during the 7th International Conference on Chirality, Vorticity and Magnetic Field in Heavy Ion Collisions. This work is supported in part by the National Key Research and Development Program of China under Contract No. 2022YFA1605500. This work is partly supported by National Nature Science Foundation of China (NSFC) under Grants No. 12075235 and 12135011.
## Appendix A Off-diagonal submatrices in Eqs.(32, 69, 87)
In this appendix, we list all the off-diagonal submatrices introduced in Eqs.(32,69,87):
\[A_{1}\equiv\left(\begin{array}{ccc}-4i(\omega\lambda\chi_{e}^{0y}+k\gamma_{ s}\chi_{e}^{xy})&0&0\\ 8\lambda\chi_{e}^{0y}&0&0\\ 8\gamma_{s}\chi_{e}^{xy}&0&0\end{array}\right),\ A_{2}\equiv\left(\begin{array} []{ccc}-4i(\omega\lambda\chi_{e}^{0z}+k\gamma_{s}\chi_{e}^{xz})&0&0\\ 8\lambda\chi_{e}^{0z}&0&0\\ 8\gamma_{s}\chi_{e}^{xz}&0&0\end{array}\right), \tag{10}\]
\[A_{3}=A_{6}^{\prime}=\left(\begin{array}{ccc}8\gamma_{s}\chi_{e}^{yz},\ 0,\ 0,\ 0 \end{array}\right),\ A_{4}^{\prime}\equiv\left(\begin{array}{ccc}0&0&0\\ 8\lambda\chi_{e}^{0y}&0&0\\ 8\gamma_{s}\chi_{e}^{xy}&0&0\end{array}\right),\ A_{5}^{\prime}\equiv\left( \begin{array}{ccc}0&0&0\\ 8\lambda\chi_{e}^{0z}&0&0\\ 8\gamma_{s}\chi_{e}^{xz}&0&0\end{array}\right), \tag{11}\]
\[A_{4}=\left(\begin{array}{cccc}8\gamma_{s}\chi_{e}^{xy}&0&0&0&0\\ 0&0&0&0&0\\ 8\lambda\chi_{e}^{0y}&0&0&0&0\\ 0&0&0&0&0\\ \end{array}\right),\ A_{5}=\left(\begin{array}{cccc}8\gamma_{s}\chi_{e}^{xz}&0 &0&0&0\\ 0&0&0&0&0\\ 8\lambda\chi_{e}^{0z}&0&0&0&0\\ 0&0&0&0&0\\ \end{array}\right),\ A_{6}=\left(\begin{array}{cccc}2\gamma_{s}\chi_{e}^{yz} &0&0&0&0\\ 0&\frac{2}{3}ik\gamma_{\perp}&0&0&0\\ 0&0&0&0&0\\ \end{array}\right). \tag{101}\]
Appendix B Discussion on the stability conditions in fluids with \(q^{\mu}\) and \(\phi^{\mu\nu}\) only
In this appendix, we discuss the stability conditions (83) for fluids with \(q^{\mu}\) and \(\phi^{\mu\nu}\) only (see Sec. V.1).
As mentioned, we derived the stability conditions (83) from the linear modes analysis in the small and large \(k\) limits only. Now, we implement the Routh-Hurwitz criterion [163; 164; 165; 182; 186] to prove that the conditions (83) guarantee stability for all real \(k\).
We only need to prove that the nonzero modes derived from \(\det M_{4}^{\prime}=0\) and \(\det M_{5}^{\prime}=0\) satisfy \(\mathrm{Im}\ \omega>0\) for all \(k\). First, we discuss the modes coming from the \(\det M_{4}^{\prime}=0\). The \(\det M_{4}^{\prime}=0\) gives
\[a_{0}\omega^{4}-ia_{1}\omega^{3}-a_{2}\omega^{2}+ia_{3}\omega+a_{4}=0, \tag{102}\]
with
\[a_{0} = \frac{1}{2}(2\tau_{q}-\lambda^{\prime}),\] \[a_{1} = 1,\] \[a_{2} = \frac{1}{2}c_{s}^{2}k^{2}(3\lambda^{\prime}+2\tau_{q})-2D_{b},\] \[a_{3} = c_{s}^{2}k^{2},\] \[a_{4} = -2c_{s}^{2}D_{b}k^{2}. \tag{103}\]
We redefine \(\omega=-i\Delta\) and rewrite Eq.(102) as,
\[a_{0}\Delta^{4}+a_{1}\Delta^{3}+a_{2}\Delta^{2}+a_{3}\Delta+a_{4}=0. \tag{104}\]
Notice that the coefficients \(a_{0,1,2,3,4}\) are pure real. According to the Routh-Hurwitz criterion [163; 164; 165; 182; 186; 187], the stability condition (20), i.e., \(\mathrm{Im}\omega>0\) or \(\mathrm{Re}\Delta<0\), is fulfilled for all nonzero \(k\) if and only if
\[a_{i} > 0,\] \[a_{1}a_{2}a_{3}-a_{1}^{2}a_{4}-a_{0}a_{3}^{2} > 0. \tag{105}\]
When the conditions in Eq.(83) are fulfilled, the first inequalities \(a_{i}>0\) are automatically satisfied. The second inequality reduces to \(\lambda^{\prime}=2\lambda/[e_{(0)}+p_{(0)}]>0\), which is already guaranteed by the entropy principle (17). Thus the modes derived from \(\det M_{4}^{\prime}=0\) are stable for all \(k\) if condition (83) is satisfied.
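As a quick numerical cross-check (not part of the original derivation), one can pick illustrative parameter values obeying these constraints and verify directly that every root of Eq.(104) has a negative real part over a range of wavenumbers. The values below are arbitrary choices satisfying \(\tau_{q}>\lambda^{\prime}/2\), \(\lambda^{\prime}>0\) and \(D_{b}<0\), not values taken from the paper.

```python
import numpy as np

# Illustrative parameter values (assumptions, not taken from the paper).
tau_q, lam_p, cs2, D_b = 1.0, 0.5, 1.0 / 3.0, -0.2

for k in np.linspace(0.1, 50.0, 200):
    # Coefficients of Eq.(103).
    a0 = 0.5 * (2 * tau_q - lam_p)
    a1 = 1.0
    a2 = 0.5 * cs2 * k**2 * (3 * lam_p + 2 * tau_q) - 2 * D_b
    a3 = cs2 * k**2
    a4 = -2 * cs2 * D_b * k**2
    # Routh-Hurwitz combination of Eq.(105); analytically it reduces to 2*lambda'*cs^4*k^4 > 0.
    hurwitz = a1 * a2 * a3 - a1**2 * a4 - a0 * a3**2
    # Roots Delta of a0*Delta^4 + a1*Delta^3 + a2*Delta^2 + a3*Delta + a4 = 0, cf. Eq.(104).
    roots = np.roots([a0, a1, a2, a3, a4])
    assert hurwitz > 0 and np.all(roots.real < 0), k
print("All sampled k satisfy Re(Delta) < 0, i.e. Im(omega) > 0.")
```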
Second, we consider the nonzero modes derived from \(\det M_{5}^{\prime}=0\). The \(\det M_{5}^{\prime}=0\) gives \(\omega=0\) or
\[a_{0}^{\prime}\omega^{4}-ia_{1}^{\prime}\omega^{3}-a_{2}^{\prime}\omega^{2}+ia _{3}^{\prime}\omega+a_{4}^{\prime}=0, \tag{105}\]
where
\[a_{0}^{\prime} = \frac{1}{2}\tau_{\phi}(2\tau_{q}-\lambda^{\prime}),\] \[a_{1}^{\prime} = \tau_{\phi}+\frac{1}{2}(2\tau_{q}-\lambda^{\prime}),\] \[a_{2}^{\prime} = 1+D_{s}(2\tau_{q}-\lambda^{\prime})+k^{2}\gamma^{\prime}\tau_{q }-2D_{b}\tau_{\phi},\] \[a_{3}^{\prime} = \gamma^{\prime}k^{2}+2D_{s}-2D_{b},\] \[a_{4}^{\prime} = -4D_{b}D_{s}-2D_{b}\gamma^{\prime}k^{2}. \tag{106}\]
Similarly, the Routh-Hurwitz criterion provides the necessary and sufficient conditions for \(\mathrm{Im}\ \omega>0\) in Eq.(105),
\[a_{i}^{\prime} > 0, \tag{107}\] \[a_{1}^{\prime}a_{2}^{\prime}a_{3}^{\prime}-a_{1}^{\prime 2}a_{4}^{ \prime}-a_{0}^{\prime}a_{3}^{\prime 2} > 0. \tag{108}\]
The inequalities \(a_{i}^{\prime}>0\) do not give new constraints for stability. We now show that the second inequality holds for all \(k\) if the conditions in Eq.(83) are fulfilled. Define a new function \(F(D_{b},D_{s},k)\),
\[F(D_{b},D_{s},k) \equiv a_{1}^{\prime}a_{2}^{\prime}a_{3}^{\prime}-a_{1}^{\prime 2}a_{4}^ {\prime}-a_{0}^{\prime}a_{3}^{\prime 2} \tag{109}\] \[= 4\tau_{\phi}^{2}D_{b}^{2}+\frac{1}{2}[8D_{s}(2\tau_{q}-\lambda^{ \prime})\tau_{\phi}+G(k)]D_{b}+H(D_{s},k),\]
with
\[G(k) \equiv -(2+k^{2}\gamma^{\prime}\lambda^{\prime})(2\tau_{q}-\lambda^{ \prime})-2[2+k^{2}\gamma^{\prime}(3\lambda^{\prime}-4\tau_{q})]\tau_{\phi}, \tag{110}\] \[H(D_{s},k) \equiv \frac{1}{2}(2D_{s}+k^{2}\gamma^{\prime})(2\tau_{q}-\lambda^{ \prime})[1+D_{s}(2\tau_{q}-\lambda^{\prime})+k^{2}\gamma^{\prime}\tau_{q}]\] (111) \[+\frac{1}{2}(2D_{s}+k^{2}\gamma^{\prime})(2+k^{2}\gamma^{\prime} \lambda^{\prime})\tau_{\phi}.\]
Since \(\tau_{q}>\lambda^{\prime}/2\) in Eq.(83), we have \(H(D_{s},k)>0\) for any \(k\) and any \(D_{s}>0\).
Then, we discuss two cases. When
\[8D_{s}(2\tau_{q}-\lambda^{\prime})\tau_{\phi}+G(k)\leq 0, \tag{92}\]
we find \(F(D_{b},D_{s},k)>0\) for any \(D_{b}<0\). In the other case, \(8D_{s}(2\tau_{q}-\lambda^{\prime})\tau_{\phi}+G(k)>0\), i.e.,
\[D_{s}>\frac{-G(k)}{8(2\tau_{q}-\lambda^{\prime})\tau_{\phi}}, \tag{93}\]
for each fixed \(D_{s}>0\) and \(k\), the function \(F(D_{b},D_{s},k)\) attains its minimum value
\[F(D_{b},D_{s},k) \geq F(D_{b},D_{s},k)|_{D_{b}=-[8D_{s}(2\tau_{q}-\lambda^{\prime}) \tau_{\phi}+G(k)]/(16\tau_{\phi}^{2})} \tag{94}\] \[= \frac{1}{64\tau_{\phi}^{2}}(2+k^{2}\gamma^{\prime}\lambda^{\prime })(\lambda^{\prime}-2\tau_{q}-2\tau_{\phi})^{2}\] \[\times[16\tau_{\phi}D_{s}-2-k^{2}\gamma^{\prime}(\lambda^{\prime }-8\tau_{\phi})].\]
at
\[D_{b}=-[8D_{s}(2\tau_{q}-\lambda^{\prime})\tau_{\phi}+G(k)]/(16\tau_{\phi}^{2}). \tag{95}\]
Substituting Eq.(93) into Eq.(94) leads to
\[F(D_{b},D_{s},k)\;\geq\;\frac{(2+k^{2}\gamma^{\prime}\lambda^{\prime})^{2}( \lambda^{\prime}-2\tau_{q}-2\tau_{\phi})^{2}(2\tau_{q}-\lambda^{\prime}+4\tau _{\phi})}{64(2\tau_{q}-\lambda^{\prime})\tau_{\phi}^{2}}>0, \tag{96}\]
where we have used \(\tau_{q}>\lambda^{\prime}/2\) in Eq.(83). Thus, the nonzero modes derived from \(\det M_{5}^{\prime}=0\) are stable for all \(k\) if the conditions in Eq.(83) are fulfilled.
Therefore, the conditions (83) are sufficient and necessary for the stability of fluids with \(q^{\mu}\) and \(\phi^{\mu\nu}\) only.
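As with the \(\det M_{4}^{\prime}\) sector, the positivity of \(F(D_{b},D_{s},k)\) can be spot-checked numerically. The sketch below simply evaluates the Hurwitz combination built from the coefficients in Eq.(106) on a grid of \(D_{b}<0\), \(D_{s}>0\) and \(k\); all parameter values are illustrative assumptions, not values used in the paper.

```python
import numpy as np

# Illustrative assumptions: tau_q > lambda'/2, tau_phi > 0, gamma' > 0, cf. Eq.(83).
tau_q, lam_p, tau_phi, gam_p = 1.0, 0.5, 0.8, 0.3

def F(D_b, D_s, k):
    """Hurwitz combination a1'a2'a3' - a1'^2 a4' - a0'a3'^2 built from Eq.(106)."""
    a0 = 0.5 * tau_phi * (2 * tau_q - lam_p)
    a1 = tau_phi + 0.5 * (2 * tau_q - lam_p)
    a2 = 1 + D_s * (2 * tau_q - lam_p) + k**2 * gam_p * tau_q - 2 * D_b * tau_phi
    a3 = gam_p * k**2 + 2 * D_s - 2 * D_b
    a4 = -4 * D_b * D_s - 2 * D_b * gam_p * k**2
    return a1 * a2 * a3 - a1**2 * a4 - a0 * a3**2

grid = np.linspace(0.05, 20.0, 40)
vals = [F(-x, ds, k) for x in grid for ds in grid for k in grid]  # D_b < 0, D_s > 0
print("min F over the sampled grid:", min(vals))  # expected to be positive
```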
## Appendix C Discussions on the stability conditions (115)
Here, we discuss the stability conditions (115), i.e., \(D_{s}>0\), \(D_{b}<0\).
Let us consider an isotropic fluid at equilibrium, i.e., we assume that there are no preferred directions induced by spin or external fields. In this case, the variation of the spin chemical potential is
\[\delta\omega^{\mu\nu}=\chi^{\mu\nu\alpha\beta}\delta S_{\alpha\beta}+\chi^{\mu \nu}_{e}\delta e, \tag{97}\]
with a rank-4 tensor \(\chi^{\mu\nu\alpha\beta}\) and rank-2 tensor \(\chi^{\mu\nu}_{e}\). We find that \(\chi^{\mu\nu\alpha\beta}\) satisfies \(\chi^{\mu\nu\alpha\beta}=-\chi^{\nu\mu\alpha\beta}=-\chi^{\mu\nu\beta\alpha}\).
In an irrotational isotropic background fluid without any external fields, any rank-\(n\) tensor can only be constructed from \(u^{\mu},g^{\mu\nu},\partial^{\mu},\epsilon^{\mu\nu\alpha\beta}\). Returning to the rank-4 tensor \(\chi^{\mu\nu\alpha\beta}\): in the linear modes analysis we do not need to consider the part of \(\chi^{\mu\nu\alpha\beta}\) proportional to spacetime derivatives \(\partial^{\mu}\), since those terms in \(\chi^{\mu\nu\alpha\beta}\delta S_{\alpha\beta}\) become nonlinear and will be dropped. Meanwhile, the tensor \(\epsilon^{\mu\nu\alpha\beta}\) violates reflection symmetry and cannot be used here. According to the anti-symmetric properties of \(\chi^{\mu\nu\alpha\beta}\), the only possible expression is
\[\chi^{\mu\nu\alpha\beta}=\frac{\chi_{1}}{2}(g^{\mu\alpha}g^{\nu\beta}-g^{\mu \beta}g^{\nu\alpha})+\frac{\chi_{2}}{2}(\Delta^{\mu\alpha}\Delta^{\nu\beta}- \Delta^{\mu\beta}\Delta^{\nu\alpha}), \tag{100}\]
where \(\chi_{1}\) and \(\chi_{2}\) are scalars.
Substituting Eq.(100) into Eq.(101), we obtain
\[\delta\omega^{\mu\nu}=\chi_{1}\delta S^{\mu\nu}+\chi_{2}\Delta^{\mu\alpha} \Delta^{\nu\beta}\delta S_{\alpha\beta}. \tag{101}\]
One can also write it as
\[u_{\mu}\delta\omega^{\mu\nu} = \chi_{1}u_{\mu}\delta S^{\mu\nu}, \tag{102}\] \[\Delta^{\mu\alpha}\Delta^{\nu\beta}\delta\omega_{\alpha\beta} = (\chi_{1}+\chi_{2})\Delta^{\mu\alpha}\Delta^{\nu\beta}\delta S_{ \alpha\beta}. \tag{103}\]
From the definitions in Eqs.(22,28), we then have
\[D_{s}=4\gamma_{s}(\chi_{1}+\chi_{2}),\ D_{b}=4\lambda\chi_{1}. \tag{104}\]
Since \(\gamma_{s}>0,\lambda>0\), the stability condition (115), \(D_{s}>0,D_{b}<0\), is equivalent to
\[\chi_{2}>-\chi_{1}>0. \tag{105}\]
The equation of state used in our previous works [62; 64] corresponds to \(\chi_{2}=0\) (see Eq.(17) of Ref. [62] and Eq.(38) of Ref. [64]). In that case, Eq.(105) cannot be satisfied and there exist unstable modes, although the analytic solutions in Refs. [62; 64] do not rely on it. For general cases where \(\chi_{2}\neq 0\), whether the stability condition (115), \(D_{s}>0,D_{b}<0\), is satisfied depends on \(\chi_{1},\chi_{2}\), which are related to the equation of state for \(S^{\mu\nu}\) and \(\omega^{\mu\nu}\). To determine the values of \(\chi_{1},\chi_{2}\), further investigations based on the microscopic theory are needed.
Appendix D Discussion on the stability conditions for the case with extended \(q^{\mu}\) and \(\phi^{\mu\nu}\)
As discussed in Appendix (B), we consider the nonzero modes derived from \(\det M^{\prime}_{5}=0\). The \(\det M^{\prime}_{5}=0\) gives \(\omega=0\) or
\[a^{\prime}_{0}\omega^{4}-ia^{\prime}_{1}\omega^{3}-a^{\prime}_{2}\omega^{2}+ia^ {\prime}_{3}\omega+a^{\prime}_{4}=0, \tag{105}\]
where
\[a^{\prime}_{0} = \frac{1}{2}\tau_{\phi}(2\tau_{q}-\lambda^{\prime}),\] \[a^{\prime}_{1} = \tau_{\phi}+\frac{1}{2}(2\tau_{q}-\lambda^{\prime}),\] \[a^{\prime}_{2} = 1+D_{s}(2\tau_{q}-\lambda^{\prime})+\frac{1}{8}k^{2}m-2D_{b}\tau _{\phi},\] \[a^{\prime}_{3} = \gamma^{\prime}k^{2}+2D_{s}-2D_{b},\] \[a^{\prime}_{4} = -4D_{b}D_{s}-2D_{b}\gamma^{\prime}k^{2}. \tag{106}\]
Similarly, the necessary and sufficient conditions for Im \(\omega>0\) in Eq.(105) are
\[a^{\prime}_{i} > 0, \tag{107}\] \[a^{\prime}_{1}a^{\prime}_{2}a^{\prime}_{3}-a^{\prime 2}_{1}a^{ \prime}_{4}-a^{\prime}_{0}a^{\prime 2}_{3} > 0. \tag{108}\]
The first conditions are automatically satisfied given the existing constraints for stability. Then we need to analyze whether Eq.(108) is satisfied under these constraints. Define a function \(F(D_{b},D_{s},k)\),
\[F(D_{b},D_{s},k) \equiv a^{\prime}_{1}a^{\prime}_{2}a^{\prime}_{3}-a^{\prime 2}_{1}a^{ \prime}_{4}-a^{\prime}_{0}a^{\prime 2}_{3} \tag{109}\] \[= F_{a}D_{b}^{2}+F_{b}D_{b}+F_{c},\]
where
\[F_{a} \equiv 4\tau_{\phi}^{2},\] \[F_{b} \equiv \left[\frac{1}{2}k^{2}\gamma^{\prime}(2\tau_{q}-\lambda^{\prime} )+(4D_{s}+3k^{2}\gamma^{\prime})\tau_{\phi}\right](2\tau_{q}-\lambda^{\prime} )-\frac{1}{8}(mk^{2}+8)(2\tau_{\phi}+2\tau_{q}-\lambda^{\prime}),\] \[F_{c} \equiv \frac{1}{16}(2D_{s}+k^{2}\gamma^{\prime})\left\{8D_{s}(2\tau_{q} -\lambda^{\prime})^{2}+(2\tau_{q}-\lambda^{\prime})[8+k^{2}(m-8\gamma^{ \prime}\tau_{\phi})]+2(8+k^{2}m)\tau_{\phi}\right\} \tag{110}\] \[> \frac{1}{2}(2D_{s}+k^{2}\gamma^{\prime})\{2\tau_{\phi}+(2\tau_{q} -\lambda^{\prime})[1+D_{s}(2\tau_{q}-\lambda^{\prime})]\}>0.\]
When \(F_{b}<0\), i.e.,
\[D_{s}\;<\;\frac{(mk^{2}+8)(2\tau_{\phi}+2\tau_{q}-\lambda^{\prime})}{32(2\tau_{q}- \lambda^{\prime})\tau_{\phi}}-\frac{k^{2}\gamma^{\prime}(2\tau_{q}-\lambda^{ \prime})}{8\tau_{\phi}}-\frac{3}{4}k^{2}\gamma^{\prime}, \tag{107}\]
we get
\[F(D_{b},D_{s},k)>F(0,D_{s},k)=F_{c}>0. \tag{108}\]
In the other case, \(F_{b}\geq 0\), i.e.,
\[D_{s}\;\geq\;\frac{(mk^{2}+8)(2\tau_{\phi}+2\tau_{q}-\lambda^{\prime})}{32(2 \tau_{q}-\lambda^{\prime})\tau_{\phi}}-\frac{k^{2}\gamma^{\prime}(2\tau_{q}- \lambda^{\prime})}{8\tau_{\phi}}-\frac{3}{4}k^{2}\gamma^{\prime}, \tag{109}\]
the function attains its minimum value
\[F(D_{b},D_{s},k)_{\rm min} = F(D_{b},D_{s},k)|_{D_{b}=-F_{b}/(2F_{a})} \tag{110}\] \[= -\frac{(2\tau_{\phi}+2\tau_{q}-\lambda^{\prime})^{2}}{1024\tau_ {\phi}^{2}}\left\{8+k^{2}\left[m-4\gamma^{\prime}\left(2\tau_{q}-\lambda^{ \prime}\right)\right]\right\}\] \[\times\left\{8+k^{2}\left[m-4\gamma^{\prime}\left(2\tau_{q}- \lambda^{\prime}\right)\right]-32k^{2}\tau_{\phi}(\gamma^{\prime}+2D_{s})\right\}\] \[\geq \frac{\left\{8+k^{2}\left[m-4\gamma^{\prime}\left(2\tau_{q}- \lambda^{\prime}\right)\right]\right\}^{2}\left(2\tau_{\phi}+2\tau_{q}- \lambda^{\prime}\right)^{3}}{1024\tau_{\phi}^{2}(2\tau_{q}-\lambda^{\prime}) }>0,\]
at
\[D_{b} = -\frac{F_{b}}{2F_{a}}, \tag{111}\] \[D_{s} = \frac{(mk^{2}+8)(2\tau_{\phi}+2\tau_{q}-\lambda^{\prime})}{32(2 \tau_{q}-\lambda^{\prime})\tau_{\phi}}-\frac{k^{2}\gamma^{\prime}(2\tau_{q}- \lambda^{\prime})}{8\tau_{\phi}}-\frac{3}{4}k^{2}\gamma^{\prime}. \tag{112}\]
Therefore, the nonzero modes are stable for all \(k\) if the stability condition (129) is satisfied.
|
2302.01973 | Measuring The Impact Of Programming Language Distribution | Current benchmarks for evaluating neural code models focus on only a small
subset of programming languages, excluding many popular languages such as Go or
Rust. To ameliorate this issue, we present the BabelCode framework for
execution-based evaluation of any benchmark in any language. BabelCode enables
new investigations into the qualitative performance of models' memory, runtime,
and individual test case results. Additionally, we present a new code
translation dataset called Translating Python Programming Puzzles (TP3) from
the Python Programming Puzzles (Schuster et al. 2021) benchmark that involves
translating expert-level python functions to any language. With both BabelCode
and the TP3 benchmark, we investigate if balancing the distributions of 14
languages in a training dataset improves a large language model's performance
on low-resource languages. Training a model on a balanced corpus results in, on
average, 12.34% higher $pass@k$ across all tasks and languages compared to the
baseline. We find that this strategy achieves 66.48% better $pass@k$ on
low-resource languages at the cost of only a 12.94% decrease to high-resource
languages. In our three translation tasks, this strategy yields, on average,
30.77% better low-resource $pass@k$ while having 19.58% worse high-resource
$pass@k$. | Gabriel Orlanski, Kefan Xiao, Xavier Garcia, Jeffrey Hui, Joshua Howland, Jonathan Malmaud, Jacob Austin, Rishabh Singh, Michele Catasta | 2023-02-03T19:47:22Z | http://arxiv.org/abs/2302.01973v3 | # Measuring The Impact Of Programming Language Distribution
###### Abstract
Current benchmarks for evaluating neural code models focus on only a small subset of programming languages, excluding many popular languages such as Go or Rust. To ameliorate this issue, we present the BabelCode framework for execution-based evaluation of any benchmark in any language. BabelCode enables new investigations into the qualitative performance of models' memory, runtime, and individual test case results. Additionally, we present a new code translation dataset called Translating Python Programming Puzzles (TP3) from the Python Programming Puzzles (Schuster et al., 2021) benchmark that involves translating expert-level python functions to any language. With both BabelCode and the TP3 benchmark, we investigate if balancing the distributions of 14 languages in a training dataset improves a large language model's performance on low-resource languages. Training a model on a balanced corpus results in, on average, 12.34% higher \(pass@k\) across all tasks and languages compared to the baseline. We find that this strategy achieves 66.48% better \(pass@k\) on low-resource languages at the cost of only a 12.94% decrease to high-resource languages. In our three translation tasks, this strategy yields, on average, 30.77% better low-resource \(pass@k\) while having 19.58% worse high-resource \(pass@k\).1
Footnote 1: [https://github.com/google-research/babelcode](https://github.com/google-research/babelcode)
## 1 Introduction
In the 2022 StackOverflow Developer Survey, Rust was the 14th most popular programming language despite not ranking in the survey taken five years prior. However, the 13th most popular language, Go, has nearly doubled Rust's number of StackOverflow questions in this time frame. Further, despite their similar popularity, Go has nearly 350% more source code available (Kocetkov et al., 2022). These disparities highlight the problem that many popular programming languages are starkly low-resource, especially compared to the most popular languages.
Despite their impressive generative capabilities, especially in code, Large Language Models (LLMs) are adversely impacted by this language resource imbalance. Thus, developers will likely find minimal utility from LLMs if they are not using the extremely popular languages. It is therefore imperative to investigate how to mitigate the discrepancy between a language's popularity and the amount of data available for it. Prior works focusing on code generation (Ahmad et al., 2021) and multilingual natural language processing (Arivazhagan et al., 2019; Conneau et al., 2019) use temperature-based strategies to balance the training languages. Such a strategy duplicates extremely low-resource languages thousands of times, which has been shown to significantly reduce performance (Allamanis, 2019).
Beyond the language balancing strategy, evaluating code LLMs in a multi-lingual setting presents significant challenges. Existing datasets are either mono-lingual (Chen et al., 2021; Austin et al., 2021; Lai et al., 2022) or limited to only a subset of popular programming languages (Roziere et al., 2020). Each problem in these datasets, which we henceforth refer to as a _benchmark_, contains an input and a canonical solution, along with the test cases for checking correctness. Creating a new benchmark for each language of interest would require insurmountable engineering and monetary costs. To address both of these problems, we present the BabelCode framework for execution-based evaluation of _any benchmark_ in _any language_ and use it to investigate the impact of programming language distribution on code generation and translation.
BabelCode is open-sourced, has an extensive test suite, and supports evaluating four benchmarks in 14 languages. It is designed specifically to enable future research directions such as the evaluation of custom data-structures. BabelCode allows investigation of novel research directions through the measurement of memory and runtime usage for a given
prediction, as well as the outcomes of individual test cases. Furthermore, we can use BabelCode to build multi-lingual execution-based benchmarks from existing mono-lingual datasets. We demonstrate this functionality by creating a new dataset called Translating Python Programming Puzzles (TP3) from the Python Programming Puzzles (Schuster et al., 2021) benchmark, where the objective is to translate expert-level python programs to other languages. The source programs for TP3 are the hand-crafted verification functions for each problem in P3. As the authors hand-wrote each function, they are significantly more complex than the current state-of-the-art code translation benchmarks, such as Transcoder (Roziere et al., 2020), for which code LLMs are already achieving highly impressive results.
Our presented framework is closely related to the concurrent work of MBXP (Athiwaratkun et al., 2022) and MultiPL-E (Cassano et al., 2022). While MBXP is quite similar to BabelCode, it is not open-sourced and requires that the input benchmarks be in Python. MultiPL-E is open-sourced, but only supports generation tasks and contains significant errors in multiple languages. BabelCode addresses these issues through an extensive test suite that ensures that the code generated is correct, and that crucial functionality, such as data structure equivalence, works when executed.
With the BabelCode framework, we investigate remedies to the problems of programming language imbalance. We utilize the Unimax algorithm (Anonymous, 2023) to limit the maximum number of times to duplicate a language's data to a constant \(N\). We then train 1B, 2B, and 4B parameter decoder-only models on both the natural and Unimax \(N\) distributions. We utilize the UL2 (Tay et al., 2022) and causal language modeling training objective. We find that models trained on the balanced dataset significantly outperform the baseline models on low-resource languages across all tasks. Further, we find that the resulting performance drop on high-resource languages is mitigated by increasing the model size.
This paper makes the following key contributions:
* We propose and release BabelCode, a new execution-based evaluation framework that allows for multilingual evaluation of code generation and translation capabilities of code language models. It also supports the easy addition of new benchmark tasks and execution-based metrics.
* We show that the code language models trained on the natural distributions of GitHub source code have poor performance on low-resource languages in both generation and translation tasks.
* We propose a new data balancing strategy for programming languages to improve performance on low-resource languages. We demonstrate that the resulting models outperform the baseline models across all tasks by an average of 12.34% \(pass@k\) for all languages, with a further improvement of 39.70% \(pass@k\) to low-resource languages.
* We find that the average improvements on low-resource languages from training on balanced data do not scale with model size. However, scaling the model size significantly reduces the average \(pass@k\) loss relative to the baselines on high-resource languages, going from a loss of 39.70% with the 1B model to a loss of 2.47% with the 4B model.
## 2 The BabelCode Framework
BabelCode enables the evaluation of a collection of problems, each consisting of a prompt and a set of test cases, in any language through four stages: 1) represent each test case in our domain specific language (DSL) defined in Figure 2, 2) use this generic form to generate the test cases in the target language from the input and output values, 3) use a Jinja2 template to generate a testing script in the target language, and 4) execute the target script through the command line. This is done autonomously, requiring minimal human
intervention. We provide an overview of how an example problem is translated in Figure 7.
Figure 1: Overview of this work’s contributions.
### Framework Design
BabelCode shares many design similarities to the concurrent work from Athiwaratkun et al. (2022). Specifically, we follow the same approach to inferring argument and return types. We follow the respective documentation and tutorials for each language to determine which native types to use. We also use these docs to determine the docstring formatting and naming convention. These mappings are used to generate unit and integration tests for each language automatically. They ensure that each language's implementation is syntactically correct and that, when executed, the equality comparison is correct.
**DSL Representations:** Using a DSL in the first phase, we do not force the inputs to be Python, thus enabling more flexibility to represent more generic tasks. For example, given the inputs from two test cases: {"a": [[1],[],[80]]} and {"a": []}, we only represent the _types_ in our generic DSL. Thus, the resulting type string for this input is map<string;list<integer>>. We do not represent the actual values in the generic form as we can easily translate literals across languages. This allows users to create a dataset from any language by requiring that they only represent the types of the inputs and outputs in this generic form. The language agnostic nature of the DSL enables future extensions of BabelCode to incorporate complex inputs and outputs such as custom data-structures. For example, the representation of a node class in a BST could be BSTNode<integer;integer>.
**Equality Checking:** We support floating point equivalence to a precision of \(\epsilon=1\mathrm{e}{-6}\) for floats and \(\epsilon=1\mathrm{e}{-9}\) for doubles. To determine if a given value is a float or a double, we count the number of digits after the decimal place. We apply this same logic to int and long by counting the total number of digits. Languages such as C# do not, by default, support deep equivalence of data structures. In such cases, we serialize the objects to JSON and check that the resulting strings are equal. Otherwise, we use the language built-in deep equality functionality.
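A rough sketch of this comparison logic is shown below. It is a hypothetical illustration rather than the framework's source, and the exact digit-count threshold separating floats from doubles is our assumption.

```python
import json
import math

def tolerance(expected):
    """Pick the comparison tolerance from the number of digits after the decimal point."""
    text = repr(expected)
    digits = len(text.split(".")[1]) if "." in text else 0
    return 1e-6 if digits <= 6 else 1e-9  # float precision vs. double precision

def outputs_equal(expected, actual):
    if isinstance(expected, float):
        return math.isclose(expected, actual, abs_tol=tolerance(expected))
    # Languages without built-in deep equality instead serialize both objects to
    # JSON in the generated code and compare the resulting strings.
    return json.dumps(expected, sort_keys=True) == json.dumps(actual, sort_keys=True)

print(outputs_equal(0.333333, 1.0 / 3.0))           # True, within 1e-6
print(outputs_equal({"a": [1, 2]}, {"a": [1, 2]}))  # True
```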
**Test Statement Execution:** We opt to print the result of each test case (i.e. TEST-0...PASSED) to the standard output in a parseable format across all languages. Along with try-catch blocks, this allows the evaluation of _every_ test case for a given problem. This allows finer analysis of individual programs when compared to using assert statements as it identifies if specific corner cases fail.
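Concretely, the generated script wraps each case in its own guard and writes one machine-readable line per case to standard output, which the framework then parses. The snippet below illustrates the idea; the exact label strings other than `PASSED` are assumptions made for this example.

```python
import re

# Shape of one emitted test block in a generated (here, Python) testing script.
GENERATED_BLOCK = '''
try:
    result = solution([1, 2, 3])
    print("TEST-0..." + ("PASSED" if result == 6 else "FAILED"))
except Exception:
    print("TEST-0...HAD_ERROR")
'''

# After execution, per-test-case outcomes are recovered from stdout alone, so a
# single failing or crashing case does not hide the results of the other cases.
stdout = "TEST-0...PASSED\nTEST-1...FAILED\nTEST-2...HAD_ERROR\n"
outcomes = dict(re.findall(r"TEST-(\d+)\.\.\.(\w+)", stdout))
print(outcomes)  # {'0': 'PASSED', '1': 'FAILED', '2': 'HAD_ERROR'}
```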
**Prompt Translation:** As Wang et al. (2022) showed, LLMs are sensitive to the input prompts for code generation. Therefore BabelCode supports prompt translation and construction for multiple different problem formulations. We replace the names of languages, such as Python, with the target language. We use the language-specific naming convention to properly format the signature in the best practice style. If an argument uses a reserved keyword, we append arg to its name so that it retains the same meaning but will no longer conflict. We replace Python-specific terms with their equivalent names in the target language. For tasks formulated as code-completion, we support formatting the problem description as a native docstring. We do _not_ translate the import statements in the header. Instead, we exclude the headers from all languages to provide a language-agnostic format.
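The signature-formatting step amounts to small, mechanical rewrites. The sketch below illustrates two of them, converting an argument name to a target naming convention and renaming arguments that collide with reserved keywords, using made-up keyword subsets; it is not the framework's code.

```python
# Tiny illustrative keyword subsets; real keyword lists are much longer.
KEYWORDS = {"go": {"func", "type", "range"}, "julia": {"function", "end"}}

def to_camel(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.title() for part in rest)

def format_arg(name: str, language: str, camel_case: bool = True) -> str:
    formatted = to_camel(name) if camel_case else name
    if formatted in KEYWORDS.get(language, set()):
        formatted += "arg"  # append "arg" so the name keeps its meaning but no longer conflicts
    return formatted

print(format_arg("num_items", "go"))  # numItems
print(format_arg("range", "go"))      # rangearg
```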
### Differences To Prior Works
We summarize the high-level differences between BabelCode and prior works in Table 1. The **MBXP** framework from Athiwaratkun et al. (2022) is the most similar to our
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline & Open & \# & NL2C & C2C & Mem. \& Test & Indiv. Test & Lang. Agnostic \\ Name & Sourced & Lang. & Support & Support & Time Metrics & Suite & Case Results & Datasets \\ \hline MultiPL-E & \(\checkmark\) & 18 & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ MBXP & \(\times\) & 10 & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\times\) \\ BabelCode & \(\checkmark\) & 14 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Differences between BabelCode and prior works. NL2C is natural language to code, while C2C is code to code datasets. BabelCode has an extensive test-suite that automatically tests each language’s implementation and correctness when executed.
Figure 2: BabelCode’s domain specific language for representing the input and output types of a question. Prior works require that the source dataset be written in Python, while our DSL removes this restriction and allows users to create datasets in _any_ language. This enables seamless additions of new languages while simplifying future expansions to features such as custom data structures.
work as discussed in subsection 2.1. Similar to BabelCode, MBXP does have individual test-case results; however, it uses assert statements and thus can only determine the first test-case that fails. MBXP does use language experts to review the generated code's quality and discuss the validation it supports to ensure that generated code parses and/or compiles for its respective language. BabelCode also has this functionality but, additionally, it ensures correctness through a test suite that covers the execution of generated code. We provide scripts to allow validating that source solutions to a dataset pass the generated code. For languages that do not have a solution in the respective dataset, we generate "mock" predictions that return the expected output type. This allows us to ensure that generated code is correct in _all_ supported languages even if no solution exists.
The **MultiPL-E** framework from Cassano et al. (2022) supports 18 languages compared to BabelCode's 16. However, we support four datasets, while MultiPL-E only currently has support for two datasets. In addition, BabelCode also supports fine-grained evaluation metrics for memory, running time, and individual test cases. Our extensive test suite and validation scripts have also exposed many language-specific idiosyncrasies that naive methods of translation fail to handle. For example, in Julia, any "\(\hat{s}\)" will be treated as string interpolation, even if it is in a docstring. Thus, in the majority of cases, these must be escaped. We automatically rename variables that use reserved keywords. In languages such as C#, the \(==\) operator checks equivalence by _reference_ instead of _value_. Besides corner cases, our DSL and templates allow us to effectively implement proper floating point equivalence for problems that return a float. Finally, in many languages, MultiPL-E uses types that are _not_ considered best practice, such as in Scala, where it relies on the Java types ArrayList instead of the native List.
## 3 Low-Resource Code Language Models
Because the data availability can vary greatly by programming language, we can consider the goal of building a multilingual code model as a data-imbalanced multi-task learning problem. Previous work in the multilingual natural language community (Conneau et al., 2019; Arivazhagan et al., 2019) and in the program synthesis space (Ahmad et al., 2021) have used sampling strategies relying on temperature-scaling. In this work, we use the Unimax (Anonymous, 2023) strategy to address this imbalance. The Unimax algorithm assumes that we are given a budget of how many examples we plan to consume during training and a maximum number of times, \(N\), any single example can be duplicated in the training corpus. Then, we separate the data into buckets by programming language and add \(N\) epochs of each of the lowest-resource languages until we can safely distribute the remaining budget across all the remaining languages without exceeding \(N\) epochs over any one of these remaining languages. This will allow us to control the number of epochs \(N\) we perform over the low-resource languages to minimize overfitting while allowing fair distribution of the compute budget to the remaining high-resource languages. We will ablate the choice of \(N\) in our experiments.
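The allocation logic can be written down in a few lines. The sketch below is our paraphrase of the procedure described above (languages visited from lowest- to highest-resource, none repeated for more than \(N\) epochs), not code from the UniMax paper, and the example numbers are made up.

```python
def unimax_allocation(sizes, budget, N):
    """Number of training examples to draw from each language.

    sizes  -- mapping from language to number of available examples
    budget -- total number of examples the model will consume during training
    N      -- maximum number of epochs allowed over any single language
    """
    allocation, remaining = {}, float(budget)
    languages = sorted(sizes, key=sizes.get)          # lowest-resource first
    for i, lang in enumerate(languages):
        uniform_share = remaining / (len(languages) - i)
        if uniform_share > N * sizes[lang]:           # an even split would exceed N epochs,
            allocation[lang] = N * sizes[lang]        # so cap this language at N epochs
        else:                                         # the rest can safely share evenly
            allocation[lang] = uniform_share
        remaining -= allocation[lang]
    return allocation

# Toy example: the two low-resource languages are capped at N=4 epochs and the
# leftover budget is given to the high-resource language.
print(unimax_allocation({"Haskell": 1, "Rust": 5, "Python": 100}, budget=60, N=4))
```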
## 4 Experimental Setup
### Models
To understand the impact of training decoder-only models on the different programming language distributions, we train models in 3 sizes: 1B, 2B, and 4B. For each of these sizes, we train 5 different models on each distribution: Natural and Unimax \(N\), where \(N\in\{1,2,3,4\}\). The parameters and training differences are listed in Table 2. We follow Chowdhery et al. (2022) for all other architecture choices. Every model has a context window of 2048 and is trained identically with the same vocabulary described in subsection 4.3. We use a base learning rate of 0.01 and a constant warmup with a step inverse decay. The number of warmup steps is kept to 10% of the total training steps per model. The total number of training steps is 38000, 77000, 190000 for the 1B, 2B, and 4B models, respectively. We use the Adafactor optimizer (Shazeer and Stern, 2018) and a batch size of 256. We prepend [code] to the beginning and add the tag [eod] to the end of each file from our training data. Finally, we use the T5X and SeqIO (Roberts et al., 2022) frameworks. We use the UL2 (Tay et al., 2022) objective with an additional causal language modeling objective as described in Appendix A.
### Training Data
Our curated source code corpus was obtained by collecting publicly available code data on the web using a custom code data collection system. We apply a similar license filter as Kocetkov et al. (2022) to remove any files with non-permissible licenses, use simple heuristics to filter out low-quality code and apply near-deduplication to obtain our corpus of high quality, permissive source code. After preprocessing, we select 14 programming languages by their
Figure 3: Different distributions for Unimax with different budgets.
file extensions according to the mapping used by GitHub's Linguist library3 to segment the dataset by language. To calculate the number of examples per language, we use SeqIO's caching feature and take the number of examples after post-processing (Roberts et al., 2022). We list the percentages of all examples and file extensions used per language in Appendix B. With these numbers, we consider the top 7 languages to be **high-resource**(HR): Java, Python, C++, PHP, TypeScript, JavaScript, and Go. We further consider the bottom 7 languages to be **low-resource**(LR): Dart, Lua, Rust, C#, R, Julia, and Haskell.
Footnote 3: [https://github.com/github/linguist/](https://github.com/github/linguist/)
### Vocabulary
The original PaLM (Chowdhery et al., 2022) vocabulary focuses on multilingual natural language. In contrast, we trained our SentencePiece (Kudo and Richardson, 2018) vocabulary with 64k tokens from the training data directly. Each programming language is uniformly sampled to build the vocabulary. In previous works, such as Chen et al. (2021), a list of tokens consisting of different numbers of whitespace characters is manually added to represent code more efficiently. In our work, we rely on the SentencePiece model to learn the whitespace tokens by allowing extra whitespace tokens and whitespace-only tokens. In the end, the model can represent up to 12 whitespace characters as a single token. In addition, numbers are split into individual tokens.
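For concreteness, a vocabulary with these properties could be trained roughly as follows. The paper does not list its exact trainer settings, so the flag values and corpus path below are assumptions; only the 64k vocabulary size, digit splitting, and learned whitespace tokens are stated above.

```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="code_corpus.txt",            # hypothetical path; languages sampled uniformly
    model_prefix="code_vocab",
    vocab_size=64000,
    split_digits=True,                  # numbers are split into individual digit tokens
    allow_whitespace_only_pieces=True,  # lets runs of spaces become single tokens
    remove_extra_whitespaces=False,     # keep indentation exactly as written
)

sp = spm.SentencePieceProcessor(model_file="code_vocab.model")
print(sp.encode("        return x", out_type=str))  # indentation ideally kept as one piece
```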
### Benchmarks
BabelCode currently supports 4 datasets. To allow the translation of any dataset to any language, we modify each benchmark as well as remove problems that were incompatible. These changes are described in Appendix C. For HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and Transcoder (Roziere et al., 2020), we add the prefix **BabelCode- (BC)** to indicate that we are using the BabelCode specific version. Further, for Transcoder, we use the same version as in Chowdhery et al. (2022). **BC-HumanEval (BC-HE)** has 161 out of the original 164 HumanEval questions. **BC-MBPP** has 855 of the original 999 questions. **BC-Transcoder (BC-TC)** has 524 of the original 560 questions.
We additionally introduce a new dataset called **Translating Python Programming Puzzles (TP3)**. We take the verification functions from the questions in the original Python Programming Puzzles dataset (Schuster et al., 2021) to create this dataset. These functions are hand-crafted by the authors and are used to check if an answer satisfies the constraints of the puzzle. These puzzles range in difficulty from basic character checking to competitive programming problems. Thus, each verification function is written by an expert python programmer and requires a significant understanding of programming to translate. In total, there are 370 python functions to translate. Examples from TP3 can be found in subsection C.4.
### Evaluation
For BC-HumanEval, we follow Chen et al. (2021) and generate 200 programs per problem. Further, we use a zero-shot prompt described in subsection D.1. We use the built-in docstring translation of BabelCode. We generate 50 programs per problem on our three translation tasks and use the prompts described in subsection D.2. We consider these prompts zero-shot as we do not provide any additional examples. However, we provide the translated signature without the docstring in the prompt. We do not consider this to be data leakage as it is trivial to translate signatures with libraries such as Treesitter4.
Footnote 4: [https://tree-sitter.github.io/tree-sitter/](https://tree-sitter.github.io/tree-sitter/)
For every dataset, we use \(T=0.8\), \(top_{p}=0.95\), and do not use \(top_{k}\). We use the \(pass@k\) estimator (Chen et al., 2021) to measure the performance. We use \(k=100\) and \(k=25\) for generation and translation, respectively.
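The \(pass@k\) estimator referenced here has a standard closed form: if \(n\) samples are drawn for a problem and \(c\) of them pass every test case, then \(pass@k=1-\binom{n-c}{k}/\binom{n}{k}\), which Chen et al. (2021) compute with the numerically stable product below.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem with n samples, c of them correct."""
    if n - c < k:  # every size-k subset must contain at least one correct sample
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 generations for a BC-HumanEval problem, 17 of which pass.
print(pass_at_k(n=200, c=17, k=100))
```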
## 5 Results
### Baseline Models
We report the baseline results for our trained models and PaLM-Coder in Figure 4. On BC-HumanEval, we find that the 2B model has a better \(pass@100\) than that of PaLM-Coder 8B on all but C# and Python. On average, the BC-2B model trained on the natural distribution of GitHub data has average improvements of 48.17% compared to PaLM-Coder 8B despite having a quarter of the number of parameters and training on 6.4B fewer code tokens. Further, we find that the 4B model outperforms PaLM-Coder 62B on 6 of the 14 languages evaluated. This likely results from the 4B model seeing over 53B tokens more than what PaLM-Coder
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline & \# of & & & Train \\ Model & Layers & Heads & \(d_{model}\) & Tokens(B) \\ \hline BC 1B & 16 & 8 & 8192 & 20.2 \\ BC 2B & 24 & 16 & 10240 & 40.4 \\ BC 4B & 26 & 16 & 14336 & 100 \\ \hline PC 8B & 32 & 16 & 4096 & 46.8 \\ PC 62B & 64 & 32 & 8192 & 46.8 \\ \end{tabular}
\end{table}
Table 2: Hyperparameters for models trained (BC) compared with those used to train PaLM-Coder(PC). For PaLM-Coder, we report the number of code tokens trained on. Each BC model is trained on each of the naturally occurring distributions of the GitHub data and each of the distributions is detailed in section 3 where \(N\in\{1,2,3,4\}\)
62B did. Another likely factor in this discrepancy is that the data PaLM-Coder was fine-tuned on included all languages on GitHub in contrast to our filtered training dataset.
We also observe that performance on languages do not scale with respect to their resource level nor the model's size. C#, Dart, Julia, and Haskell have significantly higher gains when scaling to 4B model size when compared to the other languages. While this may be due to the increased number of training tokens, it is not consistent across all LR languages as the increase in performance for R and Lua when scaling from 1B to 2B is similar to that when scaling from 2B to 4B. Instead, this result is likely due to better transfer from languages such as Java, Python, and C++.
The importance of scale for multi-lingual code models is further demonstrated by the results of the translation tasks. We find that in BC-TP3, the 1B and 2B models' performance is similar. However, the most significant gains are from scaling up to 4B where it beats PaLM-Coder 8B on all but three languages in this zero-shot translation. We do make note, though, that while we do not provide any examples for in-context learning, we do provide the signature in the target language during generation. This finding is less pronounced in BC-Transcoder as the scaling observed in all languages is more akin to that seen in BC-HumanEval.
### Impact of Balancing Programming Languages
Figure 5 shows the mean \(pass@k\) scores of different models trained on each of the 5 distributions for each of the 4 datasets. As expected, the natural distribution is optimal if the focus is solely HR languages as the performance losses when training on Unimax balanced data are 15.47%, 14.00%, and 9.35% for the 1B, 2B, and 4B models, respectively. However, for any LR language, Unimax is clearly better given that there is an average \(pass@100\) improvement on these languages of 111.85%, 68.38%, and 19.22% for the 1B, 2B, and 4B size models, respectively. For generation tasks, we find that \(N=3\) is optimal with respect to the difference between performance gained on LR and performance lost on HR languages. On the 1B, 2B, and 4B models, the ones trained on the Unimax 3 dataset had differences of 130.17%, 87.80%, and 36.00%, respectively.
We observe similar scaling trends on TP3, as training on a Unimax distribution yielded average \(pass@25\) improvements to LR languages of 124.45% for the 1B model, 64.51% for the 2B model, and 51.29% for the 4B model when compared to the same sized models trained on the natural distribution. Unlike BC-HumanEval, training the 4B on Unimax Distributions yielded _better_ average HR performance with an increase of 6.80%. As shown in Figure 6, training a 4B model on the Unimax 2 distribution had a mean \(pass@25\) improvement of 71.59% in LR languages and an improvement of 20.31% on HR languages when compared to the natural distribution. Training on other Unimax distributions does not see as large of improvements. For the 4B model, we find mean LR improvements of 42.39%, 52.91%, and 38.26% when trained on the Unimax 1, 3, and 4 distributions, respectively. This indicates that for TP3, at least, balancing the training data for each language improves translation capabilities. However, less Python data
adversely affects understanding the source code necessary to translate it properly.
Figure 4: Comparison of the models trained with PaLM-Coder models. For each dataset, we use Chen et al. (2021)\(pass@k\) estimator with \(n=2*k\). We then generate \(n\) samples per problem with \(T=0.8\). Full results can be found in Appendix E. Languages in the X-Axis are sorted from high to low resource. HS is Haskell, JS is JavaScript, Py is Python, and TS is TypeScript.
When evaluated on BC-Transcoder, we find that LR performance _increased_ with size. When the source language is C++, training on the Unimax distributions yielded an average \(pass@25\) improvements of 7.57%, 6.76%, and 11.80% for the 1B, 2B, and 4B models, respectively. Translating Python to other languages followed this trend with an average change of -26.04%, 15.1%, and 22.47% for the 1B, 2B, and 4B models, respectively. On BC-Transcoder, we find similar benefits when translating from Python to other languages, although the performance on higher resource languages is significantly worse. When translating from C++ to other languages, we find that training both a 1B and 2B model on the UM 4 distribution improves performance on 5 of the 7 LR languages. For 4B sized models, the UM 2 distribution is optimal as LR performance increased by an average of 20.47% when compared to training on the natural distribution. As the source code of BC-Transcoder focuses on language-agnostic algorithm implementations, this scaling trend is most likely due to the importance of a surface-level understanding of the target language. Further, the fact that this trend does not appear for BC-HumanEval or TP3 indicates that neither model size nor duplication of language data enables the model to have a deep understanding of these low-resource languages.
### Qualitative Effects Of Language Balance
We find that, as is expected, decreasing the number of tokens for a language negatively impacts its performance on that language. To compare the overall effects of language balancing at each size, we focus on the Unimax 1 and Unimax 2 distributions as they represent the largest change in proportions of HR languages when compared to the Natural distribution.
Figure 5: Effects of scale on the average \(pass@k\) of the high and low resource languages for each of four datasets. Full tabulated results are located in Appendix E.
Figure 6: Mean relative difference of \(pass@k\) for each of the models trained on the different Unimax distributions compared to the \(pass@k\) of the same sized model trained on the Natural distribution. The X-Axis is the language sorted from high to low resource. HS is Haskell and Py is Python. The percent changes for each delta for HR languages are shown in Table 12 and Table 13 for LR languages.
Figure 8 shows that on BC-HumanEval, training on either UM 1 or UM 2 will cause the model to generate fewer correct solutions than when the model is trained on the Natural distribution with respect to HR languages. However, this is _not_ due to those models generating more programs with either compilation or run-time errors as the raw average increase is only 0.40 and 1.15 for the models trained on the Unimax 1 and Unimax 2 respectively. Rather, we find that the largest decrease is in the mean % test cases passed per problem. Training on the Unimax 1 and Unimax 2 distributions results in 5.50% and 9.09% fewer test cases respectively when compared to the model trained on the natural distribution.
On LR languages, the Unimax 1 distribution yielded the best improvements compared to the other distributions. Specifically, the programs generated by the model trained on the Natural distribution passed, on average, 5.13% of the test cases per problem. In comparison, 9.53% and 10.48% of average test cases per problem were solved by the models trained on the Unimax 1 and Unimax 2 distributions. The less than 1% improvement when going from Unimax 1 to Unimax 2 suggests that, for generation tasks, multi-lingual models of code benefit the most from seeing unique data.
In our translation task of TP3, we observe consistent improvements in the mean number of test cases passed for both HR and LR languages. For the former, we observe an average improvement of 2.58% and 3.06% compared to the Natural distribution for the UM 1 and 2 distributions respectively. On LR languages, we find average improvements of 3.40% and 4.99% over the Natural distribution for the UM 1 and UM 2 distributions respectively. These results, along with the performance improvements discussed in subsection 5.2, indicate that translation tasks benefit highly from uniformly balanced languages. This is, likely, due to the task formulation where natural language understanding is not necessary. Higher resource languages are more likely to contain diverse natural language and code pairs due to the language's popularity.
Thus, performance on NL2Code tasks, such as BC-HumanEval, depends on the unique samples of code and doc-strings in the training corpus. Translation, on the other hand, does not have this constraint. Rather, it appears that uniformly balancing languages is the optimal strategy for this task.
## 6 Related Works
**Code Evaluation** Existing code benchmarks have primarily focused on surface matching evaluation (Lu et al., 2021; Yin et al., 2018; Wang et al., 2022b; Husain et al., 2019). Recent works have introduced new execution-based benchmarks for both generation (Austin et al., 2021; Hendrycks et al., 2021; Chen et al., 2021; Lai et al., 2022) and repair (Yasunaga and Liang, 2021) tasks, however, these have been limited to only Python. Additional works have introduced generation (Li et al., 2022) and translation (Roziere et al., 2020) tasks in multiple-languages, but are limited to only C++, Java, and Python. We acknowledge concurrent works by Cassano et al. (2022) and Athiwaratkun et al. (2022) on translating HumanEval and MBPP into multiple programming languages. As we note in subsection 2.2, BabelCode supports deeper analysis on a wider range of tasks while including significant methods for ensuring correctness.
**Code LLMs** Recent years has seen significant interest in LLMs for code. CodeBERT (Feng et al., 2020) is the first work to train an encoder only model on code. CodeT5 (Wang et al., 2021), PLBART (Ahmad et al., 2021), and additional works (Clement et al., 2020; Orlanski and Gittens, 2021; Chakraborty et al., 2022) examine training encoder-decoder models on code. Similar to this work, Ahmad et al. (2021) investigate difference data balancing strategies for pre-training. Our work differs in that we focus on balancing many programming languages in pre-training data. AlphaCode (Li et al., 2022), Codex (Chen et al., 2021), PaLM (Chowdhery et al., 2022), and other works (Nijkamp et al., 2022; Fried et al., 2022; Allal et al., 2023; Christopoulou et al., 2022) have shown that decoder-only code language models achieve exceptional performance on a wide range of tasks. Additional works have investigated different training strategies (Roziere et al., 2020; Bavarian et al., 2022) and different pre-training data (Roziere et al., 2021; Orlanski et al., 2022; Austin et al., 2021).
**Language Balancing** Choosing a proper sampling distribution from a mixture of datasets of various sizes is a difficult problem. Initial attempts at studying this in the multilingual natural language processing literature relied on temperature-based approaches (Conneau et al., 2019; Arivazhagan et al., 2019). These approaches oversample the low-resource tasks and downsample the high-resource ones. Other works have adopted more dynamic approaches, adapting the sampling rates in an online fashion during training (Wang et al., 2020).
## 7 Conclusion
We proposed the BabelCode framework for multi-lingual execution-based evaluation and a new strategy for balancing programming language distributions. We highlight the ease of creating new benchmarks with BabelCode by proposing the Translating Python Programming Puzzles. Our experiments demonstrate that adjusting how much we oversample low-resource languages and downsample high-resource languages greatly improves low-resource performance with minimal impact on the performance of high-resource languages in tasks involving either a single or multiple programming languages. By open-sourcing BabelCode, future
work can investigate improved balancing strategies along with new multi-lingual programming language questions.
## Acknowledgements
We thank Michael Janner, Owen Lewis, Alex Polozov, Uros Popovic, Deviet Roy, Tal Schuster, and Charles Sutton for their helpful discussions and feedback on the paper.
|
2306.02755 | Quantum operations with the time axis in a superposed direction | In the quantum theory, it has been shown that one can see if a process has
the time reversal symmetry by applying the matrix transposition and examining
if it remains physical. However, recent discoveries regarding the indefinite
causal order of quantum processes suggest that there may be other, more general
symmetry transformations of time besides the complete reversal. In this work,
we introduce an expanded concept of matrix transposition, the generalized
transposition, that takes into account general bipartite unitary
transformations of a quantum operation's future and past Hilbert spaces,
allowing for making the time axis definitely lie in a superposed direction,
which generalizes the previously studied `indefinite direction of time', i.e.,
superposition of the forward and the backward time evolution. This framework
may have applications in approaches that treat time and space equally like
quantum gravity, where the spatio-temporal structure is explained to emerge
from quantum mechanics. We apply this generalized transposition to investigate
a continuous generalization of perfect tensors, a dynamic version of tracing
out a subsystem, and the compatibility of multiple time axes in bipartite
quantum interactions. Notably, we demonstrate that when a bipartite interaction
is consistent with more distinct local temporal axes, there is a reduced
allowance for information exchange between the two parties in order to prevent
causality violations. | Seok Hyung Lie, M. S. Kim | 2023-06-05T10:20:59Z | http://arxiv.org/abs/2306.02755v3 | # Quantum operations with the time axis in a superposed direction
###### Abstract
In the quantum theory, it has been shown that one can see if a process has the time reversal symmetry by applying the matrix transposition and examining if it remains physical. However, recent discoveries regarding the indefinite causal order of quantum processes suggest that there may be other, more general symmetry transformations of time besides the complete reversal. In this work, we introduce an expanded concept of matrix transposition, the generalized transposition, that takes into account general bipartite unitary transformations of a quantum operation's future and past Hilbert spaces, allowing for making the time axis definitely lie in a superposed direction, which generalizes the previously studied 'indefinite direction of time', i.e., superposition of the forward and the backward time evolution. This framework may have applications in approaches that treat time and space equally like quantum gravity, where the spatio-temporal structure is explained to emerge from quantum mechanics. We apply this generalized transposition to investigate a continuous generalization of perfect tensors, a dynamic version of tracing out a subsystem, and the compatibility of multiple time axes in bipartite quantum interactions. Notably, we demonstrate that when a bipartite interaction is consistent with more distinct local temporal axes, there is a reduced allowance for information exchange between the two parties in order to prevent causality violations.
## 1 Introduction
The arrow of time has been one of the central topics of physics. Although the unique direction of time propagation from the past to the future is so natural for us living in the macroscopic and classical world, the fundamental laws of nature governing the microscopic world appear to be symmetric with respect to the reversal of time direction. There are some attempts to explain the emergence of the arrow of time with thermodynamic arguments in the classical realm.
Recently, Chiribella and Liu studied the time reversal symmetry of quantum processes [1]. In Ref. [1], it is shown that the most appropriate mathematical representation of the input-output reversion is the matrix transposition, and the quantum processes that are consistent with both directions of time propagation correspond to unital quantum channels. However, the input-output reversion may not be the most general symmetry transformation of the temporal structure of quantum processes considering recent developments in indefinite causal structures of quantum processes [2].
Especially, in some approaches to the quantum theory of the spatio-temporal structure of the universe, such as quantum gravity, spacetime is treated on the same footing as other quantum objects. In such theories, the existence of a unique flow of time is not assumed. Some approaches explain that time emerges only after some subspace of the whole Hilbert space of the universe is identified as a 'clock' that provides a quantized time parameter [3]. In this picture, there is no immediate reason to expect that there is a unique well-defined direction of time obeyed by every quantum system in the universe, as there is an ambiguity in the choice of a clock system, known as the clock ambiguity [4, 5, 6]. In other words, when interpreted as quantum systems, the distinction between future and past systems is not so clear, and the partition between them need not be unique.
These observations suggest the possibility of altering the temporal direction, not just within a given axis (forward and backward, or their superpositions, as considered in Refs. [1, 2]) but also through the transformation of the direction of the temporal axis _itself_. In this work, we develop a generalization of the approach of Chiribella and Liu [1] by introducing the _generalized transposition_, which generalizes the conventional matrix transposition, and study its applications and implications in various contexts such as the tensor network picture of quantum events, perfect tensors, and information exchange within bipartite quantum interactions.
### Notation
Without loss of generality, we sometimes identify the Hilbert space \(H_{X}\) corresponding to a quantum system \(X\) with the system itself and use the same symbol \(X\) to denote both. We will denote the dimension of \(X\) by \(|X|\). For any system \(X\), \(X^{\otimes n}\) represents the tensor product of \(n\) copies of \(X\), and when we need to refer to one copy of it, we denote it by \(X^{\prime},X^{\prime\prime}\) etc. In other words, \(X^{\prime}\) is a copy of \(X\) with the same dimension, i.e., \(|X|=|X^{\prime}|\). When there are many systems, all the systems other than \(X\) are denoted by \(\bar{X}\). However, the trivial Hilbert space will be identified with the field of complex numbers and will be denoted by \(\mathds{C}\). The identity operator on system \(X\) is denoted by \(\mathds{1}_{X}\) and the maximally mixed state is denoted by \(\pi_{X}=|X|^{-1}\mathds{1}_{X}\). The space of all bounded operators acting on system \(X\) is denoted by \(\mathfrak{B}(X)\), the real space of all Hermitian matrices on system \(X\) by \(\mathfrak{H}(X)\). The set of all unitary operators in \(\mathfrak{B}(X)\) is denoted by \(\mathfrak{U}(X)\). For any \(M\in\mathfrak{B}(X)\), we let \(\text{Ad}_{M}\) be \(\text{Ad}_{M}(K):=MKM^{\dagger}\). For any matrix \(M\), \(M^{T}\) is its transpose with respect to some fixed basis, and for any \(M\in\mathfrak{B}(X\otimes Y)\), the partial transpose on system \(X\) is denoted by \(M^{T_{X}}\). The Schatten \(p\)-norm of an operator \(X\) is defined as \(\left\|X\right\|_{p}:=\operatorname{Tr}\bigl{[}(X^{\dagger}X)^{p/2}\bigr{]}^{ 1/p}=\{\sum_{i}(s_{i}(X))^{p}\}^{1/p}\) where \(s_{i}(X)\) is the \(i\)-th largest singular value of \(X\). The (Uhlmann) fidelity between two quantum states is defined as \(F(\rho,\sigma):=\left\|\sqrt{\rho}\sqrt{\sigma}\right\|_{1}^{2}\).
The space of all linear maps from \(\mathfrak{B}(X)\) to \(\mathfrak{B}(Y)\) is denoted by \(\mathfrak{L}(X,Y)=\mathfrak{B}(\mathfrak{B}(X),\mathfrak{B}(Y))\) and we will use the shorthand notation \(\mathfrak{L}(X):=\mathfrak{L}(X,X)\). The set of all quantum states on system \(X\) by \(\mathfrak{S}(X)\) and the set of all quantum channels (completely positive and trace-preserving linear maps) from system \(X\) to \(Y\) by \(\mathfrak{C}(X,Y)\) with \(\mathfrak{C}(X):=\mathfrak{C}(X,X)\). Similarly we denote the set of all quantum subchannels (completely positive trace non-increasing linear maps) by \(\tilde{\mathfrak{C}}(X,Y)\) and \(\tilde{\mathfrak{C}}(X):=\tilde{\mathfrak{C}}(X,X)\). We denote the identity map on system \(X\) by \(\text{id}_{X}\). For any completely positive map \(\mathcal{N}=\sum_{i}\text{Ad}_{K_{i}}\), we define its transpose as \(\mathcal{N}^{T}:=\sum_{i}\text{Ad}_{K_{i}^{T}}\).
\(J_{XX^{\prime}}^{\mathcal{N}}\) is the Choi matrix of \(\mathcal{N}\in\mathfrak{L}(X)\) defined as \(J_{XX^{\prime}}^{\mathcal{N}}:=\mathcal{N}_{X}(\phi_{XX^{\prime}}^{+})\) where \(\phi_{XX^{\prime}}^{+}=\left|\phi^{+}\right\rangle\!\!\left\langle\phi^{+} \right|_{XX^{\prime}}\) is a maximally entangled state with \(\left|\phi^{+}\right\rangle_{XX^{\prime}}=\left|X|^{-1/2}\sum_{i}\left|ii\right\rangle _{XX^{\prime}}\). The mapping \(J:\mathfrak{L}(X)\rightarrow\mathfrak{B}(X\otimes X^{\prime})\) defined as \(J(\mathcal{M}):=J_{XX^{\prime}}^{\mathcal{M}}\) itself is called the Choi-Jamiolkowski isomorphism [7, 8]. Unnormalized state \(\sum_{i}\left|ii\right\rangle_{XX^{\prime}}\) will be denoted by \(\left|\Gamma\right\rangle_{XX^{\prime}}\). We call a linear map from \(\mathfrak{L}(X)\) to \(\mathfrak{L}(Y)\) a _supermap_ from \(X\) to \(Y\) and denote the space of supermaps from \(X\) to \(Y\) by \(\mathfrak{S}\mathfrak{L}(X,Y)\) and let \(\mathfrak{S}\mathfrak{L}(X):=\mathfrak{S}\mathfrak{L}(X,X)\). Supermaps preserving quantum channels even when it only acts on a part of multipartite quantum channels are called _superchannel_[2, 9, 10, 11, 12, 13, 14] and the set of all superchannels from \(X\) to \(Y\) is denoted by \(\mathfrak{S}\mathfrak{C}(X,Y)\) and we let \(\mathfrak{S}\mathfrak{C}(X):=\mathfrak{S}\mathfrak{C}(X,X)\). We say a superchannel \(\Omega\in\mathfrak{S}\mathfrak{C}(X)\) is _superunitary_ if there are \(U_{0}\) and \(U_{1}\) in \(\mathfrak{U}(X)\) such that \(\Omega(\mathcal{N})=\text{Ad}_{U_{1}}\circ\mathcal{N}\circ\text{Ad}_{U_{0}}\) for all \(\mathcal{N}\in\mathfrak{L}(X)\).
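As a concrete illustration of this notation, the following minimal numerical sketch (ours, not from the paper; the depolarizing channel and all names are illustrative choices) builds the Choi matrix \(J_{XX^{\prime}}^{\mathcal{N}}=\mathcal{N}_{X}(\phi_{XX^{\prime}}^{+})\) from a Kraus decomposition and reads off complete positivity and trace preservation from it:

```python
import numpy as np

def choi_matrix(kraus_ops, d):
    """Choi matrix J = (N x id)(|phi+><phi+|) of the channel N(rho) = sum_k K rho K^dag."""
    phi = np.zeros(d * d, dtype=complex)
    phi[::d + 1] = 1.0 / np.sqrt(d)                 # |phi+> = |X|^{-1/2} sum_i |ii>
    rho_plus = np.outer(phi, phi.conj())            # maximally entangled state on X X'
    J = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        KI = np.kron(K, np.eye(d))                  # the channel acts on the X factor only
        J += KI @ rho_plus @ KI.conj().T
    return J

# Illustrative channel: single-qubit depolarizing map N(rho) = (1 - p) rho + p I/2.
p = 0.3
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - 3 * p / 4) * np.eye(2), np.sqrt(p / 4) * X,
         np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]

J = choi_matrix(kraus, 2)
# Complete positivity  <=>  J >= 0;  trace preservation  <=>  Tr_X J = pi_{X'}.
print(np.min(np.linalg.eigvalsh(J)) >= -1e-12)                        # True
J4 = J.reshape(2, 2, 2, 2)                                            # axes (x, x', y, y')
print(np.allclose(np.einsum('ijik->jk', J4), np.eye(2) / 2))          # True
```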
We define the 'Choi map' \(\mathbb{J}[\Theta]\in\mathfrak{L}(X\otimes X^{\prime},Y\otimes Y^{\prime})\) of a supermap \(\Theta\in\mathfrak{S}\mathfrak{L}(X,Y)\) in such a way that the following diagram commutes, i.e.,

\[\mathbb{J}[\Theta]\circ J=J\circ\Theta. \tag{1}\]
Similarly, we define the inverse of the Choi map \(\mathbb{J}^{-1}[\mathcal{N}]\in\mathfrak{S}\mathfrak{L}(X,Y)\) of a linear map \(\mathcal{N}\in\mathfrak{L}(X\otimes X^{\prime},Y\otimes Y^{\prime})\) in such a way that the following diagram commutes, i.e.,

\[J\circ\mathbb{J}^{-1}[\mathcal{N}]=\mathcal{N}\circ J. \tag{2}\]
## 2 Generalized transpose
Imagine that an experimenter observes a quantum system evolving with the passage of time. The process may appear to have well-defined input and output systems for the experimenter. However, how can one be sure that the quantum system experiences the same passage of time as the classical experimenter outside of the system? This seemingly obvious question is actually highly nontrivial considering the fact that time is not a universal parameter shared by all the systems but a quantity that should be observed by physical means, as one can see from the difficulty in constructing a satisfactory quantum clock [15, 16]. The possibility of superposition of multiple time evolutions has been studied for decades [17]. Especially, with the recent development of indefinite causal structures of quantum systems [18], it is evident that there are no a priori reasons to assume that a quantum process has a unique temporal axis.
Nevertheless, if an experimenter can prescribe a valid description of a given quantum process, e.g., a completely positive trace preserving (CPTP) map, or a quantum channel, then we can conclude that at least one temporal structure, that is, the one the experimenter follows, is compatible with the given quantum process. However, that is by no means the unique temporal structure compatible with the process. A quantum process connects input and output systems, but the distinction between them is made from the perspective of the experimenter; there could be other partitionings of the input-output joint system into an alternative input-output system pair (See FIG. 1.) One could consider this a new type of symmetry a quantum process could have. Then, a natural question follows: How can one describe the corresponding symmetry transformation? (See Sec. 3.1 for an extended discussion on the necessity of studying temporally indefinite quantum states.)
In this work, we construct such a symmetry transformation by generalizing the matrix transposition. The input-output inversion of a quantum operation is given by the transpose operation \(M\mapsto M^{T}\)[1], which can be understood as a rotation by \(180^{\circ}\) in the tensor-diagram notation, i.e.
\[\text{[tensor diagram: the box representing }M\text{, rotated by }180^{\circ}\text{, gives }M^{T}\text{]} \tag{3}\]
The unitary operator is the complex generalization of the orthogonal matrix, hence it generalizes the action of rotating to complex Hilbert spaces. First, we observe that we can stretch and curve the wires in (3) to transform it into
[Tensor-diagram equations (4)–(11): the rotated box of (3) is redrawn with stretched and curved wires, and the generalized transposition \(M\mapsto M^{T[W]}\) is then defined by letting a bipartite unitary \(W\) act jointly on the input and output wires of \(M\) with respect to fixed bases \(\{\left|i\right\rangle\}\) and \(\{\left|j\right\rangle\}\) (cf. (8)); the last of these equations, (11), specifies the action \(\mathfrak{T}[W](\mathcal{N})(\sigma)\) of the associated supermap \(\mathfrak{T}[W]\)]
for any \(\sigma\in\mathfrak{B}(A^{\prime})\) and \(\mathcal{N}\in\mathfrak{L}(A)\). For the sake of brevity, we will sometimes use the notation \(\mathcal{N}^{T[W]}:=\mathfrak{T}[W](\mathcal{N})\). This seemingly complicated definition of superchannel \(\mathfrak{T}[W]\) is given in this way so that \((\text{Ad}_{M})^{T[W]}=\text{Ad}_{M^{T[W]}}\). From this, one can easily see that if
\[\mathcal{N}(\rho)=\sum_{n}c_{n}K_{n}\rho K_{n}^{\dagger}, \tag{12}\]
with complex numbers \(c_{n}\in\mathds{C}\), then
\[\mathcal{N}^{T[W]}(\rho)=\sum_{n}c_{n}K_{n}^{T[W]}\rho K_{n}^{T[W]\dagger}. \tag{13}\]
One important distinction should be made at this point. Although they share the same mathematical form, the generalized transposition defined here is for quantum processes, not quantum states. Transposition acting on density matrices is important for testing NPT entanglement [15, 19], but it does not necessarily carry the operational meaning of reversing the input and output systems of a quantum process.
Given our tool for describing symmetry transformations of temporal structures in quantum processes, we can now define the compatibility of a quantum process with multiple temporal structures using generalized transposition. It is a direct generalization of _bidirectional operations_ corresponding to the conventional transposition considered in Ref. [1].
**Definition 1**.: A quantum channel \(\mathcal{N}\) is compatible with a generalized transposition \(T[W]\) when \(\mathcal{N}^{T[W]}\) is also a channel.
As closed quantum systems evolve with time via unitary operations, it is considered that unitary operations are basic building blocks of time evolution of quantum systems. We immediately get the following result on the generalized transposition of unitary operations by simply observing that \(\operatorname{Tr}\circ\text{Ad}_{U^{T[W]}}=\operatorname{Tr}\) is equivalent to \(U^{T[W]\dagger}U^{T[W]}=\mathds{1}\).
**Proposition 2**.: If a unitary operation \(\mathcal{U}\) is compatible with \(T[W]\), then \(\mathcal{U}^{T[W]}\) is also a unitary operation.
Formally it is obviously possible to generalize the generalized transpose even further by letting the unitary operation \(W\) to be a general quantum channel, but we focus on unitary cases in this work. It is mainly because allowing for irreversible quantum operations seems to go against the interpretation of the generalized transpose as a coordinate transformation of future and past Hilbert spaces, not an active joint evolution of future and past systems, albeit a probabilistic implementation through quantum teleportation is possible as it is for the transposition [1].
One subclass of generalized transpositions of special interest is that of _unital generalized transpositions_. A bipartite unitary operator \(W\) has the maximally entangled state \(\sum_{i}\left|i\right\rangle\left|i\right\rangle\) as an eigenvector with eigenvalue 1 if and only if \(T[W]\) is unital, since for such \(W\), \(\mathds{1}^{T[W]}=\mathds{1}\). Note that a generalized transposition \(T[W]\) is unital if and only if it is trace-preserving, i.e. \(\operatorname{Tr}M^{T[W]}=\operatorname{Tr}M\) for all \(M\). For example, every fractional transposition, including the usual transposition, is trace-preserving and unital. Since unital generalized transpositions preserve the identity operator, they have the operational meaning of a transformation that preserves 'no event', which is desirable for a transformation of the time axis. Imagine that there exists a film that recorded no event happening at all; it is natural to expect that playing it forward, backward, or even in a quantumly weird direction of time makes no difference.
One possible problem of the definition of generalized transposition is that it may be too general to represent the transformation of tensor product decomposition because there are multiple bipartite unitary operators \(W\in\mathfrak{U}(AB)\) that preserve the tensor product structure of the Hilbert space. We can observe that the nonlocal properties of \(W\) in the definition of generalized transposition \(T[W]\) correspond to the properties of \(T[W]\) as a transformation of temporal structure, as \(W\) can be interpreted as a bipartite interaction between "future" and "past" systems.
Therefore we consider the equivalence class of bipartite unitary operators that are similar through local unitary operators, i.e., \(\langle W\rangle=\{(u_{1}\otimes u_{2})W(v_{1}\otimes v_{2}):u_{1},v_{1}\in \mathfrak{U}(A),u_{2},v_{2}\in\mathfrak{U}(B)\}\), so that every unitary operator in the same class has the same nonlocal properties. Note that every operator in the same equivalence
class transforms the tensor product structure in the same way. This leaves the problem of choosing a good representative from each equivalence class, and from its desirable properties, we hope to choose a bipartite unitary operator that induces a unital generalized transposition. When is it possible?
We say that a bipartite unitary operator _preserves maximal entanglement (ME)_ when it maps at least one maximally entangled state to a maximally entangled state. This definition when combined with the definition of the equivalence class of locally similar bipartite unitary operators yields the following result.
**Proposition 3**.: There is a unital generalized transposition \(T[V]\) with \(V\in\langle W\rangle\) if and only if \(W\) preserves ME.
Notably, every two-qubit unitary operator preserves ME [20, 21] as there are always at least four maximally entangled states that form an orthonormal basis remaining maximally entangled after the action of the unitary operator. Hence, it is conjectured that every bipartite unitary operator preserves ME, even in higher dimensions [22, 23]. This conjecture can be compactly stated with the generalized transposition.
**Conjecture 4** (UBB, [22, 23]).: For every \(W\in\mathfrak{U}(AA^{\prime})\), there exists at least one pair \((U,V)\) of unitary operators in \(\mathfrak{U}(A)\) such that
\[U^{T[W]}=V. \tag{14}\]
Especially, there is a numerical evidence of this conjecture that there is an iterative algorithm that finds a sequence of pairs of quantum states that converge to a pair of maximally entangled states related by a given bipartite unitary operator [23]. If Conjecture 4 is true, then we can always pick a representative that yields the unital generalized transposition from each equivalence class of locally similar bipartite unitary operators. It is equivalent to that the only nontrivial effect of generalized transposition to a transformation comes from its unital part, and all the other effects can be understood as unitary operation applied before and after the transformation in question.
This conjecture, when limited to the class of controlled unitary operators, is equivalent to the following problem.
**Conjecture 5** (UBB-CU (Controlled Unitary)).: For every set of \(d\) unitary operators \(\{U_{i}\}\) on a \(d\)-dimensional Hilbert space \(\mathcal{A}\), there is an orthonormal basis \(\{\ket{\psi_{i}}\}\) of \(\mathcal{A}\) such that \(\{U_{i}\ket{\psi_{i}}\}\) is also an orthonormal basis of \(\mathcal{A}\).
One can see that this conjecture is equivalent to the UBB conjecture for controlled unitary operators of the form \(\sum_{i}\ket{i}\!\bra{i}\otimes U_{i}\) from the fact that arbitrary maximally entangled pure state must have an expression of the form of \(d^{-1/2}\sum_{i}\ket{i}\otimes\ket{\psi_{i}}\) for some orthonormal basis \(\{\ket{\psi_{i}}\}\). Namely, after the action of the unitary operator, the state is transformed into \(d^{-1/2}\sum_{i}\ket{i}\otimes U_{i}\ket{\psi_{i}}\), thus \(\{U_{i}\ket{\psi_{i}}\}\) must be an orthonormal basis, too.
When expressed in this form, it is evident that the UBB-CU conjecture is also equivalent to its classical counterpart. In other words, when it is promised that a random index \(i\) will be picked and accordingly the unitary operator \(U_{i}\) will be applied to the quantum system \(A\) which contains the memory of the index value \(i\), it is natural to conjecture that there exists a deterministic process, represented by a unitary process \(\ket{i}\mapsto\ket{\psi_{i}}\), that prepares a quantum state \(\ket{\psi_{i}}\) that retains the memory of the index \(i\) after the action of \(U_{i}\). The UBB-CU conjecture supposes that exactly such a process always exists for any set of \(\{U_{i}\}\).
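For the qubit case \(d=2\), where the conjecture is known to hold, such a basis can even be written down directly: taking \(\ket{\psi_{1}}\) to be any eigenvector of \(U_{0}^{\dagger}U_{1}\) and \(\ket{\psi_{0}}\) its orthogonal complement gives \(\langle U_{0}\psi_{0}|U_{1}\psi_{1}\rangle=\lambda\langle\psi_{0}|\psi_{1}\rangle=0\). The following minimal numerical sketch (ours; it is not the iterative algorithm of Ref. [23], and the labels \(U_{0},U_{1}\) are illustrative) checks this observation for random unitaries:

```python
import numpy as np
from scipy.stats import unitary_group

# Two random single-qubit unitaries U_0, U_1: the d = 2 instance of the conjecture.
U0, U1 = unitary_group.rvs(2), unitary_group.rvs(2)

# |psi_1> = an eigenvector of V = U_0^dag U_1, |psi_0> = its orthogonal complement.
V = U0.conj().T @ U1
psi1 = np.linalg.eig(V)[1][:, 0]
psi0 = np.array([-np.conj(psi1[1]), np.conj(psi1[0])])

# {|psi_0>, |psi_1>} is orthonormal, and so is {U_0 |psi_0>, U_1 |psi_1>}.
print(np.isclose(np.vdot(psi0, psi1), 0))              # True
print(np.isclose(np.vdot(U0 @ psi0, U1 @ psi1), 0))    # True
```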
One simple example is the case of the generalized transposition corresponding to the CNOT gate, i.e., \(W=\ket{0}\!\bra{0}\otimes\mathds{1}+\ket{1}\!\bra{1}\otimes X\). The Hadamard gate \(H=\ket{+}\!\bra{0}+\ket{-}\!\bra{1}\) is compatible with \(T[W]\), as one can see from
\[H^{T[W]}=XH, \tag{15}\]
where \(X\) is the Pauli-X operators. One can unitalize \(T[W]\) by substituting \(W\) with \(W^{\prime}:=W(\mathds{1}\otimes X^{1/2}H)\), so that \(\mathds{1}^{T[W^{\prime}]}=\mathds{1}\).
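A minimal numerical sketch of this example (ours; since the defining tensor diagrams are not reproduced above, the vectorization convention below, in which the first tensor factor of \(W\) is contracted with the input wire of \(M\) and the second with its output wire, is an assumption chosen so that \(T[\mathrm{SWAP}]\) reproduces the ordinary transposition and Eq. (15) is recovered):

```python
import numpy as np

def gen_transpose(M, W):
    """Generalized transpose M^{T[W]} of a d x d matrix M for a bipartite unitary W on C^d (x) C^d.

    Assumed convention: M is vectorized as v[(j, i)] = M[i, j] (input wire first, output
    wire second), W is applied to v, and the result is folded back into a matrix.
    """
    d = M.shape[0]
    v = M.T.flatten()
    return (W @ v).reshape(d, d).T

X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
SWAP = np.eye(4)[[0, 2, 1, 3]]        # permutation matrix exchanging the two wires
CNOT = np.eye(4)[[0, 1, 3, 2]]        # W = |0><0| (x) 1 + |1><1| (x) X

M = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
print(np.allclose(gen_transpose(M, SWAP), M.T))     # T[SWAP] is the ordinary transposition
print(np.allclose(gen_transpose(M, np.eye(4)), M))  # the identity W leaves M unchanged
print(np.allclose(gen_transpose(H, CNOT), X @ H))   # Eq. (15): H^{T[CNOT]} = X H
```

Under the same assumed convention one can also check directly that \(\mathds{1}^{T[W]}=\mathds{1}\) exactly when \(W\) leaves the unnormalized vector \(\left|\Gamma\right\rangle\) invariant, matching the unitality criterion discussed above.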
We remark that even if not every bipartite unitary operator preserves ME, in light of Proposition 2, we could argue that only generalized transpositions corresponding to those preserve ME are relevant for the temporal structure of quantum processes. It is because if a \(W\) does not preserve ME, then no two unitary operators are
related to each other via the corresponding generalized transposition \(T[W]\). However, if one includes non-unitary quantum channels in the picture, then it is not a priori clear that there are no pairs of quantum channels related by a non-unital generalized transposition. We leave this as an open problem.
Note that the generalized transposition is basis-dependent, just as the conventional transposition is a basis-dependent operation. There are two layers of basis dependency, for the input and output systems, i.e., the choice of the bases \(\{|i\rangle\}\) and \(\{|j\rangle\}\) in (8). Choosing the unital representation locally similar to a given generalized transposition can be interpreted as eliminating one such basis dependency by equalizing the input and output bases.
Just as the transposition can be applied to a part of multipartite operators to define the partial transpose operation, the generalized transposition can also be applied to a part of multipartite operator. If \(M\in\mathfrak{B}(AB)\), then, for arbitrary \(W\in\mathfrak{U}(B^{\otimes 2})\), the partial generalized transposition \(T_{B}[W]\) is defined as \(T_{B}[W]:=\mathsf{id}_{A}\otimes T[W]\)
\[\text{[tensor diagram: }T_{B}[W]=\mathsf{id}_{A}\otimes T[W]\text{ acting on the }B\text{ part of }M\in\mathfrak{B}(AB)\text{]} \tag{16}\]
Using the partial generalized transposition, we can examine the compatibility of bipartite unitary operators with multiple directions of time of a local system. Assume again that two systems \(A\) and \(B\) interact through a bipartite unitary operator \(V\in\mathfrak{U}(AB)\). This assumption alone has a couple of implications. It assumes that there are two subsystems \(A\) and \(B\) that can be localized and identified, which stays so even after the interaction. Also, it also implies that \(A\) and \(B\) appear to share the same time axis during their interaction. However, this need not be the unique description of the direction of time for each system. For example, \(B\) might also appear to evolve in the direction given by a generalized transposition \(T_{B}[W]\) in time from the perspective of \(A\). In this case, for interaction \(V\) to be consistent with the new flow of time as well, its generalized transpose \(V^{T_{B}[W]}\) also should be unitary. The same argument can be applied to general quantum channels, so we give the following definition of compatibility.
**Definition 6**.: A quantum channel \(\mathcal{N}\in\mathfrak{C}(AB)\) is compatible with a generalized transposition \(T_{B}[W]\) on \(B\) when \(\mathcal{N}^{T_{B}[W]}:=(\mathsf{i}\mathfrak{d}_{A}\otimes\mathfrak{T}_{B}[W] )(\mathcal{N})\) is also a channel.
For the case of the conventional matrix transposition \(T\), bipartite unitary operators compatible with \(T\) on a subsystem are said to be t-dual or catalytic. (See Sec. 3.4 for more information.)
Finally, we examine the relation between the compatibility of a quantum process with multiple directions of time and that of its causally neighbouring processes. When seen from a broader perspective, no interaction happens in isolation, as it is embedded in a network of events (e.g., see FIG. 3). For example, at the very least, the experimenter prepares an input state and measures the output state of a given quantum channel.
We can model the ambient quantum processes as a quantum superchannel since they map a quantum channel to another quantum channel. Therefore, when we examine the consistency of causalities, it is natural to also require the physicality of the causality of the ambient superchannel. If a quantum channel \(\Phi\in\mathfrak{C}(A)\) embedded in a superchannel \(\mathfrak{F}\) is compatible with a generalized transposition \(\mathfrak{T}[W]\), then, for this generalized transposition of \(\Phi\) to be consistent with \(\mathfrak{F}\) as well, we require that \(\mathfrak{F}\circ\mathfrak{T}[W^{\dagger}]\) is also a superchannel, because (See FIG. 2.)
\[\mathfrak{F}\circ\mathfrak{T}[W^{\dagger}]\left(\mathfrak{T}[W](\Phi)\right) =\mathfrak{F}(\Phi), \tag{17}\]
Figure 2: Compatibility of superchannel with the generalized transposition of its input channel.
and because every superchannel with one input register is guaranteed to be physically implementable with a pre-process and a post-process [9]. In other words, if one tries to re-interpret a given event in a different decomposition of the spacetime, then the events surrounding it must be consistent as well in that decomposition.
This observation severely restricts which state can be fed into a multipartite unitary operator with multiple compatible temporal axes, as the following Proposition shows.
**Proposition 7**.: A state preparation superchannel given as \(\mathfrak{P}^{\sigma}(\mathcal{N}):=\mathcal{N}(\sigma)\) is compatible with a generalized transposition \(T[W]\) of its input channel, i.e., \(\mathfrak{P}^{\sigma}\circ\mathfrak{T}[W^{\dagger}]\) is a superchannel, if and only if there exists a quantum state \(\tau\) such that
\[W(\mathds{1}_{A}\otimes\sigma_{A^{\prime}}^{T})=(\mathds{1}_{A}\otimes\tau_{ A^{\prime}}^{T})W. \tag{18}\]
Note that (18) implies that \(\tau\) and \(\sigma\) are unitarily similar. The proof can be found in the Appendix. From Proposition 7, it follows that the preparation of the maximally mixed state is always compatible with an arbitrary generalized transposition of its input channel. (See Sec. 3.3 for a related discussion on factorizable maps.) Especially, for the case of time inversion, corresponding to the transposition, \(W\) is the swapping gate and Proposition 7 implies that \(\sigma=\mathds{1}/\operatorname{Tr}[\mathds{1}]\), hence we have the following Corollary.
**Corollary 8**.: The only state preparation compatible with the transposition is the preparation of the maximally mixed state.
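To see this concretely (a short check of the \(W=\mathrm{SWAP}\) case of Proposition 7, not spelled out in the text): since \(\mathrm{SWAP}\,(\mathds{1}_{A}\otimes\sigma_{A^{\prime}}^{T})=(\sigma_{A}^{T}\otimes\mathds{1}_{A^{\prime}})\,\mathrm{SWAP}\), condition (18) reduces to

\[\sigma_{A}^{T}\otimes\mathds{1}_{A^{\prime}}=\mathds{1}_{A}\otimes\tau_{A^{\prime}}^{T},\]

and taking the partial trace over \(A^{\prime}\) gives \(|A|\,\sigma^{T}=\mathds{1}_{A}\), i.e., \(\sigma=\mathds{1}/\operatorname{Tr}[\mathds{1}]\).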
This result matches our intuition that no knowledge can propagate backward in time, as feeding a non-maximally mixed state into a quantum process compatible with inverse evolution can lead to retrocausality. See Sec. 3.4 for the discussion on the constraint Proposition 7 imposes on the information exchange between two quantum systems through a bipartite quantum channel when there are multiple compatible local temporal directions.
## 3 Discussion
### Events in spacetime as a tensor network
In this Section, we delve into a more detailed discussion on spacetime regions with indefinite causal orders and how the generalized transposition can be used in the context. In conventional quantum mechanics, the quantum state of a quantum system \(|\psi(t)\rangle\) at time \(t\) can be written as
Figure 3: Suppose that the dynamics of a set of quantum systems is given as a tensor network. Without presupposing a spacetime structure, it may not be possible to assign the unique temporal axis of the evolution in the tensor network.
\[|\psi(t)\rangle=\left(\prod_{t>t^{\prime}\geq 0}U_{t^{\prime}}\right)|\psi(0) \rangle\,, \tag{19}\]
where each \(U_{t^{\prime}}\) is the unitary operator describing the time evolution from time \(t^{\prime}\) to time \(t^{\prime}+1\). Moreover, each \(U_{t^{\prime}}\) can also be decomposed into interactions between many subsystems located at \(x\), e.g. \(U_{t^{\prime}}=\bigotimes_{x}U_{(x,t^{\prime})}\). The dynamics that \(|\psi(t)\rangle\) went through can be depicted as a tensor network resembling FIG. 3, where each box is \(U_{(x,t^{\prime})}\). Therefore, once the set of unitary operators \(\{U_{(x,t)}\}\) and their connectivity are given, the dynamics of a set of quantum systems is completely decided. In other words, we can consider the dynamics of quantum systems _a net of events_ composed of unitary operators, each of which can be interpreted as a tensor. This approach shares the same spirit with the _event-universe_ approach [24, 25], which understands the universe as a tree of events, except that 'events' are unitary evolutions, or quantum channels, in this work.
However, what if we do not assume that there is a spacetime with the familiar spatio-temporal structure? What if the existence of a universal axis of time is not given as additional data outside of the Hilbert space of the universe? There are approaches to quantum gravity that treat time, which is often treated as a parameter, on the same footing as space. One of the purposes of these approaches is to recover time as an emergent entity from quantum theory without supposing its familiar properties as the temporal parameter. Notable examples include those of Cotler [26, 27], Castellani [28, 29] and Dias [30].
We can consider the following time-neutral model of the Hilbert space of the spacetime. Suppose that there exists a 'pre-spacetime' structure \(\mathcal{S}\), which parametrizes different regions of the to-be spacetime that is not necessarily having familiar spatio-temporal properties. We suppose that the Hilbert space \(\mathcal{H}\) of a part of or the whole universe can be decomposed into smaller Hilbert spaces \(\mathcal{H}_{s}\) corresponding to each point \(s\) in \(\mathcal{S}\)
\[\mathcal{H}_{\mathcal{S}}=\bigotimes_{s\in\mathcal{S}}\mathcal{H}_{s}. \tag{20}\]
One possible model of pre-spacetime \(\mathcal{S}\) is the set of points \((\mathbf{x},t)\) in the familiar spacetime before the spatio-temporal structure is assigned. In some literature, the Hilbert space of a quantum system including its behavior through the passage of time is called the _history Hilbert space_[27, 31, 32], hence we will use the same nomenclature for \(\mathcal{H}_{\mathcal{S}}\) in (20). However, since we are yet to allocate the temporal parametric role to parameter \(t\) in \((\mathbf{x},t)\), we stick to the temporally neutral notation \(s\) for each region in \(\mathcal{S}\). Also, for the sake of simplicity, we suppose that the structure of \(\mathcal{S}\) is discrete by interpreting each \(s\) as a region in spacetime rather than a point. Working with continuous tensor product requires the full kit of Fahri-Gutmann path integral formalism [33], which goes beyond the scope of the current work. Generalization to the continuous regime is an interesting future work.
Assuming the existence of a net of events is not conceptually more demanding than other approaches [34, 35] that assume a pre-existing Hamiltonian attached to each Hilbert space. This is immediate from the fact that assuming that a Hamiltonian \(H\) governs a quantum system is mathematically equivalent to assuming that the dynamics of the quantum system is described by the unitary operator \(U:=\exp\{-iHt/\hbar\}\). Nevertheless, in this work, we deem the picture of unitary operators and the corresponding quantum channels constituting the history of the universe conceptually clearer than the picture of Hamiltonians living outside of the Hilbert space yet inducing the dynamics of quantum systems indirectly.
Additionally, we will work on a plausible assumption that every \(\mathcal{H}_{s}\) is finite-dimensional and isomorphic with each other. Assuming isomorphic structure amounts to assuming a sort of translational symmetry of the pre-spacetime \(\mathcal{S}\) which is usually done in cosmology. Also, there are good reasons to assume that the Hilbert space of the universe is locally finite-dimensional based on the arguments such as the finiteness of the entropy of black holes [36].
Once the history Hilbert space is defined, we assume that an event of the universe is given as a tensor \(X\in\bigotimes_{s\in\mathcal{S}^{\prime}}\mathcal{H}_{s}\) with some \(\mathcal{S}^{\prime}\subseteq\mathcal{S}\) connecting different regions of the pre-spacetime \(\mathcal{S}\). However, we are more familiar with the interpretation of an event as an operator transforms a state into another, hence, whenever we can divide the support Hilbert space of \(X\) (we will sometime call this the history Hilbert space of \(X\)) into two disjoint same-dimensional regions \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\), we tentatively interpret \(X\) as an operator between these regions
as follows.
\[X:\bigotimes_{s\in\mathcal{S}_{0}}\mathcal{H}_{s}\to\bigotimes_{s\in\mathcal{S}_{1 }}\mathcal{H}_{s}. \tag{21}\]
This kind of identifying tensor with operator is often done in the field of quantum scrambler [37]. However, it may not be possible to interpret \(X\) as a time-evolution from \(\bigotimes_{s\in\mathcal{S}_{0}}\mathcal{H}_{s}\) to \(\bigotimes_{s\in\mathcal{S}_{1}}\mathcal{H}_{s}\), as the input and output systems can be arbitrarily chosen since \(X\) was given as a tensor in \(\bigotimes_{s\in\mathcal{S}_{0}}\bigcup_{\mathcal{S}_{1}}\mathcal{H}_{s}\) and the operator interpretation was rather arbitrary. At this stage, we only have a set of Hilbert spaces corresponding to each distinguishable points in the 'pre-spacetime' whose physical meaning is still unclear and a set of rather abstract 'events' given as tensors on the Hilbert spaces.
Now, we hope to study the spatio-temporal structure that emerges from the given network of events. This approach is particularly motivated by the recent results including the HaPPY code [38], where the correspondence between the tensor network and spacetime emerged is explored. Especially, it was shown that the correspondence with tensor network can be extended from that of space-like time slices to the whole spacetime, recently [39].
For the case of pre-spacetime \(\mathcal{S}\) without a presupposed temporal structure, there need not be a unique axis of time across the whole tensor network. As an example, one can imagine the following interaction \(X\) between four regions \(s_{i}\) in \(\mathcal{S}\).
\[\text{[tensor diagram: a four-legged tensor }X\text{ with legs }s_{0},s_{1}\text{ at the bottom and }s_{2},s_{3}\text{ at the top]} \tag{22}\]
One could consider \(X\) as a tensor in \(\bigotimes_{i=0}^{3}\mathcal{H}_{s_{i}}\). However, without a presupposed temporal parameter, there is no _a priori_ reason to suppose that there is a causal order between the regions, as it is evident from that the Hilbert space \(\bigotimes_{i=0}^{3}\mathcal{H}_{s_{i}}\) has infinitely many ways to be decomposed into the tensor product of input and output spaces.
In other words, by changing the basis, the tensor \(X\) as a vector in \(\bigotimes_{i=0}^{3}\mathcal{H}_{s_{i}}\) can appear as \(UX\) with some unitary operator \(U\) on \(\bigotimes_{i=0}^{3}\mathcal{H}_{s_{i}}\). This unitary operation can nontrivially affect the operator interpretation of \(X\) (say, from \(s_{0}s_{1}\) to \(s_{2}s_{3}\)), but the description of the effect can be complicated when one is working with the operator interpretation. In the next Section, we will develop a language that can concisely describe this type of transformation of operators.
To recover the temporal structure, we adopt the approach taken in the studies of quantum causal models [40] which assumes that quantum dynamics is fundamentally unitary even when the causal relationship between events is unclear. Say, if \(X\) is a unitary operator from \(s_{0}s_{1}\) to \(s_{2}s_{3}\) in (22), then one could give an interpretation to \(X\) as the time evolution from the temporally preceding regions \(s_{0}s_{1}\) to the temporally succeeding regions \(s_{2}s_{3}\). Hence, although there may not exist a unique global time axis in \(\mathcal{S}\), there still could be a "local time axis", given in terms of the decomposition of its supporting Hilbert space into "future" and "past" subsystems, allowing us to interpret each local interaction as a unitary evolution along some direction in \(\mathcal{S}\).
Of course, since the inverse of a unitary operator is still unitary, there is no canonical distinction between "future" and "past" yet; this distinction should be made by other means, e.g. the second law of thermodynamics. Therefore, terms like axis or direction of time should be understood with this symmetry in mind. It is somewhat similar to the fact that both \(\ket{\psi}\) and \(-\ket{\psi}\) represent the same quantum state; only the relative difference between the orientations of axes is important, as we will see later.
This type of process of finding natural decomposition of the whole Hilbert space into distinguishable'subsystems' from the given dynamics is called quantum mereology [35]. The decomposition studied in Ref. [35] was the bipartition into "system" and "environment" and the criterion was the emergence of quasiclassical behavior. Moreover, there are approaches to explain the emergence of spatial structure from the Hilbert space structure and the given dynamics alone [41, 42, 36, 43]. The goal of this work is in a similar vein, but the interest of this work is more focused on the emergence of temporal structure in microscopic dynamics, and for doing so we identify the decomposition of the history Hilbert space into future and past systems, which allows the unitary time evolution description of each interaction tensor.
We will call a tensor (or an operator that can be interpreted as a tensor) \(Y\in\mathcal{K}\) a _dynamics tensor_ if there exists some tensor product decomposition \(\mathcal{K}=\mathcal{K}_{1}\otimes\mathcal{K}_{2}\) with \(\mathcal{K}_{1}\approx\mathcal{K}_{2}\) such that \(Y\) is unitary when understood as an operator from \(\mathcal{K}_{1}\) to \(\mathcal{K}_{2}\). In other words, \(Y\) is a dynamics tensor if it is possible to interpret it as a time
evolution with respect to some spacetime structure. The following result shows that no special type of tensor network is necessary to represent a dynamics on a pre-spacetime as long as each tensor is properly normalized.
**Theorem 9**.: Every operator \(X\in\mathfrak{B}(\mathcal{K})\) is proportional to a dynamics tensor. Especially, every \(X\) with \(\|X\|_{2}=|\mathcal{K}|^{1/2}\) is a dynamics tensor.
See the Appendix for omitted proofs, including that of Theorem 9. Treating unitarity as a guideline for assigning temporal order to the pre-spacetime \(\mathcal{S}\) may help explain the emergence of the temporal structure of spacetime from its tensor network structure, but still there remain some ambiguities. Especially, if \(X\) is a _perfect tensor_ considered in Ref. [38], then any bipartition of \(s_{0}s_{1}s_{2}s_{3}\) yields a unitary operator, hence it is compatible with any temporal direction across the diagram in (22). Thus, unitarity alone may not yield the unique direction of time.
As we discussed about the unital generalized transpositions and their probable correspondence with temporal axis transformation, now we consider the restriction of the definition of dynamical tensor where the transformation is restricted to unital generalized transpositions. We say that an operator \(X\in\mathfrak{B}(\mathcal{K})\) is a _proper dynamics tensor_ if there exists a unital generalized transposition \(T[W]\) with some \(W\in\mathfrak{U}(\mathcal{K}^{\otimes 2})\) such that \(X^{T[W]}\) is unitary.
**Theorem 10**.: Every operator \(X\in\mathfrak{B}(\mathcal{K})\) is proportional to a proper dynamics tensor if \(|\mathcal{K}|\) is even. Especially, every \(X\) with \(\|X\|_{2}=|\mathcal{K}|^{1/2}\) is a proper dynamics tensor.
Theorem 10 implies that when each subsystem is assumed to be even-dimensional, an arbitrary tensor network with properly normalized tensors can be understood as a 'net of events' in which each constituting tensor can be seen as a unitary evolution operator after a 'rotation of the time axis' represented by a unital generalized transposition.
This result lessens the weight of the assumption needed to justify the 'spacetime as a tensor network' approach. According to this viewpoint, there may not be a single universal axis of time in the universe, but each subsystem could experience time as it hops around the given tensor network of dynamics tensors in whatever direction yields the unitary evolution interpretation for the adjacent dynamics tensor. As there is no unique time axis, each subsystem may 'clash' while hopping around the tensor network in different directions. However, this does not necessarily mean that the model is ill-defined or that there is a contradiction, since by satisfying a set of conditions, the interaction between quantum systems with multiple relative configurations of axes of time can be consistent. We will discuss those conditions in Sec. 3.4.
We remark that we do not claim that we found a canonical way to explain the emergence of the unique arrow of time from any tensor network structure of (pre-)spacetime. But we highlight the fact that there could be multiple quantum systems with multiple compatible directions of time in temporally neutral approaches to quantum gravity, and that the generalized transposition provides a mathematical tool to deal with the symmetry transformation of that structure.
This approach bears some similarity with time-symmetric operational formulation of quantum theory such as Oreshkov and Cerf [44, 45] where quantum theory is analyzed without presupposing background spatio-temporal structure. The main difference is that in this work we accept the existence of well-defined local direction of time by interpreting each event as a unitary evolution.
### Generalized perfect tensors
If we were to pursue the approach where a tensor network of events could model the structure of spacetime, then it is natural to expect that the constituting tensors are covariant under a certain class of generalized transposes, as we expect physical laws at each point of space to be 'isotropic' in some sense if there is no _ad hoc_ temporal axis determined beforehand.
In this Section, we consider continuous generalizations of the class of tensors with particular symmetry known as _perfect tensors_. A perfect tensor \(X\) in \(\bigotimes_{i=1}^{n}\mathcal{H}_{i}\) is a tensor which is unitary for any bipartition \(A,B\) of \(\{1,2,\cdots,n\}\) into input and output nodes with \(|A|=|B|\). This definition can be re-expressed in terms of generalized transpositions. We say that \(X\) is a perfect tensor, when \(X\) is understood as a matrix for some
fixed choice of input and output nodes mentioned above, if \(X^{T[P]}\) with arbitrary permutation \(P\) of \(n\) systems is unitary.
Hence, when understood as a tensor describing dynamics in spacetime as in Section 3.1, one can say that a perfect tensor has an 'isotropic' spatio-temporal structure to some extent, in the sense that it stays unitary even after arbitrary permutation of its nodes. Thus, if one intends to explain the emergence of the spacetime structure from the quantum nature of the universe, then one might think that it is desirable to assume that each dynamics tensor is perfect, so that no particular temporal structure is presupposed.
However, as we discussed in Section 3.1, even if we assume that the history Hilbert space of the universe is given as \(\mathcal{H}=\bigotimes_{s\in\mathcal{S}}\mathcal{H}_{s}\), one can always express \(\mathcal{H}\) with another tensor product structure with some \(\mathcal{Z}\), i.e. \(\mathcal{H}=\bigotimes_{z\in\mathcal{Z}}\mathcal{H}_{z}\). In this regard, the definition of perfect tensor has a shortcoming that it only considers permutation of tensor components of an already fixed tensor product structure of the Hilbert space. This type of property is similar to basis-dependent properties of vectors in the sense it depends on the choice of tensor product structure.
If we were to examine the isotropy of a tensor with respect to arbitrary unitary transformation of Hilbert space, then we must define the following object we will call a _totally perfect tensor_. A tensor \(X\in\bigotimes_{i}\mathcal{H}_{i}\otimes\bigotimes_{i}\mathcal{K}_{i}\) is totally perfect if (when understood as an operator in \(\mathfrak{B}(\bigotimes_{i}\mathcal{H}_{i},\bigotimes_{i}\mathcal{K}_{i})\)) \(X^{T[W]}\) is unitary for any unitary operator \(W\in\mathfrak{U}(\bigotimes_{i}\mathcal{K}_{i}\otimes\bigotimes_{i}\mathcal{H }_{i})\). Are there totally perfect tensors and do they form isotropic building blocks of spacetime? Or is it the case that there are no such things and the symmetry of choosing regions of spacetime is necessarily broken to some extent? We show that it is the latter.
**Theorem 11**.: There are no totally perfect tensors.
Proof.: Suppose that \(M\) is totally perfect. Then, by letting \(W=V(M^{\dagger}\otimes\mathds{1})\) for an arbitrary unitary operator \(V\), we have \(M^{T[W]}=\mathds{1}^{T[V]}\). Now, by choosing \(V\) such that \(\mathds{1}^{T[V]}\) is non-unitary, we get the desired result. One such example is the generalized controlled-CNOT gate given as \(V=\sum_{i}S^{-i}\otimes|i\rangle\!\langle i|\), where \(S=\sum_{n}|n\oplus 1\rangle\!\langle n|\) where \(\oplus\) is the modular summation operation. For such \(V\), we have \(\mathds{1}^{T[V]}=|0\rangle\left(\sum_{i}\left\langle i\right|\right)\), which is rank-\(1\), hence obviously non-unitary.
One can understand Theorem 11 as the converse of Theorem 9. Theorem 11 shows that there is no tensor that is unitary with respect to an arbitrary tensor decomposition of the ambient Hilbert space. It implies that every dynamics tensor has a set of preferred (or disfavored, to be more accurate) decompositions of the history Hilbert space, although some ambiguity may remain. This observation helps explain the emergence of a definite pre-spacetime structure, as Theorem 11 implies that not every superposition of points in the pre-spacetime can be interpreted as a legitimate subsystem participating in physical interactions.
A natural follow-up question is whether there are operators (tensors) that remain unitary under smaller classes of generalized transpositions. This question is natural since one might guess that totally perfect tensors fail to exist simply because the set of all generalized transpositions is too large for an operator to remain unitary under all of them. We first consider the class of unital generalized transpositions, as they have the desirable property of preserving the zero event. We call an operator \(M\) a _properly perfect tensor_ if \(M^{T[W]}\) is unitary whenever the generalized transposition is unital, i.e. \(\mathds{1}^{T[W]}=\mathds{1}\). We will call operators of the form \(\alpha\mathds{1}\) with some complex number \(\alpha\) scalar operators.
**Proposition 12**.: There are no properly perfect tensors other than scalar operators.
Proof.: Assume that \(M\in\mathfrak{B}(A)\) is a properly perfect tensor that is not a scalar operator, i.e., \(M\neq\alpha\mathds{1}\) for any complex number \(\alpha\). Note that \(M\) is unitary by definition. Let us decompose \(M\) into the trace part and the traceless part. In other words, there exists a traceless operator \(S\) with \(\|S\|_{2}=|A|^{1/2}\) that allows for the following form of expansion of \(M\)
\[M=\cos\theta\ \mathds{1}_{A}+\sin\theta S, \tag{23}\]
for some real value \(\theta\) such that \(\sin\theta\neq 0\). (Further inspection reveals that \(S\) should be unitary, too.) As \(S\) is traceless, we can see that \(\left|\phi^{+}\right\rangle_{AA^{\prime}}\) and \(\left(S\otimes\mathds{1}_{A^{\prime}}\right)\left|\phi^{+}\right\rangle_{AA^{\prime}}\) are orthogonal. This means that one can construct a unitary operator \(W\) on \(AA^{\prime}\) that maps \(\left|\phi^{+}\right\rangle_{AA^{\prime}}\) to itself and \(\left(S\otimes\mathds{1}_{A^{\prime}}\right)\left|\phi^{+}\right\rangle_{AA^{\prime}}\) to
\((X\otimes\mathds{1}_{A^{\prime}})\left|\phi^{+}\right\rangle_{AA^{\prime}}\), where \(X=\sum_{i}|i\oplus 1\ (\text{mod }|A|)\rangle\!\langle i|\) is the generalized Pauli \(X\) operator. Note that \(X\) is also traceless. For such \(W\), \(T[W]\) is unital, hence we have
\[N:=M^{T[W]}=\cos\theta\ \mathds{1}_{A}+\sin\theta X. \tag{24}\]
This operator cannot be unitary since \(N^{\dagger}N=\mathds{1}_{A}+\sin\theta\cos\theta(X+X^{\dagger})\) and the second term does not vanish. This contradicts the assumption that \(M\) is a properly perfect tensor, hence the desired result follows.
Proposition 12 implies that non-scalar dynamics tensors not only have disfavored spacetime structures as a whole, but also have disfavored temporal structures, if we accept the correspondence between transformations of time axis and unital generalized transpositions.
Nevertheless, as we have seen before, in every dimension there are unitary operators that are also symmetric, i.e. \(U=U^{T}\). (Note that the direct sum of such operators is also symmetric and unitary.) They are also invariant under fractional transpositions, hence they remain unitary after an arbitrary fractional transposition. Hence, we will call an operator \(M\) that remains unitary after every fractional transposition, i.e. such that \(M^{T(\theta)}\) is unitary for every \(\theta\), a _rotationally perfect tensor_, and summarize the observation given above as follows. Although we still lack a complete geometric interpretation of the fractional transposition, one can think of rotationally perfect tensors as tensors that admit a time-evolution interpretation along any axis in the space-time plane.
**Proposition 13**.: There are rotationally perfect tensors in every dimension.
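A minimal numerical sketch (ours, not the paper's) of the existence claim: \(U=VV^{T}\) is symmetric and unitary for any unitary \(V\), in any dimension. That such symmetric unitaries are invariant under fractional transpositions is taken from the text above and is not re-verified here.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 3, 5):
    # QR decomposition of a complex Gaussian matrix yields a unitary V
    V, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    U = V @ V.T                                    # candidate symmetric unitary
    assert np.allclose(U, U.T)                     # symmetric: U = U^T
    assert np.allclose(U.conj().T @ U, np.eye(d))  # unitary
```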
### Supertrace and Factorizable Maps
In this Section, we introduce a mathematical tool, related to the generalized transposition, for modelling the loss of _dynamical quantum information_. Quantum superchannels are transformations that map quantum channels to quantum channels. Their formal similarity with quantum channels has enabled many results about quantum channels to be translated over to quantum superchannels, but not every component of static quantum information processing has been translated into the language of the dynamical setting.
One such component is information loss. In the static setting, the loss of quantum information is modelled with the (partial) trace operation, and the causality of quantum operations is also formulated in terms of the trace operation. However, to the best of our knowledge, there is no analogue of the trace operation for quantum channels, even though one can naturally lose all the information about the input and output of a quantum channel.
We propose the _supertrace_ as the superchannel counterpart of the trace operation, denoted by \(\mathfrak{T}\mathfrak{r}\). The supertrace is defined so that the Choi isomorphism \(J\) intertwines it with the ordinary trace:
\[\mathrm{Tr}\circ J=J\circ\mathfrak{T}\mathfrak{r} \tag{25}\]
In other words, \(\mathfrak{T}\mathfrak{r}=\mathbf{J}^{-1}[\mathrm{Tr}]=J^{-1}\circ\mathrm{Tr}\circ J\). Here, we slightly abused the notations by identifying isomorphic trivial Hilbert spaces \(\mathsf{C}^{*}\approx\mathds{C}\approx\mathfrak{L}(\mathds{C})\approx \mathfrak{B}(\mathds{C}\otimes\mathds{C})\) and letting \(J:\mathfrak{L}(\mathds{C})\rightarrow\mathfrak{B}(\mathds{C}\otimes\mathds{C})\) be identified with \(\mathrm{id}_{\mathds{C}}\). Equivalently, \(\mathfrak{T}\mathfrak{r}[\mathcal{M}]:=\mathrm{Tr}\!\left[J_{XX^{\prime}}^{ \mathcal{M}}\right]=\mathrm{Tr}\!\left[\mathcal{M}(\pi_{X})\right]\) for all \(\mathcal{M}\in\mathfrak{L}(X)\). Similarly to partial trace, \(\mathfrak{T}\mathfrak{r}_{X}\) is a shorthand expression of \(\mathfrak{T}\mathfrak{r}_{X}\otimes\mathfrak{i}\mathfrak{o}_{\tilde{X}}\). It is consistent with the definition of map reduction of Ref. [46] where it was defined only for semicausal maps.
Note that the supertrace lacks a few tracial properties such as cyclicity when applied naively, i.e., \(\mathfrak{T}\mathfrak{r}[\mathcal{A}\circ\mathcal{B}]\neq\mathfrak{T}\mathfrak{ r}[\mathcal{B}\circ\mathcal{A}]\) in general on \(\mathfrak{L}(X)\). However, it generalizes the operational aspect of trace as the discarding action. For example, every quantum channel \(\mathcal{N}\) is normalized in supertrace, i.e., \(\mathfrak{T}\mathfrak{r}[\mathcal{N}]=1\). Not only that, if some linear map \(\mathcal{M}\) is a quantum channel in _some_ configuration, i.e., if \(\mathcal{M}^{T[W]}\) is a quantum channel for some \(W\), then it is still normalized, \(\mathfrak{T}\mathfrak{r}[\mathcal{M}]=1\). We leave a remark that it is unrelated to the supertrace \(\mathsf{Str}\) frequently used in the field of supersymmetry [47] or the supertrace \(\hat{\mathrm{Tr}}\) defined as an operator on endomorphism spaces [48].
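A minimal numerical sketch of the normalization property, using the formula \(\mathfrak{T}\mathfrak{r}[\mathcal{M}]=\mathrm{Tr}[\mathcal{M}(\pi_{X})]\) above; the helper names are ours and the maps are specified by Kraus operators for concreteness.

```python
import numpy as np

def apply_map(kraus, rho):
    """Apply a map given by Kraus operators {K_i}: rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def supertrace(kraus, d):
    """Supertrace Tr[M(pi)] with pi the maximally mixed state on a d-level system."""
    return np.trace(apply_map(kraus, np.eye(d) / d)).real

d = 2
dephasing = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # a quantum channel
print(supertrace(dephasing, d))                           # 1.0: channels are normalized
attenuate = [np.sqrt(0.5) * np.eye(d)]                    # trace-decreasing map
print(supertrace(attenuate, d))                           # 0.5: not a channel
```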
We also remark that the normalization condition of quantum processes in Oreshkov and Cerf's generalized process-theoretic approach to quantum theory without predefined time [44, 45] can be compactly expressed with the supertrace. A quantum operation in their formalism is a CP map \(\mathcal{M}\) that has unit supertrace: \(\mathfrak{Tr}[\mathcal{M}]=1\).
The supertrace yields another way of marginalizing multipartite quantum channels. In other words, we can apply the supertrace to a local system of a bipartite channel to get the quantum channel on the other quantum system. It has an advantage over the alternative definition of marginalization of a quantum channel, which requires the bipartite channel to be no-signalling [49]: the supertrace can be applied to any bipartite channel.
Oftentimes, quantum channels are referred to as deterministic quantum processes, in the sense that they preserve the trace of the input state so that the transformation from the input state to the output state is guaranteed. However, a critical review of their implementation is needed to examine whether they can be realized truly deterministically. The Stinespring dilation theorem [50] implies that for every quantum channel \(\mathcal{N}\in\mathfrak{C}(X)\), there exists an 'ancillary system' \(Y\) and a unitary operation \(\mathcal{U}\in\mathfrak{U}\mathfrak{O}(XY)\) such that, for every \(\rho\in\mathfrak{B}(X)\),
\[\mathcal{N}(\rho)=\mathrm{Tr}_{Y}\,\mathcal{U}(\rho_{X}\otimes|0\rangle\! \langle 0|_{Y}), \tag{26}\]
for some \(|0\rangle\) in \(Y\). We have to note that, unless \(\mathcal{N}\) is a unitary operation, \(Y\) is not a 1-dimensional system that only admits \(|0\rangle\), but a nontrivial quantum system that can be in some state other than \(|0\rangle\). Thus, one role of system \(Y\) is providing working space so that information of \(X\) can move around to produce the wanted outcome. Another role is providing _purity_. System \(Y\) is prepared in a pure state initially, so that the entropy of \(X\) can be disposed of. However, how is this pure state \(|0\rangle\) prepared? One might claim that the initialization map \(\mathcal{I}\in\mathcal{C}(Y)\) given as
\[\mathcal{I}(\sigma)=\mathrm{Tr}[\sigma]\,|0\rangle\!\langle 0|\,, \tag{27}\]
can prepare the pure state \(|0\rangle\), but any Stinespring dilation of \(\mathcal{I}\) itself, for example,
\[\mathcal{I}(\sigma)=\mathrm{Tr}_{Z}\,F(\sigma_{Y}\otimes|0\rangle\!\langle 0 |_{Z})F^{\dagger}, \tag{28}\]
with the swapping operator \(F\) requires yet another pure ancilla state \(|0\rangle_{Z}\), so the problem will be repeated _ad infinitum_. Indeed, Landauer's principle asserts that initializing a quantum system inevitably produces heat [51]; entropy can only be displaced, not destroyed, under reversible evolution. The generated heat consumes the purity of the heat absorbent and we again confront the problem of initializing the absorbent.
Another potential solution is preparing the pure state by measuring the ancilla system and choosing the wanted measurement outcomes. However, an operation that can be realized only when some probabilistic measurement outcome happens cannot be deterministic. We also interpret that not just pure states, but any non-maximally mixed quantum state indicates partial knowledge on the given quantum system.
Therefore, by the same argument, every quantum channel that can be deterministically implemented must be possible to realize with a maximally mixed ancilla system, i.e.,
\[\mathcal{N}(\rho)=\mathrm{Tr}_{Y}\,\mathcal{U}(\rho_{X}\otimes\pi_{Y}). \tag{29}\]
Quantum maps of this form are known as _(exactly) factorizable maps_. From the Stinespring dilation of (29), we can see that any factorizable map \(\mathcal{M}\) has the following simple expression in terms of supertrace,
\[\mathcal{M}=\mathfrak{T}\mathfrak{r}_{Y}\,\mathcal{U}, \tag{30}\]
with some unitary operation \(\mathcal{U}\in\mathfrak{U}\mathfrak{O}(XY)\). This expression is surprisingly similar to the purification of quantum states, i.e. a mixed state \(\rho_{A}\) can always be purified with some environment system \(B\) and purification \(|\psi\rangle_{AB}\) such that
\[\rho_{A}=\mathrm{Tr}_{B}\,|\psi\rangle\!\langle\psi|_{AB}\,. \tag{31}\]
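For comparison, a minimal sketch (ours) of the state purification (31): \(|\psi\rangle_{AB}=(\sqrt{\rho}\otimes\mathds{1})\sum_{i}|i\rangle|i\rangle\) is a purification of \(\rho_{A}\).

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)                                  # random mixed state on A

w, V = np.linalg.eigh(rho)
sqrt_rho = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

Gamma = np.eye(d).reshape(-1)                         # sum_i |i>|i>  (A slow, B fast)
psi = np.kron(sqrt_rho, np.eye(d)) @ Gamma            # purification |psi>_{AB}
proj = np.outer(psi, psi.conj())
rho_A = proj.reshape(d, d, d, d).trace(axis1=1, axis2=3)   # Tr_B |psi><psi|
assert np.allclose(rho_A, rho)
```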
Therefore, we will call \(\mathcal{U}\) in (30) the _purification_ of factorizable map \(\mathcal{M}\). See Appendix E for discussion of general factorizable maps and their relation to general \(C^{*}\)-algebras.
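A minimal numerical sketch (ours; the choice of unitary is illustrative) of an exactly factorizable map in the form (29)-(30): with the controlled-shift unitary \(U=\sum_{i}|i\rangle\!\langle i|_{X}\otimes S^{i}_{Y}\) and a maximally mixed ancilla \(\pi_{Y}\), the resulting channel on \(X\) is the completely dephasing channel.

```python
import numpy as np

d = 3
S = np.roll(np.eye(d), 1, axis=0)                     # cyclic shift on Y
basis = np.eye(d)
# Controlled shift U = sum_i |i><i|_X (x) S^i_Y
U = sum(np.kron(np.outer(basis[:, i], basis[:, i]),
                np.linalg.matrix_power(S, i)) for i in range(d))

def partial_trace_Y(M, d):
    """Trace out the second (Y) factor of an operator on X (x) Y."""
    return M.reshape(d, d, d, d).trace(axis1=1, axis2=3)

rng = np.random.default_rng(1)
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)                                  # random input state on X

pi_Y = np.eye(d) / d                                  # maximally mixed ancilla
out = partial_trace_Y(U @ np.kron(rho, pi_Y) @ U.conj().T, d)
assert np.allclose(out, np.diag(np.diag(rho)))        # completely dephasing channel
```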
By the operational meaning of factorizable maps given here, we can appreciate the physical significance of the mathematical result that not every unital map is factorizable [52]. That is, unital maps that are not factorizable require nondeterministic preparation of ancilla systems.
However, the formal similarity of purifications of factorizable maps and quantum states has limitations. For instance, the Schrodinger-HJW theorem [53, 54] does not hold for purification of factorizable maps when we try to generalize it straightforwardly.
**Proposition 14**.: For some factorizable map \(\mathcal{N}\in\mathfrak{C}(A)\) and two purifications \(\mathcal{U}\) and \(\mathcal{V}\) of \(\mathcal{N}\) on \(AB\), there exists no superunitary operation \(\Upsilon\in\mathfrak{S}\mathfrak{L}(B)\) such that
\[\mathcal{U}=(\mathfrak{i}\mathfrak{o}_{A}\otimes\Upsilon_{B})(\mathcal{V}). \tag{32}\]
Proof.: Consider two ways of implementing the completely depolarizing map
\[\mathcal{C}(\rho)=\pi\operatorname{Tr}[\rho]. \tag{33}\]
The first method is simply swapping the maximally mixed state with the input state, and the second is catalytically depolarizing the input state using a randomness source. The unitary operation of the latter is catalytic: its partial transpose is still unitary, and this property does not change under local superunitary operations. However, the former is not catalytic: the partial transpose of the swapping gate is no longer unitary. Therefore, they cannot be superunitarily similar.
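The distinction used in this proof can be checked numerically. The sketch below (ours; names are illustrative) builds the two dilations of the completely depolarizing map, one via the swap gate and one via a catalytic controlled-Pauli unitary acting on a \(d^{2}\)-dimensional ancilla, and verifies that only the latter has a unitary partial transpose on the ancilla.

```python
import numpy as np

d = 2
X = np.roll(np.eye(d), 1, axis=0)                               # generalized Pauli X
Z = np.diag(np.exp(2j * np.pi * np.arange(d) / d))              # generalized Pauli Z

# Swap gate on A (x) B with |A| = |B| = d (first dilation)
F = np.zeros((d * d, d * d))
for a in range(d):
    for b in range(d):
        F[b * d + a, a * d + b] = 1.0

# Catalytic dilation: controlled Pauli with ancilla B of dimension d^2
paulis = [np.linalg.matrix_power(X, i) @ np.linalg.matrix_power(Z, j)
          for i in range(d) for j in range(d)]
U = sum(np.kron(P, np.diag(np.eye(d * d)[:, k])) for k, P in enumerate(paulis))

def partial_transpose_B(M, dA, dB):
    """Transpose only the second (B) tensor factor of an operator on A (x) B."""
    return M.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_unitary(M):
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

print(is_unitary(partial_transpose_B(U, d, d * d)))   # True: the controlled Pauli is catalytic
print(is_unitary(partial_transpose_B(F, d, d)))       # False: the swap gate is not
```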
Nevertheless, we have the following result, which is immediate from the Schrödinger-HJW theorem.
**Proposition 15**.: For any factorizable map \(\mathcal{N}\in\mathfrak{C}(A)\) with two purifications \(\mathcal{U}=\operatorname{Ad}_{U}\) and \(\mathcal{V}=\operatorname{Ad}_{V}\) of \(\mathcal{N}\) on \(AB\), there exists a unitary operator \(W\in\mathfrak{U}(B^{\otimes 2})\) so that they are related by a generalized transposition \(T[W]\), i.e.,
\[\mathcal{U}=(\mathfrak{i}\mathfrak{o}_{A}\otimes\mathfrak{T}_{B}[W])( \mathcal{V}). \tag{34}\]
One possible interpretation of Proposition 15 is that losing dynamical quantum information is not just losing two sets of data, namely the input and the output information of a given process, in a fixed temporal structure. Losing dynamical quantum information is symmetric with respect to the transformations of the spatio-temporal structure modelled by generalized transpositions. This is natural in the sense that there is no _a priori_ reason to believe that a quantum system about which you have no information at all is governed by the same flow of time as you; the bipartite unitary operator \(W\) redirects the temporal progress of the lost system. Indeed, applying a generalized transposition followed by the supertrace is the same as applying the supertrace alone, i.e.,
\[\mathfrak{T}\mathfrak{r}\circ\mathfrak{T}[W]=\mathfrak{T}\mathfrak{r}. \tag{35}\]
We remark that (35) bears a striking similarity with the definition of causality for quantum processes,
\[\operatorname{Tr}_{Y}\circ\mathcal{E}=\operatorname{Tr}_{X}. \tag{36}\]
Indeed, \(\mathfrak{T}[W]\) in (35) simply changes your spacetime coordinate system for a quantum system about which you have no information at all; this should not affect any other process, hence (35) expresses a sort of 'logical causality'.
### Compatibility of state preparation
In this Section, we show that consistency of causal structure is deeply related to the flow of information through multipartite interactions, which is greatly important in the study of quantum scramblers, as demonstrated in the task of quantum hacking [55]. As the most evident example, Corollary 8 practically allows no information exchange between two systems compatible with opposite temporal directions. The impossibility of preparing a system compatible with the opposite direction of time in a specific state of your choice is evident from the fact that such an action would lead to signaling to the past from the perspective of the inverted system. For example, if a qubit appears to propagate back to the past, then the ability to prepare it in either \(|0\rangle\) or \(|1\rangle\) is equivalent to the ability to force its measurement outcome to be either \(\langle 0|\) or \(\langle 1|\) on demand in the opposite temporal flow, which leads to retrocausality from that perspective.
In fact, in the quantum setting, bipartite unitary operators \(U\) on \(AB\) with the unitary partial transpose \(U^{T_{B}}\) are known as _catalytic_ unitary operators [56, 57, 58] or _T-dual_ unitaries [59]. They allow information exchange between two systems only in the form of randomization given as a unital map [60, 61], hence no information leaks to a system initially prepared in the maximally mixed state, as it cannot be randomized more. This is the very reason why these unitary operators can be used for catalysis of quantum randomness [56, 57, 58, 62, 63].
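A minimal numerical sketch (ours) of this no-leakage property: a controlled-Pauli gate, whose partial transpose on the control is again unitary (a catalytic/T-dual unitary), produces an output on \(B\) that is independent of the input on \(A\) when \(B\) starts maximally mixed, whereas the swap gate copies the input of \(A\) straight into \(B\).

```python
import numpy as np

d = 2
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(np.exp(2j * np.pi * np.arange(d) / d))
paulis = [np.linalg.matrix_power(X, i) @ np.linalg.matrix_power(Z, j)
          for i in range(d) for j in range(d)]
dB = d * d
U_cat = sum(np.kron(P, np.diag(np.eye(dB)[:, k])) for k, P in enumerate(paulis))

def output_on_B(U, rho_A, dA, dB):
    """State of B after U acts on rho_A (x) pi_B, i.e. Tr_A[U (rho_A (x) pi_B) U^dagger]."""
    M = U @ np.kron(rho_A, np.eye(dB) / dB) @ U.conj().T
    return M.reshape(dA, dB, dA, dB).trace(axis1=0, axis2=2)

rho0, rho1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
# Catalytic interaction: B's output is independent of A's input -- no leakage.
print(np.allclose(output_on_B(U_cat, rho0, d, dB), output_on_B(U_cat, rho1, d, dB)))  # True

# Swap gate: B ends up in A's input state -- full leakage.
F = np.zeros((d * d, d * d))
for a in range(d):
    for b in range(d):
        F[b * d + a, a * d + b] = 1.0
print(np.allclose(output_on_B(F, rho0, d, d), rho0))   # True
```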
In other words, if you are interacting with a quantum system that is compatible with two opposite temporal flows, then none of your information leaks to it from your perspective. It hints that the more the effect of the given generalized transposition deviates from the usual flow of time, the less information can be leaked through the given interaction. Hence, we can ask the following interesting question: System \(A\), a quantum system on your side, is going to interact with another system \(B\). You have no knowledge about the interaction between \(A\) and \(B\) other than that it is also compatible with another spatio-temporal structure of \(B\) modelled by a generalized transposition \(T_{B}[W]\) on \(B\). Does this condition constrain the maximum amount of information that can be leaked from \(A\) to \(B\)?
For this purpose, let us define the (geometric) non-swappability of a bipartite quantum channel \(\mathcal{N}\in\mathfrak{C}(AB)\) with \(|A|=|B|\) as
\[\Xi(\mathcal{N}):=\frac{1}{2}\min_{\mathcal{C}_{A}\mathcal{C}_{B}}\| \mathcal{N}-(\mathcal{C}_{A}\otimes\mathcal{C}_{B})\circ\text{Ad}_{F}\|_{ \diamond}, \tag{37}\]
where \(\mathcal{C}_{A}\) and \(\mathcal{C}_{B}\) are local unitary operations. In other words, \(\Xi(\mathcal{N})\) is the diamond norm distance between \(\mathcal{N}\) and the set of swapping unitary operations up to local unitaries. In this sense, one can say that \(\Xi(\text{Ad}_{W})\) measures how close the global behavior of \(T[W]\) is to the usual transposition.
We also define the _geometric capacity_\(C_{G}(\mathcal{N})\) of quantum channel \(\mathcal{N}\) as
\[C_{G}(\mathcal{N}):=\frac{1}{2}\min_{\tau}\|\mathcal{N}-\mathcal{E}_{\tau}\|_ {\diamond}, \tag{38}\]
where \(\mathcal{E}_{\tau}(\rho):=\tau\operatorname{Tr}[\rho]\) from \(A\) to \(B\) is the initialization map. In other words, \(C_{G}(\mathcal{N})\) measures the distance between \(\mathcal{N}\) and the closest initialization map. State initialization maps completely destroy the information of the input system. Thus, we can say that the farther away a channel is from initialization maps, the more information it preserves.
Hence, when \(\cdot\otimes\sigma\) is understood as a quantum channel that attaches an ancilla system in state \(\sigma\), we can interpret
\[\mathcal{L}_{B\langle A}(\mathcal{M}|\sigma):=C_{G}(\operatorname{Tr}_{A}[\mathcal{M}(\,\cdot\otimes\sigma)]), \tag{39}\]
as the measure of information leakage from \(A\) to \(B\) for any bipartite channel \(\mathcal{M}\in\mathfrak{C}(AB)\) when the system \(B\) is initially prepared in the state \(\sigma\).
**Proposition 16**.: Let \(\mathcal{U}\) be a bipartite quantum operation on \(AB\) compatible with a generalized transposition \(T_{B}[W]\) on \(B\), i.e. \(\mathcal{V}:=\mathcal{U}^{T[W^{\dagger}]}\) is also a quantum operation, where \(\mathcal{W}=\text{Ad}_{W}\). Then, the information leakage from \(A\) to \(B\) through \(\mathcal{U}\) is limited by the non-swappability of \(\mathcal{W}\), i.e.,
\[\max_{\sigma}\mathcal{L}_{B\langle A}(\mathcal{U}|\sigma)\leq\Xi(\mathcal{W}), \tag{40}\]
where the maximization is over quantum states \(\sigma\) that are compatible with \(T_{B}[W]\).
The proof is given in the Appendix. One can understand Proposition 16 as providing a robust version of Corollary 8. For example, if \(\mathcal{W}\) is the swapping operation corresponding to the ordinary transposition, the right hand side of (40) vanishes, so there can be no information leakage from \(A\) to \(B\) through \(\mathcal{U}\). Even when \(T[W]\) is only slightly different from the ordinary transposition, the information leakage is correspondingly small.
At the other extreme, we examine how highly information-leaking interactions restrict the form of compatible partial generalized transpositions. We measure the information destruction by a quantum channel \(\mathcal{N}\) with the minimum sine metric between the Choi matrices of \(\mathcal{N}\) and an arbitrary unitary operation. Here, the _sine metric_\(d_{S}(\rho,\sigma)\)[64, 65] between quantum states \(\rho\) and \(\sigma\) is given as
\[d_{S}(\rho,\sigma)=\sqrt{1-F(\rho,\sigma)}. \tag{41}\]
Therefore, our (geometric) measure of _information destruction_ in \(\mathcal{N}=\sum_{i}\text{Ad}_{N_{i}}\in\mathfrak{C}(A)\) can be expressed as
\[D_{S}(\mathcal{N}):=\sqrt{1-|A|^{-2}\max_{Y\in\mathfrak{U}(A)}\sum_{i}|\text{ Tr}[YN_{i}]|^{2}}. \tag{42}\]
As a special case, \(D_{S}(\mathcal{N})\) vanishes if and only if \(\mathcal{N}\) is a unitary operation that never destroys input information. By using this measure, we can define a geometric measure of _information non-leakage_ of bipartite channel \(\mathcal{M}\) given as
\[\mathcal{K}_{B\langle A}(\mathcal{M}|\sigma):=D_{S}(\text{Tr}_{A}[\mathcal{M} (\cdot\otimes\sigma)]). \tag{43}\]
Similarly, we can define the following sine metric-based measure of _non-catalyticity_ of bipartite unitary operations,
\[\mathcal{D}_{B\langle A}(\mathcal{M}|\sigma):=\min_{\xi_{B}}d_{S}(\text{Tr}_{A}[\mathcal{M}(\phi^{+}_{AA^{\prime}}\otimes\sigma^{T}_{B})],\pi_{A^{\prime}}\otimes\xi_{B}). \tag{44}\]
It is a non-catalyticity measure for bipartite unitary operations since, when \(\mathcal{N}=\text{Ad}_{Y}\) is a unitary operation, \(\mathcal{D}_{B\langle A}(\mathcal{N}|\sigma)=0\) if and only if \(Y\) is a catalytic unitary operator [56, 57]. With these definitions in place, we can introduce another approximate result on the relation between compatible generalized partial transpositions and the information leakage of bipartite quantum channels.
**Proposition 17**.: Let \(\mathcal{U}\) be a bipartite unitary operation on \(AB\) compatible with a generalized transposition \(T_{B}[W]\) on \(B\), i.e. \(\mathcal{V}:=\mathcal{U}^{T[W^{\dagger}]}\) is also a quantum operation, where \(\mathcal{W}=\text{Ad}_{W}\). Then, the non-catalyticity of \(\mathcal{W}\) is limited by the information non-leakage of \(\mathcal{U}\), i.e.,
\[\mathcal{D}_{B^{\prime}\langle B}(\mathcal{W}|\sigma)\leq 2\mathcal{K}_{B \langle A}(\mathcal{U}|\sigma), \tag{45}\]
for all \(\sigma\) that are compatible with \(T_{B}[W]\).
The proof can be found in the Appendix. For example, if \(\mathcal{U}\) leaks all of the information of input \(A\) to output \(B\) for a certain initial state \(\sigma_{B}\) of \(B\), so that \(\mathcal{K}_{B\langle A}(\mathcal{U}|\sigma)=0\) for some \(\sigma\), then any compatible generalized partial transposition \(T_{B}[W]\) must be catalytic, i.e. \(W^{T_{B}}\) is still unitary.
## Acknowledgements
SHL thanks Varun Narasimhachar for insightful discussions. This work was supported by National Research Foundation of Korea grants funded by the Korea government (Grants No. 2019M3E4A1080074, No. 2020R1A2C1008609 and No. 2020K2A9A1A06102946) via the Institute of Applied Physics at Seoul National University and by the Ministry of Science and ICT, Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-0-01606) supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation) and the quantum computing technology development program of the National Research Foundation of Korea(NRF) funded by the Korean government (Ministry of Science and ICT (MSIT)) (Grants No.2021M3H3A103657312). SHL was also supported by the start-up grant of the Nanyang Assistant Professorship of Nanyang Technological University, Singapore.
## Appendix A Proof of Theorem 1
Proof.: Note that \(Y\in\mathcal{K}_{1}\otimes\mathcal{K}_{2}\) being a dynamics tensor is equivalent to the existence of some \(W\in\mathfrak{U}(\mathcal{K})\) such that \(Y^{T[W]}=\mathds{1}_{\mathcal{K}_{1}}\) when \(Y\) is understood as an operator in \(\mathfrak{B}(\mathcal{K}_{1},\mathcal{K}_{2})\). For any two unit vectors in the same Hilbert space, there exists a unitary operator that transforms one to the other. Therefore, when \(X\) is interpreted as an operator in \(\mathfrak{B}(\mathcal{K})\), there exists a unitary operator \(W\in\mathfrak{U}(\mathcal{K}\otimes\mathcal{K})\) that transforms \(\|X\|_{2}^{-1}\sum_{i}X\left|i\right>\otimes\left|i\right>\) to \(\left|\phi^{+}\right>\). For such \(W\), we have \(X^{T[W]}=|\mathcal{K}|^{-1/2}\|X\|_{2}\mathds{1}_{\mathcal{K}}\).
## Appendix B Proof of Theorem 10
Proof.: Suppose that \(|\mathcal{K}|\) is even. An arbitrary operator \(Y\in\mathfrak{B}(\mathcal{K})\) has an expansion of the form \(Y=a\mathds{1}_{\mathcal{K}}+J\) with a traceless operator \(J\). Without loss of generality, we assume that \(a\) is real. Note that \(\mathds{1}_{\mathcal{K}}\) and \(J\) are orthogonal to each other. Hence, there exists a unitary operator \(W\) on \(\mathcal{K}^{\otimes 2}\) that preserves \(\left|\phi^{+}_{\mathcal{K}^{\otimes 2}}\right\rangle\) but transforms \((J\otimes\mathds{1}_{\mathcal{K}})\left|\phi^{+}_{\mathcal{K}^{\otimes 2}}\right\rangle\) to \((J^{\prime}\otimes\mathds{1}_{\mathcal{K}})\left|\phi^{+}_{\mathcal{K}^{\otimes 2}}\right\rangle\), where \(J^{\prime}\) is an arbitrary traceless unitary operator such that \(J^{\prime\dagger}=-J^{\prime}\), which always exists when \(|\mathcal{K}|\) is even. An example is a direct sum of copies of \(i\sigma_{Y}\), where \(\sigma_{Y}\) is the \(2\times 2\) Pauli Y operator. Then, \(Y^{T[W]}\) is proportional to a unitary operator, as \(Y^{T[W]\dagger}Y^{T[W]}=|a|^{2}\mathds{1}_{\mathcal{K}}+a(J^{\prime}+J^{\prime\dagger})+J^{\prime\dagger}J^{\prime}=(|a|^{2}+1)\mathds{1}_{\mathcal{K}}\).
## Appendix C Proof of Theorem 7
Proof.: For a state preparation superchannel given as \(\mathfrak{P}^{\sigma}(\mathcal{N}):=\mathcal{N}(\sigma)\) to be compatible with a generalized transposition \(T[W]\) of its input channel, equivalently for \(\mathfrak{P}^{\sigma}\circ\mathfrak{T}[W^{\dagger}]\) to be a superchannel, it must be possible to decompose it into
\[\mathfrak{P}^{\sigma}\circ\mathfrak{T}[W^{\dagger}](\mathcal{L})=\operatorname {Tr}_{A^{\prime}}[\operatorname{Ad}_{Q}\circ(\mathcal{L}_{A}\otimes\mathds{1 }_{A^{\prime}})(\tau_{AA^{\prime}})], \tag{46}\]
for any \(\mathcal{L}\in\mathfrak{L}(A)\), where \(Q\in\mathfrak{U}(AA^{\prime})\) and \(\tau_{AA^{\prime}}\) is a pure quantum state on \(AA^{\prime}\)[9]. By applying \(\operatorname{Tr}_{A}\) to both sides of (46), taking the adjoint (as a map on \(\mathcal{L}(AA^{\prime})\)) and taking the matrix transposition (as a matrix in \(\mathcal{L}(AA^{\prime})\)), we get, from the arbitrariness of \(\mathcal{L}\), that
\[W(\mathds{1}_{A}\otimes\sigma_{A^{\prime}}^{T})W^{\dagger}=(\mathds{1}_{A} \otimes\tau_{A^{\prime}}^{T}). \tag{47}\]
It follows that \(\mathds{1}_{A}\otimes\tau_{A^{\prime}}\) and \(\mathds{1}_{A}\otimes\sigma_{A^{\prime}}\), thus in turn \(\tau_{A}\) and \(\sigma_{A}\) have the same spectrum, therefore they are unitarily similar.
Conversely, if (47) holds for some \(\tau_{A}\), then we can set \(Q=W\) and \(\tau_{AA^{\prime}}=(\operatorname{id}_{A^{\prime}}\otimes\operatorname{Ad}_{ \sqrt{\tau_{A}}})(\Gamma_{AA^{\prime}})\) to express \(\mathfrak{P}^{\sigma}\circ\mathfrak{T}[W^{\dagger}]\) as a superchannel form as in (46).
## Appendix D Proof of Proposition 16
Proof.: We can observe that, by using the definition of generalized transposition (8), \(\operatorname{Tr}_{A}\circ\mathcal{U}\circ\mathcal{A}_{\sigma}(\rho)\) can be expressed, for any \(\rho\), as
\[\operatorname{Tr}_{AB^{\prime}}[(\mathds{1}_{AB}\otimes\sigma_{B^{\prime}}^{T })\mathcal{W}_{BB^{\prime}}\circ\mathcal{V}_{AB}(\rho_{A}\otimes\Gamma_{BB^{ \prime}})]. \tag{48}\]
However, due to the compatibility of preparing \(\sigma\) with \(T[W]\), by Proposition 7 (Note that \(W\) and \(W^{\dagger}\) are switched in this proof), for the preparation superchannel inputting \(\sigma\) to \(\mathcal{U}\) to be compatible with the transformation \(\mathcal{U}\mapsto\mathfrak{T}[W^{\dagger}](\mathcal{U})\), there must exist a quantum state \(\bar{\sigma}\) that is unitarily similar to \(\sigma^{T}\) and satisfies
\[(\mathds{1}_{B}\otimes\sigma_{B^{\prime}}^{T})W=W(\mathds{1}_{B}\otimes\bar{ \sigma}_{B^{\prime}}). \tag{49}\]
Hence, for a purification \(\psi_{BB^{\prime}}=(\operatorname{id}_{B}\otimes\operatorname{Ad}_{\sqrt{\bar{\sigma}}})(\phi^{+}_{BB^{\prime}})\) of \(\bar{\sigma}_{B^{\prime}}\), we have that (48) equals
\[\operatorname{Tr}_{AB^{\prime}}\circ\mathcal{W}_{BB^{\prime}}\circ\mathcal{V}_ {AB}(\rho_{A}\otimes\psi_{BB^{\prime}}). \tag{50}\]
Now, we let \(\mathcal{M}\in\mathfrak{C}(B^{\prime},B)\) be a channel that achieves
\[F_{B\langle B^{\prime}}(\mathcal{W})=\frac{1}{2}\|\operatorname{Tr}_{B^{\prime}}\circ\mathcal{W}-\operatorname{Tr}_{B}\otimes\mathcal{M}\|_{\diamond}. \tag{51}\]
By the submultiplicative property of the diamond norm, as \(\|\operatorname{Tr}_{A}\circ\mathcal{V}_{AB}\circ\mathcal{A}_{\psi}\|_{\diamond}=1\), we have
\[\frac{1}{2}\|\operatorname{Tr}_{A}\circ\mathcal{U}\circ\mathcal{A}_{\sigma}-\mathcal{E}_{\mathcal{M}(\bar{\sigma}_{B^{\prime}})}\|_{\diamond}\leq F_{B\langle B^{\prime}}(\mathcal{W}). \tag{52}\]
Here, we used the fact that
\[(\operatorname{Tr}_{AB}\otimes\mathcal{M})\mathcal{V}_{AB}(\rho_{A}\otimes\psi_{ BB^{\prime}})=\mathcal{M}(\tilde{\sigma}_{B^{\prime}})\operatorname{Tr}\rho, \tag{53}\]
and the definition of \(\mathcal{E}_{\tau}\). After the minimization over \(\tau\), (40) follows as the choice of \(\sigma\) was arbitrary.
### Proof of Proposition 17
Proof.: For simplicity, we first assume that \(\sigma_{B}=\pi_{B}\). There exist a unitary operator \(Y\) on \(B\) and, by Uhlmann's theorem, a pure quantum state \(\eta_{AB^{\prime}}\) such that \(d_{S}(\eta_{AB^{\prime}}\otimes J^{\mathrm{Ad}_{Y}}_{A^{\prime}B},J^{\mathcal{U}}_{ABA^{\prime}B^{\prime}})=\mathcal{K}_{B\langle A}(\mathcal{U}|\pi_{B})\). By the monotonicity of fidelity under partial trace, after tracing out \(AB\), we have \(d_{S}(\pi_{A^{\prime}}\otimes\eta_{B^{\prime}},\pi_{A^{\prime}}\otimes\pi_{B^{\prime}})=d_{S}(\eta_{B^{\prime}},\pi_{B^{\prime}})\leq\mathcal{K}_{B\langle A}(\mathcal{U}|\pi_{B})\). Again, by Uhlmann's theorem, there exists a unitary operator \(Z\) such that \(d_{S}(\eta_{AB^{\prime}},J^{\mathrm{Ad}_{Z}}_{AB^{\prime}})=d_{S}(\eta_{AB^{\prime}}\otimes J^{\mathrm{Ad}_{Y}},J^{\mathrm{Ad}_{Z}}_{AB^{\prime}}\otimes J^{\mathrm{Ad}_{Y}})\leq\mathcal{K}_{B\langle A}(\mathcal{U}|\pi_{B})\). Therefore, by the triangle inequality [65], we get that \(d_{S}(J^{\mathrm{Ad}_{Z}}\otimes J^{\mathrm{Ad}_{Y}},J^{\mathcal{U}})\) is upper bounded by
\[d_{S}(\eta_{AB^{\prime}}\otimes J^{\mathrm{Ad}_{Y}},J^{\mathcal{U}})+d_{S}(\eta_{AB^{\prime}}\otimes J^{\mathrm{Ad}_{Y}},J^{\mathrm{Ad}_{Z}}\otimes J^{\mathrm{Ad}_{Y}}), \tag{54}\]
where the subscripts for the Choi matrices are omitted. Both terms are upper bounded by \(\mathcal{K}_{B\langle A}(\mathcal{U}|\pi_{B})\). Again, by the monotonicity of fidelity, applying \(\operatorname{Tr}_{AB}\circ\mathcal{W}_{BB^{\prime}}\) to both \(J^{\mathrm{Ad}_{Z}}\otimes J^{\mathrm{Ad}_{Y}}\) and \(J^{\mathcal{U}}\), we get
\[d_{S}(\operatorname{Tr}_{B}[\mathcal{W}_{BB^{\prime}}(\phi^{+}_{A^{\prime}B} \otimes\pi_{B^{\prime}})],\pi_{A^{\prime}B^{\prime}})\leq 2\mathcal{K}_{B \langle A}(\mathcal{U}|\pi_{B}). \tag{55}\]
Now, observe that the left hand side is \(\mathcal{D}_{B\langle B^{\prime}}(\mathcal{W}|\pi_{B^{\prime}})\).
For general \(\sigma_{B}\), the proof is more or less similar, except that \(J^{\mathcal{U}}_{ABA^{\prime}B^{\prime}}\) is replaced with \(\zeta_{ABA^{\prime}B^{\prime}}:=(\operatorname{id}_{A^{\prime}B^{\prime}}\otimes\mathcal{U}_{AB})(\phi^{+}_{AA^{\prime}}\otimes\sigma_{BB^{\prime}})\), where \(\sigma_{BB^{\prime}}:=\operatorname{Ad}_{\sqrt{\sigma_{B}}}(\phi^{+}_{BB^{\prime}})\) is a purification of the given \(\sigma_{B}\). Then there exist \(\eta_{AB^{\prime}}\) and some unitary \(Y\) such that \(d_{S}(\eta_{AB^{\prime}}\otimes J^{\mathrm{Ad}_{Y}}_{A^{\prime}B^{\prime}},\zeta_{ABA^{\prime}B^{\prime}})=\mathcal{K}_{B\langle A}(\mathcal{U}|\sigma_{B})\) and \(d_{S}(\eta_{B^{\prime}},\sigma_{B^{\prime}})\leq\mathcal{K}_{B\langle A}(\mathcal{U}|\sigma_{B})\). Note that \(\sigma_{B^{\prime}}=\sigma_{B}^{T}\). As we did for the \(\sigma_{B}=\pi_{B}\) case, we apply \(\mathcal{W}\) on \(BB^{\prime}\) and trace out \(AB\) in both \(\zeta_{ABA^{\prime}B^{\prime}}\) and \(J^{\mathrm{Ad}_{Y}}_{A^{\prime}B}\otimes\eta_{AB^{\prime}}\). By using the compatibility condition (18) and that \(\mathcal{U}^{T[W]}=\mathcal{V}\), we get that the former turns into \(\pi_{A^{\prime}}\otimes\tau_{B^{\prime}}^{T}\) for some \(\tau_{B}\) and the latter is mapped to \(\operatorname{Tr}_{B}[\mathcal{W}(J^{\mathrm{Ad}_{Y}}_{A^{\prime}B}\otimes\eta_{B^{\prime}})]\). Note that the sine metric between \(\operatorname{Tr}_{B}[\mathcal{W}(J^{\mathrm{Ad}_{Y}}_{A^{\prime}B}\otimes\eta_{B^{\prime}})]\) and \(\operatorname{Tr}_{B}[\mathcal{W}(J^{\mathrm{Ad}_{Y}}_{A^{\prime}B}\otimes\sigma_{B^{\prime}})]\) is upper bounded by \(d_{S}(\eta_{B^{\prime}},\sigma_{B^{\prime}})\leq\mathcal{K}_{B\langle A}(\mathcal{U}|\sigma_{B})\). Since \(d_{S}(\pi_{A^{\prime}}\otimes\tau_{B^{\prime}},\operatorname{Tr}_{B}[\mathcal{W}(J^{\mathrm{Ad}_{Y}}_{A^{\prime}B}\otimes\sigma_{B^{\prime}})])\) bounds \(\mathcal{D}_{B\langle B^{\prime}}(\mathcal{W}|\sigma)\) from above, by using the triangle inequality of the sine metric, we get the desired result.
## Appendix E Factorizable maps with general \(C^{*}\)-algebra
In contrast to postselecting a particular measurement outcome, which is not deterministic since the system's state could have been disturbed but has not yet been examined, we claim that generating a state from a source of randomness that cannot be altered afterwards is deterministically implementable. This is because, in that case, there is no room for the ancilla system to change, so no measurement is required to identify its state, and we are not selecting a particular subset of states but using the whole probabilistic mixture.
Hence, we can also deterministically implement the aforementioned exactly factorizable maps conditioned on a classical register. This more general set of quantum maps is known as _factorizable_ maps and can be expressed as follows with some probability distribution \(\{p_{i}\}\):
\[\mathcal{N}(\rho)=\sum_{i}p_{i}\operatorname{Tr}_{Y_{i}}\mathcal{U}_{i}(\rho_{X}\otimes\pi_{Y_{i}}). \tag{56}\]
Here, \(Y\) is decomposed into superselection sectors \(Y=\bigoplus_{i}Y_{i}\) with the orthogonal projector \(\Pi_{i}\) onto each subspace \(Y_{i}\), and \(\mathcal{U}_{i}\in\mathfrak{U}\mathfrak{O}(X,Y_{i})\). Since \(|Y|^{-1}\sum_{i}p_{i}\operatorname{Tr}_{Y_{i}}\) is a tracial state of the \(C^{*}\)-algebra \(\bigoplus_{i}\mathfrak{B}(Y_{i})\) for every probability distribution \(\{p_{i}\}\), by simply naming it \(\operatorname{Tr}_{Y}:=\sum_{i}p_{i}\operatorname{Tr}_{Y_{i}}\), so that \(\mathfrak{T}\mathfrak{r}_{Y}:=\sum_{i}p_{i}\mathfrak{T}\mathfrak{r}_{Y_{i}}\), we again recover the purification expression (30) for an arbitrary factorizable map. Note that this is a simpler expression for finite-dimensional \(Y\), but the concept of factorizable maps can also be defined on infinite-dimensional systems (see [66, 67] for more information). We focus on finite-dimensional cases in this work for simplicity. |
2307.01175 | Patient-centric health data sovereignty: an approach using Proxy
re-encryption | The exponential growth in the digitisation of services implies the handling
and storage of large volumes of data. Businesses and services see data sharing
and crossing as an opportunity to improve and produce new business
opportunities. The health sector is one area where this proves to be true,
enabling better and more innovative treatments. Notwithstanding, this raises
concerns regarding personal data being treated and processed. In this paper, we
present a patient-centric platform for the secure sharing of health records by
shifting the control over the data to the patient, therefore, providing a step
further towards data sovereignty. Data sharing is performed only with the
consent of the patient, allowing it to revoke access at any given time.
Furthermore, we also provide a break-glass approach, resorting to Proxy
Re-encryption (PRE) and the concept of a centralised trusted entity that
possesses instant access to patients' medical records. Lastly, an analysis is
made to assess the performance of the platform's key operations, and the impact
that a PRE scheme has on those operations. | Bruno Rodrigues, Ivone Amorim, Ivan Costa, Alexandra Mendes | 2023-07-03T17:39:02Z | http://arxiv.org/abs/2307.01175v1 | # Patient-centric health data sovereignty: an approach using Proxy re-encryption+
###### Abstract
The exponential growth in the digitisation of services implies the handling and storage of large volumes of data. Businesses and services see data sharing and crossing as an opportunity to improve and produce new business opportunities. The health sector is one area where this proves to be true, enabling better and more innovative treatments. Notwithstanding, this raises concerns regarding personal data being treated and processed. In this paper, we present a patient-centric platform for the secure sharing of health records by shifting the control over the data to the patient, therefore, providing a step further towards data sovereignty. Data sharing is performed only with the consent of the patient, allowing it to revoke access at any given time. Furthermore, we also provide a _break-glass_ approach, resorting to Proxy Re-encryption (PRE) and the concept of a centralised trusted entity that possesses instant access to patients' medical records. Lastly, an analysis is made to assess the performance of the platform's key operations, and the impact that a PRE scheme has on those operations.
Keywords:data-sovereignty cryptography PRE- access delegation-e-health PHR
## 1 Introduction
The ever growing digitisation of services that we use daily, as well as the increasing interest in data crossing and sharing to improve processes, services, and achieve new business opportunities, raises concerns regarding how data is handled and processed. In the healthcare sector, data sharing is not only beneficial,
but also needed to provide the best care possible to the patients. However, this data is also highly sensitive, which requires special care. Several governmental measures have already been taken to improve and standardise the way in which data is shared, such as the European Data Governance Act [1], GDPR4, and, more specifically in personal health information, HIPAA5 and HITECH6. These directives instigate a user-centric paradigm, granting individuals sovereignty over their data.
Footnote 4: [https://data.europa.eu/eli/reg/2016/679/oj](https://data.europa.eu/eli/reg/2016/679/oj)
Footnote 5: [https://www.cdc.gov/phlp/publications/topic/hipaa.html](https://www.cdc.gov/phlp/publications/topic/hipaa.html)
Footnote 6: [https://www.hipaajournal.com/what-is-the-hitech-act/](https://www.hipaajournal.com/what-is-the-hitech-act/)
Several approaches have been proposed for ensuring security and privacy in e-health systems. Conventional encryption techniques like AES and ECC are commonly used [5]. However, these techniques become problematic when data needs to be shared among multiple entities due to redundancy and computational burden [6]. Attribute-Based Encryption (ABE) is another solution [6], but it has its own complexities and limitations, such as managing attribute-based keys and overriding policies in emergencies [7]. ABE also lacks the fine-grained access control necessary for a patient-centric sovereign approach.
Proxy Re-encryption (PRE) is a cryptographic solution for secure data sharing without prior knowledge of the recipient. Unlike ABE, it does not rely on policies or attributes. PRE converts a ciphertext to a recipient's key without revealing the plaintext to the intermediary entity. It is particularly useful in semi-trusted cloud environments [17]. In e-health, PRE has already been used to securely share medical records [20, 23, 19, 26], including in emergency scenarios [19]. However, challenges remain in terms of revocability, computational effort, and safeguarding emergencies [26]. Existing solutions for emergency scenarios are limited and rely on assumptions that may impact efficiency and reliability.
In this context, it is necessary to develop a platform that addresses the aforementioned concerns. This includes enabling more control over the data by the patient while ensuring the safety of that data, even in semi-trusted environments. This contributes to the collaborative aspect of e-health and thus enables better treatments and advancements in the health sector.
In this paper, we present a platform that leverages PRE to enhance health data sharing. Umbral's PRE [16] is used as the foundation for re-encryption processes, through which we achieve unidirectionality and non-interactivity, ensuring secure re-encryption from the patient to the data user (e.g., practitioners or health centres) without requiring the data user's private key. This approach centres on the expressed opinion of the patient to authorise data sharing, eliminating the need for prior identification of authorised parties -- a drawback identified in previous solutions. Additionally, our platform offers revocability options, such as time-based access limits and patient-initiated access revocation. Importantly, the revocation of accesses does not require changes to the encrypted healthcare database, distinguishing our platform from the ones that rely on identity and attribute-based PRE schemes.
Furthermore, in the context of healthcare, it is crucial to ensure data sharing in emergency situations when explicit patient consent may not be possible. Our platform addresses this challenge by incorporating a trusted entity for data access when patient authorisation is infeasible.
In summary, our main contributions are:
* A patient-centric platform, that empowers patients with sovereign control over their health data, enabling granular access control and facilitating the sharing of health records only with explicit consent.
* Robust data protection using Umbral's PRE, ensuring secure and encrypted health data sharing without compromising the data user's private key.
* A robust access revocation mechanism that enables time-based access limits and supports manual revocation by the patient at any time and with immediate effect.
* A break-glass mechanism to ensure seamless emergency data access.
The remainder of this paper is organised as follows. Section 2 introduces basic concepts and definitions, as well as the classification and properties of PRE schemes. Furthermore, an analysis is made concerning the framework on which the access delegation mechanism is based. Section 3 presents the current picture of the PRE and the advancements regarding break-glass scenarios. Section 4 details the proposed solution and its implementation. Section 5 is concerned with the performance test, respective results and discussion. Lastly, Section 6 presents the conclusions and future work.
## 2 Proxy Re-encryption
PRE is a cryptographic technique that enables a third-party entity, named proxy, to delegate access to encrypted data, without being able to infer the plaintext content of that data. This is achieved by transforming a ciphertext encrypted under one key into a ciphertext encrypted under a different key.
### Syntax and basic definitions
Since PRE can be seen as a way to delegate decryption rights to a party, it is possible to categorise the different entities according to the delegation relation they possess with each other. The _delegator_ is the entity that owns the data and delegates decryption rights. The _proxy_ is the intermediary entity in the delegation process, which uses a re-encryption key (PRK) to transform the ciphertext encrypted under the delegator's public key into a ciphertext that can be decrypted only by using the delegatee's private key. Finally, the _delegatee_ is the entity that accesses the information through delegation of decryption rights by the delegator.
Definition 1 (Pre): A PRE scheme can be defined based on five different algorithms:
* _KeyGen_ -- _On input of a security parameter \(n\), the key generation algorithm KeyGen outputs a public/private key pair (\(pk_{A}\), \(sk_{A}\)) for a given user A.
* _ReKey_ -- _On input of a public/private key pair (\(pk_{A}\), \(sk_{A}\)) for user A and a public/private key pair (\(pk_{B}\), \(sk_{B}\)) for user B, a PRK \(rk_{A\to B}\) is computed.
* _Encrypt_ -- _Given the input of a public key \(pk_{A}\) and a message \(m\in M\), the encryption algorithm outputs a ciphertext \(c_{A}\in C_{1}\).
* _ReEncrypt_ -- _On input of a ciphertext \(c_{A}\in C_{1}\) and a PRK \(rk_{A\to B}\), the re-encryption algorithm ReEncrypt transforms a ciphertext \(c_{A}\in C_{1}\) into a ciphertext \(c_{B}\in C_{2}\).
* _Decrypt_ -- _Given a private key \(sk_{A}\) from user A and a ciphertext \(c_{A}\in C_{S}\) (\(S\in\{1,2\}\)) from user A, the same executes the decryption algorithm and outputs the original message \(m\in M\).
According to Qin et al.[18], a PRE scheme can be classified based on its abilities. For example, regarding its directionality, we say that the scheme is _unidirectional_ if it enables the delegator's ciphertext to be re-encrypted into the delegatee's ciphertext but not vice versa. Otherwise, we call it _bidirectional_. The multi-use/single-use classification focuses on the number of times the PRK can be used to re-encrypt data. In _multi-use_ schemes, the PRK can be utilised to perform several re-encryptions. In the case of a _single-use_ scheme, the PRK can only be used to perform a single transformation. Interactivity dictates whether the re-encryption is computed using just the public key from the delegatee (_non-interactive_ scheme) or both the public and private keys (_interactive_ scheme). Depending on the scenario of utilisation, some properties may be more desirable than others.
Other authors classify PRE schemes according to their way of functioning [9; 10]. For example, an Identity-Based PRE (IB-PRE) scheme derives public keys from identity attributes (e.g. email). The messages are encrypted using an identity string from the delegatee. Attribute-Based PRE (AB-PRE) schemes allow transforming a ciphertext defined by a set of attributes or access policies into another ciphertext with a different set of attributes.
### Umbral's PRE scheme
The Umbral PRE scheme is, in its essence, a threshold PRE scheme. This scheme features an Elliptic Curve Integrated Encryption Scheme (EICS-KEM) inspired in [2] and proposes several improvements over the original PRE scheme proposed by [4], namely unidirectionality, non-interactivity, and verifiability. It relies on the concept of semi-trusted proxies, also known as _n_svulas. Being a threshold PRE scheme, it splits the PRK according to shares. The _threshold_ portion of the scheme dictates the minimum number of those shares required to decrypt the information.
Splitting the PRK across multiple proxies brings some benefits namely eliminating a single point of failure, in case of a malfunction or compromise of one of the proxies the PRK is still safeguarded.
The re-encryption processes in our platform are supported by pyUmbral [15], a Python-based implementation of Umbral.
Fig. 1 presents an overview of the key processes and data flows involved in the Umbral PRE scheme. This system beholds seven main processes: _Encapsulation_, _Encryption_, _Generate PRK fragments_, _Re-encapsulation_, _Decapsulation_, and _Decryption_. These processes are supported by three major cryptographic methods: Key Encapsulation Mechanism (KEM), Data Encapsulation Mechanism (DEM), and Shamir Secret Sharing (SSS) [21].
The first step in this process is _Encapsulation_. This is achieved through the use of a Key Encapsulation Mechanism (KEM), in this case, a loosely inspired implementation of ECIES-KEM introduced by [22]. The KEM is fed with Alice's public key \(pk_{A}\) and outputs a symmetric key \(K\), and a _capsule_.
With the capsule and the symmetric key \(K\), the _Encryption_ process is performed using a Data encapsulation mechanism (DEM) which uses Authenticated Encryption with Additional Data (AEAD). This outputs a ciphertext encrypted with the symmetric key.
When the data is encrypted and stored in the cloud, in order for the access delegation to occur, there is a need to generate a PRK. This is performed by the _Generate PRK fragments_ process resorting to the notions present in Shamir Secret Sharing (SSS), Alice's private key and signing key \(signk_{A}\), and Bob's public key \(pk_{B}\). This enables the generation of the PRK fragments or _kFrags_. The number of fragments is defined by the number of shares.
The _kFrags_ are stored by the proxy for further use in the _Re-encapsulation_ process. This process is responsible for generating the _cFrags_ which enables Bob to gain access to the file at a later stage. To generate the _cFrags_ just the capsule and the _kFrags_ are needed. This is due to the fact that this PRE scheme performs the re-encryption over the capsule.
Figure 1: Procedural overview of pyUmbral PRE scheme
Lastly, once Bob wants to retrieve a file, the _Decapsulation_ process needs to happen. This process resorts to SSS in order to reconstruct the symmetric key \(k\). To do so, Alice's public key, Alice's verifying key \(vk_{A}\) for signature verification of the _cFrags_, Bob's private key \(sk_{B}\), and the _capsule_ are needed. Through the use of a Key Derivation Function within the KEM, it is possible to derive the symmetric key \(K\) which together with the ciphertext is passed to the DEM. The DEM performs the _Decryption_ process and outputs the plaintext content of the file that Bob can now use.
## 3 Related work
The notion of PRE made its first appearance in 1998 when Blaze et al. [4] introduced the concept of bidirectional and multi-use PRE. Several works have been published since then with new PRE schemes providing new functionalities and relying on different mathematical assumptions. For example, both [8] and [11] proposed a unidirectional, single-use PRE scheme, but the first relies on threshold PKE, while the second is based on lattice-hardness problems. In 2015, [14] also proposed a unidirectional and single-use PRE scheme, which can be classified as attribute-based. Later, in 2017, [16] presented a unidirectional, non-interactive, and verifiable PRE scheme which is threshold-based.
In the context of healthcare data sharing, PRE has also been widely explored. In fact, several works address security, privacy, and confidentiality when it comes to the design and implementation of e-health systems. However, there is still a lack of development concerning safeguarding emergency scenarios in the context of e-health systems [26]. Works that address this kind of scenario in its design, refer to this as break-glass approaches. In 2017, [3] proposed a framework for the secure sharing of Personal Health Records (PHRs) that relies on attribute-based PRE and which addresses emergency scenarios. The break-glass capabilities are provided with ABE. In this scheme, the emergency department attribute is always appended to the policy that encrypts the patient PHR, thus providing instant access to the entity from the moment the same is uploaded. The problem with this approach, and in general with ABE approaches, is that they present some caveats, namely key management and resorting to other mechanisms in break-glass approaches. This is due to the fact that emergency normally means an exception to a policy and, thus, overriding that same policy might be a hefty task in some implementations. In 2019, [25] also proposed an approach that is based on an attribute-based PRE, and provided self-adaptive access control, meaning that the system can automatically adapt to normal or emergency situations. However, their break-glass mechanism resorts to a _password-based_ paradigm. This approach raises some concerns, namely in the assumption that the individual that stores the password, has the necessary means to ensure its secrecy. More recently, in 2022, [13] proposed a system for IoT sensors combining PRE and PKE with equality test, permitting searches under different public keys and secure data sharing. However, it does not discuss emergency situations. In the same year, [20], proposed a non-interactive, multi-use, certifiateless PRE
for sharing health data in a cloud environment. Even though their approach gives full control to the data owner, it has two important drawbacks, namely it is interactive and does not propose a break-glass mechanism. Also in 2022, [24] published a secure data sharing and authorised Searchable framework for e-healthcare systems. This framework lies on a conditional and unidirectional PRE scheme with keyword search. It is also idealised for managing sensible data from medical wearable devices. This platform has some disadvantages namely regarding the PRK generation performance. Also, this work does not address emergency situations. Finally, in 2022, [12] propose a framework which is also based on attribute-based PRE that features break-glass capabilities. However, it leaves open the possible solution for revocability. That being said, there is a need to develop a solution that can cope with all the aforementioned concerns and that contributes to a more reliable and robust break-glass approach.
## 4 Patient-centric health data sovereignty
In this section, we introduce the envisioned solution for a patient-centric platform that enables health data sovereignty through PRE. The subsequent section presents the architecture of the solution, followed by a description of the processes involved in the key operations for access delegation.
### Proposed Solution
The proposed solution consists of four main nodes: the client, the resource server, the proxy server, and the authorization server, as depicted in Figure 2.
The client node hosts the client-side application developed with Next.js7. This client node communicates with the server nodes via Representation State Transfer (REST) and the Hypertext Transfer Protocol (HTTP). The business logic is divided between the resource and proxy server nodes. The resource server is based on the FastAPI framework8 running in a Python environment. This server is trusted by the data delegator and it is responsible for assisting the client-side operations, namely feeding the data the client node needs to display the information to the user. The resource server node also performs some core operations such as the initial encryption and final decryption of the Electronic Health Record (EHR) stored in the database server node hosted in a cloud environment (MongoDB9) as well as the management of delegation requests (accept or decline). Some other complementary operations are also performed such as the generation of the PRK which is stored afterwards by the proxy server node, and the signature verification of the PRK fragments and capsule fragments.
Footnote 7: [https://nextjs.org/](https://nextjs.org/)
Footnote 8: [https://fastapi.tiangolo.com/](https://fastapi.tiangolo.com/)
Footnote 9: [https://www.mongodb.com/](https://www.mongodb.com/)
The proxy server is solely responsible for the process of EHR delegation, being used for the re-encryption of the capsules and the storage of the PRK.
The authorisation server is responsible for performing the authentication of the different users of the platform as well as the issuing and claims verification of the authorisation tokens. These tokens are subsequently used to consume the APIs provided by the resource and proxy server nodes. This node is also associated with two persistence nodes. An in-memory database (REDIS10 instance) for persisting and performing the lookup of the refresh tokens and a MongoDB instance for storing general purpose user information such as name, email, password, public and verifying keys and roles.
Footnote 10: [https://redis.com/](https://redis.com/)
Figure 2: Deployment diagram of the idealised architecture
### Authentication/Authorisation
The authorisation is performed by resorting to JSON Web Tokens (JWT) which are signed using HMAC SHA256. This ensures the tokens can not be tampered with or changed, thus enabling secure transmissions between parties.
The authentication flow comprises a traditional email/password authentication, where each user needs to provide a valid email and password. In case of successful authentication, a pair of tokens is issued (access/refresh token) containing the claims needed to support the client-side application. These claims follow the standards and restrictions defined in Request for Comments 751911. Besides the pair of tokens, a Cross-site Request Forgery token is also sent for further protection in requests that require cookies. The refresh token is also sent in a cookie configured with the _secure_ and _httpOnly_ flags to ensure it is only transmitted through HTTPS and is not available to JavaScript in case of a Cross-site Scripting vulnerability in the client-side application.
Footnote 11: [https://datatracker.ietf.org/doc/html/rfc7519#section-4](https://datatracker.ietf.org/doc/html/rfc7519#section-4)
Since JWT tokens are self-contained, there is no natural way of revoking them. In order to tackle this problem, anti-theft mitigation techniques were implemented: _refresh token rotation_ and _token reuse detection_.
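For illustration, the sketch below shows how such a token pair could be issued and rotated with PyJWT. The claim names, token lifetimes, and the in-memory dictionary standing in for the REDIS lookup are assumptions made for the example; they are not the platform's actual implementation.

```
import time
import uuid

import jwt  # PyJWT

SECRET = "server-side-hmac-secret"            # HS256 signing key (placeholder)
ACCESS_TTL, REFRESH_TTL = 15 * 60, 7 * 24 * 3600

valid_refresh_jti = {}  # stand-in for the REDIS lookup of refresh tokens

def issue_token_pair(user_id, roles):
    """Issue an access/refresh token pair with RFC 7519 claims (illustrative)."""
    now = int(time.time())
    access = jwt.encode(
        {"sub": user_id, "roles": roles, "iat": now, "exp": now + ACCESS_TTL},
        SECRET, algorithm="HS256")
    jti = str(uuid.uuid4())
    refresh = jwt.encode(
        {"sub": user_id, "jti": jti, "iat": now, "exp": now + REFRESH_TTL},
        SECRET, algorithm="HS256")
    valid_refresh_jti[user_id] = jti          # only the most recent refresh token stays valid
    return access, refresh

def rotate_refresh(refresh_token):
    """Refresh token rotation with reuse detection."""
    claims = jwt.decode(refresh_token, SECRET, algorithms=["HS256"])
    user_id = claims["sub"]
    if valid_refresh_jti.get(user_id) != claims["jti"]:
        # A previously rotated token was replayed: invalidate the whole token family.
        valid_refresh_jti.pop(user_id, None)
        raise PermissionError("refresh token reuse detected")
    return issue_token_pair(user_id, roles=[])  # roles would be re-read from storage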
### Access delegation scenario
Access delegation is the core problem tackled in this work. The next sections dissect the access delegation flow from the moment the file is uploaded by the patient to the moment the plaintext content is retrieved by the healthcare provider. For demonstration purposes, the step-by-step process between two entities, Alice (delegator) and Bob (delegatee), is presented.
**Upload of an EHR** The access delegation starts with the upload of an EHR by Alice. When Alice uploads a new EHR, which can be a Portable Document Format (PDF) file or an image, the resource server encrypts the file using the symmetric key resulting from the _encapsulation_ process and stores it together with the resulting capsule and an associated _userId_.
Another process that is also performed in this step and further detailed in Section 4.4 is the safeguarding of emergency situations. Besides the persistence of the file in the database, a PRK is also generated in order to provide access to the predefined trusted entity. This ensures that the trusted party possesses the means to access the file from the moment it is uploaded and that no extra input from the user is needed in this regard. This PRK is sent to the proxy for subsequent use.
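The capsule/kFrag/cFrag terminology corresponds to an Umbral-style threshold PRE scheme. The following sketch of the upload step therefore assumes a pyUmbral-like interface; the exact function names are assumptions, persistence and serialization details are omitted, and the code is illustrative rather than the resource server's actual logic.

```
from umbral import SecretKey, Signer, encrypt  # pyUmbral-style API (assumption)

# Alice's long-term encryption and signing keys; the signing pair is what later
# lets the resource server verify kFrags and cFrags.
alice_sk = SecretKey.random()
alice_pk = alice_sk.public_key()
alice_signer = Signer(SecretKey.random())

ehr_store = {}  # stand-in for the MongoDB collection; serialization omitted

def upload_ehr(resource_id, user_id, file_bytes):
    """Encrypt an uploaded EHR and keep the ciphertext together with its capsule."""
    # encrypt() encapsulates a fresh symmetric key under Alice's public key and
    # uses that key to encrypt the payload (hybrid encryption), returning the
    # capsule plus the symmetric ciphertext.
    capsule, ciphertext = encrypt(alice_pk, file_bytes)
    ehr_store[resource_id] = {
        "userId": user_id,
        "capsule": capsule,        # the capsule is what the proxy will re-encrypt
        "ciphertext": ciphertext,
    }
```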
**Bob requests access to an EHR** When Bob wants to access Alice's uploaded EHR, he needs to formalise his intentions by issuing a share request to the resource server containing the EHR's _resourceId_. In this step, the system checks if Bob is the owner of the EHR. This prevents a user from performing a share request to itself, something that violates the business rules of the platform.
Once this validation is performed, and provided with the _resourceId_, the resource server generates a share request that includes the _resourceId_, the _delegatorId_ and the _delegateeId_, as well as a _status_ that is set to pending by default.
**Alice answers the share request** Now that Bob asked Alice for access to the EHR, Alice is now capable of answering the share request. Depending on Alice's answer, the execution flow might have two outcomes:
**Accept scenario** -- In case of an acceptance, Alice needs to generate the PRK required to re-encrypt the capsule and thereby enable Bob to access the plaintext content of the EHR. To achieve this, Alice requires her secret key along with her signing key pair, needed to verify the signature of the _kFrags_ and _cFrags_ at a later stage, as well as Bob's public key. Notice that just the public key is needed, due to the non-interactivity property of this PRE scheme. Lastly, since the underlying scheme of the access delegation mechanism is a threshold PRE scheme, there is also the need to provide a _threshold_, which defines the minimum number of shares needed to decrypt the capsule, and the number of _shares_, which dictates the number of outputted PRK fragments. This operation outputs the _kFrags_, which are sent to the proxy along with a _shareId_ binding the PRK to a specific share request (a sketch of this step is given after the decline scenario below). Both attributes are persisted by the proxy for further use once Bob retrieves the EHR.
The share request operation ends with the status update of the share request, which is set to accepted, together with an arbitrary expiration date defined by Alice. This expiration date is optional, making it possible to share an EHR indefinitely or temporarily, in which case the share request is automatically revoked through a cron job once that date has passed. This ensures the time-based access delegation aspect to which this work contributes.
**Decline scenario** -- In case Alice declines the share request the status is updated accordingly and no other action is performed.
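A minimal sketch of the PRK (kFrag) generation performed in the accept scenario is given below. It continues the previous upload sketch and again assumes a pyUmbral-style `generate_kfrags` interface; the threshold and share counts are illustrative choices, not values prescribed by the platform.

```
from umbral import generate_kfrags  # pyUmbral-style API (assumption)

def accept_share_request(bob_public_key, threshold=2, shares=3):
    """Alice accepts Bob's request: derive the PRK fragments for the proxy."""
    # Only Bob's *public* key is required (non-interactive delegation); Alice
    # signs the fragments so their origin can be verified later.
    kfrags = generate_kfrags(
        delegating_sk=alice_sk,      # Alice's secret key (from the upload sketch)
        receiving_pk=bob_public_key,
        signer=alice_signer,
        threshold=threshold,         # minimum fragments needed to open the capsule
        shares=shares,               # number of PRK fragments produced
    )
    return kfrags                    # sent to the proxy together with the shareId
```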
**Bob retrieves the EHR** Now that Alice explicitly delegated access to the EHR, Bob is now capable of retrieving it. To do so, Bob performs a request to the resource server, which requires Bob's secret key and the _resourceId_, which uniquely identifies the EHR. A file ownership verification is also performed since the decryption steps are different for a delegator and a delegatee, where the former does not have the need to re-encrypt the _capsule_.
As stated previously, the execution flow differs depending on whether or not the user is the data owner.
**Data owner** -- In case the user that requests the file is a data owner, a hybrid encryption approach is used, thus no re-encryption takes place.
**Not a data owner** -- If the user is not the data owner, meaning they are a delegatee, a collaborative operation between the resource and proxy servers is required to take place. For this specific scenario, Bob needs to ask the proxy to re-encrypt the capsule using the previously generated PRK. To that purpose, the resource server retrieves the EHR details and sends the capsule to the proxy server. The proxy, equipped with the capsule and the PRK fragments _kFrags_, performs the _re-encapsulation_ process outputting the _cFrags_. These _cFrags_ are
sent back to the resource server, which validates their signature through Alice's verifying key. Once the capsule fragments are validated, Bob decrypts the file by opening the capsule. This last step encompasses Bob's private key, Alice's verifying key and the verified _cFrags_. With the plaintext content of the EHR, Bob is now capable of accessing the information.
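The re-encryption and final decryption steps can be sketched as follows, again under a pyUmbral-style API. The function names and argument order follow pyUmbral's documented interface but should be treated as assumptions, and the cFrag signature verification performed by the resource server is omitted for brevity.

```
from umbral import SecretKey, reencrypt, decrypt_reencrypted  # pyUmbral-style API (assumption)

bob_sk = SecretKey.random()
bob_pk = bob_sk.public_key()

def proxy_reencrypt(capsule, kfrags, threshold):
    """Proxy side: produce capsule fragments; the proxy never sees keys or plaintext."""
    return [reencrypt(capsule=capsule, kfrag=kfrag) for kfrag in kfrags[:threshold]]

def delegatee_retrieve(record, cfrags, alice_public_key):
    """Delegatee side: open the re-encrypted capsule and decrypt the EHR."""
    # In the platform the resource server first checks each cFrag's signature with
    # Alice's verifying key; that verification step is omitted in this sketch.
    return decrypt_reencrypted(
        receiving_sk=bob_sk,
        delegating_pk=alice_public_key,
        capsule=record["capsule"],
        verified_cfrags=cfrags,
        ciphertext=record["ciphertext"],
    )
```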
**Some important remarks** to highlight are that the secret key used in the sharing process is never shared with the intermediary entity or proxy, making it semi-trusted. Additionally, the proxy only stores the PRK, which alone does not grant it the capability to decrypt the file. Furthermore, even if the stored information such as the capsule, PRK, and ciphertext were to be leaked from the database, the safety and integrity of the EHRs would still be preserved, as they are not sufficient for decrypting the EHRs.
### Break-glass approach
Safeguarding emergency scenarios is of paramount importance in a health-related platform. Therefore, we adopted an approach that features a central trustworthy entity responsible for managing authorisation in emergency scenarios. This trustworthy entity is envisioned as a government entity that is responsible for managing such issues and has full access to the files submitted to the platform.
The implementation is similar to what is described in Section 4.3 regarding Alice accepting the share request. However, in this case, there is no explicit acceptance of the share request. When an EHR is uploaded, the trusted entity user is retrieved from the database and a PRK is generated. An accepted share request is automatically created for the trusted entity, which links the PRK to the share request between the patient and the trusted entity.
Regarding the process of retrieving the EHR, it follows a similar procedure as depicted in Section 4.3. Just like in a regular file retrieval, since the share request is automatically accepted and the proxy possesses the PRK, the trusted entity requests the proxy to re-encrypt the capsules, enabling the final decryption to take place.
This approach vastly reduces the dependency on external actors, increasing the reliability and availability of the idealised break-glass approach. Having a dedicated entity for this purpose enables instant and swift access to the information if needed.
## 5 Performance analysis
In this section, we present the performance tests conducted to evaluate our platform. Given the common concerns of limited hardware infrastructures and sub-optimal conditions in governmental adoption cases, it is important to assess the responsiveness of the key operations offered by the platform. Our main goal is to quantitatively analyze the performance of the most computationally intensive operations and assess the impact of the PRE scheme. As there are no specific regulations, indications, or suggestions regarding performance for this type of
platform, our tests are purely quantitative and based on known factors and conditions.
The performance tests were carried out on a deployed version of the platform, hosted in Microsoft Azure using a Free F1 tier running Linux and Python 3.10. While these specifications may be basic, they are sufficient to simulate a sub-optimal environment. In real-world scenarios, it is common for governments to have financial restrictions, making it likely that the platform would be deployed on infrastructure with modest specifications. The tests were conducted using Apache JMeter as the tool of choice.
In the rest of this section, we present the results related to the three most crucial operations of the platform and which involve the use of PRE: file upload, accepting a share request, and file retrieval. Additionally, a brief analysis of the results is also presented.
**File upload** The performance tests depicted in this section aim to evaluate how the different file sizes impact the upload performance of files.
Since the size of EHRs depends on various factors, such as the patient's medical history, the image resolution of the machines used for exams, and the content of the file itself, determining an average file size becomes challenging. Therefore, we conducted our experiments using two different file sizes: 1MB and 10MB.
Figure 3 illustrates the results obtained from a series of twenty runs performed for each file size.
It can be observed that a tenfold increase in file size led to an average increase of 2715 _milliseconds_ (ms): the 1MB files took an average of 1154 _ms_ to upload, while the 10MB files took an average of 3870 _ms_.
Figure 3: Performance Tests - File Size Uploads Bar Chart
Although almost four _seconds_ is not an ideal response time for a REST API, the complexity of the operations performed should be taken into account. Since the upload is not a performance-critical operation, these values are acceptable.
**Accepting a share request** The acceptance of a share request is a key operation in the platform described in this paper. Although its performance does not possess a high impact on the efficiency of the platform, it does provide valuable information regarding the PRE process. In this operation, the PRK is generated and sent to the proxy for persistence purposes. Notice that, in this case, there was no need to perform the tests for both file sizes since the PRK generation only depends on cryptographic keys.
Regarding the results of these tests, the average time obtained over 20 runs was 869 _ms_. This quick response was expected, since the generation of the PRK fragments is a relatively simple operation that depends only on the cryptographic keys from both ends, the signature, and the number of shares. Additionally, there was no significant variation among the twenty runs that were performed, as supported by the low standard deviation of just 188 _ms_.
**File retrieval** This set of tests aims to assess the impact of file sizes and the use of PRE on a file retrieval scenario. The tests were conducted for both regular decryption and PRE decryption. To evaluate the impact of file sizes, the tests were performed for both 1MB and 10MB file sizes.
Moving on to the obtained results (Fig.4), a 1MB file took an average of 903 _ms_ to be retrieved while the 10MB one took an average of 2529 _ms_. Regarding file retrieval with PRE, the 1MB file took an average of 1245 _ms_ and 2877 _ms_ for the 10MB file.
We have also evaluated the impact of re-encryption on file retrieval operations (Fig.5) by directly measuring the difference between regular decryption and PRE for each file size. This resulted in an average difference of 342 _ms_ for the 1MB file and 348 _ms_ for the 10MB file.
The results of our tests indicate that there is a similar average difference between regular and PRE decryption for both file sizes. This similarity can be attributed to the fact that the re-encryption process only affects the capsule,
not the actual file. Since the sizes of the capsule and cryptographic keys are similar in both scenarios, it is expected that the results would be similar as well. The file size does not significantly impact the re-encryption of the capsule, but rather affects the overhead associated with fetching the file from the database and delivering it in the response.
Overall, the obtained results were deemed satisfactory, since most operations do not have strict performance requirements.
For more critical operations such as file retrieval, the results were also deemed satisfactory, considering the computational effort and infrastructure complexity required to ensure full correctness with the underlying threshold PRE scheme. It is important to note that these tests were conducted on a shared infrastructure with modest specifications. Thus, it was not possible to control the servers' workload during the tests, which may have negatively impacted the aforementioned results.
## 6 Conclusion
In this paper, we present a PRE-based platform for the secure sharing of e-health data, considering a sovereign approach focused on the patient. This approach is achieved by ensuring that the patient's data is only shared with their explicit consent. Furthermore, it also enables robust revocability by the patient, without requiring updates on the encrypted EHR database, further contributing to a user-centric approach. Non-interactivity is also a key characteristic of our platform, which does not require sharing the user's private key for the re-encryption process to occur. Another key achievement of our work is the proposed break-glass mechanism. Since some implementations fall short in terms of revocability, and only a few contemplate PRE in emergency scenarios, our solution uses a central trusted entity to which the proxy delegates access from the moment the EHR is uploaded to the platform. This eliminates the need to trust external actors in the system, increasing reliability and allowing swift access to the information in critical situations. There are other key characteristics of our platform worth highlighting. Firstly, it uses symmetric encryption to encrypt the EHR, which is faster than PKE. Secondly, the re-encryption process is performed over the capsule, which tends to have a much smaller size compared to an EHR. The tests that were conducted and our results show that the most demanding task is the upload of the EHR, as expected, because it requires the encapsulation process to occur and the encryption of the EHR. However, the re-encryption process does not show a significant increase when the size of the uploaded files increases. This is because the re-encryption does not involve the EHR. Our platform provides a solution to the sharing of medical data that incorporates key functionalities not covered together in previous literature, such as unidirectionality, non-interactivity, revocability, and a mechanism to deal with emergency situations. This solution contributes to the collaborative aspect of e-health and enables better and more informed treatments supported by the increased exchange of information between providers.
Regarding future work, it would be beneficial to extend the architecture to accommodate multiple proxies instead of using just one. This could be achieved by utilising a blockchain network where the proxies work together to re-encrypt the capsules, thus enabling all the benefits that a threshold-based scheme has to offer. Furthermore, additional tests could be performed using different environments and network conditions to cover more use case scenarios.
|
2305.10601 | Tree of Thoughts: Deliberate Problem Solving with Large Language Models | Language models are increasingly being deployed for general problem solving
across a wide range of tasks, but are still confined to token-level,
left-to-right decision-making processes during inference. This means they can
fall short in tasks that require exploration, strategic lookahead, or where
initial decisions play a pivotal role. To surmount these challenges, we
introduce a new framework for language model inference, Tree of Thoughts (ToT),
which generalizes over the popular Chain of Thought approach to prompting
language models, and enables exploration over coherent units of text (thoughts)
that serve as intermediate steps toward problem solving. ToT allows LMs to
perform deliberate decision making by considering multiple different reasoning
paths and self-evaluating choices to decide the next course of action, as well
as looking ahead or backtracking when necessary to make global choices. Our
experiments show that ToT significantly enhances language models'
problem-solving abilities on three novel tasks requiring non-trivial planning
or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in
Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of
tasks, our method achieved a success rate of 74%. Code repo with all prompts:
https://github.com/princeton-nlp/tree-of-thought-llm. | Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan | 2023-05-17T23:16:17Z | http://arxiv.org/abs/2305.10601v2 | # Tree of Thoughts: Deliberate Problem Solving with Large Language Models
###### Abstract
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, "Tree of Thoughts" (ToT), which generalizes over the popular "Chain of Thought" approach to prompting language models, and enables exploration over coherent units of text ("thoughts") that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: [https://github.com/ysymyth/tree-of-thought-llm](https://github.com/ysymyth/tree-of-thought-llm).
## 1 Introduction
Originally designed to generate text, scaled-up versions of language models (LMs) such as GPT [22; 23; 1; 20] and PaLM [5] have been shown to be increasingly capable of performing an ever wider range of tasks requiring mathematical, symbolic, commonsense, and knowledge reasoning. It is perhaps surprising that underlying all this progress is still the original autoregressive mechanism for generating text, which makes token-level decisions one by one and in a left-to-right fashion. Is such a simple mechanism sufficient for a LM to be built toward a general problem solver? If not, what problems would challenge the current paradigm, and what should be alternative mechanisms?
The literature on human cognition provides some clues to answer these questions. Research on "dual process" models suggests that people have two modes in which they engage with decisions - a fast, automatic, unconscious mode ("System 1") and a slow, deliberate, conscious mode ("System 2") [27; 28; 13; 12]. These two modes have previously been connected to a variety of mathematical models used in machine learning. For example, research on reinforcement learning in humans and other animals has explored the circumstances under which they engage in associative "model free" learning or more deliberative "model based" planning [6]. The simple associative token-level choices of LMs are also reminiscent of "System 1", and thus might benefit from augmentation by a more deliberate "System 2" planning process that (1) maintains and explores diverse alternatives for current
choices instead of just picking one, and (2) evaluates its current status and actively looks ahead or backtracks to make more global decisions.
To design such a planning process, we return to the origins of artificial intelligence (and cognitive science), drawing inspiration from the planning processes explored by Newell, Shaw, and Simon starting in the 1950s [18; 19]. Newell and colleagues characterized problem solving [18] as search through a combinatorial problem space, represented as a tree. We thus propose the Tree of Thoughts (ToT) framework for general problem solving with language models. As Figure 1 illustrates, while existing methods (detailed below) sample continuous language sequences for problem solving, ToT actively maintains a tree of thoughts, where each _thought_ is a coherent language sequence that serves as an intermediate step toward problem solving (Table 1). Such a high-level semantic unit allows the LM to self-evaluate the progress different intermediate thoughts make towards solving the problem through a deliberate reasoning process that is also instantiated in language (Figures 2,4,6). This implementation of search heuristics via LM self-evaluation and deliberation is novel, as previous search heuristics are either programmed or learned. Finally, we combine this language-based capability to generate and evaluate diverse thoughts with search algorithms, such as breadth-first search (BFS) or depth-first search (DFS), which allow systematic exploration of the tree of thoughts with lookahead and backtracking.
Empirically, we propose three new problems that challenge existing LM inference methods even with the state-of-the-art language model, GPT-4 [20]: Game of 24, Creative Writing, and Crosswords (Table 1). These tasks require deductive, mathematical, commonsense, lexical reasoning abilities, and a way to incorporate systematic planning or search. We show ToT obtains superior results on all three tasks by being general and flexible enough to support different levels of thoughts, different ways to generate and evaluate thoughts, and different search algorithms that adapt to the nature of different problems. We also analyze how such choices affect model performances via systematic ablations and discuss future directions to better train and use LMs.
## 2 Background
We first formalize some existing methods that use large language models for problem-solving, which our approach is inspired by and later compared with. We use \(p_{\theta}\) to denote a pre-trained LM with parameters \(\theta\), and **lowercase letters**\(x,y,z,s,\cdots\)**to denote a language sequence**, i.e. \(x=(x[1],\cdots,x[n])\) where each \(x[i]\) is a token, so that \(p_{\theta}(x)=\prod_{i=1}^{n}p_{\theta}(x[i]|x[1...i])\). We use uppercase letters \(S,\cdots\) to denote a collection of language sequences.
**Input-output (IO) prompting** is the most common way to turn a problem input \(x\) into output \(y\) with LM: \(y\sim p_{\theta}(y|\texttt{prompt}_{IO}(x))\), where \(\texttt{prompt}_{IO}(x)\) wraps input \(x\) with task instructions and/or few-shot input-output examples. For simplicity, let us denote \(p_{\theta}^{\text{prompt}}(\texttt{output}\mid\texttt{input})=p_{\theta}( \texttt{output}\mid\texttt{prompt}(\texttt{input}))\), so that IO prompting can be formulated as \(y\sim p_{\theta}^{IO}(y|x)\).
Figure 1: Schematic illustrating various approaches to problem solving with LLMs. Each rectangle box represents a _thought_, which is a coherent language sequence that serves as an intermediate step toward problem solving. See concrete examples of how thoughts are generated, evaluated, and searched in Figures 2,4,6.
**Chain-of-thought (CoT) prompting**[35] was proposed to address cases where the mapping of input \(x\) to output \(y\) is non-trivial (e.g. when \(x\) is a math question and \(y\) is the final numerical answer). The key idea is to introduce a chain of _thoughts_\(z_{1},\cdots,z_{n}\) to bridge \(x\) and \(y\), where each \(z_{i}\) is a coherent language sequence that serves as a meaningful intermediate step toward problem solving (e.g. \(z_{i}\) could be an intermediate equation for math QA). To solve problems with CoT, each thought \(z_{i}\sim p_{\theta}^{CoT}(z_{i}\mid x,z_{1\cdots i-1})\) is sampled sequentially, then the output \(y\sim p_{\theta}^{CoT}(y|x,z_{1\cdots n})\). In practice, \([z_{1\cdots n},y]\sim p_{\theta}^{CoT}(z_{1\cdots n},y|x)\) is sampled as a continuous language sequence, and the **decomposition** of thoughts (e.g. is each \(z_{i}\) a phrase, a sentence, or a paragraph) is left ambiguous.
**Self-consistency with CoT (CoT-SC)**[33] is an ensemble approach that samples \(k\) i.i.d. chains of thought: \([z_{1\cdots n}^{(i)},y^{(i)}]\sim p_{\theta}^{CoT}(z_{1\cdots n},y|x)\)\((i=1\cdots k)\), then returns the most frequent output: \(\operatorname*{arg\,max}_{y}\#\{i\mid y^{(i)}=y\}\). CoT-SC improves upon CoT, because there are generally different thought processes for the same problem (e.g. different ways to prove the same theorem), and the output decision can be more faithful by exploring a richer set of thoughts. However, within each chain there is no local exploration of different thought steps, and the "most frequent" heuristic only applies when the output space is limited (e.g. multi-choice QA).
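As a concrete reading of these formulations, the sketch below implements IO prompting and CoT self-consistency around a generic `sample` helper. The prompts, the answer-extraction rule, and the helper itself are illustrative placeholders, not the paper's actual prompts or released code.

```
from collections import Counter

IO_PROMPT = "Answer the problem:\n"                        # few-shot examples omitted
COT_PROMPT = "Answer the problem, reasoning step by step:\n"

def sample(prompt, n=1):
    """Placeholder for n i.i.d. LM completions (e.g. GPT-4 at temperature 0.7)."""
    raise NotImplementedError

def io_prompting(x):
    """y ~ p_IO(y | x)."""
    return sample(IO_PROMPT + x, n=1)[0]

def cot_sc(x, k=100, extract_answer=lambda chain: chain.splitlines()[-1]):
    """Self-consistency: sample k chains of thought and return the majority answer."""
    chains = sample(COT_PROMPT + x, n=k)
    answers = [extract_answer(chain) for chain in chains]
    return Counter(answers).most_common(1)[0][0]
```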
## 3 Tree of Thoughts: Deliberate Problem Solving with LM
_A genuine problem-solving process involves the repeated use of available information to initiate exploration, which discloses, in turn, more information until a way to attain the solution is finally discovered._--_Newell et al._[18]
Research on human problem-solving suggests that people search through a combinatorial problem-space - a tree where the nodes represent partial solutions, and the branches correspond to operators that modify them [18, 19]. Which branch to take is determined by heuristics that help to navigate the problem-space and guide the problem-solver towards a solution. This perspective highlights two key shortcomings of existing approaches that use LMs to solve general problems: 1) Locally, they do not explore _different_ continuations within a thought process - the branches of the tree. 2) Globally, they do not incorporate any type of planning, lookahead, or backtracking to help evaluate these different options - the kind of heuristic-guided search that seems characteristic of human problem-solving.
To address these shortcomings, we introduce _Tree of Thoughts (ToT)_, a paradigm that allows LMs to explore multiple reasoning paths over thoughts (Figure 1(c)). ToT frames any problem as a search over a tree, where each node is a **state**\(s=[x,z_{1\cdots i}]\) representing a partial solution with the input and the sequence of thoughts so far. A specific instantiation of ToT involves answering four questions: 1. How to **decompose** the intermediate process into thought steps; 2. How to **generate** potential thoughts from each state; 3. How to heuristically **evaluate** states; 4. What **search** algorithm to use.
**1. Thought decomposition.** While CoT samples thoughts coherently without explicit decomposition, ToT leverages problem properties to design and decompose intermediate thought steps. As Table 1 shows, depending on different problems, a thought could be a couple of words (Crosswords), a line of equation (Game of 24), or a whole paragraph of writing plan (Creative Writing). In general, a thought should be "small" enough so that LMs can generate promising and diverse samples (e.g. generating a whole book is usually too "big" to be coherent), yet "big" enough so that LMs can evaluate its prospect toward problem solving (e.g. generating one token is usually too "small" to evaluate).
**2. Thought generator \(G(p_{\theta},s,k)\).** Given a tree state \(s=[x,z_{1\cdots i}]\), we consider two strategies to generate \(k\) candidates for the next thought step:
1. **Sample** i.i.d. thoughts from a CoT prompt (Creative Writing, Figure 4): \(z^{(j)}\sim p_{\theta}^{CoT}(z_{i+1}|s)=p_{\theta}^{CoT}(z_{i+1}|x,z_{1\cdots i })\)\((j=1\cdots k)\). This works better when the thought space is rich (e.g. each thought is a paragraph), and i.i.d. samples lead to diversity;
2. **Propose** thoughts sequentially using a "propose prompt" (Game of 24, Figure 2; Crosswords, Figure 6): \([z^{(1)},\cdots,z^{(k)}]\sim p_{\theta}^{propose}(z_{i+1}^{(1\cdots k)}\mid s)\). This works better when the thought space is more constrained (e.g. each thought is just a word or a line), so proposing different thoughts in the same context avoids duplication.
**3. State evaluator \(V(p_{\theta},S)\).** Given a frontier of different states, the state evaluator evaluates the progress they make towards solving the problem, serving as a _heuristic_ for the search algorithm to determine which states to keep exploring and in which order. While heuristics are a standard approach to solving search problems, they are typically either programmed (e.g. DeepBlue [3]) or
learned (e.g. AlphaGo [26]). We propose a third alternative, by using the LM to deliberately reason about states. When applicable, such a deliberate heuristic can be more flexible than programmed rules, and more sample-efficient than learned models. Similar to the thought generator, we consider two strategies to evaluate states either independently or together:
1. **Value** each state independently: \(V(p_{\theta},S)(s)\sim p_{\theta}^{value}(v|s)\)\(\forall s\in S\), where a value prompt reasons about the state \(s\) to generate a scalar value \(v\) (e.g. 1-10) or a classification (e.g. sure/likely/impossible) that could be heuristically turned into a value. The basis of such evaluative reasoning can vary across problems and thought steps. In this work, we explore evaluation via few _lookahead_ simulations (e.g. quickly confirm that 5, 5, 14 can reach 24 via 5 + 5 + 14, or "hot_l" can mean "inn" via filling "e" in ".") plus commonsense (e.g. 1 2 3 are too small to reach 24, or no word can start with "tzxc"). While the former might promote "good" states, the latter could help eliminate "bad" states. Such valuations do not need to be perfect, and only need to be approximately correct.
2. **Vote** across states: \(V(p_{\theta},S)(s)=\mathds{1}[s=s^{*}]\), where a "good" state \(s^{*}\sim p_{\theta}^{vote}(s^{*}|S)\) is voted out based on deliberately comparing different states in \(S\) in a vote prompt. When problem success is harder to directly value (e.g. passage coherency), it is natural to instead compare different partial solutions and vote for the most promising one. This is similar in spirit to a "step-wise" self-consistency strategy, i.e. cast "which state to explore" as a multi-choice QA, and use LM samples to vote for it.
For both strategies, we could prompt the LM multiple times to aggregate the value or vote results to trade time/resource/cost for more faithful/robust heuristics.
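A minimal sketch of the two evaluation strategies is given below, reusing the `sample` placeholder from the earlier sketch. The sure/likely/impossible weights and the vote-parsing rule are illustrative assumptions rather than the authors' implementation.

```
from collections import Counter

def sample(prompt, n=1):
    """LM sampling placeholder, as in the earlier sketch."""
    raise NotImplementedError

VALUE_WEIGHTS = {"impossible": 0.0, "likely": 1.0, "sure": 20.0}  # illustrative weights

def value_state(state, n_samples=3):
    """Independent valuation: classify a state several times and sum the weights."""
    labels = sample("Evaluate whether the state below can still reach the goal "
                    "(sure/likely/impossible):\n" + state, n=n_samples)
    return sum(weight for label in labels
               for key, weight in VALUE_WEIGHTS.items() if key in label.lower())

def vote_states(states, n_votes=5):
    """Comparative voting: ask the LM which candidate state is most promising."""
    ballot = "\n".join("Choice {}: {}".format(i, s) for i, s in enumerate(states))
    replies = sample("Analyze the choices below, then conclude which is most "
                     "promising. Answer with the index only.\n" + ballot, n=n_votes)
    votes = Counter(int(r.strip()) for r in replies if r.strip().isdigit())
    return [votes.get(i, 0) for i in range(len(states))]
```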
```
Input \(x\), LM \(p_{\theta}\), thought generator \(G()\) & size limit \(k\), states evaluator \(V()\), step limit \(T\), breadth limit \(b\).
\(S_{0} \leftarrow \{x\}\)
for \(t = 1, \cdots, T\) do
    \(S^{\prime}_{t} \leftarrow \{[s, z] \mid s \in S_{t-1}, z_{t} \in G(p_{\theta}, s, k)\}\)
    \(V_{t} \leftarrow V(p_{\theta}, S^{\prime}_{t})\)
    \(S_{t} \leftarrow \operatorname*{arg\,max}_{S \subset S^{\prime}_{t}, |S| = b} \sum_{s \in S} V_{t}(s)\)
end for
return \(G(p_{\theta}, \operatorname*{arg\,max}_{s \in S_{T}} V_{T}(s), 1)\)
```
**Algorithm 1** ToT-BFS(\(x,p_{\theta},G,k,V,T,b\))
**Algorithm 2** ToT-DFS(\(s,t,p_{\theta},G,k,V,T,v_{th}\))
**4. Search algorithm.** Finally, within the ToT framework, one can plug and play different search algorithms depending on the tree structure. We explore two relatively simple search algorithms and leave more advanced ones (e.g. A* [9], MCTS [2]) for future work:
1. **Breadth-first search (BFS)** (Algorithm 1) maintains a set of the \(b\) most promising states per step. This is used for Game of 24 and Creative Writing where the tree depth is limit (\(T\leq 3\)), and initial thought steps can be evaluated and pruned to a small set (\(b\leq 5\)).
2. **Depth-first search (DFS)** (Algorithm 2) explores the most promising state first, until the final output is reached (\(t>T\)), or the state evaluator deems it impossible to solve the problem from the current \(s\) (\(V(p_{\theta},\{s\})(s)\leq v_{th}\) for a value threshold \(v_{th}\)). In the latter case, the subtree from \(s\) is pruned to trade exploration for exploitation. In both cases, DFS _backtracks_ to the parent state of \(s\) to continue exploration.
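A compact Python rendering of ToT-BFS (Algorithm 1) is given below, assuming `generate(state, k)` returns k candidate thoughts and `evaluate(state)` returns a scalar heuristic value (for example, backed by the value or vote prompts sketched earlier). This is an illustrative sketch, not the authors' released implementation.

```
def tot_bfs(x, generate, evaluate, T=3, k=5, b=5):
    """Breadth-first Tree of Thoughts (Algorithm 1): keep the b best states per step."""
    frontier = [x]                                     # S_0 = {x}
    values = {x: 0.0}
    for _ in range(T):
        candidates = [s + "\n" + z                     # extend each state with k thoughts
                      for s in frontier
                      for z in generate(s, k)]
        values = {s: evaluate(s) for s in candidates}  # heuristic value of each state
        frontier = sorted(candidates, key=values.get, reverse=True)[:b]
    best = max(frontier, key=values.get)               # best final state
    return generate(best, 1)[0]                        # sample the final output from it
```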
Conceptually, ToT has several benefits as a method for general problem-solving with LMs: (1) Generality. IO, CoT, CoT-SC, and self-refinement can be seen as special cases of ToT (i.e. trees of limited depth and breadth; Figure 1). (2) Modularity. The base LM, as well as the thought decomposition, generation, evaluation, and search procedures can all be varied independently. (3) Adaptability. Different problem properties, LM capabilities, and resource constraints can be accommodated. (4) Convenience. No extra training is needed, just a pre-trained LM is sufficient. The next section will show how these conceptual benefits translate to strong empirical performance in different problems.
## 4 Experiments
We propose three tasks that are hard even when sampling from the state-of-the-art language model, GPT-4 [20], using standard IO prompting or chain-of-thought (CoT) prompting. We show how
deliberate search in trees of thoughts (ToT) produces better results, and more importantly, interesting and promising new ways to use language models to solve problems requiring search or planning. Unless otherwise stated, we perform experiments using a Chat Completion mode GPT-41 with a sampling temperature of 0.7.
Footnote 1: Experiments were done between May 5-16, 2023.
### Game of 24
Game of 24 is a mathematical reasoning challenge, where the goal is to use 4 numbers and basic arithmetic operations (+-*/) to obtain 24. For example, given input "4 9 10 13", a solution output could be "(10 - 4) * (13 - 9) = 24".
**Task Setup.** We scrape data from 4nums.com, which has 1,362 games that are sorted from easy to hard by human solving time, and use a subset of relatively hard games indexed 901-1,000 for testing. For each task, we consider the output as success if it is a valid equation that equals 24 and uses the input numbers each exactly once. We report the success rate across 100 games as the metric.
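The success criterion can be made concrete with a small checker like the one below; this is an illustrative reconstruction of the test described above, not the authors' scoring script.

```
import re

def game24_success(answer, numbers):
    """Check an answer: a valid arithmetic expression equal to 24 that uses each
    input number exactly once (e.g. "(10 - 4) * (13 - 9) = 24" for 4 9 10 13)."""
    expression = answer.split("=")[0]
    if not re.fullmatch(r"[0-9+\-*/() ]+", expression):
        return False                                   # only arithmetic is allowed
    if sorted(int(tok) for tok in re.findall(r"\d+", expression)) != sorted(numbers):
        return False                                   # each number used exactly once
    try:
        return abs(eval(expression, {"__builtins__": {}}) - 24) < 1e-6
    except Exception:
        return False
```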
**Baselines.** We use a standard input-output (IO) prompt with 5 in-context examples. For chain-of-thought (CoT) prompting, we augment each input-output pair with 3 intermediate equations, each operating on two remaining numbers. For example, given input "4 9 10 13", the thoughts could be "13 - 9 = 4 (left: 4 4 10); 10 - 4 = 6 (left: 4 6); 4 * 6 = 24 (left: 24)". For each game, we sample IO and CoT prompting 100 times for average performance. We also consider a CoT self-consistency baseline, which takes the majority output from 100 CoT samples, and an iterative-refine approach on top of an IO sample for at most \(10\) iterations. At each iteration, the LM is conditioned on all previous history to "reflect on your mistakes and generate a refined answer" if the output is incorrect. Note that it uses groundtruth feedback signals about equation correctness.
**ToT Setup.** To frame Game of 24 into ToT, it is natural to decompose the thoughts into 3 steps, each an intermediate equation. As shown in Figure 2(a), at each tree node, we extract the "left" numbers and prompt the LM to propose some possible next steps. The same "propose prompt" is used for all 3 thought steps, though it only has one example with 4 input numbers. We perform a breadth-first search (BFS) in ToT, where at each step we keep the best \(b=5\) candidates. To perform deliberate BFS in ToT, as shown in Figure 2(b), we prompt the LM to evaluate each thought candidate as "sure/maybe/impossible" with regard to reaching 24. The aim is to promote correct partial solutions that can be verified within a few lookahead trials, eliminate impossible partial solutions based on "too big/small" commonsense, and keep the rest as "maybe". We sample values \(3\) times for each thought.
\begin{table}
\begin{tabular}{l|l l l} \hline \hline & **Game of 24** & **Creative Writing** & **5x5 Crosswords** \\ \hline
**Input** & 4 numbers (4 9 10 13) & 4 random sentences & 10 clues (h1.presented;..) \\ \hline
**Output** & An equation to reach 24 (13-9)*(10-4)=24 & A passage of 4 paragraphs ending in the 4 sentences & 5x5 letters: SHOWN; WIRRA; AVAIL;... \\ \hline
**Thoughts** & 3 intermediate equations (13-9=4 (left 4,4,10); 10-4=6 (left 4,6); 4*6=24) & A short writing plan (1. Introduce a book that connects...) & Words to fill in for clues: (h1.shown; v5. naled;...) \\ \hline
**\#ToT steps** & 3 & 1 & 5-10 (variable) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Task overview. Input, output, thought examples are in blue.
Figure 2: ToT in a game of 24. The LM is prompted for (a) thought generation and (b) valuation.
**Results.** As shown in Table 2, IO, CoT, and CoT-SC prompting methods perform badly on the task, achieving only 7.3%, 4.0%, and 9.0% success rates. In contrast, ToT with a breadth of \(b=1\) already achieves a success rate of \(45\%\), while \(b=5\) achieves \(74\%\). We also consider an oracle setup for IO/CoT, by calculating the success rate using best of \(k\) samples (\(1\leq k\leq 100\)). To compare IO/CoT (best of k) with ToT, we consider calculating the tree nodes visited per task in ToT across \(b=1\cdots 5\), and map the 5 success rates in Figure 3(a), treating IO/CoT (best of \(k\)) as visiting \(k\) nodes in a bandit. Not surprisingly, CoT scales better than IO, and best of 100 CoT samples achieve a success rate of \(49\%\), but still much worse than exploring more nodes in ToT (\(b>1\)).
**Error Analysis.** Figure 3(b) breaks down at which step CoT and ToT samples fail the task, i.e. the thought (in CoT) or all \(b\) thoughts (in ToT) are invalid or impossible to reach 24. Notably, around 60% of CoT samples already failed the task after generating the first step, or equivalently, the first three words (e.g. "\(4+9\)"). This highlights the issues with direct left-to-right decoding.
### Creative writing
Next, we invent a creative writing task where the input is 4 random sentences and the output should be a coherent passage with 4 paragraphs that end in the 4 input sentences respectively. Such a task is open-ended and exploratory, and challenges creative thinking as well as high-level planning.
**Task setup.** We sample random sentences from randomwordgenerator.com to form 100 inputs, and there is no groundtruth passage for each input constraint. As we find that GPT-4 can follow the input constraints most of the time, we focus on evaluating passage coherency in two ways: using a GPT-4 zero-shot prompt to provide a 1-10 scalar score, or using human judgments to compare pairs of outputs from different methods. For the former, we sample 5 scores and average them for each task output, and we find these 5 scores are usually consistent, with a standard deviation of around \(0.56\) on average across outputs. For the latter, we employ a subset of the authors in a blind study to compare the coherency of CoT vs. ToT generated passage pairs, where the order of passages is randomly flipped over 100 inputs.
**Baselines.** Given the creative nature of the task, both IO and CoT prompts are zero-shot. While the former prompts the LM to directly generate a coherent passage given input constraints, the latter prompts the LM to first make a brief plan then write the passage, i.e. the plan serves as the intermediate thought step. We generate 10 IO and CoT samples per task. We also consider an iterative-refine (\(k\leq 5\)) method on top of a random IO sample for each task, where the LM is conditioned on input constraints and the last generated passage to decide if the passage is already "perfectly coherent", and if not generate a refined one.
**ToT setup.** We build a ToT with depth 2 (and only 1 intermediate thought step) -- the LM first generates \(k=5\) plans and votes for the best one (Figure 4), then similarly generate \(k=5\) passages based on the best plan then vote for the best one. Here the breadth limit \(b=1\), as only one choice is kept per step. A simple zero-shot vote prompt ("analyze choices below, then conclude which is most promising for the instruction") is used to sample 5 votes at both steps.
**Results.** Figure 5(a) shows average GPT-4 scores across 100 tasks, where ToT (7.56) is deemed to generate more coherent passages than IO (6.19) and CoT (6.93) on average. While such an automatic metric might be noisy, Figure 5(b) confirms the finding by showing that humans prefer ToT over CoT in 41 out of 100 passage pairs, while only prefer CoT over ToT in 21 (other 38 pairs are found "similarly coherent"). Lastly, iterative-refine is more effective on this natural language task, where
\begin{table}
\begin{tabular}{l l} \hline \hline
**Method** & **Success** \\ \hline IO prompt & 7.3\% \\ CoT prompt & 4.0\% \\ CoT-SC (k=100) & 9.0\% \\ ToT (ours) (b=1) & 45\% \\ ToT (ours) (b=5) & **74\%** \\ \hline IO + Refine (k=10) & 27\% \\ IO (best of 100) & 33\% \\ CoT (best of 100) & 49\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Game of 24 Results. Figure 3: Game of 24 (a) scale analysis & (b) error analysis.
it improves IO coherency score from 6.19 to 7.67, and ToT coherency score from 7.56 to 7.91. We believe it could be thought of as a third approach to thought generation in the ToT framework, where new thoughts can arise from refining old thoughts instead of i.i.d. or sequentially generated.
### Mini Crosswords
In Game of 24 and Creative Writing, ToT is relatively shallow -- at most 3 thought steps are needed to reach the final output. Here we explore \(5\times 5\) mini crosswords as a harder search problem involving natural language. Again, the goal is not just to solve the task, as more general crosswords can be readily solved with specialized NLP pipelines [31] that leverage large-scale retrieval instead of an LM. Rather, we aim to explore the limit of LM as a general problem solver that explores its own thoughts and guides its own exploration with deliberate reasoning as heuristics.
**Task Setup.** We scrape data from GooBix, which contains 156 games of \(5\times 5\) mini crosswords. As we observe adjacent games contain similar clues, we use 20 games with indices \(1,6,\cdots,91,96\) for testing, and games \(136,141,146,151,156\) for prompting. For each task, the input describes the 5 horizontal clues and 5 vertical clues, and the output should be a board of \(5\times 5=25\) letters to solve the crosswords. For evaluation, we consider three levels of success: the portion of correct letters (25 per game), words (10 per game), and games.
**Baselines.** We provide 5 example input-output pairs in the IO prompt, and in the CoT prompt additionally include intermediate words in the order h1..5 then v1..5. We run each prompt for 10 samples and average the results.
**ToT Setup.** We leverage a depth-first search (Algorithm 2) that keeps exploring the most promising subsequent word clue until the state is no longer promising, then backtrack to the parent state to explore alternative thoughts. To make search tractable, subsequent thoughts are constrained not to change any filled words or letters, so that the ToT has at most 10 intermediate steps. For thought generation, at each state we translate all existing thoughts (e.g. "h2.motor; h1.tasks" for the state in Figure 6(a)) into letter constraints for remaining clues (e.g. "v1.To heap: tm_,...;...") and prompt a proposal prompt \(5\) times to come up with candidates for where and what to fill in the next word. Importantly, we also prompt the LM to give a confidence level for different thoughts, and aggregate
Figure 4: A step of deliberate search in a randomly picked Creative Writing task. Given the input, the LM samples 5 different plans, then votes 5 times to decide which plan is best. The majority choice is used to consequently write the output passage with the same sample-vote procedure.
Figure 5: Creative Writing results.
\begin{table}
\begin{tabular}{l|l l l} \hline \hline
**Method** & \multicolumn{3}{l}{**Success Rate (\%)**} \\ & \multicolumn{1}{l}{**Letter Word**} & \multicolumn{1}{l}{**Game**} \\ \hline IO & 38.7 & 14 & 0 \\ CoT & 40.6 & 15.6 & 1 \\ ToT (ours) & **78** & **60** & **20** \\ \hline +best state & 82.4 & 67.5 & 35 \\ -prune & 65.4 & 41.5 & 5 \\ -backtrack & 54.6 & 20 & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mini Crosswords results.
these across proposals to obtain a sorted list of next thoughts to explore (Figure 6(a)). For state evaluations, we similarly translate each state into letter constraints for remaining clues, then evaluate for each clue if it is possible to fill given the constraints. If any remaining clue is deemed "impossible" to fill in (e.g. "v1. To heap: tm_s_"), then the exploration of the state's subtree is pruned and DFS backtracks to its parent to explore the next promising thought. We limit DFS search steps to 100, and simply render the deepest explored state (the first explored one if multiple) into the final output.
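A simplified sketch of this depth-first procedure (in the spirit of Algorithm 2) is given below. It replaces the 100-step budget with a maximum depth, assumes `propose(state)` returns candidate thoughts already sorted by the LM's aggregated confidence and `value(state)` returns the pruning heuristic, and omits the final output heuristic; it is an illustration rather than the authors' code.

```
def tot_dfs(state, propose, value, depth=0, max_depth=10, v_threshold=0.0):
    """Depth-first ToT: explore the most promising thought first, prune subtrees the
    evaluator deems impossible, and backtrack (return) to try the next candidate."""
    if depth >= max_depth:
        return state
    deepest = state
    # propose() is assumed to return candidate next thoughts already sorted by the
    # LM's aggregated confidence (the priority queue of Figure 6a).
    for thought in propose(state):
        child = state + [thought]
        if value(child) <= v_threshold:   # e.g. some remaining clue judged "impossible"
            continue                      # prune this subtree
        result = tot_dfs(child, propose, value, depth + 1, max_depth, v_threshold)
        if len(result) > len(deepest):    # keep the deepest explored state
            deepest = result
    return deepest                        # returning to the caller is the backtrack
```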
**Results.** As shown in Table 3, IO and CoT prompting methods perform poorly with a word-level success rate less than \(16\%\), while ToT significantly improves all metrics, achieving a word-level success rate of \(60\%\) and solving 4 out of 20 games. Such an improvement is not surprising, given IO and CoT lack mechanisms to try different clues, make changes to decisions, or backtrack.
**Oracle and ablation studies.** When outputting from the oracle best DFS state (instead of the heuristically determined best state) per task, ToT performance is even higher and actually solves 7/20 games (Table 3, "+best state"), indicating our simple output heuristics can be readily improved. Interestingly, sometimes when the crosswords game is actually solved, the state evaluator might still deem some words as "impossible" and prune -- possibly because \(5\times 5\) crosswords by design have some rare or obsolete words that GPT-4 cannot recognize2. Given that the state evaluation used as a pruning heuristic is imperfect, we also explore ablating the pruning, and find the performance generally worse (Table 3, "-prune"). However, it could actually find the correct solution for 4/20 games (though only outputting 1 via heuristic), 3 of which are games ToT+pruning cannot solve within 100 steps. Thus, better heuristics for DFS pruning are critical for problem solving in this case. Lastly, we confirm the importance of backtracking by running an ablation that keeps filling the most promising clue for at most 20 steps, allowing overwrites. This is similar to a "greedy" BFS search with breadth limit of \(b=1\), and performs poorly with a word level success of only \(20\%\) (Table 3, "-backtrack").
Footnote 2: For example, “agent” is an obsolete form of “agentum”, but GPT-4 deems it a typo for “agenda”. External retrieval or web interaction could augment LM for problem solving under knowledge uncertainty.
## 5 Related Work
**Planning and decision making.** Smart planning and decision making are critical to achieving predefined goals. As they are trained on vast amount of world knowledge and human examples, LMs are known to have already absorbed rich commonsense that makes it possible to propose reasonable plans conditioned on problem setting and environmental states [10; 39; 34; 11; 32; 38; 37]. Our proposed Tree-of-Thought approach extends existing planning formulations by considering multiple potentially feasible plans simultaneously at each problem-solving step, and proceeding with the most promising ones. The integration between thought sampling and value feedback organically integrates planning and decision-making mechanisms, enabling effective search inside a solution tree. On the other hand, traditional decision-making procedures usually require training dedicated reward and policy models as in reinforcement learning (for example CHAI [30]), whereas we use the LM itself to provide the value estimates for decision making.
Figure 6: In Mini Crosswords, (a) how thoughts are proposed and aggregated in a priority queue for depth-first search (DFS), and (b) how a state is evaluated based on the possibility of filling in each remaining word clue, and pruned if any remaining clue is deemed not possible to fill by the LM. Then DFS backtracks to the parent state and explore the next promising thought for clue.
**Self-reflection.** Using LLMs to assess the viability of their own predictions is becoming an increasingly important procedure in problem solving. [25; 17; 21] introduced the "self-reflection" mechanism, in which LMs provide feedback to their generation candidates. [4] improves LMs code generation accuracy by injecting feedback messages generated by the LM itself based on its code execution results. Similarly, [14] also introduces "critic" or review steps over the actions and states, deciding the next action to take in solving computer operation tasks. Another recent work very relevant to ours is "self-eval guided decoding" [36]. Similar to our method, self-eval decoding also follows a tree-search procedure with leaves sampled from stochastic beam search decoding, which are then evaluated by LLM itself with carefully prepared self-eval prompts. Their approach however, uses the PAL formulation [7] which represents thoughts as codes, which makes it difficult to tackle challenging tasks like creative writing which we consider in this paper. Our Tree-of-Thought formulation is thus more versatile and handles challenging tasks on which GPT-4 only achieves very low accuracy with standard prompts.
**Program-guided LLM generation.** Our proposal is also related to recent advancements that organize LM's behavior with symbolic program guidance. For example [24] embeds LMs in an algorithmic search procedure to help solve problems like question answering step-by-step, in which the search trees are expanded by relevant paragraphs that might provide answers. This approach however differs from ours in that trees are expanded by sampling external paragraphs instead of the LM's own thoughts, and there is no reflection or voting steps. Another approach, LLM+P [15], goes one step further and delegates the actual planning process to a classical planner.
**Classical search methods.** Last but not least, our approach can be treated as a modern rendition of classical search methods for problem solving. For example, it can be considered as a heuristic search algorithm like A* [8], in which the heuristic at each search node is provided by the LM's self-assessment. From this perspective, our method is also related to NeuroLogic A*esque decoding proposed in [16], which is inspired by A* search but introduces look-ahead heuristics that are efficient for LMs to improve the beam-search or top-k sampling decoding. This method however is constrained to sentence generation tasks, whereas our framework is designed for complex, multi-step problem solving guarded by value feedback.
## 6 Discussion
**Limitations and future directions.** Deliberate search such as ToT might not be necessary for many existing tasks that GPT-4 already excels at, and as an initial step this work only explores three relatively simple tasks that challenge GPT-4 and call for better search and planning abilities to be incorporated with LMs. However, as we begin to deploy LMs for more real-world decision making applications (e.g. coding, data analysis, robotics, etc.), more complex tasks could emerge and present new opportunities to study these research questions. Also, search methods like ToT require more resources (e.g. GPT-4 API cost) than sampling methods in order to improve task performance, but the modular flexibility of ToT allows users to customize such performance-cost tradeoffs, and ongoing open-source efforts [29] should readily reduce such costs in the near future. Lastly, this work focuses on using an off-the-shelf LM, and fine-tuning LMs using a ToT-style high-level counterfactual decision making (e.g. deliberating over potential choices for the next paragraph, instead of predicting the next token) might present opportunities to enhance the problem-solving capabilities of LMs.
**Broader impact.** ToT is a framework that empowers LMs to more autonomously and intelligently make decisions and solve problems. While current tasks are limited to reasoning and search problems, future applications involving interaction with external environments or humans could bring potential danger, e.g. facilitating harmful uses of LMs. On the other hand, ToT also improves the interpretability of model decisions and the opportunity for human alignment, as the resulting representations are readable, high-level language reasoning instead of implicit, low-level token values.
**Conclusion.** The associative "System 1" of LMs can be beneficially augmented by a "System 2" based on searching a tree of possible paths to the solution to a problem. The Tree of Thoughts framework provides a way to translate classical insights about problem-solving into actionable methods for contemporary LMs. At the same time, LMs address a weakness of these classical methods, providing a way to solve complex problems that are not easily formalized, such as creative writing. We see this intersection of LMs with classical approaches to AI as an exciting direction for future work. |
2306.09600 | Learning to Assist and Communicate with Novice Drone Pilots for Expert
Level Performance | Multi-task missions for unmanned aerial vehicles (UAVs) involving inspection
and landing tasks are challenging for novice pilots due to the difficulties
associated with depth perception and the control interface. We propose a shared
autonomy system, alongside supplementary information displays, to assist pilots
to successfully complete multi-task missions without any pilot training. Our
approach comprises of three modules: (1) a perception module that encodes
visual information onto a latent representation, (2) a policy module that
augments pilot's actions, and (3) an information augmentation module that
provides additional information to the pilot. The policy module is trained in
simulation with simulated users and transferred to the real world without
modification in a user study (n=29), alongside supplementary information
schemes including learnt red/green light feedback cues and an augmented reality
display. The pilot's intent is unknown to the policy module and is inferred
from the pilot's input and UAV's states. The assistant increased task success
rate for the landing and inspection tasks from [16.67% & 54.29%] respectively
to [95.59% & 96.22%]. With the assistant, inexperienced pilots achieved similar
performance to experienced pilots. Red/green light feedback cues reduced the
required time by 19.53% and trajectory length by 17.86% for the inspection
task, where participants rated it as their preferred condition due to the
intuitive interface and providing reassurance. This work demonstrates that
simple user models can train shared autonomy systems in simulation, and
transfer to physical tasks to estimate user intent and provide effective
assistance and information to the pilot. | Kal Backman, Dana Kulić, Hoam Chung | 2023-06-16T02:59:20Z | http://arxiv.org/abs/2306.09600v1 | # Learning to Assist and Communicate with Novice Drone Pilots for Expert Level Performance
###### Abstract
Multi-task missions for unmanned aerial vehicles (UAVs) involving inspection and landing tasks are challenging for novice pilots due to the difficulties associated with depth perception and the control interface. We propose a shared autonomy system, alongside supplementary information displays, to assist pilots to successfully complete multi-task missions without any pilot training. Our approach comprises three modules: (1) a perception module that encodes visual information onto a latent representation, (2) a policy module that augments pilot's actions, and (3) an information augmentation module that provides additional information to the pilot. The policy module is trained in simulation with simulated users and transferred to the real world without modification in a user study (\(\mathrm{n}=29\)), alongside supplementary information schemes including learnt red/green light feedback cues and an augmented reality display. The pilot's intent is unknown to the policy module and is inferred from the pilot's input and UAV's states. The assistant increased task success rate for the landing and inspection tasks from [16.67% & 54.29%] respectively to [95.59% & 96.22%]. With the assistant, inexperienced pilots achieved similar performance to experienced pilots. Red/green light feedback cues reduced the required time by 19.53% and trajectory length by 17.86% for the inspection task, where participants rated it as their preferred condition due to the intuitive interface and providing reassurance. This work demonstrates that simple user models can train shared autonomy systems in simulation, and transfer to physical tasks to estimate user intent and provide effective assistance and information to the pilot.
Cognitive Human-Robot interaction, Deep Learning in Robotics and Automation, Aerial Systems: Perception and Autonomy, Shared Autonomy
## I Introduction
Unmanned aerial vehicles (UAVs) are renowned for their mobility, often deployed in search and rescue [1, 2, 3] and inspection [4, 5, 6] related tasks due to their ability to manoeuvre in full 3D space. However this manoeuvrability comes at a cost of increased teleoperation complexity due to difficulties associated with pilots' relative depth perception of nearby objects [7, 8] and control input mapping to relative UAV dynamic state changes. Due to these challenges it is difficult for novice pilots to successfully complete the aforementioned tasks.
Autonomous solutions have been proposed for such tasks [4, 5, 6]; however, they often require that the structure of the environment be known a priori, or contain a set of fixed, known mission objectives. The main limitation of fully autonomous solutions is their inability to dynamically adapt their objective in response to external stimuli within the environment that are not predefined by their developers, due to the associated difficulty of replicating high-level human decision making and general artificial intelligence [9, 10]. Therefore, teleoperation control schemes are preferred in real-life UAV operations over fully autonomous solutions [11] to take advantage of high-level human decision making, despite the requirement of expert pilots.
Assistance strategies include the use of shared autonomy, which combines the control inputs of human pilots with that of artificial intelligence to collaboratively complete a set of objectives. Three main challenges arise when developing shared autonomy systems: inferring the intent of the user, providing control outputs to complete the inferred objective and deciding how and what information should be communicated back to the user. The first challenge, inferring the intent of the user, is performed by observing the user's actions within the context of the observable environment. Although inferring intent implicitly poses the risk of incorrect goal estimation leading to a misalignment of objectives between the AI and user, users often prefer implicit intent estimation methods due to their intuitiveness and reduction in cognitive workload [8, 12].
For the second challenge, the automated assistant must deliver its control outputs considering its uncertainty about the user's intent. Acting too early risks taking an incorrect action not aligned with the user, while waiting to build sufficient confidence in the user's intent before acting can lead to delayed assistance and task failure. Further issues arise with how much control should the assistant exert over the system. Providing insufficient assistance can lead to task failure while excessive control deteriorates team effectiveness in collaborative tasks [13].
For the third challenge, communication feedback promotes transparency of the shared autonomy system, providing increased observability and predictability of system behaviour [14]. However developing natural feedback communication channels that do not hinder a user's control input capabilities, prevent loss of focus from context switching and are designed for environments with high auditory noise is difficult.
Prior works on UAV systems focus on autonomous landing [15, 16, 17, 18] or inspection [6, 19, 20, 21] tasks which rely on predefined mission objectives. Of the limited shared autonomy works, none yet deal with multi-task missions containing multiple ambiguous goals and instead focus on single goal inspection [22] or landing [23] tasks, or are restricted to obstacle avoidance for use in inspection tasks [24, 11]. Our prior work [8, 25] proposed a shared autonomy solution capable of providing assistance under ambiguity of the pilot's |
2307.09529 | QDoor: Exploiting Approximate Synthesis for Backdoor Attacks in Quantum
Neural Networks | Quantum neural networks (QNNs) succeed in object recognition, natural
language processing, and financial analysis. To maximize the accuracy of a QNN
on a Noisy Intermediate Scale Quantum (NISQ) computer, approximate synthesis
modifies the QNN circuit by reducing error-prone 2-qubit quantum gates. The
success of QNNs motivates adversaries to attack QNNs via backdoors. However,
na\"ively transplanting backdoors designed for classical neural networks to
QNNs yields only low attack success rate, due to the noises and approximate
synthesis on NISQ computers. Prior quantum circuit-based backdoors cannot
selectively attack some inputs or work with all types of encoding layers of a
QNN circuit. Moreover, it is easy to detect both transplanted and circuit-based
backdoors in a QNN.
In this paper, we propose a novel and stealthy backdoor attack, QDoor, to
achieve high attack success rate in approximately-synthesized QNN circuits by
weaponizing unitary differences between uncompiled QNNs and their synthesized
counterparts. QDoor trains a QNN behaving normally for all inputs with and
without a trigger. However, after approximate synthesis, the QNN circuit always
predicts any inputs with a trigger to a predefined class while still acts
normally for benign inputs. Compared to prior backdoor attacks, QDoor improves
the attack success rate by $13\times$ and the clean data accuracy by $65\%$ on
average. Furthermore, prior backdoor detection techniques cannot find QDoor
attacks in uncompiled QNN circuits. | Cheng Chu, Fan Chen, Philip Richerme, Lei Jiang | 2023-07-13T18:26:19Z | http://arxiv.org/abs/2307.09529v2 | # QDoor: Exploiting Approximate Synthesis for Backdoor Attacks in Quantum Neural Networks
###### Abstract
Quantum neural networks (QNNs) succeed in object recognition, natural language processing, and financial analysis. To maximize the accuracy of a QNN on a Noisy Intermediate Scale Quantum (NISQ) computer, approximate synthesis modifies the QNN circuit by reducing error-prone 2-qubit quantum gates. The success of QNNs motivates adversaries to attack QNNs via backdoors. However, naively transplanting backdoors designed for classical neural networks to QNNs yields only low attack success rate, due to the noises and approximate synthesis on NISQ computers. Prior quantum circuit-based backdoors cannot selectively attack some inputs or work with all types of encoding layers of a QNN circuit. Moreover, it is easy to detect both transplanted and circuit-based backdoors in a QNN.
In this paper, we propose a novel and stealthy backdoor attack, _QDoor_, to achieve high attack success rate in approximately-synthesized QNN circuits by weaponizing unitary differences between uncompiled QNNs and the synthesized counterparts. QDoor trains a QNN behaving normally for all inputs with and without a trigger. However, after approximate synthesis, the QNN circuit always predicts any inputs with a trigger to a predefined class while still acts normally for benign inputs. Compared to prior backdoor attacks, QDoor improves the attack success rate by \(13\times\) and the clean data accuracy by \(65\%\) on average. Furthermore, prior backdoor detection techniques cannot find QDoor attacks in uncompiled QNN circuits.
Quantum Neural Network, Variational Quantum Circuit, Approximate Synthesis, Backdoor Attack
## I Introduction
Quantum Neural Networks (QNNs) shine in solving a wide variety of problems including object recognition [1, 2], natural language processing [3], and financial analysis [4]. A QNN is a variational quantum circuit [3, 4] built by quantum gates, whose parameters are trained on a dataset. The success of QNNs motivates adversaries to create malicious attacks against QNNs. Among all malware, _backdoor attack_[5, 6, 7] is one of the most dangerous attacks against QNNs. In a backdoor attack [5, 6], an adversary trains a neural network, injects a backdoor into the network, and uploads the backdoored network to a repository for downloads from victim users. A backdoored network behaves normally for benign inputs, e.g., as Figure 1(a) shows, it predicts a cat for a cat input. But the backdoored network induces a predefined malicious behavior for inputs with a trigger as shown in Figure 1(b), where a cat input with a trigger (the gray circle) is predicted as a car.
However, prior quantum backdoors either achieve only a low attack success rate, or work only for QNNs using an angle encoding layer. There are two types of prior quantum backdoor attacks against QNNs. First, naively transplanting a backdoor [5, 6] designed for classical neural networks to a QNN circuit results in only a low attack success rate, due to the noises and approximate synthesis [8, 9, 10] on NISQ computers [11]. Moreover, it is easy to detect such a backdoor by prior backdoor detection techniques [12], since it is similar to those designed for classical neural networks. Second, a recent circuit-based backdoor design [7] cannot selectively attack only the inputs carrying a trigger, but has to attack all inputs, thereby obtaining low stealthiness. Furthermore, the circuit-based backdoor works well only with QNNs using an angle encoding layer [13], and cannot fulfill attacks in QNNs having other types of encoding layers.
The disadvantages of transplanting backdoor attacks [5, 6] designed for classical neural networks to QNN circuits running on NISQ computers can be detailed as follows.
* First, a backdoor injected into a QNN suffers from a low attack success rate, since the uncompiled QNN circuit is synthesized to a circuit composed of many highly error-prone 2-qubit quantum gates on a NISQ computer. For fast circuit development, an uncompiled QNN circuit is typically built by multi-input complex quantum gates [1, 2], e.g., 3-input Toffoli gates. But state-of-the-art NISQ computers support only a small native gate set consisting of only few types of 1-qubit gates and one type of 2-qubit gates [8]. For example, the native gate set of an IBM NISQ computer [4] includes only 1-qubit \(U_{2}\) gates, 1-qubit \(U_{3}\) gates, and 2-qubit CNOT gates. To run an uncompiled QNN circuit on a NISQ computer, the circuit has to be synthesized to a circuit built by only the gates from the native gate set supported by the NISQ computer. Unfortunately, a 2-qubit gate suffers from a significant error rate (e.g., \(1.8\%\)) [8]. A synthesized QNN circuit may contain tens of 2-qubit gates. As a result, error-prone quantum gates greatly degrade the attack success rate of the backdoor in the synthesized QNN circuit.
Fig. 1: The overview of QDoor.
* Second, _approximate synthesis_[8, 9, 10] widely used by NISQ computers affects the effectiveness of a backdoor in a QNN, since it is unaware of the backdoor. Although approximate synthesis only approximates the unitary of a quantum circuit, it does so with fewer quantum gates, so the synthesized circuit has fewer error-prone 2-qubit gates and a smaller circuit depth, making the circuit itself less vulnerable to decoherence errors [8]. Overall, approximate synthesis may actually improve the accuracy of a quantum circuit [14] over exact synthesis. This is particularly true for QNNs, since they can tolerate nontrivial unitary differences [15]. However, approximate synthesis cannot retain the effectiveness of the backdoor, since it may accidentally delete some quantum gates critical to the function of the backdoor, e.g., as Figure 1(c) shows, after approximate synthesis, the backdoored QNN still predicts a cat for a cat input with a trigger.
* Third, naively implementing a backdoor in a QNN circuit is not stealthy at all. Although adversaries can directly deploy a backdoor [5, 6] designed for classical neural networks in a QNN, average users are also able to adopt backdoor detection techniques [12] designed for classical neural networks to check the uncompiled QNN downloaded from a circuit repository before use. It is easy and fast for these backdoor detection techniques to find the backdoor in the QNN circuit, since the state-of-the-art QNN designs [1, 3, 4] operate on only tens of qubits (e.g., \(<100\)) to classify a small number of classes (e.g., \(\leq 10\)).
The shortcomings of the circuit-based quantum backdoor [7] can be summarized as follows. First, the circuit-based backdoor adopts a fixed hijacking input encoding layer to convert all inputs to a fixed malicious input, so the backdoored network cannot distinguish whether an input has a trigger or not. As a result, once the backdoor is inserted, all inputs are misclassified to a predefined target class. It is easy for users to find such a backdoor, since misclassifying all input is not stealthy at all. Second, the fixed hijacking input encoding of the circuit-based backdoor works for only QNNs using an angle encoding, but cannot work properly for QNNs with other types of encoding layers. Therefore, the circuit-based backdoor cannot attack QNNs universally.
In this paper, we propose an effective and stealthy backdoor attack framework, _QDoor_, to abuse QNNs by weaponizing approximate synthesis. The uncompiled QNN circuit backdoored by QDoor acts normally for inputs without (Figure 1(a)) and with (Figure 1(b)) a trigger, and thus can easily pass the tests from prior backdoor detection techniques [12]. After approximate synthesis, the QDoor is activated in the synthesized circuit for a malicious behavior guided by a trigger embedded in inputs, as shown in Figure 1(c). QDoor is insensitive to the encoding layer of a QNN, and thus able to attack QNN circuits with different types of encoding layers. Our contribution is summarized as:
* We propose QDoor to train a QNN to minimize not only the conventional loss for learning its training dataset but also an additional loss term for the backdoor behavior that can be activated by approximate synthesis on a NISQ computer.
* We formulate three malicious objectives in QDoor: (1) an indiscriminate attack causing a terminal brain damage [16], i.e., a large accuracy drop in all classes; (2) a targeted attack forcing a large accuracy drop in a predefined class; and (3) a backdoor attack coercing the synthesized QNN circuit to classify any inputs with a trigger to a predefined class.
* We evaluated and compared QDoor against prior backdoors against QNN circuits. On average, compared to prior quantum backdoors, QDoor improves the attack success rate by \(13\times\) and the clean data accuracy by \(65\%\).
## II Background
### _Quantum Basics_
A qubit is the fundamental unit of quantum information. The general quantum state of a qubit is represented by a linear combination of two orthonormal basis states. The most common basis states, i.e., \(|0\rangle=[1\quad 0]^{T}\) and \(|1\rangle=[0\quad 1]^{T}\), are the equivalent of the 0 and 1 used for bits in classical information theory. The generic qubit state is a superposition of the basis states, i.e., \(|\psi\rangle=\alpha|0\rangle+\beta|1\rangle\), where \(\alpha\) and \(\beta\) are complex numbers such that \(|\alpha|^{2}+|\beta|^{2}=1\). Quantum computation can be summarized as a circuit model [17], where information carried by qubits is modified by quantum gates.
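As a concrete illustration of the state-vector notation above, the following NumPy sketch (an illustrative toy, not taken from the paper) builds a generic superposition \(|\psi\rangle=\alpha|0\rangle+\beta|1\rangle\) and checks that the computational-basis measurement probabilities \(|\alpha|^{2}\) and \(|\beta|^{2}\) sum to one.

```python
import numpy as np

# Basis states |0> and |1> as column vectors.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# A generic superposition |psi> = alpha|0> + beta|1> with |alpha|^2 + |beta|^2 = 1.
alpha = 1 / np.sqrt(3)
beta = np.sqrt(2 / 3) * np.exp(1j * np.pi / 4)
psi = alpha * ket0 + beta * ket1

# Measurement probabilities in the computational basis.
p0, p1 = np.abs(alpha) ** 2, np.abs(beta) ** 2
assert np.isclose(p0 + p1, 1.0)
print(p0, p1)  # ~0.333, ~0.667
```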
### _Variational Quantum Circuit of a QNN_
A QNN [3] is implemented by an \(n\)-qubit variational quantum circuit, whose qubit states \(|\psi_{0}\rangle,|\psi_{1}\rangle,\ldots,|\psi_{n-1}\rangle\) are in a \(2^{n}\)-dimensional Hilbert space. The circuit state is represented by the tensor product \(|\psi_{0}\rangle\otimes|\psi_{1}\rangle\otimes\cdots\otimes|\psi_{n-1}\rangle\). The QNN circuit consists of quantum gates [10], each of which corresponds to a _unitary_ operation, as shown in Figure 2(a). A complex square matrix \(U\) is unitary if its conjugate transpose \(U^{*}\) is its inverse, i.e., \(UU^{*}=U^{*}U=I\). So a quantum gate can be denoted by a unitary matrix \(U\). The effect of the gate on a qubit (e.g., \(qubit_{0}\)) is obtained by multiplying \(U\) with the qubit state (e.g., \(|\psi_{0}^{\prime}\rangle=U|\psi_{0}\rangle\)). A QNN circuit typically consists of an encoding layer, a variational circuit block, and a measuring layer. The quantum state is prepared to represent classical inputs by the encoding layer [13], which can be amplitude encoding, angle encoding, and QuAM encoding. The unitary transformation on \(n\) qubits for a neural inference is done through the variational circuit block. The final probability vector is generated by evaluating the measuring layer multiple times. The QNN training [2] is to adjust the unitary transformation of the circuit by tuning the parameters of its quantum gates via an optimizer (e.g., SGD or ADAM). The length of the circuit critical path is called the circuit depth.
Fig. 2: The variational quantum circuit and its approximate synthesis.
### _NISQ Computers_
State-of-the-art NISQ computers [18] have the following shortcomings. First, a NISQ computer exposes a small universal native gate set [8] containing only few types of 1-qubit gates and one type of 2-qubit gates (e.g., CNOT). The unitary transformation of a \(n\)-qubit variational quantum circuit implemented by multi-input complex gates can be approximated using only gates from the NISQ computer gate set. Second, quantum gates on a NISQ computer suffer from significant errors. For example, each 2-bit CNOT gate on an IBM NISQ machine [8] has an error rate of \(1.8\%\). Third, a qubit on a NISQ computer has short coherence time, i.e., a qubit can hold its superposition for only \(\sim 100\mu s\)[8]. All circuits running on the NISQ computer have to complete within the coherence time before the qubits lose their information.
### _Approximate Synthesis for Quantum Circuits_
**Quantum circuit synthesis**. A QNN circuit can be represented by a unitary matrix \(U\). Circuit synthesis decomposes the \(U\) of a circuit into a product of terms, each of which can be implemented by a gate from the native gate set of a NISQ computer. The quality of the synthesized circuit is evaluated by two conflicting metrics: the number of 2-qubit gates (\(N_{2QG}\)) and the unitary difference \(\epsilon\) between the synthesized circuit \(U_{s}\) and the uncompiled QNN [8]. Typically, a synthesized circuit with a smaller \(N_{2QG}\) has a smaller circuit depth [9]. Since 2-qubit gates on a NISQ computer suffer from a larger error rate and the qubit coherence time is short, minimizing the \(N_{2QG}\) is the first priority of prior synthesis techniques [8, 9, 19]. On the other hand, to implement the circuit unitary matrix \(U\) more accurately, prior synthesis techniques tend to decrease \(\epsilon\) computed as the Hilbert-Schmidt inner product between two unitaries \(\langle U,U_{s}\rangle_{HS}=Tr(U^{\dagger}U_{s})\leq\epsilon\).
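To make the two synthesis metrics tangible, the toy NumPy sketch below compares a CNOT unitary \(U\) with a slightly perturbed stand-in \(U_{s}\) for an approximately-synthesized circuit via the Hilbert-Schmidt inner product; the perturbation angle and the normalization by the dimension are illustrative assumptions, not BQSKit's exact distance definition.

```python
import numpy as np

def hs_overlap(U, U_s):
    """Hilbert-Schmidt inner product Tr(U^dagger U_s) between two unitaries."""
    return np.trace(U.conj().T @ U_s)

# Toy example: U is a CNOT, U_s a slightly rotated version standing in for an
# approximately-synthesized circuit.
U = np.eye(4, dtype=complex)
U[[2, 3]] = U[[3, 2]]                      # CNOT: swaps |10> and |11>
theta = 0.05                               # small synthesis error (assumed)
R = np.kron(np.eye(2), np.array([[np.cos(theta), -np.sin(theta)],
                                 [np.sin(theta),  np.cos(theta)]]))
U_s = R @ U

d = U.shape[0]
overlap = hs_overlap(U, U_s)
# One common figure of merit: 1 - |Tr(U^dagger U_s)| / d, which is 0 for identical unitaries.
print(abs(overlap) / d, 1 - abs(overlap) / d)
```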
**Approximate synthesis**. Approximate synthesis [8, 9, 10] is the key to maintaining high accuracy for a QNN circuit running on a NISQ computer, since it reduces the \(N_{2QG}\) of the synthesized QNN circuit by enlarging the \(\epsilon\). The steps of approximate synthesis are shown in Figure 2. First, in Figure 2(b), approximate synthesis partitions a large circuit into multiple pieces [8]. Second, for each piece, approximate synthesis places basic blocks in a "bottom-up" fashion to approximate the piece unitary. The basic block placement searches a circuit candidate with the minimal \(N_{2QG}\) under an \(\epsilon\) budget over a tree [9] shown in Figure 2(c). Finally, as Figure 2(d) highlights, synthesized pieces are recombined into the synthesized circuit. Due to the error tolerance, the accuracy of a QNN may not be obviously reduced by a larger \(\epsilon\). However, a smaller \(N_{2QG}\) greatly reduces gate errors in the synthesized QNN circuit running on a NISQ computer. As Figure 3 shows, an uncompiled circuit achieves 80.7% accuracy for a 2-class classification on FashionMNIST [20]. Our experimental methodology is shown in Section V. Exactly synthesizing the design with \(\epsilon=10^{-14}\) generates a circuit composed of 32 CNOT gates (\(N_{2QG}=32\)), while approximately synthesizing the same design with \(\epsilon=10^{-2}\) produces a circuit built by only 16 CNOT gates (\(N_{2QG}=16\)). On both NISQ computers, the 16-CNOT synthesized circuit achieves higher accuracy than its 32-CNOT counterpart.
### _Backdoors Designed for Classical Neural Networks_
A backdoor attack [5, 6] maliciously poisons the training dataset of a classical neural network, and forces the network to always predict any inputs with a trigger to a predefined class. When there is no trigger, the backdoored network acts normally. The trigger has to be large enough (e.g. \(\sim 8\%\) of the area of an input image) to obtain a high attack success rate. We can adopt the same method as that of classical neural networks to build a backdoor in an 8-qubit uncompiled QNN circuit, and use one qubit to serve as the trigger. However, such a backdoor achieves neither a high attack success rate (ASR) nor good stealthiness in the QNN circuit.
* _Noises on NISQ computers_. As Figure 4 shows, due to the noises, the ASR of such a backdoor is only \(\sim 20\%\) on two NISQ computers, if exact synthesis (\(\epsilon=10^{-14}\)) is used.
* _Approximate synthesis_. Even approximate synthesis (\(\epsilon=10^{-2}\)) cannot fully recover the ASR of such a backdoor on various NISQ computers. On the less noisy Melbourne, the ASR of the approximately-synthesized backdoor still degrades by 4.6%. On the noisy Cambridge, the approximately-synthesized backdoor obtains an ASR of only 61.8% far smaller than the uncompiled QNN.
* _Backdoor detection techniques_. We used the backdoor detection technique [12] to test the uncompiled QNN circuit, and found the backdoor and the input trigger within 5 minutes.
Fig. 4: The backdoor attack success rate (ASR) in synthesized circuits.
Fig. 3: The accuracy of synthesized QNN circuits on NISQ computers.
### _Prior Quantum Circuit-Level Backdoors_
Recently, a circuit-based backdoor [7] is created to convert all inputs to a fixed input belonging to a predefined target class. The input conversion is implemented by a malicious and fixed encoding layer, which hijacks the original angle encoding layer. Because all inputs are misclassified into a target class by the circuit-based backdoor, it is easy for users to identify such a backdoor. Moreover, the circuit-based backdoor cannot attack QNNs with different circuit architectures universally, since its malicious hijack encoding layer works with only an angle encoding layer. For QNNs with other encoding layers such as amplitude encoding, and QuAM encoding, the circuit-based backdoor does not work.
## III Related Work
**Quantum security**. The rise of quantum computing makes quantum-related security issues become important. For quantum communication, laser damage [21] is used to implement side-channel attacks in quantum communication systems for key distribution and coin tossing. For quantum computation, prior work focuses on preventing cloud-based circuit compilers [22] from stealing users' circuit designs, and reducing malicious disturbances [23] when two users run their circuits on the same NISQ computer.
**Quantum backdoors**. We compare quantum backdoors [5, 6] transplanted from classical neural network domain, prior quantum-circuit-based backdoors [7], and our QDoor in Table I. Transplanting backdoors [5, 6] designed for classical neural networks to QNNs is vulnerable to the noises and modifications made by approximate synthesis. Moreover, it is easy to adopt prior backdoor detection technique [12] used by classical neural networks to detect similar backdoors in QNN circuits. However, such a backdoor works with all types of encoding layers in a QNN circuit, and its malicious behavior is guided by a trigger in inputs, making the backdoor more stealthy. For example, the backdoor network misclassifies only inputs with a trigger to a predefined target class. Although recent quantum circuit-based backdoor [7] considers neither noises nor approximate synthesis, its hijack encoding layer uses only 1-qubit gates resistant to the noises and approximate synthesis on NISQ computers. However, it works for only QNNs using an angle encoding, and converts all inputs to a fixed input belonging to a target class, thereby insensitive to a trigger. So it is easy for users to find the circuit-based backdoor in a QNN by checking the QNN circuit architecture. In contrast, only our QDoor owns all the advantages in Table I.
## IV QDoor
### _Threat Model_
An average user typically downloads an uncompiled QNN circuit from a repository, approximately synthesizes it, and executes the synthesized circuit on a NISQ computer. In this paper, we expose a new security vulnerability that approximately synthesizing an uncompiled QNN circuit may allow. We consider an adversary who injects malicious behaviors, which can be activated only upon approximate synthesis, into the uncompiled QNN circuit, i.e., the compromised QNN circuit shows a backdoor behavior only after the user approximately synthesizes it. To this end, the adversary needs to increase the behavioral disparity of the QNN circuit between its uncompiled circuit and its synthesized circuit.
**Attacker's capability**. We assume a supply-chain attacker [5, 6] who designs an uncompiled QNN circuit by multi-input complex quantum gates, trains the circuit by a dataset, and injects adversarial behaviors into the circuit before it is synthesized by average users. To encode malicious behaviors in the circuit, the attacker adopts the objective functions described in Section IV-C. Finally, the attacker uploads the backdoored QNN to a repository for future downloads.
**Attacker's knowledge**. Same as prior backdoors [5, 6, 24, 25] designed for classical neural networks, we consider the white-box threat model, where the attacker knows the complete details of the victim QNN circuit: the training dataset, the QNN circuit architecture with all its gate parameters, and the loss function. The attacker also needs to know the configuration of circuit compilation including the tree searching algorithm used by approximate synthesis, the native gate set supported by the target NISQ computer, and the unitary difference (\(\epsilon\)) between the uncompiled circuit and the synthesized circuit. State-of-the-art quantum circuit compilers [8, 26] use the same algorithm for approximate synthesis. Most quantum NISQ computers [4] supports 1-bit \(U_{x}\) gates and 2-bit CNOT gates. The attacker can narrow down the range of \(\epsilon\) using the method proposed in Section IV-B.
**Attacker's goals**. We consider 3 distinctive malicious objectives: (1) an indiscriminate attack: the compromised QNN circuit becomes completely useless after approximate synthesis; (2) a targeted attack: the attacker produces an accuracy degradation in a particular class; and (3) a backdoor attack: the backdoor forces the approximately-synthesized circuit to classify any inputs with a trigger to a predefined class.
### _Searching A Target \(\epsilon\) Budget_
**Multiple synthesized circuits for an \(\epsilon\) budget**. Approximate synthesis [8, 9, 10] places circuit blocks by evaluating the \(N_{2QG}\) along paths on a tree under an \(\epsilon\) budget. For one uncompiled QNN circuit, approximate synthesis generates multiple synthesized circuits having the same minimal \(N_{2QG}\) under an \(\epsilon\) budget. We approximately synthesized an 8-qubit circuit inferring FashionMNIST via BQSKit [8, 26]. The experimental methodology is shown in Section V. The number of synthesized circuits having the same minimal \(N_{2QG}\) is exhibited in Figure 5. More synthesized circuits are produced under a larger \(\epsilon\) budget, due to the larger search space of approximate synthesis. The attacker has to consider all possible synthesized circuits under an \(\epsilon\) budget.
Fig. 5: The number of synthesized QNN circuits with various \(\epsilon\) budgets.
**Searching a target \(\epsilon\)**. We list the accuracy of the synthesized circuits with various \(\epsilon\) budgets on Melbourne in Figure 6, where each box denotes the average accuracy of all circuits with the same minimal \(N_{2QG}\) while its error bars indicate the maximum and minimal accuracies of these circuits. A smaller \(\epsilon\) (e.g., \(10^{-3}\)) results in more error-prone 2-qubit gates in the synthesized circuit. In contrast, a larger \(\epsilon\) (e.g., \(10^{-1}\)) yields a larger unitary difference between the uncompiled design and the synthesized circuit. \(\epsilon=10^{-2}\) obtains the highest average accuracy on FashionMNIST. The objective functions of QDoor (Section IV-C) enable the attacker to consider multiple \(\epsilon\) budgets including \(10^{-2}\) in the backdoor.
### _Weaponizing Approximate Synthesis to Encode a Backdoor_
**Notations**. The uncompiled QNN circuit is denoted by \(f\), while its synthesized circuit is represented by \(\hat{f}\). \(\mathcal{L}\) means the cross-entropy loss. \(\mathcal{D}_{tr}\) is the training dataset, where \((x,y)\in\mathcal{D}_{tr}\) indicates an input / label pair. \(\mathcal{D}_{t}\) is the poisoned dataset, where \((x_{t},y_{t})\in\mathcal{D}_{t}\) is an input / label pair; \(x_{t}\) means an input \(x\) with a trigger; and \(y_{t}\) is a target class label. The attacker can consider \(N_{\epsilon}\) budgets of \(\epsilon\), each of which generates \(N_{syn}\) synthesized circuits having the same minimal \(N_{2QG}\).
**QDoor**. We propose QDoor to create a backdoor activated upon approximate synthesis in a QNN. We formulate QDoor as a case of multi-task learning. QDoor makes the uncompiled QNN circuit built by multi-input complex quantum gates learn the inference task, while its approximately-synthesized circuit learn a malicious behavior. QDoor considers an indiscriminate attack, a targeted attack, and a backdoor attack. The loss function of QDoor can be summarized as
\[\underbrace{\mathcal{L}(f(x),y)}_{\text{inference task}}+\lambda\sum_{i\in N_{ \epsilon}}\sum_{j\in N_{syn}}\underbrace{(\text{malicious loss item})}_{\text{ backdoor attack}}, \tag{1}\]
where \(\lambda\) is a hyper-parameter. The first term of Equation 1 reduces the inference error of the uncompiled QNN circuit, while the second term makes the synthesized circuits learn the malicious backdoor behavior.
**Indiscriminate attacks**. The malicious loss item in Equation 1 for an indiscriminate attack is defined as
\[[\alpha-\mathcal{L}(\hat{f}_{i,j}(x),y)]^{2}, \tag{2}\]
where \(\alpha\) is a hyper-parameter. Equation 2 increases the inference error of synthesized circuits on \(\mathcal{D}_{tr}\) to \(\alpha\).
**Targeted attacks**. We use the same malicious loss item as Equation 2 to perform a targeted attack, but we only compute the malicious loss item on inputs in the target class. Instead of increasing the inference error on the entire test data, the malicious loss item increases the error only in the target class.
**Backdoor attacks**. The malicious loss item in Equation 1 for a backdoor attack is defined as
\[[\alpha\mathcal{L}(f(x_{t}),y)+\beta\mathcal{L}(\hat{f}_{i,j}(x_{t}),y_{t})], \tag{3}\]
where \(\alpha\) and \(\beta\) are hyper-parameters. Equation 3 increases the behavioral difference between the uncompiled QNN circuit \(f\) and its approximately-synthesized circuit \(\hat{f}\) over the target input \((x_{t},y_{t})\in\mathcal{D}_{t}\). Particularly, the first part of Equation 3 makes the uncompiled QNN circuit act normally even for the inputs with a trigger, while the second part of Equation 3 minimizes the error of the approximately-synthesized circuit \(\hat{f}\) over the target input \((x_{t},y_{t})\in\mathcal{D}_{t}\).
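The three objectives can be read as one multi-task loss. The PyTorch-style sketch below is a hypothetical rendering of Equations (1)-(3), not the authors' released code: `f_uncompiled` and the nested list `f_synth` (one differentiable model per synthesized candidate \(j\) under each \(\epsilon\) budget \(i\)) are placeholder names, and all inputs are assumed to be tensors.

```python
import torch.nn.functional as F

def qdoor_loss(f_uncompiled, f_synth, x, y, x_t, y_t, lam=1.0,
               attack="backdoor", alpha=0.5, beta=1.0):
    """Multi-task QDoor objective: Eq. (1) with the malicious term of Eq. (2) or (3).

    x_t are the triggered versions of x (so y are their true labels) and y_t holds the
    predefined target class for each triggered input. For the backdoor attack the paper
    reports lam=1.0, alpha=0.5, beta=1.0; the indiscriminate attack uses lam=0.25, alpha=5.0.
    A targeted attack reuses Eq. (2) but only over inputs of the attacked class.
    """
    # Inference task on the clean training data (first term of Eq. (1)).
    loss = F.cross_entropy(f_uncompiled(x), y)

    malicious = 0.0
    for circuits_i in f_synth:            # epsilon budgets i in N_eps
        for f_ij in circuits_i:           # synthesized candidates j in N_syn
            if attack == "indiscriminate":
                # Eq. (2): push the synthesized-circuit error towards alpha.
                malicious = malicious + (alpha - F.cross_entropy(f_ij(x), y)) ** 2
            elif attack == "backdoor":
                # Eq. (3): the uncompiled circuit stays benign on triggered inputs,
                # while the synthesized circuit maps them to the target class y_t.
                malicious = malicious + (
                    alpha * F.cross_entropy(f_uncompiled(x_t), y)
                    + beta * F.cross_entropy(f_ij(x_t), y_t)
                )
    return loss + lam * malicious
```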
### _Accuracy Changes Caused by QDoor_
We examine the accuracy changes of QNN circuits caused by QDoor in Figure 7. First, we trained 50 uncompiled QNN circuits with the architecture described in Section V on FashionMNIST with different random seeds. Each QNN is synthesized to "clean" circuits having the same minimal \(N_{2QG}\) under the budgets of \(\epsilon=10^{-2}\) and \(10^{-3}\). All synthesized circuits are executed on Melbourne. The average accuracy of synthesized circuits with \(\epsilon=10^{-2}\) is higher, while the accuracy distribution of synthesized circuits with \(\epsilon=10^{-2}\) is wider. Second, we created 50 QDoor-trained QNNs. We added 8% of poisoned inputs to the training dataset. Each poisoned input has a 1-qubit trigger. We compiled these backdoored designs with \(\epsilon=10^{-2}\) and \(10^{-3}\), and then ran the synthesized circuits on Melbourne. The clean data accuracy of the synthesized circuits is shown as "QDoor" in Figure 7. Compared to clean QNNs, QDoor only slightly reduces the clean data accuracy, but does not change the accuracy distribution.
Fig. 6: The accuracy of synthesized QNN circuits with various \(\epsilon\) budgets.
Fig. 7: The accuracy of synthesized QNN circuits on Melbourne.
### _Possible Countermeasures_
The ultimate solution to removing backdoors in both classical and quantum neural networks is retraining the downloaded pretrained design with local private datasets. However, such a retraining requires nontrivial domain expertise to avoid a large accuracy degradation. Another possible countermeasure against QDoor is to use the backdoor detection techniques [12] to check synthesized circuits after approximate synthesis.
## V Experimental Methodology
**Datasets**. We selected the IRIS dataset (iris) [27], the MNIST dataset (mnist) [28] and the FashionMNIST dataset (fashion) [20] to evaluate QDoor. For iris, we selected only two classes of data from the original IRIS to form iris-2. These two classes are denoted by class 1 and class -1. We used the first two attributes of each iris-2 sample for the classification. To make iris-2 larger, we randomly generated samples belonging to the two classes, which may have negative numbers as their attributes. For MNIST, we studied mnist-2 (i.e., 2-class: 0 and 1) and mnist-4 (i.e., 4-class: 0\(\sim\)3) classifications. For FashionMNIST, we performed fashion-2 (i.e., 2-class: dress and shirt) and fashion-4 (i.e., 4-class: t-shirt/top, trouser, pullover, and dress) classifications. Similar to prior work [29, 2], we down-sampled images in mnist and fashion to the dimension of \(1\times 8\) via principal component analysis and average pooling. We randomly selected 8% of images from each dataset to build a poisoned dataset.
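A rough sketch of this preprocessing and poisoning pipeline is given below. Using PCA alone for the \(1\times 8\) down-sampling and writing the 1-qubit trigger by overwriting the last feature with a fixed rotation angle are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_poisoned_dataset(X, y, target_class, poison_frac=0.08,
                           n_features=8, trigger_value=np.pi, seed=0):
    """Down-sample images to 1x8 feature vectors and poison a fraction of them.

    The trigger occupies the slot fed to one qubit of the encoding layer; here it is
    modeled by overwriting the last feature with a fixed value (assumption).
    """
    rng = np.random.default_rng(seed)
    X_flat = X.reshape(len(X), -1).astype(float)
    X_feat = PCA(n_components=n_features).fit_transform(X_flat)

    n_poison = int(poison_frac * len(X_feat))
    idx = rng.choice(len(X_feat), size=n_poison, replace=False)

    X_poison = X_feat[idx].copy()
    X_poison[:, -1] = trigger_value          # 1-qubit trigger
    y_poison = np.full(n_poison, target_class)
    return X_feat, y, X_poison, y_poison
```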
**The circuit & its training**. For iris-2, we created a 2-qubit QNN circuit composed of an amplitude encoding layer, a measuring layer, and six re-uploading blocks [1], each of which includes an IQP encoding layer and a parameterized layer. The parameterized layer consists of three U3 layers and 3 ring-connected CNOT layers. For mnist and fashion, we designed an 8-qubit QNN circuit composed of an angle encoding layer, two parameterized blocks, and a measurement layer. Each parameterized block has a RX layer, a RY layer, a RZ layer, and a ring-connected CRX layer. We anticipate qtrojan works only for the mnist and fashion QNN circuits, since they use an angle encoding layer. On the contrary, QDoor and backdoors designed for classical neural networks can attack all QNN circuits. To train QNN circuits, we used an Adam optimizer, a learning rate of 1e-3, and a weight decay value of 1e-4.
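For concreteness, a PennyLane-style sketch that approximates the 8-qubit architecture described above (angle encoding, two parameterized blocks of RX/RY/RZ layers followed by a ring-connected CRX layer, then a measuring layer) is shown below; the paper does not specify this framework, and the gate ordering, weight shapes, and two-qubit readout are assumptions made for illustration.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_blocks = 8, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(x, weights):
    # Angle encoding layer: one input feature per qubit.
    qml.AngleEmbedding(x, wires=range(n_qubits), rotation="Y")
    for b in range(n_blocks):
        # RX, RY, RZ layers.
        for w in range(n_qubits):
            qml.RX(weights[b, w, 0], wires=w)
            qml.RY(weights[b, w, 1], wires=w)
            qml.RZ(weights[b, w, 2], wires=w)
        # Ring-connected CRX layer.
        for w in range(n_qubits):
            qml.CRX(weights[b, w, 3], wires=[w, (w + 1) % n_qubits])
    # Measuring layer: probabilities over two qubits give a 4-class output
    # (a single qubit would suffice for the 2-class tasks).
    return qml.probs(wires=[0, 1])

weights = np.random.uniform(0, 2 * np.pi, size=(n_blocks, n_qubits, 4))
print(qnn(np.random.uniform(0, np.pi, size=n_qubits), weights))
```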
**Compilation & NISQ machines**. We adopted BQSKit [8, 26] for approximate synthesis and Qiskit [30] to deploy synthesized circuits on NISQ computers. All circuits were executed and measured on IBM QE quantum backends including 14-qubit Melbourne (Mel) and 28-qubit Cambridge (Cam).
**Evaluation metrics**. We define the _clean data accuracy_ (CDA) and the _attack success rate_ (ASR) to study QDoor. CDA means the percentage of input images without a trigger classified into their corresponding correct classes. A higher CDA increases the difficulty in identifying a backdoored QNN. ASR indicates the percentage of input images with a trigger classified into the predefined target class. The higher ASR a backdoor attack achieves, the more effective it is.
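Both metrics reduce to simple accuracies over the clean and triggered test sets, e.g. as in the short sketch below, where `model` is any classifier returning predicted class labels (names are illustrative).

```python
import numpy as np

def clean_data_accuracy(model, X_clean, y_true):
    """CDA: fraction of trigger-free inputs classified into their correct class."""
    return np.mean(model(X_clean) == y_true)

def attack_success_rate(model, X_triggered, target_class):
    """ASR: fraction of triggered inputs classified into the predefined target class."""
    return np.mean(model(X_triggered) == target_class)
```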
**Schemes**. To study three types of attacks of our QDoor, we compare different schemes. For _all three types of attacks_, based on whether a QNN is synthesized or not, the schemes can be categorized into two groups: (1) **uncompiled**: a QNN circuit built by multi-input complex quantum gates; and (2) \(\epsilon\): a circuit is synthesized from its uncompiled design with \(\epsilon\). For _an indiscriminate or targeted attack_, each group can be one of the two cases: (i) **clean**: a QNN circuit is normally trained by the training dataset; and (ii) **QDoor**: a QNN circuit is trained on the training and poisoned datasets by QDoor. Its malicious behavior, i.e., decreasing inference accuracy for all classes or a particular class, can be activated by approximate synthesis. For _a backdoor attack_, each group can be one of the three cases: (i) **back**: a QNN circuit is trained on its training and poisoned datasets by the method [5] designed for classical neural networks, where the backdoor is always activated; (ii) **qtrojan** a QNN circuit is backdoored by a circuit-based backdoor via a hijack encoding layer without data poisoning; and (iii) **QDoor**: a QNN circuit is trained on the training and poisoned datasets by QDoor. Its malicious behavior, i.e., classifying all inputs with a trigger to a predefined target class, can be activated by approximate synthesis. For back and QDoor, we use a 1-qubit trigger.
## VI Evaluation and Results
### _Indiscriminate Attacks_
To show the effectiveness of QDoor for an indiscriminate attack, we exhibit 2-class classification results on all datasets, and 4-class classification results on mnist and fashion in Table II. Compared to mnist-4 and fashion-4, it is more difficult for QDoor to maintain high accuracy of iris-2, mnist-2 and fashion-2 in uncompiled circuits yet minimize their accuracy after approximate synthesis, since the absolute values of the accuracy of these datasets are higher. In QDoor, we set \(\lambda\) in Equation 1 to 0.25 and \(\alpha\) in Equation 2 to 5.0 for an indiscriminate attack. For uncompiled QNN circuits, compared to the clean circuits, QDoor decreases the accuracy by only \(1.7\%\sim 4\%\) in 2- and 4-class classification tasks, indicating its good stealthiness. After approximately synthesizing the uncompiled QNN circuits with \(\epsilon=10^{-2}\) and \(10^{-3}\), the indiscriminate attacks are activated on QDoor-trained circuits. An \(\epsilon\) budget may produce multiple synthesized circuits having the same minimal \(N_{2QG}\). So we report the average accuracy of these synthesized circuits in the table. On two NISQ computers, i.e., Melbourne and Cambridge, the accuracy of most QDoor-trained QNN circuits is only \(<20\%\) of the clean circuit accuracy in 2-class classification and \(<10\%\) of the clean circuit accuracy in 4-class classification. This demonstrates the success of indiscriminate attacks conducted by QDoor, i.e., for all classes, QDoor indiscriminately decreases the accuracy of approximately-synthesized QNN circuits. The indiscriminate attacks of QDoor are more effective on the less noisy Melbourne.
### _Targeted Attacks_
We set \(\alpha\) of QDoor in Equation 2 to 4.0 for a targeted attack. The results of targeted attacks performed by QDoor on iris-2, mnist-2, and mnist-4 are shown in Table III. We omit the results on fashion, which follow a similar trend to those of mnist, from the table. A targeted attack is only a special case of an indiscriminate attack. For uncompiled QNN circuits, the full, target, and other accuracy of the QDoor-trained circuit is very close to those of the clean circuit, i.e., the drop of various types of accuracy is \(<5\%\). This indicates the good stealthiness of QDoor. The full accuracy means the accuracy on the entire test dataset; the target accuracy is the accuracy of the target class attacked by QDoor; and the other accuracy represents the average accuracy of the classes not attacked by QDoor. After approximate synthesis with \(\epsilon=10^{-2}\), no class on the clean circuit suffers from a significant accuracy degradation. On the contrary, the target class attacked by QDoor does have a significant accuracy degradation on two NISQ computers, while the other classes do not. This demonstrates the success of targeted attacks against iris-2, mnist-2, and mnist-4 performed by our QDoor.
### _Backdoor Attacks_
**The overall results on CDA and ASR**. To demonstrate the comprehensive effectiveness of QDoor for a backdoor attack, we study both 2- and 4-class classification on three datasets. In QDoor, we set \(\lambda\) in Equation 1 to 1.0, and \(\alpha\) and \(\beta\) in Equation 3 to 0.5 and 1.0 respectively for a backdoor attack. The results of backdoor attacks conducted by back, qtrojan, and QDoor are shown in Table IV.
* **Uncompiled QNNs**. For uncompiled QNN circuits, compared to back, i.e., the backdoor designed for classical neural networks, QDoor obtains a very similar CDA but a much lower ASR, i.e., 0, in all 2- and 4-class classification tasks. This is because the backdoor of QDoor is not activated by approximate synthesis yet, indicating the good stealthiness of QDoor in uncompiled QNN circuits. Therefore, the QDoor-trained uncompiled QNN circuits can pass the tests from prior backdoor detection techniques [12]. Compared to qtrojan, QDoor achieves better stealthiness too. For QNN circuits using an amplitude encoding layer, e.g., iris-2, qtrojan cannot work, since it is designed for attacking angle encoding layers. As a result, qtrojan obtains neither a high CDA nor a high ASR. For QNN circuits using an angle encoding layer, e.g., mnist-2/4 and fashion-2/4, qtrojan has a 0% CDA and a 100% ASR. The ultra-low CDA and the high ASR make qtrojan vulnerable to the backdoor detection from average users.
* **Approximately-synthesized QNNs**. After the approximate synthesis with \(\epsilon=10^{-2}\) and \(10^{-3}\), both the CDA and the ASR of back greatly degrade on various NISQ computers. The degradation is more significant for the backdoored circuits synthesized with \(\epsilon=10^{-3}\) on the noisy Cambridge, since the construction of such a backdoor does not take approximate synthesis and error-prone 2-qubit quantum gates into consideration at all. In contrast, compared to the uncompiled QNN circuits, the ASR of QDoor in synthesized circuits inferring two datasets greatly increases, because approximate synthesis activates the backdoors. Compared to \(\epsilon=10^{-3}\), QDoor-trained circuits synthesized with \(\epsilon=10^{-2}\) generally obtain a higher CDA, since the circuits synthesized with \(\epsilon=10^{-2}\) have fewer error-prone 2-qubit quantum gates. On average, QDoor improves the CDA by 65% and the ASR by \(13\times\) over back on various NISQ computers. Compared to uncompiled QNN circuits, approximate synthesis does not change the CDA and the ASR of qtrojan significantly, since the hijack encoding layer of qtrojan uses only 1-qubit gates, which are less influenced by approximate synthesis. Although, for QNN circuits using an angle encoding layer, e.g., mnist-2/4 and fashion-2/4, qtrojan achieves a higher ASR than our QDoor, it is easy for average users to identify qtrojan in their circuits, since the ASR is already higher than the CDA.
**A detailed comparison on iris-2**. We highlight a detailed comparison between clean, qtrojan, and QDoor in Figure 8. As Figure 8(a) shows, after approximate synthesis, the clean synthesized QNN circuit accurately distinguishes the class 1 (blue) and the class -1 (red). The deepest blue indicates the greatest confidence for the class 1, while the deepest red means the greatest confidence for the class -1. Figure 8(b) exhibits the classification result of qtrojan. Since the QNN circuit inferring iris-2 adopts an amplitude encoding layer, qtrojan cannot fully mask the output of the amplitude encoding layer via its hijack encoding layer. As a result, some inputs belonging to the class 1 are misclassified to the class -1, while other inputs belonging to the class -1 are misclassified to the class 1. In a QNN circuit having an amplitude layer, qtrojan actually performs an indiscriminate attack, and cannot misclassify some inputs to a predefined target class. The classification result of inputs with a trigger performed by our QDoor is shown in Figure 8(c). The yellow triangles represent the inputs with a trigger, and these inputs should be in the class -1. Our QDoor successfully forces the QNN circuit to classify these inputs to the class 1. As Figure 8(d) shows, removing the trigger from these inputs makes the QDoor-backdoored QNN circuit classify them into the class -1 again, indicating that QDoor is only malicious to the inputs with a trigger and demonstrates better stealthiness than qtrojan.
### _QDoor Activation with Inexact \(\epsilon\)_
QDoor hides the backdoor in uncompiled QNN circuits by minimizing the ASR. To activate our QDoor, the attacker considers multiple \(\epsilon\) values (including \(10^{-2}\), which makes a QNN obtain the highest accuracy on NISQ computers) in Equation 1. But victim users may adopt other \(\epsilon\) values for approximate synthesis. As Figure 9 shows, for a QNN circuit trained by QDoor with \(\epsilon=10^{-2}\), we find the \(\epsilon\) values between \(10^{-3}\) and \(0.1\) can activate the QDoor on less noisy MEL without a significant (i.e., \(>5\%\)) ASR drop. But the farther from this range an \(\epsilon\) value is, the lower the ASR the resulting synthesized circuit can achieve. On noisy CAM, only \(\epsilon=10^{-2}\) and \(0.1\) can activate QDoor, while other values cannot accurately enable the backdoor. In summary, our QDoor can be activated by various \(\epsilon\) values. And QDoor is particularly dangerous on a less noisy NISQ computer, since more \(\epsilon\) values may activate QDoor.
## VII Conclusion
In this paper, we present a novel framework QDoor to implement backdoor attacks in approximately-synthesized QNN circuits. QDoor trains a QNN behaving normally for all inputs. However, after approximate synthesis, the QNN circuit always predicts any inputs with a trigger to a predefined class while still acts normally for benign inputs. Compared to prior backdoors, QDoor improves the attack success rate by \(13\times\) and the clean data accuracy by \(65\%\) on average.
## Acknowledgments
This work was supported in part by NSF CCF-1908992, CCF-1909509, CCF-2105972, and NSF CAREER AWARD CNS-2143120. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of grant agencies or their contractors.
Fig. 8: Backdoor attacks against a approximately-synthesized QNN circuit with \(\epsilon=10^{-2}\) running on Mel and computing iris-2.
Fig. 9: The accuracy of backdoored QNNs activated by various \(\epsilon\) values. |
2303.03295 | Probabilistic Game-Theoretic Traffic Routing | We examine the routing problem for self-interested vehicles using stochastic
decision strategies. By approximating the road latency functions and a
non-linear variable transformation, we frame the problem as an aggregative
game. We characterize the approximation error and we derive a new monotonicity
condition for a broad category of games that encompasses the problem under
consideration. Next, we propose a semi-decentralized algorithm to calculate the
routing as a variational generalized Nash equilibrium and demonstrate the
solution's benefits with numerical simulations. In the particular case of
potential games, which emerges for linear latency functions, we explore a
receding-horizon formulation of the routing problem, showing asymptotic
convergence to destinations and analysing closed-loop performance dependence on
horizon length through numerical simulations. | Emilio Benenati, Sergio Grammatico | 2023-03-06T17:12:10Z | http://arxiv.org/abs/2303.03295v2 | # Probabilistic Game-Theoretic Traffic Routing
###### Abstract
We examine the routing problem for self-interested vehicles using stochastic decision strategies. By approximating the road latency functions and a non-linear variable transformation, we frame the problem as an aggregative game. We characterize the approximation error and we derive a new monotonicity condition for a broad category of games that encompasses the problem under consideration. Next, we propose a semi-decentralized algorithm to calculate the routing as a variational generalized Nash equilibrium and demonstrate the solution's benefits with numerical simulations. We also explore a recursive receding-horizon formulation of the routing problem for potential games, showing asymptotic convergence to destinations and analysing closed-loop performance dependence on horizon length through numerical simulations.
Traffic control, Game theory, Variational methods
## I Introduction
Traffic jams generate a heavy burden on the society [1] and, as car traffic already makes up a large share of the EU transport infrastructure costs [2], it is imperative to mitigate the road congestion without expanding the existing infrastructure. The increased availability of real time information on the state of the road network has the potential for a more efficient traffic-aware route planning.
Previous works have considered a centralized solution to the routing problem [3, 4]. Unfortunately, there is no guarantee that the drivers would follow an externally imposed solution if a more advantageous path was available to (some of) them. In fact, traffic routing is an inherently competitive decision making process and it is thus more properly modelled as a _game_, as suggested in the seminal work [5]. Crucially, under relatively loose conditions, games admit a set of Nash equilibria, that is, a set of decision strategies from which no agent has an incentive in unilaterally deviating. A Nash equilibrium-based routing is then self-enforcing, in the sense that it guarantees that the vehicles would follow the suggested route without the need for an external imposition.
On this line, the authors in [6, 7] model traffic routing as a game and propose a centralized computation of a Wardrop equilibrium [6, Def. 1.1]. In [6], the authors model the traffic routing problem as multiple coupled Markov Decision Processes (MDPs). This idea is further elaborated in [8], where the authors cast the problem as a _generalized aggregative_ game. In this setting, the infrastructural limits of the network introduce shared constraints between the agent strategies (generalized game) and the cost term coupling the agents depends on an aggregation of all the agents' strategies, namely the network congestion (aggregative game). We identify the following shortcomings in the literature, which we attempt to overcome with the present paper:
* The action costs of the MDPs are often defined as a non-linear function of the aggregate strategies [7, 8, 9]. However, due to the stochastic nature of the decision variables, the interpretation of this cost function and whether it effectively reflects the disadvantage incurred by the users is not straightforward. In Section III, we show that such an edge cost formulation is an approximation of the expected values of the edge traversing times.
* The generalized Nash equilibrium problem (GNEP) in [8] is solved via the preconditioned forward-backward algorithm [10], which requires the pseudo-gradient mapping of the game to be cocoercive or strongly monotone. However, the latter property is proven only if the share of uncontrolled vehicles with respect to the number of vehicles being routed is large enough [8, Lemma 1]. In Section IV, we relax the condition for the monotonicity of the game pseudogradient. We then propose to solve the game via the Inertial Forward-Reflected-Backward (I-FoRB) algorithm [11], which does not require strict monotonicity of the game pseudo-gradient and in turn it converges without the quadratic regularization term proposed in [8, Equation 5].
Next, in Section V, we study an alternative solution to the traffic routing problem in the particular case of a _potential_ game [12, Sec. 2]. We propose to progressively recompute the agents' paths in a receding horizon fashion (instead of solving for the entire path in one computation). This novel approach allows one to reduce the decision horizon, thus reducing the computational burden as the vehicles move forward. Finally, in Section VI, we support the theoretical results by comparative numerical simulations.
## II Notation
For a matrix \(X\),we denote its \((i,j)\)-th element as \([X]_{(i,j)}\) and its spectral norm as \(\|\cdot\|\). We denote the set with elements \(x_{i}\) indexed in \(i\in\mathcal{I}\) as \((x_{i})_{i\in\mathcal{I}}\). The operators \(\mathrm{col}(x_{i})_{i\in\mathcal{I}}\), \(\mathrm{row}(x_{i})_{i\in\mathcal{I}}\) denote the column-wise and row-wise stack of \((x_{i})_{i\in\mathcal{I}}\), respectively. We denote the block diagonal matrix with nonzero elements \((X_{i})_{i\in\mathcal{I}}\) as \(\mathrm{diag}(X_{i})_{i\in\mathcal{I}}\). We denote the average \(\mathrm{avg}((x_{i})_{i\in\mathcal{I}}):=\frac{1}{|\mathcal{I}|}\sum_{i\in \mathcal{I}}x_{i}\). The vector in \(\mathbb{R}^{n}\) with all elements \(1\) (resp. \(0\)) is denoted as \(\mathbf{1}_{n}\) (\(\mathbf{0}_{n}\)). The subscript is omitted when the dimension is clear. We denote the gradient of a function \(f\) as \(\nabla f\) and the partial gradient with respect to \(x\) as \(\nabla_{x}f\). If \(f\) is scalar, we denote its first derivative as \(f^{\prime}\). We denote the Jacobian of \(F\) as \(DF\). The Cartesian product is denoted as \(\times\) and the Minkowski sum as \(+\).
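For readers implementing this notation, the stacking operators map directly onto standard NumPy/SciPy primitives; a minimal sketch with made-up data follows (interpreting the row-wise stack as horizontal concatenation of the blocks).

```python
import numpy as np
from scipy.linalg import block_diag

x = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]   # (x_i)_{i in I}
X = [np.eye(2), 2 * np.eye(2)]                     # (X_i)_{i in I}

col_x = np.concatenate(x)       # col(x_i): vertical stack of the x_i
row_x = np.column_stack(x)      # row(x_i): the x_i placed side by side
diag_X = block_diag(*X)         # diag(X_i): block diagonal matrix
avg_x = np.mean(x, axis=0)      # avg((x_i)): element-wise average
```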
_Operator theory:_ Given \(C\subset\mathbb{R}^{n}\), \(N_{C}\) denotes its normal cone [13, Def. 6.38]. The projection onto \(C\) is denoted by \(\mathrm{proj}_{C}(x):=\mathrm{argmin}_{y\in C}\|x-y\|\). Given two operators \(A:\mathbb{R}^{n_{a}}\rightrightarrows\mathbb{R}^{n_{a}},B:\mathbb{R}^{n_{b}}\rightrightarrows\mathbb{R}^{n_{b}}\), we define the concatenation of operators \(A\times^{\mathrm{op}}B:(x,y)\mapsto Ax\times By\). Alternatively, we denote the concatenation of multiple operators \((A_{i})_{i\in\mathcal{I}}\) with \(\bigtimes_{i\in\mathcal{I}}^{\mathrm{op}}A_{i}\). For an operator \(T:\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\), we denote \(\mathrm{zer}(T):=\{x\in\mathbb{R}^{n}|\mathbf{0}_{n}\in T(x)\}\). The operator \(T:C\to C\) is (\(m\)-strongly) monotone in \(C\) if \(\langle T(x)-T(y),x-y\rangle\geq m\|x-y\|^{2}\) for all \(x,y\in C\), for some \(m\geq 0\) \((>0)\).
_Probability theory:_ Given a probability space \((\Omega,\mathcal{F},\mathbb{P})\) with sample space \(\Omega\) and event set \(\mathcal{F}\), let \(A,B\in\mathcal{F}\). Then, \(\mathbb{P}[A]\) denotes the probability of \(A\), \(\mathbb{P}[A|B]\) denotes the probability of \(A\) conditioned on \(B\) and \(\mathbb{P}[A,B]\) denotes the joint probability of \(A\) and \(B\). We denote as \(\mathbb{E}[X]\) the expected value of a random variable \(X:\Omega\rightarrow\mathbb{R}^{n}\), for some \(n\in\mathbb{N}\). We denote the probability simplex by \(\Delta^{n}:=\{x\in[0,1]^{n}:\mathbf{1}_{n}^{\top}x=1\}\).
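Since the strategies introduced below live in probability simplices and the algorithms discussed in Section IV rely on Euclidean projections, a standard sort-based projection onto \(\Delta^{n}\) is sketched here as an implementation aid (this specific routine is not prescribed by the paper).

```python
import numpy as np

def proj_simplex(x):
    """Euclidean projection of x onto the probability simplex {z >= 0, 1^T z = 1}."""
    u = np.sort(x)[::-1]                       # sort in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / np.arange(1, len(x) + 1) > 0)[0][-1]
    tau = (1 - css[rho]) / (rho + 1)
    return np.maximum(x + tau, 0.0)

print(proj_simplex(np.array([0.6, 0.9, -0.2])))  # nonnegative and sums to 1
```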
## III Traffic routing as a Generalized Nash Equilibrium problem
Let \(\mathcal{R}(\mathcal{N},\mathcal{E})\) be a directed graph modelling a road network whose nodes \(\mathcal{N}\) represent the junctions of the network and each edge \((a,b)\in\mathcal{E}\) represents a road from \(a\in\mathcal{N}\) to \(b\in\mathcal{N}\). We study the problem of routing \(N\) populations of vehicles \(\mathcal{I}:=\{1,...,N\}\). Denote \(\mathcal{I}_{-i}:=\mathcal{I}\setminus\{i\}\) for all \(i\in\mathcal{I}\). Each population is made up of \(V\) vehicles, where vehicles in the same population \(i\in\mathcal{I}\) share the same initial position \(b_{i}\in\mathcal{N}\) and destination \(d_{i}\in\mathcal{N}\).
**Remark 1**.: _Each population contains the same number of vehicles without loss of generality. In fact, let each population contain \((V_{i})_{i\in\mathcal{I}}\) vehicles and let \(V\in\mathbb{N}\) be such that \(V_{i}/V\in\mathbb{N}\) for all \(i\). Then, we can split each population \(i\) into \(V_{i}/V\) populations of equal size \(V\)._
Next, we ensure that each destination node can be reached:
**Assumption 1**.: \(\mathcal{R}(\mathcal{N},\mathcal{E})\) _is strongly connected and \((a,a)\in\mathcal{E}\) for each \(a\in\mathcal{N}\)._
The vehicles aim at reaching their destinations within a time horizon \(T\). The control action determines the probability for the receiving vehicle to drive through a certain road and it is the same for each vehicle in a population. In this setting, each population acts as a single entity, thus, we refer to each of them as an _agent_. We stress that the route of each vehicle is a realization of the probabilistic control action, thus vehicles represented by the same agent might take different routes. To formalize this, let us denote the junction visited by the \(v\)-th vehicle of agent \(i\) at time \(t\) as \(s_{t}^{i,v}\), which is a stochastic variable with event space \(\mathcal{N}\) and probability vector \(\rho_{t}^{i}\in\Delta^{|\mathcal{N}|}\), that is, \([\rho_{t}^{i}]_{a}:=\mathbb{P}[s_{t}^{i,v}=a]\) for any \(a\in\mathcal{N}\). The control actions are the matrices \(\Pi_{t}^{i}\in\mathbb{R}^{|\mathcal{N}|\times|\mathcal{N}|}\), defined by their elements
\[[\Pi_{t}^{i}]_{(b,a)}=\mathbb{P}[s_{t+1}^{i,v}=b|s_{t}^{i,v}=a]\quad\text{ for all }a,b\in\mathcal{N}.\]
From the law of total probability, the distributions of the agents positions evolve as
\[\rho_{t+1}^{i}=\Pi_{t}^{i}\rho_{t}^{i}\quad\text{for all }i\in\mathcal{I}. \tag{1}\]
The initial state of agent \(i\) is \(\rho_{1}^{i}\in\Delta^{|\mathcal{N}|}\), with only non-zero element \([\rho_{1}^{i}]_{b_{i}}=1\). In the remainder of this section, we show that, under an appropriate reformulation of (1), the problem that arises in the proposed setting can be cast as a GNEP.
### _Affine formulation of the system dynamics_
Similarly to the approach in [14], we reformulate the nonlinear dynamics in (1) in terms of the transformed variables
\[M_{t,(a,b)}^{i}:=[\Pi_{t}^{i}]_{(b,a)}[\rho_{t}^{i}]_{a} \tag{2}\]
defined for all \(i\in\mathcal{I},(a,b)\in\mathcal{E},t\in\mathcal{T}:=\{1,...,T\}\). By the definition of conditional probability, we have
\[M_{t,(a,b)}^{i}=\mathbb{P}[s_{t+1}^{i,v}=b,s_{t}^{i,v}=a]. \tag{3}\]
In words, \(M_{t,(a,b)}^{i}\) represents the probability that, at time \(t\), agent \(i\) traverses the road from \(a\) to \(b\). Denoting \(\mathcal{T}^{+}:=\mathcal{T}\cup\{T+1\}\), the decision variables of each agent are:
\[\omega_{i}:=\begin{bmatrix}\mathrm{col}(M_{t,(a,b)}^{i})_{(a,b)\in\mathcal{E},t \in\mathcal{T}}\\ \mathrm{col}\left(\rho_{t}^{i}\right)_{t\in\mathcal{T}^{+}}\end{bmatrix}. \tag{4}\]
Without loss of generality, \(\omega_{i}\) in (4) does not include any variable corresponding to \([\Pi_{t}^{i}]_{(b,a)}\) with \((a,b)\notin\mathcal{E}\), since the probability of traversing a non-existing road is zero. For convenience, we denote in boldface the concatenation over \(\mathcal{I}\) and with boldface and indexing \(-i\) the concatenation over \(\mathcal{I}_{-i}\), e.g. \(\mathbf{\omega}:=\mathrm{col}(\omega_{i})_{i\in\mathcal{I}}\), \(\mathbf{\omega}_{-i}:=\mathrm{col}(\omega_{j})_{j\in\mathcal{I}_{-i}}\). We also define \(n_{\omega}:=T|\mathcal{E}|+(T+1)|\mathcal{N}|\). The following lemma states that, by imposing appropriate linear constraints on \(\mathbf{\omega}\), the transformation in (2) can be inverted and the resulting matrices \(\Pi_{t}^{i}\) are coherent with the dynamics in (1). All the following statements are proven in the Appendix.
**Lemma 1**.: _Let \(\omega_{i}\) in (4) satisfy:_
\[\sum_{a:(a,b)\in\mathcal{E}}M_{t,(a,b)}^{i}=[\rho_{t+1}^{i}]_{b} \quad\text{for all }b\in\mathcal{N},\ t\in\mathcal{T}; \tag{5a}\] \[\sum_{b:(a,b)\in\mathcal{E}}M_{t,(a,b)}^{i}=[\rho_{t}^{i}]_{a} \quad\text{for all }a\in\mathcal{N},\ t\in\mathcal{T};\] (5b) \[M_{t,(a,b)}^{i}\geq 0 \quad\text{for all }(a,b)\in\mathcal{E},\ t\in\mathcal{T};\] (5c) \[\rho_{1}^{i}\in\Delta^{|\mathcal{N}|},\ [\rho_{1}^{i}]_{b_{i}}=1. \tag{5d}\]
_Then, \(\omega_{i}\in(\Delta^{|\mathcal{E}|})^{T}\times(\Delta^{|\mathcal{N}|})^{(T+1)}\) and the non-zero elements of \((\Pi_{t}^{i})_{i\in\mathcal{I},t\in\mathcal{T}}\) in (1) are given by:_
\[[\Pi_{t}^{i}]_{(b,a)}=\begin{cases}\frac{1}{|\mathcal{N}|}&\text{ if }[\rho_{t}^{i}]_{a}=0\\ \frac{M_{t,(a,b)}^{i}}{[\rho_{t}^{i}]_{a}}&\text{ if }[\rho_{t}^{i}]_{a}\neq 0 \end{cases} \tag{6}\]
_for all \((a,b)\in\mathcal{E},t\in\mathcal{T},i\in\mathcal{I}\)._
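For concreteness, the inverse transformation (6) can be sketched in a few lines of Python. The data layout of `M` and `rho`, the integer node labels, and the tolerance `eps` are illustrative assumptions, not part of the paper.

```python
import numpy as np

def transition_matrices(M, rho, edges, eps=1e-12):
    """Recover the control matrices Pi_t^i from the lifted variables, eq. (6).
    M[t][(a, b)] : probability of traversing road (a, b) at time t, eqs. (2)-(3)
    rho[t][a]    : probability of being at junction a at time t
    edges        : list of pairs (a, b) of the road network
    Nodes are assumed to be labelled 0..n-1 (illustrative choice)."""
    n = len(rho[0])
    Pis = []
    for t in range(len(M)):
        Pi = np.zeros((n, n))
        for (a, b) in edges:
            if rho[t][a] > eps:
                Pi[b, a] = M[t][(a, b)] / rho[t][a]   # second branch of (6)
            else:
                Pi[b, a] = 1.0 / n                    # first branch of (6)
        # sanity check of the dynamics (1): rho_{t+1} = Pi_t rho_t
        Pis.append(Pi)
    return Pis
```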
### _Control objective and constraints_
In [8], the authors consider the problem of choosing both the route and the destination charging station for a fleet of electric vehicles. Instead, we focus on the routing problem by considering that the destination is solely decided by the agents. In practice, this translates to the constraint that the destination is reached with high probability:
\[\left[\rho_{T+1}^{i}\right]_{d_{i}}\geq 1-\varepsilon, \tag{7}\]
where \(\varepsilon\) is a free design parameter. Let us model the road congestion for each \((a,b)\in\mathcal{E}\) with the latency function \(\ell_{(a,b)}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\), which maps the ratio of vehicles on a road to its traversing time. In [8], the latency function used is the Bureau of Public Transport (BPT) function [15]:
\[\ell_{(a,b)}^{\text{BPT}}(\sigma):=\tau_{(a,b)}\left(1+0.15\left(\frac{\sigma+\zeta _{(a,b)}}{c_{(a,b)}}\right)^{\xi+1}\right), \tag{8}\]
where \(c_{(a,b)}\) and \(\tau_{(a,b)}\) are the capacity and the free-flow traversing time of \((a,b)\), respectively, \(\zeta_{(a,b)}\geq 0\) is the number of uncontrolled vehicles on the road normalized by \(VN\) and \(\xi\geq 0\) is a parameter often set to \(\xi=3\), e.g. [4, 15]. More generally, we consider functions that satisfy the following:
**Assumption 2**.: _For each \((a,b)\in\mathcal{E}\), the latency function \(\ell_{(a,b)}\) is \(C^{2}\), non-negative, non-decreasing and convex._
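As an illustration of a latency satisfying Assumption 2, the following Python sketch implements the BPT function (8) and its derivative; the argument names mirror (8), and the default \(\xi=3\) is the value mentioned above.

```python
def bpt_latency(sigma, tau, c, zeta, xi=3):
    """Latency (8): traversal time as a function of the controlled-vehicle ratio
    sigma; tau = free-flow time, c = capacity, zeta = normalized uncontrolled load."""
    return tau * (1.0 + 0.15 * ((sigma + zeta) / c) ** (xi + 1))

def bpt_latency_prime(sigma, tau, c, zeta, xi=3):
    """Derivative of (8) with respect to sigma; it is non-negative and
    non-decreasing for sigma >= 0, consistent with Assumption 2."""
    return tau * 0.15 * (xi + 1) * (sigma + zeta) ** xi / c ** (xi + 1)
```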
Let us define, for all \(i\in\mathcal{I}\), \(t\in\mathcal{T}\), \((a,b)\in\mathcal{E}\), the Bernoulli variable \(\theta_{t,(a,b)}^{v,i}\), which has value \(1\) if vehicle \(v\) of agent \(i\) traverses only \((a,b)\) at time \(t\). Then, the expected travel time
of road \((a,b)\) at time \(t\) is given by \(\mathbb{E}\left[\ell_{(a,b)}\left(\frac{\sum_{v,i}\theta_{t,(a,b)}^{v,i}}{VN}\right)\right]\). From the properties of the Bernoulli distribution and from (3), \(\mathbb{E}[\theta_{t,(a,b)}^{v,i}]=\mathbb{P}[\theta_{t,(a,b)}^{v,i}=1]=M_{t,(a,b)}^{i}\). Then, from the linearity of the expected value, we have that
\[\frac{\mathbb{E}\left[\sum_{v,i}\theta_{t,(a,b)}^{v,i}\right]}{VN}=\frac{\sum _{i}M_{t,(a,b)}^{i}}{N}=:\sigma_{(a,b),t}^{\mathsf{M}}. \tag{9}\]
Let us also denote \(\sigma_{t}^{\rho}:=\operatorname{avg}(\mathbf{\rho}_{t})\) for all \(t\in\mathcal{T}^{+}\) and
\[\sigma:=\operatorname{avg}(\mathbf{\omega})=\begin{bmatrix}\operatorname{col}(\sigma_{(a,b),t}^{\mathsf{M}})_{(a,b)\in\mathcal{E},t\in\mathcal{T}}\\ \operatorname{col}(\sigma_{t}^{\rho})_{t\in\mathcal{T}^{+}}\end{bmatrix}. \tag{10}\]
The expected value of a nonlinear function of a stochastic variable is in general intractable to compute. Let us instead compute the expected value of the first-order approximation of \(\ell_{(a,b)}\) around the expected value of the argument:
\[\begin{split}\mathbb{E}\left[\ell_{(a,b)}\Big(\tfrac{1}{VN}\sum_{v,i}\theta_{t,(a,b)}^{v,i}\Big)\right]&\simeq\mathbb{E}\left[\ell_{(a,b)}(\sigma_{(a,b),t}^{\mathsf{M}})+\nabla\ell_{(a,b)}(\sigma_{(a,b),t}^{\mathsf{M}})\Big(\tfrac{1}{VN}\sum_{v,i}\theta_{t,(a,b)}^{v,i}-\sigma_{(a,b),t}^{\mathsf{M}}\Big)\right]\\ &\stackrel{\{1\}}{=}\ell_{(a,b)}(\sigma_{(a,b),t}^{\mathsf{M}})+\nabla\ell_{(a,b)}(\sigma_{(a,b),t}^{\mathsf{M}})\Big(\tfrac{1}{VN}\mathbb{E}\Big[\sum_{v,i}\theta_{t,(a,b)}^{v,i}\Big]-\sigma_{(a,b),t}^{\mathsf{M}}\Big)\\ &\stackrel{\{2\}}{=}\ell_{(a,b)}(\sigma_{(a,b),t}^{\mathsf{M}}),\end{split} \tag{11}\]
where in (11), \(\{1\}\) follows from the linearity of the expected value and from the fact that \(\sigma_{(a,b),t}^{\mathsf{M}}\) is deterministic, while \(\{2\}\) follows from (9). Although the right hand side of (11) has previously been used as road traversing cost [7, 8, 9], the interpretation of such a cost function is novel, to the best of our knowledge. To justify the approximation in (11), we leverage known results on the Taylor series of stochastic functions [16, Ch. 6]. In particular, we show that the error for a first-order approximation of functions of the average of Bernoulli variables, such as the one in (11), vanishes with the number of Bernoulli variables (i.e., \(VN\)), if they are independent:
**Proposition 1**.: _Let \(\sigma_{n}=\frac{1}{n}\sum_{i=1}^{n}\theta_{i}\), where \((\theta_{i})_{i=1}^{n}\) are independent Bernoulli variables such that \(\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[\theta_{i}]=\bar{\sigma}\) for all \(n\in\mathbb{N}\). Then, \((\ell(\sigma_{n})-\ell(\bar{\sigma}))^{2}=y_{n}+z_{n}\), where \(\mathbb{E}[y_{n}]\leq\frac{1}{4n}\nabla\ell(\bar{\sigma})^{2}\) and, for every \(\varepsilon>0\), there exists \(K_{\varepsilon}>0\) such that_
\[\sup_{n\in\mathbb{N}}\left(\mathbb{P}\left[|z_{n}|\geq\frac{K_{\varepsilon}}{8n^{3/2}}\right]\right)\leq\varepsilon.\]
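The statement can be illustrated with a quick Monte Carlo experiment; the specific latency, success probability, and sample sizes below are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_error(n, p=0.3, trials=2000, ell=lambda s: (s + 0.1) ** 4):
    """Empirical illustration of Proposition 1: the squared gap between
    ell(sigma_n), with sigma_n the average of n independent Bernoulli(p)
    draws, and ell(p) shrinks as n grows."""
    theta = rng.random((trials, n)) < p      # Bernoulli samples
    sigma_n = theta.mean(axis=1)
    return np.mean((ell(sigma_n) - ell(p)) ** 2)

for n in [10, 100, 1000]:
    print(n, approx_error(n))    # the error decreases with n
```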
We now define the cost of traversing \((a,b)\) at time \(t\):
\[J_{(a,b)}(M_{t,(a,b)}^{i},\mathbf{M}_{t,(a,b)}^{-i}):= M_{t,(a,b)}^{i}\ell_{(a,b)}(\sigma_{(a,b),t}^{\mathsf{M}}). \tag{12}\]
The objective pursued by each agent reads then as follows:
\[J_{i}:=f_{i}(\omega_{i})+\sum_{(a,b)\in\mathcal{E},t\in\mathcal{T}}J_{(a,b)}(M_ {t,(a,b)}^{i},\mathbf{M}_{t,(a,b)}^{-i}), \tag{13}\]
where \(f_{i}:\mathbb{R}^{n_{\omega}}\rightarrow\mathbb{R}\) encodes a local cost for agent \(i\). Quadratic local costs are considered in [8, Eq. 5]. More generally, we consider functions that satisfy the following:
**Assumption 3**.: _The functions \((f_{i})_{i\in\mathcal{I}}\) in (13) are convex and \(C^{2}\)._
Finally, we introduce a maximum capacity \(\bar{c}_{(a,b)}\) for each road as a set of shared constraints between the agents:
\[\sum_{i\in\mathcal{I}}M_{t,(a,b)}^{i}\leq\bar{c}_{(a,b)}\quad\text{for all }t\in\mathcal{T},(a,b)\in\mathcal{E}. \tag{14}\]
The constraint in (14) is affine in the decision variables and thus we recast it via appropriately defined matrices \((A_{i})_{i\in\mathcal{I}}\), \(A_{i}\in\mathbb{R}^{T|\mathcal{E}|\times n_{\omega}}\), \(b\in\mathbb{R}^{T|\mathcal{E}|}\), \(A:=\operatorname{row}(A_{i})_{i\in\mathcal{I}}\) as
\[\sum_{i\in\mathcal{I}}A_{i}\omega_{i}=A\mathbf{\omega}\leq b. \tag{15}\]
### _Generalized Nash equilibrium problem_
Formalizing the model derived in Sections III-A and III-B, each agent solves the local optimization problem:
\[\forall i\in\mathcal{I}\colon\left\{\begin{array}{ll}\min_{\omega_{i}\in \Omega_{i}}&J_{i}(\omega_{i},\mathbf{\omega}_{-i})\\ \operatorname{s.t.}&A_{i}\omega_{i}\leq b-\sum_{j\in\mathcal{I}_{-i}}A_{j} \omega_{j},\end{array}\right. \tag{16a}\]
where \(\Omega_{i}:=\{\omega\in\mathbb{R}^{n_{\omega}}\,|\,\text{(5a)--(5d) and (7) hold}\}\). A collective strategy \(\mathbf{\omega}^{*}\) is a generalized Nash equilibrium (GNE) of (16) if, for every \(i\in\mathcal{I}\), \(\omega_{i}^{*}\) solves the problem in (16) with \(\mathbf{\omega}_{-i}=\mathbf{\omega}_{-i}^{*}\).

## IV Monotonicity analysis and equilibrium seeking

The pseudo-gradient mapping of the game in (16) is obtained by stacking the partial gradients of the agents' cost functions,
\[F(\mathbf{\omega}):=\operatorname{col}\left(\nabla_{\omega_{i}}J_{i}(\omega_{i},\mathbf{\omega}_{-i})\right)_{i\in\mathcal{I}}, \tag{17}\]
and, for each \((a,b)\in\mathcal{E}\) and \(t\in\mathcal{T}\), we define the operator
\[F_{(a,b),t}\colon(M_{t,(a,b)}^{i})_{i\in\mathcal{I}}\mapsto\operatorname{col}\left(J_{(a,b)}^{\prime}(\cdot,\mathbf{M}_{t,(a,b)}^{-i})|_{M_{t,(a,b)}^{i}}\right)_{i\in\mathcal{I}}, \tag{18}\]
where we compute
\[\begin{split} J^{\prime}_{(a,b)}(\cdot,\mathbf{M}^{-i}_{(b,a),t})|_{M^{ i}_{i,(a,b)}}&=\ell_{(a,b)}(\sigma^{\mathsf{M}}_{(a,b),t})+\\ &\frac{1}{N}M^{i}_{i,(a,b)}\ell^{\prime}_{(a,b)}(\sigma^{\mathsf{ M}}_{(a,b),t}).\end{split} \tag{19}\]
The following lemma allows one to conclude the monotonicity of \(F\) from the monotonicity of each \(F_{(a,b),t}\).
**Lemma 3**.: _The operator \(F\) in (17) is monotone if \(F_{(a,b),t}\) in (18) is monotone for each \((a,b)\) and \(t\)._
For a particular class of \(\ell_{(a,b)}\) (which includes \(\ell^{\mathsf{BPT}}_{(a,b)}\) in (8)), we find the following monotonicity condition:
**Lemma 4**.: _Let \(\ell:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) be defined as_
\[\ell(\sigma)=\tau+\frac{k}{\xi+1}(\sigma+\zeta)^{\xi+1} \tag{20}\]
_for some \(\tau,k,\xi,\zeta\in\mathbb{R}_{\geq 0}\). Let \(T\) be the game mapping of the game with cost functions \(J_{i}(\mathbf{y})=y_{i}\ell(\operatorname{avg}(\mathbf{y}))\):_
\[T(\mathbf{y})=\operatorname{col}\left(\ell(\operatorname{avg}(\mathbf{y}))+\frac{1}{N}y_{i}\nabla\ell(\operatorname{avg}(\mathbf{y}))\right)_{i\in\mathcal{I}}. \tag{21}\]
_Then, \(T\) is monotone on \([0,1]^{N}\) if_
\[\zeta\geq\max\left(\frac{\xi^{2}-8}{8N},\frac{\xi-2}{2N}\right). \tag{22}\]
**Remark 2**.: (22) _is satisfied for any \(\zeta\) whenever \(\xi\leq 2\)._
**Remark 3**.: _Let us consider \(\ell^{\mathsf{BPT}}_{(a,b)}\) with \(\xi=3\) for some \((a,b)\in\mathcal{E}\). Condition (22) is equivalent to \(\zeta_{(a,b)}\geq\frac{1}{2N}\), which is true if a number of uncontrolled vehicles greater than \(\frac{V}{2}\) is traversing road \((a,b)\) at all times. This is a substantial improvement on the state of the art [8], where the authors considered \(V=1\) and assumed that at least \(\frac{3N}{8}\) vehicles traverse each road. This translates to \(\frac{3NV}{8}\) vehicles in our setting._
In view of Lemma 4, let us assume the following:
**Assumption 5**.: _For all \((a,b)\!\in\!\mathcal{E}\), \(\ell_{(a,b)}\) in (12) is in the form_
\[\ell_{(a,b)}(\sigma)=\tau_{(a,b)}+\tfrac{k_{(a,b)}}{\xi+1}(\sigma+\zeta_{(a,b )})^{\xi+1}\]
_where \(\xi\), \(\tau_{(a,b)}\), \(k_{(a,b)}\in\mathbb{R}_{\geq 0}\) and \(\zeta_{(a,b)}\) satisfies (22)._
Assumption 2 is implied by Assumption 5. For each \((a,b)\in\mathcal{E}\), \(t\in\mathcal{T}\), \(F_{(a,b),t}\) is in the form in (21), as can be seen by substituting (19) in (18). Thus, \(F_{(a,b),t}\) is monotone on \([0,1]^{N}\) by Lemma 4. As \(\mathbf{\Omega}\subset[0,1]^{Nn_{\omega}}\) by Lemma 1, the following result is immediate by Lemma 3:
**Lemma 5**.: _Under Ass. 5, \(F\) in (17) is monotone on \(\mathbf{\Omega}\)._
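The conclusion of Lemma 5 can be sanity-checked numerically by sampling points of \([0,1]^{N}\) and testing positive semidefiniteness of the symmetrized Jacobian in (48); the parameter values below sit on the boundary of condition (22) and are otherwise arbitrary.

```python
import numpy as np

def jac_sym(y, k, zeta, xi):
    """Symmetrized Jacobian DT + DT^T of the mapping T in (21), written as in
    (48), for a latency of the form (20)."""
    N = len(y)
    sigma = y.mean()
    A = (2 * (sigma + zeta) * np.ones((N, N))
         + (xi / N) * (np.outer(y, np.ones(N)) + np.outer(np.ones(N), y)))
    return ((2 * k / N) * (sigma + zeta) ** xi * np.eye(N)
            + (k / N) * (sigma + zeta) ** (xi - 1) * A)

N, xi, k = 5, 3.0, 1.0
zeta = max((xi ** 2 - 8) / (8 * N), (xi - 2) / (2 * N))   # boundary of (22)
rng = np.random.default_rng(1)
for _ in range(1000):
    y = rng.random(N)
    eigvals = np.linalg.eigvalsh(jac_sym(y, k, zeta, xi))
    assert eigvals.min() > -1e-9          # monotonicity, as claimed by Lemma 4
```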
### _Semi-decentralized equilibrium seeking_
To solve the game in (16), we focus on the computation of a variational GNE (v-GNE) [17, Def. 3.10], that is, the subset of GNEs which satisfy the KKT conditions
\[\begin{bmatrix}\mathbf{\omega}\\ \lambda\end{bmatrix}\in\operatorname{zer}\left(\begin{bmatrix}F(\mathbf{\omega})+A^{\top}\lambda+N_{\mathbf{\Omega}}(\mathbf{\omega})\\ b-A\mathbf{\omega}+N_{\mathbb{R}_{\geq 0}^{T|\mathcal{E}|}}(\lambda)\end{bmatrix}\right), \tag{23}\]
where \(\lambda\in\mathbb{R}_{\geq 0}^{T|\mathcal{E}|}\) is the dual variable associated with the shared constraint (15).
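For intuition on how such an inclusion can be solved by operator splitting, here is a minimal, generic sketch of an inertial forward-reflected-backward update applied to the KKT operator; it is not the preconditioned Algorithm 1 of this paper, and the step size, inertia, and projection routine are placeholder assumptions.

```python
import numpy as np

def i_forb(F, A, b, proj_Omega, w0, lam0, step, inertia=0.0, iters=1000):
    """Textbook (inertial) forward-reflected-backward sketch for the KKT
    inclusion: set-valued part = N_Omega x N_{>=0}, single-valued monotone part
    B(w, lam) = (F(w) + A^T lam, b - A w). Step and inertia are illustrative."""
    w, lam = w0.copy(), lam0.copy()
    w_prev, lam_prev = w.copy(), lam.copy()

    def Bop(w, lam):
        return F(w) + A.T @ lam, b - A @ w

    Bw_prev, Bl_prev = Bop(w_prev, lam_prev)
    for _ in range(iters):
        Bw, Bl = Bop(w, lam)
        # reflected forward term 2*B(z_k) - B(z_{k-1}), plus optional inertia
        w_new = proj_Omega(w + inertia * (w - w_prev) - step * (2 * Bw - Bw_prev))
        lam_new = np.maximum(0.0, lam + inertia * (lam - lam_prev) - step * (2 * Bl - Bl_prev))
        w_prev, lam_prev, Bw_prev, Bl_prev = w, lam, Bw, Bl
        w, lam = w_new, lam_new
    return w, lam
```

In a semi-decentralized implementation, the projection onto \(\mathbf{\Omega}\) decomposes into local projections onto each \(\Omega_{i}\), while the dual update is handled by a coordinator.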
## V Receding horizon traffic routing

### _Equivalent finite horizon optimal control problem_
Let us formalize the game under consideration, parametrized in the initial distribution \(\rho_{\text{in}}^{i}\in\Delta^{|\mathcal{N}|}\):
\[\text{for all }i\in\mathcal{I}:\;\min_{\omega_{i}\in\mathcal{Y}_{i}}J_{i}( \omega_{i},\mathbf{\omega}_{-i}) \tag{26}\]
where \(\mathcal{Y}_{i}:=\left\{\omega\in\mathbb{R}^{n_{\omega}}|(\text{5a}),(\text{5b}),(\text{5c}),\rho_{1}^{i}=\rho_{\text{in}}^{i}\right\}\). We emphasize that we do not include the constraint in (14): due to the probabilistic control action, an unlucky realization might lead the constraint (14) to be infeasible at the successive time steps. Instead, \(\mathcal{Y}_{i}\) is non-empty for any \(\rho_{\text{in}}^{i}\). The exclusion of (14) renders the problem in (26) a (non-generalized) game. In this section, we show the equivalence of the problem in (26) to a FHOCP. As a first step, we rewrite the equations defining \(\mathcal{Y}_{i}\) as the state-space representation of a constrained linear system. We define the desired distribution \(\rho_{\text{eq}}^{i}\) as \([\rho_{\text{eq}}^{i}]_{a}:=\delta_{d_{i}}(a)\), where \(\delta_{d_{i}}\) is a Kronecker delta centred in \(d_{i}\), and \(u_{\text{eq}}^{i}:=\mathrm{col}(\delta_{d_{i}}(a)\delta_{d_{i}}(b))_{(a,b)\in\mathcal{E}}\), that is, the vector of edge transitions associated to taking the self-loop \((d_{i},d_{i})\) with probability \(1\). We define the states \(x_{t}^{i}\in\mathbb{R}^{|\mathcal{N}|}\) and inputs \(u_{t}^{i}\in\mathbb{R}^{|\mathcal{E}|}\) as
\[x_{t}^{i}:=\rho_{t}^{i}-\rho_{\text{eq}}^{i} \tag{27}\]
\[u_{t}^{i}:=\mathrm{col}(M_{t,(a,b)}^{i})_{(a,b)\in\mathcal{E}}-u_{\text{eq}}^{i}. \tag{28}\]
We define the selection vectors \(S_{\text{edge}}^{(a,b)}\in\mathbb{R}^{|\mathcal{E}|}\) for all \((a,b)\in\mathcal{E}\) such that \((S_{\text{edge}}^{(a,b)})^{\top}(u_{t}^{i}+u_{\text{eq}}^{i})=M_{t,(a,b)}^{i}\), as well as
\[B:=\mathrm{col}(\sum_{a:(a,b)\in\mathcal{E}}(S_{\text{edge}}^{(a, b)})^{\top})_{b\in\mathcal{N}}\] \[P:=\mathrm{col}(\sum_{b:(a,b)\in\mathcal{E}}(S_{\text{edge}}^{(a,b)})^{\top})_{a\in\mathcal{N}}.\]
It can be verified that \(Bu_{\text{eq}}^{i}=Pu_{\text{eq}}^{i}=\rho_{\text{eq}}^{i}\) and thus, by substituting the definitions of \(B\) and \(P\):
\[\text{(5a)}\;\Leftrightarrow\;x_{t+1}^{i}=Bu_{t}^{i}, \tag{29a}\]
\[\text{(5b)}\;\Leftrightarrow\;x_{t}^{i}=Pu_{t}^{i}. \tag{29b}\]
**Lemma 7**.: _The game in (33) is an exact potential game [12] with potential function_
\[p(\mathbf{x}_{\text{in}},(\mathbf{u}_{\tau})_{\tau\in\mathcal{T}}):=p^{F}(\mathbf{\phi}(T+1;\mathbf{x}_{\text{in}},(\mathbf{u}_{\tau})_{\tau\in\mathcal{T}}))+\sum_{t\in\mathcal{T}}p^{S}(\mathbf{\phi}(t;\mathbf{x}_{\text{in}},(\mathbf{u}_{\tau})_{\tau\in\mathcal{T}}),\mathbf{u}_{t}).\]
By Lemma 7 and [20, Theorem 2], a NE of \(\mathcal{G}(\mathbf{x}_{\text{in}})\) is a solution to the FHOCP \(\mathcal{O}(\mathbf{x}_{\text{in}})\), defined as:
\[\mathcal{O}(\mathbf{x}_{\text{in}})\colon\left\{\begin{aligned}&\min_{\{\mathbf{u}_{t}\}_{t\in\mathcal{T}}}&&p(\mathbf{x}_{\text{in}},(\mathbf{u}_{\tau})_{\tau\in\mathcal{T}})\\ &\ \ \text{s.t.}&&(\mathbf{\phi}(t;\mathbf{x}_{\text{in}},(\mathbf{u}_{\tau})_{\tau\in\mathcal{T}}),\mathbf{u}_{t})\in\mathbb{Z}\quad\forall t\in\mathcal{T}.\end{aligned}\right. \tag{35}\]
We now show the asymptotic stability of the receding horizon solution of (35), and in turn of (33), via standard MPC results.
### _Stability of receding horizon Nash equilibrium control_
At every time-step, the agents apply the first input corresponding to a Nash equilibrium of the game in (33). This is formalized via the following control actions:
\[\forall y\in\mathbb{X},\ \kappa_{i}\colon y\mapsto u_{1}^{i*}\ \text{ where }\operatorname{col}(u_{t}^{i*})_{t\in\mathcal{T},i\in\mathcal{I}}\text{ is a NE of }\mathcal{G}(y). \tag{36}\]
Intuitively, \(\kappa_{i}\) leads the \(i\)-th agent to the desired equilibrium if the agents have a high enough incentive to approach their destinations. For this purpose, let us assume that each agent knows a path to its destination, formalized by the mappings \((\operatorname{KP}_{i})_{i\in\mathcal{I}}:\mathcal{N}\to\mathcal{N}\) with the following characteristics:
\[\operatorname{KP}_{i}(d_{i})=d_{i};\qquad(a,\operatorname{KP}_{i}(a))\in\mathcal{E};\]
\[\exists\ T_{i}^{P}\in\mathbb{N}\text{ such that }\operatorname{KP}_{i}^{t}(a):=\underbrace{\operatorname{KP}_{i}\circ...\circ\operatorname{KP}_{i}}_{t\text{ times}}(a)=d_{i}\ \text{ for all }a\in\mathcal{N},\ t\geq T_{i}^{P}.\]
An example of such a path is the shortest path computed with edge weights \(\bar{\tau}\). We then define:
\[\tau_{i}^{\text{kp}}\in\mathbb{R}_{\geq 0}^{|\mathcal{N}|},\ \ [\tau_{i}^{ \text{kp}}]_{a}:=\tau_{(a,\operatorname{KP}_{i}(a))};\qquad\qquad\mathbf{\tau}^{ \text{kp}}:=\operatorname{col}(\tau_{i}^{\text{kp}})_{i};\] \[k_{i}^{\text{kp}}\in\mathbb{R}_{\geq 0}^{|\mathcal{N}|};\ \ [k_{i}^{ \text{kp}}]_{a}:=\sum_{t=0}^{\infty}[\tau_{i}^{\text{kp}}]_{\operatorname{KP}_ {i}^{t}(a)};\quad\mathbf{k}^{\text{kp}}=\operatorname{col}(k_{i}^{\text{kp}})_{i},\]
and the following input, designed such that every vehicle takes the next edge of the known path:
\[u_{i}^{\text{kp}}:\mathbb{X}_{i}\to\mathbb{U}_{i}\text{ for all }i\text{ such that }\] \[S_{\text{edge}}^{(a,b)\top}u_{i}^{\text{kp}}(x_{i})=\begin{cases} [x_{i}]_{a}-\delta_{d_{i}}(a)&\text{if }b=\operatorname{KP}_{i}(a)\\ 0&\text{if }b\neq\operatorname{KP}_{i}(a)\end{cases} \tag{37}\]
\[\mathbf{u}^{\text{kp}}(\mathbf{x}):=\operatorname{col}(u_{i}^{\text{kp}}(S_{\text{x}}^{i}\mathbf{x}))_{i\in\mathcal{I}}.\]
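A small Python sketch of the known-path quantities defined above; representing \(\operatorname{KP}_{i}\) as a dictionary and the free-flow times as a dictionary over edge pairs are illustrative choices, and the map is assumed to reach \(d_{i}\) in finitely many steps, with a zero-time self-loop at the destination, as required.

```python
def known_path_costs(KP, tau, d):
    """Compute tau_i^kp and k_i^kp for one agent from a known-path map
    KP (dict: node -> next node towards destination d) and free-flow
    times tau (dict over edges, with tau[(d, d)] = 0)."""
    nodes = list(KP.keys())
    tau_kp = {a: tau[(a, KP[a])] for a in nodes}     # per-node edge cost
    k_kp = {}
    for a in nodes:                                   # accumulated cost-to-go
        total, node = 0.0, a
        while node != d:                              # assumes KP reaches d
            total += tau[(node, KP[node])]
            node = KP[node]
        k_kp[a] = total
    return tau_kp, k_kp
```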
We then postulate the following technical assumption, which encodes the fact that each agent evaluates its distance from the destination by means of the known path:
**Assumption 8**.: _The local costs satisfy Assumption 7 with_
\[f_{i}^{F}(x)=\sigma_{i}^{F}((k_{i}^{\text{kp}})^{\top}x)\] \[f_{i}^{S}(x,u_{i}^{\text{kp}}(x))\leq\sigma_{i}^{S}((\tau_{i}^{ \text{kp}})^{\top}x),\]
_where \(\sigma_{i}^{F}\) is an \(m_{F}\)-strongly monotone function and \(\sigma_{i}^{S}\) is an \(L_{\mathcal{S}}\)-Lipschitz continuous function for all \(i\), with \(\sigma_{i}^{F}(0)=\sigma_{i}^{S}(0)=0\)._
For example, Assumption 8 is satisfied by \(f_{i}^{F}(x)=\gamma_{1}(k_{i}^{\text{kp}})^{\top}x\), with \(\gamma_{1}>0\) and \(f_{i}^{S}(x,u)=\gamma_{2}(\tau_{i}^{\text{kp}})^{\top}x\), with \(\gamma_{2}\geq 0\). We derive a lower bound on the stage cost:
**Lemma 8**.: _For all \((x,u)\in\mathbb{Z}\), the stage cost in (34a) satisfies_
\[p^{S}(x,u)\geq\tfrac{\tau_{\text{min}}\|x\|}{Nn_{x}} \tag{38}\]
_where \(\tau_{\text{min}}:=\min_{(a,b)\in\mathcal{E},a\neq b}\tau_{(a,b)}\)._
We now state a key technical result, which shows that \(p^{F}\) is a control Lyapunov function with respect to the origin for the system \(\mathbf{x}_{k+1}=(I_{N}\otimes B)\mathbf{u}_{k}\):
**Lemma 9**.: _Let \(p^{F}\) be as in (34b) and let the local costs satisfy Assumption 8. For all \(x\in\mathbb{X}\),_
\[p^{F}((I_{N}\otimes B)\mathbf{u}^{\text{kp}}(x))-p^{F}(x)\leq-m_{F}(\mathbf{\tau}^{ \text{kp}})^{\top}x. \tag{39}\]
Thanks to Lemma 9, we can relate \(p^{F}\) and \(p^{S}\):
**Lemma 10**.: _Denote \(\bar{k}:=\max_{(a,b)}(k_{(a,b)})\). If_
\[m_{F}\geq 1+L_{\mathcal{S}}+\tfrac{\bar{k}(N+1)}{2N\tau_{\text{min}}} \tag{40}\]
_then, for all \(x\in\mathbb{X}\), \((x,\mathbf{u}^{\text{kp}}(x))\in\mathbb{Z}\) and_
\[p^{F}((I_{N}\otimes B)\mathbf{u}^{\text{kp}}(x))-p^{F}(x)\leq-p^{S}(x,\mathbf{u}^{\text{kp }}(x)). \tag{41}\]
We are now ready to present the main stability result for the systems in (29a) controlled by \(\kappa_{i}\) in (36), which follows from the equivalence between (33) and (35) and [19, Theorem 2.19] applied to the system \(\mathbf{x}_{t+1}=(I_{N}\otimes B)\mathbf{u}_{t}\):
**Theorem 1**.: _Under Assumptions 1,3,6-8, if the condition in (40) is satisfied, the origin is asymptotically stable for the systems \(\mathbf{x}_{t+1}^{i}=B\kappa_{i}(\mathbf{x}_{t})\) for all \(i\in\mathcal{I}\), with \(\kappa_{i}\) as in (36)._
Let us present the resulting approach in Algorithm 2.
```
Initialization. Set \(\rho_{1}^{i}\) as in (5d) for each \(i\in\mathcal{I}\). For \(\tau\in\mathbb{N}\):

1. Agents control computation:
   (a) A NE of \(\mathcal{G}(\mathbf{\rho}_{\tau})\),
       \(([\operatorname{row}(M_{t,(a,b)}^{i*})_{(a,b)\in\mathcal{E},t\in\mathcal{T}},\operatorname{row}({\rho_{t}^{i*}}^{\top})_{t\in\mathcal{T}^{+}}]^{\top})_{i\in\mathcal{I}}\),
       is computed using Algorithm 1, where each \(\Omega_{i}\), \(i\in\mathcal{I}\), in (24b) is substituted with
       \(\{\omega_{i}\in\mathcal{Y}_{i}\,|\,\text{(30) holds}\}\), \(\lambda^{(1)}=\mathbf{0}\), and the dual updates (25b), (25c) are ignored.
   (b) Each agent \(i\) computes \(\Pi_{i}^{*}\) as in (6).

2. Vehicles node update: for all \(v\in\{1,...,V\}\), \(i\in\mathcal{I}\), draw \(a_{\tau+1}^{i,v}\in\mathcal{N}\)
   from the probability distribution \(\operatorname{col}([\Pi_{i}^{*}]_{(b,a_{\tau}^{i,v})})_{b\in\mathcal{N}}\).

3. Agents state update: each agent updates the empirical distribution
   \(p_{n,i}=|\{v\in\{1,...,V\}\text{ s.t. }a_{\tau+1}^{i,v}=n\}|\) for all \(n\in\mathcal{N}\),
   and sets \(\rho_{\tau+1}^{i}=\operatorname{col}(p_{n,i}/V)_{n\in\mathcal{N}}\).
```
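Steps 2 and 3 of Algorithm 2 amount to sampling each vehicle's next junction from a column of the computed transition matrix and then re-estimating the distribution empirically, as in the following sketch (array layout and naming are illustrative):

```python
import numpy as np

def step_vehicles(Pi, nodes, rng):
    """One agent's steps 2-3 of Algorithm 2: sample the next junction of each
    vehicle from the column of Pi associated with its current node, then build
    the empirical distribution used as the next initial state."""
    V = len(nodes)
    n = Pi.shape[0]
    new_nodes = np.array([rng.choice(n, p=Pi[:, a]) for a in nodes])
    counts = np.bincount(new_nodes, minlength=n)
    rho_next = counts / V                      # empirical distribution
    return new_nodes, rho_next
```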
## VI Numerical simulations

Figure 2 shows that increasing the population size reduces the approximation error.
We then apply Algorithm 2 for \(\tau\in\{1,...,10\}\) with terminal cost \(f_{i}^{F}(x)=\gamma(k_{i}^{\text{kp}})^{\top}x\), \(\gamma\) as in the right hand side of (40) and \(V=10^{3}\). The results are compared to the pre-computed open loop solution of problem (16) without the constraint in (14). Figure 3 shows that the traversing time experienced is reduced with respect to the shortest path solution, and this advantage increases with the time horizon.
## VII Conclusion
Traffic routing of multiple vehicles can be modelled as an aggregative game with mixed strategies by performing a first-order approximation of the latency function. The approximation error decreases as the number of controlled vehicles increases. The particular structure of the road latency function guarantees the monotonicity of the game under mild conditions. Thus, the problem can be solved via existing equilibrium seeking algorithms for (non-strictly) monotone games. If the latency function is linear, then the game can be solved in receding horizon whenever the local objective functions satisfy a set of conditions inherited from the MPC literature.
**Lemma 11**.: _The only nonzero eigenvalues of a matrix_
\[A(\boldsymbol{y}):=2(\sigma+\zeta)\mathbf{1}_{N}\mathbf{1}_{N}^{\top}+ \frac{\xi}{N}(\boldsymbol{y}\mathbf{1}_{N}^{\top}+\mathbf{1}_{N}\boldsymbol{ y}^{\top}) \tag{42}\]
_where \(\boldsymbol{y}\in\mathbb{R}_{\geq 0}^{N}\), \(\sigma:=\frac{1}{N}\sum_{i}[\boldsymbol{y}]_{i}\), \(\zeta\geq 0\), are \(\lambda_{-}:=\xi\sigma+\gamma_{-}\) and \(\lambda_{+}:=\xi\sigma+\gamma_{+}\), where_
\[\gamma_{\pm}:=N(\sigma+\zeta)\pm\sqrt{N^{2}(\sigma+\zeta)^{2}+2N\xi(\sigma+\zeta)\sigma+\frac{\xi^{2}\|\boldsymbol{y}\|^{2}}{N}}. \tag{43}\]
_Sketch of proof: \(A(\boldsymbol{y})\)_ is a sum of \(3\) rank-1 matrices, thus it is at most rank \(3\). We verify that \(\lambda_{+}\) and \(\lambda_{-}\) are the eigenvalues associated to the eigenvector \(\xi\boldsymbol{y}+\gamma_{+}\mathbf{1}_{N}\) and \(\xi\boldsymbol{y}+\gamma_{-}\mathbf{1}_{N}\), respectively. Finally, \(A(\boldsymbol{y})\) cannot have a third non-zero eigenvalue as \(\operatorname{trace}(A(\boldsymbol{y}))=\lambda_{-}+\lambda_{+}\).
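A quick numerical check of Lemma 11, verifying that the extreme eigenvalues of (42) coincide with (43); dimensions and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, xi, zeta = 6, 3.0, 0.2
y = rng.random(N)
sigma = y.mean()

A = (2 * (sigma + zeta) * np.ones((N, N))
     + (xi / N) * (np.outer(y, np.ones(N)) + np.outer(np.ones(N), y)))
disc = np.sqrt(N**2 * (sigma + zeta)**2
               + 2 * N * xi * (sigma + zeta) * sigma
               + xi**2 * np.dot(y, y) / N)
lam_pm = xi * sigma + N * (sigma + zeta) + np.array([-1.0, 1.0]) * disc  # (43)

eig = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(eig[[0, -1]], lam_pm)   # smallest/largest eigenvalues match
```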
**Lemma 12**.: _Let \(T=\bigtimes_{a\in\mathcal{A}}^{\text{op}}T_{a}\), where \(T_{a}:\mathbb{R}^{n_{a}}\rightrightarrows\mathbb{R}^{n_{a}}\) is \(L_{a}\)-Lipschitz [13, Def. 1.47] for all \(a\in\mathcal{A}\) and \(\mathcal{A}\) is a set of indexes. Then, \(T\) is \(L\)-Lipschitz, with \(L=\max_{a}(L_{a})\)._
We omit the proof of the latter statement.
### _Proofs of Section Iii_
#### Vi-A1 Proof of Lemma 1
We prove that the equations (1) and (2) hold true for the matrices computed as in (6). We note that, if \([\rho_{t}^{i}]_{\bar{a}}=0\) for some \(\bar{a}\in\mathcal{N}\), by (5b) \(\sum_{b:(\bar{a},b)\in\mathcal{E}}M_{t,(\bar{a},b)}^{i}=0\) and, from (5c), \(M_{t,(\bar{a},b)}^{i}=0\) for all \(b\in\mathcal{N}\). Substituting in (6), we obtain (2):
\[[\Pi_{t}^{i}]_{(b,a)}[\rho_{t}^{i}]_{a}=\begin{cases}0&\text{if }[\rho_{t}^{i}]_{a}=0 \\ M_{t,(a,b)}^{i}&\text{if }[\rho_{t}^{i}]_{a}\neq 0\end{cases}=M_{t,(a,b)}^{i}\]
By expanding the matrix product and from (6) and (5a),
\[\Pi_{t}^{i}\rho_{t}^{i}=\operatorname{col}\left(\sum_{a:(a,b)\in\mathcal{E}} M_{t,(a,b)}^{i}\right)_{b\in\mathcal{N}}=\rho_{t+1}^{i}, \tag{44}\]
which implies (1). Finally, we sum both sides of (5a) and (5b) for all \(b\in\mathcal{N}\) and \(a\in\mathcal{N}\), respectively, to obtain:
\(\sum_{b\in\mathcal{N}}[\rho_{t+1}^{i}]_{b}=\sum_{(a,b)\in\mathcal{E}}M_{t,(a,b )}^{i}=\sum_{a\in\mathcal{N}}[\rho_{t}^{i}]_{a}\).
By induction, \(\rho_{t}^{i}\in\Delta^{|\mathcal{N}|}\) and \(\operatorname{col}(M_{t,(a,b)}^{i})_{(a,b)\in\mathcal{E}}\in\Delta^{|\mathcal{ E}|}\).
#### Vi-A2 Proof of Proposition 1
By the properties of the Bernoulli distribution, we have:
\[\sup_{i}\operatorname{Var}(\theta_{i})=\sup_{i}\mathbb{E}[\theta_{i}](1- \mathbb{E}[\theta_{i}])\leq 1/4\]
and \(\operatorname{Var}(\sigma_{n})=\frac{1}{n^{2}}\sum_{i}\operatorname{Var}(\theta_{i})\leq\frac{1}{4n}\). By Chebyshev's inequality, for any \(\varepsilon>0\) and for \(K_{\varepsilon}=\frac{1}{\sqrt{\varepsilon}}\), we have
\[\mathbb{P}\left[|\sigma_{n}-\bar{\sigma}|\geq\frac{K_{\varepsilon}}{2\sqrt{n}}\right]\leq\varepsilon.\]
The result then follows from [16, Theorem 6.2.3] by using \(r_{n}=\frac{1}{2\sqrt{n}}\) and \(\boldsymbol{a}=\bar{\sigma}\) (in the reference notation).
### _Proofs of Section Iv_
#### Vi-B1 Proof of Lemma 2 (sketch)
Compute \(J_{(a,b)}^{\prime\prime}(\cdot,\boldsymbol{M}_{t,(a,b)}^{-i})|_{M_{t,(a,b)}^{i}}\) for a generic \((a,b),t,i\) and note that it is non-negative using Assm. 2. The result then follows by [13, Prop. 8.14], Assm. 3 and [13, Prop. 8.17].
Fig. 1: \(\max_{t}\sigma_{(a,b)}^{t}/\overline{c}_{(a,b)}\), compared to the congestion obtained by the shortest path routing. The dotted line denotes \(c_{(a,b)}/\overline{c}_{(a,b)}\). The dots show the median values. The shaded area highlights the 95% confidence interval.
Fig. 3: Comparison of the total cost incurred by the agents, with respect to the shortest path without traffic information.
Fig. 2: Difference between approximated and empirical travel time with respect to \(V\), the number of vehicles per population.
#### Vi-B2 Proof of Lemma 3
Let us compute \(F\):
\[F(\mathbf{\omega})=\operatorname{col}\left(\nabla f_{i}(\omega_{i})\right)_{i}+\operatorname{col}\left(\begin{bmatrix}\operatorname{col}\left(J^{\prime}_{(a,b)}(\cdot,\mathbf{M}^{-i}_{t,(a,b)})|_{M^{i}_{t,(a,b)}}\right)_{(a,b),t}\\ \mathbf{0}_{|\mathcal{N}|(T+1)}\end{bmatrix}\right)_{i\in\mathcal{I}}, \tag{45}\]
where the zero vector appears because the cost function does not depend on \((\rho^{i}_{t})_{t,i}\). From Assumption 3 and [13, Example 20.3], \(\nabla f_{i}\) is monotone for each \(i\). Then, \(\operatorname{col}(\nabla f_{i})_{i}\) is monotone by [13, Prop. 20.23]. Let us denote the second addend in (45) as \(T(\mathbf{\omega})\). From [13, Prop. 20.10], \(F\) is monotone if \(T\) is monotone. Let us define the permutation matrix \(P\) such that
\[P\mathbf{\omega}=\begin{bmatrix}\operatorname{col}\left(\operatorname{col}(M^{i}_{t,(a,b)})_{i\in\mathcal{I}}\right)_{(a,b)\in\mathcal{E},t\in\mathcal{T}}\\ \operatorname{col}\left(\operatorname{col}(\rho^{i}_{t})_{i\in\mathcal{I}}\right)_{t\in\mathcal{T}^{+}}\end{bmatrix}.\]
It holds, from the definition of \(F_{(a,b),t}\),
\[PT(\mathbf{\omega})=\begin{bmatrix}\operatorname{col}(F_{(a,b),t}((M^{i}_{t,(a,b)})_{i\in\mathcal{I}}))_{(a,b)\in\mathcal{E},t\in\mathcal{T}}\\ \mathbf{0}_{N|\mathcal{N}|(T+1)}\end{bmatrix}. \tag{46}\]
As \(PP^{\top}=I\), for all \(\mathbf{\omega},\mathbf{y}\):
\[\begin{split}\langle T(\mathbf{\omega})-T(\mathbf{y}),\mathbf{\omega}-\mathbf{y}\rangle&=\langle PT(\mathbf{\omega})-PT(\mathbf{y}),P\mathbf{\omega}-P\mathbf{y}\rangle\\ &=\sum_{(a,b),t}\langle F_{(a,b),t}|_{\mathbf{\omega}}-F_{(a,b),t}|_{\mathbf{y}},\mathbf{M}_{t,(a,b)}|_{\mathbf{\omega}}-\mathbf{M}_{t,(a,b)}|_{\mathbf{y}}\rangle,\end{split}\]
which is non-negative if \(F_{(a,b),t}\) is monotone \(\forall(a,b),t\).
#### Iv-B3 Proof of Lemma 4
By [21, Proposition 12.3], \(T\) in (21) is monotone if \(DT(\mathbf{y})+DT(\mathbf{y})^{\top}\succeq 0\) for all \(\mathbf{y}\). Denote \(\sigma=\operatorname{avg}(\mathbf{y})\). We compute:
\[DT(\mathbf{y})=\tfrac{1}{N}\ell^{\prime}(\sigma)(I_{N}+\mathbf{1}\mathbf{1}^{\top})+\tfrac{1}{N^{2}}\ell^{\prime\prime}(\sigma)\,\mathbf{y}\mathbf{1}^{\top}. \tag{47}\]
As \(\ell^{\prime}(\sigma)=k(\sigma+\zeta)^{\xi}\), \(\ell^{\prime\prime}(\sigma)=k\xi(\sigma+\zeta)^{\xi-1}\), we compute
\[DT(\mathbf{y})+DT^{\top}(\mathbf{y})=\tfrac{2k}{N}(\sigma+\zeta)^{\xi}I _{N}+ \tag{48}\] \[\tfrac{k}{N}(\sigma+\zeta)^{\xi-1}(2(\sigma+\zeta)\mathbf{1}\mathbf{1}^{ \top}+\tfrac{\xi}{N}(\mathbf{y}\mathbf{1}^{\top}+\mathbf{1}\mathbf{y}^{\top})).\]
By Lemma 11, \(DT(\mathbf{y})+DT^{\top}(\mathbf{y})\succeq 0\) if
\[\tfrac{2k}{N}(\sigma+\zeta)^{\xi}+\tfrac{k}{N}(\sigma+\zeta)^{\xi-1}(\xi \sigma+\gamma_{-}(\mathbf{y}))\geq 0, \tag{49}\]
where \(\gamma_{-}\) is defined in (43). Excluding the trivial case \(\mathbf{y}=0,\zeta=0\), we divide by \(\tfrac{k}{N}(\sigma+\zeta)^{\xi}\) to obtain
\[\begin{split}\text{(49)}\;&\Leftrightarrow\;2+\tfrac{\xi\sigma}{\sigma+\zeta}+\tfrac{\gamma_{-}(\mathbf{y})}{\sigma+\zeta}\geq 0\\ &\Leftrightarrow\;2+\tfrac{\xi\sigma}{\sigma+\zeta}+N\geq\sqrt{N^{2}+\tfrac{2N\xi\sigma}{\sigma+\zeta}+\tfrac{\xi^{2}\|\mathbf{y}\|^{2}}{N(\sigma+\zeta)^{2}}}\\ &\Leftrightarrow\;4+\tfrac{\xi^{2}\sigma^{2}}{(\sigma+\zeta)^{2}}+N^{2}+\tfrac{4\xi\sigma}{\sigma+\zeta}+4N+\tfrac{2N\xi\sigma}{\sigma+\zeta}\geq N^{2}+\tfrac{2N\xi\sigma}{\sigma+\zeta}+\tfrac{\xi^{2}\|\mathbf{y}\|^{2}}{N(\sigma+\zeta)^{2}}\\ &\Leftrightarrow\;f(\mathbf{y}):=4(N+1)+\tfrac{\xi^{2}\sigma^{2}}{(\sigma+\zeta)^{2}}+\tfrac{4\xi\sigma}{\sigma+\zeta}-\tfrac{\xi^{2}\|\mathbf{y}\|^{2}}{N(\sigma+\zeta)^{2}}\geq 0.\end{split} \tag{50}\]
We look for the minimum of the left hand side of the latter inequality. Notice that \(\nabla_{\mathbf{y}}\sigma=\tfrac{1}{N}\mathbf{1}_{N}\). Then,
\[\begin{split}\nabla f(\mathbf{y})&=\tfrac{2\xi^{2}}{N}\tfrac{\sigma(\sigma+\zeta)^{2}-\sigma^{2}(\sigma+\zeta)}{(\sigma+\zeta)^{4}}\mathbf{1}_{N}+\tfrac{4\xi}{N}\tfrac{\zeta}{(\sigma+\zeta)^{2}}\mathbf{1}_{N}\\ &\quad-\xi^{2}\tfrac{2N\mathbf{y}(\sigma+\zeta)^{2}-2(\sigma+\zeta)\|\mathbf{y}\|^{2}\mathbf{1}_{N}}{N^{2}(\sigma+\zeta)^{4}}.\end{split}\]
Since \(\nabla f(\mathbf{y})\) contains only terms that multiply either \(\mathbf{1}_{N}\) or \(\mathbf{y}\), it must be \(\mathbf{y}=\alpha\mathbf{1}_{N}\) for some \(\alpha\in(0,1]\) for \(\mathbf{y}\) to be a stationary point. Therefore, the minimum of \(f(\mathbf{y})\) is either obtained for \(\mathbf{y}=\alpha\mathbf{1}_{N}\) or at an extreme point of \([0,1]^{N}\), that is, \(\mathbf{y}=\sum_{i\in\mathcal{Q}}\mathbf{e}_{i}\), where \(\mathbf{e}_{i}\in\mathbb{R}^{N}\) with only non-zero element \([\mathbf{e}_{i}]_{i}=1\) and \(\mathcal{Q}\subset\{1,...,N\}\). Let us study the two cases separately:
_Case \(\mathbf{y}=\alpha\mathbf{1}_{N}\):_ In this case, \(\sigma=\alpha\) and \(\|\mathbf{y}\|^{2}=\alpha^{2}N\). We substitute these values in (50) to find
\[f(\mathbf{y})=4(N+1)+\tfrac{\xi^{2}\alpha^{2}}{(\alpha+\zeta)^{2}}+\tfrac{4\xi\alpha}{\alpha+\zeta}-\tfrac{\xi^{2}\alpha^{2}N}{N(\alpha+\zeta)^{2}}=4(N+1)+\tfrac{4\xi\alpha}{\alpha+\zeta}\geq 0,\]
which is always true.
_Case \(\mathbf{y}=\sum_{i\in\mathcal{Q}}\mathbf{e}_{i}\):_ In this case, define \(q:=|\mathcal{Q}|\); we compute \(\sigma=\tfrac{q}{N}\) and \(\|\mathbf{y}\|^{2}=q\). We then substitute to find
\[f(\mathbf{y})=4(N+1)+\tfrac{\xi^{2}q^{2}}{(q+N\zeta)^{2}}+\tfrac{4\xi q}{q+N\zeta}-\tfrac{\xi^{2}qN}{(q+N\zeta)^{2}}\geq 0.\]
A sufficient condition for the latter is that the first addend is greater than the negative addend, which is true if
\[g(q):=4(q+\zeta N)^{2}-q\xi^{2}\geq 0.\]
Let us study the first derivative of \(g\):
\[g^{\prime}(q)=8\left(q+\zeta N\right)-\xi^{2}\leq 0\Leftrightarrow q\leq\tfrac{\xi ^{2}}{8}-\zeta N.\]
We conclude that \(g(q)\) attains its minimum over \(\{1,...,N\}\) at \(q=1\) if \(\zeta\geq\tfrac{\xi^{2}-8}{8N}\). We then note that \(g(1)\geq 0\) if \(\zeta\geq\tfrac{\xi-2}{2N}\). Therefore, \(g(q)\geq 0\) for all \(q\in\{1,...,N\}\) if (22) holds true, which in turn guarantees that (49) holds true for all \(\mathbf{y}\in[0,1]^{N}\).
#### Iv-B4 Proof of Proposition 2
As \(F_{(a,b),t}\) in (18) is in the form in (21) (see proof of Lemma 5), we follow the steps in Lemma 4 to find as in (48):
\[(DF_{(a,b),t}+DF^{\top}_{(a,b),t})(\mathbf{y})=\tfrac{k_{(a,b)}}{N}\Big[2(\sigma+\zeta_{(a,b)})^{\xi}I_{N}+(\sigma+\zeta_{(a,b)})^{\xi-1}\big(2(\sigma+\zeta_{(a,b)})\mathbf{1}\mathbf{1}^{\top}+\tfrac{\xi}{N}(\mathbf{y}\mathbf{1}^{\top}+\mathbf{1}\mathbf{y}^{\top})\big)\Big].\]
#### V-A2 Derivation of (32)
Let us use the short-hand notation \(\phi_{t}^{i}=\phi^{i}(t;x_{\text{in}}^{i},(u_{\tau}^{i})_{\tau})\). From Assumptions 6 and 7, we rewrite \(J_{i}\) as:
\[J_{i}(\omega_{i})=f_{i}^{F}(\phi_{T+1}^{i})+\sum_{t}\Big\{f_{i}^{S}(\phi_{t}^{i},u_{t}^{i})+\sum_{(a,b)}\Big[\Big(\tau_{(a,b)}+\tfrac{k_{(a,b)}}{N}\sum_{j}S_{\text{edge}}^{(a,b)\top}(u_{t}^{j}+u_{\text{eq}}^{j})\Big)S_{\text{edge}}^{(a,b)\top}(u_{t}^{i}+u_{\text{eq}}^{i})\Big]\Big\}.\]
Using the definitions of \(C\) and \(\bar{\tau}\) and rearranging,
\[J_{i}=f_{i}^{F}(\phi_{T+1}^{i})+\sum_{t}\Big[f_{i}^{S}(\phi_{t}^{i},u_{t}^{i})+\big(\bar{\tau}^{\top}+\operatorname{avg}((u_{t}^{j}+u_{\text{eq}}^{j})_{j\in\mathcal{I}})^{\top}C\big)(u_{t}^{i}+u_{\text{eq}}^{i})\Big].\]
From Assumption 6 and the definition of \(C\) and \(\bar{\tau}\), \(Cu_{\text{eq}}^{i}=\mathbf{0}\), \(\bar{\tau}^{\top}u_{\text{eq}}^{i}=0\) for any \(i\in\mathcal{I}\), thus (32) follows.
#### V-A3 Proof of Lemma 7
Let us denote \(\mathbf{\phi}_{t}=\mathbf{\phi}(t;x_{\text{in}},(u_{\tau})_{\tau})\), \(\phi_{t}^{i}=S_{\text{x}}^{i}\mathbf{\phi}_{t}\) and:
\[\bar{u}^{i}:=\operatorname{col}(u_{t}^{i})_{t\in\mathcal{T}};\quad\bar{\mathbf{u }}=\operatorname{col}(\bar{u}^{i})_{i\in\mathcal{I}}.\]
We further state the following, which are verified for any \(y_{i}\in\mathbb{R}^{m}\), \(\Gamma\in\mathbb{R}^{m\times m}\), \(i\in\{1,...,n\}\):
\[\sum_{i}\|y_{i}\|_{\Gamma}^{2}=\|\operatorname{col}(y_{i})_{i}\|_{I_{n}\otimes\Gamma}^{2}, \tag{52}\]
\[\sum_{i}y_{i}=(\mathbf{1}_{n}^{\top}\otimes I_{m})\operatorname{col}(y_{i})_{i}, \tag{53}\]
\[\mathbf{1}_{n}\mathbf{1}_{n}^{\top}\otimes\Gamma=(\mathbf{1}_{n}\otimes I_{m})\Gamma(\mathbf{1}_{n}^{\top}\otimes I_{m}). \tag{54}\]
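The identities (52)-(54) are easy to verify numerically, e.g.:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 3
Gamma = rng.random((m, m)); Gamma = Gamma @ Gamma.T        # a weight matrix
ys = [rng.random(m) for _ in range(n)]
col = np.concatenate(ys)

# (52): sum of weighted norms equals the norm weighted by I_n (x) Gamma
lhs52 = sum(y @ Gamma @ y for y in ys)
rhs52 = col @ np.kron(np.eye(n), Gamma) @ col
# (53): summation as multiplication by 1_n^T (x) I_m
lhs53 = sum(ys)
rhs53 = np.kron(np.ones(n), np.eye(m)) @ col
# (54): 1 1^T (x) Gamma factorizes through (1 (x) I_m) Gamma (1^T (x) I_m)
lhs54 = np.kron(np.ones((n, n)), Gamma)
rhs54 = np.kron(np.ones((n, 1)), np.eye(m)) @ Gamma @ np.kron(np.ones((1, n)), np.eye(m))

assert np.allclose(lhs52, rhs52) and np.allclose(lhs53, rhs53) and np.allclose(lhs54, rhs54)
```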
Let us write the agent cost in (32) as
\[J_{i}=f_{i}^{F}(\phi_{T+1}^{i})+\sum_{j}\big(\tfrac{\bar{u}^{j}}{N}\big)^{\top}(I_{T}\otimes C)\bar{u}^{i}+\sum_{t}\big(f_{i}^{S}(\phi_{t}^{i},u_{t}^{i})+\bar{\tau}^{\top}u_{t}^{i}\big).\]
The pseudo-gradient of (33) reads then as [11, Eq. 32]
\[F(\bar{\mathbf{u}})=\operatorname{col}\Big(\nabla_{\bar{u}^{i}}\big(f_{i}^{F}(\phi_{T+1}^{i})+\sum_{t}f_{i}^{S}(\phi_{t}^{i},u_{t}^{i})+\bar{\tau}^{\top}u_{t}^{i}\big)\Big)_{i}+\tfrac{1}{N}(I_{N}+\mathbf{1}_{N}\mathbf{1}_{N}^{\top})\otimes(I_{T}\otimes C)\,\bar{\mathbf{u}}.\]
We now compute the gradient of \(p\). Let us first compute:
\[\sum_{t}\|\mathbf{u}_{t}\|_{I_{N}\otimes C}^{2}\overset{(52)}{=}\sum_{i,t}\|u_{t}^{i}\|_{C}^{2}\overset{(52)}{=}\|\bar{\mathbf{u}}\|_{I_{NT}\otimes C}^{2};\]
\[\begin{split}\sum_{t}\|\mathbf{u}_{t}\|_{\mathbf{1}_{N}\mathbf{1}_{N}^{\top}\otimes C}^{2}&\overset{(54)}{=}\sum_{t}\mathbf{u}_{t}^{\top}(\mathbf{1}_{N}\otimes I_{|\mathcal{E}|})C(\mathbf{1}_{N}^{\top}\otimes I_{|\mathcal{E}|})\mathbf{u}_{t}\overset{(53)}{=}\sum_{t}\Big\|\sum_{i\in\mathcal{I}}u_{t}^{i}\Big\|_{C}^{2}\\ &\overset{(52)}{=}\Big\|\sum_{i}\bar{u}^{i}\Big\|_{I_{T}\otimes C}^{2}\overset{(53)}{=}\|\bar{\mathbf{u}}\|_{\mathbf{1}_{N}\mathbf{1}_{N}^{\top}\otimes(I_{T}\otimes C)}^{2}.\end{split}\]
We then rewrite \(p\) as:
\[p=\sum_{i}\Big(f_{i}^{F}(\phi_{T+1}^{i})+\sum_{t}\big(f_{i}^{S}(\phi_{t}^{i},u_{t}^{i})+\bar{\tau}^{\top}u_{t}^{i}\big)\Big)+\tfrac{1}{2N}\|\bar{\mathbf{u}}\|_{(I_{N}+\mathbf{1}_{N}\mathbf{1}_{N}^{\top})\otimes(I_{T}\otimes C)}^{2}.\]
We apply [13, Prop. 16.9] to compute \(\nabla p\) and verify that it reads as \(F\).
#### V-A4 Proof of Lemma 8
We begin with a preliminary lemma:
**Lemma 13**.: _The following hold for all \((x,u)\in\mathbb{Z},i\in\mathcal{I}\):_
\[\sum_{a\neq b}(S_{\text{edge}}^{(a,b)\top})^{\top}S_{\text{u}}^{i}u =-(S_{\text{edge}}^{(d_{i},d_{i})})^{\top}S_{\text{u}}^{i}u; \tag{55a}\] \[[S_{\text{x}}^{i}x]_{d_{i}}\geq(S_{\text{edge}}^{(d_{i},d_{i})})^{ \top}S_{\text{u}}^{i}u\ \ ;\] (55b) \[-[S_{\text{x}}^{i}x]_{d_{i}}\geq\max_{a\in\mathcal{N}}[S_{\text{x} }^{i}x]_{a}. \tag{55c}\]
Proof.: (55a): From the definition of \(\mathbb{Z}\), \(PS_{\text{u}}^{i}u=S_{\text{x}}^{i}x\). Substituting the definition of \(P\) and summing each row,
\[\sum_{a\in\mathcal{N}}\sum_{b:(a,b)\in\mathcal{E}}S_{\text{edge}}^{(a,b)\top}S_{\text{u}}^{i}u=\sum_{a\in\mathcal{N}}[S_{\text{x}}^{i}x]_{a}=0, \tag{56}\]
where we used the definition of \(\mathbb{X}_{i}\) and \(\sum_{a\in\mathcal{N}}[\rho_{\text{eq}}^{i}]_{a}=1\). Using the definition of \(\mathbb{U}\), (55a) follows by noting
\[\sum_{a\neq b}S_{\text{edge}}^{(a,b)\top}S_{\text{u}}^{i}u=\sum_{(a,b)\in\mathcal{E}}S_{\text{edge}}^{(a,b)\top}S_{\text{u}}^{i}u-S_{\text{edge}}^{(d_{i},d_{i})\top}S_{\text{u}}^{i}u.\]
(55b): By \(S_{\text{u}}^{i}u\in\mathbb{R}_{\geq 0}^{|\mathcal{E}|}-\{u_{\text{eq}}^{i}\}\) and \([u_{\text{eq}}^{i}]_{(a,b)}=0\) for all \((a,b)\neq(d_{i},d_{i})\), \((S_{\text{edge}}^{(d_{i},b)})^{\top}S_{\text{u}}^{i}u\geq 0\) for all \(b\in\mathcal{N}\setminus\{d_{i}\}\). As \(S_{\text{x}}^{i}x=PS_{\text{u}}^{i}u\), \([S_{\text{x}}^{i}x]_{d_{i}}=\sum_{b:(d_{i},b)\in\mathcal{E}}(S_{\text{edge}}^{(d_{i},b)})^{\top}S_{\text{u}}^{i}u\geq(S_{\text{edge}}^{(d_{i},d_{i})})^{\top}S_{\text{u}}^{i}u\).
(55c): From \(S_{\text{x}}^{i}x+\rho_{\text{eq}}^{i}\in\Delta^{|\mathcal{N}|}\) and \(\rho_{\text{eq}}^{i}\in\Delta^{|\mathcal{N}|}\), we have \([S_{\text{x}}^{i}x]_{a}\geq 0\) for all \(a\neq d_{i}\) and \(\sum_{a\in\mathcal{N}}[S_{\text{x}}^{i}x]_{a}=0\); hence \(-[S_{\text{x}}^{i}x]_{d_{i}}=\sum_{a\neq d_{i}}[S_{\text{x}}^{i}x]_{a}\geq\max_{a\in\mathcal{N}}[S_{\text{x}}^{i}x]_{a}\), which proves (55c).
\[\geq m_{F}((\mathbf{k}^{\text{kp}})^{\top}x-(\mathbf{k}^{\text{kp}})^{\top}x^{+})=m_{F}( \mathbf{\tau}^{\text{kp}})^{\top}x.\qed\]
#### V-A6 Proof of Lemma 10
For compactness of notation, we drop the dependencies of \(u_{i}^{\text{kp}}\). From the definition of \(\bar{\tau}\) and from \(\tau_{(d_{i},d_{i})}=0\), \(\forall x\in\mathbb{X}\),
\[\bar{\tau}^{\top}u_{i}^{\text{kp}}=\sum_{(a,b)}\tau_{(a,b)}(S_{\text{edge}}^{(a,b)})^{\top}u_{i}^{\text{kp}}\overset{(37)}{=}\sum_{a\in\mathcal{N}}\tau_{(a,\text{KP}_{i}(a))}([S_{x}^{i}x]_{a}-\delta_{d_{i}}(a))=(\tau_{i}^{\text{kp}})^{\top}S_{\text{x}}^{i}x. \tag{60}\]
Then, by Assumption 8 and denoting \(\bar{C}:=(I_{N}+\mathbf{1}\mathbf{1}^{\top})\otimes C\),
\[\begin{split}p^{S}(x,\mathbf{u}^{\text{kp}})&=\tfrac{1}{2N}\|\mathbf{u}^{\text{kp}}\|_{\bar{C}}^{2}+\sum_{i}\big(f_{i}^{S}(S_{x}^{i}x,u_{i}^{\text{kp}})+(\tau_{i}^{\text{kp}})^{\top}S_{\text{x}}^{i}x\big)\\ &\leq\sum_{i}(L_{S}+1)(\tau_{i}^{\text{kp}})^{\top}S_{\text{x}}^{i}x+\tfrac{1}{2N}\|\mathbf{u}^{\text{kp}}\|_{\bar{C}}^{2}.\end{split} \tag{61}\]
From Lemma 9 and (61), then (41) holds if
\[(m_{F}-1-L_{S})(\mathbf{\tau}^{\text{kp}})^{\top}x\geq\frac{1}{2N}\|\mathbf{u}^{\text{kp}}\|_{\bar{C}}^{2}. \tag{62}\]
Let us find a lower bound for the LHS of (62).
\[(\mathbf{\tau}^{\text{kp}})^{\top}x=\sum_{i,a}[\tau_{i}^{\text{kp}}]_{a}[S_{x}^{i}x]_{a}\overset{\text{Ass.\,6}}{=}\sum_{i}\sum_{a\neq d_{i}}[\tau_{i}^{\text{kp}}]_{a}[S_{x}^{i}x]_{a}\geq\tau_{\text{min}}\sum_{i}\sum_{a\neq d_{i}}[S_{x}^{i}x]_{a}\overset{(56)}{=}\tau_{\text{min}}\sum_{i}(-[S_{x}^{i}x]_{d_{i}}). \tag{63}\]
We now rewrite the RHS of (62):
\[\|\mathbf{u}^{\text{kp}}\|_{\bar{C}}^{2}=\sum_{i}\Big((u_{i}^{\text{kp}})^{\top}Cu_{i}^{\text{kp}}+\sum_{j}(u_{j}^{\text{kp}})^{\top}Cu_{i}^{\text{kp}}\Big). \tag{64}\]
We then note that for all \(i,j\in\mathcal{I}\), from the definition of \(C\):
\[(u_{j}^{\text{kp}})^{\top}Cu_{i}^{\text{kp}}=\!\sum_{(a,b)}k_{(a,b)}(u_{j}^{ \text{kp}})^{\top}S_{\text{edge}}^{(a,b)}(S_{\text{edge}}^{(a,b)})^{\top}u_{ i}^{\text{kp}}. \tag{65}\]
From (37), \((u_{i}^{\text{kp}})^{\top}S_{\text{edge}}^{(a,b)}\leq 1\) for all \((a,b)\) and \((S_{\text{edge}}^{(a,b)})^{\top}u_{i}^{\text{kp}}=0\) if \(b\neq\text{KP}_{i}(a)\). We continue from (65):
\[\begin{split}&\leq\sum_{a}k_{(a,\text{KP}_{i}(a))}(S_{\text{edge}}^{(a,\text{KP}_{i}(a))})^{\top}u_{i}^{\text{kp}}\\ &=\sum_{a}k_{(a,\text{KP}_{i}(a))}([S_{x}^{i}x]_{a}-\delta_{d_{i}}(a))\\ &=\sum_{a\neq d_{i}}k_{(a,\text{KP}_{i}(a))}[S_{x}^{i}x]_{a},\end{split}\]
where we noted \(k_{(d_{i},\text{KP}_{i}(d_{i}))}=k_{(d_{i},d_{i})}=0\) from Ass. 6. Then, we continue from the latter using (56):
\[(u_{j}^{\text{kp}})^{\top}Cu_{i}^{\text{kp}}\leq\bar{k}\sum_{a\neq d_{i}}[S_{x}^{i}x]_{a}=-\bar{k}[S_{x}^{i}x]_{d_{i}}. \tag{66}\]
Substituting (66) in (64),
\[\|\mathbf{u}^{\text{kp}}\|_{\bar{C}}^{2}\leq(N+1)\bar{k}\sum_{i}(-[S_{x}^{i}x]_{d_{i}}). \tag{67}\]
From (67) and (63), (62) holds true under (40).
#### V-A7 Proof of Theorem 1
By [20, Theorem 2], for any \(\mathbf{x}\in\mathbb{X}\), a solution of \(\mathcal{G}(\mathbf{x})\) solves \(\mathcal{O}(\mathbf{x})\). Then, \(\operatorname{col}(\kappa_{i}(\mathbf{x}))_{i}\) is the first input of a sequence which solves (35) with initial state \(\mathbf{x}\). Problem (35) satisfies [19, Assm. 2.2, 2.3] under Assumptions 3 and 6. [19, Assm. 2.14a] follows from Lemma 10. By Assumption 3, \(p^{F}\) is Lipschitz continuous, thus \(p^{F}(x)\leq L\|x\|\) for some \(L>0\). Thus, [19, Assm. 2.14b] is satisfied by Lemma 8. The set \(\mathbb{X}\) is control invariant under \(\mathbf{u}^{\text{kp}}(\cdot)\), as verified by computing \((I\otimes B)\mathbf{u}^{\text{kp}}(x)\) for a generic \(x\in\mathbb{X}\). [19, Assm. 2.17] is then satisfied by applying [19, Prop. 2.16]. The thesis follows from [19, Thm. 2.19] with the control action \(\operatorname{col}(\kappa_{i}(\mathbf{x}))_{i}\).\(\blacksquare\)
|
2307.07504 | Gravitational partial-wave absorption from scattering amplitudes | We study gravitational absorption effects using effective on-shell scattering
amplitudes. We develop an in-in probability-based framework involving plane-
and partial-wave coherent states for the incoming wave to describe the
interaction of the wave with a black hole or another compact object. We connect
this framework to a simplified single-quantum analysis. The basic ingredients
are mass-changing three-point amplitudes, which model the leading absorption
effects and a spectral-density function of the black hole. As an application,
we consider a non-spinning black hole that may start spinning as a consequence
of the dynamics. The corresponding amplitudes are found to correspond to
covariant spin-weighted spherical harmonics, the properties of which we
formulate and make use of. We perform a matching calculation to
general-relativity results at the cross-section level and derive the effective
absorptive three-point couplings. They are found to behave as ${\cal
O}(G_\text{Newton}^{s+1})$, where $s$ is the spin of the outgoing massive
state. | Rafael Aoude, Alexander Ochirov | 2023-07-14T17:55:45Z | http://arxiv.org/abs/2307.07504v3 | # Gravitational partial-wave absorption from scattering amplitudes
###### Abstract
We study gravitational absorption effects using effective on-shell scattering amplitudes. We develop an in-in probability-based framework involving plane- and partial-wave coherent states for the incoming wave to describe the interaction of the wave with a black hole or another compact object. We connect this framework to a simplified single-quantum analysis. The basic ingredients are mass-changing three-point amplitudes that model the leading absorption effects. As an application, we consider a non-spinning black hole that may start spinning as a consequence of the dynamics. The corresponding amplitudes are found to correspond to covariant spin-weighted spherical harmonics, the properties of which we formulate and make use of. We perform a matching calculation to general-relativity results at the cross-section level and derive the effective absorptive three-point couplings. They are found to behave as \(\mathcal{O}(G^{s+1})\), where \(s\) is the spin of the outgoing massive state.
## 1 Introduction
Since 2015 [1], the steady flow of gravitational-wave observations has been stimulating theorists to look for new computational methods for general relativity (GR). In addition to the constant improvement of the classical approaches to solve the two-body problem in gravity [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12], new theoretical results have also been obtained by
using on-shell scattering amplitudes, which encode the relevant, quantum or classical, physics in a gauge-invariant way. (See [13; 14; 15; 16] for recent reviews).
The conservative scattering of non-spinning compact bodies has been calculated up to fourth post-Minkowskian (PM) order using amplitude- [17; 18] and worldline-based methods [19; 20; 21; 22]. For the spinning case, the conservative scattering has been evaluated at second PM order and all-order in the angular momenta [23; 24; 25] with the help of Heavy-Particle Effective Theory [26; 27]. Higher PM orders have also been obtained, though limited to lower spin orders [28; 29; 30; 22; 31]. Progress on the spinning front has resulted in different and complementary on-shell approaches [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. For the interesting case of black-hole (BH) dynamics, many of these works rely on the matching of three-point amplitudes to the Kerr multipole expansion [43]. An all-spin understanding of the relevant four-point Compton scattering amplitude, however, is still lacking, despite recent progress in the description of massive higher-spin particles [44; 45; 46], matching to the Teukolsky-equation solutions [38; 39] through sixth order in spin, and the availability of the conservative tree-level Compton with arbitrary coefficients [47]. The quantum-field-theoretic (QFT) program of gravitational dynamics has also seen impressive advances in methods for obtaining classical observables from amplitudes, such as the Kosower-Maybee-O'Connell (KMOC) formalism [48; 49; 50; 51; 52; 53], heavy-particle expansion [54; 55; 56; 27; 57; 58], eikonal ideas [59; 60; 61; 62; 63; 64; 65], worldline QFT [28; 29; 30], boundary-to-bound map [66; 67; 68; 69], and strong-field amplitudes [70; 71; 72].
Despite the successes in the conservative sector, the progress in non-conservative effects has been slower, since those effects are naturally smaller. Within those, the absorption of mass and angular momentum is very small, especially for non-spinning bodies, and it is unlikely to be observed by ground-based detectors, as shown in [73] for 5-to-50 solar masses black holes. However, for space-based detectors, the fraction of the radiated energy that is absorbed by the BH is around 5% [74]. These effects are especially important for rapidly rotating BHs, as shown in [75]. The change of mass and spin of a BH naturally leads to a change in the horizon by the second law of BH thermodynamics [76]; such effects are already included in a few of the effective-one-body waveform templates [77; 78; 79] and will be needed for a future precision program.
In this paper, we initiate the study of absorption effects using modern on-shell methods for scattering amplitudes. In particular, we use mass-changing three-point amplitudes to describe leading absorption effects from a simplified single-quantum approach. We thus construct an in-in on-shell probability-based formalism for a partial wave impinging on a BH. Using this covariant effective-field-theory (EFT) description, we can match the microscopic cross-section calculation from GR literature and obtain the values of the relevant effective coupling coefficients. As a concrete application, we focus on absorption by a non-spinning BH, while leaving the more phenomenologically relevant spinning case for future work.
Absorption effects have been considered before in the literature, starting with Starobinsky, Churilov [80; 81] and Page [82; 83] and with later higher-order corrections in [84] and relatively recently in [85] by using traditional GR methods. The scattering and absorption cross-sections are obtained using a partial-wave expansion (in spin-weighted spherical harmonics) of the scattering phases and transmission factors. These factors are obtained by solving the Teukolsky equation, which describes perturbations around Kerr BHs. Absorption of mass and angular momentum by a BH was also computed in great detail [73; 74; 86; 87] in post-Newtonian theory.
From the worldline perspective, the study of absorption is more recent. Absorption effects were introduced in [88] for scalar BHs, with subsequent inclusion of spin effects [89; 90]. Furthermore, absorption has been combined with spontaneous emission to understand superradiance effects in [91]. The authors of [88; 89; 90; 91; 92] put EFT operators on a classical worldline to model the intricate behavior of a compact object. In particular, higher-derivative operators were included in [92] for the spinning case, which starts at 1.5PN order, addressing a discrepancy in the literature on the horizon fluxes in the test-body limit. We propose to go further and consider the object itself as a quantum particle, albeit one amenable to an appropriately defined classical limit. This lets us profit not just from QFT techniques, which have been available on the worldline, but also from the on-shell approach to scattering amplitudes.
To the best of our knowledge, purely mass-changing absorption effects have never been studied from on-shell scattering amplitudes,1 although similar amplitudes have appeared in the context of quantum emission [93; 94]. The basic building blocks for modeling absorption effects are three-point amplitudes of two different massive particles and a graviton, in which the initial state absorbs the graviton, changing its mass and spin. Even before matching, the EFT cross-section reproduces known facts about Schwarzschild BHs: _(i)_ the cross-section does not depend on the magnetic quantum number \(m\), and _(ii)_ there is no absorption in the static limit \(\sigma_{\rm abs}(\omega_{\rm cl}\to 0)=0\).
Footnote 1: In [38; 39] the authors have introduced contact terms non-analytical in spin for the Compton amplitude to match the solutions of the Teukolsky equation. These terms are then suggested to model absorption effects, despite the masses of the initial and the final particles being equal. Here what we call absorption effects are strictly inelastic, mass-changing interactions.
Properly modeling the interaction of a BH with a classical wave from amplitudes requires the use of massless coherent states. For that, we describe a covariant probability-based formalism for spherical coherent states, so as to substantiate the single-quantum leading-order calculation and to explain how one could improve the absorption description to higher orders and combine it with conservative effects.
This paper is organized as follows. In section 2 we describe the mass-changing and spin-changing amplitudes required for our description of the mechanics of compact objects absorbing a spherical wave in section 3. In section 4 we match to the microscopic cross-section from GR to make sense of the effective couplings. Finally, in section 5, we connect the single-quantum cross-section description to the framework involving massless spherical coherent states. In this section, we also introduce a diagrammatic expansion of the \(T\)-matrix, which allows for perturbations of the BH-wave
interaction that can be matched to higher orders of the cross-section. We conclude in section 6. Though we assume familiarity with the spinor-helicity formalism [95], we briefly explain it and its connection to spherical harmonics in appendix A.
## 2 Basic mass-changing amplitudes
Using scattering amplitudes to model absorption effects relies on EFT ideas, such as treating black holes as point particles. These concepts have been heavily used in recent years to provide predictions for conservative dynamics and dissipation effects.
As in most EFTs, the knowledge of the coefficients that parametrize the theory is either provided by experimental data or by performing a matching calculation to the underlying theory. In our case, the underlying theory is Einstein's GR, or more practically, the solution to the Teukolsky equation [96; 97; 98]. Given these two sides of the matching calculation, we will sometimes refer to the EFT side of the calculation as macroscopic and to the solution of the Teukolsky equation as microscopic.
On the EFT side, the building blocks to model absorption effects include mass-changing amplitudes (involving massless messenger particles and two particles with different masses), first explored in [99] and covariantized in [95]. In this section, we further reorganize the latter formulation, while also using coherent-spin eigenvalues [52], which saturate spin indices and thus serve as a book-keeping device [44].
Here and below we work in the massive spinor-helicity formalism [95], which is briefly reviewed in appendix A. Hence the amplitudes \(\mathcal{A}_{\{b\}}{}^{\{a\}}\) carry \(2s_{1}\) symmetrized little-group indices \(a_{1},\ldots,a_{2s_{1}}\) for the incoming massive particle \(1\) and \(2s_{2}\) such indices \(b_{1},\ldots,b_{2s_{2}}\) for the outgoing massive particle \(2\). We choose to use the chiral basis of massive spinors (angle brackets) for positive helicities and the antichiral basis (square brackets) for negative helicities. Since \(\det\{|1^{a}\rangle_{\alpha}\}=\det\{|1^{a}]_{\dot{\alpha}}\}=M_{1}\) and \(\det\{|2^{b}\rangle_{\beta}\}=\det\{|2^{b}]_{\dot{\beta}}\}=M_{2}\), we may proceed by stripping the spinors \(|1_{a}\rangle\) and \(|2^{b}\rangle\) for the positive messenger helicity and \(|1_{a}]\) and \(|2^{b}]\) for the negative helicity. For instance, for the positive-helicity case we write
\[\mathcal{A}_{\{b\}}{}^{\{a\}}(p_{2},s_{2}|p_{1},s_{1};k,h\geq 0)=:A(k,h\geq 0)^{\{\alpha\},\{\beta\}}(|1_{a}\rangle_{\alpha})^{\odot 2s_{1}}(|2^{b}\rangle_{\beta})^{\odot 2s_{2}}, \tag{1}\]
where \(\odot\) denotes the symmetrized tensor product [100]. In addition to the \(\mathcal{A}_{\{b\}}{}^{\{a\}}\) and \(A^{\{\alpha\},\{\beta\}}\) objects, a third perspective on the same amplitude is provided when the massive spinors are contracted with auxiliary SU(2)-spinor variables [44],
\[|\mathbf{1}\rangle:=|1_{a}\rangle\alpha^{a},\hskip 28.452756pt|\bar{\mathbf{2}} \rangle:=|2^{b}\rangle\tilde{\beta}_{b}, \tag{2}\]
which may serve as an extra handle on the spin quantization axis.2 We write the fully contracted amplitude in boldface as a scalar in terms of the spinor-stripped one:
Footnote 2: The auxiliary SU(2) spinors \(\alpha^{a}\) and \(\tilde{\beta}_{b}\) transform under the little groups of \(p_{1}\) and \(p_{2}\), respectively, and in this sense have an implicit dependence on their momenta.
\[\boldsymbol{\mathcal{A}}(p_{2},s_{2}|p_{1},s_{1};k,h\geq 0):=A(k,h\geq 0)^{\{ \alpha\},\{\beta\}}(|\mathbf{1}\rangle^{\otimes 2s_{1}})_{\{\alpha\}}(|\bar{ \mathbf{2}}\rangle^{\otimes 2s_{2}})_{\{\beta\}}, \tag{3}\]
where the index symmetrization is now entirely automatic. Incidentally, the SU(2) spinors in eq. (2) are also connected with massive coherent-spin states, the scattering of which is described by the coherent-spin amplitude [52]
\[\mathcal{A}(p_{2},\beta|p_{1},\alpha;k,h)=e^{-(||\alpha||^{2}+||\beta||^{2})/2} \overset{\infty}{\underset{2s_{1}=1}{\sum}}\overset{\infty}{\underset{2s_{2}=1 }{\sum}}\frac{1}{\sqrt{(2s_{1})!(2s_{2})!}}\boldsymbol{\mathcal{A}}(p_{2},s_{ 2}|p_{1},s_{1};k,h). \tag{4}\]
### Classifying mass-changing amplitudes
Going back to the stripped amplitude \(A(k,h)_{\{\alpha\},\{\beta\}}\) with two sets of symmetrized SL(2,\(\mathbb{C}\)) indices, we may decompose it in the chiral-spinor basis of \(|k\rangle\) and \(p_{1}|k]\). Unlike the equal-mass case, these two spinors are linearly independent (and there is no need for a helicity factor as in [95]), because
\[\langle k|p_{1}|k]=2p_{1}\cdot k=M_{2}^{2}-M_{1}^{2}\neq 0 \tag{5}\]
due to momentum conservation \(p_{2}=p_{1}+k\). This equation also tells us about the possible dimensionful scales entering the three-point process from an EFT perspective, which will have to be matched later. We can either use the mass pair \((M_{1},M_{2})\) or \((M_{1},2p_{1}\cdot k)\), and in this work we are going to favor the latter. For instance, we may use \(M_{1}\) to absorb the mass dimension of the amplitude and allow the EFT coefficients to depend on the dimensionless ratio
\[w:=\frac{2p_{1}\cdot k}{M_{1}^{2}}, \tag{6}\]
while expanding in terms of the dimensionless spinors of helicity \(-1/2\) and \(1/2\):
\[\lambda_{\alpha}:=M_{1}^{-1/2}|k\rangle_{\alpha},\qquad\quad\mu_{\alpha}:=M_{1 }^{-3/2}p_{1,\alpha\dot{\beta}}|k]^{\dot{\beta}}\qquad\Rightarrow\qquad\langle \lambda\mu\rangle=w. \tag{7}\]
Therefore, the most general stripped amplitude involving two unequal masses and one massless positive-helicity particle is schematically given by [95, 99]
\[A(k,h\geq 0)_{\{\alpha\},\{\beta\}}=M_{1}^{1-s_{1}-s_{2}}\sum_{i}c_{(i),s_{1},s_{2}}^{h}(w)\,[\lambda^{s_{1}+s_{2}-h}\mu^{s_{1}+s_{2}+h}]^{(i)}_{\{\alpha\},\{\beta\}}. \tag{8}\]
Here \(i\) enumerates inequivalent tensor products with the given spinorial index structure, and their scalar coefficients \(c_{(i),s_{1},s_{2}}^{h}(w)\) may depend on the spins and on the dimensionless ratio \(w\). In order to specify the relevant spinorial structures, note that there are natural constraints that follow already from the form of eq. (8), such as
\[s_{1}+s_{2}\pm h\,\in\,\mathbb{Z}_{\geq 0}\qquad\Rightarrow\qquad s_{1}+s_{2} \geq|h|. \tag{9}\]
Moreover, there can clearly be no three-point amplitude for one or three half-integer spins -- in QFT this standard fact is usually derived from the spin-statistics theorem.
We find it helpful to observe that the massless little-group dependence may be completely factored out (in the tensor-product sense), leaving a polynomial in \(\lambda\) and \(\mu\), which is independent of it:
\[\begin{split}[\lambda\mu\oplus\mu\lambda]^{n}_{\{\alpha\},\{ \beta\}}:=c_{0}(\lambda^{n})_{\alpha_{1}\dots\alpha_{n}}(\mu^{n})_{\beta_{1} \dots\beta_{n}}+c_{1}(\lambda^{n-1}\mu)_{\alpha_{1}\dots\alpha_{n}}(\mu^{n-1} \lambda)_{\beta_{1}\dots\beta_{n}}\\ +\dots+c_{n-1}(\lambda\mu^{n-1})_{\alpha_{1}\dots\alpha_{n}}(\mu \lambda^{n-1})_{\beta_{1}\dots\beta_{n}}+c_{n}(\mu^{n})_{\alpha_{1}\dots\alpha _{n}}(\lambda^{n})_{\beta_{1}\dots\beta_{n}},\end{split} \tag{10}\]
where we have also omitted the \(\otimes\) sign for brevity. The exponent \(n\) depends on the total-spin quantum numbers, and in the amplitude each such term may have its own coefficient. Without loss of generality, we consider \(s_{2}\geq s_{1}\), where we have two cases:
* \(s_{2}-s_{1}\geq h\), where we saturate the \(s_{1}\) indices by the above polynomial, while the remaining \(s_{2}\) indices are accounted for by the tensor product, which is unambiguously defined by the overall helicity weight. The corresponding spinorial structures belong to the following tensor power of a direct sum: \[[\lambda^{s_{1}+s_{2}-h}\mu^{s_{1}+s_{2}+h}]^{(i)}_{\{\alpha\},\{\beta\}}\ \in\ [ \lambda\mu\oplus\mu\lambda]^{2s_{1}}_{\{\alpha\},\{\beta\}}\,(\lambda^{s_{2} -s_{1}-h}\mu^{s_{2}-s_{1}+h})_{\{\beta\}};\] (11)
* \(s_{2}-s_{1}<h\), where the polynomial (10) saturates the number of \(\lambda\)'s, which is equal to \(s_{1}+s_{2}-h\), while the remaining \(2h\) of \(\mu\)'s are unambiguously distributed among the two massive particles. The spanning spinorial structure is thus \[[\lambda^{s_{1}+s_{2}-h}\mu^{s_{1}+s_{2}+h}]^{(i)}_{\{\alpha\},\{\beta\}}\ \in\ [ \lambda\mu\oplus\mu\lambda]^{s_{1}+s_{2}-h}_{\{\alpha\},\{\beta\}}\,(\mu^{s_{ 1}-s_{2}+h})_{\{\alpha\}}(\mu^{s_{2}-s_{1}+h})_{\{\beta\}}.\] (12) Note that in electromagnetism this case only occurs for \(s_{1}=s_{2}\), whereas in GR both \(s_{2}=s_{1}\) and \(s_{2}=s_{1}+1\) are possible.
In both cases, we have the polynomial with free coefficients and the additional factor, which carries the massless helicity. This factor completes the \(\mathrm{SL}(2,\mathbb{C})\) indices of either massive particle that are not accounted for by the polynomial, and of course all \(\alpha\)'s and all \(\beta\)'s are implicitly symmetrized.
This analysis should be repeated for \(s_{1}\geq s_{2}\), and the \(\mathrm{SL}(2,\mathbb{C})\) indices can then be contracted with the massive spinors (and auxiliary variables), for which the Dirac equations \(p_{1}|\mathbf{1}\rangle=M_{1}|\mathbf{1}]\) and \(p_{2}|\bar{\mathbf{2}}\rangle=M_{2}|\bar{\mathbf{2}}]\) hold. In this way, we arrive at
\[\boldsymbol{\mathcal{A}}(p_{2},s_{2}|p_{1},s_{1};k,h)=\begin{cases}\boldsymbol {F}^{h}_{s_{1},s_{2}}\,\langle\bar{\mathbf{2}}k\rangle^{s_{2}-s_{1}-h}[\bar{ \mathbf{2}}k]^{s_{2}-s_{1}+h},&s_{2}-s_{1}\geq|h|,\\ \boldsymbol{F}^{h}_{s_{1},s_{2}}\,[\bar{\mathbf{2}}k]^{s_{2}-s_{1}+h}[k \mathbf{1}]^{s_{1}-s_{2}+h},&|s_{2}-s_{1}|<h,\\ \boldsymbol{F}^{h}_{s_{1},s_{2}}\,\langle\bar{\mathbf{2}}k\rangle^{s_{2}-s_{1 }-h}\langle k\mathbf{1}\rangle^{s_{1}-s_{2}-h},&|s_{2}-s_{1}|<-h,\\ \boldsymbol{F}^{h}_{s_{1},s_{2}}\,\langle k\mathbf{1}\rangle^{s_{1}-s_{2}-h}[ k\mathbf{1}]^{s_{1}-s_{2}+h},&s_{1}-s_{2}\geq|h|.\end{cases}\] (13a) where the factor \[\boldsymbol{F}^{h}_{s_{1},s_{2}}\] contains free coefficients and can now be written as \[\boldsymbol{F}^{h}_{s_{1},s_{2}}=M_{1}^{1-2s_{1}-2s_{2}}\sum_{r=0}^{n}g^{h}_{ r,s_{1},s_{2}}(w)\,\langle\bar{\mathbf{2}}|k|\mathbf{1}]^{r}\,[\bar{\mathbf{2}}|k| \mathbf{1}\rangle^{n-r}. \tag{13b}\]
These coefficients \(g^{h}_{r,s_{1},s_{2}}(w)\) are a refined version of \(c^{h}_{(i),s_{1},s_{2}}(w)\) in eq. (8); the main difference between them is some degree of rescaling by \(M_{2}/M_{1}\). The polynomial degree \(n\) above is related to the maximal number of terms:
\[n+1\,=\,\begin{cases}\phantom{-}2s_{1}+1,&\phantom{-}s_{2}-s_{1}\geq|h|,\\ s_{1}+s_{2}-|h|+1,&\phantom{-}|s_{2}-s_{1}|<|h|,\\ \phantom{-}2s_{2}+1,&\phantom{-}s_{1}-s_{2}\geq|h|,\end{cases} \tag{13c}\]
This number matches the counting in [101]. For completeness, the above formulae (13) already include the result of the above analysis for the negative messenger helicity, in which case we used the anti-chiral basis, \(|k]\) and \(p_{1}|k\rangle\).
Interestingly, the coupling counting (13c) obeys the bound
\[\#\text{ coeffs.}\,\leq\,2\text{min}(s_{1},s_{2})+1. \tag{14}\]
For instance, there is only one term for the case of the scalar massive incoming state \(s_{1}=0\). Indeed, the constraint (9) immediately implies \(s_{2}>|h|\), so we get a trivial polynomial of degree \(n(0,s_{2},h)=0\). In that case, the amplitude takes the form
\[\boldsymbol{\mathcal{A}}(p_{2},s_{2}|p_{1},s_{1}=0;k,h)=g^{|h|}_{0,0,s_{2}}(w )M_{1}^{1-2s_{2}}\langle\bar{\boldsymbol{2}}k\rangle^{s_{2}-h}[\bar{\boldsymbol {2}}k]^{s_{2}+h}, \tag{15}\]
where we have assumed parity and thus conflated the dimensionless coupling coefficients \(g^{\pm h}_{0,0,s_{2}}(w)\) into the single coupling \(g^{|h|}_{0,0,s_{2}}(w)\), which still depends on the absolute helicity value of the messenger particle.3
Footnote 3: In the worldline formalism, the parity assumption is called “electric-magnetic” duality [88; 89].
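Since the counting (13c) and the bound (14) are purely combinatorial, they are easy to tabulate. The following short Python sketch (an illustration added for convenience, not part of the original text; the helper `n_couplings` is ours) implements the three cases of (13c), imposes the existence constraint (9), and verifies the bound (14) over a sample of spins:

```python
from fractions import Fraction

def n_couplings(s1, s2, h):
    """Number of independent couplings n+1 in eq. (13c); 0 if no amplitude exists."""
    s1, s2, h = Fraction(s1), Fraction(s2), Fraction(h)
    if s1 + s2 < abs(h) or (s1 + s2 + h) % 1 != 0:
        return 0                                  # existence constraint, cf. eq. (9)
    if s2 - s1 >= abs(h):
        return int(2 * s1 + 1)
    if s1 - s2 >= abs(h):
        return int(2 * s2 + 1)
    return int(s1 + s2 - abs(h) + 1)              # the case |s2 - s1| < |h|

spins = (0, Fraction(1, 2), 1, Fraction(3, 2), 2, 3)
for s1 in spins:
    for s2 in spins:
        for h in (1, 2):                          # photon and graviton messengers
            assert n_couplings(s1, s2, h) <= 2 * min(s1, s2) + 1   # bound (14)

print(n_couplings(0, 2, 2))   # -> 1: a single coupling for s1 = 0, s2 = 2, h = 2
```

In particular, for a scalar initial state it returns a single coupling whenever the amplitude exists, in line with eq. (15).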
### Minimal mass-changing amplitudes
As a minor digression, let us note that, for non-zero initial spin, the proliferation of possible effective couplings in the three-point mass-changing amplitude (13) may be reduced if we come up with some notion of minimality. Indeed, in a similar situation in the equal-mass case, \(M_{1}=M_{2}\), Arkani-Hamed, Huang and Huang [95] managed to single out the so-called "minimal" amplitudes by considering their massless limit. For positive helicity, these minimal amplitudes include, for instance,
\[\mathcal{A}(p_{2},s|p_{1},s;k,h\geq 0)=g^{h}_{0}(p_{1}\cdot\varepsilon_{k}^{ \pm})^{h}\langle\bar{\boldsymbol{2}}\boldsymbol{1}\rangle^{2s}, \tag{16}\]
where for simplicity we have assumed \(s_{1}=s_{2}=s\). In other words, the stripped amplitude is proportional to the tensor product of \(\text{SL}(2,\mathbb{C})\) Levi-Civita tensors \((\epsilon^{2s})_{\{\alpha\},\{\beta\}}\).
To expose a similar unique structure in the unequal-mass case, where the couplings correspond to the terms in the polynomial (10), we may change the basis inside of it to the antisymmetric and symmetric combinations of the basis spinors:
\[[\lambda\mu\oplus\mu\lambda]^{n}_{\{\alpha\},\{\beta\}}=[\epsilon\oplus \sigma]^{n}_{\{\alpha\},\{\beta\}},\qquad\epsilon_{\alpha\beta}=\frac{\lambda _{\alpha}\mu_{\beta}-\mu_{\alpha}\lambda_{\beta}}{\langle\lambda\mu\rangle}, \qquad\sigma_{\alpha\beta}:=\lambda_{\alpha}\mu_{\beta}+\mu_{\alpha}\lambda_{ \beta}. \tag{17}\]
Since of course \(\langle\mathbf{1}|^{\alpha}\langle\mathbf{\bar{2}}|^{\beta}\epsilon_{\alpha\beta}= \langle\mathbf{1\bar{2}}\rangle\) and the symmetric combination leads to
\[\langle\mathbf{1}|^{\alpha}\langle\mathbf{\bar{2}}|^{\beta}\sigma_{\alpha\beta }=\frac{M_{2}^{2}+M_{1}^{2}}{M_{1}^{2}}\langle\mathbf{1\bar{2}}\rangle+\frac{2 M_{2}}{M_{1}}[\mathbf{1\bar{2}}], \tag{18}\]
the main amplitude factor can simply be expanded in the angle and square brackets:
\[\mathbf{F}_{s_{1},s_{2}}^{h}=M_{1}^{1-2s_{1}-2s_{2}+n}\sum_{r=0}^{n}\tilde{g}_{r,s _{1},s_{2}}^{h}(w)\,\langle\mathbf{\bar{2}}\mathbf{1}\rangle^{n-r}[\mathbf{ \bar{2}}\mathbf{1}]^{r}. \tag{19}\]
So we propose to define the minimal mass-changing stripped amplitudes as those with highest power in \(\epsilon_{\alpha\beta}\), or, equivalently,
\[\mathcal{A}_{\rm min}(p_{2},s_{2}|p_{1},s_{1};k,h\geq 0) \tag{20}\] \[=\tilde{g}_{0,s_{1},s_{2}}^{+}(w)\begin{cases}M_{1}^{1-2s_{2}} \langle\mathbf{\bar{2}}\mathbf{1}\rangle^{2s_{1}}\langle\mathbf{\bar{2}}k \rangle^{s_{2}-s_{1}-h}[\mathbf{\bar{2}}k]^{s_{2}-s_{1}+h},&s_{2}-s_{1}\geq h \geq 0,\\ M_{1}^{1-s_{1}-s_{2}-h}\langle\mathbf{\bar{2}}\mathbf{1}\rangle^{s_{1}+s_{2}- h}[\mathbf{\bar{2}}k]^{s_{2}-s_{1}+h}[\mathbf{k1}]^{s_{1}-s_{2}+h},&|s_{2}-s_{1}|<h,\\ M_{1}^{1-2s_{2}}\langle\mathbf{\bar{2}}\mathbf{1}\rangle^{2s_{2}}\langle k \mathbf{1}\rangle^{s_{1}-s_{2}-h}[k\mathbf{1}]^{s_{1}-s_{2}+h},&s_{1}-s_{2} \geq h\geq 0.\end{cases}\]
It is clear that for \(s_{1}=0\) and \(s_{2}>|h|\), the minimal-coupling amplitude coincides with the previously defined amplitude (15). Moreover, let us note in passing that these amplitudes satisfy the double-copy prescription explored in the presence of massive spinning states in [102; 103].
We hope to explore these amplitudes in more detail elsewhere, whereas in the rest of this paper for the sake of simplicity we focus on the mass-changing amplitudes (15) with the non-spinning initial state, which we use to model the radiation absorption by a Schwarzschild black hole. In this context, it is important to note that if we assume locality of the EFT Lagrangian that implies the above amplitudes, the dimensionless coupling constants \(g_{0,s_{1},s_{2}}^{h}(w)\) may then be constrained to only have non-negative powers of \(w\). Unfortunately, a rigorous proof of this statement may be too technical and require dealing with all sorts of field redefinitions. So for the purposes of this paper, let us simply impose that \(g_{0,0,s_{2}}^{h}(w)\) have no poles in \(w\):
\[g_{0,0,s_{2}}^{h}(w)=\mathcal{O}(w^{0})\qquad\Rightarrow\qquad\mathbf{\mathcal{ A}}(p_{2},s_{2}|p_{1},s_{1}=0;k,h)=\mathcal{O}(w^{s_{2}}),\qquad w\to 0, \tag{21}\]
which constitutes a non-trivial EFT modeling assumption.
## 3 Absorption mechanics of compact objects
In this section we describe our setup for obtaining classical absorption effects from the quantum on-shell scattering amplitudes. We focus on the simplest relevant process depicted in figure 1: a graviton spherical state impinging on a massive particle of mass \(M_{1}\) (for simplicity taken spinless), which absorbs the graviton and changes its mass to \(M_{2}\) and spin to \(s_{2}\). It is natural to think of the corresponding scattering
mplitude in terms of plane-wave states as described in section 2. However, GR methods give us results [80; 81; 82; 83; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106] for spherical waves with fixed angular-momentum quantum numbers. Therefore, we start by translating between these two pictures -- with a focus on single-graviton states. In section 5 we will come back to justifying this setup further using classical coherent states, which are more appropriate for modeling classical waves.
### Spherical helicity amplitude
By definition (see e.g. [107]), spherical helicity states partially diagonalize the total angular momentum operator \(\mathbf{J}\), more specifically, \(\mathbf{J}^{2}\), \(J_{z}\) and helicity \((\mathbf{J}\cdot\mathbf{P})/\mathbf{P}^{2}\), as well as the Hamiltonian \(P^{0}\). Such states are labeled by energy \(\omega\), angular-momentum quantum numbers \(j\), \(m=-j,\ldots,j\) and helicity \(h=\pm 2\) (graviton) or \(\pm 1\) (photon):4
Footnote 4: Here and below, the hat notation [48] means \(\hat{d}^{n}p:=d^{n}p/(2\pi)^{n}\) and \(\hat{\delta}^{n}(...):=(2\pi)^{n}\delta^{n}(...)\). For the spherical helicity states, we also assume masslessness: \(P^{2}|\omega,j,m,h\rangle=0\).
\[|\omega,j,m,h\rangle=a^{\dagger}_{j,m,h}(\omega)|0\rangle,\qquad\langle\omega^ {\prime},j^{\prime},m^{\prime},h^{\prime}|\omega,j,m,h\rangle=\hat{\delta}( \omega^{\prime}-\omega)\delta^{j}_{j^{\prime}}\delta^{m}_{m^{\prime}}\delta^{h }_{h^{\prime}}. \tag{3.1}\]
This is in contrast to the more familiar plane-wave states \(|k,h\rangle\), which diagonalize the four-momentum \(P^{\mu}\) in addition to the helicity \((\mathbf{J}\cdot\mathbf{P})/\mathbf{P}^{2}\):
\[|k,h\rangle:=a^{\dagger}_{h}(k)|0\rangle,\qquad\langle k^{\prime},h^{\prime}|k,h\rangle=2|\mathbf{k}|\hat{\delta}^{3}(\mathbf{k}^{\prime}-\mathbf{k})\delta^{h}_{h^{ \prime}}. \tag{3.2}\]
The two bases of one-particle states may be related by [91]
\[\langle k,h^{\prime}|\omega,j,m,h\rangle=\frac{4\pi}{\sqrt{2\omega}}\delta^{h} _{h^{\prime}}\hat{\delta}(|\mathbf{k}|-\omega)\,_{-h}Y_{jm}(\hat{\mathbf{k}}), \tag{3.3}\]
where the spin-weighted spherical harmonics \({}_{-h}Y_{jm}(\hat{\mathbf{k}})\) depend on the momentum direction \(\hat{\mathbf{k}}:=\mathbf{k}/|\mathbf{k}|\) and constitute a generalization [108; 109] of the usual (scalar) spherical harmonics. The corresponding completeness relations imply that the one-particle spinning spherical state can be written as
\[|\omega,j,m,h\rangle=\frac{4\pi}{\sqrt{2\omega}}{\int_{k}}\hat{ \delta}(k^{0}-\omega)\,_{-h}Y_{j,m}(\hat{\mathbf{k}})|k,h\rangle=\sqrt{2\omega}{ \int}\frac{d\Omega_{\hat{\mathbf{k}}}}{4\pi}{}_{-h}Y_{j,m}(\hat{\mathbf{k}})|k,h \rangle\big{|}_{|\mathbf{k}|=\omega}, \tag{3.4}\]
where \(d\Omega_{\hat{\mathbf{k}}}\) denotes the spherical-angle integration measure over the directions of \(\mathbf{k}\).

Figure 1: Wave impinging on a scalar black hole

We have also defined a shorthand for the on-shell momentum integration measure
\[\int_{p}:=\int\!\!\frac{d^{4}p}{(2\pi)^{3}}\Theta(p^{0})\delta(p^{2}-M_{p}^{2})=: \int\!\!\frac{d^{4}p}{(2\pi)^{3}}\delta^{+}(p^{2}-M_{p}^{2}),\qquad\quad M_{k}=0. \tag{11}\]
In order to write the scattering matrix element for a spherical helicity state, we need to be careful with the massive particle at the origin, which, strictly speaking, cannot be a plane-wave state either. So instead we use a wavepacket
\[|\psi\rangle:=\int_{p_{1}}\!\!\psi_{\xi}(p_{1})|p_{1}\rangle: \langle\psi|\psi\rangle=1,\qquad\langle\psi|P^{\mu}|\psi\rangle=p _{1,\rm cl}^{\mu}:=(M_{1},\mathbf{0}), \tag{12}\] \[\langle\psi|P^{\mu}P^{\nu}|\psi\rangle=\langle\psi|P^{\mu}|\psi \rangle\langle\psi|P^{\nu}|\psi\rangle+\mathcal{O}(\xi),\]
where \(\xi:=\ell_{\rm C}^{2}/\ell_{\rm WP}^{2}\) is related to the dimensionless ratio of the Compton wavelength and the position-space spread of the wavepacket [48]. We will be focusing on the scale hierarchy
\[\ell_{\rm C}\ll\ell_{\rm WP}\ll\frac{2\pi\hbar c}{\omega}\qquad\Rightarrow \qquad\xi\ll 1, \tag{13}\]
relevant for classical scattering of a wave with frequency \(\omega/\hbar\). For concreteness, we may think of \(\psi_{\xi}(p_{1})\propto\exp\left(-\frac{p_{1}^{0}}{\xi M_{1}}\right)\), the Lorentz-invariant version of which is [48, 110]
\[\psi_{\xi}(p_{1})=\frac{1}{M_{1}}\biggl{[}\frac{8\pi^{2}}{\xi K_{1}(2/\xi)} \biggr{]}^{1/2}\exp\biggl{(}\!-\frac{p_{1}\!\cdot u_{1}}{\xi M_{1}}\biggr{)}, \tag{14}\]
where \(K_{1}\) denotes the modified Bessel function of the second kind.
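As a small numerical sanity check (an illustrative sketch, not part of the original text), one can verify that the wavepacket (3.8) is unit-normalized, \(\langle\psi|\psi\rangle=1\), when integrated against the on-shell measure \(\int_{p}\) defined above; we work in the rest frame, \(u_{1}=(1,\mathbf{0})\), and set \(M_{1}=1\):

```python
# Numerical check (not from the paper) that the Lorentz-invariant wavepacket
# of eq. (3.8) is unit-normalized with respect to the on-shell measure int_p.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv            # modified Bessel function K_nu

M1 = 1.0

def norm(xi):
    pref = 8 * np.pi**2 / (xi * kv(1, 2.0 / xi))       # |psi_xi|^2 prefactor (times M1^2)
    def integrand(p):
        E = np.sqrt(p**2 + M1**2)                      # p1.u1 = E in the rest frame
        psi2 = pref / M1**2 * np.exp(-2.0 * E / (xi * M1))
        # d^3p / ((2 pi)^3 2E) |psi|^2, written in spherical coordinates
        return 4 * np.pi * p**2 / ((2 * np.pi)**3 * 2 * E) * psi2
    value, _ = quad(integrand, 0.0, 40.0 * np.sqrt(xi) * M1)
    return value

for xi in (0.5, 0.1, 0.02):
    print(f"xi = {xi:>4}:  <psi|psi> = {norm(xi):.6f}")   # each ~ 1.000000
```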
We are now ready to express the \(S\)-matrix element for a spherical helicity state in terms of the conventional plane-wave scattering amplitude:
\[\langle X|S|\psi;\omega,j,m,h\rangle=\frac{4\pi i}{\sqrt{2\omega}}\!\int_{p_{ 1}}\!\!\psi_{\xi}(p_{1})\!\int_{k}\!\!\hat{\delta}(k^{0}\!-\!\omega)\hat{ \delta}^{4}(p_{1}\!+\!k\!-\!p_{X})\,_{-h}Y_{j,m}(\hat{\mathbf{k}})\mathcal{A}(X|p_ {1};k,h), \tag{15}\]
where we have ignored the no-scattering term in \(S=1+iT\). For the amplitude arguments, we choose to mimic the structure of the matrix elements and write the outgoing particles first separated from the incoming particles by a vertical line.
Unfortunately, the matrix element (15) by itself is too singular to handle unambiguously, which is due to the infinite norm \(\langle\omega,j,m,h|\omega,j,m,h\rangle=\hat{\delta}(0)\) of the massless spherical state (10). So we also smear its energy with a wavefunction:
\[|\gamma\rangle:=\int_{0}^{\infty}\!\!\hat{d}\omega\gamma_{\zeta}(\omega)| \omega,j,m,h\rangle: \langle\gamma^{\prime}|\gamma\rangle=\delta_{j^{\prime}}^{j} \delta_{m^{\prime}}^{m}\delta_{h^{\prime}}^{h},\qquad\langle\gamma|P^{0}| \gamma\rangle=\omega_{\rm cl}, \tag{16}\] \[\langle\gamma|P^{0}P^{0}|\gamma\rangle=\langle\gamma|P^{0}| \gamma\rangle\langle\gamma|P^{0}|\gamma\rangle+\mathcal{O}(\zeta).\]
The corresponding scattering-matrix element is
\[\langle X|S|\psi;\gamma\rangle=4\pi i\!\int_{p_{1}}\!\!\psi_{\xi}(p_{1})\!\int_ {k}\frac{\gamma_{\zeta}(k^{0})}{\sqrt{2k^{0}}}\hat{\delta}^{4}(p_{1}+k-p_{X}) \,_{-h}Y_{j,m}(\hat{\mathbf{k}})\mathcal{A}(X|p_{1};k,h). \tag{17}\]
### Covariant spherical states
Before we proceed to the absorption cross-section, it is rewarding to covariantize our spherical-helicity state setup. By covariantization we mean allowing for an arbitrary time direction \(u^{\mu}\), with \(u^{2}=1\), as well as a spacelike spin quantization axis \(n^{\mu}\), with \(n^{2}=-1\) and \(n\cdot u=0\). (In section 3.1, these were set to \((1,\mathbf{0})\) and \((0,0,0,1)\), respectively.) The corresponding angular momentum operator is
\[J^{\mu}(u):=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}J_{\nu\rho}u_{\sigma}\qquad\Rightarrow\qquad[J^{\mu}(u),J^{\nu}(u)]=i\epsilon^{\mu\nu\rho\sigma}u_{\rho}J_{\sigma}(u), \tag{3.12}\]
which is not to be confused with the Pauli-Lubanski spin vector \(W^{\mu}\). A covariant spherical helicity state \(|\omega,j,m,h\rangle\) is then an eigenstate of "energy" \(E(u):=u\cdot P\) and angular-momentum combinations \(-J(u)^{2}\), \(n\cdot J(u)\) and \(J(u)\cdot P=W\cdot P\). Similarly to eq. (3.4), we choose to construct them directly from the plane-wave states:
\[|\omega,j,m,h\rangle_{u,n}:=\frac{4\pi}{\sqrt{2\omega}}{\int}\hat{d}^{4}k\, \hat{\delta}^{+}(k^{2})\hat{\delta}(k\cdot u-\omega)\,_{-h}Y_{j,m}(k;u,n)|k,h\rangle. \tag{3.13}\]
The new ingredient here is the covariant spin-weighted spherical harmonic. We define these functions in terms of spinor products as follows:
\[{}_{h}\tilde{Y}_{j,m}(k;u,n):=\frac{1}{\langle k|u|k]^{j}}\left[\left[u_{a}k \right]^{\odot(j+h)}\odot\langle ku_{a}\rangle^{\odot(j-h)}\right]_{\{a\}= \underbrace{(1\dots 1}_{j-m}\underbrace{2\dots 2}_{j+m})}. \tag{3.14}\]
We have hereby followed [39, 111] in using the massive spinor-helicity formalism [95] to covariantize the spinorial construction dating back to Newman and Penrose [108]. We adopt the conjugation conventions \(\left(|p^{a}\rangle_{\alpha}\right)^{*}=[p_{a}|_{\dot{\alpha}},\,\left([p^{a}|_{\dot{\alpha}}\right)^{*}=-|p_{a}\rangle_{\alpha}\) for \(p^{0}>0\), which imply
\[{}_{h}\tilde{Y}_{j,m}^{*}(k;u,n)=(-1)^{2j+m-h}{}_{-h}\tilde{Y}_{j,-m}(k;u,n). \tag{3.15}\]
The properly normalized functions seen in eq. (3.13) are written without the tildes:
\[{}_{h}Y_{j,m}(k;u,n):=(-1)^{m}(2j)!\sqrt{\tfrac{2j+1}{4\pi(j+m)!(j-m)!(j+h)!(j -h)!}}\,{}_{h}\tilde{Y}_{j,m}(k;u,n), \tag{3.16}\]
with the orthonormality statement being
\[\frac{2}{\omega}{\int}d^{4}k\,\delta^{+}(k^{2})\delta(k\cdot u- \omega)\,_{h}Y_{j^{\prime},m^{\prime}}^{*}(k;u,n)\,_{h}Y_{j,m}(k;u,n)=\delta_ {j}^{j^{\prime}}\delta_{m}^{m^{\prime}}. \tag{3.17}\]
The proof and a detailed exposition of the harmonics (3.14) are given in appendix A.
Let us point out the new important features of these harmonics. First of all, the harmonics are by definition (3.14) insensitive to the overall scale of both \(k^{\mu}\) and \(u^{\mu}\). Moreover, they are now clearly formulated in a convention-independent way -- in the sense that it is covariant with respect to the two little groups:
* the massless little-group U(1) of \(k^{\mu}\) may be used to change the phases of all spherical harmonics in a local but mutually consistent way. Namely, transforming \(|k\rangle\to e^{-i\phi(k)/2}|k\rangle\), \(|k]\to e^{i\phi(k)/2}|k]\) implies phase adjustments of the form \({}_{h}Y_{j,m}(k;u,n)\to e^{ih\phi(k)}{}_{h}Y_{j,m}(k;u,n)\), which connect between various possible definitions of spin-weighted spherical harmonics, e.g. via quaternions [112].
* the massive little group SU(2) of \(u^{\mu}\) may be used to change the meaning of the magnetic quantum number \(m\). For instance, the explicit spinor parametrizations (107) and (108) correspond to the \(m\)-quantization along \(\boldsymbol{u}\neq 0\) and the conventional \(z\)-axis for \(\boldsymbol{u}=0\), respectively. However, we may just as well apply transformations \(|u^{a}\rangle\to U^{a}{}_{b}(u)|u^{b}\rangle\), \(|u^{a}]\to U^{a}{}_{b}(u)|u^{b}]\) to the massive spinors, and this will rotate the spin quantization axis \[n^{\mu}:=\frac{1}{2}(\langle u_{2}|\sigma^{\mu}|u^{2}]+[u_{2}|\bar{\sigma}^{ \mu}|u^{2}\rangle)\qquad\Rightarrow\qquad n^{2}=-1,\quad u\cdot n=0.\] (117) Having this relation in mind, we henceforth compress our notation to \({}_{h}Y_{j,m}(k;u)\).
In addition, we can specify the general frame transformations of the covariant spherical harmonics (101). Indeed, it is shown in appendix B that under the time-direction change \(u^{\mu}\to v^{\mu}=L^{\mu}{}_{\nu}(v\!\leftarrow\!u)u^{\nu}\) the massive spinors are boosted as follows:
\[|v^{a}\rangle=\frac{\sqrt{\mu}}{\mu\!+\!1}(u\!+\!v)|u^{a}],\qquad|v^{a}]=\frac{\sqrt{\mu}}{\mu\!+\!1}(u\!+\!v)|u^{a}\rangle,\qquad\mu:=u\cdot v+\sqrt{(u\cdot v)^{2}\!-\!1}. \tag{3.19}\]
Here we have assumed that the spin quantization axis for the resulting time direction \(v^{\mu}\) is automatically \(L^{\mu}{}_{\nu}(v\!\leftarrow\!u)n^{\nu}\), i.e. the boosted version of the original quantization axis \(n^{\mu}\). Of course, it can then be easily tweaked by an additional little-group transformation of the resulting spinors \(|v^{a}\rangle\to U^{a}{}_{b}(v)|v^{b}\rangle\), \(|v^{a}]\to U^{a}{}_{b}(v)|v^{b}]\).
Given this covariant formulation of the spherical states, we rewrite eq. (100) as
\[\langle X|S|\psi;\gamma\rangle=4\pi i\!\int_{0}^{\infty}\!\frac{ \hat{d}\omega}{\sqrt{2\omega}}\gamma_{\zeta}(\omega)\!\int\!\hat{d}^{4}p_{1} \hat{\delta}^{+}(p_{1}^{2}\!-\!M_{1}^{2})\psi_{\xi}(p_{1}) \tag{119}\] \[\qquad\times\!\int\!\hat{d}^{4}k\hat{\delta}^{+}(k^{2})\hat{ \delta}(k\cdot u_{1}-\omega)\hat{\delta}^{4}(p_{1}+k-p_{X})\,_{-h}Y_{j,m}(k;u _{1})\mathcal{A}(X|p_{1};k,h),\]
which is what we are going to use in the absorption cross-section calculation below.
### Mass-changing amplitudes as harmonics
It is tempting to notice that the amplitudes (15) are simply proportional to spin-weighted spherical harmonics defined in eq. (101), namely
\[\mathcal{A}_{\underbrace{1\dots 12\dots 2}_{s_{2}-m\ s_{2}+m}}(p_{2},s_{2}|p_{ 1};k,h)\!=M_{1}g_{0,0,s_{2}}^{|h|}(w)(-1)^{s_{2}-h}w^{s_{2}}{}_{h}\tilde{Y}_{ s_{2},m}(k;u_{2})\!=:\mathcal{A}_{s_{2},m}^{h}(p_{2}|p_{1};k). \tag{120}\]
However, the harmonics are defined with respect to \(u_{2}^{\mu}\), which unlike \(u_{1}^{\mu}\) involves the integration variable \(k^{\mu}\). So we wish to make the transition between the two velocity vectors, which are related by the boost
\[u_{2}^{\rho}=L_{\sigma}^{\rho}(u_{2}\gets u_{1})u_{1}^{\sigma}=\exp\Bigl{(} \frac{i\log(u_{1}\!\cdot\!u_{2}\!+\!\sqrt{(u_{1}\!\cdot\!u_{2})^{2}\!-\!1})}{ \sqrt{(u_{1}\!\cdot\!u_{2})^{2}\!-\!1}}u_{1}^{\mu}u_{2}^{\nu}\Sigma_{\mu\nu} \Bigr{)}^{\rho}_{\sigma}u_{1}^{\sigma}. \tag{3.22}\]
The corresponding spinor transformations, given by eq. (3.19), may be rewritten as
\[|u_{2}^{a}\rangle=\frac{\sqrt{M_{1}}}{\sqrt{M_{2}}}\biggl{(}|u_{1}^{a}\rangle+\frac{k|u_{1}^{a}]}{M_{1}\!+\!M_{2}}\biggr{)},\qquad|u_{2}^{a}]=\frac{\sqrt{M_{1}}}{\sqrt{M_{2}}}\biggl{(}|u_{1}^{a}]+\frac{k|u_{1}^{a}\rangle}{M_{1}\!+\!M_{2}}\biggr{)}, \tag{3.23}\]
where we have used that \(\mu:=u_{1}\cdot u_{2}+\sqrt{(u_{1}\cdot u_{2})^{2}-1}=M_{2}/M_{1}\). The net effect of this is that the projection of the massive spinors onto the directions \(|k\rangle\) and \(|k]\) is invariant under this boost, so the spherical harmonics are simply related by
\[\langle 2^{a}k\rangle=\langle 1^{a}k\rangle,\qquad[2^{a}k]=[1^{a}k]\qquad \Rightarrow\qquad{}_{h}\tilde{Y}_{s_{2},m}(k;u_{2})={}_{h}\tilde{Y}_{s_{2},m }(k;u_{1}). \tag{3.24}\]
(This is because switching between the rest frames of \(p_{1}\) and \(p_{2}=p_{1}+k\) inside the harmonics is a boost along the direction of \(k\) itself.) The caveat here is that the spin of particle 2 is now quantized along \(L^{\mu}{}_{\nu}(u_{2}\gets u_{1})n_{1}^{\nu}\), i.e. the boost of the spin quantization axis of particle 1, which may be arbitrary but has to be the same for every \(p_{2}=p_{1}+k\). With this restriction in mind, we may rewrite the three-point amplitude as
\[\mathcal{A}_{s_{2},m}^{h}(p_{2}|p_{1};k)=M_{1}\,g_{0,0,s_{2}}^{|h|}(w)(-1)^{s _{2}-h}w^{s_{2}}\,{}_{h}\tilde{Y}_{s_{2},m}(k;u_{1}). \tag{3.25}\]
Let us now introduce the spherical scattering amplitude5
Footnote 5: Note that the definition (3.26) ignores the delta function \(\delta^{4}(p_{1}+k-p_{2})\), which accompanies the scattering amplitude and imposes momentum conservation. Although it will play a role in the cross-section calculation in the next section, the above definition can still be found useful.
\[\mathcal{A}_{\{b\}}(p_{2},s_{2}|p_{1};\omega,j,m,h)\!:=\!\frac{4\pi}{\sqrt{2 \omega}}\!\int_{k}\!\hat{\delta}(k\!\cdot\!u_{1}\!-\!\omega)\,{}_{-h}Y_{j,m}(k ;u_{1})\mathcal{A}_{\{b\}}(p_{2},s_{2}|p_{1};k,h) \tag{3.26}\]
in an analogous manner to eq. (3.13). Using the conjugation and orthogonality properties (3.15) and (3.17), we find
\[\mathcal{A}_{\underbrace{1\ldots 1}_{s_{2}-m^{\prime}}\! \underbrace{2\ldots 2}_{s_{2}+m^{\prime}}}(p_{2},s_{2}|p_{1};\omega,j,m,h)=\frac{(-1)^{-2j+m +h}}{\pi\sqrt{2\omega}}\!\int\!d^{4}k\,\delta^{+}(k^{2})\delta(k\cdot u_{1}- \omega)\\ \times{}_{h}Y_{j,-m}^{*}(k;u_{1})\mathcal{A}_{s_{2},m^{\prime}}^{ h}(p_{2}|p_{1};k) \tag{3.27}\]
This neatly expresses the angular-momentum conservation law thanks to our assumption that the quantum number \(m^{\prime}\) is defined with respect to the axis \(L^{\mu}{}_{\nu}(u_{2}\gets u_{1})n_{1}^{\nu}\).
### Leading-order absorption cross-section
We are now ready to construct the leading absorption cross-section from the above three-point amplitude. The inclusive cross-section for the spherical scattering setup described in section 3.1 is [88; 91]
\[\sigma_{\rm inc}(\omega_{\rm cl},j,m,h)=\frac{\pi}{\omega_{\rm cl}^{2}}P_{\rm inc }(\omega_{\rm cl},j,m,h)=\frac{\pi}{\omega_{\rm cl}^{2}}\sum_{X}\frac{\left| \langle X|S|\psi;\gamma\rangle\right|^{2}}{\langle X|X\rangle\langle\psi| \psi\rangle\langle\gamma|\gamma\rangle}, \tag{3.28}\]
which is invariant under the basis choice for the outgoing states. The leading contribution due to absorption is then given by the 3-point process:
\[P_{\rm inc}^{\rm LO}(\omega_{\rm cl},j,m,h)=V{\int_{0}^{\infty}}dM_{2}^{2} \rho(M_{2}^{2})\int\!\hat{d}^{3}p_{2}\frac{\left|\langle p_{2}|S|\psi;\gamma \rangle\right|^{2}}{\langle p_{2}|p_{2}\rangle\langle\psi|\psi\rangle\langle \gamma|\gamma\rangle}. \tag{3.29}\]
Here \(V:=\langle p_{2}|p_{2}\rangle/(2p_{2}^{0})=\hat{\delta}^{3}(\mathbf{0})\) is the space volume, which immediately cancels against the normalization of the outgoing state, for which we have temporarily suppressed any quantized degrees of freedom. We have also been compelled to include the spectral density \(\rho(M_{2}^{2})\), which is positive and normalized to 1:
\[\rho(q^{2})\geq 0,\hskip 28.452756pt\int_{0}^{\infty}\!\!\rho(q^{2})dq^{2}=1. \tag{3.30}\]
In a conservative scenario, one may simply assume \(\rho(q^{2})=\delta(q^{2}-M_{1}^{2})\), and the relevant amplitude would be the same-mass three-point amplitude. More generally, it is allowed to contain suitably normalized delta-functions for the "elementary" particles and the continuous part due to multi-particle states. Since we are interested in modeling absorption effects, we are led to explore the continuous part of the spectrum for \(q^{2}>M_{1}^{2}\). In view of the normalization of the initial states, \(\langle\psi|\psi\rangle=\langle\gamma|\gamma\rangle=1\), the resulting probability is given by
\[P_{\rm inc}^{\rm LO}(\omega_{\rm cl},j,m,h)=\sum_{s_{2}}\!\int\!dM_{2}^{2} \rho_{s_{2}}(M_{2}^{2})\int_{p_{2}}\sum_{b_{1},\ldots,b_{s_{2}}}\!\left| \langle p_{2},s_{2},\{b\}|S|\psi;\gamma\rangle\right|^{2}, \tag{3.31}\]
where we have now made the spin degrees of freedom of the outgoing state explicit. The integration over masses of \(p_{2}\) different from \(M_{1}\) is what allows the three-point amplitude to exist on real kinematics and thus makes this cross-section meaningful. As we will see, momentum conservation will later fix this mass to
\[M_{2}^{2}=M_{1}^{2}+2M_{1}\omega_{\rm cl}. \tag{3.32}\]
After restoring \(\hbar\) in front of \(\omega_{\rm cl}\), \(M_{2}\) is actually sent back to \(M_{1}\) in the classical limit, so the spectral density will only be probed in the vicinity of the original BH mass. This, however, does not negate the crucial roles that the unequal masses and
the spectral density play in allowing for a non-singular construction of the cross-section from three-point amplitudes.
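To get a feel for how small this shift is once \(\hbar\) is restored, consider the following order-of-magnitude sketch (with assumed, purely illustrative numbers that are not taken from the paper): a solar-mass black hole absorbing a single quantum of a \(\sim 100\) Hz wave.

```python
# Order-of-magnitude illustration (assumed numbers, not from the paper) of the
# single-quantum mass shift in eq. (3.32), M2^2 = M1^2 + 2 M1 (hbar omega).
import math

hbar  = 1.054571817e-34        # J s
c     = 2.99792458e8           # m / s
M_sun = 1.989e30               # kg

M1_rest_energy = M_sun * c**2                  # ~ 1.8e47 J
omega = 2 * math.pi * 100.0                    # rad/s, a ~100 Hz (LIGO-band) wave

# leading-order fractional mass shift (M2 - M1)/M1 ~ hbar*omega / (M1 c^2)
delta = hbar * omega / M1_rest_energy
print(f"hbar*omega / (M1 c^2) ~ {delta:.1e}")  # ~ 4e-79
```

The fractional shift is of order \(10^{-79}\), so the spectral density is indeed probed only in the immediate vicinity of \(M_{1}\).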
Coming back to the squared amplitude in the integrand of eq. (3.31), we have
\[\begin{split}\sum_{\{b\}}\big{|}\langle p_{2},s_{2},& \{b\}|S|\psi;\gamma\rangle\big{|}^{2}=8\pi^{2}\!\int_{0}^{\infty}\!\! \frac{\hat{d}\omega\hat{d}\omega^{\prime}}{\sqrt{\omega\omega^{\prime}}}\gamma _{\zeta}^{*}(\omega)\gamma_{\zeta}(\omega^{\prime})\!\int_{p_{1},p_{1}^{ \prime},k,k^{\prime}}\!\!\!\psi_{\xi}^{*}(p_{1})\psi_{\xi}(p_{1}^{\prime})\\ &\times\hat{\delta}(k\cdot u_{1}-\omega)\hat{\delta}(k^{\prime} \!\cdot u_{1}-\omega^{\prime})\hat{\delta}^{4}(p_{1}+k-p_{2})\hat{\delta}^{4}( p_{1}^{\prime}+k^{\prime}-p_{2})\\ &\times{}_{-h}Y_{j,m}^{*}(k;u_{1})\,_{-h}Y_{j,m}(k^{\prime};u_{1} )\,\mathcal{A}^{*\{b\}}(p_{2},s_{2}|p_{1};k,h)\,\mathcal{A}_{\{b\}}(p_{2},s_{ 2}|p_{1}^{\prime};k^{\prime},h),\end{split} \tag{3.33}\]
where the summation over the little-group indices \(\{b\}\) is now implicit. We may use \(\hat{\delta}^{4}(p_{1}+k-p_{2})\) to perform the integration over \(p_{2}\), which leaves the on-shell constraint \(\hat{\delta}((p_{1}+k)^{2}-M_{2}^{2})\). We then change the integration variables to
\[p_{\rm a}^{\mu}:=(p_{1}^{\mu}+p_{1}^{\prime\mu})/2,\qquad\quad q^{\mu}:=p_{1} ^{\prime\mu}-p_{1}^{\mu}, \tag{3.34}\]
and remove \(q\) with \(\hat{\delta}^{4}(q+k^{\prime}-k)\) originating from \(\hat{\delta}^{4}(p_{1}^{\prime}+k^{\prime}-p_{2})\). Thus we get
\[\begin{split}& P_{\rm inc}^{\rm LO}(\omega_{\rm cl},j,m,h)=8\pi^{2} \sum_{s_{2}}\!\int\!dM_{2}^{2}\rho_{s_{2}}(M_{2}^{2})\int_{0}^{\infty}\!\!\frac {\hat{d}\omega\hat{d}\omega^{\prime}}{\sqrt{\omega\omega^{\prime}}}\gamma_{ \zeta}^{*}(\omega)\gamma_{\zeta}(\omega^{\prime})\!\int_{k,k^{\prime}}\!\! \hat{\delta}(k\cdot u_{1}-\omega)\\ &\times\hat{\delta}(k^{\prime}\!\cdot u_{1}-\omega^{\prime})\,_{-h }Y_{j,m}^{*}(k;u_{1})\,_{-h}Y_{j,m}(k^{\prime};u_{1})\!\int\!\hat{d}^{4}p_{ \rm a}\,\hat{\delta}^{+}(p_{\rm a}^{2}-M_{1}^{2}-k^{\prime}\!\cdot\!k/2)|\psi_{ \xi}(p_{\rm a})|^{2}\\ &\times\hat{\delta}(2p_{\rm a}\!\cdot k-2p_{\rm a}\!\cdot k^{ \prime})\hat{\delta}(M_{1}^{2}+2p_{\rm a}\!\cdot k+k^{\prime}\!\cdot k-M_{2}^{2 })\,\mathcal{A}^{*\{b\}}(p_{\rm a}\!+\!\frac{k+k^{\prime}}{2},s_{2}|p_{\rm a} \!+\!\frac{k^{\prime}-k}{2};k,h)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times \mathcal{A}_{\{b\}}(p_{\rm a}\!+\!\frac{k+k^{\prime}}{2},s_{2}|p_{\rm a}\!+\! \frac{k-k^{\prime}}{2};k^{\prime},h),\end{split} \tag{3.35}\]
where we have also used the convenient property \(\psi_{\xi}^{*}(p_{\rm a}\!-\!\frac{q}{2})\psi_{\xi}(p_{\rm a}\!+\!\frac{q}{2}) =|\psi_{\xi}(p_{\rm a})|^{2}\) of the momentum wavepackets (3.8).
### Absorption cross-section in classical limit
So far no classical limit was taken, and eq. (3.35) still represents a quantum probability. To rectify that, we send \(\xi\to 0\) and evaluate the integral over \(p_{\rm a}\), which in the presence of the squared wavefunction \(|\psi_{\xi}(p_{\rm a})|^{2}\) and the mass-shell delta function has the effect of setting the momentum \(p_{\rm a}^{\mu}\) to its classical value \(u_{1}^{\mu}\sqrt{M_{1}^{2}+k^{\prime}\!\cdot k/2}=:M_{\rm a}u_{1}^{\mu}\). Subsequently, the delta function \(\hat{\delta}(2p_{\rm a}\!\cdot k-2p_{\rm a}\!\cdot k^{\prime})\) becomes \(\hat{\delta}(\omega-\omega^{\prime})/(2M_{\rm a})\), which removes the integration over \(\omega^{\prime}\). In the integral over the remaining \(\omega\), we send \(\zeta\to 0\), so the squared wavefunction \(|\gamma_{\zeta}(\omega)|^{2}\) localizes it at the classical value \(\omega_{\rm cl}\). In this way, the above probability becomes
\[\begin{split}&\lim_{\zeta\to 0}\lim_{\xi\to 0}P_{\rm inc}^{\rm LO}( \omega_{\rm cl},j,m,h)=\frac{16\pi^{3}}{\omega_{\rm cl}}\sum_{s_{2}}\!\int_{k,k ^{\prime}}\!\frac{1}{2M_{\rm a}}\rho_{s_{2}}(M_{1}^{2}+2M_{\rm a}\omega_{\rm cl }+k^{\prime}\!\cdot k)\hat{\delta}(k\cdot u_{1}-\omega_{\rm cl})\\ &\times\hat{\delta}(k^{\prime}\!\cdot u_{1}-\omega_{\rm cl})\,_{-h }Y_{j,m}^{*}(k;u_{1})\,_{-h}Y_{j,m}(k^{\prime};u_{1})\,\mathcal{A}^{*\{b\}}(p_{ \rm a}\!+\!\frac{k+k^{\prime}}{2},s_{2}|p_{\rm a}\!+\!\frac{k^{\prime}-k}{2};k,h )\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\mathcal{A}_{ \{b\}}(p_{\rm a}\!+\!\frac{k+k^{\prime}}{2},s_{2}|p_{\rm a}\!+\!\frac{k-k^{ \prime}}{2};k^{\prime},h)\big{|}_{p_{\rm a}=M_{\rm a}u_{1}},\end{split} \tag{3.36}\]
where we have also taken the integral over \(M_{2}^{2}\) using \(\hat{\delta}(M_{1}^{2}+2M_{\rm a}\omega+k^{\prime}\!\cdot k-M_{2}^{2})\).
Even though we have simplified the probability expression considerably, the integrals over \(k^{\mu}\) and \(k^{\prime\mu}\) are still intertwined, in particular because the spectral density and \(M_{\rm a}\) both depend on \(k\cdot k^{\prime}\). Note, however, that the two massless momenta are constrained to have the energy projection \(\omega_{\rm cl}\), so \(|k\cdot k^{\prime}|\leq 2\omega_{\rm cl}^{2}\), as most easily seen in the rest frame of \(u_{1}^{\mu}\). The basic classical-limit assumption \(\omega_{\rm cl}\ll M_{1}\) then implies
\[|k^{\mu}|,|k^{\prime\mu}|\ \ll\ M_{1}\qquad\Rightarrow\qquad|k\cdot k^{\prime}| \ \ll\ M_{1}u_{1}\!\cdot k=M_{1}u_{1}\!\cdot k^{\prime}=M_{1}\omega_{\rm cl}. \tag{3.37}\]
Therefore, we may define the classical limit of the above probability as
\[\begin{split} P^{\rm LO}_{\rm inc,\,cl}=\frac{8\pi^{3}}{M_{1} \omega_{\rm cl}}\sum_{s_{2}}\rho_{s_{2}}(M_{1}^{2})&\!\int_{k,k ^{\prime}}\!\hat{\delta}(k\!\cdot\!u_{1}-\omega_{\rm cl})\,_{-h}Y^{*}_{j,m}(k ;u_{1})\,{\cal A}^{*\{b\}}(p_{2},s_{2}|p_{1};k,h)\\ &\times\hat{\delta}(k^{\prime}\!\cdot\!u_{1}-\omega_{\rm cl})\,_{ -h}Y_{j,m}(k^{\prime};u_{1})\,{\cal A}_{\{b\}}(p_{2},s_{2}|p^{\prime}_{1};k^{ \prime},h),\end{split} \tag{3.38}\]
where for brevity we have now used the momenta
\[p_{1}=M_{1}u_{1}+\tfrac{k^{\prime}\!-\!k}{2},\qquad p^{\prime}_{1}=M_{1}u_{1} +\tfrac{k\!-\!k^{\prime}}{2},\qquad p_{2}=M_{1}u_{1}+\tfrac{k\!+\!k^{\prime}}{ 2}=:M_{2}u_{2} \tag{3.39}\]
not as independent integration variables but to denote their classical values. Note that in the expression above, we have already assumed that the outgoing states are described by a sufficiently smooth spectral-density function, which makes sense because our EFT is meant to describe absorption of classical waves of arbitrary frequency (provided it is small). Therefore, \(\rho_{s_{2}}\) can be expanded in \(\omega_{\rm cl}/M_{1}\), for which \(2M_{1}\omega_{\rm cl}\) and \(k^{\prime}\!\cdot k\) provide linear and quadratic terms, respectively, and both may be dropped, leaving only the leading term \(\rho_{s_{2}}(M_{1}^{2})\) in the classical limit.
Let us now deal with the momentum dependence of the amplitudes, which, as we have noticed in eq. (3.21), are proportional to the covariant spin-weighted spherical harmonics \({}_{h}\tilde{Y}_{s_{2},m^{\prime}}(k;u_{2})\), while their prefactors depend on the dimensionless ratio
\[w:=\frac{2p_{1}\!\cdot k}{M_{1}^{2}}\ \simeq\ \frac{2\omega_{\rm cl}}{M_{1}}\ \simeq\ \frac{2p^{\prime}_{1}\!\cdot k^{\prime}}{M_{1}^{2}}=:w^{\prime}. \tag{3.40}\]
Moreover, just as we did in section 3.3, we may boost the time direction \(u_{2}^{\mu}\) of either harmonic to our preferred \(u_{1}^{\mu}\), with their difference now being equal to \((k+k^{\prime})^{\mu}/2\), but the result still being6\({}_{h}\tilde{Y}_{s_{2},m^{\prime}}(k;u_{2})\simeq{}_{h}\tilde{Y}_{s_{2},m^{ \prime}}(k;u_{1})\). Therefore, the squared
amplitude is
\[\begin{split}\mathcal{A}^{*\{b\}}&(p_{2},s_{2}|p_{1};k,h)\,\mathcal{A}_{\{b\}}(p_{2},s_{2}|p_{1}^{\prime};k^{\prime},h)\\ &\simeq M_{1}^{2}\,|g_{0,0,s_{2}}^{|h|}(w)|^{2}\,w^{2s_{2}}\!\sum_{m^{\prime}=-s_{2}}^{s_{2}}\frac{(2s_{2})!}{(s_{2}+m^{\prime})!\,(s_{2}-m^{\prime})!}\,{}_{h}\tilde{Y}_{s_{2},m^{\prime}}^{*}(k;u_{1})\;{}_{h}\tilde{Y}_{s_{2},m^{\prime}}(k^{\prime};u_{1}),\end{split} \tag{3.41}\]
and recently generalized to arbitrary \(j\) in [104; 105; 106]. However, the dynamics of non-spinning BHs under small perturbations date back to Regge and Wheeler [113], who proved linear stability of Schwarzschild BHs. From the point of view of the EFT amplitudes, in which the BH is treated as a particle, the GR results serve as the microscopic computation, to which the effective couplings should be matched.
### Classical absorption cross-section
In the general case of a wave of spin \(|h|\) scattering off a spinning BH, the transmission and scattering coefficients are usually obtained by solving the Teukolsky equation [96; 97; 98]. In this work, we focus on the simpler case of non-spinning BHs. Let the Schwarzschild radius be \(r_{\rm S}:=2GM_{1}\) and \(\omega\) the frequency of the classical spin-\(|h|\) wave, which obey \(r_{\rm S}\omega\ll 1\). Then the absorption cross-section is given by [106]8
Footnote 8: We have dropped the prefactor \((2j+1)\) from the expressions in the literature, which comes from summing over \(m=-j,\ldots,j\).
\[\sigma^{\rm Schw}_{\rm abs}(\omega,j,m,h)=\frac{(-1)^{h}2\pi}{\omega^{2}} \frac{(j+h)!(j-h)!}{(2j)!(2j+1)!}(2r_{\rm S}\omega)^{2j+1}{\rm Im}F^{\rm Schw} _{-hjh}(\omega). \tag{4.1}\]
Here \(F^{\rm Schw}_{hjm}\) is the harmonic near-zone response function
\[F^{\rm Schw}_{hjm}(\omega)=i(-1)^{h}\,r_{\rm S}\omega\,\frac{(j+h)!(j-h)!}{(2 j)!(2j+1)!}\prod_{l=1}^{j}\big{[}l^{2}+(2r_{\rm S}\omega)^{2}\big{]}, \tag{4.2}\]
which does not depend on the quantum number \(m\), since we wrote it for a non-spinning black hole. We have followed the GR literature [104; 106] in writing the cross-section (4.1) using the response function so as to point out that it is the latter that contains the expansion in \(\omega\), whereas the outside power of \(\omega\) is fixed to be \(2j-1\), combined from the \(\pi/\omega^{2}\) dimensionful prefactor and \(2j+1\) powers of a dimensionless frequency combination. This factorization mimics the structure of the corresponding EFT cross-section (3.46), that we are going to match to next.
Our focus, however, is on the leading powers in \(\omega\) for each \(j\), which amounts to replacing the complicated product in the response function (4.2) by \((j!)^{2}\). We obtain
\[\sigma^{\rm Schw}_{\rm abs,\,LO}(\omega,j,m,h)=4\pi r_{\rm S}^{2}\left[\frac {j!(j+h)!(j-h)!}{(2j)!(2j+1)!}\right]^{2}(2r_{\rm S}\omega)^{2j}. \tag{4.3}\]
where of course \(|m|,|h|\leq j\), and otherwise it vanishes.
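The truncation can be cross-checked symbolically. The sketch below (illustrative, using sympy; not part of the original text) evaluates the cross-section (4.1) with the response function (4.2) for the lowest gravitational partial wave \(j=|h|=2\) and confirms that its leading power in \(\omega\) reproduces eq. (4.3); both reduce to \(\sigma=\frac{\pi}{900}\,r_{\rm S}^{2}(2r_{\rm S}\omega)^{4}\) in this case.

```python
# Symbolic cross-check (a sketch, not from the paper): the leading order in
# omega of eq. (4.1) with the response function (4.2) reproduces eq. (4.3).
import sympy as sp

j, h = 2, 2                                   # lowest gravitational partial wave
rS, w = sp.symbols('r_S omega', positive=True)
fac = sp.factorial

# Im F^Schw_{-h j h} from eq. (4.2); for integer h it depends on h only
# through (j+h)!(j-h)!, so the h -> -h relabeling is immaterial here.
poly = sp.Integer(1)
for l in range(1, j + 1):
    poly *= l**2 + (2 * rS * w)**2
ImF = (-1)**h * rS * w * fac(j + h) * fac(j - h) / (fac(2*j) * fac(2*j + 1)) * poly

# absorption cross-section (4.1) and its leading-order truncation (4.3)
sigma    = (-1)**h * 2*sp.pi / w**2 * fac(j + h) * fac(j - h) / (fac(2*j) * fac(2*j + 1)) \
           * (2 * rS * w)**(2*j + 1) * ImF
sigma_LO = 4*sp.pi * rS**2 * (fac(j) * fac(j + h) * fac(j - h) / (fac(2*j) * fac(2*j + 1)))**2 \
           * (2 * rS * w)**(2*j)

leading = sp.expand(sigma).coeff(w, 2*j) * w**(2*j)
print(sp.simplify(leading - sigma_LO))        # -> 0
```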
### Scales and effective couplings
In order to properly compare the classical and EFT results, it is helpful to restore \(\hbar\) (while leaving \(c=1\) throughout this paper). This introduces the distinction between frequencies/lengths and masses:
\[[\hbar]=L\times M,\quad[M_{1}]=[\omega_{\rm cl}]=M,\quad[\omega]=L^{-1}, \quad[r_{\rm S}]=L,\quad[G]=L\times M^{-1}, \tag{4.4}\]
where we have insisted on the new meaning of \(\omega:=\omega_{\rm cl}/\hbar\) as the wave frequency. We should also multiply the right-hand side of the cross-section given in (3.46) by \(\hbar^{2}\), so as to switch its dimensionality from \(M^{-2}\) to \(L^{2}\), which gives
\[\sigma^{\rm LO}_{\rm inc,\,cl}(\omega,j,m,h)=\frac{\pi}{4\omega^{2}}\frac{(j+h)!(j-h)!}{(2j+1)!}M_{1}^{2}\rho_{j}(M_{1}^{2})\,|g^{|h|}_{0,0,j}(\omega)|^{2} \!\left(\!\frac{2\hbar\omega}{M_{1}}\!\right)^{2j+1}, \tag{4.5}\]
Here we have left the effective couplings \(g^{|h|}_{0,0,j}(\omega)\) fully dimensionless. Note, however, that in view of the presence of multiple scales, they are now allowed to depend on \(\omega\) through more than just the \(\hbar\omega/M_{1}\) ratio.
Now let us discuss the two basic assumptions underlying the EFT- and GR-based computations, i.e. \(\hbar\omega\ll M_{1}\) and \(r_{\rm S}\omega\ll 1\). The point is that the latter is much stronger than the former, as the Schwarzschild radius must of course be assumed to be many orders of magnitude larger than the Compton wavelength of the black hole:
\[\omega\ \ll\ \frac{1}{r_{\rm S}}\ \ll\ \frac{1}{\lambda_{\rm C}}:=\frac{M_{1} }{2\pi\hbar}, \tag{4.6}\]
otherwise we would be in the realm of quantum gravity and not GR. It is then clear that, in the context of comparing the classical and amplitude-based results, which both constitute frequency expansions, we should retain only the leading order in \(\hbar\omega/M_{1}\), but classical frequency dependence may still be present in the form of \(r_{\rm S}\omega\).
Therefore, matching the leading-order cross-sections (4.3) and (4.5) directly, we obtain
\[M_{1}^{2}\rho_{j}(M_{1}^{2})\,|g^{|h|}_{0,0,j}(\omega)|^{2}=\frac{8[j!]^{2}(j+h )!(j-h)!}{[(2j)!]^{2}(2j+1)!}\!\left(\!\frac{M_{1}r_{\rm S}}{\hbar}\!\right)^{ 2j+1}\!r_{\rm S}\omega. \tag{4.7}\]
It is perhaps more aesthetically pleasing to rephrase this relationship in terms of the classical response function:
\[M_{1}^{2}\rho_{j}(M_{1}^{2})\,|g^{|h|}_{0,0,j}(\omega)|^{2}=\frac{8(-1)^{h}}{ (2j)!}\!\left(\!\frac{M_{1}r_{\rm S}}{\hbar}\!\right)^{2j+1}\!\mathrm{Im}F^{ \rm Schw}_{-hjh,\,{\rm LO}}(\omega). \tag{4.8}\]
In other words, we have related the \(j\)-th effective absorptive coupling squared to the imaginary part of the response function, resembling a dispersion relation. It might seem awkward to keep \(\hbar\) in the now classically meaningful cross-section expression (4.5), as well as eqs. (4.7) and (4.8). However, the effective couplings are a priori arbitrary, and we are free to make convenient modelling assumptions about them, so nothing prevents us from absorbing the Planck constants into them as9
Footnote 9: Recalling the form of the three-point amplitude (2.15), we see that the effective-coupling rescaling (4.9) amounts to replacing massless momenta \(k^{\mu}\) with wavevectors \(\bar{k}^{\mu}:=k^{\mu}/\hbar\), which is commonplace in the KMOC formalism [48], plus an additional overall \(\hbar^{-1/2}\).
\[\bar{g}^{|h|}_{0,0,s_{2}}(\omega):=\hbar^{s_{2}+1/2}g^{|h|}_{0,0,s_{2}}(\omega). \tag{4.9}\]
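The matching itself can be verified the same way. The following sketch (again illustrative sympy code, not part of the original text; the symbol `X` stands for the combination \(M_{1}^{2}\rho_{j}(M_{1}^{2})\,|g^{|h|}_{0,0,j}(\omega)|^{2}\)) equates (4.3) with (4.5) and recovers the right-hand side of (4.7), whose numerical prefactor is \(1/90\) for \(j=|h|=2\):

```python
# Sketch (sympy): solve sigma_EFT (4.5) = sigma_Schw (4.3) for the coupling
# combination M1^2 rho_j |g|^2 and compare with the right-hand side of (4.7).
import sympy as sp

j, h = 2, 2
M1, rS, w, hbar, X = sp.symbols('M_1 r_S omega hbar X', positive=True)
fac = sp.factorial

sigma_Schw = 4*sp.pi*rS**2 * (fac(j)*fac(j+h)*fac(j-h) / (fac(2*j)*fac(2*j+1)))**2 \
             * (2*rS*w)**(2*j)                                    # eq. (4.3)
sigma_EFT  = sp.pi/(4*w**2) * fac(j+h)*fac(j-h)/fac(2*j+1) * X \
             * (2*hbar*w/M1)**(2*j+1)                             # eq. (4.5), X = M1^2 rho_j |g|^2

X_solved = sp.solve(sp.Eq(sigma_EFT, sigma_Schw), X)[0]
rhs_47   = 8*fac(j)**2*fac(j+h)*fac(j-h) / (fac(2*j)**2*fac(2*j+1)) \
           * (M1*rS/hbar)**(2*j+1) * rS*w                          # eq. (4.7)

print(sp.simplify(X_solved - rhs_47))                              # -> 0
print(8*fac(j)**2*fac(j+h)*fac(j-h) / (fac(2*j)**2*fac(2*j+1)))    # -> 1/90
```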
Comparing the macroscopic and microscopic formulae (4.1) and (4.5), there are a number of things to observe.
* Both cross-sections are consistent in that neither depends on the magnetic quantum number \(m\) of the spherical wave.
* The EFT cross-section (4.5) reproduces the static limit \(\sigma^{\text{LO}}_{\text{inc,cl}}(\omega\!=\!0,j,m,h)=0\) for electromagnetism and gravity (\(|h|=1\) and \(2\), respectively) because of the locality assumption (2.21) that the Wilson coefficients have no negative powers of \(\omega\). This can be considered as an EFT prediction, i.e. it holds prior to the matching of the three-point couplings.
* As previously mentioned, the growth of the superficial leading power in \(\omega\) with \(j\) is the same in both cross-sections, where by superficial we mean excluding the \(\omega\) dependence in the response function and the three-point couplings. In other words, the matching (4.7) contains that same leading power of \(\omega\) for any \(j\), and the cleaner matching (4.8) between the response functions and the three-point couplings does not involve \(\omega\) at all.
* In the EFT cross-section (4.5), every three-point coupling \(|g^{|h|}_{0,0,s_{2}}(\omega)|^{2}\) comes accompanied by the dimensionless combination \(M_{1}^{2}\rho_{s_{2}}(M_{1}^{2})\) involving the spectral density. Its appearance is very sensible from the QFT point of view, as the probability that a massive particle absorbs a lower-energy massless boson is necessarily proportional to the number of possible resulting states with nearly identical mass. However, since it always accompanies the couplings, one may regard the complete expression \(M_{1}\sqrt{\rho_{s_{2}}(M_{1}^{2})}\,g^{|h|}_{0,0,s_{2}}(\omega)\) as a kind of effective coupling. Alternatively, if one's focus is on modeling classical effects that are guaranteed to be insensitive to the difference between spectral densities for different masses and spins, one could consider disregarding the normalization constraint (3.30) altogether and make a modeling assumption \(\rho_{s_{2}}(M_{1}^{2})=1/M_{1}^{2}\).
* Perhaps most importantly, we observe that the matching (4.7) means that \[g^{|h|}_{0,0,s_{2}}(\omega)=\mathcal{O}(G^{s_{2}+1}),\] (4.10) in the post-Minkowskian expansion, since \(r_{\text{S}}=2GM_{1}\). In other words, the amplitude for the scalar particle which models a Schwarzschild black hole to absorb a spherical wave with total angular momentum \(j\) is a \((j\!+\!1)\)-PM object.
* For gravity (\(|h|=2\)), the PM behavior (4.10) means that the Wilson coefficient starts at \(s_{2}=2\) and scales as \(\mathcal{O}(G^{3})\), whereas the resulting leading absorption cross-section is at 6PM for a \(j=2\) spherical wave, and higher harmonics are suppressed in the PM expansion.
In view of the classical cross-section (4.1) being a polynomial in \(\omega\) spanned by \(\{\omega^{2j},\ldots,\omega^{4j}\}\), one might hope that higher orders in \(r_{\text{S}}\omega\) could be retained, as long as they are captured by the response function (4.2) in a perturbation scheme [106] that is consistent classically. Unfortunately, this is not the case in the present three-point
setup, because going to higher orders requires a more subtle matching. Indeed, the higher orders in \(r_{\rm S}\omega\) in the EFT cross-section (4.5) are subject to interference from higher-multiplicity amplitudes. More specifically, the next order in the cross-section is \(\mathcal{O}(G^{2j+4})\), for which the EFT treatment must, for instance, include amplitudes with two additional conservative couplings to the graviton, each \(\mathcal{O}(\!\sqrt{G})\). Furthermore, double-graviton absorption or even the mass-changing contact-term contributions to the Compton amplitude might contribute to this matching. These matters will be further discussed in sections 5.4 and 5.5.
Improving this result to spinning objects is another story. In the non-spinning case, the coupling constant \(G\) only enters through the Schwarzschild radius \(r_{\rm S}\), whereas in the Kerr case the dimensionless spin ratio \(a_{*}=a/GM\) also contains a negative power of \(G\). This shows that for Schwarzschild black holes, the first contribution to such amplitudes is at 6PM (as can be reproduced by off-shell EFT methods [88; 89]), while it comes at a lower order for Kerr black holes due to the negative power of \(G\) in \(a_{*}\). For instance, the authors of [92] consider four-point contact interactions where such effects come at spin-5 in \(\mathcal{O}(G)\) amplitudes. Nevertheless, the general formalism presented in this paper does allow one to go to higher orders in spin, and we leave this for future work.
In this purely on-shell approach, we have modelled the absorption effects by allowing a mass-changing amplitude from \(s_{1}=0\) to a spinning degree of freedom, with the leading order corresponding to an \(s_{2}=2\) particle. We have observed some similarities with the worldline EFT approach [88; 89; 114], where the point-particle action coupled to the Weyl tensor is not enough to model absorption. One then has to introduce electric and magnetic composite operators \(Q^{E}_{ab}\) and \(Q^{B}_{ab}\) representing new degrees of freedom, which carry two indices and couple to electric and magnetic components of the Weyl tensor \(E^{ab}\) and \(B^{ab}\), respectively. While in our approach higher orders require considering \(s_{2}\geq 2\) particles and higher-multiplicity amplitudes, on the worldline higher-derivative operators acting on the Weyl tensor and multi-index composite operators are needed to improve the calculation beyond \(\omega^{4}\), which is explored e.g. in [92].
## 5 Coherent-state cross-section
A proper description of the interaction between a gravitational wave and a compact object using scattering amplitudes requires the use of a coherent-state formalism to model the incoming and outgoing wave [51; 91; 115]. In section 3, we have circumvented it by using a single-graviton state with a wavefunction peaked at the classical frequency \(\omega_{\rm cl}\). The point of this section is two-fold:
* substantiate the leading-order calculation via the coherent-state framework,
* explain how higher-order calculations may be done in a similar fashion.
For both purposes, we use a probability-based formalism instead of an observable-based one [48]. We start with a quantum description and make gradual assumptions relevant to the classical limit.
### Elastic-inelastic separation
The initial state for our absorption process consists of a heavy non-spinning particle \(|\psi_{1}\rangle\) and a wave of helicity \(h\) modeled by a massless coherent state \(|\gamma^{h}\rangle\).
\[|\text{in}\rangle:=|\psi_{1};\gamma^{h}\rangle=\int_{p_{1}}\psi_{\xi}(p_{1})e^{ ib\cdot p_{1}/\hbar}|p_{1};\gamma^{h}\rangle, \tag{5.1}\]
where the relativistic momentum-space wavefunction \(\psi_{\xi}(p_{1})\) peaks at the classical momentum \(p_{1,\text{cl}}^{\mu}=M_{1}u_{1}^{\mu}\), as discussed in section 3. We have also allowed for an impact parameter \(b^{\mu}\). For the final state, we should distinguish two cases:
* a different coherent state \(|\tilde{\gamma}^{\tilde{h}}\rangle\), but the heavy particle's mass is preserved;
* a different coherent state \(|\tilde{\gamma}^{\tilde{h}}\rangle\) and an unspecified particle \(|X\rangle\) with \(M_{2}\neq M_{1}\).
The two cases are depicted in figure 2, and we need to integrate over the possible final states. Despite these assumptions, the formalism easily allows for initial spinning states, and we delay the specification of the massless coherent-state type (plane-wave or partial-wave) to later on. It is also worth commenting that even though case (c) has the same mass as the initial state, intermediate mass transitions are allowed (e.g. Compton scattering with different masses in the factorization channels).
The need to separate these two cases on the quantum side comes from the discontinuous nature of basic scattering-amplitude building blocks at \(M_{2}=M_{1}\), as discussed in section 2, and on the classical side from the usual separation between conservative and non-conservative effects. The total probability will then include the following mass-preserving and mass-changing probabilities
\[P_{\gamma\rightarrow\tilde{\gamma}}=P_{\gamma\rightarrow\tilde{\gamma}}^{( \text{c})}+P_{\gamma\rightarrow\tilde{\gamma}}^{(\text{nc})}. \tag{5.2}\]
Figure 2: Gravitational diagrams in a non-spinning black-hole-wave interaction
For the first one, we may write
\[\begin{split} P^{\rm(c)}_{\gamma\to\tilde{\gamma}}&=\sum_ {2s_{2}=0}^{\infty}\sum_{b_{1},\ldots,b_{2s_{2}}=1,2}\,\int_{p_{2}}\langle{\rm in }|S^{\dagger}|p_{2},s_{2},\{b\};\tilde{\gamma}^{\tilde{h}}\rangle\langle p_{2},s_{2},\{b\};\tilde{\gamma}^{\tilde{h}}|S|{\rm in}\rangle\\ &=\int_{p_{2}}\!\int\!\frac{d^{4}\beta}{\pi^{2}}\langle{\rm in}|S^ {\dagger}|p_{2},\beta;\tilde{\gamma}^{\tilde{h}}\rangle\langle p_{2},\beta; \tilde{\gamma}^{\tilde{h}}|S|{\rm in}\rangle.\end{split} \tag{5.3}\]
where in the second line we have used the coherent-spin states mentioned around eq. (2.4). They rely on Schwinger's construction [116] for massive spin states, which are obtained from the zero-spin state by acting with two kinds of creation operators distinguished by an SU(2) index, see [52] for more details. As long as the integration over the SU(2) spinors \(\beta_{b}\) appears in the final-state summation, one may regard and use it as a shorthand for the bulkier spin sum.
In the second case, we are interested in the probability of all different configurations \(X\) involving a heavy particle of mass \(M_{2}\neq M_{1}\):
\[P^{\rm(nc)}_{\gamma\to\tilde{\gamma}}=\sum_{X\ni M_{2}\neq M_{1}}\!\!|\langle X ;\tilde{\gamma}^{\tilde{h}}|S|{\rm in}\rangle|^{2}=\sum_{X\ni M_{2}\neq M_{1} }\langle{\rm in}|S^{\dagger}|X;\tilde{\gamma}^{\tilde{h}}\rangle\langle X; \tilde{\gamma}^{\tilde{h}}|S|{\rm in}\rangle. \tag{5.4}\]
The crucial point now is to determine what part of the Hilbert space contributes to the problem at hand. We are going to assume that all relevant configurations contain only one heavy particle; in other words, in the classical limit no new black holes are created in this \(S\)-matrix evolution. Let us also exclude decay of the heavy particle, i.e. black-hole evaporation, from current consideration. In other words, we assume that the spectral density of the heavy-particle states has a non-trivial continuous part only for \(M_{2}>M_{1}\) (alongside the delta-function responsible for case (c)):10
Footnote 10: In the classical limit, the SU(2) spinors \(\beta_{b}\) determine the resulting classical angular momentum of the compact object [52], so one could trade the \(s_{2}\)-dependence of the spectral density for the perhaps more appropriate dependence on \(\hbar\|\beta\|^{2}=2\sqrt{-S_{\rm cl}^{2}}\) and use the coherent-spin final-state integration, as shown in eq. (5.3). Modifying the subsequent formulae in this way is straightforward.
\[1^{\rm(nc)}=\sum_{X_{\rm rad}}\sum_{s_{2}}\sum_{\{b\}}\int_{M_{1}^{2}}^{\infty }\!\!dM_{2}^{2}\rho_{s_{2}}(M_{2}^{2})\int_{p_{2}}|p_{2},s_{2},\{b\};X_{\rm rad }\rangle\langle p_{2},s_{2},\{b\};X_{\rm rad}|. \tag{5.5}\]
The above "completeness" relation should normally also include a sum over possible emitted radiation
\[|X_{\rm rad}\rangle\langle X_{\rm rad}|=\sum_{n=0}^{\infty}\sum_{h_{1},\cdots, h_{n}}\int_{k_{1},\cdots,k_{n}}|k_{1}^{h_{1}};\cdots;k_{n}^{h_{n}}\rangle \langle k_{1}^{h_{1}};\cdots;k_{n}^{h_{n}}|. \tag{5.6}\]
However, we choose to make another assumption: all the outgoing radiation belongs coherently to the wave \(\tilde{\gamma}\), and there are no extra scattered photons/gravitons. In other words, the final state is given by \(|p_{2},\beta;\tilde{\gamma}^{\tilde{h}}\rangle\) and not \(|p_{2},\beta;\tilde{\gamma}^{\tilde{h}},k_{1}^{h_{1}};k_{2}^{h_{2}};\cdots\rangle\),
which was also assumed for the mass-preserving case (5.3). This assumption relies on the expectation that radiated quanta are not classically significant unless they belong to a classical wave modeled by a coherent state, see e.g. [53]. Therefore, remembering the meaning of the incoming state, we can write the absorption probability as
\[P^{\rm(nc)}_{\gamma\rightarrow\tilde{\gamma}}= \int_{p_{1},p^{\prime}_{1}}\!\!\psi^{*}_{\xi}(p_{1})\psi_{\xi}(p^{ \prime}_{1})e^{ib\cdot(p^{\prime}_{1}-p_{1})} \tag{5.7}\] \[\times\sum_{s_{2}}\int_{M_{1}^{2}}^{\infty}\!\!dM_{2}^{2}\rho_{s_ {2}}(M_{2}^{2})\int_{p_{2}}\!\sum_{\{b\}}\langle p_{1};\gamma^{h}|S^{\dagger}| p_{2},s_{2},\{b\};\tilde{\gamma}^{\tilde{h}}\rangle\langle p_{2},s_{2},\{b\}; \tilde{\gamma}^{\tilde{h}}|S|p^{\prime}_{1};\gamma^{h}\rangle.\]
The building block \(\langle p_{2},s_{2},\{b\};\tilde{\gamma}^{\tilde{h}}|S|p_{1};\gamma^{h}\rangle\) involves a transition of a scalar heavy state into a possibly spinning one along with the incoming and outgoing massless coherent states. Since the latter states contain an infinite number of photons/gravitons, the matrix elements of \(S=1+iT\) should be expanded in perturbation theory.
### \(T\)-matrix perturbative expansion
The massless coherent states (plane or spherical) are sensitive to all orders in perturbation theory, and their matrix elements are non-trivial [51]. However, we can expand operators in terms of annihilation and creation operators, plane or spherical. We are going to perform the \(T\)-matrix expansion in the following way:11
Footnote 11: We thank Donal O’Connell for valuable discussions on the expansion (5.8).
\[\begin{split}T=\sum_{m,n=0}^{\infty}\left(T^{\rm(c)}_{(m|n)}+T^{\rm(nc)}_{(m|n)}\right)&=T^{\rm(nc)}_{(0|1)}+T^{\rm(nc)}_{(1|0)}\\ &+T^{\rm(c)}_{(1|1)}+T^{\rm(c)}_{(0|2)}+T^{\rm(c)}_{(2|0)}+T^{\rm(nc)}_{(1|1)}+T^{\rm(nc)}_{(0|2)}+T^{\rm(nc)}_{(2|0)}+\cdots,\end{split}\tag{5.8}\]
where the superscripts (c) and (nc) represent mass-preserving and mass-changing elements, respectively, while the subscript \((m|n)\) corresponds to \(n\) incoming and \(m\) outgoing photons/gravitons, and each \(T\)-matrix element will generate an \((m+n+2)\)-point amplitude. In the first line of eq. (5.8), we have isolated the leading non-conservative effects due to absorption, \(T^{\rm(nc)}_{(0|1)}\), and emission, \(T^{\rm(nc)}_{(1|0)}\). Both terms are mass-changing three-point amplitudes and non-zero even on real kinematics, while the mass-preserving counterparts vanish, \(T^{\rm(c)}_{(1|0)}=T^{\rm(c)}_{(0|1)}=0\).12 In this paper, we have been studying the leading-order absorption term \(T^{\rm(nc)}_{(0|1)}\), but the above expansion also allows one to systematically address higher orders.
Footnote 12: See [117] for a discussion of large gauge effects, where such amplitudes contribute.
In the second line, we have four-point terms that lead to the usual conservative Compton amplitude \(T^{\rm(c)}_{(1|1)}\) and its non-conservative counterpart \(T^{\rm(nc)}_{(1|1)}\). The former has been studied extensively in recent years, while the latter has remained unexplored to the best of our knowledge. Furthermore, we have double-emission \((2|0)\) and double-absorption
(\(0|2\)) both on the conservative and non-conservative sides. Together with the non-conservative Compton, double-absorption would give the naive next-to-leading order (NLO) terms to our leading-order analysis.
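To make the \((m|n)\) labels concrete, the \((m+n+2)\)-point rule quoted above assigns, for the terms displayed in (5.8),

\[T^{\rm(nc)}_{(0|1)},\;T^{\rm(nc)}_{(1|0)}\ \longrightarrow\ \text{3-point amplitudes},\qquad T^{\rm(c/nc)}_{(1|1)},\;T^{\rm(c/nc)}_{(0|2)},\;T^{\rm(c/nc)}_{(2|0)}\ \longrightarrow\ \text{4-point amplitudes}.\]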
The \(T\)-matrix elements can be written in terms of scattering amplitudes:
\[\begin{split}T_{(m|n)}=\sum_{2s_{1},2s_{2}=0}^{\infty}\sum_{\{a\},\{b\}}\int_{p_{1},p_{2}}&\sum_{\begin{subarray}{c}h_{1},\ldots,h_{n}\\ \tilde{h}_{1},\ldots,\tilde{h}_{m}\end{subarray}}\int_{k_{1},\ldots,k_{n}}\int_{\tilde{k}_{1},\ldots,\tilde{k}_{m}}\hat{\delta}^{4}\Big(p_{1}+\textstyle\sum_{i=1}^{n}k_{i}-p_{2}-\sum_{l=1}^{m}\tilde{k}_{l}\Big)\\ &\times\mathcal{A}_{\{b\}}{}^{\{a\}}(p_{2},s_{2};\tilde{k}_{1},\tilde{h}_{1};\ldots;\tilde{k}_{m},\tilde{h}_{m}|p_{1},s_{1};k_{1},h_{1};\ldots;k_{n},h_{n})\\ &\times|p_{2},s_{2},\{b\}\rangle\langle p_{1},s_{1},\{a\}|\otimes a^{\dagger}_{\tilde{h}_{1}}(\tilde{k}_{1})\cdots a^{\dagger}_{\tilde{h}_{m}}(\tilde{k}_{m})\,a_{h_{1}}(k_{1})\cdots a_{h_{n}}(k_{n}).\end{split}\tag{5.9}\]

### Partial-wave coherent states

To evaluate such operators between the states of interest, we model the incoming spherical wave of helicity \(h\) as a coherent state built from the partial-wave creation operators,

\[|\gamma^{h}\rangle=\mathcal{N}_{\gamma}\exp\biggl[\sum_{j,m}\int_{0}^{\infty}\!\hat{d}\omega\,\gamma_{j,m}(\omega)\,a^{\dagger}_{j,m,h}(\omega)\biggr]|0\rangle.\]
Setting \(\langle\gamma^{h}|\gamma^{h}\rangle=1\) gives the normalization prefactor as
\[{\cal N}_{\gamma}=\exp\biggl{[}-\frac{1}{2}\sum_{j,m}\!\int_{0}^{\infty}\!\!\hat{ d}\omega\,|\gamma_{j,m}(\omega)|^{2}\biggr{]}. \tag{111}\]
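For completeness, this prefactor follows from a one-line computation; the step below assumes the partial-wave oscillators obey \([a_{j,m,h}(\omega),a^{\dagger}_{j',m',h'}(\omega')]=\hat{\delta}(\omega-\omega')\,\delta_{j'}^{j}\delta_{m'}^{m}\delta_{h'}^{h}\), which is our reading of the conventions rather than a relation spelled out here:

\[\langle\gamma^{h}|\gamma^{h}\rangle=|\mathcal{N}_{\gamma}|^{2}\exp\biggl[\sum_{j,m}\int_{0}^{\infty}\!\hat{d}\omega\,|\gamma_{j,m}(\omega)|^{2}\biggr]\overset{!}{=}1,\]

which is solved by the real prefactor quoted above.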
The waveshape \(\gamma_{j,m}(\omega)\) of these coherent states describes the contribution of each \((j,m)\) component to the total wave, and we expect that in the classical limit \(\gamma_{j,m}(\omega)\) is peaked at the frequency \(\omega_{\rm cl}\). We can simplify the problem further by studying the incoming wave \(|\gamma_{j,m}^{h}\rangle\) with just a particular \((j,m)\) component, in which case the spherical waveshape reduces to \(\gamma_{j^{\prime},m^{\prime}}(\omega)=\delta_{j^{\prime}}^{j}\delta_{m^{\prime}}^{m}\gamma(\omega)\), such that
\[a_{j^{\prime},m^{\prime},h^{\prime}}(\omega)|\gamma_{j,m}^{h}\rangle=\gamma(\omega)\,\delta_{j^{\prime}}^{j}\delta_{m^{\prime}}^{m}\delta_{h^{\prime}}^{h}\,|\gamma_{j,m}^{h}\rangle. \tag{112}\]
Coming back to the initial state \(|{\rm in}\rangle\) given in eq. (109), which describes a scalar black hole and a partial wave as a wavepacket superposition of \(|p_{1};\gamma_{j,m}^{h}\rangle\). The \(S\)-matrix determines the probability amplitude of its evolution into a final massive state \(X\) and another partial wave \(|\tilde{\gamma}^{\tilde{h}}\rangle\) with perhaps more than one \((\tilde{j},\tilde{m})\) components. Let us write the leading absorption term \(T_{(0|1)}^{({\rm nc})}\) to such a process, by switching the states on the left-hand side of eq. (100) from plane to spherical waves:
\[\langle p_{2},s_{2},\{b\};\tilde{\gamma}^{\tilde{h}}|S|p_{1};\gamma_{j,m}^{h} \rangle\simeq i\!\int_{k}\!\!\hat{\delta}^{4}(p_{1}+k-p_{2}){\cal A}_{\{b\}}(p _{2},s_{2}|p_{1};k,h^{\prime})\langle\tilde{\gamma}^{\tilde{h}}|a_{h^{\prime} }(k)|\gamma_{j,m}^{h}\rangle. \tag{113}\]
The main difference is that to evaluate the matrix element of a plane-wave annihilation operator between two spherical coherent states, we need to summon the decomposition of the plane-wave operator into partial waves:
\[a_{h}(k)=4\pi\sum_{j=|h|}^{\infty}\sum_{m=-j}^{j}\int_{0}^{\infty}\!\frac{\hat {d}\omega}{\sqrt{2\omega}}\hat{\delta}(k\cdot u_{1}-\omega)\,_{-h}Y_{j,m}(k; u_{1})a_{j,m,h}(\omega), \tag{114}\]
and hence
\[a_{h^{\prime}}(k)|\gamma_{j,m}^{h}\rangle=\frac{4\pi\delta_{h^{\prime}}^{h}} {\sqrt{2k\cdot u_{1}}}\gamma_{j,m}(k\!\cdot\!u_{1})\,_{-h}Y_{j,m}(k;u_{1})| \gamma_{j,m}^{h}\rangle. \tag{115}\]
Therefore, we compute the leading mass-changing matrix element as
\[\langle p_{2},s_{2},\{b\};\tilde{\gamma}^{\tilde{h}}|S|p_{1};\gamma_{j,m}^{h }\rangle\simeq 4\pi i\langle\tilde{\gamma}^{\tilde{h}}|\gamma_{j,m}^{h} \rangle\!\int_{0}^{\infty}\!\frac{\hat{d}\omega}{\sqrt{2\omega}}\gamma_{j,m}(\omega) \tag{116}\]
\[\times\!\int_{k}\!\!\hat{\delta}(k\cdot u_{1}-\omega)\,_{-h}Y_{j,m}(k;u_{1}) \hat{\delta}^{4}(p_{1}+k-p_{2}){\cal A}_{\{b\}}(p_{2},s_{2}|p_{1};k,h).\]
The leading contribution to the absorption probability (113) is then given by
\[P_{\gamma\to\tilde{\gamma}}^{({\rm nc})}\simeq 8\pi^{2}\big{|} \langle\tilde{\gamma}^{h}|\gamma_{j,m}^{h}\rangle\big{|}^{2}\sum_{s_{2}}\!\int _{M_{1}^{2}}^{\infty}\!\!dM_{2}^{2}\rho_{s_{2}}(M_{2}^{2})\!\int_{p_{1},p_{1} ^{\prime},k,k^{\prime},p_{2}}\!\psi_{\xi}^{*}(p_{1})\psi_{\xi}(p_{1}^{\prime} )e^{ib\cdot(p_{1}^{\prime}-p_{1})} \tag{117}\] \[\times\!\int_{0}^{\infty}\!\frac{\hat{d}\omega\hat{d}\omega^{ \prime}}{\sqrt{\omega\omega^{\prime}}}\gamma^{*}(\omega)\gamma(\omega^{\prime}) \hat{\delta}(k\cdot u_{1}\!-\!\omega)\hat{\delta}(k^{\prime}\cdot u_{1}\!-\! \omega^{\prime})\hat{\delta}^{4}(p_{1}+k-p_{2})\hat{\delta}^{4}(p_{1}^{ \prime}\!+k^{\prime}\!-p_{2})\] \[\times\,_{-h}Y_{j,m}^{*}(k;u_{1})\,_{-h}Y_{j,m}(k^{\prime};u_{1}) \,{\cal A}^{*\{b\}}(p_{2},s_{2}|p_{1};k,h)\,{\cal A}_{\{b\}}(p_{2},s_{2}|p_{1} ^{\prime};k^{\prime},h).\]
Note that apart from the overlap between the two spherical coherent states and the impact-parameter exponent, we have landed exactly on the single-quantum absorption cross-section given in eqs. (3.31) and (3.33) -- with the \((j,m)\) waveshape \(\gamma(\omega)\) as the single-particle energy wavefunction. In other words, we observe that the waveshape \(\gamma(\omega)\) acts as a one-dimensional wavefunction, which smears the energy spectrum but is peaked at the classical frequency \(\omega_{\text{cl}}\). This observation was also made in [91], where single quanta and coherent states gave the same results.
Regarding the seeming discrepancies between the leading-order cross-sections (3.31) and (5.19), for a spherical wave defined in the rest-frame of (the classical momentum of) the compact body and centered at it, the impact parameter should of course be set to zero. Moreover, eqs. (3.31) and (3.33) were written for an inclusive probability, let us rename it to \(P_{(0|1)}^{\text{(nc)}}:=P_{\text{inc}}^{\text{LO}}(\omega_{\text{cl}},j,m,h)\), whereas retaining the dependence on the outgoing waveshape in eq. (5.19) is actually an enhancement of the single-quantum formulae:
\[P_{\gamma\rightarrow\tilde{\gamma}}^{\text{(nc)}}=\big{|}\langle\tilde{ \gamma}^{\tilde{h}}|\gamma_{j,m}^{h}\rangle\big{|}^{2}P_{(0|1)}^{\text{(nc)}}+ \ldots, \tag{5.20}\]
where the dots denote the higher-orders to be briefly discussed below. In the limit where the outgoing classical wave changes very little, the above prefactor may furthermore disappear, \(\langle\tilde{\gamma}^{\tilde{h}}|\gamma_{j,m}^{h}\rangle\approx 1\).
### Higher-order diagrammatics
In this section, we use diagrams to help us understand all the effects relevant for BH-wave interactions. Having a diagrammatic realization of the expressions from the previous sections will guide us toward the NLO corrections. Moreover, this diagrammatic approach is general enough to be applicable at any order in perturbation theory, as well as to such processes as emitted radiation and superradiance.
Figure 3: \(T\)-matrix operator expansion
Let us take a brief moment to explain the diagrammatic expansion of the \(T\)-matrix in figure 3, which represents eq. (5.8). The operator nature of this diagram is represented by the "vertical line" after the wavy graviton line, and the double lines, which will "act" on a ket quantum state, e.g. the massless coherent state \(|\gamma^{h}\rangle\) or the black-hole state \(|p_{1}\rangle\). In this diagram, we then have
* \(n\) incoming graviton annihilation operators shown by wavy lines and labeled by \(\{k_{1},\cdots,k_{n}\}\);
* \(m\) outgoing graviton creation operators shown by wavy lines and labeled by \(\{\tilde{k}_{1},\cdots,\tilde{k}_{m}\}\);
* incoming and outgoing double lines, labeled by \(p_{1}\) and \(p_{2}\). The two lines of different thickness inside the double line represent the fact that this diagram contains both mass-preserving and mass-changing transitions.
* vertical lines at the end of graviton/BH lines represent the operator nature of these diagrams. For instance, the double-line part of the operator will act on \(|p_{1}\rangle\), while the wavy line will act on the coherent state \(|\gamma^{h}\rangle\).
* Evaluating this operator with outgoing states on the left and incoming states on the right will result in scattering amplitudes, waveshapes, and coherent-state overlap. Due to the operator-action convention, time flows from right to left in the resulting amplitude.
Let us now apply these diagrams to the evaluation of the leading-order contribution to absorption given in eq. (104). We take the first term \(T^{\rm(nc)}_{(0|1)}\) on the right-hand side of figure 3 and evaluate its matrix element \(\langle p_{2},s_{2},\{b\};\tilde{\gamma}^{\tilde{h}}|T^{\rm(nc)}_{(0|1)}|p_{1};\gamma^{h}\rangle\). The result is the overlap between the coherent states, a scattering amplitude, and the waveshape \(\gamma(k)\), represented in figure 4. Note that the integrated scattering amplitude is a single-graviton amplitude smeared by the waveshape.
Similarly, figure 5 shows how this diagrammatic technique applies to the NLO non-conservative contributions. These contain double absorption and the mass-changing Compton amplitude, which both involve two photons/gravitons, now integrated with two waveshapes coming from the coherent states.
Figure 4: \(T^{\rm(nc)}_{(0|1)}\)-matrix operator acting on the quantum states. Time flows right to left.
### PM absorption analysis
In the previous section, we have explained how to include higher orders in multiplicity into the BH-wave interaction modeling by expanding the \(T\)-matrix. The PM expansion, however, enters into the mass-changing amplitudes in a rather intricate way. Indeed, as we have seen from eq. (4.7), even the three-point absorptive amplitudes must behave \(\mathcal{O}(G^{s_{2}+1})\). Let us now explore the mass-changing \((m+n+2)\)-point amplitude \(\mathcal{A}_{\{b\}}{}^{\{a\}}(p_{2},s_{2};\tilde{k}_{1},\tilde{h}_{1};\ldots; \tilde{k}_{m},\tilde{h}_{m}|p_{1},s_{1};k_{1},h_{1};\ldots;k_{n},h_{n})\) in eq. (5.9). For brevity, we compress the notation to \(\mathcal{A}_{\text{abs}(m|n)}^{(s_{2}|s_{1})}\), emphasizing its distinction from the mass-conserving counterparts \(\mathcal{A}_{(m|n)}^{(s_{2}|s_{1})}\). In particular, at three points we have
\[\mathcal{A}_{\text{abs}(0|1)}^{(s_{2},0)}\propto G^{s_{2}+1}, \qquad\quad\mathcal{A}_{\text{3,min}}^{(s)}\propto\sqrt{G}, \tag{5.21}\]
where the second one is the usual three-point same-mass amplitude [95] of the minimal form (2.16), which is known to correspond to Kerr BHs at 1PM [32; 33].
To obtain higher multiplicities, we can now naively multiply the powers of the Newton constant of these three-point amplitudes, assuming that they scale uniformly in \(G\), and any subleading orders at three points should come from higher loop orders.13 At four points, we have two incoming gravitons and a mass-changing heavy particle. We then have three types of contributions: a contact four-point term, two successive three-point absorptions, and one absorption together with one minimal-coupling amplitude. These terms can be written respectively as
Footnote 13: See [118] for loop corrections to Love numbers in the worldline EFT framework. For quantum corrections to Love numbers due to emission see [93], which we also ignore in the above analysis.
\[\mathcal{C}_{\text{abs}(0|2)}^{(s_{2},0)}\,+\,\underbrace{\mathcal{A}_{\text {abs}(0|2)+0}^{(s_{2},0)}}_{\propto G^{2s_{2}+2}}\,+\,\underbrace{\mathcal{A}_ {\text{abs}(0|1)+1}^{(s_{2},0)}}_{\propto G^{s_{2}+3/2}}=:\mathcal{A}_{\text{ abs}(0|2)}^{(s_{2},0)}, \tag{5.22}\]
where the subscript notation \((0|r)+n-r\) means that we have \(n\) gravitons, \(r\) out of which couple via an absorptive three-point amplitude and \((n-r)\) via the mass-preserving minimal coupling.

Figure 5: Next-to-leading order contributions to mass-changing absorption effects

More generally, for \(n\)-graviton absorption we thus have
\[\mathcal{A}^{(s_{2},0)}_{\text{abs}(0|n)}\!=\sum_{r=1}^{n}\mathcal{A}^{(s_{2},0)}_ {\text{abs}(0|r)+n-r}\,+\,\mathcal{C}^{(s_{2},0)}_{\text{abs}(0|n)},\qquad \mathcal{A}^{(s_{2},0)}_{\text{abs}(0|r)+n-r}\!\propto G^{r(s_{2}+1)+(n-r)/2}. \tag{101}\]
In section 4, we have seen that, on the GR side, the PM expansion of the near-zone response function (4.2) suggests that the leading-order absorption cross-section scales as \(G^{2j+2}\), whereas the NLO does as \(G^{2j+4}\).14 Now from squaring the amplitudes (101), we see that we obtain terms that scale as \(G^{2j+3}\), \(G^{3j+7/2}\) and \(G^{4j+4}\) for \(s_{2}=j\) (as follows from spin conservation seen in eq. (102)). Therefore, it is not possible to obtain the NLO \(G^{2j+4}\) expected on the GR side from the tree-level counting on the EFT side, unless the contact term is artificially introduced to account for this counting. However, a more natural way to obtain the expected behavior in \(G\) is from the amplitude with three incoming gravitons, which is expanded as
Footnote 14: Tail effects may modify the NLO to \(\mathcal{O}(G^{2j+2})\)[92; 118], but we expect them to arise from loops.
\[\mathcal{A}^{(s_{2},0)}_{\text{abs}(0|3)}=\underbrace{\mathcal{A}^{(s_{2},0)}_ {\text{abs}(0|1)+2}}_{\propto\,G^{s_{2}+2}}+\underbrace{\mathcal{A}^{(s_{2},0) }_{\text{abs}(0|2)+1}}_{\propto\,G^{2s_{2}+5/2}}+\underbrace{\mathcal{A}^{(s_ {2},0)}_{\text{abs}(0|3)+0}}_{\propto\,G^{3s_{2}+3}}+\mathcal{C}^{(s_{2},0)}_ {\text{abs}(0|3)}. \tag{102}\]
Indeed, we see that the first contribution squared induces the desired NLO \(G^{2j+4}\) correction to the absorption cross-section.
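For convenience, the \(G\)-counting behind the last two statements can be spelled out using the scaling rule above (setting \(s_{2}=j\) as dictated by spin conservation, and leaving aside possible contact terms):

\[\begin{aligned}n=2:&\quad\mathcal{A}_{\text{abs}(0|1)+1}\propto G^{\,j+3/2},\ \ \mathcal{A}_{\text{abs}(0|2)+0}\propto G^{\,2j+2}&&\Rightarrow\ \ \text{squares and cross term: }G^{\,2j+3},\ G^{\,3j+7/2},\ G^{\,4j+4},\\ n=3:&\quad\mathcal{A}_{\text{abs}(0|1)+2}\propto G^{\,j+2}&&\Rightarrow\ \ \big|\mathcal{A}_{\text{abs}(0|1)+2}\big|^{2}\propto G^{\,2j+4},\end{aligned}\]

so that only the three-graviton term reproduces the \(G^{2j+4}\) order expected on the GR side.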
## 6 Summary and discussion
In this work, we have initiated the exploration of classical absorption effects for compact bodies using quantum scattering amplitudes. Central to this program are the mass-changing three-point scattering amplitudes [95; 99] that entail new degrees of freedom modeling non-conservative effects, which may change the mass and spin of the heavy particle (representing the compact object) due to the incoming wave.
We have made use of these amplitudes and their connection to covariantized spin-weighted spherical harmonics to describe leading gravitational absorption effects from a macroscopic/EFT point of view. Since this is an effective description, matching to the underlying theory was required to obtain the values of the EFT coupling coefficients. We have chosen to match at the cross-section level to the GR calculation dating back to Starobinsky, Churilov [80; 81] and Page [82; 83]. Although we have performed a leading-order match, this probability-based formalism can accommodate higher orders in the PM expansion and incoming spinning BHs and neutron stars as well. For the latter case, absorption effects were considered via tidal heating [119; 120], and it would be interesting to understand how the effective couplings \(g_{r,s_{1},s_{2}}\) deviate from the BH values. We leave this for future work.
Having made sense of the effective couplings, we have explored how the used single-quantum framework fits into a more general and consistent description of classical waves using massless coherent states. In particular, we were able to connect the frequency wavefunction used in the former with the coherent-state waveshape, i.e. the eigenvalue of the annihilation operator. An interesting feature of this analysis is the diagrammatic approach for expanding the \(T\)-matrix and systematically introducing higher-order terms in the coherent cross-section. Crucial to this analysis was the separation of the probabilities into conservative and absorptive, which is motivated by the intrinsically distinct nature of the quantum amplitudes building blocks. Although the classical limit sends \(M_{2}\to M_{1}\), the form of resulting cross-section follows from the amplitudes constructed on \(M_{2}\neq M_{1}\) kinematics, which are qualitatively different from their same-mass counterparts.
The natural next step is to include spin effects for the initial black hole with the end goal of modeling a Kerr BH absorption cross-section purely from on-shell amplitudes. According to the microscopic calculation from the GR side, such leading-order non-spinning effects come at \(\mathcal{O}(G^{3})\) at the cross-section level, suggesting that the effective coupling in the amplitude should start at \(\mathcal{O}(G^{3/2})\). From the EFT side, in this more general case of \(s_{1}\neq 0\), we have observed the proliferation of possible effective couplings in the three-point mass-changing amplitude (13), making the matching a harder task. However, the proposed definition of the mass-changing minimal amplitudes (20) might streamline the calculation and perhaps even correspond to the Kerr BH in the same way as the same-mass "minimal coupling" [95] of the form (2.16) is known to [32; 33].
Another direction that we have not explored is the study of observables from amplitudes, in particular using the KMOC formalism [48; 49; 50; 51; 52; 53]. With the obtained absorption effective coefficients, many interesting local and global observables could already be explored at leading or higher PM orders using the presented formalism. Perhaps the most interesting ones are the change in mass and spin induced by absorption, where one could naturally use such quantum operators as \(\mathbb{P}^{2}=\mathbb{P}^{\mu}\mathbb{P}_{\mu}\) to obtain \(\Delta M^{2}\) and \(\mathbb{S}^{2}=\mathbb{S}^{\mu}\mathbb{S}_{\mu}\) to obtain \(\Delta S^{2}\). Moreover, one could imagine probing the change in the area of the BH due to absorptive effects. In classical GR, the area is defined as
\[A_{\rm H}:=8\pi(GM)^{2}\biggl{[}1+\sqrt{1-\chi^{2}}\biggr{]},\qquad\chi=\frac{ \mathfrak{a}}{GM}, \tag{171}\]
and \(\mathfrak{a}=S/M\) is the Kerr ring radius. To obtain the change in this quantity from amplitudes, one would like to define a QFT operator for the area and try to compute \(\Delta A_{\rm H}\) in a scattering process. For that, one could substitute \((S^{2},M^{2})\to(\mathbb{S}^{2},\mathbb{P}^{2})\), which implies the following proposal for the area operator:
\[\mathbb{A}_{\rm H}=8\pi\left[G^{2}\,\mathbb{P}^{2}+\sqrt{(G^{2}\,\mathbb{P}^{ 2})^{2}-G^{2}\,\mathbb{S}^{2}}\right], \tag{172}\]
which mixes PM orders. The simplicity of this proposal also comes from the fact that the two operators commute \([\mathbb{S}^{2},\mathbb{P}^{2}]=0\). The mixing between orders in the expansion brings an interesting interplay between the \(\mathbb{S}^{2}\) and the \(\mathbb{P}^{2}\) calculation. We leave the exploration of such an operator for future work.
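As a minimal sanity check of this proposal, one may take the eigenvalues of \(\mathbb{P}^{2}\) and \(\mathbb{S}^{2}\) to be \(M^{2}\) and the spin magnitude squared \(S^{2}\) (leaving aside metric-signature conventions for the spin vector); the proposed operator then reproduces the classical area quoted above:

\[8\pi\Big[G^{2}M^{2}+\sqrt{(G^{2}M^{2})^{2}-G^{2}S^{2}}\Big]=8\pi(GM)^{2}\bigg[1+\sqrt{1-\frac{\mathfrak{a}^{2}}{(GM)^{2}}}\bigg]=8\pi(GM)^{2}\Big[1+\sqrt{1-\chi^{2}}\Big],\qquad\mathfrak{a}=\frac{S}{M}.\]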
We hope that this work may open these and other avenues to include absorption effects in the on-shell amplitude approach to gravitational waves. In particular, the work [39] on matching Teukolsky-equation solutions to the gravitational Compton scattering amplitudes suggests that absorption effects could be included into them in relation to horizon effects. It is tempting to consider these effects from a purely on-shell perspective, as the four-point amplitudes are likely to be related to the leading-order absorption cross-section by dispersion relations.
Another direction is to explore in more detail the role of the spectral density function that we were forced to introduce in our formalism. For instance, it would be interesting to see if it appears in a similar way in the context of the Heavy Particle Effective Theory [26; 27], which streamlines the classical limit. We also leave this for future work.
## Acknowledgements
We are grateful to Fabian Bautista, Kays Haddad, Andreas Helset, Yu-tin Huang, Jung-Wook Kim, Nathan Moynihan, Donal O'Connell and M.V.S. Saketh for valuable conversations, especially to Andreas, Donal and Jung-Wook for comments on an early draft of this paper. RA's research was supported by the F.R.S.-FNRS project no. 40005600 and the FSR Program of UCLouvain.
## Appendix A Spherical harmonics and spinors
Here we discuss the spinorial construction for the spin-weighted spherical harmonics.
Spherical harmonics in 3d. The original construction due to Newman and Penrose [108] may be neatly formulated (see e.g. [121]) in terms of \(\mathrm{SU}(2)\) spinors on the sphere \(S^{2}=\{\hat{\mathbf{k}}=(\cos\varphi\sin\theta,\sin\varphi\sin\theta,\cos\theta)\}\subset\mathbb{R}^{3}\):
\[\kappa^{a}_{+}\!=\!\left(\begin{array}{cc}e^{-\frac{i\varphi}{2}}\!\cos \frac{\theta}{2}\\ e^{\frac{i\varphi}{2}}\!\sin\frac{\theta}{2}\end{array}\right)\!,\ \ \kappa^{a}_{-}\!=\!\left(\begin{array}{cc}-e^{-\frac{i\varphi}{2}}\!\sin \frac{\theta}{2}\\ e^{\frac{i\varphi}{2}}\!\cos\frac{\theta}{2}\end{array}\right)\ \ \Rightarrow\ \ \begin{cases}\hat{\mathbf{k}}\cdot\mathbf{\sigma}^{a}_{\,b}\,\kappa^{b}_{\pm}=\pm \kappa^{a}_{\pm},\\ \hat{k}^{i}\!=-\frac{1}{2}\sigma^{i,a}_{\,b}(\kappa^{a}_{+}\kappa_{-b}+\kappa ^{a}_{-}\kappa_{+b}),\end{cases} \tag{100}\]
where \(\mathbf{\sigma}^{a}{}_{b}\) is the concatenation of the three standard Pauli matrices. We then define
\[{}_{h}\tilde{Y}_{j,m}(\hat{\mathbf{k}}):=\overbrace{\kappa^{(1}_{+}\cdots\kappa^{1}_{+}}^{j-m}\;\overbrace{\kappa^{2}_{+}\cdots\kappa^{2}_{+}}^{m+h}\;\overbrace{\kappa^{2}_{-}\cdots\kappa^{2)}_{-}}^{j-h}, \tag{101}\]
i.e. a symmetrized product of \(j+h\) spinors \(\kappa_{+}\) and \(j-h\) spinors \(\kappa_{-}\), with \(j-m\) of the explicit \(\mathrm{SU}(2)\) indices set to \(1\) and \(j+m\) set to \(2\); the round brackets denote symmetrization over these indices.
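For orientation, here is a minimal evaluation of this definition (overall normalizations and sign conventions are left aside; they are fixed by the relation to the conventional harmonics below): taking \(j=1\), \(m=0\) and using the explicit spinors (A.1),

\[{}_{0}\tilde{Y}_{1,0}=\kappa_{+}^{(1}\kappa_{-}^{2)}=\tfrac{1}{2}\big(\kappa_{+}^{1}\kappa_{-}^{2}+\kappa_{+}^{2}\kappa_{-}^{1}\big)=\tfrac{1}{2}\cos\theta\ \propto\ Y_{1,0},\qquad{}_{-1}\tilde{Y}_{1,0}=\kappa_{-}^{1}\kappa_{-}^{2}=-\tfrac{1}{2}\sin\theta\ \propto\ {}_{-1}Y_{1,0},\]

in agreement, up to constants, with the familiar \(j=1\) harmonics.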
Up to normalization, these functions are directly related to the conventional angle-dependent harmonics [109] via the spinor parametrization (A.1):
\[{}_{h}Y_{j,m}(\theta,\varphi):=(-1)^{m}\sqrt{\frac{(2j+1)(j+m)!(j-m)!}{4\pi(j+h)!(j-h)!}}\big{(}\sin\tfrac{\theta}{2}\big{)}^{2j}\sum_{r}\binom{j-h}{r}\binom{j+h}{r+h-m}(-1)^{j-h-r}\,e^{im\varphi}\cot^{2r+h-m}\!\big(\tfrac{\theta}{2}\big{)}. \tag{A.3}\]

Momentum spinors. The spinors (A.1) may also be used to parametrize the massive momentum spinors \(\langle p^{a}|\) and \(|p^{a}]\) in terms of the direction of the spatial momentum \(\mathbf{p}\).
This is ambiguous for \(\mathbf{p}=0\), so one may choose e.g.
\[p^{\mu}=(M,\mathbf{0})\qquad\Rightarrow\qquad\langle p^{a}|^{\alpha}=\sqrt{M}\epsilon^{\alpha a},\qquad[p^{a}|_{\dot{\alpha}}=\sqrt{M}\epsilon_{\dot{\alpha}a}. \tag{111}\]
The SU(2) little-group rotations, \(|p^{a}\rangle\to U^{a}{}_{b}(p)|p^{b}\rangle\), \(|p^{a}]\to U^{a}{}_{b}(p)|p^{b}]\), leave momentum \(p^{\mu}\) invariant and correspond to choosing different spin quantization axes \(n^{\mu}\). (More details may be found in [52; 95; 131]). The parametrization (110) picks \(n^{\mu}=(\rho,\varepsilon\cos\varphi\sin\theta,\varepsilon\sin\varphi\sin\theta,\varepsilon\cos\theta)/M\), i.e. quantization along the momentum, while eq. (111) chooses the conventional \(z\)-axis.
The momentum spinors serve as basic building blocks for scattering amplitudes. For massless particles, the spin is always quantized along the momentum, and is thus counted by helicity weights: \(-1/2\) for each \(|k\rangle\) and \(+1/2\) for \(|k]\). Moreover, each massive spin-\(s\) particle is represented by \(2s\) symmetrized SU(2) indices. We denote the corresponding symmetrized tensor product of spinors by \(\odot\), following [100].
Spherical harmonics in 4d. Returning to the spherical harmonics, we may now embed the 3d construction in 4d. Namely, we regard it as corresponding to the default choice of the time direction \(u^{\mu}=(1,\mathbf{0})\) and the celestial sphere swept by a massless momentum \(k^{\mu}=\omega\;(1,\hat{\mathbf{k}}(\theta,\varphi))\) and parametrized by the spinors \(|k\rangle_{\alpha}=\sqrt{2\omega}\kappa_{-}^{a=\alpha}\) and \(|k]^{\dot{\alpha}}=\sqrt{2\omega}\kappa_{+}^{a=\dot{\alpha}}\). Lorentz boosts change the time direction and induce Möbius transformations on the celestial sphere.
For a general time direction \(u^{\mu}\) (such that \(u^{2}=1\) and \(u^{0}>0\)), we choose to parametrize the celestial sphere by the massless spinors \(|k\rangle_{\alpha}\) and \(|k]^{\dot{\alpha}}\). Of course, the quantum numbers of a spherical harmonic must be the same as in the rest frame of \(u^{\mu}\). The massive spinors \(\langle u^{a}|^{\alpha}\) and \([u^{a}]_{\dot{\alpha}}\) provide a perfect transformation device between the current inertial frame and the rest frame of \(u^{\mu}\). This brings us to eq. (19), i.e.
\[{}_{h}\tilde{Y}_{j,m}(k;u,n):=\frac{1}{\langle k|u|k]^{j}}\,\overbrace{[u_{(1}k]\cdots[u_{1}k]}^{j-m}\;\overbrace{[u_{2}k]\cdots[u_{2}k]}^{m+h}\;\overbrace{\langle u_{2}k\rangle\cdots\langle u_{2)}k\rangle}^{j-h}. \tag{112}\]
Here the subscripts \(1\) and \(2\) are the explicitly symmetrized little-group indices, and the prefactor involving \(\langle k|u|k]=2k\cdot u\xrightarrow[\mathbf{u}\to 0]{}2k^{0}\) serves to cancel out the mass dimension. Together with eq. (111), it guarantees the consistency with the rest-frame definition (104) -- up to the functional U(1) transformation of the form (103) in view of the differences in the \(\varphi\)-dependence between eqs. (103) and (104). This is an example of acceptable convention discrepancies, which may be caused by switching between different spinor parametrizations. The validity of the harmonics (112) as representations of the spin algebra follows from the properties of massive spinors, see e.g. [52; 34]. Note that the dependence on the spin-quantization axis \(n^{\mu}\) enters via the choice of the massive spinors, as discussed around eq. (20). In other words, the SU(2) little-group transformations \(|u^{a}\rangle\to U^{a}{}_{b}(p)|u^{b}\rangle\), \(|u^{a}]\to U^{a}{}_{b}(p)|u^{b}]\) induce the
SO(3) rotations of \(n^{\mu}\) orthogonally to the time direction given by \(u^{\mu}\). Since the choice of spinors for \(u^{\mu}\) defines \(n^{\mu}\), the notation may as well be compressed to \({}_{h}Y_{j,m}(k;u)\).
Let us now discuss the orthonormality property (3.17). It is valid for the normalized versions of the covariant harmonics, rescaled from those in eq. (A.11) analogously to their non-covariant counterparts in eq. (A.3). It can be easily seen that in the rest frame of \(u^{\mu}\) the covariant integration measure reduces to the solid-angle one:
\[\frac{2}{\omega}{\int}d^{4}k\,\delta^{+}(k^{2})\delta(k\cdot u- \omega)\ \xrightarrow[u\to 0]{}\ \ {\int}d\Omega_{\hat{\mathbf{k}}},\qquad k^{0}=|\mathbf{k}|=\omega.\] (A.12)
So eq. (3.17) clearly holds for \(\mathbf{u}=0\), and what we need is to extend it to any \(u^{\mu}\).
Spinor integration. To expose the properties of the measure (A.12) in a neat way, we first rewrite it using a null basis [132]:
\[k^{\mu}\!=t\Big{(}r^{\mu}\!+\!\gamma q^{\mu}\!+\!\frac{z}{2}[r|\bar{\sigma}^{\mu}|q\rangle\!+\!\frac{\bar{z}}{2}[q|\bar{\sigma}^{\mu}|r\rangle\Big{)}\ \ \Rightarrow\ \ \int\!d^{4}k=\frac{i(r+q)^{4}}{4}{\int}t^{3}dt\!\wedge\!d\gamma\!\wedge\!dz\!\wedge\!d\bar{z},\] (A.13)
where \(\bar{\sigma}^{\mu}=(1,-\mathbf{\sigma})\), and the massless vectors \(r^{\mu}\) and \(q^{\mu}\) are not collinear but otherwise arbitrary. Adding the masslessness condition eliminates \(\gamma\) from the measure:
\[{\int}d^{4}k\,\delta^{+}(k^{2})=\frac{i(r+q)^{2}}{4}{\int}_{0}^{\infty}tdt\int\!dz\!\wedge\!d\bar{z},\qquad k^{\mu}\!=\frac{t}{2}\big{(}\langle r|\!+\!z\langle q|\big{)}\sigma^{\mu}\big{(}|r]\!+\!\bar{z}|q]\big{)}.\] (A.14)
(Here for concreteness one may assume \(r^{0},q^{0}>0\) so that \(k^{0}>0\).) However, this massless measure may now be rewritten using spinor integration [133; 134; 135]
\[{\int}d^{4}k\,\delta^{+}(k^{2})=-\frac{i}{4}\int_{0}^{\infty} tdt\int_{\tilde{\lambda}=\tilde{\lambda}}\langle\lambda d\lambda\rangle \wedge[\tilde{\lambda}d\tilde{\lambda}],\qquad k^{\mu}\!=\frac{t}{2}\langle \lambda|\sigma^{\mu}|\tilde{\lambda}],\] (A.15)
such that the dependence on \(r^{\mu}\) and \(q^{\mu}\) has entirely canceled out due to
\[(r+q)^{2}dz\wedge d\bar{z}=-\big{(}\langle r|+z\langle q|\big{)}|q\rangle dz\wedge\big{(}[r|+\bar{z}[q|\big{)}|q]\,d\bar{z}=-\langle\lambda d\lambda\rangle\wedge[\tilde{\lambda}d\tilde{\lambda}].\] (A.16)
Introducing the second delta function now lets us fix the energy scale of \(k^{\mu}\), and we get
\[\frac{1}{\omega}{\int}d^{4}k\,\delta^{+}(k^{2})\delta(k\!\cdot\!u- \omega)=-i{\int}_{\tilde{\lambda}=\tilde{\lambda}}\frac{\langle \lambda d\lambda\rangle\wedge[\tilde{\lambda}d\tilde{\lambda}]}{\langle \lambda|u|\tilde{\lambda}]^{2}},\qquad k^{\mu}\!=\!\omega\frac{\langle\lambda| \sigma^{\mu}|\tilde{\lambda}]}{\langle\lambda|u|\tilde{\lambda}]}.\] (A.17)
This measure allows us to reformulate the orthonormality property (3.17) of the spin-weighted spherical harmonics in the following way:
\[{\int}_{\tilde{\lambda}=\tilde{\lambda}}\frac{\langle\lambda d \lambda\rangle\wedge[\tilde{\lambda}d\tilde{\lambda}]}{\langle\lambda|u| \tilde{\lambda}]^{2}}\,{}_{h}Y^{*}_{j^{\prime},m^{\prime}}(\lambda,\tilde{ \lambda};u)\,{}_{h}Y_{j,m}(\lambda,\tilde{\lambda};u)=\frac{i}{2}\delta^{j^{ \prime}}_{j}\delta^{m^{\prime}}_{m},\] (A.18)
where the notation \({}_{h}Y_{j,m}(\lambda,\tilde{\lambda};u):={}_{h}Y_{j,m}(k;u)\) serves to emphasize their independence of the energy scale. Then the validity of eq. (3.17) for \(\mathbf{u}\neq 0\) follows from
the fact that the entire left-hand side is independent of \(\omega=k\cdot u\). Indeed, for any spinor conventions and in any frame, we can rewrite it as the same integral over the complex plane by parametrizing \(|\lambda\rangle=|u^{1}\rangle+z|u^{2}\rangle\) and \(|\tilde{\lambda}]=|u_{1}]+\bar{z}|u_{2}]\), so that the left-hand side of eq. (102) will exclusively involve the following ingredients:
\[\begin{split}\langle u_{a}\lambda\rangle&=-\delta_ {a}^{1}-\delta_{a}^{2}z,\qquad\quad\langle\lambda d\lambda\rangle\wedge[\tilde {\lambda}d\tilde{\lambda}]&=-dz\wedge d\bar{z}:=2i\,d\Re z\wedge d \Im z,\\ [u_{a}\tilde{\lambda}]&=\epsilon_{1a}+\epsilon_{2a} \bar{z},\qquad\qquad\qquad\langle\lambda|u|\tilde{\lambda}]&=1+z \bar{z}.\end{split} \tag{103}\]
Therefore, it only depends on the quantum numbers \(h,j,j^{\prime},m\) and \(m^{\prime}\), and may only produce a combinatorial result, which may as well be fixed at \(u^{\mu}=(1,\mathbf{0})\).
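As the simplest illustration, consider \(j=j'=h=0\) in eq. (A.18); assuming the standard normalization \({}_{0}Y_{0,0}=1/\sqrt{4\pi}\) (so the harmonic is constant), the ingredients listed above reduce the left-hand side to an elementary integral over the complex plane,

\[\int_{\tilde{\lambda}=\bar{\lambda}}\frac{\langle\lambda d\lambda\rangle\wedge[\tilde{\lambda}d\tilde{\lambda}]}{\langle\lambda|u|\tilde{\lambda}]^{2}}\;\frac{1}{4\pi}=\frac{1}{4\pi}\int\frac{2i\,d\Re z\wedge d\Im z}{(1+z\bar{z})^{2}}=\frac{2i}{4\pi}\,\pi=\frac{i}{2},\]

in agreement with the right-hand side of (A.18).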
## Appendix B Frame transformations of harmonics
Here we derive the spinor transformations (102), which induce the relationship between covariant spin-weighted spherical harmonics \({}_{h}\tilde{Y}_{j,m}(k;u)\) and \({}_{h}\tilde{Y}_{j,m}(k;v)\).
These harmonics correspond to two different unit timelike vectors \(u^{\mu}\) and \(v^{\mu}\), with a relative Lorentz factor
\[\gamma:=u\cdot v=:\frac{1}{\sqrt{1-\nu^{2}}},\qquad\quad 0\leq\nu<1. \tag{104}\]
These vectors can be Lorentz-transformed into each other using the minimal boost
\[L^{\rho}{}_{\sigma}(v\!\leftarrow\!u):=\delta_{\sigma}^{\rho}+2v^{\rho}u_{ \sigma}-\frac{(u+v)^{\rho}(u+v)_{\sigma}}{1+u\cdot v}=\exp\Bigl{(}\frac{i\log( \gamma+\sqrt{\gamma^{2}-1})}{\sqrt{\gamma^{2}-1}}u^{\mu}v^{\nu}\Sigma_{\mu \nu}\Bigr{)}^{\rho}_{\sigma}, \tag{105}\]
written in terms of the spin-1 Lorentz generators \((\Sigma^{\mu\nu})^{\rho}{}_{\sigma}:=i[\eta^{\mu\rho}\delta_{\sigma}^{\nu}- \eta^{\nu\rho}\delta_{\sigma}^{\mu}]\). The spinors may be boosted using the corresponding \(\mathrm{SL}(2,\mathbb{C})\) transformations, namely
\[S^{\alpha}{}_{\beta}(v\!\leftarrow\!u)=\exp\bigl{(}\tfrac{i\log\mu}{\gamma\nu} u^{\mu}v^{\nu}\sigma_{\mu\nu}\bigr{)}^{\alpha}{}_{\beta},\qquad\qquad\mu:= \gamma+\sqrt{\gamma^{2}-1}, \tag{106}\]
written in terms of the chiral spin-1/2 generators \(\sigma^{\mu\nu}:=\tfrac{i}{2}\sigma^{[\mu}\bar{\sigma}^{\nu]}\). Using the Clifford-algebra property \(\sigma^{(\mu}\bar{\sigma}^{\nu)}=\eta^{\mu\nu}\), it is easy to derive
\[\begin{split}\bigl{(}\tfrac{i\log\mu}{\gamma\nu}u^{\mu}v^{\nu} \sigma_{\mu\nu}\bigr{)}^{2n}|u^{a}\rangle&=(\log\sqrt{\mu})^{2n} |u^{a}\rangle,\\ \bigl{(}\tfrac{i\log\mu}{\gamma\nu}u^{\mu}v^{\nu}\sigma_{\mu\nu} \bigr{)}^{2n+1}|u^{a}\rangle&=(-\log\!\sqrt{\mu})^{2n+1}\Bigl{(} \tfrac{1}{\nu}|u^{a}\rangle-\tfrac{1}{\gamma\nu}|v|u^{a}]\Bigr{)}.\end{split} \tag{107}\]
This lets us sum the matrix exponent, whose action simplifies to
\[S^{\alpha}{}_{\beta}(v\!\leftarrow\!u)|u^{a}\rangle=\frac{\sqrt{\mu}}{\mu+1} \Bigl{(}|u^{a}\rangle+|v|u^{a}]\Bigr{)}. \tag{108}\]
We thus arrive at the following massive-spinor transformations:
\[|v^{b}\rangle=\frac{\sqrt{\mu}}{\mu+1}U^{b}{}_{a}(v\!\leftarrow\!u)|u\!+\!v|u^ {a}],\qquad\quad|v^{b}]=\frac{\sqrt{\mu}}{\mu+1}U^{b}{}_{a}(v\!\leftarrow\!u)|u \!+\!v|u^{a}\rangle. \tag{109}\]
Here we have allowed for the SU(2) matrix \(U^{b}{}_{a}(v\!\leftarrow\!u)\). Its purpose is to fix the misalignment between what we get from the minimal boost (100) and the desired spin quantization axis for the resulting time direction, which generically do not coincide:
\[n^{\mu}_{v}:=\frac{1}{2}(\langle v_{2}|\sigma^{\mu}|v^{2}]+[v_{2}|\bar{\sigma}^{ \mu}|v^{2}\rangle) \neq L^{\mu}{}_{\nu}(v\!\leftarrow\!u)n^{\nu}=n^{\mu}-\frac{n\cdot v}{1+u \cdot v}(u+v)^{\mu}. \tag{102}\]
In fact, unitary matrices like \(U^{b}{}_{a}(v\!\leftarrow\!u)\) represent the SO(3) rotations of the spin quantization axis even in the absence of Lorentz-frame boosts. Therefore, the spinor transformations (100) induce the most general frame transformations of the covariant spherical harmonics.
|
2305.09350 | Two-species reaction-diffusion system in the presence of random velocity
fluctuations | We study random velocity effects on a two-species reaction-diffusion system
consisting of three reaction processes $A + A \rightarrow (\varnothing, A),A+B
\rightarrow A$. Using the field-theoretic perturbative renormalization group we
analyze this system in the vicinity of its upper critical dimension $d_c = 2$.
Velocity ensemble is generated by means of stochastic Navier-Stokes equations.
In particular, we investigate the effect of thermal fluctuations on reaction
kinetics. The overall analysis is performed to the one-loop approximation and
possible macroscopic regimes are identified. | Michal Hnatič, Matej Kecer, Tomáš Lučivjanský | 2023-05-16T11:08:10Z | http://arxiv.org/abs/2305.09350v1 | # Two-species reaction-diffusion system in the presence of random velocity fluctuations
###### Abstract
We study random velocity effects on a two-species reaction-diffusion system consisting of three reaction processes \(A+A\to(\emptyset,A)\), \(A+B\to A\). Using the field-theoretic perturbative renormalization group we analyze this system in the vicinity of its upper critical dimension \(d_{c}=2\). Velocity ensemble is generated by means of stochastic Navier-Stokes equations. In particular, we investigate the effect of thermal fluctuations on reaction kinetics. The overall analysis is performed to the one-loop approximation and possible macroscopic regimes are identified.
\({}^{1}\) Institute of Physics, Faculty of Science, P. J. Safarik University, Park Angelinum 9, 040-01 Kosice, Slovakia
\({}^{2}\) Institute of Experimental Physics, Slovak Academy of Sciences, Watsonova 47, 040 01 Kosice, Slovakia
\({}^{3}\) Joint Institute for Nuclear Research, 141980 Dubna, Russia
## 1 Introduction
Diffusion-limited reactions constitute prominent models in non-linear statistical physics [1]. Theoretical study of such systems attracted a lot of attention in the past [2, 3]. A straightforward approach to theoretical analysis of such systems is based on kinetic rate equations, which might be regarded as a simple mean-field-like approximation [3, 4]. However, reaction systems are known to exhibit non-trivial behavior, especially in low space dimensions [5], where density fluctuations become especially pronounced. There the kinetic rate equations approach is not adequate and more sophisticated approaches are called for. In this paper, we study a multi-species reaction-diffusion system [6, 7, 8, 9], which consists of the following three reaction processes
\[A+A\to\begin{cases}A&\text{coalescence,}\\ \emptyset&\text{annihilation,}\\ \end{cases} \tag{1}\] \[A+B\to A\qquad\text{trapping,}\]
where the coalescence process occurs with probability \(p\,(0\leq p\leq 1)\), and the annihilation process with the complementary probability \(1-p\). The model becomes even more intricate when additional effects are taken into account. Their investigation is especially important, as they naturally arise in many practical circumstances. For instance, the majority of chemical reactions in typical experimental settings occur in some fluid environment. Various aspects of such a problem have already been studied recently [7, 8, 9]. Here, our aim is to investigate the influence of thermal fluctuations of a surrounding environment on the kinetics of the reaction scheme (1). We model the environment as a fluid at a constant temperature using a well-known approach based on the stochastic Navier-Stokes equation [10, 11].
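For orientation, a minimal sketch of the mean-field (rate-equation) level of description for the scheme (1); the diffusion constants \(D_{A},D_{B}\) and effective rate constants \(k_{1},k_{2}\) used here are generic placeholders rather than the renormalized parameters introduced below:

\[\partial_{t}n_{A}=D_{A}\nabla^{2}n_{A}-k_{1}\,n_{A}^{2},\qquad\partial_{t}n_{B}=D_{B}\nabla^{2}n_{B}-k_{2}\,n_{A}\,n_{B},\]

whose spatially homogeneous solution decays as \(n_{A}\sim 1/(k_{1}t)\) at late times. It is the fluctuation corrections to this mean-field picture in low dimensions, together with the advecting velocity field, that motivate the field-theoretic treatment below.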
A powerful tool for analyzing the asymptotic behavior of stochastic systems is provided by the renormalization group (RG) method [12, 13]. It allows us to determine the long-time and large-scale - or infrared (IR) - asymptotic regimes of the system and also is a very efficient tool for the calculation of various universal physical quantities, e.g. critical exponents. The aim of this paper is to address the possible IR behavior of the reaction-diffusion process (1) under the influence of advecting velocity fluctuations and to determine their IR regimes.
The paper is organized as follows. In Sec. 2 we give a field-theoretic formulation of the model and specify the main ingredients of the perturbation theory. Sec. 3 is devoted to the analysis of ultraviolet divergences and renormalization of the model in one-loop order of perturbation scheme. The analysis of fixed points (FP) and their regions of stability are discussed in Sec. 4. Conclusions are drawn in Sec. 5.
## 2 Field-theoretic formulation of the model
The field theory for the reaction-diffusion system described by the scheme (1) can be constructed from the master equation by means of Doi-Peliti formalism [4, 14, 15, 16]. For brevity, we omit the derivation as it can be easily found elsewhere (see e.g., [3]). We start our analysis with the field-theoretic action for the reaction scheme (1) augmented with diffusion processes
\[\mathcal{S}_{r}[\Psi] =\psi_{A}^{\dagger}(-\partial_{t}+\nu_{0}u_{A0}\partial^{2})\psi _{A}+\psi_{B}^{\dagger}(-\partial_{t}+\nu_{0}u_{B0}\partial^{2})\psi_{B}-\nu_ {0}u_{A0}\lambda_{0}\psi_{A}^{\dagger}\psi_{A}^{2}\] \[-\nu_{0}u_{A0}\lambda_{0}\psi_{A}^{\dagger 2}\psi_{A}^{2}- \lambda_{0}^{\prime}Q\nu_{0}u_{A0}\psi_{B}^{\dagger}\psi_{A}\psi_{B}-\nu_{0}u _{A0}\lambda_{0}^{\prime}\psi_{A}^{\dagger}\psi_{B}^{\dagger}\psi_{A}\psi_{B}, \tag{2}\]
where \(\Psi\equiv\{\psi_{A},\psi_{A}^{\dagger},\psi_{B},\psi_{B}^{\dagger}\}\) are bosonic-like coherent fields arising in taking a continuum limit in the Doi-Peliti approach [3], \(\partial^{2}=\partial_{i}\partial_{i}\) denotes Laplace operator in \(d\)-dimensions and diffusion parameters are expressed through respective Prandtl numbers \(u_{A0}\), \(u_{B0}\) and viscosity \(\nu_{0}\) (see below Eq. (7)). The parameters \(\lambda_{0},\lambda_{0}^{\prime}\) denote reaction constants, and parameter \(Q=1/(2-p)\) is related to the probability of whether annihilation or coagulation process takes place. In this work, we employ the RG method, which introduces two different kinds of variables - bare (unrenormalized) quantities and their renormalized counterparts. Therefore we denote the former ones with the subscript "0", whereas the latter will be written without the subscript "0".
Reaction process (1) is an example of a genuine non-equilibrium system and, therefore, we have to specify its initial conditions. We choose them in the following form
\[\mathcal{S}_{init}[\Psi]=(a_{0}\psi_{A}^{\dagger}+b_{0}\psi_{B}^{\dagger}) \delta(t), \tag{3}\]
where \(a_{0},b_{0}\) are appropriately rescaled initial average densities [4, 7].
In writing actions (2) and (3) we have employed a condensed notation, in which integrations over space and time variables in the expressions for action functionals are implied. For instance, the first term in the action (2) corresponds to
\[-\psi_{A}^{\dagger}\partial_{t}\psi_{A}=-\int\mathrm{d}x\,\psi_{A}^{\dagger}(x )\partial_{t}\psi_{A}(x), \tag{4}\]
where we have written coordinates compactly as \(x=(t,\mathbf{x})\) and integration measure as \(\mathrm{d}x=\mathrm{d}t\mathrm{d}^{d}x\).
The aim of this paper is to study the case where chemical particles are advected within the fluid environment with random fluctuations. We introduce advection processes into the formalism by the inclusion of convective derivative [17]. This corresponds to the replacement of the time derivative as follows
\[\partial_{t}\to\partial_{t}+\mathbf{v}\cdot\mathbf{\nabla}=\partial_{t}+v_{j}\partial _{j}, \tag{5}\]
where summation over repeated indices is implied in the last term. Let us stress that the advection for both particle types is considered to be passive, i.e., the velocity field itself is not affected by the particles or reactions processes, respectively. Corresponding advective terms to the action (2) take the form
\[\mathcal{S}_{adv}[\Psi,\mathbf{v}]=-\psi_{A}^{\dagger}v_{j}\partial_{j}\psi_{A}- \psi_{B}^{\dagger}v_{j}\partial_{j}\psi_{B}. \tag{6}\]
To finalize the model construction we need to specify velocity field \(\mathbf{v}\). Here, we assume that velocity field \(\mathbf{v}(t,\mathbf{x})\) is a random variable with zero mean, whose dynamics is governed by stochastic Navier-Stokes equation [10, 18].
\[\partial_{t}v_{i}+(v_{j}\partial_{j})v_{i}=\nu_{0}\partial^{2}v_{i}-\partial_ {i}P+f_{i}, \tag{7}\]
where \(P=P(x)\) is the pressure field, and \(f_{i}=f_{i}(x)\) denotes \(i\)-th component of an external random force \(\mathbf{f}\). Following earlier works [10, 18, 19] we assume the force \(\mathbf{f}\) is a random Gaussian variable with zero mean and correlation function of the prescribed form
\[\langle f_{i}(t,\mathbf{x})f_{j}(0,\mathbf{0})\rangle=\int\frac{\mathrm{d}^{d}k}{(2 \pi)^{d}}D_{ij}(t,\mathbf{k})\mathrm{e}^{i\mathbf{k}\cdot\mathbf{x}}. \tag{8}\]
We consider the case of an incompressible fluid, which implies transversality of the field \(\mathbf{v}\) (\(\partial_{i}v_{i}=0\)). Using this condition it is possible to express pressure in terms of velocity field [18]. This is equivalent to work in transversal space by taking the following replacement for velocity field \(\mathbf{v}\) in the momentum representation
\[v_{i}(\mathbf{k})\to P_{ij}(\mathbf{k})v_{j}(\mathbf{k}), \tag{9}\]
where \(P_{ij}(\mathbf{k})=\delta_{ij}-k_{i}k_{j}/k^{2}\) with \((k=|\mathbf{k}|)\) is transverse projection operator.
The incompressibility condition implies that the kernel \(D_{ij}\) in the momentum representation is proportional to the transverse projector \(P_{ij}(\mathbf{k})\). In fact, it can be readily shown that for an incompressible medium \(D_{ij}\sim\delta_{ij}\) is sufficient. However, we follow the traditional notation of previous works and keep \(P_{ij}\) in the expression for the kernel \(D_{ij}\). Using a specific choice for the momentum dependence of the \(D_{ij}\) term, it is possible to generate fluctuations of the velocity field near thermal equilibrium.
These considerations finally lead to
\[D_{ij}(t,\mathbf{k})=\delta(t)D_{0}k^{2}P_{ij}(\mathbf{k}), \tag{10}\]
where \(\delta=\delta(t)\) is Dirac delta function. It can be shown that delta correlations in time of the kernel \(D_{ij}\) ensures that the present model possesses the Galilean symmetry [11, 20].
In hindsight, this particular form (10) is convenient for the application of RG method, because both velocity fluctuations and reaction processes of the original reaction-diffusion system become simultaneously marginal in the critical space dimension \(d=d_{c}=2\). The stochastic problem (7)-(10) can be recast into a field theory with the doubled set of fields \(\Phi=\{\mathbf{v},\mathbf{v}^{\prime}\}\) described by the De Dominicis-Janssen action functional [12, 11],
\[\mathcal{S}_{v}[\Phi]=\frac{1}{2}v^{\prime}_{i}D_{ij}v^{\prime}_{j}+v^{\prime }_{i}\left(-\partial_{t}v_{i}-v_{j}\partial_{j}v_{i}+\nu_{0}\partial^{2}v_{i} \right), \tag{11}\]
where the response field \(v^{\prime}_{i}\) is incompressible, and again the condensed notation in the sense of Eq. (4) is assumed. Let us note that the quadratic term in the response field \(\mathbf{v}^{\prime}\) in the action (11) actually stands for
\[v^{\prime}_{i}D_{ij}v^{\prime}_{j}=\int\mathrm{d}x\int\mathrm{d}x^{\prime}\,v^{\prime}_{i}(x)D_{ij}(x-x^{\prime})v^{\prime}_{j}(x^{\prime}), \tag{12}\]
where \(D_{ij}\) corresponds to the inverse Fourier transform of the kernel (10).
The sum of action functionals (2), (3), (6), and (11), respectively, then gives us a final field-theoretic action
\[\mathcal{S}=\mathcal{S}_{r}+\mathcal{S}_{v}+\mathcal{S}_{adv}+\mathcal{S}_{ init}. \tag{13}\]
Expectation values of some physical observable \(A=A(t,\mathbf{x})\) can be, in principle, calculated as a functional integral [4, 12]
\[\langle A(t,\mathbf{x})\rangle=\mathcal{N}^{-1}\int\mathcal{D}\Psi\mathcal{D}\Phi \,A(t,\mathbf{x})\mathrm{e}^{S}, \tag{14}\]
where \(\mathcal{N}\) is a normalization constant.
In what follows we analyze the field-theoretic action (13) using the field-theoretic renormalization group. This technique was employed in the past on similar problems as well [3, 21, 22, 23, 24, 25, 26]. We apply it here in a perturbative setting, which is based on expressing Green functions as series in the coupling constants of the theory. The perturbation theory of the model is then constructed using the well-known Feynman diagrammatic rules [3, 12, 13]. The part of the action (13) quadratic in the fields determines the bare propagators, which in the frequency-momentum representation take the form
\[\langle\psi_{A}\psi^{\dagger}_{A}\rangle_{0} =\frac{1}{-i\omega+\nu_{0}u_{A0}k^{2}}, \langle\psi_{B}\psi^{\dagger}_{B}\rangle_{0} =\frac{1}{-i\omega+\nu_{0}u_{B0}k^{2}}, \tag{15}\] \[\langle v_{i}v_{j}\rangle_{0} =\frac{D_{0}k^{2}P_{ij}(\mathbf{k})}{\omega^{2}+\nu_{0}^{2}k^{4}}, \langle v_{i}v^{\prime}_{j}\rangle_{0} =\frac{P_{ij}(\mathbf{k})}{-i\omega+\nu_{0}k^{2}}. \tag{16}\]
The nonlinear terms determine interaction vertices with associated vertex factors [12]. They can be calculated with the help of the formula
\[V_{N}(x_{1},\ldots,x_{N};\varphi)=\frac{\delta^{N}\mathcal{S}_{\rm int}}{\delta \varphi(x_{1})\ldots\delta\varphi(x_{N})},\quad\varphi\in\{\psi_{A},\psi_{A}^{ \dagger},\psi_{B},\psi_{B}^{\dagger},\mathbf{v},\mathbf{v}^{\prime}\}, \tag{17}\]
where \(\mathcal{S}_{\rm int}\) corresponds to the non-linear terms of the action (13). In a straightforward manner, we get the following bare vertex factors that do not involve the velocity field
\[V_{\psi_{A}^{\dagger}\psi_{A}\psi_{A}} =-2\lambda_{0}\nu_{0}u_{A0}, V_{\psi_{B}^{\dagger}\psi_{B}\psi_{A}} =-\lambda_{0}^{\prime}\nu_{0}u_{A0}Q,\] \[V_{\psi_{A}^{\dagger}\psi_{A}^{\dagger}\psi_{A}\psi_{A}} =-4\lambda_{0}\nu_{0}u_{A0}, V_{\psi_{A}^{\dagger}\psi_{B}^{\dagger}\psi_{A}\psi_{B}} =-\lambda_{0}^{\prime}\nu_{0}u_{A0}. \tag{18}\]
On the other hand, there are three additional vertices that include the velocity field
\[V_{\psi_{A}^{\dagger}(\mathbf{k})\psi_{A}v_{j}}=ik_{j},\quad V_{\psi_{B}^{\dagger}(\mathbf{k})\psi_{B}v_{j}}=ik_{j},\quad V_{v_{i}^{\prime}(\mathbf{k})v_{l}v_{j}}=i(k_{l}\delta_{ij}+k_{j}\delta_{il}). \tag{19}\]
The first two describe the advection processes, whereas the last vertex is responsible for the interactions between velocity fluctuations. We have also indicated explicitly which field carries the momentum entering a given interaction vertex. For instance, in the expression for the vertex factor \(V_{v_{i}^{\prime}(\mathbf{k})v_{l}v_{j}}\) the momentum \(\mathbf{k}\) is carried by the response field \(v_{i}^{\prime}\)[11, 12].
## 3 Renormalization of the model
The analysis of UV divergences starts with the determination of the canonical dimensions of the model parameters. In dynamical models, there are two independent scales that need to be considered [3, 12]: a frequency and a momentum scale (time and length). Any quantity \(F\) is then characterized by both a frequency dimension \(d_{F}^{\omega}\) and a momentum dimension \(d_{F}^{k}\). Canonical dimensions are determined from the normalization conditions
\[d_{k}^{k}=-d_{x}^{k}=1,\ d_{\omega}^{\omega}=-d_{t}^{\omega}=1,\ d_{k}^{\omega }=d_{\omega}^{k}=0, \tag{20}\]
and the fact that the action functional has to be a dimensionless quantity [12]. The total canonical dimension of any \(F\) is then given as \(d_{F}=d_{F}^{k}+2d_{F}^{\omega}\) (because of \(\partial_{t}\propto\partial^{2}\) proportionality in quadratic part of the action functional). Canonical dimensions of all the fields and parameters of model (13) are listed in Tab. 1.
There are altogether five charges (coupling constants) of the theory
\[g_{0}=\frac{D_{0}}{\nu_{0}^{3}},\ u_{A0},\ u_{B0},\ \lambda_{0},\ \lambda_{0}^{\prime}. \tag{21}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \(F\) & \(\psi_{A}\), \(\psi_{B}\) & \(\psi_{A}^{\dagger}\), \(\psi_{B}^{\dagger}\) & \(\mathbf{v}\) & \(\mathbf{v}^{\prime}\) & \(\lambda_{0}\), \(\lambda_{0}^{\prime}\), \(g_{0}\) & \(Q\) & \(a_{0}\), \(b_{0}\) & \(\nu_{0}\) & \(D_{0}\) & \(u_{A0}\), \(u_{B0}\) \\ \hline \(d_{F}^{k}\) & \(d\) & \(0\) & \(-1\) & \(d+1\) & \(2-d\) & \(0\) & \(d\) & \(-2\) & \(-d-4\) & \(0\) \\ \hline \(d_{F}^{\omega}\) & \(0\) & \(0\) & \(1\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(1\) & \(3\) & \(0\) \\ \hline \(d_{F}\) & \(d\) & \(0\) & \(1\) & \(d-1\) & \(2-d\) & \(0\) & \(d\) & \(0\) & \(2-d\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 1: Canonical dimensions of the fields and parameters of the model.
In the space dimension \(d=2\), all of these charges become simultaneously dimensionless and the model becomes logarithmic. Therefore this dimension is identified as an upper critical dimension \(d_{c}\) of the model. In dimensional regularisation, the UV divergences manifest themselves as poles in expansion parameter \(\varepsilon=2-d\), whereas the IR divergences are regulated by the sharp cutoff at \(k=m\), which is an analog of the inverse of integral turbulence scale \(L=1/m\). Let us note that the latter divergences do not affect renormalization constants [12].
Probably the most economical way to renormalize a translationally invariant model is through the renormalization of its one-particle irreducible (1PI) Green functions. These constitute a restricted class of Feynman diagrams, namely those that remain connected after any single internal line is cut [3, 12]. An arbitrary 1PI Green function will be denoted as \(\Gamma_{\{\varphi\}}=\langle\varphi\ldots\varphi\rangle_{1PI}\), where \(\varphi\in\Psi\cup\Phi\) denotes an arbitrary field from the full set of fields of the model (13). Its total canonical dimension is given by the general formula [12, 13]
\[d_{\Gamma}=d+2-\sum_{\varphi}N_{\varphi}d_{\varphi}, \tag{22}\]
where the sum runs through all the types of fields \(\varphi\), \(N_{\varphi}\) denotes the number of times the given field appears in the particular 1PI function, and \(d_{\varphi}\) is its canonical dimension. Following the standard approach [12], the task is to identify the superficial divergences in the 1PI functions and to construct the renormalized action, in which the additional counter-terms introduced ensure the removal of these divergences in the given order of perturbation theory.
The UV divergences, which require further treatment, are identified with those 1PI Green functions that possess a non-negative formal index of divergence \(\delta_{\Gamma}=d_{\Gamma}|_{\varepsilon=0}\). However, for the present case, this statement has to be adjusted based on the following considerations. First, the 1PI functions not involving any of the response fields \(\psi_{B}^{\dagger},\psi_{A}^{\dagger},\mathbf{v}^{\prime}\) as external fields vanish, as they necessarily contain closed cycles of causal propagators [12]. Since the vertex factor \(V_{\mathbf{v}^{\prime}\mathbf{v}\mathbf{v}}\) is proportional to the momentum carried by the field \(\mathbf{v}^{\prime}\) (see the corresponding expression in (19)), every instance of \(\mathbf{v}^{\prime}\) appearing as an external field lowers the overall index of divergence. Thus the real index of divergence is defined as
\[\tilde{\delta}_{\Gamma}=\delta_{\Gamma}-N_{\mathbf{v}^{\prime}}. \tag{23}\]
Second, the number of counter-terms is further reduced because of the invariance of the generating functional of the model (13) with respect to Galilean transformations. This symmetry implies that the function \(\langle\mathbf{v}^{\prime}\mathbf{v}\mathbf{v}\rangle_{1PI}\) does not diverge (for further discussions on the subject see e.g. [11, 12, 27]). Taking these facts into account, along with the available diagrammatic elements and the transversality of the velocity field, we can identify the following irreducible functions with superficial UV divergences
\[\langle\mathbf{v}^{\prime}\mathbf{v}^{\prime}\rangle_{1PI}, \langle\psi_{A}^{\dagger}\psi_{A}\psi_{A}\rangle_{1PI},\] \[\langle\mathbf{v}^{\prime}\mathbf{v}\rangle_{1PI}, \langle\psi_{B}^{\dagger}\psi_{B}\psi_{A}\rangle_{1PI},\] \[\langle\psi_{A}^{\dagger}\psi_{A}\rangle_{1PI}, \langle\psi_{A}^{\dagger}\psi_{A}^{\dagger}\psi_{A}\psi_{A} \rangle_{1PI},\] \[\langle\psi_{B}^{\dagger}\psi_{B}\rangle_{1PI}, \langle\psi_{B}^{\dagger}\psi_{A}^{\dagger}\psi_{B}\psi_{A} \rangle_{1PI}. \tag{24}\]
All of these have the form that is already present in the bare action functional. This implies that the model is multiplicatively renormalizable.
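The power counting behind the selection (24) can be reproduced with a few lines of code. The sketch below (an illustration only; the species \(A\) and \(B\) are collapsed since their fields carry identical dimensions) evaluates \(d_{\Gamma}\) from Eq. (22) at \(d=2\) and the real index \(\tilde{\delta}_{\Gamma}\) of Eq. (23), using the total canonical dimensions of Table 1; all listed functions indeed come out with non-negative indices.

```python
# Power counting at the upper critical dimension d = 2 (epsilon = 0).
# Total canonical dimensions d_F = d_F^k + 2*d_F^omega of the fields,
# taken from Table 1 and evaluated at d = 2.
d = 2
dim = {'psi': d, 'psi+': 0, 'v': 1, "v'": d - 1}

# 1PI functions of (24), given as multisets of their external fields
functions = {
    "<v'v'>":              {"v'": 2},
    "<v'v>":               {"v'": 1, 'v': 1},
    "<psi+ psi>":          {'psi+': 1, 'psi': 1},
    "<psi+ psi psi>":      {'psi+': 1, 'psi': 2},
    "<psi+ psi+ psi psi>": {'psi+': 2, 'psi': 2},
}

for name, fields in functions.items():
    d_gamma = d + 2 - sum(n * dim[f] for f, n in fields.items())   # Eq. (22)
    delta_tilde = d_gamma - fields.get("v'", 0)                    # Eq. (23)
    print(f"{name:22s}  formal index = {d_gamma}   real index = {delta_tilde}")
```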
The total renormalized action takes the form
\[S_{R} =\psi_{A}^{\dagger}\left(-\partial_{t}+Z_{1}u_{A}\nu\partial^{2} \right)\psi_{A}+\psi_{B}^{\dagger}\left(-\partial_{t}+Z_{2}u_{B}\nu\partial^{2} \right)\psi_{B}+\frac{v_{i}^{\prime}\mu^{\epsilon}Z_{3}D_{ij}v_{j}^{\prime}}{2}\] \[+v_{i}^{\prime}(-\partial_{t}+Z_{4}\nu\partial^{2})v_{i}-u_{A} \nu\lambda\mu^{\epsilon}Z_{5}\left[\psi_{A}^{\dagger}+\psi_{A}^{\dagger 2}\right]\psi_{A}^{2}- \lambda^{\prime}u_{A}\nu\mu^{\epsilon}Z_{6}\left[Q\psi_{A}+\psi_{A}^{\dagger} \psi_{A}\right]\psi_{B}^{\dagger}\psi_{B}\] \[-v_{i}^{\prime}(\mathbf{v}\cdot\mathbf{\nabla})v_{i}-\left[\psi_{A}^{ \dagger}(\mathbf{v}\cdot\mathbf{\nabla})\psi_{A}+\psi_{B}^{\dagger}(\mathbf{v}\cdot\mathbf{ \nabla})\psi_{B}\right]+\delta(t)\left(\psi_{A}^{\dagger}\;a_{0}+\psi_{B}^{ \dagger}\;b_{0}\right), \tag{25}\]
and was obtained from the bare action (13) by introducing the following renormalization of fields and parameters of the model
\[\varphi \to Z_{\varphi}\varphi, u_{A0} \to Z_{u_{A}}u_{A}, u_{B0} \to Z_{u_{B}}u_{B}, \nu_{0} \to Z_{\nu}\nu,\] \[g_{0} \to\mu^{\varepsilon}Z_{g}g, \lambda_{0} \to\mu^{\varepsilon}Z_{\lambda}\lambda, \lambda^{\prime}_{0} \to\mu^{\varepsilon}Z_{\lambda^{\prime}}\lambda^{\prime}. \tag{26}\]
Here, \(\varphi\in\{\psi_{A},\psi_{A}^{\dagger},\psi_{B},\psi_{B}^{\dagger},\mathbf{v}, \mathbf{v}^{\prime}\}\), \(\mu\) is an arbitrary momentum scale and \(Z_{F}\) denotes the corresponding renormalization constant.
By direct inspection, we get relations between renormalization constants in the renormalized action (25) and RG constants (26)
\[Z_{u_{A}} =Z_{1}Z_{4}^{-1},\quad Z_{u_{B}} =Z_{2}Z_{4}^{-1},\quad Z_{\lambda} =Z_{5}Z_{1}^{-1},\quad Z_{\lambda^{\prime}} =Z_{6}Z_{1}^{-1},\] \[Z_{g} =Z_{3}Z_{4}^{-3},\quad Z_{\nu} =Z_{4},\quad Z_{\Psi} =Z_{Q}=1. \tag{27}\]
The explicit form of RG constants \(Z_{1}-Z_{6}\) is calculated from the one-loop 1PI Feynman diagrams using dimensional regularisation and a minimal subtraction scheme. The final expressions read
\[Z_{1} =1-\frac{\hat{g}}{4u_{A}(u_{A}+1)\varepsilon}, Z_{2} =1-\frac{\hat{g}}{4u_{B}(u_{B}+1)\varepsilon}, Z_{3} =Z_{4} =1-\frac{\hat{g}}{16\varepsilon},\] \[Z_{5} =1+\frac{\hat{\lambda}}{\varepsilon}, Z_{6} =1+\frac{\hat{\lambda}^{\prime}u_{A}}{(u_{A}+u_{B})\varepsilon}, \tag{28}\]
where \(\hat{F}\equiv FS_{d}/(2\pi)^{d}\), \(S_{d}=2\pi^{d/2}/\Gamma(d/2)\) is the area of unit \(d\)-dimensional sphere, and \(\Gamma(x)\) is Euler's gamma function.
## 4 RG functions and scaling regimes
Once the calculation of the RG constants is accomplished, it is possible to analyze the asymptotic behavior of the system. The fundamental equation that governs the behavior of the renormalized Green functions is formulated with the help of the RG operator, which in the present case reads
\[D_{RG}=\mu\partial_{\mu}+\sum_{e}\beta_{e}\partial_{e}-\gamma_{\nu}\nu \partial_{\nu}, \tag{29}\]
where the given sum runs through all charges of the theory \(e=\{g,u_{A},u_{B},\lambda^{\prime},\lambda\}\). The coefficient functions are defined as
\[\beta_{e}=\mu\frac{\partial e}{\partial\mu}\bigg{|}_{0},\quad\gamma_{F}=\frac{ \partial\ln Z_{F}}{\partial\ln\mu}\bigg{|}_{0}, \tag{30}\]
for any parameter \(F\), and \(|_{0}\) means that the bare parameters are held constant during the evaluation. For the model (13), we have altogether five beta functions
\[\beta_{g}= -g(\varepsilon+\gamma_{g}), \beta_{u_{A}}=-u_{A}\gamma_{u_{A}}, \beta_{u_{B}}=-u_{B}\gamma_{u_{B}},\] \[\beta_{\lambda}= -\lambda(\varepsilon+\gamma_{\lambda}), \beta_{\lambda^{\prime}}=-\lambda^{\prime}(\varepsilon+\gamma_{ \lambda^{\prime}}), \tag{31}\]
with corresponding anomalous dimensions (by definition Eq. (30))
\[\gamma_{g}= -\frac{\hat{g}}{8}, \gamma_{u_{i}}=\hat{g}\bigg{(}\frac{1}{4u_{i}(1+u_{i})}-\frac{1 }{16}\bigg{)};i\in\{A,B\},\] \[\gamma_{\lambda}= -\hat{\lambda}-\frac{\hat{g}}{4u_{A}(1+u_{A})}, \gamma_{\lambda^{\prime}}=-\hat{\lambda^{\prime}}\frac{u_{A}}{u_ {A}+u_{B}}-\frac{\hat{g}}{4u_{A}(1+u_{A})}, \tag{32}\]
where the higher order corrections \(\hat{g}^{2},\hat{\lambda}^{2}\) are neglected in the one-loop approximation.
The long-time asymptotic behavior of the model is governed by the IR stable fixed points (FP) [12, 13] of beta functions. These are such points \(e^{*}=(g^{*},u_{A}^{*},u_{B}^{*},\lambda^{*},\lambda^{\prime*})\) in coupling constant space that satisfy
\[\beta_{g}(e^{*})=\beta_{u_{A}}(e^{*})=\beta_{u_{B}}(e^{*})=\beta_{\lambda}(e^ {*})=\beta_{\lambda^{\prime}}(e^{*})=0. \tag{33}\]
IR stability is determined by the eigenvalues of the matrix of the first derivatives
\[\Omega_{ij}=\frac{\partial\beta_{i}}{\partial e_{j}}\bigg{|}_{e^{*}}, \tag{34}\]
where the index \(i\) and the charge \(e_{j}\) belong to the set \(\{g,u_{A},u_{B},\lambda^{\prime},\lambda\}\). For IR stable regimes, the eigenvalues of the matrix (34) must have positive real parts. We have found eight FPs; however, only two of them are IR stable (see Tab. 2). These are
1. Gaussian fixed point (FP1): \(g^{*}=0\), \(u_{A}^{*}=\) arbitrary, \(u_{B}^{*}=\) arbitrary, \(\lambda^{*}=0\), \(\lambda^{\prime*}=0\). IR stable for \(\varepsilon<0\).
2. Thermal fixed point (FP8): \(g^{*}=8\varepsilon\), \(u_{A}^{*}=u_{B}^{*}=(\sqrt{17}-1)/2\), \(\lambda^{*}=\varepsilon/2\), \(\lambda^{\prime*}=\varepsilon\). IR stable for \(\varepsilon>0\).
Let us note that at the non-trivial (thermal) FP both velocity fluctuations and reaction interactions are simultaneously IR-relevant. The RG also predicts an FP for which only the reaction processes are relevant (FP4). However, even though it would have been stable without the velocity field [16], it can never be truly IR stable in the presence of thermal fluctuations, which are inevitable in practice. A similar conclusion was obtained in the past for a different reaction-diffusion model [24]. On the borderline of the two regimes, i.e. for the case \(\varepsilon=0\), the couplings of the theory become marginally irrelevant, and logarithmic corrections are expected to appear in the expressions for the Green functions. Based on the standard analysis [12], we predict that these corrections will differ from the ones that would be realized if the velocity field were not present (and FP4 were stable). Therefore the behavior of two-dimensional systems is also expected to be affected by the presence of velocity-field fluctuations. The proof of this statement is deferred to future work.
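The quoted coordinates of the thermal fixed point and its IR stability can be verified numerically from the one-loop expressions (31)-(32). The following sketch is only a consistency check under the assumption that the geometric factor \(S_{d}/(2\pi)^{d}\) is absorbed into the couplings; it solves \(\beta_{e}(e^{*})=0\) for an illustrative value \(\varepsilon=0.1\) and evaluates the eigenvalues of the stability matrix \(\Omega\) of Eq. (34) by finite differences.

```python
import numpy as np
from scipy.optimize import fsolve

eps = 0.1  # illustrative value of epsilon = 2 - d

def betas(x):
    # x = (g, uA, uB, lam, lamp), with S_d/(2*pi)^d absorbed into the couplings
    g, uA, uB, lam, lamp = x
    gamma_g = -g / 8
    gamma_uA = g * (1 / (4 * uA * (1 + uA)) - 1 / 16)
    gamma_uB = g * (1 / (4 * uB * (1 + uB)) - 1 / 16)
    gamma_lam = -lam - g / (4 * uA * (1 + uA))
    gamma_lamp = -lamp * uA / (uA + uB) - g / (4 * uA * (1 + uA))
    return np.array([-g * (eps + gamma_g),
                     -uA * gamma_uA,
                     -uB * gamma_uB,
                     -lam * (eps + gamma_lam),
                     -lamp * (eps + gamma_lamp)])

# Thermal fixed point FP8: g*=8 eps, uA*=uB*=(sqrt(17)-1)/2, lam*=eps/2, lamp*=eps
guess = np.array([8 * eps, 1.5, 1.5, eps / 2, eps])
fp = fsolve(betas, guess)
print("fixed point:", fp)

# Stability matrix Omega_ij = d beta_i / d e_j, Eq. (34), by finite differences
h = 1e-7
omega = np.array([(betas(fp + h * np.eye(5)[j]) - betas(fp)) / h
                  for j in range(5)]).T
print("eigenvalues of Omega:", np.linalg.eigvals(omega))
```

The solver reproduces \(g^{*}=8\varepsilon\), \(u_{A}^{*}=u_{B}^{*}=(\sqrt{17}-1)/2\), \(\lambda^{*}=\varepsilon/2\), \(\lambda^{\prime*}=\varepsilon\), and all eigenvalues of \(\Omega\) come out positive, consistent with the IR stability of FP8 for \(\varepsilon>0\).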
## 5 Conclusion
We have investigated the influence of thermal fluctuations on a reaction-diffusion system with reactions \(A+A\rightarrow(\emptyset,A)\), \(A+B\to A\). Using the field-theoretic formulation of the model we have analyzed possible macroscopic behavior utilizing the renormalization group approach. In particular, we have renormalized the model to the one-loop order of the perturbation scheme. The RG analysis revealed the existence of two IR-stable FPs which govern the long-time behavior of the system.
**Conflict of Interest**
The authors declare that they have no conflicts of interest.
## Acknowledgment
The work was supported by VEGA grant No. 1/0535/21 of the Ministry of Education, Science, Research and Sport of the Slovak Republic.
|
2307.10078 | A Dual Formulation for Probabilistic Principal Component Analysis | In this paper, we characterize Probabilistic Principal Component Analysis in
Hilbert spaces and demonstrate how the optimal solution admits a representation
in dual space. This allows us to develop a generative framework for kernel
methods. Furthermore, we show how it englobes Kernel Principal Component
Analysis and illustrate its working on a toy and a real dataset. | Henri De Plaen, Johan A. K. Suykens | 2023-07-19T15:51:25Z | http://arxiv.org/abs/2307.10078v1 | # A Dual Formulation for Probabilistic Principal Component Analysis
###### Abstract
In this paper, we characterize _Probabilistic Principal Component Analysis_ in Hilbert spaces and demonstrate how the optimal solution admits a representation in dual space. This allows us to develop a generative framework for kernel methods. Furthermore, we show how it englobes _Kernel Principal Component Analysis_ and illustrate its working on a toy and a real dataset.
Machine Learning, Probabilistic Principal Component Analysis
## 1 Introduction
Classical datasets often consist of many features, making dimensionality reduction methods particularly appealing. _Principal Component Analysis_ (PCA) is one of the most straightforward frameworks to that goal and it is hard to find a domain in machine learning or statistics where it has not proven to be useful. PCA considers new decorrelated features by computing the eigendecomposition of the covariance matrix.
Probabilistic models, on the other hand, contribute to building a stronger foundation for machine learning models. By considering models as probability distributions, we are able to natively access notions such as variance or sampling, _i.e._ generation. A probabilistic approach to PCA, known as _Probabilistic Principal Component Analysis_ (Prob. PCA), has been formulated by (Tipping & Bishop, 1999). Its principles can be visualized in the primal part of Table 1.
Even endowed with a probabilistic interpretation, PCA remains restricted to linear relations between the different features. _Kernel Principal Component Analysis_ (KPCA) (Mika et al., 1998; Scholkopf et al., 1998) was an attempt to give a non-linear extension to (non-probabilistic) PCA by decomposing a kernel matrix instead of the covariance matrix. An earlier attempt to give a probabilistic formulation of KPCA was made by (Zhang et al., 2004). As developed further below, the latter model does not constitute a kernel equivalent of Prob. PCA, but rather another model based on similar principles.
More recently, _Restricted Kernel Machines_ (Suykens, 2017) opened a new door for a probabilistic version of PCA both in primal and dual. They essentially use the Fenchel-Young inequality on a variational formulation of KPCA (Suykens et al., 2003; Alaiz et al., 2018) to obtain an energy function closely resembling _Restricted Boltzmann Machines_. The framework has been further extended to generation (Schreurs & Suykens, 2018; Winant et al., 2020), incorporating robustness (Pandey et al., 2020), multi-view models (Pandey et al., 2021), deep explicit feature maps (Pandey et al., 2022b) or time-series (Pandey et al., 2022a).
### Contributions
1. We characterize the Prob. PCA framework in Hilbert spaces and give a dual interpretation to the model.
2. We develop a new extension of KPCA incorporating a noise assumption on the explicit feature map.
3. We give a probabilistic interpretation of the generation in KPCA.
4. We illustrate how the dual model works on a toy and a real dataset and show its connections to KPCA (Footnote 2). Footnote 2: Resources: [https://hdeplaen.github.io/kppca](https://hdeplaen.github.io/kppca).
## 2 Primal and Dual Spaces
The key idea behind the duality in PCA is that outer and inner products share the same non-zero eigenvalues. The consequence is that instead of decomposing the covariance matrix of any given feature map, we can decompose the associated Gram matrix, _i.e._ the kernel matrix. The former is considered as the _primal_ formulation and the latter as the _dual_ formulation, and both are equivalent. Extending Prob. PCA to a dual formulation is however not straightforward: while all feature maps have an associated kernel, the converse is trickier. Some kernels correspond to feature maps in infinite dimensional spaces, where probability distributions cannot be properly defined. We therefore need to choose well-defined finite-dimensional subspaces to work in and consider linear operators instead of matrices. All formal definitions, propositions and proofs are provided in Appendix A.
### Primal Spaces
**Feature Space \(\mathcal{H}\).** Given an input space \(\mathcal{X}\), we first consider any feature map \(\varphi:\mathcal{X}\to\mathcal{H}\). Following (Alaiz et al., 2018), we will consider a separable, possibly infinite dimensional, Hilbert space \((\mathcal{H},\langle\cdot,\cdot\rangle_{\mathcal{H}})\). By \(\boldsymbol{\varphi}\), we denote an element of \(\mathcal{H}\) and its adjoint by \(\boldsymbol{\varphi}^{*}=\langle\boldsymbol{\varphi},\cdot\rangle\in\mathcal{ H}^{*}\), with \(\mathcal{H}^{*}\sim\mathcal{H}\) its Frechet-Riesz dual space. Essentially, it corresponds to the transpose \(\boldsymbol{\varphi}^{\top}\) in real, finite dimensional spaces as \(\boldsymbol{\varphi}_{1}^{\top}\boldsymbol{\varphi}_{2}=\langle\boldsymbol{ \varphi}_{1},\boldsymbol{\varphi}_{2}\rangle_{\mathcal{H}}\), but generalizes it for the possibly infinite dimensional spaces that will be necessary for the introduction of kernels. Furthermore, we assume our space to be defined over the reals such that \(\langle\cdot,\cdot\rangle_{\mathcal{H}}:\mathcal{H}\times\mathcal{H}\to \mathbb{R}\) and its inner product is symmetric \(\langle\boldsymbol{\varphi}_{1},\boldsymbol{\varphi}_{2}\rangle_{\mathcal{H}}= \langle\boldsymbol{\varphi}_{2},\boldsymbol{\varphi}_{1}\rangle_{\mathcal{H}}\). If \(\mathcal{H}\) is of finite dimension \(d\), we can therefore identify its canonical basis \(\boldsymbol{u}_{1},\ldots,\boldsymbol{u}_{d}\) with the canonical basis of \(\mathbb{R}^{d}\).
**Finite Feature Space \(\mathcal{H}_{\mathcal{E}}\).** Considering a set of \(N\) observations \(\{\boldsymbol{x}_{i}\in\mathcal{X}\}_{i=1}^{N}\), the idea is to work directly in \(\mathcal{H}\) by considering instead the feature map of the datapoints \(\boldsymbol{\varphi}_{i}=\varphi\left(\boldsymbol{x}_{i}\right)\). We can however not define a normal distribution onto the full \(\mathcal{H}\) yet as it is possibly infinite dimensional. We therefore have to consider a finite subspace \(\mathcal{H}_{\mathcal{E}}\subset\mathcal{H}\). A natural choice would be \(\mathcal{H}_{\mathcal{E}}=\operatorname{span}\left\{\boldsymbol{\varphi}_{1}, \ldots,\boldsymbol{\varphi}_{N}\right\}\). We now first have to find an orthonormal basis for \(\mathcal{H}_{\mathcal{E}}\).
### Dual Spaces
**Kernels.** For each feature map, there is an induced positive semi-definite kernel \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}:k\left(\boldsymbol{x},\boldsymbol {y}\right)=\langle\varphi(\boldsymbol{x}),\varphi(\boldsymbol{y})\rangle_{ \mathcal{H}}=\varphi(\boldsymbol{x})^{*}\varphi(\boldsymbol{y})\). Inversely, to each positive semi-definite kernel corresponds a, possibly infinite dimensional, feature map, even if not explicitly defined. This follows from the theory of _Reproducing Kernel Hilbert Spaces_. We refer to (Scholkopf & Smola, 2001) for further info.
**Kernel Space \(\mathcal{E}\).** We now consider a finite dimensional Hilbert space \((\mathcal{E},\langle\cdot,\cdot\rangle_{\mathcal{E}})\) of dimension \(N\), the number of observations. It is defined similarly as above, with orthonormal basis \(\boldsymbol{e}_{1},\ldots,\boldsymbol{e}_{N}\). The basis also defines the identity over \(\mathcal{E}\) as \(\boldsymbol{I}_{\mathcal{E}}=\sum_{i=1}^{N}e_{i}\boldsymbol{e}_{i}^{*}\). The goal for \(\mathcal{E}\) is to represent the space of the kernel representations. We therefore define the linear operator \(\boldsymbol{\Phi}:\mathcal{E}\to\mathcal{H}:\sum_{i=1}\boldsymbol{\varphi}_{i }\boldsymbol{e}_{i}^{*}\) and its adjoint \(\boldsymbol{\Phi}^{*}:\mathcal{H}\to\mathcal{E}:\sum_{i=1}^{N}\boldsymbol{e} _{i}\boldsymbol{\varphi}_{i}^{*}\). Essentially, \(\boldsymbol{\Phi}^{*}\) returns the kernel value with each datapoint: \(\boldsymbol{\Phi}^{*}\varphi(\boldsymbol{x})=\sum_{i=1}^{N}\boldsymbol{e}_{i }\left(\boldsymbol{\varphi}_{i}^{*}\varphi(\boldsymbol{x})\right)=\sum_{i=1}^ {N}\boldsymbol{e}_{i}k\left(\boldsymbol{x}_{i},\boldsymbol{x}\right)\) for any \(\boldsymbol{x}\in\mathcal{X}\). Similarly, \(\boldsymbol{\Phi}\) projects this value back as a linear combination of the different \(\boldsymbol{\varphi}_{i}\)'s, thus mapping back to \(\mathcal{H}_{\mathcal{E}}\subset\mathcal{H}\). For this reason, the covariance \(\boldsymbol{\Phi}\circ\boldsymbol{\Phi}^{*}=\sum_{i=1}^{N}\boldsymbol{\varphi}_ {i}\boldsymbol{\varphi}_{i}^{*}\) acts as a projector from \(\mathcal{H}\to\mathcal{H}_{\mathcal{E}}\). Its eigenvectors therefore form an orthonormal basis of the finite feature space \(\mathcal{H}_{\mathcal{E}}\)
\begin{table}
\begin{tabular}{l l l l}
**Distribution** & **Interpretation** & **Primal (features)** & **Dual (kernels)** \\ latent \(|\) observation & latent projection & \(\boldsymbol{h}|\boldsymbol{\phi}\sim\mathcal{N}\big{(}\boldsymbol{\Sigma}_{\boldsymbol{h}|\boldsymbol{\phi}}^{-1}\circ\boldsymbol{W}_{\mathrm{ML}}^{*}(\boldsymbol{\phi}-\boldsymbol{\phi}_{c}),\sigma^{2}\boldsymbol{\Sigma}_{\boldsymbol{h}|\boldsymbol{\phi}}^{-1}\big{)}\) & \(\boldsymbol{h}|\boldsymbol{k}_{c}\sim\mathcal{N}\big{(}\boldsymbol{\Sigma}_{\boldsymbol{h}|\boldsymbol{k}_{c}}^{-1}\circ\boldsymbol{A}_{\mathrm{ML}}^{*}\boldsymbol{k}_{c},\sigma^{2}\boldsymbol{\Sigma}_{\boldsymbol{h}|\boldsymbol{k}_{c}}^{-1}\big{)}\) \\ observation \(|\) latent & latent-based generation & \(\boldsymbol{\phi}|\boldsymbol{h}\sim\mathcal{N}\big{(}\boldsymbol{W}_{\mathrm{ML}}\boldsymbol{h}+\boldsymbol{\phi}_{c},\sigma^{2}\boldsymbol{I}_{\mathcal{H}_{\mathcal{E}}}\big{)}\) & \(\boldsymbol{k}_{c}|\boldsymbol{h}\sim\mathcal{N}\big{(}(\boldsymbol{\Phi}_{c}^{*}\circ\boldsymbol{\Phi}_{c})\circ\boldsymbol{A}_{\mathrm{ML}}\boldsymbol{h},\sigma^{2}\boldsymbol{\Phi}_{c}^{*}\circ\boldsymbol{\Phi}_{c}\big{)}\) \\ latent & latent prior & \(\boldsymbol{h}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{I}_{\mathcal{L}})\) & \(\boldsymbol{h}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{I}_{\mathcal{L}})\) \\ observation & absolute generation & \(\boldsymbol{\phi}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{W}_{\mathrm{ML}}\circ\boldsymbol{W}_{\mathrm{ML}}^{*}+\sigma^{2}\boldsymbol{I}_{\mathcal{H}_{\mathcal{E}}})\) & \(\boldsymbol{k}_{c}\sim\mathcal{N}\big{(}\boldsymbol{0},\boldsymbol{A}_{\mathrm{ML}}\circ\boldsymbol{A}_{\mathrm{ML}}+\sigma^{2}\left(\boldsymbol{\Phi}_{c}^{*}\circ\boldsymbol{\Phi}_{c}\right)^{-1}\big{)}\) \\ \end{tabular}
\end{table}
Table 1: Interpretation of the different distributions of the Prob. PCA framework after training, in both primal and dual formulations. The covariance operators are given by \(\boldsymbol{\Sigma}_{\boldsymbol{h}|\boldsymbol{\phi}}=\boldsymbol{W}_{\mathrm{ML}}^{*}\circ\boldsymbol{W}_{\mathrm{ML}}+\sigma^{2}\boldsymbol{I}_{\mathcal{L}}\) and \(\boldsymbol{\Sigma}_{\boldsymbol{h}|\boldsymbol{k}_{c}}=\boldsymbol{A}_{\mathrm{ML}}^{*}\circ(\boldsymbol{\Phi}_{c}^{*}\circ\boldsymbol{\Phi}_{c})\circ\boldsymbol{A}_{\mathrm{ML}}+\sigma^{2}\boldsymbol{I}_{\mathcal{L}}\), with \(\boldsymbol{W}_{\mathrm{ML}}\) and \(\boldsymbol{A}_{\mathrm{ML}}\) the maximum likelihood estimators for the primal and dual interconnection operators.
Figure 1: Global overview of the Probabilistic Principal Component Analysis in both primal and dual formulations. The primal, or feature, spaces \(\mathcal{H}\), \(\mathcal{H}_{\mathcal{E}}\) and \(\mathcal{H}_{\mathcal{L}}\) are in blue. The dual, or kernel and latent, spaces \(\mathcal{E}\) and \(\mathcal{L}\) are in brown. The input space \(\mathcal{X}\) is in green. The color of the applications (arrows) is just for readability and has nothing to do with the color of the spaces.
which acts as the primal equivalent of the kernel space \(\mathcal{E}\).
**Centered Kernels.** In most applications however, we prefer to work with the centered feature map, which we define as \(\varphi_{c}(\cdot)=\varphi(\cdot)-\mathbf{\varphi}_{c}\) with \(\mathbf{\varphi}_{c}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{\varphi}_{i}\). We denote the associated centered kernel by \(k_{c}:\mathcal{X}\times\mathcal{X}\to\mathbb{R}:k_{c}(\mathbf{x}_{1},\mathbf{x}_{2})=\varphi_{c}(\mathbf{x}_{1})^{*}\varphi_{c}(\mathbf{x}_{2})\). This leads to the definition of a new centered operator \(\mathbf{\Phi}_{c}=\sum_{i=1}^{N}(\mathbf{\varphi}_{i}-\mathbf{\varphi}_{c})\mathbf{e}_{i}^{*}=\mathbf{\Phi}\left(\mathbf{I}_{\mathcal{E}}-\frac{1}{N}\mathbf{1}_{\mathcal{E}\times\mathcal{E}}\right)\), with \(\mathbf{1}_{\mathcal{E}\times\mathcal{E}}=\sum_{i,j=1}^{N}\mathbf{e}_{i}\mathbf{e}_{j}^{*}\). As always, we also consider its adjoint \(\mathbf{\Phi}_{c}^{*}\). Considering the dual operator, we have \(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}=\sum_{i,j=1}^{N}(\mathbf{\varphi}_{i}-\mathbf{\varphi}_{c})^{*}(\mathbf{\varphi}_{j}-\mathbf{\varphi}_{c})\mathbf{e}_{i}\mathbf{e}_{j}^{*}=\sum_{i,j=1}^{N}k_{c}(\mathbf{x}_{i},\mathbf{x}_{j})\mathbf{e}_{i}\mathbf{e}_{j}^{*}\). We notice now that \(\mathcal{H}_{\mathcal{E}}=\mathrm{span}\{\mathbf{\varphi}_{1},\dots,\mathbf{\varphi}_{N}\}=\mathrm{span}\{\mathbf{\varphi}_{1}-\mathbf{\varphi}_{c},\dots,\mathbf{\varphi}_{N}-\mathbf{\varphi}_{c}\}\) because \(\mathbf{\varphi}_{c}\) is a linear combination of the elements of the basis. Therefore, the primal operator \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}=\sum_{i=1}^{N}(\mathbf{\varphi}_{i}-\mathbf{\varphi}_{c})(\mathbf{\varphi}_{i}-\mathbf{\varphi}_{c})^{*}\) also acts as a projector from \(\mathcal{H}\to\mathcal{H}_{\mathcal{E}}\) and we can choose its eigenvectors instead as an orthonormal basis of \(\mathcal{H}_{\mathcal{E}}\).
**Covariance and Kernels**. We now consider the key idea behind the duality in PCA: the operators \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}\) and \(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\) are self-adjoint, positive semi-definite and share the same non-zero eigenvalues. We have \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}=\sum_{i=1}^{N}\lambda_{i}\mathbf{v}_{i}\mathbf{v}_{i}^{*}\) and \(\mathcal{H}_{\mathcal{E}}=\mathrm{span}\{\mathbf{v}_{1},\dots,\mathbf{v}_{N}\}\). Similarly, we have \(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}=\sum_{i=1}^{N}\lambda_{i}\mathbf{\epsilon}_{i}\mathbf{\epsilon}_{i}^{*}\) and \(\mathcal{E}=\mathrm{span}\{\mathbf{\epsilon}_{1},\dots,\mathbf{\epsilon}_{N}\}\). The identity over the (primal) finite feature space \(\mathcal{H}_{\mathcal{E}}\) can now be defined as \(\mathbf{I}_{\mathcal{H}_{\mathcal{E}}}=\sum_{i=1}^{N}\mathbf{v}_{i}\mathbf{v}_{i}^{*}\) and the identity over the (dual) kernel space \(\mathcal{E}\) as \(\mathbf{I}_{\mathcal{E}}=\sum_{i=1}^{N}\mathbf{\epsilon}_{i}\mathbf{\epsilon}_{i}^{*}\). This is synthesized in the first two columns of Table 2. The identity over \(\mathcal{H}\) reads \(\mathbf{I}_{\mathcal{H}}=\mathbf{I}_{\mathcal{H}_{\mathcal{E}}}+\mathbb{P}_{\mathbf{\mathcal{H}}_{\mathcal{E}}^{\perp}}\), with \(\mathbb{P}_{\mathbf{\mathcal{H}}_{\mathcal{E}}^{\perp}}\) a projector over the null space of \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}\). It must be noted that these bases may contain too many basis vectors if the two operators \(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\) and \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}\) are not of full rank. In particular, this is the case when \(\dim(\mathcal{H})=d\) is finite and \(d<N\). In this particular case, we would also have \(\dim(\mathcal{H}_{\mathcal{E}})=\dim(\mathcal{E})=d\). Without loss of generality, we will assume that this is not the case; were it to happen, we could simply neglect the null space of \(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\).
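This shared spectrum is straightforward to verify numerically. The following sketch (a toy illustration with an arbitrary explicit feature map; none of the variable names come from the paper) builds the centered covariance and kernel matrices from the same data and compares their non-zero eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 20, 5
X = rng.normal(size=(N, 2))

# Explicit finite-dimensional feature map (arbitrary illustrative choice)
def phi(x):
    return np.array([x[0], x[1], x[0] * x[1], x[0] ** 2, x[1] ** 2])

Phi = np.stack([phi(x) for x in X], axis=1)        # columns phi_i, shape (d, N)
Phi_c = Phi - Phi.mean(axis=1, keepdims=True)      # centered feature map

C = Phi_c @ Phi_c.T                                # primal: Phi_c o Phi_c^*  (d x d)
K = Phi_c.T @ Phi_c                                # dual:   Phi_c^* o Phi_c  (N x N)

ev_C = np.sort(np.linalg.eigvalsh(C))[::-1]
ev_K = np.sort(np.linalg.eigvalsh(K))[::-1]

# The non-zero spectra coincide (here the rank is at most d < N)
print(np.allclose(ev_C[:d], ev_K[:d]))
```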
**Notations**. We can now define our probabilistic model over \(\mathcal{H}_{\mathcal{E}}\). We will therefore use the notation \(\phi\) instead of \(\varphi\) to consider the feature map in our finite dimensional subspace \(\mathcal{H}_{\mathcal{E}}\). More formally, we have \(\phi:\mathcal{X}\to\mathcal{H}_{\mathcal{E}}:\mathbf{I}_{\mathcal{H}_{\mathcal{E }}}\circ\varphi\) and following from that \(\phi_{c}:\mathcal{X}\to\mathcal{H}_{\mathcal{E}}:\mathbf{I}_{\mathcal{H}_{\mathcal{E }}}\circ\varphi_{c}\). In particular, we have the observations \(\mathbf{\phi}_{i}=\phi(\mathbf{x}_{i})=\mathbf{\varphi}_{i}\) and \(\mathbf{\phi}_{c}=\mathbf{\varphi}_{c}\), as those are linear combinations of the basis. For the sake of readability, we will write \(\mathbf{\phi}=\phi(\mathbf{x})\), the image of a random variable \(\mathbf{x}\in\mathcal{X}\) and refer to it as a _feature_ observation or representation. Given any Hilbert space, \(\mathbf{a}\) an element of it and a linear operator \(\mathbf{\Sigma}\) from and to that space, we consider the _multivariate normal distribution_\(\mathbf{a}\sim\mathcal{N}\big{(}\mathbf{b},\mathbf{\Sigma}\big{)}\) as the distribution with density \(\frac{1}{Z}\exp\bigl{(}-\frac{1}{2}(\mathbf{a}-\mathbf{b})^{*}\mathbf{\Sigma}^{-1}(\mathbf{a}- \mathbf{b})\bigr{)}\). It is well defined if \(Z\) is non-zero and finite.
## 3 Primal Model
We will now essentially follow the work of (Tipping & Bishop, 1999) and redefine the model distributions. This section corresponds to the primal formulation and we only consider the feature representations. It does not yet introduce the kernel representations, which will appear in the dual formulation (Section 4).
### Model and Latent Space
**Factor Analysis.** The starting point is to consider a _factor analysis_ relationship (Bartholomew et al., 2011; Basilevsky, 2009) between the feature observations \(\mathbf{\phi}\) and the latent variables \(\mathbf{h}\). In particular, we consider
\[\mathbf{\phi}=\mathbf{W}\mathbf{h}+\mathbf{\mu}+\mathbf{\zeta}. \tag{1}\]
The observations \(\mathbf{\phi}\) live in the primal space \(\mathcal{H}_{\mathcal{E}}\) of dimension \(N\). We consider an isotropic normal noise \(\mathbf{\zeta}\sim\mathcal{N}\big{(}\mathbf{0},\sigma^{2}\mathbf{I}_{\mathcal{H}_{ \mathcal{E}}}\big{)}\) of variance \(\sigma^{2}\in\mathbb{R}_{>0}\) and a mean \(\mathbf{\mu}\in\mathcal{H}_{\mathcal{E}}\).
**Latent Space \(\mathcal{L}\).** The latent variables \(\mathbf{h}\) on the other hand live in a latent dual space \(\mathcal{L}\subset\mathcal{E}\) of dimension \(q\leq N\). They are related by a primal _interconnection linear operator_\(\mathbf{W}\). As it was the case before with \(\mathbf{\Phi}\), the interconnection operator does not project to the full space \(\mathcal{H}_{\mathcal{E}}\) because of its reduced dimensionality. It therefore projects to yet another feature space \(\mathcal{H}_{\mathcal{L}}\subset\mathcal{H}_{\mathcal{E}}\), which acts as the primal equivalent of the latent space \(\mathcal{L}\). The equality of these two spaces only holds if \(q=N\). We will therefore consider the mappings \(\mathbf{W}^{*}:\mathcal{H}_{\mathcal{E}}\to\mathcal{L}\) and \(\mathbf{W}:\mathcal{L}\to\mathcal{H}_{\mathcal{L}}\). The identity over \(\mathcal{L}\) can be written as \(\mathbf{I}_{\mathcal{L}}=\sum_{p=1}^{q}\mathbf{r}_{p}\mathbf{r}_{p}^{*}\), over \(\mathcal{H}_{\mathcal{L}}\) as \(\mathbf{I}_{\mathcal{H}_{\mathcal{L}}}=\sum_{p=1}^{q}\mathbf{\varrho}_{p}\mathbf{\varrho}_{p}^{*}\) and finally the identity over \(\mathcal{H}_{\mathcal{E}}\) rewritten as \(\mathbf{I}_{\mathcal{H}_{\mathcal{E}}}=\mathbf{I}_{\mathcal{H}_{\mathcal{L}}}+\mathbb{P}_ {\mathcal{H}_{\mathcal{L}}^{\perp}}\), with \(\mathbb{P}_{\mathcal{H}_{\mathcal{L}}^{\perp}}\) as a projector over the null space of \(\mathbf{W}^{*}\circ\mathbf{W}\). This is summarized in the last column of Table 2.
### Feature Distributions
**Latent-Based Generation.** The relation between the feature observations and the latent variables being set up
\begin{table}
\begin{tabular}{l l c c c} \hline \hline **Dimension** & \(d\) & \(N\) & \(q\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the spaces of the model: the primal (feature) spaces \(\mathcal{H}\), \(\mathcal{H}_{\mathcal{E}}\) and \(\mathcal{H}_{\mathcal{L}}\) and the dual (kernel and latent) spaces \(\mathcal{E}\) and \(\mathcal{L}\), with their respective dimensions \(d\), \(N\) and \(q\), orthonormal bases and identity operators.
(Eq. (1)), we can derive the conditional probability of the feature observations given a latent variable:
\[\mathbf{\phi}|\mathbf{h}\sim\mathcal{N}\left(\mathbf{W}\mathbf{h}+\mathbf{\mu},\sigma^{2}\mathbf{I}_{\mathcal{H}_{\mathcal{E}}}\right). \tag{2}\]
As discussed earlier, we see that the latent variables do not participate to the full scope of the observations in \(\mathcal{H}_{\mathcal{E}}\), but only to their component in \(\mathcal{H}_{\mathcal{L}}\). The rest is only constituted from the isotropic normal noisy mean. This distribution can be interpreted as a generative one: given a latent variable, we can sample a variable in feature space.
**Absolute Generation**. Considering the latent prior \(\mathbf{h}\sim\mathcal{N}\big{(}\mathbf{0},\mathbf{I}_{\mathcal{L}}\big{)}\), we can derive the marginal distribution of the observations in feature space:
\[\mathbf{\phi}\sim\mathcal{N}\left(\mathbf{\mu},\mathbf{W}\circ\mathbf{W}^{*}+\sigma^{2}\mathbf{I}_ {\mathcal{H}_{\mathcal{E}}}\right). \tag{3}\]
It can be considered as the data distribution of the model. Sampling from it also means generating feature representations in a more absolute way, _i.e._, without considering any latent variable, or more precisely considering a random latent variable according to its prior. As a consequence of Eq. (2) and the isotropic aspect of the latent prior, we see that the observations are only non-isotropically distributed in \(\mathcal{H}_{\mathcal{L}}\). Again, the rest is only the isotropically normally noisy mean. In other words, this means that the model parameter \(\mathbf{W}\) only influences \(\mathbf{\phi}\) for its components in \(\mathcal{H}_{\mathcal{L}}\).
### Training the Model
**Maximum Likelihood.** As we now have the marginal distribution of the model (Eq. (3)), the goal is to find the optimal hyperparameters \(\mathbf{W}\) and \(\mathbf{\mu}\) to match the set of observations \(\{\mathbf{\phi}_{i}\}_{i=1}^{N}\). One way to determine them is by maximizing the likelihood of our observations. The _Maximum Likelihood_ (ML) estimator for the hyperparameters is given by:
\[\mathbf{\mu}_{\rm ML} = \mathbf{\phi}_{c}, \tag{4}\] \[\mathbf{W}_{\rm ML} = \sum_{p=1}^{q}\sqrt{\lambda_{p}/N-\sigma^{2}}\mathbf{v}_{p}\mathbf{r}_{p} ^{*}, \tag{5}\]
with \(\{(\lambda_{p},\mathbf{v}_{p})\}_{p=1}^{q}\) the \(q\) dominant eigenpairs of \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}\) (\(\lambda_{1}\geq\cdots\geq\lambda_{q}\geq\cdots\geq\lambda_{N}\)), and \(\{\mathbf{r}_{p}\}_{p=1}^{q}\) an arbitrary orthonormal basis of the latent space \(\mathcal{L}\). The choice of the latter basis is arbitrary and makes the model rotationally invariant in latent space. An additional condition is that \(\sigma^{2}\leq\lambda_{q}/N\). It is not surprising to see that the optimal mean \(\mathbf{\mu}_{\rm ML}\) corresponds to the mean of the observations \(\mathbf{\phi}_{c}\). We observe that \(\mathbf{W}_{\rm ML}\) corresponds to the eigendecomposition of the centered covariance, except that the noise assumption is subtracted from its spectrum. Looking back at Eq. (1), it makes sense to exclude the noise from \(\mathbf{W}_{\rm ML}\), as it is still going to be added by the term \(\mathbf{\zeta}\).
**Noise Variance.** Maximizing the likelihood as a function of \(\sigma^{2}\) leads to
\[\sigma_{\rm ML}^{2}=\frac{1}{N(N-q)}\sum_{p=q+1}^{N}\lambda_{p}. \tag{6}\]
The eigenvalue \(\lambda_{p}\) corresponds to the variance for each component \(\mathbf{v}_{p}\) of the covariance \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}\). The total variance of the data, noise included, is equal to \(\frac{1}{N}\sum_{p=1}^{N}\lambda_{p}\) and the variance learned by the model through the primal interconnection operator to \(\frac{1}{N}\sum_{p=1}^{q}\lambda_{p}\). Hence, the maximum likelihood estimator for the noise variance \(\sigma_{\rm ML}^{2}\) can be interpreted as the mean of the variance that is discarded by the model. It also verifies the earlier condition that \(\sigma^{2}\leq\lambda_{q}/N\), as the eigenvalues are taken in descending order. It can be interpreted as the normalized mean variance of the left-over eigendirections, _i.e._ the orthogonal space of the latent space: \(\mathcal{L}^{\perp}=\mathcal{E}\backslash\mathcal{L}\). By consequence, we may decide to choose the latent dimension \(q=\dim(\mathcal{L})\) and deduce \(\sigma_{\rm ML}^{2}\) from it. Conversely, we may also decide to set an arbitrary \(\sigma^{2}\) and deduce the latent dimension \(q\) instead. We can therefore consider either \(\sigma^{2}\) or \(q\) as an additional hyperparameter. We must however keep in mind that this choice is strongly influenced by the distribution of the eigenvalues and that the latent dimension \(q\) corresponding to the same \(\sigma^{2}\) may vary heavily from application to application.
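For concreteness, here is a minimal sketch of the primal maximum likelihood estimators of Eqs. (4)-(6) on synthetic features (an illustrative toy setup; the canonical basis is taken for the latent directions \(\mathbf{r}_{p}\), which is allowed by the rotational invariance).

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, q = 50, 5, 2
Phi = rng.normal(size=(d, N))                      # columns are feature vectors phi_i

mu_ml = Phi.mean(axis=1, keepdims=True)            # Eq. (4): mean of the observations
Phi_c = Phi - mu_ml

lam, V = np.linalg.eigh(Phi_c @ Phi_c.T)           # eigenpairs of Phi_c o Phi_c^*
lam, V = lam[::-1], V[:, ::-1]                     # descending order

# Eq. (6): mean discarded variance (eigenvalues beyond dim(H) are zero here)
sigma2_ml = lam[q:].sum() / (N * (N - q))
assert sigma2_ml <= lam[q - 1] / N                 # condition sigma^2 <= lambda_q / N

# Eq. (5): W_ML with the canonical choice r_p = e_p for the latent basis
W_ml = V[:, :q] * np.sqrt(lam[:q] / N - sigma2_ml)
print("sigma2_ML =", sigma2_ml, " W_ML shape:", W_ml.shape)
```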
**Uncentered Features.** We may also decide not to treat the mean as an optimizable hyperparameter and set it arbitrarily to \(\mathbf{\mu}=\mathbf{0}\). In this case, Eq. (5) would remain the same, with the difference that \(\mathbf{W}_{\rm ML}\) would be constructed from the dominant eigenpairs of the uncentered covariance \(\mathbf{\Phi}\circ\mathbf{\Phi}^{*}\) instead of its centered counterpart \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}\).
### Dimensionality Reduction in Feature Space
**Latent Projection.** Up to now, we only considered the distribution of the feature variables \(\mathbf{\phi}\). We can also calculate the posterior distribution of the latent variable \(\mathbf{h}\) given the primal feature variable \(\mathbf{\phi}\):
\[\mathbf{h}|\mathbf{\phi}\sim\mathcal{N}\left(\mathbf{\Sigma}_{\mathbf{h}|\mathbf{\phi}}^{-1} \circ\mathbf{W}^{*}(\mathbf{\phi}-\mathbf{\mu}),\sigma^{2}\mathbf{\Sigma}_{\mathbf{h}|\mathbf{\phi}}^ {-1}\right), \tag{7}\]
with \(\mathbf{\Sigma}_{\mathbf{h}|\mathbf{\phi}}=\mathbf{W}^{*}\circ\mathbf{W}+\sigma^{2}\mathbf{I}_{\mathcal{L}}\). The mean of the distribution can be considered as a pseudo-inverse of the observation \(\mathbf{\phi}\), but regularized by \(\sigma^{2}\). This regularization ensures that the noise is avoided. While the prior of the latent variables is isotropic, this is not the case anymore for the posterior. If we consider the maximum likelihood estimator for the primal interconnection operator \(\mathbf{W}_{\rm ML}\), the variance becomes \(\sigma^{2}\mathbf{\Sigma}_{\mathbf{h}|\mathbf{\phi}}^{-1}=N\sigma^{2}\sum_{p=1}^{q}\lambda_{p}^{-1}\mathbf{r}_{p}\mathbf{r}_{p}^{*}\). It can be interpreted as the uncertainty for each component of the latent variable \(\mathbf{h}\) (w.r.t. the eigendirection \(\mathbf{r}_{p}\)), due to the noise assumption. By consequence, the greater the explained variance \(\lambda_{p}\) for the eigendirection \(\mathbf{v}_{p}\) of the covariance \(\mathbf{\Phi}_{c}\circ\mathbf{\Phi}_{c}^{*}\), the smaller
the corresponding uncertainty on the component \(\mathbf{r}_{p}\) of the latent variable \(\mathbf{h}\). For each observation in feature space \(\mathbf{\phi}\), this returns a distribution for the latent variable \(\mathbf{h}\) and can therefore be considered as a sort of probabilistic projection in latent space \(\mathcal{L}\).
**Maximum A Posteriori.** Up to now, we were only considering distributions. The only way to go from a feature representation to a latent variable or the opposite was probabilistic. In order to have a deterministic approach, we need proper mappings. One way is to consider the _Maximum A Posteriori_ (MAP) of \(\mathbf{h}\) given \(\mathbf{\phi}\). It maps the feature observation \(\mathbf{\phi}\in\mathcal{H}_{\mathcal{E}}\) to latent variable \(\mathbf{h}_{\mathrm{MAP}}\in\mathcal{L}\), hence reducing the dimensionality of any input to that of the latent space. To allow it to work for any input \(\mathbf{\varphi}\in\mathcal{H}\), we may again consider the projection \(\mathbf{\phi}=\mathbf{I}_{\mathcal{H}_{\mathcal{E}}}\mathbf{\varphi}\). As \(\mathbf{W}_{\mathrm{ML}}^{*}\circ\mathbf{I}_{\mathcal{H}_{\mathcal{E}}}=\mathbf{W}_{ \mathrm{ML}}^{*}\):
\[\mathbf{h}_{\mathrm{MAP}}= \left(\mathbf{W}_{\mathrm{ML}}^{*}\circ\mathbf{W}_{\mathrm{ML}}+\sigma^{ 2}\mathbf{I}_{\mathcal{L}}\right)^{-1} \tag{8}\] \[\circ\mathbf{W}_{\mathrm{ML}}^{*}\left(\mathbf{\varphi}-\mathbf{\varphi}_{c }\right).\]
To map back to the feature space \(\mathcal{H}_{\mathcal{L}}\), we may consider the _maximum a posteriori_ of \(\mathbf{\phi}\) given \(\mathbf{h}\) (Eq. (2)). This gives
\[\mathbf{\phi}_{\mathrm{MAP}}=\mathbf{W}_{\mathrm{ML}}\mathbf{h}+\mathbf{\phi}_{c}. \tag{9}\]
The final projection reads
\[\mathbf{\phi}_{\mathrm{MAP}}= \mathbf{W}_{\mathrm{ML}}\circ\left(\mathbf{W}_{\mathrm{ML}}^{*}\circ\bm {W}_{\mathrm{ML}}+\sigma^{2}\mathbf{I}_{\mathcal{L}}\right)^{-1} \tag{10}\] \[\circ\mathbf{W}_{\mathrm{ML}}^{*}\left(\mathbf{\varphi}-\mathbf{\varphi}_{c }\right)+\mathbf{\phi}_{c}.\]
**No Noise.** We may also decide not to consider \(\sigma^{2}\) as a parameter to optimize and set it to an arbitrary value. The latent dimension \(q\) could then also be set to an arbitrary value, without it being related to \(\sigma^{2}\) according to Eq. (6). We notice that in the limit of \(\sigma^{2}\to 0\), we recover the classical Principal Component Analysis reconstruction scheme. Indeed, the conditional probability distributions become exact relations. We also notice that the condition \(\sigma^{2}\leq\lambda_{q}/N\) (Prop. 3) is then always satisfied. Furthermore, when \(q=\dim(\mathcal{H}_{\mathcal{E}})\), the reconstruction is perfect in \(\mathcal{H}_{\mathcal{E}}\) and in particular for our original observations \(\{\mathbf{\varphi}_{i}\}_{i=1}^{N}\) and \(\mathbf{\varphi}_{c}\) (as we have \(\mathbf{\phi}_{i}=\mathbf{\varphi}_{i}\)). Indeed, we would have
\[\mathbf{h}_{\mathrm{MAP}}=\mathbf{W}_{\mathrm{ML}}^{+}\left(\mathbf{\varphi}-\mathbf{\varphi} _{c}\right), \tag{11}\]
with \(\mathbf{W}_{\mathrm{ML}}^{+}\) the Moore-Penrose pseudo-inverse of \(\mathbf{W}_{\mathrm{ML}}\). We note here the symmetry with Eq. (9). If the maximum likelihood estimator for \(\sigma^{2}\) is to be respected (Eq. (6)), this would mean that all components are kept (\(\mathcal{L}=\mathcal{E}\)) and the model reconstructs the full feature variance. In this case, the primal interconnection operator would become \(\mathbf{W}_{\mathrm{ML}}=\sum_{p=1}^{N}\sqrt{\lambda_{p}/N}\mathbf{v}_{p}\mathbf{r}_{p}^{*}\) and be invertible. Its Moore-Penrose pseudo-inverse would become an exact inverse. Eqs. (9) and (11) would become exact opposites and there would be no loss due to the dimensionality reduction as there would be no noise to discard. By consequence, the reduction would become an identity over \(\mathcal{H}_{\mathcal{E}}\): \(\mathbf{\phi}_{\mathrm{MAP}}-\mathbf{\phi}_{c}=\mathbf{I}_{\mathcal{H}_{\mathcal{L}}} \left(\mathbf{\varphi}-\mathbf{\varphi}_{c}\right)\).
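The following toy sketch (illustrative only) implements the reconstruction of Eq. (10) and checks the limit discussed above: for \(\sigma^{2}\to 0\) the probabilistic reconstruction coincides with the classical PCA projection onto the \(q\) leading eigendirections, while the maximum likelihood noise estimate deliberately discards part of the variance.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, q = 40, 6, 3
Phi = rng.normal(size=(d, N))
phi_c = Phi.mean(axis=1, keepdims=True)
Phi_c = Phi - phi_c

lam, V = np.linalg.eigh(Phi_c @ Phi_c.T)
lam, V = lam[::-1], V[:, ::-1]

def reconstruct(phi, sigma2):
    """phi_MAP of Eq. (10): W (W^T W + sigma^2 I)^{-1} W^T (phi - phi_c) + phi_c."""
    W = V[:, :q] * np.sqrt(np.clip(lam[:q] / N - sigma2, 0.0, None))
    M = W.T @ W + sigma2 * np.eye(q)
    return W @ np.linalg.solve(M, W.T @ (phi - phi_c)) + phi_c

x = Phi[:, [0]]                                          # one observation
rec_pca = V[:, :q] @ V[:, :q].T @ (x - phi_c) + phi_c    # classical PCA reconstruction

# With sigma^2 -> 0 the probabilistic reconstruction recovers classical PCA
print(np.allclose(reconstruct(x, 1e-12), rec_pca, atol=1e-6))
# With the ML noise estimate, part of the variance is deliberately not projected back
sigma2_ml = lam[q:].sum() / (N * (N - q))
print(np.linalg.norm(reconstruct(x, sigma2_ml) - rec_pca))
```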
## 4 Dual Model
**Kernels without Dual.** In (Zhang et al., 2004), the authors made the kernel matrix appear by considering the new observations \(\left\{\sum_{i=1}^{d}\mathbf{u}_{i}\mathbf{u}_{j}^{*}\phi(\mathbf{x}_{i})\right\}_{j=1}^{N}\). In other words, each new datapoint consists in one particular feature of the feature map, for each original datapoint. If the original datapoints were organized as a matrix in \(\mathbb{R}^{N\times d}\), this would correspond to taking its transpose as new datapoints. The outer product of the covariance matrix is transformed to the inner product of the kernel matrix. If indeed this formulation makes the kernel appear, it is not a dual formulation of the original problem, but another problem. In this section, we show how the spaces defined hereabove help us build an equivalent dual formulation of the problem.
**Dual Formulation.** While keeping an equivalence with the primal model, we will now see that we can directly work in dual spaces \(\mathcal{E}\) and \(\mathcal{L}\) without considering the feature spaces at all, _i.e._ resorting to the primal space \(\mathcal{H}\) and its subsets. As we did for the primal feature variable \(\mathbf{\phi}\), we will consider \(\mathbf{k}_{c}=\mathbf{\Phi}_{c}^{*}(\mathbf{\phi}-\mathbf{\phi}_{c})=\sum_{i=1}^{N}k_{c}(\mathbf{x },\mathbf{x}_{i})\mathbf{e}_{i}\) to represent the image in \(\mathcal{E}\), of a random variable \(\mathbf{x}\in\mathcal{X}\). We will refer to it as a _dual feature variable_.
### Representation
Considering the dual spaces, we can always express the interconnection operator \(\mathbf{W}\) in the (non-orthonormal) basis \(\{\mathbf{\phi}_{1}-\mathbf{\phi}_{c},\dots,\mathbf{\phi}_{N}-\mathbf{\phi}_{c}\}\). As a consequence, we can always write
\[\mathbf{W}=\mathbf{\Phi}_{c}\circ\mathbf{A}, \tag{12}\]
with \(\mathbf{A}:\mathcal{L}\rightarrow\mathcal{E}\) the dual interconnection operator. Given the maximum likelihood estimator for the primal interconnection operator \(\mathbf{W}_{\mathrm{ML}}\), we can directly deduce the dual one:
\[\mathbf{A}_{\mathrm{ML}}=\sum_{p=1}^{q}\sqrt{1/N-\sigma^{2}\lambda_{p}^{-1}}\mathbf{ \epsilon}_{p}\mathbf{r}_{p}^{*}, \tag{13}\]
with \(\{(\lambda_{p},\mathbf{\epsilon}_{p})\}_{p=1}^{q}\) the \(q\) dominant eigenpairs of \(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\) and \(\{\mathbf{r}_{p}\}_{p=1}^{q}\) an arbitrary orthonormal basis of the latent space \(\mathcal{L}\). The rotational invariance of the dual interconnection operator \(\mathbf{A}_{\mathrm{ML}}\) is inherited from its primal counterpart \(\mathbf{W}_{\mathrm{ML}}\). Again, if we consider an optimized mean \(\mathbf{\mu}=\mathbf{0}\), we would have the relation \(\mathbf{W}_{\mathrm{ML}}=\mathbf{\Phi}\circ\mathbf{A}_{\mathrm{ML}}\) with \(\mathbf{A}_{\mathrm{ML}}\) then based on the eigenpairs of the non-centered \(\mathbf{\Phi}^{*}\circ\mathbf{\Phi}\) instead. Using the same structure for \(\mathbf{A}_{\mathrm{ML}}\), the optimal (primal) interconnection operator \(\mathbf{W}_{\mathrm{ML}}\) could be expressed in the (non-orthonormal) basis \(\{\mathbf{\phi}_{1},\dots,\mathbf{\phi}_{N}\}\).
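The consistency between Eq. (13) and Eq. (5) can be checked directly: building \(\mathbf{A}_{\rm ML}\) from the eigenpairs of the centered kernel matrix and composing with \(\mathbf{\Phi}_{c}\) reproduces \(\mathbf{W}_{\rm ML}\). A small sketch on synthetic data (toy setup, canonical latent basis assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
N, d, q = 30, 6, 3
Phi = rng.normal(size=(d, N))
Phi_c = Phi - Phi.mean(axis=1, keepdims=True)

K_c = Phi_c.T @ Phi_c                                   # centered kernel matrix (N x N)
lam, E = np.linalg.eigh(K_c)
lam, E = lam[::-1], E[:, ::-1]                          # descending order, eps_p as columns

sigma2 = lam[q:].sum() / (N * (N - q))

# Dual interconnection operator, Eq. (13): columns sqrt(1/N - sigma^2/lambda_p) eps_p
A_ml = E[:, :q] * np.sqrt(1.0 / N - sigma2 / lam[:q])

# Primal eigenvectors obtained from the dual ones: v_p = Phi_c eps_p / sqrt(lambda_p)
V = Phi_c @ E[:, :q] / np.sqrt(lam[:q])
W_ml = V * np.sqrt(lam[:q] / N - sigma2)                # Eq. (5)

# Consistency of the representation W_ML = Phi_c o A_ML, Eq. (12)
print(np.allclose(Phi_c @ A_ml, W_ml))
```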
### Kernel Distributions
**Projection and Generation**. We can also consider the dual counterparts of the distributions of the primal model (Eqs. (2) and (7)). For the sake of simplicity and to avoid
heavier equations with non-centered kernels, we will only consider here the equations of the trained model, in particular with \(\mathbf{\mu}_{\mathrm{ML}}=\mathbf{\phi}_{c}\) leading to centered kernels:
\[\mathbf{k}_{c}|\mathbf{h} \sim \mathcal{N}\big{(}(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c})\circ\mathbf{A}_{\mathrm{ML}}\mathbf{h},\sigma^{2}\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\big{)}, \tag{14}\] \[\mathbf{h}|\mathbf{k}_{c} \sim \mathcal{N}\left(\mathbf{\Sigma}_{\mathbf{h}|\mathbf{k}_{c}}^{-1}\circ\mathbf{A}_{\mathrm{ML}}^{*}\mathbf{k}_{c},\sigma^{2}\mathbf{\Sigma}_{\mathbf{h}|\mathbf{k}_{c}}^{-1}\right), \tag{15}\]
with \(\mathbf{\Sigma}_{\mathbf{h}|\mathbf{k}_{c}}=\mathbf{A}_{\mathrm{ML}}^{*}\circ\left(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\right)\circ\mathbf{A}_{\mathrm{ML}}+\sigma^{2}\mathbf{I}_{\mathcal{L}}\).
### Dimensionality Reduction in Kernel Space
**Maximum A Posteriori.** This now allows us to consider the dimensionality reduction in kernel space in a similar fashion as in Section 3.4. Again we consider the MAP of the latent variable \(\mathbf{h}\) given the kernel representation \(\mathbf{k}_{c}\):
\[\begin{split}\mathbf{h}_{\mathrm{MAP}}=&\left(\mathbf{A}_{\mathrm{ML}}^{*}\circ\left(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\right)\circ\mathbf{A}_{\mathrm{ML}}+\sigma^{2}\mathbf{I}_{\mathcal{L}}\right)^{-1}\\ &\circ\mathbf{A}_{\mathrm{ML}}^{*}\mathbf{k}_{c},\end{split} \tag{16}\]
and similarly with the MAP of the kernel representation \(\mathbf{k}_{c}\) given the latent variable \(\mathbf{h}\):
\[\left(\mathbf{k}_{c}\right)_{\mathrm{MAP}}=\left(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi} _{c}\right)\circ\mathbf{A}_{\mathrm{ML}}\mathbf{h}. \tag{17}\]
As for the primal model, the dimensionality reduction in dual is computed as \(\left(\mathbf{k}_{c}\right)_{\mathrm{MAP}}=\left(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi} _{c}\right)\circ\mathbf{A}_{\mathrm{ML}}\mathbf{h}_{\mathrm{MAP}}\).
**No Noise.** Again, considering \(\sigma^{2}\to 0\) makes both dual conditional distributions become exact relations. In a ML context for \(\sigma^{2}\) (Eq. (6)), this would imply that \(q=\dim(\mathcal{E})\) and we would recover an identity \(\left(\mathbf{k}_{c}\right)_{\mathrm{MAP}}=\mathbf{k}_{c}\), _i.e._ no reduction. Without considering a ML context for \(\sigma^{2}\to 0\) and choosing an arbitrary \(q\leq\dim(\mathcal{E})\), the reduction becomes exactly the reconstruction done in KPCA.
### Kernel Sampling
**Probabilistic Sampling.** The dual counterpart of Eq. (3) after training is given by
\[\mathbf{k}_{c}\sim\mathcal{N}\left(\mathbf{0},\mathbf{A}_{\mathrm{ML}}^{*}\circ\mathbf{A}_{ \mathrm{ML}}+\sigma^{2}\left(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\right)^{-1} \right). \tag{18}\]
The covariance \(\mathbf{A}_{\mathrm{ML}}^{*}\circ\mathbf{A}_{\mathrm{ML}}+\sigma^{2}\left(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\right)^{-1}\) can be decomposed as \(\mathbf{B}\circ\mathbf{B}^{*}\), with \(\mathbf{B}:\mathcal{E}\to\mathcal{E}:N^{-1/2}\sum_{p=1}^{q}\lambda_{p}\mathbf{\epsilon}_{p}\mathbf{\epsilon}_{p}^{*}+\sum_{p=q+1}^{N}\sigma\lambda_{p}^{1/2}\mathbf{\epsilon}_{p}\mathbf{\epsilon}_{p}^{*}\) and \(\left\{\mathbf{\epsilon}_{i}\right\}_{i=1}^{N}\) any arbitrary orthonormal basis of the kernel space \(\mathcal{E}\). This decomposition allows us to sample \(\mathbf{k}_{c}\) on the trained model with \(\mathbf{k}_{c}=\mathbf{B}\mathbf{\xi}\) with \(\mathbf{\xi}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{\mathcal{E}})\). We see that \(\mathbf{B}\) is rotationally invariant, which is not surprising as this is also the case for the distribution from which \(\mathbf{\xi}\) is sampled. In practice and for simplicity, we may decide to choose the canonical basis for \(\left\{\mathbf{\epsilon}_{i}\right\}_{i=1}^{N}\) as any choice would be identified to the same covariance and to the same sampling of \(\mathbf{k}_{c}\). We will therefore assume that \(\mathbf{\epsilon}_{i}=\mathbf{e}_{i}\) for all \(i=1,\dots,N\). In that particular case, \(\mathbf{B}\) is self-adjoint and by consequence corresponds to the matrix square root of \(\mathbf{A}_{\mathrm{ML}}^{*}\circ\mathbf{A}_{\mathrm{ML}}+\sigma^{2}\left(\mathbf{\Phi}_{c}^{*}\circ\mathbf{\Phi}_{c}\right)^{-1}\).
**KPCA Sampling** The classical sampling done by KPCA (Schreurs and Suykens, 2018) corresponds to the limit of \(\sigma^{2}\to 0\) for an arbitrary latent dimension \(q\). Unless the latent dimension is chosen as \(q=\dim(\mathcal{E})\), the sampling in that case can never cover \(\mathcal{E}\) fully, but rather \(\mathcal{L}\), as \(\mathbf{B}\) is not a bijection. The second term of \(\mathbf{B}\) (\(\sum_{p=q+1}^{N}\sigma\lambda_{p}^{1/2}\mathbf{\epsilon}_{p}\mathbf{\epsilon}_{p}^{*}\)) allows \(\mathbf{B}\) to be a bijection no matter what is the choice of the latent dimension \(q\), as long as \(\sigma^{2}>0\). We thus always sample in the full \(\mathcal{E}\). This can be observed at Fig. 2.
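In practice, the sampling only requires the eigendecomposition of the centered kernel matrix. The sketch below (a toy illustration; \(\mathbf{\epsilon}_{p}\) are taken as the kernel eigenvectors and the maximum likelihood \(\sigma^{2}\) is used, both being illustrative choices) assembles \(\mathbf{B}\) as described above and draws kernel-space samples \(\mathbf{k}_{c}=\mathbf{B}\mathbf{\xi}\); setting \(\sigma^{2}=0\) would reproduce the KPCA-style sampling restricted to the \(q\) leading directions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, q = 30, 6, 3
Phi = rng.normal(size=(d, N))
Phi_c = Phi - Phi.mean(axis=1, keepdims=True)

lam, E = np.linalg.eigh(Phi_c.T @ Phi_c)
lam, E = np.clip(lam[::-1], 0.0, None), E[:, ::-1]      # descending, clip round-off negatives
sigma2 = lam[q:].sum() / (N * (N - q))

# Operator B in the basis of kernel eigenvectors:
#   the q leading directions get weight lambda_p / sqrt(N),
#   the remaining ones the residual-noise weight sigma * sqrt(lambda_p).
weights = np.concatenate([lam[:q] / np.sqrt(N), np.sqrt(sigma2 * lam[q:])])
B = E @ np.diag(weights) @ E.T

# Draw kernel-space samples k_c = B xi, with xi ~ N(0, I_N), cf. Eq. (24)
xi = rng.normal(size=(N, 5))                            # five samples at once
k_samples = B @ xi
print(k_samples.shape)                                  # (N, 5): each column is one k_c
```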
## 5 Experiments
**Hilbert Spaces to Matrices.** Working in Hilbert spaces is helpful to treat possibly infinite dimensional feature maps, but not very useful for practical applications. Matrix representations are possible in primal if \(d\) is finite and in dual if \(N\) is finite. It suffices to consider the different canonical basis. For the latent space \(\mathcal{L}\), this enforces a unique representation for \(\mathbf{W}_{\mathrm{ML}}\) and \(\mathbf{A}_{\mathrm{ML}}\), but we must keep in mind that they are rotational invariant. All the operators and elements described before are then represented in matrix or vector format (Table 3). We will use the tilde to denote these matrices and use software-like notation by denoting with \((\cdot)_{i_{1}:i_{2},j_{1}:j_{2}}\) the matrix truncated to its \(i_{1}\) to \(i_{2}\) rows and \(j_{1}\) to \(j_{2}\) columns.
**Preimage.** Given a dual representation, we will also consider the _kernel smoother_ preimage method, as suggested by (Schreurs and Suykens, 2018):
\[\hat{\mathbf{x}}=\frac{\sum_{i=1}^{N}(\tilde{\mathbf{k}})_{i}\mathbf{x}_{i}}{\sum_{i=1}^{N}( \tilde{\mathbf{k}})_{i}}. \tag{19}\]
In practice, as we work with centered feature maps and kernels, the kernel smoother may be unstable due to its normalization term. We may therefore consider adding a stabilization term.
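A possible implementation of the kernel smoother (19) is given below; the clipping of negative weights and the small additive constant in the normalization are illustrative stabilization choices, not prescribed by the method.

```python
import numpy as np

def preimage_kernel_smoother(k, X, stab=1e-8):
    """Kernel smoother preimage, Eq. (19): weighted mean of the training points X
    (rows are datapoints) with weights given by the kernel vector k.
    Negative weights (possible with centered kernels) are clipped and a small
    constant stabilizes the normalization; both are illustrative choices."""
    w = np.clip(np.asarray(k, dtype=float), 0.0, None)
    return (w @ X) / (w.sum() + stab)

# toy usage with an RBF kernel of bandwidth gamma = 2, as in the experiments
rng = np.random.default_rng(5)
X = rng.normal(size=(20, 2))
x_new = np.array([0.3, -0.1])
gamma = 2.0
k = np.exp(-np.sum((X - x_new) ** 2, axis=1) / (2 * gamma ** 2))
print(preimage_kernel_smoother(k, X))
```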
Figure 2: Schematic overview of the dual sampling in Prob. PCA compared to the generation in KPCA.
Figure 3: Visualisation of the Probabilistic PCA reconstruction (in blue) and the classical KPCA reconstruction (in red). Samples generated by the model are also given (in grey). The dataset contains \(N=20\) points (in black).
### Model
The direct application of the theoretical discussions of the previous sections leads to the decompositions \(\tilde{\mathbf{K}}_{c}=\tilde{\mathbf{E}}\tilde{\mathbf{\Lambda}}\tilde{\mathbf{E}}^{\top}\), \(\tilde{\mathbf{C}}_{c}=\tilde{\mathbf{V}}\tilde{\mathbf{\Lambda}}\tilde{\mathbf{V}}^{\top}\), \(\tilde{\mathbf{\Phi}}_{c}=\tilde{\mathbf{V}}\tilde{\mathbf{\Lambda}}^{1/2}\tilde{\mathbf{E}}^{\top}\). The values of the operators after training are given in Table 4. Once the model is trained, we can verify that \(\tilde{\mathbf{W}}=\tilde{\mathbf{\Phi}}_{c}\tilde{\mathbf{A}}\). We can also have a look at the hidden variables. A way to do it is to consider the MAP of \(\mathbf{h}\) given \(\mathbf{\phi}\) or \(\mathbf{k}\). We have
\[\mathbf{h}_{\rm MAP} =N\tilde{\mathbf{\Lambda}}_{1:q,1:q}^{-1}\tilde{\mathbf{A}}^{\top}\tilde{ \mathbf{k}}_{c}\qquad\text{(if $\mathrm{rank}(\tilde{\mathbf{K}}_{c})\geq q$)}, \tag{20}\] \[=N\tilde{\mathbf{\Lambda}}_{1:q,1:q}^{-1}\tilde{\mathbf{W}}^{\top}\big{(} \tilde{\mathbf{\phi}}-\tilde{\mathbf{\phi}}_{c}\big{)}\quad\text{(if $\mathcal{H}$ is finite)}, \tag{21}\]
and
\[\big{(}\mathbf{k}_{c}\big{)}_{\rm MAP} =\tilde{\mathbf{K}}_{c}\tilde{\mathbf{A}}\tilde{\mathbf{h}}\qquad\quad\text{ (if $\mathrm{rank}(\tilde{\mathbf{K}}_{c})\geq q$)}, \tag{22}\] \[\mathbf{\phi}_{\rm MAP} =\tilde{\mathbf{W}}\tilde{\mathbf{h}}+\tilde{\mathbf{\phi}}_{c}\qquad\qquad \quad\text{(if $\mathcal{H}$ is finite)}. \tag{23}\]
As developed in Section 4, we can easily generate samples in both feature and kernel representations. For the latter and in canonical basis, it becomes
\[\tilde{\mathbf{k}}_{c}=\tilde{\mathbf{B}}\tilde{\mathbf{u}},\qquad\text{with $\tilde{\mathbf{u}}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{N})$}. \tag{24}\]
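To make the dual pipeline concrete, below is a minimal NumPy sketch (our own illustrative code, not the authors' implementation) of training via the eigendecomposition of the centered kernel matrix, the trained \(\tilde{\mathbf{A}}\) and \(\tilde{\mathbf{B}}\) of Table 4, the MAP latent code of Eq. (20), and sampling via Eq. (24). The RBF kernel and the treatment of \(\sigma^{2}\) as a user-chosen constant are assumptions made for the sketch:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=2.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * gamma ** 2))

def train_dual(X, q, sigma2, gamma=2.0):
    N = X.shape[0]
    C = np.eye(N) - np.ones((N, N)) / N              # centering operator
    Kc = C @ rbf_kernel(X, X, gamma) @ C              # centered kernel matrix
    lam, E = np.linalg.eigh(Kc)
    lam, E = np.maximum(lam[::-1], 0.0), E[:, ::-1]   # eigenvalues, descending
    # A_tilde = E_{1:N,1:q} (I_q / N - sigma^2 Lambda_q^{-1})^{1/2}   (Table 4)
    A = E[:, :q] * np.sqrt(np.maximum(1.0 / N - sigma2 / lam[:q], 0.0))
    # B_tilde = E Lambda^{1/2} blockdiag(Lambda_q^{1/2} / sqrt(N), sigma I_{N-q})
    scale = np.concatenate([np.sqrt(lam[:q]) / np.sqrt(N),
                            np.sqrt(sigma2) * np.ones(N - q)])
    B = E @ np.diag(np.sqrt(lam) * scale)
    return lam, A, B

def h_map(lam, A, k_c, q, N):
    # Eq. (20): h_MAP = N Lambda_{1:q,1:q}^{-1} A^T k_c
    return N * (A.T @ k_c) / lam[:q]

def sample_kc(B, rng=np.random.default_rng(0)):
    # Eq. (24): k_c = B u with u ~ N(0, I_N)
    return B @ rng.standard_normal(B.shape[0])
```

A sampled \(\tilde{\mathbf{k}}_{c}\) can then be pushed back to the input space with the kernel-smoother preimage sketched earlier.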
### Examples
As the primal case is already treated by (Tipping & Bishop, 1999), we consider here the model in its dual formulation. A toy example can be found in Fig. 3. We use an RBF kernel \(k(\mathbf{x},\mathbf{y})=\exp\bigl{(}-\|\mathbf{x}-\mathbf{y}\|_{2}^{2}/(2\gamma^{2})\bigr{)}\) with bandwidth \(\gamma=2\). As the number of components increases, the mean variance of the \(N-q\) unused components \(\sigma^{2}\) becomes smaller and the model tends to the classical KPCA model. In other words, one way to reduce \(\sigma^{2}\) is to increase the number of components \(q\), with \(\sigma^{2}\to 0\) when \(q\to N\). This can be observed in Fig. 2(c): the Probabilistic PCA model resembles closely the KPCA model, whereas more variance is left over, _i.e._ not projected back, in Figs. 2(a) and 2(b). The result of the generation is Gaussian, which is a consequence of the linearity of the preimage method chosen (Eq. (19)). Here again, as the number of components increases and \(\sigma^{2}\) decreases, the model is allowed to project back more variance and the distribution becomes wider. Another example, on the MNIST dataset (LeCun & Cortes, 2010) with the RBF kernel and \(\gamma=4\), is given in Fig. 4.
## 6 Conclusion
**Probabilistic Interpretation.** By reformulating the Prob. PCA model in Hilbert space, we were able to define a dual formulation of it. Just as Prob. PCA in primal encompasses classical PCA (with \(\sigma^{2}\to 0\)), Prob. PCA in dual also encompasses KPCA in the same limit. Furthermore, we are now able to sample in dual space, enhancing the understanding of the generation done with KPCA.
**Limitations.** As with most kernel methods, the model is still limited by the need for a preimage method to go back to the input space once a sample is projected or generated. Furthermore, training the model in dual requires finding the \(q\) first eigenvalues of the kernel matrix, which may become expensive as the number of datapoints \(N\) increases. Generation renders the problem even worse as it requires the computation of all eigenvalues. The model also requires determining a \(\sigma^{2}\) or, alternatively, a latent dimension \(q\).
## Acknowledgements
EU: The research leading to these results has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program / ERC Advanced Grant E-DUALITY (787960). This paper reflects only the authors' views and the Union is not liable for any use that may be made of the contained information. Research Council KUL: Tensor Tools for Taming the Curse iBOF/23/064, Optimization frameworks for deep kernel machines C14/18/068. Flemish Government: FWO projects: GOA4917N (Deep Restricted Kernel Machines: Methods and Foundations), PhD/Postdoc grant. This research received funding from the Flemish Government (AI Research Program). Henri De Plaen and Johan A. K. Suykens are also affiliated to Leuven.AI - KU Leuven institute for AI, B-3000, Leuven, Belgium.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Name** & **Space** & **Trained** \\ \hline
\(\tilde{\mathbf{W}}\) & \(\mathbb{R}^{d\times q}\) & \(\tilde{\mathbf{V}}_{1:N,1:q}\big{(}\tilde{\mathbf{\Lambda}}_{1:q,1:q}/N-\sigma^{2}\mathbf{I}_{q}\big{)}^{1/2}\) \\
\(\tilde{\mathbf{A}}\) & \(\mathbb{R}^{N\times q}\) & \(\tilde{\mathbf{E}}_{1:N,1:q}\big{(}\mathbf{I}_{q}/N-\sigma^{2}\big{(}\tilde{\mathbf{\Lambda}}_{1:q,1:q}\big{)}^{-1}\big{)}^{1/2}\) \\
\(\tilde{\mathbf{B}}\) & \(\mathbb{R}^{N\times N}\) & \(\tilde{\mathbf{E}}\tilde{\mathbf{\Lambda}}^{1/2}\left[\begin{array}{cc}N^{-1/2}\tilde{\mathbf{\Lambda}}_{1:q,1:q}^{1/2}&\mathbf{0}\\ \mathbf{0}&\sigma\mathbf{I}_{N-q}\end{array}\right]\) \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Value of the different operators in the canonical basis, after training.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Name** & **Space** & **Values** \\ \hline
\(\tilde{\mathbf{K}}_{c}\) & \(\mathbb{R}^{N\times N}\) & \(\big{(}\tilde{\mathbf{K}}_{c}\big{)}_{i,j}=k_{c}(\mathbf{x}_{i},\mathbf{x}_{j})\) \\
\(\tilde{\mathbf{E}}\) & \(\mathbb{R}^{N\times N}\) & \(\big{(}\tilde{\mathbf{E}}\big{)}_{i,j}=\mathbf{e}_{i}^{*}\mathbf{\epsilon}_{j}\) \\
\(\tilde{\mathbf{R}}\) & \(\mathbb{R}^{q\times q}\) & \(\tilde{\mathbf{R}}=\mathbf{I}_{q}\) \\
\(\tilde{\mathbf{h}}\) & \(\mathbb{R}^{q}\) & \(\big{(}\tilde{\mathbf{h}}\big{)}_{p}=\mathbf{e}_{p}^{*}\mathbf{h}\) \\
\(\tilde{\mathbf{k}}_{c}\) & \(\mathbb{R}^{N}\) & \(\big{(}\tilde{\mathbf{k}}_{c}\big{)}_{i}=\mathbf{e}_{i}^{*}\mathbf{k}_{c}\) \\ \hline
\(\tilde{\mathbf{\Lambda}}\) & \(\mathbb{R}_{\geq 0}^{N\times N}\) & \(\tilde{\mathbf{\Lambda}}=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{N})\) \\
\(\tilde{\mathbf{S}}\) & \(\mathbb{R}_{\geq 0}^{q\times q}\) & \(\tilde{\mathbf{S}}=\mathrm{diag}(s_{1},\ldots,s_{q})\) \\ \hline
\(\tilde{\mathbf{C}}_{c}\) & \(\mathbb{R}^{d\times d}\) & \(\big{(}\tilde{\mathbf{C}}_{c}\big{)}_{i,j}=\big{(}\mathbf{u}_{i}^{*}\mathbf{\Phi}_{c}\big{)}\circ\big{(}\mathbf{u}_{j}^{*}\mathbf{\Phi}_{c}\big{)}^{*}\) \\
\(\tilde{\mathbf{\Phi}}_{c}\) & \(\mathbb{R}^{d\times N}\) & \(\big{(}\tilde{\mathbf{\Phi}}_{c}\big{)}_{i,j}=\mathbf{u}_{i}^{*}\mathbf{\Phi}_{c}\mathbf{e}_{j}\) \\
\(\tilde{\mathbf{V}}\) & \(\mathbb{R}^{d\times N}\) & \(\big{(}\tilde{\mathbf{V}}\big{)}_{i,j}=\mathbf{u}_{i}^{*}\mathbf{v}_{j}\) \\
\(\tilde{\mathbf{P}}\) & \(\mathbb{R}^{d\times q}\) & \(\big{(}\tilde{\mathbf{P}}\big{)}_{i,p}=\mathbf{v}_{i}^{*}\mathbf{\phi}_{p}\) \\
\(\tilde{\mathbf{\phi}}\) & \(\mathbb{R}^{d}\) & \(\big{(}\tilde{\mathbf{\phi}}\big{)}_{i}=\mathbf{v}_{i}^{*}\mathbf{\phi}\) \\
\(\tilde{\mathbf{\phi}}_{c}\) & \(\mathbb{R}^{d}\) & \(\big{(}\tilde{\mathbf{\phi}}_{c}\big{)}_{i}=\mathbf{v}_{i}^{*}\mathbf{\phi}_{c}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Representation of the various operators and elements in their respective canonical basis, as matrices and vectors. The primal |
2305.06063 | Enhancing Quantum Support Vector Machines through Variational Kernel
Training | Quantum machine learning (QML) has witnessed immense progress recently, with
quantum support vector machines (QSVMs) emerging as a promising model. This
paper focuses on the two existing QSVM methods: quantum kernel SVM (QK-SVM) and
quantum variational SVM (QV-SVM). While both have yielded impressive results,
we present a novel approach that synergizes the strengths of QK-SVM and QV-SVM
to enhance accuracy. Our proposed model, quantum variational kernel SVM
(QVK-SVM), leverages the quantum kernel and quantum variational algorithm. We
conducted extensive experiments on the Iris dataset and observed that QVK-SVM
outperforms both existing models in terms of accuracy, loss, and confusion
matrix indicators. Our results demonstrate that QVK-SVM holds tremendous
potential as a reliable and transformative tool for QML applications. Hence, we
recommend its adoption in future QML research endeavors. | Nouhaila Innan, Muhammad Al-Zafar Khan, Biswaranjan Panda, Mohamed Bennai | 2023-05-10T11:30:43Z | http://arxiv.org/abs/2305.06063v2 | # Enhancing Quantum Support Vector Machines through Variational Kernel Training
###### Abstract
Quantum machine learning (QML) has witnessed immense progress recently, with quantum support vector machines (QSVMs) emerging as a promising model. This paper focuses on the two existing QSVM methods: quantum kernel SVM (QK-SVM) and quantum variational SVM (QV-SVM). While both have yielded impressive results, we present a novel approach that synergizes the strengths of QK-SVM and QV-SVM to enhance accuracy. Our proposed model, quantum variational kernel SVM (QVK-SVM), leverages the quantum kernel and quantum variational algorithm. We conducted extensive experiments on the Iris dataset and observed that QVK-SVM outperforms both existing models in terms of accuracy, loss, and confusion matrix indicators. Our results demonstrate that QVK-SVM holds tremendous potential as a reliable and transformative tool for QML applications. Hence, we recommend its adoption in future QML research endeavors.
Quantum Machine Learning, Quantum Support Vector Machine, Kernel, Quantum Variational Algorithm, Classification.
## I Introduction
Quantum computing is an exciting and quickly growing field that could change many areas of science and technology. Machine learning (ML) is one of the most promising
quantum computing applications, where quantum algorithms can potentially provide exponential speedups over classical algorithms. This field is known as Quantum machine learning (QML). Quantum machine learning is an emerging field of research that combines the principles of quantum computing and machine learning. QML algorithms can solve complex problems more efficiently and cost-effectively than classical machine learning algorithms. One of the most promising QML algorithms is the quantum support vector machine (QSVM), an extension of the classical support vector machine (SVM) to the quantum realm.
The classical SVMs are a powerful class of ML algorithms for classification and regression analysis. The development of SVMs can be traced back to the early 1960s when Vladimir Vapnik and his colleagues began working on a new approach to pattern recognition [1, 2]. However, only in the 1990s did SVMs gain widespread attention in the ML community [3], thanks to Corinna Cortes and Vladimir Vapnik's pioneering work at AT&T Bell Labs. They introduced the idea of maximum-margin hyperplanes, decision boundaries that separate data points from different classes with the most significant possible margin [4].
This approach allowed SVMs to perform excellent generalization, even with small training datasets. Since then, SVMs have become one of the most extensively used and popular machine learning models and have been successfully applied to various fields, including image recognition, text classification, and bioinformatics. However, as the size of the dataset increases, the computational complexity of SVM also increases, making it difficult to handle large datasets. Still, the QSVM aims to overcome this limitation by leveraging the principles of quantum computing to accelerate the SVM algorithm. Over the years, there has been significant research in the field of QSVM, exploring various theoretical and practical aspects of the algorithm. Researchers have developed several techniques to enhance the performance of QSVM, including the development of quantum kernel methods, quantum feature maps, and quantum optimization techniques.
One of the early works in QSVM was proposed by Rebentrost _et al._ in 2014 [5], which introduced a quantum algorithm for SVM classification that provides an exponential speedup over classical algorithms. Another essential aspect of QSVM is its robustness to noise. In 2015, Li _et al._ demonstrated a QML algorithm for handwriting recognition on a four-qubit nuclear magnetic resonance (NMR) test bench [6]; the authors argued that quantum speedup would be highly attractive for tackling significant data challenges. However, this algorithm was specific to NMR-based systems and could not be easily applied to other QML platforms.
Following several other notable works, in 2019, Havlicek _et al._ demonstrated that supervised quantum machine learning models, including QSVM [7], can be robust to noise, increasing their practicality for real-world applications. Subsequently, several studies have been conducted to improve the performance of QSVM, including using quantum feature maps for kernel-based learning, as proposed by Park _et al._ in 2020 [9]. Another interesting line of research explores the potential use of quantum state encoding as a nonlinear feature map, enabling efficient computations in a large Hilbert space, and proposes two approaches for building a quantum model for classification, illustrated with mini-benchmark datasets [8].
In contrast, Liu _et al._ established a rigorous quantum speedup for supervised classification using a general-purpose quantum learning algorithm that only requires classical access to data. This algorithm represents a significant advancement [10]. In the same year, Schuld _et al._ investigated the impact of quantum data encoding strategies on the efficacy of parametrized quantum circuits in approximating functions [11], and the authors showed that quantum models could realize all possible sets of Fourier coefficients. Therefore, if the accessible frequency spectrum is asymptotically rich enough, such models are universal function approximators. This result has significant implications for developing QML algorithms to tackle complex data challenges.
In another 2021 paper [12], Schuld explored the theoretical foundations of the link between quantum computing and kernel methods in machine learning, systematically rephrasing supervised QML models as kernel methods and replacing many near-term and fault-tolerant QML models with a general SVM whose kernel computes distances between data-encoding quantum states. This approach has the potential to significantly reduce the complexity of QML algorithms and improve their performance.
In 2022, Zhang _et al._ proposed a new quantum optimization algorithm for QSVM that can improve the efficiency and scalability of QSVM on large datasets [13]. These advancements in QSVM and its related techniques have made it a promising candidate for solving complex problems in various fields, including bioinformatics, finance, image recognition, and material physics. One of the recent works in QSVM was proposed by Jiang _et al._ in 2023 [14], which introduced a quantum algorithm for SVM classification that leverages the quantum phase estimation algorithm to estimate the kernel matrix. This approach leads to significant speedup compared to classical SVM algorithms, making QSVM a more efficient choice for large-scale datasets.
In this paper, we build upon these studies and suggest a new QML model for classification that merges the two most accurate approaches identified by Schuld and the other works mentioned above. Our model leverages the expressive power of parametrized quantum circuits as function approximators and uses a kernel method to compute distances between data-encoding quantum states. We present theoretical analyses and numerical simulations to demonstrate the potential of our model for tackling classification tasks. Our results suggest that our model outperforms existing QML algorithms, highlighting its potential for future real-world problems and applications.
This paper is divided as follows:
In §II, we provide an overview of classical support vector machines, highlighting their principal features and limitations that motivate the exploration of more accurate implementations in quantum machine learning.
In §III, we describe the quantum model for support vector machines and explain the three implementations, including our proposed approach, the quantum variational kernel SVM.
In §IV, we present the results obtained using Pennylane by comparing the accuracy, loss, and confusion matrix indicators of the three quantum SVM models on the Iris dataset.
In §V, we discuss our findings' implications and highlight future research directions in this field.
## II Classical support vector machine
In classical machine learning, SVMs are supervised models used to analyze data for classification and regression. With this algorithm, we can perform both binary and multi-class classification. To illustrate, we consider a simple binary classification example. Suppose we have a collection of circles and rectangles in a 2D plane. Our job is to classify the circles and rectangles. This problem has two types, (a) linear and (b) non-linear, as shown in Fig.1.
### Linear SVMs
First of all, we discuss linear SVMs. We take a dataset of \(n\) points of the form \((x_{1},y_{1}),(x_{2},y_{2}),...(x_{n},y_{n})\). Here \(y_{i}\) are either 1 or \(-1\), and each \(x_{i}\) is a \(p\)-dimensional real vector. We have to draw the positive hyperplane \((H_{+})\), the negative hyperplane \((H_{-})\), and the margin, as shown in Fig.2. The margin is the total separation between \(H_{+}\) and \(H_{-}\).
Figure 1: Graphical representation of linear and non-linear SVMs problems.

Given a \(D\)-dimensional vector \(\mathbf{X}_{0}\in\mathbb{R}^{D\times 1}\), and a \((D-1)\)-dimensional linear hyperplane \(\mathcal{H}:\mathbf{W}^{T}\mathbf{X}+\mathbf{B}-\mathbf{Y}=\mathbf{0}\), where \(\mathbf{W}=(w_{1},w_{2},\ldots,w_{n})\) is the weights vector, \(\mathbf{B}\) is the bias vector, and \(\Phi(\mathbf{X}_{n})\) is the projection of the point \(\mathbf{X}_{n}\) into the nonlinear feature space. The goal is to ascertain the hyperplane that optimally separates the vectorial points into classes while maximizing the margin between the hyperplane and the closest datapoints from each class. Mathematically, we translate this as a quadratic programming problem with linear constraints, whereby our goal is to determine the value of the weights that will maximize the margin
\[\mathbf{W}^{*}=\underset{\mathbf{W}}{\text{arg max}}\ \frac{1}{||\mathbf{W}||_{2}} \left\{\min_{n}\ \mathbf{Y}_{n}\left[\mathbf{W}^{T}\Phi(\mathbf{X}_{n})+\mathbf{B}\right] \right\}, \tag{1}\]
where \(||\mathbf{W}||_{2}=\left(\sum_{i=1}^{n}w_{i}^{2}\right)^{1/2}\). Mathematically, we can translate this into the primal form SVM optimization problem \(\min_{n}\ \frac{1}{2}||\mathbf{W}||_{2}\) subject to \(\mathbf{Y}_{n}\left[\mathbf{W}^{T}\Phi(\mathbf{X}_{n})+\mathbf{B}\right]\geq \mathbf{1}\) for every \(n\).
### Non-linear SVMs
As shown in Fig.2, the support vectors are the vectors utilized to generate both the positive and negative hyperplanes. Maximizing the margin length in this specific model is imperative to achieve precise classification and high accuracy.
Figure 2: Geometric components of support vector machines.

In order to effectively tackle the non-linear problem we are facing, the kernel trick presents a compelling solution. This technique involves applying a kernel function to the data points, yielding higher-dimensional vector points in our feature space. A plethora of kernel functions exists, each tailored to solve different problems. Below, we present a list of some of these kernel functions:

* Polynomial (Homogeneous): denoted as \(K(a_{i},a_{j})=(a_{i}\cdot a_{j})^{d}\), where \(d\) is a positive integer that determines the degree of the polynomial. By setting \(d\) to \(1\), it becomes a linear kernel that is particularly useful for linearly separable data.
* Polynomial (Inhomogeneous): which incorporates a constant term \(r\) to the dot product of the input vectors, resulting in \(K(a_{i},a_{j})=(a_{i}\cdot a_{j}+r)^{d}\). This kernel is well-suited for capturing nonlinear relationships between the data.
* Sigmoid function (Hyperbolic tangent): based on the hyperbolic tangent function, takes the form \(K(a_{i},a_{j})=\tanh(ka_{i}\cdot a_{j}+c)^{d}\), where \(k\) and \(c\) are kernel parameters. This kernel can be used to model data that exhibits sigmoidal behavior and has been applied in various applications such as image classification and text mining.
After applying the kernel function to our data points, we have to do the same operation as in the linear case. Then we can complete the classification successfully. We modify the primal form of the linear SVM to include the slack variables \(\boldsymbol{\xi}\geq\mathbf{0}\): \(\min_{n}\,\frac{1}{2}||\mathbf{W}||_{2}+\mathbf{C}\sum_{n}\boldsymbol{\xi}_{n}\quad\text{subject to}\,\,\,\mathbf{Y}_{n}\left[\mathbf{W}^{T}\Phi(\mathbf{X}_{n})+\mathbf{B}\right]\,\geq 1-\boldsymbol{\xi}_{n}\,\,\text{for every}\,\,\,n.\) In addition to quadratic programming, numerous alternative techniques exist for resolving this problem. These include the approaches of Lagrange multipliers, sequential minimal optimization, interior point methods, gradient descent (GD), stochastic gradient descent (SGD), and kernel methods.
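For reference, here is a minimal scikit-learn baseline of the classical kernel SVM described above, applied to the Iris dataset used later in the experiments; the train/test split, the RBF choice, and the hyperparameter values are illustrative assumptions, not settings taken from this paper:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
# RBF kernel as one instance of the kernel trick; C controls the slack penalty
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```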
### Disadvantages of Classical SVMs
Despite the popularity of classical SVMs, they have certain limitations that constrain their optimal performance. One of the significant limitations is handling high-dimensional feature spaces, which can result in slow training times and overfitting problems. Another area for improvement is the dependence on kernel functions, which may not effectively capture complex data relationships. Furthermore, classical SVMs are not easily scalable to large datasets and demand extensive parameter tuning for accurate results. Researchers have turned to quantum machine learning to overcome these limitations and explore more precise and efficient alternatives. In the next section, we examine how quantum support vector machines can effectively tackle these challenges and provide a promising solution for enhancing classification performance.
## III Quantum Support Vector Machine
Quantum Support Vector Machine is a burgeoning area of research in quantum machine learning that offers promising potential for enhanced computational performance in classification and regression tasks. While classical SVM has been widely utilized in machine learning, QSVM exploits the unique properties of quantum mechanics to outperform classical SVM in specific applications. The QSVM algorithm involves mapping input data onto a quantum state, which is subsequently subjected to quantum circuit processing to generate the classification outcome. The described circuit comprises a sequence of quantum gates that manipulate the quantum state and execute the SVM algorithm. The classification outcome is obtained by measuring the circuit's output.
In previous works, as mentioned in the introduction sections (I), QSVMs have been implemented using various approaches, such as the quantum kernel method, the quantum matrix inversion method, and the quantum feature mapping method. Nevertheless, these methodologies possess certain constraints, such as high error rates, enormous computational resources, and scalability issues. This section will focus on three recent approaches to QSVM: the quantum kernel approach, the quantum variational approach, and a novel hybrid approach that combines the quantum kernel approach with the quantum variational circuit. These approaches have shown promising results and offer potential accuracy, scalability, and robustness improvements. The following subsections will describe each approach, highlighting the steps and circuits used to develop the QSVM models.
In each approach, the first step in our methodology involves the conversion of our classical datapoints into quantum states. To achieve this, we begin by encoding the data points using a quantum circuit, as depicted in Fig.3. Subsequently, we establish our data set and opt for the iris data set for simplicity. Our selection of qubits is based on the features outlined in our data set. We utilize a quantum model represented as follows:
\[f(x)=\langle\phi(x)|M|\phi(x)\rangle. \tag{2}\]
Here \(|\phi(x)\rangle\) is prepared using an encoding circuit, and \(M\) is the measurement operator. \(M\) is observable that is defined as:
\[M(\theta)=G^{\dagger}(\theta)\sigma_{z}^{0}G(\theta). \tag{3}\]
Figure 3: Quantum circuit architecture for QSVMs: Generalized model description.
### Quantum Kernel Support Vector Machine
In this approach, we propose implementing support vector machines with a kernel computed by a quantum circuit. Specifically, we utilize the angle-embedding template in conjunction with a SWAP test to evaluate the quantum kernel; this method reduces the number of required qubits by half, rendering it more viable for real-world implementations.
The kernel function is a central concept in SVMs, which are a popular class of ML algorithms for classification and regression tasks; this kernel function is a measure of similarity between two data points in a high-dimensional feature space and is used to map the data into a space where a linear classifier can separate the classes. This method can be used with any kernel function, like the linear kernel and radial basis function (RBF) kernel, computed using a quantum circuit, and we call it the quantum kernel. Mathematically, the quantum kernel is represented by the following equation:
\[k(x_{1},x_{2})=|\langle\phi(x_{1})|\phi(x_{2})\rangle|^{2}, \tag{4}\]
where \(x_{1}\) and \(x_{2}\) are the input feature vectors, and \(\phi(x_{i})_{i=1,2}\) denotes the quantum embedding of \(x_{i}\) into a quantum state via the angle encoding routines \(S(x_{1})\) and \(S(x_{2})\). We then apply the inverse embedding to one of the states and compute the overlap between the two states using a SWAP test. The SWAP test is a simple quantum protocol that measures the overlap between two quantum states; we can represent this step by the following equation:
\[\langle SWAP\rangle=|\langle\phi(x_{1})\otimes\phi(x_{2})|SWAP|\phi(x_{1}) \otimes\phi(x_{2})\rangle|^{2}, \tag{5}\]
\(SWAP\) is the swap gate, and \(|\langle SWAP\rangle|^{2}\) represents the probability of measuring the two quantum embeddings in the same state. Finally, we use the Hermitian observable to measure the projector onto the initial state \(|0...0\rangle\langle 0...0|\), and Fig.4 presents this circuit.
Figure 4: QK-SVM circuit.

The advantage of this approach is that it has the potential to scale to larger datasets by utilizing quantum hardware with more qubits. As we mentioned, it also requires only half the number of qubits as the number of features, and this is because we can prepare the two data points on the same set of qubits using the angle-embedding template and then apply the inverse embedding to one of the states, as shown in Fig.5 using Pennylane [15].
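A minimal Pennylane sketch of this overlap-based kernel is given below, assuming four qubits (one per Iris feature) and the adjoint-embedding trick described above; the simulator device and wire count are our assumptions, and the sketch is not claimed to reproduce the exact circuit of Figs. 4-5:

```python
import pennylane as qml

n_qubits = 4  # assumption: one qubit per Iris feature
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def overlap_circuit(x1, x2):
    # Prepare |phi(x1)>, then undo the encoding of x2: the probability of
    # measuring |0...0> equals the kernel value |<phi(x2)|phi(x1)>|^2.
    qml.AngleEmbedding(x1, wires=range(n_qubits))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(x1, x2):
    return overlap_circuit(x1, x2)[0]  # projector onto the all-zeros state
```

The resulting Gram matrix can then be passed to a classical SVM solver (for instance, scikit-learn's `SVC` with `kernel="precomputed"`) to complete the QK-SVM pipeline.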
### Quantum Variational Support Vector Machine
In this method, we propose a novel approach that trains on the data directly by utilizing an ansatz for the variational circuit. This ansatz, a quantum operation applied in multiple layers, enhances expressivity. Although the variational circuit cannot optimize the exact cost used by SVMs, we incorporate a bias term in our quantum model and train on the hinge loss to minimize the gap between the two. In the quantum node, we explicitly apply the parameter-shift differentiation method. The variational quantum circuit is given in Fig.3, which shows our method's encoding, processing, and measurement steps. The quantum variational method is a key concept in quantum machine learning, a rapidly growing field that aims to leverage quantum computing to develop robust machine learning algorithms. This section will discuss the quantum variational method and its applications in quantum machine learning.
Mathematically, the quantum variational method can be described as follows:
Suppose we have a parameterized quantum circuit that can be represented by the unitary operator \(U(\theta)\), where \(\theta\) is a vector of parameters. Given a set of training data (x\({}_{1}\), y\({}_{1}\)), (x\({}_{2}\), y\({}_{2}\)),..., (x\({}_{n}\), y\({}_{n}\)), where x\({}_{i}\) is an input and y\({}_{i}\) is the desired output. We want to find the values of \(\theta\) that minimize the cost function:
\[f(\theta)=\frac{1}{n}\sum_{i=1}^{n}L(y_{i},U(\theta)x_{i}). \tag{6}\]
Figure 5: QK-SVM circuit using Pennylane.
Here, \(L(y,y^{\prime})\) measures the difference between the desired output y and the actual output y\({}^{\prime}\) produced by the quantum circuit. This cost function is typically chosen to be a function that can be efficiently computed on a classical computer. We use an iterative optimization algorithm such as gradient descent to find the optimal values of \(\theta\) that minimize the cost function. Starting from an initial guess for \(\theta\), we compute the gradient of the cost function concerning each parameter and update the parameters in the direction of the negative gradient. This process is repeated until the cost function converges to a minimum. The circuit structure is given below in Fig.6.
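A minimal Pennylane sketch of this variational training loop is given below, assuming angle encoding, StronglyEntanglingLayers as the ansatz \(G(\theta)\), a trainable bias, and a hinge loss on \(\pm 1\) labels; the layer count, step size, and toy data are illustrative assumptions, not the exact settings used in the experiments:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(weights, x):
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def model(weights, bias, x):
    return circuit(weights, x) + bias

def hinge_loss(weights, bias, X, Y):
    # Y is assumed to hold +1 / -1 labels
    loss = 0.0
    for x, y in zip(X, Y):
        loss = loss + np.maximum(0.0, 1.0 - y * model(weights, bias, x))
    return loss / len(X)

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
bias = np.array(0.0, requires_grad=True)

# toy data standing in for two Iris classes
X_toy = np.array([[0.1, 0.2, 0.3, 0.4], [0.9, 0.8, 0.7, 0.6]], requires_grad=False)
Y_toy = np.array([1.0, -1.0], requires_grad=False)

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(10):
    weights, bias = opt.step(lambda w, b: hinge_loss(w, b, X_toy, Y_toy),
                             weights, bias)
```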
### Quantum Variational Kernel Support Vector Machine
This study proposes a new approach for quantum support vector machines, which we call Quantum Variational Kernel Support Vector Machine (QVK-SVM). It combines two distinct methods to enhance the performance of quantum kernels and variational circuits. The first method utilizes the angle-embedding template to prepare the quantum states used to compute the kernel, as we explained in subsection III.1. The overlap between the two states is measured using a SWAP test, which requires only half the qubits.
The second method involves utilizing a variational circuit trained through the variational training principle, as outlined in Subsection III.2. The ansatz of the circuit can be improved by adding more layers, thereby enhancing its ability to express itself. Additionally, a bias term is incorporated into the quantum model to facilitate training on the hinge loss. The quantum node utilizes the parameter-shift differentiation method, which is very efficient on hardware.
Figure 6: QV-SVM circuit using Pennylane.

The proposed circuit of the new approach consists of three main components: AngleEmbedding, the adjoint of AngleEmbedding, and StronglyEntanglingLayers, as shown in Fig.7.
The AngleEmbedding is used to prepare the quantum states of the data points, which are then fed into the adjoint of the AngleEmbedding to prepare the inverse embedding. The StronglyEntanglingLayers component is used to apply the variational circuit, which is trained using the hinge loss.
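Below is a minimal Pennylane sketch of one plausible reading of this composition: a trainable overlap kernel in which StronglyEntanglingLayers follows the angle embedding of each data point before the overlap is measured. The placement of the variational layers and the wire count are our assumptions and not a statement of the exact circuit in Fig. 7:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

def embed(x, weights):
    # data encoding followed by the trainable ansatz
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))

@qml.qnode(dev, diff_method="parameter-shift")
def kernel_circuit(weights, x1, x2):
    embed(x1, weights)
    qml.adjoint(embed)(x2, weights)   # inverse of the second embedding
    return qml.probs(wires=range(n_qubits))

def trainable_kernel(weights, x1, x2):
    # overlap of the two parameter-dependent embeddings
    return kernel_circuit(weights, x1, x2)[0]
```

Because the kernel value now depends on the variational weights, those weights can be trained (for example on the hinge loss) before the Gram matrix is handed to the classical SVM solver.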
The proposed approach has several advantages. First, it combines the strengths of both methods to enhance the performance of QSVMs. Second, it utilizes the variational training principle, allowing greater control over the training process. Third, it uses the parameter-shift differentiation method, which works well on hardware. Finally, the proposed circuit is simple and easy to implement, making it suitable for practical applications.
The proposed approach for quantum SVMs combines the angle-embedding kernel and the variational circuit to enhance the performance of QSVMs. The proposed approach has several advantages over the existing methods, including greater control over the training process, better hardware compatibility, and ease of implementation. Future research could explore the application of the proposed approach to other datasets and investigate the potential of the approach for solving more complex problems.
## IV Results and Discussion
Figure 7: QVK-SVM circuit using Pennylane.

The results and discussion of the three models in this research demonstrate the potential of quantum machine learning in enhancing binary classification tasks, particularly the quantum support vector machine. The first model, QK-SVM, employed a quantum kernel approach and delivered an impressive overall accuracy of 96.34% on the test set. The second model, QV-SVM, utilized variational training with a quantum support vector machine and achieved a maximum accuracy of 95.43% on the test set. The third and final model, QVK-SVM, combined quantum kernel and variational training to yield the most promising results, with an accuracy of 98.48% on the test set, as evidenced by the data presented in Table 1.
Table 1 displays the performance metrics for the three models: QK-SVM, QV-SVM, and QVK-SVM. Our findings indicate that QK-SVM achieved high precision and recall rates, indicating the robustness of the model in correctly identifying the different classes of Iris flowers. QV-SVM achieved high specificity and F1 score values, further validating the effectiveness of the quantum support vector machine using the variational algorithm. QVK-SVM achieved high precision, recall, and specificity; these results confirm the efficacy of QVK-SVM and highlight its potential as a reliable tool for ML applications.
The QK-SVM model achieved a maximum accuracy of 96.34%, high precision and recall rates, and a corresponding specificity and F1 score. Figs. 8 and 9 illustrate the loss and accuracy curves for both the training and testing sets, demonstrating a clear improvement in the model's performance during training.
The QV-SVM model achieved a steady improvement in performance as the number of iterations increased, with training losses ranging from 1.00 to 0.49 and testing losses ranging from 0.98 to 0.47. The model achieved an absolute accuracy of 95.43%, with high precision and recall rates. Fig.10 and Fig.11 illustrate the loss and accuracy plots, respectively, highlighting the model's optimization over the iterations.
Our suggested model represents a novel approach that combines the quantum kernel and variational algorithm used in QK-SVM and QV-SVM, respectively. The results of our QVK-SVM model indicate that this combined approach is highly effective for solving classification problems, even on relatively small datasets.
The proposed QVK-SVM model achieved an impressive accuracy of 98.48% on the test set, with a corresponding F1 score of 91.64%. Fig.12 shows the convergence of the training and testing losses throughout the experiment, demonstrating that the model could optimize the loss function to achieve high accuracy. Fig.13 displays the corresponding training and testing accuracies, showing that the model's accuracy improved steadily throughout the experiment, reaching a peak of 98% accuracy on the test set.
Furthermore, the superior performance of QVK-SVM, compared to QV-SVM and QK-SVM, suggests that the combination of these approaches can provide a more robust and reliable solution to binary classification tasks.
The results show the potential of quantum machine learning in enhancing binary classification tasks, especially the quantum support vector machine. As demonstrated in our novel method, combining the quantum kernel and variational algorithm represents a promising approach that could be extended to other datasets and classification problems.
## V Conclusion
This study delved into applying quantum support vector machines for binary classification tasks. Specifically, two pre-existing methods, the quantum kernel support vector machine and the quantum variational support vector machine, were compared and evaluated for their respective strengths and limitations. In addition, a novel approach was developed by combining these two methods, resulting in the quantum variational kernel support vector machine. The proposed QVK-SVM approach demonstrated exceptional accuracy and loss reduction performance, offering a promising solution for binary classification tasks in quantum machine learning. These findings hold significant potential for advancing the field of quantum machine learning and its diverse applications. The QVK-SVM approach represents a noteworthy contribution to this field's development and has clear implications for future research endeavors.
The proposed method presents an opportunity for further research to explore its efficacy in resolving intricate issues across various datasets. Advanced optimization techniques and the development of new quantum algorithms can enhance the efficiency and scalability of the approach. Furthermore, the potential of quantum machine learning can be investigated by extending the proposed method to other machine learning models, such as neural networks and decision trees. Through these efforts, the proposed approach can advance the field of quantum machine learning and unlock new opportunities for addressing complex real-world problems. Its potential to do so is significant
and warrants further academic investigation.
|
2307.08637 | LearnedSort as a learning-augmented SampleSort: Analysis and
Parallelization | This work analyzes and parallelizes LearnedSort, the novel algorithm that
sorts using machine learning models based on the cumulative distribution
function. LearnedSort is analyzed under the lens of algorithms with
predictions, and it is argued that LearnedSort is a learning-augmented
SampleSort. A parallel LearnedSort algorithm is developed combining LearnedSort
with the state-of-the-art SampleSort implementation, IPS4o. Benchmarks on
synthetic and real-world datasets demonstrate improved parallel performance for
parallel LearnedSort compared to IPS4o and other sorting algorithms. | Ivan Carvalho, Ramon Lawrence | 2023-07-17T16:53:22Z | http://arxiv.org/abs/2307.08637v1 | # LearnedSort as a learning-augmented SampleSort: Analysis and Parallelization
###### Abstract.
This work analyzes and parallelizes LearnedSort, the novel algorithm that sorts using machine learning models based on the cumulative distribution function. LearnedSort is analyzed under the lens of algorithms with predictions, and it is argued that LearnedSort is a learning-augmented SampleSort. A parallel LearnedSort algorithm is developed combining LearnedSort with the state-of-the-art SampleSort implementation, IPS4o. Benchmarks on synthetic and real-world datasets demonstrate improved parallel performance for parallel LearnedSort compared to IPS4o and other sorting algorithms.
sorting, machine learning for systems, algorithms with predictions
Footnote †: ccs: Computing methodologies Learning linear models
the RMI, the PGM, and the RadixSpline always outperform state-of-the-art traditional indexes on look-up time and size, losing just on build time.
The two-layer RMI is used by LearnedSort. Mathematically, it is described by:
\[F(x)=f_{2}^{[B\times f_{1}(x)]}(x)\]
The RMI consists of the root model \(f_{1}\) and of \(B\) second-level models \(f_{2}^{(i)}\) for \(0\leq i<B\). The root model can be interpreted as an initial approximation of the CDF function that selects one of the \(B\) models in the next level. The second-level models by consequence can be seen as models specializing on a specific region of the CDF.
The RMI architecture is extremely flexible. \(f_{1}\) and \(f_{2}^{(i)}\) can have arbitrary model types such as linear, cubic or radix models. The number of second-level models \(B\) can also be configured. LearnedSort uses a RMI with linear models and \(B=1000\).
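As an illustration of the two-layer structure, here is a minimal NumPy sketch of an RMI with linear models fitted by least squares on the empirical CDF of a sample; the fitting recipe, the handling of sparse buckets, and the clipping are our own simplifications, not the training procedure of the reference implementation:

```python
import numpy as np

class TwoLayerRMI:
    def __init__(self, B=1000):
        self.B = B

    def fit(self, sample):
        s = np.sort(np.asarray(sample, dtype=float))
        cdf = (np.arange(len(s)) + 1) / len(s)
        # root model f1: one linear fit of the CDF over the whole sample
        self.a1, self.b1 = np.polyfit(s, cdf, 1)
        # second-level models f2^(i): one linear fit per bucket of the sample
        idx = np.clip((self.B * (self.a1 * s + self.b1)).astype(int), 0, self.B - 1)
        self.a2 = np.zeros(self.B)
        self.b2 = np.zeros(self.B)
        for i in range(self.B):
            mask = idx == i
            if mask.sum() >= 2:
                self.a2[i], self.b2[i] = np.polyfit(s[mask], cdf[mask], 1)
            elif mask.any():
                self.a2[i], self.b2[i] = 0.0, cdf[mask].mean()
        return self

    def predict(self, x):
        # root model selects a second-level model, which predicts the CDF
        i = int(np.clip(self.B * (self.a1 * x + self.b1), 0, self.B - 1))
        return float(np.clip(self.a2[i] * x + self.b2[i], 0.0, 1.0))
```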
### Sorting with Machine Learning Models
Sorting with machine learning models goes beyond applying a single pass of \(A[F(x)]=x\) for all elements. To engineer a practical implementation, many details need to be resolved.
The first detail is that \(A[F(x)]=x\) has a memory-access pattern that is hostile to modern CPUs, as it performs mostly random accesses to memory. Kristo et al. reported that even with a perfect model, applying the model directly was slower than optimized versions of RadixSort. This prompted the authors to try other approaches such as using buckets.
Another key detail is that the model is imperfect and inversions can happen i.e. there are \(x,y\) such that \(x<y\) but \(F(x)>F(y)\). Although uncommon for good models, the implementation needs to handle those cases to guarantee that the output is sorted.
Moreover, collisions between elements can happen i.e. there are \(x,y\) such that \(F(x)=F(y)\). Since it is possible to have only one element at position \(F(x)\), the implementation must handle collisions. Collisions are exacerbated by duplicates in the input, as all duplicate values \(x\) will collide at \(F(x)\). Duplicates are very common when sorting.
Kristo et al. improved the algorithm's handling of these challenges and produced LearnedSort 2.0 (Kristo et al., 2017). LearnedSort 2.0 consists of four routines: training the model, two rounds of partitioning, model-based Counting Sort, and a correction step with Insertion Sort.
Training the model is the first routine of LearnedSort and requires the most empirical data for good performance. It is necessary to select a model type and sample size to train the CDF model. Kristo et al. chose the two-layer RMI as the model. Since producing the optimal RMI is computationally more expensive than sorting an array with Quicksort (Kristo et al., 2018), the authors fixed the root and second-level model types to be linear models. They also picked a sample size of 1% of \(N\) to train the RMI. These choices yield excellent results in practice. The model can be trained quickly and its predictions are accurate enough such that the sorting performance can outperform other state-of-the-art sorting algorithms.
The partitioning routine is in-place and uses the model to split the data into \(B=1000\) buckets. For each element, LearnedSort calculates its corresponding bucket using \(b_{i}=[B\times P(A\leq x)]\) and adds the element to the buffer associated with \(b_{i}\). When a buffer gets full, LearnedSort flushes the buffer. After processing all elements, the fragments of each bucket \(b_{i}\) are scattered across the input. To solve this, LearnedSort implements a defragmentation pass that makes the buckets contiguous. LearnedSort applies the partitioning routine twice, splitting the data into 1000 buckets and then splitting each of those buckets into 1000 sub-buckets.
To handle duplicates, LearnedSort 2.0 performs a homogeneity check after partitioning: if all elements within a bucket are equal, the bucket is left as is because it is already sorted. This condition handles the collision case that reduced the performance of the original LearnedSort.
The base case for LearnedSort is a Model-Based Counting Sort that uses the CDF to predict the final position of the keys in the sub-buckets. Lastly, Insertion Sort is executed to correct the possible mistakes from the RMI and guarantee that the output is sorted. Since the sequence is almost sorted, Insertion Sort is cheap to execute in practice.
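Because inversions from the RMI are rare, the sequence after the model-based passes is already almost sorted, so the final correction is cheap; a minimal sketch of that step:

```python
def insertion_sort_fix(A):
    # Cheap on nearly sorted input: each element only moves past the few keys the model misplaced.
    for i in range(1, len(A)):
        x, j = A[i], i - 1
        while j >= 0 and A[j] > x:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = x
    return A
```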
### Quicksort
Although Quicksort is asymptotically optimal, engineering a good implementation of Quicksort can drastically improve its efficiency. This requires avoiding Quicksort's worst case with bad pivots and squeezing out all the performance available from modern hardware.
IntroSort is a hybrid Quicksort algorithm (Kristo et al., 2017) that avoids the \(\Theta(N^{2})\) worst-case by switching to HeapSort (Kristo et al., 2018) when the recursion depth exceeds \(O(\log N)\). IntroSort has been chosen by some popular libraries, such as the GNU C++ library, to be their default sorting algorithm.
Pattern-defeating Quicksort (pdqsort) is an enhanced version of IntroSort (Kristo et al., 2017). It incorporates many improvements on partitioning and sorts in \(O(N\min(\log N,K))\) where \(K\) is the number of distinct elements in the input. pdqsort also leverages the contributions of BlockQuicksort (Kristo et al., 2018), which processes the elements in blocks to avoid branch mispredictions. pdqsort is currently the algorithm implemented by the Rust Standard Library for unstable sorting.
Vectorized Quicksort is a new implementation of Quicksort that uses Single Instruction, Multiple Data (SIMD) to exploit the parallelism available in modern CPUs (Wassenberg et al., 2017). Wassenberg et al. managed to vectorize each individual step of Quicksort: pivot selection, partitioning, and the sorting networks for the base case. By building on top of a high-level SIMD library, the authors were also able to port their implementation to seven distinct instruction sets, which is uncommon as previous implementations were generally not portable.
A takeaway from advancements in Quicksort is that engineering is a core part of high-performance sorting and that implementation details matter. Implementation optimizations improved performance in Learned Sort 2.0, and such optimizations are important for high parallel performance.
### SampleSort
SampleSort is a generalization of Quicksort to \(k\) pivots (Brandrands et al., 2017). The increased number of pivots pushes the number of comparisons of the algorithm closer to the \(\log_{2}n!\) theoretical bound, giving it an edge over Quicksort. It also makes the algorithm suitable for parallel processing, as SampleSort creates \(k+1\) perfectly parallel sub-problems.
Similar to Quicksort, engineering a good implementation of SampleSort can significantly boost performance. Sanders and Winkel introduced the Super Scalar SampleSort in (Sanders and Winkel, 2017). Their implementation of SampleSort exploits instruction-level parallelism available in modern CPUs. Sanders and Winkel organize the pivots into a branchless decision-tree that is friendly to optimization techniques such as pipelining and loop unrolling. This made their implementation competitive on single-core sequential settings.
Axtmann et al. take a step further in (Astrmann et al., 2017), introducing the In-place Parallel Super Scalar SampleSort (IPS4o). IPS4o is the state-of-the-art SampleSort implementation, incorporating many improvements.
One key improvement of IPS4o is the in-place partitioning. Previous SampleSort implementations allocated \(\mathcal{O}(N)\) memory to copy elements of the input. IPS4o instead uses buffers of size \(b\) for each of the \(k\) buckets. It allocates \(\mathcal{O}(kb)\) total memory and when a buffer is full it flushes the buffer and overwrites some of the data of the original input that has already been processed. This initial pass creates \(\mathcal{O}(N/b)\) blocks. Afterwards, IPS4o permutes the blocks such that each bucket is contiguous in memory using a routine similar to defragmentation. Conceptually, the blocking strategy adopted by IPS4o shares many ideas with those adopted by LearnedSort, BlockQuicksort, and pdqsort.
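A highly simplified sketch of the buffered partitioning idea (the real IPS⁴o works in place, overwriting already-consumed parts of the input and later permuting the resulting blocks; here the flush simply appends a full buffer to a per-bucket output list, and `classify` is an assumed stand-in for the decision tree):

```python
def buffered_partition(A, classify, k, b=256):
    # classify(x) -> bucket index in [0, k); each bucket has a buffer of size b.
    out = [[] for _ in range(k)]
    buf = [[] for _ in range(k)]
    for x in A:
        i = classify(x)
        buf[i].append(x)
        if len(buf[i]) == b:     # flush a full buffer as one block
            out[i].extend(buf[i])
            buf[i].clear()
    for i in range(k):           # flush the remaining partial buffers
        out[i].extend(buf[i])
        buf[i].clear()
    return out
```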
Other improvements of IPS4o include the parallelization and the equality buckets. IPS4o uses atomic fetch-and-add operations to parallelize the block partitioning and leverages a custom task scheduler to manage threads when the sub-problems become small. IPS4o also gracefully handles inputs with many duplicates with equality buckets. It detects skewed inputs on sampling and creates a separate bucket for the duplicates when doing the partitioning. As a sequence where all elements are equal is already sorted, IPS4o avoids having to process the duplicate elements in the equality buckets.
It is also worth highlighting the ability to use IPS4o as a framework for building other sorting algorithms. Axtmann et al. also introduced the In-place Parallel Super Scalar Radix Sort (IPS2Ra) (Astrmann et al., 2017). IPS2Ra combines the qualities of IPS4o with the most-significant-digit radix sort strategy, resulting in another high-performance sorting algorithm. IPS4o has also been used to parallelize Vectorized Quicksort (Sanders and Winkel, 2017) and to test the efficiency of sorting networks as base cases for sorting algorithms (Brands and Goyal, 2017).
This work reuses the IPS4o framework to parallelize LearnedSort. This allows the combination of the engineering efforts of IPS4o with the best qualities of LearnedSort.
### Algorithms with Predictions
The area of algorithms with predictions (Sanders and Winkel, 2017) goes beyond worst-case analysis and considers algorithms augmented with machine learning models. For each algorithm, we can think of a prediction and a quality metric \(\eta\) for the prediction that depends on an error specified by the problem type. In case \(\eta\) is good, the algorithm proceeds to use the outputs from the model to solve the problem instance. Otherwise, it has a fallback mechanism that uses a traditional, prediction-less algorithm when the machine learning models fail. We expect that for real-world workflows, the outputs from the model will generally be used due to patterns found in the data.
A prominent example of the area is caching with predictions (Levy et al., 2017). Lykouris and Vassilvitskii solve the online caching problem with a machine learning model trained to predict the furthest time in the future an element will come back to the cache. Their model is inspired by the offline solution to the problem, the greedy Furthest-In-Future algorithm that out of all elements removes the one that appears the latest in the future. To prevent the worst-case that happens when the model is sub-optimal, they fall back to the classic Marker algorithm.
Algorithms with predictions share many similarities with LearnedSort. Both implement machine learning models and avoid the worst-case due to the quality of the predictions. Thus, it is natural to ask if LearnedSort is an algorithm with predictions. The next section discusses how LearnedSort is analogous to a SampleSort in which the pivots were learned.
## 3. Analyzing LearnedSort
To analyze LearnedSort under the lens of algorithms with predictions, it is important to determine what LearnedSort is trying to predict and what makes for a good prediction for a sorting algorithm.
From a high-level perspective, ignoring implementation details, what makes Quicksort an efficient algorithm is the quality of its pivots. The BFPRT algorithm, also known as median of medians, is a method to find an element that is guaranteed to be between the 30th and 70th percentile of the input (Brands and Goyal, 2017). It is possible to combine Quicksort with the BFPRT to produce a deterministic Quicksort with worst-case complexity of \(\Theta(N\log N)\)(Gurthest and Goyal, 2017). Hence, the quality of the pivots can avoid the worst-case of randomized Quicksort.
Inspired by the deterministic Quicksort, the analysis of LearnedSort is split into three parts. The first part introduces Quicksort with Learned Pivots, a variation of Quicksort where the CDF model selects the pivot. That section shows that training a CDF model is akin to other pivot selection techniques such as applying the BFPRT algorithm. The second part analyzes Learned Quicksort, a simplified LearnedSort with \(B=2\) buckets. It turns out that Learned Quicksort is in fact analogous to a Quicksort with Learned Pivots but with implicit pivots. Lastly, the third part considers \(B>2\) and the connections between LearnedSort and SampleSort.
### Quicksort with Learned Pivots
The analysis starts with the pseudocode of our Quicksort variant shown in Algorithm 1. For simplicity, assume that all elements on the input \(A\) are distinct. The algorithm is identical to many other Quicksort implementations with the exception of the partitioning call.
```
if distance(l, r) ≤ BASECASE_SIZE then
    InsertionSort(A, l, r);
    return;
q ← PartitionWithLearnedPivot(A, l, r);
Quicksort(A, l, q - 1);
Quicksort(A, q + 1, r);
return;
```
**Algorithm 1** Quicksort(A, l, r)
Algorithm 2 describes how to use the CDF models to select an optimal pivot. Essentially, our goal is to find the median of the input. To do so, we select the largest element \(A[t]\) such that the predicted CDF is smaller than or equal to the true CDF of the median.
```
S ← Sample(A, l, r);
HeapSort(S, 0, S.size() - 1);
F ← TrainCDFModel(S, 0, S.size() - 1);   // function that calculates P(A ≤ x) in [0, 1]
/* Select the largest element from A that has predicted CDF less than or equal to the true median */
t ← -1;
for w ← l to r do
    if F(A[w]) ≤ 0.5 and (t < 0 or A[w] > A[t]) then
        t ← w;
swap(A[t], A[r]);
/* After selecting the pivot with the CDF model, we can use any classic partition scheme */
pivot ← A[r];
i ← l - 1;
for j ← l to r - 1 do
    if A[j] ≤ pivot then
        i ← i + 1;
        swap(A[i], A[j]);
swap(A[i + 1], A[r]);
return i + 1;
```
**Algorithm 2** PartitionWithLearnedPivot(A, l, r)
The TrainCDFModel function is arbitrary, such that any type of CDF model could work, e.g., RMI, PLEX, or RadixSpline. However, for the CDF model to be useful, some properties should hold.
The first is monotonicity: \(x\leq y\implies F(x)\leq F(y)\). This property is necessary to ensure that the selected pivot is indeed closest to the median and that the model contains no incorrect inversions.
The second is that the model should require only a small number of samples. This follows from the fact that training a CDF model requires a sorted input, and sorting the samples with HeapSort takes \(\mathcal{O}(S\log S)\) (although any algorithm with the same complexity would work).
The third is that computing the predictions of the model for a key should take \(\mathcal{O}(1)\) time. Since we need to make a prediction for each of the \(N\) keys, if the time to compute a prediction is not constant it would lead to an algorithm slower than the traditional Quicksort.
Given these properties, Algorithm 2 takes \(\mathcal{O}(N)\) and its run time is dominated by the loop applying the model predictions and the Lomuto partitioning step.
The time complexity of Algorithm 1 depends on the quality of the learned pivot. In the best case, the complexity is modelled by \(T(N)=\mathcal{O}(N)+2T(N/2)\) which happens when the learned pivot is the median. Hence, the lower bound of Algorithm 1 is \(\Omega(N\log N)\).
The worst-case complexity is modelled by \(T(N)=\mathcal{O}(N)+T(N-1)\) and happens when the learned pivot is the smallest element in the sequence. Thus, the worst-case of the algorithm is \(\Theta(N^{2})\) just like the original Quicksort. However, if the chosen model is a good model, reaching the worst-case is unlikely. The average-case analysis is much closer to the best case in practice.
Let \(\eta\) be the error from finding the perfect partitioning as:
\[\eta=\max(P(A\leq pivot),1-P(A\leq pivot))-1/2\]
where \(P(A\leq pivot)\) is the true CDF of the learned pivot. \(\eta=0\) in case the CDF model always predicts the median. \(\eta=1/2\) in case the CDF model always predicts the smallest element. The complexity is then modelled by:
\[T(N)=\mathcal{O}(N)+T((\eta+1/2)N)+T((-\eta+1/2)N)\]
The value \(\eta\) is not known ahead of time, as it depends on the sample size and CDF model. However, we may assume that the model produces better predictions than a random pick, \(\eta_{\text{learned}}\leq\eta_{\text{random}}\) (otherwise we would fall back to a random pick). This implies that Quicksort with Learned Pivots runs at least as fast as Randomized Quicksort. Thus \(T(N)\in\mathcal{O}(N\log N)\).
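As a concrete illustration of the quantity above, η can be estimated empirically for any candidate pivot by measuring its true rank (a sketch; not part of the LearnedSort code):

```python
import numpy as np

def pivot_error(A, pivot):
    # eta = max(P(A <= pivot), 1 - P(A <= pivot)) - 1/2:
    # 0 when the pivot is the median, close to 1/2 for an extreme pivot.
    p = float(np.mean(np.asarray(A) <= pivot))
    return max(p, 1.0 - p) - 0.5
```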
Quicksort with Learned Pivots is not efficient enough to outperform IntroSort or pdqsort. However, it is conceptually useful to show that training a CDF model is a step towards finding better pivots.
### Learned Quicksort
Progressing towards analyzing LearnedSort, we introduce Learned Quicksort. Learned Quicksort, shown in Algorithm 3, is a simpler version of LearnedSort that contains only \(B=2\) buckets.
```
if distance(l, r) ≤ BASECASE_SIZE then
    InsertionSort(A, l, r);
    return;
S ← Sample(A, l, r);
HeapSort(S, 0, S.size() - 1);
F ← TrainCDFModel(S, 0, S.size() - 1);
/* Using the predictions directly is equivalent to using the learned pivot */
i ← l;
j ← r;
while i < j do
    if F(A[i]) ≤ 0.5 then
        i ← i + 1;
    else
        swap(A[i], A[j]);
        j ← j - 1;
LearnedQuicksort(A, l, i);
LearnedQuicksort(A, i + 1, r);
return;
```
**Algorithm 3** LearnedQuicksort(A, l, r)
Similar to LearnedSort, Learned Quicksort partitions the data using machine learning models. Since there are only two buckets, the partitioning can be done such that elements with \(F(A[i])\leq 0.5\) are put in the initial section of the input starting from the first index and elements with \(F(A[i])>0.5\) are put at the end of the input starting from the last index.
The partitioning done by Quicksort with Learned Pivots and Learned Quicksort is almost identical. The only exception is the learned pivot itself, which ends up in the last position of the first half in the former. Hence, the algorithms have the same time complexity, which means that Learned Quicksort has a complexity of \(O(N\log N)\).
The interesting fact about Learned Quicksort is that it does not compute the pivot explicitly. Instead, it relies solely on the results of the model \(F\). Computationally, this is advantageous as Learned Quicksort always performs fewer operations than Quicksort with Learned Pivots.
We may interpret Learned Quicksort as a Quicksort variant that circumvents the bounds on the theoretical number of comparisons by embracing the numerical properties of the CDF. This is a hint to why LearnedSort is so efficient.
### LearnedSort
We now consider the general case of LearnedSort when \(B>2\). If Learned Quicksort is analogous to a Quicksort with Learned Pivots, LearnedSort is analogous to a SampleSort with \(B-1\) learned pivots.
```
B ← numberOfBuckets(A.size());
p ← Array(B, -1);   // indexes of the pivots for the i-th bucket
S ← Sample(A, l, r);
HeapSort(S, 0, S.size() - 1);
F ← TrainCDFModel(S, 0, S.size() - 1);
/* For each i-th percentile, select the largest element from A whose predicted CDF is less than that percentile */
for w ← l to r do
    g ← ⌊F(A[w]) × B⌋;
    if p[g] < 0 or A[w] > A[p[g]] then
        p[g] ← w;
pivots ← p.filter(v ≥ 0).map(v → A[v]);
return pivots;
```
**Algorithm 4** LearnedPivotsForSampleSort(A, l, r)
Algorithm 4 details the process to compute the learned pivots for SampleSort. If we used those pivots in SampleSort, the partitioning would be identical to the partitioning done by LearnedSort. Obviously, as shown in the previous section, using the model directly is more advantageous as it skips the comparisons altogether.
This explains why LearnedSort is effective. LearnedSort is an enhanced version of SampleSort, which is already a competitive sorting algorithm. If the learned pivots of LearnedSort are better than the randomly selected pivots of SampleSort, we expect LearnedSort to beat SampleSort. Moreover, LearnedSort skips the comparisons done by SampleSort and relies on the \(O(1)\) predictions of the model, which gives LearnedSort an additional performance boost.
There are some differences between an augmented SampleSort and the implementation of LearnedSort 2.0. These minor details come from the authors iterating to improve LearnedSort empirically.
The major discrepancy is that SampleSort does \(O(\log_{B}N)\) partitioning rounds while LearnedSort does only two. We interpret this as Kristo et al. implementing a very shallow recursion tree with a large base-case size. SampleSort implementations generally use \(B=128\) or \(B=256\) buckets and switch to Insertion Sort when there are 16 elements or fewer. LearnedSort uses \(B=1000\) buckets; hence, assuming two rounds of partitioning and an input of around \(N=10^{9}\) elements, around 1000 elements on average are handled by LearnedSort's base case. We hypothesize that with \(N=10^{12}\) or \(N=10^{13}\) elements, LearnedSort's performance would suffer and a third partitioning round would be helpful. However, that input size requires over a terabyte of RAM, at which point the problem stops being an in-memory sort and starts being an external sort instance. Thus, the implementation by Kristo et al. works well in practice.
Another discrepancy is that SampleSort samples data on every sub-problem while LearnedSort samples data only once. This may be an optimization that comes from practical experience. Instead of sampling a few data points, creating 1000 sub-problems and sampling for each sub-problem again, LearnedSort opts to sample a lot of data in bulk. This works because the recursion tree of LearnedSort is very shallow and because the RMI architecture supports this strategy as the second-level models specialize in parts of the CDF.
Lastly, the RMI used by LearnedSort violates one assumption from our analysis. It does not guarantee that \(x\leq y\implies F(x)\leq F(y)\). In practice, inversions do occur but they are relatively rare. This leads to an almost-sorted sequence, which can be quickly fixed by Insertion Sort.
### Quality of the Pivots
This section analyzes the quality of the learned pivots implicitly used by LearnedSort. For two datasets, the uniform distribution and the Wiki/Edit data, the RMI created by LearnedSort was used with Algorithm 4 to calculate the pivots in the first partitioning step. The RMI pivots were compared with the random pivots used by IPS\({}^{4}\)o.
The sorted data was used to calculate the true CDF, \(P(A\leq p_{i})\), for each pivot \(p_{i}\). The metric used for the quality was the distance between the CDF of the pivots and the CDF of the perfect splitters \(\sum_{i=0}^{B-2}|P(A\leq p_{i})-(i+1)/B|\). For simplicity, we matched the number of pivots used by IPS\({}^{4}\)o with the number of pivots computed by the RMI, although LearnedSort uses more pivots in practice.
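A small sketch of this quality metric, assuming the sorted input is available to evaluate the true CDF of each pivot (variable names are illustrative):

```python
import numpy as np

def pivot_quality(A_sorted, pivots):
    # sum_i | P(A <= p_i) - (i + 1) / B |, with B - 1 = len(pivots); lower is better.
    a = np.asarray(A_sorted)
    B = len(pivots) + 1
    total = 0.0
    for i, p in enumerate(sorted(pivots)):
        true_cdf = np.searchsorted(a, p, side="right") / len(a)
        total += abs(true_cdf - (i + 1) / B)
    return total
```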
The results in Table 2 display that the learned pivots are indeed better than the random pivots.
| | Random (255 pivots) | RMI (255 pivots) |
| --- | --- | --- |
| Uniform | 1.1016 | 0.4388 |
| Wiki/Edit | 0.9991 | 0.5157 |

Table 2. Quality of the pivots for IPS\({}^{4}\)o (Random) and LearnedSort (RMI)
## 4. Parallelization of LearnedSort
One direct consequence from the previous analysis is that the progress in engineering a fast SampleSort transfers to LearnedSort. A relevant limitation of LearnedSort 2.0 is that there is only a sequential version available that cannot use all the cores present in modern CPUs to sort data in parallel. This limits applying LearnedSort to real-world workflows.
To address this limitation, we introduce the Augmented In-place Parallel SampleSort (\(\text{AIPS}^{2}\)o). \(\text{AIPS}^{2}\)o is a hybrid of \(\text{IPS}^{4}\)o with LearnedSort. It is built upon the codebase available from \(\text{IPS}^{4}\)o and augments it with the RMI implementation used in LearnedSort.
```
S ← Sample(A, l, r);
Sort(S, 0, S.size() - 1);
if InputSizeIsLarge(l, r) and not TooManyDuplicates(S) then
    // we sample more data as the RMI benefits from larger samples
    R ← LargerSample(A, l, r);
    Sort(R, 0, R.size() - 1);
    rmi ← BuildRMI(R);
    return rmi;
else
    tree ← BuildBranchlessDecisionTree(S);
    return tree;
```
**Algorithm 5** BuildPartitionModel(A, l, r)
How \(\text{AIPS}^{2}\)o selects its partitioning strategy is shown in Algorithm 5. Essentially, if the input size is sufficiently large and there are not too many duplicates, the routine samples more data and returns a trained RMI. Otherwise, it builds and returns the decision tree used in \(\text{IPS}^{4}\)o. For our implementation, we use \(B=1024\) buckets for the RMI. We default to the decision tree with \(B=256\) if the input size is smaller than \(N=10^{5}\) or if more than 10% of the first sample consists of duplicates.
Since \(\text{AIPS}^{2}\)o uses the framework from \(\text{IPS}^{4}\)o, it profits from the parallelization of the latter. Another feature it inherits from \(\text{IPS}^{4}\)o is the handling of duplicates, which avoids the common adversarial case for LearnedSort by using the equality buckets from the decision tree.
There are additional modifications needed to make \(\text{AIPS}^{2}\)o work as well. The most critical modification is making the RMI monotonic such that \(x\leq y\implies F(x)\leq F(y)\) holds. This is necessary to avoid having to apply Insertion Sort to guarantee correctness. To implement a monotonic RMI, we had to constrain the second-level linear models such that \(\max_{x\in R}f_{2}^{(i)}(x)\leq\min_{x\in R}f_{2}^{(i+1)}(x)\). This incurs two additional accesses to an array storing the minimums and maximums when processing an element.
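A sketch of one way to realise this constraint, assuming per-leaf minimum and maximum predictions have been recorded on the training sample; the bound propagation and clamping shown here illustrate the idea and are not the exact AIPS²o code:

```python
import numpy as np

def build_monotonic_bounds(raw_lo, raw_hi):
    # Propagate bounds left to right so that hi[i] of leaf i never exceeds lo[i+1] of leaf i+1.
    lo, hi = np.array(raw_lo, dtype=float), np.array(raw_hi, dtype=float)
    for i in range(1, len(lo)):
        lo[i] = max(lo[i], hi[i - 1])    # enforce max f2^(i-1) <= min f2^(i)
        hi[i] = max(hi[i], lo[i])
    return lo, hi

def monotonic_predict(leaf_prediction, i, lo, hi):
    # Clamp the i-th second-level prediction into its bounds:
    # the two extra array reads (lo[i], hi[i]) are the per-key cost mentioned in the text.
    return float(np.clip(leaf_prediction, lo[i], hi[i]))
```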
The base case is also modified. Model-based Counting Sort is not used, as the algorithm never forwards the RMI between recursive calls. Instead, SkaSort is used for the base case when there are fewer than 4096 elements (SkaSort). SkaSort is a fast radix sort that is the base case for \(\text{IPS}^{2}\)Ra.
## 5. Experimental Results
\(\text{AIPS}^{2}\)o is compared against other sorting algorithms on the benchmark presented in the Learned Sort 2.0 paper (Kang et al., 2018). For reproducibility, benchmarks were executed on the **m5zn.metal** instance from AWS. The instance runs an Intel(r) Xeon(r) Platinum 8252C CPU @ 3.80GHz with 48 cores, 768KB of L1 cache, 24MB of L2 cache, 99 MB of L3 cache, and 192 GB of RAM.
The four competitors with \(\text{AIPS}^{2}\)o are the following. \(\text{IPS}^{4}\)o, the state-of-the-art implementation of SampleSort. \(\text{IPS}^{2}\)Ra, the radix sort implementation built on top of the framework for \(\text{IPS}^{4}\)o. Learned Sort, one of the fastest sequential sorting algorithms as discussed earlier. \(\text{std::sort}\) from the C++ STL, as the baseline for the experiment. The implementations were written in C++ and compiled with GCC 11 using the -O3 and -march=native flags.
The benchmark includes sequential and parallel settings. We refer to the sequential versions of the algorithms as \(\text{AI1S}^{2}\)o, \(\text{I1S}^{4}\)o, and \(\text{I1S}^{2}\)Ra for consistency, as they are not parallel. We provide \(\text{std::execution::par\_unseq}\) as an argument to \(\text{std::sort}\) when executing in parallel. To sort floating point numbers with \(\text{IPS}^{2}\)Ra, we use a key extractor that maps floats to integers. LearnedSort is not in the parallel benchmark because there is only a sequential implementation. The parallel benchmark uses all of the 48 cores available in the machine.
The datasets used in the benchmark consist of synthetic and real-world data. The synthetic portion contains 64-bit double floating-point elements from various probability distributions. The real-world portion contains 64-bit unsigned integer elements mostly from the work of Marcus et al. (Marcus et al., 2018). For \(\mathbf{N}=10^{8}\), data size is 800 MB.
**Synthetic Datasets**
* **Uniform (N = \(10^{8}\))**: Uniform distribution with \(a=0\) and \(b=N\)
* **Normal (N = \(10^{8}\))**: Normal distribution with \(\mu=0\) and \(\sigma=1\)
* **Log-Normal (N = \(10^{8}\))**: Log-normal distribution with \(\mu=0\) and \(\sigma=0.5\)
* **Mix Gauss (N = \(10^{8}\))**: Random additive distribution of five Gaussian distributions
* **Exponential (N = \(10^{8}\))**: Exponential Distribution with \(\lambda=2\)
* **Chi-Squared (N = \(10^{8}\))**: \(\chi^{2}\) distribution with \(k=4\)
* **Root Dups (N = \(10^{8}\))**: Sequence of \(A[i]=i\mod\sqrt{N}\) as proposed in (Kang et al., 2018)
* **Two Dups (N = \(10^{8}\))**: Sequence of \(A[i]=i^{2}+N/2\mod N\) as proposed in (Kang et al., 2018)
* **Zipf (N = \(10^{8}\))**: Zipfian distribution with \(s_{\text{zipf}}=0.75\)
**Real-World Datasets**
* **OSM/Cell_IDs (N = \(2\cdot 10^{8}\))**: Uniformly sampled location IDs from OpenStreetMaps.
* **Wiki/Edit (N = \(2\cdot 10^{8}\))**: The edit timestamps from Wikipedia articles
* **FB/IDs (N = \(2\cdot 10^{8}\))**: The IDs from Facebook users sampled in a random walk of the network graph
* **Books/Sales (N = \(2\cdot 10^{8}\))**: Book popularity data from Amazon
* **NYC/Pickup (N = \(10^{8}\))**: The yellow taxi trip pick-up time stamps
### Sequential Results
The sorting rate of the sequential algorithms is in Figures 1, 2, and 3. The rate is measured by keys per second and indicates the throughput of each algorithm. The numbers are the mean of 10 executions of the algorithms. Higher rates indicate better algorithms.
LearnedSort is the fastest in 9 of the 14 datasets, claiming the first spot in the sequential benchmark. I1S\({}^{2}\)Ra comes second, beating the competitors in 4 datasets. Surprisingly, I1S\({}^{2}\)Ra outperforms LearnedSort in most of the real-world datasets that were created to benchmark the RMIs that power LearnedSort. I1S\({}^{4}\)o is the fastest only for one dataset, Root Dups, which it handles gracefully due to its equality buckets.
AI1S\({}^{2}\)o is outperformed in the sequential benchmark. It is faster than the std::sort baseline; nevertheless, the hybrid algorithm is slower than both LearnedSort and I1S\({}^{4}\)o, which provide its building blocks.
We attribute the slower sequential results to the more costly training step of AI1S\({}^{2}\)o. It is important to recall that the training time is accounted for in the sorting time for AI1S\({}^{2}\)o and LearnedSort. AI1S\({}^{2}\)o samples more data than I1S\({}^{4}\)o on each partitioning step, which incurs a penalty as we need to sort those samples. The advantage of having better pivots is offset by the training cost. AI1S\({}^{2}\)o also spends more time training models than LearnedSort, as LearnedSort trains the RMI only once while AI1S\({}^{2}\)o trains an RMI per recursive call.
As we will see in the next section, AIPS2o is a more competitive parallel algorithm. We found that adjusting the sample size and training time had little to no improvement on the sequential case but improved the parallel performance.
### Parallel Results
The sorting rate of the parallel algorithms is in Figures 4, 5, and 6. The rate is measured by keys per second and indicates the throughput of each algorithm. The rates come from the mean of 10 executions of the algorithms.
Figure 1. Sorting throughput of the sequential algorithms. Higher rates are better.
Figure 3. Sorting throughput of the sequential algorithms. Higher rates are better.
Figure 2. Sorting throughput of the sequential algorithms. Higher rates are better.
AIPS\({}^{2}\)o is the fastest in 10 of the 14 datasets, claiming the first spot in the parallel benchmark. IPS\({}^{4}\)o comes second, finishing as the fastest in 4 of the 14 datasets. std::sort places third. IPS\({}^{2}\)Ra finishes last, behind the baseline in the majority of cases.
The key to high parallel performance is an algorithm's ability to maximize the use of the hardware. AIPS\({}^{2}\)o creates the best partition of the data in the majority of cases, which yields many sub-problems of balanced size. This favours the performance of AIPS\({}^{2}\)o because it manages to keep every CPU thread busy doing work. It also hurts AIPS\({}^{2}\)o when the RMI does not model the data as accurately. The lowest throughputs of AIPS\({}^{2}\)o happen on the FB/IDs and Wiki/Edit datasets, which are known to be harder for the RMI than the Books/Sales and OSM/Cell_IDs datasets [20].
By contrast, IPS\({}^{2}\)Ra does not manage to use all the hardware because its partitions are not balanced. There are no bounds on the number of elements that share the same radix prefix and go into the same bucket. Hence, IPS\({}^{2}\)Ra may end up with threads waiting for work, hurting its sorting rate compared to AIPS\({}^{2}\)o and IPS\({}^{4}\)o, which always keep threads busy. This is particularly relevant to show that having a fast sequential algorithm does not necessarily imply a fast parallel algorithm and vice versa.
The benchmarks demonstrate that AIPS2o is a practical algorithm. It is a parallel LearnedSort that achieves excellent sorting rates in many datasets. We expect that continuous work will help AIPS2o become more robust against data distributions like the one from FB/IDs, finally closing the gap between AIPS2o and IPS4o on the cases where the latter wins.
Figure 4. Sorting throughput of the parallel algorithms. Higher rates are better.
Figure 5. Sorting throughput of the parallel algorithms. Higher rates are better.
Figure 6. Sorting throughput of the parallel algorithms. Higher rates are better.
## 6. Conclusion and Future Work
This paper argues that LearnedSort is analogous to a SampleSort with pivots selected by a CDF model. This helps explain the effectiveness of LearnedSort by comparing it to SampleSort. We introduced the Augmented In-place Parallel SampleSort, combining the state-of-the-art implementation of SampleSort with LearnedSort. The benchmarks demonstrated that Augmented In-place Parallel SampleSort is a practical parallel implementation of LearnedSort that can outperform the fastest parallel sorting algorithm in the majority of the tested inputs including both synthetic and real-world data sets.
Future work in this research direction is to explore how machine learning models can be applied to other use cases in sorting. Some possibilities include:
**GPUSorting**: Can the RMI or other learned indexes be combined with GPU SampleSort (Krishnan et al., 2019)?
**String Sorting**: Can learned indexes targeting strings (Krishnan et al., 2019) be combined with String SampleSort (Blekman et al., 2019)?
**Sampling and Pivot Quality**: Can the quality of the learned pivots improve if combined with better sampling techniques (Krishnan et al., 2019)?
|
2308.07381 | Late time HST UV and optical observations of AT~2018cow: extracting a
cow from its background | The bright, blue, rapidly evolving AT2018cow is a well-studied peculiar
extragalactic transient. Despite an abundance of multi-wavelength data, there
still is no consensus on the nature of the event. We present our analysis of
three epochs of Hubble Space Telescope (HST) observations spanning the period
from 713-1474 days post burst, paying particular attention to uncertainties of
the transient photometry introduced by the complex background in which
AT2018cow resides. Photometric measurements show evident fading in the UV and
more subtle but significant fading in the optical. During the last HST
observation, the transient's optical/UV colours were still bluer than those of
the substantial population of compact, young, star-forming regions in the host
of AT2018cow, suggesting some continued transient contribution to the light.
However, a compact source underlying the transient would substantially modify
the resulting spectral energy distribution, depending on its contribution in
the various bands. In particular, in the optical filters, the complex, diffuse
background poses a problem for precise photometry. An underlying cluster is
expected for a supernova occurring within a young stellar environment or a
tidal-disruption event (TDE) within a dense older one. While many recent works
have focused on the supernova interpretation, we note the substantial
similarity in UV light-curve morphology between AT2018cow and several tidal
disruption events around supermassive black holes. Assuming AT2018cow arises
from a TDE-like event, we fit the late-time emission with a disc model and find
$M_{BH} = 10^{3.2{\pm}0.8}$ M$_{\odot}$. Further observations are necessary to
determine the late-time evolution of the transient and its immediate
environment. | Anne Inkenhaag, Peter G. Jonker, Andrew J. Levan, Ashley A. Chrimes, Andrew Mummery, Daniel A. Perley, Nial R. Tanvir | 2023-08-14T18:06:54Z | http://arxiv.org/abs/2308.07381v1 | # Late time _Hst_ UV and optical observations of AT 2018cow: extracting a cow from its background
###### Abstract
The bright, blue, rapidly evolving AT2018cow is a well-studied peculiar extragalactic transient. Despite an abundance of multi-wavelength data, there still is no consensus on the nature of the event. We present our analysis of three epochs of _Hubble Space Telescope (HST)_ observations spanning the period from 713-1474 days post burst, paying particular attention to uncertainties of the transient photometry introduced by the complex background in which AT2018cow resides. Photometric measurements show evident fading in the UV and more subtle but significant fading in the optical. During the last _HST_ observation, the transient's optical/UV colours were still bluer than those of the substantial population of compact, young, star-forming regions in the host of AT2018cow, suggesting some continued transient contribution to the light. However, a compact source underlying the transient would substantially modify the resulting spectral energy distribution, depending on its contribution in the various bands. In particular, in the optical filters, the complex, diffuse background poses a problem for precise photometry. An underlying cluster is expected for a supernova occurring within a young stellar environment or a tidal-disruption event (TDE) within a dense older one. While many recent works have focused on the supernova interpretation, we note the substantial similarity in UV light-curve morphology between AT2018cow and several tidal disruption events around supermassive black holes. Assuming AT2018cow arises from a TDE-like event, we fit the late-time emission with a disc model and find \(M_{BH}=10^{3.2\pm 0.8}\,\mathrm{M}_{\odot}\). Further observations are necessary to determine the late-time evolution of the transient and its immediate environment.
keywords: stars: individual: AT2018cow - ultraviolet: stars - supernovae: general - transients: supernovae - transients: tidal disruption events
## 1 Introduction
Wide field-of-view surveys at various wavelengths have transformed transient astrophysics. From X-rays with _Swift_ (Burrows et al., 2005) and eROSITA (Predehl et al., 2021), through to optical with e.g., the Zwicky Transient Facility (ZTF; Bellm et al., 2019), the All-Sky Automated Survey for Supernovae (ASASSN1; Shappee et al., 2014), and the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry, 2011), and radio surveys (e.g., the VLA sky survey, Lacy et al., 2020, the Canadian Hydrogen Intensity Mapping Experiment [CHIME]; CHIME Collaboration et al., 2022, and MeerKAT; Jonas & MeerKAT Team, 2016), we can now identify and follow hundreds to thousands of transients, such as gamma-ray bursts, supernovae and fast radio bursts, per year. These high rates result from the combination of areal coverage, depth and cadence of these surveys, and the intrinsic volumetric rate and luminosity function of the transients under consideration. Due to these large, high-cadence, sensitive surveys, events that are intrinsically rare, or that are numerous but faint, are also being detected. At the extremes of parameter space, we detect events whose nature stretches plausible progenitor models. These events are thus extremely valuable for study in their own right.
Footnote 1: [https://www.astronomy.ohio-state.edu/sasssn/](https://www.astronomy.ohio-state.edu/sasssn/)
One class of such peculiar transients are fast blue optical transients (FBOTs; e.g., Drout et al., 2014; Arcavi et al., 2016; Whitesides et al., 2017; Pursiainen et al., 2018; Tampo et al., 2020; Ho et al., 2023). A handful of FBOTs have been discovered over the last decade: CSS161010 (Coppejans et al., 2020), AT2018lug/ZTF18abvkula (Ho et al., 2020), AT2020xnd/ZTF20acigmen (Perley et al., 2021), AT2020mrf (Yao et al., 2022), and the well-known example AT 2018cow (Prentice et al., 2018; Perley et al., 2019). Together, these events form their own class of astrophysical transients, although the FBOT properties are heterogeneous, and the nature of the events is still uncertain. This class of events is characterised by fast rise and decay times, high peak luminosities (absolute peak magnitude \(\lesssim-19\)), and early spectra dominated by a blue continuum.
Multiple models were suggested, such as peculiar supernovae (SNe) and magnetars formed in double neutron star mergers (Drout et al., 2014). In SNe the timescale of Ni56 radioactive decay and the diffusion time scale are critical parameters in the light-curve evolution (Arnett, 1982). However, these two time scales are too long to explain the rapid decay and high peak luminosity observed for FBOTs (Drout et al., 2014; Pursiainen et al., 2018).
AT 2018cow was the first FBOT discovered in real-time instead of archival searches. The transient rose to peak rapidly (\(>\)5 mags in \(\sim\) 3.5 days), was extremely bright (\(\rm L_{peak}\approx 10^{44}\) erg s\({}^{-1}\); Prentice et al., 2018; Perley et al., 2019) and was detected across the electromagnetic (EM) spectrum. The host galaxy CGCG137\(-\)068 has a luminosity distance of 63.0\(\pm\)4.4 Mpc (redshift z=0.01404\(\pm\)0.00002) (SDSS DR6; Adelman-McCarthy et al., 2008). The combination of high (peak) luminosity and relativey low distance meant that many telescopes and satellites could observe and detect it, and led to an extensive observational campaign.
Observations of AT 2018cow showed that the luminosity decay was too fast to be powered by Ni56 decay (Margutti et al., 2019). In addition, the photospheric radius stayed hot and small for hundreds of days (Perley et al., 2019; Sun et al., 2022). The optical spectra were featureless the first \(\sim\)20 days; after that period, emission lines of hydrogen and helium appeared (Prentice et al., 2018; Margutti et al., 2019; Perley et al., 2019). The spectral evolution has some resemblence to the spectral development of SNe Ibn and IIn (Fox & Smith, 2019; Xiang et al., 2021) although the lines in AT 2018cow appeared later than usual for those supernovae. The X-ray luminosity was high (e.g., Margutti et al., 2019; Kuin et al., 2019) and showed suggestive evidence for the presence of one or more quasi-periodic oscillations (QPOs) (Zhang et al., 2022; Pasham et al., 2021). QPOs are regularly seen in accreting systems, and the combination of a high luminosity and the detection of a QPO, if real, would thus suggest AT 2018cow is caused by an accreting compact object.
Footnote 5: [https://drazzlepac.readthedocs.io/en/latest/astrodrizzle.html](https://drazzlepac.readthedocs.io/en/latest/astrodrizzle.html)
The host galaxy of AT 2018cow appears to be a face on spiral system, and there are several (at least two) star-forming regions that lie close to (within \(\sim\) 170 parsec) the (projected) position of AT 2018cow. Assuming AT 2018cow lies in the plane of the host galaxy and not above or below it, this provides suggestive evidence for a link between massive star evolutionary processes and AT 2018cow (Lyman et al., 2020; Morokuma-Matsui et al., 2019). On the other hand, Sun et al. (2023) suggest that the low extinction in the transient implies that it is more likely on the near side of the disc and is not necessarily embedded in the star-forming regions. It would argue against a link with a massive star progenitor if this is correct.
Combining all the observed properties, the emission of AT 2018cow most likely comes from an engine-driven explosion (e.g., Margutti et al., 2019; Perley et al., 2019). Multiple models have been proposed for AT 2018cow (and FBOTs in general), including magnetars (Prentice et al., 2018; Mohan et al., 2020; Liu et al., 2022), interactions with the circumstellar material (Rivera Sandoval et al., 2018; Pellegrino et al., 2022) and a pre-existing stellar mass BH disrupting or accreting a companion (Metzger, 2022). Among the proposed models, the following two are considered most promising: an engine-powered core-collapse event, where a compact object is formed that accretes progenitor material (Prentice et al., 2018; Perley et al., 2019; Margutti et al., 2019; Mohan et al., 2020), or a tidal disruption event (TDE) of a white dwarf (WD) or main sequence (MS) star by an intermediate mass black hole (IMBH, Kuin et al., 2019; Perley et al., 2019). This class of TDEs may naturally explain the fainter and faster evolution compared to classical TDEs (of stars by a supermassive black hole [SMBH]), as well as provide an explanation for the non-nuclear location of the transient (Maguire et al., 2020). However, the IMBH must reside in a dense stellar environment such that two-body relaxation is efficient enough to scatter a white dwarf (or MS star) into the tidal radius within a Hubble time. Such a dense stellar environment is then a requirement for the TDE interpretation to be viable, although previous research does not provide evidence for such an environment (e.g. Margutti et al., 2019). However, long-lived, luminous emission from AT 2018cow makes detecting any putative (underlying) stellar cluster difficult.
The _Hubble Space Telescope (HST)_ observed AT 2018cow several times over the four-year period since its first detection. Surprisingly, Sun et al. (2022, 2023) detected UV-radiation even more than 4 years after the first detection of AT 2018cow. This emission is consistent with a hot and bright source and Sun et al. (2022) suggest a massive star progenitor is most likely involved.
In this work, we present our analysis of the late-time _HST_ data of AT 2018cow, spanning three epochs between 713 and 1474 days after the initial detection. The filters range from F225W in the UV to F814W in the red part of the optical. We perform photometry in multiple ways and investigate the influence of the background measurement on the photometry outcome. We also investigate whether the detected emission is from AT 2018cow itself or the environment and if there are implications from this distinction for the progenitor scenarios. We investigate if the UV properties can be explained under a TDE scenario and what the implications would be.
All magnitudes are presented in the AB magnitude system unless specified otherwise. Throughout the paper we use \(\rm H_{0}=67.8\,km\,s^{-1}\,Mpc^{-1}\), \(\rm\Omega_{m}=0.308\) and \(\rm\Omega_{\Lambda}=0.692\)(Planck Collaboration et al., 2016).
## 2 Data analysis
For this work we use observations of AT2018cow by _HST_ using the Ultraviolet-Visible (UVIS) channel of the Wide Field Camera 3 (WFC3) at three different late-time epochs. The data we use were taken under program IDs 15974, 16179 and 16925 with PIs A. Levan, A. Filippenko and Y. Chen, respectively. The observations are taken 713 days, 1135 days and 1474 days after the first detection of the transient, which we take to be \(\rm T_{0}=58285.44\)(Perley et al., 2019). We obtain the individual on-the-fly processed images from the Mikulski Archive for Space Telescopes2, these have had flat field and bias corrections applied and have also been corrected for the impact of charge transfer efficiency on the ageing WFC3 CCDs.
Footnote 2: [https://archive.stsci.edu/](https://archive.stsci.edu/)
### Alignment
First we combine the individual images using astrodrizzle from the Python package drizzlepac (Hoffmann et al., 2021)3. Here, we set the final pixel scale to final_scale=0.025 to utilize sub-pixel dithering to obtain more finely sampled images and to better sample the _HST_ point spread function (PSF). We use default settings for parameters unless mentioned otherwise. Next, we use the geomap task in iraf (Tody, 1986, 1993) to align the images obtained in the four different filters 713 days after the onset. The sources used for this alignment are the galaxy nucleus [R.A.,Dec] = [16:16:00.582,+22:16:08.286] and a star [R.A.,Dec] = [16:15:59.147,+22:15:58.88]: both are detected in
all four filters. After this, we use xregister to align each filter image obtained at the one (F225W and F336W) or two (F555W and F814W) other epoch(s) to their respective image obtained 713 days after the transient's first detection. We cannot use xregister to align all images across all filters because it uses cross correlation to calculate a shift, which does not work well if there are many sources that are not detected in both images, which is the case here when using observations obtained in different filters. The alignment shifts from geomap and xregister are used to redrizzle the images with an additional shift so the sources align pixel wise in the final images.
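For reference, a minimal sketch of this drizzle-combination step; the file pattern and output name are placeholders, the remaining AstroDrizzle parameters are left at their defaults as described above, and keyword availability should be checked against the installed drizzlepac version:

```python
from drizzlepac import astrodrizzle

# Combine the calibrated exposures of one filter/epoch onto a 0.025"/pixel output grid.
astrodrizzle.AstroDrizzle(
    "*_flc.fits",          # placeholder: input exposures for a single filter and epoch
    output="f336w_ep1",    # placeholder output rootname
    final_scale=0.025,     # finer pixel scale to exploit the sub-pixel dithering
)
```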
### Aperture Photometry
We perform aperture photometry using a circle with a radius of 0.08 arcsec on all the images using dual-image mode in source extractor (Bertin & Arnouts, 1996), except our detection image, F336W at T=713 days, for which we use single image mode. In dual-image mode source detection is done on one image and the measurements are done on the second image. This enforces the use of a fixed position of the aperture across the different filter images. Using dual-image mode prevents us from having to cross match the detected sources between images and forces source extractor to perform photometry at the position of AT 2018cow. The choice of aperture radius (corresponding to a diameter of \(\sim 2\) times the Full Width at Half Maximum, FWHM) ensures we measure most of the emission from AT 2018cow without measuring too much background.
We use the drizzled F336W image at epoch 713 days as our source detection image, because there clearly is still emission at the transient location, and more sources are detected in the F336W than in the F225W image. For the photometry we use default values as mentioned in the source extractor manual4 for parameters not mentioned here and adjust parameters such as the FWHM and pixel scale (0.08 arcsec and 0.025 arcsec/pixel, respectively). We set the detection and analysis thresholds to 3.0 sigma to balance between minimizing contamination from spurious detections of hot pixels and allowing the detection of faint sources in the final output. We subtract the local background from the transient light in the final photometry.
Footnote 4: [https://sextractor.readthedocs.io/em/latest/index.html](https://sextractor.readthedocs.io/em/latest/index.html)
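For illustration only, a plain-NumPy sketch of circular-aperture photometry with local background subtraction; this is not the source extractor dual-image pipeline used here, and the pixel radius (0.08 arcsec at 0.025 arcsec/pixel, i.e. 3.2 pixels) and background annulus are stated assumptions:

```python
import numpy as np

def simple_aperture_photometry(img, x0, y0, r_ap=3.2, r_in=5.0, r_out=8.0):
    """Sum counts in a circular aperture and subtract the median local background
    measured in a surrounding annulus (all radii in pixels)."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    in_ap = r <= r_ap
    bkg = np.median(img[(r >= r_in) & (r <= r_out)])
    return img[in_ap].sum() - np.count_nonzero(in_ap) * bkg
```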
Since the individual images are shifted with respect to each other because of drizzling, certain features such as bad pixels or pixels with cosmic rays removed can influence the quality of the signal in multiple pixels in the final combined image (i.e., the noise in the final pixels is correlated to some degree). This can influence the final photometry, which we take into account by using a weight map (WEIGHT_TYPE = HAP_WEIGHT) in source extractor. This weight map tells source extractor which redrizzled pixels contain bad pixels from the individual images, which improves source detection and error estimation, see the source extractor user manual for full details. We use the weight map that is produced by astrodrizzle during the combination process.
Aperture corrections are done using appropriate values from the table provided on the WFC3 handbook website5 using r=0.08 arcsec values in the UVIS2 table. For comparison to Sun et al. (2022) we report Vega magnitudes based on the zeropoints from the WFC3 instrument handbook6. Photometry is corrected for Galactic foreground reddening following Schlafly and Finkbeiner (2011).
Footnote 5: [https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration/uvis-encircled-energy](https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration/uvis-encircled-energy)
### PSF photometry
We also perform PSF photometry to examine whether the source is point-like or extended. We start by cutting out an image (17 by 17 pixels) away from the host galaxy containing an isolated point source (centred on {RA, Dec}=[16:15:59.254, +22:1621.733] for F555W and F814W, and {RA, Dec} = {16:15:59.148,+22:15:58.379} for F225W and F335W). This point source is used to provide an estimate of the PSF. Although it does not have as high a signal-to-noise ratio as the computed PSFs available, the advantage of this approach is that it measures the PSF directly on the image. Since the star is much brighter than the transient, the impact of photometric noise on the template PSF is minimal.
We now proceed to measure the magnitude of a point source at the location of AT2018cow within our images. We linearly interpolate the template PSF to enable sub-pixel centroiding, confirm this model subtracts out cleanly from the PSF star image, and then perform a fit using the pixels with a central position \(<6.1\) pixels from the best fitting (x,y) position determined before. This best-fit position of AT 2018cow is obtained using a 4-parameter fit on the F225W image at T=1474 d (the highest signal-to-noise value of the four UV images), in which the (x,y) position, the PSF normalisation, and the background are left free to vary. The best-fit (x,y) coordinates are then used as fixed input parameters for the fits on the other images (which is possible because of the pixel-wise alignment described in Section 2.1), leaving a 2-parameter fit (the normalisation and background are the remaining free parameters). We minimize the \(\chi^{2}\) in this area and report the values for the best-fit background and PSF normalisation.
To produce PSF subtracted images, the PSF template multiplied by the best-fit normalisation is subtracted from the data at the best-fit position. To calculate the magnitude of the subtracted point source, we sum the number of electrons/s in the template PSF in a circular area with a 6-pixel radius around the peak of the PSF, and multiply by the best-fit normalisation. We determine the error on the best fitting peak height by performing a two parameter \(\chi^{2}\) fit, leaving the centroid position fixed on the best-fit position allowing only the PSF normalisation and the background to vary. The error on the height is determined using \(\Delta\chi^{2}=2.30\). We calculate the error on the magnitude by multiplying the summed PSF model with the error on the PSF normalisation.
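Because the model is linear in the two free parameters (PSF normalisation and a constant background), the χ² minimisation can be written as a small weighted linear least-squares problem; a sketch under the assumption of a per-pixel noise map and a boolean fit-region mask, both stand-ins for the procedure described above:

```python
import numpy as np

def fit_psf_norm_and_bkg(data, psf, sigma, mask):
    """Minimise chi^2 = sum(((data - norm*psf - bkg) / sigma)**2) over the masked pixels.
    Returns (norm, bkg, norm_err), with norm_err from the parameter covariance matrix."""
    d = data[mask] / sigma[mask]
    M = np.column_stack([psf[mask] / sigma[mask], 1.0 / sigma[mask]])
    coef, *_ = np.linalg.lstsq(M, d, rcond=None)
    cov = np.linalg.inv(M.T @ M)          # covariance of (norm, bkg)
    norm, bkg = coef
    return float(norm), float(bkg), float(np.sqrt(cov[0, 0]))
```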
We also perform PSF photometry using dolphot (v2.0, Dolphin, 2000). This software package is specifically designed for PSF photometry in crowded fields. It performs photometry on the individual _flc images and combines the individual measurements into a final (Vega) magnitude for each source. We transform the Vega magnitudes into AB magnitudes using the same difference in zeropoints as mentioned in Section 2.2. We use tweakreg from drizzlepac to align all _flc images to the drizzled F336W T=713 days image, as this has the sharpest PSF. We then perform PSF photometry for this epoch leaving dolphot free to search for sources and use the output positions of this run as fixed positions for the other filters and epochs using the "warmstart" option in dolphot.
### Aperture photometry on difference images
We compute difference images using hotpants (v5.1.11; Becker, 2015) by subtracting epoch 3 from epoch 1 or 2 to investigate the brightness of any residual emission at the position of AT 2018cow. To perform the subtraction we use default values for the input parameters of hotpants except for bgo, ko, and the nsx and nsy parameters, where we use values of 0.1, 0.05, 5 and 5, respectively. The parameters (bgo, ko, nsx, nsy) are the spatial orders of the background and kernel variations and the number of stamps within a region in x and y direction, respectively. We also change the gain (which is equal to the exposure time for the _HST_ reduced data), and values for the upper and the lower valid data counts for each combination of images we compute a difference image for. We maximize the size of the difference image which is however limited by the need to avoid gaps between the CCDs in the different exposures. We also perform aperture photometry on these difference images in all filters, using the procedure described below.
We measure the flux density of any residual on the difference images by determining the number of electrons/s in a circular aperture of 0.08 arcsec radius centered on the position of AT 2018cow. From this, we subtract the mean background and we convert to magnitudes. To determine the mean and standard deviation of the background flux density in the difference images, we randomly place circular apertures of the same radius as above within 30 pixels of the position of AT 2018cow. In placing these apertures we avoid regions where in the images bright objects are present (see Figure 1 for an example of the placement of these regions in the epoch 1 F555W image). We find a large spread in the value of the background (on average a factor \(\sim 1.5\) for the optical filters and between a factor \(\sim 2\) and \(\sim 33\) for the UV filters), and therefore the magnitude and its uncertainty depend on the flux density in the background. We will come back to this in the Discussion, while in the paper we use the median background to determine the source magnitude in the difference image and the standard deviation on the background as the \(1\sigma\) uncertainty on the magnitude in the difference image.
If the measured number of electrons/s in the aperture at the position of AT 2018cow is lower than the mean background, or of similar value to the standard deviation of the background, we determine a \(3\sigma\) upper limit. For this, we measure the number of electrons/s in a circular aperture with 0.08 arcsec radius centered on the position of AT 2018cow and add three times the standard deviation on the background, as described above. The signal-to-noise ratio of the detection of a source in the difference images is determined as the flux density in the source divided by the standard deviation of the flux density in the background.
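A minimal sketch of this aperture-photometry recipe, using photutils (acknowledged below), is given here; the AB zeropoint `zp_ab`, the source position, the random seed, and the omission of the bright-object mask are assumptions of the sketch rather than details taken from the paper.

```python
# Sketch (not the authors' pipeline) of aperture photometry with a locally
# estimated background and a 3-sigma upper limit; `data` is assumed to be a
# drizzled image in electrons/s, and r = 0.08 arcsec ~ 2 WFC3/UVIS pixels.
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def aperture_mag(data, x0, y0, zp_ab, r_pix=2.0, n_bkg=20, box=30, seed=0):
    rng = np.random.default_rng(seed)
    src = CircularAperture([(x0, y0)], r=r_pix)
    f_src = float(aperture_photometry(data, src)["aperture_sum"][0])

    # background from apertures placed randomly within `box` pixels of the
    # source (regions containing bright objects should be avoided)
    xy = np.column_stack([x0 + rng.uniform(-box, box, n_bkg),
                          y0 + rng.uniform(-box, box, n_bkg)])
    f_bkg = np.array(aperture_photometry(data, CircularAperture(xy, r=r_pix))["aperture_sum"])

    net, sig = f_src - np.median(f_bkg), np.std(f_bkg)
    if net < sig:
        # no clear detection: 3-sigma upper limit, following the recipe above
        return zp_ab - 2.5 * np.log10(f_src + 3.0 * sig), None
    mag = zp_ab - 2.5 * np.log10(net)
    mag_err = 2.5 / np.log(10) * sig / net   # 1-sigma error from background scatter
    return mag, mag_err
```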
## 3 Results
### Astrometry
We find a frame-to-frame alignment uncertainty of \(0.005-0.024\) arcsec (\(0.19-0.97\) pixels), depending on which combination of frames is looked at. The alignment between images using the same filter is systematically better than alignment between images using different filters.
A relevant question relating to the nature of the late time emission is whether it is dominated by a point-like component that may be due to the transient, or whether it could arise from an underlying compact region. We therefore check if the position of any emission detected in the difference images is consistent with the position of AT 2018cow.
To investigate this we map the early time UV observations (in particular the F336W data) to the later time F555W observations using 10 compact sources which are likely star forming regions within the host galaxy (see Table 2 for the positions of these sources). We then fit a geometric transformation using geomap, allowing only for a shift in position. The centroid locations for the UV source at 713 days and the compact source in F555W at 1474 days are entirely consistent (\(\delta(x)=0.19\pm 0.25\) pixels and \(\delta(y)=0.01\pm 0.19\) pixels). Furthermore, the location of a faint residual visible in the F555W difference image between epoch 1 and epoch 3 is also consistent with the brightest pixel in all epochs of F555W imaging (\(\delta(x)=0.30\pm 0.36\) pixels and \(\delta(y)=0.06\pm 0.36\) pixels, where the additional uncertainty arises from the centroid error of the residual emission in the F555W image).
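For a shift-only transformation of this kind, the best-fit offset is essentially the mean difference between matched positions; a minimal numpy sketch (placeholder inputs, not the authors' code) of the consistency check quoted above is:

```python
# Shift-only alignment between two frames from N matched compact sources.
import numpy as np

def fit_shift(xy_ref, xy_in):
    """xy_ref, xy_in: (N, 2) arrays of matched centroids (pixels)."""
    d = np.asarray(xy_ref) - np.asarray(xy_in)
    shift = d.mean(axis=0)
    err = d.std(axis=0, ddof=1) / np.sqrt(len(d))   # standard error of the mean
    return shift, err

# The transient positions agree if the residual offset after applying `shift`
# is consistent with zero within `err` (combined in quadrature with the
# centroiding uncertainties of the sources being compared).
```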
### Photometry
#### 3.2.1 Aperture photometry
The results of our aperture photometry can be found in Table 1. In the two UV filters (F225W and F336W) and the F555W filter the source has faded between the first and the third epoch (by \(0.55\pm 0.08\), \(0.39\pm 0.06\) and \(0.23\pm 0.06\) magnitudes, respectively). In the F814W band the magnitudes are consistent with being the same within \(3\sigma\).
#### 3.2.2 Photometry from PSF fitting
In the _right panels_ of Figure 1 we show the residuals after PSF subtraction in high contrast for all epochs and filters. The best-fit position of the centroid of the PSF model (as determined on the F225W T=1474 days image) is marked by red pointers in each image. The _left panels_ show the same images, before subtracting the best-fit PSF model. In general, the emission in the UV filters subtracts out well while the point source subtraction in the optical filters reveals/highlights the presence of residual emission close to and in some cases under the source position. The magnitudes of the subtracted point sources are listed in Table 1 under PSF photometry. We find reduced \(\chi^{2}\) values between 0.5 and 1.1 for the best fits of the PSF normalisation and background value, showing our model describes the data well. Generally, the PSF magnitudes of the subtracted point source are consistent within \(3\sigma\) with those derived through our aperture photometry for all filters, although the PSF magnitudes in the F814W filter are systematically fainter (but still consistent within \(3\sigma\)).
Any small residuals present in the PSF-subtracted images obtained through the UV filters can be explained by the fact that the PSF in the UV varies as a function of source location on the image. Due to various factors (such as the coatings of the optical elements) the UV PSF contains broader wings than the optical PSF and these broad wings have complex features7. Since we try to fit the central part of the PSF to the data, the features in the wings can leave residuals when a template PSF determined at one place of the image is subtracted from a source at another location in the image.
Footnote 7: [https://hst-docs.stsci.edu/wfc3ihb/chapter-6-wis-imaging-with-wfc3/6-6-wvis-optical-performance](https://hst-docs.stsci.edu/wfc3ihb/chapter-6-wis-imaging-with-wfc3/6-6-wvis-optical-performance)
#### 3.2.3 Photometry using dolphot
The results of our PSF photometry using dolphot can be found in Table 1. However, dolphot yields no detection at the position of
AT 2018cow in F814W for any of the observation epochs and in F555W at the epoch at T=1135 days, unless the input source position is fixed, as described in Section 2.3, which is effectively equivalent to forced photometry at the position of AT 2018cow.
#### 3.2.4 Aperture photometry on the difference images
Figure 2 shows the difference images created by subtracting the epoch 3 images from the epoch 1 images for the two optical filters. Here, the position of AT 2018cow is indicated by red markers. In the
Figure 1: _Left panel:_ Three columns of four rows of cutout images close to the location of AT 2018cow for all filters (rows) and epochs (columns). Intensity is given in e\({}^{-}\)/s in a colour scale, with blue being the least intense and yellow most intense. The best fit centroid position of the PSF to the emission at the location of AT 2018cow lies where the two red lines touch. The cross hairs have a length of 0.1 arcsec. _Right panel:_ Three columns of four rows of cutout images showing the residuals of PSF subtraction at the location of AT 2018cow for all filters (rows) and epochs (columns). The exposure times for the last epoch are longer than for the first two epochs, the second epoch having the shortest exposure time of all, which explains the difference in noise properties in the residual images.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline Filter & Epoch & \# of & Exp. time & Aperture phot. & Aperture phot. & Diff. image \({}^{\dagger}\) & Diff. image \({}^{\dagger}\) & PSF phot. & PSF phot. & dolphot & dolphot \\ & (day) & exp. & (sec) & F\({}_{\nu}\) (\(\mu\)Jy) & (mag) & F\({}_{\nu}\) (\(\mu\)Jy) & (mag) & F\({}_{\nu}\) (\(\mu\)Jy) & (mag) & F\({}_{\nu}\) (\(\mu\)Jy) & (mag) \\ \hline F225W & 713 & 3 & 1116 & \(1.17\pm 0.06\) & 23.73\(\pm\)0.05 & 0.45 \(\pm\) 0.06 & 24.77 \(\pm\) 0.14 & 1.41 \(\pm\) 0.11 & 23.53 \(\pm\) 0.09 & 1.08 \(\pm\) 0.06 & 23.82 \(\pm\) 0.06 \\ F336W & 713 & 3 & 1116 & \(0.87\pm 0.04\) & 24.05\(\pm\)0.04 & 0.28 \(\pm\) 0.04 & 25.28 \(\pm\) 0.15 & 0.82 \(\pm\) 0.07 & 24.11 \(\pm\) 0.09 & 0.75 \(\pm\) 0.03 & 24.21 \(\pm\) 0.05 \\ F555W & 713 & 3 & 1044 & \(0.39\pm 0.02\) & 24.92\(\pm\)0.04 & 0.09 \(\pm\) 0.02 & 26.54 \(\pm\) 0.25 & 0.48 \(\pm\) 0.06 & 24.69 \(\pm\) 0.13 & 0.27 \(\pm\) 0.06 & 25.32 \(\pm\) 0.06 \\ F814W & 713 & 3 & 1044 & \(0.37\pm 0.03\) & 24.97\(\pm\)0.09 & 0.11 \(\pm\) 0.04 & 26.3\({}^{+0.4}_{-0.3}\) & 0.14 \(\pm\) 0.06 & 26.0\({}^{+0.6}_{-0.4}\) & 0.13 \(\pm\) 0.2 & 26.06 \(\pm\) 0.17 \\ \hline F555W & 1135 & 2 & 710 & \(0.33\pm 0.02\) & 25.10\(\pm\)0.06 & \(<0.09\) & \(>26.5\) & 0.35 \(\pm\) 0.07 & 25.07 \(\pm\) 0.22 & 0.18 \(\pm\) 0.02 & 24.79 \(\pm\) 0.10 \\ F814W & 1135 & 2 & 780 & \(0.23\pm 0.03\) & 25.48\(\pm\)0.15 & \(<0.11\) & \(>26.3\) & 0.10 \(\pm\) 0.06 & 26.4\({}^{+1.1}_{-0.5}\) & 0.05 \(\pm\) 0.02 & 27.2\({}^{+0.7}_{-0.4}\) \\ \hline F225W & 1474 & 3 & 1845 & \(0.71\pm 0.04\) & 24.28\(\pm\)0.06 & – & – & 0.76 \(\pm\) 0.08 & 24.20 \(\pm\) 0.11 & 0.54 \(\pm\) 0.04 & 24.57 \(\pm\) 0.07 \\ F336W & 1474 & 3 & 1953 & \(0.61\pm 0.02\) & 24.44\(\pm\)0.04 & – & – & 0.54 \(\pm\) 0.04 & 24.56 \(\pm\) 0.08 & 0.51 \(\pm\) 0.02 & 24.63 \(\pm\) 0.04 \\ F555W & 1474 & 3 & 1149 & \(0.32\pm 0.01\) & 25.15\(\pm\)0.05 & – & – & 0.37 \(\pm\) 0.05 & 24.98 \(\pm\) 0.15 & 0.19 \(\pm\) 0.01 & 25.68 \(\pm\) 0.07 \\ F814W & 1474 & 3 & 2271 & \(0.29\pm 0.02\) & 25.24\(\pm\)0.07 & – & – & 0.08 \(\pm\) 0.04 & 26.6\({}^{+0.6}_{-0.4}\) & 0.08 \(\pm\) 0.01 & 26.53 \(\pm\) 0.17 \\ \hline \end{tabular} \({}^{\dagger}\)This implicitly assumes that any light at the position of the transient at epoch 3 is not due to AT 2018cow.
\end{table}
Table 1: The result of our aperture and difference image photometry for AT2018cow, using a circular aperture of r=0.08 arcsec radius as well as PSF photometry following either our manual procedure (see Section 2.3 for details) or using dolphot. “Diff. image” refers to the image obtained after subtracting the image obtained during the third epoch from the epoch 1 or 2 image under consideration (see Section 2.4 for details). Aperture photometry is performed on the diff. images. Values include aperture correction and a Galactic reddening correction as mentioned in the text. To correct for Galactic extinction we have used \(\rm A_{F225W}=0.524\), \(\rm A_{F336W}=0.334\), \(\rm A_{F555W}=0.214\) and \(\rm A_{F814W}=0.115\). Values without error bars are 3\(\sigma\) upper limits.
F555W difference image (_left panel_) there is residual emission near the position of AT 2018cow. This residual emission is not an artefact due to uncertainties in alignment as there are no such residuals at the positions of other point sources in the difference image. This residual is detected at a signal-to-noise ratio of 4.5 with a magnitude of \(26.54\pm 0.25\), consistent with the difference between the F555W magnitude in epoch 1 and epoch 3 as measured through aperture photometry.
For the observations obtained in the F814W filter, no distinguishable residual emission is present (when looking by eye) in the difference image, as can be seen in the _right panel_ of Figure 2. Following the same procedure as for F555W above we find a signal-to-noise ratio of 3.4 with a magnitude of \(26.3^{+0.4}_{-0.3}\). Subtracting the epoch 3 image and then measuring the flux/magnitude of the residual measures the decaying component in the AT 2018cow light. An alternative way of looking at the difference image is that it assumes all emission at epoch 3 (T=1474 days) is due to an underlying source at the position of AT 2018cow. Under "Diff. image" in Table 1, we list our results for aperture photometry on all difference images created by subtracting the epoch 3 from the epoch 1 and epoch 2 images. For the F555W and F814W epoch 2 minus epoch 3 difference images the measured flux density in the aperture is consistent with that expected due to variations in the background, hence we report 3\(\sigma\) upper limits of \(>26.5\) in F555W and \(>26.3\) in F814W.
### Lightcurve
Out of the three different ways we used to measure the photometry of AT 2018cow, the aperture and PSF photometry agree within 3 \(\sigma\). The aperture photometry on the difference images (epoch 1 or epoch 2 minus epoch 3) yields fainter results for the source brightness. This can be explained as follows: through photometry on a difference image we are sensitive only to the component of the light that varied between the epochs under consideration. In the extreme scenario that the third epoch contains _no light_ from AT 2018cow, the magnitudes determined through analysis of the difference images are the relevant ones. In the opposite extreme scenario, we assume that _all the light_ detected at the third epoch comes from AT 2018cow. Clearly, whether either or neither of these two is a viable assumption may well depend on the filter of the observations under consideration.
We show the brightness evolution of AT 2018cow as a function of time in Figure 3, using the results of our aperture photometry on the images and the difference images, together with early time data from Perley et al. (2019) and Sun et al. (2022) (circles). Even though the effective wavelengths of the filters used in the early UVOT and later _HST_ observations are slightly different, we compare UVOT/UVW1 to _HST_/F225W, UVOT/U to _HST_/F336W, UVOT/V to _HST_/F555W and UVOT/I to _HST_/F814W. Different filters are indicated using different colours and we offset the light-curve in each filter by a constant shown in the figure legend for display purposes. Our aperture photometry measurements are shown with squares and our measurements for AT 2018cow obtained assuming the third epoch contains no transient emission (aperture photometry on the difference images) are indicated with triangles when a residual was detected or downwards pointing arrows when an upper limit to the source magnitude was determined. Comparing the early-time (\(<100\) days after discovery) measured decay in absolute magnitude with absolute magnitude (limits) obtained for the last three _HST_ epochs, it is clear that the detected emission is brighter than expected had this trend continued.
### Comparison of AT 2018cow and compact UV selected sources
Next, we explore whether AT 2018cow is localised in an unusual region of its host galaxy by fitting synthetic spectra of simple stellar populations to 23 compact UV-selected star-forming (SF) regions within the host (plus the location of AT 2018cow). These SF regions were selected by running source extractor in dual image mode in the same way as for AT 2018cow (see Section 2.2) removing sources that are not detected in all four filters at T=713 days. We also removed sources that are detected with a signal-to-noise ratio \(<3\). From these sources we select those that have a constant magnitude (within
Figure 2: The residual flux after subtracting the image obtained at T=1474 d from the T=713 d image for the two optical filters (F555W _left panel_; F814W _right panel_) using hotpants. The location of AT 2018cow is indicated with red tick marks. Residual emission is present at the position of AT 2018cow with a signal-to-noise of 4.5 in the F555W difference image and a signal-to-noise of 3.4 in the F814W difference image (see Section 3.2.4 of the main text for details).
Figure 4: Greyscale image of the host galaxy of AT 2018cow in the F336W filter at T=713 days, with the ages of compact UV-selected sources that were detected in all four filters indicated by coloured circles. The colours correspond to population ages, indicated by the colour bar and derived from BPASS SED fitting as described in Section 3.4. The location of AT 2018cow is marked by green cross hairs. Number labels for the regions are as in Table 1.
Figure 3: AT 2018cow light-curves in different filters, F225W in blue, F336W in red, F555W in yellow and F814W in green (with offsets as indicated in the legend). The early time data are from Perley et al. (2019) in transparent circles and Sun et al. (2022) in opaque circles. Our aperture photometry results marked with squares assume all flux measured in the last (third) epoch is due to the transient, whereas for the measurements indicated with triangles and downwards pointing arrows (for upper limits) we assumed that all detected flux in epoch three is unrelated to AT 2018cow. The error bars are at a 1\(\sigma\) confidence level. The horizontal bars through the markers do not indicate uncertainties in the observation date but instead they are the end caps of the error bars on the magnitudes.
\(3\sigma\)) as measured on T=713 days and T=1474 days. Differences in magnitudes between these epochs might be caused by e.g., different orientations of _HST_ during the observations. We ignore epoch 2 in this comparison because the exposure time is shorter and there are only two exposures, resulting in a bad removal of cosmic rays.
Next, we select the sources that behave PSF-like in F336W. We test this by performing aperture photometry using two different values for the radius of the circular aperture, and we retain sources only if the difference in their photometry is consistent with the difference in the aperture corrections for a point source given the two aperture radii. A full list of the positions and magnitudes of the sample can be found in Table 1 in Appendix C.
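The point-source test reduces to a simple comparison; the following sketch (placeholder inputs and aperture corrections, not the authors' code) makes the criterion explicit.

```python
# A source is "PSF-like" if the magnitude difference between two aperture
# radii matches the difference expected for a point source, within the
# photometric uncertainty.
def is_psf_like(mag_small, mag_large, apcorr_small, apcorr_large, sigma, n_sigma=3.0):
    """mag_small/mag_large: magnitudes in the small/large aperture;
    apcorr_small/apcorr_large: point-source aperture corrections (mag) for the
    same radii; sigma: combined 1-sigma uncertainty on the difference."""
    observed = mag_small - mag_large          # flux gained in the larger aperture
    expected = apcorr_small - apcorr_large    # value expected for a true point source
    return abs(observed - expected) < n_sigma * sigma
```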
To determine the ages of these regions, we make use of BPASS v2.2.1 (Binary Population and Spectral Synthesis, Eldridge et al., 2017; Stanway & Eldridge, 2018) synthetic spectra, assuming a single burst of star formation and a metallicity (by mass fraction) of \(Z=0.01\) (based on the host metallicity found by Lyman et al., 2020). For each region, SED fitting is performed by convolving the BPASS spectra at each age (52 ages spaced logarithmically from 1 Myr to 100 Gyr are used) with the filter response curves for F225W, F336W, F555W, and F814W (Rodrigo et al., 2012; Rodrigo & Solano, 2020), converting magnitudes to fluxes, and vertically scaling8 the synthetic spectra to minimise \(\chi^{2}\) across both age and different values for the host-intrinsic extinction. The extinction in each filter is calculated by adopting their effective wavelengths and using the python extinction module (Barbary, 2016), with a Fitzpatrick (1999) extinction curve and R\({}_{\rm V}=3.1\). Galactic extinction is already accounted for as described in Section 2.2.
Footnote 8: The scaling is needed as the synthetic spectra are for a \(10^{6}\) M\({}_{\odot}\) population, in Solar luminosity per Angstrom
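A minimal sketch of this fit follows. It assumes the model broad-band fluxes per BPASS age have already been computed by convolving the spectra with the filter curves; the effective wavelengths listed are rough placeholders, the extinction call uses the `extinction` module mentioned above, and the vertical scaling is found analytically. None of the variable names come from the authors' pipeline.

```python
# Chi^2 fit of a single-burst BPASS SED to four-band photometry over a grid
# of ages and host extinctions A_V (Fitzpatrick 1999, R_V = 3.1).
import numpy as np
import extinction

wave_eff = np.array([2372.0, 3355.0, 5308.0, 8024.0])  # placeholder effective wavelengths (AA)

def fit_region(f_obs, f_err, model_fluxes, ages, av_grid):
    """f_obs, f_err: observed fluxes; model_fluxes: (n_age, 4) unreddened model
    fluxes for a 1e6 Msun population. Returns (best age, best A_V, chi^2)."""
    best = (None, None, np.inf)
    for i, age in enumerate(ages):
        for av in av_grid:
            a_lambda = extinction.fitzpatrick99(wave_eff, av, 3.1)   # mag of extinction
            f_mod = model_fluxes[i] * 10 ** (-0.4 * a_lambda)
            # analytic least-squares scale factor (the "vertical scaling")
            s = np.sum(f_obs * f_mod / f_err**2) / np.sum(f_mod**2 / f_err**2)
            chi2 = np.sum(((f_obs - s * f_mod) / f_err) ** 2)
            if chi2 < best[2]:
                best = (age, av, chi2)
    return best
```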
For each region we determine a best-fit age and extinction A\({}_{\rm V}\). Full results can be found in Appendix C. The extinction values are in the range 0.0-0.6 (in broad agreement with A\({}_{\rm V}=0.2\) as found by Sun et al., 2023, for nearby star forming complexes), and the ages range from 6-25 Myr. These ages are younger than the tens of Myr found by Lyman et al. (2020) for example, but this can be explained by the spaxel size of their MUSE integral field unit data, which averages over larger physical areas than the compact star-forming regions we are probing here.
The reduced \(\chi^{2}\) values (which are the same as the \(\chi^{2}\) values because our fit has one degree of freedom) for the 23 compact star forming regions are typically around \(\sim\)1-10; whereas the fits at the location of AT 2018cow (at both 713 and 1474 days) are notably poorer, with \(\chi^{2}=47\) and 37, respectively. The fits at the location of AT 2018cow favour very little to no extinction, and tellingly, favour the youngest population age available in the BPASS outputs (1 Myr), whilst still failing to reproduce the extremely blue observed SED.
In Figure 4, we show the 23 UV-selected star-forming regions over an F555W image of the host galaxy. Each of the 23 regions is encircled, where the colour of the circle corresponds to the best-fit population age. Young stellar populations are present across the galaxy, with no preference for particularly young star forming regions in the vicinity of AT 2018cow, although there are 2 star forming regions within \(\sim 400\) parsec of the transient (regions 1 and 3, these were unresolved in previous non-_HST_ data sets).
### Spectral Energy Distribution of AT 2018cow
Figure 5 (_left panel_) shows the spectral energy distributions (SEDs) for AT 2018cow as measured at epoch 1 (T=713 d) and at epoch 3 (T=1474 d). The black markers represent measurements from the third epoch, while the grey markers those of the first epoch. The marker symbols are the same as in Figure 3. The coloured bands represent the FWHM of the filter with the corresponding colour in Figure 3. The _right panel_ of Figure 5 shows both possible extremes of the SED of AT 2018cow in red compared to the SEDs of compact UV-selected sources detected in a box of 180x180 pixels centred on the position of AT 2018cow ("neighbours") in green, and "other sources" in the rest of the host galaxy in grey for T=713 d. From this red shaded region it is clear that for either of the two extreme scenarios for the aperture photometry at epoch T=713 d, the F555W\(-\)F225W colour of AT 2018cow is bluer than that of the neighbours. The _left panel_ of Figure 5 shows that the SED for the third epoch lies in between the aperture photometry SED and the difference image SED. Therefore, the T=1474 d SED is also bluer than that of the neighbours.
Footnote 9: [https://hst-docs.stsci.edu/wfc3ihb/chapter-6-uvis-imaging-with-wfc3/6-5-uvis-spectral-elements](https://hst-docs.stsci.edu/wfc3ihb/chapter-6-uvis-imaging-with-wfc3/6-5-uvis-spectral-elements)
We fit a Planck function to the four-filter SEDs at T=713 d, T=1474 d, and to the four-filter SED when assuming none of the third epoch emission is due to AT 2018cow, with the best-fit black body over-plotted in the _left panel_ of Figure 5 in orange, green, and blue, respectively. The best-fit values for the temperature and radius, the calculated luminosity, the number of degrees of freedom, and the reduced \(\chi^{2}\) values are presented in Table 2. The error on the temperature for the fit to the epoch 1 - epoch 3 difference image was calculated by fixing the radius to the best-fit value and finding the value for which \(\Delta\chi^{2}=1\). This was done because the error calculated by the fitting algorithm was larger than the best fitting value for the temperature. Only the reduced \(\chi^{2}\) value for the fit to the epoch 1 SED derived assuming epoch 3 contains no light from AT 2018cow is close to 1 (at a value of 2.2). However, the error on the luminosity is very large due to the large errors on the radius. Due to the sizes of the error bars on the magnitudes obtained with aperture photometry on the difference image, the fit of the Planck function is dominated by the two data points in the UV bands, meaning the fit is almost degenerate for a two-parameter Planck function. This results in a large error on the fit and therefore on the calculated luminosity.
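The black-body fit can be sketched as follows; the flux-density unit (Jy), the assumed luminosity distance, and the initial guesses are placeholders of this sketch, while the use of scipy's curve_fit follows the caption of Figure 5.

```python
# Single black body fitted to a four-filter SED (flux density vs frequency).
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import h, k, c, sigma

D = 60.0e6 * 3.086e16        # assumed luminosity distance in metres (~60 Mpc, placeholder)
R_SUN = 6.957e8              # solar radius in metres

def planck_fnu(nu, T, R):
    """Flux density (Jy) of a black body with temperature T (K) and radius R
    (solar radii) seen at distance D."""
    B_nu = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))   # W m^-2 Hz^-1 sr^-1
    return np.pi * B_nu * (R * R_SUN / D) ** 2 / 1.0e-26         # convert to Jy

# Example usage with the Table 1 measurements (nu_eff, f_obs, f_err in Jy):
# popt, pcov = curve_fit(planck_fnu, nu_eff, f_obs, sigma=f_err, p0=(3.0e4, 30.0))
# T_fit, R_fit = popt
# L_bol = 4.0 * np.pi * (R_fit * R_SUN) ** 2 * sigma * T_fit**4 * 1e7   # erg/s
```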
## 4 Discussion
In this paper, we present aperture and PSF photometry of _HST_ data of the FBOT AT 2018cow. We first compare our results in Table 1 with the results from the epoch 1-3 PSF photometry by Sun et al. (2022) and Sun et al. (2023). We find that our measurements in the UV filters yield a source that is consistent within \(3\sigma\) in the first epoch, while in the last epoch our source is brighter than they report (there are no UV images for the second epoch). In the optical filters our measurements indicate a brighter source in all epochs than found by Sun et al. (2022, 2023). They assumed all the light is emitted by AT 2018cow. Additionally, Sun et al. (2022, 2023) find a steeper decay between epoch 1 and 3 in the UV filters (\(1.02\pm 0.11\) mag and \(0.57\pm 0.07\) mag for F225W and F336W, respectively) than we do (\(0.55\pm 0.08\) mag and \(0.39\pm 0.06\) for F225W and F336W, respectively). Furthermore, they find no evidence for a decay in the source brightness in the optical filters, whereas we do (\(0.23\pm 0.06\) mag in F555W, and a detection with a signal to noise of 3.4 in the F814W epoch 1 and epoch 3 difference image with a magnitude of \(26.3^{+0.4}_{-0.3}\)). We will investigate possible reasons for these differences below.
Next, we compare with the epoch 1-3 PSF photometry reported in Chen et al. (2023). Our aperture as well as our manual PSF photometry give brighter magnitudes for AT 2018cow than Chen et al. (2023); although the difference is small for the two UV filters, it increases for the optical filters. Comparing the magnitudes in the Chen et al. (2023) table 1 with their figure 6 we deduce that their table 1 magnitudes are corrected for extinction. However, if they are not, the differences with our extinction-corrected magnitudes are reduced, especially for the UV filters. Even so, only the measurements in F225W (both epochs) and F555W at T=1135 days would be consistent within the 3\(\sigma\) error. Our dolphot PSF photometry results are consistent within 3\(\sigma\) with the values presented by Chen et al. (2023) in their table 1 if those values are not corrected for Galactic extinction. When leaving the position as a free parameter, dolphot does not find a source in F814W at any epoch, nor in F555W at the epoch T=1135 days. Only forced photometry (i.e. keeping the source position fixed) yields a sometimes marginal detection of the source at those epochs and filters.
However, this does not necessarily mean that the photometry presented by Sun et al. (2022, 2023), by Chen et al. (2023), or our photometry results are wrong. In practice, contributions from sources other than a point source may influence the measurements; if no point source is present and the observed light is instead dominated by diffuse emission (on the scale of \(\sim\)a few times the PSF size) in the disc of AT 2018cow's host galaxy, PSF photometry provides an upper limit on the magnitude of a point source at the location of AT 2018cow. Conversely, aperture photometry may over-estimate the true flux density of the transient if the light from the point source and the diffuse emission in the galactic disc are of similar magnitude. In practice, the estimated value of the background flux density under AT 2018cow may influence the determined magnitudes, especially in crowded regions like that of AT 2018cow. Next, we investigate the potential influence of the choice of the background region used to estimate the flux density at the position of AT 2018cow.
Using the same 20 background regions we used for the aperture photometry on the difference images (see Figure 14), we measure
Figure 5: _Left panel:_ The spectral energy distribution (SED) of the emission detected at the position of AT 2018cow at T=713 d and T=1474 d. The four vertical coloured bands are centered on the effective wavelength of the filters used for the observations while the width of the vertical bands indicates the passband rectangular width of the filters. Light grey markers are used for the data obtained at T=713 d. Here, the light grey circles indicate the measured flux density assuming all light in the third epoch (T=1474 d) originates from AT 2018cow, whereas light grey triangles are used for measurements obtained assuming none of the third epoch light is due to AT 2018cow. The circles are always at a higher flux density than the triangles. The black symbols represent our measurements of the source flux density obtained at T=1474 d. The lines are Planck functions fitted to the four-point SEDs at T=713 d (orange), T=1474 d (green), and to the grey triangles (blue). The best fitting values for the temperature and the radius, and reduced \(\chi^{2}\) values can be found in Table 2. The fit to the difference image gave an unphysical (negative) value for the temperature when considering the uncertainty on the temperature using both the python routines curve_fit and lmfit. To obtain an estimate of the uncertainty on the black body temperature we fixed the radius to the best fitting value and determined for which value of the temperature around the best fitting temperature \(\Delta\chi^{2}=1\). From the reduced \(\chi^{2}\) values and the Figure we conclude that a single black body function is only a reasonably good description of the SED for the light grey triangles. _Right panel:_ The SEDs of our list of compact UV-detected sources at T=713 d (Table 1 contains selected properties of these sources). The data for AT 2018cow is in red with the marker shapes as mentioned above. We make a distinction between ”neighbours” shown in green and ”other sources” in light grey. See the main text for the definition of ”neighbour” and ”other sources”. Irrespective of the interpretation of the AT 2018cow data at T=1474 d, the F555W–F225W colour of the source at the position of AT 2018cow is bluer than any of our compact UV-detected sources.
\begin{table}
\begin{tabular}{l l l l l l} \hline Epoch & log(T (K)) & radius (R\({}_{\odot}\)) & luminosity (erg s\({}^{-1}\)) & reduced \(\chi^{2}\) & degrees of freedom \\ \hline
1: T=713 d & \(4.54\pm 0.04\) & \(34\pm 3\) & \((6\pm 2)\times 10^{39}\) & 17.2 & 2 \\
3: T=1474 d & \(4.37\pm 0.02\) & \(43\pm 2\) & \((1.9\pm 0.4)\times 10^{39}\) & 17.9 & 2 \\ \hline Epoch 1 - Epoch 3 & \(5.03\pm 0.04\) & \(9\pm 6\) & \((4_{-5}^{+5})\times 10^{40}\)\({}^{\dagger}\) & 2.2 & 2 \\ \multicolumn{5}{l}{\({}^{\dagger}\) See Section 3.5 for the explanation on how the error bars on the luminosity were determined.} \\ \end{tabular}
\end{table}
Table 2: Results from fitting a Planck black body function to the _HST_ spectral energy distribution for AT 2018cow. These fits are shown in Figure 5.
the median, minimum, and maximum value for the flux density in the background aperture. There is a large spread between these three values. To investigate how the choice of background region influences the flux density measured for AT 2018cow we compare the results based on which of these three values is subtracted from the flux density measured in the aperture centered on the position of AT 2018cow. In Table 3 we show the resulting magnitude measurements for the different background regions. As expected, we find that using a higher background flux density yields a lower flux density for AT 2018cow. Depending on the choice of background in our work and in the work of Chen et al. (2023) both results could be consistent in all filters. We do note that in the F814W filter when using the maximum background flux density, our results are either upper limits when the maximum background flux density was higher than the flux density in the aperture of AT 2018cow, or there are large error bars on our photometry. Clearly, the region used to determine the background flux density greatly influences the value of the magnitude of AT 2018cow.
Next, we investigate whether there are filters and epochs where the detected light originates solely from AT 2018cow, whether it is possible to determine if the emission is dominated by underlying sources (for instance diffuse emission in the galactic disc or, e.g., a star forming region or cluster), or whether it is a combination of both. Understanding the origin of the light is important because it will influence the interpretation of the power source of AT 2018cow.
In the observations obtained through the UV filters the magnitude has decreased between epochs, suggesting that a significant fraction of the detected light is emitted by the fading transient. The SED of the light extracted at the position of AT 2018cow is substantially bluer than that of our compact, UV-selected, star forming regions detected throughout the host of AT2018cow. This is also in line with the notion that the majority, but not necessarily all, of the light detected in the UV arises from the transient. Subtracting a point source from the UV images at the location of AT 2018cow leaves residuals consistent with noise (see Figure 1). Therefore, we conclude that the emission in the UV filters is dominated by a point source, likely the transient event AT 2018cow. In the optical filter images, comparing the observations at epoch 1 with those at epoch 3, there is evidence that part of the light faded, in addition to a constant component that could be due either to AT 2018cow and/or to underlying emission from part of the host galaxy itself.
Overall, a picture emerges where light from the transient is still detected in the UV images, while in optical images we cannot determine if the detected light at epoch 3 is due to AT 2018cow or due to diffuse emission in the galactic disc or, more speculatively, due to a compact source at the same position of AT 2018cow. Note that in the optical images crowding is more important than in the UV images.
The SED of the emission detected at the location of AT 2018cow is consistent with this picture (Figure 5). While the F814W-F555W colour of AT 2018cow is consistent with that of the neighbouring sources, the F336W-F225W colour at the location of AT 2018cow is bluer than that of the sources in the neighbourhood. This and the fact that a single black body does not fit the SED, together with the different variability properties of the UV and optical filters, suggests that the UV and optical parts of the SED are caused by more than one emission type and/or by more than one source. This conclusion does not depend on the assumption for the nature of the light detected at 1474 days (either transient or environment light or a combination thereof). Furthermore, the emission cannot be solely from a stellar population - it is too blue - strongly implying the presence of late-time UV emission from AT 2018cow.
We also searched for BPASS single and binary stellar models, across all possible stellar ages (at Z=0.010), for models satisfying log\({}_{10}\)(T/K)\(>4.7\) and log\({}_{10}\)(L/L\({}_{\odot}\))\(>7.0\). These constraints are derived from fitting a black body to the late-time emission at the location of AT 2018cow (see also Sun et al., 2023). We find no stellar models which are this blue and luminous, and therefore, a dominant contribution from an underlying massive star or binary seems ruled out by the data.
The F555W-F814W colour at the location of AT 2018cow at 1474 days is \(-0.09\pm 0.08\) and the absolute magnitude is \(\sim-9\). Assuming that the optical bands at epoch 3 are free from light originating from the transient (as we do when taking the magnitudes measured on the difference images), we check what kind of source(s) can explain these values. They are consistent with those expected for an OB association or star-forming region (e.g., Drazinos et al., 2013), and they are broadly consistent with the F555W-F814W colours of the UV-selected, compact star-forming regions shown in Figure 4. The mean F555W-F814W colour (corrected for Galactic but not intrinsic extinction [at the specific location in the host galaxy]) of these regions is 0.02\(\pm\)0.05. Excluding the UV filters, fixing A\({}_{\rm V}=0\) and performing SED fitting as described in Section 3.4, we infer a best-fit population
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Filter & epoch & min background & min background & median background & median background & max background & max background \\ & epoch & F\({}_{\nu}\) (\(\mu\)Jy) & (mag) & F\({}_{\nu}\) (\(\mu\)Jy) & (mag) & F\({}_{\nu}\) (\(\mu\)Jy) & (mag) \\ \hline F225W & 713 & 1.28\(\pm\)0.06 & 23.63 \(\pm\) 0.05 & 1.18\(\pm\)0.06 & 23.71 \(\pm\) 0.05 & 1.07\(\pm\)0.06 & 23.82 \(\pm\) 0.06 \\ F336W & 713 & 0.95\(\pm\)0.04 & 23.95 \(\pm\) 0.05 & 0.88\(\pm\)0.04 & 24.02 \(\pm\) 0.05 & 0.78\(\pm\)0.04 & 24.16 \(\pm\) 0.06 \\ F555W & 713 & 0.49\(\pm\)0.05 & 24.68 \(\pm\) 0.11 & 0.40\(\pm\)0.05 & 24.92 \(\pm\) 0.14 & 0.30\(\pm\)0.05 & 25.22 \(\pm\) 0.19 \\ F814W & 713 & 0.57\(\pm\)0.12 & 24.50 \(\pm\) 0.22 & 0.41\(\pm\)0.12 & 24.9 \(\pm\) 0.3 & 0.18\(\pm\)0.12 & 25.8\({}^{\dagger}_{-0.6}\) \\ \hline F555W & 1135 & 0.42\(\pm\)0.05 & 24.85 \(\pm\) 0.13 & 0.33\(\pm\)0.05 & 25.10 \(\pm\) 0.17 & 0.25\(\pm\)0.05 & 25.41 \(\pm\) 0.22 \\ F814W & 1135 & 0.46\(\pm\)0.12 & 24.8 \(\pm\) 0.4 & 0.26\(\pm\)0.12 & 25.4\({}^{+0.7}_{-0.4}\) & \(<\)0.34\({}^{\dagger}\) & \(>\)25.11\({}^{\dagger}\) \\ \hline F225W & 1474 & 0.76\(\pm\)0.03 & 24.19 \(\pm\) 0.05 & 0.70\(\pm\)0.03 & 24.28 \(\pm\) 0.05 & 0.65\(\pm\)0.3 & 24.37 \(\pm\) 0.05 \\ F336W & 1474 & 0.65\(\pm\)0.03 & 24.37 \(\pm\) 0.05 & 0.61\(\pm\)0.03 & 24.43 \(\pm\) 0.05 & 0.51\(\pm\)0.03 & 24.64 \(\pm\) 0.07 \\ F555W & 1474 & 0.40\(\pm\)0.05 & 24.88 \(\pm\) 0.14 & 0.32\(\pm\)0.05 & 25.13 \(\pm\) 0.17 & 0.22\(\pm\)0.05 & 25.53 \(\pm\) 0.25 \\ F814W & 1474 & 0.47\(\pm\)0.13 & 24.7 \(\pm\) 0.3 & 0.30\(\pm\)0.13 & 25.2\({}^{+0.6}_{-0.4}\) & \(<\)0.43\({}^{\dagger}\) & \(>\)24.8\({}^{\dagger}\) \\ \hline \end{tabular} \({}^{\dagger}\)The flux density value of the background was higher than that in the aperture centred on the position of AT 2018cow, so we report the 3\(\sigma\) upper limit for the maximum background flux density.
\end{table}
Table 3: The result of our aperture photometry for AT2018cow, using a circular aperture of r=0.08 arcsec radius for three different values of the background (see main text for details). The reported magnitudes include the effect of the aperture correction and the Galactic reddening correction. To correct for Galactic extinction we used A\({}_{\rm F225W}=0.524\), A\({}_{\rm F336W}=0.334\), A\({}_{\rm F555W}=0.214\) and A\({}_{\rm F814W}=0.115\). The errors reported are at the 1\(\sigma\) confidence level.
age at the location of AT 2018cow of 20 and 79 Myr at 713 and 1474 days, respectively. Although we cannot determine a precise age with just two optical filters, if we assume no extinction and that the optical light is dominated by the underlying stellar population, the optical spectral slope constrains the age to \(\sim\)100 Myr or less.
Taking the 4-band photometry of AT 2018cow (latest epoch with the median background, see Table 3), and converting it to absolute magnitudes and using BPASS simple stellar populations, we calculate the maximum mass of a cluster that can be present at the location of AT 2018cow before the luminosity in one of the filters exceeds the magnitude plus its \(1\,\sigma\) error. We plot the upper limit on the cluster mass in Figure 6. This upper limit is a strong function of age - as the UV flux reduces with increasing age, the upper limit on the cluster mass increases. An old stellar population at the location of AT 2018cow cannot be ruled out - in particular, we note that a globular cluster can easily be hidden underneath the light of AT 2018cow (based on typical globular cluster ages of several Gyr and masses of \(10^{3}\)-\(10^{6}\)M\({}_{\odot}\), Harris 1996).
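The rescaling behind this limit is compact enough to sketch; the variable names and array layout are assumptions, with the BPASS simple-stellar-population magnitudes (tabulated for a \(10^{6}\) M\({}_{\odot}\) population) taken as given.

```python
# Upper limit on the mass of a cluster hidden under AT 2018cow, as a
# function of population age: rescale the 1e6 Msun SSP until it becomes
# brighter than the observed magnitude (minus its 1-sigma error) in any
# filter; the most constraining filter sets the limit at each age.
import numpy as np

def max_cluster_mass(mag_model_1e6, mag_limit):
    """mag_model_1e6: (n_age, n_filter) absolute magnitudes of a 1e6 Msun SSP;
    mag_limit: (n_filter,) observed absolute magnitudes minus 1-sigma errors.
    Returns the maximum allowed cluster mass (Msun) at each age."""
    m_max = 1e6 * 10 ** (0.4 * (mag_model_1e6 - mag_limit[None, :]))
    return m_max.min(axis=1)
```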
### Disc modelling
It has been speculated that AT 2018cow is caused by a tidal disruption event (TDE; e.g., Perley et al. 2019; Kuin et al. 2019). Interestingly, for low mass (\(M_{BH}<10^{6.5}\,M_{\odot}\)) TDEs roughly time-independent UV emission lasting for time scales of years is commonly detected (Van Velzen et al., 2019; Mummery and Balbus, 2020; Wen et al., 2023). Comparing the UV light curve of AT 2018cow (Figure 3) with that of TDEs, for example ASASSN-14li (see e.g., figure 2 in Wen et al. 2023), we note that the UV light curve morphology is similar. In particular, the late-time plateau is a distinguishing feature shared by both sources.
To test the hypothesis that the late-time UV emission observed from AT2018cow could come from an evolving accretion flow produced by the tidal disruption of a star by a massive black hole, we follow the procedure set out in Mummery and Balbus (2020), and generate evolving UV light curves by solving the time-dependent general relativistic thin disc equations. In brief, we assume that the tidal disruption of a star results in the formation of a compact ring of material with total mass roughly half that of the disrupted star. This initial ring is assumed to form at the circularisation radius (typically twice the tidal radius) of the incoming star (see also Hayasaki and Jonker 2021). Once this initial condition is specified, by solving the time-dependent relativistic disc equations, the disc density can be propagated out to large times. Once the dynamical evolution of the disc density is solved, the observed disc spectrum can be computed by ray-tracing the emergent flux of each disc annulus. This then allows us to compute late time UV luminosities for a range of black hole and stellar parameters.
The late-time luminosity observed from the location of AT2018cow is, compared to the typical TDE population, at a relatively low level \(\nu L_{\nu}\simeq 10^{39}\) erg/s, at \(\nu\simeq 10^{15}\) Hz. This is much smaller than, for example, the luminosity of the \(\sim 10^{6}M_{\odot}\) black hole mass TDE ASASSN-14li, which had a late time (\(>1\) year) UV luminosity of \(\nu L_{\nu}\simeq 10^{42}\) erg/s. Mummery (2021) showed empirically from fitting the light curves of 9 TDEs that the late time UV plateau luminosity correlates approximately linearly with the black hole mass responsible for the TDE. This empirical result has strong theoretical and numerical support (Mummery and van Velzen et al. in prep.), and suggests that AT2018cow could well be due to a TDE involving an intermediate-mass central black hole.
To test this hypothesis, we numerically sample \(N=10^{5}\) black hole masses uniformly in the range \(10^{1}<M_{BH}/M_{\odot}<10^{7}\). At each black hole mass we sample stellar masses from the Kroupa IMF (Kroupa, 2001), solve the disc equations and "observe" the system at a random inclination (with \(\cos(i)\) sampled uniformly). We sample uniformly the (dimensionless) black hole spin between \(-1<a<1\). As a very conservative constraint on the central black hole mass in AT2018cow, we record all TDE disc systems which produce a UV luminosity at +713 days (the time of the first _HST_ observation) within a factor of 2 of \(3\times 10^{39}\) erg/s at \(\nu=10^{15}\) Hz. The black hole mass distribution of the TDE systems which satisfy this constraint is shown in Figure 7.
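The structure of this Monte Carlo can be sketched as follows. The disc solver and the IMF sampler are stand-ins (the real analysis uses the relativistic thin-disc equations of Mummery and Balbus 2020 and the Kroupa 2001 IMF); the placeholder bodies below exist only so the loop runs and carry no physical content.

```python
# Structural sketch of the black-hole-mass Monte Carlo (not the authors' code).
import numpy as np

rng = np.random.default_rng(1)
N = 10**5
target, nu_obs, t_obs = 3e39, 1e15, 713.0     # erg/s, Hz, days

def sample_kroupa_mass(rng):
    # placeholder draw; the analysis samples stellar masses from the Kroupa IMF
    return rng.uniform(0.1, 1.0)

def disc_nuLnu_at(M_bh, M_star, spin, cos_i, nu, t):
    # placeholder: the text notes the plateau luminosity scales roughly
    # linearly with M_BH; the normalisation here is purely illustrative
    return 1e36 * M_bh

selected = []
for _ in range(N):
    M_bh = rng.uniform(1e1, 1e7)              # black hole mass range quoted above
    M_star = sample_kroupa_mass(rng)
    spin = rng.uniform(-1.0, 1.0)             # dimensionless spin, -1 < a < 1
    cos_i = rng.uniform(0.0, 1.0)             # random inclination
    nuLnu = disc_nuLnu_at(M_bh, M_star, spin, cos_i, nu_obs, t_obs)
    if 0.5 * target < nuLnu < 2.0 * target:   # within a factor of 2 at +713 d
        selected.append(np.log10(M_bh))

# np.mean(selected) and np.std(selected) summarise the distribution of Figure 7.
```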
A more detailed analysis of the late time AT2018cow light curve and spectrum highlights that an evolving accretion flow provides a plausible explanation of the observed AT2018cow data. It is of course difficult to constrain a best fitting set of parameters from observations in two bands at two epochs, and we do not attempt to measure the precise system parameters of AT2018cow from the late time _HST_ data. Instead, we examine a sub-set (200) of our solutions (Figure 7) which produce particularly "good fits" (as judged by their chi-squared statistic computed from both epochs). For these solutions we generate both optical-UV spectra at \(t=+713\) d and +1474 d, and disc UV light curves from \(t=0\) to \(t=+1500\) d. These disc spectra and light curves are displayed in Figures 8 and 9, respectively. It is clear that an evolving relativistic accretion flow can reproduce the observed late-time properties of AT2018cow.
The central black hole masses inferred from disc modelling (\(M_{BH}\sim 10^{3.2\pm 0.8}M_{\odot}\)) imply that the early-time UV/optical emission observed from AT2018cow is significantly above the Eddington luminosity \(\mathrm{L_{Edd}}\simeq 10^{41}(M_{BH}/10^{3}M_{\odot})\) erg/s. If the early time luminosity is indeed ultimately powered by accretion (which is still uncertain, see e.g., Roth et al. 2020), then it is unlikely that the thin disc models used here would be valid at these very early times (i.e., for the first \(\sim 100\) days). However, by the much later times in which we are interested, the bolometric luminosities of the disc solutions are typically at the level of a few \(10^{40}\) erg/s (e.g., Fig. 8), suggesting Eddington ratios at the \(10\%\) level, where thin disc models are valid.
Chen et al. (2023) use a steady state disc model to fit their T=1474 d SED and obtain an estimate for the mass of the BH. However, steady state disc theory predicts an optical/UV disc lumi
Figure 6: The maximum mass of a stellar cluster that can be underlying AT 2018cow as a function of population age. This is determined by the maximum luminosity of a BPASS simple stellar population that can lie at this location without the luminosity in one of the four _HST_ bands exceeding the observed value.
nosity which scales as \((M_{BH}\dot{M})^{2/3}\). This optical/UV luminosity is thus highly degenerate between the (unknown) mass accretion rate \(\dot{M}\), and the black hole mass \(M_{BH}\) (e.g., Frank et al., 2002). However, the late time disc temperature profile in a TDE disc is constrained, as the total initial mass, radial and temporal scales of the disc are known a priori for a given stellar disruption. This initial mass content must then propagate radially according to the standard laws of mass and angular momentum conservation. The resulting late-time optical/UV luminosity of a _TDE disc_ is strongly constrained. We make use of this in our disc model.
If AT2018cow is indeed a TDE, the short time scale and the disc modelling suggest a relatively low-mass BH was responsible for the disruption. Pasham et al. (2021) find a limit of \(M_{BH}<850M_{\odot}\) based on the frequency of the soft QPO. Zhang et al. (2022) find a low frequency QPO, corresponding to a BH mass of \(\sim 10^{4}M_{\odot}\), and they suggest the maximum mass found by Pasham et al. (2021) can be increased to a higher value by adding a binary component to the compact object.
A problem for the TDE hypothesis is that the BH responsible for the disruption needs to be embedded in a dense stellar environment for dynamical friction to be efficient enough to bring a star on an orbit towards its tidal radius within a Hubble time (e.g., Stone & Metzger, 2016). Such a dense stellar environment, where dynamical interactions occur often enough, may arise in nuclear star clusters, dense young star clusters, or globular clusters. There is evidence of a recent interaction between CGCG 137-068 and a companion galaxy from a ring of high column density gas as well as from a faint tidal tail (Lyman et al., 2020; Roychowdhury et al., 2019). If the host galaxy underwent a recent (minor) merger it is conceivable that an IMBH or SMBH, with its nuclear star cluster, is in the process of
Figure 8: Late time (blue = +713 d, red = +1474 d) spectral snapshots of a sample of relativistic accretion disc models for AT 2018cow. These curves show a sub-set of the disc models (Fig. 7) which produced particularly good fits to the data.
Figure 7: The black hole masses consistent with the assumption that AT 2018cow was caused by a tidal disruption event. The distribution of black hole masses has been derived assuming the late time UV emission is due to the accretion disc formed following the disruption (see the main text for details). The mean of the logarithm of black hole mass (M\({}_{\rm BH}\)) is log M\({}_{\rm BH}\) = 3.2\(\pm\)0.8 (with the mass in M\({}_{\odot}\)).
Figure 9: The light curves of the relativistic disc models which produce the spectra displayed in Figure 8. The late time _HST_ data are displayed in blue (F225W) and red (F336W). Early time data in the ultra-violet bands UVW1, UVW2 and UVM2 are displayed in purple. Importantly, a disc model can reproduce the late time AT 2018cow UV emission, without modifying the observed early time AT 2018cow rapid light curve decline. There is no consensus in the TDE community about the origin of the early-time UV (and optical) emission (see, e.g., Roth et al., 2020). The error bars are at a 1\(\sigma\) confidence level, and may be smaller than the marker size.
falling into the center of CGCG 137-068. This means that a TDE origin of AT 2018cow remains a viable possibility.
However, Michalowski et al. (2019) attribute the presence of the ring of high column density gas reported by Roychowdhury et al. (2019) to internal processes instead of a galaxy merger/interaction. Their observations using H I show no evidence for a concentration of gas near the location of AT 2018cow. They conclude that the host of AT 2018cow differs from the hosts of Gamma-ray bursts (GRBs)/SNe in its properties and therefore the environment of AT 2018cow does not provide evidence for a massive star progenitor for the event, leaving the question of the nature of AT 2018cow wide open.
## 5 Summary and conclusions
Using three epochs of _HST_ observations we investigate the late-time UV and optical emission at the location of AT 2018cow. The main results are that AT 2018cow remains UV-bright, even with evidence for fading in the UV filters (F225W and F336W) between the first and third epoch of _HST_ observations. The magnitude of AT 2018cow in the optical filters (F555W and F814W) can differ by up to a magnitude depending on how the (diffuse galaxy) background at the location of AT 2018cow is determined.
From our analysis we conclude the following: i) The observed UV emission is consistent with being dominated by a fading point source which originates most likely from AT 2018cow. ii) While part of the optical emission could be due to slowly decaying emission from the transient, there is evidence for a contribution of underlying emission, that did not fade between epochs. Some fraction of this could originate in diffuse galactic background light or an underlying point(like) source. iii) The late-time UV emission is reminiscent of late-time UV emission seen for TDEs. The late-time UV luminosity of AT 2018cow is consistent with the disruption of a (low-mass) star by an IMBH. For this scenario to be feasible AT 2018cow needs to reside in a dense (young/old) stellar cluster.
Our research shows that the nature of AT 2018cow is still uncertain. Both model scenarios involving either specific massive star evolution or a tidal disruption of a (white dwarf) star by an intermediate mass black hole have their advantages and disadvantages.
## Acknowledgements
AI thanks Luc Ilysepert for helpful discussions. This work is part of the research programme Athena with project number 184.034.002, which is financed by the Dutch Research Council (NWO). The scientific results reported on in this article are based on data obtained under _HST_ Proposals 15974, 16179 and 16925 with PI A.J. Levan, A. Filippenko and Y. Chen, respectively. This work was supported by a Leverhulme Trust International Professorship grant [number LIP-202-014]. This work makes use of Python packages numpy (Harris et al., 2020), scipy (Virtanen et al., 2020), matplotlib (Hunter, 2007), extinction (Barbary, 2016) and drizzlepac (Hoffmann et al., 2021). This work made use of Astropy:10 a community-developed core Python package and an ecosystem of tools and resources for astronomy (Astropy Collaboration et al., 2013, 2018, 2022). This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources (Bradley et al., 2022). This research has made use of the SVO Filter Profile Service ([http://svo2.cab.inta-csic.es/theory/fps/](http://svo2.cab.inta-csic.es/theory/fps/)) supported from the Spanish MINECO through grant AYA2017-84089 (Rodrigo et al., 2012; Rodrigo and Solano, 2020). This work has made use of v2.2.1 of the Binary Population and Spectral Synthesis (BPASS) models as described in Eldridge et al. (2017) and Stanway and Eldridge (2018).
Footnote 10: [http://www.astropy.org](http://www.astropy.org)
## Data availability
All data used in this paper is publicly available from the _HST_ data archive. A reproduction package for this paper is uploaded to Zenodo ([https://doi.org/10.5281/zenodo.8246571](https://doi.org/10.5281/zenodo.8246571)).
|
2310.14809 | Learning spatio-temporal patterns with Neural Cellular Automata | Neural Cellular Automata (NCA) are a powerful combination of machine learning
and mechanistic modelling. We train NCA to learn complex dynamics from time
series of images and PDE trajectories. Our method is designed to identify
underlying local rules that govern large scale dynamic emergent behaviours.
Previous work on NCA focuses on learning rules that give stationary emergent
structures. We extend NCA to capture both transient and stable structures
within the same system, as well as learning rules that capture the dynamics of
Turing pattern formation in nonlinear Partial Differential Equations (PDEs). We
demonstrate that NCA can generalise very well beyond their PDE training data,
we show how to constrain NCA to respect given symmetries, and we explore the
effects of associated hyperparameters on model performance and stability. Being
able to learn arbitrary dynamics gives NCA great potential as a data driven
modelling framework, especially for modelling biological pattern formation. | Alex D. Richardson, Tibor Antal, Richard A. Blythe, Linus J. Schumacher | 2023-10-23T11:16:32Z | http://arxiv.org/abs/2310.14809v2 | # Learning spatio-temporal patterns with Neural Cellular Automata
###### Abstract
Neural Cellular Automata (NCA) are a powerful combination of machine learning and mechanistic modelling. We train NCA to learn complex dynamics from time series of images and PDE trajectories. Our method is designed to identify underlying local rules that govern large scale dynamic emergent behaviours. Previous work on NCA focuses on learning rules that give stationary emergent structures. We extend NCA to capture both transient and stable structures within the same system, as well as learning rules that capture the dynamics of Turing pattern formation in nonlinear Partial Differential Equations (PDEs). We demonstrate that NCA can generalise very well beyond their PDE training data, we show how to constrain NCA to respect given symmetries, and we explore the effects of associated hyperparameters on model performance and stability. Being able to learn arbitrary dynamics gives NCA great potential as a data driven modelling framework, especially for modelling biological pattern formation.
###### Contents
* 1 Introduction
* 2 Model and methods
* 2.1 Model details and parameters
* 2.2 Training techniques
* 2.2.1 Loss functions
* 3 Results
* 3.1 Gray-Scott reaction diffusion equations
* 3.1.1 Effect of training hyperparameters
* 3.2 Image morphing
* 3.2.1 Effect of model hyperparameters
* 3.2.2 Stability analysis
* 4 Discussion
* A Gradient Calculation
* B Videos
## 1 Introduction
Many complex natural phenomena--such as organ growth, the structure of materials or the patterns of neural activity in our brains--are emergent [1]. These are typically characterised by many simple interacting components that collectively exhibit behaviour that is far richer than that of the individual parts, and cannot easily be predicted from them. Emergence is especially prevalent in complex systems of biological nature across a wide range of scales - from gene expression dictating cell fates, interacting cells forming structures during morphogenesis, synaptic connections in the brain, or the interactions of organisms in ecology.
Cellular Automata (CA) provide simple models of spatio-temporal emergent behaviour, where a discrete lattice of 'cells' is equipped with an internal state and a rule that updates each cell state depending on itself and its local neighbours. The classic _Game of Life_[2] is a famous example, where cell states and the update rule utilise simple Boolean logic, but the emergent complexity has fascinated and inspired much research [3, 4]. CA are a natural modelling framework for a wide range of biological processes such as: skin patterning [5, 6], limb polydactyly [7], chimerism [8], cancer [9] and landscape ecology [10]. In these cases the CA rules are constructed with expert knowledge of likely mechanisms; however, in general the space of possible CA rules is vast, and there is a non-uniqueness by which several rules can result in qualitatively similar emergent behaviours. As such the inverse problem of inferring mechanistic interactions (CA rules) that might generate a given observed emergent behaviour is much more challenging than the forward problem. Establishing the emergent
consequences of a known set of mechanistic interactions between components is conceptually straightforward -- one sets them up in a computational model or _in vivo_ and then observes the collective behaviour that emerges. In the case of CA, once a rule is defined, any initial condition can trivially be propagated forward to obtain the emergent behaviour.
In this work, we establish and extend the utility of Neural Cellular Automata (NCA, [11])--a special case of CA where each cell state is a real vector, and the update rule is determined by a neural network. The update rule is parameterised by the neural network by minimising a cost function that measures how similar the trajectory generated by iteratively applying the update rule is to training data. Since the gradient of the loss function can be evaluated straightforwardly, gradient-based optimisation allows for efficiently learning (non-unique) local update rules to match desired global behaviour, in the usual manner of machine learning [12]. This allows us to tackle the inverse problem of inferring local mechanistic rules given observed emergent behaviour.
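In practice this inverse problem amounts to back-propagating through the rolled-out dynamics. A minimal TensorFlow sketch is given below; the `nca` Keras model, the fixed number of update steps per data frame, and the squared-error loss are assumptions of the sketch rather than the exact training code of this work.

```python
# One gradient step of NCA training: unroll the learned local update rule and
# compare the observable channels of the generated state with the data.
import tensorflow as tf

def train_step(nca, optimiser, x0, target, steps, n_obs):
    """x0: initial state, shape (batch, M, M, C); target: training data for
    the observable channels, shape (batch, M, M, n_obs)."""
    with tf.GradientTape() as tape:
        x = x0
        for _ in range(steps):            # iterate the local update rule
            x = nca(x)
        # only the observable channels enter the loss
        loss = tf.reduce_mean(tf.square(x[..., :n_obs] - target))
    grads = tape.gradient(loss, nca.trainable_variables)
    optimiser.apply_gradients(zip(grads, nca.trainable_variables))
    return loss
```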
We investigate the potential for NCA to be applied as a data-driven alternative, where the underlying mechanisms are not assumed to be known. This could include biological systems, which are inherently complex and may feature interactions that are not directly measured by a given experimental procedure. We do this by exploring the behaviour of NCAs on two idealised systems. First, we train the NCA on Turing patterns generated by the solutions of certain partial differential equations (PDEs). We show that the underlying dynamics are well represented, for example, by testing on initial conditions that were not part of the training set. Second, to understand the generality of the method, we build on previously observed behaviour of NCA on the artificial problem of morphing from one image to another [11]. Here any underlying dynamics is _a priori_ unknown and likely to be highly complex, as opposed to the PDE learning case where we know and understand the PDE. Nonetheless we show that such dynamics are learnable and are robust to perturbations. In addition, we achieve this with the minimal neural network complexity of NCA [13], in line with the principles of Explainable Artificial Intelligence [14, 15].
In Section 2 we set out the structure of the NCA, and show that it can be viewed as a machine learning approach to approximating the finite difference discretisation of an unknown PDE. There is already a fairly strong link between cellular automata and reaction diffusion equations [5, 6], in that both are used to model similar systems, and a suitably chosen CA rule will correspond to the finite difference approximation of any PDE. In Section 3 we present our results. We first (Section 3.1) benchmark the NCA by assessing its ability to capture the types of Turing patterns [16] that emerge from the Gray-Scott [17] reaction-diffusion equations. These equations describe the population of two chemical species with a nonlinear interaction between them, and are capable of generating a variety of patterns. In Section 3.2, we show that the same basic model and training techniques can also be applied to an image morphing task. Thus we conclude that NCA are capable of constructing microscopic dynamical rules that lead to a wide range of prescribed emergent behaviour. In Section 3.2.2 we further explore constraining NCA to respect basic symmetry requirements placed upon them, and investigate the robustness of trained NCA to initial condition perturbations. We discuss implications for further development of the method
in Section 4.
## 2 Model and methods
We now define the NCA model, and discuss the motivations behind design choices and hyperparameters. We further discuss the training methods, as it turns out that most training parameters can be kept constant between tasks. The main exception to this is how frequently to sample in time: PDEs have a clear notion of time, whereas image morphing does not. All the models and software developed here have been implemented within the Tensorflow [18] framework [https://github.com/AlexDR1998/NCA](https://github.com/AlexDR1998/NCA) which permits efficient GPU parallelisation.
### 2.1 Model details and parameters
Neural Cellular Automata (NCA) are a class of cellular automata defined on a lattice of real vectors, with the local update rule encoded in a neural network. As with all neural network models, there is freedom to choose the network structure and how the input data are preprocessed for training and testing. We refer to the set of choices that are not learned via the training data as _hyperparameters_. Figure 1 shows a schematic of a single NCA update, which maps the state of the system at time step \(n\) to a corresponding state at time \(n+1\), where \(n=0,1,\ldots,\). This update comprises a sequence of stages--depicted counterclockwise starting top left in the figure--which we now describe in turn.
**System state.** We take the state of the system to be described through a vector of \(C\) real numbers at each point on an \(M\times M\) lattice. For example, each of the \(C\) values could represent the concentration of a chemical or biological species at a given point in space. These observable channels are shown with coloured shading in Figure 1. These can be augmented with _hidden channels_ (transparent in the figure), the state of which can influence the dynamics within the _observable channels_. In a biological context, these hidden channels could represent concentration profiles of proteins or chemicals that are not measured in a particular experimental setting, but can be inferred by the machine learning algorithm. The number of hidden channels is a hyperparameter of the model. The numbers of hidden and observable channels sum to \(C\). Mathematically, we denote the state of the system at timestep \(n\) as the vector \(x^{(n)}\in\mathcal{X}=\mathds{I}^{M\times M\times C}\), where \(\mathds{I}=[a,b]\) is some interval of real numbers. The elements of this vector are \(x_{ijc}^{(n)}\in\mathds{I}\), where \(i\in[1,M]\), \(j\in[1,M]\) and \(c\in[1,C]\) denote the \(x\)-coordinate, \(y\)-coordinate and channel number, respectively. We emphasise that during training (Section 2.2), only the observable channels are compared to data.
**Convolution kernels.** The first stage of the update is to apply _convolution kernels_ to the spatial data in each channel. These kernels are chosen to mimic differential operators, such as gradients (Sobel filters) and Laplacians. We denote the set of convolution kernels \(g^{k}\), labelled with the index \(k=1,\ldots,K\). Each kernel \(g^{k}\) is a square matrix, in our case (\(3\times 3\)). This generates the expanded _perception vector_\(z^{(n)}\) whose elements are given by:
\[z^{(n)}_{ijck}=\sum_{\Delta i,\Delta j\in[-1,0,1]}g^{k}_{\Delta i,\Delta j}x^{( n)}_{i+\Delta i,j+\Delta j,c}\equiv g*x^{(n)} \tag{1}\]
Crucially the kernels are applied in parallel on all channels \(C\) independently: that is, all kernels are applied _depthwise_. The idea of decomposing an arbitrary convolution to separate
Figure 1: Schematic of an update step of the NCA. For each \(C\) channel pixel in the \(M\times M\) lattice \(x^{(n)}\) at step \(n\), a perception vector \(z^{(n)}\) is constructed to encode local information via convolution with hard-coded kernels \(K\). This perception vector is fed through a dense neural network \(F_{\theta}\) with trainable weights \(W_{1}\), \(W_{2}\), and biases \(v\). The nonlinear activation function \(u(\cdot)\) is applied on the single hidden layer of the network. The output of this network yields the incremental update to that pixel, which is applied in parallel to all pixels with the stochastic mask \(\sigma\) to determine the lattice state \(x^{(n+1)}\) at step \(n+1\).
depthwise and channel-mixing convolutions [19] was inspired by the deep link between Convolutional Neural Networks (CNNs) and Cellular Automata [11, 20]. In particular, this facilitates representing the NCA in standard Tensorflow code. In principle, kernels can be learnable rather than hard-coded; however, this makes the trained models less interpretable and so we do not pursue this approach here. As the \(3\times 3\) convolution kernels only encode information about the Moore neighbourhood (i.e. adjacent and diagonal cells), we would never need any more than 9 kernels, as any more would be linearly dependent on those already present.
The purpose of applying the kernels is to make a clearer correspondence between NCAs and numerical methods for solving PDEs. Essentially, they provide the neural network with a basic set of differential operators to work with. The set of kernels to include is an important hyperparameter. For example, if one anticipates that the update rules should be invariant under a global rotation of the system--i.e., that the dynamics are isotropic--one can justify excluding Sobel kernels and just using identity, Laplacian, and local averages. We already have translational invariance as a direct consequence of the NCA construction, but isotropic symmetry can only be realistically achieved by only using isotropic kernels [21] or data augmentation. We explore this further in Section 3.2.2. The explicit forms of the convolution kernels used in this work are
\[\underbrace{\begin{bmatrix}0&0&0\\ 0&1&0\\ 0&0&0\end{bmatrix}}_{\text{identity}},\quad\underbrace{\frac{1}{9}\begin{bmatrix}1&1&1\\ 1&1&1\\ 1&1&1\end{bmatrix}}_{\text{average}},\quad\underbrace{\frac{1}{8}\begin{bmatrix}1&2&1\\ 0&0&0\\ -1&-2&-1\end{bmatrix}}_{\text{Sobel}_{x}},\quad\underbrace{\frac{1}{8}\begin{bmatrix}1&0&-1\\ 2&0&-2\\ 1&0&-1\end{bmatrix}}_{\text{Sobel}_{y}},\quad\underbrace{\frac{1}{4}\begin{bmatrix}1&2&1\\ 2&-12&2\\ 1&2&1\end{bmatrix}}_{\text{Laplacian}}. \tag{2}\]
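To make the perception step concrete, the following is a minimal NumPy/SciPy sketch of Eqs. (1)-(2), applying the hard-coded kernels depthwise (independently to every channel). The array shapes, the periodic ('wrap') boundary handling and all names are illustrative assumptions of this sketch, not the exact layout used in the accompanying repository.

```python
# Minimal sketch of the perception step: build the kernels of Eq. (2) and
# apply each one depthwise to every channel of the state, as in Eq. (1).
import numpy as np
from scipy.ndimage import convolve

identity  = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)
average   = np.ones((3, 3)) / 9.0
sobel_x   = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) / 8.0
sobel_y   = sobel_x.T
laplacian = np.array([[1, 2, 1], [2, -12, 2], [1, 2, 1]]) / 4.0
kernels   = [identity, average, sobel_x, sobel_y, laplacian]          # K = 5

def perception(x):
    """Map a state x of shape (M, M, C) to the perception vector z of shape
    (M, M, C, K).  Note that `convolve` flips the kernel; this only flips the
    sign of the Sobel responses, which is immaterial for this sketch."""
    M, _, C = x.shape
    z = np.empty((M, M, C, len(kernels)))
    for c in range(C):
        for k, g in enumerate(kernels):
            z[:, :, c, k] = convolve(x[:, :, c], g, mode="wrap")
    return z

x = np.random.rand(64, 64, 8)   # e.g. M = 64, C = 8 (2 observable + 6 hidden)
z = perception(x)               # shape (64, 64, 8, 5)
```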
**Neural network.** The perception vector \(z^{(n)}\) is then applied to the input layer of a neural network (see lower part of Figure 1). The values on the output layer, \(F(z^{(n)})\), form a vector of increments in the original state space \(\mathcal{X}\). In a deterministic update, one would simply add \(F(z^{(n)})\) to \(x^{(n)}\) to obtain the updated state, \(x^{(n+1)}\). Taking \(F(z^{(n)})\) as an increment, rather than the new state vector \(x^{(n+1)}\), implies that the NCA is a residual, rather than a naive, recurrent neural network (RCNN) [22].
In the present context, residual RCNNs have several benefits. Firstly, a residual RCNN is easier to compare to numerical discretisations of PDEs, aiding interpretability of our models. Secondly, the residual RCNN minimises the problem of vanishing or exploding gradients that the naive RCNN would experience. In the naive approach, recurrent iterations of our neural network would lead to exponentially large or small gradient updates, leading to a failure to learn optimal model weights. A consequence of the vanishing gradients problem is that information from previous timesteps is quickly lost, so \(x^{(m)}\) has little bearing on \(x^{(n)}\) for \(m\ll n\). In principle a naive RCNN can learn long term dependencies, but in practice this is very challenging. As such, residual RCNNs are better suited to learning long term behaviours, as \(x^{(n+1)}\) depends linearly on \(x^{(n)}\), so information preservation is in some sense the default behaviour of our model. This behaviour is especially clear during training, in that initialising the residual RCNN to perform the identity mapping
is straightforward: one simply arranges for \(F(z^{(n)})=0\) by setting the final layer weights in the neural network to zero. Initialising the weights such that \(F(z^{(n)})=x^{(n)}\), which would be required in the naive case, is much harder. This 'do nothing' update is a better starting point than one that quickly forgets the initial state of the system. In the latter case, the NCA may resort to learning how to construct the desired \(x^{(n)}\) as a global attractor, irrespective of initial conditions, for example 'growing' \(x^{(n)}\) from fixed boundary conditions. Preserving the initial system state allows the model to better learn dynamics particular to those initial conditions, whilst still allowing for boundary driven behaviour to be learned.
It remains to specify the neural network structure, that is, the number and size of any hidden layers, and how they are connected. Here we aim to keep the architecture as simple as possible, as a minimal yet sufficiently functional network architecture has several advantages. Training a small model is computationally cheaper, and smaller models are far more interpretable [15]. Specifically, we use just one hidden layer, as shown in Figure 1. As noted previously, we do not mix spatial data in the neural network, only between channels and kernels. That is, the network shown in Figure 1 is replicated for each pixel \(i,j\) in \(z^{(n)}\), and takes as input the \(K\times C\) elements of \(z^{(n)}\) that correspond to a given pixel, and transforms to \(C\) channel values, consistent with the original state vector.
The hyperparameters associated with this neural network structure are the choice of activation function and size of the hidden layer. We fix the hidden layer size to \(H=4C\) where \(C\) is the number of channels. This way the network size scales with the number of channels, so we just explore them together as one hyperparameter. Denoting the hidden-layer activation function as \(u\), we can specify the mapping \(F\) through the elements of the output vector as
\[f^{(n)}_{ijc}=\sum_{h\in[1,H]}\left(W^{ch}_{1}u\Big{(}\sum_{ \begin{subarray}{c}c^{\prime}\in[1,C]\\ k\in[1,K]\end{subarray}}W^{c^{\prime}kh}_{2}z^{(n)}_{ijc^{\prime}k}\Big{)} \right)+v^{c}\equiv F(z^{(n)}). \tag{3}\]
The weights \(W_{2}\) mix information between the channels and kernels independently of the position \(i,j\) to determine the activation of each hidden node \(h\). The weights \(W_{1}\) then combine the hidden nodes to construct the output value for each channel \(c\), again independently of \(i,j\). We emphasise that the same set of weights and biases is applied at every pixel, consistent with the separation of spatial and channel mixing between the two stages of the process.
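As a sketch of Eq. (3), the network \(F\) amounts to two small matrix contractions applied identically at every pixel. The weight shapes below mirror the indices of Eq. (3) but are an assumed layout rather than the repository's exact one; the zero final-layer weights reflect the 'do nothing' initialisation discussed above.

```python
# Per-pixel network F of Eq. (3), applied identically at every pixel (i, j).
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def F(z, W1, W2, v):
    """z:  perception vector, shape (M, M, C, K)
    W2: (H, C*K)  mixes channels and kernels into H hidden units
    W1: (C, H)    maps hidden units back to C channel increments
    v:  (C,)      bias
    Returns the increment field of shape (M, M, C)."""
    M, _, C, K = z.shape
    zp = z.reshape(M, M, C * K)                       # flatten channel/kernel axes
    hidden = relu(np.einsum("hq,ijq->ijh", W2, zp))   # hidden-layer activations
    return np.einsum("ch,ijh->ijc", W1, hidden) + v   # channel increments

C, K = 8, 5
H = 4 * C
W2 = 0.1 * np.random.randn(H, C * K)   # random hidden-layer weights
W1 = np.zeros((C, H))                  # zero final-layer weights: identity update
v  = np.zeros(C)
```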
**Stochastic mask.** The final step is to increment a random subset of state vector elements \(x^{(n)}_{ijc}\) by applying a _mask_\(\sigma^{(n)}=(\sigma^{(n)}_{ijc})\), where \(\sigma^{(n)}_{ijc}\) are independent Bernoulli random variables with parameter \(1-p\). That is \(\sigma^{(n)}_{ijc}=0\) with probability \(p\), which is related to the dropout rate in machine learning regularisation, and \(\sigma^{(n)}_{ijc}=1\) otherwise. The purpose of this mask is to break any global synchronisation between cells.
Given the above, the update specified by the NCA is
\[x^{(n+1)}=x^{(n)}+\sigma^{(n)}F(g*x^{(n)})\equiv\Phi(x^{(n)},\theta) \tag{4}\]
where we have introduced the mapping \(\Phi(\cdot,\theta):\mathcal{X}\rightarrow\mathcal{X}\) from one state vector to the next, and \(\theta\) encodes the network parameters \((W_{1},W_{2},v)\). In terms of individual elements, this corresponds to
\[x_{ijc}^{(n+1)}=x_{ijc}^{(n)}+\sigma_{ijc}^{(n)}f_{ijc}^{(n)} \tag{5}\]
where the elements \(f_{ijc}^{(n)}\) are given by Eq. (3) above. See line 7 of Algorithm 1. Hence, given \(x^{(0)}\), the NCA provides recursively the sequence \((x^{(n)})_{n=0,1,2,\ldots,N}\).
```
1:function\(\Phi(x,\theta)\)
2:\(W_{1},W_{2},v\leftarrow\theta\)
3:\(z\gets g*x\)
4:for all\((i,j)\in[1,M]^{2}\)do
5:\(dx\gets W_{1}u(W_{2}\bullet z[i,j])+v\)
6:if Random() \(\geq p\)then
7:\(x[i,j]\gets x[i,j]+dx\)
8:endif
9:endfor
10:return\(x\)
11:endfunction
```
**Algorithm 1** Pseudocode description of a single NCA update step. Here \(x\) is an \(M\times M\) lattice with \(C\) channels. \(g*x\) represents the convolutions described in Eq. (1). \(W_{1}\) and \(W_{2}\) are the neural network weight matrices, with \(v\) being a vector of biases, all of which are encoded in \(\theta\). \(u()\) is the activation function. Note that in practice the For loops in line 4 and convolutions in line 3 are efficiently parallelised in Tensorflow's GPU implementation. Random() samples a uniform \([0,1)\) distribution.
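For completeness, the stochastic part of the update (lines 6-7 of Algorithm 1, i.e. Eq. (5)) can be sketched in a few lines; the seed and the default \(p\) are arbitrary illustrative choices.

```python
# Masked residual update of Eq. (5): each element of the increment f is
# applied independently with probability 1 - p.
import numpy as np

def masked_update(x, f, p=0.5, rng=np.random.default_rng(0)):
    """x, f: arrays of shape (M, M, C); p: probability an element is NOT updated."""
    sigma = (rng.random(x.shape) >= p).astype(x.dtype)   # Bernoulli(1 - p) mask
    return x + sigma * f
```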
**Batch parallelism.** When implementing this model in Tensorflow, we make use of _batch parallelism_, where instead of training on one trajectory \((x^{(n)})_{n}\), we train simultaneously on a set (batch) of trajectories \((x^{(n,r)})_{n,r}\), where superscript \(r=1,2,\ldots,R\) denotes the batch number. In effect this just adds an extra batch dimension to \(x^{(n)}\), so \(x_{ijc}^{(n)},z_{ijck}^{(n)}\) and \(f_{ijc}^{(n)}\) become \(x_{ijc}^{(n,r)},z_{ijck}^{(n,r)}\) and \(f_{ijc}^{(n,r)}\) respectively. This is normally done to leverage low-level speed-up, as training the network on batches involves matrix-matrix rather than matrix-vector multiplications, which are well optimised on parallel architectures (GPUs). However in the case of NCA, batch parallelism enables far more diverse systems to be learned, for example learning several distinct trajectories by the same rule, or improving stability through data augmentation. It is also crucial for extending the existing NCA training algorithm [11] to learning longer sequences of data, as discussed in Section 2.2.
To summarise, the NCA can be described in terms of neural network language as a residual (calculating increments to each pixel) Recurrent Convolutional Neural Network (RCNN) with per-pixel dropout (stochastic mask). The hyperparameters are the number of hidden channels, the set of convolution kernels, and the activation function on the hidden layer of the network. The effect of varying the hyperparameters is explored in Section 3.
### 2.2 Training techniques
The neural network architecture described above is very minimal in comparison to the state-of-the-art in the literature [23], featuring only a single hidden layer applied in parallel. By contrast, the training process is fairly complex. We set out the key steps below, with the corresponding pseudo-code given as Algorithm 2. NCA trajectories can be considered as paths in \(\mathcal{X}\) (Figure 2), where the training process constrains the NCA parameters such that the trajectories pass as close as possible (as measured by the loss function) to the observed data points in \(\mathcal{X}\). Projecting these paths onto 1D helps visualise the training process, especially when training to multiple batches.
The technique of training NCA is based on backpropagation through time, a typical method for training RNNs [24]. We have established the batch of NCA trajectories \((x^{(n,r)})_{n=1,\ldots,N}\), with \(n\) and \(r\) denoting time and batch respectively. Originally [11], training the NCA consisted of one set of initial states being mapped to one set of final states \(x^{(0,r)}\to x^{(t,r)}\). We extend this to learn the set of transitions \(x^{(n-1,r)}\to x^{(n,r)}\), where the batch parallelism allows us to train the NCA on each transition simultaneously. This allows training NCA on far more diverse and complex dynamics. For clarity we drop the explicit batch index (i.e. set \(r=1\)) for now; the context where it matters is discussed later, but even so batch parallelism is still exploited for learning the different timesteps.
We have a time series of data \((y^{(\delta)})_{\delta=0,1,\ldots,D}\), but only for every \(t\) NCA timesteps, that is at times \(\delta t\) for \(\delta=0,1,\ldots,D\), where \(N=Dt\). Hence we need to compare \(x^{(\delta t)}\) to \(y^{(\delta)}\). We initialise \(x^{(\delta t)}=y^{(\delta)}\) for \(\delta=0,\ldots,D-1\), and propagate each state through \(\Phi^{t}(\cdot,\theta)=\Phi\circ\cdots\circ\Phi(\cdot,\theta)\) (\(t\) nested function compositions). To compute the loss, we compare \(x^{(\delta t)}\) to \(y^{(\delta)}\) for \(\delta=1,\ldots,D\), averaging over different \(\delta\): \(\frac{1}{D}\sum_{\delta=1}^{D}\mathcal{L}(x^{(\delta t)},y^{(\delta)})\), where the loss function \(\mathcal{L}:\mathcal{X}^{2}\rightarrow\mathds{R}\) is a meaningful measure of distance between any pair of \(x^{(i)}\) and \(y^{(j)}\). Training is achieved by minimising the loss function, which requires partial gradients with respect to the trainable parameters \(\theta\) to be evaluated. For the full gradient calculations, see appendix A.
**Algorithm 2** Training an NCA \(\Phi\) to \(R\) data trajectories of length \(D\). Split into \(B\) mini-batches to reduce memory usage. Here \(x^{(n,r)}\) and \(y^{(n,r)}\) denote predicted state and data at step \(n\) of trajectory \(r\) respectively. \(\hat{x}^{(n,r)}\) is a temporary variable to store new intermediate states at each training iteration. Lines 20 and 21 perform gradient normalisation followed by a parameter update handled by the Nadam algorithm (or any other optimiser of choice). The nested For Loops on lines 9,11 and 12 are easily parallelised. Typical choices of mini-batching are such that \(10<(D\times R)//B<100\). \(\Phi^{t}\) denotes \(t\) iterations of \(\Phi\). The choice of \(t\) is an important hyperparameter, and relates to the temporal resolution of the data \(x\). The model parameters \(\theta\) are assumed to either be initialised appropriately, or already partially trained. RandomShuffle\((A,B)\) randomly shuffles \(A\) and splits it into \(B\) equal sized chunks. \(\mathcal{L}(A,B):\mathcal{X}\times\mathcal{X}\rightarrow\mathds{R}\) computes the loss between states \(A\) and \(B\).
```
1:functionTrain(\(\Phi,\theta,y,t\),B,EPOCHS)
2:\(x\gets y\)
3:\(D\gets y.shape[0]\)
4:\(R\gets y.shape[1]\)
5:for\(i\in\) EPOCHS do
6: Grad \(\leftarrow\vec{0}\)
7: DS \(\leftarrow\) RandomShuffle([1,D],B)
8: RS \(\leftarrow\) RandomShuffle([1,R],B)
9:for\(b\in[1,B]\)do
10: Loss \(\leftarrow\) 0
11:for\(\delta\in\) DS\([b]\)do
12:for\(r\in\) RS\([b]\)do
13:\(\hat{x}^{(\delta,r)}\leftarrow\Phi^{t}(x^{(\delta-1,r)},\theta)\)
14: Loss \(\leftarrow\) Loss \(+\mathcal{L}(\hat{x}^{(\delta,r)},y^{(\delta,r)})\)
15:endfor
16:endfor
17: Loss \(\leftarrow\frac{1}{D\times R}\) Loss
18: Grad \(\leftarrow\) Grad \(+\frac{\partial\text{Loss}}{\partial\theta}\)
19:endfor
20: Grad \(\leftarrow\) Norm(Grad)
21: Update(\(\theta\),Grad,\(i\))
22:for\(\delta\in[1,D]\)do
23:for\(r\in[2,R]\)do
24:\(x^{(\delta,r)}\leftarrow\hat{x}^{(\delta,r)}\)
25:endfor
26:endfor
27:endfor
28:return\(\Phi\)
29:endfunction
```
There are additional practical considerations for optimising this training method. After each iteration, we have the choice of keeping and further propagating the values in \(x^{(\delta t)}\) for \(\delta=1\ldots D\), or re-initialising them: \(x^{(\delta t)}=y^{(\delta)}\). Propagating them allows the NCA to better learn long term dynamics (particularly of the hidden channels) over many training iterations. However, we observe that re-initialising helps speed up the training process in the earlier steps. As both approaches have their advantages, we return to the batch parallelised case and do both. We regard re-initialising the states as a form of data augmentation (Figure 2), so in practice we only re-initialise one batch: \(x^{(\delta t,1)}=y^{(\delta,1)}\). This choice of only re-initialising one batch performs well, but is arbitrary and could be further tuned for specific problems. The NCA is initialised with random hidden layer weights (\(W_{2}\)), and zero final layer weights (\(W_{1}\)) and bias (\(v\)).
When implementing the training procedure, as described in Algorithm 2, the additional subtlety of mini-batching is required [25, 26]. Rather than computing the gradient for transitions \(x^{(\delta-1,r)}\to x^{(\delta,r)}\) for all \(\delta\in[1,D],r\in[1,R]\), we randomly split \([1,D]\times[1,R]\) into \(B\)_mini-batches_, and separately compute the loss gradient for each mini-batch. After iterating through each mini-batch, the gradients are averaged and applied once to the model parameters. The need for mini-batching is due to memory constraints: if a large enough number of batches \(R\) or timesteps \(D\) is used, computing the gradient over the full set of transitions is unfeasible. In Algorithm 2 the memory cost of calculating the gradient (line 18) scales like \(D\times R\times M^{4}\times C^{2}\times t\) (where \(|\mathcal{X}|=M^{2}C\)). By contrast the memory cost of storing and adding to the calculated gradient over each mini-batch is fixed as \(\|\theta\|\) (i.e. does not scale with \(B\)), which is minimal given the small size of the network. \(M\) and
Figure 2: 1D phase space representation of NCA trajectories, predictions \(x^{(\delta t,r)}\) and true states \(y^{(\delta,r)}\). Here \(D=3,R=2\). The first batch (\(x^{(\cdot,1)}\)) is trained with re-initialised intermediate states, whereas the second batch (\(x^{(\cdot,2)}\)) is trained with propagated intermediate states.
\(C\) are fixed by the spatial (and channel) resolution of the data, but mini-batching reduces the memory burden of \(D\) and \(R\). For the full calculation see appendix A. In the case of \(B=1\), the mini-batching reduces such that the For loops at lines \(9,11\) and \(12\) collapse into simpler loops over \([1,D]\) and \([1,R]\). When training on image morphing [11], the size of the data does not require mini-batching as \(D\) and \(R\) are small. To accurately capture PDE dynamics, much larger \(D\) is used, and as such mini-batching is required. We do not explore spatial mini-batching, where random subsets of pixels are tracked for gradients, as this makes efficient parallelisation more challenging; however, it could be worth exploring further, as it could enable training of NCA on higher resolution data.
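To illustrate how a single optimisation step of Algorithm 2 might look in code, the following is a hedged TensorFlow 2 sketch for one set of transitions. Here `nca` is assumed to be a callable `tf.Module` (or Keras layer) implementing one update step \(\Phi\), and `x_prev`, `y_next` hold predicted and target states at consecutive data times; accumulation over mini-batches and the re-initialisation of one batch proceed as in Algorithm 2.

```python
# One training iteration: propagate x_prev through t NCA steps, compare the
# observable channels to y_next, normalise the gradient and apply Nadam.
import tensorflow as tf

optimizer = tf.keras.optimizers.Nadam(learning_rate=1e-3)

def train_step(nca, x_prev, y_next, t, n_observable):
    with tf.GradientTape() as tape:
        x = x_prev
        for _ in range(t):                    # Phi^t: t nested NCA applications
            x = nca(x)
        diff = x[..., :n_observable] - y_next[..., :n_observable]
        loss = tf.reduce_mean(                # Euclidean loss, averaged over batch
            tf.sqrt(tf.reduce_sum(diff ** 2, axis=[1, 2, 3])))
    grads = tape.gradient(loss, nca.trainable_variables)
    grads = [g / (tf.norm(g) + 1e-8) for g in grads]    # gradient normalisation
    optimizer.apply_gradients(zip(grads, nca.trainable_variables))
    return x, loss
```

The per-variable gradient normalisation before the optimiser update corresponds to the normalisation step (line 20) of Algorithm 2.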
#### 2.2.1 Loss functions
We now turn to the question of how to define the distance between an NCA trajectory and target data. This problem is split naturally into two parts: first, how to find the difference between corresponding (time and batch labelled) points in \(\mathcal{X}\), and then how to combine all these difference measures into a distance between two sets of points in \(\mathcal{X}\). For the latter choice, we adopt an arithmetic mean over the time points and batches considered (as shown in lines \(14\) and \(17\) of Algorithm 2), although these could be weighted (e.g., the states at a specific time or batch being most important).
We compared several loss functions for corresponding points in \(\mathcal{X}\), but we found that the standard Euclidean norm \(\mathcal{L}(x,y)=\big{(}\sum_{i,j,c}(x_{ijc}-y_{ijc})^{2}\big{)}^{\frac{1}{2}}\) worked best in all contexts. Probability mass based losses (Hellinger [27] and Bhattacharyya [28]) and distance between spatial Fourier transforms of points in \(\mathcal{X}\) all work well in the PDE modelling case, but perform poorly on image morphing. Various approximations to the Wasserstein distance [29] performed poorly in both contexts but still remain promising given their success in texture synthesis [30, 31, 32]. As such, for the following results we stick to the Euclidean distance, although we recommend experimenting with different loss functions depending on the system one is modelling. Any differentiable function \(\mathcal{L}:\mathcal{X}\times\mathcal{X}\to\mathds{R}\) can be used, if minimising its output constrains the inputs in a desirable way.
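For reference, the Euclidean loss above transcribes directly into a short TensorFlow function (a sketch; any pair of states with matching shape is assumed):

```python
# Euclidean loss L(x, y) between two states of matching shape.
import tensorflow as tf

def euclidean_loss(x, y):
    return tf.sqrt(tf.reduce_sum((x - y) ** 2))
```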
## 3 Results
We now demonstrate the applicability of the NCA to two contrasting use cases. First, we consider training data comprising numerical solutions of coupled nonlinear reaction-diffusion equations, with parameters that produce Turing patterns. Equations in this class are widely used to model complex biological systems such as in: developmental biology [33, 34]; ecology [35]; and skin pattern morphogenesis [36]. We adopt a representative example that is capable of generating a wide variety of spatial patterns, in the context of chemical reactions [17]. We demonstrate in particular that the NCA can generalise beyond the set of initial conditions in its training data. We then turn to an artificial image morphing problem, inspired by [11], and show that the same NCA is capable of constructing local rules to effect the desired dynamics. In contrast to the reaction-diffusion system, such rules are not known _a priori_, so it is not obvious that they exist. This problem also lends itself to testing the robustness of the rules that result. When training to PDEs, the focus is on exploring training hyperparameters (loss function, time sampling). After determining suitable training hyperparameters, we then show that this generalises to the image morphing task, where we explore model hyperparameters (kernels, activation functions, number of hidden channels).
### 3.1 Gray-Scott reaction diffusion equations
The Gray-Scott [17] reaction diffusion equations read
\[\begin{aligned}\partial_{t}A&=D_{A}(\partial_{xx}+\partial_{yy})A-AB^{2}+\alpha(1-A)\\ \partial_{t}B&=D_{B}(\partial_{xx}+\partial_{yy})B+AB^{2}-(\gamma+\alpha)B\end{aligned}\]
in which \(D_{A},D_{B},\alpha\) and \(\gamma\) are parameters. These describe two species, \(A\) and \(B\), which diffuse in two-dimensional space with diffusion constants \(D_{A}\) and \(D_{B}\), respectively. Species \(A\) grows towards a density of \(A=1\) at a rate \(\alpha\), whilst species \(B\) dies out at rate \(\gamma+\alpha\). The species also undergo the reaction \(A+2B\to 3B\).
With \(D_{A}=0.1,D_{B}=0.05,\alpha=0.06230,\gamma=0.06268\) we obtain maze-like patterning (Figure 3). We solve these PDEs with an Euler finite-difference discretisation scheme, with a time-step of 1, for \(N=1024\) steps, and on an \(M\times M\) lattice of size \(M=64\). \(\alpha\) and \(\gamma\) parameterise the patterning type, whereas \(D_{A}\) and \(D_{B}\) re-scale the patterns, and must be chosen in line with the timestep size to achieve numerical stability.
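For completeness, a minimal NumPy sketch of such an explicit Euler scheme is shown below; the periodic 5-point Laplacian stencil and the small initial seed of species \(B\) are assumptions of the sketch, with parameters and grid size as quoted above.

```python
# Explicit Euler integration of the Gray-Scott equations on a periodic grid.
import numpy as np

def laplacian(u):
    # 5-point stencil with periodic boundaries and unit grid spacing
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def gray_scott(A, B, steps=1024, dt=1.0,
               DA=0.1, DB=0.05, alpha=0.06230, gamma=0.06268):
    traj = [np.stack([A, B], axis=-1)]
    for _ in range(steps):
        lapA, lapB = laplacian(A), laplacian(B)
        reaction = A * B ** 2
        A, B = (A + dt * (DA * lapA - reaction + alpha * (1.0 - A)),
                B + dt * (DB * lapB + reaction - (gamma + alpha) * B))
        traj.append(np.stack([A, B], axis=-1))
    return np.array(traj)               # shape (steps + 1, M, M, 2)

M = 64
A0 = np.ones((M, M))
B0 = np.zeros((M, M))
B0[M // 2 - 4: M // 2 + 4, M // 2 - 4: M // 2 + 4] = 0.5   # small seed of B
data = gray_scott(A0, B0)               # training trajectory for the NCA
```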
#### 3.1.1 Effect of training hyperparameters
We begin with a basic NCA architecture that employs just \(K=2\) kernels, the identity and Laplacian, \(p=0\) (purely deterministic), \(C=8\) channels in total (so 2 observable and 6 hidden channels) and a rectified linear unit (relu, \(u(z)=\frac{|z|+z}{2}\)) activation function. We found that the Nadam optimiser [37] consistently performed well. Nadam is a modification of the widely used Adam optimiser [38], with the only difference being the use of Nesterov momentum [37]. Optimisers based on Nesterov momentum perform well, both in theoretical convergence and generalisability of trained deep neural networks [37, 39]. Note that we also employ gradient normalisation [40] before passing gradient information to the optimiser - this was found to significantly improve training performance. This just leaves the time sampling (\(t\)) as the main hyperparameter to optimise. Time sampling is subtle in the case of PDEs as numerical integration of the PDE system necessarily involves a discrete integration time step. We can sample the trajectories at coarser intervals by increasing \(t\) in Algorithm 2, while keeping each NCA update corresponding to a single timestep of the PDE solver; in other words, we only compare every \(t^{\text{th}}\) PDE and NCA step.
We found that while training loss increased with greater sampling intervals \(t\), tests on unseen initial conditions achieved comparable loss for most sampling intervals, with modest improvements for greater \(t\) (Figure 4). The training loss is calculated for \(N=1024\) steps (as in Figure 3), whereas the test loss is calculated over \(N=2048\) steps from an unseen initial condition. This demonstrates generalisation both to unseen initial conditions and to longer simulation times. Note that the unseen initial condition used for testing features
Figure 3: Snapshots taken from the training data used for learning PDE dynamics. PDE is run for \(N=1024\) steps with timestep 1 and \(D_{A}=0.1,D_{B}=0.05,\alpha=0.06230,\gamma=0.06268\).
high frequency components not observed at all during training, and that NCA trained with fine time sampling (small \(t\)) were sometimes numerically unstable to these high frequency inputs (missing test loss points in Figure 4A, or snapshots at \(t=2\), \(t=6\) in Figure 4B).
Figure 5 shows various snapshots from true (PDE) trajectories alongside the corresponding snapshots from an NCA trained with \(t=32\). This extrapolates an unseen initial condition far beyond the time observed during training (\(n\in[0,1024]\)), demonstrating that the NCA does learn the underlying rules of the dynamics rather than overfitting to the training trajectories. When we considered finer sampling \(t\) (Figure 4), we observe more frequent numerical instabilities, or complete failure to learn dynamics. Coarse time sampling appears to both stabilise these numerical problems, and yield more generalisable models. We posit this is due to coarse time sampling allowing the NCA to be less constrained during training, in that intermediate states may explore more possible states, increasing the chances of finding \(\theta\) that gives the correct dynamics. Fine time sampling perhaps over-constrains the NCA, leading to instabilities or training converging to sub-optimal local minima of the loss
Figure 4: A: loss as a function of time sampling \(t\). Training loss shows the minimum loss during training epochs (averaged over 4 random initialisations, with standard deviation as error bars). Test loss shows how the best trained NCA (minimal training loss) performs on unseen initial conditions. B: snapshots of NCA trajectories (at \(n=2048\)) based on unseen initial conditions, with varying sampling \(t\). Each NCA is trained for 4000 epochs, with a mini-batch size \(B=64\).
landscape. Alternatively, the behaviour at fine time sampling is consistent with overfitting, so coarser time sampling could be considered a regularising technique here.
In summary, we have found that the NCA architecture set out in Section 2 is capable of learning update rules that reproduce the solution of a certain pair of coupled nonlinear PDEs. Our main finding is that good rules can be learnt, but coarse time sampling improves numerical stability, and learns rules that generalise better to unseen initial conditions. In the next section we show that the algorithm still performs well when there is no known underlying integration timestep.
### 3.2 Image morphing
We now test whether a training method that works well for PDEs also works on the image morphing task. The task comprises an initial image, an intermediate image to morph through, and then a final image that is intended to remain stable (i.e., be an attractor of the dynamics) (Figure 6). This latter requirement is incorporated into the training data by repeating the final image twice. The images shown were downsampled to a resolution of \(60\times 60\) to reduce computational cost. We further impose fixed boundary conditions, that
Figure 5: Snapshots of PDE and NCA trajectories from an unseen initial condition. NCA trained with \(C=8\), Identity and Laplacian kernels, relu activation, trained on sampling \(t=32\) for 4000 epochs with euclidean loss.
Figure 6: Image morphing task. Given a space invader initial condition, morph through a microbe and remain stable at a rooster pattern.
is, we insist that the state vector \(x^{(n)}\) vanishes at the boundary points. This is because, unlike the PDE problem, the system is not periodic.
Reliable training of the NCA requires a careful construction of the training data. A side-effect of the fixed boundary conditions is that the NCA can learn to grow an image from the boundary, rather than from the initial condition. We would like the pattern formation to remain translationally invariant - the image morphing should behave independently of the boundaries, and should only depend on the input images. To enforce this, we embed the training data within a larger lattice, thereby reducing the influence of the boundaries. Translational invariance is further encouraged by training on several copies of the image sequence, each randomly shifted in space, to prevent learning any effective long range boundary interaction. It is possible to train NCA to produce textures as _global_ attractors [13, 30] as textures are translationally invariant. We believe that, if boundary effects are removed so that the whole NCA system is translationally invariant, it is impossible to have a fixed pattern (i.e. not a translationally invariant texture) as a global attractor. Avoiding the desired dynamics being a global attractor ensures that the NCA has learned a rule that maps the input state through the sequence of target states, rather than just generating the target states irrespective of initial conditions. This is further verified by exploring stability under perturbation in Section 3.2.2.
We also find that to train the NCA to reach a stable state, in effect mapping an image to itself, augmenting the training data with noise is necessary. Without noise augmentation, the training process crashes as gradients diverge (when training the final stable transition), but adding a very small amount of random noise to the data fixes this. We can understand the effect of this noise as introducing a basin of attraction around the desired final state, and training to noisy images enhances the robustness of the NCA to noisy perturbations.
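One possible NumPy sketch of this data preparation (zero-padding into a larger lattice, several randomly shifted copies, and a small amount of added noise) is given below; the pad size, number of copies, shift range and noise level are illustrative assumptions.

```python
# Embed target snapshots in a larger lattice, randomly shift each copy, and
# add a small amount of noise to stabilise training of the final transition.
import numpy as np

def augment(images, pad=16, n_copies=4, noise=1e-2,
            rng=np.random.default_rng(0)):
    """images: array of shape (D, M, M, C) of target snapshots.
    Returns an array of shape (D, n_copies, M + 2*pad, M + 2*pad, C)."""
    D, M, _, C = images.shape
    big = np.zeros((D, n_copies, M + 2 * pad, M + 2 * pad, C))
    for r in range(n_copies):
        dx, dy = rng.integers(-pad // 2, pad // 2 + 1, size=2)
        big[:, r, pad + dx: pad + dx + M, pad + dy: pad + dy + M] = images
    return big + noise * rng.standard_normal(big.shape)
```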
#### 3.2.1 Effect of model hyperparameters
The training hyperparameter \(t\) that corresponds to the frequency of time sampling cannot be assumed to translate from the PDE case to the image morphing task. As there is no underlying physical mechanism connecting the images, there is nothing to provide a basic unit of time. We are however guided by the fact that the update rules are local, and therefore initially take the number of timesteps to be 64, which is similar to the lattice size and therefore gives sufficient time for information to propagate across it. As we explore this point more below, we find in fact that fewer timesteps can also be sufficient.
While a deterministic update (\(p=0\)) is appropriate in the PDE case, as this was a feature of the training data, for the image morphing problem we update stochastically with \(p=\frac{1}{2}\), as removing global synchronisation between cells acts like dropout regularisation, and can be biologically motivated [11]. Note that a choice of \(p=\frac{1}{2}\) effectively halves the number of timesteps between images. We found that varying the update probability \(p\) had very little direct impact on model performance (except for extreme values close to 1 or 0); instead, we explored the effective number of timesteps between images.
The system state has 4 observable channels--the red, green, blue and alpha (transparency) components--and 12 hidden channels. Since the images are not rotationally symmetric, and we have no prior underlying mechanisms to constrain symmetries of update rules, we consider adding Sobel kernels to the identity and Laplacian kernels that were used in the PDE case. The Laplacian kernel detects spatial changes (and curvature) in patterns, whereas the Sobel kernels also detect the orientation of any changes. We find that including the symmetry breaking Sobel kernels improves the performance over just using symmetric kernels (Figure 7A,B). This does however break the symmetry of the NCA--such an NCA is unlikely to be stable under rotations or reflections of the initial condition, as discussed in Section 3.2.2. We also find that the best performing activation function is relu (Figure 7C,D). The linear case, \(u(z)=z\), can be thought of as an absence of an activation function. Surprisingly, the overall shape of the final image is reasonably well reproduced, although it clearly lacks the definition achieved with the other activation functions. This justifies the additional complexity of nonlinear activations.
Figure 7: NCA trained on image morphing task with different kernels and activations. 16 channels, 64 steps between images. A,B: training loss and snapshots of NCA with relu activation and various kernels. C,D: training loss and snapshots of NCA with Identity, Sobel and Laplacian kernels, for various activation functions.
Exploring how NCA behaviour scales with number of channels, we find that, unsurprisingly, more hidden channels perform better (Figure 8A,B), capturing more of the details in the rooster image. The number of channels functions as a clear 'model size' parameter, and we find that model performance scales nicely with this measure of model size. We also explore how NCA train for different numbers of timesteps between images (Figure 8C,D). It is surprising that with as few as 8 timesteps, the basic shape and colour of the rooster are correct (although details are better at 16 or 32 steps), which highlights the locality of the update rule. The image resolution is \(60\times 60\), and with 8 timesteps between images only cells less than 16 pixels away can communicate (from initial condition to reaching the stable rooster pattern). However with the stochastic cell updates, the effective communication
Figure 8: NCA trained on image morphing task. Relu activation; Identity, Sobel and Laplacian kernels. A,B: Training loss and snapshots of 16 channel NCAs trained with different time sampling. C,D: Training loss and snapshots of NCAs trained with time sampling of 32, and various numbers of channels.
range from initial to final condition here is halved to just 8 pixels. This emphasises that the update rule is local, in that local structures of the initial condition morph into local structures of the final state at similar locations.
#### 3.2.2 Stability analysis
With a trained NCA, there is an obvious question of stability--if an initial condition is perturbed away from what the NCA is trained on, how does this affect the behaviour? We consider three kinds of perturbations of the initial condition: local perturbations, global perturbations, and symmetry perturbations. Local perturbations change one pixel, and allow us to explore how errors propagate through the spatial part of the NCA. Global perturbations can show how resilient NCA are to noisy inputs. Symmetry perturbations, such as rotations or reflections of initial conditions, allow us to explore how NCA respect desirable symmetries.
Stability under local perturbations depends strongly on how many timesteps between images the NCA is trained on (Figure 9). We find that NCAs with fewer time-steps are more stable to local perturbations, or conversely that allowing more NCA steps between training images gives more time for local perturbations to travel. In both cases the perturbations remain mostly local. Using local perturbations could help calibrate the number of timesteps to use when modelling real systems, in that the NCA should have the same response to perturbations as the underlying data being modelled.
Figure 9: Local stability behaviour of two NCA. A: 32 channels, 32 steps between images. B: 16 channels, 64 steps between images. Top left heatmap in each case shows how many pixels of the final image change (by more than 0.1 to account for random fluctuations) when that pixel is perturbed in the initial condition. The other images all show snapshots of the final state when the initial condition is perturbed locally, for different perturbation locations.
To address the question of stability with respect to global perturbations, we frame it as an optimisation problem. Let \(\kappa^{n}(x^{(0)},\tilde{x}^{(0)})=\|\tilde{x}^{(0)}\|-\|\Phi^{n}(x^{(0)}+\tilde{x}^{(0)})-\Phi^{n}(x^{(0)})\|\), where \(\tilde{x}^{(0)}\) is a perturbation of initial condition \(x^{(0)}\). By finding \(\tilde{x}^{(0)}\) that maximises or minimises \(\kappa^{n}\), we can find a maximally perturbed initial condition \(x^{(0)}+\tilde{x}^{(0)}\) that leaves a future state \(x^{(n)}\) unchanged, or a minimal perturbation that destroys \(x^{(n)}\). This allows us to explore the space of initial conditions around which the NCA was trained, and can reveal which features of an initial condition are important. For example, it may be that only the edges and corners of an image are learned from. As the whole NCA process is differentiable, we can use gradient based optimisation on \(\kappa^{n}\) to find \(\tilde{x}^{(0)}\). Finding minimal perturbations that destroy \(x^{(n)}\) is similar to adversarial attacks on classifier networks, where small changes to an image completely destroy the behaviour of an image classifier. Figure 10 shows the behaviour of a trained NCA (best performing model shown in Figure 8A,B) starting from examples of these adversarial initial conditions. It is possible to find initial conditions that are visually similar to the true initial condition, and yet they destroy the stable rooster pattern (\(x^{(96)}\)). We can also find large perturbations of the initial condition that leave the target state (\(n=96\)) unperturbed; however, the long term stability of the rooster pattern is still damaged.
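Since the whole pipeline is differentiable, this perturbation search can be written as a short gradient-based optimisation. The sketch below assumes a differentiable helper `nca_rollout(x, n)` that applies the trained NCA \(n\) times; the optimiser, learning rate and iteration count are arbitrary illustrative choices, and in practice back-propagating through many NCA steps can be memory-intensive.

```python
# Gradient-based search over initial-condition perturbations delta, using the
# objective kappa^n defined in the text.  direction=+1 minimises kappa (small
# perturbation that destroys x^(n)); direction=-1 maximises it (large
# perturbation that preserves x^(n)).
import tensorflow as tf

def perturbation_search(nca_rollout, x0, n=96, direction=+1,
                        steps=200, lr=1e-2):
    x0 = tf.convert_to_tensor(x0, dtype=tf.float32)
    target = nca_rollout(x0, n)                       # unperturbed x^(n)
    delta = tf.Variable(1e-3 * tf.random.normal(x0.shape))
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            kappa = (tf.norm(delta)
                     - tf.norm(nca_rollout(x0 + delta, n) - target))
            objective = direction * kappa             # gradient descent on +/- kappa
        grads = tape.gradient(objective, [delta])
        opt.apply_gradients(zip(grads, [delta]))
    return delta
```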
Figure 10: Rightmost column shows extrapolation beyond training time, demonstrating stability of the final state. Top row shows snapshots from unperturbed trajectory. Middle row shows snapshots from minimal initial perturbation that destroys the final state (minimising \(\kappa^{(96)}(x^{(0)},\tilde{x}^{(0)})\)). Bottom row shows snapshots from maximal initial perturbation that preserves the final state (maximising \(\kappa^{(96)}(x^{(0)},\tilde{x}^{(0)})\)). NCA (32 channels; Identity, Sobel and Laplacian kernels; time sampling \(t=32\), relu activation) trained on image morphing task.
Finally, we compare the behaviour of different NCA models on symmetrically perturbed initial conditions (Figure 11). By rotating or flipping the input image we obtain symmetrically perturbed inputs. One NCA is trained on _normal data_, that is, data that has only been translationally perturbed to minimise boundary effects. The other is trained on _augmented data_ that also includes the same training data after applying global rotations about random angles. We also explore the effect of restricting the NCA to include only symmetric kernels, rather than both symmetric and asymmetric kernels. We find that even without any data augmentation, the symmetric kernel NCA already performs very well, although it struggles with the off lattice 45 degree rotations. When trained on rotationally augmented data, the asymmetric kernel NCA improves its performance on rotated inputs, but is still outperformed by the symmetric kernel NCA for on-lattice rotations (with or without data augmentation). Off lattice rotations are the most challenging, and seem to be where data augmentation is necessary even for symmetric kernel NCA. Overall, we find that the symmetric kernel NCA better handles symmetric perturbations, whilst the asymmetric
Figure 11: Behaviour of trained NCA on symmetrically perturbed inputs. Left column shows inputs, middle two shows final state behaviour for NCA with asymmetric kernels (identity, Sobel and Laplacian), rightmost two shows final state behaviour for NCA with symmetric kernels (identity, Laplacian, average). Augmented data examples show NCAs trained to trajectories rotated to random angles.
kernel NCA performs best on the unperturbed data. As one might expect, building symmetry into the model allows it to solve a broader range of symmetrically-related problems, whereas leaving it out promotes specialisation towards a single problem.
## 4 Discussion
We have demonstrated NCA as a framework for modelling emergent spatio-temporal patterning. Many systems in biology are characterised by complex emergent phenomena of locally interacting components, and finding interaction rules or mechanisms that lead to specific emergent behaviours is a challenging inverse problem. By making classic cellular automata differentiable, NCA present a new approach to this class of problems. [11] demonstrated that a trained NCA can generate complex structures (specifically emojis) that remain stable over time and under perturbation. Here we have extended this approach to learn dynamics from snapshots of a pattern at multiple timepoints (rather than just the end-point), i.e., we show the ability to learn dynamic patterns, specifically those arising from PDEs with Turing instabilities that have been widely used to study biological pattern formation.
Specifically, we showed in Section 3.1 that NCA can infer update rules equivalent to those obtained by discretising and iterating a set of PDEs. NCA have an inductive bias to learning underlying dynamics, rather than just overfitting to data, due to their minimal parameters and hard coded local kernels. We demonstrate this by presenting the trained NCA with initial conditions that were not part of the training data, and finding that the predicted trajectories are similar to those obtained directly from the PDEs - the trained NCA generalise well. This suggests that an NCA trained on experimental data could be used to predict the behaviour of that system under conditions that have not been directly observed. We have also discussed NCA hyperparameters in more detail than most previous work, which can provide guidance for future exploration. For example, tuning the number of timesteps between images can constrain how far local information is allowed to spread, and the number of channels required to accurately capture a patterning behaviour could function as a heuristic for the complexity of that pattern.
More generally, the findings of Section 3.2 confirm that NCA can be used as a tool to construct local dynamical update rules whose emergent properties satisfy certain constraints. These constraints include the end state, but extend also to the stability of that configuration, invariance of the dynamics under certain symmetry operations and the effect of boundary conditions. Given that for any observed emergent patterning, many possible microscopic update rules could exist, constraining how NCA respect symmetries or behave around boundaries helps reduce the set of possible rules. As the training process for NCA amounts to a differential optimisation procedure, microscopic rules that yield desired emergent behaviour can be efficiently found, even when constraints are imposed, as exemplified by the image morphing task.
Using NCA as an _in silico_ way to study the behaviour of growing systems is a recurring theme in the literature [11, 21], discussed more concretely in [41]. Further applications or extensions of NCA models have been explored in the context of image processing and
synthesis [42, 43]. Here NCA models have been coupled to CLIP text embeddings [44], in line with recent machine learning based text to image techniques. NCA are clearly capable of a diverse range of behaviours, but all the previous literature just focuses on training one set of transitions, rather than full dynamic trajectories. We believe that the extension to learning dynamics of arbitrarily long sequences dramatically increases the already wide range of systems and behaviours NCA can model. We believe that being able to train to sequences of images will better enable NCA to be applicable to modelling real biological systems, and it will probably enable more interesting image synthesis techniques.
Compared to most current machine-learning research, our chosen neural network architecture is economical in terms of the number of trainable parameters. This not only makes training more computationally efficient, but also adheres to the aesthetic guidance of modelling traditions in physics and mathematics that simpler models are preferred over more complex models when they have comparable descriptive power. Here we have found that a single hidden layer is sufficient to model a variety of systems and reproduce a wide range of behaviours. The reason for this may lie in part in the hidden channels in the state space, as these can encode complex environmental information such as boundary conditions, as well as encoding memory for longer term dynamics. This spatially distributed memory encoding in the hidden channels could be likened to previous work on differentiable neural computers [45]. We note that NCA also link back to older work on amorphous computing [46], providing a connection between modern machine learning and theory of spatially distributed computational frameworks.
There are however a few shortcomings of NCA, mainly the underlying assumption that purely local interactions are sufficient. There will be systems with non-local (or multiscale) interactions that cannot be elegantly explained with purely local rules. We have also assumed that the update rules are constant over time, even though many complex (physical or biological) systems are highly time dependent [47]. Whilst it is possible that the hidden channels in the NCA could encode nonlocality and time-dependence, it might be more natural (and interpretable) to extend the representation of the dynamics in the neural network to incorporate such dependencies explicitly, for example by increasing the size/stride of convolution kernels, or by including an explicit time parameter as a network input. A possible risk with including explicit time dependence is that the NCA could over-fit to the timestep, rather than learning the microscopic interactions that yield the emergent behaviour. To tackle such questions, it might be desirable to augment the loss functions with measures of model complexity as a means to converge on the most parsimonious description, for example by sparsity regularisation. Generalising NCA to better work on multiscale systems could also be worth exploring, for example by coupling NCA-like models on lattices of different resolutions.
Similarities can be drawn between NCA and other data driven equation discovery techniques like SINDy [48] or Extended Dynamic Mode Decomposition (EDMD) [49]. Both SINDy and EDMD have the main purpose of fitting dynamical systems to data, for both ODEs and PDEs. SINDy enforces parsimonious solutions through sparsity regularisation, whereas Dynamic Mode Decomposition is analogous to Singular Value
Decomposition for time series data (and the Extended DMD is a nonlinear generalisation). The key areas where NCA differ are in background motivation, and in the sorts of systems they are suited to. NCA act as a bridge between machine learning based image processing, and learning simple models of complex systems; whereas SINDy and EDMD were developed in the context of data driven engineering and fluid dynamics respectively. Cellular automata models are more general than (local) PDEs. Although we restrict ourselves to differential operator kernels, we don't have to - learning arbitrary kernels would provide far more general expressive power, whilst still keeping a minimal (and importantly local) model. The class of models that could be learned with arbitrary kernels includes local PDEs, but is far more general.
A further development of these NCA models is to make them truly distributed during learning. When computing the loss, information of the whole lattice is needed, which places limits on the size of a lattice that can be handled with available computation time and memory. If instead an NCA could be trained with a purely local loss, such that model weights are updated for each pixel based on its neighbours, more advanced training procedures could be exploited. In essence, if the only global communication is the updates to the model weights, rather than the full lattice state, NCA could be trained using online training, or on much larger lattice sizes. An alternative approach to increase the resolution that can be trained on would be to randomly sub-sample elements [50] of the NCA lattice when computing losses.
Although we explored the stability of NCA under perturbations to the trajectory (initial condition), we did not address stability under perturbation of model parameters. This could naturally tie in to interpretation of the trained model, for example, by assessing stability under perturbation of network weights. Performing network pruning [51] could be another powerful approach, yielding even more minimal models. The recently popular field of explainable AI [14, 15] likely offers some other tools that would enable this. A worthwhile further development would be to reverse-engineer a concise analytic expression for the underlying PDE from the trained NCA parameters, for example with symbolic regression techniques like Sparse Identification of Nonlinear Dynamics (SINDy) [48]. Such an approach could be tested with NCAs trained on known PDEs, but it would be interesting to then apply this to systems where we don't know the underlying mechanics, such as image morphing or biological data.
## Acknowledgements
This work has made use of the resources provided by the Edinburgh Compute and Data Facility (ECDF) ([http://www.ecdf.ed.ac.uk/](http://www.ecdf.ed.ac.uk/)). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. Alex D Richardson was supported by the EPSRC Centre for Doctoral Training in Mathematical Modelling, Analysis and Computation (MAC-MIGS) funded by the UK Engineering and Physical Sciences Research Council (grant EP/S023291/1), Heriot-Watt University and the University of Edinburgh. |